diff --git a/pythonPackages/numpy/COMPATIBILITY b/pythonPackages/numpy/COMPATIBILITY deleted file mode 100755 index d2cd3cd275..0000000000 --- a/pythonPackages/numpy/COMPATIBILITY +++ /dev/null @@ -1,59 +0,0 @@ - - -X.flat returns an indexable 1-D iterator (mostly similar to an array -but always 1-d) --- only has .copy and .__array__ attributes of an array!!! - -.typecode() --> .dtype.char - -.iscontiguous() --> .flags['CONTIGUOUS'] or .flags.contiguous - -.byteswapped() -> .byteswap() - -.itemsize() -> .itemsize - -.toscalar() -> .item() - -If you used typecode characters: - -'c' -> 'S1' or 'c' -'b' -> 'B' -'1' -> 'b' -'s' -> 'h' -'w' -> 'H' -'u' -> 'I' - - -C -level - -some API calls that used to take PyObject * now take PyArrayObject * -(this should only cause warnings during compile and not actual problems). - PyArray_Take - -These commands now return a buffer that must be freed once it is used -using PyMemData_FREE(ptr); - -a->descr->zero --> PyArray_Zero(a) -a->descr->one --> PyArray_One(a) - -Numeric/arrayobject.h --> numpy/oldnumeric.h - - -# These will actually work and are defines for PyArray_BYTE, -# but you really should change it in your code -PyArray_CHAR --> PyArray_CHAR - (or PyArray_STRING which is more flexible) -PyArray_SBYTE --> PyArray_BYTE - -Any uses of character codes will need adjusting.... -use PyArray_XXXLTR where XXX is the name of the type. - - -If you used function pointers directly (why did you do that?), -the arguments have changed. Everything that was an int is now an intp. -Also, arrayobjects should be passed in at the end. - -a->descr->cast[i](fromdata, fromstep, todata, tostep, n) -a->descr->cast[i](fromdata, todata, n, PyArrayObject *in, PyArrayObject *out) - anything but single-stepping is not supported by this function - use the PyArray_CastXXXX functions. 
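A present-day sketch of the attribute renames listed above (illustrative only, not part of the original file; checked against a modern NumPy):

```python
import numpy as np

a = np.array([1, 2, 3], dtype=np.int16)

# .typecode() --> .dtype.char
print(a.dtype.char)             # 'h', the typecode character for int16

# .iscontiguous() --> .flags['CONTIGUOUS'] or .flags.contiguous
print(a.flags['C_CONTIGUOUS'])  # True

# .byteswapped() -> .byteswap()
swapped = a.byteswap()          # int16 value 1 becomes 256 (0x0100)

# .itemsize() -> .itemsize and .toscalar() -> .item()
print(a.itemsize)               # 2
print(a[0].item())              # 1, as a plain Python int
```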
- diff --git a/pythonPackages/numpy/DEV_README.txt b/pythonPackages/numpy/DEV_README.txt deleted file mode 100755 index ae9b59fb6d..0000000000 --- a/pythonPackages/numpy/DEV_README.txt +++ /dev/null @@ -1,19 +0,0 @@ -Thank you for your willingness to help make NumPy the best array system -available. - -We have a few simple rules: - - * try hard to keep the SVN repository in a buildable state and to not - indiscriminately muck with what others have contributed. - - * Simple changes (including bug fixes) and obvious improvements are - always welcome. Changes that fundamentally change behavior need - discussion on numpy-discussions@scipy.org before anything is - done. - - * Please add meaningful comments when you check changes in. These - comments form the basis of the change-log. - - * Add unit tests to exercise new code, and regression tests - whenever you fix a bug. - diff --git a/pythonPackages/numpy/INSTALL.txt b/pythonPackages/numpy/INSTALL.txt deleted file mode 100755 index b16af0ef78..0000000000 --- a/pythonPackages/numpy/INSTALL.txt +++ /dev/null @@ -1,139 +0,0 @@ -.. -*- rest -*- -.. vim:syntax=rest -.. NB! Keep this document a valid restructured document. - -Building and installing NumPy -+++++++++++++++++++++++++++++ - -:Authors: Numpy Developers -:Discussions to: numpy-discussion@scipy.org - -.. Contents:: - -PREREQUISITES -============= - -Building NumPy requires the following software installed: - -1) Python__ 2.4.x or newer - - On Debian and derivatives (Ubuntu): python python-dev - - On Windows: the official python installer on Python__ is enough - - Make sure that the Python package distutils is installed before - continuing. For example, in Debian GNU/Linux, distutils is included - in the python-dev package. - - Python must also be compiled with the zlib module enabled. - -2) nose__ (optional) 0.10.3 or later - - This is required for testing numpy, but not for using it. 
- -Python__ http://www.python.org -nose__ http://somethingaboutorange.com/mrl/projects/nose/ - -Fortran ABI mismatch -==================== - -The two most popular open source fortran compilers are g77 and gfortran. -Unfortunately, they are not ABI compatible, which means that concretely you -should avoid mixing libraries built with one compiler with libraries built -with the other. In particular, if your blas/lapack/atlas is built with g77, -you *must* use g77 when building numpy and scipy; conversely, if your atlas -is built with gfortran, you *must* build numpy/scipy with gfortran. - -Choosing the fortran compiler ------------------------------ - -To build with g77: - - python setup.py build --fcompiler=gnu - -To build with gfortran: - - python setup.py build --fcompiler=gnu95 - -How to check the ABI of blas/lapack/atlas ----------------------------------------- - -One relatively simple and reliable way to check for the compiler used to build -a library is to use ldd on the library. If libg2c.so is a dependency, this -means that g77 has been used. If libgfortran.so is a dependency, gfortran has -been used. If both are dependencies, this means both have been used, which is -almost always a very bad idea. - -Building with ATLAS support -=========================== - -Ubuntu 8.10 (Intrepid) ---------------------- - -You can install the necessary packages for optimized ATLAS with this command: - - sudo apt-get install libatlas-base-dev - -If you have a recent CPU with SIMD support (SSE, SSE2, etc...), you should -also install the corresponding package for optimal performance. For example, -for SSE2: - - sudo apt-get install libatlas3gf-sse2 - -*NOTE*: if you build your own atlas, Intrepid changed its default fortran -compiler to gfortran. So you should rebuild everything from scratch, including -lapack, to use it on Intrepid. 
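The ldd check described under "How to check the ABI of blas/lapack/atlas" might look like this in practice (the library path is a hypothetical example; point it at your actual blas/lapack/atlas build):

```shell
# Hypothetical path -- adjust LIB to wherever your BLAS library lives.
LIB=/usr/lib/libblas.so.3

if [ -e "$LIB" ]; then
    # libg2c.so in the output => built with g77;
    # libgfortran.so in the output => built with gfortran.
    ldd "$LIB" | grep -E 'libg2c|libgfortran' || echo "no Fortran runtime linked"
else
    echo "$LIB not present on this system"
fi
```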
- -Ubuntu 8.04 and lower --------------------- - -You can install the necessary packages for optimized ATLAS with this command: - - sudo apt-get install atlas3-base-dev - -If you have a recent CPU with SIMD support (SSE, SSE2, etc...), you should -also install the corresponding package for optimal performance. For example, -for SSE2: - - sudo apt-get install atlas3-sse2 - -Windows 64 bits notes -===================== - -Note: only AMD64 is supported (IA64 is not) - AMD64 is the version most people -want. - -Free compilers (mingw-w64) -------------------------- - -http://mingw-w64.sourceforge.net/ - -To use the free compilers (mingw-w64), you need to build your own toolchain, as -the mingw project only distributes cross-compilers (cross-compilation is not -supported by numpy). Since this toolchain is still being worked on, serious -compiler bugs can be expected. binutils 2.19 + gcc 4.3.3 + mingw-w64 runtime -gives you a working C compiler (but the C++ compiler is broken). gcc 4.4 will -hopefully be able to run natively. - -This is the only tested way to get a numpy with a FULL blas/lapack (scipy does -not work because of C++). - -MS compilers ------------- - -If you are familiar with MS tools, that's obviously the easiest path, and the -compilers are hopefully more mature (although in my experience, they are quite -fragile, and often segfault on invalid C code). The main drawback is that no -fortran compiler + MS compiler combination has been tested - mingw-w64 gfortran -+ MS compiler does not work at all (it is unclear whether it ever will). - -For python 2.5, you need VS 2005 (MS compiler version 14) targeting -AMD64, or the Platform SDK v6.0 or below (which gives command -line versions of 64 bits target compilers). The PSDK is free. - -For python 2.6, you need VS 2008. The freely available version does not -contain the 64 bits compilers (you also need the PSDK, v6.1). - -It is *crucial* to use the right version: python 2.5 -> version 14, python 2.6 --> version 15. 
You can check the compiler version with cl.exe /?. Note also that -for python 2.5, 64 bits and 32 bits versions use a different compiler version. diff --git a/pythonPackages/numpy/LICENSE.txt b/pythonPackages/numpy/LICENSE.txt deleted file mode 100755 index 4371a777b8..0000000000 --- a/pythonPackages/numpy/LICENSE.txt +++ /dev/null @@ -1,30 +0,0 @@ -Copyright (c) 2005-2009, NumPy Developers. -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - - * Redistributions in binary form must reproduce the above - copyright notice, this list of conditions and the following - disclaimer in the documentation and/or other materials provided - with the distribution. - - * Neither the name of the NumPy Developers nor the names of any - contributors may be used to endorse or promote products derived - from this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/pythonPackages/numpy/MANIFEST.in b/pythonPackages/numpy/MANIFEST.in deleted file mode 100755 index c33aa32156..0000000000 --- a/pythonPackages/numpy/MANIFEST.in +++ /dev/null @@ -1,20 +0,0 @@ -# -# Use .add_data_files and .add_data_dir methods in the appropriate -# setup.py files to include non-python files such as documentation and -# data files in the distribution. Avoid using MANIFEST.in for that. -# -include MANIFEST.in -include LICENSE.txt -include setupscons.py -include setupsconsegg.py -include setupegg.py -# Adding scons build related files not found by distutils -recursive-include numpy/core/code_generators *.py *.txt -recursive-include numpy/core *.in *.h -recursive-include numpy SConstruct SConscript -# Add documentation: we don't use add_data_dir since we do not want to include -# this at installation, only for sdist-generated tarballs -include doc/Makefile doc/postprocess.py -recursive-include doc/release * -recursive-include doc/source * -recursive-include doc/sphinxext * diff --git a/pythonPackages/numpy/PKG-INFO b/pythonPackages/numpy/PKG-INFO deleted file mode 100755 index 95682c73db..0000000000 --- a/pythonPackages/numpy/PKG-INFO +++ /dev/null @@ -1,37 +0,0 @@ -Metadata-Version: 1.0 -Name: numpy -Version: 1.5.0b1 -Summary: NumPy: array processing for numbers, strings, records, and objects. -Home-page: http://numpy.scipy.org -Author: NumPy Developers -Author-email: numpy-discussion@scipy.org -License: BSD -Download-URL: http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=175103 -Description: NumPy is a general-purpose array-processing package designed to - efficiently manipulate large multi-dimensional arrays of arbitrary - records without sacrificing too much speed for small multi-dimensional - arrays. 
NumPy is built on the Numeric code base and adds features - introduced by numarray as well as an extended C-API and the ability to - create arrays of arbitrary type which also makes NumPy suitable for - interfacing with general-purpose database applications. - - There are also basic facilities for discrete Fourier transform, - basic linear algebra and random number generation. - -Platform: Windows -Platform: Linux -Platform: Solaris -Platform: Mac OS-X -Platform: Unix -Classifier: Development Status :: 5 - Production/Stable -Classifier: Intended Audience :: Science/Research -Classifier: Intended Audience :: Developers -Classifier: License :: OSI Approved -Classifier: Programming Language :: C -Classifier: Programming Language :: Python -Classifier: Topic :: Software Development -Classifier: Topic :: Scientific/Engineering -Classifier: Operating System :: Microsoft :: Windows -Classifier: Operating System :: POSIX -Classifier: Operating System :: Unix -Classifier: Operating System :: MacOS diff --git a/pythonPackages/numpy/README.txt b/pythonPackages/numpy/README.txt deleted file mode 100755 index 80e375eb20..0000000000 --- a/pythonPackages/numpy/README.txt +++ /dev/null @@ -1,23 +0,0 @@ -NumPy is the fundamental package needed for scientific computing with Python. -This package contains: - - * a powerful N-dimensional array object - * sophisticated (broadcasting) functions - * tools for integrating C/C++ and Fortran code - * useful linear algebra, Fourier transform, and random number capabilities. - -It derives from the old Numeric code base and can be used as a replacement for Numeric. It also adds the features introduced by numarray and can be used to replace numarray. 
- -More information can be found at the website: - -http://scipy.org/NumPy - -After installation, tests can be run with: - -python -c 'import numpy; numpy.test()' - -The most current development version is always available from our -subversion repository: - -http://svn.scipy.org/svn/numpy/trunk - diff --git a/pythonPackages/numpy/THANKS.txt b/pythonPackages/numpy/THANKS.txt deleted file mode 100755 index 5a29c0e3b3..0000000000 --- a/pythonPackages/numpy/THANKS.txt +++ /dev/null @@ -1,62 +0,0 @@ -Travis Oliphant for the NumPy core, the NumPy guide, various - bug-fixes and code contributions. -Paul Dubois, who implemented the original Masked Arrays. -Pearu Peterson for f2py, numpy.distutils and help with code - organization. -Robert Kern for mtrand, bug fixes, help with distutils, code - organization, strided tricks and much more. -Eric Jones for planning and code contributions. -Fernando Perez for code snippets, ideas, bugfixes, and testing. -Ed Schofield for matrix.py patches, bugfixes, testing, and docstrings. -Robert Cimrman for array set operations and numpy.distutils help. -John Hunter for code snippets from matplotlib. -Chris Hanley for help with records.py, testing, and bug fixes. -Travis Vaught for administration, community coordination and - marketing. -Joe Cooper, Jeff Strunk for administration. -Eric Firing for bugfixes. -Arnd Baecker for 64-bit testing. -David Cooke for many code improvements including the auto-generated C-API, - and optimizations. -Andrew Straw for help with the web-page, documentation, packaging and - testing. -Alexander Belopolsky (Sasha) for Masked array bug-fixes and tests, - rank-0 array improvements, scalar math help and other code additions. -Francesc Altet for unicode, work on nested record arrays, and bug-fixes. -Tim Hochberg for getting the build working on MSVC, optimization - improvements, and code review. 
-Charles (Chuck) Harris for the sorting code originally written for - Numarray and for improvements to polyfit, many bug fixes, delving - into the C code, release management, and documentation. -David Huard for histogram improvements including 2-D and d-D code and - other bug-fixes. -Stefan van der Walt for numerous bug-fixes, testing and documentation. -Albert Strasheim for documentation, bug-fixes, regression tests and - Valgrind expertise. -David Cournapeau for build support, doc-and-bug fixes, and code - contributions including fast_clipping. -Jarrod Millman for release management, community coordination, and code - clean up. -Chris Burns for work on memory mapped arrays and bug-fixes. -Pauli Virtanen for documentation, bug-fixes, lookfor and the - documentation editor. -A.M. Archibald for no-copy-reshape code, strided array tricks, - documentation and bug-fixes. -Pierre Gerard-Marchant for rewriting masked array functionality. -Roberto de Almeida for the buffered array iterator. -Alan McIntyre for updating the NumPy test framework to use nose, improve - the test coverage, and enhancing the test system documentation. -Joe Harrington for administering the 2008 Documentation Sprint. - -NumPy is based on the Numeric (Jim Hugunin, Paul Dubois, Konrad -Hinsen, and David Ascher) and NumArray (Perry Greenfield, J Todd -Miller, Rick White and Paul Barrett) projects. We thank them for -paving the way ahead. - -Institutions ------------- - -Enthought for providing resources and finances for development of NumPy. -UC Berkeley for providing travel money and hosting numerous sprints. -The University of Central Florida for funding the 2008 Documentation Marathon. -The University of Stellenbosch for hosting the buildbot. 
diff --git a/pythonPackages/numpy/doc/Makefile b/pythonPackages/numpy/doc/Makefile deleted file mode 100755 index 09278cb14e..0000000000 --- a/pythonPackages/numpy/doc/Makefile +++ /dev/null @@ -1,165 +0,0 @@ -# Makefile for Sphinx documentation -# - -PYVER = -PYTHON = python$(PYVER) - -# You can set these variables from the command line. -SPHINXOPTS = -SPHINXBUILD = LANG=C sphinx-build -PAPER = - -NEED_AUTOSUMMARY = $(shell $(PYTHON) -c 'import sphinx; print sphinx.__version__ < "0.7" and "1" or ""') - -# Internal variables. -PAPEROPT_a4 = -D latex_paper_size=a4 -PAPEROPT_letter = -D latex_paper_size=letter -ALLSPHINXOPTS = -d build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source - -.PHONY: help clean html web pickle htmlhelp latex changes linkcheck \ - dist dist-build - -#------------------------------------------------------------------------------ - -help: - @echo "Please use \`make <target>' where <target> is one of" - @echo " html to make standalone HTML files" - @echo " pickle to make pickle files (usable by e.g. sphinx-web)" - @echo " htmlhelp to make HTML files and an HTML help project" - @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" - @echo " changes to make an overview of all changed/added/deprecated items" - @echo " linkcheck to check all external links for integrity" - @echo " dist PYVER=... to make a distribution-ready tree" - @echo " upload USER=... to upload results to docs.scipy.org" - -clean: - -rm -rf build/* source/reference/generated - - -#------------------------------------------------------------------------------ -# Automated generation of all documents -#------------------------------------------------------------------------------ - -# Build the current numpy version, and extract docs from it. -# We have to be careful of some issues: -# -# - Everything must be done using the same Python version -# - We must use eggs (otherwise they might override PYTHONPATH on import). 
-# - Different versions of easy_install install to different directories (!) -# - -INSTALL_DIR = $(CURDIR)/build/inst-dist/ -INSTALL_PPH = $(INSTALL_DIR)/lib/python$(PYVER)/site-packages:$(INSTALL_DIR)/local/lib/python$(PYVER)/site-packages:$(INSTALL_DIR)/lib/python$(PYVER)/dist-packages:$(INSTALL_DIR)/local/lib/python$(PYVER)/dist-packages - -DIST_VARS=SPHINXBUILD="LANG=C PYTHONPATH=$(INSTALL_PPH) python$(PYVER) `which sphinx-build`" PYTHON="PYTHONPATH=$(INSTALL_PPH) python$(PYVER)" SPHINXOPTS="$(SPHINXOPTS)" - -UPLOAD_TARGET = $(USER)@docs.scipy.org:/home/docserver/www-root/doc/numpy/ - -upload: - @test -e build/dist || { echo "make dist is required first"; exit 1; } - @test output-is-fine -nt build/dist || { \ - echo "Review the output in build/dist, and do 'touch output-is-fine' before uploading."; exit 1; } - rsync -r -z --delete-after -p \ - $(if $(shell test -f build/dist/numpy-ref.pdf && echo "y"),, \ - --exclude '**-ref.pdf' --exclude '**-user.pdf') \ - $(if $(shell test -f build/dist/numpy-chm.zip && echo "y"),, \ - --exclude '**-chm.zip') \ - build/dist/ $(UPLOAD_TARGET) - -dist: - make $(DIST_VARS) real-dist - -real-dist: dist-build html - test -d build/latex || make latex - make -C build/latex all-pdf - -test -d build/htmlhelp || make htmlhelp-build - -rm -rf build/dist - cp -r build/html build/dist - perl -pi -e 's#^\s*(
  • NumPy.*?Manual.*?»
  • )#
  • Numpy and Scipy Documentation »
  • #;' build/dist/*.html build/dist/*/*.html build/dist/*/*/*.html - cd build/html && zip -9r ../dist/numpy-html.zip . - cp build/latex/numpy-*.pdf build/dist - -zip build/dist/numpy-chm.zip build/htmlhelp/numpy.chm - cd build/dist && tar czf ../dist.tar.gz * - chmod ug=rwX,o=rX -R build/dist - find build/dist -type d -print0 | xargs -0r chmod g+s - -dist-build: - rm -f ../dist/*.egg - cd .. && $(PYTHON) setupegg.py bdist_egg - install -d $(subst :, ,$(INSTALL_PPH)) - $(PYTHON) `which easy_install` --prefix=$(INSTALL_DIR) ../dist/*.egg - - -#------------------------------------------------------------------------------ -# Basic Sphinx generation rules for different formats -#------------------------------------------------------------------------------ - -generate: build/generate-stamp -build/generate-stamp: $(wildcard source/reference/*.rst) - mkdir -p build -ifeq ($(NEED_AUTOSUMMARY),1) - $(PYTHON) \ - ./sphinxext/autosummary_generate.py source/reference/*.rst \ - -p dump.xml -o source/reference/generated -endif - touch build/generate-stamp - -html: generate - mkdir -p build/html build/doctrees - $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) build/html - $(PYTHON) postprocess.py html build/html/*.html - @echo - @echo "Build finished. The HTML pages are in build/html." - -pickle: generate - mkdir -p build/pickle build/doctrees - $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) build/pickle - @echo - @echo "Build finished; now you can process the pickle files or run" - @echo " sphinx-web build/pickle" - @echo "to start the sphinx-web server." - -web: pickle - -htmlhelp: generate - mkdir -p build/htmlhelp build/doctrees - $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) build/htmlhelp - @echo - @echo "Build finished; now you can run HTML Help Workshop with the" \ - ".hhp project file in build/htmlhelp." 
- -htmlhelp-build: htmlhelp build/htmlhelp/numpy.chm -%.chm: %.hhp - -hhc.exe $^ - -qthelp: generate - mkdir -p build/qthelp build/doctrees - $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) build/qthelp - -latex: generate - mkdir -p build/latex build/doctrees - $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) build/latex - $(PYTHON) postprocess.py tex build/latex/*.tex - perl -pi -e 's/\t(latex.*|pdflatex) (.*)/\t-$$1 -interaction batchmode $$2/' build/latex/Makefile - @echo - @echo "Build finished; the LaTeX files are in build/latex." - @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ - "run these through (pdf)latex." - -coverage: build - mkdir -p build/coverage build/doctrees - $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) build/coverage - @echo "Coverage finished; see c.txt and python.txt in build/coverage" - -changes: generate - mkdir -p build/changes build/doctrees - $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) build/changes - @echo - @echo "The overview file is in build/changes." - -linkcheck: generate - mkdir -p build/linkcheck build/doctrees - $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) build/linkcheck - @echo - @echo "Link check complete; look for any errors in the above output " \ - "or in build/linkcheck/output.txt." diff --git a/pythonPackages/numpy/doc/postprocess.py b/pythonPackages/numpy/doc/postprocess.py deleted file mode 100755 index 1c6ef1b2eb..0000000000 --- a/pythonPackages/numpy/doc/postprocess.py +++ /dev/null @@ -1,59 +0,0 @@ -#!/usr/bin/env python -""" -%prog MODE FILES... - -Post-processes HTML and Latex files output by Sphinx. -MODE is either 'html' or 'tex'. 
- -""" -import re, optparse - -def main(): - p = optparse.OptionParser(__doc__) - options, args = p.parse_args() - - if len(args) < 1: - p.error('no mode given') - - mode = args.pop(0) - - if mode not in ('html', 'tex'): - p.error('unknown mode %s' % mode) - - for fn in args: - f = open(fn, 'r') - try: - if mode == 'html': - lines = process_html(fn, f.readlines()) - elif mode == 'tex': - lines = process_tex(f.readlines()) - finally: - f.close() - - f = open(fn, 'w') - f.write("".join(lines)) - f.close() - -def process_html(fn, lines): - return lines - -def process_tex(lines): - """ - Remove unnecessary section titles from the LaTeX file. - - """ - new_lines = [] - for line in lines: - if (line.startswith(r'\section{numpy.') - or line.startswith(r'\subsection{numpy.') - or line.startswith(r'\subsubsection{numpy.') - or line.startswith(r'\paragraph{numpy.') - or line.startswith(r'\subparagraph{numpy.') - ): - pass # skip! - else: - new_lines.append(line) - return new_lines - -if __name__ == "__main__": - main() diff --git a/pythonPackages/numpy/doc/release/1.3.0-notes.rst b/pythonPackages/numpy/doc/release/1.3.0-notes.rst deleted file mode 100755 index fc7edddfe9..0000000000 --- a/pythonPackages/numpy/doc/release/1.3.0-notes.rst +++ /dev/null @@ -1,278 +0,0 @@ -========================= -NumPy 1.3.0 Release Notes -========================= - -This minor release includes numerous bug fixes, official python 2.6 support, and -several new features such as generalized ufuncs. - -Highlights -========== - -Python 2.6 support -~~~~~~~~~~~~~~~~~~ - -Python 2.6 is now supported on all previously supported platforms, including -Windows. - -http://www.python.org/dev/peps/pep-0361/ - -Generalized ufuncs -~~~~~~~~~~~~~~~~~~ - -There is a general need for looping over not only functions on scalars but also -over functions on vectors (or arrays), as explained on -http://scipy.org/scipy/numpy/wiki/GeneralLoopingFunctions. 
We propose to -realize this concept by generalizing the universal functions (ufuncs), and -provide a C implementation that adds ~500 lines to the numpy code base. In -current (specialized) ufuncs, the elementary function is limited to -element-by-element operations, whereas the generalized version supports -"sub-array" by "sub-array" operations. The Perl vector library PDL provides -similar functionality and its terms are re-used in the following. - -Each generalized ufunc has information associated with it that states what the -"core" dimensionality of the inputs is, as well as the corresponding -dimensionality of the outputs (the element-wise ufuncs have zero core -dimensions). The list of the core dimensions for all arguments is called the -"signature" of a ufunc. For example, the ufunc numpy.add has signature -"(),()->()" defining two scalar inputs and one scalar output. - -Another example (see the GeneralLoopingFunctions page) is the function -inner1d(a,b) with a signature of "(i),(i)->()". This applies the inner product -along the last axis of each input, but keeps the remaining indices intact. For -example, where a is of shape (3,5,N) and b is of shape (5,N), this will return -an output of shape (3,5). The underlying elementary function is called 3*5 -times. In the signature, we specify one core dimension "(i)" for each input and -zero core dimensions "()" for the output, since it takes two 1-d arrays and -returns a scalar. By using the same name "i", we specify that the two -corresponding dimensions should be of the same size (or one of them is of size -1 and will be broadcast). - -The dimensions beyond the core dimensions are called "loop" dimensions. In the -above example, this corresponds to (3,5). 
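The inner1d example above can be reproduced with plain NumPy today; np.einsum is used here purely to illustrate the shape arithmetic of core vs. loop dimensions, not the generalized-ufunc machinery itself:

```python
import numpy as np

# a has shape (3, 5, N) and b has shape (5, N); the core dimension "i"
# is the last axis of each input.
N = 4
a = np.arange(3 * 5 * N).reshape(3, 5, N)
b = np.arange(5 * N).reshape(5, N)

# Sum over the shared core dimension; the loop dimensions (3, 5) and (5,)
# broadcast against each other, giving an output of shape (3, 5).
out = np.einsum('...i,...i->...', a, b)
print(out.shape)  # (3, 5)
```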
- -The usual numpy "broadcasting" rules apply, where the signature determines how -the dimensions of each input/output object are split into core and loop -dimensions: - -When an input array has a smaller dimensionality than the corresponding number -of core dimensions, 1's are pre-pended to its shape. The core dimensions are -removed from all inputs and the remaining dimensions are broadcast, defining -the loop dimensions. The output is given by the loop dimensions plus the -output core dimensions. - -Experimental Windows 64 bits support -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Numpy can now be built on windows 64 bits (amd64 only, not IA64), with both MS -compilers and mingw-w64 compilers. - -This is *highly experimental*: DO NOT USE FOR PRODUCTION USE. See INSTALL.txt, -Windows 64 bits section for more information on limitations and how to build it -by yourself. - -New features -============ - -Formatting issues -~~~~~~~~~~~~~~~~~ - -Float formatting is now handled by numpy instead of the C runtime: this enables -locale-independent formatting and more robust fromstring and related methods. -Special values (inf and nan) are also more consistent across platforms (nan vs -IND/NaN, etc...), and more consistent with recent python formatting work (in -2.6 and later). - -Nan handling in max/min -~~~~~~~~~~~~~~~~~~~~~~~ - -The maximum/minimum ufuncs now reliably propagate nans. If one of the -arguments is a nan, then nan is returned. This affects np.min/np.max, amin/amax -and the array methods max/min. New ufuncs fmax and fmin have been added to deal -with non-propagating nans. - -Nan handling in sign -~~~~~~~~~~~~~~~~~~~~ - -The ufunc sign now returns nan for the sign of a nan. - - -New ufuncs -~~~~~~~~~~ - -#. fmax - same as maximum for integer types and non-nan floats. Returns the - non-nan argument if one argument is nan and returns nan if both arguments - are nan. -#. fmin - same as minimum for integer types and non-nan floats. 
Returns the - non-nan argument if one argument is nan and returns nan if both arguments - are nan. -#. deg2rad - converts degrees to radians, same as the radians ufunc. -#. rad2deg - converts radians to degrees, same as the degrees ufunc. -#. log2 - base 2 logarithm. -#. exp2 - base 2 exponential. -#. trunc - truncate floats to nearest integer towards zero. -#. logaddexp - add numbers stored as logarithms and return the logarithm - of the result. -#. logaddexp2 - add numbers stored as base 2 logarithms and return the base 2 - logarithm of the result. - -Masked arrays -~~~~~~~~~~~~~ - -Several new features and bug fixes, including: - - * structured arrays should now be fully supported by MaskedArray - (r6463, r6324, r6305, r6300, r6294...) - * Minor bug fixes (r6356, r6352, r6335, r6299, r6298) - * Improved support for __iter__ (r6326) - * made baseclass, sharedmask and hardmask accessible to the user (but - read-only) - * doc update - -gfortran support on windows -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Gfortran can now be used as a fortran compiler for numpy on windows, even when -the C compiler is Visual Studio (VS 2005 and above; VS 2003 will NOT work). -Gfortran + Visual Studio does not work on windows 64 bits (but gcc + gfortran -does). It is unclear whether it will be possible to use gfortran and visual -studio at all on x64. - -Arch option for windows binary -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Automatic arch detection can now be bypassed from the command line for the superpack installer: - - numpy-1.3.0-superpack-win32.exe /arch=nosse - -will install a numpy which works on any x86, even if the running computer -supports the SSE instruction set. - -Deprecated features -=================== - -Histogram -~~~~~~~~~ - -The semantics of histogram have been modified to fix long-standing issues -with outlier handling. The main changes concern - -#. the definition of the bin edges, now including the rightmost edge, and -#. 
the handling of upper outliers, now ignored rather than tallied in the - rightmost bin. - -The previous behavior is still accessible using `new=False`, but this is -deprecated, and will be removed entirely in 1.4.0. - -Documentation changes -===================== - -A lot of documentation has been added. Both the user guide and the reference -documentation can be built with Sphinx. - -New C API -========= - -Multiarray API -~~~~~~~~~~~~~~ - -The following functions have been added to the multiarray C API: - - * PyArray_GetEndianness: to get runtime endianness - -Ufunc API -~~~~~~~~~ - -The following functions have been added to the ufunc API: - - * PyUFunc_FromFuncAndDataAndSignature: to declare a more general ufunc - (generalized ufunc). - - -New defines -~~~~~~~~~~~ - -New public C defines are available for ARCH specific code through numpy/npy_cpu.h: - - * NPY_CPU_X86: x86 arch (32 bits) - * NPY_CPU_AMD64: amd64 arch (x86_64, NOT Itanium) - * NPY_CPU_PPC: 32 bits ppc - * NPY_CPU_PPC64: 64 bits ppc - * NPY_CPU_SPARC: 32 bits sparc - * NPY_CPU_SPARC64: 64 bits sparc - * NPY_CPU_S390: S390 - * NPY_CPU_IA64: ia64 - * NPY_CPU_PARISC: PARISC - -New macros for CPU endianness have been added as well (see internal changes -below for details): - - * NPY_BYTE_ORDER: integer - * NPY_LITTLE_ENDIAN/NPY_BIG_ENDIAN defines - -These provide portable alternatives to glibc endian.h macros for platforms -without it. - -Portable NAN, INFINITY, etc... -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -npy_math.h now makes available several portable macros to get NAN, INFINITY: - - * NPY_NAN: equivalent to NAN, which is a GNU extension - * NPY_INFINITY: equivalent to C99 INFINITY - * NPY_PZERO, NPY_NZERO: positive and negative zero respectively - -Corresponding single and extended precision macros are available as well. All -references to NAN, or home-grown computation of NAN on the fly, have been -removed for consistency. 
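The NaN rules described in the "Nan handling" and "New ufuncs" sections above can be demonstrated directly (an illustrative sketch, not part of the original notes):

```python
import numpy as np

print(np.maximum(np.nan, 1.0))  # nan: maximum propagates a nan argument
print(np.fmax(np.nan, 1.0))     # 1.0: fmax ignores a single nan argument
print(np.fmin(np.nan, np.nan))  # nan: nan only when both arguments are nan
print(np.sign(np.nan))          # nan: sign propagates nan as well
# logaddexp adds numbers stored as logarithms: log(2) "+" log(3) = log(5)
print(np.logaddexp(np.log(2.0), np.log(3.0)))
```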
-
-Internal changes
-================
-
-numpy.core math configuration revamp
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-This should make porting to new platforms easier, and more robust. In
-particular, the configuration stage does not need to execute any code on the
-target platform, which is a first step toward cross-compilation.
-
-http://projects.scipy.org/numpy/browser/trunk/doc/neps/math_config_clean.txt
-
-umath refactor
-~~~~~~~~~~~~~~
-
-A lot of code cleanup for umath/ufunc code (charris).
-
-Improvements to build warnings
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Numpy can now build with -W -Wall without warnings
-
-http://projects.scipy.org/numpy/browser/trunk/doc/neps/warnfix.txt
-
-Separate core math library
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The core math functions (sin, cos, etc... for basic C types) have been put into
-a separate library; it acts as a compatibility layer, to support most C99 maths
-functions (real only for now). The library includes platform-specific fixes for
-various maths functions, such that using those versions should be more robust
-than using your platform functions directly. The API for existing functions is
-exactly the same as the C99 math functions API; the only difference is the npy
-prefix (npy_cos vs cos).
-
-The core library will be made available to any extension in 1.4.0.
-
-CPU arch detection
-~~~~~~~~~~~~~~~~~~
-
-npy_cpu.h defines numpy specific CPU defines, such as NPY_CPU_X86, etc...
-Those are portable across OS and toolchains, and set up when the header is
-parsed, so that they can be safely used even in the case of cross-compilation
-(the values are not set when numpy is built), or for multi-arch binaries (e.g.
-fat binaries on Mac OS X).
-
-npy_endian.h defines numpy specific endianness defines, modeled on the glibc
-endian.h. NPY_BYTE_ORDER is equivalent to BYTE_ORDER, and one of
-NPY_LITTLE_ENDIAN or NPY_BIG_ENDIAN is defined.
As for CPU archs, those are set
-when the header is parsed by the compiler, and as such can be used for
-cross-compilation and multi-arch binaries.
diff --git a/pythonPackages/numpy/doc/release/1.4.0-notes.rst b/pythonPackages/numpy/doc/release/1.4.0-notes.rst
deleted file mode 100755
index 5429f8e76d..0000000000
--- a/pythonPackages/numpy/doc/release/1.4.0-notes.rst
+++ /dev/null
@@ -1,238 +0,0 @@
-=========================
-NumPy 1.4.0 Release Notes
-=========================
-
-This minor release includes numerous bug fixes, as well as a few new features.
-It is backward compatible with the 1.3.0 release.
-
-Highlights
-==========
-
-* New datetime dtype support to deal with dates in arrays
-
-* Faster import time
-
-* Extended array wrapping mechanism for ufuncs
-
-* New Neighborhood iterator (C-level only)
-
-* C99-like complex functions in npymath
-
-New features
-============
-
-Extended array wrapping mechanism for ufuncs
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-An __array_prepare__ method has been added to ndarray to provide subclasses
-greater flexibility to interact with ufuncs and ufunc-like functions. ndarray
-already provided __array_wrap__, which allowed subclasses to set the array type
-for the result and populate metadata on the way out of the ufunc (as seen in
-the implementation of MaskedArray). For some applications it is necessary to
-provide checks and populate metadata *on the way in*. __array_prepare__ is
-therefore called just after the ufunc has initialized the output array but
-before computing the results and populating it. This way, checks can be made
-and errors raised before operations which may modify data in place.
-
-Automatic detection of forward incompatibilities
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Previously, if an extension was built against a version N of NumPy, and used on
-a system with NumPy M < N, the import_array call was successful, which could
-cause crashes because version M does not have a function that exists in N.
Starting from
-NumPy 1.4.0, this will cause a failure in import_array, so the error will be
-caught early on.
-
-New iterators
-~~~~~~~~~~~~~
-
-A new neighborhood iterator has been added to the C API. It can be used to
-iterate over the items in a neighborhood of an array, and can handle boundary
-conditions automatically. Zero and one padding are available, as well as
-arbitrary constant value, mirror and circular padding.
-
-New polynomial support
-~~~~~~~~~~~~~~~~~~~~~~
-
-New modules chebyshev and polynomial have been added. The new polynomial module
-is not compatible with the current polynomial support in numpy, but is much
-like the new chebyshev module. The most noticeable difference for most users
-will be that coefficients are specified from low to high power, that the low
-level functions do *not* work with the Chebyshev and Polynomial classes as
-arguments, and that the Chebyshev and Polynomial classes include a domain.
-Mapping between domains is a linear substitution and the two classes can be
-converted one to the other, allowing, for instance, a Chebyshev series in
-one domain to be expanded as a polynomial in another domain. The new classes
-should generally be used instead of the low level functions; the latter are
-provided for those who wish to build their own classes.
-
-The new modules are not automatically imported into the numpy namespace,
-they must be explicitly brought in with an "import numpy.polynomial"
-statement.
-
-New C API
-~~~~~~~~~
-
-The following C functions have been added to the C API:
-
- #. PyArray_GetNDArrayCFeatureVersion: return the *API* version of the
-    loaded numpy.
- #. PyArray_Correlate2 - like PyArray_Correlate, but implements the usual
-    definition of correlation. Inputs are not swapped, and the conjugate is
-    taken for complex arrays.
- #. PyArray_NeighborhoodIterNew - a new iterator to iterate over a
-    neighborhood of a point, with automatic boundary handling.
It is
-documented in the iterators section of the C-API reference, and you can
-find some examples in the multiarray_test.c.src file in numpy.core.
-
-New ufuncs
-~~~~~~~~~~
-
-The following ufuncs have been added to the C API:
-
- #. copysign - return the value of the first argument with the sign copied
-    from the second argument.
- #. nextafter - return the next representable floating point value of the
-    first argument toward the second argument.
-
-New defines
-~~~~~~~~~~~
-
-The alpha processor is now defined and available in numpy/npy_cpu.h. The
-failed detection of the PARISC processor has been fixed. The defines are:
-
- #. NPY_CPU_HPPA: PARISC
- #. NPY_CPU_ALPHA: Alpha
-
-Testing
-~~~~~~~
-
- #. deprecated decorator: this decorator may be used to avoid cluttering
-    testing output while testing that a DeprecationWarning is effectively
-    raised by the decorated test.
- #. assert_array_almost_equal_nulp: new function to compare two arrays of
-    floating point values. With this function, two values are considered
-    close if there are not many representable floating point values in
-    between, thus being more robust than assert_array_almost_equal when the
-    values fluctuate a lot.
- #. assert_array_max_ulp: raise an assertion if there are more than N
-    representable numbers between two floating point values.
- #. assert_warns: raise an AssertionError if a callable does not generate a
-    warning of the appropriate class, without altering the warning state.
-
-Reusing npymath
-~~~~~~~~~~~~~~~
-
-In 1.3.0, we started putting portable C math routines in the npymath library,
-so that people can use those to write portable extensions. Unfortunately, it
-was not possible to easily link against this library: in 1.4.0, support has
-been added to numpy.distutils so that 3rd party extensions can reuse this
-library. See the coremath documentation for more information.
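The new ufuncs and testing helpers above can be sketched from Python (the printed values assume IEEE-754 double precision):

```python
import numpy as np
from numpy.testing import assert_array_max_ulp

# copysign: magnitude of the first argument, sign of the second
# (the sign of -0.0 counts as negative)
print(np.copysign(3.0, -0.0))   # -3.0

# nextafter: the adjacent representable double after 1.0, toward 2.0;
# the gap is one ulp at 1.0, i.e. 2**-52
gap = np.nextafter(1.0, 2.0) - 1.0
print(gap)                      # 2.220446049250313e-16

# assert_array_max_ulp passes here: the two values are exactly 1 ulp apart
assert_array_max_ulp(1.0, np.nextafter(1.0, 2.0), maxulp=1)
```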
-
-Improved set operations
-~~~~~~~~~~~~~~~~~~~~~~~
-
-In previous versions of NumPy some set functions (intersect1d,
-setxor1d, setdiff1d and setmember1d) could return incorrect results if
-the input arrays contained duplicate items. These now work correctly
-for input arrays with duplicates. setmember1d has been renamed to
-in1d, as with the change to accept arrays with duplicates it is
-no longer a set operation, and is conceptually similar to an
-elementwise version of the Python operator 'in'. All of these
-functions now accept the boolean keyword assume_unique. This is False
-by default, but can be set to True if the input arrays are known not
-to contain duplicates, which can increase the functions' execution
-speed.
-
-Improvements
-============
-
- #. numpy import is noticeably faster (by 20 to 30%, depending on the
-    platform and computer)
-
- #. The sort functions now sort nans to the end.
-
-    * Real sort order is [R, nan]
-    * Complex sort order is [R + Rj, R + nanj, nan + Rj, nan + nanj]
-
-    Complex numbers with the same nan placements are sorted according to
-    the non-nan part if it exists.
- #. The type comparison functions have been made consistent with the new
-    sort order of nans. Searchsorted now works with sorted arrays
-    containing nan values.
- #. Complex division has been made more resistant to overflow.
- #. Complex floor division has been made more resistant to overflow.
-
-Deprecations
-============
-
-The following functions are deprecated:
-
- #. correlate: it takes a new keyword argument old_behavior. When True (the
-    default), it returns the same result as before. When False, compute the
-    conventional correlation, and take the conjugate for complex arrays. The
-    old behavior raises a DeprecationWarning in 1.4 and will be removed in
-    NumPy 1.5.
-
- #. unique1d: use unique instead. unique1d raises a deprecation
-    warning in 1.4, and will be removed in 1.5.
-
- #. intersect1d_nu: use intersect1d instead.
intersect1d_nu raises
-a deprecation warning in 1.4, and will be removed in 1.5.
-
- #. setmember1d: use in1d instead. setmember1d raises a deprecation
-    warning in 1.4, and will be removed in 1.5.
-
-The following raise errors:
-
- #. When operating on 0-d arrays, ``numpy.max`` and other functions accept
-    only ``axis=0``, ``axis=-1`` and ``axis=None``. Using an out-of-bounds
-    axis is an indication of a bug, so Numpy raises an error for these cases
-    now.
-
- #. Specifying ``axis > MAX_DIMS`` is no longer allowed; Numpy now raises an
-    error instead of behaving as for ``axis=None``.
-
-Internal changes
-================
-
-Use C99 complex functions when available
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The numpy complex types are now guaranteed to be ABI compatible with the C99
-complex type, if available on the platform. Moreover, the complex ufuncs now
-use the platform C99 functions instead of our own.
-
-split multiarray and umath source code
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The source code of multiarray and umath has been split into separate logical
-compilation units. This should make the source code more approachable for
-newcomers.
-
-Separate compilation
-~~~~~~~~~~~~~~~~~~~~
-
-By default, every file of multiarray (and umath) is merged into one for
-compilation as was the case before, but if the NPY_SEPARATE_COMPILATION env
-variable is set to a non-negative value, experimental individual compilation of
-each file is enabled. This makes the compile/debug cycle much faster when
-working on core numpy.
- -Separate core math library -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -New functions which have been added: - - * npy_copysign - * npy_nextafter - * npy_cpack - * npy_creal - * npy_cimag - * npy_cabs - * npy_cexp - * npy_clog - * npy_cpow - * npy_csqr - * npy_ccos - * npy_csin diff --git a/pythonPackages/numpy/doc/release/1.5.0-notes.rst b/pythonPackages/numpy/doc/release/1.5.0-notes.rst deleted file mode 100755 index abccfdd2e0..0000000000 --- a/pythonPackages/numpy/doc/release/1.5.0-notes.rst +++ /dev/null @@ -1,106 +0,0 @@ -========================= -NumPy 1.5.0 Release Notes -========================= - - -Plans -===== - -This release has the following aims: - -* Python 3 compatibility -* :pep:`3118` compatibility - - -Highlights -========== - - -New features -============ - -Warning on casting complex to real -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Numpy now emits a `numpy.ComplexWarning` when a complex number is cast -into a real number. For example: - - >>> x = np.array([1,2,3]) - >>> x[:2] = np.array([1+2j, 1-2j]) - ComplexWarning: Casting complex values to real discards the imaginary part - -The cast indeed discards the imaginary part, and this may not be the -intended behavior in all cases, hence the warning. This warning can be -turned off in the standard way: - - >>> import warnings - >>> warnings.simplefilter("ignore", np.ComplexWarning) - -Dot method for ndarrays -~~~~~~~~~~~~~~~~~~~~~~~ - -Ndarrays now have the dot product also as a method, which allows writing -chains of matrix products as - - >>> a.dot(b).dot(c) - -instead of the longer alternative - - >>> np.dot(a, np.dot(b, c)) - -linalg.slogdet function -~~~~~~~~~~~~~~~~~~~~~~~ - -The slogdet function returns the sign and logarithm of the determinant -of a matrix. Because the determinant may involve the product of many -small/large values, the result is often more accurate than that obtained -by simple multiplication. 
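As an illustrative sketch of slogdet (the matrix here is hypothetical, chosen so that the determinant itself overflows a double while its logarithm remains representable):

```python
import numpy as np

# A 500x500 diagonal matrix with determinant 10**500,
# far beyond the float64 maximum of about 1.8e308
a = np.diag(np.full(500, 10.0))

sign, logdet = np.linalg.slogdet(a)
print(sign)              # 1.0
print(logdet)            # 500 * log(10), about 1151.29

# The plain determinant overflows to infinity
print(np.linalg.det(a))  # inf
```

Reconstructing the determinant as ``sign * np.exp(logdet)`` is only safe when the magnitude fits in a double; often the log itself is the quantity you want (e.g. in log-likelihoods).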
-
-new header
-~~~~~~~~~~
-
-The new header file ndarraytypes.h contains the symbols from
-ndarrayobject.h that do not depend on the PY_ARRAY_UNIQUE_SYMBOL and
-NO_IMPORT/_ARRAY macros. Broadly, these symbols are types, typedefs,
-and enumerations; the array function calls are left in
-ndarrayobject.h. This allows users to include array-related types and
-enumerations without needing to concern themselves with the macro
-expansions and their side-effects.
-
-Changes
-=======
-
-polynomial.polynomial
----------------------
-
-* The polyint and polyder functions now check that the specified number of
-  integrations or derivations is a non-negative integer. The number 0 is
-  a valid value for both functions.
-* A degree method has been added to the Polynomial class.
-* A trimdeg method has been added to the Polynomial class. It operates like
-  truncate except that the argument is the desired degree of the result,
-  not the number of coefficients.
-* Polynomial.fit now uses None as the default domain for the fit. The default
-  Polynomial domain can be specified by using [] as the domain value.
-* Weights can be used in both polyfit and Polynomial.fit
-* A linspace method has been added to the Polynomial class to ease plotting.
-
-polynomial.chebyshev
---------------------
-
-* The chebint and chebder functions now check that the specified number of
-  integrations or derivations is a non-negative integer. The number 0 is
-  a valid value for both functions.
-* A degree method has been added to the Chebyshev class.
-* A trimdeg method has been added to the Chebyshev class. It operates like
-  truncate except that the argument is the desired degree of the result,
-  not the number of coefficients.
-* Chebyshev.fit now uses None as the default domain for the fit. The default
-  Chebyshev domain can be specified by using [] as the domain value.
-* Weights can be used in both chebfit and Chebyshev.fit
-* A linspace method has been added to the Chebyshev class to ease plotting.
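A small sketch of the Polynomial and Chebyshev classes mentioned above, including the degree method and the conversion between the two representations (the specific coefficients are illustrative):

```python
import numpy as np
from numpy.polynomial import Polynomial, Chebyshev

# Coefficients run from low to high degree: p(x) = 1 + 2x + 3x**2
p = Polynomial([1, 2, 3])
print(p(2.0))       # 17.0
print(p.degree())   # 2

# Convert to an equivalent Chebyshev series; both represent
# the same function, just in different bases
c = p.convert(kind=Chebyshev)
print(c(2.0))       # 17.0

# Converting back recovers the original coefficients
print(c.convert(kind=Polynomial).coef)   # [1. 2. 3.]
```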
-
-histogram
----------
-
-After a two-year transition period, the old behavior of the histogram function
-has been phased out, and the "new" keyword has been removed.
diff --git a/pythonPackages/numpy/doc/release/time_based_proposal.rst b/pythonPackages/numpy/doc/release/time_based_proposal.rst
deleted file mode 100755
index 555be68633..0000000000
--- a/pythonPackages/numpy/doc/release/time_based_proposal.rst
+++ /dev/null
@@ -1,129 +0,0 @@
-.. vim:syntax=rst
-
-Introduction
-============
-
-This document proposes some enhancements for numpy and scipy releases.
-Successive numpy and scipy releases are too far apart in time - some people
-on the numpy release team feel that the project cannot improve without a
-somewhat more formal release process. The main proposal is to follow a
-time-based release schedule, with expected dates for code freeze, beta and rc.
-The goal is twofold: make releases more predictable, and move the code forward.
-
-Rationale
-=========
-
-Right now, the release process of numpy is relatively organic. When some
-features are there, we may decide to make a new release. Because there is no
-fixed schedule, people don't really know when new features and bug fixes will
-go into a release. More significantly, having an expected release schedule
-helps to *coordinate* efforts: at the beginning of a cycle, everybody can jump
-in and add new code, even break things if needed. But after some point, only
-bug fixes are accepted: this makes beta and RC releases much easier; calming
-things down toward the release date helps focus on bugs and regressions.
-
-Proposal
-========
-
-Time schedule
--------------
-
-The proposed schedule is to release numpy every 9 weeks - the exact period can
-be tweaked if it ends up not working as expected. There will be several stages
-for the cycle:
-
- * Development: anything can happen (by anything, we mean as currently
-   done). The focus is on new features, refactoring, etc...
-
- * Beta: no new features.
No bug fixes which require heavy changes. Regression fixes
-   which appear on supported platforms and were not caught earlier are
-   allowed.
-
- * Polish/RC: only docstring changes and blocker regressions are allowed.
-
-The schedule would be as follows:
-
-  +------+-----------------+-----------------+------------------+
-  | Week | 1.3.0           | 1.4.0           | Release time     |
-  +======+=================+=================+==================+
-  | 1    | Development     |                 |                  |
-  +------+-----------------+-----------------+------------------+
-  | 2    | Development     |                 |                  |
-  +------+-----------------+-----------------+------------------+
-  | 3    | Development     |                 |                  |
-  +------+-----------------+-----------------+------------------+
-  | 4    | Development     |                 |                  |
-  +------+-----------------+-----------------+------------------+
-  | 5    | Development     |                 |                  |
-  +------+-----------------+-----------------+------------------+
-  | 6    | Development     |                 |                  |
-  +------+-----------------+-----------------+------------------+
-  | 7    | Beta            |                 |                  |
-  +------+-----------------+-----------------+------------------+
-  | 8    | Beta            |                 |                  |
-  +------+-----------------+-----------------+------------------+
-  | 9    | Beta            |                 | 1.3.0 released   |
-  +------+-----------------+-----------------+------------------+
-  | 10   | Polish          | Development     |                  |
-  +------+-----------------+-----------------+------------------+
-  | 11   | Polish          | Development     |                  |
-  +------+-----------------+-----------------+------------------+
-  | 12   | Polish          | Development     |                  |
-  +------+-----------------+-----------------+------------------+
-  | 13   | Polish          | Development     |                  |
-  +------+-----------------+-----------------+------------------+
-  | 14   |                 | Development     |                  |
-  +------+-----------------+-----------------+------------------+
-  | 15   |                 | Development     |                  |
-  +------+-----------------+-----------------+------------------+
-  | 16   |                 | Beta            |                  |
-  +------+-----------------+-----------------+------------------+
-  | 17   |                 | Beta            |                  |
-  +------+-----------------+-----------------+------------------+
-  | 18   |                 | Beta            | 1.4.0 released   |
-  +------+-----------------+-----------------+------------------+
-
-Each stage can be defined as follows:
-
-  +------------------+-------------+----------------+----------------+
-  |                  | Development | Beta           | Polish         |
-  +==================+=============+================+================+
-  | Python Frozen    |             | slushy         | Y              |
-  +------------------+-------------+----------------+----------------+
-  | Docstring Frozen |             | slushy         | thicker slush  |
-  +------------------+-------------+----------------+----------------+
-  | C code Frozen    |             | thicker slush  | thicker slush  |
-  +------------------+-------------+----------------+----------------+
-
-Terminology:
-
- * slushy: you can change it if you beg the release team and it's really
-   important and you coordinate with docs/translations; no "big"
-   changes.
-
- * thicker slush: you can change it if it's an open bug marked
-   showstopper for the Polish release, you beg the release team, the
-   change is very very small yet very very important, and you feel
-   extremely guilty about your transgressions.
-
-The different frozen states are intended to be gradations. The exact meaning
-is decided by the release manager, who has the last word on what goes in and
-what doesn't. The proposed schedule means that there would be at most 12 weeks
-between code entering the source repository and its release.
-
-Release team
-------------
-
-For every release, there would be at least one release manager. We propose to
-rotate the release manager: rotation means it is not always the same person
-doing the dirty job, and it should also keep the release manager honest.
- -References -========== - - * Proposed schedule for Gnome from Havoc Pennington (one of the core - GTK and Gnome manager): - http://mail.gnome.org/archives/gnome-hackers/2002-June/msg00041.html - The proposed schedule is heavily based on this email - - * http://live.gnome.org/ReleasePlanning/Freezes diff --git a/pythonPackages/numpy/doc/source/_static/scipy.css b/pythonPackages/numpy/doc/source/_static/scipy.css deleted file mode 100755 index 44ac1a60f7..0000000000 --- a/pythonPackages/numpy/doc/source/_static/scipy.css +++ /dev/null @@ -1,183 +0,0 @@ -@import "default.css"; - -/** - * Spacing fixes - */ - -div.body p, div.body dd, div.body li { - line-height: 125%; -} - -ul.simple { - margin-top: 0; - margin-bottom: 0; - padding-top: 0; - padding-bottom: 0; -} - -/* spacing around blockquoted fields in parameters/attributes/returns */ -td.field-body > blockquote { - margin-top: 0.1em; - margin-bottom: 0.5em; -} - -/* spacing around example code */ -div.highlight > pre { - padding: 2px 5px 2px 5px; -} - -/* spacing in see also definition lists */ -dl.last > dd { - margin-top: 1px; - margin-bottom: 5px; - margin-left: 30px; -} - -/** - * Hide dummy toctrees - */ - -ul { - padding-top: 0; - padding-bottom: 0; - margin-top: 0; - margin-bottom: 0; -} -ul li { - padding-top: 0; - padding-bottom: 0; - margin-top: 0; - margin-bottom: 0; -} -ul li a.reference { - padding-top: 0; - padding-bottom: 0; - margin-top: 0; - margin-bottom: 0; -} - -/** - * Make high-level subsections easier to distinguish from top-level ones - */ -div.body h3 { - background-color: transparent; -} - -div.body h4 { - border: none; - background-color: transparent; -} - -/** - * Scipy colors - */ - -body { - background-color: rgb(100,135,220); -} - -div.document { - background-color: rgb(230,230,230); -} - -div.sphinxsidebar { - background-color: rgb(230,230,230); -} - -div.related { - background-color: rgb(100,135,220); -} - -div.sphinxsidebar h3 { - color: rgb(0,102,204); -} - -div.sphinxsidebar 
h3 a { - color: rgb(0,102,204); -} - -div.sphinxsidebar h4 { - color: rgb(0,82,194); -} - -div.sphinxsidebar p { - color: black; -} - -div.sphinxsidebar a { - color: #355f7c; -} - -div.sphinxsidebar ul.want-points { - list-style: disc; -} - -.field-list th { - color: rgb(0,102,204); -} - -/** - * Extra admonitions - */ - -div.tip { - background-color: #ffffe4; - border: 1px solid #ee6; -} - -div.plot-output { - clear-after: both; -} - -div.plot-output .figure { - float: left; - text-align: center; - margin-bottom: 0; - padding-bottom: 0; -} - -div.plot-output .caption { - margin-top: 2; - padding-top: 0; -} - -div.plot-output p.admonition-title { - display: none; -} - -div.plot-output:after { - content: ""; - display: block; - height: 0; - clear: both; -} - - -/* -div.admonition-example { - background-color: #e4ffe4; - border: 1px solid #ccc; -}*/ - - -/** - * Styling for field lists - */ - -table.field-list th { - border-left: 1px solid #aaa !important; - padding-left: 5px; -} - -table.field-list { - border-collapse: separate; - border-spacing: 10px; -} - -/** - * Styling for footnotes - */ - -table.footnote td, table.footnote th { - border: none; -} diff --git a/pythonPackages/numpy/doc/source/_templates/autosummary/class.rst b/pythonPackages/numpy/doc/source/_templates/autosummary/class.rst deleted file mode 100755 index 0cabe7cd16..0000000000 --- a/pythonPackages/numpy/doc/source/_templates/autosummary/class.rst +++ /dev/null @@ -1,23 +0,0 @@ -{% extends "!autosummary/class.rst" %} - -{% block methods %} -{% if methods %} - .. HACK - .. autosummary:: - :toctree: - {% for item in methods %} - {{ name }}.{{ item }} - {%- endfor %} -{% endif %} -{% endblock %} - -{% block attributes %} -{% if attributes %} - .. HACK - .. 
autosummary:: - :toctree: - {% for item in attributes %} - {{ name }}.{{ item }} - {%- endfor %} -{% endif %} -{% endblock %} diff --git a/pythonPackages/numpy/doc/source/_templates/indexcontent.html b/pythonPackages/numpy/doc/source/_templates/indexcontent.html deleted file mode 100755 index 49d955d8c2..0000000000 --- a/pythonPackages/numpy/doc/source/_templates/indexcontent.html +++ /dev/null @@ -1,56 +0,0 @@ -{% extends "defindex.html" %} -{% block tables %} -

    Parts of the documentation:

    - - -
    - - -
    - -

    Indices and tables:

    - - -
    - - - - - - -
    - -

    Meta information:

    - - -
    - - - - - -
    - -

    Acknowledgements

    -

- Large parts of this manual originate from Travis E. Oliphant's book - "Guide to Numpy" (which generously entered - Public Domain in August 2008). The reference documentation for many of - the functions was written by numerous contributors and developers of - Numpy, both prior to and during the - Numpy Documentation Marathon. -

    -

    - The Documentation Marathon is still ongoing. Please help us write - better documentation for Numpy by joining it! Instructions on how to - join and what to do can be found - on the scipy.org website. -

    -{% endblock %} diff --git a/pythonPackages/numpy/doc/source/_templates/indexsidebar.html b/pythonPackages/numpy/doc/source/_templates/indexsidebar.html deleted file mode 100755 index 409743a038..0000000000 --- a/pythonPackages/numpy/doc/source/_templates/indexsidebar.html +++ /dev/null @@ -1,5 +0,0 @@ -

    Resources

    - diff --git a/pythonPackages/numpy/doc/source/_templates/layout.html b/pythonPackages/numpy/doc/source/_templates/layout.html deleted file mode 100755 index 27798878e9..0000000000 --- a/pythonPackages/numpy/doc/source/_templates/layout.html +++ /dev/null @@ -1,17 +0,0 @@ -{% extends "!layout.html" %} -{% block rootrellink %} -
  • {{ shorttitle }}{{ reldelim1 }}
  • -{% endblock %} - -{% block sidebarsearch %} -{%- if sourcename %} - -{%- endif %} -{{ super() }} -{% endblock %} diff --git a/pythonPackages/numpy/doc/source/about.rst b/pythonPackages/numpy/doc/source/about.rst deleted file mode 100755 index bcfbe53230..0000000000 --- a/pythonPackages/numpy/doc/source/about.rst +++ /dev/null @@ -1,65 +0,0 @@ -About NumPy -=========== - -`NumPy `__ is the fundamental package -needed for scientific computing with Python. This package contains: - -- a powerful N-dimensional :ref:`array object ` -- sophisticated :ref:`(broadcasting) functions ` -- basic :ref:`linear algebra functions ` -- basic :ref:`Fourier transforms ` -- sophisticated :ref:`random number capabilities ` -- tools for integrating Fortran code -- tools for integrating C/C++ code - -Besides its obvious scientific uses, *NumPy* can also be used as an -efficient multi-dimensional container of generic data. Arbitrary -data types can be defined. This allows *NumPy* to seamlessly and -speedily integrate with a wide variety of databases. - -NumPy is a successor for two earlier scientific Python libraries: -NumPy derives from the old *Numeric* code base and can be used -as a replacement for *Numeric*. It also adds the features introduced -by *Numarray* and can also be used to replace *Numarray*. - -NumPy community ---------------- - -Numpy is a distributed, volunteer, open-source project. *You* can help -us make it better; if you believe something should be improved either -in functionality or in documentation, don't hesitate to contact us --- or -even better, contact us and participate in fixing the problem. - -Our main means of communication are: - -- `scipy.org website `__ - -- `Mailing lists `__ - -- `Numpy Trac `__ (bug "tickets" go here) - -More information about the development of Numpy can be found at -http://scipy.org/Developer_Zone - -If you want to fix issues in this documentation, the easiest way -is to participate in `our ongoing documentation marathon -`__. 
- - -About this documentation -======================== - -Conventions ------------ - -Names of classes, objects, constants, etc. are given in **boldface** font. -Often they are also links to a more detailed documentation of the -referred object. - -This manual contains many examples of use, usually prefixed with the -Python prompt ``>>>`` (which is not a part of the example code). The -examples assume that you have first entered:: - ->>> import numpy as np - -before running the examples. diff --git a/pythonPackages/numpy/doc/source/bugs.rst b/pythonPackages/numpy/doc/source/bugs.rst deleted file mode 100755 index cd2c5d3e85..0000000000 --- a/pythonPackages/numpy/doc/source/bugs.rst +++ /dev/null @@ -1,23 +0,0 @@ -************** -Reporting bugs -************** - -File bug reports or feature requests, and make contributions -(e.g. code patches), by submitting a "ticket" on the Trac pages: - -- Numpy Trac: http://scipy.org/scipy/numpy - -Because of spam abuse, you must create an account on our Trac in order -to submit a ticket, then click on the "New Ticket" tab that only -appears when you have logged in. Please give as much information as -you can in the ticket. It is extremely useful if you can supply a -small self-contained code snippet that reproduces the problem. Also -specify the component, the version you are referring to and the -milestone. - -Report bugs to the appropriate Trac instance (there is one for NumPy -and a different one for SciPy). There are also read-only mailing lists -for tracking the status of your bug ticket. - -More information can be found on the http://scipy.org/Developer_Zone -website. 
diff --git a/pythonPackages/numpy/doc/source/conf.py b/pythonPackages/numpy/doc/source/conf.py deleted file mode 100755 index 59bf3e9099..0000000000 --- a/pythonPackages/numpy/doc/source/conf.py +++ /dev/null @@ -1,274 +0,0 @@ -# -*- coding: utf-8 -*- - -import sys, os, re - -# Check Sphinx version -import sphinx -if sphinx.__version__ < "0.5": - raise RuntimeError("Sphinx 0.5.dev or newer required") - -# ----------------------------------------------------------------------------- -# General configuration -# ----------------------------------------------------------------------------- - -# Add any Sphinx extension module names here, as strings. They can be extensions -# coming with Sphinx (named 'sphinx.ext.*') or your custom ones. - -sys.path.insert(0, os.path.abspath('../sphinxext')) - -extensions = ['sphinx.ext.autodoc', 'sphinx.ext.pngmath', 'numpydoc', - 'sphinx.ext.intersphinx', 'sphinx.ext.coverage', - 'sphinx.ext.doctest', - 'plot_directive'] - -if sphinx.__version__ >= "0.7": - extensions.append('sphinx.ext.autosummary') -else: - extensions.append('autosummary') - extensions.append('only_directives') - -# Add any paths that contain templates here, relative to this directory. -templates_path = ['_templates'] - -# The suffix of source filenames. -source_suffix = '.rst' - -# The master toctree document. -#master_doc = 'index' - -# General substitutions. -project = 'NumPy' -copyright = '2008-2009, The Scipy community' - -# The default replacements for |version| and |release|, also used in various -# other places throughout the built documents. -# -import numpy -# The short X.Y version (including .devXXXX, rcX, b1 suffixes if present) -version = re.sub(r'(\d+\.\d+)\.\d+(.*)', r'\1\2', numpy.__version__) -version = re.sub(r'(\.dev\d+).*?$', r'\1', version) -# The full version, including alpha/beta/rc tags. 
-release = numpy.__version__ -print version, release - -# There are two options for replacing |today|: either, you set today to some -# non-false value, then it is used: -#today = '' -# Else, today_fmt is used as the format for a strftime call. -today_fmt = '%B %d, %Y' - -# List of documents that shouldn't be included in the build. -#unused_docs = [] - -# The reST default role (used for this markup: `text`) to use for all documents. -default_role = "autolink" - -# List of directories, relative to source directories, that shouldn't be searched -# for source files. -exclude_dirs = [] - -# If true, '()' will be appended to :func: etc. cross-reference text. -add_function_parentheses = False - -# If true, the current module name will be prepended to all description -# unit titles (such as .. function::). -#add_module_names = True - -# If true, sectionauthor and moduleauthor directives will be shown in the -# output. They are ignored by default. -#show_authors = False - -# The name of the Pygments (syntax highlighting) style to use. -pygments_style = 'sphinx' - - -# ----------------------------------------------------------------------------- -# HTML output -# ----------------------------------------------------------------------------- - -# The style sheet to use for HTML and HTML Help pages. A file of that name -# must exist either in Sphinx' static/ path, or in one of the custom paths -# given in html_static_path. -html_style = 'scipy.css' - -# The name for this set of Sphinx documents. If None, it defaults to -# " v documentation". -html_title = "%s v%s Manual (DRAFT)" % (project, version) - -# The name of an image file (within the static path) to place at the top of -# the sidebar. -html_logo = 'scipyshiny_small.png' - -# Add any paths that contain custom static files (such as style sheets) here, -# relative to this directory. They are copied after the builtin static files, -# so a file named "default.css" will overwrite the builtin "default.css". 
-html_static_path = ['_static'] - -# If not '', a 'Last updated on:' timestamp is inserted at every page bottom, -# using the given strftime format. -html_last_updated_fmt = '%b %d, %Y' - -# If true, SmartyPants will be used to convert quotes and dashes to -# typographically correct entities. -#html_use_smartypants = True - -# Custom sidebar templates, maps document names to template names. -html_sidebars = { - 'index': 'indexsidebar.html' -} - -# Additional templates that should be rendered to pages, maps page names to -# template names. -html_additional_pages = { - 'index': 'indexcontent.html', -} - -# If false, no module index is generated. -html_use_modindex = True - -# If true, the reST sources are included in the HTML build as _sources/. -#html_copy_source = True - -# If true, an OpenSearch description file will be output, and all pages will -# contain a tag referring to it. The value of this option must be the -# base URL from which the finished HTML is served. -#html_use_opensearch = '' - -# If nonempty, this is the file name suffix for HTML files (e.g. ".html"). -#html_file_suffix = '.html' - -# Output file base name for HTML help builder. -htmlhelp_basename = 'numpy' - -# Pngmath should try to align formulas properly -pngmath_use_preview = True - - -# ----------------------------------------------------------------------------- -# LaTeX output -# ----------------------------------------------------------------------------- - -# The paper size ('letter' or 'a4'). -#latex_paper_size = 'letter' - -# The font size ('10pt', '11pt' or '12pt'). -#latex_font_size = '10pt' - -# Grouping the document tree into LaTeX files. List of tuples -# (source start file, target name, title, author, document class [howto/manual]). 
-_stdauthor = 'Written by the NumPy community' -latex_documents = [ - ('reference/index', 'numpy-ref.tex', 'NumPy Reference', - _stdauthor, 'manual'), - ('user/index', 'numpy-user.tex', 'NumPy User Guide', - _stdauthor, 'manual'), -] - -# The name of an image file (relative to this directory) to place at the top of -# the title page. -#latex_logo = None - -# For "manual" documents, if this is true, then toplevel headings are parts, -# not chapters. -#latex_use_parts = False - -# Additional stuff for the LaTeX preamble. -latex_preamble = r''' -\usepackage{amsmath} -\DeclareUnicodeCharacter{00A0}{\nobreakspace} - -% In the parameters section, place a newline after the Parameters -% header -\usepackage{expdlist} -\let\latexdescription=\description -\def\description{\latexdescription{}{} \breaklabel} - -% Make Examples/etc section headers smaller and more compact -\makeatletter -\titleformat{\paragraph}{\normalsize\py@HeaderFamily}% - {\py@TitleColor}{0em}{\py@TitleColor}{\py@NormalColor} -\titlespacing*{\paragraph}{0pt}{1ex}{0pt} -\makeatother - -% Fix footer/header -\renewcommand{\chaptermark}[1]{\markboth{\MakeUppercase{\thechapter.\ #1}}{}} -\renewcommand{\sectionmark}[1]{\markright{\MakeUppercase{\thesection.\ #1}}} -''' - -# Documents to append as an appendix to all manuals. -#latex_appendices = [] - -# If false, no module index is generated. 
-latex_use_modindex = False - - -# ----------------------------------------------------------------------------- -# Intersphinx configuration -# ----------------------------------------------------------------------------- -intersphinx_mapping = {'http://docs.python.org/dev': None} - - -# ----------------------------------------------------------------------------- -# Numpy extensions -# ----------------------------------------------------------------------------- - -# If we want to do a phantom import from an XML file for all autodocs -phantom_import_file = 'dump.xml' - -# Make numpydoc to generate plots for example sections -numpydoc_use_plots = True - -# ----------------------------------------------------------------------------- -# Autosummary -# ----------------------------------------------------------------------------- - -if sphinx.__version__ >= "0.7": - import glob - autosummary_generate = glob.glob("reference/*.rst") - -# ----------------------------------------------------------------------------- -# Coverage checker -# ----------------------------------------------------------------------------- -coverage_ignore_modules = r""" - """.split() -coverage_ignore_functions = r""" - test($|_) (some|all)true bitwise_not cumproduct pkgload - generic\. 
- """.split() -coverage_ignore_classes = r""" - """.split() - -coverage_c_path = [] -coverage_c_regexes = {} -coverage_ignore_c_items = {} - - -# ----------------------------------------------------------------------------- -# Plots -# ----------------------------------------------------------------------------- -plot_pre_code = """ -import numpy as np -np.random.seed(0) -""" -plot_include_source = True -plot_formats = [('png', 100), 'pdf'] - -import math -phi = (math.sqrt(5) + 1)/2 - -import matplotlib -matplotlib.rcParams.update({ - 'font.size': 8, - 'axes.titlesize': 8, - 'axes.labelsize': 8, - 'xtick.labelsize': 8, - 'ytick.labelsize': 8, - 'legend.fontsize': 8, - 'figure.figsize': (3*phi, 3), - 'figure.subplot.bottom': 0.2, - 'figure.subplot.left': 0.2, - 'figure.subplot.right': 0.9, - 'figure.subplot.top': 0.85, - 'figure.subplot.wspace': 0.4, - 'text.usetex': False, -}) diff --git a/pythonPackages/numpy/doc/source/contents.rst b/pythonPackages/numpy/doc/source/contents.rst deleted file mode 100755 index 31ade23060..0000000000 --- a/pythonPackages/numpy/doc/source/contents.rst +++ /dev/null @@ -1,13 +0,0 @@ -##################### -Numpy manual contents -##################### - -.. toctree:: - - user/index - reference/index - release - about - bugs - license - glossary diff --git a/pythonPackages/numpy/doc/source/glossary.rst b/pythonPackages/numpy/doc/source/glossary.rst deleted file mode 100755 index ffa8f7368c..0000000000 --- a/pythonPackages/numpy/doc/source/glossary.rst +++ /dev/null @@ -1,14 +0,0 @@ -******** -Glossary -******** - -.. toctree:: - -.. glossary:: - - .. automodule:: numpy.doc.glossary - -Jargon ------- - -.. 
automodule:: numpy.doc.jargon diff --git a/pythonPackages/numpy/doc/source/license.rst b/pythonPackages/numpy/doc/source/license.rst deleted file mode 100755 index 2b3b7ebd33..0000000000 --- a/pythonPackages/numpy/doc/source/license.rst +++ /dev/null @@ -1,35 +0,0 @@ -************* -Numpy License -************* - -Copyright (c) 2005, NumPy Developers - -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - -* Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - -* Redistributions in binary form must reproduce the above - copyright notice, this list of conditions and the following - disclaimer in the documentation and/or other materials provided - with the distribution. - -* Neither the name of the NumPy Developers nor the names of any - contributors may be used to endorse or promote products derived - from this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/pythonPackages/numpy/doc/source/reference/arrays.classes.rst b/pythonPackages/numpy/doc/source/reference/arrays.classes.rst deleted file mode 100755 index bef723091d..0000000000 --- a/pythonPackages/numpy/doc/source/reference/arrays.classes.rst +++ /dev/null @@ -1,423 +0,0 @@ -######################### -Standard array subclasses -######################### - -.. currentmodule:: numpy - -The :class:`ndarray` in NumPy is a "new-style" Python -built-in type. Therefore, it can be inherited from (in Python or in C) -if desired, and it can form a foundation for many useful -classes. Whether to sub-class the array object or to simply use -the core array component as an internal part of a new class is often a -difficult decision, and can be simply a matter of choice. NumPy has -several tools for simplifying how your new object interacts with other -array objects, and so the choice may not be significant in the -end. One way to simplify the question is by asking yourself whether the -object you are interested in can be represented as a single array or whether -it really requires two or more arrays at its core. - -Note that :func:`asarray` always returns the base-class ndarray. If -you are confident that your use of the array object can handle any -subclass of an ndarray, then :func:`asanyarray` can be used to allow -subclasses to propagate more cleanly through your subroutine. In -principle a subclass could redefine any aspect of the array and -therefore, under strict guidelines, :func:`asanyarray` would rarely be -useful. However, most subclasses of the array object will not -redefine certain aspects of the array object such as the buffer -interface, or the attributes of the array. One important example, -however, of why your subroutine may not be able to handle an arbitrary -subclass of an array is that matrices redefine the "*" operator to be -matrix-multiplication, rather than element-by-element multiplication.
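The asarray/asanyarray distinction above can be seen directly. The following is an editor's sketch in modern Python 3 syntax (the page's own doctests are Python 2 era); it uses the classic `np.matrix` class, which still ships with NumPy but is deprecated in current releases:

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])

# asarray drops the subclass: '*' is element-by-element again
a = np.asarray(m)
# asanyarray preserves the subclass: '*' stays matrix multiplication
b = np.asanyarray(m)

print(type(a) is np.ndarray)   # True -- base-class array
print((a * a)[0, 0])           # 1   (elementwise square)
print((b * b)[0, 0])           # 7   (row-by-column matrix product)
```

This is exactly the trap the text warns about: a subroutine that computes `x * x` on an `asanyarray` result silently performs a different operation when handed a matrix.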
- - -Special attributes and methods -============================== - -.. seealso:: :ref:`Subclassing ndarray ` - -Numpy provides several hooks that subclasses of :class:`ndarray` can -customize: - -.. function:: __array_finalize__(self) - - This method is called whenever the system internally allocates a - new array from *obj*, where *obj* is a subclass (subtype) of the - :class:`ndarray`. It can be used to change attributes of *self* - after construction (so as to ensure a 2-d matrix for example), or - to update meta-information from the "parent." Subclasses inherit - a default implementation of this method that does nothing. - -.. function:: __array_prepare__(array, context=None) - - At the beginning of every :ref:`ufunc `, this - method is called on the input object with the highest array - priority, or the output object if one was specified. The output - array is passed in and whatever is returned is passed to the ufunc. - Subclasses inherit a default implementation of this method which - simply returns the output array unmodified. Subclasses may opt to - use this method to transform the output array into an instance of - the subclass and update metadata before returning the array to the - ufunc for computation. - -.. function:: __array_wrap__(array, context=None) - - At the end of every :ref:`ufunc `, this method - is called on the input object with the highest array priority, or - the output object if one was specified. The ufunc-computed array - is passed in and whatever is returned is passed to the user. - Subclasses inherit a default implementation of this method, which - transforms the array into a new instance of the object's class. - Subclasses may opt to use this method to transform the output array - into an instance of the subclass and update metadata before - returning the array to the user. - -.. 
data:: __array_priority__ - - The value of this attribute is used to determine what type of - object to return in situations where there is more than one - possibility for the Python type of the returned object. Subclasses - inherit a default value of 1.0 for this attribute. - -.. function:: __array__([dtype]) - - If a class having the :obj:`__array__` method is used as the output - object of an :ref:`ufunc `, results will be - written to the object returned by :obj:`__array__`. - -Matrix objects -============== - -.. index:: - single: matrix - -:class:`matrix` objects inherit from the ndarray and therefore, they -have the same attributes and methods of ndarrays. There are six -important differences of matrix objects, however, that may lead to -unexpected results when you use matrices but expect them to act like -arrays: - -1. Matrix objects can be created using a string notation to allow - Matlab-style syntax where spaces separate columns and semicolons - (';') separate rows. - -2. Matrix objects are always two-dimensional. This has far-reaching - implications, in that m.ravel() is still two-dimensional (with a 1 - in the first dimension) and item selection returns two-dimensional - objects so that sequence behavior is fundamentally different than - arrays. - -3. Matrix objects over-ride multiplication to be - matrix-multiplication. **Make sure you understand this for - functions that you may want to receive matrices. Especially in - light of the fact that asanyarray(m) returns a matrix when m is - a matrix.** - -4. Matrix objects over-ride power to be matrix raised to a power. The - same warning about using power inside a function that uses - asanyarray(...) to get an array object holds for this fact. - -5. The default __array_priority\__ of matrix objects is 10.0, and - therefore mixed operations with ndarrays always produce matrices. - -6. Matrices have special attributes which make calculations easier. - These are - - .. 
autosummary:: - :toctree: generated/ - - matrix.T - matrix.H - matrix.I - matrix.A - -.. warning:: - - Matrix objects over-ride multiplication, '*', and power, '**', to - be matrix-multiplication and matrix power, respectively. If your - subroutine can accept sub-classes and you do not convert to base-class arrays, then you must use the ufuncs multiply and power to - be sure that you are performing the correct operation for all - inputs. - -The matrix class is a Python subclass of the ndarray and can be used -as a reference for how to construct your own subclass of the ndarray. -Matrices can be created from other matrices, strings, and anything -else that can be converted to an ``ndarray``. The name "mat" is an -alias for "matrix" in NumPy. - -.. autosummary:: - :toctree: generated/ - - matrix - asmatrix - bmat - -Example 1: Matrix creation from a string - ->>> a=mat('1 2 3; 4 5 3') ->>> print (a*a.T).I -[[ 0.2924 -0.1345] - [-0.1345 0.0819]] - -Example 2: Matrix creation from a nested sequence - ->>> mat([[1,5,10],[1.0,3,4j]]) -matrix([[ 1.+0.j, 5.+0.j, 10.+0.j], - [ 1.+0.j, 3.+0.j, 0.+4.j]]) - -Example 3: Matrix creation from an array - ->>> mat(random.rand(3,3)).T -matrix([[ 0.7699, 0.7922, 0.3294], - [ 0.2792, 0.0101, 0.9219], - [ 0.3398, 0.7571, 0.8197]]) - -Memory-mapped file arrays -========================= - -.. index:: - single: memory maps - -.. currentmodule:: numpy - -Memory-mapped files are useful for reading and/or modifying small -segments of a large file with regular layout, without reading the -entire file into memory. A simple subclass of the ndarray uses a -memory-mapped file for the data buffer of the array. For small files, -the overhead of reading the entire file into memory is typically not -significant; for large files, however, using memory mapping can save -considerable resources.
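The memmap workflow described above can be sketched in modern Python 3 syntax (an editor's illustration; the original doctest below uses a bare `memmap` name and Python 2 `print`). A temporary file stands in for the `'newfile.dat'` of the original example:

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), 'newfile.dat')

# Create a 1000-element float64 array backed by a file on disk
a = np.memmap(path, dtype=float, mode='w+', shape=1000)
a[10] = 10.0
a[30] = 30.0
a.flush()            # push any pending changes out to the file
del a

# The file now holds plain binary data readable without memmap
b = np.fromfile(path, dtype=float)
print(b[10], b[30])  # 10.0 30.0
```

Because the data buffer lives in the file, only the pages actually touched are ever read into memory, which is the resource saving the paragraph above refers to.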
- -Memory-mapped-file arrays have one additional method (besides those -they inherit from the ndarray): :meth:`memmap.flush`, which -must be called manually by the user to ensure that any changes to the -array actually get written to disk. - -.. note:: - - Memory-mapped arrays use the Python memory-map object which - (prior to Python 2.5) does not allow files to be larger than a - certain size depending on the platform. This size is always - < 2GB even on 64-bit systems. - -.. autosummary:: - :toctree: generated/ - - memmap - memmap.flush - -Example: - ->>> a = memmap('newfile.dat', dtype=float, mode='w+', shape=1000) ->>> a[10] = 10.0 ->>> a[30] = 30.0 ->>> del a ->>> b = fromfile('newfile.dat', dtype=float) ->>> print b[10], b[30] -10.0 30.0 ->>> a = memmap('newfile.dat', dtype=float) ->>> print a[10], a[30] -10.0 30.0 - - -Character arrays (:mod:`numpy.char`) -==================================== - -.. seealso:: :ref:`routines.array-creation.char` - -.. index:: - single: character arrays - -.. note:: - The `chararray` class exists for backwards compatibility with - Numarray; it is not recommended for new development. Starting from numpy - 1.4, if one needs arrays of strings, it is recommended to use arrays of - `dtype` `object_`, `string_` or `unicode_`, and use the free functions - in the `numpy.char` module for fast vectorized string operations. - -These are enhanced arrays of either :class:`string_` type or -:class:`unicode_` type. These arrays inherit from the -:class:`ndarray`, but specially define the operations ``+``, ``*``, -and ``%`` on a (broadcasting) element-by-element basis. These -operations are not available on the standard :class:`ndarray` of -character type. In addition, the :class:`chararray` has all of the -standard :class:`string <str>` (and :class:`unicode`) methods, -executing them on an element-by-element basis.
Perhaps the easiest -way to create a chararray is to use ``self.view(chararray)`` -(:meth:`ndarray.view`) where *self* is an ndarray of str or unicode -data-type. However, a chararray can also be created using the -:class:`numpy.chararray` constructor, or via the -:func:`numpy.char.array` function: - -.. autosummary:: - :toctree: generated/ - - chararray - core.defchararray.array - -Another difference from the standard ndarray of str data-type is -that the chararray inherits the feature introduced by Numarray that -white-space at the end of any element in the array will be ignored -on item retrieval and comparison operations. - - -.. _arrays.classes.rec: - -Record arrays (:mod:`numpy.rec`) -================================ - -.. seealso:: :ref:`routines.array-creation.rec`, :ref:`routines.dtype`, - :ref:`arrays.dtypes`. - -Numpy provides the :class:`recarray` class which allows accessing the -fields of a record/structured array as attributes, and a corresponding -scalar data type object :class:`record`. - -.. currentmodule:: numpy - -.. autosummary:: - :toctree: generated/ - - recarray - record - -Masked arrays (:mod:`numpy.ma`) -=============================== - -.. seealso:: :ref:`maskedarray` - -Standard container class -======================== - -.. currentmodule:: numpy - -For backward compatibility and as a standard "container" class, the -UserArray from Numeric has been brought over to NumPy and named -:class:`numpy.lib.user_array.container`. The container class is a -Python class whose self.array attribute is an ndarray. Multiple -inheritance is probably easier with numpy.lib.user_array.container -than with the ndarray itself and so it is included by default. It is -not documented here beyond mentioning its existence because you are -encouraged to use the ndarray class directly if you can. - -.. autosummary:: - :toctree: generated/ - - numpy.lib.user_array.container - -.. index:: - single: user_array - single: container class - - -Array Iterators -=============== - -..
currentmodule:: numpy - -.. index:: - single: array iterator - -Iterators are a powerful concept for array processing. Essentially, -iterators implement a generalized for-loop. If *myiter* is an iterator -object, then the Python code:: - - for val in myiter: - ... - some code involving val - ... - -calls ``val = myiter.next()`` repeatedly until :exc:`StopIteration` is -raised by the iterator. There are several ways to iterate over an -array that may be useful: default iteration, flat iteration, and -:math:`N`-dimensional enumeration. - - -Default iteration ------------------ - -The default iterator of an ndarray object is the default Python -iterator of a sequence type. Thus, when the array object itself is -used as an iterator, the default behavior is equivalent to:: - - for i in range(arr.shape[0]): - val = arr[i] - -This default iterator selects a sub-array of dimension :math:`N-1` -from the array. This can be a useful construct for defining recursive -algorithms. To loop over the entire array requires :math:`N` for-loops. - ->>> a = arange(24).reshape(3,2,4)+10 ->>> for val in a: -... print 'item:', val -item: [[10 11 12 13] - [14 15 16 17]] -item: [[18 19 20 21] - [22 23 24 25]] -item: [[26 27 28 29] - [30 31 32 33]] - - -Flat iteration --------------- - -.. autosummary:: - :toctree: generated/ - - ndarray.flat - -As mentioned previously, the flat attribute of ndarray objects returns -an iterator that will cycle over the entire array in C-style -contiguous order. - ->>> for i, val in enumerate(a.flat): -... if i%5 == 0: print i, val -0 10 -5 15 -10 20 -15 25 -20 30 - -Here, I've used the built-in enumerate iterator to return the iterator -index as well as the value. - - -N-dimensional enumeration -------------------------- - -.. autosummary:: - :toctree: generated/ - - ndenumerate - -Sometimes it may be useful to get the N-dimensional index while -iterating. The ndenumerate iterator can achieve this. - ->>> for i, val in ndenumerate(a): -...
if sum(i)%5 == 0: print i, val -(0, 0, 0) 10 -(1, 1, 3) 25 -(2, 0, 3) 29 -(2, 1, 2) 32 - - -Iterator for broadcasting -------------------------- - -.. autosummary:: - :toctree: generated/ - - broadcast - -The general concept of broadcasting is also available from Python -using the :class:`broadcast` iterator. This object takes :math:`N` -objects as inputs and returns an iterator that returns tuples -providing each of the input sequence elements in the broadcasted -result. - ->>> for val in broadcast([[1,0],[2,3]],[0,1]): -... print val -(1, 0) -(0, 1) -(2, 0) -(3, 1) diff --git a/pythonPackages/numpy/doc/source/reference/arrays.dtypes.rst b/pythonPackages/numpy/doc/source/reference/arrays.dtypes.rst deleted file mode 100755 index c1b09f6095..0000000000 --- a/pythonPackages/numpy/doc/source/reference/arrays.dtypes.rst +++ /dev/null @@ -1,512 +0,0 @@ -.. currentmodule:: numpy - -.. _arrays.dtypes: - -********************************** -Data type objects (:class:`dtype`) -********************************** - -A data type object (an instance of :class:`numpy.dtype` class) -describes how the bytes in the fixed-size block of memory -corresponding to an array item should be interpreted. It describes the -following aspects of the data: - -1. Type of the data (integer, float, Python object, etc.) -2. Size of the data (how many bytes is in *e.g.* the integer) -3. Byte order of the data (:term:`little-endian` or :term:`big-endian`) -4. If the data type is a :term:`record`, an aggregate of other - data types, (*e.g.*, describing an array item consisting of - an integer and a float), - - 1. what are the names of the ":term:`fields `" of the record, - by which they can be :ref:`accessed `, - 2. what is the data-type of each :term:`field`, and - 3. which part of the memory block each field takes. - -5. If the data is a sub-array, what is its shape and data type. - -.. 
index:: - pair: dtype; scalar - -To describe the type of scalar data, there are several :ref:`built-in -scalar types ` in Numpy for various precision -of integers, floating-point numbers, *etc*. An item extracted from an -array, *e.g.*, by indexing, will be a Python object whose type is the -scalar type associated with the data type of the array. - -Note that the scalar types are not :class:`dtype` objects, even though -they can be used in place of one whenever a data type specification is -needed in Numpy. - -.. index:: - pair: dtype; field - pair: dtype; record - -Record data types are formed by creating a data type whose -:term:`fields` contain other data types. Each field has a name by -which it can be :ref:`accessed `. The parent data -type should be of sufficient size to contain all its fields; the -parent can for example be based on the :class:`void` type which allows -an arbitrary item size. Record data types may also contain other record -types and fixed-size sub-array data types in their fields. - -.. index:: - pair: dtype; sub-array - -Finally, a data type can describe items that are themselves arrays of -items of another data type. These sub-arrays must, however, be of a -fixed size. If an array is created using a data-type describing a -sub-array, the dimensions of the sub-array are appended to the shape -of the array when the array is created. Sub-arrays in a field of a -record behave differently, see :ref:`arrays.indexing.rec`. - -.. admonition:: Example - - A simple data type containing a 32-bit big-endian integer: - (see :ref:`arrays.dtypes.constructing` for details on construction) - - >>> dt = np.dtype('>i4') - >>> dt.byteorder - '>' - >>> dt.itemsize - 4 - >>> dt.name - 'int32' - >>> dt.type is np.int32 - True - - The corresponding array scalar type is :class:`int32`. - -.. 
admonition:: Example - - A record data type containing a 16-character string (in field 'name') - and a sub-array of two 64-bit floating-point numbers (in field 'grades'): - - >>> dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))]) - >>> dt['name'] - dtype('|S16') - >>> dt['grades'] - dtype(('float64',(2,))) - - Items of an array of this data type are wrapped in an :ref:`array - scalar <arrays.scalars>` type that also has two fields: - - >>> x = np.array([('Sarah', (8.0, 7.0)), ('John', (6.0, 7.0))], dtype=dt) - >>> x[1] - ('John', [6.0, 7.0]) - >>> x[1]['grades'] - array([ 6., 7.]) - >>> type(x[1]) - <type 'numpy.void'> - >>> type(x[1]['grades']) - <type 'numpy.ndarray'> - -.. _arrays.dtypes.constructing: - -Specifying and constructing data types -====================================== - -Whenever a data-type is required in a NumPy function or method, either -a :class:`dtype` object or something that can be converted to one can -be supplied. Such conversions are done by the :class:`dtype` -constructor: - -.. autosummary:: - :toctree: generated/ - - dtype - -What can be converted to a data-type object is described below: - -:class:`dtype` object - - .. index:: - triple: dtype; construction; from dtype - - Used as-is. - -:const:`None` - - .. index:: - triple: dtype; construction; from None - - The default data type: :class:`float_`. - -.. index:: - triple: dtype; construction; from type - -Array-scalar types - - The 21 built-in :ref:`array scalar type objects - <arrays.scalars.built-in>` all convert to an associated data-type object. - This is true for their sub-classes as well. - - Note that not all data-type information can be supplied with a - type-object: for example, :term:`flexible` data-types have - a default *itemsize* of 0, and require an explicitly given size - to be useful. - - ..
admonition:: Example - - >>> dt = np.dtype(np.int32) # 32-bit integer - >>> dt = np.dtype(np.complex128) # 128-bit complex floating-point number - -Generic types - - The generic hierarchical type objects convert to corresponding - type objects according to the associations: - - ===================================================== =============== - :class:`number`, :class:`inexact`, :class:`floating` :class:`float` - :class:`complexfloating` :class:`cfloat` - :class:`integer`, :class:`signedinteger` :class:`int\_` - :class:`unsignedinteger` :class:`uint` - :class:`character` :class:`string` - :class:`generic`, :class:`flexible` :class:`void` - ===================================================== =============== - -Built-in Python types - - Several python types are equivalent to a corresponding - array scalar when used to generate a :class:`dtype` object: - - ================ =============== - :class:`int` :class:`int\_` - :class:`bool` :class:`bool\_` - :class:`float` :class:`float\_` - :class:`complex` :class:`cfloat` - :class:`str` :class:`string` - :class:`unicode` :class:`unicode\_` - :class:`buffer` :class:`void` - (all others) :class:`object_` - ================ =============== - - .. admonition:: Example - - >>> dt = np.dtype(float) # Python-compatible floating-point number - >>> dt = np.dtype(int) # Python-compatible integer - >>> dt = np.dtype(object) # Python object - -Types with ``.dtype`` - - Any type object with a ``dtype`` attribute: The attribute will be - accessed and used directly. The attribute must return something - that is convertible into a dtype object. - -.. index:: - triple: dtype; construction; from string - -Several kinds of strings can be converted. Recognized strings can be -prepended with ``'>'`` (:term:`big-endian`), ``'<'`` -(:term:`little-endian`), or ``'='`` (hardware-native, the default), to -specify the byte order. 
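The effect of the ``'>'``/``'<'`` prefixes can be made concrete with a short sketch (an editor's addition in modern Python 3 syntax): the same four bytes decode to different values depending on which byte order the dtype declares.

```python
import numpy as np

big    = np.dtype('>i4')   # big-endian 32-bit signed integer
little = np.dtype('<i4')   # little-endian 32-bit signed integer

# Same kind and item size -- only the byte layout differs
print(big.kind, big.itemsize)               # i 4
print(big == little)                        # False

raw = b'\x00\x00\x00\x01'
print(np.frombuffer(raw, dtype=big)[0])     # 1
print(np.frombuffer(raw, dtype=little)[0])  # 16777216
```

Under the big-endian reading the final byte is least significant, giving 1; under the little-endian reading it is most significant, giving 2**24.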
- -One-character strings - - Each built-in data-type has a character code - (the updated Numeric typecodes) that uniquely identifies it. - - .. admonition:: Example - - >>> dt = np.dtype('b') # byte, native byte order - >>> dt = np.dtype('>H') # big-endian unsigned short - >>> dt = np.dtype('<f') # little-endian single-precision float - >>> dt = np.dtype('d') # double-precision floating-point number - -Array-protocol type strings (see :ref:`arrays.interface`) - - The first character specifies the kind of data and the remaining - characters specify how many bytes of data. The supported kinds are - - ================ ======================== - ``'b'`` Boolean - ``'i'`` (signed) integer - ``'u'`` unsigned integer - ``'f'`` floating-point - ``'c'`` complex-floating point - ``'S'``, ``'a'`` string - ``'U'`` unicode - ``'V'`` anything (:class:`void`) - ================ ======================== - - .. admonition:: Example - - >>> dt = np.dtype('i4') # 32-bit signed integer - >>> dt = np.dtype('f8') # 64-bit floating-point number - >>> dt = np.dtype('c16') # 128-bit complex floating-point number - >>> dt = np.dtype('a25') # 25-character string - -String with comma-separated fields - - Numarray introduced a short-hand notation for specifying the format - of a record as a comma-separated string of basic formats. - - A basic format in this context is an optional shape specifier - followed by an array-protocol type string. Parentheses are required - on the shape if it is greater than 1-d. NumPy allows a modification - on the format in that any string that can uniquely identify the - type can be used to specify the data-type in a field. - The generated data-type fields are named ``'f0'``, ``'f1'``, ..., - ``'f<N-1>'`` where N (>1) is the number of comma-separated basic - formats in the string. If the optional shape specifier is provided, - then the data-type for the corresponding field describes a sub-array. - - ..
admonition:: Example - - - field named ``f0`` containing a 32-bit integer - - field named ``f1`` containing a 2 x 3 sub-array - of 64-bit floating-point numbers - - field named ``f2`` containing a 32-bit floating-point number - - >>> dt = np.dtype("i4, (2,3)f8, f4") - - - field named ``f0`` containing a 3-character string - - field named ``f1`` containing a sub-array of shape (3,) - containing 64-bit unsigned integers - - field named ``f2`` containing a 3 x 4 sub-array - containing 10-character strings - - >>> dt = np.dtype("a3, 3u8, (3,4)a10") - -Type strings - - Any string in :obj:`numpy.sctypeDict`.keys(): - - .. admonition:: Example - - >>> dt = np.dtype('uint32') # 32-bit unsigned integer - >>> dt = np.dtype('Float64') # 64-bit floating-point number - -.. index:: - triple: dtype; construction; from tuple - -``(flexible_dtype, itemsize)`` - - The first argument must be an object that is converted to a - flexible data-type object (one whose element size is 0), the - second argument is an integer providing the desired itemsize. - - .. admonition:: Example - - >>> dt = np.dtype((void, 10)) # 10-byte wide data block - >>> dt = np.dtype((str, 35)) # 35-character string - >>> dt = np.dtype(('U', 10)) # 10-character unicode string - -``(fixed_dtype, shape)`` - - .. index:: - pair: dtype; sub-array - - The first argument is any object that can be converted into a - fixed-size data-type object. The second argument is the desired - shape of this type. If the shape parameter is 1, then the - data-type object is equivalent to fixed dtype. If *shape* is a - tuple, then the new dtype defines a sub-array of the given shape. - - .. admonition:: Example - - >>> dt = np.dtype((np.int32, (2,2))) # 2 x 2 integer sub-array - >>> dt = np.dtype(('S10', 1)) # 10-character string - >>> dt = np.dtype(('i4, (2,3)f8, f4', (2,3))) # 2 x 3 record sub-array - -``(base_dtype, new_dtype)`` - - Both arguments must be convertible to data-type objects in this - case. 
The *base_dtype* is the data-type object that the new - data-type builds on. This is how you could assign named fields to - any built-in data-type object. - - .. admonition:: Example - - 32-bit integer, whose first two bytes are interpreted as an integer - via field ``real``, and the following two bytes via field ``imag``. - - >>> dt = np.dtype((np.int32,{'real':(np.int16, 0),'imag':(np.int16, 2)})) - - 32-bit integer, which is interpreted as consisting of a sub-array - of shape ``(4,)`` containing 8-bit integers: - - >>> dt = np.dtype((np.int32, (np.int8, 4))) - - 32-bit integer, containing fields ``r``, ``g``, ``b``, ``a`` that - interpret the 4 bytes in the integer as four unsigned integers: - - >>> dt = np.dtype(('i4', [('r','u1'),('g','u1'),('b','u1'),('a','u1')])) - -.. index:: - triple: dtype; construction; from list - -``[(field_name, field_dtype, field_shape), ...]`` - - *obj* should be a list of fields where each field is described by a - tuple of length 2 or 3. (Equivalent to the ``descr`` item in the - :obj:`__array_interface__` attribute.) - - The first element, *field_name*, is the field name (if this is - ``''`` then a standard field name, ``'f#'``, is assigned). The - field name may also be a 2-tuple of strings where the first string - is either a "title" (which may be any string or unicode string) or - meta-data for the field which can be any object, and the second - string is the "name" which must be a valid Python identifier. - - The second element, *field_dtype*, can be anything that can be - interpreted as a data-type. - - The optional third element *field_shape* contains the shape if this - field represents an array of the data-type in the second - element. Note that a 3-tuple with a third argument equal to 1 is - equivalent to a 2-tuple. - - This style does not accept *align* in the :class:`dtype` - constructor as it is assumed that all of the memory is accounted - for by the array interface description. - - .. 
admonition:: Example - - Data-type with fields ``big`` (big-endian 32-bit integer) and - ``little`` (little-endian 32-bit integer): - - >>> dt = np.dtype([('big', '>i4'), ('little', '<i4')]) - - Data-type with fields ``R``, ``G``, ``B``, ``A``, each being an - 8-bit unsigned integer: - - >>> dt = np.dtype([('R','u1'), ('G','u1'), ('B','u1'), ('A','u1')]) - -.. index:: - triple: dtype; construction; from dict - -``{'names': ..., 'formats': ..., 'offsets': ..., 'titles': ...}`` - - This style has two required and two optional keys. The *names* - and *formats* keys are required. Their respective values are - equal-length lists with the field names and the field formats. - The field names must be strings and the field formats can be any - object accepted by the :class:`dtype` constructor. - - The optional keys in the dictionary are *offsets* and *titles* and - their values must each be lists of the same length as the *names* - and *formats* lists. The *offsets* value is a list of byte offsets - (integers) for each field, while the *titles* value is a list of - titles for each field (:const:`None` can be used if no title is - desired for that field). The *titles* can be any :class:`string` - or :class:`unicode` object and will add another entry to the - fields dictionary keyed by the title and referencing the same - field tuple which will contain the title as an additional tuple - member. - - .. admonition:: Example - - Data type with fields ``r``, ``g``, ``b``, ``a``, each being - an 8-bit unsigned integer: - - >>> dt = np.dtype({'names': ['r','g','b','a'], - ... 'formats': [uint8, uint8, uint8, uint8]}) - - Data type with fields ``r`` and ``b`` (with the given titles), - both being 8-bit unsigned integers, the first at byte position - 0 from the start of the field and the second at position 2: - - >>> dt = np.dtype({'names': ['r','b'], 'formats': ['u1', 'u1'], - ... 'offsets': [0, 2], - ... 'titles': ['Red pixel', 'Blue pixel']}) - - -``{'field1': ..., 'field2': ..., ...}`` - - This style allows passing in the :attr:`fields <dtype.fields>` - attribute of a data-type object. 
- - *obj* should contain string or unicode keys that refer to - ``(data-type, offset)`` or ``(data-type, offset, title)`` tuples. - - .. admonition:: Example - - Data type containing field ``col1`` (10-character string at - byte position 0), ``col2`` (32-bit float at byte position 10), - and ``col3`` (integers at byte position 14): - - >>> dt = np.dtype({'col1': ('S10', 0), 'col2': (float32, 10), - 'col3': (int, 14)}) - - -:class:`dtype` -============== - -Numpy data type descriptions are instances of the :class:`dtype` class. - -Attributes ----------- - -The type of the data is described by the following :class:`dtype` attributes: - -.. autosummary:: - :toctree: generated/ - - dtype.type - dtype.kind - dtype.char - dtype.num - dtype.str - -Size of the data is in turn described by: - -.. autosummary:: - :toctree: generated/ - - dtype.name - dtype.itemsize - -Endianness of this data: - -.. autosummary:: - :toctree: generated/ - - dtype.byteorder - -Information about sub-data-types in a :term:`record`: - -.. autosummary:: - :toctree: generated/ - - dtype.fields - dtype.names - -For data types that describe sub-arrays: - -.. autosummary:: - :toctree: generated/ - - dtype.subdtype - dtype.shape - -Attributes providing additional information: - -.. autosummary:: - :toctree: generated/ - - dtype.hasobject - dtype.flags - dtype.isbuiltin - dtype.isnative - dtype.descr - dtype.alignment - - -Methods -------- - -Data types have the following method for changing the byte order: - -.. autosummary:: - :toctree: generated/ - - dtype.newbyteorder - -The following methods implement the pickle protocol: - -.. autosummary:: - :toctree: generated/ - - dtype.__reduce__ - dtype.__setstate__ diff --git a/pythonPackages/numpy/doc/source/reference/arrays.indexing.rst b/pythonPackages/numpy/doc/source/reference/arrays.indexing.rst deleted file mode 100755 index 8da4ecca7d..0000000000 --- a/pythonPackages/numpy/doc/source/reference/arrays.indexing.rst +++ /dev/null @@ -1,368 +0,0 @@ -.. 
_arrays.indexing: - -Indexing -======== - -.. sectionauthor:: adapted from "Guide to Numpy" by Travis E. Oliphant - -.. currentmodule:: numpy - -.. index:: indexing, slicing - -:class:`ndarrays <ndarray>` can be indexed using the standard Python -``x[obj]`` syntax, where *x* is the array and *obj* the selection. -There are three kinds of indexing available: record access, basic -slicing, and advanced indexing. Which one occurs depends on *obj*. - -.. note:: - - In Python, ``x[(exp1, exp2, ..., expN)]`` is equivalent to - ``x[exp1, exp2, ..., expN]``; the latter is just syntactic sugar - for the former. - - -Basic Slicing -------------- - -Basic slicing extends Python's basic concept of slicing to N -dimensions. Basic slicing occurs when *obj* is a :class:`slice` object -(constructed by ``start:stop:step`` notation inside of brackets), an -integer, or a tuple of slice objects and integers. :const:`Ellipsis` -and :const:`newaxis` objects can be interspersed with these as -well. In order to remain backward compatible with a common usage in -Numeric, basic slicing is also initiated if the selection object is -any sequence (such as a :class:`list`) containing :class:`slice` -objects, the :const:`Ellipsis` object, or the :const:`newaxis` object, -but no integer arrays or other embedded sequences. - -.. index:: - triple: ndarray; special methods; getslice - triple: ndarray; special methods; setslice - single: ellipsis - single: newaxis - -The simplest case of indexing with *N* integers returns an :ref:`array -scalar <arrays.scalars>` representing the corresponding item. As in -Python, all indices are zero-based: for the *i*-th index :math:`n_i`, -the valid range is :math:`0 \le n_i < d_i` where :math:`d_i` is the -*i*-th element of the shape of the array. Negative indices are -interpreted as counting from the end of the array (*i.e.*, if -:math:`n_i < 0`, it means :math:`n_i + d_i`). - - -All arrays generated by basic slicing are always :term:`views <view>` -of the original array. 
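Because the view behavior of basic slicing is a frequent source of confusion, a small sketch may help (variable names here are illustrative, not from the original text): writing through a basic slice is visible in the array it was taken from, and the view records its originating array in ``base``.

```python
import numpy as np

x = np.arange(10)
y = x[2:7]           # basic slice: a view, not a copy
y[0] = 99            # writing through the view...
print(x[2])          # ...changes the original array: 99
print(y.base is x)   # the view keeps a reference to its base: True
```

By contrast, advanced indexing (discussed later in this section) would return a copy, and the same assignment would leave ``x`` untouched.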
- -The standard rules of sequence slicing apply to basic slicing on a -per-dimension basis (including using a step index). Some useful -concepts to remember include: - -- The basic slice syntax is ``i:j:k`` where *i* is the starting index, - *j* is the stopping index, and *k* is the step (:math:`k\neq0`). - This selects the *m* elements (in the corresponding dimension) with - index values *i*, *i + k*, ..., *i + (m - 1) k* where - :math:`m = q + (r\neq0)` and *q* and *r* are the quotient and remainder - obtained by dividing *j - i* by *k*: *j - i = q k + r*, so that - *i + (m - 1) k < j*. - - .. admonition:: Example - - >>> x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - >>> x[1:7:2] - array([1, 3, 5]) - -- Negative *i* and *j* are interpreted as *n + i* and *n + j* where - *n* is the number of elements in the corresponding dimension. - Negative *k* makes stepping go towards smaller indices. - - .. admonition:: Example - - >>> x[-2:10] - array([8, 9]) - >>> x[-3:3:-1] - array([7, 6, 5, 4]) - -- Assume *n* is the number of elements in the dimension being - sliced. Then, if *i* is not given it defaults to 0 for *k > 0* and - *n* for *k < 0* . If *j* is not given it defaults to *n* for *k > 0* - and -1 for *k < 0* . If *k* is not given it defaults to 1. Note that - ``::`` is the same as ``:`` and means select all indices along this - axis. - - .. admonition:: Example - - >>> x[5:] - array([5, 6, 7, 8, 9]) - -- If the number of objects in the selection tuple is less than - *N* , then ``:`` is assumed for any subsequent dimensions. - - .. admonition:: Example - - >>> x = np.array([[[1],[2],[3]], [[4],[5],[6]]]) - >>> x.shape - (2, 3, 1) - >>> x[1:2] - array([[[4], - [5], - [6]]]) - -- :const:`Ellipsis` expand to the number of ``:`` objects needed to - make a selection tuple of the same length as ``x.ndim``. Only the - first ellipsis is expanded, any others are interpreted as ``:``. - - .. 
admonition:: Example - - >>> x[...,0] - array([[1, 2, 3], - [4, 5, 6]]) - -- Each :const:`newaxis` object in the selection tuple serves to expand - the dimensions of the resulting selection by one unit-length - dimension. The added dimension is the position of the :const:`newaxis` - object in the selection tuple. - - .. admonition:: Example - - >>> x[:,np.newaxis,:,:].shape - (2, 1, 3, 1) - -- An integer, *i*, returns the same values as ``i:i+1`` - **except** the dimensionality of the returned object is reduced by - 1. In particular, a selection tuple with the *p*-th - element an integer (and all other entries ``:``) returns the - corresponding sub-array with dimension *N - 1*. If *N = 1* - then the returned object is an array scalar. These objects are - explained in :ref:`arrays.scalars`. - -- If the selection tuple has all entries ``:`` except the - *p*-th entry which is a slice object ``i:j:k``, - then the returned array has dimension *N* formed by - concatenating the sub-arrays returned by integer indexing of - elements *i*, *i+k*, ..., *i + (m - 1) k < j*, - -- Basic slicing with more than one non-``:`` entry in the slicing - tuple, acts like repeated application of slicing using a single - non-``:`` entry, where the non-``:`` entries are successively taken - (with all other non-``:`` entries replaced by ``:``). Thus, - ``x[ind1,...,ind2,:]`` acts like ``x[ind1][...,ind2,:]`` under basic - slicing. - - .. warning:: The above is **not** true for advanced slicing. - -- You may use slicing to set values in the array, but (unlike lists) you - can never grow the array. The size of the value to be set in - ``x[obj] = value`` must be (broadcastable) to the same shape as - ``x[obj]``. - -.. index:: - pair: ndarray; view - -.. note:: - - Remember that a slicing tuple can always be constructed as *obj* - and used in the ``x[obj]`` notation. Slice objects can be used in - the construction in place of the ``[start:stop:step]`` - notation. 
For example, ``x[1:10:5,::-1]`` can also be implemented - as ``obj = (slice(1,10,5), slice(None,None,-1)); x[obj]`` . This - can be useful for constructing generic code that works on arrays - of arbitrary dimension. - -.. data:: newaxis - - The :const:`newaxis` object can be used in the basic slicing syntax - discussed above. :const:`None` can also be used instead of - :const:`newaxis`. - - -Advanced indexing ------------------ - -Advanced indexing is triggered when the selection object, *obj*, is a -non-tuple sequence object, an :class:`ndarray` (of data type integer or bool), -or a tuple with at least one sequence object or ndarray (of data type -integer or bool). There are two types of advanced indexing: integer -and Boolean. - -Advanced indexing always returns a *copy* of the data (contrast with -basic slicing that returns a :term:`view`). - -Integer -^^^^^^^ - -Integer indexing allows selection of arbitrary items in the array -based on their *N*-dimensional index. This kind of selection occurs -when advanced indexing is triggered and the selection object is not -an array of data type bool. For the discussion below, when the -selection object is not a tuple, it will be referred to as if it had -been promoted to a 1-tuple, which will be called the selection -tuple. The rules of advanced integer-style indexing are: - -- If the length of the selection tuple is larger than *N* an error is raised. - -- All sequences and scalars in the selection tuple are converted to - :class:`intp` indexing arrays. - -- All selection tuple objects must be convertible to :class:`intp` - arrays, :class:`slice` objects, or the :const:`Ellipsis` object. - -- The first :const:`Ellipsis` object will be expanded, and any other - :const:`Ellipsis` objects will be treated as full slice (``:``) - objects. The expanded :const:`Ellipsis` object is replaced with as - many full slice (``:``) objects as needed to make the length of the - selection tuple :math:`N`. 
- -- If the selection tuple is smaller than *N*, then as many ``:`` - objects as needed are added to the end of the selection tuple so - that the modified selection tuple has length *N*. - -- All the integer indexing arrays must be :ref:`broadcastable -<ufuncs.broadcasting>` to the same shape. - -- The shape of the output (or the needed shape of the object to be used - for setting) is the broadcasted shape. - -- After expanding any ellipses and filling out any missing ``:`` - objects in the selection tuple, then let :math:`N_t` be the number - of indexing arrays, and let :math:`N_s = N - N_t` be the number of - slice objects. Note that :math:`N_t > 0` (or we wouldn't be doing - advanced integer indexing). - -- If :math:`N_s = 0` then the *M*-dimensional result is constructed by - varying the index tuple ``(i_1, ..., i_M)`` over the range - of the result shape and for each value of the index tuple - ``(ind_1, ..., ind_M)``:: - - result[i_1, ..., i_M] == x[ind_1[i_1, ..., i_M], ind_2[i_1, ..., i_M], - ..., ind_N[i_1, ..., i_M]] - - .. admonition:: Example - - Suppose the shape of the broadcasted indexing arrays is 3-dimensional - and *N* is 2. Then the result is found by letting *i, j, k* run over - the shape found by broadcasting ``ind_1`` and ``ind_2``, and each - *i, j, k* yields:: - - result[i,j,k] = x[ind_1[i,j,k], ind_2[i,j,k]] - -- If :math:`N_s > 0`, then partial indexing is done. This can be - somewhat mind-boggling to understand, but if you think in terms of - the shapes of the arrays involved, it can be easier to grasp what - happens. In simple cases (*i.e.* one indexing array and *N - 1* slice - objects) it does exactly what you would expect (concatenation of - repeated application of basic slicing). The rule for partial - indexing is that the shape of the result (or the interpreted shape - of the object to be used in setting) is the shape of *x* with the - indexed subspace replaced with the broadcasted indexing subspace. 
If - the index subspaces are right next to each other, then the - broadcasted indexing space directly replaces all of the indexed - subspaces in *x*. If the indexing subspaces are separated (by slice - objects), then the broadcasted indexing space is first, followed by - the sliced subspace of *x*. - - .. admonition:: Example - - Suppose ``x.shape`` is (10,20,30) and ``ind`` is a (2,3,4)-shaped - indexing :class:`intp` array, then ``result = x[...,ind,:]`` has - shape (10,2,3,4,30) because the (20,)-shaped subspace has been - replaced with a (2,3,4)-shaped broadcasted indexing subspace. If - we let *i, j, k* loop over the (2,3,4)-shaped subspace then - ``result[...,i,j,k,:] = x[...,ind[i,j,k],:]``. This example - produces the same result as :meth:`x.take(ind, axis=-2) <ndarray.take>`. - - .. admonition:: Example - - Now let ``x.shape`` be (10,20,30,40,50) and suppose ``ind_1`` - and ``ind_2`` are broadcastable to the shape (2,3,4). Then - ``x[:,ind_1,ind_2]`` has shape (10,2,3,4,40,50) because the - (20,30)-shaped subspace from *x* has been replaced with the - (2,3,4) subspace from the indices. However, - ``x[:,ind_1,:,ind_2]`` has shape (2,3,4,10,30,50) because there - is no unambiguous place to drop in the indexing subspace, thus - it is tacked on to the beginning. It is always possible to use - :meth:`.transpose() <ndarray.transpose>` to move the subspace - anywhere desired. (Note that this example cannot be replicated - using :func:`take`.) - - -Boolean -^^^^^^^ - -This advanced indexing occurs when *obj* is an array object of Boolean -type (such as may be returned from comparison operators). It is always -equivalent to (but faster than) ``x[obj.nonzero()]`` where, as -described above, :meth:`obj.nonzero() <ndarray.nonzero>` returns a -tuple (of length :attr:`obj.ndim <ndarray.ndim>`) of integer index -arrays showing the :const:`True` elements of *obj*. - -The special case when ``obj.ndim == x.ndim`` is worth mentioning. 
In -this case ``x[obj]`` returns a 1-dimensional array filled with the -elements of *x* corresponding to the :const:`True` values of *obj*. -The search order will be C-style (last index varies the fastest). If -*obj* has :const:`True` values at entries that are outside of the -bounds of *x*, then an index error will be raised. - -You can also use Boolean arrays as elements of the selection tuple. In -such instances, they will always be interpreted as :meth:`nonzero(obj) -<ndarray.nonzero>` and the equivalent integer indexing will be -done. - -.. warning:: - - The definition of advanced indexing means that ``x[(1,2,3),]`` is - fundamentally different than ``x[(1,2,3)]``. The latter is - equivalent to ``x[1,2,3]`` which will trigger basic selection while - the former will trigger advanced indexing. Be sure to understand - why this occurs. - - Also recognize that ``x[[1,2,3]]`` will trigger advanced indexing, - whereas ``x[[1,2,slice(None)]]`` will trigger basic slicing. - -.. _arrays.indexing.rec: - -Record Access ------------- - -.. seealso:: :ref:`arrays.dtypes`, :ref:`arrays.scalars` - -If the :class:`ndarray` object is a record array, *i.e.* its data type -is a :term:`record` data type, the :term:`fields <field>` of the array -can be accessed by indexing the array with strings, dictionary-like. - -Indexing ``x['field-name']`` returns a new :term:`view` to the array, -which is of the same shape as *x* (except when the field is a -sub-array) but of data type ``x.dtype['field-name']`` and contains -only the part of the data in the specified field. Also record array -scalars can be "indexed" this way. - -If the accessed field is a sub-array, the dimensions of the sub-array -are appended to the shape of the result. - -.. 
admonition:: Example - - >>> x = np.zeros((2,2), dtype=[('a', np.int32), ('b', np.float64, (3,3))]) - >>> x['a'].shape - (2, 2) - >>> x['a'].dtype - dtype('int32') - >>> x['b'].shape - (2, 2, 3, 3) - >>> x['b'].dtype - dtype('float64') - - -Flat Iterator indexing ----------------------- - -:attr:`x.flat <ndarray.flat>` returns an iterator that will iterate -over the entire array (in C-contiguous style with the last index -varying the fastest). This iterator object can also be indexed using -basic slicing or advanced indexing as long as the selection object is -not a tuple. This should be clear from the fact that :attr:`x.flat -<ndarray.flat>` is a 1-dimensional view. It can be used for integer -indexing with 1-dimensional C-style-flat indices. The shape of any -returned array is therefore the shape of the integer indexing object. - -.. index:: - single: indexing - single: ndarray diff --git a/pythonPackages/numpy/doc/source/reference/arrays.interface.rst b/pythonPackages/numpy/doc/source/reference/arrays.interface.rst deleted file mode 100755 index 87ba15a9f2..0000000000 --- a/pythonPackages/numpy/doc/source/reference/arrays.interface.rst +++ /dev/null @@ -1,336 +0,0 @@ -.. index:: - pair: array; interface - pair: array; protocol - -.. _arrays.interface: - -******************* -The Array Interface -******************* - -.. note:: - - This page describes the numpy-specific API for accessing the contents of - a numpy array from other C extensions. :pep:`3118` -- - :cfunc:`The Revised Buffer Protocol <PyObject_GetBuffer>` introduces - similar, standardized API to Python 2.6 and 3.0 for any extension - module to use. Cython__'s buffer array support - uses the :pep:`3118` API; see the `Cython numpy - tutorial`__. Cython provides a way to write code that supports the buffer - protocol with Python versions older than 2.6 because it has a - backward-compatible implementation utilizing the legacy array interface - described here. 
- -__ http://cython.org/ -__ http://wiki.cython.org/tutorials/numpy - -:version: 3 - -The array interface (sometimes called array protocol) was created in -2005 as a means for array-like Python objects to re-use each other's -data buffers intelligently whenever possible. The homogeneous -N-dimensional array interface is a default mechanism for objects to -share N-dimensional array memory and information. The interface -consists of a Python-side and a C-side using two attributes. Objects -wishing to be considered an N-dimensional array in application code -should support at least one of these attributes. Objects wishing to -support an N-dimensional array in application code should look for at -least one of these attributes and use the information provided -appropriately. - -This interface describes homogeneous arrays in the sense that each -item of the array has the same "type". This type can be very simple -or it can be a quite arbitrary and complicated C-like structure. - -There are two ways to use the interface: A Python side and a C-side. -Both are separate attributes. - -Python side -=========== - -This approach to the interface consists of the object having an -:data:`__array_interface__` attribute. - -.. data:: __array_interface__ - - A dictionary of items (3 required and 5 optional). The optional - keys in the dictionary have implied defaults if they are not - provided. - - The keys are: - - **shape** (required) - - Tuple whose elements are the array size in each dimension. Each - entry is an integer (a Python int or long). Note that these - integers could be larger than the platform "int" or "long" - could hold (a Python int is a C long). It is up to the code - using this attribute to handle this appropriately; either by - raising an error when overflow is possible, or by using - :cdata:`Py_LONG_LONG` as the C type for the shapes. 
- - **typestr** (required) - - A string providing the basic type of the homogeneous array. The - basic string format consists of 3 parts: a character describing - the byteorder of the data (``<``: little-endian, ``>``: - big-endian, ``|``: not-relevant), a character code giving the - basic type of the array, and an integer providing the number of - bytes the type uses. - - The basic type character codes are: - - ===== ================================================================ - ``t`` Bit field (following integer gives the number of - bits in the bit field). - ``b`` Boolean (integer type where all values are only True or False) - ``i`` Integer - ``u`` Unsigned integer - ``f`` Floating point - ``c`` Complex floating point - ``O`` Object (i.e. the memory contains a pointer to :ctype:`PyObject`) - ``S`` String (fixed-length sequence of char) - ``U`` Unicode (fixed-length sequence of :ctype:`Py_UNICODE`) - ``V`` Other (void \* -- each item is a fixed-size chunk of memory) - ===== ================================================================ - - **descr** (optional) - - A list of tuples providing a more detailed description of the - memory layout for each item in the homogeneous array. Each - tuple in the list has two or three elements. Normally, this - attribute would be used when *typestr* is ``V[0-9]+``, but this is - not a requirement. The only requirement is that the number of - bytes represented in the *typestr* key is the same as the total - number of bytes represented here. The idea is to support - descriptions of C-like structs (records) that make up array - elements. The elements of each tuple in the list are - - 1. A string providing a name associated with this portion of - the record. This could also be a tuple of ``('full name', - 'basic_name')`` where basic name would be a valid Python - variable name representing the full name of the field. - - 2. Either a basic-type description string as in *typestr* or - another list (for nested records) - - 3. 
An optional shape tuple providing how many times this part - of the record should be repeated. No repeats are assumed - if this is not given. Very complicated structures can be - described using this generic interface. Notice, however, - that each element of the array is still of the same - data-type. Some examples of using this interface are given - below. - - **Default**: ``[('', typestr)]`` - - **data** (optional) - - A 2-tuple whose first argument is an integer (a long integer - if necessary) that points to the data-area storing the array - contents. This pointer must point to the first element of - data (in other words any offset is always ignored in this - case). The second entry in the tuple is a read-only flag (true - means the data area is read-only). - - This attribute can also be an object exposing the - :cfunc:`buffer interface <PyObject_AsCharBuffer>` which - will be used to share the data. If this key is not present (or - returns :class:`None`), then memory sharing will be done - through the buffer interface of the object itself. In this - case, the offset key can be used to indicate the start of the - buffer. A reference to the object exposing the array interface - must be stored by the new object if the memory area is to be - secured. - - **Default**: :const:`None` - - **strides** (optional) - - Either :const:`None` to indicate a C-style contiguous array or - a tuple of strides which provides the number of bytes needed - to jump to the next array element in the corresponding - dimension. Each entry must be an integer (a Python - :const:`int` or :const:`long`). As with shape, the values may - be larger than can be represented by a C "int" or "long"; the - calling code should handle this appropriately, either by - raising an error, or by using :ctype:`Py_LONG_LONG` in C. The - default is :const:`None` which implies a C-style contiguous - memory buffer. In this model, the last dimension of the array - varies the fastest. 
For example, the default strides tuple - for an object whose array entries are 8 bytes long and whose - shape is (10,20,30) would be (4800, 240, 8) - - **Default**: :const:`None` (C-style contiguous) - - **mask** (optional) - - :const:`None` or an object exposing the array interface. All - elements of the mask array should be interpreted only as true - or not true indicating which elements of this array are valid. - The shape of this object should be `"broadcastable" - ` to the shape of the - original array. - - **Default**: :const:`None` (All array values are valid) - - **offset** (optional) - - An integer offset into the array data region. This can only be - used when data is :const:`None` or returns a :class:`buffer` - object. - - **Default**: 0. - - **version** (required) - - An integer showing the version of the interface (i.e. 3 for - this version). Be careful not to use this to invalidate - objects exposing future versions of the interface. - - -C-struct access -=============== - -This approach to the array interface allows for faster access to an -array using only one attribute lookup and a well-defined C-structure. - -.. cvar:: __array_struct__ - - A :ctype:`PyCObject` whose :cdata:`voidptr` member contains a - pointer to a filled :ctype:`PyArrayInterface` structure. Memory - for the structure is dynamically created and the :ctype:`PyCObject` - is also created with an appropriate destructor so the retriever of - this attribute simply has to apply :cfunc:`Py_DECREF()` to the - object returned by this attribute when it is finished. Also, - either the data needs to be copied out, or a reference to the - object exposing this attribute must be held to ensure the data is - not freed. Objects exposing the :obj:`__array_struct__` interface - must also not reallocate their memory if other objects are - referencing them. 
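Before turning to the C struct, note that the Python-side attribute described in the previous section can be exercised from pure Python. The sketch below (class name and sample data are ours, not from the original text) exposes a 3 x 4 block of unsigned bytes through ``__array_interface__``; using a buffer object for the ``data`` key, as the specification permits, sidesteps raw-pointer handling entirely:

```python
import numpy as np

class ByteBlock:
    """Expose a 3 x 4 block of unsigned bytes via the array interface."""
    def __init__(self):
        # Keep the buffer alive as an attribute so the consumer's
        # view of the memory stays valid.
        self._buf = bytearray(range(12))
        self.__array_interface__ = {
            'shape': (3, 4),
            'typestr': '|u1',    # byte-order irrelevant, unsigned 8-bit
            'data': self._buf,   # an object exposing the buffer interface
            'version': 3,
        }

a = np.asarray(ByteBlock())
print(a.shape, a.dtype)   # (3, 4) uint8
print(a[2, 3])            # last byte of the buffer: 11
```

Because ``data`` is a writable buffer shared with the producer, the resulting array is a view onto the same twelve bytes, not a copy.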
- -The PyArrayInterface structure is defined in ``numpy/ndarrayobject.h`` -as:: - - typedef struct { - int two; /* contains the integer 2 -- simple sanity check */ - int nd; /* number of dimensions */ - char typekind; /* kind in array --- character code of typestr */ - int itemsize; /* size of each element */ - int flags; /* flags indicating how the data should be interpreted */ - /* must set ARR_HAS_DESCR bit to validate descr */ - Py_intptr_t *shape; /* A length-nd array of shape information */ - Py_intptr_t *strides; /* A length-nd array of stride information */ - void *data; /* A pointer to the first element of the array */ - PyObject *descr; /* NULL or data-description (same as descr key - of __array_interface__) -- must set ARR_HAS_DESCR - flag or this will be ignored. */ - } PyArrayInterface; - -The flags member may consist of 5 bits showing how the data should be -interpreted and one bit showing how the Interface should be -interpreted. The data-bits are :const:`CONTIGUOUS` (0x1), -:const:`FORTRAN` (0x2), :const:`ALIGNED` (0x100), :const:`NOTSWAPPED` -(0x200), and :const:`WRITEABLE` (0x400). A final flag -:const:`ARR_HAS_DESCR` (0x800) indicates whether or not this structure -has the arrdescr field. The field should not be accessed unless this -flag is present. - -.. admonition:: New since June 16, 2006: - - In the past most implementations used the "desc" member of the - :ctype:`PyCObject` itself (do not confuse this with the "descr" member of - the :ctype:`PyArrayInterface` structure above --- they are two separate - things) to hold the pointer to the object exposing the interface. - This is now an explicit part of the interface. Be sure to own a - reference to the object when the :ctype:`PyCObject` is created using - :ctype:`PyCObject_FromVoidPtrAndDesc`. - - -Type description examples -========================= - -For clarity it is useful to provide some examples of the type -description and corresponding :data:`__array_interface__` 'descr' -entries. 
Thanks to Scott Gilbert for these examples: - -In every case, the 'descr' key is optional, but of course provides -more information which may be important for various applications:: - - * Float data - typestr == '>f4' - descr == [('','>f4')] - - * Complex double - typestr == '>c8' - descr == [('real','>f4'), ('imag','>f4')] - - * RGB Pixel data - typestr == '|V3' - descr == [('r','|u1'), ('g','|u1'), ('b','|u1')] - - * Mixed endian (weird but could happen). - typestr == '|V8' (or '>u8') - descr == [('big','>i4'), ('little','<i4')] - - * Nested structure - struct { - int ival; - struct { - unsigned short sval; - unsigned char bval; - unsigned char cval; - } sub; - } - typestr == '|V8' - descr == [('ival','<i4'), ('sub', [('sval','<u2'), ('bval','|u1'), ('cval','|u1')])] - - * Nested array - struct { - int ival; - double data[16*4]; - } - typestr == '|V516' - descr == [('ival','>i4'), ('data','>f8',(16,4))] - - * Padded structure - struct { - int ival; - double dval; - } - typestr == '|V16' - descr == [('ival','>i4'),('','|V4'),('dval','>f8')] - -It should be clear that any record type could be described using this interface. - -Differences with Array interface (Version 2) -============================================ - -The version 2 interface was very similar. The differences were -largely aesthetic. In particular: - -1. The PyArrayInterface structure had no descr member at the end - (and therefore no flag ARR_HAS_DESCR) - -2. The desc member of the PyCObject returned from __array_struct__ was - not specified. Usually, it was the object exposing the array (so - that a reference to it could be kept and destroyed when the - C-object was destroyed). Now it must be a tuple whose first - element is a string with "PyArrayInterface Version #" and whose - second element is the object exposing the array. - -3. The tuple returned from __array_interface__['data'] used to be a - hex-string (now it is an integer or a long integer). - -4. 
There was no __array_interface__ attribute; instead, all of the keys
-   (except for version) in the __array_interface__ dictionary were
-   their own attributes. Thus, to obtain the Python-side information you
-   had to access each of these attributes separately:
-
-     * __array_data__
-     * __array_shape__
-     * __array_strides__
-     * __array_typestr__
-     * __array_descr__
-     * __array_offset__
-     * __array_mask__
diff --git a/pythonPackages/numpy/doc/source/reference/arrays.ndarray.rst b/pythonPackages/numpy/doc/source/reference/arrays.ndarray.rst
deleted file mode 100755
index c14e6869a3..0000000000
--- a/pythonPackages/numpy/doc/source/reference/arrays.ndarray.rst
+++ /dev/null
@@ -1,567 +0,0 @@
-.. _arrays.ndarray:
-
-******************************************
-The N-dimensional array (:class:`ndarray`)
-******************************************
-
-.. currentmodule:: numpy
-
-An :class:`ndarray` is a (usually fixed-size) multidimensional
-container of items of the same type and size. The number of dimensions
-and items in an array is defined by its :attr:`shape <ndarray.shape>`,
-which is a :class:`tuple` of *N* positive integers that specify the
-sizes of each dimension. The type of items in the array is specified by
-a separate :ref:`data-type object (dtype) <arrays.dtypes>`, one of which
-is associated with each ndarray.
-
-As with other container objects in Python, the contents of an
-:class:`ndarray` can be accessed and modified by :ref:`indexing or
-slicing <arrays.indexing>` the array (using, for example, *N* integers),
-and via the methods and attributes of the :class:`ndarray`.
-
-.. index:: view, base
-
-Different :class:`ndarrays <ndarray>` can share the same data, so that
-changes made in one :class:`ndarray` may be visible in another. That
-is, an ndarray can be a *"view"* to another ndarray, and the data it
-is referring to is taken care of by the *"base"* ndarray. ndarrays can
-also be views to memory owned by Python :class:`strings <str>` or
-objects implementing the :class:`buffer` or :ref:`array
-<arrays.interface>` interfaces.
-
-
-.. 
admonition:: Example
-
-   A 2-dimensional array of size 2 x 3, composed of 4-byte integer
-   elements:
-
-   >>> x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
-   >>> type(x)
-   <type 'numpy.ndarray'>
-   >>> x.shape
-   (2, 3)
-   >>> x.dtype
-   dtype('int32')
-
-   The array can be indexed using Python container-like syntax:
-
-   >>> x[1,2]  # i.e., the element of x in the *second* row, *third* column
-   6
-
-   For example :ref:`slicing <arrays.indexing>` can produce views of
-   the array:
-
-   >>> y = x[:,1]
-   >>> y
-   array([2, 5])
-   >>> y[0] = 9  # this also changes the corresponding element in x
-   >>> y
-   array([9, 5])
-   >>> x
-   array([[1, 9, 3],
-          [4, 5, 6]])
-
-
-Constructing arrays
-===================
-
-New arrays can be constructed using the routines detailed in
-:ref:`routines.array-creation`, and also by using the low-level
-:class:`ndarray` constructor:
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray
-
-.. _arrays.ndarray.indexing:
-
-
-Indexing arrays
-===============
-
-Arrays can be indexed using an extended Python slicing syntax,
-``array[selection]``. Similar syntax is also used for accessing
-fields in a :ref:`record array <arrays.dtypes>`.
-
-.. seealso:: :ref:`Array Indexing <arrays.indexing>`.
-
-Internal memory layout of an ndarray
-====================================
-
-An instance of class :class:`ndarray` consists of a contiguous
-one-dimensional segment of computer memory (owned by the array, or by
-some other object), combined with an indexing scheme that maps *N*
-integers into the location of an item in the block. The ranges in
-which the indices can vary are specified by the :obj:`shape
-<ndarray.shape>` of the array. How many bytes each item takes and how
-the bytes are interpreted is defined by the :ref:`data-type object
-<arrays.dtypes>` associated with the array.
-
-.. index:: C-order, Fortran-order, row-major, column-major, stride,
-   offset
-
-A segment of memory is inherently 1-dimensional, and there are many
-different schemes for arranging the items of an *N*-dimensional array
-in a 1-dimensional block. 
Numpy is flexible, and :class:`ndarray`
-objects can accommodate any *strided indexing scheme*. In a strided
-scheme, the N-dimensional index :math:`(n_0, n_1, ..., n_{N-1})`
-corresponds to the offset (in bytes):
-
-.. math:: n_{\mathrm{offset}} = \sum_{k=0}^{N-1} s_k n_k
-
-from the beginning of the memory block associated with the
-array. Here, :math:`s_k` are integers which specify the :obj:`strides
-<ndarray.strides>` of the array. The :term:`column-major` order (used,
-for example, in the Fortran language and in *Matlab*) and
-:term:`row-major` order (used in C) schemes are just specific kinds of
-strided scheme, and correspond to the strides:
-
-.. math::
-
-   s_k^{\mathrm{column}} = \mathrm{itemsize} \prod_{j=0}^{k-1} d_j ,
-   \quad s_k^{\mathrm{row}} = \mathrm{itemsize} \prod_{j=k+1}^{N-1} d_j .
-
-.. index:: single-segment, contiguous, non-contiguous
-
-where :math:`d_j` `= self.shape[j]`.
-
-Both the C and Fortran orders are :term:`contiguous`, *i.e.,*
-:term:`single-segment`, memory layouts, in which every part of the
-memory block can be accessed by some combination of the indices.
-
-Data in new :class:`ndarrays <ndarray>` is in the :term:`row-major`
-(C) order, unless otherwise specified, but, for example, :ref:`basic
-array slicing <arrays.indexing>` often produces :term:`views <view>`
-in a different scheme.
-
-.. seealso:: :ref:`Indexing <arrays.ndarray.indexing>`
-
-.. note::
-
-   Several algorithms in NumPy work on arbitrarily strided arrays.
-   However, some algorithms require single-segment arrays. When an
-   irregularly strided array is passed to such algorithms, a copy
-   is automatically made.
-
-
-.. _arrays.ndarray.attributes:
-
-Array attributes
-================
-
-Array attributes reflect information that is intrinsic to the array
-itself. Generally, accessing an array through its attributes allows
-you to get and sometimes set intrinsic properties of the array without
-creating a new array. The exposed attributes are the core parts of an
-array and only some of them can be reset meaningfully without creating
-a new array. 
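The strided-offset formula above can be checked directly from Python. A minimal sketch (the array shape and the index values are arbitrary examples):

```python
import numpy as np

# A C-ordered 4x3 array of 8-byte integers.
x = np.arange(12, dtype=np.int64).reshape(4, 3)

# For a C-contiguous array, strides are (shape[1]*itemsize, itemsize).
assert x.strides == (3 * x.itemsize, x.itemsize)

# Byte offset of element (i, j) predicted by the formula:
#   offset = sum_k strides[k] * index[k]
i, j = 2, 1
offset = x.strides[0] * i + x.strides[1] * j

# The element found at that byte offset is exactly x[i, j].
flat_index = offset // x.itemsize
assert x.ravel()[flat_index] == x[i, j]
```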
Information on each attribute is given below.
-
-Memory layout
--------------
-
-The following attributes contain information about the memory layout
-of the array:
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray.flags
-   ndarray.shape
-   ndarray.strides
-   ndarray.ndim
-   ndarray.data
-   ndarray.size
-   ndarray.itemsize
-   ndarray.nbytes
-   ndarray.base
-
-Data type
----------
-
-.. seealso:: :ref:`Data type objects <arrays.dtypes>`
-
-The data type object associated with the array can be found in the
-:attr:`dtype <ndarray.dtype>` attribute:
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray.dtype
-
-Other attributes
-----------------
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray.T
-   ndarray.real
-   ndarray.imag
-   ndarray.flat
-   ndarray.ctypes
-   __array_priority__
-
-
-.. _arrays.ndarray.array-interface:
-
-Array interface
----------------
-
-.. seealso:: :ref:`arrays.interface`.
-
-========================== ===================================
-:obj:`__array_interface__` Python-side of the array interface
-:obj:`__array_struct__`    C-side of the array interface
-========================== ===================================
-
-:mod:`ctypes` foreign function interface
-----------------------------------------
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray.ctypes
-
-.. _array.ndarray.methods:
-
-Array methods
-=============
-
-An :class:`ndarray` object has many methods which operate on or with
-the array in some fashion, typically returning an array result. These
-methods are briefly explained below. (Each method's docstring has a
-more complete description.) 
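A few of these methods in action (a minimal sketch; the array values are arbitrary):

```python
import numpy as np

x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)

# Most methods return a new array...
assert x.transpose().shape == (3, 2)
assert x.astype(np.float64).dtype == np.float64

# ...while a few, such as sort(), operate in place and return None.
y = np.array([3, 1, 2])
y.sort()
assert y.tolist() == [1, 2, 3]

# Reductions return scalars or lower-dimensional arrays.
assert x.sum() == 21
assert x.max(axis=1).tolist() == [3, 6]
```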
-
-For the following methods there are also corresponding functions in
-:mod:`numpy`: :func:`all`, :func:`any`, :func:`argmax`,
-:func:`argmin`, :func:`argsort`, :func:`choose`, :func:`clip`,
-:func:`compress`, :func:`copy`, :func:`cumprod`, :func:`cumsum`,
-:func:`diagonal`, :func:`imag`, :func:`max <amax>`, :func:`mean`,
-:func:`min <amin>`, :func:`nonzero`, :func:`prod`, :func:`ptp`,
-:func:`put`, :func:`ravel`, :func:`real`, :func:`repeat`,
-:func:`reshape`, :func:`round <around>`, :func:`searchsorted`,
-:func:`sort`, :func:`squeeze`, :func:`std`, :func:`sum`,
-:func:`swapaxes`, :func:`take`, :func:`trace`, :func:`transpose`,
-:func:`var`.
-
-Array conversion
-----------------
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray.item
-   ndarray.tolist
-   ndarray.itemset
-   ndarray.tostring
-   ndarray.tofile
-   ndarray.dump
-   ndarray.dumps
-   ndarray.astype
-   ndarray.byteswap
-   ndarray.copy
-   ndarray.view
-   ndarray.getfield
-   ndarray.setflags
-   ndarray.fill
-
-Shape manipulation
-------------------
-
-For reshape, resize, and transpose, the single tuple argument may be
-replaced with ``n`` integers, which will be interpreted as an n-tuple.
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray.reshape
-   ndarray.resize
-   ndarray.transpose
-   ndarray.swapaxes
-   ndarray.flatten
-   ndarray.ravel
-   ndarray.squeeze
-
-Item selection and manipulation
--------------------------------
-
-For array methods that take an *axis* keyword, it defaults to
-:const:`None`. If axis is *None*, then the array is treated as a 1-D
-array. Any other value for *axis* represents the dimension along which
-the operation should proceed.
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray.take
-   ndarray.put
-   ndarray.repeat
-   ndarray.choose
-   ndarray.sort
-   ndarray.argsort
-   ndarray.searchsorted
-   ndarray.nonzero
-   ndarray.compress
-   ndarray.diagonal
-
-Calculation
------------
-
-.. index:: axis
-
-Many of these methods take an argument named *axis*. 
In such cases, - -- If *axis* is *None* (the default), the array is treated as a 1-D - array and the operation is performed over the entire array. This - behavior is also the default if self is a 0-dimensional array or - array scalar. (An array scalar is an instance of the types/classes - float32, float64, etc., whereas a 0-dimensional array is an ndarray - instance containing precisely one array scalar.) - -- If *axis* is an integer, then the operation is done over the given - axis (for each 1-D subarray that can be created along the given axis). - -.. admonition:: Example of the *axis* argument - - A 3-dimensional array of size 3 x 3 x 3, summed over each of its - three axes - - >>> x - array([[[ 0, 1, 2], - [ 3, 4, 5], - [ 6, 7, 8]], - [[ 9, 10, 11], - [12, 13, 14], - [15, 16, 17]], - [[18, 19, 20], - [21, 22, 23], - [24, 25, 26]]]) - >>> x.sum(axis=0) - array([[27, 30, 33], - [36, 39, 42], - [45, 48, 51]]) - >>> # for sum, axis is the first keyword, so we may omit it, - >>> # specifying only its value - >>> x.sum(0), x.sum(1), x.sum(2) - (array([[27, 30, 33], - [36, 39, 42], - [45, 48, 51]]), - array([[ 9, 12, 15], - [36, 39, 42], - [63, 66, 69]]), - array([[ 3, 12, 21], - [30, 39, 48], - [57, 66, 75]])) - -The parameter *dtype* specifies the data type over which a reduction -operation (like summing) should take place. The default reduce data -type is the same as the data type of *self*. To avoid overflow, it can -be useful to perform the reduction using a larger data type. - -For several methods, an optional *out* argument can also be provided -and the result will be placed into the output array given. The *out* -argument must be an :class:`ndarray` and have the same number of -elements. It can have a different data type in which case casting will -be performed. - -.. 
autosummary::
-   :toctree: generated/
-
-   ndarray.max
-   ndarray.argmax
-   ndarray.min
-   ndarray.argmin
-   ndarray.ptp
-   ndarray.clip
-   ndarray.conj
-   ndarray.round
-   ndarray.trace
-   ndarray.sum
-   ndarray.cumsum
-   ndarray.mean
-   ndarray.var
-   ndarray.std
-   ndarray.prod
-   ndarray.cumprod
-   ndarray.all
-   ndarray.any
-
-Arithmetic and comparison operations
-====================================
-
-.. index:: comparison, arithmetic, operation, operator
-
-Arithmetic and comparison operations on :class:`ndarrays <ndarray>`
-are defined as element-wise operations, and generally yield
-:class:`ndarray` objects as results.
-
-Each of the arithmetic operations (``+``, ``-``, ``*``, ``/``, ``//``,
-``%``, ``divmod()``, ``**`` or ``pow()``, ``<<``, ``>>``, ``&``,
-``^``, ``|``, ``~``) and the comparisons (``==``, ``<``, ``>``,
-``<=``, ``>=``, ``!=``) is equivalent to the corresponding
-:term:`universal function` (or :term:`ufunc` for short) in Numpy. For
-more information, see the section on :ref:`Universal Functions
-<ufuncs>`.
-
-Comparison operators:
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray.__lt__
-   ndarray.__le__
-   ndarray.__gt__
-   ndarray.__ge__
-   ndarray.__eq__
-   ndarray.__ne__
-
-Truth value of an array (:func:`bool()`):
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray.__nonzero__
-
-.. note::
-
-   Truth-value testing of an array invokes
-   :meth:`ndarray.__nonzero__`, which raises an error if the number of
-   elements in the array is larger than 1, because the truth value
-   of such arrays is ambiguous. Use :meth:`.any() <ndarray.any>` and
-   :meth:`.all() <ndarray.all>` instead to be clear about what is meant
-   in such cases. (If the number of elements is 0, the array evaluates
-   to ``False``.)
-
-
-Unary operations:
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray.__neg__
-   ndarray.__pos__
-   ndarray.__abs__
-   ndarray.__invert__
-
-Arithmetic:
-
-.. 
autosummary::
-   :toctree: generated/
-
-   ndarray.__add__
-   ndarray.__sub__
-   ndarray.__mul__
-   ndarray.__div__
-   ndarray.__truediv__
-   ndarray.__floordiv__
-   ndarray.__mod__
-   ndarray.__divmod__
-   ndarray.__pow__
-   ndarray.__lshift__
-   ndarray.__rshift__
-   ndarray.__and__
-   ndarray.__or__
-   ndarray.__xor__
-
-.. note::
-
-   - Any third argument to :func:`pow()` is silently ignored,
-     as the underlying :func:`ufunc <power>` takes only two arguments.
-
-   - The three division operators are all defined; :obj:`div` is active
-     by default, :obj:`truediv` is active when
-     :obj:`__future__` division is in effect.
-
-   - Because :class:`ndarray` is a built-in type (written in C), the
-     ``__r{op}__`` special methods are not directly defined.
-
-   - The functions called to implement many arithmetic special methods
-     for arrays can be modified using :func:`set_numeric_ops`.
-
-Arithmetic, in-place:
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray.__iadd__
-   ndarray.__isub__
-   ndarray.__imul__
-   ndarray.__idiv__
-   ndarray.__itruediv__
-   ndarray.__ifloordiv__
-   ndarray.__imod__
-   ndarray.__ipow__
-   ndarray.__ilshift__
-   ndarray.__irshift__
-   ndarray.__iand__
-   ndarray.__ior__
-   ndarray.__ixor__
-
-.. warning::
-
-   In place operations will perform the calculation using the
-   precision decided by the data type of the two operands, but will
-   silently downcast the result (if necessary) so it can fit back into
-   the array. Therefore, for mixed precision calculations, ``A {op}=
-   B`` can be different than ``A = A {op} B``. For example, suppose
-   ``a = ones((3,3))``. Then, ``a += 3j`` is different than ``a = a +
-   3j``: while they both perform the same computation, ``a += 3j``
-   casts the result to fit back in ``a``, whereas ``a = a + 3j``
-   re-binds the name ``a`` to the result.
-
-
-Special methods
-===============
-
-For standard library functions:
-
-.. 
autosummary::
-   :toctree: generated/
-
-   ndarray.__copy__
-   ndarray.__deepcopy__
-   ndarray.__reduce__
-   ndarray.__setstate__
-
-Basic customization:
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray.__new__
-   ndarray.__array__
-   ndarray.__array_wrap__
-
-Container customization: (see :ref:`Indexing <arrays.indexing>`)
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray.__len__
-   ndarray.__getitem__
-   ndarray.__setitem__
-   ndarray.__getslice__
-   ndarray.__setslice__
-   ndarray.__contains__
-
-Conversion; the operations :func:`complex()`, :func:`int()`,
-:func:`long()`, :func:`float()`, :func:`oct()`, and
-:func:`hex()`. They work only on arrays that have one element in them
-and return the appropriate scalar.
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray.__int__
-   ndarray.__long__
-   ndarray.__float__
-   ndarray.__oct__
-   ndarray.__hex__
-
-String representations:
-
-.. autosummary::
-   :toctree: generated/
-
-   ndarray.__str__
-   ndarray.__repr__
diff --git a/pythonPackages/numpy/doc/source/reference/arrays.rst b/pythonPackages/numpy/doc/source/reference/arrays.rst
deleted file mode 100755
index 4204f13a42..0000000000
--- a/pythonPackages/numpy/doc/source/reference/arrays.rst
+++ /dev/null
@@ -1,47 +0,0 @@
-.. _arrays:
-
-*************
-Array objects
-*************
-
-.. currentmodule:: numpy
-
-NumPy provides an N-dimensional array type, the :ref:`ndarray
-<arrays.ndarray>`, which describes a collection of "items" of the same
-type. The items can be :ref:`indexed <arrays.indexing>` using, for
-example, N integers.
-
-All ndarrays are :term:`homogeneous`: every item takes up the same size
-block of memory, and all blocks are interpreted in exactly the same
-way. How each item in the array is to be interpreted is specified by a
-separate :ref:`data-type object <arrays.dtypes>`, one of which is associated
-with every array. In addition to basic types (integers, floats,
-*etc.*), the data type objects can also represent data structures. 
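A minimal sketch of that last point, a data-type object describing a small data structure (the field names ``id`` and ``value`` are arbitrary examples):

```python
import numpy as np

# A data-type object describing a structure of an int32 and a float64.
point = np.dtype([('id', np.int32), ('value', np.float64)])

a = np.array([(1, 2.5), (2, 7.0)], dtype=point)

# Every item occupies the same fixed-size block of memory...
assert a.itemsize == point.itemsize == 12

# ...and the fields of each item are accessed by name.
assert a['id'].tolist() == [1, 2]
assert a['value'].tolist() == [2.5, 7.0]
```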
-
-An item extracted from an array, *e.g.*, by indexing, is represented
-by a Python object whose type is one of the :ref:`array scalar types
-<arrays.scalars>` built in Numpy. The array scalars also allow easy
-manipulation of more complicated arrangements of data.
-
-.. figure:: figures/threefundamental.png
-
-   **Figure**
-   Conceptual diagram showing the relationship between the three
-   fundamental objects used to describe the data in an array: 1) the
-   ndarray itself, 2) the data-type object that describes the layout
-   of a single fixed-size element of the array, 3) the array-scalar
-   Python object that is returned when a single element of the array
-   is accessed.
-
-
-
-.. toctree::
-   :maxdepth: 2
-
-   arrays.ndarray
-   arrays.scalars
-   arrays.dtypes
-   arrays.indexing
-   arrays.classes
-   maskedarray
-   arrays.interface
diff --git a/pythonPackages/numpy/doc/source/reference/arrays.scalars.rst b/pythonPackages/numpy/doc/source/reference/arrays.scalars.rst
deleted file mode 100755
index 62e22146a9..0000000000
--- a/pythonPackages/numpy/doc/source/reference/arrays.scalars.rst
+++ /dev/null
@@ -1,284 +0,0 @@
-.. _arrays.scalars:
-
-*******
-Scalars
-*******
-
-.. currentmodule:: numpy
-
-Python defines only one type of a particular data class (there is only
-one integer type, one floating-point type, etc.). This can be
-convenient in applications that don't need to be concerned with all
-the ways data can be represented in a computer. For scientific
-computing, however, more control is often needed.
-
-In NumPy, there are 21 new fundamental Python types to describe
-different types of scalars. These type descriptors are mostly based on
-the types available in the C language that CPython is written in, with
-several additional types compatible with Python's types.
-
-Array scalars have the same attributes and methods as :class:`ndarrays
-<ndarray>`. 
[#]_ This allows one to treat items of an array partly on -the same footing as arrays, smoothing out rough edges that result when -mixing scalar and array operations. - -Array scalars live in a hierarchy (see the Figure below) of data -types. They can be detected using the hierarchy: For example, -``isinstance(val, np.generic)`` will return :const:`True` if *val* is -an array scalar object. Alternatively, what kind of array scalar is -present can be determined using other members of the data type -hierarchy. Thus, for example ``isinstance(val, np.complexfloating)`` -will return :const:`True` if *val* is a complex valued type, while -:const:`isinstance(val, np.flexible)` will return true if *val* is one -of the flexible itemsize array types (:class:`string`, -:class:`unicode`, :class:`void`). - -.. figure:: figures/dtype-hierarchy.png - - **Figure:** Hierarchy of type objects representing the array data - types. Not shown are the two integer types :class:`intp` and - :class:`uintp` which just point to the integer type that holds a - pointer for the platform. All the number types can be obtained - using bit-width names as well. - -.. [#] However, array scalars are immutable, so none of the array - scalar attributes are settable. - -.. _arrays.scalars.character-codes: - -.. _arrays.scalars.built-in: - -Built-in scalar types -===================== - -The built-in scalar types are shown below. Along with their (mostly) -C-derived names, the integer, float, and complex data-types are also -available using a bit-width convention so that an array of the right -size can always be ensured (e.g. :class:`int8`, :class:`float64`, -:class:`complex128`). Two aliases (:class:`intp` and :class:`uintp`) -pointing to the integer type that is sufficiently large to hold a C pointer -are also provided. The C-like names are associated with character codes, -which are shown in the table. Use of the character codes, however, -is discouraged. 
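A brief sketch of the conventions above (bit-width names, hierarchy checks with ``isinstance``, and a character code); the particular values are arbitrary:

```python
import numpy as np

# Bit-width names always guarantee the itemsize.
assert np.dtype(np.int8).itemsize == 1
assert np.dtype(np.complex128).itemsize == 16

# Array scalars sit in the type hierarchy described above.
val = np.float32(1.5)
assert isinstance(val, np.generic)        # any array scalar
assert isinstance(val, np.floating)       # any floating-point scalar
assert not isinstance(val, np.complexfloating)

# Character codes still work, but bit-width names are clearer.
assert np.dtype('f') == np.dtype(np.single)
```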
- -Five of the scalar types are essentially equivalent to fundamental -Python types and therefore inherit from them as well as from the -generic array scalar type: - -==================== ==================== -Array scalar type Related Python type -==================== ==================== -:class:`int_` :class:`IntType` -:class:`float_` :class:`FloatType` -:class:`complex_` :class:`ComplexType` -:class:`str_` :class:`StringType` -:class:`unicode_` :class:`UnicodeType` -==================== ==================== - -The :class:`bool_` data type is very similar to the Python -:class:`BooleanType` but does not inherit from it because Python's -:class:`BooleanType` does not allow itself to be inherited from, and -on the C-level the size of the actual bool data is not the same as a -Python Boolean scalar. - -.. warning:: - - The :class:`bool_` type is not a subclass of the :class:`int_` type - (the :class:`bool_` is not even a number type). This is different - than Python's default implementation of :class:`bool` as a - sub-class of int. - - -.. tip:: The default data type in Numpy is :class:`float_`. - -In the tables below, ``platform?`` means that the type may not be -available on all platforms. Compatibility with different C or Python -types is indicated: two types are compatible if their data is of the -same size and interpreted in the same way. 
-
-Booleans:
-
-===================  =================================  ===============
-Type                 Remarks                            Character code
-===================  =================================  ===============
-:class:`bool_`       compatible: Python bool            ``'?'``
-:class:`bool8`       8 bits
-===================  =================================  ===============
-
-Integers:
-
-===================  =================================  ===============
-:class:`byte`        compatible: C char                 ``'b'``
-:class:`short`       compatible: C short                ``'h'``
-:class:`intc`        compatible: C int                  ``'i'``
-:class:`int_`        compatible: Python int             ``'l'``
-:class:`longlong`    compatible: C long long            ``'q'``
-:class:`intp`        large enough to fit a pointer      ``'p'``
-:class:`int8`        8 bits
-:class:`int16`       16 bits
-:class:`int32`       32 bits
-:class:`int64`       64 bits
-===================  =================================  ===============
-
-Unsigned integers:
-
-===================  =================================  ===============
-:class:`ubyte`       compatible: C unsigned char        ``'B'``
-:class:`ushort`      compatible: C unsigned short       ``'H'``
-:class:`uintc`       compatible: C unsigned int         ``'I'``
-:class:`uint`        compatible: Python int             ``'L'``
-:class:`ulonglong`   compatible: C unsigned long long   ``'Q'``
-:class:`uintp`       large enough to fit a pointer      ``'P'``
-:class:`uint8`       8 bits
-:class:`uint16`      16 bits
-:class:`uint32`      32 bits
-:class:`uint64`      64 bits
-===================  =================================  ===============
-
-Floating-point numbers:
-
-=================== ============================= ===============
-:class:`single`     compatible: C float           ``'f'``
-:class:`double`     compatible: C double
-:class:`float_`     compatible: Python float      ``'d'``
-:class:`longfloat`  compatible: C long double     ``'g'``
-:class:`float32`    32 bits
-:class:`float64`    64 bits
-:class:`float96`    96 bits, platform?
-:class:`float128`   128 bits, platform? 
-=================== ============================= =============== - -Complex floating-point numbers: - -=================== ============================= =============== -:class:`csingle` ``'F'`` -:class:`complex_` compatible: Python complex ``'D'`` -:class:`clongfloat` ``'G'`` -:class:`complex64` two 32-bit floats -:class:`complex128` two 64-bit floats -:class:`complex192` two 96-bit floats, - platform? -:class:`complex256` two 128-bit floats, - platform? -=================== ============================= =============== - -Any Python object: - -=================== ============================= =============== -:class:`object_` any Python object ``'O'`` -=================== ============================= =============== - -.. note:: - - The data actually stored in :term:`object arrays ` - (*i.e.*, arrays having dtype :class:`object_`) are references to - Python objects, not the objects themselves. Hence, object arrays - behave more like usual Python :class:`lists `, in the sense - that their contents need not be of the same Python type. - - The object type is also special because an array containing - :class:`object_` items does not return an :class:`object_` object - on item access, but instead returns the actual object that - the array item refers to. - -The following data types are :term:`flexible`. They have no predefined -size: the data they describe can be of different length in different -arrays. (In the character codes ``#`` is an integer denoting how many -elements the data type consists of.) - -=================== ============================= ======== -:class:`str_` compatible: Python str ``'S#'`` -:class:`unicode_` compatible: Python unicode ``'U#'`` -:class:`void` ``'V#'`` -=================== ============================= ======== - - -.. warning:: - - Numeric Compatibility: If you used old typecode characters in your - Numeric code (which was never recommended), you will need to change - some of them to the new characters. 
In particular, the needed - changes are ``c -> S1``, ``b -> B``, ``1 -> b``, ``s -> h``, ``w -> - H``, and ``u -> I``. These changes make the type character - convention more consistent with other Python modules such as the - :mod:`struct` module. - - -Attributes -========== - -The array scalar objects have an :obj:`array priority -<__array_priority__>` of :cdata:`NPY_SCALAR_PRIORITY` -(-1,000,000.0). They also do not (yet) have a :attr:`ctypes ` -attribute. Otherwise, they share the same attributes as arrays: - -.. autosummary:: - :toctree: generated/ - - generic.flags - generic.shape - generic.strides - generic.ndim - generic.data - generic.size - generic.itemsize - generic.base - generic.dtype - generic.real - generic.imag - generic.flat - generic.T - generic.__array_interface__ - generic.__array_struct__ - generic.__array_priority__ - generic.__array_wrap__ - - -Indexing -======== -.. seealso:: :ref:`arrays.indexing`, :ref:`arrays.dtypes` - -Array scalars can be indexed like 0-dimensional arrays: if *x* is an -array scalar, - -- ``x[()]`` returns a 0-dimensional :class:`ndarray` -- ``x['field-name']`` returns the array scalar in the field *field-name*. - (*x* can have fields, for example, when it corresponds to a record data type.) - -Methods -======= - -Array scalars have exactly the same methods as arrays. The default -behavior of these methods is to internally convert the scalar to an -equivalent 0-dimensional array and to call the corresponding array -method. In addition, math operations on array scalars are defined so -that the same hardware flags are set and used to interpret the results -as for :ref:`ufunc `, so that the error state used for ufuncs -also carries over to the math on array scalars. - -The exceptions to the above rules are given below: - -.. 
autosummary:: - :toctree: generated/ - - generic - generic.__array__ - generic.__array_wrap__ - generic.__squeeze__ - generic.byteswap - generic.__reduce__ - generic.__setstate__ - generic.setflags - - -Defining new types -================== - -There are two ways to effectively define a new array scalar type -(apart from composing record :ref:`dtypes ` from the built-in -scalar types): One way is to simply subclass the :class:`ndarray` and -overwrite the methods of interest. This will work to a degree, but -internally certain behaviors are fixed by the data type of the array. -To fully customize the data type of an array you need to define a new -data-type, and register it with NumPy. Such new types can only be -defined in C, using the :ref:`Numpy C-API `. diff --git a/pythonPackages/numpy/doc/source/reference/c-api.array.rst b/pythonPackages/numpy/doc/source/reference/c-api.array.rst deleted file mode 100755 index 49d073b7ee..0000000000 --- a/pythonPackages/numpy/doc/source/reference/c-api.array.rst +++ /dev/null @@ -1,2827 +0,0 @@ -Array API -========= - -.. sectionauthor:: Travis E. Oliphant - -| The test of a first-rate intelligence is the ability to hold two -| opposed ideas in the mind at the same time, and still retain the -| ability to function. -| --- *F. Scott Fitzgerald* - -| For a successful technology, reality must take precedence over public -| relations, for Nature cannot be fooled. -| --- *Richard P. Feynman* - -.. index:: - pair: ndarray; C-API - pair: C-API; array - - -Array structure and data access -------------------------------- - -These macros all access the :ctype:`PyArrayObject` structure members. The input -argument, obj, can be any :ctype:`PyObject *` that is directly interpretable -as a :ctype:`PyArrayObject *` (any instance of the :cdata:`PyArray_Type` and its -sub-types). - -.. cfunction:: void *PyArray_DATA(PyObject *obj) - -.. 
cfunction:: char *PyArray_BYTES(PyObject *obj) - - These two macros are similar and obtain the pointer to the - data-buffer for the array. The first macro can (and should be) - assigned to a particular pointer where the second is for generic - processing. If you have not guaranteed a contiguous and/or aligned - array then be sure you understand how to access the data in the - array to avoid memory and/or alignment problems. - -.. cfunction:: npy_intp *PyArray_DIMS(PyObject *arr) - -.. cfunction:: npy_intp *PyArray_STRIDES(PyObject* arr) - -.. cfunction:: npy_intp PyArray_DIM(PyObject* arr, int n) - - Return the shape in the *n* :math:`^{\textrm{th}}` dimension. - -.. cfunction:: npy_intp PyArray_STRIDE(PyObject* arr, int n) - - Return the stride in the *n* :math:`^{\textrm{th}}` dimension. - -.. cfunction:: PyObject *PyArray_BASE(PyObject* arr) - -.. cfunction:: PyArray_Descr *PyArray_DESCR(PyObject* arr) - -.. cfunction:: int PyArray_FLAGS(PyObject* arr) - -.. cfunction:: int PyArray_ITEMSIZE(PyObject* arr) - - Return the itemsize for the elements of this array. - -.. cfunction:: int PyArray_TYPE(PyObject* arr) - - Return the (builtin) typenumber for the elements of this array. - -.. cfunction:: PyObject *PyArray_GETITEM(PyObject* arr, void* itemptr) - - Get a Python object from the ndarray, *arr*, at the location - pointed to by itemptr. Return ``NULL`` on failure. - -.. cfunction:: int PyArray_SETITEM(PyObject* arr, void* itemptr, PyObject* obj) - - Convert obj and place it in the ndarray, *arr*, at the place - pointed to by itemptr. Return -1 if an error occurs or 0 on - success. - -.. cfunction:: npy_intp PyArray_SIZE(PyObject* arr) - - Returns the total size (in number of elements) of the array. - -.. cfunction:: npy_intp PyArray_Size(PyObject* obj) - - Returns 0 if *obj* is not a sub-class of bigndarray. Otherwise, - returns the total number of elements in the array. Safer version - of :cfunc:`PyArray_SIZE` (*obj*). - -.. 
cfunction:: npy_intp PyArray_NBYTES(PyObject* arr) - - Returns the total number of bytes consumed by the array. - - -Data access -^^^^^^^^^^^ - -These functions and macros provide easy access to elements of the -ndarray from C. These work for all arrays. You may need to take care -when accessing the data in the array, however, if it is not in machine -byte-order, misaligned, or not writeable. In other words, be sure to -respect the state of the flags unless you know what you are doing, or -have previously guaranteed an array that is writeable, aligned, and in -machine byte-order using :cfunc:`PyArray_FromAny`. If you wish to handle all -types of arrays, the copyswap function for each type is useful for -handling misbehaved arrays. Some platforms (e.g. Solaris) do not like -misaligned data and will crash if you de-reference a misaligned -pointer. Other platforms (e.g. x86 Linux) will just work more slowly -with misaligned data. - -.. cfunction:: void* PyArray_GetPtr(PyArrayObject* aobj, npy_intp* ind) - - Return a pointer to the data of the ndarray, *aobj*, at the - N-dimensional index given by the c-array, *ind*, (which must be - at least *aobj* ->nd in size). You may want to typecast the - returned pointer to the data type of the ndarray. - -.. cfunction:: void* PyArray_GETPTR1(PyObject* obj, i) - -.. cfunction:: void* PyArray_GETPTR2(PyObject* obj, i, j) - -.. cfunction:: void* PyArray_GETPTR3(PyObject* obj, i, j, k) - -.. cfunction:: void* PyArray_GETPTR4(PyObject* obj, i, j, k, l) - - Quick, inline access to the element at the given coordinates in - the ndarray, *obj*, which must have respectively 1, 2, 3, or 4 - dimensions (this is not checked). The corresponding *i*, *j*, - *k*, and *l* coordinates can be any integer but will be - interpreted as ``npy_intp``. You may want to typecast the - returned pointer to the data type of the ndarray. - - -Creating arrays ---------------- - - -From scratch -^^^^^^^^^^^^ - -.. 
cfunction:: PyObject* PyArray_NewFromDescr(PyTypeObject* subtype, PyArray_Descr* descr, int nd, npy_intp* dims, npy_intp* strides, void* data, int flags, PyObject* obj) - - This is the main array creation function. Most new arrays are - created with this flexible function. The returned object is an - object of Python-type *subtype*, which must be a subtype of - :cdata:`PyArray_Type`. The array has *nd* dimensions, described by - *dims*. The data-type descriptor of the new array is *descr*. If - *subtype* is not :cdata:`&PyArray_Type` (*e.g.* a Python subclass of - the ndarray), then *obj* is the object to pass to the - :obj:`__array_finalize__` method of the subclass. If *data* is - ``NULL``, then new memory will be allocated and *flags* can be - non-zero to indicate a Fortran-style contiguous array. If *data* - is not ``NULL``, then it is assumed to point to the memory to be - used for the array and the *flags* argument is used as the new - flags for the array (except the state of :cdata:`NPY_OWNDATA` and - :cdata:`NPY_UPDATEIFCOPY` flags of the new array will be reset). In - addition, if *data* is non-NULL, then *strides* can also be - provided. If *strides* is ``NULL``, then the array strides are - computed as C-style contiguous (default) or Fortran-style - contiguous (*flags* is nonzero for *data* = ``NULL``, or *flags* & - :cdata:`NPY_F_CONTIGUOUS` is nonzero for non-NULL *data*). Any provided - *dims* and *strides* are copied into newly allocated dimension and - strides arrays for the new array object. - -.. cfunction:: PyObject* PyArray_New(PyTypeObject* subtype, int nd, npy_intp* dims, int type_num, npy_intp* strides, void* data, int itemsize, int flags, PyObject* obj) - - This is similar to :cfunc:`PyArray_NewFromDescr` (...) except you - specify the data-type descriptor with *type_num* and *itemsize*, - where *type_num* corresponds to a builtin (or user-defined) - type. If the type always has the same number of bytes, then - itemsize is ignored. 
Otherwise, itemsize specifies the particular - size of this array. - - - -.. warning:: - - If data is passed to :cfunc:`PyArray_NewFromDescr` or :cfunc:`PyArray_New`, - this memory must not be deallocated until the new array is - deleted. If this data came from another Python object, this can - be accomplished using :cfunc:`Py_INCREF` on that object and setting the - base member of the new array to point to that object. If strides - are passed in they must be consistent with the dimensions, the - itemsize, and the data of the array. - -.. cfunction:: PyObject* PyArray_SimpleNew(int nd, npy_intp* dims, int typenum) - - Create a new uninitialized array of type, *typenum*, whose size in - each of *nd* dimensions is given by the integer array, *dims*. - This function cannot be used to create a flexible-type array (no - itemsize given). - -.. cfunction:: PyObject* PyArray_SimpleNewFromData(int nd, npy_intp* dims, int typenum, void* data) - - Create an array wrapper around *data* pointed to by the given - pointer. The array flags will have a default that the data area is - well-behaved and C-style contiguous. The shape of the array is - given by the *dims* c-array of length *nd*. The data-type of the - array is indicated by *typenum*. - -.. cfunction:: PyObject* PyArray_SimpleNewFromDescr(int nd, npy_intp* dims, PyArray_Descr* descr) - - Create a new array with the provided data-type descriptor, *descr*, - of the shape determined by *nd* and *dims*. - -.. cfunction:: PyArray_FILLWBYTE(PyObject* obj, int val) - - Fill the array pointed to by *obj* ---which must be a (subclass - of) bigndarray---with the contents of *val* (evaluated as a byte). - -.. cfunction:: PyObject* PyArray_Zeros(int nd, npy_intp* dims, PyArray_Descr* dtype, int fortran) - - Construct a new *nd* -dimensional array with shape given by *dims* - and data type given by *dtype*. If *fortran* is non-zero, then a - Fortran-order array is created, otherwise a C-order array is - created. 
Fill the memory with zeros (or the 0 object if *dtype* - corresponds to :ctype:`PyArray_OBJECT` ). - -.. cfunction:: PyObject* PyArray_ZEROS(int nd, npy_intp* dims, int type_num, int fortran) - - Macro form of :cfunc:`PyArray_Zeros` which takes a type-number instead - of a data-type object. - -.. cfunction:: PyObject* PyArray_Empty(int nd, npy_intp* dims, PyArray_Descr* dtype, int fortran) - - Construct a new *nd* -dimensional array with shape given by *dims* - and data type given by *dtype*. If *fortran* is non-zero, then a - Fortran-order array is created, otherwise a C-order array is - created. The array is uninitialized unless the data type - corresponds to :ctype:`PyArray_OBJECT` in which case the array is - filled with :cdata:`Py_None`. - -.. cfunction:: PyObject* PyArray_EMPTY(int nd, npy_intp* dims, int typenum, int fortran) - - Macro form of :cfunc:`PyArray_Empty` which takes a type-number, - *typenum*, instead of a data-type object. - -.. cfunction:: PyObject* PyArray_Arange(double start, double stop, double step, int typenum) - - Construct a new 1-dimensional array of data-type, *typenum*, that - ranges from *start* to *stop* (exclusive) in increments of *step* - . Equivalent to **arange** (*start*, *stop*, *step*, dtype). - -.. cfunction:: PyObject* PyArray_ArangeObj(PyObject* start, PyObject* stop, PyObject* step, PyArray_Descr* descr) - - Construct a new 1-dimensional array of data-type determined by - ``descr``, that ranges from ``start`` to ``stop`` (exclusive) in - increments of ``step``. Equivalent to arange( ``start``, - ``stop``, ``step``, ``descr`` ). - - -From other objects -^^^^^^^^^^^^^^^^^^ - -.. cfunction:: PyObject* PyArray_FromAny(PyObject* op, PyArray_Descr* dtype, int min_depth, int max_depth, int requirements, PyObject* context) - - This is the main function used to obtain an array from any nested - sequence, or object that exposes the array interface, *op*. 
The - parameters allow specification of the required *dtype*, the - minimum (*min_depth*) and maximum (*max_depth*) number of - dimensions acceptable, and other *requirements* for the array. The - *dtype* argument needs to be a :ctype:`PyArray_Descr` structure - indicating the desired data-type (including required - byteorder). The *dtype* argument may be NULL, indicating that any - data-type (and byteorder) is acceptable. Unless ``FORCECAST`` is - present in ``flags``, this call will generate an error if the data - type cannot be safely obtained from the object. If you want to use - ``NULL`` for the *dtype* and ensure the array is not swapped, then - use :cfunc:`PyArray_CheckFromAny`. A value of 0 for either of the - depth parameters causes the parameter to be ignored. Any of the - following array flags can be added (*e.g.* using \|) to get the - *requirements* argument. If your code can handle general (*e.g.* - strided, byte-swapped, or unaligned arrays) then *requirements* - may be 0. Also, if *op* is not already an array (or does not - expose the array interface), then a new array will be created (and - filled from *op* using the sequence protocol). The new array will - have :cdata:`NPY_DEFAULT` as its flags member. The *context* argument - is passed to the :obj:`__array__` method of *op* and is only used if - the array is constructed that way. Almost always this - parameter is ``NULL``. - - .. cvar:: NPY_C_CONTIGUOUS - - Make sure the returned array is C-style contiguous. - - .. cvar:: NPY_F_CONTIGUOUS - - Make sure the returned array is Fortran-style contiguous. - - .. cvar:: NPY_ALIGNED - - Make sure the returned array is aligned on proper boundaries for its - data type. An aligned array has the data pointer and every strides - factor as a multiple of the alignment factor for the - data-type descriptor. - - .. cvar:: NPY_WRITEABLE - - Make sure the returned array can be written to. - - .. cvar:: NPY_ENSURECOPY - - Make sure a copy is made of *op*. 
If this flag is not - present, data is not copied if it can be avoided. - - .. cvar:: NPY_ENSUREARRAY - - Make sure the result is a base-class ndarray or bigndarray. By - default, if *op* is an instance of a subclass of the - bigndarray, an instance of that same subclass is returned. If - this flag is set, an ndarray object will be returned instead. - - .. cvar:: NPY_FORCECAST - - Force a cast to the output type even if it cannot be done - safely. Without this flag, a data cast will occur only if it - can be done safely, otherwise an error is raised. - - .. cvar:: NPY_UPDATEIFCOPY - - If *op* is already an array, but does not satisfy the - requirements, then a copy is made (which will satisfy the - requirements). If this flag is present and a copy (of an - object that is already an array) must be made, then the - corresponding :cdata:`NPY_UPDATEIFCOPY` flag is set in the returned - copy and *op* is made to be read-only. When the returned copy - is deleted (presumably after your calculations are complete), - its contents will be copied back into *op* and the *op* array - will be made writeable again. If *op* is not writeable to - begin with, then an error is raised. If *op* is not already an - array, then this flag has no effect. - - .. cvar:: NPY_BEHAVED - - :cdata:`NPY_ALIGNED` \| :cdata:`NPY_WRITEABLE` - - .. cvar:: NPY_CARRAY - - :cdata:`NPY_C_CONTIGUOUS` \| :cdata:`NPY_BEHAVED` - - .. cvar:: NPY_CARRAY_RO - - :cdata:`NPY_C_CONTIGUOUS` \| :cdata:`NPY_ALIGNED` - - .. cvar:: NPY_FARRAY - - :cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_BEHAVED` - - .. cvar:: NPY_FARRAY_RO - - :cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_ALIGNED` - - .. cvar:: NPY_DEFAULT - - :cdata:`NPY_CARRAY` - - .. cvar:: NPY_IN_ARRAY - - :cdata:`NPY_CONTIGUOUS` \| :cdata:`NPY_ALIGNED` - - .. cvar:: NPY_IN_FARRAY - - :cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_ALIGNED` - - .. cvar:: NPY_INOUT_ARRAY - - :cdata:`NPY_C_CONTIGUOUS` \| :cdata:`NPY_WRITEABLE` \| - :cdata:`NPY_ALIGNED` - - .. 
cvar:: NPY_INOUT_FARRAY - - :cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_WRITEABLE` \| - :cdata:`NPY_ALIGNED` - - .. cvar:: NPY_OUT_ARRAY - - :cdata:`NPY_C_CONTIGUOUS` \| :cdata:`NPY_WRITEABLE` \| - :cdata:`NPY_ALIGNED` \| :cdata:`NPY_UPDATEIFCOPY` - - .. cvar:: NPY_OUT_FARRAY - - :cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_WRITEABLE` \| - :cdata:`NPY_ALIGNED` \| :cdata:`NPY_UPDATEIFCOPY` - - -.. cfunction:: PyObject* PyArray_CheckFromAny(PyObject* op, PyArray_Descr* dtype, int min_depth, int max_depth, int requirements, PyObject* context) - - Nearly identical to :cfunc:`PyArray_FromAny` (...) except - *requirements* can contain :cdata:`NPY_NOTSWAPPED` (over-riding the - specification in *dtype*) and :cdata:`NPY_ELEMENTSTRIDES` which - indicates that the array should be aligned in the sense that the - strides are multiples of the element size. - -.. cvar:: NPY_NOTSWAPPED - - Make sure the returned array has a data-type descriptor that is in - machine byte-order, over-riding any specification in the *dtype* - argument. Normally, the byte-order requirement is determined by - the *dtype* argument. If this flag is set and the dtype argument - does not indicate a machine byte-order descriptor (or is NULL and - the object is already an array with a data-type descriptor that is - not in machine byte-order), then a new data-type descriptor is - created and used with its byte-order field set to native. - -.. cvar:: NPY_BEHAVED_NS - - :cdata:`NPY_ALIGNED` \| :cdata:`NPY_WRITEABLE` \| :cdata:`NPY_NOTSWAPPED` - -.. cvar:: NPY_ELEMENTSTRIDES - - Make sure the returned array has strides that are multiples of the - element size. - -.. cfunction:: PyObject* PyArray_FromArray(PyArrayObject* op, PyArray_Descr* newtype, int requirements) - - Special case of :cfunc:`PyArray_FromAny` for when *op* is already an - array but it needs to be of a specific *newtype* (including - byte-order) or has certain *requirements*. - -.. 
cfunction:: PyObject* PyArray_FromStructInterface(PyObject* op) - - Returns an ndarray object from a Python object that exposes the - :obj:`__array_struct__` method and follows the array interface - protocol. If the object does not contain this method then a - borrowed reference to :cdata:`Py_NotImplemented` is returned. - -.. cfunction:: PyObject* PyArray_FromInterface(PyObject* op) - - Returns an ndarray object from a Python object that exposes the - :obj:`__array_shape__` and :obj:`__array_typestr__` - methods following - the array interface protocol. If the object does not contain one - of these methods then a borrowed reference to :cdata:`Py_NotImplemented` - is returned. - -.. cfunction:: PyObject* PyArray_FromArrayAttr(PyObject* op, PyArray_Descr* dtype, PyObject* context) - - Return an ndarray object from a Python object that exposes the - :obj:`__array__` method. The :obj:`__array__` method can take 0, 1, or 2 - arguments ([dtype, context]) where *context* is used to pass - information about where the :obj:`__array__` method is being called - from (currently only used in ufuncs). - -.. cfunction:: PyObject* PyArray_ContiguousFromAny(PyObject* op, int typenum, int min_depth, int max_depth) - - This function returns a (C-style) contiguous and behaved - array from any nested sequence or array interface exporting - object, *op*, of (non-flexible) type given by the enumerated - *typenum*, of minimum depth *min_depth*, and of maximum depth - *max_depth*. Equivalent to a call to :cfunc:`PyArray_FromAny` with - requirements set to :cdata:`NPY_DEFAULT` and the type_num member of the - type argument set to *typenum*. - -.. cfunction:: PyObject *PyArray_FromObject(PyObject *op, int typenum, int min_depth, int max_depth) - - Return an aligned and in native-byteorder array from any nested - sequence or array-interface exporting object, op, of a type given by - the enumerated typenum. 
The minimum number of dimensions the array can - have is given by min_depth while the maximum is max_depth. This is - equivalent to a call to :cfunc:`PyArray_FromAny` with requirements set to - BEHAVED. - -.. cfunction:: PyObject* PyArray_EnsureArray(PyObject* op) - - This function **steals a reference** to ``op`` and makes sure that - ``op`` is a base-class ndarray. It special cases array scalars, - but otherwise calls :cfunc:`PyArray_FromAny` ( ``op``, NULL, 0, 0, - :cdata:`NPY_ENSUREARRAY`). - -.. cfunction:: PyObject* PyArray_FromString(char* string, npy_intp slen, PyArray_Descr* dtype, npy_intp num, char* sep) - - Construct a one-dimensional ndarray of a single type from a binary - or (ASCII) text ``string`` of length ``slen``. The data-type of - the array to-be-created is given by ``dtype``. If num is -1, then - **copy** the entire string and return an appropriately sized - array, otherwise, ``num`` is the number of items to **copy** from - the string. If ``sep`` is NULL (or ""), then interpret the string - as bytes of binary data, otherwise convert the sub-strings - separated by ``sep`` to items of data-type ``dtype``. Some - data-types may not be readable in text mode and an error will be - raised if that occurs. All errors return NULL. - -.. cfunction:: PyObject* PyArray_FromFile(FILE* fp, PyArray_Descr* dtype, npy_intp num, char* sep) - - Construct a one-dimensional ndarray of a single type from a binary - or text file. The open file pointer is ``fp``, the data-type of - the array to be created is given by ``dtype``. This must match - the data in the file. If ``num`` is -1, then read until the end of - the file and return an appropriately sized array, otherwise, - ``num`` is the number of items to read. If ``sep`` is NULL (or - ""), then read from the file in binary mode, otherwise read from - the file in text mode with ``sep`` providing the item - separator. Some array types cannot be read in text mode in which - case an error is raised. - -.. 
cfunction:: PyObject* PyArray_FromBuffer(PyObject* buf, PyArray_Descr* dtype, npy_intp count, npy_intp offset) - - Construct a one-dimensional ndarray of a single type from an - object, ``buf``, that exports the (single-segment) buffer protocol - (or has an attribute __buffer\__ that returns an object that - exports the buffer protocol). A writeable buffer will be tried - first followed by a read-only buffer. The :cdata:`NPY_WRITEABLE` - flag of the returned array will reflect which one was - successful. The data is assumed to start at ``offset`` bytes from - the start of the memory location for the object. The type of the - data in the buffer will be interpreted depending on the data-type - descriptor, ``dtype``. If ``count`` is negative then it will be - determined from the size of the buffer and the requested itemsize, - otherwise, ``count`` represents how many elements should be - converted from the buffer. - -.. cfunction:: int PyArray_CopyInto(PyArrayObject* dest, PyArrayObject* src) - - Copy from the source array, ``src``, into the destination array, - ``dest``, performing a data-type conversion if necessary. If an - error occurs return -1 (otherwise 0). The shape of ``src`` must be - broadcastable to the shape of ``dest``. The data areas of dest - and src must not overlap. - -.. cfunction:: int PyArray_MoveInto(PyArrayObject* dest, PyArrayObject* src) - - Move data from the source array, ``src``, into the destination - array, ``dest``, performing a data-type conversion if - necessary. If an error occurs return -1 (otherwise 0). The shape - of ``src`` must be broadcastable to the shape of ``dest``. The - data areas of dest and src may overlap. - -.. cfunction:: PyArrayObject* PyArray_GETCONTIGUOUS(PyObject* op) - - If ``op`` is already (C-style) contiguous and well-behaved then - just return a reference, otherwise return a (contiguous and - well-behaved) copy of the array. 
The parameter op must be a - (sub-class of an) ndarray and no checking for that is done. - -.. cfunction:: PyObject* PyArray_FROM_O(PyObject* obj) - - Convert ``obj`` to an ndarray. The argument can be any nested - sequence or object that exports the array interface. This is a - macro form of :cfunc:`PyArray_FromAny` using ``NULL``, 0, 0, 0 for the - other arguments. Your code must be able to handle any data-type - descriptor and any combination of data-flags to use this macro. - -.. cfunction:: PyObject* PyArray_FROM_OF(PyObject* obj, int requirements) - - Similar to :cfunc:`PyArray_FROM_O` except it can take an argument - of *requirements* indicating properties the resulting array must - have. Available requirements that can be enforced are - :cdata:`NPY_CONTIGUOUS`, :cdata:`NPY_F_CONTIGUOUS`, - :cdata:`NPY_ALIGNED`, :cdata:`NPY_WRITEABLE`, - :cdata:`NPY_NOTSWAPPED`, :cdata:`NPY_ENSURECOPY`, - :cdata:`NPY_UPDATEIFCOPY`, :cdata:`NPY_FORCECAST`, and - :cdata:`NPY_ENSUREARRAY`. Standard combinations of flags can also - be used. - -.. cfunction:: PyObject* PyArray_FROM_OT(PyObject* obj, int typenum) - - Similar to :cfunc:`PyArray_FROM_O` except it can take an argument of - *typenum* specifying the type-number of the returned array. - -.. cfunction:: PyObject* PyArray_FROM_OTF(PyObject* obj, int typenum, int requirements) - - Combination of :cfunc:`PyArray_FROM_OF` and :cfunc:`PyArray_FROM_OT` - allowing both a *typenum* and a *flags* argument to be provided. - -.. cfunction:: PyObject* PyArray_FROMANY(PyObject* obj, int typenum, int min, int max, int requirements) - - Similar to :cfunc:`PyArray_FromAny` except the data-type is - specified using a typenumber. :cfunc:`PyArray_DescrFromType` - (*typenum*) is passed directly to :cfunc:`PyArray_FromAny`. This - macro also adds :cdata:`NPY_DEFAULT` to requirements if - :cdata:`NPY_ENSURECOPY` is passed in as requirements. - -.. 
cfunction:: PyObject *PyArray_CheckAxis(PyObject* obj, int* axis, int requirements) - - Encapsulate the functionality of functions and methods that take - the axis= keyword and work properly with None as the axis - argument. The input array is ``obj``, while ``*axis`` is a - converted integer (so that >=MAXDIMS is the None value), and - ``requirements`` gives the needed properties of ``obj``. The - output is a converted version of the input so that requirements - are met and if needed a flattening has occurred. On output - negative values of ``*axis`` are converted and the new value is - checked to ensure consistency with the shape of ``obj``. - - -Dealing with types ------------------- - - -General check of Python Type -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. cfunction:: PyArray_Check(op) - - Evaluates true if *op* is a Python object whose type is a sub-type - of :cdata:`PyArray_Type`. - -.. cfunction:: PyArray_CheckExact(op) - - Evaluates true if *op* is a Python object with type - :cdata:`PyArray_Type`. - -.. cfunction:: PyArray_HasArrayInterface(op, out) - - If ``op`` implements any part of the array interface, then ``out`` - will contain a new reference to the newly created ndarray using - the interface or ``out`` will contain ``NULL`` if an error during - conversion occurs. Otherwise, out will contain a borrowed - reference to :cdata:`Py_NotImplemented` and no error condition is set. - -.. cfunction:: PyArray_HasArrayInterfaceType(op, type, context, out) - - If ``op`` implements any part of the array interface, then ``out`` - will contain a new reference to the newly created ndarray using - the interface or ``out`` will contain ``NULL`` if an error during - conversion occurs. Otherwise, out will contain a borrowed - reference to Py_NotImplemented and no error condition is set. - This version allows setting of the type and context in the part of - the array interface that looks for the :obj:`__array__` attribute. - -.. 
cfunction:: PyArray_IsZeroDim(op) - - Evaluates true if *op* is an instance of (a subclass of) - :cdata:`PyArray_Type` and has 0 dimensions. - -.. cfunction:: PyArray_IsScalar(op, cls) - - Evaluates true if *op* is an instance of :cdata:`Py{cls}ArrType_Type`. - -.. cfunction:: PyArray_CheckScalar(op) - - Evaluates true if *op* is either an array scalar (an instance of a - sub-type of :cdata:`PyGenericArr_Type` ), or an instance of (a - sub-class of) :cdata:`PyArray_Type` whose dimensionality is 0. - -.. cfunction:: PyArray_IsPythonScalar(op) - - Evaluates true if *op* is a builtin Python "scalar" object (int, - float, complex, str, unicode, long, bool). - -.. cfunction:: PyArray_IsAnyScalar(op) - - Evaluates true if *op* is either a Python scalar or an array - scalar (an instance of a sub-type of :cdata:`PyGenericArr_Type` ). - - -Data-type checking -^^^^^^^^^^^^^^^^^^ - -For the typenum macros, the argument is an integer representing an -enumerated array data type. For the array type checking macros the -argument must be a :ctype:`PyObject *` that can be directly interpreted as a -:ctype:`PyArrayObject *`. - -.. cfunction:: PyTypeNum_ISUNSIGNED(num) - -.. cfunction:: PyDataType_ISUNSIGNED(descr) - -.. cfunction:: PyArray_ISUNSIGNED(obj) - - Type represents an unsigned integer. - -.. cfunction:: PyTypeNum_ISSIGNED(num) - -.. cfunction:: PyDataType_ISSIGNED(descr) - -.. cfunction:: PyArray_ISSIGNED(obj) - - Type represents a signed integer. - -.. cfunction:: PyTypeNum_ISINTEGER(num) - -.. cfunction:: PyDataType_ISINTEGER(descr) - -.. cfunction:: PyArray_ISINTEGER(obj) - - Type represents any integer. - -.. cfunction:: PyTypeNum_ISFLOAT(num) - -.. cfunction:: PyDataType_ISFLOAT(descr) - -.. cfunction:: PyArray_ISFLOAT(obj) - - Type represents any floating point number. - -.. cfunction:: PyTypeNum_ISCOMPLEX(num) - -.. cfunction:: PyDataType_ISCOMPLEX(descr) - -.. cfunction:: PyArray_ISCOMPLEX(obj) - - Type represents any complex floating point number. - -.. 
cfunction:: PyTypeNum_ISNUMBER(num) - -.. cfunction:: PyDataType_ISNUMBER(descr) - -.. cfunction:: PyArray_ISNUMBER(obj) - - Type represents any integer, floating point, or complex floating point - number. - -.. cfunction:: PyTypeNum_ISSTRING(num) - -.. cfunction:: PyDataType_ISSTRING(descr) - -.. cfunction:: PyArray_ISSTRING(obj) - - Type represents a string data type. - -.. cfunction:: PyTypeNum_ISPYTHON(num) - -.. cfunction:: PyDataType_ISPYTHON(descr) - -.. cfunction:: PyArray_ISPYTHON(obj) - - Type represents an enumerated type corresponding to one of the - standard Python scalar (bool, int, float, or complex). - -.. cfunction:: PyTypeNum_ISFLEXIBLE(num) - -.. cfunction:: PyDataType_ISFLEXIBLE(descr) - -.. cfunction:: PyArray_ISFLEXIBLE(obj) - - Type represents one of the flexible array types ( :cdata:`NPY_STRING`, - :cdata:`NPY_UNICODE`, or :cdata:`NPY_VOID` ). - -.. cfunction:: PyTypeNum_ISUSERDEF(num) - -.. cfunction:: PyDataType_ISUSERDEF(descr) - -.. cfunction:: PyArray_ISUSERDEF(obj) - - Type represents a user-defined type. - -.. cfunction:: PyTypeNum_ISEXTENDED(num) - -.. cfunction:: PyDataType_ISEXTENDED(descr) - -.. cfunction:: PyArray_ISEXTENDED(obj) - - Type is either flexible or user-defined. - -.. cfunction:: PyTypeNum_ISOBJECT(num) - -.. cfunction:: PyDataType_ISOBJECT(descr) - -.. cfunction:: PyArray_ISOBJECT(obj) - - Type represents object data type. - -.. cfunction:: PyTypeNum_ISBOOL(num) - -.. cfunction:: PyDataType_ISBOOL(descr) - -.. cfunction:: PyArray_ISBOOL(obj) - - Type represents Boolean data type. - -.. cfunction:: PyDataType_HASFIELDS(descr) - -.. cfunction:: PyArray_HASFIELDS(obj) - - Type has fields associated with it. - -.. cfunction:: PyArray_ISNOTSWAPPED(m) - - Evaluates true if the data area of the ndarray *m* is in machine - byte-order according to the array's data-type descriptor. - -.. 
cfunction:: PyArray_ISBYTESWAPPED(m) - - Evaluates true if the data area of the ndarray *m* is **not** in - machine byte-order according to the array's data-type descriptor. - -.. cfunction:: Bool PyArray_EquivTypes(PyArray_Descr* type1, PyArray_Descr* type2) - - Return :cdata:`NPY_TRUE` if *type1* and *type2* actually represent - equivalent types for this platform (the fortran member of each - type is ignored). For example, on 32-bit platforms, - :cdata:`NPY_LONG` and :cdata:`NPY_INT` are equivalent. Otherwise - return :cdata:`NPY_FALSE`. - -.. cfunction:: Bool PyArray_EquivArrTypes(PyArrayObject* a1, PyArrayObject * a2) - - Return :cdata:`NPY_TRUE` if *a1* and *a2* are arrays with equivalent - types for this platform. - -.. cfunction:: Bool PyArray_EquivTypenums(int typenum1, int typenum2) - - Special case of :cfunc:`PyArray_EquivTypes` (...) that does not accept - flexible data types but may be easier to call. - -.. cfunction:: int PyArray_EquivByteorders({byteorder} b1, {byteorder} b2) - - True if byteorder characters ( :cdata:`NPY_LITTLE`, - :cdata:`NPY_BIG`, :cdata:`NPY_NATIVE`, :cdata:`NPY_IGNORE` ) are - either equal or equivalent as to their specification of a native - byte order. Thus, on a little-endian machine :cdata:`NPY_LITTLE` - and :cdata:`NPY_NATIVE` are equivalent where they are not - equivalent on a big-endian machine. - - -Converting data types -^^^^^^^^^^^^^^^^^^^^^ - -.. cfunction:: PyObject* PyArray_Cast(PyArrayObject* arr, int typenum) - - Mainly for backwards compatibility to the Numeric C-API and for - simple casts to non-flexible types. Return a new array object with - the elements of *arr* cast to the data-type *typenum* which must - be one of the enumerated types and not a flexible type. - -.. cfunction:: PyObject* PyArray_CastToType(PyArrayObject* arr, PyArray_Descr* type, int fortran) - - Return a new array of the *type* specified, casting the elements - of *arr* as appropriate. 
The fortran argument specifies - the ordering of the output array. - -.. cfunction:: int PyArray_CastTo(PyArrayObject* out, PyArrayObject* in) - - Cast the elements of the array *in* into the array *out*. The - output array should be writeable, have an integer-multiple of the - number of elements in the input array (more than one copy can be - placed in out), and have a data type that is one of the builtin - types. Returns 0 on success and -1 if an error occurs. - -.. cfunction:: PyArray_VectorUnaryFunc* PyArray_GetCastFunc(PyArray_Descr* from, int totype) - - Return the low-level casting function to cast from the given - descriptor to the builtin type number. If no casting function - exists return ``NULL`` and set an error. Using this function - instead of direct access to *from* ->f->cast will allow support of - any user-defined casting functions added to a descriptor's casting - dictionary. - -.. cfunction:: int PyArray_CanCastSafely(int fromtype, int totype) - - Returns non-zero if an array of data type *fromtype* can be cast - to an array of data type *totype* without losing information. An - exception is that 64-bit integers are allowed to be cast to 64-bit - floating point values even though this can lose precision on large - integers so as not to proliferate the use of long doubles without - explicit requests. Flexible array types are not checked according - to their lengths with this function. - -.. cfunction:: int PyArray_CanCastTo(PyArray_Descr* fromtype, PyArray_Descr* totype) - - Returns non-zero if an array of data type *fromtype* (which can - include flexible types) can be cast safely to an array of data - type *totype* (which can include flexible types). This is - basically a wrapper around :cfunc:`PyArray_CanCastSafely` with - additional support for size checking if *fromtype* and *totype* - are :cdata:`NPY_STRING` or :cdata:`NPY_UNICODE`. - -.. 
cfunction:: int PyArray_ObjectType(PyObject* op, int mintype) - - This function is useful for determining a common type that two or - more arrays can be converted to. It only works for non-flexible - array types as no itemsize information is passed. The *mintype* - argument represents the minimum type acceptable, and *op* - represents the object that will be converted to an array. The - return value is the enumerated typenumber that represents the - data-type that *op* should have. - -.. cfunction:: void PyArray_ArrayType(PyObject* op, PyArray_Descr* mintype, PyArray_Descr* outtype) - - This function works similarly to :cfunc:`PyArray_ObjectType` (...) - except it handles flexible arrays. The *mintype* argument can have - an itemsize member and the *outtype* argument will have an - itemsize member at least as big but perhaps bigger depending on - the object *op*. - -.. cfunction:: PyArrayObject** PyArray_ConvertToCommonType(PyObject* op, int* n) - - Convert a sequence of Python objects contained in *op* to an array - of ndarrays each having the same data type. The type is selected - based on the typenumber (larger type number is chosen over a - smaller one) ignoring objects that are only scalars. The length of - the sequence is returned in *n*, and an *n* -length array of - :ctype:`PyArrayObject` pointers is the return value (or ``NULL`` if an - error occurs). The returned array must be freed by the caller of - this routine (using :cfunc:`PyDataMem_FREE` ) and all the array objects - in it ``DECREF`` 'd or a memory-leak will occur. The example - template-code below shows a typical usage: - - .. code-block:: c - - mps = PyArray_ConvertToCommonType(obj, &n); - if (mps==NULL) return NULL; - {code} - - for (i=0; i<n; i++) Py_DECREF(mps[i]); - PyDataMem_FREE(mps); - {return} - -.. cfunction:: char* PyArray_Zero(PyArrayObject* arr) - - A pointer to newly created memory of size *arr* ->itemsize that - holds the representation of 0 for that type. The returned pointer, - *ret*, **must be freed** using :cfunc:`PyDataMem_FREE` (ret) when it is - not needed anymore. - -.. 
cfunction:: char* PyArray_One(PyArrayObject* arr) - - A pointer to newly created memory of size *arr* ->itemsize that - holds the representation of 1 for that type. The returned pointer, - *ret*, **must be freed** using :cfunc:`PyDataMem_FREE` (ret) when it - is not needed anymore. - -.. cfunction:: int PyArray_ValidType(int typenum) - - Returns :cdata:`NPY_TRUE` if *typenum* represents a valid type-number - (builtin or user-defined or character code). Otherwise, this - function returns :cdata:`NPY_FALSE`. - - -New data types -^^^^^^^^^^^^^^ - -.. cfunction:: void PyArray_InitArrFuncs(PyArray_ArrFuncs* f) - - Initialize all function pointers and members to ``NULL``. - -.. cfunction:: int PyArray_RegisterDataType(PyArray_Descr* dtype) - - Register a data-type as a new user-defined data type for - arrays. The type must have most of its entries filled in. This is - not always checked and errors can produce segfaults. In - particular, the typeobj member of the ``dtype`` structure must be - filled with a Python type that has a fixed-size element-size that - corresponds to the elsize member of *dtype*. Also the ``f`` - member must have the required functions: nonzero, copyswap, - copyswapn, getitem, setitem, and cast (some of the cast functions - may be ``NULL`` if no support is desired). To avoid confusion, you - should choose a unique character typecode but this is not enforced - and not relied on internally. - - A user-defined type number is returned that uniquely identifies - the type. A pointer to the new structure can then be obtained from - :cfunc:`PyArray_DescrFromType` using the returned type number. A -1 is - returned if an error occurs. If this *dtype* has already been - registered (checked only by the address of the pointer), then - return the previously-assigned type-number. - -.. 
cfunction:: int PyArray_RegisterCastFunc(PyArray_Descr* descr, int totype, PyArray_VectorUnaryFunc* castfunc) - - Register a low-level casting function, *castfunc*, to convert - from the data-type, *descr*, to the given data-type number, - *totype*. Any old casting function is over-written. A ``0`` is - returned on success or a ``-1`` on failure. - -.. cfunction:: int PyArray_RegisterCanCast(PyArray_Descr* descr, int totype, PyArray_SCALARKIND scalar) - - Register the data-type number, *totype*, as castable from - data-type object, *descr*, of the given *scalar* kind. Use - *scalar* = :cdata:`NPY_NOSCALAR` to register that an array of data-type - *descr* can be cast safely to a data-type whose type_number is - *totype*. - - -Special functions for PyArray_OBJECT -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. cfunction:: int PyArray_INCREF(PyArrayObject* op) - - Used for an array, *op*, that contains any Python objects. It - increments the reference count of every object in the array - according to the data-type of *op*. A -1 is returned if an error - occurs, otherwise 0 is returned. - -.. cfunction:: void PyArray_Item_INCREF(char* ptr, PyArray_Descr* dtype) - - A function to INCREF all the objects at the location *ptr* - according to the data-type *dtype*. If *ptr* is the start of a - record with an object at any offset, then this will (recursively) - increment the reference count of all object-like items in the - record. - -.. cfunction:: int PyArray_XDECREF(PyArrayObject* op) - - Used for an array, *op*, that contains any Python objects. It - decrements the reference count of every object in the array - according to the data-type of *op*. Normal return value is 0. A - -1 is returned if an error occurs. - -.. cfunction:: void PyArray_Item_XDECREF(char* ptr, PyArray_Descr* dtype) - - A function to XDECREF all the object-like items at the location - *ptr* as recorded in the data-type, *dtype*.
This works - recursively so that if ``dtype`` itself has fields with data-types - that contain object-like items, all the object-like fields will be - XDECREF ``'d``. - -.. cfunction:: void PyArray_FillObjectArray(PyArrayObject* arr, PyObject* obj) - - Fill a newly created array with a single value *obj* at all - locations in the structure with object data-types. No checking is - performed but *arr* must be of data-type :ctype:`PyArray_OBJECT` and be - single-segment and uninitialized (no previous objects in - position). Use :cfunc:`PyArray_DECREF` (*arr*) if you need to - decrement all the items in the object array prior to calling this - function. - - -Array flags ------------ - -The ``flags`` attribute of the ``PyArrayObject`` structure contains -important information about the memory used by the array (pointed to -by the data member). This flag information must be kept accurate or -strange results and even segfaults may result. - -There are 6 (binary) flags that describe the memory area used by the -data buffer. These constants are defined in ``arrayobject.h`` and -determine the bit-position of the flag. Python exposes a nice -attribute-based interface as well as a dictionary-like interface for -getting (and, if appropriate, setting) these flags. - -Memory areas of all kinds can be pointed to by an ndarray, -necessitating these flags. If you get an arbitrary ``PyArrayObject`` -in C-code, you need to be aware of the flags that are set. If you -need to guarantee a certain kind of array (like ``NPY_CONTIGUOUS`` and -``NPY_BEHAVED``), then pass these requirements into the -PyArray_FromAny function. - - -Basic Array Flags -^^^^^^^^^^^^^^^^^ - -An ndarray can have a data segment that is not a simple contiguous -chunk of well-behaved memory you can manipulate. It may not be aligned -with word boundaries (very important on some platforms). It might have -its data in a different byte-order than the machine recognizes. It -might not be writeable.
It might be in Fortran-contiguous order. The -array flags are used to indicate what can be said about data -associated with an array. - -.. cvar:: NPY_C_CONTIGUOUS - - The data area is in C-style contiguous order (last index varies the - fastest). - -.. cvar:: NPY_F_CONTIGUOUS - - The data area is in Fortran-style contiguous order (first index varies - the fastest). - -Notice that contiguous 1-d arrays are always both ``NPY_FORTRAN`` -contiguous and C contiguous. Both of these flags can be checked and -are convenience flags only, as whether or not an array is -``NPY_CONTIGUOUS`` or ``NPY_FORTRAN`` can be determined by the -``strides``, ``dimensions``, and ``itemsize`` attributes. - -.. cvar:: NPY_OWNDATA - - The data area is owned by this array. - -.. cvar:: NPY_ALIGNED - - The data area is aligned appropriately (for all strides). - -.. cvar:: NPY_WRITEABLE - - The data area can be written to. - - Notice that the above 3 flags are defined so that a new, - well-behaved array has these flags defined as true. - -.. cvar:: NPY_UPDATEIFCOPY - - The data area represents a (well-behaved) copy whose information - should be transferred back to the original when this array is deleted. - - This is a special flag that is set if this array represents a copy - made because a user required certain flags in - :cfunc:`PyArray_FromAny` and a copy had to be made of some other - array (and the user asked for this flag to be set in such a - situation). The base attribute then points to the "misbehaved" - array (which is set read_only). When the array with this flag set - is deallocated, it will copy its contents back to the "misbehaved" - array (casting if necessary) and will reset the "misbehaved" array - to :cdata:`NPY_WRITEABLE`. If the "misbehaved" array was not - :cdata:`NPY_WRITEABLE` to begin with then :cfunc:`PyArray_FromAny` - would have returned an error because :cdata:`NPY_UPDATEIFCOPY` - would not have been possible.
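
As an illustration (not part of the original reference), the flag
requirements described above are typically enforced by passing them to
:cfunc:`PyArray_FromAny`. The helper name ``double_in_place`` below is
hypothetical; this is only a sketch of the pattern:

.. code-block:: c

   #include <Python.h>
   #include <numpy/arrayobject.h>

   /* Hypothetical helper: guarantee an aligned, writeable,
      C-contiguous array of doubles before touching raw memory. */
   static PyObject *
   double_in_place(PyObject *obj)
   {
       PyArrayObject *arr;
       double *data;
       npy_intp i, n;

       /* NPY_CARRAY is NPY_C_CONTIGUOUS | NPY_ALIGNED | NPY_WRITEABLE;
          PyArray_FromAny makes a (possibly UPDATEIFCOPY) copy only if
          obj does not already satisfy these requirements. */
       arr = (PyArrayObject *)PyArray_FromAny(
                 obj, PyArray_DescrFromType(NPY_DOUBLE),
                 0, 0, NPY_CARRAY, NULL);
       if (arr == NULL) {
           return NULL;
       }
       /* The flags are now guaranteed, so a flat C loop is safe. */
       data = (double *)PyArray_DATA(arr);
       n = PyArray_SIZE(arr);
       for (i = 0; i < n; i++) {
           data[i] *= 2.0;
       }
       return (PyObject *)arr;
   }

If a copy had to be made and :cdata:`NPY_UPDATEIFCOPY` was also
requested, the modified data would be written back to the original
"misbehaved" array when ``arr`` is deallocated.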
- -:cfunc:`PyArray_UpdateFlags` (obj, flags) will update the -``obj->flags`` for ``flags`` which can be any of -:cdata:`NPY_CONTIGUOUS`, :cdata:`NPY_FORTRAN`, :cdata:`NPY_ALIGNED`, -or :cdata:`NPY_WRITEABLE`. - - -Combinations of array flags -^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. cvar:: NPY_BEHAVED - - :cdata:`NPY_ALIGNED` \| :cdata:`NPY_WRITEABLE` - -.. cvar:: NPY_CARRAY - - :cdata:`NPY_C_CONTIGUOUS` \| :cdata:`NPY_BEHAVED` - -.. cvar:: NPY_CARRAY_RO - - :cdata:`NPY_C_CONTIGUOUS` \| :cdata:`NPY_ALIGNED` - -.. cvar:: NPY_FARRAY - - :cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_BEHAVED` - -.. cvar:: NPY_FARRAY_RO - - :cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_ALIGNED` - -.. cvar:: NPY_DEFAULT - - :cdata:`NPY_CARRAY` - -.. cvar:: NPY_UPDATE_ALL - - :cdata:`NPY_C_CONTIGUOUS` \| :cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_ALIGNED` - - -Flag-like constants -^^^^^^^^^^^^^^^^^^^ - -These constants are used in :cfunc:`PyArray_FromAny` (and its macro forms) to -specify desired properties of the new array. - -.. cvar:: NPY_FORCECAST - - Cast to the desired type, even if it can't be done without losing - information. - -.. cvar:: NPY_ENSURECOPY - - Make sure the resulting array is a copy of the original. - -.. cvar:: NPY_ENSUREARRAY - - Make sure the resulting object is an actual ndarray (or bigndarray), - and not a sub-class. - -.. cvar:: NPY_NOTSWAPPED - - Only used in :cfunc:`PyArray_CheckFromAny` to over-ride the byteorder - of the data-type object passed in. - -.. cvar:: NPY_BEHAVED_NS - - :cdata:`NPY_ALIGNED` \| :cdata:`NPY_WRITEABLE` \| :cdata:`NPY_NOTSWAPPED` - - -Flag checking -^^^^^^^^^^^^^ - -For all of these macros *arr* must be an instance of a (subclass of) -:cdata:`PyArray_Type`, but no checking is done. - -.. cfunction:: PyArray_CHKFLAGS(arr, flags) - - The first parameter, arr, must be an ndarray or subclass. 
The - parameter, *flags*, should be an integer consisting of bitwise - combinations of the possible flags an array can have: - :cdata:`NPY_C_CONTIGUOUS`, :cdata:`NPY_F_CONTIGUOUS`, - :cdata:`NPY_OWNDATA`, :cdata:`NPY_ALIGNED`, - :cdata:`NPY_WRITEABLE`, :cdata:`NPY_UPDATEIFCOPY`. - -.. cfunction:: PyArray_ISCONTIGUOUS(arr) - - Evaluates true if *arr* is C-style contiguous. - -.. cfunction:: PyArray_ISFORTRAN(arr) - - Evaluates true if *arr* is Fortran-style contiguous. - -.. cfunction:: PyArray_ISWRITEABLE(arr) - - Evaluates true if the data area of *arr* can be written to. - -.. cfunction:: PyArray_ISALIGNED(arr) - - Evaluates true if the data area of *arr* is properly aligned on - the machine. - -.. cfunction:: PyArray_ISBEHAVED(arr) - - Evaluates true if the data area of *arr* is aligned and writeable - and in machine byte-order according to its descriptor. - -.. cfunction:: PyArray_ISBEHAVED_RO(arr) - - Evaluates true if the data area of *arr* is aligned and in machine - byte-order. - -.. cfunction:: PyArray_ISCARRAY(arr) - - Evaluates true if the data area of *arr* is C-style contiguous, - and :cfunc:`PyArray_ISBEHAVED` (*arr*) is true. - -.. cfunction:: PyArray_ISFARRAY(arr) - - Evaluates true if the data area of *arr* is Fortran-style - contiguous and :cfunc:`PyArray_ISBEHAVED` (*arr*) is true. - -.. cfunction:: PyArray_ISCARRAY_RO(arr) - - Evaluates true if the data area of *arr* is C-style contiguous, - aligned, and in machine byte-order. - -.. cfunction:: PyArray_ISFARRAY_RO(arr) - - Evaluates true if the data area of *arr* is Fortran-style - contiguous, aligned, and in machine byte-order. - -.. cfunction:: PyArray_ISONESEGMENT(arr) - - Evaluates true if the data area of *arr* consists of a single - (C-style or Fortran-style) contiguous segment. - -..
cfunction:: void PyArray_UpdateFlags(PyArrayObject* arr, int flagmask) - - The :cdata:`NPY_C_CONTIGUOUS`, :cdata:`NPY_ALIGNED`, and - :cdata:`NPY_F_CONTIGUOUS` array flags can be "calculated" from the - array object itself. This routine updates one or more of these - flags of *arr* as specified in *flagmask* by performing the - required calculation. - - -.. warning:: - - It is important to keep the flags updated (using - :cfunc:`PyArray_UpdateFlags` can help) whenever a manipulation with an - array is performed that might cause them to change. Later - calculations in NumPy that rely on the state of these flags do not - repeat the calculation to update them. - - -Array method alternative API ----------------------------- - - -Conversion -^^^^^^^^^^ - -.. cfunction:: PyObject* PyArray_GetField(PyArrayObject* self, PyArray_Descr* dtype, int offset) - - Equivalent to :meth:`ndarray.getfield` (*self*, *dtype*, *offset*). Return - a new array of the given *dtype* using the data in the current - array at a specified *offset* in bytes. The *offset* plus the - itemsize of the new array type must be less than *self* - ->descr->elsize or an error is raised. The same shape and strides - as the original array are used. Therefore, this function has the - effect of returning a field from a record array. But, it can also - be used to select specific bytes or groups of bytes from any array - type. - -.. cfunction:: int PyArray_SetField(PyArrayObject* self, PyArray_Descr* dtype, int offset, PyObject* val) - - Equivalent to :meth:`ndarray.setfield` (*self*, *val*, *dtype*, *offset* - ). Set the field starting at *offset* in bytes and of the given - *dtype* to *val*. The *offset* plus *dtype* ->elsize must be less - than *self* ->descr->elsize or an error is raised. Otherwise, the - *val* argument is converted to an array and copied into the field - pointed to. 
If necessary, the elements of *val* are repeated to - fill the destination array. But the number of elements in the - destination must be an integer multiple of the number of elements - in *val*. - -.. cfunction:: PyObject* PyArray_Byteswap(PyArrayObject* self, Bool inplace) - - Equivalent to :meth:`ndarray.byteswap` (*self*, *inplace*). Return an array - whose data area is byteswapped. If *inplace* is non-zero, then do - the byteswap inplace and return a reference to self. Otherwise, - create a byteswapped copy and leave self unchanged. - -.. cfunction:: PyObject* PyArray_NewCopy(PyArrayObject* old, NPY_ORDER order) - - Equivalent to :meth:`ndarray.copy` (*self*, *fortran*). Make a copy of the - *old* array. The returned array is always aligned and writeable - with data interpreted the same as the old array. If *order* is - :cdata:`NPY_CORDER`, then a C-style contiguous array is returned. If - *order* is :cdata:`NPY_FORTRANORDER`, then a Fortran-style contiguous - array is returned. If *order* is :cdata:`NPY_ANYORDER`, then the array - returned is Fortran-style contiguous only if the old one is; - otherwise, it is C-style contiguous. - -.. cfunction:: PyObject* PyArray_ToList(PyArrayObject* self) - - Equivalent to :meth:`ndarray.tolist` (*self*). Return a nested Python list - from *self*. - -.. cfunction:: PyObject* PyArray_ToString(PyArrayObject* self, NPY_ORDER order) - - Equivalent to :meth:`ndarray.tostring` (*self*, *order*). Return the bytes - of this array in a Python string. - -.. cfunction:: PyObject* PyArray_ToFile(PyArrayObject* self, FILE* fp, char* sep, char* format) - - Write the contents of *self* to the file pointer *fp* in C-style - contiguous fashion. Write the data as binary bytes if *sep* is the - string "" or ``NULL``. Otherwise, write the contents of *self* as - text using the *sep* string as the item separator. Each item will - be printed to the file.
If the *format* string is not ``NULL`` or - "", then it is a Python print statement format string showing how - the items are to be written. - -.. cfunction:: int PyArray_Dump(PyObject* self, PyObject* file, int protocol) - - Pickle the object in *self* to the given *file* (either a string - or a Python file object). If *file* is a Python string it is - considered to be the name of a file which is then opened in binary - mode. The given *protocol* is used (if *protocol* is negative, - the highest available protocol is used). This is a simple wrapper around - cPickle.dump(*self*, *file*, *protocol*). - -.. cfunction:: PyObject* PyArray_Dumps(PyObject* self, int protocol) - - Pickle the object in *self* to a Python string and return it. Use - the Pickle *protocol* provided (or the highest available if - *protocol* is negative). - -.. cfunction:: int PyArray_FillWithScalar(PyArrayObject* arr, PyObject* obj) - - Fill the array, *arr*, with the given scalar object, *obj*. The - object is first converted to the data type of *arr*, and then - copied into every location. A -1 is returned if an error occurs, - otherwise 0 is returned. - -.. cfunction:: PyObject* PyArray_View(PyArrayObject* self, PyArray_Descr* dtype) - - Equivalent to :meth:`ndarray.view` (*self*, *dtype*). Return a new view of - the array *self* as possibly a different data-type, *dtype*. If - *dtype* is ``NULL``, then the returned array will have the same - data type as *self*. The new data-type must be consistent with - the size of *self*. Either the itemsizes must be identical, or - *self* must be single-segment and the total number of bytes must - be the same. In the latter case the dimensions of the returned - array will be altered in the last (or first for Fortran-style - contiguous arrays) dimension. The data area of the returned array - and self is exactly the same. - - -Shape Manipulation -^^^^^^^^^^^^^^^^^^ - -..
cfunction:: PyObject* PyArray_Newshape(PyArrayObject* self, PyArray_Dims* newshape) - - Result will be a new array (pointing to the same memory location - as *self* if possible), but having a shape given by *newshape*. - If the new shape is not compatible with the strides of *self*, - then a copy of the array with the new specified shape will be - returned. - -.. cfunction:: PyObject* PyArray_Reshape(PyArrayObject* self, PyObject* shape) - - Equivalent to :meth:`ndarray.reshape` (*self*, *shape*) where *shape* is a - sequence. Converts *shape* to a :ctype:`PyArray_Dims` structure and - calls :cfunc:`PyArray_Newshape` internally. - -.. cfunction:: PyObject* PyArray_Squeeze(PyArrayObject* self) - - Equivalent to :meth:`ndarray.squeeze` (*self*). Return a new view of *self* - with all of the dimensions of length 1 removed from the shape. - -.. warning:: - - matrix objects are always 2-dimensional. Therefore, - :cfunc:`PyArray_Squeeze` has no effect on arrays of matrix sub-class. - -.. cfunction:: PyObject* PyArray_SwapAxes(PyArrayObject* self, int a1, int a2) - - Equivalent to :meth:`ndarray.swapaxes` (*self*, *a1*, *a2*). The returned - array is a new view of the data in *self* with the given axes, - *a1* and *a2*, swapped. - -.. cfunction:: PyObject* PyArray_Resize(PyArrayObject* self, PyArray_Dims* newshape, int refcheck, NPY_ORDER fortran) - - Equivalent to :meth:`ndarray.resize` (*self*, *newshape*, refcheck - ``=`` *refcheck*, order= fortran ). This function only works on - single-segment arrays. It changes the shape of *self* inplace and - will reallocate the memory for *self* if *newshape* has a - different total number of elements than the old shape. If - reallocation is necessary, then *self* must own its data, have - *self* ``->base==NULL``, have *self* ``->weakrefs==NULL``, and - (unless refcheck is 0) not be referenced by any other array. A - reference to the new array is returned.
The fortran argument can - be :cdata:`NPY_ANYORDER`, :cdata:`NPY_CORDER`, or - :cdata:`NPY_FORTRANORDER`. It currently has no effect. Eventually - it could be used to determine how the resize operation should view - the data when constructing a differently-dimensioned array. - -.. cfunction:: PyObject* PyArray_Transpose(PyArrayObject* self, PyArray_Dims* permute) - - Equivalent to :meth:`ndarray.transpose` (*self*, *permute*). Permute the - axes of the ndarray object *self* according to the data structure - *permute* and return the result. If *permute* is ``NULL``, then - the resulting array has its axes reversed. For example if *self* - has shape :math:`10\times20\times30`, and *permute* ``.ptr`` is - (0,2,1) the shape of the result is :math:`10\times30\times20.` If - *permute* is ``NULL``, the shape of the result is - :math:`30\times20\times10.` - -.. cfunction:: PyObject* PyArray_Flatten(PyArrayObject* self, NPY_ORDER order) - - Equivalent to :meth:`ndarray.flatten` (*self*, *order*). Return a 1-d copy - of the array. If *order* is :cdata:`NPY_FORTRANORDER` the elements are - scanned out in Fortran order (first-dimension varies the - fastest). If *order* is :cdata:`NPY_CORDER`, the elements of ``self`` - are scanned in C-order (last dimension varies the fastest). If - *order* is :cdata:`NPY_ANYORDER`, then the result of - :cfunc:`PyArray_ISFORTRAN` (*self*) is used to determine which order - to flatten. - -.. cfunction:: PyObject* PyArray_Ravel(PyArrayObject* self, NPY_ORDER order) - - Equivalent to *self*.ravel(*order*). Same basic functionality - as :cfunc:`PyArray_Flatten` (*self*, *order*) except if *order* is 0 - and *self* is C-style contiguous, the shape is altered but no copy - is performed. - - -Item selection and manipulation -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -..
cfunction:: PyObject* PyArray_TakeFrom(PyArrayObject* self, PyObject* indices, int axis, PyArrayObject* ret, NPY_CLIPMODE clipmode) - - Equivalent to :meth:`ndarray.take` (*self*, *indices*, *axis*, *ret*, - *clipmode*) except *axis* =None in Python is obtained by setting - *axis* = :cdata:`NPY_MAXDIMS` in C. Extract the items from self - indicated by the integer-valued *indices* along the given *axis.* - The clipmode argument can be :cdata:`NPY_RAISE`, :cdata:`NPY_WRAP`, or - :cdata:`NPY_CLIP` to indicate what to do with out-of-bound indices. The - *ret* argument can specify an output array rather than having one - created internally. - -.. cfunction:: PyObject* PyArray_PutTo(PyArrayObject* self, PyObject* values, PyObject* indices, NPY_CLIPMODE clipmode) - - Equivalent to *self*.put(*values*, *indices*, *clipmode* - ). Put *values* into *self* at the corresponding (flattened) - *indices*. If *values* is too small it will be repeated as - necessary. - -.. cfunction:: PyObject* PyArray_PutMask(PyArrayObject* self, PyObject* values, PyObject* mask) - - Place the *values* in *self* wherever corresponding positions - (using a flattened context) in *mask* are true. The *mask* and - *self* arrays must have the same total number of elements. If - *values* is too small, it will be repeated as necessary. - -.. cfunction:: PyObject* PyArray_Repeat(PyArrayObject* self, PyObject* op, int axis) - - Equivalent to :meth:`ndarray.repeat` (*self*, *op*, *axis*). Copy the - elements of *self*, *op* times along the given *axis*. Either - *op* is a scalar integer or a sequence of length *self* - ->dimensions[ *axis* ] indicating how many times to repeat each - item along the axis. - -.. cfunction:: PyObject* PyArray_Choose(PyArrayObject* self, PyObject* op, PyArrayObject* ret, NPY_CLIPMODE clipmode) - - Equivalent to :meth:`ndarray.choose` (*self*, *op*, *ret*, *clipmode*). 
- Create a new array by selecting elements from the sequence of - arrays in *op* based on the integer values in *self*. The arrays - must all be broadcastable to the same shape and the entries in - *self* should be between 0 and len(*op*). The output is placed - in *ret* unless it is ``NULL`` in which case a new output is - created. The *clipmode* argument determines behavior for when - entries in *self* are not between 0 and len(*op*). - - .. cvar:: NPY_RAISE - - raise a ValueError; - - .. cvar:: NPY_WRAP - - wrap values < 0 by adding len(*op*) and values >=len(*op*) - by subtracting len(*op*) until they are in range; - - .. cvar:: NPY_CLIP - - all values are clipped to the region [0, len(*op*) ). - - -.. cfunction:: PyObject* PyArray_Sort(PyArrayObject* self, int axis) - - Equivalent to :meth:`ndarray.sort` (*self*, *axis*). Return an array with - the items of *self* sorted along *axis*. - -.. cfunction:: PyObject* PyArray_ArgSort(PyArrayObject* self, int axis) - - Equivalent to :meth:`ndarray.argsort` (*self*, *axis*). Return an array of - indices such that selection of these indices along the given - ``axis`` would return a sorted version of *self*. If *self* - ->descr is a data-type with fields defined, then - self->descr->names is used to determine the sort order. A - comparison where the first field is equal will use the second - field and so on. To alter the sort order of a record array, create - a new data-type with a different order of names and construct a - view of the array with that new data-type. - -.. cfunction:: PyObject* PyArray_LexSort(PyObject* sort_keys, int axis) - - Given a sequence of arrays (*sort_keys*) of the same shape, - return an array of indices (similar to :cfunc:`PyArray_ArgSort` (...)) - that would sort the arrays lexicographically. A lexicographic sort - specifies that when two keys are found to be equal, the order is - based on comparison of subsequent keys. 
A merge sort (which leaves - equal entries unmoved) is required to be defined for the - types. The sort is accomplished by sorting the indices first using - the first *sort_key* and then using the second *sort_key* and so - forth. This is equivalent to the lexsort(*sort_keys*, *axis*) - Python command. Because of the way the merge-sort works, be sure - to understand the order the *sort_keys* must be in (reversed from - the order you would use when comparing two elements). - - If these arrays are all collected in a record array, then - :cfunc:`PyArray_Sort` (...) can also be used to sort the array - directly. - -.. cfunction:: PyObject* PyArray_SearchSorted(PyArrayObject* self, PyObject* values) - - Equivalent to :meth:`ndarray.searchsorted` (*self*, *values*). Assuming - *self* is a 1-d array in ascending order representing bin - boundaries then the output is an array the same shape as *values* - of bin numbers, giving the bin into which each item in *values* - would be placed. No checking is done on whether or not self is in - ascending order. - -.. cfunction:: PyObject* PyArray_Diagonal(PyArrayObject* self, int offset, int axis1, int axis2) - - Equivalent to :meth:`ndarray.diagonal` (*self*, *offset*, *axis1*, *axis2* - ). Return the *offset* diagonals of the 2-d arrays defined by - *axis1* and *axis2*. - -.. cfunction:: PyObject* PyArray_Nonzero(PyArrayObject* self) - - Equivalent to :meth:`ndarray.nonzero` (*self*). Returns a tuple of index - arrays that select elements of *self* that are nonzero. If (nd= - :cfunc:`PyArray_NDIM` ( ``self`` ))==1, then a single index array is - returned. The index arrays have data type :cdata:`NPY_INTP`. If a - tuple is returned (nd :math:`\neq` 1), then its length is nd. - -.. cfunction:: PyObject* PyArray_Compress(PyArrayObject* self, PyObject* condition, int axis, PyArrayObject* out) - - Equivalent to :meth:`ndarray.compress` (*self*, *condition*, *axis* - ). 
Return the elements along *axis* corresponding to elements of - *condition* that are true. - - -Calculation -^^^^^^^^^^^ - -.. tip:: - - Pass in :cdata:`NPY_MAXDIMS` for axis in order to achieve the same - effect that is obtained by passing in *axis* = :const:`None` in Python - (treating the array as a 1-d array). - -.. cfunction:: PyObject* PyArray_ArgMax(PyArrayObject* self, int axis) - - Equivalent to :meth:`ndarray.argmax` (*self*, *axis*). Return the index of - the largest element of *self* along *axis*. - -.. cfunction:: PyObject* PyArray_ArgMin(PyArrayObject* self, int axis) - - Equivalent to :meth:`ndarray.argmin` (*self*, *axis*). Return the index of - the smallest element of *self* along *axis*. - -.. cfunction:: PyObject* PyArray_Max(PyArrayObject* self, int axis, PyArrayObject* out) - - Equivalent to :meth:`ndarray.max` (*self*, *axis*). Return the largest - element of *self* along the given *axis*. - -.. cfunction:: PyObject* PyArray_Min(PyArrayObject* self, int axis, PyArrayObject* out) - - Equivalent to :meth:`ndarray.min` (*self*, *axis*). Return the smallest - element of *self* along the given *axis*. - -.. cfunction:: PyObject* PyArray_Ptp(PyArrayObject* self, int axis, PyArrayObject* out) - - Equivalent to :meth:`ndarray.ptp` (*self*, *axis*). Return the difference - between the largest element of *self* along *axis* and the - smallest element of *self* along *axis*. - - - -.. note:: - - The rtype argument specifies the data-type the reduction should - take place over. This is important if the data-type of the array - is not "large" enough to handle the output. By default, all - integer data-types are made at least as large as :cdata:`NPY_LONG` - for the "add" and "multiply" ufuncs (which form the basis for - mean, sum, cumsum, prod, and cumprod functions). - -.. cfunction:: PyObject* PyArray_Mean(PyArrayObject* self, int axis, int rtype, PyArrayObject* out) - - Equivalent to :meth:`ndarray.mean` (*self*, *axis*, *rtype*). 
Returns the - mean of the elements along the given *axis*, using the enumerated - type *rtype* as the data type to sum in. Default sum behavior is - obtained using :cdata:`PyArray_NOTYPE` for *rtype*. - -.. cfunction:: PyObject* PyArray_Trace(PyArrayObject* self, int offset, int axis1, int axis2, int rtype, PyArrayObject* out) - - Equivalent to :meth:`ndarray.trace` (*self*, *offset*, *axis1*, *axis2*, - *rtype*). Return the sum (using *rtype* as the data type of - summation) over the *offset* diagonal elements of the 2-d arrays - defined by *axis1* and *axis2* variables. A positive offset - chooses diagonals above the main diagonal. A negative offset - selects diagonals below the main diagonal. - -.. cfunction:: PyObject* PyArray_Clip(PyArrayObject* self, PyObject* min, PyObject* max) - - Equivalent to :meth:`ndarray.clip` (*self*, *min*, *max*). Clip an array, - *self*, so that values larger than *max* are fixed to *max* and - values less than *min* are fixed to *min*. - -.. cfunction:: PyObject* PyArray_Conjugate(PyArrayObject* self) - - Equivalent to :meth:`ndarray.conjugate` (*self*). - Return the complex conjugate of *self*. If *self* is not of - complex data type, then return *self* with an incremented reference. - -.. cfunction:: PyObject* PyArray_Round(PyArrayObject* self, int decimals, PyArrayObject* out) - - Equivalent to :meth:`ndarray.round` (*self*, *decimals*, *out*). Returns - the array with elements rounded to the nearest decimal place. The - decimal place is defined as the :math:`10^{-\textrm{decimals}}` - digit so that negative *decimals* cause rounding to the nearest 10's, - 100's, etc. If *out* is ``NULL``, then the output array is created; - otherwise, the output is placed in *out*, which must be the correct - size and type. - -.. cfunction:: PyObject* PyArray_Std(PyArrayObject* self, int axis, int rtype, PyArrayObject* out) - - Equivalent to :meth:`ndarray.std` (*self*, *axis*, *rtype*).
Return the - standard deviation using data along *axis* converted to data type - *rtype*. - -.. cfunction:: PyObject* PyArray_Sum(PyArrayObject* self, int axis, int rtype, PyArrayObject* out) - - Equivalent to :meth:`ndarray.sum` (*self*, *axis*, *rtype*). Return 1-d - vector sums of elements in *self* along *axis*. Perform the sum - after converting data to data type *rtype*. - -.. cfunction:: PyObject* PyArray_CumSum(PyArrayObject* self, int axis, int rtype, PyArrayObject* out) - - Equivalent to :meth:`ndarray.cumsum` (*self*, *axis*, *rtype*). Return - cumulative 1-d sums of elements in *self* along *axis*. Perform - the sum after converting data to data type *rtype*. - -.. cfunction:: PyObject* PyArray_Prod(PyArrayObject* self, int axis, int rtype, PyArrayObject* out) - - Equivalent to :meth:`ndarray.prod` (*self*, *axis*, *rtype*). Return 1-d - products of elements in *self* along *axis*. Perform the product - after converting data to data type *rtype*. - -.. cfunction:: PyObject* PyArray_CumProd(PyArrayObject* self, int axis, int rtype, PyArrayObject* out) - - Equivalent to :meth:`ndarray.cumprod` (*self*, *axis*, *rtype*). Return - 1-d cumulative products of elements in ``self`` along ``axis``. - Perform the product after converting data to data type ``rtype``. - -.. cfunction:: PyObject* PyArray_All(PyArrayObject* self, int axis, PyArrayObject* out) - - Equivalent to :meth:`ndarray.all` (*self*, *axis*). Return an array with - True elements for every 1-d sub-array of ``self`` defined by - ``axis`` in which all the elements are True. - -.. cfunction:: PyObject* PyArray_Any(PyArrayObject* self, int axis, PyArrayObject* out) - - Equivalent to :meth:`ndarray.any` (*self*, *axis*). Return an array with - True elements for every 1-d sub-array of *self* defined by *axis* - in which any of the elements are True. - -Functions ---------- - - -Array Functions -^^^^^^^^^^^^^^^ - -.. 
cfunction:: int PyArray_AsCArray(PyObject** op, void* ptr, npy_intp* dims, int nd, int typenum, int itemsize) - - Sometimes it is useful to access a multidimensional array as a - C-style multi-dimensional array so that algorithms can be - implemented using C's a[i][j][k] syntax. This routine returns a - pointer, *ptr*, that simulates this kind of C-style array, for - 1-, 2-, and 3-d ndarrays. - - :param op: - - The address to any Python object. This Python object will be replaced - with an equivalent well-behaved, C-style contiguous, ndarray of the - given data type specified by the last two arguments. Be sure that - stealing a reference in this way to the input object is justified. - - :param ptr: - - The address to a (ctype* for 1-d, ctype** for 2-d or ctype*** for 3-d) - variable where ctype is the equivalent C-type for the data type. On - return, *ptr* will be addressable as a 1-d, 2-d, or 3-d array. - - :param dims: - - An output array that contains the shape of the array object. This - array gives boundaries on any looping that will take place. - - :param nd: - - The dimensionality of the array (1, 2, or 3). - - :param typenum: - - The expected data type of the array. - - :param itemsize: - - This argument is only needed when *typenum* represents a - flexible array. Otherwise it should be 0. - -.. note:: - - The simulation of a C-style array is not complete for 2-d and 3-d - arrays. For example, the simulated arrays of pointers cannot be passed - to subroutines expecting specific, statically-defined 2-d and 3-d - arrays. To pass to functions requiring those kinds of inputs, you must - statically define the required array and copy data. - -.. cfunction:: int PyArray_Free(PyObject* op, void* ptr) - - Must be called with the same objects and memory locations returned - from :cfunc:`PyArray_AsCArray` (...). This function cleans up memory - that otherwise would get leaked. - -..
cfunction:: PyObject* PyArray_Concatenate(PyObject* obj, int axis) - - Join the sequence of objects in *obj* together along *axis* into a - single array. If the dimensions or types are not compatible an - error is raised. - -.. cfunction:: PyObject* PyArray_InnerProduct(PyObject* obj1, PyObject* obj2) - - Compute a product-sum over the last dimensions of *obj1* and - *obj2*. Neither array is conjugated. - -.. cfunction:: PyObject* PyArray_MatrixProduct(PyObject* obj1, PyObject* obj2) - - Compute a product-sum over the last dimension of *obj1* and the - second-to-last dimension of *obj2*. For 2-d arrays this is a - matrix-product. Neither array is conjugated. - -.. cfunction:: PyObject* PyArray_CopyAndTranspose(PyObject* op) - - A specialized copy and transpose function that works only for 2-d - arrays. The returned array is a transposed copy of *op*. - -.. cfunction:: PyObject* PyArray_Correlate(PyObject* op1, PyObject* op2, int mode) - - Compute the 1-d correlation of the 1-d arrays *op1* and *op2*. - The correlation is computed at each output point by multiplying - *op1* by a shifted version of *op2* and summing the result. As a - result of the shift, needed values outside of the defined range of - *op1* and *op2* are interpreted as zero. The mode determines how - many shifts to return: 0 - return only shifts that did not need to - assume zero values; 1 - return an object that is the same size as - *op1*; 2 - return all possible shifts (any overlap at all is - accepted). - - .. rubric:: Notes - - This does not compute the usual correlation: if op2 is larger than op1, the - arguments are swapped, and the conjugate is never taken for complex arrays. - See PyArray_Correlate2 for the usual signal processing correlation. - -.. cfunction:: PyObject* PyArray_Correlate2(PyObject* op1, PyObject* op2, int mode) - - Updated version of PyArray_Correlate, which uses the usual definition of - correlation for 1-d arrays. 
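For real-valued data, the all-shifts variant of this correlation (mode 2) can be sketched in plain C. This is only an illustration of the semantics, not the numpy implementation, and ``correlate_full`` is a made-up helper name:

```c
/* Sketch of "all shifts" (mode 2) 1-d correlation for real data:
 * z[k] = sum_n op1[n] * op2[n+k], with out-of-range values taken as
 * zero and every shift with any overlap kept.  The output has
 * n1 + n2 - 1 entries, stored at z[k + n1 - 1].  Illustrative only;
 * real code should call PyArray_Correlate2. */
static void correlate_full(const double *op1, int n1,
                           const double *op2, int n2,
                           double *z /* length n1 + n2 - 1 */)
{
    for (int k = -(n1 - 1); k <= n2 - 1; ++k) {
        double acc = 0.0;
        for (int n = 0; n < n1; ++n) {
            int m = n + k;
            if (m >= 0 && m < n2)          /* zero outside the range */
                acc += op1[n] * op2[m];    /* conj(op2[m]) for complex */
        }
        z[k + n1 - 1] = acc;
    }
}
```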
The correlation is computed at each output point - by multiplying *op1* by a shifted version of *op2* and summing the result. - As a result of the shift, needed values outside of the defined range of - *op1* and *op2* are interpreted as zero. The mode determines how many - shifts to return: 0 - return only shifts that did not need to assume zero - values; 1 - return an object that is the same size as *op1*; 2 - return all - possible shifts (any overlap at all is accepted). - - .. rubric:: Notes - - Compute z as follows:: - - z[k] = sum_n op1[n] * conj(op2[n+k]) - -.. cfunction:: PyObject* PyArray_Where(PyObject* condition, PyObject* x, PyObject* y) - - If both ``x`` and ``y`` are ``NULL``, then return - :cfunc:`PyArray_Nonzero` (*condition*). Otherwise, both *x* and *y* - must be given and the object returned is shaped like *condition* - and has elements of *x* and *y* where *condition* is respectively - True or False. - - -Other functions -^^^^^^^^^^^^^^^ - -.. cfunction:: Bool PyArray_CheckStrides(int elsize, int nd, npy_intp numbytes, npy_intp* dims, npy_intp* newstrides)
 - - Determine if *newstrides* is a strides array consistent with the - memory of an *nd* -dimensional array with shape ``dims`` and - element-size, *elsize*. The *newstrides* array is checked to see - if jumping by the provided number of bytes in each direction will - ever mean jumping more than *numbytes*, which is the assumed size - of the available memory segment. If *numbytes* is 0, then an - equivalent *numbytes* is computed assuming *nd*, *dims*, and - *elsize* refer to a single-segment array. Return :cdata:`NPY_TRUE` if - *newstrides* is acceptable, otherwise return :cdata:`NPY_FALSE`. - -.. cfunction:: npy_intp PyArray_MultiplyList(npy_intp* seq, int n) - -.. cfunction:: int PyArray_MultiplyIntList(int* seq, int n) - - Both of these routines multiply an *n* -length array, *seq*, of - integers and return the result. No overflow checking is performed. - -.. 
cfunction:: int PyArray_CompareLists(npy_intp* l1, npy_intp* l2, int n) - - Given two *n* -length arrays of integers, *l1*, and *l2*, return - 1 if the lists are identical; otherwise, return 0. - - -Array Iterators ---------------- - -An array iterator is a simple way to access the elements of an -N-dimensional array quickly and efficiently. Section `2 -<#sec-array-iterator>`__ provides more description and examples of -this useful approach to looping over an array. - -.. cfunction:: PyObject* PyArray_IterNew(PyObject* arr) - - Return an array iterator object from the array, *arr*. This is - equivalent to *arr*. **flat**. The array iterator object makes - it easy to loop over an N-dimensional non-contiguous array in - C-style contiguous fashion. - -.. cfunction:: PyObject* PyArray_IterAllButAxis(PyObject* arr, int \*axis) - - Return an array iterator that will iterate over all axes but the - one provided in *\*axis*. The returned iterator cannot be used - with :cfunc:`PyArray_ITER_GOTO1D`. This iterator could be used to - write something similar to what ufuncs do wherein the loop over - the largest axis is done by a separate subroutine. If *\*axis* is - negative then *\*axis* will be set to the axis having the smallest - stride and that axis will be used. - -.. cfunction:: PyObject *PyArray_BroadcastToShape(PyObject* arr, npy_intp *dimensions, int nd) - - Return an array iterator that is broadcast to iterate as an array - of the shape provided by *dimensions* and *nd*. - -.. cfunction:: int PyArrayIter_Check(PyObject* op) - - Evaluates true if *op* is an array iterator (or instance of a - subclass of the array iterator type). - -.. cfunction:: void PyArray_ITER_RESET(PyObject* iterator) - - Reset an *iterator* to the beginning of the array. - -.. cfunction:: void PyArray_ITER_NEXT(PyObject* iterator) - - Increment the index and the dataptr members of the *iterator* to - point to the next element of the array. 
If the array is not - (C-style) contiguous, also increment the N-dimensional coordinates - array. - -.. cfunction:: void *PyArray_ITER_DATA(PyObject* iterator) - - A pointer to the current element of the array. - -.. cfunction:: void PyArray_ITER_GOTO(PyObject* iterator, npy_intp* destination) - - Set the *iterator* index, dataptr, and coordinates members to the - location in the array indicated by the N-dimensional C-array, - *destination*, which must have size at least *iterator* - ->nd_m1+1. - -.. cfunction:: PyArray_ITER_GOTO1D(PyObject* iterator, npy_intp index) - - Set the *iterator* index and dataptr to the location in the array - indicated by the integer *index* which points to an element in the - C-styled flattened array. - -.. cfunction:: int PyArray_ITER_NOTDONE(PyObject* iterator) - - Evaluates TRUE as long as the iterator has not looped through all of - the elements, otherwise it evaluates FALSE. - - -Broadcasting (multi-iterators) ------------------------------- - -.. cfunction:: PyObject* PyArray_MultiIterNew(int num, ...) - - A simplified interface to broadcasting. This function takes the - number of arrays to broadcast and then *num* extra ( :ctype:`PyObject *` - ) arguments. These arguments are converted to arrays and iterators - are created. :cfunc:`PyArray_Broadcast` is then called on the resulting - multi-iterator object. The resulting, broadcasted multi-iterator - object is then returned. A broadcasted operation can then be - performed using a single loop and using :cfunc:`PyArray_MultiIter_NEXT` - (...) - -.. cfunction:: void PyArray_MultiIter_RESET(PyObject* multi) - - Reset all the iterators to the beginning in a multi-iterator - object, *multi*. - -.. cfunction:: void PyArray_MultiIter_NEXT(PyObject* multi) - - Advance each iterator in a multi-iterator object, *multi*, to its - next (broadcasted) element. - -.. 
cfunction:: void *PyArray_MultiIter_DATA(PyObject* multi, int i) - - Return the data-pointer of the *i* :math:`^{\textrm{th}}` iterator - in a multi-iterator object. - -.. cfunction:: void PyArray_MultiIter_NEXTi(PyObject* multi, int i) - - Advance the pointer of only the *i* :math:`^{\textrm{th}}` iterator. - -.. cfunction:: void PyArray_MultiIter_GOTO(PyObject* multi, npy_intp* destination) - - Advance each iterator in a multi-iterator object, *multi*, to the - given :math:`N` -dimensional *destination* where :math:`N` is the - number of dimensions in the broadcasted array. - -.. cfunction:: void PyArray_MultiIter_GOTO1D(PyObject* multi, npy_intp index) - - Advance each iterator in a multi-iterator object, *multi*, to the - corresponding location of the *index* into the flattened - broadcasted array. - -.. cfunction:: int PyArray_MultiIter_NOTDONE(PyObject* multi) - - Evaluates TRUE as long as the multi-iterator has not looped - through all of the elements (of the broadcasted result), otherwise - it evaluates FALSE. - -.. cfunction:: int PyArray_Broadcast(PyArrayMultiIterObject* mit) - - This function encapsulates the broadcasting rules. The *mit* - container should already contain iterators for all the arrays that - need to be broadcast. On return, these iterators will be adjusted - so that iteration over each simultaneously will accomplish the - broadcasting. A negative number is returned if an error occurs. - -.. cfunction:: int PyArray_RemoveSmallest(PyArrayMultiIterObject* mit) - - This function takes a multi-iterator object that has been - previously "broadcasted," finds the dimension with the smallest - "sum of strides" in the broadcasted result and adapts all the - iterators so as not to iterate over that dimension (by effectively - making them of length-1 in that dimension). The corresponding - dimension is returned unless *mit* ->nd is 0, then -1 is - returned. 
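The shape-matching rule that :cfunc:`PyArray_Broadcast` encapsulates can be sketched in plain C: aligned on trailing dimensions, each pair of extents must be equal or one of them must be 1, and the result takes the larger extent. This is an illustrative sketch only; ``broadcast_shapes`` is not part of the numpy C-API:

```c
/* Compute the broadcast of two shapes, trailing-aligned.
 * Returns 0 on success (filling out/out_nd), -1 if the shapes are
 * incompatible.  Illustrative sketch only, not a numpy function. */
static int broadcast_shapes(const long *s1, int nd1,
                            const long *s2, int nd2,
                            long *out, int *out_nd)
{
    int nd = nd1 > nd2 ? nd1 : nd2;
    for (int i = 0; i < nd; ++i) {
        /* Missing leading dimensions are treated as 1. */
        long d1 = i < nd - nd1 ? 1 : s1[i - (nd - nd1)];
        long d2 = i < nd - nd2 ? 1 : s2[i - (nd - nd2)];
        if (d1 != d2 && d1 != 1 && d2 != 1)
            return -1;                 /* incompatible dimensions */
        out[i] = d1 > d2 ? d1 : d2;
    }
    *out_nd = nd;
    return 0;
}
```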
This function is useful for constructing ufunc-like - routines that broadcast their inputs correctly and then call a - strided 1-d version of the routine as the inner-loop. This 1-d - version is usually optimized for speed and for this reason the - loop should be performed over the axis that won't require large - stride jumps. - -Neighborhood iterator ---------------------- - -.. versionadded:: 1.4.0 - -Neighborhood iterators are subclasses of the iterator object, and can be used -to iterate over a neighborhood of a point. For example, you may want to iterate -over every voxel of a 3d image, and for every such voxel, iterate over a -hypercube. Neighborhood iterators automatically handle boundaries, thus making -this kind of code much easier to write than handling boundaries manually, at the -cost of a slight overhead. - -.. cfunction:: PyObject* PyArray_NeighborhoodIterNew(PyArrayIterObject* iter, npy_intp* bounds, int mode, PyArrayObject* fill_value) - - This function creates a new neighborhood iterator from an existing - iterator. The neighborhood will be computed relative to the position - currently pointed to by *iter*, the bounds define the shape of the - neighborhood iterator, and the mode argument sets the boundary handling mode. - - The *bounds* argument is expected to be an array of size 2 * iter->ao->nd, such - that the range bounds[2*i] ... bounds[2*i+1] defines the range where to walk for - dimension i (both bounds are included in the walked coordinates). The - bounds should be ordered for each dimension (bounds[2*i] <= bounds[2*i+1]). - - The mode should be one of: - - * NPY_NEIGHBORHOOD_ITER_ZERO_PADDING: zero padding. Outside bounds values - will be 0. - * NPY_NEIGHBORHOOD_ITER_ONE_PADDING: one padding. Outside bounds values - will be 1. - * NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING: constant padding. Outside bounds - values will be the same as the first item in fill_value. - * NPY_NEIGHBORHOOD_ITER_MIRROR_PADDING: mirror padding. 
Outside bounds - values will be as if the array items were mirrored. For example, for the - array [1, 2, 3, 4], x[-2] will be 2, x[-1] will be 1, x[4] will be 4, - x[5] will be 3, etc... - * NPY_NEIGHBORHOOD_ITER_CIRCULAR_PADDING: circular padding. Outside bounds - values will be as if the array was repeated. For example, for the - array [1, 2, 3, 4], x[-2] will be 3, x[-1] will be 4, x[4] will be 1, - x[5] will be 2, etc... - - If the mode is constant filling (NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING), - fill_value should point to an array object which holds the filling value - (the first item will be the filling value if the array contains more than - one item). For other cases, fill_value may be NULL. - - - The iterator holds a reference to iter - - Return NULL on failure (in which case the reference count of iter is not - changed) - - iter itself can be a neighborhood iterator: this can be useful, e.g. for - automatic boundary handling - - the object returned by this function should be safe to use as a normal - iterator - - If the position of iter is changed, any subsequent call to - PyArrayNeighborhoodIter_Next is undefined behavior, and - PyArrayNeighborhoodIter_Reset must be called. - - .. code-block:: c - - PyArrayIterObject *iter; - PyArrayNeighborhoodIterObject *neigh_iter; - npy_intp i, j; - npy_intp bounds[] = {-1, 1, -1, 1}; - - iter = (PyArrayIterObject *)PyArray_IterNew(x); - - // For a 3x3 kernel - neigh_iter = (PyArrayNeighborhoodIterObject *)PyArray_NeighborhoodIterNew( - iter, bounds, NPY_NEIGHBORHOOD_ITER_ZERO_PADDING, NULL); - - for (i = 0; i < iter->size; ++i) { - for (j = 0; j < neigh_iter->size; ++j) { - // Walk around the item currently pointed to by iter->dataptr - PyArrayNeighborhoodIter_Next(neigh_iter); - } - - // Move to the next point of iter - PyArray_ITER_NEXT(iter); - PyArrayNeighborhoodIter_Reset(neigh_iter); - } - -.. cfunction:: int PyArrayNeighborhoodIter_Reset(PyArrayNeighborhoodIterObject* iter) - - Reset the iterator position to the first point of the neighborhood. 
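The mirror and circular padding modes above map an out-of-bounds index back into the array. That mapping can be sketched in plain C (``mirror_index`` and ``circular_index`` are illustrative helpers, not numpy functions):

```c
/* Map an out-of-bounds index into [0, n) under circular padding:
 * the array is tiled, e.g. ... 3 4 | 1 2 3 4 | 1 2 ... */
static int circular_index(int i, int n)
{
    int m = i % n;               /* C remainder keeps the sign of i */
    return m < 0 ? m + n : m;
}

/* Map an out-of-bounds index into [0, n) under mirror padding:
 * the array is reflected at each edge, e.g. ... 2 1 | 1 2 3 4 | 4 3 ... */
static int mirror_index(int i, int n)
{
    while (i < 0 || i >= n) {
        if (i < 0)
            i = -i - 1;          /* reflect across the left edge */
        if (i >= n)
            i = 2 * n - 1 - i;   /* reflect across the right edge */
    }
    return i;
}
```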
This - should be called whenever the iter argument given to - PyArray_NeighborhoodIterNew is changed (see example). - -.. cfunction:: int PyArrayNeighborhoodIter_Next(PyArrayNeighborhoodIterObject* iter) - - After this call, iter->dataptr points to the next point of the - neighborhood. Calling this function after every point of the - neighborhood has been visited is undefined. - -Array Scalars ------------- - -.. cfunction:: PyObject* PyArray_Return(PyArrayObject* arr) - - This function checks to see if *arr* is a 0-dimensional array and, - if so, returns the appropriate array scalar. It should be used - whenever 0-dimensional arrays could be returned to Python. - -.. cfunction:: PyObject* PyArray_Scalar(void* data, PyArray_Descr* dtype, PyObject* itemsize) - - Return an array scalar object of the data type given by *dtype* - by **copying** from the memory pointed to by *data*. This function - will byteswap the data if appropriate to the data-type because - array scalars are always in correct machine-byte order. - -.. cfunction:: PyObject* PyArray_ToScalar(void* data, PyArrayObject* arr) - - Return an array scalar object of the type and itemsize indicated - by the array object *arr* copied from the memory pointed to by - *data* and swapping if the data in *arr* is not in machine - byte-order. - -.. cfunction:: PyObject* PyArray_FromScalar(PyObject* scalar, PyArray_Descr* outcode) - - Return a 0-dimensional array of type determined by *outcode* from - *scalar* which should be an array-scalar object. If *outcode* is - NULL, then the type is determined from *scalar*. - -.. cfunction:: void PyArray_ScalarAsCtype(PyObject* scalar, void* ctypeptr) - - Return in *ctypeptr* a pointer to the actual value in an array - scalar. There is no error checking so *scalar* must be an - array-scalar object, and ctypeptr must have enough space to hold - the correct type. 
For flexible-sized types, a pointer to the data - is copied into the memory of *ctypeptr*, for all other types, the - actual data is copied into the address pointed to by *ctypeptr*. - -.. cfunction:: void PyArray_CastScalarToCtype(PyObject* scalar, void* ctypeptr, PyArray_Descr* outcode) - - Return the data (cast to the data type indicated by *outcode*) - from the array-scalar, *scalar*, into the memory pointed to by - *ctypeptr* (which must be large enough to handle the incoming - memory). - -.. cfunction:: PyObject* PyArray_TypeObjectFromType(int type) - - Returns a scalar type-object from a type-number, *type* - . Equivalent to :cfunc:`PyArray_DescrFromType` (*type*)->typeobj - except for reference counting and error-checking. Returns a new - reference to the typeobject on success or ``NULL`` on failure. - -.. cfunction:: NPY_SCALARKIND PyArray_ScalarKind(int typenum, PyArrayObject** arr) - - Return the kind of scalar represented by *typenum* and the array - in *\*arr* (if *arr* is not ``NULL`` ). The array is assumed to be - rank-0 and only used if *typenum* represents a signed integer. If - *arr* is not ``NULL`` and the first element is negative then - :cdata:`NPY_INTNEG_SCALAR` is returned, otherwise - :cdata:`NPY_INTPOS_SCALAR` is returned. The possible return values - are :cdata:`NPY_{kind}_SCALAR` where ``{kind}`` can be **INTPOS**, - **INTNEG**, **FLOAT**, **COMPLEX**, **BOOL**, or **OBJECT**. - :cdata:`NPY_NOSCALAR` is also an enumerated value - :ctype:`NPY_SCALARKIND` variables can take on. - -.. cfunction:: int PyArray_CanCoerceScalar(char thistype, char neededtype, NPY_SCALARKIND scalar) - - Implements the rules for scalar coercion. Scalars are only - silently coerced from thistype to neededtype if this function - returns nonzero. If scalar is :cdata:`NPY_NOSCALAR`, then this - function is equivalent to :cfunc:`PyArray_CanCastSafely`. The rule is - that scalars of the same KIND can be coerced into arrays of the - same KIND. 
This rule means that high-precision scalars will never - cause low-precision arrays of the same KIND to be upcast. - - -Data-type descriptors ---------------------- - - - -.. warning:: - - Data-type objects must be reference counted so be aware of the - action on the data-type reference of different C-API calls. The - standard rule is that when a data-type object is returned it is a - new reference. Functions that take :ctype:`PyArray_Descr *` objects and - return arrays steal references to their data-type inputs - unless otherwise noted. Therefore, you must own a reference to any - data-type object used as input to such a function. - -.. cfunction:: int PyArrayDescr_Check(PyObject* obj) - - Evaluates as true if *obj* is a data-type object ( :ctype:`PyArray_Descr *` ). - -.. cfunction:: PyArray_Descr* PyArray_DescrNew(PyArray_Descr* obj) - - Return a new data-type object copied from *obj* (the fields - reference is just updated so that the new object points to the - same fields dictionary if any). - -.. cfunction:: PyArray_Descr* PyArray_DescrNewFromType(int typenum) - - Create a new data-type object from the built-in (or - user-registered) data-type indicated by *typenum*. The fields of - builtin types should never be changed directly; this function creates a - new copy of the :ctype:`PyArray_Descr` structure so that you can fill - it in as appropriate. This function is especially needed for - flexible data-types which need to have a new elsize member in - order to be meaningful in array construction. - -.. cfunction:: PyArray_Descr* PyArray_DescrNewByteorder(PyArray_Descr* obj, char newendian) - - Create a new data-type object with the byteorder set according to - *newendian*. All referenced data-type objects (in subdescr and - fields members of the data-type object) are also changed - (recursively). If a byteorder of :cdata:`NPY_IGNORE` is encountered it - is left alone. If newendian is :cdata:`NPY_SWAP`, then all byte-orders - are swapped. 
Other valid newendian values are :cdata:`NPY_NATIVE`, - :cdata:`NPY_LITTLE`, and :cdata:`NPY_BIG` which all cause the returned - data-type descriptor (and all its - referenced data-type descriptors) to have the corresponding - byte-order. - -.. cfunction:: PyArray_Descr* PyArray_DescrFromObject(PyObject* op, PyArray_Descr* mintype) - - Determine an appropriate data-type object from the object *op* - (which should be a "nested" sequence object) and the minimum - data-type descriptor mintype (which can be ``NULL`` ). Similar in - behavior to array(*op*).dtype. Don't confuse this function with - :cfunc:`PyArray_DescrConverter`. This function essentially looks at - all the objects in the (nested) sequence and determines the - data-type from the elements it finds. - -.. cfunction:: PyArray_Descr* PyArray_DescrFromScalar(PyObject* scalar) - - Return a data-type object from an array-scalar object. No checking - is done to be sure that *scalar* is an array scalar. If no - suitable data-type can be determined, then a data-type of - :cdata:`NPY_OBJECT` is returned by default. - -.. cfunction:: PyArray_Descr* PyArray_DescrFromType(int typenum) - - Returns a data-type object corresponding to *typenum*. The - *typenum* can be one of the enumerated types, a character code for - one of the enumerated types, or a user-defined type. - -.. cfunction:: int PyArray_DescrConverter(PyObject* obj, PyArray_Descr** dtype) - - Convert any compatible Python object, *obj*, to a data-type object - in *dtype*. A large number of Python objects can be converted to - data-type objects. See :ref:`arrays.dtypes` for a complete - description. This version of the converter converts None objects - to a :cdata:`NPY_DEFAULT_TYPE` data-type object. This function can - be used with the "O&" character code in :cfunc:`PyArg_ParseTuple` - processing. - -.. 
cfunction:: int PyArray_DescrConverter2(PyObject* obj, PyArray_Descr** dtype) - - Convert any compatible Python object, *obj*, to a data-type - object in *dtype*. This version of the converter converts None - objects so that the returned data-type is ``NULL``. This function - can also be used with the "O&" character in PyArg_ParseTuple - processing. - -.. cfunction:: int PyArray_DescrAlignConverter(PyObject* obj, PyArray_Descr** dtype) - - Like :cfunc:`PyArray_DescrConverter` except it aligns C-struct-like - objects on word-boundaries as the compiler would. - -.. cfunction:: int PyArray_DescrAlignConverter2(PyObject* obj, PyArray_Descr** dtype) - - Like :cfunc:`PyArray_DescrConverter2` except it aligns C-struct-like - objects on word-boundaries as the compiler would. - -.. cfunction:: PyObject *PyArray_FieldNames(PyObject* dict) - - Take the fields dictionary, *dict*, such as the one attached to a - data-type object and construct an ordered list of field names such - as is stored in the names field of the :ctype:`PyArray_Descr` object. - - -Conversion Utilities --------------------- - - -For use with :cfunc:`PyArg_ParseTuple` -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -All of these functions can be used in :cfunc:`PyArg_ParseTuple` (...) with -the "O&" format specifier to automatically convert any Python object -to the required C-object. All of these functions return -:cdata:`NPY_SUCCEED` if successful and :cdata:`NPY_FAIL` if not. The first -argument to all of these functions is a Python object. The second -argument is the **address** of the C-type to convert the Python object -to. - - -.. warning:: - - Be sure to understand what steps you should take to manage the - memory when using these conversion functions. These functions can - require freeing memory, and/or altering the reference counts of - specific objects based on your use. - -.. cfunction:: int PyArray_Converter(PyObject* obj, PyObject** address) - - Convert any Python object to a :ctype:`PyArrayObject`. 
If - :cfunc:`PyArray_Check` (*obj*) is TRUE then its reference count is - incremented and a reference placed in *address*. If *obj* is not - an array, then convert it to an array using :cfunc:`PyArray_FromAny` - . No matter what is returned, you must DECREF the object returned - by this routine in *address* when you are done with it. - -.. cfunction:: int PyArray_OutputConverter(PyObject* obj, PyArrayObject** address) - - This is a default converter for output arrays given to - functions. If *obj* is :cdata:`Py_None` or ``NULL``, then *\*address* - will be ``NULL`` but the call will succeed. If :cfunc:`PyArray_Check` ( - *obj*) is TRUE then it is returned in *\*address* without - incrementing its reference count. - -.. cfunction:: int PyArray_IntpConverter(PyObject* obj, PyArray_Dims* seq) - - Convert any Python sequence, *obj*, smaller than :cdata:`NPY_MAXDIMS` - to a C-array of :ctype:`npy_intp`. The Python object could also be a - single number. The *seq* variable is a pointer to a structure with - members ptr and len. On successful return, *seq* ->ptr contains a - pointer to memory that must be freed to avoid a memory leak. The - restriction on memory size allows this converter to be - conveniently used for sequences intended to be interpreted as - array shapes. - -.. cfunction:: int PyArray_BufferConverter(PyObject* obj, PyArray_Chunk* buf) - - Convert any Python object, *obj*, with a (single-segment) buffer - interface to a variable with members that detail the object's use - of its chunk of memory. The *buf* variable is a pointer to a - structure with base, ptr, len, and flags members. The - :ctype:`PyArray_Chunk` structure is binary compatible with the - Python's buffer object (through its len member on 32-bit platforms - and its ptr member on 64-bit platforms or in Python 2.5). On - return, the base member is set to *obj* (or its base if *obj* is - already a buffer object pointing to another object). 
If you need - to hold on to the memory, be sure to INCREF the base member. The - chunk of memory is pointed to by the *buf* ->ptr member and has length - *buf* ->len. The flags member of *buf* is :cdata:`NPY_BEHAVED_RO` with - the :cdata:`NPY_WRITEABLE` flag set if *obj* has a writeable buffer - interface. - -.. cfunction:: int PyArray_AxisConverter(PyObject* obj, int* axis) - - Convert a Python object, *obj*, representing an axis argument to - the proper value for passing to the functions that take an integer - axis. Specifically, if *obj* is None, *axis* is set to - :cdata:`NPY_MAXDIMS` which is interpreted correctly by the C-API - functions that take axis arguments. - -.. cfunction:: int PyArray_BoolConverter(PyObject* obj, Bool* value) - - Convert any Python object, *obj*, to :cdata:`NPY_TRUE` or - :cdata:`NPY_FALSE`, and place the result in *value*. - -.. cfunction:: int PyArray_ByteorderConverter(PyObject* obj, char* endian) - - Convert Python strings into the corresponding byte-order - character: - '>', '<', 's', '=', or '\|'. - -.. cfunction:: int PyArray_SortkindConverter(PyObject* obj, NPY_SORTKIND* sort) - - Convert Python strings into one of :cdata:`NPY_QUICKSORT` (starts - with 'q' or 'Q') , :cdata:`NPY_HEAPSORT` (starts with 'h' or 'H'), - or :cdata:`NPY_MERGESORT` (starts with 'm' or 'M'). - -.. cfunction:: int PyArray_SearchsideConverter(PyObject* obj, NPY_SEARCHSIDE* side) - - Convert Python strings into one of :cdata:`NPY_SEARCHLEFT` (starts with 'l' - or 'L'), or :cdata:`NPY_SEARCHRIGHT` (starts with 'r' or 'R'). - -Other conversions -^^^^^^^^^^^^^^^^^ - -.. cfunction:: int PyArray_PyIntAsInt(PyObject* op) - - Convert all kinds of Python objects (including arrays and array - scalars) to a standard integer. On error, -1 is returned and an - exception set. You may find the following macro useful: - - .. code-block:: c - - #define error_converting(x) (((x) == -1) && PyErr_Occurred()) - -.. 
cfunction:: npy_intp PyArray_PyIntAsIntp(PyObject* op) - - Convert all kinds of Python objects (including arrays and array - scalars) to a (platform-pointer-sized) integer. On error, -1 is - returned and an exception set. - -.. cfunction:: int PyArray_IntpFromSequence(PyObject* seq, npy_intp* vals, int maxvals) - - Convert any Python sequence (or single Python number) passed in as - *seq* to (up to) *maxvals* pointer-sized integers and place them - in the *vals* array. The sequence can be smaller than *maxvals*; - the number of converted objects is returned. - -.. cfunction:: int PyArray_TypestrConvert(int itemsize, int gentype) - - Convert typestring characters (with *itemsize*) to basic - enumerated data types. The typestring characters corresponding to - signed and unsigned integers, floating point numbers, and - complex-floating point numbers are recognized and converted. Other - values of gentype are returned. This function can be used to - convert, for example, the string 'f4' to :cdata:`NPY_FLOAT32`. - - -Miscellaneous -------------- - - -Importing the API -^^^^^^^^^^^^^^^^^ - -In order to make use of the C-API from another extension module, the -``import_array`` () command must be used. If the extension module is -self-contained in a single .c file, then that is all that needs to be -done. If, however, the extension module involves multiple files where -the C-API is needed then some additional steps must be taken. - -.. cfunction:: void import_array(void) - - This function must be called in the initialization section of a - module that will make use of the C-API. It imports the module - where the function-pointer table is stored and points the correct - variable to it. - -.. cmacro:: PY_ARRAY_UNIQUE_SYMBOL - -.. cmacro:: NO_IMPORT_ARRAY - - Using these #defines you can use the C-API in multiple files for a - single extension module. 
In each file you must define - :cmacro:`PY_ARRAY_UNIQUE_SYMBOL` to some name that will hold the - C-API (*e.g.* myextension_ARRAY_API). This must be done **before** - including the numpy/arrayobject.h file. In the module - initialization routine you call ``import_array`` (). In addition, - in the files that do not have the module initialization - subroutine define :cmacro:`NO_IMPORT_ARRAY` prior to including - numpy/arrayobject.h. - - Suppose I have two files coolmodule.c and coolhelper.c which need - to be compiled and linked into a single extension module. Suppose - coolmodule.c contains the required initcool module initialization - function (with the import_array() function called). Then, - coolmodule.c would have at the top: - - .. code-block:: c - - #define PY_ARRAY_UNIQUE_SYMBOL cool_ARRAY_API - #include <numpy/arrayobject.h> - - On the other hand, coolhelper.c would contain at the top: - - .. code-block:: c - - #define PY_ARRAY_UNIQUE_SYMBOL cool_ARRAY_API - #define NO_IMPORT_ARRAY - #include <numpy/arrayobject.h> - -Checking the API Version -^^^^^^^^^^^^^^^^^^^^^^^^ - -Because Python extensions are not used in the same way as usual libraries on -most platforms, some errors cannot be automatically detected at build time or -even runtime. For example, if you build an extension using a function available -only for numpy >= 1.3.0, and you import the extension later with numpy 1.2, you -will not get an import error (but almost certainly a segmentation fault when -calling the function). That's why several functions are provided to check for -numpy versions. The macros :cdata:`NPY_VERSION` and -:cdata:`NPY_FEATURE_VERSION` correspond to the numpy version used to build the -extension, whereas the versions returned by the functions -PyArray_GetNDArrayCVersion and PyArray_GetNDArrayCFeatureVersion correspond to -the runtime numpy version. 
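The relationship between the build-time macros and the runtime functions can be condensed into a small C predicate. This is an illustrative sketch only; ``extension_is_compatible`` is a made-up name, and the version values used in testing it are arbitrary:

```c
/* Sketch of the version-compatibility rule: the build-time ABI version
 * (NPY_VERSION) must match the runtime ABI version exactly, while the
 * build-time feature (API) version (NPY_FEATURE_VERSION) may be older
 * than, but never newer than, the runtime one.  Illustrative only;
 * not a numpy function. */
static int extension_is_compatible(unsigned int build_abi,
                                   unsigned int build_api,
                                   unsigned int runtime_abi,
                                   unsigned int runtime_api)
{
    if (build_abi != runtime_abi)
        return 0;               /* ABI break: recompile the extension */
    return build_api <= runtime_api ? 1 : 0;
}
```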
- -The rules for ABI and API compatibilities can be summarized as follows: - - * Whenever :cdata:`NPY_VERSION` != PyArray_GetNDArrayCVersion, the - extension has to be recompiled (ABI incompatibility). - * :cdata:`NPY_VERSION` == PyArray_GetNDArrayCVersion and - :cdata:`NPY_FEATURE_VERSION` <= PyArray_GetNDArrayCFeatureVersion means - backward compatible changes. - -ABI incompatibility is automatically detected in every numpy version. API -incompatibility detection was added in numpy 1.4.0. If you want to support -many different numpy versions with one extension binary, you have to build your -extension against the lowest NPY_FEATURE_VERSION possible. - -.. cfunction:: unsigned int PyArray_GetNDArrayCVersion(void) - - This just returns the value :cdata:`NPY_VERSION`. :cdata:`NPY_VERSION` - changes whenever a backward-incompatible change occurs at the ABI level. Because - it is in the C-API, however, comparing the output of this function with the - value defined in the current header gives a way to test if the C-API has - changed, thus requiring a re-compilation of extension modules that use the - C-API. This is automatically checked in the function import_array. - -.. cfunction:: unsigned int PyArray_GetNDArrayCFeatureVersion(void) - - .. versionadded:: 1.4.0 - - This just returns the value :cdata:`NPY_FEATURE_VERSION`. - :cdata:`NPY_FEATURE_VERSION` changes whenever the API changes (e.g. a - function is added). A changed value does not always require a recompile. - -Internal Flexibility -^^^^^^^^^^^^^^^^^^^^ - -.. cfunction:: int PyArray_SetNumericOps(PyObject* dict) - - NumPy stores an internal table of Python callable objects that are - used to implement arithmetic operations for arrays as well as - certain array calculation methods. This function allows the user - to replace any or all of these Python objects with their own - versions. 
The keys of the dictionary, *dict*, are the named - functions to replace and the paired value is the Python callable - object to use. Care should be taken that the function used to - replace an internal array operation does not itself call back to - that internal array operation (unless you have designed the - function to handle that), or an unchecked infinite recursion can - result (possibly causing a program crash). The key names that - represent operations that can be replaced are: - - **add**, **subtract**, **multiply**, **divide**, - **remainder**, **power**, **square**, **reciprocal**, - **ones_like**, **sqrt**, **negative**, **absolute**, - **invert**, **left_shift**, **right_shift**, - **bitwise_and**, **bitwise_xor**, **bitwise_or**, - **less**, **less_equal**, **equal**, **not_equal**, - **greater**, **greater_equal**, **floor_divide**, - **true_divide**, **logical_or**, **logical_and**, - **floor**, **ceil**, **maximum**, **minimum**, **rint**. - - - These functions are included here because they are used at least once - in the array object's methods. The function returns -1 (without - setting a Python error) if one of the objects being assigned is not - callable. - -.. cfunction:: PyObject* PyArray_GetNumericOps(void) - - Return a Python dictionary containing the callable Python objects - stored in the internal arithmetic operation table. The keys of - this dictionary are given in the explanation for :cfunc:`PyArray_SetNumericOps`. - -.. cfunction:: void PyArray_SetStringFunction(PyObject* op, int repr) - - This function allows you to alter the tp_str and tp_repr methods - of the array object to any Python function. Thus you can alter - what happens for all arrays when str(arr) or repr(arr) is called - from Python. The function to be called is passed in as *op*. If - *repr* is non-zero, then this function will be called in response - to repr(arr), otherwise the function will be called in response to - str(arr).
No check on whether or not *op* is callable is - performed. The callable passed in to *op* should expect an array - argument and should return a string to be printed. - - -Memory management -^^^^^^^^^^^^^^^^^ - -.. cfunction:: char* PyDataMem_NEW(size_t nbytes) - -.. cfunction:: PyDataMem_FREE(char* ptr) - -.. cfunction:: char* PyDataMem_RENEW(void * ptr, size_t newbytes) - - Macros to allocate, free, and reallocate memory. These macros are used - internally to create arrays. - -.. cfunction:: npy_intp* PyDimMem_NEW(nd) - -.. cfunction:: PyDimMem_FREE(npy_intp* ptr) - -.. cfunction:: npy_intp* PyDimMem_RENEW(npy_intp* ptr, npy_intp newnd) - - Macros to allocate, free, and reallocate dimension and strides memory. - -.. cfunction:: PyArray_malloc(nbytes) - -.. cfunction:: PyArray_free(ptr) - -.. cfunction:: PyArray_realloc(ptr, nbytes) - - These macros use different memory allocators, depending on the - constant :cdata:`NPY_USE_PYMEM`. The system malloc is used when - :cdata:`NPY_USE_PYMEM` is 0; if :cdata:`NPY_USE_PYMEM` is 1, then - the Python memory allocator is used. - - -Threading support -^^^^^^^^^^^^^^^^^ - -These macros are only meaningful if :cdata:`NPY_ALLOW_THREADS` -evaluates to True during compilation of the extension module. Otherwise, -these macros are equivalent to whitespace. Python uses a single Global -Interpreter Lock (GIL) for each Python process so that only a single -thread may execute at a time (even on multi-CPU machines). When -calling out to a compiled function that may take time to compute (and -does not have side-effects for other threads, such as updating global -variables), the GIL should be released so that other Python threads -can run while the time-consuming calculations are performed. This can -be accomplished using two groups of macros. Typically, if one macro in -a group is used in a code block, all of them must be used in the same -code block.
Currently, :cdata:`NPY_ALLOW_THREADS` is defined to the -python-defined :cdata:`WITH_THREADS` constant unless the environment -variable :cdata:`NPY_NOSMP` is set, in which case -:cdata:`NPY_ALLOW_THREADS` is defined to be 0. - -Group 1 -""""""" - - This group is used to call code that may take some time but does not - use any Python C-API calls. Thus, the GIL should be released during - its calculation. - - .. cmacro:: NPY_BEGIN_ALLOW_THREADS - - Equivalent to :cmacro:`Py_BEGIN_ALLOW_THREADS` except it uses - :cdata:`NPY_ALLOW_THREADS` to determine if the macro is - replaced with white-space or not. - - .. cmacro:: NPY_END_ALLOW_THREADS - - Equivalent to :cmacro:`Py_END_ALLOW_THREADS` except it uses - :cdata:`NPY_ALLOW_THREADS` to determine if the macro is - replaced with white-space or not. - - .. cmacro:: NPY_BEGIN_THREADS_DEF - - Place in the variable declaration area. This macro sets up the - variable needed for storing the Python state. - - .. cmacro:: NPY_BEGIN_THREADS - - Place right before code that does not need the Python - interpreter (no Python C-API calls). This macro saves the - Python state and releases the GIL. - - .. cmacro:: NPY_END_THREADS - - Place right after code that does not need the Python - interpreter. This macro acquires the GIL and restores the - Python state from the saved variable. - - .. cfunction:: NPY_BEGIN_THREADS_DESCR(PyArray_Descr *dtype) - - Useful to release the GIL only if *dtype* does not contain - arbitrary Python objects which may need the Python interpreter - during execution of the loop. - - .. cfunction:: NPY_END_THREADS_DESCR(PyArray_Descr *dtype) - - Useful to regain the GIL in situations where it was released - using the BEGIN form of this macro. - -Group 2 -""""""" - - This group is used to re-acquire the Python GIL after it has been - released.
For example, suppose the GIL has been released (using the - previous calls), and some path in the code (perhaps in a - different subroutine) requires use of the Python C-API; these - macros make it possible to acquire the GIL again. They accomplish - essentially the reverse of the previous three: they acquire the GIL - (saving the thread state) and then release it again, restoring the - saved state. - - .. cmacro:: NPY_ALLOW_C_API_DEF - - Place in the variable declaration area to set up the necessary - variable. - - .. cmacro:: NPY_ALLOW_C_API - - Place before code that needs to call the Python C-API (when it is - known that the GIL has already been released). - - .. cmacro:: NPY_DISABLE_C_API - - Place after code that needs to call the Python C-API (to re-release - the GIL). - -.. tip:: - - Never use semicolons after the threading support macros. - - -Priority -^^^^^^^^ - -.. cvar:: NPY_PRIORITY - - Default priority for arrays. - -.. cvar:: NPY_SUBTYPE_PRIORITY - - Default subtype priority. - -.. cvar:: NPY_SCALAR_PRIORITY - - Default scalar priority (very small). - -.. cfunction:: double PyArray_GetPriority(PyObject* obj, double def) - - Return the :obj:`__array_priority__` attribute (converted to a - double) of *obj* or *def* if no attribute of that name - exists. Fast returns that avoid the attribute lookup are provided - for objects of type :cdata:`PyArray_Type`. - - -Default buffers -^^^^^^^^^^^^^^^ - -.. cvar:: NPY_BUFSIZE - - Default size of the user-settable internal buffers. - -.. cvar:: NPY_MIN_BUFSIZE - - Smallest size of user-settable internal buffers. - -.. cvar:: NPY_MAX_BUFSIZE - - Largest size allowed for the user-settable buffers. - - -Other constants -^^^^^^^^^^^^^^^ - -.. cvar:: NPY_NUM_FLOATTYPE - - The number of floating-point types. - -.. cvar:: NPY_MAXDIMS - - The maximum number of dimensions allowed in arrays. - -.. cvar:: NPY_VERSION - - The current version of the ndarray object (check to see if this - variable is defined to guarantee the numpy/arrayobject.h header is - being used). - -.. cvar:: NPY_FALSE - - Defined as 0 for use with Bool. - -.. cvar:: NPY_TRUE - - Defined as 1 for use with Bool. - -.. cvar:: NPY_FAIL - - The return value of failed converter functions which are called using - the "O&" syntax in :cfunc:`PyArg_ParseTuple`-like functions. - -.. cvar:: NPY_SUCCEED - - The return value of successful converter functions which are called - using the "O&" syntax in :cfunc:`PyArg_ParseTuple`-like functions. - - -Miscellaneous Macros -^^^^^^^^^^^^^^^^^^^^ - -.. cfunction:: PyArray_SAMESHAPE(a1, a2) - - Evaluates as True if arrays *a1* and *a2* have the same shape. - -.. cfunction:: PyArray_MAX(a,b) - - Returns the maximum of *a* and *b*. If (*a*) or (*b*) are - expressions they are evaluated twice. - -.. cfunction:: PyArray_MIN(a,b) - - Returns the minimum of *a* and *b*. If (*a*) or (*b*) are - expressions they are evaluated twice. - -.. cfunction:: PyArray_CLT(a,b) - -.. cfunction:: PyArray_CGT(a,b) - -.. cfunction:: PyArray_CLE(a,b) - -.. cfunction:: PyArray_CGE(a,b) - -.. cfunction:: PyArray_CEQ(a,b) - -.. cfunction:: PyArray_CNE(a,b) - - Implements the complex comparisons between two complex numbers - (structures with a real and imag member) using NumPy's definition - of the ordering, which is lexicographic: comparing the real parts - first and then the imaginary parts if the real parts are equal. - -.. cfunction:: PyArray_REFCOUNT(PyObject* op) - - Returns the reference count of any Python object. - -.. cfunction:: PyArray_XDECREF_ERR(PyObject \*obj) - - DECREF's an array object which may have the :cdata:`NPY_UPDATEIFCOPY` - flag set without causing the contents to be copied back into the - original array. Resets the :cdata:`NPY_WRITEABLE` flag on the base - object. This is useful for recovering from an error condition when - :cdata:`NPY_UPDATEIFCOPY` is used.
- - -Enumerated Types -^^^^^^^^^^^^^^^^ - -.. ctype:: NPY_SORTKIND - - A special variable-type which can take on the values :cdata:`NPY_{KIND}` - where ``{KIND}`` is - - **QUICKSORT**, **HEAPSORT**, **MERGESORT** - - .. cvar:: NPY_NSORTS - - Defined to be the number of sorts. - -.. ctype:: NPY_SCALARKIND - - A special variable type indicating the number of "kinds" of - scalars distinguished in determining scalar-coercion rules. This - variable can take on the values :cdata:`NPY_{KIND}` where ``{KIND}`` can be - - **NOSCALAR**, **BOOL_SCALAR**, **INTPOS_SCALAR**, - **INTNEG_SCALAR**, **FLOAT_SCALAR**, **COMPLEX_SCALAR**, - **OBJECT_SCALAR** - - - .. cvar:: NPY_NSCALARKINDS - - Defined to be the number of scalar kinds - (not including :cdata:`NPY_NOSCALAR`). - -.. ctype:: NPY_ORDER - - A variable type indicating the order that an array should be - interpreted in. The value of a variable of this type can be - :cdata:`NPY_{ORDER}` where ``{ORDER}`` is - - **ANYORDER**, **CORDER**, **FORTRANORDER** - -.. ctype:: NPY_CLIPMODE - - A variable type indicating the kind of clipping that should be - applied in certain functions. The value of a variable of this type - can be :cdata:`NPY_{MODE}` where ``{MODE}`` is - - **CLIP**, **WRAP**, **RAISE** - -.. index:: - pair: ndarray; C-API diff --git a/pythonPackages/numpy/doc/source/reference/c-api.config.rst b/pythonPackages/numpy/doc/source/reference/c-api.config.rst deleted file mode 100755 index 0989c53d7e..0000000000 --- a/pythonPackages/numpy/doc/source/reference/c-api.config.rst +++ /dev/null @@ -1,104 +0,0 @@ -System configuration -==================== - -.. sectionauthor:: Travis E. Oliphant - -When NumPy is built, information about system configuration is -recorded, and is made available for extension modules using Numpy's C -API. These are mostly defined in ``numpyconfig.h`` (included in -``ndarrayobject.h``). The public symbols are prefixed by ``NPY_*``. 
-Numpy also offers some functions for querying information about the -platform in use. - -For private use, Numpy also constructs a ``config.h`` in the NumPy -include directory, which is not exported by Numpy (that is, a Python -extension which uses the numpy C API will not see those symbols), to -avoid namespace pollution. - - -Data type sizes ---------------- - -The :cdata:`NPY_SIZEOF_{CTYPE}` constants are defined so that sizeof -information is available to the pre-processor. - -.. cvar:: NPY_SIZEOF_SHORT - - sizeof(short) - -.. cvar:: NPY_SIZEOF_INT - - sizeof(int) - -.. cvar:: NPY_SIZEOF_LONG - - sizeof(long) - -.. cvar:: NPY_SIZEOF_LONG_LONG - - sizeof(longlong) where longlong is defined appropriately on the - platform (A macro defines **NPY_SIZEOF_LONGLONG** as well.) - -.. cvar:: NPY_SIZEOF_PY_LONG_LONG - - -.. cvar:: NPY_SIZEOF_FLOAT - - sizeof(float) - -.. cvar:: NPY_SIZEOF_DOUBLE - - sizeof(double) - -.. cvar:: NPY_SIZEOF_LONG_DOUBLE - - sizeof(longdouble) (A macro defines **NPY_SIZEOF_LONGDOUBLE** as well.) - -.. cvar:: NPY_SIZEOF_PY_INTPTR_T - - Size of a pointer on this platform (sizeof(void \*)) (A macro defines - NPY_SIZEOF_INTP as well.) - - -Platform information --------------------- - -.. cvar:: NPY_CPU_X86 -.. cvar:: NPY_CPU_AMD64 -.. cvar:: NPY_CPU_IA64 -.. cvar:: NPY_CPU_PPC -.. cvar:: NPY_CPU_PPC64 -.. cvar:: NPY_CPU_SPARC -.. cvar:: NPY_CPU_SPARC64 -.. cvar:: NPY_CPU_S390 -.. cvar:: NPY_CPU_PARISC - - .. versionadded:: 1.3.0 - - CPU architecture of the platform; only one of the above is - defined. - - Defined in ``numpy/npy_cpu.h`` - -.. cvar:: NPY_LITTLE_ENDIAN - -.. cvar:: NPY_BIG_ENDIAN - -.. cvar:: NPY_BYTE_ORDER - - .. versionadded:: 1.3.0 - - Portable alternatives to the ``endian.h`` macros of GNU Libc. - If big endian, :cdata:`NPY_BYTE_ORDER` == :cdata:`NPY_BIG_ENDIAN`, and - similarly for little endian architectures. - - Defined in ``numpy/npy_endian.h``. - -.. cfunction:: PyArray_GetEndianness() - - .. 
versionadded:: 1.3.0 - - Returns the endianness of the current platform. - One of :cdata:`NPY_CPU_BIG`, :cdata:`NPY_CPU_LITTLE`, - or :cdata:`NPY_CPU_UNKNOWN_ENDIAN`. - diff --git a/pythonPackages/numpy/doc/source/reference/c-api.coremath.rst b/pythonPackages/numpy/doc/source/reference/c-api.coremath.rst deleted file mode 100755 index 5c50f36e48..0000000000 --- a/pythonPackages/numpy/doc/source/reference/c-api.coremath.rst +++ /dev/null @@ -1,183 +0,0 @@ -Numpy core libraries -==================== - -.. sectionauthor:: David Cournapeau - -.. versionadded:: 1.3.0 - -Starting from numpy 1.3.0, we are working on separating the pure C, -"computational" code from the python dependent code. The goal is twofold: -making the code cleaner, and enabling code reuse by other extensions outside -numpy (scipy, etc...). - -Numpy core math library ----------------------- - -The numpy core math library ('npymath') is a first step in this direction. This -library contains most math-related C99 functionality, which can be used on -platforms where C99 is not well supported. The core math functions have the -same API as the C99 ones, except for the npy_* prefix. - -The available functions are defined in npy_math.h - please refer to this header -when in doubt. - -Floating point classification -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. cvar:: NPY_NAN - - This macro is defined to a NaN (Not a Number), and is guaranteed to have - the signbit unset ('positive' NaN). The corresponding single and extended - precision macros are available with the suffixes F and L. - -.. cvar:: NPY_INFINITY - - This macro is defined to a positive inf. The corresponding single and - extended precision macros are available with the suffixes F and L. - -.. cvar:: NPY_PZERO - - This macro is defined to positive zero. The corresponding single and - extended precision macros are available with the suffixes F and L. - -.. cvar:: NPY_NZERO - - This macro is defined to negative zero (that is, with the sign bit set). 
The - corresponding single and extended precision macros are available with the - suffixes F and L. - -.. cfunction:: int npy_isnan(x) - - This is a macro, and is equivalent to C99 isnan: works for single, double - and extended precision, and returns a non-zero value if x is a NaN. - -.. cfunction:: int npy_isfinite(x) - - This is a macro, and is equivalent to C99 isfinite: works for single, - double and extended precision, and returns a non-zero value if x is neither a - NaN nor an infinity. - -.. cfunction:: int npy_isinf(x) - - This is a macro, and is equivalent to C99 isinf: works for single, double - and extended precision, and returns a non-zero value if x is infinite (positive - or negative). - -.. cfunction:: int npy_signbit(x) - - This is a macro, and is equivalent to C99 signbit: works for single, double - and extended precision, and returns a non-zero value if x has the signbit set - (that is, the number is negative). - -.. cfunction:: double npy_copysign(double x, double y) - - This is a function equivalent to C99 copysign: returns x with the same sign - as y. Works for any value, including inf and nan. Single and extended - precisions are available with suffix f and l. - - .. versionadded:: 1.4.0 - -Useful math constants -~~~~~~~~~~~~~~~~~~~~~ - -The following math constants are available in npy_math.h. Single and extended -precision are also available by adding the F and L suffixes respectively. - -.. cvar:: NPY_E - - Base of natural logarithm (:math:`e`) - -.. cvar:: NPY_LOG2E - - Logarithm to base 2 of :math:`e` (:math:`\frac{\ln(e)}{\ln(2)}`) - -.. cvar:: NPY_LOG10E - - Logarithm to base 10 of :math:`e` (:math:`\frac{\ln(e)}{\ln(10)}`) - -.. cvar:: NPY_LOGE2 - - Natural logarithm of 2 (:math:`\ln(2)`) - -.. cvar:: NPY_LOGE10 - - Natural logarithm of 10 (:math:`\ln(10)`) - -.. cvar:: NPY_PI - - Pi (:math:`\pi`) - -.. cvar:: NPY_PI_2 - - Pi divided by 2 (:math:`\frac{\pi}{2}`) - -.. cvar:: NPY_PI_4 - - Pi divided by 4 (:math:`\frac{\pi}{4}`) - -.. 
cvar:: NPY_1_PI - - Reciprocal of pi (:math:`\frac{1}{\pi}`) - -.. cvar:: NPY_2_PI - - Two times the reciprocal of pi (:math:`\frac{2}{\pi}`) - -.. cvar:: NPY_EULER - - The Euler constant - :math:`\lim_{n\rightarrow\infty}({\sum_{k=1}^n{\frac{1}{k}}-\ln n})` - -Low-level floating point manipulation -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -These can be useful for precise floating point comparison. - -.. cfunction:: double npy_nextafter(double x, double y) - - This is a function equivalent to C99 nextafter: returns the next representable - floating point value after x in the direction of y. Single and extended - precisions are available with suffix f and l. - - .. versionadded:: 1.4.0 - -.. cfunction:: double npy_spacing(double x) - - This is a function equivalent to the Fortran intrinsic ``spacing``. Returns the - distance between x and the next representable floating point value after x, - e.g. spacing(1) == eps. The spacing of nan and +/- inf returns nan. Single and - extended precisions are available with suffix f and l. - - .. versionadded:: 1.4.0 - -Complex functions -~~~~~~~~~~~~~~~~~ - -.. versionadded:: 1.4.0 - -C99-like complex functions have been added. These can be used if you wish to -implement portable C extensions. Since we still support platforms without the -C99 complex type, you need to restrict yourself to C90-compatible syntax, e.g.: - -.. code-block:: c - - /* a = 1 + 2i */ - npy_complex a = npy_cpack(1, 2); - npy_complex b; - - b = npy_log(a); - -Linking against the core math library in an extension -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. versionadded:: 1.4.0 - -To use the core math library in your own extension, you need to add the npymath -compile and link options to your extension in your setup.py: - - >>> from numpy.distutils.misc_util import get_info - >>> info = get_info('npymath') - >>> config.add_extension('foo', sources=['foo.c'], extra_info=info) - -In other words, the usage of info is exactly the same as when using blas_info -and co. 
diff --git a/pythonPackages/numpy/doc/source/reference/c-api.dtype.rst b/pythonPackages/numpy/doc/source/reference/c-api.dtype.rst deleted file mode 100755 index 569a4ccb31..0000000000 --- a/pythonPackages/numpy/doc/source/reference/c-api.dtype.rst +++ /dev/null @@ -1,218 +0,0 @@ -Data Type API -============= - -.. sectionauthor:: Travis E. Oliphant - -The standard array can have 21 different data types (and has some -support for adding your own types). These data types all have an -enumerated type, an enumerated type-character, and a corresponding -array scalar Python type object (placed in a hierarchy). There are -also standard C typedefs to make it easier to manipulate elements of -the given data type. For the numeric types, there are also bit-width -equivalent C typedefs and named typenumbers that make it easier to -select the precision desired. - -.. warning:: - - The names for the types in C code follow C naming conventions - more closely. The Python names for these types follow Python - conventions. Thus, :cdata:`NPY_FLOAT` picks up a 32-bit float in - C, but :class:`numpy.float_` in Python corresponds to a 64-bit - double. The bit-width names can be used in both Python and C for - clarity. - - -Enumerated Types ----------------- - -There is a list of enumerated types defined providing the basic 21 -data types plus some useful generic names. Whenever the code requires -a type number, one of these enumerated types is requested. The types -are all called :cdata:`NPY_{NAME}` where ``{NAME}`` can be - - **BOOL**, **BYTE**, **UBYTE**, **SHORT**, **USHORT**, **INT**, - **UINT**, **LONG**, **ULONG**, **LONGLONG**, **ULONGLONG**, - **FLOAT**, **DOUBLE**, **LONGDOUBLE**, **CFLOAT**, **CDOUBLE**, - **CLONGDOUBLE**, **OBJECT**, **STRING**, **UNICODE**, **VOID** - - **NTYPES**, **NOTYPE**, **USERDEF**, **DEFAULT_TYPE** - -The various character codes indicating certain types are also part of -an enumerated list.
References to type characters (should they be -needed at all) should always use these enumerations. The form of them -is :cdata:`NPY_{NAME}LTR` where ``{NAME}`` can be - - **BOOL**, **BYTE**, **UBYTE**, **SHORT**, **USHORT**, **INT**, - **UINT**, **LONG**, **ULONG**, **LONGLONG**, **ULONGLONG**, - **FLOAT**, **DOUBLE**, **LONGDOUBLE**, **CFLOAT**, **CDOUBLE**, - **CLONGDOUBLE**, **OBJECT**, **STRING**, **VOID** - - **INTP**, **UINTP** - - **GENBOOL**, **SIGNED**, **UNSIGNED**, **FLOATING**, **COMPLEX** - -The latter group of ``{NAME}s`` corresponds to letters used in the array -interface typestring specification. - - -Defines ------- - -Max and min values for integers -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. cvar:: NPY_MAX_INT{bits} - -.. cvar:: NPY_MAX_UINT{bits} - -.. cvar:: NPY_MIN_INT{bits} - - These are defined for ``{bits}`` = 8, 16, 32, 64, 128, and 256 and provide - the maximum (minimum) value of the corresponding (unsigned) integer - type. Note: the actual integer type may not be available on all - platforms (i.e. 128-bit and 256-bit integers are rare). - -.. cvar:: NPY_MIN_{type} - - This is defined for ``{type}`` = **BYTE**, **SHORT**, **INT**, - **LONG**, **LONGLONG**, **INTP** - -.. cvar:: NPY_MAX_{type} - - This is defined for ``{type}`` = **BYTE**, **UBYTE**, - **SHORT**, **USHORT**, **INT**, **UINT**, **LONG**, **ULONG**, - **LONGLONG**, **ULONGLONG**, **INTP**, **UINTP** - - -Number of bits in data types -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -All :cdata:`NPY_SIZEOF_{CTYPE}` constants have corresponding -:cdata:`NPY_BITSOF_{CTYPE}` constants defined. The :cdata:`NPY_BITSOF_{CTYPE}` -constants provide the number of bits in the data type.
Specifically, -the available ``{CTYPE}s`` are - - **BOOL**, **CHAR**, **SHORT**, **INT**, **LONG**, - **LONGLONG**, **FLOAT**, **DOUBLE**, **LONGDOUBLE** - - -Bit-width references to enumerated typenums -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -All of the numeric data types (integer, floating point, and complex) -have constants that are defined to be a specific enumerated type -number. Exactly which enumerated type a bit-width type refers to is -platform dependent. In particular, the constants available are -:cdata:`PyArray_{NAME}{BITS}` where ``{NAME}`` is **INT**, **UINT**, -**FLOAT**, **COMPLEX** and ``{BITS}`` can be 8, 16, 32, 64, 80, 96, 128, -160, 192, 256, and 512. Obviously not all bit-widths are available on -all platforms for all the kinds of numeric types. Commonly 8-, 16-, -32-, 64-bit integers; 32-, 64-bit floats; and 64-, 128-bit complex -types are available. - - -Integer that can hold a pointer -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -The constants **PyArray_INTP** and **PyArray_UINTP** refer to an -enumerated integer type that is large enough to hold a pointer on the -platform. Index arrays should always be converted to **PyArray_INTP** -, because the dimension of the array is of type npy_intp. - - -C-type names ------------- - -There are standard variable types for each of the numeric data types -and the bool data type. Some of these are already available in the -C-specification. You can create variables in extension code with these -types. - - -Boolean -^^^^^^^ - -.. ctype:: npy_bool - - unsigned char; The constants :cdata:`NPY_FALSE` and - :cdata:`NPY_TRUE` are also defined. - - -(Un)Signed Integer -^^^^^^^^^^^^^^^^^^ - -Unsigned versions of the integers can be defined by pre-pending a 'u' -to the front of the integer name. - -.. ctype:: npy_(u)byte - - (unsigned) char - -.. ctype:: npy_(u)short - - (unsigned) short - -.. ctype:: npy_(u)int - - (unsigned) int - -.. ctype:: npy_(u)long - - (unsigned) long int - -.. 
ctype:: npy_(u)longlong - - (unsigned) long long int - -.. ctype:: npy_(u)intp - - (unsigned) Py_intptr_t (an integer that is the size of a pointer on - the platform). - - -(Complex) Floating point -^^^^^^^^^^^^^^^^^^^^^^^^ - -.. ctype:: npy_(c)float - - float - -.. ctype:: npy_(c)double - - double - -.. ctype:: npy_(c)longdouble - - long double - -Complex types are structures with **.real** and **.imag** members (in -that order). - - -Bit-width names -^^^^^^^^^^^^^^^ - -There are also typedefs for signed integers, unsigned integers, -floating point, and complex floating point types of specific -bit-widths. The available type names are - - :ctype:`npy_int{bits}`, :ctype:`npy_uint{bits}`, :ctype:`npy_float{bits}`, - and :ctype:`npy_complex{bits}` - -where ``{bits}`` is the number of bits in the type and can be **8**, -**16**, **32**, **64**, 128, and 256 for integer types; 16, **32** -, **64**, 80, 96, 128, and 256 for floating-point types; and 32, -**64**, **128**, 160, 192, and 512 for complex-valued types. Which -bit-widths are available is platform dependent. The bolded bit-widths -are usually available on all platforms. - - -Printf Formatting ------------------ - -For help in printing, the following strings are defined as the correct -format specifier in printf and related commands.
- - :cdata:`NPY_LONGLONG_FMT`, :cdata:`NPY_ULONGLONG_FMT`, - :cdata:`NPY_INTP_FMT`, :cdata:`NPY_UINTP_FMT`, - :cdata:`NPY_LONGDOUBLE_FMT` diff --git a/pythonPackages/numpy/doc/source/reference/c-api.generalized-ufuncs.rst b/pythonPackages/numpy/doc/source/reference/c-api.generalized-ufuncs.rst deleted file mode 100755 index 870e5dbc41..0000000000 --- a/pythonPackages/numpy/doc/source/reference/c-api.generalized-ufuncs.rst +++ /dev/null @@ -1,175 +0,0 @@ -================================== -Generalized Universal Function API -================================== - -There is a general need for looping over not only functions on scalars -but also over functions on vectors (or arrays), as explained on -http://scipy.org/scipy/numpy/wiki/GeneralLoopingFunctions. We propose -to realize this concept by generalizing the universal functions -(ufuncs), and provide a C implementation that adds ~500 lines -to the numpy code base. In current (specialized) ufuncs, the elementary -function is limited to element-by-element operations, whereas the -generalized version supports "sub-array" by "sub-array" operations. -The Perl vector library PDL provides a similar functionality and its -terms are re-used in the following. - -Each generalized ufunc has information associated with it that states -what the "core" dimensionality of the inputs is, as well as the -corresponding dimensionality of the outputs (the element-wise ufuncs -have zero core dimensions). The list of the core dimensions for all -arguments is called the "signature" of a ufunc. For example, the -ufunc numpy.add has signature ``(),()->()`` defining two scalar inputs -and one scalar output. - -Another example is (see the GeneralLoopingFunctions page) the function -``inner1d(a,b)`` with a signature of ``(i),(i)->()``. This applies the -inner product along the last axis of each input, but keeps the -remaining indices intact. 
For example, where ``a`` is of shape ``(3,5,N)`` -and ``b`` is of shape ``(5,N)``, this will return an output of shape ``(3,5)``. -The underlying elementary function is called 3*5 times. In the -signature, we specify one core dimension ``(i)`` for each input and zero core -dimensions ``()`` for the output, since it takes two 1-d arrays and -returns a scalar. By using the same name ``i``, we specify that the two -corresponding dimensions should be of the same size (or one of them is -of size 1 and will be broadcasted). - -The dimensions beyond the core dimensions are called "loop" dimensions. In -the above example, this corresponds to ``(3,5)``. - -The usual numpy "broadcasting" rules apply, where the signature -determines how the dimensions of each input/output object are split -into core and loop dimensions: - -#. While an input array has a smaller dimensionality than the corresponding - number of core dimensions, 1's are pre-pended to its shape. -#. The core dimensions are removed from all inputs and the remaining - dimensions are broadcasted; defining the loop dimensions. -#. The output is given by the loop dimensions plus the output core dimensions. - - - -Definitions ------------ - -Elementary Function - Each ufunc consists of an elementary function that performs the - most basic operation on the smallest portion of array arguments - (e.g. adding two numbers is the most basic operation in adding two - arrays). The ufunc applies the elementary function multiple times - on different parts of the arrays. The input/output of elementary - functions can be vectors; e.g., the elementary function of inner1d - takes two vectors as input. - -Signature - A signature is a string describing the input/output dimensions of - the elementary function of a ufunc. See section below for more - details. 
- -Core Dimension - The dimensionality of each input/output of an elementary function - is defined by its core dimensions (zero core dimensions correspond - to a scalar input/output). The core dimensions are mapped to the - last dimensions of the input/output arrays. - -Dimension Name - A dimension name represents a core dimension in the signature. - Different dimensions may share a name, indicating that they are of - the same size (or are broadcastable). - -Dimension Index - A dimension index is an integer representing a dimension name. It - enumerates the dimension names according to the order of the first - occurrence of each name in the signature. - - -Details of Signature --------------------- - -The signature defines "core" dimensionality of input and output -variables, and thereby also defines the contraction of the -dimensions. The signature is represented by a string of the -following format: - -* Core dimensions of each input or output array are represented by a - list of dimension names in parentheses, ``(i_1,...,i_N)``; a scalar - input/output is denoted by ``()``. Instead of ``i_1``, ``i_2``, - etc, one can use any valid Python variable name. -* Dimension lists for different arguments are separated by ``","``. - Input/output arguments are separated by ``"->"``. -* If one uses the same dimension name in multiple locations, this - enforces the same size (or broadcastable size) of the corresponding - dimensions. - -The formal syntax of signatures is as follows:: - - <Signature>            ::= <Input arguments> "->" <Output arguments> - <Input arguments>      ::= <Argument list> - <Output arguments>     ::= <Argument list> - <Argument list>        ::= nil | <Argument> | <Argument> "," <Argument list> - <Argument>             ::= "(" <Core dimension list> ")" - <Core dimension list>  ::= nil | <Core dimension name> | - <Core dimension name> "," <Core dimension list> - <Core dimension name>  ::= valid Python variable name - - -Notes: - -#. All quotes are for clarity. -#. Core dimensions that share the same name must be broadcastable, as - the two ``i`` in our example above. Each dimension name typically - corresponds to one level of looping in the elementary function's - implementation. -#. White spaces are ignored.
- -Here are some examples of signatures: - -+-------------+------------------------+-----------------------------------+ -| add | ``(),()->()`` | | -+-------------+------------------------+-----------------------------------+ -| inner1d | ``(i),(i)->()`` | | -+-------------+------------------------+-----------------------------------+ -| sum1d | ``(i)->()`` | | -+-------------+------------------------+-----------------------------------+ -| dot2d | ``(m,n),(n,p)->(m,p)`` | matrix multiplication | -+-------------+------------------------+-----------------------------------+ -| outer_inner | ``(i,t),(j,t)->(i,j)`` | inner over the last dimension, | -| | | outer over the second to last, | -| | | and loop/broadcast over the rest. | -+-------------+------------------------+-----------------------------------+ - -C-API for implementing Elementary Functions -------------------------------------------- - -The current interface remains unchanged, and ``PyUFunc_FromFuncAndData`` -can still be used to implement (specialized) ufuncs, consisting of -scalar elementary functions. - -One can use ``PyUFunc_FromFuncAndDataAndSignature`` to declare a more -general ufunc. The argument list is the same as -``PyUFunc_FromFuncAndData``, with an additional argument specifying the -signature as C string. - -Furthermore, the callback function is of the same type as before, -``void (*foo)(char **args, intp *dimensions, intp *steps, void *func)``. -When invoked, ``args`` is a list of length ``nargs`` containing -the data of all input/output arguments. For a scalar elementary -function, ``steps`` is also of length ``nargs``, denoting the strides used -for the arguments. ``dimensions`` is a pointer to a single integer -defining the size of the axis to be looped over. - -For a non-trivial signature, ``dimensions`` will also contain the sizes -of the core dimensions as well, starting at the second entry. 
Only
-one size is provided for each unique dimension name and the sizes are
-given according to the first occurrence of a dimension name in the
-signature.
-
-The first ``nargs`` elements of ``steps`` remain the same as for scalar
-ufuncs.  The following elements contain the strides of all core
-dimensions for all arguments in order.
-
-For example, consider a ufunc with signature ``(i,j),(i)->()``.  In
-this case, ``args`` will contain three pointers to the data of the
-input/output arrays ``a``, ``b``, ``c``.  Furthermore, ``dimensions``
-will be ``[N, I, J]`` to define the size ``N`` of the loop and the
-sizes ``I`` and ``J`` for the core dimensions ``i`` and ``j``.
-Finally, ``steps`` will be ``[a_N, b_N, c_N, a_i, a_j, b_i]``,
-containing all necessary strides.
diff --git a/pythonPackages/numpy/doc/source/reference/c-api.rst b/pythonPackages/numpy/doc/source/reference/c-api.rst
deleted file mode 100755
index 9bcc68b498..0000000000
--- a/pythonPackages/numpy/doc/source/reference/c-api.rst
+++ /dev/null
@@ -1,49 +0,0 @@
-.. _c-api:
-
-###########
-Numpy C-API
-###########
-
-.. sectionauthor:: Travis E. Oliphant
-
-| Beware of the man who won't be bothered with details.
-| --- *William Feather, Sr.*
-
-| The truth is out there.
-| --- *Chris Carter, The X Files*
-
-
-NumPy provides a C-API to enable users to extend the system and get
-access to the array object for use in other routines.  The best way to
-truly understand the C-API is to read the source code.  If you are
-unfamiliar with (C) source code, however, this can be a daunting
-experience at first.  Be assured that the task becomes easier with
-practice, and you may be surprised at how simple the C-code can be to
-understand.  Even if you don't think you can write C-code from scratch,
-it is much easier to understand and modify already-written source code
-than to create it *de novo*.
-
-Python extensions are especially straightforward to understand because
-they all have a very similar structure.
Admittedly, NumPy is not a -trivial extension to Python, and may take a little more snooping to -grasp. This is especially true because of the code-generation -techniques, which simplify maintenance of very similar code, but can -make the code a little less readable to beginners. Still, with a -little persistence, the code can be opened to your understanding. It -is my hope, that this guide to the C-API can assist in the process of -becoming familiar with the compiled-level work that can be done with -NumPy in order to squeeze that last bit of necessary speed out of your -code. - -.. currentmodule:: numpy-c-api - -.. toctree:: - :maxdepth: 2 - - c-api.types-and-structures - c-api.config - c-api.dtype - c-api.array - c-api.ufunc - c-api.generalized-ufuncs - c-api.coremath diff --git a/pythonPackages/numpy/doc/source/reference/c-api.types-and-structures.rst b/pythonPackages/numpy/doc/source/reference/c-api.types-and-structures.rst deleted file mode 100755 index 044fd4f2c9..0000000000 --- a/pythonPackages/numpy/doc/source/reference/c-api.types-and-structures.rst +++ /dev/null @@ -1,1192 +0,0 @@ -***************************** -Python Types and C-Structures -***************************** - -.. sectionauthor:: Travis E. Oliphant - -Several new types are defined in the C-code. Most of these are -accessible from Python, but a few are not exposed due to their limited -use. Every new Python type has an associated :ctype:`PyObject *` with an -internal structure that includes a pointer to a "method table" that -defines how the new object behaves in Python. When you receive a -Python object into C code, you always get a pointer to a -:ctype:`PyObject` structure. Because a :ctype:`PyObject` structure is -very generic and defines only :cmacro:`PyObject_HEAD`, by itself it -is not very interesting. However, different objects contain more -details after the :cmacro:`PyObject_HEAD` (but you have to cast to the -correct type to access them --- or use accessor functions or macros). 
-
-
-New Python Types Defined
-========================
-
-Python types are the functional equivalent in C of classes in Python.
-By constructing a new Python type you make available a new object for
-Python.  The ndarray object is an example of a new type defined in C.
-New types are defined in C by two basic steps:
-
-1. creating a C-structure (usually named :ctype:`Py{Name}Object`) that
-   is binary-compatible with the :ctype:`PyObject` structure itself
-   but holds the additional information needed for that particular
-   object;
-
-2. populating the :ctype:`PyTypeObject` table (pointed to by the
-   ob_type member of the :ctype:`PyObject` structure) with pointers to
-   functions that implement the desired behavior for the type.
-
-Instead of special method names which define behavior for Python
-classes, there are "function tables" which point to functions that
-implement the desired results.  Since Python 2.2, the PyTypeObject
-itself has become dynamic, which allows C types to be "sub-typed"
-from other C-types in C, and sub-classed in Python.  The children
-types inherit the attributes and methods from their parent(s).
-
-There are two major new types: the ndarray ( :cdata:`PyArray_Type` )
-and the ufunc ( :cdata:`PyUFunc_Type` ).  Additional types play a
-supportive role: the :cdata:`PyArrayIter_Type`, the
-:cdata:`PyArrayMultiIter_Type`, and the :cdata:`PyArrayDescr_Type`.
-The :cdata:`PyArrayIter_Type` is the type for a flat iterator for an
-ndarray (the object that is returned when getting the flat
-attribute).  The :cdata:`PyArrayMultiIter_Type` is the type of the
-object returned when calling ``broadcast()``.  It handles iteration
-and broadcasting over a collection of nested sequences.  Also, the
-:cdata:`PyArrayDescr_Type` is the data-type-descriptor type whose
-instances describe the data.  Finally, there are 21 new scalar-array
-types which are new Python scalars corresponding to each of the
-fundamental data types available for arrays.
An additional 10 other -types are place holders that allow the array scalars to fit into a -hierarchy of actual Python types. - - -PyArray_Type ------------- - -.. cvar:: PyArray_Type - - The Python type of the ndarray is :cdata:`PyArray_Type`. In C, every - ndarray is a pointer to a :ctype:`PyArrayObject` structure. The ob_type - member of this structure contains a pointer to the :cdata:`PyArray_Type` - typeobject. - -.. ctype:: PyArrayObject - - The :ctype:`PyArrayObject` C-structure contains all of the required - information for an array. All instances of an ndarray (and its - subclasses) will have this structure. For future compatibility, - these structure members should normally be accessed using the - provided macros. If you need a shorter name, then you can make use - of :ctype:`NPY_AO` which is defined to be equivalent to - :ctype:`PyArrayObject`. - - .. code-block:: c - - typedef struct PyArrayObject { - PyObject_HEAD - char *data; - int nd; - npy_intp *dimensions; - npy_intp *strides; - PyObject *base; - PyArray_Descr *descr; - int flags; - PyObject *weakreflist; - } PyArrayObject; - -.. cmacro:: PyArrayObject.PyObject_HEAD - - This is needed by all Python objects. It consists of (at least) - a reference count member ( ``ob_refcnt`` ) and a pointer to the - typeobject ( ``ob_type`` ). (Other elements may also be present - if Python was compiled with special options see - Include/object.h in the Python source tree for more - information). The ob_type member points to a Python type - object. - -.. cmember:: char *PyArrayObject.data - - A pointer to the first element of the array. This pointer can - (and normally should) be recast to the data type of the array. - -.. cmember:: int PyArrayObject.nd - - An integer providing the number of dimensions for this - array. When nd is 0, the array is sometimes called a rank-0 - array. Such arrays have undefined dimensions and strides and - cannot be accessed. 
:cdata:`NPY_MAXDIMS` is the largest number of - dimensions for any array. - -.. cmember:: npy_intp PyArrayObject.dimensions - - An array of integers providing the shape in each dimension as - long as nd :math:`\geq` 1. The integer is always large enough - to hold a pointer on the platform, so the dimension size is - only limited by memory. - -.. cmember:: npy_intp *PyArrayObject.strides - - An array of integers providing for each dimension the number of - bytes that must be skipped to get to the next element in that - dimension. - -.. cmember:: PyObject *PyArrayObject.base - - This member is used to hold a pointer to another Python object - that is related to this array. There are two use cases: 1) If - this array does not own its own memory, then base points to the - Python object that owns it (perhaps another array object), 2) - If this array has the :cdata:`NPY_UPDATEIFCOPY` flag set, then this - array is a working copy of a "misbehaved" array. As soon as - this array is deleted, the array pointed to by base will be - updated with the contents of this array. - -.. cmember:: PyArray_Descr *PyArrayObject.descr - - A pointer to a data-type descriptor object (see below). The - data-type descriptor object is an instance of a new built-in - type which allows a generic description of memory. There is a - descriptor structure for each data type supported. This - descriptor structure contains useful information about the type - as well as a pointer to a table of function pointers to - implement specific functionality. - -.. cmember:: int PyArrayObject.flags - - Flags indicating how the memory pointed to by data is to be - interpreted. Possible flags are :cdata:`NPY_C_CONTIGUOUS`, - :cdata:`NPY_F_CONTIGUOUS`, :cdata:`NPY_OWNDATA`, :cdata:`NPY_ALIGNED`, - :cdata:`NPY_WRITEABLE`, and :cdata:`NPY_UPDATEIFCOPY`. - -.. cmember:: PyObject *PyArrayObject.weakreflist - - This member allows array objects to have weak references (using the - weakref module). 
- - -PyArrayDescr_Type ------------------ - -.. cvar:: PyArrayDescr_Type - - The :cdata:`PyArrayDescr_Type` is the built-in type of the - data-type-descriptor objects used to describe how the bytes comprising - the array are to be interpreted. There are 21 statically-defined - :ctype:`PyArray_Descr` objects for the built-in data-types. While these - participate in reference counting, their reference count should never - reach zero. There is also a dynamic table of user-defined - :ctype:`PyArray_Descr` objects that is also maintained. Once a - data-type-descriptor object is "registered" it should never be - deallocated either. The function :cfunc:`PyArray_DescrFromType` (...) can - be used to retrieve a :ctype:`PyArray_Descr` object from an enumerated - type-number (either built-in or user- defined). - -.. ctype:: PyArray_Descr - - The format of the :ctype:`PyArray_Descr` structure that lies at the - heart of the :cdata:`PyArrayDescr_Type` is - - .. code-block:: c - - typedef struct { - PyObject_HEAD - PyTypeObject *typeobj; - char kind; - char type; - char byteorder; - char hasobject; - int type_num; - int elsize; - int alignment; - PyArray_ArrayDescr *subarray; - PyObject *fields; - PyArray_ArrFuncs *f; - } PyArray_Descr; - -.. cmember:: PyTypeObject *PyArray_Descr.typeobj - - Pointer to a typeobject that is the corresponding Python type for - the elements of this array. For the builtin types, this points to - the corresponding array scalar. For user-defined types, this - should point to a user-defined typeobject. This typeobject can - either inherit from array scalars or not. If it does not inherit - from array scalars, then the :cdata:`NPY_USE_GETITEM` and - :cdata:`NPY_USE_SETITEM` flags should be set in the ``hasobject`` flag. - -.. cmember:: char PyArray_Descr.kind - - A character code indicating the kind of array (using the array - interface typestring notation). 
A 'b'
-    represents Boolean, an 'i' represents signed integer, a 'u'
-    represents unsigned integer, 'f' represents floating point, 'c'
-    represents complex floating point, 'S' represents an 8-bit
-    character string, 'U' represents a 32-bit/character unicode
-    string, and 'V' represents arbitrary data.
-
-.. cmember:: char PyArray_Descr.type
-
-    A traditional character code indicating the data type.
-
-.. cmember:: char PyArray_Descr.byteorder
-
-    A character indicating the byte-order: '>' (big-endian), '<'
-    (little-endian), '=' (native), '\|' (irrelevant, ignore).  All
-    builtin data-types have byteorder '='.
-
-.. cmember:: char PyArray_Descr.hasobject
-
-    A data-type bit-flag that determines if the data-type exhibits
-    object-array-like behavior.  Each bit in this member is a flag
-    named as follows:
-
-    .. cvar:: NPY_ITEM_REFCOUNT
-
-    .. cvar:: NPY_ITEM_HASOBJECT
-
-        Indicates that items of this data-type must be reference
-        counted (using :cfunc:`Py_INCREF` and :cfunc:`Py_DECREF` ).
-
-    .. cvar:: NPY_ITEM_LISTPICKLE
-
-        Indicates arrays of this data-type must be converted to a list
-        before pickling.
-
-    .. cvar:: NPY_ITEM_IS_POINTER
-
-        Indicates the item is a pointer to some other data-type.
-
-    .. cvar:: NPY_NEEDS_INIT
-
-        Indicates memory for this data-type must be initialized (set
-        to 0) on creation.
-
-    .. cvar:: NPY_NEEDS_PYAPI
-
-        Indicates this data-type requires the Python C-API during
-        access (so don't give up the GIL if array access is going to
-        be needed).
-
-    .. cvar:: NPY_USE_GETITEM
-
-        On array access use the ``f->getitem`` function pointer
-        instead of the standard conversion to an array scalar.  Must
-        be used if you don't define an array scalar to go along with
-        the data-type.
-
-    .. cvar:: NPY_USE_SETITEM
-
-        When creating a 0-d array from an array scalar use
-        ``f->setitem`` instead of the standard copy from an array
-        scalar.  Must be used if you don't define an array scalar to
-        go along with the data-type.
-
-    ..
cvar:: NPY_FROM_FIELDS - - The bits that are inherited for the parent data-type if these - bits are set in any field of the data-type. Currently ( - :cdata:`NPY_NEEDS_INIT` \| :cdata:`NPY_LIST_PICKLE` \| - :cdata:`NPY_ITEM_REFCOUNT` \| :cdata:`NPY_NEEDS_PYAPI` ). - - .. cvar:: NPY_OBJECT_DTYPE_FLAGS - - Bits set for the object data-type: ( :cdata:`NPY_LIST_PICKLE` - \| :cdata:`NPY_USE_GETITEM` \| :cdata:`NPY_ITEM_IS_POINTER` \| - :cdata:`NPY_REFCOUNT` \| :cdata:`NPY_NEEDS_INIT` \| - :cdata:`NPY_NEEDS_PYAPI`). - - .. cfunction:: PyDataType_FLAGCHK(PyArray_Descr *dtype, int flags) - - Return true if all the given flags are set for the data-type - object. - - .. cfunction:: PyDataType_REFCHK(PyArray_Descr *dtype) - - Equivalent to :cfunc:`PyDataType_FLAGCHK` (*dtype*, - :cdata:`NPY_ITEM_REFCOUNT`). - -.. cmember:: int PyArray_Descr.type_num - - A number that uniquely identifies the data type. For new data-types, - this number is assigned when the data-type is registered. - -.. cmember:: int PyArray_Descr.elsize - - For data types that are always the same size (such as long), this - holds the size of the data type. For flexible data types where - different arrays can have a different elementsize, this should be - 0. - -.. cmember:: int PyArray_Descr.alignment - - A number providing alignment information for this data type. - Specifically, it shows how far from the start of a 2-element - structure (whose first element is a ``char`` ), the compiler - places an item of this type: ``offsetof(struct {char c; type v;}, - v)`` - -.. cmember:: PyArray_ArrayDescr *PyArray_Descr.subarray - - If this is non- ``NULL``, then this data-type descriptor is a - C-style contiguous array of another data-type descriptor. In - other-words, each element that this descriptor describes is - actually an array of some other base descriptor. This is most - useful as the data-type descriptor for a field in another - data-type descriptor. 
The fields member should be ``NULL`` if this - is non- ``NULL`` (the fields member of the base descriptor can be - non- ``NULL`` however). The :ctype:`PyArray_ArrayDescr` structure is - defined using - - .. code-block:: c - - typedef struct { - PyArray_Descr *base; - PyObject *shape; - } PyArray_ArrayDescr; - - The elements of this structure are: - - .. cmember:: PyArray_Descr *PyArray_ArrayDescr.base - - The data-type-descriptor object of the base-type. - - .. cmember:: PyObject *PyArray_ArrayDescr.shape - - The shape (always C-style contiguous) of the sub-array as a Python - tuple. - - -.. cmember:: PyObject *PyArray_Descr.fields - - If this is non-NULL, then this data-type-descriptor has fields - described by a Python dictionary whose keys are names (and also - titles if given) and whose values are tuples that describe the - fields. Recall that a data-type-descriptor always describes a - fixed-length set of bytes. A field is a named sub-region of that - total, fixed-length collection. A field is described by a tuple - composed of another data- type-descriptor and a byte - offset. Optionally, the tuple may contain a title which is - normally a Python string. These tuples are placed in this - dictionary keyed by name (and also title if given). - -.. cmember:: PyArray_ArrFuncs *PyArray_Descr.f - - A pointer to a structure containing functions that the type needs - to implement internal features. These functions are not the same - thing as the universal functions (ufuncs) described later. Their - signatures can vary arbitrarily. - -.. ctype:: PyArray_ArrFuncs - - Functions implementing internal features. Not all of these - function pointers must be defined for a given type. The required - members are ``nonzero``, ``copyswap``, ``copyswapn``, ``setitem``, - ``getitem``, and ``cast``. These are assumed to be non- ``NULL`` - and ``NULL`` entries will cause a program crash. 
The other - functions may be ``NULL`` which will just mean reduced - functionality for that data-type. (Also, the nonzero function will - be filled in with a default function if it is ``NULL`` when you - register a user-defined data-type). - - .. code-block:: c - - typedef struct { - PyArray_VectorUnaryFunc *cast[PyArray_NTYPES]; - PyArray_GetItemFunc *getitem; - PyArray_SetItemFunc *setitem; - PyArray_CopySwapNFunc *copyswapn; - PyArray_CopySwapFunc *copyswap; - PyArray_CompareFunc *compare; - PyArray_ArgFunc *argmax; - PyArray_DotFunc *dotfunc; - PyArray_ScanFunc *scanfunc; - PyArray_FromStrFunc *fromstr; - PyArray_NonzeroFunc *nonzero; - PyArray_FillFunc *fill; - PyArray_FillWithScalarFunc *fillwithscalar; - PyArray_SortFunc *sort[PyArray_NSORTS]; - PyArray_ArgSortFunc *argsort[PyArray_NSORTS]; - PyObject *castdict; - PyArray_ScalarKindFunc *scalarkind; - int **cancastscalarkindto; - int *cancastto; - int listpickle - } PyArray_ArrFuncs; - - The concept of a behaved segment is used in the description of the - function pointers. A behaved segment is one that is aligned and in - native machine byte-order for the data-type. The ``nonzero``, - ``copyswap``, ``copyswapn``, ``getitem``, and ``setitem`` - functions can (and must) deal with mis-behaved arrays. The other - functions require behaved memory segments. - - .. cmember:: void cast(void *from, void *to, npy_intp n, void *fromarr, - void *toarr) - - An array of function pointers to cast from the current type to - all of the other builtin types. Each function casts a - contiguous, aligned, and notswapped buffer pointed at by - *from* to a contiguous, aligned, and notswapped buffer pointed - at by *to* The number of items to cast is given by *n*, and - the arguments *fromarr* and *toarr* are interpreted as - PyArrayObjects for flexible arrays to get itemsize - information. - - .. 
cmember:: PyObject *getitem(void *data, void *arr) - - A pointer to a function that returns a standard Python object - from a single element of the array object *arr* pointed to by - *data*. This function must be able to deal with "misbehaved - "(misaligned and/or swapped) arrays correctly. - - .. cmember:: int setitem(PyObject *item, void *data, void *arr) - - A pointer to a function that sets the Python object *item* - into the array, *arr*, at the position pointed to by *data* - . This function deals with "misbehaved" arrays. If successful, - a zero is returned, otherwise, a negative one is returned (and - a Python error set). - - .. cmember:: void copyswapn(void *dest, npy_intp dstride, void *src, - npy_intp sstride, npy_intp n, int swap, void *arr) - - .. cmember:: void copyswap(void *dest, void *src, int swap, void *arr) - - These members are both pointers to functions to copy data from - *src* to *dest* and *swap* if indicated. The value of arr is - only used for flexible ( :cdata:`NPY_STRING`, :cdata:`NPY_UNICODE`, - and :cdata:`NPY_VOID` ) arrays (and is obtained from - ``arr->descr->elsize`` ). The second function copies a single - value, while the first loops over n values with the provided - strides. These functions can deal with misbehaved *src* - data. If *src* is NULL then no copy is performed. If *swap* is - 0, then no byteswapping occurs. It is assumed that *dest* and - *src* do not overlap. If they overlap, then use ``memmove`` - (...) first followed by ``copyswap(n)`` with NULL valued - ``src``. - - .. cmember:: int compare(const void* d1, const void* d2, void* arr) - - A pointer to a function that compares two elements of the - array, ``arr``, pointed to by ``d1`` and ``d2``. This - function requires behaved arrays. The return value is 1 if * - ``d1`` > * ``d2``, 0 if * ``d1`` == * ``d2``, and -1 if * - ``d1`` < * ``d2``. The array object arr is used to retrieve - itemsize and field information for flexible arrays. - - .. 
cmember:: int argmax(void* data, npy_intp n, npy_intp* max_ind, - void* arr) - - A pointer to a function that retrieves the index of the - largest of ``n`` elements in ``arr`` beginning at the element - pointed to by ``data``. This function requires that the - memory segment be contiguous and behaved. The return value is - always 0. The index of the largest element is returned in - ``max_ind``. - - .. cmember:: void dotfunc(void* ip1, npy_intp is1, void* ip2, npy_intp is2, - void* op, npy_intp n, void* arr) - - A pointer to a function that multiplies two ``n`` -length - sequences together, adds them, and places the result in - element pointed to by ``op`` of ``arr``. The start of the two - sequences are pointed to by ``ip1`` and ``ip2``. To get to - the next element in each sequence requires a jump of ``is1`` - and ``is2`` *bytes*, respectively. This function requires - behaved (though not necessarily contiguous) memory. - - .. cmember:: int scanfunc(FILE* fd, void* ip , void* sep , void* arr) - - A pointer to a function that scans (scanf style) one element - of the corresponding type from the file descriptor ``fd`` into - the array memory pointed to by ``ip``. The array is assumed - to be behaved. If ``sep`` is not NULL, then a separator string - is also scanned from the file before returning. The last - argument ``arr`` is the array to be scanned into. A 0 is - returned if the scan is successful. A negative number - indicates something went wrong: -1 means the end of file was - reached before the separator string could be scanned, -4 means - that the end of file was reached before the element could be - scanned, and -3 means that the element could not be - interpreted from the format string. Requires a behaved array. - - .. 
cmember:: int fromstr(char* str, void* ip, char** endptr, void* arr)
-
-        A pointer to a function that converts the string pointed to by
-        ``str`` to one element of the corresponding type and places it
-        in the memory location pointed to by ``ip``.  After the
-        conversion is completed, ``*endptr`` points to the rest of the
-        string.  The last argument ``arr`` is the array into which ip
-        points (needed for variable-size data-types).  Returns 0 on
-        success or -1 on failure.  Requires a behaved array.
-
-    .. cmember:: Bool nonzero(void* data, void* arr)
-
-        A pointer to a function that returns TRUE if the item of
-        ``arr`` pointed to by ``data`` is nonzero.  This function can
-        deal with misbehaved arrays.
-
-    .. cmember:: void fill(void* data, npy_intp length, void* arr)
-
-        A pointer to a function that fills a contiguous array of given
-        length with data.  The first two elements of the array must
-        already be filled in.  From these two values, a delta will be
-        computed and the values from item 3 to the end will be
-        computed by repeatedly adding this computed delta.  The data
-        buffer must be well-behaved.
-
-    .. cmember:: void fillwithscalar(void* buffer, npy_intp length,
-                                     void* value, void* arr)
-
-        A pointer to a function that fills a contiguous ``buffer`` of
-        the given ``length`` with a single scalar ``value`` whose
-        address is given.  The final argument is the array, which is
-        needed to get the itemsize for variable-length arrays.
-
-    .. cmember:: int sort(void* start, npy_intp length, void* arr)
-
-        An array of function pointers to particular sorting
-        algorithms.  A particular sorting algorithm is obtained using
-        a key (so far :cdata:`PyArray_QUICKSORT`,
-        :cdata:`PyArray_HEAPSORT`, and :cdata:`PyArray_MERGESORT` are
-        defined).  These sorts are done in-place assuming contiguous
-        and aligned data.
-
-    .. cmember:: int argsort(void* start, npy_intp* result, npy_intp length,
-                             void \*arr)
-
-        An array of function pointers to sorting algorithms for this
-        data type.
The same sorting
-        algorithms as for sort are available.  The indices producing
-        the sort are returned in result (which must be initialized
-        with indices 0 to length-1, inclusive).
-
-    .. cmember:: PyObject *castdict
-
-        Either ``NULL`` or a dictionary containing low-level casting
-        functions for user-defined data-types.  Each function is
-        wrapped in a :ctype:`PyCObject *` and keyed by the data-type
-        number.
-
-    .. cmember:: PyArray_SCALARKIND scalarkind(PyArrayObject* arr)
-
-        A function to determine how scalars of this type should be
-        interpreted.  The argument is ``NULL`` or a 0-dimensional
-        array containing the data (if that is needed to determine the
-        kind of scalar).  The return value must be of type
-        :ctype:`PyArray_SCALARKIND`.
-
-    .. cmember:: int **cancastscalarkindto
-
-        Either ``NULL`` or an array of :ctype:`PyArray_NSCALARKINDS`
-        pointers.  These pointers should each be either ``NULL`` or a
-        pointer to an array of integers (terminated by
-        :cdata:`PyArray_NOTYPE`) indicating data-types that a scalar
-        of this data-type of the specified kind can be cast to safely
-        (this usually means without losing precision).
-
-    .. cmember:: int *cancastto
-
-        Either ``NULL`` or an array of integers (terminated by
-        :cdata:`PyArray_NOTYPE` ) indicating data-types that this
-        data-type can be cast to safely (this usually means without
-        losing precision).
-
-    .. cmember:: int listpickle
-
-        Unused.
-
-The :cdata:`PyArray_Type` typeobject implements many of the features of
-Python objects including the tp_as_number, tp_as_sequence,
-tp_as_mapping, and tp_as_buffer interfaces.  The rich comparison
-(tp_richcompare) is also used along with new-style attribute lookup
-for methods (tp_methods) and properties (tp_getset).  The
-:cdata:`PyArray_Type` can also be sub-typed.
-
-.. tip::
-
-    The tp_as_number methods use a generic approach to call whatever
-    function has been registered for handling the operation.  The
-    function PyNumeric_SetOps(..)
can be used to register functions to - handle particular mathematical operations (for all arrays). When - the umath module is imported, it sets the numeric operations for - all arrays to the corresponding ufuncs. The tp_str and tp_repr - methods can also be altered using PyString_SetStringFunction(...). - - -PyUFunc_Type ------------- - -.. cvar:: PyUFunc_Type - - The ufunc object is implemented by creation of the - :cdata:`PyUFunc_Type`. It is a very simple type that implements only - basic getattribute behavior, printing behavior, and has call - behavior which allows these objects to act like functions. The - basic idea behind the ufunc is to hold a reference to fast - 1-dimensional (vector) loops for each data type that supports the - operation. These one-dimensional loops all have the same signature - and are the key to creating a new ufunc. They are called by the - generic looping code as appropriate to implement the N-dimensional - function. There are also some generic 1-d loops defined for - floating and complexfloating arrays that allow you to define a - ufunc using a single scalar function (*e.g.* atanh). - - -.. ctype:: PyUFuncObject - - The core of the ufunc is the :ctype:`PyUFuncObject` which contains all - the information needed to call the underlying C-code loops that - perform the actual work. It has the following structure: - - .. code-block:: c - - typedef struct { - PyObject_HEAD - int nin; - int nout; - int nargs; - int identity; - PyUFuncGenericFunction *functions; - void **data; - int ntypes; - int check_return; - char *name; - char *types; - char *doc; - void *ptr; - PyObject *obj; - PyObject *userloops; - } PyUFuncObject; - - .. cmacro:: PyUFuncObject.PyObject_HEAD - - required for all Python objects. - - .. cmember:: int PyUFuncObject.nin - - The number of input arguments. - - .. cmember:: int PyUFuncObject.nout - - The number of output arguments. - - .. cmember:: int PyUFuncObject.nargs - - The total number of arguments (*nin* + *nout*). 
This must be - less than :cdata:`NPY_MAXARGS`. - - .. cmember:: int PyUFuncObject.identity - - Either :cdata:`PyUFunc_One`, :cdata:`PyUFunc_Zero`, or - :cdata:`PyUFunc_None` to indicate the identity for this operation. - It is only used for a reduce-like call on an empty array. - - .. cmember:: void PyUFuncObject.functions(char** args, npy_intp* dims, - npy_intp* steps, void* extradata) - - An array of function pointers --- one for each data type - supported by the ufunc. This is the vector loop that is called - to implement the underlying function *dims* [0] times. The - first argument, *args*, is an array of *nargs* pointers to - behaved memory. Pointers to the data for the input arguments - are first, followed by the pointers to the data for the output - arguments. How many bytes must be skipped to get to the next - element in the sequence is specified by the corresponding entry - in the *steps* array. The last argument allows the loop to - receive extra information. This is commonly used so that a - single, generic vector loop can be used for multiple - functions. In this case, the actual scalar function to call is - passed in as *extradata*. The size of this function pointer - array is ntypes. - - .. cmember:: void **PyUFuncObject.data - - Extra data to be passed to the 1-d vector loops or ``NULL`` if - no extra-data is needed. This C-array must be the same size ( - *i.e.* ntypes) as the functions array. ``NULL`` is used if - extra_data is not needed. Several C-API calls for UFuncs are - just 1-d vector loops that make use of this extra data to - receive a pointer to the actual function to call. - - .. cmember:: int PyUFuncObject.ntypes - - The number of supported data types for the ufunc. This number - specifies how many different 1-d loops (of the builtin data types) are - available. - - .. cmember:: int PyUFuncObject.check_return - - Obsolete and unused. 
However, it is set by the corresponding entry in - the main ufunc creation routine: :cfunc:`PyUFunc_FromFuncAndData` (...). - - .. cmember:: char *PyUFuncObject.name - - A string name for the ufunc. This is used dynamically to build - the __doc\__ attribute of ufuncs. - - .. cmember:: char *PyUFuncObject.types - - An array of *nargs* :math:`\times` *ntypes* 8-bit type_numbers - which contains the type signature for the function for each of - the supported (builtin) data types. For each of the *ntypes* - functions, the corresponding set of type numbers in this array - shows how the *args* argument should be interpreted in the 1-d - vector loop. These type numbers do not have to be the same type - and mixed-type ufuncs are supported. - - .. cmember:: char *PyUFuncObject.doc - - Documentation for the ufunc. Should not contain the function - signature as this is generated dynamically when __doc\__ is - retrieved. - - .. cmember:: void *PyUFuncObject.ptr - - Any dynamically allocated memory. Currently, this is used for dynamic - ufuncs created from a python function to store room for the types, - data, and name members. - - .. cmember:: PyObject *PyUFuncObject.obj - - For ufuncs dynamically created from python functions, this member - holds a reference to the underlying Python function. - - .. cmember:: PyObject *PyUFuncObject.userloops - - A dictionary of user-defined 1-d vector loops (stored as CObject ptrs) - for user-defined types. A loop may be registered by the user for any - user-defined type. It is retrieved by type number. User defined type - numbers are always larger than :cdata:`NPY_USERDEF`. - - -PyArrayIter_Type ----------------- - -.. cvar:: PyArrayIter_Type - - This is an iterator object that makes it easy to loop over an N-dimensional - array. It is the object returned from the flat attribute of an - ndarray. It is also used extensively throughout the implementation - internals to loop over an N-dimensional array. 
The tp_as_mapping - interface is implemented so that the iterator object can be indexed - (using 1-d indexing), and a few methods are implemented through the - tp_methods table. This object implements the next method and can be - used anywhere an iterator can be used in Python. - -.. ctype:: PyArrayIterObject - - The C-structure corresponding to an object of :cdata:`PyArrayIter_Type` is - the :ctype:`PyArrayIterObject`. The :ctype:`PyArrayIterObject` is used to - keep track of a pointer into an N-dimensional array. It contains associated - information used to quickly march through the array. The pointer can - be adjusted in three basic ways: 1) advance to the "next" position in - the array in a C-style contiguous fashion, 2) advance to an arbitrary - N-dimensional coordinate in the array, and 3) advance to an arbitrary - one-dimensional index into the array. The members of the - :ctype:`PyArrayIterObject` structure are used in these - calculations. Iterator objects keep their own dimension and strides - information about an array. This can be adjusted as needed for - "broadcasting," or to loop over only specific dimensions. - - .. code-block:: c - - typedef struct { - PyObject_HEAD - int nd_m1; - npy_intp index; - npy_intp size; - npy_intp coordinates[NPY_MAXDIMS]; - npy_intp dims_m1[NPY_MAXDIMS]; - npy_intp strides[NPY_MAXDIMS]; - npy_intp backstrides[NPY_MAXDIMS]; - npy_intp factors[NPY_MAXDIMS]; - PyArrayObject *ao; - char *dataptr; - Bool contiguous; - } PyArrayIterObject; - - .. cmember:: int PyArrayIterObject.nd_m1 - - :math:`N-1` where :math:`N` is the number of dimensions in the - underlying array. - - .. cmember:: npy_intp PyArrayIterObject.index - - The current 1-d index into the array. - - .. cmember:: npy_intp PyArrayIterObject.size - - The total size of the underlying array. - - .. cmember:: npy_intp *PyArrayIterObject.coordinates - - An :math:`N` -dimensional index into the array. - - .. 
cmember:: npy_intp *PyArrayIterObject.dims_m1 - - The size of the array minus 1 in each dimension. - - .. cmember:: npy_intp *PyArrayIterObject.strides - - The strides of the array. How many bytes needed to jump to the next - element in each dimension. - - .. cmember:: npy_intp *PyArrayIterObject.backstrides - - How many bytes needed to jump from the end of a dimension back - to its beginning. Note that *backstrides* [k] = *strides* [k] * - *dims_m1* [k], but it is stored here as an optimization. - - .. cmember:: npy_intp *PyArrayIterObject.factors - - This array is used in computing an N-d index from a 1-d index. It - contains needed products of the dimensions. - - .. cmember:: PyArrayObject *PyArrayIterObject.ao - - A pointer to the underlying ndarray this iterator was created to - represent. - - .. cmember:: char *PyArrayIterObject.dataptr - - This member points to an element in the ndarray indicated by the - index. - - .. cmember:: Bool PyArrayIterObject.contiguous - - This flag is true if the underlying array is - :cdata:`NPY_C_CONTIGUOUS`. It is used to simplify calculations when - possible. - - -How to use an array iterator on a C-level is explained more fully in -later sections. Typically, you do not need to concern yourself with -the internal structure of the iterator object, and merely interact -with it through the use of the macros :cfunc:`PyArray_ITER_NEXT` (it), -:cfunc:`PyArray_ITER_GOTO` (it, dest), or :cfunc:`PyArray_ITER_GOTO1D` (it, -index). All of these macros require the argument *it* to be a -:ctype:`PyArrayIterObject *`. - - -PyArrayMultiIter_Type ---------------------- - -.. cvar:: PyArrayMultiIter_Type - - This type provides an iterator that encapsulates the concept of - broadcasting. It allows :math:`N` arrays to be broadcast together - so that the loop progresses in C-style contiguous fashion over the - broadcasted array.
The corresponding C-structure is the - :ctype:`PyArrayMultiIterObject` whose memory layout must begin any - object, *obj*, passed in to the :cfunc:`PyArray_Broadcast` (obj) - function. Broadcasting is performed by adjusting array iterators so - that each iterator represents the broadcasted shape and size, but - has its strides adjusted so that the correct element from the array - is used at each iteration. - - -.. ctype:: PyArrayMultiIterObject - - .. code-block:: c - - typedef struct { - PyObject_HEAD - int numiter; - npy_intp size; - npy_intp index; - int nd; - npy_intp dimensions[NPY_MAXDIMS]; - PyArrayIterObject *iters[NPY_MAXDIMS]; - } PyArrayMultiIterObject; - - .. cmacro:: PyArrayMultiIterObject.PyObject_HEAD - - Needed at the start of every Python object (holds reference count and - type identification). - - .. cmember:: int PyArrayMultiIterObject.numiter - - The number of arrays that need to be broadcast to the same shape. - - .. cmember:: npy_intp PyArrayMultiIterObject.size - - The total broadcasted size. - - .. cmember:: npy_intp PyArrayMultiIterObject.index - - The current (1-d) index into the broadcasted result. - - .. cmember:: int PyArrayMultiIterObject.nd - - The number of dimensions in the broadcasted result. - - .. cmember:: npy_intp *PyArrayMultiIterObject.dimensions - - The shape of the broadcasted result (only ``nd`` slots are used). - - .. cmember:: PyArrayIterObject **PyArrayMultiIterObject.iters - - An array of iterator objects that holds the iterators for the arrays - to be broadcast together. On return, the iterators are adjusted for - broadcasting. - -PyArrayNeighborhoodIter_Type ----------------------------- - -.. cvar:: PyArrayNeighborhoodIter_Type - - This is an iterator object that makes it easy to loop over an N-dimensional - neighborhood. - -.. ctype:: PyArrayNeighborhoodIterObject - - The C-structure corresponding to an object of - :cdata:`PyArrayNeighborhoodIter_Type` is the - :ctype:`PyArrayNeighborhoodIterObject`. 
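The shape and stride adjustment that :cfunc:`PyArray_Broadcast` performs on the iterators can be sketched in pure Python. This is an illustrative model only, not NumPy's C implementation; the helper name ``broadcast_shapes_and_strides`` and its list-based interface are invented for this sketch.

```python
def broadcast_shapes_and_strides(shapes, strides_list):
    """Model of PyArray_Broadcast: compute the common broadcast shape
    and the per-array strides used by the adjusted iterators."""
    nd = max(len(s) for s in shapes)
    # Right-align every shape to nd dimensions, padding with 1s.
    padded = [(1,) * (nd - len(s)) + tuple(s) for s in shapes]
    result = []
    for dims in zip(*padded):
        non_one = {d for d in dims if d != 1}
        if len(non_one) > 1:
            raise ValueError("shapes %r are not broadcastable" % (shapes,))
        result.append(max(dims))
    # A broadcast (size-1) dimension gets stride 0 so the same element
    # is reread as the iterator marches over the broadcast result.
    new_strides = []
    for shape, strides in zip(shapes, strides_list):
        pad = nd - len(shape)
        pshape = (1,) * pad + tuple(shape)
        pstrides = (0,) * pad + tuple(strides)
        new_strides.append(tuple(
            0 if d == 1 else st for d, st in zip(pshape, pstrides)))
    return tuple(result), new_strides
```

For example, broadcasting a ``(3, 1)`` double array (8-byte strides) against a ``(4,)`` array yields the shape ``(3, 4)``, with each size-1 dimension given stride 0.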
- -PyArrayFlags_Type ----------------- - -.. cvar:: PyArrayFlags_Type - - When the flags attribute is retrieved from Python, a special - builtin object of this type is constructed. This special type makes - it easier to work with the different flags by accessing them as - attributes or by accessing them as if the object were a dictionary - with the flag names as entries. - - -ScalarArrayTypes ---------------- - -There is a Python type for each of the different built-in data types -that can be present in the array. Most of these are simple wrappers -around the corresponding data type in C. The C-names for these types -are :cdata:`Py{TYPE}ArrType_Type` where ``{TYPE}`` can be - - **Bool**, **Byte**, **Short**, **Int**, **Long**, **LongLong**, - **UByte**, **UShort**, **UInt**, **ULong**, **ULongLong**, - **Float**, **Double**, **LongDouble**, **CFloat**, **CDouble**, - **CLongDouble**, **String**, **Unicode**, **Void**, and - **Object**. - -These type names are part of the C-API and can therefore be created in -extension C-code. There is also a :cdata:`PyIntpArrType_Type` and a -:cdata:`PyUIntpArrType_Type` that are simple substitutes for one of the -integer types that can hold a pointer on the platform. The structure -of these scalar objects is not exposed to C-code. The function -:cfunc:`PyArray_ScalarAsCtype` (..) can be used to extract the C-type value -from the array scalar and the function :cfunc:`PyArray_Scalar` (...) can be -used to construct an array scalar from a C-value. - - -Other C-Structures -================== - -A few new C-structures were found to be useful in the development of -NumPy. These C-structures are used in at least one C-API call and are -therefore documented here. The main reason these structures were -defined is to make it easy to use the Python ParseTuple C-API to -convert from Python objects to a useful C-Object. - - -PyArray_Dims ------------- - -.. 
ctype:: PyArray_Dims - - This structure is very useful when shape and/or strides information is - supposed to be interpreted. The structure is: - - .. code-block:: c - - typedef struct { - npy_intp *ptr; - int len; - } PyArray_Dims; - - The members of this structure are - - .. cmember:: npy_intp *PyArray_Dims.ptr - - A pointer to a list of (:ctype:`npy_intp`) integers which usually - represent array shape or array strides. - - .. cmember:: int PyArray_Dims.len - - The length of the list of integers. It is assumed safe to - access *ptr* [0] to *ptr* [len-1]. - - -PyArray_Chunk ------------- - -.. ctype:: PyArray_Chunk - - This is equivalent to the buffer object structure in Python up to - the ptr member. On 32-bit platforms (*i.e.* if :cdata:`NPY_SIZEOF_INT` - == :cdata:`NPY_SIZEOF_INTP` ) or in Python 2.5, the len member also - matches an equivalent member of the buffer object. It is useful to - represent a generic single-segment chunk of memory. - - .. code-block:: c - - typedef struct { - PyObject_HEAD - PyObject *base; - void *ptr; - npy_intp len; - int flags; - } PyArray_Chunk; - - The members are - - .. cmacro:: PyArray_Chunk.PyObject_HEAD - - Necessary for all Python objects. Included here so that the - :ctype:`PyArray_Chunk` structure matches that of the buffer object - (at least to the len member). - - .. cmember:: PyObject *PyArray_Chunk.base - - The Python object this chunk of memory comes from. Needed so that - memory can be accounted for properly. - - .. cmember:: void *PyArray_Chunk.ptr - - A pointer to the start of the single-segment chunk of memory. - - .. cmember:: npy_intp PyArray_Chunk.len - - The length of the segment in bytes. - - .. cmember:: int PyArray_Chunk.flags - - Any data flags (*e.g.* :cdata:`NPY_WRITEABLE` ) that should be used - to interpret the memory. - - -PyArrayInterface ---------------- - -.. seealso:: :ref:`arrays.interface` - -.. 
ctype:: PyArrayInterface - - The :ctype:`PyArrayInterface` structure is defined so that NumPy and - other extension modules can use the rapid array interface - protocol. The :obj:`__array_struct__` method of an object that - supports the rapid array interface protocol should return a - :ctype:`PyCObject` that contains a pointer to a :ctype:`PyArrayInterface` - structure with the relevant details of the array. After the new - array is created, the attribute should be ``DECREF``'d which will - free the :ctype:`PyArrayInterface` structure. Remember to ``INCREF`` the - object (whose :obj:`__array_struct__` attribute was retrieved) and - point the base member of the new :ctype:`PyArrayObject` to this same - object. In this way the memory for the array will be managed - correctly. - - .. code-block:: c - - typedef struct { - int two; - int nd; - char typekind; - int itemsize; - int flags; - npy_intp *shape; - npy_intp *strides; - void *data; - PyObject *descr; - } PyArrayInterface; - - .. cmember:: int PyArrayInterface.two - - the integer 2 as a sanity check. - - .. cmember:: int PyArrayInterface.nd - - the number of dimensions in the array. - - .. cmember:: char PyArrayInterface.typekind - - A character indicating what kind of array is present according to the - typestring convention with 't' -> bitfield, 'b' -> Boolean, 'i' -> - signed integer, 'u' -> unsigned integer, 'f' -> floating point, 'c' -> - complex floating point, 'O' -> object, 'S' -> string, 'U' -> unicode, - 'V' -> void. - - .. cmember:: int PyArrayInterface.itemsize - - The number of bytes each item in the array requires. - - .. cmember:: int PyArrayInterface.flags - - Any of the bits :cdata:`NPY_C_CONTIGUOUS` (1), - :cdata:`NPY_F_CONTIGUOUS` (2), :cdata:`NPY_ALIGNED` (0x100), - :cdata:`NPY_NOTSWAPPED` (0x200), or :cdata:`NPY_WRITEABLE` - (0x400) to indicate something about the data. 
The - :cdata:`NPY_ALIGNED`, :cdata:`NPY_C_CONTIGUOUS`, and - :cdata:`NPY_F_CONTIGUOUS` flags can actually be determined from - the other parameters. The flag :cdata:`NPY_ARR_HAS_DESCR` - (0x800) can also be set to indicate to objects consuming the - version 3 array interface that the descr member of the - structure is present (it will be ignored by objects consuming - version 2 of the array interface). - - .. cmember:: npy_intp *PyArrayInterface.shape - - An array containing the size of the array in each dimension. - - .. cmember:: npy_intp *PyArrayInterface.strides - - An array containing the number of bytes to jump to get to the next - element in each dimension. - - .. cmember:: void *PyArrayInterface.data - - A pointer to the first element of the array. - - .. cmember:: PyObject *PyArrayInterface.descr - - A Python object describing the data-type in more detail (same - as the *descr* key in :obj:`__array_interface__`). This can be - ``NULL`` if *typekind* and *itemsize* provide enough - information. This field is also ignored unless the - :cdata:`NPY_ARR_HAS_DESCR` flag is set in *flags*. - - -Internally used structures -------------------------- - -Internally, the code uses some additional Python objects primarily for -memory management. These types are not accessible directly from -Python, and are not exposed to the C-API. They are included here only -for completeness and assistance in understanding the code. - - -.. ctype:: PyUFuncLoopObject - - A loose wrapper for a C-structure that contains the information - needed for looping. This is useful if you are trying to understand - the ufunc looping code. The :ctype:`PyUFuncLoopObject` is the associated - C-structure. It is defined in the ``ufuncobject.h`` header. - -.. ctype:: PyUFuncReduceObject - - A loose wrapper for the C-structure that contains the information - needed for reduce-like methods of ufuncs. This is useful if you are - trying to understand the reduce, accumulate, and reduceat - code.
The :ctype:`PyUFuncReduceObject` is the associated C-structure. It - is defined in the ``ufuncobject.h`` header. - -.. ctype:: PyUFunc_Loop1d - - A simple linked-list of C-structures containing the information needed - to define a 1-d loop for a ufunc for every defined signature of a - user-defined data-type. - -.. cvar:: PyArrayMapIter_Type - - Advanced indexing is handled with this Python type. It is simply a - loose wrapper around the C-structure containing the variables - needed for advanced array indexing. The associated C-structure, - :ctype:`PyArrayMapIterObject`, is useful if you are trying to - understand the advanced-index mapping code. It is defined in the - ``arrayobject.h`` header. This type is not exposed to Python and - could be replaced with a C-structure. As a Python type it takes - advantage of reference-counted memory management. diff --git a/pythonPackages/numpy/doc/source/reference/c-api.ufunc.rst b/pythonPackages/numpy/doc/source/reference/c-api.ufunc.rst deleted file mode 100755 index 384a69cf75..0000000000 --- a/pythonPackages/numpy/doc/source/reference/c-api.ufunc.rst +++ /dev/null @@ -1,367 +0,0 @@ -UFunc API -========= - -.. sectionauthor:: Travis E. Oliphant - -.. index:: - pair: ufunc; C-API - - -Constants --------- - -.. cvar:: UFUNC_ERR_{HANDLER} - - ``{HANDLER}`` can be **IGNORE**, **WARN**, **RAISE**, or **CALL** - -.. cvar:: UFUNC_{THING}_{ERR} - - ``{THING}`` can be **MASK**, **SHIFT**, or **FPE**, and ``{ERR}`` can - be **DIVIDEBYZERO**, **OVERFLOW**, **UNDERFLOW**, and **INVALID**. - -.. cvar:: PyUFunc_{VALUE} - - ``{VALUE}`` can be **One** (1), **Zero** (0), or **None** (-1) - - -Macros ------ - -.. cmacro:: NPY_LOOP_BEGIN_THREADS - - Used in universal function code to only release the Python GIL if - loop->obj is not true (*i.e.* this is not an OBJECT array - loop). Requires use of :cmacro:`NPY_BEGIN_THREADS_DEF` in variable - declaration area. - -.. 
cmacro:: NPY_LOOP_END_THREADS - - Used in universal function code to re-acquire the Python GIL if it - was released (because loop->obj was not true). - -.. cfunction:: UFUNC_CHECK_ERROR(loop) - - A macro used internally to check for errors and goto fail if - found. This macro requires a fail label in the current code - block. The *loop* variable must have at least members (obj, - errormask, and errorobj). If *loop* ->obj is nonzero, then - :cfunc:`PyErr_Occurred` () is called (meaning the GIL must be held). If - *loop* ->obj is zero, then if *loop* ->errormask is nonzero, - :cfunc:`PyUFunc_checkfperr` is called with arguments *loop* ->errormask - and *loop* ->errobj. If the result of this check of the IEEE - floating point registers is true then the code redirects to the - fail label which must be defined. - -.. cfunction:: UFUNC_CHECK_STATUS(ret) - - A macro that expands to platform-dependent code. The *ret* - variable can be any integer. The :cdata:`UFUNC_FPE_{ERR}` bits are - set in *ret* according to the status of the corresponding error - flags of the floating point processor. - - -Functions --------- - -.. cfunction:: PyObject* PyUFunc_FromFuncAndData(PyUFuncGenericFunction* func, - void** data, char* types, int ntypes, int nin, int nout, int identity, - char* name, char* doc, int check_return) - - Create a new broadcasting universal function from required variables. - Each ufunc builds around the notion of an element-by-element - operation. Each ufunc object contains pointers to 1-d loops - implementing the basic functionality for each supported type. - - .. note:: - - The *func*, *data*, *types*, *name*, and *doc* arguments are not - copied by :cfunc:`PyUFunc_FromFuncAndData`. The caller must ensure - that the memory used by these arrays is not freed as long as the - ufunc object is alive. - - :param func: - Must point to an array of length *ntypes* containing - :ctype:`PyUFuncGenericFunction` items.
These items are pointers to - functions that actually implement the underlying - (element-by-element) function :math:`N` times. - - :param data: - Should be ``NULL`` or a pointer to an array of size *ntypes* - . This array may contain arbitrary extra-data to be passed to - the corresponding 1-d loop function in the func array. - - :param types: - Must be of length (*nin* + *nout*) \* *ntypes*, and it - contains the data-types (built-in only) that the corresponding - function in the *func* array can deal with. - - :param ntypes: - How many different data-type "signatures" the ufunc has implemented. - - :param nin: - The number of inputs to this operation. - - :param nout: - The number of outputs - - :param name: - The name for the ufunc. Specifying a name of 'add' or - 'multiply' enables a special behavior for integer-typed - reductions when no dtype is given. If the input type is an - integer (or boolean) data type smaller than the size of the int_ - data type, it will be internally upcast to the int_ (or uint) - data type. - - - :param doc: - Allows passing in a documentation string to be stored with the - ufunc. The documentation string should not contain the name - of the function or the calling signature as that will be - dynamically determined from the object and available when - accessing the **__doc__** attribute of the ufunc. - - :param check_return: - Unused and present for backwards compatibility of the C-API. A - corresponding *check_return* integer does exist in the ufunc - structure and it does get set with this value when the ufunc - object is created. - -.. cfunction:: int PyUFunc_RegisterLoopForType(PyUFuncObject* ufunc, - int usertype, PyUFuncGenericFunction function, int* arg_types, void* data) - - This function allows the user to register a 1-d loop with an - already- created ufunc to be used whenever the ufunc is called - with any of its input arguments as the user-defined - data-type. 
This is needed in order to make ufuncs work with - built-in data-types. The data-type must have been previously - registered with the numpy system. The loop is passed in as - *function*. This loop can take arbitrary data which should be - passed in as *data*. The data-types the loop requires are passed - in as *arg_types* which must be a pointer to memory at least as - large as ufunc->nargs. - -.. cfunction:: int PyUFunc_ReplaceLoopBySignature(PyUFuncObject* ufunc, - PyUFuncGenericFunction newfunc, int* signature, - PyUFuncGenericFunction* oldfunc) - - Replace a 1-d loop matching the given *signature* in the - already-created *ufunc* with the new 1-d loop newfunc. Return the - old 1-d loop function in *oldfunc*. Return 0 on success and -1 on - failure. This function works only with built-in types (use - :cfunc:`PyUFunc_RegisterLoopForType` for user-defined types). A - signature is an array of data-type numbers indicating the inputs - followed by the outputs assumed by the 1-d loop. - -.. cfunction:: int PyUFunc_GenericFunction(PyUFuncObject* self, - PyObject* args, PyArrayObject** mps) - - A generic ufunc call. The ufunc is passed in as *self*, the - arguments to the ufunc as *args*. The *mps* argument is an array - of :ctype:`PyArrayObject` pointers containing the converted input - arguments as well as the ufunc outputs on return. The user is - responsible for managing this array and receives a new reference - for each array in *mps*. The total number of arrays in *mps* is - given by *self* ->nin + *self* ->nout. - -.. cfunction:: int PyUFunc_checkfperr(int errmask, PyObject* errobj) - - A simple interface to the IEEE error-flag checking support. The - *errmask* argument is a mask of :cdata:`UFUNC_MASK_{ERR}` bitmasks - indicating which errors to check for (and how to check for - them). 
The *errobj* must be a Python tuple with two elements: a - string containing the name which will be used in any communication - of error and either a callable Python object (call-back function) - or :cdata:`Py_None`. The callable object will only be used if - :cdata:`UFUNC_ERR_CALL` is set as the desired error checking - method. This routine manages the GIL and is safe to call even - after releasing the GIL. If an error in the IEEE-compatible - hardware is determined, a -1 is returned; otherwise, a 0 is - returned. - -.. cfunction:: void PyUFunc_clearfperr() - - Clear the IEEE error flags. - -.. cfunction:: void PyUFunc_GetPyValues(char* name, int* bufsize, - int* errmask, PyObject** errobj) - - Get the Python values used for ufunc processing from the - thread-local storage area unless the defaults have been set in - which case the name lookup is bypassed. The name is placed as a - string in the first element of *\*errobj*. The second element is - the looked-up function to call on error callback. The value of the - looked-up buffer-size to use is passed into *bufsize*, and the - value of the error mask is placed into *errmask*. - - -Generic functions ----------------- - -At the core of every ufunc is a collection of type-specific functions -that defines the basic functionality for each of the supported types. -These functions must evaluate the underlying function :math:`N\geq1` -times. Extra-data may be passed in that may be used during the -calculation. This feature allows some general functions to be used as -these basic looping functions. The general function has all the code -needed to point variables to the right place and set up a function -call. The general function assumes that the actual function to call is -passed in as the extra data and calls it with the correct values. All -of these functions are suitable for placing directly in the array of -functions stored in the functions member of the PyUFuncObject -structure. - -.. 
cfunction:: void PyUFunc_f_f_As_d_d(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - -.. cfunction:: void PyUFunc_d_d(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - -.. cfunction:: void PyUFunc_f_f(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - -.. cfunction:: void PyUFunc_g_g(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - -.. cfunction:: void PyUFunc_F_F_As_D_D(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - -.. cfunction:: void PyUFunc_F_F(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - -.. cfunction:: void PyUFunc_D_D(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - -.. cfunction:: void PyUFunc_G_G(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - - Type specific, core 1-d functions for ufuncs where each - calculation is obtained by calling a function taking one input - argument and returning one output. This function is passed in - ``func``. The letters correspond to dtypechar's of the supported - data types ( ``f`` - float, ``d`` - double, ``g`` - long double, - ``F`` - cfloat, ``D`` - cdouble, ``G`` - clongdouble). The - argument *func* must support the same signature. The _As_X_X - variants assume ndarray's of one data type but cast the values to - use an underlying function that takes a different data type. Thus, - :cfunc:`PyUFunc_f_f_As_d_d` uses ndarrays of data type :cdata:`NPY_FLOAT` - but calls out to a C-function that takes double and returns - double. - -.. cfunction:: void PyUFunc_ff_f_As_dd_d(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - -.. cfunction:: void PyUFunc_ff_f(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - -.. cfunction:: void PyUFunc_dd_d(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - -.. cfunction:: void PyUFunc_gg_g(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - -.. 
cfunction:: void PyUFunc_FF_F_As_DD_D(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - -.. cfunction:: void PyUFunc_DD_D(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - -.. cfunction:: void PyUFunc_FF_F(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - -.. cfunction:: void PyUFunc_GG_G(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - - Type specific, core 1-d functions for ufuncs where each - calculation is obtained by calling a function taking two input - arguments and returning one output. The underlying function to - call is passed in as *func*. The letters correspond to - dtypechar's of the specific data type supported by the - general-purpose function. The argument ``func`` must support the - corresponding signature. The ``_As_XX_X`` variants assume ndarrays - of one data type but cast the values at each iteration of the loop - to use the underlying function that takes a different data type. - -.. cfunction:: void PyUFunc_O_O(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - -.. cfunction:: void PyUFunc_OO_O(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - - One-input, one-output, and two-input, one-output core 1-d functions - for the :cdata:`NPY_OBJECT` data type. These functions handle reference - count issues and return early on error. The actual function to call is - *func* and it must accept calls with the signature ``(PyObject*) - (PyObject*)`` for :cfunc:`PyUFunc_O_O` or ``(PyObject*)(PyObject *, - PyObject *)`` for :cfunc:`PyUFunc_OO_O`. - -.. cfunction:: void PyUFunc_O_O_method(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - - This general purpose 1-d core function assumes that *func* is a string - representing a method of the input object. For each - iteration of the loop, the Python object is extracted from the array - and its *func* method is called, returning the result to the output array. - -.. 
cfunction:: void PyUFunc_OO_O_method(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - - This general purpose 1-d core function assumes that *func* is a - string representing a method of the input object that takes one - argument. The first argument in *args* is the method whose function is - called, the second argument in *args* is the argument passed to the - function. The output of the function is stored in the third entry - of *args*. - -.. cfunction:: void PyUFunc_On_Om(char** args, npy_intp* dimensions, - npy_intp* steps, void* func) - - This is the 1-d core function used by the dynamic ufuncs created - by umath.frompyfunc(function, nin, nout). In this case *func* is a - pointer to a :ctype:`PyUFunc_PyFuncData` structure which has definition - - .. ctype:: PyUFunc_PyFuncData - - .. code-block:: c - - typedef struct { - int nin; - int nout; - PyObject *callable; - } PyUFunc_PyFuncData; - - At each iteration of the loop, the *nin* input objects are extracted - from their object arrays and placed into an argument tuple, the Python - *callable* is called with the input arguments, and the nout - outputs are placed into their object arrays. - - -Importing the API ----------------- - -.. cvar:: PY_UFUNC_UNIQUE_SYMBOL - -.. cvar:: NO_IMPORT_UFUNC - -.. cfunction:: void import_ufunc(void) - - These are the constants and functions for accessing the ufunc - C-API from extension modules in precisely the same way as the - array C-API can be accessed. The ``import_ufunc`` () function must - always be called (in the initialization subroutine of the - extension module). If your extension module is in one file then - that is all that is required. The other two constants are useful - if your extension module makes use of multiple files.
In that - case, define :cdata:`PY_UFUNC_UNIQUE_SYMBOL` to something unique to - your code and then in source files that do not contain the module - initialization function but still need access to the UFUNC API, - define :cdata:`PY_UFUNC_UNIQUE_SYMBOL` to the same name used previously - and also define :cdata:`NO_IMPORT_UFUNC`. - - The C-API is actually an array of function pointers. This array is - created (and pointed to by a global variable) by import_ufunc. The - global variable is either statically defined or allowed to be seen - by other files depending on the state of - :cdata:`PY_UFUNC_UNIQUE_SYMBOL` and :cdata:`NO_IMPORT_UFUNC`. - -.. index:: - pair: ufunc; C-API diff --git a/pythonPackages/numpy/doc/source/reference/distutils.rst b/pythonPackages/numpy/doc/source/reference/distutils.rst deleted file mode 100755 index 63174c2c75..0000000000 --- a/pythonPackages/numpy/doc/source/reference/distutils.rst +++ /dev/null @@ -1,316 +0,0 @@ -********************************** -Packaging (:mod:`numpy.distutils`) -********************************** - -.. module:: numpy.distutils - -NumPy provides enhanced distutils functionality to make it easier to -build and install sub-packages, auto-generate code, and extension -modules that use Fortran-compiled libraries. To use features of NumPy -distutils, use the :func:`setup ` command from -:mod:`numpy.distutils.core`. A useful :class:`Configuration -` class is also provided in -:mod:`numpy.distutils.misc_util` that can make it easier to construct -keyword arguments to pass to the setup function (by passing the -dictionary obtained from the todict() method of the class). More -information is available in the NumPy Distutils Users Guide in -``/numpy/doc/DISTUTILS.txt``. - -.. index:: - single: distutils - - -Modules in :mod:`numpy.distutils` -================================= - -misc_util --------- - -.. module:: numpy.distutils.misc_util - -.. 
autosummary:: - :toctree: generated/ - - Configuration - get_numpy_include_dirs - dict_append - appendpath - allpath - dot_join - generate_config_py - get_cmd - terminal_has_colors - red_text - green_text - yellow_text - blue_text - cyan_text - cyg2win32 - all_strings - has_f_sources - has_cxx_sources - filter_sources - get_dependencies - is_local_src_dir - get_ext_source_files - get_script_files - - -.. class:: Configuration(package_name=None, parent_name=None, top_path=None, package_path=None, **attrs) - - Construct a configuration instance for the given package name. If - *parent_name* is not :const:`None`, then construct the package as a - sub-package of the *parent_name* package. If *top_path* and - *package_path* are :const:`None` then they are assumed equal to - the path of the file this instance was created in. The setup.py - files in the numpy distribution are good examples of how to use - the :class:`Configuration` instance. - - .. automethod:: todict - - .. automethod:: get_distribution - - .. automethod:: get_subpackage - - .. automethod:: add_subpackage - - .. automethod:: add_data_files - - .. automethod:: add_data_dir - - .. automethod:: add_include_dirs - - .. automethod:: add_headers - - .. automethod:: add_extension - - .. automethod:: add_library - - .. automethod:: add_scripts - - .. automethod:: add_installed_library - - .. automethod:: add_npy_pkg_config - - .. automethod:: paths - - .. automethod:: get_config_cmd - - .. automethod:: get_build_temp_dir - - .. automethod:: have_f77c - - .. automethod:: have_f90c - - .. automethod:: get_version - - .. automethod:: make_svn_version_py - - .. automethod:: make_config_py - - .. automethod:: get_info - -Other modules -------------- - -.. currentmodule:: numpy.distutils - -.. 
autosummary:: - :toctree: generated/ - - system_info.get_info - system_info.get_standard_file - cpuinfo.cpu - log.set_verbosity - exec_command - -Building Installable C libraries -================================ - -Conventional C libraries (installed through `add_library`) are not installed, and -are just used during the build (they are statically linked). An installable C -library is a pure C library, which does not depend on the python C runtime, and -is installed such that it may be used by third-party packages. To build and -install the C library, you just use the method `add_installed_library` instead of -`add_library`, which takes the same arguments except for an additional -``install_dir`` argument:: - - >>> config.add_installed_library('foo', sources=['foo.c'], install_dir='lib') - -npy-pkg-config files --------------------- - -To make the necessary build options available to third parties, you could use -the `npy-pkg-config` mechanism implemented in `numpy.distutils`. This mechanism is -based on a .ini file which contains all the options. A .ini file is very -similar to .pc files as used by the pkg-config unix utility:: - - [meta] - Name: foo - Version: 1.0 - Description: foo library - - [variables] - prefix = /home/user/local - libdir = ${prefix}/lib - includedir = ${prefix}/include - - [default] - cflags = -I${includedir} - libs = -L${libdir} -lfoo - -Generally, the file needs to be generated during the build, since it needs some -information known at build time only (e.g. prefix). This is mostly automatic if -one uses the `Configuration` method `add_npy_pkg_config`. 
Assuming we have a -template file foo.ini.in as follows:: - - [meta] - Name: foo - Version: @version@ - Description: foo library - - [variables] - prefix = @prefix@ - libdir = ${prefix}/lib - includedir = ${prefix}/include - - [default] - cflags = -I${includedir} - libs = -L${libdir} -lfoo - -and the following code in setup.py:: - - >>> config.add_installed_library('foo', sources=['foo.c'], install_dir='lib') - >>> subst = {'version': '1.0'} - >>> config.add_npy_pkg_config('foo.ini.in', 'lib', subst_dict=subst) - -This will install the file foo.ini into the directory package_dir/lib, and the -foo.ini file will be generated from foo.ini.in, where each ``@version@`` will be -replaced by ``subst_dict['version']``. The dictionary has an additional prefix -substitution rule automatically added, which contains the install prefix (since -this is not easy to get from setup.py). npy-pkg-config files can also be -installed at the same location as used for numpy, using the path returned from -the `get_npy_pkg_dir` function. - -Reusing a C library from another package ----------------------------------------- - -Information is easily retrieved with the `get_info` function in -`numpy.distutils.misc_util`:: - - >>> info = get_info('npymath') - >>> config.add_extension('foo', sources=['foo.c'], extra_info=info) - -An additional list of paths to look for .ini files can be given to `get_info`. - -Conversion of ``.src`` files -============================ - -NumPy distutils supports automatic conversion of source files named -<somefile>.src. This facility can be used to maintain very similar -code blocks requiring only simple changes between blocks. During the -build phase of setup, if a template file named <somefile>.src is -encountered, a new file named <somefile> is constructed from the -template and placed in the build directory to be used instead. Two -forms of template conversion are supported. 
The first form occurs for -files named <somefile>.ext.src where ext is a recognized Fortran -extension (f, f90, f95, f77, for, ftn, pyf). The second form is used -for all other cases. - -.. index:: - single: code generation - -Fortran files -------------- - -This template converter will replicate all **function** and -**subroutine** blocks in the file with names that contain '<...>' -according to the rules in '<...>'. The number of comma-separated words -in '<...>' determines the number of times the block is repeated. These -words indicate what the repeat rule, '<...>', should be -replaced with in each block. All of the repeat rules in a block must -contain the same number of comma-separated words indicating the number -of times that block should be repeated. If the word in the repeat rule -needs a comma, leftarrow, or rightarrow, then prepend it with a -backslash ' \'. If a word in the repeat rule matches '\\<index>' then -it will be replaced with the <index>-th word in the same repeat -specification. There are two forms for the repeat rule: named and -short. - - -Named repeat rule -^^^^^^^^^^^^^^^^^ - -A named repeat rule is useful when the same set of repeats must be -used several times in a block. It is specified using -<rule1=item1, item2, item3, ..., itemN>, where N is the number of times the block -should be repeated. On each repeat of the block, the entire -expression, '<...>' will be replaced first with item1, and then with -item2, and so forth until N repeats are accomplished. Once a named -repeat specification has been introduced, the same repeat rule may be -used **in the current block** by referring only to the name -(i.e. <rule1>). - - -Short repeat rule -^^^^^^^^^^^^^^^^^ - -A short repeat rule looks like <item1, item2, item3, ..., itemN>. The -rule specifies that the entire expression, '<...>' should be replaced -first with item1, and then with item2, and so forth until N repeats -are accomplished. 
- - -Pre-defined names -^^^^^^^^^^^^^^^^^ - -The following predefined named repeat rules are available: - -- <prefix=s,d,c,z> - -- <_c=s,d,c,z> - -- <_t=real, double precision, complex, double complex> - -- <ftype=real, double precision, complex, double complex> - -- <ctype=float, double, complex_float, complex_double> - -- <ftypereal=real, double precision> - -- <ctypereal=float, double> - - -Other files ------------ - -Non-Fortran files use a separate syntax for defining template blocks -that should be repeated using a variable expansion similar to the -named repeat rules of the Fortran-specific repeats. The template rules -for these files are: - -1. "/\**begin repeat "on a line by itself marks the beginning of - a segment that should be repeated. - -2. Named variable expansions are defined using #name=item1, item2, item3, - ..., itemN# and placed on successive lines. These variables are - replaced in each repeat block with the corresponding word. All named - variables in the same repeat block must define the same number of - words. - -3. In specifying the repeat rule for a named variable, item*N is short- - hand for item, item, ..., item repeated N times. In addition, - parentheses in combination with \*N can be used for grouping several - items that should be repeated. Thus, #name=(item1, item2)*4# is - equivalent to #name=item1, item2, item1, item2, item1, item2, item1, - item2# - -4. "\*/ "on a line by itself marks the end of the variable expansion - naming. The next line is the first line that will be repeated using - the named rules. - -5. Inside the block to be repeated, the variables that should be expanded - are specified as @name@. - -6. "/\**end repeat**/ "on a line by itself marks the previous line - as the last line of the block to be repeated. 
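The non-Fortran template rules above can be illustrated with a small sketch. The function below is a hypothetical, simplified re-implementation of the named-variable expansion (the real machinery in numpy.distutils also handles ``(item)*N`` grouping and nested blocks); the name ``expand_repeat_block`` and its exact behavior are illustrative assumptions, not the actual API:

```python
import re

def expand_repeat_block(header, body):
    """Expand one /**begin repeat ... **/ template block (simplified sketch).

    `header` holds lines like "#TYPE=float, double#"; `body` is the code
    to replicate, with each @NAME@ substituted per repetition.
    """
    # Collect #name=word1, word2, ...# definitions from the header.
    pairs = re.findall(r'#\s*(\w+)\s*=\s*([^#]*)#', header)
    names = {k: [w.strip() for w in v.split(',')] for k, v in pairs}
    counts = {len(v) for v in names.values()}
    if len(counts) != 1:
        # Rule 2: all named variables must define the same number of words.
        raise ValueError("all named variables must define the same number of words")
    out = []
    for i in range(counts.pop()):
        text = body
        for name, words in names.items():
            text = text.replace('@%s@' % name, words[i])
        out.append(text)
    return ''.join(out)
```

For example, a header of ``#TYPE=float, double#`` and ``#ABBR=f, d#`` applied to the body ``@TYPE@ add_@ABBR@(@TYPE@ a);`` produces one declaration per repetition, first for ``float``/``f``, then for ``double``/``d``.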
diff --git a/pythonPackages/numpy/doc/source/reference/figures/dtype-hierarchy.dia b/pythonPackages/numpy/doc/source/reference/figures/dtype-hierarchy.dia deleted file mode 100755 index 65379b880c..0000000000 Binary files a/pythonPackages/numpy/doc/source/reference/figures/dtype-hierarchy.dia and /dev/null differ diff --git a/pythonPackages/numpy/doc/source/reference/figures/dtype-hierarchy.pdf b/pythonPackages/numpy/doc/source/reference/figures/dtype-hierarchy.pdf deleted file mode 100755 index 6ce496a3e1..0000000000 Binary files a/pythonPackages/numpy/doc/source/reference/figures/dtype-hierarchy.pdf and /dev/null differ diff --git a/pythonPackages/numpy/doc/source/reference/figures/dtype-hierarchy.png b/pythonPackages/numpy/doc/source/reference/figures/dtype-hierarchy.png deleted file mode 100755 index 5722ac527a..0000000000 Binary files a/pythonPackages/numpy/doc/source/reference/figures/dtype-hierarchy.png and /dev/null differ diff --git a/pythonPackages/numpy/doc/source/reference/figures/threefundamental.fig b/pythonPackages/numpy/doc/source/reference/figures/threefundamental.fig deleted file mode 100755 index 79760c410e..0000000000 --- a/pythonPackages/numpy/doc/source/reference/figures/threefundamental.fig +++ /dev/null @@ -1,57 +0,0 @@ -#FIG 3.2 -Landscape -Center -Inches -Letter -100.00 -Single --2 -1200 2 -6 1950 2850 4350 3450 -2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 - 1950 2850 4350 2850 4350 3450 1950 3450 1950 2850 -2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 - 2550 2850 2550 3450 -2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 - 3150 2850 3150 3450 -2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 - 3750 2850 3750 3450 --6 -2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 - 5100 2850 7500 2850 7500 3450 5100 3450 5100 2850 -2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 - 5700 2850 5700 3450 -2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 - 6300 2850 6300 3450 -2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 - 6900 2850 6900 3450 -2 4 0 1 0 7 50 -1 -1 0.000 0 0 7 0 0 5 - 7800 3600 7800 2700 525 2700 
525 3600 7800 3600 -2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 - 675 2850 1725 2850 1725 3450 675 3450 675 2850 -2 2 0 4 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 - 5700 2850 6300 2850 6300 3450 5700 3450 5700 2850 -2 2 0 4 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 - 5700 1725 6300 1725 6300 2325 5700 2325 5700 1725 -2 4 0 1 0 7 50 -1 -1 0.000 0 0 7 0 0 5 - 6450 2475 6450 1275 5550 1275 5550 2475 6450 2475 -2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 - 5700 1350 6300 1350 6300 1575 5700 1575 5700 1350 -2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3 - 2 1 1.00 60.00 120.00 - 900 2850 900 1875 1575 1875 -2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 - 2 1 1.00 60.00 120.00 - 3375 1800 5550 1800 -2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 - 2 1 1.00 60.00 120.00 - 6000 2850 6000 2325 -2 4 0 1 0 7 50 -1 -1 0.000 0 0 7 0 0 5 - 3375 2100 3375 1575 1575 1575 1575 2100 3375 2100 -4 0 0 50 -1 18 14 0.0000 4 165 720 825 3225 header\001 -4 0 0 50 -1 2 40 0.0000 4 105 450 4500 3225 ...\001 -4 0 0 50 -1 18 14 0.0000 4 210 810 3600 3900 ndarray\001 -4 0 0 50 -1 18 14 0.0000 4 165 630 6600 2175 scalar\001 -4 0 0 50 -1 18 14 0.0000 4 165 540 6600 1950 array\001 -4 0 0 50 -1 16 12 0.0000 4 135 420 5775 1500 head\001 -4 0 0 50 -1 18 14 0.0000 4 210 975 1950 1875 data-type\001 diff --git a/pythonPackages/numpy/doc/source/reference/figures/threefundamental.pdf b/pythonPackages/numpy/doc/source/reference/figures/threefundamental.pdf deleted file mode 100755 index b89e9f2afa..0000000000 Binary files a/pythonPackages/numpy/doc/source/reference/figures/threefundamental.pdf and /dev/null differ diff --git a/pythonPackages/numpy/doc/source/reference/figures/threefundamental.png b/pythonPackages/numpy/doc/source/reference/figures/threefundamental.png deleted file mode 100755 index de252fc9d4..0000000000 Binary files a/pythonPackages/numpy/doc/source/reference/figures/threefundamental.png and /dev/null differ diff --git a/pythonPackages/numpy/doc/source/reference/index.rst b/pythonPackages/numpy/doc/source/reference/index.rst deleted 
file mode 100755 index 8074c24baa..0000000000 --- a/pythonPackages/numpy/doc/source/reference/index.rst +++ /dev/null @@ -1,42 +0,0 @@ -.. _reference: - -############### -NumPy Reference -############### - -:Release: |version| -:Date: |today| - -.. module:: numpy - -This reference manual details functions, modules, and objects -included in Numpy, describing what they are and what they do. -For learning how to use NumPy, see also :ref:`user`. - - -.. toctree:: - :maxdepth: 2 - - arrays - ufuncs - routines - ctypes - distutils - c-api - internals - - -Acknowledgements -================ - -Large parts of this manual originate from Travis E. Oliphant's book -`Guide to Numpy `__ (which generously entered -Public Domain in August 2008). The reference documentation for many of -the functions was written by numerous contributors and developers of -Numpy, both prior to and during the -`Numpy Documentation Marathon -`__. - -Please help to improve NumPy's documentation! Instructions on how to -join the ongoing documentation marathon can be found -`on the scipy.org website `__ diff --git a/pythonPackages/numpy/doc/source/reference/internals.code-explanations.rst b/pythonPackages/numpy/doc/source/reference/internals.code-explanations.rst deleted file mode 100755 index cceb1a60d4..0000000000 --- a/pythonPackages/numpy/doc/source/reference/internals.code-explanations.rst +++ /dev/null @@ -1,666 +0,0 @@ -.. currentmodule:: numpy - -************************* -Numpy C Code Explanations -************************* - - Fanaticism consists of redoubling your efforts when you have forgotten - your aim. - --- *George Santayana* - - An authority is a person who can tell you more about something than - you really care to know. - --- *Unknown* - -This Chapter attempts to explain the logic behind some of the new -pieces of code. 
The purpose behind these explanations is to enable -somebody to be able to understand the ideas behind the implementation -somewhat more easily than just staring at the code. Perhaps in this -way, the algorithms can be improved on, borrowed from, and/or -optimized. - - -Memory model -============ - -.. index:: - pair: ndarray; memory model - -One fundamental aspect of the ndarray is that an array is seen as a -"chunk" of memory starting at some location. The interpretation of -this memory depends on the stride information. For each dimension in -an :math:`N` -dimensional array, an integer (stride) dictates how many -bytes must be skipped to get to the next element in that dimension. -Unless you have a single-segment array, this stride information must -be consulted when traversing through an array. It is not difficult to -write code that accepts strides, you just have to use (char \*) -pointers because strides are in units of bytes. Keep in mind also that -strides do not have to be unit-multiples of the element size. Also, -remember that if the number of dimensions of the array is 0 (sometimes -called a rank-0 array), then the strides and dimensions variables are -NULL. - -Besides the structural information contained in the strides and -dimensions members of the :ctype:`PyArrayObject`, the flags contain important -information about how the data may be accessed. In particular, the -:cdata:`NPY_ALIGNED` flag is set when the memory is on a suitable boundary -according to the data-type array. Even if you have a contiguous chunk -of memory, you cannot just assume it is safe to dereference a data- -type-specific pointer to an element. Only if the :cdata:`NPY_ALIGNED` flag is -set is this a safe operation (on some platforms it will work but on -others, like Solaris, it will cause a bus error). The :cdata:`NPY_WRITEABLE` -should also be ensured if you plan on writing to the memory area of -the array. It is also possible to obtain a pointer to an unwriteable -memory area. 
Sometimes, writing to the memory area when the -:cdata:`NPY_WRITEABLE` flag is not set will just be rude. Other times it can -cause program crashes ( *e.g.* a data-area that is a read-only -memory-mapped file). - - -Data-type encapsulation -======================= - -.. index:: - single: dtype - -The data-type is an important abstraction of the ndarray. Operations -will look to the data-type to provide the key functionality that is -needed to operate on the array. This functionality is provided in the -list of function pointers pointed to by the 'f' member of the -:ctype:`PyArray_Descr` structure. In this way, the number of data-types can be -extended simply by providing a :ctype:`PyArray_Descr` structure with suitable -function pointers in the 'f' member. For built-in types there are some -optimizations that by-pass this mechanism, but the point of the data- -type abstraction is to allow new data-types to be added. - -One of the built-in data-types, the void data-type allows for -arbitrary records containing 1 or more fields as elements of the -array. A field is simply another data-type object along with an offset -into the current record. In order to support arbitrarily nested -fields, several recursive implementations of data-type access are -implemented for the void type. A common idiom is to cycle through the -elements of the dictionary and perform a specific operation based on -the data-type object stored at the given offset. These offsets can be -arbitrary numbers. Therefore, the possibility of encountering mis- -aligned data must be recognized and taken into account if necessary. - - -N-D Iterators -============= - -.. index:: - single: array iterator - -A very common operation in much of NumPy code is the need to iterate -over all the elements of a general, strided, N-dimensional array. This -operation of a general-purpose N-dimensional loop is abstracted in the -notion of an iterator object. 
To write an N-dimensional loop, you only -have to create an iterator object from an ndarray, work with the -dataptr member of the iterator object structure and call the macro -:cfunc:`PyArray_ITER_NEXT` (it) on the iterator object to move to the next -element. The "next" element is always in C-contiguous order. The macro -works by first special casing the C-contiguous, 1-D, and 2-D cases -which work very simply. - -For the general case, the iteration works by keeping track of a list -of coordinate counters in the iterator object. At each iteration, the -last coordinate counter is increased (starting from 0). If this -counter is smaller than one less than the size of the array in that -dimension (a pre-computed and stored value), then the counter is -increased and the dataptr member is increased by the strides in that -dimension and the macro ends. If the end of a dimension is reached, -the counter for the last dimension is reset to zero and the dataptr is -moved back to the beginning of that dimension by subtracting the -strides value times one less than the number of elements in that -dimension (this is also pre-computed and stored in the backstrides -member of the iterator object). In this case, the macro does not end, -but a local dimension counter is decremented so that the next-to-last -dimension replaces the role that the last dimension played and the -previously-described tests are executed again on the next-to-last -dimension. In this way, the dataptr is adjusted appropriately for -arbitrary striding. - -The coordinates member of the :ctype:`PyArrayIterObject` structure maintains -the current N-d counter unless the underlying array is C-contiguous in -which case the coordinate counting is by-passed. The index member of -the :ctype:`PyArrayIterObject` keeps track of the current flat index of the -iterator. It is updated by the :cfunc:`PyArray_ITER_NEXT` macro. - - -Broadcasting -============ - -.. 
index:: - single: broadcasting - -In Numeric, broadcasting was implemented in several lines of code -buried deep in ufuncobject.c. In NumPy, the notion of broadcasting has -been abstracted so that it can be performed in multiple places. -Broadcasting is handled by the function :cfunc:`PyArray_Broadcast`. This -function requires a :ctype:`PyArrayMultiIterObject` (or something that is a -binary equivalent) to be passed in. The :ctype:`PyArrayMultiIterObject` keeps -track of the broadcasted number of dimensions and size in each -dimension along with the total size of the broadcasted result. It also -keeps track of the number of arrays being broadcast and a pointer to -an iterator for each of the arrays being broadcasted. - -The :cfunc:`PyArray_Broadcast` function takes the iterators that have already -been defined and uses them to determine the broadcast shape in each -dimension (to create the iterators at the same time that broadcasting -occurs, use the :cfunc:`PyMultiIter_New` function). Then, the iterators are -adjusted so that each iterator thinks it is iterating over an array -with the broadcasted size. This is done by adjusting the iterators' -number of dimensions and the shape in each dimension. This works -because the iterator strides are also adjusted. Broadcasting only -adjusts (or adds) length-1 dimensions. For these dimensions, the -strides variable is simply set to 0 so that the data-pointer for the -iterator over that array doesn't move as the broadcasting operation -operates over the extended dimension. - -Broadcasting was always implemented in Numeric using 0-valued strides -for the extended dimensions. It is done in exactly the same way in -NumPy. The big difference is that now the array of strides is kept -track of in a :ctype:`PyArrayIterObject`, the iterators involved in a -broadcasted result are kept track of in a :ctype:`PyArrayMultiIterObject`, -and the :cfunc:`PyArray_Broadcast` call implements the broadcasting rules. 
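The shape-matching part of these rules can be sketched in a few lines of Python. This is an illustrative model only, not the C implementation; ``broadcast_shape`` is a hypothetical helper name:

```python
def broadcast_shape(*shapes):
    """Compute the broadcast result shape the way the rules above describe:
    align shapes at the trailing edge, then along each axis require the
    sizes to agree or be 1 (a length-1 axis gets stride 0 so its data
    pointer does not move)."""
    ndim = max(len(s) for s in shapes)
    result = []
    for axis in range(ndim):
        dim = 1
        for s in shapes:
            # Right-align: missing leading axes of shorter shapes act as length 1.
            i = axis - (ndim - len(s))
            if i < 0:
                continue
            if s[i] != 1 and dim != 1 and s[i] != dim:
                raise ValueError("shapes are not broadcastable")
            dim = max(dim, s[i])
        result.append(dim)
    return tuple(result)
```

For instance, shapes ``(3, 1, 5)`` and ``(4, 5)`` broadcast to ``(3, 4, 5)``, while ``(3,)`` and ``(4,)`` raise an error.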
- - -Array Scalars -============= - -.. index:: - single: array scalars - -The array scalars offer a hierarchy of Python types that allow a one- -to-one correspondence between the data-type stored in an array and the -Python-type that is returned when an element is extracted from the -array. An exception to this rule was made with object arrays. Object -arrays are heterogeneous collections of arbitrary Python objects. When -you select an item from an object array, you get back the original -Python object (and not an object array scalar which does exist but is -rarely used for practical purposes). - -The array scalars also offer the same methods and attributes as arrays -with the intent that the same code can be used to support arbitrary -dimensions (including 0-dimensions). The array scalars are read-only -(immutable) with the exception of the void scalar which can also be -written to so that record-array field setting works more naturally -(a[0]['f1'] = ``value`` ). - - -Advanced ("Fancy") Indexing -============================= - -.. index:: - single: indexing - -The implementation of advanced indexing represents some of the most -difficult code to write and explain. In fact, there are two -implementations of advanced indexing. The first works only with 1-D -arrays and is implemented to handle expressions involving a.flat[obj]. -The second is a general-purpose implementation that works for arrays of "arbitrary -dimension" (up to a fixed maximum). The one-dimensional indexing -approaches were implemented in a rather straightforward fashion, and -so it is the general-purpose indexing code that will be the focus of -this section. - -There is a multi-layer approach to indexing because the indexing code -can at times return an array scalar and at other times return an -array. The functions with "_nice" appended to their name do this -special handling while the functions without the _nice appendage always -return an array (perhaps a 0-dimensional array). 
Some special-case -optimizations (the index being an integer scalar, and the index being -a tuple with as many dimensions as the array) are handled in -array_subscript_nice function which is what Python calls when -presented with the code "a[obj]." These optimizations allow fast -single-integer indexing, and also ensure that a 0-dimensional array is -not created only to be discarded as the array scalar is returned -instead. This provides significant speed-up for code that is selecting -many scalars out of an array (such as in a loop). However, it is still -not faster than simply using a list to store standard Python scalars, -because that is optimized by the Python interpreter itself. - -After these optimizations, the array_subscript function itself is -called. This function first checks for field selection which occurs -when a string is passed as the indexing object. Then, 0-D arrays are -given special-case consideration. Finally, the code determines whether -or not advanced, or fancy, indexing needs to be performed. If fancy -indexing is not needed, then standard view-based indexing is performed -using code borrowed from Numeric which parses the indexing object and -returns the offset into the data-buffer and the dimensions necessary -to create a new view of the array. The strides are also changed by -multiplying each stride by the step-size requested along the -corresponding dimension. - - -Fancy-indexing check --------------------- - -The fancy_indexing_check routine determines whether or not to use -standard view-based indexing or new copy-based indexing. If the -indexing object is a tuple, then view-based indexing is assumed by -default. Only if the tuple contains an array object or a sequence -object is fancy-indexing assumed. If the indexing object is an array, -then fancy indexing is automatically assumed. If the indexing object -is any other kind of sequence, then fancy-indexing is assumed by -default. 
This is over-ridden to simple indexing if the sequence -contains any slice, newaxis, or Ellipsis objects, and no arrays or -additional sequences are also contained in the sequence. The purpose -of this is to allow the construction of "slicing" sequences which is a -common technique for building up code that works in arbitrary numbers -of dimensions. - - -Fancy-indexing implementation ------------------------------ - -The concept of indexing was also abstracted using the idea of an -iterator. If fancy indexing is performed, then a :ctype:`PyArrayMapIterObject` -is created. This internal object is not exposed to Python. It is -created in order to handle the fancy-indexing at a high-level. Both -get and set fancy-indexing operations are implemented using this -object. Fancy indexing is abstracted into three separate operations: -(1) creating the :ctype:`PyArrayMapIterObject` from the indexing object, (2) -binding the :ctype:`PyArrayMapIterObject` to the array being indexed, and (3) -getting (or setting) the items determined by the indexing object. -There is an optimization implemented so that the :ctype:`PyArrayIterObject` -(which has its own less complicated fancy-indexing) is used for -indexing when possible. - - -Creating the mapping object -^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -The first step is to convert the indexing objects into a standard form -where iterators are created for all of the index array inputs and all -Boolean arrays are converted to equivalent integer index arrays (as if -nonzero(arr) had been called). Finally, all integer arrays are -replaced with the integer 0 in the indexing object and all of the -index-array iterators are "broadcast" to the same shape. - - -Binding the mapping object -^^^^^^^^^^^^^^^^^^^^^^^^^^ - -When the mapping object is created it does not know which array it -will be used with, so once the index iterators are constructed during -mapping-object creation, the next step is to associate these iterators -with a particular ndarray. 
This process interprets any ellipsis and -slice objects so that the index arrays are associated with the -appropriate axis (the axis indicated by the iteraxis entry -corresponding to the iterator for the integer index array). This -information is then used to check the indices to be sure they are -within range of the shape of the array being indexed. The presence of -ellipsis and/or slice objects implies a sub-space iteration that is -accomplished by extracting a sub-space view of the array (using the -index object resulting from replacing all the integer index arrays -with 0) and storing the information about where this sub-space starts -in the mapping object. This is used later during mapping-object -iteration to select the correct elements from the underlying array. - - -Getting (or Setting) -^^^^^^^^^^^^^^^^^^^^ - -After the mapping object is successfully bound to a particular array, -the mapping object contains the shape of the resulting item as well as -iterator objects that will walk through the currently-bound array and -either get or set its elements as needed. The walk is implemented -using the :cfunc:`PyArray_MapIterNext` function. This function sets the -coordinates of an iterator object into the current array to be the -next coordinate location indicated by all of the indexing-object -iterators while adjusting, if necessary, for the presence of a sub- -space. The result of this function is that the dataptr member of the -mapping object structure is pointed to the next position in the array -that needs to be copied out or set to some value. - -When advanced indexing is used to extract an array, an iterator for -the new array is constructed and advanced in phase with the mapping -object iterator. 
When advanced indexing is used to place values in an -array, a special "broadcasted" iterator is constructed from the object -being placed into the array so that it will only work if the values -used for setting have a shape that is "broadcastable" to the shape -implied by the indexing object. - - -Universal Functions -=================== - -.. index:: - single: ufunc - -Universal functions are callable objects that take :math:`N` inputs -and produce :math:`M` outputs by wrapping basic 1-D loops that work -element-by-element into full, easy-to-use functions that seamlessly -implement broadcasting, type-checking and buffered coercion, and -output-argument handling. New universal functions are normally created -in C, although there is a mechanism for creating ufuncs from Python -functions (:func:`frompyfunc`). The user must supply a 1-D loop that -implements the basic function taking the input scalar values and -placing the resulting scalars into the appropriate output slots as -explained in the implementation. - - -Setup ------ - -Every ufunc calculation involves some overhead related to setting up -the calculation. The practical significance of this overhead is that -even though the actual calculation of the ufunc is very fast, you will -be able to write array and type-specific code that will work faster -for small arrays than the ufunc. In particular, using ufuncs to -perform many calculations on 0-D arrays will be slower than other -Python-based solutions (the silently-imported scalarmath module exists -precisely to give array scalars the look-and-feel of ufunc-based -calculations with significantly reduced overhead). - -When a ufunc is called, many things must be done. The information -collected from these setup operations is stored in a loop-object. This -loop object is a C-structure (that could become a Python object but is -not initialized as such because it is only used internally). 
This loop -object has the layout needed to be used with PyArray_Broadcast so that -the broadcasting can be handled in the same way as it is handled in -other sections of code. - -The first thing done is to look up in the thread-specific global -dictionary the current values for the buffer-size, the error mask, and -the associated error object. The state of the error mask controls what -happens when an error condition is found. It should be noted that -checking of the hardware error flags is only performed after each 1-D -loop is executed. This means that if the input and output arrays are -contiguous and of the correct type so that a single 1-D loop is -performed, then the flags may not be checked until all elements of the -array have been calculated. Looking up these values in a thread- -specific dictionary takes time which is easily ignored for all but -very small arrays. - -After checking the thread-specific global variables, the inputs are -evaluated to determine how the ufunc should proceed and the input and -output arrays are constructed if necessary. Any inputs which are not -arrays are converted to arrays (using context if necessary). Which of -the inputs are scalars (and therefore converted to 0-D arrays) is -noted. - -Next, an appropriate 1-D loop is selected from the 1-D loops available -to the ufunc based on the input array types. This 1-D loop is selected -by trying to match the signature of the data-types of the inputs -against the available signatures. The signatures corresponding to -built-in types are stored in the types member of the ufunc structure. -The signatures corresponding to user-defined types are stored in a -linked-list of function-information with the head element stored as a -``CObject`` in the userloops dictionary keyed by the data-type number -(the first user-defined type in the argument list is used as the key). 
The signatures are searched until a signature is found to which the
input arrays can all be cast safely (ignoring any scalar arguments
which are not allowed to determine the type of the result). The
implication of this search procedure is that "lesser types" should be
placed below "larger types" when the signatures are stored. If no 1-D
loop is found, then an error is reported. Otherwise, the argument list
is updated with the stored signature --- in case casting is necessary
and to fix the output types assumed by the 1-D loop.

If the ufunc has 2 inputs and 1 output and the second input is an
Object array, then a special-case check is performed so that
NotImplemented is returned if the second input is not an ndarray, has
the __array_priority\__ attribute, and has an __r{op}\__ special
method. In this way, Python is signaled to give the other object a
chance to complete the operation instead of using generic object-array
calculations. This allows (for example) sparse matrices to override
the multiplication operator's 1-D loop.

For input arrays that are smaller than the specified buffer size,
copies are made of all non-contiguous, misaligned, or out-of-byteorder
arrays to ensure that for small arrays, a single loop is used. Then,
array iterators are created for all the input arrays and the resulting
collection of iterators is broadcast to a single shape.

The output arguments (if any) are then processed and any missing
return arrays are constructed. If any provided output array doesn't
have the correct type (or is misaligned) and is smaller than the
buffer size, then a new output array is constructed with the special
UPDATEIFCOPY flag set, so that when it is DECREF'd on completion of
the function, its contents will be copied back into the output array.
Iterators for the output arguments are then processed.
Finally, the decision is made about how to execute the looping
mechanism to ensure that all elements of the input arrays are combined
to produce the output arrays of the correct type. The options for loop
execution are one-loop (for contiguous, aligned, and correct data
type), strided-loop (for non-contiguous but still aligned and correct
data type), and a buffered loop (for misaligned or incorrect data-type
situations). Depending on which execution method is called for, the
loop is then set up and computed.


Function call
-------------

This section describes how the basic universal function computation
loop is set up and executed for each of the three different kinds of
execution possibilities. If :cdata:`NPY_ALLOW_THREADS` is defined
during compilation, then the Python Global Interpreter Lock (GIL) is
released prior to calling all of these loops (as long as they don't
involve object arrays). It is re-acquired if necessary to handle error
conditions. The hardware error flags are checked only after the 1-D
loop is calculated.


One Loop
^^^^^^^^

This is the simplest case of all. The ufunc is executed by calling the
underlying 1-D loop exactly once. This is possible only when we have
aligned data of the correct type (including byte order) for both input
and output, and all arrays have uniform strides (either contiguous,
0-D, or 1-D). In this case, the 1-D computational loop is called once
to compute the calculation for the entire array. Note that the
hardware error flags are only checked after the entire calculation is
complete.


Strided Loop
^^^^^^^^^^^^

When the input and output arrays are aligned and of the correct type,
but the striding is not uniform (non-contiguous and 2-D or larger),
then a second looping structure is employed for the calculation. This
approach converts all of the iterators for the input and output
arguments to iterate over all but the largest dimension.
The inner loop is then handled by the underlying 1-D computational
loop. The outer loop is a standard iterator loop on the converted
iterators. The hardware error flags are checked after each 1-D loop is
completed.


Buffered Loop
^^^^^^^^^^^^^

This is the code that handles the situation whenever the input and/or
output arrays are either misaligned or of the wrong data type
(including being byte-swapped) from what the underlying 1-D loop
expects. The arrays are also assumed to be non-contiguous. The code
works very much like the strided loop, except that the inner 1-D loop
is modified so that pre-processing is performed on the inputs and
post-processing is performed on the outputs in bufsize chunks (where
bufsize is a user-settable parameter). The underlying 1-D
computational loop is called on data that is copied over (if it needs
to be). The setup code and the loop code are considerably more
complicated in this case because they have to handle:

- memory allocation of the temporary buffers;

- deciding whether or not to use buffers on the input and output data
  (misaligned and/or wrong data type);

- copying and possibly casting data for any inputs or outputs for
  which buffers are necessary;

- special-casing Object arrays so that reference counts are properly
  handled when copies and/or casts are necessary;

- breaking up the inner 1-D loop into bufsize chunks (with a possible
  remainder).

Again, the hardware error flags are checked at the end of each 1-D
loop.


Final output manipulation
-------------------------

Ufuncs allow other array-like classes to be passed seamlessly through
the interface, in that inputs of a particular class will induce the
outputs to be of that same class. The mechanism by which this works is
the following.
If any of the inputs are not ndarrays and define the
:obj:`__array_wrap__` method, then the class with the largest
:obj:`__array_priority__` attribute determines the type of all the
outputs (with the exception of any output arrays passed in). The
:obj:`__array_wrap__` method of the input array will be called with
the ndarray being returned from the ufunc as its input. There are two
supported calling styles of the :obj:`__array_wrap__` function. The
first takes the ndarray as the first argument and a tuple of "context"
as the second argument. The context is (ufunc, arguments, output
argument number). This is the first call tried. If a TypeError occurs,
then the function is called with just the ndarray as the first
argument.


Methods
-------

There are three methods of ufuncs that require calculation similar to
the general-purpose ufuncs. These are reduce, accumulate, and
reduceat. Each of these methods requires a setup command followed by a
loop. There are four loop styles possible for the methods,
corresponding to no-elements, one-element, strided-loop, and
buffered-loop. These are the same basic loop styles as implemented for
the general-purpose function call, except for the no-element and
one-element cases, which are special cases occurring when the input
array objects have 0 and 1 elements respectively.


Setup
^^^^^

The setup function for all three methods is ``construct_reduce``. This
function creates a reducing loop object and fills it with the
parameters needed to complete the loop. All of the methods only work
on ufuncs that take 2 inputs and return 1 output. Therefore, the
underlying 1-D loop is selected assuming a signature of [``otype``,
``otype``, ``otype``] where ``otype`` is the requested reduction
data type. The buffer size and error handling are then retrieved from
(per-thread) global storage.
For small arrays that are misaligned or have incorrect data type, a
copy is made so that the un-buffered section of code is used. Then,
the looping strategy is selected. If there is 1 element or 0 elements
in the array, then a simple looping method is selected. If the array
is not misaligned and has the correct data type, then strided looping
is selected. Otherwise, buffered looping must be performed. Looping
parameters are then established, and the return array is constructed.
The output array is of a different shape depending on whether the
method is reduce, accumulate, or reduceat. If an output array is
already provided, then its shape is checked. If the output array is
not C-contiguous, aligned, and of the correct data type, then a
temporary copy is made with the UPDATEIFCOPY flag set. In this way,
the methods will be able to work with a well-behaved output array, but
the result will be copied back into the true output array when the
method computation is complete. Finally, iterators are set up to loop
over the correct axis (depending on the value of axis provided to the
method) and the setup routine returns to the actual computation
routine.


Reduce
^^^^^^

.. index::
   triple: ufunc; methods; reduce

All of the ufunc methods use the same underlying 1-D computational
loops with input and output arguments adjusted so that the appropriate
reduction takes place. For example, the key to the functioning of
reduce is that the 1-D loop is called with the output and the second
input pointing to the same position in memory and both having a step
size of 0. The first input is pointing to the input array with a step
size given by the appropriate stride for the selected axis. In this
way, the operation performed is

.. math::
   :nowrap:

   \begin{align*}
   o & = & i[0] \\
   o & = & i[k]\textrm{<op>}o\quad k=1\ldots N
   \end{align*}

where :math:`N+1` is the number of elements in the input, :math:`i`,
:math:`o` is the output, and :math:`i[k]` is the
:math:`k^{\textrm{th}}` element of :math:`i` along the selected axis.
This basic operation is repeated for arrays with greater than 1
dimension so that the reduction takes place for every 1-D sub-array
along the selected axis. An iterator with the selected dimension
removed handles this looping.

For buffered loops, care must be taken to copy and cast data before
the loop function is called, because the underlying loop expects
aligned data of the correct data type (including byte order). The
buffered loop must handle this copying and casting prior to calling
the loop function on chunks no greater than the user-specified
bufsize.


Accumulate
^^^^^^^^^^

.. index::
   triple: ufunc; methods; accumulate

The accumulate function is very similar to the reduce function in that
the output and the second input both point to the output. The
difference is that the second input points to memory one stride behind
the current output pointer. Thus, the operation performed is

.. math::
   :nowrap:

   \begin{align*}
   o[0] & = & i[0] \\
   o[k] & = & i[k]\textrm{<op>}o[k-1]\quad k=1\ldots N.
   \end{align*}

The output has the same shape as the input, and each 1-D loop operates
over :math:`N` elements when the shape in the selected axis is
:math:`N+1`. Again, buffered loops take care to copy and cast the data
before calling the underlying 1-D computational loop.


Reduceat
^^^^^^^^

.. index::
   triple: ufunc; methods; reduceat
   single: ufunc

The reduceat function is a generalization of both the reduce and
accumulate functions. It implements a reduce over ranges of the input
array specified by indices.
The extra indices argument is checked to be sure that every index is
not too large for the input array along the selected dimension before
the loop calculations take place. The loop implementation is handled
using code that is very similar to the reduce code, repeated as many
times as there are elements in the indices input. In particular: the
first input pointer passed to the underlying 1-D computational loop
points to the input array at the correct location indicated by the
index array. In addition, the output pointer and the second input
pointer passed to the underlying 1-D loop point to the same position
in memory. The size of the 1-D computational loop is fixed to be the
difference between the current index and the next index (when the
current index is the last index, then the next index is assumed to be
the length of the array along the selected dimension). In this way,
the 1-D loop will implement a reduce over the specified indices.

Misaligned data, or a loop data type that does not match the input
and/or output data type, is handled using buffered code, wherein data
is copied to a temporary buffer and cast to the correct data type if
necessary prior to calling the underlying 1-D function. The temporary
buffers are created in (element) sizes no bigger than the
user-settable buffer-size value. Thus, the loop must be flexible
enough to call the underlying 1-D computational loop enough times to
complete the total calculation in chunks no bigger than the
buffer-size.
diff --git a/pythonPackages/numpy/doc/source/reference/internals.rst b/pythonPackages/numpy/doc/source/reference/internals.rst
deleted file mode 100755
index c9716813d1..0000000000
--- a/pythonPackages/numpy/doc/source/reference/internals.rst
+++ /dev/null
@@ -1,9 +0,0 @@
***************
Numpy internals
***************

.. toctree::

   internals.code-explanations

.. automodule:: numpy.doc.internals
diff --git a/pythonPackages/numpy/doc/source/reference/maskedarray.baseclass.rst b/pythonPackages/numpy/doc/source/reference/maskedarray.baseclass.rst
deleted file mode 100755
index fd1fd7ae61..0000000000
--- a/pythonPackages/numpy/doc/source/reference/maskedarray.baseclass.rst
+++ /dev/null
@@ -1,462 +0,0 @@
.. currentmodule:: numpy.ma


.. _numpy.ma.constants:

Constants of the :mod:`numpy.ma` module
=======================================

In addition to the :class:`MaskedArray` class, the :mod:`numpy.ma`
module defines several constants.

.. data:: masked

   The :attr:`masked` constant is a special case of
   :class:`MaskedArray`, with a float datatype and a null shape. It is
   used to test whether a specific entry of a masked array is masked,
   or to mask one or several entries of a masked array::

      >>> x = ma.array([1, 2, 3], mask=[0, 1, 0])
      >>> x[1] is ma.masked
      True
      >>> x[-1] = ma.masked
      >>> x
      masked_array(data = [1 -- --],
                   mask = [False  True  True],
             fill_value = 999999)


.. data:: nomask

   Value indicating that a masked array has no invalid entry.
   :attr:`nomask` is used internally to speed up computations when the
   mask is not needed.


.. data:: masked_print_options

   String used in lieu of missing data when a masked array is printed.
   By default, this string is ``'--'``.


.. _maskedarray.baseclass:

The :class:`MaskedArray` class
==============================

.. class:: MaskedArray

   A subclass of :class:`~numpy.ndarray` designed to manipulate
   numerical arrays with missing data.

An instance of :class:`MaskedArray` can be thought of as the
combination of several elements:

* The :attr:`~MaskedArray.data`, as a regular :class:`numpy.ndarray`
  of any shape or datatype.
* A boolean :attr:`~numpy.ma.MaskedArray.mask` with the same shape as
  the data, where a ``True`` value indicates that the corresponding
  element of the data is invalid.
  The special value :const:`nomask` is also acceptable for arrays
  without named fields, and indicates that no data is invalid.
* A :attr:`~numpy.ma.MaskedArray.fill_value`, a value that may be used
  to replace the invalid entries in order to return a standard
  :class:`numpy.ndarray`.


Attributes and properties of masked arrays
------------------------------------------

.. seealso:: :ref:`Array Attributes <arrays.ndarray.attributes>`


.. attribute:: MaskedArray.data

   Returns the underlying data, as a view of the masked array.
   If the underlying data is a subclass of :class:`numpy.ndarray`, it
   is returned as such.

   >>> x = ma.array(np.matrix([[1, 2], [3, 4]]), mask=[[0, 1], [1, 0]])
   >>> x.data
   matrix([[1, 2],
           [3, 4]])

   The type of the data can be accessed through the :attr:`baseclass`
   attribute.

.. attribute:: MaskedArray.mask

   Returns the underlying mask, as an array with the same shape and
   structure as the data, but where all fields are atomically
   booleans. A value of ``True`` indicates an invalid entry.


.. attribute:: MaskedArray.recordmask

   Returns the mask of the array if it has no named fields. For
   structured arrays, returns a ndarray of booleans where entries are
   ``True`` if **all** the fields are masked, ``False`` otherwise::

      >>> x = ma.array([(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)],
      ...              mask=[(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)],
      ...              dtype=[('a', int), ('b', int)])
      >>> x.recordmask
      array([False, False,  True, False, False], dtype=bool)


.. attribute:: MaskedArray.fill_value

   Returns the value used to fill the invalid entries of a masked
   array. The value is either a scalar (if the masked array has no
   named fields), or a 0-D ndarray with the same :attr:`dtype` as the
   masked array if it has named fields.

   The default filling value depends on the datatype of the array:

   ========  ========
   datatype  default
   ========  ========
   bool      True
   int       999999
   float     1.e20
   complex   1.e20+0j
   object    '?'
   string    'N/A'
   ========  ========


.. attribute:: MaskedArray.baseclass

   Returns the class of the underlying data.

   >>> x = ma.array(np.matrix([[1, 2], [3, 4]]), mask=[[0, 0], [1, 0]])
   >>> x.baseclass
   <class 'numpy.matrixlib.defmatrix.matrix'>


.. attribute:: MaskedArray.sharedmask

   Returns whether the mask of the array is shared between several
   masked arrays. If this is the case, any modification to the mask of
   one array will be propagated to the others.


.. attribute:: MaskedArray.hardmask

   Returns whether the mask is hard (``True``) or soft (``False``).
   When the mask is hard, masked entries cannot be unmasked.


As :class:`MaskedArray` is a subclass of :class:`~numpy.ndarray`, a
masked array also inherits all the attributes and properties of a
:class:`~numpy.ndarray` instance.

.. autosummary::
   :toctree: generated/

   MaskedArray.base
   MaskedArray.ctypes
   MaskedArray.dtype
   MaskedArray.flags

   MaskedArray.itemsize
   MaskedArray.nbytes
   MaskedArray.ndim
   MaskedArray.shape
   MaskedArray.size
   MaskedArray.strides

   MaskedArray.imag
   MaskedArray.real

   MaskedArray.flat
   MaskedArray.__array_priority__


:class:`MaskedArray` methods
============================

.. seealso:: :ref:`Array methods <array.ndarray.methods>`


Conversion
----------

.. autosummary::
   :toctree: generated/

   MaskedArray.__float__
   MaskedArray.__hex__
   MaskedArray.__int__
   MaskedArray.__long__
   MaskedArray.__oct__

   MaskedArray.view
   MaskedArray.astype
   MaskedArray.byteswap

   MaskedArray.compressed
   MaskedArray.filled
   MaskedArray.tofile
   MaskedArray.toflex
   MaskedArray.tolist
   MaskedArray.torecords
   MaskedArray.tostring


Shape manipulation
------------------

For reshape, resize, and transpose, the single tuple argument may be
replaced with ``n`` integers, which will be interpreted as an n-tuple.

.. autosummary::
   :toctree: generated/

   MaskedArray.flatten
   MaskedArray.ravel
   MaskedArray.reshape
   MaskedArray.resize
   MaskedArray.squeeze
   MaskedArray.swapaxes
   MaskedArray.transpose
   MaskedArray.T


Item selection and manipulation
-------------------------------

For array methods that take an *axis* keyword, it defaults to *None*.
If axis is *None*, then the array is treated as a 1-D array.
Any other value for *axis* represents the dimension along which
the operation should proceed.

.. autosummary::
   :toctree: generated/

   MaskedArray.argmax
   MaskedArray.argmin
   MaskedArray.argsort
   MaskedArray.choose
   MaskedArray.compress
   MaskedArray.diagonal
   MaskedArray.fill
   MaskedArray.item
   MaskedArray.nonzero
   MaskedArray.put
   MaskedArray.repeat
   MaskedArray.searchsorted
   MaskedArray.sort
   MaskedArray.take


Pickling and copy
-----------------

.. autosummary::
   :toctree: generated/

   MaskedArray.copy
   MaskedArray.dump
   MaskedArray.dumps


Calculations
------------

.. autosummary::
   :toctree: generated/

   MaskedArray.all
   MaskedArray.anom
   MaskedArray.any
   MaskedArray.clip
   MaskedArray.conj
   MaskedArray.conjugate
   MaskedArray.cumprod
   MaskedArray.cumsum
   MaskedArray.max
   MaskedArray.mean
   MaskedArray.min
   MaskedArray.prod
   MaskedArray.product
   MaskedArray.ptp
   MaskedArray.round
   MaskedArray.std
   MaskedArray.sum
   MaskedArray.trace
   MaskedArray.var


Arithmetic and comparison operations
------------------------------------

.. index:: comparison, arithmetic, operation, operator

Comparison operators:
~~~~~~~~~~~~~~~~~~~~~

.. autosummary::
   :toctree: generated/

   MaskedArray.__lt__
   MaskedArray.__le__
   MaskedArray.__gt__
   MaskedArray.__ge__
   MaskedArray.__eq__
   MaskedArray.__ne__

Truth value of an array (:func:`bool()`):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autosummary::
   :toctree: generated/

   MaskedArray.__nonzero__


Arithmetic:
~~~~~~~~~~~

.. autosummary::
   :toctree: generated/

   MaskedArray.__abs__
   MaskedArray.__add__
   MaskedArray.__radd__
   MaskedArray.__sub__
   MaskedArray.__rsub__
   MaskedArray.__mul__
   MaskedArray.__rmul__
   MaskedArray.__div__
   MaskedArray.__rdiv__
   MaskedArray.__truediv__
   MaskedArray.__rtruediv__
   MaskedArray.__floordiv__
   MaskedArray.__rfloordiv__
   MaskedArray.__mod__
   MaskedArray.__rmod__
   MaskedArray.__divmod__
   MaskedArray.__rdivmod__
   MaskedArray.__pow__
   MaskedArray.__rpow__
   MaskedArray.__lshift__
   MaskedArray.__rlshift__
   MaskedArray.__rshift__
   MaskedArray.__rrshift__
   MaskedArray.__and__
   MaskedArray.__rand__
   MaskedArray.__or__
   MaskedArray.__ror__
   MaskedArray.__xor__
   MaskedArray.__rxor__


Arithmetic, in-place:
~~~~~~~~~~~~~~~~~~~~~

.. autosummary::
   :toctree: generated/

   MaskedArray.__iadd__
   MaskedArray.__isub__
   MaskedArray.__imul__
   MaskedArray.__idiv__
   MaskedArray.__itruediv__
   MaskedArray.__ifloordiv__
   MaskedArray.__imod__
   MaskedArray.__ipow__
   MaskedArray.__ilshift__
   MaskedArray.__irshift__
   MaskedArray.__iand__
   MaskedArray.__ior__
   MaskedArray.__ixor__


Representation
--------------

.. autosummary::
   :toctree: generated/

   MaskedArray.__repr__
   MaskedArray.__str__

   MaskedArray.ids
   MaskedArray.iscontiguous


Special methods
---------------

For standard library functions:

.. autosummary::
   :toctree: generated/

   MaskedArray.__copy__
   MaskedArray.__deepcopy__
   MaskedArray.__getstate__
   MaskedArray.__reduce__
   MaskedArray.__setstate__

Basic customization:

.. autosummary::
   :toctree: generated/

   MaskedArray.__new__
   MaskedArray.__array__
   MaskedArray.__array_wrap__

Container customization: (see :ref:`Indexing <arrays.indexing>`)

.. autosummary::
   :toctree: generated/

   MaskedArray.__len__
   MaskedArray.__getitem__
   MaskedArray.__setitem__
   MaskedArray.__delitem__
   MaskedArray.__getslice__
   MaskedArray.__setslice__
   MaskedArray.__contains__


Specific methods
----------------

Handling the mask
~~~~~~~~~~~~~~~~~

The following methods can be used to access information about the mask
or to manipulate the mask.

.. autosummary::
   :toctree: generated/

   MaskedArray.__setmask__

   MaskedArray.harden_mask
   MaskedArray.soften_mask
   MaskedArray.unshare_mask
   MaskedArray.shrink_mask


Handling the `fill_value`
~~~~~~~~~~~~~~~~~~~~~~~~~

.. autosummary::
   :toctree: generated/

   MaskedArray.get_fill_value
   MaskedArray.set_fill_value


Counting the missing elements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autosummary::
   :toctree: generated/

   MaskedArray.count
diff --git a/pythonPackages/numpy/doc/source/reference/maskedarray.generic.rst b/pythonPackages/numpy/doc/source/reference/maskedarray.generic.rst
deleted file mode 100755
index bb8695408a..0000000000
--- a/pythonPackages/numpy/doc/source/reference/maskedarray.generic.rst
+++ /dev/null
@@ -1,499 +0,0 @@
.. currentmodule:: numpy.ma

.. _maskedarray.generic:


The :mod:`numpy.ma` module
==========================

Rationale
---------

Masked arrays are arrays that may have missing or invalid entries.
The :mod:`numpy.ma` module provides a nearly work-alike replacement
for numpy that supports data arrays with masks.


What is a masked array?
-----------------------

In many circumstances, datasets can be incomplete or tainted by the
presence of invalid data. For example, a sensor may have failed to
record a value, or recorded an invalid one. The :mod:`numpy.ma` module
provides a convenient way to address this issue by introducing masked
arrays.

A masked array is the combination of a standard :class:`numpy.ndarray`
and a mask.
A mask is either :attr:`nomask`, indicating that no value of the
associated array is invalid, or an array of booleans that determines,
for each element of the associated array, whether the value is valid
or not. When an element of the mask is ``False``, the corresponding
element of the associated array is valid and is said to be unmasked.
When an element of the mask is ``True``, the corresponding element of
the associated array is said to be masked (invalid).

The package ensures that masked entries are not used in computations.

As an illustration, let's consider the following dataset::

   >>> import numpy as np
   >>> import numpy.ma as ma
   >>> x = np.array([1, 2, 3, -1, 5])

We wish to mark the fourth entry as invalid. The easiest way is to
create a masked array::

   >>> mx = ma.masked_array(x, mask=[0, 0, 0, 1, 0])

We can now compute the mean of the dataset without taking the invalid
data into account::

   >>> mx.mean()
   2.75


The :mod:`numpy.ma` module
--------------------------

The main feature of the :mod:`numpy.ma` module is the
:class:`MaskedArray` class, which is a subclass of
:class:`numpy.ndarray`. The class, its attributes and its methods are
described in more detail in the
:ref:`MaskedArray class <maskedarray.baseclass>` section.

The :mod:`numpy.ma` module can be used as an addition to
:mod:`numpy`::

   >>> import numpy as np
   >>> import numpy.ma as ma

To create an array with the second element invalid, we would do::

   >>> y = ma.array([1, 2, 3], mask = [0, 1, 0])

To create a masked array where all values close to 1.e20 are invalid,
we would do::

   >>> z = ma.masked_values([1.0, 1.e20, 3.0, 4.0], 1.e20)

For a complete discussion of creation methods for masked arrays,
please see the section :ref:`Constructing masked arrays
<maskedarray.generic.constructing>`.


Using numpy.ma
==============

.. _maskedarray.generic.constructing:

Constructing masked arrays
--------------------------

There are several ways to construct a masked array.
* A first possibility is to directly invoke the :class:`MaskedArray`
  class.

* A second possibility is to use the two masked array constructors,
  :func:`array` and :func:`masked_array`.

  .. autosummary::
     :toctree: generated/

     array
     masked_array

* A third option is to take the view of an existing array. In that
  case, the mask of the view is set to :attr:`nomask` if the array has
  no named fields, or an array of booleans with the same structure as
  the array otherwise.

     >>> x = np.array([1, 2, 3])
     >>> x.view(ma.MaskedArray)
     masked_array(data = [1 2 3],
                  mask = False,
            fill_value = 999999)
     >>> x = np.array([(1, 1.), (2, 2.)], dtype=[('a',int), ('b', float)])
     >>> x.view(ma.MaskedArray)
     masked_array(data = [(1, 1.0) (2, 2.0)],
                  mask = [(False, False) (False, False)],
            fill_value = (999999, 1e+20),
                 dtype = [('a', '<i4'), ('b', '<f8')])


Accessing only the valid entries
--------------------------------

To retrieve only the valid entries, we can use the inverse of the mask
as an index::

   >>> x = ma.array([[1, 2], [3, 4]], mask=[[0, 1], [1, 0]])
   >>> x[~x.mask]
   masked_array(data = [1 4],
                mask = [False False],
          fill_value = 999999)

Another way to retrieve the valid data is to use the
:meth:`compressed` method, which returns a one-dimensional
:class:`~numpy.ndarray` (or one of its subclasses, depending on the
value of the :attr:`~MaskedArray.baseclass` attribute)::

   >>> x.compressed()
   array([1, 4])

Note that the output of :meth:`compressed` is always 1D.
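Beyond the constructors shown above, :mod:`numpy.ma` also provides
condition-based constructors such as ``masked_greater`` and
``masked_values`` that build the mask from the data themselves. A
small sketch combining one of them with :meth:`compressed`:

```python
import numpy as np
import numpy.ma as ma

# masked_greater builds the mask from a condition, so we don't have to
# spell the boolean mask out by hand.
x = ma.masked_greater(np.arange(5), 2)
print(x)               # entries greater than 2 display as --
print(x.compressed())  # only the valid entries, always 1-D
```
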
Modifying the mask
------------------

Masking an entry
~~~~~~~~~~~~~~~~

The recommended way to mark one or several specific entries of a
masked array as invalid is to assign the special value :attr:`masked`
to them::

   >>> x = ma.array([1, 2, 3])
   >>> x[0] = ma.masked
   >>> x
   masked_array(data = [-- 2 3],
                mask = [ True False False],
          fill_value = 999999)
   >>> y = ma.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
   >>> y[(0, 1, 2), (1, 2, 0)] = ma.masked
   >>> y
   masked_array(data =
    [[1 -- 3]
    [4 5 --]
    [-- 8 9]],
                mask =
    [[False  True False]
    [False False  True]
    [ True False False]],
          fill_value = 999999)
   >>> z = ma.array([1, 2, 3, 4])
   >>> z[:-2] = ma.masked
   >>> z
   masked_array(data = [-- -- 3 4],
                mask = [ True  True False False],
          fill_value = 999999)

A second possibility is to modify the :attr:`~MaskedArray.mask`
directly, but this usage is discouraged.

.. note::
   When creating a new masked array with a simple, non-structured
   datatype, the mask is initially set to the special value
   :attr:`nomask`, which corresponds roughly to the boolean ``False``.
   Trying to set an element of :attr:`nomask` will fail with a
   :exc:`TypeError` exception, as a boolean does not support item
   assignment.
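The note above can be demonstrated directly; the sketch below shows
the :exc:`TypeError` raised by indexing into :attr:`nomask`, and the
recommended assignment of :attr:`masked` instead:

```python
import numpy.ma as ma

x = ma.array([1, 2, 3])   # nothing masked, so the mask is ma.nomask
print(x.mask)             # a boolean scalar, not an array

try:
    x.mask[0] = True      # nomask is a scalar; item assignment fails
except TypeError:
    print("TypeError: cannot index-assign into nomask")

x[0] = ma.masked          # the supported way to mask an entry
print(x.mask)             # now a real boolean array
```
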
All the entries of an array can be masked at once by assigning
``True`` to the mask::

   >>> x = ma.array([1, 2, 3], mask=[0, 0, 1])
   >>> x.mask = True
   >>> x
   masked_array(data = [-- -- --],
                mask = [ True  True  True],
          fill_value = 999999)

Finally, specific entries can be masked and/or unmasked by assigning a
sequence of booleans to the mask::

   >>> x = ma.array([1, 2, 3])
   >>> x.mask = [0, 1, 0]
   >>> x
   masked_array(data = [1 -- 3],
                mask = [False  True False],
          fill_value = 999999)

Unmasking an entry
~~~~~~~~~~~~~~~~~~

To unmask one or several specific entries, we can just assign one or
several new valid values to them::

   >>> x = ma.array([1, 2, 3], mask=[0, 0, 1])
   >>> x
   masked_array(data = [1 2 --],
                mask = [False False  True],
          fill_value = 999999)
   >>> x[-1] = 5
   >>> x
   masked_array(data = [1 2 5],
                mask = [False False False],
          fill_value = 999999)

.. note::
   Unmasking an entry by direct assignment will silently fail if the
   masked array has a *hard* mask, as shown by the :attr:`hardmask`
   attribute. This feature was introduced to prevent overwriting the
   mask. To force the unmasking of an entry where the array has a hard
   mask, the mask must first be softened using the :meth:`soften_mask`
   method before the assignment.
- It can be re-hardened with :meth:`harden_mask`:: - - >>> x = ma.array([1, 2, 3], mask=[0, 0, 1], hard_mask=True) - >>> x - masked_array(data = [1 2 --], - mask = [False False True], - fill_value = 999999) - >>> x[-1] = 5 - >>> x - masked_array(data = [1 2 --], - mask = [False False True], - fill_value = 999999) - >>> x.soften_mask() - >>> x[-1] = 5 - >>> x - masked_array(data = [1 2 --], - mask = [False False True], - fill_value = 999999) - >>> x.harden_mask() - - -To unmask all masked entries of a masked array (provided the mask isn't a hard -mask), the simplest solution is to assign the constant :attr:`nomask` to the -mask:: - - >>> x = ma.array([1, 2, 3], mask=[0, 0, 1]) - >>> x - masked_array(data = [1 2 --], - mask = [False False True], - fill_value = 999999) - >>> x.mask = ma.nomask - >>> x - masked_array(data = [1 2 3], - mask = [False False False], - fill_value = 999999) - - - -Indexing and slicing --------------------- - -As a :class:`MaskedArray` is a subclass of :class:`numpy.ndarray`, it inherits -its mechanisms for indexing and slicing. - -When accessing a single entry of a masked array with no named fields, the -output is either a scalar (if the corresponding entry of the mask is -``False``) or the special value :attr:`masked` (if the corresponding entry of -the mask is ``True``):: - - >>> x = ma.array([1, 2, 3], mask=[0, 0, 1]) - >>> x[0] - 1 - >>> x[-1] - masked_array(data = --, - mask = True, - fill_value = 1e+20) - >>> x[-1] is ma.masked - True - -If the masked array has named fields, accessing a single entry returns a -:class:`numpy.void` object if none of the fields are masked, or a 0d masked -array with the same dtype as the initial array if at least one of the fields -is masked. - - >>> y = ma.masked_array([(1,2), (3, 4)], - ... mask=[(0, 0), (0, 1)], - ... 
dtype=[('a', int), ('b', int)]) - >>> y[0] - (1, 2) - >>> y[-1] - masked_array(data = (3, --), - mask = (False, True), - fill_value = (999999, 999999), - dtype = [('a', '<i4'), ('b', '<i4')]) - -When accessing a slice, the output is a masked array whose :attr:`data` -attribute is a view of the original data, and whose mask is either -:attr:`nomask` (if there were no invalid entries in the original array) or a -copy of the corresponding slice of the original mask. The copy is required to -avoid propagation of any modification of the mask to the original:: - - >>> x = ma.array([1, 2, 3, 4, 5], mask=[0, 1, 0, 0, 1]) - >>> mx = x[:3] - >>> mx - masked_array(data = [1 -- 3], - mask = [False True False], - fill_value = 999999) - >>> mx[1] = -1 - >>> mx - masked_array(data = [1 -1 3], - mask = [False False False], - fill_value = 999999) - >>> x.mask - array([False, True, False, False, True], dtype=bool) - >>> x.data - array([ 1, -1, 3, 4, 5]) - - -Accessing a field of a masked array with a structured datatype returns a -:class:`MaskedArray`. - -Operations on masked arrays ---------------------------- - -Arithmetic and comparison operations are supported by masked arrays. -As much as possible, invalid entries of a masked array are not processed, -meaning that the corresponding :attr:`data` entries *should* be the same -before and after the operation. - -.. warning:: - We need to stress that this behavior may not be systematic, that masked - data may be affected by the operation in some cases and therefore users - should not rely on this data remaining unchanged. - -The :mod:`numpy.ma` module comes with a specific implementation of most -ufuncs. Unary and binary functions that have a validity domain (such as -:func:`~numpy.log` or :func:`~numpy.divide`) return the :data:`masked` -constant whenever the input is masked or falls outside the validity domain:: - - >>> ma.log([-1, 0, 1, 2]) - masked_array(data = [-- -- 0.0 0.69314718056], - mask = [ True True False False], - fill_value = 1e+20) - -Masked arrays also support standard numpy ufuncs. The output is then a masked -array. The result of a unary ufunc is masked wherever the input is masked. The -result of a binary ufunc is masked wherever any of the inputs is masked. 
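These propagation rules can be checked directly. The following is a sketch assuming a present-day NumPy, whose `repr` of masked arrays differs from the output shown in this (older) document:

```python
import numpy as np
import numpy.ma as ma

x = ma.array([1.0, 4.0, 9.0], mask=[0, 1, 0])
y = ma.array([1.0, 1.0, 1.0], mask=[0, 0, 1])

# Unary ufunc: the result is masked exactly where the input is masked.
s = np.sqrt(x)
assert s.mask.tolist() == [False, True, False]

# Binary ufunc: the result is masked wherever either input is masked.
t = np.add(x, y)
assert t.mask.tolist() == [False, True, True]
assert t[0] == 2.0
```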
If the -ufunc also returns the optional context output (a 3-element tuple containing -the name of the ufunc, its arguments and its domain), the context is processed -and entries of the output masked array are masked wherever the corresponding -input falls outside the validity domain:: - - >>> x = ma.array([-1, 1, 0, 2, 3], mask=[0, 0, 0, 0, 1]) - >>> np.log(x) - masked_array(data = [-- 0.0 -- 0.69314718056 --], - mask = [ True False True False True], - fill_value = 1e+20) - - - -Examples -======== - -Data with a given value representing missing data -------------------------------------------------- - -Let's consider a list of elements, ``x``, where values of -9999. represent -missing data. We wish to compute the average value of the data and the vector -of anomalies (deviations from the average):: - - >>> import numpy.ma as ma - >>> x = [0.,1.,-9999.,3.,4.] - >>> mx = ma.masked_values(x, -9999.) - >>> print mx.mean() - 2.0 - >>> print mx - mx.mean() - [-2.0 -1.0 -- 1.0 2.0] - >>> print mx.anom() - [-2.0 -1.0 -- 1.0 2.0] - - -Filling in the missing data ---------------------------- - -Suppose now that we wish to print that same data, but with the missing values -replaced by the average value:: - - >>> print mx.filled(mx.mean()) - [ 0. 1. 2. 3. 4.] - - -Numerical operations --------------------- - -Numerical operations can be easily performed without worrying about missing -values, dividing by zero, square roots of negative numbers, etc.:: - - >>> import numpy as np, numpy.ma as ma - >>> x = ma.array([1., -1., 3., 4., 5., 6.], mask=[0,0,0,0,1,0]) - >>> y = ma.array([1., 2., 0., 4., 5., 6.], mask=[0,0,0,0,0,1]) - >>> print np.sqrt(x/y) - [1.0 -- -- 1.0 -- --] - -Four values of the output are invalid: the first one comes from taking the -square root of a negative number, the second from the division by zero, and -the last two where the inputs were masked. 
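The same example can be verified programmatically under a current NumPy (a sketch in Python 3 syntax, rather than the Python 2 `print` statements shown above):

```python
import numpy as np
import numpy.ma as ma

x = ma.array([1., -1., 3., 4., 5., 6.], mask=[0, 0, 0, 0, 1, 0])
y = ma.array([1., 2., 0., 4., 5., 6.], mask=[0, 0, 0, 0, 0, 1])

# Division by zero, the square root of a negative number, and the
# originally masked entries all come out masked rather than raising.
z = np.sqrt(x / y)
assert z.mask.tolist() == [False, True, True, False, True, True]
assert z[0] == 1.0 and z[3] == 1.0
```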
- - -Ignoring extreme values ------------------------ - -Let's consider an array ``d`` of random floats between 0 and 1. We wish to -compute the average of the values of ``d`` while ignoring any data outside -the range ``[0.1, 0.9]``:: - - >>> print ma.masked_outside(d, 0.1, 0.9).mean() diff --git a/pythonPackages/numpy/doc/source/reference/maskedarray.rst b/pythonPackages/numpy/doc/source/reference/maskedarray.rst deleted file mode 100755 index c2deb3ba19..0000000000 --- a/pythonPackages/numpy/doc/source/reference/maskedarray.rst +++ /dev/null @@ -1,19 +0,0 @@ -.. _maskedarray: - -************* -Masked arrays -************* - -Masked arrays are arrays that may have missing or invalid entries. -The :mod:`numpy.ma` module provides a nearly work-alike replacement for numpy -that supports data arrays with masks. - -.. index:: - single: masked arrays - -.. toctree:: - :maxdepth: 2 - - maskedarray.generic - maskedarray.baseclass - routines.ma diff --git a/pythonPackages/numpy/doc/source/reference/routines.array-creation.rst b/pythonPackages/numpy/doc/source/reference/routines.array-creation.rst deleted file mode 100755 index 23b35243b2..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.array-creation.rst +++ /dev/null @@ -1,103 +0,0 @@ -.. _routines.array-creation: - -Array creation routines -======================= - -.. seealso:: :ref:`Array creation ` - -.. currentmodule:: numpy - -Ones and zeros --------------- -.. autosummary:: - :toctree: generated/ - - empty - empty_like - eye - identity - ones - ones_like - zeros - zeros_like - -From existing data ------------------- -.. autosummary:: - :toctree: generated/ - - array - asarray - asanyarray - ascontiguousarray - asmatrix - copy - frombuffer - fromfile - fromfunction - fromiter - fromstring - loadtxt - -.. _routines.array-creation.rec: - -Creating record arrays (:mod:`numpy.rec`) ------------------------------------------ - -.. 
note:: :mod:`numpy.rec` is the preferred alias for - :mod:`numpy.core.records`. - -.. autosummary:: - :toctree: generated/ - - core.records.array - core.records.fromarrays - core.records.fromrecords - core.records.fromstring - core.records.fromfile - -.. _routines.array-creation.char: - -Creating character arrays (:mod:`numpy.char`) ---------------------------------------------- - -.. note:: :mod:`numpy.char` is the preferred alias for - :mod:`numpy.core.defchararray`. - -.. autosummary:: - :toctree: generated/ - - core.defchararray.array - core.defchararray.asarray - -Numerical ranges ----------------- -.. autosummary:: - :toctree: generated/ - - arange - linspace - logspace - meshgrid - mgrid - ogrid - -Building matrices ------------------ -.. autosummary:: - :toctree: generated/ - - diag - diagflat - tri - tril - triu - vander - -The Matrix class ----------------- -.. autosummary:: - :toctree: generated/ - - mat - bmat diff --git a/pythonPackages/numpy/doc/source/reference/routines.array-manipulation.rst b/pythonPackages/numpy/doc/source/reference/routines.array-manipulation.rst deleted file mode 100755 index 2c1a5b2006..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.array-manipulation.rst +++ /dev/null @@ -1,104 +0,0 @@ -Array manipulation routines -*************************** - -.. currentmodule:: numpy - -Changing array shape -==================== -.. autosummary:: - :toctree: generated/ - - - reshape - ravel - ndarray.flat - ndarray.flatten - -Transpose-like operations -========================= -.. autosummary:: - :toctree: generated/ - - rollaxis - swapaxes - ndarray.T - transpose - -Changing number of dimensions -============================= -.. autosummary:: - :toctree: generated/ - - atleast_1d - atleast_2d - atleast_3d - broadcast - broadcast_arrays - expand_dims - squeeze - -Changing kind of array -====================== -.. 
autosummary:: - :toctree: generated/ - - asarray - asanyarray - asmatrix - asfarray - asfortranarray - asscalar - require - -Joining arrays -============== -.. autosummary:: - :toctree: generated/ - - column_stack - concatenate - dstack - hstack - vstack - -Splitting arrays -================ -.. autosummary:: - :toctree: generated/ - - array_split - dsplit - hsplit - split - vsplit - -Tiling arrays -============= -.. autosummary:: - :toctree: generated/ - - tile - repeat - -Adding and removing elements -============================ -.. autosummary:: - :toctree: generated/ - - delete - insert - append - resize - trim_zeros - unique - -Rearranging elements -==================== -.. autosummary:: - :toctree: generated/ - - fliplr - flipud - reshape - roll - rot90 diff --git a/pythonPackages/numpy/doc/source/reference/routines.bitwise.rst b/pythonPackages/numpy/doc/source/reference/routines.bitwise.rst deleted file mode 100755 index 58661abc72..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.bitwise.rst +++ /dev/null @@ -1,31 +0,0 @@ -Binary operations -***************** - -.. currentmodule:: numpy - -Elementwise bit operations --------------------------- -.. autosummary:: - :toctree: generated/ - - bitwise_and - bitwise_or - bitwise_xor - invert - left_shift - right_shift - -Bit packing ------------ -.. autosummary:: - :toctree: generated/ - - packbits - unpackbits - -Output formatting ------------------ -.. autosummary:: - :toctree: generated/ - - binary_repr diff --git a/pythonPackages/numpy/doc/source/reference/routines.char.rst b/pythonPackages/numpy/doc/source/reference/routines.char.rst deleted file mode 100755 index 2e995a7728..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.char.rst +++ /dev/null @@ -1,88 +0,0 @@ -String operations -***************** - -.. currentmodule:: numpy.core.defchararray - -This module provides a set of vectorized string operations for arrays -of type `numpy.string_` or `numpy.unicode_`. 
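For instance, a sketch using the top-level `numpy.char` alias for this module, assuming a present-day NumPy:

```python
import numpy as np

names = np.array(['alice', 'bob'])

# Each function applies the corresponding str method element-wise.
assert np.char.upper(names).tolist() == ['ALICE', 'BOB']
assert np.char.capitalize(names).tolist() == ['Alice', 'Bob']

# Comparisons strip trailing whitespace before comparing.
assert np.char.equal(np.array(['abc  ']), np.array(['abc'])).all()
```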
All of them are based on -the string methods in the Python standard library. - -String operations ------------------ - -.. autosummary:: - :toctree: generated/ - - add - multiply - mod - capitalize - center - decode - encode - join - ljust - lower - lstrip - partition - replace - rjust - rpartition - rsplit - rstrip - split - splitlines - strip - swapcase - title - translate - upper - zfill - -Comparison ----------- - -Unlike the standard numpy comparison operators, the ones in the `char` -module strip trailing whitespace characters before performing the -comparison. - -.. autosummary:: - :toctree: generated/ - - equal - not_equal - greater_equal - less_equal - greater - less - -String information ------------------- - -.. autosummary:: - :toctree: generated/ - - count - len - find - index - isalpha - isdecimal - isdigit - islower - isnumeric - isspace - istitle - isupper - rfind - rindex - startswith - -Convenience class ------------------ - -.. autosummary:: - :toctree: generated/ - - chararray - diff --git a/pythonPackages/numpy/doc/source/reference/routines.ctypeslib.rst b/pythonPackages/numpy/doc/source/reference/routines.ctypeslib.rst deleted file mode 100755 index b04713b61b..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.ctypeslib.rst +++ /dev/null @@ -1,11 +0,0 @@ -*********************************************************** -C-Types Foreign Function Interface (:mod:`numpy.ctypeslib`) -*********************************************************** - -.. currentmodule:: numpy.ctypeslib - -.. autofunction:: as_array -.. autofunction:: as_ctypes -.. autofunction:: ctypes_load_library -.. autofunction:: load_library -.. autofunction:: ndpointer diff --git a/pythonPackages/numpy/doc/source/reference/routines.dtype.rst b/pythonPackages/numpy/doc/source/reference/routines.dtype.rst deleted file mode 100755 index a311f3da58..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.dtype.rst +++ /dev/null @@ -1,52 +0,0 @@ -.. 
_routines.dtype: - -Data type routines -================== - -.. currentmodule:: numpy - -.. autosummary:: - :toctree: generated/ - - can_cast - common_type - obj2sctype - -Creating data types -------------------- - -.. autosummary:: - :toctree: generated/ - - - dtype - format_parser - -Data type information ---------------------- -.. autosummary:: - :toctree: generated/ - - finfo - iinfo - MachAr - -Data type testing ------------------ -.. autosummary:: - :toctree: generated/ - - issctype - issubdtype - issubsctype - issubclass_ - find_common_type - -Miscellaneous -------------- -.. autosummary:: - :toctree: generated/ - - typename - sctype2char - mintypecode diff --git a/pythonPackages/numpy/doc/source/reference/routines.dual.rst b/pythonPackages/numpy/doc/source/reference/routines.dual.rst deleted file mode 100755 index 456fc5c027..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.dual.rst +++ /dev/null @@ -1,48 +0,0 @@ -Optionally Scipy-accelerated routines (:mod:`numpy.dual`) -********************************************************* - -.. automodule:: numpy.dual - -Linear algebra --------------- - -.. currentmodule:: numpy.linalg - -.. autosummary:: - - cholesky - det - eig - eigh - eigvals - eigvalsh - inv - lstsq - norm - pinv - solve - svd - -FFT ---- - -.. currentmodule:: numpy.fft - -.. autosummary:: - - fft - fft2 - fftn - ifft - ifft2 - ifftn - -Other ------ - -.. currentmodule:: numpy - -.. autosummary:: - - i0 - diff --git a/pythonPackages/numpy/doc/source/reference/routines.emath.rst b/pythonPackages/numpy/doc/source/reference/routines.emath.rst deleted file mode 100755 index 9f6c2aaa77..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.emath.rst +++ /dev/null @@ -1,10 +0,0 @@ -Mathematical functions with automatic domain (:mod:`numpy.emath`) -*********************************************************************** - -.. currentmodule:: numpy - -.. 
note:: :mod:`numpy.emath` is a preferred alias for :mod:`numpy.lib.scimath`, - available after :mod:`numpy` is imported. - -.. automodule:: numpy.lib.scimath - diff --git a/pythonPackages/numpy/doc/source/reference/routines.err.rst b/pythonPackages/numpy/doc/source/reference/routines.err.rst deleted file mode 100755 index b3a7164b98..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.err.rst +++ /dev/null @@ -1,25 +0,0 @@ -Floating point error handling -***************************** - -.. currentmodule:: numpy - -Setting and getting error handling ----------------------------------- - -.. autosummary:: - :toctree: generated/ - - seterr - geterr - seterrcall - geterrcall - errstate - -Internal functions ------------------- - -.. autosummary:: - :toctree: generated/ - - seterrobj - geterrobj diff --git a/pythonPackages/numpy/doc/source/reference/routines.fft.rst b/pythonPackages/numpy/doc/source/reference/routines.fft.rst deleted file mode 100755 index 6c47925eee..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.fft.rst +++ /dev/null @@ -1,2 +0,0 @@ -.. _routines.fft: -.. automodule:: numpy.fft diff --git a/pythonPackages/numpy/doc/source/reference/routines.financial.rst b/pythonPackages/numpy/doc/source/reference/routines.financial.rst deleted file mode 100755 index 5f426d7abf..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.financial.rst +++ /dev/null @@ -1,21 +0,0 @@ -Financial functions -******************* - -.. currentmodule:: numpy - -Simple financial functions --------------------------- - -.. 
autosummary:: - :toctree: generated/ - - fv - pv - npv - pmt - ppmt - ipmt - irr - mirr - nper - rate diff --git a/pythonPackages/numpy/doc/source/reference/routines.functional.rst b/pythonPackages/numpy/doc/source/reference/routines.functional.rst deleted file mode 100755 index e4aababddc..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.functional.rst +++ /dev/null @@ -1,13 +0,0 @@ -Functional programming -********************** - -.. currentmodule:: numpy - -.. autosummary:: - :toctree: generated/ - - apply_along_axis - apply_over_axes - vectorize - frompyfunc - piecewise diff --git a/pythonPackages/numpy/doc/source/reference/routines.help.rst b/pythonPackages/numpy/doc/source/reference/routines.help.rst deleted file mode 100755 index a41563ccea..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.help.rst +++ /dev/null @@ -1,24 +0,0 @@ -.. _routines.help: - -Numpy-specific help functions -============================= - -.. currentmodule:: numpy - -Finding help ------------- - -.. autosummary:: - :toctree: generated/ - - lookfor - - -Reading help ------------- - -.. autosummary:: - :toctree: generated/ - - info - source diff --git a/pythonPackages/numpy/doc/source/reference/routines.indexing.rst b/pythonPackages/numpy/doc/source/reference/routines.indexing.rst deleted file mode 100755 index 9d8fde8820..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.indexing.rst +++ /dev/null @@ -1,61 +0,0 @@ -.. _routines.indexing: - -Indexing routines -================= - -.. seealso:: :ref:`Indexing ` - -.. currentmodule:: numpy - -Generating index arrays ------------------------ -.. autosummary:: - :toctree: generated/ - - c_ - r_ - s_ - nonzero - where - indices - ix_ - ogrid - unravel_index - diag_indices - diag_indices_from - mask_indices - tril_indices - tril_indices_from - triu_indices - triu_indices_from - -Indexing-like operations ------------------------- -.. 
autosummary:: - :toctree: generated/ - - take - choose - compress - diag - diagonal - select - -Inserting data into arrays --------------------------- -.. autosummary:: - :toctree: generated/ - - place - put - putmask - fill_diagonal - -Iterating over arrays ---------------------- -.. autosummary:: - :toctree: generated/ - - ndenumerate - ndindex - flatiter diff --git a/pythonPackages/numpy/doc/source/reference/routines.io.rst b/pythonPackages/numpy/doc/source/reference/routines.io.rst deleted file mode 100755 index 1293acb485..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.io.rst +++ /dev/null @@ -1,65 +0,0 @@ -Input and output -**************** - -.. currentmodule:: numpy - -NPZ files ---------- -.. autosummary:: - :toctree: generated/ - - load - save - savez - -Text files ----------- -.. autosummary:: - :toctree: generated/ - - loadtxt - savetxt - genfromtxt - fromregex - fromstring - ndarray.tofile - ndarray.tolist - -String formatting ------------------ -.. autosummary:: - :toctree: generated/ - - array_repr - array_str - -Memory mapping files --------------------- -.. autosummary:: - :toctree: generated/ - - memmap - -Text formatting options ------------------------ -.. autosummary:: - :toctree: generated/ - - set_printoptions - get_printoptions - set_string_function - -Base-n representations ----------------------- -.. autosummary:: - :toctree: generated/ - - binary_repr - base_repr - -Data sources ------------- -.. autosummary:: - :toctree: generated/ - - DataSource diff --git a/pythonPackages/numpy/doc/source/reference/routines.linalg.rst b/pythonPackages/numpy/doc/source/reference/routines.linalg.rst deleted file mode 100755 index 4c3c676d9a..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.linalg.rst +++ /dev/null @@ -1,68 +0,0 @@ -.. _routines.linalg: - -Linear algebra (:mod:`numpy.linalg`) -************************************ - -.. 
currentmodule:: numpy - -Matrix and vector products --------------------------- -.. autosummary:: - :toctree: generated/ - - dot - vdot - inner - outer - tensordot - linalg.matrix_power - kron - -Decompositions --------------- -.. autosummary:: - :toctree: generated/ - - linalg.cholesky - linalg.qr - linalg.svd - -Matrix eigenvalues ------------------- -.. autosummary:: - :toctree: generated/ - - linalg.eig - linalg.eigh - linalg.eigvals - linalg.eigvalsh - -Norms and other numbers ------------------------ -.. autosummary:: - :toctree: generated/ - - linalg.norm - linalg.cond - linalg.det - linalg.slogdet - trace - -Solving equations and inverting matrices ----------------------------------------- -.. autosummary:: - :toctree: generated/ - - linalg.solve - linalg.tensorsolve - linalg.lstsq - linalg.inv - linalg.pinv - linalg.tensorinv - -Exceptions ----------- -.. autosummary:: - :toctree: generated/ - - linalg.LinAlgError diff --git a/pythonPackages/numpy/doc/source/reference/routines.logic.rst b/pythonPackages/numpy/doc/source/reference/routines.logic.rst deleted file mode 100755 index 56e36f49aa..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.logic.rst +++ /dev/null @@ -1,64 +0,0 @@ -Logic functions -*************** - -.. currentmodule:: numpy - -Truth value testing -------------------- -.. autosummary:: - :toctree: generated/ - - all - any - -Array contents --------------- -.. autosummary:: - :toctree: generated/ - - isfinite - isinf - isnan - isneginf - isposinf - -Array type testing ------------------- -.. autosummary:: - :toctree: generated/ - - iscomplex - iscomplexobj - isfortran - isreal - isrealobj - isscalar - -Logical operations ------------------- -.. autosummary:: - :toctree: generated/ - - logical_and - logical_or - logical_not - logical_xor - -Comparison ----------- -.. autosummary:: - :toctree: generated/ - - allclose - array_equal - array_equiv - -.. 
autosummary:: - :toctree: generated/ - - greater - greater_equal - less - less_equal - equal - not_equal diff --git a/pythonPackages/numpy/doc/source/reference/routines.ma.rst b/pythonPackages/numpy/doc/source/reference/routines.ma.rst deleted file mode 100755 index 7367553384..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.ma.rst +++ /dev/null @@ -1,404 +0,0 @@ -.. _routines.ma: - -Masked array operations -*********************** - -.. currentmodule:: numpy - - -Constants -========= - -.. autosummary:: - :toctree: generated/ - - ma.MaskType - - -Creation -======== - -From existing data -~~~~~~~~~~~~~~~~~~ - -.. autosummary:: - :toctree: generated/ - - ma.masked_array - ma.array - ma.copy - ma.frombuffer - ma.fromfunction - - ma.MaskedArray.copy - - -Ones and zeros -~~~~~~~~~~~~~~ - -.. autosummary:: - :toctree: generated/ - - ma.empty - ma.empty_like - ma.masked_all - ma.masked_all_like - ma.ones - ma.zeros - - -_____ - -Inspecting the array -==================== - -.. autosummary:: - :toctree: generated/ - - ma.all - ma.any - ma.count - ma.count_masked - ma.getmask - ma.getmaskarray - ma.getdata - ma.nonzero - ma.shape - ma.size - - ma.MaskedArray.data - ma.MaskedArray.mask - ma.MaskedArray.recordmask - - ma.MaskedArray.all - ma.MaskedArray.any - ma.MaskedArray.count - ma.MaskedArray.nonzero - ma.shape - ma.size - - -_____ - -Manipulating a MaskedArray -========================== - -Changing the shape -~~~~~~~~~~~~~~~~~~ - -.. autosummary:: - :toctree: generated/ - - ma.ravel - ma.reshape - ma.resize - - ma.MaskedArray.flatten - ma.MaskedArray.ravel - ma.MaskedArray.reshape - ma.MaskedArray.resize - - -Modifying axes -~~~~~~~~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.swapaxes - ma.transpose - - ma.MaskedArray.swapaxes - ma.MaskedArray.transpose - - -Changing the number of dimensions -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.. 
autosummary:: - :toctree: generated/ - - ma.atleast_1d - ma.atleast_2d - ma.atleast_3d - ma.expand_dims - ma.squeeze - - ma.MaskedArray.squeeze - - ma.column_stack - ma.concatenate - ma.dstack - ma.hstack - ma.hsplit - ma.mr_ - ma.row_stack - ma.vstack - - -Joining arrays -~~~~~~~~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.column_stack - ma.concatenate - ma.dstack - ma.hstack - ma.vstack - - -_____ - -Operations on masks -=================== - -Creating a mask -~~~~~~~~~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.make_mask - ma.make_mask_none - ma.mask_or - ma.make_mask_descr - - -Accessing a mask -~~~~~~~~~~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.getmask - ma.getmaskarray - ma.masked_array.mask - - -Finding masked data -~~~~~~~~~~~~~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.flatnotmasked_contiguous - ma.flatnotmasked_edges - ma.notmasked_contiguous - ma.notmasked_edges - - -Modifying a mask -~~~~~~~~~~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.mask_cols - ma.mask_or - ma.mask_rowcols - ma.mask_rows - ma.harden_mask - ma.soften_mask - - ma.MaskedArray.harden_mask - ma.MaskedArray.soften_mask - ma.MaskedArray.shrink_mask - ma.MaskedArray.unshare_mask - - -_____ - -Conversion operations -====================== - -> to a masked array -~~~~~~~~~~~~~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.asarray - ma.asanyarray - ma.fix_invalid - ma.masked_equal - ma.masked_greater - ma.masked_greater_equal - ma.masked_inside - ma.masked_invalid - ma.masked_less - ma.masked_less_equal - ma.masked_not_equal - ma.masked_object - ma.masked_outside - ma.masked_values - ma.masked_where - - -> to a ndarray -~~~~~~~~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.compress_cols - ma.compress_rowcols - ma.compress_rows - ma.compressed - ma.filled - - ma.MaskedArray.compressed - ma.MaskedArray.filled - - -> to another object -~~~~~~~~~~~~~~~~~~~ -.. 
autosummary:: - :toctree: generated/ - - ma.MaskedArray.tofile - ma.MaskedArray.tolist - ma.MaskedArray.torecords - ma.MaskedArray.tostring - - -Pickling and unpickling -~~~~~~~~~~~~~~~~~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.dump - ma.dumps - ma.load - ma.loads - - -Filling a masked array -~~~~~~~~~~~~~~~~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.common_fill_value - ma.default_fill_value - ma.maximum_fill_value - ma.minimum_fill_value - ma.set_fill_value - - ma.MaskedArray.get_fill_value - ma.MaskedArray.set_fill_value - ma.MaskedArray.fill_value - - -_____ - -Masked arrays arithmetics -========================= - -Arithmetics -~~~~~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.anom - ma.anomalies - ma.average - ma.conjugate - ma.corrcoef - ma.cov - ma.cumsum - ma.cumprod - ma.mean - ma.median - ma.power - ma.prod - ma.std - ma.sum - ma.var - - ma.MaskedArray.anom - ma.MaskedArray.cumprod - ma.MaskedArray.cumsum - ma.MaskedArray.mean - ma.MaskedArray.prod - ma.MaskedArray.std - ma.MaskedArray.sum - ma.MaskedArray.var - - -Minimum/maximum -~~~~~~~~~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.argmax - ma.argmin - ma.max - ma.min - ma.ptp - - ma.MaskedArray.argmax - ma.MaskedArray.argmin - ma.MaskedArray.max - ma.MaskedArray.min - ma.MaskedArray.ptp - - -Sorting -~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.argsort - ma.sort - ma.MaskedArray.argsort - ma.MaskedArray.sort - - -Algebra -~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.diag - ma.dot - ma.identity - ma.inner - ma.innerproduct - ma.outer - ma.outerproduct - ma.trace - ma.transpose - - ma.MaskedArray.trace - ma.MaskedArray.transpose - - -Polynomial fit -~~~~~~~~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.vander - ma.polyfit - - -Clipping and rounding -~~~~~~~~~~~~~~~~~~~~~ -.. autosummary:: - :toctree: generated/ - - ma.around - ma.clip - ma.round - - ma.MaskedArray.clip - ma.MaskedArray.round - - -Miscellanea -~~~~~~~~~~~ -.. 
autosummary:: - :toctree: generated/ - - ma.allequal - ma.allclose - ma.apply_along_axis - ma.arange - ma.choose - ma.ediff1d - ma.indices - ma.where - - diff --git a/pythonPackages/numpy/doc/source/reference/routines.math.rst b/pythonPackages/numpy/doc/source/reference/routines.math.rst deleted file mode 100755 index 7ce77c24d4..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.math.rst +++ /dev/null @@ -1,150 +0,0 @@ -Mathematical functions -********************** - -.. currentmodule:: numpy - -Trigonometric functions ------------------------ -.. autosummary:: - :toctree: generated/ - - sin - cos - tan - arcsin - arccos - arctan - hypot - arctan2 - degrees - radians - unwrap - deg2rad - rad2deg - -Hyperbolic functions --------------------- -.. autosummary:: - :toctree: generated/ - - sinh - cosh - tanh - arcsinh - arccosh - arctanh - -Rounding --------- -.. autosummary:: - :toctree: generated/ - - around - round_ - rint - fix - floor - ceil - trunc - -Sums, products, differences ---------------------------- -.. autosummary:: - :toctree: generated/ - - prod - sum - nansum - cumprod - cumsum - diff - ediff1d - gradient - cross - trapz - -Exponents and logarithms ------------------------- -.. autosummary:: - :toctree: generated/ - - exp - expm1 - exp2 - log - log10 - log2 - log1p - logaddexp - logaddexp2 - -Other special functions ------------------------ -.. autosummary:: - :toctree: generated/ - - i0 - sinc - -Floating point routines ------------------------ -.. autosummary:: - :toctree: generated/ - - signbit - copysign - frexp - ldexp - -Arithmetic operations ---------------------- -.. autosummary:: - :toctree: generated/ - - add - reciprocal - negative - multiply - divide - power - subtract - true_divide - floor_divide - - fmod - mod - modf - remainder - -Handling complex numbers ------------------------- -.. autosummary:: - :toctree: generated/ - - angle - real - imag - conj - - -Miscellaneous -------------- -.. 
autosummary:: - :toctree: generated/ - - convolve - clip - - sqrt - square - - absolute - fabs - sign - maximum - minimum - - nan_to_num - real_if_close - - interp diff --git a/pythonPackages/numpy/doc/source/reference/routines.matlib.rst b/pythonPackages/numpy/doc/source/reference/routines.matlib.rst deleted file mode 100755 index 7f8a9eabb3..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.matlib.rst +++ /dev/null @@ -1,11 +0,0 @@ -Matrix library (:mod:`numpy.matlib`) -************************************ - -.. currentmodule:: numpy - -This module contains all functions in the :mod:`numpy` namespace, with -the following replacement functions that return :class:`matrices -` instead of :class:`ndarrays `. - -.. automodule:: numpy.matlib - diff --git a/pythonPackages/numpy/doc/source/reference/routines.numarray.rst b/pythonPackages/numpy/doc/source/reference/routines.numarray.rst deleted file mode 100755 index 36e5aa7645..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.numarray.rst +++ /dev/null @@ -1,6 +0,0 @@ -********************************************** -Numarray compatibility (:mod:`numpy.numarray`) -********************************************** - -.. automodule:: numpy.numarray - diff --git a/pythonPackages/numpy/doc/source/reference/routines.oldnumeric.rst b/pythonPackages/numpy/doc/source/reference/routines.oldnumeric.rst deleted file mode 100755 index d7f15bcfd9..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.oldnumeric.rst +++ /dev/null @@ -1,8 +0,0 @@ -*************************************************** -Old Numeric compatibility (:mod:`numpy.oldnumeric`) -*************************************************** - -.. currentmodule:: numpy - -.. 
automodule:: numpy.oldnumeric - diff --git a/pythonPackages/numpy/doc/source/reference/routines.other.rst b/pythonPackages/numpy/doc/source/reference/routines.other.rst deleted file mode 100755 index 354f457338..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.other.rst +++ /dev/null @@ -1,24 +0,0 @@ -Miscellaneous routines -********************** - -.. toctree:: - -.. currentmodule:: numpy - -Buffer objects --------------- -.. autosummary:: - :toctree: generated/ - - getbuffer - newbuffer - -Performance tuning ------------------- -.. autosummary:: - :toctree: generated/ - - alterdot - restoredot - setbufsize - getbufsize diff --git a/pythonPackages/numpy/doc/source/reference/routines.poly.rst b/pythonPackages/numpy/doc/source/reference/routines.poly.rst deleted file mode 100755 index f30b2c8844..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.poly.rst +++ /dev/null @@ -1,46 +0,0 @@ -Polynomials -*********** - -.. currentmodule:: numpy - -Basics ------- -.. autosummary:: - :toctree: generated/ - - poly1d - polyval - poly - roots - -Fitting -------- -.. autosummary:: - :toctree: generated/ - - polyfit - -Calculus --------- -.. autosummary:: - :toctree: generated/ - - polyder - polyint - -Arithmetic ----------- -.. autosummary:: - :toctree: generated/ - - polyadd - polydiv - polymul - polysub - -Warnings --------- -.. autosummary:: - :toctree: generated/ - - RankWarning diff --git a/pythonPackages/numpy/doc/source/reference/routines.random.rst b/pythonPackages/numpy/doc/source/reference/routines.random.rst deleted file mode 100755 index 508c2c96e7..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.random.rst +++ /dev/null @@ -1,77 +0,0 @@ -.. _routines.random: - -Random sampling (:mod:`numpy.random`) -************************************* - -.. currentmodule:: numpy.random - -Simple random data -================== -.. 
autosummary:: - :toctree: generated/ - - rand - randn - randint - random_integers - random_sample - bytes - -Permutations -============ -.. autosummary:: - :toctree: generated/ - - shuffle - permutation - -Distributions -============= -.. autosummary:: - :toctree: generated/ - - beta - binomial - chisquare - mtrand.dirichlet - exponential - f - gamma - geometric - gumbel - hypergeometric - laplace - logistic - lognormal - logseries - multinomial - multivariate_normal - negative_binomial - noncentral_chisquare - noncentral_f - normal - pareto - poisson - power - rayleigh - standard_cauchy - standard_exponential - standard_gamma - standard_normal - standard_t - triangular - uniform - vonmises - wald - weibull - zipf - -Random generator -================ -.. autosummary:: - :toctree: generated/ - - mtrand.RandomState - seed - get_state - set_state diff --git a/pythonPackages/numpy/doc/source/reference/routines.rst b/pythonPackages/numpy/doc/source/reference/routines.rst deleted file mode 100755 index 0788d3a0ab..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.rst +++ /dev/null @@ -1,47 +0,0 @@ -******** -Routines -******** - -In this chapter routine docstrings are presented, grouped by functionality. -Many docstrings contain example code, which demonstrates basic usage -of the routine. The examples assume that NumPy is imported with:: - - >>> import numpy as np - -A convenient way to execute examples is the ``%doctest_mode`` mode of -IPython, which allows for pasting of multi-line examples and preserves -indentation. - -.. 
toctree:: - :maxdepth: 2 - - routines.array-creation - routines.array-manipulation - routines.indexing - routines.dtype - routines.io - routines.fft - routines.linalg - routines.random - routines.sort - routines.logic - routines.bitwise - routines.statistics - routines.math - routines.functional - routines.poly - routines.financial - routines.set - routines.window - routines.err - routines.ma - routines.help - routines.other - routines.testing - routines.emath - routines.matlib - routines.dual - routines.numarray - routines.oldnumeric - routines.ctypeslib - routines.char diff --git a/pythonPackages/numpy/doc/source/reference/routines.set.rst b/pythonPackages/numpy/doc/source/reference/routines.set.rst deleted file mode 100755 index 27c6aeb898..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.set.rst +++ /dev/null @@ -1,22 +0,0 @@ -Set routines -============ - -.. currentmodule:: numpy - -Making proper sets ------------------- -.. autosummary:: - :toctree: generated/ - - unique - -Boolean operations ------------------- -.. autosummary:: - :toctree: generated/ - - in1d - intersect1d - setdiff1d - setxor1d - union1d diff --git a/pythonPackages/numpy/doc/source/reference/routines.sort.rst b/pythonPackages/numpy/doc/source/reference/routines.sort.rst deleted file mode 100755 index 8dc769ea93..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.sort.rst +++ /dev/null @@ -1,32 +0,0 @@ -Sorting and searching -===================== - -.. currentmodule:: numpy - -Sorting -------- -.. autosummary:: - :toctree: generated/ - - sort - lexsort - argsort - ndarray.sort - msort - sort_complex - -Searching ---------- -.. 
autosummary:: - :toctree: generated/ - - argmax - nanargmax - argmin - nanargmin - argwhere - nonzero - flatnonzero - where - searchsorted - extract diff --git a/pythonPackages/numpy/doc/source/reference/routines.statistics.rst b/pythonPackages/numpy/doc/source/reference/routines.statistics.rst deleted file mode 100755 index b41b62839f..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.statistics.rst +++ /dev/null @@ -1,51 +0,0 @@ -Statistics -========== - -.. currentmodule:: numpy - - -Extremal values ---------------- - -.. autosummary:: - :toctree: generated/ - - amin - amax - nanmax - nanmin - ptp - -Averages and variances ----------------------- - -.. autosummary:: - :toctree: generated/ - - average - mean - median - std - var - -Correlating ------------ - -.. autosummary:: - :toctree: generated/ - - corrcoef - correlate - cov - -Histograms ----------- - -.. autosummary:: - :toctree: generated/ - - histogram - histogram2d - histogramdd - bincount - digitize diff --git a/pythonPackages/numpy/doc/source/reference/routines.testing.rst b/pythonPackages/numpy/doc/source/reference/routines.testing.rst deleted file mode 100755 index 5f92da1634..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.testing.rst +++ /dev/null @@ -1,48 +0,0 @@ -Test Support (:mod:`numpy.testing`) -=================================== - -.. currentmodule:: numpy.testing - -Common test support for all numpy test scripts. - -This single module should provide all the common functionality for numpy -tests in a single location, so that test scripts can just import it and -work right away. - - -Asserts -======= -.. autosummary:: - :toctree: generated/ - - assert_almost_equal - assert_approx_equal - assert_array_almost_equal - assert_array_equal - assert_array_less - assert_equal - assert_raises - assert_warns - assert_string_equal - -Decorators ----------- -.. 
autosummary:: - :toctree: generated/ - - decorators.deprecated - decorators.knownfailureif - decorators.setastest - decorators.skipif - decorators.slow - decorate_methods - - -Test Running ------------- -.. autosummary:: - :toctree: generated/ - - Tester - run_module_suite - rundocs diff --git a/pythonPackages/numpy/doc/source/reference/routines.window.rst b/pythonPackages/numpy/doc/source/reference/routines.window.rst deleted file mode 100755 index 7f3414815f..0000000000 --- a/pythonPackages/numpy/doc/source/reference/routines.window.rst +++ /dev/null @@ -1,16 +0,0 @@ -Window functions -================ - -.. currentmodule:: numpy - -Various windows ---------------- - -.. autosummary:: - :toctree: generated/ - - bartlett - blackman - hamming - hanning - kaiser diff --git a/pythonPackages/numpy/doc/source/reference/ufuncs.rst b/pythonPackages/numpy/doc/source/reference/ufuncs.rst deleted file mode 100755 index 77269be589..0000000000 --- a/pythonPackages/numpy/doc/source/reference/ufuncs.rst +++ /dev/null @@ -1,568 +0,0 @@ -.. sectionauthor:: adapted from "Guide to Numpy" by Travis E. Oliphant - -.. _ufuncs: - -************************************ -Universal functions (:class:`ufunc`) -************************************ - -.. note: XXX: section might need to be made more reference-guideish... - -.. currentmodule:: numpy - -.. index: ufunc, universal function, arithmetic, operation - -A universal function (or :term:`ufunc` for short) is a function that -operates on :class:`ndarrays ` in an element-by-element fashion, -supporting :ref:`array broadcasting `, :ref:`type -casting `, and several other standard features. That -is, a ufunc is a ":term:`vectorized`" wrapper for a function that -takes a fixed number of scalar inputs and produces a fixed number of -scalar outputs. - -In Numpy, universal functions are instances of the -:class:`numpy.ufunc` class. 
Many of the built-in functions are -implemented in compiled C code, but :class:`ufunc` instances can also -be produced using the :func:`frompyfunc` factory function. - - -.. _ufuncs.broadcasting: - -Broadcasting -============ - -.. index:: broadcasting - -Each universal function takes array inputs and produces array outputs -by performing the core function element-wise on the inputs. Standard -broadcasting rules are applied so that inputs not sharing exactly the -same shapes can still be usefully operated on. Broadcasting can be -understood by four rules: - -1. All input arrays with :attr:`ndim ` smaller than the - input array of largest :attr:`ndim `, have 1's - prepended to their shapes. - -2. The size in each dimension of the output shape is the maximum of all - the input sizes in that dimension. - -3. An input can be used in the calculation if its size in a particular - dimension either matches the output size in that dimension, or has - value exactly 1. - -4. If an input has a dimension size of 1 in its shape, the first data - entry in that dimension will be used for all calculations along - that dimension. In other words, the stepping machinery of the - :term:`ufunc` will simply not step along that dimension (the - :term:`stride` will be 0 for that dimension). - -Broadcasting is used throughout NumPy to decide how to handle -disparately shaped arrays; for example, all arithmetic operations (``+``, -``-``, ``*``, ...) between :class:`ndarrays ` broadcast the -arrays before operation. - -.. _arrays.broadcasting.broadcastable: - -.. index:: broadcastable - -A set of arrays is called ":term:`broadcastable`" to the same shape if -the above rules produce a valid result, *i.e.*, one of the following -is true: - -1. The arrays all have exactly the same shape. - -2. The arrays all have the same number of dimensions and the length of - each dimensions is either a common length or 1. - -3. 
The arrays that have too few dimensions can have their shapes prepended - with a dimension of length 1 to satisfy property 2. - -.. admonition:: Example - - If ``a.shape`` is (5,1), ``b.shape`` is (1,6), ``c.shape`` is (6,) - and ``d.shape`` is () so that *d* is a scalar, then *a*, *b*, *c*, - and *d* are all broadcastable to dimension (5,6); and - - - *a* acts like a (5,6) array where ``a[:,0]`` is broadcast to the other - columns, - - - *b* acts like a (5,6) array where ``b[0,:]`` is broadcast - to the other rows, - - - *c* acts like a (1,6) array and therefore like a (5,6) array - where ``c[:]`` is broadcast to every row, and finally, - - - *d* acts like a (5,6) array where the single value is repeated. - - -.. _ufuncs.output-type: - -Output type determination -========================= - -The output of the ufunc (and its methods) is not necessarily an -:class:`ndarray`, if all input arguments are not :class:`ndarrays `. - -All output arrays will be passed to the :obj:`__array_prepare__` and -:obj:`__array_wrap__` methods of the input (besides -:class:`ndarrays `, and scalars) that defines it **and** has -the highest :obj:`__array_priority__` of any other input to the -universal function. The default :obj:`__array_priority__` of the -ndarray is 0.0, and the default :obj:`__array_priority__` of a subtype -is 1.0. Matrices have :obj:`__array_priority__` equal to 10.0. - -All ufuncs can also take output arguments. If necessary, output will -be cast to the data-type(s) of the provided output array(s). If a class -with an :obj:`__array__` method is used for the output, results will be -written to the object returned by :obj:`__array__`. Then, if the class -also has an :obj:`__array_prepare__` method, it is called so metadata -may be determined based on the context of the ufunc (the context -consisting of the ufunc itself, the arguments passed to the ufunc, and -the ufunc domain.) 
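One observable consequence of the priority rule described here: masked arrays from :mod:`numpy.ma` define a higher :obj:`__array_priority__` than plain ndarrays, so a ufunc mixing the two hands output construction to the masked array and returns a :class:`MaskedArray`. A minimal sketch (the exact priority value is an implementation detail):

```python
import numpy as np

# MaskedArray advertises a nonzero __array_priority__, while a plain
# ndarray has priority 0.0, so the ufunc machinery lets the masked
# array's __array_wrap__ produce the output object.
m = np.ma.array([1.0, 2.0, 3.0], mask=[False, True, False])
plain = np.array([10.0, 20.0, 30.0])

result = np.add(m, plain)
print(type(result).__name__)      # MaskedArray
print(bool(result.mask[1]))       # True -- the masked entry stays masked
```

The same mechanism is what makes matrices (priority 10.0) "win" over plain arrays in mixed expressions.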
The array object returned by -:obj:`__array_prepare__` is passed to the ufunc for computation. -Finally, if the class also has an :obj:`__array_wrap__` method, the returned -:class:`ndarray` result will be passed to that method just before -passing control back to the caller. - -Use of internal buffers -======================= - -.. index:: buffers - -Internally, buffers are used for misaligned data, swapped data, and -data that has to be converted from one data type to another. The size -of internal buffers is settable on a per-thread basis. There can -be up to :math:`2 (n_{\mathrm{inputs}} + n_{\mathrm{outputs}})` -buffers of the specified size created to handle the data from all the -inputs and outputs of a ufunc. The default size of a buffer is -10,000 elements. Whenever buffer-based calculation would be needed, -but all input arrays are smaller than the buffer size, those -misbehaved or incorrectly-typed arrays will be copied before the -calculation proceeds. Adjusting the size of the buffer may therefore -alter the speed at which ufunc calculations of various sorts are -completed. A simple interface for setting this variable is accessible -using the function - -.. autosummary:: - :toctree: generated/ - - setbufsize - - -Error handling -============== - -.. index:: error handling - -Universal functions can trip special floating-point status registers -in your hardware (such as divide-by-zero). If available on your -platform, these registers will be regularly checked during -calculation. Error handling is controlled on a per-thread basis, -and can be configured using the functions - -.. autosummary:: - :toctree: generated/ - - seterr - seterrcall - -.. _ufuncs.casting: - -Casting Rules -============= - -.. index:: - pair: ufunc; casting rules - -At the core of every ufunc is a one-dimensional strided loop that -implements the actual function for a specific type combination. 
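The static list of inner loops and their type signatures is visible from Python through a ufunc's :attr:`types`, :attr:`nin`, and :attr:`nout` attributes; a quick sketch:

```python
import numpy as np

# Each entry in .types is one inner-loop signature: the input
# character codes, '->', then the output character code.
print(np.add.types[:6])

# 'dd->d' is the double-precision loop selected for float64 inputs:
print('dd->d' in np.add.types)   # True

# nin and nout give the fixed arity shared by all of the signatures:
print(np.add.nin, np.add.nout)   # 2 1
```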
When a -ufunc is created, it is given a static list of inner loops and a -corresponding list of type signatures over which the ufunc operates. -The ufunc machinery uses this list to determine which inner loop to -use for a particular case. You can inspect the :attr:`.types -` attribute for a particular ufunc to see which type -combinations have a defined inner loop and which output type they -produce (:ref:`character codes ` are used -in said output for brevity). - -Casting must be done on one or more of the inputs whenever the ufunc -does not have a core loop implementation for the input types provided. -If an implementation for the input types cannot be found, then the -algorithm searches for an implementation with a type signature to -which all of the inputs can be cast "safely." The first one it finds -in its internal list of loops is selected and performed, after all -necessary type casting. Recall that internal copies during ufuncs (even -for casting) are limited to the size of an internal buffer (which is user -settable). - -.. note:: - - Universal functions in NumPy are flexible enough to have mixed type - signatures. Thus, for example, a universal function could be defined - that works with floating-point and integer values. See :func:`ldexp` - for an example. - -By the above description, the casting rules are essentially -implemented by the question of when a data type can be cast "safely" -to another data type. The answer to this question can be determined in -Python with a function call: :func:`can_cast(fromtype, totype) -`. The Figure below shows the results of this call for -the 21 internally supported types on the author's 32-bit system. You -can generate this table for your system with the code given in the Figure. - -.. admonition:: Figure - - Code segment showing the "can cast safely" table for a 32-bit system. - - >>> def print_table(ntypes): - ... print 'X', - ... for char in ntypes: print char, - ... print - ... for row in ntypes: - ... 
print row, - ... for col in ntypes: - ... print int(np.can_cast(row, col)), - ... print - >>> print_table(np.typecodes['All']) - X ? b h i l q p B H I L Q P f d g F D G S U V O - ? 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 - b 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 - h 0 0 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 - i 0 0 0 1 1 1 1 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 - l 0 0 0 1 1 1 1 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 - q 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 - p 0 0 0 1 1 1 1 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 - B 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 - H 0 0 0 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 - I 0 0 0 0 0 1 0 0 0 1 1 1 1 0 1 1 0 1 1 1 1 1 1 - L 0 0 0 0 0 1 0 0 0 1 1 1 1 0 1 1 0 1 1 1 1 1 1 - Q 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 1 0 1 1 1 1 1 1 - P 0 0 0 0 0 1 0 0 0 1 1 1 1 0 1 1 0 1 1 1 1 1 1 - f 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 - d 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 - g 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 1 1 1 1 - F 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 - D 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 - G 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 - S 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 - U 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 - V 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 - O 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 - -You should note that, while included in the table for completeness, -the 'S', 'U', and 'V' types cannot be operated on by ufuncs. Also, -note that on a 64-bit system the integer types may have different -sizes, resulting in a slightly altered table. - -Mixed scalar-array operations use a different set of casting rules -that ensure that a scalar cannot "upcast" an array unless the scalar is -of a fundamentally different kind of data (*i.e.*, under a different -hierarchy in the data-type hierarchy) than the array. 
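Both halves of this rule — one-directional "safe" casts and the refusal of same-kind scalars to upcast an array — can be observed directly. A short sketch (value-based scalar casting details have varied between NumPy releases, but these cases are stable):

```python
import numpy as np

# Safe casting is one-directional: int32 fits in float64, not vice versa.
print(np.can_cast(np.int32, np.float64))   # True
print(np.can_cast(np.float64, np.int32))   # False

# A same-kind Python scalar does not upcast a small-precision array...
a = np.array([1, 2, 3], dtype=np.int8)
print((a + 3).dtype)                       # int8

# ...but a scalar of a fundamentally different kind does:
print((a + 3.5).dtype)                     # float64
```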
This rule -enables you to use scalar constants in your code (which, as Python -types, are interpreted accordingly in ufuncs) without worrying about -whether the precision of the scalar constant will cause upcasting on -your large (small precision) array. - - -:class:`ufunc` -============== - -Optional keyword arguments --------------------------- - -All ufuncs take optional keyword arguments. These represent rather -advanced usage and will not typically be used by most Numpy users. - -.. index:: - pair: ufunc; keyword arguments - -*sig* - - Either a data-type, a tuple of data-types, or a special signature - string indicating the input and output types of a ufunc. This argument - allows you to provide a specific signature for the 1-d loop to use - in the underlying calculation. If the loop specified does not exist - for the ufunc, then a TypeError is raised. Normally, a suitable loop is - found automatically by comparing the input types with what is - available and searching for a loop with data-types to which all inputs - can be cast safely. This keyword argument lets you bypass that - search and choose a particular loop. A list of available signatures is - provided by the **types** attribute of the ufunc object. - -*extobj* - - a list of length 1, 2, or 3 specifying the ufunc buffer-size, the - error mode integer, and the error call-back function. Normally, these - values are looked up in a thread-specific dictionary. Passing them - here circumvents that look up and uses the low-level specification - provided for the error mode. This may be useful, for example, as an - optimization for calculations requiring many ufunc calls on small arrays - in a loop. - - -Attributes ----------- - -There are some informational attributes that universal functions -possess. None of the attributes can be set. - -.. index:: - pair: ufunc; attributes - - -============ ================================================================= -**__doc__** A docstring for each ufunc. 
The first part of the docstring is - dynamically generated from the number of outputs, the name, and - the number of inputs. The second part of the docstring is - provided at creation time and stored with the ufunc. - -**__name__** The name of the ufunc. -============ ================================================================= - -.. autosummary:: - :toctree: generated/ - - ufunc.nin - ufunc.nout - ufunc.nargs - ufunc.ntypes - ufunc.types - ufunc.identity - -Methods -------- - -All ufuncs have four methods. However, these methods only make sense on -ufuncs that take two input arguments and return one output argument. -Attempting to call these methods on other ufuncs will cause a -:exc:`ValueError`. The reduce-like methods all take an *axis* keyword -and a *dtype* keyword, and the arrays must all have dimension >= 1. -The *axis* keyword specifies the axis of the array over which the reduction -will take place and may be negative, but must be an integer. The -*dtype* keyword allows you to manage a very common problem that arises -when naively using :ref:`{op}.reduce `. Sometimes you may -have an array of a certain data type and wish to add up all of its -elements, but the result does not fit into the data type of the -array. This commonly happens if you have an array of single-byte -integers. The *dtype* keyword allows you to alter the data type over which -the reduction takes place (and therefore the type of the output). Thus, -you can ensure that the output is a data type with precision large enough -to handle your output. The responsibility of altering the reduce type is -mostly up to you. There is one exception: if no *dtype* is given for a -reduction on the "add" or "multiply" operations, then if the input type is -an integer (or Boolean) data-type and smaller than the size of the -:class:`int_` data type, it will be internally upcast to the :class:`int_` -(or :class:`uint`) data-type. - -.. index:: - pair: ufunc; methods - -.. 
autosummary:: - :toctree: generated/ - - ufunc.reduce - ufunc.accumulate - ufunc.reduceat - ufunc.outer - - -.. warning:: - - A reduce-like operation on an array with a data-type that has a - range "too small" to handle the result will silently wrap. One - should use `dtype` to increase the size of the data-type over which - reduction takes place. - - -Available ufuncs -================ - -There are currently more than 60 universal functions defined in -:mod:`numpy` on one or more types, covering a wide variety of -operations. Some of these ufuncs are called automatically on arrays -when the relevant infix notation is used (*e.g.*, :func:`add(a, b) ` -is called internally when ``a + b`` is written and *a* or *b* is an -:class:`ndarray`). Nevertheless, you may still want to use the ufunc -call in order to use the optional output argument(s) to place the -output(s) in an object (or objects) of your choice. - -Recall that each ufunc operates element-by-element. Therefore, each -ufunc will be described as if acting on a set of scalar inputs to -return a set of scalar outputs. - -.. note:: - - The ufunc still returns its output(s) even if you use the optional - output argument(s). - -Math operations ---------------- - -.. autosummary:: - - add - subtract - multiply - divide - logaddexp - logaddexp2 - true_divide - floor_divide - negative - power - remainder - mod - fmod - absolute - rint - sign - conj - exp - exp2 - log - log2 - log10 - expm1 - log1p - sqrt - square - reciprocal - ones_like - -.. tip:: - - The optional output arguments can be used to help you save memory - for large calculations. If your arrays are large, complicated - expressions can take longer than absolutely necessary due to the - creation and (later) destruction of temporary calculation - spaces. For example, the expression ``G = a * b + c`` is equivalent to - ``t1 = A * B; G = T1 + C; del t1``. 
It will be more quickly executed - as ``G = A * B; add(G, C, G)`` which is the same as - ``G = A * B; G += C``. - - -Trigonometric functions ------------------------ -All trigonometric functions use radians when an angle is called for. -The ratio of degrees to radians is :math:`180^{\circ}/\pi.` - -.. autosummary:: - - sin - cos - tan - arcsin - arccos - arctan - arctan2 - hypot - sinh - cosh - tanh - arcsinh - arccosh - arctanh - deg2rad - rad2deg - -Bit-twiddling functions ------------------------ - -These function all require integer arguments and they manipulate the -bit-pattern of those arguments. - -.. autosummary:: - - bitwise_and - bitwise_or - bitwise_xor - invert - left_shift - right_shift - -Comparison functions --------------------- - -.. autosummary:: - - greater - greater_equal - less - less_equal - not_equal - equal - -.. warning:: - - Do not use the Python keywords ``and`` and ``or`` to combine - logical array expressions. These keywords will test the truth - value of the entire array (not element-by-element as you might - expect). Use the bitwise operators & and \| instead. - -.. autosummary:: - - logical_and - logical_or - logical_xor - logical_not - -.. warning:: - - The bit-wise operators & and \| are the proper way to perform - element-by-element array comparisons. Be sure you understand the - operator precedence: ``(a > 2) & (a < 5)`` is the proper syntax because - ``a > 2 & a < 5`` will result in an error due to the fact that ``2 & a`` - is evaluated first. - -.. autosummary:: - - maximum - -.. tip:: - - The Python function ``max()`` will find the maximum over a one-dimensional - array, but it will do so using a slower sequence interface. The reduce - method of the maximum ufunc is much faster. Also, the ``max()`` method - will not give answers you might expect for arrays with greater than - one dimension. The reduce method of minimum also allows you to compute - a total minimum over an array. - -.. autosummary:: - - minimum - -.. 
warning:: - - the behavior of ``maximum(a, b)`` is different than that of ``max(a, b)``. - As a ufunc, ``maximum(a, b)`` performs an element-by-element comparison - of `a` and `b` and chooses each element of the result according to which - element in the two arrays is larger. In contrast, ``max(a, b)`` treats - the objects `a` and `b` as a whole, looks at the (total) truth value of - ``a > b`` and uses it to return either `a` or `b` (as a whole). A similar - difference exists between ``minimum(a, b)`` and ``min(a, b)``. - - -Floating functions ------------------- - -Recall that all of these functions work element-by-element over an -array, returning an array output. The description details only a -single operation. - -.. autosummary:: - - isreal - iscomplex - isfinite - isinf - isnan - signbit - copysign - nextafter - modf - ldexp - frexp - fmod - floor - ceil - trunc diff --git a/pythonPackages/numpy/doc/source/release.rst b/pythonPackages/numpy/doc/source/release.rst deleted file mode 100755 index ce50cf2901..0000000000 --- a/pythonPackages/numpy/doc/source/release.rst +++ /dev/null @@ -1,5 +0,0 @@ -************* -Release Notes -************* - -.. include:: ../release/1.3.0-notes.rst diff --git a/pythonPackages/numpy/doc/source/scipyshiny_small.png b/pythonPackages/numpy/doc/source/scipyshiny_small.png deleted file mode 100755 index 7ef81a9e8f..0000000000 Binary files a/pythonPackages/numpy/doc/source/scipyshiny_small.png and /dev/null differ diff --git a/pythonPackages/numpy/doc/source/user/basics.broadcasting.rst b/pythonPackages/numpy/doc/source/user/basics.broadcasting.rst deleted file mode 100755 index 65584b1fd3..0000000000 --- a/pythonPackages/numpy/doc/source/user/basics.broadcasting.rst +++ /dev/null @@ -1,7 +0,0 @@ -************ -Broadcasting -************ - -.. seealso:: :class:`numpy.broadcast` - -.. 
automodule:: numpy.doc.broadcasting diff --git a/pythonPackages/numpy/doc/source/user/basics.byteswapping.rst b/pythonPackages/numpy/doc/source/user/basics.byteswapping.rst deleted file mode 100755 index 4b1008df3a..0000000000 --- a/pythonPackages/numpy/doc/source/user/basics.byteswapping.rst +++ /dev/null @@ -1,5 +0,0 @@ -************* -Byte-swapping -************* - -.. automodule:: numpy.doc.byteswapping diff --git a/pythonPackages/numpy/doc/source/user/basics.creation.rst b/pythonPackages/numpy/doc/source/user/basics.creation.rst deleted file mode 100755 index b3fa810177..0000000000 --- a/pythonPackages/numpy/doc/source/user/basics.creation.rst +++ /dev/null @@ -1,9 +0,0 @@ -.. _arrays.creation: - -************** -Array creation -************** - -.. seealso:: :ref:`Array creation routines ` - -.. automodule:: numpy.doc.creation diff --git a/pythonPackages/numpy/doc/source/user/basics.indexing.rst b/pythonPackages/numpy/doc/source/user/basics.indexing.rst deleted file mode 100755 index 8844adcae6..0000000000 --- a/pythonPackages/numpy/doc/source/user/basics.indexing.rst +++ /dev/null @@ -1,9 +0,0 @@ -.. _basics.indexing: - -******** -Indexing -******** - -.. seealso:: :ref:`Indexing routines ` - -.. automodule:: numpy.doc.indexing diff --git a/pythonPackages/numpy/doc/source/user/basics.io.genfromtxt.rst b/pythonPackages/numpy/doc/source/user/basics.io.genfromtxt.rst deleted file mode 100755 index 814ba520a2..0000000000 --- a/pythonPackages/numpy/doc/source/user/basics.io.genfromtxt.rst +++ /dev/null @@ -1,444 +0,0 @@ -.. sectionauthor:: Pierre Gerard-Marchant - -********************************************* -Importing data with :func:`~numpy.genfromtxt` -********************************************* - -Numpy provides several functions to create arrays from tabular data. -We focus here on the :func:`~numpy.genfromtxt` function. - -In a nutshell, :func:`~numpy.genfromtxt` runs two main loops. 
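As a taste of the flexibility those two loops buy, an empty field that a stricter reader such as :func:`~numpy.loadtxt` would reject is simply filled with ``nan`` by :func:`~numpy.genfromtxt`. A short sketch (shown with Python 3's ``io.StringIO`` rather than the ``StringIO`` module imported in the examples below):

```python
import numpy as np
from io import StringIO  # Python 3 location of StringIO

data = "1, 2, 3\n4, , 6"   # the second row has a missing field

arr = np.genfromtxt(StringIO(data), delimiter=",")
print(arr.shape)            # (2, 3)
# genfromtxt converts the empty field to nan instead of failing:
print(np.isnan(arr[1, 1]))  # True
```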
-The first loop converts each line of the file in a sequence of strings. -The second loop converts each string to the appropriate data type. -This mechanism is slower than a single loop, but gives more flexibility. -In particular, :func:`~numpy.genfromtxt` is able to take missing data into account, when other faster and simpler functions like :func:`~numpy.loadtxt` cannot - - -.. note:: - When giving examples, we will use the following conventions - - >>> import numpy as np - >>> from StringIO import StringIO - - - -Defining the input -================== - -The only mandatory argument of :func:`~numpy.genfromtxt` is the source of the data. -It can be a string corresponding to the name of a local or remote file, or a file-like object with a :meth:`read` method (such as an actual file or a :class:`StringIO.StringIO` object). -If the argument is the URL of a remote file, this latter is automatically downloaded in the current directory. - -The input file can be a text file or an archive. -Currently, the function recognizes :class:`gzip` and :class:`bz2` (`bzip2`) archives. -The type of the archive is determined by examining the extension of the file: -if the filename ends with ``'.gz'``, a :class:`gzip` archive is expected; if it ends with ``'bz2'``, a :class:`bzip2` archive is assumed. - - - -Splitting the lines into columns -================================ - -The :keyword:`delimiter` argument ---------------------------------- - -Once the file is defined and open for reading, :func:`~numpy.genfromtxt` splits each non-empty line into a sequence of strings. -Empty or commented lines are just skipped. -The :keyword:`delimiter` keyword is used to define how the splitting should take place. - -Quite often, a single character marks the separation between columns. -For example, comma-separated files (CSV) use a comma (``,``) or a semicolon (``;``) as delimiter. 
- - >>> data = "1, 2, 3\n4, 5, 6" - >>> np.genfromtxt(StringIO(data), delimiter=",") - array([[ 1., 2., 3.], - [ 4., 5., 6.]]) - -Another common separator is ``"\t"``, the tabulation character. -However, we are not limited to a single character, any string will do. -By default, :func:`~numpy.genfromtxt` assumes ``delimiter=None``, meaning that the line is split along white spaces (including tabs) and that consecutive white spaces are considered as a single white space. - -Alternatively, we may be dealing with a fixed-width file, where columns are defined as a given number of characters. -In that case, we need to set :keyword:`delimiter` to a single integer (if all the columns have the same size) or to a sequence of integers (if columns can have different sizes). - - >>> data = " 1 2 3\n 4 5 67\n890123 4" - >>> np.genfromtxt(StringIO(data), delimiter=3) - array([[ 1., 2., 3.], - [ 4., 5., 67.], - [ 890., 123., 4.]]) - >>> data = "123456789\n 4 7 9\n 4567 9" - >>> np.genfromtxt(StringIO(data), delimiter=(4, 3, 2)) - array([[ 1234., 567., 89.], - [ 4., 7., 9.], - [ 4., 567., 9.]]) - - -The :keyword:`autostrip` argument ---------------------------------- - -By default, when a line is decomposed into a series of strings, the individual entries are not stripped of leading nor trailing white spaces. -This behavior can be overwritten by setting the optional argument :keyword:`autostrip` to a value of ``True``. - - >>> data = "1, abc , 2\n 3, xxx, 4" - >>> # Without autostrip - >>> np.genfromtxt(StringIO(data), dtype="|S5") - array([['1', ' abc ', ' 2'], - ['3', ' xxx', ' 4']], - dtype='|S5') - >>> # With autostrip - >>> np.genfromtxt(StringIO(data), dtype="|S5", autostrip=True) - array([['1', 'abc', '2'], - ['3', 'xxx', '4']], - dtype='|S5') - - -The :keyword:`comments` argument --------------------------------- - -The optional argument :keyword:`comments` is used to define a character string that marks the beginning of a comment. 
-By default, :func:`~numpy.genfromtxt` assumes ``comments='#'``.
-The comment marker may occur anywhere on the line.
-Any characters present after the comment marker are simply ignored.
-
-    >>> data = """#
-    ... # Skip me !
-    ... # Skip me too !
-    ... 1, 2
-    ... 3, 4
-    ... 5, 6 #This is the third line of the data
-    ... 7, 8
-    ... # And here comes the last line
-    ... 9, 0
-    ... """
-    >>> np.genfromtxt(StringIO(data), comments="#", delimiter=",")
-    array([[ 1.,  2.],
-           [ 3.,  4.],
-           [ 5.,  6.],
-           [ 7.,  8.],
-           [ 9.,  0.]])
-
-.. note::
-   There is one notable exception to this behavior: if the optional argument ``names=True``, the first commented line will be examined for names.
-
-
-
-Skipping lines and choosing columns
-===================================
-
-The :keyword:`skip_header` and :keyword:`skip_footer` arguments
----------------------------------------------------------------
-
-The presence of a header in the file can hinder data processing.
-In that case, we need to use the :keyword:`skip_header` optional argument.
-The value of this argument must be an integer which corresponds to the number of lines to skip at the beginning of the file, before any other action is performed.
-Similarly, we can skip the last ``n`` lines of the file by using the :keyword:`skip_footer` argument and giving it a value of ``n``.
-
-    >>> data = "\n".join(str(i) for i in range(10))
-    >>> np.genfromtxt(StringIO(data))
-    array([ 0.,  1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.])
-    >>> np.genfromtxt(StringIO(data),
-    ...               skip_header=3, skip_footer=5)
-    array([ 3.,  4.])
-
-By default, ``skip_header=0`` and ``skip_footer=0``, meaning that no lines are skipped.
-
-
-The :keyword:`usecols` argument
--------------------------------
-
-In some cases, we are not interested in all the columns of the data but only a few of them.
-We can select which columns to import with the :keyword:`usecols` argument.
-This argument accepts a single integer or a sequence of integers corresponding to the indices of the columns to import.
-Remember that by convention, the first column has an index of 0.
-Negative integers behave the same as regular Python negative indexes.
-
-For example, if we want to import only the first and the last columns, we can use ``usecols=(0, -1)``:
-
-    >>> data = "1 2 3\n4 5 6"
-    >>> np.genfromtxt(StringIO(data), usecols=(0, -1))
-    array([[ 1.,  3.],
-           [ 4.,  6.]])
-
-If the columns have names, we can also select which columns to import by giving their name to the :keyword:`usecols` argument, either as a sequence of strings or a comma-separated string:
-
-    >>> data = "1 2 3\n4 5 6"
-    >>> np.genfromtxt(StringIO(data),
-    ...               names="a, b, c", usecols=("a", "c"))
-    array([(1.0, 3.0), (4.0, 6.0)],
-          dtype=[('a', '<f8'), ('c', '<f8')])
-    >>> np.genfromtxt(StringIO(data),
-    ...               names="a, b, c", usecols=("a, c"))
-    array([(1.0, 3.0), (4.0, 6.0)],
-          dtype=[('a', '<f8'), ('c', '<f8')])
-
-
-
-Choosing the data type
-======================
-
-The main way to control how the sequences of strings we have read from the file are converted to other types is to set the :keyword:`dtype` argument.
-Acceptable values for this argument are:
-
-* a single type, such as ``dtype=float``.
-  The output will be 2D with the given dtype, unless a name has been associated with each column with the use of the :keyword:`names` argument (see below).
-* a sequence of types, such as ``dtype=(int, float, float)``.
-* a comma-separated string, such as ``dtype="i4,f8,|S3"``.
-* a dictionary with two keys ``'names'`` and ``'formats'``.
-* a sequence of tuples ``(name, type)``, such as ``dtype=[('A', int), ('B', float)]``.
-* an existing :class:`numpy.dtype` object.
-* the special value ``None``.
-  In that case, the type of each column will be determined from the data itself.
-
-In all the cases but the first one, the output will be a 1D array with a structured dtype.
-
-
-
-Setting the names
-=================
-
-The :keyword:`names` argument
------------------------------
-
-A natural approach when dealing with tabular data is to allocate a name to each column.
-A first possibility is to use an explicit structured dtype, as mentioned previously:
-
-    >>> data = StringIO("1 2 3\n 4 5 6")
-    >>> np.genfromtxt(data, dtype=[(_, int) for _ in "abc"])
-    array([(1, 2, 3), (4, 5, 6)],
-          dtype=[('a', '<i8'), ('b', '<i8'), ('c', '<i8')])
-
-Another simpler possibility is to use the :keyword:`names` keyword with a sequence of strings or a comma-separated string:
-
-    >>> data = StringIO("1 2 3\n 4 5 6")
-    >>> np.genfromtxt(data, names="A, B, C")
-    array([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)],
-          dtype=[('A', '<f8'), ('B', '<f8'), ('C', '<f8')])
-
-We may sometimes need to define the column names from the data itself.
-In that case, we must use the :keyword:`names` keyword with a value of ``True``.
-The names will then be read from the first line (after the ``skip_header`` ones), even if the line is commented out:
-
-    >>> data = StringIO("So it goes\n#a b c\n1 2 3\n 4 5 6")
-    >>> np.genfromtxt(data, skip_header=1, names=True)
-    array([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)],
-          dtype=[('a', '<f8'), ('b', '<f8'), ('c', '<f8')])
-
-The default value of :keyword:`names` is ``None``.
-If we give any other value to the keyword, the new names will overwrite the field names we may have defined with the dtype:
-
-    >>> data = StringIO("1 2 3\n 4 5 6")
-    >>> ndtype = [('a', int), ('b', float), ('c', int)]
-    >>> names = ["A", "B", "C"]
-    >>> np.genfromtxt(data, names=names, dtype=ndtype)
-    array([(1, 2.0, 3), (4, 5.0, 6)],
-          dtype=[('A', '<i8'), ('B', '<f8'), ('C', '<i8')])
-
-
-The :keyword:`defaultfmt` argument
-----------------------------------
-
-If ``names=None`` but a structured dtype is expected, names are defined with the standard NumPy default of ``"f%i"``, yielding names like ``f0``, ``f1`` and so forth:
-
-    >>> data = StringIO("1 2 3\n 4 5 6")
-    >>> np.genfromtxt(data, dtype=(int, float, int))
-    array([(1, 2.0, 3), (4, 5.0, 6)],
-          dtype=[('f0', '<i8'), ('f1', '<f8'), ('f2', '<i8')])
-
-In the same way, if we don't give enough names to match the length of the dtype, the missing names will be defined with this default template:
-
-    >>> data = StringIO("1 2 3\n 4 5 6")
-    >>> np.genfromtxt(data, dtype=(int, float, int), names="a")
-    array([(1, 2.0, 3), (4, 5.0, 6)],
-          dtype=[('a', '<i8'), ('f0', '<f8'), ('f1', '<i8')])
-
-We can overwrite this default with the :keyword:`defaultfmt` argument, that takes any format string:
-
-    >>> data = StringIO("1 2 3\n 4 5 6")
-    >>> np.genfromtxt(data, dtype=(int, float, int), defaultfmt="var_%02i")
-    array([(1, 2.0, 3), (4, 5.0, 6)],
-          dtype=[('var_00', '<i8'), ('var_01', '<f8'), ('var_02', '<i8')])
-
-.. note::
-   We need to keep in mind that ``defaultfmt`` is used only if some names are expected but not defined.
-
-
-Validating names
-----------------
-
-Numpy arrays with a structured dtype can also be viewed as a :class:`recarray`, where a field can be accessed as if it were an attribute.
-For that reason, we may need to make sure that the field name doesn't contain any space or invalid character, or that it does not correspond to the name of a standard attribute (like ``size`` or ``shape``), which would confuse the interpreter.
-:func:`~numpy.genfromtxt` accepts three optional arguments that provide finer control on the names:
-
-   :keyword:`deletechars`
-      Gives a string combining all the characters that must be deleted from the name.
-      By default, invalid characters are ``~!@#$%^&*()-=+~\|]}[{';: /?.>,<``.
-   :keyword:`excludelist`
-      Gives a list of the names to exclude, such as ``return``, ``file``, ``print``...
-      If one of the input names is part of this list, an underscore character (``'_'``) will be appended to it.
-   :keyword:`case_sensitive`
-      Whether the names should be case-sensitive (``case_sensitive=True``),
-      converted to upper case (``case_sensitive=False`` or ``case_sensitive='upper'``) or to lower case (``case_sensitive='lower'``).
-
-
-
-Tweaking the conversion
-=======================
-
-The :keyword:`converters` argument
-----------------------------------
-
-Usually, defining a dtype is sufficient to define how the sequence of strings must be converted.
-However, some additional control may sometimes be required.
-For example, we may want to make sure that a date in a format ``YYYY/MM/DD`` is converted to a :class:`datetime` object, or that a string like ``xx%`` is properly converted to a float between 0 and 1.
-In such cases, we should define conversion functions with the :keyword:`converters` argument.
-
-The value of this argument is typically a dictionary with column indices or column names as keys and a conversion function as values.
-These conversion functions can either be actual functions or lambda functions. In any case, they should accept only a string as input and output only a single element of the wanted type.
-
-In the following example, the second column is converted from a string representing a percentage to a float between 0 and 1:
-
-    >>> convertfunc = lambda x: float(x.strip("%"))/100.
-    >>> data = "1, 2.3%, 45.\n6, 78.9%, 0"
-    >>> names = ("i", "p", "n")
-    >>> # General case .....
-    >>> np.genfromtxt(StringIO(data), delimiter=",", names=names)
-    array([(1.0, nan, 45.0), (6.0, nan, 0.0)],
-          dtype=[('i', '<f8'), ('p', '<f8'), ('n', '<f8')])
-
-We need to keep in mind that by default, ``dtype=float``.
-A float is therefore expected for the second column.
-However, the strings ``' 2.3%'`` and ``' 78.9%'`` cannot be converted to float and we end up having ``np.nan`` instead.
-Let's now use a converter:
-
-    >>> # Converted case ...
-    >>> np.genfromtxt(StringIO(data), delimiter=",", names=names,
-    ...               converters={1: convertfunc})
-    array([(1.0, 0.023, 45.0), (6.0, 0.78900000000000003, 0.0)],
-          dtype=[('i', '<f8'), ('p', '<f8'), ('n', '<f8')])
-
-The same results can be obtained by using the name of the second column (``"p"``) as a key instead of its index:
-
-    >>> # Using a name for the converter ...
-    >>> np.genfromtxt(StringIO(data), delimiter=",", names=names,
-    ...               converters={"p": convertfunc})
-    array([(1.0, 0.023, 45.0), (6.0, 0.78900000000000003, 0.0)],
-          dtype=[('i', '<f8'), ('p', '<f8'), ('n', '<f8')])
-
-Converters can also be used to provide a default for missing entries.
-In the following example, the converter ``convert`` transforms a stripped string into the corresponding float or into -999 if the string is empty:
-
-    >>> data = "1, , 3\n 4, 5, 6"
-    >>> convert = lambda x: float(x.strip() or -999)
-    >>> np.genfromtxt(StringIO(data), delimiter=",",
-    ...               converters={1: convert})
-    array([[   1., -999.,    3.],
-           [   4.,    5.,    6.]])
-
-
-
-
-Using missing and filling values
---------------------------------
-
-Some entries may be missing in the dataset we are trying to import.
-In a previous example, we used a converter to transform an empty string into a float.
-However, user-defined converters may rapidly become cumbersome to manage.
-
-The :func:`~numpy.genfromtxt` function provides two other complementary mechanisms: the :keyword:`missing_values` argument is used to recognize missing data and a second argument, :keyword:`filling_values`, is used to process these missing data.
-
-:keyword:`missing_values`
--------------------------
-
-By default, any empty string is marked as missing.
-We can also consider more complex strings, such as ``"N/A"`` or ``"???"``, to represent missing or invalid data.
-The :keyword:`missing_values` argument accepts three kinds of values:
-
-   a string or a comma-separated string
-      This string will be used as the marker for missing data for all the columns
-   a sequence of strings
-      In that case, each item is associated with a column, in order.
-   a dictionary
-      Values of the dictionary are strings or sequences of strings.
-      The corresponding keys can be column indices (integers) or column names (strings). In addition, the special key ``None`` can be used to define a default applicable to all columns.
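As a quick, minimal sketch of the string form of :keyword:`missing_values` (an assumption here: we use Python 3's ``io.StringIO`` instead of the ``StringIO`` module shown elsewhere in this document, and the optional ``usemask`` flag to obtain a masked array rather than filled values):

```python
import numpy as np
from io import StringIO  # Python 3 stand-in for the StringIO module used above

data = "1,2,???\n4,???,6"

# A single string marks "???" as missing in every column; usemask=True
# returns a masked array whose mask flags the unrecognized entries.
result = np.genfromtxt(StringIO(data), delimiter=",",
                       missing_values="???", usemask=True)
print(result.mask)
```

With ``usemask=False`` (the default), the same missing entries would instead be replaced by the filling values described in the next section.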
-
-
-:keyword:`filling_values`
--------------------------
-
-We know how to recognize missing data, but we still need to provide a value for these missing entries.
-By default, this value is determined from the expected dtype according to this table:
-
-============= ==============
-Expected type Default
-============= ==============
-``bool``      ``False``
-``int``       ``-1``
-``float``     ``np.nan``
-``complex``   ``np.nan+0j``
-``string``    ``'???'``
-============= ==============
-
-We can get finer control over the conversion of missing values with the :keyword:`filling_values` optional argument.
-Like :keyword:`missing_values`, this argument accepts different kinds of values:
-
-   a single value
-      This will be the default for all columns
-   a sequence of values
-      Each entry will be the default for the corresponding column
-   a dictionary
-      Each key can be a column index or a column name, and the corresponding value should be a single object.
-      We can use the special key ``None`` to define a default for all columns.
-
-In the following example, we suppose that the missing values are flagged with ``"N/A"`` in the first column and by ``"???"`` in the third column.
-We wish to transform these missing values to 0 if they occur in the first and second column, and to -999 if they occur in the last column.
-
-    >>> data = "N/A, 2, 3\n4, ,???"
-    >>> kwargs = dict(delimiter=",",
-    ...               dtype=int,
-    ...               names="a,b,c",
-    ...               missing_values={0:"N/A", 'b':" ", 2:"???"},
-    ...               filling_values={0:0, 'b':0, 2:-999})
-    >>> np.genfromtxt(StringIO(data), **kwargs)
-    array([(0, 2, 3), (4, 0, -999)],
-          dtype=[('a', '<i8'), ('b', '<i8'), ('c', '<i8')])
-
-..
automodule:: numpy.doc.basics diff --git a/pythonPackages/numpy/doc/source/user/c-info.beyond-basics.rst b/pythonPackages/numpy/doc/source/user/c-info.beyond-basics.rst deleted file mode 100755 index 563467e72e..0000000000 --- a/pythonPackages/numpy/doc/source/user/c-info.beyond-basics.rst +++ /dev/null @@ -1,740 +0,0 @@ -***************** -Beyond the Basics -***************** - -| The voyage of discovery is not in seeking new landscapes but in having -| new eyes. -| --- *Marcel Proust* - -| Discovery is seeing what everyone else has seen and thinking what no -| one else has thought. -| --- *Albert Szent-Gyorgi* - - -Iterating over elements in the array -==================================== - -.. _`sec:array_iterator`: - -Basic Iteration ---------------- - -One common algorithmic requirement is to be able to walk over all -elements in a multidimensional array. The array iterator object makes -this easy to do in a generic way that works for arrays of any -dimension. Naturally, if you know the number of dimensions you will be -using, then you can always write nested for loops to accomplish the -iteration. If, however, you want to write code that works with any -number of dimensions, then you can make use of the array iterator. An -array iterator object is returned when accessing the .flat attribute -of an array. - -.. index:: - single: array iterator - -Basic usage is to call :cfunc:`PyArray_IterNew` ( ``array`` ) where array -is an ndarray object (or one of its sub-classes). The returned object -is an array-iterator object (the same object returned by the .flat -attribute of the ndarray). This object is usually cast to -PyArrayIterObject* so that its members can be accessed. The only -members that are needed are ``iter->size`` which contains the total -size of the array, ``iter->index``, which contains the current 1-d -index into the array, and ``iter->dataptr`` which is a pointer to the -data for the current element of the array. 
Sometimes it is also
-useful to access ``iter->ao`` which is a pointer to the underlying
-ndarray object.
-
-After processing data at the current element of the array, the next
-element of the array can be obtained using the macro
-:cfunc:`PyArray_ITER_NEXT` ( ``iter`` ). The iteration always proceeds in a
-C-style contiguous fashion (last index varying the fastest). The
-:cfunc:`PyArray_ITER_GOTO` ( ``iter``, ``destination`` ) can be used to
-jump to a particular point in the array, where ``destination`` is an
-array of npy_intp data-type with space to handle at least the number
-of dimensions in the underlying array. Occasionally it is useful to
-use :cfunc:`PyArray_ITER_GOTO1D` ( ``iter``, ``index`` ) which will jump
-to the 1-d index given by the value of ``index``. The most common
-usage, however, is given in the following example.
-
-.. code-block:: c
-
-    PyObject *obj; /* assumed to be some ndarray object */
-    PyArrayIterObject *iter;
-    ...
-    iter = (PyArrayIterObject *)PyArray_IterNew(obj);
-    if (iter == NULL) goto fail; /* Assume fail has clean-up code */
-    while (iter->index < iter->size) {
-        /* do something with the data at iter->dataptr */
-        PyArray_ITER_NEXT(iter);
-    }
-    ...
-
-You can also use :cfunc:`PyArrayIter_Check` ( ``obj`` ) to ensure you have
-an iterator object and :cfunc:`PyArray_ITER_RESET` ( ``iter`` ) to reset an
-iterator object back to the beginning of the array.
-
-It should be emphasized at this point that you may not need the array
-iterator if your array is already contiguous (using an array iterator
-will work but will be slower than the fastest code you could write).
-The major purpose of array iterators is to encapsulate iteration over
-N-dimensional arrays with arbitrary strides. They are used in many,
-many places in the NumPy source code itself. If you already know your
-array is contiguous (Fortran or C), then simply adding the element
-size to a running pointer variable will step you through the array
-very efficiently.
In other words, code like this will probably be -faster for you in the contiguous case (assuming doubles). - -.. code-block:: c - - npy_intp size; - double *dptr; /* could make this any variable type */ - size = PyArray_SIZE(obj); - dptr = PyArray_DATA(obj); - while(size--) { - /* do something with the data at dptr */ - dptr++; - } - - -Iterating over all but one axis -------------------------------- - -A common algorithm is to loop over all elements of an array and -perform some function with each element by issuing a function call. As -function calls can be time consuming, one way to speed up this kind of -algorithm is to write the function so it takes a vector of data and -then write the iteration so the function call is performed for an -entire dimension of data at a time. This increases the amount of work -done per function call, thereby reducing the function-call over-head -to a small(er) fraction of the total time. Even if the interior of the -loop is performed without a function call it can be advantageous to -perform the inner loop over the dimension with the highest number of -elements to take advantage of speed enhancements available on micro- -processors that use pipelining to enhance fundmental operations. - -The :cfunc:`PyArray_IterAllButAxis` ( ``array``, ``&dim`` ) constructs an -iterator object that is modified so that it will not iterate over the -dimension indicated by dim. The only restriction on this iterator -object, is that the :cfunc:`PyArray_Iter_GOTO1D` ( ``it``, ``ind`` ) macro -cannot be used (thus flat indexing won't work either if you pass this -object back to Python --- so you shouldn't do this). Note that the -returned object from this routine is still usually cast to -PyArrayIterObject \*. All that's been done is to modify the strides -and dimensions of the returned iterator to simulate iterating over -array[...,0,...] where 0 is placed on the -:math:`\textrm{dim}^{\textrm{th}}` dimension. 
If dim is negative, then -the dimension with the largest axis is found and used. - - -Iterating over multiple arrays ------------------------------- - -Very often, it is desireable to iterate over several arrays at the -same time. The universal functions are an example of this kind of -behavior. If all you want to do is iterate over arrays with the same -shape, then simply creating several iterator objects is the standard -procedure. For example, the following code iterates over two arrays -assumed to be the same shape and size (actually obj1 just has to have -at least as many total elements as does obj2): - -.. code-block:: c - - /* It is already assumed that obj1 and obj2 - are ndarrays of the same shape and size. - */ - iter1 = (PyArrayIterObject *)PyArray_IterNew(obj1); - if (iter1 == NULL) goto fail; - iter2 = (PyArrayIterObject *)PyArray_IterNew(obj2); - if (iter2 == NULL) goto fail; /* assume iter1 is DECREF'd at fail */ - while (iter2->index < iter2->size) { - /* process with iter1->dataptr and iter2->dataptr */ - PyArray_ITER_NEXT(iter1); - PyArray_ITER_NEXT(iter2); - } - - -Broadcasting over multiple arrays ---------------------------------- - -.. index:: - single: broadcasting - -When multiple arrays are involved in an operation, you may want to use the -same broadcasting rules that the math operations (*i.e.* the ufuncs) use. -This can be done easily using the :ctype:`PyArrayMultiIterObject`. This is -the object returned from the Python command numpy.broadcast and it is almost -as easy to use from C. The function -:cfunc:`PyArray_MultiIterNew` ( ``n``, ``...`` ) is used (with ``n`` input -objects in place of ``...`` ). The input objects can be arrays or anything -that can be converted into an array. A pointer to a PyArrayMultiIterObject is -returned. 
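The same broadcasting machinery is visible from Python as ``numpy.broadcast``, mentioned above; a short Python sketch (illustrative only, not part of the original C example) shows what the multi-iterator computes:

```python
import numpy as np

a = np.array([[1], [2], [3]])   # shape (3, 1)
b = np.array([10, 20])          # shape (2,)

# numpy.broadcast pairs up elements exactly as the C multi-iterator does
bcast = np.broadcast(a, b)
print(bcast.shape)              # broadcast shape: (3, 2)

# Iterating yields one tuple of scalars per broadcast element, in C order
pairs = [(int(x), int(y)) for x, y in bcast]
print(pairs)
```

Each tuple corresponds to one call of ``PyArray_MultiIter_NEXT`` in the C loop, with ``PyArray_MultiIter_DATA`` supplying the per-input pointers.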
Broadcasting has already been accomplished which adjusts the
-iterators so that all that needs to be done to advance to the next element in
-each array is for PyArray_ITER_NEXT to be called for each of the inputs. This
-incrementing is automatically performed by the
-:cfunc:`PyArray_MultiIter_NEXT` ( ``obj`` ) macro (which can handle a
-multiterator ``obj`` as either a :ctype:`PyArrayMultiIterObject *` or a
-:ctype:`PyObject *`). The data from input number ``i`` is available using
-:cfunc:`PyArray_MultiIter_DATA` ( ``obj``, ``i`` ) and the total (broadcasted)
-size as :cfunc:`PyArray_MultiIter_SIZE` ( ``obj`` ). An example of using this
-feature follows.
-
-.. code-block:: c
-
-    mobj = PyArray_MultiIterNew(2, obj1, obj2);
-    size = PyArray_MultiIter_SIZE(mobj);
-    while(size--) {
-        ptr1 = PyArray_MultiIter_DATA(mobj, 0);
-        ptr2 = PyArray_MultiIter_DATA(mobj, 1);
-        /* code using contents of ptr1 and ptr2 */
-        PyArray_MultiIter_NEXT(mobj);
-    }
-
-The function :cfunc:`PyArray_RemoveSmallest` ( ``multi`` ) can be used to
-take a multi-iterator object and adjust all the iterators so that
-iteration does not take place over the largest dimension (it makes
-that dimension of size 1). The code being looped over that makes use
-of the pointers will very likely also need the strides data for each
-of the iterators. This information is stored in
-multi->iters[i]->strides.
-
-.. index::
-   single: array iterator
-
-There are several examples of using the multi-iterator in the NumPy
-source code as it makes N-dimensional broadcasting-code very simple to
-write. Browse the source for more examples.
-
-.. _`sec:Creating-a-new`:
-
-Creating a new universal function
-=================================
-
-.. index::
-   pair: ufunc; adding new
-
-The umath module is a computer-generated C-module that creates many
-ufuncs. It provides a great many examples of how to create a universal
-function. Creating your own ufunc that will make use of the ufunc
-machinery is not difficult either.
Suppose you have a function that -you want to operate element-by-element over its inputs. By creating a -new ufunc you will obtain a function that handles - -- broadcasting - -- N-dimensional looping - -- automatic type-conversions with minimal memory usage - -- optional output arrays - -It is not difficult to create your own ufunc. All that is required is -a 1-d loop for each data-type you want to support. Each 1-d loop must -have a specific signature, and only ufuncs for fixed-size data-types -can be used. The function call used to create a new ufunc to work on -built-in data-types is given below. A different mechanism is used to -register ufuncs for user-defined data-types. - -.. cfunction:: PyObject *PyUFunc_FromFuncAndData( PyUFuncGenericFunction* func, - void** data, char* types, int ntypes, int nin, int nout, int identity, - char* name, char* doc, int check_return) - - *func* - - A pointer to an array of 1-d functions to use. This array must be at - least ntypes long. Each entry in the array must be a - ``PyUFuncGenericFunction`` function. This function has the following - signature. An example of a valid 1d loop function is also given. - - .. cfunction:: void loop1d(char** args, npy_intp* dimensions, - npy_intp* steps, void* data) - - *args* - - An array of pointers to the actual data for the input and output - arrays. The input arguments are given first followed by the output - arguments. - - *dimensions* - - A pointer to the size of the dimension over which this function is - looping. - - *steps* - - A pointer to the number of bytes to jump to get to the - next element in this dimension for each of the input and - output arguments. - - *data* - - Arbitrary data (extra arguments, function names, *etc.* ) - that can be stored with the ufunc and will be passed in - when it is called. - - .. 
code-block:: c - - static void - double_add(char *args, npy_intp *dimensions, npy_intp *steps, - void *extra) - { - npy_intp i; - npy_intp is1=steps[0], is2=steps[1]; - npy_intp os=steps[2], n=dimensions[0]; - char *i1=args[0], *i2=args[1], *op=args[2]; - for (i=0; i`__ . - - *arg_types* - - (optional) If given, this should contain an array of integers of at - least size ufunc.nargs containing the data-types expected by the loop - function. The data will be copied into a NumPy-managed structure so - the memory for this argument should be deleted after calling this - function. If this is NULL, then it will be assumed that all data-types - are of type usertype. - - *data* - - (optional) Specify any optional data needed by the function which will - be passed when the function is called. - - .. index:: - pair: dtype; adding new - - -Subtyping the ndarray in C -========================== - -One of the lesser-used features that has been lurking in Python since -2.2 is the ability to sub-class types in C. This facility is one of -the important reasons for basing NumPy off of the Numeric code-base -which was already in C. A sub-type in C allows much more flexibility -with regards to memory management. Sub-typing in C is not difficult -even if you have only a rudimentary understanding of how to create new -types for Python. While it is easiest to sub-type from a single parent -type, sub-typing from multiple parent types is also possible. Multiple -inheritence in C is generally less useful than it is in Python because -a restriction on Python sub-types is that they have a binary -compatible memory layout. Perhaps for this reason, it is somewhat -easier to sub-type from a single parent type. - -.. index:: - pair: ndarray; subtyping - -All C-structures corresponding to Python objects must begin with -:cmacro:`PyObject_HEAD` (or :cmacro:`PyObject_VAR_HEAD`). 
In the same -way, any sub-type must have a C-structure that begins with exactly the -same memory layout as the parent type (or all of the parent types in -the case of multiple-inheritance). The reason for this is that Python -may attempt to access a member of the sub-type structure as if it had -the parent structure ( *i.e.* it will cast a given pointer to a -pointer to the parent structure and then dereference one of it's -members). If the memory layouts are not compatible, then this attempt -will cause unpredictable behavior (eventually leading to a memory -violation and program crash). - -One of the elements in :cmacro:`PyObject_HEAD` is a pointer to a -type-object structure. A new Python type is created by creating a new -type-object structure and populating it with functions and pointers to -describe the desired behavior of the type. Typically, a new -C-structure is also created to contain the instance-specific -information needed for each object of the type as well. For example, -:cdata:`&PyArray_Type` is a pointer to the type-object table for the ndarray -while a :ctype:`PyArrayObject *` variable is a pointer to a particular instance -of an ndarray (one of the members of the ndarray structure is, in -turn, a pointer to the type- object table :cdata:`&PyArray_Type`). Finally -:cfunc:`PyType_Ready` () must be called for -every new Python type. - - -Creating sub-types ------------------- - -To create a sub-type, a similar proceedure must be followed except -only behaviors that are different require new entries in the type- -object structure. All other entires can be NULL and will be filled in -by :cfunc:`PyType_Ready` with appropriate functions from the parent -type(s). In particular, to create a sub-type in C follow these steps: - -1. If needed create a new C-structure to handle each instance of your - type. A typical C-structure would be: - - .. 
code-block:: c - - typedef _new_struct { - PyArrayObject base; - /* new things here */ - } NewArrayObject; - - Notice that the full PyArrayObject is used as the first entry in order - to ensure that the binary layout of instances of the new type is - identical to the PyArrayObject. - -2. Fill in a new Python type-object structure with pointers to new - functions that will over-ride the default behavior while leaving any - function that should remain the same unfilled (or NULL). The tp_name - element should be different. - -3. Fill in the tp_base member of the new type-object structure with a - pointer to the (main) parent type object. For multiple-inheritance, - also fill in the tp_bases member with a tuple containing all of the - parent objects in the order they should be used to define inheritance. - Remember, all parent-types must have the same C-structure for multiple - inheritance to work properly. - -4. Call :cfunc:`PyType_Ready` (). If this function - returns a negative number, a failure occurred and the type is not - initialized. Otherwise, the type is ready to be used. It is - generally important to place a reference to the new type into the - module dictionary so it can be accessed from Python. - -More information on creating sub-types in C can be learned by reading -PEP 253 (available at http://www.python.org/dev/peps/pep-0253). - - -Specific features of ndarray sub-typing ---------------------------------------- - -Some special methods and attributes are used by arrays in order to -facilitate the interoperation of sub-types with the base ndarray type. - -The __array_finalize\__ method -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. attribute:: ndarray.__array_finalize__ - - Several array-creation functions of the ndarray allow - specification of a particular sub-type to be created. This allows - sub-types to be handled seamlessly in many routines. 
When a - sub-type is created in such a fashion, however, neither the - __new_\_ method nor the __init\__ method gets called. Instead, the - sub-type is allocated and the appropriate instance-structure - members are filled in. Finally, the :obj:`__array_finalize__` - attribute is looked-up in the object dictionary. If it is present - and not None, then it can be either a CObject containing a pointer - to a :cfunc:`PyArray_FinalizeFunc` or it can be a method taking a - single argument (which could be None). - - If the :obj:`__array_finalize__` attribute is a CObject, then the pointer - must be a pointer to a function with the signature: - - .. code-block:: c - - (int) (PyArrayObject *, PyObject *) - - The first argument is the newly created sub-type. The second argument - (if not NULL) is the "parent" array (if the array was created using - slicing or some other operation where a clearly-distinguishable parent - is present). This routine can do anything it wants to. It should - return a -1 on error and 0 otherwise. - - If the :obj:`__array_finalize__` attribute is not None nor a CObject, - then it must be a Python method that takes the parent array as an - argument (which could be None if there is no parent), and returns - nothing. Errors in this method will be caught and handled. - - -The __array_priority\__ attribute -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. attribute:: ndarray.__array_priority__ - - This attribute allows simple but flexible determination of which sub- - type should be considered "primary" when an operation involving two or - more sub-types arises. In operations where different sub-types are - being used, the sub-type with the largest :obj:`__array_priority__` - attribute will determine the sub-type of the output(s). If two sub- - types have the same :obj:`__array_priority__` then the sub-type of the - first argument determines the output. 
The default - :obj:`__array_priority__` attribute returns a value of 0.0 for the base - ndarray type and 1.0 for a sub-type. This attribute can also be - defined by objects that are not sub-types of the ndarray and can be - used to determine which :obj:`__array_wrap__` method should be called for - the return output. - -The __array_wrap\__ method -^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. attribute:: ndarray.__array_wrap__ - - Any class or type can define this method which should take an ndarray - argument and return an instance of the type. It can be seen as the - opposite of the :obj:`__array__` method. This method is used by the - ufuncs (and other NumPy functions) to allow other objects to pass - through. For Python >2.4, it can also be used to write a decorator - that converts a function that works only with ndarrays to one that - works with any type with :obj:`__array__` and :obj:`__array_wrap__` methods. - -.. index:: - pair: ndarray; subtyping diff --git a/pythonPackages/numpy/doc/source/user/c-info.how-to-extend.rst b/pythonPackages/numpy/doc/source/user/c-info.how-to-extend.rst deleted file mode 100755 index 6c5e25aff6..0000000000 --- a/pythonPackages/numpy/doc/source/user/c-info.how-to-extend.rst +++ /dev/null @@ -1,641 +0,0 @@ -******************* -How to extend NumPy -******************* - -| That which is static and repetitive is boring. That which is dynamic -| and random is confusing. In between lies art. -| --- *John A. Locke* - -| Science is a differential equation. Religion is a boundary condition. -| --- *Alan Turing* - - -.. _`sec:Writing-an-extension`: - -Writing an extension module -=========================== - -While the ndarray object is designed to allow rapid computation in -Python, it is also designed to be general-purpose and satisfy a wide- -variety of computational needs. As a result, if absolute speed is -essential, there is no replacement for a well-crafted, compiled loop -specific to your application and hardware. 
This is one of the reasons -that numpy includes f2py so that an easy-to-use mechanisms for linking -(simple) C/C++ and (arbitrary) Fortran code directly into Python are -available. You are encouraged to use and improve this mechanism. The -purpose of this section is not to document this tool but to document -the more basic steps to writing an extension module that this tool -depends on. - -.. index:: - single: extension module - -When an extension module is written, compiled, and installed to -somewhere in the Python path (sys.path), the code can then be imported -into Python as if it were a standard python file. It will contain -objects and methods that have been defined and compiled in C code. The -basic steps for doing this in Python are well-documented and you can -find more information in the documentation for Python itself available -online at `www.python.org `_ . - -In addition to the Python C-API, there is a full and rich C-API for -NumPy allowing sophisticated manipulations on a C-level. However, for -most applications, only a few API calls will typically be used. If all -you need to do is extract a pointer to memory along with some shape -information to pass to another calculation routine, then you will use -very different calls, then if you are trying to create a new array- -like type or add a new data type for ndarrays. This chapter documents -the API calls and macros that are most commonly used. - - -Required subroutine -=================== - -There is exactly one function that must be defined in your C-code in -order for Python to use it as an extension module. The function must -be called init{name} where {name} is the name of the module from -Python. This function must be declared so that it is visible to code -outside of the routine. Besides adding the methods and constants you -desire, this subroutine must also contain calls to import_array() -and/or import_ufunc() depending on which C-API is needed. 
Forgetting
-to place these commands will show itself as an ugly segmentation fault
-(crash) as soon as any C-API subroutine is actually called. It is
-actually possible to have multiple init{name} functions in a single
-file in which case multiple modules will be defined by that file.
-However, there are some tricks to get that to work correctly and it is
-not covered here.
-
-A minimal ``init{name}`` method looks like:
-
-.. code-block:: c
-
-   PyMODINIT_FUNC
-   init{name}(void)
-   {
-      (void)Py_InitModule({name}, mymethods);
-      import_array();
-   }
-
-The mymethods must be an array (usually statically declared) of
-PyMethodDef structures which contain method names, actual C-functions,
-a variable indicating whether the method uses keyword arguments or
-not, and docstrings. These are explained in the next section. If you
-want to add constants to the module, then you store the returned value
-from Py_InitModule which is a module object. The most general way to
-add items to the module is to get the module dictionary using
-PyModule_GetDict(module). With the module dictionary, you can add
-whatever you like to the module manually. An easier way to add objects
-to the module is to use one of three additional Python C-API calls
-that do not require a separate extraction of the module dictionary.
-These are documented in the Python documentation, but repeated here
-for convenience:
-
-.. cfunction:: int PyModule_AddObject(PyObject* module, char* name, PyObject* value)
-
-.. cfunction:: int PyModule_AddIntConstant(PyObject* module, char* name, long value)
-
-.. cfunction:: int PyModule_AddStringConstant(PyObject* module, char* name, char* value)
-
-   All three of these functions require the *module* object (the
-   return value of Py_InitModule). The *name* is a string that
-   labels the value in the module.
Depending on which function is
-   called, the *value* argument is either a general object
-   (:cfunc:`PyModule_AddObject` steals a reference to it), an integer
-   constant, or a string constant.
-
-
-Defining functions
-==================
-
-The second argument passed in to the Py_InitModule function is a
-structure that makes it easy to define functions in the module. In
-the example given above, the mymethods structure would have been
-defined earlier in the file (usually right before the init{name}
-subroutine) to:
-
-.. code-block:: c
-
-   static PyMethodDef mymethods[] = {
-       {"nokeywordfunc", nokeyword_cfunc,
-        METH_VARARGS,
-        "Doc string"},
-       {"keywordfunc", keyword_cfunc,
-        METH_VARARGS|METH_KEYWORDS,
-        "Doc string"},
-       {NULL, NULL, 0, NULL} /* Sentinel */
-   };
-
-Each entry in the mymethods array is a :ctype:`PyMethodDef` structure
-containing 1) the Python name, 2) the C-function that implements the
-function, 3) flags indicating whether or not keywords are accepted for
-this function, and 4) the docstring for the function. Any number of
-functions may be defined for a single module by adding more entries to
-this table. The last entry must be all NULL as shown to act as a
-sentinel. Python looks for this entry to know that all of the
-functions for the module have been defined.
-
-The last thing that must be done to finish the extension module is to
-actually write the code that performs the desired functions. There are
-two kinds of functions: those that don't accept keyword arguments, and
-those that do.
-
-
-Functions without keyword arguments
------------------------------------
-
-Functions that don't accept keyword arguments should be written as:
-
-.. code-block:: c
-
-   static PyObject*
-   nokeyword_cfunc (PyObject *dummy, PyObject *args)
-   {
-       /* convert Python arguments */
-       /* do function */
-       /* return something */
-   }
-
-The dummy argument is not used in this context and can be safely
-ignored.
The *args* argument contains all of the arguments passed in -to the function as a tuple. You can do anything you want at this -point, but usually the easiest way to manage the input arguments is to -call :cfunc:`PyArg_ParseTuple` (args, format_string, -addresses_to_C_variables...) or :cfunc:`PyArg_UnpackTuple` (tuple, "name" , -min, max, ...). A good description of how to use the first function is -contained in the Python C-API reference manual under section 5.5 -(Parsing arguments and building values). You should pay particular -attention to the "O&" format which uses converter functions to go -between the Python object and the C object. All of the other format -functions can be (mostly) thought of as special cases of this general -rule. There are several converter functions defined in the NumPy C-API -that may be of use. In particular, the :cfunc:`PyArray_DescrConverter` -function is very useful to support arbitrary data-type specification. -This function transforms any valid data-type Python object into a -:ctype:`PyArray_Descr *` object. Remember to pass in the address of the -C-variables that should be filled in. - -There are lots of examples of how to use :cfunc:`PyArg_ParseTuple` -throughout the NumPy source code. The standard usage is like this: - -.. code-block:: c - - PyObject *input; - PyArray_Descr *dtype; - if (!PyArg_ParseTuple(args, "OO&", &input, - PyArray_DescrConverter, - &dtype)) return NULL; - -It is important to keep in mind that you get a *borrowed* reference to -the object when using the "O" format string. However, the converter -functions usually require some form of memory handling. In this -example, if the conversion is successful, *dtype* will hold a new -reference to a :ctype:`PyArray_Descr *` object, while *input* will hold a -borrowed reference. 
Therefore, if this conversion were mixed with
-another conversion (say to an integer) and the data-type conversion
-was successful but the integer conversion failed, then you would need
-to release the reference count to the data-type object before
-returning. A typical way to do this is to set *dtype* to ``NULL``
-before calling :cfunc:`PyArg_ParseTuple` and then use :cfunc:`Py_XDECREF`
-on *dtype* before returning.
-
-After the input arguments are processed, the code that actually does
-the work is written (likely calling other functions as needed). The
-final step of the C-function is to return something. If an error is
-encountered then ``NULL`` should be returned (making sure an error has
-actually been set). If nothing should be returned then increment the
-reference count of :cdata:`Py_None` and return it. If a single object
-should be returned then it is returned (ensuring that you own a
-reference to it first). If multiple objects should be returned then
-you need to return a tuple.
-The :cfunc:`Py_BuildValue` (format_string, c_variables...) function makes
-it easy to build tuples of Python objects from C variables. Pay
-special attention to the difference between 'N' and 'O' in the format
-string or you can easily create memory leaks. The 'O' format string
-increments the reference count of the :ctype:`PyObject *` C-variable it
-corresponds to, while the 'N' format string steals a reference to the
-corresponding :ctype:`PyObject *` C-variable. You should use 'N' if you
-have already created a reference for the object and just want to give
-that reference to the tuple. You should use 'O' if you only have a
-borrowed reference to an object and need to create one to provide for
-the tuple.
-
-
-Functions with keyword arguments
---------------------------------
-
-These functions are very similar to functions without keyword
-arguments. The only difference is that the function signature is:
-
-..
code-block:: c
-
-   static PyObject*
-   keyword_cfunc (PyObject *dummy, PyObject *args, PyObject *kwds)
-   {
-   ...
-   }
-
-The kwds argument holds a Python dictionary whose keys are the names
-of the keyword arguments and whose values are the corresponding
-keyword-argument values. This dictionary can be processed however you
-see fit. The easiest way to handle it, however, is to replace the
-:cfunc:`PyArg_ParseTuple` (args, format_string, addresses...) function with
-a call to :cfunc:`PyArg_ParseTupleAndKeywords` (args, kwds, format_string,
-char \*kwlist[], addresses...). The kwlist parameter to this function
-is a ``NULL`` -terminated array of strings providing the expected
-keyword arguments. There should be one string for each entry in the
-format_string. Using this function will raise a TypeError if invalid
-keyword arguments are passed in.
-
-For more help on this function please see section 1.8 (Keyword
-Parameters for Extension Functions) of the Extending and Embedding
-tutorial in the Python documentation.
-
-
-Reference counting
-------------------
-
-The biggest difficulty when writing extension modules is reference
-counting. It is an important reason for the popularity of f2py, weave,
-pyrex, ctypes, etc.... If you mis-handle reference counts you can get
-problems from memory-leaks to segmentation faults. The only strategy I
-know of to handle reference counts correctly is blood, sweat, and
-tears. First, you force it into your head that every Python variable
-has a reference count. Then, you understand exactly what each function
-does to the reference count of your objects, so that you can properly
-use DECREF and INCREF when you need them. Reference counting can
-really test the amount of patience and diligence you have towards your
-programming craft.
Despite the grim depiction, most cases of reference
-counting are quite straightforward with the most common difficulty
-being not using DECREF on objects before exiting early from a routine
-due to some error. In second place is the common error of not owning
-the reference on an object that is passed to a function or macro that
-is going to steal the reference ( *e.g.* :cfunc:`PyTuple_SET_ITEM`, and
-most functions that take :ctype:`PyArray_Descr` objects).
-
-.. index::
-    single: reference counting
-
-Typically you get a new reference to a variable when it is created or
-is the return value of some function (there are some prominent
-exceptions, however --- such as getting an item out of a tuple or a
-dictionary). When you own the reference, you are responsible to make
-sure that :cfunc:`Py_DECREF` (var) is called when the variable is no
-longer necessary (and no other function has "stolen" its
-reference). Also, if you are passing a Python object to a function
-that will "steal" the reference, then you need to make sure you own it
-(or use :cfunc:`Py_INCREF` to get your own reference). You will also
-encounter the notion of borrowing a reference. A function that borrows
-a reference does not alter the reference count of the object and does
-not expect to "hold on" to the reference. It's just going to use the
-object temporarily. When you use :cfunc:`PyArg_ParseTuple` or
-:cfunc:`PyArg_UnpackTuple` you receive a borrowed reference to the
-objects in the tuple and should not alter their reference count inside
-your function. With practice, you can learn to get reference counting
-right, but it can be frustrating at first.
-
-One common source of reference-count errors is the :cfunc:`Py_BuildValue`
-function. Pay careful attention to the difference between the 'N'
-format character and the 'O' format character.
If you create a new
-object in your subroutine (such as an output array), and you are
-passing it back in a tuple of return values, then you should most
-likely use the 'N' format character in :cfunc:`Py_BuildValue`. The 'O'
-character will increase the reference count by one. This will leave
-the caller with two reference counts for a brand-new array. When the
-variable is deleted and the reference count decremented by one, there
-will still be that extra reference count, and the array will never be
-deallocated. You will have a reference-counting induced memory leak.
-Using the 'N' character will avoid this situation as it will return to
-the caller an object (inside the tuple) with a single reference count.
-
-.. index::
-    single: reference counting
-
-
-
-
-Dealing with array objects
-==========================
-
-Most extension modules for NumPy will need to access the memory for an
-ndarray object (or one of its sub-classes). The easiest way to do
-this doesn't require you to know much about the internals of NumPy.
-The method is to
-
-1. Ensure you are dealing with a well-behaved array (aligned, in machine
-   byte-order and single-segment) of the correct type and number of
-   dimensions.
-
-   1. By converting it from some Python object using
-      :cfunc:`PyArray_FromAny` or a macro built on it.
-
-   2. By constructing a new ndarray of your desired shape and type
-      using :cfunc:`PyArray_NewFromDescr` or a simpler macro or function
-      based on it.
-
-
-2. Get the shape of the array and a pointer to its actual data.
-
-3. Pass the data and shape information on to a subroutine or other
-   section of code that actually performs the computation.
-
-4. If you are writing the algorithm, then I recommend that you use the
-   stride information contained in the array to access the elements of
-   the array (the :cfunc:`PyArray_GETPTR` macros make this painless).
Then,
-   you can relax your requirements so as not to force a single-segment
-   array and the data-copying that might result.
-
-Each of these sub-topics is covered in the following sub-sections.
-
-
-Converting an arbitrary sequence object
----------------------------------------
-
-The main routine for obtaining an array from any Python object that
-can be converted to an array is :cfunc:`PyArray_FromAny`. This
-function is very flexible with many input arguments. Several macros
-make it easier to use the basic function. :cfunc:`PyArray_FROM_OTF` is
-arguably the most useful of these macros for the most common uses. It
-allows you to convert an arbitrary Python object to an array of a
-specific builtin data-type ( *e.g.* float), while specifying a
-particular set of requirements ( *e.g.* contiguous, aligned, and
-writeable). The syntax is
-
-.. cfunction:: PyObject *PyArray_FROM_OTF(PyObject* obj, int typenum, int requirements)
-
-   Return an ndarray from any Python object, *obj*, that can be
-   converted to an array. The number of dimensions in the returned
-   array is determined by the object. The desired data-type of the
-   returned array is provided in *typenum* which should be one of the
-   enumerated types. The *requirements* for the returned array can be
-   any combination of standard array flags. Each of these arguments
-   is explained in more detail below. You receive a new reference to
-   the array on success. On failure, ``NULL`` is returned and an
-   exception is set.
-
-   *obj*
-
-      The object can be any Python object convertible to an ndarray.
-      If the object is already (a subclass of) the ndarray that
-      satisfies the requirements then a new reference is returned.
-      Otherwise, a new array is constructed. The contents of *obj*
-      are copied to the new array unless the array interface is used
-      so that data does not have to be copied.
Objects that can be - converted to an array include: 1) any nested sequence object, - 2) any object exposing the array interface, 3) any object with - an :obj:`__array__` method (which should return an ndarray), - and 4) any scalar object (becomes a zero-dimensional - array). Sub-classes of the ndarray that otherwise fit the - requirements will be passed through. If you want to ensure - a base-class ndarray, then use :cdata:`NPY_ENSUREARRAY` in the - requirements flag. A copy is made only if necessary. If you - want to guarantee a copy, then pass in :cdata:`NPY_ENSURECOPY` - to the requirements flag. - - *typenum* - - One of the enumerated types or :cdata:`NPY_NOTYPE` if the data-type - should be determined from the object itself. The C-based names - can be used: - - :cdata:`NPY_BOOL`, :cdata:`NPY_BYTE`, :cdata:`NPY_UBYTE`, - :cdata:`NPY_SHORT`, :cdata:`NPY_USHORT`, :cdata:`NPY_INT`, - :cdata:`NPY_UINT`, :cdata:`NPY_LONG`, :cdata:`NPY_ULONG`, - :cdata:`NPY_LONGLONG`, :cdata:`NPY_ULONGLONG`, :cdata:`NPY_DOUBLE`, - :cdata:`NPY_LONGDOUBLE`, :cdata:`NPY_CFLOAT`, :cdata:`NPY_CDOUBLE`, - :cdata:`NPY_CLONGDOUBLE`, :cdata:`NPY_OBJECT`. - - Alternatively, the bit-width names can be used as supported on the - platform. For example: - - :cdata:`NPY_INT8`, :cdata:`NPY_INT16`, :cdata:`NPY_INT32`, - :cdata:`NPY_INT64`, :cdata:`NPY_UINT8`, - :cdata:`NPY_UINT16`, :cdata:`NPY_UINT32`, - :cdata:`NPY_UINT64`, :cdata:`NPY_FLOAT32`, - :cdata:`NPY_FLOAT64`, :cdata:`NPY_COMPLEX64`, - :cdata:`NPY_COMPLEX128`. - - The object will be converted to the desired type only if it - can be done without losing precision. Otherwise ``NULL`` will - be returned and an error raised. Use :cdata:`NPY_FORCECAST` in the - requirements flag to override this behavior. - - *requirements* - - The memory model for an ndarray admits arbitrary strides in - each dimension to advance to the next element of the array. 
-      Often, however, you need to interface with code that expects a
-      C-contiguous or a Fortran-contiguous memory layout. In
-      addition, an ndarray can be misaligned (the address of an
-      element is not at an integral multiple of the size of the
-      element) which can cause your program to crash (or at least
-      work more slowly) if you try to dereference a pointer into
-      the array data. Both of these problems can be solved by
-      converting the Python object into an array that is more
-      "well-behaved" for your specific usage.
-
-      The requirements flag allows specification of the desired
-      properties of the returned array object, that is, what kind of
-      array is acceptable. If the object passed in does not satisfy
-      these requirements then a copy is made so that the returned
-      object will satisfy them. All of the flags are explained in the
-      detailed API chapter. The flags most commonly needed are
-      :cdata:`NPY_IN_ARRAY`,
-      :cdata:`NPY_OUT_ARRAY`, and :cdata:`NPY_INOUT_ARRAY`:
-
-      .. cvar:: NPY_IN_ARRAY
-
-         Equivalent to :cdata:`NPY_CONTIGUOUS` \|
-         :cdata:`NPY_ALIGNED`. This combination of flags is useful
-         for arrays that must be in C-contiguous order and aligned.
-         These kinds of arrays are usually input arrays for some
-         algorithm.
-
-      .. cvar:: NPY_OUT_ARRAY
-
-         Equivalent to :cdata:`NPY_CONTIGUOUS` \|
-         :cdata:`NPY_ALIGNED` \| :cdata:`NPY_WRITEABLE`. This
-         combination of flags is useful to specify an array that is
-         in C-contiguous order, is aligned, and can be written to
-         as well. Such an array is usually returned as output
-         (although normally such output arrays are created from
-         scratch).
-
-      .. cvar:: NPY_INOUT_ARRAY
-
-         Equivalent to :cdata:`NPY_CONTIGUOUS` \|
-         :cdata:`NPY_ALIGNED` \| :cdata:`NPY_WRITEABLE` \|
-         :cdata:`NPY_UPDATEIFCOPY`. This combination of flags is
-         useful to specify an array that will be used for both
-         input and output.
If a copy is needed, then when the - temporary is deleted (by your use of :cfunc:`Py_DECREF` at - the end of the interface routine), the temporary array - will be copied back into the original array passed in. Use - of the :cdata:`UPDATEIFCOPY` flag requires that the input - object is already an array (because other objects cannot - be automatically updated in this fashion). If an error - occurs use :cfunc:`PyArray_DECREF_ERR` (obj) on an array - with the :cdata:`NPY_UPDATEIFCOPY` flag set. This will - delete the array without causing the contents to be copied - back into the original array. - - - Other useful flags that can be OR'd as additional requirements are: - - .. cvar:: NPY_FORCECAST - - Cast to the desired type, even if it can't be done without losing - information. - - .. cvar:: NPY_ENSURECOPY - - Make sure the resulting array is a copy of the original. - - .. cvar:: NPY_ENSUREARRAY - - Make sure the resulting object is an actual ndarray and not a sub- - class. - -.. note:: - - Whether or not an array is byte-swapped is determined by the - data-type of the array. Native byte-order arrays are always - requested by :cfunc:`PyArray_FROM_OTF` and so there is no need for - a :cdata:`NPY_NOTSWAPPED` flag in the requirements argument. There - is also no way to get a byte-swapped array from this routine. - - -Creating a brand-new ndarray ----------------------------- - -Quite often new arrays must be created from within extension-module -code. Perhaps an output array is needed and you don't want the caller -to have to supply it. Perhaps only a temporary array is needed to hold -an intermediate calculation. Whatever the need there are simple ways -to get an ndarray object of whatever data-type is needed. The most -general function for doing this is :cfunc:`PyArray_NewFromDescr`. All array -creation functions go through this heavily re-used code. Because of -its flexibility, it can be somewhat confusing to use. 
As a result,
-simpler forms exist that are easier to use.
-
-.. cfunction:: PyObject *PyArray_SimpleNew(int nd, npy_intp* dims, int typenum)
-
-   This function allocates new memory and places it in an ndarray
-   with *nd* dimensions whose shape is determined by the array of
-   at least *nd* items pointed to by *dims*. The memory for the
-   array is uninitialized (unless typenum is :cdata:`PyArray_OBJECT` in
-   which case each element in the array is set to NULL). The
-   *typenum* argument allows specification of any of the builtin
-   data-types such as :cdata:`PyArray_FLOAT` or :cdata:`PyArray_LONG`. The
-   memory for the array can be set to zero if desired using
-   :cfunc:`PyArray_FILLWBYTE` (return_object, 0).
-
-.. cfunction:: PyObject *PyArray_SimpleNewFromData( int nd, npy_intp* dims, int typenum, void* data)
-
-   Sometimes, you want to wrap memory allocated elsewhere into an
-   ndarray object for downstream use. This routine makes it
-   straightforward to do that. The first three arguments are the same
-   as in :cfunc:`PyArray_SimpleNew`; the final argument is a pointer to a
-   block of contiguous memory that the ndarray should use as its
-   data-buffer which will be interpreted in C-style contiguous
-   fashion. A new reference to an ndarray is returned, but the
-   ndarray will not own its data. When this ndarray is deallocated,
-   the pointer will not be freed.
-
-   You should ensure that the provided memory is not freed while the
-   returned array is in existence. The easiest way to handle this is
-   if data comes from another reference-counted Python object. The
-   reference count on this object should be increased after the
-   pointer is passed in, and the base member of the returned ndarray
-   should point to the Python object that owns the data. Then, when
-   the ndarray is deallocated, the base-member will be DECREF'd
-   appropriately. If you want the memory to be freed as soon as the
-   ndarray is deallocated then simply set the OWNDATA flag on the
-   returned ndarray.
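The base/OWNDATA ownership rule just described can be sketched in pure Python. This is an illustrative model with invented names (``WrappedArray``, ``dealloc``), not NumPy's actual implementation:

```python
class WrappedArray:
    """Toy model of PyArray_SimpleNewFromData ownership semantics.

    If ``base`` is set, deallocation only drops the reference to the
    owning object; if ``owndata`` is set, deallocation frees the
    buffer itself.
    """

    def __init__(self, data, base=None, owndata=False):
        self.data = data          # stands in for the C data pointer
        self.base = base          # object that really owns the memory
        self.owndata = owndata    # should dealloc free the buffer?
        self.buffer_freed = False

    def dealloc(self):
        if self.owndata:
            self.buffer_freed = True  # stands in for free(data)
        elif self.base is not None:
            self.base = None          # stands in for Py_DECREF(base)

# Wrapping memory owned by another object: freeing is delegated.
owner = bytearray(64)
view = WrappedArray(owner, base=owner)
view.dealloc()
assert view.base is None and not view.buffer_freed

# With OWNDATA set instead, the wrapper frees the buffer itself.
solo = WrappedArray(bytearray(64), owndata=True)
solo.dealloc()
assert solo.buffer_freed
```

Either way, exactly one party is responsible for releasing the memory, which is the invariant the ``base`` member and the OWNDATA flag maintain.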
-
-
-Getting at ndarray memory and accessing elements of the ndarray
----------------------------------------------------------------
-
-If obj is an ndarray (:ctype:`PyArrayObject *`), then the data-area of the
-ndarray is pointed to by the void* pointer :cfunc:`PyArray_DATA` (obj) or
-the char* pointer :cfunc:`PyArray_BYTES` (obj). Remember that (in general)
-this data-area may not be aligned according to the data-type, it may
-represent byte-swapped data, and/or it may not be writeable. If the
-data area is aligned and in native byte-order, then how to get at a
-specific element of the array is determined only by the array of
-npy_intp variables, :cfunc:`PyArray_STRIDES` (obj). In particular, this
-c-array of integers shows how many **bytes** must be added to the
-current element pointer to get to the next element in each dimension.
-For arrays of up to 4 dimensions there are :cfunc:`PyArray_GETPTR{k}`
-(obj, ...) macros where {k} is the integer 1, 2, 3, or 4 that make
-using the array strides easier. The arguments ... represent {k}
-non-negative integer indices into the array. For example, suppose ``E`` is
-a 3-dimensional ndarray. A (void*) pointer to the element ``E[i,j,k]``
-is obtained as :cfunc:`PyArray_GETPTR3` (E, i, j, k).
-
-As explained previously, C-style contiguous arrays and Fortran-style
-contiguous arrays have particular striding patterns. Two array flags
-(:cdata:`NPY_C_CONTIGUOUS` and :cdata:`NPY_F_CONTIGUOUS`) indicate
-whether or not the striding pattern of a particular array matches the
-C-style contiguous pattern, the Fortran-style contiguous pattern, or
-neither. Whether or not the striding pattern matches a standard C or
-Fortran one can be tested using :cfunc:`PyArray_ISCONTIGUOUS` (obj) and
-:cfunc:`PyArray_ISFORTRAN` (obj) respectively. Most third-party
-libraries expect contiguous arrays. But, often it is not difficult to
-support general-purpose striding.
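The stride arithmetic performed by the :cfunc:`PyArray_GETPTR{k}` macros can be illustrated in pure Python. This is a sketch of the byte-offset computation only (the helper name ``getptr3`` is invented), not NumPy code:

```python
import struct

def getptr3(strides, i, j, k):
    """Byte offset of element [i, j, k], mirroring PyArray_GETPTR3:
    offset = i*strides[0] + j*strides[1] + k*strides[2]."""
    return i * strides[0] + j * strides[1] + k * strides[2]

# A C-contiguous 2x3x4 array of 8-byte doubles packed into one buffer.
itemsize = 8
strides = (3 * 4 * itemsize, 4 * itemsize, itemsize)  # C-contiguous pattern
data = struct.pack("24d", *range(24))

# Element [1, 2, 3] sits at row-major linear index 1*12 + 2*4 + 3 == 23.
off = getptr3(strides, 1, 2, 3)
assert struct.unpack_from("d", data, off)[0] == 23.0
```

The same offset formula works for any strides, contiguous or not, which is why stride-based access supports views and slices without copying.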
I encourage you to use the striding -information in your own code whenever possible, and reserve -single-segment requirements for wrapping third-party code. Using the -striding information provided with the ndarray rather than requiring a -contiguous striding reduces copying that otherwise must be made. - - -Example -======= - -.. index:: - single: extension module - -The following example shows how you might write a wrapper that accepts -two input arguments (that will be converted to an array) and an output -argument (that must be an array). The function returns None and -updates the output array. - -.. code-block:: c - - static PyObject * - example_wrapper(PyObject *dummy, PyObject *args) - { - PyObject *arg1=NULL, *arg2=NULL, *out=NULL; - PyObject *arr1=NULL, *arr2=NULL, *oarr=NULL; - - if (!PyArg_ParseTuple(args, "OOO!", &arg1, &arg2, - &PyArray_Type, &out)) return NULL; - - arr1 = PyArray_FROM_OTF(arg1, NPY_DOUBLE, NPY_IN_ARRAY); - if (arr1 == NULL) return NULL; - arr2 = PyArray_FROM_OTF(arg2, NPY_DOUBLE, NPY_IN_ARRAY); - if (arr2 == NULL) goto fail; - oarr = PyArray_FROM_OTF(out, NPY_DOUBLE, NPY_INOUT_ARRAY); - if (oarr == NULL) goto fail; - - /* code that makes use of arguments */ - /* You will probably need at least - nd = PyArray_NDIM(<..>) -- number of dimensions - dims = PyArray_DIMS(<..>) -- npy_intp array of length nd - showing length in each dim. - dptr = (double *)PyArray_DATA(<..>) -- pointer to data. - - If an error occurs goto fail. 
- */ - - Py_DECREF(arr1); - Py_DECREF(arr2); - Py_DECREF(oarr); - Py_INCREF(Py_None); - return Py_None; - - fail: - Py_XDECREF(arr1); - Py_XDECREF(arr2); - PyArray_XDECREF_ERR(oarr); - return NULL; - } diff --git a/pythonPackages/numpy/doc/source/user/c-info.python-as-glue.rst b/pythonPackages/numpy/doc/source/user/c-info.python-as-glue.rst deleted file mode 100755 index 6ce2668592..0000000000 --- a/pythonPackages/numpy/doc/source/user/c-info.python-as-glue.rst +++ /dev/null @@ -1,1522 +0,0 @@ -******************** -Using Python as glue -******************** - -| There is no conversation more boring than the one where everybody -| agrees. -| --- *Michel de Montaigne* - -| Duct tape is like the force. It has a light side, and a dark side, and -| it holds the universe together. -| --- *Carl Zwanzig* - -Many people like to say that Python is a fantastic glue language. -Hopefully, this Chapter will convince you that this is true. The first -adopters of Python for science were typically people who used it to -glue together large application codes running on super-computers. Not -only was it much nicer to code in Python than in a shell script or -Perl, in addition, the ability to easily extend Python made it -relatively easy to create new classes and types specifically adapted -to the problems being solved. From the interactions of these early -contributors, Numeric emerged as an array-like object that could be -used to pass data between these applications. - -As Numeric has matured and developed into NumPy, people have been able -to write more code directly in NumPy. Often this code is fast-enough -for production use, but there are still times that there is a need to -access compiled code. Either to get that last bit of efficiency out of -the algorithm or to make it easier to access widely-available codes -written in C/C++ or Fortran. - -This chapter will review many of the tools that are available for the -purpose of accessing code written in other compiled languages. 
There
-are many resources available for learning to call other compiled
-libraries from Python and the purpose of this Chapter is not to make
-you an expert. The main goal is to make you aware of some of the
-possibilities so that you will know what to "Google" in order to learn more.
-
-The http://www.scipy.org website also contains a great deal of useful
-information about many of these tools. For example, there is a nice
-description of using several of the tools explained in this chapter at
-http://www.scipy.org/PerformancePython. This link provides several
-ways to solve the same problem showing how to use and connect with
-compiled code to get the best performance. In the process you can get
-a taste for several of the approaches that will be discussed in this
-chapter.
-
-
-Calling other compiled libraries from Python
-============================================
-
-While Python is a great language and a pleasure to code in, its
-dynamic nature results in overhead that can cause some code ( *i.e.*
-raw computations inside of for loops) to be up to 10-100 times slower
-than equivalent code written in a static compiled language. In
-addition, it can cause memory usage to be larger than necessary as
-temporary arrays are created and destroyed during computation. For
-many types of computing needs the extra slow-down and memory
-consumption can often not be spared (at least for time- or
-memory-critical portions of your code). Therefore one of the most common
-needs is to call out from Python code to a fast, machine-code routine
-(e.g. compiled using C/C++ or Fortran). The fact that this is
-relatively easy to do is a big reason why Python is such an excellent
-high-level language for scientific and engineering programming.
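As a small taste of how direct calling compiled code can be, here is a minimal sketch using the ctypes approach discussed below, calling ``sqrt`` from the system's C math library. The library name that gets found is platform-dependent (assumed here to be a standard libm):

```python
import ctypes
import ctypes.util

# Locate and load the C math library (e.g. libm.so.6 on Linux).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# ctypes assumes int arguments and an int return value unless told
# otherwise, which would silently corrupt doubles -- declare both.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

assert libm.sqrt(9.0) == 3.0
```

Declaring ``argtypes`` and ``restype`` is the ctypes equivalent of the argument-conversion work an extension module does by hand with PyArg_ParseTuple.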
-
-There are two basic approaches to calling compiled code: writing an
-extension module that is then imported to Python using the import
-command, or calling a shared-library subroutine directly from Python
-using the ctypes module (included in the standard distribution with
-Python 2.5). The first method is the most common (but with the
-inclusion of ctypes into Python 2.5 this status may change).
-
-.. warning::
-
-   Calling C-code from Python can result in Python crashes if you are not
-   careful. None of the approaches in this chapter are immune. You have
-   to know something about the way data is handled by both NumPy and by
-   the third-party library being used.
-
-
-Hand-generated wrappers
-=======================
-
-Extension modules were discussed in Chapter `1
-<#sec-writing-an-extension>`__ . The most basic way to interface with
-compiled code is to write an extension module and construct a module
-method that calls the compiled code. For improved readability, your
-method should take advantage of the PyArg_ParseTuple call to convert
-between Python objects and C data-types. For standard C data-types
-there is probably already a built-in converter. For others you may
-need to write your own converter and use the "O&" format string which
-allows you to specify a function that will be used to perform the
-conversion from the Python object to whatever C-structures are needed.
-
-Once the conversions to the appropriate C-structures and C data-types
-have been performed, the next step in the wrapper is to call the
-underlying function. This is straightforward if the underlying
-function is in C or C++. However, in order to call Fortran code you
-must be familiar with how Fortran subroutines are called from C/C++
-using your compiler and platform.
This can vary somewhat among platforms and
-compilers (which is another reason f2py makes life much simpler for
-interfacing Fortran code) but generally involves underscore mangling
-of the name and the fact that all variables are passed by reference
-(i.e. all arguments are pointers).
-
-The advantage of the hand-generated wrapper is that you have complete
-control over how the C-library gets used and called which can lead to
-a lean and tight interface with minimal over-head. The disadvantage is
-that you have to write, debug, and maintain C-code, although most of
-it can be adapted using the time-honored technique of
-"cutting-pasting-and-modifying" from other extension modules. Because
-the procedure of calling out to additional C-code is fairly
-regimented, code-generation procedures have been developed to make
-this process easier. One of these code-generation techniques is
-distributed with NumPy and allows easy integration with Fortran and
-(simple) C code. This package, f2py, will be covered briefly in the
-next section.
-
-
-f2py
-====
-
-F2py allows you to automatically construct an extension module that
-interfaces to routines in Fortran 77/90/95 code. It has the ability to
-parse Fortran 77/90/95 code and automatically generate Python
-signatures for the subroutines it encounters, or you can guide how the
-subroutine interfaces with Python by constructing an
-interface-definition file (or modifying the f2py-produced one).
-
-.. index::
-   single: f2py
-
-Creating source for a basic extension module
---------------------------------------------
-
-Probably the easiest way to introduce f2py is to offer a simple
-example. Here is one of the subroutines contained in a file named
-:file:`add.f`:
-
-..
code-block:: none
-
-    C
-          SUBROUTINE ZADD(A,B,C,N)
-    C
-          DOUBLE COMPLEX A(*)
-          DOUBLE COMPLEX B(*)
-          DOUBLE COMPLEX C(*)
-          INTEGER N
-          DO 20 J = 1, N
-             C(J) = A(J)+B(J)
-    20    CONTINUE
-          END
-
-This routine simply adds the elements in two contiguous arrays and
-places the result in a third. The memory for all three arrays must be
-provided by the calling routine. A very basic interface to this
-routine can be automatically generated by f2py::
-
-    f2py -m add add.f
-
-You should be able to run this command assuming your search path is
-set up properly. This command will produce an extension module named
-addmodule.c in the current directory. This extension module can now be
-compiled and used from Python just like any other extension module.
-
-
-Creating a compiled extension module
-------------------------------------
-
-You can also get f2py to compile add.f and also compile its produced
-extension module, leaving only a shared-library extension file that can
-be imported from Python::
-
-    f2py -c -m add add.f
-
-This command leaves a file named add.{ext} in the current directory
-(where {ext} is the appropriate extension for a Python extension
-module on your platform --- so, pyd, *etc.* ). This module may then be
-imported from Python. It will contain a method for each subroutine in
-add (zadd, cadd, dadd, sadd). The docstring of each method contains
-information about how the module method may be called:
-
-    >>> import add
-    >>> print add.zadd.__doc__
-    zadd - Function signature:
-      zadd(a,b,c,n)
-    Required arguments:
-      a : input rank-1 array('D') with bounds (*)
-      b : input rank-1 array('D') with bounds (*)
-      c : input rank-1 array('D') with bounds (*)
-      n : input int
-
-
-Improving the basic interface
------------------------------
-
-The default interface is a very literal translation of the Fortran
-code into Python. The Fortran array arguments must now be NumPy arrays
-and the integer argument should be an integer.
The interface will
-attempt to convert all arguments to their required types (and shapes)
-and issue an error if unsuccessful. However, because it knows nothing
-about the semantics of the arguments (such as the fact that C is an
-output and n should really match the array sizes), it is possible to
-abuse this function in ways that can cause Python to crash. For
-example:
-
-    >>> add.zadd([1,2,3],[1,2],[3,4],1000)
-
-will cause a program crash on most systems. Under the covers, the
-lists are being converted to proper arrays but then the underlying add
-loop is told to cycle way beyond the borders of the allocated memory.
-
-In order to improve the interface, directives should be provided. This
-is accomplished by constructing an interface definition file. It is
-usually best to start from the interface file that f2py can produce
-(where it gets its default behavior from). To get f2py to generate the
-interface file use the -h option::
-
-    f2py -h add.pyf -m add add.f
-
-This command leaves the file add.pyf in the current directory. The
-section of this file corresponding to zadd is:
-
-.. code-block:: none
-
-    subroutine zadd(a,b,c,n) ! in :add:add.f
-       double complex dimension(*) :: a
-       double complex dimension(*) :: b
-       double complex dimension(*) :: c
-       integer :: n
-    end subroutine zadd
-
-By placing intent directives and checking code, the interface can be
-cleaned up quite a bit until the Python module method is both easier
-to use and more robust.
-
-.. code-block:: none
-
-    subroutine zadd(a,b,c,n) ! in :add:add.f
-       double complex dimension(n) :: a
-       double complex dimension(n) :: b
-       double complex intent(out),dimension(n) :: c
-       integer intent(hide),depend(a) :: n=len(a)
-    end subroutine zadd
-
-The intent directive, intent(out), is used to tell f2py that ``c`` is
-an output variable and should be created by the interface before being
-passed to the underlying code.
The intent(hide) directive tells f2py
-to not allow the user to specify the variable, ``n``, but instead to
-get it from the size of ``a``. The depend( ``a`` ) directive is
-necessary to tell f2py that the value of n depends on the input a (so
-that it won't try to create the variable n until the variable a is
-created).
-
-The new interface has the docstring:
-
-    >>> print add.zadd.__doc__
-    zadd - Function signature:
-      c = zadd(a,b)
-    Required arguments:
-      a : input rank-1 array('D') with bounds (n)
-      b : input rank-1 array('D') with bounds (n)
-    Return objects:
-      c : rank-1 array('D') with bounds (n)
-
-Now, the function can be called in a much more robust way:
-
-    >>> add.zadd([1,2,3],[4,5,6])
-    array([ 5.+0.j,  7.+0.j,  9.+0.j])
-
-Notice the automatic conversion to the correct format that occurred.
-
-
-Inserting directives in Fortran source
---------------------------------------
-
-The nice interface can also be generated automatically by placing the
-variable directives as special comments in the original Fortran code.
-Thus, if I modify the source code to contain:
-
-.. code-block:: none
-
-    C
-          SUBROUTINE ZADD(A,B,C,N)
-    C
-    CF2PY INTENT(OUT) :: C
-    CF2PY INTENT(HIDE) :: N
-    CF2PY DOUBLE COMPLEX :: A(N)
-    CF2PY DOUBLE COMPLEX :: B(N)
-    CF2PY DOUBLE COMPLEX :: C(N)
-          DOUBLE COMPLEX A(*)
-          DOUBLE COMPLEX B(*)
-          DOUBLE COMPLEX C(*)
-          INTEGER N
-          DO 20 J = 1, N
-             C(J) = A(J) + B(J)
-    20    CONTINUE
-          END
-
-Then, I can compile the extension module using::
-
-    f2py -c -m add add.f
-
-The resulting signature for the function add.zadd is exactly the same
-one that was created previously. If the original source code had
-contained A(N) instead of A(\*) and so forth with B and C, then I
-could obtain (nearly) the same interface simply by placing the
-INTENT(OUT) :: C comment line in the source code. The only difference
-is that N would be an optional input that would default to the length
-of A.
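What the improved wrapper computes can be mirrored in pure NumPy. The sketch below is for illustration only (the name ``zadd_reference`` and the shape check are mine, not part of f2py); it reproduces the behavior of the wrapped ``zadd`` — conversion of inputs to complex ('D') arrays, a hidden ``n`` taken from the length of ``a``, and an output array created for the caller — without requiring the compiled ``add`` module:

```python
import numpy as np

def zadd_reference(a, b):
    # Mirror of the f2py-wrapped ZADD: inputs are converted to
    # complex128 ('D') arrays, n is implied by len(a) (intent(hide)),
    # and the output c is created for the caller (intent(out)).
    a = np.asarray(a, dtype=np.complex128)
    b = np.asarray(b, dtype=np.complex128)
    if a.shape != b.shape:
        raise ValueError("a and b must have matching sizes")
    return a + b

print(zadd_reference([1, 2, 3], [4, 5, 6]))
```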
-
-
-A filtering example
--------------------
-
-For comparison with the other methods to be discussed, here is another
-example of a function that filters a two-dimensional array of double
-precision floating-point numbers using a fixed averaging filter. The
-advantage of using Fortran to index into multi-dimensional arrays
-should be clear from this example.
-
-.. code-block:: none
-
-          SUBROUTINE DFILTER2D(A,B,M,N)
-    C
-          DOUBLE PRECISION A(M,N)
-          DOUBLE PRECISION B(M,N)
-          INTEGER N, M
-    CF2PY INTENT(OUT) :: B
-    CF2PY INTENT(HIDE) :: N
-    CF2PY INTENT(HIDE) :: M
-          DO 20 I = 2,M-1
-             DO 40 J = 2,N-1
-                B(I,J) = A(I,J) +
-         $           (A(I-1,J)+A(I+1,J) +
-         $            A(I,J-1)+A(I,J+1) )*0.5D0 +
-         $           (A(I-1,J-1) + A(I-1,J+1) +
-         $            A(I+1,J-1) + A(I+1,J+1))*0.25D0
-    40       CONTINUE
-    20    CONTINUE
-          END
-
-This code can be compiled and linked into an extension module named
-filter using::
-
-    f2py -c -m filter filter.f
-
-This will produce an extension module named filter.so in the current
-directory with a method named dfilter2d that returns a filtered
-version of the input.
-
-
-Calling f2py from Python
-------------------------
-
-The f2py program is written in Python and can be run from inside your
-module. This provides a facility that is somewhat similar to the use
-of weave.ext_tools described below. An example of the final interface
-executed using Python code is:
-
-.. code-block:: python
-
-    import numpy.f2py as f2py
-    fid = open('add.f')
-    source = fid.read()
-    fid.close()
-    f2py.compile(source, modulename='add')
-    import add
-
-The source string can be any valid Fortran code. If you want to save
-the extension-module source code then a suitable file name can be
-provided by the source_fn keyword to the compile function.
-
-
-Automatic extension module generation
--------------------------------------
-
-If you want to distribute your f2py extension module, then you only
-need to include the .pyf file and the Fortran code.
The distutils
-extensions in NumPy allow you to define an extension module entirely
-in terms of this interface file. A valid setup.py file allowing
-distribution of the add.f module (as part of the package f2py_examples
-so that it would be loaded as f2py_examples.add) is:
-
-.. code-block:: python
-
-    def configuration(parent_package='', top_path=None):
-        from numpy.distutils.misc_util import Configuration
-        config = Configuration('f2py_examples',parent_package, top_path)
-        config.add_extension('add', sources=['add.pyf','add.f'])
-        return config
-
-    if __name__ == '__main__':
-        from numpy.distutils.core import setup
-        setup(**configuration(top_path='').todict())
-
-Installation of the new package is easy using::
-
-    python setup.py install
-
-assuming you have the proper permissions to write to the main
-site-packages directory for the version of Python you are using. For
-the resulting package to work, you need to create a file named
-__init__.py (in the same directory as add.pyf). Notice the extension
-module is defined entirely in terms of the "add.pyf" and "add.f"
-files. The conversion of the .pyf file to a .c file is handled by
-numpy.distutils.
-
-
-Conclusion
-----------
-
-The interface definition file (.pyf) is how you can fine-tune the
-interface between Python and Fortran. There is decent documentation
-for f2py found in the numpy/f2py/docs directory wherever NumPy is
-installed on your system (usually under site-packages). There is also
-more information on using f2py (including how to use it to wrap C
-codes) at http://www.scipy.org/Cookbook under the "Using NumPy with
-Other Languages" heading.
-
-The f2py method of linking compiled code is currently the most
-sophisticated and integrated approach. It allows clean separation of
-Python from compiled code while still allowing for separate
-distribution of the extension module. The only drawback is that it
-requires the existence of a Fortran compiler in order for a user to
-install the code.
However, with the existence of the free
-compilers g77, gfortran, and g95, as well as high-quality commercial
-compilers, this restriction is not particularly onerous. In my
-opinion, Fortran is still the easiest way to write fast and clear code
-for scientific computing. It handles complex numbers and
-multi-dimensional indexing in the most straightforward way. Be aware,
-however, that some Fortran compilers will not be able to optimize code
-as well as good hand-written C-code.
-
-.. index::
-   single: f2py
-
-
-weave
-=====
-
-Weave is a scipy package that can be used to automate the process of
-extending Python with C/C++ code. It can be used to speed up
-evaluation of an array expression that would otherwise create
-temporary variables, to directly "inline" C/C++ code into Python, or
-to create a fully-named extension module. You must either install
-scipy or get the weave package separately and install it using the
-standard python setup.py install. You must also have a C/C++ compiler
-installed and usable by Python distutils in order to use weave.
-
-.. index::
-   single: weave
-
-Somewhat dated, but still useful, documentation for weave can be found
-at http://www.scipy.org/Weave. There are also many examples found in
-the examples directory which is installed under the weave directory in
-the place where weave is installed on your system.
-
-
-Speed up code involving arrays (also see scipy.numexpr)
--------------------------------------------------------
-
-This is the easiest way to use weave and requires minimal changes to
-your Python code. It involves placing quotes around the expression of
-interest and calling weave.blitz. Weave will parse the code and
-generate C++ code using Blitz C++ arrays. It will then compile the
-code and catalog the shared library so that the next time this exact
-string is asked for (and the array types are the same), the
-already-compiled shared library will be loaded and used.
Because Blitz makes
-extensive use of C++ templating, it can take a long time to compile
-the first time. After that, however, the code should evaluate more
-quickly than the equivalent NumPy expression. This is especially true
-if your array sizes are large and the expression would require NumPy
-to create several temporaries. Only expressions involving basic
-arithmetic operations and basic array slicing can be converted to
-Blitz C++ code.
-
-For example, consider the expression::
-
-    d = 4*a + 5*a*b + 6*b*c
-
-where a, b, and c are all arrays of the same type and shape. When the
-data-type is double-precision and the size is 1000x1000, this
-expression takes about 0.5 seconds to compute on a 1.1 GHz AMD Athlon
-machine. When this expression is executed instead using blitz:
-
-.. code-block:: python
-
-    d = empty(a.shape, 'd'); weave.blitz(expr)
-
-execution time is only about 0.20 seconds (about 0.14 seconds spent in
-weave and the rest in allocating space for d). Thus, we've sped up the
-code by a factor of 2 using only a simple command (weave.blitz). Your
-mileage may vary, but factors of 2-8 speed-ups are possible with this
-very simple technique.
-
-If you are interested in using weave in this way, then you should also
-look at scipy.numexpr, which is another similar way to speed up
-expressions by eliminating the need for temporary variables. Using
-numexpr does not require a C/C++ compiler.
-
-
-Inline C-code
--------------
-
-Probably the most widely-used method of employing weave is to
-"in-line" C/C++ code into Python in order to speed up a time-critical
-section of Python code. In this method of using weave, you define a
-string containing useful C-code and then pass it to the function
-**weave.inline** ( ``code_string``, ``variables`` ), where
-code_string is a string of valid C/C++ code and variables is a list of
-variables that should be passed in from Python.
The C/C++ code should
-refer to the variables with the same names as they are defined in
-Python. If weave.inline should return anything, then the special value
-return_val should be set to whatever object should be returned. The
-following example shows how to use weave on basic Python objects:
-
-.. code-block:: python
-
-    code = r"""
-    int i;
-    py::tuple results(2);
-    for (i=0; i<2; i++) {
-        results[i] = 3*i;
-    }
-    return_val = results;
-    """
-    res = weave.inline(code,[])
-
-
-pyrex
-=====
-
-Pyrex is a way to write compiled extension modules in a language that
-is very close to Python. Here is a Pyrex file implementing the same
-zadd function wrapped earlier with f2py:
-
-.. code-block:: none
-
-    cimport c_numpy
-    from c_numpy cimport import_array, ndarray, npy_intp, \
-        npy_cdouble, NPY_CDOUBLE
-
-    #We need to initialize NumPy
-    import_array()
-
-    def zadd(object ao, object bo):
-        cdef ndarray c, a, b
-        cdef npy_intp i
-        a = c_numpy.PyArray_ContiguousFromAny(ao, NPY_CDOUBLE, 1, 1)
-        b = c_numpy.PyArray_ContiguousFromAny(bo, NPY_CDOUBLE, 1, 1)
-        c = c_numpy.PyArray_SimpleNew(a.nd, a.dimensions, \
-                a.descr.type_num)
-        for i from 0 <= i < a.dimensions[0]:
-            (<npy_cdouble *>c.data)[i].real = \
-                (<npy_cdouble *>a.data)[i].real + \
-                (<npy_cdouble *>b.data)[i].real
-            (<npy_cdouble *>c.data)[i].imag = \
-                (<npy_cdouble *>a.data)[i].imag + \
-                (<npy_cdouble *>b.data)[i].imag
-        return c
-
-This module shows use of the ``cimport`` statement to load the
-definitions from the c_numpy.pxd file. As shown, both versions of the
-import statement are supported. It also shows use of the NumPy C-API
-to construct NumPy arrays from arbitrary input objects. The array c is
-created using PyArray_SimpleNew. Then the c-array is filled by
-addition. Casting to a particular data-type is accomplished using
-the <cast \*> syntax. Pointers are de-referenced with bracket notation
-and members of structures are accessed using '.' notation even if the
-object is technically a pointer to a structure. The use of the
-special for loop construct ensures that the underlying code will have
-a similar C-loop so the addition calculation will proceed quickly.
-Notice that we have not checked for NULL after calling to the C-API
---- a cardinal sin when writing C-code. For routines that return
-Python objects, Pyrex inserts the checks for NULL into the C-code for
-you and returns with failure if need be. There is also a way to get
-Pyrex to automatically check for exceptions when you call functions
-that don't return Python objects. See the documentation of Pyrex for
-details.
-
-
-Pyrex-filter
-------------
-
-The two-dimensional example we created using weave is a bit uglier to
-implement in Pyrex because two-dimensional indexing using Pyrex is not
-as simple. But, it is straightforward (and possibly faster because of
-pre-computed indices). Here is the Pyrex file I named image.pyx.
-
-.. code-block:: none
-
-    cimport c_numpy
-    from c_numpy cimport import_array, ndarray, npy_intp, \
-        NPY_DOUBLE, NPY_CDOUBLE, \
-        NPY_FLOAT, NPY_CFLOAT, NPY_ALIGNED
-
-    #We need to initialize NumPy
-    import_array()
-    def filter(object ao):
-        cdef ndarray a, b
-        cdef npy_intp i, j, M, N, oS
-        cdef npy_intp r,rm1,rp1,c,cm1,cp1,S0,S1
-        cdef double value
-        # Require an ALIGNED array
-        # (but not necessarily contiguous)
-        # We will use strides to access the elements.
-        a = c_numpy.PyArray_FROMANY(ao, NPY_DOUBLE, \
-                2, 2, NPY_ALIGNED)
-        b = c_numpy.PyArray_SimpleNew(a.nd, a.dimensions, \
-                a.descr.type_num)
-        M = a.dimensions[0]
-        N = a.dimensions[1]
-        S0 = a.strides[0]
-        S1 = a.strides[1]
-        for i from 1 <= i < M-1:
-            r = i*S0
-            rm1 = r-S0
-            rp1 = r+S0
-            oS = i*N
-            for j from 1 <= j < N-1:
-                c = j*S1
-                cm1 = c-S1
-                cp1 = c+S1
-                (<double *>b.data)[oS+j] = \
-                    (<double *>(a.data+r+c))[0] + \
-                    ((<double *>(a.data+rm1+c))[0] + \
-                     (<double *>(a.data+rp1+c))[0] + \
-                     (<double *>(a.data+r+cm1))[0] + \
-                     (<double *>(a.data+r+cp1))[0])*0.5 + \
-                    ((<double *>(a.data+rm1+cm1))[0] + \
-                     (<double *>(a.data+rp1+cm1))[0] + \
-                     (<double *>(a.data+rp1+cp1))[0] + \
-                     (<double *>(a.data+rm1+cp1))[0])*0.25
-        return b
-
-This 2-d averaging filter runs quickly because the loop is in C and
-the pointer computations are done only as needed. However, it is not
-particularly easy to understand what is happening. A 2-d image, ``im``,
-can be filtered very quickly using:
-
-.. code-block:: python
-
-    import image
-    out = image.filter(im)
-
-
-Conclusion
-----------
-
-There are several disadvantages of using Pyrex:
-
-1. The syntax for Pyrex can get a bit bulky, and it can be confusing at
-   first to understand what kind of objects you are getting and how to
-   interface them with C-like constructs.
-
-2. Inappropriate Pyrex syntax or incorrect calls to C-code or type
-   mismatches can result in failures such as
-
-   1. Pyrex failing to generate the extension module source code,
-
-   2. Compiler failure while generating the extension module binary due to
-      incorrect C syntax,
-
-   3. 
Python failure when trying to use the module.
-
-
-3. It is easy to lose a clean separation between Python and C which makes
-   re-using your C-code for other non-Python-related projects more
-   difficult.
-
-4. Multi-dimensional arrays are "bulky" to index (appropriate macros
-   may be able to fix this).
-
-5. The C-code generated by Pyrex is hard to read and modify (and typically
-   compiles with annoying but harmless warnings).
-
-Writing a good Pyrex extension module still takes a bit of effort
-because not only does it require (a little) familiarity with C, but
-also with Pyrex's brand of Python-mixed-with-C. One big advantage of
-Pyrex-generated extension modules is that they are easy to distribute
-using distutils. In summary, Pyrex is a very capable tool for either
-gluing C-code or generating an extension module quickly and should not
-be overlooked. It is especially useful for people who can't or won't
-write C-code or Fortran code. But, if you are already able to write
-simple subroutines in C or Fortran, then I would use one of the other
-approaches such as f2py (for Fortran), ctypes (for C shared
-libraries), or weave (for inline C-code).
-
-.. index::
-   single: pyrex
-
-
-
-
-ctypes
-======
-
-Ctypes is a Python extension module (downloaded separately for Python
-<2.5 and included with Python 2.5) that allows you to call an
-arbitrary function in a shared library directly from Python. This
-approach allows you to interface with C-code directly from Python.
-This opens up an enormous number of libraries for use from Python. The
-drawback, however, is that coding mistakes can lead to ugly program
-crashes very easily (just as can happen in C) because there is little
-type or bounds checking done on the parameters. This is especially
-true when array data is passed in as a pointer to a raw memory
-location. It is then your responsibility to ensure that the
-subroutine does not access memory outside the actual array area.
But, if you don't
-mind living a little dangerously, ctypes can be an effective tool for
-quickly taking advantage of a large shared library (or writing
-extended functionality in your own shared library).
-
-.. index::
-   single: ctypes
-
-Because the ctypes approach exposes a raw interface to the compiled
-code, it is not always tolerant of user mistakes. Robust use of the
-ctypes module typically involves an additional layer of Python code in
-order to check the data types and array bounds of objects passed to
-the underlying subroutine. This additional layer of checking (not to
-mention the conversion from ctypes objects to C data-types that ctypes
-itself performs) will make the interface slower than a hand-written
-extension-module interface. However, this overhead should be negligible
-if the C-routine being called is doing any significant amount of work.
-If you are a great Python programmer with weak C skills, ctypes is an
-easy way to write a useful interface to a (shared) library of compiled
-code.
-
-To use ctypes you must
-
-1. Have a shared library.
-
-2. Load the shared library.
-
-3. Convert the Python objects to ctypes-understood arguments.
-
-4. Call the function from the library with the ctypes arguments.
-
-
-Having a shared library
------------------------
-
-There are several requirements for a shared library that can be used
-with ctypes that are platform specific. This guide assumes you have
-some familiarity with making a shared library on your system (or
-simply have a shared library available to you). Items to remember are:
-
-- A shared library must be compiled in a special way ( *e.g.* using
-  the -shared flag with gcc).
-
-- On some platforms (*e.g.* Windows), a shared library requires a
-  .def file that specifies the functions to be exported. For example,
-  a mylib.def file might contain:
-
-  ::
-
-    LIBRARY mylib.dll
-    EXPORTS
-    cool_function1
-    cool_function2
-
-  Alternatively, you may be able to use the storage-class specifier
-  __declspec(dllexport) in the C-definition of the function to avoid the
-  need for this .def file.
-
-There is no standard way in Python distutils to create a standard
-shared library (an extension module is a "special" shared library
-Python understands) in a cross-platform manner. Thus, a big
-disadvantage of ctypes at the time of writing this book is that it is
-difficult to distribute, in a cross-platform manner, a Python extension
-that uses ctypes and includes your own code which should be compiled
-as a shared library on the user's system.
-
-
-Loading the shared library
---------------------------
-
-A simple, but robust way to load the shared library is to get the
-absolute path name and load it using the cdll object of ctypes:
-
-.. code-block:: python
-
-    lib = ctypes.cdll[<full_path_name>]
-
-However, on Windows accessing an attribute of the cdll method will
-load the first DLL by that name found in the current directory or on
-the PATH. Loading the absolute path name requires a little finesse for
-cross-platform work since the extension of shared libraries varies.
-There is a ``ctypes.util.find_library`` utility available that can
-simplify the process of finding the library to load, but it is not
-foolproof. Complicating matters, different platforms have different
-default extensions used by shared libraries (e.g. .dll -- Windows, .so
--- Linux, .dylib -- Mac OS X). This must also be taken into account if
-you are using ctypes to wrap code that needs to work on several
-platforms.
-
-NumPy provides a convenience function called
-:func:`ctypeslib.load_library` (name, path). This function takes the name
-of the shared library (including any prefix like 'lib' but excluding
-the extension) and a path where the shared library can be located.
It
-returns a ctypes library object or raises an OSError if the library
-cannot be found, or raises an ImportError if the ctypes module is not
-available. (Windows users: the ctypes library object loaded using
-:func:`load_library` is always loaded assuming cdecl calling convention.
-See the ctypes documentation under ctypes.windll and/or ctypes.oledll
-for ways to load libraries under other calling conventions.)
-
-The functions in the shared library are available as attributes of the
-ctypes library object (returned from :func:`ctypeslib.load_library`) or
-as items using ``lib['func_name']`` syntax. The latter method for
-retrieving a function name is particularly useful if the function name
-contains characters that are not allowable in Python variable names.
-
-
-Converting arguments
---------------------
-
-Python ints/longs, strings, and unicode objects are automatically
-converted as needed to equivalent ctypes arguments. The None object is
-also converted automatically to a NULL pointer. All other Python
-objects must be converted to ctypes-specific types. There are two ways
-around this restriction that allow ctypes to integrate with other
-objects.
-
-1. Don't set the argtypes attribute of the function object and define an
-   :obj:`_as_parameter_` method for the object you want to pass in. The
-   :obj:`_as_parameter_` method must return a Python int which will be passed
-   directly to the function.
-
-2. Set the argtypes attribute to a list whose entries contain objects
-   with a classmethod named from_param that knows how to convert your
-   object to an object that ctypes can understand (an int/long, string,
-   unicode, or object with the :obj:`_as_parameter_` attribute).
-
-NumPy uses both methods with a preference for the second method
-because it can be safer. The ctypes attribute of the ndarray returns
-an object that has an _as_parameter\_ attribute which returns an
-integer representing the address of the ndarray to which it is
-associated.
As a result, one can pass this ctypes attribute object
-directly to a function expecting a pointer to the data in your
-ndarray. The caller must be sure that the ndarray object is of the
-correct type, shape, and has the correct flags set, or risk nasty
-crashes if the data-pointer to an inappropriate array is passed in.
-
-To implement the second method, NumPy provides the class-factory
-function :func:`ndpointer` in the :mod:`ctypeslib` module. This
-class-factory function produces an appropriate class that can be
-placed in an argtypes attribute entry of a ctypes function. The class
-will contain a from_param method which ctypes will use to convert any
-ndarray passed in to the function to a ctypes-recognized object. In
-the process, the conversion will perform checking on any properties of
-the ndarray that were specified by the user in the call to :func:`ndpointer`.
-Aspects of the ndarray that can be checked include the data-type, the
-number-of-dimensions, the shape, and/or the state of the flags on any
-array passed. The return value of the from_param method is the ctypes
-attribute of the array which (because it contains the _as_parameter\_
-attribute pointing to the array data area) can be used by ctypes
-directly.
-
-The ctypes attribute of an ndarray is also endowed with additional
-attributes that may be convenient when passing additional information
-about the array into a ctypes function. The attributes **data**,
-**shape**, and **strides** can provide ctypes-compatible types
-corresponding to the data-area, the shape, and the strides of the
-array. The data attribute returns a ``c_void_p`` representing a
-pointer to the data area. The shape and strides attributes each return
-an array of ctypes integers (or None representing a NULL pointer, if a
-0-d array). The base ctype of the array is a ctype integer of the same
-size as a pointer on the platform. There are also methods
-data_as({ctype}), shape_as(), and strides_as().
These return the
-data as a ctype object of your choice and the shape/strides arrays
-using an underlying base type of your choice. For convenience, the
-**ctypeslib** module also contains **c_intp** as a ctypes integer
-data-type whose size is the same as the size of ``c_void_p`` on the
-platform (its value is None if ctypes is not installed).
-
-
-Calling the function
---------------------
-
-The function is accessed as an attribute of or an item from the loaded
-shared-library. Thus, if "./mylib.so" has a function named
-"cool_function1", I could access this function either as:
-
-.. code-block:: python
-
-    lib = numpy.ctypeslib.load_library('mylib','.')
-    func1 = lib.cool_function1  # or equivalently
-    func1 = lib['cool_function1']
-
-In ctypes, the return-value of a function is set to be 'int' by
-default. This behavior can be changed by setting the restype attribute
-of the function. Use None for the restype if the function has no
-return value ('void'):
-
-.. code-block:: python
-
-    func1.restype = None
-
-As previously discussed, you can also set the argtypes attribute of
-the function in order to have ctypes check the types of the input
-arguments when the function is called. Use the :func:`ndpointer` factory
-function to generate a ready-made class for data-type, shape, and
-flags checking on your new function. The :func:`ndpointer` function has the
-signature
-
-.. function:: ndpointer(dtype=None, ndim=None, shape=None, flags=None)
-
-   Keyword arguments with the value ``None`` are not checked.
-   Specifying a keyword enforces checking of that aspect of the
-   ndarray on conversion to a ctypes-compatible object. The dtype
-   keyword can be any object understood as a data-type object. The
-   ndim keyword should be an integer, and the shape keyword should be
-   an integer or a sequence of integers. The flags keyword specifies
-   the minimal flags that are required on any array passed in.
This
-   can be specified as a string of comma-separated requirements, an
-   integer indicating the requirement bits OR'd together, or a flags
-   object returned from the flags attribute of an array with the
-   necessary requirements.
-
-Using an ndpointer class in the argtypes attribute can make it
-significantly safer to call a C-function using ctypes and the
-data-area of an ndarray. You may still want to wrap the function in an
-additional Python wrapper to make it user-friendly (hiding some
-obvious arguments and making some arguments output arguments). In this
-process, the **requires** function in NumPy may be useful to return the right
-kind of array from a given input.
-
-
-Complete example
-----------------
-
-In this example, I will show how the addition function and the filter
-function implemented previously using the other approaches can be
-implemented using ctypes. First, the C-code which implements the
-algorithms contains the functions zadd, dadd, sadd, cadd, and
-dfilter2d. The zadd function is:
-
-.. code-block:: c
-
-    /* Add arrays of contiguous data */
-    typedef struct {double real; double imag;} cdouble;
-    typedef struct {float real; float imag;} cfloat;
-    void zadd(cdouble *a, cdouble *b, cdouble *c, long n)
-    {
-        while (n--) {
-            c->real = a->real + b->real;
-            c->imag = a->imag + b->imag;
-            a++; b++; c++;
-        }
-    }
-
-with similar code for cadd, dadd, and sadd that handles complex float,
-double, and float data-types, respectively:
-
-.. code-block:: c
-
-    void cadd(cfloat *a, cfloat *b, cfloat *c, long n)
-    {
-        while (n--) {
-            c->real = a->real + b->real;
-            c->imag = a->imag + b->imag;
-            a++; b++; c++;
-        }
-    }
-    void dadd(double *a, double *b, double *c, long n)
-    {
-        while (n--) {
-            *c++ = *a++ + *b++;
-        }
-    }
-    void sadd(float *a, float *b, float *c, long n)
-    {
-        while (n--) {
-            *c++ = *a++ + *b++;
-        }
-    }
-
-The code.c file also contains the function dfilter2d:
-
-.. 
code-block:: c
-
-    /* Assumes b is contiguous and
-       a has strides that are multiples of sizeof(double)
-    */
-    void
-    dfilter2d(double *a, double *b, int *astrides, int *dims)
-    {
-        int i, j, M, N, S0, S1;
-        int r, c, rm1, rp1, cp1, cm1;
-
-        M = dims[0]; N = dims[1];
-        S0 = astrides[0]/sizeof(double);
-        S1 = astrides[1]/sizeof(double);
-        for (i=1; i<M-1; i++) {
-            r = i*S0;
-            rp1 = r+S0;
-            rm1 = r-S0;
-            for (j=1; j<N-1; j++) {
-                c = j*S1;
-                cp1 = c+S1;
-                cm1 = c-S1;
-                b[i*N+j] = a[r+c] +
-                    (a[rp1+c] + a[rm1+c] +
-                     a[r+cp1] + a[r+cm1])*0.5 +
-                    (a[rp1+cp1] + a[rp1+cm1] +
-                     a[rm1+cp1] + a[rm1+cm1])*0.25;
-            }
-        }
-    }
-
-
-Instant
--------
-
-Instant builds an extension module from a string of C-code. Here is
-how the addition example looks when written for Instant:
-
-.. code-block:: python
-
-    s = """
-    PyObject* add(PyObject* a_, PyObject* b_){
-        PyArrayObject* a = (PyArrayObject*) a_;
-        PyArrayObject* b = (PyArrayObject*) b_;
-        int n = a->dimensions[0];
-        int dims[1];
-        dims[0] = n;
-        PyArrayObject* ret;
-        ret = (PyArrayObject*) PyArray_FromDims(1, dims, NPY_DOUBLE);
-        int i;
-        char *aj=a->data;
-        char *bj=b->data;
-        double *retj = (double *)ret->data;
-        for (i=0; i < n; i++) {
-            *retj++ = *((double *)aj) + *((double *)bj);
-            aj += a->strides[0];
-            bj += b->strides[0];
-        }
-        return (PyObject *)ret;
-    }
-    """
-    import Instant, numpy
-    ext = Instant.Instant()
-    ext.create_extension(code=s, headers=["numpy/arrayobject.h"],
-                         include_dirs=[numpy.get_include()],
-                         init_code='import_array();', module="test2b_ext")
-    import test2b_ext
-    a = numpy.arange(1000)
-    b = numpy.arange(1000)
-    d = test2b_ext.add(a,b)
-
-Except perhaps for the dependence on SWIG, Instant is a
-straightforward utility for writing extension modules.
-
-
-PyInline
---------
-
-This is a much older module that allows automatic building of
-extension modules so that C-code can be included with Python code.
-Its latest release (version 0.03) was in 2001, and it appears that it
-is not being updated.
-
-
-PyFort
-------
-
-PyFort is a nice tool for wrapping Fortran and Fortran-like C-code
-into Python with support for Numeric arrays. It was written by Paul
-Dubois, a distinguished computer scientist and the very first
-maintainer of Numeric (now retired). It is worth mentioning in the
-hopes that somebody will update PyFort to work with NumPy arrays as
-well, which now support either Fortran or C-style contiguous arrays.
diff --git a/pythonPackages/numpy/doc/source/user/c-info.rst b/pythonPackages/numpy/doc/source/user/c-info.rst
deleted file mode 100755
index 086f97c8db..0000000000
--- a/pythonPackages/numpy/doc/source/user/c-info.rst
+++ /dev/null
@@ -1,9 +0,0 @@
-#################
-Using Numpy C-API
-#################
-
-.. toctree::
-
-   c-info.how-to-extend
-   c-info.python-as-glue
-   c-info.beyond-basics
diff --git a/pythonPackages/numpy/doc/source/user/howtofind.rst b/pythonPackages/numpy/doc/source/user/howtofind.rst
deleted file mode 100755
index 00ed5daa70..0000000000
--- a/pythonPackages/numpy/doc/source/user/howtofind.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-*************************
-How to find documentation
-*************************
-
-.. seealso:: :ref:`Numpy-specific help functions `
-
-.. automodule:: numpy.doc.howtofind
diff --git a/pythonPackages/numpy/doc/source/user/index.rst b/pythonPackages/numpy/doc/source/user/index.rst
deleted file mode 100755
index 022efcaeb4..0000000000
--- a/pythonPackages/numpy/doc/source/user/index.rst
+++ /dev/null
@@ -1,33 +0,0 @@
-.. _user:
-
-################
-NumPy User Guide
-################
-
-This guide is intended as an introductory overview of NumPy and
-explains how to install and make use of the most important features of
-NumPy. For detailed reference documentation of the functions and
-classes contained in the package, see the :ref:`reference`.
-
-.. warning::
-
-   This "User Guide" is still a work in progress; some of the material
-   is not organized, and several aspects of NumPy are not yet covered
-   in sufficient detail. We are an open source community continually
-   working to improve the documentation and eagerly encourage interested
-   parties to contribute. For information on how to do so, please visit
-   the NumPy `doc wiki `_.
-
-   More documentation for NumPy can be found on the `numpy.org
-   `__ website.
-
-   Thanks!
-
-..
toctree::
-   :maxdepth: 2
-
-   introduction
-   basics
-   performance
-   misc
-   c-info
diff --git a/pythonPackages/numpy/doc/source/user/install.rst b/pythonPackages/numpy/doc/source/user/install.rst
deleted file mode 100755
index aa16546d76..0000000000
--- a/pythonPackages/numpy/doc/source/user/install.rst
+++ /dev/null
@@ -1,171 +0,0 @@
-*****************************
-Building and installing NumPy
-*****************************
-
-Binary installers
-=================
-
-In most use cases the best way to install NumPy on your system is by using an
-installable binary package for your operating system.
-
-Windows
--------
-
-Good solutions for Windows are The Enthought Python Distribution `(EPD)
-`_ (which provides binary
-installers for Windows, OS X and Redhat) and `Python (x, y)
-`_. Both of these packages include Python, NumPy and
-many additional packages.
-
-A lightweight alternative is to download the Python
-installer from `www.python.org `_ and the NumPy
-installer for your Python version from the Sourceforge `download site `_
-
-The NumPy installer includes binaries for different CPUs (without SSE
-instructions, with SSE2 or with SSE3) and installs the correct one
-automatically. If needed, this can be bypassed from the command line with ::
-
-    numpy-<1.y.z>-superpack-win32.exe /arch nosse
-
-or 'sse2' or 'sse3' instead of 'nosse'.
-
-Linux
------
-
-Most of the major distributions provide packages for NumPy, but these can lag
-behind the most recent NumPy release. Pre-built binary packages for Ubuntu are
-available on the `scipy ppa
-`_. Redhat binaries are
-available in the `EPD `_.
-
-Mac OS X
---------
-
-A universal binary installer for NumPy is available from the `download site
-`_. The `EPD `_
-provides NumPy binaries.
-
-Building from source
-====================
-
-A general overview of building NumPy from source is given here, with detailed
-instructions for specific platforms given separately.
-
-Prerequisites
--------------
-
-Building NumPy requires the following software installed:
-
-1) Python 2.4.x, 2.5.x or 2.6.x
-
-   On Debian and derivative (Ubuntu): python, python-dev
-
-   On Windows: the official python installer at
-   `www.python.org `_ is enough
-
-   Make sure that the Python package distutils is installed before
-   continuing. For example, in Debian GNU/Linux, distutils is included
-   in the python-dev package.
-
-   Python must also be compiled with the zlib module enabled.
-
-2) Compilers
-
-   To build any extension modules for Python, you'll need a C compiler.
-   Various NumPy modules use FORTRAN 77 libraries, so you'll also need a
-   FORTRAN 77 compiler installed.
-
-   Note that NumPy is developed mainly using GNU compilers. Compilers from
-   other vendors such as Intel, Absoft, Sun, NAG, Compaq, Vast, Portland,
-   Lahey, HP, IBM, Microsoft are only supported in the form of community
-   feedback, and may not work out of the box. GCC 3.x (and later) compilers
-   are recommended.
-
-3) Linear Algebra libraries
-
-   NumPy does not require any external linear algebra libraries to be
-   installed. However, if these are available, NumPy's setup script can detect
-   them and use them for building. A number of different LAPACK library setups
-   can be used, including optimized LAPACK libraries such as ATLAS, MKL or the
-   Accelerate/vecLib framework on OS X.
-
-FORTRAN ABI mismatch
---------------------
-
-The two most popular open source Fortran compilers are g77 and gfortran.
-Unfortunately, they are not ABI compatible, which means that you should
-avoid mixing libraries built with one with libraries built with the other.
-In particular, if your blas/lapack/atlas is built with g77, you *must* use
-g77 when building numpy and scipy; conversely, if your atlas is built with
-gfortran, you *must* build numpy/scipy with gfortran. The same applies to
-most other cases where different Fortran compilers might have been used.
-
-Choosing the fortran compiler
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-To build with g77::
-
-    python setup.py build --fcompiler=gnu
-
-To build with gfortran::
-
-    python setup.py build --fcompiler=gnu95
-
-For more information see::
-
-    python setup.py build --help-fcompiler
-
-How to check the ABI of blas/lapack/atlas
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-One relatively simple and reliable way to check for the compiler used to build
-a library is to use ldd on the library. If libg2c.so is a dependency, this
-means that g77 has been used. If libgfortran.so is a dependency, gfortran
-has been used. If both are dependencies, this means both have been used, which
-is almost always a very bad idea.
-
-Disabling ATLAS and other accelerated libraries
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Usage of ATLAS and other accelerated libraries in Numpy can be disabled
-via::
-
-    BLAS=None LAPACK=None ATLAS=None python setup.py build
-
-Building with ATLAS support
----------------------------
-
-Ubuntu 8.10 (Intrepid) and 9.04 (Jaunty)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-You can install the necessary packages for optimized ATLAS with this command::
-
-    sudo apt-get install libatlas-base-dev
-
-If you have a recent CPU with SIMD support (SSE, SSE2, etc...), you should
-also install the corresponding package for optimal performance. For example,
-for SSE2::
-
-    sudo apt-get install libatlas3gf-sse2
-
-This package is not available on amd64 platforms.
-
-*NOTE*: Ubuntu changed its default fortran compiler from g77 in Hardy to
-gfortran in Intrepid. If you are building ATLAS from source and are upgrading
-from Hardy to Intrepid or later versions, you should rebuild everything from
-scratch, including lapack.
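After a build completes, one quick way to see which of these accelerated libraries the setup script actually picked up is numpy's own build-configuration dump (a sketch; the printed sections vary by installation):

```python
import numpy as np

# Prints the BLAS/LAPACK/ATLAS configuration recorded when numpy was built.
np.show_config()
```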
-
-Ubuntu 8.04 and lower
-~~~~~~~~~~~~~~~~~~~~~
-
-You can install the necessary packages for optimized ATLAS with this command::
-
-    sudo apt-get install atlas3-base-dev
-
-If you have a recent CPU with SIMD support (SSE, SSE2, etc...), you should
-also install the corresponding package for optimal performance. For example,
-for SSE2::
-
-    sudo apt-get install atlas3-sse2
diff --git a/pythonPackages/numpy/doc/source/user/introduction.rst b/pythonPackages/numpy/doc/source/user/introduction.rst
deleted file mode 100755
index d29c13b307..0000000000
--- a/pythonPackages/numpy/doc/source/user/introduction.rst
+++ /dev/null
@@ -1,10 +0,0 @@
-************
-Introduction
-************
-
-
-.. toctree::
-
-   whatisnumpy
-   install
-   howtofind
diff --git a/pythonPackages/numpy/doc/source/user/misc.rst b/pythonPackages/numpy/doc/source/user/misc.rst
deleted file mode 100755
index 0e1807f3f9..0000000000
--- a/pythonPackages/numpy/doc/source/user/misc.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-*************
-Miscellaneous
-*************
-
-.. automodule:: numpy.doc.misc
-
-.. automodule:: numpy.doc.methods_vs_functions
diff --git a/pythonPackages/numpy/doc/source/user/performance.rst b/pythonPackages/numpy/doc/source/user/performance.rst
deleted file mode 100755
index 59f8a2edc9..0000000000
--- a/pythonPackages/numpy/doc/source/user/performance.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-***********
-Performance
-***********
-
-.. automodule:: numpy.doc.performance
diff --git a/pythonPackages/numpy/doc/source/user/whatisnumpy.rst b/pythonPackages/numpy/doc/source/user/whatisnumpy.rst
deleted file mode 100755
index 1c3f96b8b1..0000000000
--- a/pythonPackages/numpy/doc/source/user/whatisnumpy.rst
+++ /dev/null
@@ -1,128 +0,0 @@
-**************
-What is NumPy?
-**************
-
-NumPy is the fundamental package for scientific computing in Python.
-It is a Python library that provides a multidimensional array object,
-various derived objects (such as masked arrays and matrices), and an
-assortment of routines for fast operations on arrays, including
-mathematical, logical, shape manipulation, sorting, selecting, I/O,
-discrete Fourier transforms, basic linear algebra, basic statistical
-operations, random simulation and much more.
-
-At the core of the NumPy package is the `ndarray` object. This
-encapsulates *n*-dimensional arrays of homogeneous data types, with
-many operations being performed in compiled code for performance.
-There are several important differences between NumPy arrays and the
-standard Python sequences:
-
-- NumPy arrays have a fixed size at creation, unlike Python lists
-  (which can grow dynamically). Changing the size of an `ndarray` will
-  create a new array and delete the original.
-
-- The elements in a NumPy array are all required to be of the same
-  data type, and thus will be the same size in memory. The exception:
-  one can have arrays of (Python, including NumPy) objects, thereby
-  allowing for arrays of different-sized elements.
-
-- NumPy arrays facilitate advanced mathematical and other types of
-  operations on large amounts of data. Typically, such operations are
-  executed more efficiently and with less code than is possible using
-  Python's built-in sequences.
-
-- A growing plethora of scientific and mathematical Python-based
-  packages are using NumPy arrays; though these typically support
-  Python-sequence input, they convert such input to NumPy arrays prior
-  to processing, and they often output NumPy arrays. In other words,
-  in order to efficiently use much (perhaps even most) of today's
-  scientific/mathematical Python-based software, just knowing how to
-  use Python's built-in sequence types is insufficient - one also
-  needs to know how to use NumPy arrays.
-
-The points about sequence size and speed are particularly important in
-scientific computing.
As a simple example, consider the case of
-multiplying each element in a 1-D sequence with the corresponding
-element in another sequence of the same length. If the data are
-stored in two Python lists, ``a`` and ``b``, we could iterate over
-each element::
-
-    c = []
-    for i in range(len(a)):
-        c.append(a[i]*b[i])
-
-This produces the correct answer, but if ``a`` and ``b`` each contain
-millions of numbers, we will pay the price for the inefficiencies of
-looping in Python. We could accomplish the same task much more
-quickly in C by writing (for clarity we neglect variable declarations
-and initializations, memory allocation, etc.)
-
-::
-
-    for (i = 0; i < rows; i++) {
-        c[i] = a[i]*b[i];
-    }
-
-This saves all the overhead involved in interpreting the Python code
-and manipulating Python objects, but at the expense of the benefits
-gained from coding in Python. Furthermore, the coding work required
-increases with the dimensionality of our data. In the case of a 2-D
-array, for example, the C code (abridged as before) expands to
-
-::
-
-    for (i = 0; i < rows; i++) {
-        for (j = 0; j < columns; j++) {
-            c[i][j] = a[i][j]*b[i][j];
-        }
-    }
-
-NumPy gives us the best of both worlds: element-by-element operations
-are the "default mode" when an `ndarray` is involved, but the
-element-by-element operation is speedily executed by pre-compiled C
-code. In NumPy
-
-::
-
-    c = a * b
-
-does what the earlier examples do, at near-C speeds, but with the code
-simplicity we expect from something based on Python (indeed, the NumPy
-idiom is even simpler!). This last example illustrates two of NumPy's
-features which are the basis of much of its power: vectorization and
-broadcasting.
-
-Vectorization describes the absence of any explicit looping, indexing,
-etc., in the code - these things are taking place, of course, just
-"behind the scenes" (in optimized, pre-compiled C code).
Vectorized
-code has many advantages, among which are:
-
-- vectorized code is more concise and easier to read
-
-- fewer lines of code generally means fewer bugs
-
-- the code more closely resembles standard mathematical notation
-  (making it easier, typically, to correctly code mathematical
-  constructs)
-
-- vectorization results in more "Pythonic" code (without
-  vectorization, our code would still be littered with inefficient and
-  difficult to read ``for`` loops).
-
-Broadcasting is the term used to describe the implicit
-element-by-element behavior of operations; generally speaking, in
-NumPy all operations (i.e., not just arithmetic operations, but
-logical, bit-wise, functional, etc.) behave in this implicit
-element-by-element fashion, i.e., they broadcast. Moreover, in the
-example above, ``a`` and ``b`` could be multidimensional arrays of the
-same shape, or a scalar and an array, or even two arrays with
-different shapes, provided that the smaller array is "expandable" to
-the shape of the larger in such a way that the resulting broadcast is
-unambiguous (for detailed "rules" of broadcasting see
-`numpy.doc.broadcasting`).
-
-NumPy fully supports an object-oriented approach, starting, once
-again, with `ndarray`. For example, `ndarray` is a class, possessing
-numerous methods and attributes. Many of its methods mirror
-functions in the outermost NumPy namespace, giving the programmer
-complete freedom to code in whichever paradigm she prefers and/or
-which seems most appropriate to the task at hand.
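Both features can be seen in a short session: the loop-free multiplication from the example above, plus a broadcast against a scalar and against a smaller array (the particular shapes are only for illustration):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([10.0, 20.0, 30.0, 40.0])

# Vectorization: the elementwise loop runs in compiled code.
c = a * b                 # array([ 10.,  40.,  90., 160.])

# Broadcasting: a scalar, or a smaller "expandable" array, is
# implicitly stretched to the shape of the larger operand.
d = a * 2.0               # array([2., 4., 6., 8.])
m = np.ones((3, 4)) * a   # shape (3, 4); each row multiplied by a
```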
diff --git a/pythonPackages/numpy/doc/sphinxext/LICENSE.txt b/pythonPackages/numpy/doc/sphinxext/LICENSE.txt deleted file mode 100755 index e00efc31ec..0000000000 --- a/pythonPackages/numpy/doc/sphinxext/LICENSE.txt +++ /dev/null @@ -1,97 +0,0 @@ -------------------------------------------------------------------------------- - The files - - numpydoc.py - - autosummary.py - - autosummary_generate.py - - docscrape.py - - docscrape_sphinx.py - - phantom_import.py - have the following license: - -Copyright (C) 2008 Stefan van der Walt , Pauli Virtanen - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - 1. Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in - the documentation and/or other materials provided with the - distribution. - -THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR -IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, -INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) -HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, -STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING -IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. 
- -------------------------------------------------------------------------------- - The files - - compiler_unparse.py - - comment_eater.py - - traitsdoc.py - have the following license: - -This software is OSI Certified Open Source Software. -OSI Certified is a certification mark of the Open Source Initiative. - -Copyright (c) 2006, Enthought, Inc. -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - * Redistributions of source code must retain the above copyright notice, this - list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above copyright notice, - this list of conditions and the following disclaimer in the documentation - and/or other materials provided with the distribution. - * Neither the name of Enthought, Inc. nor the names of its contributors may - be used to endorse or promote products derived from this software without - specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR -ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON -ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-
-
--------------------------------------------------------------------------------
-    The files
-    - only_directives.py
-    - plot_directive.py
-    originate from Matplotlib (http://matplotlib.sf.net/) which has
-    the following license:
-
-Copyright (c) 2002-2008 John D. Hunter; All Rights Reserved.
-
-1. This LICENSE AGREEMENT is between John D. Hunter ("JDH"), and the Individual or Organization ("Licensee") accessing and otherwise using matplotlib software in source or binary form and its associated documentation.
-
-2. Subject to the terms and conditions of this License Agreement, JDH hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use matplotlib 0.98.3 alone or in any derivative version, provided, however, that JDH's License Agreement and JDH's notice of copyright, i.e., "Copyright (c) 2002-2008 John D. Hunter; All Rights Reserved" are retained in matplotlib 0.98.3 alone or in any derivative version prepared by Licensee.
-
-3. In the event Licensee prepares a derivative work that is based on or incorporates matplotlib 0.98.3 or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to matplotlib 0.98.3.
-
-4. JDH is making matplotlib 0.98.3 available to Licensee on an "AS IS" basis. JDH MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, JDH MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF MATPLOTLIB 0.98.3 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
-
-5.
JDH SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF MATPLOTLIB 0.98.3 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING MATPLOTLIB 0.98.3, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. - -6. This License Agreement will automatically terminate upon a material breach of its terms and conditions. - -7. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between JDH and Licensee. This License Agreement does not grant permission to use JDH trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party. - -8. By copying, installing or otherwise using matplotlib 0.98.3, Licensee agrees to be bound by the terms and conditions of this License Agreement. - diff --git a/pythonPackages/numpy/doc/sphinxext/MANIFEST.in b/pythonPackages/numpy/doc/sphinxext/MANIFEST.in deleted file mode 100755 index f88ed785c5..0000000000 --- a/pythonPackages/numpy/doc/sphinxext/MANIFEST.in +++ /dev/null @@ -1,2 +0,0 @@ -recursive-include tests *.py -include *.txt diff --git a/pythonPackages/numpy/doc/sphinxext/README.txt b/pythonPackages/numpy/doc/sphinxext/README.txt deleted file mode 100755 index f3d782c955..0000000000 --- a/pythonPackages/numpy/doc/sphinxext/README.txt +++ /dev/null @@ -1,52 +0,0 @@ -===================================== -numpydoc -- Numpy's Sphinx extensions -===================================== - -Numpy's documentation uses several custom extensions to Sphinx. These -are shipped in this ``numpydoc`` package, in case you want to make use -of them in third-party projects. - -The following extensions are available: - - - ``numpydoc``: support for the Numpy docstring format in Sphinx, and add - the code description directives ``np:function``, ``np-c:function``, etc. - that support the Numpy docstring syntax. 
-
-  - ``numpydoc.traitsdoc``: For gathering documentation about Traits attributes.
-
-  - ``numpydoc.plot_directives``: Adaptation of Matplotlib's ``plot::``
-    directive. Note that this implementation may still undergo severe
-    changes or eventually be deprecated.
-
-  - ``numpydoc.only_directives``: (DEPRECATED)
-
-  - ``numpydoc.autosummary``: (DEPRECATED) An ``autosummary::`` directive.
-    Available in Sphinx 0.6.2 and (to-be) 1.0 as ``sphinx.ext.autosummary``,
-    and the Sphinx 1.0 version is recommended over the one included in
-    Numpydoc.
-
-
-numpydoc
-========
-
-Numpydoc inserts a hook into Sphinx's autodoc that converts docstrings
-following the Numpy/Scipy format to a form palatable to Sphinx.
-
-Options
--------
-
-The following options can be set in conf.py:
-
-- numpydoc_use_plots: bool
-
-  Whether to produce ``plot::`` directives for Examples sections that
-  contain ``import matplotlib``.
-
-- numpydoc_show_class_members: bool
-
-  Whether to show all members of a class in the Methods and Attributes
-  sections automatically.
-
-- numpydoc_edit_link: bool (DEPRECATED -- edit your HTML template instead)
-
-  Whether to insert an edit link after docstrings.
diff --git a/pythonPackages/numpy/doc/sphinxext/__init__.py b/pythonPackages/numpy/doc/sphinxext/__init__.py
deleted file mode 100755
index ae9073bc41..0000000000
--- a/pythonPackages/numpy/doc/sphinxext/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from numpydoc import setup
diff --git a/pythonPackages/numpy/doc/sphinxext/autosummary.py b/pythonPackages/numpy/doc/sphinxext/autosummary.py
deleted file mode 100755
index 2f8a00a303..0000000000
--- a/pythonPackages/numpy/doc/sphinxext/autosummary.py
+++ /dev/null
@@ -1,349 +0,0 @@
-"""
-===========
-autosummary
-===========
-
-Sphinx extension that adds an autosummary:: directive, which can be
-used to generate function/method/attribute/etc. summary lists, similar
-to those output e.g. by Epydoc and other API doc generation tools.
- -An :autolink: role is also provided. - -autosummary directive ---------------------- - -The autosummary directive has the form:: - - .. autosummary:: - :nosignatures: - :toctree: generated/ - - module.function_1 - module.function_2 - ... - -and it generates an output table (containing signatures, optionally) - - ======================== ============================================= - module.function_1(args) Summary line from the docstring of function_1 - module.function_2(args) Summary line from the docstring - ... - ======================== ============================================= - -If the :toctree: option is specified, files matching the function names -are inserted to the toctree with the given prefix: - - generated/module.function_1 - generated/module.function_2 - ... - -Note: The file names contain the module:: or currentmodule:: prefixes. - -.. seealso:: autosummary_generate.py - - -autolink role -------------- - -The autolink role functions as ``:obj:`` when the name referred can be -resolved to a Python object, and otherwise it becomes simple emphasis. -This can be used as the default role to make links 'smart'. - -""" -import sys, os, posixpath, re - -from docutils.parsers.rst import directives -from docutils.statemachine import ViewList -from docutils import nodes - -import sphinx.addnodes, sphinx.roles -from sphinx.util import patfilter - -from docscrape_sphinx import get_doc_object - -import warnings -warnings.warn( - "The numpydoc.autosummary extension can also be found as " - "sphinx.ext.autosummary in Sphinx >= 0.6, and the version in " - "Sphinx >= 0.7 is superior to the one in numpydoc. 
This numpydoc " - "version of autosummary is no longer maintained.", - DeprecationWarning, stacklevel=2) - -def setup(app): - app.add_directive('autosummary', autosummary_directive, True, (0, 0, False), - toctree=directives.unchanged, - nosignatures=directives.flag) - app.add_role('autolink', autolink_role) - - app.add_node(autosummary_toc, - html=(autosummary_toc_visit_html, autosummary_toc_depart_noop), - latex=(autosummary_toc_visit_latex, autosummary_toc_depart_noop)) - app.connect('doctree-read', process_autosummary_toc) - -#------------------------------------------------------------------------------ -# autosummary_toc node -#------------------------------------------------------------------------------ - -class autosummary_toc(nodes.comment): - pass - -def process_autosummary_toc(app, doctree): - """ - Insert items described in autosummary:: to the TOC tree, but do - not generate the toctree:: list. - - """ - env = app.builder.env - crawled = {} - def crawl_toc(node, depth=1): - crawled[node] = True - for j, subnode in enumerate(node): - try: - if (isinstance(subnode, autosummary_toc) - and isinstance(subnode[0], sphinx.addnodes.toctree)): - env.note_toctree(env.docname, subnode[0]) - continue - except IndexError: - continue - if not isinstance(subnode, nodes.section): - continue - if subnode not in crawled: - crawl_toc(subnode, depth+1) - crawl_toc(doctree) - -def autosummary_toc_visit_html(self, node): - """Hide autosummary toctree list in HTML output""" - raise nodes.SkipNode - -def autosummary_toc_visit_latex(self, node): - """Show autosummary toctree (= put the referenced pages here) in Latex""" - pass - -def autosummary_toc_depart_noop(self, node): - pass - -#------------------------------------------------------------------------------ -# .. 
autosummary:: -#------------------------------------------------------------------------------ - -def autosummary_directive(dirname, arguments, options, content, lineno, - content_offset, block_text, state, state_machine): - """ - Pretty table containing short signatures and summaries of functions etc. - - autosummary also generates a (hidden) toctree:: node. - - """ - - names = [] - names += [x.strip().split()[0] for x in content - if x.strip() and re.search(r'^[a-zA-Z_]', x.strip()[0])] - - table, warnings, real_names = get_autosummary(names, state, - 'nosignatures' in options) - node = table - - env = state.document.settings.env - suffix = env.config.source_suffix - all_docnames = env.found_docs.copy() - dirname = posixpath.dirname(env.docname) - - if 'toctree' in options: - tree_prefix = options['toctree'].strip() - docnames = [] - for name in names: - name = real_names.get(name, name) - - docname = tree_prefix + name - if docname.endswith(suffix): - docname = docname[:-len(suffix)] - docname = posixpath.normpath(posixpath.join(dirname, docname)) - if docname not in env.found_docs: - warnings.append(state.document.reporter.warning( - 'toctree references unknown document %r' % docname, - line=lineno)) - docnames.append(docname) - - tocnode = sphinx.addnodes.toctree() - tocnode['includefiles'] = docnames - tocnode['maxdepth'] = -1 - tocnode['glob'] = None - tocnode['entries'] = [(None, docname) for docname in docnames] - - tocnode = autosummary_toc('', '', tocnode) - return warnings + [node] + [tocnode] - else: - return warnings + [node] - -def get_autosummary(names, state, no_signatures=False): - """ - Generate a proper table node for autosummary:: directive. - - Parameters - ---------- - names : list of str - Names of Python objects to be imported and added to the table. 
- document : document - Docutils document object - - """ - document = state.document - - real_names = {} - warnings = [] - - prefixes = [''] - prefixes.insert(0, document.settings.env.currmodule) - - table = nodes.table('') - group = nodes.tgroup('', cols=2) - table.append(group) - group.append(nodes.colspec('', colwidth=10)) - group.append(nodes.colspec('', colwidth=90)) - body = nodes.tbody('') - group.append(body) - - def append_row(*column_texts): - row = nodes.row('') - for text in column_texts: - node = nodes.paragraph('') - vl = ViewList() - vl.append(text, '') - state.nested_parse(vl, 0, node) - try: - if isinstance(node[0], nodes.paragraph): - node = node[0] - except IndexError: - pass - row.append(nodes.entry('', node)) - body.append(row) - - for name in names: - try: - obj, real_name = import_by_name(name, prefixes=prefixes) - except ImportError: - warnings.append(document.reporter.warning( - 'failed to import %s' % name)) - append_row(":obj:`%s`" % name, "") - continue - - real_names[name] = real_name - - doc = get_doc_object(obj) - - if doc['Summary']: - title = " ".join(doc['Summary']) - else: - title = "" - - col1 = u":obj:`%s <%s>`" % (name, real_name) - if doc['Signature']: - sig = re.sub('^[^(\[]*', '', doc['Signature'].strip()) - if '=' in sig: - # abbreviate optional arguments - sig = re.sub(r', ([a-zA-Z0-9_]+)=', r'[, \1=', sig, count=1) - sig = re.sub(r'\(([a-zA-Z0-9_]+)=', r'([\1=', sig, count=1) - sig = re.sub(r'=[^,)]+,', ',', sig) - sig = re.sub(r'=[^,)]+\)$', '])', sig) - # shorten long strings - sig = re.sub(r'(\[.{16,16}[^,]*?),.*?\]\)', r'\1, ...])', sig) - else: - sig = re.sub(r'(\(.{16,16}[^,]*?),.*?\)', r'\1, ...)', sig) - # make signature contain non-breaking spaces - col1 += u"\\ \u00a0" + unicode(sig).replace(u" ", u"\u00a0") - col2 = title - append_row(col1, col2) - - return table, warnings, real_names - -def import_by_name(name, prefixes=[None]): - """ - Import a Python object that has the given name, under one of the prefixes. 
- - Parameters - ---------- - name : str - Name of a Python object, eg. 'numpy.ndarray.view' - prefixes : list of (str or None), optional - Prefixes to prepend to the name (None implies no prefix). - The first prefixed name that results to successful import is used. - - Returns - ------- - obj - The imported object - name - Name of the imported object (useful if `prefixes` was used) - - """ - for prefix in prefixes: - try: - if prefix: - prefixed_name = '.'.join([prefix, name]) - else: - prefixed_name = name - return _import_by_name(prefixed_name), prefixed_name - except ImportError: - pass - raise ImportError - -def _import_by_name(name): - """Import a Python object given its full name""" - try: - # try first interpret `name` as MODNAME.OBJ - name_parts = name.split('.') - try: - modname = '.'.join(name_parts[:-1]) - __import__(modname) - return getattr(sys.modules[modname], name_parts[-1]) - except (ImportError, IndexError, AttributeError): - pass - - # ... then as MODNAME, MODNAME.OBJ1, MODNAME.OBJ1.OBJ2, ... - last_j = 0 - modname = None - for j in reversed(range(1, len(name_parts)+1)): - last_j = j - modname = '.'.join(name_parts[:j]) - try: - __import__(modname) - except ImportError: - continue - if modname in sys.modules: - break - - if last_j < len(name_parts): - obj = sys.modules[modname] - for obj_name in name_parts[last_j:]: - obj = getattr(obj, obj_name) - return obj - else: - return sys.modules[modname] - except (ValueError, ImportError, AttributeError, KeyError), e: - raise ImportError(e) - -#------------------------------------------------------------------------------ -# :autolink: (smart default role) -#------------------------------------------------------------------------------ - -def autolink_role(typ, rawtext, etext, lineno, inliner, - options={}, content=[]): - """ - Smart linking role. - - Expands to ":obj:`text`" if `text` is an object that can be imported; - otherwise expands to "*text*". 
- """ - r = sphinx.roles.xfileref_role('obj', rawtext, etext, lineno, inliner, - options, content) - pnode = r[0][0] - - prefixes = [None] - #prefixes.insert(0, inliner.document.settings.env.currmodule) - try: - obj, name = import_by_name(pnode['reftarget'], prefixes) - except ImportError: - content = pnode[0] - r[0][0] = nodes.emphasis(rawtext, content[0].astext(), - classes=content['classes']) - return r diff --git a/pythonPackages/numpy/doc/sphinxext/autosummary_generate.py b/pythonPackages/numpy/doc/sphinxext/autosummary_generate.py deleted file mode 100755 index a327067488..0000000000 --- a/pythonPackages/numpy/doc/sphinxext/autosummary_generate.py +++ /dev/null @@ -1,219 +0,0 @@ -#!/usr/bin/env python -r""" -autosummary_generate.py OPTIONS FILES - -Generate automatic RST source files for items referred to in -autosummary:: directives. - -Each generated RST file contains a single auto*:: directive which -extracts the docstring of the referred item. - -Example Makefile rule:: - - generate: - ./ext/autosummary_generate.py -o source/generated source/*.rst - -""" -import glob, re, inspect, os, optparse, pydoc -from autosummary import import_by_name - -try: - from phantom_import import import_phantom_module -except ImportError: - import_phantom_module = lambda x: x - -def main(): - p = optparse.OptionParser(__doc__.strip()) - p.add_option("-p", "--phantom", action="store", type="string", - dest="phantom", default=None, - help="Phantom import modules from a file") - p.add_option("-o", "--output-dir", action="store", type="string", - dest="output_dir", default=None, - help=("Write all output files to the given directory (instead " - "of writing them as specified in the autosummary:: " - "directives)")) - options, args = p.parse_args() - - if len(args) == 0: - p.error("wrong number of arguments") - - if options.phantom and os.path.isfile(options.phantom): - import_phantom_module(options.phantom) - - # read - names = {} - for name, loc in get_documented(args).items(): 
- for (filename, sec_title, keyword, toctree) in loc: - if toctree is not None: - path = os.path.join(os.path.dirname(filename), toctree) - names[name] = os.path.abspath(path) - - # write - for name, path in sorted(names.items()): - if options.output_dir is not None: - path = options.output_dir - - if not os.path.isdir(path): - os.makedirs(path) - - try: - obj, name = import_by_name(name) - except ImportError, e: - print "Failed to import '%s': %s" % (name, e) - continue - - fn = os.path.join(path, '%s.rst' % name) - - if os.path.exists(fn): - # skip - continue - - f = open(fn, 'w') - - try: - f.write('%s\n%s\n\n' % (name, '='*len(name))) - - if inspect.isclass(obj): - if issubclass(obj, Exception): - f.write(format_modulemember(name, 'autoexception')) - else: - f.write(format_modulemember(name, 'autoclass')) - elif inspect.ismodule(obj): - f.write(format_modulemember(name, 'automodule')) - elif inspect.ismethod(obj) or inspect.ismethoddescriptor(obj): - f.write(format_classmember(name, 'automethod')) - elif callable(obj): - f.write(format_modulemember(name, 'autofunction')) - elif hasattr(obj, '__get__'): - f.write(format_classmember(name, 'autoattribute')) - else: - f.write(format_modulemember(name, 'autofunction')) - finally: - f.close() - -def format_modulemember(name, directive): - parts = name.split('.') - mod, name = '.'.join(parts[:-1]), parts[-1] - return ".. currentmodule:: %s\n\n.. %s:: %s\n" % (mod, directive, name) - -def format_classmember(name, directive): - parts = name.split('.') - mod, name = '.'.join(parts[:-2]), '.'.join(parts[-2:]) - return ".. currentmodule:: %s\n\n.. %s:: %s\n" % (mod, directive, name) - -def get_documented(filenames): - """ - Find out what items are documented in source/*.rst - See `get_documented_in_lines`. 
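The two `format_*` helpers above differ only in how much of the dotted path stays attached to the object in the generated stub. A Python 3 restatement with a concrete example (the name `numpy.ndarray.view` is illustrative):

```python
def format_modulemember(name, directive):
    # Module-level object: everything before the last dot is the module.
    mod, obj = name.rsplit('.', 1)
    return f".. currentmodule:: {mod}\n\n.. {directive}:: {obj}\n"

def format_classmember(name, directive):
    # Class member: keep 'Class.method' together; the module is the rest.
    parts = name.split('.')
    mod, obj = '.'.join(parts[:-2]), '.'.join(parts[-2:])
    return f".. currentmodule:: {mod}\n\n.. {directive}:: {obj}\n"

# format_classmember('numpy.ndarray.view', 'automethod') yields the stub:
#
#   .. currentmodule:: numpy
#
#   .. automethod:: ndarray.view
```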
- - """ - documented = {} - for filename in filenames: - f = open(filename, 'r') - lines = f.read().splitlines() - documented.update(get_documented_in_lines(lines, filename=filename)) - f.close() - return documented - -def get_documented_in_docstring(name, module=None, filename=None): - """ - Find out what items are documented in the given object's docstring. - See `get_documented_in_lines`. - - """ - try: - obj, real_name = import_by_name(name) - lines = pydoc.getdoc(obj).splitlines() - return get_documented_in_lines(lines, module=name, filename=filename) - except AttributeError: - pass - except ImportError, e: - print "Failed to import '%s': %s" % (name, e) - return {} - -def get_documented_in_lines(lines, module=None, filename=None): - """ - Find out what items are documented in the given lines - - Returns - ------- - documented : dict of list of (filename, title, keyword, toctree) - Dictionary whose keys are documented names of objects. - The value is a list of locations where the object was documented. - Each location is a tuple of filename, the current section title, - the name of the directive, and the value of the :toctree: argument - (if present) of the directive. - - """ - title_underline_re = re.compile("^[-=*_^#]{3,}\s*$") - autodoc_re = re.compile(".. 
auto(function|method|attribute|class|exception|module)::\s*([A-Za-z0-9_.]+)\s*$") - autosummary_re = re.compile(r'^\.\.\s+autosummary::\s*') - module_re = re.compile(r'^\.\.\s+(current)?module::\s*([a-zA-Z0-9_.]+)\s*$') - autosummary_item_re = re.compile(r'^\s+([_a-zA-Z][a-zA-Z0-9_.]*)\s*.*?') - toctree_arg_re = re.compile(r'^\s+:toctree:\s*(.*?)\s*$') - - documented = {} - - current_title = [] - last_line = None - toctree = None - current_module = module - in_autosummary = False - - for line in lines: - try: - if in_autosummary: - m = toctree_arg_re.match(line) - if m: - toctree = m.group(1) - continue - - if line.strip().startswith(':'): - continue # skip options - - m = autosummary_item_re.match(line) - if m: - name = m.group(1).strip() - if current_module and not name.startswith(current_module + '.'): - name = "%s.%s" % (current_module, name) - documented.setdefault(name, []).append( - (filename, current_title, 'autosummary', toctree)) - continue - if line.strip() == '': - continue - in_autosummary = False - - m = autosummary_re.match(line) - if m: - in_autosummary = True - continue - - m = autodoc_re.search(line) - if m: - name = m.group(2).strip() - if m.group(1) == "module": - current_module = name - documented.update(get_documented_in_docstring( - name, filename=filename)) - elif current_module and not name.startswith(current_module+'.'): - name = "%s.%s" % (current_module, name) - documented.setdefault(name, []).append( - (filename, current_title, "auto" + m.group(1), None)) - continue - - m = title_underline_re.match(line) - if m and last_line: - current_title = last_line.strip() - continue - - m = module_re.match(line) - if m: - current_module = m.group(2) - continue - finally: - last_line = line - - return documented - -if __name__ == "__main__": - main() diff --git a/pythonPackages/numpy/doc/sphinxext/comment_eater.py b/pythonPackages/numpy/doc/sphinxext/comment_eater.py deleted file mode 100755 index e11eea9021..0000000000 --- 
a/pythonPackages/numpy/doc/sphinxext/comment_eater.py +++ /dev/null @@ -1,158 +0,0 @@ -from cStringIO import StringIO -import compiler -import inspect -import textwrap -import tokenize - -from compiler_unparse import unparse - - -class Comment(object): - """ A comment block. - """ - is_comment = True - def __init__(self, start_lineno, end_lineno, text): - # int : The first line number in the block. 1-indexed. - self.start_lineno = start_lineno - # int : The last line number. Inclusive! - self.end_lineno = end_lineno - # str : The text block including '#' character but not any leading spaces. - self.text = text - - def add(self, string, start, end, line): - """ Add a new comment line. - """ - self.start_lineno = min(self.start_lineno, start[0]) - self.end_lineno = max(self.end_lineno, end[0]) - self.text += string - - def __repr__(self): - return '%s(%r, %r, %r)' % (self.__class__.__name__, self.start_lineno, - self.end_lineno, self.text) - - -class NonComment(object): - """ A non-comment block of code. - """ - is_comment = False - def __init__(self, start_lineno, end_lineno): - self.start_lineno = start_lineno - self.end_lineno = end_lineno - - def add(self, string, start, end, line): - """ Add lines to the block. - """ - if string.strip(): - # Only add if not entirely whitespace. - self.start_lineno = min(self.start_lineno, start[0]) - self.end_lineno = max(self.end_lineno, end[0]) - - def __repr__(self): - return '%s(%r, %r)' % (self.__class__.__name__, self.start_lineno, - self.end_lineno) - - -class CommentBlocker(object): - """ Pull out contiguous comment blocks. - """ - def __init__(self): - # Start with a dummy. - self.current_block = NonComment(0, 0) - - # All of the blocks seen so far. - self.blocks = [] - - # The index mapping lines of code to their associated comment blocks. - self.index = {} - - def process_file(self, file): - """ Process a file object. 
- """ - for token in tokenize.generate_tokens(file.next): - self.process_token(*token) - self.make_index() - - def process_token(self, kind, string, start, end, line): - """ Process a single token. - """ - if self.current_block.is_comment: - if kind == tokenize.COMMENT: - self.current_block.add(string, start, end, line) - else: - self.new_noncomment(start[0], end[0]) - else: - if kind == tokenize.COMMENT: - self.new_comment(string, start, end, line) - else: - self.current_block.add(string, start, end, line) - - def new_noncomment(self, start_lineno, end_lineno): - """ We are transitioning from a noncomment to a comment. - """ - block = NonComment(start_lineno, end_lineno) - self.blocks.append(block) - self.current_block = block - - def new_comment(self, string, start, end, line): - """ Possibly add a new comment. - - Only adds a new comment if this comment is the only thing on the line. - Otherwise, it extends the noncomment block. - """ - prefix = line[:start[1]] - if prefix.strip(): - # Oops! Trailing comment, not a comment block. - self.current_block.add(string, start, end, line) - else: - # A comment block. - block = Comment(start[0], end[0], string) - self.blocks.append(block) - self.current_block = block - - def make_index(self): - """ Make the index mapping lines of actual code to their associated - prefix comments. - """ - for prev, block in zip(self.blocks[:-1], self.blocks[1:]): - if not block.is_comment: - self.index[block.start_lineno] = prev - - def search_for_comment(self, lineno, default=None): - """ Find the comment block just before the given line number. - - Returns None (or the specified default) if there is no such block. - """ - if not self.index: - self.make_index() - block = self.index.get(lineno, None) - text = getattr(block, 'text', default) - return text - - -def strip_comment_marker(text): - """ Strip # markers at the front of a block of comment text. 
- """ - lines = [] - for line in text.splitlines(): - lines.append(line.lstrip('#')) - text = textwrap.dedent('\n'.join(lines)) - return text - - -def get_class_traits(klass): - """ Yield all of the documentation for trait definitions on a class object. - """ - # FIXME: gracefully handle errors here or in the caller? - source = inspect.getsource(klass) - cb = CommentBlocker() - cb.process_file(StringIO(source)) - mod_ast = compiler.parse(source) - class_ast = mod_ast.node.nodes[0] - for node in class_ast.code.nodes: - # FIXME: handle other kinds of assignments? - if isinstance(node, compiler.ast.Assign): - name = node.nodes[0].name - rhs = unparse(node.expr).strip() - doc = strip_comment_marker(cb.search_for_comment(node.lineno, default='')) - yield name, rhs, doc - diff --git a/pythonPackages/numpy/doc/sphinxext/compiler_unparse.py b/pythonPackages/numpy/doc/sphinxext/compiler_unparse.py deleted file mode 100755 index ffcf51b353..0000000000 --- a/pythonPackages/numpy/doc/sphinxext/compiler_unparse.py +++ /dev/null @@ -1,860 +0,0 @@ -""" Turn compiler.ast structures back into executable python code. - - The unparse method takes a compiler.ast tree and transforms it back into - valid python code. It is incomplete and currently only works for - import statements, function calls, function definitions, assignments, and - basic expressions. - - Inspired by python-2.5-svn/Demo/parser/unparse.py - - fixme: We may want to move to using _ast trees because the compiler for - them is about 6 times faster than compiler.compile. 
-""" - -import sys -import cStringIO -from compiler.ast import Const, Name, Tuple, Div, Mul, Sub, Add - -def unparse(ast, single_line_functions=False): - s = cStringIO.StringIO() - UnparseCompilerAst(ast, s, single_line_functions) - return s.getvalue().lstrip() - -op_precedence = { 'compiler.ast.Power':3, 'compiler.ast.Mul':2, 'compiler.ast.Div':2, - 'compiler.ast.Add':1, 'compiler.ast.Sub':1 } - -class UnparseCompilerAst: - """ Methods in this class recursively traverse an AST and - output source code for the abstract syntax; original formatting - is disregarged. - """ - - ######################################################################### - # object interface. - ######################################################################### - - def __init__(self, tree, file = sys.stdout, single_line_functions=False): - """ Unparser(tree, file=sys.stdout) -> None. - - Print the source for tree to file. - """ - self.f = file - self._single_func = single_line_functions - self._do_indent = True - self._indent = 0 - self._dispatch(tree) - self._write("\n") - self.f.flush() - - ######################################################################### - # Unparser private interface. - ######################################################################### - - ### format, output, and dispatch methods ################################ - - def _fill(self, text = ""): - "Indent a piece of text, according to the current indentation level" - if self._do_indent: - self._write("\n"+" "*self._indent + text) - else: - self._write(text) - - def _write(self, text): - "Append a piece of text to the current line." - self.f.write(text) - - def _enter(self): - "Print ':', and increase the indentation." - self._write(": ") - self._indent += 1 - - def _leave(self): - "Decrease the indentation level." - self._indent -= 1 - - def _dispatch(self, tree): - "_dispatcher function, _dispatching tree type T to method _T." 
- if isinstance(tree, list): - for t in tree: - self._dispatch(t) - return - meth = getattr(self, "_"+tree.__class__.__name__) - if tree.__class__.__name__ == 'NoneType' and not self._do_indent: - return - meth(tree) - - - ######################################################################### - # compiler.ast unparsing methods. - # - # There should be one method per concrete grammar type. They are - # organized in alphabetical order. - ######################################################################### - - def _Add(self, t): - self.__binary_op(t, '+') - - def _And(self, t): - self._write(" (") - for i, node in enumerate(t.nodes): - self._dispatch(node) - if i != len(t.nodes)-1: - self._write(") and (") - self._write(")") - - def _AssAttr(self, t): - """ Handle assigning an attribute of an object - """ - self._dispatch(t.expr) - self._write('.'+t.attrname) - - def _Assign(self, t): - """ Expression Assignment such as "a = 1". - - This only handles assignment in expressions. Keyword assignment - is handled separately. - """ - self._fill() - for target in t.nodes: - self._dispatch(target) - self._write(" = ") - self._dispatch(t.expr) - if not self._do_indent: - self._write('; ') - - def _AssName(self, t): - """ Name on left hand side of expression. - - Treat just like a name on the right side of an expression. - """ - self._Name(t) - - def _AssTuple(self, t): - """ Tuple on left hand side of an expression. - """ - - # _write each elements, separated by a comma. - for element in t.nodes[:-1]: - self._dispatch(element) - self._write(", ") - - # Handle the last one without writing comma - last_element = t.nodes[-1] - self._dispatch(last_element) - - def _AugAssign(self, t): - """ +=,-=,*=,/=,**=, etc. operations - """ - - self._fill() - self._dispatch(t.node) - self._write(' '+t.op+' ') - self._dispatch(t.expr) - if not self._do_indent: - self._write(';') - - def _Bitand(self, t): - """ Bit and operation. 
- """ - - for i, node in enumerate(t.nodes): - self._write("(") - self._dispatch(node) - self._write(")") - if i != len(t.nodes)-1: - self._write(" & ") - - def _Bitor(self, t): - """ Bit or operation - """ - - for i, node in enumerate(t.nodes): - self._write("(") - self._dispatch(node) - self._write(")") - if i != len(t.nodes)-1: - self._write(" | ") - - def _CallFunc(self, t): - """ Function call. - """ - self._dispatch(t.node) - self._write("(") - comma = False - for e in t.args: - if comma: self._write(", ") - else: comma = True - self._dispatch(e) - if t.star_args: - if comma: self._write(", ") - else: comma = True - self._write("*") - self._dispatch(t.star_args) - if t.dstar_args: - if comma: self._write(", ") - else: comma = True - self._write("**") - self._dispatch(t.dstar_args) - self._write(")") - - def _Compare(self, t): - self._dispatch(t.expr) - for op, expr in t.ops: - self._write(" " + op + " ") - self._dispatch(expr) - - def _Const(self, t): - """ A constant value such as an integer value, 3, or a string, "hello". - """ - self._dispatch(t.value) - - def _Decorators(self, t): - """ Handle function decorators (eg. @has_units) - """ - for node in t.nodes: - self._dispatch(node) - - def _Dict(self, t): - self._write("{") - for i, (k, v) in enumerate(t.items): - self._dispatch(k) - self._write(": ") - self._dispatch(v) - if i < len(t.items)-1: - self._write(", ") - self._write("}") - - def _Discard(self, t): - """ Node for when return value is ignored such as in "foo(a)". - """ - self._fill() - self._dispatch(t.expr) - - def _Div(self, t): - self.__binary_op(t, '/') - - def _Ellipsis(self, t): - self._write("...") - - def _From(self, t): - """ Handle "from xyz import foo, bar as baz". - """ - # fixme: Are From and ImportFrom handled differently? 
- self._fill("from ") - self._write(t.modname) - self._write(" import ") - for i, (name,asname) in enumerate(t.names): - if i != 0: - self._write(", ") - self._write(name) - if asname is not None: - self._write(" as "+asname) - - def _Function(self, t): - """ Handle function definitions - """ - if t.decorators is not None: - self._fill("@") - self._dispatch(t.decorators) - self._fill("def "+t.name + "(") - defaults = [None] * (len(t.argnames) - len(t.defaults)) + list(t.defaults) - for i, arg in enumerate(zip(t.argnames, defaults)): - self._write(arg[0]) - if arg[1] is not None: - self._write('=') - self._dispatch(arg[1]) - if i < len(t.argnames)-1: - self._write(', ') - self._write(")") - if self._single_func: - self._do_indent = False - self._enter() - self._dispatch(t.code) - self._leave() - self._do_indent = True - - def _Getattr(self, t): - """ Handle getting an attribute of an object - """ - if isinstance(t.expr, (Div, Mul, Sub, Add)): - self._write('(') - self._dispatch(t.expr) - self._write(')') - else: - self._dispatch(t.expr) - - self._write('.'+t.attrname) - - def _If(self, t): - self._fill() - - for i, (compare,code) in enumerate(t.tests): - if i == 0: - self._write("if ") - else: - self._write("elif ") - self._dispatch(compare) - self._enter() - self._fill() - self._dispatch(code) - self._leave() - self._write("\n") - - if t.else_ is not None: - self._write("else") - self._enter() - self._fill() - self._dispatch(t.else_) - self._leave() - self._write("\n") - - def _IfExp(self, t): - self._dispatch(t.then) - self._write(" if ") - self._dispatch(t.test) - - if t.else_ is not None: - self._write(" else (") - self._dispatch(t.else_) - self._write(")") - - def _Import(self, t): - """ Handle "import xyz.foo". 
- """ - self._fill("import ") - - for i, (name,asname) in enumerate(t.names): - if i != 0: - self._write(", ") - self._write(name) - if asname is not None: - self._write(" as "+asname) - - def _Keyword(self, t): - """ Keyword value assignment within function calls and definitions. - """ - self._write(t.name) - self._write("=") - self._dispatch(t.expr) - - def _List(self, t): - self._write("[") - for i,node in enumerate(t.nodes): - self._dispatch(node) - if i < len(t.nodes)-1: - self._write(", ") - self._write("]") - - def _Module(self, t): - if t.doc is not None: - self._dispatch(t.doc) - self._dispatch(t.node) - - def _Mul(self, t): - self.__binary_op(t, '*') - - def _Name(self, t): - self._write(t.name) - - def _NoneType(self, t): - self._write("None") - - def _Not(self, t): - self._write('not (') - self._dispatch(t.expr) - self._write(')') - - def _Or(self, t): - self._write(" (") - for i, node in enumerate(t.nodes): - self._dispatch(node) - if i != len(t.nodes)-1: - self._write(") or (") - self._write(")") - - def _Pass(self, t): - self._write("pass\n") - - def _Printnl(self, t): - self._fill("print ") - if t.dest: - self._write(">> ") - self._dispatch(t.dest) - self._write(", ") - comma = False - for node in t.nodes: - if comma: self._write(', ') - else: comma = True - self._dispatch(node) - - def _Power(self, t): - self.__binary_op(t, '**') - - def _Return(self, t): - self._fill("return ") - if t.value: - if isinstance(t.value, Tuple): - text = ', '.join([ name.name for name in t.value.asList() ]) - self._write(text) - else: - self._dispatch(t.value) - if not self._do_indent: - self._write('; ') - - def _Slice(self, t): - self._dispatch(t.expr) - self._write("[") - if t.lower: - self._dispatch(t.lower) - self._write(":") - if t.upper: - self._dispatch(t.upper) - #if t.step: - # self._write(":") - # self._dispatch(t.step) - self._write("]") - - def _Sliceobj(self, t): - for i, node in enumerate(t.nodes): - if i != 0: - self._write(":") - if not 
(isinstance(node, Const) and node.value is None): - self._dispatch(node) - - def _Stmt(self, tree): - for node in tree.nodes: - self._dispatch(node) - - def _Sub(self, t): - self.__binary_op(t, '-') - - def _Subscript(self, t): - self._dispatch(t.expr) - self._write("[") - for i, value in enumerate(t.subs): - if i != 0: - self._write(",") - self._dispatch(value) - self._write("]") - - def _TryExcept(self, t): - self._fill("try") - self._enter() - self._dispatch(t.body) - self._leave() - - for handler in t.handlers: - self._fill('except ') - self._dispatch(handler[0]) - if handler[1] is not None: - self._write(', ') - self._dispatch(handler[1]) - self._enter() - self._dispatch(handler[2]) - self._leave() - - if t.else_: - self._fill("else") - self._enter() - self._dispatch(t.else_) - self._leave() - - def _Tuple(self, t): - - if not t.nodes: - # Empty tuple. - self._write("()") - else: - self._write("(") - - # _write each elements, separated by a comma. - for element in t.nodes[:-1]: - self._dispatch(element) - self._write(", ") - - # Handle the last one without writing comma - last_element = t.nodes[-1] - self._dispatch(last_element) - - self._write(")") - - def _UnaryAdd(self, t): - self._write("+") - self._dispatch(t.expr) - - def _UnarySub(self, t): - self._write("-") - self._dispatch(t.expr) - - def _With(self, t): - self._fill('with ') - self._dispatch(t.expr) - if t.vars: - self._write(' as ') - self._dispatch(t.vars.name) - self._enter() - self._dispatch(t.body) - self._leave() - self._write('\n') - - def _int(self, t): - self._write(repr(t)) - - def __binary_op(self, t, symbol): - # Check if parenthesis are needed on left side and then dispatch - has_paren = False - left_class = str(t.left.__class__) - if (left_class in op_precedence.keys() and - op_precedence[left_class] < op_precedence[str(t.__class__)]): - has_paren = True - if has_paren: - self._write('(') - self._dispatch(t.left) - if has_paren: - self._write(')') - # Write the appropriate symbol for 
operator - self._write(symbol) - # Check if parenthesis are needed on the right side and then dispatch - has_paren = False - right_class = str(t.right.__class__) - if (right_class in op_precedence.keys() and - op_precedence[right_class] < op_precedence[str(t.__class__)]): - has_paren = True - if has_paren: - self._write('(') - self._dispatch(t.right) - if has_paren: - self._write(')') - - def _float(self, t): - # if t is 0.1, str(t)->'0.1' while repr(t)->'0.1000000000001' - # We prefer str here. - self._write(str(t)) - - def _str(self, t): - self._write(repr(t)) - - def _tuple(self, t): - self._write(str(t)) - - ######################################################################### - # These are the methods from the _ast modules unparse. - # - # As our needs to handle more advanced code increase, we may want to - # modify some of the methods below so that they work for compiler.ast. - ######################################################################### - -# # stmt -# def _Expr(self, tree): -# self._fill() -# self._dispatch(tree.value) -# -# def _Import(self, t): -# self._fill("import ") -# first = True -# for a in t.names: -# if first: -# first = False -# else: -# self._write(", ") -# self._write(a.name) -# if a.asname: -# self._write(" as "+a.asname) -# -## def _ImportFrom(self, t): -## self._fill("from ") -## self._write(t.module) -## self._write(" import ") -## for i, a in enumerate(t.names): -## if i == 0: -## self._write(", ") -## self._write(a.name) -## if a.asname: -## self._write(" as "+a.asname) -## # XXX(jpe) what is level for? 
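The precedence table and the parenthesisation test in `__binary_op` above reduce to one small predicate: a child expression is wrapped only when its operator binds more loosely than its parent's. Restated with bare class names (the original keys on full `compiler.ast.*` strings):

```python
OP_PRECEDENCE = {'Power': 3, 'Mul': 2, 'Div': 2, 'Add': 1, 'Sub': 1}

def needs_paren(parent_op, child_op):
    """True if the child operand must be parenthesised under the parent."""
    return (child_op in OP_PRECEDENCE
            and OP_PRECEDENCE[child_op] < OP_PRECEDENCE[parent_op])
```

So an `Add` child under a `Mul` parent is wrapped, reproducing `(a + b) * c`, while a `Power` child under `Mul` stays bare; operators absent from the table are never wrapped, mirroring the original's lookup.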
-## -# -# def _Break(self, t): -# self._fill("break") -# -# def _Continue(self, t): -# self._fill("continue") -# -# def _Delete(self, t): -# self._fill("del ") -# self._dispatch(t.targets) -# -# def _Assert(self, t): -# self._fill("assert ") -# self._dispatch(t.test) -# if t.msg: -# self._write(", ") -# self._dispatch(t.msg) -# -# def _Exec(self, t): -# self._fill("exec ") -# self._dispatch(t.body) -# if t.globals: -# self._write(" in ") -# self._dispatch(t.globals) -# if t.locals: -# self._write(", ") -# self._dispatch(t.locals) -# -# def _Print(self, t): -# self._fill("print ") -# do_comma = False -# if t.dest: -# self._write(">>") -# self._dispatch(t.dest) -# do_comma = True -# for e in t.values: -# if do_comma:self._write(", ") -# else:do_comma=True -# self._dispatch(e) -# if not t.nl: -# self._write(",") -# -# def _Global(self, t): -# self._fill("global") -# for i, n in enumerate(t.names): -# if i != 0: -# self._write(",") -# self._write(" " + n) -# -# def _Yield(self, t): -# self._fill("yield") -# if t.value: -# self._write(" (") -# self._dispatch(t.value) -# self._write(")") -# -# def _Raise(self, t): -# self._fill('raise ') -# if t.type: -# self._dispatch(t.type) -# if t.inst: -# self._write(", ") -# self._dispatch(t.inst) -# if t.tback: -# self._write(", ") -# self._dispatch(t.tback) -# -# -# def _TryFinally(self, t): -# self._fill("try") -# self._enter() -# self._dispatch(t.body) -# self._leave() -# -# self._fill("finally") -# self._enter() -# self._dispatch(t.finalbody) -# self._leave() -# -# def _excepthandler(self, t): -# self._fill("except ") -# if t.type: -# self._dispatch(t.type) -# if t.name: -# self._write(", ") -# self._dispatch(t.name) -# self._enter() -# self._dispatch(t.body) -# self._leave() -# -# def _ClassDef(self, t): -# self._write("\n") -# self._fill("class "+t.name) -# if t.bases: -# self._write("(") -# for a in t.bases: -# self._dispatch(a) -# self._write(", ") -# self._write(")") -# self._enter() -# self._dispatch(t.body) -# 
self._leave() -# -# def _FunctionDef(self, t): -# self._write("\n") -# for deco in t.decorators: -# self._fill("@") -# self._dispatch(deco) -# self._fill("def "+t.name + "(") -# self._dispatch(t.args) -# self._write(")") -# self._enter() -# self._dispatch(t.body) -# self._leave() -# -# def _For(self, t): -# self._fill("for ") -# self._dispatch(t.target) -# self._write(" in ") -# self._dispatch(t.iter) -# self._enter() -# self._dispatch(t.body) -# self._leave() -# if t.orelse: -# self._fill("else") -# self._enter() -# self._dispatch(t.orelse) -# self._leave -# -# def _While(self, t): -# self._fill("while ") -# self._dispatch(t.test) -# self._enter() -# self._dispatch(t.body) -# self._leave() -# if t.orelse: -# self._fill("else") -# self._enter() -# self._dispatch(t.orelse) -# self._leave -# -# # expr -# def _Str(self, tree): -# self._write(repr(tree.s)) -## -# def _Repr(self, t): -# self._write("`") -# self._dispatch(t.value) -# self._write("`") -# -# def _Num(self, t): -# self._write(repr(t.n)) -# -# def _ListComp(self, t): -# self._write("[") -# self._dispatch(t.elt) -# for gen in t.generators: -# self._dispatch(gen) -# self._write("]") -# -# def _GeneratorExp(self, t): -# self._write("(") -# self._dispatch(t.elt) -# for gen in t.generators: -# self._dispatch(gen) -# self._write(")") -# -# def _comprehension(self, t): -# self._write(" for ") -# self._dispatch(t.target) -# self._write(" in ") -# self._dispatch(t.iter) -# for if_clause in t.ifs: -# self._write(" if ") -# self._dispatch(if_clause) -# -# def _IfExp(self, t): -# self._dispatch(t.body) -# self._write(" if ") -# self._dispatch(t.test) -# if t.orelse: -# self._write(" else ") -# self._dispatch(t.orelse) -# -# unop = {"Invert":"~", "Not": "not", "UAdd":"+", "USub":"-"} -# def _UnaryOp(self, t): -# self._write(self.unop[t.op.__class__.__name__]) -# self._write("(") -# self._dispatch(t.operand) -# self._write(")") -# -# binop = { "Add":"+", "Sub":"-", "Mult":"*", "Div":"/", "Mod":"%", -# "LShift":">>", 
"RShift":"<<", "BitOr":"|", "BitXor":"^", "BitAnd":"&", -# "FloorDiv":"//", "Pow": "**"} -# def _BinOp(self, t): -# self._write("(") -# self._dispatch(t.left) -# self._write(")" + self.binop[t.op.__class__.__name__] + "(") -# self._dispatch(t.right) -# self._write(")") -# -# boolops = {_ast.And: 'and', _ast.Or: 'or'} -# def _BoolOp(self, t): -# self._write("(") -# self._dispatch(t.values[0]) -# for v in t.values[1:]: -# self._write(" %s " % self.boolops[t.op.__class__]) -# self._dispatch(v) -# self._write(")") -# -# def _Attribute(self,t): -# self._dispatch(t.value) -# self._write(".") -# self._write(t.attr) -# -## def _Call(self, t): -## self._dispatch(t.func) -## self._write("(") -## comma = False -## for e in t.args: -## if comma: self._write(", ") -## else: comma = True -## self._dispatch(e) -## for e in t.keywords: -## if comma: self._write(", ") -## else: comma = True -## self._dispatch(e) -## if t.starargs: -## if comma: self._write(", ") -## else: comma = True -## self._write("*") -## self._dispatch(t.starargs) -## if t.kwargs: -## if comma: self._write(", ") -## else: comma = True -## self._write("**") -## self._dispatch(t.kwargs) -## self._write(")") -# -# # slice -# def _Index(self, t): -# self._dispatch(t.value) -# -# def _ExtSlice(self, t): -# for i, d in enumerate(t.dims): -# if i != 0: -# self._write(': ') -# self._dispatch(d) -# -# # others -# def _arguments(self, t): -# first = True -# nonDef = len(t.args)-len(t.defaults) -# for a in t.args[0:nonDef]: -# if first:first = False -# else: self._write(", ") -# self._dispatch(a) -# for a,d in zip(t.args[nonDef:], t.defaults): -# if first:first = False -# else: self._write(", ") -# self._dispatch(a), -# self._write("=") -# self._dispatch(d) -# if t.vararg: -# if first:first = False -# else: self._write(", ") -# self._write("*"+t.vararg) -# if t.kwarg: -# if first:first = False -# else: self._write(", ") -# self._write("**"+t.kwarg) -# -## def _keyword(self, t): -## self._write(t.arg) -## self._write("=") 
-## self._dispatch(t.value) -# -# def _Lambda(self, t): -# self._write("lambda ") -# self._dispatch(t.args) -# self._write(": ") -# self._dispatch(t.body) - - - diff --git a/pythonPackages/numpy/doc/sphinxext/docscrape.py b/pythonPackages/numpy/doc/sphinxext/docscrape.py deleted file mode 100755 index ad5998cc6a..0000000000 --- a/pythonPackages/numpy/doc/sphinxext/docscrape.py +++ /dev/null @@ -1,498 +0,0 @@ -"""Extract reference documentation from the NumPy source tree. - -""" - -import inspect -import textwrap -import re -import pydoc -from StringIO import StringIO -from warnings import warn - -class Reader(object): - """A line-based string reader. - - """ - def __init__(self, data): - """ - Parameters - ---------- - data : str - String with lines separated by '\n'. - - """ - if isinstance(data,list): - self._str = data - else: - self._str = data.split('\n') # store string as list of lines - - self.reset() - - def __getitem__(self, n): - return self._str[n] - - def reset(self): - self._l = 0 # current line nr - - def read(self): - if not self.eof(): - out = self[self._l] - self._l += 1 - return out - else: - return '' - - def seek_next_non_empty_line(self): - for l in self[self._l:]: - if l.strip(): - break - else: - self._l += 1 - - def eof(self): - return self._l >= len(self._str) - - def read_to_condition(self, condition_func): - start = self._l - for line in self[start:]: - if condition_func(line): - return self[start:self._l] - self._l += 1 - if self.eof(): - return self[start:self._l+1] - return [] - - def read_to_next_empty_line(self): - self.seek_next_non_empty_line() - def is_empty(line): - return not line.strip() - return self.read_to_condition(is_empty) - - def read_to_next_unindented_line(self): - def is_unindented(line): - return (line.strip() and (len(line.lstrip()) == len(line))) - return self.read_to_condition(is_unindented) - - def peek(self,n=0): - if self._l + n < len(self._str): - return self[self._l + n] - else: - return '' - - def 
is_empty(self): - return not ''.join(self._str).strip() - - -class NumpyDocString(object): - def __init__(self, docstring, config={}): - docstring = textwrap.dedent(docstring).split('\n') - - self._doc = Reader(docstring) - self._parsed_data = { - 'Signature': '', - 'Summary': [''], - 'Extended Summary': [], - 'Parameters': [], - 'Returns': [], - 'Raises': [], - 'Warns': [], - 'Other Parameters': [], - 'Attributes': [], - 'Methods': [], - 'See Also': [], - 'Notes': [], - 'Warnings': [], - 'References': '', - 'Examples': '', - 'index': {} - } - - self._parse() - - def __getitem__(self,key): - return self._parsed_data[key] - - def __setitem__(self,key,val): - if not self._parsed_data.has_key(key): - warn("Unknown section %s" % key) - else: - self._parsed_data[key] = val - - def _is_at_section(self): - self._doc.seek_next_non_empty_line() - - if self._doc.eof(): - return False - - l1 = self._doc.peek().strip() # e.g. Parameters - - if l1.startswith('.. index::'): - return True - - l2 = self._doc.peek(1).strip() # ---------- or ========== - return l2.startswith('-'*len(l1)) or l2.startswith('='*len(l1)) - - def _strip(self,doc): - i = 0 - j = 0 - for i,line in enumerate(doc): - if line.strip(): break - - for j,line in enumerate(doc[::-1]): - if line.strip(): break - - return doc[i:len(doc)-j] - - def _read_to_next_section(self): - section = self._doc.read_to_next_empty_line() - - while not self._is_at_section() and not self._doc.eof(): - if not self._doc.peek(-1).strip(): # previous line was empty - section += [''] - - section += self._doc.read_to_next_empty_line() - - return section - - def _read_sections(self): - while not self._doc.eof(): - data = self._read_to_next_section() - name = data[0].strip() - - if name.startswith('..'): # index section - yield name, data[1:] - elif len(data) < 2: - yield StopIteration - else: - yield name, self._strip(data[2:]) - - def _parse_param_list(self,content): - r = Reader(content) - params = [] - while not r.eof(): - header = 
r.read().strip() - if ' : ' in header: - arg_name, arg_type = header.split(' : ')[:2] - else: - arg_name, arg_type = header, '' - - desc = r.read_to_next_unindented_line() - desc = dedent_lines(desc) - - params.append((arg_name,arg_type,desc)) - - return params - - - _name_rgx = re.compile(r"^\s*(:(?P\w+):`(?P[a-zA-Z0-9_.-]+)`|" - r" (?P[a-zA-Z0-9_.-]+))\s*", re.X) - def _parse_see_also(self, content): - """ - func_name : Descriptive text - continued text - another_func_name : Descriptive text - func_name1, func_name2, :meth:`func_name`, func_name3 - - """ - items = [] - - def parse_item_name(text): - """Match ':role:`name`' or 'name'""" - m = self._name_rgx.match(text) - if m: - g = m.groups() - if g[1] is None: - return g[3], None - else: - return g[2], g[1] - raise ValueError("%s is not a item name" % text) - - def push_item(name, rest): - if not name: - return - name, role = parse_item_name(name) - items.append((name, list(rest), role)) - del rest[:] - - current_func = None - rest = [] - - for line in content: - if not line.strip(): continue - - m = self._name_rgx.match(line) - if m and line[m.end():].strip().startswith(':'): - push_item(current_func, rest) - current_func, line = line[:m.end()], line[m.end():] - rest = [line.split(':', 1)[1].strip()] - if not rest[0]: - rest = [] - elif not line.startswith(' '): - push_item(current_func, rest) - current_func = None - if ',' in line: - for func in line.split(','): - push_item(func, []) - elif line.strip(): - current_func = line - elif current_func is not None: - rest.append(line.strip()) - push_item(current_func, rest) - return items - - def _parse_index(self, section, content): - """ - .. 
index: default - :refguide: something, else, and more - - """ - def strip_each_in(lst): - return [s.strip() for s in lst] - - out = {} - section = section.split('::') - if len(section) > 1: - out['default'] = strip_each_in(section[1].split(','))[0] - for line in content: - line = line.split(':') - if len(line) > 2: - out[line[1]] = strip_each_in(line[2].split(',')) - return out - - def _parse_summary(self): - """Grab signature (if given) and summary""" - if self._is_at_section(): - return - - summary = self._doc.read_to_next_empty_line() - summary_str = " ".join([s.strip() for s in summary]).strip() - if re.compile('^([\w., ]+=)?\s*[\w\.]+\(.*\)$').match(summary_str): - self['Signature'] = summary_str - if not self._is_at_section(): - self['Summary'] = self._doc.read_to_next_empty_line() - else: - self['Summary'] = summary - - if not self._is_at_section(): - self['Extended Summary'] = self._read_to_next_section() - - def _parse(self): - self._doc.reset() - self._parse_summary() - - for (section,content) in self._read_sections(): - if not section.startswith('..'): - section = ' '.join([s.capitalize() for s in section.split(' ')]) - if section in ('Parameters', 'Attributes', 'Methods', - 'Returns', 'Raises', 'Warns'): - self[section] = self._parse_param_list(content) - elif section.startswith('.. 
index::'): - self['index'] = self._parse_index(section, content) - elif section == 'See Also': - self['See Also'] = self._parse_see_also(content) - else: - self[section] = content - - # string conversion routines - - def _str_header(self, name, symbol='-'): - return [name, len(name)*symbol] - - def _str_indent(self, doc, indent=4): - out = [] - for line in doc: - out += [' '*indent + line] - return out - - def _str_signature(self): - if self['Signature']: - return [self['Signature'].replace('*','\*')] + [''] - else: - return [''] - - def _str_summary(self): - if self['Summary']: - return self['Summary'] + [''] - else: - return [] - - def _str_extended_summary(self): - if self['Extended Summary']: - return self['Extended Summary'] + [''] - else: - return [] - - def _str_param_list(self, name): - out = [] - if self[name]: - out += self._str_header(name) - for param,param_type,desc in self[name]: - out += ['%s : %s' % (param, param_type)] - out += self._str_indent(desc) - out += [''] - return out - - def _str_section(self, name): - out = [] - if self[name]: - out += self._str_header(name) - out += self[name] - out += [''] - return out - - def _str_see_also(self, func_role): - if not self['See Also']: return [] - out = [] - out += self._str_header("See Also") - last_had_desc = True - for func, desc, role in self['See Also']: - if role: - link = ':%s:`%s`' % (role, func) - elif func_role: - link = ':%s:`%s`' % (func_role, func) - else: - link = "`%s`_" % func - if desc or last_had_desc: - out += [''] - out += [link] - else: - out[-1] += ", %s" % link - if desc: - out += self._str_indent([' '.join(desc)]) - last_had_desc = True - else: - last_had_desc = False - out += [''] - return out - - def _str_index(self): - idx = self['index'] - out = [] - out += ['.. 
index:: %s' % idx.get('default','')] - for section, references in idx.iteritems(): - if section == 'default': - continue - out += [' :%s: %s' % (section, ', '.join(references))] - return out - - def __str__(self, func_role=''): - out = [] - out += self._str_signature() - out += self._str_summary() - out += self._str_extended_summary() - for param_list in ('Parameters','Returns','Raises'): - out += self._str_param_list(param_list) - out += self._str_section('Warnings') - out += self._str_see_also(func_role) - for s in ('Notes','References','Examples'): - out += self._str_section(s) - for param_list in ('Attributes', 'Methods'): - out += self._str_param_list(param_list) - out += self._str_index() - return '\n'.join(out) - - -def indent(str,indent=4): - indent_str = ' '*indent - if str is None: - return indent_str - lines = str.split('\n') - return '\n'.join(indent_str + l for l in lines) - -def dedent_lines(lines): - """Deindent a list of lines maximally""" - return textwrap.dedent("\n".join(lines)).split("\n") - -def header(text, style='-'): - return text + '\n' + style*len(text) + '\n' - - -class FunctionDoc(NumpyDocString): - def __init__(self, func, role='func', doc=None, config={}): - self._f = func - self._role = role # e.g. 
"func" or "meth" - - if doc is None: - if func is None: - raise ValueError("No function or docstring given") - doc = inspect.getdoc(func) or '' - NumpyDocString.__init__(self, doc) - - if not self['Signature'] and func is not None: - func, func_name = self.get_func() - try: - # try to read signature - argspec = inspect.getargspec(func) - argspec = inspect.formatargspec(*argspec) - argspec = argspec.replace('*','\*') - signature = '%s%s' % (func_name, argspec) - except TypeError, e: - signature = '%s()' % func_name - self['Signature'] = signature - - def get_func(self): - func_name = getattr(self._f, '__name__', self.__class__.__name__) - if inspect.isclass(self._f): - func = getattr(self._f, '__call__', self._f.__init__) - else: - func = self._f - return func, func_name - - def __str__(self): - out = '' - - func, func_name = self.get_func() - signature = self['Signature'].replace('*', '\*') - - roles = {'func': 'function', - 'meth': 'method'} - - if self._role: - if not roles.has_key(self._role): - print "Warning: invalid role %s" % self._role - out += '.. %s:: %s\n \n\n' % (roles.get(self._role,''), - func_name) - - out += super(FunctionDoc, self).__str__(func_role=self._role) - return out - - -class ClassDoc(NumpyDocString): - def __init__(self, cls, doc=None, modulename='', func_doc=FunctionDoc, - config={}): - if not inspect.isclass(cls) and cls is not None: - raise ValueError("Expected a class or None, but got %r" % cls) - self._cls = cls - - if modulename and not modulename.endswith('.'): - modulename += '.' 
- self._mod = modulename - - if doc is None: - if cls is None: - raise ValueError("No class or documentation string given") - doc = pydoc.getdoc(cls) - - NumpyDocString.__init__(self, doc) - - if config.get('show_class_members', True): - if not self['Methods']: - self['Methods'] = [(name, '', '') - for name in sorted(self.methods)] - if not self['Attributes']: - self['Attributes'] = [(name, '', '') - for name in sorted(self.properties)] - - @property - def methods(self): - if self._cls is None: - return [] - return [name for name,func in inspect.getmembers(self._cls) - if not name.startswith('_') and callable(func)] - - @property - def properties(self): - if self._cls is None: - return [] - return [name for name,func in inspect.getmembers(self._cls) - if not name.startswith('_') and func is None] diff --git a/pythonPackages/numpy/doc/sphinxext/docscrape_sphinx.py b/pythonPackages/numpy/doc/sphinxext/docscrape_sphinx.py deleted file mode 100755 index 9f4350d460..0000000000 --- a/pythonPackages/numpy/doc/sphinxext/docscrape_sphinx.py +++ /dev/null @@ -1,226 +0,0 @@ -import re, inspect, textwrap, pydoc -import sphinx -from docscrape import NumpyDocString, FunctionDoc, ClassDoc - -class SphinxDocString(NumpyDocString): - def __init__(self, docstring, config={}): - self.use_plots = config.get('use_plots', False) - NumpyDocString.__init__(self, docstring, config=config) - - # string conversion routines - def _str_header(self, name, symbol='`'): - return ['.. 
rubric:: ' + name, ''] - - def _str_field_list(self, name): - return [':' + name + ':'] - - def _str_indent(self, doc, indent=4): - out = [] - for line in doc: - out += [' '*indent + line] - return out - - def _str_signature(self): - return [''] - if self['Signature']: - return ['``%s``' % self['Signature']] + [''] - else: - return [''] - - def _str_summary(self): - return self['Summary'] + [''] - - def _str_extended_summary(self): - return self['Extended Summary'] + [''] - - def _str_param_list(self, name): - out = [] - if self[name]: - out += self._str_field_list(name) - out += [''] - for param,param_type,desc in self[name]: - out += self._str_indent(['**%s** : %s' % (param.strip(), - param_type)]) - out += [''] - out += self._str_indent(desc,8) - out += [''] - return out - - @property - def _obj(self): - if hasattr(self, '_cls'): - return self._cls - elif hasattr(self, '_f'): - return self._f - return None - - def _str_member_list(self, name): - """ - Generate a member listing, autosummary:: table where possible, - and a table where not. - - """ - out = [] - if self[name]: - out += ['.. rubric:: %s' % name, ''] - prefix = getattr(self, '_name', '') - - if prefix: - prefix = '~%s.' % prefix - - autosum = [] - others = [] - for param, param_type, desc in self[name]: - param = param.strip() - if not self._obj or hasattr(self._obj, param): - autosum += [" %s%s" % (prefix, param)] - else: - others.append((param, param_type, desc)) - - if autosum: - out += ['.. 
autosummary::', ' :toctree:', ''] - out += autosum - - if others: - maxlen_0 = max([len(x[0]) for x in others]) - maxlen_1 = max([len(x[1]) for x in others]) - hdr = "="*maxlen_0 + " " + "="*maxlen_1 + " " + "="*10 - fmt = '%%%ds %%%ds ' % (maxlen_0, maxlen_1) - n_indent = maxlen_0 + maxlen_1 + 4 - out += [hdr] - for param, param_type, desc in others: - out += [fmt % (param.strip(), param_type)] - out += self._str_indent(desc, n_indent) - out += [hdr] - out += [''] - return out - - def _str_section(self, name): - out = [] - if self[name]: - out += self._str_header(name) - out += [''] - content = textwrap.dedent("\n".join(self[name])).split("\n") - out += content - out += [''] - return out - - def _str_see_also(self, func_role): - out = [] - if self['See Also']: - see_also = super(SphinxDocString, self)._str_see_also(func_role) - out = ['.. seealso::', ''] - out += self._str_indent(see_also[2:]) - return out - - def _str_warnings(self): - out = [] - if self['Warnings']: - out = ['.. warning::', ''] - out += self._str_indent(self['Warnings']) - return out - - def _str_index(self): - idx = self['index'] - out = [] - if len(idx) == 0: - return out - - out += ['.. index:: %s' % idx.get('default','')] - for section, references in idx.iteritems(): - if section == 'default': - continue - elif section == 'refguide': - out += [' single: %s' % (', '.join(references))] - else: - out += [' %s: %s' % (section, ','.join(references))] - return out - - def _str_references(self): - out = [] - if self['References']: - out += self._str_header('References') - if isinstance(self['References'], str): - self['References'] = [self['References']] - out.extend(self['References']) - out += [''] - # Latex collects all references to a separate bibliography, - # so we need to insert links to it - if sphinx.__version__ >= "0.6": - out += ['.. only:: latex',''] - else: - out += ['.. latexonly::',''] - items = [] - for line in self['References']: - m = re.match(r'.. 
\[([a-z0-9._-]+)\]', line, re.I) - if m: - items.append(m.group(1)) - out += [' ' + ", ".join(["[%s]_" % item for item in items]), ''] - return out - - def _str_examples(self): - examples_str = "\n".join(self['Examples']) - - if (self.use_plots and 'import matplotlib' in examples_str - and 'plot::' not in examples_str): - out = [] - out += self._str_header('Examples') - out += ['.. plot::', ''] - out += self._str_indent(self['Examples']) - out += [''] - return out - else: - return self._str_section('Examples') - - def __str__(self, indent=0, func_role="obj"): - out = [] - out += self._str_signature() - out += self._str_index() + [''] - out += self._str_summary() - out += self._str_extended_summary() - for param_list in ('Parameters', 'Returns', 'Raises'): - out += self._str_param_list(param_list) - out += self._str_warnings() - out += self._str_see_also(func_role) - out += self._str_section('Notes') - out += self._str_references() - out += self._str_examples() - for param_list in ('Attributes', 'Methods'): - out += self._str_member_list(param_list) - out = self._str_indent(out,indent) - return '\n'.join(out) - -class SphinxFunctionDoc(SphinxDocString, FunctionDoc): - def __init__(self, obj, doc=None, config={}): - self.use_plots = config.get('use_plots', False) - FunctionDoc.__init__(self, obj, doc=doc, config=config) - -class SphinxClassDoc(SphinxDocString, ClassDoc): - def __init__(self, obj, doc=None, func_doc=None, config={}): - self.use_plots = config.get('use_plots', False) - ClassDoc.__init__(self, obj, doc=doc, func_doc=None, config=config) - -class SphinxObjDoc(SphinxDocString): - def __init__(self, obj, doc=None, config={}): - self._f = obj - SphinxDocString.__init__(self, doc, config=config) - -def get_doc_object(obj, what=None, doc=None, config={}): - if what is None: - if inspect.isclass(obj): - what = 'class' - elif inspect.ismodule(obj): - what = 'module' - elif callable(obj): - what = 'function' - else: - what = 'object' - if what == 'class': - 
return SphinxClassDoc(obj, func_doc=SphinxFunctionDoc, doc=doc, - config=config) - elif what in ('function', 'method'): - return SphinxFunctionDoc(obj, doc=doc, config=config) - else: - if doc is None: - doc = pydoc.getdoc(obj) - return SphinxObjDoc(obj, doc, config=config) diff --git a/pythonPackages/numpy/doc/sphinxext/numpydoc.py b/pythonPackages/numpy/doc/sphinxext/numpydoc.py deleted file mode 100755 index aa390056b3..0000000000 --- a/pythonPackages/numpy/doc/sphinxext/numpydoc.py +++ /dev/null @@ -1,164 +0,0 @@ -""" -======== -numpydoc -======== - -Sphinx extension that handles docstrings in the Numpy standard format. [1] - -It will: - -- Convert Parameters etc. sections to field lists. -- Convert See Also section to a See also entry. -- Renumber references. -- Extract the signature from the docstring, if it can't be determined otherwise. - -.. [1] http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines#docstring-standard - -""" - -import os, re, pydoc -from docscrape_sphinx import get_doc_object, SphinxDocString -from sphinx.util.compat import Directive -import inspect - -def mangle_docstrings(app, what, name, obj, options, lines, - reference_offset=[0]): - - cfg = dict(use_plots=app.config.numpydoc_use_plots, - show_class_members=app.config.numpydoc_show_class_members) - - if what == 'module': - # Strip top title - title_re = re.compile(ur'^\s*[#*=]{4,}\n[a-z0-9 -]+\n[#*=]{4,}\s*', - re.I|re.S) - lines[:] = title_re.sub(u'', u"\n".join(lines)).split(u"\n") - else: - doc = get_doc_object(obj, what, u"\n".join(lines), config=cfg) - lines[:] = unicode(doc).split(u"\n") - - if app.config.numpydoc_edit_link and hasattr(obj, '__name__') and \ - obj.__name__: - if hasattr(obj, '__module__'): - v = dict(full_name=u"%s.%s" % (obj.__module__, obj.__name__)) - else: - v = dict(full_name=obj.__name__) - lines += [u'', u'.. 
htmlonly::', ''] - lines += [u' %s' % x for x in - (app.config.numpydoc_edit_link % v).split("\n")] - - # replace reference numbers so that there are no duplicates - references = [] - for line in lines: - line = line.strip() - m = re.match(ur'^.. \[([a-z0-9_.-])\]', line, re.I) - if m: - references.append(m.group(1)) - - # start renaming from the longest string, to avoid overwriting parts - references.sort(key=lambda x: -len(x)) - if references: - for i, line in enumerate(lines): - for r in references: - if re.match(ur'^\d+$', r): - new_r = u"R%d" % (reference_offset[0] + int(r)) - else: - new_r = u"%s%d" % (r, reference_offset[0]) - lines[i] = lines[i].replace(u'[%s]_' % r, - u'[%s]_' % new_r) - lines[i] = lines[i].replace(u'.. [%s]' % r, - u'.. [%s]' % new_r) - - reference_offset[0] += len(references) - -def mangle_signature(app, what, name, obj, options, sig, retann): - # Do not try to inspect classes that don't define `__init__` - if (inspect.isclass(obj) and - (not hasattr(obj, '__init__') or - 'initializes x; see ' in pydoc.getdoc(obj.__init__))): - return '', '' - - if not (callable(obj) or hasattr(obj, '__argspec_is_invalid_')): return - if not hasattr(obj, '__doc__'): return - - doc = SphinxDocString(pydoc.getdoc(obj)) - if doc['Signature']: - sig = re.sub(u"^[^(]*", u"", doc['Signature']) - return sig, u'' - -def setup(app, get_doc_object_=get_doc_object): - global get_doc_object - get_doc_object = get_doc_object_ - - app.connect('autodoc-process-docstring', mangle_docstrings) - app.connect('autodoc-process-signature', mangle_signature) - app.add_config_value('numpydoc_edit_link', None, False) - app.add_config_value('numpydoc_use_plots', None, False) - app.add_config_value('numpydoc_show_class_members', True, True) - - # Extra mangling domains - app.add_domain(NumpyPythonDomain) - app.add_domain(NumpyCDomain) - -#------------------------------------------------------------------------------ -# Docstring-mangling domains 
-#------------------------------------------------------------------------------ - -from docutils.statemachine import ViewList -from sphinx.domains.c import CDomain -from sphinx.domains.python import PythonDomain - -class ManglingDomainBase(object): - directive_mangling_map = {} - - def __init__(self, *a, **kw): - super(ManglingDomainBase, self).__init__(*a, **kw) - self.wrap_mangling_directives() - - def wrap_mangling_directives(self): - for name, objtype in self.directive_mangling_map.items(): - self.directives[name] = wrap_mangling_directive( - self.directives[name], objtype) - -class NumpyPythonDomain(ManglingDomainBase, PythonDomain): - name = 'np' - directive_mangling_map = { - 'function': 'function', - 'class': 'class', - 'exception': 'class', - 'method': 'function', - 'classmethod': 'function', - 'staticmethod': 'function', - 'attribute': 'attribute', - } - -class NumpyCDomain(ManglingDomainBase, CDomain): - name = 'np-c' - directive_mangling_map = { - 'function': 'function', - 'member': 'attribute', - 'macro': 'function', - 'type': 'class', - 'var': 'object', - } - -def wrap_mangling_directive(base_directive, objtype): - class directive(base_directive): - def run(self): - env = self.state.document.settings.env - - name = None - if self.arguments: - m = re.match(r'^(.*\s+)?(.*?)(\(.*)?', self.arguments[0]) - name = m.group(2).strip() - - if not name: - name = self.arguments[0] - - lines = list(self.content) - mangle_docstrings(env.app, objtype, name, None, None, lines) - self.content = ViewList(lines, self.content.parent) - - return base_directive.run(self) - - return directive - diff --git a/pythonPackages/numpy/doc/sphinxext/only_directives.py b/pythonPackages/numpy/doc/sphinxext/only_directives.py deleted file mode 100755 index c0dff7e65a..0000000000 --- a/pythonPackages/numpy/doc/sphinxext/only_directives.py +++ /dev/null @@ -1,96 +0,0 @@ -# -# A pair of directives for inserting content that will only appear in -# either html or latex. 
-# - -from docutils.nodes import Body, Element -from docutils.writers.html4css1 import HTMLTranslator -try: - from sphinx.latexwriter import LaTeXTranslator -except ImportError: - from sphinx.writers.latex import LaTeXTranslator - - import warnings - warnings.warn("The numpydoc.only_directives module is deprecated;" - "please use the only:: directive available in Sphinx >= 0.6", - DeprecationWarning, stacklevel=2) - -from docutils.parsers.rst import directives - -class html_only(Body, Element): - pass - -class latex_only(Body, Element): - pass - -def run(content, node_class, state, content_offset): - text = '\n'.join(content) - node = node_class(text) - state.nested_parse(content, content_offset, node) - return [node] - -try: - from docutils.parsers.rst import Directive -except ImportError: - from docutils.parsers.rst.directives import _directives - - def html_only_directive(name, arguments, options, content, lineno, - content_offset, block_text, state, state_machine): - return run(content, html_only, state, content_offset) - - def latex_only_directive(name, arguments, options, content, lineno, - content_offset, block_text, state, state_machine): - return run(content, latex_only, state, content_offset) - - for func in (html_only_directive, latex_only_directive): - func.content = 1 - func.options = {} - func.arguments = None - - _directives['htmlonly'] = html_only_directive - _directives['latexonly'] = latex_only_directive -else: - class OnlyDirective(Directive): - has_content = True - required_arguments = 0 - optional_arguments = 0 - final_argument_whitespace = True - option_spec = {} - - def run(self): - self.assert_has_content() - return run(self.content, self.node_class, - self.state, self.content_offset) - - class HtmlOnlyDirective(OnlyDirective): - node_class = html_only - - class LatexOnlyDirective(OnlyDirective): - node_class = latex_only - - directives.register_directive('htmlonly', HtmlOnlyDirective) - directives.register_directive('latexonly', 
LatexOnlyDirective) - -def setup(app): - app.add_node(html_only) - app.add_node(latex_only) - - # Add visit/depart methods to HTML-Translator: - def visit_perform(self, node): - pass - def depart_perform(self, node): - pass - def visit_ignore(self, node): - node.children = [] - def depart_ignore(self, node): - node.children = [] - - HTMLTranslator.visit_html_only = visit_perform - HTMLTranslator.depart_html_only = depart_perform - HTMLTranslator.visit_latex_only = visit_ignore - HTMLTranslator.depart_latex_only = depart_ignore - - LaTeXTranslator.visit_html_only = visit_ignore - LaTeXTranslator.depart_html_only = depart_ignore - LaTeXTranslator.visit_latex_only = visit_perform - LaTeXTranslator.depart_latex_only = depart_perform diff --git a/pythonPackages/numpy/doc/sphinxext/phantom_import.py b/pythonPackages/numpy/doc/sphinxext/phantom_import.py deleted file mode 100755 index c77eeb544e..0000000000 --- a/pythonPackages/numpy/doc/sphinxext/phantom_import.py +++ /dev/null @@ -1,162 +0,0 @@ -""" -============== -phantom_import -============== - -Sphinx extension to make directives from ``sphinx.ext.autodoc`` and similar -extensions to use docstrings loaded from an XML file. - -This extension loads an XML file in the Pydocweb format [1] and -creates a dummy module that contains the specified docstrings. This -can be used to get the current docstrings from a Pydocweb instance -without needing to rebuild the documented module. - -.. [1] http://code.google.com/p/pydocweb - -""" -import imp, sys, compiler, types, os, inspect, re - -def setup(app): - app.connect('builder-inited', initialize) - app.add_config_value('phantom_import_file', None, True) - -def initialize(app): - fn = app.config.phantom_import_file - if (fn and os.path.isfile(fn)): - print "[numpydoc] Phantom importing modules from", fn, "..." 
- import_phantom_module(fn) - -#------------------------------------------------------------------------------ -# Creating 'phantom' modules from an XML description -#------------------------------------------------------------------------------ -def import_phantom_module(xml_file): - """ - Insert a fake Python module to sys.modules, based on a XML file. - - The XML file is expected to conform to Pydocweb DTD. The fake - module will contain dummy objects, which guarantee the following: - - - Docstrings are correct. - - Class inheritance relationships are correct (if present in XML). - - Function argspec is *NOT* correct (even if present in XML). - Instead, the function signature is prepended to the function docstring. - - Class attributes are *NOT* correct; instead, they are dummy objects. - - Parameters - ---------- - xml_file : str - Name of an XML file to read - - """ - import lxml.etree as etree - - object_cache = {} - - tree = etree.parse(xml_file) - root = tree.getroot() - - # Sort items so that - # - Base classes come before classes inherited from them - # - Modules come before their contents - all_nodes = dict([(n.attrib['id'], n) for n in root]) - - def _get_bases(node, recurse=False): - bases = [x.attrib['ref'] for x in node.findall('base')] - if recurse: - j = 0 - while True: - try: - b = bases[j] - except IndexError: break - if b in all_nodes: - bases.extend(_get_bases(all_nodes[b])) - j += 1 - return bases - - type_index = ['module', 'class', 'callable', 'object'] - - def base_cmp(a, b): - x = cmp(type_index.index(a.tag), type_index.index(b.tag)) - if x != 0: return x - - if a.tag == 'class' and b.tag == 'class': - a_bases = _get_bases(a, recurse=True) - b_bases = _get_bases(b, recurse=True) - x = cmp(len(a_bases), len(b_bases)) - if x != 0: return x - if a.attrib['id'] in b_bases: return -1 - if b.attrib['id'] in a_bases: return 1 - - return cmp(a.attrib['id'].count('.'), b.attrib['id'].count('.')) - - nodes = root.getchildren() - nodes.sort(base_cmp) 
- - # Create phantom items - for node in nodes: - name = node.attrib['id'] - doc = (node.text or '').decode('string-escape') + "\n" - if doc == "\n": doc = "" - - # create parent, if missing - parent = name - while True: - parent = '.'.join(parent.split('.')[:-1]) - if not parent: break - if parent in object_cache: break - obj = imp.new_module(parent) - object_cache[parent] = obj - sys.modules[parent] = obj - - # create object - if node.tag == 'module': - obj = imp.new_module(name) - obj.__doc__ = doc - sys.modules[name] = obj - elif node.tag == 'class': - bases = [object_cache[b] for b in _get_bases(node) - if b in object_cache] - bases.append(object) - init = lambda self: None - init.__doc__ = doc - obj = type(name, tuple(bases), {'__doc__': doc, '__init__': init}) - obj.__name__ = name.split('.')[-1] - elif node.tag == 'callable': - funcname = node.attrib['id'].split('.')[-1] - argspec = node.attrib.get('argspec') - if argspec: - argspec = re.sub('^[^(]*', '', argspec) - doc = "%s%s\n\n%s" % (funcname, argspec, doc) - obj = lambda: 0 - obj.__argspec_is_invalid_ = True - obj.func_name = funcname - obj.__name__ = name - obj.__doc__ = doc - if inspect.isclass(object_cache[parent]): - obj.__objclass__ = object_cache[parent] - else: - class Dummy(object): pass - obj = Dummy() - obj.__name__ = name - obj.__doc__ = doc - if inspect.isclass(object_cache[parent]): - obj.__get__ = lambda: None - object_cache[name] = obj - - if parent: - if inspect.ismodule(object_cache[parent]): - obj.__module__ = parent - setattr(object_cache[parent], name.split('.')[-1], obj) - - # Populate items - for node in root: - obj = object_cache.get(node.attrib['id']) - if obj is None: continue - for ref in node.findall('ref'): - if node.tag == 'class': - if ref.attrib['ref'].startswith(node.attrib['id'] + '.'): - setattr(obj, ref.attrib['name'], - object_cache.get(ref.attrib['ref'])) - else: - setattr(obj, ref.attrib['name'], - object_cache.get(ref.attrib['ref'])) diff --git 
a/pythonPackages/numpy/doc/sphinxext/plot_directive.py b/pythonPackages/numpy/doc/sphinxext/plot_directive.py deleted file mode 100755 index 8de8c73994..0000000000 --- a/pythonPackages/numpy/doc/sphinxext/plot_directive.py +++ /dev/null @@ -1,563 +0,0 @@ -""" -A special directive for generating a matplotlib plot. - -.. warning:: - - This is a hacked version of plot_directive.py from Matplotlib. - It's very much subject to change! - - -Usage ------ - -Can be used like this:: - - .. plot:: examples/example.py - - .. plot:: - - import matplotlib.pyplot as plt - plt.plot([1,2,3], [4,5,6]) - - .. plot:: - - A plotting example: - - >>> import matplotlib.pyplot as plt - >>> plt.plot([1,2,3], [4,5,6]) - -The content is interpreted as doctest formatted if it has a line starting -with ``>>>``. - -The ``plot`` directive supports the options - - format : {'python', 'doctest'} - Specify the format of the input - - include-source : bool - Whether to display the source code. Default can be changed in conf.py - -and the ``image`` directive options ``alt``, ``height``, ``width``, -``scale``, ``align``, ``class``. - -Configuration options ---------------------- - -The plot directive has the following configuration options: - - plot_include_source - Default value for the include-source option - - plot_pre_code - Code that should be executed before each plot. - - plot_basedir - Base directory, to which plot:: file names are relative to. - (If None or empty, file names are relative to the directoly where - the file containing the directive is.) - - plot_formats - File formats to generate. List of tuples or strings:: - - [(suffix, dpi), suffix, ...] - - that determine the file format and the DPI. For entries whose - DPI was omitted, sensible defaults are chosen. - -TODO ----- - -* Refactor Latex output; now it's plain images, but it would be nice - to make them appear side-by-side, or in floats. 
- -""" - -import sys, os, glob, shutil, imp, warnings, cStringIO, re, textwrap, traceback -import sphinx - -import warnings -warnings.warn("A plot_directive module is also available under " - "matplotlib.sphinxext; expect this numpydoc.plot_directive " - "module to be deprecated after relevant features have been " - "integrated there.", - FutureWarning, stacklevel=2) - - -#------------------------------------------------------------------------------ -# Registration hook -#------------------------------------------------------------------------------ - -def setup(app): - setup.app = app - setup.config = app.config - setup.confdir = app.confdir - - app.add_config_value('plot_pre_code', '', True) - app.add_config_value('plot_include_source', False, True) - app.add_config_value('plot_formats', ['png', 'hires.png', 'pdf'], True) - app.add_config_value('plot_basedir', None, True) - - app.add_directive('plot', plot_directive, True, (0, 1, False), - **plot_directive_options) - -#------------------------------------------------------------------------------ -# plot:: directive -#------------------------------------------------------------------------------ -from docutils.parsers.rst import directives -from docutils import nodes - -def plot_directive(name, arguments, options, content, lineno, - content_offset, block_text, state, state_machine): - return run(arguments, content, options, state_machine, state, lineno) -plot_directive.__doc__ = __doc__ - -def _option_boolean(arg): - if not arg or not arg.strip(): - # no argument given, assume used as a flag - return True - elif arg.strip().lower() in ('no', '0', 'false'): - return False - elif arg.strip().lower() in ('yes', '1', 'true'): - return True - else: - raise ValueError('"%s" unknown boolean' % arg) - -def _option_format(arg): - return directives.choice(arg, ('python', 'lisp')) - -def _option_align(arg): - return directives.choice(arg, ("top", "middle", "bottom", "left", "center", - "right")) - -plot_directive_options = 
{'alt': directives.unchanged, - 'height': directives.length_or_unitless, - 'width': directives.length_or_percentage_or_unitless, - 'scale': directives.nonnegative_int, - 'align': _option_align, - 'class': directives.class_option, - 'include-source': _option_boolean, - 'format': _option_format, - } - -#------------------------------------------------------------------------------ -# Generating output -#------------------------------------------------------------------------------ - -from docutils import nodes, utils - -try: - # Sphinx depends on either Jinja or Jinja2 - import jinja2 - def format_template(template, **kw): - return jinja2.Template(template).render(**kw) -except ImportError: - import jinja - def format_template(template, **kw): - return jinja.from_string(template, **kw) - -TEMPLATE = """ -{{ source_code }} - -{{ only_html }} - - {% if source_code %} - (`Source code <{{ source_link }}>`__) - - .. admonition:: Output - :class: plot-output - - {% endif %} - - {% for img in images %} - .. figure:: {{ build_dir }}/{{ img.basename }}.png - {%- for option in options %} - {{ option }} - {% endfor %} - - ( - {%- if not source_code -%} - `Source code <{{source_link}}>`__ - {%- for fmt in img.formats -%} - , `{{ fmt }} <{{ dest_dir }}/{{ img.basename }}.{{ fmt }}>`__ - {%- endfor -%} - {%- else -%} - {%- for fmt in img.formats -%} - {%- if not loop.first -%}, {% endif -%} - `{{ fmt }} <{{ dest_dir }}/{{ img.basename }}.{{ fmt }}>`__ - {%- endfor -%} - {%- endif -%} - ) - {% endfor %} - -{{ only_latex }} - - {% for img in images %} - .. 
image:: {{ build_dir }}/{{ img.basename }}.pdf - {% endfor %} - -""" - -class ImageFile(object): - def __init__(self, basename, dirname): - self.basename = basename - self.dirname = dirname - self.formats = [] - - def filename(self, format): - return os.path.join(self.dirname, "%s.%s" % (self.basename, format)) - - def filenames(self): - return [self.filename(fmt) for fmt in self.formats] - -def run(arguments, content, options, state_machine, state, lineno): - if arguments and content: - raise RuntimeError("plot:: directive can't have both args and content") - - document = state_machine.document - config = document.settings.env.config - - options.setdefault('include-source', config.plot_include_source) - - # determine input - rst_file = document.attributes['source'] - rst_dir = os.path.dirname(rst_file) - - if arguments: - if not config.plot_basedir: - source_file_name = os.path.join(rst_dir, - directives.uri(arguments[0])) - else: - source_file_name = os.path.join(setup.confdir, config.plot_basedir, - directives.uri(arguments[0])) - code = open(source_file_name, 'r').read() - output_base = os.path.basename(source_file_name) - else: - source_file_name = rst_file - code = textwrap.dedent("\n".join(map(str, content))) - counter = document.attributes.get('_plot_counter', 0) + 1 - document.attributes['_plot_counter'] = counter - base, ext = os.path.splitext(os.path.basename(source_file_name)) - output_base = '%s-%d.py' % (base, counter) - - base, source_ext = os.path.splitext(output_base) - if source_ext in ('.py', '.rst', '.txt'): - output_base = base - else: - source_ext = '' - - # ensure that LaTeX includegraphics doesn't choke in foo.bar.pdf filenames - output_base = output_base.replace('.', '-') - - # is it in doctest format? 
- is_doctest = contains_doctest(code) - if options.has_key('format'): - if options['format'] == 'python': - is_doctest = False - else: - is_doctest = True - - # determine output directory name fragment - source_rel_name = relpath(source_file_name, setup.confdir) - source_rel_dir = os.path.dirname(source_rel_name) - while source_rel_dir.startswith(os.path.sep): - source_rel_dir = source_rel_dir[1:] - - # build_dir: where to place output files (temporarily) - build_dir = os.path.join(os.path.dirname(setup.app.doctreedir), - 'plot_directive', - source_rel_dir) - if not os.path.exists(build_dir): - os.makedirs(build_dir) - - # output_dir: final location in the builder's directory - dest_dir = os.path.abspath(os.path.join(setup.app.builder.outdir, - source_rel_dir)) - - # how to link to files from the RST file - dest_dir_link = os.path.join(relpath(setup.confdir, rst_dir), - source_rel_dir).replace(os.path.sep, '/') - build_dir_link = relpath(build_dir, rst_dir).replace(os.path.sep, '/') - source_link = dest_dir_link + '/' + output_base + source_ext - - # make figures - try: - images = makefig(code, source_file_name, build_dir, output_base, - config) - except PlotError, err: - reporter = state.memo.reporter - sm = reporter.system_message( - 3, "Exception occurred in plotting %s: %s" % (output_base, err), - line=lineno) - return [sm] - - # generate output restructuredtext - if options['include-source']: - if is_doctest: - lines = [''] - lines += [row.rstrip() for row in code.split('\n')] - else: - lines = ['.. code-block:: python', ''] - lines += [' %s' % row.rstrip() for row in code.split('\n')] - source_code = "\n".join(lines) - else: - source_code = "" - - opts = [':%s: %s' % (key, val) for key, val in options.items() - if key in ('alt', 'height', 'width', 'scale', 'align', 'class')] - - if sphinx.__version__ >= "0.6": - only_html = ".. only:: html" - only_latex = ".. only:: latex" - else: - only_html = ".. htmlonly::" - only_latex = ".. 
latexonly::" - - result = format_template( - TEMPLATE, - dest_dir=dest_dir_link, - build_dir=build_dir_link, - source_link=source_link, - only_html=only_html, - only_latex=only_latex, - options=opts, - images=images, - source_code=source_code) - - lines = result.split("\n") - if len(lines): - state_machine.insert_input( - lines, state_machine.input_lines.source(0)) - - # copy image files to builder's output directory - if not os.path.exists(dest_dir): - os.makedirs(dest_dir) - - for img in images: - for fn in img.filenames(): - shutil.copyfile(fn, os.path.join(dest_dir, os.path.basename(fn))) - - # copy script (if necessary) - if source_file_name == rst_file: - target_name = os.path.join(dest_dir, output_base + source_ext) - f = open(target_name, 'w') - f.write(unescape_doctest(code)) - f.close() - - return [] - - -#------------------------------------------------------------------------------ -# Run code and capture figures -#------------------------------------------------------------------------------ - -import matplotlib -matplotlib.use('Agg') -import matplotlib.pyplot as plt -import matplotlib.image as image -from matplotlib import _pylab_helpers - -import exceptions - -def contains_doctest(text): - try: - # check if it's valid Python as-is - compile(text, '', 'exec') - return False - except SyntaxError: - pass - r = re.compile(r'^\s*>>>', re.M) - m = r.search(text) - return bool(m) - -def unescape_doctest(text): - """ - Extract code from a piece of text, which contains either Python code - or doctests. - - """ - if not contains_doctest(text): - return text - - code = "" - for line in text.split("\n"): - m = re.match(r'^\s*(>>>|\.\.\.) (.*)$', line) - if m: - code += m.group(2) + "\n" - elif line.strip(): - code += "# " + line.strip() + "\n" - else: - code += "\n" - return code - -class PlotError(RuntimeError): - pass - -def run_code(code, code_path): - # Change the working directory to the directory of the example, so - # it can get at its data files, if any. 
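
The `unescape_doctest` helper above converts a doctest transcript back into plain code: prompt lines keep their payload, other non-blank lines become comments. A standalone sketch, re-typed in Python 3 (the deleted original is Python 2) and with the `contains_doctest` check simplified to a plain `'>>>'` substring test rather than the original's `compile()` probe:

```python
import re

def unescape_doctest(text):
    # Simplified detection: the original first tries compile(text, '', 'exec')
    # and only falls back to looking for '>>>' prompts.
    if '>>>' not in text:
        return text
    code = ""
    for line in text.split("\n"):
        # '>>> expr' and '... expr' lines contribute their payload;
        # other non-blank lines (expected output) become comments.
        m = re.match(r'^\s*(>>>|\.\.\.) (.*)$', line)
        if m:
            code += m.group(2) + "\n"
        elif line.strip():
            code += "# " + line.strip() + "\n"
        else:
            code += "\n"
    return code

# Expected output lines are commented out, code lines survive verbatim.
print(unescape_doctest(">>> x = 1\n>>> x + 1\n2"))
```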
- pwd = os.getcwd() - old_sys_path = list(sys.path) - if code_path is not None: - dirname = os.path.abspath(os.path.dirname(code_path)) - os.chdir(dirname) - sys.path.insert(0, dirname) - - # Redirect stdout - stdout = sys.stdout - sys.stdout = cStringIO.StringIO() - - # Reset sys.argv - old_sys_argv = sys.argv - sys.argv = [code_path] - - try: - try: - code = unescape_doctest(code) - ns = {} - exec setup.config.plot_pre_code in ns - exec code in ns - except (Exception, SystemExit), err: - raise PlotError(traceback.format_exc()) - finally: - os.chdir(pwd) - sys.argv = old_sys_argv - sys.path[:] = old_sys_path - sys.stdout = stdout - return ns - - -#------------------------------------------------------------------------------ -# Generating figures -#------------------------------------------------------------------------------ - -def out_of_date(original, derived): - """ - Returns True if derivative is out-of-date wrt original, - both of which are full file paths. - """ - return (not os.path.exists(derived) - or os.stat(derived).st_mtime < os.stat(original).st_mtime) - - -def makefig(code, code_path, output_dir, output_base, config): - """ - Run a pyplot script *code* and save the images under *output_dir* - with file names derived from *output_base* - - """ - - # -- Parse format list - default_dpi = {'png': 80, 'hires.png': 200, 'pdf': 50} - formats = [] - for fmt in config.plot_formats: - if isinstance(fmt, str): - formats.append((fmt, default_dpi.get(fmt, 80))) - elif type(fmt) in (tuple, list) and len(fmt)==2: - formats.append((str(fmt[0]), int(fmt[1]))) - else: - raise PlotError('invalid image format "%r" in plot_formats' % fmt) - - # -- Try to determine if all images already exist - - # Look for single-figure output files first - all_exists = True - img = ImageFile(output_base, output_dir) - for format, dpi in formats: - if out_of_date(code_path, img.filename(format)): - all_exists = False - break - img.formats.append(format) - - if all_exists: - return [img] 
- - # Then look for multi-figure output files - images = [] - all_exists = True - for i in xrange(1000): - img = ImageFile('%s_%02d' % (output_base, i), output_dir) - for format, dpi in formats: - if out_of_date(code_path, img.filename(format)): - all_exists = False - break - img.formats.append(format) - - # assume that if we have one, we have them all - if not all_exists: - all_exists = (i > 0) - break - images.append(img) - - if all_exists: - return images - - # -- We didn't find the files, so build them - - # Clear between runs - plt.close('all') - - # Run code - run_code(code, code_path) - - # Collect images - images = [] - - fig_managers = _pylab_helpers.Gcf.get_all_fig_managers() - for i, figman in enumerate(fig_managers): - if len(fig_managers) == 1: - img = ImageFile(output_base, output_dir) - else: - img = ImageFile("%s_%02d" % (output_base, i), output_dir) - images.append(img) - for format, dpi in formats: - try: - figman.canvas.figure.savefig(img.filename(format), dpi=dpi) - except exceptions.BaseException, err: - raise PlotError(traceback.format_exc()) - img.formats.append(format) - - return images - - -#------------------------------------------------------------------------------ -# Relative pathnames -#------------------------------------------------------------------------------ - -try: - from os.path import relpath -except ImportError: - def relpath(target, base=os.curdir): - """ - Return a relative path to the target from either the current - dir or an optional base dir. Base can be a directory - specified either as absolute or relative to current dir. - """ - - if not os.path.exists(target): - raise OSError, 'Target does not exist: '+target - - if not os.path.isdir(base): - raise OSError, 'Base is not a directory or does not exist: '+base - - base_list = (os.path.abspath(base)).split(os.sep) - target_list = (os.path.abspath(target)).split(os.sep) - - # On the windows platform the target may be on a completely - # different drive from the base. 
- if os.name in ['nt','dos','os2'] and base_list[0] <> target_list[0]: - raise OSError, 'Target is on a different drive to base. Target: '+target_list[0].upper()+', base: '+base_list[0].upper() - - # Starting from the filepath root, work out how much of the - # filepath is shared by base and target. - for i in range(min(len(base_list), len(target_list))): - if base_list[i] <> target_list[i]: break - else: - # If we broke out of the loop, i is pointing to the first - # differing path elements. If we didn't break out of the - # loop, i is pointing to identical path elements. - # Increment i so that in all cases it points to the first - # differing path elements. - i+=1 - - rel_list = [os.pardir] * (len(base_list)-i) + target_list[i:] - return os.path.join(*rel_list) diff --git a/pythonPackages/numpy/doc/sphinxext/setup.py b/pythonPackages/numpy/doc/sphinxext/setup.py deleted file mode 100755 index 016d8f8ae5..0000000000 --- a/pythonPackages/numpy/doc/sphinxext/setup.py +++ /dev/null @@ -1,31 +0,0 @@ -from distutils.core import setup -import setuptools -import sys, os - -version = "0.3.dev" - -setup( - name="numpydoc", - packages=["numpydoc"], - package_dir={"numpydoc": ""}, - version=version, - description="Sphinx extension to support docstrings in Numpy format", - # classifiers from http://pypi.python.org/pypi?%3Aaction=list_classifiers - classifiers=["Development Status :: 3 - Alpha", - "Environment :: Plugins", - "License :: OSI Approved :: BSD License", - "Topic :: Documentation"], - keywords="sphinx numpy", - author="Pauli Virtanen and others", - author_email="pav@iki.fi", - url="http://projects.scipy.org/numpy/browser/trunk/doc/sphinxext", - license="BSD", - zip_safe=False, - install_requires=["Sphinx >= 0.5"], - package_data={'numpydoc': 'tests', '': ''}, - entry_points={ - "console_scripts": [ - "autosummary_generate = numpydoc.autosummary_generate:main", - ], - }, -) diff --git a/pythonPackages/numpy/doc/sphinxext/tests/test_docscrape.py 
b/pythonPackages/numpy/doc/sphinxext/tests/test_docscrape.py deleted file mode 100755 index 1d775e99e4..0000000000 --- a/pythonPackages/numpy/doc/sphinxext/tests/test_docscrape.py +++ /dev/null @@ -1,545 +0,0 @@ -# -*- encoding:utf-8 -*- - -import sys, os -sys.path.append(os.path.join(os.path.dirname(__file__), '..')) - -from docscrape import NumpyDocString, FunctionDoc, ClassDoc -from docscrape_sphinx import SphinxDocString, SphinxClassDoc -from nose.tools import * - -doc_txt = '''\ - numpy.multivariate_normal(mean, cov, shape=None) - - Draw values from a multivariate normal distribution with specified - mean and covariance. - - The multivariate normal or Gaussian distribution is a generalisation - of the one-dimensional normal distribution to higher dimensions. - - Parameters - ---------- - mean : (N,) ndarray - Mean of the N-dimensional distribution. - - .. math:: - - (1+2+3)/3 - - cov : (N,N) ndarray - Covariance matrix of the distribution. - shape : tuple of ints - Given a shape of, for example, (m,n,k), m*n*k samples are - generated, and packed in an m-by-n-by-k arrangement. Because - each sample is N-dimensional, the output shape is (m,n,k,N). - - Returns - ------- - out : ndarray - The drawn samples, arranged according to `shape`. If the - shape given is (m,n,...), then the shape of `out` is is - (m,n,...,N). - - In other words, each entry ``out[i,j,...,:]`` is an N-dimensional - value drawn from the distribution. - - Warnings - -------- - Certain warnings apply. 
- - Notes - ----- - - Instead of specifying the full covariance matrix, popular - approximations include: - - - Spherical covariance (`cov` is a multiple of the identity matrix) - - Diagonal covariance (`cov` has non-negative elements only on the diagonal) - - This geometrical property can be seen in two dimensions by plotting - generated data-points: - - >>> mean = [0,0] - >>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis - - >>> x,y = multivariate_normal(mean,cov,5000).T - >>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show() - - Note that the covariance matrix must be symmetric and non-negative - definite. - - References - ---------- - .. [1] A. Papoulis, "Probability, Random Variables, and Stochastic - Processes," 3rd ed., McGraw-Hill Companies, 1991 - .. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification," - 2nd ed., Wiley, 2001. - - See Also - -------- - some, other, funcs - otherfunc : relationship - - Examples - -------- - >>> mean = (1,2) - >>> cov = [[1,0],[1,0]] - >>> x = multivariate_normal(mean,cov,(3,3)) - >>> print x.shape - (3, 3, 2) - - The following is probably true, given that 0.6 is roughly twice the - standard deviation: - - >>> print list( (x[0,0,:] - mean) < 0.6 ) - [True, True] - - .. 
index:: random - :refguide: random;distributions, random;gauss - - ''' -doc = NumpyDocString(doc_txt) - - -def test_signature(): - assert doc['Signature'].startswith('numpy.multivariate_normal(') - assert doc['Signature'].endswith('shape=None)') - -def test_summary(): - assert doc['Summary'][0].startswith('Draw values') - assert doc['Summary'][-1].endswith('covariance.') - -def test_extended_summary(): - assert doc['Extended Summary'][0].startswith('The multivariate normal') - -def test_parameters(): - assert_equal(len(doc['Parameters']), 3) - assert_equal([n for n,_,_ in doc['Parameters']], ['mean','cov','shape']) - - arg, arg_type, desc = doc['Parameters'][1] - assert_equal(arg_type, '(N,N) ndarray') - assert desc[0].startswith('Covariance matrix') - assert doc['Parameters'][0][-1][-2] == ' (1+2+3)/3' - -def test_returns(): - assert_equal(len(doc['Returns']), 1) - arg, arg_type, desc = doc['Returns'][0] - assert_equal(arg, 'out') - assert_equal(arg_type, 'ndarray') - assert desc[0].startswith('The drawn samples') - assert desc[-1].endswith('distribution.') - -def test_notes(): - assert doc['Notes'][0].startswith('Instead') - assert doc['Notes'][-1].endswith('definite.') - assert_equal(len(doc['Notes']), 17) - -def test_references(): - assert doc['References'][0].startswith('..') - assert doc['References'][-1].endswith('2001.') - -def test_examples(): - assert doc['Examples'][0].startswith('>>>') - assert doc['Examples'][-1].endswith('True]') - -def test_index(): - assert_equal(doc['index']['default'], 'random') - print doc['index'] - assert_equal(len(doc['index']), 2) - assert_equal(len(doc['index']['refguide']), 2) - -def non_blank_line_by_line_compare(a,b): - a = [l for l in a.split('\n') if l.strip()] - b = [l for l in b.split('\n') if l.strip()] - for n,line in enumerate(a): - if not line == b[n]: - raise AssertionError("Lines %s of a and b differ: " - "\n>>> %s\n<<< %s\n" % - (n,line,b[n])) -def test_str(): - non_blank_line_by_line_compare(str(doc), 
-"""numpy.multivariate_normal(mean, cov, shape=None) - -Draw values from a multivariate normal distribution with specified -mean and covariance. - -The multivariate normal or Gaussian distribution is a generalisation -of the one-dimensional normal distribution to higher dimensions. - -Parameters ----------- -mean : (N,) ndarray - Mean of the N-dimensional distribution. - - .. math:: - - (1+2+3)/3 - -cov : (N,N) ndarray - Covariance matrix of the distribution. -shape : tuple of ints - Given a shape of, for example, (m,n,k), m*n*k samples are - generated, and packed in an m-by-n-by-k arrangement. Because - each sample is N-dimensional, the output shape is (m,n,k,N). - -Returns -------- -out : ndarray - The drawn samples, arranged according to `shape`. If the - shape given is (m,n,...), then the shape of `out` is is - (m,n,...,N). - - In other words, each entry ``out[i,j,...,:]`` is an N-dimensional - value drawn from the distribution. - -Warnings --------- -Certain warnings apply. - -See Also --------- -`some`_, `other`_, `funcs`_ - -`otherfunc`_ - relationship - -Notes ------ -Instead of specifying the full covariance matrix, popular -approximations include: - - - Spherical covariance (`cov` is a multiple of the identity matrix) - - Diagonal covariance (`cov` has non-negative elements only on the diagonal) - -This geometrical property can be seen in two dimensions by plotting -generated data-points: - ->>> mean = [0,0] ->>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis - ->>> x,y = multivariate_normal(mean,cov,5000).T ->>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show() - -Note that the covariance matrix must be symmetric and non-negative -definite. - -References ----------- -.. [1] A. Papoulis, "Probability, Random Variables, and Stochastic - Processes," 3rd ed., McGraw-Hill Companies, 1991 -.. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification," - 2nd ed., Wiley, 2001. 
- -Examples --------- ->>> mean = (1,2) ->>> cov = [[1,0],[1,0]] ->>> x = multivariate_normal(mean,cov,(3,3)) ->>> print x.shape -(3, 3, 2) - -The following is probably true, given that 0.6 is roughly twice the -standard deviation: - ->>> print list( (x[0,0,:] - mean) < 0.6 ) -[True, True] - -.. index:: random - :refguide: random;distributions, random;gauss""") - - -def test_sphinx_str(): - sphinx_doc = SphinxDocString(doc_txt) - non_blank_line_by_line_compare(str(sphinx_doc), -""" -.. index:: random - single: random;distributions, random;gauss - -Draw values from a multivariate normal distribution with specified -mean and covariance. - -The multivariate normal or Gaussian distribution is a generalisation -of the one-dimensional normal distribution to higher dimensions. - -:Parameters: - - **mean** : (N,) ndarray - - Mean of the N-dimensional distribution. - - .. math:: - - (1+2+3)/3 - - **cov** : (N,N) ndarray - - Covariance matrix of the distribution. - - **shape** : tuple of ints - - Given a shape of, for example, (m,n,k), m*n*k samples are - generated, and packed in an m-by-n-by-k arrangement. Because - each sample is N-dimensional, the output shape is (m,n,k,N). - -:Returns: - - **out** : ndarray - - The drawn samples, arranged according to `shape`. If the - shape given is (m,n,...), then the shape of `out` is is - (m,n,...,N). - - In other words, each entry ``out[i,j,...,:]`` is an N-dimensional - value drawn from the distribution. - -.. warning:: - - Certain warnings apply. - -.. seealso:: - - :obj:`some`, :obj:`other`, :obj:`funcs` - - :obj:`otherfunc` - relationship - -.. 
rubric:: Notes - -Instead of specifying the full covariance matrix, popular -approximations include: - - - Spherical covariance (`cov` is a multiple of the identity matrix) - - Diagonal covariance (`cov` has non-negative elements only on the diagonal) - -This geometrical property can be seen in two dimensions by plotting -generated data-points: - ->>> mean = [0,0] ->>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis - ->>> x,y = multivariate_normal(mean,cov,5000).T ->>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show() - -Note that the covariance matrix must be symmetric and non-negative -definite. - -.. rubric:: References - -.. [1] A. Papoulis, "Probability, Random Variables, and Stochastic - Processes," 3rd ed., McGraw-Hill Companies, 1991 -.. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification," - 2nd ed., Wiley, 2001. - -.. only:: latex - - [1]_, [2]_ - -.. rubric:: Examples - ->>> mean = (1,2) ->>> cov = [[1,0],[1,0]] ->>> x = multivariate_normal(mean,cov,(3,3)) ->>> print x.shape -(3, 3, 2) - -The following is probably true, given that 0.6 is roughly twice the -standard deviation: - ->>> print list( (x[0,0,:] - mean) < 0.6 ) -[True, True] -""") - - -doc2 = NumpyDocString(""" - Returns array of indices of the maximum values of along the given axis. - - Parameters - ---------- - a : {array_like} - Array to look in. - axis : {None, integer} - If None, the index is into the flattened array, otherwise along - the specified axis""") - -def test_parameters_without_extended_description(): - assert_equal(len(doc2['Parameters']), 2) - -doc3 = NumpyDocString(""" - my_signature(*params, **kwds) - - Return this and that. 
- """) - -def test_escape_stars(): - signature = str(doc3).split('\n')[0] - assert_equal(signature, 'my_signature(\*params, \*\*kwds)') - -doc4 = NumpyDocString( - """a.conj() - - Return an array with all complex-valued elements conjugated.""") - -def test_empty_extended_summary(): - assert_equal(doc4['Extended Summary'], []) - -doc5 = NumpyDocString( - """ - a.something() - - Raises - ------ - LinAlgException - If array is singular. - - """) - -def test_raises(): - assert_equal(len(doc5['Raises']), 1) - name,_,desc = doc5['Raises'][0] - assert_equal(name,'LinAlgException') - assert_equal(desc,['If array is singular.']) - -def test_see_also(): - doc6 = NumpyDocString( - """ - z(x,theta) - - See Also - -------- - func_a, func_b, func_c - func_d : some equivalent func - foo.func_e : some other func over - multiple lines - func_f, func_g, :meth:`func_h`, func_j, - func_k - :obj:`baz.obj_q` - :class:`class_j`: fubar - foobar - """) - - assert len(doc6['See Also']) == 12 - for func, desc, role in doc6['See Also']: - if func in ('func_a', 'func_b', 'func_c', 'func_f', - 'func_g', 'func_h', 'func_j', 'func_k', 'baz.obj_q'): - assert(not desc) - else: - assert(desc) - - if func == 'func_h': - assert role == 'meth' - elif func == 'baz.obj_q': - assert role == 'obj' - elif func == 'class_j': - assert role == 'class' - else: - assert role is None - - if func == 'func_d': - assert desc == ['some equivalent func'] - elif func == 'foo.func_e': - assert desc == ['some other func over', 'multiple lines'] - elif func == 'class_j': - assert desc == ['fubar', 'foobar'] - -def test_see_also_print(): - class Dummy(object): - """ - See Also - -------- - func_a, func_b - func_c : some relationship - goes here - func_d - """ - pass - - obj = Dummy() - s = str(FunctionDoc(obj, role='func')) - assert(':func:`func_a`, :func:`func_b`' in s) - assert(' some relationship' in s) - assert(':func:`func_d`' in s) - -doc7 = NumpyDocString(""" - - Doc starts on second line. 
- - """) - -def test_empty_first_line(): - assert doc7['Summary'][0].startswith('Doc starts') - - -def test_no_summary(): - str(SphinxDocString(""" - Parameters - ----------""")) - - -def test_unicode(): - doc = SphinxDocString(""" - öäöäöäöäöåååå - - öäöäöäööäååå - - Parameters - ---------- - ååå : äää - ööö - - Returns - ------- - ååå : ööö - äää - - """) - assert doc['Summary'][0] == u'öäöäöäöäöåååå'.encode('utf-8') - -def test_plot_examples(): - cfg = dict(use_plots=True) - - doc = SphinxDocString(""" - Examples - -------- - >>> import matplotlib.pyplot as plt - >>> plt.plot([1,2,3],[4,5,6]) - >>> plt.show() - """, config=cfg) - assert 'plot::' in str(doc), str(doc) - - doc = SphinxDocString(""" - Examples - -------- - .. plot:: - - import matplotlib.pyplot as plt - plt.plot([1,2,3],[4,5,6]) - plt.show() - """, config=cfg) - assert str(doc).count('plot::') == 1, str(doc) - -def test_class_members(): - - class Dummy(object): - """ - Dummy class. - - """ - def spam(self, a, b): - """Spam\n\nSpam spam.""" - pass - def ham(self, c, d): - """Cheese\n\nNo cheese.""" - pass - - for cls in (ClassDoc, SphinxClassDoc): - doc = cls(Dummy, config=dict(show_class_members=False)) - assert 'Methods' not in str(doc), (cls, str(doc)) - assert 'spam' not in str(doc), (cls, str(doc)) - assert 'ham' not in str(doc), (cls, str(doc)) - - doc = cls(Dummy, config=dict(show_class_members=True)) - assert 'Methods' in str(doc), (cls, str(doc)) - assert 'spam' in str(doc), (cls, str(doc)) - assert 'ham' in str(doc), (cls, str(doc)) - - if cls is SphinxClassDoc: - assert '.. 
autosummary::' in str(doc), str(doc) diff --git a/pythonPackages/numpy/doc/sphinxext/traitsdoc.py b/pythonPackages/numpy/doc/sphinxext/traitsdoc.py deleted file mode 100755 index 0fcf2c1cd3..0000000000 --- a/pythonPackages/numpy/doc/sphinxext/traitsdoc.py +++ /dev/null @@ -1,140 +0,0 @@ -""" -========= -traitsdoc -========= - -Sphinx extension that handles docstrings in the Numpy standard format, [1] -and support Traits [2]. - -This extension can be used as a replacement for ``numpydoc`` when support -for Traits is required. - -.. [1] http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines#docstring-standard -.. [2] http://code.enthought.com/projects/traits/ - -""" - -import inspect -import os -import pydoc - -import docscrape -import docscrape_sphinx -from docscrape_sphinx import SphinxClassDoc, SphinxFunctionDoc, SphinxDocString - -import numpydoc - -import comment_eater - -class SphinxTraitsDoc(SphinxClassDoc): - def __init__(self, cls, modulename='', func_doc=SphinxFunctionDoc): - if not inspect.isclass(cls): - raise ValueError("Initialise using a class. Got %r" % cls) - self._cls = cls - - if modulename and not modulename.endswith('.'): - modulename += '.' 
- self._mod = modulename - self._name = cls.__name__ - self._func_doc = func_doc - - docstring = pydoc.getdoc(cls) - docstring = docstring.split('\n') - - # De-indent paragraph - try: - indent = min(len(s) - len(s.lstrip()) for s in docstring - if s.strip()) - except ValueError: - indent = 0 - - for n,line in enumerate(docstring): - docstring[n] = docstring[n][indent:] - - self._doc = docscrape.Reader(docstring) - self._parsed_data = { - 'Signature': '', - 'Summary': '', - 'Description': [], - 'Extended Summary': [], - 'Parameters': [], - 'Returns': [], - 'Raises': [], - 'Warns': [], - 'Other Parameters': [], - 'Traits': [], - 'Methods': [], - 'See Also': [], - 'Notes': [], - 'References': '', - 'Example': '', - 'Examples': '', - 'index': {} - } - - self._parse() - - def _str_summary(self): - return self['Summary'] + [''] - - def _str_extended_summary(self): - return self['Description'] + self['Extended Summary'] + [''] - - def __str__(self, indent=0, func_role="func"): - out = [] - out += self._str_signature() - out += self._str_index() + [''] - out += self._str_summary() - out += self._str_extended_summary() - for param_list in ('Parameters', 'Traits', 'Methods', - 'Returns','Raises'): - out += self._str_param_list(param_list) - out += self._str_see_also("obj") - out += self._str_section('Notes') - out += self._str_references() - out += self._str_section('Example') - out += self._str_section('Examples') - out = self._str_indent(out,indent) - return '\n'.join(out) - -def looks_like_issubclass(obj, classname): - """ Return True if the object has a class or superclass with the given class - name. - - Ignores old-style classes. 
- """ - t = obj - if t.__name__ == classname: - return True - for klass in t.__mro__: - if klass.__name__ == classname: - return True - return False - -def get_doc_object(obj, what=None, config=None): - if what is None: - if inspect.isclass(obj): - what = 'class' - elif inspect.ismodule(obj): - what = 'module' - elif callable(obj): - what = 'function' - else: - what = 'object' - if what == 'class': - doc = SphinxTraitsDoc(obj, '', func_doc=SphinxFunctionDoc, config=config) - if looks_like_issubclass(obj, 'HasTraits'): - for name, trait, comment in comment_eater.get_class_traits(obj): - # Exclude private traits. - if not name.startswith('_'): - doc['Traits'].append((name, trait, comment.splitlines())) - return doc - elif what in ('function', 'method'): - return SphinxFunctionDoc(obj, '', config=config) - else: - return SphinxDocString(pydoc.getdoc(obj), config=config) - -def setup(app): - # init numpydoc - numpydoc.setup(app, get_doc_object) - diff --git a/pythonPackages/numpy/numpy/__init__.py b/pythonPackages/numpy/numpy/__init__.py deleted file mode 100755 index 8247f4c7ad..0000000000 --- a/pythonPackages/numpy/numpy/__init__.py +++ /dev/null @@ -1,170 +0,0 @@ -""" -NumPy -===== - -Provides - 1. An array object of arbitrary homogeneous items - 2. Fast mathematical operations over arrays - 3. Linear Algebra, Fourier Transforms, Random Number Generation - -How to use the documentation ----------------------------- -Documentation is available in two forms: docstrings provided -with the code, and a loose standing reference guide, available from -`the NumPy homepage `_. - -We recommend exploring the docstrings using -`IPython `_, an advanced Python shell with -TAB-completion and introspection capabilities. See below for further -instructions. 
- -The docstring examples assume that `numpy` has been imported as `np`:: - - >>> import numpy as np - -Code snippets are indicated by three greater-than signs:: - - >>> x = 42 - >>> x = x + 1 - -Use the built-in ``help`` function to view a function's docstring:: - - >>> help(np.sort) - ... # doctest: +SKIP - -For some objects, ``np.info(obj)`` may provide additional help. This is -particularly true if you see the line "Help on ufunc object:" at the top -of the help() page. Ufuncs are implemented in C, not Python, for speed. -The native Python help() does not know how to view their help, but our -np.info() function does. - -To search for documents containing a keyword, do:: - - >>> np.lookfor('keyword') - ... # doctest: +SKIP - -General-purpose documents like a glossary and help on the basic concepts -of numpy are available under the ``doc`` sub-module:: - - >>> from numpy import doc - >>> help(doc) - ... # doctest: +SKIP - -Available subpackages ---------------------- -doc - Topical documentation on broadcasting, indexing, etc. -lib - Basic functions used by several sub-packages. -random - Core Random Tools -linalg - Core Linear Algebra Tools -fft - Core FFT routines -polynomial - Polynomial tools -testing - Numpy testing tools -f2py - Fortran to Python Interface Generator. -distutils - Enhancements to distutils with support for - Fortran compilers support and more. - -Utilities ---------- -test - Run numpy unittests -show_config - Show numpy build configuration -dual - Overwrite certain functions with high-performance Scipy tools -matlib - Make everything matrices. -__version__ - Numpy version string - -Viewing documentation using IPython ------------------------------------ -Start IPython with the NumPy profile (``ipython -p numpy``), which will -import `numpy` under the alias `np`. Then, use the ``cpaste`` command to -paste examples into the shell. 
To see which functions are available in -`numpy`, type ``np.<TAB>`` (where ``<TAB>`` refers to the TAB key), or use -``np.*cos*?<ENTER>`` (where ``<ENTER>`` refers to the ENTER key) to narrow -down the list. To view the docstring for a function, use -``np.cos?`` (to view the docstring) and ``np.cos??`` (to view -the source code). - -Copies vs. in-place operation ------------------------------ -Most of the functions in `numpy` return a copy of the array argument -(e.g., `np.sort`). In-place versions of these functions are often -available as array methods, i.e. ``x = np.array([1,2,3]); x.sort()``. -Exceptions to this rule are documented. - -""" - -# We first need to detect if we're being called as part of the numpy setup -# procedure itself in a reliable manner. -try: - __NUMPY_SETUP__ -except NameError: - __NUMPY_SETUP__ = False - - -if __NUMPY_SETUP__: - import sys as _sys - _sys.stderr.write('Running from numpy source directory.') - del _sys -else: - try: - from numpy.__config__ import show as show_config - except ImportError: - msg = """Error importing numpy: you should not try to import numpy from - its source directory; please exit the numpy source tree, and relaunch - your python interpreter from there.""" - raise ImportError(msg) - from version import version as __version__ - - from _import_tools import PackageLoader - - def pkgload(*packages, **options): - loader = PackageLoader(infunc=True) - return loader(*packages, **options) - - import add_newdocs - __all__ = ['add_newdocs'] - - pkgload.__doc__ = PackageLoader.__call__.__doc__ - - from testing import Tester - test = Tester().test - bench = Tester().bench - - import core - from core import * - import compat - import lib - from lib import * - import linalg - import fft - import polynomial - import random - import ctypeslib - import ma - import matrixlib as _mat - from matrixlib import * - - # Make these accessible from numpy name-space - # but not imported in from numpy import * - from __builtin__ import bool, int, long, float, 
complex, \ - object, unicode, str - from core import round, abs, max, min - - __all__.extend(['__version__', 'pkgload', 'PackageLoader', - 'show_config']) - __all__.extend(core.__all__) - __all__.extend(_mat.__all__) - __all__.extend(lib.__all__) - __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma']) diff --git a/pythonPackages/numpy/numpy/_import_tools.py b/pythonPackages/numpy/numpy/_import_tools.py deleted file mode 100755 index 38bf712fe3..0000000000 --- a/pythonPackages/numpy/numpy/_import_tools.py +++ /dev/null @@ -1,346 +0,0 @@ -import os -import sys - -__all__ = ['PackageLoader'] - -class PackageLoader: - def __init__(self, verbose=False, infunc=False): - """ Manages loading packages. - """ - - if infunc: - _level = 2 - else: - _level = 1 - self.parent_frame = frame = sys._getframe(_level) - self.parent_name = eval('__name__',frame.f_globals,frame.f_locals) - parent_path = eval('__path__',frame.f_globals,frame.f_locals) - if isinstance(parent_path, str): - parent_path = [parent_path] - self.parent_path = parent_path - if '__all__' not in frame.f_locals: - exec('__all__ = []',frame.f_globals,frame.f_locals) - self.parent_export_names = eval('__all__',frame.f_globals,frame.f_locals) - - self.info_modules = {} - self.imported_packages = [] - self.verbose = None - - def _get_info_files(self, package_dir, parent_path, parent_package=None): - """ Return list of (package name,info.py file) from parent_path subdirectories. - """ - from glob import glob - files = glob(os.path.join(parent_path,package_dir,'info.py')) - for info_file in glob(os.path.join(parent_path,package_dir,'info.pyc')): - if info_file[:-1] not in files: - files.append(info_file) - info_files = [] - for info_file in files: - package_name = os.path.dirname(info_file[len(parent_path)+1:])\ - .replace(os.sep,'.') - if parent_package: - package_name = parent_package + '.' 
+ package_name - info_files.append((package_name,info_file)) - info_files.extend(self._get_info_files('*', - os.path.dirname(info_file), - package_name)) - return info_files - - def _init_info_modules(self, packages=None): - """Initialize info_modules = {<package_name>: <info.py module>}. - """ - import imp - info_files = [] - info_modules = self.info_modules - - if packages is None: - for path in self.parent_path: - info_files.extend(self._get_info_files('*',path)) - else: - for package_name in packages: - package_dir = os.path.join(*package_name.split('.')) - for path in self.parent_path: - names_files = self._get_info_files(package_dir, path) - if names_files: - info_files.extend(names_files) - break - else: - try: - exec 'import %s.info as info' % (package_name) - info_modules[package_name] = info - except ImportError, msg: - self.warn('No scipy-style subpackage %r found in %s. '\ - 'Ignoring: %s'\ - % (package_name,':'.join(self.parent_path), msg)) - - for package_name,info_file in info_files: - if package_name in info_modules: - continue - fullname = self.parent_name +'.'+ package_name - if info_file[-1]=='c': - filedescriptor = ('.pyc','rb',2) - else: - filedescriptor = ('.py','U',1) - - try: - info_module = imp.load_module(fullname+'.info', - open(info_file,filedescriptor[1]), - info_file, - filedescriptor) - except Exception,msg: - self.error(msg) - info_module = None - - if info_module is None or getattr(info_module,'ignore',False): - info_modules.pop(package_name,None) - else: - self._init_info_modules(getattr(info_module,'depends',[])) - info_modules[package_name] = info_module - - return - - def _get_sorted_names(self): - """ Return package names sorted in the order as they should be - imported due to dependence relations between packages. 
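The ordering that `_get_sorted_names` produces amounts to a repeated-pruning topological sort: packages with no remaining dependencies are emitted first, then removed from every other package's dependency list. A standalone sketch of the same idea (hypothetical names, separate from the PackageLoader code itself):

```python
def sort_by_depends(depend_dict):
    # depend_dict maps package name -> list of package names it depends on.
    # Emit dependency-free packages first, then repeatedly prune emitted
    # names from the remaining dependency lists, mirroring _get_sorted_names.
    depend_dict = dict((k, list(v)) for k, v in depend_dict.items())
    order = []
    for name in list(depend_dict):
        if not depend_dict[name]:
            order.append(name)
            del depend_dict[name]
    while depend_dict:
        for name, lst in list(depend_dict.items()):
            new_lst = [n for n in lst if n in depend_dict]
            if not new_lst:
                order.append(name)
                del depend_dict[name]
            else:
                depend_dict[name] = new_lst
    return order
```

Note that, like the original loop, this sketch does not terminate if the dependency graph contains a cycle; the info.py dependency declarations are assumed to be acyclic.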
- """ - - depend_dict = {} - for name,info_module in self.info_modules.items(): - depend_dict[name] = getattr(info_module,'depends',[]) - package_names = [] - - for name in depend_dict.keys(): - if not depend_dict[name]: - package_names.append(name) - del depend_dict[name] - - while depend_dict: - for name, lst in depend_dict.items(): - new_lst = [n for n in lst if n in depend_dict] - if not new_lst: - package_names.append(name) - del depend_dict[name] - else: - depend_dict[name] = new_lst - - return package_names - - def __call__(self,*packages, **options): - """Load one or more packages into parent package top-level namespace. - - This function is intended to shorten the need to import many - subpackages, say of scipy, constantly with statements such as - - import scipy.linalg, scipy.fftpack, scipy.etc... - - Instead, you can say: - - import scipy - scipy.pkgload('linalg','fftpack',...) - - or - - scipy.pkgload() - - to load all of them in one call. - - If a name which doesn't exist in scipy's namespace is - given, a warning is shown. - - Parameters - ---------- - *packages : arg-tuple - the names (one or more strings) of all the modules one - wishes to load into the top-level namespace. - verbose= : integer - verbosity level [default: -1]. - verbose=-1 will suspend also warnings. - force= : bool - when True, force reloading loaded packages [default: False]. 
- postpone= : bool - when True, don't load packages [default: False] - - """ - frame = self.parent_frame - self.info_modules = {} - if options.get('force',False): - self.imported_packages = [] - self.verbose = verbose = options.get('verbose',-1) - postpone = options.get('postpone',None) - self._init_info_modules(packages or None) - - self.log('Imports to %r namespace\n----------------------------'\ - % self.parent_name) - - for package_name in self._get_sorted_names(): - if package_name in self.imported_packages: - continue - info_module = self.info_modules[package_name] - global_symbols = getattr(info_module,'global_symbols',[]) - postpone_import = getattr(info_module,'postpone_import',False) - if (postpone and not global_symbols) \ - or (postpone_import and postpone is not None): - continue - - old_object = frame.f_locals.get(package_name,None) - - cmdstr = 'import '+package_name - if self._execcmd(cmdstr): - continue - self.imported_packages.append(package_name) - - if verbose!=-1: - new_object = frame.f_locals.get(package_name) - if old_object is not None and old_object is not new_object: - self.warn('Overwriting %s=%s (was %s)' \ - % (package_name,self._obj2repr(new_object), - self._obj2repr(old_object))) - - if '.' 
not in package_name: - self.parent_export_names.append(package_name) - - for symbol in global_symbols: - if symbol=='*': - symbols = eval('getattr(%s,"__all__",None)'\ - % (package_name), - frame.f_globals,frame.f_locals) - if symbols is None: - symbols = eval('dir(%s)' % (package_name), - frame.f_globals,frame.f_locals) - symbols = filter(lambda s:not s.startswith('_'),symbols) - else: - symbols = [symbol] - - if verbose!=-1: - old_objects = {} - for s in symbols: - if s in frame.f_locals: - old_objects[s] = frame.f_locals[s] - - cmdstr = 'from '+package_name+' import '+symbol - if self._execcmd(cmdstr): - continue - - if verbose!=-1: - for s,old_object in old_objects.items(): - new_object = frame.f_locals[s] - if new_object is not old_object: - self.warn('Overwriting %s=%s (was %s)' \ - % (s,self._obj2repr(new_object), - self._obj2repr(old_object))) - - if symbol=='*': - self.parent_export_names.extend(symbols) - else: - self.parent_export_names.append(symbol) - - return - - def _execcmd(self,cmdstr): - """ Execute command in parent_frame.""" - frame = self.parent_frame - try: - exec (cmdstr, frame.f_globals,frame.f_locals) - except Exception,msg: - self.error('%s -> failed: %s' % (cmdstr,msg)) - return True - else: - self.log('%s -> success' % (cmdstr)) - return - - def _obj2repr(self,obj): - """ Return repr(obj) with""" - module = getattr(obj,'__module__',None) - file = getattr(obj,'__file__',None) - if module is not None: - return repr(obj) + ' from ' + module - if file is not None: - return repr(obj) + ' from ' + file - return repr(obj) - - def log(self,mess): - if self.verbose>1: - print >> sys.stderr, str(mess) - def warn(self,mess): - if self.verbose>=0: - print >> sys.stderr, str(mess) - def error(self,mess): - if self.verbose!=-1: - print >> sys.stderr, str(mess) - - def _get_doc_title(self, info_module): - """ Get the title from a package info.py file. 
- """ - title = getattr(info_module,'__doc_title__',None) - if title is not None: - return title - title = getattr(info_module,'__doc__',None) - if title is not None: - title = title.lstrip().split('\n',1)[0] - return title - return '* Not Available *' - - def _format_titles(self,titles,colsep='---'): - display_window_width = 70 # How to determine the correct value in runtime?? - lengths = [len(name)-name.find('.')-1 for (name,title) in titles]+[0] - max_length = max(lengths) - lines = [] - for (name,title) in titles: - name = name[name.find('.')+1:] - w = max_length - len(name) - words = title.split() - line = '%s%s %s' % (name,w*' ',colsep) - tab = len(line) * ' ' - while words: - word = words.pop(0) - if len(line)+len(word)>display_window_width: - lines.append(line) - line = tab - line += ' ' + word - else: - lines.append(line) - return '\n'.join(lines) - - def get_pkgdocs(self): - """ Return documentation summary of subpackages. - """ - import sys - self.info_modules = {} - self._init_info_modules(None) - - titles = [] - symbols = [] - for package_name, info_module in self.info_modules.items(): - global_symbols = getattr(info_module,'global_symbols',[]) - fullname = self.parent_name +'.'+ package_name - note = '' - if fullname not in sys.modules: - note = ' [*]' - titles.append((fullname,self._get_doc_title(info_module) + note)) - if global_symbols: - symbols.append((package_name,', '.join(global_symbols))) - - retstr = self._format_titles(titles) +\ - '\n [*] - using a package requires explicit import (see pkgload)' - - - if symbols: - retstr += """\n\nGlobal symbols from subpackages"""\ - """\n-------------------------------\n""" +\ - self._format_titles(symbols,'-->') - - return retstr - -class PackageLoaderDebug(PackageLoader): - def _execcmd(self,cmdstr): - """ Execute command in parent_frame.""" - frame = self.parent_frame - print 'Executing',`cmdstr`,'...', - sys.stdout.flush() - exec (cmdstr, frame.f_globals,frame.f_locals) - print 'ok' - 
sys.stdout.flush() - return - -if int(os.environ.get('NUMPY_IMPORT_DEBUG','0')): - PackageLoader = PackageLoaderDebug diff --git a/pythonPackages/numpy/numpy/add_newdocs.py b/pythonPackages/numpy/numpy/add_newdocs.py deleted file mode 100755 index cf57c031bf..0000000000 --- a/pythonPackages/numpy/numpy/add_newdocs.py +++ /dev/null @@ -1,5866 +0,0 @@ -# This is only meant to add docs to objects defined in C-extension modules. -# The purpose is to allow easier editing of the docstrings without -# requiring a re-compile. - -# NOTE: Many of the methods of ndarray have corresponding functions. -# If you update these docstrings, please keep also the ones in -# core/fromnumeric.py, core/defmatrix.py up-to-date. - -from numpy.lib import add_newdoc - -############################################################################### -# -# flatiter -# -# flatiter needs a toplevel description -# -############################################################################### - -add_newdoc('numpy.core', 'flatiter', - """ - Flat iterator object to iterate over arrays. - - A `flatiter` iterator is returned by ``x.flat`` for any array `x`. - It allows iterating over the array as if it were a 1-D array, - either in a for-loop or by calling its `next` method. - - Iteration is done in C-contiguous style, with the last index varying the - fastest. The iterator can also be indexed using basic slicing or - advanced indexing. - - See Also - -------- - ndarray.flat : Return a flat iterator over an array. - ndarray.flatten : Returns a flattened copy of an array. - - Notes - ----- - A `flatiter` iterator can not be constructed directly from Python code - by calling the `flatiter` constructor. - - Examples - -------- - >>> x = np.arange(6).reshape(2, 3) - >>> fl = x.flat - >>> type(fl) - <type 'numpy.flatiter'> - >>> for item in fl: - ... print item - ... 
- 0 - 1 - 2 - 3 - 4 - 5 - - >>> fl[2:4] - array([2, 3]) - - """) - -# flatiter attributes - -add_newdoc('numpy.core', 'flatiter', ('base', - """ - A reference to the array that is iterated over. - - Examples - -------- - >>> x = np.arange(5) - >>> fl = x.flat - >>> fl.base is x - True - - """)) - - - -add_newdoc('numpy.core', 'flatiter', ('coords', - """ - An N-dimensional tuple of current coordinates. - - Examples - -------- - >>> x = np.arange(6).reshape(2, 3) - >>> fl = x.flat - >>> fl.coords - (0, 0) - >>> fl.next() - 0 - >>> fl.coords - (0, 1) - - """)) - - - -add_newdoc('numpy.core', 'flatiter', ('index', - """ - Current flat index into the array. - - Examples - -------- - >>> x = np.arange(6).reshape(2, 3) - >>> fl = x.flat - >>> fl.index - 0 - >>> fl.next() - 0 - >>> fl.index - 1 - - """)) - -# flatiter functions - -add_newdoc('numpy.core', 'flatiter', ('__array__', - """__array__(type=None) Get array from iterator - - """)) - - -add_newdoc('numpy.core', 'flatiter', ('copy', - """ - copy() - - Get a copy of the iterator as a 1-D array. - - Examples - -------- - >>> x = np.arange(6).reshape(2, 3) - >>> x - array([[0, 1, 2], - [3, 4, 5]]) - >>> fl = x.flat - >>> fl.copy() - array([0, 1, 2, 3, 4, 5]) - - """)) - - -############################################################################### -# -# broadcast -# -############################################################################### - -add_newdoc('numpy.core', 'broadcast', - """ - Produce an object that mimics broadcasting. - - Parameters - ---------- - in1, in2, ... : array_like - Input parameters. - - Returns - ------- - b : broadcast object - Broadcast the input parameters against one another, and - return an object that encapsulates the result. - Amongst others, it has ``shape`` and ``nd`` properties, and - may be used as an iterator. 
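The C-contiguous (last-index-varies-fastest) order that `flatiter` follows, and that `broadcast` iteration follows as well, can be sketched in plain Python over nested lists. This is a hypothetical helper for illustration, not part of numpy:

```python
def flat_c_order(nested):
    # Yield the elements of an arbitrarily nested list in C-contiguous
    # order: the last (innermost) index varies fastest, exactly the order
    # in which a flatiter walks an array.
    if isinstance(nested, list):
        for sub in nested:
            for item in flat_c_order(sub):
                yield item
    else:
        yield nested
```

For a 2x3 nested list this visits row 0 left to right, then row 1, matching the `0 1 2 3 4 5` sequence printed in the flatiter example above.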
- - Examples - -------- - Manually adding two vectors, using broadcasting: - - >>> x = np.array([[1], [2], [3]]) - >>> y = np.array([4, 5, 6]) - >>> b = np.broadcast(x, y) - - >>> out = np.empty(b.shape) - >>> out.flat = [u+v for (u,v) in b] - >>> out - array([[ 5., 6., 7.], - [ 6., 7., 8.], - [ 7., 8., 9.]]) - - Compare against built-in broadcasting: - - >>> x + y - array([[5, 6, 7], - [6, 7, 8], - [7, 8, 9]]) - - """) - -# attributes - -add_newdoc('numpy.core', 'broadcast', ('index', - """ - current index in broadcasted result - - Examples - -------- - >>> x = np.array([[1], [2], [3]]) - >>> y = np.array([4, 5, 6]) - >>> b = np.broadcast(x, y) - >>> b.index - 0 - >>> b.next(), b.next(), b.next() - ((1, 4), (1, 5), (1, 6)) - >>> b.index - 3 - - """)) - -add_newdoc('numpy.core', 'broadcast', ('iters', - """ - tuple of iterators along ``self``'s "components." - - Returns a tuple of `numpy.flatiter` objects, one for each "component" - of ``self``. - - See Also - -------- - numpy.flatiter - - Examples - -------- - >>> x = np.array([1, 2, 3]) - >>> y = np.array([[4], [5], [6]]) - >>> b = np.broadcast(x, y) - >>> row, col = b.iters - >>> row.next(), col.next() - (1, 4) - - """)) - -add_newdoc('numpy.core', 'broadcast', ('nd', - """ - Number of dimensions of broadcasted result. - - Examples - -------- - >>> x = np.array([1, 2, 3]) - >>> y = np.array([[4], [5], [6]]) - >>> b = np.broadcast(x, y) - >>> b.nd - 2 - - """)) - -add_newdoc('numpy.core', 'broadcast', ('numiter', - """ - Number of iterators possessed by the broadcasted result. - - Examples - -------- - >>> x = np.array([1, 2, 3]) - >>> y = np.array([[4], [5], [6]]) - >>> b = np.broadcast(x, y) - >>> b.numiter - 2 - - """)) - -add_newdoc('numpy.core', 'broadcast', ('shape', - """ - Shape of broadcasted result. 
- - Examples - -------- - >>> x = np.array([1, 2, 3]) - >>> y = np.array([[4], [5], [6]]) - >>> b = np.broadcast(x, y) - >>> b.shape - (3, 3) - - """)) - -add_newdoc('numpy.core', 'broadcast', ('size', - """ - Total size of broadcasted result. - - Examples - -------- - >>> x = np.array([1, 2, 3]) - >>> y = np.array([[4], [5], [6]]) - >>> b = np.broadcast(x, y) - >>> b.size - 9 - - """)) - - -############################################################################### -# -# numpy functions -# -############################################################################### - -add_newdoc('numpy.core.multiarray', 'array', - """ - array(object, dtype=None, copy=True, order=None, subok=False, ndmin=0) - - Create an array. - - Parameters - ---------- - object : array_like - An array, any object exposing the array interface, an - object whose __array__ method returns an array, or any - (nested) sequence. - dtype : data-type, optional - The desired data-type for the array. If not given, then - the type will be determined as the minimum type required - to hold the objects in the sequence. This argument can only - be used to 'upcast' the array. For downcasting, use the - .astype(t) method. - copy : bool, optional - If true (default), then the object is copied. Otherwise, a copy - will only be made if __array__ returns a copy, if obj is a - nested sequence, or if a copy is needed to satisfy any of the other - requirements (`dtype`, `order`, etc.). - order : {'C', 'F', 'A'}, optional - Specify the order of the array. If order is 'C' (default), then the - array will be in C-contiguous order (last-index varies the - fastest). If order is 'F', then the returned array - will be in Fortran-contiguous order (first-index varies the - fastest). If order is 'A', then the returned array may - be in any order (either C-, Fortran-contiguous, or even - discontiguous). 
- subok : bool, optional - If True, then sub-classes will be passed-through, otherwise - the returned array will be forced to be a base-class array (default). - ndmin : int, optional - Specifies the minimum number of dimensions that the resulting - array should have. Ones will be pre-pended to the shape as - needed to meet this requirement. - - Examples - -------- - >>> np.array([1, 2, 3]) - array([1, 2, 3]) - - Upcasting: - - >>> np.array([1, 2, 3.0]) - array([ 1., 2., 3.]) - - More than one dimension: - - >>> np.array([[1, 2], [3, 4]]) - array([[1, 2], - [3, 4]]) - - Minimum dimensions 2: - - >>> np.array([1, 2, 3], ndmin=2) - array([[1, 2, 3]]) - - Type provided: - - >>> np.array([1, 2, 3], dtype=complex) - array([ 1.+0.j, 2.+0.j, 3.+0.j]) - - Data-type consisting of more than one element: - - >>> x = np.array([(1,2),(3,4)],dtype=[('a','<i4'),('b','<i4')]) - >>> x['a'] - array([1, 3]) - - Creating an array from sub-classes: - - >>> np.array(np.mat('1 2; 3 4')) - array([[1, 2], - [3, 4]]) - - >>> np.array(np.mat('1 2; 3 4'), subok=True) - matrix([[1, 2], - [3, 4]]) - - """) - -add_newdoc('numpy.core.multiarray', 'empty', - """ - empty(shape, dtype=float, order='C') - - Return a new array of given shape and type, without initializing entries. - - Parameters - ---------- - shape : int or tuple of int - Shape of the empty array - dtype : data-type, optional - Desired output data-type. - order : {'C', 'F'}, optional - Whether to store multi-dimensional data in C (row-major) or - Fortran (column-major) order in memory. - - See Also - -------- - empty_like, zeros, ones - - Notes - ----- - `empty`, unlike `zeros`, does not set the array values to zero, - and may therefore be marginally faster. On the other hand, it requires - the user to manually set all the values in the array, and should be - used with caution. 
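The ``ndmin`` rule described in the `array` parameters above (ones pre-pended to the shape until it has at least ``ndmin`` dimensions) reduces to a one-line shape transformation. A hypothetical sketch of just that rule:

```python
def pad_shape(shape, ndmin):
    # Sketch of np.array's ndmin behavior on shapes: pre-pend 1s until the
    # shape has at least ndmin dimensions; shapes already long enough are
    # returned unchanged.
    return (1,) * max(ndmin - len(shape), 0) + tuple(shape)
```

This is why ``np.array([1, 2, 3], ndmin=2)`` has shape ``(1, 3)`` rather than ``(3, 1)``: the new axes go in front.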
- - Examples - -------- - >>> np.empty([2, 2]) - array([[ -9.74499359e+001, 6.69583040e-309], - [ 2.13182611e-314, 3.06959433e-309]]) #random - - >>> np.empty([2, 2], dtype=int) - array([[-1073741821, -1067949133], - [ 496041986, 19249760]]) #random - - """) - - -add_newdoc('numpy.core.multiarray', 'scalar', - """ - scalar(dtype, obj) - - Return a new scalar array of the given type initialized with obj. - - This function is meant mainly for pickle support. `dtype` must be a - valid data-type descriptor. If `dtype` corresponds to an object - descriptor, then `obj` can be any object, otherwise `obj` must be a - string. If `obj` is not given, it will be interpreted as None for object - type and as zeros for all other types. - - """) - -add_newdoc('numpy.core.multiarray', 'zeros', - """ - zeros(shape, dtype=float, order='C') - - Return a new array of given shape and type, filled with zeros. - - Parameters - ---------- - shape : int or sequence of ints - Shape of the new array, e.g., ``(2, 3)`` or ``2``. - dtype : data-type, optional - The desired data-type for the array, e.g., `numpy.int8`. Default is - `numpy.float64`. - order : {'C', 'F'}, optional - Whether to store multidimensional data in C- or Fortran-contiguous - (row- or column-wise) order in memory. - - Returns - ------- - out : ndarray - Array of zeros with the given shape, dtype, and order. - - See Also - -------- - zeros_like : Return an array of zeros with shape and type of input. - ones_like : Return an array of ones with shape and type of input. - empty_like : Return an empty array with shape and type of input. - ones : Return a new array setting values to one. - empty : Return a new uninitialized array. 
- - Examples - -------- - >>> np.zeros(5) - array([ 0., 0., 0., 0., 0.]) - - >>> np.zeros((5,), dtype=numpy.int) - array([0, 0, 0, 0, 0]) - - >>> np.zeros((2, 1)) - array([[ 0.], - [ 0.]]) - - >>> s = (2,2) - >>> np.zeros(s) - array([[ 0., 0.], - [ 0., 0.]]) - - >>> np.zeros((2,), dtype=[('x', 'i4'), ('y', 'i4')]) # custom dtype - array([(0, 0), (0, 0)], - dtype=[('x', '<i4'), ('y', '<i4')]) - - """) - -add_newdoc('numpy.core.multiarray', 'fromstring', - """ - fromstring(string, dtype=float, count=-1, sep='') - - Return a new 1-D array initialized from raw binary or text data in - string. - - Examples - -------- - >>> np.fromstring('\\x01\\x02', dtype=np.uint8) - array([1, 2], dtype=uint8) - >>> np.fromstring('1 2', dtype=int, sep=' ') - array([1, 2]) - >>> np.fromstring('1, 2', dtype=int, sep=',') - array([1, 2]) - >>> np.fromstring('\\x01\\x02\\x03\\x04\\x05', dtype=np.uint8, count=3) - array([1, 2, 3], dtype=uint8) - - Invalid inputs: - - >>> np.fromstring('\\x01\\x02\\x03\\x04\\x05', dtype=np.int32) - Traceback (most recent call last): - File "<stdin>", line 1, in <module> - ValueError: string size must be a multiple of element size - >>> np.fromstring('\\x01\\x02', dtype=np.uint8, count=3) - Traceback (most recent call last): - File "<stdin>", line 1, in <module> - ValueError: string is smaller than requested size - - """) - -add_newdoc('numpy.core.multiarray', 'fromiter', - """ - fromiter(iterable, dtype, count=-1) - - Create a new 1-dimensional array from an iterable object. - - Parameters - ---------- - iterable : iterable object - An iterable object providing data for the array. - dtype : data-type - The data type of the returned array. - count : int, optional - The number of items to read from iterable. The default is -1, - which means all data is read. - - Returns - ------- - out : ndarray - The output array. - - Notes - ----- - Specify ``count`` to improve performance. It allows - ``fromiter`` to pre-allocate the output array, instead of - resizing it on demand. 
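The `fromiter` contract sketched in the Notes above, reading a typed 1-D array from an iterable with an optional item limit, can be mimicked in pure Python with the stdlib ``array`` module. A hypothetical sketch, not numpy's C implementation (with ``count`` known up front, the real code can pre-allocate the output instead of growing it):

```python
from array import array

def from_iter(iterable, typecode, count=-1):
    # Read items from the iterable into a typed 1-D array; stop after
    # count items when count >= 0, otherwise consume everything.
    out = array(typecode)
    for i, item in enumerate(iterable):
        if count >= 0 and i >= count:
            break
        out.append(item)
    return out
```

As with the real function, a negative ``count`` means "read all data", and values are coerced to the requested element type on append.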
- - Examples - -------- - >>> iterable = (x*x for x in range(5)) - >>> np.fromiter(iterable, np.float) - array([ 0., 1., 4., 9., 16.]) - - """) - -add_newdoc('numpy.core.multiarray', 'fromfile', - """ - fromfile(file, dtype=float, count=-1, sep='') - - Construct an array from data in a text or binary file. - - A highly efficient way of reading binary data with a known data-type, - as well as parsing simply formatted text files. Data written using the - `tofile` method can be read using this function. - - Parameters - ---------- - file : file or str - Open file object or filename. - dtype : data-type - Data type of the returned array. - For binary files, it is used to determine the size and byte-order - of the items in the file. - count : int - Number of items to read. ``-1`` means all items (i.e., the complete - file). - sep : str - Separator between items if file is a text file. - Empty ("") separator means the file should be treated as binary. - Spaces (" ") in the separator match zero or more whitespace characters. - A separator consisting only of spaces must match at least one - whitespace. - - See also - -------- - load, save - ndarray.tofile - loadtxt : More flexible way of loading data from a text file. - - Notes - ----- - Do not rely on the combination of `tofile` and `fromfile` for - data storage, as the binary files generated are not platform - independent. In particular, no byte-order or data-type information is - saved. Data can be stored in the platform independent ``.npy`` format - using `save` and `load` instead. - - Examples - -------- - Construct an ndarray: - - >>> dt = np.dtype([('time', [('min', int), ('sec', int)]), - ... 
('temp', float)]) - >>> x = np.zeros((1,), dtype=dt) - >>> x['time']['min'] = 10; x['temp'] = 98.25 - >>> x - array([((10, 0), 98.25)], - dtype=[('time', [('min', '<i4'), ('sec', '<i4')]), ('temp', '<f8')]) - - Save the raw data to disk: - - >>> import os - >>> fname = os.tmpnam() - >>> x.tofile(fname) - - Read the raw data from disk: - - >>> np.fromfile(fname, dtype=dt) - array([((10, 0), 98.25)], - dtype=[('time', [('min', '<i4'), ('sec', '<i4')]), ('temp', '<f8')]) - - The recommended way to store and load data: - - >>> np.save(fname, x) - >>> np.load(fname + '.npy') - array([((10, 0), 98.25)], - dtype=[('time', [('min', '<i4'), ('sec', '<i4')]), ('temp', '<f8')]) - - """) - -add_newdoc('numpy.core.multiarray', 'frombuffer', - """ - frombuffer(buffer, dtype=float, count=-1, offset=0) - - Interpret a buffer as a 1-dimensional array. - - Notes - ----- - If the buffer has data that is not in machine byte-order, this should - be specified as part of the data-type, e.g.: - - >>> dt = np.dtype(int) - >>> dt = dt.newbyteorder('>') - >>> np.frombuffer(buf, dtype=dt) - - The data of the resulting array will not be byteswapped, - but will be interpreted correctly. - - Examples - -------- - >>> s = 'hello world' - >>> np.frombuffer(s, dtype='S1', count=5, offset=6) - array(['w', 'o', 'r', 'l', 'd'], - dtype='|S1') - - """) - -add_newdoc('numpy.core.multiarray', 'concatenate', - """ - concatenate((a1, a2, ...), axis=0) - - Join a sequence of arrays together. - - Parameters - ---------- - a1, a2, ... : sequence of array_like - The arrays must have the same shape, except in the dimension - corresponding to `axis` (the first, by default). - axis : int, optional - The axis along which the arrays will be joined. Default is 0. - - Returns - ------- - res : ndarray - The concatenated array. - - See Also - -------- - ma.concatenate : Concatenate function that preserves input masks. - array_split : Split an array into multiple sub-arrays of equal or - near-equal size. - split : Split array into a list of multiple sub-arrays of equal size. - hsplit : Split array into multiple sub-arrays horizontally (column wise) - vsplit : Split array into multiple sub-arrays vertically (row wise) - dsplit : Split array into multiple sub-arrays along the 3rd axis (depth). 
- hstack : Stack arrays in sequence horizontally (column wise) - vstack : Stack arrays in sequence vertically (row wise) - dstack : Stack arrays in sequence depth wise (along third dimension) - - Notes - ----- - When one or more of the arrays to be concatenated is a MaskedArray, - this function will return a MaskedArray object instead of an ndarray, - but the input masks are *not* preserved. In cases where a MaskedArray - is expected as input, use the ma.concatenate function from the masked - array module instead. - - Examples - -------- - >>> a = np.array([[1, 2], [3, 4]]) - >>> b = np.array([[5, 6]]) - >>> np.concatenate((a, b), axis=0) - array([[1, 2], - [3, 4], - [5, 6]]) - >>> np.concatenate((a, b.T), axis=1) - array([[1, 2, 5], - [3, 4, 6]]) - - This function will not preserve masking of MaskedArray inputs. - - >>> a = np.ma.arange(3) - >>> a[1] = np.ma.masked - >>> b = np.arange(2, 5) - >>> a - masked_array(data = [0 -- 2], - mask = [False True False], - fill_value = 999999) - >>> b - array([2, 3, 4]) - >>> np.concatenate([a, b]) - masked_array(data = [0 1 2 2 3 4], - mask = False, - fill_value = 999999) - >>> np.ma.concatenate([a, b]) - masked_array(data = [0 -- 2 2 3 4], - mask = [False True False False False False], - fill_value = 999999) - - """) - -add_newdoc('numpy.core', 'inner', - """ - inner(a, b) - - Inner product of two arrays. - - Ordinary inner product of vectors for 1-D arrays (without complex - conjugation), in higher dimensions a sum product over the last axes. - - Parameters - ---------- - a, b : array_like - If `a` and `b` are nonscalar, their last dimensions of must match. - - Returns - ------- - out : ndarray - `out.shape = a.shape[:-1] + b.shape[:-1]` - - Raises - ------ - ValueError - If the last dimension of `a` and `b` has different size. - - See Also - -------- - tensordot : Sum products over arbitrary axes. - dot : Generalised matrix product, using second last dimension of `b`. 
- - Notes - ----- - For vectors (1-D arrays) it computes the ordinary inner-product:: - - np.inner(a, b) = sum(a[:]*b[:]) - - More generally, if `ndim(a) = r > 0` and `ndim(b) = s > 0`:: - - np.inner(a, b) = np.tensordot(a, b, axes=(-1,-1)) - - or explicitly:: - - np.inner(a, b)[i0,...,ir-1,j0,...,js-1] - = sum(a[i0,...,ir-1,:]*b[j0,...,js-1,:]) - - In addition `a` or `b` may be scalars, in which case:: - - np.inner(a,b) = a*b - - Examples - -------- - Ordinary inner product for vectors: - - >>> a = np.array([1,2,3]) - >>> b = np.array([0,1,0]) - >>> np.inner(a, b) - 2 - - A multidimensional example: - - >>> a = np.arange(24).reshape((2,3,4)) - >>> b = np.arange(4) - >>> np.inner(a, b) - array([[ 14, 38, 62], - [ 86, 110, 134]]) - - An example where `b` is a scalar: - - >>> np.inner(np.eye(2), 7) - array([[ 7., 0.], - [ 0., 7.]]) - - """) - -add_newdoc('numpy.core','fastCopyAndTranspose', - """_fastCopyAndTranspose(a)""") - -add_newdoc('numpy.core.multiarray','correlate', - """cross_correlate(a,v, mode=0)""") - -add_newdoc('numpy.core.multiarray', 'arange', - """ - arange([start,] stop[, step,], dtype=None) - - Return evenly spaced values within a given interval. - - Values are generated within the half-open interval ``[start, stop)`` - (in other words, the interval including `start` but excluding `stop`). - For integer arguments the function is equivalent to the Python built-in - `range `_ function, - but returns a ndarray rather than a list. - - Parameters - ---------- - start : number, optional - Start of interval. The interval includes this value. The default - start value is 0. - stop : number - End of interval. The interval does not include this value. - step : number, optional - Spacing between values. For any output `out`, this is the distance - between two adjacent values, ``out[i+1] - out[i]``. The default - step size is 1. If `step` is specified, `start` must also be given. - dtype : dtype - The type of the output array. 
If `dtype` is not given, infer the data - type from the other input arguments. - - Returns - ------- - out : ndarray - Array of evenly spaced values. - - For floating point arguments, the length of the result is - ``ceil((stop - start)/step)``. Because of floating point overflow, - this rule may result in the last element of `out` being greater - than `stop`. - - See Also - -------- - linspace : Evenly spaced numbers with careful handling of endpoints. - ogrid: Arrays of evenly spaced numbers in N-dimensions - mgrid: Grid-shaped arrays of evenly spaced numbers in N-dimensions - - Examples - -------- - >>> np.arange(3) - array([0, 1, 2]) - >>> np.arange(3.0) - array([ 0., 1., 2.]) - >>> np.arange(3,7) - array([3, 4, 5, 6]) - >>> np.arange(3,7,2) - array([3, 5]) - - """) - -add_newdoc('numpy.core.multiarray','_get_ndarray_c_version', - """_get_ndarray_c_version() - - Return the compile time NDARRAY_VERSION number. - - """) - -add_newdoc('numpy.core.multiarray','_reconstruct', - """_reconstruct(subtype, shape, dtype) - - Construct an empty array. Used by Pickles. - - """) - - -add_newdoc('numpy.core.multiarray', 'set_string_function', - """ - set_string_function(f, repr=1) - - Internal method to set a function to be used when pretty printing arrays. - - """) - -add_newdoc('numpy.core.multiarray', 'set_numeric_ops', - """ - set_numeric_ops(op1=func1, op2=func2, ...) - - Set numerical operators for array objects. - - Parameters - ---------- - op1, op2, ... : callable - Each ``op = func`` pair describes an operator to be replaced. - For example, ``add = lambda x, y: np.add(x, y) % 5`` would replace - addition by modulus 5 addition. - - Returns - ------- - saved_ops : list of callables - A list of all operators, stored before making replacements. - - Notes - ----- - .. WARNING:: - Use with care! Incorrect usage may lead to memory errors. - - A function replacing an operator cannot make use of that operator. - For example, when replacing add, you may not use ``+``. 
Instead, - directly call ufuncs. - - Examples - -------- - >>> def add_mod5(x, y): - ... return np.add(x, y) % 5 - ... - >>> old_funcs = np.set_numeric_ops(add=add_mod5) - - >>> x = np.arange(12).reshape((3, 4)) - >>> x + x - array([[0, 2, 4, 1], - [3, 0, 2, 4], - [1, 3, 0, 2]]) - - >>> ignore = np.set_numeric_ops(**old_funcs) # restore operators - - """) - -add_newdoc('numpy.core.multiarray', 'where', - """ - where(condition, [x, y]) - - Return elements, either from `x` or `y`, depending on `condition`. - - If only `condition` is given, return ``condition.nonzero()``. - - Parameters - ---------- - condition : array_like, bool - When True, yield `x`, otherwise yield `y`. - x, y : array_like, optional - Values from which to choose. `x` and `y` need to have the same - shape as `condition`. - - Returns - ------- - out : ndarray or tuple of ndarrays - If both `x` and `y` are specified, the output array contains - elements of `x` where `condition` is True, and elements from - `y` elsewhere. - - If only `condition` is given, return the tuple - ``condition.nonzero()``, the indices where `condition` is True. - - See Also - -------- - nonzero, choose - - Notes - ----- - If `x` and `y` are given and input arrays are 1-D, `where` is - equivalent to:: - - [xv if c else yv for (c,xv,yv) in zip(condition,x,y)] - - Examples - -------- - >>> np.where([[True, False], [True, True]], - ... [[1, 2], [3, 4]], - ... [[9, 8], [7, 6]]) - array([[1, 8], - [3, 4]]) - - >>> np.where([[0, 1], [1, 0]]) - (array([0, 1]), array([1, 0])) - - >>> x = np.arange(9.).reshape(3, 3) - >>> np.where( x > 5 ) - (array([2, 2, 2]), array([0, 1, 2])) - >>> x[np.where( x > 3.0 )] # Note: result is 1D. - array([ 4., 5., 6., 7., 8.]) - >>> np.where(x < 5, x, -1) # Note: broadcasting. - array([[ 0., 1., 2.], - [ 3., 4., -1.], - [-1., -1., -1.]]) - - """) - - -add_newdoc('numpy.core.multiarray', 'lexsort', - """ - lexsort(keys, axis=-1) - - Perform an indirect sort using a sequence of keys. 
- - Given multiple sorting keys, which can be interpreted as columns in a - spreadsheet, lexsort returns an array of integer indices that describes - the sort order by multiple columns. The last key in the sequence is used - for the primary sort order, the second-to-last key for the secondary sort - order, and so on. The keys argument must be a sequence of objects that - can be converted to arrays of the same shape. If a 2D array is provided - for the keys argument, its rows are interpreted as the sorting keys and - sorting is according to the last row, second-to-last row, and so on. - - Parameters - ---------- - keys : (k,N) array or tuple containing k (N,)-shaped sequences - The `k` different "columns" to be sorted. The last column (or row if - `keys` is a 2D array) is the primary sort key. - axis : int, optional - Axis to be indirectly sorted. By default, sort over the last axis. - - Returns - ------- - indices : (N,) ndarray of ints - Array of indices that sort the keys along the specified axis. - - See Also - -------- - argsort : Indirect sort. - ndarray.sort : In-place sort. - sort : Return a sorted copy of an array. - - Examples - -------- - Sort names: first by surname, then by name. - - >>> surnames = ('Hertz', 'Galilei', 'Hertz') - >>> first_names = ('Heinrich', 'Galileo', 'Gustav') - >>> ind = np.lexsort((first_names, surnames)) - >>> ind - array([1, 2, 0]) - - >>> [surnames[i] + ", " + first_names[i] for i in ind] - ['Galilei, Galileo', 'Hertz, Gustav', 'Hertz, Heinrich'] - - Sort two columns of numbers: - - >>> a = [1,5,1,4,3,4,4] # First column - >>> b = [9,4,0,4,0,2,1] # Second column - >>> ind = np.lexsort((b,a)) # Sort by a, then by b - >>> print ind - [2 0 4 6 5 3 1] - - >>> [(a[i],b[i]) for i in ind] - [(1, 0), (1, 9), (3, 0), (4, 1), (4, 2), (4, 4), (5, 4)] - - Note that sorting is first according to the elements of ``a``. - Secondary sorting is according to the elements of ``b``.
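The two-column example above can be checked end to end with a short script. This is an editorial sketch, not part of the original docstring; it reuses the same data as the example:

```python
import numpy as np

# Same data as the docstring example: sort primarily by `a`,
# breaking ties with `b`. Note the key order: np.lexsort treats the
# LAST key in the tuple as the primary sort key.
a = [1, 5, 1, 4, 3, 4, 4]   # primary key
b = [9, 4, 0, 4, 0, 2, 1]   # secondary key
ind = np.lexsort((b, a))    # secondary key first, primary key last

print(ind.tolist())                 # [2, 0, 4, 6, 5, 3, 1]
print([(a[i], b[i]) for i in ind])  # pairs sorted by a, then by b
```

Passing the keys in the opposite order, ``np.lexsort((a, b))``, would sort primarily by ``b`` instead.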
- - A normal ``argsort`` would have yielded: - - >>> [(a[i],b[i]) for i in np.argsort(a)] - [(1, 9), (1, 0), (3, 0), (4, 4), (4, 2), (4, 1), (5, 4)] - - Structured arrays are sorted lexically by ``argsort``: - - >>> x = np.array([(1,9), (5,4), (1,0), (4,4), (3,0), (4,2), (4,1)], - ... dtype=np.dtype([('x', int), ('y', int)])) - - >>> np.argsort(x) # or np.argsort(x, order=('x', 'y')) - array([2, 0, 4, 6, 5, 3, 1]) - - """) - -add_newdoc('numpy.core.multiarray', 'can_cast', - """ - can_cast(fromtype, totype) - - Returns True if cast between data types can occur without losing precision. - - Parameters - ---------- - fromtype : dtype or dtype specifier - Data type to cast from. - totype : dtype or dtype specifier - Data type to cast to. - - Returns - ------- - out : bool - True if cast can occur without losing precision. - - Examples - -------- - >>> np.can_cast(np.int32, np.int64) - True - >>> np.can_cast(np.float64, np.complex) - True - >>> np.can_cast(np.complex, np.float) - False - - >>> np.can_cast('i8', 'f8') - True - >>> np.can_cast('i8', 'f4') - False - >>> np.can_cast('i4', 'S4') - True - - """) - -add_newdoc('numpy.core.multiarray','newbuffer', - """newbuffer(size) - - Return a new uninitialized buffer object of size bytes - - """) - -add_newdoc('numpy.core.multiarray', 'getbuffer', - """ - getbuffer(obj [,offset[, size]]) - - Create a buffer object from the given object referencing a slice of - length size starting at offset. - - Default is the entire buffer. A read-write buffer is attempted followed - by a read-only buffer. - - Parameters - ---------- - obj : object - - offset : int, optional - - size : int, optional - - Returns - ------- - buffer_obj : buffer - - Examples - -------- - >>> buf = np.getbuffer(np.ones(5), 1, 3) - >>> len(buf) - 3 - >>> buf[0] - '\\x00' - >>> buf - - - """) - -add_newdoc('numpy.core', 'dot', - """ - dot(a, b) - - Dot product of two arrays. 
- - For 2-D arrays it is equivalent to matrix multiplication, and for 1-D - arrays to inner product of vectors (without complex conjugation). For - N dimensions it is a sum product over the last axis of `a` and - the second-to-last of `b`:: - - dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m]) - - Parameters - ---------- - a : array_like - First argument. - b : array_like - Second argument. - - Returns - ------- - output : ndarray - Returns the dot product of `a` and `b`. If `a` and `b` are both - scalars or both 1-D arrays then a scalar is returned; otherwise - an array is returned. - - Raises - ------ - ValueError - If the last dimension of `a` is not the same size as - the second-to-last dimension of `b`. - - See Also - -------- - vdot : Complex-conjugating dot product. - tensordot : Sum products over arbitrary axes. - - Examples - -------- - >>> np.dot(3, 4) - 12 - - Neither argument is complex-conjugated: - - >>> np.dot([2j, 3j], [2j, 3j]) - (-13+0j) - - For 2-D arrays it's the matrix product: - - >>> a = [[1, 0], [0, 1]] - >>> b = [[4, 1], [2, 2]] - >>> np.dot(a, b) - array([[4, 1], - [2, 2]]) - - >>> a = np.arange(3*4*5*6).reshape((3,4,5,6)) - >>> b = np.arange(3*4*5*6)[::-1].reshape((5,4,6,3)) - >>> np.dot(a, b)[2,3,2,1,2,2] - 499128 - >>> sum(a[2,3,2,:] * b[1,2,:,2]) - 499128 - - """) - -add_newdoc('numpy.core', 'alterdot', - """ - Change `dot`, `vdot`, and `innerproduct` to use accelerated BLAS functions. - - Typically, as a user of Numpy, you do not explicitly call this function. If - Numpy is built with an accelerated BLAS, this function is automatically - called when Numpy is imported. - - When Numpy is built with an accelerated BLAS like ATLAS, these functions - are replaced to make use of the faster implementations. The faster - implementations only affect float32, float64, complex64, and complex128 - arrays. Furthermore, the BLAS API only includes matrix-matrix, - matrix-vector, and vector-vector products. 
Products of arrays with larger - dimensionalities use the built-in functions and are not accelerated. - - See Also - -------- - restoredot : `restoredot` undoes the effects of `alterdot`. - - """) - -add_newdoc('numpy.core', 'restoredot', - """ - Restore `dot`, `vdot`, and `innerproduct` to the default non-BLAS - implementations. - - Typically, the user will only need to call this when troubleshooting an - installation problem, reproducing the conditions of a build without an - accelerated BLAS, or when being very careful about benchmarking linear - algebra operations. - - See Also - -------- - alterdot : `restoredot` undoes the effects of `alterdot`. - - """) - -add_newdoc('numpy.core', 'vdot', - """ - Return the dot product of two vectors. - - The vdot(`a`, `b`) function handles complex numbers differently than - dot(`a`, `b`). If the first argument is complex the complex conjugate - of the first argument is used for the calculation of the dot product. - - For 2-D arrays it is equivalent to matrix multiplication, and for 1-D - arrays to inner product of vectors (with complex conjugation of `a`). - For N dimensions it is a sum product over the last axis of `a` and - the second-to-last of `b`:: - - dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m]) - - Parameters - ---------- - a : array_like - If `a` is complex the complex conjugate is taken before calculation - of the dot product. - b : array_like - Second argument to the dot product. - - Returns - ------- - output : ndarray - Returns dot product of `a` and `b`. Can be an int, float, or - complex depending on the types of `a` and `b`. - - See Also - -------- - dot : Return the dot product without using the complex conjugate of the - first argument. - - Notes - ----- - The dot product is the summation of element-wise multiplication. - - ..
math:: - a \\cdot b = \\sum_{i=1}^n a_i^*b_i = a_1^*b_1+a_2^*b_2+\\cdots+a_n^*b_n - - Examples - -------- - >>> a = np.array([1+2j,3+4j]) - >>> b = np.array([5+6j,7+8j]) - >>> np.vdot(a, b) - (70-8j) - >>> np.vdot(b, a) - (70+8j) - >>> a = np.array([[1, 4], [5, 6]]) - >>> b = np.array([[4, 1], [2, 2]]) - >>> np.vdot(a, b) - 30 - >>> np.vdot(b, a) - 30 - - """) - - -############################################################################## -# -# Documentation for ndarray attributes and methods -# -############################################################################## - - -############################################################################## -# -# ndarray object -# -############################################################################## - - -add_newdoc('numpy.core.multiarray', 'ndarray', - """ - ndarray(shape, dtype=float, buffer=None, offset=0, - strides=None, order=None) - - An array object represents a multidimensional, homogeneous array - of fixed-size items. An associated data-type object describes the - format of each element in the array (its byte-order, how many bytes it - occupies in memory, whether it is an integer, a floating point number, - or something else, etc.) - - Arrays should be constructed using `array`, `zeros` or `empty` (refer - to the See Also section below). The parameters given here refer to - a low-level method (`ndarray(...)`) for instantiating an array. - - For more information, refer to the `numpy` module and examine the - methods and attributes of an array. - - Parameters - ---------- - (for the __new__ method; see Notes below) - - shape : tuple of ints - Shape of created array. - dtype : data-type, optional - Any object that can be interpreted as a numpy data type. - buffer : object exposing buffer interface, optional - Used to fill the array with data. - offset : int, optional - Offset of array data in buffer. - strides : tuple of ints, optional - Strides of data in memory.
- order : {'C', 'F'}, optional - Row-major or column-major order. - - Attributes - ---------- - T : ndarray - Transpose of the array. - data : buffer - The array's elements, in memory. - dtype : dtype object - Describes the format of the elements in the array. - flags : dict - Dictionary containing information related to memory use, e.g., - 'C_CONTIGUOUS', 'OWNDATA', 'WRITEABLE', etc. - flat : numpy.flatiter object - Flattened version of the array as an iterator. The iterator - allows assignments, e.g., ``x.flat = 3`` (See `ndarray.flat` for - assignment examples; TODO). - imag : ndarray - Imaginary part of the array. - real : ndarray - Real part of the array. - size : int - Number of elements in the array. - itemsize : int - The memory use of each array element in bytes. - nbytes : int - The total number of bytes required to store the array data, - i.e., ``itemsize * size``. - ndim : int - The array's number of dimensions. - shape : tuple of ints - Shape of the array. - strides : tuple of ints - The step-size required to move from one element to the next in - memory. For example, a contiguous ``(3, 4)`` array of type - ``int16`` in C-order has strides ``(8, 2)``. This implies that - to move from element to element in memory requires jumps of 2 bytes. - To move from row-to-row, one needs to jump 8 bytes at a time - (``2 * 4``). - ctypes : ctypes object - Class containing properties of the array needed for interaction - with ctypes. - base : ndarray - If the array is a view into another array, that array is its `base` - (unless that array is also a view). The `base` array is where the - array data is actually stored. - - See Also - -------- - array : Construct an array. - zeros : Create an array, each element of which is zero. - empty : Create an array, but leave its allocated memory unchanged (i.e., - it contains "garbage"). - dtype : Create a data-type. - - Notes - ----- - There are two modes of creating an array using ``__new__``: - - 1. 
If `buffer` is None, then only `shape`, `dtype`, and `order` - are used. - 2. If `buffer` is an object exposing the buffer interface, then - all keywords are interpreted. - - No ``__init__`` method is needed because the array is fully initialized - after the ``__new__`` method. - - Examples - -------- - These examples illustrate the low-level `ndarray` constructor. Refer - to the `See Also` section above for easier ways of constructing an - ndarray. - - First mode, `buffer` is None: - - >>> np.ndarray(shape=(2,2), dtype=float, order='F') - array([[ -1.13698227e+002, 4.25087011e-303], - [ 2.88528414e-306, 3.27025015e-309]]) #random - - Second mode: - - >>> np.ndarray((2,), buffer=np.array([1,2,3]), - ... offset=np.int_().itemsize, - ... dtype=int) # offset = 1*itemsize, i.e. skip first element - array([2, 3]) - - """) - - -############################################################################## -# -# ndarray attributes -# -############################################################################## - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('__array_interface__', - """Array protocol: Python side.""")) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('__array_finalize__', - """None.""")) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('__array_priority__', - """Array priority.""")) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('__array_struct__', - """Array protocol: C-struct side.""")) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('_as_parameter_', - """Allow the array to be interpreted as a ctypes object by returning the - data-memory location as an integer - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('base', - """ - Base object if memory is from some other object. 
- - Examples - -------- - The base of an array that owns its memory is None: - - >>> x = np.array([1,2,3,4]) - >>> x.base is None - True - - Slicing creates a view, whose memory is shared with x: - - >>> y = x[2:] - >>> y.base is x - True - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('ctypes', - """ - An object to simplify the interaction of the array with the ctypes - module. - - This attribute creates an object that makes it easier to use arrays - when calling shared libraries with the ctypes module. The returned - object has, among others, data, shape, and strides attributes (see - Notes below) which themselves return ctypes objects that can be used - as arguments to a shared library. - - Parameters - ---------- - None - - Returns - ------- - c : Python object - Possessing attributes data, shape, strides, etc. - - See Also - -------- - numpy.ctypeslib - - Notes - ----- - Below are the public attributes of this object which were documented - in "Guide to NumPy" (we have omitted undocumented public attributes, - as well as documented private attributes): - - * data: A pointer to the memory area of the array as a Python integer. - This memory area may contain data that is not aligned, or not in correct - byte-order. The memory area may not even be writeable. The array - flags and data-type of this array should be respected when passing this - attribute to arbitrary C-code to avoid trouble that can include Python - crashing. User Beware! The value of this attribute is exactly the same - as self._array_interface_['data'][0]. - - * shape (c_intp*self.ndim): A ctypes array of length self.ndim where - the basetype is the C-integer corresponding to dtype('p') on this - platform. This base-type could be c_int, c_long, or c_longlong - depending on the platform. The c_intp type is defined accordingly in - numpy.ctypeslib. The ctypes array contains the shape of the underlying - array. 
- - * strides (c_intp*self.ndim): A ctypes array of length self.ndim where - the basetype is the same as for the shape attribute. This ctypes array - contains the strides information from the underlying array. This strides - information is important for showing how many bytes must be jumped to - get to the next element in the array. - - * data_as(obj): Return the data pointer cast to a particular c-types object. - For example, calling self._as_parameter_ is equivalent to - self.data_as(ctypes.c_void_p). Perhaps you want to use the data as a - pointer to a ctypes array of floating-point data: - self.data_as(ctypes.POINTER(ctypes.c_double)). - - * shape_as(obj): Return the shape tuple as an array of some other c-types - type. For example: self.shape_as(ctypes.c_short). - - * strides_as(obj): Return the strides tuple as an array of some other - c-types type. For example: self.strides_as(ctypes.c_longlong). - - Be careful using the ctypes attribute - especially on temporary - arrays or arrays constructed on the fly. For example, calling - ``(a+b).ctypes.data_as(ctypes.c_void_p)`` returns a pointer to memory - that is invalid because the array created as (a+b) is deallocated - before the next Python statement. You can avoid this problem using - either ``c=a+b`` or ``ct=(a+b).ctypes``. In the latter case, ct will - hold a reference to the array until ct is deleted or re-assigned. - - If the ctypes module is not available, then the ctypes attribute - of array objects still returns something useful, but ctypes objects - are not returned and errors may be raised instead. In particular, - the object will still have the as parameter attribute which will - return an integer equal to the data attribute. 
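The `data_as`/`shape_as` pattern described in the notes above can be sketched briefly. This is an illustrative editorial addition, not part of the original docstring; the printed values and the best-fitting ctypes integer type are platform-dependent:

```python
import ctypes
import numpy as np

x = np.array([[0.0, 1.0], [2.0, 3.0]])   # C-contiguous float64

# Keep a named reference (`x`) alive while the pointer is in use --
# this avoids the temporary-array pitfall described above.
ptr = x.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
print(ptr[0], ptr[3])      # first and last elements read through the raw pointer

shape = x.ctypes.shape_as(ctypes.c_long)
print(list(shape))         # the array's shape as a ctypes array
```

A pointer obtained this way is what you would hand to a C routine loaded with `ctypes.CDLL`; the array's flags (alignment, writeability) must still be respected on the C side.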
- - Examples - -------- - >>> import ctypes - >>> x - array([[0, 1], - [2, 3]]) - >>> x.ctypes.data - 30439712 - >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_long)) - - >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_long)).contents - c_long(0) - >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_longlong)).contents - c_longlong(4294967296L) - >>> x.ctypes.shape - - >>> x.ctypes.shape_as(ctypes.c_long) - - >>> x.ctypes.strides - - >>> x.ctypes.strides_as(ctypes.c_longlong) - - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('data', - """Python buffer object pointing to the start of the array's data.""")) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('dtype', - """ - Data-type of the array's elements. - - Parameters - ---------- - None - - Returns - ------- - d : numpy dtype object - - See Also - -------- - numpy.dtype - - Examples - -------- - >>> x - array([[0, 1], - [2, 3]]) - >>> x.dtype - dtype('int32') - >>> type(x.dtype) - - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('imag', - """ - The imaginary part of the array. - - Examples - -------- - >>> x = np.sqrt([1+0j, 0+1j]) - >>> x.imag - array([ 0. , 0.70710678]) - >>> x.imag.dtype - dtype('float64') - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('itemsize', - """ - Length of one array element in bytes. - - Examples - -------- - >>> x = np.array([1,2,3], dtype=np.float64) - >>> x.itemsize - 8 - >>> x = np.array([1,2,3], dtype=np.complex128) - >>> x.itemsize - 16 - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('flags', - """ - Information about the memory layout of the array. - - Attributes - ---------- - C_CONTIGUOUS (C) - The data is in a single, C-style contiguous segment. - F_CONTIGUOUS (F) - The data is in a single, Fortran-style contiguous segment. - OWNDATA (O) - The array owns the memory it uses or borrows it from another object. - WRITEABLE (W) - The data area can be written to. Setting this to False locks - the data, making it read-only. 
A view (slice, etc.) inherits WRITEABLE - from its base array at creation time, but a view of a writeable - array may be subsequently locked while the base array remains writeable. - (The opposite is not true, in that a view of a locked array may not - be made writeable. However, currently, locking a base object does not - lock any views that already reference it, so under that circumstance it - is possible to alter the contents of a locked array via a previously - created writeable view onto it.) Attempting to change a non-writeable - array raises a RuntimeError exception. - ALIGNED (A) - The data and strides are aligned appropriately for the hardware. - UPDATEIFCOPY (U) - This array is a copy of some other array. When this array is - deallocated, the base array will be updated with the contents of - this array. - - FNC - F_CONTIGUOUS and not C_CONTIGUOUS. - FORC - F_CONTIGUOUS or C_CONTIGUOUS (one-segment test). - BEHAVED (B) - ALIGNED and WRITEABLE. - CARRAY (CA) - BEHAVED and C_CONTIGUOUS. - FARRAY (FA) - BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS. - - Notes - ----- - The `flags` object can be accessed dictionary-like (as in ``a.flags['WRITEABLE']``), - or by using lowercased attribute names (as in ``a.flags.writeable``). Short flag - names are only supported in dictionary access. - - Only the UPDATEIFCOPY, WRITEABLE, and ALIGNED flags can be changed by - the user, via direct assignment to the attribute or dictionary entry, - or by calling `ndarray.setflags`. - - The array flags cannot be set arbitrarily: - - - UPDATEIFCOPY can only be set ``False``. - - ALIGNED can only be set ``True`` if the data is truly aligned. - - WRITEABLE can only be set ``True`` if the array owns its own memory - or the ultimate owner of the memory exposes a writeable buffer - interface or is a string. - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('flat', - """ - A 1-D iterator over the array. 
- - This is a `numpy.flatiter` instance, which acts similarly to, but is not - a subclass of, Python's built-in iterator object. - - See Also - -------- - flatten : Return a copy of the array collapsed into one dimension. - - flatiter - - Examples - -------- - >>> x = np.arange(1, 7).reshape(2, 3) - >>> x - array([[1, 2, 3], - [4, 5, 6]]) - >>> x.flat[3] - 4 - >>> x.T - array([[1, 4], - [2, 5], - [3, 6]]) - >>> x.T.flat[3] - 5 - >>> type(x.flat) - - - An assignment example: - - >>> x.flat = 3; x - array([[3, 3, 3], - [3, 3, 3]]) - >>> x.flat[[1,4]] = 1; x - array([[3, 1, 3], - [3, 1, 3]]) - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('nbytes', - """ - Total bytes consumed by the elements of the array. - - Notes - ----- - Does not include memory consumed by non-element attributes of the - array object. - - Examples - -------- - >>> x = np.zeros((3,5,2), dtype=np.complex128) - >>> x.nbytes - 480 - >>> np.prod(x.shape) * x.itemsize - 480 - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('ndim', - """ - Number of array dimensions. - - Examples - -------- - >>> x = np.array([1, 2, 3]) - >>> x.ndim - 1 - >>> y = np.zeros((2, 3, 4)) - >>> y.ndim - 3 - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('real', - """ - The real part of the array. - - Examples - -------- - >>> x = np.sqrt([1+0j, 0+1j]) - >>> x.real - array([ 1. , 0.70710678]) - >>> x.real.dtype - dtype('float64') - - See Also - -------- - numpy.real : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('shape', - """ - Tuple of array dimensions. 
- - Notes - ----- - May be used to "reshape" the array, as long as this would not - require a change in the total number of elements - - Examples - -------- - >>> x = np.array([1, 2, 3, 4]) - >>> x.shape - (4,) - >>> y = np.zeros((2, 3, 4)) - >>> y.shape - (2, 3, 4) - >>> y.shape = (3, 8) - >>> y - array([[ 0., 0., 0., 0., 0., 0., 0., 0.], - [ 0., 0., 0., 0., 0., 0., 0., 0.], - [ 0., 0., 0., 0., 0., 0., 0., 0.]]) - >>> y.shape = (3, 6) - Traceback (most recent call last): - File "", line 1, in - ValueError: total size of new array must be unchanged - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('size', - """ - Number of elements in the array. - - Equivalent to ``np.prod(a.shape)``, i.e., the product of the array's - dimensions. - - Examples - -------- - >>> x = np.zeros((3, 5, 2), dtype=np.complex128) - >>> x.size - 30 - >>> np.prod(x.shape) - 30 - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('strides', - """ - Tuple of bytes to step in each dimension when traversing an array. - - The byte offset of element ``(i[0], i[1], ..., i[n])`` in an array `a` - is:: - - offset = sum(np.array(i) * a.strides) - - A more detailed explanation of strides can be found in the - "ndarray.rst" file in the NumPy reference guide. - - Notes - ----- - Imagine an array of 32-bit integers (each 4 bytes):: - - x = np.array([[0, 1, 2, 3, 4], - [5, 6, 7, 8, 9]], dtype=np.int32) - - This array is stored in memory as 40 bytes, one after the other - (known as a contiguous block of memory). The strides of an array tell - us how many bytes we have to skip in memory to move to the next position - along a certain axis. For example, we have to skip 4 bytes (1 value) to - move to the next column, but 20 bytes (5 values) to get to the same - position in the next row. As such, the strides for the array `x` will be - ``(20, 4)``. 
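The ``(20, 4)`` strides worked out above are easy to verify directly. The following check is an editorial sketch, not part of the original docstring:

```python
import numpy as np

x = np.array([[0, 1, 2, 3, 4],
              [5, 6, 7, 8, 9]], dtype=np.int32)
print(x.strides)   # (20, 4): 20 bytes to the next row, 4 to the next column

# Byte-offset formula from above: offset = sum(i * strides)
offset = sum(i * s for i, s in zip((1, 2), x.strides))
print(offset)                        # 28 bytes from the start of the buffer
print(x.flat[offset // x.itemsize])  # the element at x[1, 2]
```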
- - See Also - -------- - numpy.lib.stride_tricks.as_strided - - Examples - -------- - >>> y = np.reshape(np.arange(2*3*4), (2,3,4)) - >>> y - array([[[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]], - [[12, 13, 14, 15], - [16, 17, 18, 19], - [20, 21, 22, 23]]]) - >>> y.strides - (48, 16, 4) - >>> y[1,1,1] - 17 - >>> offset=sum(y.strides * np.array((1,1,1))) - >>> offset/y.itemsize - 17 - - >>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0) - >>> x.strides - (32, 4, 224, 1344) - >>> i = np.array([3,5,2,2]) - >>> offset = sum(i * x.strides) - >>> x[3,5,2,2] - 813 - >>> offset / x.itemsize - 813 - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('T', - """ - Same as self.transpose(), except that self is returned if - self.ndim < 2. - - Examples - -------- - >>> x = np.array([[1.,2.],[3.,4.]]) - >>> x - array([[ 1., 2.], - [ 3., 4.]]) - >>> x.T - array([[ 1., 3.], - [ 2., 4.]]) - >>> x = np.array([1.,2.,3.,4.]) - >>> x - array([ 1., 2., 3., 4.]) - >>> x.T - array([ 1., 2., 3., 4.]) - - """)) - - -############################################################################## -# -# ndarray methods -# -############################################################################## - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('__array__', - """ a.__array__(|dtype) -> reference if type unchanged, copy otherwise. - - Returns either a new reference to self if dtype is not given or a new array - of provided data type if dtype is different from the current dtype of the - array. - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('__array_prepare__', - """a.__array_prepare__(obj) -> Object of same type as ndarray object obj. - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('__array_wrap__', - """a.__array_wrap__(obj) -> Object of same type as ndarray object a. - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('__copy__', - """a.__copy__([order]) - - Return a copy of the array. 
- - Parameters - ---------- - order : {'C', 'F', 'A'}, optional - If order is 'C' (False) then the result is contiguous (default). - If order is 'Fortran' (True) then the result has fortran order. - If order is 'Any' (None) then the result has fortran order - only if the array already is in fortran order. - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('__deepcopy__', - """a.__deepcopy__() -> Deep copy of array. - - Used if copy.deepcopy is called on an array. - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('__reduce__', - """a.__reduce__() - - For pickling. - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('__setstate__', - """a.__setstate__(version, shape, dtype, isfortran, rawdata) - - For unpickling. - - Parameters - ---------- - version : int - optional pickle version. If omitted defaults to 0. - shape : tuple - dtype : data-type - isFortran : bool - rawdata : string or list - a binary string with the data (or a list if 'a' is an object array) - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('all', - """ - a.all(axis=None, out=None) - - Returns True if all elements evaluate to True. - - Refer to `numpy.all` for full documentation. - - See Also - -------- - numpy.all : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('any', - """ - a.any(axis=None, out=None) - - Returns True if any of the elements of `a` evaluate to True. - - Refer to `numpy.any` for full documentation. - - See Also - -------- - numpy.any : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('argmax', - """ - a.argmax(axis=None, out=None) - - Return indices of the maximum values along the given axis. - - Refer to `numpy.argmax` for full documentation. 
-
-    See Also
-    --------
-    numpy.argmax : equivalent function
-
-    """))
-
-
-add_newdoc('numpy.core.multiarray', 'ndarray', ('argmin',
-    """
-    a.argmin(axis=None, out=None)
-
-    Return indices of the minimum values along the given axis of `a`.
-
-    Refer to `numpy.argmin` for detailed documentation.
-
-    See Also
-    --------
-    numpy.argmin : equivalent function
-
-    """))
-
-
-add_newdoc('numpy.core.multiarray', 'ndarray', ('argsort',
-    """
-    a.argsort(axis=-1, kind='quicksort', order=None)
-
-    Returns the indices that would sort this array.
-
-    Refer to `numpy.argsort` for full documentation.
-
-    See Also
-    --------
-    numpy.argsort : equivalent function
-
-    """))
-
-
-add_newdoc('numpy.core.multiarray', 'ndarray', ('astype',
-    """
-    a.astype(t)
-
-    Copy of the array, cast to a specified type.
-
-    Parameters
-    ----------
-    t : string or dtype
-        Typecode or data-type to which the array is cast.
-
-    Examples
-    --------
-    >>> x = np.array([1, 2, 2.5])
-    >>> x
-    array([ 1. ,  2. ,  2.5])
-
-    >>> x.astype(int)
-    array([1, 2, 2])
-
-    """))
-
-
-add_newdoc('numpy.core.multiarray', 'ndarray', ('byteswap',
-    """
-    a.byteswap(inplace=False)
-
-    Swap the bytes of the array elements.
-
-    Toggle between little-endian and big-endian data representation by
-    returning a byteswapped array, optionally swapped in-place.
-
-    Parameters
-    ----------
-    inplace : bool, optional
-        If ``True``, swap bytes in-place; default is ``False``.
-
-    Returns
-    -------
-    out : ndarray
-        The byteswapped array. If `inplace` is ``True``, this is
-        a view to self.
- - Examples - -------- - >>> A = np.array([1, 256, 8755], dtype=np.int16) - >>> map(hex, A) - ['0x1', '0x100', '0x2233'] - >>> A.byteswap(True) - array([ 256, 1, 13090], dtype=int16) - >>> map(hex, A) - ['0x100', '0x1', '0x3322'] - - Arrays of strings are not swapped - - >>> A = np.array(['ceg', 'fac']) - >>> A.byteswap() - array(['ceg', 'fac'], - dtype='|S3') - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('choose', - """ - a.choose(choices, out=None, mode='raise') - - Use an index array to construct a new array from a set of choices. - - Refer to `numpy.choose` for full documentation. - - See Also - -------- - numpy.choose : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('clip', - """ - a.clip(a_min, a_max, out=None) - - Return an array whose values are limited to ``[a_min, a_max]``. - - Refer to `numpy.clip` for full documentation. - - See Also - -------- - numpy.clip : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('compress', - """ - a.compress(condition, axis=None, out=None) - - Return selected slices of this array along given axis. - - Refer to `numpy.compress` for full documentation. - - See Also - -------- - numpy.compress : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('conj', - """ - a.conj() - - Complex-conjugate all elements. - - Refer to `numpy.conjugate` for full documentation. - - See Also - -------- - numpy.conjugate : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('conjugate', - """ - a.conjugate() - - Return the complex conjugate, element-wise. - - Refer to `numpy.conjugate` for full documentation. - - See Also - -------- - numpy.conjugate : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('copy', - """ - a.copy(order='C') - - Return a copy of the array. 
- - Parameters - ---------- - order : {'C', 'F', 'A'}, optional - By default, the result is stored in C-contiguous (row-major) order in - memory. If `order` is `F`, the result has 'Fortran' (column-major) - order. If order is 'A' ('Any'), then the result has the same order - as the input. - - Examples - -------- - >>> x = np.array([[1,2,3],[4,5,6]], order='F') - - >>> y = x.copy() - - >>> x.fill(0) - - >>> x - array([[0, 0, 0], - [0, 0, 0]]) - - >>> y - array([[1, 2, 3], - [4, 5, 6]]) - - >>> y.flags['C_CONTIGUOUS'] - True - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('cumprod', - """ - a.cumprod(axis=None, dtype=None, out=None) - - Return the cumulative product of the elements along the given axis. - - Refer to `numpy.cumprod` for full documentation. - - See Also - -------- - numpy.cumprod : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('cumsum', - """ - a.cumsum(axis=None, dtype=None, out=None) - - Return the cumulative sum of the elements along the given axis. - - Refer to `numpy.cumsum` for full documentation. - - See Also - -------- - numpy.cumsum : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('diagonal', - """ - a.diagonal(offset=0, axis1=0, axis2=1) - - Return specified diagonals. - - Refer to `numpy.diagonal` for full documentation. - - See Also - -------- - numpy.diagonal : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('dump', - """a.dump(file) - - Dump a pickle of the array to the specified file. - The array can be read back with pickle.load or numpy.load. - - Parameters - ---------- - file : str - A string naming the dump file. - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('dumps', - """ - a.dumps() - - Returns the pickle of the array as a string. - pickle.loads or numpy.loads will convert the string back to an array. 
- - Parameters - ---------- - None - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('fill', - """ - a.fill(value) - - Fill the array with a scalar value. - - Parameters - ---------- - value : scalar - All elements of `a` will be assigned this value. - - Examples - -------- - >>> a = np.array([1, 2]) - >>> a.fill(0) - >>> a - array([0, 0]) - >>> a = np.empty(2) - >>> a.fill(1) - >>> a - array([ 1., 1.]) - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('flatten', - """ - a.flatten(order='C') - - Return a copy of the array collapsed into one dimension. - - Parameters - ---------- - order : {'C', 'F'}, optional - Whether to flatten in C (row-major) or Fortran (column-major) order. - The default is 'C'. - - Returns - ------- - y : ndarray - A copy of the input array, flattened to one dimension. - - See Also - -------- - ravel : Return a flattened array. - flat : A 1-D flat iterator over the array. - - Examples - -------- - >>> a = np.array([[1,2], [3,4]]) - >>> a.flatten() - array([1, 2, 3, 4]) - >>> a.flatten('F') - array([1, 3, 2, 4]) - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('getfield', - """ - a.getfield(dtype, offset) - - Returns a field of the given array as a certain type. - - A field is a view of the array data with each itemsize determined - by the given type and the offset into the current array, i.e. from - ``offset * dtype.itemsize`` to ``(offset+1) * dtype.itemsize``. - - Parameters - ---------- - dtype : str - String denoting the data type of the field. - offset : int - Number of `dtype.itemsize`'s to skip before beginning the element view. - - Examples - -------- - >>> x = np.diag([1.+1.j]*2) - >>> x - array([[ 1.+1.j, 0.+0.j], - [ 0.+0.j, 1.+1.j]]) - >>> x.dtype - dtype('complex128') - - >>> x.getfield('complex64', 0) # Note how this != x - array([[ 0.+1.875j, 0.+0.j ], - [ 0.+0.j , 0.+1.875j]], dtype=complex64) - - >>> x.getfield('complex64',1) # Note how different this is than x - array([[ 0. 
+5.87173204e-39j,  0. +0.00000000e+00j],
-           [ 0. +0.00000000e+00j,  0. +5.87173204e-39j]], dtype=complex64)
-
-    >>> x.getfield('complex128', 0) # == x
-    array([[ 1.+1.j,  0.+0.j],
-           [ 0.+0.j,  1.+1.j]])
-
-    If the argument dtype is the same as x.dtype, then offset != 0 raises
-    a ValueError:
-
-    >>> x.getfield('complex128', 1)
-    Traceback (most recent call last):
-      File "<stdin>", line 1, in <module>
-    ValueError: Need 0 <= offset <= 0 for requested type but received offset = 1
-
-    >>> x.getfield('float64', 0)
-    array([[ 1.,  0.],
-           [ 0.,  1.]])
-
-    >>> x.getfield('float64', 1)
-    array([[ 1.77658241e-307,  0.00000000e+000],
-           [ 0.00000000e+000,  1.77658241e-307]])
-
-    """))
-
-
-add_newdoc('numpy.core.multiarray', 'ndarray', ('item',
-    """
-    a.item(*args)
-
-    Copy an element of an array to a standard Python scalar and return it.
-
-    Parameters
-    ----------
-    \\*args : Arguments (variable number and type)
-
-        * none: in this case, the method only works for arrays
-          with one element (`a.size == 1`), which element is
-          copied into a standard Python scalar object and returned.
-
-        * int_type: this argument is interpreted as a flat index into
-          the array, specifying which element to copy and return.
-
-        * tuple of int_types: functions as does a single int_type argument,
-          except that the argument is interpreted as an nd-index into the
-          array.
-
-    Returns
-    -------
-    z : Standard Python scalar object
-        A copy of the specified element of the array as a suitable
-        Python scalar
-
-    Notes
-    -----
-    When the data type of `a` is longdouble or clongdouble, item() returns
-    a scalar array object because there is no available Python scalar that
-    would not lose information. Void arrays return a buffer object for item(),
-    unless fields are defined, in which case a tuple is returned.
-
-    `item` is very similar to a[args], except, instead of an array scalar,
-    a standard Python scalar is returned.
This can be useful for speeding up - access to elements of the array and doing arithmetic on elements of the - array using Python's optimized math. - - Examples - -------- - >>> x = np.random.randint(9, size=(3, 3)) - >>> x - array([[3, 1, 7], - [2, 8, 3], - [8, 5, 3]]) - >>> x.item(3) - 2 - >>> x.item(7) - 5 - >>> x.item((0, 1)) - 1 - >>> x.item((2, 2)) - 3 - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('itemset', - """ - a.itemset(*args) - - Insert scalar into an array (scalar is cast to array's dtype, if possible) - - There must be at least 1 argument, and define the last argument - as *item*. Then, ``a.itemset(*args)`` is equivalent to but faster - than ``a[args] = item``. The item should be a scalar value and `args` - must select a single item in the array `a`. - - Parameters - ---------- - \*args : Arguments - If one argument: a scalar, only used in case `a` is of size 1. - If two arguments: the last argument is the value to be set - and must be a scalar, the first argument specifies a single array - element location. It is either an int or a tuple. - - Notes - ----- - Compared to indexing syntax, `itemset` provides some speed increase - for placing a scalar into a particular location in an `ndarray`, - if you must do this. However, generally this is discouraged: - among other problems, it complicates the appearance of the code. - Also, when using `itemset` (and `item`) inside a loop, be sure - to assign the methods to a local variable to avoid the attribute - look-up at each loop iteration. - - Examples - -------- - >>> x = np.random.randint(9, size=(3, 3)) - >>> x - array([[3, 1, 7], - [2, 8, 3], - [8, 5, 3]]) - >>> x.itemset(4, 0) - >>> x.itemset((2, 2), 9) - >>> x - array([[3, 1, 7], - [2, 0, 3], - [8, 5, 9]]) - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('max', - """ - a.max(axis=None, out=None) - - Return the maximum along a given axis. - - Refer to `numpy.amax` for full documentation. 
-
-    See Also
-    --------
-    numpy.amax : equivalent function
-
-    """))
-
-
-add_newdoc('numpy.core.multiarray', 'ndarray', ('mean',
-    """
-    a.mean(axis=None, dtype=None, out=None)
-
-    Returns the average of the array elements along given axis.
-
-    Refer to `numpy.mean` for full documentation.
-
-    See Also
-    --------
-    numpy.mean : equivalent function
-
-    """))
-
-
-add_newdoc('numpy.core.multiarray', 'ndarray', ('min',
-    """
-    a.min(axis=None, out=None)
-
-    Return the minimum along a given axis.
-
-    Refer to `numpy.amin` for full documentation.
-
-    See Also
-    --------
-    numpy.amin : equivalent function
-
-    """))
-
-
-add_newdoc('numpy.core.multiarray', 'ndarray', ('newbyteorder',
-    """
-    arr.newbyteorder(new_order='S')
-
-    Return the array with the same data viewed with a different byte order.
-
-    Equivalent to::
-
-        arr.view(arr.dtype.newbyteorder(new_order))
-
-    Changes are also made in all fields and sub-arrays of the array data
-    type.
-
-    Parameters
-    ----------
-    new_order : string, optional
-        Byte order to force; a value from the byte order specifications
-        below. `new_order` codes can be any of::
-
-         * 'S' - swap dtype from current to opposite endian
-         * {'<', 'L'} - little endian
-         * {'>', 'B'} - big endian
-         * {'=', 'N'} - native order
-         * {'|', 'I'} - ignore (no change to byte order)
-
-        The default value ('S') results in swapping the current
-        byte order. The code does a case-insensitive check on the first
-        letter of `new_order` for the alternatives above. For example,
-        any of 'B' or 'b' or 'biggish' are valid to specify big-endian.
-
-    Returns
-    -------
-    new_arr : array
-        New array object with the dtype reflecting given change to the
-        byte order.
-
-    """))
-
-
-add_newdoc('numpy.core.multiarray', 'ndarray', ('nonzero',
-    """
-    a.nonzero()
-
-    Return the indices of the elements that are non-zero.
-
-    Refer to `numpy.nonzero` for full documentation.
- - See Also - -------- - numpy.nonzero : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('prod', - """ - a.prod(axis=None, dtype=None, out=None) - - Return the product of the array elements over the given axis - - Refer to `numpy.prod` for full documentation. - - See Also - -------- - numpy.prod : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('ptp', - """ - a.ptp(axis=None, out=None) - - Peak to peak (maximum - minimum) value along a given axis. - - Refer to `numpy.ptp` for full documentation. - - See Also - -------- - numpy.ptp : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('put', - """ - a.put(indices, values, mode='raise') - - Set ``a.flat[n] = values[n]`` for all `n` in indices. - - Refer to `numpy.put` for full documentation. - - See Also - -------- - numpy.put : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'putmask', - """ - putmask(a, mask, values) - - Changes elements of an array based on conditional and input values. - - Sets ``a.flat[n] = values[n]`` for each n where ``mask.flat[n]==True``. - - If `values` is not the same size as `a` and `mask` then it will repeat. - This gives behavior different from ``a[mask] = values``. - - Parameters - ---------- - a : array_like - Target array. - mask : array_like - Boolean mask array. It has to be the same shape as `a`. - values : array_like - Values to put into `a` where `mask` is True. If `values` is smaller - than `a` it will be repeated. 
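The repeating behavior described above can be sketched in plain Python. This is an illustrative model of `putmask`'s indexing rule, not NumPy's implementation; `putmask_list` is a hypothetical helper introduced only for this sketch.

```python
def putmask_list(a, mask, values):
    """Model of np.putmask on flat Python lists: wherever mask[n] is
    true, a[n] takes values[n % len(values)], so a short `values`
    sequence repeats cyclically by flat position."""
    for n, m in enumerate(mask):
        if m:
            a[n] = values[n % len(values)]

x = list(range(5))
putmask_list(x, [v > 1 for v in x], [-33, -44])
print(x)  # [0, 1, -33, -44, -33], matching the docstring's repeat example
```

Note that the value chosen depends on the flat position `n`, not on how many masked elements precede it, which is why position 4 gets `-33` again.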
- - See Also - -------- - place, put, take - - Examples - -------- - >>> x = np.arange(6).reshape(2, 3) - >>> np.putmask(x, x>2, x**2) - >>> x - array([[ 0, 1, 2], - [ 9, 16, 25]]) - - If `values` is smaller than `a` it is repeated: - - >>> x = np.arange(5) - >>> np.putmask(x, x>1, [-33, -44]) - >>> x - array([ 0, 1, -33, -44, -33]) - - """) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('ravel', - """ - a.ravel([order]) - - Return a flattened array. - - Refer to `numpy.ravel` for full documentation. - - See Also - -------- - numpy.ravel : equivalent function - - ndarray.flat : a flat iterator on the array. - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('repeat', - """ - a.repeat(repeats, axis=None) - - Repeat elements of an array. - - Refer to `numpy.repeat` for full documentation. - - See Also - -------- - numpy.repeat : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('reshape', - """ - a.reshape(shape, order='C') - - Returns an array containing the same data with a new shape. - - Refer to `numpy.reshape` for full documentation. - - See Also - -------- - numpy.reshape : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('resize', - """ - a.resize(new_shape, refcheck=True) - - Change shape and size of array in-place. - - Parameters - ---------- - new_shape : tuple of ints, or `n` ints - Shape of resized array. - refcheck : bool, optional - If False, reference count will not be checked. Default is True. - - Returns - ------- - None - - Raises - ------ - ValueError - If `a` does not own its own data or references or views to it exist, - and the data memory must be changed. - - SystemError - If the `order` keyword argument is specified. This behaviour is a - bug in NumPy. - - See Also - -------- - resize : Return a new array with the specified shape. - - Notes - ----- - This reallocates space for the data area if necessary. 
- - Only contiguous arrays (data elements consecutive in memory) can be - resized. - - The purpose of the reference count check is to make sure you - do not use this array as a buffer for another Python object and then - reallocate the memory. However, reference counts can increase in - other ways so if you are sure that you have not shared the memory - for this array with another Python object, then you may safely set - `refcheck` to False. - - Examples - -------- - Shrinking an array: array is flattened (in the order that the data are - stored in memory), resized, and reshaped: - - >>> a = np.array([[0, 1], [2, 3]], order='C') - >>> a.resize((2, 1)) - >>> a - array([[0], - [1]]) - - >>> a = np.array([[0, 1], [2, 3]], order='F') - >>> a.resize((2, 1)) - >>> a - array([[0], - [2]]) - - Enlarging an array: as above, but missing entries are filled with zeros: - - >>> b = np.array([[0, 1], [2, 3]]) - >>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple - >>> b - array([[0, 1, 2], - [3, 0, 0]]) - - Referencing an array prevents resizing... - - >>> c = a - >>> a.resize((1, 1)) - Traceback (most recent call last): - ... - ValueError: cannot resize an array that has been referenced ... - - Unless `refcheck` is False: - - >>> a.resize((1, 1), refcheck=False) - >>> a - array([[0]]) - >>> c - array([[0]]) - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('round', - """ - a.round(decimals=0, out=None) - - Return `a` with each element rounded to the given number of decimals. - - Refer to `numpy.around` for full documentation. - - See Also - -------- - numpy.around : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('searchsorted', - """ - a.searchsorted(v, side='left') - - Find indices where elements of v should be inserted in a to maintain order. 
- - For full documentation, see `numpy.searchsorted` - - See Also - -------- - numpy.searchsorted : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('setfield', - """ - a.setfield(val, dtype, offset=0) - - Put a value into a specified place in a field defined by a data-type. - - Place `val` into `a`'s field defined by `dtype` and beginning `offset` - bytes into the field. - - Parameters - ---------- - val : object - Value to be placed in field. - dtype : dtype object - Data-type of the field in which to place `val`. - offset : int, optional - The number of bytes into the field at which to place `val`. - - Returns - ------- - None - - See Also - -------- - getfield - - Examples - -------- - >>> x = np.eye(3) - >>> x.getfield(np.float64) - array([[ 1., 0., 0.], - [ 0., 1., 0.], - [ 0., 0., 1.]]) - >>> x.setfield(3, np.int32) - >>> x.getfield(np.int32) - array([[3, 3, 3], - [3, 3, 3], - [3, 3, 3]]) - >>> x - array([[ 1.00000000e+000, 1.48219694e-323, 1.48219694e-323], - [ 1.48219694e-323, 1.00000000e+000, 1.48219694e-323], - [ 1.48219694e-323, 1.48219694e-323, 1.00000000e+000]]) - >>> x.setfield(np.eye(3), np.int32) - >>> x - array([[ 1., 0., 0.], - [ 0., 1., 0.], - [ 0., 0., 1.]]) - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('setflags', - """ - a.setflags(write=None, align=None, uic=None) - - Set array flags WRITEABLE, ALIGNED, and UPDATEIFCOPY, respectively. - - These Boolean-valued flags affect how numpy interprets the memory - area used by `a` (see Notes below). The ALIGNED flag can only - be set to True if the data is actually aligned according to the type. - The UPDATEIFCOPY flag can never be set to True. The flag WRITEABLE - can only be set to True if the array owns its own memory, or the - ultimate owner of the memory exposes a writeable buffer interface, - or is a string. (The exception for string is made so that unpickling - can be done without copying memory.) 
-
-    Parameters
-    ----------
-    write : bool, optional
-        Describes whether or not `a` can be written to.
-    align : bool, optional
-        Describes whether or not `a` is aligned properly for its type.
-    uic : bool, optional
-        Describes whether or not `a` is a copy of another "base" array.
-
-    Notes
-    -----
-    Array flags provide information about how the memory area used
-    for the array is to be interpreted. There are 6 Boolean flags
-    in use, only three of which can be changed by the user:
-    UPDATEIFCOPY, WRITEABLE, and ALIGNED.
-
-    WRITEABLE (W) the data area can be written to;
-
-    ALIGNED (A) the data and strides are aligned appropriately for the hardware
-    (as determined by the compiler);
-
-    UPDATEIFCOPY (U) this array is a copy of some other array (referenced
-    by .base). When this array is deallocated, the base array will be
-    updated with the contents of this array.
-
-    All flags can be accessed using their first (upper case) letter as well
-    as the full name.
-
-    Examples
-    --------
-    >>> y
-    array([[3, 1, 7],
-           [2, 0, 0],
-           [8, 5, 9]])
-    >>> y.flags
-      C_CONTIGUOUS : True
-      F_CONTIGUOUS : False
-      OWNDATA : True
-      WRITEABLE : True
-      ALIGNED : True
-      UPDATEIFCOPY : False
-    >>> y.setflags(write=0, align=0)
-    >>> y.flags
-      C_CONTIGUOUS : True
-      F_CONTIGUOUS : False
-      OWNDATA : True
-      WRITEABLE : False
-      ALIGNED : False
-      UPDATEIFCOPY : False
-    >>> y.setflags(uic=1)
-    Traceback (most recent call last):
-      File "<stdin>", line 1, in <module>
-    ValueError: cannot set UPDATEIFCOPY flag to True
-
-    """))
-
-
-add_newdoc('numpy.core.multiarray', 'ndarray', ('sort',
-    """
-    a.sort(axis=-1, kind='quicksort', order=None)
-
-    Sort an array, in-place.
-
-    Parameters
-    ----------
-    axis : int, optional
-        Axis along which to sort. Default is -1, which means sort along the
-        last axis.
-    kind : {'quicksort', 'mergesort', 'heapsort'}, optional
-        Sorting algorithm. Default is 'quicksort'.
-    order : list, optional
-        When `a` is an array with fields defined, this argument specifies
-        which fields to compare first, second, etc. Not all fields need be
-        specified.
-
-    See Also
-    --------
-    numpy.sort : Return a sorted copy of an array.
-    argsort : Indirect sort.
-    lexsort : Indirect stable sort on multiple keys.
-    searchsorted : Find elements in sorted array.
-
-    Notes
-    -----
-    See ``sort`` for notes on the different sorting algorithms.
-
-    Examples
-    --------
-    >>> a = np.array([[1,4], [3,1]])
-    >>> a.sort(axis=1)
-    >>> a
-    array([[1, 4],
-           [1, 3]])
-    >>> a.sort(axis=0)
-    >>> a
-    array([[1, 3],
-           [1, 4]])
-
-    Use the `order` keyword to specify a field to use when sorting a
-    structured array:
-
-    >>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)])
-    >>> a.sort(order='y')
-    >>> a
-    array([('c', 1), ('a', 2)],
-          dtype=[('x', '|S1'), ('y', '<i4')])
-
-    """))
-
-
-add_newdoc('numpy.core.multiarray', 'ndarray', ('tolist',
-    """
-    a.tolist()
-
-    Return the array as a (possibly nested) list.
-
-    Return a copy of the array data as a (nested) Python list.
-    Data items are converted to the nearest compatible Python type.
-
-    Returns
-    -------
-    y : list
-        The possibly nested list of array elements.
-
-    Notes
-    -----
-    The array may be recreated, ``a = np.array(a.tolist())``.
-
-    Examples
-    --------
-    >>> a = np.array([1, 2])
-    >>> a.tolist()
-    [1, 2]
-    >>> a = np.array([[1, 2], [3, 4]])
-    >>> list(a)
-    [array([1, 2]), array([3, 4])]
-    >>> a.tolist()
-    [[1, 2], [3, 4]]
-
-    """))
-
-
-add_newdoc('numpy.core.multiarray', 'ndarray', ('tostring',
-    """
-    a.tostring(order='C')
-
-    Construct a Python string containing the raw data bytes in the array.
-
-    Constructs a Python string showing a copy of the raw contents of
-    data memory. The string can be produced in either 'C' or 'Fortran',
-    or 'Any' order (the default is 'C'-order). 'Any' order means C-order
-    unless the F_CONTIGUOUS flag in the array is set, in which case it
-    means 'Fortran' order.
-
-    Parameters
-    ----------
-    order : {'C', 'F', None}, optional
-        Order of the data for multidimensional arrays:
-        C, Fortran, or the same as for the original array.
-
-    Returns
-    -------
-    s : str
-        A Python string exhibiting a copy of `a`'s raw data.
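Outside NumPy, the stdlib `struct` module can sketch what this raw-byte copy looks like. This is an illustrative sketch only, assuming 4-byte little-endian ints; the actual byte layout depends on the array's dtype and the platform's endianness.

```python
import struct

# Pack the values of the 2x2 array [[0, 1], [2, 3]] as little-endian
# int32, once in C (row-major) element order and once in Fortran
# (column-major) order; this mirrors the strings tostring('C') and
# tostring('F') would contain.
c_order = struct.pack('<4i', 0, 1, 2, 3)
f_order = struct.pack('<4i', 0, 2, 1, 3)
print(c_order == f_order)  # False: same values, different element order
```

The two strings hold the same 16 bytes' worth of values but in different traversal orders, which is exactly the difference the `order` parameter controls.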
- - Examples - -------- - >>> x = np.array([[0, 1], [2, 3]]) - >>> x.tostring() - '\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x02\\x00\\x00\\x00\\x03\\x00\\x00\\x00' - >>> x.tostring('C') == x.tostring() - True - >>> x.tostring('F') - '\\x00\\x00\\x00\\x00\\x02\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x03\\x00\\x00\\x00' - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('trace', - """ - a.trace(offset=0, axis1=0, axis2=1, dtype=None, out=None) - - Return the sum along diagonals of the array. - - Refer to `numpy.trace` for full documentation. - - See Also - -------- - numpy.trace : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('transpose', - """ - a.transpose(*axes) - - Returns a view of the array with axes transposed. - - For a 1-D array, this has no effect. (To change between column and - row vectors, first cast the 1-D array into a matrix object.) - For a 2-D array, this is the usual matrix transpose. - For an n-D array, if axes are given, their order indicates how the - axes are permuted (see Examples). If axes are not provided and - ``a.shape = (i[0], i[1], ... i[n-2], i[n-1])``, then - ``a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0])``. - - Parameters - ---------- - axes : None, tuple of ints, or `n` ints - - * None or no argument: reverses the order of the axes. - - * tuple of ints: `i` in the `j`-th place in the tuple means `a`'s - `i`-th axis becomes `a.transpose()`'s `j`-th axis. - - * `n` ints: same as an n-tuple of the same ints (this form is - intended simply as a "convenience" alternative to the tuple form) - - Returns - ------- - out : ndarray - View of `a`, with axes suitably permuted. - - See Also - -------- - ndarray.T : Array property returning the array transposed. 
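Because the result is a view, transposing amounts to permuting the shape and strides tuples without touching the element data. A minimal sketch of that bookkeeping, assuming a C-contiguous (2, 3) array of 8-byte items:

```python
# Permute shape and strides for axes=(1, 0); no element data moves.
shape, strides = (2, 3), (24, 8)   # C-order strides for 8-byte items
axes = (1, 0)
new_shape = tuple(shape[ax] for ax in axes)
new_strides = tuple(strides[ax] for ax in axes)
print(new_shape, new_strides)  # (3, 2) (8, 24)
```

The transposed view walks the same buffer with swapped step sizes, which is why `transpose` is cheap regardless of array size.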
- - Examples - -------- - >>> a = np.array([[1, 2], [3, 4]]) - >>> a - array([[1, 2], - [3, 4]]) - >>> a.transpose() - array([[1, 3], - [2, 4]]) - >>> a.transpose((1, 0)) - array([[1, 3], - [2, 4]]) - >>> a.transpose(1, 0) - array([[1, 3], - [2, 4]]) - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('var', - """ - a.var(axis=None, dtype=None, out=None, ddof=0) - - Returns the variance of the array elements, along given axis. - - Refer to `numpy.var` for full documentation. - - See Also - -------- - numpy.var : equivalent function - - """)) - - -add_newdoc('numpy.core.multiarray', 'ndarray', ('view', - """ - a.view(dtype=None, type=None) - - New view of array with the same data. - - Parameters - ---------- - dtype : data-type, optional - Data-type descriptor of the returned view, e.g., float32 or int16. - The default, None, results in the view having the same data-type - as `a`. - type : Python type, optional - Type of the returned view, e.g., ndarray or matrix. Again, the - default None results in type preservation. - - Notes - ----- - ``a.view()`` is used two different ways: - - ``a.view(some_dtype)`` or ``a.view(dtype=some_dtype)`` constructs a view - of the array's memory with a different data-type. This can cause a - reinterpretation of the bytes of memory. - - ``a.view(ndarray_subclass)`` or ``a.view(type=ndarray_subclass)`` just - returns an instance of `ndarray_subclass` that looks at the same array - (same shape, dtype, etc.) This does not cause a reinterpretation of the - memory. 
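The dtype-changing form can be sketched with the stdlib `struct` module: reinterpreting the same bytes under a new type is all that happens. A sketch assuming little-endian byte order:

```python
import struct

# Two int8 fields (1, 2) occupy two bytes; reinterpreting those same
# bytes as one little-endian int16 gives 1 + 2*256 = 513.
raw = struct.pack('<2b', 1, 2)
(reinterpreted,) = struct.unpack('<h', raw)
print(reinterpreted)  # 513
```

No data is copied or converted; only the interpretation of the existing buffer changes, which is why the int16 view in the first example below shows 513.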
-
-
-    Examples
-    --------
-    >>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)])
-
-    Viewing array data using a different type and dtype:
-
-    >>> y = x.view(dtype=np.int16, type=np.matrix)
-    >>> y
-    matrix([[513]], dtype=int16)
-    >>> print type(y)
-    <class 'numpy.matrixlib.defmatrix.matrix'>
-
-    Creating a view on a structured array so it can be used in calculations
-
-    >>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)])
-    >>> xv = x.view(dtype=np.int8).reshape(-1,2)
-    >>> xv
-    array([[1, 2],
-           [3, 4]], dtype=int8)
-    >>> xv.mean(0)
-    array([ 2.,  3.])
-
-    Making changes to the view changes the underlying array
-
-    >>> xv[0,1] = 20
-    >>> print x
-    [(1, 20) (3, 4)]
-
-    Using a view to convert an array to a record array:
-
-    >>> z = x.view(np.recarray)
-    >>> z.a
-    array([1], dtype=int8)
-
-    Views share data:
-
-    >>> x[0] = (9, 10)
-    >>> z[0]
-    (9, 10)
-
-    """))
-
-
-##############################################################################
-#
-# umath functions
-#
-##############################################################################
-
-add_newdoc('numpy.core.umath', 'frexp',
-    """
-    Return normalized fraction and exponent of 2 of input array, element-wise.
-
-    Returns (`out1`, `out2`) from equation ``x = out1 * 2**out2``.
-
-    Parameters
-    ----------
-    x : array_like
-        Input array.
-
-    Returns
-    -------
-    (out1, out2) : tuple of ndarrays, (float, int)
-        `out1` is a float array with values between -1 and 1.
-        `out2` is an int array which represent the exponent of 2.
-
-    See Also
-    --------
-    ldexp : Compute ``y = x1 * 2**x2``, the inverse of `frexp`.
-
-    Notes
-    -----
-    Complex dtypes are not supported, they will raise a TypeError.
-
-    Examples
-    --------
-    >>> x = np.arange(9)
-    >>> y1, y2 = np.frexp(x)
-    >>> y1
-    array([ 0.
, 0.5 , 0.5 , 0.75 , 0.5 , 0.625, 0.75 , 0.875, - 0.5 ]) - >>> y2 - array([0, 1, 2, 2, 3, 3, 3, 3, 4]) - >>> y1 * 2**y2 - array([ 0., 1., 2., 3., 4., 5., 6., 7., 8.]) - - """) - -add_newdoc('numpy.core.umath', 'frompyfunc', - """ - frompyfunc(func, nin, nout) - - Takes an arbitrary Python function and returns a Numpy ufunc. - - Can be used, for example, to add broadcasting to a built-in Python - function (see Examples section). - - Parameters - ---------- - func : Python function object - An arbitrary Python function. - nin : int - The number of input arguments. - nout : int - The number of objects returned by `func`. - - Returns - ------- - out : ufunc - Returns a Numpy universal function (``ufunc``) object. - - Notes - ----- - The returned ufunc always returns PyObject arrays. - - Examples - -------- - Use frompyfunc to add broadcasting to the Python function ``oct``: - - >>> oct_array = np.frompyfunc(oct, 1, 1) - >>> oct_array(np.array((10, 30, 100))) - array([012, 036, 0144], dtype=object) - >>> np.array((oct(10), oct(30), oct(100))) # for comparison - array(['012', '036', '0144'], - dtype='|S4') - - """) - -add_newdoc('numpy.core.umath', 'ldexp', - """ - Compute y = x1 * 2**x2. - - Parameters - ---------- - x1 : array_like - The array of multipliers. - x2 : array_like - The array of exponents. - - Returns - ------- - y : array_like - The output array, the result of ``x1 * 2**x2``. - - See Also - -------- - frexp : Return (y1, y2) from ``x = y1 * 2**y2``, the inverse of `ldexp`. - - Notes - ----- - Complex dtypes are not supported, they will raise a TypeError. - - `ldexp` is useful as the inverse of `frexp`, if used by itself it is - more clear to simply use the expression ``x1 * 2**x2``. 
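For scalars, the stdlib `math` module provides the same pair of operations, which makes the inverse relationship easy to check without NumPy (a sketch using `math.frexp` and `math.ldexp`):

```python
import math

# frexp splits x into m * 2**e with 0.5 <= |m| < 1 (or m == 0 for x == 0);
# ldexp recombines them, so the round trip reproduces x exactly.
m, e = math.frexp(6.0)
print(m, e)              # 0.75 3
print(math.ldexp(m, e))  # 6.0
```

This mirrors the `np.ldexp(*np.frexp(x))` round trip in the Examples: `ldexp` undoes `frexp` element-wise.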
- - Examples - -------- - >>> np.ldexp(5, np.arange(4)) - array([ 5., 10., 20., 40.], dtype=float32) - - >>> x = np.arange(6) - >>> np.ldexp(*np.frexp(x)) - array([ 0., 1., 2., 3., 4., 5.]) - - """) - -add_newdoc('numpy.core.umath', 'geterrobj', - """ - geterrobj() - - Return the current object that defines floating-point error handling. - - The error object contains all information that defines the error handling - behavior in Numpy. `geterrobj` is used internally by the other - functions that get and set error handling behavior (`geterr`, `seterr`, - `geterrcall`, `seterrcall`). - - Returns - ------- - errobj : list - The error object, a list containing three elements: - [internal numpy buffer size, error mask, error callback function]. - - The error mask is a single integer that holds the treatment information - on all four floating point errors. The information for each error type - is contained in three bits of the integer. If we print it in base 8, we - can see what treatment is set for "invalid", "under", "over", and - "divide" (in that order). The printed string can be interpreted with - - * 0 : 'ignore' - * 1 : 'warn' - * 2 : 'raise' - * 3 : 'call' - * 4 : 'print' - * 5 : 'log' - - See Also - -------- - seterrobj, seterr, geterr, seterrcall, geterrcall - getbufsize, setbufsize - - Notes - ----- - For complete documentation of the types of floating-point exceptions and - treatment options, see `seterr`. - - Examples - -------- - >>> np.geterrobj() # first get the defaults - [10000, 0, None] - - >>> def err_handler(type, flag): - ... print "Floating point error (%s), with flag %s" % (type, flag) - ... 
-    >>> old_bufsize = np.setbufsize(20000)
-    >>> old_err = np.seterr(divide='raise')
-    >>> old_handler = np.seterrcall(err_handler)
-    >>> np.geterrobj()
-    [20000, 2, <function err_handler at 0x...>]
-
-    >>> old_err = np.seterr(all='ignore')
-    >>> np.base_repr(np.geterrobj()[1], 8)
-    '0'
-    >>> old_err = np.seterr(divide='warn', over='log', under='call', invalid='print')
-    >>> np.base_repr(np.geterrobj()[1], 8)
-    '4351'
-
-    """)
-
-add_newdoc('numpy.core.umath', 'seterrobj',
-    """
-    seterrobj(errobj)
-
-    Set the object that defines floating-point error handling.
-
-    The error object contains all information that defines the error handling
-    behavior in Numpy. `seterrobj` is used internally by the other
-    functions that set error handling behavior (`seterr`, `seterrcall`).
-
-    Parameters
-    ----------
-    errobj : list
-        The error object, a list containing three elements:
-        [internal numpy buffer size, error mask, error callback function].
-
-        The error mask is a single integer that holds the treatment information
-        on all four floating point errors. The information for each error type
-        is contained in three bits of the integer. If we print it in base 8, we
-        can see what treatment is set for "invalid", "under", "over", and
-        "divide" (in that order). The printed string can be interpreted with
-
-        * 0 : 'ignore'
-        * 1 : 'warn'
-        * 2 : 'raise'
-        * 3 : 'call'
-        * 4 : 'print'
-        * 5 : 'log'
-
-    See Also
-    --------
-    geterrobj, seterr, geterr, seterrcall, geterrcall
-    getbufsize, setbufsize
-
-    Notes
-    -----
-    For complete documentation of the types of floating-point exceptions and
-    treatment options, see `seterr`.
-
-    Examples
-    --------
-    >>> old_errobj = np.geterrobj()  # first get the defaults
-    >>> old_errobj
-    [10000, 0, None]
-
-    >>> def err_handler(type, flag):
-    ...     print "Floating point error (%s), with flag %s" % (type, flag)
-    ...
- >>> new_errobj = [20000, 12, err_handler] - >>> np.seterrobj(new_errobj) - >>> np.base_repr(12, 8) # int for divide=4 ('print') and over=1 ('warn') - '14' - >>> np.geterr() - {'over': 'warn', 'divide': 'print', 'invalid': 'ignore', 'under': 'ignore'} - >>> np.geterrcall() is err_handler - True - - """) - - -############################################################################## -# -# lib._compiled_base functions -# -############################################################################## - -add_newdoc('numpy.lib._compiled_base', 'digitize', - """ - digitize(x, bins) - - Return the indices of the bins to which each value in input array belongs. - - Each index ``i`` returned is such that ``bins[i-1] <= x < bins[i]`` if - `bins` is monotonically increasing, or ``bins[i-1] > x >= bins[i]`` if - `bins` is monotonically decreasing. If values in `x` are beyond the - bounds of `bins`, 0 or ``len(bins)`` is returned as appropriate. - - Parameters - ---------- - x : array_like - Input array to be binned. It has to be 1-dimensional. - bins : array_like - Array of bins. It has to be 1-dimensional and monotonic. - - Returns - ------- - out : ndarray of ints - Output array of indices, of same shape as `x`. - - Raises - ------ - ValueError - If the input is not 1-dimensional, or if `bins` is not monotonic. - TypeError - If the type of the input is complex. - - See Also - -------- - bincount, histogram, unique - - Notes - ----- - If values in `x` are such that they fall outside the bin range, - attempting to index `bins` with the indices that `digitize` returns - will result in an IndexError. - - Examples - -------- - >>> x = np.array([0.2, 6.4, 3.0, 1.6]) - >>> bins = np.array([0.0, 1.0, 2.5, 4.0, 10.0]) - >>> inds = np.digitize(x, bins) - >>> inds - array([1, 4, 3, 2]) - >>> for n in range(x.size): - ... print bins[inds[n]-1], "<=", x[n], "<", bins[inds[n]] - ... 
-    0.0 <= 0.2 < 1.0
-    4.0 <= 6.4 < 10.0
-    2.5 <= 3.0 < 4.0
-    1.0 <= 1.6 < 2.5
-
-    """)
-
-add_newdoc('numpy.lib._compiled_base', 'bincount',
-    """
-    bincount(x, weights=None)
-
-    Count number of occurrences of each value in array of non-negative ints.
-
-    The number of bins (of size 1) is one larger than the largest value in
-    `x`. Each bin gives the number of occurrences of its index value in `x`.
-    If `weights` is specified the input array is weighted by it, i.e. if a
-    value ``n`` is found at position ``i``, ``out[n] += weight[i]`` instead
-    of ``out[n] += 1``.
-
-    Parameters
-    ----------
-    x : array_like, 1 dimension, nonnegative ints
-        Input array.
-    weights : array_like, optional
-        Weights, array of the same shape as `x`.
-
-    Returns
-    -------
-    out : ndarray of ints
-        The result of binning the input array.
-        The length of `out` is equal to ``np.amax(x)+1``.
-
-    Raises
-    ------
-    ValueError
-        If the input is not 1-dimensional, or contains elements with negative
-        values.
-    TypeError
-        If the type of the input is float or complex.
-
-    See Also
-    --------
-    histogram, digitize, unique
-
-    Examples
-    --------
-    >>> np.bincount(np.arange(5))
-    array([1, 1, 1, 1, 1])
-    >>> np.bincount(np.array([0, 1, 1, 3, 2, 1, 7]))
-    array([1, 3, 1, 1, 0, 0, 0, 1])
-
-    >>> x = np.array([0, 1, 1, 3, 2, 1, 7, 23])
-    >>> np.bincount(x).size == np.amax(x)+1
-    True
-
-    >>> np.bincount(np.arange(5, dtype=np.float))
-    Traceback (most recent call last):
-      File "<stdin>", line 1, in <module>
-    TypeError: array cannot be safely cast to required type
-
-    A possible use of ``bincount`` is to perform sums over
-    variable-size chunks of an array, using the ``weights`` keyword.
-
-    >>> w = np.array([0.3, 0.5, 0.2, 0.7, 1., -0.6])  # weights
-    >>> x = np.array([0, 1, 1, 2, 2, 2])
-    >>> np.bincount(x, weights=w)
-    array([ 0.3,  0.7,  1.1])
-
-    """)
-
-add_newdoc('numpy.lib._compiled_base', 'add_docstring',
-    """
-    add_docstring(obj, docstring)
-
-    Add a docstring to a built-in obj if possible.
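The `digitize` and `bincount` routines documented above compose naturally into a hand-rolled histogram; a small editorial sketch, not from the original file:

```python
import numpy as np

# digitize assigns each value a bin index; bincount tallies the indices.
x = np.array([0.2, 6.4, 3.0, 1.6])
bins = np.array([0.0, 1.0, 2.5, 4.0, 10.0])
inds = np.digitize(x, bins)      # bin index per value
counts = np.bincount(inds)       # occurrences of each bin index
print(inds)
print(counts)
```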
- If the obj already has a docstring raise a RuntimeError - If this routine does not know how to add a docstring to the object - raise a TypeError - """) - -add_newdoc('numpy.lib._compiled_base', 'packbits', - """ - packbits(myarray, axis=None) - - Packs the elements of a binary-valued array into bits in a uint8 array. - - The result is padded to full bytes by inserting zero bits at the end. - - Parameters - ---------- - myarray : array_like - An integer type array whose elements should be packed to bits. - axis : int, optional - The dimension over which bit-packing is done. - ``None`` implies packing the flattened array. - - Returns - ------- - packed : ndarray - Array of type uint8 whose elements represent bits corresponding to the - logical (0 or nonzero) value of the input elements. The shape of - `packed` has the same number of dimensions as the input (unless `axis` - is None, in which case the output is 1-D). - - See Also - -------- - unpackbits: Unpacks elements of a uint8 array into a binary-valued output - array. - - Examples - -------- - >>> a = np.array([[[1,0,1], - ... [0,1,0]], - ... [[1,1,0], - ... [0,0,1]]]) - >>> b = np.packbits(a, axis=-1) - >>> b - array([[[160],[64]],[[192],[32]]], dtype=uint8) - - Note that in binary 160 = 1010 0000, 64 = 0100 0000, 192 = 1100 0000, - and 32 = 0010 0000. - - """) - -add_newdoc('numpy.lib._compiled_base', 'unpackbits', - """ - unpackbits(myarray, axis=None) - - Unpacks elements of a uint8 array into a binary-valued output array. - - Each element of `myarray` represents a bit-field that should be unpacked - into a binary-valued output array. The shape of the output array is either - 1-D (if `axis` is None) or the same shape as the input array with unpacking - done along the axis specified. - - Parameters - ---------- - myarray : ndarray, uint8 type - Input array. - axis : int, optional - Unpacks along this axis. - - Returns - ------- - unpacked : ndarray, uint8 type - The elements are binary-valued (0 or 1). 
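A small round-trip sketch for `packbits`/`unpackbits` (an editorial addition; it assumes the documented most-significant-bit-first packing):

```python
import numpy as np

# Pack one row of 8 bits into a single byte, then unpack it back.
a = np.array([[1, 0, 1, 1, 0, 0, 1, 0]], dtype=np.uint8)
packed = np.packbits(a, axis=1)       # 0b10110010 == 178
restored = np.unpackbits(packed, axis=1)
print(packed)
```

Because the row is exactly 8 bits long, no zero-padding is inserted and the round trip is exact.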
- - See Also - -------- - packbits : Packs the elements of a binary-valued array into bits in a uint8 - array. - - Examples - -------- - >>> a = np.array([[2], [7], [23]], dtype=np.uint8) - >>> a - array([[ 2], - [ 7], - [23]], dtype=uint8) - >>> b = np.unpackbits(a, axis=1) - >>> b - array([[0, 0, 0, 0, 0, 0, 1, 0], - [0, 0, 0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 0, 1, 1, 1]], dtype=uint8) - - """) - - -############################################################################## -# -# Documentation for ufunc attributes and methods -# -############################################################################## - - -############################################################################## -# -# ufunc object -# -############################################################################## - -add_newdoc('numpy.core', 'ufunc', - """ - Functions that operate element by element on whole arrays. - - To see the documentation for a specific ufunc, use np.info(). For - example, np.info(np.sin). Because ufuncs are written in C - (for speed) and linked into Python with NumPy's ufunc facility, - Python's help() function finds this page whenever help() is called - on a ufunc. - - A detailed explanation of ufuncs can be found in the "ufuncs.rst" - file in the NumPy reference guide. - - Unary ufuncs: - ============= - - op(X, out=None) - Apply op to X elementwise - - Parameters - ---------- - X : array_like - Input array. - out : array_like - An array to store the output. Must be the same shape as `X`. - - Returns - ------- - r : array_like - `r` will have the same shape as `X`; if out is provided, `r` - will be equal to out. - - Binary ufuncs: - ============== - - op(X, Y, out=None) - Apply `op` to `X` and `Y` elementwise. May "broadcast" to make - the shapes of `X` and `Y` congruent. - - The broadcasting rules are: - - * Dimensions of length 1 may be prepended to either array. - * Arrays may be repeated along dimensions of length 1. 
- - Parameters - ---------- - X : array_like - First input array. - Y : array_like - Second input array. - out : array_like - An array to store the output. Must be the same shape as the - output would have. - - Returns - ------- - r : array_like - The return value; if out is provided, `r` will be equal to out. - - """) - - -############################################################################## -# -# ufunc attributes -# -############################################################################## - -add_newdoc('numpy.core', 'ufunc', ('identity', - """ - The identity value. - - Data attribute containing the identity element for the ufunc, if it has one. - If it does not, the attribute value is None. - - Examples - -------- - >>> np.add.identity - 0 - >>> np.multiply.identity - 1 - >>> np.power.identity - 1 - >>> print np.exp.identity - None - """)) - -add_newdoc('numpy.core', 'ufunc', ('nargs', - """ - The number of arguments. - - Data attribute containing the number of arguments the ufunc takes, including - optional ones. - - Notes - ----- - Typically this value will be one more than what you might expect because all - ufuncs take the optional "out" argument. - - Examples - -------- - >>> np.add.nargs - 3 - >>> np.multiply.nargs - 3 - >>> np.power.nargs - 3 - >>> np.exp.nargs - 2 - """)) - -add_newdoc('numpy.core', 'ufunc', ('nin', - """ - The number of inputs. - - Data attribute containing the number of arguments the ufunc treats as input. - - Examples - -------- - >>> np.add.nin - 2 - >>> np.multiply.nin - 2 - >>> np.power.nin - 2 - >>> np.exp.nin - 1 - """)) - -add_newdoc('numpy.core', 'ufunc', ('nout', - """ - The number of outputs. - - Data attribute containing the number of arguments the ufunc treats as output. - - Notes - ----- - Since all ufuncs can take output arguments, this will always be (at least) 1. 
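The relationship among the `nin`, `nout`, and `nargs` attributes described above can be checked directly (editorial sketch):

```python
import numpy as np

# nargs counts inputs plus outputs (the optional `out` argument),
# so it always equals nin + nout.
for uf in (np.add, np.multiply, np.exp):
    print(uf.__name__, uf.nin, uf.nout, uf.nargs)
```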
- - Examples - -------- - >>> np.add.nout - 1 - >>> np.multiply.nout - 1 - >>> np.power.nout - 1 - >>> np.exp.nout - 1 - - """)) - -add_newdoc('numpy.core', 'ufunc', ('ntypes', - """ - The number of types. - - The number of numerical NumPy types - of which there are 18 total - on which - the ufunc can operate. - - See Also - -------- - numpy.ufunc.types - - Examples - -------- - >>> np.add.ntypes - 18 - >>> np.multiply.ntypes - 18 - >>> np.power.ntypes - 17 - >>> np.exp.ntypes - 7 - >>> np.remainder.ntypes - 14 - - """)) - -add_newdoc('numpy.core', 'ufunc', ('types', - """ - Returns a list with types grouped input->output. - - Data attribute listing the data-type "Domain-Range" groupings the ufunc can - deliver. The data-types are given using the character codes. - - See Also - -------- - numpy.ufunc.ntypes - - Examples - -------- - >>> np.add.types - ['??->?', 'bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', - 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', - 'GG->G', 'OO->O'] - - >>> np.multiply.types - ['??->?', 'bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', - 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', - 'GG->G', 'OO->O'] - - >>> np.power.types - ['bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', - 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', 'GG->G', - 'OO->O'] - - >>> np.exp.types - ['f->f', 'd->d', 'g->g', 'F->F', 'D->D', 'G->G', 'O->O'] - - >>> np.remainder.types - ['bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', - 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'OO->O'] - - """)) - - -############################################################################## -# -# ufunc methods -# -############################################################################## - -add_newdoc('numpy.core', 'ufunc', ('reduce', - """ - reduce(a, axis=0, dtype=None, out=None) - - Reduces `a`'s dimension by one, by applying ufunc along 
one axis. - - Let :math:`a.shape = (N_0, ..., N_i, ..., N_{M-1})`. Then - :math:`ufunc.reduce(a, axis=i)[k_0, ..,k_{i-1}, k_{i+1}, .., k_{M-1}]` = - the result of iterating `j` over :math:`range(N_i)`, cumulatively applying - ufunc to each :math:`a[k_0, ..,k_{i-1}, j, k_{i+1}, .., k_{M-1}]`. - For a one-dimensional array, reduce produces results equivalent to: - :: - - r = op.identity # op = ufunc - for i in xrange(len(A)): - r = op(r, A[i]) - return r - - For example, add.reduce() is equivalent to sum(). - - Parameters - ---------- - a : array_like - The array to act on. - axis : int, optional - The axis along which to apply the reduction. - dtype : data-type code, optional - The type used to represent the intermediate results. Defaults - to the data-type of the output array if this is provided, or - the data-type of the input array if no output array is provided. - out : ndarray, optional - A location into which the result is stored. If not provided, a - freshly-allocated array is returned. - - Returns - ------- - r : ndarray - The reduced array. If `out` was supplied, `r` is a reference to it. - - Examples - -------- - >>> np.multiply.reduce([2,3,5]) - 30 - - A multi-dimensional array example: - - >>> X = np.arange(8).reshape((2,2,2)) - >>> X - array([[[0, 1], - [2, 3]], - [[4, 5], - [6, 7]]]) - >>> np.add.reduce(X, 0) - array([[ 4, 6], - [ 8, 10]]) - >>> np.add.reduce(X) # confirm: default axis value is 0 - array([[ 4, 6], - [ 8, 10]]) - >>> np.add.reduce(X, 1) - array([[ 2, 4], - [10, 12]]) - >>> np.add.reduce(X, 2) - array([[ 1, 5], - [ 9, 13]]) - - """)) - -add_newdoc('numpy.core', 'ufunc', ('accumulate', - """ - accumulate(array, axis=0, dtype=None, out=None) - - Accumulate the result of applying the operator to all elements. 
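The pure-Python fold shown for `reduce` above can be verified with `functools.reduce`, seeded with the ufunc's identity (editorial sketch):

```python
import numpy as np
from functools import reduce as py_reduce

a = np.array([2, 3, 5, 7])
prod = np.multiply.reduce(a)
# The same fold in pure Python: r = op(r, a[i]) starting from op.identity
ref = py_reduce(lambda r, x: r * x, a.tolist(), np.multiply.identity)
print(prod, ref)
```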
- - For a one-dimensional array, accumulate produces results equivalent to:: - - r = np.empty(len(A)) - t = op.identity # op = the ufunc being applied to A's elements - for i in xrange(len(A)): - t = op(t, A[i]) - r[i] = t - return r - - For example, add.accumulate() is equivalent to np.cumsum(). - - For a multi-dimensional array, accumulate is applied along only one - axis (axis zero by default; see Examples below) so repeated use is - necessary if one wants to accumulate over multiple axes. - - Parameters - ---------- - array : array_like - The array to act on. - axis : int, optional - The axis along which to apply the accumulation; default is zero. - dtype : data-type code, optional - The data-type used to represent the intermediate results. Defaults - to the data-type of the output array if such is provided, or the - the data-type of the input array if no output array is provided. - out : ndarray, optional - A location into which the result is stored. If not provided a - freshly-allocated array is returned. - - Returns - ------- - r : ndarray - The accumulated values. If `out` was supplied, `r` is a reference to - `out`. - - Examples - -------- - 1-D array examples: - - >>> np.add.accumulate([2, 3, 5]) - array([ 2, 5, 10]) - >>> np.multiply.accumulate([2, 3, 5]) - array([ 2, 6, 30]) - - 2-D array examples: - - >>> I = np.eye(2) - >>> I - array([[ 1., 0.], - [ 0., 1.]]) - - Accumulate along axis 0 (rows), down columns: - - >>> np.add.accumulate(I, 0) - array([[ 1., 0.], - [ 1., 1.]]) - >>> np.add.accumulate(I) # no axis specified = axis zero - array([[ 1., 0.], - [ 1., 1.]]) - - Accumulate along axis 1 (columns), through rows: - - >>> np.add.accumulate(I, 1) - array([[ 1., 1.], - [ 0., 1.]]) - - """)) - -add_newdoc('numpy.core', 'ufunc', ('reduceat', - """ - reduceat(a, indices, axis=0, dtype=None, out=None) - - Performs a (local) reduce with specified slices over a single axis. 
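The `accumulate` equivalences noted above (``add.accumulate`` is `cumsum`, ``multiply.accumulate`` is `cumprod`) can be checked directly (editorial sketch):

```python
import numpy as np

a = np.array([2, 3, 5])
acc_sum = np.add.accumulate(a)        # running sums
acc_prod = np.multiply.accumulate(a)  # running products
print(acc_sum, acc_prod)
```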
- - For i in ``range(len(indices))``, `reduceat` computes - ``ufunc.reduce(a[indices[i]:indices[i+1]])``, which becomes the i-th - generalized "row" parallel to `axis` in the final result (i.e., in a - 2-D array, for example, if `axis = 0`, it becomes the i-th row, but if - `axis = 1`, it becomes the i-th column). There are two exceptions to this: - - * when ``i = len(indices) - 1`` (so for the last index), - ``indices[i+1] = a.shape[axis]``. - * if ``indices[i] >= indices[i + 1]``, the i-th generalized "row" is - simply ``a[indices[i]]``. - - The shape of the output depends on the size of `indices`, and may be - larger than `a` (this happens if ``len(indices) > a.shape[axis]``). - - Parameters - ---------- - a : array_like - The array to act on. - indices : array_like - Paired indices, comma separated (not colon), specifying slices to - reduce. - axis : int, optional - The axis along which to apply the reduceat. - dtype : data-type code, optional - The type used to represent the intermediate results. Defaults - to the data type of the output array if this is provided, or - the data type of the input array if no output array is provided. - out : ndarray, optional - A location into which the result is stored. If not provided a - freshly-allocated array is returned. - - Returns - ------- - r : ndarray - The reduced values. If `out` was supplied, `r` is a reference to - `out`. - - Notes - ----- - A descriptive example: - - If `a` is 1-D, the function `ufunc.accumulate(a)` is the same as - ``ufunc.reduceat(a, indices)[::2]`` where `indices` is - ``range(len(array) - 1)`` with a zero placed - in every other element: - ``indices = zeros(2 * len(a) - 1)``, ``indices[1::2] = range(1, len(a))``. - - Don't be fooled by this attribute's name: `reduceat(a)` is not - necessarily smaller than `a`. 
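A common practical use of `reduceat` is segment sums, where monotonically increasing `indices` mark the start of each group (editorial sketch):

```python
import numpy as np

# Sum variable-size contiguous segments of `a` in one call.
a = np.arange(8)                 # [0 1 2 3 4 5 6 7]
starts = [0, 3, 6]               # segments a[0:3], a[3:6], a[6:]
sums = np.add.reduceat(a, starts)
print(sums)
```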
- - Examples - -------- - To take the running sum of four successive values: - - >>> np.add.reduceat(np.arange(8),[0,4, 1,5, 2,6, 3,7])[::2] - array([ 6, 10, 14, 18]) - - A 2-D example: - - >>> x = np.linspace(0, 15, 16).reshape(4,4) - >>> x - array([[ 0., 1., 2., 3.], - [ 4., 5., 6., 7.], - [ 8., 9., 10., 11.], - [ 12., 13., 14., 15.]]) - - :: - - # reduce such that the result has the following five rows: - # [row1 + row2 + row3] - # [row4] - # [row2] - # [row3] - # [row1 + row2 + row3 + row4] - - >>> np.add.reduceat(x, [0, 3, 1, 2, 0]) - array([[ 12., 15., 18., 21.], - [ 12., 13., 14., 15.], - [ 4., 5., 6., 7.], - [ 8., 9., 10., 11.], - [ 24., 28., 32., 36.]]) - - :: - - # reduce such that result has the following two columns: - # [col1 * col2 * col3, col4] - - >>> np.multiply.reduceat(x, [0, 3], 1) - array([[ 0., 3.], - [ 120., 7.], - [ 720., 11.], - [ 2184., 15.]]) - - """)) - -add_newdoc('numpy.core', 'ufunc', ('outer', - """ - outer(A, B) - - Apply the ufunc `op` to all pairs (a, b) with a in `A` and b in `B`. - - Let ``M = A.ndim``, ``N = B.ndim``. Then the result, `C`, of - ``op.outer(A, B)`` is an array of dimension M + N such that: - - .. 
math:: C[i_0, ..., i_{M-1}, j_0, ..., j_{N-1}] = - op(A[i_0, ..., i_{M-1}], B[j_0, ..., j_{N-1}]) - - For `A` and `B` one-dimensional, this is equivalent to:: - - r = empty(len(A),len(B)) - for i in xrange(len(A)): - for j in xrange(len(B)): - r[i,j] = op(A[i], B[j]) # op = ufunc in question - - Parameters - ---------- - A : array_like - First array - B : array_like - Second array - - Returns - ------- - r : ndarray - Output array - - See Also - -------- - numpy.outer - - Examples - -------- - >>> np.multiply.outer([1, 2, 3], [4, 5, 6]) - array([[ 4, 5, 6], - [ 8, 10, 12], - [12, 15, 18]]) - - A multi-dimensional example: - - >>> A = np.array([[1, 2, 3], [4, 5, 6]]) - >>> A.shape - (2, 3) - >>> B = np.array([[1, 2, 3, 4]]) - >>> B.shape - (1, 4) - >>> C = np.multiply.outer(A, B) - >>> C.shape; C - (2, 3, 1, 4) - array([[[[ 1, 2, 3, 4]], - [[ 2, 4, 6, 8]], - [[ 3, 6, 9, 12]]], - [[[ 4, 8, 12, 16]], - [[ 5, 10, 15, 20]], - [[ 6, 12, 18, 24]]]]) - - """)) - - -############################################################################## -# -# Documentation for dtype attributes and methods -# -############################################################################## - -############################################################################## -# -# dtype object -# -############################################################################## - -add_newdoc('numpy.core.multiarray', 'dtype', - """ - dtype(obj, align=False, copy=False) - - Create a data type object. - - A numpy array is homogeneous, and contains elements described by a - dtype object. A dtype object can be constructed from different - combinations of fundamental numeric types. - - Parameters - ---------- - obj - Object to be converted to a data type object. - align : bool, optional - Add padding to the fields to match what a C compiler would output - for a similar C-struct. Can be ``True`` only if `obj` is a dictionary - or a comma-separated string. 
-    copy : bool, optional
-        Make a new copy of the data-type object. If ``False``, the result
-        may just be a reference to a built-in data-type object.
-
-    Examples
-    --------
-    Using array-scalar type:
-
-    >>> np.dtype(np.int16)
-    dtype('int16')
-
-    Record, one field name 'f1', containing int16:
-
-    >>> np.dtype([('f1', np.int16)])
-    dtype([('f1', '<i2')])
-
-    Record, one field named 'f1', in itself containing a record with one field:
-
-    >>> np.dtype([('f1', [('f1', np.int16)])])
-    dtype([('f1', [('f1', '<i2')])])
-
-    Record, two fields: the first field contains an unsigned int, the
-    second an int32:
-
-    >>> np.dtype([('f1', np.uint), ('f2', np.int32)])
-    dtype([('f1', '<u4'), ('f2', '<i4')])
-
-    Using array-protocol type strings:
-
-    >>> np.dtype([('a','f8'),('b','S10')])
-    dtype([('a', '<f8'), ('b', '|S10')])
-
-    Using comma-separated field formats.  The shape is (2,3):
-
-    >>> np.dtype("i4, (2,3)f8")
-    dtype([('f0', '<i4'), ('f1', '<f8', (2, 3))])
-
-    Using tuples.  ``int`` is a fixed type, 3 the field's shape.  ``void``
-    is a flexible type, here of size 10:
-
-    >>> np.dtype([('hello',(np.int,3)),('world',np.void,10)])
-    dtype([('hello', '<i4', 3), ('world', '|V10')])
-
-    Subdivide ``int16`` into 2 ``int8``'s, called x and y.  0 and 1 are
-    the offsets in bytes:
-
-    >>> np.dtype((np.int16, {'x':(np.int8,0), 'y':(np.int8,1)}))
-    dtype(('<i2', [('x', '|i1'), ('y', '|i1')]))
-
-    Using dictionaries.  Two fields named 'gender' and 'age':
-
-    >>> np.dtype({'names':['gender','age'], 'formats':['S1',np.uint8]})
-    dtype([('gender', '|S1'), ('age', '|u1')])
-
-    Offsets in bytes, here 0 and 25:
-
-    >>> np.dtype({'surname':('S25',0),'age':(np.uint8,25)})
-    dtype([('surname', '|S25'), ('age', '|u1')])
-
-    """)
-
-##############################################################################
-#
-# dtype attributes
-#
-##############################################################################
-
-add_newdoc('numpy.core.multiarray', 'dtype', ('alignment',
-    """
-    The required alignment (bytes) of this data-type according to the compiler.
-
-    More information is available in the C-API section of the manual.
-
-    """))
-
-add_newdoc('numpy.core.multiarray', 'dtype', ('byteorder',
-    """
-    A character indicating the byte-order of this data-type object.
-
-    One of:
-
-    ===  ==============
-    '='  native
-    '<'  little-endian
-    '>'  big-endian
-    '|'  not applicable
-    ===  ==============
-
-    All built-in data-type objects have byteorder either '=' or '|'.
- - Examples - -------- - - >>> dt = np.dtype('i2') - >>> dt.byteorder - '=' - >>> # endian is not relevant for 8 bit numbers - >>> np.dtype('i1').byteorder - '|' - >>> # or ASCII strings - >>> np.dtype('S2').byteorder - '|' - >>> # Even if specific code is given, and it is native - >>> # '=' is the byteorder - >>> import sys - >>> sys_is_le = sys.byteorder == 'little' - >>> native_code = sys_is_le and '<' or '>' - >>> swapped_code = sys_is_le and '>' or '<' - >>> dt = np.dtype(native_code + 'i2') - >>> dt.byteorder - '=' - >>> # Swapped code shows up as itself - >>> dt = np.dtype(swapped_code + 'i2') - >>> dt.byteorder == swapped_code - True - - """)) - -add_newdoc('numpy.core.multiarray', 'dtype', ('char', - """A unique character code for each of the 21 different built-in types.""")) - -add_newdoc('numpy.core.multiarray', 'dtype', ('descr', - """ - Array-interface compliant full description of the data-type. - - The format is that required by the 'descr' key in the - `__array_interface__` attribute. - - """)) - -add_newdoc('numpy.core.multiarray', 'dtype', ('fields', - """ - Dictionary of named fields defined for this data type, or ``None``. - - The dictionary is indexed by keys that are the names of the fields. - Each entry in the dictionary is a tuple fully describing the field:: - - (dtype, offset[, title]) - - If present, the optional title can be any object (if it is a string - or unicode then it will also be a key in the fields dictionary, - otherwise it's meta-data). Notice also that the first two elements - of the tuple can be passed directly as arguments to the ``ndarray.getfield`` - and ``ndarray.setfield`` methods. 
- - See Also - -------- - ndarray.getfield, ndarray.setfield - - Examples - -------- - - >>> dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))]) - >>> print dt.fields - {'grades': (dtype(('float64',(2,))), 16), 'name': (dtype('|S16'), 0)} - - """)) - -add_newdoc('numpy.core.multiarray', 'dtype', ('flags', - """ - Bit-flags describing how this data type is to be interpreted. - - Bit-masks are in `numpy.core.multiarray` as the constants - `ITEM_HASOBJECT`, `LIST_PICKLE`, `ITEM_IS_POINTER`, `NEEDS_INIT`, - `NEEDS_PYAPI`, `USE_GETITEM`, `USE_SETITEM`. A full explanation - of these flags is in C-API documentation; they are largely useful - for user-defined data-types. - - """)) - -add_newdoc('numpy.core.multiarray', 'dtype', ('hasobject', - """ - Boolean indicating whether this dtype contains any reference-counted - objects in any fields or sub-dtypes. - - Recall that what is actually in the ndarray memory representing - the Python object is the memory address of that object (a pointer). - Special handling may be required, and this attribute is useful for - distinguishing data types that may contain arbitrary Python objects - and data-types that won't. - - """)) - -add_newdoc('numpy.core.multiarray', 'dtype', ('isbuiltin', - """ - Integer indicating how this dtype relates to the built-in dtypes. - - Read-only. - - = ======================================================================== - 0 if this is a structured array type, with fields - 1 if this is a dtype compiled into numpy (such as ints, floats etc) - 2 if the dtype is for a user-defined numpy type - A user-defined type uses the numpy C-API machinery to extend - numpy to handle a new array type. See - :ref:`user.user-defined-data-types` in the Numpy manual. 
- = ======================================================================== - - Examples - -------- - >>> dt = np.dtype('i2') - >>> dt.isbuiltin - 1 - >>> dt = np.dtype('f8') - >>> dt.isbuiltin - 1 - >>> dt = np.dtype([('field1', 'f8')]) - >>> dt.isbuiltin - 0 - - """)) - -add_newdoc('numpy.core.multiarray', 'dtype', ('isnative', - """ - Boolean indicating whether the byte order of this dtype is native - to the platform. - - """)) - -add_newdoc('numpy.core.multiarray', 'dtype', ('itemsize', - """ - The element size of this data-type object. - - For 18 of the 21 types this number is fixed by the data-type. - For the flexible data-types, this number can be anything. - - """)) - -add_newdoc('numpy.core.multiarray', 'dtype', ('kind', - """ - A character code (one of 'biufcSUV') identifying the general kind of data. - - """)) - -add_newdoc('numpy.core.multiarray', 'dtype', ('name', - """ - A bit-width name for this data-type. - - Un-sized flexible data-type objects do not have this attribute. - - """)) - -add_newdoc('numpy.core.multiarray', 'dtype', ('names', - """ - Ordered list of field names, or ``None`` if there are no fields. - - The names are ordered according to increasing byte offset. This can be - used, for example, to walk through all of the named fields in offset order. - - Examples - -------- - - >>> dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))]) - >>> dt.names - ('name', 'grades') - - """)) - -add_newdoc('numpy.core.multiarray', 'dtype', ('num', - """ - A unique number for each of the 21 different built-in types. - - These are roughly ordered from least-to-most precision. - - """)) - -add_newdoc('numpy.core.multiarray', 'dtype', ('shape', - """ - Shape tuple of the sub-array if this data type describes a sub-array, - and ``()`` otherwise. 
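The `names` and `fields` attributes documented above make it easy to walk a structured dtype's fields in byte-offset order (editorial sketch):

```python
import numpy as np

# names is ordered by increasing byte offset; fields maps each name
# to a (sub-dtype, offset[, title]) tuple.
dt = np.dtype([('name', 'S16'), ('grades', np.float64, (2,))])
for field in dt.names:
    sub_dtype, offset = dt.fields[field][:2]
    print(field, offset, sub_dtype)
```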
- - """)) - -add_newdoc('numpy.core.multiarray', 'dtype', ('str', - """The array-protocol typestring of this data-type object.""")) - -add_newdoc('numpy.core.multiarray', 'dtype', ('subdtype', - """ - Tuple ``(item_dtype, shape)`` if this `dtype` describes a sub-array, and - None otherwise. - - The *shape* is the fixed shape of the sub-array described by this - data type, and *item_dtype* the data type of the array. - - If a field whose dtype object has this attribute is retrieved, - then the extra dimensions implied by *shape* are tacked on to - the end of the retrieved array. - - """)) - -add_newdoc('numpy.core.multiarray', 'dtype', ('type', - """The type object used to instantiate a scalar of this data-type.""")) - -############################################################################## -# -# dtype methods -# -############################################################################## - -add_newdoc('numpy.core.multiarray', 'dtype', ('newbyteorder', - """ - newbyteorder(new_order='S') - - Return a new dtype with a different byte order. - - Changes are also made in all fields and sub-arrays of the data type. - - Parameters - ---------- - new_order : string, optional - Byte order to force; a value from the byte order - specifications below. The default value ('S') results in - swapping the current byte order. - `new_order` codes can be any of:: - - * 'S' - swap dtype from current to opposite endian - * {'<', 'L'} - little endian - * {'>', 'B'} - big endian - * {'=', 'N'} - native order - * {'|', 'I'} - ignore (no change to byte order) - - The code does a case-insensitive check on the first letter of - `new_order` for these alternatives. For example, any of '>' - or 'B' or 'b' or 'brian' are valid to specify big-endian. - - Returns - ------- - new_dtype : dtype - New dtype object with the given change to the byte order. - - Notes - ----- - Changes are also made in all fields and sub-arrays of the data type. 
-
-    Examples
-    --------
-    >>> import sys
-    >>> sys_is_le = sys.byteorder == 'little'
-    >>> native_code = sys_is_le and '<' or '>'
-    >>> swapped_code = sys_is_le and '>' or '<'
-    >>> native_dt = np.dtype(native_code+'i2')
-    >>> swapped_dt = np.dtype(swapped_code+'i2')
-    >>> native_dt.newbyteorder('S') == swapped_dt
-    True
-    >>> native_dt.newbyteorder() == swapped_dt
-    True
-    >>> native_dt == swapped_dt.newbyteorder('S')
-    True
-    >>> native_dt == swapped_dt.newbyteorder('=')
-    True
-    >>> native_dt == swapped_dt.newbyteorder('N')
-    True
-    >>> native_dt == native_dt.newbyteorder('|')
-    True
-    >>> np.dtype('<i2') == native_dt.newbyteorder('<')
-    True
-    >>> np.dtype('<i2') == native_dt.newbyteorder('L')
-    True
-    >>> np.dtype('>i2') == native_dt.newbyteorder('>')
-    True
-    >>> np.dtype('>i2') == native_dt.newbyteorder('B')
-    True
-
-    """))
-
-
-##############################################################################
-#
-# nd_grid instances
-#
-##############################################################################
-
-add_newdoc('numpy.lib.index_tricks', 'mgrid',
-    """
-    `nd_grid` instance which returns a dense multi-dimensional "meshgrid".
-
-    An instance of `numpy.lib.index_tricks.nd_grid` which returns a dense
-    (or fleshed out) mesh-grid when indexed, so that each returned argument
-    has the same shape. The dimensions and number of the output arrays are
-    equal to the number of indexing dimensions. If the step length is not a
-    complex number, then the stop is not inclusive.
-
-    However, if the step length is a **complex number** (e.g. 5j), then
-    the integer part of its magnitude is interpreted as specifying the
-    number of points to create between the start and stop values, where
-    the stop value **is inclusive**.
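The dense/open distinction between `mgrid` and `ogrid` shows up when the open grids are broadcast against each other (editorial sketch):

```python
import numpy as np

# mgrid returns dense arrays; ogrid returns open arrays that broadcast
# to the same dense result.
dense = np.mgrid[0:3, 0:3]           # shape (2, 3, 3)
open_x, open_y = np.ogrid[0:3, 0:3]  # shapes (3, 1) and (1, 3)
print(open_x + open_y)               # broadcasts to a full 3x3 grid
```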
- - Returns - ------- - mesh-grid `ndarrays` all of the same dimensions - - See Also - -------- - numpy.lib.index_tricks.nd_grid : class of `ogrid` and `mgrid` objects - ogrid : like mgrid but returns open (not fleshed out) mesh grids - r_ : array concatenator - - Examples - -------- - >>> np.mgrid[0:5,0:5] - array([[[0, 0, 0, 0, 0], - [1, 1, 1, 1, 1], - [2, 2, 2, 2, 2], - [3, 3, 3, 3, 3], - [4, 4, 4, 4, 4]], - [[0, 1, 2, 3, 4], - [0, 1, 2, 3, 4], - [0, 1, 2, 3, 4], - [0, 1, 2, 3, 4], - [0, 1, 2, 3, 4]]]) - >>> np.mgrid[-1:1:5j] - array([-1. , -0.5, 0. , 0.5, 1. ]) - - """) - -add_newdoc('numpy.lib.index_tricks', 'ogrid', - """ - `nd_grid` instance which returns an open multi-dimensional "meshgrid". - - An instance of `numpy.lib.index_tricks.nd_grid` which returns an open - (i.e. not fleshed out) mesh-grid when indexed, so that only one dimension - of each returned array is greater than 1. The dimension and number of the - output arrays are equal to the number of indexing dimensions. If the step - length is not a complex number, then the stop is not inclusive. - - However, if the step length is a **complex number** (e.g. 5j), then - the integer part of its magnitude is interpreted as specifying the - number of points to create between the start and stop values, where - the stop value **is inclusive**. - - Returns - ------- - mesh-grid `ndarrays` with only one dimension :math:`\\neq 1` - - See Also - -------- - np.lib.index_tricks.nd_grid : class of `ogrid` and `mgrid` objects - mgrid : like `ogrid` but returns dense (or fleshed out) mesh grids - r_ : array concatenator - - Examples - -------- - >>> from numpy import ogrid - >>> ogrid[-1:1:5j] - array([-1. , -0.5, 0. , 0.5, 1. 
]) - >>> ogrid[0:5,0:5] - [array([[0], - [1], - [2], - [3], - [4]]), array([[0, 1, 2, 3, 4]])] - - """) - - -############################################################################## -# -# Documentation for `generic` attributes and methods -# -############################################################################## - -add_newdoc('numpy.core.numerictypes', 'generic', - """ - Base class for numpy scalar types. - - Class from which most (all?) numpy scalar types are derived. For - consistency, exposes the same API as `ndarray`, despite many - consequent attributes being either "get-only," or completely irrelevant. - This is the class from which it is strongly suggested users should derive - custom scalar types. - - """) - -# Attributes - -add_newdoc('numpy.core.numerictypes', 'generic', ('T', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class so as to - provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('base', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class so as to - provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. 
- - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('data', - """Pointer to start of data.""")) - -add_newdoc('numpy.core.numerictypes', 'generic', ('dtype', - """Get array data-descriptor.""")) - -add_newdoc('numpy.core.numerictypes', 'generic', ('flags', - """The integer value of flags.""")) - -add_newdoc('numpy.core.numerictypes', 'generic', ('flat', - """A 1-D view of the scalar.""")) - -add_newdoc('numpy.core.numerictypes', 'generic', ('imag', - """The imaginary part of the scalar.""")) - -add_newdoc('numpy.core.numerictypes', 'generic', ('itemsize', - """The length of one element in bytes.""")) - -add_newdoc('numpy.core.numerictypes', 'generic', ('nbytes', - """The length of the scalar in bytes.""")) - -add_newdoc('numpy.core.numerictypes', 'generic', ('ndim', - """The number of array dimensions.""")) - -add_newdoc('numpy.core.numerictypes', 'generic', ('real', - """The real part of the scalar.""")) - -add_newdoc('numpy.core.numerictypes', 'generic', ('shape', - """Tuple of array dimensions.""")) - -add_newdoc('numpy.core.numerictypes', 'generic', ('size', - """The number of elements in the gentype.""")) - -add_newdoc('numpy.core.numerictypes', 'generic', ('strides', - """Tuple of bytes steps in each dimension.""")) - -# Methods - -add_newdoc('numpy.core.numerictypes', 'generic', ('all', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('any', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. 
- - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('argmax', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('argmin', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('argsort', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('astype', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('byteswap', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class so as to - provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. 
- - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('choose', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('clip', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('compress', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('conjugate', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('copy', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. 
- - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('cumprod', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('cumsum', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('diagonal', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('dump', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('dumps', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. 
- - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('fill', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('flatten', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('getfield', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('item', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('itemset', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. 
- - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('max', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('mean', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('min', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('newbyteorder', - """ - newbyteorder(new_order='S') - - Return a new `dtype` with a different byte order. - - Changes are also made in all fields and sub-arrays of the data type. - - The `new_order` code can be any from the following: - - * {'<', 'L'} - little endian - * {'>', 'B'} - big endian - * {'=', 'N'} - native order - * 'S' - swap dtype from current to opposite endian - * {'|', 'I'} - ignore (no change to byte order) - - Parameters - ---------- - new_order : str, optional - Byte order to force; a value from the byte order specifications - above. The default value ('S') results in swapping the current - byte order. The code does a case-insensitive check on the first - letter of `new_order` for the alternatives above. 
For example, - any of 'B' or 'b' or 'biggish' are valid to specify big-endian. - - - Returns - ------- - new_dtype : dtype - New `dtype` object with the given change to the byte order. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('nonzero', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('prod', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('ptp', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('put', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('ravel', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. 
- - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('repeat', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('reshape', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('resize', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('round', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('searchsorted', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. 
- - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('setfield', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('setflags', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class so as to - provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('sort', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('squeeze', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('std', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. 
- - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('sum', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('swapaxes', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('take', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('tofile', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('tolist', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. 
- - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('tostring', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('trace', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('transpose', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('var', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. - - """)) - -add_newdoc('numpy.core.numerictypes', 'generic', ('view', - """ - Not implemented (virtual attribute) - - Class generic exists solely to derive numpy scalars from, and possesses, - albeit unimplemented, all the attributes of the ndarray class - so as to provide a uniform API. - - See Also - -------- - The corresponding attribute of the derived class of interest. 
- - """)) - - -############################################################################## -# -# Documentation for other scalar classes -# -############################################################################## - -add_newdoc('numpy.core.numerictypes', 'bool_', - """Numpy's Boolean type. Character code: ``?``. Alias: bool8""") - -add_newdoc('numpy.core.numerictypes', 'complex64', - """ - Complex number type composed of two 32 bit floats. Character code: 'F'. - - """) - -add_newdoc('numpy.core.numerictypes', 'complex128', - """ - Complex number type composed of two 64 bit floats. Character code: 'D'. - Python complex compatible. - - """) - -add_newdoc('numpy.core.numerictypes', 'complex256', - """ - Complex number type composed of two 128-bit floats. Character code: 'G'. - - """) - -add_newdoc('numpy.core.numerictypes', 'float32', - """ - 32-bit floating-point number. Character code 'f'. C float compatible. - - """) - -add_newdoc('numpy.core.numerictypes', 'float64', - """ - 64-bit floating-point number. Character code 'd'. Python float compatible. - - """) - -add_newdoc('numpy.core.numerictypes', 'float96', - """ - """) - -add_newdoc('numpy.core.numerictypes', 'float128', - """ - 128-bit floating-point number. Character code: 'g'. C long float - compatible. - - """) - -add_newdoc('numpy.core.numerictypes', 'int8', - """8-bit integer. Character code ``b``. C char compatible.""") - -add_newdoc('numpy.core.numerictypes', 'int16', - """16-bit integer. Character code ``h``. C short compatible.""") - -add_newdoc('numpy.core.numerictypes', 'int32', - """32-bit integer. Character code 'i'. C int compatible.""") - -add_newdoc('numpy.core.numerictypes', 'int64', - """64-bit integer. Character code 'l'. Python int compatible.""") - -add_newdoc('numpy.core.numerictypes', 'object_', - """Any Python object. 
Character code: 'O'.""") diff --git a/pythonPackages/numpy/numpy/compat/__init__.py b/pythonPackages/numpy/numpy/compat/__init__.py deleted file mode 100755 index 9b42616167..0000000000 --- a/pythonPackages/numpy/numpy/compat/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -""" -Compatibility module. - -This module contains duplicated code from Python itself or 3rd party -extensions, which may be included for the following reasons: - - * compatibility - * we may only need a small subset of the copied library/module - -""" -import _inspect -import py3k -from _inspect import getargspec, formatargspec -from py3k import * - -__all__ = [] -__all__.extend(_inspect.__all__) -__all__.extend(py3k.__all__) diff --git a/pythonPackages/numpy/numpy/compat/_inspect.py b/pythonPackages/numpy/numpy/compat/_inspect.py deleted file mode 100755 index 612d1e1477..0000000000 --- a/pythonPackages/numpy/numpy/compat/_inspect.py +++ /dev/null @@ -1,219 +0,0 @@ -"""Subset of inspect module from upstream Python - -We use this instead of upstream because upstream inspect is slow to import, and -significantly contributes to numpy import times. Importing this copy has almost -no overhead. -""" - -import types -import dis  # used by getargs() below for bytecode opname lookup - -__all__ = ['getargspec', 'formatargspec'] - -# ----------------------------------------------------------- type-checking -def ismethod(object): - """Return true if the object is an instance method. - - Instance method objects provide these attributes: - __doc__ documentation string - __name__ name with which this method was defined - im_class class object in which this method belongs - im_func function object containing implementation of method - im_self instance to which this method is bound, or None""" - return isinstance(object, types.MethodType) - -def isfunction(object): - """Return true if the object is a user-defined function. 
- - Function objects provide these attributes: - __doc__ documentation string - __name__ name with which this function was defined - func_code code object containing compiled function bytecode - func_defaults tuple of any default values for arguments - func_doc (same as __doc__) - func_globals global namespace in which this function was defined - func_name (same as __name__)""" - return isinstance(object, types.FunctionType) - -def iscode(object): - """Return true if the object is a code object. - - Code objects provide these attributes: - co_argcount number of arguments (not including * or ** args) - co_code string of raw compiled bytecode - co_consts tuple of constants used in the bytecode - co_filename name of file in which this code object was created - co_firstlineno number of first line in Python source code - co_flags bitmap: 1=optimized | 2=newlocals | 4=*arg | 8=**arg - co_lnotab encoded mapping of line numbers to bytecode indices - co_name name with which this code object was defined - co_names tuple of names of local variables - co_nlocals number of local variables - co_stacksize virtual machine stack space required - co_varnames tuple of names of arguments and local variables""" - return isinstance(object, types.CodeType) - -# ------------------------------------------------ argument list extraction -# These constants are from Python's compile.h. -CO_OPTIMIZED, CO_NEWLOCALS, CO_VARARGS, CO_VARKEYWORDS = 1, 2, 4, 8 - -def getargs(co): - """Get information about the arguments accepted by a code object. 
- - Three things are returned: (args, varargs, varkw), where 'args' is - a list of argument names (possibly containing nested lists), and - 'varargs' and 'varkw' are the names of the * and ** arguments or None.""" - - if not iscode(co): - raise TypeError('arg is not a code object') - - code = co.co_code - nargs = co.co_argcount - names = co.co_varnames - args = list(names[:nargs]) - step = 0 - - # The following acrobatics are for anonymous (tuple) arguments. - for i in range(nargs): - if args[i][:1] in ['', '.']: - stack, remain, count = [], [], [] - while step < len(code): - op = ord(code[step]) - step = step + 1 - if op >= dis.HAVE_ARGUMENT: - opname = dis.opname[op] - value = ord(code[step]) + ord(code[step+1])*256 - step = step + 2 - if opname in ['UNPACK_TUPLE', 'UNPACK_SEQUENCE']: - remain.append(value) - count.append(value) - elif opname == 'STORE_FAST': - stack.append(names[value]) - - # Special case for sublists of length 1: def foo((bar)) - # doesn't generate the UNPACK_TUPLE bytecode, so if - # `remain` is empty here, we have such a sublist. - if not remain: - stack[0] = [stack[0]] - break - else: - remain[-1] = remain[-1] - 1 - while remain[-1] == 0: - remain.pop() - size = count.pop() - stack[-size:] = [stack[-size:]] - if not remain: break - remain[-1] = remain[-1] - 1 - if not remain: break - args[i] = stack[0] - - varargs = None - if co.co_flags & CO_VARARGS: - varargs = co.co_varnames[nargs] - nargs = nargs + 1 - varkw = None - if co.co_flags & CO_VARKEYWORDS: - varkw = co.co_varnames[nargs] - return args, varargs, varkw - -def getargspec(func): - """Get the names and default values of a function's arguments. - - A tuple of four things is returned: (args, varargs, varkw, defaults). - 'args' is a list of the argument names (it may contain nested lists). - 'varargs' and 'varkw' are the names of the * and ** arguments or None. - 'defaults' is an n-tuple of the default values of the last n arguments. 
- """ - - if ismethod(func): - func = func.im_func - if not isfunction(func): - raise TypeError('arg is not a Python function') - args, varargs, varkw = getargs(func.func_code) - return args, varargs, varkw, func.func_defaults - -def getargvalues(frame): - """Get information about arguments passed into a particular frame. - - A tuple of four things is returned: (args, varargs, varkw, locals). - 'args' is a list of the argument names (it may contain nested lists). - 'varargs' and 'varkw' are the names of the * and ** arguments or None. - 'locals' is the locals dictionary of the given frame.""" - args, varargs, varkw = getargs(frame.f_code) - return args, varargs, varkw, frame.f_locals - -def joinseq(seq): - if len(seq) == 1: - return '(' + seq[0] + ',)' - else: - return '(' + ', '.join(seq) + ')' - -def strseq(object, convert, join=joinseq): - """Recursively walk a sequence, stringifying each element.""" - if type(object) in [types.ListType, types.TupleType]: - return join(map(lambda o, c=convert, j=join: strseq(o, c, j), object)) - else: - return convert(object) - -def formatargspec(args, varargs=None, varkw=None, defaults=None, - formatarg=str, - formatvarargs=lambda name: '*' + name, - formatvarkw=lambda name: '**' + name, - formatvalue=lambda value: '=' + repr(value), - join=joinseq): - """Format an argument spec from the 4 values returned by getargspec. - - The first four arguments are (args, varargs, varkw, defaults). The - other four arguments are the corresponding optional formatting functions - that are called to turn names and values into strings. 
The ninth - argument is an optional function to format the sequence of arguments.""" - specs = [] - if defaults: - firstdefault = len(args) - len(defaults) - for i in range(len(args)): - spec = strseq(args[i], formatarg, join) - if defaults and i >= firstdefault: - spec = spec + formatvalue(defaults[i - firstdefault]) - specs.append(spec) - if varargs is not None: - specs.append(formatvarargs(varargs)) - if varkw is not None: - specs.append(formatvarkw(varkw)) - return '(' + ', '.join(specs) + ')' - -def formatargvalues(args, varargs, varkw, locals, - formatarg=str, - formatvarargs=lambda name: '*' + name, - formatvarkw=lambda name: '**' + name, - formatvalue=lambda value: '=' + repr(value), - join=joinseq): - """Format an argument spec from the 4 values returned by getargvalues. - - The first four arguments are (args, varargs, varkw, locals). The - next four arguments are the corresponding optional formatting functions - that are called to turn names and values into strings. The ninth - argument is an optional function to format the sequence of arguments.""" - def convert(name, locals=locals, - formatarg=formatarg, formatvalue=formatvalue): - return formatarg(name) + formatvalue(locals[name]) - specs = [] - for i in range(len(args)): - specs.append(strseq(args[i], convert, join)) - if varargs: - specs.append(formatvarargs(varargs) + formatvalue(locals[varargs])) - if varkw: - specs.append(formatvarkw(varkw) + formatvalue(locals[varkw])) - return '(' + string.join(specs, ', ') + ')' - -if __name__ == '__main__': - import inspect - def foo(x, y, z=None): - return None - - print inspect.getargs(foo.func_code) - print getargs(foo.func_code) - - print inspect.getargspec(foo) - print getargspec(foo) - - print inspect.formatargspec(*inspect.getargspec(foo)) - print formatargspec(*getargspec(foo)) diff --git a/pythonPackages/numpy/numpy/compat/py3k.py b/pythonPackages/numpy/numpy/compat/py3k.py deleted file mode 100755 index 609f099749..0000000000 --- 
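The deleted module above re-implements `getargs`/`getargspec`/`formatargspec` for Python 2 bytecode, including the acrobatics for anonymous tuple arguments. As a rough point of comparison (not part of the removed file, and assuming Python 3), the same argument-spec string is available today from `inspect.signature`:

```python
import inspect

def foo(x, y, z=None, *args, **kwargs):
    return None

# formatargspec(*getargspec(foo)) in the deleted code produced a string
# of this shape; inspect.signature is the modern replacement.
print(str(inspect.signature(foo)))  # -> (x, y, z=None, *args, **kwargs)
```

Note that `inspect.signature` also covers keyword-only arguments, which the Python 2-era four-tuple `(args, varargs, varkw, defaults)` could not express.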
a/pythonPackages/numpy/numpy/compat/py3k.py +++ /dev/null @@ -1,58 +0,0 @@ -""" -Python 3 compatibility tools. - -""" - -__all__ = ['bytes', 'asbytes', 'isfileobj', 'getexception', 'strchar', - 'unicode', 'asunicode', 'asbytes_nested', 'asunicode_nested', - 'asstr', 'open_latin1'] - -import sys - -if sys.version_info[0] >= 3: - import io - bytes = bytes - unicode = str - asunicode = str - def asbytes(s): - if isinstance(s, bytes): - return s - return s.encode('latin1') - def asstr(s): - if isinstance(s, str): - return s - return s.decode('latin1') - def isfileobj(f): - return isinstance(f, io.FileIO) - def open_latin1(filename, mode='r'): - return open(filename, mode=mode, encoding='iso-8859-1') - strchar = 'U' -else: - bytes = str - unicode = unicode - asbytes = str - asstr = str - strchar = 'S' - def isfileobj(f): - return isinstance(f, file) - def asunicode(s): - if isinstance(s, unicode): - return s - return s.decode('ascii') - def open_latin1(filename, mode='r'): - return open(filename, mode=mode) - -def getexception(): - return sys.exc_info()[1] - -def asbytes_nested(x): - if hasattr(x, '__iter__') and not isinstance(x, (bytes, unicode)): - return [asbytes_nested(y) for y in x] - else: - return asbytes(x) - -def asunicode_nested(x): - if hasattr(x, '__iter__') and not isinstance(x, (bytes, unicode)): - return [asunicode_nested(y) for y in x] - else: - return asunicode(x) diff --git a/pythonPackages/numpy/numpy/compat/setup.py b/pythonPackages/numpy/numpy/compat/setup.py deleted file mode 100755 index 4e07810850..0000000000 --- a/pythonPackages/numpy/numpy/compat/setup.py +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env python - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('compat',parent_package,top_path) - return config - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git 
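The `py3k.py` helpers above normalize str/bytes handling across Python versions. A minimal sketch of just the Python 3 branch of the deleted file (with `unicode` spelled as `str`, and no claim about the rest of the module) shows the intended behavior:

```python
def asbytes(s):
    # Python 3 branch of the deleted py3k.py: text encodes as latin-1,
    # bytes objects pass through unchanged.
    if isinstance(s, bytes):
        return s
    return s.encode('latin1')

def asbytes_nested(x):
    # Recurse into iterables; str/bytes are treated as leaves.
    if hasattr(x, '__iter__') and not isinstance(x, (bytes, str)):
        return [asbytes_nested(y) for y in x]
    return asbytes(x)

print(asbytes_nested(['abc', [b'de', 'f']]))  # -> [b'abc', [b'de', b'f']]
```

Latin-1 is chosen in the original because it maps code points 0-255 one-to-one onto bytes, so any byte string survives a decode/encode round trip.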
a/pythonPackages/numpy/numpy/compat/setupscons.py b/pythonPackages/numpy/numpy/compat/setupscons.py deleted file mode 100755 index e518245b2a..0000000000 --- a/pythonPackages/numpy/numpy/compat/setupscons.py +++ /dev/null @@ -1,11 +0,0 @@ -#!/usr/bin/env python -import os.path - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('compat',parent_package,top_path) - return config - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/core/SConscript b/pythonPackages/numpy/numpy/core/SConscript deleted file mode 100755 index 702ec425ae..0000000000 --- a/pythonPackages/numpy/numpy/core/SConscript +++ /dev/null @@ -1,510 +0,0 @@ -# Last Change: Sun Apr 26 05:00 PM 2009 J -# vim:syntax=python -import os -import sys -from os.path import join as pjoin, basename as pbasename, dirname as pdirname -from copy import deepcopy - -from numscons import get_pythonlib_dir -from numscons import GetNumpyEnvironment -from numscons import CheckCBLAS -from numscons import write_info - -from code_generators.numpy_api import \ - multiarray_api as multiarray_api_dict, \ - ufunc_api as ufunc_api_dict - -from setup_common import * -from scons_support import CheckBrokenMathlib, define_no_smp, \ - check_mlib, check_mlibs, is_npy_no_signal, CheckInline -from scons_support import array_api_gen_bld, ufunc_api_gen_bld, template_bld, \ - umath_bld, CheckGCC4, check_api_version, \ - CheckLongDoubleRepresentation - -import SCons - -# Set to True to enable multiple file compilations (experimental) -try: - os.environ['NPY_SEPARATE_COMPILATION'] - ENABLE_SEPARATE_COMPILATION = True -except KeyError: - ENABLE_SEPARATE_COMPILATION = False -try: - os.environ['NPY_BYPASS_SINGLE_EXTENDED'] - BYPASS_SINGLE_EXTENDED = True -except KeyError: - BYPASS_SINGLE_EXTENDED = False - -env = GetNumpyEnvironment(ARGUMENTS) -env.Append(CPPPATH = 
env["PYEXTCPPPATH"]) -if os.name == 'nt': - # NT needs the pythonlib to run any code importing Python.h, including - # simple code using only typedef and so on, so we need it for configuration - # checks - env.AppendUnique(LIBPATH = [get_pythonlib_dir()]) - -# Check whether we have a mismatch between the set C API VERSION and the -# actual C API VERSION -check_api_version(C_API_VERSION) - -#======================= -# Starting Configuration -#======================= -config = env.NumpyConfigure(custom_tests = {'CheckBrokenMathlib' : CheckBrokenMathlib, - 'CheckCBLAS' : CheckCBLAS, 'CheckInline': CheckInline, 'CheckGCC4' : CheckGCC4, - 'CheckLongDoubleRepresentation': CheckLongDoubleRepresentation}, - config_h = pjoin('config.h')) - -# numpyconfig_sym will keep the values of some configuration variables, the ones -# needed for the public numpy API. - -# Convention: list of tuples (definition, value). value: -# - 0: #undef definition -# - 1: #define definition -# - string: #define definition value -numpyconfig_sym = [] - -#--------------- -# Checking Types -#--------------- -if not config.CheckHeader("Python.h"): - errmsg = [] - for line in config.GetLastError(): - errmsg.append("%s " % line) - print """ -Error: Python.h header cannot be compiled (or cannot be found). -On linux, check that you have python-dev/python-devel packages. On windows, -check that you have the platform SDK. You may also use unsupported cflags. 
-Configuration error log says: \n\n%s""" % ''.join(errmsg) - Exit(-1) - -st = config.CheckHeader("endian.h") -if st: - numpyconfig_sym.append(('DEFINE_NPY_HAVE_ENDIAN_H', '#define NPY_HAVE_ENDIAN_H 1')) -else: - numpyconfig_sym.append(('DEFINE_NPY_HAVE_ENDIAN_H', '')) - -def check_type(type, include = None): - st = config.CheckTypeSize(type, includes = include) - type = type.replace(' ', '_') - if st: - numpyconfig_sym.append(('SIZEOF_%s' % type.upper(), '%d' % st)) - else: - numpyconfig_sym.append(('SIZEOF_%s' % type.upper(), 0)) - -for type in ('short', 'int', 'long'): - # SIZEOF_LONG defined on darwin - if type == "long": - if not config.CheckDeclaration("SIZEOF_LONG", includes="#include "): - check_type(type) - else: - numpyconfig_sym.append(('SIZEOF_LONG', 'SIZEOF_LONG')) - else: - check_type(type) - -for type in ('float', 'double', 'long double'): - sz = config.CheckTypeSize(type) - numpyconfig_sym.append(('SIZEOF_%s' % type2def(type), str(sz))) - - # Compute size of corresponding complex type: used to check that our - # definition is binary compatible with C99 complex type (check done at - # build time in npy_common.h) - complex_def = "struct {%s __x; %s __y;}" % (type, type) - sz = config.CheckTypeSize(complex_def) - numpyconfig_sym.append(('SIZEOF_COMPLEX_%s' % type2def(type), str(sz))) - -if sys.platform != 'darwin': - tp = config.CheckLongDoubleRepresentation() - config.Define("HAVE_LDOUBLE_%s" % tp, 1, - "Define for arch-specific long double representation") - -for type in ('Py_intptr_t',): - check_type(type, include = "#include \n") - -# We check declaration AND type because that's how distutils does it. 
-if config.CheckDeclaration('PY_LONG_LONG', includes = '#include \n'): - st = config.CheckTypeSize('PY_LONG_LONG', - includes = '#include \n') - assert not st == 0 - numpyconfig_sym.append(('DEFINE_NPY_SIZEOF_LONGLONG', - '#define NPY_SIZEOF_LONGLONG %d' % st)) - numpyconfig_sym.append(('DEFINE_NPY_SIZEOF_PY_LONG_LONG', - '#define NPY_SIZEOF_PY_LONG_LONG %d' % st)) -else: - numpyconfig_sym.append(('DEFINE_NPY_SIZEOF_LONGLONG', '')) - numpyconfig_sym.append(('DEFINE_NPY_SIZEOF_PY_LONG_LONG', '')) - -if not config.CheckDeclaration('CHAR_BIT', includes= '#include \n'): - raise RuntimeError(\ -"""Config wo CHAR_BIT is not supported with scons: please contact the -maintainer (cdavid)""") - -#---------------------- -# Checking signal stuff -#---------------------- -if is_npy_no_signal(): - numpyconfig_sym.append(('DEFINE_NPY_NO_SIGNAL', '#define NPY_NO_SIGNAL\n')) - config.Define('__NPY_PRIVATE_NO_SIGNAL', - comment = "define to 1 to disable SMP support ") -else: - numpyconfig_sym.append(('DEFINE_NPY_NO_SIGNAL', '')) - -#--------------------- -# Checking SMP option -#--------------------- -if define_no_smp(): - nosmp = 1 -else: - nosmp = 0 -numpyconfig_sym.append(('NPY_NO_SMP', nosmp)) - -#---------------------------------------------- -# Check whether we can use C99 printing formats -#---------------------------------------------- -if config.CheckDeclaration(('PRIdPTR'), includes = '#include '): - numpyconfig_sym.append(('DEFINE_NPY_USE_C99_FORMATS', '#define NPY_USE_C99_FORMATS 1')) -else: - numpyconfig_sym.append(('DEFINE_NPY_USE_C99_FORMATS', '')) - -#---------------------- -# Checking the mathlib -#---------------------- -mlibs = [[], ['m'], ['cpml']] -mathlib = os.environ.get('MATHLIB') -if mathlib: - mlibs.insert(0, mathlib) - -mlib = check_mlibs(config, mlibs) - -# XXX: this is ugly: mathlib has nothing to do in a public header file -numpyconfig_sym.append(('MATHLIB', ','.join(mlib))) - -#---------------------------------- -# Checking the math funcs available 
-#---------------------------------- -# Function to check: -mfuncs = ('expl', 'expf', 'log1p', 'expm1', 'asinh', 'atanhf', 'atanhl', - 'rint', 'trunc') - -# Set value to 1 for each defined function (in math lib) -mfuncs_defined = dict([(f, 0) for f in mfuncs]) - -# Check for mandatory funcs: we barf if a single one of those is not there -if not config.CheckFuncsAtOnce(MANDATORY_FUNCS): - raise SystemError("One of the required function to build numpy is not" - " available (the list is %s)." % str(MANDATORY_FUNCS)) - -# Standard functions which may not be available and for which we have a -# replacement implementation -# -def check_funcs(funcs): - # Use check_funcs_once first, and if it does not work, test func per - # func. Return success only if all the functions are available - st = config.CheckFuncsAtOnce(funcs) - if not st: - # Global check failed, check func per func - for f in funcs: - st = config.CheckFunc(f, language = 'C') - -for f in OPTIONAL_STDFUNCS_MAYBE: - if config.CheckDeclaration(fname2def(f), - includes="#include \n#include "): - OPTIONAL_STDFUNCS.remove(f) -check_funcs(OPTIONAL_STDFUNCS) - -# C99 functions: float and long double versions -if not BYPASS_SINGLE_EXTENDED: - check_funcs(C99_FUNCS_SINGLE) - check_funcs(C99_FUNCS_EXTENDED) - -# Normally, isnan and isinf are macro (C99), but some platforms only have -# func, or both func and macro version. Check for macro only, and define -# replacement ones if not found. 
-# Note: including Python.h is necessary because it modifies some math.h -# definitions -for f in ["isnan", "isinf", "signbit", "isfinite"]: - includes = """\ -#include -#include -""" - st = config.CheckDeclaration(f, includes=includes) - if st: - numpyconfig_sym.append(('DEFINE_NPY_HAVE_DECL_%s' % f.upper(), - '#define NPY_HAVE_DECL_%s' % f.upper())) - else: - numpyconfig_sym.append(('DEFINE_NPY_HAVE_DECL_%s' % f.upper(), '')) - -inline = config.CheckInline() -config.Define('inline', inline) - - -if ENABLE_SEPARATE_COMPILATION: - config.Define("ENABLE_SEPARATE_COMPILATION", 1) - numpyconfig_sym.append(('DEFINE_NPY_ENABLE_SEPARATE_COMPILATION', '#define NPY_ENABLE_SEPARATE_COMPILATION 1')) -else: - numpyconfig_sym.append(('DEFINE_NPY_ENABLE_SEPARATE_COMPILATION', '')) - -#----------------------------- -# Checking for complex support -#----------------------------- -if config.CheckHeader('complex.h'): - numpyconfig_sym.append(('DEFINE_NPY_USE_C99_COMPLEX', '#define NPY_USE_C99_COMPLEX 1')) - - for t in C99_COMPLEX_TYPES: - st = config.CheckType(t, includes='#include ') - if st: - numpyconfig_sym.append(('DEFINE_NPY_HAVE_%s' % type2def(t), - '#define NPY_HAVE_%s' % type2def(t))) - else: - numpyconfig_sym.append(('DEFINE_NPY_HAVE_%s' % type2def(t), '')) - - def check_prec(prec): - flist = [f + prec for f in C99_COMPLEX_FUNCS] - st = config.CheckFuncsAtOnce(flist) - if not st: - # Global check failed, check func per func - for f in flist: - config.CheckFunc(f, language='C') - - check_prec('') - check_prec('f') - check_prec('l') - -else: - numpyconfig_sym.append(('DEFINE_NPY_USE_C99_COMPLEX', '')) - for t in C99_COMPLEX_TYPES: - numpyconfig_sym.append(('DEFINE_NPY_HAVE_%s' % type2def(t), '')) - -def visibility_define(): - if config.CheckGCC4(): - return '__attribute__((visibility("hidden")))' - else: - return '' - -numpyconfig_sym.append(('VISIBILITY_HIDDEN', visibility_define())) - -# Add the C API/ABI versions -numpyconfig_sym.append(('NPY_ABI_VERSION', '0x%.8X' % 
C_ABI_VERSION)) -numpyconfig_sym.append(('NPY_API_VERSION', '0x%.8X' % C_API_VERSION)) - -# Check whether we need our own wide character support -if not config.CheckDeclaration('Py_UNICODE_WIDE', includes='#include '): - PYTHON_HAS_UNICODE_WIDE = True -else: - PYTHON_HAS_UNICODE_WIDE = False - -#------------------------------------------------------- -# Define the function PyOS_ascii_strod if not available -#------------------------------------------------------- -if not config.CheckDeclaration('PyOS_ascii_strtod', - includes = "#include "): - if config.CheckFunc('strtod'): - config.Define('PyOS_ascii_strtod', 'strtod', - "Define to a function to use as a replacement for "\ - "PyOS_ascii_strtod if not available in python header") - -#------------------------------------ -# DISTUTILS Hack on AMD64 on windows -#------------------------------------ -# XXX: this is ugly -if sys.platform=='win32' or os.name=='nt': - from distutils.msvccompiler import get_build_architecture - a = get_build_architecture() - print 'BUILD_ARCHITECTURE: %r, os.name=%r, sys.platform=%r' % \ - (a, os.name, sys.platform) - if a == 'AMD64': - distutils_use_sdk = 1 - config.Define('DISTUTILS_USE_SDK', distutils_use_sdk, - "define to 1 to disable SMP support ") - - if a == "Intel": - config.Define('FORCE_NO_LONG_DOUBLE_FORMATTING', 1, - "define to 1 to force long double format string to the" \ - " same as double (Lg -> g)") -#-------------- -# Checking Blas -#-------------- -if config.CheckCBLAS(): - build_blasdot = 1 -else: - build_blasdot = 0 - -config.config_h_text += """ -#ifndef _NPY_NPY_CONFIG_H_ -#error config.h should never be included directly, include npy_config.h instead -#endif -""" - -config.Finish() -write_info(env) - -#========== -# Build -#========== - -# List of headers which need to be "installed " into the build directory for -# proper in-place build support -generated_headers = [] - -#--------------------------------------- -# Generate the public configuration file 
-#--------------------------------------- -config_dict = {} -# XXX: this is ugly, make the API for config.h and numpyconfig.h similar -for key, value in numpyconfig_sym: - config_dict['@%s@' % key] = str(value) -env['SUBST_DICT'] = config_dict - -include_dir = 'include/numpy' -target = env.SubstInFile(pjoin(include_dir, '_numpyconfig.h'), - pjoin(include_dir, '_numpyconfig.h.in')) -generated_headers.append(target[0]) - -env['CONFIG_H_GEN'] = numpyconfig_sym - -#--------------------------- -# Builder for generated code -#--------------------------- -env.Append(BUILDERS = {'GenerateMultiarrayApi' : array_api_gen_bld, - 'GenerateUfuncApi' : ufunc_api_gen_bld, - 'GenerateFromTemplate' : template_bld, - 'GenerateUmath' : umath_bld}) - -#------------------------ -# Generate generated code -#------------------------ -scalartypes_src = env.GenerateFromTemplate( - pjoin('src', 'multiarray', 'scalartypes.c.src')) -umath_funcs_src = env.GenerateFromTemplate(pjoin('src', 'umath', 'funcs.inc.src')) -umath_loops_src = env.GenerateFromTemplate(pjoin('src', 'umath', 'loops.c.src')) -arraytypes_src = env.GenerateFromTemplate( - pjoin('src', 'multiarray', 'arraytypes.c.src')) -sortmodule_src = env.GenerateFromTemplate(pjoin('src', '_sortmodule.c.src')) -umathmodule_src = env.GenerateFromTemplate(pjoin('src', 'umath', - 'umathmodule.c.src')) -umath_tests_src = env.GenerateFromTemplate(pjoin('src', 'umath', - 'umath_tests.c.src')) -multiarray_tests_src = env.GenerateFromTemplate(pjoin('src', 'multiarray', - 'multiarray_tests.c.src')) -scalarmathmodule_src = env.GenerateFromTemplate( - pjoin('src', 'scalarmathmodule.c.src')) - -umath = env.GenerateUmath('__umath_generated', - pjoin('code_generators', 'generate_umath.py')) - -multiarray_api = env.GenerateMultiarrayApi('include/numpy/multiarray_api', - [SCons.Node.Python.Value(d) for d in multiarray_api_dict]) -generated_headers.append(multiarray_api[0]) - -ufunc_api = env.GenerateUfuncApi('include/numpy/ufunc_api', - 
[SCons.Node.Python.Value(d) for d in ufunc_api_dict]) -generated_headers.append(ufunc_api[0]) - -# include/numpy is added for compatibility reasons with distutils: this is -# needed for __multiarray_api.c and __ufunc_api.c included from multiarray and -# ufunc. -env.Prepend(CPPPATH = ['src/private', 'include', '.', 'include/numpy']) - -# npymath core lib -npymath_src = [env.GenerateFromTemplate(pjoin('src', 'npymath', 'npy_math.c.src')), - env.GenerateFromTemplate(pjoin('src', 'npymath', 'npy_math_complex.c.src')), - env.GenerateFromTemplate(pjoin('src', 'npymath', 'ieee754.c.src'))] -env.DistutilsInstalledStaticExtLibrary("npymath", npymath_src, install_dir='lib') -env.Prepend(LIBS=["npymath"]) -env.Prepend(LIBPATH=["."]) - -subst_dict = {'@prefix@': '$distutils_install_prefix', - '@pkgname@': 'numpy.core', '@sep@': os.path.sep} -npymath_ini = env.SubstInFile(pjoin('lib', 'npy-pkg-config', 'npymath.ini'), - 'npymath.ini.in', SUBST_DICT=subst_dict) - -subst_dict = {'@posix_mathlib@': " ".join(['-l%s' % l for l in mlib]), - '@msvc_mathlib@': " ".join(['%s.mlib' % l for l in mlib])} -mlib_ini = env.SubstInFile(pjoin('lib', 'npy-pkg-config', 'mlib.ini'), - 'mlib.ini.in', SUBST_DICT=subst_dict) -env.Install('$distutils_installdir/lib/npy-pkg-config', mlib_ini) -env.Install('$distutils_installdir/lib/npy-pkg-config', npymath_ini) - -#----------------- -# Build multiarray -#----------------- -if ENABLE_SEPARATE_COMPILATION: - multiarray_src = [pjoin('src', 'multiarray', 'multiarraymodule.c'), - pjoin('src', 'multiarray', 'hashdescr.c'), - pjoin('src', 'multiarray', 'arrayobject.c'), - pjoin('src', 'multiarray', 'numpyos.c'), - pjoin('src', 'multiarray', 'flagsobject.c'), - pjoin('src', 'multiarray', 'descriptor.c'), - pjoin('src', 'multiarray', 'iterators.c'), - pjoin('src', 'multiarray', 'mapping.c'), - pjoin('src', 'multiarray', 'number.c'), - pjoin('src', 'multiarray', 'getset.c'), - pjoin('src', 'multiarray', 'sequence.c'), - pjoin('src', 'multiarray', 'methods.c'), 
- pjoin('src', 'multiarray', 'ctors.c'), - pjoin('src', 'multiarray', 'convert_datatype.c'), - pjoin('src', 'multiarray', 'convert.c'), - pjoin('src', 'multiarray', 'shape.c'), - pjoin('src', 'multiarray', 'item_selection.c'), - pjoin('src', 'multiarray', 'calculation.c'), - pjoin('src', 'multiarray', 'common.c'), - pjoin('src', 'multiarray', 'refcount.c'), - pjoin('src', 'multiarray', 'conversion_utils.c'), - pjoin('src', 'multiarray', 'usertypes.c'), - pjoin('src', 'multiarray', 'buffer.c'), - pjoin('src', 'multiarray', 'numpymemoryview.c'), - pjoin('src', 'multiarray', 'scalarapi.c')] - multiarray_src.extend(arraytypes_src) - multiarray_src.extend(scalartypes_src) - if PYTHON_HAS_UNICODE_WIDE: - multiarray_src.extend([pjoin("src", "multiarray", "ucsnarrow.c")]) -else: - multiarray_src = [pjoin('src', 'multiarray', 'multiarraymodule_onefile.c')] -multiarray = env.DistutilsPythonExtension('multiarray', source = multiarray_src) -env.DistutilsPythonExtension('multiarray_tests', source=multiarray_tests_src) - -#------------------ -# Build sort module -#------------------ -sort = env.DistutilsPythonExtension('_sort', source = sortmodule_src) - -#------------------- -# Build umath module -#------------------- -if ENABLE_SEPARATE_COMPILATION: - umathmodule_src.extend([pjoin('src', 'umath', 'ufunc_object.c')]) - umathmodule_src.extend(umath_loops_src) -else: - umathmodule_src = [pjoin('src', 'umath', 'umathmodule_onefile.c')] -umathmodule = env.DistutilsPythonExtension('umath', source = umathmodule_src) - -#------------------------ -# Build scalarmath module -#------------------------ -scalarmathmodule = env.DistutilsPythonExtension('scalarmath', - source = scalarmathmodule_src) - -#------------------------ -# Build scalarmath module -#------------------------ -umath_tests = env.DistutilsPythonExtension('umath_tests', - source=umath_tests_src) - -#---------------------- -# Build _dotblas module -#---------------------- -if build_blasdot: - dotblas_src = [pjoin('blasdot', 
i) for i in ['_dotblas.c']] - # because _dotblas does #include CBLAS_HEADER instead of #include - # "cblas.h", scons does not detect the dependency - # XXX: PythonExtension builder does not take the Depends on extension into - # account for some reason, so we first build the object, with forced - # dependency, and then builds the extension. This is more likely a bug in - # our PythonExtension builder, but I cannot see how to solve it. - dotblas_o = env.PythonObject('_dotblas', source = dotblas_src) - env.Depends(dotblas_o, pjoin("blasdot", "cblas.h")) - dotblas = env.DistutilsPythonExtension('_dotblas', dotblas_o) - -# "Install" the header in the build directory, so that in-place build works -for h in generated_headers: - env.Install(pjoin('$distutils_installdir', include_dir), h) diff --git a/pythonPackages/numpy/numpy/core/SConstruct b/pythonPackages/numpy/numpy/core/SConstruct deleted file mode 100755 index a377d8391b..0000000000 --- a/pythonPackages/numpy/numpy/core/SConstruct +++ /dev/null @@ -1,2 +0,0 @@ -from numscons import GetInitEnvironment -GetInitEnvironment(ARGUMENTS).DistutilsSConscript('SConscript') diff --git a/pythonPackages/numpy/numpy/core/__init__.py b/pythonPackages/numpy/numpy/core/__init__.py deleted file mode 100755 index 4a9f3ac758..0000000000 --- a/pythonPackages/numpy/numpy/core/__init__.py +++ /dev/null @@ -1,42 +0,0 @@ - -from info import __doc__ -from numpy.version import version as __version__ - -import multiarray -import umath -import _internal # for freeze programs -import numerictypes as nt -multiarray.set_typeDict(nt.sctypeDict) -import _sort -from numeric import * -from fromnumeric import * -import defchararray as char -import records as rec -from records import * -from memmap import * -from defchararray import chararray -import scalarmath -from function_base import * -from machar import * -from getlimits import * -from shape_base import * -del nt - -from fromnumeric import amax as max, amin as min, \ - round_ as round -from 
numeric import absolute as abs - -__all__ = ['char','rec','memmap'] -__all__ += numeric.__all__ -__all__ += fromnumeric.__all__ -__all__ += rec.__all__ -__all__ += ['chararray'] -__all__ += function_base.__all__ -__all__ += machar.__all__ -__all__ += getlimits.__all__ -__all__ += shape_base.__all__ - - -from numpy.testing import Tester -test = Tester().test -bench = Tester().bench diff --git a/pythonPackages/numpy/numpy/core/_internal.py b/pythonPackages/numpy/numpy/core/_internal.py deleted file mode 100755 index e4596f38ed..0000000000 --- a/pythonPackages/numpy/numpy/core/_internal.py +++ /dev/null @@ -1,581 +0,0 @@ -#A place for code to be called from C-code -# that implements more complicated stuff. - -import re -import sys - -from numpy.compat import asbytes, bytes - -if (sys.byteorder == 'little'): - _nbo = asbytes('<') -else: - _nbo = asbytes('>') - -def _makenames_list(adict): - from multiarray import dtype - allfields = [] - fnames = adict.keys() - for fname in fnames: - obj = adict[fname] - n = len(obj) - if not isinstance(obj, tuple) or n not in [2,3]: - raise ValueError("entry not a 2- or 3- tuple") - if (n > 2) and (obj[2] == fname): - continue - num = int(obj[1]) - if (num < 0): - raise ValueError("invalid offset.") - format = dtype(obj[0]) - if (format.itemsize == 0): - raise ValueError("all itemsizes must be fixed.") - if (n > 2): - title = obj[2] - else: - title = None - allfields.append((fname, format, num, title)) - # sort by offsets - allfields.sort(key=lambda x: x[2]) - names = [x[0] for x in allfields] - formats = [x[1] for x in allfields] - offsets = [x[2] for x in allfields] - titles = [x[3] for x in allfields] - - return names, formats, offsets, titles - -# Called in PyArray_DescrConverter function when -# a dictionary without "names" and "formats" -# fields is used as a data-type descriptor. 
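`_makenames_list` above turns a fields dict into parallel name/format/offset/title lists sorted by byte offset. A self-contained sketch of that validation-and-sorting step (simplified: formats are kept as plain strings rather than converted to `dtype` objects, and titles are dropped from the return value):

```python
def makenames_list(adict):
    # Each value is (format, offset[, title]); reject malformed entries
    # and return field names and offsets sorted by offset, mirroring
    # the shape of the deleted _makenames_list.
    allfields = []
    for fname, obj in adict.items():
        if not isinstance(obj, tuple) or len(obj) not in (2, 3):
            raise ValueError("entry not a 2- or 3- tuple")
        offset = int(obj[1])
        if offset < 0:
            raise ValueError("invalid offset.")
        title = obj[2] if len(obj) > 2 else None
        allfields.append((fname, obj[0], offset, title))
    allfields.sort(key=lambda x: x[2])  # sort by offsets
    return ([x[0] for x in allfields], [x[2] for x in allfields])

print(makenames_list({'b': ('f8', 8), 'a': ('i4', 0)}))  # -> (['a', 'b'], [0, 8])
```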
-def _usefields(adict, align): - from multiarray import dtype - try: - names = adict[-1] - except KeyError: - names = None - if names is None: - names, formats, offsets, titles = _makenames_list(adict) - else: - formats = [] - offsets = [] - titles = [] - for name in names: - res = adict[name] - formats.append(res[0]) - offsets.append(res[1]) - if (len(res) > 2): - titles.append(res[2]) - else: - titles.append(None) - - return dtype({"names" : names, - "formats" : formats, - "offsets" : offsets, - "titles" : titles}, align) - - -# construct an array_protocol descriptor list -# from the fields attribute of a descriptor -# This calls itself recursively but should eventually hit -# a descriptor that has no fields and then return -# a simple typestring - -def _array_descr(descriptor): - from multiarray import METADATA_DTSTR - fields = descriptor.fields - if fields is None: - subdtype = descriptor.subdtype - if subdtype is None: - if descriptor.metadata is None: - return descriptor.str - else: - new = descriptor.metadata.copy() - # Eliminate any key related to internal implementation - _ = new.pop(METADATA_DTSTR, None) - return (descriptor.str, new) - else: - return (_array_descr(subdtype[0]), subdtype[1]) - - - names = descriptor.names - ordered_fields = [fields[x] + (x,) for x in names] - result = [] - offset = 0 - for field in ordered_fields: - if field[1] > offset: - num = field[1] - offset - result.append(('','|V%d' % num)) - offset += num - if len(field) > 3: - name = (field[2],field[3]) - else: - name = field[2] - if field[0].subdtype: - tup = (name, _array_descr(field[0].subdtype[0]), - field[0].subdtype[1]) - else: - tup = (name, _array_descr(field[0])) - offset += field[0].itemsize - result.append(tup) - - return result - -# Build a new array from the information in a pickle. 
-# Note that the name numpy.core._internal._reconstruct is embedded in -# pickles of ndarrays made with NumPy before release 1.0 -# so don't remove the name here, or you'll -# break backward compatibility. -def _reconstruct(subtype, shape, dtype): - from multiarray import ndarray - return ndarray.__new__(subtype, shape, dtype) - - -# format_re and _split were taken from numarray by J. Todd Miller - -def _split(input): - """Split the input formats string into field formats without splitting - the tuple used to specify multi-dimensional arrays.""" - - newlist = [] - hold = asbytes('') - - listinput = input.split(asbytes(',')) - for element in listinput: - if hold != asbytes(''): - item = hold + asbytes(',') + element - else: - item = element - left = item.count(asbytes('(')) - right = item.count(asbytes(')')) - - # if the parenthesis is not balanced, hold the string - if left > right : - hold = item - - # when balanced, append to the output list and reset the hold - elif left == right: - newlist.append(item.strip()) - hold = asbytes('') - - # too many close parentheses is unacceptable - else: - raise SyntaxError(item) - - # if there is string left over in hold - if hold != asbytes(''): - raise SyntaxError(hold) - - return newlist - - -format_re = re.compile(asbytes(r'(?P[<>|=]?)(?P *[(]?[ ,0-9]*[)]? 
*)(?P[<>|=]?)(?P[A-Za-z0-9.]*)')) - -# astr is a string (perhaps comma separated) - -_convorder = {asbytes('='): _nbo} - -def _commastring(astr): - res = _split(astr) - if (len(res)) < 1: - raise ValueError("unrecognized format") - result = [] - for k,item in enumerate(res): - # convert item - try: - (order1, repeats, order2, dtype) = format_re.match(item).groups() - except (TypeError, AttributeError): - raise ValueError('format %s is not recognized' % item) - - if order2 == asbytes(''): - order = order1 - elif order1 == asbytes(''): - order = order2 - else: - order1 = _convorder.get(order1, order1) - order2 = _convorder.get(order2, order2) - if (order1 != order2): - raise ValueError('inconsistent byte-order specification %s and %s' % (order1, order2)) - order = order1 - - if order in [asbytes('|'), asbytes('='), _nbo]: - order = asbytes('') - dtype = order + dtype - if (repeats == asbytes('')): - newitem = dtype - else: - newitem = (dtype, eval(repeats)) - result.append(newitem) - - return result - -def _getintp_ctype(): - from multiarray import dtype - val = _getintp_ctype.cache - if val is not None: - return val - char = dtype('p').char - import ctypes - if (char == 'i'): - val = ctypes.c_int - elif char == 'l': - val = ctypes.c_long - elif char == 'q': - val = ctypes.c_longlong - else: - val = ctypes.c_long - _getintp_ctype.cache = val - return val -_getintp_ctype.cache = None - -# Used for .ctypes attribute of ndarray - -class _missing_ctypes(object): - def cast(self, num, obj): - return num - - def c_void_p(self, num): - return num - -class _ctypes(object): - def __init__(self, array, ptr=None): - try: - import ctypes - self._ctypes = ctypes - except ImportError: - self._ctypes = _missing_ctypes() - self._arr = array - self._data = ptr - if self._arr.ndim == 0: - self._zerod = True - else: - self._zerod = False - - def data_as(self, obj): - return self._ctypes.cast(self._data, obj) - - def shape_as(self, obj): - if self._zerod: - return None - return 
(obj*self._arr.ndim)(*self._arr.shape) - - def strides_as(self, obj): - if self._zerod: - return None - return (obj*self._arr.ndim)(*self._arr.strides) - - def get_data(self): - return self._data - - def get_shape(self): - if self._zerod: - return None - return (_getintp_ctype()*self._arr.ndim)(*self._arr.shape) - - def get_strides(self): - if self._zerod: - return None - return (_getintp_ctype()*self._arr.ndim)(*self._arr.strides) - - def get_as_parameter(self): - return self._ctypes.c_void_p(self._data) - - data = property(get_data, None, doc="c-types data") - shape = property(get_shape, None, doc="c-types shape") - strides = property(get_strides, None, doc="c-types strides") - _as_parameter_ = property(get_as_parameter, None, doc="_as parameter_") - - -# Given a datatype and an order object -# return a new names tuple -# with the order indicated -def _newnames(datatype, order): - oldnames = datatype.names - nameslist = list(oldnames) - if isinstance(order, str): - order = [order] - if isinstance(order, (list, tuple)): - for name in order: - try: - nameslist.remove(name) - except ValueError: - raise ValueError("unknown field name: %s" % (name,)) - return tuple(list(order) + nameslist) - raise ValueError("unsupported order value: %s" % (order,)) - -# Given an array with fields and a sequence of field names -# construct a new array with just those fields copied over -def _index_fields(ary, fields): - from multiarray import empty, dtype - dt = ary.dtype - new_dtype = [(name, dt[name]) for name in dt.names if name in fields] - if ary.flags.f_contiguous: - order = 'F' - else: - order = 'C' - - newarray = empty(ary.shape, dtype=new_dtype, order=order) - - for name in fields: - newarray[name] = ary[name] - - return newarray - -# Given a string containing a PEP 3118 format specifier, -# construct a Numpy dtype - -_pep3118_native_map = { - '?': '?', - 'b': 'b', - 'B': 'B', - 'h': 'h', - 'H': 'H', - 'i': 'i', - 'I': 'I', - 'l': 'l', - 'L': 'L', - 'q': 'q', - 'Q': 'Q', - 
'f': 'f', - 'd': 'd', - 'g': 'g', - 'Zf': 'F', - 'Zd': 'D', - 'Zg': 'G', - 's': 'S', - 'w': 'U', - 'O': 'O', - 'x': 'V', # padding -} -_pep3118_native_typechars = ''.join(_pep3118_native_map.keys()) - -_pep3118_standard_map = { - '?': '?', - 'b': 'b', - 'B': 'B', - 'h': 'i2', - 'H': 'u2', - 'i': 'i4', - 'I': 'u4', - 'l': 'i4', - 'L': 'u4', - 'q': 'i8', - 'Q': 'u8', - 'f': 'f', - 'd': 'd', - 'Zf': 'F', - 'Zd': 'D', - 's': 'S', - 'w': 'U', - 'O': 'O', - 'x': 'V', # padding -} -_pep3118_standard_typechars = ''.join(_pep3118_standard_map.keys()) - -def _dtype_from_pep3118(spec, byteorder='@', is_subdtype=False): - from numpy.core.multiarray import dtype - - fields = {} - offset = 0 - explicit_name = False - this_explicit_name = False - common_alignment = 1 - is_padding = False - last_offset = 0 - - dummy_name_index = [0] - def next_dummy_name(): - dummy_name_index[0] += 1 - def get_dummy_name(): - while True: - name = 'f%d' % dummy_name_index[0] - if name not in fields: - return name - next_dummy_name() - - # Parse spec - while spec: - value = None - - # End of structure, bail out to upper level - if spec[0] == '}': - spec = spec[1:] - break - - # Sub-arrays (1) - shape = None - if spec[0] == '(': - j = spec.index(')') - shape = tuple(map(int, spec[1:j].split(','))) - spec = spec[j+1:] - - # Byte order - if spec[0] in ('@', '=', '<', '>', '^', '!'): - byteorder = spec[0] - if byteorder == '!': - byteorder = '>' - spec = spec[1:] - - # Byte order characters also control native vs. 
standard type sizes - if byteorder in ('@', '^'): - type_map = _pep3118_native_map - type_map_chars = _pep3118_native_typechars - else: - type_map = _pep3118_standard_map - type_map_chars = _pep3118_standard_typechars - - # Item sizes - itemsize = 1 - if spec[0].isdigit(): - j = 1 - for j in xrange(1, len(spec)): - if not spec[j].isdigit(): - break - itemsize = int(spec[:j]) - spec = spec[j:] - - # Data types - is_padding = False - - if spec[:2] == 'T{': - value, spec, align, next_byteorder = _dtype_from_pep3118( - spec[2:], byteorder=byteorder, is_subdtype=True) - elif spec[0] in type_map_chars: - next_byteorder = byteorder - if spec[0] == 'Z': - j = 2 - else: - j = 1 - typechar = spec[:j] - spec = spec[j:] - is_padding = (typechar == 'x') - dtypechar = type_map[typechar] - if dtypechar in 'USV': - dtypechar += '%d' % itemsize - itemsize = 1 - numpy_byteorder = {'@': '=', '^': '='}.get(byteorder, byteorder) - value = dtype(numpy_byteorder + dtypechar) - align = value.alignment - else: - raise ValueError("Unknown PEP 3118 data type specifier %r" % spec) - - # - # Native alignment may require padding - # - # Here we assume that the presence of a '@' character implicitly implies - # that the start of the array is *already* aligned. 
- # - extra_offset = 0 - if byteorder == '@': - start_padding = (-offset) % align - intra_padding = (-value.itemsize) % align - - offset += start_padding - - if intra_padding != 0: - if itemsize > 1 or (shape is not None and _prod(shape) > 1): - # Inject internal padding to the end of the sub-item - value = _add_trailing_padding(value, intra_padding) - else: - # We can postpone the injection of internal padding, - # as the item appears at most once - extra_offset += intra_padding - - # Update common alignment - common_alignment = (align*common_alignment - / _gcd(align, common_alignment)) - - # Convert itemsize to sub-array - if itemsize != 1: - value = dtype((value, (itemsize,))) - - # Sub-arrays (2) - if shape is not None: - value = dtype((value, shape)) - - # Field name - this_explicit_name = False - if spec and spec.startswith(':'): - i = spec[1:].index(':') + 1 - name = spec[1:i] - spec = spec[i+1:] - explicit_name = True - this_explicit_name = True - else: - name = get_dummy_name() - - if not is_padding or this_explicit_name: - if name in fields: - raise RuntimeError("Duplicate field name '%s' in PEP3118 format" - % name) - fields[name] = (value, offset) - last_offset = offset - if not this_explicit_name: - next_dummy_name() - - byteorder = next_byteorder - - offset += value.itemsize - offset += extra_offset - - # Check if this was a simple 1-item type - if len(fields.keys()) == 1 and not explicit_name and fields['f0'][1] == 0 \ - and not is_subdtype: - ret = fields['f0'][0] - else: - ret = dtype(fields) - - # Trailing padding must be explicitly added - padding = offset - ret.itemsize - if byteorder == '@': - padding += (-offset) % common_alignment - if is_padding and not this_explicit_name: - ret = _add_trailing_padding(ret, padding) - - # Finished - if is_subdtype: - return ret, spec, common_alignment, byteorder - else: - return ret - -def _add_trailing_padding(value, padding): - """Inject the specified number of padding bytes at the end of a dtype""" - from 
numpy.core.multiarray import dtype - - if value.fields is None: - vfields = {'f0': (value, 0)} - else: - vfields = dict(value.fields) - - if value.names and value.names[-1] == '' and \ - value[''].char == 'V': - # A trailing padding field is already present - vfields[''] = ('V%d' % (vfields[''][0].itemsize + padding), - vfields[''][1]) - value = dtype(vfields) - else: - # Get a free name for the padding field - j = 0 - while True: - name = 'pad%d' % j - if name not in vfields: - vfields[name] = ('V%d' % padding, value.itemsize) - break - j += 1 - - value = dtype(vfields) - if '' not in vfields: - # Strip out the name of the padding field - names = list(value.names) - names[-1] = '' - value.names = tuple(names) - return value - -def _prod(a): - p = 1 - for x in a: - p *= x - return p - -def _gcd(a, b): - """Calculate the greatest common divisor of a and b""" - while b: - a, b = b, a%b - return a diff --git a/pythonPackages/numpy/numpy/core/_mx_datetime_parser.py b/pythonPackages/numpy/numpy/core/_mx_datetime_parser.py deleted file mode 100755 index f1b330f007..0000000000 --- a/pythonPackages/numpy/numpy/core/_mx_datetime_parser.py +++ /dev/null @@ -1,962 +0,0 @@ -#-*- coding: latin-1 -*- -""" -Date/Time string parsing module. - -This code is a slightly modified version of Parser.py found in mx.DateTime -version 3.0.0 - -As such, it is subject to the terms of the eGenix public license version 1.1.0. - -FIXME: Add license.txt to NumPy -""" - -__all__ = ['date_from_string', 'datetime_from_string'] - -import types -import re -import datetime as dt - -class RangeError(Exception): pass - -# Enable to produce debugging output -_debug = 0 - -# REs for matching date and time parts in a string; These REs -# parse a superset of ARPA, ISO, American and European style dates. -# Timezones are supported via the Timezone submodule. 
- -_year = '(?P-?\d+\d(?!:))' -_fullyear = '(?P-?\d+\d\d(?!:))' -_year_epoch = '(?:' + _year + '(?P *[ABCDE\.]+)?)' -_fullyear_epoch = '(?:' + _fullyear + '(?P *[ABCDE\.]+)?)' -_relyear = '(?:\((?P[-+]?\d+)\))' - -_month = '(?P\d?\d(?!:))' -_fullmonth = '(?P\d\d(?!:))' -_litmonth = ('(?P' - 'jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec|' - 'mär|mae|mrz|mai|okt|dez|' - 'fev|avr|juin|juil|aou|aoû|déc|' - 'ene|abr|ago|dic|' - 'out' - ')[a-z,\.;]*') -litmonthtable = { - # English - 'jan':1, 'feb':2, 'mar':3, 'apr':4, 'may':5, 'jun':6, - 'jul':7, 'aug':8, 'sep':9, 'oct':10, 'nov':11, 'dec':12, - # German - 'mär':3, 'mae':3, 'mrz':3, 'mai':5, 'okt':10, 'dez':12, - # French - 'fev':2, 'avr':4, 'juin':6, 'juil':7, 'aou':8, 'aoû':8, - 'déc':12, - # Spanish - 'ene':1, 'abr':4, 'ago':8, 'dic':12, - # Portuguese - 'out':10, - } -_relmonth = '(?:\((?P[-+]?\d+)\))' - -_day = '(?P\d?\d(?!:))' -_usday = '(?P\d?\d(?!:))(?:st|nd|rd|th|[,\.;])?' -_fullday = '(?P\d\d(?!:))' -_litday = ('(?P' - 'mon|tue|wed|thu|fri|sat|sun|' - 'die|mit|don|fre|sam|son|' - 'lun|mar|mer|jeu|ven|sam|dim|' - 'mie|jue|vie|sab|dom|' - 'pri|seg|ter|cua|qui' - ')[a-z]*') -litdaytable = { - # English - 'mon':0, 'tue':1, 'wed':2, 'thu':3, 'fri':4, 'sat':5, 'sun':6, - # German - 'die':1, 'mit':2, 'don':3, 'fre':4, 'sam':5, 'son':6, - # French - 'lun':0, 'mar':1, 'mer':2, 'jeu':3, 'ven':4, 'sam':5, 'dim':6, - # Spanish - 'mie':2, 'jue':3, 'vie':4, 'sab':5, 'dom':6, - # Portuguese - 'pri':0, 'seg':1, 'ter':2, 'cua':3, 'qui':4, - } -_relday = '(?:\((?P[-+]?\d+)\))' - -_hour = '(?P[012]?\d)' -_minute = '(?P[0-6]\d)' -_second = '(?P[0-6]\d(?:[.,]\d+)?)' - -_days = '(?P\d*\d(?:[.,]\d+)?)' -_hours = '(?P\d*\d(?:[.,]\d+)?)' -_minutes = '(?P\d*\d(?:[.,]\d+)?)' -_seconds = '(?P\d*\d(?:[.,]\d+)?)' - -_reldays = '(?:\((?P[-+]?\d+(?:[.,]\d+)?)\))' -_relhours = '(?:\((?P[-+]?\d+(?:[.,]\d+)?)\))' -_relminutes = '(?:\((?P[-+]?\d+(?:[.,]\d+)?)\))' -_relseconds = '(?:\((?P[-+]?\d+(?:[.,]\d+)?)\))' - -_sign = '(?:(?P[-+]) *)' 
-_week = 'W(?P<week>\d?\d)' -_zone = '(?P<zone>[A-Z]+|[+-]\d\d?:?(?:\d\d)?)' -_ampm = '(?P<ampm>[ap][m.]+)' - -_time = (_hour + ':' + _minute + '(?::' + _second + '|[^:]|$) *' - + _ampm + '? *' + _zone + '?') -_isotime = _hour + ':?' + _minute + ':?' + _second + '? *' + _zone + '?' - -_yeardate = _year -_weekdate = _year + '-?(?:' + _week + '-?' + _day + '?)?' -_eurodate = _day + '\.' + _month + '\.' + _year_epoch + '?' -_usdate = _month + '/' + _day + '(?:/' + _year_epoch + '|[^/]|$)' -_altusdate = _month + '-' + _day + '-' + _fullyear_epoch -_isodate = _year + '-' + _month + '-?' + _day + '?(?!:)' -_altisodate = _year + _fullmonth + _fullday + '(?!:)' -_usisodate = _fullyear + '/' + _fullmonth + '/' + _fullday -_litdate = ('(?:'+ _litday + ',? )? *' + - _usday + ' *' + - '[- ] *(?:' + _litmonth + '|'+ _month +') *[- ] *' + - _year_epoch + '?') -_altlitdate = ('(?:'+ _litday + ',? )? *' + - _litmonth + '[ ,.a-z]+' + - _usday + - '(?:[ a-z]+' + _year_epoch + ')?') -_eurlitdate = ('(?:'+ _litday + ',?[ a-z]+)? *' + - '(?:'+ _usday + '[ a-z]+)? *' + - _litmonth + - '(?:[ ,.a-z]+' + _year_epoch + ')?') - -_relany = '[*%?a-zA-Z]+' - -_relisodate = ('(?:(?:' + _relany + '|' + _year + '|' + _relyear + ')-' + - '(?:' + _relany + '|' + _month + '|' + _relmonth + ')-' + - '(?:' + _relany + '|' + _day + '|' + _relday + '))') - -_asctime = ('(?:'+ _litday + ',? )? *' + - _usday + ' *' + - '[- ] *(?:' + _litmonth + '|'+ _month +') *[- ]' + - '(?:[0-9: ]+)' + - _year_epoch + '?') - -_relisotime = ('(?:(?:' + _relany + '|' + _hour + '|' + _relhours + '):' + - '(?:' + _relany + '|' + _minute + '|' + _relminutes + ')' + - '(?::(?:' + _relany + '|' + _second + '|' + _relseconds + '))?)') - -_isodelta1 = (_sign + '?' + - _days + ':' + _hours + ':' + _minutes + ':' + _seconds) -_isodelta2 = (_sign + '?' + - _hours + ':' + _minutes + ':' + _seconds) -_isodelta3 = (_sign + '?' + - _hours + ':' + _minutes) -_litdelta = (_sign + '?' + - '(?:' + _days + ' *d[a-z]*[,; ]*)?'
+ - '(?:' + _hours + ' *h[a-z]*[,; ]*)?' + - '(?:' + _minutes + ' *m[a-z]*[,; ]*)?' + - '(?:' + _seconds + ' *s[a-z]*[,; ]*)?') -_litdelta2 = (_sign + '?' + - '(?:' + _days + ' *d[a-z]*[,; ]*)?' + - _hours + ':' + _minutes + '(?::' + _seconds + ')?') - -_timeRE = re.compile(_time, re.I) -_isotimeRE = re.compile(_isotime, re.I) -_isodateRE = re.compile(_isodate, re.I) -_altisodateRE = re.compile(_altisodate, re.I) -_usisodateRE = re.compile(_usisodate, re.I) -_yeardateRE = re.compile(_yeardate, re.I) -_eurodateRE = re.compile(_eurodate, re.I) -_usdateRE = re.compile(_usdate, re.I) -_altusdateRE = re.compile(_altusdate, re.I) -_litdateRE = re.compile(_litdate, re.I) -_altlitdateRE = re.compile(_altlitdate, re.I) -_eurlitdateRE = re.compile(_eurlitdate, re.I) -_relisodateRE = re.compile(_relisodate, re.I) -_asctimeRE = re.compile(_asctime, re.I) -_isodelta1RE = re.compile(_isodelta1) -_isodelta2RE = re.compile(_isodelta2) -_isodelta3RE = re.compile(_isodelta3) -_litdeltaRE = re.compile(_litdelta) -_litdelta2RE = re.compile(_litdelta2) -_relisotimeRE = re.compile(_relisotime, re.I) - -# Available date parsers -_date_formats = ('euro', - 'usiso', 'us', 'altus', - 'iso', 'altiso', - 'lit', 'altlit', 'eurlit', - 'year', 'unknown') - -# Available time parsers -_time_formats = ('standard', - 'iso', - 'unknown') - -_zoneoffset = ('(?:' - '(?P<zonesign>[+-])?' - '(?P<hours>\d\d?)' - ':?' - '(?P<minutes>\d\d)?' - '(?P<extra>\d+)?'
- ')' - ) - -_zoneoffsetRE = re.compile(_zoneoffset) - -_zonetable = { - # Timezone abbreviations - # Std Summer - - # Standards - 'UT':0, - 'UTC':0, - 'GMT':0, - - # A few common timezone abbreviations - 'CET':1, 'CEST':2, 'CETDST':2, # Central European - 'MET':1, 'MEST':2, 'METDST':2, # Mean European - 'MEZ':1, 'MESZ':2, # Mitteleuropäische Zeit - 'EET':2, 'EEST':3, 'EETDST':3, # Eastern Europe - 'WET':0, 'WEST':1, 'WETDST':1, # Western Europe - 'MSK':3, 'MSD':4, # Moscow - 'IST':5.5, # India - 'JST':9, # Japan - 'KST':9, # Korea - 'HKT':8, # Hong Kong - - # US time zones - 'AST':-4, 'ADT':-3, # Atlantic - 'EST':-5, 'EDT':-4, # Eastern - 'CST':-6, 'CDT':-5, # Central - 'MST':-7, 'MDT':-6, # Midwestern - 'PST':-8, 'PDT':-7, # Pacific - - # Australian time zones - 'CAST':9.5, 'CADT':10.5, # Central - 'EAST':10, 'EADT':11, # Eastern - 'WAST':8, 'WADT':9, # Western - 'SAST':9.5, 'SADT':10.5, # Southern - - # US military time zones - 'Z': 0, - 'A': 1, - 'B': 2, - 'C': 3, - 'D': 4, - 'E': 5, - 'F': 6, - 'G': 7, - 'H': 8, - 'I': 9, - 'K': 10, - 'L': 11, - 'M': 12, - 'N':-1, - 'O':-2, - 'P':-3, - 'Q':-4, - 'R':-5, - 'S':-6, - 'T':-7, - 'U':-8, - 'V':-9, - 'W':-10, - 'X':-11, - 'Y':-12 - } - - -def utc_offset(zone): - """ utc_offset(zonestring) - - Return the UTC time zone offset in minutes. - - zone must be string and can either be given as +-HH:MM, - +-HHMM, +-HH numeric offset or as time zone - abbreviation. Daylight saving time must be encoded into the - zone offset. - - Timezone abbreviations are treated case-insensitive. 
- - """ - if not zone: - return 0 - uzone = zone.upper() - if uzone in _zonetable: - return _zonetable[uzone]*60 - offset = _zoneoffsetRE.match(zone) - if not offset: - raise ValueError,'wrong format or unkown time zone: "%s"' % zone - zonesign,hours,minutes,extra = offset.groups() - if extra: - raise ValueError,'illegal time zone offset: "%s"' % zone - offset = int(hours or 0) * 60 + int(minutes or 0) - if zonesign == '-': - offset = -offset - return offset - -def add_century(year): - - """ Sliding window approach to the Y2K problem: adds a suitable - century to the given year and returns it as integer. - - The window used depends on the current year. If adding the current - century to the given year gives a year within the range - current_year-70...current_year+30 [both inclusive], then the - current century is added. Otherwise the century (current + 1 or - - 1) producing the least difference is chosen. - - """ - - current_year=dt.datetime.now().year - current_century=(dt.datetime.now().year / 100) * 100 - - if year > 99: - # Take it as-is - return year - year = year + current_century - diff = year - current_year - if diff >= -70 and diff <= 30: - return year - elif diff < -70: - return year + 100 - else: - return year - 100 - - -def _parse_date(text): - """ - Parses the date part given in text and returns a tuple - (text,day,month,year,style) with the following meanings: - - * text gives the original text without the date part - - * day,month,year give the parsed date - - * style gives information about which parser was successful: - 'euro' - the European date parser - 'us' - the US date parser - 'altus' - the alternative US date parser (with '-' instead of '/') - 'iso' - the ISO date parser - 'altiso' - the alternative ISO date parser (without '-') - 'usiso' - US style ISO date parser (yyyy/mm/dd) - 'lit' - the US literal date parser - 'altlit' - the alternative US literal date parser - 'eurlit' - the Eurpean literal date parser - 'unknown' - no date part was 
found, defaultdate was used - - Formats may be set to a tuple of style strings specifying which of the above - parsers to use and in which order to try them. - Default is to try all of them in the above order. - - ``defaultdate`` provides the defaults to use in case no date part is found. - Most other parsers default to the current year January 1 if some of these - date parts are missing. - - If ``'unknown'`` is not given in formats and the date cannot be parsed, - a :exc:`ValueError` is raised. - - """ - match = None - style = '' - - formats = _date_formats - - us_formats=('us', 'altus') - iso_formats=('iso', 'altiso', 'usiso') - - now=dt.datetime.now - - # Apply parsers in the order given in formats - for format in formats: - - if format == 'euro': - # European style date - match = _eurodateRE.search(text) - if match is not None: - day,month,year,epoch = match.groups() - if year: - if len(year) == 2: - # Y2K problem: - year = add_century(int(year)) - else: - year = int(year) - else: - defaultdate = now() - year = defaultdate.year - if epoch and 'B' in epoch: - year = -year + 1 - month = int(month) - day = int(day) - # Could have mistaken euro format for us style date - # which uses month, day order - if month > 12 or month == 0: - match = None - continue - break - - elif format == 'year': - # just a year specified - match = _yeardateRE.match(text) - if match is not None: - year = match.groups()[0] - if year: - if len(year) == 2: - # Y2K problem: - year = add_century(int(year)) - else: - year = int(year) - else: - defaultdate = now() - year = defaultdate.year - day = 1 - month = 1 - break - - elif format in iso_formats: - # ISO style date - if format == 'iso': - match = _isodateRE.search(text) - elif format == 'altiso': - match = _altisodateRE.search(text) - # Avoid mistaking ISO time parts ('Thhmmss') for dates - if match is not None: - left, right = match.span() - if left > 0 and \ - text[left - 1:left] == 'T': - match = None - continue - else: - match = 
_usisodateRE.search(text) - if match is not None: - year,month,day = match.groups() - if len(year) == 2: - # Y2K problem: - year = add_century(int(year)) - else: - year = int(year) - # Default to January 1st - if not month: - month = 1 - else: - month = int(month) - if not day: - day = 1 - else: - day = int(day) - break - - elif format in us_formats: - # US style date - if format == 'us': - match = _usdateRE.search(text) - else: - match = _altusdateRE.search(text) - if match is not None: - month,day,year,epoch = match.groups() - if year: - if len(year) == 2: - # Y2K problem: - year = add_century(int(year)) - else: - year = int(year) - else: - defaultdate = now() - year = defaultdate.year - if epoch and 'B' in epoch: - year = -year + 1 - # Default to 1 if no day is given - if day: - day = int(day) - else: - day = 1 - month = int(month) - # Could have mistaken us format for euro style date - # which uses day, month order - if month > 12 or month == 0: - match = None - continue - break - - elif format == 'lit': - # US style literal date - match = _litdateRE.search(text) - if match is not None: - litday,day,litmonth,month,year,epoch = match.groups() - break - - elif format == 'altlit': - # Alternative US style literal date - match = _altlitdateRE.search(text) - if match is not None: - litday,litmonth,day,year,epoch = match.groups() - month = '' - break - - elif format == 'eurlit': - # European style literal date - match = _eurlitdateRE.search(text) - if match is not None: - litday,day,litmonth,year,epoch = match.groups() - month = '' - break - - elif format == 'unknown': - # No date part: use defaultdate - defaultdate = now() - year = defaultdate.year - month = defaultdate.month - day = defaultdate.day - style = format - break - - # Check success - if match is not None: - # Remove date from text - left, right = match.span() - if 0 and _debug: - print 'parsed date:',repr(text[left:right]),\ - 'giving:',year,month,day - text = text[:left] + text[right:] - style = format 
- - elif not style: - # Not recognized: raise an error - raise ValueError, 'unknown date format: "%s"' % text - - # Literal date post-processing - if style in ('lit', 'altlit', 'eurlit'): - if 0 and _debug: print match.groups() - # Default to current year, January 1st - if not year: - defaultdate = now() - year = defaultdate.year - else: - if len(year) == 2: - # Y2K problem: - year = add_century(int(year)) - else: - year = int(year) - if epoch and 'B' in epoch: - year = -year + 1 - if litmonth: - litmonth = litmonth.lower() - try: - month = litmonthtable[litmonth] - except KeyError: - raise ValueError,\ - 'wrong month name: "%s"' % litmonth - elif month: - month = int(month) - else: - month = 1 - if day: - day = int(day) - else: - day = 1 - - #print '_parse_date:',text,day,month,year,style - return text,day,month,year,style - -def _parse_time(text): - - """ Parses a time part given in text and returns a tuple - (text,hour,minute,second,offset,style) with the following - meanings: - - * text gives the original text without the time part - * hour,minute,second give the parsed time - * offset gives the time zone UTC offset - * style gives information about which parser was successful: - 'standard' - the standard parser - 'iso' - the ISO time format parser - 'unknown' - no time part was found - - formats may be set to a tuple specifying the parsers to use: - 'standard' - standard time format with ':' delimiter - 'iso' - ISO time format (superset of 'standard') - 'unknown' - default to 0:00:00, 0 zone offset - - If 'unknown' is not given in formats and the time cannot be - parsed, a ValueError is raised. 
- - """ - match = None - style = '' - - formats=_time_formats - - # Apply parsers in the order given in formats - for format in formats: - - # Standard format - if format == 'standard': - match = _timeRE.search(text) - if match is not None: - hour,minute,second,ampm,zone = match.groups() - style = 'standard' - break - - # ISO format - if format == 'iso': - match = _isotimeRE.search(text) - if match is not None: - hour,minute,second,zone = match.groups() - ampm = None - style = 'iso' - break - - # Default handling - elif format == 'unknown': - hour,minute,second,offset = 0,0,0.0,0 - style = 'unknown' - break - - if not style: - # If no default handling should be applied, raise an error - raise ValueError, 'unknown time format: "%s"' % text - - # Post-processing - if match is not None: - - if zone: - # Convert to UTC offset - offset = utc_offset(zone) - else: - offset = 0 - - hour = int(hour) - if ampm: - if ampm[0] in ('p', 'P'): - # 12pm = midday - if hour < 12: - hour = hour + 12 - else: - # 12am = midnight - if hour >= 12: - hour = hour - 12 - if minute: - minute = int(minute) - else: - minute = 0 - if not second: - second = 0.0 - else: - if ',' in second: - second = second.replace(',', '.') - second = float(second) - - # Remove time from text - left,right = match.span() - if 0 and _debug: - print 'parsed time:',repr(text[left:right]),\ - 'giving:',hour,minute,second,offset - text = text[:left] + text[right:] - - #print '_parse_time:',text,hour,minute,second,offset,style - return text,hour,minute,second,offset,style - -### - -def datetime_from_string(text): - - """ datetime_from_string(text, [formats, defaultdate]) - - Returns a datetime instance reflecting the date and time given - in text. In case a timezone is given, the returned instance - will point to the corresponding UTC time value. Otherwise, the - value is set as given in the string. 
- - formats may be set to a tuple of strings specifying which of - the following parsers to use and in which order to try - them. Default is to try all of them in the order given below: - - 'euro' - the European date parser - 'us' - the US date parser - 'altus' - the alternative US date parser (with '-' instead of '/') - 'iso' - the ISO date parser - 'altiso' - the alternative ISO date parser (without '-') - 'usiso' - US style ISO date parser (yyyy/mm/dd) - 'lit' - the US literal date parser - 'altlit' - the alternative US literal date parser - 'eurlit' - the European literal date parser - 'unknown' - if no date part is found, use defaultdate - - defaultdate provides the defaults to use in case no date part - is found. Most of the parsers default to the current year - January 1 if some of these date parts are missing. - - If 'unknown' is not given in formats and the date cannot - be parsed, a ValueError is raised. - - time_formats may be set to a tuple of strings specifying which - of the following parsers to use and in which order to try - them. Default is to try all of them in the order given below: - - 'standard' - standard time format HH:MM:SS (with ':' delimiter) - 'iso' - ISO time format (superset of 'standard') - 'unknown' - default to 00:00:00 in case the time format - cannot be parsed - - Defaults to 00:00:00.00 for time parts that are not included - in the textual representation. - - If 'unknown' is not given in time_formats and the time cannot - be parsed, a ValueError is raised.
- - """ - origtext = text - - text,hour,minute,second,offset,timestyle = _parse_time(origtext) - text,day,month,year,datestyle = _parse_date(text) - - if 0 and _debug: - print 'tried time/date on %s, date=%s, time=%s' % (origtext, - datestyle, - timestyle) - - # If this fails, try the ISO order (date, then time) - if timestyle in ('iso', 'unknown'): - text,day,month,year,datestyle = _parse_date(origtext) - text,hour,minute,second,offset,timestyle = _parse_time(text) - if 0 and _debug: - print 'tried ISO on %s, date=%s, time=%s' % (origtext, - datestyle, - timestyle) - - try: - microsecond = int(round(1000000 * (second % 1))) - second = int(second) - return dt.datetime(year,month,day,hour,minute,second, microsecond) - \ - dt.timedelta(minutes=offset) - except ValueError, why: - raise RangeError,\ - 'Failed to parse "%s": %s' % (origtext, why) - -def date_from_string(text): - - """ date_from_string(text, [formats, defaultdate]) - - Returns a datetime instance reflecting the date given in - text. A possibly included time part is ignored. - - formats and defaultdate work just like for - datetime_from_string(). - - """ - _text,day,month,year,datestyle = _parse_date(text) - - try: - return dt.datetime(year,month,day) - except ValueError, why: - raise RangeError,\ - 'Failed to parse "%s": %s' % (text, why) - -def validateDateTimeString(text): - - """ validateDateTimeString(text, [formats, defaultdate]) - - Validates the given text and returns 1/0 depending on whether - text includes parseable date and time values or not. - - formats works just like for datetime_from_string() and defines - the order of date/time parsers to apply. It defaults to the - same list of parsers as for datetime_from_string(). - - XXX Undocumented ! 
- - """ - try: - datetime_from_string(text) - except ValueError, why: - return 0 - return 1 - - -def validateDateString(text): - - """ validateDateString(text, [formats, defaultdate]) - - Validates the given text and returns 1/0 depending on whether - text includes a parseable date value or not. - - formats works just like for datetime_from_string() and defines - the order of date/time parsers to apply. It defaults to the - same list of parsers as for datetime_from_string(). - - XXX Undocumented ! - - """ - try: - date_from_string(text) - except ValueError, why: - return 0 - return 1 - -### Tests - -def _test(): - - import sys - - t = dt.datetime.now() - _date = t.strftime('%Y-%m-%d') - - print 'Testing DateTime Parser...' - - l = [ - - # Literal formats - ('Sun Nov 6 08:49:37 1994', '1994-11-06 08:49:37.00'), - ('sun nov 6 08:49:37 1994', '1994-11-06 08:49:37.00'), - ('sUN NOV 6 08:49:37 1994', '1994-11-06 08:49:37.00'), - ('Sunday, 06-Nov-94 08:49:37 GMT', '1994-11-06 08:49:37.00'), - ('Sun, 06 Nov 1994 08:49:37 GMT', '1994-11-06 08:49:37.00'), - ('06-Nov-94 08:49:37', '1994-11-06 08:49:37.00'), - ('06-Nov-94', '1994-11-06 00:00:00.00'), - ('06-NOV-94', '1994-11-06 00:00:00.00'), - ('November 19 08:49:37', '%s-11-19 08:49:37.00' % t.year), - ('Nov. 9', '%s-11-09 00:00:00.00' % t.year), - ('Sonntag, der 6. November 1994, 08:49:37 GMT', '1994-11-06 08:49:37.00'), - ('6. November 2001, 08:49:37', '2001-11-06 08:49:37.00'), - ('sep 6', '%s-09-06 00:00:00.00' % t.year), - ('sep 6 2000', '2000-09-06 00:00:00.00'), - ('September 29', '%s-09-29 00:00:00.00' % t.year), - ('Sep. 
29', '%s-09-29 00:00:00.00' % t.year), - ('6 sep', '%s-09-06 00:00:00.00' % t.year), - ('29 September', '%s-09-29 00:00:00.00' % t.year), - ('29 Sep.', '%s-09-29 00:00:00.00' % t.year), - ('sep 6 2001', '2001-09-06 00:00:00.00'), - ('Sep 6, 2001', '2001-09-06 00:00:00.00'), - ('September 6, 2001', '2001-09-06 00:00:00.00'), - ('sep 6 01', '2001-09-06 00:00:00.00'), - ('Sep 6, 01', '2001-09-06 00:00:00.00'), - ('September 6, 01', '2001-09-06 00:00:00.00'), - ('30 Apr 2006 20:19:00', '2006-04-30 20:19:00.00'), - - # ISO formats - ('1994-11-06 08:49:37', '1994-11-06 08:49:37.00'), - ('010203', '2001-02-03 00:00:00.00'), - ('2001-02-03 00:00:00.00', '2001-02-03 00:00:00.00'), - ('2001-02 00:00:00.00', '2001-02-01 00:00:00.00'), - ('2001-02-03', '2001-02-03 00:00:00.00'), - ('2001-02', '2001-02-01 00:00:00.00'), - ('20000824/2300', '2000-08-24 23:00:00.00'), - ('20000824/0102', '2000-08-24 01:02:00.00'), - ('20000824', '2000-08-24 00:00:00.00'), - ('20000824/020301', '2000-08-24 02:03:01.00'), - ('20000824 020301', '2000-08-24 02:03:01.00'), - ('20000824T020301', '2000-08-24 02:03:01.00'), - ('20000824 020301', '2000-08-24 02:03:01.00'), - ('2000-08-24 02:03:01.00', '2000-08-24 02:03:01.00'), - ('T020311', '%s 02:03:11.00' % _date), - ('2003-12-9', '2003-12-09 00:00:00.00'), - ('03-12-9', '2003-12-09 00:00:00.00'), - ('003-12-9', '0003-12-09 00:00:00.00'), - ('0003-12-9', '0003-12-09 00:00:00.00'), - ('2003-1-9', '2003-01-09 00:00:00.00'), - ('03-1-9', '2003-01-09 00:00:00.00'), - ('003-1-9', '0003-01-09 00:00:00.00'), - ('0003-1-9', '0003-01-09 00:00:00.00'), - - # US formats - ('06/11/94 08:49:37', '1994-06-11 08:49:37.00'), - ('11/06/94 08:49:37', '1994-11-06 08:49:37.00'), - ('9/23/2001', '2001-09-23 00:00:00.00'), - ('9-23-2001', '2001-09-23 00:00:00.00'), - ('9/6', '%s-09-06 00:00:00.00' % t.year), - ('09/6', '%s-09-06 00:00:00.00' % t.year), - ('9/06', '%s-09-06 00:00:00.00' % t.year), - ('09/06', '%s-09-06 00:00:00.00' % t.year), - ('9/6/2001', '2001-09-06 
00:00:00.00'), - ('09/6/2001', '2001-09-06 00:00:00.00'), - ('9/06/2001', '2001-09-06 00:00:00.00'), - ('09/06/2001', '2001-09-06 00:00:00.00'), - ('9-6-2001', '2001-09-06 00:00:00.00'), - ('09-6-2001', '2001-09-06 00:00:00.00'), - ('9-06-2001', '2001-09-06 00:00:00.00'), - ('09-06-2001', '2001-09-06 00:00:00.00'), - ('2002/05/28 13:10:56.114700 GMT+2', '2002-05-28 13:10:56.114700'), - ('1970/01/01', '1970-01-01 00:00:00.00'), - ('20021025 12:00 PM', '2002-10-25 12:00:00.00'), - ('20021025 12:30 PM', '2002-10-25 12:30:00.00'), - ('20021025 12:00 AM', '2002-10-25 00:00:00.00'), - ('20021025 12:30 AM', '2002-10-25 00:30:00.00'), - ('20021025 1:00 PM', '2002-10-25 13:00:00.00'), - ('20021025 2:00 AM', '2002-10-25 02:00:00.00'), - ('Thursday, February 06, 2003 12:40 PM', '2003-02-06 12:40:00.00'), - ('Mon, 18 Sep 2006 23:03:00', '2006-09-18 23:03:00.00'), - - # European formats - ('6.11.2001, 08:49:37', '2001-11-06 08:49:37.00'), - ('06.11.2001, 08:49:37', '2001-11-06 08:49:37.00'), - ('06.11. 
08:49:37', '%s-11-06 08:49:37.00' % t.year), - #('21/12/2002', '2002-12-21 00:00:00.00'), - #('21/08/2002', '2002-08-21 00:00:00.00'), - #('21-08-2002', '2002-08-21 00:00:00.00'), - #('13/01/03', '2003-01-13 00:00:00.00'), - #('13/1/03', '2003-01-13 00:00:00.00'), - #('13/1/3', '2003-01-13 00:00:00.00'), - #('13/01/3', '2003-01-13 00:00:00.00'), - - # Time only formats - ('01:03', '%s 01:03:00.00' % _date), - ('01:03:11', '%s 01:03:11.00' % _date), - ('01:03:11.50', '%s 01:03:11.500000' % _date), - ('01:03:11.50 AM', '%s 01:03:11.500000' % _date), - ('01:03:11.50 PM', '%s 13:03:11.500000' % _date), - ('01:03:11.50 a.m.', '%s 01:03:11.500000' % _date), - ('01:03:11.50 p.m.', '%s 13:03:11.500000' % _date), - - # Invalid formats - ('6..2001, 08:49:37', '%s 08:49:37.00' % _date), - ('9//2001', 'ignore'), - ('06--94 08:49:37', 'ignore'), - ('20-03 00:00:00.00', 'ignore'), - ('9/2001', 'ignore'), - ('9-6', 'ignore'), - ('09-6', 'ignore'), - ('9-06', 'ignore'), - ('09-06', 'ignore'), - ('20000824/23', 'ignore'), - ('November 1994 08:49:37', 'ignore'), - ] - - # Add Unicode versions - try: - unicode - except NameError: - pass - else: - k = [] - for text, result in l: - k.append((unicode(text), result)) - l.extend(k) - - for text, reference in l: - try: - value = datetime_from_string(text) - except: - if reference is None: - continue - else: - value = str(sys.exc_info()[1]) - valid_datetime = validateDateTimeString(text) - valid_date = validateDateString(text) - - if reference[-3:] == '.00': reference = reference[:-3] - - if str(value) != reference and \ - not reference == 'ignore': - print 'Failed to parse "%s"' % text - print ' expected: %s' % (reference or '') - print ' parsed: %s' % value - elif _debug: - print 'Parsed "%s" successfully' % text - if _debug: - if not valid_datetime: - print ' "%s" failed date/time validation' % text - if not valid_date: - print ' "%s" failed date validation' % text - - et = dt.datetime.now() - print 'done. 
(after %f seconds)' % ((et-t).seconds) - -if __name__ == '__main__': - _test() diff --git a/pythonPackages/numpy/numpy/core/arrayprint.py b/pythonPackages/numpy/numpy/core/arrayprint.py deleted file mode 100755 index 422637bfbc..0000000000 --- a/pythonPackages/numpy/numpy/core/arrayprint.py +++ /dev/null @@ -1,526 +0,0 @@ -"""Array printing function - -$Id: arrayprint.py,v 1.9 2005/09/13 13:58:44 teoliphant Exp $ -""" -__all__ = ["array2string", "set_printoptions", "get_printoptions"] -__docformat__ = 'restructuredtext' - -# -# Written by Konrad Hinsen -# last revision: 1996-3-13 -# modified by Jim Hugunin 1997-3-3 for repr's and str's (and other details) -# and by Perry Greenfield 2000-4-1 for numarray -# and by Travis Oliphant 2005-8-22 for numpy - -import sys -import numerictypes as _nt -from umath import maximum, minimum, absolute, not_equal, isnan, isinf -from multiarray import format_longfloat -from fromnumeric import ravel - - -def product(x, y): return x*y - -_summaryEdgeItems = 3 # repr N leading and trailing items of each dimension -_summaryThreshold = 1000 # total items > triggers array summarization - -_float_output_precision = 8 -_float_output_suppress_small = False -_line_width = 75 -_nan_str = 'nan' -_inf_str = 'inf' - -if sys.version_info[0] >= 3: - from functools import reduce - -def set_printoptions(precision=None, threshold=None, edgeitems=None, - linewidth=None, suppress=None, - nanstr=None, infstr=None): - """ - Set printing options. - - These options determine the way floating point numbers, arrays and - other NumPy objects are displayed. - - Parameters - ---------- - precision : int, optional - Number of digits of precision for floating point output (default 8). - threshold : int, optional - Total number of array elements which trigger summarization - rather than full repr (default 1000). - edgeitems : int, optional - Number of array items in summary at beginning and end of - each dimension (default 3). 
- linewidth : int, optional - The number of characters per line for the purpose of inserting - line breaks (default 75). - suppress : bool, optional - Whether or not to suppress printing of small floating point values - using scientific notation (default False). - nanstr : str, optional - String representation of floating point not-a-number (default nan). - infstr : str, optional - String representation of floating point infinity (default inf). - - See Also - -------- - get_printoptions, set_string_function - - Examples - -------- - Floating point precision can be set: - - >>> np.set_printoptions(precision=4) - >>> print np.array([1.123456789]) - [ 1.1235] - - Long arrays can be summarised: - - >>> np.set_printoptions(threshold=5) - >>> print np.arange(10) - [0 1 2 ..., 7 8 9] - - Small results can be suppressed: - - >>> eps = np.finfo(float).eps - >>> x = np.arange(4.) - >>> x**2 - (x + eps)**2 - array([ -4.9304e-32, -4.4409e-16, 0.0000e+00, 0.0000e+00]) - >>> np.set_printoptions(suppress=True) - >>> x**2 - (x + eps)**2 - array([-0., -0., 0., 0.]) - - To put back the default options, you can use: - - >>> np.set_printoptions(edgeitems=3,infstr='Inf', - ... linewidth=75, nanstr='NaN', precision=8, - ... suppress=False, threshold=1000) - """ - - global _summaryThreshold, _summaryEdgeItems, _float_output_precision, \ - _line_width, _float_output_suppress_small, _nan_str, _inf_str - if linewidth is not None: - _line_width = linewidth - if threshold is not None: - _summaryThreshold = threshold - if edgeitems is not None: - _summaryEdgeItems = edgeitems - if precision is not None: - _float_output_precision = precision - if suppress is not None: - _float_output_suppress_small = not not suppress - if nanstr is not None: - _nan_str = nanstr - if infstr is not None: - _inf_str = infstr - -def get_printoptions(): - """ - Return the current print options.
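The `set_printoptions`/`get_printoptions` pair above follows a common module-level configuration pattern: the options live in module globals, the setter only overwrites the arguments that were explicitly passed (i.e. not `None`), and the getter returns a snapshot. A minimal pure-Python sketch of that pattern (all names here are illustrative, not NumPy's):

```python
# Sketch of the module-global options pattern used by
# set_printoptions()/get_printoptions(). Names are illustrative.
_options = {
    'precision': 8,
    'threshold': 1000,
    'edgeitems': 3,
    'linewidth': 75,
    'suppress': False,
}

def set_options(**kwargs):
    """Update only the options that were explicitly passed (not None)."""
    for key, value in kwargs.items():
        if key not in _options:
            raise ValueError('unknown option: %r' % (key,))
        if value is not None:
            _options[key] = value

def get_options():
    """Return a snapshot so callers cannot mutate the live settings."""
    return dict(_options)
```

Because the getter copies the dict, mutating its return value does not change the live settings, and options not mentioned in a `set_options` call keep their previous values.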
- - Returns - ------- - print_opts : dict - Dictionary of current print options with keys - - - precision : int - - threshold : int - - edgeitems : int - - linewidth : int - - suppress : bool - - nanstr : str - - infstr : str - - For a full description of these options, see `set_printoptions`. - - See Also - -------- - set_printoptions, set_string_function - - """ - d = dict(precision=_float_output_precision, - threshold=_summaryThreshold, - edgeitems=_summaryEdgeItems, - linewidth=_line_width, - suppress=_float_output_suppress_small, - nanstr=_nan_str, - infstr=_inf_str) - return d - -def _leading_trailing(a): - import numeric as _nc - if a.ndim == 1: - if len(a) > 2*_summaryEdgeItems: - b = _nc.concatenate((a[:_summaryEdgeItems], - a[-_summaryEdgeItems:])) - else: - b = a - else: - if len(a) > 2*_summaryEdgeItems: - l = [_leading_trailing(a[i]) for i in range( - min(len(a), _summaryEdgeItems))] - l.extend([_leading_trailing(a[-i]) for i in range( - min(len(a), _summaryEdgeItems),0,-1)]) - else: - l = [_leading_trailing(a[i]) for i in range(0, len(a))] - b = _nc.concatenate(tuple(l)) - return b - -def _boolFormatter(x): - if x: return ' True' - else: return 'False' - - -def _array2string(a, max_line_width, precision, suppress_small, separator=' ', - prefix=""): - - if max_line_width is None: - max_line_width = _line_width - - if precision is None: - precision = _float_output_precision - - if suppress_small is None: - suppress_small = _float_output_suppress_small - - if a.size > _summaryThreshold: - summary_insert = "..., " - data = _leading_trailing(a) - else: - summary_insert = "" - data = ravel(a) - - try: - format_function = a._format - except AttributeError: - dtypeobj = a.dtype.type - if issubclass(dtypeobj, _nt.bool_): - # make sure True and False line up. 
- format_function = _boolFormatter - elif issubclass(dtypeobj, _nt.integer): - max_str_len = max(len(str(maximum.reduce(data))), - len(str(minimum.reduce(data)))) - format = '%' + str(max_str_len) + 'd' - format_function = lambda x: _formatInteger(x, format) - elif issubclass(dtypeobj, _nt.floating): - if issubclass(dtypeobj, _nt.longfloat): - format_function = _longfloatFormatter(precision) - else: - format_function = FloatFormat(data, precision, suppress_small) - elif issubclass(dtypeobj, _nt.complexfloating): - if issubclass(dtypeobj, _nt.clongfloat): - format_function = _clongfloatFormatter(precision) - else: - format_function = ComplexFormat(data, precision, suppress_small) - elif issubclass(dtypeobj, _nt.unicode_) or \ - issubclass(dtypeobj, _nt.string_): - format_function = repr - else: - format_function = str - - next_line_prefix = " " # skip over "[" - next_line_prefix += " "*len(prefix) # skip over array( - - lst = _formatArray(a, format_function, len(a.shape), max_line_width, - next_line_prefix, separator, - _summaryEdgeItems, summary_insert)[:-1] - - return lst - -def _convert_arrays(obj): - import numeric as _nc - newtup = [] - for k in obj: - if isinstance(k, _nc.ndarray): - k = k.tolist() - elif isinstance(k, tuple): - k = _convert_arrays(k) - newtup.append(k) - return tuple(newtup) - - -def array2string(a, max_line_width = None, precision = None, - suppress_small = None, separator=' ', prefix="", - style=repr): - """ - Return a string representation of an array. - - Parameters - ---------- - a : ndarray - Input array. - max_line_width : int, optional - The maximum number of columns the string should span. Newline - characters splits the string appropriately after array elements. - precision : int, optional - Floating point precision. Default is the current printing - precision (usually 8), which can be altered using `set_printoptions`. - suppress_small : bool, optional - Represent very small numbers as zero. 
A number is "very small" if it - is smaller than the current printing precision. - separator : str, optional - Inserted between elements. - prefix : str, optional - An array is typically printed as:: - - 'prefix(' + array2string(a) + ')' - - The length of the prefix string is used to align the - output correctly. - style : function, optional - A function that accepts an ndarray and returns a string. Used only - when the shape of `a` is equal to (). - - Returns - ------- - array_str : str - String representation of the array. - - See Also - -------- - array_str, array_repr, set_printoptions - - Examples - -------- - >>> x = np.array([1e-16,1,2,3]) - >>> print np.array2string(x, precision=2, separator=',', - ... suppress_small=True) - [ 0., 1., 2., 3.] - - """ - - if a.shape == (): - x = a.item() - try: - lst = a._format(x) - except AttributeError: - if isinstance(x, tuple): - x = _convert_arrays(x) - lst = style(x) - elif reduce(product, a.shape) == 0: - # treat as a null array if any of shape elements == 0 - lst = "[]" - else: - lst = _array2string(a, max_line_width, precision, suppress_small, - separator, prefix) - return lst - -def _extendLine(s, line, word, max_line_len, next_line_prefix): - if len(line.rstrip()) + len(word.rstrip()) >= max_line_len: - s += line.rstrip() + "\n" - line = next_line_prefix - line += word - return s, line - - -def _formatArray(a, format_function, rank, max_line_len, - next_line_prefix, separator, edge_items, summary_insert): - """formatArray is designed for two modes of operation: - - 1. Full output - - 2. 
Summarized output - - """ - if rank == 0: - obj = a.item() - if isinstance(obj, tuple): - obj = _convert_arrays(obj) - return str(obj) - - if summary_insert and 2*edge_items < len(a): - leading_items, trailing_items, summary_insert1 = \ - edge_items, edge_items, summary_insert - else: - leading_items, trailing_items, summary_insert1 = 0, len(a), "" - - if rank == 1: - s = "" - line = next_line_prefix - for i in xrange(leading_items): - word = format_function(a[i]) + separator - s, line = _extendLine(s, line, word, max_line_len, next_line_prefix) - - if summary_insert1: - s, line = _extendLine(s, line, summary_insert1, max_line_len, next_line_prefix) - - for i in xrange(trailing_items, 1, -1): - word = format_function(a[-i]) + separator - s, line = _extendLine(s, line, word, max_line_len, next_line_prefix) - - word = format_function(a[-1]) - s, line = _extendLine(s, line, word, max_line_len, next_line_prefix) - s += line + "]\n" - s = '[' + s[len(next_line_prefix):] - else: - s = '[' - sep = separator.rstrip() - for i in xrange(leading_items): - if i > 0: - s += next_line_prefix - s += _formatArray(a[i], format_function, rank-1, max_line_len, - " " + next_line_prefix, separator, edge_items, - summary_insert) - s = s.rstrip() + sep.rstrip() + '\n'*max(rank-1,1) - - if summary_insert1: - s += next_line_prefix + summary_insert1 + "\n" - - for i in xrange(trailing_items, 1, -1): - if leading_items or i != trailing_items: - s += next_line_prefix - s += _formatArray(a[-i], format_function, rank-1, max_line_len, - " " + next_line_prefix, separator, edge_items, - summary_insert) - s = s.rstrip() + sep.rstrip() + '\n'*max(rank-1,1) - if leading_items or trailing_items > 1: - s += next_line_prefix - s += _formatArray(a[-1], format_function, rank-1, max_line_len, - " " + next_line_prefix, separator, edge_items, - summary_insert).rstrip()+']\n' - return s - -class FloatFormat(object): - def __init__(self, data, precision, suppress_small, sign=False): - self.precision = 
precision - self.suppress_small = suppress_small - self.sign = sign - self.exp_format = False - self.large_exponent = False - self.max_str_len = 0 - self.fillFormat(data) - - def fillFormat(self, data): - import numeric as _nc - errstate = _nc.seterr(all='ignore') - try: - special = isnan(data) | isinf(data) - non_zero = absolute(data.compress(not_equal(data, 0) & ~special)) - if len(non_zero) == 0: - max_val = 0. - min_val = 0. - else: - max_val = maximum.reduce(non_zero) - min_val = minimum.reduce(non_zero) - if max_val >= 1.e8: - self.exp_format = True - if not self.suppress_small and (min_val < 0.0001 - or max_val/min_val > 1000.): - self.exp_format = True - finally: - _nc.seterr(**errstate) - - if self.exp_format: - self.large_exponent = 0 < min_val < 1e-99 or max_val >= 1e100 - self.max_str_len = 8 + self.precision - if self.large_exponent: - self.max_str_len += 1 - if self.sign: - format = '%+' - else: - format = '%' - format = format + '%d.%de' % (self.max_str_len, self.precision) - else: - format = '%%.%df' % (self.precision,) - if len(non_zero): - precision = max([_digits(x, self.precision, format) - for x in non_zero]) - else: - precision = 0 - precision = min(self.precision, precision) - self.max_str_len = len(str(int(max_val))) + precision + 2 - if _nc.any(special): - self.max_str_len = max(self.max_str_len, - len(_nan_str), - len(_inf_str)+1) - if self.sign: - format = '%#+' - else: - format = '%#' - format = format + '%d.%df' % (self.max_str_len, precision) - self.special_fmt = '%%%ds' % (self.max_str_len,) - self.format = format - - def __call__(self, x, strip_zeros=True): - import numeric as _nc - err = _nc.seterr(invalid='ignore') - try: - if isnan(x): - return self.special_fmt % (_nan_str,) - elif isinf(x): - if x > 0: - return self.special_fmt % (_inf_str,) - else: - return self.special_fmt % ('-' + _inf_str,) - finally: - _nc.seterr(**err) - - s = self.format % x - if self.large_exponent: - # 3-digit exponent - expsign = s[-3] - if expsign == 
'+' or expsign == '-': - s = s[1:-2] + '0' + s[-2:] - elif self.exp_format: - # 2-digit exponent - if s[-3] == '0': - s = ' ' + s[:-3] + s[-2:] - elif strip_zeros: - z = s.rstrip('0') - s = z + ' '*(len(s)-len(z)) - return s - - -def _digits(x, precision, format): - s = format % x - z = s.rstrip('0') - return precision - len(s) + len(z) - - -_MAXINT = sys.maxint -_MININT = -sys.maxint-1 -def _formatInteger(x, format): - if _MININT < x < _MAXINT: - return format % x - else: - return "%s" % x - -def _longfloatFormatter(precision): - # XXX Have to add something to determine the width to use a la FloatFormat - # Right now, things won't line up properly - def formatter(x): - if isnan(x): - return _nan_str - elif isinf(x): - if x > 0: - return _inf_str - else: - return '-' + _inf_str - return format_longfloat(x, precision) - return formatter - -def _clongfloatFormatter(precision): - def formatter(x): - r = format_longfloat(x.real, precision) - i = format_longfloat(x.imag, precision) - return '%s+%sj' % (r, i) - return formatter - -class ComplexFormat(object): - def __init__(self, x, precision, suppress_small): - self.real_format = FloatFormat(x.real, precision, suppress_small) - self.imag_format = FloatFormat(x.imag, precision, suppress_small, - sign=True) - - def __call__(self, x): - r = self.real_format(x.real, strip_zeros=False) - i = self.imag_format(x.imag, strip_zeros=False) - if not self.imag_format.exp_format: - z = i.rstrip('0') - i = z + 'j' + ' '*(len(i)-len(z)) - else: - i = i + 'j' - return r + i - -## end diff --git a/pythonPackages/numpy/numpy/core/blasdot/_dotblas.c b/pythonPackages/numpy/numpy/core/blasdot/_dotblas.c deleted file mode 100755 index 2b3e5ad4f6..0000000000 --- a/pythonPackages/numpy/numpy/core/blasdot/_dotblas.c +++ /dev/null @@ -1,1198 +0,0 @@ -static char module_doc[] = -"This module provides a BLAS optimized\nmatrix multiply, inner product and dot for numpy arrays"; - -#include "Python.h" -#include "numpy/ndarrayobject.h" -#ifndef 
CBLAS_HEADER -#define CBLAS_HEADER "cblas.h" -#endif -#include CBLAS_HEADER - -#include - -#if (PY_VERSION_HEX < 0x02060000) -#define Py_TYPE(o) (((PyObject*)(o))->ob_type) -#define Py_REFCNT(o) (((PyObject*)(o))->ob_refcnt) -#define Py_SIZE(o) (((PyVarObject*)(o))->ob_size) -#endif - -static PyArray_DotFunc *oldFunctions[PyArray_NTYPES]; - -static void -FLOAT_dot(void *a, npy_intp stridea, void *b, npy_intp strideb, void *res, - npy_intp n, void *tmp) -{ - register npy_intp na = stridea / sizeof(float); - register npy_intp nb = strideb / sizeof(float); - - if ((sizeof(float) * na == (size_t)stridea) && - (sizeof(float) * nb == (size_t)strideb) && - (na >= 0) && (nb >= 0)) - *((float *)res) = cblas_sdot((int)n, (float *)a, na, (float *)b, nb); - - else - oldFunctions[PyArray_FLOAT](a, stridea, b, strideb, res, n, tmp); -} - -static void -DOUBLE_dot(void *a, npy_intp stridea, void *b, npy_intp strideb, void *res, - npy_intp n, void *tmp) -{ - register int na = stridea / sizeof(double); - register int nb = strideb / sizeof(double); - - if ((sizeof(double) * na == (size_t)stridea) && - (sizeof(double) * nb == (size_t)strideb) && - (na >= 0) && (nb >= 0)) - *((double *)res) = cblas_ddot((int)n, (double *)a, na, (double *)b, nb); - else - oldFunctions[PyArray_DOUBLE](a, stridea, b, strideb, res, n, tmp); -} - -static void -CFLOAT_dot(void *a, npy_intp stridea, void *b, npy_intp strideb, void *res, - npy_intp n, void *tmp) -{ - - register int na = stridea / sizeof(npy_cfloat); - register int nb = strideb / sizeof(npy_cfloat); - - if ((sizeof(npy_cfloat) * na == (size_t)stridea) && - (sizeof(npy_cfloat) * nb == (size_t)strideb) && - (na >= 0) && (nb >= 0)) - cblas_cdotu_sub((int)n, (float *)a, na, (float *)b, nb, (float *)res); - else - oldFunctions[PyArray_CFLOAT](a, stridea, b, strideb, res, n, tmp); -} - -static void -CDOUBLE_dot(void *a, npy_intp stridea, void *b, npy_intp strideb, void *res, - npy_intp n, void *tmp) -{ - register int na = stridea / 
sizeof(npy_cdouble); - register int nb = strideb / sizeof(npy_cdouble); - - if ((sizeof(npy_cdouble) * na == (size_t)stridea) && - (sizeof(npy_cdouble) * nb == (size_t)strideb) && - (na >= 0) && (nb >= 0)) - cblas_zdotu_sub((int)n, (double *)a, na, (double *)b, nb, (double *)res); - else - oldFunctions[PyArray_CDOUBLE](a, stridea, b, strideb, res, n, tmp); -} - - -static npy_bool altered=NPY_FALSE; - -/* - * alterdot() changes all dot functions to use blas. - */ -static PyObject * -dotblas_alterdot(PyObject *NPY_UNUSED(dummy), PyObject *args) -{ - PyArray_Descr *descr; - - if (!PyArg_ParseTuple(args, "")) return NULL; - - /* Replace the dot functions to the ones using blas */ - - if (!altered) { - descr = PyArray_DescrFromType(PyArray_FLOAT); - oldFunctions[PyArray_FLOAT] = descr->f->dotfunc; - descr->f->dotfunc = (PyArray_DotFunc *)FLOAT_dot; - - descr = PyArray_DescrFromType(PyArray_DOUBLE); - oldFunctions[PyArray_DOUBLE] = descr->f->dotfunc; - descr->f->dotfunc = (PyArray_DotFunc *)DOUBLE_dot; - - descr = PyArray_DescrFromType(PyArray_CFLOAT); - oldFunctions[PyArray_CFLOAT] = descr->f->dotfunc; - descr->f->dotfunc = (PyArray_DotFunc *)CFLOAT_dot; - - descr = PyArray_DescrFromType(PyArray_CDOUBLE); - oldFunctions[PyArray_CDOUBLE] = descr->f->dotfunc; - descr->f->dotfunc = (PyArray_DotFunc *)CDOUBLE_dot; - - altered = NPY_TRUE; - } - - Py_INCREF(Py_None); - return Py_None; -} - -/* - * restoredot() restores dots to defaults. 
- */ -static PyObject * -dotblas_restoredot(PyObject *NPY_UNUSED(dummy), PyObject *args) -{ - PyArray_Descr *descr; - - if (!PyArg_ParseTuple(args, "")) return NULL; - - if (altered) { - descr = PyArray_DescrFromType(PyArray_FLOAT); - descr->f->dotfunc = oldFunctions[PyArray_FLOAT]; - oldFunctions[PyArray_FLOAT] = NULL; - Py_XDECREF(descr); - - descr = PyArray_DescrFromType(PyArray_DOUBLE); - descr->f->dotfunc = oldFunctions[PyArray_DOUBLE]; - oldFunctions[PyArray_DOUBLE] = NULL; - Py_XDECREF(descr); - - descr = PyArray_DescrFromType(PyArray_CFLOAT); - descr->f->dotfunc = oldFunctions[PyArray_CFLOAT]; - oldFunctions[PyArray_CFLOAT] = NULL; - Py_XDECREF(descr); - - descr = PyArray_DescrFromType(PyArray_CDOUBLE); - descr->f->dotfunc = oldFunctions[PyArray_CDOUBLE]; - oldFunctions[PyArray_CDOUBLE] = NULL; - Py_XDECREF(descr); - - altered = NPY_FALSE; - } - - Py_INCREF(Py_None); - return Py_None; -} - -typedef enum {_scalar, _column, _row, _matrix} MatrixShape; - -static MatrixShape -_select_matrix_shape(PyArrayObject *array) -{ - switch (array->nd) { - case 0: - return _scalar; - case 1: - if (array->dimensions[0] > 1) - return _column; - return _scalar; - case 2: - if (array->dimensions[0] > 1) { - if (array->dimensions[1] == 1) - return _column; - else - return _matrix; - } - if (array->dimensions[1] == 1) - return _scalar; - return _row; - } - return _matrix; -} - - -/* This also makes sure that the data segment is aligned with - an itemsize address as well by returning one if not true. 
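`_select_matrix_shape` above classifies an array by its dimensions so the dot routine can later pick the appropriate BLAS level (scalar scaling, vector dot, matrix-vector, or matrix-matrix). The same classification, sketched over a plain shape tuple instead of a `PyArrayObject` (illustrative Python, not the C enum):

```python
# Sketch of _select_matrix_shape() over a plain shape tuple.
# Returns one of 'scalar', 'column', 'row', 'matrix'.
def select_matrix_shape(shape):
    if len(shape) == 0:          # 0-d array
        return 'scalar'
    if len(shape) == 1:          # 1-d array: a length-1 vector is a scalar
        return 'column' if shape[0] > 1 else 'scalar'
    rows, cols = shape           # 2-d: (1,1) scalar, (N,1) column, (1,N) row
    if rows > 1:
        return 'column' if cols == 1 else 'matrix'
    return 'scalar' if cols == 1 else 'row'
```

Note that, as in the C version, degenerate shapes collapse: a `(1,)` or `(1, 1)` array is treated as a scalar so the fast scalar path can be used.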
-*/ -static int -_bad_strides(PyArrayObject *ap) -{ - register int itemsize = PyArray_ITEMSIZE(ap); - register int i, N=PyArray_NDIM(ap); - register npy_intp *strides = PyArray_STRIDES(ap); - - if (((npy_intp)(ap->data) % itemsize) != 0) - return 1; - for (i=0; ind > 2) || (ap2->nd > 2)) { - /* - * This function doesn't handle dimensions greater than 2 - * (or negative striding) -- other - * than to ensure the dot function is altered - */ - if (!altered) { - /* need to alter dot product */ - PyObject *tmp1, *tmp2; - tmp1 = PyTuple_New(0); - tmp2 = dotblas_alterdot(NULL, tmp1); - Py_DECREF(tmp1); - Py_DECREF(tmp2); - } - ret = (PyArrayObject *)PyArray_MatrixProduct((PyObject *)ap1, - (PyObject *)ap2); - Py_DECREF(ap1); - Py_DECREF(ap2); - return PyArray_Return(ret); - } - - if (_bad_strides(ap1)) { - op1 = PyArray_NewCopy(ap1, PyArray_ANYORDER); - Py_DECREF(ap1); - ap1 = (PyArrayObject *)op1; - if (ap1 == NULL) { - goto fail; - } - } - if (_bad_strides(ap2)) { - op2 = PyArray_NewCopy(ap2, PyArray_ANYORDER); - Py_DECREF(ap2); - ap2 = (PyArrayObject *)op2; - if (ap2 == NULL) { - goto fail; - } - } - ap1shape = _select_matrix_shape(ap1); - ap2shape = _select_matrix_shape(ap2); - - if (ap1shape == _scalar || ap2shape == _scalar) { - PyArrayObject *oap1, *oap2; - oap1 = ap1; oap2 = ap2; - /* One of ap1 or ap2 is a scalar */ - if (ap1shape == _scalar) { /* Make ap2 the scalar */ - PyArrayObject *t = ap1; - ap1 = ap2; - ap2 = t; - ap1shape = ap2shape; - ap2shape = _scalar; - } - - if (ap1shape == _row) { - ap1stride = ap1->strides[1]; - } - else if (ap1->nd > 0) { - ap1stride = ap1->strides[0]; - } - - if (ap1->nd == 0 || ap2->nd == 0) { - npy_intp *thisdims; - if (ap1->nd == 0) { - nd = ap2->nd; - thisdims = ap2->dimensions; - } - else { - nd = ap1->nd; - thisdims = ap1->dimensions; - } - l = 1; - for (j = 0; j < nd; j++) { - dimensions[j] = thisdims[j]; - l *= dimensions[j]; - } - } - else { - l = oap1->dimensions[oap1->nd - 1]; - - if (oap2->dimensions[0] != l) { - 
PyErr_SetString(PyExc_ValueError, "matrices are not aligned"); - goto fail; - } - nd = ap1->nd + ap2->nd - 2; - /* - * nd = 0 or 1 or 2. If nd == 0 do nothing ... - */ - if (nd == 1) { - /* - * Either ap1->nd is 1 dim or ap2->nd is 1 dim - * and the other is 2-dim - */ - dimensions[0] = (oap1->nd == 2) ? oap1->dimensions[0] : oap2->dimensions[1]; - l = dimensions[0]; - /* - * Fix it so that dot(shape=(N,1), shape=(1,)) - * and dot(shape=(1,), shape=(1,N)) both return - * an (N,) array (but use the fast scalar code) - */ - } - else if (nd == 2) { - dimensions[0] = oap1->dimensions[0]; - dimensions[1] = oap2->dimensions[1]; - /* - * We need to make sure that dot(shape=(1,1), shape=(1,N)) - * and dot(shape=(N,1),shape=(1,1)) uses - * scalar multiplication appropriately - */ - if (ap1shape == _row) { - l = dimensions[1]; - } - else { - l = dimensions[0]; - } - } - - /* Check if the summation dimension is 0-sized */ - if (oap1->dimensions[oap1->nd - 1] == 0) { - l = 0; - } - } - } - else { - /* - * (ap1->nd <= 2 && ap2->nd <= 2) - * Both ap1 and ap2 are vectors or matrices - */ - l = ap1->dimensions[ap1->nd - 1]; - - if (ap2->dimensions[0] != l) { - PyErr_SetString(PyExc_ValueError, "matrices are not aligned"); - goto fail; - } - nd = ap1->nd + ap2->nd - 2; - - if (nd == 1) - dimensions[0] = (ap1->nd == 2) ? ap1->dimensions[0] : ap2->dimensions[1]; - else if (nd == 2) { - dimensions[0] = ap1->dimensions[0]; - dimensions[1] = ap2->dimensions[1]; - } - } - - /* Choose which subtype to return */ - if (Py_TYPE(ap1) != Py_TYPE(ap2)) { - prior2 = PyArray_GetPriority((PyObject *)ap2, 0.0); - prior1 = PyArray_GetPriority((PyObject *)ap1, 0.0); - subtype = (prior2 > prior1 ? Py_TYPE(ap2) : Py_TYPE(ap1)); - } - else { - prior1 = prior2 = 0.0; - subtype = Py_TYPE(ap1); - } - - ret = (PyArrayObject *)PyArray_New(subtype, nd, dimensions, - typenum, NULL, NULL, 0, 0, - (PyObject *) - (prior2 > prior1 ? 
ap2 : ap1)); - - if (ret == NULL) { - goto fail; - } - numbytes = PyArray_NBYTES(ret); - memset(ret->data, 0, numbytes); - if (numbytes==0 || l == 0) { - Py_DECREF(ap1); - Py_DECREF(ap2); - return PyArray_Return(ret); - } - - - if (ap2shape == _scalar) { - /* - * Multiplication by a scalar -- Level 1 BLAS - * if ap1shape is a matrix and we are not contiguous, then we can't - * just blast through the entire array using a single striding factor - */ - NPY_BEGIN_ALLOW_THREADS; - - if (typenum == PyArray_DOUBLE) { - if (l == 1) { - *((double *)ret->data) = *((double *)ap2->data) * - *((double *)ap1->data); - } - else if (ap1shape != _matrix) { - cblas_daxpy(l, *((double *)ap2->data), (double *)ap1->data, - ap1stride/sizeof(double), (double *)ret->data, 1); - } - else { - int maxind, oind, i, a1s, rets; - char *ptr, *rptr; - double val; - - maxind = (ap1->dimensions[0] >= ap1->dimensions[1] ? 0 : 1); - oind = 1-maxind; - ptr = ap1->data; - rptr = ret->data; - l = ap1->dimensions[maxind]; - val = *((double *)ap2->data); - a1s = ap1->strides[maxind] / sizeof(double); - rets = ret->strides[maxind] / sizeof(double); - for (i = 0; i < ap1->dimensions[oind]; i++) { - cblas_daxpy(l, val, (double *)ptr, a1s, - (double *)rptr, rets); - ptr += ap1->strides[oind]; - rptr += ret->strides[oind]; - } - } - } - else if (typenum == PyArray_CDOUBLE) { - if (l == 1) { - npy_cdouble *ptr1, *ptr2, *res; - - ptr1 = (npy_cdouble *)ap2->data; - ptr2 = (npy_cdouble *)ap1->data; - res = (npy_cdouble *)ret->data; - res->real = ptr1->real * ptr2->real - ptr1->imag * ptr2->imag; - res->imag = ptr1->real * ptr2->imag + ptr1->imag * ptr2->real; - } - else if (ap1shape != _matrix) { - cblas_zaxpy(l, (double *)ap2->data, (double *)ap1->data, - ap1stride/sizeof(npy_cdouble), (double *)ret->data, 1); - } - else { - int maxind, oind, i, a1s, rets; - char *ptr, *rptr; - double *pval; - - maxind = (ap1->dimensions[0] >= ap1->dimensions[1] ? 
0 : 1); - oind = 1-maxind; - ptr = ap1->data; - rptr = ret->data; - l = ap1->dimensions[maxind]; - pval = (double *)ap2->data; - a1s = ap1->strides[maxind] / sizeof(npy_cdouble); - rets = ret->strides[maxind] / sizeof(npy_cdouble); - for (i = 0; i < ap1->dimensions[oind]; i++) { - cblas_zaxpy(l, pval, (double *)ptr, a1s, - (double *)rptr, rets); - ptr += ap1->strides[oind]; - rptr += ret->strides[oind]; - } - } - } - else if (typenum == PyArray_FLOAT) { - if (l == 1) { - *((float *)ret->data) = *((float *)ap2->data) * - *((float *)ap1->data); - } - else if (ap1shape != _matrix) { - cblas_saxpy(l, *((float *)ap2->data), (float *)ap1->data, - ap1stride/sizeof(float), (float *)ret->data, 1); - } - else { - int maxind, oind, i, a1s, rets; - char *ptr, *rptr; - float val; - - maxind = (ap1->dimensions[0] >= ap1->dimensions[1] ? 0 : 1); - oind = 1-maxind; - ptr = ap1->data; - rptr = ret->data; - l = ap1->dimensions[maxind]; - val = *((float *)ap2->data); - a1s = ap1->strides[maxind] / sizeof(float); - rets = ret->strides[maxind] / sizeof(float); - for (i = 0; i < ap1->dimensions[oind]; i++) { - cblas_saxpy(l, val, (float *)ptr, a1s, - (float *)rptr, rets); - ptr += ap1->strides[oind]; - rptr += ret->strides[oind]; - } - } - } - else if (typenum == PyArray_CFLOAT) { - if (l == 1) { - npy_cfloat *ptr1, *ptr2, *res; - - ptr1 = (npy_cfloat *)ap2->data; - ptr2 = (npy_cfloat *)ap1->data; - res = (npy_cfloat *)ret->data; - res->real = ptr1->real * ptr2->real - ptr1->imag * ptr2->imag; - res->imag = ptr1->real * ptr2->imag + ptr1->imag * ptr2->real; - } - else if (ap1shape != _matrix) { - cblas_caxpy(l, (float *)ap2->data, (float *)ap1->data, - ap1stride/sizeof(npy_cfloat), (float *)ret->data, 1); - } - else { - int maxind, oind, i, a1s, rets; - char *ptr, *rptr; - float *pval; - - maxind = (ap1->dimensions[0] >= ap1->dimensions[1] ? 
0 : 1); - oind = 1-maxind; - ptr = ap1->data; - rptr = ret->data; - l = ap1->dimensions[maxind]; - pval = (float *)ap2->data; - a1s = ap1->strides[maxind] / sizeof(npy_cfloat); - rets = ret->strides[maxind] / sizeof(npy_cfloat); - for (i = 0; i < ap1->dimensions[oind]; i++) { - cblas_caxpy(l, pval, (float *)ptr, a1s, - (float *)rptr, rets); - ptr += ap1->strides[oind]; - rptr += ret->strides[oind]; - } - } - } - NPY_END_ALLOW_THREADS; - } - else if ((ap2shape == _column) && (ap1shape != _matrix)) { - int ap1s, ap2s; - NPY_BEGIN_ALLOW_THREADS; - - ap2s = ap2->strides[0] / ap2->descr->elsize; - if (ap1shape == _row) { - ap1s = ap1->strides[1] / ap1->descr->elsize; - } - else { - ap1s = ap1->strides[0] / ap1->descr->elsize; - } - - /* Dot product between two vectors -- Level 1 BLAS */ - if (typenum == PyArray_DOUBLE) { - double result = cblas_ddot(l, (double *)ap1->data, ap1s, - (double *)ap2->data, ap2s); - *((double *)ret->data) = result; - } - else if (typenum == PyArray_FLOAT) { - float result = cblas_sdot(l, (float *)ap1->data, ap1s, - (float *)ap2->data, ap2s); - *((float *)ret->data) = result; - } - else if (typenum == PyArray_CDOUBLE) { - cblas_zdotu_sub(l, (double *)ap1->data, ap1s, - (double *)ap2->data, ap2s, (double *)ret->data); - } - else if (typenum == PyArray_CFLOAT) { - cblas_cdotu_sub(l, (float *)ap1->data, ap1s, - (float *)ap2->data, ap2s, (float *)ret->data); - } - NPY_END_ALLOW_THREADS; - } - else if (ap1shape == _matrix && ap2shape != _matrix) { - /* Matrix vector multiplication -- Level 2 BLAS */ - /* lda must be MAX(M,1) */ - enum CBLAS_ORDER Order; - int ap2s; - - if (!PyArray_ISONESEGMENT(ap1)) { - PyObject *new; - new = PyArray_Copy(ap1); - Py_DECREF(ap1); - ap1 = (PyArrayObject *)new; - if (new == NULL) { - goto fail; - } - } - NPY_BEGIN_ALLOW_THREADS - if (PyArray_ISCONTIGUOUS(ap1)) { - Order = CblasRowMajor; - lda = (ap1->dimensions[1] > 1 ? ap1->dimensions[1] : 1); - } - else { - Order = CblasColMajor; - lda = (ap1->dimensions[0] > 1 ? 
ap1->dimensions[0] : 1); - } - ap2s = ap2->strides[0] / ap2->descr->elsize; - if (typenum == PyArray_DOUBLE) { - cblas_dgemv(Order, CblasNoTrans, - ap1->dimensions[0], ap1->dimensions[1], - 1.0, (double *)ap1->data, lda, - (double *)ap2->data, ap2s, 0.0, (double *)ret->data, 1); - } - else if (typenum == PyArray_FLOAT) { - cblas_sgemv(Order, CblasNoTrans, - ap1->dimensions[0], ap1->dimensions[1], - 1.0, (float *)ap1->data, lda, - (float *)ap2->data, ap2s, 0.0, (float *)ret->data, 1); - } - else if (typenum == PyArray_CDOUBLE) { - cblas_zgemv(Order, - CblasNoTrans, ap1->dimensions[0], ap1->dimensions[1], - oneD, (double *)ap1->data, lda, - (double *)ap2->data, ap2s, zeroD, - (double *)ret->data, 1); - } - else if (typenum == PyArray_CFLOAT) { - cblas_cgemv(Order, - CblasNoTrans, ap1->dimensions[0], ap1->dimensions[1], - oneF, (float *)ap1->data, lda, - (float *)ap2->data, ap2s, zeroF, - (float *)ret->data, 1); - } - NPY_END_ALLOW_THREADS; - } - else if (ap1shape != _matrix && ap2shape == _matrix) { - /* Vector matrix multiplication -- Level 2 BLAS */ - enum CBLAS_ORDER Order; - int ap1s; - - if (!PyArray_ISONESEGMENT(ap2)) { - PyObject *new; - new = PyArray_Copy(ap2); - Py_DECREF(ap2); - ap2 = (PyArrayObject *)new; - if (new == NULL) { - goto fail; - } - } - NPY_BEGIN_ALLOW_THREADS - if (PyArray_ISCONTIGUOUS(ap2)) { - Order = CblasRowMajor; - lda = (ap2->dimensions[1] > 1 ? ap2->dimensions[1] : 1); - } - else { - Order = CblasColMajor; - lda = (ap2->dimensions[0] > 1 ? 
ap2->dimensions[0] : 1); - } - if (ap1shape == _row) { - ap1s = ap1->strides[1] / ap1->descr->elsize; - } - else { - ap1s = ap1->strides[0] / ap1->descr->elsize; - } - if (typenum == PyArray_DOUBLE) { - cblas_dgemv(Order, - CblasTrans, ap2->dimensions[0], ap2->dimensions[1], - 1.0, (double *)ap2->data, lda, - (double *)ap1->data, ap1s, 0.0, (double *)ret->data, 1); - } - else if (typenum == PyArray_FLOAT) { - cblas_sgemv(Order, - CblasTrans, ap2->dimensions[0], ap2->dimensions[1], - 1.0, (float *)ap2->data, lda, - (float *)ap1->data, ap1s, 0.0, (float *)ret->data, 1); - } - else if (typenum == PyArray_CDOUBLE) { - cblas_zgemv(Order, - CblasTrans, ap2->dimensions[0], ap2->dimensions[1], - oneD, (double *)ap2->data, lda, - (double *)ap1->data, ap1s, zeroD, (double *)ret->data, 1); - } - else if (typenum == PyArray_CFLOAT) { - cblas_cgemv(Order, - CblasTrans, ap2->dimensions[0], ap2->dimensions[1], - oneF, (float *)ap2->data, lda, - (float *)ap1->data, ap1s, zeroF, (float *)ret->data, 1); - } - NPY_END_ALLOW_THREADS; - } - else { - /* - * (ap1->nd == 2 && ap2->nd == 2) - * Matrix matrix multiplication -- Level 3 BLAS - * L x M multiplied by M x N - */ - enum CBLAS_ORDER Order; - enum CBLAS_TRANSPOSE Trans1, Trans2; - int M, N, L; - - /* Optimization possible: */ - /* - * We may be able to handle single-segment arrays here - * using appropriate values of Order, Trans1, and Trans2. - */ - - if (!PyArray_ISCONTIGUOUS(ap2)) { - PyObject *new = PyArray_Copy(ap2); - - Py_DECREF(ap2); - ap2 = (PyArrayObject *)new; - if (new == NULL) { - goto fail; - } - } - if (!PyArray_ISCONTIGUOUS(ap1)) { - PyObject *new = PyArray_Copy(ap1); - - Py_DECREF(ap1); - ap1 = (PyArrayObject *)new; - if (new == NULL) { - goto fail; - } - } - - NPY_BEGIN_ALLOW_THREADS; - - Order = CblasRowMajor; - Trans1 = CblasNoTrans; - Trans2 = CblasNoTrans; - L = ap1->dimensions[0]; - N = ap2->dimensions[1]; - M = ap2->dimensions[0]; - lda = (ap1->dimensions[1] > 1 ? 
ap1->dimensions[1] : 1); - ldb = (ap2->dimensions[1] > 1 ? ap2->dimensions[1] : 1); - ldc = (ret->dimensions[1] > 1 ? ret->dimensions[1] : 1); - if (typenum == PyArray_DOUBLE) { - cblas_dgemm(Order, Trans1, Trans2, - L, N, M, - 1.0, (double *)ap1->data, lda, - (double *)ap2->data, ldb, - 0.0, (double *)ret->data, ldc); - } - else if (typenum == PyArray_FLOAT) { - cblas_sgemm(Order, Trans1, Trans2, - L, N, M, - 1.0, (float *)ap1->data, lda, - (float *)ap2->data, ldb, - 0.0, (float *)ret->data, ldc); - } - else if (typenum == PyArray_CDOUBLE) { - cblas_zgemm(Order, Trans1, Trans2, - L, N, M, - oneD, (double *)ap1->data, lda, - (double *)ap2->data, ldb, - zeroD, (double *)ret->data, ldc); - } - else if (typenum == PyArray_CFLOAT) { - cblas_cgemm(Order, Trans1, Trans2, - L, N, M, - oneF, (float *)ap1->data, lda, - (float *)ap2->data, ldb, - zeroF, (float *)ret->data, ldc); - } - NPY_END_ALLOW_THREADS; - } - - - Py_DECREF(ap1); - Py_DECREF(ap2); - return PyArray_Return(ret); - - fail: - Py_XDECREF(ap1); - Py_XDECREF(ap2); - Py_XDECREF(ret); - return NULL; -} - - -/* - * innerproduct(a,b) - * - * Returns the inner product of a and b for arrays of - * floating point types. Like the generic NumPy equivalent the product - * sum is over the last dimension of a and b. - * NB: The first argument is not conjugated. - */ - -static PyObject * -dotblas_innerproduct(PyObject *NPY_UNUSED(dummy), PyObject *args) -{ - PyObject *op1, *op2; - PyArrayObject *ap1, *ap2, *ret; - int j, l, lda, ldb, ldc; - int typenum, nd; - npy_intp dimensions[NPY_MAXDIMS]; - static const float oneF[2] = {1.0, 0.0}; - static const float zeroF[2] = {0.0, 0.0}; - static const double oneD[2] = {1.0, 0.0}; - static const double zeroD[2] = {0.0, 0.0}; - PyTypeObject *subtype; - double prior1, prior2; - - if (!PyArg_ParseTuple(args, "OO", &op1, &op2)) return NULL; - - /* - * Inner product using the BLAS. The product sum is taken along the last - * dimensions of the two arrays. 
- * Only speeds things up for float, double, and complex types. - */ - - - typenum = PyArray_ObjectType(op1, 0); - typenum = PyArray_ObjectType(op2, typenum); - - /* This function doesn't handle other types */ - if ((typenum != PyArray_DOUBLE && typenum != PyArray_CDOUBLE && - typenum != PyArray_FLOAT && typenum != PyArray_CFLOAT)) { - return PyArray_Return((PyArrayObject *)PyArray_InnerProduct(op1, op2)); - } - - ret = NULL; - ap1 = (PyArrayObject *)PyArray_ContiguousFromObject(op1, typenum, 0, 0); - if (ap1 == NULL) return NULL; - ap2 = (PyArrayObject *)PyArray_ContiguousFromObject(op2, typenum, 0, 0); - if (ap2 == NULL) goto fail; - - if ((ap1->nd > 2) || (ap2->nd > 2)) { - /* This function doesn't handle dimensions greater than 2 -- other - than to ensure the dot function is altered - */ - if (!altered) { - /* need to alter dot product */ - PyObject *tmp1, *tmp2; - tmp1 = PyTuple_New(0); - tmp2 = dotblas_alterdot(NULL, tmp1); - Py_DECREF(tmp1); - Py_DECREF(tmp2); - } - ret = (PyArrayObject *)PyArray_InnerProduct((PyObject *)ap1, - (PyObject *)ap2); - Py_DECREF(ap1); - Py_DECREF(ap2); - return PyArray_Return(ret); - } - - if (ap1->nd == 0 || ap2->nd == 0) { - /* One of ap1 or ap2 is a scalar */ - if (ap1->nd == 0) { /* Make ap2 the scalar */ - PyArrayObject *t = ap1; - ap1 = ap2; - ap2 = t; - } - for (l = 1, j = 0; j < ap1->nd; j++) { - dimensions[j] = ap1->dimensions[j]; - l *= dimensions[j]; - } - nd = ap1->nd; - } - else { /* (ap1->nd <= 2 && ap2->nd <= 2) */ - /* Both ap1 and ap2 are vectors or matrices */ - l = ap1->dimensions[ap1->nd-1]; - - if (ap2->dimensions[ap2->nd-1] != l) { - PyErr_SetString(PyExc_ValueError, "matrices are not aligned"); - goto fail; - } - nd = ap1->nd+ap2->nd-2; - - if (nd == 1) - dimensions[0] = (ap1->nd == 2) ? 
ap1->dimensions[0] : ap2->dimensions[0]; - else if (nd == 2) { - dimensions[0] = ap1->dimensions[0]; - dimensions[1] = ap2->dimensions[0]; - } - } - - /* Choose which subtype to return */ - prior2 = PyArray_GetPriority((PyObject *)ap2, 0.0); - prior1 = PyArray_GetPriority((PyObject *)ap1, 0.0); - subtype = (prior2 > prior1 ? Py_TYPE(ap2) : Py_TYPE(ap1)); - - ret = (PyArrayObject *)PyArray_New(subtype, nd, dimensions, - typenum, NULL, NULL, 0, 0, - (PyObject *)\ - (prior2 > prior1 ? ap2 : ap1)); - - if (ret == NULL) goto fail; - NPY_BEGIN_ALLOW_THREADS - memset(ret->data, 0, PyArray_NBYTES(ret)); - - if (ap2->nd == 0) { - /* Multiplication by a scalar -- Level 1 BLAS */ - if (typenum == PyArray_DOUBLE) { - cblas_daxpy(l, *((double *)ap2->data), (double *)ap1->data, 1, - (double *)ret->data, 1); - } - else if (typenum == PyArray_CDOUBLE) { - cblas_zaxpy(l, (double *)ap2->data, (double *)ap1->data, 1, - (double *)ret->data, 1); - } - else if (typenum == PyArray_FLOAT) { - cblas_saxpy(l, *((float *)ap2->data), (float *)ap1->data, 1, - (float *)ret->data, 1); - } - else if (typenum == PyArray_CFLOAT) { - cblas_caxpy(l, (float *)ap2->data, (float *)ap1->data, 1, - (float *)ret->data, 1); - } - } - else if (ap1->nd == 1 && ap2->nd == 1) { - /* Dot product between two vectors -- Level 1 BLAS */ - if (typenum == PyArray_DOUBLE) { - double result = cblas_ddot(l, (double *)ap1->data, 1, - (double *)ap2->data, 1); - *((double *)ret->data) = result; - } - else if (typenum == PyArray_CDOUBLE) { - cblas_zdotu_sub(l, (double *)ap1->data, 1, - (double *)ap2->data, 1, (double *)ret->data); - } - else if (typenum == PyArray_FLOAT) { - float result = cblas_sdot(l, (float *)ap1->data, 1, - (float *)ap2->data, 1); - *((float *)ret->data) = result; - } - else if (typenum == PyArray_CFLOAT) { - cblas_cdotu_sub(l, (float *)ap1->data, 1, - (float *)ap2->data, 1, (float *)ret->data); - } - } - else if (ap1->nd == 2 && ap2->nd == 1) { - /* Matrix-vector multiplication -- Level 2 BLAS */ - lda 
= (ap1->dimensions[1] > 1 ? ap1->dimensions[1] : 1); - if (typenum == PyArray_DOUBLE) { - cblas_dgemv(CblasRowMajor, - CblasNoTrans, ap1->dimensions[0], ap1->dimensions[1], - 1.0, (double *)ap1->data, lda, - (double *)ap2->data, 1, 0.0, (double *)ret->data, 1); - } - else if (typenum == PyArray_CDOUBLE) { - cblas_zgemv(CblasRowMajor, - CblasNoTrans, ap1->dimensions[0], ap1->dimensions[1], - oneD, (double *)ap1->data, lda, - (double *)ap2->data, 1, zeroD, (double *)ret->data, 1); - } - else if (typenum == PyArray_FLOAT) { - cblas_sgemv(CblasRowMajor, - CblasNoTrans, ap1->dimensions[0], ap1->dimensions[1], - 1.0, (float *)ap1->data, lda, - (float *)ap2->data, 1, 0.0, (float *)ret->data, 1); - } - else if (typenum == PyArray_CFLOAT) { - cblas_cgemv(CblasRowMajor, - CblasNoTrans, ap1->dimensions[0], ap1->dimensions[1], - oneF, (float *)ap1->data, lda, - (float *)ap2->data, 1, zeroF, (float *)ret->data, 1); - } - } - else if (ap1->nd == 1 && ap2->nd == 2) { - /* Vector matrix multiplication -- Level 2 BLAS */ - lda = (ap2->dimensions[1] > 1 ? 
ap2->dimensions[1] : 1); - if (typenum == PyArray_DOUBLE) { - cblas_dgemv(CblasRowMajor, - CblasNoTrans, ap2->dimensions[0], ap2->dimensions[1], - 1.0, (double *)ap2->data, lda, - (double *)ap1->data, 1, 0.0, (double *)ret->data, 1); - } - else if (typenum == PyArray_CDOUBLE) { - cblas_zgemv(CblasRowMajor, - CblasNoTrans, ap2->dimensions[0], ap2->dimensions[1], - oneD, (double *)ap2->data, lda, - (double *)ap1->data, 1, zeroD, (double *)ret->data, 1); - } - else if (typenum == PyArray_FLOAT) { - cblas_sgemv(CblasRowMajor, - CblasNoTrans, ap2->dimensions[0], ap2->dimensions[1], - 1.0, (float *)ap2->data, lda, - (float *)ap1->data, 1, 0.0, (float *)ret->data, 1); - } - else if (typenum == PyArray_CFLOAT) { - cblas_cgemv(CblasRowMajor, - CblasNoTrans, ap2->dimensions[0], ap2->dimensions[1], - oneF, (float *)ap2->data, lda, - (float *)ap1->data, 1, zeroF, (float *)ret->data, 1); - } - } - else { /* (ap1->nd == 2 && ap2->nd == 2) */ - /* Matrix matrix multiplication -- Level 3 BLAS */ - lda = (ap1->dimensions[1] > 1 ? ap1->dimensions[1] : 1); - ldb = (ap2->dimensions[1] > 1 ? ap2->dimensions[1] : 1); - ldc = (ret->dimensions[1] > 1 ? 
ret->dimensions[1] : 1); - if (typenum == PyArray_DOUBLE) { - cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasTrans, - ap1->dimensions[0], ap2->dimensions[0], ap1->dimensions[1], - 1.0, (double *)ap1->data, lda, - (double *)ap2->data, ldb, - 0.0, (double *)ret->data, ldc); - } - else if (typenum == PyArray_FLOAT) { - cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasTrans, - ap1->dimensions[0], ap2->dimensions[0], ap1->dimensions[1], - 1.0, (float *)ap1->data, lda, - (float *)ap2->data, ldb, - 0.0, (float *)ret->data, ldc); - } - else if (typenum == PyArray_CDOUBLE) { - cblas_zgemm(CblasRowMajor, CblasNoTrans, CblasTrans, - ap1->dimensions[0], ap2->dimensions[0], ap1->dimensions[1], - oneD, (double *)ap1->data, lda, - (double *)ap2->data, ldb, - zeroD, (double *)ret->data, ldc); - } - else if (typenum == PyArray_CFLOAT) { - cblas_cgemm(CblasRowMajor, CblasNoTrans, CblasTrans, - ap1->dimensions[0], ap2->dimensions[0], ap1->dimensions[1], - oneF, (float *)ap1->data, lda, - (float *)ap2->data, ldb, - zeroF, (float *)ret->data, ldc); - } - } - NPY_END_ALLOW_THREADS - Py_DECREF(ap1); - Py_DECREF(ap2); - return PyArray_Return(ret); - - fail: - Py_XDECREF(ap1); - Py_XDECREF(ap2); - Py_XDECREF(ret); - return NULL; -} - - -/* - * vdot(a,b) - * - * Returns the dot product of a and b for scalars and vectors of - * floating point and complex types. The first argument, a, is conjugated. - */ -static PyObject *dotblas_vdot(PyObject *NPY_UNUSED(dummy), PyObject *args) { - PyObject *op1, *op2; - PyArrayObject *ap1=NULL, *ap2=NULL, *ret=NULL; - int l; - int typenum; - npy_intp dimensions[NPY_MAXDIMS]; - PyArray_Descr *type; - - if (!PyArg_ParseTuple(args, "OO", &op1, &op2)) return NULL; - - /* - * Conjugating dot product using the BLAS for vectors. - * Multiplies op1 and op2, each of which must be a vector. 
- */ - - typenum = PyArray_ObjectType(op1, 0); - typenum = PyArray_ObjectType(op2, typenum); - - type = PyArray_DescrFromType(typenum); - Py_INCREF(type); - ap1 = (PyArrayObject *)PyArray_FromAny(op1, type, 0, 0, 0, NULL); - if (ap1==NULL) {Py_DECREF(type); goto fail;} - op1 = PyArray_Flatten(ap1, 0); - if (op1==NULL) {Py_DECREF(type); goto fail;} - Py_DECREF(ap1); - ap1 = (PyArrayObject *)op1; - - ap2 = (PyArrayObject *)PyArray_FromAny(op2, type, 0, 0, 0, NULL); - if (ap2==NULL) goto fail; - op2 = PyArray_Flatten(ap2, 0); - if (op2 == NULL) goto fail; - Py_DECREF(ap2); - ap2 = (PyArrayObject *)op2; - - if (typenum != PyArray_FLOAT && typenum != PyArray_DOUBLE && - typenum != PyArray_CFLOAT && typenum != PyArray_CDOUBLE) { - if (!altered) { - /* need to alter dot product */ - PyObject *tmp1, *tmp2; - tmp1 = PyTuple_New(0); - tmp2 = dotblas_alterdot(NULL, tmp1); - Py_DECREF(tmp1); - Py_DECREF(tmp2); - } - if (PyTypeNum_ISCOMPLEX(typenum)) { - op1 = PyArray_Conjugate(ap1, NULL); - if (op1==NULL) goto fail; - Py_DECREF(ap1); - ap1 = (PyArrayObject *)op1; - } - ret = (PyArrayObject *)PyArray_InnerProduct((PyObject *)ap1, - (PyObject *)ap2); - Py_DECREF(ap1); - Py_DECREF(ap2); - return PyArray_Return(ret); - } - - if (ap2->dimensions[0] != ap1->dimensions[ap1->nd-1]) { - PyErr_SetString(PyExc_ValueError, "vectors have different lengths"); - goto fail; - } - l = ap1->dimensions[ap1->nd-1]; - - ret = (PyArrayObject *)PyArray_SimpleNew(0, dimensions, typenum); - if (ret == NULL) goto fail; - - NPY_BEGIN_ALLOW_THREADS - - /* Dot product between two vectors -- Level 1 BLAS */ - if (typenum == PyArray_DOUBLE) { - *((double *)ret->data) = cblas_ddot(l, (double *)ap1->data, 1, - (double *)ap2->data, 1); - } - else if (typenum == PyArray_FLOAT) { - *((float *)ret->data) = cblas_sdot(l, (float *)ap1->data, 1, - (float *)ap2->data, 1); - } - else if (typenum == PyArray_CDOUBLE) { - cblas_zdotc_sub(l, (double *)ap1->data, 1, - (double *)ap2->data, 1, (double *)ret->data); - } - 
else if (typenum == PyArray_CFLOAT) { - cblas_cdotc_sub(l, (float *)ap1->data, 1, - (float *)ap2->data, 1, (float *)ret->data); - } - - NPY_END_ALLOW_THREADS - - Py_DECREF(ap1); - Py_DECREF(ap2); - return PyArray_Return(ret); - - fail: - Py_XDECREF(ap1); - Py_XDECREF(ap2); - Py_XDECREF(ret); - return NULL; -} - -static struct PyMethodDef dotblas_module_methods[] = { - {"dot", (PyCFunction)dotblas_matrixproduct, 1, NULL}, - {"inner", (PyCFunction)dotblas_innerproduct, 1, NULL}, - {"vdot", (PyCFunction)dotblas_vdot, 1, NULL}, - {"alterdot", (PyCFunction)dotblas_alterdot, 1, NULL}, - {"restoredot", (PyCFunction)dotblas_restoredot, 1, NULL}, - {NULL, NULL, 0, NULL} /* sentinel */ -}; - -/* Initialization function for the module */ -PyMODINIT_FUNC init_dotblas(void) { - int i; - PyObject *d, *s; - - /* Create the module and add the functions */ - Py_InitModule3("_dotblas", dotblas_module_methods, module_doc); - - /* Import the array object */ - import_array(); - - /* Initialise the array of dot functions */ - for (i = 0; i < PyArray_NTYPES; i++) - oldFunctions[i] = NULL; - - /* alterdot at load */ - d = PyTuple_New(0); - s = dotblas_alterdot(NULL, d); - Py_DECREF(d); - Py_DECREF(s); - -} diff --git a/pythonPackages/numpy/numpy/core/blasdot/cblas.h b/pythonPackages/numpy/numpy/core/blasdot/cblas.h deleted file mode 100755 index 25de09edfe..0000000000 --- a/pythonPackages/numpy/numpy/core/blasdot/cblas.h +++ /dev/null @@ -1,578 +0,0 @@ -#ifndef CBLAS_H -#define CBLAS_H -#include <stddef.h> - -/* Allow the use in C++ code. 
*/ -#ifdef __cplusplus -extern "C" -{ -#endif - -/* - * Enumerated and derived types - */ -#define CBLAS_INDEX size_t /* this may vary between platforms */ - -enum CBLAS_ORDER {CblasRowMajor=101, CblasColMajor=102}; -enum CBLAS_TRANSPOSE {CblasNoTrans=111, CblasTrans=112, CblasConjTrans=113}; -enum CBLAS_UPLO {CblasUpper=121, CblasLower=122}; -enum CBLAS_DIAG {CblasNonUnit=131, CblasUnit=132}; -enum CBLAS_SIDE {CblasLeft=141, CblasRight=142}; - -/* - * =========================================================================== - * Prototypes for level 1 BLAS functions (complex are recast as routines) - * =========================================================================== - */ -float cblas_sdsdot(const int N, const float alpha, const float *X, - const int incX, const float *Y, const int incY); -double cblas_dsdot(const int N, const float *X, const int incX, const float *Y, - const int incY); -float cblas_sdot(const int N, const float *X, const int incX, - const float *Y, const int incY); -double cblas_ddot(const int N, const double *X, const int incX, - const double *Y, const int incY); - -/* - * Functions having prefixes Z and C only - */ -void cblas_cdotu_sub(const int N, const void *X, const int incX, - const void *Y, const int incY, void *dotu); -void cblas_cdotc_sub(const int N, const void *X, const int incX, - const void *Y, const int incY, void *dotc); - -void cblas_zdotu_sub(const int N, const void *X, const int incX, - const void *Y, const int incY, void *dotu); -void cblas_zdotc_sub(const int N, const void *X, const int incX, - const void *Y, const int incY, void *dotc); - - -/* - * Functions having prefixes S D SC DZ - */ -float cblas_snrm2(const int N, const float *X, const int incX); -float cblas_sasum(const int N, const float *X, const int incX); - -double cblas_dnrm2(const int N, const double *X, const int incX); -double cblas_dasum(const int N, const double *X, const int incX); - -float cblas_scnrm2(const int N, const void *X, const int 
incX); -float cblas_scasum(const int N, const void *X, const int incX); - -double cblas_dznrm2(const int N, const void *X, const int incX); -double cblas_dzasum(const int N, const void *X, const int incX); - - -/* - * Functions having standard 4 prefixes (S D C Z) - */ -CBLAS_INDEX cblas_isamax(const int N, const float *X, const int incX); -CBLAS_INDEX cblas_idamax(const int N, const double *X, const int incX); -CBLAS_INDEX cblas_icamax(const int N, const void *X, const int incX); -CBLAS_INDEX cblas_izamax(const int N, const void *X, const int incX); - -/* - * =========================================================================== - * Prototypes for level 1 BLAS routines - * =========================================================================== - */ - -/* - * Routines with standard 4 prefixes (s, d, c, z) - */ -void cblas_sswap(const int N, float *X, const int incX, - float *Y, const int incY); -void cblas_scopy(const int N, const float *X, const int incX, - float *Y, const int incY); -void cblas_saxpy(const int N, const float alpha, const float *X, - const int incX, float *Y, const int incY); - -void cblas_dswap(const int N, double *X, const int incX, - double *Y, const int incY); -void cblas_dcopy(const int N, const double *X, const int incX, - double *Y, const int incY); -void cblas_daxpy(const int N, const double alpha, const double *X, - const int incX, double *Y, const int incY); - -void cblas_cswap(const int N, void *X, const int incX, - void *Y, const int incY); -void cblas_ccopy(const int N, const void *X, const int incX, - void *Y, const int incY); -void cblas_caxpy(const int N, const void *alpha, const void *X, - const int incX, void *Y, const int incY); - -void cblas_zswap(const int N, void *X, const int incX, - void *Y, const int incY); -void cblas_zcopy(const int N, const void *X, const int incX, - void *Y, const int incY); -void cblas_zaxpy(const int N, const void *alpha, const void *X, - const int incX, void *Y, const int incY); - - -/* - * 
Routines with S and D prefix only - */ -void cblas_srotg(float *a, float *b, float *c, float *s); -void cblas_srotmg(float *d1, float *d2, float *b1, const float b2, float *P); -void cblas_srot(const int N, float *X, const int incX, - float *Y, const int incY, const float c, const float s); -void cblas_srotm(const int N, float *X, const int incX, - float *Y, const int incY, const float *P); - -void cblas_drotg(double *a, double *b, double *c, double *s); -void cblas_drotmg(double *d1, double *d2, double *b1, const double b2, double *P); -void cblas_drot(const int N, double *X, const int incX, - double *Y, const int incY, const double c, const double s); -void cblas_drotm(const int N, double *X, const int incX, - double *Y, const int incY, const double *P); - - -/* - * Routines with S D C Z CS and ZD prefixes - */ -void cblas_sscal(const int N, const float alpha, float *X, const int incX); -void cblas_dscal(const int N, const double alpha, double *X, const int incX); -void cblas_cscal(const int N, const void *alpha, void *X, const int incX); -void cblas_zscal(const int N, const void *alpha, void *X, const int incX); -void cblas_csscal(const int N, const float alpha, void *X, const int incX); -void cblas_zdscal(const int N, const double alpha, void *X, const int incX); - -/* - * =========================================================================== - * Prototypes for level 2 BLAS - * =========================================================================== - */ - -/* - * Routines with standard 4 prefixes (S, D, C, Z) - */ -void cblas_sgemv(const enum CBLAS_ORDER order, - const enum CBLAS_TRANSPOSE TransA, const int M, const int N, - const float alpha, const float *A, const int lda, - const float *X, const int incX, const float beta, - float *Y, const int incY); -void cblas_sgbmv(const enum CBLAS_ORDER order, - const enum CBLAS_TRANSPOSE TransA, const int M, const int N, - const int KL, const int KU, const float alpha, - const float *A, const int lda, const 
float *X, - const int incX, const float beta, float *Y, const int incY); -void cblas_strmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const float *A, const int lda, - float *X, const int incX); -void cblas_stbmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const int K, const float *A, const int lda, - float *X, const int incX); -void cblas_stpmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const float *Ap, float *X, const int incX); -void cblas_strsv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const float *A, const int lda, float *X, - const int incX); -void cblas_stbsv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const int K, const float *A, const int lda, - float *X, const int incX); -void cblas_stpsv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const float *Ap, float *X, const int incX); - -void cblas_dgemv(const enum CBLAS_ORDER order, - const enum CBLAS_TRANSPOSE TransA, const int M, const int N, - const double alpha, const double *A, const int lda, - const double *X, const int incX, const double beta, - double *Y, const int incY); -void cblas_dgbmv(const enum CBLAS_ORDER order, - const enum CBLAS_TRANSPOSE TransA, const int M, const int N, - const int KL, const int KU, const double alpha, - const double *A, const int lda, const double *X, - const int incX, const double beta, double *Y, const int incY); -void cblas_dtrmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE 
TransA, const enum CBLAS_DIAG Diag, - const int N, const double *A, const int lda, - double *X, const int incX); -void cblas_dtbmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const int K, const double *A, const int lda, - double *X, const int incX); -void cblas_dtpmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const double *Ap, double *X, const int incX); -void cblas_dtrsv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const double *A, const int lda, double *X, - const int incX); -void cblas_dtbsv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const int K, const double *A, const int lda, - double *X, const int incX); -void cblas_dtpsv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const double *Ap, double *X, const int incX); - -void cblas_cgemv(const enum CBLAS_ORDER order, - const enum CBLAS_TRANSPOSE TransA, const int M, const int N, - const void *alpha, const void *A, const int lda, - const void *X, const int incX, const void *beta, - void *Y, const int incY); -void cblas_cgbmv(const enum CBLAS_ORDER order, - const enum CBLAS_TRANSPOSE TransA, const int M, const int N, - const int KL, const int KU, const void *alpha, - const void *A, const int lda, const void *X, - const int incX, const void *beta, void *Y, const int incY); -void cblas_ctrmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const void *A, const int lda, - void *X, const int incX); -void cblas_ctbmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, 
- const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const int K, const void *A, const int lda, - void *X, const int incX); -void cblas_ctpmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const void *Ap, void *X, const int incX); -void cblas_ctrsv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const void *A, const int lda, void *X, - const int incX); -void cblas_ctbsv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const int K, const void *A, const int lda, - void *X, const int incX); -void cblas_ctpsv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const void *Ap, void *X, const int incX); - -void cblas_zgemv(const enum CBLAS_ORDER order, - const enum CBLAS_TRANSPOSE TransA, const int M, const int N, - const void *alpha, const void *A, const int lda, - const void *X, const int incX, const void *beta, - void *Y, const int incY); -void cblas_zgbmv(const enum CBLAS_ORDER order, - const enum CBLAS_TRANSPOSE TransA, const int M, const int N, - const int KL, const int KU, const void *alpha, - const void *A, const int lda, const void *X, - const int incX, const void *beta, void *Y, const int incY); -void cblas_ztrmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const void *A, const int lda, - void *X, const int incX); -void cblas_ztbmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const int K, const void *A, const int lda, - void *X, const int incX); -void cblas_ztpmv(const enum CBLAS_ORDER order, const enum 
CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const void *Ap, void *X, const int incX); -void cblas_ztrsv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const void *A, const int lda, void *X, - const int incX); -void cblas_ztbsv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const int K, const void *A, const int lda, - void *X, const int incX); -void cblas_ztpsv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, - const int N, const void *Ap, void *X, const int incX); - - -/* - * Routines with S and D prefixes only - */ -void cblas_ssymv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const float alpha, const float *A, - const int lda, const float *X, const int incX, - const float beta, float *Y, const int incY); -void cblas_ssbmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const int K, const float alpha, const float *A, - const int lda, const float *X, const int incX, - const float beta, float *Y, const int incY); -void cblas_sspmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const float alpha, const float *Ap, - const float *X, const int incX, - const float beta, float *Y, const int incY); -void cblas_sger(const enum CBLAS_ORDER order, const int M, const int N, - const float alpha, const float *X, const int incX, - const float *Y, const int incY, float *A, const int lda); -void cblas_ssyr(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const float alpha, const float *X, - const int incX, float *A, const int lda); -void cblas_sspr(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const float alpha, const float *X, - const int incX, float 
*Ap); -void cblas_ssyr2(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const float alpha, const float *X, - const int incX, const float *Y, const int incY, float *A, - const int lda); -void cblas_sspr2(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const float alpha, const float *X, - const int incX, const float *Y, const int incY, float *A); - -void cblas_dsymv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const double alpha, const double *A, - const int lda, const double *X, const int incX, - const double beta, double *Y, const int incY); -void cblas_dsbmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const int K, const double alpha, const double *A, - const int lda, const double *X, const int incX, - const double beta, double *Y, const int incY); -void cblas_dspmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const double alpha, const double *Ap, - const double *X, const int incX, - const double beta, double *Y, const int incY); -void cblas_dger(const enum CBLAS_ORDER order, const int M, const int N, - const double alpha, const double *X, const int incX, - const double *Y, const int incY, double *A, const int lda); -void cblas_dsyr(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const double alpha, const double *X, - const int incX, double *A, const int lda); -void cblas_dspr(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const double alpha, const double *X, - const int incX, double *Ap); -void cblas_dsyr2(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const double alpha, const double *X, - const int incX, const double *Y, const int incY, double *A, - const int lda); -void cblas_dspr2(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const double alpha, const double *X, - const int incX, const double *Y, const int incY, double *A); - 
- -/* - * Routines with C and Z prefixes only - */ -void cblas_chemv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const void *alpha, const void *A, - const int lda, const void *X, const int incX, - const void *beta, void *Y, const int incY); -void cblas_chbmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const int K, const void *alpha, const void *A, - const int lda, const void *X, const int incX, - const void *beta, void *Y, const int incY); -void cblas_chpmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const void *alpha, const void *Ap, - const void *X, const int incX, - const void *beta, void *Y, const int incY); -void cblas_cgeru(const enum CBLAS_ORDER order, const int M, const int N, - const void *alpha, const void *X, const int incX, - const void *Y, const int incY, void *A, const int lda); -void cblas_cgerc(const enum CBLAS_ORDER order, const int M, const int N, - const void *alpha, const void *X, const int incX, - const void *Y, const int incY, void *A, const int lda); -void cblas_cher(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const float alpha, const void *X, const int incX, - void *A, const int lda); -void cblas_chpr(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const float alpha, const void *X, - const int incX, void *A); -void cblas_cher2(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, - const void *alpha, const void *X, const int incX, - const void *Y, const int incY, void *A, const int lda); -void cblas_chpr2(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, - const void *alpha, const void *X, const int incX, - const void *Y, const int incY, void *Ap); - -void cblas_zhemv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const void *alpha, const void *A, - const int lda, const void *X, const int incX, - const void *beta, void *Y, const int incY); 
-void cblas_zhbmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const int K, const void *alpha, const void *A, - const int lda, const void *X, const int incX, - const void *beta, void *Y, const int incY); -void cblas_zhpmv(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const void *alpha, const void *Ap, - const void *X, const int incX, - const void *beta, void *Y, const int incY); -void cblas_zgeru(const enum CBLAS_ORDER order, const int M, const int N, - const void *alpha, const void *X, const int incX, - const void *Y, const int incY, void *A, const int lda); -void cblas_zgerc(const enum CBLAS_ORDER order, const int M, const int N, - const void *alpha, const void *X, const int incX, - const void *Y, const int incY, void *A, const int lda); -void cblas_zher(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const double alpha, const void *X, const int incX, - void *A, const int lda); -void cblas_zhpr(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, - const int N, const double alpha, const void *X, - const int incX, void *A); -void cblas_zher2(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, - const void *alpha, const void *X, const int incX, - const void *Y, const int incY, void *A, const int lda); -void cblas_zhpr2(const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, - const void *alpha, const void *X, const int incX, - const void *Y, const int incY, void *Ap); - -/* - * =========================================================================== - * Prototypes for level 3 BLAS - * =========================================================================== - */ - -/* - * Routines with standard 4 prefixes (S, D, C, Z) - */ -void cblas_sgemm(const enum CBLAS_ORDER Order, const enum CBLAS_TRANSPOSE TransA, - const enum CBLAS_TRANSPOSE TransB, const int M, const int N, - const int K, const float alpha, const float *A, - const int lda, const float 
*B, const int ldb, - const float beta, float *C, const int ldc); -void cblas_ssymm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, - const enum CBLAS_UPLO Uplo, const int M, const int N, - const float alpha, const float *A, const int lda, - const float *B, const int ldb, const float beta, - float *C, const int ldc); -void cblas_ssyrk(const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE Trans, const int N, const int K, - const float alpha, const float *A, const int lda, - const float beta, float *C, const int ldc); -void cblas_ssyr2k(const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE Trans, const int N, const int K, - const float alpha, const float *A, const int lda, - const float *B, const int ldb, const float beta, - float *C, const int ldc); -void cblas_strmm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, - const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, - const enum CBLAS_DIAG Diag, const int M, const int N, - const float alpha, const float *A, const int lda, - float *B, const int ldb); -void cblas_strsm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, - const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, - const enum CBLAS_DIAG Diag, const int M, const int N, - const float alpha, const float *A, const int lda, - float *B, const int ldb); - -void cblas_dgemm(const enum CBLAS_ORDER Order, const enum CBLAS_TRANSPOSE TransA, - const enum CBLAS_TRANSPOSE TransB, const int M, const int N, - const int K, const double alpha, const double *A, - const int lda, const double *B, const int ldb, - const double beta, double *C, const int ldc); -void cblas_dsymm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, - const enum CBLAS_UPLO Uplo, const int M, const int N, - const double alpha, const double *A, const int lda, - const double *B, const int ldb, const double beta, - double *C, const int ldc); -void cblas_dsyrk(const enum CBLAS_ORDER 
Order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE Trans, const int N, const int K, - const double alpha, const double *A, const int lda, - const double beta, double *C, const int ldc); -void cblas_dsyr2k(const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE Trans, const int N, const int K, - const double alpha, const double *A, const int lda, - const double *B, const int ldb, const double beta, - double *C, const int ldc); -void cblas_dtrmm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, - const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, - const enum CBLAS_DIAG Diag, const int M, const int N, - const double alpha, const double *A, const int lda, - double *B, const int ldb); -void cblas_dtrsm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, - const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, - const enum CBLAS_DIAG Diag, const int M, const int N, - const double alpha, const double *A, const int lda, - double *B, const int ldb); - -void cblas_cgemm(const enum CBLAS_ORDER Order, const enum CBLAS_TRANSPOSE TransA, - const enum CBLAS_TRANSPOSE TransB, const int M, const int N, - const int K, const void *alpha, const void *A, - const int lda, const void *B, const int ldb, - const void *beta, void *C, const int ldc); -void cblas_csymm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, - const enum CBLAS_UPLO Uplo, const int M, const int N, - const void *alpha, const void *A, const int lda, - const void *B, const int ldb, const void *beta, - void *C, const int ldc); -void cblas_csyrk(const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE Trans, const int N, const int K, - const void *alpha, const void *A, const int lda, - const void *beta, void *C, const int ldc); -void cblas_csyr2k(const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE Trans, const int N, const int K, - const void *alpha, const void *A, const 
int lda, - const void *B, const int ldb, const void *beta, - void *C, const int ldc); -void cblas_ctrmm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, - const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, - const enum CBLAS_DIAG Diag, const int M, const int N, - const void *alpha, const void *A, const int lda, - void *B, const int ldb); -void cblas_ctrsm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, - const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, - const enum CBLAS_DIAG Diag, const int M, const int N, - const void *alpha, const void *A, const int lda, - void *B, const int ldb); - -void cblas_zgemm(const enum CBLAS_ORDER Order, const enum CBLAS_TRANSPOSE TransA, - const enum CBLAS_TRANSPOSE TransB, const int M, const int N, - const int K, const void *alpha, const void *A, - const int lda, const void *B, const int ldb, - const void *beta, void *C, const int ldc); -void cblas_zsymm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, - const enum CBLAS_UPLO Uplo, const int M, const int N, - const void *alpha, const void *A, const int lda, - const void *B, const int ldb, const void *beta, - void *C, const int ldc); -void cblas_zsyrk(const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE Trans, const int N, const int K, - const void *alpha, const void *A, const int lda, - const void *beta, void *C, const int ldc); -void cblas_zsyr2k(const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE Trans, const int N, const int K, - const void *alpha, const void *A, const int lda, - const void *B, const int ldb, const void *beta, - void *C, const int ldc); -void cblas_ztrmm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, - const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, - const enum CBLAS_DIAG Diag, const int M, const int N, - const void *alpha, const void *A, const int lda, - void *B, const int ldb); -void cblas_ztrsm(const enum 
CBLAS_ORDER Order, const enum CBLAS_SIDE Side, - const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, - const enum CBLAS_DIAG Diag, const int M, const int N, - const void *alpha, const void *A, const int lda, - void *B, const int ldb); - - -/* - * Routines with prefixes C and Z only - */ -void cblas_chemm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, - const enum CBLAS_UPLO Uplo, const int M, const int N, - const void *alpha, const void *A, const int lda, - const void *B, const int ldb, const void *beta, - void *C, const int ldc); -void cblas_cherk(const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE Trans, const int N, const int K, - const float alpha, const void *A, const int lda, - const float beta, void *C, const int ldc); -void cblas_cher2k(const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE Trans, const int N, const int K, - const void *alpha, const void *A, const int lda, - const void *B, const int ldb, const float beta, - void *C, const int ldc); - -void cblas_zhemm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, - const enum CBLAS_UPLO Uplo, const int M, const int N, - const void *alpha, const void *A, const int lda, - const void *B, const int ldb, const void *beta, - void *C, const int ldc); -void cblas_zherk(const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE Trans, const int N, const int K, - const double alpha, const void *A, const int lda, - const double beta, void *C, const int ldc); -void cblas_zher2k(const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, - const enum CBLAS_TRANSPOSE Trans, const int N, const int K, - const void *alpha, const void *A, const int lda, - const void *B, const int ldb, const double beta, - void *C, const int ldc); - -void cblas_xerbla(int p, const char *rout, const char *form, ...); - -#ifdef __cplusplus -} -#endif - -#endif diff --git 
a/pythonPackages/numpy/numpy/core/code_generators/__init__.py b/pythonPackages/numpy/numpy/core/code_generators/__init__.py deleted file mode 100755 index e69de29bb2..0000000000 diff --git a/pythonPackages/numpy/numpy/core/code_generators/cversions.py b/pythonPackages/numpy/numpy/core/code_generators/cversions.py deleted file mode 100755 index 036e923ec9..0000000000 --- a/pythonPackages/numpy/numpy/core/code_generators/cversions.py +++ /dev/null @@ -1,10 +0,0 @@ -"""Simple script to compute the api hash of the current API as defined by -numpy_api_order and ufunc_api_order.""" -from os.path import join, dirname - -from genapi import fullapi_hash -import numpy_api - -if __name__ == '__main__': - curdir = dirname(__file__) - print fullapi_hash(numpy_api.full_api) diff --git a/pythonPackages/numpy/numpy/core/code_generators/cversions.txt b/pythonPackages/numpy/numpy/core/code_generators/cversions.txt deleted file mode 100755 index 2914087638..0000000000 --- a/pythonPackages/numpy/numpy/core/code_generators/cversions.txt +++ /dev/null @@ -1,8 +0,0 @@ -# hash below were defined from numpy_api_order.txt and ufunc_api_order.txt -0x00000001 = 603580d224763e58c5e7147f804dc0f5 -0x00000002 = 8ecb29306758515ae69749c803a75da1 -0x00000003 = bf22c0d05b31625d2a7015988d61ce5a -# Starting from here, the hash is defined from numpy_api.full_api dict -# version 4 added neighborhood iterators and PyArray_Correlate2 -0x00000004 = 3d8940bf7b0d2a4e25be4338c14c3c85 - diff --git a/pythonPackages/numpy/numpy/core/code_generators/genapi.py b/pythonPackages/numpy/numpy/core/code_generators/genapi.py deleted file mode 100755 index cb06aa56ab..0000000000 --- a/pythonPackages/numpy/numpy/core/code_generators/genapi.py +++ /dev/null @@ -1,473 +0,0 @@ -""" -Get API information encoded in C files. - -See ``find_function`` for how functions should be formatted, and -``read_order`` for how the order of the functions should be -specified. 
-""" -import sys, os, re -try: - import hashlib - md5new = hashlib.md5 -except ImportError: - import md5 - md5new = md5.new -if sys.version_info[:2] < (2, 6): - from sets import Set as set -import textwrap - -from os.path import join - -__docformat__ = 'restructuredtext' - -# The files under src/ that are scanned for API functions -API_FILES = [join('multiarray', 'methods.c'), - join('multiarray', 'arrayobject.c'), - join('multiarray', 'flagsobject.c'), - join('multiarray', 'descriptor.c'), - join('multiarray', 'iterators.c'), - join('multiarray', 'getset.c'), - join('multiarray', 'number.c'), - join('multiarray', 'sequence.c'), - join('multiarray', 'ctors.c'), - join('multiarray', 'convert.c'), - join('multiarray', 'shape.c'), - join('multiarray', 'item_selection.c'), - join('multiarray', 'convert_datatype.c'), - join('multiarray', 'arraytypes.c.src'), - join('multiarray', 'multiarraymodule.c'), - join('multiarray', 'scalartypes.c.src'), - join('multiarray', 'scalarapi.c'), - join('multiarray', 'calculation.c'), - join('multiarray', 'usertypes.c'), - join('multiarray', 'refcount.c'), - join('multiarray', 'conversion_utils.c'), - join('multiarray', 'buffer.c'), - join('umath', 'ufunc_object.c'), - join('umath', 'loops.c.src'), - ] -THIS_DIR = os.path.dirname(__file__) -API_FILES = [os.path.join(THIS_DIR, '..', 'src', a) for a in API_FILES] - -def file_in_this_dir(filename): - return os.path.join(THIS_DIR, filename) - -def remove_whitespace(s): - return ''.join(s.split()) - -def _repl(str): - return str.replace('intp', 'npy_intp').replace('Bool','npy_bool') - -class Function(object): - def __init__(self, name, return_type, args, doc=''): - self.name = name - self.return_type = _repl(return_type) - self.args = args - self.doc = doc - - def _format_arg(self, typename, name): - if typename.endswith('*'): - return typename + name - else: - return typename + ' ' + name - - def __str__(self): - argstr = ', '.join([self._format_arg(*a) for a in self.args]) - if self.doc: - 
doccomment = '/* %s */\n' % self.doc - else: - doccomment = '' - return '%s%s %s(%s)' % (doccomment, self.return_type, self.name, argstr) - - def to_ReST(self): - lines = ['::', '', ' ' + self.return_type] - argstr = ',\000'.join([self._format_arg(*a) for a in self.args]) - name = ' %s' % (self.name,) - s = textwrap.wrap('(%s)' % (argstr,), width=72, - initial_indent=name, - subsequent_indent=' ' * (len(name)+1), - break_long_words=False) - for l in s: - lines.append(l.replace('\000', ' ').rstrip()) - lines.append('') - if self.doc: - lines.append(textwrap.dedent(self.doc)) - return '\n'.join(lines) - - def api_hash(self): - m = md5new() - m.update(remove_whitespace(self.return_type)) - m.update('\000') - m.update(self.name) - m.update('\000') - for typename, name in self.args: - m.update(remove_whitespace(typename)) - m.update('\000') - return m.hexdigest()[:8] - -class ParseError(Exception): - def __init__(self, filename, lineno, msg): - self.filename = filename - self.lineno = lineno - self.msg = msg - - def __str__(self): - return '%s:%s:%s' % (self.filename, self.lineno, self.msg) - -def skip_brackets(s, lbrac, rbrac): - count = 0 - for i, c in enumerate(s): - if c == lbrac: - count += 1 - elif c == rbrac: - count -= 1 - if count == 0: - return i - raise ValueError("no match '%s' for '%s' (%r)" % (lbrac, rbrac, s)) - -def split_arguments(argstr): - arguments = [] - bracket_counts = {'(': 0, '[': 0} - current_argument = [] - state = 0 - i = 0 - def finish_arg(): - if current_argument: - argstr = ''.join(current_argument).strip() - m = re.match(r'(.*(\s+|[*]))(\w+)$', argstr) - if m: - typename = m.group(1).strip() - name = m.group(3) - else: - typename = argstr - name = '' - arguments.append((typename, name)) - del current_argument[:] - while i < len(argstr): - c = argstr[i] - if c == ',': - finish_arg() - elif c == '(': - p = skip_brackets(argstr[i:], '(', ')') - current_argument += argstr[i:i+p] - i += p-1 - else: - current_argument += c - i += 1 - 
finish_arg() - return arguments - - -def find_functions(filename, tag='API'): - """ - Scan the file, looking for tagged functions. - - Assuming ``tag=='API'``, a tagged function looks like:: - - /*API*/ - static returntype* - function_name(argtype1 arg1, argtype2 arg2) - { - } - - where the return type must be on a separate line, the function - name must start the line, and the opening ``{`` must start the line. - - An optional documentation comment in ReST format may follow the tag, - as in:: - - /*API - This function does foo... - */ - """ - fo = open(filename, 'r') - functions = [] - return_type = None - function_name = None - function_args = [] - doclist = [] - SCANNING, STATE_DOC, STATE_RETTYPE, STATE_NAME, STATE_ARGS = range(5) - state = SCANNING - tagcomment = '/*' + tag - for lineno, line in enumerate(fo): - try: - line = line.strip() - if state == SCANNING: - if line.startswith(tagcomment): - if line.endswith('*/'): - state = STATE_RETTYPE - else: - state = STATE_DOC - elif state == STATE_DOC: - if line.startswith('*/'): - state = STATE_RETTYPE - else: - line = line.lstrip(' *') - doclist.append(line) - elif state == STATE_RETTYPE: - # first line of declaration with return type - m = re.match(r'NPY_NO_EXPORT\s+(.*)$', line) - if m: - line = m.group(1) - return_type = line - state = STATE_NAME - elif state == STATE_NAME: - # second line, with function name - m = re.match(r'(\w+)\s*\(', line) - if m: - function_name = m.group(1) - else: - raise ParseError(filename, lineno+1, - 'could not find function name') - function_args.append(line[m.end():]) - state = STATE_ARGS - elif state == STATE_ARGS: - if line.startswith('{'): - # finished - fargs_str = ' '.join(function_args).rstrip(' )') - fargs = split_arguments(fargs_str) - f = Function(function_name, return_type, fargs, - '\n'.join(doclist)) - functions.append(f) - return_type = None - function_name = None - function_args = [] - doclist = [] - state = SCANNING - else: - function_args.append(line) - except: - 
print(filename, lineno+1) - raise - fo.close() - return functions - -def should_rebuild(targets, source_files): - from distutils.dep_util import newer_group - for t in targets: - if not os.path.exists(t): - return True - sources = API_FILES + list(source_files) + [__file__] - if newer_group(sources, targets[0], missing='newer'): - return True - return False - -# Those *Api classes instances know how to output strings for the generated code -class TypeApi: - def __init__(self, name, index, ptr_cast, api_name): - self.index = index - self.name = name - self.ptr_cast = ptr_cast - self.api_name = api_name - - def define_from_array_api_string(self): - return "#define %s (*(%s *)%s[%d])" % (self.name, - self.ptr_cast, - self.api_name, - self.index) - - def array_api_define(self): - return " (void *) &%s" % self.name - - def internal_define(self): - astr = """\ -#ifdef NPY_ENABLE_SEPARATE_COMPILATION - extern NPY_NO_EXPORT PyTypeObject %(type)s; -#else - NPY_NO_EXPORT PyTypeObject %(type)s; -#endif -""" % {'type': self.name} - return astr - -class GlobalVarApi: - def __init__(self, name, index, type, api_name): - self.name = name - self.index = index - self.type = type - self.api_name = api_name - - def define_from_array_api_string(self): - return "#define %s (*(%s *)%s[%d])" % (self.name, - self.type, - self.api_name, - self.index) - - def array_api_define(self): - return " (%s *) &%s" % (self.type, self.name) - - def internal_define(self): - astr = """\ -#ifdef NPY_ENABLE_SEPARATE_COMPILATION - extern NPY_NO_EXPORT %(type)s %(name)s; -#else - NPY_NO_EXPORT %(type)s %(name)s; -#endif -""" % {'type': self.type, 'name': self.name} - return astr - -# Dummy to be able to consistently use *Api instances for all items in the -# array api -class BoolValuesApi: - def __init__(self, name, index, api_name): - self.name = name - self.index = index - self.type = 'PyBoolScalarObject' - self.api_name = api_name - - def define_from_array_api_string(self): - return "#define %s ((%s 
*)%s[%d])" % (self.name, - self.type, - self.api_name, - self.index) - - def array_api_define(self): - return " (void *) &%s" % self.name - - def internal_define(self): - astr = """\ -#ifdef NPY_ENABLE_SEPARATE_COMPILATION -extern NPY_NO_EXPORT PyBoolScalarObject _PyArrayScalar_BoolValues[2]; -#else -NPY_NO_EXPORT PyBoolScalarObject _PyArrayScalar_BoolValues[2]; -#endif -""" - return astr - -class FunctionApi: - def __init__(self, name, index, return_type, args, api_name): - self.name = name - self.index = index - self.return_type = return_type - self.args = args - self.api_name = api_name - - def _argtypes_string(self): - if not self.args: - return 'void' - argstr = ', '.join([_repl(a[0]) for a in self.args]) - return argstr - - def define_from_array_api_string(self): - define = """\ -#define %s \\\n (*(%s (*)(%s)) \\ - %s[%d])""" % (self.name, - self.return_type, - self._argtypes_string(), - self.api_name, - self.index) - return define - - def array_api_define(self): - return " (void *) %s" % self.name - - def internal_define(self): - astr = """\ -NPY_NO_EXPORT %s %s \\\n (%s);""" % (self.return_type, - self.name, - self._argtypes_string()) - return astr - -def order_dict(d): - """Order dict by its values.""" - o = d.items() - def _key(x): - return (x[1], x[0]) - return sorted(o, key=_key) - -def merge_api_dicts(dicts): - ret = {} - for d in dicts: - for k, v in d.items(): - ret[k] = v - - return ret - -def check_api_dict(d): - """Check that an api dict is valid (does not use the same index twice).""" - # We have if a same index is used twice: we 'revert' the dict so that index - # become keys. 
If the length is different, it means one index has been used - # at least twice - revert_dict = dict([(v, k) for k, v in d.items()]) - if not len(revert_dict) == len(d): - # We compute a dict index -> list of associated items - doubled = {} - for name, index in d.items(): - try: - doubled[index].append(name) - except KeyError: - doubled[index] = [name] - msg = """\ -Same index has been used twice in api definition: %s -""" % ['index %d -> %s' % (index, names) for index, names in doubled.items() \ - if len(names) != 1] - raise ValueError(msg) - - # No 'hole' in the indexes may be allowed, and it must starts at 0 - indexes = set(d.values()) - expected = set(range(len(indexes))) - if not indexes == expected: - diff = expected.symmetric_difference(indexes) - msg = "There are some holes in the API indexing: " \ - "(symmetric diff is %s)" % diff - raise ValueError(msg) - -def get_api_functions(tagname, api_dict): - """Parse source files to get functions tagged by the given tag.""" - functions = [] - for f in API_FILES: - functions.extend(find_functions(f, tagname)) - dfunctions = [] - for func in functions: - o = api_dict[func.name] - dfunctions.append( (o, func) ) - dfunctions.sort() - return [a[1] for a in dfunctions] - -def fullapi_hash(api_dicts): - """Given a list of api dicts defining the numpy C API, compute a checksum - of the list of items in the API (as a string).""" - a = [] - for d in api_dicts: - def sorted_by_values(d): - """Sort a dictionary by its values. Assume the dictionary items is of - the form func_name -> order""" - return sorted(d.items(), key=lambda x_y: (x_y[1], x_y[0])) - for name, index in sorted_by_values(d): - a.extend(name) - a.extend(str(index)) - - return md5new(''.join(a).encode('ascii')).hexdigest() - -# To parse strings like 'hex = checksum' where hex is e.g. 
0x1234567F and -# checksum a 128 bits md5 checksum (hex format as well) -VERRE = re.compile('(^0x[\da-f]{8})\s*=\s*([\da-f]{32})') - -def get_versions_hash(): - d = [] - - file = os.path.join(os.path.dirname(__file__), 'cversions.txt') - fid = open(file, 'r') - try: - for line in fid.readlines(): - m = VERRE.match(line) - if m: - d.append((int(m.group(1), 16), m.group(2))) - finally: - fid.close() - - return dict(d) - -def main(): - tagname = sys.argv[1] - order_file = sys.argv[2] - functions = get_api_functions(tagname, order_file) - m = md5new(tagname) - for func in functions: - print(func) - ah = func.api_hash() - m.update(ah) - print(hex(int(ah,16))) - print(hex(int(m.hexdigest()[:8],16))) - -if __name__ == '__main__': - main() diff --git a/pythonPackages/numpy/numpy/core/code_generators/genapi2.py b/pythonPackages/numpy/numpy/core/code_generators/genapi2.py deleted file mode 100755 index e69de29bb2..0000000000 diff --git a/pythonPackages/numpy/numpy/core/code_generators/generate_numpy_api.py b/pythonPackages/numpy/numpy/core/code_generators/generate_numpy_api.py deleted file mode 100755 index 2f4316d170..0000000000 --- a/pythonPackages/numpy/numpy/core/code_generators/generate_numpy_api.py +++ /dev/null @@ -1,252 +0,0 @@ -import os -import genapi - -from genapi import \ - TypeApi, GlobalVarApi, FunctionApi, BoolValuesApi - -import numpy_api - -h_template = r""" -#ifdef _MULTIARRAYMODULE - -typedef struct { - PyObject_HEAD - npy_bool obval; -} PyBoolScalarObject; - -#ifdef NPY_ENABLE_SEPARATE_COMPILATION -extern NPY_NO_EXPORT PyTypeObject PyArrayMapIter_Type; -extern NPY_NO_EXPORT PyTypeObject PyArrayNeighborhoodIter_Type; -extern NPY_NO_EXPORT PyBoolScalarObject _PyArrayScalar_BoolValues[2]; -#else -NPY_NO_EXPORT PyTypeObject PyArrayMapIter_Type; -NPY_NO_EXPORT PyTypeObject PyArrayNeighborhoodIter_Type; -NPY_NO_EXPORT PyBoolScalarObject _PyArrayScalar_BoolValues[2]; -#endif - -%s - -#else - -#if defined(PY_ARRAY_UNIQUE_SYMBOL) -#define PyArray_API 
PY_ARRAY_UNIQUE_SYMBOL -#endif - -#if defined(NO_IMPORT) || defined(NO_IMPORT_ARRAY) -extern void **PyArray_API; -#else -#if defined(PY_ARRAY_UNIQUE_SYMBOL) -void **PyArray_API; -#else -static void **PyArray_API=NULL; -#endif -#endif - -%s - -#if !defined(NO_IMPORT_ARRAY) && !defined(NO_IMPORT) -static int -_import_array(void) -{ - int st; - PyObject *numpy = PyImport_ImportModule("numpy.core.multiarray"); - PyObject *c_api = NULL; - - if (numpy == NULL) { - PyErr_SetString(PyExc_ImportError, "numpy.core.multiarray failed to import"); - return -1; - } - c_api = PyObject_GetAttrString(numpy, "_ARRAY_API"); - Py_DECREF(numpy); - if (c_api == NULL) { - PyErr_SetString(PyExc_AttributeError, "_ARRAY_API not found"); - return -1; - } - -#if PY_VERSION_HEX >= 0x03000000 - if (!PyCapsule_CheckExact(c_api)) { - PyErr_SetString(PyExc_RuntimeError, "_ARRAY_API is not PyCapsule object"); - Py_DECREF(c_api); - return -1; - } - PyArray_API = (void **)PyCapsule_GetPointer(c_api, NULL); -#else - if (!PyCObject_Check(c_api)) { - PyErr_SetString(PyExc_RuntimeError, "_ARRAY_API is not PyCObject object"); - Py_DECREF(c_api); - return -1; - } - PyArray_API = (void **)PyCObject_AsVoidPtr(c_api); -#endif - Py_DECREF(c_api); - if (PyArray_API == NULL) { - PyErr_SetString(PyExc_RuntimeError, "_ARRAY_API is NULL pointer"); - return -1; - } - - /* Perform runtime check of C API version */ - if (NPY_VERSION != PyArray_GetNDArrayCVersion()) { - PyErr_Format(PyExc_RuntimeError, "module compiled against "\ - "ABI version %%x but this version of numpy is %%x", \ - (int) NPY_VERSION, (int) PyArray_GetNDArrayCVersion()); - return -1; - } - if (NPY_FEATURE_VERSION > PyArray_GetNDArrayCFeatureVersion()) { - PyErr_Format(PyExc_RuntimeError, "module compiled against "\ - "API version %%x but this version of numpy is %%x", \ - (int) NPY_FEATURE_VERSION, (int) PyArray_GetNDArrayCFeatureVersion()); - return -1; - } - - /* - * Perform runtime check of endianness and check it matches the one set by - * the 
headers (npy_endian.h) as a safeguard - */ - st = PyArray_GetEndianness(); - if (st == NPY_CPU_UNKNOWN_ENDIAN) { - PyErr_Format(PyExc_RuntimeError, "FATAL: module compiled as unknown endian"); - return -1; - } -#if NPY_BYTE_ORDER == NPY_BIG_ENDIAN - if (st != NPY_CPU_BIG) { - PyErr_Format(PyExc_RuntimeError, "FATAL: module compiled as "\ - "big endian, but detected different endianness at runtime"); - return -1; - } -#elif NPY_BYTE_ORDER == NPY_LITTLE_ENDIAN - if (st != NPY_CPU_LITTLE) { - PyErr_Format(PyExc_RuntimeError, "FATAL: module compiled as "\ - "little endian, but detected different endianness at runtime"); - return -1; - } -#endif - - return 0; -} - -#if PY_VERSION_HEX >= 0x03000000 -#define NUMPY_IMPORT_ARRAY_RETVAL NULL -#else -#define NUMPY_IMPORT_ARRAY_RETVAL -#endif - -#define import_array() {if (_import_array() < 0) {PyErr_Print(); PyErr_SetString(PyExc_ImportError, "numpy.core.multiarray failed to import"); return NUMPY_IMPORT_ARRAY_RETVAL; } } - -#define import_array1(ret) {if (_import_array() < 0) {PyErr_Print(); PyErr_SetString(PyExc_ImportError, "numpy.core.multiarray failed to import"); return ret; } } - -#define import_array2(msg, ret) {if (_import_array() < 0) {PyErr_Print(); PyErr_SetString(PyExc_ImportError, msg); return ret; } } - -#endif - -#endif -""" - - -c_template = r""" -/* These pointers will be stored in the C-object for use in other - extension modules -*/ - -void *PyArray_API[] = { -%s -}; -""" - -c_api_header = """ -=========== -Numpy C-API -=========== -""" - -def generate_api(output_dir, force=False): - basename = 'multiarray_api' - - h_file = os.path.join(output_dir, '__%s.h' % basename) - c_file = os.path.join(output_dir, '__%s.c' % basename) - d_file = os.path.join(output_dir, '%s.txt' % basename) - targets = (h_file, c_file, d_file) - - sources = numpy_api.multiarray_api - - if (not force and not genapi.should_rebuild(targets, [numpy_api.__file__, __file__])): - return targets - else: - do_generate_api(targets, sources) - 
- return targets - -def do_generate_api(targets, sources): - header_file = targets[0] - c_file = targets[1] - doc_file = targets[2] - - global_vars = sources[0] - global_vars_types = sources[1] - scalar_bool_values = sources[2] - types_api = sources[3] - multiarray_funcs = sources[4] - - # Remove global_vars_type: not a api dict - multiarray_api = sources[:1] + sources[2:] - - module_list = [] - extension_list = [] - init_list = [] - - # Check multiarray api indexes - multiarray_api_index = genapi.merge_api_dicts(multiarray_api) - genapi.check_api_dict(multiarray_api_index) - - numpyapi_list = genapi.get_api_functions('NUMPY_API', - multiarray_funcs) - ordered_funcs_api = genapi.order_dict(multiarray_funcs) - - # Create dict name -> *Api instance - api_name = 'PyArray_API' - multiarray_api_dict = {} - for f in numpyapi_list: - name = f.name - index = multiarray_funcs[name] - multiarray_api_dict[f.name] = FunctionApi(f.name, index, f.return_type, - f.args, api_name) - - for name, index in global_vars.items(): - type = global_vars_types[name] - multiarray_api_dict[name] = GlobalVarApi(name, index, type, api_name) - - for name, index in scalar_bool_values.items(): - multiarray_api_dict[name] = BoolValuesApi(name, index, api_name) - - for name, index in types_api.items(): - multiarray_api_dict[name] = TypeApi(name, index, 'PyTypeObject', api_name) - - assert len(multiarray_api_dict) == len(multiarray_api_index) - - extension_list = [] - for name, index in genapi.order_dict(multiarray_api_index): - api_item = multiarray_api_dict[name] - extension_list.append(api_item.define_from_array_api_string()) - init_list.append(api_item.array_api_define()) - module_list.append(api_item.internal_define()) - - # Write to header - fid = open(header_file, 'w') - s = h_template % ('\n'.join(module_list), '\n'.join(extension_list)) - fid.write(s) - fid.close() - - # Write to c-code - fid = open(c_file, 'w') - s = c_template % ',\n'.join(init_list) - fid.write(s) - fid.close() - - # 
write to documentation - fid = open(doc_file, 'w') - fid.write(c_api_header) - for func in numpyapi_list: - fid.write(func.to_ReST()) - fid.write('\n\n') - fid.close() - - return targets diff --git a/pythonPackages/numpy/numpy/core/code_generators/generate_ufunc_api.py b/pythonPackages/numpy/numpy/core/code_generators/generate_ufunc_api.py deleted file mode 100755 index e6e50c2fe0..0000000000 --- a/pythonPackages/numpy/numpy/core/code_generators/generate_ufunc_api.py +++ /dev/null @@ -1,176 +0,0 @@ -import os -import genapi - -import numpy_api - -from genapi import \ - TypeApi, GlobalVarApi, FunctionApi, BoolValuesApi - -h_template = r""" -#ifdef _UMATHMODULE - -#ifdef NPY_ENABLE_SEPARATE_COMPILATION -extern NPY_NO_EXPORT PyTypeObject PyUFunc_Type; -#else -NPY_NO_EXPORT PyTypeObject PyUFunc_Type; -#endif - -%s - -#else - -#if defined(PY_UFUNC_UNIQUE_SYMBOL) -#define PyUFunc_API PY_UFUNC_UNIQUE_SYMBOL -#endif - -#if defined(NO_IMPORT) || defined(NO_IMPORT_UFUNC) -extern void **PyUFunc_API; -#else -#if defined(PY_UFUNC_UNIQUE_SYMBOL) -void **PyUFunc_API; -#else -static void **PyUFunc_API=NULL; -#endif -#endif - -%s - -static int -_import_umath(void) -{ - PyObject *numpy = PyImport_ImportModule("numpy.core.umath"); - PyObject *c_api = NULL; - - if (numpy == NULL) { - PyErr_SetString(PyExc_ImportError, "numpy.core.umath failed to import"); - return -1; - } - c_api = PyObject_GetAttrString(numpy, "_UFUNC_API"); - Py_DECREF(numpy); - if (c_api == NULL) { - PyErr_SetString(PyExc_AttributeError, "_UFUNC_API not found"); - return -1; - } - -#if PY_VERSION_HEX >= 0x03000000 - if (!PyCapsule_CheckExact(c_api)) { - PyErr_SetString(PyExc_RuntimeError, "_UFUNC_API is not PyCapsule object"); - Py_DECREF(c_api); - return -1; - } - PyUFunc_API = (void **)PyCapsule_GetPointer(c_api, NULL); -#else - if (!PyCObject_Check(c_api)) { - PyErr_SetString(PyExc_RuntimeError, "_UFUNC_API is not PyCObject object"); - Py_DECREF(c_api); - return -1; - } - PyUFunc_API = (void 
**)PyCObject_AsVoidPtr(c_api); -#endif - Py_DECREF(c_api); - if (PyUFunc_API == NULL) { - PyErr_SetString(PyExc_RuntimeError, "_UFUNC_API is NULL pointer"); - return -1; - } - return 0; -} - -#define import_umath() { UFUNC_NOFPE if (_import_umath() < 0) {PyErr_Print(); PyErr_SetString(PyExc_ImportError, "numpy.core.umath failed to import"); return; }} - -#define import_umath1(ret) { UFUNC_NOFPE if (_import_umath() < 0) {PyErr_Print(); PyErr_SetString(PyExc_ImportError, "numpy.core.umath failed to import"); return ret; }} - -#define import_umath2(msg, ret) { UFUNC_NOFPE if (_import_umath() < 0) {PyErr_Print(); PyErr_SetString(PyExc_ImportError, msg); return ret; }} - -#define import_ufunc() { UFUNC_NOFPE if (_import_umath() < 0) {PyErr_Print(); PyErr_SetString(PyExc_ImportError, "numpy.core.umath failed to import"); }} - - -#endif -""" - -c_template = r""" -/* These pointers will be stored in the C-object for use in other - extension modules -*/ - -void *PyUFunc_API[] = { -%s -}; -""" - -def generate_api(output_dir, force=False): - basename = 'ufunc_api' - - h_file = os.path.join(output_dir, '__%s.h' % basename) - c_file = os.path.join(output_dir, '__%s.c' % basename) - d_file = os.path.join(output_dir, '%s.txt' % basename) - targets = (h_file, c_file, d_file) - - sources = ['ufunc_api_order.txt'] - - if (not force and not genapi.should_rebuild(targets, sources + [__file__])): - return targets - else: - do_generate_api(targets, sources) - - return targets - -def do_generate_api(targets, sources): - header_file = targets[0] - c_file = targets[1] - doc_file = targets[2] - - ufunc_api_index = genapi.merge_api_dicts(( - numpy_api.ufunc_funcs_api, - numpy_api.ufunc_types_api)) - genapi.check_api_dict(ufunc_api_index) - - ufunc_api_list = genapi.get_api_functions('UFUNC_API', numpy_api.ufunc_funcs_api) - - # Create dict name -> *Api instance - ufunc_api_dict = {} - api_name = 'PyUFunc_API' - for f in ufunc_api_list: - name = f.name - index = ufunc_api_index[name] - 
ufunc_api_dict[name] = FunctionApi(f.name, index, f.return_type, - f.args, api_name) - - for name, index in numpy_api.ufunc_types_api.items(): - ufunc_api_dict[name] = TypeApi(name, index, 'PyTypeObject', api_name) - - # set up object API - module_list = [] - extension_list = [] - init_list = [] - - for name, index in genapi.order_dict(ufunc_api_index): - api_item = ufunc_api_dict[name] - extension_list.append(api_item.define_from_array_api_string()) - init_list.append(api_item.array_api_define()) - module_list.append(api_item.internal_define()) - - # Write to header - fid = open(header_file, 'w') - s = h_template % ('\n'.join(module_list), '\n'.join(extension_list)) - fid.write(s) - fid.close() - - # Write to c-code - fid = open(c_file, 'w') - s = c_template % ',\n'.join(init_list) - fid.write(s) - fid.close() - - # Write to documentation - fid = open(doc_file, 'w') - fid.write(''' -================= -Numpy Ufunc C-API -================= -''') - for func in ufunc_api_list: - fid.write(func.to_ReST()) - fid.write('\n\n') - fid.close() - - return targets diff --git a/pythonPackages/numpy/numpy/core/code_generators/generate_umath.py b/pythonPackages/numpy/numpy/core/code_generators/generate_umath.py deleted file mode 100755 index 4f16f4ad5c..0000000000 --- a/pythonPackages/numpy/numpy/core/code_generators/generate_umath.py +++ /dev/null @@ -1,839 +0,0 @@ -import os -import re -import struct -import sys -import textwrap - -sys.path.insert(0, os.path.dirname(__file__)) -import ufunc_docstrings as docstrings -sys.path.pop(0) - -Zero = "PyUFunc_Zero" -One = "PyUFunc_One" -None_ = "PyUFunc_None" - -# Sentinel value to specify that the loop for the given TypeDescription uses the -# pointer to arrays as its func_data. -UsesArraysAsData = object() - - -class TypeDescription(object): - """Type signature for a ufunc. - - Attributes - ---------- - type : str - Character representing the nominal type. 
- func_data : str or None or UsesArraysAsData, optional - The string representing the expression to insert into the data array, if - any. - in_ : str or None, optional - The typecode(s) of the inputs. - out : str or None, optional - The typecode(s) of the outputs. - """ - def __init__(self, type, f=None, in_=None, out=None): - self.type = type - self.func_data = f - if in_ is not None: - in_ = in_.replace('P', type) - self.in_ = in_ - if out is not None: - out = out.replace('P', type) - self.out = out - - def finish_signature(self, nin, nout): - if self.in_ is None: - self.in_ = self.type * nin - assert len(self.in_) == nin - if self.out is None: - self.out = self.type * nout - assert len(self.out) == nout - -_fdata_map = dict(f='npy_%sf', d='npy_%s', g='npy_%sl', - F='nc_%sf', D='nc_%s', G='nc_%sl') -def build_func_data(types, f): - func_data = [] - for t in types: - d = _fdata_map.get(t, '%s') % (f,) - func_data.append(d) - return func_data - -def TD(types, f=None, in_=None, out=None): - if f is not None: - if isinstance(f, str): - func_data = build_func_data(types, f) - else: - assert len(f) == len(types) - func_data = f - else: - func_data = (None,) * len(types) - if isinstance(in_, str): - in_ = (in_,) * len(types) - elif in_ is None: - in_ = (None,) * len(types) - if isinstance(out, str): - out = (out,) * len(types) - elif out is None: - out = (None,) * len(types) - tds = [] - for t, fd, i, o in zip(types, func_data, in_, out): - tds.append(TypeDescription(t, f=fd, in_=i, out=o)) - return tds - -class Ufunc(object): - """Description of a ufunc. 
- - Attributes - ---------- - - nin: number of input arguments - nout: number of output arguments - identity: identity element for a two-argument function - docstring: docstring for the ufunc - type_descriptions: list of TypeDescription objects - """ - def __init__(self, nin, nout, identity, docstring, - *type_descriptions): - self.nin = nin - self.nout = nout - if identity is None: - identity = None_ - self.identity = identity - self.docstring = docstring - self.type_descriptions = [] - for td in type_descriptions: - self.type_descriptions.extend(td) - for td in self.type_descriptions: - td.finish_signature(self.nin, self.nout) - -# String-handling utilities to avoid locale-dependence. - -import string -if sys.version_info[0] < 3: - UPPER_TABLE = string.maketrans(string.ascii_lowercase, string.ascii_uppercase) -else: - UPPER_TABLE = string.maketrans(bytes(string.ascii_lowercase, "ascii"), - bytes(string.ascii_uppercase, "ascii")) - -def english_upper(s): - """ Apply English case rules to convert ASCII strings to all upper case. - - This is an internal utility function to replace calls to str.upper() such - that we can avoid changing behavior with changing locales. In particular, - Turkish has distinct dotted and dotless variants of the Latin letter "I" in - both lowercase and uppercase. Thus, "i".upper() != "I" in a "tr" locale. - - Parameters - ---------- - s : str - - Returns - ------- - uppered : str - - Examples - -------- - >>> from numpy.lib.utils import english_upper - >>> english_upper('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_') - 'ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_' - >>> english_upper('') - '' - """ - uppered = s.translate(UPPER_TABLE) - return uppered - - -#each entry in defdict is a Ufunc object. 
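The locale pitfall that the deleted `english_upper` helper guards against can be reproduced standalone. This is a minimal Python 3 sketch of the same technique (the deleted module also carries a Python 2 branch built with `string.maketrans`); the input `'byte_i'` is just a sample type name, not taken from the source:

```python
import string

# Translation table covering only ASCII letters, so the result never
# depends on the active locale (e.g. Turkish dotted/dotless "i",
# where "i".upper() != "I" in a "tr" locale).
UPPER_TABLE = str.maketrans(string.ascii_lowercase, string.ascii_uppercase)

def english_upper(s):
    """Uppercase ASCII letters only; all other characters pass through."""
    return s.translate(UPPER_TABLE)

print(english_upper('byte_i'))  # BYTE_I
```

The generator relies on this when it turns type-character codes like `'b'` into macro names like `PyArray_BYTE`, which must be byte-identical regardless of the build machine's locale.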
- -#name: [string of chars for which it is defined, -# string of characters using func interface, -# tuple of strings giving funcs for data, -# (in, out), or (instr, outstr) giving the signature as character codes, -# identity, -# docstring, -# output specification (optional) -# ] - -chartoname = {'?': 'bool', - 'b': 'byte', - 'B': 'ubyte', - 'h': 'short', - 'H': 'ushort', - 'i': 'int', - 'I': 'uint', - 'l': 'long', - 'L': 'ulong', - 'q': 'longlong', - 'Q': 'ulonglong', - 'f': 'float', - 'd': 'double', - 'g': 'longdouble', - 'F': 'cfloat', - 'D': 'cdouble', - 'G': 'clongdouble', - 'O': 'OBJECT', - # '.' is like 'O', but calls a method of the object instead - # of a function - 'P': 'OBJECT', - } - -all = '?bBhHiIlLqQfdgFDGO' -O = 'O' -P = 'P' -ints = 'bBhHiIlLqQ' -intsO = ints + O -bints = '?' + ints -bintsO = bints + O -flts = 'fdg' -fltsO = flts + O -fltsP = flts + P -cmplx = 'FDG' -cmplxO = cmplx + O -cmplxP = cmplx + P -inexact = flts + cmplx -noint = inexact+O -nointP = inexact+P -allP = bints+flts+cmplxP -nobool = all[1:] -noobj = all[:-2] -nobool_or_obj = all[1:-2] -intflt = ints+flts -intfltcmplx = ints+flts+cmplx -nocmplx = bints+flts -nocmplxO = nocmplx+O -nocmplxP = nocmplx+P -notimes_or_obj = bints + inexact - -# Find which code corresponds to int64. 
-int64 = '' -uint64 = '' -for code in 'bhilq': - if struct.calcsize(code) == 8: - int64 = code - uint64 = english_upper(code) - break - -defdict = { -'add' : - Ufunc(2, 1, Zero, - docstrings.get('numpy.core.umath.add'), - TD(notimes_or_obj), - TD(O, f='PyNumber_Add'), - ), -'subtract' : - Ufunc(2, 1, Zero, - docstrings.get('numpy.core.umath.subtract'), - TD(notimes_or_obj), - TD(O, f='PyNumber_Subtract'), - ), -'multiply' : - Ufunc(2, 1, One, - docstrings.get('numpy.core.umath.multiply'), - TD(notimes_or_obj), - TD(O, f='PyNumber_Multiply'), - ), -'divide' : - Ufunc(2, 1, One, - docstrings.get('numpy.core.umath.divide'), - TD(intfltcmplx), - TD(O, f='PyNumber_Divide'), - ), -'floor_divide' : - Ufunc(2, 1, One, - docstrings.get('numpy.core.umath.floor_divide'), - TD(intfltcmplx), - TD(O, f='PyNumber_FloorDivide'), - ), -'true_divide' : - Ufunc(2, 1, One, - docstrings.get('numpy.core.umath.true_divide'), - TD('bBhH', out='d'), - TD('iIlLqQ', out='d'), - TD(flts+cmplx), - TD(O, f='PyNumber_TrueDivide'), - ), -'conjugate' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.conjugate'), - TD(ints+flts+cmplx), - TD(P, f='conjugate'), - ), -'fmod' : - Ufunc(2, 1, Zero, - docstrings.get('numpy.core.umath.fmod'), - TD(ints), - TD(flts, f='fmod'), - TD(P, f='fmod'), - ), -'square' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.square'), - TD(ints+inexact), - TD(O, f='Py_square'), - ), -'reciprocal' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.reciprocal'), - TD(ints+inexact), - TD(O, f='Py_reciprocal'), - ), -'ones_like' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.ones_like'), - TD(noobj), - TD(O, f='Py_get_one'), - ), -'power' : - Ufunc(2, 1, One, - docstrings.get('numpy.core.umath.power'), - TD(ints), - TD(inexact, f='pow'), - TD(O, f='npy_ObjectPower'), - ), -'absolute' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.absolute'), - TD(bints+flts), - TD(cmplx, out=('f', 'd', 'g')), - TD(O, f='PyNumber_Absolute'), - ), -'_arg' 
: - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath._arg'), - TD(cmplx, out=('f', 'd', 'g')), - ), -'negative' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.negative'), - TD(bints+flts), - TD(cmplx, f='neg'), - TD(O, f='PyNumber_Negative'), - ), -'sign' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.sign'), - TD(nobool), - ), -'greater' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.greater'), - TD(all, out='?'), - ), -'greater_equal' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.greater_equal'), - TD(all, out='?'), - ), -'less' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.less'), - TD(all, out='?'), - ), -'less_equal' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.less_equal'), - TD(all, out='?'), - ), -'equal' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.equal'), - TD(all, out='?'), - ), -'not_equal' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.not_equal'), - TD(all, out='?'), - ), -'logical_and' : - Ufunc(2, 1, One, - docstrings.get('numpy.core.umath.logical_and'), - TD(noobj, out='?'), - TD(P, f='logical_and'), - ), -'logical_not' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.logical_not'), - TD(noobj, out='?'), - TD(P, f='logical_not'), - ), -'logical_or' : - Ufunc(2, 1, Zero, - docstrings.get('numpy.core.umath.logical_or'), - TD(noobj, out='?'), - TD(P, f='logical_or'), - ), -'logical_xor' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.logical_xor'), - TD(noobj, out='?'), - TD(P, f='logical_xor'), - ), -'maximum' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.maximum'), - TD(noobj), - TD(O, f='npy_ObjectMax') - ), -'minimum' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.minimum'), - TD(noobj), - TD(O, f='npy_ObjectMin') - ), -'fmax' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.fmax'), - TD(noobj), - TD(O, f='npy_ObjectMax') - ), -'fmin' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.fmin'), - TD(noobj), - 
TD(O, f='npy_ObjectMin') - ), -'logaddexp' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.logaddexp'), - TD(flts, f="logaddexp") - ), -'logaddexp2' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.logaddexp2'), - TD(flts, f="logaddexp2") - ), -# FIXME: decide if the times should have the bitwise operations. -'bitwise_and' : - Ufunc(2, 1, One, - docstrings.get('numpy.core.umath.bitwise_and'), - TD(bints), - TD(O, f='PyNumber_And'), - ), -'bitwise_or' : - Ufunc(2, 1, Zero, - docstrings.get('numpy.core.umath.bitwise_or'), - TD(bints), - TD(O, f='PyNumber_Or'), - ), -'bitwise_xor' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.bitwise_xor'), - TD(bints), - TD(O, f='PyNumber_Xor'), - ), -'invert' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.invert'), - TD(bints), - TD(O, f='PyNumber_Invert'), - ), -'left_shift' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.left_shift'), - TD(ints), - TD(O, f='PyNumber_Lshift'), - ), -'right_shift' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.right_shift'), - TD(ints), - TD(O, f='PyNumber_Rshift'), - ), -'degrees' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.degrees'), - TD(fltsP, f='degrees'), - ), -'rad2deg' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.rad2deg'), - TD(fltsP, f='rad2deg'), - ), -'radians' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.radians'), - TD(fltsP, f='radians'), - ), -'deg2rad' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.deg2rad'), - TD(fltsP, f='deg2rad'), - ), -'arccos' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.arccos'), - TD(inexact, f='acos'), - TD(P, f='arccos'), - ), -'arccosh' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.arccosh'), - TD(inexact, f='acosh'), - TD(P, f='arccosh'), - ), -'arcsin' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.arcsin'), - TD(inexact, f='asin'), - TD(P, f='arcsin'), - ), -'arcsinh' : - Ufunc(1, 1, None, - 
docstrings.get('numpy.core.umath.arcsinh'), - TD(inexact, f='asinh'), - TD(P, f='arcsinh'), - ), -'arctan' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.arctan'), - TD(inexact, f='atan'), - TD(P, f='arctan'), - ), -'arctanh' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.arctanh'), - TD(inexact, f='atanh'), - TD(P, f='arctanh'), - ), -'cos' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.cos'), - TD(inexact, f='cos'), - TD(P, f='cos'), - ), -'sin' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.sin'), - TD(inexact, f='sin'), - TD(P, f='sin'), - ), -'tan' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.tan'), - TD(inexact, f='tan'), - TD(P, f='tan'), - ), -'cosh' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.cosh'), - TD(inexact, f='cosh'), - TD(P, f='cosh'), - ), -'sinh' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.sinh'), - TD(inexact, f='sinh'), - TD(P, f='sinh'), - ), -'tanh' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.tanh'), - TD(inexact, f='tanh'), - TD(P, f='tanh'), - ), -'exp' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.exp'), - TD(inexact, f='exp'), - TD(P, f='exp'), - ), -'exp2' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.exp2'), - TD(inexact, f='exp2'), - TD(P, f='exp2'), - ), -'expm1' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.expm1'), - TD(inexact, f='expm1'), - TD(P, f='expm1'), - ), -'log' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.log'), - TD(inexact, f='log'), - TD(P, f='log'), - ), -'log2' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.log2'), - TD(inexact, f='log2'), - TD(P, f='log2'), - ), -'log10' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.log10'), - TD(inexact, f='log10'), - TD(P, f='log10'), - ), -'log1p' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.log1p'), - TD(inexact, f='log1p'), - TD(P, f='log1p'), - ), -'sqrt' : - Ufunc(1, 1, None, - 
docstrings.get('numpy.core.umath.sqrt'), - TD(inexact, f='sqrt'), - TD(P, f='sqrt'), - ), -'ceil' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.ceil'), - TD(flts, f='ceil'), - TD(P, f='ceil'), - ), -'trunc' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.trunc'), - TD(flts, f='trunc'), - TD(P, f='trunc'), - ), -'fabs' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.fabs'), - TD(flts, f='fabs'), - TD(P, f='fabs'), - ), -'floor' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.floor'), - TD(flts, f='floor'), - TD(P, f='floor'), - ), -'rint' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.rint'), - TD(inexact, f='rint'), - TD(P, f='rint'), - ), -'arctan2' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.arctan2'), - TD(flts, f='atan2'), - TD(P, f='arctan2'), - ), -'remainder' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.remainder'), - TD(intflt), - TD(O, f='PyNumber_Remainder'), - ), -'hypot' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.hypot'), - TD(flts, f='hypot'), - TD(P, f='hypot'), - ), -'isnan' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.isnan'), - TD(inexact, out='?'), - ), -'isinf' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.isinf'), - TD(inexact, out='?'), - ), -'isfinite' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.isfinite'), - TD(inexact, out='?'), - ), -'signbit' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.signbit'), - TD(flts, out='?'), - ), -'copysign' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.copysign'), - TD(flts), - ), -'nextafter' : - Ufunc(2, 1, None, - docstrings.get('numpy.core.umath.nextafter'), - TD(flts), - ), -'spacing' : - Ufunc(1, 1, None, - docstrings.get('numpy.core.umath.spacing'), - TD(flts), - ), -'modf' : - Ufunc(1, 2, None, - docstrings.get('numpy.core.umath.modf'), - TD(flts), - ), -} - -if sys.version_info[0] >= 3: - # Will be aliased to true_divide in umathmodule.c.src:InitOtherOperators 
- del defdict['divide'] - -def indent(st,spaces): - indention = ' '*spaces - indented = indention + st.replace('\n','\n'+indention) - # trim off any trailing spaces - indented = re.sub(r' +$',r'',indented) - return indented - -chartotype1 = {'f': 'f_f', - 'd': 'd_d', - 'g': 'g_g', - 'F': 'F_F', - 'D': 'D_D', - 'G': 'G_G', - 'O': 'O_O', - 'P': 'O_O_method'} - -chartotype2 = {'f': 'ff_f', - 'd': 'dd_d', - 'g': 'gg_g', - 'F': 'FF_F', - 'D': 'DD_D', - 'G': 'GG_G', - 'O': 'OO_O', - 'P': 'OO_O_method'} -#for each name -# 1) create functions, data, and signature -# 2) fill in functions and data in InitOperators -# 3) add function. - -def make_arrays(funcdict): - # functions array contains an entry for every type implemented - # NULL should be placed where PyUfunc_ style function will be filled in later - # - code1list = [] - code2list = [] - names = list(funcdict.keys()) - names.sort() - for name in names: - uf = funcdict[name] - funclist = [] - datalist = [] - siglist = [] - k = 0 - sub = 0 - - if uf.nin > 1: - assert uf.nin == 2 - thedict = chartotype2 # two inputs and one output - else: - thedict = chartotype1 # one input and one output - - for t in uf.type_descriptions: - if t.func_data not in (None, UsesArraysAsData): - funclist.append('NULL') - astr = '%s_functions[%d] = PyUFunc_%s;' % \ - (name, k, thedict[t.type]) - code2list.append(astr) - if t.type == 'O': - astr = '%s_data[%d] = (void *) %s;' % \ - (name, k, t.func_data) - code2list.append(astr) - datalist.append('(void *)NULL') - elif t.type == 'P': - datalist.append('(void *)"%s"' % t.func_data) - else: - astr = '%s_data[%d] = (void *) %s;' % \ - (name, k, t.func_data) - code2list.append(astr) - datalist.append('(void *)NULL') - #datalist.append('(void *)%s' % t.func_data) - sub += 1 - elif t.func_data is UsesArraysAsData: - tname = english_upper(chartoname[t.type]) - datalist.append('(void *)NULL') - funclist.append('%s_%s_%s_%s' % (tname, t.in_, t.out, name)) - 
code2list.append('PyUFunc_SetUsesArraysAsData(%s_data, %s);' % (name, k)) - else: - datalist.append('(void *)NULL') - tname = english_upper(chartoname[t.type]) - funclist.append('%s_%s' % (tname, name)) - - for x in t.in_ + t.out: - siglist.append('PyArray_%s' % (english_upper(chartoname[x]),)) - - k += 1 - - funcnames = ', '.join(funclist) - signames = ', '.join(siglist) - datanames = ', '.join(datalist) - code1list.append("static PyUFuncGenericFunction %s_functions[] = { %s };" \ - % (name, funcnames)) - code1list.append("static void * %s_data[] = { %s };" \ - % (name, datanames)) - code1list.append("static char %s_signatures[] = { %s };" \ - % (name, signames)) - return "\n".join(code1list),"\n".join(code2list) - -def make_ufuncs(funcdict): - code3list = [] - names = list(funcdict.keys()) - names.sort() - for name in names: - uf = funcdict[name] - mlist = [] - docstring = textwrap.dedent(uf.docstring).strip() - if sys.version_info[0] < 3: - docstring = docstring.encode('string-escape') - docstring = docstring.replace(r'"', r'\"') - else: - docstring = docstring.encode('unicode-escape').decode('ascii') - docstring = docstring.replace(r'"', r'\"') - # XXX: I don't understand why the following replace is not - # necessary in the python 2 case. - docstring = docstring.replace(r"'", r"\'") - # Split the docstring because some compilers (like MS) do not like big - # string literal in C code. 
We split at endlines because textwrap.wrap - # do not play well with \n - docstring = '\\n\"\"'.join(docstring.split(r"\n")) - mlist.append(\ -r"""f = PyUFunc_FromFuncAndData(%s_functions, %s_data, %s_signatures, %d, - %d, %d, %s, "%s", - "%s", 0);""" % (name, name, name, - len(uf.type_descriptions), - uf.nin, uf.nout, - uf.identity, - name, docstring)) - mlist.append(r"""PyDict_SetItemString(dictionary, "%s", f);""" % name) - mlist.append(r"""Py_DECREF(f);""") - code3list.append('\n'.join(mlist)) - return '\n'.join(code3list) - - -def make_code(funcdict,filename): - code1, code2 = make_arrays(funcdict) - code3 = make_ufuncs(funcdict) - code2 = indent(code2,4) - code3 = indent(code3,4) - code = r""" - -/** Warning this file is autogenerated!!! - - Please make changes to the code generator program (%s) -**/ - -%s - -static void -InitOperators(PyObject *dictionary) { - PyObject *f; - -%s -%s -} -""" % (filename, code1, code2, code3) - return code; - - -if __name__ == "__main__": - filename = __file__ - fid = open('__umath_generated.c','w') - code = make_code(defdict, filename) - fid.write(code) - fid.close() diff --git a/pythonPackages/numpy/numpy/core/code_generators/numpy_api.py b/pythonPackages/numpy/numpy/core/code_generators/numpy_api.py deleted file mode 100755 index d12cf5a88e..0000000000 --- a/pythonPackages/numpy/numpy/core/code_generators/numpy_api.py +++ /dev/null @@ -1,313 +0,0 @@ -"""Here we define the exported functions, types, etc... which need to be -exported through a global C pointer. - -Each dictionary contains name -> index pair. - -Whenever you change one index, you break the ABI (and the ABI version number -should be incremented). Whenever you add an item to one of the dict, the API -needs to be updated. - -When adding a function, make sure to use the next integer not used as an index -(in case you use an existing index or jump, the build will stop and raise an -exception, so it should hopefully not get unnoticed). 
-""" - -multiarray_global_vars = { - 'NPY_NUMUSERTYPES': 7, -} - -multiarray_global_vars_types = { - 'NPY_NUMUSERTYPES': 'int', -} - -multiarray_scalar_bool_values = { - '_PyArrayScalar_BoolValues': 9 -} - -multiarray_types_api = { - 'PyBigArray_Type': 1, - 'PyArray_Type': 2, - 'PyArrayDescr_Type': 3, - 'PyArrayFlags_Type': 4, - 'PyArrayIter_Type': 5, - 'PyArrayMultiIter_Type': 6, - 'PyBoolArrType_Type': 8, - 'PyGenericArrType_Type': 10, - 'PyNumberArrType_Type': 11, - 'PyIntegerArrType_Type': 12, - 'PySignedIntegerArrType_Type': 13, - 'PyUnsignedIntegerArrType_Type': 14, - 'PyInexactArrType_Type': 15, - 'PyFloatingArrType_Type': 16, - 'PyComplexFloatingArrType_Type': 17, - 'PyFlexibleArrType_Type': 18, - 'PyCharacterArrType_Type': 19, - 'PyByteArrType_Type': 20, - 'PyShortArrType_Type': 21, - 'PyIntArrType_Type': 22, - 'PyLongArrType_Type': 23, - 'PyLongLongArrType_Type': 24, - 'PyUByteArrType_Type': 25, - 'PyUShortArrType_Type': 26, - 'PyUIntArrType_Type': 27, - 'PyULongArrType_Type': 28, - 'PyULongLongArrType_Type': 29, - 'PyFloatArrType_Type': 30, - 'PyDoubleArrType_Type': 31, - 'PyLongDoubleArrType_Type': 32, - 'PyCFloatArrType_Type': 33, - 'PyCDoubleArrType_Type': 34, - 'PyCLongDoubleArrType_Type': 35, - 'PyObjectArrType_Type': 36, - 'PyStringArrType_Type': 37, - 'PyUnicodeArrType_Type': 38, - 'PyVoidArrType_Type': 39, -# Those were added much later, and there is no space anymore between Void and -# first functions from multiarray API -# 'PyTimeIntegerArrType_Type': 215, -# 'PyDatetimeArrType_Type': 216, -# 'PyTimedeltaArrType_Type': 217, -} - -#define NPY_NUMUSERTYPES (*(int *)PyArray_API[7]) -#define PyBoolArrType_Type (*(PyTypeObject *)PyArray_API[8]) -#define _PyArrayScalar_BoolValues ((PyBoolScalarObject *)PyArray_API[9]) - -multiarray_funcs_api = { - 'PyArray_GetNDArrayCVersion': 0, - 'PyArray_SetNumericOps': 40, - 'PyArray_GetNumericOps': 41, - 'PyArray_INCREF': 42, - 'PyArray_XDECREF': 43, - 'PyArray_SetStringFunction': 44, - 'PyArray_DescrFromType': 
45, - 'PyArray_TypeObjectFromType': 46, - 'PyArray_Zero': 47, - 'PyArray_One': 48, - 'PyArray_CastToType': 49, - 'PyArray_CastTo': 50, - 'PyArray_CastAnyTo': 51, - 'PyArray_CanCastSafely': 52, - 'PyArray_CanCastTo': 53, - 'PyArray_ObjectType': 54, - 'PyArray_DescrFromObject': 55, - 'PyArray_ConvertToCommonType': 56, - 'PyArray_DescrFromScalar': 57, - 'PyArray_DescrFromTypeObject': 58, - 'PyArray_Size': 59, - 'PyArray_Scalar': 60, - 'PyArray_FromScalar': 61, - 'PyArray_ScalarAsCtype': 62, - 'PyArray_CastScalarToCtype': 63, - 'PyArray_CastScalarDirect': 64, - 'PyArray_ScalarFromObject': 65, - 'PyArray_GetCastFunc': 66, - 'PyArray_FromDims': 67, - 'PyArray_FromDimsAndDataAndDescr': 68, - 'PyArray_FromAny': 69, - 'PyArray_EnsureArray': 70, - 'PyArray_EnsureAnyArray': 71, - 'PyArray_FromFile': 72, - 'PyArray_FromString': 73, - 'PyArray_FromBuffer': 74, - 'PyArray_FromIter': 75, - 'PyArray_Return': 76, - 'PyArray_GetField': 77, - 'PyArray_SetField': 78, - 'PyArray_Byteswap': 79, - 'PyArray_Resize': 80, - 'PyArray_MoveInto': 81, - 'PyArray_CopyInto': 82, - 'PyArray_CopyAnyInto': 83, - 'PyArray_CopyObject': 84, - 'PyArray_NewCopy': 85, - 'PyArray_ToList': 86, - 'PyArray_ToString': 87, - 'PyArray_ToFile': 88, - 'PyArray_Dump': 89, - 'PyArray_Dumps': 90, - 'PyArray_ValidType': 91, - 'PyArray_UpdateFlags': 92, - 'PyArray_New': 93, - 'PyArray_NewFromDescr': 94, - 'PyArray_DescrNew': 95, - 'PyArray_DescrNewFromType': 96, - 'PyArray_GetPriority': 97, - 'PyArray_IterNew': 98, - 'PyArray_MultiIterNew': 99, - 'PyArray_PyIntAsInt': 100, - 'PyArray_PyIntAsIntp': 101, - 'PyArray_Broadcast': 102, - 'PyArray_FillObjectArray': 103, - 'PyArray_FillWithScalar': 104, - 'PyArray_CheckStrides': 105, - 'PyArray_DescrNewByteorder': 106, - 'PyArray_IterAllButAxis': 107, - 'PyArray_CheckFromAny': 108, - 'PyArray_FromArray': 109, - 'PyArray_FromInterface': 110, - 'PyArray_FromStructInterface': 111, - 'PyArray_FromArrayAttr': 112, - 'PyArray_ScalarKind': 113, - 'PyArray_CanCoerceScalar': 114, - 
'PyArray_NewFlagsObject': 115, - 'PyArray_CanCastScalar': 116, - 'PyArray_CompareUCS4': 117, - 'PyArray_RemoveSmallest': 118, - 'PyArray_ElementStrides': 119, - 'PyArray_Item_INCREF': 120, - 'PyArray_Item_XDECREF': 121, - 'PyArray_FieldNames': 122, - 'PyArray_Transpose': 123, - 'PyArray_TakeFrom': 124, - 'PyArray_PutTo': 125, - 'PyArray_PutMask': 126, - 'PyArray_Repeat': 127, - 'PyArray_Choose': 128, - 'PyArray_Sort': 129, - 'PyArray_ArgSort': 130, - 'PyArray_SearchSorted': 131, - 'PyArray_ArgMax': 132, - 'PyArray_ArgMin': 133, - 'PyArray_Reshape': 134, - 'PyArray_Newshape': 135, - 'PyArray_Squeeze': 136, - 'PyArray_View': 137, - 'PyArray_SwapAxes': 138, - 'PyArray_Max': 139, - 'PyArray_Min': 140, - 'PyArray_Ptp': 141, - 'PyArray_Mean': 142, - 'PyArray_Trace': 143, - 'PyArray_Diagonal': 144, - 'PyArray_Clip': 145, - 'PyArray_Conjugate': 146, - 'PyArray_Nonzero': 147, - 'PyArray_Std': 148, - 'PyArray_Sum': 149, - 'PyArray_CumSum': 150, - 'PyArray_Prod': 151, - 'PyArray_CumProd': 152, - 'PyArray_All': 153, - 'PyArray_Any': 154, - 'PyArray_Compress': 155, - 'PyArray_Flatten': 156, - 'PyArray_Ravel': 157, - 'PyArray_MultiplyList': 158, - 'PyArray_MultiplyIntList': 159, - 'PyArray_GetPtr': 160, - 'PyArray_CompareLists': 161, - 'PyArray_AsCArray': 162, - 'PyArray_As1D': 163, - 'PyArray_As2D': 164, - 'PyArray_Free': 165, - 'PyArray_Converter': 166, - 'PyArray_IntpFromSequence': 167, - 'PyArray_Concatenate': 168, - 'PyArray_InnerProduct': 169, - 'PyArray_MatrixProduct': 170, - 'PyArray_CopyAndTranspose': 171, - 'PyArray_Correlate': 172, - 'PyArray_TypestrConvert': 173, - 'PyArray_DescrConverter': 174, - 'PyArray_DescrConverter2': 175, - 'PyArray_IntpConverter': 176, - 'PyArray_BufferConverter': 177, - 'PyArray_AxisConverter': 178, - 'PyArray_BoolConverter': 179, - 'PyArray_ByteorderConverter': 180, - 'PyArray_OrderConverter': 181, - 'PyArray_EquivTypes': 182, - 'PyArray_Zeros': 183, - 'PyArray_Empty': 184, - 'PyArray_Where': 185, - 'PyArray_Arange': 186, - 
'PyArray_ArangeObj': 187, - 'PyArray_SortkindConverter': 188, - 'PyArray_LexSort': 189, - 'PyArray_Round': 190, - 'PyArray_EquivTypenums': 191, - 'PyArray_RegisterDataType': 192, - 'PyArray_RegisterCastFunc': 193, - 'PyArray_RegisterCanCast': 194, - 'PyArray_InitArrFuncs': 195, - 'PyArray_IntTupleFromIntp': 196, - 'PyArray_TypeNumFromName': 197, - 'PyArray_ClipmodeConverter': 198, - 'PyArray_OutputConverter': 199, - 'PyArray_BroadcastToShape': 200, - '_PyArray_SigintHandler': 201, - '_PyArray_GetSigintBuf': 202, - 'PyArray_DescrAlignConverter': 203, - 'PyArray_DescrAlignConverter2': 204, - 'PyArray_SearchsideConverter': 205, - 'PyArray_CheckAxis': 206, - 'PyArray_OverflowMultiplyList': 207, - 'PyArray_CompareString': 208, - 'PyArray_MultiIterFromObjects': 209, - 'PyArray_GetEndianness': 210, - 'PyArray_GetNDArrayCFeatureVersion': 211, - 'PyArray_Correlate2': 212, - 'PyArray_NeighborhoodIterNew': 213, -# 'PyArray_SetDatetimeParseFunction': 214, -# 'PyArray_DatetimeToDatetimeStruct': 218, -# 'PyArray_TimedeltaToTimedeltaStruct': 219, -# 'PyArray_DatetimeStructToDatetime': 220, -# 'PyArray_TimedeltaStructToTimedelta': 221, -} - -ufunc_types_api = { - 'PyUFunc_Type': 0 -} - -ufunc_funcs_api = { - 'PyUFunc_FromFuncAndData': 1, - 'PyUFunc_RegisterLoopForType': 2, - 'PyUFunc_GenericFunction': 3, - 'PyUFunc_f_f_As_d_d': 4, - 'PyUFunc_d_d': 5, - 'PyUFunc_f_f': 6, - 'PyUFunc_g_g': 7, - 'PyUFunc_F_F_As_D_D': 8, - 'PyUFunc_F_F': 9, - 'PyUFunc_D_D': 10, - 'PyUFunc_G_G': 11, - 'PyUFunc_O_O': 12, - 'PyUFunc_ff_f_As_dd_d': 13, - 'PyUFunc_ff_f': 14, - 'PyUFunc_dd_d': 15, - 'PyUFunc_gg_g': 16, - 'PyUFunc_FF_F_As_DD_D': 17, - 'PyUFunc_DD_D': 18, - 'PyUFunc_FF_F': 19, - 'PyUFunc_GG_G': 20, - 'PyUFunc_OO_O': 21, - 'PyUFunc_O_O_method': 22, - 'PyUFunc_OO_O_method': 23, - 'PyUFunc_On_Om': 24, - 'PyUFunc_GetPyValues': 25, - 'PyUFunc_checkfperr': 26, - 'PyUFunc_clearfperr': 27, - 'PyUFunc_getfperr': 28, - 'PyUFunc_handlefperr': 29, - 'PyUFunc_ReplaceLoopBySignature': 30, - 
'PyUFunc_FromFuncAndDataAndSignature': 31, - 'PyUFunc_SetUsesArraysAsData': 32, -} - -# List of all the dicts which define the C API -# XXX: DO NOT CHANGE THE ORDER OF TUPLES BELOW ! -multiarray_api = ( - multiarray_global_vars, - multiarray_global_vars_types, - multiarray_scalar_bool_values, - multiarray_types_api, - multiarray_funcs_api, -) - -ufunc_api = ( - ufunc_funcs_api, - ufunc_types_api -) - -full_api = multiarray_api + ufunc_api diff --git a/pythonPackages/numpy/numpy/core/code_generators/ufunc_docstrings.py b/pythonPackages/numpy/numpy/core/code_generators/ufunc_docstrings.py deleted file mode 100755 index 0cb3a423ae..0000000000 --- a/pythonPackages/numpy/numpy/core/code_generators/ufunc_docstrings.py +++ /dev/null @@ -1,3299 +0,0 @@ -# Docstrings for generated ufuncs - -docdict = {} - -def get(name): - return docdict.get(name) - -def add_newdoc(place, name, doc): - docdict['.'.join((place, name))] = doc - - -add_newdoc('numpy.core.umath', 'absolute', - """ - Calculate the absolute value element-wise. - - Parameters - ---------- - x : array_like - Input array. - - Returns - ------- - absolute : ndarray - An ndarray containing the absolute value of - each element in `x`. For complex input, ``a + ib``, the - absolute value is :math:`\\sqrt{ a^2 + b^2 }`. - - Examples - -------- - >>> x = np.array([-1.2, 1.2]) - >>> np.absolute(x) - array([ 1.2, 1.2]) - >>> np.absolute(1.2 + 1j) - 1.5620499351813308 - - Plot the function over ``[-10, 10]``: - - >>> import matplotlib.pyplot as plt - - >>> x = np.linspace(-10, 10, 101) - >>> plt.plot(x, np.absolute(x)) - >>> plt.show() - - Plot the function over the complex plane: - - >>> xx = x + 1j * x[:, np.newaxis] - >>> plt.imshow(np.abs(xx), extent=[-10, 10, -10, 10]) - >>> plt.show() - - """) - -add_newdoc('numpy.core.umath', 'add', - """ - Add arguments element-wise. - - Parameters - ---------- - x1, x2 : array_like - The arrays to be added. 
If ``x1.shape != x2.shape``, they must be - broadcastable to a common shape (which may be the shape of one or - the other). - - Returns - ------- - y : ndarray or scalar - The sum of `x1` and `x2`, element-wise. Returns a scalar if - both `x1` and `x2` are scalars. - - Notes - ----- - Equivalent to `x1` + `x2` in terms of array broadcasting. - - Examples - -------- - >>> np.add(1.0, 4.0) - 5.0 - >>> x1 = np.arange(9.0).reshape((3, 3)) - >>> x2 = np.arange(3.0) - >>> np.add(x1, x2) - array([[ 0., 2., 4.], - [ 3., 5., 7.], - [ 6., 8., 10.]]) - - """) - -add_newdoc('numpy.core.umath', 'arccos', - """ - Trigonometric inverse cosine, element-wise. - - The inverse of `cos` so that, if ``y = cos(x)``, then ``x = arccos(y)``. - - Parameters - ---------- - x : array_like - `x`-coordinate on the unit circle. - For real arguments, the domain is [-1, 1]. - - out : ndarray, optional - Array of the same shape as `x`, to store results in. See - `doc.ufuncs` (Section "Output arguments") for more details. - - Returns - ------- - angle : ndarray - The angle of the ray intersecting the unit circle at the given - `x`-coordinate in radians [0, pi]. If `x` is a scalar then a - scalar is returned, otherwise an array of the same shape as `x` - is returned. - - See Also - -------- - cos, arctan, arcsin, emath.arccos - - Notes - ----- - `arccos` is a multivalued function: for each `x` there are infinitely - many numbers `z` such that `cos(z) = x`. The convention is to return - the angle `z` whose real part lies in `[0, pi]`. - - For real-valued input data types, `arccos` always returns real output. - For each value that cannot be expressed as a real number or infinity, - it yields ``nan`` and sets the `invalid` floating point error flag. - - For complex-valued input, `arccos` is a complex analytic function that - has branch cuts `[-inf, -1]` and `[1, inf]` and is continuous from - above on the former and from below on the latter. - - The inverse `cos` is also known as `acos` or cos^-1. 
- - References - ---------- - M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions", - 10th printing, 1964, pp. 79. http://www.math.sfu.ca/~cbm/aands/ - - Examples - -------- - We expect the arccos of 1 to be 0, and of -1 to be pi: - - >>> np.arccos([1, -1]) - array([ 0. , 3.14159265]) - - Plot arccos: - - >>> import matplotlib.pyplot as plt - >>> x = np.linspace(-1, 1, num=100) - >>> plt.plot(x, np.arccos(x)) - >>> plt.axis('tight') - >>> plt.show() - - """) - -add_newdoc('numpy.core.umath', 'arccosh', - """ - Inverse hyperbolic cosine, elementwise. - - Parameters - ---------- - x : array_like - Input array. - out : ndarray, optional - Array of the same shape as `x`, to store results in. - See `doc.ufuncs` (Section "Output arguments") for details. - - - Returns - ------- - y : ndarray - Array of the same shape as `x`. - - See Also - -------- - - cosh, arcsinh, sinh, arctanh, tanh - - Notes - ----- - `arccosh` is a multivalued function: for each `x` there are infinitely - many numbers `z` such that `cosh(z) = x`. The convention is to return the - `z` whose imaginary part lies in `[-pi, pi]` and the real part in - ``[0, inf]``. - - For real-valued input data types, `arccosh` always returns real output. - For each value that cannot be expressed as a real number or infinity, it - yields ``nan`` and sets the `invalid` floating point error flag. - - For complex-valued input, `arccosh` is a complex analytical function that - has a branch cut `[-inf, 1]` and is continuous from above on it. - - References - ---------- - .. [1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions", - 10th printing, 1964, pp. 86. http://www.math.sfu.ca/~cbm/aands/ - .. [2] Wikipedia, "Inverse hyperbolic function", - http://en.wikipedia.org/wiki/Arccosh - - Examples - -------- - >>> np.arccosh([np.e, 10.0]) - array([ 1.65745445, 2.99322285]) - >>> np.arccosh(1) - 0.0 - - """) - -add_newdoc('numpy.core.umath', 'arcsin', - """ - Inverse sine, element-wise. 
- - Parameters - ---------- - x : array_like - `y`-coordinate on the unit circle. - - out : ndarray, optional - Array of the same shape as `x`, in which to store the results. - See `doc.ufuncs` (Section "Output arguments") for more details. - - Returns - ------- - angle : ndarray - The inverse sine of each element in `x`, in radians and in the - closed interval ``[-pi/2, pi/2]``. If `x` is a scalar, a scalar - is returned, otherwise an array. - - See Also - -------- - sin, cos, arccos, tan, arctan, arctan2, emath.arcsin - - Notes - ----- - `arcsin` is a multivalued function: for each `x` there are infinitely - many numbers `z` such that :math:`sin(z) = x`. The convention is to - return the angle `z` whose real part lies in [-pi/2, pi/2]. - - For real-valued input data types, *arcsin* always returns real output. - For each value that cannot be expressed as a real number or infinity, - it yields ``nan`` and sets the `invalid` floating point error flag. - - For complex-valued input, `arcsin` is a complex analytic function that - has, by convention, the branch cuts [-inf, -1] and [1, inf] and is - continuous from above on the former and from below on the latter. - - The inverse sine is also known as `asin` or sin^{-1}. - - References - ---------- - Abramowitz, M. and Stegun, I. A., *Handbook of Mathematical Functions*, - 10th printing, New York: Dover, 1964, pp. 79ff. - http://www.math.sfu.ca/~cbm/aands/ - - Examples - -------- - >>> np.arcsin(1) # pi/2 - 1.5707963267948966 - >>> np.arcsin(-1) # -pi/2 - -1.5707963267948966 - >>> np.arcsin(0) - 0.0 - - """) - -add_newdoc('numpy.core.umath', 'arcsinh', - """ - Inverse hyperbolic sine elementwise. - - Parameters - ---------- - x : array_like - Input array. - out : ndarray, optional - Array into which the output is placed. Its type is preserved and it - must be of the right shape to hold the output. See `doc.ufuncs`. - - Returns - ------- - out : ndarray - Array of the same shape as `x`. 
- - Notes - ----- - `arcsinh` is a multivalued function: for each `x` there are infinitely - many numbers `z` such that `sinh(z) = x`. The convention is to return the - `z` whose imaginary part lies in `[-pi/2, pi/2]`. - - For real-valued input data types, `arcsinh` always returns real output. - For each value that cannot be expressed as a real number or infinity, it - returns ``nan`` and sets the `invalid` floating point error flag. - - For complex-valued input, `arcsinh` is a complex analytical function that - has branch cuts `[1j, infj]` and `[-1j, -infj]` and is continuous from - the right on the former and from the left on the latter. - - The inverse hyperbolic sine is also known as `asinh` or ``sinh^-1``. - - References - ---------- - .. [1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions", - 10th printing, 1964, pp. 86. http://www.math.sfu.ca/~cbm/aands/ - .. [2] Wikipedia, "Inverse hyperbolic function", - http://en.wikipedia.org/wiki/Arcsinh - - Examples - -------- - >>> np.arcsinh(np.array([np.e, 10.0])) - array([ 1.72538256, 2.99822295]) - - """) - -add_newdoc('numpy.core.umath', 'arctan', - """ - Trigonometric inverse tangent, element-wise. - - The inverse of tan, so that if ``y = tan(x)`` then ``x = arctan(y)``. - - Parameters - ---------- - x : array_like - Input values. `arctan` is applied to each element of `x`. - - Returns - ------- - out : ndarray - Out has the same shape as `x`. Its real part is in - ``[-pi/2, pi/2]`` (``arctan(+/-inf)`` returns ``+/-pi/2``). - It is a scalar if `x` is a scalar. - - See Also - -------- - arctan2 : The "four quadrant" arctan of the angle formed by (`x`, `y`) - and the positive `x`-axis. - angle : Argument of complex values. - - Notes - ----- - `arctan` is a multi-valued function: for each `x` there are infinitely - many numbers `z` such that tan(`z`) = `x`. The convention is to return - the angle `z` whose real part lies in [-pi/2, pi/2]. 
- - For real-valued input data types, `arctan` always returns real output. - For each value that cannot be expressed as a real number or infinity, - it yields ``nan`` and sets the `invalid` floating point error flag. - - For complex-valued input, `arctan` is a complex analytic function that - has [`1j, infj`] and [`-1j, -infj`] as branch cuts, and is continuous - from the left on the former and from the right on the latter. - - The inverse tangent is also known as `atan` or tan^{-1}. - - References - ---------- - Abramowitz, M. and Stegun, I. A., *Handbook of Mathematical Functions*, - 10th printing, New York: Dover, 1964, pp. 79. - http://www.math.sfu.ca/~cbm/aands/ - - Examples - -------- - We expect the arctan of 0 to be 0, and of 1 to be pi/4: - - >>> np.arctan([0, 1]) - array([ 0. , 0.78539816]) - - >>> np.pi/4 - 0.78539816339744828 - - Plot arctan: - - >>> import matplotlib.pyplot as plt - >>> x = np.linspace(-10, 10) - >>> plt.plot(x, np.arctan(x)) - >>> plt.axis('tight') - >>> plt.show() - - """) - -add_newdoc('numpy.core.umath', 'arctan2', - """ - Element-wise arc tangent of ``x1/x2`` choosing the quadrant correctly. - - The quadrant (i.e., branch) is chosen so that ``arctan2(x1, x2)`` is - the signed angle in radians between the ray ending at the origin and - passing through the point (1,0), and the ray ending at the origin and - passing through the point (`x2`, `x1`). (Note the role reversal: the - "`y`-coordinate" is the first function parameter, the "`x`-coordinate" - is the second.) By IEEE convention, this function is defined for - `x2` = +/-0 and for either or both of `x1` and `x2` = +/-inf (see - Notes for specific values). - - This function is not defined for complex-valued arguments; for the - so-called argument of complex values, use `angle`. - - Parameters - ---------- - x1 : array_like, real-valued - `y`-coordinates. - x2 : array_like, real-valued - `x`-coordinates. `x2` must be broadcastable to match the shape of - `x1` or vice versa. 
- - Returns - ------- - angle : ndarray - Array of angles in radians, in the range ``[-pi, pi]``. - - See Also - -------- - arctan, tan, angle - - Notes - ----- - *arctan2* is identical to the `atan2` function of the underlying - C library. The following special values are defined in the C - standard: [1]_ - - ====== ====== ================ - `x1` `x2` `arctan2(x1,x2)` - ====== ====== ================ - +/- 0 +0 +/- 0 - +/- 0 -0 +/- pi - > 0 +/-inf +0 / +pi - < 0 +/-inf -0 / -pi - +/-inf +inf +/- (pi/4) - +/-inf -inf +/- (3*pi/4) - ====== ====== ================ - - Note that +0 and -0 are distinct floating point numbers, as are +inf - and -inf. - - References - ---------- - .. [1] ISO/IEC standard 9899:1999, "Programming language C." - - Examples - -------- - Consider four points in different quadrants: - - >>> x = np.array([-1, +1, +1, -1]) - >>> y = np.array([-1, -1, +1, +1]) - >>> np.arctan2(y, x) * 180 / np.pi - array([-135., -45., 45., 135.]) - - Note the order of the parameters. `arctan2` is defined also when `x2` = 0 - and at several other special points, obtaining values in - the range ``[-pi, pi]``: - - >>> np.arctan2([1., -1.], [0., 0.]) - array([ 1.57079633, -1.57079633]) - >>> np.arctan2([0., 0., np.inf], [+0., -0., np.inf]) - array([ 0. , 3.14159265, 0.78539816]) - - """) - -add_newdoc('numpy.core.umath', '_arg', - """ - DO NOT USE, ONLY FOR TESTING - """) - -add_newdoc('numpy.core.umath', 'arctanh', - """ - Inverse hyperbolic tangent elementwise. - - Parameters - ---------- - x : array_like - Input array. - - Returns - ------- - out : ndarray - Array of the same shape as `x`. - - See Also - -------- - emath.arctanh - - Notes - ----- - `arctanh` is a multivalued function: for each `x` there are infinitely - many numbers `z` such that `tanh(z) = x`. The convention is to return the - `z` whose imaginary part lies in `[-pi/2, pi/2]`. - - For real-valued input data types, `arctanh` always returns real output. 
- For each value that cannot be expressed as a real number or infinity, it - yields ``nan`` and sets the `invalid` floating point error flag. - - For complex-valued input, `arctanh` is a complex analytical function that - has branch cuts `[-1, -inf]` and `[1, inf]` and is continuous from - above on the former and from below on the latter. - - The inverse hyperbolic tangent is also known as `atanh` or ``tanh^-1``. - - References - ---------- - .. [1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions", - 10th printing, 1964, pp. 86. http://www.math.sfu.ca/~cbm/aands/ - .. [2] Wikipedia, "Inverse hyperbolic function", - http://en.wikipedia.org/wiki/Arctanh - - Examples - -------- - >>> np.arctanh([0, -0.5]) - array([ 0. , -0.54930614]) - - """) - -add_newdoc('numpy.core.umath', 'bitwise_and', - """ - Compute the bit-wise AND of two arrays element-wise. - - Computes the bit-wise AND of the underlying binary representation of - the integers in the input arrays. This ufunc implements the C/Python - operator ``&``. - - Parameters - ---------- - x1, x2 : array_like - Only integer types are handled (including booleans). - - Returns - ------- - out : array_like - Result. - - See Also - -------- - logical_and - bitwise_or - bitwise_xor - binary_repr : - Return the binary representation of the input number as a string. - - Examples - -------- - The number 13 is represented by ``00001101``. Likewise, 17 is - represented by ``00010001``. 
The bit-wise AND of 13 and 17 is - therefore ``00000001``, or 1: - - >>> np.bitwise_and(13, 17) - 1 - - >>> np.bitwise_and(14, 13) - 12 - >>> np.binary_repr(12) - '1100' - >>> np.bitwise_and([14,3], 13) - array([12, 1]) - - >>> np.bitwise_and([11,7], [4,25]) - array([0, 1]) - >>> np.bitwise_and(np.array([2,5,255]), np.array([3,14,16])) - array([ 2, 4, 16]) - >>> np.bitwise_and([True, True], [False, True]) - array([False, True], dtype=bool) - - """) - -add_newdoc('numpy.core.umath', 'bitwise_or', - """ - Compute the bit-wise OR of two arrays element-wise. - - Computes the bit-wise OR of the underlying binary representation of - the integers in the input arrays. This ufunc implements the C/Python - operator ``|``. - - Parameters - ---------- - x1, x2 : array_like - Only integer types are handled (including booleans). - out : ndarray, optional - Array into which the output is placed. Its type is preserved and it - must be of the right shape to hold the output. See doc.ufuncs. - - Returns - ------- - out : array_like - Result. - - See Also - -------- - logical_or - bitwise_and - bitwise_xor - binary_repr : - Return the binary representation of the input number as a string. - - Examples - -------- - The number 13 has the binary representation ``00001101``. Likewise, - 16 is represented by ``00010000``. The bit-wise OR of 13 and 16 is - then ``00011101``, or 29: - - >>> np.bitwise_or(13, 16) - 29 - >>> np.binary_repr(29) - '11101' - - >>> np.bitwise_or(32, 2) - 34 - >>> np.bitwise_or([33, 4], 1) - array([33, 5]) - >>> np.bitwise_or([33, 4], [1, 2]) - array([33, 6]) - - >>> np.bitwise_or(np.array([2, 5, 255]), np.array([4, 4, 4])) - array([ 6, 5, 255]) - >>> np.array([2, 5, 255]) | np.array([4, 4, 4]) - array([ 6, 5, 255]) - >>> np.bitwise_or(np.array([2, 5, 255, 2147483647L], dtype=np.int32), - ... 
np.array([4, 4, 4, 2147483647L], dtype=np.int32)) - array([ 6, 5, 255, 2147483647]) - >>> np.bitwise_or([True, True], [False, True]) - array([ True, True], dtype=bool) - - """) - -add_newdoc('numpy.core.umath', 'bitwise_xor', - """ - Compute the bit-wise XOR of two arrays element-wise. - - Computes the bit-wise XOR of the underlying binary representation of - the integers in the input arrays. This ufunc implements the C/Python - operator ``^``. - - Parameters - ---------- - x1, x2 : array_like - Only integer types are handled (including booleans). - - Returns - ------- - out : array_like - Result. - - See Also - -------- - logical_xor - bitwise_and - bitwise_or - binary_repr : - Return the binary representation of the input number as a string. - - Examples - -------- - The number 13 is represented by ``00001101``. Likewise, 17 is - represented by ``00010001``. The bit-wise XOR of 13 and 17 is - therefore ``00011100``, or 28: - - >>> np.bitwise_xor(13, 17) - 28 - >>> np.binary_repr(28) - '11100' - - >>> np.bitwise_xor(31, 5) - 26 - >>> np.bitwise_xor([31,3], 5) - array([26, 6]) - - >>> np.bitwise_xor([31,3], [5,6]) - array([26, 5]) - >>> np.bitwise_xor([True, True], [False, True]) - array([ True, False], dtype=bool) - - """) - -add_newdoc('numpy.core.umath', 'ceil', - """ - Return the ceiling of the input, element-wise. - - The ceil of the scalar `x` is the smallest integer `i`, such that - `i >= x`. It is often denoted as :math:`\\lceil x \\rceil`. - - Parameters - ---------- - x : array_like - Input data. - - Returns - ------- - y : {ndarray, scalar} - The ceiling of each element in `x`, with `float` dtype. - - See Also - -------- - floor, trunc, rint - - Examples - -------- - >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) - >>> np.ceil(a) - array([-1., -1., -0., 1., 2., 2., 2.]) - - """) - -add_newdoc('numpy.core.umath', 'trunc', - """ - Return the truncated value of the input, element-wise. 
- - The truncated value of the scalar `x` is the nearest integer `i` which - is closer to zero than `x` is. In short, the fractional part of the - signed number `x` is discarded. - - Parameters - ---------- - x : array_like - Input data. - - Returns - ------- - y : {ndarray, scalar} - The truncated value of each element in `x`. - - See Also - -------- - ceil, floor, rint - - Notes - ----- - .. versionadded:: 1.3.0 - - Examples - -------- - >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) - >>> np.trunc(a) - array([-1., -1., -0., 0., 1., 1., 2.]) - - """) - -add_newdoc('numpy.core.umath', 'conjugate', - """ - Return the complex conjugate, element-wise. - - The complex conjugate of a complex number is obtained by changing the - sign of its imaginary part. - - Parameters - ---------- - x : array_like - Input value. - - Returns - ------- - y : ndarray - The complex conjugate of `x`, with same dtype as `x`. - - Examples - -------- - >>> np.conjugate(1+2j) - (1-2j) - - >>> x = np.eye(2) + 1j * np.eye(2) - >>> np.conjugate(x) - array([[ 1.-1.j, 0.-0.j], - [ 0.-0.j, 1.-1.j]]) - - """) - -add_newdoc('numpy.core.umath', 'cos', - """ - Cosine elementwise. - - Parameters - ---------- - x : array_like - Input array in radians. - out : ndarray, optional - Output array of same shape as `x`. - - Returns - ------- - y : ndarray - The corresponding cosine values. - - Raises - ------ - ValueError: invalid return array shape - if `out` is provided and `out.shape` != `x.shape` (See Examples) - - Notes - ----- - If `out` is provided, the function writes the result into it, - and returns a reference to `out`. (See Examples) - - References - ---------- - M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions. - New York, NY: Dover, 1972. 
- - Examples - -------- - >>> np.cos(np.array([0, np.pi/2, np.pi])) - array([ 1.00000000e+00, 6.12303177e-17, -1.00000000e+00]) - >>> - >>> # Example of providing the optional output parameter - >>> out1 = np.empty(1) - >>> out2 = np.cos([0.1], out1) - >>> out2 is out1 - True - >>> - >>> # Example of ValueError due to provision of shape mis-matched `out` - >>> np.cos(np.zeros((3,3)),np.zeros((2,2))) - Traceback (most recent call last): - File "<stdin>", line 1, in <module> - ValueError: invalid return array shape - - """) - -add_newdoc('numpy.core.umath', 'cosh', - """ - Hyperbolic cosine, element-wise. - - Equivalent to ``1/2 * (np.exp(x) + np.exp(-x))`` and ``np.cos(1j*x)``. - - Parameters - ---------- - x : array_like - Input array. - - Returns - ------- - out : ndarray - Output array of same shape as `x`. - - Examples - -------- - >>> np.cosh(0) - 1.0 - - The hyperbolic cosine describes the shape of a hanging cable: - - >>> import matplotlib.pyplot as plt - >>> x = np.linspace(-4, 4, 1000) - >>> plt.plot(x, np.cosh(x)) - >>> plt.show() - - """) - -add_newdoc('numpy.core.umath', 'degrees', - """ - Convert angles from radians to degrees. - - Parameters - ---------- - x : array_like - Input array in radians. - out : ndarray, optional - Output array of same shape as x. - - Returns - ------- - y : ndarray of floats - The corresponding degree values; if `out` was supplied this is a - reference to it. - - See Also - -------- - rad2deg : equivalent function - - Examples - -------- - Convert a radian array to degrees - - >>> rad = np.arange(12.)*np.pi/6 - >>> np.degrees(rad) - array([ 0., 30., 60., 90., 120., 150., 180., 210., 240., - 270., 300., 330.]) - - >>> out = np.zeros((rad.shape)) - >>> r = np.degrees(rad, out) - >>> np.all(r == out) - True - - """) - -add_newdoc('numpy.core.umath', 'rad2deg', - """ - Convert angles from radians to degrees. - - Parameters - ---------- - x : array_like - Angle in radians. - out : ndarray, optional - Array into which the output is placed. 
Its type is preserved and it - must be of the right shape to hold the output. See doc.ufuncs. - - Returns - ------- - y : ndarray - The corresponding angle in degrees. - - See Also - -------- - deg2rad : Convert angles from degrees to radians. - unwrap : Remove large jumps in angle by wrapping. - - Notes - ----- - .. versionadded:: 1.3.0 - - rad2deg(x) is ``180 * x / pi``. - - Examples - -------- - >>> np.rad2deg(np.pi/2) - 90.0 - - """) - -add_newdoc('numpy.core.umath', 'divide', - """ - Divide arguments element-wise. - - Parameters - ---------- - x1 : array_like - Dividend array. - x2 : array_like - Divisor array. - out : ndarray, optional - Array into which the output is placed. Its type is preserved and it - must be of the right shape to hold the output. See doc.ufuncs. - - Returns - ------- - y : {ndarray, scalar} - The quotient `x1/x2`, element-wise. Returns a scalar if - both `x1` and `x2` are scalars. - - See Also - -------- - seterr : Set whether to raise or warn on overflow, underflow and division - by zero. - - Notes - ----- - Equivalent to `x1` / `x2` in terms of array-broadcasting. - - Behavior on division by zero can be changed using `seterr`. - - When both `x1` and `x2` are of an integer type, `divide` will return - integers and throw away the fractional part. Moreover, division by zero - always yields zero in integer arithmetic. - - Examples - -------- - >>> np.divide(2.0, 4.0) - 0.5 - >>> x1 = np.arange(9.0).reshape((3, 3)) - >>> x2 = np.arange(3.0) - >>> np.divide(x1, x2) - array([[ NaN, 1. , 1. ], - [ Inf, 4. , 2.5], - [ Inf, 7. , 4. ]]) - - Note the behavior with integer types: - - >>> np.divide(2, 4) - 0 - >>> np.divide(2, 4.) 
- 0.5 - - Division by zero always yields zero in integer arithmetic, and does not - raise an exception or a warning: - - >>> np.divide(np.array([0, 1], dtype=int), np.array([0, 0], dtype=int)) - array([0, 0]) - - Division by zero can, however, be caught using `seterr`: - - >>> old_err_state = np.seterr(divide='raise') - >>> np.divide(1, 0) - Traceback (most recent call last): - File "<stdin>", line 1, in <module> - FloatingPointError: divide by zero encountered in divide - - >>> ignored_states = np.seterr(**old_err_state) - >>> np.divide(1, 0) - 0 - - """) - -add_newdoc('numpy.core.umath', 'equal', - """ - Return (x1 == x2) element-wise. - - Parameters - ---------- - x1, x2 : array_like - Input arrays of the same shape. - - Returns - ------- - out : {ndarray, bool} - Output array of bools, or a single bool if x1 and x2 are scalars. - - See Also - -------- - not_equal, greater_equal, less_equal, greater, less - - Examples - -------- - >>> np.equal([0, 1, 3], np.arange(3)) - array([ True, True, False], dtype=bool) - - What is compared are values, not types. So an int (1) and an array of - length one can evaluate as True: - - >>> np.equal(1, np.ones(1)) - array([ True], dtype=bool) - - """) - -add_newdoc('numpy.core.umath', 'exp', - """ - Calculate the exponential of all elements in the input array. - - Parameters - ---------- - x : array_like - Input values. - - Returns - ------- - out : ndarray - Output array, element-wise exponential of `x`. - - See Also - -------- - expm1 : Calculate ``exp(x) - 1`` for all elements in the array. - exp2 : Calculate ``2**x`` for all elements in the array. - - Notes - ----- - The irrational number ``e`` is also known as Euler's number. It is - approximately 2.718281, and is the base of the natural logarithm, - ``ln`` (this means that, if :math:`x = \\ln y = \\log_e y`, - then :math:`e^x = y`). For real input, ``exp(x)`` is always positive. - - For complex arguments, ``x = a + ib``, we can write - :math:`e^x = e^a e^{ib}`. 
The first term, :math:`e^a`, is already - known (it is the real argument, described above). The second term, - :math:`e^{ib}`, is :math:`\\cos b + i \\sin b`, a function with magnitude - 1 and a periodic phase. - - References - ---------- - .. [1] Wikipedia, "Exponential function", - http://en.wikipedia.org/wiki/Exponential_function - .. [2] M. Abramowitz and I. A. Stegun, "Handbook of Mathematical Functions - with Formulas, Graphs, and Mathematical Tables," Dover, 1964, p. 69, - http://www.math.sfu.ca/~cbm/aands/page_69.htm - - Examples - -------- - Plot the magnitude and phase of ``exp(x)`` in the complex plane: - - >>> import matplotlib.pyplot as plt - - >>> x = np.linspace(-2*np.pi, 2*np.pi, 100) - >>> xx = x + 1j * x[:, np.newaxis] # a + ib over complex plane - >>> out = np.exp(xx) - - >>> plt.subplot(121) - >>> plt.imshow(np.abs(out), - ... extent=[-2*np.pi, 2*np.pi, -2*np.pi, 2*np.pi]) - >>> plt.title('Magnitude of exp(x)') - - >>> plt.subplot(122) - >>> plt.imshow(np.angle(out), - ... extent=[-2*np.pi, 2*np.pi, -2*np.pi, 2*np.pi]) - >>> plt.title('Phase (angle) of exp(x)') - >>> plt.show() - - """) - -add_newdoc('numpy.core.umath', 'exp2', - """ - Calculate `2**p` for all `p` in the input array. - - Parameters - ---------- - x : array_like - Input values. - - out : ndarray, optional - Array to insert results into. - - Returns - ------- - out : ndarray - Element-wise 2 to the power `x`. - - See Also - -------- - exp : calculate ``e**x`` for all elements in the array. - - Notes - ----- - .. versionadded:: 1.3.0 - - - - Examples - -------- - >>> np.exp2([2, 3]) - array([ 4., 8.]) - - """) - -add_newdoc('numpy.core.umath', 'expm1', - """ - Calculate ``exp(x) - 1`` for all elements in the array. - - Parameters - ---------- - x : array_like - Input values. - - Returns - ------- - out : ndarray - Element-wise exponential minus one: ``out = exp(x) - 1``. - - See Also - -------- - log1p : ``log(1 + x)``, the inverse of expm1. 
- - - Notes - ----- - This function provides greater precision than the formula ``exp(x) - 1`` - for small values of ``x``. - - Examples - -------- - The true value of ``exp(1e-10) - 1`` is ``1.00000000005e-10`` to - about 32 significant digits. This example shows the superiority of - expm1 in this case. - - >>> np.expm1(1e-10) - 1.00000000005e-10 - >>> np.exp(1e-10) - 1 - 1.000000082740371e-10 - - """) - -add_newdoc('numpy.core.umath', 'fabs', - """ - Compute the absolute values elementwise. - - This function returns the absolute values (positive magnitude) of the data - in `x`. Complex values are not handled, use `absolute` to find the - absolute values of complex data. - - Parameters - ---------- - x : array_like - The array of numbers for which the absolute values are required. If - `x` is a scalar, the result `y` will also be a scalar. - out : ndarray, optional - Array into which the output is placed. Its type is preserved and it - must be of the right shape to hold the output. See doc.ufuncs. - - Returns - ------- - y : {ndarray, scalar} - The absolute values of `x`, the returned values are always floats. - - See Also - -------- - absolute : Absolute values including `complex` types. - - Examples - -------- - >>> np.fabs(-1) - 1.0 - >>> np.fabs([-1.2, 1.2]) - array([ 1.2, 1.2]) - - """) - -add_newdoc('numpy.core.umath', 'floor', - """ - Return the floor of the input, element-wise. - - The floor of the scalar `x` is the largest integer `i`, such that - `i <= x`. It is often denoted as :math:`\\lfloor x \\rfloor`. - - Parameters - ---------- - x : array_like - Input data. - - Returns - ------- - y : {ndarray, scalar} - The floor of each element in `x`. - - See Also - -------- - ceil, trunc, rint - - Notes - ----- - Some spreadsheet programs calculate the "floor-towards-zero", in other - words ``floor(-2.5) == -2``. NumPy, however, uses the definition of - `floor` such that `floor(-2.5) == -3`. 
- - Examples - -------- - >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) - >>> np.floor(a) - array([-2., -2., -1., 0., 1., 1., 2.]) - - """) - -add_newdoc('numpy.core.umath', 'floor_divide', - """ - Return the largest integer smaller than or equal to the division of the inputs. - - Parameters - ---------- - x1 : array_like - Numerator. - x2 : array_like - Denominator. - - Returns - ------- - y : ndarray - y = floor(`x1`/`x2`) - - - See Also - -------- - divide : Standard division. - floor : Round a number to the nearest integer toward minus infinity. - ceil : Round a number to the nearest integer toward infinity. - - Examples - -------- - >>> np.floor_divide(7,3) - 2 - >>> np.floor_divide([1., 2., 3., 4.], 2.5) - array([ 0., 0., 1., 1.]) - - """) - -add_newdoc('numpy.core.umath', 'fmod', - """ - Return the element-wise remainder of division. - - This is the NumPy implementation of the C library function ``fmod``; - the result has the same sign as the dividend `x1`, unlike the Python - modulo operator ``%``, which matches `remainder`. - - Parameters - ---------- - x1 : array_like - Dividend. - x2 : array_like - Divisor. - - Returns - ------- - y : array_like - The remainder of the division of `x1` by `x2`. - - See Also - -------- - remainder : Modulo operation where the quotient is `floor(x1/x2)`. - divide - - Notes - ----- - The result of the modulo operation for negative dividend and divisors is - bound by conventions. In `fmod`, the sign of the remainder is the sign of - the dividend. In `remainder`, the sign of the result is the sign of - the divisor. - - Examples - -------- - >>> np.fmod([-3, -2, -1, 1, 2, 3], 2) - array([-1, 0, -1, 1, 0, 1]) - >>> np.remainder([-3, -2, -1, 1, 2, 3], 2) - array([1, 0, 1, 1, 0, 1]) - - >>> np.fmod([5, 3], [2, 2.]) - array([ 1., 1.]) - >>> a = np.arange(-3, 3).reshape(3, 2) - >>> a - array([[-3, -2], - [-1, 0], - [ 1, 2]]) - >>> np.fmod(a, [2,2]) - array([[-1, 0], - [-1, 0], - [ 1, 0]]) - - """) - -add_newdoc('numpy.core.umath', 'greater', - """ - Return the truth value of (x1 > x2) element-wise. 
- - Parameters - ---------- - x1, x2 : array_like - Input arrays. If ``x1.shape != x2.shape``, they must be - broadcastable to a common shape (which may be the shape of one or - the other). - - Returns - ------- - out : bool or ndarray of bool - Array of bools, or a single bool if `x1` and `x2` are scalars. - - - See Also - -------- - greater_equal, less, less_equal, equal, not_equal - - Examples - -------- - >>> np.greater([4,2],[2,2]) - array([ True, False], dtype=bool) - - If the inputs are ndarrays, then np.greater is equivalent to '>'. - - >>> a = np.array([4,2]) - >>> b = np.array([2,2]) - >>> a > b - array([ True, False], dtype=bool) - - """) - -add_newdoc('numpy.core.umath', 'greater_equal', - """ - Return the truth value of (x1 >= x2) element-wise. - - Parameters - ---------- - x1, x2 : array_like - Input arrays. If ``x1.shape != x2.shape``, they must be - broadcastable to a common shape (which may be the shape of one or - the other). - - Returns - ------- - out : bool or ndarray of bool - Array of bools, or a single bool if `x1` and `x2` are scalars. - - See Also - -------- - greater, less, less_equal, equal, not_equal - - Examples - -------- - >>> np.greater_equal([4, 2, 1], [2, 2, 2]) - array([ True, True, False], dtype=bool) - - """) - -add_newdoc('numpy.core.umath', 'hypot', - """ - Given the "legs" of a right triangle, return its hypotenuse. - - Equivalent to ``sqrt(x1**2 + x2**2)``, element-wise. If `x1` or - `x2` is scalar_like (i.e., unambiguously cast-able to a scalar type), - it is broadcast for use with each element of the other argument. - (See Examples) - - Parameters - ---------- - x1, x2 : array_like - Leg of the triangle(s). - out : ndarray, optional - Array into which the output is placed. Its type is preserved and it - must be of the right shape to hold the output. See doc.ufuncs. - - Returns - ------- - z : ndarray - The hypotenuse of the triangle(s). 
- - Examples - -------- - >>> np.hypot(3*np.ones((3, 3)), 4*np.ones((3, 3))) - array([[ 5., 5., 5.], - [ 5., 5., 5.], - [ 5., 5., 5.]]) - - Example showing broadcast of scalar_like argument: - - >>> np.hypot(3*np.ones((3, 3)), [4]) - array([[ 5., 5., 5.], - [ 5., 5., 5.], - [ 5., 5., 5.]]) - - """) - -add_newdoc('numpy.core.umath', 'invert', - """ - Compute bit-wise inversion, or bit-wise NOT, element-wise. - - Computes the bit-wise NOT of the underlying binary representation of - the integers in the input arrays. This ufunc implements the C/Python - operator ``~``. - - For signed integer inputs, the two's complement is returned. - In a two's-complement system negative numbers are represented by the two's - complement of the absolute value. This is the most common method of - representing signed integers on computers [1]_. An N-bit two's-complement - system can represent every integer in the range - :math:`-2^{N-1}` to :math:`+2^{N-1}-1`. - - Parameters - ---------- - x1 : array_like - Only integer types are handled (including booleans). - - Returns - ------- - out : array_like - Result. - - See Also - -------- - bitwise_and, bitwise_or, bitwise_xor - logical_not - binary_repr : - Return the binary representation of the input number as a string. - - Notes - ----- - `bitwise_not` is an alias for `invert`: - - >>> np.bitwise_not is np.invert - True - - References - ---------- - .. [1] Wikipedia, "Two's complement", - http://en.wikipedia.org/wiki/Two's_complement - - Examples - -------- - We've seen that 13 is represented by ``00001101``. 
- The invert or bit-wise NOT of 13 is then: - - >>> np.invert(np.array([13], dtype=np.uint8)) - array([242], dtype=uint8) - >>> np.binary_repr(13, width=8) - '00001101' - >>> np.binary_repr(242, width=8) - '11110010' - - The result depends on the bit-width: - - >>> np.invert(np.array([13], dtype=np.uint16)) - array([65522], dtype=uint16) - >>> np.binary_repr(13, width=16) - '0000000000001101' - >>> np.binary_repr(65522, width=16) - '1111111111110010' - - When using signed integer types the result is the two's complement of - the result for the unsigned type: - - >>> np.invert(np.array([13], dtype=np.int8)) - array([-14], dtype=int8) - >>> np.binary_repr(-14, width=8) - '11110010' - - Booleans are accepted as well: - - >>> np.invert(np.array([True, False])) - array([False, True], dtype=bool) - - """) - -add_newdoc('numpy.core.umath', 'isfinite', - """ - Test element-wise for finite-ness (not infinity and not Not a Number). - - The result is returned as a boolean array. - - Parameters - ---------- - x : array_like - Input values. - out : ndarray, optional - Array into which the output is placed. Its type is preserved and it - must be of the right shape to hold the output. See `doc.ufuncs`. - - Returns - ------- - y : ndarray, bool - For scalar input, the result is a new boolean with value True - if the input is finite; otherwise the value is False (input is - either positive infinity, negative infinity or Not a Number). - - For array input, the result is a boolean array with the same - dimensions as the input and the values are True if the corresponding - element of the input is finite; otherwise the values are False (element - is either positive infinity, negative infinity or Not a Number). - - See Also - -------- - isinf, isneginf, isposinf, isnan - - Notes - ----- - Not a Number, positive infinity and negative infinity are considered - to be non-finite. - - Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic - (IEEE 754). 
This means that Not a Number is not equivalent to infinity. - Also that positive infinity is not equivalent to negative infinity. But - infinity is equivalent to positive infinity. - Errors result if the second argument is also supplied when `x` is a scalar - input, or if first and second arguments have different shapes. - - Examples - -------- - >>> np.isfinite(1) - True - >>> np.isfinite(0) - True - >>> np.isfinite(np.nan) - False - >>> np.isfinite(np.inf) - False - >>> np.isfinite(np.NINF) - False - >>> np.isfinite([np.log(-1.),1.,np.log(0)]) - array([False, True, False], dtype=bool) - - >>> x = np.array([-np.inf, 0., np.inf]) - >>> y = np.array([2, 2, 2]) - >>> np.isfinite(x, y) - array([0, 1, 0]) - >>> y - array([0, 1, 0]) - - """) - -add_newdoc('numpy.core.umath', 'isinf', - """ - Test element-wise for positive or negative infinity. - - Return a bool-type array, the same shape as `x`, True where ``x == - +/-inf``, False everywhere else. - - Parameters - ---------- - x : array_like - Input values - out : array_like, optional - An array with the same shape as `x` to store the result. - - Returns - ------- - y : bool (scalar) or bool-type ndarray - For scalar input, the result is a new boolean with value True - if the input is positive or negative infinity; otherwise the value - is False. - - For array input, the result is a boolean array with the same - shape as the input and the values are True where the - corresponding element of the input is positive or negative - infinity; elsewhere the values are False. If a second argument - was supplied the result is stored there. If the type of that array - is a numeric type the result is represented as zeros and ones, if - the type is boolean then as False and True, respectively. - The return value `y` is then a reference to that array. - - See Also - -------- - isneginf, isposinf, isnan, isfinite - - Notes - ----- - Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic - (IEEE 754). 
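The same three-way classification (finite, infinite, NaN) exists for scalars in the stdlib `math` module, which can serve as a quick cross-check of the semantics described above:

```python
import math

values = [1.0, 0.0, float("nan"), float("inf"), float("-inf")]
print([math.isfinite(v) for v in values])  # [True, True, False, False, False]
print([math.isinf(v) for v in values])     # [False, False, False, True, True]
print([math.isnan(v) for v in values])     # [False, False, True, False, False]
```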
- - Errors result if the second argument is supplied when the first - argument is a scalar, or if the first and second arguments have - different shapes. - - Examples - -------- - >>> np.isinf(np.inf) - True - >>> np.isinf(np.nan) - False - >>> np.isinf(np.NINF) - True - >>> np.isinf([np.inf, -np.inf, 1.0, np.nan]) - array([ True, True, False, False], dtype=bool) - - >>> x = np.array([-np.inf, 0., np.inf]) - >>> y = np.array([2, 2, 2]) - >>> np.isinf(x, y) - array([1, 0, 1]) - >>> y - array([1, 0, 1]) - - """) - -add_newdoc('numpy.core.umath', 'isnan', - """ - Test element-wise for Not a Number (NaN), return result as a bool array. - - Parameters - ---------- - x : array_like - Input array. - - Returns - ------- - y : {ndarray, bool} - For scalar input, the result is a new boolean with value True - if the input is NaN; otherwise the value is False. - - For array input, the result is a boolean array with the same - dimensions as the input and the values are True if the corresponding - element of the input is NaN; otherwise the values are False. - - See Also - -------- - isinf, isneginf, isposinf, isfinite - - Notes - ----- - Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic - (IEEE 754). This means that Not a Number is not equivalent to infinity. - - Examples - -------- - >>> np.isnan(np.nan) - True - >>> np.isnan(np.inf) - False - >>> np.isnan([np.log(-1.),1.,np.log(0)]) - array([ True, False, False], dtype=bool) - - """) - -add_newdoc('numpy.core.umath', 'left_shift', - """ - Shift the bits of an integer to the left. - - Bits are shifted to the left by appending `x2` 0s at the right of `x1`. - Since the internal representation of numbers is in binary format, this - operation is equivalent to multiplying `x1` by ``2**x2``. - - Parameters - ---------- - x1 : array_like of integer type - Input values. - x2 : array_like of integer type - Number of zeros to append to `x1`. Has to be non-negative. 
-
-    Returns
-    -------
-    out : array of integer type
-        Return `x1` with bits shifted `x2` times to the left.
-
-    See Also
-    --------
-    right_shift : Shift the bits of an integer to the right.
-    binary_repr : Return the binary representation of the input number
-        as a string.
-
-    Examples
-    --------
-    >>> np.binary_repr(5)
-    '101'
-    >>> np.left_shift(5, 2)
-    20
-    >>> np.binary_repr(20)
-    '10100'
-
-    >>> np.left_shift(5, [1,2,3])
-    array([10, 20, 40])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'less',
-    """
-    Return the truth value of (x1 < x2) element-wise.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        Input arrays. If ``x1.shape != x2.shape``, they must be
-        broadcastable to a common shape (which may be the shape of one or
-        the other).
-
-    Returns
-    -------
-    out : bool or ndarray of bool
-        Array of bools, or a single bool if `x1` and `x2` are scalars.
-
-    See Also
-    --------
-    greater, less_equal, greater_equal, equal, not_equal
-
-    Examples
-    --------
-    >>> np.less([1, 2], [2, 2])
-    array([ True, False], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'less_equal',
-    """
-    Return the truth value of (x1 <= x2) element-wise.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        Input arrays. If ``x1.shape != x2.shape``, they must be
-        broadcastable to a common shape (which may be the shape of one or
-        the other).
-
-    Returns
-    -------
-    out : bool or ndarray of bool
-        Array of bools, or a single bool if `x1` and `x2` are scalars.
-
-    See Also
-    --------
-    greater, less, greater_equal, equal, not_equal
-
-    Examples
-    --------
-    >>> np.less_equal([4, 2, 1], [2, 2, 2])
-    array([False,  True,  True], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'log',
-    """
-    Natural logarithm, element-wise.
-
-    The natural logarithm `log` is the inverse of the exponential function,
-    so that `log(exp(x)) = x`. The natural logarithm is the logarithm in
-    base `e`.
-
-    Parameters
-    ----------
-    x : array_like
-        Input value.
- - Returns - ------- - y : ndarray - The natural logarithm of `x`, element-wise. - - See Also - -------- - log10, log2, log1p, emath.log - - Notes - ----- - Logarithm is a multivalued function: for each `x` there is an infinite - number of `z` such that `exp(z) = x`. The convention is to return the `z` - whose imaginary part lies in `[-pi, pi]`. - - For real-valued input data types, `log` always returns real output. For - each value that cannot be expressed as a real number or infinity, it - yields ``nan`` and sets the `invalid` floating point error flag. - - For complex-valued input, `log` is a complex analytical function that - has a branch cut `[-inf, 0]` and is continuous from above on it. `log` - handles the floating-point negative zero as an infinitesimal negative - number, conforming to the C99 standard. - - References - ---------- - .. [1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions", - 10th printing, 1964, pp. 67. http://www.math.sfu.ca/~cbm/aands/ - .. [2] Wikipedia, "Logarithm". http://en.wikipedia.org/wiki/Logarithm - - Examples - -------- - >>> np.log([1, np.e, np.e**2, 0]) - array([ 0., 1., 2., -Inf]) - - """) - -add_newdoc('numpy.core.umath', 'log10', - """ - Return the base 10 logarithm of the input array, element-wise. - - Parameters - ---------- - x : array_like - Input values. - - Returns - ------- - y : ndarray - The logarithm to the base 10 of `x`, element-wise. NaNs are - returned where x is negative. - - See Also - -------- - emath.log10 - - Notes - ----- - Logarithm is a multivalued function: for each `x` there is an infinite - number of `z` such that `10**z = x`. The convention is to return the `z` - whose imaginary part lies in `[-pi, pi]`. - - For real-valued input data types, `log10` always returns real output. For - each value that cannot be expressed as a real number or infinity, it - yields ``nan`` and sets the `invalid` floating point error flag. 
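The real-vs-complex behavior described in these Notes can be illustrated with the stdlib: `math.log` raises for a negative real (where the numpy ufuncs are documented above to return ``nan``), while `cmath.log` picks the branch whose imaginary part lies in `[-pi, pi]`:

```python
import cmath
import math

# Real log of a negative number is not representable as a real value.
try:
    math.log(-1.0)
except ValueError:
    print("math domain error")

# The complex logarithm chooses the principal branch:
print(cmath.log(-1.0))   # 3.141592653589793j
```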
- - For complex-valued input, `log10` is a complex analytical function that - has a branch cut `[-inf, 0]` and is continuous from above on it. `log10` - handles the floating-point negative zero as an infinitesimal negative - number, conforming to the C99 standard. - - References - ---------- - .. [1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions", - 10th printing, 1964, pp. 67. http://www.math.sfu.ca/~cbm/aands/ - .. [2] Wikipedia, "Logarithm". http://en.wikipedia.org/wiki/Logarithm - - Examples - -------- - >>> np.log10([1e-15, -3.]) - array([-15., NaN]) - - """) - -add_newdoc('numpy.core.umath', 'log2', - """ - Base-2 logarithm of `x`. - - Parameters - ---------- - x : array_like - Input values. - - Returns - ------- - y : ndarray - Base-2 logarithm of `x`. - - See Also - -------- - log, log10, log1p, emath.log2 - - Notes - ----- - .. versionadded:: 1.3.0 - - Logarithm is a multivalued function: for each `x` there is an infinite - number of `z` such that `2**z = x`. The convention is to return the `z` - whose imaginary part lies in `[-pi, pi]`. - - For real-valued input data types, `log2` always returns real output. For - each value that cannot be expressed as a real number or infinity, it - yields ``nan`` and sets the `invalid` floating point error flag. - - For complex-valued input, `log2` is a complex analytical function that - has a branch cut `[-inf, 0]` and is continuous from above on it. `log2` - handles the floating-point negative zero as an infinitesimal negative - number, conforming to the C99 standard. - - Examples - -------- - >>> x = np.array([0, 1, 2, 2**4]) - >>> np.log2(x) - array([-Inf, 0., 1., 4.]) - - >>> xi = np.array([0+1.j, 1, 2+0.j, 4.j]) - >>> np.log2(xi) - array([ 0.+2.26618007j, 0.+0.j , 1.+0.j , 2.+2.26618007j]) - - """) - -add_newdoc('numpy.core.umath', 'logaddexp', - """ - Logarithm of the sum of exponentiations of the inputs. - - Calculates ``log(exp(x1) + exp(x2))``. 
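The ``log(exp(x1) + exp(x2))`` computation just introduced can be sketched for scalars in plain Python using the standard log-sum-exp rewrite (the helper name `logaddexp` here is a local illustration, not numpy's implementation):

```python
import math

def logaddexp(a, b):
    """log(exp(a) + exp(b)) without intermediate overflow/underflow."""
    hi, lo = max(a, b), min(a, b)
    return hi + math.log1p(math.exp(lo - hi))

# Naively, exp(-1000.0) underflows to 0.0 and the log would be -inf;
# the rewritten form keeps full precision:
print(logaddexp(-1000.0, -1000.0))   # -1000 + log(2) ~= -999.3069
```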
This function is useful in - statistics where the calculated probabilities of events may be so small - as to exceed the range of normal floating point numbers. In such cases - the logarithm of the calculated probability is stored. This function - allows adding probabilities stored in such a fashion. - - Parameters - ---------- - x1, x2 : array_like - Input values. - - Returns - ------- - result : ndarray - Logarithm of ``exp(x1) + exp(x2)``. - - See Also - -------- - logaddexp2: Logarithm of the sum of exponentiations of inputs in base-2. - - Notes - ----- - .. versionadded:: 1.3.0 - - Examples - -------- - >>> prob1 = np.log(1e-50) - >>> prob2 = np.log(2.5e-50) - >>> prob12 = np.logaddexp(prob1, prob2) - >>> prob12 - -113.87649168120691 - >>> np.exp(prob12) - 3.5000000000000057e-50 - - """) - -add_newdoc('numpy.core.umath', 'logaddexp2', - """ - Logarithm of the sum of exponentiations of the inputs in base-2. - - Calculates ``log2(2**x1 + 2**x2)``. This function is useful in machine - learning when the calculated probabilities of events may be so small - as to exceed the range of normal floating point numbers. In such cases - the base-2 logarithm of the calculated probability can be used instead. - This function allows adding probabilities stored in such a fashion. - - Parameters - ---------- - x1, x2 : array_like - Input values. - out : ndarray, optional - Array to store results in. - - Returns - ------- - result : ndarray - Base-2 logarithm of ``2**x1 + 2**x2``. - - See Also - -------- - logaddexp: Logarithm of the sum of exponentiations of the inputs. - - Notes - ----- - .. 
versionadded:: 1.3.0 - - Examples - -------- - >>> prob1 = np.log2(1e-50) - >>> prob2 = np.log2(2.5e-50) - >>> prob12 = np.logaddexp2(prob1, prob2) - >>> prob1, prob2, prob12 - (-166.09640474436813, -164.77447664948076, -164.28904982231052) - >>> 2**prob12 - 3.4999999999999914e-50 - - """) - -add_newdoc('numpy.core.umath', 'log1p', - """ - Return the natural logarithm of one plus the input array, element-wise. - - Calculates ``log(1 + x)``. - - Parameters - ---------- - x : array_like - Input values. - - Returns - ------- - y : ndarray - Natural logarithm of `1 + x`, element-wise. - - See Also - -------- - expm1 : ``exp(x) - 1``, the inverse of `log1p`. - - Notes - ----- - For real-valued input, `log1p` is accurate also for `x` so small - that `1 + x == 1` in floating-point accuracy. - - Logarithm is a multivalued function: for each `x` there is an infinite - number of `z` such that `exp(z) = 1 + x`. The convention is to return - the `z` whose imaginary part lies in `[-pi, pi]`. - - For real-valued input data types, `log1p` always returns real output. For - each value that cannot be expressed as a real number or infinity, it - yields ``nan`` and sets the `invalid` floating point error flag. - - For complex-valued input, `log1p` is a complex analytical function that - has a branch cut `[-inf, -1]` and is continuous from above on it. `log1p` - handles the floating-point negative zero as an infinitesimal negative - number, conforming to the C99 standard. - - References - ---------- - .. [1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions", - 10th printing, 1964, pp. 67. http://www.math.sfu.ca/~cbm/aands/ - .. [2] Wikipedia, "Logarithm". http://en.wikipedia.org/wiki/Logarithm - - Examples - -------- - >>> np.log1p(1e-99) - 1e-99 - >>> np.log(1 + 1e-99) - 0.0 - - """) - -add_newdoc('numpy.core.umath', 'logical_and', - """ - Compute the truth value of x1 AND x2 elementwise. - - Parameters - ---------- - x1, x2 : array_like - Input arrays. 
`x1` and `x2` must be of the same shape. - - - Returns - ------- - y : {ndarray, bool} - Boolean result with the same shape as `x1` and `x2` of the logical - AND operation on corresponding elements of `x1` and `x2`. - - See Also - -------- - logical_or, logical_not, logical_xor - bitwise_and - - Examples - -------- - >>> np.logical_and(True, False) - False - >>> np.logical_and([True, False], [False, False]) - array([False, False], dtype=bool) - - >>> x = np.arange(5) - >>> np.logical_and(x>1, x<4) - array([False, False, True, True, False], dtype=bool) - - """) - -add_newdoc('numpy.core.umath', 'logical_not', - """ - Compute the truth value of NOT x elementwise. - - Parameters - ---------- - x : array_like - Logical NOT is applied to the elements of `x`. - - Returns - ------- - y : bool or ndarray of bool - Boolean result with the same shape as `x` of the NOT operation - on elements of `x`. - - See Also - -------- - logical_and, logical_or, logical_xor - - Examples - -------- - >>> np.logical_not(3) - False - >>> np.logical_not([True, False, 0, 1]) - array([False, True, True, False], dtype=bool) - - >>> x = np.arange(5) - >>> np.logical_not(x<3) - array([False, False, False, True, True], dtype=bool) - - """) - -add_newdoc('numpy.core.umath', 'logical_or', - """ - Compute the truth value of x1 OR x2 elementwise. - - Parameters - ---------- - x1, x2 : array_like - Logical OR is applied to the elements of `x1` and `x2`. - They have to be of the same shape. - - Returns - ------- - y : {ndarray, bool} - Boolean result with the same shape as `x1` and `x2` of the logical - OR operation on elements of `x1` and `x2`. 
- - See Also - -------- - logical_and, logical_not, logical_xor - bitwise_or - - Examples - -------- - >>> np.logical_or(True, False) - True - >>> np.logical_or([True, False], [False, False]) - array([ True, False], dtype=bool) - - >>> x = np.arange(5) - >>> np.logical_or(x < 1, x > 3) - array([ True, False, False, False, True], dtype=bool) - - """) - -add_newdoc('numpy.core.umath', 'logical_xor', - """ - Compute the truth value of x1 XOR x2, element-wise. - - Parameters - ---------- - x1, x2 : array_like - Logical XOR is applied to the elements of `x1` and `x2`. They must - be broadcastable to the same shape. - - Returns - ------- - y : bool or ndarray of bool - Boolean result of the logical XOR operation applied to the elements - of `x1` and `x2`; the shape is determined by whether or not - broadcasting of one or both arrays was required. - - See Also - -------- - logical_and, logical_or, logical_not, bitwise_xor - - Examples - -------- - >>> np.logical_xor(True, False) - True - >>> np.logical_xor([True, True, False, False], [True, False, True, False]) - array([False, True, True, False], dtype=bool) - - >>> x = np.arange(5) - >>> np.logical_xor(x < 1, x > 3) - array([ True, False, False, False, True], dtype=bool) - - Simple example showing support of broadcasting - - >>> np.logical_xor(0, np.eye(2)) - array([[ True, False], - [False, True]], dtype=bool) - - """) - -add_newdoc('numpy.core.umath', 'maximum', - """ - Element-wise maximum of array elements. - - Compare two arrays and returns a new array containing - the element-wise maxima. If one of the elements being - compared is a nan, then that element is returned. If - both elements are nans then the first is returned. The - latter distinction is important for complex nans, - which are defined as at least one of the real or - imaginary parts being a nan. The net effect is that - nans are propagated. - - Parameters - ---------- - x1, x2 : array_like - The arrays holding the elements to be compared. 
They must have
-        the same shape, or shapes that can be broadcast to a single shape.
-
-    Returns
-    -------
-    y : {ndarray, scalar}
-        The maximum of `x1` and `x2`, element-wise. Returns scalar if
-        both `x1` and `x2` are scalars.
-
-    See Also
-    --------
-    minimum :
-        element-wise minimum that propagates nans.
-    fmax :
-        element-wise maximum that ignores nans unless both inputs are nans.
-    fmin :
-        element-wise minimum that ignores nans unless both inputs are nans.
-
-    Notes
-    -----
-    Equivalent to ``np.where(x1 > x2, x1, x2)`` when neither x1 nor x2
-    are nans, but it is faster and does proper broadcasting.
-
-    Examples
-    --------
-    >>> np.maximum([2, 3, 4], [1, 5, 2])
-    array([2, 5, 4])
-
-    >>> np.maximum(np.eye(2), [0.5, 2])
-    array([[ 1. ,  2. ],
-           [ 0.5,  2. ]])
-
-    >>> np.maximum([np.nan, 0, np.nan], [0, np.nan, np.nan])
-    array([ NaN,  NaN,  NaN])
-    >>> np.maximum(np.Inf, 1)
-    inf
-
-    """)
-
-add_newdoc('numpy.core.umath', 'minimum',
-    """
-    Element-wise minimum of array elements.
-
-    Compare two arrays and return a new array containing the element-wise
-    minima. If one of the elements being compared is a nan, then that element
-    is returned. If both elements are nans then the first is returned. The
-    latter distinction is important for complex nans, which are defined as at
-    least one of the real or imaginary parts being a nan. The net effect is
-    that nans are propagated.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        The arrays holding the elements to be compared. They must have
-        the same shape, or shapes that can be broadcast to a single shape.
-
-    Returns
-    -------
-    y : {ndarray, scalar}
-        The minimum of `x1` and `x2`, element-wise. Returns scalar if
-        both `x1` and `x2` are scalars.
-
-    See Also
-    --------
-    maximum :
-        element-wise maximum that propagates nans.
-    fmax :
-        element-wise maximum that ignores nans unless both inputs are nans.
-    fmin :
-        element-wise minimum that ignores nans unless both inputs are nans.
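The two nan conventions described here — propagating (`maximum`/`minimum`) versus ignoring (`fmax`/`fmin`) — can be sketched for scalars in plain Python; the helper names `propagate_max` and `ignore_max` are invented for illustration:

```python
import math

def propagate_max(a, b):
    """Like np.maximum: if either operand is NaN, the result is NaN."""
    if math.isnan(a) or math.isnan(b):
        return float("nan")
    return a if a > b else b

def ignore_max(a, b):
    """Like np.fmax: prefer the non-NaN operand when possible."""
    if math.isnan(a):
        return b
    if math.isnan(b):
        return a
    return a if a > b else b

nan = float("nan")
print(propagate_max(nan, 0.0))  # nan
print(ignore_max(nan, 0.0))     # 0.0
```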
-
-    Notes
-    -----
-    The minimum is equivalent to ``np.where(x1 <= x2, x1, x2)`` when neither
-    x1 nor x2 are nans, but it is faster and does proper broadcasting.
-
-    Examples
-    --------
-    >>> np.minimum([2, 3, 4], [1, 5, 2])
-    array([1, 3, 2])
-
-    >>> np.minimum(np.eye(2), [0.5, 2]) # broadcasting
-    array([[ 0.5,  0. ],
-           [ 0. ,  1. ]])
-
-    >>> np.minimum([np.nan, 0, np.nan], [0, np.nan, np.nan])
-    array([ NaN,  NaN,  NaN])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'fmax',
-    """
-    Element-wise maximum of array elements.
-
-    Compare two arrays and return a new array containing the element-wise
-    maxima. If one of the elements being compared is a nan, then the non-nan
-    element is returned. If both elements are nans then the first is returned.
-    The latter distinction is important for complex nans, which are defined as
-    at least one of the real or imaginary parts being a nan. The net effect is
-    that nans are ignored when possible.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        The arrays holding the elements to be compared. They must have
-        the same shape.
-
-    Returns
-    -------
-    y : {ndarray, scalar}
-        The maximum of `x1` and `x2`, element-wise. Returns scalar if
-        both `x1` and `x2` are scalars.
-
-    See Also
-    --------
-    fmin :
-        element-wise minimum that ignores nans unless both inputs are nans.
-    maximum :
-        element-wise maximum that propagates nans.
-    minimum :
-        element-wise minimum that propagates nans.
-
-    Notes
-    -----
-    .. versionadded:: 1.3.0
-
-    ``fmax`` is equivalent to ``np.where(x1 >= x2, x1, x2)`` when neither
-    x1 nor x2 are nans, but it is faster and does proper broadcasting.
-
-    Examples
-    --------
-    >>> np.fmax([2, 3, 4], [1, 5, 2])
-    array([2, 5, 4])
-
-    >>> np.fmax(np.eye(2), [0.5, 2])
-    array([[ 1. ,  2. ],
-           [ 0.5,  2. ]])
-
-    >>> np.fmax([np.nan, 0, np.nan], [0, np.nan, np.nan])
-    array([  0.,   0.,  NaN])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'fmin',
-    """
-    fmin(x1, x2[, out])
-
-    Element-wise minimum of array elements.
-
-    Compare two arrays and return a new array containing the element-wise
-    minima. If one of the elements being compared is a nan, then the non-nan
-    element is returned. If both elements are nans then the first is returned.
-    The latter distinction is important for complex nans, which are defined as
-    at least one of the real or imaginary parts being a nan. The net effect is
-    that nans are ignored when possible.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        The arrays holding the elements to be compared. They must have
-        the same shape.
-
-    Returns
-    -------
-    y : {ndarray, scalar}
-        The minimum of `x1` and `x2`, element-wise. Returns scalar if
-        both `x1` and `x2` are scalars.
-
-    See Also
-    --------
-    fmax :
-        element-wise maximum that ignores nans unless both inputs are nans.
-    maximum :
-        element-wise maximum that propagates nans.
-    minimum :
-        element-wise minimum that propagates nans.
-
-    Notes
-    -----
-    .. versionadded:: 1.3.0
-
-    ``fmin`` is equivalent to ``np.where(x1 <= x2, x1, x2)`` when neither
-    x1 nor x2 are nans, but it is faster and does proper broadcasting.
-
-    Examples
-    --------
-    >>> np.fmin([2, 3, 4], [1, 5, 2])
-    array([1, 3, 2])
-
-    >>> np.fmin(np.eye(2), [0.5, 2])
-    array([[ 0.5,  0. ],
-           [ 0. ,  1. ]])
-
-    >>> np.fmin([np.nan, 0, np.nan], [0, np.nan, np.nan])
-    array([  0.,   0.,  NaN])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'modf',
-    """
-    Return the fractional and integral parts of an array, element-wise.
-
-    The fractional and integral parts are negative if the given number is
-    negative.
-
-    Parameters
-    ----------
-    x : array_like
-        Input array.
-
-    Returns
-    -------
-    y1 : ndarray
-        Fractional part of `x`.
-    y2 : ndarray
-        Integral part of `x`.
-
-    Notes
-    -----
-    For integer input the return values are floats.
-
-    Examples
-    --------
-    >>> np.modf([0, 3.5])
-    (array([ 0. ,  0.5]), array([ 0.,  3.]))
-    >>> np.modf(-0.5)
-    (-0.5, -0.0)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'multiply',
-    """
-    Multiply arguments element-wise.
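The fractional/integral split described for `modf`, including the sign behavior, mirrors the stdlib `math.modf` for scalars:

```python
import math

# math.modf splits a float into (fractional, integral) parts; both parts
# carry the sign of the input, as in the np.modf description above.
print(math.modf(3.5))    # (0.5, 3.0)
print(math.modf(-0.5))   # (-0.5, -0.0)
```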
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        Input arrays to be multiplied.
-
-    Returns
-    -------
-    y : ndarray
-        The product of `x1` and `x2`, element-wise. Returns a scalar if
-        both `x1` and `x2` are scalars.
-
-    Notes
-    -----
-    Equivalent to `x1` * `x2` in terms of array broadcasting.
-
-    Examples
-    --------
-    >>> np.multiply(2.0, 4.0)
-    8.0
-
-    >>> x1 = np.arange(9.0).reshape((3, 3))
-    >>> x2 = np.arange(3.0)
-    >>> np.multiply(x1, x2)
-    array([[  0.,   1.,   4.],
-           [  0.,   4.,  10.],
-           [  0.,   7.,  16.]])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'negative',
-    """
-    Returns an array with the negative of each element of the original array.
-
-    Parameters
-    ----------
-    x : array_like or scalar
-        Input array.
-
-    Returns
-    -------
-    y : ndarray or scalar
-        Returned array or scalar: `y = -x`.
-
-    Examples
-    --------
-    >>> np.negative([1.,-1.])
-    array([-1.,  1.])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'not_equal',
-    """
-    Return (x1 != x2) element-wise.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        Input arrays.
-    out : ndarray, optional
-        A placeholder the same shape as `x1` to store the result.
-        See `doc.ufuncs` (Section "Output arguments") for more details.
-
-    Returns
-    -------
-    not_equal : ndarray bool, scalar bool
-        For each element in `x1, x2`, return True if `x1` is not equal
-        to `x2` and False otherwise.
-
-    See Also
-    --------
-    equal, greater, greater_equal, less, less_equal
-
-    Examples
-    --------
-    >>> np.not_equal([1.,2.], [1., 3.])
-    array([False,  True], dtype=bool)
-    >>> np.not_equal([1, 2], [[1, 3],[1, 4]])
-    array([[False,  True],
-           [False,  True]], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'ones_like',
-    """
-    Returns an array of ones with the same shape and type as a given array.
-
-    Equivalent to ``np.ones(a.shape, a.dtype)``.
-
-    Please refer to the documentation for `zeros_like` for further details.
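The row-broadcast pattern in the `multiply` example above (a `(3, 3)` array times a `(3,)` array) can be reproduced for that one case in plain Python, as a sketch of what broadcasting does element-wise:

```python
# x1 has shape (3, 3); x2 has shape (3,) and is applied to every row.
x1 = [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]]
x2 = [0.0, 1.0, 2.0]
out = [[a * b for a, b in zip(row, x2)] for row in x1]
print(out)   # [[0.0, 1.0, 4.0], [0.0, 4.0, 10.0], [0.0, 7.0, 16.0]]
```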
- - See Also - -------- - zeros_like, ones - - Examples - -------- - >>> a = np.array([[1, 2, 3], [4, 5, 6]]) - >>> np.ones_like(a) - array([[1, 1, 1], - [1, 1, 1]]) - - """) - -add_newdoc('numpy.core.umath', 'power', - """ - First array elements raised to powers from second array, element-wise. - - Raise each base in `x1` to the positionally-corresponding power in - `x2`. `x1` and `x2` must be broadcastable to the same shape. - - Parameters - ---------- - x1 : array_like - The bases. - x2 : array_like - The exponents. - - Returns - ------- - y : ndarray - The bases in `x1` raised to the exponents in `x2`. - - Examples - -------- - Cube each element in a list. - - >>> x1 = range(6) - >>> x1 - [0, 1, 2, 3, 4, 5] - >>> np.power(x1, 3) - array([ 0, 1, 8, 27, 64, 125]) - - Raise the bases to different exponents. - - >>> x2 = [1.0, 2.0, 3.0, 3.0, 2.0, 1.0] - >>> np.power(x1, x2) - array([ 0., 1., 8., 27., 16., 5.]) - - The effect of broadcasting. - - >>> x2 = np.array([[1, 2, 3, 3, 2, 1], [1, 2, 3, 3, 2, 1]]) - >>> x2 - array([[1, 2, 3, 3, 2, 1], - [1, 2, 3, 3, 2, 1]]) - >>> np.power(x1, x2) - array([[ 0, 1, 8, 27, 16, 5], - [ 0, 1, 8, 27, 16, 5]]) - - """) - -add_newdoc('numpy.core.umath', 'radians', - """ - Convert angles from degrees to radians. - - Parameters - ---------- - x : array_like - Input array in degrees. - out : ndarray, optional - Output array of same shape as `x`. - - Returns - ------- - y : ndarray - The corresponding radian values. - - See Also - -------- - deg2rad : equivalent function - - Examples - -------- - Convert a degree array to radians - - >>> deg = np.arange(12.) * 30. - >>> np.radians(deg) - array([ 0. , 0.52359878, 1.04719755, 1.57079633, 2.0943951 , - 2.61799388, 3.14159265, 3.66519143, 4.1887902 , 4.71238898, - 5.23598776, 5.75958653]) - - >>> out = np.zeros((deg.shape)) - >>> ret = np.radians(deg, out) - >>> ret is out - True - - """) - -add_newdoc('numpy.core.umath', 'deg2rad', - """ - Convert angles from degrees to radians. 
- - Parameters - ---------- - x : array_like - Angles in degrees. - - Returns - ------- - y : ndarray - The corresponding angle in radians. - - See Also - -------- - rad2deg : Convert angles from radians to degrees. - unwrap : Remove large jumps in angle by wrapping. - - Notes - ----- - .. versionadded:: 1.3.0 - - ``deg2rad(x)`` is ``x * pi / 180``. - - Examples - -------- - >>> np.deg2rad(180) - 3.1415926535897931 - - """) - -add_newdoc('numpy.core.umath', 'reciprocal', - """ - Return the reciprocal of the argument, element-wise. - - Calculates ``1/x``. - - Parameters - ---------- - x : array_like - Input array. - - Returns - ------- - y : ndarray - Return array. - - Notes - ----- - .. note:: - This function is not designed to work with integers. - - For integer arguments with absolute value larger than 1 the result is - always zero because of the way Python handles integer division. - For integer zero the result is an overflow. - - Examples - -------- - >>> np.reciprocal(2.) - 0.5 - >>> np.reciprocal([1, 2., 3.33]) - array([ 1. , 0.5 , 0.3003003]) - - """) - -add_newdoc('numpy.core.umath', 'remainder', - """ - Return element-wise remainder of division. - - Computes ``x1 - floor(x1 / x2) * x2``. - - Parameters - ---------- - x1 : array_like - Dividend array. - x2 : array_like - Divisor array. - out : ndarray, optional - Array into which the output is placed. Its type is preserved and it - must be of the right shape to hold the output. See doc.ufuncs. - - Returns - ------- - y : ndarray - The remainder of the quotient ``x1/x2``, element-wise. Returns a scalar - if both `x1` and `x2` are scalars. - - See Also - -------- - divide, floor - - Notes - ----- - Returns 0 when `x2` is 0 and both `x1` and `x2` are (arrays of) integers. 
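The formula given for `remainder`, ``x1 - floor(x1 / x2) * x2``, matches Python's own ``%`` operator for a nonzero divisor; a scalar sketch (the local helper `remainder` is for illustration only):

```python
import math

def remainder(x1, x2):
    """x1 - floor(x1 / x2) * x2, the formula stated for np.remainder."""
    return x1 - math.floor(x1 / x2) * x2

print(remainder(7, 3))     # 1
print(remainder(-7, 3))    # 2, same sign as the divisor
print(7 % 3, -7 % 3)       # 1 2 -- Python's % uses the same convention
```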
- - Examples - -------- - >>> np.remainder([4, 7], [2, 3]) - array([0, 1]) - >>> np.remainder(np.arange(7), 5) - array([0, 1, 2, 3, 4, 0, 1]) - - """) - -add_newdoc('numpy.core.umath', 'right_shift', - """ - Shift the bits of an integer to the right. - - Bits are shifted to the right by removing `x2` bits at the right of `x1`. - Since the internal representation of numbers is in binary format, this - operation is equivalent to dividing `x1` by ``2**x2``. - - Parameters - ---------- - x1 : array_like, int - Input values. - x2 : array_like, int - Number of bits to remove at the right of `x1`. - - Returns - ------- - out : ndarray, int - Return `x1` with bits shifted `x2` times to the right. - - See Also - -------- - left_shift : Shift the bits of an integer to the left. - binary_repr : Return the binary representation of the input number - as a string. - - Examples - -------- - >>> np.binary_repr(10) - '1010' - >>> np.right_shift(10, 1) - 5 - >>> np.binary_repr(5) - '101' - - >>> np.right_shift(10, [1,2,3]) - array([5, 2, 1]) - - """) - -add_newdoc('numpy.core.umath', 'rint', - """ - Round elements of the array to the nearest integer. - - Parameters - ---------- - x : array_like - Input array. - - Returns - ------- - out : {ndarray, scalar} - Output array is same shape and type as `x`. - - See Also - -------- - ceil, floor, trunc - - Examples - -------- - >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) - >>> np.rint(a) - array([-2., -2., -0., 0., 2., 2., 2.]) - - """) - -add_newdoc('numpy.core.umath', 'sign', - """ - Returns an element-wise indication of the sign of a number. - - The `sign` function returns ``-1 if x < 0, 0 if x==0, 1 if x > 0``. - - Parameters - ---------- - x : array_like - Input values. - - Returns - ------- - y : ndarray - The sign of `x`. 
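The three-valued rule stated for `sign` can be written as a one-line scalar sketch, using the fact that Python booleans subtract as integers:

```python
def sign(x):
    """Scalar version of the rule above: -1 if x < 0, 0 if x == 0, 1 if x > 0."""
    return (x > 0) - (x < 0)

print(sign(-5.0), sign(0), sign(4.5))   # -1 0 1
```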
- - Examples - -------- - >>> np.sign([-5., 4.5]) - array([-1., 1.]) - >>> np.sign(0) - 0 - - """) - -add_newdoc('numpy.core.umath', 'signbit', - """ - Returns element-wise True where signbit is set (less than zero). - - Parameters - ---------- - x: array_like - The input value(s). - out : ndarray, optional - Array into which the output is placed. Its type is preserved - and it must be of the right shape to hold the output. - See `doc.ufuncs`. - - Returns - ------- - result : ndarray of bool - Output array, or reference to `out` if that was supplied. - - Examples - -------- - >>> np.signbit(-1.2) - True - >>> np.signbit(np.array([1, -2.3, 2.1])) - array([False, True, False], dtype=bool) - - """) - -add_newdoc('numpy.core.umath', 'copysign', - """ - Change the sign of x1 to that of x2, element-wise. - - If both arguments are arrays or sequences, they have to be of the same - length. If `x2` is a scalar, its sign will be copied to all elements of - `x1`. - - Parameters - ---------- - x1: array_like - Values to change the sign of. - x2: array_like - The sign of `x2` is copied to `x1`. - out : ndarray, optional - Array into which the output is placed. Its type is preserved and it - must be of the right shape to hold the output. See doc.ufuncs. - - Returns - ------- - out : array_like - The values of `x1` with the sign of `x2`. - - Examples - -------- - >>> np.copysign(1.3, -1) - -1.3 - >>> 1/np.copysign(0, 1) - inf - >>> 1/np.copysign(0, -1) - -inf - - >>> np.copysign([-1, 0, 1], -1.1) - array([-1., -0., -1.]) - >>> np.copysign([-1, 0, 1], np.arange(3)-1) - array([-1., 0., 1.]) - - """) - -add_newdoc('numpy.core.umath', 'nextafter', - """ - Return the next representable floating-point value after x1 in the direction - of x2 element-wise. - - Parameters - ---------- - x1 : array_like - Values to find the next representable value of. - x2 : array_like - The direction where to look for the next representable value of `x1`. 
- out : ndarray, optional - Array into which the output is placed. Its type is preserved and it - must be of the right shape to hold the output. See `doc.ufuncs`. - - Returns - ------- - out : array_like - The next representable values of `x1` in the direction of `x2`. - - Examples - -------- - >>> eps = np.finfo(np.float64).eps - >>> np.nextafter(1, 2) == eps + 1 - True - >>> np.nextafter([1, 2], [2, 1]) == [eps + 1, 2 - eps] - array([ True, True], dtype=bool) - - """) - -add_newdoc('numpy.core.umath', 'spacing', - """ - Return the distance between x and the nearest adjacent number. - - Parameters - ---------- - x1 : array_like - Values to find the spacing of. - - Returns - ------- - out : array_like - The spacing of values of `x1`. - - Notes - ----- - It can be considered as a generalization of EPS: - ``spacing(np.float64(1)) == np.finfo(np.float64).eps``, and there - should not be any representable number between ``x + spacing(x)`` and - x for any finite x. - - Spacing of +- inf and nan is nan. - - Examples - -------- - >>> np.spacing(1) == np.finfo(np.float64).eps - True - - """) - -add_newdoc('numpy.core.umath', 'sin', - """ - Trigonometric sine, element-wise. - - Parameters - ---------- - x : array_like - Angle, in radians (:math:`2 \\pi` rad equals 360 degrees). - - Returns - ------- - y : array_like - The sine of each element of x. - - See Also - -------- - arcsin, sinh, cos - - Notes - ----- - The sine is one of the fundamental functions of trigonometry - (the mathematical study of triangles). Consider a circle of radius - 1 centered on the origin. A ray comes in from the :math:`+x` axis, - makes an angle at the origin (measured counter-clockwise from that - axis), and departs from the origin. The :math:`y` coordinate of - the outgoing ray's intersection with the unit circle is the sine - of that angle. It ranges from -1 for :math:`x=3\\pi / 2` to - +1 for :math:`\\pi / 2.` The function has zeroes where the angle is - a multiple of :math:`\\pi`.
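The ULP relationships described for `nextafter` and `spacing` above can be checked with a short sketch (written against a current NumPy install, not part of the deleted file):

```python
import numpy as np

# For positive finite x, the gap to the next representable float toward
# +inf is exactly spacing(x); at 1.0 that gap is machine epsilon.
x = np.float64(1.0)
assert np.nextafter(x, np.inf) - x == np.spacing(x)
assert np.spacing(x) == np.finfo(np.float64).eps
```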
Sines of angles between :math:`\\pi` and - :math:`2\\pi` are negative. The numerous properties of the sine and - related functions are included in any standard trigonometry text. - - Examples - -------- - Print sine of one angle: - - >>> np.sin(np.pi/2.) - 1.0 - - Print sines of an array of angles given in degrees: - - >>> np.sin(np.array((0., 30., 45., 60., 90.)) * np.pi / 180. ) - array([ 0. , 0.5 , 0.70710678, 0.8660254 , 1. ]) - - Plot the sine function: - - >>> import matplotlib.pylab as plt - >>> x = np.linspace(-np.pi, np.pi, 201) - >>> plt.plot(x, np.sin(x)) - >>> plt.xlabel('Angle [rad]') - >>> plt.ylabel('sin(x)') - >>> plt.axis('tight') - >>> plt.show() - - """) - -add_newdoc('numpy.core.umath', 'sinh', - """ - Hyperbolic sine, element-wise. - - Equivalent to ``1/2 * (np.exp(x) - np.exp(-x))`` or - ``-1j * np.sin(1j*x)``. - - Parameters - ---------- - x : array_like - Input array. - out : ndarray, optional - Output array of same shape as `x`. - - Returns - ------- - y : ndarray - The corresponding hyperbolic sine values. - - Raises - ------ - ValueError: invalid return array shape - if `out` is provided and `out.shape` != `x.shape` (See Examples) - - Notes - ----- - If `out` is provided, the function writes the result into it, - and returns a reference to `out`. (See Examples) - - References - ---------- - M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions. - New York, NY: Dover, 1972, pg. 83. - - Examples - -------- - >>> np.sinh(0) - 0.0 - >>> np.sinh(np.pi*1j/2) - 1j - >>> np.sinh(np.pi*1j) # (exact value is 0) - 1.2246063538223773e-016j - >>> # Discrepancy due to vagaries of floating point arithmetic. 
- - >>> # Example of providing the optional output parameter - >>> out1 = np.array([0.0]) - >>> out2 = np.sinh([0.1], out1) - >>> out2 is out1 - True - - >>> # Example of ValueError due to provision of shape mis-matched `out` - >>> np.sinh(np.zeros((3,3)),np.zeros((2,2))) - Traceback (most recent call last): - File "<stdin>", line 1, in <module> - ValueError: invalid return array shape - - """) - -add_newdoc('numpy.core.umath', 'sqrt', - """ - Return the positive square-root of an array, element-wise. - - Parameters - ---------- - x : array_like - The values whose square-roots are required. - out : ndarray, optional - Alternate array object in which to put the result; if provided, it - must have the same shape as `x`. - - Returns - ------- - y : ndarray - An array of the same shape as `x`, containing the positive - square-root of each element in `x`. If any element in `x` is - complex, a complex array is returned (and the square-roots of - negative reals are calculated). If all of the elements in `x` - are real, so is `y`, with negative elements returning ``nan``. - If `out` was provided, `y` is a reference to it. - - See Also - -------- - lib.scimath.sqrt - A version which returns complex numbers when given negative reals. - - Notes - ----- - *sqrt* has--consistent with common convention--as its branch cut the - real "interval" [`-inf`, 0), and is continuous from above on it. - (A branch cut is a curve in the complex plane across which a given - complex function fails to be continuous.) - - Examples - -------- - >>> np.sqrt([1,4,9]) - array([ 1., 2., 3.]) - - >>> np.sqrt([4, -1, -3+4J]) - array([ 2.+0.j, 0.+1.j, 1.+2.j]) - - >>> np.sqrt([4, -1, np.inf]) - array([ 2., NaN, Inf]) - - """) - -add_newdoc('numpy.core.umath', 'square', - """ - Return the element-wise square of the input. - - Parameters - ---------- - x : array_like - Input data. - - Returns - ------- - out : ndarray - Element-wise `x*x`, of the same shape and dtype as `x`. - Returns scalar if `x` is a scalar.
- - See Also - -------- - numpy.linalg.matrix_power - sqrt - power - - Examples - -------- - >>> np.square([-1j, 1]) - array([-1.-0.j, 1.+0.j]) - - """) - -add_newdoc('numpy.core.umath', 'subtract', - """ - Subtract arguments, element-wise. - - Parameters - ---------- - x1, x2 : array_like - The arrays to be subtracted from each other. - - Returns - ------- - y : ndarray - The difference of `x1` and `x2`, element-wise. Returns a scalar if - both `x1` and `x2` are scalars. - - Notes - ----- - Equivalent to ``x1 - x2`` in terms of array broadcasting. - - Examples - -------- - >>> np.subtract(1.0, 4.0) - -3.0 - - >>> x1 = np.arange(9.0).reshape((3, 3)) - >>> x2 = np.arange(3.0) - >>> np.subtract(x1, x2) - array([[ 0., 0., 0.], - [ 3., 3., 3.], - [ 6., 6., 6.]]) - - """) - -add_newdoc('numpy.core.umath', 'tan', - """ - Compute tangent element-wise. - - Equivalent to ``np.sin(x)/np.cos(x)`` element-wise. - - Parameters - ---------- - x : array_like - Input array. - out : ndarray, optional - Output array of same shape as `x`. - - Returns - ------- - y : ndarray - The corresponding tangent values. - - Raises - ------ - ValueError: invalid return array shape - if `out` is provided and `out.shape` != `x.shape` (See Examples) - - Notes - ----- - If `out` is provided, the function writes the result into it, - and returns a reference to `out`. (See Examples) - - References - ---------- - M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions. - New York, NY: Dover, 1972. 
- - Examples - -------- - >>> from math import pi - >>> np.tan(np.array([-pi,pi/2,pi])) - array([ 1.22460635e-16, 1.63317787e+16, -1.22460635e-16]) - >>> - >>> # Example of providing the optional output parameter illustrating - >>> # that what is returned is a reference to said parameter - >>> out1 = np.array([0.0]) - >>> out2 = np.tan([0.1], out1) - >>> out2 is out1 - True - >>> - >>> # Example of ValueError due to provision of shape mis-matched `out` - >>> np.tan(np.zeros((3,3)),np.zeros((2,2))) - Traceback (most recent call last): - File "<stdin>", line 1, in <module> - ValueError: invalid return array shape - - """) - -add_newdoc('numpy.core.umath', 'tanh', - """ - Compute hyperbolic tangent element-wise. - - Equivalent to ``np.sinh(x)/np.cosh(x)`` or - ``-1j * np.tan(1j*x)``. - - Parameters - ---------- - x : array_like - Input array. - out : ndarray, optional - Output array of same shape as `x`. - - Returns - ------- - y : ndarray - The corresponding hyperbolic tangent values. - - Raises - ------ - ValueError: invalid return array shape - if `out` is provided and `out.shape` != `x.shape` (See Examples) - - Notes - ----- - If `out` is provided, the function writes the result into it, - and returns a reference to `out`. (See Examples) - - References - ---------- - .. [1] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions. - New York, NY: Dover, 1972, pg. 83. - http://www.math.sfu.ca/~cbm/aands/ - - .. [2] Wikipedia, "Hyperbolic function", - http://en.wikipedia.org/wiki/Hyperbolic_function - - Examples - -------- - >>> np.tanh((0, np.pi*1j, np.pi*1j/2)) - array([ 0. +0.00000000e+00j, 0. -1.22460635e-16j, 0.
+1.63317787e+16j]) - - >>> # Example of providing the optional output parameter illustrating - >>> # that what is returned is a reference to said parameter - >>> out1 = np.array([0.0]) - >>> out2 = np.tanh([0.1], out1) - >>> out2 is out1 - True - - >>> # Example of ValueError due to provision of shape mis-matched `out` - >>> np.tanh(np.zeros((3,3)),np.zeros((2,2))) - Traceback (most recent call last): - File "<stdin>", line 1, in <module> - ValueError: invalid return array shape - - """) - -add_newdoc('numpy.core.umath', 'true_divide', - """ - Returns a true division of the inputs, element-wise. - - Instead of the Python traditional 'floor division', this returns a true - division. True division adjusts the output type to present the best - answer, regardless of input types. - - Parameters - ---------- - x1 : array_like - Dividend array. - x2 : array_like - Divisor array. - - Returns - ------- - out : ndarray - Result is scalar if both inputs are scalar, ndarray otherwise. - - Notes - ----- - The floor division operator ``//`` was added in Python 2.2 making ``//`` - and ``/`` equivalent operators. The default floor division operation of - ``/`` can be replaced by true division with - ``from __future__ import division``. - - In Python 3.0, ``//`` is the floor division operator and ``/`` the - true division operator. The ``true_divide(x1, x2)`` function is - equivalent to true division in Python. - - Examples - -------- - >>> x = np.arange(5) - >>> np.true_divide(x, 4) - array([ 0. , 0.25, 0.5 , 0.75, 1. ]) - - >>> x/4 - array([0, 0, 0, 0, 1]) - >>> x//4 - array([0, 0, 0, 0, 1]) - - >>> from __future__ import division - >>> x/4 - array([ 0. , 0.25, 0.5 , 0.75, 1.
]) - >>> x//4 - array([0, 0, 0, 0, 1]) - - """) diff --git a/pythonPackages/numpy/numpy/core/defchararray.py b/pythonPackages/numpy/numpy/core/defchararray.py deleted file mode 100755 index 623b11d058..0000000000 --- a/pythonPackages/numpy/numpy/core/defchararray.py +++ /dev/null @@ -1,2753 +0,0 @@ -""" -This module contains a set of functions for vectorized string -operations and methods. - -.. note:: - The `chararray` class exists for backwards compatibility with - Numarray, it is not recommended for new development. Starting from numpy - 1.4, if one needs arrays of strings, it is recommended to use arrays of - `dtype` `object_`, `string_` or `unicode_`, and use the free functions - in the `numpy.char` module for fast vectorized string operations. - -Some methods will only be available if the corresponding string method is -available in your version of Python. - -The preferred alias for `defchararray` is `numpy.char`. - -""" - -import sys -from numerictypes import string_, unicode_, integer, object_, bool_, character -from numeric import ndarray, compare_chararrays -from numeric import array as narray -from numpy.core.multiarray import _vec_string -from numpy.compat import asbytes -import numpy - -__all__ = ['chararray', - 'equal', 'not_equal', 'greater_equal', 'less_equal', 'greater', 'less', - 'str_len', 'add', 'multiply', 'mod', 'capitalize', 'center', 'count', - 'decode', 'encode', 'endswith', 'expandtabs', 'find', 'format', - 'index', 'isalnum', 'isalpha', 'isdigit', 'islower', 'isspace', - 'istitle', 'isupper', 'join', 'ljust', 'lower', 'lstrip', - 'partition', 'replace', 'rfind', 'rindex', 'rjust', 'rpartition', - 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith', 'strip', - 'swapcase', 'title', 'translate', 'upper', 'zfill', - 'isnumeric', 'isdecimal', - 'array', 'asarray'] - -_globalvar = 0 -if sys.version_info[0] >= 3: - _unicode = str - _bytes = bytes -else: - _unicode = unicode - _bytes = str -_len = len - -def _use_unicode(*args): - """ - 
Helper function for determining the output type of some string - operations. - - For an operation on two ndarrays, if at least one is unicode, the - result should be unicode. - """ - for x in args: - if (isinstance(x, _unicode) - or issubclass(numpy.asarray(x).dtype.type, unicode_)): - return unicode_ - return string_ - -def _to_string_or_unicode_array(result): - """ - Helper function to cast a result back into a string or unicode array - if an object array must be used as an intermediary. - """ - return numpy.asarray(result.tolist()) - -def _clean_args(*args): - """ - Helper function for delegating arguments to Python string - functions. - - Many of the Python string operations that have optional arguments - do not use 'None' to indicate a default value. In these cases, - we need to remove all `None` arguments, and those following them. - """ - newargs = [] - for chk in args: - if chk is None: - break - newargs.append(chk) - return newargs - -def _get_num_chars(a): - """ - Helper function that returns the number of characters per field in - a string or unicode array. This is to abstract out the fact that - for a unicode array this is itemsize / 4. - """ - if issubclass(a.dtype.type, unicode_): - return a.itemsize / 4 - return a.itemsize - - -def equal(x1, x2): - """ - Return (x1 == x2) element-wise. - - Unlike `numpy.equal`, this comparison is performed by first - stripping whitespace characters from the end of the string. This - behavior is provided for backward-compatibility with numarray. - - Parameters - ---------- - x1, x2 : array_like of str or unicode - Input arrays of the same shape. - - Returns - ------- - out : {ndarray, bool} - Output array of bools, or a single bool if x1 and x2 are scalars. - - See Also - -------- - not_equal, greater_equal, less_equal, greater, less - """ - return compare_chararrays(x1, x2, '==', True) - -def not_equal(x1, x2): - """ - Return (x1 != x2) element-wise. 
- - Unlike `numpy.not_equal`, this comparison is performed by first - stripping whitespace characters from the end of the string. This - behavior is provided for backward-compatibility with numarray. - - Parameters - ---------- - x1, x2 : array_like of str or unicode - Input arrays of the same shape. - - Returns - ------- - out : {ndarray, bool} - Output array of bools, or a single bool if x1 and x2 are scalars. - - See Also - -------- - equal, greater_equal, less_equal, greater, less - """ - return compare_chararrays(x1, x2, '!=', True) - -def greater_equal(x1, x2): - """ - Return (x1 >= x2) element-wise. - - Unlike `numpy.greater_equal`, this comparison is performed by - first stripping whitespace characters from the end of the string. - This behavior is provided for backward-compatibility with - numarray. - - Parameters - ---------- - x1, x2 : array_like of str or unicode - Input arrays of the same shape. - - Returns - ------- - out : {ndarray, bool} - Output array of bools, or a single bool if x1 and x2 are scalars. - - See Also - -------- - equal, not_equal, less_equal, greater, less - """ - return compare_chararrays(x1, x2, '>=', True) - -def less_equal(x1, x2): - """ - Return (x1 <= x2) element-wise. - - Unlike `numpy.less_equal`, this comparison is performed by first - stripping whitespace characters from the end of the string. This - behavior is provided for backward-compatibility with numarray. - - Parameters - ---------- - x1, x2 : array_like of str or unicode - Input arrays of the same shape. - - Returns - ------- - out : {ndarray, bool} - Output array of bools, or a single bool if x1 and x2 are scalars. - - See Also - -------- - equal, not_equal, greater_equal, greater, less - """ - return compare_chararrays(x1, x2, '<=', True) - -def greater(x1, x2): - """ - Return (x1 > x2) element-wise. - - Unlike `numpy.greater`, this comparison is performed by first - stripping whitespace characters from the end of the string. 
This - behavior is provided for backward-compatibility with numarray. - - Parameters - ---------- - x1, x2 : array_like of str or unicode - Input arrays of the same shape. - - Returns - ------- - out : {ndarray, bool} - Output array of bools, or a single bool if x1 and x2 are scalars. - - See Also - -------- - equal, not_equal, greater_equal, less_equal, less - """ - return compare_chararrays(x1, x2, '>', True) - -def less(x1, x2): - """ - Return (x1 < x2) element-wise. - - Unlike `numpy.greater`, this comparison is performed by first - stripping whitespace characters from the end of the string. This - behavior is provided for backward-compatibility with numarray. - - Parameters - ---------- - x1, x2 : array_like of str or unicode - Input arrays of the same shape. - - Returns - ------- - out : {ndarray, bool} - Output array of bools, or a single bool if x1 and x2 are scalars. - - See Also - -------- - equal, not_equal, greater_equal, less_equal, greater - """ - return compare_chararrays(x1, x2, '<', True) - -def str_len(a): - """ - Return len(a) element-wise. - - Parameters - ---------- - a : array_like of str or unicode - - Returns - ------- - out : ndarray - Output array of integers - - See also - -------- - __builtin__.len - """ - return _vec_string(a, integer, '__len__') - -def add(x1, x2): - """ - Return (x1 + x2), that is string concatenation, element-wise for a - pair of array_likes of str or unicode. - - Parameters - ---------- - x1 : array_like of str or unicode - x2 : array_like of str or unicode - - Returns - ------- - out : ndarray - Output array of string_ or unicode_, depending on input types - """ - arr1 = numpy.asarray(x1) - arr2 = numpy.asarray(x2) - out_size = _get_num_chars(arr1) + _get_num_chars(arr2) - dtype = _use_unicode(arr1, arr2) - return _vec_string(arr1, (dtype, out_size), '__add__', (arr2,)) - -def multiply(a, i): - """ - Return (a * i), that is string multiple concatenation, - element-wise. 
- - Values in `i` of less than 0 are treated as 0 (which yields an - empty string). - - Parameters - ---------- - a : array_like of str or unicode - - i : array_like of ints - - Returns - ------- - out : ndarray - Output array of str or unicode, depending on input types - - """ - a_arr = numpy.asarray(a) - i_arr = numpy.asarray(i) - if not issubclass(i_arr.dtype.type, integer): - raise ValueError, "Can only multiply by integers" - out_size = _get_num_chars(a_arr) * max(long(i_arr.max()), 0) - return _vec_string( - a_arr, (a_arr.dtype.type, out_size), '__mul__', (i_arr,)) - -def mod(a, values): - """ - Return (a % values), that is pre-Python 2.6 string formatting - (interpolation), element-wise for a pair of array_likes of str - or unicode. - - Parameters - ---------- - a : array_like of str or unicode - - values : array_like of values - These values will be element-wise interpolated into the string. - - Returns - ------- - out : ndarray - Output array of str or unicode, depending on input types - - See also - -------- - str.__mod__ - - """ - return _to_string_or_unicode_array( - _vec_string(a, object_, '__mod__', (values,))) - -def capitalize(a): - """ - Return a copy of `a` with only the first character of each element - capitalized. - - Calls `str.capitalize` element-wise. - - For 8-bit strings, this method is locale-dependent.
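The `mod` function above delegates to `str.__mod__` element-wise; a minimal sketch of its behavior against a current NumPy install (not part of the deleted file):

```python
import numpy as np

# %-style interpolation is applied element-wise, pairing each template
# with the corresponding value.
templates = np.array(['%d item', '%d items'])
result = np.char.mod(templates, [1, 2])
assert list(result) == ['1 item', '2 items']
```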
- - Parameters - ---------- - a : array_like of str or unicode - - Returns - ------- - out : ndarray - Output array of str or unicode, depending on input - types - - See also - -------- - str.capitalize - - Examples - -------- - >>> c = np.array(['a1b2','1b2a','b2a1','2a1b'],'S4'); c - array(['a1b2', '1b2a', 'b2a1', '2a1b'], - dtype='|S4') - >>> np.char.capitalize(c) - array(['A1b2', '1b2a', 'B2a1', '2a1b'], - dtype='|S4') - """ - a_arr = numpy.asarray(a) - return _vec_string(a_arr, a_arr.dtype, 'capitalize') - -if sys.version_info >= (2, 4): - def center(a, width, fillchar=' '): - """ - Return a copy of `a` with its elements centered in a string of - length `width`. - - Calls `str.center` element-wise. - - Parameters - ---------- - a : array_like of str or unicode - - width : int - The length of the resulting strings - fillchar : str or unicode, optional - The padding character to use (default is space). - - Returns - ------- - out : ndarray - Output array of str or unicode, depending on input - types - - See also - -------- - str.center - - """ - a_arr = numpy.asarray(a) - width_arr = numpy.asarray(width) - size = long(numpy.max(width_arr.flat)) - if numpy.issubdtype(a_arr.dtype, numpy.string_): - fillchar = asbytes(fillchar) - return _vec_string( - a_arr, (a_arr.dtype.type, size), 'center', (width_arr, fillchar)) -else: - def center(a, width): - """ - Return an array with the elements of `a` centered in a string - of length width. - - Calls `str.center` element-wise. 
- - Parameters - ---------- - a : array_like of str or unicode - width : int - The length of the resulting strings - - Returns - ------- - out : ndarray, str or unicode - Output array of str or unicode, depending on input types - - See also - -------- - str.center - """ - a_arr = numpy.asarray(a) - width_arr = numpy.asarray(width) - size = long(numpy.max(width_arr.flat)) - return _vec_string( - a_arr, (a_arr.dtype.type, size), 'center', (width_arr,)) - -def count(a, sub, start=0, end=None): - """ - Returns an array with the number of non-overlapping occurrences of - substring `sub` in the range [`start`, `end`]. - - Calls `str.count` element-wise. - - Parameters - ---------- - a : array_like of str or unicode - - sub : str or unicode - The substring to search for. - - start, end : int, optional - Optional arguments `start` and `end` are interpreted as slice - notation to specify the range in which to count. - - Returns - ------- - out : ndarray - Output array of ints. - - See also - -------- - str.count - - Examples - -------- - >>> c = np.array(['aAaAaA', ' aA ', 'abBABba']) - >>> c - array(['aAaAaA', ' aA ', 'abBABba'], - dtype='|S7') - >>> np.char.count(c, 'A') - array([3, 1, 1]) - >>> np.char.count(c, 'aA') - array([3, 1, 0]) - >>> np.char.count(c, 'A', start=1, end=4) - array([2, 1, 1]) - >>> np.char.count(c, 'A', start=1, end=3) - array([1, 0, 0]) - - """ - return _vec_string(a, integer, 'count', [sub, start] + _clean_args(end)) - -def decode(a, encoding=None, errors=None): - """ - Calls `str.decode` element-wise. - - The set of available codecs comes from the Python standard library, - and may be extended at runtime. For more information, see the - :mod:`codecs` module. 
- - Parameters - ---------- - a : array_like of str or unicode - - encoding : str, optional - The name of an encoding - - errors : str, optional - Specifies how to handle encoding errors - - Returns - ------- - out : ndarray - - See also - -------- - str.decode - - Notes - ----- - The type of the result will depend on the encoding specified. - - Examples - -------- - >>> c = np.array(['aAaAaA', ' aA ', 'abBABba']) - >>> c - array(['aAaAaA', ' aA ', 'abBABba'], - dtype='|S7') - >>> np.char.encode(c, encoding='cp037') - array(['\\x81\\xc1\\x81\\xc1\\x81\\xc1', '@@\\x81\\xc1@@', - '\\x81\\x82\\xc2\\xc1\\xc2\\x82\\x81'], - dtype='|S7') - - """ - return _to_string_or_unicode_array( - _vec_string(a, object_, 'decode', _clean_args(encoding, errors))) - -def encode(a, encoding=None, errors=None): - """ - Calls `str.encode` element-wise. - - The set of available codecs comes from the Python standard library, - and may be extended at runtime. For more information, see the codecs - module. - - Parameters - ---------- - a : array_like of str or unicode - - encoding : str, optional - The name of an encoding - - errors : str, optional - Specifies how to handle encoding errors - - Returns - ------- - out : ndarray - - See also - -------- - str.encode - - Notes - ----- - The type of the result will depend on the encoding specified. - - """ - return _to_string_or_unicode_array( - _vec_string(a, object_, 'encode', _clean_args(encoding, errors))) - -def endswith(a, suffix, start=0, end=None): - """ - Returns a boolean array which is `True` where the string element - in `a` ends with `suffix`, otherwise `False`. - - Calls `str.endswith` element-wise. - - Parameters - ---------- - a : array_like of str or unicode - - suffix : str - - start, end : int, optional - With optional `start`, test beginning at that position. With - optional `end`, stop comparing at that position. - - Returns - ------- - out : ndarray - Outputs an array of bools. 
- - See also - -------- - str.endswith - - Examples - -------- - >>> s = np.array(['foo', 'bar']) - >>> s[0] = 'foo' - >>> s[1] = 'bar' - >>> s - array(['foo', 'bar'], - dtype='|S3') - >>> np.char.endswith(s, 'ar') - array([False, True], dtype=bool) - >>> np.char.endswith(s, 'a', start=1, end=2) - array([False, True], dtype=bool) - - """ - return _vec_string( - a, bool_, 'endswith', [suffix, start] + _clean_args(end)) - -def expandtabs(a, tabsize=8): - """ - Return a copy of each string element where all tab characters are - replaced by one or more spaces. - - Calls `str.expandtabs` element-wise. - - Return a copy of each string element where all tab characters are - replaced by one or more spaces, depending on the current column - and the given `tabsize`. The column number is reset to zero after - each newline occurring in the string. If `tabsize` is not given, a - tab size of 8 characters is assumed. This doesn't understand other - non-printing characters or escape sequences. - - Parameters - ---------- - a : array_like of str or unicode - tabsize : int, optional - - Returns - ------- - out : ndarray - Output array of str or unicode, depending on input type - - See also - -------- - str.expandtabs - """ - return _to_string_or_unicode_array( - _vec_string(a, object_, 'expandtabs', (tabsize,))) - -def find(a, sub, start=0, end=None): - """ - For each element, return the lowest index in the string where - substring `sub` is found. - - Calls `str.find` element-wise. - - For each element, return the lowest index in the string where - substring `sub` is found, such that `sub` is contained in the - range [`start`, `end`]. - - Parameters - ---------- - a : array_like of str or unicode - - sub : str or unicode - - start, end : int, optional - Optional arguments `start` and `end` are interpreted as in - slice notation. - - Returns - ------- - out : ndarray or int - Output array of ints. Returns -1 if `sub` is not found. 
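The `find` behavior documented above (lowest index per element, -1 on a miss) can be sketched as follows, using a current NumPy install rather than the deleted module itself:

```python
import numpy as np

# Element-wise substring search; a missing substring yields -1,
# mirroring str.find.
c = np.array(['aAaAaA', 'abBABba'])
assert list(np.char.find(c, 'A')) == [1, 3]
assert list(np.char.find(c, 'z')) == [-1, -1]
```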
- - See also - -------- - str.find - - """ - return _vec_string( - a, integer, 'find', [sub, start] + _clean_args(end)) - -# if sys.version_info >= (2.6): -# def format(a, *args, **kwargs): -# # _vec_string doesn't support kwargs at present -# raise NotImplementedError - -def index(a, sub, start=0, end=None): - """ - Like `find`, but raises `ValueError` when the substring is not found. - - Calls `str.index` element-wise. - - Parameters - ---------- - a : array_like of str or unicode - - sub : str or unicode - - start, end : int, optional - - Returns - ------- - out : ndarray - Output array of ints. Raises `ValueError` if `sub` is not found. - - See also - -------- - find, str.find - - """ - return _vec_string( - a, integer, 'index', [sub, start] + _clean_args(end)) - -def isalnum(a): - """ - Returns true for each element if all characters in the string are - alphanumeric and there is at least one character, false otherwise. - - Calls `str.isalnum` element-wise. - - For 8-bit strings, this method is locale-dependent. - - Parameters - ---------- - a : array_like of str or unicode - - Returns - ------- - out : ndarray - Output array of bools - - See also - -------- - str.isalnum - """ - return _vec_string(a, bool_, 'isalnum') - -def isalpha(a): - """ - Returns true for each element if all characters in the string are - alphabetic and there is at least one character, false otherwise. - - Calls `str.isalpha` element-wise. - - For 8-bit strings, this method is locale-dependent. - - Parameters - ---------- - a : array_like of str or unicode - - Returns - ------- - out : ndarray - Output array of bools - - See also - -------- - str.isalpha - """ - return _vec_string(a, bool_, 'isalpha') - -def isdigit(a): - """ - Returns true for each element if all characters in the string are - digits and there is at least one character, false otherwise. - - Calls `str.isdigit` element-wise. - - For 8-bit strings, this method is locale-dependent.
- - Parameters - ---------- - a : array_like of str or unicode - - Returns - ------- - out : ndarray - Output array of bools - - See also - -------- - str.isdigit - """ - return _vec_string(a, bool_, 'isdigit') - -def islower(a): - """ - Returns true for each element if all cased characters in the - string are lowercase and there is at least one cased character, - false otherwise. - - Calls `str.islower` element-wise. - - For 8-bit strings, this method is locale-dependent. - - Parameters - ---------- - a : array_like of str or unicode - - Returns - ------- - out : ndarray - Output array of bools - - See also - -------- - str.islower - """ - return _vec_string(a, bool_, 'islower') - -def isspace(a): - """ - Returns true for each element if there are only whitespace - characters in the string and there is at least one character, - false otherwise. - - Calls `str.isspace` element-wise. - - For 8-bit strings, this method is locale-dependent. - - Parameters - ---------- - a : array_like of str or unicode - - Returns - ------- - out : ndarray - Output array of bools - - See also - -------- - str.isspace - """ - return _vec_string(a, bool_, 'isspace') - -def istitle(a): - """ - Returns true for each element if the element is a titlecased - string and there is at least one character, false otherwise. - - Call `str.istitle` element-wise. - - For 8-bit strings, this method is locale-dependent. - - Parameters - ---------- - a : array_like of str or unicode - - Returns - ------- - out : ndarray - Output array of bools - - See also - -------- - str.istitle - """ - return _vec_string(a, bool_, 'istitle') - -def isupper(a): - """ - Returns true for each element if all cased characters in the - string are uppercase and there is at least one character, false - otherwise. - - Call `str.isupper` element-wise. - - For 8-bit strings, this method is locale-dependent. 
- - Parameters - ---------- - a : array_like of str or unicode - - Returns - ------- - out : ndarray - Output array of bools - - See also - -------- - str.isupper - """ - return _vec_string(a, bool_, 'isupper') - -def join(sep, seq): - """ - Return a string which is the concatenation of the strings in the - sequence `seq`. - - Calls `str.join` element-wise. - - Parameters - ---------- - sep : array_like of str or unicode - seq : array_like of str or unicode - - Returns - ------- - out : ndarray - Output array of str or unicode, depending on input types - - See also - -------- - str.join - """ - return _to_string_or_unicode_array( - _vec_string(sep, object_, 'join', (seq,))) - -if sys.version_info >= (2, 4): - def ljust(a, width, fillchar=' '): - """ - Return an array with the elements of `a` left-justified in a - string of length `width`. - - Calls `str.ljust` element-wise. - - Parameters - ---------- - a : array_like of str or unicode - - width : int - The length of the resulting strings - fillchar : str or unicode, optional - The character to use for padding - - Returns - ------- - out : ndarray - Output array of str or unicode, depending on input type - - See also - -------- - str.ljust - - """ - a_arr = numpy.asarray(a) - width_arr = numpy.asarray(width) - size = long(numpy.max(width_arr.flat)) - if numpy.issubdtype(a_arr.dtype, numpy.string_): - fillchar = asbytes(fillchar) - return _vec_string( - a_arr, (a_arr.dtype.type, size), 'ljust', (width_arr, fillchar)) -else: - def ljust(a, width): - """ - Return an array with the elements of `a` left-justified in a - string of length `width`. - - Calls `str.ljust` element-wise. 
- - Parameters - ---------- - a : array_like of str or unicode - width : int - The length of the resulting strings - - Returns - ------- - out : ndarray - Output array of str or unicode, depending on input type - - See also - -------- - str.ljust - """ - a_arr = numpy.asarray(a) - width_arr = numpy.asarray(width) - size = long(numpy.max(width_arr.flat)) - return _vec_string( - a_arr, (a_arr.dtype.type, size), 'ljust', (width_arr,)) - -def lower(a): - """ - Return an array with the elements of `a` converted to lowercase. - - Call `str.lower` element-wise. - - For 8-bit strings, this method is locale-dependent. - - Parameters - ---------- - a : array-like of str or unicode - - Returns - ------- - out : ndarray, str or unicode - Output array of str or unicode, depending on input type - - See also - -------- - str.lower - - Examples - -------- - >>> c = np.array(['A1B C', '1BCA', 'BCA1']); c - array(['A1B C', '1BCA', 'BCA1'], - dtype='|S5') - >>> np.char.lower(c) - array(['a1b c', '1bca', 'bca1'], - dtype='|S5') - """ - a_arr = numpy.asarray(a) - return _vec_string(a_arr, a_arr.dtype, 'lower') - -def lstrip(a, chars=None): - """ - For each element in `a`, return a copy with the leading characters - removed. - - Calls `str.lstrip` element-wise. - - Parameters - ---------- - a : array-like of str or unicode - - chars : str or unicode, optional - The `chars` argument is a string specifying the set of - characters to be removed. If omitted or None, the `chars` - argument defaults to removing whitespace. The `chars` argument - is not a prefix; rather, all combinations of its values are - stripped. 
-
-    Returns
-    -------
-    out : ndarray, str or unicode
-        Output array of str or unicode, depending on input type
-
-    See also
-    --------
-    str.lstrip
-
-    Examples
-    --------
-    >>> c = np.array(['aAaAaA', ' aA ', 'abBABba'])
-    >>> c
-    array(['aAaAaA', ' aA ', 'abBABba'],
-        dtype='|S7')
-    >>> np.char.lstrip(c, 'a') # 'a' unstripped from c[1] because of leading whitespace
-    array(['AaAaA', ' aA ', 'bBABba'],
-        dtype='|S7')
-    >>> np.char.lstrip(c, 'A') # leaves c unchanged
-    array(['aAaAaA', ' aA ', 'abBABba'],
-        dtype='|S7')
-    >>> (np.char.lstrip(c, ' ') == np.char.lstrip(c, '')).all()
-    ... # XXX: is this a regression? this line now returns False
-    ... # np.char.lstrip(c,'') does not modify c at all.
-    True
-    >>> (np.char.lstrip(c, ' ') == np.char.lstrip(c, None)).all()
-    True
-
-    """
-    a_arr = numpy.asarray(a)
-    return _vec_string(a_arr, a_arr.dtype, 'lstrip', (chars,))
-
-if sys.version_info >= (2, 5):
-    def partition(a, sep):
-        """
-        Partition each element in `a` around `sep`.
-
-        Calls `str.partition` element-wise.
-
-        For each element in `a`, split the element at the first
-        occurrence of `sep`, and return 3 strings containing the part
-        before the separator, the separator itself, and the part after
-        the separator.  If the separator is not found, return 3 strings
-        containing the string itself, followed by two empty strings.
-
-        Parameters
-        ----------
-        a : array-like of str or unicode
-        sep : str or unicode
-
-        Returns
-        -------
-        out : ndarray
-            Output array of str or unicode, depending on input type.
-            The output array will have an extra dimension with 3
-            elements per input element.
-
-        See also
-        --------
-        str.partition
-        """
-        return _to_string_or_unicode_array(
-            _vec_string(a, object_, 'partition', (sep,)))
-
-def replace(a, old, new, count=None):
-    """
-    For each element in `a`, return a copy of the string with all
-    occurrences of substring `old` replaced by `new`.
-
-    Calls `str.replace` element-wise.
- - Parameters - ---------- - a : array-like of str or unicode - - old, new : str or unicode - - count : int, optional - If the optional argument `count` is given, only the first - `count` occurrences are replaced. - - Returns - ------- - out : ndarray - Output array of str or unicode, depending on input type - - See also - -------- - str.replace - - """ - return _to_string_or_unicode_array( - _vec_string( - a, object_, 'replace', [old, new] +_clean_args(count))) - -def rfind(a, sub, start=0, end=None): - """ - For each element in `a`, return the highest index in the string - where substring `sub` is found, such that `sub` is contained - within [`start`, `end`]. - - Calls `str.rfind` element-wise. - - Parameters - ---------- - a : array-like of str or unicode - - sub : str or unicode - - start, end : int, optional - Optional arguments `start` and `end` are interpreted as in - slice notation. - - Returns - ------- - out : ndarray - Output array of ints. Return -1 on failure. - - See also - -------- - str.rfind - - """ - return _vec_string( - a, integer, 'rfind', [sub, start] + _clean_args(end)) - -def rindex(a, sub, start=0, end=None): - """ - Like `rfind`, but raises `ValueError` when the substring `sub` is - not found. - - Calls `str.rindex` element-wise. - - Parameters - ---------- - a : array-like of str or unicode - - sub : str or unicode - - start, end : int, optional - - Returns - ------- - out : ndarray - Output array of ints. - - See also - -------- - rfind, str.rindex - - """ - return _vec_string( - a, integer, 'rindex', [sub, start] + _clean_args(end)) - -if sys.version_info >= (2, 4): - def rjust(a, width, fillchar=' '): - """ - Return an array with the elements of `a` right-justified in a - string of length `width`. - - Calls `str.rjust` element-wise. 
-
-        Parameters
-        ----------
-        a : array_like of str or unicode
-
-        width : int
-            The length of the resulting strings
-        fillchar : str or unicode, optional
-            The character to use for padding
-
-        Returns
-        -------
-        out : ndarray
-            Output array of str or unicode, depending on input type
-
-        See also
-        --------
-        str.rjust
-
-        """
-        a_arr = numpy.asarray(a)
-        width_arr = numpy.asarray(width)
-        size = long(numpy.max(width_arr.flat))
-        if numpy.issubdtype(a_arr.dtype, numpy.string_):
-            fillchar = asbytes(fillchar)
-        return _vec_string(
-            a_arr, (a_arr.dtype.type, size), 'rjust', (width_arr, fillchar))
-else:
-    def rjust(a, width):
-        """
-        Return an array with the elements of `a` right-justified in a
-        string of length `width`.
-
-        Calls `str.rjust` element-wise.
-
-        Parameters
-        ----------
-        a : array_like of str or unicode
-        width : int
-            The length of the resulting strings
-
-        Returns
-        -------
-        out : ndarray
-            Output array of str or unicode, depending on input type
-
-        See also
-        --------
-        str.rjust
-        """
-        a_arr = numpy.asarray(a)
-        width_arr = numpy.asarray(width)
-        size = long(numpy.max(width_arr.flat))
-        return _vec_string(
-            a_arr, (a_arr.dtype.type, size), 'rjust', (width_arr,))
-
-if sys.version_info >= (2, 5):
-    def rpartition(a, sep):
-        """
-        Partition each element in `a` around `sep`.
-
-        Calls `str.rpartition` element-wise.
-
-        For each element in `a`, split the element at the last
-        occurrence of `sep`, and return 3 strings containing the part
-        before the separator, the separator itself, and the part after
-        the separator.  If the separator is not found, return 3 strings
-        containing the string itself, followed by two empty strings.
-
-        Parameters
-        ----------
-        a : array-like of str or unicode
-        sep : str or unicode
-
-        Returns
-        -------
-        out : ndarray
-            Output array of string or unicode, depending on input
-            type.  The output array will have an extra dimension with
-            3 elements per input element.
- - See also - -------- - str.rpartition - """ - return _to_string_or_unicode_array( - _vec_string(a, object_, 'rpartition', (sep,))) - -if sys.version_info >= (2, 4): - def rsplit(a, sep=None, maxsplit=None): - """ - For each element in `a`, return a list of the words in the - string, using `sep` as the delimiter string. - - Calls `str.rsplit` element-wise. - - Except for splitting from the right, `rsplit` - behaves like `split`. - - Parameters - ---------- - a : array_like of str or unicode - - sep : str or unicode, optional - If `sep` is not specified or `None`, any whitespace string - is a separator. - maxsplit : int, optional - If `maxsplit` is given, at most `maxsplit` splits are done, - the rightmost ones. - - Returns - ------- - out : ndarray - Array of list objects - - See also - -------- - str.rsplit, split - - """ - # This will return an array of lists of different sizes, so we - # leave it as an object array - return _vec_string( - a, object_, 'rsplit', [sep] + _clean_args(maxsplit)) - -def rstrip(a, chars=None): - """ - For each element in `a`, return a copy with the trailing - characters removed. - - Calls `str.rstrip` element-wise. - - Parameters - ---------- - a : array-like of str or unicode - - chars : str or unicode, optional - The `chars` argument is a string specifying the set of - characters to be removed. If omitted or None, the `chars` - argument defaults to removing whitespace. The `chars` argument - is not a suffix; rather, all combinations of its values are - stripped. 
-
-    Returns
-    -------
-    out : ndarray
-        Output array of str or unicode, depending on input type
-
-    See also
-    --------
-    str.rstrip
-
-    Examples
-    --------
-    >>> c = np.array(['aAaAaA', 'abBABba'], dtype='S7'); c
-    array(['aAaAaA', 'abBABba'],
-        dtype='|S7')
-    >>> np.char.rstrip(c, 'a')
-    array(['aAaAaA', 'abBABb'],
-        dtype='|S7')
-    >>> np.char.rstrip(c, 'A')
-    array(['aAaAa', 'abBABba'],
-        dtype='|S7')
-
-    """
-    a_arr = numpy.asarray(a)
-    return _vec_string(a_arr, a_arr.dtype, 'rstrip', (chars,))
-
-def split(a, sep=None, maxsplit=None):
-    """
-    For each element in `a`, return a list of the words in the
-    string, using `sep` as the delimiter string.
-
-    Calls `str.split` element-wise.
-
-    Parameters
-    ----------
-    a : array_like of str or unicode
-
-    sep : str or unicode, optional
-        If `sep` is not specified or `None`, any whitespace string is a
-        separator.
-
-    maxsplit : int, optional
-        If `maxsplit` is given, at most `maxsplit` splits are done.
-
-    Returns
-    -------
-    out : ndarray
-        Array of list objects
-
-    See also
-    --------
-    str.split, rsplit
-
-    """
-    # This will return an array of lists of different sizes, so we
-    # leave it as an object array
-    return _vec_string(
-        a, object_, 'split', [sep] + _clean_args(maxsplit))
-
-def splitlines(a, keepends=None):
-    """
-    For each element in `a`, return a list of the lines in the
-    element, breaking at line boundaries.
-
-    Calls `str.splitlines` element-wise.
-
-    Parameters
-    ----------
-    a : array_like of str or unicode
-
-    keepends : bool, optional
-        Line breaks are not included in the resulting list unless
-        keepends is given and true.
-
-    Returns
-    -------
-    out : ndarray
-        Array of list objects
-
-    See also
-    --------
-    str.splitlines
-
-    """
-    return _vec_string(
-        a, object_, 'splitlines', _clean_args(keepends))
-
-def startswith(a, prefix, start=0, end=None):
-    """
-    Returns a boolean array which is `True` where the string element
-    in `a` starts with `prefix`, otherwise `False`.
-
-    Calls `str.startswith` element-wise.
-
-    Parameters
-    ----------
-    a : array_like of str or unicode
-
-    prefix : str or unicode
-
-    start, end : int, optional
-        With optional `start`, test beginning at that position.  With
-        optional `end`, stop comparing at that position.
-
-    Returns
-    -------
-    out : ndarray
-        Array of booleans
-
-    See also
-    --------
-    str.startswith
-
-    """
-    return _vec_string(
-        a, bool_, 'startswith', [prefix, start] + _clean_args(end))
-
-def strip(a, chars=None):
-    """
-    For each element in `a`, return a copy with the leading and
-    trailing characters removed.
-
-    Calls `str.strip` element-wise.
-
-    Parameters
-    ----------
-    a : array-like of str or unicode
-
-    chars : str or unicode, optional
-        The `chars` argument is a string specifying the set of
-        characters to be removed. If omitted or None, the `chars`
-        argument defaults to removing whitespace. The `chars` argument
-        is not a prefix or suffix; rather, all combinations of its
-        values are stripped.
-
-    Returns
-    -------
-    out : ndarray
-        Output array of str or unicode, depending on input type
-
-    See also
-    --------
-    str.strip
-
-    Examples
-    --------
-    >>> c = np.array(['aAaAaA', ' aA ', 'abBABba'])
-    >>> c
-    array(['aAaAaA', ' aA ', 'abBABba'],
-        dtype='|S7')
-    >>> np.char.strip(c)
-    array(['aAaAaA', 'aA', 'abBABba'],
-        dtype='|S7')
-    >>> np.char.strip(c, 'a') # 'a' unstripped from c[1] because of leading whitespace
-    array(['AaAaA', ' aA ', 'bBABb'],
-        dtype='|S7')
-    >>> np.char.strip(c, 'A') # 'A' unstripped from c[1] because of trailing whitespace
-    array(['aAaAa', ' aA ', 'abBABba'],
-        dtype='|S7')
-
-    """
-    a_arr = numpy.asarray(a)
-    return _vec_string(a_arr, a_arr.dtype, 'strip', _clean_args(chars))
-
-def swapcase(a):
-    """
-    For each element in `a`, return a copy of the string with
-    uppercase characters converted to lowercase and vice versa.
-
-    Calls `str.swapcase` element-wise.
-
-    For 8-bit strings, this method is locale-dependent.
- - Parameters - ---------- - a : array-like of str or unicode - - Returns - ------- - out : ndarray - Output array of str or unicode, depending on input type - - See also - -------- - str.swapcase - - Examples - -------- - >>> c=np.array(['a1B c','1b Ca','b Ca1','cA1b'],'S5'); c - array(['a1B c', '1b Ca', 'b Ca1', 'cA1b'], - dtype='|S5') - >>> np.char.swapcase(c) - array(['A1b C', '1B cA', 'B cA1', 'Ca1B'], - dtype='|S5') - """ - a_arr = numpy.asarray(a) - return _vec_string(a_arr, a_arr.dtype, 'swapcase') - -def title(a): - """ - For each element in `a`, return a titlecased version of the - string: words start with uppercase characters, all remaining cased - characters are lowercase. - - Calls `str.title` element-wise. - - For 8-bit strings, this method is locale-dependent. - - Parameters - ---------- - a : array-like of str or unicode - - Returns - ------- - out : ndarray - Output array of str or unicode, depending on input type - - See also - -------- - str.title - - Examples - -------- - >>> c=np.array(['a1b c','1b ca','b ca1','ca1b'],'S5'); c - array(['a1b c', '1b ca', 'b ca1', 'ca1b'], - dtype='|S5') - >>> np.char.title(c) - array(['A1B C', '1B Ca', 'B Ca1', 'Ca1B'], - dtype='|S5') - """ - a_arr = numpy.asarray(a) - return _vec_string(a_arr, a_arr.dtype, 'title') - -def translate(a, table, deletechars=None): - """ - For each element in `a`, return a copy of the string where all - characters occurring in the optional argument `deletechars` are - removed, and the remaining characters have been mapped through the - given translation table. - - Calls `str.translate` element-wise. 
- - Parameters - ---------- - a : array-like of str or unicode - - table : str of length 256 - - deletechars : str - - Returns - ------- - out : ndarray - Output array of str or unicode, depending on input type - - See also - -------- - str.translate - - """ - a_arr = numpy.asarray(a) - if issubclass(a_arr.dtype.type, unicode_): - return _vec_string( - a_arr, a_arr.dtype, 'translate', (table,)) - else: - return _vec_string( - a_arr, a_arr.dtype, 'translate', [table] + _clean_args(deletechars)) - -def upper(a): - """ - Return an array with the elements of `a` converted to uppercase. - - Calls `str.upper` element-wise. - - For 8-bit strings, this method is locale-dependent. - - Parameters - ---------- - a : array-like of str or unicode - - Returns - ------- - out : ndarray - Output array of str or unicode, depending on input type - - See also - -------- - str.upper - - Examples - -------- - >>> c = np.array(['a1b c', '1bca', 'bca1']); c - array(['a1b c', '1bca', 'bca1'], - dtype='|S5') - >>> np.char.upper(c) - array(['A1B C', '1BCA', 'BCA1'], - dtype='|S5') - """ - a_arr = numpy.asarray(a) - return _vec_string(a_arr, a_arr.dtype, 'upper') - -def zfill(a, width): - """ - Return the numeric string left-filled with zeros in a string of - length `width`. - - Calls `str.zfill` element-wise. - - Parameters - ---------- - a : array-like of str or unicode - width : int - - Returns - ------- - out : ndarray - Output array of str or unicode, depending on input type - - See also - -------- - str.zfill - """ - a_arr = numpy.asarray(a) - width_arr = numpy.asarray(width) - size = long(numpy.max(width_arr.flat)) - return _vec_string( - a_arr, (a_arr.dtype.type, size), 'zfill', (width_arr,)) - -def isnumeric(a): - """ - For each element in `a`, return True if there are only numeric - characters in the element. - - Calls `unicode.isnumeric` element-wise. - - Numeric characters include digit characters, and all characters - that have the Unicode numeric value property, e.g. 
``U+2155,
-    VULGAR FRACTION ONE FIFTH``.
-
-    Parameters
-    ----------
-    a : array-like of unicode
-
-    Returns
-    -------
-    out : ndarray
-        Array of booleans
-
-    See also
-    --------
-    unicode.isnumeric
-    """
-    if _use_unicode(a) != unicode_:
-        raise TypeError, "isnumeric is only available for Unicode strings and arrays"
-    return _vec_string(a, bool_, 'isnumeric')
-
-def isdecimal(a):
-    """
-    For each element in `a`, return True if there are only decimal
-    characters in the element.
-
-    Calls `unicode.isdecimal` element-wise.
-
-    Decimal characters include digit characters, and all characters
-    that can be used to form decimal-radix numbers,
-    e.g. ``U+0660, ARABIC-INDIC DIGIT ZERO``.
-
-    Parameters
-    ----------
-    a : array-like of unicode
-
-    Returns
-    -------
-    out : ndarray
-        Array of booleans
-
-    See also
-    --------
-    unicode.isdecimal
-    """
-    if _use_unicode(a) != unicode_:
-        raise TypeError, "isdecimal is only available for Unicode strings and arrays"
-    return _vec_string(a, bool_, 'isdecimal')
-
-
-class chararray(ndarray):
-    """
-    chararray(shape, itemsize=1, unicode=False, buffer=None, offset=0,
-              strides=None, order=None)
-
-    Provides a convenient view on arrays of string and unicode values.
-
-    .. note::
-       The `chararray` class exists for backwards compatibility with
-       Numarray; it is not recommended for new development. Starting from numpy
-       1.4, if one needs arrays of strings, it is recommended to use arrays of
-       `dtype` `object_`, `string_` or `unicode_`, and use the free functions
-       in the `numpy.char` module for fast vectorized string operations.
-
-    Versus a regular Numpy array of type `str` or `unicode`, this
-    class adds the following functionality:
-
-    1) values automatically have whitespace removed from the end
-       when indexed
-
-    2) comparison operators automatically remove whitespace from the
-       end when comparing values
-
-    3) vectorized string operations are provided as methods
-       (e.g. `.endswith`) and infix operators (e.g.
``"+", "*", "%"``) - - chararrays should be created using `numpy.char.array` or - `numpy.char.asarray`, rather than this constructor directly. - - This constructor creates the array, using `buffer` (with `offset` - and `strides`) if it is not ``None``. If `buffer` is ``None``, then - constructs a new array with `strides` in "C order", unless both - ``len(shape) >= 2`` and ``order='Fortran'``, in which case `strides` - is in "Fortran order". - - Methods - ------- - astype - argsort - copy - count - decode - dump - dumps - encode - endswith - expandtabs - fill - find - flatten - getfield - index - isalnum - isalpha - isdecimal - isdigit - islower - isnumeric - isspace - istitle - isupper - item - join - ljust - lower - lstrip - nonzero - put - ravel - repeat - replace - reshape - resize - rfind - rindex - rjust - rsplit - rstrip - searchsorted - setfield - setflags - sort - split - splitlines - squeeze - startswith - strip - swapaxes - swapcase - take - title - tofile - tolist - tostring - translate - transpose - upper - view - zfill - - Parameters - ---------- - shape : tuple - Shape of the array. - itemsize : int, optional - Length of each array element, in number of characters. Default is 1. - unicode : bool, optional - Are the array elements of type unicode (True) or string (False). - Default is False. - buffer : int, optional - Memory address of the start of the array data. Default is None, - in which case a new array is created. - offset : int, optional - Fixed stride displacement from the beginning of an axis? - Default is 0. Needs to be >=0. - strides : array_like of ints, optional - Strides for the array (see `ndarray.strides` for full description). - Default is None. - order : {'C', 'F'}, optional - The order in which the array data is stored in memory: 'C' -> - "row major" order (the default), 'F' -> "column major" - (Fortran) order. 
- - Examples - -------- - >>> charar = np.chararray((3, 3)) - >>> charar[:] = 'a' - >>> charar - chararray([['a', 'a', 'a'], - ['a', 'a', 'a'], - ['a', 'a', 'a']], - dtype='|S1') - - >>> charar = np.chararray(charar.shape, itemsize=5) - >>> charar[:] = 'abc' - >>> charar - chararray([['abc', 'abc', 'abc'], - ['abc', 'abc', 'abc'], - ['abc', 'abc', 'abc']], - dtype='|S5') - - """ - def __new__(subtype, shape, itemsize=1, unicode=False, buffer=None, - offset=0, strides=None, order='C'): - global _globalvar - - if unicode: - dtype = unicode_ - else: - dtype = string_ - - # force itemsize to be a Python long, since using Numpy integer - # types results in itemsize.itemsize being used as the size of - # strings in the new array. - itemsize = long(itemsize) - - if sys.version_info[0] >= 3 and isinstance(buffer, _unicode): - # On Py3, unicode objects do not have the buffer interface - filler = buffer - buffer = None - else: - filler = None - - _globalvar = 1 - if buffer is None: - self = ndarray.__new__(subtype, shape, (dtype, itemsize), - order=order) - else: - self = ndarray.__new__(subtype, shape, (dtype, itemsize), - buffer=buffer, - offset=offset, strides=strides, - order=order) - if filler is not None: - self[...] = filler - _globalvar = 0 - return self - - def __array_finalize__(self, obj): - # The b is a special case because it is used for reconstructing. - if not _globalvar and self.dtype.char not in 'SUbc': - raise ValueError, "Can only create a chararray from string data." - - def __getitem__(self, obj): - val = ndarray.__getitem__(self, obj) - if issubclass(val.dtype.type, character): - temp = val.rstrip() - if _len(temp) == 0: - val = '' - else: - val = temp - return val - - # IMPLEMENTATION NOTE: Most of the methods of this class are - # direct delegations to the free functions in this module. - # However, those that return an array of strings should instead - # return a chararray, so some extra wrapping is required. 
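The whitespace behaviour implemented by `__getitem__` and the comparison operators above can be exercised with a short sketch. This uses the public `numpy.char.array` constructor, as the class docstring recommends, rather than instantiating `chararray` directly:

```python
import numpy as np

# chararray strips trailing whitespace both when indexing an element
# and when comparing values, per the class docstring.
c = np.char.array(['abc  ', 'abc'])
print(c[0])        # indexing rstrips the element: 'abc'
print(c == 'abc')  # comparison also ignores trailing whitespace
```

Both elements compare equal to `'abc'` even though the stored buffers differ in their trailing padding.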
-
-    def __eq__(self, other):
-        """
-        Return (self == other) element-wise.
-
-        See also
-        --------
-        equal
-        """
-        return equal(self, other)
-
-    def __ne__(self, other):
-        """
-        Return (self != other) element-wise.
-
-        See also
-        --------
-        not_equal
-        """
-        return not_equal(self, other)
-
-    def __ge__(self, other):
-        """
-        Return (self >= other) element-wise.
-
-        See also
-        --------
-        greater_equal
-        """
-        return greater_equal(self, other)
-
-    def __le__(self, other):
-        """
-        Return (self <= other) element-wise.
-
-        See also
-        --------
-        less_equal
-        """
-        return less_equal(self, other)
-
-    def __gt__(self, other):
-        """
-        Return (self > other) element-wise.
-
-        See also
-        --------
-        greater
-        """
-        return greater(self, other)
-
-    def __lt__(self, other):
-        """
-        Return (self < other) element-wise.
-
-        See also
-        --------
-        less
-        """
-        return less(self, other)
-
-    def __add__(self, other):
-        """
-        Return (self + other), that is string concatenation,
-        element-wise for a pair of array_likes of str or unicode.
-
-        See also
-        --------
-        add
-        """
-        return asarray(add(self, other))
-
-    def __radd__(self, other):
-        """
-        Return (other + self), that is string concatenation,
-        element-wise for a pair of array_likes of string_ or unicode_.
-
-        See also
-        --------
-        add
-        """
-        return asarray(add(numpy.asarray(other), self))
-
-    def __mul__(self, i):
-        """
-        Return (self * i), that is repeated string concatenation,
-        element-wise.
-
-        See also
-        --------
-        multiply
-        """
-        return asarray(multiply(self, i))
-
-    def __rmul__(self, i):
-        """
-        Return (i * self), that is repeated string concatenation,
-        element-wise.
-
-        See also
-        --------
-        multiply
-        """
-        return asarray(multiply(self, i))
-
-    def __mod__(self, i):
-        """
-        Return (self % i), that is pre-Python 2.6 string formatting
-        (interpolation), element-wise for a pair of array_likes of string_
-        or unicode_.
- - See also - -------- - mod - """ - return asarray(mod(self, i)) - - def __rmod__(self, other): - return NotImplemented - - def argsort(self, axis=-1, kind='quicksort', order=None): - """ - Return the indices that sort the array lexicographically. - - For full documentation see `numpy.argsort`, for which this method is - in fact merely a "thin wrapper." - - Examples - -------- - >>> c = np.array(['a1b c', '1b ca', 'b ca1', 'Ca1b'], 'S5') - >>> c = c.view(np.chararray); c - chararray(['a1b c', '1b ca', 'b ca1', 'Ca1b'], - dtype='|S5') - >>> c[c.argsort()] - chararray(['1b ca', 'Ca1b', 'a1b c', 'b ca1'], - dtype='|S5') - - """ - return self.__array__().argsort(axis, kind, order) - argsort.__doc__ = ndarray.argsort.__doc__ - - def capitalize(self): - """ - Return a copy of `self` with only the first character of each element - capitalized. - - See also - -------- - char.capitalize - - """ - return asarray(capitalize(self)) - - if sys.version_info >= (2, 4): - def center(self, width, fillchar=' '): - """ - Return a copy of `self` with its elements centered in a - string of length `width`. - - See also - -------- - center - """ - return asarray(center(self, width, fillchar)) - else: - def center(self, width): - """ - Return a copy of `self` with its elements centered in a - string of length `width`. - - See also - -------- - center - """ - return asarray(center(self, width)) - - def count(self, sub, start=0, end=None): - """ - Returns an array with the number of non-overlapping occurrences of - substring `sub` in the range [`start`, `end`]. - - See also - -------- - char.count - - """ - return count(self, sub, start, end) - - - def decode(self, encoding=None, errors=None): - """ - Calls `str.decode` element-wise. - - See also - -------- - char.decode - - """ - return decode(self, encoding, errors) - - def encode(self, encoding=None, errors=None): - """ - Calls `str.encode` element-wise. 
- - See also - -------- - char.encode - - """ - return encode(self, encoding, errors) - - def endswith(self, suffix, start=0, end=None): - """ - Returns a boolean array which is `True` where the string element - in `self` ends with `suffix`, otherwise `False`. - - See also - -------- - char.endswith - - """ - return endswith(self, suffix, start, end) - - def expandtabs(self, tabsize=8): - """ - Return a copy of each string element where all tab characters are - replaced by one or more spaces. - - See also - -------- - char.expandtabs - - """ - return asarray(expandtabs(self, tabsize)) - - def find(self, sub, start=0, end=None): - """ - For each element, return the lowest index in the string where - substring `sub` is found. - - See also - -------- - char.find - - """ - return find(self, sub, start, end) - - def index(self, sub, start=0, end=None): - """ - Like `find`, but raises `ValueError` when the substring is not found. - - See also - -------- - char.index - - """ - return index(self, sub, start, end) - - def isalnum(self): - """ - Returns true for each element if all characters in the string - are alphanumeric and there is at least one character, false - otherwise. - - See also - -------- - char.isalnum - - """ - return isalnum(self) - - def isalpha(self): - """ - Returns true for each element if all characters in the string - are alphabetic and there is at least one character, false - otherwise. - - See also - -------- - char.isalpha - - """ - return isalpha(self) - - def isdigit(self): - """ - Returns true for each element if all characters in the string are - digits and there is at least one character, false otherwise. - - See also - -------- - char.isdigit - - """ - return isdigit(self) - - def islower(self): - """ - Returns true for each element if all cased characters in the - string are lowercase and there is at least one cased character, - false otherwise. 
- - See also - -------- - char.islower - - """ - return islower(self) - - def isspace(self): - """ - Returns true for each element if there are only whitespace - characters in the string and there is at least one character, - false otherwise. - - See also - -------- - char.isspace - - """ - return isspace(self) - - def istitle(self): - """ - Returns true for each element if the element is a titlecased - string and there is at least one character, false otherwise. - - See also - -------- - char.istitle - - """ - return istitle(self) - - def isupper(self): - """ - Returns true for each element if all cased characters in the - string are uppercase and there is at least one character, false - otherwise. - - See also - -------- - char.isupper - - """ - return isupper(self) - - def join(self, seq): - """ - Return a string which is the concatenation of the strings in the - sequence `seq`. - - See also - -------- - char.join - - """ - return join(self, seq) - - if sys.version_info >= (2, 4): - def ljust(self, width, fillchar=' '): - """ - Return an array with the elements of `self` left-justified in a - string of length `width`. - - See also - -------- - char.ljust - - """ - return asarray(ljust(self, width, fillchar)) - else: - def ljust(self, width): - """ - Return an array with the elements of `self` left-justified in a - string of length `width`. - - See also - -------- - ljust - """ - return asarray(ljust(self, width)) - - def lower(self): - """ - Return an array with the elements of `self` converted to - lowercase. - - See also - -------- - char.lower - - """ - return asarray(lower(self)) - - def lstrip(self, chars=None): - """ - For each element in `self`, return a copy with the leading characters - removed. - - See also - -------- - char.lstrip - - """ - return asarray(lstrip(self, chars)) - - if sys.version_info >= (2, 5): - def partition(self, sep): - """ - Partition each element in `self` around `sep`. 
- - See also - -------- - partition - """ - return asarray(partition(self, sep)) - - def replace(self, old, new, count=None): - """ - For each element in `self`, return a copy of the string with all - occurrences of substring `old` replaced by `new`. - - See also - -------- - char.replace - - """ - return asarray(replace(self, old, new, count)) - - def rfind(self, sub, start=0, end=None): - """ - For each element in `self`, return the highest index in the string - where substring `sub` is found, such that `sub` is contained - within [`start`, `end`]. - - See also - -------- - char.rfind - - """ - return rfind(self, sub, start, end) - - def rindex(self, sub, start=0, end=None): - """ - Like `rfind`, but raises `ValueError` when the substring `sub` is - not found. - - See also - -------- - char.rindex - - """ - return rindex(self, sub, start, end) - - if sys.version_info >= (2, 4): - def rjust(self, width, fillchar=' '): - """ - Return an array with the elements of `self` - right-justified in a string of length `width`. - - See also - -------- - char.rjust - - """ - return asarray(rjust(self, width, fillchar)) - else: - def rjust(self, width): - """ - Return an array with the elements of `self` - right-justified in a string of length `width`. - - See also - -------- - rjust - """ - return asarray(rjust(self, width)) - - if sys.version_info >= (2, 5): - def rpartition(self, sep): - """ - Partition each element in `self` around `sep`. - - See also - -------- - rpartition - """ - return asarray(rpartition(self, sep)) - - if sys.version_info >= (2, 4): - def rsplit(self, sep=None, maxsplit=None): - """ - For each element in `self`, return a list of the words in - the string, using `sep` as the delimiter string. - - See also - -------- - char.rsplit - - """ - return rsplit(self, sep, maxsplit) - - def rstrip(self, chars=None): - """ - For each element in `self`, return a copy with the trailing - characters removed. 
- - See also - -------- - char.rstrip - - """ - return asarray(rstrip(self, chars)) - - def split(self, sep=None, maxsplit=None): - """ - For each element in `self`, return a list of the words in the - string, using `sep` as the delimiter string. - - See also - -------- - char.split - - """ - return split(self, sep, maxsplit) - - def splitlines(self, keepends=None): - """ - For each element in `self`, return a list of the lines in the - element, breaking at line boundaries. - - See also - -------- - char.splitlines - - """ - return splitlines(self, keepends) - - def startswith(self, prefix, start=0, end=None): - """ - Returns a boolean array which is `True` where the string element - in `self` starts with `prefix`, otherwise `False`. - - See also - -------- - char.startswith - - """ - return startswith(self, prefix, start, end) - - def strip(self, chars=None): - """ - For each element in `self`, return a copy with the leading and - trailing characters removed. - - See also - -------- - char.strip - - """ - return asarray(strip(self, chars)) - - def swapcase(self): - """ - For each element in `self`, return a copy of the string with - uppercase characters converted to lowercase and vice versa. - - See also - -------- - char.swapcase - - """ - return asarray(swapcase(self)) - - def title(self): - """ - For each element in `self`, return a titlecased version of the - string: words start with uppercase characters, all remaining cased - characters are lowercase. - - See also - -------- - char.title - - """ - return asarray(title(self)) - - def translate(self, table, deletechars=None): - """ - For each element in `self`, return a copy of the string where - all characters occurring in the optional argument - `deletechars` are removed, and the remaining characters have - been mapped through the given translation table. 
- - See also - -------- - char.translate - - """ - return asarray(translate(self, table, deletechars)) - - def upper(self): - """ - Return an array with the elements of `self` converted to - uppercase. - - See also - -------- - char.upper - - """ - return asarray(upper(self)) - - def zfill(self, width): - """ - Return the numeric string left-filled with zeros in a string of - length `width`. - - See also - -------- - char.zfill - - """ - return asarray(zfill(self, width)) - - def isnumeric(self): - """ - For each element in `self`, return True if there are only - numeric characters in the element. - - See also - -------- - char.isnumeric - - """ - return isnumeric(self) - - def isdecimal(self): - """ - For each element in `self`, return True if there are only - decimal characters in the element. - - See also - -------- - char.isdecimal - - """ - return isdecimal(self) - - -def array(obj, itemsize=None, copy=True, unicode=None, order=None): - """ - Create a `chararray`. - - .. note:: - This class is provided for numarray backward-compatibility. - New code (not concerned with numarray compatibility) should use - arrays of type string_ or unicode_ and use the free functions - in :mod:`numpy.char ` for fast - vectorized string operations instead. - - Versus a regular Numpy array of type `str` or `unicode`, this - class adds the following functionality: - - 1) values automatically have whitespace removed from the end - when indexed - - 2) comparison operators automatically remove whitespace from the - end when comparing values - - 3) vectorized string operations are provided as methods - (e.g. `str.endswith`) and infix operators (e.g. +, *, %) - - Parameters - ---------- - obj : array of str or unicode-like - - itemsize : int, optional - `itemsize` is the number of characters per scalar in the - resulting array. If `itemsize` is None, and `obj` is an - object array or a Python list, the `itemsize` will be - automatically determined. 
If `itemsize` is provided and `obj` - is of type str or unicode, then the `obj` string will be - chunked into `itemsize` pieces. - - copy : bool, optional - If true (default), then the object is copied. Otherwise, a copy - will only be made if __array__ returns a copy, if obj is a - nested sequence, or if a copy is needed to satisfy any of the other - requirements (`itemsize`, unicode, `order`, etc.). - - unicode : bool, optional - When true, the resulting `chararray` can contain Unicode - characters, when false only 8-bit characters. If unicode is - `None` and `obj` is one of the following: - - - a `chararray`, - - an ndarray of type `str` or `unicode` - - a Python str or unicode object, - - then the unicode setting of the output array will be - automatically determined. - - order : {'C', 'F', 'A'}, optional - Specify the order of the array. If order is 'C' (default), then the - array will be in C-contiguous order (last-index varies the - fastest). If order is 'F', then the returned array - will be in Fortran-contiguous order (first-index varies the - fastest). If order is 'A', then the returned array may - be in any order (either C-, Fortran-contiguous, or even - discontiguous). - """ - if isinstance(obj, (_bytes, _unicode)): - if unicode is None: - if isinstance(obj, _unicode): - unicode = True - else: - unicode = False - - if itemsize is None: - itemsize = _len(obj) - shape = _len(obj) // itemsize - - if unicode: - if sys.maxunicode == 0xffff: - # On a narrow Python build, the buffer for Unicode - # strings is UCS2, which doesn't match the buffer for - # Numpy Unicode types, which is ALWAYS UCS4. - # Therefore, we need to convert the buffer. On Python - # 2.6 and later, we can use the utf_32 codec. Earlier - # versions don't have that codec, so we convert to a - # numerical array that matches the input buffer, and - # then use Numpy to convert it to UCS4. All of this - # should happen in native endianness. 
- if sys.hexversion >= 0x2060000: - obj = obj.encode('utf_32') - else: - if isinstance(obj, str): - ascii = numpy.frombuffer(obj, 'u1') - ucs4 = numpy.array(ascii, 'u4') - obj = ucs4.data - else: - ucs2 = numpy.frombuffer(obj, 'u2') - ucs4 = numpy.array(ucs2, 'u4') - obj = ucs4.data - else: - obj = _unicode(obj) - else: - # Let the default Unicode -> string encoding (if any) take - # precedence. - obj = _bytes(obj) - - return chararray(shape, itemsize=itemsize, unicode=unicode, - buffer=obj, order=order) - - if isinstance(obj, (list, tuple)): - obj = numpy.asarray(obj) - - if isinstance(obj, ndarray) and issubclass(obj.dtype.type, character): - # If we just have a vanilla chararray, create a chararray - # view around it. - if not isinstance(obj, chararray): - obj = obj.view(chararray) - - if itemsize is None: - itemsize = obj.itemsize - # itemsize is in 8-bit chars, so for Unicode, we need - # to divide by the size of a single Unicode character, - # which for Numpy is always 4 - if issubclass(obj.dtype.type, unicode_): - itemsize /= 4 - - if unicode is None: - if issubclass(obj.dtype.type, unicode_): - unicode = True - else: - unicode = False - - if unicode: - dtype = unicode_ - else: - dtype = string_ - - if order is not None: - obj = numpy.asarray(obj, order=order) - if (copy - or (itemsize != obj.itemsize) - or (not unicode and isinstance(obj, unicode_)) - or (unicode and isinstance(obj, string_))): - obj = obj.astype((dtype, long(itemsize))) - return obj - - if isinstance(obj, ndarray) and issubclass(obj.dtype.type, object): - if itemsize is None: - # Since no itemsize was specified, convert the input array to - # a list so the ndarray constructor will automatically - # determine the itemsize for us. 
-            obj = obj.tolist()
-            # Fall through to the default case
-
-    if unicode:
-        dtype = unicode_
-    else:
-        dtype = string_
-
-    if itemsize is None:
-        val = narray(obj, dtype=dtype, order=order, subok=True)
-    else:
-        val = narray(obj, dtype=(dtype, itemsize), order=order, subok=True)
-    return val.view(chararray)
-
-
-def asarray(obj, itemsize=None, unicode=None, order=None):
-    """
-    Convert the input to a `chararray`, copying the data only if
-    necessary.
-
-    Versus a regular Numpy array of type `str` or `unicode`, this
-    class adds the following functionality:
-
-    1) values automatically have whitespace removed from the end
-       when indexed
-
-    2) comparison operators automatically remove whitespace from the
-       end when comparing values
-
-    3) vectorized string operations are provided as methods
-       (e.g. `str.endswith`) and infix operators (e.g. +, *, %)
-
-    Parameters
-    ----------
-    obj : array of str or unicode-like
-
-    itemsize : int, optional
-        `itemsize` is the number of characters per scalar in the
-        resulting array. If `itemsize` is None, and `obj` is an
-        object array or a Python list, the `itemsize` will be
-        automatically determined. If `itemsize` is provided and `obj`
-        is of type str or unicode, then the `obj` string will be
-        chunked into `itemsize` pieces.
-
-    unicode : bool, optional
-        When true, the resulting `chararray` can contain Unicode
-        characters, when false only 8-bit characters. If unicode is
-        `None` and `obj` is one of the following:
-
-          - a `chararray`,
-          - an ndarray of type `str` or `unicode`
-          - a Python str or unicode object,
-
-        then the unicode setting of the output array will be
-        automatically determined.
-
-    order : {'C', 'F'}, optional
-        Specify the order of the array. If order is 'C' (default), then the
-        array will be in C-contiguous order (last-index varies the
-        fastest). If order is 'F', then the returned array
-        will be in Fortran-contiguous order (first-index varies the
-        fastest).
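The three behaviors that `array`/`asarray` list above (trailing-whitespace stripping on indexing, whitespace-insensitive comparison, and vectorized string methods) can be exercised with a short sketch; `np.char.array` is the public spelling of the constructor defined here:

```python
import numpy as np

# np.char.array is the public entry point for the array() constructor above.
c = np.char.array(['hello   ', 'numpy'])

# 1) trailing whitespace is stripped when a value is indexed
first = c[0]                  # 'hello', not 'hello   '

# 2) comparison operators also ignore trailing whitespace
mask = (c == 'hello')         # True for the first element only

# 3) string methods are vectorized across the whole array
upper = c.upper()             # ['HELLO', 'NUMPY']
```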
- """ - return array(obj, itemsize, copy=False, - unicode=unicode, order=order) diff --git a/pythonPackages/numpy/numpy/core/fromnumeric.py b/pythonPackages/numpy/numpy/core/fromnumeric.py deleted file mode 100755 index 39bb8b3481..0000000000 --- a/pythonPackages/numpy/numpy/core/fromnumeric.py +++ /dev/null @@ -1,2529 +0,0 @@ -# Module containing non-deprecated functions borrowed from Numeric. -__docformat__ = "restructuredtext en" - -# functions that are now methods -__all__ = ['take', 'reshape', 'choose', 'repeat', 'put', - 'swapaxes', 'transpose', 'sort', 'argsort', 'argmax', 'argmin', - 'searchsorted', 'alen', - 'resize', 'diagonal', 'trace', 'ravel', 'nonzero', 'shape', - 'compress', 'clip', 'sum', 'product', 'prod', 'sometrue', 'alltrue', - 'any', 'all', 'cumsum', 'cumproduct', 'cumprod', 'ptp', 'ndim', - 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze', - 'amax', 'amin', - ] - -import multiarray as mu -import umath as um -import numerictypes as nt -from numeric import asarray, array, asanyarray, concatenate -_dt_ = nt.sctype2char - -import types - -try: - _gentype = types.GeneratorType -except AttributeError: - _gentype = types.NoneType - -# save away Python sum -_sum_ = sum - -# functions that are now methods -def _wrapit(obj, method, *args, **kwds): - try: - wrap = obj.__array_wrap__ - except AttributeError: - wrap = None - result = getattr(asarray(obj),method)(*args, **kwds) - if wrap: - if not isinstance(result, mu.ndarray): - result = asarray(result) - result = wrap(result) - return result - - -def take(a, indices, axis=None, out=None, mode='raise'): - """ - Take elements from an array along an axis. - - This function does the same thing as "fancy" indexing (indexing arrays - using arrays); however, it can be easier to use if you need elements - along a given axis. - - Parameters - ---------- - a : array_like - The source array. - indices : array_like - The indices of the values to extract. 
- axis : int, optional - The axis over which to select values. By default, the flattened - input array is used. - out : ndarray, optional - If provided, the result will be placed in this array. It should - be of the appropriate shape and dtype. - mode : {'raise', 'wrap', 'clip'}, optional - Specifies how out-of-bounds indices will behave. - - * 'raise' -- raise an error (default) - * 'wrap' -- wrap around - * 'clip' -- clip to the range - - 'clip' mode means that all indices that are too large are replaced - by the index that addresses the last element along that axis. Note - that this disables indexing with negative numbers. - - Returns - ------- - subarray : ndarray - The returned array has the same type as `a`. - - See Also - -------- - ndarray.take : equivalent method - - Examples - -------- - >>> a = [4, 3, 5, 7, 6, 8] - >>> indices = [0, 1, 4] - >>> np.take(a, indices) - array([4, 3, 6]) - - In this example if `a` is an ndarray, "fancy" indexing can be used. - - >>> a = np.array(a) - >>> a[indices] - array([4, 3, 6]) - - """ - try: - take = a.take - except AttributeError: - return _wrapit(a, 'take', indices, axis, out, mode) - return take(indices, axis, out, mode) - - -# not deprecated --- copy if necessary, view otherwise -def reshape(a, newshape, order='C'): - """ - Gives a new shape to an array without changing its data. - - Parameters - ---------- - a : array_like - Array to be reshaped. - newshape : int or tuple of ints - The new shape should be compatible with the original shape. If - an integer, then the result will be a 1-D array of that length. - One shape dimension can be -1. In this case, the value is inferred - from the length of the array and remaining dimensions. - order : {'C', 'F'}, optional - Determines whether the array data should be viewed as in C - (row-major) order or FORTRAN (column-major) order. - - Returns - ------- - reshaped_array : ndarray - This will be a new view object if possible; otherwise, it will - be a copy. 
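The equivalence between `take` and "fancy" indexing described in the `take` docstring above, and the effect of `mode='clip'`, can be checked directly (a small sketch):

```python
import numpy as np

a = np.array([4, 3, 5, 7, 6, 8])
indices = [0, 1, 4]

# take along the flattened array matches "fancy" indexing
taken = np.take(a, indices)
same = np.array_equal(taken, a[indices])

# 'clip' mode replaces a too-large index with the last valid one
clipped = np.take(a, [10], mode='clip')   # index 10 clipped to 5
```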
-
-
-    See Also
-    --------
-    ndarray.reshape : Equivalent method.
-
-    Notes
-    -----
-
-    It is not always possible to change the shape of an array without
-    copying the data. If you want an error to be raised if the data is copied,
-    you should assign the new shape to the shape attribute of the array::
-
-     >>> a = np.zeros((10, 2))
-     # A transpose makes the array non-contiguous
-     >>> b = a.T
-     # Taking a view makes it possible to modify the shape without modifying the
-     # initial object.
-     >>> c = b.view()
-     >>> c.shape = (20)
-     AttributeError: incompatible shape for a non-contiguous array
-
-
-    Examples
-    --------
-    >>> a = np.array([[1,2,3], [4,5,6]])
-    >>> np.reshape(a, 6)
-    array([1, 2, 3, 4, 5, 6])
-    >>> np.reshape(a, 6, order='F')
-    array([1, 4, 2, 5, 3, 6])
-
-    >>> np.reshape(a, (3,-1))  # the unspecified value is inferred to be 2
-    array([[1, 2],
-           [3, 4],
-           [5, 6]])
-
-    """
-    try:
-        reshape = a.reshape
-    except AttributeError:
-        return _wrapit(a, 'reshape', newshape, order=order)
-    return reshape(newshape, order=order)
-
-
-def choose(a, choices, out=None, mode='raise'):
-    """
-    Construct an array from an index array and a set of arrays to choose from.
-
-    First of all, if confused or uncertain, definitely look at the Examples -
-    in its full generality, this function is less simple than it might
-    seem from the following code description (below ndi =
-    `numpy.lib.index_tricks`):
-
-    ``np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])``.
-
-    But this omits some subtleties. Here is a fully general summary:
-
-    Given an "index" array (`a`) of integers and a sequence of `n` arrays
-    (`choices`), `a` and each choice array are first broadcast, as necessary,
-    to arrays of a common shape; calling these *Ba* and *Bchoices[i], i =
-    0,...,n-1* we have that, necessarily, ``Ba.shape == Bchoices[i].shape``
-    for each `i`.
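The reshape note above about copy-free reshaping can be made concrete (a sketch): `np.reshape` will silently copy when it has to, while assigning to `.shape` never copies and raises instead.

```python
import numpy as np

a = np.zeros((10, 2))
b = a.T                    # transpose: a non-contiguous view

# np.reshape succeeds here, but must silently copy the data
flat = np.reshape(b, 20)

# assigning to .shape refuses to copy and raises instead
raised = False
try:
    b.shape = (20,)
except AttributeError:
    raised = True
```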
Then, a new array with shape ``Ba.shape`` is created as - follows: - - * if ``mode=raise`` (the default), then, first of all, each element of - `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that - `i` (in that range) is the value at the `(j0, j1, ..., jm)` position - in `Ba` - then the value at the same position in the new array is the - value in `Bchoices[i]` at that same position; - - * if ``mode=wrap``, values in `a` (and thus `Ba`) may be any (signed) - integer; modular arithmetic is used to map integers outside the range - `[0, n-1]` back into that range; and then the new array is constructed - as above; - - * if ``mode=clip``, values in `a` (and thus `Ba`) may be any (signed) - integer; negative integers are mapped to 0; values greater than `n-1` - are mapped to `n-1`; and then the new array is constructed as above. - - Parameters - ---------- - a : int array - This array must contain integers in `[0, n-1]`, where `n` is the number - of choices, unless ``mode=wrap`` or ``mode=clip``, in which cases any - integers are permissible. - choices : sequence of arrays - Choice arrays. `a` and all of the choices must be broadcastable to the - same shape. If `choices` is itself an array (not recommended), then - its outermost dimension (i.e., the one corresponding to - ``choices.shape[0]``) is taken as defining the "sequence". - out : array, optional - If provided, the result will be inserted into this array. It should - be of the appropriate shape and dtype. - mode : {'raise' (default), 'wrap', 'clip'}, optional - Specifies how indices outside `[0, n-1]` will be treated: - - * 'raise' : an exception is raised - * 'wrap' : value becomes value mod `n` - * 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1 - - Returns - ------- - merged_array : array - The merged result. - - Raises - ------ - ValueError: shape mismatch - If `a` and each choice array are not all broadcastable to the same - shape. 
- - See Also - -------- - ndarray.choose : equivalent method - - Notes - ----- - To reduce the chance of misinterpretation, even though the following - "abuse" is nominally supported, `choices` should neither be, nor be - thought of as, a single array, i.e., the outermost sequence-like container - should be either a list or a tuple. - - Examples - -------- - - >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], - ... [20, 21, 22, 23], [30, 31, 32, 33]] - >>> np.choose([2, 3, 1, 0], choices - ... # the first element of the result will be the first element of the - ... # third (2+1) "array" in choices, namely, 20; the second element - ... # will be the second element of the fourth (3+1) choice array, i.e., - ... # 31, etc. - ... ) - array([20, 31, 12, 3]) - >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1) - array([20, 31, 12, 3]) - >>> # because there are 4 choice arrays - >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4) - array([20, 1, 12, 3]) - >>> # i.e., 0 - - A couple examples illustrating how choose broadcasts: - - >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]] - >>> choices = [-10, 10] - >>> np.choose(a, choices) - array([[ 10, -10, 10], - [-10, 10, -10], - [ 10, -10, 10]]) - - >>> # With thanks to Anne Archibald - >>> a = np.array([0, 1]).reshape((2,1,1)) - >>> c1 = np.array([1, 2, 3]).reshape((1,3,1)) - >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5)) - >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2 - array([[[ 1, 1, 1, 1, 1], - [ 2, 2, 2, 2, 2], - [ 3, 3, 3, 3, 3]], - [[-1, -2, -3, -4, -5], - [-1, -2, -3, -4, -5], - [-1, -2, -3, -4, -5]]]) - - """ - try: - choose = a.choose - except AttributeError: - return _wrapit(a, 'choose', choices, out=out, mode=mode) - return choose(choices, out=out, mode=mode) - - -def repeat(a, repeats, axis=None): - """ - Repeat elements of an array. - - Parameters - ---------- - a : array_like - Input array. 
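The one-line characterization near the top of the `choose` docstring can be verified literally (a sketch; `ndi.ndindex` is the public `np.ndindex`, and an explicit reshape is added since the list comprehension yields a flat sequence):

```python
import numpy as np

a = np.array([[1, 0], [0, 1]])
c = [np.full((2, 2), -1), np.full((2, 2), 99)]

lhs = np.choose(a, c)
# the docstring's characterization, flattened and reshaped back
rhs = np.array([c[a[I]][I] for I in np.ndindex(a.shape)]).reshape(a.shape)
ok = np.array_equal(lhs, rhs)
```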
- repeats : {int, array of ints} - The number of repetitions for each element. `repeats` is broadcasted - to fit the shape of the given axis. - axis : int, optional - The axis along which to repeat values. By default, use the - flattened input array, and return a flat output array. - - Returns - ------- - repeated_array : ndarray - Output array which has the same shape as `a`, except along - the given axis. - - See Also - -------- - tile : Tile an array. - - Examples - -------- - >>> x = np.array([[1,2],[3,4]]) - >>> np.repeat(x, 2) - array([1, 1, 2, 2, 3, 3, 4, 4]) - >>> np.repeat(x, 3, axis=1) - array([[1, 1, 1, 2, 2, 2], - [3, 3, 3, 4, 4, 4]]) - >>> np.repeat(x, [1, 2], axis=0) - array([[1, 2], - [3, 4], - [3, 4]]) - - """ - try: - repeat = a.repeat - except AttributeError: - return _wrapit(a, 'repeat', repeats, axis) - return repeat(repeats, axis) - - -def put(a, ind, v, mode='raise'): - """ - Replaces specified elements of an array with given values. - - The indexing works on the flattened target array. `put` is roughly - equivalent to: - - :: - - a.flat[ind] = v - - Parameters - ---------- - a : ndarray - Target array. - ind : array_like - Target indices, interpreted as integers. - v : array_like - Values to place in `a` at target indices. If `v` is shorter than - `ind` it will be repeated as necessary. - mode : {'raise', 'wrap', 'clip'}, optional - Specifies how out-of-bounds indices will behave. - - * 'raise' -- raise an error (default) - * 'wrap' -- wrap around - * 'clip' -- clip to the range - - 'clip' mode means that all indices that are too large are replaced - by the index that addresses the last element along that axis. Note - that this disables indexing with negative numbers. 
- - See Also - -------- - putmask, place - - Examples - -------- - >>> a = np.arange(5) - >>> np.put(a, [0, 2], [-44, -55]) - >>> a - array([-44, 1, -55, 3, 4]) - - >>> a = np.arange(5) - >>> np.put(a, 22, -5, mode='clip') - >>> a - array([ 0, 1, 2, 3, -5]) - - """ - return a.put(ind, v, mode) - - -def swapaxes(a, axis1, axis2): - """ - Interchange two axes of an array. - - Parameters - ---------- - a : array_like - Input array. - axis1 : int - First axis. - axis2 : int - Second axis. - - Returns - ------- - a_swapped : ndarray - If `a` is an ndarray, then a view of `a` is returned; otherwise - a new array is created. - - Examples - -------- - >>> x = np.array([[1,2,3]]) - >>> np.swapaxes(x,0,1) - array([[1], - [2], - [3]]) - - >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]]) - >>> x - array([[[0, 1], - [2, 3]], - [[4, 5], - [6, 7]]]) - - >>> np.swapaxes(x,0,2) - array([[[0, 4], - [2, 6]], - [[1, 5], - [3, 7]]]) - - """ - try: - swapaxes = a.swapaxes - except AttributeError: - return _wrapit(a, 'swapaxes', axis1, axis2) - return swapaxes(axis1, axis2) - - -def transpose(a, axes=None): - """ - Permute the dimensions of an array. - - Parameters - ---------- - a : array_like - Input array. - axes : list of ints, optional - By default, reverse the dimensions, otherwise permute the axes - according to the values given. - - Returns - ------- - p : ndarray - `a` with its axes permuted. A view is returned whenever - possible. - - See Also - -------- - rollaxis - - Examples - -------- - >>> x = np.arange(4).reshape((2,2)) - >>> x - array([[0, 1], - [2, 3]]) - - >>> np.transpose(x) - array([[0, 2], - [1, 3]]) - - >>> x = np.ones((1, 2, 3)) - >>> np.transpose(x, (1, 0, 2)).shape - (2, 1, 3) - - """ - try: - transpose = a.transpose - except AttributeError: - return _wrapit(a, 'transpose', axes) - return transpose(axes) - - -def sort(a, axis=-1, kind='quicksort', order=None): - """ - Return a sorted copy of an array. 
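A quick check of the relationship between `transpose` and `swapaxes` covered above (a sketch): for a 2-D array, swapping axes 0 and 1 is a plain transpose, and both return views when possible, so writes propagate back to the original.

```python
import numpy as np

x = np.arange(6).reshape(2, 3)

# for 2-D arrays, transpose and swapaxes(0, 1) agree
agree = np.array_equal(np.transpose(x), np.swapaxes(x, 0, 1))

# both return views when possible: writing through the transpose
# modifies the original array
t = np.transpose(x)
t[0, 1] = 99               # same memory as x[1, 0]
```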
- - Parameters - ---------- - a : array_like - Array to be sorted. - axis : int or None, optional - Axis along which to sort. If None, the array is flattened before - sorting. The default is -1, which sorts along the last axis. - kind : {'quicksort', 'mergesort', 'heapsort'}, optional - Sorting algorithm. Default is 'quicksort'. - order : list, optional - When `a` is a structured array, this argument specifies which fields - to compare first, second, and so on. This list does not need to - include all of the fields. - - Returns - ------- - sorted_array : ndarray - Array of the same type and shape as `a`. - - See Also - -------- - ndarray.sort : Method to sort an array in-place. - argsort : Indirect sort. - lexsort : Indirect stable sort on multiple keys. - searchsorted : Find elements in a sorted array. - - Notes - ----- - The various sorting algorithms are characterized by their average speed, - worst case performance, work space size, and whether they are stable. A - stable sort keeps items with the same key in the same relative - order. The three available algorithms have the following - properties: - - =========== ======= ============= ============ ======= - kind speed worst case work space stable - =========== ======= ============= ============ ======= - 'quicksort' 1 O(n^2) 0 no - 'mergesort' 2 O(n*log(n)) ~n/2 yes - 'heapsort' 3 O(n*log(n)) 0 no - =========== ======= ============= ============ ======= - - All the sort algorithms make temporary copies of the data when - sorting along any but the last axis. Consequently, sorting along - the last axis is faster and uses less space than sorting along - any other axis. - - The sort order for complex numbers is lexicographic. If both the real - and imaginary parts are non-nan then the order is determined by the - real parts except when they are equal, in which case the order is - determined by the imaginary parts. 
-
-    Previous to numpy 1.4.0 sorting real and complex arrays containing nan
-    values led to undefined behaviour. In numpy versions >= 1.4.0 nan
-    values are sorted to the end. The extended sort order is:
-
-      * Real: [R, nan]
-      * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj]
-
-    where R is a non-nan real value. Complex values with the same nan
-    placements are sorted according to the non-nan part if it exists.
-    Non-nan values are sorted as before.
-
-    Examples
-    --------
-    >>> a = np.array([[1,4],[3,1]])
-    >>> np.sort(a)                # sort along the last axis
-    array([[1, 4],
-           [1, 3]])
-    >>> np.sort(a, axis=None)     # sort the flattened array
-    array([1, 1, 3, 4])
-    >>> np.sort(a, axis=0)        # sort along the first axis
-    array([[1, 1],
-           [3, 4]])
-
-    Use the `order` keyword to specify a field to use when sorting a
-    structured array:
-
-    >>> dtype = [('name', 'S10'), ('height', float), ('age', int)]
-    >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38),
-    ...           ('Galahad', 1.7, 38)]
-    >>> a = np.array(values, dtype=dtype)       # create a structured array
-    >>> np.sort(a, order='height')                        # doctest: +SKIP
-    array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41),
-           ('Lancelot', 1.8999999999999999, 38)],
-          dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')])
-
-    Sort by age, then height if ages are equal:
-
-    >>> np.sort(a, order=['age', 'height'])               # doctest: +SKIP
-    array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38),
-           ('Arthur', 1.8, 41)],
-          dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')])
-
-    """
-    if axis is None:
-        a = asanyarray(a).flatten()
-        axis = -1
-    else:
-        a = asanyarray(a).copy()
-    a.sort(axis, kind, order)
-    return a
-
-
-def argsort(a, axis=-1, kind='quicksort', order=None):
-    """
-    Returns the indices that would sort an array.
-
-    Perform an indirect sort along the given axis using the algorithm
-    specified by the `kind` keyword. It returns an array of indices of
-    the same shape as `a` that index data along the given axis in sorted
-    order.
-
-    Parameters
-    ----------
-    a : array_like
-        Array to sort.
-    axis : int or None, optional
-        Axis along which to sort. The default is -1 (the last axis). If
-        None, the flattened array is used.
-    kind : {'quicksort', 'mergesort', 'heapsort'}, optional
-        Sorting algorithm.
-    order : list, optional
-        When `a` is an array with fields defined, this argument specifies
-        which fields to compare first, second, etc. Not all fields need
-        be specified.
-
-    Returns
-    -------
-    index_array : ndarray, int
-        Array of indices that sort `a` along the specified axis.
-        In other words, ``a[index_array]`` yields a sorted `a`.
-
-    See Also
-    --------
-    sort : Describes sorting algorithms used.
-    lexsort : Indirect stable sort with multiple keys.
-    ndarray.sort : Inplace sort.
-
-    Notes
-    -----
-    See `sort` for notes on the different sorting algorithms.
-
-    As of NumPy 1.4.0 `argsort` works with real/complex arrays containing
-    nan values. The enhanced sort order is documented in `sort`.
-
-    Examples
-    --------
-    One dimensional array:
-
-    >>> x = np.array([3, 1, 2])
-    >>> np.argsort(x)
-    array([1, 2, 0])
-
-    Two-dimensional array:
-
-    >>> x = np.array([[0, 3], [2, 2]])
-    >>> x
-    array([[0, 3],
-           [2, 2]])
-
-    >>> np.argsort(x, axis=0)
-    array([[0, 1],
-           [1, 0]])
-
-    >>> np.argsort(x, axis=1)
-    array([[0, 1],
-           [0, 1]])
-
-    Sorting with keys:
-
-    >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')])
-    >>> x
-    array([(1, 0), (0, 1)],
-          dtype=[('x', '<i4'), ('y', '<i4')])
-
-    >>> np.argsort(x, order=('x','y'))
-    array([1, 0])
-
-    >>> np.argsort(x, order=('y','x'))
-    array([0, 1])
-
-    """
-    try:
-        argsort = a.argsort
-    except AttributeError:
- return _wrapit(a, 'argsort', axis, kind, order) - return argsort(axis, kind, order) - - -def argmax(a, axis=None): - """ - Indices of the maximum values along an axis. - - Parameters - ---------- - a : array_like - Input array. - axis : int, optional - By default, the index is into the flattened array, otherwise - along the specified axis. - - Returns - ------- - index_array : ndarray of ints - Array of indices into the array. It has the same shape as `a.shape` - with the dimension along `axis` removed. - - See Also - -------- - ndarray.argmax, argmin - amax : The maximum value along a given axis. - unravel_index : Convert a flat index into an index tuple. - - Notes - ----- - In case of multiple occurrences of the maximum values, the indices - corresponding to the first occurrence are returned. - - Examples - -------- - >>> a = np.arange(6).reshape(2,3) - >>> a - array([[0, 1, 2], - [3, 4, 5]]) - >>> np.argmax(a) - 5 - >>> np.argmax(a, axis=0) - array([1, 1, 1]) - >>> np.argmax(a, axis=1) - array([2, 2]) - - >>> b = np.arange(6) - >>> b[1] = 5 - >>> b - array([0, 5, 2, 3, 4, 5]) - >>> np.argmax(b) # Only the first occurrence is returned. - 1 - - """ - try: - argmax = a.argmax - except AttributeError: - return _wrapit(a, 'argmax', axis) - return argmax(axis) - - -def argmin(a, axis=None): - """ - Return the indices of the minimum values along an axis. - - See Also - -------- - argmax : Similar function. Please refer to `numpy.argmax` for detailed - documentation. - - """ - try: - argmin = a.argmin - except AttributeError: - return _wrapit(a, 'argmin', axis) - return argmin(axis) - - -def searchsorted(a, v, side='left'): - """ - Find indices where elements should be inserted to maintain order. - - Find the indices into a sorted array `a` such that, if the corresponding - elements in `v` were inserted before the indices, the order of `a` would - be preserved. - - Parameters - ---------- - a : 1-D array_like - Input array, sorted in ascending order. 
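As the `argmax` docstring above notes, the flat index returned when `axis` is None can be converted back to a coordinate tuple with `unravel_index` (a small sketch):

```python
import numpy as np

a = np.array([[0, 1, 2],
              [3, 9, 5]])

flat = np.argmax(a)                       # index into the flattened array
coords = np.unravel_index(flat, a.shape)  # back to a (row, col) tuple
per_col = np.argmax(a, axis=0)            # row index of each column's max
```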
- v : array_like - Values to insert into `a`. - side : {'left', 'right'}, optional - If 'left', the index of the first suitable location found is given. If - 'right', return the last such index. If there is no suitable - index, return either 0 or N (where N is the length of `a`). - - Returns - ------- - indices : array of ints - Array of insertion points with the same shape as `v`. - - See Also - -------- - sort : Return a sorted copy of an array. - histogram : Produce histogram from 1-D data. - - Notes - ----- - Binary search is used to find the required insertion points. - - As of Numpy 1.4.0 `searchsorted` works with real/complex arrays containing - `nan` values. The enhanced sort order is documented in `sort`. - - Examples - -------- - >>> np.searchsorted([1,2,3,4,5], 3) - 2 - >>> np.searchsorted([1,2,3,4,5], 3, side='right') - 3 - >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3]) - array([0, 5, 1, 2]) - - """ - try: - searchsorted = a.searchsorted - except AttributeError: - return _wrapit(a, 'searchsorted', v, side) - return searchsorted(v, side) - - -def resize(a, new_shape): - """ - Return a new array with the specified shape. - - If the new array is larger than the original array, then the new - array is filled with repeated copies of `a`. Note that this behavior - is different from a.resize(new_shape) which fills with zeros instead - of repeated copies of `a`. - - Parameters - ---------- - a : array_like - Array to be resized. - - new_shape : int or tuple of int - Shape of resized array. - - Returns - ------- - reshaped_array : ndarray - The new array is formed from the data in the old array, repeated - if necessary to fill out the required number of elements. The - data are repeated in the order that they are stored in memory. - - See Also - -------- - ndarray.resize : resize an array in-place. 
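`searchsorted`, described above, is commonly used to keep an array sorted under insertion; the `side` argument controls where ties land (a sketch):

```python
import numpy as np

a = np.array([1, 2, 3, 3, 5])

left = np.searchsorted(a, 3, side='left')    # before the first 3
right = np.searchsorted(a, 3, side='right')  # after the last 3

# inserting at the returned position keeps the array sorted
i = np.searchsorted(a, 4)
b = np.insert(a, i, 4)
```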
-
-    Examples
-    --------
-    >>> a=np.array([[0,1],[2,3]])
-    >>> np.resize(a,(1,4))
-    array([[0, 1, 2, 3]])
-    >>> np.resize(a,(2,4))
-    array([[0, 1, 2, 3],
-           [0, 1, 2, 3]])
-
-    """
-    if isinstance(new_shape, (int, nt.integer)):
-        new_shape = (new_shape,)
-    a = ravel(a)
-    Na = len(a)
-    if not Na: return mu.zeros(new_shape, a.dtype.char)
-    total_size = um.multiply.reduce(new_shape)
-    n_copies = int(total_size / Na)
-    extra = total_size % Na
-
-    if total_size == 0:
-        return a[:0]
-
-    if extra != 0:
-        n_copies = n_copies+1
-        extra = Na-extra
-
-    a = concatenate( (a,)*n_copies)
-    if extra > 0:
-        a = a[:-extra]
-
-    return reshape(a, new_shape)
-
-
-def squeeze(a):
-    """
-    Remove single-dimensional entries from the shape of an array.
-
-    Parameters
-    ----------
-    a : array_like
-        Input data.
-
-    Returns
-    -------
-    squeezed : ndarray
-        The input array, but with all dimensions of length 1
-        removed. Whenever possible, a view on `a` is returned.
-
-    Examples
-    --------
-    >>> x = np.array([[[0], [1], [2]]])
-    >>> x.shape
-    (1, 3, 1)
-    >>> np.squeeze(x).shape
-    (3,)
-
-    """
-    try:
-        squeeze = a.squeeze
-    except AttributeError:
-        return _wrapit(a, 'squeeze')
-    return squeeze()
-
-
-def diagonal(a, offset=0, axis1=0, axis2=1):
-    """
-    Return specified diagonals.
-
-    If `a` is 2-D, returns the diagonal of `a` with the given offset,
-    i.e., the collection of elements of the form ``a[i, i+offset]``. If
-    `a` has more than two dimensions, then the axes specified by `axis1`
-    and `axis2` are used to determine the 2-D sub-array whose diagonal is
-    returned. The shape of the resulting array can be determined by
-    removing `axis1` and `axis2` and appending an index to the right equal
-    to the size of the resulting diagonals.
-
-    Parameters
-    ----------
-    a : array_like
-        Array from which the diagonals are taken.
-    offset : int, optional
-        Offset of the diagonal from the main diagonal. Can be positive or
-        negative. Defaults to main diagonal (0).
- axis1 : int, optional - Axis to be used as the first axis of the 2-D sub-arrays from which - the diagonals should be taken. Defaults to first axis (0). - axis2 : int, optional - Axis to be used as the second axis of the 2-D sub-arrays from - which the diagonals should be taken. Defaults to second axis (1). - - Returns - ------- - array_of_diagonals : ndarray - If `a` is 2-D, a 1-D array containing the diagonal is returned. - If the dimension of `a` is larger, then an array of diagonals is - returned, "packed" from left-most dimension to right-most (e.g., - if `a` is 3-D, then the diagonals are "packed" along rows). - - Raises - ------ - ValueError - If the dimension of `a` is less than 2. - - See Also - -------- - diag : MATLAB work-a-like for 1-D and 2-D arrays. - diagflat : Create diagonal arrays. - trace : Sum along diagonals. - - Examples - -------- - >>> a = np.arange(4).reshape(2,2) - >>> a - array([[0, 1], - [2, 3]]) - >>> a.diagonal() - array([0, 3]) - >>> a.diagonal(1) - array([1]) - - A 3-D example: - - >>> a = np.arange(8).reshape(2,2,2); a - array([[[0, 1], - [2, 3]], - [[4, 5], - [6, 7]]]) - >>> a.diagonal(0, # Main diagonals of two arrays created by skipping - ... 0, # across the outer(left)-most axis last and - ... 1) # the "middle" (row) axis first. - array([[0, 6], - [1, 7]]) - - The sub-arrays whose main diagonals we just obtained; note that each - corresponds to fixing the right-most (column) axis, and that the - diagonals are "packed" in rows. - - >>> a[:,:,0] # main diagonal is [0 6] - array([[0, 2], - [4, 6]]) - >>> a[:,:,1] # main diagonal is [1 7] - array([[1, 3], - [5, 7]]) - - """ - return asarray(a).diagonal(offset, axis1, axis2) - - -def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None): - """ - Return the sum along diagonals of the array. - - If `a` is 2-D, the sum along its diagonal with the given offset - is returned, i.e., the sum of elements ``a[i,i+offset]`` for all i. 
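`trace`, introduced above, is effectively `diagonal` followed by a sum along the diagonal axis, which can be checked directly for a 2-D array (a sketch):

```python
import numpy as np

a = np.arange(9).reshape(3, 3)

d = np.diagonal(a, offset=1)     # the first super-diagonal
t = np.trace(a, offset=1)        # its sum
same = (t == d.sum())
```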
- - If `a` has more than two dimensions, then the axes specified by axis1 and - axis2 are used to determine the 2-D sub-arrays whose traces are returned. - The shape of the resulting array is the same as that of `a` with `axis1` - and `axis2` removed. - - Parameters - ---------- - a : array_like - Input array, from which the diagonals are taken. - offset : int, optional - Offset of the diagonal from the main diagonal. Can be both positive - and negative. Defaults to 0. - axis1, axis2 : int, optional - Axes to be used as the first and second axis of the 2-D sub-arrays - from which the diagonals should be taken. Defaults are the first two - axes of `a`. - dtype : dtype, optional - Determines the data-type of the returned array and of the accumulator - where the elements are summed. If dtype has the value None and `a` is - of integer type of precision less than the default integer - precision, then the default integer precision is used. Otherwise, - the precision is the same as that of `a`. - out : ndarray, optional - Array into which the output is placed. Its type is preserved and - it must be of the right shape to hold the output. - - Returns - ------- - sum_along_diagonals : ndarray - If `a` is 2-D, the sum along the diagonal is returned. If `a` has - larger dimensions, then an array of sums along diagonals is returned. - - See Also - -------- - diag, diagonal, diagflat - - Examples - -------- - >>> np.trace(np.eye(3)) - 3.0 - >>> a = np.arange(8).reshape((2,2,2)) - >>> np.trace(a) - array([6, 8]) - - >>> a = np.arange(24).reshape((2,2,2,3)) - >>> np.trace(a).shape - (2, 3) - - """ - return asarray(a).trace(offset, axis1, axis2, dtype, out) - -def ravel(a, order='C'): - """ - Return a flattened array. - - A 1-D array, containing the elements of the input, is returned. A copy is - made only if needed. - - Parameters - ---------- - a : array_like - Input array. The elements in `a` are read in the order specified by - `order`, and packed as a 1-D array. 
- order : {'C','F'}, optional - The elements of `a` are read in this order. It can be either - 'C' for row-major order, or 'F' for column-major order. - By default, row-major order is used. - - Returns - ------- - 1d_array : ndarray - Output of the same dtype as `a`, and of shape ``(a.size,)``. - - See Also - -------- - ndarray.flat : 1-D iterator over an array. - ndarray.flatten : 1-D array copy of the elements of an array - in row-major order. - - Notes - ----- - In row-major order, the row index varies the slowest, and the column - index the quickest. This can be generalized to multiple dimensions, - where row-major order implies that the index along the first axis - varies slowest, and the index along the last quickest. The opposite holds - for Fortran-, or column-major, mode. - - Examples - -------- - If an array is in C-order (default), then `ravel` is equivalent - to ``reshape(-1)``: - - >>> x = np.array([[1, 2, 3], [4, 5, 6]]) - >>> print x.reshape(-1) - [1 2 3 4 5 6] - - >>> print np.ravel(x) - [1 2 3 6 5 6] - - When flattening using Fortran-order, however, we see - - >>> print np.ravel(x, order='F') - [1 4 2 5 3 6] - - """ - return asarray(a).ravel(order) - - -def nonzero(a): - """ - Return the indices of the elements that are non-zero. - - Returns a tuple of arrays, one for each dimension of `a`, containing - the indices of the non-zero elements in that dimension. The - corresponding non-zero values can be obtained with:: - - a[nonzero(a)] - - To group the indices by element, rather than dimension, use:: - - transpose(nonzero(a)) - - The result of this is always a 2-D array, with a row for - each non-zero element. - - Parameters - ---------- - a : array_like - Input array. - - Returns - ------- - tuple_of_arrays : tuple - Indices of elements that are non-zero. - - See Also - -------- - flatnonzero : - Return indices that are non-zero in the flattened version of the input - array. - ndarray.nonzero : - Equivalent ndarray method. 
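A small sketch (added for illustration, not from the original file) of the index-recovery and grouping idioms that the `nonzero` docstring describes:

```python
import numpy as np

a = np.array([[3, 0, 0],
              [0, 4, 0],
              [5, 6, 0]])
rows, cols = np.nonzero(a)             # one index array per dimension
values = a[np.nonzero(a)]              # recovers the non-zero entries
coords = np.transpose(np.nonzero(a))   # one (row, col) pair per element
```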
- - Examples - -------- - >>> x = np.eye(3) - >>> x - array([[ 1., 0., 0.], - [ 0., 1., 0.], - [ 0., 0., 1.]]) - >>> np.nonzero(x) - (array([0, 1, 2]), array([0, 1, 2])) - - >>> x[np.nonzero(x)] - array([ 1., 1., 1.]) - >>> np.transpose(np.nonzero(x)) - array([[0, 0], - [1, 1], - [2, 2]]) - - A common use for ``nonzero`` is to find the indices of an array, where - a condition is True. Given an array `a`, the condition `a` > 3 is a - boolean array and since False is interpreted as 0, np.nonzero(a > 3) - yields the indices of the `a` where the condition is true. - - >>> a = np.array([[1,2,3],[4,5,6],[7,8,9]]) - >>> a > 3 - array([[False, False, False], - [ True, True, True], - [ True, True, True]], dtype=bool) - >>> np.nonzero(a > 3) - (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) - - The ``nonzero`` method of the boolean array can also be called. - - >>> (a > 3).nonzero() - (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) - - """ - try: - nonzero = a.nonzero - except AttributeError: - res = _wrapit(a, 'nonzero') - else: - res = nonzero() - return res - - -def shape(a): - """ - Return the shape of an array. - - Parameters - ---------- - a : array_like - Input array. - - Returns - ------- - shape : tuple of ints - The elements of the shape tuple give the lengths of the - corresponding array dimensions. - - See Also - -------- - alen - ndarray.shape : Equivalent array method. - - Examples - -------- - >>> np.shape(np.eye(3)) - (3, 3) - >>> np.shape([[1, 2]]) - (1, 2) - >>> np.shape([0]) - (1,) - >>> np.shape(0) - () - - >>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) - >>> np.shape(a) - (2,) - >>> a.shape - (2,) - - """ - try: - result = a.shape - except AttributeError: - result = asarray(a).shape - return result - - -def compress(condition, a, axis=None, out=None): - """ - Return selected slices of an array along given axis. 
- - When working along a given axis, a slice along that axis is returned in - `output` for each index where `condition` evaluates to True. When - working on a 1-D array, `compress` is equivalent to `extract`. - - Parameters - ---------- - condition : 1-D array of bools - Array that selects which entries to return. If len(condition) - is less than the size of `a` along the given axis, then output is - truncated to the length of the condition array. - a : array_like - Array from which to extract a part. - axis : int, optional - Axis along which to take slices. If None (default), work on the - flattened array. - out : ndarray, optional - Output array. Its type is preserved and it must be of the right - shape to hold the output. - - Returns - ------- - compressed_array : ndarray - A copy of `a` without the slices along axis for which `condition` - is false. - - See Also - -------- - take, choose, diag, diagonal, select - ndarray.compress : Equivalent method. - numpy.doc.ufuncs : Section "Output arguments" - - Examples - -------- - >>> a = np.array([[1, 2], [3, 4], [5, 6]]) - >>> a - array([[1, 2], - [3, 4], - [5, 6]]) - >>> np.compress([0, 1], a, axis=0) - array([[3, 4]]) - >>> np.compress([False, True, True], a, axis=0) - array([[3, 4], - [5, 6]]) - >>> np.compress([False, True], a, axis=1) - array([[2], - [4], - [6]]) - - Working on the flattened array does not return slices along an axis but - selects elements. - - >>> np.compress([False, True], a) - array([2]) - - """ - try: - compress = a.compress - except AttributeError: - return _wrapit(a, 'compress', condition, axis, out) - return compress(condition, axis, out) - - -def clip(a, a_min, a_max, out=None): - """ - Clip (limit) the values in an array. - - Given an interval, values outside the interval are clipped to - the interval edges. For example, if an interval of ``[0, 1]`` - is specified, values smaller than 0 become 0, and values larger - than 1 become 1. 
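The equivalence between `compress` on a 1-D array and `extract`, noted in the docstring above, can be sketched as follows (illustrative only, not part of the deleted source):

```python
import numpy as np

a = np.arange(6)            # [0, 1, 2, 3, 4, 5]
cond = a % 2 == 0
c = np.compress(cond, a)    # elements where cond is True
e = np.extract(cond, a)     # equivalent for 1-D input
```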
- - Parameters - ---------- - a : array_like - Array containing elements to clip. - a_min : scalar or array_like - Minimum value. - a_max : scalar or array_like - Maximum value. If `a_min` or `a_max` are array_like, then they will - be broadcasted to the shape of `a`. - out : ndarray, optional - The results will be placed in this array. It may be the input - array for in-place clipping. `out` must be of the right shape - to hold the output. Its type is preserved. - - Returns - ------- - clipped_array : ndarray - An array with the elements of `a`, but where values - < `a_min` are replaced with `a_min`, and those > `a_max` - with `a_max`. - - See Also - -------- - numpy.doc.ufuncs : Section "Output arguments" - - Examples - -------- - >>> a = np.arange(10) - >>> np.clip(a, 1, 8) - array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8]) - >>> a - array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - >>> np.clip(a, 3, 6, out=a) - array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) - >>> a = np.arange(10) - >>> a - array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8) - array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8]) - - """ - try: - clip = a.clip - except AttributeError: - return _wrapit(a, 'clip', a_min, a_max, out) - return clip(a_min, a_max, out) - - -def sum(a, axis=None, dtype=None, out=None): - """ - Sum of array elements over a given axis. - - Parameters - ---------- - a : array_like - Elements to sum. - axis : integer, optional - Axis over which the sum is taken. By default `axis` is None, - and all elements are summed. - dtype : dtype, optional - The type of the returned array and of the accumulator in which - the elements are summed. By default, the dtype of `a` is used. - An exception is when `a` has an integer type with less precision - than the default platform integer. In that case, the default - platform integer is used instead. - out : ndarray, optional - Array into which the output is placed. By default, a new array is - created. 
If `out` is given, it must be of the appropriate shape - (the shape of `a` with `axis` removed, i.e., - ``numpy.delete(a.shape, axis)``). Its type is preserved. See - `doc.ufuncs` (Section "Output arguments") for more details. - - Returns - ------- - sum_along_axis : ndarray - An array with the same shape as `a`, with the specified - axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar - is returned. If an output array is specified, a reference to - `out` is returned. - - See Also - -------- - ndarray.sum : Equivalent method. - - cumsum : Cumulative sum of array elements. - - trapz : Integration of array values using the composite trapezoidal rule. - - mean, average - - Notes - ----- - Arithmetic is modular when using integer types, and no error is - raised on overflow. - - Examples - -------- - >>> np.sum([0.5, 1.5]) - 2.0 - >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32) - 1 - >>> np.sum([[0, 1], [0, 5]]) - 6 - >>> np.sum([[0, 1], [0, 5]], axis=0) - array([0, 6]) - >>> np.sum([[0, 1], [0, 5]], axis=1) - array([1, 5]) - - If the accumulator is too small, overflow occurs: - - >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8) - -128 - - """ - if isinstance(a, _gentype): - res = _sum_(a) - if out is not None: - out[...] = res - return out - return res - try: - sum = a.sum - except AttributeError: - return _wrapit(a, 'sum', axis, dtype, out) - return sum(axis, dtype, out) - - -def product (a, axis=None, dtype=None, out=None): - """ - Return the product of array elements over a given axis. - - See Also - -------- - prod : equivalent function; see for details. - - """ - try: - prod = a.prod - except AttributeError: - return _wrapit(a, 'prod', axis, dtype, out) - return prod(axis, dtype, out) - - -def sometrue(a, axis=None, out=None): - """ - Check whether some values are true. - - Refer to `any` for full documentation. 
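The modular-arithmetic note in the `sum` docstring above (integer overflow is silent) can be demonstrated with a sketch like this one, added for illustration:

```python
import numpy as np

small = np.ones(128, dtype=np.int8)
wrapped = small.sum(dtype=np.int8)   # 128 does not fit in int8: wraps to -128
widened = small.sum(dtype=np.int32)  # a wider accumulator gives the true sum
```

Choosing a wider `dtype` for the accumulator, as the Parameters section describes, is the standard way to avoid the wrap-around.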
- - See Also - -------- - any : equivalent function - - """ - try: - any = a.any - except AttributeError: - return _wrapit(a, 'any', axis, out) - return any(axis, out) - - -def alltrue (a, axis=None, out=None): - """ - Check if all elements of input array are true. - - See Also - -------- - numpy.all : Equivalent function; see for details. - - """ - try: - all = a.all - except AttributeError: - return _wrapit(a, 'all', axis, out) - return all(axis, out) - - -def any(a,axis=None, out=None): - """ - Test whether any array element along a given axis evaluates to True. - - Returns single boolean unless `axis` is not ``None`` - - Parameters - ---------- - a : array_like - Input array or object that can be converted to an array. - axis : int, optional - Axis along which a logical OR is performed. The default - (`axis` = `None`) is to perform a logical OR over a flattened - input array. `axis` may be negative, in which case it counts - from the last to the first axis. - out : ndarray, optional - Alternate output array in which to place the result. It must have - the same shape as the expected output and its type is preserved - (e.g., if it is of type float, then it will remain so, returning - 1.0 for True and 0.0 for False, regardless of the type of `a`). - See `doc.ufuncs` (Section "Output arguments") for details. - - Returns - ------- - any : bool or ndarray - A new boolean or `ndarray` is returned unless `out` is specified, - in which case a reference to `out` is returned. - - See Also - -------- - ndarray.any : equivalent method - - all : Test whether all elements along a given axis evaluate to True. - - Notes - ----- - Not a Number (NaN), positive infinity and negative infinity evaluate - to `True` because these are not equal to zero. 
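The NaN/infinity truthiness described in the `any` notes above can be verified with a short sketch (illustrative, not from the original file):

```python
import numpy as np

nan_any = bool(np.any([np.nan]))         # NaN is non-zero, hence counts as True
inf_any = bool(np.any([0.0, np.inf]))    # infinities are non-zero too
zero_any = bool(np.any([0, 0.0, False])) # only exact zeros give False
```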
- - Examples - -------- - >>> np.any([[True, False], [True, True]]) - True - - >>> np.any([[True, False], [False, False]], axis=0) - array([ True, False], dtype=bool) - - >>> np.any([-1, 0, 5]) - True - - >>> np.any(np.nan) - True - - >>> o=np.array([False]) - >>> z=np.any([-1, 4, 5], out=o) - >>> z, o - (array([ True], dtype=bool), array([ True], dtype=bool)) - >>> # Check now that z is a reference to o - >>> z is o - True - >>> id(z), id(o) # identity of z and o # doctest: +SKIP - (191614240, 191614240) - - """ - try: - any = a.any - except AttributeError: - return _wrapit(a, 'any', axis, out) - return any(axis, out) - - -def all(a,axis=None, out=None): - """ - Test whether all array elements along a given axis evaluate to True. - - Parameters - ---------- - a : array_like - Input array or object that can be converted to an array. - axis : int, optional - Axis along which a logical AND is performed. - The default (`axis` = `None`) is to perform a logical AND - over a flattened input array. `axis` may be negative, in which - case it counts from the last to the first axis. - out : ndarray, optional - Alternate output array in which to place the result. - It must have the same shape as the expected output and its - type is preserved (e.g., if ``dtype(out)`` is float, the result - will consist of 0.0's and 1.0's). See `doc.ufuncs` (Section - "Output arguments") for more details. - - Returns - ------- - all : ndarray, bool - A new boolean or array is returned unless `out` is specified, - in which case a reference to `out` is returned. - - See Also - -------- - ndarray.all : equivalent method - - any : Test whether any element along a given axis evaluates to True. - - Notes - ----- - Not a Number (NaN), positive infinity and negative infinity - evaluate to `True` because these are not equal to zero. 
- - Examples - -------- - >>> np.all([[True,False],[True,True]]) - False - - >>> np.all([[True,False],[True,True]], axis=0) - array([ True, False], dtype=bool) - - >>> np.all([-1, 4, 5]) - True - - >>> np.all([1.0, np.nan]) - True - - >>> o=np.array([False]) - >>> z=np.all([-1, 4, 5], out=o) - >>> id(z), id(o), z # doctest: +SKIP - (28293632, 28293632, array([ True], dtype=bool)) - - """ - try: - all = a.all - except AttributeError: - return _wrapit(a, 'all', axis, out) - return all(axis, out) - - -def cumsum (a, axis=None, dtype=None, out=None): - """ - Return the cumulative sum of the elements along a given axis. - - Parameters - ---------- - a : array_like - Input array. - axis : int, optional - Axis along which the cumulative sum is computed. The default - (None) is to compute the cumsum over the flattened array. - dtype : dtype, optional - Type of the returned array and of the accumulator in which the - elements are summed. If `dtype` is not specified, it defaults - to the dtype of `a`, unless `a` has an integer dtype with a - precision less than that of the default platform integer. In - that case, the default platform integer is used. - out : ndarray, optional - Alternative output array in which to place the result. It must - have the same shape and buffer length as the expected output - but the type will be cast if necessary. See `doc.ufuncs` - (Section "Output arguments") for more details. - - Returns - ------- - cumsum_along_axis : ndarray. - A new array holding the result is returned unless `out` is - specified, in which case a reference to `out` is returned. The - result has the same size as `a`, and the same shape as `a` if - `axis` is not None or `a` is a 1-d array. - - - See Also - -------- - sum : Sum array elements. - - trapz : Integration of array values using the composite trapezoidal rule. - - Notes - ----- - Arithmetic is modular when using integer types, and no error is - raised on overflow. 
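As a quick illustrative sketch (not part of the deleted source), the running-total semantics of `cumsum` means its last entry always equals the plain `sum`:

```python
import numpy as np

a = np.array([2, 3, 4])
running = np.cumsum(a)   # partial sums: [2, 2+3, 2+3+4]
```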
- - Examples - -------- - >>> a = np.array([[1,2,3], [4,5,6]]) - >>> a - array([[1, 2, 3], - [4, 5, 6]]) - >>> np.cumsum(a) - array([ 1, 3, 6, 10, 15, 21]) - >>> np.cumsum(a, dtype=float) # specifies type of output value(s) - array([ 1., 3., 6., 10., 15., 21.]) - - >>> np.cumsum(a,axis=0) # sum over rows for each of the 3 columns - array([[1, 2, 3], - [5, 7, 9]]) - >>> np.cumsum(a,axis=1) # sum over columns for each of the 2 rows - array([[ 1, 3, 6], - [ 4, 9, 15]]) - - """ - try: - cumsum = a.cumsum - except AttributeError: - return _wrapit(a, 'cumsum', axis, dtype, out) - return cumsum(axis, dtype, out) - - -def cumproduct(a, axis=None, dtype=None, out=None): - """ - Return the cumulative product over the given axis. - - - See Also - -------- - cumprod : equivalent function; see for details. - - """ - try: - cumprod = a.cumprod - except AttributeError: - return _wrapit(a, 'cumprod', axis, dtype, out) - return cumprod(axis, dtype, out) - - -def ptp(a, axis=None, out=None): - """ - Range of values (maximum - minimum) along an axis. - - The name of the function comes from the acronym for 'peak to peak'. - - Parameters - ---------- - a : array_like - Input values. - axis : int, optional - Axis along which to find the peaks. By default, flatten the - array. - out : array_like - Alternative output array in which to place the result. It must - have the same shape and buffer length as the expected output, - but the type of the output values will be cast if necessary. - - Returns - ------- - ptp : ndarray - A new array holding the result, unless `out` was - specified, in which case a reference to `out` is returned. 
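The 'peak to peak' definition above amounts to `amax - amin`; a small sketch added for illustration:

```python
import numpy as np

x = np.array([[4, 9, 2],
              [3, 5, 7]])
spread = np.ptp(x, axis=0)                        # per-column max - min
manual = np.amax(x, axis=0) - np.amin(x, axis=0)  # the definition, spelled out
```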
- - Examples - -------- - >>> x = np.arange(4).reshape((2,2)) - >>> x - array([[0, 1], - [2, 3]]) - - >>> np.ptp(x, axis=0) - array([2, 2]) - - >>> np.ptp(x, axis=1) - array([1, 1]) - - """ - try: - ptp = a.ptp - except AttributeError: - return _wrapit(a, 'ptp', axis, out) - return ptp(axis, out) - - -def amax(a, axis=None, out=None): - """ - Return the maximum of an array or maximum along an axis. - - Parameters - ---------- - a : array_like - Input data. - axis : int, optional - Axis along which to operate. By default flattened input is used. - out : ndarray, optional - Alternate output array in which to place the result. Must be of - the same shape and buffer length as the expected output. See - `doc.ufuncs` (Section "Output arguments") for more details. - - Returns - ------- - amax : ndarray - A new array or scalar array with the result. - - See Also - -------- - nanmax : NaN values are ignored instead of being propagated. - fmax : same behavior as the C99 fmax function. - argmax : indices of the maximum values. - - Notes - ----- - NaN values are propagated, that is if at least one item is NaN, the - corresponding max value will be NaN as well. To ignore NaN values - (MATLAB behavior), please use nanmax. - - Examples - -------- - >>> a = np.arange(4).reshape((2,2)) - >>> a - array([[0, 1], - [2, 3]]) - >>> np.amax(a) - 3 - >>> np.amax(a, axis=0) - array([2, 3]) - >>> np.amax(a, axis=1) - array([1, 3]) - - >>> b = np.arange(5, dtype=np.float) - >>> b[2] = np.NaN - >>> np.amax(b) - nan - >>> np.nanmax(b) - 4.0 - - """ - try: - amax = a.max - except AttributeError: - return _wrapit(a, 'max', axis, out) - return amax(axis, out) - - -def amin(a, axis=None, out=None): - """ - Return the minimum of an array or minimum along an axis. - - Parameters - ---------- - a : array_like - Input data. - axis : int, optional - Axis along which to operate. By default a flattened input is used. - out : ndarray, optional - Alternative output array in which to place the result. 
Must - be of the same shape and buffer length as the expected output. - See `doc.ufuncs` (Section "Output arguments") for more details. - - Returns - ------- - amin : ndarray - A new array or a scalar array with the result. - - See Also - -------- - nanmin: nan values are ignored instead of being propagated - fmin: same behavior as the C99 fmin function - argmin: Return the indices of the minimum values. - - amax, nanmax, fmax - - Notes - ----- - NaN values are propagated, that is if at least one item is nan, the - corresponding min value will be nan as well. To ignore NaN values (matlab - behavior), please use nanmin. - - Examples - -------- - >>> a = np.arange(4).reshape((2,2)) - >>> a - array([[0, 1], - [2, 3]]) - >>> np.amin(a) # Minimum of the flattened array - 0 - >>> np.amin(a, axis=0) # Minima along the first axis - array([0, 1]) - >>> np.amin(a, axis=1) # Minima along the second axis - array([0, 2]) - - >>> b = np.arange(5, dtype=np.float) - >>> b[2] = np.NaN - >>> np.amin(b) - nan - >>> np.nanmin(b) - 0.0 - - """ - try: - amin = a.min - except AttributeError: - return _wrapit(a, 'min', axis, out) - return amin(axis, out) - - -def alen(a): - """ - Return the length of the first dimension of the input array. - - Parameters - ---------- - a : array_like - Input array. - - Returns - ------- - l : int - Length of the first dimension of `a`. - - See Also - -------- - shape, size - - Examples - -------- - >>> a = np.zeros((7,4,5)) - >>> a.shape[0] - 7 - >>> np.alen(a) - 7 - - """ - try: - return len(a) - except TypeError: - return len(array(a,ndmin=1)) - - -def prod(a, axis=None, dtype=None, out=None): - """ - Return the product of array elements over a given axis. - - Parameters - ---------- - a : array_like - Input data. - axis : int, optional - Axis over which the product is taken. By default, the product - of all elements is calculated. 
- dtype : data-type, optional - The data-type of the returned array, as well as of the accumulator - in which the elements are multiplied. By default, if `a` is of - integer type, `dtype` is the default platform integer. (Note: if - the type of `a` is unsigned, then so is `dtype`.) Otherwise, - the dtype is the same as that of `a`. - out : ndarray, optional - Alternative output array in which to place the result. It must have - the same shape as the expected output, but the type of the - output values will be cast if necessary. - - Returns - ------- - product_along_axis : ndarray, see `dtype` parameter above. - An array shaped as `a` but with the specified axis removed. - Returns a reference to `out` if specified. - - See Also - -------- - ndarray.prod : equivalent method - numpy.doc.ufuncs : Section "Output arguments" - - Notes - ----- - Arithmetic is modular when using integer types, and no error is - raised on overflow. That means that, on a 32-bit platform: - - >>> x = np.array([536870910, 536870910, 536870910, 536870910]) - >>> np.prod(x) #random - 16 - - Examples - -------- - By default, calculate the product of all elements: - - >>> np.prod([1.,2.]) - 2.0 - - Even when the input array is two-dimensional: - - >>> np.prod([[1.,2.],[3.,4.]]) - 24.0 - - But we can also specify the axis over which to multiply: - - >>> np.prod([[1.,2.],[3.,4.]], axis=1) - array([ 2., 12.]) - - If the type of `x` is unsigned, then the output type is - the unsigned platform integer: - - >>> x = np.array([1, 2, 3], dtype=np.uint8) - >>> np.prod(x).dtype == np.uint - True - - If `x` is of a signed integer type, then the output type - is the default platform integer: - - >>> x = np.array([1, 2, 3], dtype=np.int8) - >>> np.prod(x).dtype == np.int - True - - """ - try: - prod = a.prod - except AttributeError: - return _wrapit(a, 'prod', axis, dtype, out) - return prod(axis, dtype, out) - - -def cumprod(a, axis=None, dtype=None, out=None): - """ - Return the cumulative product of elements 
along a given axis. - - Parameters - ---------- - a : array_like - Input array. - axis : int, optional - Axis along which the cumulative product is computed. By default - the input is flattened. - dtype : dtype, optional - Type of the returned array, as well as of the accumulator in which - the elements are multiplied. If *dtype* is not specified, it - defaults to the dtype of `a`, unless `a` has an integer dtype with - a precision less than that of the default platform integer. In - that case, the default platform integer is used instead. - out : ndarray, optional - Alternative output array in which to place the result. It must - have the same shape and buffer length as the expected output - but the type of the resulting values will be cast if necessary. - - Returns - ------- - cumprod : ndarray - A new array holding the result is returned unless `out` is - specified, in which case a reference to out is returned. - - See Also - -------- - numpy.doc.ufuncs : Section "Output arguments" - - Notes - ----- - Arithmetic is modular when using integer types, and no error is - raised on overflow. - - Examples - -------- - >>> a = np.array([1,2,3]) - >>> np.cumprod(a) # intermediate results 1, 1*2 - ... # total product 1*2*3 = 6 - array([1, 2, 6]) - >>> a = np.array([[1, 2, 3], [4, 5, 6]]) - >>> np.cumprod(a, dtype=float) # specify type of output - array([ 1., 2., 6., 24., 120., 720.]) - - The cumulative product for each column (i.e., over the rows) of `a`: - - >>> np.cumprod(a, axis=0) - array([[ 1, 2, 3], - [ 4, 10, 18]]) - - The cumulative product for each row (i.e. over the columns) of `a`: - - >>> np.cumprod(a,axis=1) - array([[ 1, 2, 6], - [ 4, 20, 120]]) - - """ - try: - cumprod = a.cumprod - except AttributeError: - return _wrapit(a, 'cumprod', axis, dtype, out) - return cumprod(axis, dtype, out) - - -def ndim(a): - """ - Return the number of dimensions of an array. - - Parameters - ---------- - a : array_like - Input array. 
If it is not already an ndarray, a conversion is - attempted. - - Returns - ------- - number_of_dimensions : int - The number of dimensions in `a`. Scalars are zero-dimensional. - - See Also - -------- - ndarray.ndim : equivalent method - shape : dimensions of array - ndarray.shape : dimensions of array - - Examples - -------- - >>> np.ndim([[1,2,3],[4,5,6]]) - 2 - >>> np.ndim(np.array([[1,2,3],[4,5,6]])) - 2 - >>> np.ndim(1) - 0 - - """ - try: - return a.ndim - except AttributeError: - return asarray(a).ndim - - -def rank(a): - """ - Return the number of dimensions of an array. - - If `a` is not already an array, a conversion is attempted. - Scalars are zero dimensional. - - Parameters - ---------- - a : array_like - Array whose number of dimensions is desired. If `a` is not an array, - a conversion is attempted. - - Returns - ------- - number_of_dimensions : int - The number of dimensions in the array. - - See Also - -------- - ndim : equivalent function - ndarray.ndim : equivalent property - shape : dimensions of array - ndarray.shape : dimensions of array - - Notes - ----- - In the old Numeric package, `rank` was the term used for the number of - dimensions, but in Numpy `ndim` is used instead. - - Examples - -------- - >>> np.rank([1,2,3]) - 1 - >>> np.rank(np.array([[1,2,3],[4,5,6]])) - 2 - >>> np.rank(1) - 0 - - """ - try: - return a.ndim - except AttributeError: - return asarray(a).ndim - - -def size(a, axis=None): - """ - Return the number of elements along a given axis. - - Parameters - ---------- - a : array_like - Input data. - axis : int, optional - Axis along which the elements are counted. By default, give - the total number of elements. - - Returns - ------- - element_count : int - Number of elements along the specified axis. 
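A brief sketch (illustrative, not from the original file) of the `size` behaviour just described: with no axis the total element count is returned, and with an axis it matches the corresponding `shape` entry:

```python
import numpy as np

a = np.zeros((2, 3, 4))
total = np.size(a)     # number of elements in the whole array
along = np.size(a, 1)  # same as a.shape[1]
```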
- - See Also - -------- - shape : dimensions of array - ndarray.shape : dimensions of array - ndarray.size : number of elements in array - - Examples - -------- - >>> a = np.array([[1,2,3],[4,5,6]]) - >>> np.size(a) - 6 - >>> np.size(a,1) - 3 - >>> np.size(a,0) - 2 - - """ - if axis is None: - try: - return a.size - except AttributeError: - return asarray(a).size - else: - try: - return a.shape[axis] - except AttributeError: - return asarray(a).shape[axis] - - -def around(a, decimals=0, out=None): - """ - Evenly round to the given number of decimals. - - Parameters - ---------- - a : array_like - Input data. - decimals : int, optional - Number of decimal places to round to (default: 0). If - decimals is negative, it specifies the number of positions to - the left of the decimal point. - out : ndarray, optional - Alternative output array in which to place the result. It must have - the same shape as the expected output, but the type of the output - values will be cast if necessary. See `doc.ufuncs` (Section - "Output arguments") for details. - - Returns - ------- - rounded_array : ndarray - An array of the same type as `a`, containing the rounded values. - Unless `out` was specified, a new array is created. A reference to - the result is returned. - - The real and imaginary parts of complex numbers are rounded - separately. The result of rounding a float is a float. - - See Also - -------- - ndarray.round : equivalent method - - ceil, fix, floor, rint, trunc - - - Notes - ----- - For values exactly halfway between rounded decimal values, Numpy - rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, - -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due - to the inexact representation of decimal fractions in the IEEE - floating point standard [1]_ and errors introduced when scaling - by powers of ten. - - References - ---------- - .. 
[1] "Lecture Notes on the Status of IEEE 754", William Kahan, - http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF - .. [2] "How Futile are Mindless Assessments of - Roundoff in Floating-Point Computation?", William Kahan, - http://www.cs.berkeley.edu/~wkahan/Mindless.pdf - - Examples - -------- - >>> np.around([0.37, 1.64]) - array([ 0., 2.]) - >>> np.around([0.37, 1.64], decimals=1) - array([ 0.4, 1.6]) - >>> np.around([.5, 1.5, 2.5, 3.5, 4.5]) # rounds to nearest even value - array([ 0., 2., 2., 4., 4.]) - >>> np.around([1,2,3,11], decimals=1) # ndarray of ints is returned - array([ 1, 2, 3, 11]) - >>> np.around([1,2,3,11], decimals=-1) - array([ 0, 0, 0, 10]) - - """ - try: - round = a.round - except AttributeError: - return _wrapit(a, 'round', decimals, out) - return round(decimals, out) - - -def round_(a, decimals=0, out=None): - """ - Round an array to the given number of decimals. - - Refer to `around` for full documentation. - - See Also - -------- - around : equivalent function - - """ - try: - round = a.round - except AttributeError: - return _wrapit(a, 'round', decimals, out) - return round(decimals, out) - - -def mean(a, axis=None, dtype=None, out=None): - """ - Compute the arithmetic mean along the specified axis. - - Returns the average of the array elements. The average is taken over - the flattened array by default, otherwise over the specified axis. - `float64` intermediate and return values are used for integer inputs. - - Parameters - ---------- - a : array_like - Array containing numbers whose mean is desired. If `a` is not an - array, a conversion is attempted. - axis : int, optional - Axis along which the means are computed. The default is to compute - the mean of the flattened array. - dtype : data-type, optional - Type to use in computing the mean. For integer inputs, the default - is `float64`; for floating point inputs, it is the same as the - input dtype. 
- out : ndarray, optional - Alternate output array in which to place the result. The default - is ``None``; if provided, it must have the same shape as the - expected output, but the type will be cast if necessary. - See `doc.ufuncs` for details. - - Returns - ------- - m : ndarray, see dtype parameter above - If `out=None`, returns a new array containing the mean values, - otherwise a reference to the output array is returned. - - See Also - -------- - average : Weighted average - - Notes - ----- - The arithmetic mean is the sum of the elements along the axis divided - by the number of elements. - - Note that for floating-point input, the mean is computed using the - same precision the input has. Depending on the input data, this can - cause the results to be inaccurate, especially for `float32` (see - example below). Specifying a higher-precision accumulator using the - `dtype` keyword can alleviate this issue. - - Examples - -------- - >>> a = np.array([[1, 2], [3, 4]]) - >>> np.mean(a) - 2.5 - >>> np.mean(a, axis=0) - array([ 2., 3.]) - >>> np.mean(a, axis=1) - array([ 1.5, 3.5]) - - In single precision, `mean` can be inaccurate: - - >>> a = np.zeros((2, 512*512), dtype=np.float32) - >>> a[0, :] = 1.0 - >>> a[1, :] = 0.1 - >>> np.mean(a) - 0.546875 - - Computing the mean in float64 is more accurate: - - >>> np.mean(a, dtype=np.float64) - 0.55000000074505806 - - """ - try: - mean = a.mean - except AttributeError: - return _wrapit(a, 'mean', axis, dtype, out) - return mean(axis, dtype, out) - - -def std(a, axis=None, dtype=None, out=None, ddof=0): - """ - Compute the standard deviation along the specified axis. - - Returns the standard deviation, a measure of the spread of a distribution, - of the array elements. The standard deviation is computed for the - flattened array by default, otherwise over the specified axis. - - Parameters - ---------- - a : array_like - Calculate the standard deviation of these values. 
- axis : int, optional - Axis along which the standard deviation is computed. The default is - to compute the standard deviation of the flattened array. - dtype : dtype, optional - Type to use in computing the standard deviation. For arrays of - integer type the default is float64, for arrays of float types it is - the same as the array type. - out : ndarray, optional - Alternative output array in which to place the result. It must have - the same shape as the expected output but the type (of the calculated - values) will be cast if necessary. - ddof : int, optional - "Delta Degrees of Freedom": the divisor used in calculations - is ``N - ddof``, where ``N`` represents the number of elements. - By default `ddof` is zero. - - Returns - ------- - standard_deviation : ndarray, see dtype parameter above. - If `out` is None, return a new array containing the standard deviation, - otherwise return a reference to the output array. - - See Also - -------- - var, mean - numpy.doc.ufuncs : Section "Output arguments" - - Notes - ----- - The standard deviation is the square root of the average of the squared - deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``. - - The average squared deviation is normally calculated as ``x.sum() / N``, where - ``N = len(x)``. If, however, `ddof` is specified, the divisor ``N - ddof`` - is used instead. In standard statistical practice, ``ddof=1`` provides an - unbiased estimator of the variance of the infinite population. ``ddof=0`` - provides a maximum likelihood estimate of the variance for normally - distributed variables. The standard deviation computed in this function - is the square root of the estimated variance, so even with ``ddof=1``, it - will not be an unbiased estimate of the standard deviation per se. - - Note that, for complex numbers, `std` takes the absolute - value before squaring, so that the result is always real and nonnegative. 
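The ``N - ddof`` divisor described in the `ddof` notes above can be sketched in pure Python (an illustrative stand-in for the array method, not the NumPy implementation):

```python
import math

def std(values, ddof=0):
    # Mean of the input.
    n = len(values)
    m = sum(values) / n
    # Squared deviations are averaged with the N - ddof divisor,
    # then the square root is taken.
    return math.sqrt(sum(abs(x - m) ** 2 for x in values) / (n - ddof))

std([1.0, 2.0, 3.0, 4.0])           # ddof=0: maximum-likelihood form, divisor N
std([1.0, 2.0, 3.0, 4.0], ddof=1)   # ddof=1: divisor N - 1
```

With ``ddof=1`` this sketch returns the square root of the unbiased variance estimate; as the notes say, that is still not an unbiased estimate of the standard deviation itself.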
- - For floating-point input, the *std* is computed using the same - precision the input has. Depending on the input data, this can cause - the results to be inaccurate, especially for float32 (see example below). - Specifying a higher-accuracy accumulator using the `dtype` keyword can - alleviate this issue. - - Examples - -------- - >>> a = np.array([[1, 2], [3, 4]]) - >>> np.std(a) - 1.1180339887498949 - >>> np.std(a, axis=0) - array([ 1., 1.]) - >>> np.std(a, axis=1) - array([ 0.5, 0.5]) - - In single precision, std() can be inaccurate: - - >>> a = np.zeros((2,512*512), dtype=np.float32) - >>> a[0,:] = 1.0 - >>> a[1,:] = 0.1 - >>> np.std(a) - 0.45172946707416706 - - Computing the standard deviation in float64 is more accurate: - - >>> np.std(a, dtype=np.float64) - 0.44999999925552653 - - """ - try: - std = a.std - except AttributeError: - return _wrapit(a, 'std', axis, dtype, out, ddof) - return std(axis, dtype, out, ddof) - - -def var(a, axis=None, dtype=None, out=None, ddof=0): - """ - Compute the variance along the specified axis. - - Returns the variance of the array elements, a measure of the spread of a - distribution. The variance is computed for the flattened array by - default, otherwise over the specified axis. - - Parameters - ---------- - a : array_like - Array containing numbers whose variance is desired. If `a` is not an - array, a conversion is attempted. - axis : int, optional - Axis along which the variance is computed. The default is to compute - the variance of the flattened array. - dtype : data-type, optional - Type to use in computing the variance. For arrays of integer type - the default is `float64`; for arrays of float types it is the same as - the array type. - out : ndarray, optional - Alternate output array in which to place the result. It must have - the same shape as the expected output, but the type is cast if - necessary. 
- ddof : int, optional - "Delta Degrees of Freedom": the divisor used in the calculation is - ``N - ddof``, where ``N`` represents the number of elements. By - default `ddof` is zero. - - Returns - ------- - variance : ndarray, see dtype parameter above - If ``out=None``, returns a new array containing the variance; - otherwise, a reference to the output array is returned. - - See Also - -------- - std : Standard deviation - mean : Average - numpy.doc.ufuncs : Section "Output arguments" - - Notes - ----- - The variance is the average of the squared deviations from the mean, - i.e., ``var = mean(abs(x - x.mean())**2)``. - - The mean is normally calculated as ``x.sum() / N``, where ``N = len(x)``. - If, however, `ddof` is specified, the divisor ``N - ddof`` is used - instead. In standard statistical practice, ``ddof=1`` provides an - unbiased estimator of the variance of a hypothetical infinite population. - ``ddof=0`` provides a maximum likelihood estimate of the variance for - normally distributed variables. - - Note that for complex numbers, the absolute value is taken before - squaring, so that the result is always real and nonnegative. - - For floating-point input, the variance is computed using the same - precision the input has. Depending on the input data, this can cause - the results to be inaccurate, especially for `float32` (see example - below). Specifying a higher-accuracy accumulator using the ``dtype`` - keyword can alleviate this issue. 
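The absolute-value rule noted above (which keeps the result real for complex input) is easy to see in a minimal pure-Python sketch of the same formula (illustrative only, not the array implementation):

```python
def var(values, ddof=0):
    n = len(values)
    m = sum(values) / n
    # abs() before squaring: complex deviations contribute their squared
    # magnitudes, so the result is always real and nonnegative.
    return sum(abs(x - m) ** 2 for x in values) / (n - ddof)

var([1.0, 2.0, 3.0, 4.0])   # 1.25
var([1 + 1j, 1 - 1j])       # 1.0 -- a plain float, despite complex input
```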
- - Examples - -------- - >>> a = np.array([[1,2],[3,4]]) - >>> np.var(a) - 1.25 - >>> np.var(a,0) - array([ 1., 1.]) - >>> np.var(a,1) - array([ 0.25, 0.25]) - - In single precision, var() can be inaccurate: - - >>> a = np.zeros((2,512*512), dtype=np.float32) - >>> a[0,:] = 1.0 - >>> a[1,:] = 0.1 - >>> np.var(a) - 0.20405951142311096 - - Computing the variance in float64 is more accurate: - - >>> np.var(a, dtype=np.float64) - 0.20249999932997387 - >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 - 0.20250000000000001 - - """ - try: - var = a.var - except AttributeError: - return _wrapit(a, 'var', axis, dtype, out, ddof) - return var(axis, dtype, out, ddof) diff --git a/pythonPackages/numpy/numpy/core/function_base.py b/pythonPackages/numpy/numpy/core/function_base.py deleted file mode 100755 index b2f9dc70cf..0000000000 --- a/pythonPackages/numpy/numpy/core/function_base.py +++ /dev/null @@ -1,167 +0,0 @@ -__all__ = ['logspace', 'linspace'] - -import numeric as _nx -from numeric import array - -def linspace(start, stop, num=50, endpoint=True, retstep=False): - """ - Return evenly spaced numbers over a specified interval. - - Returns `num` evenly spaced samples, calculated over the - interval [`start`, `stop`]. - - The endpoint of the interval can optionally be excluded. - - Parameters - ---------- - start : scalar - The starting value of the sequence. - stop : scalar - The end value of the sequence, unless `endpoint` is set to False. - In that case, the sequence consists of all but the last of ``num + 1`` - evenly spaced samples, so that `stop` is excluded. Note that the step - size changes when `endpoint` is False. - num : int, optional - Number of samples to generate. Default is 50. - endpoint : bool, optional - If True, `stop` is the last sample. Otherwise, it is not included. - Default is True. - retstep : bool, optional - If True, return (`samples`, `step`), where `step` is the spacing - between samples. 
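The `endpoint` and step behaviour described in the parameters above can be sketched in pure Python (a list-based stand-in for the array implementation):

```python
def linspace(start, stop, num=50, endpoint=True, retstep=False):
    num = int(num)
    if num <= 0:
        return []
    if endpoint and num == 1:
        return [float(start)]
    # endpoint=True divides [start, stop] into num - 1 gaps;
    # endpoint=False divides it into num gaps and omits stop.
    step = (stop - start) / float(num - 1 if endpoint else num)
    samples = [start + i * step for i in range(num)]
    if endpoint:
        samples[-1] = stop  # pin the last sample exactly, as the array code does
    return (samples, step) if retstep else samples

linspace(2.0, 3.0, num=5)                  # [2.0, 2.25, 2.5, 2.75, 3.0]
linspace(2.0, 3.0, num=5, endpoint=False)  # step 0.2, stop excluded
```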
- - Returns - ------- - samples : ndarray - There are `num` equally spaced samples in the closed interval - ``[start, stop]`` or the half-open interval ``[start, stop)`` - (depending on whether `endpoint` is True or False). - step : float (only if `retstep` is True) - Size of spacing between samples. - - - See Also - -------- - arange : Similar to `linspace`, but uses a step size (instead of the - number of samples). - logspace : Samples uniformly distributed in log space. - - Examples - -------- - >>> np.linspace(2.0, 3.0, num=5) - array([ 2. , 2.25, 2.5 , 2.75, 3. ]) - >>> np.linspace(2.0, 3.0, num=5, endpoint=False) - array([ 2. , 2.2, 2.4, 2.6, 2.8]) - >>> np.linspace(2.0, 3.0, num=5, retstep=True) - (array([ 2. , 2.25, 2.5 , 2.75, 3. ]), 0.25) - - Graphical illustration: - - >>> import matplotlib.pyplot as plt - >>> N = 8 - >>> y = np.zeros(N) - >>> x1 = np.linspace(0, 10, N, endpoint=True) - >>> x2 = np.linspace(0, 10, N, endpoint=False) - >>> plt.plot(x1, y, 'o') - [] - >>> plt.plot(x2, y + 0.5, 'o') - [] - >>> plt.ylim([-0.5, 1]) - (-0.5, 1) - >>> plt.show() - - """ - num = int(num) - if num <= 0: - return array([], float) - if endpoint: - if num == 1: - return array([float(start)]) - step = (stop-start)/float((num-1)) - y = _nx.arange(0, num) * step + start - y[-1] = stop - else: - step = (stop-start)/float(num) - y = _nx.arange(0, num) * step + start - if retstep: - return y, step - else: - return y - -def logspace(start,stop,num=50,endpoint=True,base=10.0): - """ - Return numbers spaced evenly on a log scale. - - In linear space, the sequence starts at ``base ** start`` - (`base` to the power of `start`) and ends with ``base ** stop`` - (see `endpoint` below). - - Parameters - ---------- - start : float - ``base ** start`` is the starting value of the sequence. - stop : float - ``base ** stop`` is the final value of the sequence, unless `endpoint` - is False. 
In that case, ``num + 1`` values are spaced over the - interval in log-space, of which all but the last (a sequence of - length ``num``) are returned. - num : integer, optional - Number of samples to generate. Default is 50. - endpoint : boolean, optional - If true, `stop` is the last sample. Otherwise, it is not included. - Default is True. - base : float, optional - The base of the log space. The step size between the elements in - ``ln(samples) / ln(base)`` (or ``log_base(samples)``) is uniform. - Default is 10.0. - - Returns - ------- - samples : ndarray - `num` samples, equally spaced on a log scale. - - See Also - -------- - arange : Similar to linspace, with the step size specified instead of the - number of samples. Note that, when used with a float endpoint, the - endpoint may or may not be included. - linspace : Similar to logspace, but with the samples uniformly distributed - in linear space, instead of log space. - - Notes - ----- - Logspace is equivalent to the code - - >>> y = np.linspace(start, stop, num=num, endpoint=endpoint) - ... # doctest: +SKIP - >>> power(base, y) - ... # doctest: +SKIP - - Examples - -------- - >>> np.logspace(2.0, 3.0, num=4) - array([ 100. , 215.443469 , 464.15888336, 1000. ]) - >>> np.logspace(2.0, 3.0, num=4, endpoint=False) - array([ 100. , 177.827941 , 316.22776602, 562.34132519]) - >>> np.logspace(2.0, 3.0, num=4, base=2.0) - array([ 4. , 5.0396842 , 6.34960421, 8. 
]) - - Graphical illustration: - - >>> import matplotlib.pyplot as plt - >>> N = 10 - >>> x1 = np.logspace(0.1, 1, N, endpoint=True) - >>> x2 = np.logspace(0.1, 1, N, endpoint=False) - >>> y = np.zeros(N) - >>> plt.plot(x1, y, 'o') - [] - >>> plt.plot(x2, y + 0.5, 'o') - [] - >>> plt.ylim([-0.5, 1]) - (-0.5, 1) - >>> plt.show() - - """ - y = linspace(start,stop,num=num,endpoint=endpoint) - return _nx.power(base,y) - diff --git a/pythonPackages/numpy/numpy/core/getlimits.py b/pythonPackages/numpy/numpy/core/getlimits.py deleted file mode 100755 index 4fa020e861..0000000000 --- a/pythonPackages/numpy/numpy/core/getlimits.py +++ /dev/null @@ -1,285 +0,0 @@ -""" Machine limits for Float32 and Float64 and (long double) if available... -""" - -__all__ = ['finfo','iinfo'] - -from machar import MachAr -import numeric -import numerictypes as ntypes -from numeric import array - -def _frz(a): - """fix rank-0 --> rank-1""" - if a.ndim == 0: a.shape = (1,) - return a - -_convert_to_float = { - ntypes.csingle: ntypes.single, - ntypes.complex_: ntypes.float_, - ntypes.clongfloat: ntypes.longfloat - } - -class finfo(object): - """ - finfo(dtype) - - Machine limits for floating point types. - - Attributes - ---------- - eps : floating point number of the appropriate type - The smallest representable number such that ``1.0 + eps != 1.0``. - epsneg : floating point number of the appropriate type - The smallest representable number such that ``1.0 - epsneg != 1.0``. - iexp : int - The number of bits in the exponent portion of the floating point - representation. - machar : MachAr - The object which calculated these parameters and holds more detailed - information. - machep : int - The exponent that yields ``eps``. - max : floating point number of the appropriate type - The largest representable number. - maxexp : int - The smallest positive power of the base (2) that causes overflow. 
- min : floating point number of the appropriate type - The smallest representable number, typically ``-max``. - minexp : int - The most negative power of the base (2) consistent with there being - no leading 0's in the mantissa. - negep : int - The exponent that yields ``epsneg``. - nexp : int - The number of bits in the exponent including its sign and bias. - nmant : int - The number of bits in the mantissa. - precision : int - The approximate number of decimal digits to which this kind of float - is precise. - resolution : floating point number of the appropriate type - The approximate decimal resolution of this type, i.e. - ``10**-precision``. - tiny : floating point number of the appropriate type - The smallest-magnitude usable number. - - Parameters - ---------- - dtype : floating point type, dtype, or instance - The kind of floating point data type to get information about. - - See Also - -------- - MachAr : The implementation of the tests that produce this information. - iinfo : The equivalent for integer data types. - - Notes - ----- - For developers of NumPy: do not instantiate this at the module level. The - initial calculation of these parameters is expensive and negatively impacts - import times. These objects are cached, so calling ``finfo()`` repeatedly - inside your functions is not a problem. 
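The caching advice in the notes above relies on ``finfo.__new__`` memoizing one instance per dtype; the pattern can be sketched with a hypothetical ``Info`` class (illustrative only, not the NumPy code):

```python
class Info(object):
    _cache = {}  # key -> instance, playing the role of _finfo_cache

    def __new__(cls, key):
        obj = cls._cache.get(key)
        if obj is not None:
            return obj          # cache hit: the expensive setup is skipped
        obj = object.__new__(cls)
        obj.key = key           # stands in for the costly parameter discovery
        cls._cache[key] = obj
        return obj

Info("float32") is Info("float32")   # True: repeated calls reuse one object
```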
- - """ - - _finfo_cache = {} - - def __new__(cls, dtype): - try: - dtype = numeric.dtype(dtype) - except TypeError: - # In case a float instance was given - dtype = numeric.dtype(type(dtype)) - - obj = cls._finfo_cache.get(dtype,None) - if obj is not None: - return obj - dtypes = [dtype] - newdtype = numeric.obj2sctype(dtype) - if newdtype is not dtype: - dtypes.append(newdtype) - dtype = newdtype - if not issubclass(dtype, numeric.inexact): - raise ValueError, "data type %r not inexact" % (dtype) - obj = cls._finfo_cache.get(dtype,None) - if obj is not None: - return obj - if not issubclass(dtype, numeric.floating): - newdtype = _convert_to_float[dtype] - if newdtype is not dtype: - dtypes.append(newdtype) - dtype = newdtype - obj = cls._finfo_cache.get(dtype,None) - if obj is not None: - return obj - obj = object.__new__(cls)._init(dtype) - for dt in dtypes: - cls._finfo_cache[dt] = obj - return obj - - def _init(self, dtype): - self.dtype = numeric.dtype(dtype) - if dtype is ntypes.double: - itype = ntypes.int64 - fmt = '%24.16e' - precname = 'double' - elif dtype is ntypes.single: - itype = ntypes.int32 - fmt = '%15.7e' - precname = 'single' - elif dtype is ntypes.longdouble: - itype = ntypes.longlong - fmt = '%s' - precname = 'long double' - else: - raise ValueError, repr(dtype) - - machar = MachAr(lambda v:array([v], dtype), - lambda v:_frz(v.astype(itype))[0], - lambda v:array(_frz(v)[0], dtype), - lambda v: fmt % array(_frz(v)[0], dtype), - 'numpy %s precision floating point number' % precname) - - for word in ['precision', 'iexp', - 'maxexp','minexp','negep', - 'machep']: - setattr(self,word,getattr(machar, word)) - for word in ['tiny','resolution','epsneg']: - setattr(self,word,getattr(machar, word).flat[0]) - self.max = machar.huge.flat[0] - self.min = -self.max - self.eps = machar.eps.flat[0] - self.nexp = machar.iexp - self.nmant = machar.it - self.machar = machar - self._str_tiny = machar._str_xmin.strip() - self._str_max = machar._str_xmax.strip() - 
self._str_epsneg = machar._str_epsneg.strip() - self._str_eps = machar._str_eps.strip() - self._str_resolution = machar._str_resolution.strip() - return self - - def __str__(self): - return '''\ -Machine parameters for %(dtype)s ---------------------------------------------------------------------- -precision=%(precision)3s resolution= %(_str_resolution)s -machep=%(machep)6s eps= %(_str_eps)s -negep =%(negep)6s epsneg= %(_str_epsneg)s -minexp=%(minexp)6s tiny= %(_str_tiny)s -maxexp=%(maxexp)6s max= %(_str_max)s -nexp =%(nexp)6s min= -max ---------------------------------------------------------------------- -''' % self.__dict__ - - -class iinfo: - """ - iinfo(type) - - Machine limits for integer types. - - Attributes - ---------- - min : int - The smallest integer expressible by the type. - max : int - The largest integer expressible by the type. - - Parameters - ---------- - type : integer type, dtype, or instance - The kind of integer data type to get information about. - - See Also - -------- - finfo : The equivalent for floating point data types. 
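The ``min``/``max`` properties below reduce to the standard two's-complement formulas; a pure-Python sketch (``int_limits`` is an illustrative helper, not part of the API):

```python
def int_limits(kind, bits):
    # kind 'u' = unsigned, 'i' = signed two's complement (dtype.kind codes)
    if kind == 'u':
        return 0, (1 << bits) - 1
    return -(1 << (bits - 1)), (1 << (bits - 1)) - 1

int_limits('i', 16)   # (-32768, 32767), matching the np.iinfo(np.int16) example
int_limits('u', 8)    # (0, 255)
```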
- - Examples - -------- - With types: - - >>> ii16 = np.iinfo(np.int16) - >>> ii16.min - -32768 - >>> ii16.max - 32767 - >>> ii32 = np.iinfo(np.int32) - >>> ii32.min - -2147483648 - >>> ii32.max - 2147483647 - - With instances: - - >>> ii32 = np.iinfo(np.int32(10)) - >>> ii32.min - -2147483648 - >>> ii32.max - 2147483647 - - """ - - _min_vals = {} - _max_vals = {} - - def __init__(self, int_type): - try: - self.dtype = numeric.dtype(int_type) - except TypeError: - self.dtype = numeric.dtype(type(int_type)) - self.kind = self.dtype.kind - self.bits = self.dtype.itemsize * 8 - self.key = "%s%d" % (self.kind, self.bits) - if not self.kind in 'iu': - raise ValueError("Invalid integer data type.") - - def min(self): - """Minimum value of given dtype.""" - if self.kind == 'u': - return 0 - else: - try: - val = iinfo._min_vals[self.key] - except KeyError: - val = int(-(1L << (self.bits-1))) - iinfo._min_vals[self.key] = val - return val - - min = property(min) - - def max(self): - """Maximum value of given dtype.""" - try: - val = iinfo._max_vals[self.key] - except KeyError: - if self.kind == 'u': - val = int((1L << self.bits) - 1) - else: - val = int((1L << (self.bits-1)) - 1) - iinfo._max_vals[self.key] = val - return val - - max = property(max) - - def __str__(self): - """String representation.""" - return '''\ -Machine parameters for %(dtype)s ---------------------------------------------------------------------- -min = %(min)s -max = %(max)s ---------------------------------------------------------------------- -''' % {'dtype': self.dtype, 'min': self.min, 'max': self.max} - - -if __name__ == '__main__': - f = finfo(ntypes.single) - print 'single epsilon:',f.eps - print 'single tiny:',f.tiny - f = finfo(ntypes.float) - print 'float epsilon:',f.eps - print 'float tiny:',f.tiny - f = finfo(ntypes.longfloat) - print 'longfloat epsilon:',f.eps - print 'longfloat tiny:',f.tiny diff --git a/pythonPackages/numpy/numpy/core/include/numpy/_neighborhood_iterator_imp.h 
b/pythonPackages/numpy/numpy/core/include/numpy/_neighborhood_iterator_imp.h deleted file mode 100755 index e8860cbc73..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/_neighborhood_iterator_imp.h +++ /dev/null @@ -1,90 +0,0 @@ -#ifndef _NPY_INCLUDE_NEIGHBORHOOD_IMP -#error You should not include this header directly -#endif -/* - * Private API (here for inline) - */ -static NPY_INLINE int -_PyArrayNeighborhoodIter_IncrCoord(PyArrayNeighborhoodIterObject* iter); - -/* - * Update to next item of the iterator - * - * Note: this simply increments the coordinates vector, last dimension - * incremented first, i.e., for dimension 3 - * ... - * -1, -1, -1 - * -1, -1, 0 - * -1, -1, 1 - * .... - * -1, 0, -1 - * -1, 0, 0 - * .... - * 0, -1, -1 - * 0, -1, 0 - * .... - */ -#define _UPDATE_COORD_ITER(c) \ - wb = iter->coordinates[c] < iter->bounds[c][1]; \ - if (wb) { \ - iter->coordinates[c] += 1; \ - return 0; \ - } \ - else { \ - iter->coordinates[c] = iter->bounds[c][0]; \ - } - -static NPY_INLINE int -_PyArrayNeighborhoodIter_IncrCoord(PyArrayNeighborhoodIterObject* iter) -{ - npy_intp i, wb; - - for (i = iter->nd - 1; i >= 0; --i) { - _UPDATE_COORD_ITER(i) - } - - return 0; -} - -/* - * Version optimized for 2d arrays, manual loop unrolling - */ -static NPY_INLINE int -_PyArrayNeighborhoodIter_IncrCoord2D(PyArrayNeighborhoodIterObject* iter) -{ - npy_intp wb; - - _UPDATE_COORD_ITER(1) - _UPDATE_COORD_ITER(0) - - return 0; -} -#undef _UPDATE_COORD_ITER - -/* - * Advance to the next neighbour - */ -static NPY_INLINE int -PyArrayNeighborhoodIter_Next(PyArrayNeighborhoodIterObject* iter) -{ - _PyArrayNeighborhoodIter_IncrCoord (iter); - iter->dataptr = iter->translate((PyArrayIterObject*)iter, iter->coordinates); - - return 0; -} - -/* - * Reset functions - */ -static NPY_INLINE int -PyArrayNeighborhoodIter_Reset(PyArrayNeighborhoodIterObject* iter) -{ - npy_intp i; - - for (i = 0; i < iter->nd; ++i) { - iter->coordinates[i] = iter->bounds[i][0]; - } - 
iter->dataptr = iter->translate((PyArrayIterObject*)iter, iter->coordinates); - - return 0; -} diff --git a/pythonPackages/numpy/numpy/core/include/numpy/_numpyconfig.h.in b/pythonPackages/numpy/numpy/core/include/numpy/_numpyconfig.h.in deleted file mode 100755 index 2cd389d44c..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/_numpyconfig.h.in +++ /dev/null @@ -1,49 +0,0 @@ -#ifndef _NPY_NUMPYCONFIG_H_ -#error this header should not be included directly, always include numpyconfig.h instead -#endif - -#define NPY_SIZEOF_SHORT @SIZEOF_SHORT@ -#define NPY_SIZEOF_INT @SIZEOF_INT@ -#define NPY_SIZEOF_LONG @SIZEOF_LONG@ -#define NPY_SIZEOF_FLOAT @SIZEOF_FLOAT@ -#define NPY_SIZEOF_DOUBLE @SIZEOF_DOUBLE@ -#define NPY_SIZEOF_LONGDOUBLE @SIZEOF_LONG_DOUBLE@ -#define NPY_SIZEOF_PY_INTPTR_T @SIZEOF_PY_INTPTR_T@ - -#define NPY_SIZEOF_COMPLEX_FLOAT @SIZEOF_COMPLEX_FLOAT@ -#define NPY_SIZEOF_COMPLEX_DOUBLE @SIZEOF_COMPLEX_DOUBLE@ -#define NPY_SIZEOF_COMPLEX_LONGDOUBLE @SIZEOF_COMPLEX_LONG_DOUBLE@ - -@DEFINE_NPY_HAVE_DECL_ISNAN@ -@DEFINE_NPY_HAVE_DECL_ISINF@ -@DEFINE_NPY_HAVE_DECL_ISFINITE@ -@DEFINE_NPY_HAVE_DECL_SIGNBIT@ - -@DEFINE_NPY_NO_SIGNAL@ -#define NPY_NO_SMP @NPY_NO_SMP@ - -/* XXX: this has really nothing to do in a config file... */ -#define NPY_MATHLIB @MATHLIB@ - -@DEFINE_NPY_SIZEOF_LONGLONG@ -@DEFINE_NPY_SIZEOF_PY_LONG_LONG@ - -@DEFINE_NPY_ENABLE_SEPARATE_COMPILATION@ -#define NPY_VISIBILITY_HIDDEN @VISIBILITY_HIDDEN@ - -@DEFINE_NPY_USE_C99_FORMATS@ -@DEFINE_NPY_HAVE_COMPLEX_DOUBLE@ -@DEFINE_NPY_HAVE_COMPLEX_FLOAT@ -@DEFINE_NPY_HAVE_COMPLEX_LONG_DOUBLE@ -@DEFINE_NPY_USE_C99_COMPLEX@ - -#define NPY_ABI_VERSION @NPY_ABI_VERSION@ -#define NPY_API_VERSION @NPY_API_VERSION@ - -@DEFINE_NPY_HAVE_ENDIAN_H@ - -/* Ugly, but we can't test this in a proper manner without requiring a C++ - * compiler at the configuration stage of numpy ? 
*/ -#ifndef __STDC_FORMAT_MACROS - #define __STDC_FORMAT_MACROS 1 -#endif diff --git a/pythonPackages/numpy/numpy/core/include/numpy/arrayobject.h b/pythonPackages/numpy/numpy/core/include/numpy/arrayobject.h deleted file mode 100755 index f64d2a6c3b..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/arrayobject.h +++ /dev/null @@ -1,21 +0,0 @@ - -/* This expects the following variables to be defined (besides - the usual ones from pyconfig.h - - SIZEOF_LONG_DOUBLE -- sizeof(long double) or sizeof(double) if no - long double is present on platform. - CHAR_BIT -- number of bits in a char (usually 8) - (should be in limits.h) - -*/ - -#ifndef Py_ARRAYOBJECT_H -#define Py_ARRAYOBJECT_H -#include "ndarrayobject.h" -#ifdef NPY_NO_PREFIX -#include "noprefix.h" -#endif - -#include "npy_interrupt.h" - -#endif diff --git a/pythonPackages/numpy/numpy/core/include/numpy/arrayscalars.h b/pythonPackages/numpy/numpy/core/include/numpy/arrayscalars.h deleted file mode 100755 index 61bf721d16..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/arrayscalars.h +++ /dev/null @@ -1,157 +0,0 @@ -#ifndef _NPY_ARRAYSCALARS_H_ -#define _NPY_ARRAYSCALARS_H_ - -#ifndef _MULTIARRAYMODULE -typedef struct { - PyObject_HEAD - npy_bool obval; -} PyBoolScalarObject; -#endif - - -typedef struct { - PyObject_HEAD - signed char obval; -} PyByteScalarObject; - - -typedef struct { - PyObject_HEAD - short obval; -} PyShortScalarObject; - - -typedef struct { - PyObject_HEAD - int obval; -} PyIntScalarObject; - - -typedef struct { - PyObject_HEAD - long obval; -} PyLongScalarObject; - - -typedef struct { - PyObject_HEAD - npy_longlong obval; -} PyLongLongScalarObject; - - -typedef struct { - PyObject_HEAD - unsigned char obval; -} PyUByteScalarObject; - - -typedef struct { - PyObject_HEAD - unsigned short obval; -} PyUShortScalarObject; - - -typedef struct { - PyObject_HEAD - unsigned int obval; -} PyUIntScalarObject; - - -typedef struct { - PyObject_HEAD - unsigned long obval; -} 
PyULongScalarObject; - - -typedef struct { - PyObject_HEAD - npy_ulonglong obval; -} PyULongLongScalarObject; - - -typedef struct { - PyObject_HEAD - float obval; -} PyFloatScalarObject; - - -typedef struct { - PyObject_HEAD - double obval; -} PyDoubleScalarObject; - - -typedef struct { - PyObject_HEAD - npy_longdouble obval; -} PyLongDoubleScalarObject; - - -typedef struct { - PyObject_HEAD - npy_cfloat obval; -} PyCFloatScalarObject; - - -typedef struct { - PyObject_HEAD - npy_cdouble obval; -} PyCDoubleScalarObject; - - -typedef struct { - PyObject_HEAD - npy_clongdouble obval; -} PyCLongDoubleScalarObject; - - -typedef struct { - PyObject_HEAD - PyObject * obval; -} PyObjectScalarObject; - - -typedef struct { - PyObject_HEAD - char obval; -} PyScalarObject; - -#define PyStringScalarObject PyStringObject -#define PyUnicodeScalarObject PyUnicodeObject - -typedef struct { - PyObject_VAR_HEAD - char *obval; - PyArray_Descr *descr; - int flags; - PyObject *base; -} PyVoidScalarObject; - -/* Macros - PyScalarObject - PyArrType_Type - are defined in ndarrayobject.h -*/ - -#define PyArrayScalar_False ((PyObject *)(&(_PyArrayScalar_BoolValues[0]))) -#define PyArrayScalar_True ((PyObject *)(&(_PyArrayScalar_BoolValues[1]))) -#define PyArrayScalar_FromLong(i) \ - ((PyObject *)(&(_PyArrayScalar_BoolValues[((i)!=0)]))) -#define PyArrayScalar_RETURN_BOOL_FROM_LONG(i) \ - return Py_INCREF(PyArrayScalar_FromLong(i)), \ - PyArrayScalar_FromLong(i) -#define PyArrayScalar_RETURN_FALSE \ - return Py_INCREF(PyArrayScalar_False), \ - PyArrayScalar_False -#define PyArrayScalar_RETURN_TRUE \ - return Py_INCREF(PyArrayScalar_True), \ - PyArrayScalar_True - -#define PyArrayScalar_New(cls) \ - Py##cls##ArrType_Type.tp_alloc(&Py##cls##ArrType_Type, 0) -#define PyArrayScalar_VAL(obj, cls) \ - ((Py##cls##ScalarObject *)obj)->obval -#define PyArrayScalar_ASSIGN(obj, cls, val) \ - PyArrayScalar_VAL(obj, cls) = val - -#endif diff --git 
a/pythonPackages/numpy/numpy/core/include/numpy/fenv/fenv.c b/pythonPackages/numpy/numpy/core/include/numpy/fenv/fenv.c deleted file mode 100644 index 9a8d1be100..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/fenv/fenv.c +++ /dev/null @@ -1,38 +0,0 @@ -/*- - * Copyright (c) 2004 David Schultz - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE - * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT - * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY - * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF - * SUCH DAMAGE. 
- * - * $FreeBSD$ - */ - -#include -#include "fenv.h" - -const fenv_t npy__fe_dfl_env = { - 0xffff0000, - 0xffff0000, - 0xffffffff, - { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff } -}; diff --git a/pythonPackages/numpy/numpy/core/include/numpy/fenv/fenv.h b/pythonPackages/numpy/numpy/core/include/numpy/fenv/fenv.h deleted file mode 100644 index 79a215fc3e..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/fenv/fenv.h +++ /dev/null @@ -1,224 +0,0 @@ -/*- - * Copyright (c) 2004 David Schultz - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE - * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT - * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY - * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF - * SUCH DAMAGE. 
- * - * $FreeBSD$ - */ - -#ifndef _FENV_H_ -#define _FENV_H_ - -#include -#include - -typedef struct { - __uint32_t __control; - __uint32_t __status; - __uint32_t __tag; - char __other[16]; -} fenv_t; - -typedef __uint16_t fexcept_t; - -/* Exception flags */ -#define FE_INVALID 0x01 -#define FE_DENORMAL 0x02 -#define FE_DIVBYZERO 0x04 -#define FE_OVERFLOW 0x08 -#define FE_UNDERFLOW 0x10 -#define FE_INEXACT 0x20 -#define FE_ALL_EXCEPT (FE_DIVBYZERO | FE_DENORMAL | FE_INEXACT | \ - FE_INVALID | FE_OVERFLOW | FE_UNDERFLOW) - -/* Rounding modes */ -#define FE_TONEAREST 0x0000 -#define FE_DOWNWARD 0x0400 -#define FE_UPWARD 0x0800 -#define FE_TOWARDZERO 0x0c00 -#define _ROUND_MASK (FE_TONEAREST | FE_DOWNWARD | \ - FE_UPWARD | FE_TOWARDZERO) - -__BEGIN_DECLS - -/* Default floating-point environment */ -extern const fenv_t npy__fe_dfl_env; -#define FE_DFL_ENV (&npy__fe_dfl_env) - -#define __fldcw(__cw) __asm __volatile("fldcw %0" : : "m" (__cw)) -#define __fldenv(__env) __asm __volatile("fldenv %0" : : "m" (__env)) -#define __fnclex() __asm __volatile("fnclex") -#define __fnstenv(__env) __asm __volatile("fnstenv %0" : "=m" (*(__env))) -#define __fnstcw(__cw) __asm __volatile("fnstcw %0" : "=m" (*(__cw))) -#define __fnstsw(__sw) __asm __volatile("fnstsw %0" : "=am" (*(__sw))) -#define __fwait() __asm __volatile("fwait") - -static __inline int -feclearexcept(int __excepts) -{ - fenv_t __env; - - if (__excepts == FE_ALL_EXCEPT) { - __fnclex(); - } else { - __fnstenv(&__env); - __env.__status &= ~__excepts; - __fldenv(__env); - } - return (0); -} - -static __inline int -fegetexceptflag(fexcept_t *__flagp, int __excepts) -{ - __uint16_t __status; - - __fnstsw(&__status); - *__flagp = __status & __excepts; - return (0); -} - -static __inline int -fesetexceptflag(const fexcept_t *__flagp, int __excepts) -{ - fenv_t __env; - - __fnstenv(&__env); - __env.__status &= ~__excepts; - __env.__status |= *__flagp & __excepts; - __fldenv(__env); - return (0); -} - -static __inline int 
-feraiseexcept(int __excepts) -{ - fexcept_t __ex = __excepts; - - fesetexceptflag(&__ex, __excepts); - __fwait(); - return (0); -} - -static __inline int -fetestexcept(int __excepts) -{ - __uint16_t __status; - - __fnstsw(&__status); - return (__status & __excepts); -} - -static __inline int -fegetround(void) -{ - int __control; - - __fnstcw(&__control); - return (__control & _ROUND_MASK); -} - -static __inline int -fesetround(int __round) -{ - int __control; - - if (__round & ~_ROUND_MASK) - return (-1); - __fnstcw(&__control); - __control &= ~_ROUND_MASK; - __control |= __round; - __fldcw(__control); - return (0); -} - -static __inline int -fegetenv(fenv_t *__envp) -{ - int __control; - - /* - * fnstenv masks all exceptions, so we need to save and - * restore the control word to avoid this side effect. - */ - __fnstcw(&__control); - __fnstenv(__envp); - __fldcw(__control); - return (0); -} - -static __inline int -feholdexcept(fenv_t *__envp) -{ - - __fnstenv(__envp); - __fnclex(); - return (0); -} - -static __inline int -fesetenv(const fenv_t *__envp) -{ - - __fldenv(*__envp); - return (0); -} - -static __inline int -feupdateenv(const fenv_t *__envp) -{ - __uint16_t __status; - - __fnstsw(&__status); - __fldenv(*__envp); - feraiseexcept(__status & FE_ALL_EXCEPT); - return (0); -} - -#if __BSD_VISIBLE - -static __inline int -fesetmask(int __mask) -{ - int __control; - - __fnstcw(&__control); - __mask = (__control | FE_ALL_EXCEPT) & ~__mask; - __fldcw(__mask); - return (~__control & FE_ALL_EXCEPT); -} - -static __inline int -fegetmask(void) -{ - int __control; - - __fnstcw(&__control); - return (~__control & FE_ALL_EXCEPT); -} - -#endif /* __BSD_VISIBLE */ - -__END_DECLS - -#endif /* !_FENV_H_ */ diff --git a/pythonPackages/numpy/numpy/core/include/numpy/ndarrayobject.h b/pythonPackages/numpy/numpy/core/include/numpy/ndarrayobject.h deleted file mode 100755 index 25beceab74..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/ndarrayobject.h +++ 
/dev/null @@ -1,233 +0,0 @@ -/* - * DON'T INCLUDE THIS DIRECTLY. - */ - -#ifndef NPY_NDARRAYOBJECT_H -#define NPY_NDARRAYOBJECT_H -#ifdef __cplusplus -#define CONFUSE_EMACS { -#define CONFUSE_EMACS2 } -extern "C" CONFUSE_EMACS -#undef CONFUSE_EMACS -#undef CONFUSE_EMACS2 -/* ... otherwise a semi-smart indenter (like emacs) tries to indent - everything when you're typing */ -#endif - -#include "ndarraytypes.h" - -/* Includes the "function" C-API -- these are all stored in a - list of pointers --- one for each file - The two lists are concatenated into one in multiarray. - - They are available as import_array() -*/ - -#include "__multiarray_api.h" - - -/* C-API that requires previous API to be defined */ - -#define PyArray_DescrCheck(op) (((PyObject*)(op))->ob_type==&PyArrayDescr_Type) - -#define PyArray_Check(op) PyObject_TypeCheck(op, &PyArray_Type) -#define PyArray_CheckExact(op) (((PyObject*)(op))->ob_type == &PyArray_Type) - -#define PyArray_HasArrayInterfaceType(op, type, context, out) \ - ((((out)=PyArray_FromStructInterface(op)) != Py_NotImplemented) || \ - (((out)=PyArray_FromInterface(op)) != Py_NotImplemented) || \ - (((out)=PyArray_FromArrayAttr(op, type, context)) != \ - Py_NotImplemented)) - -#define PyArray_HasArrayInterface(op, out) \ - PyArray_HasArrayInterfaceType(op, NULL, NULL, out) - -#define PyArray_IsZeroDim(op) (PyArray_Check(op) && (PyArray_NDIM(op) == 0)) - -#define PyArray_IsScalar(obj, cls) \ - (PyObject_TypeCheck(obj, &Py##cls##ArrType_Type)) - -#define PyArray_CheckScalar(m) (PyArray_IsScalar(m, Generic) || \ - PyArray_IsZeroDim(m)) - -#define PyArray_IsPythonNumber(obj) \ - (PyInt_Check(obj) || PyFloat_Check(obj) || PyComplex_Check(obj) || \ - PyLong_Check(obj) || PyBool_Check(obj)) - -#define PyArray_IsPythonScalar(obj) \ - (PyArray_IsPythonNumber(obj) || PyString_Check(obj) || \ - PyUnicode_Check(obj)) - -#define PyArray_IsAnyScalar(obj) \ - (PyArray_IsScalar(obj, Generic) || PyArray_IsPythonScalar(obj)) - -#define 
PyArray_CheckAnyScalar(obj) (PyArray_IsPythonScalar(obj) || \ - PyArray_CheckScalar(obj)) - -#define PyArray_IsIntegerScalar(obj) (PyInt_Check(obj) \ - || PyLong_Check(obj) \ - || PyArray_IsScalar((obj), Integer)) - - -#define PyArray_GETCONTIGUOUS(m) (PyArray_ISCONTIGUOUS(m) ? \ - Py_INCREF(m), (m) : \ - (PyArrayObject *)(PyArray_Copy(m))) - -#define PyArray_SAMESHAPE(a1,a2) ((PyArray_NDIM(a1) == PyArray_NDIM(a2)) && \ - PyArray_CompareLists(PyArray_DIMS(a1), \ - PyArray_DIMS(a2), \ - PyArray_NDIM(a1))) - -#define PyArray_SIZE(m) PyArray_MultiplyList(PyArray_DIMS(m), PyArray_NDIM(m)) -#define PyArray_NBYTES(m) (PyArray_ITEMSIZE(m) * PyArray_SIZE(m)) -#define PyArray_FROM_O(m) PyArray_FromAny(m, NULL, 0, 0, 0, NULL) - -#define PyArray_FROM_OF(m,flags) PyArray_CheckFromAny(m, NULL, 0, 0, flags, \ - NULL) - -#define PyArray_FROM_OT(m,type) PyArray_FromAny(m, \ - PyArray_DescrFromType(type), 0, 0, 0, NULL); - -#define PyArray_FROM_OTF(m, type, flags) \ - PyArray_FromAny(m, PyArray_DescrFromType(type), 0, 0, \ - (((flags) & NPY_ENSURECOPY) ? \ - ((flags) | NPY_DEFAULT) : (flags)), NULL) - -#define PyArray_FROMANY(m, type, min, max, flags) \ - PyArray_FromAny(m, PyArray_DescrFromType(type), min, max, \ - (((flags) & NPY_ENSURECOPY) ? 
\ - (flags) | NPY_DEFAULT : (flags)), NULL) - -#define PyArray_ZEROS(m, dims, type, fortran) \ - PyArray_Zeros(m, dims, PyArray_DescrFromType(type), fortran) - -#define PyArray_EMPTY(m, dims, type, fortran) \ - PyArray_Empty(m, dims, PyArray_DescrFromType(type), fortran) - -#define PyArray_FILLWBYTE(obj, val) memset(PyArray_DATA(obj), val, \ - PyArray_NBYTES(obj)) - -#define PyArray_REFCOUNT(obj) (((PyObject *)(obj))->ob_refcnt) -#define NPY_REFCOUNT PyArray_REFCOUNT -#define NPY_MAX_ELSIZE (2 * NPY_SIZEOF_LONGDOUBLE) - -#define PyArray_ContiguousFromAny(op, type, min_depth, max_depth) \ - PyArray_FromAny(op, PyArray_DescrFromType(type), min_depth, \ - max_depth, NPY_DEFAULT, NULL) - -#define PyArray_EquivArrTypes(a1, a2) \ - PyArray_EquivTypes(PyArray_DESCR(a1), PyArray_DESCR(a2)) - -#define PyArray_EquivByteorders(b1, b2) \ - (((b1) == (b2)) || (PyArray_ISNBO(b1) == PyArray_ISNBO(b2))) - -#define PyArray_SimpleNew(nd, dims, typenum) \ - PyArray_New(&PyArray_Type, nd, dims, typenum, NULL, NULL, 0, 0, NULL) - -#define PyArray_SimpleNewFromData(nd, dims, typenum, data) \ - PyArray_New(&PyArray_Type, nd, dims, typenum, NULL, \ - data, 0, NPY_CARRAY, NULL) - -#define PyArray_SimpleNewFromDescr(nd, dims, descr) \ - PyArray_NewFromDescr(&PyArray_Type, descr, nd, dims, \ - NULL, NULL, 0, NULL) - -#define PyArray_ToScalar(data, arr) \ - PyArray_Scalar(data, PyArray_DESCR(arr), (PyObject *)arr) - - -/* These might be faster without the dereferencing of obj - going on inside -- of course an optimizing compiler should - inline the constants inside a for loop making it a moot point -*/ - -#define PyArray_GETPTR1(obj, i) ((void *)(PyArray_BYTES(obj) + \ - (i)*PyArray_STRIDES(obj)[0])) - -#define PyArray_GETPTR2(obj, i, j) ((void *)(PyArray_BYTES(obj) + \ - (i)*PyArray_STRIDES(obj)[0] + \ - (j)*PyArray_STRIDES(obj)[1])) - -#define PyArray_GETPTR3(obj, i, j, k) ((void *)(PyArray_BYTES(obj) + \ - (i)*PyArray_STRIDES(obj)[0] + \ - (j)*PyArray_STRIDES(obj)[1] + \ - 
(k)*PyArray_STRIDES(obj)[2])) - -#define PyArray_GETPTR4(obj, i, j, k, l) ((void *)(PyArray_BYTES(obj) + \ - (i)*PyArray_STRIDES(obj)[0] + \ - (j)*PyArray_STRIDES(obj)[1] + \ - (k)*PyArray_STRIDES(obj)[2] + \ - (l)*PyArray_STRIDES(obj)[3])) - -#define PyArray_XDECREF_ERR(obj) \ - if (obj && (PyArray_FLAGS(obj) & NPY_UPDATEIFCOPY)) { \ - PyArray_FLAGS(PyArray_BASE(obj)) |= NPY_WRITEABLE; \ - PyArray_FLAGS(obj) &= ~NPY_UPDATEIFCOPY; \ - } \ - Py_XDECREF(obj) - -#define PyArray_DESCR_REPLACE(descr) do { \ - PyArray_Descr *_new_; \ - _new_ = PyArray_DescrNew(descr); \ - Py_XDECREF(descr); \ - descr = _new_; \ - } while(0) - -/* Copy should always return contiguous array */ -#define PyArray_Copy(obj) PyArray_NewCopy(obj, NPY_CORDER) - -#define PyArray_FromObject(op, type, min_depth, max_depth) \ - PyArray_FromAny(op, PyArray_DescrFromType(type), min_depth, \ - max_depth, NPY_BEHAVED | NPY_ENSUREARRAY, NULL) - -#define PyArray_ContiguousFromObject(op, type, min_depth, max_depth) \ - PyArray_FromAny(op, PyArray_DescrFromType(type), min_depth, \ - max_depth, NPY_DEFAULT | NPY_ENSUREARRAY, NULL) - -#define PyArray_CopyFromObject(op, type, min_depth, max_depth) \ - PyArray_FromAny(op, PyArray_DescrFromType(type), min_depth, \ - max_depth, NPY_ENSURECOPY | NPY_DEFAULT | \ - NPY_ENSUREARRAY, NULL) - -#define PyArray_Cast(mp, type_num) \ - PyArray_CastToType(mp, PyArray_DescrFromType(type_num), 0) - -#define PyArray_Take(ap, items, axis) \ - PyArray_TakeFrom(ap, items, axis, NULL, NPY_RAISE) - -#define PyArray_Put(ap, items, values) \ - PyArray_PutTo(ap, items, values, NPY_RAISE) - -/* Compatibility with old Numeric stuff -- don't use in new code */ - -#define PyArray_FromDimsAndData(nd, d, type, data) \ - PyArray_FromDimsAndDataAndDescr(nd, d, PyArray_DescrFromType(type), \ - data) - -#include "old_defines.h" - -/* - Check to see if this key in the dictionary is the "title" - entry of the tuple (i.e. a duplicate dictionary entry in the fields - dict. 
-*/ - -#define NPY_TITLE_KEY(key, value) ((PyTuple_GET_SIZE((value))==3) && \ - (PyTuple_GET_ITEM((value), 2) == (key))) - - -/* Define python version independent deprecation macro */ - -#if PY_VERSION_HEX >= 0x02050000 -#define DEPRECATE(msg) PyErr_WarnEx(PyExc_DeprecationWarning,msg,1) -#else -#define DEPRECATE(msg) PyErr_Warn(PyExc_DeprecationWarning,msg) -#endif - - -#ifdef __cplusplus -} -#endif - - -#endif /* NPY_NDARRAYOBJECT_H */ diff --git a/pythonPackages/numpy/numpy/core/include/numpy/ndarraytypes.h b/pythonPackages/numpy/numpy/core/include/numpy/ndarraytypes.h deleted file mode 100755 index f090e329bd..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/ndarraytypes.h +++ /dev/null @@ -1,1325 +0,0 @@ -#ifndef NDARRAYTYPES_H -#define NDARRAYTYPES_H - -/* This is auto-generated by the installer */ -#include "numpyconfig.h" - -#include "npy_common.h" -#include "npy_endian.h" -#include "npy_cpu.h" -#include "utils.h" - -#ifdef NPY_ENABLE_SEPARATE_COMPILATION - #define NPY_NO_EXPORT NPY_VISIBILITY_HIDDEN -#else - #define NPY_NO_EXPORT static -#endif - -/* Only use thread if configured in config and python supports it */ -#if defined WITH_THREAD && !NPY_NO_SMP - #define NPY_ALLOW_THREADS 1 -#else - #define NPY_ALLOW_THREADS 0 -#endif - - - -/* - * There are several places in the code where an array of dimensions - * is allocated statically. This is the size of that static - * allocation. - * - * The array creation itself could have arbitrary dimensions but all - * the places where static allocation is used would need to be changed - * to dynamic (including inside of several structures) - */ - -#define NPY_MAXDIMS 32 -#define NPY_MAXARGS 32 - -/* Used for Converter Functions "O&" code in ParseTuple */ -#define NPY_FAIL 0 -#define NPY_SUCCEED 1 - -/* - * Binary compatibility version number. This number is increased - * whenever the C-API is changed such that binary compatibility is - * broken, i.e. whenever a recompile of extension modules is needed. 
- */ -#define NPY_VERSION NPY_ABI_VERSION - -/* - * Minor API version. This number is increased whenever a change is - * made to the C-API -- whether it breaks binary compatibility or not. - * Some changes, such as adding a function pointer to the end of the - * function table, can be made without breaking binary compatibility. - * In this case, only the NPY_FEATURE_VERSION (*not* NPY_VERSION) - * would be increased. Whenever binary compatibility is broken, both - * NPY_VERSION and NPY_FEATURE_VERSION should be increased. - */ -#define NPY_FEATURE_VERSION NPY_API_VERSION - -enum NPY_TYPES { NPY_BOOL=0, - NPY_BYTE, NPY_UBYTE, - NPY_SHORT, NPY_USHORT, - NPY_INT, NPY_UINT, - NPY_LONG, NPY_ULONG, - NPY_LONGLONG, NPY_ULONGLONG, - NPY_FLOAT, NPY_DOUBLE, NPY_LONGDOUBLE, - NPY_CFLOAT, NPY_CDOUBLE, NPY_CLONGDOUBLE, - NPY_OBJECT=17, - NPY_STRING, NPY_UNICODE, - NPY_VOID, - NPY_NTYPES, - NPY_NOTYPE, - NPY_CHAR, /* special flag */ - NPY_USERDEF=256 /* leave room for characters */ -}; - -#define NPY_METADATA_DTSTR "__frequency__" - -/* basetype array priority */ -#define NPY_PRIORITY 0.0 - -/* default subtype priority */ -#define NPY_SUBTYPE_PRIORITY 1.0 - -/* default scalar priority */ -#define NPY_SCALAR_PRIORITY -1000000.0 - -/* How many floating point types are there */ -#define NPY_NUM_FLOATTYPE 3 - -/* - * We need to match npy_intp to a signed integer of the same size as a - * pointer variable. 
npy_uintp to the equivalent unsigned integer - */ - - -/* - * These characters correspond to the array type and the struct - * module - */ - -/* except 'p' -- signed integer for pointer type */ - -enum NPY_TYPECHAR { NPY_BOOLLTR = '?', - NPY_BYTELTR = 'b', - NPY_UBYTELTR = 'B', - NPY_SHORTLTR = 'h', - NPY_USHORTLTR = 'H', - NPY_INTLTR = 'i', - NPY_UINTLTR = 'I', - NPY_LONGLTR = 'l', - NPY_ULONGLTR = 'L', - NPY_LONGLONGLTR = 'q', - NPY_ULONGLONGLTR = 'Q', - NPY_FLOATLTR = 'f', - NPY_DOUBLELTR = 'd', - NPY_LONGDOUBLELTR = 'g', - NPY_CFLOATLTR = 'F', - NPY_CDOUBLELTR = 'D', - NPY_CLONGDOUBLELTR = 'G', - NPY_OBJECTLTR = 'O', - NPY_STRINGLTR = 'S', - NPY_STRINGLTR2 = 'a', - NPY_UNICODELTR = 'U', - NPY_VOIDLTR = 'V', - NPY_DATETIMELTR = 'M', - NPY_TIMEDELTALTR = 'm', - NPY_CHARLTR = 'c', - - /* - * No Descriptor, just a define -- this lets - * Python users specify an array of integers - * large enough to hold a pointer on the - * platform - */ - NPY_INTPLTR = 'p', - NPY_UINTPLTR = 'P', - - NPY_GENBOOLLTR ='b', - NPY_SIGNEDLTR = 'i', - NPY_UNSIGNEDLTR = 'u', - NPY_FLOATINGLTR = 'f', - NPY_COMPLEXLTR = 'c' -}; - -typedef enum { - NPY_QUICKSORT=0, - NPY_HEAPSORT=1, - NPY_MERGESORT=2 -} NPY_SORTKIND; -#define NPY_NSORTS (NPY_MERGESORT + 1) - - -typedef enum { - NPY_SEARCHLEFT=0, - NPY_SEARCHRIGHT=1 -} NPY_SEARCHSIDE; -#define NPY_NSEARCHSIDES (NPY_SEARCHRIGHT + 1) - - -typedef enum { - NPY_NOSCALAR=-1, - NPY_BOOL_SCALAR, - NPY_INTPOS_SCALAR, - NPY_INTNEG_SCALAR, - NPY_FLOAT_SCALAR, - NPY_COMPLEX_SCALAR, - NPY_OBJECT_SCALAR -} NPY_SCALARKIND; -#define NPY_NSCALARKINDS (NPY_OBJECT_SCALAR + 1) - -typedef enum { - NPY_ANYORDER=-1, - NPY_CORDER=0, - NPY_FORTRANORDER=1 -} NPY_ORDER; - - -typedef enum { - NPY_CLIP=0, - NPY_WRAP=1, - NPY_RAISE=2 -} NPY_CLIPMODE; - -typedef enum { - NPY_FR_Y, - NPY_FR_M, - NPY_FR_W, - NPY_FR_B, - NPY_FR_D, - NPY_FR_h, - NPY_FR_m, - NPY_FR_s, - NPY_FR_ms, - NPY_FR_us, - NPY_FR_ns, - NPY_FR_ps, - NPY_FR_fs, - NPY_FR_as -} NPY_DATETIMEUNIT; - -#define 
NPY_DATETIME_NUMUNITS (NPY_FR_as + 1) -#define NPY_DATETIME_DEFAULTUNIT NPY_FR_us - - -#define NPY_STR_Y "Y" -#define NPY_STR_M "M" -#define NPY_STR_W "W" -#define NPY_STR_B "B" -#define NPY_STR_D "D" -#define NPY_STR_h "h" -#define NPY_STR_m "m" -#define NPY_STR_s "s" -#define NPY_STR_ms "ms" -#define NPY_STR_us "us" -#define NPY_STR_ns "ns" -#define NPY_STR_ps "ps" -#define NPY_STR_fs "fs" -#define NPY_STR_as "as" - - -/* - * This is to typedef npy_intp to the appropriate pointer size for - * this platform. Py_intptr_t, Py_uintptr_t are defined in pyport.h. - */ -typedef Py_intptr_t npy_intp; -typedef Py_uintptr_t npy_uintp; -#define NPY_SIZEOF_INTP NPY_SIZEOF_PY_INTPTR_T -#define NPY_SIZEOF_UINTP NPY_SIZEOF_PY_INTPTR_T - -#ifdef constchar -#undef constchar -#endif - -#if (PY_VERSION_HEX < 0x02050000) - #ifndef PY_SSIZE_T_MIN - typedef int Py_ssize_t; - #define PY_SSIZE_T_MAX INT_MAX - #define PY_SSIZE_T_MIN INT_MIN - #endif -#define NPY_SSIZE_T_PYFMT "i" -#undef PyIndex_Check -#define constchar const char -#define PyIndex_Check(op) 0 -#else -#define NPY_SSIZE_T_PYFMT "n" -#define constchar char -#endif - -#if NPY_SIZEOF_PY_INTPTR_T == NPY_SIZEOF_INT - #define NPY_INTP NPY_INT - #define NPY_UINTP NPY_UINT - #define PyIntpArrType_Type PyIntArrType_Type - #define PyUIntpArrType_Type PyUIntArrType_Type - #define NPY_MAX_INTP NPY_MAX_INT - #define NPY_MIN_INTP NPY_MIN_INT - #define NPY_MAX_UINTP NPY_MAX_UINT - #define NPY_INTP_FMT "d" -#elif NPY_SIZEOF_PY_INTPTR_T == NPY_SIZEOF_LONG - #define NPY_INTP NPY_LONG - #define NPY_UINTP NPY_ULONG - #define PyIntpArrType_Type PyLongArrType_Type - #define PyUIntpArrType_Type PyULongArrType_Type - #define NPY_MAX_INTP NPY_MAX_LONG - #define NPY_MIN_INTP MIN_LONG - #define NPY_MAX_UINTP NPY_MAX_ULONG - #define NPY_INTP_FMT "ld" -#elif defined(PY_LONG_LONG) && (NPY_SIZEOF_PY_INTPTR_T == NPY_SIZEOF_LONGLONG) - #define NPY_INTP NPY_LONGLONG - #define NPY_UINTP NPY_ULONGLONG - #define PyIntpArrType_Type PyLongLongArrType_Type - 
#define PyUIntpArrType_Type PyULongLongArrType_Type - #define NPY_MAX_INTP NPY_MAX_LONGLONG - #define NPY_MIN_INTP NPY_MIN_LONGLONG - #define NPY_MAX_UINTP NPY_MAX_ULONGLONG - #define NPY_INTP_FMT "Ld" -#endif - -/* - * We can only use C99 formats for npy_intp if it is the same as - * intptr_t, hence the condition on HAVE_UINTPTR_T - */ -#if (NPY_USE_C99_FORMATS) == 1 \ - && (defined HAVE_UINTPTR_T) \ - && (defined HAVE_INTTYPES_H) - #include <inttypes.h> - #undef NPY_INTP_FMT - #define NPY_INTP_FMT PRIdPTR -#endif - -#define NPY_ERR(str) fprintf(stderr, #str); fflush(stderr); -#define NPY_ERR2(str) fprintf(stderr, str); fflush(stderr); - -#define NPY_STRINGIFY(x) #x -#define NPY_TOSTRING(x) NPY_STRINGIFY(x) - /* - * Macros to define how array, and dimension/strides data is - * allocated. - */ - /* Data buffer */ -#define PyDataMem_NEW(size) ((char *)malloc(size)) -#define PyDataMem_FREE(ptr) free(ptr) -#define PyDataMem_RENEW(ptr,size) ((char *)realloc(ptr,size)) - -#define NPY_USE_PYMEM 1 - -#if NPY_USE_PYMEM == 1 -#define PyArray_malloc PyMem_Malloc -#define PyArray_free PyMem_Free -#define PyArray_realloc PyMem_Realloc -#else -#define PyArray_malloc malloc -#define PyArray_free free -#define PyArray_realloc realloc -#endif - -/* Dimensions and strides */ -#define PyDimMem_NEW(size) \ - ((npy_intp *)PyArray_malloc(size*sizeof(npy_intp))) - -#define PyDimMem_FREE(ptr) PyArray_free(ptr) - -#define PyDimMem_RENEW(ptr,size) \ - ((npy_intp *)PyArray_realloc(ptr,size*sizeof(npy_intp))) - -/* forward declaration */ -struct _PyArray_Descr; - -/* These must deal with unaligned and swapped data if necessary */ -typedef PyObject * (PyArray_GetItemFunc) (void *, void *); -typedef int (PyArray_SetItemFunc)(PyObject *, void *, void *); - -typedef void (PyArray_CopySwapNFunc)(void *, npy_intp, void *, npy_intp, - npy_intp, int, void *); - -typedef void (PyArray_CopySwapFunc)(void *, void *, int, void *); -typedef npy_bool (PyArray_NonzeroFunc)(void *, void *); - - -/* - * These assume 
aligned and notswapped data -- a buffer will be used - * before or contiguous data will be obtained - */ - -typedef int (PyArray_CompareFunc)(const void *, const void *, void *); -typedef int (PyArray_ArgFunc)(void*, npy_intp, npy_intp*, void *); - -typedef void (PyArray_DotFunc)(void *, npy_intp, void *, npy_intp, void *, - npy_intp, void *); - -typedef void (PyArray_VectorUnaryFunc)(void *, void *, npy_intp, void *, - void *); - -/* - * XXX the ignore argument should be removed next time the API version - * is bumped. It used to be the separator. - */ -typedef int (PyArray_ScanFunc)(FILE *fp, void *dptr, - char *ignore, struct _PyArray_Descr *); -typedef int (PyArray_FromStrFunc)(char *s, void *dptr, char **endptr, - struct _PyArray_Descr *); - -typedef int (PyArray_FillFunc)(void *, npy_intp, void *); - -typedef int (PyArray_SortFunc)(void *, npy_intp, void *); -typedef int (PyArray_ArgSortFunc)(void *, npy_intp *, npy_intp, void *); - -typedef int (PyArray_FillWithScalarFunc)(void *, npy_intp, void *, void *); - -typedef int (PyArray_ScalarKindFunc)(void *); - -typedef void (PyArray_FastClipFunc)(void *in, npy_intp n_in, void *min, - void *max, void *out); -typedef void (PyArray_FastPutmaskFunc)(void *in, void *mask, npy_intp n_in, - void *values, npy_intp nv); -typedef int (PyArray_FastTakeFunc)(void *dest, void *src, npy_intp *indarray, - npy_intp nindarray, npy_intp n_outer, - npy_intp m_middle, npy_intp nelem, - NPY_CLIPMODE clipmode); - -typedef struct { - npy_intp *ptr; - int len; -} PyArray_Dims; - -typedef struct { - /* Functions to cast to all other standard types*/ - /* Can have some NULL entries */ - PyArray_VectorUnaryFunc *cast[NPY_NTYPES]; - - /* The next four functions *cannot* be NULL */ - - /* - * Functions to get and set items with standard Python types - * -- not array scalars - */ - PyArray_GetItemFunc *getitem; - PyArray_SetItemFunc *setitem; - - /* - * Copy and/or swap data. 
Memory areas may not overlap - * Use memmove first if they might - */ - PyArray_CopySwapNFunc *copyswapn; - PyArray_CopySwapFunc *copyswap; - - /* - * Function to compare items - * Can be NULL - */ - PyArray_CompareFunc *compare; - - /* - * Function to select largest - * Can be NULL - */ - PyArray_ArgFunc *argmax; - - /* - * Function to compute dot product - * Can be NULL - */ - PyArray_DotFunc *dotfunc; - - /* - * Function to scan an ASCII file and - * place a single value plus possible separator - * Can be NULL - */ - PyArray_ScanFunc *scanfunc; - - /* - * Function to read a single value from a string - * and adjust the pointer; Can be NULL - */ - PyArray_FromStrFunc *fromstr; - - /* - * Function to determine if data is zero or not - * If NULL a default version is - * used at Registration time. - */ - PyArray_NonzeroFunc *nonzero; - - /* - * Used for arange. - * Can be NULL. - */ - PyArray_FillFunc *fill; - - /* - * Function to fill arrays with scalar values - * Can be NULL - */ - PyArray_FillWithScalarFunc *fillwithscalar; - - /* - * Sorting functions - * Can be NULL - */ - PyArray_SortFunc *sort[NPY_NSORTS]; - PyArray_ArgSortFunc *argsort[NPY_NSORTS]; - - /* - * Dictionary of additional casting functions - * PyArray_VectorUnaryFuncs - * which can be populated to support casting - * to other registered types. Can be NULL - */ - PyObject *castdict; - - /* - * Functions useful for generalizing - * the casting rules. - * Can be NULL; - */ - PyArray_ScalarKindFunc *scalarkind; - int **cancastscalarkindto; - int *cancastto; - - PyArray_FastClipFunc *fastclip; - PyArray_FastPutmaskFunc *fastputmask; - PyArray_FastTakeFunc *fasttake; -} PyArray_ArrFuncs; - - -/* The item must be reference counted when it is inserted or extracted. 
*/ -#define NPY_ITEM_REFCOUNT 0x01 -/* Same as needing REFCOUNT */ -#define NPY_ITEM_HASOBJECT 0x01 -/* Convert to list for pickling */ -#define NPY_LIST_PICKLE 0x02 -/* The item is a POINTER */ -#define NPY_ITEM_IS_POINTER 0x04 -/* memory needs to be initialized for this data-type */ -#define NPY_NEEDS_INIT 0x08 -/* operations need Python C-API so don't give up the thread. */ -#define NPY_NEEDS_PYAPI 0x10 -/* Use f.getitem when extracting elements of this data-type */ -#define NPY_USE_GETITEM 0x20 -/* Use f.setitem when setting or creating a 0-d array from this data-type. */ -#define NPY_USE_SETITEM 0x40 -/* define NPY_IS_COMPLEX */ - -/* - * These are inherited for global data-type if any data-types in the - * field have them - */ -#define NPY_FROM_FIELDS (NPY_NEEDS_INIT | NPY_LIST_PICKLE | \ - NPY_ITEM_REFCOUNT | NPY_NEEDS_PYAPI) - -#define NPY_OBJECT_DTYPE_FLAGS (NPY_LIST_PICKLE | NPY_USE_GETITEM | \ - NPY_ITEM_IS_POINTER | NPY_ITEM_REFCOUNT | \ - NPY_NEEDS_INIT | NPY_NEEDS_PYAPI) - -#define PyDataType_FLAGCHK(dtype, flag) \ - (((dtype)->hasobject & (flag)) == (flag)) - -#define PyDataType_REFCHK(dtype) \ - PyDataType_FLAGCHK(dtype, NPY_ITEM_REFCOUNT) - -/* Change dtype hasobject to 32-bit in 1.1 and change its name */ -typedef struct _PyArray_Descr { - PyObject_HEAD - PyTypeObject *typeobj; /* the type object representing an - instance of this type -- should not - be two type_numbers with the same type - object. */ - char kind; /* kind for this type */ - char type; /* unique-character representing this type */ - char byteorder; /* '>' (big), '<' (little), '|' - (not-applicable), or '=' (native). 
char hasobject; /* non-zero if it has object arrays - in fields */ - int type_num; /* number representing this type */ - int elsize; /* element size for this type */ - int alignment; /* alignment needed for this type */ - struct _arr_descr \ - *subarray; /* Non-NULL if this type is - an array (C-contiguous) - of some other type - */ - PyObject *fields; /* The fields dictionary for this type */ - /* For statically defined descr this - is always Py_None */ - - PyObject *names; /* An ordered tuple of field names or NULL - if no fields are defined */ - - PyArray_ArrFuncs *f; /* a table of functions specific for each - basic data descriptor */ - - PyObject *metadata; /* Metadata about this dtype */ -} PyArray_Descr; - -typedef struct _arr_descr { - PyArray_Descr *base; - PyObject *shape; /* a tuple */ -} PyArray_ArrayDescr; - -/* - * The main array object structure. It is recommended to use the macros - * defined below (PyArray_DATA and friends) to access fields here, instead - * of the members themselves. 
- */ - -typedef struct PyArrayObject { - PyObject_HEAD - char *data; /* pointer to raw data buffer */ - int nd; /* number of dimensions, also called ndim */ - npy_intp *dimensions; /* size in each dimension */ - npy_intp *strides; /* - * bytes to jump to get to the - * next element in each dimension - */ - PyObject *base; /* - * This object should be decref'd upon - * deletion of array - * - * For views it points to the original - * array - * - * For creation from buffer object it - * points to an object that should be - * decref'd on deletion - * - * For UPDATEIFCOPY flag this is an - * array to-be-updated upon deletion - * of this one - */ - PyArray_Descr *descr; /* Pointer to type structure */ - int flags; /* Flags describing array -- see below */ - PyObject *weakreflist; /* For weakreferences */ -} PyArrayObject; - -#define NPY_AO PyArrayObject - -#define fortran fortran_ /* For some compilers */ - -/* Array Flags Object */ -typedef struct PyArrayFlagsObject { - PyObject_HEAD - PyObject *arr; - int flags; -} PyArrayFlagsObject; - -/* Mirrors buffer object to ptr */ - -typedef struct { - PyObject_HEAD - PyObject *base; - void *ptr; - npy_intp len; - int flags; -} PyArray_Chunk; - - -typedef int (PyArray_FinalizeFunc)(PyArrayObject *, PyObject *); - -/* - * Means c-style contiguous (last index varies the fastest). The data - * elements are right after each other. - */ -#define NPY_CONTIGUOUS 0x0001 - -/* - * set if array is a contiguous Fortran array: the first index varies - * the fastest in memory (strides array is reverse of C-contiguous - * array) - */ -#define NPY_FORTRAN 0x0002 - -#define NPY_C_CONTIGUOUS NPY_CONTIGUOUS -#define NPY_F_CONTIGUOUS NPY_FORTRAN - -/* - * Note: all 0-d arrays are CONTIGUOUS and FORTRAN contiguous. If a - * 1-d array is CONTIGUOUS it is also FORTRAN contiguous - */ - -/* - * If set, the array owns the data: it will be free'd when the array - * is deleted. 
- */ -#define NPY_OWNDATA 0x0004 - -/* - * An array never has the next four set; they're only used as parameter - * flags to the various FromAny functions - */ - -/* Cause a cast to occur regardless of whether or not it is safe. */ -#define NPY_FORCECAST 0x0010 - -/* - * Always copy the array. Returned arrays are always CONTIGUOUS, - * ALIGNED, and WRITEABLE. - */ -#define NPY_ENSURECOPY 0x0020 - -/* Make sure the returned array is a base-class ndarray */ -#define NPY_ENSUREARRAY 0x0040 - -/* - * Make sure that the strides are in units of the element size. Needed - * for some operations with record-arrays. - */ -#define NPY_ELEMENTSTRIDES 0x0080 - -/* - * Array data is aligned on the appropriate memory address for the type - * stored according to how the compiler would align things (e.g., an - * array of integers (4 bytes each) starts on a memory address that's - * a multiple of 4) - */ -#define NPY_ALIGNED 0x0100 - -/* Array data has the native endianness */ -#define NPY_NOTSWAPPED 0x0200 - -/* Array data is writeable */ -#define NPY_WRITEABLE 0x0400 - -/* - * If this flag is set, then base contains a pointer to an array of - * the same size that should be updated with the current contents of - * this array when this array is deallocated - */ -#define NPY_UPDATEIFCOPY 0x1000 - -/* This flag is for the array interface */ -#define NPY_ARR_HAS_DESCR 0x0800 - - -#define NPY_BEHAVED (NPY_ALIGNED | NPY_WRITEABLE) -#define NPY_BEHAVED_NS (NPY_ALIGNED | NPY_WRITEABLE | NPY_NOTSWAPPED) -#define NPY_CARRAY (NPY_CONTIGUOUS | NPY_BEHAVED) -#define NPY_CARRAY_RO (NPY_CONTIGUOUS | NPY_ALIGNED) -#define NPY_FARRAY (NPY_FORTRAN | NPY_BEHAVED) -#define NPY_FARRAY_RO (NPY_FORTRAN | NPY_ALIGNED) -#define NPY_DEFAULT NPY_CARRAY -#define NPY_IN_ARRAY NPY_CARRAY_RO -#define NPY_OUT_ARRAY NPY_CARRAY -#define NPY_INOUT_ARRAY (NPY_CARRAY | NPY_UPDATEIFCOPY) -#define NPY_IN_FARRAY NPY_FARRAY_RO -#define NPY_OUT_FARRAY NPY_FARRAY -#define NPY_INOUT_FARRAY (NPY_FARRAY | NPY_UPDATEIFCOPY) - 
-#define NPY_UPDATE_ALL (NPY_CONTIGUOUS | NPY_FORTRAN | NPY_ALIGNED) - - -/* - * Size of internal buffers used for alignment. Make BUFSIZE a multiple - * of sizeof(cdouble) -- usually 16 so that ufunc buffers are aligned - */ -#define NPY_MIN_BUFSIZE ((int)sizeof(cdouble)) -#define NPY_MAX_BUFSIZE (((int)sizeof(cdouble))*1000000) -#define NPY_BUFSIZE 10000 -/* #define NPY_BUFSIZE 80*/ - -#define PyArray_MAX(a,b) (((a)>(b))?(a):(b)) -#define PyArray_MIN(a,b) (((a)<(b))?(a):(b)) -#define PyArray_CLT(p,q) ((((p).real==(q).real) ? ((p).imag < (q).imag) : \ - ((p).real < (q).real))) -#define PyArray_CGT(p,q) ((((p).real==(q).real) ? ((p).imag > (q).imag) : \ - ((p).real > (q).real))) -#define PyArray_CLE(p,q) ((((p).real==(q).real) ? ((p).imag <= (q).imag) : \ - ((p).real <= (q).real))) -#define PyArray_CGE(p,q) ((((p).real==(q).real) ? ((p).imag >= (q).imag) : \ - ((p).real >= (q).real))) -#define PyArray_CEQ(p,q) (((p).real==(q).real) && ((p).imag == (q).imag)) -#define PyArray_CNE(p,q) (((p).real!=(q).real) || ((p).imag != (q).imag)) - -/* - * C API: consists of Macros and functions. The MACROS are defined - * here. 
- */ - - -#define PyArray_CHKFLAGS(m, FLAGS) \ - ((((PyArrayObject *)(m))->flags & (FLAGS)) == (FLAGS)) - -#define PyArray_ISCONTIGUOUS(m) PyArray_CHKFLAGS(m, NPY_CONTIGUOUS) -#define PyArray_ISWRITEABLE(m) PyArray_CHKFLAGS(m, NPY_WRITEABLE) -#define PyArray_ISALIGNED(m) PyArray_CHKFLAGS(m, NPY_ALIGNED) - - -#if NPY_ALLOW_THREADS -#define NPY_BEGIN_ALLOW_THREADS Py_BEGIN_ALLOW_THREADS -#define NPY_END_ALLOW_THREADS Py_END_ALLOW_THREADS -#define NPY_BEGIN_THREADS_DEF PyThreadState *_save=NULL; -#define NPY_BEGIN_THREADS _save = PyEval_SaveThread(); -#define NPY_END_THREADS do {if (_save) PyEval_RestoreThread(_save);} while (0); - -#define NPY_BEGIN_THREADS_DESCR(dtype) \ - do {if (!(PyDataType_FLAGCHK(dtype, NPY_NEEDS_PYAPI))) \ - NPY_BEGIN_THREADS;} while (0); - -#define NPY_END_THREADS_DESCR(dtype) \ - do {if (!(PyDataType_FLAGCHK(dtype, NPY_NEEDS_PYAPI))) \ - NPY_END_THREADS; } while (0); - -#define NPY_ALLOW_C_API_DEF PyGILState_STATE __save__; -#define NPY_ALLOW_C_API __save__ = PyGILState_Ensure(); -#define NPY_DISABLE_C_API PyGILState_Release(__save__); -#else -#define NPY_BEGIN_ALLOW_THREADS -#define NPY_END_ALLOW_THREADS -#define NPY_BEGIN_THREADS_DEF -#define NPY_BEGIN_THREADS -#define NPY_END_THREADS -#define NPY_BEGIN_THREADS_DESCR(dtype) -#define NPY_END_THREADS_DESCR(dtype) -#define NPY_ALLOW_C_API_DEF -#define NPY_ALLOW_C_API -#define NPY_DISABLE_C_API -#endif - -/***************************** - * Basic iterator object - *****************************/ - -/* FWD declaration */ -typedef struct PyArrayIterObject_tag PyArrayIterObject; - -/* - * type of the function which translates a set of coordinates to a - * pointer to the data - */ -typedef char* (*npy_iter_get_dataptr_t)(PyArrayIterObject* iter, npy_intp*); - -struct PyArrayIterObject_tag { - PyObject_HEAD - int nd_m1; /* number of dimensions - 1 */ - npy_intp index, size; - npy_intp coordinates[NPY_MAXDIMS];/* N-dimensional loop */ - npy_intp dims_m1[NPY_MAXDIMS]; /* ao->dimensions - 1 */ - 
npy_intp strides[NPY_MAXDIMS]; /* ao->strides or fake */ - npy_intp backstrides[NPY_MAXDIMS];/* how far to jump back */ - npy_intp factors[NPY_MAXDIMS]; /* shape factors */ - PyArrayObject *ao; - char *dataptr; /* pointer to current item*/ - npy_bool contiguous; - - npy_intp bounds[NPY_MAXDIMS][2]; - npy_intp limits[NPY_MAXDIMS][2]; - npy_intp limits_sizes[NPY_MAXDIMS]; - npy_iter_get_dataptr_t translate; -} ; - - -/* Iterator API */ -#define PyArrayIter_Check(op) PyObject_TypeCheck(op, &PyArrayIter_Type) - -#define _PyAIT(it) ((PyArrayIterObject *)(it)) -#define PyArray_ITER_RESET(it) { \ - _PyAIT(it)->index = 0; \ - _PyAIT(it)->dataptr = _PyAIT(it)->ao->data; \ - memset(_PyAIT(it)->coordinates, 0, \ - (_PyAIT(it)->nd_m1+1)*sizeof(npy_intp)); \ -} - -#define _PyArray_ITER_NEXT1(it) { \ - (it)->dataptr += _PyAIT(it)->strides[0]; \ - (it)->coordinates[0]++; \ -} - -#define _PyArray_ITER_NEXT2(it) { \ - if ((it)->coordinates[1] < (it)->dims_m1[1]) { \ - (it)->coordinates[1]++; \ - (it)->dataptr += (it)->strides[1]; \ - } \ - else { \ - (it)->coordinates[1] = 0; \ - (it)->coordinates[0]++; \ - (it)->dataptr += (it)->strides[0] - \ - (it)->backstrides[1]; \ - } \ -} - -#define _PyArray_ITER_NEXT3(it) { \ - if ((it)->coordinates[2] < (it)->dims_m1[2]) { \ - (it)->coordinates[2]++; \ - (it)->dataptr += (it)->strides[2]; \ - } \ - else { \ - (it)->coordinates[2] = 0; \ - (it)->dataptr -= (it)->backstrides[2]; \ - if ((it)->coordinates[1] < (it)->dims_m1[1]) { \ - (it)->coordinates[1]++; \ - (it)->dataptr += (it)->strides[1]; \ - } \ - else { \ - (it)->coordinates[1] = 0; \ - (it)->coordinates[0]++; \ - (it)->dataptr += (it)->strides[0] - \ - (it)->backstrides[1]; \ - } \ - } \ -} - -#define PyArray_ITER_NEXT(it) { \ - _PyAIT(it)->index++; \ - if (_PyAIT(it)->nd_m1 == 0) { \ - _PyArray_ITER_NEXT1(_PyAIT(it)); \ - } \ - else if (_PyAIT(it)->contiguous) \ - _PyAIT(it)->dataptr += _PyAIT(it)->ao->descr->elsize; \ - else if (_PyAIT(it)->nd_m1 == 1) { \ - 
_PyArray_ITER_NEXT2(_PyAIT(it)); \ - } \ - else { \ - int __npy_i; \ - for (__npy_i=_PyAIT(it)->nd_m1; __npy_i >= 0; __npy_i--) { \ - if (_PyAIT(it)->coordinates[__npy_i] < \ - _PyAIT(it)->dims_m1[__npy_i]) { \ - _PyAIT(it)->coordinates[__npy_i]++; \ - _PyAIT(it)->dataptr += \ - _PyAIT(it)->strides[__npy_i]; \ - break; \ - } \ - else { \ - _PyAIT(it)->coordinates[__npy_i] = 0; \ - _PyAIT(it)->dataptr -= \ - _PyAIT(it)->backstrides[__npy_i]; \ - } \ - } \ - } \ -} - -#define PyArray_ITER_GOTO(it, destination) { \ - int __npy_i; \ - _PyAIT(it)->index = 0; \ - _PyAIT(it)->dataptr = _PyAIT(it)->ao->data; \ - for (__npy_i = _PyAIT(it)->nd_m1; __npy_i>=0; __npy_i--) { \ - if (destination[__npy_i] < 0) { \ - destination[__npy_i] += \ - _PyAIT(it)->dims_m1[__npy_i]+1; \ - } \ - _PyAIT(it)->dataptr += destination[__npy_i] * \ - _PyAIT(it)->strides[__npy_i]; \ - _PyAIT(it)->coordinates[__npy_i] = \ - destination[__npy_i]; \ - _PyAIT(it)->index += destination[__npy_i] * \ - ( __npy_i==_PyAIT(it)->nd_m1 ? 
1 : \ - _PyAIT(it)->dims_m1[__npy_i+1]+1) ; \ - } \ -} - -#define PyArray_ITER_GOTO1D(it, ind) { \ - int __npy_i; \ - npy_intp __npy_ind = (npy_intp) (ind); \ - if (__npy_ind < 0) __npy_ind += _PyAIT(it)->size; \ - _PyAIT(it)->index = __npy_ind; \ - if (_PyAIT(it)->nd_m1 == 0) { \ - _PyAIT(it)->dataptr = _PyAIT(it)->ao->data + \ - __npy_ind * _PyAIT(it)->strides[0]; \ - } \ - else if (_PyAIT(it)->contiguous) \ - _PyAIT(it)->dataptr = _PyAIT(it)->ao->data + \ - __npy_ind * _PyAIT(it)->ao->descr->elsize; \ - else { \ - _PyAIT(it)->dataptr = _PyAIT(it)->ao->data; \ - for (__npy_i = 0; __npy_i<=_PyAIT(it)->nd_m1; \ - __npy_i++) { \ - _PyAIT(it)->dataptr += \ - (__npy_ind / _PyAIT(it)->factors[__npy_i]) \ - * _PyAIT(it)->strides[__npy_i]; \ - __npy_ind %= _PyAIT(it)->factors[__npy_i]; \ - } \ - } \ -} - -#define PyArray_ITER_DATA(it) ((void *)(_PyAIT(it)->dataptr)) - -#define PyArray_ITER_NOTDONE(it) (_PyAIT(it)->index < _PyAIT(it)->size) - - -/* - * Any object passed to PyArray_Broadcast must be binary compatible - * with this structure. 
- */ - -typedef struct { - PyObject_HEAD - int numiter; /* number of iters */ - npy_intp size; /* broadcasted size */ - npy_intp index; /* current index */ - int nd; /* number of dims */ - npy_intp dimensions[NPY_MAXDIMS]; /* dimensions */ - PyArrayIterObject *iters[NPY_MAXARGS]; /* iterators */ -} PyArrayMultiIterObject; - -#define _PyMIT(m) ((PyArrayMultiIterObject *)(m)) -#define PyArray_MultiIter_RESET(multi) { \ - int __npy_mi; \ - _PyMIT(multi)->index = 0; \ - for (__npy_mi=0; __npy_mi < _PyMIT(multi)->numiter; __npy_mi++) { \ - PyArray_ITER_RESET(_PyMIT(multi)->iters[__npy_mi]); \ - } \ -} - -#define PyArray_MultiIter_NEXT(multi) { \ - int __npy_mi; \ - _PyMIT(multi)->index++; \ - for (__npy_mi=0; __npy_mi < _PyMIT(multi)->numiter; __npy_mi++) { \ - PyArray_ITER_NEXT(_PyMIT(multi)->iters[__npy_mi]); \ - } \ -} - -#define PyArray_MultiIter_GOTO(multi, dest) { \ - int __npy_mi; \ - for (__npy_mi=0; __npy_mi < _PyMIT(multi)->numiter; __npy_mi++) { \ - PyArray_ITER_GOTO(_PyMIT(multi)->iters[__npy_mi], dest); \ - } \ - _PyMIT(multi)->index = _PyMIT(multi)->iters[0]->index; \ -} - -#define PyArray_MultiIter_GOTO1D(multi, ind) { \ - int __npy_mi; \ - for (__npy_mi=0; __npy_mi < _PyMIT(multi)->numiter; __npy_mi++) { \ - PyArray_ITER_GOTO1D(_PyMIT(multi)->iters[__npy_mi], ind); \ - } \ - _PyMIT(multi)->index = _PyMIT(multi)->iters[0]->index; \ -} - -#define PyArray_MultiIter_DATA(multi, i) \ - ((void *)(_PyMIT(multi)->iters[i]->dataptr)) - -#define PyArray_MultiIter_NEXTi(multi, i) \ - PyArray_ITER_NEXT(_PyMIT(multi)->iters[i]) - -#define PyArray_MultiIter_NOTDONE(multi) \ - (_PyMIT(multi)->index < _PyMIT(multi)->size) - -/* Store the information needed for fancy-indexing over an array */ - -typedef struct { - PyObject_HEAD - /* - * Multi-iterator portion --- needs to be present in this - * order to work with PyArray_Broadcast - */ - - int numiter; /* number of index-array - iterators */ - npy_intp size; /* size of broadcasted - result */ - npy_intp index; /* current 
index */ - int nd; /* number of dims */ - npy_intp dimensions[NPY_MAXDIMS]; /* dimensions */ - PyArrayIterObject *iters[NPY_MAXDIMS]; /* index object - iterators */ - PyArrayIterObject *ait; /* flat Iterator for - underlying array */ - - /* flat iterator for subspace (when numiter < nd) */ - PyArrayIterObject *subspace; - - /* - * if subspace iteration, then this is the array of axes in - * the underlying array represented by the index objects - */ - int iteraxes[NPY_MAXDIMS]; - /* - * if subspace iteration, then these are the coordinates of the - * start of the subspace. - */ - npy_intp bscoord[NPY_MAXDIMS]; - - PyObject *indexobj; /* creating obj */ - int consec; - char *dataptr; - -} PyArrayMapIterObject; - -enum { - NPY_NEIGHBORHOOD_ITER_ZERO_PADDING, - NPY_NEIGHBORHOOD_ITER_ONE_PADDING, - NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING, - NPY_NEIGHBORHOOD_ITER_CIRCULAR_PADDING, - NPY_NEIGHBORHOOD_ITER_MIRROR_PADDING -}; - -typedef struct { - PyObject_HEAD - - /* - * PyArrayIterObject part: keep this in this exact order - */ - int nd_m1; /* number of dimensions - 1 */ - npy_intp index, size; - npy_intp coordinates[NPY_MAXDIMS];/* N-dimensional loop */ - npy_intp dims_m1[NPY_MAXDIMS]; /* ao->dimensions - 1 */ - npy_intp strides[NPY_MAXDIMS]; /* ao->strides or fake */ - npy_intp backstrides[NPY_MAXDIMS];/* how far to jump back */ - npy_intp factors[NPY_MAXDIMS]; /* shape factors */ - PyArrayObject *ao; - char *dataptr; /* pointer to current item*/ - npy_bool contiguous; - - npy_intp bounds[NPY_MAXDIMS][2]; - npy_intp limits[NPY_MAXDIMS][2]; - npy_intp limits_sizes[NPY_MAXDIMS]; - npy_iter_get_dataptr_t translate; - - /* - * New members - */ - npy_intp nd; - - /* dimensions gives the shape of the array */ - npy_intp dimensions[NPY_MAXDIMS]; - - /* - * Neighborhood point coordinates are computed relative to the - * point pointed to by _internal_iter - */ - PyArrayIterObject* _internal_iter; - /* - * To keep a reference to the representation of the constant value - * for 
constant padding - */ - char* constant; - - int mode; -} PyArrayNeighborhoodIterObject; - -/* - * Neighborhood iterator API - */ - -/* General: these work for any mode */ -static NPY_INLINE int -PyArrayNeighborhoodIter_Reset(PyArrayNeighborhoodIterObject* iter); -static NPY_INLINE int -PyArrayNeighborhoodIter_Next(PyArrayNeighborhoodIterObject* iter); -#if 0 -static NPY_INLINE int -PyArrayNeighborhoodIter_Next2D(PyArrayNeighborhoodIterObject* iter); -#endif - -/* - * Include inline implementations - functions defined there are not - * considered public API - */ -#define _NPY_INCLUDE_NEIGHBORHOOD_IMP -#include "_neighborhood_iterator_imp.h" -#undef _NPY_INCLUDE_NEIGHBORHOOD_IMP - -/* The default array type */ -#define NPY_DEFAULT_TYPE NPY_DOUBLE -#define PyArray_DEFAULT NPY_DEFAULT_TYPE - -/* - * All sorts of useful ways to look into a PyArrayObject. These are - * recommended over casting to PyArrayObject and accessing the - * members directly. - */ - -#define PyArray_NDIM(obj) (((PyArrayObject *)(obj))->nd) -#define PyArray_ISONESEGMENT(m) (PyArray_NDIM(m) == 0 || \ - PyArray_CHKFLAGS(m, NPY_CONTIGUOUS) || \ - PyArray_CHKFLAGS(m, NPY_FORTRAN)) - -#define PyArray_ISFORTRAN(m) (PyArray_CHKFLAGS(m, NPY_FORTRAN) && \ - (PyArray_NDIM(m) > 1)) - -#define PyArray_FORTRAN_IF(m) ((PyArray_CHKFLAGS(m, NPY_FORTRAN) ? 
\ - NPY_FORTRAN : 0)) - -#define FORTRAN_IF PyArray_FORTRAN_IF -#define PyArray_DATA(obj) ((void *)(((PyArrayObject *)(obj))->data)) -#define PyArray_BYTES(obj) (((PyArrayObject *)(obj))->data) -#define PyArray_DIMS(obj) (((PyArrayObject *)(obj))->dimensions) -#define PyArray_STRIDES(obj) (((PyArrayObject *)(obj))->strides) -#define PyArray_DIM(obj,n) (PyArray_DIMS(obj)[n]) -#define PyArray_STRIDE(obj,n) (PyArray_STRIDES(obj)[n]) -#define PyArray_BASE(obj) (((PyArrayObject *)(obj))->base) -#define PyArray_DESCR(obj) (((PyArrayObject *)(obj))->descr) -#define PyArray_FLAGS(obj) (((PyArrayObject *)(obj))->flags) -#define PyArray_ITEMSIZE(obj) (((PyArrayObject *)(obj))->descr->elsize) -#define PyArray_TYPE(obj) (((PyArrayObject *)(obj))->descr->type_num) - -#define PyArray_GETITEM(obj,itemptr) \ - ((PyArrayObject *)(obj))->descr->f->getitem((char *)(itemptr), \ - (PyArrayObject *)(obj)) - -#define PyArray_SETITEM(obj,itemptr,v) \ - ((PyArrayObject *)(obj))->descr->f->setitem((PyObject *)(v), \ - (char *)(itemptr), \ - (PyArrayObject *)(obj)) - - -#define PyTypeNum_ISBOOL(type) ((type) == NPY_BOOL) - -#define PyTypeNum_ISUNSIGNED(type) (((type) == NPY_UBYTE) || \ - ((type) == NPY_USHORT) || \ - ((type) == NPY_UINT) || \ - ((type) == NPY_ULONG) || \ - ((type) == NPY_ULONGLONG)) - -#define PyTypeNum_ISSIGNED(type) (((type) == NPY_BYTE) || \ - ((type) == NPY_SHORT) || \ - ((type) == NPY_INT) || \ - ((type) == NPY_LONG) || \ - ((type) == NPY_LONGLONG)) - -#define PyTypeNum_ISINTEGER(type) (((type) >= NPY_BYTE) && \ - ((type) <= NPY_ULONGLONG)) - -#define PyTypeNum_ISFLOAT(type) (((type) >= NPY_FLOAT) && \ - ((type) <= NPY_LONGDOUBLE)) - -#define PyTypeNum_ISNUMBER(type) ((type) <= NPY_CLONGDOUBLE) - -#define PyTypeNum_ISSTRING(type) (((type) == NPY_STRING) || \ - ((type) == NPY_UNICODE)) - -#define PyTypeNum_ISCOMPLEX(type) (((type) >= NPY_CFLOAT) && \ - ((type) <= NPY_CLONGDOUBLE)) - -#define PyTypeNum_ISPYTHON(type) (((type) == NPY_LONG) || \ - ((type) == NPY_DOUBLE) || 
\ - ((type) == NPY_CDOUBLE) || \ - ((type) == NPY_BOOL) || \ - ((type) == NPY_OBJECT )) - -#define PyTypeNum_ISFLEXIBLE(type) (((type) >=NPY_STRING) && \ - ((type) <=NPY_VOID)) - -#define PyTypeNum_ISUSERDEF(type) (((type) >= NPY_USERDEF) && \ - ((type) < NPY_USERDEF+ \ - NPY_NUMUSERTYPES)) - -#define PyTypeNum_ISEXTENDED(type) (PyTypeNum_ISFLEXIBLE(type) || \ - PyTypeNum_ISUSERDEF(type)) - -#define PyTypeNum_ISOBJECT(type) ((type) == NPY_OBJECT) - - -#define PyDataType_ISBOOL(obj) PyTypeNum_ISBOOL(_PyADt(obj)) -#define PyDataType_ISUNSIGNED(obj) PyTypeNum_ISUNSIGNED(((PyArray_Descr*)(obj))->type_num) -#define PyDataType_ISSIGNED(obj) PyTypeNum_ISSIGNED(((PyArray_Descr*)(obj))->type_num) -#define PyDataType_ISINTEGER(obj) PyTypeNum_ISINTEGER(((PyArray_Descr*)(obj))->type_num ) -#define PyDataType_ISFLOAT(obj) PyTypeNum_ISFLOAT(((PyArray_Descr*)(obj))->type_num) -#define PyDataType_ISNUMBER(obj) PyTypeNum_ISNUMBER(((PyArray_Descr*)(obj))->type_num) -#define PyDataType_ISSTRING(obj) PyTypeNum_ISSTRING(((PyArray_Descr*)(obj))->type_num) -#define PyDataType_ISCOMPLEX(obj) PyTypeNum_ISCOMPLEX(((PyArray_Descr*)(obj))->type_num) -#define PyDataType_ISPYTHON(obj) PyTypeNum_ISPYTHON(((PyArray_Descr*)(obj))->type_num) -#define PyDataType_ISFLEXIBLE(obj) PyTypeNum_ISFLEXIBLE(((PyArray_Descr*)(obj))->type_num) -#define PyDataType_ISUSERDEF(obj) PyTypeNum_ISUSERDEF(((PyArray_Descr*)(obj))->type_num) -#define PyDataType_ISEXTENDED(obj) PyTypeNum_ISEXTENDED(((PyArray_Descr*)(obj))->type_num) -#define PyDataType_ISOBJECT(obj) PyTypeNum_ISOBJECT(((PyArray_Descr*)(obj))->type_num) -#define PyDataType_HASFIELDS(obj) (((PyArray_Descr *)(obj))->names != NULL) - -#define PyArray_ISBOOL(obj) PyTypeNum_ISBOOL(PyArray_TYPE(obj)) -#define PyArray_ISUNSIGNED(obj) PyTypeNum_ISUNSIGNED(PyArray_TYPE(obj)) -#define PyArray_ISSIGNED(obj) PyTypeNum_ISSIGNED(PyArray_TYPE(obj)) -#define PyArray_ISINTEGER(obj) PyTypeNum_ISINTEGER(PyArray_TYPE(obj)) -#define PyArray_ISFLOAT(obj) 
PyTypeNum_ISFLOAT(PyArray_TYPE(obj)) -#define PyArray_ISNUMBER(obj) PyTypeNum_ISNUMBER(PyArray_TYPE(obj)) -#define PyArray_ISSTRING(obj) PyTypeNum_ISSTRING(PyArray_TYPE(obj)) -#define PyArray_ISCOMPLEX(obj) PyTypeNum_ISCOMPLEX(PyArray_TYPE(obj)) -#define PyArray_ISPYTHON(obj) PyTypeNum_ISPYTHON(PyArray_TYPE(obj)) -#define PyArray_ISFLEXIBLE(obj) PyTypeNum_ISFLEXIBLE(PyArray_TYPE(obj)) -#define PyArray_ISUSERDEF(obj) PyTypeNum_ISUSERDEF(PyArray_TYPE(obj)) -#define PyArray_ISEXTENDED(obj) PyTypeNum_ISEXTENDED(PyArray_TYPE(obj)) -#define PyArray_ISOBJECT(obj) PyTypeNum_ISOBJECT(PyArray_TYPE(obj)) -#define PyArray_HASFIELDS(obj) PyDataType_HASFIELDS(PyArray_DESCR(obj)) - - /* - * FIXME: This should check for a flag on the data-type that - * states whether or not it is variable length. Because the - * ISFLEXIBLE check is hard-coded to the built-in data-types. - */ -#define PyArray_ISVARIABLE(obj) PyTypeNum_ISFLEXIBLE(PyArray_TYPE(obj)) - -#define PyArray_SAFEALIGNEDCOPY(obj) (PyArray_ISALIGNED(obj) && !PyArray_ISVARIABLE(obj)) - - -#define NPY_LITTLE '<' -#define NPY_BIG '>' -#define NPY_NATIVE '=' -#define NPY_SWAP 's' -#define NPY_IGNORE '|' - -#if NPY_BYTE_ORDER == NPY_BIG_ENDIAN -#define NPY_NATBYTE NPY_BIG -#define NPY_OPPBYTE NPY_LITTLE -#else -#define NPY_NATBYTE NPY_LITTLE -#define NPY_OPPBYTE NPY_BIG -#endif - -#define PyArray_ISNBO(arg) ((arg) != NPY_OPPBYTE) -#define PyArray_IsNativeByteOrder PyArray_ISNBO -#define PyArray_ISNOTSWAPPED(m) PyArray_ISNBO(PyArray_DESCR(m)->byteorder) -#define PyArray_ISBYTESWAPPED(m) (!PyArray_ISNOTSWAPPED(m)) - -#define PyArray_FLAGSWAP(m, flags) (PyArray_CHKFLAGS(m, flags) && \ - PyArray_ISNOTSWAPPED(m)) - -#define PyArray_ISCARRAY(m) PyArray_FLAGSWAP(m, NPY_CARRAY) -#define PyArray_ISCARRAY_RO(m) PyArray_FLAGSWAP(m, NPY_CARRAY_RO) -#define PyArray_ISFARRAY(m) PyArray_FLAGSWAP(m, NPY_FARRAY) -#define PyArray_ISFARRAY_RO(m) PyArray_FLAGSWAP(m, NPY_FARRAY_RO) -#define PyArray_ISBEHAVED(m) PyArray_FLAGSWAP(m, NPY_BEHAVED) 
-#define PyArray_ISBEHAVED_RO(m) PyArray_FLAGSWAP(m, NPY_ALIGNED) - - -#define PyDataType_ISNOTSWAPPED(d) PyArray_ISNBO(((PyArray_Descr *)(d))->byteorder) -#define PyDataType_ISBYTESWAPPED(d) (!PyDataType_ISNOTSWAPPED(d)) - - -/* - * This is the form of the struct that is returned, pointed to by the - * PyCObject attribute of an array __array_struct__. See - * http://numpy.scipy.org/array_interface.shtml for the full - * documentation. - */ -typedef struct { - int two; /* - * contains the integer 2 as a sanity - * check - */ - - int nd; /* number of dimensions */ - - char typekind; /* - * kind in array --- character code of - * typestr - */ - - int itemsize; /* size of each element */ - - int flags; /* - * how the data should be interpreted. Valid - * flags are CONTIGUOUS (1), FORTRAN (2), - * ALIGNED (0x100), NOTSWAPPED (0x200), and - * WRITEABLE (0x400). ARR_HAS_DESCR (0x800) - * states that the arrdescr field is present - * in the structure - */ - - npy_intp *shape; /* - * A length-nd array of shape - * information - */ - - npy_intp *strides; /* A length-nd array of stride information */ - - void *data; /* A pointer to the first element of the array */ - - PyObject *descr; /* - * A list of fields or NULL (ignored if flags - * does not have ARR_HAS_DESCR flag set) - */ -} PyArrayInterface; - -#endif /* NPY_ARRAYTYPES_H */ diff --git a/pythonPackages/numpy/numpy/core/include/numpy/noprefix.h b/pythonPackages/numpy/numpy/core/include/numpy/noprefix.h deleted file mode 100755 index d4ccfc4a3d..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/noprefix.h +++ /dev/null @@ -1,201 +0,0 @@ -#ifndef NPY_NOPREFIX_H -#define NPY_NOPREFIX_H - -/* You can directly include noprefix.h as a backward -compatibility measure*/ -#ifndef NPY_NO_PREFIX -#include "ndarrayobject.h" -#endif - -#define MAX_DIMS NPY_MAXDIMS - -#define longlong npy_longlong -#define ulonglong npy_ulonglong -#define Bool npy_bool -#define longdouble npy_longdouble -#define byte npy_byte - -#ifndef _BSD_SOURCE 
-#define ushort npy_ushort -#define uint npy_uint -#define ulong npy_ulong -#endif - -#define ubyte npy_ubyte -#define ushort npy_ushort -#define uint npy_uint -#define ulong npy_ulong -#define cfloat npy_cfloat -#define cdouble npy_cdouble -#define clongdouble npy_clongdouble -#define Int8 npy_int8 -#define UInt8 npy_uint8 -#define Int16 npy_int16 -#define UInt16 npy_uint16 -#define Int32 npy_int32 -#define UInt32 npy_uint32 -#define Int64 npy_int64 -#define UInt64 npy_uint64 -#define Int128 npy_int128 -#define UInt128 npy_uint128 -#define Int256 npy_int256 -#define UInt256 npy_uint256 -#define Float16 npy_float16 -#define Complex32 npy_complex32 -#define Float32 npy_float32 -#define Complex64 npy_complex64 -#define Float64 npy_float64 -#define Complex128 npy_complex128 -#define Float80 npy_float80 -#define Complex160 npy_complex160 -#define Float96 npy_float96 -#define Complex192 npy_complex192 -#define Float128 npy_float128 -#define Complex256 npy_complex256 -#define intp npy_intp -#define uintp npy_uintp -#define datetime npy_datetime -#define timedelta npy_timedelta - -#define SIZEOF_INTP NPY_SIZEOF_INTP -#define SIZEOF_UINTP NPY_SIZEOF_UINTP -#define SIZEOF_DATETIME NPY_SIZEOF_DATETIME -#define SIZEOF_TIMEDELTA NPY_SIZEOF_TIMEDELTA - -#define LONGLONG_FMT NPY_LONGLONG_FMT -#define ULONGLONG_FMT NPY_ULONGLONG_FMT -#define LONGLONG_SUFFIX NPY_LONGLONG_SUFFIX -#define ULONGLONG_SUFFIX NPY_ULONGLONG_SUFFIX(x) - -#define MAX_INT8 127 -#define MIN_INT8 -128 -#define MAX_UINT8 255 -#define MAX_INT16 32767 -#define MIN_INT16 -32768 -#define MAX_UINT16 65535 -#define MAX_INT32 2147483647 -#define MIN_INT32 (-MAX_INT32 - 1) -#define MAX_UINT32 4294967295U -#define MAX_INT64 LONGLONG_SUFFIX(9223372036854775807) -#define MIN_INT64 (-MAX_INT64 - LONGLONG_SUFFIX(1)) -#define MAX_UINT64 ULONGLONG_SUFFIX(18446744073709551615) -#define MAX_INT128 LONGLONG_SUFFIX(85070591730234615865843651857942052864) -#define MIN_INT128 (-MAX_INT128 - LONGLONG_SUFFIX(1)) -#define MAX_UINT128 
ULONGLONG_SUFFIX(170141183460469231731687303715884105728) -#define MAX_INT256 LONGLONG_SUFFIX(57896044618658097711785492504343953926634992332820282019728792003956564819967) -#define MIN_INT256 (-MAX_INT256 - LONGLONG_SUFFIX(1)) -#define MAX_UINT256 ULONGLONG_SUFFIX(115792089237316195423570985008687907853269984665640564039457584007913129639935) - -#define MAX_BYTE NPY_MAX_BYTE -#define MIN_BYTE NPY_MIN_BYTE -#define MAX_UBYTE NPY_MAX_UBYTE -#define MAX_SHORT NPY_MAX_SHORT -#define MIN_SHORT NPY_MIN_SHORT -#define MAX_USHORT NPY_MAX_USHORT -#define MAX_INT NPY_MAX_INT -#define MIN_INT NPY_MIN_INT -#define MAX_UINT NPY_MAX_UINT -#define MAX_LONG NPY_MAX_LONG -#define MIN_LONG NPY_MIN_LONG -#define MAX_ULONG NPY_MAX_ULONG -#define MAX_LONGLONG NPY_MAX_LONGLONG -#define MIN_LONGLONG NPY_MIN_LONGLONG -#define MAX_ULONGLONG NPY_MAX_ULONGLONG -#define MIN_DATETIME NPY_MIN_DATETIME -#define MAX_DATETIME NPY_MAX_DATETIME -#define MIN_TIMEDELTA NPY_MIN_TIMEDELTA -#define MAX_TIMEDELTA NPY_MAX_TIMEDELTA - -#define SIZEOF_LONGDOUBLE NPY_SIZEOF_LONGDOUBLE -#define SIZEOF_LONGLONG NPY_SIZEOF_LONGLONG -#define BITSOF_BOOL NPY_BITSOF_BOOL -#define BITSOF_CHAR NPY_BITSOF_CHAR -#define BITSOF_SHORT NPY_BITSOF_SHORT -#define BITSOF_INT NPY_BITSOF_INT -#define BITSOF_LONG NPY_BITSOF_LONG -#define BITSOF_LONGLONG NPY_BITSOF_LONGLONG -#define BITSOF_FLOAT NPY_BITSOF_FLOAT -#define BITSOF_DOUBLE NPY_BITSOF_DOUBLE -#define BITSOF_LONGDOUBLE NPY_BITSOF_LONGDOUBLE -#define BITSOF_DATETIME NPY_BITSOF_DATETIME -#define BITSOF_TIMEDELTA NPY_BITSOF_TIMEDELTA - -#define PyArray_UCS4 npy_ucs4 -#define _pya_malloc PyArray_malloc -#define _pya_free PyArray_free -#define _pya_realloc PyArray_realloc - -#define BEGIN_THREADS_DEF NPY_BEGIN_THREADS_DEF -#define BEGIN_THREADS NPY_BEGIN_THREADS -#define END_THREADS NPY_END_THREADS -#define ALLOW_C_API_DEF NPY_ALLOW_C_API_DEF -#define ALLOW_C_API NPY_ALLOW_C_API -#define DISABLE_C_API NPY_DISABLE_C_API - -#define PY_FAIL NPY_FAIL -#define PY_SUCCEED 
NPY_SUCCEED - -#ifndef TRUE -#define TRUE NPY_TRUE -#endif - -#ifndef FALSE -#define FALSE NPY_FALSE -#endif - -#define LONGDOUBLE_FMT NPY_LONGDOUBLE_FMT - -#define CONTIGUOUS NPY_CONTIGUOUS -#define C_CONTIGUOUS NPY_C_CONTIGUOUS -#define FORTRAN NPY_FORTRAN -#define F_CONTIGUOUS NPY_F_CONTIGUOUS -#define OWNDATA NPY_OWNDATA -#define FORCECAST NPY_FORCECAST -#define ENSURECOPY NPY_ENSURECOPY -#define ENSUREARRAY NPY_ENSUREARRAY -#define ELEMENTSTRIDES NPY_ELEMENTSTRIDES -#define ALIGNED NPY_ALIGNED -#define NOTSWAPPED NPY_NOTSWAPPED -#define WRITEABLE NPY_WRITEABLE -#define UPDATEIFCOPY NPY_UPDATEIFCOPY -#define ARR_HAS_DESCR NPY_ARR_HAS_DESCR -#define BEHAVED NPY_BEHAVED -#define BEHAVED_NS NPY_BEHAVED_NS -#define CARRAY NPY_CARRAY -#define CARRAY_RO NPY_CARRAY_RO -#define FARRAY NPY_FARRAY -#define FARRAY_RO NPY_FARRAY_RO -#define DEFAULT NPY_DEFAULT -#define IN_ARRAY NPY_IN_ARRAY -#define OUT_ARRAY NPY_OUT_ARRAY -#define INOUT_ARRAY NPY_INOUT_ARRAY -#define IN_FARRAY NPY_IN_FARRAY -#define OUT_FARRAY NPY_OUT_FARRAY -#define INOUT_FARRAY NPY_INOUT_FARRAY -#define UPDATE_ALL NPY_UPDATE_ALL - -#define OWN_DATA NPY_OWNDATA -#define BEHAVED_FLAGS NPY_BEHAVED -#define BEHAVED_FLAGS_NS NPY_BEHAVED_NS -#define CARRAY_FLAGS_RO NPY_CARRAY_RO -#define CARRAY_FLAGS NPY_CARRAY -#define FARRAY_FLAGS NPY_FARRAY -#define FARRAY_FLAGS_RO NPY_FARRAY_RO -#define DEFAULT_FLAGS NPY_DEFAULT -#define UPDATE_ALL_FLAGS NPY_UPDATE_ALL_FLAGS - -#ifndef MIN -#define MIN PyArray_MIN -#endif -#ifndef MAX -#define MAX PyArray_MAX -#endif -#define MAX_INTP NPY_MAX_INTP -#define MIN_INTP NPY_MIN_INTP -#define MAX_UINTP NPY_MAX_UINTP -#define INTP_FMT NPY_INTP_FMT - -#define REFCOUNT PyArray_REFCOUNT -#define MAX_ELSIZE NPY_MAX_ELSIZE - -#endif diff --git a/pythonPackages/numpy/numpy/core/include/numpy/npy_3kcompat.h b/pythonPackages/numpy/numpy/core/include/numpy/npy_3kcompat.h deleted file mode 100755 index 831941f121..0000000000 --- 
a/pythonPackages/numpy/numpy/core/include/numpy/npy_3kcompat.h +++ /dev/null @@ -1,337 +0,0 @@ -/* - * This is a convenience header file providing compatibility utilities - * for supporting Python 2 and Python 3 in the same code base. - * - * If you want to use this for your own projects, it's recommended to make a - * copy of it. Although the stuff below is unlikely to change, we don't provide - * strong backwards compatibility guarantees at the moment. - */ - -#ifndef _NPY_3KCOMPAT_H_ -#define _NPY_3KCOMPAT_H_ - -#include <Python.h> -#include <stdio.h> - -#if PY_VERSION_HEX >= 0x03000000 -#ifndef NPY_PY3K -#define NPY_PY3K -#endif -#endif - -#include "numpy/npy_common.h" -#include "numpy/ndarrayobject.h" - -#ifdef __cplusplus -extern "C" { -#endif - -/* - * PyInt -> PyLong - */ - -#if defined(NPY_PY3K) -/* Return True only if the long fits in a C long */ -static NPY_INLINE int PyInt_Check(PyObject *op) { - int overflow = 0; - if (!PyLong_Check(op)) { - return 0; - } - PyLong_AsLongAndOverflow(op, &overflow); - return (overflow == 0); -} - -#define PyInt_FromLong PyLong_FromLong -#define PyInt_AsLong PyLong_AsLong -#define PyInt_AS_LONG PyLong_AsLong -#define PyInt_AsSsize_t PyLong_AsSsize_t - -/* NOTE: - * - * Since the PyLong type is very different from the fixed-range PyInt, - * we don't define PyInt_Type -> PyLong_Type. 
- */ -#endif /* NPY_PY3K */ - -/* - * PyString -> PyBytes - */ - -#if defined(NPY_PY3K) - -#define PyString_Type PyBytes_Type -#define PyString_Check PyBytes_Check -#define PyStringObject PyBytesObject -#define PyString_FromString PyBytes_FromString -#define PyString_FromStringAndSize PyBytes_FromStringAndSize -#define PyString_AS_STRING PyBytes_AS_STRING -#define PyString_AsStringAndSize PyBytes_AsStringAndSize -#define PyString_FromFormat PyBytes_FromFormat -#define PyString_Concat PyBytes_Concat -#define PyString_ConcatAndDel PyBytes_ConcatAndDel -#define PyString_AsString PyBytes_AsString -#define PyString_GET_SIZE PyBytes_GET_SIZE -#define PyString_Size PyBytes_Size - -#define PyUString_Type PyUnicode_Type -#define PyUString_Check PyUnicode_Check -#define PyUStringObject PyUnicodeObject -#define PyUString_FromString PyUnicode_FromString -#define PyUString_FromStringAndSize PyUnicode_FromStringAndSize -#define PyUString_FromFormat PyUnicode_FromFormat -#define PyUString_Concat PyUnicode_Concat2 -#define PyUString_ConcatAndDel PyUnicode_ConcatAndDel -#define PyUString_GET_SIZE PyUnicode_GET_SIZE -#define PyUString_Size PyUnicode_Size -#define PyUString_InternFromString PyUnicode_InternFromString -#define PyUString_Format PyUnicode_Format - -#else - -#define PyBytes_Type PyString_Type -#define PyBytes_Check PyString_Check -#define PyBytesObject PyStringObject -#define PyBytes_FromString PyString_FromString -#define PyBytes_FromStringAndSize PyString_FromStringAndSize -#define PyBytes_AS_STRING PyString_AS_STRING -#define PyBytes_AsStringAndSize PyString_AsStringAndSize -#define PyBytes_FromFormat PyString_FromFormat -#define PyBytes_Concat PyString_Concat -#define PyBytes_ConcatAndDel PyString_ConcatAndDel -#define PyBytes_AsString PyString_AsString -#define PyBytes_GET_SIZE PyString_GET_SIZE -#define PyBytes_Size PyString_Size - -#define PyUString_Type PyString_Type -#define PyUString_Check PyString_Check -#define PyUStringObject PyStringObject -#define 
PyUString_FromString PyString_FromString -#define PyUString_FromStringAndSize PyString_FromStringAndSize -#define PyUString_FromFormat PyString_FromFormat -#define PyUString_Concat PyString_Concat -#define PyUString_ConcatAndDel PyString_ConcatAndDel -#define PyUString_GET_SIZE PyString_GET_SIZE -#define PyUString_Size PyString_Size -#define PyUString_InternFromString PyString_InternFromString -#define PyUString_Format PyString_Format - -#endif /* NPY_PY3K */ - - -static NPY_INLINE void -PyUnicode_ConcatAndDel(PyObject **left, PyObject *right) -{ - PyObject *newobj; - newobj = PyUnicode_Concat(*left, right); - Py_DECREF(*left); - Py_DECREF(right); - *left = newobj; -} - -static NPY_INLINE void -PyUnicode_Concat2(PyObject **left, PyObject *right) -{ - PyObject *newobj; - newobj = PyUnicode_Concat(*left, right); - Py_DECREF(*left); - *left = newobj; -} - - -/* - * Accessing items of ob_base - */ - -#if (PY_VERSION_HEX < 0x02060000) -#define Py_TYPE(o) (((PyObject*)(o))->ob_type) -#define Py_REFCNT(o) (((PyObject*)(o))->ob_refcnt) -#define Py_SIZE(o) (((PyVarObject*)(o))->ob_size) -#endif - -/* - * PyFile_AsFile - */ -#if defined(NPY_PY3K) -static NPY_INLINE FILE* -npy_PyFile_Dup(PyObject *file, char *mode) -{ - int fd, fd2; - PyObject *ret, *os; - /* Flush first to ensure things end up in the file in the correct order */ - ret = PyObject_CallMethod(file, "flush", ""); - if (ret == NULL) { - return NULL; - } - Py_DECREF(ret); - fd = PyObject_AsFileDescriptor(file); - if (fd == -1) { - return NULL; - } - os = PyImport_ImportModule("os"); - if (os == NULL) { - return NULL; - } - ret = PyObject_CallMethod(os, "dup", "i", fd); - Py_DECREF(os); - if (ret == NULL) { - return NULL; - } - fd2 = PyNumber_AsSsize_t(ret, NULL); - Py_DECREF(ret); - return fdopen(fd2, mode); -} -#endif - -static NPY_INLINE PyObject* -npy_PyFile_OpenFile(PyObject *filename, char *mode) -{ - PyObject *open; - open = PyDict_GetItemString(PyEval_GetBuiltins(), "open"); - if (open == NULL) { - return 
NULL; - } - return PyObject_CallFunction(open, "Os", filename, mode); -} - -/* - * PyObject_Cmp - */ -#if defined(NPY_PY3K) -static NPY_INLINE int -PyObject_Cmp(PyObject *i1, PyObject *i2, int *cmp) -{ - int v; - v = PyObject_RichCompareBool(i1, i2, Py_LT); - if (v == 0) { - *cmp = -1; - return 1; - } - else if (v == -1) { - return -1; - } - - v = PyObject_RichCompareBool(i1, i2, Py_GT); - if (v == 0) { - *cmp = 1; - return 1; - } - else if (v == -1) { - return -1; - } - - v = PyObject_RichCompareBool(i1, i2, Py_EQ); - if (v == 0) { - *cmp = 0; - return 1; - } - else { - *cmp = 0; - return -1; - } -} -#endif - -/* - * PyCObject functions adapted to PyCapsules. - * - * The main job here is to get rid of the improved error handling - * of PyCapsules. It's a shame... - */ -#if PY_VERSION_HEX >= 0x03000000 - -static NPY_INLINE PyObject * -NpyCapsule_FromVoidPtr(void *ptr, void (*dtor)(PyObject *)) -{ - PyObject *ret = PyCapsule_New(ptr, NULL, dtor); - if (ret == NULL) { - PyErr_Clear(); - } - return ret; -} - -static NPY_INLINE PyObject * -NpyCapsule_FromVoidPtrAndDesc(void *ptr, void* context, void (*dtor)(PyObject *)) -{ - PyObject *ret = NpyCapsule_FromVoidPtr(ptr, dtor); - if (ret != NULL && PyCapsule_SetContext(ret, context) != 0) { - PyErr_Clear(); - Py_DECREF(ret); - ret = NULL; - } - return ret; -} - -static NPY_INLINE void * -NpyCapsule_AsVoidPtr(PyObject *obj) -{ - void *ret = PyCapsule_GetPointer(obj, NULL); - if (ret == NULL) { - PyErr_Clear(); - } - return ret; -} - -static NPY_INLINE void * -NpyCapsule_GetDesc(PyObject *obj) -{ - return PyCapsule_GetContext(obj); -} - -static NPY_INLINE int -NpyCapsule_Check(PyObject *ptr) -{ - return PyCapsule_CheckExact(ptr); -} - -static void -simple_capsule_dtor(PyObject *cap) -{ - PyArray_free(PyCapsule_GetPointer(cap, NULL)); -} - -#else - -static NPY_INLINE PyObject * -NpyCapsule_FromVoidPtr(void *ptr, void (*dtor)(void *)) -{ - return PyCObject_FromVoidPtr(ptr, dtor); -} - -static NPY_INLINE PyObject * 
-NpyCapsule_FromVoidPtrAndDesc(void *ptr, void* context, - void (*dtor)(void *, void *)) -{ - return PyCObject_FromVoidPtrAndDesc(ptr, context, dtor); -} - -static NPY_INLINE void * -NpyCapsule_AsVoidPtr(PyObject *ptr) -{ - return PyCObject_AsVoidPtr(ptr); -} - -static NPY_INLINE void * -NpyCapsule_GetDesc(PyObject *obj) -{ - return PyCObject_GetDesc(obj); -} - -static NPY_INLINE int -NpyCapsule_Check(PyObject *ptr) -{ - return PyCObject_Check(ptr); -} - -static void -simple_capsule_dtor(void *ptr) -{ - PyArray_free(ptr); -} - -#endif - -#ifdef __cplusplus -} -#endif - -#endif /* _NPY_3KCOMPAT_H_ */ diff --git a/pythonPackages/numpy/numpy/core/include/numpy/npy_common.h b/pythonPackages/numpy/numpy/core/include/numpy/npy_common.h deleted file mode 100755 index 474f1c3af4..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/npy_common.h +++ /dev/null @@ -1,848 +0,0 @@ -#ifndef _NPY_COMMON_H_ -#define _NPY_COMMON_H_ - -/* This is auto-generated */ -#include "numpyconfig.h" - -#if defined(_MSC_VER) - #define NPY_INLINE __inline -#elif defined(__GNUC__) - #define NPY_INLINE inline -#else - #define NPY_INLINE -#endif - -/* enums for detected endianness */ -enum { - NPY_CPU_UNKNOWN_ENDIAN, - NPY_CPU_LITTLE, - NPY_CPU_BIG -}; - -/* Some platforms don't define bool, long long, or long double. - Handle that here. 
-*/ - -#define NPY_BYTE_FMT "hhd" -#define NPY_UBYTE_FMT "hhu" -#define NPY_SHORT_FMT "hd" -#define NPY_USHORT_FMT "hu" -#define NPY_INT_FMT "d" -#define NPY_UINT_FMT "u" -#define NPY_LONG_FMT "ld" -#define NPY_ULONG_FMT "lu" -#define NPY_FLOAT_FMT "g" -#define NPY_DOUBLE_FMT "g" - -#ifdef PY_LONG_LONG -typedef PY_LONG_LONG npy_longlong; -typedef unsigned PY_LONG_LONG npy_ulonglong; -# ifdef _MSC_VER -# define NPY_LONGLONG_FMT "I64d" -# define NPY_ULONGLONG_FMT "I64u" -# define NPY_LONGLONG_SUFFIX(x) (x##i64) -# define NPY_ULONGLONG_SUFFIX(x) (x##Ui64) -# else - /* #define LONGLONG_FMT "lld" Another possible variant - #define ULONGLONG_FMT "llu" - - #define LONGLONG_FMT "qd" -- BSD perhaps? - #define ULONGLONG_FMT "qu" - */ -# define NPY_LONGLONG_FMT "Ld" -# define NPY_ULONGLONG_FMT "Lu" -# define NPY_LONGLONG_SUFFIX(x) (x##LL) -# define NPY_ULONGLONG_SUFFIX(x) (x##ULL) -# endif -#else -typedef long npy_longlong; -typedef unsigned long npy_ulonglong; -# define NPY_LONGLONG_SUFFIX(x) (x##L) -# define NPY_ULONGLONG_SUFFIX(x) (x##UL) -#endif - - -typedef unsigned char npy_bool; -#define NPY_FALSE 0 -#define NPY_TRUE 1 - - -#if NPY_SIZEOF_LONGDOUBLE == NPY_SIZEOF_DOUBLE - typedef double npy_longdouble; - #define NPY_LONGDOUBLE_FMT "g" -#else - typedef long double npy_longdouble; - #define NPY_LONGDOUBLE_FMT "Lg" -#endif - -#ifndef Py_USING_UNICODE -#error Must use Python with unicode enabled. -#endif - - -typedef signed char npy_byte; -typedef unsigned char npy_ubyte; -typedef unsigned short npy_ushort; -typedef unsigned int npy_uint; -typedef unsigned long npy_ulong; - -/* These are for completeness */ -typedef float npy_float; -typedef double npy_double; -typedef short npy_short; -typedef int npy_int; -typedef long npy_long; - -/* - * Disabling C99 complex usage: a lot of C code in numpy/scipy rely on being - * able to do .real/.imag. Will have to convert code first. 
- */ -#if 0 -#if defined(NPY_USE_C99_COMPLEX) && defined(NPY_HAVE_COMPLEX_DOUBLE) -typedef complex npy_cdouble; -#else -typedef struct { double real, imag; } npy_cdouble; -#endif - -#if defined(NPY_USE_C99_COMPLEX) && defined(NPY_HAVE_COMPLEX_FLOAT) -typedef complex float npy_cfloat; -#else -typedef struct { float real, imag; } npy_cfloat; -#endif - -#if defined(NPY_USE_C99_COMPLEX) && defined(NPY_HAVE_COMPLEX_LONG_DOUBLE) -typedef complex long double npy_clongdouble; -#else -typedef struct {npy_longdouble real, imag;} npy_clongdouble; -#endif -#endif -#if NPY_SIZEOF_COMPLEX_DOUBLE != 2 * NPY_SIZEOF_DOUBLE -#error npy_cdouble definition is not compatible with C99 complex definition ! \ - Please contact Numpy maintainers and give detailed information about your \ - compiler and platform -#endif -typedef struct { double real, imag; } npy_cdouble; - -#if NPY_SIZEOF_COMPLEX_FLOAT != 2 * NPY_SIZEOF_FLOAT -#error npy_cfloat definition is not compatible with C99 complex definition ! \ - Please contact Numpy maintainers and give detailed information about your \ - compiler and platform -#endif -typedef struct { float real, imag; } npy_cfloat; - -#if NPY_SIZEOF_COMPLEX_LONGDOUBLE != 2 * NPY_SIZEOF_LONGDOUBLE -#error npy_clongdouble definition is not compatible with C99 complex definition ! 
\ - Please contact Numpy maintainers and give detailed information about your \ - compiler and platform -#endif -typedef struct { npy_longdouble real, imag; } npy_clongdouble; - -/* - * numarray-style bit-width typedefs - */ -#define NPY_MAX_INT8 127 -#define NPY_MIN_INT8 -128 -#define NPY_MAX_UINT8 255 -#define NPY_MAX_INT16 32767 -#define NPY_MIN_INT16 -32768 -#define NPY_MAX_UINT16 65535 -#define NPY_MAX_INT32 2147483647 -#define NPY_MIN_INT32 (-NPY_MAX_INT32 - 1) -#define NPY_MAX_UINT32 4294967295U -#define NPY_MAX_INT64 NPY_LONGLONG_SUFFIX(9223372036854775807) -#define NPY_MIN_INT64 (-NPY_MAX_INT64 - NPY_LONGLONG_SUFFIX(1)) -#define NPY_MAX_UINT64 NPY_ULONGLONG_SUFFIX(18446744073709551615) -#define NPY_MAX_INT128 NPY_LONGLONG_SUFFIX(85070591730234615865843651857942052864) -#define NPY_MIN_INT128 (-NPY_MAX_INT128 - NPY_LONGLONG_SUFFIX(1)) -#define NPY_MAX_UINT128 NPY_ULONGLONG_SUFFIX(170141183460469231731687303715884105728) -#define NPY_MAX_INT256 NPY_LONGLONG_SUFFIX(57896044618658097711785492504343953926634992332820282019728792003956564819967) -#define NPY_MIN_INT256 (-NPY_MAX_INT256 - NPY_LONGLONG_SUFFIX(1)) -#define NPY_MAX_UINT256 NPY_ULONGLONG_SUFFIX(115792089237316195423570985008687907853269984665640564039457584007913129639935) -#define NPY_MIN_DATETIME NPY_MIN_INT64 -#define NPY_MAX_DATETIME NPY_MAX_INT64 -#define NPY_MIN_TIMEDELTA NPY_MIN_INT64 -#define NPY_MAX_TIMEDELTA NPY_MAX_INT64 - - /* Need to find the number of bits for each type and - make definitions accordingly. - - C states that sizeof(char) == 1 by definition - - So, just using the sizeof keyword won't help. - - It also looks like Python itself uses sizeof(char) quite a - bit, which by definition should be 1 all the time. 
- - Idea: Make Use of CHAR_BIT which should tell us how many - BITS per CHARACTER - */ - - /* Include platform definitions -- These are in the C89/90 standard */ -#include <limits.h> -#define NPY_MAX_BYTE SCHAR_MAX -#define NPY_MIN_BYTE SCHAR_MIN -#define NPY_MAX_UBYTE UCHAR_MAX -#define NPY_MAX_SHORT SHRT_MAX -#define NPY_MIN_SHORT SHRT_MIN -#define NPY_MAX_USHORT USHRT_MAX -#define NPY_MAX_INT INT_MAX -#ifndef INT_MIN -#define INT_MIN (-INT_MAX - 1) -#endif -#define NPY_MIN_INT INT_MIN -#define NPY_MAX_UINT UINT_MAX -#define NPY_MAX_LONG LONG_MAX -#define NPY_MIN_LONG LONG_MIN -#define NPY_MAX_ULONG ULONG_MAX - -#define NPY_SIZEOF_DATETIME 8 -#define NPY_SIZEOF_TIMEDELTA 8 - -#define NPY_BITSOF_BOOL (sizeof(npy_bool)*CHAR_BIT) -#define NPY_BITSOF_CHAR CHAR_BIT -#define NPY_BITSOF_SHORT (NPY_SIZEOF_SHORT * CHAR_BIT) -#define NPY_BITSOF_INT (NPY_SIZEOF_INT * CHAR_BIT) -#define NPY_BITSOF_LONG (NPY_SIZEOF_LONG * CHAR_BIT) -#define NPY_BITSOF_LONGLONG (NPY_SIZEOF_LONGLONG * CHAR_BIT) -#define NPY_BITSOF_FLOAT (NPY_SIZEOF_FLOAT * CHAR_BIT) -#define NPY_BITSOF_DOUBLE (NPY_SIZEOF_DOUBLE * CHAR_BIT) -#define NPY_BITSOF_LONGDOUBLE (NPY_SIZEOF_LONGDOUBLE * CHAR_BIT) -#define NPY_BITSOF_DATETIME (NPY_SIZEOF_DATETIME * CHAR_BIT) -#define NPY_BITSOF_TIMEDELTA (NPY_SIZEOF_TIMEDELTA * CHAR_BIT) - -#if NPY_BITSOF_LONG == 8 -#define NPY_INT8 NPY_LONG -#define NPY_UINT8 NPY_ULONG - typedef long npy_int8; - typedef unsigned long npy_uint8; -#define PyInt8ScalarObject PyLongScalarObject -#define PyInt8ArrType_Type PyLongArrType_Type -#define PyUInt8ScalarObject PyULongScalarObject -#define PyUInt8ArrType_Type PyULongArrType_Type -#define NPY_INT8_FMT NPY_LONG_FMT -#define NPY_UINT8_FMT NPY_ULONG_FMT -#elif NPY_BITSOF_LONG == 16 -#define NPY_INT16 NPY_LONG -#define NPY_UINT16 NPY_ULONG - typedef long npy_int16; - typedef unsigned long npy_uint16; -#define PyInt16ScalarObject PyLongScalarObject -#define PyInt16ArrType_Type PyLongArrType_Type -#define PyUInt16ScalarObject PyULongScalarObject
-#define PyUInt16ArrType_Type PyULongArrType_Type -#define NPY_INT16_FMT NPY_LONG_FMT -#define NPY_UINT16_FMT NPY_ULONG_FMT -#elif NPY_BITSOF_LONG == 32 -#define NPY_INT32 NPY_LONG -#define NPY_UINT32 NPY_ULONG - typedef long npy_int32; - typedef unsigned long npy_uint32; - typedef unsigned long npy_ucs4; -#define PyInt32ScalarObject PyLongScalarObject -#define PyInt32ArrType_Type PyLongArrType_Type -#define PyUInt32ScalarObject PyULongScalarObject -#define PyUInt32ArrType_Type PyULongArrType_Type -#define NPY_INT32_FMT NPY_LONG_FMT -#define NPY_UINT32_FMT NPY_ULONG_FMT -#elif NPY_BITSOF_LONG == 64 -#define NPY_INT64 NPY_LONG -#define NPY_UINT64 NPY_ULONG - typedef long npy_int64; - typedef unsigned long npy_uint64; -#define PyInt64ScalarObject PyLongScalarObject -#define PyInt64ArrType_Type PyLongArrType_Type -#define PyUInt64ScalarObject PyULongScalarObject -#define PyUInt64ArrType_Type PyULongArrType_Type -#define NPY_INT64_FMT NPY_LONG_FMT -#define NPY_UINT64_FMT NPY_ULONG_FMT -#define MyPyLong_FromInt64 PyLong_FromLong -#define MyPyLong_AsInt64 PyLong_AsLong -#elif NPY_BITSOF_LONG == 128 -#define NPY_INT128 NPY_LONG -#define NPY_UINT128 NPY_ULONG - typedef long npy_int128; - typedef unsigned long npy_uint128; -#define PyInt128ScalarObject PyLongScalarObject -#define PyInt128ArrType_Type PyLongArrType_Type -#define PyUInt128ScalarObject PyULongScalarObject -#define PyUInt128ArrType_Type PyULongArrType_Type -#define NPY_INT128_FMT NPY_LONG_FMT -#define NPY_UINT128_FMT NPY_ULONG_FMT -#endif - -#if NPY_BITSOF_LONGLONG == 8 -# ifndef NPY_INT8 -# define NPY_INT8 NPY_LONGLONG -# define NPY_UINT8 NPY_ULONGLONG - typedef npy_longlong npy_int8; - typedef npy_ulonglong npy_uint8; -# define PyInt8ScalarObject PyLongLongScalarObject -# define PyInt8ArrType_Type PyLongLongArrType_Type -# define PyUInt8ScalarObject PyULongLongScalarObject -# define PyUInt8ArrType_Type PyULongLongArrType_Type -#define NPY_INT8_FMT NPY_LONGLONG_FMT -#define NPY_UINT8_FMT NPY_ULONGLONG_FMT -# 
endif -# define NPY_MAX_LONGLONG NPY_MAX_INT8 -# define NPY_MIN_LONGLONG NPY_MIN_INT8 -# define NPY_MAX_ULONGLONG NPY_MAX_UINT8 -#elif NPY_BITSOF_LONGLONG == 16 -# ifndef NPY_INT16 -# define NPY_INT16 NPY_LONGLONG -# define NPY_UINT16 NPY_ULONGLONG - typedef npy_longlong npy_int16; - typedef npy_ulonglong npy_uint16; -# define PyInt16ScalarObject PyLongLongScalarObject -# define PyInt16ArrType_Type PyLongLongArrType_Type -# define PyUInt16ScalarObject PyULongLongScalarObject -# define PyUInt16ArrType_Type PyULongLongArrType_Type -#define NPY_INT16_FMT NPY_LONGLONG_FMT -#define NPY_UINT16_FMT NPY_ULONGLONG_FMT -# endif -# define NPY_MAX_LONGLONG NPY_MAX_INT16 -# define NPY_MIN_LONGLONG NPY_MIN_INT16 -# define NPY_MAX_ULONGLONG NPY_MAX_UINT16 -#elif NPY_BITSOF_LONGLONG == 32 -# ifndef NPY_INT32 -# define NPY_INT32 NPY_LONGLONG -# define NPY_UINT32 NPY_ULONGLONG - typedef npy_longlong npy_int32; - typedef npy_ulonglong npy_uint32; - typedef npy_ulonglong npy_ucs4; -# define PyInt32ScalarObject PyLongLongScalarObject -# define PyInt32ArrType_Type PyLongLongArrType_Type -# define PyUInt32ScalarObject PyULongLongScalarObject -# define PyUInt32ArrType_Type PyULongLongArrType_Type -#define NPY_INT32_FMT NPY_LONGLONG_FMT -#define NPY_UINT32_FMT NPY_ULONGLONG_FMT -# endif -# define NPY_MAX_LONGLONG NPY_MAX_INT32 -# define NPY_MIN_LONGLONG NPY_MIN_INT32 -# define NPY_MAX_ULONGLONG NPY_MAX_UINT32 -#elif NPY_BITSOF_LONGLONG == 64 -# ifndef NPY_INT64 -# define NPY_INT64 NPY_LONGLONG -# define NPY_UINT64 NPY_ULONGLONG - typedef npy_longlong npy_int64; - typedef npy_ulonglong npy_uint64; -# define PyInt64ScalarObject PyLongLongScalarObject -# define PyInt64ArrType_Type PyLongLongArrType_Type -# define PyUInt64ScalarObject PyULongLongScalarObject -# define PyUInt64ArrType_Type PyULongLongArrType_Type -#define NPY_INT64_FMT NPY_LONGLONG_FMT -#define NPY_UINT64_FMT NPY_ULONGLONG_FMT -# define MyPyLong_FromInt64 PyLong_FromLongLong -# define MyPyLong_AsInt64 PyLong_AsLongLong -# endif 
-# define NPY_MAX_LONGLONG NPY_MAX_INT64 -# define NPY_MIN_LONGLONG NPY_MIN_INT64 -# define NPY_MAX_ULONGLONG NPY_MAX_UINT64 -#elif NPY_BITSOF_LONGLONG == 128 -# ifndef NPY_INT128 -# define NPY_INT128 NPY_LONGLONG -# define NPY_UINT128 NPY_ULONGLONG - typedef npy_longlong npy_int128; - typedef npy_ulonglong npy_uint128; -# define PyInt128ScalarObject PyLongLongScalarObject -# define PyInt128ArrType_Type PyLongLongArrType_Type -# define PyUInt128ScalarObject PyULongLongScalarObject -# define PyUInt128ArrType_Type PyULongLongArrType_Type -#define NPY_INT128_FMT NPY_LONGLONG_FMT -#define NPY_UINT128_FMT NPY_ULONGLONG_FMT -# endif -# define NPY_MAX_LONGLONG NPY_MAX_INT128 -# define NPY_MIN_LONGLONG NPY_MIN_INT128 -# define NPY_MAX_ULONGLONG NPY_MAX_UINT128 -#elif NPY_BITSOF_LONGLONG == 256 -# define NPY_INT256 NPY_LONGLONG -# define NPY_UINT256 NPY_ULONGLONG - typedef npy_longlong npy_int256; - typedef npy_ulonglong npy_uint256; -# define PyInt256ScalarObject PyLongLongScalarObject -# define PyInt256ArrType_Type PyLongLongArrType_Type -# define PyUInt256ScalarObject PyULongLongScalarObject -# define PyUInt256ArrType_Type PyULongLongArrType_Type -#define NPY_INT256_FMT NPY_LONGLONG_FMT -#define NPY_UINT256_FMT NPY_ULONGLONG_FMT -# define NPY_MAX_LONGLONG NPY_MAX_INT256 -# define NPY_MIN_LONGLONG NPY_MIN_INT256 -# define NPY_MAX_ULONGLONG NPY_MAX_UINT256 -#endif - -#if NPY_BITSOF_INT == 8 -#ifndef NPY_INT8 -#define NPY_INT8 NPY_INT -#define NPY_UINT8 NPY_UINT - typedef int npy_int8; - typedef unsigned int npy_uint8; -# define PyInt8ScalarObject PyIntScalarObject -# define PyInt8ArrType_Type PyIntArrType_Type -# define PyUInt8ScalarObject PyUIntScalarObject -# define PyUInt8ArrType_Type PyUIntArrType_Type -#define NPY_INT8_FMT NPY_INT_FMT -#define NPY_UINT8_FMT NPY_UINT_FMT -#endif -#elif NPY_BITSOF_INT == 16 -#ifndef NPY_INT16 -#define NPY_INT16 NPY_INT -#define NPY_UINT16 NPY_UINT - typedef int npy_int16; - typedef unsigned int npy_uint16; -# define PyInt16ScalarObject 
PyIntScalarObject -# define PyInt16ArrType_Type PyIntArrType_Type -# define PyUInt16ScalarObject PyIntUScalarObject -# define PyUInt16ArrType_Type PyIntUArrType_Type -#define NPY_INT16_FMT NPY_INT_FMT -#define NPY_UINT16_FMT NPY_UINT_FMT -#endif -#elif NPY_BITSOF_INT == 32 -#ifndef NPY_INT32 -#define NPY_INT32 NPY_INT -#define NPY_UINT32 NPY_UINT - typedef int npy_int32; - typedef unsigned int npy_uint32; - typedef unsigned int npy_ucs4; -# define PyInt32ScalarObject PyIntScalarObject -# define PyInt32ArrType_Type PyIntArrType_Type -# define PyUInt32ScalarObject PyUIntScalarObject -# define PyUInt32ArrType_Type PyUIntArrType_Type -#define NPY_INT32_FMT NPY_INT_FMT -#define NPY_UINT32_FMT NPY_UINT_FMT -#endif -#elif NPY_BITSOF_INT == 64 -#ifndef NPY_INT64 -#define NPY_INT64 NPY_INT -#define NPY_UINT64 NPY_UINT - typedef int npy_int64; - typedef unsigned int npy_uint64; -# define PyInt64ScalarObject PyIntScalarObject -# define PyInt64ArrType_Type PyIntArrType_Type -# define PyUInt64ScalarObject PyUIntScalarObject -# define PyUInt64ArrType_Type PyUIntArrType_Type -#define NPY_INT64_FMT NPY_INT_FMT -#define NPY_UINT64_FMT NPY_UINT_FMT -# define MyPyLong_FromInt64 PyLong_FromLong -# define MyPyLong_AsInt64 PyLong_AsLong -#endif -#elif NPY_BITSOF_INT == 128 -#ifndef NPY_INT128 -#define NPY_INT128 NPY_INT -#define NPY_UINT128 NPY_UINT - typedef int npy_int128; - typedef unsigned int npy_uint128; -# define PyInt128ScalarObject PyIntScalarObject -# define PyInt128ArrType_Type PyIntArrType_Type -# define PyUInt128ScalarObject PyUIntScalarObject -# define PyUInt128ArrType_Type PyUIntArrType_Type -#define NPY_INT128_FMT NPY_INT_FMT -#define NPY_UINT128_FMT NPY_UINT_FMT -#endif -#endif - -#if NPY_BITSOF_SHORT == 8 -#ifndef NPY_INT8 -#define NPY_INT8 NPY_SHORT -#define NPY_UINT8 NPY_USHORT - typedef short npy_int8; - typedef unsigned short npy_uint8; -# define PyInt8ScalarObject PyShortScalarObject -# define PyInt8ArrType_Type PyShortArrType_Type -# define PyUInt8ScalarObject 
PyUShortScalarObject -# define PyUInt8ArrType_Type PyUShortArrType_Type -#define NPY_INT8_FMT NPY_SHORT_FMT -#define NPY_UINT8_FMT NPY_USHORT_FMT -#endif -#elif NPY_BITSOF_SHORT == 16 -#ifndef NPY_INT16 -#define NPY_INT16 NPY_SHORT -#define NPY_UINT16 NPY_USHORT - typedef short npy_int16; - typedef unsigned short npy_uint16; -# define PyInt16ScalarObject PyShortScalarObject -# define PyInt16ArrType_Type PyShortArrType_Type -# define PyUInt16ScalarObject PyUShortScalarObject -# define PyUInt16ArrType_Type PyUShortArrType_Type -#define NPY_INT16_FMT NPY_SHORT_FMT -#define NPY_UINT16_FMT NPY_USHORT_FMT -#endif -#elif NPY_BITSOF_SHORT == 32 -#ifndef NPY_INT32 -#define NPY_INT32 NPY_SHORT -#define NPY_UINT32 NPY_USHORT - typedef short npy_int32; - typedef unsigned short npy_uint32; - typedef unsigned short npy_ucs4; -# define PyInt32ScalarObject PyShortScalarObject -# define PyInt32ArrType_Type PyShortArrType_Type -# define PyUInt32ScalarObject PyUShortScalarObject -# define PyUInt32ArrType_Type PyUShortArrType_Type -#define NPY_INT32_FMT NPY_SHORT_FMT -#define NPY_UINT32_FMT NPY_USHORT_FMT -#endif -#elif NPY_BITSOF_SHORT == 64 -#ifndef NPY_INT64 -#define NPY_INT64 NPY_SHORT -#define NPY_UINT64 NPY_USHORT - typedef short npy_int64; - typedef unsigned short npy_uint64; -# define PyInt64ScalarObject PyShortScalarObject -# define PyInt64ArrType_Type PyShortArrType_Type -# define PyUInt64ScalarObject PyUShortScalarObject -# define PyUInt64ArrType_Type PyUShortArrType_Type -#define NPY_INT64_FMT NPY_SHORT_FMT -#define NPY_UINT64_FMT NPY_USHORT_FMT -# define MyPyLong_FromInt64 PyLong_FromLong -# define MyPyLong_AsInt64 PyLong_AsLong -#endif -#elif NPY_BITSOF_SHORT == 128 -#ifndef NPY_INT128 -#define NPY_INT128 NPY_SHORT -#define NPY_UINT128 NPY_USHORT - typedef short npy_int128; - typedef unsigned short npy_uint128; -# define PyInt128ScalarObject PyShortScalarObject -# define PyInt128ArrType_Type PyShortArrType_Type -# define PyUInt128ScalarObject PyUShortScalarObject -# 
define PyUInt128ArrType_Type PyUShortArrType_Type -#define NPY_INT128_FMT NPY_SHORT_FMT -#define NPY_UINT128_FMT NPY_USHORT_FMT -#endif -#endif - - -#if NPY_BITSOF_CHAR == 8 -#ifndef NPY_INT8 -#define NPY_INT8 NPY_BYTE -#define NPY_UINT8 NPY_UBYTE - typedef signed char npy_int8; - typedef unsigned char npy_uint8; -# define PyInt8ScalarObject PyByteScalarObject -# define PyInt8ArrType_Type PyByteArrType_Type -# define PyUInt8ScalarObject PyUByteScalarObject -# define PyUInt8ArrType_Type PyUByteArrType_Type -#define NPY_INT8_FMT NPY_BYTE_FMT -#define NPY_UINT8_FMT NPY_UBYTE_FMT -#endif -#elif NPY_BITSOF_CHAR == 16 -#ifndef NPY_INT16 -#define NPY_INT16 NPY_BYTE -#define NPY_UINT16 NPY_UBYTE - typedef signed char npy_int16; - typedef unsigned char npy_uint16; -# define PyInt16ScalarObject PyByteScalarObject -# define PyInt16ArrType_Type PyByteArrType_Type -# define PyUInt16ScalarObject PyUByteScalarObject -# define PyUInt16ArrType_Type PyUByteArrType_Type -#define NPY_INT16_FMT NPY_BYTE_FMT -#define NPY_UINT16_FMT NPY_UBYTE_FMT -#endif -#elif NPY_BITSOF_CHAR == 32 -#ifndef NPY_INT32 -#define NPY_INT32 NPY_BYTE -#define NPY_UINT32 NPY_UBYTE - typedef signed char npy_int32; - typedef unsigned char npy_uint32; - typedef unsigned char npy_ucs4; -# define PyInt32ScalarObject PyByteScalarObject -# define PyInt32ArrType_Type PyByteArrType_Type -# define PyUInt32ScalarObject PyUByteScalarObject -# define PyUInt32ArrType_Type PyUByteArrType_Type -#define NPY_INT32_FMT NPY_BYTE_FMT -#define NPY_UINT32_FMT NPY_UBYTE_FMT -#endif -#elif NPY_BITSOF_CHAR == 64 -#ifndef NPY_INT64 -#define NPY_INT64 NPY_BYTE -#define NPY_UINT64 NPY_UBYTE - typedef signed char npy_int64; - typedef unsigned char npy_uint64; -# define PyInt64ScalarObject PyByteScalarObject -# define PyInt64ArrType_Type PyByteArrType_Type -# define PyUInt64ScalarObject PyUByteScalarObject -# define PyUInt64ArrType_Type PyUByteArrType_Type -#define NPY_INT64_FMT NPY_BYTE_FMT -#define NPY_UINT64_FMT NPY_UBYTE_FMT -# define 
MyPyLong_FromInt64 PyLong_FromLong -# define MyPyLong_AsInt64 PyLong_AsLong -#endif -#elif NPY_BITSOF_CHAR == 128 -#ifndef NPY_INT128 -#define NPY_INT128 NPY_BYTE -#define NPY_UINT128 NPY_UBYTE - typedef signed char npy_int128; - typedef unsigned char npy_uint128; -# define PyInt128ScalarObject PyByteScalarObject -# define PyInt128ArrType_Type PyByteArrType_Type -# define PyUInt128ScalarObject PyUByteScalarObject -# define PyUInt128ArrType_Type PyUByteArrType_Type -#define NPY_INT128_FMT NPY_BYTE_FMT -#define NPY_UINT128_FMT NPY_UBYTE_FMT -#endif -#endif - - - -#if NPY_BITSOF_DOUBLE == 16 -#ifndef NPY_FLOAT16 -#define NPY_FLOAT16 NPY_DOUBLE -#define NPY_COMPLEX32 NPY_CDOUBLE - typedef double npy_float16; - typedef npy_cdouble npy_complex32; -# define PyFloat16ScalarObject PyDoubleScalarObject -# define PyComplex32ScalarObject PyCDoubleScalarObject -# define PyFloat16ArrType_Type PyDoubleArrType_Type -# define PyComplex32ArrType_Type PyCDoubleArrType_Type -#define NPY_FLOAT16_FMT NPY_DOUBLE_FMT -#define NPY_COMPLEX32_FMT NPY_CDOUBLE_FMT -#endif -#elif NPY_BITSOF_DOUBLE == 32 -#ifndef NPY_FLOAT32 -#define NPY_FLOAT32 NPY_DOUBLE -#define NPY_COMPLEX64 NPY_CDOUBLE - typedef double npy_float32; - typedef npy_cdouble npy_complex64; -# define PyFloat32ScalarObject PyDoubleScalarObject -# define PyComplex64ScalarObject PyCDoubleScalarObject -# define PyFloat32ArrType_Type PyDoubleArrType_Type -# define PyComplex64ArrType_Type PyCDoubleArrType_Type -#define NPY_FLOAT32_FMT NPY_DOUBLE_FMT -#define NPY_COMPLEX64_FMT NPY_CDOUBLE_FMT -#endif -#elif NPY_BITSOF_DOUBLE == 64 -#ifndef NPY_FLOAT64 -#define NPY_FLOAT64 NPY_DOUBLE -#define NPY_COMPLEX128 NPY_CDOUBLE - typedef double npy_float64; - typedef npy_cdouble npy_complex128; -# define PyFloat64ScalarObject PyDoubleScalarObject -# define PyComplex128ScalarObject PyCDoubleScalarObject -# define PyFloat64ArrType_Type PyDoubleArrType_Type -# define PyComplex128ArrType_Type PyCDoubleArrType_Type -#define NPY_FLOAT64_FMT 
NPY_DOUBLE_FMT -#define NPY_COMPLEX128_FMT NPY_CDOUBLE_FMT -#endif -#elif NPY_BITSOF_DOUBLE == 80 -#ifndef NPY_FLOAT80 -#define NPY_FLOAT80 NPY_DOUBLE -#define NPY_COMPLEX160 NPY_CDOUBLE - typedef double npy_float80; - typedef npy_cdouble npy_complex160; -# define PyFloat80ScalarObject PyDoubleScalarObject -# define PyComplex160ScalarObject PyCDoubleScalarObject -# define PyFloat80ArrType_Type PyDoubleArrType_Type -# define PyComplex160ArrType_Type PyCDoubleArrType_Type -#define NPY_FLOAT80_FMT NPY_DOUBLE_FMT -#define NPY_COMPLEX160_FMT NPY_CDOUBLE_FMT -#endif -#elif NPY_BITSOF_DOUBLE == 96 -#ifndef NPY_FLOAT96 -#define NPY_FLOAT96 NPY_DOUBLE -#define NPY_COMPLEX192 NPY_CDOUBLE - typedef double npy_float96; - typedef npy_cdouble npy_complex192; -# define PyFloat96ScalarObject PyDoubleScalarObject -# define PyComplex192ScalarObject PyCDoubleScalarObject -# define PyFloat96ArrType_Type PyDoubleArrType_Type -# define PyComplex192ArrType_Type PyCDoubleArrType_Type -#define NPY_FLOAT96_FMT NPY_DOUBLE_FMT -#define NPY_COMPLEX192_FMT NPY_CDOUBLE_FMT -#endif -#elif NPY_BITSOF_DOUBLE == 128 -#ifndef NPY_FLOAT128 -#define NPY_FLOAT128 NPY_DOUBLE -#define NPY_COMPLEX256 NPY_CDOUBLE - typedef double npy_float128; - typedef npy_cdouble npy_complex256; -# define PyFloat128ScalarObject PyDoubleScalarObject -# define PyComplex256ScalarObject PyCDoubleScalarObject -# define PyFloat128ArrType_Type PyDoubleArrType_Type -# define PyComplex256ArrType_Type PyCDoubleArrType_Type -#define NPY_FLOAT128_FMT NPY_DOUBLE_FMT -#define NPY_COMPLEX256_FMT NPY_CDOUBLE_FMT -#endif -#endif - - - -#if NPY_BITSOF_FLOAT == 16 -#ifndef NPY_FLOAT16 -#define NPY_FLOAT16 NPY_FLOAT -#define NPY_COMPLEX32 NPY_CFLOAT - typedef float npy_float16; - typedef npy_cfloat npy_complex32; -# define PyFloat16ScalarObject PyFloatScalarObject -# define PyComplex32ScalarObject PyCFloatScalarObject -# define PyFloat16ArrType_Type PyFloatArrType_Type -# define PyComplex32ArrType_Type PyCFloatArrType_Type -#define 
NPY_FLOAT16_FMT NPY_FLOAT_FMT -#define NPY_COMPLEX32_FMT NPY_CFLOAT_FMT -#endif -#elif NPY_BITSOF_FLOAT == 32 -#ifndef NPY_FLOAT32 -#define NPY_FLOAT32 NPY_FLOAT -#define NPY_COMPLEX64 NPY_CFLOAT - typedef float npy_float32; - typedef npy_cfloat npy_complex64; -# define PyFloat32ScalarObject PyFloatScalarObject -# define PyComplex64ScalarObject PyCFloatScalarObject -# define PyFloat32ArrType_Type PyFloatArrType_Type -# define PyComplex64ArrType_Type PyCFloatArrType_Type -#define NPY_FLOAT32_FMT NPY_FLOAT_FMT -#define NPY_COMPLEX64_FMT NPY_CFLOAT_FMT -#endif -#elif NPY_BITSOF_FLOAT == 64 -#ifndef NPY_FLOAT64 -#define NPY_FLOAT64 NPY_FLOAT -#define NPY_COMPLEX128 NPY_CFLOAT - typedef float npy_float64; - typedef npy_cfloat npy_complex128; -# define PyFloat64ScalarObject PyFloatScalarObject -# define PyComplex128ScalarObject PyCFloatScalarObject -# define PyFloat64ArrType_Type PyFloatArrType_Type -# define PyComplex128ArrType_Type PyCFloatArrType_Type -#define NPY_FLOAT64_FMT NPY_FLOAT_FMT -#define NPY_COMPLEX128_FMT NPY_CFLOAT_FMT -#endif -#elif NPY_BITSOF_FLOAT == 80 -#ifndef NPY_FLOAT80 -#define NPY_FLOAT80 NPY_FLOAT -#define NPY_COMPLEX160 NPY_CFLOAT - typedef float npy_float80; - typedef npy_cfloat npy_complex160; -# define PyFloat80ScalarObject PyFloatScalarObject -# define PyComplex160ScalarObject PyCFloatScalarObject -# define PyFloat80ArrType_Type PyFloatArrType_Type -# define PyComplex160ArrType_Type PyCFloatArrType_Type -#define NPY_FLOAT80_FMT NPY_FLOAT_FMT -#define NPY_COMPLEX160_FMT NPY_CFLOAT_FMT -#endif -#elif NPY_BITSOF_FLOAT == 96 -#ifndef NPY_FLOAT96 -#define NPY_FLOAT96 NPY_FLOAT -#define NPY_COMPLEX192 NPY_CFLOAT - typedef float npy_float96; - typedef npy_cfloat npy_complex192; -# define PyFloat96ScalarObject PyFloatScalarObject -# define PyComplex192ScalarObject PyCFloatScalarObject -# define PyFloat96ArrType_Type PyFloatArrType_Type -# define PyComplex192ArrType_Type PyCFloatArrType_Type -#define NPY_FLOAT96_FMT NPY_FLOAT_FMT -#define 
NPY_COMPLEX192_FMT NPY_CFLOAT_FMT -#endif -#elif NPY_BITSOF_FLOAT == 128 -#ifndef NPY_FLOAT128 -#define NPY_FLOAT128 NPY_FLOAT -#define NPY_COMPLEX256 NPY_CFLOAT - typedef float npy_float128; - typedef npy_cfloat npy_complex256; -# define PyFloat128ScalarObject PyFloatScalarObject -# define PyComplex256ScalarObject PyCFloatScalarObject -# define PyFloat128ArrType_Type PyFloatArrType_Type -# define PyComplex256ArrType_Type PyCFloatArrType_Type -#define NPY_FLOAT128_FMT NPY_FLOAT_FMT -#define NPY_COMPLEX256_FMT NPY_CFLOAT_FMT -#endif -#endif - - -#if NPY_BITSOF_LONGDOUBLE == 16 -#ifndef NPY_FLOAT16 -#define NPY_FLOAT16 NPY_LONGDOUBLE -#define NPY_COMPLEX32 NPY_CLONGDOUBLE - typedef npy_longdouble npy_float16; - typedef npy_clongdouble npy_complex32; -# define PyFloat16ScalarObject PyLongDoubleScalarObject -# define PyComplex32ScalarObject PyCLongDoubleScalarObject -# define PyFloat16ArrType_Type PyLongDoubleArrType_Type -# define PyComplex32ArrType_Type PyCLongDoubleArrType_Type -#define NPY_FLOAT16_FMT NPY_LONGDOUBLE_FMT -#define NPY_COMPLEX32_FMT NPY_CLONGDOUBLE_FMT -#endif -#elif NPY_BITSOF_LONGDOUBLE == 32 -#ifndef NPY_FLOAT32 -#define NPY_FLOAT32 NPY_LONGDOUBLE -#define NPY_COMPLEX64 NPY_CLONGDOUBLE - typedef npy_longdouble npy_float32; - typedef npy_clongdouble npy_complex64; -# define PyFloat32ScalarObject PyLongDoubleScalarObject -# define PyComplex64ScalarObject PyCLongDoubleScalarObject -# define PyFloat32ArrType_Type PyLongDoubleArrType_Type -# define PyComplex64ArrType_Type PyCLongDoubleArrType_Type -#define NPY_FLOAT32_FMT NPY_LONGDOUBLE_FMT -#define NPY_COMPLEX64_FMT NPY_CLONGDOUBLE_FMT -#endif -#elif NPY_BITSOF_LONGDOUBLE == 64 -#ifndef NPY_FLOAT64 -#define NPY_FLOAT64 NPY_LONGDOUBLE -#define NPY_COMPLEX128 NPY_CLONGDOUBLE - typedef npy_longdouble npy_float64; - typedef npy_clongdouble npy_complex128; -# define PyFloat64ScalarObject PyLongDoubleScalarObject -# define PyComplex128ScalarObject PyCLongDoubleScalarObject -# define PyFloat64ArrType_Type 
PyLongDoubleArrType_Type -# define PyComplex128ArrType_Type PyCLongDoubleArrType_Type -#define NPY_FLOAT64_FMT NPY_LONGDOUBLE_FMT -#define NPY_COMPLEX128_FMT NPY_CLONGDOUBLE_FMT -#endif -#elif NPY_BITSOF_LONGDOUBLE == 80 -#ifndef NPY_FLOAT80 -#define NPY_FLOAT80 NPY_LONGDOUBLE -#define NPY_COMPLEX160 NPY_CLONGDOUBLE - typedef npy_longdouble npy_float80; - typedef npy_clongdouble npy_complex160; -# define PyFloat80ScalarObject PyLongDoubleScalarObject -# define PyComplex160ScalarObject PyCLongDoubleScalarObject -# define PyFloat80ArrType_Type PyLongDoubleArrType_Type -# define PyComplex160ArrType_Type PyCLongDoubleArrType_Type -#define NPY_FLOAT80_FMT NPY_LONGDOUBLE_FMT -#define NPY_COMPLEX160_FMT NPY_CLONGDOUBLE_FMT -#endif -#elif NPY_BITSOF_LONGDOUBLE == 96 -#ifndef NPY_FLOAT96 -#define NPY_FLOAT96 NPY_LONGDOUBLE -#define NPY_COMPLEX192 NPY_CLONGDOUBLE - typedef npy_longdouble npy_float96; - typedef npy_clongdouble npy_complex192; -# define PyFloat96ScalarObject PyLongDoubleScalarObject -# define PyComplex192ScalarObject PyCLongDoubleScalarObject -# define PyFloat96ArrType_Type PyLongDoubleArrType_Type -# define PyComplex192ArrType_Type PyCLongDoubleArrType_Type -#define NPY_FLOAT96_FMT NPY_LONGDOUBLE_FMT -#define NPY_COMPLEX192_FMT NPY_CLONGDOUBLE_FMT -#endif -#elif NPY_BITSOF_LONGDOUBLE == 128 -#ifndef NPY_FLOAT128 -#define NPY_FLOAT128 NPY_LONGDOUBLE -#define NPY_COMPLEX256 NPY_CLONGDOUBLE - typedef npy_longdouble npy_float128; - typedef npy_clongdouble npy_complex256; -# define PyFloat128ScalarObject PyLongDoubleScalarObject -# define PyComplex256ScalarObject PyCLongDoubleScalarObject -# define PyFloat128ArrType_Type PyLongDoubleArrType_Type -# define PyComplex256ArrType_Type PyCLongDoubleArrType_Type -#define NPY_FLOAT128_FMT NPY_LONGDOUBLE_FMT -#define NPY_COMPLEX256_FMT NPY_CLONGDOUBLE_FMT -#endif -#elif NPY_BITSOF_LONGDOUBLE == 256 -#define NPY_FLOAT256 NPY_LONGDOUBLE -#define NPY_COMPLEX512 NPY_CLONGDOUBLE - typedef npy_longdouble npy_float256; - typedef 
npy_clongdouble npy_complex512; -# define PyFloat256ScalarObject PyLongDoubleScalarObject -# define PyComplex512ScalarObject PyCLongDoubleScalarObject -# define PyFloat256ArrType_Type PyLongDoubleArrType_Type -# define PyComplex512ArrType_Type PyCLongDoubleArrType_Type -#define NPY_FLOAT256_FMT NPY_LONGDOUBLE_FMT -#define NPY_COMPLEX512_FMT NPY_CLONGDOUBLE_FMT -#endif - -/* datetime typedefs */ -typedef npy_int64 npy_timedelta; -typedef npy_int64 npy_datetime; -#define NPY_DATETIME_FMT NPY_INT64_FMT -#define NPY_TIMEDELTA_FMT NPY_INT64_FMT - -/* End of typedefs for numarray style bit-width names */ - -#endif - diff --git a/pythonPackages/numpy/numpy/core/include/numpy/npy_cpu.h b/pythonPackages/numpy/numpy/core/include/numpy/npy_cpu.h deleted file mode 100755 index 8a29788065..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/npy_cpu.h +++ /dev/null @@ -1,107 +0,0 @@ -/* - * This set (target) cpu specific macros: - * - Possible values: - * NPY_CPU_X86 - * NPY_CPU_AMD64 - * NPY_CPU_PPC - * NPY_CPU_PPC64 - * NPY_CPU_SPARC - * NPY_CPU_S390 - * NPY_CPU_IA64 - * NPY_CPU_HPPA - * NPY_CPU_ALPHA - * NPY_CPU_ARMEL - * NPY_CPU_ARMEB - * NPY_CPU_SH_LE - * NPY_CPU_SH_BE - */ -#ifndef _NPY_CPUARCH_H_ -#define _NPY_CPUARCH_H_ - -#include "numpyconfig.h" - -#if defined( __i386__ ) || defined(i386) || defined(_M_IX86) - /* - * __i386__ is defined by gcc and Intel compiler on Linux, - * _M_IX86 by VS compiler, - * i386 by Sun compilers on opensolaris at least - */ - #define NPY_CPU_X86 -#elif defined(__x86_64__) || defined(__amd64__) || defined(__x86_64) || defined(_M_AMD64) - /* - * both __x86_64__ and __amd64__ are defined by gcc - * __x86_64 defined by sun compiler on opensolaris at least - * _M_AMD64 defined by MS compiler - */ - #define NPY_CPU_AMD64 -#elif defined(__ppc__) || defined(__powerpc__) || defined(_ARCH_PPC) - /* - * __ppc__ is defined by gcc, I remember having seen __powerpc__ once, - * but can't find it ATM - * _ARCH_PPC is used by at least gcc on AIX 
- */ - #define NPY_CPU_PPC -#elif defined(__ppc64__) - #define NPY_CPU_PPC64 -#elif defined(__sparc__) || defined(__sparc) - /* __sparc__ is defined by gcc and Forte (e.g. Sun) compilers */ - #define NPY_CPU_SPARC -#elif defined(__s390__) - #define NPY_CPU_S390 -#elif defined(__ia64) - #define NPY_CPU_IA64 -#elif defined(__hppa) - #define NPY_CPU_HPPA -#elif defined(__alpha__) - #define NPY_CPU_ALPHA -#elif defined(__arm__) && defined(__ARMEL__) - #define NPY_CPU_ARMEL -#elif defined(__arm__) && defined(__ARMEB__) - #define NPY_CPU_ARMEB -#elif defined(__sh__) && defined(__LITTLE_ENDIAN__) - #define NPY_CPU_SH_LE -#elif defined(__sh__) && defined(__BIG_ENDIAN__) - #define NPY_CPU_SH_BE -#elif defined(__MIPSEL__) - #define NPY_CPU_MIPSEL -#elif defined(__MIPSEB__) - #define NPY_CPU_MIPSEB -#else - #error Unknown CPU, please report this to numpy maintainers with \ - information about your platform (OS, CPU and compiler) -#endif - -/* - This "white-lists" the architectures that we know don't require - pointer alignment. We white-list, since the memcpy version will - work everywhere, whereas assignment will only work where pointer - dereferencing doesn't require alignment. - - TODO: There may be more architectures we can white list. 
-*/ -#if defined(NPY_CPU_X86) || defined(NPY_CPU_AMD64) - #define NPY_COPY_PYOBJECT_PTR(dst, src) (*((PyObject **)(dst)) = *((PyObject **)(src))) -#else - #if NPY_SIZEOF_PY_INTPTR_T == 4 - #define NPY_COPY_PYOBJECT_PTR(dst, src) \ - ((char*)(dst))[0] = ((char*)(src))[0]; \ - ((char*)(dst))[1] = ((char*)(src))[1]; \ - ((char*)(dst))[2] = ((char*)(src))[2]; \ - ((char*)(dst))[3] = ((char*)(src))[3]; - #elif NPY_SIZEOF_PY_INTPTR_T == 8 - #define NPY_COPY_PYOBJECT_PTR(dst, src) \ - ((char*)(dst))[0] = ((char*)(src))[0]; \ - ((char*)(dst))[1] = ((char*)(src))[1]; \ - ((char*)(dst))[2] = ((char*)(src))[2]; \ - ((char*)(dst))[3] = ((char*)(src))[3]; \ - ((char*)(dst))[4] = ((char*)(src))[4]; \ - ((char*)(dst))[5] = ((char*)(src))[5]; \ - ((char*)(dst))[6] = ((char*)(src))[6]; \ - ((char*)(dst))[7] = ((char*)(src))[7]; - #else - #error Unknown architecture, please report this to numpy maintainers with \ - information about your platform (OS, CPU and compiler) - #endif -#endif - -#endif diff --git a/pythonPackages/numpy/numpy/core/include/numpy/npy_endian.h b/pythonPackages/numpy/numpy/core/include/numpy/npy_endian.h deleted file mode 100755 index aa5ed8b2bd..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/npy_endian.h +++ /dev/null @@ -1,45 +0,0 @@ -#ifndef _NPY_ENDIAN_H_ -#define _NPY_ENDIAN_H_ - -/* - * NPY_BYTE_ORDER is set to the same value as BYTE_ORDER set by glibc in - * endian.h - */ - -#ifdef NPY_HAVE_ENDIAN_H - /* Use endian.h if available */ - #include <endian.h> - - #define NPY_BYTE_ORDER __BYTE_ORDER - #define NPY_LITTLE_ENDIAN __LITTLE_ENDIAN - #define NPY_BIG_ENDIAN __BIG_ENDIAN -#else - /* Set endianness info using target CPU */ - #include "npy_cpu.h" - - #define NPY_LITTLE_ENDIAN 1234 - #define NPY_BIG_ENDIAN 4321 - - #if defined(NPY_CPU_X86) \ - || defined(NPY_CPU_AMD64) \ - || defined(NPY_CPU_IA64) \ - || defined(NPY_CPU_ALPHA) \ - || defined(NPY_CPU_ARMEL) \ - || defined(NPY_CPU_SH_LE) \ - || defined(NPY_CPU_MIPSEL) - #define NPY_BYTE_ORDER
NPY_LITTLE_ENDIAN - #elif defined(NPY_CPU_PPC) \ - || defined(NPY_CPU_SPARC) \ - || defined(NPY_CPU_S390) \ - || defined(NPY_CPU_HPPA) \ - || defined(NPY_CPU_PPC64) \ - || defined(NPY_CPU_ARMEB) \ - || defined(NPY_CPU_SH_BE) \ - || defined(NPY_CPU_MIPSEB) - #define NPY_BYTE_ORDER NPY_BIG_ENDIAN - #else - #error Unknown CPU: can not set endianness - #endif -#endif - -#endif diff --git a/pythonPackages/numpy/numpy/core/include/numpy/npy_interrupt.h b/pythonPackages/numpy/numpy/core/include/numpy/npy_interrupt.h deleted file mode 100755 index eb72fbaf0b..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/npy_interrupt.h +++ /dev/null @@ -1,117 +0,0 @@ - -/* Signal handling: - -This header file defines macros that allow your code to handle -interrupts received during processing. Interrupts that -could reasonably be handled: - -SIGINT, SIGABRT, SIGALRM, SIGSEGV - -****Warning*************** - -Do not allow code that creates temporary memory or increases reference -counts of Python objects to be interrupted unless you handle it -differently. - -************************** - -The mechanism for handling interrupts is conceptually simple: - - - replace the signal handler with our own home-grown version - and store the old one. - - run the code to be interrupted -- if an interrupt occurs - the handler should basically just cause a return to the - calling function for finish work. - - restore the old signal handler - -Of course, every code that allows interrupts must account for -returning via the interrupt and handle clean-up correctly. But, -even still, the simple paradigm is complicated by at least three -factors. - - 1) platform portability (i.e. Microsoft says not to use longjmp - to return from signal handling. They have a __try and __except - extension to C instead but what about mingw?). - - 2) how to handle threads: apparently whether signals are delivered to - every thread of the process or the "invoking" thread is platform - dependent. 
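The deleted npy_endian.h fixes NPY_BYTE_ORDER at compile time from the CPU macro. A runtime union probe gives the same answer and is a quick sanity check when porting to a new platform; a sketch with illustrative names, not numpy API:

```c
#include <stdint.h>

/* Probe byte order at runtime: store a known 16-bit pattern and look at
 * the first byte in memory.  0x02 first means least-significant byte
 * first, i.e. little-endian (the NPY_LITTLE_ENDIAN == 1234 case). */
static int is_little_endian(void)
{
    const union { uint16_t u; uint8_t b[2]; } probe = { 0x0102 };
    return probe.b[0] == 0x02;
}

static int is_big_endian(void)
{
    return !is_little_endian();
}
```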
--- we don't handle threads for now. - - 3) do we need to worry about re-entrancy? For now, assume the - code will not call back into itself. - -Ideas: - - 1) Start by implementing an approach that works on platforms that - can use setjmp and longjmp functionality and does nothing - on other platforms. - - 2) Ignore threads --- i.e. do not mix interrupt handling and threads - - 3) Add a default signal_handler function to the C-API but have the rest - use macros. - - -Simple Interface: - - -In your C-extension: around a block of code you want to be interruptible -with a SIGINT - -NPY_SIGINT_ON -[code] -NPY_SIGINT_OFF - -In order for this to work correctly, the -[code] block must not allocate any memory or alter the reference count of any -Python objects. In other words [code] must be interruptible so that continuation -after NPY_SIGINT_OFF will only be "missing some computations" - -Interrupt handling does not work well with threads. - -*/ - -/* Add signal handling macros - Make the global variable and signal handler part of the C-API -*/ - -#ifndef NPY_INTERRUPT_H -#define NPY_INTERRUPT_H - -#ifndef NPY_NO_SIGNAL - -#include <setjmp.h> -#include <signal.h> - -#ifndef sigsetjmp - -#define SIGSETJMP(arg1, arg2) setjmp(arg1) -#define SIGLONGJMP(arg1, arg2) longjmp(arg1, arg2) -#define SIGJMP_BUF jmp_buf - -#else - -#define SIGSETJMP(arg1, arg2) sigsetjmp(arg1, arg2) -#define SIGLONGJMP(arg1, arg2) siglongjmp(arg1, arg2) -#define SIGJMP_BUF sigjmp_buf - -#endif - -# define NPY_SIGINT_ON { \ - PyOS_sighandler_t _npy_sig_save; \ - _npy_sig_save = PyOS_setsig(SIGINT, _PyArray_SigintHandler); \ - if (SIGSETJMP(*((SIGJMP_BUF *)_PyArray_GetSigintBuf()), \ - 1) == 0) { \ - -# define NPY_SIGINT_OFF } \ - PyOS_setsig(SIGINT, _npy_sig_save); \ - } - -#else /* NPY_NO_SIGNAL */ - -# define NPY_SIGINT_ON -# define NPY_SIGINT_OFF - -#endif /* NPY_NO_SIGNAL */ - -#endif /* NPY_INTERRUPT_H */ diff --git a/pythonPackages/numpy/numpy/core/include/numpy/npy_math.h
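The control flow that NPY_SIGINT_ON/NPY_SIGINT_OFF wrap around `[code]` is plain setjmp/longjmp: the handler jumps back so execution resumes after the bracket with only "missing some computations". The sketch below simulates the interrupt with a direct longjmp instead of installing a real SIGINT handler; all names are illustrative.

```c
#include <setjmp.h>

static jmp_buf demo_buf;

/* Plays the role of _PyArray_SigintHandler: abandon the computation and
 * jump back to the point saved by setjmp. */
static void fake_interrupt(void)
{
    longjmp(demo_buf, 1);
}

/* Counts loop steps; volatile because it is modified between setjmp and
 * longjmp.  interrupt_at >= 10 means the work runs to completion. */
static int interruptible_work(int interrupt_at)
{
    volatile int steps_done = 0;
    if (setjmp(demo_buf) == 0) {      /* like NPY_SIGINT_ON */
        for (int i = 0; i < 10; i++) {
            if (i == interrupt_at)
                fake_interrupt();      /* as the signal handler would */
            steps_done++;
        }
    }                                  /* like NPY_SIGINT_OFF: resume here */
    return steps_done;
}
```

`interruptible_work(3)` returns 3 and `interruptible_work(100)` returns 10: state written before the interrupt survives, which is exactly why the bracketed code must not be holding allocations or reference counts when the jump happens.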
b/pythonPackages/numpy/numpy/core/include/numpy/npy_math.h deleted file mode 100755 index d53900e19c..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/npy_math.h +++ /dev/null @@ -1,413 +0,0 @@ -#ifndef __NPY_MATH_C99_H_ -#define __NPY_MATH_C99_H_ - -#include -#include - -/* - * NAN and INFINITY like macros (same behavior as glibc for NAN, same as C99 - * for INFINITY) - * - * XXX: I should test whether INFINITY and NAN are available on the platform - */ -NPY_INLINE static float __npy_inff(void) -{ - const union { npy_uint32 __i; float __f;} __bint = {0x7f800000UL}; - return __bint.__f; -} - -NPY_INLINE static float __npy_nanf(void) -{ - const union { npy_uint32 __i; float __f;} __bint = {0x7fc00000UL}; - return __bint.__f; -} - -NPY_INLINE static float __npy_pzerof(void) -{ - const union { npy_uint32 __i; float __f;} __bint = {0x00000000UL}; - return __bint.__f; -} - -NPY_INLINE static float __npy_nzerof(void) -{ - const union { npy_uint32 __i; float __f;} __bint = {0x80000000UL}; - return __bint.__f; -} - -#define NPY_INFINITYF __npy_inff() -#define NPY_NANF __npy_nanf() -#define NPY_PZEROF __npy_pzerof() -#define NPY_NZEROF __npy_nzerof() - -#define NPY_INFINITY ((npy_double)NPY_INFINITYF) -#define NPY_NAN ((npy_double)NPY_NANF) -#define NPY_PZERO ((npy_double)NPY_PZEROF) -#define NPY_NZERO ((npy_double)NPY_NZEROF) - -#define NPY_INFINITYL ((npy_longdouble)NPY_INFINITYF) -#define NPY_NANL ((npy_longdouble)NPY_NANF) -#define NPY_PZEROL ((npy_longdouble)NPY_PZEROF) -#define NPY_NZEROL ((npy_longdouble)NPY_NZEROF) - -/* - * Useful constants - */ -#define NPY_E 2.718281828459045235360287471352662498 /* e */ -#define NPY_LOG2E 1.442695040888963407359924681001892137 /* log_2 e */ -#define NPY_LOG10E 0.434294481903251827651128918916605082 /* log_10 e */ -#define NPY_LOGE2 0.693147180559945309417232121458176568 /* log_e 2 */ -#define NPY_LOGE10 2.302585092994045684017991454684364208 /* log_e 10 */ -#define NPY_PI 3.141592653589793238462643383279502884 /* 
pi */ -#define NPY_PI_2 1.570796326794896619231321691639751442 /* pi/2 */ -#define NPY_PI_4 0.785398163397448309615660845819875721 /* pi/4 */ -#define NPY_1_PI 0.318309886183790671537767526745028724 /* 1/pi */ -#define NPY_2_PI 0.636619772367581343075535053490057448 /* 2/pi */ -#define NPY_EULER 0.577215664901532860606512090082402431 /* Euler constant */ -#define NPY_SQRT2 1.414213562373095048801688724209698079 /* sqrt(2) */ -#define NPY_SQRT1_2 0.707106781186547524400844362104849039 /* 1/sqrt(2) */ - -#define NPY_Ef 2.718281828459045235360287471352662498F /* e */ -#define NPY_LOG2Ef 1.442695040888963407359924681001892137F /* log_2 e */ -#define NPY_LOG10Ef 0.434294481903251827651128918916605082F /* log_10 e */ -#define NPY_LOGE2f 0.693147180559945309417232121458176568F /* log_e 2 */ -#define NPY_LOGE10f 2.302585092994045684017991454684364208F /* log_e 10 */ -#define NPY_PIf 3.141592653589793238462643383279502884F /* pi */ -#define NPY_PI_2f 1.570796326794896619231321691639751442F /* pi/2 */ -#define NPY_PI_4f 0.785398163397448309615660845819875721F /* pi/4 */ -#define NPY_1_PIf 0.318309886183790671537767526745028724F /* 1/pi */ -#define NPY_2_PIf 0.636619772367581343075535053490057448F /* 2/pi */ -#define NPY_EULERf 0.577215664901532860606512090082402431F /* Euler constan*/ -#define NPY_SQRT2f 1.414213562373095048801688724209698079F /* sqrt(2) */ -#define NPY_SQRT1_2f 0.707106781186547524400844362104849039F /* 1/sqrt(2) */ - -#define NPY_El 2.718281828459045235360287471352662498L /* e */ -#define NPY_LOG2El 1.442695040888963407359924681001892137L /* log_2 e */ -#define NPY_LOG10El 0.434294481903251827651128918916605082L /* log_10 e */ -#define NPY_LOGE2l 0.693147180559945309417232121458176568L /* log_e 2 */ -#define NPY_LOGE10l 2.302585092994045684017991454684364208L /* log_e 10 */ -#define NPY_PIl 3.141592653589793238462643383279502884L /* pi */ -#define NPY_PI_2l 1.570796326794896619231321691639751442L /* pi/2 */ -#define NPY_PI_4l 
0.785398163397448309615660845819875721L /* pi/4 */ -#define NPY_1_PIl 0.318309886183790671537767526745028724L /* 1/pi */ -#define NPY_2_PIl 0.636619772367581343075535053490057448L /* 2/pi */ -#define NPY_EULERl 0.577215664901532860606512090082402431L /* Euler constan*/ -#define NPY_SQRT2l 1.414213562373095048801688724209698079L /* sqrt(2) */ -#define NPY_SQRT1_2l 0.707106781186547524400844362104849039L /* 1/sqrt(2) */ - -/* - * C99 double math funcs - */ -double npy_sin(double x); -double npy_cos(double x); -double npy_tan(double x); -double npy_sinh(double x); -double npy_cosh(double x); -double npy_tanh(double x); - -double npy_asin(double x); -double npy_acos(double x); -double npy_atan(double x); -double npy_aexp(double x); -double npy_alog(double x); -double npy_asqrt(double x); -double npy_afabs(double x); - -double npy_log(double x); -double npy_log10(double x); -double npy_exp(double x); -double npy_sqrt(double x); - -double npy_fabs(double x); -double npy_ceil(double x); -double npy_fmod(double x, double y); -double npy_floor(double x); - -double npy_expm1(double x); -double npy_log1p(double x); -double npy_hypot(double x, double y); -double npy_acosh(double x); -double npy_asinh(double xx); -double npy_atanh(double x); -double npy_rint(double x); -double npy_trunc(double x); -double npy_exp2(double x); -double npy_log2(double x); - -double npy_atan2(double x, double y); -double npy_pow(double x, double y); -double npy_modf(double x, double* y); - -double npy_copysign(double x, double y); -double npy_nextafter(double x, double y); -double npy_spacing(double x); - -/* - * IEEE 754 fpu handling. 
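The __npy_nanf/__npy_inff helpers earlier in this header build IEEE 754 specials from raw bit patterns through a union, and the header's fallback NaN test is just `(x) != (x)`. A standalone sketch of both tricks, assuming the usual 32-bit IEEE float (demo names, not numpy API):

```c
#include <stdint.h>

/* Reinterpret a 32-bit pattern as a float, the same union trick the
 * __npy_*f helpers use (0x7f800000 = +inf, 0x7fc00000 = quiet NaN,
 * 0x80000000 = negative zero). */
static float float_from_bits(uint32_t bits)
{
    union { uint32_t i; float f; } u;
    u.i = bits;
    return u.f;
}

/* Fallback NaN test: NaN is the only value that compares unequal to
 * itself, so no C99 isnan() is needed. */
#define demo_isnan(x) ((x) != (x))
```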
Those are guaranteed to be macros - */ -#ifndef NPY_HAVE_DECL_ISNAN - #define npy_isnan(x) ((x) != (x)) -#else - #define npy_isnan(x) isnan((x)) -#endif - -#ifndef NPY_HAVE_DECL_ISFINITE - #define npy_isfinite(x) !npy_isnan((x) + (-x)) -#else - #define npy_isfinite(x) isfinite((x)) -#endif - -#ifndef NPY_HAVE_DECL_ISINF - #define npy_isinf(x) (!npy_isfinite(x) && !npy_isnan(x)) -#else - #define npy_isinf(x) isinf((x)) -#endif - -#ifndef NPY_HAVE_DECL_SIGNBIT - int _npy_signbit_f(float x); - int _npy_signbit_d(double x); - int _npy_signbit_ld(npy_longdouble x); - #define npy_signbit(x) \ - (sizeof (x) == sizeof (long double) ? _npy_signbit_ld (x) \ - : sizeof (x) == sizeof (double) ? _npy_signbit_d (x) \ - : _npy_signbit_f (x)) -#else - #define npy_signbit(x) signbit((x)) -#endif - -/* - * float C99 math functions - */ - -float npy_sinf(float x); -float npy_cosf(float x); -float npy_tanf(float x); -float npy_sinhf(float x); -float npy_coshf(float x); -float npy_tanhf(float x); -float npy_fabsf(float x); -float npy_floorf(float x); -float npy_ceilf(float x); -float npy_rintf(float x); -float npy_truncf(float x); -float npy_sqrtf(float x); -float npy_log10f(float x); -float npy_logf(float x); -float npy_expf(float x); -float npy_expm1f(float x); -float npy_asinf(float x); -float npy_acosf(float x); -float npy_atanf(float x); -float npy_asinhf(float x); -float npy_acoshf(float x); -float npy_atanhf(float x); -float npy_log1pf(float x); -float npy_exp2f(float x); -float npy_log2f(float x); - -float npy_atan2f(float x, float y); -float npy_hypotf(float x, float y); -float npy_powf(float x, float y); -float npy_fmodf(float x, float y); - -float npy_modff(float x, float* y); - -float npy_copysignf(float x, float y); -float npy_nextafterf(float x, float y); -float npy_spacingf(float x); - -/* - * float C99 math functions - */ - -npy_longdouble npy_sinl(npy_longdouble x); -npy_longdouble npy_cosl(npy_longdouble x); -npy_longdouble npy_tanl(npy_longdouble x); -npy_longdouble 
npy_sinhl(npy_longdouble x); -npy_longdouble npy_coshl(npy_longdouble x); -npy_longdouble npy_tanhl(npy_longdouble x); -npy_longdouble npy_fabsl(npy_longdouble x); -npy_longdouble npy_floorl(npy_longdouble x); -npy_longdouble npy_ceill(npy_longdouble x); -npy_longdouble npy_rintl(npy_longdouble x); -npy_longdouble npy_truncl(npy_longdouble x); -npy_longdouble npy_sqrtl(npy_longdouble x); -npy_longdouble npy_log10l(npy_longdouble x); -npy_longdouble npy_logl(npy_longdouble x); -npy_longdouble npy_expl(npy_longdouble x); -npy_longdouble npy_expm1l(npy_longdouble x); -npy_longdouble npy_asinl(npy_longdouble x); -npy_longdouble npy_acosl(npy_longdouble x); -npy_longdouble npy_atanl(npy_longdouble x); -npy_longdouble npy_asinhl(npy_longdouble x); -npy_longdouble npy_acoshl(npy_longdouble x); -npy_longdouble npy_atanhl(npy_longdouble x); -npy_longdouble npy_log1pl(npy_longdouble x); -npy_longdouble npy_exp2l(npy_longdouble x); -npy_longdouble npy_log2l(npy_longdouble x); - -npy_longdouble npy_atan2l(npy_longdouble x, npy_longdouble y); -npy_longdouble npy_hypotl(npy_longdouble x, npy_longdouble y); -npy_longdouble npy_powl(npy_longdouble x, npy_longdouble y); -npy_longdouble npy_fmodl(npy_longdouble x, npy_longdouble y); - -npy_longdouble npy_modfl(npy_longdouble x, npy_longdouble* y); - -npy_longdouble npy_copysignl(npy_longdouble x, npy_longdouble y); -npy_longdouble npy_nextafterl(npy_longdouble x, npy_longdouble y); -npy_longdouble npy_spacingl(npy_longdouble x); - -/* - * Non standard functions - */ -double npy_deg2rad(double x); -double npy_rad2deg(double x); -double npy_logaddexp(double x, double y); -double npy_logaddexp2(double x, double y); - -float npy_deg2radf(float x); -float npy_rad2degf(float x); -float npy_logaddexpf(float x, float y); -float npy_logaddexp2f(float x, float y); - -npy_longdouble npy_deg2radl(npy_longdouble x); -npy_longdouble npy_rad2degl(npy_longdouble x); -npy_longdouble npy_logaddexpl(npy_longdouble x, npy_longdouble y); -npy_longdouble 
npy_logaddexp2l(npy_longdouble x, npy_longdouble y); - -#define npy_degrees npy_rad2deg -#define npy_degreesf npy_rad2degf -#define npy_degreesl npy_rad2degl - -#define npy_radians npy_deg2rad -#define npy_radiansf npy_deg2radf -#define npy_radiansl npy_deg2radl - -/* - * Complex declarations - */ - -/* - * C99 specifies that complex numbers have the same representation as - * an array of two elements, where the first element is the real part - * and the second element is the imaginary part. - */ -#define __NPY_CPACK_IMP(x, y, type, ctype) \ - union { \ - ctype z; \ - type a[2]; \ - } z1; \ - \ - z1.a[0] = (x); \ - z1.a[1] = (y); \ - \ - return z1.z; - -static NPY_INLINE npy_cdouble npy_cpack(double x, double y) -{ - __NPY_CPACK_IMP(x, y, double, npy_cdouble); -} - -static NPY_INLINE npy_cfloat npy_cpackf(float x, float y) -{ - __NPY_CPACK_IMP(x, y, float, npy_cfloat); -} - -static NPY_INLINE npy_clongdouble npy_cpackl(npy_longdouble x, npy_longdouble y) -{ - __NPY_CPACK_IMP(x, y, npy_longdouble, npy_clongdouble); -} -#undef __NPY_CPACK_IMP - -/* - * Same remark as above, but in the other direction: extract first/second - * member of a complex number, assuming a C99-compatible representation - * - * These are defined as static inline, so that a reasonable compiler will - * most likely compile this to one or two instructions (on CISC at least) - */ -#define __NPY_CEXTRACT_IMP(z, index, type, ctype) \ - union { \ - ctype z; \ - type a[2]; \ - } __z_repr; \ - __z_repr.z = z; \ - \ - return __z_repr.a[index]; - -static NPY_INLINE double npy_creal(npy_cdouble z) -{ - __NPY_CEXTRACT_IMP(z, 0, double, npy_cdouble); -} - -static NPY_INLINE double npy_cimag(npy_cdouble z) -{ - __NPY_CEXTRACT_IMP(z, 1, double, npy_cdouble); -} - -static NPY_INLINE float npy_crealf(npy_cfloat z) -{ - __NPY_CEXTRACT_IMP(z, 0, float, npy_cfloat); -} - -static NPY_INLINE float npy_cimagf(npy_cfloat z) -{ - __NPY_CEXTRACT_IMP(z, 1, float, npy_cfloat); -} - -static NPY_INLINE npy_longdouble
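The pack/extract macros above rely on the C99 guarantee that a complex type is laid out like `type[2]` with the real part first. A self-contained sketch of the same union round-trip, with a plain struct standing in for npy_cdouble:

```c
/* demo_cdouble mimics npy_cdouble's layout: two doubles, real first. */
typedef struct { double real, imag; } demo_cdouble;

/* Same shape as __NPY_CPACK_IMP: write both parts into the array view
 * of a union, read the result back as the complex type. */
static demo_cdouble demo_cpack(double x, double y)
{
    union { demo_cdouble z; double a[2]; } u;
    u.a[0] = x;
    u.a[1] = y;
    return u.z;
}

/* Same shape as __NPY_CEXTRACT_IMP with index 0 and 1. */
static double demo_creal(demo_cdouble z)
{
    union { demo_cdouble z; double a[2]; } u;
    u.z = z;
    return u.a[0];
}

static double demo_cimag(demo_cdouble z)
{
    union { demo_cdouble z; double a[2]; } u;
    u.z = z;
    return u.a[1];
}
```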
npy_creall(npy_clongdouble z) -{ - __NPY_CEXTRACT_IMP(z, 0, npy_longdouble, npy_clongdouble); -} - -static NPY_INLINE npy_longdouble npy_cimagl(npy_clongdouble z) -{ - __NPY_CEXTRACT_IMP(z, 1, npy_longdouble, npy_clongdouble); -} -#undef __NPY_CEXTRACT_IMP - -/* - * Double precision complex functions - */ -double npy_cabs(npy_cdouble z); -double npy_carg(npy_cdouble z); - -npy_cdouble npy_cexp(npy_cdouble z); -npy_cdouble npy_clog(npy_cdouble z); -npy_cdouble npy_cpow(npy_cdouble x, npy_cdouble y); - -npy_cdouble npy_csqrt(npy_cdouble z); - -npy_cdouble npy_ccos(npy_cdouble z); -npy_cdouble npy_csin(npy_cdouble z); - -/* - * Single precision complex functions - */ -float npy_cabsf(npy_cfloat z); -float npy_cargf(npy_cfloat z); - -npy_cfloat npy_cexpf(npy_cfloat z); -npy_cfloat npy_clogf(npy_cfloat z); -npy_cfloat npy_cpowf(npy_cfloat x, npy_cfloat y); - -npy_cfloat npy_csqrtf(npy_cfloat z); - -npy_cfloat npy_ccosf(npy_cfloat z); -npy_cfloat npy_csinf(npy_cfloat z); - -/* - * Extended precision complex functions - */ -npy_longdouble npy_cabsl(npy_clongdouble z); -npy_longdouble npy_cargl(npy_clongdouble z); - -npy_clongdouble npy_cexpl(npy_clongdouble z); -npy_clongdouble npy_clogl(npy_clongdouble z); -npy_clongdouble npy_cpowl(npy_clongdouble x, npy_clongdouble y); - -npy_clongdouble npy_csqrtl(npy_clongdouble z); - -npy_clongdouble npy_ccosl(npy_clongdouble z); -npy_clongdouble npy_csinl(npy_clongdouble z); - -#endif diff --git a/pythonPackages/numpy/numpy/core/include/numpy/npy_os.h b/pythonPackages/numpy/numpy/core/include/numpy/npy_os.h deleted file mode 100755 index 9228c3916e..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/npy_os.h +++ /dev/null @@ -1,30 +0,0 @@ -#ifndef _NPY_OS_H_ -#define _NPY_OS_H_ - -#if defined(linux) || defined(__linux) || defined(__linux__) - #define NPY_OS_LINUX -#elif defined(__FreeBSD__) || defined(__NetBSD__) || \ - defined(__OpenBSD__) || defined(__DragonFly__) - #define NPY_OS_BSD - #ifdef __FreeBSD__ - #define 
NPY_OS_FREEBSD - #elif defined(__NetBSD__) - #define NPY_OS_NETBSD - #elif defined(__OpenBSD__) - #define NPY_OS_OPENBSD - #elif defined(__DragonFly__) - #define NPY_OS_DRAGONFLY - #endif -#elif defined(sun) || defined(__sun) - #define NPY_OS_SOLARIS -#elif defined(__CYGWIN__) - #define NPY_OS_CYGWIN -#elif defined(_WIN32) || defined(__WIN32__) || defined(WIN32) - #define NPY_OS_WIN32 -#elif defined(__APPLE__) - #define NPY_OS_DARWIN -#else - #define NPY_OS_UNKNOWN -#endif - -#endif diff --git a/pythonPackages/numpy/numpy/core/include/numpy/numpyconfig.h b/pythonPackages/numpy/numpy/core/include/numpy/numpyconfig.h deleted file mode 100755 index ff7938cd96..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/numpyconfig.h +++ /dev/null @@ -1,24 +0,0 @@ -#ifndef _NPY_NUMPYCONFIG_H_ -#define _NPY_NUMPYCONFIG_H_ - -#include "_numpyconfig.h" - -/* - * On Mac OS X, because there is only one configuration stage for all the archs - * in universal builds, any macro which depends on the arch needs to be - * harcoded - */ -#ifdef __APPLE__ - #undef NPY_SIZEOF_LONG - #undef NPY_SIZEOF_PY_INTPTR_T - - #ifdef __LP64__ - #define NPY_SIZEOF_LONG 8 - #define NPY_SIZEOF_PY_INTPTR_T 8 - #else - #define NPY_SIZEOF_LONG 4 - #define NPY_SIZEOF_PY_INTPTR_T 4 - #endif -#endif - -#endif diff --git a/pythonPackages/numpy/numpy/core/include/numpy/old_defines.h b/pythonPackages/numpy/numpy/core/include/numpy/old_defines.h deleted file mode 100755 index a9dfd5c21f..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/old_defines.h +++ /dev/null @@ -1,173 +0,0 @@ -#define NDARRAY_VERSION NPY_VERSION - -#define PyArray_MIN_BUFSIZE NPY_MIN_BUFSIZE -#define PyArray_MAX_BUFSIZE NPY_MAX_BUFSIZE -#define PyArray_BUFSIZE NPY_BUFSIZE - -#define PyArray_PRIORITY NPY_PRIORITY -#define PyArray_SUBTYPE_PRIORITY NPY_PRIORITY -#define PyArray_NUM_FLOATTYPE NPY_NUM_FLOATTYPE - -#define NPY_MAX PyArray_MAX -#define NPY_MIN PyArray_MIN - -#define PyArray_TYPES NPY_TYPES -#define 
PyArray_BOOL NPY_BOOL -#define PyArray_BYTE NPY_BYTE -#define PyArray_UBYTE NPY_UBYTE -#define PyArray_SHORT NPY_SHORT -#define PyArray_USHORT NPY_USHORT -#define PyArray_INT NPY_INT -#define PyArray_UINT NPY_UINT -#define PyArray_LONG NPY_LONG -#define PyArray_ULONG NPY_ULONG -#define PyArray_LONGLONG NPY_LONGLONG -#define PyArray_ULONGLONG NPY_ULONGLONG -#define PyArray_FLOAT NPY_FLOAT -#define PyArray_DOUBLE NPY_DOUBLE -#define PyArray_LONGDOUBLE NPY_LONGDOUBLE -#define PyArray_CFLOAT NPY_CFLOAT -#define PyArray_CDOUBLE NPY_CDOUBLE -#define PyArray_CLONGDOUBLE NPY_CLONGDOUBLE -#define PyArray_OBJECT NPY_OBJECT -#define PyArray_STRING NPY_STRING -#define PyArray_UNICODE NPY_UNICODE -#define PyArray_VOID NPY_VOID -#define PyArray_DATETIME NPY_DATETIME -#define PyArray_TIMEDELTA NPY_TIMEDELTA -#define PyArray_NTYPES NPY_NTYPES -#define PyArray_NOTYPE NPY_NOTYPE -#define PyArray_CHAR NPY_CHAR -#define PyArray_USERDEF NPY_USERDEF -#define PyArray_NUMUSERTYPES NPY_NUMUSERTYPES - -#define PyArray_INTP NPY_INTP -#define PyArray_UINTP NPY_UINTP - -#define PyArray_INT8 NPY_INT8 -#define PyArray_UINT8 NPY_UINT8 -#define PyArray_INT16 NPY_INT16 -#define PyArray_UINT16 NPY_UINT16 -#define PyArray_INT32 NPY_INT32 -#define PyArray_UINT32 NPY_UINT32 - -#ifdef NPY_INT64 -#define PyArray_INT64 NPY_INT64 -#define PyArray_UINT64 NPY_UINT64 -#endif - -#ifdef NPY_INT128 -#define PyArray_INT128 NPY_INT128 -#define PyArray_UINT128 NPY_UINT128 -#endif - -#ifdef NPY_FLOAT16 -#define PyArray_FLOAT16 NPY_FLOAT16 -#define PyArray_COMPLEX32 NPY_COMPLEX32 -#endif - -#ifdef NPY_FLOAT80 -#define PyArray_FLOAT80 NPY_FLOAT80 -#define PyArray_COMPLEX160 NPY_COMPLEX160 -#endif - -#ifdef NPY_FLOAT96 -#define PyArray_FLOAT96 NPY_FLOAT96 -#define PyArray_COMPLEX192 NPY_COMPLEX192 -#endif - -#ifdef NPY_FLOAT128 -#define PyArray_FLOAT128 NPY_FLOAT128 -#define PyArray_COMPLEX256 NPY_COMPLEX256 -#endif - -#define PyArray_FLOAT32 NPY_FLOAT32 -#define PyArray_COMPLEX64 NPY_COMPLEX64 -#define PyArray_FLOAT64 
NPY_FLOAT64 -#define PyArray_COMPLEX128 NPY_COMPLEX128 - - -#define PyArray_TYPECHAR NPY_TYPECHAR -#define PyArray_BOOLLTR NPY_BOOLLTR -#define PyArray_BYTELTR NPY_BYTELTR -#define PyArray_UBYTELTR NPY_UBYTELTR -#define PyArray_SHORTLTR NPY_SHORTLTR -#define PyArray_USHORTLTR NPY_USHORTLTR -#define PyArray_INTLTR NPY_INTLTR -#define PyArray_UINTLTR NPY_UINTLTR -#define PyArray_LONGLTR NPY_LONGLTR -#define PyArray_ULONGLTR NPY_ULONGLTR -#define PyArray_LONGLONGLTR NPY_LONGLONGLTR -#define PyArray_ULONGLONGLTR NPY_ULONGLONGLTR -#define PyArray_FLOATLTR NPY_FLOATLTR -#define PyArray_DOUBLELTR NPY_DOUBLELTR -#define PyArray_LONGDOUBLELTR NPY_LONGDOUBLELTR -#define PyArray_CFLOATLTR NPY_CFLOATLTR -#define PyArray_CDOUBLELTR NPY_CDOUBLELTR -#define PyArray_CLONGDOUBLELTR NPY_CLONGDOUBLELTR -#define PyArray_OBJECTLTR NPY_OBJECTLTR -#define PyArray_STRINGLTR NPY_STRINGLTR -#define PyArray_STRINGLTR2 NPY_STRINGLTR2 -#define PyArray_UNICODELTR NPY_UNICODELTR -#define PyArray_VOIDLTR NPY_VOIDLTR -#define PyArray_DATETIMELTR NPY_DATETIMELTR -#define PyArray_TIMEDELTALTR NPY_TIMEDELTALTR -#define PyArray_CHARLTR NPY_CHARLTR -#define PyArray_INTPLTR NPY_INTPLTR -#define PyArray_UINTPLTR NPY_UINTPLTR -#define PyArray_GENBOOLLTR NPY_GENBOOLLTR -#define PyArray_SIGNEDLTR NPY_SIGNEDLTR -#define PyArray_UNSIGNEDLTR NPY_UNSIGNEDLTR -#define PyArray_FLOATINGLTR NPY_FLOATINGLTR -#define PyArray_COMPLEXLTR NPY_COMPLEXLTR - -#define PyArray_QUICKSORT NPY_QUICKSORT -#define PyArray_HEAPSORT NPY_HEAPSORT -#define PyArray_MERGESORT NPY_MERGESORT -#define PyArray_SORTKIND NPY_SORTKIND -#define PyArray_NSORTS NPY_NSORTS - -#define PyArray_NOSCALAR NPY_NOSCALAR -#define PyArray_BOOL_SCALAR NPY_BOOL_SCALAR -#define PyArray_INTPOS_SCALAR NPY_INTPOS_SCALAR -#define PyArray_INTNEG_SCALAR NPY_INTNEG_SCALAR -#define PyArray_FLOAT_SCALAR NPY_FLOAT_SCALAR -#define PyArray_COMPLEX_SCALAR NPY_COMPLEX_SCALAR -#define PyArray_OBJECT_SCALAR NPY_OBJECT_SCALAR -#define PyArray_SCALARKIND NPY_SCALARKIND 
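old_defines.h is pure token renaming: every PyArray_* spelling expands to its NPY_* replacement, so code written against the old names keeps compiling unchanged. The pattern in miniature, where the enum values and names are illustrative rather than the real numpy type codes:

```c
/* New names ... */
enum { NPY_DEMO_BOOL = 0, NPY_DEMO_BYTE = 1 };

/* ... and compatibility aliases for the old spellings.  After macro
 * expansion the two are literally the same token, so old and new code
 * can be mixed freely. */
#define PyArrayDemo_BOOL NPY_DEMO_BOOL
#define PyArrayDemo_BYTE NPY_DEMO_BYTE
```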
-#define PyArray_NSCALARKINDS NPY_NSCALARKINDS - -#define PyArray_ANYORDER NPY_ANYORDER -#define PyArray_CORDER NPY_CORDER -#define PyArray_FORTRANORDER NPY_FORTRANORDER -#define PyArray_ORDER NPY_ORDER - -#define PyDescr_ISBOOL PyDataType_ISBOOL -#define PyDescr_ISUNSIGNED PyDataType_ISUNSIGNED -#define PyDescr_ISSIGNED PyDataType_ISSIGNED -#define PyDescr_ISINTEGER PyDataType_ISINTEGER -#define PyDescr_ISFLOAT PyDataType_ISFLOAT -#define PyDescr_ISNUMBER PyDataType_ISNUMBER -#define PyDescr_ISSTRING PyDataType_ISSTRING -#define PyDescr_ISCOMPLEX PyDataType_ISCOMPLEX -#define PyDescr_ISPYTHON PyDataType_ISPYTHON -#define PyDescr_ISFLEXIBLE PyDataType_ISFLEXIBLE -#define PyDescr_ISUSERDEF PyDataType_ISUSERDEF -#define PyDescr_ISEXTENDED PyDataType_ISEXTENDED -#define PyDescr_ISOBJECT PyDataType_ISOBJECT -#define PyDescr_HASFIELDS PyDataType_HASFIELDS - -#define PyArray_LITTLE NPY_LITTLE -#define PyArray_BIG NPY_BIG -#define PyArray_NATIVE NPY_NATIVE -#define PyArray_SWAP NPY_SWAP -#define PyArray_IGNORE NPY_IGNORE - -#define PyArray_NATBYTE NPY_NATBYTE -#define PyArray_OPPBYTE NPY_OPPBYTE - -#define PyArray_MAX_ELSIZE NPY_MAX_ELSIZE - -#define PyArray_USE_PYMEM NPY_USE_PYMEM - -#define PyArray_RemoveLargest PyArray_RemoveSmallest diff --git a/pythonPackages/numpy/numpy/core/include/numpy/oldnumeric.h b/pythonPackages/numpy/numpy/core/include/numpy/oldnumeric.h deleted file mode 100755 index 51dba29cd4..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/oldnumeric.h +++ /dev/null @@ -1,23 +0,0 @@ -#include "arrayobject.h" - -#ifndef REFCOUNT -# define REFCOUNT NPY_REFCOUNT -# define MAX_ELSIZE 16 -#endif - -#define PyArray_UNSIGNED_TYPES -#define PyArray_SBYTE PyArray_BYTE -#define PyArray_CopyArray PyArray_CopyInto -#define _PyArray_multiply_list PyArray_MultiplyIntList -#define PyArray_ISSPACESAVER(m) NPY_FALSE -#define PyScalarArray_Check PyArray_CheckScalar - -#define CONTIGUOUS NPY_CONTIGUOUS -#define OWN_DIMENSIONS 0 -#define OWN_STRIDES 0 -#define 
OWN_DATA NPY_OWNDATA -#define SAVESPACE 0 -#define SAVESPACEBIT 0 - -#undef import_array -#define import_array() { if (_import_array() < 0) {PyErr_Print(); PyErr_SetString(PyExc_ImportError, "numpy.core.multiarray failed to import"); } } diff --git a/pythonPackages/numpy/numpy/core/include/numpy/ufuncobject.h b/pythonPackages/numpy/numpy/core/include/numpy/ufuncobject.h deleted file mode 100755 index b795b54188..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/ufuncobject.h +++ /dev/null @@ -1,397 +0,0 @@ -#ifndef Py_UFUNCOBJECT_H -#define Py_UFUNCOBJECT_H -#ifdef __cplusplus -extern "C" { -#endif - -typedef void (*PyUFuncGenericFunction) (char **, npy_intp *, npy_intp *, void *); - -typedef struct { - PyObject_HEAD - int nin, nout, nargs; - int identity; - PyUFuncGenericFunction *functions; - void **data; - int ntypes; - int check_return; - char *name, *types; - char *doc; - void *ptr; - PyObject *obj; - PyObject *userloops; - - /* generalized ufunc */ - int core_enabled; /* 0 for scalar ufunc; 1 for generalized ufunc */ - int core_num_dim_ix; /* number of distinct dimension names in - signature */ - - /* dimension indices of input/output argument k are stored in - core_dim_ixs[core_offsets[k]..core_offsets[k]+core_num_dims[k]-1] */ - int *core_num_dims; /* numbers of core dimensions of each argument */ - int *core_dim_ixs; /* dimension indices in a flatted form; indices - are in the range of [0,core_num_dim_ix) */ - int *core_offsets; /* positions of 1st core dimensions of each - argument in core_dim_ixs */ - char *core_signature; /* signature string for printing purpose */ -} PyUFuncObject; - -#include "arrayobject.h" - -#define UFUNC_ERR_IGNORE 0 -#define UFUNC_ERR_WARN 1 -#define UFUNC_ERR_RAISE 2 -#define UFUNC_ERR_CALL 3 -#define UFUNC_ERR_PRINT 4 -#define UFUNC_ERR_LOG 5 - - /* Python side integer mask */ - -#define UFUNC_MASK_DIVIDEBYZERO 0x07 -#define UFUNC_MASK_OVERFLOW 0x3f -#define UFUNC_MASK_UNDERFLOW 0x1ff -#define UFUNC_MASK_INVALID 
0xfff - -#define UFUNC_SHIFT_DIVIDEBYZERO 0 -#define UFUNC_SHIFT_OVERFLOW 3 -#define UFUNC_SHIFT_UNDERFLOW 6 -#define UFUNC_SHIFT_INVALID 9 - - -/* platform-dependent code translates floating point - status to an integer sum of these values -*/ -#define UFUNC_FPE_DIVIDEBYZERO 1 -#define UFUNC_FPE_OVERFLOW 2 -#define UFUNC_FPE_UNDERFLOW 4 -#define UFUNC_FPE_INVALID 8 - -#define UFUNC_ERR_DEFAULT 0 /* Error mode that avoids look-up (no checking) */ - -#define UFUNC_OBJ_ISOBJECT 1 -#define UFUNC_OBJ_NEEDS_API 2 - - /* Default user error mode */ -#define UFUNC_ERR_DEFAULT2 \ - (UFUNC_ERR_PRINT << UFUNC_SHIFT_DIVIDEBYZERO) + \ - (UFUNC_ERR_PRINT << UFUNC_SHIFT_OVERFLOW) + \ - (UFUNC_ERR_PRINT << UFUNC_SHIFT_INVALID) - - /* Only internal -- not exported, yet*/ -typedef struct { - /* Multi-iterator portion --- needs to be present in this order - to work with PyArray_Broadcast */ - PyObject_HEAD - int numiter; - npy_intp size; - npy_intp index; - int nd; - npy_intp dimensions[NPY_MAXDIMS]; - PyArrayIterObject *iters[NPY_MAXARGS]; - /* End of Multi-iterator portion */ - - /* The ufunc */ - PyUFuncObject *ufunc; - - /* The error handling */ - int errormask; /* Integer showing desired error handling */ - PyObject *errobj; /* currently a tuple with - (string, func or obj with write method or None) - */ - int first; - - /* Specific function and data to use */ - PyUFuncGenericFunction function; - void *funcdata; - - /* Loop method */ - int meth; - - /* Whether we need to copy to a buffer or not.*/ - int needbuffer[NPY_MAXARGS]; - int leftover; - int ninnerloops; - int lastdim; - - /* Whether or not to swap */ - int swap[NPY_MAXARGS]; - - /* Buffers for the loop */ - char *buffer[NPY_MAXARGS]; - int bufsize; - npy_intp bufcnt; - char *dptr[NPY_MAXARGS]; - - /* For casting */ - char *castbuf[NPY_MAXARGS]; - PyArray_VectorUnaryFunc *cast[NPY_MAXARGS]; - - /* usually points to buffer but when a cast is to be - done it switches for that argument to castbuf. 
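The UFUNC_MASK_*/UFUNC_SHIFT_* constants above pack one three-bit error mode per floating-point exception into a single integer, and UFUNC_ERR_DEFAULT2 is built exactly that way. A standalone sketch of the same packing, using the same shifts but demo names:

```c
/* Per-exception error modes, as in UFUNC_ERR_IGNORE / UFUNC_ERR_PRINT. */
#define DEMO_ERR_IGNORE 0
#define DEMO_ERR_PRINT  4

/* Three bits per exception, same shifts as the header. */
#define DEMO_SHIFT_DIVIDEBYZERO 0
#define DEMO_SHIFT_OVERFLOW     3
#define DEMO_SHIFT_UNDERFLOW    6
#define DEMO_SHIFT_INVALID      9

/* Extract the three-bit mode for one exception from the packed mask. */
static int demo_get_mode(int errmask, int shift)
{
    return (errmask >> shift) & 0x7;
}

/* Same shape as UFUNC_ERR_DEFAULT2: print on divide-by-zero, overflow
 * and invalid; underflow stays at ignore. */
static int demo_default_mask(void)
{
    return (DEMO_ERR_PRINT << DEMO_SHIFT_DIVIDEBYZERO)
         + (DEMO_ERR_PRINT << DEMO_SHIFT_OVERFLOW)
         + (DEMO_ERR_PRINT << DEMO_SHIFT_INVALID);
}
```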
- */ - char *bufptr[NPY_MAXARGS]; - - /* Steps filled in from iters or sizeof(item) - depending on loop method. - */ - npy_intp steps[NPY_MAXARGS]; - - int obj; /* This loop uses object arrays or needs the Python API */ - /* Flags: UFUNC_OBJ_ISOBJECT, UFUNC_OBJ_NEEDS_API */ - int notimplemented; /* The loop caused notimplemented */ - int objfunc; /* This loop calls object functions - (an inner-loop function with argument types */ - - /* generalized ufunc */ - npy_intp *core_dim_sizes; /* stores sizes of core dimensions; - contains 1 + core_num_dim_ix elements */ - npy_intp *core_strides; /* strides of loop and core dimensions */ -} PyUFuncLoopObject; - -/* Could make this more clever someday */ -#define UFUNC_MAXIDENTITY 32 - -typedef struct { - PyObject_HEAD - PyArrayIterObject *it; - PyArrayObject *ret; - PyArrayIterObject *rit; /* Needed for Accumulate */ - int outsize; - npy_intp index; - npy_intp size; - char idptr[UFUNC_MAXIDENTITY]; - - /* The ufunc */ - PyUFuncObject *ufunc; - - /* The error handling */ - int errormask; - PyObject *errobj; - int first; - - PyUFuncGenericFunction function; - void *funcdata; - int meth; - int swap; - - char *buffer; - int bufsize; - - char *castbuf; - PyArray_VectorUnaryFunc *cast; - - char *bufptr[3]; - npy_intp steps[3]; - - npy_intp N; - int instrides; - int insize; - char *inptr; - - /* For copying small arrays */ - PyObject *decref; - - int obj; - int retbase; - -} PyUFuncReduceObject; - - -#if NPY_ALLOW_THREADS -#define NPY_LOOP_BEGIN_THREADS do {if (!(loop->obj & UFUNC_OBJ_NEEDS_API)) _save = PyEval_SaveThread();} while (0) -#define NPY_LOOP_END_THREADS do {if (!(loop->obj & UFUNC_OBJ_NEEDS_API)) PyEval_RestoreThread(_save);} while (0) -#else -#define NPY_LOOP_BEGIN_THREADS -#define NPY_LOOP_END_THREADS -#endif - -#define PyUFunc_One 1 -#define PyUFunc_Zero 0 -#define PyUFunc_None -1 - -#define UFUNC_REDUCE 0 -#define UFUNC_ACCUMULATE 1 -#define UFUNC_REDUCEAT 2 -#define UFUNC_OUTER 3 - - -typedef struct { - int nin; - 
int nout; - PyObject *callable; -} PyUFunc_PyFuncData; - -/* A linked-list of function information for - user-defined 1-d loops. - */ -typedef struct _loop1d_info { - PyUFuncGenericFunction func; - void *data; - int *arg_types; - struct _loop1d_info *next; -} PyUFunc_Loop1d; - - -#include "__ufunc_api.h" - -#define UFUNC_PYVALS_NAME "UFUNC_PYVALS" - -#define UFUNC_CHECK_ERROR(arg) \ - do {if ((((arg)->obj & UFUNC_OBJ_NEEDS_API) && PyErr_Occurred()) || \ - ((arg)->errormask && \ - PyUFunc_checkfperr((arg)->errormask, \ - (arg)->errobj, \ - &(arg)->first))) \ - goto fail;} while (0) - -/* This code checks the IEEE status flags in a platform-dependent way */ -/* Adapted from Numarray */ - -#if (defined(__unix__) || defined(unix)) && !defined(USG) -#include -#endif - -/* OSF/Alpha (Tru64) ---------------------------------------------*/ -#if defined(__osf__) && defined(__alpha) - -#include - -#define UFUNC_CHECK_STATUS(ret) { \ - unsigned long fpstatus; \ - \ - fpstatus = ieee_get_fp_control(); \ - /* clear status bits as well as disable exception mode if on */ \ - ieee_set_fp_control( 0 ); \ - ret = ((IEEE_STATUS_DZE & fpstatus) ? UFUNC_FPE_DIVIDEBYZERO : 0) \ - | ((IEEE_STATUS_OVF & fpstatus) ? UFUNC_FPE_OVERFLOW : 0) \ - | ((IEEE_STATUS_UNF & fpstatus) ? UFUNC_FPE_UNDERFLOW : 0) \ - | ((IEEE_STATUS_INV & fpstatus) ? UFUNC_FPE_INVALID : 0); \ - } - -/* MS Windows -----------------------------------------------------*/ -#elif defined(_MSC_VER) - -#include - - /* Clear the floating point exception default of Borland C++ */ -#if defined(__BORLANDC__) -#define UFUNC_NOFPE _control87(MCW_EM, MCW_EM); -#endif - -#define UFUNC_CHECK_STATUS(ret) { \ - int fpstatus = (int) _clearfp(); \ - \ - ret = ((SW_ZERODIVIDE & fpstatus) ? UFUNC_FPE_DIVIDEBYZERO : 0) \ - | ((SW_OVERFLOW & fpstatus) ? UFUNC_FPE_OVERFLOW : 0) \ - | ((SW_UNDERFLOW & fpstatus) ? UFUNC_FPE_UNDERFLOW : 0) \ - | ((SW_INVALID & fpstatus) ? 
UFUNC_FPE_INVALID : 0); \ - } - -/* Solaris --------------------------------------------------------*/ -/* --------ignoring SunOS ieee_flags approach, someone else can -** deal with that! */ -#elif defined(sun) || defined(__BSD__) || defined(__OpenBSD__) || (defined(__FreeBSD__) && (__FreeBSD_version < 502114)) || defined(__NetBSD__) -#include <ieeefp.h> - -#define UFUNC_CHECK_STATUS(ret) { \ - int fpstatus; \ - \ - fpstatus = (int) fpgetsticky(); \ - ret = ((FP_X_DZ & fpstatus) ? UFUNC_FPE_DIVIDEBYZERO : 0) \ - | ((FP_X_OFL & fpstatus) ? UFUNC_FPE_OVERFLOW : 0) \ - | ((FP_X_UFL & fpstatus) ? UFUNC_FPE_UNDERFLOW : 0) \ - | ((FP_X_INV & fpstatus) ? UFUNC_FPE_INVALID : 0); \ - (void) fpsetsticky(0); \ - } - -#elif defined(__GLIBC__) || defined(__APPLE__) || defined(__CYGWIN__) || defined(__MINGW32__) || (defined(__FreeBSD__) && (__FreeBSD_version >= 502114)) - -#if defined(__GLIBC__) || defined(__APPLE__) || defined(__MINGW32__) || defined(__FreeBSD__) -#include <fenv.h> -#elif defined(__CYGWIN__) -#include "fenv/fenv.c" -#endif - -#define UFUNC_CHECK_STATUS(ret) { \ - int fpstatus = (int) fetestexcept(FE_DIVBYZERO | FE_OVERFLOW | \ - FE_UNDERFLOW | FE_INVALID); \ - ret = ((FE_DIVBYZERO & fpstatus) ? UFUNC_FPE_DIVIDEBYZERO : 0) \ - | ((FE_OVERFLOW & fpstatus) ? UFUNC_FPE_OVERFLOW : 0) \ - | ((FE_UNDERFLOW & fpstatus) ? UFUNC_FPE_UNDERFLOW : 0) \ - | ((FE_INVALID & fpstatus) ? UFUNC_FPE_INVALID : 0); \ - (void) feclearexcept(FE_DIVBYZERO | FE_OVERFLOW | \ - FE_UNDERFLOW | FE_INVALID); \ -} - -#define generate_divbyzero_error() feraiseexcept(FE_DIVBYZERO) -#define generate_overflow_error() feraiseexcept(FE_OVERFLOW) - -#elif defined(_AIX) - -#include <float.h> -#include <fpxcp.h> - -#define UFUNC_CHECK_STATUS(ret) { \ - fpflag_t fpstatus; \ - \ - fpstatus = fp_read_flag(); \ - ret = ((FP_DIV_BY_ZERO & fpstatus) ? UFUNC_FPE_DIVIDEBYZERO : 0) \ - | ((FP_OVERFLOW & fpstatus) ? UFUNC_FPE_OVERFLOW : 0) \ - | ((FP_UNDERFLOW & fpstatus) ? UFUNC_FPE_UNDERFLOW : 0) \ - | ((FP_INVALID & fpstatus) ?
UFUNC_FPE_INVALID : 0); \ - fp_swap_flag(0); \ -} - -#define generate_divbyzero_error() fp_raise_xcp(FP_DIV_BY_ZERO) -#define generate_overflow_error() fp_raise_xcp(FP_OVERFLOW) - -#else - -#define NO_FLOATING_POINT_SUPPORT -#define UFUNC_CHECK_STATUS(ret) { \ - ret = 0; \ - } - -#endif - -/* These should really be altered to just set the corresponding bit - in the floating point status flag. Need to figure out how to do that - on all the platforms... -*/ - -#if !defined(generate_divbyzero_error) -static int numeric_zero2 = 0; -static void generate_divbyzero_error(void) { - double dummy; - dummy = 1./numeric_zero2; - if (dummy) /* to prevent optimizer from eliminating expression */ - return; - else /* should never be called */ - numeric_zero2 += 1; - return; -} -#endif - -#if !defined(generate_overflow_error) -static double numeric_two = 2.0; -static void generate_overflow_error(void) { - double dummy; - dummy = pow(numeric_two,1000); - if (dummy) - return; - else - numeric_two += 0.1; - return; -} -#endif - - /* Make sure it gets defined if it isn't already */ -#ifndef UFUNC_NOFPE -#define UFUNC_NOFPE -#endif - - -#ifdef __cplusplus -} -#endif -#endif /* !Py_UFUNCOBJECT_H */ diff --git a/pythonPackages/numpy/numpy/core/include/numpy/utils.h b/pythonPackages/numpy/numpy/core/include/numpy/utils.h deleted file mode 100755 index cc968a3544..0000000000 --- a/pythonPackages/numpy/numpy/core/include/numpy/utils.h +++ /dev/null @@ -1,19 +0,0 @@ -#ifndef __NUMPY_UTILS_HEADER__ -#define __NUMPY_UTILS_HEADER__ - -#ifndef __COMP_NPY_UNUSED - #if defined(__GNUC__) - #define __COMP_NPY_UNUSED __attribute__ ((__unused__)) - #elif defined(__ICC) - #define __COMP_NPY_UNUSED __attribute__ ((__unused__)) - #else - #define __COMP_NPY_UNUSED - #endif -#endif - -/* Use this to tag a variable as not used.
It will remove unused variable - * warnings on supported platforms (see __COMP_NPY_UNUSED) and mangle the variable - * to avoid accidental use */ -#define NPY_UNUSED(x) (__NPY_UNUSED_TAGGED ## x) __COMP_NPY_UNUSED - -#endif diff --git a/pythonPackages/numpy/numpy/core/info.py b/pythonPackages/numpy/numpy/core/info.py deleted file mode 100755 index 561e171b03..0000000000 --- a/pythonPackages/numpy/numpy/core/info.py +++ /dev/null @@ -1,86 +0,0 @@ -__doc__ = """Defines a multi-dimensional array and useful procedures for numerical computation. - -Functions - -- array - NumPy array construction -- zeros - Return an array of all zeros -- empty - Return an uninitialized array -- shape - Return shape of sequence or array -- rank - Return number of dimensions -- size - Return number of elements in entire array or a - certain dimension -- fromstring - Construct array from (byte) string -- take - Select sub-arrays using sequence of indices -- put - Set sub-arrays using sequence of 1-D indices -- putmask - Set portion of arrays using a mask -- reshape - Return array with new shape -- repeat - Repeat elements of array -- choose - Construct new array from indexed array tuple -- correlate - Correlate two 1-d arrays -- searchsorted - Search for element in 1-d array -- sum - Total sum over a specified dimension -- average - Average, possibly weighted, over axis or array.
-- cumsum - Cumulative sum over a specified dimension -- product - Total product over a specified dimension -- cumproduct - Cumulative product over a specified dimension -- alltrue - Logical and over an entire axis -- sometrue - Logical or over an entire axis -- allclose - Tests if sequences are essentially equal - -More Functions: - -- arange - Return regularly spaced array -- asarray - Guarantee NumPy array -- convolve - Convolve two 1-d arrays -- swapaxes - Exchange axes -- concatenate - Join arrays together -- transpose - Permute axes -- sort - Sort elements of array -- argsort - Indices of sorted array -- argmax - Index of largest value -- argmin - Index of smallest value -- inner - Inner product of two arrays -- dot - Dot product (matrix multiplication) -- outer - Outer product of two arrays -- resize - Return array with arbitrary new shape -- indices - Tuple of indices -- fromfunction - Construct array from universal function -- diagonal - Return diagonal array -- trace - Trace of array -- dump - Dump array to file object (pickle) -- dumps - Return pickled string representing data -- load - Return array stored in file object -- loads - Return array from pickled string -- ravel - Return array as 1-D -- nonzero - Indices of nonzero elements for 1-D array -- shape - Shape of array -- where - Construct array from binary result -- compress - Elements of array where condition is true -- clip - Clip array between two values -- ones - Array of all ones -- identity - 2-D identity array (matrix) - -(Universal) Math Functions - - add logical_or exp - subtract logical_xor log - multiply logical_not log10 - divide maximum sin - divide_safe minimum sinh - conjugate bitwise_and sqrt - power bitwise_or tan - absolute bitwise_xor tanh - negative invert ceil - greater left_shift fabs - greater_equal right_shift floor - less arccos arctan2 - less_equal arcsin fmod - equal arctan hypot - not_equal cos around - logical_and cosh sign - arccosh arcsinh arctanh - -""" - -depends =
['testing'] -global_symbols = ['*'] diff --git a/pythonPackages/numpy/numpy/core/machar.py b/pythonPackages/numpy/numpy/core/machar.py deleted file mode 100755 index 290f33746a..0000000000 --- a/pythonPackages/numpy/numpy/core/machar.py +++ /dev/null @@ -1,339 +0,0 @@ -""" -Machine arithmetic - determine the parameters of the -floating-point arithmetic system -""" -# Author: Pearu Peterson, September 2003 - - -__all__ = ['MachAr'] - -from numpy.core.fromnumeric import any -from numpy.core.numeric import seterr - -# Need to speed this up...especially for longfloat - -class MachAr(object): - """ - Diagnosing machine parameters. - - Attributes - ---------- - ibeta : int - Radix in which numbers are represented. - it : int - Number of base-`ibeta` digits in the floating point mantissa M. - machep : int - Exponent of the smallest (most negative) power of `ibeta` that, - added to 1.0, gives something different from 1.0. - eps : float - Floating-point number ``beta**machep`` (floating point precision). - negep : int - Exponent of the smallest power of `ibeta` that, subtracted - from 1.0, gives something different from 1.0. - epsneg : float - Floating-point number ``beta**negep``. - iexp : int - Number of bits in the exponent (including its sign and bias). - minexp : int - Smallest (most negative) power of `ibeta` consistent with there - being no leading zeros in the mantissa. - xmin : float - Floating point number ``beta**minexp`` (the smallest [in - magnitude] usable floating value). - maxexp : int - Smallest (positive) power of `ibeta` that causes overflow. - xmax : float - ``(1-epsneg) * beta**maxexp`` (the largest [in magnitude] - usable floating value). - irnd : int - In ``range(6)``, information on what kind of rounding is done - in addition, and on how underflow is handled. - ngrd : int - Number of 'guard digits' used when truncating the product - of two mantissas to fit the representation. - epsilon : float - Same as `eps`. - tiny : float - Same as `xmin`.
- huge : float - Same as `xmax`. - precision : float - ``- int(-log10(eps))`` - resolution : float - ``- 10**(-precision)`` - - Parameters - ---------- - float_conv : function, optional - Function that converts an integer or integer array to a float - or float array. Default is `float`. - int_conv : function, optional - Function that converts a float or float array to an integer or - integer array. Default is `int`. - float_to_float : function, optional - Function that converts a float array to float. Default is `float`. - Note that this does not seem to do anything useful in the current - implementation. - float_to_str : function, optional - Function that converts a single float to a string. Default is - ``lambda v:'%24.16e' %v``. - title : str, optional - Title that is printed in the string representation of `MachAr`. - - See Also - -------- - finfo : Machine limits for floating point types. - iinfo : Machine limits for integer types. - - References - ---------- - .. [1] Press, Teukolsky, Vetterling and Flannery, - "Numerical Recipes in C++," 2nd ed, - Cambridge University Press, 2002, p. 31. - - """ - def __init__(self, float_conv=float,int_conv=int, - float_to_float=float, - float_to_str = lambda v:'%24.16e' % v, - title = 'Python floating point number'): - """ - float_conv - convert integer to float (array) - int_conv - convert float (array) to integer - float_to_float - convert float array to float - float_to_str - convert array float to str - title - description of used floating point numbers - """ - # We ignore all errors here because we are purposely triggering - # underflow to detect the properties of the running arch.
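The radix and epsilon probing that `MachAr._do_init` performs below can be sketched in modern, self-contained Python. This is a simplified analog of the first loops only, with illustrative names (`probe_float` is not NumPy API), assuming standard IEEE-754 double arithmetic:

```python
import sys

def probe_float():
    """Probe the radix and machine epsilon of Python floats.

    A simplified Python 3 analog of the opening loops of MachAr._do_init.
    """
    one = 1.0
    # Double `a` until (a + 1) - a != 1: the mantissa can no longer hold a + 1.
    a = one
    while (a + one) - a == one:
        a += a
    # Grow `b` until (a + b) - a is nonzero; that difference is the radix.
    b = one
    while (a + b) - a == 0.0:
        b += b
    ibeta = int((a + b) - a)
    # Shrink eps by the radix until 1 + eps/ibeta is indistinguishable from 1.
    eps = one
    while one + eps / ibeta != one:
        eps /= ibeta
    return ibeta, eps

ibeta, eps = probe_float()
# On IEEE-754 doubles this yields ibeta == 2 and eps == 2**-52.
```

The doubling trick is the classic Malcolm/Cody-style probe the file implements in full generality (with `float_conv`/`int_conv` hooks so it also works on NumPy array scalars of other precisions).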
- saverrstate = seterr(under='ignore') - try: - self._do_init(float_conv, int_conv, float_to_float, float_to_str, title) - finally: - seterr(**saverrstate) - - def _do_init(self, float_conv, int_conv, float_to_float, float_to_str, title): - max_iterN = 10000 - msg = "Did not converge after %d tries with %s" - one = float_conv(1) - two = one + one - zero = one - one - - # Do we really need to do this? Aren't they 2 and 2.0? - # Determine ibeta and beta - a = one - for _ in xrange(max_iterN): - a = a + a - temp = a + one - temp1 = temp - a - if any(temp1 - one != zero): - break - else: - raise RuntimeError, msg % (_, one.dtype) - b = one - for _ in xrange(max_iterN): - b = b + b - temp = a + b - itemp = int_conv(temp-a) - if any(itemp != 0): - break - else: - raise RuntimeError, msg % (_, one.dtype) - ibeta = itemp - beta = float_conv(ibeta) - - # Determine it and irnd - it = -1 - b = one - for _ in xrange(max_iterN): - it = it + 1 - b = b * beta - temp = b + one - temp1 = temp - b - if any(temp1 - one != zero): - break - else: - raise RuntimeError, msg % (_, one.dtype) - - betah = beta / two - a = one - for _ in xrange(max_iterN): - a = a + a - temp = a + one - temp1 = temp - a - if any(temp1 - one != zero): - break - else: - raise RuntimeError, msg % (_, one.dtype) - temp = a + betah - irnd = 0 - if any(temp-a != zero): - irnd = 1 - tempa = a + beta - temp = tempa + betah - if irnd==0 and any(temp-tempa != zero): - irnd = 2 - - # Determine negep and epsneg - negep = it + 3 - betain = one / beta - a = one - for i in range(negep): - a = a * betain - b = a - for _ in xrange(max_iterN): - temp = one - a - if any(temp-one != zero): - break - a = a * beta - negep = negep - 1 - # Prevent infinite loop on PPC with gcc 4.0: - if negep < 0: - raise RuntimeError, "could not determine machine tolerance " \ - "for 'negep', locals() -> %s" % (locals()) - else: - raise RuntimeError, msg % (_, one.dtype) - negep = -negep - epsneg = a - - # Determine machep and eps - machep = - it 
- 3 - a = b - - for _ in xrange(max_iterN): - temp = one + a - if any(temp-one != zero): - break - a = a * beta - machep = machep + 1 - else: - raise RuntimeError, msg % (_, one.dtype) - eps = a - - # Determine ngrd - ngrd = 0 - temp = one + eps - if irnd==0 and any(temp*one - one != zero): - ngrd = 1 - - # Determine iexp - i = 0 - k = 1 - z = betain - t = one + eps - nxres = 0 - for _ in xrange(max_iterN): - y = z - z = y*y - a = z*one # Check here for underflow - temp = z*t - if any(a+a == zero) or any(abs(z)>=y): - break - temp1 = temp * betain - if any(temp1*beta == z): - break - i = i + 1 - k = k + k - else: - raise RuntimeError, msg % (_, one.dtype) - if ibeta != 10: - iexp = i + 1 - mx = k + k - else: - iexp = 2 - iz = ibeta - while k >= iz: - iz = iz * ibeta - iexp = iexp + 1 - mx = iz + iz - 1 - - # Determine minexp and xmin - for _ in xrange(max_iterN): - xmin = y - y = y * betain - a = y * one - temp = y * t - if any(a+a != zero) and any(abs(y) < xmin): - k = k + 1 - temp1 = temp * betain - if any(temp1*beta == y) and any(temp != y): - nxres = 3 - xmin = y - break - else: - break - else: - raise RuntimeError, msg % (_, one.dtype) - minexp = -k - - # Determine maxexp, xmax - if mx <= k + k - 3 and ibeta != 10: - mx = mx + mx - iexp = iexp + 1 - maxexp = mx + minexp - irnd = irnd + nxres - if irnd >= 2: - maxexp = maxexp - 2 - i = maxexp + minexp - if ibeta == 2 and not i: - maxexp = maxexp - 1 - if i > 20: - maxexp = maxexp - 1 - if any(a != y): - maxexp = maxexp - 2 - xmax = one - epsneg - if any(xmax*one != xmax): - xmax = one - beta*epsneg - xmax = xmax / (xmin*beta*beta*beta) - i = maxexp + minexp + 3 - for j in range(i): - if ibeta==2: - xmax = xmax + xmax - else: - xmax = xmax * beta - - self.ibeta = ibeta - self.it = it - self.negep = negep - self.epsneg = float_to_float(epsneg) - self._str_epsneg = float_to_str(epsneg) - self.machep = machep - self.eps = float_to_float(eps) - self._str_eps = float_to_str(eps) - self.ngrd = ngrd - self.iexp = iexp 
- self.minexp = minexp - self.xmin = float_to_float(xmin) - self._str_xmin = float_to_str(xmin) - self.maxexp = maxexp - self.xmax = float_to_float(xmax) - self._str_xmax = float_to_str(xmax) - self.irnd = irnd - - self.title = title - # Commonly used parameters - self.epsilon = self.eps - self.tiny = self.xmin - self.huge = self.xmax - - import math - self.precision = int(-math.log10(float_to_float(self.eps))) - ten = two + two + two + two + two - resolution = ten ** (-self.precision) - self.resolution = float_to_float(resolution) - self._str_resolution = float_to_str(resolution) - - def __str__(self): - return '''\ -Machine parameters for %(title)s ---------------------------------------------------------------------- -ibeta=%(ibeta)s it=%(it)s iexp=%(iexp)s ngrd=%(ngrd)s irnd=%(irnd)s -machep=%(machep)s eps=%(_str_eps)s (beta**machep == epsilon) -negep =%(negep)s epsneg=%(_str_epsneg)s (beta**epsneg) -minexp=%(minexp)s xmin=%(_str_xmin)s (beta**minexp == tiny) -maxexp=%(maxexp)s xmax=%(_str_xmax)s ((1-epsneg)*beta**maxexp == huge) ---------------------------------------------------------------------- -''' % self.__dict__ - - -if __name__ == '__main__': - print MachAr() diff --git a/pythonPackages/numpy/numpy/core/memmap.py b/pythonPackages/numpy/numpy/core/memmap.py deleted file mode 100755 index 71f5de93c8..0000000000 --- a/pythonPackages/numpy/numpy/core/memmap.py +++ /dev/null @@ -1,311 +0,0 @@ -__all__ = ['memmap'] - -import warnings -from numeric import uint8, ndarray, dtype -import sys - -from numpy.compat import asbytes - -dtypedescr = dtype -valid_filemodes = ["r", "c", "r+", "w+"] -writeable_filemodes = ["r+","w+"] - -mode_equivalents = { - "readonly":"r", - "copyonwrite":"c", - "readwrite":"r+", - "write":"w+" - } - -class memmap(ndarray): - """ - Create a memory-map to an array stored in a *binary* file on disk. - - Memory-mapped files are used for accessing small segments of large files - on disk, without reading the entire file into memory. 
Numpy's - memmap's are array-like objects. This differs from Python's ``mmap`` - module, which uses file-like objects. - - Parameters - ---------- - filename : str or file-like object - The file name or file object to be used as the array data buffer. - dtype : data-type, optional - The data-type used to interpret the file contents. - Default is `uint8`. - mode : {'r+', 'r', 'w+', 'c'}, optional - The file is opened in this mode: - - +------+-------------------------------------------------------------+ - | 'r' | Open existing file for reading only. | - +------+-------------------------------------------------------------+ - | 'r+' | Open existing file for reading and writing. | - +------+-------------------------------------------------------------+ - | 'w+' | Create or overwrite existing file for reading and writing. | - +------+-------------------------------------------------------------+ - | 'c' | Copy-on-write: assignments affect data in memory, but | - | | changes are not saved to disk. The file on disk is | - | | read-only. | - +------+-------------------------------------------------------------+ - - Default is 'r+'. - offset : int, optional - In the file, array data starts at this offset. Since `offset` is - measured in bytes, it should be a multiple of the byte-size of - `dtype`. Requires ``shape=None``. The default is 0. - shape : tuple, optional - The desired shape of the array. By default, the returned array will be - 1-D with the number of elements determined by file size and data-type. - order : {'C', 'F'}, optional - Specify the order of the ndarray memory layout: C (row-major) or - Fortran (column-major). This only has an effect if the shape is - greater than 1-D. The default order is 'C'. - - Attributes - ---------- - filename : str - Path to the mapped file. - offset : int - Offset position in the file. - mode : str - File mode. - - - Methods - ------- - close - Close the memmap file. - flush - Flush any changes in memory to file on disk. 
- When you delete a memmap object, flush is called first to write - changes to disk before removing the object. - - Notes - ----- - The memmap object can be used anywhere an ndarray is accepted. - Given a memmap ``fp``, ``isinstance(fp, numpy.ndarray)`` returns - ``True``. - - Memory-mapped arrays use the Python memory-map object which - (prior to Python 2.5) does not allow files to be larger than a - certain size depending on the platform. This size is always < 2GB - even on 64-bit systems. - - Examples - -------- - >>> data = np.arange(12, dtype='float32') - >>> data.resize((3,4)) - - This example uses a temporary file so that doctest doesn't write - files to your directory. You would use a 'normal' filename. - - >>> from tempfile import mkdtemp - >>> import os.path as path - >>> filename = path.join(mkdtemp(), 'newfile.dat') - - Create a memmap with dtype and shape that matches our data: - - >>> fp = np.memmap(filename, dtype='float32', mode='w+', shape=(3,4)) - >>> fp - memmap([[ 0., 0., 0., 0.], - [ 0., 0., 0., 0.], - [ 0., 0., 0., 0.]], dtype=float32) - - Write data to memmap array: - - >>> fp[:] = data[:] - >>> fp - memmap([[ 0., 1., 2., 3.], - [ 4., 5., 6., 7.], - [ 8., 9., 10., 11.]], dtype=float32) - - >>> fp.filename == path.abspath(filename) - True - - Deletion flushes memory changes to disk before removing the object: - - >>> del fp - - Load the memmap and verify data was stored: - - >>> newfp = np.memmap(filename, dtype='float32', mode='r', shape=(3,4)) - >>> newfp - memmap([[ 0., 1., 2., 3.], - [ 4., 5., 6., 7.], - [ 8., 9., 10., 11.]], dtype=float32) - - Read-only memmap: - - >>> fpr = np.memmap(filename, dtype='float32', mode='r', shape=(3,4)) - >>> fpr.flags.writeable - False - - Copy-on-write memmap: - - >>> fpc = np.memmap(filename, dtype='float32', mode='c', shape=(3,4)) - >>> fpc.flags.writeable - True - - It's possible to assign to copy-on-write array, but values are only - written into the memory copy of the array, and not written to disk: - 
- >>> fpc - memmap([[ 0., 1., 2., 3.], - [ 4., 5., 6., 7.], - [ 8., 9., 10., 11.]], dtype=float32) - >>> fpc[0,:] = 0 - >>> fpc - memmap([[ 0., 0., 0., 0.], - [ 4., 5., 6., 7.], - [ 8., 9., 10., 11.]], dtype=float32) - - File on disk is unchanged: - - >>> fpr - memmap([[ 0., 1., 2., 3.], - [ 4., 5., 6., 7.], - [ 8., 9., 10., 11.]], dtype=float32) - - Offset into a memmap: - - >>> fpo = np.memmap(filename, dtype='float32', mode='r', offset=16) - >>> fpo - memmap([ 4., 5., 6., 7., 8., 9., 10., 11.], dtype=float32) - - """ - - __array_priority__ = -100.0 - def __new__(subtype, filename, dtype=uint8, mode='r+', offset=0, - shape=None, order='C'): - # Import here to minimize 'import numpy' overhead - import mmap - import os.path - try: - mode = mode_equivalents[mode] - except KeyError: - if mode not in valid_filemodes: - raise ValueError("mode must be one of %s" % \ - (valid_filemodes + mode_equivalents.keys())) - - if hasattr(filename,'read'): - fid = filename - else: - fid = open(filename, (mode == 'c' and 'r' or mode)+'b') - - if (mode == 'w+') and shape is None: - raise ValueError, "shape must be given" - - fid.seek(0, 2) - flen = fid.tell() - descr = dtypedescr(dtype) - _dbytes = descr.itemsize - - if shape is None: - bytes = flen - offset - if (bytes % _dbytes): - fid.close() - raise ValueError, "Size of available data is not a "\ - "multiple of data-type size." 
- size = bytes // _dbytes - shape = (size,) - else: - if not isinstance(shape, tuple): - shape = (shape,) - size = 1 - for k in shape: - size *= k - - bytes = long(offset + size*_dbytes) - - if mode == 'w+' or (mode == 'r+' and flen < bytes): - fid.seek(bytes - 1, 0) - fid.write(asbytes('\0')) - fid.flush() - - if mode == 'c': - acc = mmap.ACCESS_COPY - elif mode == 'r': - acc = mmap.ACCESS_READ - else: - acc = mmap.ACCESS_WRITE - - if sys.version_info[:2] >= (2,6): - # The offset keyword in mmap.mmap needs Python >= 2.6 - start = offset - offset % mmap.ALLOCATIONGRANULARITY - bytes -= start - offset -= start - mm = mmap.mmap(fid.fileno(), bytes, access=acc, offset=start) - else: - mm = mmap.mmap(fid.fileno(), bytes, access=acc) - - self = ndarray.__new__(subtype, shape, dtype=descr, buffer=mm, - offset=offset, order=order) - self._mmap = mm - self.offset = offset - self.mode = mode - - if isinstance(filename, basestring): - self.filename = os.path.abspath(filename) - elif hasattr(filename, "name"): - self.filename = os.path.abspath(filename.name) - - return self - - def __array_finalize__(self, obj): - if hasattr(obj, '_mmap'): - self._mmap = obj._mmap - self.filename = obj.filename - self.offset = obj.offset - self.mode = obj.mode - else: - self._mmap = None - - def flush(self): - """ - Write any changes in the array to the file on disk. - - For further information, see `memmap`. - - Parameters - ---------- - None - - See Also - -------- - memmap - - """ - if self._mmap is not None: - self._mmap.flush() - - def sync(self): - """This method is deprecated, use `flush`.""" - warnings.warn("Use ``flush``.", DeprecationWarning) - self.flush() - - def _close(self): - """Close the memmap file. Only do this when deleting the object.""" - if self.base is self._mmap: - # The python mmap probably causes flush on close, but - # we put this here for safety - self._mmap.flush() - self._mmap.close() - self._mmap = None - - def close(self): - """Close the memmap file. 
Does nothing.""" - warnings.warn("``close`` is deprecated on memmap arrays. Use del", - DeprecationWarning) - - def __del__(self): - # We first check if we are the owner of the mmap, rather than - # a view, so deleting a view does not call _close - # on the parent mmap - if self._mmap is self.base: - try: - # First run tell() to see whether file is open - self._mmap.tell() - except ValueError: - pass - else: - self._close() diff --git a/pythonPackages/numpy/numpy/core/mlib.ini.in b/pythonPackages/numpy/numpy/core/mlib.ini.in deleted file mode 100755 index badaa2ae9d..0000000000 --- a/pythonPackages/numpy/numpy/core/mlib.ini.in +++ /dev/null @@ -1,12 +0,0 @@ -[meta] -Name = mlib -Description = Math library used with this version of numpy -Version = 1.0 - -[default] -Libs=@posix_mathlib@ -Cflags= - -[msvc] -Libs=@msvc_mathlib@ -Cflags= diff --git a/pythonPackages/numpy/numpy/core/npymath.ini.in b/pythonPackages/numpy/numpy/core/npymath.ini.in deleted file mode 100755 index a233b8f3bf..0000000000 --- a/pythonPackages/numpy/numpy/core/npymath.ini.in +++ /dev/null @@ -1,20 +0,0 @@ -[meta] -Name=npymath -Description=Portable, core math library implementing C99 standard -Version=0.1 - -[variables] -pkgname=@pkgname@ -prefix=${pkgdir} -libdir=${prefix}@sep@lib -includedir=${prefix}@sep@include - -[default] -Libs=-L${libdir} -lnpymath -Cflags=-I${includedir} -Requires=mlib - -[msvc] -Libs=/LIBPATH:${libdir} npymath.lib -Cflags=/INCLUDE:${includedir} -Requires=mlib diff --git a/pythonPackages/numpy/numpy/core/numeric.py b/pythonPackages/numpy/numpy/core/numeric.py deleted file mode 100755 index 8c9f507d21..0000000000 --- a/pythonPackages/numpy/numpy/core/numeric.py +++ /dev/null @@ -1,2481 +0,0 @@ -__all__ = ['newaxis', 'ndarray', 'flatiter', 'ufunc', - 'arange', 'array', 'zeros', 'empty', 'broadcast', 'dtype', - 'fromstring', 'fromfile', 'frombuffer', - 'int_asbuffer', 'where', 'argwhere', - 'concatenate', 'fastCopyAndTranspose', 'lexsort', - 'set_numeric_ops', 'can_cast', 
- 'asarray', 'asanyarray', 'ascontiguousarray', 'asfortranarray', - 'isfortran', 'empty_like', 'zeros_like', - 'correlate', 'convolve', 'inner', 'dot', 'outer', 'vdot', - 'alterdot', 'restoredot', 'roll', 'rollaxis', 'cross', 'tensordot', - 'array2string', 'get_printoptions', 'set_printoptions', - 'array_repr', 'array_str', 'set_string_function', - 'little_endian', 'require', - 'fromiter', 'array_equal', 'array_equiv', - 'indices', 'fromfunction', - 'load', 'loads', 'isscalar', 'binary_repr', 'base_repr', - 'ones', 'identity', 'allclose', 'compare_chararrays', 'putmask', - 'seterr', 'geterr', 'setbufsize', 'getbufsize', - 'seterrcall', 'geterrcall', 'errstate', 'flatnonzero', - 'Inf', 'inf', 'infty', 'Infinity', - 'nan', 'NaN', 'False_', 'True_', 'bitwise_not', - 'CLIP', 'RAISE', 'WRAP', 'MAXDIMS', 'BUFSIZE', 'ALLOW_THREADS', - 'ComplexWarning'] - -import sys -import warnings -import multiarray -import umath -from umath import * -import numerictypes -from numerictypes import * - -if sys.version_info[0] < 3: - __all__.extend(['getbuffer', 'newbuffer']) - -class ComplexWarning(RuntimeWarning): - """ - The warning raised when casting a complex dtype to a real dtype. - - As implemented, casting a complex number to a real discards its imaginary - part, but this behavior may not be what the user actually wants. - - """ - pass - -bitwise_not = invert - -CLIP = multiarray.CLIP -WRAP = multiarray.WRAP -RAISE = multiarray.RAISE -MAXDIMS = multiarray.MAXDIMS -ALLOW_THREADS = multiarray.ALLOW_THREADS -BUFSIZE = multiarray.BUFSIZE - -ndarray = multiarray.ndarray -flatiter = multiarray.flatiter -broadcast = multiarray.broadcast -dtype = multiarray.dtype -ufunc = type(sin) - - -# originally from Fernando Perez's IPython -def zeros_like(a): - """ - Return an array of zeros with the same shape and type as a given array. - - Equivalent to ``a.copy().fill(0)``. 
- - Parameters - ---------- - a : array_like - The shape and data-type of `a` define these same attributes of - the returned array. - - Returns - ------- - out : ndarray - Array of zeros with the same shape and type as `a`. - - See Also - -------- - ones_like : Return an array of ones with shape and type of input. - empty_like : Return an empty array with shape and type of input. - zeros : Return a new array setting values to zero. - ones : Return a new array setting values to one. - empty : Return a new uninitialized array. - - Examples - -------- - >>> x = np.arange(6) - >>> x = x.reshape((2, 3)) - >>> x - array([[0, 1, 2], - [3, 4, 5]]) - >>> np.zeros_like(x) - array([[0, 0, 0], - [0, 0, 0]]) - - >>> y = np.arange(3, dtype=np.float) - >>> y - array([ 0., 1., 2.]) - >>> np.zeros_like(y) - array([ 0., 0., 0.]) - - """ - if isinstance(a, ndarray): - res = ndarray.__new__(type(a), a.shape, a.dtype, order=a.flags.fnc) - res.fill(0) - return res - try: - wrap = a.__array_wrap__ - except AttributeError: - wrap = None - a = asarray(a) - res = zeros(a.shape, a.dtype) - if wrap: - res = wrap(res) - return res - -def empty_like(a): - """ - Return a new array with the same shape and type as a given array. - - Parameters - ---------- - a : array_like - The shape and data-type of `a` define these same attributes of the - returned array. - - Returns - ------- - out : ndarray - Array of random data with the same shape and type as `a`. - - See Also - -------- - ones_like : Return an array of ones with shape and type of input. - zeros_like : Return an array of zeros with shape and type of input. - empty : Return a new uninitialized array. - ones : Return a new array setting values to one. - zeros : Return a new array setting values to zero. - - Notes - ----- - This function does *not* initialize the returned array; to do that use - `zeros_like` or `ones_like` instead. It may be marginally faster than - the functions that do set the array values. 
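The contract of `zeros_like` (same shape as the input, every value set to zero) can be illustrated without NumPy by a recursive analog over nested lists. This toy helper is purely illustrative and not part of the NumPy API:

```python
def zeros_like_nested(a):
    """Return a nested list with the shape of `a` and all values zeroed.

    A toy stand-in for np.zeros_like, operating on nested Python lists.
    """
    if isinstance(a, list):
        return [zeros_like_nested(x) for x in a]
    return type(a)(0)  # preserve the element type (int -> 0, float -> 0.0)

print(zeros_like_nested([[1, 2.5], [3, 4]]))  # [[0, 0.0], [0, 0]]
```

`empty_like` differs only in skipping the zero-fill step, which is why it can be marginally faster but returns arbitrary values.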
- - Examples - -------- - >>> a = ([1,2,3], [4,5,6]) # a is array-like - >>> np.empty_like(a) - array([[-1073741821, -1073741821, 3], #random - [ 0, 0, -1073741821]]) - >>> a = np.array([[1., 2., 3.],[4.,5.,6.]]) - >>> np.empty_like(a) - array([[ -2.00000715e+000, 1.48219694e-323, -2.00000572e+000],#random - [ 4.38791518e-305, -2.00000715e+000, 4.17269252e-309]]) - - """ - if isinstance(a, ndarray): - res = ndarray.__new__(type(a), a.shape, a.dtype, order=a.flags.fnc) - return res - try: - wrap = a.__array_wrap__ - except AttributeError: - wrap = None - a = asarray(a) - res = empty(a.shape, a.dtype) - if wrap: - res = wrap(res) - return res - -# end Fernando's utilities - - -def extend_all(module): - adict = {} - for a in __all__: - adict[a] = 1 - try: - mall = getattr(module, '__all__') - except AttributeError: - mall = [k for k in module.__dict__.keys() if not k.startswith('_')] - for a in mall: - if a not in adict: - __all__.append(a) - -extend_all(umath) -extend_all(numerictypes) - -newaxis = None - - -arange = multiarray.arange -array = multiarray.array -zeros = multiarray.zeros -empty = multiarray.empty -fromstring = multiarray.fromstring -fromiter = multiarray.fromiter -fromfile = multiarray.fromfile -frombuffer = multiarray.frombuffer -if sys.version_info[0] < 3: - newbuffer = multiarray.newbuffer - getbuffer = multiarray.getbuffer -int_asbuffer = multiarray.int_asbuffer -where = multiarray.where -concatenate = multiarray.concatenate -fastCopyAndTranspose = multiarray._fastCopyAndTranspose -set_numeric_ops = multiarray.set_numeric_ops -can_cast = multiarray.can_cast -lexsort = multiarray.lexsort -compare_chararrays = multiarray.compare_chararrays -putmask = multiarray.putmask - -def asarray(a, dtype=None, order=None): - """ - Convert the input to an array. - - Parameters - ---------- - a : array_like - Input data, in any form that can be converted to an array. This - includes lists, lists of tuples, tuples, tuples of tuples, tuples - of lists and ndarrays. 
- dtype : data-type, optional - By default, the data-type is inferred from the input data. - order : {'C', 'F'}, optional - Whether to use row-major ('C') or column-major ('F' for FORTRAN) - memory representation. Defaults to 'C'. - - Returns - ------- - out : ndarray - Array interpretation of `a`. No copy is performed if the input - is already an ndarray. If `a` is a subclass of ndarray, a base - class ndarray is returned. - - See Also - -------- - asanyarray : Similar function which passes through subclasses. - ascontiguousarray : Convert input to a contiguous array. - asfarray : Convert input to a floating point ndarray. - asfortranarray : Convert input to an ndarray with column-major - memory order. - asarray_chkfinite : Similar function which checks input for NaNs and Infs. - fromiter : Create an array from an iterator. - fromfunction : Construct an array by executing a function on grid - positions. - - Examples - -------- - Convert a list into an array: - - >>> a = [1, 2] - >>> np.asarray(a) - array([1, 2]) - - Existing arrays are not copied: - - >>> a = np.array([1, 2]) - >>> np.asarray(a) is a - True - - If `dtype` is set, array is copied only if dtype does not match: - - >>> a = np.array([1, 2], dtype=np.float32) - >>> np.asarray(a, dtype=np.float32) is a - True - >>> np.asarray(a, dtype=np.float64) is a - False - - Contrary to `asanyarray`, ndarray subclasses are not passed through: - - >>> issubclass(np.matrix, np.ndarray) - True - >>> a = np.matrix([[1, 2]]) - >>> np.asarray(a) is a - False - >>> np.asanyarray(a) is a - True - - """ - return array(a, dtype, copy=False, order=order) - -def asanyarray(a, dtype=None, order=None): - """ - Convert the input to an ndarray, but pass ndarray subclasses through. - - Parameters - ---------- - a : array_like - Input data, in any form that can be converted to an array. This - includes scalars, lists, lists of tuples, tuples, tuples of tuples, - tuples of lists, and ndarrays. 
- dtype : data-type, optional - By default, the data-type is inferred from the input data. - order : {'C', 'F'}, optional - Whether to use row-major ('C') or column-major ('F') memory - representation. Defaults to 'C'. - - Returns - ------- - out : ndarray or an ndarray subclass - Array interpretation of `a`. If `a` is an ndarray or a subclass - of ndarray, it is returned as-is and no copy is performed. - - See Also - -------- - asarray : Similar function which always returns ndarrays. - ascontiguousarray : Convert input to a contiguous array. - asfarray : Convert input to a floating point ndarray. - asfortranarray : Convert input to an ndarray with column-major - memory order. - asarray_chkfinite : Similar function which checks input for NaNs and - Infs. - fromiter : Create an array from an iterator. - fromfunction : Construct an array by executing a function on grid - positions. - - Examples - -------- - Convert a list into an array: - - >>> a = [1, 2] - >>> np.asanyarray(a) - array([1, 2]) - - Instances of `ndarray` subclasses are passed through as-is: - - >>> a = np.matrix([1, 2]) - >>> np.asanyarray(a) is a - True - - """ - return array(a, dtype, copy=False, order=order, subok=True) - -def ascontiguousarray(a, dtype=None): - """ - Return a contiguous array in memory (C order). - - Parameters - ---------- - a : array_like - Input array. - dtype : str or dtype object, optional - Data-type of returned array. - - Returns - ------- - out : ndarray - Contiguous array of same shape and content as `a`, with type `dtype` - if specified. - - See Also - -------- - asfortranarray : Convert input to an ndarray with column-major - memory order. - require : Return an ndarray that satisfies requirements. - ndarray.flags : Information about the memory layout of the array. 
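The pass-through behaviour documented above (no copy when the input already satisfies the request, a copy into C order otherwise) can be checked directly; a minimal sketch, with variable names chosen here for illustration:

```python
import numpy as np

# A transposed view is F-contiguous, so ascontiguousarray must copy
# it into C order; the values are preserved.
x = np.arange(6).reshape(2, 3).T
c = np.ascontiguousarray(x)

# An array that is already C-contiguous is passed through un-copied,
# per the copy=False semantics documented above.
y = np.arange(4.0)
z = np.ascontiguousarray(y)
```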
- - Examples - -------- - >>> x = np.arange(6).reshape(2,3) - >>> np.ascontiguousarray(x, dtype=np.float32) - array([[ 0., 1., 2.], - [ 3., 4., 5.]], dtype=float32) - >>> x.flags['C_CONTIGUOUS'] - True - - """ - return array(a, dtype, copy=False, order='C', ndmin=1) - -def asfortranarray(a, dtype=None): - """ - Return an array laid out in Fortran order in memory. - - Parameters - ---------- - a : array_like - Input array. - dtype : str or dtype object, optional - By default, the data-type is inferred from the input data. - - Returns - ------- - out : ndarray - The input `a` in Fortran, or column-major, order. - - See Also - -------- - ascontiguousarray : Convert input to a contiguous (C order) array. - asanyarray : Convert input to an ndarray with either row or - column-major memory order. - require : Return an ndarray that satisfies requirements. - ndarray.flags : Information about the memory layout of the array. - - Examples - -------- - >>> x = np.arange(6).reshape(2,3) - >>> y = np.asfortranarray(x) - >>> x.flags['F_CONTIGUOUS'] - False - >>> y.flags['F_CONTIGUOUS'] - True - - """ - return array(a, dtype, copy=False, order='F', ndmin=1) - -def require(a, dtype=None, requirements=None): - """ - Return an ndarray of the provided type that satisfies requirements. - - This function is useful for ensuring that an array with the correct flags - is returned for passing to compiled code (perhaps through ctypes). - - Parameters - ---------- - a : array_like - The object to be converted to a type-and-requirement-satisfying array. - dtype : data-type - The required data-type; the default data-type is float64. 
- requirements : str or list of str - The requirements list can be any of the following - - * 'F_CONTIGUOUS' ('F') - ensure a Fortran-contiguous array - * 'C_CONTIGUOUS' ('C') - ensure a C-contiguous array - * 'ALIGNED' ('A') - ensure a data-type aligned array - * 'WRITEABLE' ('W') - ensure a writable array - * 'OWNDATA' ('O') - ensure an array that owns its own data - - See Also - -------- - asarray : Convert input to an ndarray. - asanyarray : Convert to an ndarray, but pass through ndarray subclasses. - ascontiguousarray : Convert input to a contiguous array. - asfortranarray : Convert input to an ndarray with column-major - memory order. - ndarray.flags : Information about the memory layout of the array. - - Notes - ----- - The returned array will be guaranteed to have the listed requirements - by making a copy if needed. - - Examples - -------- - >>> x = np.arange(6).reshape(2,3) - >>> x.flags - C_CONTIGUOUS : True - F_CONTIGUOUS : False - OWNDATA : False - WRITEABLE : True - ALIGNED : True - UPDATEIFCOPY : False - - >>> y = np.require(x, dtype=np.float32, requirements=['A', 'O', 'W', 'F']) - >>> y.flags - C_CONTIGUOUS : False - F_CONTIGUOUS : True - OWNDATA : True - WRITEABLE : True - ALIGNED : True - UPDATEIFCOPY : False - - """ - if requirements is None: - requirements = [] - else: - requirements = [x.upper() for x in requirements] - - if not requirements: - return asanyarray(a, dtype=dtype) - - if 'ENSUREARRAY' in requirements or 'E' in requirements: - subok = False - else: - subok = True - - arr = array(a, dtype=dtype, copy=False, subok=subok) - - copychar = 'A' - if 'FORTRAN' in requirements or \ - 'F_CONTIGUOUS' in requirements or \ - 'F' in requirements: - copychar = 'F' - elif 'CONTIGUOUS' in requirements or \ - 'C_CONTIGUOUS' in requirements or \ - 'C' in requirements: - copychar = 'C' - - for prop in requirements: - if not arr.flags[prop]: - arr = arr.copy(copychar) - break - return arr - -def isfortran(a): - """ - Returns True if array is arranged 
in Fortran-order in memory - and dimension > 1. - - Parameters - ---------- - a : ndarray - Input array. - - - Examples - -------- - - np.array allows to specify whether the array is written in C-contiguous - order (last index varies the fastest), or FORTRAN-contiguous order in - memory (first index varies the fastest). - - >>> a = np.array([[1, 2, 3], [4, 5, 6]], order='C') - >>> a - array([[1, 2, 3], - [4, 5, 6]]) - >>> np.isfortran(a) - False - - >>> b = np.array([[1, 2, 3], [4, 5, 6]], order='FORTRAN') - >>> b - array([[1, 2, 3], - [4, 5, 6]]) - >>> np.isfortran(b) - True - - - The transpose of a C-ordered array is a FORTRAN-ordered array. - - >>> a = np.array([[1, 2, 3], [4, 5, 6]], order='C') - >>> a - array([[1, 2, 3], - [4, 5, 6]]) - >>> np.isfortran(a) - False - >>> b = a.T - >>> b - array([[1, 4], - [2, 5], - [3, 6]]) - >>> np.isfortran(b) - True - - 1-D arrays always evaluate as False. - - >>> np.isfortran(np.array([1, 2], order='FORTRAN')) - False - - """ - return a.flags.fnc - -def argwhere(a): - """ - Find the indices of array elements that are non-zero, grouped by element. - - Parameters - ---------- - a : array_like - Input data. - - Returns - ------- - index_array : ndarray - Indices of elements that are non-zero. Indices are grouped by element. - - See Also - -------- - where, nonzero - - Notes - ----- - ``np.argwhere(a)`` is the same as ``np.transpose(np.nonzero(a))``. - - The output of ``argwhere`` is not suitable for indexing arrays. - For this purpose use ``where(a)`` instead. - - Examples - -------- - >>> x = np.arange(6).reshape(2,3) - >>> x - array([[0, 1, 2], - [3, 4, 5]]) - >>> np.argwhere(x>1) - array([[0, 2], - [1, 0], - [1, 1], - [1, 2]]) - - """ - return transpose(asanyarray(a).nonzero()) - -def flatnonzero(a): - """ - Return indices that are non-zero in the flattened version of a. - - This is equivalent to a.ravel().nonzero()[0]. - - Parameters - ---------- - a : ndarray - Input array. 
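As the docstrings above state, `flatnonzero` is just ``a.ravel().nonzero()[0]`` and `argwhere` is the transpose of ``nonzero()``; a short sketch verifying both equivalences (variable names are illustrative):

```python
import numpy as np

x = np.array([-2, -1, 0, 1, 2])

# flatnonzero is literally ravel().nonzero()[0], per its docstring.
flat = np.flatnonzero(x)
manual = x.ravel().nonzero()[0]

# argwhere transposes nonzero()'s tuple-of-arrays form, grouping
# the indices by element (one row per non-zero entry).
grouped = np.argwhere(x)
```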
- - Returns - ------- - res : ndarray - Output array, containing the indices of the elements of `a.ravel()` - that are non-zero. - - See Also - -------- - nonzero : Return the indices of the non-zero elements of the input array. - ravel : Return a 1-D array containing the elements of the input array. - - Examples - -------- - >>> x = np.arange(-2, 3) - >>> x - array([-2, -1, 0, 1, 2]) - >>> np.flatnonzero(x) - array([0, 1, 3, 4]) - - Use the indices of the non-zero elements as an index array to extract - these elements: - - >>> x.ravel()[np.flatnonzero(x)] - array([-2, -1, 1, 2]) - - """ - return a.ravel().nonzero()[0] - -_mode_from_name_dict = {'v': 0, - 's' : 1, - 'f' : 2} - -def _mode_from_name(mode): - if isinstance(mode, type("")): - return _mode_from_name_dict[mode.lower()[0]] - return mode - -def correlate(a,v,mode='valid',old_behavior=True): - """ - Discrete, linear correlation of two 1-dimensional sequences. - - This function is equivalent to - - >>> np.convolve(a, v[::-1], mode=mode) - ... #doctest: +SKIP - - where ``v[::-1]`` is the reverse of `v`. - - Parameters - ---------- - a, v : array_like - Input sequences. - mode : {'valid', 'same', 'full'}, optional - Refer to the `convolve` docstring. Note that the default - is `valid`, unlike `convolve`, which uses `full`. - old_behavior : bool - If True, uses the old, numeric behavior (correlate(a,v) == correlate(v, - a), and the conjugate is not taken for complex arrays). If False, uses - the conventional signal processing definition (see note). - - See Also - -------- - convolve : Discrete, linear convolution of two - one-dimensional sequences. - acorrelate : Discrete correlation following the usual signal processing - definition for complex arrays, and without assuming that - ``correlate(a, b) == correlate(b, a)``. 
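The full-mode correlation can also be written out longhand as a sliding sum of products; a pure-Python sketch for real-valued inputs (`correlate_full` is an illustrative helper, not a NumPy API):

```python
import numpy as np

def correlate_full(a, v):
    # Full cross-correlation of two real 1-D sequences -- the same
    # thing as np.convolve(a, v[::-1], 'full'): slide the reversed
    # kernel across a and accumulate products at every overlap.
    n, m = len(a), len(v)
    out = [0.0] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            out[i + (m - 1 - j)] += a[i] * v[j]
    return out
```

The values match the docstring example below: ``correlate([1, 2, 3], [0, 1, 0.5], "full")`` gives ``[0.5, 2., 3.5, 3., 0.]``.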
- - Notes - ----- - If `old_behavior` is False, this function computes the correlation as - generally defined in signal processing texts:: - - z[k] = sum_n a[n] * conj(v[n+k]) - - with a and v sequences being zero-padded where necessary and conj being - the conjugate. - - Examples - -------- - >>> np.correlate([1, 2, 3], [0, 1, 0.5]) - array([ 3.5]) - >>> np.correlate([1, 2, 3], [0, 1, 0.5], "same") - array([ 2. , 3.5, 3. ]) - >>> np.correlate([1, 2, 3], [0, 1, 0.5], "full") - array([ 0.5, 2. , 3.5, 3. , 0. ]) - - """ - mode = _mode_from_name(mode) - if old_behavior: - warnings.warn(""" -The current behavior of correlate is deprecated for 1.4.0, and will be removed -for NumPy 1.5.0. - -The new behavior fits the conventional definition of correlation: inputs are -never swapped, and the second argument is conjugated for complex arrays.""", - DeprecationWarning) - return multiarray.correlate(a,v,mode) - else: - return multiarray.correlate2(a,v,mode) - -def convolve(a,v,mode='full'): - """ - Returns the discrete, linear convolution of two one-dimensional sequences. - - The convolution operator is often seen in signal processing, where it - models the effect of a linear time-invariant system on a signal [1]_. In - probability theory, the sum of two independent random variables is - distributed according to the convolution of their individual - distributions. - - Parameters - ---------- - a : (N,) array_like - First one-dimensional input array. - v : (M,) array_like - Second one-dimensional input array. - mode : {'full', 'valid', 'same'}, optional - 'full': - By default, mode is 'full'. This returns the convolution - at each point of overlap, with an output shape of (N+M-1,). At - the end-points of the convolution, the signals do not overlap - completely, and boundary effects may be seen. - - 'same': - Mode `same` returns output of length ``max(M, N)``. Boundary - effects are still visible. 
- - 'valid': - Mode `valid` returns output of length - ``max(M, N) - min(M, N) + 1``. The convolution product is only given - for points where the signals overlap completely. Values outside - the signal boundary have no effect. - - Returns - ------- - out : ndarray - Discrete, linear convolution of `a` and `v`. - - See Also - -------- - scipy.signal.fftconvolve : Convolve two arrays using the Fast Fourier - Transform. - scipy.linalg.toeplitz : Used to construct the convolution operator. - - Notes - ----- - The discrete convolution operation is defined as - - .. math:: (f * g)[n] = \\sum_{m = -\\infty}^{\\infty} f[m] g[n - m] - - It can be shown that a convolution :math:`x(t) * y(t)` in time/space - is equivalent to the multiplication :math:`X(f) Y(f)` in the Fourier - domain, after appropriate padding (padding is necessary to prevent - circular convolution). Since multiplication is more efficient (faster) - than convolution, the function `scipy.signal.fftconvolve` exploits the - FFT to calculate the convolution of large data-sets. - - References - ---------- - .. [1] Wikipedia, "Convolution", http://en.wikipedia.org/wiki/Convolution. - - Examples - -------- - Note how the convolution operator flips the second array - before "sliding" the two across one another: - - >>> np.convolve([1, 2, 3], [0, 1, 0.5]) - array([ 0. , 1. , 2.5, 4. , 1.5]) - - Only return the middle values of the convolution. - Contains boundary effects, where zeros are taken - into account: - - >>> np.convolve([1,2,3],[0,1,0.5], 'same') - array([ 1. , 2.5, 4. 
]) - - The two arrays are of the same length, so there - is only one position where they completely overlap: - - >>> np.convolve([1,2,3],[0,1,0.5], 'valid') - array([ 2.5]) - - """ - a,v = array(a, ndmin=1),array(v, ndmin=1) - if (len(v) > len(a)): - a, v = v, a - if len(a) == 0 : - raise ValueError('a cannot be empty') - if len(v) == 0 : - raise ValueError('v cannot be empty') - mode = _mode_from_name(mode) - return multiarray.correlate(a, v[::-1], mode) - -def outer(a,b): - """ - Compute the outer product of two vectors. - - Given two vectors, ``a = [a0, a1, ..., aM]`` and - ``b = [b0, b1, ..., bN]``, - the outer product [1]_ is:: - - [[a0*b0 a0*b1 ... a0*bN ] - [a1*b0 . - [ ... . - [aM*b0 aM*bN ]] - - Parameters - ---------- - a, b : array_like, shape (M,), (N,) - First and second input vectors. Inputs are flattened if they - are not already 1-dimensional. - - Returns - ------- - out : ndarray, shape (M, N) - ``out[i, j] = a[i] * b[j]`` - - References - ---------- - .. [1] : G. H. Golub and C. F. van Loan, *Matrix Computations*, 3rd - ed., Baltimore, MD, Johns Hopkins University Press, 1996, - pg. 8. 
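Since `outer` is implemented as a broadcast of a column against a row (``a.ravel()[:, newaxis] * b.ravel()[newaxis, :]``), the same result can be produced by hand; a small sketch with illustrative values:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0])

# Broadcasting a (3, 1) column against a (1, 2) row yields the
# (3, 2) outer product, exactly as the implementation does.
manual = a[:, np.newaxis] * b[np.newaxis, :]
via_outer = np.outer(a, b)
```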
- - Examples - -------- - Make a (*very* coarse) grid for computing a Mandelbrot set: - - >>> rl = np.outer(np.ones((5,)), np.linspace(-2, 2, 5)) - >>> rl - array([[-2., -1., 0., 1., 2.], - [-2., -1., 0., 1., 2.], - [-2., -1., 0., 1., 2.], - [-2., -1., 0., 1., 2.], - [-2., -1., 0., 1., 2.]]) - >>> im = np.outer(1j*np.linspace(2, -2, 5), np.ones((5,))) - >>> im - array([[ 0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j], - [ 0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j], - [ 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], - [ 0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j], - [ 0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j]]) - >>> grid = rl + im - >>> grid - array([[-2.+2.j, -1.+2.j, 0.+2.j, 1.+2.j, 2.+2.j], - [-2.+1.j, -1.+1.j, 0.+1.j, 1.+1.j, 2.+1.j], - [-2.+0.j, -1.+0.j, 0.+0.j, 1.+0.j, 2.+0.j], - [-2.-1.j, -1.-1.j, 0.-1.j, 1.-1.j, 2.-1.j], - [-2.-2.j, -1.-2.j, 0.-2.j, 1.-2.j, 2.-2.j]]) - - An example using a "vector" of letters: - - >>> x = np.array(['a', 'b', 'c'], dtype=object) - >>> np.outer(x, [1, 2, 3]) - array([[a, aa, aaa], - [b, bb, bbb], - [c, cc, ccc]], dtype=object) - - """ - a = asarray(a) - b = asarray(b) - return a.ravel()[:,newaxis]*b.ravel()[newaxis,:] - -# try to import blas optimized dot if available -try: - # importing this changes the dot function for basic 4 types - # to blas-optimized versions. - from _dotblas import dot, vdot, inner, alterdot, restoredot -except ImportError: - # docstrings are in add_newdocs.py - inner = multiarray.inner - dot = multiarray.dot - def vdot(a, b): - return dot(asarray(a).ravel().conj(), asarray(b).ravel()) - def alterdot(): - pass - def restoredot(): - pass - -def tensordot(a, b, axes=2): - """ - Compute tensor dot product along specified axes for arrays >= 1-D. 
- - Given two tensors (arrays of dimension greater than or equal to one), - ``a`` and ``b``, and an array_like object containing two array_like - objects, ``(a_axes, b_axes)``, sum the products of ``a``'s and ``b``'s - elements (components) over the axes specified by ``a_axes`` and - ``b_axes``. The third argument can be a single non-negative - integer_like scalar, ``N``; if it is such, then the last ``N`` - dimensions of ``a`` and the first ``N`` dimensions of ``b`` are summed - over. - - Parameters - ---------- - a, b : array_like, len(shape) >= 1 - Tensors to "dot". - - axes : variable type - - * integer_like scalar - Number of axes to sum over (applies to both arrays); or - - * array_like, shape = (2,), both elements array_like - Axes to be summed over, first sequence applying to ``a``, second - to ``b``. - - See Also - -------- - numpy.dot - - Notes - ----- - When there is more than one axis to sum over - and they are not the last - (first) axes of ``a`` (``b``) - the argument ``axes`` should consist of - two sequences of the same length, with the first axis to sum over given - first in both sequences, the second axis second, and so forth. - - Examples - -------- - A "traditional" example: - - >>> a = np.arange(60.).reshape(3,4,5) - >>> b = np.arange(24.).reshape(4,3,2) - >>> c = np.tensordot(a,b, axes=([1,0],[0,1])) - >>> c.shape - (5, 2) - >>> c - array([[ 4400., 4730.], - [ 4532., 4874.], - [ 4664., 5018.], - [ 4796., 5162.], - [ 4928., 5306.]]) - >>> # A slower but equivalent way of computing the same... - >>> d = np.zeros((5,2)) - >>> for i in range(5): - ... for j in range(2): - ... for k in range(3): - ... for n in range(4): - ... 
d[i,j] += a[k,n,i] * b[n,k,j] - >>> c == d - array([[ True, True], - [ True, True], - [ True, True], - [ True, True], - [ True, True]], dtype=bool) - - An extended example taking advantage of the overloading of + and \\*: - - >>> a = np.array(range(1, 9)) - >>> a.shape = (2, 2, 2) - >>> A = np.array(('a', 'b', 'c', 'd'), dtype=object) - >>> A.shape = (2, 2) - >>> a; A - array([[[1, 2], - [3, 4]], - [[5, 6], - [7, 8]]]) - array([[a, b], - [c, d]], dtype=object) - - >>> np.tensordot(a, A) # third argument default is 2 - array([abbcccdddd, aaaaabbbbbbcccccccdddddddd], dtype=object) - - >>> np.tensordot(a, A, 1) - array([[[acc, bdd], - [aaacccc, bbbdddd]], - [[aaaaacccccc, bbbbbdddddd], - [aaaaaaacccccccc, bbbbbbbdddddddd]]], dtype=object) - - >>> np.tensordot(a, A, 0) # "Left for reader" (result too long to incl.) - array([[[[[a, b], - [c, d]], - ... - - >>> np.tensordot(a, A, (0, 1)) - array([[[abbbbb, cddddd], - [aabbbbbb, ccdddddd]], - [[aaabbbbbbb, cccddddddd], - [aaaabbbbbbbb, ccccdddddddd]]], dtype=object) - - >>> np.tensordot(a, A, (2, 1)) - array([[[abb, cdd], - [aaabbbb, cccdddd]], - [[aaaaabbbbbb, cccccdddddd], - [aaaaaaabbbbbbbb, cccccccdddddddd]]], dtype=object) - - >>> np.tensordot(a, A, ((0, 1), (0, 1))) - array([abbbcccccddddddd, aabbbbccccccdddddddd], dtype=object) - - >>> np.tensordot(a, A, ((2, 1), (1, 0))) - array([acccbbdddd, aaaaacccccccbbbbbbdddddddd], dtype=object) - - """ - try: - iter(axes) - except: - axes_a = range(-axes,0) - axes_b = range(0,axes) - else: - axes_a, axes_b = axes - try: - na = len(axes_a) - axes_a = list(axes_a) - except TypeError: - axes_a = [axes_a] - na = 1 - try: - nb = len(axes_b) - axes_b = list(axes_b) - except TypeError: - axes_b = [axes_b] - nb = 1 - - a, b = asarray(a), asarray(b) - as_ = a.shape - nda = len(a.shape) - bs = b.shape - ndb = len(b.shape) - equal = True - if (na != nb): equal = False - else: - for k in xrange(na): - if as_[axes_a[k]] != bs[axes_b[k]]: - equal = False - break - if axes_a[k] < 0: - 
axes_a[k] += nda - if axes_b[k] < 0: - axes_b[k] += ndb - if not equal: - raise ValueError, "shape-mismatch for sum" - - # Move the axes to sum over to the end of "a" - # and to the front of "b" - notin = [k for k in range(nda) if k not in axes_a] - newaxes_a = notin + axes_a - N2 = 1 - for axis in axes_a: - N2 *= as_[axis] - newshape_a = (-1, N2) - olda = [as_[axis] for axis in notin] - - notin = [k for k in range(ndb) if k not in axes_b] - newaxes_b = axes_b + notin - N2 = 1 - for axis in axes_b: - N2 *= bs[axis] - newshape_b = (N2, -1) - oldb = [bs[axis] for axis in notin] - - at = a.transpose(newaxes_a).reshape(newshape_a) - bt = b.transpose(newaxes_b).reshape(newshape_b) - res = dot(at, bt) - return res.reshape(olda + oldb) - -def roll(a, shift, axis=None): - """ - Roll array elements along a given axis. - - Elements that roll beyond the last position are re-introduced at - the first. - - Parameters - ---------- - a : array_like - Input array. - shift : int - The number of places by which elements are shifted. - axis : int, optional - The axis along which elements are shifted. By default, the array - is flattened before shifting, after which the original - shape is restored. - - Returns - ------- - res : ndarray - Output array, with the same shape as `a`. - - See Also - -------- - rollaxis : Roll the specified axis backwards, until it lies in a - given position. 
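The implementation of `roll` builds a wrapped index vector with `concatenate` and applies it with `take`; the trick can be spelled out for the flattened 1-D case (a sketch, variable names illustrative):

```python
import numpy as np

x = np.arange(10)
shift, n = 2, x.size

# Wrapped index vector: the last `shift` positions come first,
# followed by the rest -- the same construction roll() uses.
idx = np.concatenate((np.arange(n - shift, n), np.arange(n - shift)))
manual = x.take(idx)
```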
- - Examples - -------- - >>> x = np.arange(10) - >>> np.roll(x, 2) - array([8, 9, 0, 1, 2, 3, 4, 5, 6, 7]) - - >>> x2 = np.reshape(x, (2,5)) - >>> x2 - array([[0, 1, 2, 3, 4], - [5, 6, 7, 8, 9]]) - >>> np.roll(x2, 1) - array([[9, 0, 1, 2, 3], - [4, 5, 6, 7, 8]]) - >>> np.roll(x2, 1, axis=0) - array([[5, 6, 7, 8, 9], - [0, 1, 2, 3, 4]]) - >>> np.roll(x2, 1, axis=1) - array([[4, 0, 1, 2, 3], - [9, 5, 6, 7, 8]]) - - """ - a = asanyarray(a) - if axis is None: - n = a.size - reshape = True - else: - n = a.shape[axis] - reshape = False - shift %= n - indexes = concatenate((arange(n-shift,n),arange(n-shift))) - res = a.take(indexes, axis) - if reshape: - return res.reshape(a.shape) - else: - return res - -def rollaxis(a, axis, start=0): - """ - Roll the specified axis backwards, until it lies in a given position. - - Parameters - ---------- - a : ndarray - Input array. - axis : int - The axis to roll backwards. The positions of the other axes do not - change relative to one another. - start : int, optional - The axis is rolled until it lies before this position. The default, - 0, results in a "complete" roll. - - Returns - ------- - res : ndarray - Output array. - - See Also - -------- - roll : Roll the elements of an array by a number of positions along a - given axis. 
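`rollaxis` reduces to a single `transpose` with a re-ordered axis list, as in its implementation; a sketch reproducing the ``np.rollaxis(a, 2)`` case:

```python
import numpy as np

a = np.ones((3, 4, 5, 6))

# Remove axis 2 from the identity permutation and re-insert it at
# position 0, then transpose -- equivalent to np.rollaxis(a, 2).
axes = list(range(a.ndim))
axes.remove(2)
axes.insert(0, 2)
manual = a.transpose(axes)
```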
- - Examples - -------- - >>> a = np.ones((3,4,5,6)) - >>> np.rollaxis(a, 3, 1).shape - (3, 6, 4, 5) - >>> np.rollaxis(a, 2).shape - (5, 3, 4, 6) - >>> np.rollaxis(a, 1, 4).shape - (3, 5, 6, 4) - - """ - n = a.ndim - if axis < 0: - axis += n - if start < 0: - start += n - msg = 'rollaxis: %s (%d) must be >=0 and < %d' - if not (0 <= axis < n): - raise ValueError, msg % ('axis', axis, n) - if not (0 <= start < n+1): - raise ValueError, msg % ('start', start, n+1) - if (axis < start): # it's been removed - start -= 1 - if axis==start: - return a - axes = range(0,n) - axes.remove(axis) - axes.insert(start, axis) - return a.transpose(axes) - -# fix hack in scipy which imports this function -def _move_axis_to_0(a, axis): - return rollaxis(a, axis, 0) - -def cross(a, b, axisa=-1, axisb=-1, axisc=-1, axis=None): - """ - Return the cross product of two (arrays of) vectors. - - The cross product of `a` and `b` in :math:`R^3` is a vector perpendicular - to both `a` and `b`. If `a` and `b` are arrays of vectors, the vectors - are defined by the last axis of `a` and `b` by default, and these axes - can have dimensions 2 or 3. Where the dimension of either `a` or `b` is - 2, the third component of the input vector is assumed to be zero and the - cross product calculated accordingly. In cases where both input vectors - have dimension 2, the z-component of the cross product is returned. - - Parameters - ---------- - a : array_like - Components of the first vector(s). - b : array_like - Components of the second vector(s). - axisa : int, optional - Axis of `a` that defines the vector(s). By default, the last axis. - axisb : int, optional - Axis of `b` that defines the vector(s). By default, the last axis. - axisc : int, optional - Axis of `c` containing the cross product vector(s). By default, the - last axis. - axis : int, optional - If defined, the axis of `a`, `b` and `c` that defines the vector(s) - and cross product(s). Overrides `axisa`, `axisb` and `axisc`. 
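For two 3-vectors the cross product reduces to the three component formulas used in the implementation; a pure-Python sketch (`cross3` is an illustrative name, not a NumPy function):

```python
def cross3(a, b):
    # Component formulas for the 3-D cross product:
    # c = (a1*b2 - a2*b1, a2*b0 - a0*b2, a0*b1 - a1*b0)
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
```

A 2-vector input can be handled the same way by treating its missing z-component as zero, as the docstring explains.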
- - Returns - ------- - c : ndarray - Vector cross product(s). - - Raises - ------ - ValueError - When the dimension of the vector(s) in `a` and/or `b` does not - equal 2 or 3. - - See Also - -------- - inner : Inner product - outer : Outer product. - ix_ : Construct index arrays. - - Examples - -------- - Vector cross-product. - - >>> x = [1, 2, 3] - >>> y = [4, 5, 6] - >>> np.cross(x, y) - array([-3, 6, -3]) - - One vector with dimension 2. - - >>> x = [1, 2] - >>> y = [4, 5, 6] - >>> np.cross(x, y) - array([12, -6, -3]) - - Equivalently: - - >>> x = [1, 2, 0] - >>> y = [4, 5, 6] - >>> np.cross(x, y) - array([12, -6, -3]) - - Both vectors with dimension 2. - - >>> x = [1,2] - >>> y = [4,5] - >>> np.cross(x, y) - -3 - - Multiple vector cross-products. Note that the direction of the cross - product vector is defined by the `right-hand rule`. - - >>> x = np.array([[1,2,3], [4,5,6]]) - >>> y = np.array([[4,5,6], [1,2,3]]) - >>> np.cross(x, y) - array([[-3, 6, -3], - [ 3, -6, 3]]) - - The orientation of `c` can be changed using the `axisc` keyword. - - >>> np.cross(x, y, axisc=0) - array([[-3, 3], - [ 6, -6], - [-3, 3]]) - - Change the vector definition of `x` and `y` using `axisa` and `axisb`. 
- - >>> x = np.array([[1,2,3], [4,5,6], [7, 8, 9]]) - >>> y = np.array([[7, 8, 9], [4,5,6], [1,2,3]]) - >>> np.cross(x, y) - array([[ -6, 12, -6], - [ 0, 0, 0], - [ 6, -12, 6]]) - >>> np.cross(x, y, axisa=0, axisb=0) - array([[-24, 48, -24], - [-30, 60, -30], - [-36, 72, -36]]) - - """ - if axis is not None: - axisa,axisb,axisc=(axis,)*3 - a = asarray(a).swapaxes(axisa, 0) - b = asarray(b).swapaxes(axisb, 0) - msg = "incompatible dimensions for cross product\n"\ - "(dimension must be 2 or 3)" - if (a.shape[0] not in [2,3]) or (b.shape[0] not in [2,3]): - raise ValueError(msg) - if a.shape[0] == 2: - if (b.shape[0] == 2): - cp = a[0]*b[1] - a[1]*b[0] - if cp.ndim == 0: - return cp - else: - return cp.swapaxes(0, axisc) - else: - x = a[1]*b[2] - y = -a[0]*b[2] - z = a[0]*b[1] - a[1]*b[0] - elif a.shape[0] == 3: - if (b.shape[0] == 3): - x = a[1]*b[2] - a[2]*b[1] - y = a[2]*b[0] - a[0]*b[2] - z = a[0]*b[1] - a[1]*b[0] - else: - x = -a[2]*b[1] - y = a[2]*b[0] - z = a[0]*b[1] - a[1]*b[0] - cp = array([x,y,z]) - if cp.ndim == 1: - return cp - else: - return cp.swapaxes(0,axisc) - - -#Use numarray's printing function -from arrayprint import array2string, get_printoptions, set_printoptions - -_typelessdata = [int_, float_, complex_] -if issubclass(intc, int): - _typelessdata.append(intc) - -if issubclass(longlong, int): - _typelessdata.append(longlong) - -def array_repr(arr, max_line_width=None, precision=None, suppress_small=None): - """ - Return the string representation of an array. - - Parameters - ---------- - arr : ndarray - Input array. - max_line_width : int, optional - The maximum number of columns the string should span. Newline - characters split the string appropriately after array elements. - precision : int, optional - Floating point precision. Default is the current printing precision - (usually 8), which can be altered using `set_printoptions`. - suppress_small : bool, optional - Represent very small numbers as zero, default is False. 
Very small - is defined by `precision`, if the precision is 8 then - numbers smaller than 5e-9 are represented as zero. - - Returns - ------- - string : str - The string representation of an array. - - See Also - -------- - array_str, array2string, set_printoptions - - Examples - -------- - >>> np.array_repr(np.array([1,2])) - 'array([1, 2])' - >>> np.array_repr(np.ma.array([0.])) - 'MaskedArray([ 0.])' - >>> np.array_repr(np.array([], np.int32)) - 'array([], dtype=int32)' - - >>> x = np.array([1e-6, 4e-7, 2, 3]) - >>> np.array_repr(x, precision=6, suppress_small=True) - 'array([ 0.000001, 0. , 2. , 3. ])' - - """ - if arr.size > 0 or arr.shape==(0,): - lst = array2string(arr, max_line_width, precision, suppress_small, - ', ', "array(") - else: # show zero-length shape unless it is (0,) - lst = "[], shape=%s" % (repr(arr.shape),) - typeless = arr.dtype.type in _typelessdata - - if arr.__class__ is not ndarray: - cName= arr.__class__.__name__ - else: - cName = "array" - if typeless and arr.size: - return cName + "(%s)" % lst - else: - typename=arr.dtype.name - lf = '' - if issubclass(arr.dtype.type, flexible): - if arr.dtype.names: - typename = "%s" % str(arr.dtype) - else: - typename = "'%s'" % str(arr.dtype) - lf = '\n'+' '*len("array(") - return cName + "(%s, %sdtype=%s)" % (lst, lf, typename) - -def array_str(a, max_line_width=None, precision=None, suppress_small=None): - """ - Return a string representation of the data in an array. - - The data in the array is returned as a single string. This function is - similar to `array_repr`, the difference being that `array_repr` also - returns information on the kind of array and its data type. - - Parameters - ---------- - a : ndarray - Input array. - max_line_width : int, optional - Inserts newlines if text is longer than `max_line_width`. The - default is, indirectly, 75. - precision : int, optional - Floating point precision. 
Default is the current printing precision - (usually 8), which can be altered using `set_printoptions`. - suppress_small : bool, optional - Represent numbers "very close" to zero as zero; default is False. - Very close is defined by precision: if the precision is 8, e.g., - numbers smaller (in absolute value) than 5e-9 are represented as - zero. - - See Also - -------- - array2string, array_repr, set_printoptions - - Examples - -------- - >>> np.array_str(np.arange(3)) - '[0 1 2]' - - """ - return array2string(a, max_line_width, precision, suppress_small, ' ', "", str) - -def set_string_function(f, repr=True): - """ - Set a Python function to be used when pretty printing arrays. - - Parameters - ---------- - f : function or None - Function to be used to pretty print arrays. The function should expect - a single array argument and return a string of the representation of - the array. If None, the function is reset to the default NumPy function - to print arrays. - repr : bool, optional - If True (default), the function for pretty printing (``__repr__``) - is set, if False the function that returns the default string - representation (``__str__``) is set. - - See Also - -------- - set_printoptions, get_printoptions - - Examples - -------- - >>> def pprint(arr): - ... return 'HA! - What are you going to do now?' - ... - >>> np.set_string_function(pprint) - >>> a = np.arange(10) - >>> a - HA! - What are you going to do now? - >>> print a - [0 1 2 3 4 5 6 7 8 9] - - We can reset the function to the default: - - >>> np.set_string_function(None) - >>> a - array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - - `repr` affects either pretty printing or normal string representation. - Note that ``__repr__`` is still affected by setting ``__str__`` - because the width of each array element in the returned string becomes - equal to the length of the result of ``__str__()``. 
- - >>> x = np.arange(4) - >>> np.set_string_function(lambda x:'random', repr=False) - >>> x.__str__() - 'random' - >>> x.__repr__() - 'array([ 0, 1, 2, 3])' - - """ - if f is None: - if repr: - return multiarray.set_string_function(array_repr, 1) - else: - return multiarray.set_string_function(array_str, 0) - else: - return multiarray.set_string_function(f, repr) - -set_string_function(array_str, 0) -set_string_function(array_repr, 1) - -little_endian = (sys.byteorder == 'little') - - -def indices(dimensions, dtype=int): - """ - Return an array representing the indices of a grid. - - Compute an array where the subarrays contain index values 0,1,... - varying only along the corresponding axis. - - Parameters - ---------- - dimensions : sequence of ints - The shape of the grid. - dtype : dtype, optional - Data type of the result. - - Returns - ------- - grid : ndarray - The array of grid indices, - ``grid.shape = (len(dimensions),) + tuple(dimensions)``. - - See Also - -------- - mgrid, meshgrid - - Notes - ----- - The output shape is obtained by prepending the number of dimensions - in front of the tuple of dimensions, i.e. if `dimensions` is a tuple - ``(r0, ..., rN-1)`` of length ``N``, the output shape is - ``(N,r0,...,rN-1)``. - - The subarrays ``grid[k]`` contains the N-D array of indices along the - ``k-th`` axis. Explicitly:: - - grid[k,i0,i1,...,iN-1] = ik - - Examples - -------- - >>> grid = np.indices((2, 3)) - >>> grid.shape - (2, 2, 3) - >>> grid[0] # row indices - array([[0, 0, 0], - [1, 1, 1]]) - >>> grid[1] # column indices - array([[0, 1, 2], - [0, 1, 2]]) - - The indices can be used as an index into an array. - - >>> x = np.arange(20).reshape(5, 4) - >>> row, col = np.indices((2, 3)) - >>> x[row, col] - array([[0, 1, 2], - [4, 5, 6]]) - - Note that it would be more straightforward in the above example to - extract the required elements directly with ``x[:2, :3]``. 
- - """ - dimensions = tuple(dimensions) - N = len(dimensions) - if N == 0: - return array([],dtype=dtype) - res = empty((N,)+dimensions, dtype=dtype) - for i, dim in enumerate(dimensions): - tmp = arange(dim,dtype=dtype) - tmp.shape = (1,)*i + (dim,)+(1,)*(N-i-1) - newdim = dimensions[:i] + (1,)+ dimensions[i+1:] - val = zeros(newdim, dtype) - add(tmp, val, res[i]) - return res - -def fromfunction(function, shape, **kwargs): - """ - Construct an array by executing a function over each coordinate. - - The resulting array therefore has a value ``fn(x, y, z)`` at - coordinate ``(x, y, z)``. - - Parameters - ---------- - function : callable - The function is called with N parameters, each of which - represents the coordinates of the array varying along a - specific axis. For example, if `shape` were ``(2, 2)``, then - the parameters would be two arrays, ``[[0, 0], [1, 1]]`` and - ``[[0, 1], [0, 1]]``. `function` must be capable of operating on - arrays, and should return a scalar value. - shape : (N,) tuple of ints - Shape of the output array, which also determines the shape of - the coordinate arrays passed to `function`. - dtype : data-type, optional - Data-type of the coordinate arrays passed to `function`. - By default, `dtype` is float. - - Returns - ------- - out : any - The result of the call to `function` is passed back directly. - Therefore the type and shape of `out` is completely determined by - `function`. - - See Also - -------- - indices, meshgrid - - Notes - ----- - Keywords other than `shape` and `dtype` are passed to `function`. 
- - Examples - -------- - >>> np.fromfunction(lambda i, j: i == j, (3, 3), dtype=int) - array([[ True, False, False], - [False, True, False], - [False, False, True]], dtype=bool) - - >>> np.fromfunction(lambda i, j: i + j, (3, 3), dtype=int) - array([[0, 1, 2], - [1, 2, 3], - [2, 3, 4]]) - - """ - dtype = kwargs.pop('dtype', float) - args = indices(shape, dtype=dtype) - return function(*args,**kwargs) - -def isscalar(num): - """ - Returns True if the type of `num` is a scalar type. - - Parameters - ---------- - num : any - Input argument, can be of any type and shape. - - Returns - ------- - val : bool - True if `num` is a scalar type, False if it is not. - - Examples - -------- - >>> np.isscalar(3.1) - True - >>> np.isscalar([3.1]) - False - >>> np.isscalar(False) - True - - """ - if isinstance(num, generic): - return True - else: - return type(num) in ScalarType - -_lkup = { - '0':'0000', - '1':'0001', - '2':'0010', - '3':'0011', - '4':'0100', - '5':'0101', - '6':'0110', - '7':'0111', - '8':'1000', - '9':'1001', - 'a':'1010', - 'b':'1011', - 'c':'1100', - 'd':'1101', - 'e':'1110', - 'f':'1111', - 'A':'1010', - 'B':'1011', - 'C':'1100', - 'D':'1101', - 'E':'1110', - 'F':'1111', - 'L':''} - -def binary_repr(num, width=None): - """ - Return the binary representation of the input number as a string. - - For negative numbers, if width is not given, a minus sign is added to the - front. If width is given, the two's complement of the number is - returned, with respect to that width. - - In a two's-complement system negative numbers are represented by the two's - complement of the absolute value. This is the most common method of - representing signed integers on computers [1]_. A N-bit two's-complement - system can represent every integer in the range - :math:`-2^{N-1}` to :math:`+2^{N-1}-1`. - - Parameters - ---------- - num : int - Only an integer decimal number can be used. 
- width : int, optional - The length of the returned string if `num` is positive, the length of - the two's complement if `num` is negative. - - Returns - ------- - bin : str - Binary representation of `num` or two's complement of `num`. - - See Also - -------- - base_repr: Return a string representation of a number in the given base - system. - - Notes - ----- - `binary_repr` is equivalent to using `base_repr` with base 2, but about 25x - faster. - - References - ---------- - .. [1] Wikipedia, "Two's complement", - http://en.wikipedia.org/wiki/Two's_complement - - Examples - -------- - >>> np.binary_repr(3) - '11' - >>> np.binary_repr(-3) - '-11' - >>> np.binary_repr(3, width=4) - '0011' - - The two's complement is returned when the input number is negative and - width is specified: - - >>> np.binary_repr(-3, width=4) - '1101' - - """ - sign = '' - if num < 0: - if width is None: - sign = '-' - num = -num - else: - # replace num with its 2-complement - num = 2**width + num - elif num == 0: - return '0'*(width or 1) - ostr = hex(num) - bin = ''.join([_lkup[ch] for ch in ostr[2:]]) - bin = bin.lstrip('0') - if width is not None: - bin = bin.zfill(width) - return sign + bin - -def base_repr(number, base=2, padding=0): - """ - Return a string representation of a number in the given base system. - - Parameters - ---------- - number : int - The value to convert. Only positive values are handled. - base : int, optional - Convert `number` to the `base` number system. The valid range is 2-36, - the default value is 2. - padding : int, optional - Number of zeros padded on the left. Default is 0 (no padding). - - Returns - ------- - out : str - String representation of `number` in `base` system. - - See Also - -------- - binary_repr : Faster version of `base_repr` for base 2. 
- - Examples - -------- - >>> np.base_repr(5) - '101' - >>> np.base_repr(6, 5) - '11' - >>> np.base_repr(7, base=5, padding=3) - '00012' - - >>> np.base_repr(10, base=16) - 'A' - >>> np.base_repr(32, base=16) - '20' - - """ - digits = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' - if base > len(digits): - raise ValueError("Bases greater than 36 not handled in base_repr.") - - num = abs(number) - res = [] - while num: - res.append(digits[num % base]) - num //= base - if padding: - res.append('0' * padding) - if number < 0: - res.append('-') - return ''.join(reversed(res or '0')) - -from cPickle import load, loads -_cload = load -_file = open - -def load(file): - """ - Wrapper around cPickle.load which accepts either a file-like object or - a filename. - - Note that the NumPy binary format is not based on pickle/cPickle anymore. - For details on the preferred way of loading and saving files, see `load` - and `save`. - - See Also - -------- - load, save - - """ - if isinstance(file, type("")): - file = _file(file,"rb") - return _cload(file) - -# These are all essentially abbreviations -# These might wind up in a special abbreviations module - -def _maketup(descr, val): - dt = dtype(descr) - # Place val in all scalar tuples: - fields = dt.fields - if fields is None: - return val - else: - res = [_maketup(fields[name][0],val) for name in dt.names] - return tuple(res) - -def ones(shape, dtype=None, order='C'): - """ - Return a new array of given shape and type, filled with ones. - - Please refer to the documentation for `zeros` for further details. - - See Also - -------- - zeros, ones_like - - Examples - -------- - >>> np.ones(5) - array([ 1., 1., 1., 1., 1.]) - - >>> np.ones((5,), dtype=np.int) - array([1, 1, 1, 1, 1]) - - >>> np.ones((2, 1)) - array([[ 1.], - [ 1.]]) - - >>> s = (2,2) - >>> np.ones(s) - array([[ 1., 1.], - [ 1., 1.]]) - - """ - a = empty(shape, dtype, order) - try: - a.fill(1) - # Above is faster now after addition of fast loops. 
- #a = zeros(shape, dtype, order) - #a+=1 - except TypeError: - obj = _maketup(dtype, 1) - a.fill(obj) - return a - -def identity(n, dtype=None): - """ - Return the identity array. - - The identity array is a square array with ones on - the main diagonal. - - Parameters - ---------- - n : int - Number of rows (and columns) in `n` x `n` output. - dtype : data-type, optional - Data-type of the output. Defaults to ``float``. - - Returns - ------- - out : ndarray - `n` x `n` array with its main diagonal set to one, - and all other elements 0. - - Examples - -------- - >>> np.identity(3) - array([[ 1., 0., 0.], - [ 0., 1., 0.], - [ 0., 0., 1.]]) - - """ - a = zeros((n,n), dtype=dtype) - a.flat[::n+1] = 1 - return a - -def allclose(a, b, rtol=1.e-5, atol=1.e-8): - """ - Returns True if two arrays are element-wise equal within a tolerance. - - The tolerance values are positive, typically very small numbers. The - relative difference (`rtol` * abs(`b`)) and the absolute difference - `atol` are added together to compare against the absolute difference - between `a` and `b`. - - Parameters - ---------- - a, b : array_like - Input arrays to compare. - rtol : float - The relative tolerance parameter (see Notes). - atol : float - The absolute tolerance parameter (see Notes). - - Returns - ------- - y : bool - Returns True if the two arrays are equal within the given - tolerance; False otherwise. If either array contains NaN, then - False is returned. - - See Also - -------- - all, any, alltrue, sometrue - - Notes - ----- - If the following equation is element-wise True, then allclose returns - True. - - absolute(`a` - `b`) <= (`atol` + `rtol` * absolute(`b`)) - - The above equation is not symmetric in `a` and `b`, so that - `allclose(a, b)` might be different from `allclose(b, a)` in - some rare cases. 
- - Examples - -------- - >>> np.allclose([1e10,1e-7], [1.00001e10,1e-8]) - False - >>> np.allclose([1e10,1e-8], [1.00001e10,1e-9]) - True - >>> np.allclose([1e10,1e-8], [1.0001e10,1e-9]) - False - >>> np.allclose([1.0, np.nan], [1.0, np.nan]) - False - - """ - x = array(a, copy=False) - y = array(b, copy=False) - xinf = isinf(x) - if not all(xinf == isinf(y)): - return False - if not any(xinf): - return all(less_equal(absolute(x-y), atol + rtol * absolute(y))) - if not all(x[xinf] == y[xinf]): - return False - x = x[~xinf] - y = y[~xinf] - return all(less_equal(absolute(x-y), atol + rtol * absolute(y))) - -def array_equal(a1, a2): - """ - True if two arrays have the same shape and elements, False otherwise. - - Parameters - ---------- - a1, a2 : array_like - Input arrays. - - Returns - ------- - b : bool - Returns True if the arrays are equal. - - See Also - -------- - allclose: Returns True if two arrays are element-wise equal within a - tolerance. - array_equiv: Returns True if input arrays are shape consistent and all - elements equal. - - Examples - -------- - >>> np.array_equal([1, 2], [1, 2]) - True - >>> np.array_equal(np.array([1, 2]), np.array([1, 2])) - True - >>> np.array_equal([1, 2], [1, 2, 3]) - False - >>> np.array_equal([1, 2], [1, 4]) - False - - """ - try: - a1, a2 = asarray(a1), asarray(a2) - except: - return False - if a1.shape != a2.shape: - return False - return bool(logical_and.reduce(equal(a1,a2).ravel())) - -def array_equiv(a1, a2): - """ - Returns True if input arrays are shape consistent and all elements equal. - - Shape consistent means they are either the same shape, or one input array - can be broadcasted to create the same shape as the other one. - - Parameters - ---------- - a1, a2 : array_like - Input arrays. - - Returns - ------- - out : bool - True if equivalent, False otherwise. 
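The elementwise test inside `allclose` above, applied to one pair of finite values, is just the inequality from its Notes section. A scalar sketch makes the asymmetry visible: the relative term scales with `|b|` only, never `|a|`:

```python
def close(a, b, rtol=1.e-5, atol=1.e-8):
    # absolute(a - b) <= atol + rtol * absolute(b), as in the Notes above.
    return abs(a - b) <= atol + rtol * abs(b)

assert close(1e10, 1.00001e10)        # within relative tolerance of b
assert not close(1e-7, 1e-8)          # atol = 1e-8 is too small here
```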
- - Examples - -------- - >>> np.array_equiv([1, 2], [1, 2]) - True - >>> np.array_equiv([1, 2], [1, 3]) - False - - Showing the shape equivalence: - - >>> np.array_equiv([1, 2], [[1, 2], [1, 2]]) - True - >>> np.array_equiv([1, 2], [[1, 2, 1, 2], [1, 2, 1, 2]]) - False - - >>> np.array_equiv([1, 2], [[1, 2], [1, 3]]) - False - - """ - try: - a1, a2 = asarray(a1), asarray(a2) - except: - return False - try: - return bool(logical_and.reduce(equal(a1,a2).ravel())) - except ValueError: - return False - - -_errdict = {"ignore":ERR_IGNORE, - "warn":ERR_WARN, - "raise":ERR_RAISE, - "call":ERR_CALL, - "print":ERR_PRINT, - "log":ERR_LOG} - -_errdict_rev = {} -for key in _errdict.keys(): - _errdict_rev[_errdict[key]] = key -del key - -def seterr(all=None, divide=None, over=None, under=None, invalid=None): - """ - Set how floating-point errors are handled. - - Note that operations on integer scalar types (such as `int16`) are - handled like floating point, and are affected by these settings. - - Parameters - ---------- - all : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}, optional - Set treatment for all types of floating-point errors at once: - - - ignore: Take no action when the exception occurs. - - warn: Print a `RuntimeWarning` (via the Python `warnings` module). - - raise: Raise a `FloatingPointError`. - - call: Call a function specified using the `seterrcall` function. - - print: Print a warning directly to ``stdout``. - - log: Record error in a Log object specified by `seterrcall`. - - The default is not to change the current behavior. - divide : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}, optional - Treatment for division by zero. - over : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}, optional - Treatment for floating-point overflow. - under : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}, optional - Treatment for floating-point underflow. 
- invalid : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}, optional - Treatment for invalid floating-point operation. - - Returns - ------- - old_settings : dict - Dictionary containing the old settings. - - See also - -------- - seterrcall : Set a callback function for the 'call' mode. - geterr, geterrcall - - Notes - ----- - The floating-point exceptions are defined in the IEEE 754 standard [1]: - - - Division by zero: infinite result obtained from finite numbers. - - Overflow: result too large to be expressed. - - Underflow: result so close to zero that some precision - was lost. - - Invalid operation: result is not an expressible number, typically - indicates that a NaN was produced. - - .. [1] http://en.wikipedia.org/wiki/IEEE_754 - - Examples - -------- - >>> old_settings = np.seterr(all='ignore') #seterr to known value - >>> np.seterr(over='raise') - {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', - 'under': 'ignore'} - >>> np.seterr(all='ignore') # reset to default - {'over': 'raise', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'} - - >>> np.int16(32000) * np.int16(3) - 30464 - >>> old_settings = np.seterr(all='warn', over='raise') - >>> np.int16(32000) * np.int16(3) - Traceback (most recent call last): - File "", line 1, in - FloatingPointError: overflow encountered in short_scalars - - >>> old_settings = np.seterr(all='print') - >>> np.geterr() - {'over': 'print', 'divide': 'print', 'invalid': 'print', 'under': 'print'} - >>> np.int16(32000) * np.int16(3) - Warning: overflow encountered in short_scalars - 30464 - - """ - - pyvals = umath.geterrobj() - old = geterr() - - if divide is None: divide = all or old['divide'] - if over is None: over = all or old['over'] - if under is None: under = all or old['under'] - if invalid is None: invalid = all or old['invalid'] - - maskvalue = ((_errdict[divide] << SHIFT_DIVIDEBYZERO) + - (_errdict[over] << SHIFT_OVERFLOW ) + - (_errdict[under] << SHIFT_UNDERFLOW) + - (_errdict[invalid] << 
SHIFT_INVALID)) - - pyvals[1] = maskvalue - umath.seterrobj(pyvals) - return old - - -def geterr(): - """ - Get the current way of handling floating-point errors. - - Returns - ------- - res : dict - A dictionary with keys "divide", "over", "under", and "invalid", - whose values are from the strings "ignore", "print", "log", "warn", - "raise", and "call". The keys represent possible floating-point - exceptions, and the values define how these exceptions are handled. - - See Also - -------- - geterrcall, seterr, seterrcall - - Notes - ----- - For complete documentation of the types of floating-point exceptions and - treatment options, see `seterr`. - - Examples - -------- - >>> np.geterr() # default is all set to 'ignore' - {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', - 'under': 'ignore'} - >>> np.arange(3.) / np.arange(3.) - array([ NaN, 1., 1.]) - - >>> oldsettings = np.seterr(all='warn', over='raise') - >>> np.geterr() - {'over': 'raise', 'divide': 'warn', 'invalid': 'warn', 'under': 'warn'} - >>> np.arange(3.) / np.arange(3.) - __main__:1: RuntimeWarning: invalid value encountered in divide - array([ NaN, 1., 1.]) - - """ - maskvalue = umath.geterrobj()[1] - mask = 7 - res = {} - val = (maskvalue >> SHIFT_DIVIDEBYZERO) & mask - res['divide'] = _errdict_rev[val] - val = (maskvalue >> SHIFT_OVERFLOW) & mask - res['over'] = _errdict_rev[val] - val = (maskvalue >> SHIFT_UNDERFLOW) & mask - res['under'] = _errdict_rev[val] - val = (maskvalue >> SHIFT_INVALID) & mask - res['invalid'] = _errdict_rev[val] - return res - -def setbufsize(size): - """ - Set the size of the buffer used in ufuncs. - - Parameters - ---------- - size : int - Size of buffer. - - """ - if size > 10e6: - raise ValueError, "Buffer size, %s, is too big." % size - if size < 5: - raise ValueError, "Buffer size, %s, is too small." %size - if size % 16 != 0: - raise ValueError, "Buffer size, %s, is not a multiple of 16." 
%size - - pyvals = umath.geterrobj() - old = getbufsize() - pyvals[0] = size - umath.seterrobj(pyvals) - return old - -def getbufsize(): - """Return the size of the buffer used in ufuncs. - """ - return umath.geterrobj()[0] - -def seterrcall(func): - """ - Set the floating-point error callback function or log object. - - There are two ways to capture floating-point error messages. The first - is to set the error-handler to 'call', using `seterr`. Then, set - the function to call using this function. - - The second is to set the error-handler to 'log', using `seterr`. - Floating-point errors then trigger a call to the 'write' method of - the provided object. - - Parameters - ---------- - func : callable f(err, flag) or object with write method - Function to call upon floating-point errors ('call'-mode) or - object whose 'write' method is used to log such message ('log'-mode). - - The call function takes two arguments. The first is the - type of error (one of "divide", "over", "under", or "invalid"), - and the second is the status flag. The flag is a byte, whose - least-significant bits indicate the status:: - - [0 0 0 0 invalid under over divide] - - In other words, ``flags = divide + 2*over + 4*under + 8*invalid``. - - If an object is provided, its write method should take one argument, - a string. - - Returns - ------- - h : callable, log instance or None - The old error handler. - - See Also - -------- - seterr, geterr, geterrcall - - Examples - -------- - Callback upon error: - - >>> def err_handler(type, flag): - ... print "Floating point error (%s), with flag %s" % (type, flag) - ... 
- - >>> saved_handler = np.seterrcall(err_handler) - >>> save_err = np.seterr(all='call') - - >>> np.array([1, 2, 3]) / 0.0 - Floating point error (divide by zero), with flag 1 - array([ Inf, Inf, Inf]) - - >>> np.seterrcall(saved_handler) - - >>> np.seterr(**save_err) - {'over': 'call', 'divide': 'call', 'invalid': 'call', 'under': 'call'} - - Log error message: - - >>> class Log(object): - ... def write(self, msg): - ... print "LOG: %s" % msg - ... - - >>> log = Log() - >>> saved_handler = np.seterrcall(log) - >>> save_err = np.seterr(all='log') - - >>> np.array([1, 2, 3]) / 0.0 - LOG: Warning: divide by zero encountered in divide - - array([ Inf, Inf, Inf]) - - >>> np.seterrcall(saved_handler) - <__main__.Log object at 0x...> - >>> np.seterr(**save_err) - {'over': 'log', 'divide': 'log', 'invalid': 'log', 'under': 'log'} - - """ - if func is not None and not callable(func): - if not hasattr(func, 'write') or not callable(func.write): - raise ValueError, "Only callable can be used as callback" - pyvals = umath.geterrobj() - old = geterrcall() - pyvals[2] = func - umath.seterrobj(pyvals) - return old - -def geterrcall(): - """ - Return the current callback function used on floating-point errors. - - When the error handling for a floating-point error (one of "divide", - "over", "under", or "invalid") is set to 'call' or 'log', the function - that is called or the log instance that is written to is returned by - `geterrcall`. This function or log instance has been set with - `seterrcall`. - - Returns - ------- - errobj : callable, log instance or None - The current error handler. If no handler was set through `seterrcall`, - ``None`` is returned. - - See Also - -------- - seterrcall, seterr, geterr - - Notes - ----- - For complete documentation of the types of floating-point exceptions and - treatment options, see `seterr`. 
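The status flag handed to a 'call'-mode handler (described under `seterrcall` above) encodes the error types as ``flags = divide + 2*over + 4*under + 8*invalid``. A small decoder sketch:

```python
def decode_flag(flag):
    # flags = divide + 2*over + 4*under + 8*invalid, per the seterrcall docs.
    return {'divide': bool(flag & 1), 'over': bool(flag & 2),
            'under': bool(flag & 4), 'invalid': bool(flag & 8)}

# Flag 1, as printed in the divide-by-zero example above:
assert decode_flag(1) == {'divide': True, 'over': False,
                          'under': False, 'invalid': False}
```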
- - Examples - -------- - >>> np.geterrcall() # we did not yet set a handler, returns None - - >>> oldsettings = np.seterr(all='call') - >>> def err_handler(type, flag): - ... print "Floating point error (%s), with flag %s" % (type, flag) - >>> oldhandler = np.seterrcall(err_handler) - >>> np.array([1, 2, 3]) / 0.0 - Floating point error (divide by zero), with flag 1 - array([ Inf, Inf, Inf]) - - >>> cur_handler = np.geterrcall() - >>> cur_handler is err_handler - True - - """ - return umath.geterrobj()[2] - -class _unspecified(object): - pass -_Unspecified = _unspecified() - -class errstate(object): - """ - errstate(**kwargs) - - Context manager for floating-point error handling. - - Using an instance of `errstate` as a context manager allows statements in - that context to execute with a known error handling behavior. Upon entering - the context the error handling is set with `seterr` and `seterrcall`, and - upon exiting it is reset to what it was before. - - Parameters - ---------- - kwargs : {divide, over, under, invalid} - Keyword arguments. The valid keywords are the possible floating-point - exceptions. Each keyword should have a string value that defines the - treatment for the particular error. Possible values are - {'ignore', 'warn', 'raise', 'call', 'print', 'log'}. - - See Also - -------- - seterr, geterr, seterrcall, geterrcall - - Notes - ----- - The ``with`` statement was introduced in Python 2.5, and can only be used - there by importing it: ``from __future__ import with_statement``. In - earlier Python versions the ``with`` statement is not available. - - For complete documentation of the types of floating-point exceptions and - treatment options, see `seterr`. - - Examples - -------- - >>> from __future__ import with_statement # use 'with' in Python 2.5 - >>> olderr = np.seterr(all='ignore') # Set error handling to known state. - - >>> np.arange(3) / 0. - array([ NaN, Inf, Inf]) - >>> with np.errstate(divide='warn'): - ... np.arange(3) / 0. - ... 
- __main__:2: RuntimeWarning: divide by zero encountered in divide - array([ NaN, Inf, Inf]) - - >>> np.sqrt(-1) - nan - >>> with np.errstate(invalid='raise'): - ... np.sqrt(-1) - Traceback (most recent call last): - File "", line 2, in - FloatingPointError: invalid value encountered in sqrt - - Outside the context the error handling behavior has not changed: - - >>> np.geterr() - {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', - 'under': 'ignore'} - - """ - # Note that we don't want to run the above doctests because they will fail - # without a from __future__ import with_statement - def __init__(self, **kwargs): - self.call = kwargs.pop('call',_Unspecified) - self.kwargs = kwargs - def __enter__(self): - self.oldstate = seterr(**self.kwargs) - if self.call is not _Unspecified: - self.oldcall = seterrcall(self.call) - def __exit__(self, *exc_info): - seterr(**self.oldstate) - if self.call is not _Unspecified: - seterrcall(self.oldcall) - -def _setdef(): - defval = [UFUNC_BUFSIZE_DEFAULT, ERR_DEFAULT2, None] - umath.seterrobj(defval) - -# set the default values -_setdef() - -Inf = inf = infty = Infinity = PINF -nan = NaN = NAN -False_ = bool_(False) -True_ = bool_(True) - -import fromnumeric -from fromnumeric import * -extend_all(fromnumeric) diff --git a/pythonPackages/numpy/numpy/core/numerictypes.py b/pythonPackages/numpy/numpy/core/numerictypes.py deleted file mode 100755 index 312482e2fb..0000000000 --- a/pythonPackages/numpy/numpy/core/numerictypes.py +++ /dev/null @@ -1,948 +0,0 @@ -""" -numerictypes: Define the numeric type objects - -This module is designed so "from numerictypes import \\*" is safe. 
-Exported symbols include: - - Dictionary with all registered number types (including aliases): - typeDict - - Type objects (not all will be available, depends on platform): - see variable sctypes for which ones you have - - Bit-width names - - int8 int16 int32 int64 int128 - uint8 uint16 uint32 uint64 uint128 - float16 float32 float64 float96 float128 float256 - complex32 complex64 complex128 complex192 complex256 complex512 - datetime64 timedelta64 - - c-based names - - bool_ - - object_ - - void, str_, unicode_ - - byte, ubyte, - short, ushort - intc, uintc, - intp, uintp, - int_, uint, - longlong, ulonglong, - - - single, csingle, - float_, complex_, - longfloat, clongfloat, - - - As part of the type-hierarchy: xx -- is bit-width - - generic - +-> bool_ (kind=b) - +-> number (kind=i) - | integer - | signedinteger (intxx) - | byte - | short - | intc - | intp int0 - | int_ - | longlong - +-> unsignedinteger (uintxx) (kind=u) - | ubyte - | ushort - | uintc - | uintp uint0 - | uint_ - | ulonglong - +-> inexact - | +-> floating (floatxx) (kind=f) - | | single - | | float_ (double) - | | longfloat - | \\-> complexfloating (complexxx) (kind=c) - | csingle (singlecomplex) - | complex_ (cfloat, cdouble) - | clongfloat (longcomplex) - +-> flexible - | character - | void (kind=V) - | - | str_ (string_, bytes_) (kind=S) [Python 2] - | unicode_ (kind=U) [Python 2] - | - | bytes_ (string_) (kind=S) [Python 3] - | str_ (unicode_) (kind=U) [Python 3] - | - \\-> object_ (not used much) (kind=O) - -""" - -# we add more at the bottom -__all__ = ['sctypeDict', 'sctypeNA', 'typeDict', 'typeNA', 'sctypes', - 'ScalarType', 'obj2sctype', 'cast', 'nbytes', 'sctype2char', - 'maximum_sctype', 'issctype', 'typecodes', 'find_common_type', - 'issubdtype'] - -from numpy.core.multiarray import typeinfo, ndarray, array, empty, dtype -import types as _types -import sys - -# we don't export these for import *, but we do want them accessible -# as numerictypes.bool, etc. 
-from __builtin__ import bool, int, long, float, complex, object, unicode, str -from numpy.compat import bytes - -if sys.version_info[0] >= 3: - # Py3K - class long(int): - # Placeholder class -- this will not escape outside numerictypes.py - pass - -# String-handling utilities to avoid locale-dependence. - -# "import string" is costly to import! -# Construct the translation tables directly -# "A" = chr(65), "a" = chr(97) -_all_chars = map(chr, range(256)) -_ascii_upper = _all_chars[65:65+26] -_ascii_lower = _all_chars[97:97+26] -LOWER_TABLE="".join(_all_chars[:65] + _ascii_lower + _all_chars[65+26:]) -UPPER_TABLE="".join(_all_chars[:97] + _ascii_upper + _all_chars[97+26:]) - -#import string -# assert (string.maketrans(string.ascii_uppercase, string.ascii_lowercase) == \ -# LOWER_TABLE) -# assert (string.maketrnas(string_ascii_lowercase, string.ascii_uppercase) == \ -# UPPER_TABLE) -#LOWER_TABLE = string.maketrans(string.ascii_uppercase, string.ascii_lowercase) -#UPPER_TABLE = string.maketrans(string.ascii_lowercase, string.ascii_uppercase) - -def english_lower(s): - """ Apply English case rules to convert ASCII strings to all lower case. - - This is an internal utility function to replace calls to str.lower() such - that we can avoid changing behavior with changing locales. In particular, - Turkish has distinct dotted and dotless variants of the Latin letter "I" in - both lowercase and uppercase. Thus, "I".lower() != "i" in a "tr" locale. - - Parameters - ---------- - s : str - - Returns - ------- - lowered : str - - Examples - -------- - >>> from numpy.core.numerictypes import english_lower - >>> english_lower('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_') - 'abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz0123456789_' - >>> english_lower('') - '' - """ - lowered = s.translate(LOWER_TABLE) - return lowered - -def english_upper(s): - """ Apply English case rules to convert ASCII strings to all upper case. 
- - This is an internal utility function to replace calls to str.upper() such - that we can avoid changing behavior with changing locales. In particular, - Turkish has distinct dotted and dotless variants of the Latin letter "I" in - both lowercase and uppercase. Thus, "i".upper() != "I" in a "tr" locale. - - Parameters - ---------- - s : str - - Returns - ------- - uppered : str - - Examples - -------- - >>> from numpy.core.numerictypes import english_upper - >>> english_upper('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_') - 'ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_' - >>> english_upper('') - '' - """ - uppered = s.translate(UPPER_TABLE) - return uppered - -def english_capitalize(s): - """ Apply English case rules to convert the first character of an ASCII - string to upper case. - - This is an internal utility function to replace calls to str.capitalize() - such that we can avoid changing behavior with changing locales. - - Parameters - ---------- - s : str - - Returns - ------- - capitalized : str - - Examples - -------- - >>> from numpy.core.numerictypes import english_capitalize - >>> english_capitalize('int8') - 'Int8' - >>> english_capitalize('Int8') - 'Int8' - >>> english_capitalize('') - '' - """ - if s: - return english_upper(s[0]) + s[1:] - else: - return s - - -sctypeDict = {} # Contains all leaf-node scalar types with aliases -sctypeNA = {} # Contails all leaf-node types -> numarray type equivalences -allTypes = {} # Collect the types we will add to the module here - -def _evalname(name): - k = 0 - for ch in name: - if ch in '0123456789': - break - k += 1 - try: - bits = int(name[k:]) - except ValueError: - bits = 0 - base = name[:k] - return base, bits - -def bitname(obj): - """Return a bit-width name for a given type object""" - name = obj.__name__ - base = '' - char = '' - try: - if name[-1] == '_': - newname = name[:-1] - else: - newname = name - info = typeinfo[english_upper(newname)] - assert(info[-1] == 
obj) # sanity check - bits = info[2] - - except KeyError: # bit-width name - base, bits = _evalname(name) - char = base[0] - - if name == 'bool_': - char = 'b' - base = 'bool' - elif name=='void': - char = 'V' - base = 'void' - elif name=='object_': - char = 'O' - base = 'object' - bits = 0 - - if sys.version_info[0] >= 3: - if name=='bytes_': - char = 'S' - base = 'bytes' - elif name=='str_': - char = 'U' - base = 'str' - else: - if name=='string_': - char = 'S' - base = 'string' - elif name=='unicode_': - char = 'U' - base = 'unicode' - - bytes = bits // 8 - - if char != '' and bytes != 0: - char = "%s%d" % (char, bytes) - - return base, bits, char - - -def _add_types(): - for a in typeinfo.keys(): - name = english_lower(a) - if isinstance(typeinfo[a], tuple): - typeobj = typeinfo[a][-1] - - # define C-name and insert typenum and typechar references also - allTypes[name] = typeobj - sctypeDict[name] = typeobj - sctypeDict[typeinfo[a][0]] = typeobj - sctypeDict[typeinfo[a][1]] = typeobj - - else: # generic class - allTypes[name] = typeinfo[a] -_add_types() - -def _add_aliases(): - for a in typeinfo.keys(): - name = english_lower(a) - if not isinstance(typeinfo[a], tuple): - continue - typeobj = typeinfo[a][-1] - # insert bit-width version for this class (if relevant) - base, bit, char = bitname(typeobj) - if base[-3:] == 'int' or char[0] in 'ui': continue - if base != '': - myname = "%s%d" % (base, bit) - if (name != 'longdouble' and name != 'clongdouble') or \ - myname not in allTypes.keys(): - allTypes[myname] = typeobj - sctypeDict[myname] = typeobj - if base == 'complex': - na_name = '%s%d' % (english_capitalize(base), bit//2) - elif base == 'bool': - na_name = english_capitalize(base) - sctypeDict[na_name] = typeobj - else: - na_name = "%s%d" % (english_capitalize(base), bit) - sctypeDict[na_name] = typeobj - sctypeNA[na_name] = typeobj - sctypeDict[na_name] = typeobj - sctypeNA[typeobj] = na_name - sctypeNA[typeinfo[a][0]] = na_name - if char != '': - 
sctypeDict[char] = typeobj - sctypeNA[char] = na_name -_add_aliases() - -# Integers handled so that -# The int32, int64 types should agree exactly with -# PyArray_INT32, PyArray_INT64 in C -# We need to enforce the same checking as is done -# in arrayobject.h where the order of getting a -# bit-width match is: -# long, longlong, int, short, char -# for int8, int16, int32, int64, int128 - -def _add_integer_aliases(): - _ctypes = ['LONG', 'LONGLONG', 'INT', 'SHORT', 'BYTE'] - for ctype in _ctypes: - val = typeinfo[ctype] - bits = val[2] - charname = 'i%d' % (bits//8,) - ucharname = 'u%d' % (bits//8,) - intname = 'int%d' % bits - UIntname = 'UInt%d' % bits - Intname = 'Int%d' % bits - uval = typeinfo['U'+ctype] - typeobj = val[-1] - utypeobj = uval[-1] - if intname not in allTypes.keys(): - uintname = 'uint%d' % bits - allTypes[intname] = typeobj - allTypes[uintname] = utypeobj - sctypeDict[intname] = typeobj - sctypeDict[uintname] = utypeobj - sctypeDict[Intname] = typeobj - sctypeDict[UIntname] = utypeobj - sctypeDict[charname] = typeobj - sctypeDict[ucharname] = utypeobj - sctypeNA[Intname] = typeobj - sctypeNA[UIntname] = utypeobj - sctypeNA[charname] = typeobj - sctypeNA[ucharname] = utypeobj - sctypeNA[typeobj] = Intname - sctypeNA[utypeobj] = UIntname - sctypeNA[val[0]] = Intname - sctypeNA[uval[0]] = UIntname -_add_integer_aliases() - -# We use these later -void = allTypes['void'] -generic = allTypes['generic'] - -# -# Rework the Python names (so that float and complex and int are consistent -# with Python usage) -# -def _set_up_aliases(): - type_pairs = [('complex_', 'cdouble'), - ('int0', 'intp'), - ('uint0', 'uintp'), - ('single', 'float'), - ('csingle', 'cfloat'), - ('singlecomplex', 'cfloat'), - ('float_', 'double'), - ('intc', 'int'), - ('uintc', 'uint'), - ('int_', 'long'), - ('uint', 'ulong'), - ('cfloat', 'cdouble'), - ('longfloat', 'longdouble'), - ('clongfloat', 'clongdouble'), - ('longcomplex', 'clongdouble'), - ('bool_', 'bool'), - ('unicode_', 
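The comment above describes first-match-wins registration: the C types are tried in the order long, longlong, int, short, byte, and the first one whose bit width matches claims the canonical `intN` name. A toy sketch of that rule (the widths below assume a typical LP64 platform, not any particular build):

```python
# (ctype, bit width) pairs in the same priority order the deleted code
# uses; the widths are an LP64 assumption for illustration only.
ctype_bits = [('long', 64), ('longlong', 64), ('int', 32),
              ('short', 16), ('byte', 8)]

aliases = {}
for cname, bits in ctype_bits:
    intname = 'int%d' % bits
    if intname not in aliases:   # first match wins, as in arrayobject.h
        aliases[intname] = cname

print(aliases['int64'])  # long  (longlong also matched, but later)
print(aliases['int32'])  # int
```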
'unicode'), - ('object_', 'object')] - if sys.version_info[0] >= 3: - type_pairs.extend([('bytes_', 'string'), - ('str_', 'unicode'), - ('string_', 'string')]) - else: - type_pairs.extend([('str_', 'string'), - ('string_', 'string'), - ('bytes_', 'string')]) - for alias, t in type_pairs: - allTypes[alias] = allTypes[t] - sctypeDict[alias] = sctypeDict[t] - # Remove aliases overriding python types and modules - to_remove = ['ulong', 'object', 'unicode', 'int', 'long', 'float', - 'complex', 'bool', 'string'] - if sys.version_info[0] >= 3: - # Py3K - to_remove.append('bytes') - to_remove.append('str') - to_remove.remove('unicode') - to_remove.remove('long') - for t in to_remove: - try: - del allTypes[t] - del sctypeDict[t] - except KeyError: - pass -_set_up_aliases() - -# Now, construct dictionary to lookup character codes from types -_sctype2char_dict = {} -def _construct_char_code_lookup(): - for name in typeinfo.keys(): - tup = typeinfo[name] - if isinstance(tup, tuple): - if tup[0] not in ['p','P']: - _sctype2char_dict[tup[-1]] = tup[0] -_construct_char_code_lookup() - - -sctypes = {'int': [], - 'uint':[], - 'float':[], - 'complex':[], - 'others':[bool,object,str,unicode,void]} - -def _add_array_type(typename, bits): - try: - t = allTypes['%s%d' % (typename, bits)] - except KeyError: - pass - else: - sctypes[typename].append(t) - -def _set_array_types(): - ibytes = [1, 2, 4, 8, 16, 32, 64] - fbytes = [2, 4, 8, 10, 12, 16, 32, 64] - for bytes in ibytes: - bits = 8*bytes - _add_array_type('int', bits) - _add_array_type('uint', bits) - for bytes in fbytes: - bits = 8*bytes - _add_array_type('float', bits) - _add_array_type('complex', 2*bits) - _gi = dtype('p') - if _gi.type not in sctypes['int']: - indx = 0 - sz = _gi.itemsize - _lst = sctypes['int'] - while (indx < len(_lst) and sz >= _lst[indx](0).itemsize): - indx += 1 - sctypes['int'].insert(indx, _gi.type) - sctypes['uint'].insert(indx, dtype('P').type) -_set_array_types() - - -genericTypeRank = ['bool', 'int8', 
'uint8', 'int16', 'uint16', - 'int32', 'uint32', 'int64', 'uint64', 'int128', - 'uint128', 'float16', - 'float32', 'float64', 'float80', 'float96', 'float128', - 'float256', - 'complex32', 'complex64', 'complex128', 'complex160', - 'complex192', 'complex256', 'complex512', 'object'] - -def maximum_sctype(t): - """ - Return the scalar type of highest precision of the same kind as the input. - - Parameters - ---------- - t : dtype or dtype specifier - The input data type. This can be a `dtype` object or an object that - is convertible to a `dtype`. - - Returns - ------- - out : dtype - The highest precision data type of the same kind (`dtype.kind`) as `t`. - - See Also - -------- - obj2sctype, mintypecode, sctype2char - dtype - - Examples - -------- - >>> np.maximum_sctype(np.int) - - >>> np.maximum_sctype(np.uint8) - - >>> np.maximum_sctype(np.complex) - - - >>> np.maximum_sctype(str) - - - >>> np.maximum_sctype('i2') - - >>> np.maximum_sctype('f4') - - - """ - g = obj2sctype(t) - if g is None: - return t - t = g - name = t.__name__ - base, bits = _evalname(name) - if bits == 0: - return t - else: - return sctypes[base][-1] - -try: - buffer_type = _types.BufferType -except AttributeError: - # Py3K - buffer_type = memoryview - -_python_types = {int : 'int_', - float: 'float_', - complex: 'complex_', - bool: 'bool_', - bytes: 'bytes_', - unicode: 'unicode_', - buffer_type: 'void', - } - -if sys.version_info[0] >= 3: - def _python_type(t): - """returns the type corresponding to a certain Python type""" - if not isinstance(t, type): - t = type(t) - return allTypes[_python_types.get(t, 'object_')] -else: - def _python_type(t): - """returns the type corresponding to a certain Python type""" - if not isinstance(t, _types.TypeType): - t = type(t) - return allTypes[_python_types.get(t, 'object_')] - -def issctype(rep): - """ - Determines whether the given object represents a scalar data-type. 
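`maximum_sctype` above resolves its answer by splitting the name into base and bits and then indexing the widest entry registered for that base in `sctypes`. A self-contained sketch; the registry here is a toy stand-in, not numpy's platform-dependent contents:

```python
# Toy stand-in for the sctypes registry, ordered narrow to wide.
sctypes = {'int': ['int8', 'int16', 'int32', 'int64'],
           'float': ['float32', 'float64', 'float128']}

def maximum_sctype(name):
    base = name.rstrip('0123456789')
    if base == name:        # no bit suffix: nothing wider to pick
        return name
    return sctypes[base][-1]

print(maximum_sctype('int16'))    # int64
print(maximum_sctype('float32'))  # float128
```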
- - Parameters - ---------- - rep : any - If `rep` is an instance of a scalar dtype, True is returned. If not, - False is returned. - - Returns - ------- - out : bool - Boolean result of check whether `rep` is a scalar dtype. - - See Also - -------- - issubsctype, issubdtype, obj2sctype, sctype2char - - Examples - -------- - >>> np.issctype(np.int32) - True - >>> np.issctype(list) - False - >>> np.issctype(1.1) - False - - """ - if not isinstance(rep, (type, dtype)): - return False - try: - res = obj2sctype(rep) - if res and res != object_: - return True - return False - except: - return False - -def obj2sctype(rep, default=None): - try: - if issubclass(rep, generic): - return rep - except TypeError: - pass - if isinstance(rep, dtype): - return rep.type - if isinstance(rep, type): - return _python_type(rep) - if isinstance(rep, ndarray): - return rep.dtype.type - try: - res = dtype(rep) - except: - return default - return res.type - - -def issubclass_(arg1, arg2): - try: - return issubclass(arg1, arg2) - except TypeError: - return False - -def issubsctype(arg1, arg2): - """ - Determine if the first argument is a subclass of the second argument. - - Parameters - ---------- - arg1, arg2 : dtype or dtype specifier - Data-types. - - Returns - ------- - out : bool - The result. - - See Also - -------- - issctype, issubdtype,obj2sctype - - Examples - -------- - >>> np.issubsctype('S8', str) - True - >>> np.issubsctype(np.array([1]), np.int) - True - >>> np.issubsctype(np.array([1]), np.float) - False - - """ - return issubclass(obj2sctype(arg1), obj2sctype(arg2)) - -def issubdtype(arg1, arg2): - """ - Returns True if first argument is a typecode lower/equal in type hierarchy. - - Parameters - ---------- - arg1, arg2 : dtype_like - dtype or string representing a typecode. - - Returns - ------- - out : bool - - See Also - -------- - issubsctype, issubclass_ - numpy.core.numerictypes : Overview of numpy type hierarchy. 
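`issubclass_` above exists only to make `issubclass` total: the builtin raises `TypeError` when its first argument is not a class, and the wrapper turns that case into `False`:

```python
def issubclass_(arg1, arg2):
    # issubclass raises TypeError for non-class inputs; any such input
    # simply answers False here.
    try:
        return issubclass(arg1, arg2)
    except TypeError:
        return False

print(issubclass_(bool, int))  # True
print(issubclass_(1, int))     # False
```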
- - Examples - -------- - >>> np.issubdtype('S1', str) - True - >>> np.issubdtype(np.float64, np.float32) - False - - """ - if issubclass_(arg2, generic): - return issubclass(dtype(arg1).type, arg2) - mro = dtype(arg2).type.mro() - if len(mro) > 1: - val = mro[1] - else: - val = mro[0] - return issubclass(dtype(arg1).type, val) - - -# This dictionary allows look up based on any alias for an array data-type -class _typedict(dict): - """ - Base object for a dictionary for look-up with any alias for an array dtype. - - Instances of `_typedict` can not be used as dictionaries directly, - first they have to be populated. - - """ - def __getitem__(self, obj): - return dict.__getitem__(self, obj2sctype(obj)) - -nbytes = _typedict() -_alignment = _typedict() -_maxvals = _typedict() -_minvals = _typedict() -def _construct_lookups(): - for name, val in typeinfo.iteritems(): - if not isinstance(val, tuple): - continue - obj = val[-1] - nbytes[obj] = val[2] // 8 - _alignment[obj] = val[3] - if (len(val) > 5): - _maxvals[obj] = val[4] - _minvals[obj] = val[5] - else: - _maxvals[obj] = None - _minvals[obj] = None - -_construct_lookups() - -def sctype2char(sctype): - """ - Return the string representation of a scalar dtype. - - Parameters - ---------- - sctype : scalar dtype or object - If a scalar dtype, the corresponding string character is - returned. If an object, `sctype2char` tries to infer its scalar type - and then return the corresponding string character. - - Returns - ------- - typechar : str - The string character corresponding to the scalar type. - - Raises - ------ - ValueError - If `sctype` is an object for which the type can not be inferred. - - See Also - -------- - obj2sctype, issctype, issubsctype, mintypecode - - Examples - -------- - >>> for sctype in [np.int32, np.float, np.complex, np.string_, np.ndarray]: - ... 
print np.sctype2char(sctype) - l - d - D - S - O - - >>> x = np.array([1., 2-1.j]) - >>> np.sctype2char(x) - 'D' - >>> np.sctype2char(list) - 'O' - - """ - sctype = obj2sctype(sctype) - if sctype is None: - raise ValueError, "unrecognized type" - return _sctype2char_dict[sctype] - -# Create dictionary of casting functions that wrap sequences -# indexed by type or type character - - -cast = _typedict() -try: - ScalarType = [_types.IntType, _types.FloatType, _types.ComplexType, - _types.LongType, _types.BooleanType, - _types.StringType, _types.UnicodeType, _types.BufferType] -except AttributeError: - # Py3K - ScalarType = [int, float, complex, long, bool, bytes, str, memoryview] - -ScalarType.extend(_sctype2char_dict.keys()) -ScalarType = tuple(ScalarType) -for key in _sctype2char_dict.keys(): - cast[key] = lambda x, k=key : array(x, copy=False).astype(k) - -# Create the typestring lookup dictionary -_typestr = _typedict() -for key in _sctype2char_dict.keys(): - if issubclass(key, allTypes['flexible']): - _typestr[key] = _sctype2char_dict[key] - else: - _typestr[key] = empty((1,),key).dtype.str[1:] - -# Make sure all typestrings are in sctypeDict -for key, val in _typestr.items(): - if val not in sctypeDict: - sctypeDict[val] = key - -# Add additional strings to the sctypeDict - -if sys.version_info[0] >= 3: - _toadd = ['int', 'float', 'complex', 'bool', 'object', - 'str', 'bytes', 'object', ('a', allTypes['bytes_'])] -else: - _toadd = ['int', 'float', 'complex', 'bool', 'object', 'string', - ('str', allTypes['string_']), - 'unicode', 'object', ('a', allTypes['string_'])] - -for name in _toadd: - if isinstance(name, tuple): - sctypeDict[name[0]] = name[1] - else: - sctypeDict[name] = allTypes['%s_' % name] - -del _toadd, name - -# Now add the types we've determined to this module -for key in allTypes: - globals()[key] = allTypes[key] - __all__.append(key) - -del key - -typecodes = {'Character':'c', - 'Integer':'bhilqp', - 'UnsignedInteger':'BHILQP', - 'Float':'fdg', 
- 'Complex':'FDG', - 'AllInteger':'bBhHiIlLqQpP', - 'AllFloat':'fdgFDG', - 'All':'?bhilqpBHILQPfdgFDGSUVO'} - -# backwards compatibility --- deprecated name -typeDict = sctypeDict -typeNA = sctypeNA - -# b -> boolean -# u -> unsigned integer -# i -> signed integer -# f -> floating point -# c -> complex -# S -> string -# U -> Unicode string -# V -> record -# O -> Python object -_kind_list = ['b', 'u', 'i', 'f', 'c', 'S', 'U', 'V', 'O'] - -__test_types = typecodes['AllInteger'][:-2]+typecodes['AllFloat']+'O' -__len_test_types = len(__test_types) - -# Keep incrementing until a common type both can be coerced to -# is found. Otherwise, return None -def _find_common_coerce(a, b): - if a > b: - return a - try: - thisind = __test_types.index(a.char) - except ValueError: - return None - return _can_coerce_all([a,b], start=thisind) - -# Find a data-type that all data-types in a list can be coerced to -def _can_coerce_all(dtypelist, start=0): - N = len(dtypelist) - if N == 0: - return None - if N == 1: - return dtypelist[0] - thisind = start - while thisind < __len_test_types: - newdtype = dtype(__test_types[thisind]) - numcoerce = len([x for x in dtypelist if newdtype >= x]) - if numcoerce == N: - return newdtype - thisind += 1 - return None - -def find_common_type(array_types, scalar_types): - """ - Determine common type following standard coercion rules. - - Parameters - ---------- - array_types : sequence - A list of dtypes or dtype convertible objects representing arrays. - scalar_types : sequence - A list of dtypes or dtype convertible objects representing scalars. - - Returns - ------- - datatype : dtype - The common data type, which is the maximum of `array_types` ignoring - `scalar_types`, unless the maximum of `scalar_types` is of a - different kind (`dtype.kind`). If the kind is not understood, then - None is returned. 
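`_can_coerce_all` above walks `__test_types` from a starting index and returns the first candidate that every listed dtype can be promoted to. The same search over a toy rank table (the codes and their ordering below are illustrative, not numpy's `__test_types` string):

```python
# Illustrative promotion ranks; higher rank can absorb lower rank.
_rank = {'i1': 0, 'i2': 1, 'i4': 2, 'f4': 3, 'f8': 4}
_ordered = sorted(_rank, key=_rank.get)

def can_coerce_all(dtypelist, start=0):
    # Return the first candidate, scanning from `start`, that every
    # input coerces to; None when the list is empty or nothing fits.
    if not dtypelist:
        return None
    for cand in _ordered[start:]:
        if all(_rank[cand] >= _rank[d] for d in dtypelist):
            return cand
    return None

print(can_coerce_all(['i2', 'f4']))  # f4
print(can_coerce_all(['i1', 'i4']))  # i4
```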
- - See Also - -------- - dtype, common_type, can_cast, mintypecode - - Examples - -------- - >>> np.find_common_type([], [np.int64, np.float32, np.complex]) - dtype('complex128') - >>> np.find_common_type([np.int64, np.float32], []) - dtype('float64') - - The standard casting rules ensure that a scalar cannot up-cast an - array unless the scalar is of a fundamentally different kind of data - (i.e. under a different hierarchy in the data type hierarchy) then - the array: - - >>> np.find_common_type([np.float32], [np.int64, np.float64]) - dtype('float32') - - Complex is of a different type, so it up-casts the float in the - `array_types` argument: - - >>> np.find_common_type([np.float32], [np.complex]) - dtype('complex128') - - Type specifier strings are convertible to dtypes and can therefore - be used instead of dtypes: - - >>> np.find_common_type(['f4', 'f4', 'i4'], ['c8']) - dtype('complex128') - - """ - array_types = [dtype(x) for x in array_types] - scalar_types = [dtype(x) for x in scalar_types] - - maxa = _can_coerce_all(array_types) - maxsc = _can_coerce_all(scalar_types) - - if maxa is None: - return maxsc - - if maxsc is None: - return maxa - - try: - index_a = _kind_list.index(maxa.kind) - index_sc = _kind_list.index(maxsc.kind) - except ValueError: - return None - - if index_sc > index_a: - return _find_common_coerce(maxsc,maxa) - else: - return maxa diff --git a/pythonPackages/numpy/numpy/core/records.py b/pythonPackages/numpy/numpy/core/records.py deleted file mode 100755 index dc1dfad687..0000000000 --- a/pythonPackages/numpy/numpy/core/records.py +++ /dev/null @@ -1,804 +0,0 @@ -""" -Record Arrays -============= -Record arrays expose the fields of structured arrays as properties. - -Most commonly, ndarrays contain elements of a single type, e.g. floats, integers, -bools etc. 
However, it is possible for elements to be combinations of these, -such as:: - - >>> a = np.array([(1, 2.0), (1, 2.0)], dtype=[('x', int), ('y', float)]) - >>> a - array([(1, 2.0), (1, 2.0)], - dtype=[('x', '>> a['x'] - array([1, 1]) - - >>> a['y'] - array([ 2., 2.]) - -Record arrays allow us to access fields as properties:: - - >>> ar = a.view(np.recarray) - - >>> ar.x - array([1, 1]) - - >>> ar.y - array([ 2., 2.]) - -""" -# All of the functions allow formats to be a dtype -__all__ = ['record', 'recarray', 'format_parser'] - -import numeric as sb -from defchararray import chararray -import numerictypes as nt -import types -import os -import sys - -from numpy.compat import isfileobj, bytes - -ndarray = sb.ndarray - -_byteorderconv = {'b':'>', - 'l':'<', - 'n':'=', - 'B':'>', - 'L':'<', - 'N':'=', - 'S':'s', - 's':'s', - '>':'>', - '<':'<', - '=':'=', - '|':'|', - 'I':'|', - 'i':'|'} - -# formats regular expression -# allows multidimension spec with a tuple syntax in front -# of the letter code '(2,3)f4' and ' ( 2 , 3 ) f4 ' -# are equally allowed - -numfmt = nt.typeDict -_typestr = nt._typestr - -def find_duplicate(list): - """Find duplication in a list, return a list of duplicated elements""" - dup = [] - for i in range(len(list)): - if (list[i] in list[i + 1:]): - if (list[i] not in dup): - dup.append(list[i]) - return dup - -class format_parser: - """ - Class to convert formats, names, titles description to a dtype. - - After constructing the format_parser object, the dtype attribute is - the converted data-type: - ``dtype = format_parser(formats, names, titles).dtype`` - - Attributes - ---------- - dtype : dtype - The converted data-type. - - Parameters - ---------- - formats : str or list of str - The format description, either specified as a string with - comma-separated format descriptions in the form ``'f8, i4, a5'``, or - a list of format description strings in the form - ``['f8', 'i4', 'a5']``. 
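The `find_duplicate` helper defined above is plain Python and can be exercised standalone; it reports each duplicated element once, in first-seen order:

```python
def find_duplicate(seq):
    """Find duplication in a list, return a list of duplicated elements."""
    dup = []
    for i in range(len(seq)):
        # An element is a duplicate if it appears again later and has
        # not already been recorded.
        if seq[i] in seq[i + 1:] and seq[i] not in dup:
            dup.append(seq[i])
    return dup

print(find_duplicate(['x', 'y', 'x', 'z', 'y']))  # ['x', 'y']
```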
- names : str or list/tuple of str - The field names, either specified as a comma-separated string in the - form ``'col1, col2, col3'``, or as a list or tuple of strings in the - form ``['col1', 'col2', 'col3']``. - An empty list can be used, in that case default field names - ('f0', 'f1', ...) are used. - titles : sequence - Sequence of title strings. An empty list can be used to leave titles - out. - aligned : bool, optional - If True, align the fields by padding as the C-compiler would. - Default is False. - byteorder : str, optional - If specified, all the fields will be changed to the - provided byte-order. Otherwise, the default byte-order is - used. For all available string specifiers, see `dtype.newbyteorder`. - - See Also - -------- - dtype, typename, sctype2char - - Examples - -------- - >>> np.format_parser(['f8', 'i4', 'a5'], ['col1', 'col2', 'col3'], - ... ['T1', 'T2', 'T3']).dtype - dtype([(('T1', 'col1'), '>> np.format_parser(['f8', 'i4', 'a5'], ['col1', 'col2', 'col3'], - ... []).dtype - dtype([('col1', '>> np.format_parser(['f8', 'i4', 'a5'], [], []).dtype - dtype([('f0', ' len(titles)): - self._titles += [None] * (self._nfields - len(titles)) - - def _createdescr(self, byteorder): - descr = sb.dtype({'names':self._names, - 'formats':self._f_formats, - 'offsets':self._offsets, - 'titles':self._titles}) - if (byteorder is not None): - byteorder = _byteorderconv[byteorder[0]] - descr = descr.newbyteorder(byteorder) - - self._descr = descr - -class record(nt.void): - """A data-type scalar that allows field access as attribute lookup. 
- """ - def __repr__(self): - return self.__str__() - - def __str__(self): - return str(self.item()) - - def __getattribute__(self, attr): - if attr in ['setfield', 'getfield', 'dtype']: - return nt.void.__getattribute__(self, attr) - try: - return nt.void.__getattribute__(self, attr) - except AttributeError: - pass - fielddict = nt.void.__getattribute__(self, 'dtype').fields - res = fielddict.get(attr, None) - if res: - obj = self.getfield(*res[:2]) - # if it has fields return a recarray, - # if it's a string ('SU') return a chararray - # otherwise return the object - try: - dt = obj.dtype - except AttributeError: - return obj - if dt.fields: - return obj.view(obj.__class__) - if dt.char in 'SU': - return obj.view(chararray) - return obj - else: - raise AttributeError, "'record' object has no "\ - "attribute '%s'" % attr - - - def __setattr__(self, attr, val): - if attr in ['setfield', 'getfield', 'dtype']: - raise AttributeError, "Cannot set '%s' attribute" % attr - fielddict = nt.void.__getattribute__(self, 'dtype').fields - res = fielddict.get(attr, None) - if res: - return self.setfield(val, *res[:2]) - else: - if getattr(self, attr, None): - return nt.void.__setattr__(self, attr, val) - else: - raise AttributeError, "'record' object has no "\ - "attribute '%s'" % attr - - def pprint(self): - """Pretty-print all fields.""" - # pretty-print all fields - names = self.dtype.names - maxlen = max([len(name) for name in names]) - rows = [] - fmt = '%% %ds: %%s' % maxlen - for name in names: - rows.append(fmt % (name, getattr(self, name))) - return "\n".join(rows) - -# The recarray is almost identical to a standard array (which supports -# named fields already) The biggest difference is that it can use -# attribute-lookup to find the fields and it is constructed using -# a record. 
- -# If byteorder is given it forces a particular byteorder on all -# the fields (and any subfields) - -class recarray(ndarray): - """ - Construct an ndarray that allows field access using attributes. - - Arrays may have a data-types containing fields, analogous - to columns in a spread sheet. An example is ``[(x, int), (y, float)]``, - where each entry in the array is a pair of ``(int, float)``. Normally, - these attributes are accessed using dictionary lookups such as ``arr['x']`` - and ``arr['y']``. Record arrays allow the fields to be accessed as members - of the array, using ``arr.x`` and ``arr.y``. - - Parameters - ---------- - shape : tuple - Shape of output array. - dtype : data-type, optional - The desired data-type. By default, the data-type is determined - from `formats`, `names`, `titles`, `aligned` and `byteorder`. - formats : list of data-types, optional - A list containing the data-types for the different columns, e.g. - ``['i4', 'f8', 'i4']``. `formats` does *not* support the new - convention of using types directly, i.e. ``(int, float, int)``. - Note that `formats` must be a list, not a tuple. - Given that `formats` is somewhat limited, we recommend specifying - `dtype` instead. - names : tuple of str, optional - The name of each column, e.g. ``('x', 'y', 'z')``. - buf : buffer, optional - By default, a new array is created of the given shape and data-type. - If `buf` is specified and is an object exposing the buffer interface, - the array will use the memory from the existing buffer. In this case, - the `offset` and `strides` keywords are available. - - Other Parameters - ---------------- - titles : tuple of str, optional - Aliases for column names. For example, if `names` were - ``('x', 'y', 'z')`` and `titles` is - ``('x_coordinate', 'y_coordinate', 'z_coordinate')``, then - ``arr['x']`` is equivalent to both ``arr.x`` and ``arr.x_coordinate``. - byteorder : {'<', '>', '='}, optional - Byte-order for all fields. 
- aligned : bool, optional - Align the fields in memory as the C-compiler would. - strides : tuple of ints, optional - Buffer (`buf`) is interpreted according to these strides (strides - define how many bytes each array element, row, column, etc. - occupy in memory). - offset : int, optional - Start reading buffer (`buf`) from this offset onwards. - - Returns - ------- - rec : recarray - Empty array of the given shape and type. - - See Also - -------- - rec.fromrecords : Construct a record array from data. - record : fundamental data-type for `recarray`. - format_parser : determine a data-type from formats, names, titles. - - Notes - ----- - This constructor can be compared to ``empty``: it creates a new record - array but does not fill it with data. To create a record array from data, - use one of the following methods: - - 1. Create a standard ndarray and convert it to a record array, - using ``arr.view(np.recarray)`` - 2. Use the `buf` keyword. - 3. Use `np.rec.fromrecords`. - - Examples - -------- - Create an array with two fields, ``x`` and ``y``: - - >>> x = np.array([(1.0, 2), (3.0, 4)], dtype=[('x', float), ('y', int)]) - >>> x - array([(1.0, 2), (3.0, 4)], - dtype=[('x', '>> x['x'] - array([ 1., 3.]) - - View the array as a record array: - - >>> x = x.view(np.recarray) - - >>> x.x - array([ 1., 3.]) - - >>> x.y - array([2, 4]) - - Create a new, empty record array: - - >>> np.recarray((2,), - ... 
dtype=[('x', int), ('y', float), ('z', int)]) #doctest: +SKIP - rec.array([(-1073741821, 1.2249118382103472e-301, 24547520), - (3471280, 1.2134086255804012e-316, 0)], - dtype=[('x', '>> x1=np.array([1,2,3,4]) - >>> x2=np.array(['a','dd','xyz','12']) - >>> x3=np.array([1.1,2,3,4]) - >>> r = np.core.records.fromarrays([x1,x2,x3],names='a,b,c') - >>> print r[1] - (2, 'dd', 2.0) - >>> x1[1]=34 - >>> r.a - array([1, 2, 3, 4]) - """ - - arrayList = [sb.asarray(x) for x in arrayList] - - if shape is None or shape == 0: - shape = arrayList[0].shape - - if isinstance(shape, int): - shape = (shape,) - - if formats is None and dtype is None: - # go through each object in the list to see if it is an ndarray - # and determine the formats. - formats = '' - for obj in arrayList: - if not isinstance(obj, ndarray): - raise ValueError, "item in the array list must be an ndarray." - formats += _typestr[obj.dtype.type] - if issubclass(obj.dtype.type, nt.flexible): - formats += `obj.itemsize` - formats += ',' - formats = formats[:-1] - - if dtype is not None: - descr = sb.dtype(dtype) - _names = descr.names - else: - parsed = format_parser(formats, names, titles, aligned, byteorder) - _names = parsed._names - descr = parsed._descr - - # Determine shape from data-type. - if len(descr) != len(arrayList): - raise ValueError, "mismatch between the number of fields "\ - "and the number of arrays" - - d0 = descr[0].shape - nn = len(d0) - if nn > 0: - shape = shape[:-nn] - - for k, obj in enumerate(arrayList): - nn = len(descr[k].shape) - testshape = obj.shape[:len(obj.shape) - nn] - if testshape != shape: - raise ValueError, "array-shape mismatch in array %d" % k - - _array = recarray(shape, descr) - - # populate the record array (makes a copy) - for i in range(len(arrayList)): - _array[_names[i]] = arrayList[i] - - return _array - -# shape must be 1-d if you use list of lists... 
-def fromrecords(recList, dtype=None, shape=None, formats=None, names=None, - titles=None, aligned=False, byteorder=None): - """ create a recarray from a list of records in text form - - The data in the same field can be heterogeneous, they will be promoted - to the highest data type. This method is intended for creating - smaller record arrays. If used to create large array without formats - defined - - r=fromrecords([(2,3.,'abc')]*100000) - - it can be slow. - - If formats is None, then this will auto-detect formats. Use list of - tuples rather than list of lists for faster processing. - - >>> r=np.core.records.fromrecords([(456,'dbe',1.2),(2,'de',1.3)], - ... names='col1,col2,col3') - >>> print r[0] - (456, 'dbe', 1.2) - >>> r.col1 - array([456, 2]) - >>> r.col2 - chararray(['dbe', 'de'], - dtype='|S3') - >>> import cPickle - >>> print cPickle.loads(cPickle.dumps(r)) - [(456, 'dbe', 1.2) (2, 'de', 1.3)] - """ - - nfields = len(recList[0]) - if formats is None and dtype is None: # slower - obj = sb.array(recList, dtype=object) - arrlist = [sb.array(obj[..., i].tolist()) for i in xrange(nfields)] - return fromarrays(arrlist, formats=formats, shape=shape, names=names, - titles=titles, aligned=aligned, byteorder=byteorder) - - if dtype is not None: - descr = sb.dtype((record, dtype)) - else: - descr = format_parser(formats, names, titles, aligned, byteorder)._descr - - try: - retval = sb.array(recList, dtype=descr) - except TypeError: # list of lists instead of list of tuples - if (shape is None or shape == 0): - shape = len(recList) - if isinstance(shape, (int, long)): - shape = (shape,) - if len(shape) > 1: - raise ValueError, "Can only deal with 1-d array." 
- _array = recarray(shape, descr) - for k in xrange(_array.size): - _array[k] = tuple(recList[k]) - return _array - else: - if shape is not None and retval.shape != shape: - retval.shape = shape - - res = retval.view(recarray) - - return res - - -def fromstring(datastring, dtype=None, shape=None, offset=0, formats=None, - names=None, titles=None, aligned=False, byteorder=None): - """ create a (read-only) record array from binary data contained in - a string""" - - - if dtype is None and formats is None: - raise ValueError, "Must have dtype= or formats=" - - if dtype is not None: - descr = sb.dtype(dtype) - else: - descr = format_parser(formats, names, titles, aligned, byteorder)._descr - - itemsize = descr.itemsize - if (shape is None or shape == 0 or shape == -1): - shape = (len(datastring) - offset) / itemsize - - _array = recarray(shape, descr, buf=datastring, offset=offset) - return _array - -def get_remaining_size(fd): - try: - fn = fd.fileno() - except AttributeError: - return os.path.getsize(fd.name) - fd.tell() - st = os.fstat(fn) - size = st.st_size - fd.tell() - return size - -def fromfile(fd, dtype=None, shape=None, offset=0, formats=None, - names=None, titles=None, aligned=False, byteorder=None): - """Create an array from binary file data - - If file is a string then that file is opened, else it is assumed - to be a file object. - - >>> from tempfile import TemporaryFile - >>> a = np.empty(10,dtype='f8,i4,a5') - >>> a[5] = (0.5,10,'abcde') - >>> - >>> fd=TemporaryFile() - >>> a = a.newbyteorder('<') - >>> a.tofile(fd) - >>> - >>> fd.seek(0) - >>> r=np.core.records.fromfile(fd, formats='f8,i4,a5', shape=10, - ... 
byteorder='<') - >>> print r[5] - (0.5, 10, 'abcde') - >>> r.shape - (10,) - """ - - if (shape is None or shape == 0): - shape = (-1,) - elif isinstance(shape, (int, long)): - shape = (shape,) - - name = 0 - if isinstance(fd, str): - name = 1 - fd = open(fd, 'rb') - if (offset > 0): - fd.seek(offset, 1) - size = get_remaining_size(fd) - - if dtype is not None: - descr = sb.dtype(dtype) - else: - descr = format_parser(formats, names, titles, aligned, byteorder)._descr - - itemsize = descr.itemsize - - shapeprod = sb.array(shape).prod() - shapesize = shapeprod * itemsize - if shapesize < 0: - shape = list(shape) - shape[ shape.index(-1) ] = size / -shapesize - shape = tuple(shape) - shapeprod = sb.array(shape).prod() - - nbytes = shapeprod * itemsize - - if nbytes > size: - raise ValueError( - "Not enough bytes left in file for specified shape and type") - - # create the array - _array = recarray(shape, descr) - nbytesread = fd.readinto(_array.data) - if nbytesread != nbytes: - raise IOError("Didn't read as many bytes as expected") - if name: - fd.close() - - return _array - -def array(obj, dtype=None, shape=None, offset=0, strides=None, formats=None, - names=None, titles=None, aligned=False, byteorder=None, copy=True): - """Construct a record array from a wide-variety of objects. 
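`fromfile` above completes a shape containing a single -1 entry from the bytes remaining in the file. The arithmetic, isolated (integer division stands in for the deleted Python 2 `/`):

```python
def infer_shape(shape, total_bytes, itemsize):
    # Solve the one -1 entry from the byte budget and the product of
    # the known dimensions.
    shape = list(shape)
    known = 1
    for n in shape:
        if n != -1:
            known *= n
    shape[shape.index(-1)] = total_bytes // (known * itemsize)
    return tuple(shape)

print(infer_shape((-1,), 80, 8))    # (10,)
print(infer_shape((2, -1), 48, 4))  # (2, 6)
```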
- """ - - if (isinstance(obj, (type(None), str)) or isfileobj(obj)) \ - and (formats is None) \ - and (dtype is None): - raise ValueError("Must define formats (or dtype) if object is "\ - "None, string, or an open file") - - kwds = {} - if dtype is not None: - dtype = sb.dtype(dtype) - elif formats is not None: - dtype = format_parser(formats, names, titles, - aligned, byteorder)._descr - else: - kwds = {'formats': formats, - 'names' : names, - 'titles' : titles, - 'aligned' : aligned, - 'byteorder' : byteorder - } - - if obj is None: - if shape is None: - raise ValueError("Must define a shape if obj is None") - return recarray(shape, dtype, buf=obj, offset=offset, strides=strides) - - elif isinstance(obj, bytes): - return fromstring(obj, dtype, shape=shape, offset=offset, **kwds) - - elif isinstance(obj, (list, tuple)): - if isinstance(obj[0], (tuple, list)): - return fromrecords(obj, dtype=dtype, shape=shape, **kwds) - else: - return fromarrays(obj, dtype=dtype, shape=shape, **kwds) - - elif isinstance(obj, recarray): - if dtype is not None and (obj.dtype != dtype): - new = obj.view(dtype) - else: - new = obj - if copy: - new = new.copy() - return new - - elif isfileobj(obj): - return fromfile(obj, dtype=dtype, shape=shape, offset=offset) - - elif isinstance(obj, ndarray): - if dtype is not None and (obj.dtype != dtype): - new = obj.view(dtype) - else: - new = obj - if copy: - new = new.copy() - res = new.view(recarray) - if issubclass(res.dtype.type, nt.void): - res.dtype = sb.dtype((record, res.dtype)) - return res - - else: - interface = getattr(obj, "__array_interface__", None) - if interface is None or not isinstance(interface, dict): - raise ValueError("Unknown input type") - obj = sb.array(obj) - if dtype is not None and (obj.dtype != dtype): - obj = obj.view(dtype) - res = obj.view(recarray) - if issubclass(res.dtype.type, nt.void): - res.dtype = sb.dtype((record, res.dtype)) - return res diff --git a/pythonPackages/numpy/numpy/core/scons_support.py 
b/pythonPackages/numpy/numpy/core/scons_support.py deleted file mode 100755 index 048f85db6d..0000000000 --- a/pythonPackages/numpy/numpy/core/scons_support.py +++ /dev/null @@ -1,272 +0,0 @@ -#! Last Change: Sun Apr 26 05:00 PM 2009 J - -"""Code to support special facilities to scons which are only useful for -numpy.core, hence not put into numpy.distutils.scons""" - -import sys -import os - -from os.path import join as pjoin, dirname as pdirname, basename as pbasename -from copy import deepcopy - -import code_generators -from code_generators.generate_numpy_api import \ - do_generate_api as nowrap_do_generate_numpy_api -from code_generators.generate_ufunc_api import \ - do_generate_api as nowrap_do_generate_ufunc_api -from setup_common import check_api_version as _check_api_version -from setup_common import \ - LONG_DOUBLE_REPRESENTATION_SRC, pyod, long_double_representation - -from numscons.numdist import process_c_str as process_str - -import SCons.Node -import SCons -from SCons.Builder import Builder -from SCons.Action import Action - -def check_api_version(apiversion): - return _check_api_version(apiversion, pdirname(code_generators.__file__)) - -def split_ext(string): - sp = string.rsplit( '.', 1) - if len(sp) == 1: - return (sp[0], '') - else: - return sp -#------------------------------------ -# Ufunc and multiarray API generators -#------------------------------------ -def do_generate_numpy_api(target, source, env): - nowrap_do_generate_numpy_api([str(i) for i in target], - [s.value for s in source]) - return 0 - -def do_generate_ufunc_api(target, source, env): - nowrap_do_generate_ufunc_api([str(i) for i in target], - [s.value for s in source]) - return 0 - -def generate_api_emitter(target, source, env): - """Returns the list of targets generated by the code generator for array - api and ufunc api.""" - base, ext = split_ext(str(target[0])) - dir = pdirname(base) - ba = pbasename(base) - h = pjoin(dir, '__' + ba + '.h') - c = pjoin(dir, '__' + ba + '.c') 
-    txt = base + '.txt'
-    #print h, c, txt
-    t = [h, c, txt]
-    return (t, source)
-
-#-------------------------
-# From template generators
-#-------------------------
-# XXX: this is general and can be used outside numpy.core.
-def do_generate_from_template(targetfile, sourcefile, env):
-    t = open(targetfile, 'w')
-    s = open(sourcefile, 'r')
-    allstr = s.read()
-    s.close()
-    writestr = process_str(allstr)
-    t.write(writestr)
-    t.close()
-    return 0
-
-def generate_from_template(target, source, env):
-    for t, s in zip(target, source):
-        do_generate_from_template(str(t), str(s), env)
-
-def generate_from_template_emitter(target, source, env):
-    base, ext = split_ext(pbasename(str(source[0])))
-    t = pjoin(pdirname(str(target[0])), base)
-    return ([t], source)
-
-#----------------
-# umath generator
-#----------------
-def do_generate_umath(targetfile, sourcefile, env):
-    t = open(targetfile, 'w')
-    from code_generators import generate_umath
-    code = generate_umath.make_code(generate_umath.defdict, generate_umath.__file__)
-    t.write(code)
-    t.close()
-
-def generate_umath(target, source, env):
-    for t, s in zip(target, source):
-        do_generate_umath(str(t), str(s), env)
-
-def generate_umath_emitter(target, source, env):
-    t = str(target[0]) + '.c'
-    return ([t], source)
-
-#-----------------------------------------
-# Other functions related to configuration
-#-----------------------------------------
-def CheckGCC4(context):
-    src = """
-int
-main()
-{
-#if !(defined __GNUC__ && (__GNUC__ >= 4))
-die from a horrible death
-#endif
-}
-"""
-
-    context.Message("Checking if compiled with gcc 4.x or above ... ")
-    st = context.TryCompile(src, '.c')
-
-    if st:
-        context.Result(' yes')
-    else:
-        context.Result(' no')
-    return st == 1
-
-def CheckBrokenMathlib(context, mathlib):
-    src = """
-/* check whether libm is broken */
-#include <math.h>
-int main(int argc, char *argv[])
-{
-    return exp(-720.) > 1.0; /* typically an IEEE denormal */
-}
-"""
-
-    try:
-        oldLIBS = deepcopy(context.env['LIBS'])
-    except:
-        oldLIBS = []
-
-    try:
-        context.Message("Checking if math lib %s is usable for numpy ... " % mathlib)
-        context.env.AppendUnique(LIBS = mathlib)
-        st = context.TryRun(src, '.c')
-    finally:
-        context.env['LIBS'] = oldLIBS
-
-    if st[0]:
-        context.Result(' Yes !')
-    else:
-        context.Result(' No !')
-    return st[0]
-
-def check_mlib(config, mlib):
-    """Return 1 if mlib is available and usable by numpy, 0 otherwise.
-
-    mlib can be a string (one library), or a list of libraries."""
-    # Check the libraries in mlib are linkable
-    if len(mlib) > 0:
-        # XXX: put an autoadd argument to 0 here and add an autoadd argument to
-        # CheckBrokenMathlib (otherwise we may add bogus libraries, the ones
-        # which do not pass the CheckBrokenMathlib test).
-        st = config.CheckLib(mlib)
-        if not st:
-            return 0
-    # Check the mlib is usable by numpy
-    return config.CheckBrokenMathlib(mlib)
-
-def check_mlibs(config, mlibs):
-    for mlib in mlibs:
-        if check_mlib(config, mlib):
-            return mlib
-
-    # No mlib was found.
-    raise SCons.Errors.UserError("No usable mathlib was found: choose another "\
-                                 "one using the MATHLIB env variable, eg "\
-                                 "'MATHLIB=m python setup.py build'")
-
-
-def is_npy_no_signal():
-    """Return True if the NPY_NO_SIGNAL symbol must be defined in configuration
-    header."""
-    return sys.platform == 'win32'
-
-def define_no_smp():
-    """Returns True if we should define NPY_NOSMP, False otherwise."""
-    #--------------------------------
-    # Checking SMP and thread options
-    #--------------------------------
-    # Python 2.3 causes a segfault when
-    # trying to re-acquire the thread-state
-    # which is done in error-handling
-    # ufunc code. NPY_ALLOW_C_API and friends
-    # cause the segfault. So, we disable threading
-    # for now.
-    if sys.version[:5] < '2.4.2':
-        nosmp = 1
-    else:
-        # Perhaps a fancier check is in order here.
- # so that threads are only enabled if there - # are actually multiple CPUS? -- but - # threaded code can be nice even on a single - # CPU so that long-calculating code doesn't - # block. - try: - nosmp = os.environ['NPY_NOSMP'] - nosmp = 1 - except KeyError: - nosmp = 0 - return nosmp == 1 - -# Inline check -def CheckInline(context): - context.Message("Checking for inline keyword... ") - body = """ -#ifndef __cplusplus -static %(inline)s int static_func (void) -{ - return 0; -} -%(inline)s int nostatic_func (void) -{ - return 0; -} -#endif""" - inline = None - for kw in ['inline', '__inline__', '__inline']: - st = context.TryCompile(body % {'inline': kw}, '.c') - if st: - inline = kw - break - - if inline: - context.Result(inline) - else: - context.Result(0) - return inline - -def CheckLongDoubleRepresentation(context): - msg = { - 'INTEL_EXTENDED_12_BYTES_LE': "Intel extended, little endian", - 'INTEL_EXTENDED_16_BYTES_LE': "Intel extended, little endian", - 'IEEE_QUAD_BE': "IEEE Quad precision, big endian", - 'IEEE_QUAD_LE': "IEEE Quad precision, little endian", - 'IEEE_DOUBLE_LE': "IEEE Double precision, little endian", - 'IEEE_DOUBLE_BE': "IEEE Double precision, big endian" - } - - context.Message("Checking for long double representation... 
") - body = LONG_DOUBLE_REPRESENTATION_SRC % {'type': 'long double'} - st = context.TryCompile(body, '.c') - if st: - obj = str(context.sconf.lastTarget) - tp = long_double_representation(pyod(obj)) - context.Result(msg[tp]) - return tp - if not st: - context.Result(0) - -array_api_gen_bld = Builder(action = Action(do_generate_numpy_api, '$ARRAYPIGENCOMSTR'), - emitter = generate_api_emitter) - - -ufunc_api_gen_bld = Builder(action = Action(do_generate_ufunc_api, '$UFUNCAPIGENCOMSTR'), - emitter = generate_api_emitter) - -template_bld = Builder(action = Action(generate_from_template, '$TEMPLATECOMSTR'), - emitter = generate_from_template_emitter) - -umath_bld = Builder(action = Action(generate_umath, '$UMATHCOMSTR'), - emitter = generate_umath_emitter) diff --git a/pythonPackages/numpy/numpy/core/setup.py b/pythonPackages/numpy/numpy/core/setup.py deleted file mode 100755 index b91c277c16..0000000000 --- a/pythonPackages/numpy/numpy/core/setup.py +++ /dev/null @@ -1,837 +0,0 @@ -import imp -import os -import sys -import shutil -from os.path import join -from numpy.distutils import log -from distutils.dep_util import newer -from distutils.sysconfig import get_config_var -import warnings -import re - -from setup_common import * - -# Set to True to enable multiple file compilations (experimental) -try: - os.environ['NPY_SEPARATE_COMPILATION'] - ENABLE_SEPARATE_COMPILATION = True -except KeyError: - ENABLE_SEPARATE_COMPILATION = False - -# XXX: ugly, we use a class to avoid calling twice some expensive functions in -# config.h/numpyconfig.h. I don't see a better way because distutils force -# config.h generation inside an Extension class, and as such sharing -# configuration informations between extensions is not easy. -# Using a pickled-based memoize does not work because config_cmd is an instance -# method, which cPickle does not like. 
-try: - import cPickle as _pik -except ImportError: - import pickle as _pik -import copy - -class CallOnceOnly(object): - def __init__(self): - self._check_types = None - self._check_ieee_macros = None - self._check_complex = None - - def check_types(self, *a, **kw): - if self._check_types is None: - out = check_types(*a, **kw) - self._check_types = _pik.dumps(out) - else: - out = copy.deepcopy(_pik.loads(self._check_types)) - return out - - def check_ieee_macros(self, *a, **kw): - if self._check_ieee_macros is None: - out = check_ieee_macros(*a, **kw) - self._check_ieee_macros = _pik.dumps(out) - else: - out = copy.deepcopy(_pik.loads(self._check_ieee_macros)) - return out - - def check_complex(self, *a, **kw): - if self._check_complex is None: - out = check_complex(*a, **kw) - self._check_complex = _pik.dumps(out) - else: - out = copy.deepcopy(_pik.loads(self._check_complex)) - return out - -PYTHON_HAS_UNICODE_WIDE = True - -def pythonlib_dir(): - """return path where libpython* is.""" - if sys.platform == 'win32': - return os.path.join(sys.prefix, "libs") - else: - return get_config_var('LIBDIR') - -def is_npy_no_signal(): - """Return True if the NPY_NO_SIGNAL symbol must be defined in configuration - header.""" - return sys.platform == 'win32' - -def is_npy_no_smp(): - """Return True if the NPY_NO_SMP symbol must be defined in public - header (when SMP support cannot be reliably enabled).""" - # Python 2.3 causes a segfault when - # trying to re-acquire the thread-state - # which is done in error-handling - # ufunc code. NPY_ALLOW_C_API and friends - # cause the segfault. So, we disable threading - # for now. - if sys.version[:5] < '2.4.2': - nosmp = 1 - else: - # Perhaps a fancier check is in order here. - # so that threads are only enabled if there - # are actually multiple CPUS? -- but - # threaded code can be nice even on a single - # CPU so that long-calculating code doesn't - # block. 
- try: - nosmp = os.environ['NPY_NOSMP'] - nosmp = 1 - except KeyError: - nosmp = 0 - return nosmp == 1 - -def win32_checks(deflist): - from numpy.distutils.misc_util import get_build_architecture - a = get_build_architecture() - - # Distutils hack on AMD64 on windows - print('BUILD_ARCHITECTURE: %r, os.name=%r, sys.platform=%r' % \ - (a, os.name, sys.platform)) - if a == 'AMD64': - deflist.append('DISTUTILS_USE_SDK') - - # On win32, force long double format string to be 'g', not - # 'Lg', since the MS runtime does not support long double whose - # size is > sizeof(double) - if a == "Intel" or a == "AMD64": - deflist.append('FORCE_NO_LONG_DOUBLE_FORMATTING') - -def check_math_capabilities(config, moredefs, mathlibs): - def check_func(func_name): - return config.check_func(func_name, libraries=mathlibs, - decl=True, call=True) - - def check_funcs_once(funcs_name): - decl = dict([(f, True) for f in funcs_name]) - st = config.check_funcs_once(funcs_name, libraries=mathlibs, - decl=decl, call=decl) - if st: - moredefs.extend([fname2def(f) for f in funcs_name]) - return st - - def check_funcs(funcs_name): - # Use check_funcs_once first, and if it does not work, test func per - # func. Return success only if all the functions are available - if not check_funcs_once(funcs_name): - # Global check failed, check func per func - for f in funcs_name: - if check_func(f): - moredefs.append(fname2def(f)) - return 0 - else: - return 1 - - #use_msvc = config.check_decl("_MSC_VER") - - if not check_funcs_once(MANDATORY_FUNCS): - raise SystemError("One of the required function to build numpy is not" - " available (the list is %s)." % str(MANDATORY_FUNCS)) - - # Standard functions which may not be available and for which we have a - # replacement implementation. Note that some of these are C99 functions. - - # XXX: hack to circumvent cpp pollution from python: python put its - # config.h in the public namespace, so we have a clash for the common - # functions we test. 
We remove every function tested by python's
-    # autoconf, hoping their own tests are correct
-    if sys.version_info[:2] >= (2, 5):
-        for f in OPTIONAL_STDFUNCS_MAYBE:
-            if config.check_decl(fname2def(f),
-                                 headers=["Python.h", "math.h"]):
-                OPTIONAL_STDFUNCS.remove(f)
-
-    check_funcs(OPTIONAL_STDFUNCS)
-
-    # C99 functions: float and long double versions
-    check_funcs(C99_FUNCS_SINGLE)
-    check_funcs(C99_FUNCS_EXTENDED)
-
-def check_complex(config, mathlibs):
-    priv = []
-    pub = []
-
-    # Check for complex support
-    st = config.check_header('complex.h')
-    if st:
-        priv.append('HAVE_COMPLEX_H')
-        pub.append('NPY_USE_C99_COMPLEX')
-
-    for t in C99_COMPLEX_TYPES:
-        st = config.check_type(t, headers=["complex.h"])
-        if st:
-            pub.append(('NPY_HAVE_%s' % type2def(t), 1))
-
-    def check_prec(prec):
-        flist = [f + prec for f in C99_COMPLEX_FUNCS]
-        decl = dict([(f, True) for f in flist])
-        if not config.check_funcs_once(flist, call=decl, decl=decl,
-                                       libraries=mathlibs):
-            for f in flist:
-                if config.check_func(f, call=True, decl=True,
-                                     libraries=mathlibs):
-                    priv.append(fname2def(f))
-        else:
-            priv.extend([fname2def(f) for f in flist])
-
-    check_prec('')
-    check_prec('f')
-    check_prec('l')
-
-    return priv, pub
-
-def check_ieee_macros(config):
-    priv = []
-    pub = []
-
-    macros = []
-
-    def _add_decl(f):
-        priv.append(fname2def("decl_%s" % f))
-        pub.append('NPY_%s' % fname2def("decl_%s" % f))
-
-    # XXX: hack to circumvent cpp pollution from python: Python puts its
-    # config.h in the public namespace, so we have a clash for the common
-    # functions we test. 
We remove every function tested by python's
-    # autoconf, hoping their own tests are correct
-    _macros = ["isnan", "isinf", "signbit", "isfinite"]
-    if sys.version_info[:2] >= (2, 6):
-        for f in _macros:
-            st = config.check_decl(fname2def("decl_%s" % f),
-                                   headers=["Python.h", "math.h"])
-            if not st:
-                macros.append(f)
-            else:
-                _add_decl(f)
-    else:
-        macros = _macros[:]
-    # Normally, isnan and isinf are macros (C99), but some platforms only have
-    # the func, or both func and macro versions. Check for the macro only, and
-    # define replacement ones if not found.
-    # Note: including Python.h is necessary because it modifies some math.h
-    # definitions
-    for f in macros:
-        st = config.check_decl(f, headers = ["Python.h", "math.h"])
-        if st:
-            _add_decl(f)
-
-    return priv, pub
-
-def check_types(config_cmd, ext, build_dir):
-    private_defines = []
-    public_defines = []
-
-    # Expected size (in number of bytes) for each type. This is an
-    # optimization: those are only hints, and an exhaustive search for the size
-    # is done if the hints are wrong.
-    expected = {}
-    expected['short'] = [2]
-    expected['int'] = [4]
-    expected['long'] = [8, 4]
-    expected['float'] = [4]
-    expected['double'] = [8]
-    expected['long double'] = [8, 12, 16]
-    expected['Py_intptr_t'] = [4, 8]
-    expected['PY_LONG_LONG'] = [8]
-    expected['long long'] = [8]
-
-    # Check we have the python header (-dev* packages on Linux)
-    result = config_cmd.check_header('Python.h')
-    if not result:
-        raise SystemError(
-                "Cannot compile 'Python.h'. 
Perhaps you need to "\ - "install python-dev|python-devel.") - res = config_cmd.check_header("endian.h") - if res: - private_defines.append(('HAVE_ENDIAN_H', 1)) - public_defines.append(('NPY_HAVE_ENDIAN_H', 1)) - - # Check basic types sizes - for type in ('short', 'int', 'long'): - res = config_cmd.check_decl("SIZEOF_%s" % sym2def(type), headers = ["Python.h"]) - if res: - public_defines.append(('NPY_SIZEOF_%s' % sym2def(type), "SIZEOF_%s" % sym2def(type))) - else: - res = config_cmd.check_type_size(type, expected=expected[type]) - if res >= 0: - public_defines.append(('NPY_SIZEOF_%s' % sym2def(type), '%d' % res)) - else: - raise SystemError("Checking sizeof (%s) failed !" % type) - - for type in ('float', 'double', 'long double'): - already_declared = config_cmd.check_decl("SIZEOF_%s" % sym2def(type), - headers = ["Python.h"]) - res = config_cmd.check_type_size(type, expected=expected[type]) - if res >= 0: - public_defines.append(('NPY_SIZEOF_%s' % sym2def(type), '%d' % res)) - if not already_declared and not type == 'long double': - private_defines.append(('SIZEOF_%s' % sym2def(type), '%d' % res)) - else: - raise SystemError("Checking sizeof (%s) failed !" % type) - - # Compute size of corresponding complex type: used to check that our - # definition is binary compatible with C99 complex type (check done at - # build time in npy_common.h) - complex_def = "struct {%s __x; %s __y;}" % (type, type) - res = config_cmd.check_type_size(complex_def, expected=2*expected[type]) - if res >= 0: - public_defines.append(('NPY_SIZEOF_COMPLEX_%s' % sym2def(type), '%d' % res)) - else: - raise SystemError("Checking sizeof (%s) failed !" 
% complex_def) - - - for type in ('Py_intptr_t',): - res = config_cmd.check_type_size(type, headers=["Python.h"], - library_dirs=[pythonlib_dir()], - expected=expected[type]) - - if res >= 0: - private_defines.append(('SIZEOF_%s' % sym2def(type), '%d' % res)) - public_defines.append(('NPY_SIZEOF_%s' % sym2def(type), '%d' % res)) - else: - raise SystemError("Checking sizeof (%s) failed !" % type) - - # We check declaration AND type because that's how distutils does it. - if config_cmd.check_decl('PY_LONG_LONG', headers=['Python.h']): - res = config_cmd.check_type_size('PY_LONG_LONG', headers=['Python.h'], - library_dirs=[pythonlib_dir()], - expected=expected['PY_LONG_LONG']) - if res >= 0: - private_defines.append(('SIZEOF_%s' % sym2def('PY_LONG_LONG'), '%d' % res)) - public_defines.append(('NPY_SIZEOF_%s' % sym2def('PY_LONG_LONG'), '%d' % res)) - else: - raise SystemError("Checking sizeof (%s) failed !" % 'PY_LONG_LONG') - - res = config_cmd.check_type_size('long long', - expected=expected['long long']) - if res >= 0: - #private_defines.append(('SIZEOF_%s' % sym2def('long long'), '%d' % res)) - public_defines.append(('NPY_SIZEOF_%s' % sym2def('long long'), '%d' % res)) - else: - raise SystemError("Checking sizeof (%s) failed !" 
% 'long long') - - if not config_cmd.check_decl('CHAR_BIT', headers=['Python.h']): - raise RuntimeError( - "Config wo CHAR_BIT is not supported"\ - ", please contact the maintainers") - - return private_defines, public_defines - -def check_mathlib(config_cmd): - # Testing the C math library - mathlibs = [] - mathlibs_choices = [[],['m'],['cpml']] - mathlib = os.environ.get('MATHLIB') - if mathlib: - mathlibs_choices.insert(0,mathlib.split(',')) - for libs in mathlibs_choices: - if config_cmd.check_func("exp", libraries=libs, decl=True, call=True): - mathlibs = libs - break - else: - raise EnvironmentError("math library missing; rerun " - "setup.py after setting the " - "MATHLIB env variable") - return mathlibs - -def visibility_define(config): - """Return the define value to use for NPY_VISIBILITY_HIDDEN (may be empty - string).""" - if config.check_compiler_gcc4(): - return '__attribute__((visibility("hidden")))' - else: - return '' - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration,dot_join - from numpy.distutils.system_info import get_info, default_lib_dirs - - config = Configuration('core',parent_package,top_path) - local_dir = config.local_path - codegen_dir = join(local_dir,'code_generators') - - if is_released(config): - warnings.simplefilter('error', MismatchCAPIWarning) - - # Check whether we have a mismatch between the set C API VERSION and the - # actual C API VERSION - check_api_version(C_API_VERSION, codegen_dir) - - generate_umath_py = join(codegen_dir,'generate_umath.py') - n = dot_join(config.name,'generate_umath') - generate_umath = imp.load_module('_'.join(n.split('.')), - open(generate_umath_py,'U'),generate_umath_py, - ('.py','U',1)) - - header_dir = 'include/numpy' # this is relative to config.path_in_package - - cocache = CallOnceOnly() - - def generate_config_h(ext, build_dir): - target = join(build_dir,header_dir,'config.h') - d = os.path.dirname(target) - if not os.path.exists(d): - 
os.makedirs(d) - - if newer(__file__,target): - config_cmd = config.get_config_cmd() - log.info('Generating %s',target) - - # Check sizeof - moredefs, ignored = cocache.check_types(config_cmd, ext, build_dir) - - # Check math library and C99 math funcs availability - mathlibs = check_mathlib(config_cmd) - moredefs.append(('MATHLIB',','.join(mathlibs))) - - check_math_capabilities(config_cmd, moredefs, mathlibs) - moredefs.extend(cocache.check_ieee_macros(config_cmd)[0]) - moredefs.extend(cocache.check_complex(config_cmd, mathlibs)[0]) - - # Signal check - if is_npy_no_signal(): - moredefs.append('__NPY_PRIVATE_NO_SIGNAL') - - # Windows checks - if sys.platform=='win32' or os.name=='nt': - win32_checks(moredefs) - - # Inline check - inline = config_cmd.check_inline() - - # Check whether we need our own wide character support - if not config_cmd.check_decl('Py_UNICODE_WIDE', headers=['Python.h']): - PYTHON_HAS_UNICODE_WIDE = True - else: - PYTHON_HAS_UNICODE_WIDE = False - - if ENABLE_SEPARATE_COMPILATION: - moredefs.append(('ENABLE_SEPARATE_COMPILATION', 1)) - - # Get long double representation - if sys.platform != 'darwin': - rep = check_long_double_representation(config_cmd) - if rep in ['INTEL_EXTENDED_12_BYTES_LE', - 'INTEL_EXTENDED_16_BYTES_LE', - 'IEEE_QUAD_LE', 'IEEE_QUAD_BE', - 'IEEE_DOUBLE_LE', 'IEEE_DOUBLE_BE', - 'DOUBLE_DOUBLE_BE']: - moredefs.append(('HAVE_LDOUBLE_%s' % rep, 1)) - else: - raise ValueError("Unrecognized long double format: %s" % rep) - - # Py3K check - if sys.version_info[0] == 3: - moredefs.append(('NPY_PY3K', 1)) - - # Generate the config.h file from moredefs - target_f = open(target, 'w') - for d in moredefs: - if isinstance(d,str): - target_f.write('#define %s\n' % (d)) - else: - target_f.write('#define %s %s\n' % (d[0],d[1])) - - # define inline to our keyword, or nothing - target_f.write('#ifndef __cplusplus\n') - if inline == 'inline': - target_f.write('/* #undef inline */\n') - else: - target_f.write('#define inline %s\n' % 
inline) - target_f.write('#endif\n') - - # add the guard to make sure config.h is never included directly, - # but always through npy_config.h - target_f.write(""" -#ifndef _NPY_NPY_CONFIG_H_ -#error config.h should never be included directly, include npy_config.h instead -#endif -""") - - target_f.close() - print('File:',target) - target_f = open(target) - print(target_f.read()) - target_f.close() - print('EOF') - else: - mathlibs = [] - target_f = open(target) - for line in target_f.readlines(): - s = '#define MATHLIB' - if line.startswith(s): - value = line[len(s):].strip() - if value: - mathlibs.extend(value.split(',')) - target_f.close() - - # Ugly: this can be called within a library and not an extension, - # in which case there is no libraries attributes (and none is - # needed). - if hasattr(ext, 'libraries'): - ext.libraries.extend(mathlibs) - - incl_dir = os.path.dirname(target) - if incl_dir not in config.numpy_include_dirs: - config.numpy_include_dirs.append(incl_dir) - - return target - - def generate_numpyconfig_h(ext, build_dir): - """Depends on config.h: generate_config_h has to be called before !""" - target = join(build_dir,header_dir,'_numpyconfig.h') - d = os.path.dirname(target) - if not os.path.exists(d): - os.makedirs(d) - if newer(__file__,target): - config_cmd = config.get_config_cmd() - log.info('Generating %s',target) - - # Check sizeof - ignored, moredefs = cocache.check_types(config_cmd, ext, build_dir) - - if is_npy_no_signal(): - moredefs.append(('NPY_NO_SIGNAL', 1)) - - if is_npy_no_smp(): - moredefs.append(('NPY_NO_SMP', 1)) - else: - moredefs.append(('NPY_NO_SMP', 0)) - - mathlibs = check_mathlib(config_cmd) - moredefs.extend(cocache.check_ieee_macros(config_cmd)[1]) - moredefs.extend(cocache.check_complex(config_cmd, mathlibs)[1]) - - if ENABLE_SEPARATE_COMPILATION: - moredefs.append(('NPY_ENABLE_SEPARATE_COMPILATION', 1)) - - # Check wether we can use inttypes (C99) formats - if config_cmd.check_decl('PRIdPTR', headers = 
['inttypes.h']): - moredefs.append(('NPY_USE_C99_FORMATS', 1)) - - # visibility check - hidden_visibility = visibility_define(config_cmd) - moredefs.append(('NPY_VISIBILITY_HIDDEN', hidden_visibility)) - - # Add the C API/ABI versions - moredefs.append(('NPY_ABI_VERSION', '0x%.8X' % C_ABI_VERSION)) - moredefs.append(('NPY_API_VERSION', '0x%.8X' % C_API_VERSION)) - - # Add moredefs to header - target_f = open(target, 'w') - for d in moredefs: - if isinstance(d,str): - target_f.write('#define %s\n' % (d)) - else: - target_f.write('#define %s %s\n' % (d[0],d[1])) - - # Define __STDC_FORMAT_MACROS - target_f.write(""" -#ifndef __STDC_FORMAT_MACROS -#define __STDC_FORMAT_MACROS 1 -#endif -""") - target_f.close() - - # Dump the numpyconfig.h header to stdout - print('File: %s' % target) - target_f = open(target) - print(target_f.read()) - target_f.close() - print('EOF') - config.add_data_files((header_dir, target)) - return target - - def generate_api_func(module_name): - def generate_api(ext, build_dir): - script = join(codegen_dir, module_name + '.py') - sys.path.insert(0, codegen_dir) - try: - m = __import__(module_name) - log.info('executing %s', script) - h_file, c_file, doc_file = m.generate_api(os.path.join(build_dir, header_dir)) - finally: - del sys.path[0] - config.add_data_files((header_dir, h_file), - (header_dir, doc_file)) - return (h_file,) - return generate_api - - generate_numpy_api = generate_api_func('generate_numpy_api') - generate_ufunc_api = generate_api_func('generate_ufunc_api') - - config.add_include_dirs(join(local_dir, "src", "private")) - config.add_include_dirs(join(local_dir, "src")) - config.add_include_dirs(join(local_dir)) - # Multiarray version: this function is needed to build foo.c from foo.c.src - # when foo.c is included in another file and as such not in the src - # argument of build_ext command - def generate_multiarray_templated_sources(ext, build_dir): - from numpy.distutils.misc_util import get_cmd - - subpath = join('src', 
'multiarray') - sources = [join(local_dir, subpath, 'scalartypes.c.src'), - join(local_dir, subpath, 'arraytypes.c.src')] - - # numpy.distutils generate .c from .c.src in weird directories, we have - # to add them there as they depend on the build_dir - config.add_include_dirs(join(build_dir, subpath)) - - cmd = get_cmd('build_src') - cmd.ensure_finalized() - - cmd.template_sources(sources, ext) - - # umath version: this function is needed to build foo.c from foo.c.src - # when foo.c is included in another file and as such not in the src - # argument of build_ext command - def generate_umath_templated_sources(ext, build_dir): - from numpy.distutils.misc_util import get_cmd - - subpath = join('src', 'umath') - sources = [join(local_dir, subpath, 'loops.c.src'), - join(local_dir, subpath, 'umathmodule.c.src')] - - # numpy.distutils generate .c from .c.src in weird directories, we have - # to add them there as they depend on the build_dir - config.add_include_dirs(join(build_dir, subpath)) - - cmd = get_cmd('build_src') - cmd.ensure_finalized() - - cmd.template_sources(sources, ext) - - - def generate_umath_c(ext,build_dir): - target = join(build_dir,header_dir,'__umath_generated.c') - dir = os.path.dirname(target) - if not os.path.exists(dir): - os.makedirs(dir) - script = generate_umath_py - if newer(script,target): - f = open(target,'w') - f.write(generate_umath.make_code(generate_umath.defdict, - generate_umath.__file__)) - f.close() - return [] - - config.add_data_files('include/numpy/*.h') - config.add_include_dirs(join('src', 'npymath')) - config.add_include_dirs(join('src', 'multiarray')) - config.add_include_dirs(join('src', 'umath')) - - config.numpy_include_dirs.extend(config.paths('include')) - - deps = [join('src','npymath','_signbit.c'), - join('include','numpy','*object.h'), - 'include/numpy/fenv/fenv.c', - 'include/numpy/fenv/fenv.h', - join(codegen_dir,'genapi.py'), - ] - - # Don't install fenv unless we need them. 
-    if sys.platform == 'cygwin':
-        config.add_data_dir('include/numpy/fenv')
-
-    config.add_extension('_sort',
-                         sources=[join('src','_sortmodule.c.src'),
-                                  generate_config_h,
-                                  generate_numpyconfig_h,
-                                  generate_numpy_api,
-                                  ],
-                         )
-
-    # npymath needs the config.h and numpyconfig.h files to be generated, but
-    # build_clib cannot handle generate_config_h and generate_numpyconfig_h
-    # (don't ask). Because clibs are built before extensions, we have to
-    # explicitly add an extension which has generate_config_h and
-    # generate_numpyconfig_h as sources *before* adding npymath.
-
-    subst_dict = dict([("sep", os.path.sep), ("pkgname", "numpy.core")])
-    def get_mathlib_info(*args):
-        # Another ugly hack: the mathlib info is known once build_src is run,
-        # but we cannot use add_installed_pkg_config here either, so we only
-        # update the substitution dictionary during the npymath build
-        config_cmd = config.get_config_cmd()
-
-        # Check that the toolchain works, to fail early if it doesn't
-        # (avoid late errors with MATHLIB which are confusing if the
-        # compiler does not work). 
- st = config_cmd.try_link('int main(void) { return 0;}') - if not st: - raise RuntimeError("Broken toolchain: cannot link a simple C program") - mlibs = check_mathlib(config_cmd) - - posix_mlib = ' '.join(['-l%s' % l for l in mlibs]) - msvc_mlib = ' '.join(['%s.lib' % l for l in mlibs]) - subst_dict["posix_mathlib"] = posix_mlib - subst_dict["msvc_mathlib"] = msvc_mlib - - config.add_installed_library('npymath', - sources=[join('src', 'npymath', 'npy_math.c.src'), - join('src', 'npymath', 'ieee754.c.src'), - join('src', 'npymath', 'npy_math_complex.c.src'), - get_mathlib_info], - install_dir='lib') - config.add_npy_pkg_config("npymath.ini.in", "lib/npy-pkg-config", - subst_dict) - config.add_npy_pkg_config("mlib.ini.in", "lib/npy-pkg-config", - subst_dict) - - multiarray_deps = [ - join('src', 'multiarray', 'arrayobject.h'), - join('src', 'multiarray', 'arraytypes.h'), - join('src', 'multiarray', 'buffer.h'), - join('src', 'multiarray', 'calculation.h'), - join('src', 'multiarray', 'common.h'), - join('src', 'multiarray', 'convert_datatype.h'), - join('src', 'multiarray', 'convert.h'), - join('src', 'multiarray', 'conversion_utils.h'), - join('src', 'multiarray', 'ctors.h'), - join('src', 'multiarray', 'descriptor.h'), - join('src', 'multiarray', 'getset.h'), - join('src', 'multiarray', 'hashdescr.h'), - join('src', 'multiarray', 'iterators.h'), - join('src', 'multiarray', 'mapping.h'), - join('src', 'multiarray', 'methods.h'), - join('src', 'multiarray', 'multiarraymodule.h'), - join('src', 'multiarray', 'numpymemoryview.h'), - join('src', 'multiarray', 'number.h'), - join('src', 'multiarray', 'numpyos.h'), - join('src', 'multiarray', 'refcount.h'), - join('src', 'multiarray', 'scalartypes.h'), - join('src', 'multiarray', 'sequence.h'), - join('src', 'multiarray', 'shape.h'), - join('src', 'multiarray', 'ucsnarrow.h'), - join('src', 'multiarray', 'usertypes.h')] - - multiarray_src = [join('src', 'multiarray', 'multiarraymodule.c'), - join('src', 'multiarray', 
'hashdescr.c'), - join('src', 'multiarray', 'arrayobject.c'), - join('src', 'multiarray', 'numpymemoryview.c'), - join('src', 'multiarray', 'buffer.c'), - join('src', 'multiarray', 'numpyos.c'), - join('src', 'multiarray', 'conversion_utils.c'), - join('src', 'multiarray', 'flagsobject.c'), - join('src', 'multiarray', 'descriptor.c'), - join('src', 'multiarray', 'iterators.c'), - join('src', 'multiarray', 'mapping.c'), - join('src', 'multiarray', 'number.c'), - join('src', 'multiarray', 'getset.c'), - join('src', 'multiarray', 'sequence.c'), - join('src', 'multiarray', 'methods.c'), - join('src', 'multiarray', 'ctors.c'), - join('src', 'multiarray', 'convert_datatype.c'), - join('src', 'multiarray', 'convert.c'), - join('src', 'multiarray', 'shape.c'), - join('src', 'multiarray', 'item_selection.c'), - join('src', 'multiarray', 'calculation.c'), - join('src', 'multiarray', 'common.c'), - join('src', 'multiarray', 'usertypes.c'), - join('src', 'multiarray', 'scalarapi.c'), - join('src', 'multiarray', 'refcount.c'), - join('src', 'multiarray', 'arraytypes.c.src'), - join('src', 'multiarray', 'scalartypes.c.src')] - - if PYTHON_HAS_UNICODE_WIDE: - multiarray_src.append(join('src', 'multiarray', 'ucsnarrow.c')) - - umath_src = [join('src', 'umath', 'umathmodule.c.src'), - join('src', 'umath', 'funcs.inc.src'), - join('src', 'umath', 'loops.c.src'), - join('src', 'umath', 'ufunc_object.c')] - - umath_deps = [generate_umath_py, - join(codegen_dir,'generate_ufunc_api.py')] - - if not ENABLE_SEPARATE_COMPILATION: - multiarray_deps.extend(multiarray_src) - multiarray_src = [join('src', 'multiarray', 'multiarraymodule_onefile.c')] - multiarray_src.append(generate_multiarray_templated_sources) - - umath_deps.extend(umath_src) - umath_src = [join('src', 'umath', 'umathmodule_onefile.c')] - umath_src.append(generate_umath_templated_sources) - umath_src.append(join('src', 'umath', 'funcs.inc.src')) - - config.add_extension('multiarray', - sources = multiarray_src + - 
[generate_config_h, - generate_numpyconfig_h, - generate_numpy_api, - join(codegen_dir,'generate_numpy_api.py'), - join('*.py')], - depends = deps + multiarray_deps, - libraries=['npymath']) - - config.add_extension('umath', - sources = [generate_config_h, - generate_numpyconfig_h, - generate_umath_c, - generate_ufunc_api, - ] + umath_src, - depends = deps + umath_deps, - libraries=['npymath'], - ) - - config.add_extension('scalarmath', - sources=[join('src','scalarmathmodule.c.src'), - generate_config_h, - generate_numpyconfig_h, - generate_numpy_api, - generate_ufunc_api], - ) - - # Configure blasdot - blas_info = get_info('blas_opt',0) - #blas_info = {} - def get_dotblas_sources(ext, build_dir): - if blas_info: - if ('NO_ATLAS_INFO',1) in blas_info.get('define_macros',[]): - return None # dotblas needs ATLAS, Fortran compiled blas will not be sufficient. - return ext.depends[:1] - return None # no extension module will be built - - config.add_extension('_dotblas', - sources = [get_dotblas_sources], - depends=[join('blasdot','_dotblas.c'), - join('blasdot','cblas.h'), - ], - include_dirs = ['blasdot'], - extra_info = blas_info - ) - - config.add_extension('umath_tests', - sources = [join('src','umath', 'umath_tests.c.src')]) - - config.add_extension('multiarray_tests', - sources = [join('src', 'multiarray', 'multiarray_tests.c.src')]) - - config.add_data_dir('tests') - config.add_data_dir('tests/data') - - config.make_svn_version_py() - - return config - -if __name__=='__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/core/setup_common.py b/pythonPackages/numpy/numpy/core/setup_common.py deleted file mode 100755 index 8356071467..0000000000 --- a/pythonPackages/numpy/numpy/core/setup_common.py +++ /dev/null @@ -1,271 +0,0 @@ -# Code shared by distutils and scons builds -import sys -from os.path import join -import warnings -import copy -import binascii - -from distutils.ccompiler 
import CompileError - -#------------------- -# Versioning support -#------------------- -# How to change C_API_VERSION ? -# - increase C_API_VERSION value -# - record the hash for the new C API with the script cversions.py -# and add the hash to cversions.txt -# The hash values are used to remind developers when the C API number was not -# updated - this generates a MismatchCAPIWarning, which is turned into an -# exception for released versions. - -# Binary compatibility version number. This number is increased whenever the -# C-API is changed such that binary compatibility is broken, i.e. whenever a -# recompile of extension modules is needed. -C_ABI_VERSION = 0x01000009 - -# Minor API version. This number is increased whenever a change is made to the -# C-API -- whether it breaks binary compatibility or not. Some changes, such -# as adding a function pointer to the end of the function table, can be made -# without breaking binary compatibility. In this case, only the C_API_VERSION -# (*not* C_ABI_VERSION) would be increased. Whenever binary compatibility is -# broken, both C_API_VERSION and C_ABI_VERSION should be increased. 
-C_API_VERSION = 0x00000004 - -class MismatchCAPIWarning(Warning): - pass - -def is_released(config): - """Return True if a released version of numpy is detected.""" - from distutils.version import LooseVersion - - v = config.get_version('../version.py') - if v is None: - raise ValueError("Could not get version") - pv = LooseVersion(vstring=v).version - if len(pv) > 3: - return False - return True - -def get_api_versions(apiversion, codegen_dir): - """Return the current C API checksum and the recorded checksum for the - given version of the C API.""" - api_files = [join(codegen_dir, 'numpy_api_order.txt'), - join(codegen_dir, 'ufunc_api_order.txt')] - - # Compute the hash of the current API as defined in the .txt files in - # code_generators - sys.path.insert(0, codegen_dir) - try: - m = __import__('genapi') - numpy_api = __import__('numpy_api') - curapi_hash = m.fullapi_hash(numpy_api.full_api) - apis_hash = m.get_versions_hash() - finally: - del sys.path[0] - - return curapi_hash, apis_hash[apiversion] - -def check_api_version(apiversion, codegen_dir): - """Emits a MismatchCAPIWarning if the C API version needs updating.""" - curapi_hash, api_hash = get_api_versions(apiversion, codegen_dir) - - # If the hashes differ, it means that the api .txt files in - # codegen_dir have been updated without the API version being - # updated. Any modification in those .txt files should be reflected - # in the api and eventually abi versions. - # To compute the checksum of the current API, use - # code_generators/cversions.py script - if not curapi_hash == api_hash: - msg = "API mismatch detected, the C API version " \ - "numbers have to be updated. Current C api version is %d, " \ - "with checksum %s, but recorded checksum for C API version %d in " \ - "codegen_dir/cversions.txt is %s. If functions were added in the " \ - "C API, you have to update C_API_VERSION in %s." 
- warnings.warn(msg % (apiversion, curapi_hash, apiversion, api_hash, - __file__), - MismatchCAPIWarning) -# Mandatory functions: if not found, fail the build -MANDATORY_FUNCS = ["sin", "cos", "tan", "sinh", "cosh", "tanh", "fabs", - "floor", "ceil", "sqrt", "log10", "log", "exp", "asin", - "acos", "atan", "fmod", 'modf', 'frexp', 'ldexp'] - -# Standard functions which may not be available and for which we have a -# replacement implementation. Note that some of these are C99 functions. -OPTIONAL_STDFUNCS = ["expm1", "log1p", "acosh", "asinh", "atanh", - "rint", "trunc", "exp2", "log2", "hypot", "atan2", "pow", - "copysign", "nextafter"] - -# Subset of OPTIONAL_STDFUNCS which may already have HAVE_* defined by Python.h -OPTIONAL_STDFUNCS_MAYBE = ["expm1", "log1p", "acosh", "atanh", "asinh", "hypot", - "copysign"] - -# C99 functions: float and long double versions -C99_FUNCS = ["sin", "cos", "tan", "sinh", "cosh", "tanh", "fabs", "floor", - "ceil", "rint", "trunc", "sqrt", "log10", "log", "log1p", "exp", - "expm1", "asin", "acos", "atan", "asinh", "acosh", "atanh", - "hypot", "atan2", "pow", "fmod", "modf", 'frexp', 'ldexp', - "exp2", "log2", "copysign", "nextafter"] - -C99_FUNCS_SINGLE = [f + 'f' for f in C99_FUNCS] -C99_FUNCS_EXTENDED = [f + 'l' for f in C99_FUNCS] - -C99_COMPLEX_TYPES = ['complex double', 'complex float', 'complex long double'] - -C99_COMPLEX_FUNCS = ['creal', 'cimag', 'cabs', 'carg', 'cexp', 'csqrt', 'clog', - 'ccos', 'csin', 'cpow'] - -def fname2def(name): - return "HAVE_%s" % name.upper() - -def sym2def(symbol): - define = symbol.replace(' ', '') - return define.upper() - -def type2def(symbol): - define = symbol.replace(' ', '_') - return define.upper() - -# Code to detect long double representation taken from MPFR m4 macro -def check_long_double_representation(cmd): - cmd._check_compiler() - body = LONG_DOUBLE_REPRESENTATION_SRC % {'type': 'long double'} - - # We need to use _compile because we need the object filename - src, object = 
cmd._compile(body, None, None, 'c') - try: - type = long_double_representation(pyod(object)) - return type - finally: - cmd._clean() - -LONG_DOUBLE_REPRESENTATION_SRC = r""" -/* "before" is 16 bytes to ensure there's no padding between it and "x". - * We're not expecting any "long double" bigger than 16 bytes or with - * alignment requirements stricter than 16 bytes. */ -typedef %(type)s test_type; - -struct { - char before[16]; - test_type x; - char after[8]; -} foo = { - { '\0', '\0', '\0', '\0', '\0', '\0', '\0', '\0', - '\001', '\043', '\105', '\147', '\211', '\253', '\315', '\357' }, - -123456789.0, - { '\376', '\334', '\272', '\230', '\166', '\124', '\062', '\020' } -}; -""" - -def pyod(filename): - """Python implementation of the od UNIX utility (od -b, more exactly). - - Parameters - ---------- - filename : str - name of the file to get the dump from. - - Returns - ------- - out : seq - list of lines of od output - - Notes - ----- - We only implement enough to get the necessary information for long double - representation, this is not intended as a compatible replacement for od. 
- """ - def _pyod2(): - out = [] - - fid = open(filename, 'r') - try: - yo = [int(oct(int(binascii.b2a_hex(o), 16))) for o in fid.read()] - for i in range(0, len(yo), 16): - line = ['%07d' % int(oct(i))] - line.extend(['%03d' % c for c in yo[i:i+16]]) - out.append(" ".join(line)) - return out - finally: - fid.close() - - def _pyod3(): - out = [] - - fid = open(filename, 'rb') - try: - yo2 = [oct(o)[2:] for o in fid.read()] - for i in range(0, len(yo2), 16): - line = ['%07d' % int(oct(i)[2:])] - line.extend(['%03d' % int(c) for c in yo2[i:i+16]]) - out.append(" ".join(line)) - return out - finally: - fid.close() - - if sys.version_info[0] < 3: - return _pyod2() - else: - return _pyod3() - -_BEFORE_SEQ = ['000','000','000','000','000','000','000','000', - '001','043','105','147','211','253','315','357'] -_AFTER_SEQ = ['376', '334','272','230','166','124','062','020'] - -_IEEE_DOUBLE_BE = ['301', '235', '157', '064', '124', '000', '000', '000'] -_IEEE_DOUBLE_LE = _IEEE_DOUBLE_BE[::-1] -_INTEL_EXTENDED_12B = ['000', '000', '000', '000', '240', '242', '171', '353', - '031', '300', '000', '000'] -_INTEL_EXTENDED_16B = ['000', '000', '000', '000', '240', '242', '171', '353', - '031', '300', '000', '000', '000', '000', '000', '000'] -_IEEE_QUAD_PREC_BE = ['300', '031', '326', '363', '105', '100', '000', '000', - '000', '000', '000', '000', '000', '000', '000', '000'] -_IEEE_QUAD_PREC_LE = _IEEE_QUAD_PREC_BE[::-1] -_DOUBLE_DOUBLE_BE = ['301', '235', '157', '064', '124', '000', '000', '000'] + \ - ['000'] * 8 - -def long_double_representation(lines): - """Given a binary dump as given by GNU od -b, look for long double - representation.""" - - # Read contains a list of 32 items, each item is a byte (in octal - # representation, as a string). 
We 'slide' over the output until read is of - # the form before_seq + content + after_sequence, where content is the long double - # representation: - # - content is 12 bytes: 80 bits Intel representation - # - content is 16 bytes: 80 bits Intel representation (64 bits) or quad precision - # - content is 8 bytes: same as double (not implemented yet) - read = [''] * 32 - saw = None - for line in lines: - # we skip the first word, as od -b output an index at the beginning of - # each line - for w in line.split()[1:]: - read.pop(0) - read.append(w) - - # If the end of read is equal to the after_sequence, read contains - # the long double - if read[-8:] == _AFTER_SEQ: - saw = copy.copy(read) - if read[:12] == _BEFORE_SEQ[4:]: - if read[12:-8] == _INTEL_EXTENDED_12B: - return 'INTEL_EXTENDED_12_BYTES_LE' - elif read[:8] == _BEFORE_SEQ[8:]: - if read[8:-8] == _INTEL_EXTENDED_16B: - return 'INTEL_EXTENDED_16_BYTES_LE' - elif read[8:-8] == _IEEE_QUAD_PREC_BE: - return 'IEEE_QUAD_BE' - elif read[8:-8] == _IEEE_QUAD_PREC_LE: - return 'IEEE_QUAD_LE' - elif read[8:-8] == _DOUBLE_DOUBLE_BE: - return 'DOUBLE_DOUBLE_BE' - elif read[:16] == _BEFORE_SEQ: - if read[16:-8] == _IEEE_DOUBLE_LE: - return 'IEEE_DOUBLE_LE' - elif read[16:-8] == _IEEE_DOUBLE_BE: - return 'IEEE_DOUBLE_BE' - - if saw is not None: - raise ValueError("Unrecognized format (%s)" % saw) - else: - # We never detected the after_sequence - raise ValueError("Could not lock sequences (%s)" % saw) diff --git a/pythonPackages/numpy/numpy/core/setupscons.py b/pythonPackages/numpy/numpy/core/setupscons.py deleted file mode 100755 index 4d329fded0..0000000000 --- a/pythonPackages/numpy/numpy/core/setupscons.py +++ /dev/null @@ -1,111 +0,0 @@ -import os -import sys -import glob -from os.path import join, basename - -from numpy.distutils import log - -from numscons import get_scons_build_dir - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration,dot_join - from 
numpy.distutils.command.scons import get_scons_pkg_build_dir - from numpy.distutils.system_info import get_info, default_lib_dirs - - config = Configuration('core',parent_package,top_path) - local_dir = config.local_path - - header_dir = 'include/numpy' # this is relative to config.path_in_package - - config.add_subpackage('code_generators') - - # List of files to register to numpy.distutils - dot_blas_src = [join('blasdot', '_dotblas.c'), - join('blasdot', 'cblas.h')] - api_definition = [join('code_generators', 'numpy_api_order.txt'), - join('code_generators', 'ufunc_api_order.txt')] - core_src = [join('src', basename(i)) for i in glob.glob(join(local_dir, - 'src', - '*.c'))] - core_src += [join('src', basename(i)) for i in glob.glob(join(local_dir, - 'src', - '*.src'))] - - source_files = dot_blas_src + api_definition + core_src + \ - [join(header_dir, 'numpyconfig.h.in')] - - # Add generated files to distutils... - def add_config_header(): - scons_build_dir = get_scons_build_dir() - # XXX: I really have to think about how to communicate path info - # between scons and distutils, and set the options at one single - # location. - target = join(get_scons_pkg_build_dir(config.name), 'config.h') - incl_dir = os.path.dirname(target) - if incl_dir not in config.numpy_include_dirs: - config.numpy_include_dirs.append(incl_dir) - - def add_numpyconfig_header(): - scons_build_dir = get_scons_build_dir() - # XXX: I really have to think about how to communicate path info - # between scons and distutils, and set the options at one single - # location. 
- target = join(get_scons_pkg_build_dir(config.name), - 'include/numpy/numpyconfig.h') - incl_dir = os.path.dirname(target) - if incl_dir not in config.numpy_include_dirs: - config.numpy_include_dirs.append(incl_dir) - config.add_data_files((header_dir, target)) - - def add_array_api(): - scons_build_dir = get_scons_build_dir() - # XXX: I really have to think about how to communicate path info - # between scons and distutils, and set the options at one single - # location. - h_file = join(get_scons_pkg_build_dir(config.name), - 'include/numpy/__multiarray_api.h') - t_file = join(get_scons_pkg_build_dir(config.name), - 'include/numpy/multiarray_api.txt') - config.add_data_files((header_dir, h_file), - (header_dir, t_file)) - - def add_ufunc_api(): - scons_build_dir = get_scons_build_dir() - # XXX: I really have to think about how to communicate path info - # between scons and distutils, and set the options at one single - # location. - h_file = join(get_scons_pkg_build_dir(config.name), - 'include/numpy/__ufunc_api.h') - t_file = join(get_scons_pkg_build_dir(config.name), - 'include/numpy/ufunc_api.txt') - config.add_data_files((header_dir, h_file), - (header_dir, t_file)) - - def add_generated_files(*args, **kw): - add_config_header() - add_numpyconfig_header() - add_array_api() - add_ufunc_api() - - config.add_sconscript('SConstruct', - post_hook = add_generated_files, - source_files = source_files) - config.add_scons_installed_library('npymath', 'lib') - - config.add_data_files('include/numpy/*.h') - config.add_include_dirs('src') - - config.numpy_include_dirs.extend(config.paths('include')) - - # Don't install fenv unless we need them. 
- if sys.platform == 'cygwin': - config.add_data_dir('include/numpy/fenv') - - config.add_data_dir('tests') - config.make_svn_version_py() - - return config - -if __name__=='__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/core/shape_base.py b/pythonPackages/numpy/numpy/core/shape_base.py deleted file mode 100755 index c86684c6fe..0000000000 --- a/pythonPackages/numpy/numpy/core/shape_base.py +++ /dev/null @@ -1,259 +0,0 @@ -__all__ = ['atleast_1d','atleast_2d','atleast_3d','vstack','hstack'] - -import numeric as _nx -from numeric import array, asarray, newaxis - -def atleast_1d(*arys): - """ - Convert inputs to arrays with at least one dimension. - - Scalar inputs are converted to 1-dimensional arrays, whilst - higher-dimensional inputs are preserved. - - Parameters - ---------- - array1, array2, ... : array_like - One or more input arrays. - - Returns - ------- - ret : ndarray - An array, or sequence of arrays, each with ``a.ndim >= 1``. - Copies are made only if necessary. - - See Also - -------- - atleast_2d, atleast_3d - - Examples - -------- - >>> np.atleast_1d(1.0) - array([ 1.]) - - >>> x = np.arange(9.0).reshape(3,3) - >>> np.atleast_1d(x) - array([[ 0., 1., 2.], - [ 3., 4., 5.], - [ 6., 7., 8.]]) - >>> np.atleast_1d(x) is x - True - - >>> np.atleast_1d(1, [3, 4]) - [array([1]), array([3, 4])] - - """ - res = [] - for ary in arys: - res.append(array(ary,copy=False,subok=True,ndmin=1)) - if len(res) == 1: - return res[0] - else: - return res - -def atleast_2d(*arys): - """ - View inputs as arrays with at least two dimensions. - - Parameters - ---------- - array1, array2, ... : array_like - One or more array-like sequences. Non-array inputs are converted - to arrays. Arrays that already have two or more dimensions are - preserved. - - Returns - ------- - res, res2, ... : ndarray - An array, or tuple of arrays, each with ``a.ndim >= 2``. 
- Copies are avoided where possible, and views with two or more - dimensions are returned. - - See Also - -------- - atleast_1d, atleast_3d - - Examples - -------- - >>> np.atleast_2d(3.0) - array([[ 3.]]) - - >>> x = np.arange(3.0) - >>> np.atleast_2d(x) - array([[ 0., 1., 2.]]) - >>> np.atleast_2d(x).base is x - True - - >>> np.atleast_2d(1, [1, 2], [[1, 2]]) - [array([[1]]), array([[1, 2]]), array([[1, 2]])] - - """ - res = [] - for ary in arys: - res.append(array(ary,copy=False,subok=True,ndmin=2)) - if len(res) == 1: - return res[0] - else: - return res - -def atleast_3d(*arys): - """ - View inputs as arrays with at least three dimensions. - - Parameters - ---------- - array1, array2, ... : array_like - One or more array-like sequences. Non-array inputs are converted - to arrays. Arrays that already have three or more dimensions are - preserved. - - Returns - ------- - res1, res2, ... : ndarray - An array, or tuple of arrays, each with ``a.ndim >= 3``. - Copies are avoided where possible, and views with three or more - dimensions are returned. For example, a 1-D array of shape ``N`` - becomes a view of shape ``(1, N, 1)``. A 2-D array of shape ``(M, N)`` - becomes a view of shape ``(M, N, 1)``. - - See Also - -------- - atleast_1d, atleast_2d - - Examples - -------- - >>> np.atleast_3d(3.0) - array([[[ 3.]]]) - - >>> x = np.arange(3.0) - >>> np.atleast_3d(x).shape - (1, 3, 1) - - >>> x = np.arange(12.0).reshape(4,3) - >>> np.atleast_3d(x).shape - (4, 3, 1) - >>> np.atleast_3d(x).base is x - True - - >>> for arr in np.atleast_3d([1, 2], [[1, 2]], [[[1, 2]]]): - ... print arr, arr.shape - ... 
- [[[1] - [2]]] (1, 2, 1) - [[[1] - [2]]] (1, 2, 1) - [[[1 2]]] (1, 1, 2) - - """ - res = [] - for ary in arys: - ary = asarray(ary) - if len(ary.shape) == 0: - result = ary.reshape(1,1,1) - elif len(ary.shape) == 1: - result = ary[newaxis,:,newaxis] - elif len(ary.shape) == 2: - result = ary[:,:,newaxis] - else: - result = ary - res.append(result) - if len(res) == 1: - return res[0] - else: - return res - - -def vstack(tup): - """ - Stack arrays in sequence vertically (row wise). - - Take a sequence of arrays and stack them vertically to make a single - array. Rebuild arrays divided by `vsplit`. - - Parameters - ---------- - tup : sequence of ndarrays - Tuple containing arrays to be stacked. The arrays must have the same - shape along all but the first axis. - - Returns - ------- - stacked : ndarray - The array formed by stacking the given arrays. - - See Also - -------- - hstack : Stack arrays in sequence horizontally (column wise). - dstack : Stack arrays in sequence depth wise (along third dimension). - concatenate : Join a sequence of arrays together. - vsplit : Split array into a list of multiple sub-arrays vertically. - - - Notes - ----- - Equivalent to ``np.concatenate(tup, axis=0)`` - - Examples - -------- - >>> a = np.array([1, 2, 3]) - >>> b = np.array([2, 3, 4]) - >>> np.vstack((a,b)) - array([[1, 2, 3], - [2, 3, 4]]) - - >>> a = np.array([[1], [2], [3]]) - >>> b = np.array([[2], [3], [4]]) - >>> np.vstack((a,b)) - array([[1], - [2], - [3], - [2], - [3], - [4]]) - - """ - return _nx.concatenate(map(atleast_2d,tup),0) - -def hstack(tup): - """ - Stack arrays in sequence horizontally (column wise). - - Take a sequence of arrays and stack them horizontally to make - a single array. Rebuild arrays divided by `hsplit`. - - Parameters - ---------- - tup : sequence of ndarrays - All arrays must have the same shape along all but the second axis. - - Returns - ------- - stacked : ndarray - The array formed by stacking the given arrays. 
- - See Also - -------- - vstack : Stack arrays in sequence vertically (row wise). - dstack : Stack arrays in sequence depth wise (along third axis). - concatenate : Join a sequence of arrays together. - hsplit : Split array along second axis. - - Notes - ----- - Equivalent to ``np.concatenate(tup, axis=1)`` - - Examples - -------- - >>> a = np.array((1,2,3)) - >>> b = np.array((2,3,4)) - >>> np.hstack((a,b)) - array([1, 2, 3, 2, 3, 4]) - >>> a = np.array([[1],[2],[3]]) - >>> b = np.array([[2],[3],[4]]) - >>> np.hstack((a,b)) - array([[1, 2], - [2, 3], - [3, 4]]) - - """ - return _nx.concatenate(map(atleast_1d,tup),1) - diff --git a/pythonPackages/numpy/numpy/core/src/_sortmodule.c.src b/pythonPackages/numpy/numpy/core/src/_sortmodule.c.src deleted file mode 100755 index 709a26a1c6..0000000000 --- a/pythonPackages/numpy/numpy/core/src/_sortmodule.c.src +++ /dev/null @@ -1,1027 +0,0 @@ -/* -*- c -*- */ - -/* - * The purpose of this module is to add faster sort functions - * that are type-specific. This is done by altering the - * function table for the builtin descriptors. - * - * These sorting functions are copied almost directly from numarray - * with a few modifications (complex comparisons compare the imaginary - * part if the real parts are equal, for example), and the names - * are changed. - * - * The original sorting code is due to Charles R. Harris who wrote - * it for numarray. - */ - -/* - * Quick sort is usually the fastest, but the worst case scenario can - * be slower than the merge and heap sorts. The merge sort requires - * extra memory and so for large arrays may not be useful. - * - * The merge sort is *stable*, meaning that equal components - * are unmoved from their entry positions, so it can be used to - * implement lexicographic sorting on multiple keys. - * - * The heap sort is included for completeness. 
- */ - - -#include "Python.h" -#include "numpy/noprefix.h" -#include "numpy/npy_math.h" - -#include "npy_config.h" - -#define NOT_USED NPY_UNUSED(unused) -#define PYA_QS_STACK 100 -#define SMALL_QUICKSORT 15 -#define SMALL_MERGESORT 20 -#define SMALL_STRING 16 - -/* - ***************************************************************************** - ** SWAP MACROS ** - ***************************************************************************** - */ - -/**begin repeat - * - * #TYPE = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, CFLOAT, - * CDOUBLE,CLONGDOUBLE, INTP# - * #type = npy_bool, npy_byte, npy_ubyte, npy_short, npy_ushort, npy_int, - * npy_uint, npy_long, npy_ulong, npy_longlong, npy_ulonglong, - * npy_float, npy_double, npy_longdouble, npy_cfloat, npy_cdouble, - * npy_clongdouble, npy_intp# - */ -#define @TYPE@_SWAP(a,b) {@type@ tmp = (b); (b)=(a); (a) = tmp;} - -/**end repeat**/ - -/* - ***************************************************************************** - ** COMPARISON FUNCTIONS ** - ***************************************************************************** - */ - -/**begin repeat - * - * #TYPE = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG# - * #type = Bool, byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong# - */ -NPY_INLINE static int -@TYPE@_LT(@type@ a, @type@ b) -{ - return a < b; -} -/**end repeat**/ - - -/**begin repeat - * - * #TYPE = FLOAT, DOUBLE, LONGDOUBLE# - * #type = float, double, longdouble# - */ -NPY_INLINE static int -@TYPE@_LT(@type@ a, @type@ b) -{ - return a < b || (b != b && a == a); -} -/**end repeat**/ - - -/* - * For inline functions SUN recommends not using a return in the then part - * of an if statement. It's a SUN compiler thing, so assign the return value - * to a variable instead. 
- */ - -/**begin repeat - * - * #TYPE = CFLOAT, CDOUBLE, CLONGDOUBLE# - * #type = cfloat, cdouble, clongdouble# - */ -NPY_INLINE static int -@TYPE@_LT(@type@ a, @type@ b) -{ - int ret; - - if (a.real < b.real) { - ret = a.imag == a.imag || b.imag != b.imag; - } - else if (a.real > b.real) { - ret = b.imag != b.imag && a.imag == a.imag; - } - else if (a.real == b.real || (a.real != a.real && b.real != b.real)) { - ret = a.imag < b.imag || (b.imag != b.imag && a.imag == a.imag); - } - else { - ret = b.real != b.real; - } - - return ret; -} -/**end repeat**/ - - -/* The PyObject functions are stubs for later use */ -NPY_INLINE static int -PyObject_LT(PyObject *pa, PyObject *pb) -{ - return 0; -} - - -NPY_INLINE static void -STRING_COPY(char *s1, char *s2, size_t len) -{ - memcpy(s1, s2, len); -} - - -NPY_INLINE static void -STRING_SWAP(char *s1, char *s2, size_t len) -{ - while(len--) { - const char t = *s1; - *s1++ = *s2; - *s2++ = t; - } -} - - -NPY_INLINE static int -STRING_LT(char *s1, char *s2, size_t len) -{ - const unsigned char *c1 = (unsigned char *)s1; - const unsigned char *c2 = (unsigned char *)s2; - size_t i; - int ret = 0; - - for (i = 0; i < len; ++i) { - if (c1[i] != c2[i]) { - ret = c1[i] < c2[i]; - break; - } - } - return ret; -} - - -NPY_INLINE static void -UNICODE_COPY(npy_ucs4 *s1, npy_ucs4 *s2, size_t len) -{ - while(len--) { - *s1++ = *s2++; - } -} - - -NPY_INLINE static void -UNICODE_SWAP(npy_ucs4 *s1, npy_ucs4 *s2, size_t len) -{ - while(len--) { - const npy_ucs4 t = *s1; - *s1++ = *s2; - *s2++ = t; - } -} - - -NPY_INLINE static int -UNICODE_LT(npy_ucs4 *s1, npy_ucs4 *s2, size_t len) -{ - size_t i; - int ret = 0; - - for (i = 0; i < len; ++i) { - if (s1[i] != s2[i]) { - ret = s1[i] < s2[i]; - break; - } - } - return ret; -} - - -/* - ***************************************************************************** - ** NUMERIC SORTS ** - ***************************************************************************** - */ - - -/**begin repeat - * - * 
#TYPE = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, - * CFLOAT, CDOUBLE, CLONGDOUBLE# - * #type = Bool, byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble, - * cfloat, cdouble, clongdouble# - */ - - -static int -@TYPE@_quicksort(@type@ *start, npy_intp num, void *NOT_USED) -{ - @type@ *pl = start; - @type@ *pr = start + num - 1; - @type@ vp; - @type@ *stack[PYA_QS_STACK], **sptr = stack, *pm, *pi, *pj, *pk; - - for (;;) { - while ((pr - pl) > SMALL_QUICKSORT) { - /* quicksort partition */ - pm = pl + ((pr - pl) >> 1); - if (@TYPE@_LT(*pm, *pl)) @TYPE@_SWAP(*pm, *pl); - if (@TYPE@_LT(*pr, *pm)) @TYPE@_SWAP(*pr, *pm); - if (@TYPE@_LT(*pm, *pl)) @TYPE@_SWAP(*pm, *pl); - vp = *pm; - pi = pl; - pj = pr - 1; - @TYPE@_SWAP(*pm, *pj); - for (;;) { - do ++pi; while (@TYPE@_LT(*pi, vp)); - do --pj; while (@TYPE@_LT(vp, *pj)); - if (pi >= pj) { - break; - } - @TYPE@_SWAP(*pi,*pj); - } - pk = pr - 1; - @TYPE@_SWAP(*pi, *pk); - /* push largest partition on stack */ - if (pi - pl < pr - pi) { - *sptr++ = pi + 1; - *sptr++ = pr; - pr = pi - 1; - } - else { - *sptr++ = pl; - *sptr++ = pi - 1; - pl = pi + 1; - } - } - - /* insertion sort */ - for (pi = pl + 1; pi <= pr; ++pi) { - vp = *pi; - pj = pi; - pk = pi - 1; - while (pj > pl && @TYPE@_LT(vp, *pk)) { - *pj-- = *pk--; - } - *pj = vp; - } - if (sptr == stack) { - break; - } - pr = *(--sptr); - pl = *(--sptr); - } - - return 0; -} - -static int -@TYPE@_aquicksort(@type@ *v, npy_intp* tosort, npy_intp num, void *NOT_USED) -{ - @type@ vp; - npy_intp *pl, *pr; - npy_intp *stack[PYA_QS_STACK], **sptr=stack, *pm, *pi, *pj, *pk, vi; - - pl = tosort; - pr = tosort + num - 1; - - for (;;) { - while ((pr - pl) > SMALL_QUICKSORT) { - /* quicksort partition */ - pm = pl + ((pr - pl) >> 1); - if (@TYPE@_LT(v[*pm],v[*pl])) INTP_SWAP(*pm,*pl); - if (@TYPE@_LT(v[*pr],v[*pm])) INTP_SWAP(*pr,*pm); - if (@TYPE@_LT(v[*pm],v[*pl])) 
INTP_SWAP(*pm,*pl); - vp = v[*pm]; - pi = pl; - pj = pr - 1; - INTP_SWAP(*pm,*pj); - for (;;) { - do ++pi; while (@TYPE@_LT(v[*pi],vp)); - do --pj; while (@TYPE@_LT(vp,v[*pj])); - if (pi >= pj) { - break; - } - INTP_SWAP(*pi,*pj); - } - pk = pr - 1; - INTP_SWAP(*pi,*pk); - /* push largest partition on stack */ - if (pi - pl < pr - pi) { - *sptr++ = pi + 1; - *sptr++ = pr; - pr = pi - 1; - } - else { - *sptr++ = pl; - *sptr++ = pi - 1; - pl = pi + 1; - } - } - - /* insertion sort */ - for (pi = pl + 1; pi <= pr; ++pi) { - vi = *pi; - vp = v[vi]; - pj = pi; - pk = pi - 1; - while (pj > pl && @TYPE@_LT(vp, v[*pk])) { - *pj-- = *pk--; - } - *pj = vi; - } - if (sptr == stack) { - break; - } - pr = *(--sptr); - pl = *(--sptr); - } - - return 0; -} - - -static int -@TYPE@_heapsort(@type@ *start, npy_intp n, void *NOT_USED) -{ - @type@ tmp, *a; - npy_intp i,j,l; - - /* The array needs to be offset by one for heapsort indexing */ - a = start - 1; - - for (l = n>>1; l > 0; --l) { - tmp = a[l]; - for (i = l, j = l<<1; j <= n;) { - if (j < n && @TYPE@_LT(a[j], a[j+1])) { - j += 1; - } - if (@TYPE@_LT(tmp, a[j])) { - a[i] = a[j]; - i = j; - j += j; - } - else { - break; - } - } - a[i] = tmp; - } - - for (; n > 1;) { - tmp = a[n]; - a[n] = a[1]; - n -= 1; - for (i = 1, j = 2; j <= n;) { - if (j < n && @TYPE@_LT(a[j], a[j+1])) { - j++; - } - if (@TYPE@_LT(tmp, a[j])) { - a[i] = a[j]; - i = j; - j += j; - } - else { - break; - } - } - a[i] = tmp; - } - - return 0; -} - -static int -@TYPE@_aheapsort(@type@ *v, npy_intp *tosort, npy_intp n, void *NOT_USED) -{ - npy_intp *a, i,j,l, tmp; - /* The arrays need to be offset by one for heapsort indexing */ - a = tosort - 1; - - for (l = n>>1; l > 0; --l) { - tmp = a[l]; - for (i = l, j = l<<1; j <= n;) { - if (j < n && @TYPE@_LT(v[a[j]], v[a[j+1]])) { - j += 1; - } - if (@TYPE@_LT(v[tmp], v[a[j]])) { - a[i] = a[j]; - i = j; - j += j; - } - else { - break; - } - } - a[i] = tmp; - } - - for (; n > 1;) { - tmp = a[n]; - a[n] = a[1]; - n -= 
1; - for (i = 1, j = 2; j <= n;) { - if (j < n && @TYPE@_LT(v[a[j]], v[a[j+1]])) { - j++; - } - if (@TYPE@_LT(v[tmp], v[a[j]])) { - a[i] = a[j]; - i = j; - j += j; - } - else { - break; - } - } - a[i] = tmp; - } - - return 0; -} - -static void -@TYPE@_mergesort0(@type@ *pl, @type@ *pr, @type@ *pw) -{ - @type@ vp, *pi, *pj, *pk, *pm; - - if (pr - pl > SMALL_MERGESORT) { - /* merge sort */ - pm = pl + ((pr - pl) >> 1); - @TYPE@_mergesort0(pl, pm, pw); - @TYPE@_mergesort0(pm, pr, pw); - for (pi = pw, pj = pl; pj < pm;) { - *pi++ = *pj++; - } - pj = pw; - pk = pl; - while (pj < pi && pm < pr) { - if (@TYPE@_LT(*pm,*pj)) { - *pk = *pm++; - } - else { - *pk = *pj++; - } - pk++; - } - while(pj < pi) { - *pk++ = *pj++; - } - } - else { - /* insertion sort */ - for (pi = pl + 1; pi < pr; ++pi) { - vp = *pi; - pj = pi; - pk = pi -1; - while (pj > pl && @TYPE@_LT(vp, *pk)) { - *pj-- = *pk--; - } - *pj = vp; - } - } -} - -static int -@TYPE@_mergesort(@type@ *start, npy_intp num, void *NOT_USED) -{ - @type@ *pl, *pr, *pw; - - pl = start; - pr = pl + num; - pw = (@type@ *) PyDataMem_NEW((num/2)*sizeof(@type@)); - if (!pw) { - PyErr_NoMemory(); - return -1; - } - @TYPE@_mergesort0(pl, pr, pw); - - PyDataMem_FREE(pw); - return 0; -} - -static void -@TYPE@_amergesort0(npy_intp *pl, npy_intp *pr, @type@ *v, npy_intp *pw) -{ - @type@ vp; - npy_intp vi, *pi, *pj, *pk, *pm; - - if (pr - pl > SMALL_MERGESORT) { - /* merge sort */ - pm = pl + ((pr - pl + 1)>>1); - @TYPE@_amergesort0(pl,pm-1,v,pw); - @TYPE@_amergesort0(pm,pr,v,pw); - for (pi = pw, pj = pl; pj < pm; ++pi, ++pj) { - *pi = *pj; - } - for (pk = pw, pm = pl; pk < pi && pj <= pr; ++pm) { - if (@TYPE@_LT(v[*pj],v[*pk])) { - *pm = *pj; - ++pj; - } - else { - *pm = *pk; - ++pk; - } - } - for (; pk < pi; ++pm, ++pk) { - *pm = *pk; - } - } - else { - /* insertion sort */ - for (pi = pl + 1; pi <= pr; ++pi) { - vi = *pi; - vp = v[vi]; - for (pj = pi, pk = pi - 1; pj > pl && @TYPE@_LT(vp, v[*pk]); --pj, --pk) { - *pj = *pk; - } - *pj 
= vi; - } - } -} - -static int -@TYPE@_amergesort(@type@ *v, npy_intp *tosort, npy_intp num, void *NOT_USED) -{ - npy_intp *pl, *pr, *pw; - - pl = tosort; pr = pl + num - 1; - pw = PyDimMem_NEW((1+num/2)); - - if (!pw) { - PyErr_NoMemory(); - return -1; - } - - @TYPE@_amergesort0(pl, pr, v, pw); - PyDimMem_FREE(pw); - - return 0; -} - - -/**end repeat**/ - -/* - ***************************************************************************** - ** STRING SORTS ** - ***************************************************************************** - */ - - -/**begin repeat - * - * #TYPE = STRING, UNICODE# - * #type = char, PyArray_UCS4# - */ - -static void -@TYPE@_mergesort0(@type@ *pl, @type@ *pr, @type@ *pw, @type@ *vp, size_t len) -{ - @type@ *pi, *pj, *pk, *pm; - - if ((size_t)(pr - pl) > SMALL_MERGESORT*len) { - /* merge sort */ - pm = pl + (((pr - pl)/len) >> 1)*len; - @TYPE@_mergesort0(pl, pm, pw, vp, len); - @TYPE@_mergesort0(pm, pr, pw, vp, len); - @TYPE@_COPY(pw, pl, pm - pl); - pi = pw + (pm - pl); - pj = pw; - pk = pl; - while (pj < pi && pm < pr) { - if (@TYPE@_LT(pm, pj, len)) { - @TYPE@_COPY(pk, pm, len); - pm += len; - } - else { - @TYPE@_COPY(pk, pj, len); - pj += len; - } - pk += len; - } - @TYPE@_COPY(pk, pj, pi - pj); - } - else { - /* insertion sort */ - for (pi = pl + len; pi < pr; pi += len) { - @TYPE@_COPY(vp, pi, len); - pj = pi; - pk = pi - len; - while (pj > pl && @TYPE@_LT(vp, pk, len)) { - @TYPE@_COPY(pj, pk, len); - pj -= len; - pk -= len; - } - @TYPE@_COPY(pj, vp, len); - } - } -} - -static int -@TYPE@_mergesort(@type@ *start, npy_intp num, PyArrayObject *arr) -{ - const size_t elsize = arr->descr->elsize; - const size_t len = elsize / sizeof(@type@); - @type@ *pl, *pr, *pw, *vp; - int err = 0; - - pl = start; - pr = pl + num*len; - pw = (@type@ *) PyDataMem_NEW((num/2)*elsize); - if (!pw) { - PyErr_NoMemory(); - err = -1; - goto fail_0; - } - vp = (@type@ *) PyDataMem_NEW(elsize); - if (!vp) { - PyErr_NoMemory(); - err = -1; - goto fail_1; - } 
- @TYPE@_mergesort0(pl, pr, pw, vp, len); - - PyDataMem_FREE(vp); -fail_1: - PyDataMem_FREE(pw); -fail_0: - return err; -} - -static int -@TYPE@_quicksort(@type@ *start, npy_intp num, PyArrayObject *arr) -{ - const size_t len = arr->descr->elsize/sizeof(@type@); - @type@ *vp = malloc(arr->descr->elsize); - @type@ *pl = start; - @type@ *pr = start + (num - 1)*len; - @type@ *stack[PYA_QS_STACK], **sptr = stack, *pm, *pi, *pj, *pk; - - for (;;) { - while ((size_t)(pr - pl) > SMALL_QUICKSORT*len) { - /* quicksort partition */ - pm = pl + (((pr - pl)/len) >> 1)*len; - if (@TYPE@_LT(pm, pl, len)) @TYPE@_SWAP(pm, pl, len); - if (@TYPE@_LT(pr, pm, len)) @TYPE@_SWAP(pr, pm, len); - if (@TYPE@_LT(pm, pl, len)) @TYPE@_SWAP(pm, pl, len); - @TYPE@_COPY(vp, pm, len); - pi = pl; - pj = pr - len; - @TYPE@_SWAP(pm, pj, len); - for (;;) { - do pi += len; while (@TYPE@_LT(pi, vp, len)); - do pj -= len; while (@TYPE@_LT(vp, pj, len)); - if (pi >= pj) { - break; - } - @TYPE@_SWAP(pi, pj, len); - } - pk = pr - len; - @TYPE@_SWAP(pi, pk, len); - /* push largest partition on stack */ - if (pi - pl < pr - pi) { - *sptr++ = pi + len; - *sptr++ = pr; - pr = pi - len; - } - else { - *sptr++ = pl; - *sptr++ = pi - len; - pl = pi + len; - } - } - - /* insertion sort */ - for (pi = pl + len; pi <= pr; pi += len) { - @TYPE@_COPY(vp, pi, len); - pj = pi; - pk = pi - len; - while (pj > pl && @TYPE@_LT(vp, pk, len)) { - @TYPE@_COPY(pj, pk, len); - pj -= len; - pk -= len; - } - @TYPE@_COPY(pj, vp, len); - } - if (sptr == stack) { - break; - } - pr = *(--sptr); - pl = *(--sptr); - } - - free(vp); - return 0; -} - - -static int -@TYPE@_heapsort(@type@ *start, npy_intp n, PyArrayObject *arr) -{ - size_t len = arr->descr->elsize/sizeof(@type@); - @type@ *tmp = malloc(arr->descr->elsize); - @type@ *a = start - len; - npy_intp i,j,l; - - for (l = n>>1; l > 0; --l) { - @TYPE@_COPY(tmp, a + l*len, len); - for (i = l, j = l<<1; j <= n;) { - if (j < n && @TYPE@_LT(a + j*len, a + (j+1)*len, len)) - j += 1; - if 
(@TYPE@_LT(tmp, a + j*len, len)) { - @TYPE@_COPY(a + i*len, a + j*len, len); - i = j; - j += j; - } - else { - break; - } - } - @TYPE@_COPY(a + i*len, tmp, len); - } - - for (; n > 1;) { - @TYPE@_COPY(tmp, a + n*len, len); - @TYPE@_COPY(a + n*len, a + len, len); - n -= 1; - for (i = 1, j = 2; j <= n;) { - if (j < n && @TYPE@_LT(a + j*len, a + (j+1)*len, len)) - j++; - if (@TYPE@_LT(tmp, a + j*len, len)) { - @TYPE@_COPY(a + i*len, a + j*len, len); - i = j; - j += j; - } - else { - break; - } - } - @TYPE@_COPY(a + i*len, tmp, len); - } - - free(tmp); - return 0; -} - - -static int -@TYPE@_aheapsort(@type@ *v, npy_intp *tosort, npy_intp n, PyArrayObject *arr) -{ - size_t len = arr->descr->elsize/sizeof(@type@); - npy_intp *a, i,j,l, tmp; - - /* The array needs to be offset by one for heapsort indexing */ - a = tosort - 1; - - for (l = n>>1; l > 0; --l) { - tmp = a[l]; - for (i = l, j = l<<1; j <= n;) { - if (j < n && @TYPE@_LT(v + a[j]*len, v + a[j+1]*len, len)) - j += 1; - if (@TYPE@_LT(v + tmp*len, v + a[j]*len, len)) { - a[i] = a[j]; - i = j; - j += j; - } - else { - break; - } - } - a[i] = tmp; - } - - for (; n > 1;) { - tmp = a[n]; - a[n] = a[1]; - n -= 1; - for (i = 1, j = 2; j <= n;) { - if (j < n && @TYPE@_LT(v + a[j]*len, v + a[j+1]*len, len)) - j++; - if (@TYPE@_LT(v + tmp*len, v + a[j]*len, len)) { - a[i] = a[j]; - i = j; - j += j; - } - else { - break; - } - } - a[i] = tmp; - } - - return 0; -} - - -static int -@TYPE@_aquicksort(@type@ *v, npy_intp* tosort, npy_intp num, PyArrayObject *arr) -{ - size_t len = arr->descr->elsize/sizeof(@type@); - @type@ *vp; - npy_intp *pl = tosort; - npy_intp *pr = tosort + num - 1; - npy_intp *stack[PYA_QS_STACK]; - npy_intp **sptr=stack; - npy_intp *pm, *pi, *pj, *pk, vi; - - for (;;) { - while ((pr - pl) > SMALL_QUICKSORT) { - /* quicksort partition */ - pm = pl + ((pr - pl) >> 1); - if (@TYPE@_LT(v + (*pm)*len, v + (*pl)*len, len)) INTP_SWAP(*pm, *pl); - if (@TYPE@_LT(v + (*pr)*len, v + (*pm)*len, len)) INTP_SWAP(*pr, 
*pm); - if (@TYPE@_LT(v + (*pm)*len, v + (*pl)*len, len)) INTP_SWAP(*pm, *pl); - vp = v + (*pm)*len; - pi = pl; - pj = pr - 1; - INTP_SWAP(*pm,*pj); - for (;;) { - do ++pi; while (@TYPE@_LT(v + (*pi)*len, vp, len)); - do --pj; while (@TYPE@_LT(vp, v + (*pj)*len, len)); - if (pi >= pj) { - break; - } - INTP_SWAP(*pi,*pj); - } - pk = pr - 1; - INTP_SWAP(*pi,*pk); - /* push largest partition on stack */ - if (pi - pl < pr - pi) { - *sptr++ = pi + 1; - *sptr++ = pr; - pr = pi - 1; - } - else { - *sptr++ = pl; - *sptr++ = pi - 1; - pl = pi + 1; - } - } - - /* insertion sort */ - for (pi = pl + 1; pi <= pr; ++pi) { - vi = *pi; - vp = v + vi*len; - pj = pi; - pk = pi - 1; - while (pj > pl && @TYPE@_LT(vp, v + (*pk)*len, len)) { - *pj-- = *pk--; - } - *pj = vi; - } - if (sptr == stack) { - break; - } - pr = *(--sptr); - pl = *(--sptr); - } - - return 0; -} - - -static void -@TYPE@_amergesort0(npy_intp *pl, npy_intp *pr, @type@ *v, npy_intp *pw, int len) -{ - @type@ *vp; - npy_intp vi, *pi, *pj, *pk, *pm; - - if (pr - pl > SMALL_MERGESORT) { - /* merge sort */ - pm = pl + ((pr - pl) >> 1); - @TYPE@_amergesort0(pl,pm,v,pw,len); - @TYPE@_amergesort0(pm,pr,v,pw,len); - for (pi = pw, pj = pl; pj < pm;) { - *pi++ = *pj++; - } - pj = pw; - pk = pl; - while (pj < pi && pm < pr) { - if (@TYPE@_LT(v + (*pm)*len, v + (*pj)*len, len)) { - *pk = *pm++; - } else { - *pk = *pj++; - } - pk++; - } - while (pj < pi) { - *pk++ = *pj++; - } - } else { - /* insertion sort */ - for (pi = pl + 1; pi < pr; ++pi) { - vi = *pi; - vp = v + vi*len; - pj = pi; - pk = pi -1; - while (pj > pl && @TYPE@_LT(vp, v + (*pk)*len, len)) { - *pj-- = *pk--; - } - *pj = vi; - } - } -} - - -static int -@TYPE@_amergesort(@type@ *v, npy_intp *tosort, npy_intp num, PyArrayObject *arr) -{ - const size_t elsize = arr->descr->elsize; - const size_t len = elsize / sizeof(@type@); - npy_intp *pl, *pr, *pw; - - pl = tosort; - pr = pl + num; - pw = PyDimMem_NEW(num/2); - if (!pw) { - PyErr_NoMemory(); - return -1; - } - 
@TYPE@_amergesort0(pl, pr, v, pw, len); - - PyDimMem_FREE(pw); - return 0; -} -/**end repeat**/ - -static void -add_sortfuncs(void) -{ - PyArray_Descr *descr; - - /**begin repeat - * - * #TYPE = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, - * CFLOAT, CDOUBLE, CLONGDOUBLE, STRING, UNICODE# - */ - descr = PyArray_DescrFromType(PyArray_@TYPE@); - descr->f->sort[PyArray_QUICKSORT] = - (PyArray_SortFunc *)@TYPE@_quicksort; - descr->f->argsort[PyArray_QUICKSORT] = - (PyArray_ArgSortFunc *)@TYPE@_aquicksort; - descr->f->sort[PyArray_HEAPSORT] = - (PyArray_SortFunc *)@TYPE@_heapsort; - descr->f->argsort[PyArray_HEAPSORT] = - (PyArray_ArgSortFunc *)@TYPE@_aheapsort; - descr->f->sort[PyArray_MERGESORT] = - (PyArray_SortFunc *)@TYPE@_mergesort; - descr->f->argsort[PyArray_MERGESORT] = - (PyArray_ArgSortFunc *)@TYPE@_amergesort; - /**end repeat**/ - -} - -static struct PyMethodDef methods[] = { - {NULL, NULL, 0, NULL} -}; - - -#if defined(NPY_PY3K) -static struct PyModuleDef moduledef = { - PyModuleDef_HEAD_INIT, - "_sort", - NULL, - -1, - methods, - NULL, - NULL, - NULL, - NULL -}; -#endif - -/* Initialization function for the module */ -#if defined(NPY_PY3K) -PyObject *PyInit__sort(void) { - PyObject *m; - m = PyModule_Create(&moduledef); - if (!m) { - return NULL; - } - import_array(); - add_sortfuncs(); - return m; -} -#else -PyMODINIT_FUNC -init_sort(void) { - Py_InitModule("_sort", methods); - - import_array(); - add_sortfuncs(); -} -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/arrayobject.c b/pythonPackages/numpy/numpy/core/src/multiarray/arrayobject.c deleted file mode 100755 index e2accf058a..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/arrayobject.c +++ /dev/null @@ -1,1338 +0,0 @@ -/* - Provide multidimensional arrays as a basic object type in python. 
- - Based on Original Numeric implementation - Copyright (c) 1995, 1996, 1997 Jim Hugunin, hugunin@mit.edu - - with contributions from many Numeric Python developers 1995-2004 - - Heavily modified in 2005 with inspiration from Numarray - - by - - Travis Oliphant, oliphant@ee.byu.edu - Brigham Young University - - -maintainer email: oliphant.travis@ieee.org - - Numarray design (which provided guidance) by - Space Science Telescope Institute - (J. Todd Miller, Perry Greenfield, Rick White) -*/ -#define PY_SSIZE_T_CLEAN -#include <Python.h> -#include "structmember.h" - -/*#include */ -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "common.h" - -#include "number.h" -#include "usertypes.h" -#include "arraytypes.h" -#include "scalartypes.h" -#include "arrayobject.h" -#include "ctors.h" -#include "methods.h" -#include "descriptor.h" -#include "iterators.h" -#include "mapping.h" -#include "getset.h" -#include "sequence.h" -#include "buffer.h" - -/*NUMPY_API - Compute the size of an array (in number of items) -*/ -NPY_NO_EXPORT intp -PyArray_Size(PyObject *op) -{ - if (PyArray_Check(op)) { - return PyArray_SIZE((PyArrayObject *)op); - } - else { - return 0; - } -} - -/*NUMPY_API*/ -NPY_NO_EXPORT int -PyArray_CopyObject(PyArrayObject *dest, PyObject *src_object) -{ - PyArrayObject *src; - PyObject *r; - int ret; - - /* - * Special code to mimic Numeric behavior for - * character arrays. 
- */ - if (dest->descr->type == PyArray_CHARLTR && dest->nd > 0 \ - && PyString_Check(src_object)) { - intp n_new, n_old; - char *new_string; - PyObject *tmp; - - n_new = dest->dimensions[dest->nd-1]; - n_old = PyString_Size(src_object); - if (n_new > n_old) { - new_string = (char *)malloc(n_new); - memmove(new_string, PyString_AS_STRING(src_object), n_old); - memset(new_string + n_old, ' ', n_new - n_old); - tmp = PyString_FromStringAndSize(new_string, n_new); - free(new_string); - src_object = tmp; - } - } - - if (PyArray_Check(src_object)) { - src = (PyArrayObject *)src_object; - Py_INCREF(src); - } - else if (!PyArray_IsScalar(src_object, Generic) && - PyArray_HasArrayInterface(src_object, r)) { - src = (PyArrayObject *)r; - } - else { - PyArray_Descr* dtype; - dtype = dest->descr; - Py_INCREF(dtype); - src = (PyArrayObject *)PyArray_FromAny(src_object, dtype, 0, - dest->nd, - FORTRAN_IF(dest), - NULL); - } - if (src == NULL) { - return -1; - } - - ret = PyArray_MoveInto(dest, src); - Py_DECREF(src); - return ret; -} - - -/* returns an Array-Scalar Object of the type of arr - from the given pointer to memory -- main Scalar creation function - default new method calls this. -*/ - -/* Ideally, here the descriptor would contain all the information needed. - So, that we simply need the data and the descriptor, and perhaps - a flag -*/ - - -/* - Given a string return the type-number for - the data-type with that string as the type-object name. - Returns PyArray_NOTYPE without setting an error if no type can be - found. Only works for user-defined data-types. 
-*/ - -/*NUMPY_API - */ -NPY_NO_EXPORT int -PyArray_TypeNumFromName(char *str) -{ - int i; - PyArray_Descr *descr; - - for (i = 0; i < NPY_NUMUSERTYPES; i++) { - descr = userdescrs[i]; - if (strcmp(descr->typeobj->tp_name, str) == 0) { - return descr->type_num; - } - } - return PyArray_NOTYPE; -} - -/*********************** end C-API functions **********************/ - -/* array object functions */ - -static void -array_dealloc(PyArrayObject *self) { - - _array_dealloc_buffer_info(self); - - if (self->weakreflist != NULL) { - PyObject_ClearWeakRefs((PyObject *)self); - } - if (self->base) { - /* - * UPDATEIFCOPY means that base points to an - * array that should be updated with the contents - * of this array upon destruction. - * self->base->flags must have been WRITEABLE - * (checked previously) and it was locked here - * thus, unlock it. - */ - if (self->flags & UPDATEIFCOPY) { - ((PyArrayObject *)self->base)->flags |= WRITEABLE; - Py_INCREF(self); /* hold on to self in next call */ - if (PyArray_CopyAnyInto((PyArrayObject *)self->base, self) < 0) { - PyErr_Print(); - PyErr_Clear(); - } - /* - * Don't need to DECREF -- because we are deleting - *self already... - */ - } - /* - * In any case base is pointing to something that we need - * to DECREF -- either a view or a buffer object - */ - Py_DECREF(self->base); - } - - if ((self->flags & OWNDATA) && self->data) { - /* Free internal references if an Object array */ - if (PyDataType_FLAGCHK(self->descr, NPY_ITEM_REFCOUNT)) { - Py_INCREF(self); /*hold on to self */ - PyArray_XDECREF(self); - /* - * Don't need to DECREF -- because we are deleting - * self already... 
- */ - } - PyDataMem_FREE(self->data); - } - - PyDimMem_FREE(self->dimensions); - Py_DECREF(self->descr); - Py_TYPE(self)->tp_free((PyObject *)self); -} - -static int -dump_data(char **string, int *n, int *max_n, char *data, int nd, - intp *dimensions, intp *strides, PyArrayObject* self) -{ - PyArray_Descr *descr=self->descr; - PyObject *op, *sp; - char *ostring; - intp i, N; - -#define CHECK_MEMORY do { if (*n >= *max_n-16) { \ - *max_n *= 2; \ - *string = (char *)_pya_realloc(*string, *max_n); \ - }} while (0) - - if (nd == 0) { - if ((op = descr->f->getitem(data, self)) == NULL) { - return -1; - } - sp = PyObject_Repr(op); - if (sp == NULL) { - Py_DECREF(op); - return -1; - } - ostring = PyString_AsString(sp); - N = PyString_Size(sp)*sizeof(char); - *n += N; - CHECK_MEMORY; - memmove(*string + (*n - N), ostring, N); - Py_DECREF(sp); - Py_DECREF(op); - return 0; - } - else { - CHECK_MEMORY; - (*string)[*n] = '['; - *n += 1; - for (i = 0; i < dimensions[0]; i++) { - if (dump_data(string, n, max_n, - data + (*strides)*i, - nd - 1, dimensions + 1, - strides + 1, self) < 0) { - return -1; - } - CHECK_MEMORY; - if (i < dimensions[0] - 1) { - (*string)[*n] = ','; - (*string)[*n+1] = ' '; - *n += 2; - } - } - CHECK_MEMORY; - (*string)[*n] = ']'; - *n += 1; - return 0; - } - -#undef CHECK_MEMORY -} - -static PyObject * -array_repr_builtin(PyArrayObject *self, int repr) -{ - PyObject *ret; - char *string; - int n, max_n; - - max_n = PyArray_NBYTES(self)*4*sizeof(char) + 7; - - if ((string = (char *)_pya_malloc(max_n)) == NULL) { - PyErr_SetString(PyExc_MemoryError, "out of memory"); - return NULL; - } - - if (repr) { - n = 6; - sprintf(string, "array("); - } - else { - n = 0; - } - if (dump_data(&string, &n, &max_n, self->data, - self->nd, self->dimensions, - self->strides, self) < 0) { - _pya_free(string); - return NULL; - } - - if (repr) { - if (PyArray_ISEXTENDED(self)) { - char buf[100]; - PyOS_snprintf(buf, sizeof(buf), "%d", self->descr->elsize); - sprintf(string+n, 
", '%c%s')", self->descr->type, buf); - ret = PyUString_FromStringAndSize(string, n + 6 + strlen(buf)); - } - else { - sprintf(string+n, ", '%c')", self->descr->type); - ret = PyUString_FromStringAndSize(string, n+6); - } - } - else { - ret = PyUString_FromStringAndSize(string, n); - } - - _pya_free(string); - return ret; -} - -static PyObject *PyArray_StrFunction = NULL; -static PyObject *PyArray_ReprFunction = NULL; - -/*NUMPY_API - * Set the array print function to be a Python function. - */ -NPY_NO_EXPORT void -PyArray_SetStringFunction(PyObject *op, int repr) -{ - if (repr) { - /* Dispose of previous callback */ - Py_XDECREF(PyArray_ReprFunction); - /* Add a reference to new callback */ - Py_XINCREF(op); - /* Remember new callback */ - PyArray_ReprFunction = op; - } - else { - /* Dispose of previous callback */ - Py_XDECREF(PyArray_StrFunction); - /* Add a reference to new callback */ - Py_XINCREF(op); - /* Remember new callback */ - PyArray_StrFunction = op; - } -} - -static PyObject * -array_repr(PyArrayObject *self) -{ - PyObject *s, *arglist; - - if (PyArray_ReprFunction == NULL) { - s = array_repr_builtin(self, 1); - } - else { - arglist = Py_BuildValue("(O)", self); - s = PyEval_CallObject(PyArray_ReprFunction, arglist); - Py_DECREF(arglist); - } - return s; -} - -static PyObject * -array_str(PyArrayObject *self) -{ - PyObject *s, *arglist; - - if (PyArray_StrFunction == NULL) { - s = array_repr_builtin(self, 0); - } - else { - arglist = Py_BuildValue("(O)", self); - s = PyEval_CallObject(PyArray_StrFunction, arglist); - Py_DECREF(arglist); - } - return s; -} - - - -/*NUMPY_API - */ -NPY_NO_EXPORT int -PyArray_CompareUCS4(npy_ucs4 *s1, npy_ucs4 *s2, size_t len) -{ - PyArray_UCS4 c1, c2; - while(len-- > 0) { - c1 = *s1++; - c2 = *s2++; - if (c1 != c2) { - return (c1 < c2) ? 
-1 : 1; - } - } - return 0; -} - -/*NUMPY_API - */ -NPY_NO_EXPORT int -PyArray_CompareString(char *s1, char *s2, size_t len) -{ - const unsigned char *c1 = (unsigned char *)s1; - const unsigned char *c2 = (unsigned char *)s2; - size_t i; - - for(i = 0; i < len; ++i) { - if (c1[i] != c2[i]) { - return (c1[i] > c2[i]) ? 1 : -1; - } - } - return 0; -} - - -/* This also handles possibly mis-aligned data */ -/* Compare s1 and s2 which are not necessarily NULL-terminated. - s1 is of length len1 - s2 is of length len2 - If they are NULL terminated, then stop comparison. -*/ -static int -_myunincmp(PyArray_UCS4 *s1, PyArray_UCS4 *s2, int len1, int len2) -{ - PyArray_UCS4 *sptr; - PyArray_UCS4 *s1t=s1, *s2t=s2; - int val; - intp size; - int diff; - - if ((intp)s1 % sizeof(PyArray_UCS4) != 0) { - size = len1*sizeof(PyArray_UCS4); - s1t = malloc(size); - memcpy(s1t, s1, size); - } - if ((intp)s2 % sizeof(PyArray_UCS4) != 0) { - size = len2*sizeof(PyArray_UCS4); - s2t = malloc(size); - memcpy(s2t, s2, size); - } - val = PyArray_CompareUCS4(s1t, s2t, MIN(len1,len2)); - if ((val != 0) || (len1 == len2)) { - goto finish; - } - if (len2 > len1) { - sptr = s2t+len1; - val = -1; - diff = len2-len1; - } - else { - sptr = s1t+len2; - val = 1; - diff=len1-len2; - } - while (diff--) { - if (*sptr != 0) { - goto finish; - } - sptr++; - } - val = 0; - - finish: - if (s1t != s1) { - free(s1t); - } - if (s2t != s2) { - free(s2t); - } - return val; -} - - - - -/* - * Compare s1 and s2 which are not necessarily NULL-terminated. - * s1 is of length len1 - * s2 is of length len2 - * If they are NULL terminated, then stop comparison. 
- */ -static int -_mystrncmp(char *s1, char *s2, int len1, int len2) -{ - char *sptr; - int val; - int diff; - - val = memcmp(s1, s2, MIN(len1, len2)); - if ((val != 0) || (len1 == len2)) { - return val; - } - if (len2 > len1) { - sptr = s2 + len1; - val = -1; - diff = len2 - len1; - } - else { - sptr = s1 + len2; - val = 1; - diff = len1 - len2; - } - while (diff--) { - if (*sptr != 0) { - return val; - } - sptr++; - } - return 0; /* Only happens if NULLs are everywhere */ -} - -/* Borrowed from Numarray */ - -#define SMALL_STRING 2048 - -#if defined(isspace) -#undef isspace -#define isspace(c) ((c==' ')||(c=='\t')||(c=='\n')||(c=='\r')||(c=='\v')||(c=='\f')) -#endif - -static void _rstripw(char *s, int n) -{ - int i; - for (i = n - 1; i >= 1; i--) { /* Never strip to length 0. */ - int c = s[i]; - - if (!c || isspace(c)) { - s[i] = 0; - } - else { - break; - } - } -} - -static void _unistripw(PyArray_UCS4 *s, int n) -{ - int i; - for (i = n - 1; i >= 1; i--) { /* Never strip to length 0. 
*/ - PyArray_UCS4 c = s[i]; - if (!c || isspace(c)) { - s[i] = 0; - } - else { - break; - } - } -} - - -static char * -_char_copy_n_strip(char *original, char *temp, int nc) -{ - if (nc > SMALL_STRING) { - temp = malloc(nc); - if (!temp) { - PyErr_NoMemory(); - return NULL; - } - } - memcpy(temp, original, nc); - _rstripw(temp, nc); - return temp; -} - -static void -_char_release(char *ptr, int nc) -{ - if (nc > SMALL_STRING) { - free(ptr); - } -} - -static char * -_uni_copy_n_strip(char *original, char *temp, int nc) -{ - if (nc*sizeof(PyArray_UCS4) > SMALL_STRING) { - temp = malloc(nc*sizeof(PyArray_UCS4)); - if (!temp) { - PyErr_NoMemory(); - return NULL; - } - } - memcpy(temp, original, nc*sizeof(PyArray_UCS4)); - _unistripw((PyArray_UCS4 *)temp, nc); - return temp; -} - -static void -_uni_release(char *ptr, int nc) -{ - if (nc*sizeof(PyArray_UCS4) > SMALL_STRING) { - free(ptr); - } -} - - -/* End borrowed from numarray */ - -#define _rstrip_loop(CMP) { \ - void *aptr, *bptr; \ - char atemp[SMALL_STRING], btemp[SMALL_STRING]; \ - while(size--) { \ - aptr = stripfunc(iself->dataptr, atemp, N1); \ - if (!aptr) return -1; \ - bptr = stripfunc(iother->dataptr, btemp, N2); \ - if (!bptr) { \ - relfunc(aptr, N1); \ - return -1; \ - } \ - val = cmpfunc(aptr, bptr, N1, N2); \ - *dptr = (val CMP 0); \ - PyArray_ITER_NEXT(iself); \ - PyArray_ITER_NEXT(iother); \ - dptr += 1; \ - relfunc(aptr, N1); \ - relfunc(bptr, N2); \ - } \ - } - -#define _reg_loop(CMP) { \ - while(size--) { \ - val = cmpfunc((void *)iself->dataptr, \ - (void *)iother->dataptr, \ - N1, N2); \ - *dptr = (val CMP 0); \ - PyArray_ITER_NEXT(iself); \ - PyArray_ITER_NEXT(iother); \ - dptr += 1; \ - } \ - } - -#define _loop(CMP) if (rstrip) _rstrip_loop(CMP) \ - else _reg_loop(CMP) - -static int -_compare_strings(PyObject *result, PyArrayMultiIterObject *multi, - int cmp_op, void *func, int rstrip) -{ - PyArrayIterObject *iself, *iother; - Bool *dptr; - intp size; - int val; - int N1, N2; - int 
(*cmpfunc)(void *, void *, int, int); - void (*relfunc)(char *, int); - char* (*stripfunc)(char *, char *, int); - - cmpfunc = func; - dptr = (Bool *)PyArray_DATA(result); - iself = multi->iters[0]; - iother = multi->iters[1]; - size = multi->size; - N1 = iself->ao->descr->elsize; - N2 = iother->ao->descr->elsize; - if ((void *)cmpfunc == (void *)_myunincmp) { - N1 >>= 2; - N2 >>= 2; - stripfunc = _uni_copy_n_strip; - relfunc = _uni_release; - } - else { - stripfunc = _char_copy_n_strip; - relfunc = _char_release; - } - switch (cmp_op) { - case Py_EQ: - _loop(==) - break; - case Py_NE: - _loop(!=) - break; - case Py_LT: - _loop(<) - break; - case Py_LE: - _loop(<=) - break; - case Py_GT: - _loop(>) - break; - case Py_GE: - _loop(>=) - break; - default: - PyErr_SetString(PyExc_RuntimeError, "bad comparison operator"); - return -1; - } - return 0; -} - -#undef _loop -#undef _reg_loop -#undef _rstrip_loop -#undef SMALL_STRING - -NPY_NO_EXPORT PyObject * -_strings_richcompare(PyArrayObject *self, PyArrayObject *other, int cmp_op, - int rstrip) -{ - PyObject *result; - PyArrayMultiIterObject *mit; - int val; - - /* Cast arrays to a common type */ - if (self->descr->type_num != other->descr->type_num) { -#if defined(NPY_PY3K) - /* - * Comparison between Bytes and Unicode is not defined in Py3K; - * we follow. 
- */ - result = Py_NotImplemented; - Py_INCREF(result); - return result; -#else - PyObject *new; - if (self->descr->type_num == PyArray_STRING && - other->descr->type_num == PyArray_UNICODE) { - PyArray_Descr* unicode = PyArray_DescrNew(other->descr); - unicode->elsize = self->descr->elsize << 2; - new = PyArray_FromAny((PyObject *)self, unicode, - 0, 0, 0, NULL); - if (new == NULL) { - return NULL; - } - Py_INCREF(other); - self = (PyArrayObject *)new; - } - else if (self->descr->type_num == PyArray_UNICODE && - other->descr->type_num == PyArray_STRING) { - PyArray_Descr* unicode = PyArray_DescrNew(self->descr); - unicode->elsize = other->descr->elsize << 2; - new = PyArray_FromAny((PyObject *)other, unicode, - 0, 0, 0, NULL); - if (new == NULL) { - return NULL; - } - Py_INCREF(self); - other = (PyArrayObject *)new; - } - else { - PyErr_SetString(PyExc_TypeError, - "invalid string data-types " - "in comparison"); - return NULL; - } -#endif - } - else { - Py_INCREF(self); - Py_INCREF(other); - } - - /* Broad-cast the arrays to a common shape */ - mit = (PyArrayMultiIterObject *)PyArray_MultiIterNew(2, self, other); - Py_DECREF(self); - Py_DECREF(other); - if (mit == NULL) { - return NULL; - } - - result = PyArray_NewFromDescr(&PyArray_Type, - PyArray_DescrFromType(PyArray_BOOL), - mit->nd, - mit->dimensions, - NULL, NULL, 0, - NULL); - if (result == NULL) { - goto finish; - } - - if (self->descr->type_num == PyArray_UNICODE) { - val = _compare_strings(result, mit, cmp_op, _myunincmp, rstrip); - } - else { - val = _compare_strings(result, mit, cmp_op, _mystrncmp, rstrip); - } - - if (val < 0) { - Py_DECREF(result); result = NULL; - } - - finish: - Py_DECREF(mit); - return result; -} - -/* - * VOID-type arrays can only be compared equal and not-equal - * in which case the fields are all compared by extracting the fields - * and testing one at a time... - * equality testing is performed using logical_ands on all the fields. 
- * in-equality testing is performed using logical_ors on all the fields. - * - * VOID-type arrays without fields are compared for equality by comparing their - * memory at each location directly (using string-code). - */ -static PyObject * -_void_compare(PyArrayObject *self, PyArrayObject *other, int cmp_op) -{ - if (!(cmp_op == Py_EQ || cmp_op == Py_NE)) { - PyErr_SetString(PyExc_ValueError, - "Void-arrays can only be compared for equality."); - return NULL; - } - if (PyArray_HASFIELDS(self)) { - PyObject *res = NULL, *temp, *a, *b; - PyObject *key, *value, *temp2; - PyObject *op; - Py_ssize_t pos = 0; - - op = (cmp_op == Py_EQ ? n_ops.logical_and : n_ops.logical_or); - while (PyDict_Next(self->descr->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) { - continue; - } - a = PyArray_EnsureAnyArray(array_subscript(self, key)); - if (a == NULL) { - Py_XDECREF(res); - return NULL; - } - b = array_subscript(other, key); - if (b == NULL) { - Py_XDECREF(res); - Py_DECREF(a); - return NULL; - } - temp = array_richcompare((PyArrayObject *)a,b,cmp_op); - Py_DECREF(a); - Py_DECREF(b); - if (temp == NULL) { - Py_XDECREF(res); - return NULL; - } - if (res == NULL) { - res = temp; - } - else { - temp2 = PyObject_CallFunction(op, "OO", res, temp); - Py_DECREF(temp); - Py_DECREF(res); - if (temp2 == NULL) { - return NULL; - } - res = temp2; - } - } - if (res == NULL && !PyErr_Occurred()) { - PyErr_SetString(PyExc_ValueError, "No fields found."); - } - return res; - } - else { - /* - * compare as a string. 
Assumes self and - * other have same descr->type - */ - return _strings_richcompare(self, other, cmp_op, 0); - } -} - -NPY_NO_EXPORT PyObject * -array_richcompare(PyArrayObject *self, PyObject *other, int cmp_op) -{ - PyObject *array_other, *result = NULL; - int typenum; - - switch (cmp_op) { - case Py_LT: - result = PyArray_GenericBinaryFunction(self, other, - n_ops.less); - break; - case Py_LE: - result = PyArray_GenericBinaryFunction(self, other, - n_ops.less_equal); - break; - case Py_EQ: - if (other == Py_None) { - Py_INCREF(Py_False); - return Py_False; - } - /* Try to convert other to an array */ - if (!PyArray_Check(other)) { - typenum = self->descr->type_num; - if (typenum != PyArray_OBJECT) { - typenum = PyArray_NOTYPE; - } - array_other = PyArray_FromObject(other, - typenum, 0, 0); - /* - * If not successful, then return False. This fixes code - * that used to allow equality comparisons between arrays - * and other objects which would give a result of False. - */ - if ((array_other == NULL) || - (array_other == Py_None)) { - Py_XDECREF(array_other); - PyErr_Clear(); - Py_INCREF(Py_False); - return Py_False; - } - } - else { - Py_INCREF(other); - array_other = other; - } - result = PyArray_GenericBinaryFunction(self, - array_other, - n_ops.equal); - if ((result == Py_NotImplemented) && - (self->descr->type_num == PyArray_VOID)) { - int _res; - - _res = PyObject_RichCompareBool - ((PyObject *)self->descr, - (PyObject *)\ - PyArray_DESCR(array_other), - Py_EQ); - if (_res < 0) { - Py_DECREF(result); - Py_DECREF(array_other); - return NULL; - } - if (_res) { - Py_DECREF(result); - result = _void_compare - (self, - (PyArrayObject *)array_other, - cmp_op); - Py_DECREF(array_other); - } - return result; - } - /* - * If the comparison results in NULL, then the - * two array objects can not be compared together so - * return zero - */ - Py_DECREF(array_other); - if (result == NULL) { - PyErr_Clear(); - Py_INCREF(Py_False); - return Py_False; - } - break; - case 
Py_NE: - if (other == Py_None) { - Py_INCREF(Py_True); - return Py_True; - } - /* Try to convert other to an array */ - if (!PyArray_Check(other)) { - typenum = self->descr->type_num; - if (typenum != PyArray_OBJECT) { - typenum = PyArray_NOTYPE; - } - array_other = PyArray_FromObject(other, typenum, 0, 0); - /* - * If not successful, then objects cannot be - * compared and cannot be equal, therefore, - * return True; - */ - if ((array_other == NULL) || (array_other == Py_None)) { - Py_XDECREF(array_other); - PyErr_Clear(); - Py_INCREF(Py_True); - return Py_True; - } - } - else { - Py_INCREF(other); - array_other = other; - } - result = PyArray_GenericBinaryFunction(self, - array_other, - n_ops.not_equal); - if ((result == Py_NotImplemented) && - (self->descr->type_num == PyArray_VOID)) { - int _res; - - _res = PyObject_RichCompareBool( - (PyObject *)self->descr, - (PyObject *) - PyArray_DESCR(array_other), - Py_EQ); - if (_res < 0) { - Py_DECREF(result); - Py_DECREF(array_other); - return NULL; - } - if (_res) { - Py_DECREF(result); - result = _void_compare( - self, - (PyArrayObject *)array_other, - cmp_op); - Py_DECREF(array_other); - } - return result; - } - - Py_DECREF(array_other); - if (result == NULL) { - PyErr_Clear(); - Py_INCREF(Py_True); - return Py_True; - } - break; - case Py_GT: - result = PyArray_GenericBinaryFunction(self, other, - n_ops.greater); - break; - case Py_GE: - result = PyArray_GenericBinaryFunction(self, other, - n_ops.greater_equal); - break; - default: - result = Py_NotImplemented; - Py_INCREF(result); - } - if (result == Py_NotImplemented) { - /* Try to handle string comparisons */ - if (self->descr->type_num == PyArray_OBJECT) { - return result; - } - array_other = PyArray_FromObject(other,PyArray_NOTYPE, 0, 0); - if (PyArray_ISSTRING(self) && PyArray_ISSTRING(array_other)) { - Py_DECREF(result); - result = _strings_richcompare(self, (PyArrayObject *) - array_other, cmp_op, 0); - } - Py_DECREF(array_other); - } - return result; -} - 
-/*NUMPY_API - */ -NPY_NO_EXPORT int -PyArray_ElementStrides(PyObject *arr) -{ - int itemsize = PyArray_ITEMSIZE(arr); - int i, N = PyArray_NDIM(arr); - intp *strides = PyArray_STRIDES(arr); - - for (i = 0; i < N; i++) { - if ((strides[i] % itemsize) != 0) { - return 0; - } - } - return 1; -} - -/* - * This routine checks to see if newstrides (of length nd) will not - * ever be able to walk outside of the memory implied numbytes and offset. - * - * The available memory is assumed to start at -offset and proceed - * to numbytes-offset. The strides are checked to ensure - * that accessing memory using striding will not try to reach beyond - * this memory for any of the axes. - * - * If numbytes is 0 it will be calculated using the dimensions and - * element-size. - * - * This function checks for walking beyond the beginning and right-end - * of the buffer and therefore works for any integer stride (positive - * or negative). - */ - -/*NUMPY_API*/ -NPY_NO_EXPORT Bool -PyArray_CheckStrides(int elsize, int nd, intp numbytes, intp offset, - intp *dims, intp *newstrides) -{ - int i; - intp byte_begin; - intp begin; - intp end; - - if (numbytes == 0) { - numbytes = PyArray_MultiplyList(dims, nd) * elsize; - } - begin = -offset; - end = numbytes - offset - elsize; - for (i = 0; i < nd; i++) { - byte_begin = newstrides[i]*(dims[i] - 1); - if ((byte_begin < begin) || (byte_begin > end)) { - return FALSE; - } - } - return TRUE; -} - - -static PyObject * -array_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds) -{ - static char *kwlist[] = {"shape", "dtype", "buffer", "offset", "strides", - "order", NULL}; - PyArray_Descr *descr = NULL; - int itemsize; - PyArray_Dims dims = {NULL, 0}; - PyArray_Dims strides = {NULL, 0}; - PyArray_Chunk buffer; - longlong offset = 0; - NPY_ORDER order = PyArray_CORDER; - int fortran = 0; - PyArrayObject *ret; - - buffer.ptr = NULL; - /* - * Usually called with shape and type but can also be called with buffer, - * strides, and swapped 
info For now, let's just use this to create an - * empty, contiguous array of a specific type and shape. - */ - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O&|O&O&LO&O&", - kwlist, PyArray_IntpConverter, - &dims, - PyArray_DescrConverter, - &descr, - PyArray_BufferConverter, - &buffer, - &offset, - &PyArray_IntpConverter, - &strides, - &PyArray_OrderConverter, - &order)) { - goto fail; - } - if (order == PyArray_FORTRANORDER) { - fortran = 1; - } - if (descr == NULL) { - descr = PyArray_DescrFromType(PyArray_DEFAULT); - } - - itemsize = descr->elsize; - if (itemsize == 0) { - PyErr_SetString(PyExc_ValueError, - "data-type with unspecified variable length"); - goto fail; - } - - if (strides.ptr != NULL) { - intp nb, off; - if (strides.len != dims.len) { - PyErr_SetString(PyExc_ValueError, - "strides, if given, must be " \ - "the same length as shape"); - goto fail; - } - - if (buffer.ptr == NULL) { - nb = 0; - off = 0; - } - else { - nb = buffer.len; - off = (intp) offset; - } - - - if (!PyArray_CheckStrides(itemsize, dims.len, - nb, off, - dims.ptr, strides.ptr)) { - PyErr_SetString(PyExc_ValueError, - "strides is incompatible " \ - "with shape of requested " \ - "array and size of buffer"); - goto fail; - } - } - - if (buffer.ptr == NULL) { - ret = (PyArrayObject *) - PyArray_NewFromDescr(subtype, descr, - (int)dims.len, - dims.ptr, - strides.ptr, NULL, fortran, NULL); - if (ret == NULL) { - descr = NULL; - goto fail; - } - if (PyDataType_FLAGCHK(descr, NPY_ITEM_HASOBJECT)) { - /* place Py_None in object positions */ - PyArray_FillObjectArray(ret, Py_None); - if (PyErr_Occurred()) { - descr = NULL; - goto fail; - } - } - } - else { - /* buffer given -- use it */ - if (dims.len == 1 && dims.ptr[0] == -1) { - dims.ptr[0] = (buffer.len-(intp)offset) / itemsize; - } - else if ((strides.ptr == NULL) && - (buffer.len < (offset + (((intp)itemsize)* - PyArray_MultiplyList(dims.ptr, - dims.len))))) { - PyErr_SetString(PyExc_TypeError, - "buffer is too small for " \ - 
"requested array"); - goto fail; - } - /* get writeable and aligned */ - if (fortran) { - buffer.flags |= FORTRAN; - } - ret = (PyArrayObject *)\ - PyArray_NewFromDescr(subtype, descr, - dims.len, dims.ptr, - strides.ptr, - offset + (char *)buffer.ptr, - buffer.flags, NULL); - if (ret == NULL) { - descr = NULL; - goto fail; - } - PyArray_UpdateFlags(ret, UPDATE_ALL); - ret->base = buffer.base; - Py_INCREF(buffer.base); - } - - PyDimMem_FREE(dims.ptr); - if (strides.ptr) { - PyDimMem_FREE(strides.ptr); - } - return (PyObject *)ret; - - fail: - Py_XDECREF(descr); - if (dims.ptr) { - PyDimMem_FREE(dims.ptr); - } - if (strides.ptr) { - PyDimMem_FREE(strides.ptr); - } - return NULL; -} - - -static PyObject * -array_iter(PyArrayObject *arr) -{ - if (arr->nd == 0) { - PyErr_SetString(PyExc_TypeError, - "iteration over a 0-d array"); - return NULL; - } - return PySeqIter_New((PyObject *)arr); -} - -static PyObject * -array_alloc(PyTypeObject *type, Py_ssize_t NPY_UNUSED(nitems)) -{ - PyObject *obj; - /* nitems will always be 0 */ - obj = (PyObject *)_pya_malloc(sizeof(PyArrayObject)); - PyObject_Init(obj, type); - return obj; -} - - -NPY_NO_EXPORT PyTypeObject PyArray_Type = { -#if defined(NPY_PY3K) - PyVarObject_HEAD_INIT(NULL, 0) -#else - PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ -#endif - "numpy.ndarray", /* tp_name */ - sizeof(PyArrayObject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)array_dealloc, /* tp_dealloc */ - (printfunc)NULL, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ -#if defined(NPY_PY3K) - 0, /* tp_reserved */ -#else - 0, /* tp_compare */ -#endif - (reprfunc)array_repr, /* tp_repr */ - &array_as_number, /* tp_as_number */ - &array_as_sequence, /* tp_as_sequence */ - &array_as_mapping, /* tp_as_mapping */ - (hashfunc)0, /* tp_hash */ - (ternaryfunc)0, /* tp_call */ - (reprfunc)array_str, /* tp_str */ - (getattrofunc)0, /* tp_getattro */ - (setattrofunc)0, /* tp_setattro */ - &array_as_buffer, /* tp_as_buffer */ 
- (Py_TPFLAGS_DEFAULT -#if !defined(NPY_PY3K) - | Py_TPFLAGS_CHECKTYPES -#endif -#if (PY_VERSION_HEX >= 0x02060000) && (PY_VERSION_HEX < 0x03000000) - | Py_TPFLAGS_HAVE_NEWBUFFER -#endif - | Py_TPFLAGS_BASETYPE), /* tp_flags */ - 0, /* tp_doc */ - - (traverseproc)0, /* tp_traverse */ - (inquiry)0, /* tp_clear */ - (richcmpfunc)array_richcompare, /* tp_richcompare */ - offsetof(PyArrayObject, weakreflist), /* tp_weaklistoffset */ - (getiterfunc)array_iter, /* tp_iter */ - (iternextfunc)0, /* tp_iternext */ - array_methods, /* tp_methods */ - 0, /* tp_members */ - array_getsetlist, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - (initproc)0, /* tp_init */ - array_alloc, /* tp_alloc */ - (newfunc)array_new, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* tp_version_tag */ -#endif -}; diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/arrayobject.h b/pythonPackages/numpy/numpy/core/src/multiarray/arrayobject.h deleted file mode 100755 index ec33614357..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/arrayobject.h +++ /dev/null @@ -1,15 +0,0 @@ -#ifndef _NPY_INTERNAL_ARRAYOBJECT_H_ -#define _NPY_INTERNAL_ARRAYOBJECT_H_ - -#ifndef _MULTIARRAYMODULE -#error You should not include this -#endif - -NPY_NO_EXPORT PyObject * -_strings_richcompare(PyArrayObject *self, PyArrayObject *other, int cmp_op, - int rstrip); - -NPY_NO_EXPORT PyObject * -array_richcompare(PyArrayObject *self, PyObject *other, int cmp_op); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/arraytypes.c.src b/pythonPackages/numpy/numpy/core/src/multiarray/arraytypes.c.src deleted file mode 100755 index c20aef08ce..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/arraytypes.c.src +++ /dev/null @@ 
-1,3326 +0,0 @@ -/* -*- c -*- */ -#define PY_SSIZE_T_CLEAN -#include "Python.h" -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "numpy/npy_3kcompat.h" - -#include "numpy/npy_math.h" - -#include "common.h" -#include "ctors.h" -#include "usertypes.h" -#include "npy_config.h" - -#include "numpyos.h" - -/* - ***************************************************************************** - ** PYTHON TYPES TO C TYPES ** - ***************************************************************************** - */ - -static double -MyPyFloat_AsDouble(PyObject *obj) -{ - double ret = 0; - PyObject *num; - - if (obj == Py_None) { - return NPY_NAN; - } - num = PyNumber_Float(obj); - if (num == NULL) { - return NPY_NAN; - } - ret = PyFloat_AsDouble(num); - Py_DECREF(num); - return ret; -} - - -/**begin repeat - * #type = long, longlong# - * #Type = Long, LongLong# - */ -static @type@ -MyPyLong_As@Type@ (PyObject *obj) -{ - @type@ ret; - PyObject *num = PyNumber_Long(obj); - - if (num == NULL) { - return -1; - } - ret = PyLong_As@Type@(num); - Py_DECREF(num); - return ret; -} - -static u@type@ -MyPyLong_AsUnsigned@Type@ (PyObject *obj) -{ - u@type@ ret; - PyObject *num = PyNumber_Long(obj); - - if (num == NULL) { - return -1; - } - ret = PyLong_AsUnsigned@Type@(num); - if (PyErr_Occurred()) { - PyErr_Clear(); - ret = PyLong_As@Type@(num); - } - Py_DECREF(num); - return ret; -} - -/**end repeat**/ - - -/* - ***************************************************************************** - ** GETITEM AND SETITEM ** - ***************************************************************************** - */ - - -static char * _SEQUENCE_MESSAGE = "error setting an array element with a sequence"; - -/**begin repeat - * - * #TYPE = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, LONG, UINT, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE# - * #func1 = PyBool_FromLong, PyInt_FromLong*6, 
PyLong_FromUnsignedLong*2, - * PyLong_FromLongLong, PyLong_FromUnsignedLongLong, - * PyFloat_FromDouble*2# - * #func2 = PyObject_IsTrue, MyPyLong_AsLong*6, MyPyLong_AsUnsignedLong*2, - * MyPyLong_AsLongLong, MyPyLong_AsUnsignedLongLong, - * MyPyFloat_AsDouble*2# - * #type = Bool, byte, ubyte, short, ushort, int, long, uint, ulong, - * longlong, ulonglong, float, double# - * #type1 = long*7, ulong*2, longlong, ulonglong, float, double# - * #kind = Bool, Byte, UByte, Short, UShort, Int, Long, UInt, ULong, - * LongLong, ULongLong, Float, Double# -*/ -static PyObject * -@TYPE@_getitem(char *ip, PyArrayObject *ap) { - @type@ t1; - - if ((ap == NULL) || PyArray_ISBEHAVED_RO(ap)) { - t1 = *((@type@ *)ip); - return @func1@((@type1@)t1); - } - else { - ap->descr->f->copyswap(&t1, ip, !PyArray_ISNOTSWAPPED(ap), ap); - return @func1@((@type1@)t1); - } -} - -static int -@TYPE@_setitem(PyObject *op, char *ov, PyArrayObject *ap) { - @type@ temp; /* ensures alignment */ - - if (PyArray_IsScalar(op, @kind@)) { - temp = ((Py@kind@ScalarObject *)op)->obval; - } - else { - temp = (@type@)@func2@(op); - } - if (PyErr_Occurred()) { - if (PySequence_Check(op)) { - PyErr_Clear(); - PyErr_SetString(PyExc_ValueError, - "setting an array element with a sequence."); - } - return -1; - } - if (ap == NULL || PyArray_ISBEHAVED(ap)) - *((@type@ *)ov)=temp; - else { - ap->descr->f->copyswap(ov, &temp, !PyArray_ISNOTSWAPPED(ap), ap); - } - return 0; -} - -/**end repeat**/ - -/**begin repeat - * - * #TYPE = CFLOAT, CDOUBLE# - * #type = float, double# - */ -static PyObject * -@TYPE@_getitem(char *ip, PyArrayObject *ap) { - @type@ t1, t2; - - if ((ap == NULL) || PyArray_ISBEHAVED_RO(ap)) { - return PyComplex_FromDoubles((double)((@type@ *)ip)[0], - (double)((@type@ *)ip)[1]); - } - else { - int size = sizeof(@type@); - Bool swap = !PyArray_ISNOTSWAPPED(ap); - copy_and_swap(&t1, ip, size, 1, 0, swap); - copy_and_swap(&t2, ip + size, size, 1, 0, swap); - return PyComplex_FromDoubles((double)t1, 
(double)t2); - } -} - -/**end repeat**/ - - - -/**begin repeat - * - * #TYPE = CFLOAT, CDOUBLE, CLONGDOUBLE# - * #type = float, double, longdouble# - * #kind = CFloat, CDouble, CLongDouble# - */ -static int -@TYPE@_setitem(PyObject *op, char *ov, PyArrayObject *ap) -{ - Py_complex oop; - PyObject *op2; - c@type@ temp; - int rsize; - - if (!(PyArray_IsScalar(op, @kind@))) { - if (PyArray_Check(op) && (PyArray_NDIM(op) == 0)) { - op2 = ((PyArrayObject *)op)->descr->f->getitem - (((PyArrayObject *)op)->data, (PyArrayObject *)op); - } - else { - op2 = op; Py_INCREF(op); - } - if (op2 == Py_None) { - oop.real = NPY_NAN; - oop.imag = NPY_NAN; - } - else { - oop = PyComplex_AsCComplex (op2); - } - Py_DECREF(op2); - if (PyErr_Occurred()) { - return -1; - } - temp.real = (@type@) oop.real; - temp.imag = (@type@) oop.imag; - } - else { - temp = ((Py@kind@ScalarObject *)op)->obval; - } - memcpy(ov, &temp, ap->descr->elsize); - if (!PyArray_ISNOTSWAPPED(ap)) { - byte_swap_vector(ov, 2, sizeof(@type@)); - } - rsize = sizeof(@type@); - copy_and_swap(ov, &temp, rsize, 2, rsize, !PyArray_ISNOTSWAPPED(ap)); - return 0; -} - -/**end repeat**/ - -/* - * These return array scalars, which are different from other data-types.
- */ - -static PyObject * -LONGDOUBLE_getitem(char *ip, PyArrayObject *ap) -{ - return PyArray_Scalar(ip, ap->descr, NULL); -} - -static int -LONGDOUBLE_setitem(PyObject *op, char *ov, PyArrayObject *ap) { - /* ensure alignment */ - longdouble temp; - - if (PyArray_IsScalar(op, LongDouble)) { - temp = ((PyLongDoubleScalarObject *)op)->obval; - } - else { - temp = (longdouble) MyPyFloat_AsDouble(op); - } - if (PyErr_Occurred()) { - return -1; - } - if (ap == NULL || PyArray_ISBEHAVED(ap)) { - *((longdouble *)ov) = temp; - } - else { - copy_and_swap(ov, &temp, ap->descr->elsize, 1, 0, - !PyArray_ISNOTSWAPPED(ap)); - } - return 0; -} - -static PyObject * -CLONGDOUBLE_getitem(char *ip, PyArrayObject *ap) -{ - return PyArray_Scalar(ip, ap->descr, NULL); -} - -/* UNICODE */ -static PyObject * -UNICODE_getitem(char *ip, PyArrayObject *ap) -{ - intp elsize = ap->descr->elsize; - intp mysize = elsize/sizeof(PyArray_UCS4); - int alloc = 0; - PyArray_UCS4 *buffer = NULL; - PyUnicodeObject *obj; - intp i; - - if (!PyArray_ISBEHAVED_RO(ap)) { - buffer = malloc(elsize); - if (buffer == NULL) { - PyErr_NoMemory(); - goto fail; - } - alloc = 1; - memcpy(buffer, ip, elsize); - if (!PyArray_ISNOTSWAPPED(ap)) { - byte_swap_vector(buffer, mysize, sizeof(PyArray_UCS4)); - } - } - else { - buffer = (PyArray_UCS4 *)ip; - } - for (i = mysize; i > 0 && buffer[--i] == 0; mysize = i); - -#ifdef Py_UNICODE_WIDE - obj = (PyUnicodeObject *)PyUnicode_FromUnicode(buffer, mysize); -#else - /* create new empty unicode object of length mysize*2 */ - obj = (PyUnicodeObject *)MyPyUnicode_New(mysize*2); - if (obj == NULL) { - goto fail; - } - mysize = PyUCS2Buffer_FromUCS4(obj->str, buffer, mysize); - /* reset length of unicode object to ucs2size */ - if (MyPyUnicode_Resize(obj, mysize) < 0) { - Py_DECREF(obj); - goto fail; - } -#endif - - if (alloc) { - free(buffer); - } - return (PyObject *)obj; - -fail: - if (alloc) { - free(buffer); - } - return NULL; -} - -static int -UNICODE_setitem(PyObject *op, 
char *ov, PyArrayObject *ap) -{ - PyObject *temp; - Py_UNICODE *ptr; - int datalen; -#ifndef Py_UNICODE_WIDE - char *buffer; -#endif - - if (!PyBytes_Check(op) && !PyUnicode_Check(op) && - PySequence_Check(op) && PySequence_Size(op) > 0) { - PyErr_SetString(PyExc_ValueError, - "setting an array element with a sequence"); - return -1; - } - /* Sequence_Size might have returned an error */ - if (PyErr_Occurred()) { - PyErr_Clear(); - } -#if defined(NPY_PY3K) - if (PyBytes_Check(op)) { - /* Try to decode from ASCII */ - temp = PyUnicode_FromEncodedObject(op, "ASCII", "strict"); - if (temp == NULL) { - return -1; - } - } - else if ((temp=PyObject_Str(op)) == NULL) { -#else - if ((temp=PyObject_Unicode(op)) == NULL) { -#endif - return -1; - } - ptr = PyUnicode_AS_UNICODE(temp); - if ((ptr == NULL) || (PyErr_Occurred())) { - Py_DECREF(temp); - return -1; - } - datalen = PyUnicode_GET_DATA_SIZE(temp); - -#ifdef Py_UNICODE_WIDE - memcpy(ov, ptr, MIN(ap->descr->elsize, datalen)); -#else - if (!PyArray_ISALIGNED(ap)) { - buffer = _pya_malloc(ap->descr->elsize); - if (buffer == NULL) { - Py_DECREF(temp); - PyErr_NoMemory(); - return -1; - } - } - else { - buffer = ov; - } - datalen = PyUCS2Buffer_AsUCS4(ptr, (PyArray_UCS4 *)buffer, - datalen >> 1, ap->descr->elsize >> 2); - datalen <<= 2; - if (!PyArray_ISALIGNED(ap)) { - memcpy(ov, buffer, datalen); - _pya_free(buffer); - } -#endif - /* Fill in the rest of the space with 0 */ - if (ap->descr->elsize > datalen) { - memset(ov + datalen, 0, (ap->descr->elsize - datalen)); - } - if (!PyArray_ISNOTSWAPPED(ap)) { - byte_swap_vector(ov, ap->descr->elsize >> 2, 4); - } - Py_DECREF(temp); - return 0; -} - -/* STRING - * - * can handle both NULL-terminated and not NULL-terminated cases - * will truncate all ending NULLs in returned string. 
- */ -static PyObject * -STRING_getitem(char *ip, PyArrayObject *ap) -{ - /* Will eliminate NULLs at the end */ - char *ptr; - int size = ap->descr->elsize; - - ptr = ip + size - 1; - while (*ptr-- == '\0' && size > 0) { - size--; - } - return PyBytes_FromStringAndSize(ip,size); -} - -static int -STRING_setitem(PyObject *op, char *ov, PyArrayObject *ap) -{ - char *ptr; - Py_ssize_t len; - PyObject *temp = NULL; - - if (!PyBytes_Check(op) && !PyUnicode_Check(op) - && PySequence_Check(op) && PySequence_Size(op) > 0) { - PyErr_SetString(PyExc_ValueError, - "setting an array element with a sequence"); - return -1; - } - /* Sequence_Size might have returned an error */ - if (PyErr_Occurred()) { - PyErr_Clear(); - } -#if defined(NPY_PY3K) - if (PyUnicode_Check(op)) { - /* Assume ASCII codec -- functions similarly to Python 2 */ - temp = PyUnicode_AsASCIIString(op); - if (temp == NULL) return -1; - } - else if (PyBytes_Check(op) || PyMemoryView_Check(op)) { - temp = PyObject_Bytes(op); - if (temp == NULL) { - return -1; - } - } - else { - /* Emulate similar casting behavior to Python 2 */ - PyObject *str; - str = PyObject_Str(op); - if (str == NULL) { - return -1; - } - temp = PyUnicode_AsASCIIString(str); - Py_DECREF(str); - if (temp == NULL) { - return -1; - } - } -#else - if ((temp = PyObject_Str(op)) == NULL) { - return -1; - } -#endif - if (PyBytes_AsStringAndSize(temp, &ptr, &len) == -1) { - Py_DECREF(temp); - return -1; - } - memcpy(ov, ptr, MIN(ap->descr->elsize,len)); - /* - * If the string length is smaller than the room in the array, - * then fill the rest of the element size with NULL - */ - if (ap->descr->elsize > len) { - memset(ov + len, 0, (ap->descr->elsize - len)); - } - Py_DECREF(temp); - return 0; -} - -/* OBJECT */ - -#define __ALIGNED(obj, sz) ((((size_t) obj) % (sz))==0) - -static PyObject * -OBJECT_getitem(char *ip, PyArrayObject *ap) -{ - /* TODO: We might be able to get away with just the "else" clause now */ - if (!ap || PyArray_ISALIGNED(ap)) { - if 
(*(PyObject **)ip == NULL) { - Py_INCREF(Py_None); - return Py_None; - } - else { - Py_INCREF(*(PyObject **)ip); - return *(PyObject **)ip; - } - } - else { - PyObject *obj; - NPY_COPY_PYOBJECT_PTR(&obj, ip); - if (obj == NULL) { - Py_INCREF(Py_None); - return Py_None; - } - else { - Py_INCREF(obj); - return obj; - } - } -} - - -static int -OBJECT_setitem(PyObject *op, char *ov, PyArrayObject *ap) -{ - Py_INCREF(op); - /* TODO: We might be able to get away with just the "else" clause now */ - if (!ap || PyArray_ISALIGNED(ap)) { - Py_XDECREF(*(PyObject **)ov); - *(PyObject **)ov = op; - } - else { - PyObject *obj; - NPY_COPY_PYOBJECT_PTR(&obj, ov); - Py_XDECREF(obj); - NPY_COPY_PYOBJECT_PTR(ov, &op); - } - return PyErr_Occurred() ? -1 : 0; -} - -/* VOID */ - -static PyObject * -VOID_getitem(char *ip, PyArrayObject *ap) -{ - PyObject *u = NULL; - PyArray_Descr* descr; - int itemsize; - - descr = ap->descr; - if (descr->names) { - PyObject *key; - PyObject *names; - int i, n; - PyObject *ret; - PyObject *tup, *title; - PyArray_Descr *new; - int offset; - int savedflags; - - /* get the names from the fields dictionary*/ - names = descr->names; - if (!names) { - goto finish; - } - n = PyTuple_GET_SIZE(names); - ret = PyTuple_New(n); - savedflags = ap->flags; - for (i = 0; i < n; i++) { - key = PyTuple_GET_ITEM(names, i); - tup = PyDict_GetItem(descr->fields, key); - if (!PyArg_ParseTuple(tup, "Oi|O", &new, &offset, &title)) { - Py_DECREF(ret); - ap->descr = descr; - return NULL; - } - ap->descr = new; - /* update alignment based on offset */ - if ((new->alignment > 1) - && ((((intp)(ip+offset)) % new->alignment) != 0)) { - ap->flags &= ~ALIGNED; - } - else { - ap->flags |= ALIGNED; - } - PyTuple_SET_ITEM(ret, i, new->f->getitem(ip+offset, ap)); - ap->flags = savedflags; - } - ap->descr = descr; - return ret; - } - - if (descr->subarray) { - /* return an array of the basic type */ - PyArray_Dims shape = {NULL, -1}; - PyObject *ret; - - if 
(!(PyArray_IntpConverter(descr->subarray->shape, &shape))) { - PyDimMem_FREE(shape.ptr); - PyErr_SetString(PyExc_ValueError, - "invalid shape in fixed-type tuple."); - return NULL; - } - Py_INCREF(descr->subarray->base); - ret = PyArray_NewFromDescr(&PyArray_Type, - descr->subarray->base, shape.len, shape.ptr, - NULL, ip, ap->flags, NULL); - PyDimMem_FREE(shape.ptr); - if (!ret) { - return NULL; - } - PyArray_BASE(ret) = (PyObject *)ap; - Py_INCREF(ap); - PyArray_UpdateFlags((PyArrayObject *)ret, UPDATE_ALL); - return ret; - } - -finish: - if (PyDataType_FLAGCHK(descr, NPY_ITEM_HASOBJECT) - || PyDataType_FLAGCHK(descr, NPY_ITEM_IS_POINTER)) { - PyErr_SetString(PyExc_ValueError, - "tried to get void-array with object members as buffer."); - return NULL; - } - itemsize = ap->descr->elsize; -#if defined(NPY_PY3K) - /* - * Return a byte array; there are no plain buffer objects on Py3 - */ - { - intp dims[1], strides[1]; - PyArray_Descr *descr; - dims[0] = itemsize; - strides[0] = 1; - descr = PyArray_DescrNewFromType(PyArray_BYTE); - u = PyArray_NewFromDescr(&PyArray_Type, descr, 1, dims, strides, - ip, - PyArray_ISWRITEABLE(ap) ? 
NPY_WRITEABLE : 0, - NULL); - ((PyArrayObject*)u)->base = ap; - Py_INCREF(ap); - } -#else - /* - * default is to return buffer object pointing to - * current item a view of it - */ - if (PyArray_ISWRITEABLE(ap)) { - u = PyBuffer_FromReadWriteMemory(ip, itemsize); - } - else { - u = PyBuffer_FromMemory(ip, itemsize); - } -#endif - if (u == NULL) { - goto fail; - } - return u; - -fail: - return NULL; -} - - -NPY_NO_EXPORT int PyArray_CopyObject(PyArrayObject *, PyObject *); - -static int -VOID_setitem(PyObject *op, char *ip, PyArrayObject *ap) -{ - PyArray_Descr* descr; - int itemsize=ap->descr->elsize; - int res; - - descr = ap->descr; - if (descr->names && PyTuple_Check(op)) { - PyObject *key; - PyObject *names; - int i, n; - PyObject *tup, *title; - PyArray_Descr *new; - int offset; - int savedflags; - - res = -1; - /* get the names from the fields dictionary*/ - names = descr->names; - n = PyTuple_GET_SIZE(names); - if (PyTuple_GET_SIZE(op) != n) { - PyErr_SetString(PyExc_ValueError, - "size of tuple must match number of fields."); - return -1; - } - savedflags = ap->flags; - for (i = 0; i < n; i++) { - key = PyTuple_GET_ITEM(names, i); - tup = PyDict_GetItem(descr->fields, key); - if (!PyArg_ParseTuple(tup, "Oi|O", &new, &offset, &title)) { - ap->descr = descr; - return -1; - } - ap->descr = new; - /* remember to update alignment flags */ - if ((new->alignment > 1) - && ((((intp)(ip+offset)) % new->alignment) != 0)) { - ap->flags &= ~ALIGNED; - } - else { - ap->flags |= ALIGNED; - } - res = new->f->setitem(PyTuple_GET_ITEM(op, i), ip+offset, ap); - ap->flags = savedflags; - if (res < 0) { - break; - } - } - ap->descr = descr; - return res; - } - - if (descr->subarray) { - /* copy into an array of the same basic type */ - PyArray_Dims shape = {NULL, -1}; - PyObject *ret; - if (!(PyArray_IntpConverter(descr->subarray->shape, &shape))) { - PyDimMem_FREE(shape.ptr); - PyErr_SetString(PyExc_ValueError, - "invalid shape in fixed-type tuple."); - return -1; - } - 
Py_INCREF(descr->subarray->base); - ret = PyArray_NewFromDescr(&PyArray_Type, - descr->subarray->base, shape.len, shape.ptr, - NULL, ip, ap->flags, NULL); - PyDimMem_FREE(shape.ptr); - if (!ret) { - return -1; - } - PyArray_BASE(ret) = (PyObject *)ap; - Py_INCREF(ap); - PyArray_UpdateFlags((PyArrayObject *)ret, UPDATE_ALL); - res = PyArray_CopyObject((PyArrayObject *)ret, op); - Py_DECREF(ret); - return res; - } - - /* Default is to use buffer interface to set item */ - { - const void *buffer; - Py_ssize_t buflen; - if (PyDataType_FLAGCHK(descr, NPY_ITEM_HASOBJECT) - || PyDataType_FLAGCHK(descr, NPY_ITEM_IS_POINTER)) { - PyErr_SetString(PyExc_ValueError, - "Setting void-array with object members using buffer."); - return -1; - } - res = PyObject_AsReadBuffer(op, &buffer, &buflen); - if (res == -1) { - goto fail; - } - memcpy(ip, buffer, NPY_MIN(buflen, itemsize)); - if (itemsize > buflen) { - memset(ip + buflen, 0, itemsize - buflen); - } - } - return 0; - -fail: - return -1; -} - - -/* - ***************************************************************************** - ** TYPE TO TYPE CONVERSIONS ** - ***************************************************************************** - */ - - -/* Assumes contiguous, and aligned, from and to */ - - -/**begin repeat - * - * #TOTYPE = BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE# - * #totype = byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble# -*/ - -/**begin repeat1 - * - * #FROMTYPE = BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE# - * #fromtype = byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble# - */ -static void -@FROMTYPE@_to_@TOTYPE@(@fromtype@ *ip, @totype@ *op, intp n, - PyArrayObject *NPY_UNUSED(aip), PyArrayObject *NPY_UNUSED(aop)) -{ - while (n--) { - *op++ = (@totype@)*ip++; - } -} -/**end 
repeat1**/ - -/**begin repeat1 - * - * #FROMTYPE = CFLOAT, CDOUBLE, CLONGDOUBLE# - * #fromtype = float, double, longdouble# - */ -static void -@FROMTYPE@_to_@TOTYPE@(@fromtype@ *ip, @totype@ *op, intp n, - PyArrayObject *NPY_UNUSED(aip), PyArrayObject *NPY_UNUSED(aop)) -{ - while (n--) { - *op++ = (@totype@)*ip; - ip += 2; - } -} -/**end repeat1**/ - -/**end repeat**/ - - -/**begin repeat - * - * #FROMTYPE = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE# - * #fromtype = Bool, byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble# -*/ -static void -@FROMTYPE@_to_BOOL(@fromtype@ *ip, Bool *op, intp n, - PyArrayObject *NPY_UNUSED(aip), PyArrayObject *NPY_UNUSED(aop)) -{ - while (n--) { - *op++ = (Bool)(*ip++ != FALSE); - } -} -/**end repeat**/ - -/**begin repeat - * - * #FROMTYPE = CFLOAT, CDOUBLE, CLONGDOUBLE# - * #fromtype = cfloat, cdouble, clongdouble# -*/ -static void -@FROMTYPE@_to_BOOL(@fromtype@ *ip, Bool *op, intp n, - PyArrayObject *NPY_UNUSED(aip), PyArrayObject *NPY_UNUSED(aop)) -{ - while (n--) { - *op = (Bool)(((*ip).real != FALSE) || ((*ip).imag != FALSE)); - op++; - ip++; - } -} -/**end repeat**/ - -/**begin repeat - * #TOTYPE = BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE# - * #totype = byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble# -*/ -static void -BOOL_to_@TOTYPE@(Bool *ip, @totype@ *op, intp n, - PyArrayObject *NPY_UNUSED(aip), PyArrayObject *NPY_UNUSED(aop)) -{ - while (n--) { - *op++ = (@totype@)(*ip++ != FALSE); - } -} -/**end repeat**/ - -/**begin repeat - * - * #TOTYPE = CFLOAT,CDOUBLE,CLONGDOUBLE# - * #totype = float, double, longdouble# - */ - -/**begin repeat1 - * #FROMTYPE = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE# - * #fromtype = Bool, byte, 
ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble# - */ -static void -@FROMTYPE@_to_@TOTYPE@(@fromtype@ *ip, @totype@ *op, intp n, - PyArrayObject *NPY_UNUSED(aip), PyArrayObject *NPY_UNUSED(aop)) -{ - while (n--) { - *op++ = (@totype@)*ip++; - *op++ = 0.0; - } - -} -/**end repeat1**/ -/**end repeat**/ - -/**begin repeat - * - * #TOTYPE = CFLOAT,CDOUBLE,CLONGDOUBLE# - * #totype = float, double, longdouble# - */ - -/**begin repeat1 - * #FROMTYPE = CFLOAT,CDOUBLE,CLONGDOUBLE# - * #fromtype = float, double, longdouble# - */ -static void -@FROMTYPE@_to_@TOTYPE@(@fromtype@ *ip, @totype@ *op, intp n, - PyArrayObject *NPY_UNUSED(aip), PyArrayObject *NPY_UNUSED(aop)) -{ - n <<= 1; - while (n--) { - *op++ = (@totype@)*ip++; - } -} - -/**end repeat1**/ -/**end repeat**/ - -/**begin repeat - * - * #FROMTYPE = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, - * CFLOAT, CDOUBLE, CLONGDOUBLE, STRING, UNICODE, VOID, OBJECT# - * #fromtype = Bool, byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble, - * cfloat, cdouble, clongdouble, char, char, char, PyObject *# - * #skip = 1*17, aip->descr->elsize*3, 1*1# - */ -static void -@FROMTYPE@_to_OBJECT(@fromtype@ *ip, PyObject **op, intp n, PyArrayObject *aip, - PyArrayObject *NPY_UNUSED(aop)) -{ - intp i; - int skip = @skip@; - for (i = 0; i < n; i++, ip +=skip, op++) { - Py_XDECREF(*op); - *op = @FROMTYPE@_getitem((char *)ip, aip); - } -} -/**end repeat**/ - -#define _NPY_UNUSEDBOOL NPY_UNUSED -#define _NPY_UNUSEDBYTE NPY_UNUSED -#define _NPY_UNUSEDUBYTE NPY_UNUSED -#define _NPY_UNUSEDSHORT NPY_UNUSED -#define _NPY_UNUSEDUSHORT NPY_UNUSED -#define _NPY_UNUSEDINT NPY_UNUSED -#define _NPY_UNUSEDUINT NPY_UNUSED -#define _NPY_UNUSEDLONG NPY_UNUSED -#define _NPY_UNUSEDULONG NPY_UNUSED -#define _NPY_UNUSEDLONGLONG NPY_UNUSED -#define _NPY_UNUSEDULONGLONG NPY_UNUSED -#define 
_NPY_UNUSEDFLOAT NPY_UNUSED -#define _NPY_UNUSEDDOUBLE NPY_UNUSED -#define _NPY_UNUSEDLONGDOUBLE NPY_UNUSED -#define _NPY_UNUSEDCFLOAT NPY_UNUSED -#define _NPY_UNUSEDCDOUBLE NPY_UNUSED -#define _NPY_UNUSEDCLONGDOUBLE NPY_UNUSED -#define _NPY_UNUSEDDATETIME NPY_UNUSED -#define _NPY_UNUSEDTIMEDELTA NPY_UNUSED -#define _NPY_UNUSEDSTRING -#define _NPY_UNUSEDVOID -#define _NPY_UNUSEDUNICODE - -/**begin repeat - * - * #TOTYPE = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, - * CFLOAT, CDOUBLE, CLONGDOUBLE, STRING, UNICODE, VOID# - * #totype = Bool, byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble, - * cfloat, cdouble, clongdouble, char, char, char# - * #skip = 1*17, aop->descr->elsize*3# - */ -static void -OBJECT_to_@TOTYPE@(PyObject **ip, @totype@ *op, intp n, - PyArrayObject *_NPY_UNUSED@TOTYPE@(aip), PyArrayObject *aop) -{ - intp i; - int skip = @skip@; - - for (i = 0; i < n; i++, ip++, op += skip) { - if (*ip == NULL) { - @TOTYPE@_setitem(Py_False, (char *)op, aop); - } - else { - @TOTYPE@_setitem(*ip, (char *)op, aop); - } - } -} -/**end repeat**/ - - -/**begin repeat - * - * #from = STRING*20, UNICODE*20, VOID*20# - * #fromtyp = char*60# - * #to = (BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, CFLOAT, CDOUBLE, CLONGDOUBLE, STRING, UNICODE, VOID)*3# - * #totyp = (Bool, byte, ubyte, short, ushort, int, uint, long, ulong, longlong, ulonglong, float, double, longdouble, cfloat, cdouble, clongdouble, char, char, char)*3# - * #oskip = (1*17,aop->descr->elsize*3)*3# - * #convert = 1*17, 0*3, 1*17, 0*3, 0*20# - * #convstr = (Int*9, Long*2, Float*3, Complex*3, Tuple*3)*3# - */ -static void -@from@_to_@to@(@fromtyp@ *ip, @totyp@ *op, intp n, PyArrayObject *aip, - PyArrayObject *aop) -{ - intp i; - PyObject *temp = NULL; - int skip = aip->descr->elsize; - int oskip = @oskip@; - - for (i = 0; i < n; 
i++, ip+=skip, op+=oskip) { - temp = @from@_getitem((char *)ip, aip); - if (temp == NULL) { - return; - } - /* convert from Python object to needed one */ - if (@convert@) { - PyObject *new, *args; - /* call out to the Python builtin given by convstr */ - args = Py_BuildValue("(N)", temp); -#if defined(NPY_PY3K) -#define PyInt_Type PyLong_Type -#endif - new = Py@convstr@_Type.tp_new(&Py@convstr@_Type, args, NULL); -#if defined(NPY_PY3K) -#undef PyInt_Type -#endif - Py_DECREF(args); - temp = new; - if (temp == NULL) { - return; - } - } - if (@to@_setitem(temp,(char *)op, aop)) { - Py_DECREF(temp); - return; - } - Py_DECREF(temp); - } -} -/**end repeat**/ - - -/**begin repeat - * - * #to = STRING*17, UNICODE*17, VOID*17# - * #totyp = char*17, char*17, char*17# - * #from = (BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, - * CFLOAT, CDOUBLE, CLONGDOUBLE)*3# - * #fromtyp = (Bool, byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble, - * cfloat, cdouble, clongdouble)*3# - */ -static void -@from@_to_@to@(@fromtyp@ *ip, @totyp@ *op, intp n, PyArrayObject *aip, - PyArrayObject *aop) -{ - intp i; - PyObject *temp = NULL; - int skip = 1; - int oskip = aop->descr->elsize; - for (i = 0; i < n; i++, ip += skip, op += oskip) { - temp = @from@_getitem((char *)ip, aip); - if (temp == NULL) { - Py_INCREF(Py_False); - temp = Py_False; - } - if (@to@_setitem(temp,(char *)op, aop)) { - Py_DECREF(temp); - return; - } - Py_DECREF(temp); - } -} - -/**end repeat**/ - - -/* - ***************************************************************************** - ** SCAN ** - ***************************************************************************** - */ - - -/* - * The first ignore argument is for backwards compatibility. - * Should be removed when the API version is bumped up. 
- */ - -/**begin repeat - * #fname = SHORT, USHORT, INT, UINT, LONG, ULONG, LONGLONG, ULONGLONG# - * #type = short, ushort, int, uint, long, ulong, longlong, ulonglong# - * #format = "hd", "hu", "d", "u", "ld", "lu", LONGLONG_FMT, ULONGLONG_FMT# - */ -static int -@fname@_scan(FILE *fp, @type@ *ip, void *NPY_UNUSED(ignore), PyArray_Descr *NPY_UNUSED(ignored)) -{ - return fscanf(fp, "%"@format@, ip); -} -/**end repeat**/ - -/**begin repeat - * #fname = FLOAT, DOUBLE, LONGDOUBLE# - * #type = float, double, longdouble# - */ -static int -@fname@_scan(FILE *fp, @type@ *ip, void *NPY_UNUSED(ignore), PyArray_Descr *NPY_UNUSED(ignored)) -{ - double result; - int ret; - - ret = NumPyOS_ascii_ftolf(fp, &result); - *ip = (@type@) result; - return ret; -} -/**end repeat**/ - -/**begin repeat - * #fname = BYTE, UBYTE# - * #type = byte, ubyte# - * #btype = int, uint# - * #format = "d", "u"# - */ -static int -@fname@_scan(FILE *fp, @type@ *ip, void *NPY_UNUSED(ignore), PyArray_Descr *NPY_UNUSED(ignore2)) -{ - @btype@ temp; - int num; - - num = fscanf(fp, "%"@format@, &temp); - *ip = (@type@) temp; - return num; -} -/**end repeat**/ - -static int -BOOL_scan(FILE *fp, Bool *ip, void *NPY_UNUSED(ignore), PyArray_Descr *NPY_UNUSED(ignore2)) -{ - int temp; - int num; - - num = fscanf(fp, "%d", &temp); - *ip = (Bool) (temp != 0); - return num; -} - -/**begin repeat - * #fname = CFLOAT, CDOUBLE, CLONGDOUBLE, OBJECT, STRING, UNICODE, VOID# - */ -#define @fname@_scan NULL -/**end repeat**/ - - -/* - ***************************************************************************** - ** FROMSTR ** - ***************************************************************************** - */ - - -/**begin repeat - * #fname = BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, LONGLONG, - * ULONGLONG# - * #type = byte, ubyte, short, ushort, int, uint, long, ulong, longlong, - * ulonglong# - * #func = (l, ul)*5# - * #btype = (long, ulong)*5# - */ -static int -@fname@_fromstr(char *str, @type@ *ip, char 
**endptr, PyArray_Descr *NPY_UNUSED(ignore)) -{ - @btype@ result; - - result = PyOS_strto@func@(str, endptr, 10); - *ip = (@type@) result; - return 0; -} -/**end repeat**/ - -/**begin repeat - * - * #fname=FLOAT,DOUBLE,LONGDOUBLE# - * #type=float,double,longdouble# - */ -static int -@fname@_fromstr(char *str, @type@ *ip, char **endptr, PyArray_Descr *NPY_UNUSED(ignore)) -{ - double result; - - result = NumPyOS_ascii_strtod(str, endptr); - *ip = (@type@) result; - return 0; -} -/**end repeat**/ - - - -/**begin repeat - * #fname = BOOL, CFLOAT, CDOUBLE, CLONGDOUBLE, OBJECT, STRING, UNICODE, VOID# - */ -#define @fname@_fromstr NULL -/**end repeat**/ - - -/* - ***************************************************************************** - ** COPYSWAPN ** - ***************************************************************************** - */ - - -/**begin repeat - * - * #fname = SHORT, USHORT, INT, UINT, LONG, ULONG, LONGLONG, ULONGLONG, FLOAT, - * DOUBLE, LONGDOUBLE# - * #fsize = SHORT, SHORT, INT, INT, LONG, LONG, LONGLONG, LONGLONG, FLOAT, - * DOUBLE, LONGDOUBLE# - * #type = short, ushort, int, uint, long, ulong, longlong, ulonglong, float, - * double, longdouble# - */ -static void -@fname@_copyswapn (void *dst, intp dstride, void *src, intp sstride, - intp n, int swap, void *NPY_UNUSED(arr)) -{ - if (src != NULL) { - if (sstride == sizeof(@type@) && dstride == sizeof(@type@)) { - memcpy(dst, src, n*sizeof(@type@)); - } - else { - _unaligned_strided_byte_copy(dst, dstride, src, sstride, - n, sizeof(@type@)); - } - } - if (swap) { - _strided_byte_swap(dst, dstride, n, sizeof(@type@)); - } -} - -static void -@fname@_copyswap (void *dst, void *src, int swap, void *NPY_UNUSED(arr)) -{ - - if (src != NULL) { - /* copy first if needed */ - memcpy(dst, src, sizeof(@type@)); - } - if (swap) { - char *a, *b, c; - - a = (char *)dst; -#if SIZEOF_@fsize@ == 2 - b = a + 1; - c = *a; *a++ = *b; *b = c; -#elif SIZEOF_@fsize@ == 4 - b = a + 3; - c = *a; *a++ = *b; *b-- = c; - c = *a; 
*a++ = *b; *b = c; -#elif SIZEOF_@fsize@ == 8 - b = a + 7; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b = c; -#elif SIZEOF_@fsize@ == 10 - b = a + 9; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b = c; -#elif SIZEOF_@fsize@ == 12 - b = a + 11; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b = c; -#elif SIZEOF_@fsize@ == 16 - b = a + 15; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b = c; -#else - { - int i, nn; - - b = a + (SIZEOF_@fsize@-1); - nn = SIZEOF_@fsize@ / 2; - for (i = 0; i < nn; i++) { - c = *a; - *a++ = *b; - *b-- = c; - } - } -#endif - } -} - -/**end repeat**/ - -/**begin repeat - * - * #fname = BOOL, BYTE, UBYTE# - * #type = Bool, byte, ubyte# - */ -static void -@fname@_copyswapn (void *dst, intp dstride, void *src, intp sstride, intp n, - int NPY_UNUSED(swap), void *NPY_UNUSED(arr)) -{ - if (src != NULL) { - if (sstride == sizeof(@type@) && dstride == sizeof(@type@)) { - memcpy(dst, src, n*sizeof(@type@)); - } - else { - _unaligned_strided_byte_copy(dst, dstride, src, sstride, - n, sizeof(@type@)); - } - } - /* ignore swap */ -} - -static void -@fname@_copyswap (void *dst, void *src, int NPY_UNUSED(swap), void *NPY_UNUSED(arr)) -{ - if (src != NULL) { - /* copy first if needed */ - memcpy(dst, src, sizeof(@type@)); - } - /* ignore swap */ -} - -/**end repeat**/ - - - -/**begin repeat - * - * #fname = CFLOAT, CDOUBLE, CLONGDOUBLE# - * #type = cfloat, cdouble, clongdouble# - * #fsize = FLOAT, DOUBLE, LONGDOUBLE# -*/ -static void -@fname@_copyswapn (void *dst, 
intp dstride, void *src, intp sstride, intp n, - int swap, void *NPY_UNUSED(arr)) -{ - - if (src != NULL) { - /* copy first if needed */ - if (sstride == sizeof(@type@) && dstride == sizeof(@type@)) { - memcpy(dst, src, n*sizeof(@type@)); - } - else { - _unaligned_strided_byte_copy(dst, dstride, src, sstride, n, - sizeof(@type@)); - } - } - - if (swap) { - _strided_byte_swap(dst, dstride, n, SIZEOF_@fsize@); - _strided_byte_swap(((char *)dst + SIZEOF_@fsize@), dstride, - n, SIZEOF_@fsize@); - } -} - -static void -@fname@_copyswap (void *dst, void *src, int swap, void *NPY_UNUSED(arr)) -{ - if (src != NULL) /* copy first if needed */ - memcpy(dst, src, sizeof(@type@)); - - if (swap) { - char *a, *b, c; - a = (char *)dst; -#if SIZEOF_@fsize@ == 4 - b = a + 3; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b = c; - a += 2; - b = a + 3; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b = c; -#elif SIZEOF_@fsize@ == 8 - b = a + 7; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b = c; - a += 4; - b = a + 7; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b = c; -#elif SIZEOF_@fsize@ == 10 - b = a + 9; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b = c; - a += 5; - b = a + 9; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b = c; -#elif SIZEOF_@fsize@ == 12 - b = a + 11; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b = c; - a += 6; - b = a + 11; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = 
*a; *a++ = *b; *b = c; -#elif SIZEOF_@fsize@ == 16 - b = a + 15; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b = c; - a += 8; - b = a + 15; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b = c; -#else - { - int i, nn; - - b = a + (SIZEOF_@fsize@ - 1); - nn = SIZEOF_@fsize@ / 2; - for (i = 0; i < nn; i++) { - c = *a; - *a++ = *b; - *b-- = c; - } - a += nn / 2; - b = a + (SIZEOF_@fsize@ - 1); - nn = SIZEOF_@fsize@ / 2; - for (i = 0; i < nn; i++) { - c = *a; - *a++ = *b; - *b-- = c; - } - } -#endif - } -} - -/**end repeat**/ - -static void -OBJECT_copyswapn (PyObject **dst, intp dstride, PyObject **src, intp sstride, - intp n, int NPY_UNUSED(swap), void *NPY_UNUSED(arr)) -{ - intp i; - if (src != NULL) { - if (__ALIGNED(dst, sizeof(PyObject **)) - && __ALIGNED(src, sizeof(PyObject **)) - && __ALIGNED(dstride, sizeof(PyObject **)) - && __ALIGNED(sstride, sizeof(PyObject **))) { - dstride /= sizeof(PyObject **); - sstride /= sizeof(PyObject **); - for (i = 0; i < n; i++) { - Py_XINCREF(*src); - Py_XDECREF(*dst); - *dst = *src; - dst += dstride; - src += sstride; - } - } - else { - unsigned char *dstp, *srcp; - PyObject *tmp; - dstp = (unsigned char*)dst; - srcp = (unsigned char*)src; - for (i = 0; i < n; i++) { - NPY_COPY_PYOBJECT_PTR(&tmp, dstp); - Py_XDECREF(tmp); - NPY_COPY_PYOBJECT_PTR(&tmp, srcp); - Py_XINCREF(tmp); - NPY_COPY_PYOBJECT_PTR(dstp, srcp); - dstp += dstride; - srcp += sstride; - } - } - } - /* ignore swap */ - return; -} - -static void -OBJECT_copyswap(PyObject **dst, PyObject **src, int NPY_UNUSED(swap), void *NPY_UNUSED(arr)) -{ - - if (src != NULL) { - if 
(__ALIGNED(dst,sizeof(PyObject **)) && __ALIGNED(src,sizeof(PyObject **))) { - Py_XINCREF(*src); - Py_XDECREF(*dst); - *dst = *src; - } - else { - PyObject *tmp; - NPY_COPY_PYOBJECT_PTR(&tmp, dst); - Py_XDECREF(tmp); - NPY_COPY_PYOBJECT_PTR(&tmp, src); - Py_XINCREF(tmp); - NPY_COPY_PYOBJECT_PTR(dst, src); - } - } -} - -/* ignore swap */ -static void -STRING_copyswapn (char *dst, intp dstride, char *src, intp sstride, - intp n, int NPY_UNUSED(swap), PyArrayObject *arr) -{ - if (src != NULL && arr != NULL) { - int itemsize = arr->descr->elsize; - - if (dstride == itemsize && sstride == itemsize) { - memcpy(dst, src, itemsize * n); - } - else { - _unaligned_strided_byte_copy(dst, dstride, src, sstride, n, - itemsize); - } - } - return; -} - -/* */ -static void -VOID_copyswapn (char *dst, intp dstride, char *src, intp sstride, - intp n, int swap, PyArrayObject *arr) -{ - if (arr == NULL) { - return; - } - if (PyArray_HASFIELDS(arr)) { - PyObject *key, *value, *title = NULL; - PyArray_Descr *new, *descr; - int offset; - Py_ssize_t pos = 0; - - descr = arr->descr; - while (PyDict_Next(descr->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) { - continue; - } - if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, &title)) { - arr->descr = descr; - return; - } - arr->descr = new; - new->f->copyswapn(dst+offset, dstride, - (src != NULL ? 
src+offset : NULL), - sstride, n, swap, arr); - } - arr->descr = descr; - return; - } - if (swap && arr->descr->subarray != NULL) { - PyArray_Descr *descr, *new; - npy_intp num; - npy_intp i; - int subitemsize; - char *dstptr, *srcptr; - - descr = arr->descr; - new = descr->subarray->base; - arr->descr = new; - dstptr = dst; - srcptr = src; - subitemsize = new->elsize; - num = descr->elsize / subitemsize; - for (i = 0; i < n; i++) { - new->f->copyswapn(dstptr, subitemsize, srcptr, - subitemsize, num, swap, arr); - dstptr += dstride; - if (srcptr) { - srcptr += sstride; - } - } - arr->descr = descr; - return; - } - if (src != NULL) { - memcpy(dst, src, arr->descr->elsize * n); - } - return; -} - -static void -VOID_copyswap (char *dst, char *src, int swap, PyArrayObject *arr) -{ - if (arr == NULL) { - return; - } - if (PyArray_HASFIELDS(arr)) { - PyObject *key, *value, *title = NULL; - PyArray_Descr *new, *descr; - int offset; - Py_ssize_t pos = 0; - - descr = arr->descr; - while (PyDict_Next(descr->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) { - continue; - } - if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, &title)) { - arr->descr = descr; - return; - } - arr->descr = new; - new->f->copyswap(dst+offset, - (src != NULL ? 
src+offset : NULL), - swap, arr); - } - arr->descr = descr; - return; - } - if (swap && arr->descr->subarray != NULL) { - PyArray_Descr *descr, *new; - npy_intp num; - int itemsize; - - descr = arr->descr; - new = descr->subarray->base; - arr->descr = new; - itemsize = new->elsize; - num = descr->elsize / itemsize; - new->f->copyswapn(dst, itemsize, src, - itemsize, num, swap, arr); - arr->descr = descr; - return; - } - if (src != NULL) { - memcpy(dst, src, arr->descr->elsize); - } - return; -} - - -static void -UNICODE_copyswapn (char *dst, intp dstride, char *src, intp sstride, - intp n, int swap, PyArrayObject *arr) -{ - int itemsize; - - if (arr == NULL) { - return; - } - itemsize = arr->descr->elsize; - if (src != NULL) { - if (dstride == itemsize && sstride == itemsize) { - memcpy(dst, src, n * itemsize); - } - else { - _unaligned_strided_byte_copy(dst, dstride, src, - sstride, n, itemsize); - } - } - - n *= itemsize; - if (swap) { - char *a, *b, c; - - /* n is the number of unicode characters to swap */ - n >>= 2; - for (a = (char *)dst; n > 0; n--) { - b = a + 3; - c = *a; - *a++ = *b; - *b-- = c; - c = *a; - *a++ = *b; - *b-- = c; - a += 2; - } - } -} - - -static void -STRING_copyswap(char *dst, char *src, int NPY_UNUSED(swap), PyArrayObject *arr) -{ - if (src != NULL && arr != NULL) { - memcpy(dst, src, arr->descr->elsize); - } -} - -static void -UNICODE_copyswap (char *dst, char *src, int swap, PyArrayObject *arr) -{ - int itemsize; - - if (arr == NULL) { - return; - } - itemsize = arr->descr->elsize; - if (src != NULL) { - memcpy(dst, src, itemsize); - } - - if (swap) { - char *a, *b, c; - itemsize >>= 2; - for (a = (char *)dst; itemsize>0; itemsize--) { - b = a + 3; - c = *a; - *a++ = *b; - *b-- = c; - c = *a; - *a++ = *b; - *b-- = c; - a += 2; - } - } -} - - -/* - ***************************************************************************** - ** NONZERO ** - ***************************************************************************** - */ - -/**begin 
repeat - * - * #fname = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE# - * #type = Bool, byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble# - */ -static Bool -@fname@_nonzero (char *ip, PyArrayObject *ap) -{ - if (ap == NULL || PyArray_ISBEHAVED_RO(ap)) { - @type@ *ptmp = (@type@ *)ip; - return (Bool) (*ptmp != 0); - } - else { - /* - * don't worry about swap, since we are just testing - * whether or not equal to 0 - */ - @type@ tmp; - memcpy(&tmp, ip, sizeof(@type@)); - return (Bool) (tmp != 0); - } -} -/**end repeat**/ - -/**begin repeat - * - * #fname=CFLOAT,CDOUBLE,CLONGDOUBLE# - * #type=cfloat, cdouble, clongdouble# - */ -static Bool -@fname@_nonzero (char *ip, PyArrayObject *ap) -{ - if (ap == NULL || PyArray_ISBEHAVED_RO(ap)) { - @type@ *ptmp = (@type@ *)ip; - return (Bool) ((ptmp->real != 0) || (ptmp->imag != 0)); - } - else { - /* - * don't worry about swap, since we are just testing - * whether or not equal to 0 - */ - @type@ tmp; - memcpy(&tmp, ip, sizeof(@type@)); - return (Bool) ((tmp.real != 0) || (tmp.imag != 0)); - } -} -/**end repeat**/ - - -#define WHITESPACE " \t\n\r\v\f" -#define WHITELEN 6 - -static Bool -Py_STRING_ISSPACE(char ch) -{ - char white[] = WHITESPACE; - int j; - Bool space = FALSE; - - for (j = 0; j < WHITELEN; j++) { - if (ch == white[j]) { - space = TRUE; - break; - } - } - return space; -} - -static Bool -STRING_nonzero (char *ip, PyArrayObject *ap) -{ - int len = ap->descr->elsize; - int i; - Bool nonz = FALSE; - - for (i = 0; i < len; i++) { - if (!Py_STRING_ISSPACE(*ip)) { - nonz = TRUE; - break; - } - ip++; - } - return nonz; -} - -#ifdef Py_UNICODE_WIDE -#define PyArray_UCS4_ISSPACE Py_UNICODE_ISSPACE -#else -#define PyArray_UCS4_ISSPACE(ch) Py_STRING_ISSPACE((char)ch) -#endif - -static Bool -UNICODE_nonzero (PyArray_UCS4 *ip, PyArrayObject *ap) -{ - int len = ap->descr->elsize >> 2; - int i; - Bool nonz = 
FALSE; - char *buffer = NULL; - - if ((!PyArray_ISNOTSWAPPED(ap)) || (!PyArray_ISALIGNED(ap))) { - buffer = _pya_malloc(ap->descr->elsize); - if (buffer == NULL) { - return nonz; - } - memcpy(buffer, ip, ap->descr->elsize); - if (!PyArray_ISNOTSWAPPED(ap)) { - byte_swap_vector(buffer, len, 4); - } - ip = (PyArray_UCS4 *)buffer; - } - - for (i = 0; i < len; i++) { - if (!PyArray_UCS4_ISSPACE(*ip)) { - nonz = TRUE; - break; - } - ip++; - } - _pya_free(buffer); - return nonz; -} - -static Bool -OBJECT_nonzero (PyObject **ip, PyArrayObject *ap) -{ - - if (PyArray_ISALIGNED(ap)) { - if (*ip == NULL) { - return FALSE; - } - return (Bool) PyObject_IsTrue(*ip); - } - else { - PyObject *obj; - NPY_COPY_PYOBJECT_PTR(&obj, ip); - if (obj == NULL) { - return FALSE; - } - return (Bool) PyObject_IsTrue(obj); - } -} - -/* - * if we have fields, then nonzero only if all sub-fields are nonzero. - */ -static Bool -VOID_nonzero (char *ip, PyArrayObject *ap) -{ - int i; - int len; - Bool nonz = FALSE; - - if (PyArray_HASFIELDS(ap)) { - PyArray_Descr *descr, *new; - PyObject *key, *value, *title; - int savedflags, offset; - Py_ssize_t pos = 0; - - descr = ap->descr; - savedflags = ap->flags; - while (PyDict_Next(descr->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) { - continue; - } - if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, - &title)) { - PyErr_Clear(); - continue; - } - ap->descr = new; - ap->flags = savedflags; - if ((new->alignment > 1) && !__ALIGNED(ip+offset, new->alignment)) { - ap->flags &= ~ALIGNED; - } - else { - ap->flags |= ALIGNED; - } - if (new->f->nonzero(ip+offset, ap)) { - nonz = TRUE; - break; - } - } - ap->descr = descr; - ap->flags = savedflags; - return nonz; - } - len = ap->descr->elsize; - for (i = 0; i < len; i++) { - if (*ip != '\0') { - nonz = TRUE; - break; - } - ip++; - } - return nonz; -} - -#undef __ALIGNED - - -/* - ***************************************************************************** - ** COMPARE ** - 
***************************************************************************** - */ - - -/* boolean type */ - -static int -BOOL_compare(Bool *ip1, Bool *ip2, PyArrayObject *NPY_UNUSED(ap)) -{ - return (*ip1 ? (*ip2 ? 0 : 1) : (*ip2 ? -1 : 0)); -} - - -/* integer types */ - -/**begin repeat - * #TYPE = BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG# - * #type = byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong# - */ - -static int -@TYPE@_compare (@type@ *pa, @type@ *pb, PyArrayObject *NPY_UNUSED(ap)) -{ - const @type@ a = *pa; - const @type@ b = *pb; - - return a < b ? -1 : a == b ? 0 : 1; -} - -/**end repeat**/ - - -/* float types */ - -/* - * The real/complex comparison functions are compatible with the new sort - * order for nans introduced in numpy 1.4.0. All nan values now compare - * larger than non-nan values and are sorted to the end. The comparison - * order is: - * - * Real: [R, nan] - * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj] - * - * where complex values with the same nan placements are sorted according - * to the non-nan part if it exists. If both the real and imaginary parts - * of complex types are non-nan the order is the same as the real parts - * unless they happen to be equal, in which case the order is that of the - * imaginary parts. 
- */ - -/**begin repeat - * - * #TYPE = FLOAT, DOUBLE, LONGDOUBLE# - * #type = float, double, longdouble# - */ - -#define LT(a,b) ((a) < (b) || ((b) != (b) && (a) ==(a))) - -static int -@TYPE@_compare(@type@ *pa, @type@ *pb) -{ - const @type@ a = *pa; - const @type@ b = *pb; - int ret; - - if (LT(a,b)) { - ret = -1; - } - else if (LT(b,a)) { - ret = 1; - } - else { - ret = 0; - } - return ret; -} - - -static int -C@TYPE@_compare(@type@ *pa, @type@ *pb) -{ - const @type@ ar = pa[0]; - const @type@ ai = pa[1]; - const @type@ br = pb[0]; - const @type@ bi = pb[1]; - int ret; - - if (ar < br) { - if (ai == ai || bi != bi) { - ret = -1; - } - else { - ret = 1; - } - } - else if (br < ar) { - if (bi == bi || ai != ai) { - ret = 1; - } - else { - ret = -1; - } - } - else if (ar == br || (ar != ar && br != br)) { - if (LT(ai,bi)) { - ret = -1; - } - else if (LT(bi,ai)) { - ret = 1; - } - else { - ret = 0; - } - } - else if (ar == ar) { - ret = -1; - } - else { - ret = 1; - } - - return ret; -} - -#undef LT - -/**end repeat**/ - - -/* object type */ - -static int -OBJECT_compare(PyObject **ip1, PyObject **ip2, PyArrayObject *NPY_UNUSED(ap)) -{ - /* - * ALIGNMENT NOTE: It seems that PyArray_Sort is already handling - * the alignment of pointers, so it doesn't need to be handled - * here. 
- */ - if ((*ip1 == NULL) || (*ip2 == NULL)) { - if (ip1 == ip2) { - return 1; - } - if (ip1 == NULL) { - return -1; - } - return 1; - } -#if defined(NPY_PY3K) - if (PyObject_RichCompareBool(*ip1, *ip2, Py_LT) == 1) { - return -1; - } - else if (PyObject_RichCompareBool(*ip1, *ip2, Py_GT) == 1) { - return 1; - } - else { - return 0; - } -#else - return PyObject_Compare(*ip1, *ip2); -#endif -} - - -/* string type */ - -static int -STRING_compare(char *ip1, char *ip2, PyArrayObject *ap) -{ - const unsigned char *c1 = (unsigned char *)ip1; - const unsigned char *c2 = (unsigned char *)ip2; - const size_t len = ap->descr->elsize; - size_t i; - - for(i = 0; i < len; ++i) { - if (c1[i] != c2[i]) { - return (c1[i] > c2[i]) ? 1 : -1; - } - } - return 0; -} - - -/* unicode type */ - -static int -UNICODE_compare(PyArray_UCS4 *ip1, PyArray_UCS4 *ip2, - PyArrayObject *ap) -{ - int itemsize = ap->descr->elsize; - - if (itemsize < 0) { - return 0; - } - while (itemsize-- > 0) { - PyArray_UCS4 c1 = *ip1++; - PyArray_UCS4 c2 = *ip2++; - if (c1 != c2) { - return (c1 < c2) ? -1 : 1; - } - } - return 0; -} - - -/* void type */ - -/* - * If fields are defined, then compare on first field and if equal - * compare on second field. Continue until done or comparison results - * in not_equal. - * - * Must align data passed on to sub-comparisons. - * Also must swap data based on to sub-comparisons. - */ -static int -VOID_compare(char *ip1, char *ip2, PyArrayObject *ap) -{ - PyArray_Descr *descr, *new; - PyObject *names, *key; - PyObject *tup, *title; - char *nip1, *nip2; - int i, offset, res = 0, swap=0; - - if (!PyArray_HASFIELDS(ap)) { - return STRING_compare(ip1, ip2, ap); - } - descr = ap->descr; - /* - * Compare on the first-field. If equal, then - * compare on the second-field, etc. 
- */ - names = descr->names; - for (i = 0; i < PyTuple_GET_SIZE(names); i++) { - key = PyTuple_GET_ITEM(names, i); - tup = PyDict_GetItem(descr->fields, key); - if (!PyArg_ParseTuple(tup, "Oi|O", &new, &offset, &title)) { - goto finish; - } - ap->descr = new; - swap = PyArray_ISBYTESWAPPED(ap); - nip1 = ip1+offset; - nip2 = ip2+offset; - if ((swap) || (new->alignment > 1)) { - if ((swap) || (((intp)(nip1) % new->alignment) != 0)) { - /* create buffer and copy */ - nip1 = _pya_malloc(new->elsize); - if (nip1 == NULL) { - goto finish; - } - memcpy(nip1, ip1+offset, new->elsize); - if (swap) - new->f->copyswap(nip1, NULL, swap, ap); - } - if ((swap) || (((intp)(nip2) % new->alignment) != 0)) { - /* copy data to a buffer */ - nip2 = _pya_malloc(new->elsize); - if (nip2 == NULL) { - if (nip1 != ip1+offset) { - _pya_free(nip1); - } - goto finish; - } - memcpy(nip2, ip2+offset, new->elsize); - if (swap) - new->f->copyswap(nip2, NULL, swap, ap); - } - } - res = new->f->compare(nip1, nip2, ap); - if ((swap) || (new->alignment > 1)) { - if (nip1 != ip1+offset) { - _pya_free(nip1); - } - if (nip2 != ip2+offset) { - _pya_free(nip2); - } - } - if (res != 0) { - break; - } - } - -finish: - ap->descr = descr; - return res; -} - - -/* - ***************************************************************************** - ** ARGFUNC ** - ***************************************************************************** - */ - - -/**begin repeat - * - * #fname = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, - * CFLOAT, CDOUBLE, CLONGDOUBLE# - * #type = Bool, byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble, - * float, double, longdouble# - * #isfloat = 0*11, 1*6# - * #iscomplex = 0*14, 1*3# - * #incr= ip++*14, ip+=2*3# - */ -static int -@fname@_argmax(@type@ *ip, intp n, intp *max_ind, PyArrayObject *NPY_UNUSED(aip)) -{ - intp i; - @type@ mp = *ip; -#if @iscomplex@ - @type@ 
mp_im = ip[1]; -#endif - - *max_ind = 0; - -#if @isfloat@ - if (npy_isnan(mp)) { - /* nan encountered; it's maximal */ - return 0; - } -#endif -#if @iscomplex@ - if (npy_isnan(mp_im)) { - /* nan encountered; it's maximal */ - return 0; - } -#endif - - for (i = 1; i < n; i++) { - @incr@; - /* - * Propagate nans, similarly as max() and min() - */ -#if @iscomplex@ - /* Lexical order for complex numbers */ - if ((ip[0] > mp) || ((ip[0] == mp) && (ip[1] > mp_im)) - || npy_isnan(ip[0]) || npy_isnan(ip[1])) { - mp = ip[0]; - mp_im = ip[1]; - *max_ind = i; - if (npy_isnan(mp) || npy_isnan(mp_im)) { - /* nan encountered, it's maximal */ - break; - } - } -#else - if (!(*ip <= mp)) { /* negated, for correct nan handling */ - mp = *ip; - *max_ind = i; -#if @isfloat@ - if (npy_isnan(mp)) { - /* nan encountered, it's maximal */ - break; - } -#endif - } -#endif - } - return 0; -} - -/**end repeat**/ - -static int -OBJECT_argmax(PyObject **ip, intp n, intp *max_ind, PyArrayObject *NPY_UNUSED(aip)) -{ - intp i; - PyObject *mp = ip[0]; - - *max_ind = 0; - i = 1; - while (i < n && mp == NULL) { - mp = ip[i]; - i++; - } - for (; i < n; i++) { - ip++; -#if defined(NPY_PY3K) - if (*ip != NULL && PyObject_RichCompareBool(*ip, mp, Py_GT) == 1) { -#else - if (*ip != NULL && PyObject_Compare(*ip, mp) > 0) { -#endif - mp = *ip; - *max_ind = i; - } - } - return 0; -} - -/**begin repeat - * - * #fname = STRING, UNICODE# - * #type = char, PyArray_UCS4# - */ -static int -@fname@_argmax(@type@ *ip, intp n, intp *max_ind, PyArrayObject *aip) -{ - intp i; - int elsize = aip->descr->elsize; - @type@ *mp = (@type@ *)_pya_malloc(elsize); - - if (mp==NULL) return 0; - memcpy(mp, ip, elsize); - *max_ind = 0; - for(i=1; i<n; i++) { - ip += elsize / sizeof(@type@); - if (@fname@_compare(ip,mp,aip) > 0) { - memcpy(mp, ip, elsize); - *max_ind=i; - } - } - _pya_free(mp); - return 0; -} - -/**end repeat**/ - -#define VOID_argmax NULL - - -/* - ***************************************************************************** - ** DOT ** - 
***************************************************************************** - */ - -/* - * dot means inner product - */ - -static void -BOOL_dot(char *ip1, intp is1, char *ip2, intp is2, char *op, intp n, - void *NPY_UNUSED(ignore)) -{ - Bool tmp = FALSE; - intp i; - - for (i = 0; i < n; i++, ip1 += is1, ip2 += is2) { - if ((*((Bool *)ip1) != 0) && (*((Bool *)ip2) != 0)) { - tmp = TRUE; - break; - } - } - *((Bool *)op) = tmp; -} - -/**begin repeat - * - * #name = BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE# - * #type = byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble# - * #out = long, ulong, long, ulong, long, ulong, long, ulong, - * longlong, ulonglong, float, double, longdouble# - */ -static void -@name@_dot(char *ip1, intp is1, char *ip2, intp is2, char *op, intp n, - void *NPY_UNUSED(ignore)) -{ - @out@ tmp = (@out@)0; - intp i; - - for (i = 0; i < n; i++, ip1 += is1, ip2 += is2) { - tmp += (@out@)(*((@type@ *)ip1)) * - (@out@)(*((@type@ *)ip2)); - } - *((@type@ *)op) = (@type@) tmp; -} -/**end repeat**/ - - -/**begin repeat - * - * #name = CFLOAT, CDOUBLE, CLONGDOUBLE# - * #type = float, double, longdouble# - */ -static void @name@_dot(char *ip1, intp is1, char *ip2, intp is2, - char *op, intp n, void *NPY_UNUSED(ignore)) -{ - @type@ tmpr = (@type@)0.0, tmpi=(@type@)0.0; - intp i; - - for (i = 0; i < n; i++, ip1 += is1, ip2 += is2) { - tmpr += ((@type@ *)ip1)[0] * ((@type@ *)ip2)[0] - - ((@type@ *)ip1)[1] * ((@type@ *)ip2)[1]; - tmpi += ((@type@ *)ip1)[1] * ((@type@ *)ip2)[0] - + ((@type@ *)ip1)[0] * ((@type@ *)ip2)[1]; - } - ((@type@ *)op)[0] = tmpr; ((@type@ *)op)[1] = tmpi; -} - -/**end repeat**/ - -static void -OBJECT_dot(char *ip1, intp is1, char *ip2, intp is2, char *op, intp n, - void *NPY_UNUSED(ignore)) -{ - /* - * ALIGNMENT NOTE: np.dot, np.inner etc. 
enforce that the array is - * BEHAVED before getting to this point, so unaligned pointers aren't - * handled here. - */ - intp i; - PyObject *tmp1, *tmp2, *tmp = NULL; - PyObject **tmp3; - for (i = 0; i < n; i++, ip1 += is1, ip2 += is2) { - if ((*((PyObject **)ip1) == NULL) || (*((PyObject **)ip2) == NULL)) { - tmp1 = Py_False; - Py_INCREF(Py_False); - } - else { - tmp1 = PyNumber_Multiply(*((PyObject **)ip1), *((PyObject **)ip2)); - if (!tmp1) { - Py_XDECREF(tmp); - return; - } - } - if (i == 0) { - tmp = tmp1; - } - else { - tmp2 = PyNumber_Add(tmp, tmp1); - Py_XDECREF(tmp); - Py_XDECREF(tmp1); - if (!tmp2) { - return; - } - tmp = tmp2; - } - } - tmp3 = (PyObject**) op; - tmp2 = *tmp3; - *((PyObject **)op) = tmp; - Py_XDECREF(tmp2); -} - - -/* - ***************************************************************************** - ** FILL ** - ***************************************************************************** - */ - - -#define BOOL_fill NULL - -/* this requires buffer to be filled with objects or NULL */ -static void -OBJECT_fill(PyObject **buffer, intp length, void *NPY_UNUSED(ignored)) -{ - intp i; - PyObject *start = buffer[0]; - PyObject *delta = buffer[1]; - - delta = PyNumber_Subtract(delta, start); - if (!delta) { - return; - } - start = PyNumber_Add(start, delta); - if (!start) { - goto finish; - } - buffer += 2; - - for (i = 2; i < length; i++, buffer++) { - start = PyNumber_Add(start, delta); - if (!start) { - goto finish; - } - Py_XDECREF(*buffer); - *buffer = start; - } - -finish: - Py_DECREF(delta); - return; -} - -/**begin repeat - * - * #NAME = BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE# - * #typ = byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble# -*/ -static void -@NAME@_fill(@typ@ *buffer, intp length, void *NPY_UNUSED(ignored)) -{ - intp i; - @typ@ start = buffer[0]; - @typ@ delta = buffer[1]; - - delta -= start; - for (i = 2; i 
< length; ++i) { - buffer[i] = start + i*delta; - } -} -/**end repeat**/ - -/**begin repeat - * - * #NAME = CFLOAT, CDOUBLE, CLONGDOUBLE# - * #typ = cfloat, cdouble, clongdouble# -*/ -static void -@NAME@_fill(@typ@ *buffer, intp length, void *NPY_UNUSED(ignore)) -{ - intp i; - @typ@ start; - @typ@ delta; - - start.real = buffer->real; - start.imag = buffer->imag; - delta.real = buffer[1].real; - delta.imag = buffer[1].imag; - delta.real -= start.real; - delta.imag -= start.imag; - buffer += 2; - for (i = 2; i < length; i++, buffer++) { - buffer->real = start.real + i*delta.real; - buffer->imag = start.imag + i*delta.imag; - } -} -/**end repeat**/ - - -/* this requires buffer to be filled with objects or NULL */ -static void -OBJECT_fillwithscalar(PyObject **buffer, intp length, PyObject **value, void *NPY_UNUSED(ignored)) -{ - intp i; - PyObject *val = *value; - for (i = 0; i < length; i++) { - Py_XDECREF(buffer[i]); - Py_XINCREF(val); - buffer[i] = val; - } -} -/**begin repeat - * - * #NAME = BOOL, BYTE, UBYTE# - * #typ = Bool, byte, ubyte# - */ -static void -@NAME@_fillwithscalar(@typ@ *buffer, intp length, @typ@ *value, void *NPY_UNUSED(ignored)) -{ - memset(buffer, *value, length); -} -/**end repeat**/ - -/**begin repeat - * - * #NAME = SHORT, USHORT, INT, UINT, LONG, ULONG, LONGLONG, ULONGLONG, - * FLOAT, DOUBLE, LONGDOUBLE, CFLOAT, CDOUBLE, CLONGDOUBLE# - * #typ = short, ushort, int, uint, long, ulong, longlong, ulonglong, - * float, double, longdouble, cfloat, cdouble, clongdouble# - */ -static void -@NAME@_fillwithscalar(@typ@ *buffer, intp length, @typ@ *value, void *NPY_UNUSED(ignored)) -{ - intp i; - @typ@ val = *value; - - for (i = 0; i < length; ++i) { - buffer[i] = val; - } -} -/**end repeat**/ - - -/* - ***************************************************************************** - ** FASTCLIP ** - ***************************************************************************** - */ - - -/**begin repeat - * - * #name = BOOL, BYTE, UBYTE, SHORT, USHORT, 
INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE# - * #type = Bool, byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble# - */ -static void -@name@_fastclip(@type@ *in, intp ni, @type@ *min, @type@ *max, @type@ *out) -{ - npy_intp i; - @type@ max_val = 0, min_val = 0; - - if (max != NULL) { - max_val = *max; - } - if (min != NULL) { - min_val = *min; - } - if (max == NULL) { - for (i = 0; i < ni; i++) { - if (in[i] < min_val) { - out[i] = min_val; - } - } - } - else if (min == NULL) { - for (i = 0; i < ni; i++) { - if (in[i] > max_val) { - out[i] = max_val; - } - } - } - else { - for (i = 0; i < ni; i++) { - if (in[i] < min_val) { - out[i] = min_val; - } - else if (in[i] > max_val) { - out[i] = max_val; - } - } - } -} -/**end repeat**/ - -/**begin repeat - * - * #name = CFLOAT, CDOUBLE, CLONGDOUBLE# - * #type = cfloat, cdouble, clongdouble# - */ -static void -@name@_fastclip(@type@ *in, intp ni, @type@ *min, @type@ *max, @type@ *out) -{ - npy_intp i; - @type@ max_val, min_val; - - min_val = *min; - max_val = *max; - if (max != NULL) { - max_val = *max; - } - if (min != NULL) { - min_val = *min; - } - if (max == NULL) { - for (i = 0; i < ni; i++) { - if (PyArray_CLT(in[i],min_val)) { - out[i] = min_val; - } - } - } - else if (min == NULL) { - for (i = 0; i < ni; i++) { - if (PyArray_CGT(in[i], max_val)) { - out[i] = max_val; - } - } - } - else { - for (i = 0; i < ni; i++) { - if (PyArray_CLT(in[i], min_val)) { - out[i] = min_val; - } - else if (PyArray_CGT(in[i], max_val)) { - out[i] = max_val; - } - } - } -} - -/**end repeat**/ - -#define OBJECT_fastclip NULL - - -/* - ***************************************************************************** - ** FASTPUTMASK ** - ***************************************************************************** - */ - - -/**begin repeat - * - * #name = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, 
LONGDOUBLE, - * CFLOAT, CDOUBLE, CLONGDOUBLE# - * #type = Bool, byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble, - * cfloat, cdouble, clongdouble# -*/ -static void -@name@_fastputmask(@type@ *in, Bool *mask, intp ni, @type@ *vals, intp nv) -{ - npy_intp i; - @type@ s_val; - - if (nv == 1) { - s_val = *vals; - for (i = 0; i < ni; i++) { - if (mask[i]) { - in[i] = s_val; - } - } - } - else { - for (i = 0; i < ni; i++) { - if (mask[i]) { - in[i] = vals[i%nv]; - } - } - } - return; -} -/**end repeat**/ - -#define OBJECT_fastputmask NULL - - -/* - ***************************************************************************** - ** FASTTAKE ** - ***************************************************************************** - */ - - -/**begin repeat - * - * #name = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, - * CFLOAT, CDOUBLE, CLONGDOUBLE# - * #type = Bool, byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble, - * cfloat, cdouble, clongdouble# -*/ -static int -@name@_fasttake(@type@ *dest, @type@ *src, intp *indarray, - intp nindarray, intp n_outer, - intp m_middle, intp nelem, - NPY_CLIPMODE clipmode) -{ - intp i, j, k, tmp; - - switch(clipmode) { - case NPY_RAISE: - for (i = 0; i < n_outer; i++) { - for (j = 0; j < m_middle; j++) { - tmp = indarray[j]; - if (tmp < 0) { - tmp = tmp+nindarray; - } - if ((tmp < 0) || (tmp >= nindarray)) { - PyErr_SetString(PyExc_IndexError, - "index out of range for array"); - return 1; - } - if (nelem == 1) { - *dest++ = *(src+tmp); - } - else { - for (k = 0; k < nelem; k++) { - *dest++ = *(src + tmp*nelem + k); - } - } - } - src += nelem*nindarray; - } - break; - case NPY_WRAP: - for (i = 0; i < n_outer; i++) { - for (j = 0; j < m_middle; j++) { - tmp = indarray[j]; - if (tmp < 0) { - while (tmp < 0) { - tmp += nindarray; - } - } - else if (tmp >= nindarray) { - while 
(tmp >= nindarray) { - tmp -= nindarray; - } - } - if (nelem == 1) { - *dest++ = *(src+tmp); - } - else { - for (k = 0; k < nelem; k++) { - *dest++ = *(src+tmp*nelem+k); - } - } - } - src += nelem*nindarray; - } - break; - case NPY_CLIP: - for (i = 0; i < n_outer; i++) { - for (j = 0; j < m_middle; j++) { - tmp = indarray[j]; - if (tmp < 0) { - tmp = 0; - } - else if (tmp >= nindarray) { - tmp = nindarray - 1; - } - if (nelem == 1) { - *dest++ = *(src+tmp); - } - else { - for (k = 0; k < nelem; k++) { - *dest++ = *(src + tmp*nelem + k); - } - } - } - src += nelem*nindarray; - } - break; - } - return 0; -} -/**end repeat**/ - -#define OBJECT_fasttake NULL - - -/* - ***************************************************************************** - ** SETUP FUNCTION POINTERS ** - ***************************************************************************** - */ - - -#define _ALIGN(type) offsetof(struct {char c; type v;}, v) -/* - * Disable harmless compiler warning "4116: unnamed type definition in - * parentheses" which is caused by the _ALIGN macro. 
- */ -#if defined(_MSC_VER) -#pragma warning(disable:4116) -#endif - - -/**begin repeat - * - * #from = VOID, STRING, UNICODE# - * #align = char, char, PyArray_UCS4# - * #NAME = Void, String, Unicode# - * #endian = |, |, =# -*/ -static PyArray_ArrFuncs _Py@NAME@_ArrFuncs = { - { - (PyArray_VectorUnaryFunc*)@from@_to_BOOL, - (PyArray_VectorUnaryFunc*)@from@_to_BYTE, - (PyArray_VectorUnaryFunc*)@from@_to_UBYTE, - (PyArray_VectorUnaryFunc*)@from@_to_SHORT, - (PyArray_VectorUnaryFunc*)@from@_to_USHORT, - (PyArray_VectorUnaryFunc*)@from@_to_INT, - (PyArray_VectorUnaryFunc*)@from@_to_UINT, - (PyArray_VectorUnaryFunc*)@from@_to_LONG, - (PyArray_VectorUnaryFunc*)@from@_to_ULONG, - (PyArray_VectorUnaryFunc*)@from@_to_LONGLONG, - (PyArray_VectorUnaryFunc*)@from@_to_ULONGLONG, - (PyArray_VectorUnaryFunc*)@from@_to_FLOAT, - (PyArray_VectorUnaryFunc*)@from@_to_DOUBLE, - (PyArray_VectorUnaryFunc*)@from@_to_LONGDOUBLE, - (PyArray_VectorUnaryFunc*)@from@_to_CFLOAT, - (PyArray_VectorUnaryFunc*)@from@_to_CDOUBLE, - (PyArray_VectorUnaryFunc*)@from@_to_CLONGDOUBLE, - (PyArray_VectorUnaryFunc*)@from@_to_OBJECT, - (PyArray_VectorUnaryFunc*)@from@_to_STRING, - (PyArray_VectorUnaryFunc*)@from@_to_UNICODE, - (PyArray_VectorUnaryFunc*)@from@_to_VOID, - }, - (PyArray_GetItemFunc*)@from@_getitem, - (PyArray_SetItemFunc*)@from@_setitem, - (PyArray_CopySwapNFunc*)@from@_copyswapn, - (PyArray_CopySwapFunc*)@from@_copyswap, - (PyArray_CompareFunc*)@from@_compare, - (PyArray_ArgFunc*)@from@_argmax, - (PyArray_DotFunc*)NULL, - (PyArray_ScanFunc*)@from@_scan, - (PyArray_FromStrFunc*)@from@_fromstr, - (PyArray_NonzeroFunc*)@from@_nonzero, - (PyArray_FillFunc*)NULL, - (PyArray_FillWithScalarFunc*)NULL, - { - NULL, NULL, NULL - }, - { - NULL, NULL, NULL - }, - NULL, - (PyArray_ScalarKindFunc*)NULL, - NULL, - NULL, - (PyArray_FastClipFunc *)NULL, - (PyArray_FastPutmaskFunc *)NULL, - (PyArray_FastTakeFunc *)NULL -}; - -/* - * FIXME: check for PY3K - */ -static PyArray_Descr @from@_Descr = { - 
PyObject_HEAD_INIT(&PyArrayDescr_Type) - &Py@NAME@ArrType_Type, - PyArray_@from@LTR, - PyArray_@from@LTR, - '@endian@', - 0, - PyArray_@from@, - 0, - _ALIGN(@align@), - NULL, - NULL, - NULL, - &_Py@NAME@_ArrFuncs, - NULL, -}; - -/**end repeat**/ - - -/**begin repeat - * - * #from = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, - * LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, - * CFLOAT, CDOUBLE, CLONGDOUBLE, OBJECT# - * #num = 1*14, 2*3, 1*1# - * #fromtyp = Bool, byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, longdouble, - * float, double, longdouble, PyObject *# - * #NAME = Bool, Byte, UByte, Short, UShort, Int, UInt, Long, ULong, - * LongLong, ULongLong, Float, Double, LongDouble, - * CFloat, CDouble, CLongDouble, Object# - * #kind = GENBOOL, SIGNED, UNSIGNED, SIGNED, UNSIGNED, SIGNED, UNSIGNED, SIGNED, UNSIGNED, - * SIGNED, UNSIGNED, FLOATING, FLOATING, FLOATING, - * COMPLEX, COMPLEX, COMPLEX, OBJECT# - * #endian = |*3, =*14, |# - * #isobject= 0*17,NPY_OBJECT_DTYPE_FLAGS# - */ -static PyArray_ArrFuncs _Py@NAME@_ArrFuncs = { - { - (PyArray_VectorUnaryFunc*)@from@_to_BOOL, - (PyArray_VectorUnaryFunc*)@from@_to_BYTE, - (PyArray_VectorUnaryFunc*)@from@_to_UBYTE, - (PyArray_VectorUnaryFunc*)@from@_to_SHORT, - (PyArray_VectorUnaryFunc*)@from@_to_USHORT, - (PyArray_VectorUnaryFunc*)@from@_to_INT, - (PyArray_VectorUnaryFunc*)@from@_to_UINT, - (PyArray_VectorUnaryFunc*)@from@_to_LONG, - (PyArray_VectorUnaryFunc*)@from@_to_ULONG, - (PyArray_VectorUnaryFunc*)@from@_to_LONGLONG, - (PyArray_VectorUnaryFunc*)@from@_to_ULONGLONG, - (PyArray_VectorUnaryFunc*)@from@_to_FLOAT, - (PyArray_VectorUnaryFunc*)@from@_to_DOUBLE, - (PyArray_VectorUnaryFunc*)@from@_to_LONGDOUBLE, - (PyArray_VectorUnaryFunc*)@from@_to_CFLOAT, - (PyArray_VectorUnaryFunc*)@from@_to_CDOUBLE, - (PyArray_VectorUnaryFunc*)@from@_to_CLONGDOUBLE, - (PyArray_VectorUnaryFunc*)@from@_to_OBJECT, - (PyArray_VectorUnaryFunc*)@from@_to_STRING, - 
(PyArray_VectorUnaryFunc*)@from@_to_UNICODE, - (PyArray_VectorUnaryFunc*)@from@_to_VOID, - }, - (PyArray_GetItemFunc*)@from@_getitem, - (PyArray_SetItemFunc*)@from@_setitem, - (PyArray_CopySwapNFunc*)@from@_copyswapn, - (PyArray_CopySwapFunc*)@from@_copyswap, - (PyArray_CompareFunc*)@from@_compare, - (PyArray_ArgFunc*)@from@_argmax, - (PyArray_DotFunc*)@from@_dot, - (PyArray_ScanFunc*)@from@_scan, - (PyArray_FromStrFunc*)@from@_fromstr, - (PyArray_NonzeroFunc*)@from@_nonzero, - (PyArray_FillFunc*)@from@_fill, - (PyArray_FillWithScalarFunc*)@from@_fillwithscalar, - { - NULL, NULL, NULL - }, - { - NULL, NULL, NULL - }, - NULL, - (PyArray_ScalarKindFunc*)NULL, - NULL, - NULL, - (PyArray_FastClipFunc*)@from@_fastclip, - (PyArray_FastPutmaskFunc*)@from@_fastputmask, - (PyArray_FastTakeFunc*)@from@_fasttake -}; - -/* - * FIXME: check for PY3K - */ -NPY_NO_EXPORT PyArray_Descr @from@_Descr = { - PyObject_HEAD_INIT(&PyArrayDescr_Type) - &Py@NAME@ArrType_Type, - PyArray_@kind@LTR, - PyArray_@from@LTR, - '@endian@', - @isobject@, - PyArray_@from@, - @num@*sizeof(@fromtyp@), - _ALIGN(@fromtyp@), - NULL, - NULL, - NULL, - &_Py@NAME@_ArrFuncs, - NULL, -}; - -/**end repeat**/ - -#define _MAX_LETTER 128 -static char _letter_to_num[_MAX_LETTER]; - -static PyArray_Descr *_builtin_descrs[] = { - &BOOL_Descr, - &BYTE_Descr, - &UBYTE_Descr, - &SHORT_Descr, - &USHORT_Descr, - &INT_Descr, - &UINT_Descr, - &LONG_Descr, - &ULONG_Descr, - &LONGLONG_Descr, - &ULONGLONG_Descr, - &FLOAT_Descr, - &DOUBLE_Descr, - &LONGDOUBLE_Descr, - &CFLOAT_Descr, - &CDOUBLE_Descr, - &CLONGDOUBLE_Descr, - &OBJECT_Descr, - &STRING_Descr, - &UNICODE_Descr, - &VOID_Descr, -}; - -/*NUMPY_API - * Get the PyArray_Descr structure for a type. 
- */ -NPY_NO_EXPORT PyArray_Descr * -PyArray_DescrFromType(int type) -{ - PyArray_Descr *ret = NULL; - - if (type < PyArray_NTYPES) { - ret = _builtin_descrs[type]; - } - else if (type == PyArray_NOTYPE) { - /* - * This needs to not raise an error so - * that PyArray_DescrFromType(PyArray_NOTYPE) - * works for backwards-compatible C-API - */ - return NULL; - } - else if ((type == PyArray_CHAR) || (type == PyArray_CHARLTR)) { - ret = PyArray_DescrNew(_builtin_descrs[PyArray_STRING]); - if (ret == NULL) { - return NULL; - } - ret->elsize = 1; - ret->type = PyArray_CHARLTR; - return ret; - } - else if (PyTypeNum_ISUSERDEF(type)) { - ret = userdescrs[type - PyArray_USERDEF]; - } - else { - int num = PyArray_NTYPES; - if (type < _MAX_LETTER) { - num = (int) _letter_to_num[type]; - } - if (num >= PyArray_NTYPES) { - ret = NULL; - } - else { - ret = _builtin_descrs[num]; - } - } - if (ret == NULL) { - PyErr_SetString(PyExc_ValueError, - "Invalid data-type for array"); - } - else { - Py_INCREF(ret); - } - - return ret; -} - - -/* - ***************************************************************************** - ** SETUP TYPE INFO ** - ***************************************************************************** - */ - - -NPY_NO_EXPORT int -set_typeinfo(PyObject *dict) -{ - PyObject *infodict, *s; - int i; - - for (i = 0; i < _MAX_LETTER; i++) { - _letter_to_num[i] = PyArray_NTYPES; - } - -/**begin repeat - * - * #name = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, INTP, UINTP, - * LONG, ULONG, LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, - * CFLOAT, CDOUBLE, CLONGDOUBLE, OBJECT, STRING, UNICODE, VOID# - */ - _letter_to_num[PyArray_@name@LTR] = PyArray_@name@; -/**end repeat**/ - _letter_to_num[PyArray_STRINGLTR2] = PyArray_STRING; - -/**begin repeat - * #name = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, - * LONG, ULONG, LONGLONG, ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, - * CFLOAT, CDOUBLE, CLONGDOUBLE, OBJECT, STRING, UNICODE, VOID# -*/ - @name@_Descr.fields = Py_None; 
-/**end repeat**/ - - /* Set a dictionary with type information */ - infodict = PyDict_New(); - if (infodict == NULL) return -1; - -#define BITSOF_INTP CHAR_BIT*SIZEOF_PY_INTPTR_T -#define BITSOF_BYTE CHAR_BIT - -/**begin repeat - * - * #name = BOOL, BYTE, UBYTE, SHORT, USHORT, INT, UINT, INTP, UINTP, - * LONG, ULONG, LONGLONG, ULONGLONG# - * #uname = BOOL, BYTE*2, SHORT*2, INT*2, INTP*2, LONG*2, LONGLONG*2# - * #Name = Bool, Byte, UByte, Short, UShort, Int, UInt, Intp, UIntp, - * Long, ULong, LongLong, ULongLong# - * #type = Bool, byte, ubyte, short, ushort, int, uint, intp, uintp, - * long, ulong, longlong, ulonglong# - * #max= 1, MAX_BYTE, MAX_UBYTE, MAX_SHORT, MAX_USHORT, MAX_INT, - * PyLong_FromUnsignedLong(MAX_UINT), PyLong_FromLongLong((longlong) MAX_INTP), - * PyLong_FromUnsignedLongLong((ulonglong) MAX_UINTP), MAX_LONG, - * PyLong_FromUnsignedLong((unsigned long) MAX_ULONG), - * PyLong_FromLongLong((longlong) MAX_LONGLONG), - * PyLong_FromUnsignedLongLong((ulonglong) MAX_ULONGLONG)# - * #min = 0, MIN_BYTE, 0, MIN_SHORT, 0, MIN_INT, 0, - * PyLong_FromLongLong((longlong) MIN_INTP), 0, MIN_LONG, 0, - * PyLong_FromLongLong((longlong) MIN_LONGLONG),0# - * #cx = i*6, N, N, N, l, N, N, N# - * #cn = i*7, N, i, l, i, N, i# -*/ - PyDict_SetItemString(infodict, "@name@", -#if defined(NPY_PY3K) - s = Py_BuildValue("Ciii@cx@@cn@O", -#else - s = Py_BuildValue("ciii@cx@@cn@O", -#endif - PyArray_@name@LTR, - PyArray_@name@, - BITSOF_@uname@, - _ALIGN(@type@), - @max@, - @min@, - (PyObject *) &Py@Name@ArrType_Type)); - Py_DECREF(s); -/**end repeat**/ - -#define BITSOF_CFLOAT 2*BITSOF_FLOAT -#define BITSOF_CDOUBLE 2*BITSOF_DOUBLE -#define BITSOF_CLONGDOUBLE 2*BITSOF_LONGDOUBLE - -/**begin repeat - * - * #type = float, double, longdouble, cfloat, cdouble, clongdouble# - * #name = FLOAT, DOUBLE, LONGDOUBLE, CFLOAT, CDOUBLE, CLONGDOUBLE# - * #Name = Float, Double, LongDouble, CFloat, CDouble, CLongDouble# - */ - PyDict_SetItemString(infodict, "@name@", -#if defined(NPY_PY3K) - 
s = Py_BuildValue("CiiiO", PyArray_@name@LTR, -#else - s = Py_BuildValue("ciiiO", PyArray_@name@LTR, -#endif - PyArray_@name@, - BITSOF_@name@, - _ALIGN(@type@), - (PyObject *) &Py@Name@ArrType_Type)); - Py_DECREF(s); -/**end repeat**/ - - PyDict_SetItemString(infodict, "OBJECT", -#if defined(NPY_PY3K) - s = Py_BuildValue("CiiiO", PyArray_OBJECTLTR, -#else - s = Py_BuildValue("ciiiO", PyArray_OBJECTLTR, -#endif - PyArray_OBJECT, - sizeof(PyObject *) * CHAR_BIT, - _ALIGN(PyObject *), - (PyObject *) &PyObjectArrType_Type)); - Py_DECREF(s); - PyDict_SetItemString(infodict, "STRING", -#if defined(NPY_PY3K) - s = Py_BuildValue("CiiiO", PyArray_STRINGLTR, -#else - s = Py_BuildValue("ciiiO", PyArray_STRINGLTR, -#endif - PyArray_STRING, - 0, - _ALIGN(char), - (PyObject *) &PyStringArrType_Type)); - Py_DECREF(s); - PyDict_SetItemString(infodict, "UNICODE", -#if defined(NPY_PY3K) - s = Py_BuildValue("CiiiO", PyArray_UNICODELTR, -#else - s = Py_BuildValue("ciiiO", PyArray_UNICODELTR, -#endif - PyArray_UNICODE, - 0, - _ALIGN(PyArray_UCS4), - (PyObject *) &PyUnicodeArrType_Type)); - Py_DECREF(s); - PyDict_SetItemString(infodict, "VOID", -#if defined(NPY_PY3K) - s = Py_BuildValue("CiiiO", PyArray_VOIDLTR, -#else - s = Py_BuildValue("ciiiO", PyArray_VOIDLTR, -#endif - PyArray_VOID, - 0, - _ALIGN(char), - (PyObject *) &PyVoidArrType_Type)); - Py_DECREF(s); - -#define SETTYPE(name) \ - Py_INCREF(&Py##name##ArrType_Type); \ - PyDict_SetItemString(infodict, #name, \ - (PyObject *)&Py##name##ArrType_Type) - - SETTYPE(Generic); - SETTYPE(Number); - SETTYPE(Integer); - SETTYPE(Inexact); - SETTYPE(SignedInteger); - SETTYPE(UnsignedInteger); - SETTYPE(Floating); - SETTYPE(ComplexFloating); - SETTYPE(Flexible); - SETTYPE(Character); - -#undef SETTYPE - - PyDict_SetItemString(dict, "typeinfo", infodict); - Py_DECREF(infodict); - return 0; -} - -#undef _MAX_LETTER diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/arraytypes.h 
b/pythonPackages/numpy/numpy/core/src/multiarray/arraytypes.h deleted file mode 100755 index ff7d4ae408..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/arraytypes.h +++ /dev/null @@ -1,13 +0,0 @@ -#ifndef _NPY_ARRAYTYPES_H_ -#define _NPY_ARRAYTYPES_H_ - -#ifdef NPY_ENABLE_SEPARATE_COMPILATION -extern NPY_NO_EXPORT PyArray_Descr LONGLONG_Descr; -extern NPY_NO_EXPORT PyArray_Descr LONG_Descr; -extern NPY_NO_EXPORT PyArray_Descr INT_Descr; -#endif - -NPY_NO_EXPORT int -set_typeinfo(PyObject *dict); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/buffer.c b/pythonPackages/numpy/numpy/core/src/multiarray/buffer.c deleted file mode 100755 index 9fedece17a..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/buffer.c +++ /dev/null @@ -1,783 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "buffer.h" -#include "numpyos.h" - -/************************************************************************* - **************** Implement Buffer Protocol **************************** - *************************************************************************/ - -/* removed multiple segment interface */ - -static Py_ssize_t -array_getsegcount(PyArrayObject *self, Py_ssize_t *lenp) -{ - if (lenp) { - *lenp = PyArray_NBYTES(self); - } - if (PyArray_ISONESEGMENT(self)) { - return 1; - } - if (lenp) { - *lenp = 0; - } - return 0; -} - -static Py_ssize_t -array_getreadbuf(PyArrayObject *self, Py_ssize_t segment, void **ptrptr) -{ - if (segment != 0) { - PyErr_SetString(PyExc_ValueError, - "accessing non-existing array segment"); - return -1; - } - if (PyArray_ISONESEGMENT(self)) { - *ptrptr = self->data; - return PyArray_NBYTES(self); - } - PyErr_SetString(PyExc_ValueError, "array is not a single segment"); - *ptrptr = NULL; - 
return -1;
-}
-
-
-static Py_ssize_t
-array_getwritebuf(PyArrayObject *self, Py_ssize_t segment, void **ptrptr)
-{
-    if (PyArray_CHKFLAGS(self, WRITEABLE)) {
-        return array_getreadbuf(self, segment, (void **) ptrptr);
-    }
-    else {
-        PyErr_SetString(PyExc_ValueError, "array cannot be "
-                        "accessed as a writeable buffer");
-        return -1;
-    }
-}
-
-static Py_ssize_t
-array_getcharbuf(PyArrayObject *self, Py_ssize_t segment, const char **ptrptr)
-{
-    return array_getreadbuf(self, segment, (void **) ptrptr);
-}
-
-
-/*************************************************************************
- * PEP 3118 buffer protocol
- *
- * Implementing PEP 3118 is somewhat convoluted because of the desiderata:
- *
- * - Don't add new members to ndarray or descr structs, to preserve binary
- *   compatibility. (Also, adding the items is actually not very useful,
- *   since mutability issues prevent a 1-to-1 relationship between arrays
- *   and buffer views.)
- *
- * - Don't use bf_releasebuffer, because it prevents PyArg_ParseTuple("s#", ...)
- *   from working. Breaking this would cause several backward compatibility
- *   issues already on Python 2.6.
- *
- * - Behave correctly when the array is reshaped in-place, or its dtype is
- *   altered.
- *
- * The solution taken below is to manually track memory allocated for
- * Py_buffers.
- *************************************************************************/
-
-#if PY_VERSION_HEX >= 0x02060000
-
-/*
- * Format string translator
- *
- * Translate PyArray_Descr to a PEP 3118 format string.
- */ - -/* Fast string 'class' */ -typedef struct { - char *s; - int allocated; - int pos; -} _tmp_string_t; - -static int -_append_char(_tmp_string_t *s, char c) -{ - char *p; - if (s->s == NULL) { - s->s = (char*)malloc(16); - s->pos = 0; - s->allocated = 16; - } - if (s->pos >= s->allocated) { - p = (char*)realloc(s->s, 2*s->allocated); - if (p == NULL) { - PyErr_SetString(PyExc_MemoryError, "memory allocation failed"); - return -1; - } - s->s = p; - s->allocated *= 2; - } - s->s[s->pos] = c; - ++s->pos; - return 0; -} - -static int -_append_str(_tmp_string_t *s, char *c) -{ - while (*c != '\0') { - if (_append_char(s, *c)) return -1; - ++c; - } - return 0; -} - -/* - * Return non-zero if a type is aligned in each item in the given array, - * AND, the descr element size is a multiple of the alignment, - * AND, the array data is positioned to alignment granularity. - */ -static int -_is_natively_aligned_at(PyArray_Descr *descr, - PyArrayObject *arr, Py_ssize_t offset) -{ - int k; - - if ((Py_ssize_t)(arr->data) % descr->alignment != 0) { - return 0; - } - - if (offset % descr->alignment != 0) { - return 0; - } - - if (descr->elsize % descr->alignment) { - return 0; - } - - for (k = 0; k < arr->nd; ++k) { - if (arr->dimensions[k] > 1) { - if (arr->strides[k] % descr->alignment != 0) { - return 0; - } - } - } - - return 1; -} - -static int -_buffer_format_string(PyArray_Descr *descr, _tmp_string_t *str, - PyArrayObject* arr, Py_ssize_t *offset, - char *active_byteorder) -{ - int k; - char _active_byteorder = '@'; - Py_ssize_t _offset = 0; - - if (active_byteorder == NULL) { - active_byteorder = &_active_byteorder; - } - if (offset == NULL) { - offset = &_offset; - } - - if (descr->subarray) { - PyObject *item; - Py_ssize_t total_count = 1; - Py_ssize_t dim_size; - char buf[128]; - int old_offset; - int ret; - - _append_char(str, '('); - for (k = 0; k < PyTuple_GET_SIZE(descr->subarray->shape); ++k) { - if (k > 0) { - _append_char(str, ','); - } - item = 
PyTuple_GET_ITEM(descr->subarray->shape, k); - dim_size = PyNumber_AsSsize_t(item, NULL); - - PyOS_snprintf(buf, sizeof(buf), "%ld", (long)dim_size); - _append_str(str, buf); - total_count *= dim_size; - } - _append_char(str, ')'); - old_offset = *offset; - ret = _buffer_format_string(descr->subarray->base, str, arr, offset, - active_byteorder); - *offset = old_offset + (*offset - old_offset) * total_count; - return ret; - } - else if (PyDataType_HASFIELDS(descr)) { - _append_str(str, "T{"); - for (k = 0; k < PyTuple_GET_SIZE(descr->names); ++k) { - PyObject *name, *item, *offset_obj, *tmp; - PyArray_Descr *child; - char *p; - Py_ssize_t len; - int new_offset; - - name = PyTuple_GET_ITEM(descr->names, k); - item = PyDict_GetItem(descr->fields, name); - - child = (PyArray_Descr*)PyTuple_GetItem(item, 0); - offset_obj = PyTuple_GetItem(item, 1); - new_offset = PyInt_AsLong(offset_obj); - - /* Insert padding manually */ - while (*offset < new_offset) { - _append_char(str, 'x'); - ++*offset; - } - *offset += child->elsize; - - /* Insert child item */ - _buffer_format_string(child, str, arr, offset, - active_byteorder); - - /* Insert field name */ -#if defined(NPY_PY3K) - /* FIXME: XXX -- should it use UTF-8 here? 
*/ - tmp = PyUnicode_AsUTF8String(name); -#else - tmp = name; -#endif - if (tmp == NULL || PyBytes_AsStringAndSize(tmp, &p, &len) < 0) { - PyErr_SetString(PyExc_ValueError, "invalid field name"); - return -1; - } - _append_char(str, ':'); - while (len > 0) { - if (*p == ':') { - Py_DECREF(tmp); - PyErr_SetString(PyExc_ValueError, - "':' is not an allowed character in buffer " - "field names"); - return -1; - } - _append_char(str, *p); - ++p; - --len; - } - _append_char(str, ':'); -#if defined(NPY_PY3K) - Py_DECREF(tmp); -#endif - } - _append_char(str, '}'); - } - else { - int is_standard_size = 1; - int is_native_only_type = (descr->type_num == NPY_LONGDOUBLE || - descr->type_num == NPY_CLONGDOUBLE); -#if NPY_SIZEOF_LONG_LONG != 8 - is_native_only_type = is_native_only_type || ( - descr->type_num == NPY_LONGLONG || - descr->type_num == NPY_ULONGLONG); -#endif - - if (descr->byteorder == '=' && - _is_natively_aligned_at(descr, arr, *offset)) { - /* Prefer native types, to cater for Cython */ - is_standard_size = 0; - if (*active_byteorder != '@') { - _append_char(str, '@'); - *active_byteorder = '@'; - } - } - else if (descr->byteorder == '=' && is_native_only_type) { - /* Data types that have no standard size */ - is_standard_size = 0; - if (*active_byteorder != '^') { - _append_char(str, '^'); - *active_byteorder = '^'; - } - } - else if (descr->byteorder == '<' || descr->byteorder == '>' || - descr->byteorder == '=') { - is_standard_size = 1; - if (*active_byteorder != descr->byteorder) { - _append_char(str, descr->byteorder); - *active_byteorder = descr->byteorder; - } - - if (is_native_only_type) { - /* It's not possible to express native-only data types - in non-native byte orders */ - PyErr_Format(PyExc_ValueError, - "cannot expose native-only dtype '%c' in " - "non-native byte order '%c' via buffer interface", - descr->type, descr->byteorder); - } - } - - switch (descr->type_num) { - case NPY_BOOL: if (_append_char(str, '?')) return -1; break; - case 
NPY_BYTE: if (_append_char(str, 'b')) return -1; break; - case NPY_UBYTE: if (_append_char(str, 'B')) return -1; break; - case NPY_SHORT: if (_append_char(str, 'h')) return -1; break; - case NPY_USHORT: if (_append_char(str, 'H')) return -1; break; - case NPY_INT: if (_append_char(str, 'i')) return -1; break; - case NPY_UINT: if (_append_char(str, 'I')) return -1; break; - case NPY_LONG: - if (is_standard_size && (NPY_SIZEOF_LONG == 8)) { - if (_append_char(str, 'q')) return -1; - } - else { - if (_append_char(str, 'l')) return -1; - } - break; - case NPY_ULONG: - if (is_standard_size && (NPY_SIZEOF_LONG == 8)) { - if (_append_char(str, 'Q')) return -1; - } - else { - if (_append_char(str, 'L')) return -1; - } - break; - case NPY_LONGLONG: if (_append_char(str, 'q')) return -1; break; - case NPY_ULONGLONG: if (_append_char(str, 'Q')) return -1; break; - case NPY_FLOAT: if (_append_char(str, 'f')) return -1; break; - case NPY_DOUBLE: if (_append_char(str, 'd')) return -1; break; - case NPY_LONGDOUBLE: if (_append_char(str, 'g')) return -1; break; - case NPY_CFLOAT: if (_append_str(str, "Zf")) return -1; break; - case NPY_CDOUBLE: if (_append_str(str, "Zd")) return -1; break; - case NPY_CLONGDOUBLE: if (_append_str(str, "Zg")) return -1; break; - /* XXX: datetime */ - /* XXX: timedelta */ - case NPY_OBJECT: if (_append_char(str, 'O')) return -1; break; - case NPY_STRING: { - char buf[128]; - PyOS_snprintf(buf, sizeof(buf), "%ds", descr->elsize); - if (_append_str(str, buf)) return -1; - break; - } - case NPY_UNICODE: { - /* Numpy Unicode is always 4-byte */ - char buf[128]; - assert(descr->elsize % 4 == 0); - PyOS_snprintf(buf, sizeof(buf), "%dw", descr->elsize / 4); - if (_append_str(str, buf)) return -1; - break; - } - case NPY_VOID: { - /* Insert padding bytes */ - char buf[128]; - PyOS_snprintf(buf, sizeof(buf), "%dx", descr->elsize); - if (_append_str(str, buf)) return -1; - break; - } - default: - PyErr_Format(PyExc_ValueError, - "cannot include dtype '%c' in a 
buffer", - descr->type); - return -1; - } - } - - return 0; -} - - -/* - * Global information about all active buffers - * - * Note: because for backward compatibility we cannot define bf_releasebuffer, - * we must manually keep track of the additional data required by the buffers. - */ - -/* Additional per-array data required for providing the buffer interface */ -typedef struct { - char *format; - int ndim; - Py_ssize_t *strides; - Py_ssize_t *shape; -} _buffer_info_t; - -/* - * { id(array): [list of pointers to _buffer_info_t, the last one is latest] } - * - * Because shape, strides, and format can be different for different buffers, - * we may need to keep track of multiple buffer infos for each array. - * - * However, when none of them has changed, the same buffer info may be reused. - * - * Thread-safety is provided by GIL. - */ -static PyObject *_buffer_info_cache = NULL; - -/* Fill in the info structure */ -static _buffer_info_t* -_buffer_info_new(PyArrayObject *arr) -{ - _buffer_info_t *info; - _tmp_string_t fmt = {0,0,0}; - int k; - - info = (_buffer_info_t*)malloc(sizeof(_buffer_info_t)); - - /* Fill in format */ - if (_buffer_format_string(PyArray_DESCR(arr), &fmt, arr, NULL, NULL) != 0) { - free(info); - return NULL; - } - _append_char(&fmt, '\0'); - info->format = fmt.s; - - /* Fill in shape and strides */ - info->ndim = PyArray_NDIM(arr); - - if (info->ndim == 0) { - info->shape = NULL; - info->strides = NULL; - } - else { - info->shape = (Py_ssize_t*)malloc(sizeof(Py_ssize_t) - * PyArray_NDIM(arr) * 2 + 1); - info->strides = info->shape + PyArray_NDIM(arr); - for (k = 0; k < PyArray_NDIM(arr); ++k) { - info->shape[k] = PyArray_DIMS(arr)[k]; - info->strides[k] = PyArray_STRIDES(arr)[k]; - } - } - - return info; -} - -/* Compare two info structures */ -static Py_ssize_t -_buffer_info_cmp(_buffer_info_t *a, _buffer_info_t *b) -{ - Py_ssize_t c; - int k; - - c = strcmp(a->format, b->format); - if (c != 0) return c; - - c = a->ndim - b->ndim; - if (c != 
0) return c; - - for (k = 0; k < a->ndim; ++k) { - c = a->shape[k] - b->shape[k]; - if (c != 0) return c; - c = a->strides[k] - b->strides[k]; - if (c != 0) return c; - } - - return 0; -} - -static void -_buffer_info_free(_buffer_info_t *info) -{ - if (info->format) { - free(info->format); - } - if (info->shape) { - free(info->shape); - } - free(info); -} - -/* Get buffer info from the global dictionary */ -static _buffer_info_t* -_buffer_get_info(PyObject *arr) -{ - PyObject *key, *item_list, *item; - _buffer_info_t *info = NULL, *old_info = NULL; - - if (_buffer_info_cache == NULL) { - _buffer_info_cache = PyDict_New(); - if (_buffer_info_cache == NULL) { - return NULL; - } - } - - /* Compute information */ - info = _buffer_info_new((PyArrayObject*)arr); - if (info == NULL) { - return NULL; - } - - /* Check if it is identical with an old one; reuse old one, if yes */ - key = PyLong_FromVoidPtr((void*)arr); - item_list = PyDict_GetItem(_buffer_info_cache, key); - - if (item_list != NULL) { - Py_INCREF(item_list); - if (PyList_GET_SIZE(item_list) > 0) { - item = PyList_GetItem(item_list, PyList_GET_SIZE(item_list) - 1); - old_info = (_buffer_info_t*)PyLong_AsVoidPtr(item); - - if (_buffer_info_cmp(info, old_info) == 0) { - _buffer_info_free(info); - info = old_info; - } - } - } - else { - item_list = PyList_New(0); - PyDict_SetItem(_buffer_info_cache, key, item_list); - } - - if (info != old_info) { - /* Needs insertion */ - item = PyLong_FromVoidPtr((void*)info); - PyList_Append(item_list, item); - Py_DECREF(item); - } - - Py_DECREF(item_list); - Py_DECREF(key); - return info; -} - -/* Clear buffer info from the global dictionary */ -static void -_buffer_clear_info(PyObject *arr) -{ - PyObject *key, *item_list, *item; - _buffer_info_t *info; - int k; - - if (_buffer_info_cache == NULL) { - return; - } - - key = PyLong_FromVoidPtr((void*)arr); - item_list = PyDict_GetItem(_buffer_info_cache, key); - if (item_list != NULL) { - for (k = 0; k < 
PyList_GET_SIZE(item_list); ++k) { - item = PyList_GET_ITEM(item_list, k); - info = (_buffer_info_t*)PyLong_AsVoidPtr(item); - _buffer_info_free(info); - } - PyDict_DelItem(_buffer_info_cache, key); - } - - Py_DECREF(key); -} - -/* - * Retrieving buffers - */ - -static int -array_getbuffer(PyObject *obj, Py_buffer *view, int flags) -{ - PyArrayObject *self; - _buffer_info_t *info = NULL; - - self = (PyArrayObject*)obj; - - /* Check whether we can provide the wanted properties */ - if ((flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS && - !PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)) { - PyErr_SetString(PyExc_ValueError, "ndarray is not C-contiguous"); - goto fail; - } - if ((flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS && - !PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)) { - PyErr_SetString(PyExc_ValueError, "ndarray is not Fortran contiguous"); - goto fail; - } - if ((flags & PyBUF_ANY_CONTIGUOUS) == PyBUF_ANY_CONTIGUOUS - && !PyArray_ISONESEGMENT(self)) { - PyErr_SetString(PyExc_ValueError, "ndarray is not contiguous"); - goto fail; - } - if ((flags & PyBUF_STRIDES) != PyBUF_STRIDES && - (flags & PyBUF_ND) == PyBUF_ND && - !PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)) { - /* Non-strided N-dim buffers must be C-contiguous */ - PyErr_SetString(PyExc_ValueError, "ndarray is not C-contiguous"); - goto fail; - } - if ((flags & PyBUF_WRITEABLE) == PyBUF_WRITEABLE && - !PyArray_ISWRITEABLE(self)) { - PyErr_SetString(PyExc_ValueError, "ndarray is not writeable"); - goto fail; - } - - if (view == NULL) { - PyErr_SetString(PyExc_ValueError, "NULL view in getbuffer"); - goto fail; - } - - /* Fill in information */ - info = _buffer_get_info(obj); - if (info == NULL) { - goto fail; - } - - view->buf = PyArray_DATA(self); - view->suboffsets = NULL; - view->itemsize = PyArray_ITEMSIZE(self); - view->readonly = !PyArray_ISWRITEABLE(self); - view->internal = NULL; - view->len = PyArray_NBYTES(self); - if ((flags & PyBUF_FORMAT) == PyBUF_FORMAT) { - view->format = info->format; - } 
else {
-        view->format = NULL;
-    }
-    if ((flags & PyBUF_ND) == PyBUF_ND) {
-        view->ndim = info->ndim;
-        view->shape = info->shape;
-    }
-    else {
-        view->ndim = 0;
-        view->shape = NULL;
-    }
-    if ((flags & PyBUF_STRIDES) == PyBUF_STRIDES) {
-        view->strides = info->strides;
-    }
-    else {
-        view->strides = NULL;
-    }
-    view->obj = (PyObject*)self;
-
-    Py_INCREF(self);
-    return 0;
-
-fail:
-    return -1;
-}
-
-
-/*
- * NOTE: for backward compatibility (esp. with PyArg_ParseTuple("s#", ...))
- * we do *not* define bf_releasebuffer at all.
- *
- * Instead, any extra data allocated with the buffer is released only in
- * array_dealloc.
- *
- * Ensuring that the buffer stays in place is taken care of by refcounting;
- * ndarrays do not reallocate if there are references to them, and a buffer
- * view holds one reference.
- */
-
-NPY_NO_EXPORT void
-_array_dealloc_buffer_info(PyArrayObject *self)
-{
-    _buffer_clear_info((PyObject*)self);
-}
-
-#else
-
-NPY_NO_EXPORT void
-_array_dealloc_buffer_info(PyArrayObject *self)
-{
-}
-
-#endif
-
-/*************************************************************************/
-
-NPY_NO_EXPORT PyBufferProcs array_as_buffer = {
-#if !defined(NPY_PY3K)
-#if PY_VERSION_HEX >= 0x02050000
-    (readbufferproc)array_getreadbuf,       /*bf_getreadbuffer*/
-    (writebufferproc)array_getwritebuf,     /*bf_getwritebuffer*/
-    (segcountproc)array_getsegcount,        /*bf_getsegcount*/
-    (charbufferproc)array_getcharbuf,       /*bf_getcharbuffer*/
-#else
-    (getreadbufferproc)array_getreadbuf,    /*bf_getreadbuffer*/
-    (getwritebufferproc)array_getwritebuf,  /*bf_getwritebuffer*/
-    (getsegcountproc)array_getsegcount,     /*bf_getsegcount*/
-    (getcharbufferproc)array_getcharbuf,    /*bf_getcharbuffer*/
-#endif
-#endif
-#if PY_VERSION_HEX >= 0x02060000
-    (getbufferproc)array_getbuffer,
-    (releasebufferproc)0,
-#endif
-};
-
-
-/*************************************************************************
- * Convert PEP 3118 format string to PyArray_Descr
- */
-#if PY_VERSION_HEX >= 0x02060000
-
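The converter declared below is the other half of the buffer-protocol round trip: `array_getbuffer` above exports an ndarray as a PEP 3118 format string, and `_descriptor_from_pep3118_format` turns such a string back into a dtype. A minimal Python-level sketch of the round trip (the exact format characters follow the `struct`-style syntax of PEP 3118; the structured-dtype output below is an assumption about the `T{...}` extension syntax, not asserted exactly):

```python
import numpy as np

# ndarray -> PEP 3118: the exported format string describes one element
# ('d' is the struct-syntax code for a native C double).
a = np.zeros(3, dtype=np.float64)
m = memoryview(a)
print(m.format)                    # 'd'

# PEP 3118 -> dtype: consuming the buffer recovers the dtype from the
# format string (the role of _descriptor_from_pep3118_format).
b = np.asarray(m)
print(b.dtype)                     # float64

# Structured dtypes are exported using the 'T{...}' extension syntax,
# with each field name delimited by ':' characters.
s = np.zeros(2, dtype=[('x', np.uint16), ('y', np.float64)])
print(memoryview(s).format)
```

Note that `np.asarray(m)` wraps the buffer without copying, which is why the code above goes to such lengths to keep the exported shape/strides/format data alive for as long as the array exists.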
-NPY_NO_EXPORT PyArray_Descr* -_descriptor_from_pep3118_format(char *s) -{ - char *buf, *p; - int in_name = 0; - PyObject *descr; - PyObject *str; - PyObject *_numpy_internal; - - if (s == NULL) { - return PyArray_DescrNewFromType(PyArray_BYTE); - } - - /* Strip whitespace, except from field names */ - buf = (char*)malloc(strlen(s) + 1); - p = buf; - while (*s != '\0') { - if (*s == ':') { - in_name = !in_name; - *p = *s; - } - else if (in_name || !NumPyOS_ascii_isspace(*s)) { - *p = *s; - } - ++p; - ++s; - } - *p = '\0'; - - str = PyUString_FromStringAndSize(buf, strlen(buf)); - free(buf); - if (str == NULL) { - return NULL; - } - - /* Convert */ - _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal == NULL) { - Py_DECREF(str); - return NULL; - } - descr = (PyArray_Descr*)PyObject_CallMethod( - _numpy_internal, "_dtype_from_pep3118", "O", str); - Py_DECREF(str); - Py_DECREF(_numpy_internal); - if (descr == NULL) { - PyErr_Format(PyExc_ValueError, - "'%s' is not a valid PEP 3118 buffer format string", buf); - return NULL; - } - if (!PyArray_DescrCheck(descr)) { - PyErr_Format(PyExc_RuntimeError, - "internal error: numpy.core._internal._dtype_from_pep3118 " - "did not return a valid dtype, got %s", buf); - return NULL; - } - return (PyArray_Descr*)descr; -} - -#else - -NPY_NO_EXPORT PyArray_Descr* -_descriptor_from_pep3118_format(char *s) -{ - PyErr_SetString(PyExc_RuntimeError, - "PEP 3118 is not supported on Python versions < 2.6"); - return NULL; -} - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/buffer.h b/pythonPackages/numpy/numpy/core/src/multiarray/buffer.h deleted file mode 100755 index c0a1f8e260..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/buffer.h +++ /dev/null @@ -1,16 +0,0 @@ -#ifndef _NPY_PRIVATE_BUFFER_H_ -#define _NPY_PRIVATE_BUFFER_H_ - -#ifdef NPY_ENABLE_SEPARATE_COMPILATION -extern NPY_NO_EXPORT PyBufferProcs array_as_buffer; -#else -NPY_NO_EXPORT PyBufferProcs 
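The deleted `_descriptor_from_pep3118_format` first strips whitespace from the format string, except between `:` delimiters, where PEP 3118 field names may legitimately contain spaces, before handing the result to `numpy.core._internal._dtype_from_pep3118`. A sketch of that preprocessing step in pure Python (the helper name is invented; the C loop writes into a malloc'd copy instead of building a list):

```python
def strip_pep3118_whitespace(fmt):
    """Drop ASCII whitespace from a PEP 3118 format string, except inside
    ':'-delimited field names, mirroring the deleted C preprocessing."""
    out = []
    in_name = False
    for ch in fmt:
        if ch == ':':
            in_name = not in_name  # entering/leaving a field name
            out.append(ch)
        elif in_name or not ch.isspace():
            out.append(ch)
    return ''.join(out)
```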
array_as_buffer; -#endif - -NPY_NO_EXPORT void -_array_dealloc_buffer_info(PyArrayObject *self); - -NPY_NO_EXPORT PyArray_Descr* -_descriptor_from_pep3118_format(char *s); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/calculation.c b/pythonPackages/numpy/numpy/core/src/multiarray/calculation.c deleted file mode 100755 index bc078f0978..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/calculation.c +++ /dev/null @@ -1,1085 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include <Python.h> -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "common.h" -#include "number.h" - -#include "calculation.h" - -/* FIXME: just remove _check_axis ? */ -#define _check_axis PyArray_CheckAxis -#define PyAO PyArrayObject - -static double -power_of_ten(int n) -{ - static const double p10[] = {1e0, 1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8}; - double ret; - if (n < 9) { - ret = p10[n]; - } - else { - ret = 1e9; - while (n-- > 9) { - ret *= 10.; - } - } - return ret; -} - -/*NUMPY_API - * ArgMax - */ -NPY_NO_EXPORT PyObject * -PyArray_ArgMax(PyArrayObject *op, int axis, PyArrayObject *out) -{ - PyArrayObject *ap = NULL, *rp = NULL; - PyArray_ArgFunc* arg_func; - char *ip; - intp *rptr; - intp i, n, m; - int elsize; - int copyret = 0; - NPY_BEGIN_THREADS_DEF; - - if ((ap=(PyAO *)_check_axis(op, &axis, 0)) == NULL) { - return NULL; - } - /* - * We need to permute the array so that axis is placed at the end. - * And all other dimensions are shifted left.
- */ - if (axis != ap->nd-1) { - PyArray_Dims newaxes; - intp dims[MAX_DIMS]; - int i; - - newaxes.ptr = dims; - newaxes.len = ap->nd; - for (i = 0; i < axis; i++) dims[i] = i; - for (i = axis; i < ap->nd - 1; i++) dims[i] = i + 1; - dims[ap->nd - 1] = axis; - op = (PyAO *)PyArray_Transpose(ap, &newaxes); - Py_DECREF(ap); - if (op == NULL) { - return NULL; - } - } - else { - op = ap; - } - - /* Will get native-byte order contiguous copy. */ - ap = (PyArrayObject *) - PyArray_ContiguousFromAny((PyObject *)op, - op->descr->type_num, 1, 0); - Py_DECREF(op); - if (ap == NULL) { - return NULL; - } - arg_func = ap->descr->f->argmax; - if (arg_func == NULL) { - PyErr_SetString(PyExc_TypeError, "data type not ordered"); - goto fail; - } - elsize = ap->descr->elsize; - m = ap->dimensions[ap->nd-1]; - if (m == 0) { - PyErr_SetString(PyExc_ValueError, - "attempt to get argmax/argmin "\ - "of an empty sequence"); - goto fail; - } - - if (!out) { - rp = (PyArrayObject *)PyArray_New(Py_TYPE(ap), ap->nd-1, - ap->dimensions, PyArray_INTP, - NULL, NULL, 0, 0, - (PyObject *)ap); - if (rp == NULL) { - goto fail; - } - } - else { - if (PyArray_SIZE(out) != - PyArray_MultiplyList(ap->dimensions, ap->nd - 1)) { - PyErr_SetString(PyExc_TypeError, - "invalid shape for output array."); - } - rp = (PyArrayObject *)\ - PyArray_FromArray(out, - PyArray_DescrFromType(PyArray_INTP), - NPY_CARRAY | NPY_UPDATEIFCOPY); - if (rp == NULL) { - goto fail; - } - if (rp != out) { - copyret = 1; - } - } - - NPY_BEGIN_THREADS_DESCR(ap->descr); - n = PyArray_SIZE(ap)/m; - rptr = (intp *)rp->data; - for (ip = ap->data, i = 0; i < n; i++, ip += elsize*m) { - arg_func(ip, m, rptr, ap); - rptr += 1; - } - NPY_END_THREADS_DESCR(ap->descr); - - Py_DECREF(ap); - if (copyret) { - PyArrayObject *obj; - obj = (PyArrayObject *)rp->base; - Py_INCREF(obj); - Py_DECREF(rp); - rp = obj; - } - return (PyObject *)rp; - - fail: - Py_DECREF(ap); - Py_XDECREF(rp); - return NULL; -} - -/*NUMPY_API - * ArgMin - */ 
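Once `PyArray_ArgMax` has moved the target axis to the end and taken a contiguous copy, the reduction is just a first-occurrence max scan over each trailing run of length `m`. A pure-Python sketch of that inner loop (the function name is invented; `rows` stands in for the reshaped contiguous data):

```python
def argmax_last_axis(rows):
    """For each trailing run (row), return the index of the first maximal
    element, as the per-type argmax functions do in the deleted C loop."""
    return [max(range(len(r)), key=r.__getitem__) for r in rows]
```

Python's `max` keeps the first element among ties, matching argmax's first-occurrence semantics.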
-NPY_NO_EXPORT PyObject * -PyArray_ArgMin(PyArrayObject *ap, int axis, PyArrayObject *out) -{ - PyObject *obj, *new, *ret; - - if (PyArray_ISFLEXIBLE(ap)) { - PyErr_SetString(PyExc_TypeError, - "argmax is unsupported for this type"); - return NULL; - } - else if (PyArray_ISUNSIGNED(ap)) { - obj = PyInt_FromLong((long) -1); - } - else if (PyArray_TYPE(ap) == PyArray_BOOL) { - obj = PyInt_FromLong((long) 1); - } - else { - obj = PyInt_FromLong((long) 0); - } - new = PyArray_EnsureAnyArray(PyNumber_Subtract(obj, (PyObject *)ap)); - Py_DECREF(obj); - if (new == NULL) { - return NULL; - } - ret = PyArray_ArgMax((PyArrayObject *)new, axis, out); - Py_DECREF(new); - return ret; -} - -/*NUMPY_API - * Max - */ -NPY_NO_EXPORT PyObject * -PyArray_Max(PyArrayObject *ap, int axis, PyArrayObject *out) -{ - PyArrayObject *arr; - PyObject *ret; - - if ((arr=(PyArrayObject *)_check_axis(ap, &axis, 0)) == NULL) { - return NULL; - } - ret = PyArray_GenericReduceFunction(arr, n_ops.maximum, axis, - arr->descr->type_num, out); - Py_DECREF(arr); - return ret; -} - -/*NUMPY_API - * Min - */ -NPY_NO_EXPORT PyObject * -PyArray_Min(PyArrayObject *ap, int axis, PyArrayObject *out) -{ - PyArrayObject *arr; - PyObject *ret; - - if ((arr=(PyArrayObject *)_check_axis(ap, &axis, 0)) == NULL) { - return NULL; - } - ret = PyArray_GenericReduceFunction(arr, n_ops.minimum, axis, - arr->descr->type_num, out); - Py_DECREF(arr); - return ret; -} - -/*NUMPY_API - * Ptp - */ -NPY_NO_EXPORT PyObject * -PyArray_Ptp(PyArrayObject *ap, int axis, PyArrayObject *out) -{ - PyArrayObject *arr; - PyObject *ret; - PyObject *obj1 = NULL, *obj2 = NULL; - - if ((arr=(PyArrayObject *)_check_axis(ap, &axis, 0)) == NULL) { - return NULL; - } - obj1 = PyArray_Max(arr, axis, out); - if (obj1 == NULL) { - goto fail; - } - obj2 = PyArray_Min(arr, axis, NULL); - if (obj2 == NULL) { - goto fail; - } - Py_DECREF(arr); - if (out) { - ret = PyObject_CallFunction(n_ops.subtract, "OOO", out, obj2, out); - } - else { - ret = 
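`PyArray_ArgMin` reuses ArgMax by subtracting the array from a per-type constant: 0 for signed types, -1 for unsigned types (so the modular wraparound reverses the ordering without leaving the representable range), and 1 for booleans. A hedged pure-Python sketch of the trick (function names invented for illustration):

```python
def argmax(values):
    """First-occurrence argmax, standing in for the type's argmax function."""
    return max(range(len(values)), key=values.__getitem__)

def argmin_via_argmax(values):
    """Signed case: argmin(x) == argmax(0 - x)."""
    return argmax([0 - v for v in values])

def argmin_uint8_via_argmax(values):
    """Unsigned 8-bit case: subtracting from -1 (0xFF) reverses the order
    modularly, which is why the C code picks -1 for unsigned types."""
    return argmax([(0xFF - v) & 0xFF for v in values])
```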
PyNumber_Subtract(obj1, obj2); - } - Py_DECREF(obj1); - Py_DECREF(obj2); - return ret; - - fail: - Py_XDECREF(arr); - Py_XDECREF(obj1); - Py_XDECREF(obj2); - return NULL; -} - - - -/*NUMPY_API - * Set variance to 1 to by-pass square-root calculation and return variance - * Std - */ -NPY_NO_EXPORT PyObject * -PyArray_Std(PyArrayObject *self, int axis, int rtype, PyArrayObject *out, - int variance) -{ - return __New_PyArray_Std(self, axis, rtype, out, variance, 0); -} - -NPY_NO_EXPORT PyObject * -__New_PyArray_Std(PyArrayObject *self, int axis, int rtype, PyArrayObject *out, - int variance, int num) -{ - PyObject *obj1 = NULL, *obj2 = NULL, *obj3 = NULL, *new = NULL; - PyObject *ret = NULL, *newshape = NULL; - int i, n; - intp val; - - if ((new = _check_axis(self, &axis, 0)) == NULL) { - return NULL; - } - /* Compute and reshape mean */ - obj1 = PyArray_EnsureAnyArray(PyArray_Mean((PyAO *)new, axis, rtype, NULL)); - if (obj1 == NULL) { - Py_DECREF(new); - return NULL; - } - n = PyArray_NDIM(new); - newshape = PyTuple_New(n); - if (newshape == NULL) { - Py_DECREF(obj1); - Py_DECREF(new); - return NULL; - } - for (i = 0; i < n; i++) { - if (i == axis) { - val = 1; - } - else { - val = PyArray_DIM(new,i); - } - PyTuple_SET_ITEM(newshape, i, PyInt_FromLong((long)val)); - } - obj2 = PyArray_Reshape((PyAO *)obj1, newshape); - Py_DECREF(obj1); - Py_DECREF(newshape); - if (obj2 == NULL) { - Py_DECREF(new); - return NULL; - } - - /* Compute x = x - mx */ - obj1 = PyArray_EnsureAnyArray(PyNumber_Subtract((PyObject *)new, obj2)); - Py_DECREF(obj2); - if (obj1 == NULL) { - Py_DECREF(new); - return NULL; - } - /* Compute x * x */ - if (PyArray_ISCOMPLEX(obj1)) { - obj3 = PyArray_Conjugate((PyAO *)obj1, NULL); - } - else { - obj3 = obj1; - Py_INCREF(obj1); - } - if (obj3 == NULL) { - Py_DECREF(new); - return NULL; - } - obj2 = PyArray_EnsureAnyArray \ - (PyArray_GenericBinaryFunction((PyAO *)obj1, obj3, n_ops.multiply)); - Py_DECREF(obj1); - Py_DECREF(obj3); - if (obj2 == NULL) { 
- Py_DECREF(new); - return NULL; - } - if (PyArray_ISCOMPLEX(obj2)) { - obj3 = PyObject_GetAttrString(obj2, "real"); - switch(rtype) { - case NPY_CDOUBLE: - rtype = NPY_DOUBLE; - break; - case NPY_CFLOAT: - rtype = NPY_FLOAT; - break; - case NPY_CLONGDOUBLE: - rtype = NPY_LONGDOUBLE; - break; - } - } - else { - obj3 = obj2; - Py_INCREF(obj2); - } - if (obj3 == NULL) { - Py_DECREF(new); - return NULL; - } - /* Compute add.reduce(x*x,axis) */ - obj1 = PyArray_GenericReduceFunction((PyAO *)obj3, n_ops.add, - axis, rtype, NULL); - Py_DECREF(obj3); - Py_DECREF(obj2); - if (obj1 == NULL) { - Py_DECREF(new); - return NULL; - } - n = PyArray_DIM(new,axis); - Py_DECREF(new); - n = (n-num); - if (n == 0) { - n = 1; - } - obj2 = PyFloat_FromDouble(1.0/((double )n)); - if (obj2 == NULL) { - Py_DECREF(obj1); - return NULL; - } - ret = PyNumber_Multiply(obj1, obj2); - Py_DECREF(obj1); - Py_DECREF(obj2); - - if (!variance) { - obj1 = PyArray_EnsureAnyArray(ret); - /* sqrt() */ - ret = PyArray_GenericUnaryFunction((PyAO *)obj1, n_ops.sqrt); - Py_DECREF(obj1); - } - if (ret == NULL) { - return NULL; - } - if (PyArray_CheckExact(self)) { - goto finish; - } - if (PyArray_Check(self) && Py_TYPE(self) == Py_TYPE(ret)) { - goto finish; - } - obj1 = PyArray_EnsureArray(ret); - if (obj1 == NULL) { - return NULL; - } - ret = PyArray_View((PyAO *)obj1, NULL, Py_TYPE(self)); - Py_DECREF(obj1); - -finish: - if (out) { - if (PyArray_CopyAnyInto(out, (PyArrayObject *)ret) < 0) { - Py_DECREF(ret); - return NULL; - } - Py_DECREF(ret); - Py_INCREF(out); - return (PyObject *)out; - } - return ret; -} - - -/*NUMPY_API - *Sum - */ -NPY_NO_EXPORT PyObject * -PyArray_Sum(PyArrayObject *self, int axis, int rtype, PyArrayObject *out) -{ - PyObject *new, *ret; - - if ((new = _check_axis(self, &axis, 0)) == NULL) { - return NULL; - } - ret = PyArray_GenericReduceFunction((PyAO *)new, n_ops.add, axis, - rtype, out); - Py_DECREF(new); - return ret; -} - -/*NUMPY_API - * Prod - */ -NPY_NO_EXPORT PyObject * 
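`__New_PyArray_Std` computes the mean, subtracts it, averages the squared (conjugated, for complex input) deviations over `n - num`, and takes the square root unless the variance was requested; `num` is what later NumPy exposes as `ddof`. A pure-Python sketch for 1-D real input (the function name is invented; the C code works axis-wise on arrays):

```python
import math

def std_sketch(xs, variance=False, num=0):
    """Mean of squared deviations over (n - num), then sqrt unless the
    caller asked for the variance, mirroring the deleted C flow."""
    mean = sum(xs) / len(xs)
    n = len(xs) - num
    if n == 0:
        n = 1  # the C code clamps the divisor to 1 when n - num == 0
    var = sum((x - mean) ** 2 for x in xs) / n
    return var if variance else math.sqrt(var)
```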
-PyArray_Prod(PyArrayObject *self, int axis, int rtype, PyArrayObject *out) -{ - PyObject *new, *ret; - - if ((new = _check_axis(self, &axis, 0)) == NULL) { - return NULL; - } - ret = PyArray_GenericReduceFunction((PyAO *)new, n_ops.multiply, axis, - rtype, out); - Py_DECREF(new); - return ret; -} - -/*NUMPY_API - *CumSum - */ -NPY_NO_EXPORT PyObject * -PyArray_CumSum(PyArrayObject *self, int axis, int rtype, PyArrayObject *out) -{ - PyObject *new, *ret; - - if ((new = _check_axis(self, &axis, 0)) == NULL) { - return NULL; - } - ret = PyArray_GenericAccumulateFunction((PyAO *)new, n_ops.add, axis, - rtype, out); - Py_DECREF(new); - return ret; -} - -/*NUMPY_API - * CumProd - */ -NPY_NO_EXPORT PyObject * -PyArray_CumProd(PyArrayObject *self, int axis, int rtype, PyArrayObject *out) -{ - PyObject *new, *ret; - - if ((new = _check_axis(self, &axis, 0)) == NULL) { - return NULL; - } - - ret = PyArray_GenericAccumulateFunction((PyAO *)new, - n_ops.multiply, axis, - rtype, out); - Py_DECREF(new); - return ret; -} - -/*NUMPY_API - * Round - */ -NPY_NO_EXPORT PyObject * -PyArray_Round(PyArrayObject *a, int decimals, PyArrayObject *out) -{ - PyObject *f, *ret = NULL, *tmp, *op1, *op2; - int ret_int=0; - PyArray_Descr *my_descr; - if (out && (PyArray_SIZE(out) != PyArray_SIZE(a))) { - PyErr_SetString(PyExc_ValueError, - "invalid output shape"); - return NULL; - } - if (PyArray_ISCOMPLEX(a)) { - PyObject *part; - PyObject *round_part; - PyObject *new; - int res; - - if (out) { - new = (PyObject *)out; - Py_INCREF(new); - } - else { - new = PyArray_Copy(a); - if (new == NULL) { - return NULL; - } - } - - /* new.real = a.real.round(decimals) */ - part = PyObject_GetAttrString(new, "real"); - if (part == NULL) { - Py_DECREF(new); - return NULL; - } - part = PyArray_EnsureAnyArray(part); - round_part = PyArray_Round((PyArrayObject *)part, - decimals, NULL); - Py_DECREF(part); - if (round_part == NULL) { - Py_DECREF(new); - return NULL; - } - res = PyObject_SetAttrString(new, 
"real", round_part); - Py_DECREF(round_part); - if (res < 0) { - Py_DECREF(new); - return NULL; - } - - /* new.imag = a.imag.round(decimals) */ - part = PyObject_GetAttrString(new, "imag"); - if (part == NULL) { - Py_DECREF(new); - return NULL; - } - part = PyArray_EnsureAnyArray(part); - round_part = PyArray_Round((PyArrayObject *)part, - decimals, NULL); - Py_DECREF(part); - if (round_part == NULL) { - Py_DECREF(new); - return NULL; - } - res = PyObject_SetAttrString(new, "imag", round_part); - Py_DECREF(round_part); - if (res < 0) { - Py_DECREF(new); - return NULL; - } - return new; - } - /* do the most common case first */ - if (decimals >= 0) { - if (PyArray_ISINTEGER(a)) { - if (out) { - if (PyArray_CopyAnyInto(out, a) < 0) { - return NULL; - } - Py_INCREF(out); - return (PyObject *)out; - } - else { - Py_INCREF(a); - return (PyObject *)a; - } - } - if (decimals == 0) { - if (out) { - return PyObject_CallFunction(n_ops.rint, "OO", a, out); - } - return PyObject_CallFunction(n_ops.rint, "O", a); - } - op1 = n_ops.multiply; - op2 = n_ops.true_divide; - } - else { - op1 = n_ops.true_divide; - op2 = n_ops.multiply; - decimals = -decimals; - } - if (!out) { - if (PyArray_ISINTEGER(a)) { - ret_int = 1; - my_descr = PyArray_DescrFromType(NPY_DOUBLE); - } - else { - Py_INCREF(a->descr); - my_descr = a->descr; - } - out = (PyArrayObject *)PyArray_Empty(a->nd, a->dimensions, - my_descr, - PyArray_ISFORTRAN(a)); - if (out == NULL) { - return NULL; - } - } - else { - Py_INCREF(out); - } - f = PyFloat_FromDouble(power_of_ten(decimals)); - if (f == NULL) { - return NULL; - } - ret = PyObject_CallFunction(op1, "OOO", a, f, out); - if (ret == NULL) { - goto finish; - } - tmp = PyObject_CallFunction(n_ops.rint, "OO", ret, ret); - if (tmp == NULL) { - Py_DECREF(ret); - ret = NULL; - goto finish; - } - Py_DECREF(tmp); - tmp = PyObject_CallFunction(op2, "OOO", ret, f, ret); - if (tmp == NULL) { - Py_DECREF(ret); - ret = NULL; - goto finish; - } - Py_DECREF(tmp); - - finish: - 
Py_DECREF(f); - Py_DECREF(out); - if (ret_int) { - Py_INCREF(a->descr); - tmp = PyArray_CastToType((PyArrayObject *)ret, - a->descr, PyArray_ISFORTRAN(a)); - Py_DECREF(ret); - return tmp; - } - return ret; -} - - -/*NUMPY_API - * Mean - */ -NPY_NO_EXPORT PyObject * -PyArray_Mean(PyArrayObject *self, int axis, int rtype, PyArrayObject *out) -{ - PyObject *obj1 = NULL, *obj2 = NULL; - PyObject *new, *ret; - - if ((new = _check_axis(self, &axis, 0)) == NULL) { - return NULL; - } - obj1 = PyArray_GenericReduceFunction((PyAO *)new, n_ops.add, axis, - rtype, out); - obj2 = PyFloat_FromDouble((double) PyArray_DIM(new,axis)); - Py_DECREF(new); - if (obj1 == NULL || obj2 == NULL) { - Py_XDECREF(obj1); - Py_XDECREF(obj2); - return NULL; - } - if (!out) { -#if defined(NPY_PY3K) - ret = PyNumber_TrueDivide(obj1, obj2); -#else - ret = PyNumber_Divide(obj1, obj2); -#endif - } - else { - ret = PyObject_CallFunction(n_ops.divide, "OOO", out, obj2, out); - } - Py_DECREF(obj1); - Py_DECREF(obj2); - return ret; -} - -/*NUMPY_API - * Any - */ -NPY_NO_EXPORT PyObject * -PyArray_Any(PyArrayObject *self, int axis, PyArrayObject *out) -{ - PyObject *new, *ret; - - if ((new = _check_axis(self, &axis, 0)) == NULL) { - return NULL; - } - ret = PyArray_GenericReduceFunction((PyAO *)new, - n_ops.logical_or, axis, - PyArray_BOOL, out); - Py_DECREF(new); - return ret; -} - -/*NUMPY_API - * All - */ -NPY_NO_EXPORT PyObject * -PyArray_All(PyArrayObject *self, int axis, PyArrayObject *out) -{ - PyObject *new, *ret; - - if ((new = _check_axis(self, &axis, 0)) == NULL) { - return NULL; - } - ret = PyArray_GenericReduceFunction((PyAO *)new, - n_ops.logical_and, axis, - PyArray_BOOL, out); - Py_DECREF(new); - return ret; -} - - -static PyObject * -_GenericBinaryOutFunction(PyArrayObject *m1, PyObject *m2, PyArrayObject *out, - PyObject *op) -{ - if (out == NULL) { - return PyObject_CallFunction(op, "OO", m1, m2); - } - else { - return PyObject_CallFunction(op, "OOO", m1, m2, out); - } -} - -static 
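The scalar path of `PyArray_Round` scales by `power_of_ten(decimals)`, rounds to nearest with `rint`, and scales back; for negative `decimals` the multiply and divide swap roles. A pure-Python sketch (function names invented; `rint` here reimplements C's default round-half-to-even):

```python
import math

def rint(v):
    """Round half to even, like C rint() in the default rounding mode."""
    f = math.floor(v)
    diff = v - f
    if diff > 0.5 or (diff == 0.5 and f % 2 == 1):
        return f + 1.0
    return float(f)

def round_decimals(x, decimals):
    """Scale by 10**|decimals|, round, scale back; the direction of the
    two scalings swaps when decimals is negative."""
    factor = 10.0 ** abs(decimals)
    if decimals >= 0:
        return rint(x * factor) / factor
    return rint(x / factor) * factor
```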
PyObject * -_slow_array_clip(PyArrayObject *self, PyObject *min, PyObject *max, PyArrayObject *out) -{ - PyObject *res1=NULL, *res2=NULL; - - if (max != NULL) { - res1 = _GenericBinaryOutFunction(self, max, out, n_ops.minimum); - if (res1 == NULL) { - return NULL; - } - } - else { - res1 = (PyObject *)self; - Py_INCREF(res1); - } - - if (min != NULL) { - res2 = _GenericBinaryOutFunction((PyArrayObject *)res1, - min, out, n_ops.maximum); - if (res2 == NULL) { - Py_XDECREF(res1); - return NULL; - } - } - else { - res2 = res1; - Py_INCREF(res2); - } - Py_DECREF(res1); - return res2; -} - -/*NUMPY_API - * Clip - */ -NPY_NO_EXPORT PyObject * -PyArray_Clip(PyArrayObject *self, PyObject *min, PyObject *max, PyArrayObject *out) -{ - PyArray_FastClipFunc *func; - int outgood = 0, ingood = 0; - PyArrayObject *maxa = NULL; - PyArrayObject *mina = NULL; - PyArrayObject *newout = NULL, *newin = NULL; - PyArray_Descr *indescr, *newdescr; - char *max_data, *min_data; - PyObject *zero; - - if ((max == NULL) && (min == NULL)) { - PyErr_SetString(PyExc_ValueError, "array_clip: must set either max "\ - "or min"); - return NULL; - } - - func = self->descr->f->fastclip; - if (func == NULL || (min != NULL && !PyArray_CheckAnyScalar(min)) || - (max != NULL && !PyArray_CheckAnyScalar(max))) { - return _slow_array_clip(self, min, max, out); - } - /* Use the fast scalar clip function */ - - /* First we need to figure out the correct type */ - indescr = NULL; - if (min != NULL) { - indescr = PyArray_DescrFromObject(min, NULL); - if (indescr == NULL) { - return NULL; - } - } - if (max != NULL) { - newdescr = PyArray_DescrFromObject(max, indescr); - Py_XDECREF(indescr); - if (newdescr == NULL) { - return NULL; - } - } - else { - /* Steal the reference */ - newdescr = indescr; - } - - - /* - * Use the scalar descriptor only if it is of a bigger - * KIND than the input array (and then find the - * type that matches both). 
- */ - if (PyArray_ScalarKind(newdescr->type_num, NULL) > - PyArray_ScalarKind(self->descr->type_num, NULL)) { - indescr = _array_small_type(newdescr, self->descr); - func = indescr->f->fastclip; - if (func == NULL) { - return _slow_array_clip(self, min, max, out); - } - } - else { - indescr = self->descr; - Py_INCREF(indescr); - } - Py_DECREF(newdescr); - - if (!PyDataType_ISNOTSWAPPED(indescr)) { - PyArray_Descr *descr2; - descr2 = PyArray_DescrNewByteorder(indescr, '='); - Py_DECREF(indescr); - if (descr2 == NULL) { - goto fail; - } - indescr = descr2; - } - - /* Convert max to an array */ - if (max != NULL) { - maxa = (NPY_AO *)PyArray_FromAny(max, indescr, 0, 0, - NPY_DEFAULT, NULL); - if (maxa == NULL) { - return NULL; - } - } - else { - /* Side-effect of PyArray_FromAny */ - Py_DECREF(indescr); - } - - /* - * If we are unsigned, then make sure min is not < 0 - * This is to match the behavior of _slow_array_clip - * - * We allow min and max to go beyond the limits - * for other data-types in which case they - * are interpreted as their modular counterparts. 
- */ - if (min != NULL) { - if (PyArray_ISUNSIGNED(self)) { - int cmp; - zero = PyInt_FromLong(0); - cmp = PyObject_RichCompareBool(min, zero, Py_LT); - if (cmp == -1) { - Py_DECREF(zero); - goto fail; - } - if (cmp == 1) { - min = zero; - } - else { - Py_DECREF(zero); - Py_INCREF(min); - } - } - else { - Py_INCREF(min); - } - - /* Convert min to an array */ - Py_INCREF(indescr); - mina = (NPY_AO *)PyArray_FromAny(min, indescr, 0, 0, - NPY_DEFAULT, NULL); - Py_DECREF(min); - if (mina == NULL) { - goto fail; - } - } - - - /* - * Check to see if input is single-segment, aligned, - * and in native byteorder - */ - if (PyArray_ISONESEGMENT(self) && PyArray_CHKFLAGS(self, ALIGNED) && - PyArray_ISNOTSWAPPED(self) && (self->descr == indescr)) { - ingood = 1; - } - if (!ingood) { - int flags; - - if (PyArray_ISFORTRAN(self)) { - flags = NPY_FARRAY; - } - else { - flags = NPY_CARRAY; - } - Py_INCREF(indescr); - newin = (NPY_AO *)PyArray_FromArray(self, indescr, flags); - if (newin == NULL) { - goto fail; - } - } - else { - newin = self; - Py_INCREF(newin); - } - - /* - * At this point, newin is a single-segment, aligned, and correct - * byte-order array of the correct type - * - * if ingood == 0, then it is a copy, otherwise, - * it is the original input. 
- */ - - /* - * If we have already made a copy of the data, then use - * that as the output array - */ - if (out == NULL && !ingood) { - out = newin; - } - - /* - * Now, we know newin is a usable array for fastclip, - * we need to make sure the output array is available - * and usable - */ - if (out == NULL) { - Py_INCREF(indescr); - out = (NPY_AO*)PyArray_NewFromDescr(Py_TYPE(self), - indescr, self->nd, - self->dimensions, - NULL, NULL, - PyArray_ISFORTRAN(self), - (PyObject *)self); - if (out == NULL) { - goto fail; - } - outgood = 1; - } - else Py_INCREF(out); - /* Input is good at this point */ - if (out == newin) { - outgood = 1; - } - if (!outgood && PyArray_ISONESEGMENT(out) && - PyArray_CHKFLAGS(out, ALIGNED) && PyArray_ISNOTSWAPPED(out) && - PyArray_EquivTypes(out->descr, indescr)) { - outgood = 1; - } - - /* - * Do we still not have a suitable output array? - * Create one, now - */ - if (!outgood) { - int oflags; - if (PyArray_ISFORTRAN(out)) - oflags = NPY_FARRAY; - else - oflags = NPY_CARRAY; - oflags |= NPY_UPDATEIFCOPY | NPY_FORCECAST; - Py_INCREF(indescr); - newout = (NPY_AO*)PyArray_FromArray(out, indescr, oflags); - if (newout == NULL) { - goto fail; - } - } - else { - newout = out; - Py_INCREF(newout); - } - - /* make sure the shape of the output array is the same */ - if (!PyArray_SAMESHAPE(newin, newout)) { - PyErr_SetString(PyExc_ValueError, "clip: Output array must have the" - "same shape as the input."); - goto fail; - } - if (newout->data != newin->data) { - memcpy(newout->data, newin->data, PyArray_NBYTES(newin)); - } - - /* Now we can call the fast-clip function */ - min_data = max_data = NULL; - if (mina != NULL) { - min_data = mina->data; - } - if (maxa != NULL) { - max_data = maxa->data; - } - func(newin->data, PyArray_SIZE(newin), min_data, max_data, newout->data); - - /* Clean up temporary variables */ - Py_XDECREF(mina); - Py_XDECREF(maxa); - Py_DECREF(newin); - /* Copy back into out if out was not already a nice array. 
*/ - Py_DECREF(newout); - return (PyObject *)out; - - fail: - Py_XDECREF(maxa); - Py_XDECREF(mina); - Py_XDECREF(newin); - PyArray_XDECREF_ERR(newout); - return NULL; -} - - -/*NUMPY_API - * Conjugate - */ -NPY_NO_EXPORT PyObject * -PyArray_Conjugate(PyArrayObject *self, PyArrayObject *out) -{ - if (PyArray_ISCOMPLEX(self)) { - if (out == NULL) { - return PyArray_GenericUnaryFunction(self, - n_ops.conjugate); - } - else { - return PyArray_GenericBinaryFunction(self, - (PyObject *)out, - n_ops.conjugate); - } - } - else { - PyArrayObject *ret; - if (out) { - if (PyArray_CopyAnyInto(out, self) < 0) { - return NULL; - } - ret = out; - } - else { - ret = self; - } - Py_INCREF(ret); - return (PyObject *)ret; - } -} - -/*NUMPY_API - * Trace - */ -NPY_NO_EXPORT PyObject * -PyArray_Trace(PyArrayObject *self, int offset, int axis1, int axis2, - int rtype, PyArrayObject *out) -{ - PyObject *diag = NULL, *ret = NULL; - - diag = PyArray_Diagonal(self, offset, axis1, axis2); - if (diag == NULL) { - return NULL; - } - ret = PyArray_GenericReduceFunction((PyAO *)diag, n_ops.add, -1, rtype, out); - Py_DECREF(diag); - return ret; -} - diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/calculation.h b/pythonPackages/numpy/numpy/core/src/multiarray/calculation.h deleted file mode 100755 index 34bc31f698..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/calculation.h +++ /dev/null @@ -1,64 +0,0 @@ -#ifndef _NPY_CALCULATION_H_ -#define _NPY_CALCULATION_H_ - -NPY_NO_EXPORT PyObject* -PyArray_ArgMax(PyArrayObject* self, int axis, PyArrayObject *out); - -NPY_NO_EXPORT PyObject* -PyArray_ArgMin(PyArrayObject* self, int axis, PyArrayObject *out); - -NPY_NO_EXPORT PyObject* -PyArray_Max(PyArrayObject* self, int axis, PyArrayObject* out); - -NPY_NO_EXPORT PyObject* -PyArray_Min(PyArrayObject* self, int axis, PyArrayObject* out); - -NPY_NO_EXPORT PyObject* -PyArray_Ptp(PyArrayObject* self, int axis, PyArrayObject* out); - -NPY_NO_EXPORT PyObject* 
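When no fast per-type clip function applies, `PyArray_Clip` falls back to `_slow_array_clip`, which is simply `minimum` with the upper bound followed by `maximum` with the lower bound. A pure-Python sketch of that fallback (function and parameter names invented; the C version works element-wise on arrays via the ufunc machinery):

```python
def clip_sketch(values, lo=None, hi=None):
    """Apply minimum() with the upper bound, then maximum() with the lower
    bound, as the deleted _slow_array_clip does; either bound may be None."""
    out = list(values)
    if hi is not None:
        out = [min(v, hi) for v in out]
    if lo is not None:
        out = [max(v, lo) for v in out]
    return out
```

Applying the upper bound first means that when `lo > hi`, every element ends up at `lo`, which is the behavior the fast path has to match.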
-PyArray_Mean(PyArrayObject* self, int axis, int rtype, PyArrayObject* out); - -NPY_NO_EXPORT PyObject * -PyArray_Round(PyArrayObject *a, int decimals, PyArrayObject *out); - -NPY_NO_EXPORT PyObject* -PyArray_Trace(PyArrayObject* self, int offset, int axis1, int axis2, - int rtype, PyArrayObject* out); - -NPY_NO_EXPORT PyObject* -PyArray_Clip(PyArrayObject* self, PyObject* min, PyObject* max, PyArrayObject *out); - -NPY_NO_EXPORT PyObject* -PyArray_Conjugate(PyArrayObject* self, PyArrayObject* out); - -NPY_NO_EXPORT PyObject* -PyArray_Round(PyArrayObject* self, int decimals, PyArrayObject* out); - -NPY_NO_EXPORT PyObject* -PyArray_Std(PyArrayObject* self, int axis, int rtype, PyArrayObject* out, - int variance); - -NPY_NO_EXPORT PyObject * -__New_PyArray_Std(PyArrayObject *self, int axis, int rtype, PyArrayObject *out, - int variance, int num); - -NPY_NO_EXPORT PyObject* -PyArray_Sum(PyArrayObject* self, int axis, int rtype, PyArrayObject* out); - -NPY_NO_EXPORT PyObject* -PyArray_CumSum(PyArrayObject* self, int axis, int rtype, PyArrayObject* out); - -NPY_NO_EXPORT PyObject* -PyArray_Prod(PyArrayObject* self, int axis, int rtype, PyArrayObject* out); - -NPY_NO_EXPORT PyObject* -PyArray_CumProd(PyArrayObject* self, int axis, int rtype, PyArrayObject* out); - -NPY_NO_EXPORT PyObject* -PyArray_All(PyArrayObject* self, int axis, PyArrayObject* out); - -NPY_NO_EXPORT PyObject* -PyArray_Any(PyArrayObject* self, int axis, PyArrayObject* out); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/common.c b/pythonPackages/numpy/numpy/core/src/multiarray/common.c deleted file mode 100755 index 33d7f719cd..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/common.c +++ /dev/null @@ -1,612 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include <Python.h> - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" - -#include "npy_config.h" -#include "numpy/npy_3kcompat.h" - -#include "usertypes.h" - -#include "common.h" -#include "buffer.h" -
-/* - * new reference - * doesn't alter refcount of chktype or mintype --- - * unless one of them is returned - */ -NPY_NO_EXPORT PyArray_Descr * -_array_small_type(PyArray_Descr *chktype, PyArray_Descr* mintype) -{ - PyArray_Descr *outtype; - int outtype_num, save_num; - - if (PyArray_EquivTypes(chktype, mintype)) { - Py_INCREF(mintype); - return mintype; - } - - - if (chktype->type_num > mintype->type_num) { - outtype_num = chktype->type_num; - } - else { - if (PyDataType_ISOBJECT(chktype) && - PyDataType_ISSTRING(mintype)) { - return PyArray_DescrFromType(NPY_OBJECT); - } - else { - outtype_num = mintype->type_num; - } - } - - save_num = outtype_num; - while (outtype_num < PyArray_NTYPES && - !(PyArray_CanCastSafely(chktype->type_num, outtype_num) - && PyArray_CanCastSafely(mintype->type_num, outtype_num))) { - outtype_num++; - } - if (outtype_num == PyArray_NTYPES) { - outtype = PyArray_DescrFromType(save_num); - } - else { - outtype = PyArray_DescrFromType(outtype_num); - } - if (PyTypeNum_ISEXTENDED(outtype->type_num)) { - int testsize = outtype->elsize; - int chksize, minsize; - chksize = chktype->elsize; - minsize = mintype->elsize; - /* - * Handle string->unicode case separately - * because string itemsize is 4* as large - */ - if (outtype->type_num == PyArray_UNICODE && - mintype->type_num == PyArray_STRING) { - testsize = MAX(chksize, 4*minsize); - } - else if (chktype->type_num == PyArray_STRING && - mintype->type_num == PyArray_UNICODE) { - testsize = MAX(chksize*4, minsize); - } - else { - testsize = MAX(chksize, minsize); - } - if (testsize != outtype->elsize) { - PyArray_DESCR_REPLACE(outtype); - outtype->elsize = testsize; - Py_XDECREF(outtype->fields); - outtype->fields = NULL; - Py_XDECREF(outtype->names); - outtype->names = NULL; - } - } - return outtype; -} - -NPY_NO_EXPORT PyArray_Descr * -_array_find_python_scalar_type(PyObject *op) -{ - if (PyFloat_Check(op)) { - return PyArray_DescrFromType(PyArray_DOUBLE); - } - else if 
(PyComplex_Check(op)) { - return PyArray_DescrFromType(PyArray_CDOUBLE); - } - else if (PyInt_Check(op)) { - /* bools are a subclass of int */ - if (PyBool_Check(op)) { - return PyArray_DescrFromType(PyArray_BOOL); - } - else { - return PyArray_DescrFromType(PyArray_LONG); - } - } - else if (PyLong_Check(op)) { - /* if integer can fit into a longlong then return that*/ - if ((PyLong_AsLongLong(op) == -1) && PyErr_Occurred()) { - PyErr_Clear(); - return PyArray_DescrFromType(PyArray_OBJECT); - } - return PyArray_DescrFromType(PyArray_LONGLONG); - } - return NULL; -} - -static PyArray_Descr * -_use_default_type(PyObject *op) -{ - int typenum, l; - PyObject *type; - - typenum = -1; - l = 0; - type = (PyObject *)Py_TYPE(op); - while (l < PyArray_NUMUSERTYPES) { - if (type == (PyObject *)(userdescrs[l]->typeobj)) { - typenum = l + PyArray_USERDEF; - break; - } - l++; - } - if (typenum == -1) { - typenum = PyArray_OBJECT; - } - return PyArray_DescrFromType(typenum); -} - - -/* - * op is an object to be converted to an ndarray. - * - * minitype is the minimum type-descriptor needed. - * - * max is the maximum number of dimensions -- used for recursive call - * to avoid infinite recursion... - */ -NPY_NO_EXPORT PyArray_Descr * -_array_find_type(PyObject *op, PyArray_Descr *minitype, int max) -{ - int l; - PyObject *ip; - PyArray_Descr *chktype = NULL; - PyArray_Descr *outtype; -#if PY_VERSION_HEX >= 0x02060000 - Py_buffer buffer_view; -#endif - - /* - * These need to come first because if op already carries - * a descr structure, then we want it to be the result if minitype - * is NULL. 
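The deleted `_array_find_python_scalar_type` maps a Python scalar to a dtype, checking `bool` before the general integer case because `bool` subclasses `int`, and demoting integers that overflow a `long long` to object dtype. A simplified Python 3 sketch (type-name strings are illustrative labels, and the Python 2 int/long split collapses to one `int` branch here):

```python
def python_scalar_typecode(op):
    """Return an illustrative dtype label for a Python scalar, following
    the decision order of the deleted C function; None if not a scalar."""
    if isinstance(op, float):
        return 'float64'
    if isinstance(op, complex):
        return 'complex128'
    if isinstance(op, bool):
        # bools are a subclass of int, so this check must come first
        return 'bool'
    if isinstance(op, int):
        # ints that overflow a C long long fall back to object dtype
        if not (-2**63 <= op < 2**63):
            return 'object'
        return 'int64'  # LONG/LONGLONG in the C code; width is platform-dependent
    return None
```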
- */ - if (PyArray_Check(op)) { - chktype = PyArray_DESCR(op); - Py_INCREF(chktype); - if (minitype == NULL) { - return chktype; - } - Py_INCREF(minitype); - goto finish; - } - - if (PyArray_IsScalar(op, Generic)) { - chktype = PyArray_DescrFromScalar(op); - if (minitype == NULL) { - return chktype; - } - Py_INCREF(minitype); - goto finish; - } - - if (minitype == NULL) { - minitype = PyArray_DescrFromType(PyArray_BOOL); - } - else { - Py_INCREF(minitype); - } - if (max < 0) { - goto deflt; - } - chktype = _array_find_python_scalar_type(op); - if (chktype) { - goto finish; - } - - if (PyBytes_Check(op)) { - chktype = PyArray_DescrNewFromType(PyArray_STRING); - chktype->elsize = PyString_GET_SIZE(op); - goto finish; - } - - if (PyUnicode_Check(op)) { - chktype = PyArray_DescrNewFromType(PyArray_UNICODE); - chktype->elsize = PyUnicode_GET_DATA_SIZE(op); -#ifndef Py_UNICODE_WIDE - chktype->elsize <<= 1; -#endif - goto finish; - } - -#if PY_VERSION_HEX >= 0x02060000 - /* PEP 3118 buffer interface */ - memset(&buffer_view, 0, sizeof(Py_buffer)); - if (PyObject_GetBuffer(op, &buffer_view, PyBUF_FORMAT|PyBUF_STRIDES) == 0 || - PyObject_GetBuffer(op, &buffer_view, PyBUF_FORMAT) == 0) { - - PyErr_Clear(); - chktype = _descriptor_from_pep3118_format(buffer_view.format); - PyBuffer_Release(&buffer_view); - if (chktype) { - goto finish; - } - } - else if (PyObject_GetBuffer(op, &buffer_view, PyBUF_STRIDES) == 0 || - PyObject_GetBuffer(op, &buffer_view, PyBUF_SIMPLE) == 0) { - - PyErr_Clear(); - chktype = PyArray_DescrNewFromType(PyArray_VOID); - chktype->elsize = buffer_view.itemsize; - PyBuffer_Release(&buffer_view); - goto finish; - } - else { - PyErr_Clear(); - } -#endif - - if ((ip=PyObject_GetAttrString(op, "__array_interface__"))!=NULL) { - if (PyDict_Check(ip)) { - PyObject *new; - new = PyDict_GetItemString(ip, "typestr"); - if (new && PyString_Check(new)) { - chktype =_array_typedescr_fromstr(PyString_AS_STRING(new)); - } - } - Py_DECREF(ip); - if (chktype) { - goto 
finish; - } - } - else { - PyErr_Clear(); - } - if ((ip=PyObject_GetAttrString(op, "__array_struct__")) != NULL) { - PyArrayInterface *inter; - char buf[40]; - - if (NpyCapsule_Check(ip)) { - inter = (PyArrayInterface *)NpyCapsule_AsVoidPtr(ip); - if (inter->two == 2) { - PyOS_snprintf(buf, sizeof(buf), - "|%c%d", inter->typekind, inter->itemsize); - chktype = _array_typedescr_fromstr(buf); - } - } - Py_DECREF(ip); - if (chktype) { - goto finish; - } - } - else { - PyErr_Clear(); - } - -#if !defined(NPY_PY3K) - if (PyBuffer_Check(op)) { - chktype = PyArray_DescrNewFromType(PyArray_VOID); - chktype->elsize = Py_TYPE(op)->tp_as_sequence->sq_length(op); - PyErr_Clear(); - goto finish; - } -#endif - - if (PyObject_HasAttrString(op, "__array__")) { - ip = PyObject_CallMethod(op, "__array__", NULL); - if(ip && PyArray_Check(ip)) { - chktype = PyArray_DESCR(ip); - Py_INCREF(chktype); - Py_DECREF(ip); - goto finish; - } - Py_XDECREF(ip); - if (PyErr_Occurred()) PyErr_Clear(); - } - -#if defined(NPY_PY3K) - /* FIXME: XXX -- what is the correct thing to do here? 
*/ -#else - if (PyInstance_Check(op)) { - goto deflt; - } -#endif - if (PySequence_Check(op)) { - l = PyObject_Length(op); - if (l < 0 && PyErr_Occurred()) { - PyErr_Clear(); - goto deflt; - } - if (l == 0 && minitype->type_num == PyArray_BOOL) { - Py_DECREF(minitype); - minitype = PyArray_DescrFromType(PyArray_DEFAULT); - } - while (--l >= 0) { - PyArray_Descr *newtype; - ip = PySequence_GetItem(op, l); - if (ip==NULL) { - PyErr_Clear(); - goto deflt; - } - chktype = _array_find_type(ip, minitype, max-1); - newtype = _array_small_type(chktype, minitype); - Py_DECREF(minitype); - minitype = newtype; - Py_DECREF(chktype); - Py_DECREF(ip); - } - chktype = minitype; - Py_INCREF(minitype); - goto finish; - } - - - deflt: - chktype = _use_default_type(op); - - finish: - outtype = _array_small_type(chktype, minitype); - Py_DECREF(chktype); - Py_DECREF(minitype); - /* - * VOID Arrays should not occur by "default" - * unless input was already a VOID - */ - if (outtype->type_num == PyArray_VOID && - minitype->type_num != PyArray_VOID) { - Py_DECREF(outtype); - return PyArray_DescrFromType(PyArray_OBJECT); - } - return outtype; -} - -/* new reference */ -NPY_NO_EXPORT PyArray_Descr * -_array_typedescr_fromstr(char *str) -{ - PyArray_Descr *descr; - int type_num; - char typechar; - int size; - char msg[] = "unsupported typestring"; - int swap; - char swapchar; - - swapchar = str[0]; - str += 1; - - typechar = str[0]; - size = atoi(str + 1); - switch (typechar) { - case 'b': - if (size == sizeof(Bool)) { - type_num = PyArray_BOOL; - } - else { - PyErr_SetString(PyExc_ValueError, msg); - return NULL; - } - break; - case 'u': - if (size == sizeof(uintp)) { - type_num = PyArray_UINTP; - } - else if (size == sizeof(char)) { - type_num = PyArray_UBYTE; - } - else if (size == sizeof(short)) { - type_num = PyArray_USHORT; - } - else if (size == sizeof(ulong)) { - type_num = PyArray_ULONG; - } - else if (size == sizeof(int)) { - type_num = PyArray_UINT; - } - else if (size == 
sizeof(ulonglong)) { - type_num = PyArray_ULONGLONG; - } - else { - PyErr_SetString(PyExc_ValueError, msg); - return NULL; - } - break; - case 'i': - if (size == sizeof(intp)) { - type_num = PyArray_INTP; - } - else if (size == sizeof(char)) { - type_num = PyArray_BYTE; - } - else if (size == sizeof(short)) { - type_num = PyArray_SHORT; - } - else if (size == sizeof(long)) { - type_num = PyArray_LONG; - } - else if (size == sizeof(int)) { - type_num = PyArray_INT; - } - else if (size == sizeof(longlong)) { - type_num = PyArray_LONGLONG; - } - else { - PyErr_SetString(PyExc_ValueError, msg); - return NULL; - } - break; - case 'f': - if (size == sizeof(float)) { - type_num = PyArray_FLOAT; - } - else if (size == sizeof(double)) { - type_num = PyArray_DOUBLE; - } - else if (size == sizeof(longdouble)) { - type_num = PyArray_LONGDOUBLE; - } - else { - PyErr_SetString(PyExc_ValueError, msg); - return NULL; - } - break; - case 'c': - if (size == sizeof(float)*2) { - type_num = PyArray_CFLOAT; - } - else if (size == sizeof(double)*2) { - type_num = PyArray_CDOUBLE; - } - else if (size == sizeof(longdouble)*2) { - type_num = PyArray_CLONGDOUBLE; - } - else { - PyErr_SetString(PyExc_ValueError, msg); - return NULL; - } - break; - case 'O': - if (size == sizeof(PyObject *)) { - type_num = PyArray_OBJECT; - } - else { - PyErr_SetString(PyExc_ValueError, msg); - return NULL; - } - break; - case PyArray_STRINGLTR: - type_num = PyArray_STRING; - break; - case PyArray_UNICODELTR: - type_num = PyArray_UNICODE; - size <<= 2; - break; - case 'V': - type_num = PyArray_VOID; - break; - default: - PyErr_SetString(PyExc_ValueError, msg); - return NULL; - } - - descr = PyArray_DescrFromType(type_num); - if (descr == NULL) { - return NULL; - } - swap = !PyArray_ISNBO(swapchar); - if (descr->elsize == 0 || swap) { - /* Need to make a new PyArray_Descr */ - PyArray_DESCR_REPLACE(descr); - if (descr==NULL) { - return NULL; - } - if (descr->elsize == 0) { - descr->elsize = size; - } - if 
(swap) { - descr->byteorder = swapchar; - } - } - return descr; -} - -NPY_NO_EXPORT char * -index2ptr(PyArrayObject *mp, intp i) -{ - intp dim0; - - if (mp->nd == 0) { - PyErr_SetString(PyExc_IndexError, "0-d arrays can't be indexed"); - return NULL; - } - dim0 = mp->dimensions[0]; - if (i < 0) { - i += dim0; - } - if (i == 0 && dim0 > 0) { - return mp->data; - } - if (i > 0 && i < dim0) { - return mp->data+i*mp->strides[0]; - } - PyErr_SetString(PyExc_IndexError,"index out of bounds"); - return NULL; -} - -NPY_NO_EXPORT int -_zerofill(PyArrayObject *ret) -{ - if (PyDataType_REFCHK(ret->descr)) { - PyObject *zero = PyInt_FromLong(0); - PyArray_FillObjectArray(ret, zero); - Py_DECREF(zero); - if (PyErr_Occurred()) { - Py_DECREF(ret); - return -1; - } - } - else { - intp n = PyArray_NBYTES(ret); - memset(ret->data, 0, n); - } - return 0; -} - -NPY_NO_EXPORT int -_IsAligned(PyArrayObject *ap) -{ - int i, alignment, aligned = 1; - intp ptr; - - /* The special casing for STRING and VOID types was removed - * in accordance with http://projects.scipy.org/numpy/ticket/1227 - * It used to be that IsAligned always returned True for these - * types, which is indeed the case when they are created using - * PyArray_DescrConverter(), but not necessarily when using - * PyArray_DescrAlignConverter(). 
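The `_array_typedescr_fromstr` routine above splits a typestring such as `<f8` into a byte-order character, a kind character, and a decimal item size before matching the size against the platform's C types. The front half of that parse can be sketched in Python; the size-to-`sizeof()` matching is omitted, and the kind letters assume the conventional `S`/`U` codes for the STRINGLTR/UNICODELTR cases:

```python
def split_typestr(s):
    # '<f8' -> ('<', 'f', 8): byte order, kind letter, item size.
    # atoi() on an empty tail yields 0, modeled the same way here.
    byteorder, kind, digits = s[0], s[1], s[2:]
    if kind not in "buifcOSUV":      # kinds accepted by the C switch
        raise ValueError("unsupported typestring")
    return byteorder, kind, int(digits) if digits else 0
```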
*/ - - alignment = ap->descr->alignment; - if (alignment == 1) { - return 1; - } - ptr = (intp) ap->data; - aligned = (ptr % alignment) == 0; - for (i = 0; i < ap->nd; i++) { - aligned &= ((ap->strides[i] % alignment) == 0); - } - return aligned != 0; -} - -NPY_NO_EXPORT Bool -_IsWriteable(PyArrayObject *ap) -{ - PyObject *base=ap->base; - void *dummy; - Py_ssize_t n; - - /* If we own our own data, then no-problem */ - if ((base == NULL) || (ap->flags & OWNDATA)) { - return TRUE; - } - /* - * Get to the final base object - * If it is a writeable array, then return TRUE - * If we can find an array object - * or a writeable buffer object as the final base object - * or a string object (for pickling support memory savings). - * - this last could be removed if a proper pickleable - * buffer was added to Python. - */ - - while(PyArray_Check(base)) { - if (PyArray_CHKFLAGS(base, OWNDATA)) { - return (Bool) (PyArray_ISWRITEABLE(base)); - } - base = PyArray_BASE(base); - } - - /* - * here so pickle support works seamlessly - * and unpickled array can be set and reset writeable - * -- could be abused -- - */ - if (PyString_Check(base)) { - return TRUE; - } - if (PyObject_AsWriteBuffer(base, &dummy, &n) < 0) { - return FALSE; - } - return TRUE; -} diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/common.h b/pythonPackages/numpy/numpy/core/src/multiarray/common.h deleted file mode 100755 index 67ee9b66f8..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/common.h +++ /dev/null @@ -1,34 +0,0 @@ -#ifndef _NPY_PRIVATE_COMMON_H_ -#define _NPY_PRIVATE_COMMON_H_ - -#define error_converting(x) (((x) == -1) && PyErr_Occurred()) - -NPY_NO_EXPORT PyArray_Descr * -_array_find_type(PyObject *op, PyArray_Descr *minitype, int max); - -NPY_NO_EXPORT PyArray_Descr * -_array_small_type(PyArray_Descr *chktype, PyArray_Descr* mintype); - -NPY_NO_EXPORT PyArray_Descr * -_array_find_python_scalar_type(PyObject *op); - -NPY_NO_EXPORT PyArray_Descr * 
-_array_typedescr_fromstr(char *str); - -NPY_NO_EXPORT char * -index2ptr(PyArrayObject *mp, intp i); - -NPY_NO_EXPORT int -_zerofill(PyArrayObject *ret); - -NPY_NO_EXPORT int -_IsAligned(PyArrayObject *ap); - -NPY_NO_EXPORT Bool -_IsWriteable(PyArrayObject *ap); - -#ifndef Py_UNICODE_WIDE -#include "ucsnarrow.h" -#endif - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/conversion_utils.c b/pythonPackages/numpy/numpy/core/src/multiarray/conversion_utils.c deleted file mode 100755 index abc254058e..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/conversion_utils.c +++ /dev/null @@ -1,776 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "npy_config.h" -#include "numpy/npy_3kcompat.h" - -#include "common.h" -#include "arraytypes.h" - -#include "conversion_utils.h" - -/**************************************************************** -* Useful function for conversion when used with PyArg_ParseTuple -****************************************************************/ - -/*NUMPY_API - * - * Useful to pass as converter function for O& processing in PyArgs_ParseTuple. - * - * This conversion function can be used with the "O&" argument for - * PyArg_ParseTuple. It will immediately return an object of array type - * or will convert to a CARRAY any other object. - * - * If you use PyArray_Converter, you must DECREF the array when finished - * as you get a new reference to it. 
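The `O&` converter contract used throughout this file is uniform: write the converted value through the output pointer and return PY_SUCCEED, or leave an exception set and return PY_FAIL. A Python model of that contract using the axis converter as the example (names and the MAX_DIMS value are illustrative assumptions, and a dict stands in for the output pointer):

```python
PY_SUCCEED, PY_FAIL = 1, 0
MAX_DIMS = 32   # assumed value of MAX_DIMS in this code base

def axis_converter(obj, out):
    # Model of PyArray_AxisConverter: None means "all axes" (MAX_DIMS);
    # anything else must coerce to an int, otherwise report PY_FAIL.
    if obj is None:
        out["axis"] = MAX_DIMS
        return PY_SUCCEED
    try:
        out["axis"] = int(obj)
        return PY_SUCCEED
    except (TypeError, ValueError):
        return PY_FAIL
```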
- */ -NPY_NO_EXPORT int -PyArray_Converter(PyObject *object, PyObject **address) -{ - if (PyArray_Check(object)) { - *address = object; - Py_INCREF(object); - return PY_SUCCEED; - } - else { - *address = PyArray_FromAny(object, NULL, 0, 0, CARRAY, NULL); - if (*address == NULL) { - return PY_FAIL; - } - return PY_SUCCEED; - } -} - -/*NUMPY_API - * Useful to pass as converter function for O& processing in - * PyArgs_ParseTuple for output arrays - */ -NPY_NO_EXPORT int -PyArray_OutputConverter(PyObject *object, PyArrayObject **address) -{ - if (object == NULL || object == Py_None) { - *address = NULL; - return PY_SUCCEED; - } - if (PyArray_Check(object)) { - *address = (PyArrayObject *)object; - return PY_SUCCEED; - } - else { - PyErr_SetString(PyExc_TypeError, - "output must be an array"); - *address = NULL; - return PY_FAIL; - } -} - -/*NUMPY_API - * Get intp chunk from sequence - * - * This function takes a Python sequence object and allocates and - * fills in an intp array with the converted values. 
- * - * Remember to free the pointer seq.ptr when done using - * PyDimMem_FREE(seq.ptr)** - */ -NPY_NO_EXPORT int -PyArray_IntpConverter(PyObject *obj, PyArray_Dims *seq) -{ - int len; - int nd; - - seq->ptr = NULL; - seq->len = 0; - if (obj == Py_None) { - return PY_SUCCEED; - } - len = PySequence_Size(obj); - if (len == -1) { - /* Check to see if it is a number */ - if (PyNumber_Check(obj)) { - len = 1; - } - } - if (len < 0) { - PyErr_SetString(PyExc_TypeError, - "expected sequence object with len >= 0"); - return PY_FAIL; - } - if (len > MAX_DIMS) { - PyErr_Format(PyExc_ValueError, "sequence too large; " \ - "must be smaller than %d", MAX_DIMS); - return PY_FAIL; - } - if (len > 0) { - seq->ptr = PyDimMem_NEW(len); - if (seq->ptr == NULL) { - PyErr_NoMemory(); - return PY_FAIL; - } - } - seq->len = len; - nd = PyArray_IntpFromSequence(obj, (intp *)seq->ptr, len); - if (nd == -1 || nd != len) { - PyDimMem_FREE(seq->ptr); - seq->ptr = NULL; - return PY_FAIL; - } - return PY_SUCCEED; -} - -/*NUMPY_API - * Get buffer chunk from object - * - * this function takes a Python object which exposes the (single-segment) - * buffer interface and returns a pointer to the data segment - * - * You should increment the reference count by one of buf->base - * if you will hang on to a reference - * - * You only get a borrowed reference to the object. Do not free the - * memory... 
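PyArray_IntpConverter above accepts None (no dimensions), a bare number (one dimension), or a sequence of at most MAX_DIMS entries. Its input handling can be modeled in Python, returning a plain list instead of filling a PyArray_Dims struct; the MAX_DIMS value of 32 is an assumption about this generation of NumPy:

```python
MAX_DIMS = 32   # assumed value of MAX_DIMS in this code base

def intp_converter(obj):
    # Model of PyArray_IntpConverter's accepted inputs.
    if obj is None:
        return []
    try:
        seq = [int(x) for x in obj]    # sequence of dimensions
    except TypeError:
        seq = [int(obj)]               # a lone number is a 1-d shape
    if len(seq) > MAX_DIMS:
        raise ValueError("sequence too large; "
                         "must be smaller than %d" % MAX_DIMS)
    return seq
```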
- */ -NPY_NO_EXPORT int -PyArray_BufferConverter(PyObject *obj, PyArray_Chunk *buf) -{ - Py_ssize_t buflen; - - buf->ptr = NULL; - buf->flags = BEHAVED; - buf->base = NULL; - if (obj == Py_None) { - return PY_SUCCEED; - } - if (PyObject_AsWriteBuffer(obj, &(buf->ptr), &buflen) < 0) { - PyErr_Clear(); - buf->flags &= ~WRITEABLE; - if (PyObject_AsReadBuffer(obj, (const void **)&(buf->ptr), - &buflen) < 0) { - return PY_FAIL; - } - } - buf->len = (intp) buflen; - - /* Point to the base of the buffer object if present */ -#if defined(NPY_PY3K) - if (PyMemoryView_Check(obj)) { - buf->base = PyMemoryView_GET_BASE(obj); - } -#else - if (PyBuffer_Check(obj)) { - buf->base = ((PyArray_Chunk *)obj)->base; - } -#endif - if (buf->base == NULL) { - buf->base = obj; - } - return PY_SUCCEED; -} - -/*NUMPY_API - * Get axis from an object (possibly None) -- a converter function, - */ -NPY_NO_EXPORT int -PyArray_AxisConverter(PyObject *obj, int *axis) -{ - if (obj == Py_None) { - *axis = MAX_DIMS; - } - else { - *axis = (int) PyInt_AsLong(obj); - if (PyErr_Occurred()) { - return PY_FAIL; - } - } - return PY_SUCCEED; -} - -/*NUMPY_API - * Convert an object to true / false - */ -NPY_NO_EXPORT int -PyArray_BoolConverter(PyObject *object, Bool *val) -{ - if (PyObject_IsTrue(object)) { - *val = TRUE; - } - else { - *val = FALSE; - } - if (PyErr_Occurred()) { - return PY_FAIL; - } - return PY_SUCCEED; -} - -/*NUMPY_API - * Convert object to endian - */ -NPY_NO_EXPORT int -PyArray_ByteorderConverter(PyObject *obj, char *endian) -{ - char *str; - PyObject *tmp = NULL; - - if (PyUnicode_Check(obj)) { - obj = tmp = PyUnicode_AsASCIIString(obj); - } - - *endian = PyArray_SWAP; - str = PyBytes_AsString(obj); - if (!str) { - Py_XDECREF(tmp); - return PY_FAIL; - } - if (strlen(str) < 1) { - PyErr_SetString(PyExc_ValueError, - "Byteorder string must be at least length 1"); - Py_XDECREF(tmp); - return PY_FAIL; - } - *endian = str[0]; - if (str[0] != PyArray_BIG && str[0] != PyArray_LITTLE - && 
str[0] != PyArray_NATIVE && str[0] != PyArray_IGNORE) { - if (str[0] == 'b' || str[0] == 'B') { - *endian = PyArray_BIG; - } - else if (str[0] == 'l' || str[0] == 'L') { - *endian = PyArray_LITTLE; - } - else if (str[0] == 'n' || str[0] == 'N') { - *endian = PyArray_NATIVE; - } - else if (str[0] == 'i' || str[0] == 'I') { - *endian = PyArray_IGNORE; - } - else if (str[0] == 's' || str[0] == 'S') { - *endian = PyArray_SWAP; - } - else { - PyErr_Format(PyExc_ValueError, - "%s is an unrecognized byteorder", - str); - Py_XDECREF(tmp); - return PY_FAIL; - } - } - Py_XDECREF(tmp); - return PY_SUCCEED; -} - -/*NUMPY_API - * Convert object to sort kind - */ -NPY_NO_EXPORT int -PyArray_SortkindConverter(PyObject *obj, NPY_SORTKIND *sortkind) -{ - char *str; - PyObject *tmp = NULL; - - if (PyUnicode_Check(obj)) { - obj = tmp = PyUnicode_AsASCIIString(obj); - } - - *sortkind = PyArray_QUICKSORT; - str = PyBytes_AsString(obj); - if (!str) { - Py_XDECREF(tmp); - return PY_FAIL; - } - if (strlen(str) < 1) { - PyErr_SetString(PyExc_ValueError, - "Sort kind string must be at least length 1"); - Py_XDECREF(tmp); - return PY_FAIL; - } - if (str[0] == 'q' || str[0] == 'Q') { - *sortkind = PyArray_QUICKSORT; - } - else if (str[0] == 'h' || str[0] == 'H') { - *sortkind = PyArray_HEAPSORT; - } - else if (str[0] == 'm' || str[0] == 'M') { - *sortkind = PyArray_MERGESORT; - } - else { - PyErr_Format(PyExc_ValueError, - "%s is an unrecognized kind of sort", - str); - Py_XDECREF(tmp); - return PY_FAIL; - } - Py_XDECREF(tmp); - return PY_SUCCEED; -} - -/*NUMPY_API - * Convert object to searchsorted side - */ -NPY_NO_EXPORT int -PyArray_SearchsideConverter(PyObject *obj, void *addr) -{ - NPY_SEARCHSIDE *side = (NPY_SEARCHSIDE *)addr; - char *str; - PyObject *tmp = NULL; - - if (PyUnicode_Check(obj)) { - obj = tmp = PyUnicode_AsASCIIString(obj); - } - - str = PyBytes_AsString(obj); - if (!str || strlen(str) < 1) { - PyErr_SetString(PyExc_ValueError, - "expected nonempty string for keyword 
'side'"); - Py_XDECREF(tmp); - return PY_FAIL; - } - - if (str[0] == 'l' || str[0] == 'L') { - *side = NPY_SEARCHLEFT; - } - else if (str[0] == 'r' || str[0] == 'R') { - *side = NPY_SEARCHRIGHT; - } - else { - PyErr_Format(PyExc_ValueError, - "'%s' is an invalid value for keyword 'side'", str); - Py_XDECREF(tmp); - return PY_FAIL; - } - Py_XDECREF(tmp); - return PY_SUCCEED; -} - -/***************************** -* Other conversion functions -*****************************/ - -/*NUMPY_API*/ -NPY_NO_EXPORT int -PyArray_PyIntAsInt(PyObject *o) -{ - long long_value = -1; - PyObject *obj; - static char *msg = "an integer is required"; - PyObject *arr; - PyArray_Descr *descr; - int ret; - - - if (!o) { - PyErr_SetString(PyExc_TypeError, msg); - return -1; - } - if (PyInt_Check(o)) { - long_value = (long) PyInt_AS_LONG(o); - goto finish; - } else if (PyLong_Check(o)) { - long_value = (long) PyLong_AsLong(o); - goto finish; - } - - descr = &INT_Descr; - arr = NULL; - if (PyArray_Check(o)) { - if (PyArray_SIZE(o)!=1 || !PyArray_ISINTEGER(o)) { - PyErr_SetString(PyExc_TypeError, msg); - return -1; - } - Py_INCREF(descr); - arr = PyArray_CastToType((PyArrayObject *)o, descr, 0); - } - if (PyArray_IsScalar(o, Integer)) { - Py_INCREF(descr); - arr = PyArray_FromScalar(o, descr); - } - if (arr != NULL) { - ret = *((int *)PyArray_DATA(arr)); - Py_DECREF(arr); - return ret; - } -#if (PY_VERSION_HEX >= 0x02050000) - if (PyIndex_Check(o)) { - PyObject* value = PyNumber_Index(o); - long_value = (longlong) PyInt_AsSsize_t(value); - goto finish; - } -#endif - if (Py_TYPE(o)->tp_as_number != NULL && \ - Py_TYPE(o)->tp_as_number->nb_int != NULL) { - obj = Py_TYPE(o)->tp_as_number->nb_int(o); - if (obj == NULL) { - return -1; - } - long_value = (long) PyLong_AsLong(obj); - Py_DECREF(obj); - } -#if !defined(NPY_PY3K) - else if (Py_TYPE(o)->tp_as_number != NULL && \ - Py_TYPE(o)->tp_as_number->nb_long != NULL) { - obj = Py_TYPE(o)->tp_as_number->nb_long(o); - if (obj == NULL) { - return -1; - 
} - long_value = (long) PyLong_AsLong(obj); - Py_DECREF(obj); - } -#endif - else { - PyErr_SetString(PyExc_NotImplementedError,""); - } - - finish: - if error_converting(long_value) { - PyErr_SetString(PyExc_TypeError, msg); - return -1; - } - -#if (SIZEOF_LONG > SIZEOF_INT) - if ((long_value < INT_MIN) || (long_value > INT_MAX)) { - PyErr_SetString(PyExc_ValueError, "integer won't fit into a C int"); - return -1; - } -#endif - return (int) long_value; -} - -/*NUMPY_API*/ -NPY_NO_EXPORT intp -PyArray_PyIntAsIntp(PyObject *o) -{ - longlong long_value = -1; - PyObject *obj; - static char *msg = "an integer is required"; - PyObject *arr; - PyArray_Descr *descr; - intp ret; - - if (!o) { - PyErr_SetString(PyExc_TypeError, msg); - return -1; - } - if (PyInt_Check(o)) { - long_value = (longlong) PyInt_AS_LONG(o); - goto finish; - } else if (PyLong_Check(o)) { - long_value = (longlong) PyLong_AsLongLong(o); - goto finish; - } - -#if SIZEOF_INTP == SIZEOF_LONG - descr = &LONG_Descr; -#elif SIZEOF_INTP == SIZEOF_INT - descr = &INT_Descr; -#else - descr = &LONGLONG_Descr; -#endif - arr = NULL; - - if (PyArray_Check(o)) { - if (PyArray_SIZE(o)!=1 || !PyArray_ISINTEGER(o)) { - PyErr_SetString(PyExc_TypeError, msg); - return -1; - } - Py_INCREF(descr); - arr = PyArray_CastToType((PyArrayObject *)o, descr, 0); - } - else if (PyArray_IsScalar(o, Integer)) { - Py_INCREF(descr); - arr = PyArray_FromScalar(o, descr); - } - if (arr != NULL) { - ret = *((intp *)PyArray_DATA(arr)); - Py_DECREF(arr); - return ret; - } - -#if (PY_VERSION_HEX >= 0x02050000) - if (PyIndex_Check(o)) { - PyObject* value = PyNumber_Index(o); - if (value == NULL) { - return -1; - } - long_value = (longlong) PyInt_AsSsize_t(value); - goto finish; - } -#endif -#if !defined(NPY_PY3K) - if (Py_TYPE(o)->tp_as_number != NULL && \ - Py_TYPE(o)->tp_as_number->nb_long != NULL) { - obj = Py_TYPE(o)->tp_as_number->nb_long(o); - if (obj != NULL) { - long_value = (longlong) PyLong_AsLongLong(obj); - Py_DECREF(obj); - } - } 
- else -#endif - if (Py_TYPE(o)->tp_as_number != NULL && \ - Py_TYPE(o)->tp_as_number->nb_int != NULL) { - obj = Py_TYPE(o)->tp_as_number->nb_int(o); - if (obj != NULL) { - long_value = (longlong) PyLong_AsLongLong(obj); - Py_DECREF(obj); - } - } - else { - PyErr_SetString(PyExc_NotImplementedError,""); - } - - finish: - if error_converting(long_value) { - PyErr_SetString(PyExc_TypeError, msg); - return -1; - } - -#if (SIZEOF_LONGLONG > SIZEOF_INTP) - if ((long_value < MIN_INTP) || (long_value > MAX_INTP)) { - PyErr_SetString(PyExc_ValueError, - "integer won't fit into a C intp"); - return -1; - } -#endif - return (intp) long_value; -} - -/*NUMPY_API - * PyArray_IntpFromSequence - * Returns the number of dimensions or -1 if an error occurred. - * vals must be large enough to hold maxvals - */ -NPY_NO_EXPORT int -PyArray_IntpFromSequence(PyObject *seq, intp *vals, int maxvals) -{ - int nd, i; - PyObject *op, *err; - - /* - * Check to see if sequence is a single integer first. - * or, can be made into one - */ - if ((nd=PySequence_Length(seq)) == -1) { - if (PyErr_Occurred()) PyErr_Clear(); -#if SIZEOF_LONG >= SIZEOF_INTP && !defined(NPY_PY3K) - if (!(op = PyNumber_Int(seq))) { - return -1; - } -#else - if (!(op = PyNumber_Long(seq))) { - return -1; - } -#endif - nd = 1; -#if SIZEOF_LONG >= SIZEOF_INTP - vals[0] = (intp ) PyInt_AsLong(op); -#else - vals[0] = (intp ) PyLong_AsLongLong(op); -#endif - Py_DECREF(op); - - /* - * Check whether there was an error - if the error was an overflow, raise - * a ValueError instead to be more helpful - */ - if(vals[0] == -1) { - err = PyErr_Occurred(); - if (err && - PyErr_GivenExceptionMatches(err, PyExc_OverflowError)) { - PyErr_SetString(PyExc_ValueError, - "Maximum allowed dimension exceeded"); - } - if(err != NULL) { - return -1; - } - } - } - else { - for (i = 0; i < MIN(nd,maxvals); i++) { - op = PySequence_GetItem(seq, i); - if (op == NULL) { - return -1; - } -#if SIZEOF_LONG >= SIZEOF_INTP - vals[i]=(intp

)PyInt_AsLong(op); -#else - vals[i]=(intp )PyLong_AsLongLong(op); -#endif - Py_DECREF(op); - - /* - * Check whether there was an error - if the error was an overflow, - * raise a ValueError instead to be more helpful - */ - if(vals[i] == -1) { - err = PyErr_Occurred(); - if (err && - PyErr_GivenExceptionMatches(err, PyExc_OverflowError)) { - PyErr_SetString(PyExc_ValueError, - "Maximum allowed dimension exceeded"); - } - if(err != NULL) { - return -1; - } - } - } - } - return nd; -} - -/*NUMPY_API - * Typestr converter - */ -NPY_NO_EXPORT int -PyArray_TypestrConvert(int itemsize, int gentype) -{ - int newtype = gentype; - - if (gentype == PyArray_GENBOOLLTR) { - if (itemsize == 1) { - newtype = PyArray_BOOL; - } - else { - newtype = PyArray_NOTYPE; - } - } - else if (gentype == PyArray_SIGNEDLTR) { - switch(itemsize) { - case 1: - newtype = PyArray_INT8; - break; - case 2: - newtype = PyArray_INT16; - break; - case 4: - newtype = PyArray_INT32; - break; - case 8: - newtype = PyArray_INT64; - break; -#ifdef PyArray_INT128 - case 16: - newtype = PyArray_INT128; - break; -#endif - default: - newtype = PyArray_NOTYPE; - } - } - else if (gentype == PyArray_UNSIGNEDLTR) { - switch(itemsize) { - case 1: - newtype = PyArray_UINT8; - break; - case 2: - newtype = PyArray_UINT16; - break; - case 4: - newtype = PyArray_UINT32; - break; - case 8: - newtype = PyArray_UINT64; - break; -#ifdef PyArray_INT128 - case 16: - newtype = PyArray_UINT128; - break; -#endif - default: - newtype = PyArray_NOTYPE; - break; - } - } - else if (gentype == PyArray_FLOATINGLTR) { - switch(itemsize) { - case 4: - newtype = PyArray_FLOAT32; - break; - case 8: - newtype = PyArray_FLOAT64; - break; -#ifdef PyArray_FLOAT80 - case 10: - newtype = PyArray_FLOAT80; - break; -#endif -#ifdef PyArray_FLOAT96 - case 12: - newtype = PyArray_FLOAT96; - break; -#endif -#ifdef PyArray_FLOAT128 - case 16: - newtype = PyArray_FLOAT128; - break; -#endif - default: - newtype = PyArray_NOTYPE; - } - } - else if
(gentype == PyArray_COMPLEXLTR) { - switch(itemsize) { - case 8: - newtype = PyArray_COMPLEX64; - break; - case 16: - newtype = PyArray_COMPLEX128; - break; -#ifdef PyArray_FLOAT80 - case 20: - newtype = PyArray_COMPLEX160; - break; -#endif -#ifdef PyArray_FLOAT96 - case 24: - newtype = PyArray_COMPLEX192; - break; -#endif -#ifdef PyArray_FLOAT128 - case 32: - newtype = PyArray_COMPLEX256; - break; -#endif - default: - newtype = PyArray_NOTYPE; - } - } - return newtype; -} - -/* Lifted from numarray */ -/* TODO: not documented */ -/*NUMPY_API - PyArray_IntTupleFromIntp -*/ -NPY_NO_EXPORT PyObject * -PyArray_IntTupleFromIntp(int len, intp *vals) -{ - int i; - PyObject *intTuple = PyTuple_New(len); - - if (!intTuple) { - goto fail; - } - for (i = 0; i < len; i++) { -#if SIZEOF_INTP <= SIZEOF_LONG - PyObject *o = PyInt_FromLong((long) vals[i]); -#else - PyObject *o = PyLong_FromLongLong((longlong) vals[i]); -#endif - if (!o) { - Py_DECREF(intTuple); - intTuple = NULL; - goto fail; - } - PyTuple_SET_ITEM(intTuple, i, o); - } - - fail: - return intTuple; -} - diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/conversion_utils.h b/pythonPackages/numpy/numpy/core/src/multiarray/conversion_utils.h deleted file mode 100755 index 64b26b23ef..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/conversion_utils.h +++ /dev/null @@ -1,43 +0,0 @@ -#ifndef _NPY_PRIVATE_CONVERSION_UTILS_H_ -#define _NPY_PRIVATE_CONVERSION_UTILS_H_ - -NPY_NO_EXPORT int -PyArray_Converter(PyObject *object, PyObject **address); - -NPY_NO_EXPORT int -PyArray_OutputConverter(PyObject *object, PyArrayObject **address); - -NPY_NO_EXPORT int -PyArray_IntpConverter(PyObject *obj, PyArray_Dims *seq); - -NPY_NO_EXPORT int -PyArray_BufferConverter(PyObject *obj, PyArray_Chunk *buf); - -NPY_NO_EXPORT int -PyArray_BoolConverter(PyObject *object, Bool *val); - -NPY_NO_EXPORT int -PyArray_ByteorderConverter(PyObject *obj, char *endian); - -NPY_NO_EXPORT int -PyArray_SortkindConverter(PyObject 
*obj, NPY_SORTKIND *sortkind); - -NPY_NO_EXPORT int -PyArray_SearchsideConverter(PyObject *obj, void *addr); - -NPY_NO_EXPORT int -PyArray_PyIntAsInt(PyObject *o); - -NPY_NO_EXPORT intp -PyArray_PyIntAsIntp(PyObject *o); - -NPY_NO_EXPORT int -PyArray_IntpFromSequence(PyObject *seq, intp *vals, int maxvals); - -NPY_NO_EXPORT int -PyArray_TypestrConvert(int itemsize, int gentype); - -NPY_NO_EXPORT PyObject * -PyArray_IntTupleFromIntp(int len, intp *vals); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/convert.c b/pythonPackages/numpy/numpy/core/src/multiarray/convert.c deleted file mode 100755 index a2c965181d..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/convert.c +++ /dev/null @@ -1,399 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "arrayobject.h" -#include "mapping.h" - -#include "convert.h" - -/*NUMPY_API - * To List - */ -NPY_NO_EXPORT PyObject * -PyArray_ToList(PyArrayObject *self) -{ - PyObject *lp; - PyArrayObject *v; - intp sz, i; - - if (!PyArray_Check(self)) { - return (PyObject *)self; - } - if (self->nd == 0) { - return self->descr->f->getitem(self->data,self); - } - - sz = self->dimensions[0]; - lp = PyList_New(sz); - for (i = 0; i < sz; i++) { - v = (PyArrayObject *)array_big_item(self, i); - if (PyArray_Check(v) && (v->nd >= self->nd)) { - PyErr_SetString(PyExc_RuntimeError, - "array_item not returning smaller-" \ - "dimensional array"); - Py_DECREF(v); - Py_DECREF(lp); - return NULL; - } - PyList_SetItem(lp, i, PyArray_ToList(v)); - Py_DECREF(v); - } - return lp; -} - -/* XXX: FIXME --- add ordering argument to - Allow Fortran ordering on write - This will need the addition of a Fortran-order iterator. 
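PyArray_ToList above recurses along the first axis, converting each sub-array until the 0-d getitem case is reached; this is what `ndarray.tolist()` exposes at the Python level. The recursion in a hypothetical pure-Python form, with nested lists standing in for arrays and scalars for 0-d elements:

```python
def to_list(a):
    # Mirrors PyArray_ToList: a 0-d "array" (here, a bare scalar) is
    # returned directly; otherwise recurse over the first axis.
    if not isinstance(a, list):
        return a
    return [to_list(item) for item in a]
```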
- */ - -/*NUMPY_API - To File -*/ -NPY_NO_EXPORT int -PyArray_ToFile(PyArrayObject *self, FILE *fp, char *sep, char *format) -{ - intp size; - intp n, n2; - size_t n3, n4; - PyArrayIterObject *it; - PyObject *obj, *strobj, *tupobj, *byteobj; - - n3 = (sep ? strlen((const char *)sep) : 0); - if (n3 == 0) { - /* binary data */ - if (PyDataType_FLAGCHK(self->descr, NPY_LIST_PICKLE)) { - PyErr_SetString(PyExc_ValueError, "cannot write " \ - "object arrays to a file in " \ - "binary mode"); - return -1; - } - - if (PyArray_ISCONTIGUOUS(self)) { - size = PyArray_SIZE(self); - NPY_BEGIN_ALLOW_THREADS; - n = fwrite((const void *)self->data, - (size_t) self->descr->elsize, - (size_t) size, fp); - NPY_END_ALLOW_THREADS; - if (n < size) { - PyErr_Format(PyExc_ValueError, - "%ld requested and %ld written", - (long) size, (long) n); - return -1; - } - } - else { - NPY_BEGIN_THREADS_DEF; - - it = (PyArrayIterObject *) PyArray_IterNew((PyObject *)self); - NPY_BEGIN_THREADS; - while (it->index < it->size) { - if (fwrite((const void *)it->dataptr, - (size_t) self->descr->elsize, - 1, fp) < 1) { - NPY_END_THREADS; - PyErr_Format(PyExc_IOError, - "problem writing element"\ - " %"INTP_FMT" to file", - it->index); - Py_DECREF(it); - return -1; - } - PyArray_ITER_NEXT(it); - } - NPY_END_THREADS; - Py_DECREF(it); - } - } - else { - /* - * text data - */ - - it = (PyArrayIterObject *) - PyArray_IterNew((PyObject *)self); - n4 = (format ? 
strlen((const char *)format) : 0); - while (it->index < it->size) { - obj = self->descr->f->getitem(it->dataptr, self); - if (obj == NULL) { - Py_DECREF(it); - return -1; - } - if (n4 == 0) { - /* - * standard writing - */ - strobj = PyObject_Str(obj); - Py_DECREF(obj); - if (strobj == NULL) { - Py_DECREF(it); - return -1; - } - } - else { - /* - * use format string - */ - tupobj = PyTuple_New(1); - if (tupobj == NULL) { - Py_DECREF(it); - return -1; - } - PyTuple_SET_ITEM(tupobj,0,obj); - obj = PyUString_FromString((const char *)format); - if (obj == NULL) { - Py_DECREF(tupobj); - Py_DECREF(it); - return -1; - } - strobj = PyUString_Format(obj, tupobj); - Py_DECREF(obj); - Py_DECREF(tupobj); - if (strobj == NULL) { - Py_DECREF(it); - return -1; - } - } -#if defined(NPY_PY3K) - byteobj = PyUnicode_AsASCIIString(strobj); -#else - byteobj = strobj; -#endif - NPY_BEGIN_ALLOW_THREADS; - n2 = PyBytes_GET_SIZE(byteobj); - n = fwrite(PyBytes_AS_STRING(byteobj), 1, n2, fp); - NPY_END_ALLOW_THREADS; -#if defined(NPY_PY3K) - Py_DECREF(byteobj); -#endif - if (n < n2) { - PyErr_Format(PyExc_IOError, - "problem writing element %"INTP_FMT\ - " to file", it->index); - Py_DECREF(strobj); - Py_DECREF(it); - return -1; - } - /* write separator for all but last one */ - if (it->index != it->size-1) { - if (fwrite(sep, 1, n3, fp) < n3) { - PyErr_Format(PyExc_IOError, - "problem writing "\ - "separator to file"); - Py_DECREF(strobj); - Py_DECREF(it); - return -1; - } - } - Py_DECREF(strobj); - PyArray_ITER_NEXT(it); - } - Py_DECREF(it); - } - return 0; -} - -/*NUMPY_API*/ -NPY_NO_EXPORT PyObject * -PyArray_ToString(PyArrayObject *self, NPY_ORDER order) -{ - intp numbytes; - intp index; - char *dptr; - int elsize; - PyObject *ret; - PyArrayIterObject *it; - - if (order == NPY_ANYORDER) - order = PyArray_ISFORTRAN(self); - - /* if (PyArray_TYPE(self) == PyArray_OBJECT) { - PyErr_SetString(PyExc_ValueError, "a string for the data" \ - "in an object array is not appropriate"); - return 
NULL; - } - */ - - numbytes = PyArray_NBYTES(self); - if ((PyArray_ISCONTIGUOUS(self) && (order == NPY_CORDER)) - || (PyArray_ISFORTRAN(self) && (order == NPY_FORTRANORDER))) { - ret = PyBytes_FromStringAndSize(self->data, (Py_ssize_t) numbytes); - } - else { - PyObject *new; - if (order == NPY_FORTRANORDER) { - /* iterators are always in C-order */ - new = PyArray_Transpose(self, NULL); - if (new == NULL) { - return NULL; - } - } - else { - Py_INCREF(self); - new = (PyObject *)self; - } - it = (PyArrayIterObject *)PyArray_IterNew(new); - Py_DECREF(new); - if (it == NULL) { - return NULL; - } - ret = PyBytes_FromStringAndSize(NULL, (Py_ssize_t) numbytes); - if (ret == NULL) { - Py_DECREF(it); - return NULL; - } - dptr = PyBytes_AS_STRING(ret); - index = it->size; - elsize = self->descr->elsize; - while (index--) { - memcpy(dptr, it->dataptr, elsize); - dptr += elsize; - PyArray_ITER_NEXT(it); - } - Py_DECREF(it); - } - return ret; -} - -/*NUMPY_API*/ -NPY_NO_EXPORT int -PyArray_FillWithScalar(PyArrayObject *arr, PyObject *obj) -{ - PyObject *newarr; - int itemsize, swap; - void *fromptr; - PyArray_Descr *descr; - intp size; - PyArray_CopySwapFunc *copyswap; - - itemsize = arr->descr->elsize; - if (PyArray_ISOBJECT(arr)) { - fromptr = &obj; - swap = 0; - newarr = NULL; - } - else { - descr = PyArray_DESCR(arr); - Py_INCREF(descr); - newarr = PyArray_FromAny(obj, descr, 0,0, ALIGNED, NULL); - if (newarr == NULL) { - return -1; - } - fromptr = PyArray_DATA(newarr); - swap = (PyArray_ISNOTSWAPPED(arr) != PyArray_ISNOTSWAPPED(newarr)); - } - size=PyArray_SIZE(arr); - copyswap = arr->descr->f->copyswap; - if (PyArray_ISONESEGMENT(arr)) { - char *toptr=PyArray_DATA(arr); - PyArray_FillWithScalarFunc* fillwithscalar = - arr->descr->f->fillwithscalar; - if (fillwithscalar && PyArray_ISALIGNED(arr)) { - copyswap(fromptr, NULL, swap, newarr); - fillwithscalar(toptr, size, fromptr, arr); - } - else { - while (size--) { - copyswap(toptr, fromptr, swap, arr); - toptr += 
itemsize; - } - } - } - else { - PyArrayIterObject *iter; - - iter = (PyArrayIterObject *)\ - PyArray_IterNew((PyObject *)arr); - if (iter == NULL) { - Py_XDECREF(newarr); - return -1; - } - while (size--) { - copyswap(iter->dataptr, fromptr, swap, arr); - PyArray_ITER_NEXT(iter); - } - Py_DECREF(iter); - } - Py_XDECREF(newarr); - return 0; -} - -/*NUMPY_API - Copy an array. -*/ -NPY_NO_EXPORT PyObject * -PyArray_NewCopy(PyArrayObject *m1, NPY_ORDER fortran) -{ - PyArrayObject *ret; - if (fortran == PyArray_ANYORDER) - fortran = PyArray_ISFORTRAN(m1); - - Py_INCREF(m1->descr); - ret = (PyArrayObject *)PyArray_NewFromDescr(Py_TYPE(m1), - m1->descr, - m1->nd, - m1->dimensions, - NULL, NULL, - fortran, - (PyObject *)m1); - if (ret == NULL) { - return NULL; - } - if (PyArray_CopyInto(ret, m1) == -1) { - Py_DECREF(ret); - return NULL; - } - - return (PyObject *)ret; -} - -/*NUMPY_API - * View - * steals a reference to type -- accepts NULL - */ -NPY_NO_EXPORT PyObject * -PyArray_View(PyArrayObject *self, PyArray_Descr *type, PyTypeObject *pytype) -{ - PyObject *new = NULL; - PyTypeObject *subtype; - - if (pytype) { - subtype = pytype; - } - else { - subtype = Py_TYPE(self); - } - Py_INCREF(self->descr); - new = PyArray_NewFromDescr(subtype, - self->descr, - self->nd, self->dimensions, - self->strides, - self->data, - self->flags, (PyObject *)self); - if (new == NULL) { - return NULL; - } - Py_INCREF(self); - PyArray_BASE(new) = (PyObject *)self; - - if (type != NULL) { - if (PyObject_SetAttrString(new, "dtype", - (PyObject *)type) < 0) { - Py_DECREF(new); - Py_DECREF(type); - return NULL; - } - Py_DECREF(type); - } - return new; -} diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/convert.h b/pythonPackages/numpy/numpy/core/src/multiarray/convert.h deleted file mode 100755 index de24e27cfc..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/convert.h +++ /dev/null @@ -1,4 +0,0 @@ -#ifndef _NPY_ARRAYOBJECT_CONVERT_H_ -#define 
_NPY_ARRAYOBJECT_CONVERT_H_ - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/convert_datatype.c b/pythonPackages/numpy/numpy/core/src/multiarray/convert_datatype.c deleted file mode 100755 index e511b7509e..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/convert_datatype.c +++ /dev/null @@ -1,974 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include <Python.h> -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "common.h" -#include "scalartypes.h" -#include "mapping.h" - -#include "convert_datatype.h" - -/*NUMPY_API - * For backward compatibility - * - * Cast an array using typecode structure. - * steals reference to at --- cannot be NULL - */ -NPY_NO_EXPORT PyObject * -PyArray_CastToType(PyArrayObject *mp, PyArray_Descr *at, int fortran) -{ - PyObject *out; - int ret; - PyArray_Descr *mpd; - - mpd = mp->descr; - - if (((mpd == at) || - ((mpd->type_num == at->type_num) && - PyArray_EquivByteorders(mpd->byteorder, at->byteorder) && - ((mpd->elsize == at->elsize) || (at->elsize==0)))) && - PyArray_ISBEHAVED_RO(mp)) { - Py_DECREF(at); - Py_INCREF(mp); - return (PyObject *)mp; - } - - if (at->elsize == 0) { - PyArray_DESCR_REPLACE(at); - if (at == NULL) { - return NULL; - } - if (mpd->type_num == PyArray_STRING && - at->type_num == PyArray_UNICODE) { - at->elsize = mpd->elsize << 2; - } - if (mpd->type_num == PyArray_UNICODE && - at->type_num == PyArray_STRING) { - at->elsize = mpd->elsize >> 2; - } - if (at->type_num == PyArray_VOID) { - at->elsize = mpd->elsize; - } - } - - out = PyArray_NewFromDescr(Py_TYPE(mp), at, - mp->nd, - mp->dimensions, - NULL, NULL, - fortran, - (PyObject *)mp); - - if (out == NULL) { - return NULL; - } - ret = PyArray_CastTo((PyArrayObject *)out, mp); - if (ret != -1) { - return out; - } - - Py_DECREF(out); - return NULL; - -} - -/*NUMPY_API - * Get a cast 
function to cast from the input descriptor to the - * output type_number (must be a registered data-type). - * Returns NULL if un-successful. - */ -NPY_NO_EXPORT PyArray_VectorUnaryFunc * -PyArray_GetCastFunc(PyArray_Descr *descr, int type_num) -{ - PyArray_VectorUnaryFunc *castfunc = NULL; - - if (type_num < PyArray_NTYPES) { - castfunc = descr->f->cast[type_num]; - } - if (castfunc == NULL) { - PyObject *obj = descr->f->castdict; - if (obj && PyDict_Check(obj)) { - PyObject *key; - PyObject *cobj; - - key = PyInt_FromLong(type_num); - cobj = PyDict_GetItem(obj, key); - Py_DECREF(key); - if (NpyCapsule_Check(cobj)) { - castfunc = PyCObject_AsVoidPtr(cobj); - } - } - } - if (PyTypeNum_ISCOMPLEX(descr->type_num) && - !PyTypeNum_ISCOMPLEX(type_num) && - PyTypeNum_ISNUMBER(type_num) && - !PyTypeNum_ISBOOL(type_num)) { - PyObject *cls = NULL, *obj = NULL; - obj = PyImport_ImportModule("numpy.core"); - if (obj) { - cls = PyObject_GetAttrString(obj, "ComplexWarning"); - Py_DECREF(obj); - } -#if PY_VERSION_HEX >= 0x02050000 - PyErr_WarnEx(cls, - "Casting complex values to real discards the imaginary " - "part", 0); -#else - PyErr_Warn(cls, - "Casting complex values to real discards the imaginary " - "part"); -#endif - Py_XDECREF(cls); - } - if (castfunc) { - return castfunc; - } - - PyErr_SetString(PyExc_ValueError, "No cast function available."); - return NULL; -} - -/* - * Reference counts: - * copyswapn is used which increases and decreases reference counts for OBJECT arrays. - * All that needs to happen is for any reference counts in the buffers to be - * decreased when completely finished with the buffers. 
- * - * buffers[0] is the destination - * buffers[1] is the source - */ -static void -_strided_buffered_cast(char *dptr, intp dstride, int delsize, int dswap, - PyArray_CopySwapNFunc *dcopyfunc, - char *sptr, intp sstride, int selsize, int sswap, - PyArray_CopySwapNFunc *scopyfunc, - intp N, char **buffers, int bufsize, - PyArray_VectorUnaryFunc *castfunc, - PyArrayObject *dest, PyArrayObject *src) -{ - int i; - if (N <= bufsize) { - /* - * 1. copy input to buffer and swap - * 2. cast input to output - * 3. swap output if necessary and copy from output buffer - */ - scopyfunc(buffers[1], selsize, sptr, sstride, N, sswap, src); - castfunc(buffers[1], buffers[0], N, src, dest); - dcopyfunc(dptr, dstride, buffers[0], delsize, N, dswap, dest); - return; - } - - /* otherwise we need to divide up into bufsize pieces */ - i = 0; - while (N > 0) { - int newN = MIN(N, bufsize); - - _strided_buffered_cast(dptr+i*dstride, dstride, delsize, - dswap, dcopyfunc, - sptr+i*sstride, sstride, selsize, - sswap, scopyfunc, - newN, buffers, bufsize, castfunc, dest, src); - i += newN; - N -= bufsize; - } - return; -} - -static int -_broadcast_cast(PyArrayObject *out, PyArrayObject *in, - PyArray_VectorUnaryFunc *castfunc, int iswap, int oswap) -{ - int delsize, selsize, maxaxis, i, N; - PyArrayMultiIterObject *multi; - intp maxdim, ostrides, istrides; - char *buffers[2]; - PyArray_CopySwapNFunc *ocopyfunc, *icopyfunc; - char *obptr; - NPY_BEGIN_THREADS_DEF; - - delsize = PyArray_ITEMSIZE(out); - selsize = PyArray_ITEMSIZE(in); - multi = (PyArrayMultiIterObject *)PyArray_MultiIterNew(2, out, in); - if (multi == NULL) { - return -1; - } - - if (multi->size != PyArray_SIZE(out)) { - PyErr_SetString(PyExc_ValueError, - "array dimensions are not "\ - "compatible for copy"); - Py_DECREF(multi); - return -1; - } - - icopyfunc = in->descr->f->copyswapn; - ocopyfunc = out->descr->f->copyswapn; - maxaxis = PyArray_RemoveSmallest(multi); - if (maxaxis < 0) { - /* cast 1 0-d array to another */ - N 
= 1; - maxdim = 1; - ostrides = delsize; - istrides = selsize; - } - else { - maxdim = multi->dimensions[maxaxis]; - N = (int) (MIN(maxdim, PyArray_BUFSIZE)); - ostrides = multi->iters[0]->strides[maxaxis]; - istrides = multi->iters[1]->strides[maxaxis]; - - } - buffers[0] = _pya_malloc(N*delsize); - if (buffers[0] == NULL) { - PyErr_NoMemory(); - return -1; - } - buffers[1] = _pya_malloc(N*selsize); - if (buffers[1] == NULL) { - _pya_free(buffers[0]); - PyErr_NoMemory(); - return -1; - } - if (PyDataType_FLAGCHK(out->descr, NPY_NEEDS_INIT)) { - memset(buffers[0], 0, N*delsize); - } - if (PyDataType_FLAGCHK(in->descr, NPY_NEEDS_INIT)) { - memset(buffers[1], 0, N*selsize); - } - -#if NPY_ALLOW_THREADS - if (PyArray_ISNUMBER(in) && PyArray_ISNUMBER(out)) { - NPY_BEGIN_THREADS; - } -#endif - - while (multi->index < multi->size) { - _strided_buffered_cast(multi->iters[0]->dataptr, - ostrides, - delsize, oswap, ocopyfunc, - multi->iters[1]->dataptr, - istrides, - selsize, iswap, icopyfunc, - maxdim, buffers, N, - castfunc, out, in); - PyArray_MultiIter_NEXT(multi); - } -#if NPY_ALLOW_THREADS - if (PyArray_ISNUMBER(in) && PyArray_ISNUMBER(out)) { - NPY_END_THREADS; - } -#endif - Py_DECREF(multi); - if (PyDataType_REFCHK(in->descr)) { - obptr = buffers[1]; - for (i = 0; i < N; i++, obptr+=selsize) { - PyArray_Item_XDECREF(obptr, in->descr); - } - } - if (PyDataType_REFCHK(out->descr)) { - obptr = buffers[0]; - for (i = 0; i < N; i++, obptr+=delsize) { - PyArray_Item_XDECREF(obptr, out->descr); - } - } - _pya_free(buffers[0]); - _pya_free(buffers[1]); - if (PyErr_Occurred()) { - return -1; - } - - return 0; -} - - - -/* - * Must be broadcastable. - * This code is very similar to PyArray_CopyInto/PyArray_MoveInto - * except casting is done --- PyArray_BUFSIZE is used - * as the size of the casting buffer. - */ - -/*NUMPY_API - * Cast to an already created array. 
- */ -NPY_NO_EXPORT int -PyArray_CastTo(PyArrayObject *out, PyArrayObject *mp) -{ - int simple; - int same; - PyArray_VectorUnaryFunc *castfunc = NULL; - intp mpsize = PyArray_SIZE(mp); - int iswap, oswap; - NPY_BEGIN_THREADS_DEF; - - if (mpsize == 0) { - return 0; - } - if (!PyArray_ISWRITEABLE(out)) { - PyErr_SetString(PyExc_ValueError, "output array is not writeable"); - return -1; - } - - castfunc = PyArray_GetCastFunc(mp->descr, out->descr->type_num); - if (castfunc == NULL) { - return -1; - } - - same = PyArray_SAMESHAPE(out, mp); - simple = same && ((PyArray_ISCARRAY_RO(mp) && PyArray_ISCARRAY(out)) || - (PyArray_ISFARRAY_RO(mp) && PyArray_ISFARRAY(out))); - if (simple) { -#if NPY_ALLOW_THREADS - if (PyArray_ISNUMBER(mp) && PyArray_ISNUMBER(out)) { - NPY_BEGIN_THREADS; - } -#endif - castfunc(mp->data, out->data, mpsize, mp, out); - -#if NPY_ALLOW_THREADS - if (PyArray_ISNUMBER(mp) && PyArray_ISNUMBER(out)) { - NPY_END_THREADS; - } -#endif - if (PyErr_Occurred()) { - return -1; - } - return 0; - } - - /* - * If the input or output is OBJECT, STRING, UNICODE, or VOID - * then getitem and setitem are used for the cast - * and byteswapping is handled by those methods - */ - if (PyArray_ISFLEXIBLE(mp) || PyArray_ISOBJECT(mp) || PyArray_ISOBJECT(out) || - PyArray_ISFLEXIBLE(out)) { - iswap = oswap = 0; - } - else { - iswap = PyArray_ISBYTESWAPPED(mp); - oswap = PyArray_ISBYTESWAPPED(out); - } - - return _broadcast_cast(out, mp, castfunc, iswap, oswap); -} - - -static int -_bufferedcast(PyArrayObject *out, PyArrayObject *in, - PyArray_VectorUnaryFunc *castfunc) -{ - char *inbuffer, *bptr, *optr; - char *outbuffer=NULL; - PyArrayIterObject *it_in = NULL, *it_out = NULL; - intp i, index; - intp ncopies = PyArray_SIZE(out) / PyArray_SIZE(in); - int elsize=in->descr->elsize; - int nels = PyArray_BUFSIZE; - int el; - int inswap, outswap = 0; - int obuf=!PyArray_ISCARRAY(out); - int oelsize = out->descr->elsize; - PyArray_CopySwapFunc *in_csn; - PyArray_CopySwapFunc 
*out_csn; - int retval = -1; - - in_csn = in->descr->f->copyswap; - out_csn = out->descr->f->copyswap; - - /* - * If the input or output is STRING, UNICODE, or VOID - * then getitem and setitem are used for the cast - * and byteswapping is handled by those methods - */ - - inswap = !(PyArray_ISFLEXIBLE(in) || PyArray_ISNOTSWAPPED(in)); - - inbuffer = PyDataMem_NEW(PyArray_BUFSIZE*elsize); - if (inbuffer == NULL) { - return -1; - } - if (PyArray_ISOBJECT(in)) { - memset(inbuffer, 0, PyArray_BUFSIZE*elsize); - } - it_in = (PyArrayIterObject *)PyArray_IterNew((PyObject *)in); - if (it_in == NULL) { - goto exit; - } - if (obuf) { - outswap = !(PyArray_ISFLEXIBLE(out) || - PyArray_ISNOTSWAPPED(out)); - outbuffer = PyDataMem_NEW(PyArray_BUFSIZE*oelsize); - if (outbuffer == NULL) { - goto exit; - } - if (PyArray_ISOBJECT(out)) { - memset(outbuffer, 0, PyArray_BUFSIZE*oelsize); - } - it_out = (PyArrayIterObject *)PyArray_IterNew((PyObject *)out); - if (it_out == NULL) { - goto exit; - } - nels = MIN(nels, PyArray_BUFSIZE); - } - - optr = (obuf) ? outbuffer: out->data; - bptr = inbuffer; - el = 0; - while (ncopies--) { - index = it_in->size; - PyArray_ITER_RESET(it_in); - while (index--) { - in_csn(bptr, it_in->dataptr, inswap, in); - bptr += elsize; - PyArray_ITER_NEXT(it_in); - el += 1; - if ((el == nels) || (index == 0)) { - /* buffer filled, do cast */ - castfunc(inbuffer, optr, el, in, out); - if (obuf) { - /* Copy from outbuffer to array */ - for (i = 0; i < el; i++) { - out_csn(it_out->dataptr, - optr, outswap, - out); - optr += oelsize; - PyArray_ITER_NEXT(it_out); - } - optr = outbuffer; - } - else { - optr += out->descr->elsize * nels; - } - el = 0; - bptr = inbuffer; - } - } - } - retval = 0; - - exit: - Py_XDECREF(it_in); - PyDataMem_FREE(inbuffer); - PyDataMem_FREE(outbuffer); - if (obuf) { - Py_XDECREF(it_out); - } - return retval; -} - -/*NUMPY_API - * Cast to an already created array. 
Arrays don't have to be "broadcastable" - * Only requirement is they have the same number of elements. - */ -NPY_NO_EXPORT int -PyArray_CastAnyTo(PyArrayObject *out, PyArrayObject *mp) -{ - int simple; - PyArray_VectorUnaryFunc *castfunc = NULL; - npy_intp mpsize = PyArray_SIZE(mp); - - if (mpsize == 0) { - return 0; - } - if (!PyArray_ISWRITEABLE(out)) { - PyErr_SetString(PyExc_ValueError, "output array is not writeable"); - return -1; - } - - if (!(mpsize == PyArray_SIZE(out))) { - PyErr_SetString(PyExc_ValueError, - "arrays must have the same number of" - " elements for the cast."); - return -1; - } - - castfunc = PyArray_GetCastFunc(mp->descr, out->descr->type_num); - if (castfunc == NULL) { - return -1; - } - simple = ((PyArray_ISCARRAY_RO(mp) && PyArray_ISCARRAY(out)) || - (PyArray_ISFARRAY_RO(mp) && PyArray_ISFARRAY(out))); - if (simple) { - castfunc(mp->data, out->data, mpsize, mp, out); - return 0; - } - if (PyArray_SAMESHAPE(out, mp)) { - int iswap, oswap; - iswap = PyArray_ISBYTESWAPPED(mp) && !PyArray_ISFLEXIBLE(mp); - oswap = PyArray_ISBYTESWAPPED(out) && !PyArray_ISFLEXIBLE(out); - return _broadcast_cast(out, mp, castfunc, iswap, oswap); - } - return _bufferedcast(out, mp, castfunc); -} - -/*NUMPY_API - *Check the type coercion rules. - */ -NPY_NO_EXPORT int -PyArray_CanCastSafely(int fromtype, int totype) -{ - PyArray_Descr *from, *to; - int felsize, telsize; - - if (fromtype == totype) { - return 1; - } - if (fromtype == PyArray_BOOL) { - return 1; - } - if (totype == PyArray_BOOL) { - return 0; - } - if (totype == PyArray_OBJECT || totype == PyArray_VOID) { - return 1; - } - if (fromtype == PyArray_OBJECT || fromtype == PyArray_VOID) { - return 0; - } - from = PyArray_DescrFromType(fromtype); - /* - * cancastto is a PyArray_NOTYPE terminated C-int-array of types that - * the data-type can be cast to safely. 
- */ - if (from->f->cancastto) { - int *curtype; - curtype = from->f->cancastto; - while (*curtype != PyArray_NOTYPE) { - if (*curtype++ == totype) { - return 1; - } - } - } - if (PyTypeNum_ISUSERDEF(totype)) { - return 0; - } - to = PyArray_DescrFromType(totype); - telsize = to->elsize; - felsize = from->elsize; - Py_DECREF(from); - Py_DECREF(to); - - switch(fromtype) { - case PyArray_BYTE: - case PyArray_SHORT: - case PyArray_INT: - case PyArray_LONG: - case PyArray_LONGLONG: - if (PyTypeNum_ISINTEGER(totype)) { - if (PyTypeNum_ISUNSIGNED(totype)) { - return 0; - } - else { - return telsize >= felsize; - } - } - else if (PyTypeNum_ISFLOAT(totype)) { - if (felsize < 8) { - return telsize > felsize; - } - else { - return telsize >= felsize; - } - } - else if (PyTypeNum_ISCOMPLEX(totype)) { - if (felsize < 8) { - return (telsize >> 1) > felsize; - } - else { - return (telsize >> 1) >= felsize; - } - } - else { - return totype > fromtype; - } - case PyArray_UBYTE: - case PyArray_USHORT: - case PyArray_UINT: - case PyArray_ULONG: - case PyArray_ULONGLONG: - if (PyTypeNum_ISINTEGER(totype)) { - if (PyTypeNum_ISSIGNED(totype)) { - return telsize > felsize; - } - else { - return telsize >= felsize; - } - } - else if (PyTypeNum_ISFLOAT(totype)) { - if (felsize < 8) { - return telsize > felsize; - } - else { - return telsize >= felsize; - } - } - else if (PyTypeNum_ISCOMPLEX(totype)) { - if (felsize < 8) { - return (telsize >> 1) > felsize; - } - else { - return (telsize >> 1) >= felsize; - } - } - else { - return totype > fromtype; - } - case PyArray_FLOAT: - case PyArray_DOUBLE: - case PyArray_LONGDOUBLE: - if (PyTypeNum_ISCOMPLEX(totype)) { - return (telsize >> 1) >= felsize; - } - else if (PyTypeNum_ISFLOAT(totype) && (telsize == felsize)) { - /* On some systems, double == longdouble */ - return 1; - } - else { - return totype > fromtype; - } - case PyArray_CFLOAT: - case PyArray_CDOUBLE: - case PyArray_CLONGDOUBLE: - if (PyTypeNum_ISCOMPLEX(totype) && (telsize == 
felsize)) { - /* On some systems, double == longdouble */ - return 1; - } - else { - return totype > fromtype; - } - case PyArray_STRING: - case PyArray_UNICODE: - return totype > fromtype; - default: - return 0; - } -} - -/*NUMPY_API - * leaves reference count alone --- cannot be NULL - */ -NPY_NO_EXPORT Bool -PyArray_CanCastTo(PyArray_Descr *from, PyArray_Descr *to) -{ - int fromtype=from->type_num; - int totype=to->type_num; - Bool ret; - - ret = (Bool) PyArray_CanCastSafely(fromtype, totype); - if (ret) { - /* Check String and Unicode more closely */ - if (fromtype == PyArray_STRING) { - if (totype == PyArray_STRING) { - ret = (from->elsize <= to->elsize); - } - else if (totype == PyArray_UNICODE) { - ret = (from->elsize << 2 <= to->elsize); - } - } - else if (fromtype == PyArray_UNICODE) { - if (totype == PyArray_UNICODE) { - ret = (from->elsize <= to->elsize); - } - } - /* - * TODO: If totype is STRING or unicode - * see if the length is long enough to hold the - * stringified value of the object. - */ - } - return ret; -} - -/*NUMPY_API - * See if array scalars can be cast. - */ -NPY_NO_EXPORT Bool -PyArray_CanCastScalar(PyTypeObject *from, PyTypeObject *to) -{ - int fromtype; - int totype; - - fromtype = _typenum_fromtypeobj((PyObject *)from, 0); - totype = _typenum_fromtypeobj((PyObject *)to, 0); - if (fromtype == PyArray_NOTYPE || totype == PyArray_NOTYPE) { - return FALSE; - } - return (Bool) PyArray_CanCastSafely(fromtype, totype); -} - -/*NUMPY_API - * Is the typenum valid? - */ -NPY_NO_EXPORT int -PyArray_ValidType(int type) -{ - PyArray_Descr *descr; - int res=TRUE; - - descr = PyArray_DescrFromType(type); - if (descr == NULL) { - res = FALSE; - } - Py_DECREF(descr); - return res; -} - -/* Backward compatibility only */ -/* In both Zero and One - -***You must free the memory once you are done with it -using PyDataMem_FREE(ptr) or you create a memory leak*** - -If arr is an Object array you are getting a -BORROWED reference to Zero or One. 
-Do not DECREF. -Please INCREF if you will be hanging on to it. - -The memory for the ptr still must be freed in any case; -*/ - -static int -_check_object_rec(PyArray_Descr *descr) -{ - if (PyDataType_HASFIELDS(descr) && PyDataType_REFCHK(descr)) { - PyErr_SetString(PyExc_TypeError, "Not supported for this data-type."); - return -1; - } - return 0; -} - -/*NUMPY_API - Get pointer to zero of correct type for array. -*/ -NPY_NO_EXPORT char * -PyArray_Zero(PyArrayObject *arr) -{ - char *zeroval; - int ret, storeflags; - PyObject *obj; - - if (_check_object_rec(arr->descr) < 0) { - return NULL; - } - zeroval = PyDataMem_NEW(arr->descr->elsize); - if (zeroval == NULL) { - PyErr_SetNone(PyExc_MemoryError); - return NULL; - } - - obj=PyInt_FromLong((long) 0); - if (PyArray_ISOBJECT(arr)) { - memcpy(zeroval, &obj, sizeof(PyObject *)); - Py_DECREF(obj); - return zeroval; - } - storeflags = arr->flags; - arr->flags |= BEHAVED; - ret = arr->descr->f->setitem(obj, zeroval, arr); - arr->flags = storeflags; - Py_DECREF(obj); - if (ret < 0) { - PyDataMem_FREE(zeroval); - return NULL; - } - return zeroval; -} - -/*NUMPY_API - Get pointer to one of correct type for array -*/ -NPY_NO_EXPORT char * -PyArray_One(PyArrayObject *arr) -{ - char *oneval; - int ret, storeflags; - PyObject *obj; - - if (_check_object_rec(arr->descr) < 0) { - return NULL; - } - oneval = PyDataMem_NEW(arr->descr->elsize); - if (oneval == NULL) { - PyErr_SetNone(PyExc_MemoryError); - return NULL; - } - - obj = PyInt_FromLong((long) 1); - if (PyArray_ISOBJECT(arr)) { - memcpy(oneval, &obj, sizeof(PyObject *)); - Py_DECREF(obj); - return oneval; - } - - storeflags = arr->flags; - arr->flags |= BEHAVED; - ret = arr->descr->f->setitem(obj, oneval, arr); - arr->flags = storeflags; - Py_DECREF(obj); - if (ret < 0) { - PyDataMem_FREE(oneval); - return NULL; - } - return oneval; -} - -/* End deprecated */ - -/*NUMPY_API - * Return the typecode of the array a Python object would be converted to - */ -NPY_NO_EXPORT int 
-PyArray_ObjectType(PyObject *op, int minimum_type) -{ - PyArray_Descr *intype; - PyArray_Descr *outtype; - int ret; - - intype = PyArray_DescrFromType(minimum_type); - if (intype == NULL) { - PyErr_Clear(); - } - outtype = _array_find_type(op, intype, MAX_DIMS); - ret = outtype->type_num; - Py_DECREF(outtype); - Py_XDECREF(intype); - return ret; -} - -/* Raises error when len(op) == 0 */ - -/*NUMPY_API*/ -NPY_NO_EXPORT PyArrayObject ** -PyArray_ConvertToCommonType(PyObject *op, int *retn) -{ - int i, n, allscalars = 0; - PyArrayObject **mps = NULL; - PyObject *otmp; - PyArray_Descr *intype = NULL, *stype = NULL; - PyArray_Descr *newtype = NULL; - NPY_SCALARKIND scalarkind = NPY_NOSCALAR, intypekind = NPY_NOSCALAR; - - *retn = n = PySequence_Length(op); - if (n == 0) { - PyErr_SetString(PyExc_ValueError, "0-length sequence."); - } - if (PyErr_Occurred()) { - *retn = 0; - return NULL; - } - mps = (PyArrayObject **)PyDataMem_NEW(n*sizeof(PyArrayObject *)); - if (mps == NULL) { - *retn = 0; - return (void*)PyErr_NoMemory(); - } - - if (PyArray_Check(op)) { - for (i = 0; i < n; i++) { - mps[i] = (PyArrayObject *) array_big_item((PyArrayObject *)op, i); - } - if (!PyArray_ISCARRAY(op)) { - for (i = 0; i < n; i++) { - PyObject *obj; - obj = PyArray_NewCopy(mps[i], NPY_CORDER); - Py_DECREF(mps[i]); - mps[i] = (PyArrayObject *)obj; - } - } - return mps; - } - - for (i = 0; i < n; i++) { - otmp = PySequence_GetItem(op, i); - if (!PyArray_CheckAnyScalar(otmp)) { - newtype = PyArray_DescrFromObject(otmp, intype); - Py_XDECREF(intype); - intype = newtype; - mps[i] = NULL; - intypekind = PyArray_ScalarKind(intype->type_num, NULL); - } - else { - newtype = PyArray_DescrFromObject(otmp, stype); - Py_XDECREF(stype); - stype = newtype; - scalarkind = PyArray_ScalarKind(newtype->type_num, NULL); - mps[i] = (PyArrayObject *)Py_None; - Py_INCREF(Py_None); - } - Py_XDECREF(otmp); - } - if (intype==NULL) { - /* all scalars */ - allscalars = 1; - intype = stype; - Py_INCREF(intype); - 
for (i = 0; i < n; i++) { - Py_XDECREF(mps[i]); - mps[i] = NULL; - } - } - else if ((stype != NULL) && (intypekind != scalarkind)) { - /* - * we need to upconvert to type that - * handles both intype and stype - * also don't forcecast the scalars. - */ - if (!PyArray_CanCoerceScalar(stype->type_num, - intype->type_num, - scalarkind)) { - newtype = _array_small_type(intype, stype); - Py_XDECREF(intype); - intype = newtype; - } - for (i = 0; i < n; i++) { - Py_XDECREF(mps[i]); - mps[i] = NULL; - } - } - - - /* Make sure all arrays are actual array objects. */ - for (i = 0; i < n; i++) { - int flags = CARRAY; - - if ((otmp = PySequence_GetItem(op, i)) == NULL) { - goto fail; - } - if (!allscalars && ((PyObject *)(mps[i]) == Py_None)) { - /* forcecast scalars */ - flags |= FORCECAST; - Py_DECREF(Py_None); - } - Py_INCREF(intype); - mps[i] = (PyArrayObject*) - PyArray_FromAny(otmp, intype, 0, 0, flags, NULL); - Py_DECREF(otmp); - if (mps[i] == NULL) { - goto fail; - } - } - Py_DECREF(intype); - Py_XDECREF(stype); - return mps; - - fail: - Py_XDECREF(intype); - Py_XDECREF(stype); - *retn = 0; - for (i = 0; i < n; i++) { - Py_XDECREF(mps[i]); - } - PyDataMem_FREE(mps); - return NULL; -} - diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/convert_datatype.h b/pythonPackages/numpy/numpy/core/src/multiarray/convert_datatype.h deleted file mode 100755 index 60bdd82742..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/convert_datatype.h +++ /dev/null @@ -1,28 +0,0 @@ -#ifndef _NPY_ARRAY_CONVERT_DATATYPE_H_ -#define _NPY_ARRAY_CONVERT_DATATYPE_H_ - -NPY_NO_EXPORT PyObject * -PyArray_CastToType(PyArrayObject *mp, PyArray_Descr *at, int fortran); - -NPY_NO_EXPORT int -PyArray_CastTo(PyArrayObject *out, PyArrayObject *mp); - -NPY_NO_EXPORT PyArray_VectorUnaryFunc * -PyArray_GetCastFunc(PyArray_Descr *descr, int type_num); - -NPY_NO_EXPORT int -PyArray_CanCastSafely(int fromtype, int totype); - -NPY_NO_EXPORT Bool -PyArray_CanCastTo(PyArray_Descr *from, 
PyArray_Descr *to); - -NPY_NO_EXPORT int -PyArray_ObjectType(PyObject *op, int minimum_type); - -NPY_NO_EXPORT PyArrayObject ** -PyArray_ConvertToCommonType(PyObject *op, int *retn); - -NPY_NO_EXPORT int -PyArray_ValidType(int type); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/ctors.c b/pythonPackages/numpy/numpy/core/src/multiarray/ctors.c deleted file mode 100755 index 6e339f5dfd..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/ctors.c +++ /dev/null @@ -1,3570 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include <Python.h> -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "numpy/npy_math.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "common.h" - -#include "ctors.h" - -#include "buffer.h" - -#include "numpymemoryview.h" - -/* - * Reading from a file or a string. - * - * As much as possible, we try to use the same code for both files and strings, - * so the semantics for fromstring and fromfile are the same, especially with - * regards to the handling of text representations. - */ - -typedef int (*next_element)(void **, void *, PyArray_Descr *, void *); -typedef int (*skip_separator)(void **, const char *, void *); - -static int -fromstr_next_element(char **s, void *dptr, PyArray_Descr *dtype, - const char *end) -{ - int r = dtype->f->fromstr(*s, dptr, s, dtype); - if (end != NULL && *s > end) { - return -1; - } - return r; -} - -static int -fromfile_next_element(FILE **fp, void *dptr, PyArray_Descr *dtype, - void *NPY_UNUSED(stream_data)) -{ - /* the NULL argument is for backwards-compatibility */ - return dtype->f->scanfunc(*fp, dptr, NULL, dtype); -} - -/* - * Remove multiple whitespace from the separator, and add a space to the - * beginning and end. This simplifies the separator-skipping code below. 
- */ -static char * -swab_separator(char *sep) -{ - int skip_space = 0; - char *s, *start; - - s = start = malloc(strlen(sep)+3); - /* add space to front if there isn't one */ - if (*sep != '\0' && !isspace(*sep)) { - *s = ' '; s++; - } - while (*sep != '\0') { - if (isspace(*sep)) { - if (skip_space) { - sep++; - } - else { - *s = ' '; - s++; - sep++; - skip_space = 1; - } - } - else { - *s = *sep; - s++; - sep++; - skip_space = 0; - } - } - /* add space to end if there isn't one */ - if (s != start && s[-1] == ' ') { - *s = ' '; - s++; - } - *s = '\0'; - return start; -} - -/* - * Assuming that the separator is the next bit in the string (file), skip it. - * - * Single spaces in the separator are matched to arbitrary-long sequences - * of whitespace in the input. If the separator consists only of spaces, - * it matches one or more whitespace characters. - * - * If we can't match the separator, return -2. - * If we hit the end of the string (file), return -1. - * Otherwise, return 0. - */ -static int -fromstr_skip_separator(char **s, const char *sep, const char *end) -{ - char *string = *s; - int result = 0; - while (1) { - char c = *string; - if (c == '\0' || (end != NULL && string >= end)) { - result = -1; - break; - } - else if (*sep == '\0') { - if (string != *s) { - /* matched separator */ - result = 0; - break; - } - else { - /* separator was whitespace wildcard that didn't match */ - result = -2; - break; - } - } - else if (*sep == ' ') { - /* whitespace wildcard */ - if (!isspace(c)) { - sep++; - continue; - } - } - else if (*sep != c) { - result = -2; - break; - } - else { - sep++; - } - string++; - } - *s = string; - return result; -} - -static int -fromfile_skip_separator(FILE **fp, const char *sep, void *NPY_UNUSED(stream_data)) -{ - int result = 0; - const char *sep_start = sep; - - while (1) { - int c = fgetc(*fp); - - if (c == EOF) { - result = -1; - break; - } - else if (*sep == '\0') { - ungetc(c, *fp); - if (sep != sep_start) { - /* matched 
separator */ - result = 0; - break; - } - else { - /* separator was whitespace wildcard that didn't match */ - result = -2; - break; - } - } - else if (*sep == ' ') { - /* whitespace wildcard */ - if (!isspace(c)) { - sep++; - sep_start++; - ungetc(c, *fp); - } - else if (sep == sep_start) { - sep_start--; - } - } - else if (*sep != c) { - ungetc(c, *fp); - result = -2; - break; - } - else { - sep++; - } - } - return result; -} - -/* - * Change a sub-array field to the base descriptor - * - * and update the dimensions and strides - * appropriately. Dimensions and strides are added - * to the end unless we have a FORTRAN array - * and then they are added to the beginning - * - * Strides are only added if given (because data is given). - */ -static int -_update_descr_and_dimensions(PyArray_Descr **des, intp *newdims, - intp *newstrides, int oldnd, int isfortran) -{ - PyArray_Descr *old; - int newnd; - int numnew; - intp *mydim; - int i; - int tuple; - - old = *des; - *des = old->subarray->base; - - - mydim = newdims + oldnd; - tuple = PyTuple_Check(old->subarray->shape); - if (tuple) { - numnew = PyTuple_GET_SIZE(old->subarray->shape); - } - else { - numnew = 1; - } - - - newnd = oldnd + numnew; - if (newnd > MAX_DIMS) { - goto finish; - } - if (isfortran) { - memmove(newdims+numnew, newdims, oldnd*sizeof(intp)); - mydim = newdims; - } - if (tuple) { - for (i = 0; i < numnew; i++) { - mydim[i] = (intp) PyInt_AsLong( - PyTuple_GET_ITEM(old->subarray->shape, i)); - } - } - else { - mydim[0] = (intp) PyInt_AsLong(old->subarray->shape); - } - - if (newstrides) { - intp tempsize; - intp *mystrides; - - mystrides = newstrides + oldnd; - if (isfortran) { - memmove(newstrides+numnew, newstrides, oldnd*sizeof(intp)); - mystrides = newstrides; - } - /* Make new strides -- always C-contiguous */ - tempsize = (*des)->elsize; - for (i = numnew - 1; i >= 0; i--) { - mystrides[i] = tempsize; - tempsize *= mydim[i] ? 
mydim[i] : 1; - } - } - - finish: - Py_INCREF(*des); - Py_DECREF(old); - return newnd; -} - -/* - * If s is not a list, return 0 - * Otherwise: - * - * run object_depth_and_dimension on all the elements - * and make sure the returned shape and size is the - * same for each element - */ -static int -object_depth_and_dimension(PyObject *s, int max, intp *dims) -{ - intp *newdims, *test_dims; - int nd, test_nd; - int i, islist, istuple; - intp size; - PyObject *obj; - - islist = PyList_Check(s); - istuple = PyTuple_Check(s); - if (!(islist || istuple)) { - return 0; - } - - size = PySequence_Size(s); - if (size == 0) { - return 0; - } - if (max < 1) { - return 0; - } - if (max < 2) { - dims[0] = size; - return 1; - } - - newdims = PyDimMem_NEW(2*(max - 1)); - test_dims = newdims + (max - 1); - if (islist) { - obj = PyList_GET_ITEM(s, 0); - } - else { - obj = PyTuple_GET_ITEM(s, 0); - } - nd = object_depth_and_dimension(obj, max - 1, newdims); - - for (i = 1; i < size; i++) { - if (islist) { - obj = PyList_GET_ITEM(s, i); - } - else { - obj = PyTuple_GET_ITEM(s, i); - } - test_nd = object_depth_and_dimension(obj, max-1, test_dims); - - if ((nd != test_nd) || - (!PyArray_CompareLists(newdims, test_dims, nd))) { - nd = 0; - break; - } - } - - for (i = 1; i <= nd; i++) { - dims[i] = newdims[i-1]; - } - dims[0] = size; - PyDimMem_FREE(newdims); - return nd + 1; -} - -static void -_strided_byte_copy(char *dst, intp outstrides, char *src, intp instrides, - intp N, int elsize) -{ - intp i, j; - char *tout = dst; - char *tin = src; - -#define _FAST_MOVE(_type_) \ - for(i=0; i 0; n--, a += stride - 1) { - b = a + 3; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a = *b; *b = c; - } - break; - case 8: - for (a = (char*)p; n > 0; n--, a += stride - 3) { - b = a + 7; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a++ = *b; *b-- = c; - c = *a; *a = *b; *b = c; - } - break; - case 2: - for (a = (char*)p; n > 0; n--, a += stride) { - b = a + 1; - c = *a; *a = *b; 
*b = c; - } - break; - default: - m = size/2; - for (a = (char *)p; n > 0; n--, a += stride - m) { - b = a + (size - 1); - for (j = 0; j < m; j++) { - c=*a; *a++ = *b; *b-- = c; - } - } - break; - } -} - -NPY_NO_EXPORT void -byte_swap_vector(void *p, intp n, int size) -{ - _strided_byte_swap(p, (intp) size, n, size); - return; -} - -/* If numitems > 1, then dst must be contiguous */ -NPY_NO_EXPORT void -copy_and_swap(void *dst, void *src, int itemsize, intp numitems, - intp srcstrides, int swap) -{ - intp i; - char *s1 = (char *)src; - char *d1 = (char *)dst; - - - if ((numitems == 1) || (itemsize == srcstrides)) { - memcpy(d1, s1, itemsize*numitems); - } - else { - for (i = 0; i < numitems; i++) { - memcpy(d1, s1, itemsize); - d1 += itemsize; - s1 += srcstrides; - } - } - - if (swap) { - byte_swap_vector(d1, numitems, itemsize); - } -} - -static int -_copy_from0d(PyArrayObject *dest, PyArrayObject *src, int usecopy, int swap) -{ - char *aligned = NULL; - char *sptr; - intp numcopies, nbytes; - void (*myfunc)(char *, intp, char *, intp, intp, int); - int retval = -1; - NPY_BEGIN_THREADS_DEF; - - numcopies = PyArray_SIZE(dest); - if (numcopies < 1) { - return 0; - } - nbytes = PyArray_ITEMSIZE(src); - - if (!PyArray_ISALIGNED(src)) { - aligned = malloc((size_t)nbytes); - if (aligned == NULL) { - PyErr_NoMemory(); - return -1; - } - memcpy(aligned, src->data, (size_t) nbytes); - usecopy = 1; - sptr = aligned; - } - else { - sptr = src->data; - } - if (PyArray_SAFEALIGNEDCOPY(dest)) { - myfunc = _strided_byte_copy; - } - else if (usecopy) { - myfunc = _unaligned_strided_byte_copy; - } - else { - myfunc = _unaligned_strided_byte_move; - } - - if ((dest->nd < 2) || PyArray_ISONESEGMENT(dest)) { - char *dptr; - intp dstride; - - dptr = dest->data; - if (dest->nd == 1) { - dstride = dest->strides[0]; - } - else { - dstride = nbytes; - } - - /* Refcount note: src and dest may have different sizes */ - PyArray_INCREF(src); - PyArray_XDECREF(dest); - NPY_BEGIN_THREADS; - 
myfunc(dptr, dstride, sptr, 0, numcopies, (int) nbytes); - if (swap) { - _strided_byte_swap(dptr, dstride, numcopies, (int) nbytes); - } - NPY_END_THREADS; - PyArray_INCREF(dest); - PyArray_XDECREF(src); - } - else { - PyArrayIterObject *dit; - int axis = -1; - - dit = (PyArrayIterObject *) - PyArray_IterAllButAxis((PyObject *)dest, &axis); - if (dit == NULL) { - goto finish; - } - /* Refcount note: src and dest may have different sizes */ - PyArray_INCREF(src); - PyArray_XDECREF(dest); - NPY_BEGIN_THREADS; - while(dit->index < dit->size) { - myfunc(dit->dataptr, PyArray_STRIDE(dest, axis), sptr, 0, - PyArray_DIM(dest, axis), nbytes); - if (swap) { - _strided_byte_swap(dit->dataptr, PyArray_STRIDE(dest, axis), - PyArray_DIM(dest, axis), nbytes); - } - PyArray_ITER_NEXT(dit); - } - NPY_END_THREADS; - PyArray_INCREF(dest); - PyArray_XDECREF(src); - Py_DECREF(dit); - } - retval = 0; - -finish: - if (aligned != NULL) { - free(aligned); - } - return retval; -} - -/* - * Special-case of PyArray_CopyInto when dst is 1-d - * and contiguous (and aligned). - * PyArray_CopyInto requires broadcastable arrays while - * this one is a flattening operation... 
- */ -NPY_NO_EXPORT int -_flat_copyinto(PyObject *dst, PyObject *src, NPY_ORDER order) -{ - PyArrayIterObject *it; - PyObject *orig_src; - void (*myfunc)(char *, intp, char *, intp, intp, int); - char *dptr; - int axis; - int elsize; - intp nbytes; - NPY_BEGIN_THREADS_DEF; - - - orig_src = src; - if (PyArray_NDIM(src) == 0) { - /* Refcount note: src and dst have the same size */ - PyArray_INCREF((PyArrayObject *)src); - PyArray_XDECREF((PyArrayObject *)dst); - NPY_BEGIN_THREADS; - memcpy(PyArray_BYTES(dst), PyArray_BYTES(src), - PyArray_ITEMSIZE(src)); - NPY_END_THREADS; - return 0; - } - - axis = PyArray_NDIM(src)-1; - - if (order == PyArray_FORTRANORDER) { - if (PyArray_NDIM(src) <= 2) { - axis = 0; - } - /* fall back to a more general method */ - else { - src = PyArray_Transpose((PyArrayObject *)orig_src, NULL); - } - } - - it = (PyArrayIterObject *)PyArray_IterAllButAxis(src, &axis); - if (it == NULL) { - if (src != orig_src) { - Py_DECREF(src); - } - return -1; - } - - if (PyArray_SAFEALIGNEDCOPY(src)) { - myfunc = _strided_byte_copy; - } - else { - myfunc = _unaligned_strided_byte_copy; - } - - dptr = PyArray_BYTES(dst); - elsize = PyArray_ITEMSIZE(dst); - nbytes = elsize * PyArray_DIM(src, axis); - - /* Refcount note: src and dst have the same size */ - PyArray_INCREF((PyArrayObject *)src); - PyArray_XDECREF((PyArrayObject *)dst); - NPY_BEGIN_THREADS; - while(it->index < it->size) { - myfunc(dptr, elsize, it->dataptr, PyArray_STRIDE(src,axis), - PyArray_DIM(src,axis), elsize); - dptr += nbytes; - PyArray_ITER_NEXT(it); - } - NPY_END_THREADS; - - if (src != orig_src) { - Py_DECREF(src); - } - Py_DECREF(it); - return 0; -} - - -static int -_copy_from_same_shape(PyArrayObject *dest, PyArrayObject *src, - void (*myfunc)(char *, intp, char *, intp, intp, int), - int swap) -{ - int maxaxis = -1, elsize; - intp maxdim; - PyArrayIterObject *dit, *sit; - NPY_BEGIN_THREADS_DEF; - - dit = (PyArrayIterObject *) - PyArray_IterAllButAxis((PyObject *)dest, &maxaxis); - sit 
= (PyArrayIterObject *) - PyArray_IterAllButAxis((PyObject *)src, &maxaxis); - - maxdim = dest->dimensions[maxaxis]; - - if ((dit == NULL) || (sit == NULL)) { - Py_XDECREF(dit); - Py_XDECREF(sit); - return -1; - } - elsize = PyArray_ITEMSIZE(dest); - - /* Refcount note: src and dst have the same size */ - PyArray_INCREF(src); - PyArray_XDECREF(dest); - - NPY_BEGIN_THREADS; - while(dit->index < dit->size) { - /* strided copy of elsize bytes */ - myfunc(dit->dataptr, dest->strides[maxaxis], - sit->dataptr, src->strides[maxaxis], - maxdim, elsize); - if (swap) { - _strided_byte_swap(dit->dataptr, - dest->strides[maxaxis], - dest->dimensions[maxaxis], - elsize); - } - PyArray_ITER_NEXT(dit); - PyArray_ITER_NEXT(sit); - } - NPY_END_THREADS; - - Py_DECREF(sit); - Py_DECREF(dit); - return 0; -} - -static int -_broadcast_copy(PyArrayObject *dest, PyArrayObject *src, - void (*myfunc)(char *, intp, char *, intp, intp, int), - int swap) -{ - int elsize; - PyArrayMultiIterObject *multi; - int maxaxis; intp maxdim; - NPY_BEGIN_THREADS_DEF; - - elsize = PyArray_ITEMSIZE(dest); - multi = (PyArrayMultiIterObject *)PyArray_MultiIterNew(2, dest, src); - if (multi == NULL) { - return -1; - } - - if (multi->size != PyArray_SIZE(dest)) { - PyErr_SetString(PyExc_ValueError, - "array dimensions are not "\ - "compatible for copy"); - Py_DECREF(multi); - return -1; - } - - maxaxis = PyArray_RemoveSmallest(multi); - if (maxaxis < 0) { - /* - * copy 1 0-d array to another - * Refcount note: src and dst have the same size - */ - PyArray_INCREF(src); - PyArray_XDECREF(dest); - memcpy(dest->data, src->data, elsize); - if (swap) { - byte_swap_vector(dest->data, 1, elsize); - } - return 0; - } - maxdim = multi->dimensions[maxaxis]; - - /* - * Increment the source and decrement the destination - * reference counts - * - * Refcount note: src and dest may have different sizes - */ - PyArray_INCREF(src); - PyArray_XDECREF(dest); - - NPY_BEGIN_THREADS; - while(multi->index < multi->size) { - 
myfunc(multi->iters[0]->dataptr, - multi->iters[0]->strides[maxaxis], - multi->iters[1]->dataptr, - multi->iters[1]->strides[maxaxis], - maxdim, elsize); - if (swap) { - _strided_byte_swap(multi->iters[0]->dataptr, - multi->iters[0]->strides[maxaxis], - maxdim, elsize); - } - PyArray_MultiIter_NEXT(multi); - } - NPY_END_THREADS; - - PyArray_INCREF(dest); - PyArray_XDECREF(src); - - Py_DECREF(multi); - return 0; -} - -/* If destination is not the right type, then src - will be cast to destination -- this requires - src and dest to have the same shape -*/ - -/* Requires arrays to have broadcastable shapes - - The arrays are assumed to have the same number of elements - They can be different sizes and have different types however. -*/ - -static int -_array_copy_into(PyArrayObject *dest, PyArrayObject *src, int usecopy) -{ - int swap; - void (*myfunc)(char *, intp, char *, intp, intp, int); - int simple; - int same; - NPY_BEGIN_THREADS_DEF; - - - if (!PyArray_EquivArrTypes(dest, src)) { - return PyArray_CastTo(dest, src); - } - if (!PyArray_ISWRITEABLE(dest)) { - PyErr_SetString(PyExc_RuntimeError, - "cannot write to array"); - return -1; - } - same = PyArray_SAMESHAPE(dest, src); - simple = same && ((PyArray_ISCARRAY_RO(src) && PyArray_ISCARRAY(dest)) || - (PyArray_ISFARRAY_RO(src) && PyArray_ISFARRAY(dest))); - - if (simple) { - /* Refcount note: src and dest have the same size */ - PyArray_INCREF(src); - PyArray_XDECREF(dest); - NPY_BEGIN_THREADS; - if (usecopy) { - memcpy(dest->data, src->data, PyArray_NBYTES(dest)); - } - else { - memmove(dest->data, src->data, PyArray_NBYTES(dest)); - } - NPY_END_THREADS; - return 0; - } - - swap = PyArray_ISNOTSWAPPED(dest) != PyArray_ISNOTSWAPPED(src); - - if (src->nd == 0) { - return _copy_from0d(dest, src, usecopy, swap); - } - - if (PyArray_SAFEALIGNEDCOPY(dest) && PyArray_SAFEALIGNEDCOPY(src)) { - myfunc = _strided_byte_copy; - } - else if (usecopy) { - myfunc = _unaligned_strided_byte_copy; - } - else { - myfunc = 
_unaligned_strided_byte_move; - } - /* - * Could combine these because _broadcasted_copy would work as well. - * But, same-shape copying is so common we want to speed it up. - */ - if (same) { - return _copy_from_same_shape(dest, src, myfunc, swap); - } - else { - return _broadcast_copy(dest, src, myfunc, swap); - } -} - -/*NUMPY_API - * Move the memory of one array into another. - */ -NPY_NO_EXPORT int -PyArray_MoveInto(PyArrayObject *dest, PyArrayObject *src) -{ - return _array_copy_into(dest, src, 0); -} - - - -/* adapted from Numarray */ -static int -setArrayFromSequence(PyArrayObject *a, PyObject *s, int dim, intp offset) -{ - Py_ssize_t i, slen; - int res = -1; - - /* - * This code is to ensure that the sequence access below will - * return a lower-dimensional sequence. - */ - - /* INCREF on entry DECREF on exit */ - Py_INCREF(s); - - if (PyArray_Check(s) && !(PyArray_CheckExact(s))) { - /* - * FIXME: This could probably copy the entire subarray at once here using - * a faster algorithm. Right now, just make sure a base-class array is - * used so that the dimensionality reduction assumption is correct. 
- */ - /* This will DECREF(s) if replaced */ - s = PyArray_EnsureArray(s); - } - - if (dim > a->nd) { - PyErr_Format(PyExc_ValueError, - "setArrayFromSequence: sequence/array dimensions mismatch."); - goto fail; - } - - slen = PySequence_Length(s); - if (slen != a->dimensions[dim]) { - PyErr_Format(PyExc_ValueError, - "setArrayFromSequence: sequence/array shape mismatch."); - goto fail; - } - - for (i = 0; i < slen; i++) { - PyObject *o = PySequence_GetItem(s, i); - if ((a->nd - dim) > 1) { - res = setArrayFromSequence(a, o, dim+1, offset); - } - else { - res = a->descr->f->setitem(o, (a->data + offset), a); - } - Py_DECREF(o); - if (res < 0) { - goto fail; - } - offset += a->strides[dim]; - } - - Py_DECREF(s); - return 0; - - fail: - Py_DECREF(s); - return res; -} - -static int -Assign_Array(PyArrayObject *self, PyObject *v) -{ - if (!PySequence_Check(v)) { - PyErr_SetString(PyExc_ValueError, - "assignment from non-sequence"); - return -1; - } - if (self->nd == 0) { - PyErr_SetString(PyExc_ValueError, - "assignment to 0-d array"); - return -1; - } - return setArrayFromSequence(self, v, 0, 0); -} - -/* - * "Array Scalars don't call this code" - * steals reference to typecode -- no NULL - */ -static PyObject * -Array_FromPyScalar(PyObject *op, PyArray_Descr *typecode) -{ - PyArrayObject *ret; - int itemsize; - int type; - - itemsize = typecode->elsize; - type = typecode->type_num; - - if (itemsize == 0 && PyTypeNum_ISEXTENDED(type)) { - itemsize = PyObject_Length(op); - if (type == PyArray_UNICODE) { - itemsize *= 4; - } - if (itemsize != typecode->elsize) { - PyArray_DESCR_REPLACE(typecode); - typecode->elsize = itemsize; - } - } - - ret = (PyArrayObject *)PyArray_NewFromDescr(&PyArray_Type, typecode, - 0, NULL, - NULL, NULL, 0, NULL); - if (ret == NULL) { - return NULL; - } - if (ret->nd > 0) { - PyErr_SetString(PyExc_ValueError, - "shape-mismatch on array construction"); - Py_DECREF(ret); - return NULL; - } - - ret->descr->f->setitem(op, ret->data, ret); - if 
(PyErr_Occurred()) { - Py_DECREF(ret); - return NULL; - } - else { - return (PyObject *)ret; - } -} - - -static PyObject * -ObjectArray_FromNestedList(PyObject *s, PyArray_Descr *typecode, int fortran) -{ - int nd; - intp d[MAX_DIMS]; - PyArrayObject *r; - - /* Get the depth and the number of dimensions */ - nd = object_depth_and_dimension(s, MAX_DIMS, d); - if (nd < 0) { - return NULL; - } - if (nd == 0) { - return Array_FromPyScalar(s, typecode); - } - r = (PyArrayObject*)PyArray_NewFromDescr(&PyArray_Type, typecode, - nd, d, - NULL, NULL, - fortran, NULL); - if (!r) { - return NULL; - } - if(Assign_Array(r,s) == -1) { - Py_DECREF(r); - return NULL; - } - return (PyObject*)r; -} - -/* - * The rest of this code is to build the right kind of array - * from a python object. - */ - -static int -discover_depth(PyObject *s, int max, int stop_at_string, int stop_at_tuple) -{ - int d = 0; - PyObject *e; -#if PY_VERSION_HEX >= 0x02060000 - Py_buffer buffer_view; -#endif - - if(max < 1) { - return -1; - } - if(!PySequence_Check(s) || -#if defined(NPY_PY3K) - /* FIXME: XXX -- what is the correct thing to do here? */ -#else - PyInstance_Check(s) || -#endif - PySequence_Length(s) < 0) { - PyErr_Clear(); - return 0; - } - if (PyArray_Check(s)) { - return PyArray_NDIM(s); - } - if (PyArray_IsScalar(s, Generic)) { - return 0; - } - if (PyString_Check(s) || -#if defined(NPY_PY3K) -#else - PyBuffer_Check(s) || -#endif - PyUnicode_Check(s)) { - return stop_at_string ? 
0:1; - } - if (stop_at_tuple && PyTuple_Check(s)) { - return 0; - } -#if PY_VERSION_HEX >= 0x02060000 - /* PEP 3118 buffer interface */ - memset(&buffer_view, 0, sizeof(Py_buffer)); - if (PyObject_GetBuffer(s, &buffer_view, PyBUF_STRIDES) == 0 || - PyObject_GetBuffer(s, &buffer_view, PyBUF_ND) == 0) { - d = buffer_view.ndim; - PyBuffer_Release(&buffer_view); - return d; - } - else if (PyObject_GetBuffer(s, &buffer_view, PyBUF_SIMPLE) == 0) { - PyBuffer_Release(&buffer_view); - return 1; - } - else { - PyErr_Clear(); - } -#endif - if ((e = PyObject_GetAttrString(s, "__array_struct__")) != NULL) { - d = -1; - if (NpyCapsule_Check(e)) { - PyArrayInterface *inter; - inter = (PyArrayInterface *)NpyCapsule_AsVoidPtr(e); - if (inter->two == 2) { - d = inter->nd; - } - } - Py_DECREF(e); - if (d > -1) { - return d; - } - } - else { - PyErr_Clear(); - } - if ((e=PyObject_GetAttrString(s, "__array_interface__")) != NULL) { - d = -1; - if (PyDict_Check(e)) { - PyObject *new; - new = PyDict_GetItemString(e, "shape"); - if (new && PyTuple_Check(new)) { - d = PyTuple_GET_SIZE(new); - } - } - Py_DECREF(e); - if (d>-1) { - return d; - } - } - else PyErr_Clear(); - - if (PySequence_Length(s) == 0) { - return 1; - } - if ((e=PySequence_GetItem(s,0)) == NULL) { - return -1; - } - if (e != s) { - d = discover_depth(e, max-1, stop_at_string, stop_at_tuple); - if (d >= 0) { - d++; - } - } - Py_DECREF(e); - return d; -} - -static int -discover_itemsize(PyObject *s, int nd, int *itemsize) -{ - int n, r, i; - PyObject *e; - - if (PyArray_Check(s)) { - *itemsize = MAX(*itemsize, PyArray_ITEMSIZE(s)); - return 0; - } - - n = PyObject_Length(s); - if ((nd == 0) || PyString_Check(s) || -#if defined(NPY_PY3K) - PyMemoryView_Check(s) || -#else - PyBuffer_Check(s) || -#endif - PyUnicode_Check(s)) { - - *itemsize = MAX(*itemsize, n); - return 0; - } - for (i = 0; i < n; i++) { - if ((e = PySequence_GetItem(s,i))==NULL) { - return -1; - } - r = discover_itemsize(e,nd-1,itemsize); - Py_DECREF(e); - 
-        if (r == -1) {
-            return -1;
-        }
-    }
-    return 0;
-}
-
-/*
- * Take an arbitrary object known to represent
- * an array of ndim nd, and determine the size in each dimension
- */
-static int
-discover_dimensions(PyObject *s, int nd, intp *d, int check_it)
-{
-    PyObject *e;
-    int r, n, i, n_lower;
-
-
-    if (PyArray_Check(s)) {
-        /*
-         * XXXX: we handle the case of scalar arrays (0 dimensions) separately.
-         * This is an hack, the function discover_dimensions needs to be
-         * improved.
-         */
-        if (PyArray_NDIM(s) == 0) {
-            d[0] = 0;
-        } else {
-            for (i=0; i<PyArray_NDIM(s); i++) {
-                d[i] = PyArray_DIM(s,i);
-            }
-        }
-        return 0;
-    }
-
-    n = PyObject_Length(s);
-    *d = n;
-    if (*d < 0) {
-        return -1;
-    }
-    if (nd <= 1) {
-        return 0;
-    }
-    n_lower = 0;
-    for (i = 0; i < n; i++) {
-        if ((e = PySequence_GetItem(s, i)) == NULL) {
-            return -1;
-        }
-        r = discover_dimensions(e, nd - 1, d + 1, check_it);
-        Py_DECREF(e);
-
-        if (r == -1) {
-            return -1;
-        }
-        if (check_it && n_lower != 0 && n_lower != d[1]) {
-            PyErr_SetString(PyExc_ValueError,
-                            "inconsistent shape in sequence");
-            return -1;
-        }
-        if (d[1] > n_lower) {
-            n_lower = d[1];
-        }
-    }
-    d[1] = n_lower;
-
-    return 0;
-}
-
-/*
- * isobject means that we are constructing an
- * object array on-purpose with a nested list.
- * Only a list is interpreted as a sequence with these rules
- * steals reference to typecode
- */
-static PyObject *
-Array_FromSequence(PyObject *s, PyArray_Descr *typecode, int fortran,
-                   int min_depth, int max_depth)
-{
-    PyArrayObject *r;
-    int nd;
-    int err;
-    intp d[MAX_DIMS];
-    int stop_at_string;
-    int stop_at_tuple;
-    int check_it;
-    int type = typecode->type_num;
-    int itemsize = typecode->elsize;
-
-    check_it = (typecode->type != PyArray_CHARLTR);
-    stop_at_string = (type != PyArray_STRING) ||
-                     (typecode->type == PyArray_STRINGLTR);
-    stop_at_tuple = (type == PyArray_VOID && (typecode->names
-                                              || typecode->subarray));
-
-    nd = discover_depth(s, MAX_DIMS + 1, stop_at_string, stop_at_tuple);
-    if (nd == 0) {
-        return Array_FromPyScalar(s, typecode);
-    }
-    else if (nd < 0) {
-        PyErr_SetString(PyExc_ValueError,
-                        "invalid input sequence");
-        goto fail;
-    }
-    if (max_depth && PyTypeNum_ISOBJECT(type) && (nd > max_depth)) {
-        nd = max_depth;
-    }
-    if ((max_depth && nd > max_depth) || (min_depth && nd < min_depth)) {
-        PyErr_SetString(PyExc_ValueError,
-                        "invalid number of dimensions");
-        goto fail;
-    }
-
-    err = discover_dimensions(s, nd, d, check_it);
-    if (err == -1) {
-        goto fail;
-    }
-    if (typecode->type == PyArray_CHARLTR && nd > 0 &&
d[nd - 1] == 1) { - nd = nd - 1; - } - - if (itemsize == 0 && PyTypeNum_ISEXTENDED(type)) { - err = discover_itemsize(s, nd, &itemsize); - if (err == -1) { - goto fail; - } - if (type == PyArray_UNICODE) { - itemsize *= 4; - } - } - if (itemsize != typecode->elsize) { - PyArray_DESCR_REPLACE(typecode); - typecode->elsize = itemsize; - } - - r = (PyArrayObject*)PyArray_NewFromDescr(&PyArray_Type, typecode, - nd, d, - NULL, NULL, - fortran, NULL); - if (!r) { - return NULL; - } - - err = Assign_Array(r,s); - if (err == -1) { - Py_DECREF(r); - return NULL; - } - return (PyObject*)r; - - fail: - Py_DECREF(typecode); - return NULL; -} - - - -/*NUMPY_API - * Generic new array creation routine. - * - * steals a reference to descr (even on failure) - */ -NPY_NO_EXPORT PyObject * -PyArray_NewFromDescr(PyTypeObject *subtype, PyArray_Descr *descr, int nd, - intp *dims, intp *strides, void *data, - int flags, PyObject *obj) -{ - PyArrayObject *self; - int i; - size_t sd; - intp largest; - intp size; - - if (descr->subarray) { - PyObject *ret; - intp newdims[2*MAX_DIMS]; - intp *newstrides = NULL; - int isfortran = 0; - isfortran = (data && (flags & FORTRAN) && !(flags & CONTIGUOUS)) || - (!data && flags); - memcpy(newdims, dims, nd*sizeof(intp)); - if (strides) { - newstrides = newdims + MAX_DIMS; - memcpy(newstrides, strides, nd*sizeof(intp)); - } - nd =_update_descr_and_dimensions(&descr, newdims, - newstrides, nd, isfortran); - ret = PyArray_NewFromDescr(subtype, descr, nd, newdims, - newstrides, - data, flags, obj); - return ret; - } - if (nd < 0) { - PyErr_SetString(PyExc_ValueError, - "number of dimensions must be >=0"); - Py_DECREF(descr); - return NULL; - } - if (nd > MAX_DIMS) { - PyErr_Format(PyExc_ValueError, - "maximum number of dimensions is %d", MAX_DIMS); - Py_DECREF(descr); - return NULL; - } - - /* Check dimensions */ - size = 1; - sd = (size_t) descr->elsize; - if (sd == 0) { - if (!PyDataType_ISSTRING(descr)) { - PyErr_SetString(PyExc_ValueError, "Empty 
data-type"); - Py_DECREF(descr); - return NULL; - } - PyArray_DESCR_REPLACE(descr); - if (descr->type_num == NPY_STRING) { - descr->elsize = 1; - } - else { - descr->elsize = sizeof(PyArray_UCS4); - } - sd = descr->elsize; - } - - largest = NPY_MAX_INTP / sd; - for (i = 0; i < nd; i++) { - intp dim = dims[i]; - - if (dim == 0) { - /* - * Compare to PyArray_OverflowMultiplyList that - * returns 0 in this case. - */ - continue; - } - if (dim < 0) { - PyErr_SetString(PyExc_ValueError, - "negative dimensions " \ - "are not allowed"); - Py_DECREF(descr); - return NULL; - } - if (dim > largest) { - PyErr_SetString(PyExc_ValueError, - "array is too big."); - Py_DECREF(descr); - return NULL; - } - size *= dim; - largest /= dim; - } - - self = (PyArrayObject *) subtype->tp_alloc(subtype, 0); - if (self == NULL) { - Py_DECREF(descr); - return NULL; - } - self->nd = nd; - self->dimensions = NULL; - self->data = NULL; - if (data == NULL) { - self->flags = DEFAULT; - if (flags) { - self->flags |= FORTRAN; - if (nd > 1) { - self->flags &= ~CONTIGUOUS; - } - flags = FORTRAN; - } - } - else { - self->flags = (flags & ~UPDATEIFCOPY); - } - self->descr = descr; - self->base = (PyObject *)NULL; - self->weakreflist = (PyObject *)NULL; - - if (nd > 0) { - self->dimensions = PyDimMem_NEW(2*nd); - if (self->dimensions == NULL) { - PyErr_NoMemory(); - goto fail; - } - self->strides = self->dimensions + nd; - memcpy(self->dimensions, dims, sizeof(intp)*nd); - if (strides == NULL) { /* fill it in */ - sd = _array_fill_strides(self->strides, dims, nd, sd, - flags, &(self->flags)); - } - else { - /* - * we allow strides even when we create - * the memory, but be careful with this... - */ - memcpy(self->strides, strides, sizeof(intp)*nd); - sd *= size; - } - } - else { - self->dimensions = self->strides = NULL; - } - - if (data == NULL) { - /* - * Allocate something even for zero-space arrays - * e.g. shape=(0,) -- otherwise buffer exposure - * (a.data) doesn't work as it should. 
- */ - - if (sd == 0) { - sd = descr->elsize; - } - if ((data = PyDataMem_NEW(sd)) == NULL) { - PyErr_NoMemory(); - goto fail; - } - self->flags |= OWNDATA; - - /* - * It is bad to have unitialized OBJECT pointers - * which could also be sub-fields of a VOID array - */ - if (PyDataType_FLAGCHK(descr, NPY_NEEDS_INIT)) { - memset(data, 0, sd); - } - } - else { - /* - * If data is passed in, this object won't own it by default. - * Caller must arrange for this to be reset if truly desired - */ - self->flags &= ~OWNDATA; - } - self->data = data; - - /* - * call the __array_finalize__ - * method if a subtype. - * If obj is NULL, then call method with Py_None - */ - if ((subtype != &PyArray_Type)) { - PyObject *res, *func, *args; - - func = PyObject_GetAttrString((PyObject *)self, "__array_finalize__"); - if (func && func != Py_None) { - if (strides != NULL) { - /* - * did not allocate own data or funny strides - * update flags before finalize function - */ - PyArray_UpdateFlags(self, UPDATE_ALL); - } - if (NpyCapsule_Check(func)) { - /* A C-function is stored here */ - PyArray_FinalizeFunc *cfunc; - cfunc = NpyCapsule_AsVoidPtr(func); - Py_DECREF(func); - if (cfunc(self, obj) < 0) { - goto fail; - } - } - else { - args = PyTuple_New(1); - if (obj == NULL) { - obj=Py_None; - } - Py_INCREF(obj); - PyTuple_SET_ITEM(args, 0, obj); - res = PyObject_Call(func, args, NULL); - Py_DECREF(args); - Py_DECREF(func); - if (res == NULL) { - goto fail; - } - else { - Py_DECREF(res); - } - } - } - else Py_XDECREF(func); - } - return (PyObject *)self; - - fail: - Py_DECREF(self); - return NULL; -} - -/*NUMPY_API - * Generic new array creation routine. 
- */ -NPY_NO_EXPORT PyObject * -PyArray_New(PyTypeObject *subtype, int nd, intp *dims, int type_num, - intp *strides, void *data, int itemsize, int flags, - PyObject *obj) -{ - PyArray_Descr *descr; - PyObject *new; - - descr = PyArray_DescrFromType(type_num); - if (descr == NULL) { - return NULL; - } - if (descr->elsize == 0) { - if (itemsize < 1) { - PyErr_SetString(PyExc_ValueError, - "data type must provide an itemsize"); - Py_DECREF(descr); - return NULL; - } - PyArray_DESCR_REPLACE(descr); - descr->elsize = itemsize; - } - new = PyArray_NewFromDescr(subtype, descr, nd, dims, strides, - data, flags, obj); - return new; -} - - -NPY_NO_EXPORT int -_array_from_buffer_3118(PyObject *obj, PyObject **out) -{ -#if PY_VERSION_HEX >= 0x02060000 - /* PEP 3118 */ - PyObject *memoryview; - Py_buffer *view; - PyArray_Descr *descr = NULL; - PyObject *r; - int nd, flags, k; - Py_ssize_t d; - npy_intp shape[NPY_MAXDIMS], strides[NPY_MAXDIMS]; - - memoryview = PyMemoryView_FromObject(obj); - if (memoryview == NULL) { - PyErr_Clear(); - return -1; - } - - view = PyMemoryView_GET_BUFFER(memoryview); - if (view->format != NULL) { - descr = (PyObject*)_descriptor_from_pep3118_format(view->format); - if (descr == NULL) { - PyObject *msg; - msg = PyBytes_FromFormat("Invalid PEP 3118 format string: '%s'", - view->format); - PyErr_WarnEx(PyExc_RuntimeWarning, PyBytes_AS_STRING(msg), 0); - Py_DECREF(msg); - goto fail; - } - - /* Sanity check */ - if (descr->elsize != view->itemsize) { - PyErr_WarnEx(PyExc_RuntimeWarning, - "Item size computed from the PEP 3118 buffer format " - "string does not match the actual item size.", - 0); - goto fail; - } - } - else { - descr = PyArray_DescrNewFromType(PyArray_STRING); - descr->elsize = view->itemsize; - } - - if (view->shape != NULL) { - nd = view->ndim; - if (nd >= NPY_MAXDIMS || nd < 0) { - goto fail; - } - for (k = 0; k < nd; ++k) { - if (k >= NPY_MAXDIMS) { - goto fail; - } - shape[k] = view->shape[k]; - } - if (view->strides != NULL) { - 
for (k = 0; k < nd; ++k) { - strides[k] = view->strides[k]; - } - } - else { - d = view->len; - for (k = 0; k < nd; ++k) { - d /= view->shape[k]; - strides[k] = d; - } - } - } - else { - nd = 1; - shape[0] = view->len / view->itemsize; - strides[0] = view->itemsize; - } - - flags = BEHAVED & (view->readonly ? ~NPY_WRITEABLE : ~0); - r = PyArray_NewFromDescr(&PyArray_Type, descr, - nd, shape, strides, view->buf, - flags, NULL); - ((PyArrayObject *)r)->base = memoryview; - PyArray_UpdateFlags((PyArrayObject *)r, UPDATE_ALL); - - *out = r; - return 0; - -fail: - Py_XDECREF(descr); - Py_DECREF(memoryview); - return -1; - -#else - return -1; -#endif -} - - -/*NUMPY_API - * Does not check for ENSURECOPY and NOTSWAPPED in flags - * Steals a reference to newtype --- which can be NULL - */ -NPY_NO_EXPORT PyObject * -PyArray_FromAny(PyObject *op, PyArray_Descr *newtype, int min_depth, - int max_depth, int flags, PyObject *context) -{ - /* - * This is the main code to make a NumPy array from a Python - * Object. It is called from lot's of different places which - * is why there are so many checks. The comments try to - * explain some of the checks. - */ - PyObject *r = NULL; - int seq = FALSE; - - /* - * Is input object already an array? 
- * This is where the flags are used - */ - if (PyArray_Check(op)) { - r = PyArray_FromArray((PyArrayObject *)op, newtype, flags); - } - else if (PyArray_IsScalar(op, Generic)) { - if (flags & UPDATEIFCOPY) { - goto err; - } - r = PyArray_FromScalar(op, newtype); - } - else if (newtype == NULL && - (newtype = _array_find_python_scalar_type(op))) { - if (flags & UPDATEIFCOPY) { - goto err; - } - r = Array_FromPyScalar(op, newtype); - } - else if (!PyBytes_Check(op) && !PyUnicode_Check(op) && - _array_from_buffer_3118(op, &r) == 0) { - /* PEP 3118 buffer -- but don't accept Bytes objects here */ - PyObject *new; - if (newtype != NULL || flags != 0) { - new = PyArray_FromArray((PyArrayObject *)r, newtype, flags); - Py_DECREF(r); - r = new; - } - } - else if (PyArray_HasArrayInterfaceType(op, newtype, context, r)) { - PyObject *new; - if (r == NULL) { - Py_XDECREF(newtype); - return NULL; - } - if (newtype != NULL || flags != 0) { - new = PyArray_FromArray((PyArrayObject *)r, newtype, flags); - Py_DECREF(r); - r = new; - } - } - else { - int isobject = 0; - - if (flags & UPDATEIFCOPY) { - goto err; - } - if (newtype == NULL) { - newtype = _array_find_type(op, NULL, MAX_DIMS); - } - else if (newtype->type_num == PyArray_OBJECT) { - isobject = 1; - } - if (PySequence_Check(op)) { - PyObject *thiserr = NULL; - - /* necessary but not sufficient */ - Py_INCREF(newtype); - r = Array_FromSequence(op, newtype, flags & FORTRAN, - min_depth, max_depth); - if (r == NULL && (thiserr=PyErr_Occurred())) { - if (PyErr_GivenExceptionMatches(thiserr, - PyExc_MemoryError)) { - return NULL; - } - /* - * If object was explicitly requested, - * then try nested list object array creation - */ - PyErr_Clear(); - if (isobject) { - Py_INCREF(newtype); - r = ObjectArray_FromNestedList - (op, newtype, flags & FORTRAN); - seq = TRUE; - Py_DECREF(newtype); - } - } - else { - seq = TRUE; - Py_DECREF(newtype); - } - } - if (!seq) { - r = Array_FromPyScalar(op, newtype); - } - } - - /* If we didn't 
succeed return NULL */ - if (r == NULL) { - return NULL; - } - - /* Be sure we succeed here */ - if(!PyArray_Check(r)) { - PyErr_SetString(PyExc_RuntimeError, - "internal error: PyArray_FromAny "\ - "not producing an array"); - Py_DECREF(r); - return NULL; - } - - if (min_depth != 0 && ((PyArrayObject *)r)->nd < min_depth) { - PyErr_SetString(PyExc_ValueError, - "object of too small depth for desired array"); - Py_DECREF(r); - return NULL; - } - if (max_depth != 0 && ((PyArrayObject *)r)->nd > max_depth) { - PyErr_SetString(PyExc_ValueError, - "object too deep for desired array"); - Py_DECREF(r); - return NULL; - } - return r; - - err: - Py_XDECREF(newtype); - PyErr_SetString(PyExc_TypeError, - "UPDATEIFCOPY used for non-array input."); - return NULL; -} - -/* - * flags is any of - * CONTIGUOUS, - * FORTRAN, - * ALIGNED, - * WRITEABLE, - * NOTSWAPPED, - * ENSURECOPY, - * UPDATEIFCOPY, - * FORCECAST, - * ENSUREARRAY, - * ELEMENTSTRIDES - * - * or'd (|) together - * - * Any of these flags present means that the returned array should - * guarantee that aspect of the array. Otherwise the returned array - * won't guarantee it -- it will depend on the object as to whether or - * not it has such features. - * - * Note that ENSURECOPY is enough - * to guarantee CONTIGUOUS, ALIGNED and WRITEABLE - * and therefore it is redundant to include those as well. - * - * BEHAVED == ALIGNED | WRITEABLE - * CARRAY = CONTIGUOUS | BEHAVED - * FARRAY = FORTRAN | BEHAVED - * - * FORTRAN can be set in the FLAGS to request a FORTRAN array. - * Fortran arrays are always behaved (aligned, - * notswapped, and writeable) and not (C) CONTIGUOUS (if > 1d). - * - * UPDATEIFCOPY flag sets this flag in the returned array if a copy is - * made and the base argument points to the (possibly) misbehaved array. - * When the new array is deallocated, the original array held in base - * is updated with the contents of the new array. 
- * - * FORCECAST will cause a cast to occur regardless of whether or not - * it is safe. - */ - -/*NUMPY_API - * steals a reference to descr -- accepts NULL - */ -NPY_NO_EXPORT PyObject * -PyArray_CheckFromAny(PyObject *op, PyArray_Descr *descr, int min_depth, - int max_depth, int requires, PyObject *context) -{ - PyObject *obj; - if (requires & NOTSWAPPED) { - if (!descr && PyArray_Check(op) && - !PyArray_ISNBO(PyArray_DESCR(op)->byteorder)) { - descr = PyArray_DescrNew(PyArray_DESCR(op)); - } - else if (descr && !PyArray_ISNBO(descr->byteorder)) { - PyArray_DESCR_REPLACE(descr); - } - if (descr) { - descr->byteorder = PyArray_NATIVE; - } - } - - obj = PyArray_FromAny(op, descr, min_depth, max_depth, requires, context); - if (obj == NULL) { - return NULL; - } - if ((requires & ELEMENTSTRIDES) && - !PyArray_ElementStrides(obj)) { - PyObject *new; - new = PyArray_NewCopy((PyArrayObject *)obj, PyArray_ANYORDER); - Py_DECREF(obj); - obj = new; - } - return obj; -} - -/*NUMPY_API - * steals reference to newtype --- acc. NULL - */ -NPY_NO_EXPORT PyObject * -PyArray_FromArray(PyArrayObject *arr, PyArray_Descr *newtype, int flags) -{ - - PyArrayObject *ret = NULL; - int itemsize; - int copy = 0; - int arrflags; - PyArray_Descr *oldtype; - char *msg = "cannot copy back to a read-only array"; - PyTypeObject *subtype; - - oldtype = PyArray_DESCR(arr); - subtype = Py_TYPE(arr); - if (newtype == NULL) { - newtype = oldtype; Py_INCREF(oldtype); - } - itemsize = newtype->elsize; - if (itemsize == 0) { - PyArray_DESCR_REPLACE(newtype); - if (newtype == NULL) { - return NULL; - } - newtype->elsize = oldtype->elsize; - itemsize = newtype->elsize; - } - - /* - * Can't cast unless ndim-0 array, FORCECAST is specified - * or the cast is safe. 
- */ - if (!(flags & FORCECAST) && !PyArray_NDIM(arr) == 0 && - !PyArray_CanCastTo(oldtype, newtype)) { - Py_DECREF(newtype); - PyErr_SetString(PyExc_TypeError, - "array cannot be safely cast " \ - "to required type"); - return NULL; - } - - /* Don't copy if sizes are compatible */ - if ((flags & ENSURECOPY) || PyArray_EquivTypes(oldtype, newtype)) { - arrflags = arr->flags; - copy = (flags & ENSURECOPY) || - ((flags & CONTIGUOUS) && (!(arrflags & CONTIGUOUS))) - || ((flags & ALIGNED) && (!(arrflags & ALIGNED))) - || (arr->nd > 1 && - ((flags & FORTRAN) && (!(arrflags & FORTRAN)))) - || ((flags & WRITEABLE) && (!(arrflags & WRITEABLE))); - - if (copy) { - if ((flags & UPDATEIFCOPY) && - (!PyArray_ISWRITEABLE(arr))) { - Py_DECREF(newtype); - PyErr_SetString(PyExc_ValueError, msg); - return NULL; - } - if ((flags & ENSUREARRAY)) { - subtype = &PyArray_Type; - } - ret = (PyArrayObject *) - PyArray_NewFromDescr(subtype, newtype, - arr->nd, - arr->dimensions, - NULL, NULL, - flags & FORTRAN, - (PyObject *)arr); - if (ret == NULL) { - return NULL; - } - if (PyArray_CopyInto(ret, arr) == -1) { - Py_DECREF(ret); - return NULL; - } - if (flags & UPDATEIFCOPY) { - ret->flags |= UPDATEIFCOPY; - ret->base = (PyObject *)arr; - PyArray_FLAGS(ret->base) &= ~WRITEABLE; - Py_INCREF(arr); - } - } - /* - * If no copy then just increase the reference - * count and return the input - */ - else { - Py_DECREF(newtype); - if ((flags & ENSUREARRAY) && - !PyArray_CheckExact(arr)) { - Py_INCREF(arr->descr); - ret = (PyArrayObject *) - PyArray_NewFromDescr(&PyArray_Type, - arr->descr, - arr->nd, - arr->dimensions, - arr->strides, - arr->data, - arr->flags,NULL); - if (ret == NULL) { - return NULL; - } - ret->base = (PyObject *)arr; - } - else { - ret = arr; - } - Py_INCREF(arr); - } - } - - /* - * The desired output type is different than the input - * array type and copy was not specified - */ - else { - if ((flags & UPDATEIFCOPY) && - (!PyArray_ISWRITEABLE(arr))) { - Py_DECREF(newtype); - 
PyErr_SetString(PyExc_ValueError, msg); - return NULL; - } - if ((flags & ENSUREARRAY)) { - subtype = &PyArray_Type; - } - ret = (PyArrayObject *) - PyArray_NewFromDescr(subtype, newtype, - arr->nd, arr->dimensions, - NULL, NULL, - flags & FORTRAN, - (PyObject *)arr); - if (ret == NULL) { - return NULL; - } - if (PyArray_CastTo(ret, arr) < 0) { - Py_DECREF(ret); - return NULL; - } - if (flags & UPDATEIFCOPY) { - ret->flags |= UPDATEIFCOPY; - ret->base = (PyObject *)arr; - PyArray_FLAGS(ret->base) &= ~WRITEABLE; - Py_INCREF(arr); - } - } - return (PyObject *)ret; -} - -/*NUMPY_API */ -NPY_NO_EXPORT PyObject * -PyArray_FromStructInterface(PyObject *input) -{ - PyArray_Descr *thetype = NULL; - char buf[40]; - PyArrayInterface *inter; - PyObject *attr, *r; - char endian = PyArray_NATBYTE; - - attr = PyObject_GetAttrString(input, "__array_struct__"); - if (attr == NULL) { - PyErr_Clear(); - return Py_NotImplemented; - } - if (!NpyCapsule_Check(attr)) { - goto fail; - } - inter = NpyCapsule_AsVoidPtr(attr); - if (inter->two != 2) { - goto fail; - } - if ((inter->flags & NOTSWAPPED) != NOTSWAPPED) { - endian = PyArray_OPPBYTE; - inter->flags &= ~NOTSWAPPED; - } - - if (inter->flags & ARR_HAS_DESCR) { - if (PyArray_DescrConverter(inter->descr, &thetype) == PY_FAIL) { - thetype = NULL; - PyErr_Clear(); - } - } - - if (thetype == NULL) { - PyOS_snprintf(buf, sizeof(buf), - "%c%c%d", endian, inter->typekind, inter->itemsize); - if (!(thetype=_array_typedescr_fromstr(buf))) { - Py_DECREF(attr); - return NULL; - } - } - - r = PyArray_NewFromDescr(&PyArray_Type, thetype, - inter->nd, inter->shape, - inter->strides, inter->data, - inter->flags, NULL); - Py_INCREF(input); - PyArray_BASE(r) = input; - Py_DECREF(attr); - PyArray_UpdateFlags((PyArrayObject *)r, UPDATE_ALL); - return r; - - fail: - PyErr_SetString(PyExc_ValueError, "invalid __array_struct__"); - Py_DECREF(attr); - return NULL; -} - -#define PyIntOrLong_Check(obj) (PyInt_Check(obj) || PyLong_Check(obj)) - 
-/*NUMPY_API*/ -NPY_NO_EXPORT PyObject * -PyArray_FromInterface(PyObject *input) -{ - PyObject *attr = NULL, *item = NULL; - PyObject *tstr = NULL, *shape = NULL; - PyObject *inter = NULL; - PyObject *base = NULL; - PyArrayObject *ret; - PyArray_Descr *type=NULL; - char *data; - Py_ssize_t buffer_len; - int res, i, n; - intp dims[MAX_DIMS], strides[MAX_DIMS]; - int dataflags = BEHAVED; - - /* Get the memory from __array_data__ and __array_offset__ */ - /* Get the shape */ - /* Get the typestring -- ignore array_descr */ - /* Get the strides */ - - inter = PyObject_GetAttrString(input, "__array_interface__"); - if (inter == NULL) { - PyErr_Clear(); - return Py_NotImplemented; - } - if (!PyDict_Check(inter)) { - Py_DECREF(inter); - return Py_NotImplemented; - } - shape = PyDict_GetItemString(inter, "shape"); - if (shape == NULL) { - Py_DECREF(inter); - return Py_NotImplemented; - } - tstr = PyDict_GetItemString(inter, "typestr"); - if (tstr == NULL) { - Py_DECREF(inter); - return Py_NotImplemented; - } - - attr = PyDict_GetItemString(inter, "data"); - base = input; - if ((attr == NULL) || (attr==Py_None) || (!PyTuple_Check(attr))) { - if (attr && (attr != Py_None)) { - item = attr; - } - else { - item = input; - } - res = PyObject_AsWriteBuffer(item, (void **)&data, &buffer_len); - if (res < 0) { - PyErr_Clear(); - res = PyObject_AsReadBuffer( - item, (const void **)&data, &buffer_len); - if (res < 0) { - goto fail; - } - dataflags &= ~WRITEABLE; - } - attr = PyDict_GetItemString(inter, "offset"); - if (attr) { - longlong num = PyLong_AsLongLong(attr); - if (error_converting(num)) { - PyErr_SetString(PyExc_TypeError, - "offset "\ - "must be an integer"); - goto fail; - } - data += num; - } - base = item; - } - else { - PyObject *dataptr; - if (PyTuple_GET_SIZE(attr) != 2) { - PyErr_SetString(PyExc_TypeError, - "data must return " \ - "a 2-tuple with (data pointer "\ - "integer, read-only flag)"); - goto fail; - } - dataptr = PyTuple_GET_ITEM(attr, 0); - if 
(PyString_Check(dataptr)) { - res = sscanf(PyString_AsString(dataptr), - "%p", (void **)&data); - if (res < 1) { - PyErr_SetString(PyExc_TypeError, - "data string cannot be " \ - "converted"); - goto fail; - } - } - else if (PyIntOrLong_Check(dataptr)) { - data = PyLong_AsVoidPtr(dataptr); - } - else { - PyErr_SetString(PyExc_TypeError, "first element " \ - "of data tuple must be integer" \ - " or string."); - goto fail; - } - if (PyObject_IsTrue(PyTuple_GET_ITEM(attr,1))) { - dataflags &= ~WRITEABLE; - } - } - attr = tstr; -#if defined(NPY_PY3K) - if (PyUnicode_Check(tstr)) { - /* Allow unicode type strings */ - attr = PyUnicode_AsASCIIString(tstr); - } -#endif - if (!PyBytes_Check(attr)) { - PyErr_SetString(PyExc_TypeError, "typestr must be a string"); - goto fail; - } - type = _array_typedescr_fromstr(PyString_AS_STRING(attr)); -#if defined(NPY_PY3K) - if (attr != tstr) { - Py_DECREF(attr); - } -#endif - if (type == NULL) { - goto fail; - } - attr = shape; - if (!PyTuple_Check(attr)) { - PyErr_SetString(PyExc_TypeError, "shape must be a tuple"); - Py_DECREF(type); - goto fail; - } - n = PyTuple_GET_SIZE(attr); - for (i = 0; i < n; i++) { - item = PyTuple_GET_ITEM(attr, i); - dims[i] = PyArray_PyIntAsIntp(item); - if (error_converting(dims[i])) { - break; - } - } - - ret = (PyArrayObject *)PyArray_NewFromDescr(&PyArray_Type, type, - n, dims, - NULL, data, - dataflags, NULL); - if (ret == NULL) { - return NULL; - } - Py_INCREF(base); - ret->base = base; - - attr = PyDict_GetItemString(inter, "strides"); - if (attr != NULL && attr != Py_None) { - if (!PyTuple_Check(attr)) { - PyErr_SetString(PyExc_TypeError, - "strides must be a tuple"); - Py_DECREF(ret); - return NULL; - } - if (n != PyTuple_GET_SIZE(attr)) { - PyErr_SetString(PyExc_ValueError, - "mismatch in length of "\ - "strides and shape"); - Py_DECREF(ret); - return NULL; - } - for (i = 0; i < n; i++) { - item = PyTuple_GET_ITEM(attr, i); - strides[i] = PyArray_PyIntAsIntp(item); - if 
(error_converting(strides[i])) { - break; - } - } - if (PyErr_Occurred()) { - PyErr_Clear(); - } - memcpy(ret->strides, strides, n*sizeof(intp)); - } - else PyErr_Clear(); - PyArray_UpdateFlags(ret, UPDATE_ALL); - Py_DECREF(inter); - return (PyObject *)ret; - - fail: - Py_XDECREF(inter); - return NULL; -} - -/*NUMPY_API*/ -NPY_NO_EXPORT PyObject * -PyArray_FromArrayAttr(PyObject *op, PyArray_Descr *typecode, PyObject *context) -{ - PyObject *new; - PyObject *array_meth; - - array_meth = PyObject_GetAttrString(op, "__array__"); - if (array_meth == NULL) { - PyErr_Clear(); - return Py_NotImplemented; - } - if (context == NULL) { - if (typecode == NULL) { - new = PyObject_CallFunction(array_meth, NULL); - } - else { - new = PyObject_CallFunction(array_meth, "O", typecode); - } - } - else { - if (typecode == NULL) { - new = PyObject_CallFunction(array_meth, "OO", Py_None, context); - if (new == NULL && PyErr_ExceptionMatches(PyExc_TypeError)) { - PyErr_Clear(); - new = PyObject_CallFunction(array_meth, ""); - } - } - else { - new = PyObject_CallFunction(array_meth, "OO", typecode, context); - if (new == NULL && PyErr_ExceptionMatches(PyExc_TypeError)) { - PyErr_Clear(); - new = PyObject_CallFunction(array_meth, "O", typecode); - } - } - } - Py_DECREF(array_meth); - if (new == NULL) { - return NULL; - } - if (!PyArray_Check(new)) { - PyErr_SetString(PyExc_ValueError, - "object __array__ method not " \ - "producing an array"); - Py_DECREF(new); - return NULL; - } - return new; -} - -/*NUMPY_API -* new reference -- accepts NULL for mintype -*/ -NPY_NO_EXPORT PyArray_Descr * -PyArray_DescrFromObject(PyObject *op, PyArray_Descr *mintype) -{ - return _array_find_type(op, mintype, MAX_DIMS); -} - -/* These are also old calls (should use PyArray_NewFromDescr) */ - -/* They all zero-out the memory as previously done */ - -/* steals reference to descr -- and enforces native byteorder on it.*/ -/*NUMPY_API - Like FromDimsAndData but uses the Descr structure instead of typecode - 
as input. -*/ -NPY_NO_EXPORT PyObject * -PyArray_FromDimsAndDataAndDescr(int nd, int *d, - PyArray_Descr *descr, - char *data) -{ - PyObject *ret; - int i; - intp newd[MAX_DIMS]; - char msg[] = "PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr."; - - if (DEPRECATE(msg) < 0) { - return NULL; - } - if (!PyArray_ISNBO(descr->byteorder)) - descr->byteorder = '='; - for (i = 0; i < nd; i++) { - newd[i] = (intp) d[i]; - } - ret = PyArray_NewFromDescr(&PyArray_Type, descr, - nd, newd, - NULL, data, - (data ? CARRAY : 0), NULL); - return ret; -} - -/*NUMPY_API - Construct an empty array from dimensions and typenum -*/ -NPY_NO_EXPORT PyObject * -PyArray_FromDims(int nd, int *d, int type) -{ - PyObject *ret; - char msg[] = "PyArray_FromDims: use PyArray_SimpleNew."; - - if (DEPRECATE(msg) < 0) { - return NULL; - } - ret = PyArray_FromDimsAndDataAndDescr(nd, d, - PyArray_DescrFromType(type), - NULL); - /* - * Old FromDims set memory to zero --- some algorithms - * relied on that. Better keep it the same. If - * Object type, then it's already been set to zero, though. 
- */ - if (ret && (PyArray_DESCR(ret)->type_num != PyArray_OBJECT)) { - memset(PyArray_DATA(ret), 0, PyArray_NBYTES(ret)); - } - return ret; -} - -/* end old calls */ - -/*NUMPY_API - * This is a quick wrapper around PyArray_FromAny(op, NULL, 0, 0, ENSUREARRAY) - * that special cases Arrays and PyArray_Scalars up front - * It *steals a reference* to the object - * It also guarantees that the result is PyArray_Type - * Because it decrefs op if any conversion needs to take place - * so it can be used like PyArray_EnsureArray(some_function(...)) - */ -NPY_NO_EXPORT PyObject * -PyArray_EnsureArray(PyObject *op) -{ - PyObject *new; - - if ((op == NULL) || (PyArray_CheckExact(op))) { - new = op; - Py_XINCREF(new); - } - else if (PyArray_Check(op)) { - new = PyArray_View((PyArrayObject *)op, NULL, &PyArray_Type); - } - else if (PyArray_IsScalar(op, Generic)) { - new = PyArray_FromScalar(op, NULL); - } - else { - new = PyArray_FromAny(op, NULL, 0, 0, ENSUREARRAY, NULL); - } - Py_XDECREF(op); - return new; -} - -/*NUMPY_API*/ -NPY_NO_EXPORT PyObject * -PyArray_EnsureAnyArray(PyObject *op) -{ - if (op && PyArray_Check(op)) { - return op; - } - return PyArray_EnsureArray(op); -} - -/*NUMPY_API - * Copy an Array into another array -- memory must not overlap - * Does not require src and dest to have "broadcastable" shapes - * (only the same number of elements). 
- */ -NPY_NO_EXPORT int -PyArray_CopyAnyInto(PyArrayObject *dest, PyArrayObject *src) -{ - int elsize, simple; - PyArrayIterObject *idest, *isrc; - void (*myfunc)(char *, intp, char *, intp, intp, int); - NPY_BEGIN_THREADS_DEF; - - if (!PyArray_EquivArrTypes(dest, src)) { - return PyArray_CastAnyTo(dest, src); - } - if (!PyArray_ISWRITEABLE(dest)) { - PyErr_SetString(PyExc_RuntimeError, - "cannot write to array"); - return -1; - } - if (PyArray_SIZE(dest) != PyArray_SIZE(src)) { - PyErr_SetString(PyExc_ValueError, - "arrays must have the same number of elements" - " for copy"); - return -1; - } - - simple = ((PyArray_ISCARRAY_RO(src) && PyArray_ISCARRAY(dest)) || - (PyArray_ISFARRAY_RO(src) && PyArray_ISFARRAY(dest))); - if (simple) { - /* Refcount note: src and dest have the same size */ - PyArray_INCREF(src); - PyArray_XDECREF(dest); - NPY_BEGIN_THREADS; - memcpy(dest->data, src->data, PyArray_NBYTES(dest)); - NPY_END_THREADS; - return 0; - } - - if (PyArray_SAMESHAPE(dest, src)) { - int swap; - - if (PyArray_SAFEALIGNEDCOPY(dest) && PyArray_SAFEALIGNEDCOPY(src)) { - myfunc = _strided_byte_copy; - } - else { - myfunc = _unaligned_strided_byte_copy; - } - swap = PyArray_ISNOTSWAPPED(dest) != PyArray_ISNOTSWAPPED(src); - return _copy_from_same_shape(dest, src, myfunc, swap); - } - - /* Otherwise we have to do an iterator-based copy */ - idest = (PyArrayIterObject *)PyArray_IterNew((PyObject *)dest); - if (idest == NULL) { - return -1; - } - isrc = (PyArrayIterObject *)PyArray_IterNew((PyObject *)src); - if (isrc == NULL) { - Py_DECREF(idest); - return -1; - } - elsize = dest->descr->elsize; - /* Refcount note: src and dest have the same size */ - PyArray_INCREF(src); - PyArray_XDECREF(dest); - NPY_BEGIN_THREADS; - while(idest->index < idest->size) { - memcpy(idest->dataptr, isrc->dataptr, elsize); - PyArray_ITER_NEXT(idest); - PyArray_ITER_NEXT(isrc); - } - NPY_END_THREADS; - Py_DECREF(idest); - Py_DECREF(isrc); - return 0; -} - -/*NUMPY_API - * Copy an Array into 
another array -- memory must not overlap. - */ -NPY_NO_EXPORT int -PyArray_CopyInto(PyArrayObject *dest, PyArrayObject *src) -{ - return _array_copy_into(dest, src, 1); -} - - -/*NUMPY_API - PyArray_CheckAxis - - check that axis is valid - convert 0-d arrays to 1-d arrays -*/ -NPY_NO_EXPORT PyObject * -PyArray_CheckAxis(PyArrayObject *arr, int *axis, int flags) -{ - PyObject *temp1, *temp2; - int n = arr->nd; - - if (*axis == MAX_DIMS || n == 0) { - if (n != 1) { - temp1 = PyArray_Ravel(arr,0); - if (temp1 == NULL) { - *axis = 0; - return NULL; - } - if (*axis == MAX_DIMS) { - *axis = PyArray_NDIM(temp1)-1; - } - } - else { - temp1 = (PyObject *)arr; - Py_INCREF(temp1); - *axis = 0; - } - if (!flags && *axis == 0) { - return temp1; - } - } - else { - temp1 = (PyObject *)arr; - Py_INCREF(temp1); - } - if (flags) { - temp2 = PyArray_CheckFromAny((PyObject *)temp1, NULL, - 0, 0, flags, NULL); - Py_DECREF(temp1); - if (temp2 == NULL) { - return NULL; - } - } - else { - temp2 = (PyObject *)temp1; - } - n = PyArray_NDIM(temp2); - if (*axis < 0) { - *axis += n; - } - if ((*axis < 0) || (*axis >= n)) { - PyErr_Format(PyExc_ValueError, - "axis(=%d) out of bounds", *axis); - Py_DECREF(temp2); - return NULL; - } - return temp2; -} - -/*NUMPY_API - * Zeros - * - * steals a reference - * accepts NULL type - */ -NPY_NO_EXPORT PyObject * -PyArray_Zeros(int nd, intp *dims, PyArray_Descr *type, int fortran) -{ - PyArrayObject *ret; - - if (!type) { - type = PyArray_DescrFromType(PyArray_DEFAULT); - } - ret = (PyArrayObject *)PyArray_NewFromDescr(&PyArray_Type, - type, - nd, dims, - NULL, NULL, - fortran, NULL); - if (ret == NULL) { - return NULL; - } - if (_zerofill(ret) < 0) { - Py_DECREF(ret); - return NULL; - } - return (PyObject *)ret; - -} - -/*NUMPY_API - * Empty - * - * accepts NULL type - * steals reference to type - */ -NPY_NO_EXPORT PyObject * -PyArray_Empty(int nd, intp *dims, PyArray_Descr *type, int fortran) -{ - PyArrayObject *ret; - - if (!type) type = 
PyArray_DescrFromType(PyArray_DEFAULT); - ret = (PyArrayObject *)PyArray_NewFromDescr(&PyArray_Type, - type, nd, dims, - NULL, NULL, - fortran, NULL); - if (ret == NULL) { - return NULL; - } - if (PyDataType_REFCHK(type)) { - PyArray_FillObjectArray(ret, Py_None); - if (PyErr_Occurred()) { - Py_DECREF(ret); - return NULL; - } - } - return (PyObject *)ret; -} - -/* - * Like ceil(value), but check for overflow. - * - * Return 0 on success, -1 on overflow. On failure, the caller is expected - * to raise the OverflowError exception. - */ -static int _safe_ceil_to_intp(double value, intp* ret) -{ - double ivalue; - - ivalue = npy_ceil(value); - if (ivalue < NPY_MIN_INTP || ivalue > NPY_MAX_INTP) { - return -1; - } - - *ret = (intp)ivalue; - return 0; -} - - -/*NUMPY_API - Arange, -*/ -NPY_NO_EXPORT PyObject * -PyArray_Arange(double start, double stop, double step, int type_num) -{ - intp length; - PyObject *range; - PyArray_ArrFuncs *funcs; - PyObject *obj; - int ret; - - if (_safe_ceil_to_intp((stop - start)/step, &length)) { - PyErr_SetString(PyExc_OverflowError, - "arange: overflow while computing length"); - return NULL; - } - - if (length <= 0) { - length = 0; - return PyArray_New(&PyArray_Type, 1, &length, type_num, - NULL, NULL, 0, 0, NULL); - } - range = PyArray_New(&PyArray_Type, 1, &length, type_num, - NULL, NULL, 0, 0, NULL); - if (range == NULL) { - return NULL; - } - funcs = PyArray_DESCR(range)->f; - - /* - * place start in the buffer and the next value in the second position - * if length > 2, then call the inner loop, otherwise stop - */ - obj = PyFloat_FromDouble(start); - ret = funcs->setitem(obj, PyArray_DATA(range), (PyArrayObject *)range); - Py_DECREF(obj); - if (ret < 0) { - goto fail; - } - if (length == 1) { - return range; - } - obj = PyFloat_FromDouble(start + step); - ret = funcs->setitem(obj, PyArray_BYTES(range)+PyArray_ITEMSIZE(range), - (PyArrayObject *)range); - Py_DECREF(obj); - if (ret < 0) { - goto fail; - } - if (length == 2) { - return range; - } - if (!funcs->fill) { - 
PyErr_SetString(PyExc_ValueError, "no fill-function for data-type."); - Py_DECREF(range); - return NULL; - } - funcs->fill(PyArray_DATA(range), length, (PyArrayObject *)range); - if (PyErr_Occurred()) { - goto fail; - } - return range; - - fail: - Py_DECREF(range); - return NULL; -} - -/* - * the formula is len = (intp) ceil((stop - start) / step); - */ -static intp -_calc_length(PyObject *start, PyObject *stop, PyObject *step, PyObject **next, int cmplx) -{ - intp len, tmp; - PyObject *val; - double value; - - *next = PyNumber_Subtract(stop, start); - if (!(*next)) { - if (PyTuple_Check(stop)) { - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "arange: scalar arguments expected "\ - "instead of a tuple."); - } - return -1; - } - val = PyNumber_TrueDivide(*next, step); - Py_DECREF(*next); - *next = NULL; - if (!val) { - return -1; - } - if (cmplx && PyComplex_Check(val)) { - value = PyComplex_RealAsDouble(val); - if (error_converting(value)) { - Py_DECREF(val); - return -1; - } - if (_safe_ceil_to_intp(value, &len)) { - Py_DECREF(val); - PyErr_SetString(PyExc_OverflowError, - "arange: overflow while computing length"); - return -1; - } - value = PyComplex_ImagAsDouble(val); - Py_DECREF(val); - if (error_converting(value)) { - return -1; - } - if (_safe_ceil_to_intp(value, &tmp)) { - PyErr_SetString(PyExc_OverflowError, - "arange: overflow while computing length"); - return -1; - } - len = MIN(len, tmp); - } - else { - value = PyFloat_AsDouble(val); - Py_DECREF(val); - if (error_converting(value)) { - return -1; - } - if (_safe_ceil_to_intp(value, &len)) { - PyErr_SetString(PyExc_OverflowError, - "arange: overflow while computing length"); - return -1; - } - } - if (len > 0) { - *next = PyNumber_Add(start, step); - if (!*next) { - return -1; - } - } - return len; -} - -/*NUMPY_API - * - * ArangeObj, - * - * this doesn't change the references - */ -NPY_NO_EXPORT PyObject * -PyArray_ArangeObj(PyObject *start, PyObject *stop, PyObject *step, PyArray_Descr *dtype) 
-{ - PyObject *range; - PyArray_ArrFuncs *funcs; - PyObject *next, *err; - intp length; - PyArray_Descr *native = NULL; - int swap; - - if (!dtype) { - PyArray_Descr *deftype; - PyArray_Descr *newtype; - /* intentionally made to be PyArray_LONG default */ - deftype = PyArray_DescrFromType(PyArray_LONG); - newtype = PyArray_DescrFromObject(start, deftype); - Py_DECREF(deftype); - deftype = newtype; - if (stop && stop != Py_None) { - newtype = PyArray_DescrFromObject(stop, deftype); - Py_DECREF(deftype); - deftype = newtype; - } - if (step && step != Py_None) { - newtype = PyArray_DescrFromObject(step, deftype); - Py_DECREF(deftype); - deftype = newtype; - } - dtype = deftype; - } - else { - Py_INCREF(dtype); - } - if (!step || step == Py_None) { - step = PyInt_FromLong(1); - } - else { - Py_XINCREF(step); - } - if (!stop || stop == Py_None) { - stop = start; - start = PyInt_FromLong(0); - } - else { - Py_INCREF(start); - } - /* calculate the length and next = start + step*/ - length = _calc_length(start, stop, step, &next, - PyTypeNum_ISCOMPLEX(dtype->type_num)); - err = PyErr_Occurred(); - if (err) { - Py_DECREF(dtype); - if (err && PyErr_GivenExceptionMatches(err, PyExc_OverflowError)) { - PyErr_SetString(PyExc_ValueError, "Maximum allowed size exceeded"); - } - goto fail; - } - if (length <= 0) { - length = 0; - range = PyArray_SimpleNewFromDescr(1, &length, dtype); - Py_DECREF(step); - Py_DECREF(start); - return range; - } - - /* - * If dtype is not in native byte-order then get native-byte - * order version. And then swap on the way out. 
- */ - if (!PyArray_ISNBO(dtype->byteorder)) { - native = PyArray_DescrNewByteorder(dtype, PyArray_NATBYTE); - swap = 1; - } - else { - native = dtype; - swap = 0; - } - - range = PyArray_SimpleNewFromDescr(1, &length, native); - if (range == NULL) { - goto fail; - } - - /* - * place start in the buffer and the next value in the second position - * if length > 2, then call the inner loop, otherwise stop - */ - funcs = PyArray_DESCR(range)->f; - if (funcs->setitem( - start, PyArray_DATA(range), (PyArrayObject *)range) < 0) { - goto fail; - } - if (length == 1) { - goto finish; - } - if (funcs->setitem(next, PyArray_BYTES(range)+PyArray_ITEMSIZE(range), - (PyArrayObject *)range) < 0) { - goto fail; - } - if (length == 2) { - goto finish; - } - if (!funcs->fill) { - PyErr_SetString(PyExc_ValueError, "no fill-function for data-type."); - Py_DECREF(range); - goto fail; - } - funcs->fill(PyArray_DATA(range), length, (PyArrayObject *)range); - if (PyErr_Occurred()) { - goto fail; - } - finish: - if (swap) { - PyObject *new; - new = PyArray_Byteswap((PyArrayObject *)range, 1); - Py_DECREF(new); - Py_DECREF(PyArray_DESCR(range)); - PyArray_DESCR(range) = dtype; /* steals the reference */ - } - Py_DECREF(start); - Py_DECREF(step); - Py_DECREF(next); - return range; - - fail: - Py_DECREF(start); - Py_DECREF(step); - Py_XDECREF(next); - return NULL; -} - -static PyArrayObject * -array_fromfile_binary(FILE *fp, PyArray_Descr *dtype, intp num, size_t *nread) -{ - PyArrayObject *r; - intp start, numbytes; - - if (num < 0) { - int fail = 0; - - start = (intp )ftell(fp); - if (start < 0) { - fail = 1; - } - if (fseek(fp, 0, SEEK_END) < 0) { - fail = 1; - } - numbytes = (intp) ftell(fp); - if (numbytes < 0) { - fail = 1; - } - numbytes -= start; - if (fseek(fp, start, SEEK_SET) < 0) { - fail = 1; - } - if (fail) { - PyErr_SetString(PyExc_IOError, - "could not seek in file"); - Py_DECREF(dtype); - return NULL; - } - num = numbytes / dtype->elsize; - } - r = (PyArrayObject 
*)PyArray_NewFromDescr(&PyArray_Type, - dtype, - 1, &num, - NULL, NULL, - 0, NULL); - if (r == NULL) { - return NULL; - } - NPY_BEGIN_ALLOW_THREADS; - *nread = fread(r->data, dtype->elsize, num, fp); - NPY_END_ALLOW_THREADS; - return r; -} - -/* - * Create an array by reading from the given stream, using the passed - * next_element and skip_separator functions. - */ -#define FROM_BUFFER_SIZE 4096 -static PyArrayObject * -array_from_text(PyArray_Descr *dtype, intp num, char *sep, size_t *nread, - void *stream, next_element next, skip_separator skip_sep, - void *stream_data) -{ - PyArrayObject *r; - intp i; - char *dptr, *clean_sep, *tmp; - int err = 0; - intp thisbuf = 0; - intp size; - intp bytes, totalbytes; - - size = (num >= 0) ? num : FROM_BUFFER_SIZE; - r = (PyArrayObject *) - PyArray_NewFromDescr(&PyArray_Type, - dtype, - 1, &size, - NULL, NULL, - 0, NULL); - if (r == NULL) { - return NULL; - } - clean_sep = swab_separator(sep); - NPY_BEGIN_ALLOW_THREADS; - totalbytes = bytes = size * dtype->elsize; - dptr = r->data; - for (i= 0; num < 0 || i < num; i++) { - if (next(&stream, dptr, dtype, stream_data) < 0) { - break; - } - *nread += 1; - thisbuf += 1; - dptr += dtype->elsize; - if (num < 0 && thisbuf == size) { - totalbytes += bytes; - tmp = PyDataMem_RENEW(r->data, totalbytes); - if (tmp == NULL) { - err = 1; - break; - } - r->data = tmp; - dptr = tmp + (totalbytes - bytes); - thisbuf = 0; - } - if (skip_sep(&stream, clean_sep, stream_data) < 0) { - break; - } - } - if (num < 0) { - tmp = PyDataMem_RENEW(r->data, NPY_MAX(*nread,1)*dtype->elsize); - if (tmp == NULL) { - err = 1; - } - else { - PyArray_DIM(r,0) = *nread; - r->data = tmp; - } - } - NPY_END_ALLOW_THREADS; - free(clean_sep); - if (err == 1) { - PyErr_NoMemory(); - } - if (PyErr_Occurred()) { - Py_DECREF(r); - return NULL; - } - return r; -} -#undef FROM_BUFFER_SIZE - -/*NUMPY_API - * - * Given a ``FILE *`` pointer ``fp``, and a ``PyArray_Descr``, return an - * array corresponding to the data 
encoded in that file. - * - * If the dtype is NULL, the default array type is used (double). - * If non-null, the reference is stolen. - * - * The number of elements to read is given as ``num``; if it is < 0, - * then as many as possible are read. - * - * If ``sep`` is NULL or empty, then binary data is assumed, else - * text data, with ``sep`` as the separator between elements. Whitespace in - * the separator matches any length of whitespace in the text, and a match - * for whitespace around the separator is added. - * - * For memory-mapped files, use the buffer interface. No more data than - * necessary is read by this routine. - */ -NPY_NO_EXPORT PyObject * -PyArray_FromFile(FILE *fp, PyArray_Descr *dtype, intp num, char *sep) -{ - PyArrayObject *ret; - size_t nread = 0; - - if (PyDataType_REFCHK(dtype)) { - PyErr_SetString(PyExc_ValueError, - "Cannot read into object array"); - Py_DECREF(dtype); - return NULL; - } - if (dtype->elsize == 0) { - PyErr_SetString(PyExc_ValueError, - "The elements are 0-sized."); - Py_DECREF(dtype); - return NULL; - } - if ((sep == NULL) || (strlen(sep) == 0)) { - ret = array_fromfile_binary(fp, dtype, num, &nread); - } - else { - if (dtype->f->scanfunc == NULL) { - PyErr_SetString(PyExc_ValueError, - "Unable to read character files of that array type"); - Py_DECREF(dtype); - return NULL; - } - ret = array_from_text(dtype, num, sep, &nread, fp, - (next_element) fromfile_next_element, - (skip_separator) fromfile_skip_separator, NULL); - } - if (ret == NULL) { - Py_DECREF(dtype); - return NULL; - } - if (((intp) nread) < num) { - /* Realloc memory for smaller number of elements */ - const size_t nsize = NPY_MAX(nread,1)*ret->descr->elsize; - char *tmp; - - if((tmp = PyDataMem_RENEW(ret->data, nsize)) == NULL) { - Py_DECREF(ret); - return PyErr_NoMemory(); - } - ret->data = tmp; - PyArray_DIM(ret,0) = nread; - } - return (PyObject *)ret; -} - -/*NUMPY_API*/ -NPY_NO_EXPORT PyObject * -PyArray_FromBuffer(PyObject *buf, PyArray_Descr 
*type, - intp count, intp offset) -{ - PyArrayObject *ret; - char *data; - Py_ssize_t ts; - intp s, n; - int itemsize; - int write = 1; - - - if (PyDataType_REFCHK(type)) { - PyErr_SetString(PyExc_ValueError, - "cannot create an OBJECT array from memory"\ - " buffer"); - Py_DECREF(type); - return NULL; - } - if (type->elsize == 0) { - PyErr_SetString(PyExc_ValueError, - "itemsize cannot be zero in type"); - Py_DECREF(type); - return NULL; - } - if (Py_TYPE(buf)->tp_as_buffer == NULL -#if defined(NPY_PY3K) - || Py_TYPE(buf)->tp_as_buffer->bf_getbuffer == NULL -#else - || (Py_TYPE(buf)->tp_as_buffer->bf_getwritebuffer == NULL - && Py_TYPE(buf)->tp_as_buffer->bf_getreadbuffer == NULL) -#endif - ) { - PyObject *newbuf; - newbuf = PyObject_GetAttrString(buf, "__buffer__"); - if (newbuf == NULL) { - Py_DECREF(type); - return NULL; - } - buf = newbuf; - } - else { - Py_INCREF(buf); - } - - if (PyObject_AsWriteBuffer(buf, (void *)&data, &ts) == -1) { - write = 0; - PyErr_Clear(); - if (PyObject_AsReadBuffer(buf, (void *)&data, &ts) == -1) { - Py_DECREF(buf); - Py_DECREF(type); - return NULL; - } - } - - if ((offset < 0) || (offset >= ts)) { - PyErr_Format(PyExc_ValueError, - "offset must be non-negative and smaller than buffer "\ - "length (%" INTP_FMT ")", (intp)ts); - Py_DECREF(buf); - Py_DECREF(type); - return NULL; - } - - data += offset; - s = (intp)ts - offset; - n = (intp)count; - itemsize = type->elsize; - if (n < 0 ) { - if (s % itemsize != 0) { - PyErr_SetString(PyExc_ValueError, - "buffer size must be a multiple"\ - " of element size"); - Py_DECREF(buf); - Py_DECREF(type); - return NULL; - } - n = s/itemsize; - } - else { - if (s < n*itemsize) { - PyErr_SetString(PyExc_ValueError, - "buffer is smaller than requested"\ - " size"); - Py_DECREF(buf); - Py_DECREF(type); - return NULL; - } - } - - if ((ret = (PyArrayObject *)PyArray_NewFromDescr(&PyArray_Type, - type, - 1, &n, - NULL, data, - DEFAULT, - NULL)) == NULL) { - Py_DECREF(buf); - return NULL; - } - - if 
(!write) { - ret->flags &= ~WRITEABLE; - } - /* Store a reference for decref on deallocation */ - ret->base = buf; - PyArray_UpdateFlags(ret, ALIGNED); - return (PyObject *)ret; -} - -/*NUMPY_API - * - * Given a pointer to a string ``data``, a string length ``slen``, and - * a ``PyArray_Descr``, return an array corresponding to the data - * encoded in that string. - * - * If the dtype is NULL, the default array type is used (double). - * If non-null, the reference is stolen. - * - * If ``slen`` is < 0, then the end of string is used for text data. - * It is an error for ``slen`` to be < 0 for binary data (since embedded NULLs - * would be the norm). - * - * The number of elements to read is given as ``num``; if it is < 0, - * then as many as possible are read. - * - * If ``sep`` is NULL or empty, then binary data is assumed, else - * text data, with ``sep`` as the separator between elements. Whitespace in - * the separator matches any length of whitespace in the text, and a match - * for whitespace around the separator is added. 
- */ -NPY_NO_EXPORT PyObject * -PyArray_FromString(char *data, intp slen, PyArray_Descr *dtype, - intp num, char *sep) -{ - int itemsize; - PyArrayObject *ret; - Bool binary; - - if (dtype == NULL) { - dtype=PyArray_DescrFromType(PyArray_DEFAULT); - } - if (PyDataType_FLAGCHK(dtype, NPY_ITEM_IS_POINTER)) { - PyErr_SetString(PyExc_ValueError, - "Cannot create an object array from" \ - " a string"); - Py_DECREF(dtype); - return NULL; - } - itemsize = dtype->elsize; - if (itemsize == 0) { - PyErr_SetString(PyExc_ValueError, "zero-valued itemsize"); - Py_DECREF(dtype); - return NULL; - } - - binary = ((sep == NULL) || (strlen(sep) == 0)); - if (binary) { - if (num < 0 ) { - if (slen % itemsize != 0) { - PyErr_SetString(PyExc_ValueError, - "string size must be a "\ - "multiple of element size"); - Py_DECREF(dtype); - return NULL; - } - num = slen/itemsize; - } - else { - if (slen < num*itemsize) { - PyErr_SetString(PyExc_ValueError, - "string is smaller than " \ - "requested size"); - Py_DECREF(dtype); - return NULL; - } - } - ret = (PyArrayObject *) - PyArray_NewFromDescr(&PyArray_Type, dtype, - 1, &num, NULL, NULL, - 0, NULL); - if (ret == NULL) { - return NULL; - } - memcpy(ret->data, data, num*dtype->elsize); - } - else { - /* read from character-based string */ - size_t nread = 0; - char *end; - - if (dtype->f->scanfunc == NULL) { - PyErr_SetString(PyExc_ValueError, - "don't know how to read " \ - "character strings with that " \ - "array type"); - Py_DECREF(dtype); - return NULL; - } - if (slen < 0) { - end = NULL; - } - else { - end = data + slen; - } - ret = array_from_text(dtype, num, sep, &nread, - data, - (next_element) fromstr_next_element, - (skip_separator) fromstr_skip_separator, - end); - } - return (PyObject *)ret; -} - -/*NUMPY_API - * - * steals a reference to dtype (which cannot be NULL) - */ -NPY_NO_EXPORT PyObject * -PyArray_FromIter(PyObject *obj, PyArray_Descr *dtype, intp count) -{ - PyObject *value; - PyObject *iter = PyObject_GetIter(obj); - 
PyArrayObject *ret = NULL; - intp i, elsize, elcount; - char *item, *new_data; - - if (iter == NULL) { - goto done; - } - elcount = (count < 0) ? 0 : count; - if ((elsize=dtype->elsize) == 0) { - PyErr_SetString(PyExc_ValueError, "Must specify length "\ - "when using variable-size data-type."); - goto done; - } - - /* - * We would need to alter the memory RENEW code to decrement any - * reference counts before throwing away any memory. - */ - if (PyDataType_REFCHK(dtype)) { - PyErr_SetString(PyExc_ValueError, "cannot create "\ - "object arrays from iterator"); - goto done; - } - - ret = (PyArrayObject *)PyArray_NewFromDescr(&PyArray_Type, dtype, 1, - &elcount, NULL,NULL, 0, NULL); - dtype = NULL; - if (ret == NULL) { - goto done; - } - for (i = 0; (i < count || count == -1) && - (value = PyIter_Next(iter)); i++) { - if (i >= elcount) { - /* - Grow ret->data: - this is similar for the strategy for PyListObject, but we use - 50% overallocation => 0, 4, 8, 14, 23, 36, 56, 86 ... - */ - elcount = (i >> 1) + (i < 4 ? 4 : 2) + i; - if (elcount <= NPY_MAX_INTP/elsize) { - new_data = PyDataMem_RENEW(ret->data, elcount * elsize); - } - else { - new_data = NULL; - } - if (new_data == NULL) { - PyErr_SetString(PyExc_MemoryError, - "cannot allocate array memory"); - Py_DECREF(value); - goto done; - } - ret->data = new_data; - } - ret->dimensions[0] = i + 1; - - if (((item = index2ptr(ret, i)) == NULL) - || (ret->descr->f->setitem(value, item, ret) == -1)) { - Py_DECREF(value); - goto done; - } - Py_DECREF(value); - } - - if (i < count) { - PyErr_SetString(PyExc_ValueError, "iterator too short"); - goto done; - } - - /* - * Realloc the data so that don't keep extra memory tied up - * (assuming realloc is reasonably good about reusing space...) 
- */ - if (i == 0) { - i = 1; - } - new_data = PyDataMem_RENEW(ret->data, i * elsize); - if (new_data == NULL) { - PyErr_SetString(PyExc_MemoryError, "cannot allocate array memory"); - goto done; - } - ret->data = new_data; - - done: - Py_XDECREF(iter); - Py_XDECREF(dtype); - if (PyErr_Occurred()) { - Py_XDECREF(ret); - return NULL; - } - return (PyObject *)ret; -} - -/* - * This is the main array creation routine. - * - * Flags argument has multiple related meanings - * depending on data and strides: - * - * If data is given, then flags is flags associated with data. - * If strides is not given, then a contiguous strides array will be created - * and the CONTIGUOUS bit will be set. If the flags argument - * has the FORTRAN bit set, then a FORTRAN-style strides array will be - * created (and of course the FORTRAN flag bit will be set). - * - * If data is not given but created here, then flags will be DEFAULT - * and a non-zero flags argument can be used to indicate a FORTRAN style - * array is desired. - */ - -NPY_NO_EXPORT size_t -_array_fill_strides(intp *strides, intp *dims, int nd, size_t itemsize, - int inflag, int *objflags) -{ - int i; - /* Only make Fortran strides if not contiguous as well */ - if ((inflag & FORTRAN) && !(inflag & CONTIGUOUS)) { - for (i = 0; i < nd; i++) { - strides[i] = itemsize; - itemsize *= dims[i] ? dims[i] : 1; - } - *objflags |= FORTRAN; - if (nd > 1) { - *objflags &= ~CONTIGUOUS; - } - else { - *objflags |= CONTIGUOUS; - } - } - else { - for (i = nd - 1; i >= 0; i--) { - strides[i] = itemsize; - itemsize *= dims[i] ? 
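The comment in `PyArray_FromIter` above describes its buffer growth rule: like `PyListObject`, but with roughly 50% overallocation, producing the capacity sequence 4, 8, 14, 23, 36, 56, 86, ... The formula can be isolated as a one-liner (standalone sketch; the function name is ours):

```c
/* Next capacity when index i has outgrown the current buffer,
 * as computed in PyArray_FromIter:
 *   elcount = (i >> 1) + (i < 4 ? 4 : 2) + i
 * i.e. about 1.5x growth plus a small constant. */
static long
grow_capacity(long i)
{
    return (i >> 1) + (i < 4 ? 4 : 2) + i;
}
```

Feeding each result back in reproduces the sequence quoted in the source comment.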
dims[i] : 1; - } - *objflags |= CONTIGUOUS; - if (nd > 1) { - *objflags &= ~FORTRAN; - } - else { - *objflags |= FORTRAN; - } - } - return itemsize; -} - diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/ctors.h b/pythonPackages/numpy/numpy/core/src/multiarray/ctors.h deleted file mode 100755 index 5fd1d8e581..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/ctors.h +++ /dev/null @@ -1,70 +0,0 @@ -#ifndef _NPY_ARRAY_CTORS_H_ -#define _NPY_ARRAY_CTORS_H_ - -NPY_NO_EXPORT PyObject * -PyArray_NewFromDescr(PyTypeObject *subtype, PyArray_Descr *descr, int nd, - intp *dims, intp *strides, void *data, - int flags, PyObject *obj); - -NPY_NO_EXPORT PyObject *PyArray_New(PyTypeObject *, int nd, intp *, - int, intp *, void *, int, int, PyObject *); - -NPY_NO_EXPORT PyObject * -PyArray_FromAny(PyObject *op, PyArray_Descr *newtype, int min_depth, - int max_depth, int flags, PyObject *context); - -NPY_NO_EXPORT PyObject * -PyArray_CheckFromAny(PyObject *op, PyArray_Descr *descr, int min_depth, - int max_depth, int requires, PyObject *context); - -NPY_NO_EXPORT PyObject * -PyArray_FromArray(PyArrayObject *arr, PyArray_Descr *newtype, int flags); - -NPY_NO_EXPORT PyObject * -PyArray_FromStructInterface(PyObject *input); - -NPY_NO_EXPORT PyObject * -PyArray_FromInterface(PyObject *input); - -NPY_NO_EXPORT PyObject * -PyArray_FromArrayAttr(PyObject *op, PyArray_Descr *typecode, - PyObject *context); - -NPY_NO_EXPORT PyObject * -PyArray_EnsureArray(PyObject *op); - -NPY_NO_EXPORT PyObject * -PyArray_EnsureAnyArray(PyObject *op); - -NPY_NO_EXPORT int -PyArray_MoveInto(PyArrayObject *dest, PyArrayObject *src); - -NPY_NO_EXPORT int -PyArray_CopyAnyInto(PyArrayObject *dest, PyArrayObject *src); - -NPY_NO_EXPORT PyObject * -PyArray_CheckAxis(PyArrayObject *arr, int *axis, int flags); - -/* FIXME: remove those from here */ -NPY_NO_EXPORT int -_flat_copyinto(PyObject *dst, PyObject *src, NPY_ORDER order); - -NPY_NO_EXPORT size_t -_array_fill_strides(intp *strides, 
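`_array_fill_strides` above computes row-major (C) strides by walking the dimensions from last to first, and column-major (Fortran) strides by walking first to last, accumulating the itemsize as it goes; zero-length dimensions are treated as length 1 so strides stay nonzero. A standalone version without the flags bookkeeping (names and simplifications are ours):

```c
#include <stddef.h>

/* Fill strides for an nd-array of the given dims and itemsize.
 * fortran != 0 selects column-major order. Returns the total
 * buffer size in bytes, as the original does. */
static size_t
fill_strides(ptrdiff_t *strides, const ptrdiff_t *dims, int nd,
             size_t itemsize, int fortran)
{
    int i;
    if (fortran) {
        for (i = 0; i < nd; i++) {          /* first axis varies fastest */
            strides[i] = (ptrdiff_t)itemsize;
            itemsize *= dims[i] ? dims[i] : 1;
        }
    }
    else {
        for (i = nd - 1; i >= 0; i--) {     /* last axis varies fastest */
            strides[i] = (ptrdiff_t)itemsize;
            itemsize *= dims[i] ? dims[i] : 1;
        }
    }
    return itemsize;
}
```

For a shape-(2, 3, 4) array of 8-byte elements this yields C strides (96, 32, 8) and Fortran strides (8, 16, 48), both totaling 192 bytes.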
intp *dims, int nd, size_t itemsize, - int inflag, int *objflags); - -NPY_NO_EXPORT void -_unaligned_strided_byte_copy(char *dst, intp outstrides, char *src, - intp instrides, intp N, int elsize); - -NPY_NO_EXPORT void -_strided_byte_swap(void *p, intp stride, intp n, int size); - -NPY_NO_EXPORT void -copy_and_swap(void *dst, void *src, int itemsize, intp numitems, - intp srcstrides, int swap); - -NPY_NO_EXPORT void -byte_swap_vector(void *p, intp n, int size); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/descriptor.c b/pythonPackages/numpy/numpy/core/src/multiarray/descriptor.c deleted file mode 100755 index 7bfc05c69f..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/descriptor.c +++ /dev/null @@ -1,2555 +0,0 @@ -/* Array Descr Object */ - -#define PY_SSIZE_T_CLEAN -#include -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "common.h" - -#define _chk_byteorder(arg) (arg == '>' || arg == '<' || \ - arg == '|' || arg == '=') - -static PyObject *typeDict = NULL; /* Must be explicitly loaded */ - -static PyArray_Descr * -_use_inherit(PyArray_Descr *type, PyObject *newobj, int *errflag); - -NPY_NO_EXPORT PyArray_Descr * -_arraydescr_fromobj(PyObject *obj) -{ - PyObject *dtypedescr; - PyArray_Descr *new; - int ret; - - dtypedescr = PyObject_GetAttrString(obj, "dtype"); - PyErr_Clear(); - if (dtypedescr) { - ret = PyArray_DescrConverter(dtypedescr, &new); - Py_DECREF(dtypedescr); - if (ret == PY_SUCCEED) { - return new; - } - PyErr_Clear(); - } - /* Understand basic ctypes */ - dtypedescr = PyObject_GetAttrString(obj, "_type_"); - PyErr_Clear(); - if (dtypedescr) { - ret = PyArray_DescrConverter(dtypedescr, &new); - Py_DECREF(dtypedescr); - if (ret == PY_SUCCEED) { - PyObject *length; - length = PyObject_GetAttrString(obj, "_length_"); - PyErr_Clear(); - if (length) 
{ - /* derived type */ - PyObject *newtup; - PyArray_Descr *derived; - newtup = Py_BuildValue("NO", new, length); - ret = PyArray_DescrConverter(newtup, &derived); - Py_DECREF(newtup); - if (ret == PY_SUCCEED) { - return derived; - } - PyErr_Clear(); - return NULL; - } - return new; - } - PyErr_Clear(); - return NULL; - } - /* Understand ctypes structures -- - bit-fields are not supported - automatically aligns */ - dtypedescr = PyObject_GetAttrString(obj, "_fields_"); - PyErr_Clear(); - if (dtypedescr) { - ret = PyArray_DescrAlignConverter(dtypedescr, &new); - Py_DECREF(dtypedescr); - if (ret == PY_SUCCEED) { - return new; - } - PyErr_Clear(); - } - return NULL; -} - -NPY_NO_EXPORT PyObject * -array_set_typeDict(PyObject *NPY_UNUSED(ignored), PyObject *args) -{ - PyObject *dict; - - if (!PyArg_ParseTuple(args, "O", &dict)) { - return NULL; - } - /* Decrement old reference (if any)*/ - Py_XDECREF(typeDict); - typeDict = dict; - /* Create an internal reference to it */ - Py_INCREF(dict); - Py_INCREF(Py_None); - return Py_None; -} - -static int -_check_for_commastring(char *type, int len) -{ - int i; - - /* Check for ints at start of string */ - if ((type[0] >= '0' - && type[0] <= '9') - || ((len > 1) - && _chk_byteorder(type[0]) - && (type[1] >= '0' - && type[1] <= '9'))) { - return 1; - } - /* Check for empty tuple */ - if (((len > 1) - && (type[0] == '(' - && type[1] == ')')) - || ((len > 3) - && _chk_byteorder(type[0]) - && (type[1] == '(' - && type[2] == ')'))) { - return 1; - } - /* Check for presence of commas */ - for (i = 1; i < len; i++) { - if (type[i] == ',') { - return 1; - } - } - return 0; -} - - -#undef _chk_byteorder - -static PyArray_Descr * -_convert_from_tuple(PyObject *obj) -{ - PyArray_Descr *type, *res; - PyObject *val; - int errflag; - - if (PyTuple_GET_SIZE(obj) != 2) { - return NULL; - } - if (!PyArray_DescrConverter(PyTuple_GET_ITEM(obj,0), &type)) { - return NULL; - } - val = PyTuple_GET_ITEM(obj,1); - /* try to interpret next item as a 
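`_check_for_commastring` above decides whether a dtype string should be routed through the comma-separated record parser: it fires on a leading digit (optionally after a byte-order character), an empty tuple `"()"`, or any comma past the first character. A self-contained rendition of the same logic (the function name is ours):

```c
#include <stddef.h>

static int
is_commastring(const char *type, size_t len)
{
#define CHK_BYTEORDER(c) ((c) == '>' || (c) == '<' || (c) == '|' || (c) == '=')
    size_t i;
    /* ints at start of string, possibly after a byte-order char */
    if ((type[0] >= '0' && type[0] <= '9')
        || (len > 1 && CHK_BYTEORDER(type[0])
            && type[1] >= '0' && type[1] <= '9')) {
        return 1;
    }
    /* empty tuple */
    if ((len > 1 && type[0] == '(' && type[1] == ')')
        || (len > 3 && CHK_BYTEORDER(type[0])
            && type[1] == '(' && type[2] == ')')) {
        return 1;
    }
    /* any comma after the first character */
    for (i = 1; i < len; i++) {
        if (type[i] == ',') {
            return 1;
        }
    }
    return 0;
#undef CHK_BYTEORDER
}
```

So `"i4,f8"` and `"3f8"` are treated as commastrings, while plain typecodes like `"f8"` or `"<i4"` are not.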
type */ - res = _use_inherit(type, val, &errflag); - if (res || errflag) { - Py_DECREF(type); - if (res) { - return res; - } - else { - return NULL; - } - } - PyErr_Clear(); - /* - * We get here if res was NULL but errflag wasn't set - * --- i.e. the conversion to a data-descr failed in _use_inherit - */ - if (type->elsize == 0) { - /* interpret next item as a typesize */ - int itemsize = PyArray_PyIntAsInt(PyTuple_GET_ITEM(obj,1)); - - if (error_converting(itemsize)) { - PyErr_SetString(PyExc_ValueError, - "invalid itemsize in generic type tuple"); - goto fail; - } - PyArray_DESCR_REPLACE(type); - if (type->type_num == PyArray_UNICODE) { - type->elsize = itemsize << 2; - } - else { - type->elsize = itemsize; - } - } - else if (PyDict_Check(val)) { - /* Assume it's a metadata dictionary */ - if (PyDict_Merge(type->metadata, val, 0) == -1) { - Py_DECREF(type); - return NULL; - } - } - else { - /* - * interpret next item as shape (if it's a tuple) - * and reset the type to PyArray_VOID with - * a new fields attribute. - */ - PyArray_Dims shape = {NULL, -1}; - PyArray_Descr *newdescr; - - if (!(PyArray_IntpConverter(val, &shape)) || (shape.len > MAX_DIMS)) { - PyDimMem_FREE(shape.ptr); - PyErr_SetString(PyExc_ValueError, - "invalid shape in fixed-type tuple."); - goto fail; - } - /* - * If (type, 1) was given, it is equivalent to type... - * or (type, ()) was given it is equivalent to type... 
- */ - if ((shape.len == 1 - && shape.ptr[0] == 1 - && PyNumber_Check(val)) - || (shape.len == 0 - && PyTuple_Check(val))) { - PyDimMem_FREE(shape.ptr); - return type; - } - newdescr = PyArray_DescrNewFromType(PyArray_VOID); - if (newdescr == NULL) { - PyDimMem_FREE(shape.ptr); - goto fail; - } - newdescr->elsize = type->elsize; - newdescr->elsize *= PyArray_MultiplyList(shape.ptr, shape.len); - PyDimMem_FREE(shape.ptr); - newdescr->subarray = _pya_malloc(sizeof(PyArray_ArrayDescr)); - newdescr->subarray->base = type; - newdescr->hasobject = type->hasobject; - Py_INCREF(val); - newdescr->subarray->shape = val; - Py_XDECREF(newdescr->fields); - Py_XDECREF(newdescr->names); - newdescr->fields = NULL; - newdescr->names = NULL; - type = newdescr; - } - return type; - - fail: - Py_XDECREF(type); - return NULL; -} - -/* - * obj is a list. Each item is a tuple with - * - * (field-name, data-type (either a list or a string), and an optional - * shape parameter). - * - * field-name can be a string or a 2-tuple - * data-type can now be a list, string, or 2-tuple (string, metadata dictionary)) - */ - -static PyArray_Descr * -_convert_from_array_descr(PyObject *obj, int align) -{ - int n, i, totalsize; - int ret; - PyObject *fields, *item, *newobj; - PyObject *name, *tup, *title; - PyObject *nameslist; - PyArray_Descr *new; - PyArray_Descr *conv; - int dtypeflags = 0; - int maxalign = 0; - - n = PyList_GET_SIZE(obj); - nameslist = PyTuple_New(n); - if (!nameslist) { - return NULL; - } - totalsize = 0; - fields = PyDict_New(); - for (i = 0; i < n; i++) { - item = PyList_GET_ITEM(obj, i); - if (!PyTuple_Check(item) || (PyTuple_GET_SIZE(item) < 2)) { - goto fail; - } - name = PyTuple_GET_ITEM(item, 0); - if (PyUString_Check(name)) { - title = NULL; - } - else if (PyTuple_Check(name)) { - if (PyTuple_GET_SIZE(name) != 2) { - goto fail; - } - title = PyTuple_GET_ITEM(name, 0); - name = PyTuple_GET_ITEM(name, 1); - if (!PyUString_Check(name)) { - goto fail; - } - } - else { - goto 
fail; - } - - /* Insert name into nameslist */ - Py_INCREF(name); - - if (PyUString_GET_SIZE(name) == 0) { - Py_DECREF(name); - if (title == NULL) { - name = PyUString_FromFormat("f%d", i); - } -#if defined(NPY_PY3K) - /* On Py3, allow only non-empty Unicode strings as field names */ - else if (PyUString_Check(title) && PyUString_GET_SIZE(title) > 0) { - name = title; - Py_INCREF(name); - } - else { - goto fail; - } -#else - else { - name = title; - Py_INCREF(name); - } -#endif - } - PyTuple_SET_ITEM(nameslist, i, name); - - /* Process rest */ - - if (PyTuple_GET_SIZE(item) == 2) { - ret = PyArray_DescrConverter(PyTuple_GET_ITEM(item, 1), &conv); - if (ret == PY_FAIL) { - PyObject_Print(PyTuple_GET_ITEM(item, 1), stderr, 0); - } - } - else if (PyTuple_GET_SIZE(item) == 3) { - newobj = PyTuple_GetSlice(item, 1, 3); - ret = PyArray_DescrConverter(newobj, &conv); - Py_DECREF(newobj); - } - else { - goto fail; - } - if (ret == PY_FAIL) { - goto fail; - } - if ((PyDict_GetItem(fields, name) != NULL) -#if defined(NPY_PY3K) - || (title - && PyUString_Check(title) - && (PyDict_GetItem(fields, title) != NULL))) { -#else - || (title - && (PyUString_Check(title) || PyUnicode_Check(title)) - && (PyDict_GetItem(fields, title) != NULL))) { -#endif - PyErr_SetString(PyExc_ValueError, - "two fields with the same name"); - goto fail; - } - dtypeflags |= (conv->hasobject & NPY_FROM_FIELDS); - tup = PyTuple_New((title == NULL ? 2 : 3)); - PyTuple_SET_ITEM(tup, 0, (PyObject *)conv); - if (align) { - int _align; - - _align = conv->alignment; - if (_align > 1) { - totalsize = ((totalsize + _align - 1)/_align)*_align; - } - maxalign = MAX(maxalign, _align); - } - PyTuple_SET_ITEM(tup, 1, PyInt_FromLong((long) totalsize)); - - PyDict_SetItem(fields, name, tup); - - /* - * Title can be "meta-data". Only insert it - * into the fields dictionary if it is a string - * and if it is not the same as the name. 
- */ - if (title != NULL) { - Py_INCREF(title); - PyTuple_SET_ITEM(tup, 2, title); -#if defined(NPY_PY3K) - if (PyUString_Check(title)) { -#else - if (PyUString_Check(title) || PyUnicode_Check(title)) { -#endif - if (PyDict_GetItem(fields, title) != NULL) { - PyErr_SetString(PyExc_ValueError, - "title already used as a name or title."); - Py_DECREF(tup); - goto fail; - } - PyDict_SetItem(fields, title, tup); - } - } - totalsize += conv->elsize; - Py_DECREF(tup); - } - new = PyArray_DescrNewFromType(PyArray_VOID); - new->fields = fields; - new->names = nameslist; - new->elsize = totalsize; - new->hasobject=dtypeflags; - if (maxalign > 1) { - totalsize = ((totalsize + maxalign - 1)/maxalign)*maxalign; - } - if (align) { - new->alignment = maxalign; - } - return new; - - fail: - Py_DECREF(fields); - Py_DECREF(nameslist); - return NULL; - -} - -/* - * a list specifying a data-type can just be - * a list of formats. The names for the fields - * will default to f0, f1, f2, and so forth. - */ -static PyArray_Descr * -_convert_from_list(PyObject *obj, int align) -{ - int n, i; - int totalsize; - PyObject *fields; - PyArray_Descr *conv = NULL; - PyArray_Descr *new; - PyObject *key, *tup; - PyObject *nameslist = NULL; - int ret; - int maxalign = 0; - int dtypeflags = 0; - - n = PyList_GET_SIZE(obj); - /* - * Ignore any empty string at end which _internal._commastring - * can produce - */ - key = PyList_GET_ITEM(obj, n-1); - if (PyBytes_Check(key) && PyBytes_GET_SIZE(key) == 0) { - n = n - 1; - } - /* End ignore code.*/ - totalsize = 0; - if (n == 0) { - return NULL; - } - nameslist = PyTuple_New(n); - if (!nameslist) { - return NULL; - } - fields = PyDict_New(); - for (i = 0; i < n; i++) { - tup = PyTuple_New(2); - key = PyUString_FromFormat("f%d", i); - ret = PyArray_DescrConverter(PyList_GET_ITEM(obj, i), &conv); - if (ret == PY_FAIL) { - Py_DECREF(tup); - Py_DECREF(key); - goto fail; - } - dtypeflags |= (conv->hasobject & NPY_FROM_FIELDS); - PyTuple_SET_ITEM(tup, 0, 
(PyObject *)conv); - if (align) { - int _align; - - _align = conv->alignment; - if (_align > 1) { - totalsize = ((totalsize + _align - 1)/_align)*_align; - } - maxalign = MAX(maxalign, _align); - } - PyTuple_SET_ITEM(tup, 1, PyInt_FromLong((long) totalsize)); - PyDict_SetItem(fields, key, tup); - Py_DECREF(tup); - PyTuple_SET_ITEM(nameslist, i, key); - totalsize += conv->elsize; - } - new = PyArray_DescrNewFromType(PyArray_VOID); - new->fields = fields; - new->names = nameslist; - new->hasobject=dtypeflags; - if (maxalign > 1) { - totalsize = ((totalsize+maxalign-1)/maxalign)*maxalign; - } - if (align) { - new->alignment = maxalign; - } - new->elsize = totalsize; - return new; - - fail: - Py_DECREF(nameslist); - Py_DECREF(fields); - return NULL; -} - - -/* - * comma-separated string - * this is the format developed by the numarray records module and implemented - * by the format parser in that module this is an alternative implementation - * found in the _internal.py file patterned after that one -- the approach is - * to try to convert to a list (with tuples if any repeat information is - * present) and then call the _convert_from_list) - */ -static PyArray_Descr * -_convert_from_commastring(PyObject *obj, int align) -{ - PyObject *listobj; - PyArray_Descr *res; - PyObject *_numpy_internal; - - if (!PyBytes_Check(obj)) { - return NULL; - } - _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal == NULL) { - return NULL; - } - listobj = PyObject_CallMethod(_numpy_internal, "_commastring", "O", obj); - Py_DECREF(_numpy_internal); - if (!listobj) { - return NULL; - } - if (!PyList_Check(listobj) || PyList_GET_SIZE(listobj) < 1) { - PyErr_SetString(PyExc_RuntimeError, - "_commastring is not returning a list with len >= 1"); - return NULL; - } - if (PyList_GET_SIZE(listobj) == 1) { - if (PyArray_DescrConverter( - PyList_GET_ITEM(listobj, 0), &res) == NPY_FAIL) { - res = NULL; - } - } - else { - res = _convert_from_list(listobj, align); 
- } - Py_DECREF(listobj); - if (!res && !PyErr_Occurred()) { - PyErr_SetString(PyExc_ValueError, - "invalid data-type"); - return NULL; - } - return res; -} - -static int -_is_tuple_of_integers(PyObject *obj) -{ - int i; - - if (!PyTuple_Check(obj)) { - return 0; - } - for (i = 0; i < PyTuple_GET_SIZE(obj); i++) { - if (!PyArray_IsIntegerScalar(PyTuple_GET_ITEM(obj, i))) { - return 0; - } - } - return 1; -} - -/* - * A tuple type would be either (generic typeobject, typesize) - * or (fixed-length data-type, shape) - * - * or (inheriting data-type, new-data-type) - * The new data-type must have the same itemsize as the inheriting data-type - * unless the latter is 0 - * - * Thus (int32, {'real':(int16,0),'imag',(int16,2)}) - * - * is one way to specify a descriptor that will give - * a['real'] and a['imag'] to an int32 array. - * - * leave type reference alone - */ -static PyArray_Descr * -_use_inherit(PyArray_Descr *type, PyObject *newobj, int *errflag) -{ - PyArray_Descr *new; - PyArray_Descr *conv; - - *errflag = 0; - if (PyArray_IsScalar(newobj, Integer) - || _is_tuple_of_integers(newobj) - || !PyArray_DescrConverter(newobj, &conv)) { - return NULL; - } - *errflag = 1; - new = PyArray_DescrNew(type); - if (new == NULL) { - goto fail; - } - if (new->elsize && new->elsize != conv->elsize) { - PyErr_SetString(PyExc_ValueError, - "mismatch in size of old and new data-descriptor"); - goto fail; - } - new->elsize = conv->elsize; - if (conv->names) { - new->fields = conv->fields; - Py_XINCREF(new->fields); - new->names = conv->names; - Py_XINCREF(new->names); - } - new->hasobject = conv->hasobject; - Py_DECREF(conv); - *errflag = 0; - return new; - - fail: - Py_DECREF(conv); - return NULL; -} - -/* - * a dictionary specifying a data-type - * must have at least two and up to four - * keys These must all be sequences of the same length. 
- * - * can also have an additional key called "metadata" which can be any dictionary - * - * "names" --- field names - * "formats" --- the data-type descriptors for the field. - * - * Optional: - * - * "offsets" --- integers indicating the offset into the - * record of the start of the field. - * if not given, then "consecutive offsets" - * will be assumed and placed in the dictionary. - * - * "titles" --- Allows the use of an additional key - * for the fields dictionary.(if these are strings - * or unicode objects) or - * this can also be meta-data to - * be passed around with the field description. - * - * Attribute-lookup-based field names merely has to query the fields - * dictionary of the data-descriptor. Any result present can be used - * to return the correct field. - * - * So, the notion of what is a name and what is a title is really quite - * arbitrary. - * - * What does distinguish a title, however, is that if it is not None, - * it will be placed at the end of the tuple inserted into the - * fields dictionary.and can therefore be used to carry meta-data around. - * - * If the dictionary does not have "names" and "formats" entries, - * then it will be checked for conformity and used directly. 
- */ -static PyArray_Descr * -_use_fields_dict(PyObject *obj, int align) -{ - PyObject *_numpy_internal; - PyArray_Descr *res; - - _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal == NULL) { - return NULL; - } - res = (PyArray_Descr *)PyObject_CallMethod(_numpy_internal, - "_usefields", "Oi", obj, align); - Py_DECREF(_numpy_internal); - return res; -} - -static PyArray_Descr * -_convert_from_dict(PyObject *obj, int align) -{ - PyArray_Descr *new; - PyObject *fields = NULL; - PyObject *names, *offsets, *descrs, *titles; - PyObject *metadata; - int n, i; - int totalsize; - int maxalign = 0; - int dtypeflags = 0; - - fields = PyDict_New(); - if (fields == NULL) { - return (PyArray_Descr *)PyErr_NoMemory(); - } - names = PyDict_GetItemString(obj, "names"); - descrs = PyDict_GetItemString(obj, "formats"); - if (!names || !descrs) { - Py_DECREF(fields); - return _use_fields_dict(obj, align); - } - n = PyObject_Length(names); - offsets = PyDict_GetItemString(obj, "offsets"); - titles = PyDict_GetItemString(obj, "titles"); - if ((n > PyObject_Length(descrs)) - || (offsets && (n > PyObject_Length(offsets))) - || (titles && (n > PyObject_Length(titles)))) { - PyErr_SetString(PyExc_ValueError, - "all items in the dictionary must have the same length."); - goto fail; - } - - totalsize = 0; - for (i = 0; i < n; i++) { - PyObject *tup, *descr, *index, *item, *name, *off; - int len, ret, _align = 1; - PyArray_Descr *newdescr; - - /* Build item to insert (descr, offset, [title])*/ - len = 2; - item = NULL; - index = PyInt_FromLong(i); - if (titles) { - item=PyObject_GetItem(titles, index); - if (item && item != Py_None) { - len = 3; - } - else { - Py_XDECREF(item); - } - PyErr_Clear(); - } - tup = PyTuple_New(len); - descr = PyObject_GetItem(descrs, index); - if (!descr) { - goto fail; - } - ret = PyArray_DescrConverter(descr, &newdescr); - Py_DECREF(descr); - if (ret == PY_FAIL) { - Py_DECREF(tup); - Py_DECREF(index); - goto fail; - } - 
PyTuple_SET_ITEM(tup, 0, (PyObject *)newdescr); - if (align) { - _align = newdescr->alignment; - maxalign = MAX(maxalign,_align); - } - if (offsets) { - long offset; - off = PyObject_GetItem(offsets, index); - if (!off) { - goto fail; - } - offset = PyInt_AsLong(off); - PyTuple_SET_ITEM(tup, 1, off); - if (offset < totalsize) { - PyErr_SetString(PyExc_ValueError, - "invalid offset (must be ordered)"); - ret = PY_FAIL; - } - if (offset > totalsize) { - totalsize = offset; - } - } - else { - if (align && _align > 1) { - totalsize = ((totalsize + _align - 1)/_align)*_align; - } - PyTuple_SET_ITEM(tup, 1, PyInt_FromLong(totalsize)); - } - if (len == 3) { - PyTuple_SET_ITEM(tup, 2, item); - } - name = PyObject_GetItem(names, index); - if (!name) { - goto fail; - } - Py_DECREF(index); -#if defined(NPY_PY3K) - if (!PyUString_Check(name)) { -#else - if (!(PyUString_Check(name) || PyUnicode_Check(name))) { -#endif - PyErr_SetString(PyExc_ValueError, - "field names must be strings"); - ret = PY_FAIL; - } - - /* Insert into dictionary */ - if (PyDict_GetItem(fields, name) != NULL) { - PyErr_SetString(PyExc_ValueError, - "name already used as a name or title"); - ret = PY_FAIL; - } - PyDict_SetItem(fields, name, tup); - Py_DECREF(name); - if (len == 3) { -#if defined(NPY_PY3K) - if (PyUString_Check(item)) { -#else - if (PyUString_Check(item) || PyUnicode_Check(item)) { -#endif - if (PyDict_GetItem(fields, item) != NULL) { - PyErr_SetString(PyExc_ValueError, - "title already used as a name or title."); - ret=PY_FAIL; - } - PyDict_SetItem(fields, item, tup); - } - } - Py_DECREF(tup); - if ((ret == PY_FAIL) || (newdescr->elsize == 0)) { - goto fail; - } - dtypeflags |= (newdescr->hasobject & NPY_FROM_FIELDS); - totalsize += newdescr->elsize; - } - - new = PyArray_DescrNewFromType(PyArray_VOID); - if (new == NULL) { - goto fail; - } - if (maxalign > 1) { - totalsize = ((totalsize + maxalign - 1)/maxalign)*maxalign; - } - if (align) { - new->alignment = maxalign; - } - new->elsize 
= totalsize; - if (!PyTuple_Check(names)) { - names = PySequence_Tuple(names); - } - else { - Py_INCREF(names); - } - new->names = names; - new->fields = fields; - new->hasobject = dtypeflags; - - metadata = PyDict_GetItemString(obj, "metadata"); - - if (new->metadata == NULL) { - new->metadata = metadata; - Py_XINCREF(new->metadata); - } - else if (metadata != NULL) { - if (PyDict_Merge(new->metadata, metadata, 0) == -1) { - Py_DECREF(new); - return NULL; - } - } - return new; - - fail: - Py_XDECREF(fields); - return NULL; -} - - -/*NUMPY_API*/ -NPY_NO_EXPORT PyArray_Descr * -PyArray_DescrNewFromType(int type_num) -{ - PyArray_Descr *old; - PyArray_Descr *new; - - old = PyArray_DescrFromType(type_num); - new = PyArray_DescrNew(old); - Py_DECREF(old); - return new; -} - -/*NUMPY_API - * Get typenum from an object -- None goes to NULL - */ -NPY_NO_EXPORT int -PyArray_DescrConverter2(PyObject *obj, PyArray_Descr **at) -{ - if (obj == Py_None) { - *at = NULL; - return PY_SUCCEED; - } - else { - return PyArray_DescrConverter(obj, at); - } -} - -/*NUMPY_API - * Get typenum from an object -- None goes to PyArray_DEFAULT - * This function takes a Python object representing a type and converts it - * to a the correct PyArray_Descr * structure to describe the type. - * - * Many objects can be used to represent a data-type which in NumPy is - * quite a flexible concept. - * - * This is the central code that converts Python objects to - * Type-descriptor objects that are used throughout numpy. 
- * new reference in *at - */ -NPY_NO_EXPORT int -PyArray_DescrConverter(PyObject *obj, PyArray_Descr **at) -{ - char *type; - int check_num = PyArray_NOTYPE + 10; - int len; - PyObject *item; - int elsize = 0; - char endian = '='; - - *at = NULL; - /* default */ - if (obj == Py_None) { - *at = PyArray_DescrFromType(PyArray_DEFAULT); - return PY_SUCCEED; - } - if (PyArray_DescrCheck(obj)) { - *at = (PyArray_Descr *)obj; - Py_INCREF(*at); - return PY_SUCCEED; - } - - if (PyType_Check(obj)) { - if (PyType_IsSubtype((PyTypeObject *)obj, &PyGenericArrType_Type)) { - *at = PyArray_DescrFromTypeObject(obj); - if (*at) { - return PY_SUCCEED; - } - else { - return PY_FAIL; - } - } - check_num = PyArray_OBJECT; -#if !defined(NPY_PY3K) - if (obj == (PyObject *)(&PyInt_Type)) { - check_num = PyArray_LONG; - } - else if (obj == (PyObject *)(&PyLong_Type)) { - check_num = PyArray_LONGLONG; - } -#else - if (obj == (PyObject *)(&PyLong_Type)) { - check_num = PyArray_LONG; - } -#endif - else if (obj == (PyObject *)(&PyFloat_Type)) { - check_num = PyArray_DOUBLE; - } - else if (obj == (PyObject *)(&PyComplex_Type)) { - check_num = PyArray_CDOUBLE; - } - else if (obj == (PyObject *)(&PyBool_Type)) { - check_num = PyArray_BOOL; - } - else if (obj == (PyObject *)(&PyBytes_Type)) { - check_num = PyArray_STRING; - } - else if (obj == (PyObject *)(&PyUnicode_Type)) { - check_num = PyArray_UNICODE; - } -#if defined(NPY_PY3K) - else if (obj == (PyObject *)(&PyMemoryView_Type)) { - check_num = PyArray_VOID; - } -#else - else if (obj == (PyObject *)(&PyBuffer_Type)) { - check_num = PyArray_VOID; - } -#endif - else { - *at = _arraydescr_fromobj(obj); - if (*at) { - return PY_SUCCEED; - } - } - goto finish; - } - - /* or a typecode string */ - - if (PyUnicode_Check(obj)) { - /* Allow unicode format strings: convert to bytes */ - int retval; - PyObject *obj2; - obj2 = PyUnicode_AsASCIIString(obj); - if (obj2 == NULL) { - return PY_FAIL; - } - retval = PyArray_DescrConverter(obj2, at); - 
Py_DECREF(obj2); - return retval; - } - - if (PyBytes_Check(obj)) { - /* Check for a string typecode. */ - type = PyBytes_AS_STRING(obj); - len = PyBytes_GET_SIZE(obj); - if (len <= 0) { - goto fail; - } - - /* check for commas present or first (or second) element a digit */ - if (_check_for_commastring(type, len)) { - *at = _convert_from_commastring(obj, 0); - if (*at) { - return PY_SUCCEED; - } - return PY_FAIL; - } - check_num = (int) type[0]; - if ((char) check_num == '>' - || (char) check_num == '<' - || (char) check_num == '|' - || (char) check_num == '=') { - if (len <= 1) { - goto fail; - } - endian = (char) check_num; - type++; len--; - check_num = (int) type[0]; - if (endian == '|') { - endian = '='; - } - } - if (len > 1) { - elsize = atoi(type + 1); - if (elsize == 0) { - check_num = PyArray_NOTYPE+10; - } - /* - * When specifying length of UNICODE - * the number of characters is given to match - * the STRING interface. Each character can be - * more than one byte and itemsize must be - * the number of bytes. 
- */ - else if (check_num == PyArray_UNICODELTR) { - elsize <<= 2; - } - /* Support for generic processing c4, i4, f8, etc...*/ - else if ((check_num != PyArray_STRINGLTR) - && (check_num != PyArray_VOIDLTR) - && (check_num != PyArray_STRINGLTR2)) { - check_num = PyArray_TypestrConvert(elsize, check_num); - if (check_num == PyArray_NOTYPE) { - check_num += 10; - } - elsize = 0; - } - } - } - else if (PyTuple_Check(obj)) { - /* or a tuple */ - *at = _convert_from_tuple(obj); - if (*at == NULL){ - if (PyErr_Occurred()) { - return PY_FAIL; - } - goto fail; - } - return PY_SUCCEED; - } - else if (PyList_Check(obj)) { - /* or a list */ - *at = _convert_from_array_descr(obj,0); - if (*at == NULL) { - if (PyErr_Occurred()) { - return PY_FAIL; - } - goto fail; - } - return PY_SUCCEED; - } - else if (PyDict_Check(obj)) { - /* or a dictionary */ - *at = _convert_from_dict(obj,0); - if (*at == NULL) { - if (PyErr_Occurred()) { - return PY_FAIL; - } - goto fail; - } - return PY_SUCCEED; - } - else if (PyArray_Check(obj)) { - goto fail; - } - else { - *at = _arraydescr_fromobj(obj); - if (*at) { - return PY_SUCCEED; - } - if (PyErr_Occurred()) { - return PY_FAIL; - } - goto fail; - } - if (PyErr_Occurred()) { - goto fail; - } - /* if (check_num == PyArray_NOTYPE) { - return PY_FAIL; - } - */ - - finish: - if ((check_num == PyArray_NOTYPE + 10) - || (*at = PyArray_DescrFromType(check_num)) == NULL) { - PyErr_Clear(); - /* Now check to see if the object is registered in typeDict */ - if (typeDict != NULL) { - item = PyDict_GetItem(typeDict, obj); -#if defined(NPY_PY3K) - if (!item && PyBytes_Check(obj)) { - PyObject *tmp; - tmp = PyUnicode_FromEncodedObject(obj, "ascii", "strict"); - if (tmp != NULL) { - item = PyDict_GetItem(typeDict, tmp); - Py_DECREF(tmp); - } - } -#endif - if (item) { - return PyArray_DescrConverter(item, at); - } - } - goto fail; - } - - if (((*at)->elsize == 0) && (elsize != 0)) { - PyArray_DESCR_REPLACE(*at); - (*at)->elsize = elsize; - } - if (endian != 
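The typecode-string branch of `PyArray_DescrConverter` above strips an optional byte-order character (with `'|'` normalized to native `'='`), takes the next character as the type code, and reads the trailing digits as the element size; for Unicode the size counts characters, so it is scaled to bytes (`elsize <<= 2`, i.e. UCS-4). A hypothetical standalone sketch of just that parse, without the type-number lookup (struct and names are ours):

```c
#include <stdlib.h>
#include <stddef.h>

struct typestr { char endian; char code; long elsize; };

/* Parse a typecode string such as "<i4", "U8", or "|S5".
 * Returns 0 on success, -1 on a malformed string. */
static int
parse_typestr(const char *s, size_t len, struct typestr *out)
{
    out->endian = '=';
    if (len == 0) {
        return -1;
    }
    if (*s == '>' || *s == '<' || *s == '|' || *s == '=') {
        if (len <= 1) {
            return -1;             /* byte-order char with no typecode */
        }
        out->endian = (*s == '|') ? '=' : *s;
        s++; len--;
    }
    out->code = s[0];
    out->elsize = (len > 1) ? atol(s + 1) : 0;
    if (out->code == 'U') {
        out->elsize *= 4;          /* character count -> bytes, per the C source */
    }
    return 0;
}
```

Hence `"<i4"` parses to a little-endian 4-byte integer, while `"U8"` yields an itemsize of 32 bytes for eight UCS-4 characters, matching the STRING-interface convention the source comment describes.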
'=' && PyArray_ISNBO(endian)) { - endian = '='; - } - if (endian != '=' && (*at)->byteorder != '|' - && (*at)->byteorder != endian) { - PyArray_DESCR_REPLACE(*at); - (*at)->byteorder = endian; - } - return PY_SUCCEED; - - fail: - PyErr_SetString(PyExc_TypeError, "data type not understood"); - *at = NULL; - return PY_FAIL; -} - -/** Array Descr Objects for dynamic types **/ - -/* - * There are some statically-defined PyArray_Descr objects corresponding - * to the basic built-in types. - * These can and should be DECREF'd and INCREF'd as appropriate, anyway. - * If a mistake is made in reference counting, deallocation on these - * builtins will be attempted leading to problems. - * - * This lets us deal with all PyArray_Descr objects using reference - * counting (regardless of whether they are statically or dynamically - * allocated). - */ - -/*NUMPY_API - * base cannot be NULL - */ -NPY_NO_EXPORT PyArray_Descr * -PyArray_DescrNew(PyArray_Descr *base) -{ - PyArray_Descr *new = PyObject_New(PyArray_Descr, &PyArrayDescr_Type); - - if (new == NULL) { - return NULL; - } - /* Don't copy PyObject_HEAD part */ - memcpy((char *)new + sizeof(PyObject), - (char *)base + sizeof(PyObject), - sizeof(PyArray_Descr) - sizeof(PyObject)); - - if (new->fields == Py_None) { - new->fields = NULL; - } - Py_XINCREF(new->fields); - Py_XINCREF(new->names); - if (new->subarray) { - new->subarray = _pya_malloc(sizeof(PyArray_ArrayDescr)); - memcpy(new->subarray, base->subarray, sizeof(PyArray_ArrayDescr)); - Py_INCREF(new->subarray->shape); - Py_INCREF(new->subarray->base); - } - Py_XINCREF(new->typeobj); - Py_XINCREF(new->metadata); - - return new; -} - -/* - * should never be called for builtin-types unless - * there is a reference-count problem - */ -static void -arraydescr_dealloc(PyArray_Descr *self) -{ - if (self->fields == Py_None) { - fprintf(stderr, "*** Reference count error detected: \n" \ - "an attempt was made to deallocate %d (%c) ***\n", - self->type_num, self->type); - 
Py_INCREF(self); - Py_INCREF(self); - return; - } - Py_XDECREF(self->typeobj); - Py_XDECREF(self->names); - Py_XDECREF(self->fields); - if (self->subarray) { - Py_DECREF(self->subarray->shape); - Py_DECREF(self->subarray->base); - _pya_free(self->subarray); - } - Py_XDECREF(self->metadata); - Py_TYPE(self)->tp_free((PyObject *)self); -} - -/* - * we need to be careful about setting attributes because these - * objects are pointed to by arrays that depend on them for interpreting - * data. Currently no attributes of data-type objects can be set - * directly except names. - */ -static PyMemberDef arraydescr_members[] = { - {"type", - T_OBJECT, offsetof(PyArray_Descr, typeobj), READONLY, NULL}, - {"kind", - T_CHAR, offsetof(PyArray_Descr, kind), READONLY, NULL}, - {"char", - T_CHAR, offsetof(PyArray_Descr, type), READONLY, NULL}, - {"num", - T_INT, offsetof(PyArray_Descr, type_num), READONLY, NULL}, - {"byteorder", - T_CHAR, offsetof(PyArray_Descr, byteorder), READONLY, NULL}, - {"itemsize", - T_INT, offsetof(PyArray_Descr, elsize), READONLY, NULL}, - {"alignment", - T_INT, offsetof(PyArray_Descr, alignment), READONLY, NULL}, - {"flags", - T_UBYTE, offsetof(PyArray_Descr, hasobject), READONLY, NULL}, - {NULL, 0, 0, 0, NULL}, -}; - -static PyObject * -arraydescr_subdescr_get(PyArray_Descr *self) -{ - if (self->subarray == NULL) { - Py_INCREF(Py_None); - return Py_None; - } - return Py_BuildValue("OO", - (PyObject *)self->subarray->base, self->subarray->shape); -} - - -NPY_NO_EXPORT PyObject * -arraydescr_protocol_typestr_get(PyArray_Descr *self) -{ - char basic_ = self->kind; - char endian = self->byteorder; - int size = self->elsize; - PyObject *ret; - - if (endian == '=') { - endian = '<'; - if (!PyArray_IsNativeByteOrder(endian)) { - endian = '>'; - } - } - if (self->type_num == PyArray_UNICODE) { - size >>= 2; - } - - ret = PyUString_FromFormat("%c%c%d", endian, basic_, size); - - return ret; -} - -static PyObject * -arraydescr_typename_get(PyArray_Descr *self) -{ 
- int len; - PyTypeObject *typeobj = self->typeobj; - PyObject *res; - char *s; - /* fixme: not reentrant */ - static int prefix_len = 0; - - if (PyTypeNum_ISUSERDEF(self->type_num)) { - s = strrchr(typeobj->tp_name, '.'); - if (s == NULL) { - res = PyUString_FromString(typeobj->tp_name); - } - else { - res = PyUString_FromStringAndSize(s + 1, strlen(s) - 1); - } - return res; - } - else { - if (prefix_len == 0) { - prefix_len = strlen("numpy."); - } - len = strlen(typeobj->tp_name); - if (*(typeobj->tp_name + (len-1)) == '_') { - len -= 1; - } - len -= prefix_len; - res = PyUString_FromStringAndSize(typeobj->tp_name+prefix_len, len); - } - if (PyTypeNum_ISFLEXIBLE(self->type_num) && self->elsize != 0) { - PyObject *p; - p = PyUString_FromFormat("%d", self->elsize * 8); - PyUString_ConcatAndDel(&res, p); - } - - return res; -} - -static PyObject * -arraydescr_base_get(PyArray_Descr *self) -{ - if (self->subarray == NULL) { - Py_INCREF(self); - return (PyObject *)self; - } - Py_INCREF(self->subarray->base); - return (PyObject *)(self->subarray->base); -} - -static PyObject * -arraydescr_shape_get(PyArray_Descr *self) -{ - if (self->subarray == NULL) { - return PyTuple_New(0); - } - if (PyTuple_Check(self->subarray->shape)) { - Py_INCREF(self->subarray->shape); - return (PyObject *)(self->subarray->shape); - } - return Py_BuildValue("(O)", self->subarray->shape); -} - -NPY_NO_EXPORT PyObject * -arraydescr_protocol_descr_get(PyArray_Descr *self) -{ - PyObject *dobj, *res; - PyObject *_numpy_internal; - - if (self->names == NULL) { - /* get default */ - dobj = PyTuple_New(2); - if (dobj == NULL) { - return NULL; - } - PyTuple_SET_ITEM(dobj, 0, PyUString_FromString("")); - PyTuple_SET_ITEM(dobj, 1, arraydescr_protocol_typestr_get(self)); - res = PyList_New(1); - if (res == NULL) { - Py_DECREF(dobj); - return NULL; - } - PyList_SET_ITEM(res, 0, dobj); - return res; - } - - _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal == NULL) { - 
return NULL; - } - res = PyObject_CallMethod(_numpy_internal, "_array_descr", "O", self); - Py_DECREF(_numpy_internal); - return res; -} - -/* - * returns 1 for a builtin type - * and 2 for a user-defined data-type descriptor - * return 0 if neither (i.e. it's a copy of one) - */ -static PyObject * -arraydescr_isbuiltin_get(PyArray_Descr *self) -{ - long val; - val = 0; - if (self->fields == Py_None) { - val = 1; - } - if (PyTypeNum_ISUSERDEF(self->type_num)) { - val = 2; - } - return PyInt_FromLong(val); -} - -static int -_arraydescr_isnative(PyArray_Descr *self) -{ - if (self->names == NULL) { - return PyArray_ISNBO(self->byteorder); - } - else { - PyObject *key, *value, *title = NULL; - PyArray_Descr *new; - int offset; - Py_ssize_t pos = 0; - while (PyDict_Next(self->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) { - continue; - } - if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, &title)) { - return -1; - } - if (!_arraydescr_isnative(new)) { - return 0; - } - } - } - return 1; -} - -/* - * return Py_True if this data-type descriptor - * has native byteorder if no fields are defined - * - * or if all sub-fields have native-byteorder if - * fields are defined - */ -static PyObject * -arraydescr_isnative_get(PyArray_Descr *self) -{ - PyObject *ret; - int retval; - retval = _arraydescr_isnative(self); - if (retval == -1) { - return NULL; - } - ret = retval ? 
Py_True : Py_False; - Py_INCREF(ret); - return ret; -} - -static PyObject * -arraydescr_fields_get(PyArray_Descr *self) -{ - if (self->names == NULL) { - Py_INCREF(Py_None); - return Py_None; - } - return PyDictProxy_New(self->fields); -} - -static PyObject * -arraydescr_metadata_get(PyArray_Descr *self) -{ - if (self->metadata == NULL) { - Py_INCREF(Py_None); - return Py_None; - } - return PyDictProxy_New(self->metadata); -} - -static PyObject * -arraydescr_hasobject_get(PyArray_Descr *self) -{ - PyObject *res; - if (PyDataType_FLAGCHK(self, NPY_ITEM_HASOBJECT)) { - res = Py_True; - } - else { - res = Py_False; - } - Py_INCREF(res); - return res; -} - -static PyObject * -arraydescr_names_get(PyArray_Descr *self) -{ - if (self->names == NULL) { - Py_INCREF(Py_None); - return Py_None; - } - Py_INCREF(self->names); - return self->names; -} - -static int -arraydescr_names_set(PyArray_Descr *self, PyObject *val) -{ - int N = 0; - int i; - PyObject *new_names; - if (self->names == NULL) { - PyErr_SetString(PyExc_ValueError, - "there are no fields defined"); - return -1; - } - - N = PyTuple_GET_SIZE(self->names); - if (!PySequence_Check(val) || PyObject_Size((PyObject *)val) != N) { - PyErr_Format(PyExc_ValueError, - "must replace all names at once with a sequence of length %d", - N); - return -1; - } - /* Make sure all entries are strings */ - for (i = 0; i < N; i++) { - PyObject *item; - int valid = 1; - item = PySequence_GetItem(val, i); - valid = PyUString_Check(item); - Py_DECREF(item); - if (!valid) { - PyErr_Format(PyExc_ValueError, - "item #%d of names is of type %s and not string", - i, Py_TYPE(item)->tp_name); - return -1; - } - } - /* Update dictionary keys in fields */ - new_names = PySequence_Tuple(val); - for (i = 0; i < N; i++) { - PyObject *key; - PyObject *item; - PyObject *new_key; - key = PyTuple_GET_ITEM(self->names, i); - /* Borrowed reference to item */ - item = PyDict_GetItem(self->fields, key); - /* Hold on to it even through DelItem */ - 
Py_INCREF(item); - new_key = PyTuple_GET_ITEM(new_names, i); - PyDict_DelItem(self->fields, key); - PyDict_SetItem(self->fields, new_key, item); - /* self->fields now holds reference */ - Py_DECREF(item); - } - - /* Replace names */ - Py_DECREF(self->names); - self->names = new_names; - - return 0; -} - -static PyGetSetDef arraydescr_getsets[] = { - {"subdtype", - (getter)arraydescr_subdescr_get, - NULL, NULL, NULL}, - {"descr", - (getter)arraydescr_protocol_descr_get, - NULL, NULL, NULL}, - {"str", - (getter)arraydescr_protocol_typestr_get, - NULL, NULL, NULL}, - {"name", - (getter)arraydescr_typename_get, - NULL, NULL, NULL}, - {"base", - (getter)arraydescr_base_get, - NULL, NULL, NULL}, - {"shape", - (getter)arraydescr_shape_get, - NULL, NULL, NULL}, - {"isbuiltin", - (getter)arraydescr_isbuiltin_get, - NULL, NULL, NULL}, - {"isnative", - (getter)arraydescr_isnative_get, - NULL, NULL, NULL}, - {"fields", - (getter)arraydescr_fields_get, - NULL, NULL, NULL}, - {"metadata", - (getter)arraydescr_metadata_get, - NULL, NULL, NULL}, - {"names", - (getter)arraydescr_names_get, - (setter)arraydescr_names_set, - NULL, NULL}, - {"hasobject", - (getter)arraydescr_hasobject_get, - NULL, NULL, NULL}, - {NULL, NULL, NULL, NULL, NULL}, -}; - -static int -_invalid_metadata_check(PyObject *metadata) -{ - PyObject *res; - - /* borrowed reference */ - res = PyDict_GetItemString(metadata, NPY_METADATA_DTSTR); - if (res == NULL) { - return 0; - } - else { - PyErr_SetString(PyExc_ValueError, - "cannot set " NPY_METADATA_DTSTR "in dtype metadata"); - return 1; - } -} - -static PyObject * -arraydescr_new(PyTypeObject *NPY_UNUSED(subtype), PyObject *args, PyObject *kwds) -{ - PyObject *odescr, *ometadata=NULL; - PyArray_Descr *descr, *conv; - Bool align = FALSE; - Bool copy = FALSE; - Bool copied = FALSE; - static char *kwlist[] = {"dtype", "align", "copy", "metadata", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O&O&O!", kwlist, - &odescr, PyArray_BoolConverter, &align, - 
PyArray_BoolConverter, &copy, - &PyDict_Type, &ometadata)) { - return NULL; - } - - if ((ometadata != NULL) && (_invalid_metadata_check(ometadata))) { - return NULL; - } - if (align) { - if (!PyArray_DescrAlignConverter(odescr, &conv)) { - return NULL; - } - } - else if (!PyArray_DescrConverter(odescr, &conv)) { - return NULL; - } - /* Get a new copy of it unless it's already a copy */ - if (copy && conv->fields == Py_None) { - descr = PyArray_DescrNew(conv); - Py_DECREF(conv); - conv = descr; - copied = TRUE; - } - - if ((ometadata != NULL)) { - /* - * We need to be sure to make a new copy of the data-type and any - * underlying dictionary - */ - if (!copied) { - descr = PyArray_DescrNew(conv); - Py_DECREF(conv); - conv = descr; - } - if ((conv->metadata != NULL)) { - /* - * Make a copy of the metadata before merging with ometadata - * so that this data-type descriptor has its own copy - */ - /* Save a reference */ - odescr = conv->metadata; - conv->metadata = PyDict_Copy(odescr); - /* Decrement the old reference */ - Py_DECREF(odescr); - - /* - * Update conv->metadata with anything new in metadata - * keyword, but do not over-write anything already there - */ - if (PyDict_Merge(conv->metadata, ometadata, 0) != 0) { - Py_DECREF(conv); - return NULL; - } - } - else { - /* Make a copy of the input dictionary */ - conv->metadata = PyDict_Copy(ometadata); - } - } - - return (PyObject *)conv; -} - - -/* return a tuple of (callable object, args, state). */ -static PyObject * -arraydescr_reduce(PyArray_Descr *self, PyObject *NPY_UNUSED(args)) -{ - /* - * version number of this pickle type. Increment if we need to - * change the format. Be sure to handle the old versions in - * arraydescr_setstate. 
- */ - const int version = 4; - PyObject *ret, *mod, *obj; - PyObject *state; - char endian; - int elsize, alignment; - - ret = PyTuple_New(3); - if (ret == NULL) { - return NULL; - } - mod = PyImport_ImportModule("numpy.core.multiarray"); - if (mod == NULL) { - Py_DECREF(ret); - return NULL; - } - obj = PyObject_GetAttrString(mod, "dtype"); - Py_DECREF(mod); - if (obj == NULL) { - Py_DECREF(ret); - return NULL; - } - PyTuple_SET_ITEM(ret, 0, obj); - if (PyTypeNum_ISUSERDEF(self->type_num) - || ((self->type_num == PyArray_VOID - && self->typeobj != &PyVoidArrType_Type))) { - obj = (PyObject *)self->typeobj; - Py_INCREF(obj); - } - else { - elsize = self->elsize; - if (self->type_num == PyArray_UNICODE) { - elsize >>= 2; - } - obj = PyUString_FromFormat("%c%d",self->kind, elsize); - } - PyTuple_SET_ITEM(ret, 1, Py_BuildValue("(Nii)", obj, 0, 1)); - - /* - * Now return the state which is at least byteorder, - * subarray, and fields - */ - endian = self->byteorder; - if (endian == '=') { - endian = '<'; - if (!PyArray_IsNativeByteOrder(endian)) { - endian = '>'; - } - } - if (self->metadata) { - state = PyTuple_New(9); - PyTuple_SET_ITEM(state, 0, PyInt_FromLong(version)); - Py_INCREF(self->metadata); - PyTuple_SET_ITEM(state, 8, self->metadata); - } - else { /* Use version 3 pickle format */ - state = PyTuple_New(8); - PyTuple_SET_ITEM(state, 0, PyInt_FromLong(3)); - } - - PyTuple_SET_ITEM(state, 1, PyUString_FromFormat("%c", endian)); - PyTuple_SET_ITEM(state, 2, arraydescr_subdescr_get(self)); - if (self->names) { - Py_INCREF(self->names); - Py_INCREF(self->fields); - PyTuple_SET_ITEM(state, 3, self->names); - PyTuple_SET_ITEM(state, 4, self->fields); - } - else { - PyTuple_SET_ITEM(state, 3, Py_None); - PyTuple_SET_ITEM(state, 4, Py_None); - Py_INCREF(Py_None); - Py_INCREF(Py_None); - } - - /* for extended types it also includes elsize and alignment */ - if (PyTypeNum_ISEXTENDED(self->type_num)) { - elsize = self->elsize; - alignment = self->alignment; - } - else 
{ - elsize = -1; - alignment = -1; - } - PyTuple_SET_ITEM(state, 5, PyInt_FromLong(elsize)); - PyTuple_SET_ITEM(state, 6, PyInt_FromLong(alignment)); - PyTuple_SET_ITEM(state, 7, PyInt_FromLong(self->hasobject)); - - PyTuple_SET_ITEM(ret, 2, state); - return ret; -} - -/* - * returns 1 if this data-type has an object portion - * used when setting the state because hasobject is not stored. - */ -static int -_descr_find_object(PyArray_Descr *self) -{ - if (self->hasobject - || self->type_num == PyArray_OBJECT - || self->kind == 'O') { - return NPY_OBJECT_DTYPE_FLAGS; - } - if (PyDescr_HASFIELDS(self)) { - PyObject *key, *value, *title = NULL; - PyArray_Descr *new; - int offset; - Py_ssize_t pos = 0; - - while (PyDict_Next(self->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) { - continue; - } - if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, &title)) { - PyErr_Clear(); - return 0; - } - if (_descr_find_object(new)) { - new->hasobject = NPY_OBJECT_DTYPE_FLAGS; - return NPY_OBJECT_DTYPE_FLAGS; - } - } - } - return 0; -} - -/* - * state is at least byteorder, subarray, and fields but could include elsize - * and alignment for EXTENDED arrays - */ -static PyObject * -arraydescr_setstate(PyArray_Descr *self, PyObject *args) -{ - int elsize = -1, alignment = -1; - int version = 4; -#if defined(NPY_PY3K) - int endian; -#else - char endian; -#endif - PyObject *subarray, *fields, *names = NULL, *metadata=NULL; - int incref_names = 1; - int dtypeflags = 0; - - if (self->fields == Py_None) { - Py_INCREF(Py_None); - return Py_None; - } - if (PyTuple_GET_SIZE(args) != 1 - || !(PyTuple_Check(PyTuple_GET_ITEM(args, 0)))) { - PyErr_BadInternalCall(); - return NULL; - } - switch (PyTuple_GET_SIZE(PyTuple_GET_ITEM(args,0))) { - case 9: -#if defined(NPY_PY3K) -#define _ARGSTR_ "(iCOOOiiiO)" -#else -#define _ARGSTR_ "(icOOOiiiO)" -#endif - if (!PyArg_ParseTuple(args, _ARGSTR_, &version, &endian, - &subarray, &names, &fields, &elsize, - &alignment, &dtypeflags, 
&metadata)) { - return NULL; -#undef _ARGSTR_ - } - break; - case 8: -#if defined(NPY_PY3K) -#define _ARGSTR_ "(iCOOOiii)" -#else -#define _ARGSTR_ "(icOOOiii)" -#endif - if (!PyArg_ParseTuple(args, _ARGSTR_, &version, &endian, - &subarray, &names, &fields, &elsize, - &alignment, &dtypeflags)) { - return NULL; -#undef _ARGSTR_ - } - break; - case 7: -#if defined(NPY_PY3K) -#define _ARGSTR_ "(iCOOOii)" -#else -#define _ARGSTR_ "(icOOOii)" -#endif - if (!PyArg_ParseTuple(args, _ARGSTR_, &version, &endian, - &subarray, &names, &fields, &elsize, - &alignment)) { - return NULL; -#undef _ARGSTR_ - } - break; - case 6: -#if defined(NPY_PY3K) -#define _ARGSTR_ "(iCOOii)" -#else -#define _ARGSTR_ "(icOOii)" -#endif - if (!PyArg_ParseTuple(args, _ARGSTR_, &version, - &endian, &subarray, &fields, - &elsize, &alignment)) { - PyErr_Clear(); -#undef _ARGSTR_ - } - break; - case 5: - version = 0; -#if defined(NPY_PY3K) -#define _ARGSTR_ "(COOii)" -#else -#define _ARGSTR_ "(cOOii)" -#endif - if (!PyArg_ParseTuple(args, _ARGSTR_, - &endian, &subarray, &fields, &elsize, - &alignment)) { -#undef _ARGSTR_ - return NULL; - } - break; - default: - /* raise an error */ - if (PyTuple_GET_SIZE(PyTuple_GET_ITEM(args,0)) > 5) { - version = PyInt_AsLong(PyTuple_GET_ITEM(args, 0)); - } - else { - version = -1; - } - } - - /* - * If we ever need another pickle format, increment the version - * number. But we should still be able to handle the old versions. 
- */ - if (version < 0 || version > 4) { - PyErr_Format(PyExc_ValueError, - "can't handle version %d of numpy.dtype pickle", - version); - return NULL; - } - - if (version == 1 || version == 0) { - if (fields != Py_None) { - PyObject *key, *list; - key = PyInt_FromLong(-1); - list = PyDict_GetItem(fields, key); - if (!list) { - return NULL; - } - Py_INCREF(list); - names = list; - PyDict_DelItem(fields, key); - incref_names = 0; - } - else { - names = Py_None; - } - } - - - if ((fields == Py_None && names != Py_None) || - (names == Py_None && fields != Py_None)) { - PyErr_Format(PyExc_ValueError, - "inconsistent fields and names"); - return NULL; - } - - if (endian != '|' && PyArray_IsNativeByteOrder(endian)) { - endian = '='; - } - self->byteorder = endian; - if (self->subarray) { - Py_XDECREF(self->subarray->base); - Py_XDECREF(self->subarray->shape); - _pya_free(self->subarray); - } - self->subarray = NULL; - - if (subarray != Py_None) { - self->subarray = _pya_malloc(sizeof(PyArray_ArrayDescr)); - self->subarray->base = (PyArray_Descr *)PyTuple_GET_ITEM(subarray, 0); - Py_INCREF(self->subarray->base); - self->subarray->shape = PyTuple_GET_ITEM(subarray, 1); - Py_INCREF(self->subarray->shape); - } - - if (fields != Py_None) { - Py_XDECREF(self->fields); - self->fields = fields; - Py_INCREF(fields); - Py_XDECREF(self->names); - self->names = names; - if (incref_names) { - Py_INCREF(names); - } - } - - if (PyTypeNum_ISEXTENDED(self->type_num)) { - self->elsize = elsize; - self->alignment = alignment; - } - - self->hasobject = dtypeflags; - if (version < 3) { - self->hasobject = _descr_find_object(self); - } - - Py_XDECREF(self->metadata); - - /* - * We have a borrowed reference to metadata so no need - * to alter reference count - */ - if (metadata == Py_None) { - metadata = NULL; - } - self->metadata = metadata; - Py_XINCREF(metadata); - - Py_INCREF(Py_None); - return Py_None; -} - -/*NUMPY_API - * - * Get type-descriptor from an object forcing alignment if 
possible - * None goes to DEFAULT type. - * - * any object with the .fields attribute and/or .itemsize attribute (if the - *.fields attribute does not give the total size -- i.e. a partial record - * naming). If itemsize is given it must be >= size computed from fields - * - * The .fields attribute must return a convertible dictionary if present. - * Result inherits from PyArray_VOID. -*/ -NPY_NO_EXPORT int -PyArray_DescrAlignConverter(PyObject *obj, PyArray_Descr **at) -{ - if (PyDict_Check(obj)) { - *at = _convert_from_dict(obj, 1); - } - else if (PyBytes_Check(obj)) { - *at = _convert_from_commastring(obj, 1); - } - else if (PyUnicode_Check(obj)) { - PyObject *tmp; - tmp = PyUnicode_AsASCIIString(obj); - *at = _convert_from_commastring(tmp, 1); - Py_DECREF(tmp); - } - else if (PyList_Check(obj)) { - *at = _convert_from_array_descr(obj, 1); - } - else { - return PyArray_DescrConverter(obj, at); - } - if (*at == NULL) { - if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ValueError, - "data-type-descriptor not understood"); - } - return PY_FAIL; - } - return PY_SUCCEED; -} - -/*NUMPY_API - * - * Get type-descriptor from an object forcing alignment if possible - * None goes to NULL. 
- */ -NPY_NO_EXPORT int -PyArray_DescrAlignConverter2(PyObject *obj, PyArray_Descr **at) -{ - if (PyDict_Check(obj)) { - *at = _convert_from_dict(obj, 1); - } - else if (PyBytes_Check(obj)) { - *at = _convert_from_commastring(obj, 1); - } - else if (PyUnicode_Check(obj)) { - PyObject *tmp; - tmp = PyUnicode_AsASCIIString(obj); - *at = _convert_from_commastring(tmp, 1); - Py_DECREF(tmp); - } - else if (PyList_Check(obj)) { - *at = _convert_from_array_descr(obj, 1); - } - else { - return PyArray_DescrConverter2(obj, at); - } - if (*at == NULL) { - if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ValueError, - "data-type-descriptor not understood"); - } - return PY_FAIL; - } - return PY_SUCCEED; -} - - - -/*NUMPY_API - * - * returns a copy of the PyArray_Descr structure with the byteorder - * altered: - * no arguments: The byteorder is swapped (in all subfields as well) - * single argument: The byteorder is forced to the given state - * (in all subfields as well) - * - * Valid states: ('big', '>') or ('little' or '<') - * ('native', or '=') - * - * If a descr structure with | is encountered its own - * byte-order is not changed but any fields are: - * - * - * Deep byteorder change of a data-type descriptor - * *** Leaves reference count of self unchanged --- does not DECREF self *** - */ -NPY_NO_EXPORT PyArray_Descr * -PyArray_DescrNewByteorder(PyArray_Descr *self, char newendian) -{ - PyArray_Descr *new; - char endian; - - new = PyArray_DescrNew(self); - endian = new->byteorder; - if (endian != PyArray_IGNORE) { - if (newendian == PyArray_SWAP) { - /* swap byteorder */ - if PyArray_ISNBO(endian) { - endian = PyArray_OPPBYTE; - } - else { - endian = PyArray_NATBYTE; - } - new->byteorder = endian; - } - else if (newendian != PyArray_IGNORE) { - new->byteorder = newendian; - } - } - if (new->names) { - PyObject *newfields; - PyObject *key, *value; - PyObject *newvalue; - PyObject *old; - PyArray_Descr *newdescr; - Py_ssize_t pos = 0; - int len, i; - - newfields = 
PyDict_New(); - /* make new dictionary with replaced PyArray_Descr Objects */ - while (PyDict_Next(self->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) { - continue; - } - if (!PyUString_Check(key) || !PyTuple_Check(value) || - ((len=PyTuple_GET_SIZE(value)) < 2)) { - continue; - } - old = PyTuple_GET_ITEM(value, 0); - if (!PyArray_DescrCheck(old)) { - continue; - } - newdescr = PyArray_DescrNewByteorder( - (PyArray_Descr *)old, newendian); - if (newdescr == NULL) { - Py_DECREF(newfields); Py_DECREF(new); - return NULL; - } - newvalue = PyTuple_New(len); - PyTuple_SET_ITEM(newvalue, 0, (PyObject *)newdescr); - for (i = 1; i < len; i++) { - old = PyTuple_GET_ITEM(value, i); - Py_INCREF(old); - PyTuple_SET_ITEM(newvalue, i, old); - } - PyDict_SetItem(newfields, key, newvalue); - Py_DECREF(newvalue); - } - Py_DECREF(new->fields); - new->fields = newfields; - } - if (new->subarray) { - Py_DECREF(new->subarray->base); - new->subarray->base = PyArray_DescrNewByteorder( - self->subarray->base, newendian); - } - return new; -} - - -static PyObject * -arraydescr_newbyteorder(PyArray_Descr *self, PyObject *args) -{ - char endian=PyArray_SWAP; - - if (!PyArg_ParseTuple(args, "|O&", PyArray_ByteorderConverter, - &endian)) { - return NULL; - } - return (PyObject *)PyArray_DescrNewByteorder(self, endian); -} - -static PyMethodDef arraydescr_methods[] = { - /* for pickling */ - {"__reduce__", - (PyCFunction)arraydescr_reduce, - METH_VARARGS, NULL}, - {"__setstate__", - (PyCFunction)arraydescr_setstate, - METH_VARARGS, NULL}, - {"newbyteorder", - (PyCFunction)arraydescr_newbyteorder, - METH_VARARGS, NULL}, - {NULL, NULL, 0, NULL} /* sentinel */ -}; - -static PyObject * -arraydescr_str(PyArray_Descr *self) -{ - PyObject *sub; - - if (self->names) { - PyObject *lst; - lst = arraydescr_protocol_descr_get(self); - if (!lst) { - sub = PyUString_FromString(""); - PyErr_Clear(); - } - else { - sub = PyObject_Str(lst); - } - Py_XDECREF(lst); - if (self->type_num != 
PyArray_VOID) { - PyObject *p, *t; - t=PyUString_FromString("'"); - p = arraydescr_protocol_typestr_get(self); - PyUString_Concat(&p, t); - PyUString_ConcatAndDel(&t, p); - p = PyUString_FromString("("); - PyUString_ConcatAndDel(&p, t); - PyUString_ConcatAndDel(&p, PyUString_FromString(", ")); - PyUString_ConcatAndDel(&p, sub); - PyUString_ConcatAndDel(&p, PyUString_FromString(")")); - sub = p; - } - } - else if (self->subarray) { - PyObject *p; - PyObject *t = PyUString_FromString("("); - PyObject *sh; - p = arraydescr_str(self->subarray->base); - if (!self->subarray->base->names && !self->subarray->base->subarray) { - PyObject *t=PyUString_FromString("'"); - PyUString_Concat(&p, t); - PyUString_ConcatAndDel(&t, p); - p = t; - } - PyUString_ConcatAndDel(&t, p); - PyUString_ConcatAndDel(&t, PyUString_FromString(",")); - if (!PyTuple_Check(self->subarray->shape)) { - sh = Py_BuildValue("(O)", self->subarray->shape); - } - else { - sh = self->subarray->shape; - Py_INCREF(sh); - } - PyUString_ConcatAndDel(&t, PyObject_Str(sh)); - Py_DECREF(sh); - PyUString_ConcatAndDel(&t, PyUString_FromString(")")); - sub = t; - } - else if (PyDataType_ISFLEXIBLE(self) || !PyArray_ISNBO(self->byteorder)) { - sub = arraydescr_protocol_typestr_get(self); - } - else { - sub = arraydescr_typename_get(self); - } - return sub; -} - -static PyObject * -arraydescr_repr(PyArray_Descr *self) -{ - PyObject *sub, *s; - s = PyUString_FromString("dtype("); - sub = arraydescr_str(self); - if (sub == NULL) { - return sub; - } - if (!self->names && !self->subarray) { - PyObject *t=PyUString_FromString("'"); - PyUString_Concat(&sub, t); - PyUString_ConcatAndDel(&t, sub); - sub = t; - } - PyUString_ConcatAndDel(&s, sub); - sub = PyUString_FromString(")"); - PyUString_ConcatAndDel(&s, sub); - return s; -} - -static PyObject * -arraydescr_richcompare(PyArray_Descr *self, PyObject *other, int cmp_op) -{ - PyArray_Descr *new = NULL; - PyObject *result = Py_NotImplemented; - if (!PyArray_DescrCheck(other)) 
{ - if (PyArray_DescrConverter(other, &new) == PY_FAIL) { - return NULL; - } - } - else { - new = (PyArray_Descr *)other; - Py_INCREF(new); - } - switch (cmp_op) { - case Py_LT: - if (!PyArray_EquivTypes(self, new) && PyArray_CanCastTo(self, new)) { - result = Py_True; - } - else { - result = Py_False; - } - break; - case Py_LE: - if (PyArray_CanCastTo(self, new)) { - result = Py_True; - } - else { - result = Py_False; - } - break; - case Py_EQ: - if (PyArray_EquivTypes(self, new)) { - result = Py_True; - } - else { - result = Py_False; - } - break; - case Py_NE: - if (PyArray_EquivTypes(self, new)) - result = Py_False; - else - result = Py_True; - break; - case Py_GT: - if (!PyArray_EquivTypes(self, new) && PyArray_CanCastTo(new, self)) { - result = Py_True; - } - else { - result = Py_False; - } - break; - case Py_GE: - if (PyArray_CanCastTo(new, self)) { - result = Py_True; - } - else { - result = Py_False; - } - break; - default: - result = Py_NotImplemented; - } - - Py_XDECREF(new); - Py_INCREF(result); - return result; -} - -/************************************************************************* - **************** Implement Mapping Protocol *************************** - *************************************************************************/ - -static Py_ssize_t -descr_length(PyObject *self0) -{ - PyArray_Descr *self = (PyArray_Descr *)self0; - - if (self->names) { - return PyTuple_GET_SIZE(self->names); - } - else { - return 0; - } -} - -static PyObject * -descr_repeat(PyObject *self, Py_ssize_t length) -{ - PyObject *tup; - PyArray_Descr *new; - if (length < 0) { - return PyErr_Format(PyExc_ValueError, - "Array length must be >= 0, not %"INTP_FMT, length); - } - tup = Py_BuildValue("O" NPY_SSIZE_T_PYFMT, self, length); - if (tup == NULL) { - return NULL; - } - PyArray_DescrConverter(tup, &new); - Py_DECREF(tup); - return (PyObject *)new; -} - -static PyObject * -descr_subscript(PyArray_Descr *self, PyObject *op) -{ - PyObject *retval; - - if 
(!self->names) { - PyObject *astr = arraydescr_str(self); -#if defined(NPY_PY3K) - PyObject *bstr = PyUnicode_AsUnicodeEscapeString(astr); - Py_DECREF(astr); - astr = bstr; -#endif - PyErr_Format(PyExc_KeyError, - "There are no fields in dtype %s.", PyBytes_AsString(astr)); - Py_DECREF(astr); - return NULL; - } -#if defined(NPY_PY3K) - if (PyUString_Check(op)) { -#else - if (PyUString_Check(op) || PyUnicode_Check(op)) { -#endif - PyObject *obj = PyDict_GetItem(self->fields, op); - PyObject *descr; - PyObject *s; - - if (obj == NULL) { - if (PyUnicode_Check(op)) { - s = PyUnicode_AsUnicodeEscapeString(op); - } - else { - s = op; - } - - PyErr_Format(PyExc_KeyError, - "Field named \'%s\' not found.", PyBytes_AsString(s)); - if (s != op) { - Py_DECREF(s); - } - return NULL; - } - descr = PyTuple_GET_ITEM(obj, 0); - Py_INCREF(descr); - retval = descr; - } - else if (PyInt_Check(op)) { - PyObject *name; - int size = PyTuple_GET_SIZE(self->names); - int value = PyArray_PyIntAsInt(op); - - if (PyErr_Occurred()) { - return NULL; - } - if (value < 0) { - value += size; - } - if (value < 0 || value >= size) { - PyErr_Format(PyExc_IndexError, - "Field index out of range."); - return NULL; - } - name = PyTuple_GET_ITEM(self->names, value); - retval = descr_subscript(self, name); - } - else { - PyErr_SetString(PyExc_ValueError, - "Field key must be an integer, string, or unicode."); - return NULL; - } - return retval; -} - -static PySequenceMethods descr_as_sequence = { - descr_length, - (binaryfunc)NULL, - descr_repeat, - NULL, NULL, - NULL, /* sq_ass_item */ - NULL, /* ssizessizeobjargproc sq_ass_slice */ - 0, /* sq_contains */ - 0, /* sq_inplace_concat */ - 0, /* sq_inplace_repeat */ -}; - -static PyMappingMethods descr_as_mapping = { - descr_length, /* mp_length*/ - (binaryfunc)descr_subscript, /* mp_subscript*/ - (objobjargproc)NULL, /* mp_ass_subscript*/ -}; - -/****************** End of Mapping Protocol ******************************/ - -NPY_NO_EXPORT PyTypeObject 
PyArrayDescr_Type = { -#if defined(NPY_PY3K) - PyVarObject_HEAD_INIT(NULL, 0) -#else - PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ -#endif - "numpy.dtype", /* tp_name */ - sizeof(PyArray_Descr), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)arraydescr_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ -#if defined(NPY_PY3K) - (void *)0, /* tp_reserved */ -#else - 0, /* tp_compare */ -#endif - (reprfunc)arraydescr_repr, /* tp_repr */ - 0, /* tp_as_number */ - &descr_as_sequence, /* tp_as_sequence */ - &descr_as_mapping, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - (reprfunc)arraydescr_str, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - (richcmpfunc)arraydescr_richcompare, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - arraydescr_methods, /* tp_methods */ - arraydescr_members, /* tp_members */ - arraydescr_getsets, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - arraydescr_new, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* tp_version_tag */ -#endif -}; diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/descriptor.h b/pythonPackages/numpy/numpy/core/src/multiarray/descriptor.h deleted file mode 100755 index acb80eec69..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/descriptor.h +++ /dev/null @@ -1,17 +0,0 @@ -#ifndef _NPY_ARRAYDESCR_H_ -#define _NPY_ARRAYDESCR_H_ - -NPY_NO_EXPORT PyObject *arraydescr_protocol_typestr_get(PyArray_Descr *); -NPY_NO_EXPORT PyObject 
*arraydescr_protocol_descr_get(PyArray_Descr *self); - -NPY_NO_EXPORT PyObject * -array_set_typeDict(PyObject *NPY_UNUSED(ignored), PyObject *args); - -NPY_NO_EXPORT PyArray_Descr * -_arraydescr_fromobj(PyObject *obj); - -#ifdef NPY_ENABLE_SEPARATE_COMPILATION -extern NPY_NO_EXPORT char *_datetime_strings[]; -#endif - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/flagsobject.c b/pythonPackages/numpy/numpy/core/src/multiarray/flagsobject.c deleted file mode 100755 index ca27ef0839..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/flagsobject.c +++ /dev/null @@ -1,678 +0,0 @@ -/* Array Flags Object */ - -#define PY_SSIZE_T_CLEAN -#include <Python.h> -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "common.h" - -static int -_IsContiguous(PyArrayObject *ap); - -static int -_IsFortranContiguous(PyArrayObject *ap); - -/*NUMPY_API - * - * Get New ArrayFlagsObject - */ -NPY_NO_EXPORT PyObject * -PyArray_NewFlagsObject(PyObject *obj) -{ - PyObject *flagobj; - int flags; - if (obj == NULL) { - flags = CONTIGUOUS | OWNDATA | FORTRAN | ALIGNED; - } - else { - flags = PyArray_FLAGS(obj); - } - flagobj = PyArrayFlags_Type.tp_alloc(&PyArrayFlags_Type, 0); - if (flagobj == NULL) { - return NULL; - } - Py_XINCREF(obj); - ((PyArrayFlagsObject *)flagobj)->arr = obj; - ((PyArrayFlagsObject *)flagobj)->flags = flags; - return flagobj; -} - -/*NUMPY_API - * Update Several Flags at once.
- */ -NPY_NO_EXPORT void -PyArray_UpdateFlags(PyArrayObject *ret, int flagmask) -{ - - if (flagmask & FORTRAN) { - if (_IsFortranContiguous(ret)) { - ret->flags |= FORTRAN; - if (ret->nd > 1) { - ret->flags &= ~CONTIGUOUS; - } - } - else { - ret->flags &= ~FORTRAN; - } - } - if (flagmask & CONTIGUOUS) { - if (_IsContiguous(ret)) { - ret->flags |= CONTIGUOUS; - if (ret->nd > 1) { - ret->flags &= ~FORTRAN; - } - } - else { - ret->flags &= ~CONTIGUOUS; - } - } - if (flagmask & ALIGNED) { - if (_IsAligned(ret)) { - ret->flags |= ALIGNED; - } - else { - ret->flags &= ~ALIGNED; - } - } - /* - * This is not checked by default WRITEABLE is not - * part of UPDATE_ALL - */ - if (flagmask & WRITEABLE) { - if (_IsWriteable(ret)) { - ret->flags |= WRITEABLE; - } - else { - ret->flags &= ~WRITEABLE; - } - } - return; -} - -/* - * Check whether the given array is stored contiguously - * (row-wise) in memory. - * - * 0-strided arrays are not contiguous (even if dimension == 1) - */ -static int -_IsContiguous(PyArrayObject *ap) -{ - intp sd; - intp dim; - int i; - - if (ap->nd == 0) { - return 1; - } - sd = ap->descr->elsize; - if (ap->nd == 1) { - return ap->dimensions[0] == 1 || sd == ap->strides[0]; - } - for (i = ap->nd - 1; i >= 0; --i) { - dim = ap->dimensions[i]; - /* contiguous by definition */ - if (dim == 0) { - return 1; - } - if (ap->strides[i] != sd) { - return 0; - } - sd *= dim; - } - return 1; -} - - -/* 0-strided arrays are not contiguous (even if dimension == 1) */ -static int -_IsFortranContiguous(PyArrayObject *ap) -{ - intp sd; - intp dim; - int i; - - if (ap->nd == 0) { - return 1; - } - sd = ap->descr->elsize; - if (ap->nd == 1) { - return ap->dimensions[0] == 1 || sd == ap->strides[0]; - } - for (i = 0; i < ap->nd; ++i) { - dim = ap->dimensions[i]; - /* fortran contiguous by definition */ - if (dim == 0) { - return 1; - } - if (ap->strides[i] != sd) { - return 0; - } - sd *= dim; - } - return 1; -} - -static void -arrayflags_dealloc(PyArrayFlagsObject *self) 
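The two stride checks above (`_IsContiguous` and `_IsFortranContiguous`) can be expressed compactly in Python. This sketch (function names are mine) walks the axes from the fastest-varying end, accumulating the expected stride, and treats any zero-length dimension as contiguous by definition, just like the C versions:

```python
def is_c_contiguous(shape, strides, elsize):
    """Row-major contiguity: expected strides grow from the last axis inward."""
    if not shape:                        # 0-d arrays are contiguous
        return True
    if len(shape) == 1:
        return shape[0] == 1 or strides[0] == elsize
    sd = elsize
    for i in range(len(shape) - 1, -1, -1):
        dim = shape[i]
        if dim == 0:                     # empty array: contiguous by definition
            return True
        if strides[i] != sd:
            return False
        sd *= dim
    return True


def is_f_contiguous(shape, strides, elsize):
    """Column-major contiguity: the same check with axes walked front to back."""
    if not shape:
        return True
    if len(shape) == 1:
        return shape[0] == 1 or strides[0] == elsize
    sd = elsize
    for i in range(len(shape)):
        dim = shape[i]
        if dim == 0:
            return True
        if strides[i] != sd:
            return False
        sd *= dim
    return True
```

Note that a 1-d array of length one passes either check regardless of its stride, which is why `PyArray_UpdateFlags` only clears the *other* contiguity bit when `nd > 1`.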
-{ - Py_XDECREF(self->arr); - Py_TYPE(self)->tp_free((PyObject *)self); -} - - -#define _define_get(UPPER, lower) \ - static PyObject * \ - arrayflags_ ## lower ## _get(PyArrayFlagsObject *self) \ - { \ - PyObject *item; \ - item = ((self->flags & (UPPER)) == (UPPER)) ? Py_True : Py_False; \ - Py_INCREF(item); \ - return item; \ - } - -_define_get(CONTIGUOUS, contiguous) -_define_get(FORTRAN, fortran) -_define_get(UPDATEIFCOPY, updateifcopy) -_define_get(OWNDATA, owndata) -_define_get(ALIGNED, aligned) -_define_get(WRITEABLE, writeable) - -_define_get(ALIGNED|WRITEABLE, behaved) -_define_get(ALIGNED|WRITEABLE|CONTIGUOUS, carray) - -static PyObject * -arrayflags_forc_get(PyArrayFlagsObject *self) -{ - PyObject *item; - - if (((self->flags & FORTRAN) == FORTRAN) || - ((self->flags & CONTIGUOUS) == CONTIGUOUS)) { - item = Py_True; - } - else { - item = Py_False; - } - Py_INCREF(item); - return item; -} - -static PyObject * -arrayflags_fnc_get(PyArrayFlagsObject *self) -{ - PyObject *item; - - if (((self->flags & FORTRAN) == FORTRAN) && - !((self->flags & CONTIGUOUS) == CONTIGUOUS)) { - item = Py_True; - } - else { - item = Py_False; - } - Py_INCREF(item); - return item; -} - -static PyObject * -arrayflags_farray_get(PyArrayFlagsObject *self) -{ - PyObject *item; - - if (((self->flags & (ALIGNED|WRITEABLE|FORTRAN)) == - (ALIGNED|WRITEABLE|FORTRAN)) && - !((self->flags & CONTIGUOUS) == CONTIGUOUS)) { - item = Py_True; - } - else { - item = Py_False; - } - Py_INCREF(item); - return item; -} - -static PyObject * -arrayflags_num_get(PyArrayFlagsObject *self) -{ - return PyInt_FromLong(self->flags); -} - -/* relies on setflags order being write, align, uic */ -static int -arrayflags_updateifcopy_set(PyArrayFlagsObject *self, PyObject *obj) -{ - PyObject *res; - if (self->arr == NULL) { - PyErr_SetString(PyExc_ValueError, "Cannot set flags on array scalars."); - return -1; - } - res = PyObject_CallMethod(self->arr, "setflags", "OOO", Py_None, Py_None, - (PyObject_IsTrue(obj) 
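The composite getters above (`forc`, `fnc`, `farray`, plus the `_define_get` combinations) are all bit-mask queries over one `int`. A sketch of that logic follows; the numeric values mirror the classic `NPY_*` flag constants of this era but should be treated as illustrative:

```python
# Flag bits (values mirror the old NPY_* constants; treat as illustrative).
CONTIGUOUS   = 0x0001
FORTRAN      = 0x0002
OWNDATA      = 0x0004
ALIGNED      = 0x0100
WRITEABLE    = 0x0400
UPDATEIFCOPY = 0x1000

def has(flags, mask):
    """A composite query is true only if *every* bit in mask is set."""
    return (flags & mask) == mask

def forc(flags):
    """F_CONTIGUOUS or C_CONTIGUOUS (either layout is one memory segment)."""
    return has(flags, FORTRAN) or has(flags, CONTIGUOUS)

def fnc(flags):
    """F_CONTIGUOUS and not C_CONTIGUOUS."""
    return has(flags, FORTRAN) and not has(flags, CONTIGUOUS)

def farray(flags):
    """Behaved (aligned, writeable) Fortran array that is not also C-contiguous."""
    return has(flags, ALIGNED | WRITEABLE | FORTRAN) and not has(flags, CONTIGUOUS)
```

The `(flags & mask) == mask` idiom matters: for multi-bit masks like `ALIGNED | WRITEABLE`, a plain truthiness test would accept arrays with only one of the two bits set.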
? Py_True : Py_False)); - if (res == NULL) { - return -1; - } - Py_DECREF(res); - return 0; -} - -static int -arrayflags_aligned_set(PyArrayFlagsObject *self, PyObject *obj) -{ - PyObject *res; - if (self->arr == NULL) { - PyErr_SetString(PyExc_ValueError, "Cannot set flags on array scalars."); - return -1; - } - res = PyObject_CallMethod(self->arr, "setflags", "OOO", Py_None, - (PyObject_IsTrue(obj) ? Py_True : Py_False), - Py_None); - if (res == NULL) { - return -1; - } - Py_DECREF(res); - return 0; -} - -static int -arrayflags_writeable_set(PyArrayFlagsObject *self, PyObject *obj) -{ - PyObject *res; - if (self->arr == NULL) { - PyErr_SetString(PyExc_ValueError, "Cannot set flags on array scalars."); - return -1; - } - res = PyObject_CallMethod(self->arr, "setflags", "OOO", - (PyObject_IsTrue(obj) ? Py_True : Py_False), - Py_None, Py_None); - if (res == NULL) { - return -1; - } - Py_DECREF(res); - return 0; -} - - -static PyGetSetDef arrayflags_getsets[] = { - {"contiguous", - (getter)arrayflags_contiguous_get, - NULL, - NULL, NULL}, - {"c_contiguous", - (getter)arrayflags_contiguous_get, - NULL, - NULL, NULL}, - {"f_contiguous", - (getter)arrayflags_fortran_get, - NULL, - NULL, NULL}, - {"fortran", - (getter)arrayflags_fortran_get, - NULL, - NULL, NULL}, - {"updateifcopy", - (getter)arrayflags_updateifcopy_get, - (setter)arrayflags_updateifcopy_set, - NULL, NULL}, - {"owndata", - (getter)arrayflags_owndata_get, - NULL, - NULL, NULL}, - {"aligned", - (getter)arrayflags_aligned_get, - (setter)arrayflags_aligned_set, - NULL, NULL}, - {"writeable", - (getter)arrayflags_writeable_get, - (setter)arrayflags_writeable_set, - NULL, NULL}, - {"fnc", - (getter)arrayflags_fnc_get, - NULL, - NULL, NULL}, - {"forc", - (getter)arrayflags_forc_get, - NULL, - NULL, NULL}, - {"behaved", - (getter)arrayflags_behaved_get, - NULL, - NULL, NULL}, - {"carray", - (getter)arrayflags_carray_get, - NULL, - NULL, NULL}, - {"farray", - (getter)arrayflags_farray_get, - NULL, - NULL, NULL}, 
- {"num", - (getter)arrayflags_num_get, - NULL, - NULL, NULL}, - {NULL, NULL, NULL, NULL, NULL}, -}; - -static PyObject * -arrayflags_getitem(PyArrayFlagsObject *self, PyObject *ind) -{ - char *key = NULL; - char buf[16]; - int n; - if (PyUnicode_Check(ind)) { - PyObject *tmp_str; - tmp_str = PyUnicode_AsASCIIString(ind); - if (tmp_str == NULL) { - return NULL; - } - key = PyBytes_AS_STRING(tmp_str); - n = PyBytes_GET_SIZE(tmp_str); - if (n > 16) { - Py_DECREF(tmp_str); - goto fail; - } - memcpy(buf, key, n); - Py_DECREF(tmp_str); - key = buf; - } - else if (PyBytes_Check(ind)) { - key = PyBytes_AS_STRING(ind); - n = PyBytes_GET_SIZE(ind); - } - else { - goto fail; - } - switch(n) { - case 1: - switch(key[0]) { - case 'C': - return arrayflags_contiguous_get(self); - case 'F': - return arrayflags_fortran_get(self); - case 'W': - return arrayflags_writeable_get(self); - case 'B': - return arrayflags_behaved_get(self); - case 'O': - return arrayflags_owndata_get(self); - case 'A': - return arrayflags_aligned_get(self); - case 'U': - return arrayflags_updateifcopy_get(self); - default: - goto fail; - } - break; - case 2: - if (strncmp(key, "CA", n) == 0) { - return arrayflags_carray_get(self); - } - if (strncmp(key, "FA", n) == 0) { - return arrayflags_farray_get(self); - } - break; - case 3: - if (strncmp(key, "FNC", n) == 0) { - return arrayflags_fnc_get(self); - } - break; - case 4: - if (strncmp(key, "FORC", n) == 0) { - return arrayflags_forc_get(self); - } - break; - case 6: - if (strncmp(key, "CARRAY", n) == 0) { - return arrayflags_carray_get(self); - } - if (strncmp(key, "FARRAY", n) == 0) { - return arrayflags_farray_get(self); - } - break; - case 7: - if (strncmp(key,"FORTRAN",n) == 0) { - return arrayflags_fortran_get(self); - } - if (strncmp(key,"BEHAVED",n) == 0) { - return arrayflags_behaved_get(self); - } - if (strncmp(key,"OWNDATA",n) == 0) { - return arrayflags_owndata_get(self); - } - if (strncmp(key,"ALIGNED",n) == 0) { - return 
arrayflags_aligned_get(self); - } - break; - case 9: - if (strncmp(key,"WRITEABLE",n) == 0) { - return arrayflags_writeable_get(self); - } - break; - case 10: - if (strncmp(key,"CONTIGUOUS",n) == 0) { - return arrayflags_contiguous_get(self); - } - break; - case 12: - if (strncmp(key, "UPDATEIFCOPY", n) == 0) { - return arrayflags_updateifcopy_get(self); - } - if (strncmp(key, "C_CONTIGUOUS", n) == 0) { - return arrayflags_contiguous_get(self); - } - if (strncmp(key, "F_CONTIGUOUS", n) == 0) { - return arrayflags_fortran_get(self); - } - break; - } - - fail: - PyErr_SetString(PyExc_KeyError, "Unknown flag"); - return NULL; -} - -static int -arrayflags_setitem(PyArrayFlagsObject *self, PyObject *ind, PyObject *item) -{ - char *key; - char buf[16]; - int n; - if (PyUnicode_Check(ind)) { - PyObject *tmp_str; - tmp_str = PyUnicode_AsASCIIString(ind); - key = PyBytes_AS_STRING(tmp_str); - n = PyBytes_GET_SIZE(tmp_str); - if (n > 16) n = 16; - memcpy(buf, key, n); - Py_DECREF(tmp_str); - key = buf; - } - else if (PyBytes_Check(ind)) { - key = PyBytes_AS_STRING(ind); - n = PyBytes_GET_SIZE(ind); - } - else { - goto fail; - } - if (((n==9) && (strncmp(key, "WRITEABLE", n) == 0)) || - ((n==1) && (strncmp(key, "W", n) == 0))) { - return arrayflags_writeable_set(self, item); - } - else if (((n==7) && (strncmp(key, "ALIGNED", n) == 0)) || - ((n==1) && (strncmp(key, "A", n) == 0))) { - return arrayflags_aligned_set(self, item); - } - else if (((n==12) && (strncmp(key, "UPDATEIFCOPY", n) == 0)) || - ((n==1) && (strncmp(key, "U", n) == 0))) { - return arrayflags_updateifcopy_set(self, item); - } - - fail: - PyErr_SetString(PyExc_KeyError, "Unknown flag"); - return -1; -} - -static char * -_torf_(int flags, int val) -{ - if ((flags & val) == val) { - return "True"; - } - else { - return "False"; - } -} - -static PyObject * -arrayflags_print(PyArrayFlagsObject *self) -{ - int fl = self->flags; - - return PyUString_FromFormat(" %s : %s\n %s : %s\n %s : %s\n"\ - " %s : %s\n %s : %s\n 
%s : %s", - "C_CONTIGUOUS", _torf_(fl, CONTIGUOUS), - "F_CONTIGUOUS", _torf_(fl, FORTRAN), - "OWNDATA", _torf_(fl, OWNDATA), - "WRITEABLE", _torf_(fl, WRITEABLE), - "ALIGNED", _torf_(fl, ALIGNED), - "UPDATEIFCOPY", _torf_(fl, UPDATEIFCOPY)); -} - - -static int -arrayflags_compare(PyArrayFlagsObject *self, PyArrayFlagsObject *other) -{ - if (self->flags == other->flags) { - return 0; - } - else if (self->flags < other->flags) { - return -1; - } - else { - return 1; - } -} - - -static PyObject* -arrayflags_richcompare(PyObject *self, PyObject *other, int cmp_op) -{ - PyObject *result = Py_NotImplemented; - int cmp; - - if (cmp_op != Py_EQ && cmp_op != Py_NE) { - PyErr_SetString(PyExc_TypeError, - "undefined comparison for flag object"); - return NULL; - } - - if (PyObject_TypeCheck(other, &PyArrayFlags_Type)) { - cmp = arrayflags_compare((PyArrayFlagsObject *)self, - (PyArrayFlagsObject *)other); - } - - if (cmp_op == Py_EQ) { - result = (cmp == 0) ? Py_True : Py_False; - } - else if (cmp_op == Py_NE) { - result = (cmp != 0) ? 
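The `arrayflags_getitem` dispatch above switches on the key's length and then `strncmp`s: single letters (`'C'`, `'F'`, `'W'`, ...) and full names (`"C_CONTIGUOUS"`, ...) alias the same getters. A dictionary captures the same mapping in Python (the normalized names stand in for the getter functions):

```python
# Single-letter aliases and full names resolve to the same flag queries,
# mirroring the length-then-strncmp dispatch in arrayflags_getitem.
_ALIASES = {
    "C": "CONTIGUOUS", "F": "FORTRAN", "W": "WRITEABLE",
    "B": "BEHAVED", "O": "OWNDATA", "A": "ALIGNED", "U": "UPDATEIFCOPY",
    "CA": "CARRAY", "FA": "FARRAY",
    "C_CONTIGUOUS": "CONTIGUOUS", "F_CONTIGUOUS": "FORTRAN",
}
_KNOWN = {"CONTIGUOUS", "FORTRAN", "WRITEABLE", "BEHAVED", "OWNDATA",
          "ALIGNED", "UPDATEIFCOPY", "CARRAY", "FARRAY", "FNC", "FORC"}

def flag_key(key):
    """Normalize a flags-mapping key; raise KeyError like the C fail label."""
    name = _ALIASES.get(key, key)
    if name not in _KNOWN:
        raise KeyError("Unknown flag")
    return name
```

Only `WRITEABLE`/`W`, `ALIGNED`/`A`, and `UPDATEIFCOPY`/`U` are accepted by the corresponding `setitem` path; every other key is read-only.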
Py_True : Py_False; - } - - Py_INCREF(result); - return result; -} - -static PyMappingMethods arrayflags_as_mapping = { -#if PY_VERSION_HEX >= 0x02050000 - (lenfunc)NULL, /*mp_length*/ -#else - (inquiry)NULL, /*mp_length*/ -#endif - (binaryfunc)arrayflags_getitem, /*mp_subscript*/ - (objobjargproc)arrayflags_setitem, /*mp_ass_subscript*/ -}; - - -static PyObject * -arrayflags_new(PyTypeObject *NPY_UNUSED(self), PyObject *args, PyObject *NPY_UNUSED(kwds)) -{ - PyObject *arg=NULL; - if (!PyArg_UnpackTuple(args, "flagsobj", 0, 1, &arg)) { - return NULL; - } - if ((arg != NULL) && PyArray_Check(arg)) { - return PyArray_NewFlagsObject(arg); - } - else { - return PyArray_NewFlagsObject(NULL); - } -} - -NPY_NO_EXPORT PyTypeObject PyArrayFlags_Type = { -#if defined(NPY_PY3K) - PyVarObject_HEAD_INIT(NULL, 0) -#else - PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ -#endif - "numpy.flagsobj", - sizeof(PyArrayFlagsObject), - 0, /* tp_itemsize */ - /* methods */ - (destructor)arrayflags_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ -#if defined(NPY_PY3K) - 0, /* tp_reserved */ -#else - (cmpfunc)arrayflags_compare, /* tp_compare */ -#endif - (reprfunc)arrayflags_print, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - &arrayflags_as_mapping, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - (reprfunc)arrayflags_print, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - arrayflags_richcompare, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - arrayflags_getsets, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - arrayflags_new, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, 
/* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* tp_version_tag */ -#endif -}; diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/getset.c b/pythonPackages/numpy/numpy/core/src/multiarray/getset.c deleted file mode 100755 index b35058238c..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/getset.c +++ /dev/null @@ -1,893 +0,0 @@ -/* Array Descr Object */ - -#define PY_SSIZE_T_CLEAN -#include <Python.h> -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "common.h" -#include "scalartypes.h" -#include "descriptor.h" -#include "getset.h" - -/******************* array attribute get and set routines ******************/ - -static PyObject * -array_ndim_get(PyArrayObject *self) -{ - return PyInt_FromLong(self->nd); -} - -static PyObject * -array_flags_get(PyArrayObject *self) -{ - return PyArray_NewFlagsObject((PyObject *)self); -} - -static PyObject * -array_shape_get(PyArrayObject *self) -{ - return PyArray_IntTupleFromIntp(self->nd, self->dimensions); -} - - -static int -array_shape_set(PyArrayObject *self, PyObject *val) -{ - int nd; - PyObject *ret; - - /* Assumes C-order */ - ret = PyArray_Reshape(self, val); - if (ret == NULL) { - return -1; - } - if (PyArray_DATA(ret) != PyArray_DATA(self)) { - Py_DECREF(ret); - PyErr_SetString(PyExc_AttributeError, - "incompatible shape for a non-contiguous "\ - "array"); - return -1; - } - - /* Free old dimensions and strides */ - PyDimMem_FREE(self->dimensions); - nd = PyArray_NDIM(ret); - self->nd = nd; - if (nd > 0) { - /* create new dimensions and strides */ - self->dimensions = PyDimMem_NEW(2*nd); - if (self->dimensions == NULL) { - Py_DECREF(ret); - PyErr_SetString(PyExc_MemoryError,""); - return -1; - } - self->strides = self->dimensions + nd; - memcpy(self->dimensions,
PyArray_DIMS(ret), nd*sizeof(intp)); - memcpy(self->strides, PyArray_STRIDES(ret), nd*sizeof(intp)); - } - else { - self->dimensions = NULL; - self->strides = NULL; - } - Py_DECREF(ret); - PyArray_UpdateFlags(self, CONTIGUOUS | FORTRAN); - return 0; -} - - -static PyObject * -array_strides_get(PyArrayObject *self) -{ - return PyArray_IntTupleFromIntp(self->nd, self->strides); -} - -static int -array_strides_set(PyArrayObject *self, PyObject *obj) -{ - PyArray_Dims newstrides = {NULL, 0}; - PyArrayObject *new; - intp numbytes = 0; - intp offset = 0; - Py_ssize_t buf_len; - char *buf; - - if (!PyArray_IntpConverter(obj, &newstrides) || - newstrides.ptr == NULL) { - PyErr_SetString(PyExc_TypeError, "invalid strides"); - return -1; - } - if (newstrides.len != self->nd) { - PyErr_Format(PyExc_ValueError, "strides must be " \ - " same length as shape (%d)", self->nd); - goto fail; - } - new = self; - while(new->base && PyArray_Check(new->base)) { - new = (PyArrayObject *)(new->base); - } - /* - * Get the available memory through the buffer interface on - * new->base or if that fails from the current new - */ - if (new->base && PyObject_AsReadBuffer(new->base, - (const void **)&buf, - &buf_len) >= 0) { - offset = self->data - buf; - numbytes = buf_len + offset; - } - else { - PyErr_Clear(); - numbytes = PyArray_MultiplyList(new->dimensions, - new->nd)*new->descr->elsize; - offset = self->data - new->data; - } - - if (!PyArray_CheckStrides(self->descr->elsize, self->nd, numbytes, - offset, - self->dimensions, newstrides.ptr)) { - PyErr_SetString(PyExc_ValueError, "strides is not "\ - "compatible with available memory"); - goto fail; - } - memcpy(self->strides, newstrides.ptr, sizeof(intp)*newstrides.len); - PyArray_UpdateFlags(self, CONTIGUOUS | FORTRAN); - PyDimMem_FREE(newstrides.ptr); - return 0; - - fail: - PyDimMem_FREE(newstrides.ptr); - return -1; -} - - - -static PyObject * -array_priority_get(PyArrayObject *self) -{ - if (PyArray_CheckExact(self)) { - return 
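`array_strides_set` above accepts new strides only after verifying, via `PyArray_CheckStrides`, that every element the new strides can reach stays inside the memory actually available (found through the base object's buffer interface). A simplified Python sketch of that bounds check, in the spirit of `PyArray_CheckStrides` rather than a line-for-line port:

```python
def check_strides(elsize, shape, strides, offset, numbytes):
    """Accept new strides only if every element stays inside the buffer.

    `offset` is the array data's byte offset into the base buffer and
    `numbytes` the buffer's total size, as computed by array_strides_set.
    """
    if any(d < 0 for d in shape):
        return False
    begin = offset                       # lowest byte any element can touch
    end = offset + elsize                # one past the highest byte touched
    for dim, stride in zip(shape, strides):
        if dim == 0:
            return True                  # empty arrays touch no memory
        if stride > 0:
            end += (dim - 1) * stride
        else:
            begin += (dim - 1) * stride  # negative strides walk backwards
    return begin >= 0 and end <= numbytes
```

For a C-contiguous 2x3 float64 array (`strides (24, 8)`, 48 bytes), the farthest byte is `0 + 1*24 + 2*8 + 8 = 48`, exactly filling the buffer.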
PyFloat_FromDouble(PyArray_PRIORITY); - } - else { - return PyFloat_FromDouble(PyArray_SUBTYPE_PRIORITY); - } -} - -static PyObject * -array_typestr_get(PyArrayObject *self) -{ - return arraydescr_protocol_typestr_get(self->descr); -} - -static PyObject * -array_descr_get(PyArrayObject *self) -{ - Py_INCREF(self->descr); - return (PyObject *)self->descr; -} - -static PyObject * -array_protocol_descr_get(PyArrayObject *self) -{ - PyObject *res; - PyObject *dobj; - - res = arraydescr_protocol_descr_get(self->descr); - if (res) { - return res; - } - PyErr_Clear(); - - /* get default */ - dobj = PyTuple_New(2); - if (dobj == NULL) { - return NULL; - } - PyTuple_SET_ITEM(dobj, 0, PyString_FromString("")); - PyTuple_SET_ITEM(dobj, 1, array_typestr_get(self)); - res = PyList_New(1); - if (res == NULL) { - Py_DECREF(dobj); - return NULL; - } - PyList_SET_ITEM(res, 0, dobj); - return res; -} - -static PyObject * -array_protocol_strides_get(PyArrayObject *self) -{ - if PyArray_ISCONTIGUOUS(self) { - Py_INCREF(Py_None); - return Py_None; - } - return PyArray_IntTupleFromIntp(self->nd, self->strides); -} - - - -static PyObject * -array_dataptr_get(PyArrayObject *self) -{ - return Py_BuildValue("NO", - PyLong_FromVoidPtr(self->data), - (self->flags & WRITEABLE ? 
Py_False : - Py_True)); -} - -static PyObject * -array_ctypes_get(PyArrayObject *self) -{ - PyObject *_numpy_internal; - PyObject *ret; - _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal == NULL) { - return NULL; - } - ret = PyObject_CallMethod(_numpy_internal, "_ctypes", "ON", self, - PyLong_FromVoidPtr(self->data)); - Py_DECREF(_numpy_internal); - return ret; -} - -static PyObject * -array_interface_get(PyArrayObject *self) -{ - PyObject *dict; - PyObject *obj; - - dict = PyDict_New(); - if (dict == NULL) { - return NULL; - } - - /* dataptr */ - obj = array_dataptr_get(self); - PyDict_SetItemString(dict, "data", obj); - Py_DECREF(obj); - - obj = array_protocol_strides_get(self); - PyDict_SetItemString(dict, "strides", obj); - Py_DECREF(obj); - - obj = array_protocol_descr_get(self); - PyDict_SetItemString(dict, "descr", obj); - Py_DECREF(obj); - - obj = arraydescr_protocol_typestr_get(self->descr); - PyDict_SetItemString(dict, "typestr", obj); - Py_DECREF(obj); - - obj = array_shape_get(self); - PyDict_SetItemString(dict, "shape", obj); - Py_DECREF(obj); - - obj = PyInt_FromLong(3); - PyDict_SetItemString(dict, "version", obj); - Py_DECREF(obj); - - return dict; -} - -static PyObject * -array_data_get(PyArrayObject *self) -{ -#if defined(NPY_PY3K) - return PyMemoryView_FromObject(self); -#else - intp nbytes; - if (!(PyArray_ISONESEGMENT(self))) { - PyErr_SetString(PyExc_AttributeError, "cannot get single-"\ - "segment buffer for discontiguous array"); - return NULL; - } - nbytes = PyArray_NBYTES(self); - if (PyArray_ISWRITEABLE(self)) { - return PyBuffer_FromReadWriteObject((PyObject *)self, 0, (Py_ssize_t) nbytes); - } - else { - return PyBuffer_FromObject((PyObject *)self, 0, (Py_ssize_t) nbytes); - } -#endif -} - -static int -array_data_set(PyArrayObject *self, PyObject *op) -{ - void *buf; - Py_ssize_t buf_len; - int writeable=1; - - if (PyObject_AsWriteBuffer(op, &buf, &buf_len) < 0) { - writeable = 0; - if 
(PyObject_AsReadBuffer(op, (const void **)&buf, &buf_len) < 0) { - PyErr_SetString(PyExc_AttributeError, - "object does not have single-segment " \ - "buffer interface"); - return -1; - } - } - if (!PyArray_ISONESEGMENT(self)) { - PyErr_SetString(PyExc_AttributeError, "cannot set single-" \ - "segment buffer for discontiguous array"); - return -1; - } - if (PyArray_NBYTES(self) > buf_len) { - PyErr_SetString(PyExc_AttributeError, "not enough data for array"); - return -1; - } - if (self->flags & OWNDATA) { - PyArray_XDECREF(self); - PyDataMem_FREE(self->data); - } - if (self->base) { - if (self->flags & UPDATEIFCOPY) { - ((PyArrayObject *)self->base)->flags |= WRITEABLE; - self->flags &= ~UPDATEIFCOPY; - } - Py_DECREF(self->base); - } - Py_INCREF(op); - self->base = op; - self->data = buf; - self->flags = CARRAY; - if (!writeable) { - self->flags &= ~WRITEABLE; - } - return 0; -} - - -static PyObject * -array_itemsize_get(PyArrayObject *self) -{ - return PyInt_FromLong((long) self->descr->elsize); -} - -static PyObject * -array_size_get(PyArrayObject *self) -{ - intp size=PyArray_SIZE(self); -#if SIZEOF_INTP <= SIZEOF_LONG - return PyInt_FromLong((long) size); -#else - if (size > MAX_LONG || size < MIN_LONG) { - return PyLong_FromLongLong(size); - } - else { - return PyInt_FromLong((long) size); - } -#endif -} - -static PyObject * -array_nbytes_get(PyArrayObject *self) -{ - intp nbytes = PyArray_NBYTES(self); -#if SIZEOF_INTP <= SIZEOF_LONG - return PyInt_FromLong((long) nbytes); -#else - if (nbytes > MAX_LONG || nbytes < MIN_LONG) { - return PyLong_FromLongLong(nbytes); - } - else { - return PyInt_FromLong((long) nbytes); - } -#endif -} - - -/* - * If the type is changed. - * Also needing change: strides, itemsize - * - * Either itemsize is exactly the same or the array is single-segment - * (contiguous or fortran) with compatibile dimensions The shape and strides - * will be adjusted in that case as well. 
- */ - -static int -array_descr_set(PyArrayObject *self, PyObject *arg) -{ - PyArray_Descr *newtype = NULL; - intp newdim; - int index; - char *msg = "new type not compatible with array."; - - if (!(PyArray_DescrConverter(arg, &newtype)) || - newtype == NULL) { - PyErr_SetString(PyExc_TypeError, "invalid data-type for array"); - return -1; - } - if (PyDataType_FLAGCHK(newtype, NPY_ITEM_HASOBJECT) || - PyDataType_FLAGCHK(newtype, NPY_ITEM_IS_POINTER) || - PyDataType_FLAGCHK(self->descr, NPY_ITEM_HASOBJECT) || - PyDataType_FLAGCHK(self->descr, NPY_ITEM_IS_POINTER)) { - PyErr_SetString(PyExc_TypeError, \ - "Cannot change data-type for object " \ - "array."); - Py_DECREF(newtype); - return -1; - } - - if (newtype->elsize == 0) { - PyErr_SetString(PyExc_TypeError, - "data-type must not be 0-sized"); - Py_DECREF(newtype); - return -1; - } - - - if ((newtype->elsize != self->descr->elsize) && - (self->nd == 0 || !PyArray_ISONESEGMENT(self) || - newtype->subarray)) { - goto fail; - } - if (PyArray_ISCONTIGUOUS(self)) { - index = self->nd - 1; - } - else { - index = 0; - } - if (newtype->elsize < self->descr->elsize) { - /* - * if it is compatible increase the size of the - * dimension at end (or at the front for FORTRAN) - */ - if (self->descr->elsize % newtype->elsize != 0) { - goto fail; - } - newdim = self->descr->elsize / newtype->elsize; - self->dimensions[index] *= newdim; - self->strides[index] = newtype->elsize; - } - else if (newtype->elsize > self->descr->elsize) { - /* - * Determine if last (or first if FORTRAN) dimension - * is compatible - */ - newdim = self->dimensions[index] * self->descr->elsize; - if ((newdim % newtype->elsize) != 0) { - goto fail; - } - self->dimensions[index] = newdim / newtype->elsize; - self->strides[index] = newtype->elsize; - } - - /* fall through -- adjust type*/ - Py_DECREF(self->descr); - if (newtype->subarray) { - /* - * create new array object from data and update - * dimensions, strides and descr from it - */ - PyArrayObject 
*temp; - /* - * We would decref newtype here. - * temp will steal a reference to it - */ - temp = (PyArrayObject *) - PyArray_NewFromDescr(&PyArray_Type, newtype, self->nd, - self->dimensions, self->strides, - self->data, self->flags, NULL); - if (temp == NULL) { - return -1; - } - PyDimMem_FREE(self->dimensions); - self->dimensions = temp->dimensions; - self->nd = temp->nd; - self->strides = temp->strides; - newtype = temp->descr; - Py_INCREF(temp->descr); - /* Fool deallocator not to delete these*/ - temp->nd = 0; - temp->dimensions = NULL; - Py_DECREF(temp); - } - - self->descr = newtype; - PyArray_UpdateFlags(self, UPDATE_ALL); - return 0; - - fail: - PyErr_SetString(PyExc_ValueError, msg); - Py_DECREF(newtype); - return -1; -} - -static PyObject * -array_struct_get(PyArrayObject *self) -{ - PyArrayInterface *inter; - PyObject *ret; - - inter = (PyArrayInterface *)_pya_malloc(sizeof(PyArrayInterface)); - if (inter==NULL) { - return PyErr_NoMemory(); - } - inter->two = 2; - inter->nd = self->nd; - inter->typekind = self->descr->kind; - inter->itemsize = self->descr->elsize; - inter->flags = self->flags; - /* reset unused flags */ - inter->flags &= ~(UPDATEIFCOPY | OWNDATA); - if (PyArray_ISNOTSWAPPED(self)) inter->flags |= NOTSWAPPED; - /* - * Copy shape and strides over since these can be reset - *when the array is "reshaped". 
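The itemsize arithmetic in `array_descr_set` above rescales one dimension (the last for C order, the first for Fortran) when the new dtype's element size differs: shrinking the itemsize multiplies that dimension, growing it requires the trailing bytes to divide evenly. A sketch of the C-order case (function name is mine):

```python
def rescale_last_dim(shape, old_elsize, new_elsize):
    """Adjust the trailing dimension when viewing data with a new itemsize.

    Mirrors the .dtype setter: total bytes along the last axis are
    preserved, and incompatible sizes are rejected.
    """
    dims = list(shape)
    if new_elsize == old_elsize:
        return tuple(dims)
    if new_elsize < old_elsize:
        # the old itemsize must split into a whole number of new items
        if old_elsize % new_elsize != 0:
            raise ValueError("new type not compatible with array.")
        dims[-1] *= old_elsize // new_elsize
    else:
        # the bytes spanned by the last axis must pack evenly into new items
        nbytes = dims[-1] * old_elsize
        if nbytes % new_elsize != 0:
            raise ValueError("new type not compatible with array.")
        dims[-1] = nbytes // new_elsize
    return tuple(dims)
```

For example, reinterpreting a `(2, 3)` array of 8-byte items as 2-byte items yields shape `(2, 12)`, the same behavior as `ndarray.view` with a smaller dtype.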
- */ - if (self->nd > 0) { - inter->shape = (intp *)_pya_malloc(2*sizeof(intp)*self->nd); - if (inter->shape == NULL) { - _pya_free(inter); - return PyErr_NoMemory(); - } - inter->strides = inter->shape + self->nd; - memcpy(inter->shape, self->dimensions, sizeof(intp)*self->nd); - memcpy(inter->strides, self->strides, sizeof(intp)*self->nd); - } - else { - inter->shape = NULL; - inter->strides = NULL; - } - inter->data = self->data; - if (self->descr->names) { - inter->descr = arraydescr_protocol_descr_get(self->descr); - if (inter->descr == NULL) { - PyErr_Clear(); - } - else { - inter->flags &= ARR_HAS_DESCR; - } - } - else { - inter->descr = NULL; - } - Py_INCREF(self); - ret = NpyCapsule_FromVoidPtrAndDesc(inter, self, gentype_struct_free); - return ret; -} - -static PyObject * -array_base_get(PyArrayObject *self) -{ - if (self->base == NULL) { - Py_INCREF(Py_None); - return Py_None; - } - else { - Py_INCREF(self->base); - return self->base; - } -} - -/* - * Create a view of a complex array with an equivalent data-type - * except it is real instead of complex. - */ -static PyArrayObject * -_get_part(PyArrayObject *self, int imag) -{ - PyArray_Descr *type; - PyArrayObject *ret; - int offset; - - type = PyArray_DescrFromType(self->descr->type_num - - PyArray_NUM_FLOATTYPE); - offset = (imag ? type->elsize : 0); - - if (!PyArray_ISNBO(self->descr->byteorder)) { - PyArray_Descr *new; - new = PyArray_DescrNew(type); - new->byteorder = self->descr->byteorder; - Py_DECREF(type); - type = new; - } - ret = (PyArrayObject *) - PyArray_NewFromDescr(Py_TYPE(self), - type, - self->nd, - self->dimensions, - self->strides, - self->data + offset, - self->flags, (PyObject *)self); - if (ret == NULL) { - return NULL; - } - ret->flags &= ~CONTIGUOUS; - ret->flags &= ~FORTRAN; - Py_INCREF(self); - ret->base = (PyObject *)self; - return ret; -} - -/* For Object arrays, we need to get and set the - real part of each element. 
- */ - -static PyObject * -array_real_get(PyArrayObject *self) -{ - PyArrayObject *ret; - - if (PyArray_ISCOMPLEX(self)) { - ret = _get_part(self, 0); - return (PyObject *)ret; - } - else { - Py_INCREF(self); - return (PyObject *)self; - } -} - - -static int -array_real_set(PyArrayObject *self, PyObject *val) -{ - PyArrayObject *ret; - PyArrayObject *new; - int rint; - - if (PyArray_ISCOMPLEX(self)) { - ret = _get_part(self, 0); - if (ret == NULL) { - return -1; - } - } - else { - Py_INCREF(self); - ret = self; - } - new = (PyArrayObject *)PyArray_FromAny(val, NULL, 0, 0, 0, NULL); - if (new == NULL) { - Py_DECREF(ret); - return -1; - } - rint = PyArray_MoveInto(ret, new); - Py_DECREF(ret); - Py_DECREF(new); - return rint; -} - -/* For Object arrays we need to get - and set the imaginary part of - each element -*/ - -static PyObject * -array_imag_get(PyArrayObject *self) -{ - PyArrayObject *ret; - - if (PyArray_ISCOMPLEX(self)) { - ret = _get_part(self, 1); - } - else { - Py_INCREF(self->descr); - ret = (PyArrayObject *)PyArray_NewFromDescr(Py_TYPE(self), - self->descr, - self->nd, - self->dimensions, - NULL, NULL, - PyArray_ISFORTRAN(self), - (PyObject *)self); - if (ret == NULL) { - return NULL; - } - if (_zerofill(ret) < 0) { - return NULL; - } - ret->flags &= ~WRITEABLE; - } - return (PyObject *) ret; -} - -static int -array_imag_set(PyArrayObject *self, PyObject *val) -{ - if (PyArray_ISCOMPLEX(self)) { - PyArrayObject *ret; - PyArrayObject *new; - int rint; - - ret = _get_part(self, 1); - if (ret == NULL) { - return -1; - } - new = (PyArrayObject *)PyArray_FromAny(val, NULL, 0, 0, 0, NULL); - if (new == NULL) { - Py_DECREF(ret); - return -1; - } - rint = PyArray_MoveInto(ret, new); - Py_DECREF(ret); - Py_DECREF(new); - return rint; - } - else { - PyErr_SetString(PyExc_TypeError, "array does not have "\ - "imaginary part to set"); - return -1; - } -} - -static PyObject * -array_flat_get(PyArrayObject *self) -{ - return PyArray_IterNew((PyObject *)self); -} - 
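`_get_part` above builds the real or imaginary view by pointing an equivalent float dtype at the same buffer with `offset = imag ? type->elsize : 0`, since complex elements are stored as interleaved float pairs. The addressing can be demonstrated on packed `complex128`-style data with the stdlib (the helper name is mine):

```python
import struct

def get_part(buf, imag):
    """Read the real or imaginary halves of packed complex128-style data.

    Each complex element is two little-endian float64s; the imaginary
    "view" is the same buffer read starting one float (elsize) later,
    stepping by the full complex itemsize.
    """
    elsize = 8                            # float64
    values = []
    for start in range(imag * elsize, len(buf), 2 * elsize):
        values.append(struct.unpack_from("<d", buf, start)[0])
    return values

# two complex numbers: (1+2j) and (3+4j), stored as r, i, r, i
data = struct.pack("<4d", 1.0, 2.0, 3.0, 4.0)
```

This is why the C code clears the CONTIGUOUS and FORTRAN bits on the returned view: the float view's stride (16 bytes) is twice its itemsize.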
-static int -array_flat_set(PyArrayObject *self, PyObject *val) -{ - PyObject *arr = NULL; - int retval = -1; - PyArrayIterObject *selfit = NULL, *arrit = NULL; - PyArray_Descr *typecode; - int swap; - PyArray_CopySwapFunc *copyswap; - - typecode = self->descr; - Py_INCREF(typecode); - arr = PyArray_FromAny(val, typecode, - 0, 0, FORCECAST | FORTRAN_IF(self), NULL); - if (arr == NULL) { - return -1; - } - arrit = (PyArrayIterObject *)PyArray_IterNew(arr); - if (arrit == NULL) { - goto exit; - } - selfit = (PyArrayIterObject *)PyArray_IterNew((PyObject *)self); - if (selfit == NULL) { - goto exit; - } - if (arrit->size == 0) { - retval = 0; - goto exit; - } - swap = PyArray_ISNOTSWAPPED(self) != PyArray_ISNOTSWAPPED(arr); - copyswap = self->descr->f->copyswap; - if (PyDataType_REFCHK(self->descr)) { - while (selfit->index < selfit->size) { - PyArray_Item_XDECREF(selfit->dataptr, self->descr); - PyArray_Item_INCREF(arrit->dataptr, PyArray_DESCR(arr)); - memmove(selfit->dataptr, arrit->dataptr, sizeof(PyObject **)); - if (swap) { - copyswap(selfit->dataptr, NULL, swap, self); - } - PyArray_ITER_NEXT(selfit); - PyArray_ITER_NEXT(arrit); - if (arrit->index == arrit->size) { - PyArray_ITER_RESET(arrit); - } - } - retval = 0; - goto exit; - } - - while(selfit->index < selfit->size) { - memmove(selfit->dataptr, arrit->dataptr, self->descr->elsize); - if (swap) { - copyswap(selfit->dataptr, NULL, swap, self); - } - PyArray_ITER_NEXT(selfit); - PyArray_ITER_NEXT(arrit); - if (arrit->index == arrit->size) { - PyArray_ITER_RESET(arrit); - } - } - retval = 0; - - exit: - Py_XDECREF(selfit); - Py_XDECREF(arrit); - Py_XDECREF(arr); - return retval; -} - -static PyObject * -array_transpose_get(PyArrayObject *self) -{ - return PyArray_Transpose(self, NULL); -} - -/* If this is None, no function call is made - --- default sub-class behavior -*/ -static PyObject * -array_finalize_get(PyArrayObject *NPY_UNUSED(self)) -{ - Py_INCREF(Py_None); - return Py_None; -} - -NPY_NO_EXPORT 
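`array_flat_set` above walks the destination's flat iterator while resetting the source iterator whenever it is exhausted, so a short right-hand side tiles across the target. The assignment pattern, sketched over plain lists (an empty source leaves the target untouched, matching the early-out on `arrit->size == 0`):

```python
def flat_assign(dest, src):
    """Assign src into dest element-wise, cycling src when it runs out.

    Mirrors array_flat_set: the source iterator is reset each time it
    reaches its end, tiling a short source across the flat destination.
    """
    if not src:
        return dest                       # nothing to copy, like size == 0
    j = 0
    for i in range(len(dest)):
        dest[i] = src[j]
        j += 1
        if j == len(src):                 # PyArray_ITER_RESET equivalent
            j = 0
    return dest
```

So `a.flat = [1, 2]` on a five-element array yields `1, 2, 1, 2, 1`, regardless of `a`'s shape.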
PyGetSetDef array_getsetlist[] = { - {"ndim", - (getter)array_ndim_get, - NULL, - NULL, NULL}, - {"flags", - (getter)array_flags_get, - NULL, - NULL, NULL}, - {"shape", - (getter)array_shape_get, - (setter)array_shape_set, - NULL, NULL}, - {"strides", - (getter)array_strides_get, - (setter)array_strides_set, - NULL, NULL}, - {"data", - (getter)array_data_get, - (setter)array_data_set, - NULL, NULL}, - {"itemsize", - (getter)array_itemsize_get, - NULL, - NULL, NULL}, - {"size", - (getter)array_size_get, - NULL, - NULL, NULL}, - {"nbytes", - (getter)array_nbytes_get, - NULL, - NULL, NULL}, - {"base", - (getter)array_base_get, - NULL, - NULL, NULL}, - {"dtype", - (getter)array_descr_get, - (setter)array_descr_set, - NULL, NULL}, - {"real", - (getter)array_real_get, - (setter)array_real_set, - NULL, NULL}, - {"imag", - (getter)array_imag_get, - (setter)array_imag_set, - NULL, NULL}, - {"flat", - (getter)array_flat_get, - (setter)array_flat_set, - NULL, NULL}, - {"ctypes", - (getter)array_ctypes_get, - NULL, - NULL, NULL}, - { "T", - (getter)array_transpose_get, - NULL, - NULL, NULL}, - {"__array_interface__", - (getter)array_interface_get, - NULL, - NULL, NULL}, - {"__array_struct__", - (getter)array_struct_get, - NULL, - NULL, NULL}, - {"__array_priority__", - (getter)array_priority_get, - NULL, - NULL, NULL}, - {"__array_finalize__", - (getter)array_finalize_get, - NULL, - NULL, NULL}, - {NULL, NULL, NULL, NULL, NULL}, /* Sentinel */ -}; - -/****************** end of attribute get and set routines *******************/ diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/getset.h b/pythonPackages/numpy/numpy/core/src/multiarray/getset.h deleted file mode 100755 index 98bd217f72..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/getset.h +++ /dev/null @@ -1,8 +0,0 @@ -#ifndef _NPY_ARRAY_GETSET_H_ -#define _NPY_ARRAY_GETSET_H_ - -#ifdef NPY_ENABLE_SEPARATE_COMPILATION -extern NPY_NO_EXPORT PyGetSetDef array_getsetlist[]; -#endif - -#endif diff --git 
a/pythonPackages/numpy/numpy/core/src/multiarray/hashdescr.c b/pythonPackages/numpy/numpy/core/src/multiarray/hashdescr.c deleted file mode 100755 index d5f8b4b47e..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/hashdescr.c +++ /dev/null @@ -1,295 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include <Python.h> -#define _MULTIARRAYMODULE -#include <numpy/arrayobject.h> - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "hashdescr.h" - -/* - * How does this work ? The hash is computed from a list which contains all the - * information specific to a type. The hard work is to build the list - * (_array_descr_walk). The list is built as follows: - * * If the dtype is builtin (no fields, no subarray), then the list - * contains 6 items which uniquely define one dtype (_array_descr_builtin) - * * If the dtype is a compound array, one walk on each field. For each - * field, we append title, names, offset to the final list used for - * hashing, and then append the list recursively built for each - * corresponding dtype (_array_descr_walk_fields) - * * If the dtype is a subarray, one adds the shape tuple to the list, and - * then append the list recursively built for each corresponding dtype - * (_array_descr_walk_subarray) - * - */ - -static int _is_array_descr_builtin(PyArray_Descr* descr); -static int _array_descr_walk(PyArray_Descr* descr, PyObject *l); -static int _array_descr_walk_fields(PyObject* fields, PyObject* l); -static int _array_descr_builtin(PyArray_Descr* descr, PyObject *l); - -/* - * Return true if descr is a builtin type - */ -static int _is_array_descr_builtin(PyArray_Descr* descr) -{ - if (descr->fields != NULL && descr->fields != Py_None) { - return 0; - } - if (descr->subarray != NULL) { - return 0; - } - return 1; -} - -/* - * Add to l all the items which uniquely define a builtin type - */ -static int _array_descr_builtin(PyArray_Descr* descr, PyObject *l) -{ - Py_ssize_t i; - PyObject *t, *item; - - /* - * For builtin type, hash relies on : kind +
byteorder + hasobject + - * type_num + elsize + alignment - */ - t = Py_BuildValue("(ccciii)", descr->kind, descr->byteorder, - descr->hasobject, descr->type_num, descr->elsize, - descr->alignment); - - for(i = 0; i < PyTuple_Size(t); ++i) { - item = PyTuple_GetItem(t, i); - if (item == NULL) { - PyErr_SetString(PyExc_SystemError, - "(Hash) Error while computing builting hash"); - goto clean_t; - } - Py_INCREF(item); - PyList_Append(l, item); - } - - Py_DECREF(t); - return 0; - -clean_t: - Py_DECREF(t); - return -1; -} - -/* - * Walk inside the fields and add every item which will be used for hashing - * into the list l - * - * Return 0 on success - */ -static int _array_descr_walk_fields(PyObject* fields, PyObject* l) -{ - PyObject *key, *value, *foffset, *fdescr; - Py_ssize_t pos = 0; - int st; - - while (PyDict_Next(fields, &pos, &key, &value)) { - /* - * For each field, add the key + descr + offset to l - */ - - /* XXX: are those checks necessary ? */ - if (!PyUString_Check(key)) { - PyErr_SetString(PyExc_SystemError, - "(Hash) key of dtype dict not a string ???"); - return -1; - } - if (!PyTuple_Check(value)) { - PyErr_SetString(PyExc_SystemError, - "(Hash) value of dtype dict not a dtype ???"); - return -1; - } - if (PyTuple_Size(value) < 2) { - PyErr_SetString(PyExc_SystemError, - "(Hash) Less than 2 items in dtype dict ???"); - return -1; - } - Py_INCREF(key); - PyList_Append(l, key); - - fdescr = PyTuple_GetItem(value, 0); - if (!PyArray_DescrCheck(fdescr)) { - PyErr_SetString(PyExc_SystemError, - "(Hash) First item in compound dtype tuple not a descr ???"); - return -1; - } else { - Py_INCREF(fdescr); - st = _array_descr_walk((PyArray_Descr*)fdescr, l); - Py_DECREF(fdescr); - if (st) { - return -1; - } - } - - foffset = PyTuple_GetItem(value, 1); - if (!PyInt_Check(foffset)) { - PyErr_SetString(PyExc_SystemError, - "(Hash) Second item in compound dtype tuple not an int ???"); - return -1; - } else { - Py_INCREF(foffset); - PyList_Append(l, foffset); - } - 
} - - return 0; -} - -/* - * Walk into subarray, and add items for hashing in l - * - * Return 0 on success - */ -static int _array_descr_walk_subarray(PyArray_ArrayDescr* adescr, PyObject *l) -{ - PyObject *item; - Py_ssize_t i; - int st; - - /* - * Add shape and descr itself to the list of object to hash - */ - if (PyTuple_Check(adescr->shape)) { - for(i = 0; i < PyTuple_Size(adescr->shape); ++i) { - item = PyTuple_GetItem(adescr->shape, i); - if (item == NULL) { - PyErr_SetString(PyExc_SystemError, - "(Hash) Error while getting shape item of subarray dtype ???"); - return -1; - } - Py_INCREF(item); - PyList_Append(l, item); - } - } else if (PyInt_Check(adescr->shape)) { - Py_INCREF(adescr->shape); - PyList_Append(l, adescr->shape); - } else { - PyErr_SetString(PyExc_SystemError, - "(Hash) Shape of subarray dtype neither a tuple or int ???"); - return -1; - } - - Py_INCREF(adescr->base); - st = _array_descr_walk(adescr->base, l); - Py_DECREF(adescr->base); - - return st; -} - -/* - * 'Root' function to walk into a dtype. 
May be called recursively - */ -static int _array_descr_walk(PyArray_Descr* descr, PyObject *l) -{ - int st; - - if (_is_array_descr_builtin(descr)) { - return _array_descr_builtin(descr, l); - } else { - if(descr->fields != NULL && descr->fields != Py_None) { - if (!PyDict_Check(descr->fields)) { - PyErr_SetString(PyExc_SystemError, - "(Hash) fields is not a dict ???"); - return -1; - } - st = _array_descr_walk_fields(descr->fields, l); - if (st) { - return -1; - } - } - if(descr->subarray != NULL) { - st = _array_descr_walk_subarray(descr->subarray, l); - if (st) { - return -1; - } - } - } - - return 0; -} - -/* - * Return 0 if successfull - */ -static int _PyArray_DescrHashImp(PyArray_Descr *descr, long *hash) -{ - PyObject *l, *tl, *item; - Py_ssize_t i; - int st; - - l = PyList_New(0); - if (l == NULL) { - return -1; - } - - st = _array_descr_walk(descr, l); - if (st) { - goto clean_l; - } - - /* - * Convert the list to tuple and compute the tuple hash using python - * builtin function - */ - tl = PyTuple_New(PyList_Size(l)); - for(i = 0; i < PyList_Size(l); ++i) { - item = PyList_GetItem(l, i); - if (item == NULL) { - PyErr_SetString(PyExc_SystemError, - "(Hash) Error while translating the list into a tuple " \ - "(NULL item)"); - goto clean_tl; - } - PyTuple_SetItem(tl, i, item); - } - - *hash = PyObject_Hash(tl); - if (*hash == -1) { - /* XXX: does PyObject_Hash set an exception on failure ? 
*/ -#if 0 - PyErr_SetString(PyExc_SystemError, - "(Hash) Error while hashing final tuple"); -#endif - goto clean_tl; - } - Py_DECREF(tl); - Py_DECREF(l); - - return 0; - -clean_tl: - Py_DECREF(tl); -clean_l: - Py_DECREF(l); - return -1; -} - -NPY_NO_EXPORT long -PyArray_DescrHash(PyObject* odescr) -{ - PyArray_Descr *descr; - int st; - long hash; - - if (!PyArray_DescrCheck(odescr)) { - PyErr_SetString(PyExc_ValueError, - "PyArray_DescrHash argument must be a type descriptor"); - return -1; - } - descr = (PyArray_Descr*)odescr; - - st = _PyArray_DescrHashImp(descr, &hash); - if (st) { - return -1; - } - - return hash; -} diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/hashdescr.h b/pythonPackages/numpy/numpy/core/src/multiarray/hashdescr.h deleted file mode 100755 index af0ec13b99..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/hashdescr.h +++ /dev/null @@ -1,7 +0,0 @@ -#ifndef _NPY_HASHDESCR_H_ -#define _NPY_HASHDESCR_H_ - -NPY_NO_EXPORT long -PyArray_DescrHash(PyObject* odescr); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/item_selection.c b/pythonPackages/numpy/numpy/core/src/multiarray/item_selection.c deleted file mode 100755 index 398acfe716..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/item_selection.c +++ /dev/null @@ -1,1752 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include <Python.h> -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "numpy/npy_math.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "common.h" -#include "ctors.h" - -#define PyAO PyArrayObject -#define _check_axis PyArray_CheckAxis - -/*NUMPY_API - * Take - */ -NPY_NO_EXPORT PyObject * -PyArray_TakeFrom(PyArrayObject *self0, PyObject *indices0, int axis, - PyArrayObject *ret, NPY_CLIPMODE clipmode) -{ - PyArray_FastTakeFunc *func; - PyArrayObject *self, *indices; - intp nd, i, j, n, m, max_item, tmp,
chunk, nelem; - intp shape[MAX_DIMS]; - char *src, *dest; - int copyret = 0; - int err; - - indices = NULL; - self = (PyAO *)_check_axis(self0, &axis, CARRAY); - if (self == NULL) { - return NULL; - } - indices = (PyArrayObject *)PyArray_ContiguousFromAny(indices0, - PyArray_INTP, - 1, 0); - if (indices == NULL) { - goto fail; - } - n = m = chunk = 1; - nd = self->nd + indices->nd - 1; - for (i = 0; i < nd; i++) { - if (i < axis) { - shape[i] = self->dimensions[i]; - n *= shape[i]; - } - else { - if (i < axis+indices->nd) { - shape[i] = indices->dimensions[i-axis]; - m *= shape[i]; - } - else { - shape[i] = self->dimensions[i-indices->nd+1]; - chunk *= shape[i]; - } - } - } - Py_INCREF(self->descr); - if (!ret) { - ret = (PyArrayObject *)PyArray_NewFromDescr(Py_TYPE(self), - self->descr, - nd, shape, - NULL, NULL, 0, - (PyObject *)self); - - if (ret == NULL) { - goto fail; - } - } - else { - PyArrayObject *obj; - int flags = NPY_CARRAY | NPY_UPDATEIFCOPY; - - if ((ret->nd != nd) || - !PyArray_CompareLists(ret->dimensions, shape, nd)) { - PyErr_SetString(PyExc_ValueError, - "bad shape in output array"); - ret = NULL; - Py_DECREF(self->descr); - goto fail; - } - - if (clipmode == NPY_RAISE) { - /* - * we need to make sure and get a copy - * so the input array is not changed - * before the error is called - */ - flags |= NPY_ENSURECOPY; - } - obj = (PyArrayObject *)PyArray_FromArray(ret, self->descr, - flags); - if (obj != ret) { - copyret = 1; - } - ret = obj; - if (ret == NULL) { - goto fail; - } - } - - max_item = self->dimensions[axis]; - nelem = chunk; - chunk = chunk * ret->descr->elsize; - src = self->data; - dest = ret->data; - - func = self->descr->f->fasttake; - if (func == NULL) { - switch(clipmode) { - case NPY_RAISE: - for (i = 0; i < n; i++) { - for (j = 0; j < m; j++) { - tmp = ((intp *)(indices->data))[j]; - if (tmp < 0) { - tmp = tmp + max_item; - } - if ((tmp < 0) || (tmp >= max_item)) { - PyErr_SetString(PyExc_IndexError, - "index out of range "\ - 
"for array"); - goto fail; - } - memmove(dest, src + tmp*chunk, chunk); - dest += chunk; - } - src += chunk*max_item; - } - break; - case NPY_WRAP: - for (i = 0; i < n; i++) { - for (j = 0; j < m; j++) { - tmp = ((intp *)(indices->data))[j]; - if (tmp < 0) { - while (tmp < 0) { - tmp += max_item; - } - } - else if (tmp >= max_item) { - while (tmp >= max_item) { - tmp -= max_item; - } - } - memmove(dest, src + tmp*chunk, chunk); - dest += chunk; - } - src += chunk*max_item; - } - break; - case NPY_CLIP: - for (i = 0; i < n; i++) { - for (j = 0; j < m; j++) { - tmp = ((intp *)(indices->data))[j]; - if (tmp < 0) { - tmp = 0; - } - else if (tmp >= max_item) { - tmp = max_item - 1; - } - memmove(dest, src+tmp*chunk, chunk); - dest += chunk; - } - src += chunk*max_item; - } - break; - } - } - else { - err = func(dest, src, (intp *)(indices->data), - max_item, n, m, nelem, clipmode); - if (err) { - goto fail; - } - } - - PyArray_INCREF(ret); - Py_XDECREF(indices); - Py_XDECREF(self); - if (copyret) { - PyObject *obj; - obj = ret->base; - Py_INCREF(obj); - Py_DECREF(ret); - ret = (PyArrayObject *)obj; - } - return (PyObject *)ret; - - fail: - PyArray_XDECREF_ERR(ret); - Py_XDECREF(indices); - Py_XDECREF(self); - return NULL; -} - -/*NUMPY_API - * Put values into an array - */ -NPY_NO_EXPORT PyObject * -PyArray_PutTo(PyArrayObject *self, PyObject* values0, PyObject *indices0, - NPY_CLIPMODE clipmode) -{ - PyArrayObject *indices, *values; - intp i, chunk, ni, max_item, nv, tmp; - char *src, *dest; - int copied = 0; - - indices = NULL; - values = NULL; - if (!PyArray_Check(self)) { - PyErr_SetString(PyExc_TypeError, - "put: first argument must be an array"); - return NULL; - } - if (!PyArray_ISCONTIGUOUS(self)) { - PyArrayObject *obj; - int flags = NPY_CARRAY | NPY_UPDATEIFCOPY; - - if (clipmode == NPY_RAISE) { - flags |= NPY_ENSURECOPY; - } - Py_INCREF(self->descr); - obj = (PyArrayObject *)PyArray_FromArray(self, - self->descr, flags); - if (obj != self) { - copied = 1; - } 
- self = obj; - } - max_item = PyArray_SIZE(self); - dest = self->data; - chunk = self->descr->elsize; - indices = (PyArrayObject *)PyArray_ContiguousFromAny(indices0, - PyArray_INTP, 0, 0); - if (indices == NULL) { - goto fail; - } - ni = PyArray_SIZE(indices); - Py_INCREF(self->descr); - values = (PyArrayObject *)PyArray_FromAny(values0, self->descr, 0, 0, - DEFAULT | FORCECAST, NULL); - if (values == NULL) { - goto fail; - } - nv = PyArray_SIZE(values); - if (nv <= 0) { - goto finish; - } - if (PyDataType_REFCHK(self->descr)) { - switch(clipmode) { - case NPY_RAISE: - for (i = 0; i < ni; i++) { - src = values->data + chunk*(i % nv); - tmp = ((intp *)(indices->data))[i]; - if (tmp < 0) { - tmp = tmp + max_item; - } - if ((tmp < 0) || (tmp >= max_item)) { - PyErr_SetString(PyExc_IndexError, - "index out of " \ - "range for array"); - goto fail; - } - PyArray_Item_INCREF(src, self->descr); - PyArray_Item_XDECREF(dest+tmp*chunk, self->descr); - memmove(dest + tmp*chunk, src, chunk); - } - break; - case NPY_WRAP: - for (i = 0; i < ni; i++) { - src = values->data + chunk * (i % nv); - tmp = ((intp *)(indices->data))[i]; - if (tmp < 0) { - while (tmp < 0) { - tmp += max_item; - } - } - else if (tmp >= max_item) { - while (tmp >= max_item) { - tmp -= max_item; - } - } - PyArray_Item_INCREF(src, self->descr); - PyArray_Item_XDECREF(dest+tmp*chunk, self->descr); - memmove(dest + tmp * chunk, src, chunk); - } - break; - case NPY_CLIP: - for (i = 0; i < ni; i++) { - src = values->data + chunk * (i % nv); - tmp = ((intp *)(indices->data))[i]; - if (tmp < 0) { - tmp = 0; - } - else if (tmp >= max_item) { - tmp = max_item - 1; - } - PyArray_Item_INCREF(src, self->descr); - PyArray_Item_XDECREF(dest+tmp*chunk, self->descr); - memmove(dest + tmp * chunk, src, chunk); - } - break; - } - } - else { - switch(clipmode) { - case NPY_RAISE: - for (i = 0; i < ni; i++) { - src = values->data + chunk * (i % nv); - tmp = ((intp *)(indices->data))[i]; - if (tmp < 0) { - tmp = tmp + 
max_item; - } - if ((tmp < 0) || (tmp >= max_item)) { - PyErr_SetString(PyExc_IndexError, - "index out of " \ - "range for array"); - goto fail; - } - memmove(dest + tmp * chunk, src, chunk); - } - break; - case NPY_WRAP: - for (i = 0; i < ni; i++) { - src = values->data + chunk * (i % nv); - tmp = ((intp *)(indices->data))[i]; - if (tmp < 0) { - while (tmp < 0) { - tmp += max_item; - } - } - else if (tmp >= max_item) { - while (tmp >= max_item) { - tmp -= max_item; - } - } - memmove(dest + tmp * chunk, src, chunk); - } - break; - case NPY_CLIP: - for (i = 0; i < ni; i++) { - src = values->data + chunk * (i % nv); - tmp = ((intp *)(indices->data))[i]; - if (tmp < 0) { - tmp = 0; - } - else if (tmp >= max_item) { - tmp = max_item - 1; - } - memmove(dest + tmp * chunk, src, chunk); - } - break; - } - } - - finish: - Py_XDECREF(values); - Py_XDECREF(indices); - if (copied) { - Py_DECREF(self); - } - Py_INCREF(Py_None); - return Py_None; - - fail: - Py_XDECREF(indices); - Py_XDECREF(values); - if (copied) { - PyArray_XDECREF_ERR(self); - } - return NULL; -} - -/*NUMPY_API - * Put values into an array according to a mask. 
- */ -NPY_NO_EXPORT PyObject * -PyArray_PutMask(PyArrayObject *self, PyObject* values0, PyObject* mask0) -{ - PyArray_FastPutmaskFunc *func; - PyArrayObject *mask, *values; - intp i, chunk, ni, max_item, nv, tmp; - char *src, *dest; - int copied = 0; - - mask = NULL; - values = NULL; - if (!PyArray_Check(self)) { - PyErr_SetString(PyExc_TypeError, - "putmask: first argument must "\ - "be an array"); - return NULL; - } - if (!PyArray_ISCONTIGUOUS(self)) { - PyArrayObject *obj; - int flags = NPY_CARRAY | NPY_UPDATEIFCOPY; - - Py_INCREF(self->descr); - obj = (PyArrayObject *)PyArray_FromArray(self, - self->descr, flags); - if (obj != self) { - copied = 1; - } - self = obj; - } - - max_item = PyArray_SIZE(self); - dest = self->data; - chunk = self->descr->elsize; - mask = (PyArrayObject *)\ - PyArray_FROM_OTF(mask0, PyArray_BOOL, CARRAY | FORCECAST); - if (mask == NULL) { - goto fail; - } - ni = PyArray_SIZE(mask); - if (ni != max_item) { - PyErr_SetString(PyExc_ValueError, - "putmask: mask and data must be "\ - "the same size"); - goto fail; - } - Py_INCREF(self->descr); - values = (PyArrayObject *)\ - PyArray_FromAny(values0, self->descr, 0, 0, NPY_CARRAY, NULL); - if (values == NULL) { - goto fail; - } - nv = PyArray_SIZE(values); /* zero if null array */ - if (nv <= 0) { - Py_XDECREF(values); - Py_XDECREF(mask); - Py_INCREF(Py_None); - return Py_None; - } - if (PyDataType_REFCHK(self->descr)) { - for (i = 0; i < ni; i++) { - tmp = ((Bool *)(mask->data))[i]; - if (tmp) { - src = values->data + chunk * (i % nv); - PyArray_Item_INCREF(src, self->descr); - PyArray_Item_XDECREF(dest+i*chunk, self->descr); - memmove(dest + i * chunk, src, chunk); - } - } - } - else { - func = self->descr->f->fastputmask; - if (func == NULL) { - for (i = 0; i < ni; i++) { - tmp = ((Bool *)(mask->data))[i]; - if (tmp) { - src = values->data + chunk*(i % nv); - memmove(dest + i*chunk, src, chunk); - } - } - } - else { - func(dest, mask->data, ni, values->data, nv); - } - } - - 
Py_XDECREF(values); - Py_XDECREF(mask); - if (copied) { - Py_DECREF(self); - } - Py_INCREF(Py_None); - return Py_None; - - fail: - Py_XDECREF(mask); - Py_XDECREF(values); - if (copied) { - PyArray_XDECREF_ERR(self); - } - return NULL; -} - -/*NUMPY_API - * Repeat the array. - */ -NPY_NO_EXPORT PyObject * -PyArray_Repeat(PyArrayObject *aop, PyObject *op, int axis) -{ - intp *counts; - intp n, n_outer, i, j, k, chunk, total; - intp tmp; - int nd; - PyArrayObject *repeats = NULL; - PyObject *ap = NULL; - PyArrayObject *ret = NULL; - char *new_data, *old_data; - - repeats = (PyAO *)PyArray_ContiguousFromAny(op, PyArray_INTP, 0, 1); - if (repeats == NULL) { - return NULL; - } - nd = repeats->nd; - counts = (intp *)repeats->data; - - if ((ap=_check_axis(aop, &axis, CARRAY))==NULL) { - Py_DECREF(repeats); - return NULL; - } - - aop = (PyAO *)ap; - if (nd == 1) { - n = repeats->dimensions[0]; - } - else { - /* nd == 0 */ - n = aop->dimensions[axis]; - } - if (aop->dimensions[axis] != n) { - PyErr_SetString(PyExc_ValueError, - "a.shape[axis] != len(repeats)"); - goto fail; - } - - if (nd == 0) { - total = counts[0]*n; - } - else { - - total = 0; - for (j = 0; j < n; j++) { - if (counts[j] < 0) { - PyErr_SetString(PyExc_ValueError, "count < 0"); - goto fail; - } - total += counts[j]; - } - } - - - /* Construct new array */ - aop->dimensions[axis] = total; - Py_INCREF(aop->descr); - ret = (PyArrayObject *)PyArray_NewFromDescr(Py_TYPE(aop), - aop->descr, - aop->nd, - aop->dimensions, - NULL, NULL, 0, - (PyObject *)aop); - aop->dimensions[axis] = n; - if (ret == NULL) { - goto fail; - } - new_data = ret->data; - old_data = aop->data; - - chunk = aop->descr->elsize; - for(i = axis + 1; i < aop->nd; i++) { - chunk *= aop->dimensions[i]; - } - - n_outer = 1; - for (i = 0; i < axis; i++) { - n_outer *= aop->dimensions[i]; - } - for (i = 0; i < n_outer; i++) { - for (j = 0; j < n; j++) { - tmp = nd ? 
counts[j] : counts[0]; - for (k = 0; k < tmp; k++) { - memcpy(new_data, old_data, chunk); - new_data += chunk; - } - old_data += chunk; - } - } - - Py_DECREF(repeats); - PyArray_INCREF(ret); - Py_XDECREF(aop); - return (PyObject *)ret; - - fail: - Py_DECREF(repeats); - Py_XDECREF(aop); - Py_XDECREF(ret); - return NULL; -} - -/*NUMPY_API - */ -NPY_NO_EXPORT PyObject * -PyArray_Choose(PyArrayObject *ip, PyObject *op, PyArrayObject *ret, - NPY_CLIPMODE clipmode) -{ - int n, elsize; - intp i; - char *ret_data; - PyArrayObject **mps, *ap; - PyArrayMultiIterObject *multi = NULL; - intp mi; - int copyret = 0; - ap = NULL; - - /* - * Convert all inputs to arrays of a common type - * Also makes them C-contiguous - */ - mps = PyArray_ConvertToCommonType(op, &n); - if (mps == NULL) { - return NULL; - } - for (i = 0; i < n; i++) { - if (mps[i] == NULL) { - goto fail; - } - } - ap = (PyArrayObject *)PyArray_FROM_OT((PyObject *)ip, NPY_INTP); - if (ap == NULL) { - goto fail; - } - /* Broadcast all arrays to each other, index array at the end. 
*/ - multi = (PyArrayMultiIterObject *) - PyArray_MultiIterFromObjects((PyObject **)mps, n, 1, ap); - if (multi == NULL) { - goto fail; - } - /* Set-up return array */ - if (!ret) { - Py_INCREF(mps[0]->descr); - ret = (PyArrayObject *)PyArray_NewFromDescr(Py_TYPE(ap), - mps[0]->descr, - multi->nd, - multi->dimensions, - NULL, NULL, 0, - (PyObject *)ap); - } - else { - PyArrayObject *obj; - int flags = NPY_CARRAY | NPY_UPDATEIFCOPY | NPY_FORCECAST; - - if ((PyArray_NDIM(ret) != multi->nd) - || !PyArray_CompareLists( - PyArray_DIMS(ret), multi->dimensions, multi->nd)) { - PyErr_SetString(PyExc_TypeError, - "invalid shape for output array."); - ret = NULL; - goto fail; - } - if (clipmode == NPY_RAISE) { - /* - * we need to make sure and get a copy - * so the input array is not changed - * before the error is called - */ - flags |= NPY_ENSURECOPY; - } - Py_INCREF(mps[0]->descr); - obj = (PyArrayObject *)PyArray_FromArray(ret, mps[0]->descr, flags); - if (obj != ret) { - copyret = 1; - } - ret = obj; - } - - if (ret == NULL) { - goto fail; - } - elsize = ret->descr->elsize; - ret_data = ret->data; - - while (PyArray_MultiIter_NOTDONE(multi)) { - mi = *((intp *)PyArray_MultiIter_DATA(multi, n)); - if (mi < 0 || mi >= n) { - switch(clipmode) { - case NPY_RAISE: - PyErr_SetString(PyExc_ValueError, - "invalid entry in choice "\ - "array"); - goto fail; - case NPY_WRAP: - if (mi < 0) { - while (mi < 0) { - mi += n; - } - } - else { - while (mi >= n) { - mi -= n; - } - } - break; - case NPY_CLIP: - if (mi < 0) { - mi = 0; - } - else if (mi >= n) { - mi = n - 1; - } - break; - } - } - memmove(ret_data, PyArray_MultiIter_DATA(multi, mi), elsize); - ret_data += elsize; - PyArray_MultiIter_NEXT(multi); - } - - PyArray_INCREF(ret); - Py_DECREF(multi); - for (i = 0; i < n; i++) { - Py_XDECREF(mps[i]); - } - Py_DECREF(ap); - PyDataMem_FREE(mps); - if (copyret) { - PyObject *obj; - obj = ret->base; - Py_INCREF(obj); - Py_DECREF(ret); - ret = (PyArrayObject *)obj; - } - return 
(PyObject *)ret; - - fail: - Py_XDECREF(multi); - for (i = 0; i < n; i++) { - Py_XDECREF(mps[i]); - } - Py_XDECREF(ap); - PyDataMem_FREE(mps); - PyArray_XDECREF_ERR(ret); - return NULL; -} - -/* - * These algorithms use special sorting. They are not called unless the - * underlying sort function for the type is available. Note that axis is - * already valid. The sort functions require 1-d contiguous and well-behaved - * data. Therefore, a copy will be made of the data if needed before handing - * it to the sorting routine. An iterator is constructed and adjusted to walk - * over all but the desired sorting axis. - */ -static int -_new_sort(PyArrayObject *op, int axis, NPY_SORTKIND which) -{ - PyArrayIterObject *it; - int needcopy = 0, swap; - intp N, size; - int elsize; - intp astride; - PyArray_SortFunc *sort; - BEGIN_THREADS_DEF; - - it = (PyArrayIterObject *)PyArray_IterAllButAxis((PyObject *)op, &axis); - swap = !PyArray_ISNOTSWAPPED(op); - if (it == NULL) { - return -1; - } - - NPY_BEGIN_THREADS_DESCR(op->descr); - sort = op->descr->f->sort[which]; - size = it->size; - N = op->dimensions[axis]; - elsize = op->descr->elsize; - astride = op->strides[axis]; - - needcopy = !(op->flags & ALIGNED) || (astride != (intp) elsize) || swap; - if (needcopy) { - char *buffer = PyDataMem_NEW(N*elsize); - - while (size--) { - _unaligned_strided_byte_copy(buffer, (intp) elsize, it->dataptr, - astride, N, elsize); - if (swap) { - _strided_byte_swap(buffer, (intp) elsize, N, elsize); - } - if (sort(buffer, N, op) < 0) { - PyDataMem_FREE(buffer); - goto fail; - } - if (swap) { - _strided_byte_swap(buffer, (intp) elsize, N, elsize); - } - _unaligned_strided_byte_copy(it->dataptr, astride, buffer, - (intp) elsize, N, elsize); - PyArray_ITER_NEXT(it); - } - PyDataMem_FREE(buffer); - } - else { - while (size--) { - if (sort(it->dataptr, N, op) < 0) { - goto fail; - } - PyArray_ITER_NEXT(it); - } - } - NPY_END_THREADS_DESCR(op->descr); - Py_DECREF(it); - return 0; - - fail: - 
NPY_END_THREADS; - Py_DECREF(it); - return 0; -} - -static PyObject* -_new_argsort(PyArrayObject *op, int axis, NPY_SORTKIND which) -{ - - PyArrayIterObject *it = NULL; - PyArrayIterObject *rit = NULL; - PyObject *ret; - int needcopy = 0, i; - intp N, size; - int elsize, swap; - intp astride, rstride, *iptr; - PyArray_ArgSortFunc *argsort; - BEGIN_THREADS_DEF; - - ret = PyArray_New(Py_TYPE(op), op->nd, - op->dimensions, PyArray_INTP, - NULL, NULL, 0, 0, (PyObject *)op); - if (ret == NULL) { - return NULL; - } - it = (PyArrayIterObject *)PyArray_IterAllButAxis((PyObject *)op, &axis); - rit = (PyArrayIterObject *)PyArray_IterAllButAxis(ret, &axis); - if (rit == NULL || it == NULL) { - goto fail; - } - swap = !PyArray_ISNOTSWAPPED(op); - - NPY_BEGIN_THREADS_DESCR(op->descr); - argsort = op->descr->f->argsort[which]; - size = it->size; - N = op->dimensions[axis]; - elsize = op->descr->elsize; - astride = op->strides[axis]; - rstride = PyArray_STRIDE(ret,axis); - - needcopy = swap || !(op->flags & ALIGNED) || (astride != (intp) elsize) || - (rstride != sizeof(intp)); - if (needcopy) { - char *valbuffer, *indbuffer; - - valbuffer = PyDataMem_NEW(N*elsize); - indbuffer = PyDataMem_NEW(N*sizeof(intp)); - while (size--) { - _unaligned_strided_byte_copy(valbuffer, (intp) elsize, it->dataptr, - astride, N, elsize); - if (swap) { - _strided_byte_swap(valbuffer, (intp) elsize, N, elsize); - } - iptr = (intp *)indbuffer; - for (i = 0; i < N; i++) { - *iptr++ = i; - } - if (argsort(valbuffer, (intp *)indbuffer, N, op) < 0) { - PyDataMem_FREE(valbuffer); - PyDataMem_FREE(indbuffer); - goto fail; - } - _unaligned_strided_byte_copy(rit->dataptr, rstride, indbuffer, - sizeof(intp), N, sizeof(intp)); - PyArray_ITER_NEXT(it); - PyArray_ITER_NEXT(rit); - } - PyDataMem_FREE(valbuffer); - PyDataMem_FREE(indbuffer); - } - else { - while (size--) { - iptr = (intp *)rit->dataptr; - for (i = 0; i < N; i++) { - *iptr++ = i; - } - if (argsort(it->dataptr, (intp *)rit->dataptr, N, op) < 0) { - 
goto fail; - } - PyArray_ITER_NEXT(it); - PyArray_ITER_NEXT(rit); - } - } - - NPY_END_THREADS_DESCR(op->descr); - - Py_DECREF(it); - Py_DECREF(rit); - return ret; - - fail: - NPY_END_THREADS; - Py_DECREF(ret); - Py_XDECREF(it); - Py_XDECREF(rit); - return NULL; -} - - -/* Be sure to save this global_compare when necessary */ -static PyArrayObject *global_obj; - -static int -qsortCompare (const void *a, const void *b) -{ - return global_obj->descr->f->compare(a,b,global_obj); -} - -/* - * Consumes reference to ap (op gets it) op contains a version of - * the array with axes swapped if local variable axis is not the - * last dimension. Origin must be defined locally. - */ -#define SWAPAXES(op, ap) { \ - orign = (ap)->nd-1; \ - if (axis != orign) { \ - (op) = (PyAO *)PyArray_SwapAxes((ap), axis, orign); \ - Py_DECREF((ap)); \ - if ((op) == NULL) return NULL; \ - } \ - else (op) = (ap); \ - } - -/* - * Consumes reference to ap (op gets it) origin must be previously - * defined locally. SWAPAXES must have been called previously. - * op contains the swapped version of the array. 
- */ -#define SWAPBACK(op, ap) { \ - if (axis != orign) { \ - (op) = (PyAO *)PyArray_SwapAxes((ap), axis, orign); \ - Py_DECREF((ap)); \ - if ((op) == NULL) return NULL; \ - } \ - else (op) = (ap); \ - } - -/* These swap axes in-place if necessary */ -#define SWAPINTP(a,b) {intp c; c=(a); (a) = (b); (b) = c;} -#define SWAPAXES2(ap) { \ - orign = (ap)->nd-1; \ - if (axis != orign) { \ - SWAPINTP(ap->dimensions[axis], ap->dimensions[orign]); \ - SWAPINTP(ap->strides[axis], ap->strides[orign]); \ - PyArray_UpdateFlags(ap, CONTIGUOUS | FORTRAN); \ - } \ - } - -#define SWAPBACK2(ap) { \ - if (axis != orign) { \ - SWAPINTP(ap->dimensions[axis], ap->dimensions[orign]); \ - SWAPINTP(ap->strides[axis], ap->strides[orign]); \ - PyArray_UpdateFlags(ap, CONTIGUOUS | FORTRAN); \ - } \ - } - -/*NUMPY_API - * Sort an array in-place - */ -NPY_NO_EXPORT int -PyArray_Sort(PyArrayObject *op, int axis, NPY_SORTKIND which) -{ - PyArrayObject *ap = NULL, *store_arr = NULL; - char *ip; - int i, n, m, elsize, orign; - - n = op->nd; - if ((n == 0) || (PyArray_SIZE(op) == 1)) { - return 0; - } - if (axis < 0) { - axis += n; - } - if ((axis < 0) || (axis >= n)) { - PyErr_Format(PyExc_ValueError, "axis(=%d) out of bounds", axis); - return -1; - } - if (!PyArray_ISWRITEABLE(op)) { - PyErr_SetString(PyExc_RuntimeError, - "attempted sort on unwriteable array."); - return -1; - } - - /* Determine if we should use type-specific algorithm or not */ - if (op->descr->f->sort[which] != NULL) { - return _new_sort(op, axis, which); - } - if ((which != PyArray_QUICKSORT) - || op->descr->f->compare == NULL) { - PyErr_SetString(PyExc_TypeError, - "desired sort not supported for this type"); - return -1; - } - - SWAPAXES2(op); - - ap = (PyArrayObject *)PyArray_FromAny((PyObject *)op, - NULL, 1, 0, - DEFAULT | UPDATEIFCOPY, NULL); - if (ap == NULL) { - goto fail; - } - elsize = ap->descr->elsize; - m = ap->dimensions[ap->nd-1]; - if (m == 0) { - goto finish; - } - n = PyArray_SIZE(ap)/m; - - /* Store global 
-- allows re-entry -- restore before leaving*/ - store_arr = global_obj; - global_obj = ap; - for (ip = ap->data, i = 0; i < n; i++, ip += elsize*m) { - qsort(ip, m, elsize, qsortCompare); - } - global_obj = store_arr; - - if (PyErr_Occurred()) { - goto fail; - } - - finish: - Py_DECREF(ap); /* Should update op if needed */ - SWAPBACK2(op); - return 0; - - fail: - Py_XDECREF(ap); - SWAPBACK2(op); - return -1; -} - - -static char *global_data; - -static int -argsort_static_compare(const void *ip1, const void *ip2) -{ - int isize = global_obj->descr->elsize; - const intp *ipa = ip1; - const intp *ipb = ip2; - return global_obj->descr->f->compare(global_data + (isize * *ipa), - global_data + (isize * *ipb), - global_obj); -} - -/*NUMPY_API - * ArgSort an array - */ -NPY_NO_EXPORT PyObject * -PyArray_ArgSort(PyArrayObject *op, int axis, NPY_SORTKIND which) -{ - PyArrayObject *ap = NULL, *ret = NULL, *store, *op2; - intp *ip; - intp i, j, n, m, orign; - int argsort_elsize; - char *store_ptr; - - n = op->nd; - if ((n == 0) || (PyArray_SIZE(op) == 1)) { - ret = (PyArrayObject *)PyArray_New(Py_TYPE(op), op->nd, - op->dimensions, - PyArray_INTP, - NULL, NULL, 0, 0, - (PyObject *)op); - if (ret == NULL) { - return NULL; - } - *((intp *)ret->data) = 0; - return (PyObject *)ret; - } - - /* Creates new reference op2 */ - if ((op2=(PyAO *)_check_axis(op, &axis, 0)) == NULL) { - return NULL; - } - /* Determine if we should use new algorithm or not */ - if (op2->descr->f->argsort[which] != NULL) { - ret = (PyArrayObject *)_new_argsort(op2, axis, which); - Py_DECREF(op2); - return (PyObject *)ret; - } - - if ((which != PyArray_QUICKSORT) || op2->descr->f->compare == NULL) { - PyErr_SetString(PyExc_TypeError, - "requested sort not available for type"); - Py_DECREF(op2); - op = NULL; - goto fail; - } - - /* ap will contain the reference to op2 */ - SWAPAXES(ap, op2); - op = (PyArrayObject *)PyArray_ContiguousFromAny((PyObject *)ap, - PyArray_NOTYPE, - 1, 0); - Py_DECREF(ap); - if (op 
== NULL) { - return NULL; - } - ret = (PyArrayObject *)PyArray_New(Py_TYPE(op), op->nd, - op->dimensions, PyArray_INTP, - NULL, NULL, 0, 0, (PyObject *)op); - if (ret == NULL) { - goto fail; - } - ip = (intp *)ret->data; - argsort_elsize = op->descr->elsize; - m = op->dimensions[op->nd-1]; - if (m == 0) { - goto finish; - } - n = PyArray_SIZE(op)/m; - store_ptr = global_data; - global_data = op->data; - store = global_obj; - global_obj = op; - for (i = 0; i < n; i++, ip += m, global_data += m*argsort_elsize) { - for (j = 0; j < m; j++) { - ip[j] = j; - } - qsort((char *)ip, m, sizeof(intp), argsort_static_compare); - } - global_data = store_ptr; - global_obj = store; - - finish: - Py_DECREF(op); - SWAPBACK(op, ret); - return (PyObject *)op; - - fail: - Py_XDECREF(op); - Py_XDECREF(ret); - return NULL; - -} - - -/*NUMPY_API - *LexSort an array providing indices that will sort a collection of arrays - *lexicographically. The first key is sorted on first, followed by the second key - *-- requires that arg"merge"sort is available for each sort_key - * - *Returns an index array that shows the indexes for the lexicographic sort along - *the given axis. 
- */ -NPY_NO_EXPORT PyObject * -PyArray_LexSort(PyObject *sort_keys, int axis) -{ - PyArrayObject **mps; - PyArrayIterObject **its; - PyArrayObject *ret = NULL; - PyArrayIterObject *rit = NULL; - int n; - int nd; - int needcopy = 0, i,j; - intp N, size; - int elsize; - int maxelsize; - intp astride, rstride, *iptr; - int object = 0; - PyArray_ArgSortFunc *argsort; - NPY_BEGIN_THREADS_DEF; - - if (!PySequence_Check(sort_keys) - || ((n = PySequence_Size(sort_keys)) <= 0)) { - PyErr_SetString(PyExc_TypeError, - "need sequence of keys with len > 0 in lexsort"); - return NULL; - } - mps = (PyArrayObject **) _pya_malloc(n*sizeof(PyArrayObject)); - if (mps == NULL) { - return PyErr_NoMemory(); - } - its = (PyArrayIterObject **) _pya_malloc(n*sizeof(PyArrayIterObject)); - if (its == NULL) { - _pya_free(mps); - return PyErr_NoMemory(); - } - for (i = 0; i < n; i++) { - mps[i] = NULL; - its[i] = NULL; - } - for (i = 0; i < n; i++) { - PyObject *obj; - obj = PySequence_GetItem(sort_keys, i); - mps[i] = (PyArrayObject *)PyArray_FROM_O(obj); - Py_DECREF(obj); - if (mps[i] == NULL) { - goto fail; - } - if (i > 0) { - if ((mps[i]->nd != mps[0]->nd) - || (!PyArray_CompareLists(mps[i]->dimensions, - mps[0]->dimensions, - mps[0]->nd))) { - PyErr_SetString(PyExc_ValueError, - "all keys need to be the same shape"); - goto fail; - } - } - if (!mps[i]->descr->f->argsort[PyArray_MERGESORT]) { - PyErr_Format(PyExc_TypeError, - "merge sort not available for item %d", i); - goto fail; - } - if (!object - && PyDataType_FLAGCHK(mps[i]->descr, NPY_NEEDS_PYAPI)) { - object = 1; - } - its[i] = (PyArrayIterObject *)PyArray_IterAllButAxis( - (PyObject *)mps[i], &axis); - if (its[i] == NULL) { - goto fail; - } - } - - /* Now we can check the axis */ - nd = mps[0]->nd; - if ((nd == 0) || (PyArray_SIZE(mps[0]) == 1)) { - /* single element case */ - ret = (PyArrayObject *)PyArray_New(&PyArray_Type, mps[0]->nd, - mps[0]->dimensions, - PyArray_INTP, - NULL, NULL, 0, 0, NULL); - - if (ret == NULL) { - 
goto fail; - } - *((intp *)(ret->data)) = 0; - goto finish; - } - if (axis < 0) { - axis += nd; - } - if ((axis < 0) || (axis >= nd)) { - PyErr_Format(PyExc_ValueError, - "axis(=%d) out of bounds", axis); - goto fail; - } - - /* Now do the sorting */ - ret = (PyArrayObject *)PyArray_New(&PyArray_Type, mps[0]->nd, - mps[0]->dimensions, PyArray_INTP, - NULL, NULL, 0, 0, NULL); - if (ret == NULL) { - goto fail; - } - rit = (PyArrayIterObject *) - PyArray_IterAllButAxis((PyObject *)ret, &axis); - if (rit == NULL) { - goto fail; - } - if (!object) { - NPY_BEGIN_THREADS; - } - size = rit->size; - N = mps[0]->dimensions[axis]; - rstride = PyArray_STRIDE(ret, axis); - maxelsize = mps[0]->descr->elsize; - needcopy = (rstride != sizeof(intp)); - for (j = 0; j < n; j++) { - needcopy = needcopy - || PyArray_ISBYTESWAPPED(mps[j]) - || !(mps[j]->flags & ALIGNED) - || (mps[j]->strides[axis] != (intp)mps[j]->descr->elsize); - if (mps[j]->descr->elsize > maxelsize) { - maxelsize = mps[j]->descr->elsize; - } - } - - if (needcopy) { - char *valbuffer, *indbuffer; - int *swaps; - - valbuffer = PyDataMem_NEW(N*maxelsize); - indbuffer = PyDataMem_NEW(N*sizeof(intp)); - swaps = malloc(n*sizeof(int)); - for (j = 0; j < n; j++) { - swaps[j] = PyArray_ISBYTESWAPPED(mps[j]); - } - while (size--) { - iptr = (intp *)indbuffer; - for (i = 0; i < N; i++) { - *iptr++ = i; - } - for (j = 0; j < n; j++) { - elsize = mps[j]->descr->elsize; - astride = mps[j]->strides[axis]; - argsort = mps[j]->descr->f->argsort[PyArray_MERGESORT]; - _unaligned_strided_byte_copy(valbuffer, (intp) elsize, - its[j]->dataptr, astride, N, elsize); - if (swaps[j]) { - _strided_byte_swap(valbuffer, (intp) elsize, N, elsize); - } - if (argsort(valbuffer, (intp *)indbuffer, N, mps[j]) < 0) { - PyDataMem_FREE(valbuffer); - PyDataMem_FREE(indbuffer); - free(swaps); - goto fail; - } - PyArray_ITER_NEXT(its[j]); - } - _unaligned_strided_byte_copy(rit->dataptr, rstride, indbuffer, - sizeof(intp), N, sizeof(intp)); - 
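The per-key loop above applies a stable merge argsort to a shared index buffer, one key at a time; because each pass is stable, ties under a later key preserve the order produced by the earlier keys, so the key sorted last ends up as the primary sort key. A minimal standalone sketch of that principle (the helper names `stable_argsort` and `lexsort` are illustrative only, with insertion sort standing in for NumPy's mergesort):

```c
#include <assert.h>
#include <stddef.h>

/* Stable argsort: reorder ind[0..n-1] so key[ind[i]] is ascending,
 * preserving the previous relative order of equal elements
 * (insertion sort is stable, like the mergesort used above). */
static void stable_argsort(const int *key, size_t *ind, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        size_t cur = ind[i];
        size_t j = i;
        while (j > 0 && key[ind[j - 1]] > key[cur]) {
            ind[j] = ind[j - 1];
            j--;
        }
        ind[j] = cur;
    }
}

/* Lexsort over nkeys keys: one stable pass per key, in the order
 * given, so the key sorted last becomes the primary sort key. */
static void lexsort_keys(const int **keys, size_t nkeys, size_t *ind, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        ind[i] = i;
    }
    for (size_t k = 0; k < nkeys; k++) {
        stable_argsort(keys[k], ind, n);
    }
}
```

With `k0 = {1,2,0,2}` and `k1 = {0,0,1,0}`, sorting on `k0` first and `k1` last yields indices ordered primarily by `k1`, with `k0` breaking ties.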
PyArray_ITER_NEXT(rit); - } - PyDataMem_FREE(valbuffer); - PyDataMem_FREE(indbuffer); - free(swaps); - } - else { - while (size--) { - iptr = (intp *)rit->dataptr; - for (i = 0; i < N; i++) { - *iptr++ = i; - } - for (j = 0; j < n; j++) { - argsort = mps[j]->descr->f->argsort[PyArray_MERGESORT]; - if (argsort(its[j]->dataptr, (intp *)rit->dataptr, - N, mps[j]) < 0) { - goto fail; - } - PyArray_ITER_NEXT(its[j]); - } - PyArray_ITER_NEXT(rit); - } - } - - if (!object) { - NPY_END_THREADS; - } - - finish: - for (i = 0; i < n; i++) { - Py_XDECREF(mps[i]); - Py_XDECREF(its[i]); - } - Py_XDECREF(rit); - _pya_free(mps); - _pya_free(its); - return (PyObject *)ret; - - fail: - NPY_END_THREADS; - Py_XDECREF(rit); - Py_XDECREF(ret); - for (i = 0; i < n; i++) { - Py_XDECREF(mps[i]); - Py_XDECREF(its[i]); - } - _pya_free(mps); - _pya_free(its); - return NULL; -} - - -/** @brief Use bisection of sorted array to find first entries >= keys. - * - * For each key use bisection to find the first index i s.t. key <= arr[i]. - * When there is no such index i, set i = len(arr). Return the results in ret. - * All arrays are assumed contiguous on entry and both arr and key must be of - * the same comparable type. - * - * @param arr contiguous sorted array to be searched. - * @param key contiguous array of keys. - * @param ret contiguous array of intp for returned indices. 
- * @return void - */ -static void -local_search_left(PyArrayObject *arr, PyArrayObject *key, PyArrayObject *ret) -{ - PyArray_CompareFunc *compare = key->descr->f->compare; - intp nelts = arr->dimensions[arr->nd - 1]; - intp nkeys = PyArray_SIZE(key); - char *parr = arr->data; - char *pkey = key->data; - intp *pret = (intp *)ret->data; - int elsize = arr->descr->elsize; - intp i; - - for (i = 0; i < nkeys; ++i) { - intp imin = 0; - intp imax = nelts; - while (imin < imax) { - intp imid = imin + ((imax - imin) >> 1); - if (compare(parr + elsize*imid, pkey, key) < 0) { - imin = imid + 1; - } - else { - imax = imid; - } - } - *pret = imin; - pret += 1; - pkey += elsize; - } -} - - -/** @brief Use bisection of sorted array to find first entries > keys. - * - * For each key use bisection to find the first index i s.t. key < arr[i]. - * When there is no such index i, set i = len(arr). Return the results in ret. - * All arrays are assumed contiguous on entry and both arr and key must be of - * the same comparable type. - * - * @param arr contiguous sorted array to be searched. - * @param key contiguous array of keys. - * @param ret contiguous array of intp for returned indices. 
- * @return void - */ -static void -local_search_right(PyArrayObject *arr, PyArrayObject *key, PyArrayObject *ret) -{ - PyArray_CompareFunc *compare = key->descr->f->compare; - intp nelts = arr->dimensions[arr->nd - 1]; - intp nkeys = PyArray_SIZE(key); - char *parr = arr->data; - char *pkey = key->data; - intp *pret = (intp *)ret->data; - int elsize = arr->descr->elsize; - intp i; - - for(i = 0; i < nkeys; ++i) { - intp imin = 0; - intp imax = nelts; - while (imin < imax) { - intp imid = imin + ((imax - imin) >> 1); - if (compare(parr + elsize*imid, pkey, key) <= 0) { - imin = imid + 1; - } - else { - imax = imid; - } - } - *pret = imin; - pret += 1; - pkey += elsize; - } -} - -/*NUMPY_API - * Numeric.searchsorted(a,v) - */ -NPY_NO_EXPORT PyObject * -PyArray_SearchSorted(PyArrayObject *op1, PyObject *op2, NPY_SEARCHSIDE side) -{ - PyArrayObject *ap1 = NULL; - PyArrayObject *ap2 = NULL; - PyArrayObject *ret = NULL; - PyArray_Descr *dtype; - NPY_BEGIN_THREADS_DEF; - - dtype = PyArray_DescrFromObject((PyObject *)op2, op1->descr); - /* need ap1 as contiguous array and of right type */ - Py_INCREF(dtype); - ap1 = (PyArrayObject *)PyArray_FromAny((PyObject *)op1, dtype, - 1, 1, NPY_DEFAULT, NULL); - if (ap1 == NULL) { - Py_DECREF(dtype); - return NULL; - } - - /* need ap2 as contiguous array and of right type */ - ap2 = (PyArrayObject *)PyArray_FromAny(op2, dtype, - 0, 0, NPY_DEFAULT, NULL); - if (ap2 == NULL) { - goto fail; - } - /* ret is a contiguous array of intp type to hold returned indices */ - ret = (PyArrayObject *)PyArray_New(Py_TYPE(ap2), ap2->nd, - ap2->dimensions, PyArray_INTP, - NULL, NULL, 0, 0, (PyObject *)ap2); - if (ret == NULL) { - goto fail; - } - /* check that comparison function exists */ - if (ap2->descr->f->compare == NULL) { - PyErr_SetString(PyExc_TypeError, - "compare not supported for type"); - goto fail; - } - - if (side == NPY_SEARCHLEFT) { - NPY_BEGIN_THREADS_DESCR(ap2->descr); - local_search_left(ap1, ap2, ret); - 
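`local_search_left` and `local_search_right` differ only in a single comparison (`<` versus `<=`), which is exactly what distinguishes the "left" and "right" sides of searchsorted: the first index `i` with `key <= arr[i]` versus the first with `key < arr[i]`. A self-contained sketch of the same bisection over a plain int array (the helper names `search_left`/`search_right` are illustrative, not part of the NumPy API):

```c
#include <assert.h>
#include <stddef.h>

/* First index i with key <= arr[i] (the NPY_SEARCHLEFT rule):
 * the loop maintains arr[imin-1] < key and key <= arr[imax]. */
static size_t search_left(const int *arr, size_t n, int key)
{
    size_t imin = 0, imax = n;
    while (imin < imax) {
        size_t imid = imin + ((imax - imin) >> 1);
        if (arr[imid] < key) {
            imin = imid + 1;
        } else {
            imax = imid;
        }
    }
    return imin;
}

/* First index i with key < arr[i] (NPY_SEARCHRIGHT): only the
 * comparison changes, from < to <=. */
static size_t search_right(const int *arr, size_t n, int key)
{
    size_t imin = 0, imax = n;
    while (imin < imax) {
        size_t imid = imin + ((imax - imin) >> 1);
        if (arr[imid] <= key) {
            imin = imid + 1;
        } else {
            imax = imid;
        }
    }
    return imin;
}
```

For `arr = {1,2,2,3}` and key 2, the left search returns 1 and the right search returns 3; a key absent from the array yields its insertion point, and a key beyond the last element yields `n`, matching the "set i = len(arr)" convention in the doc comments above.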
NPY_END_THREADS_DESCR(ap2->descr); - } - else if (side == NPY_SEARCHRIGHT) { - NPY_BEGIN_THREADS_DESCR(ap2->descr); - local_search_right(ap1, ap2, ret); - NPY_END_THREADS_DESCR(ap2->descr); - } - Py_DECREF(ap1); - Py_DECREF(ap2); - return (PyObject *)ret; - - fail: - Py_XDECREF(ap1); - Py_XDECREF(ap2); - Py_XDECREF(ret); - return NULL; -} - -/*NUMPY_API - * Diagonal - */ -NPY_NO_EXPORT PyObject * -PyArray_Diagonal(PyArrayObject *self, int offset, int axis1, int axis2) -{ - int n = self->nd; - PyObject *new; - PyArray_Dims newaxes; - intp dims[MAX_DIMS]; - int i, pos; - - newaxes.ptr = dims; - if (n < 2) { - PyErr_SetString(PyExc_ValueError, - "array.ndim must be >= 2"); - return NULL; - } - if (axis1 < 0) { - axis1 += n; - } - if (axis2 < 0) { - axis2 += n; - } - if ((axis1 == axis2) || (axis1 < 0) || (axis1 >= n) || - (axis2 < 0) || (axis2 >= n)) { - PyErr_Format(PyExc_ValueError, "axis1(=%d) and axis2(=%d) "\ - "must be different and within range (nd=%d)", - axis1, axis2, n); - return NULL; - } - - newaxes.len = n; - /* insert at the end */ - newaxes.ptr[n-2] = axis1; - newaxes.ptr[n-1] = axis2; - pos = 0; - for (i = 0; i < n; i++) { - if ((i==axis1) || (i==axis2)) { - continue; - } - newaxes.ptr[pos++] = i; - } - new = PyArray_Transpose(self, &newaxes); - if (new == NULL) { - return NULL; - } - self = (PyAO *)new; - - if (n == 2) { - PyObject *a = NULL, *indices= NULL, *ret = NULL; - intp n1, n2, start, stop, step, count; - intp *dptr; - - n1 = self->dimensions[0]; - n2 = self->dimensions[1]; - step = n2 + 1; - if (offset < 0) { - start = -n2 * offset; - stop = MIN(n2, n1+offset)*(n2+1) - n2*offset; - } - else { - start = offset; - stop = MIN(n1, n2-offset)*(n2+1) + offset; - } - - /* count = ceil((stop-start)/step) */ - count = ((stop-start) / step) + (((stop-start) % step) != 0); - indices = PyArray_New(&PyArray_Type, 1, &count, - PyArray_INTP, NULL, NULL, 0, 0, NULL); - if (indices == NULL) { - Py_DECREF(self); - return NULL; - } - dptr = (intp 
*)PyArray_DATA(indices); - for (n1 = start; n1 < stop; n1 += step) { - *dptr++ = n1; - } - a = PyArray_IterNew((PyObject *)self); - Py_DECREF(self); - if (a == NULL) { - Py_DECREF(indices); - return NULL; - } - ret = PyObject_GetItem(a, indices); - Py_DECREF(a); - Py_DECREF(indices); - return ret; - } - - else { - /* - * my_diagonal = [] - * for i in range (s [0]) : - * my_diagonal.append (diagonal (a [i], offset)) - * return array (my_diagonal) - */ - PyObject *mydiagonal = NULL, *new = NULL, *ret = NULL, *sel = NULL; - intp i, n1; - int res; - PyArray_Descr *typecode; - - typecode = self->descr; - mydiagonal = PyList_New(0); - if (mydiagonal == NULL) { - Py_DECREF(self); - return NULL; - } - n1 = self->dimensions[0]; - for (i = 0; i < n1; i++) { - new = PyInt_FromLong((long) i); - sel = PyArray_EnsureAnyArray(PyObject_GetItem((PyObject *)self, new)); - Py_DECREF(new); - if (sel == NULL) { - Py_DECREF(self); - Py_DECREF(mydiagonal); - return NULL; - } - new = PyArray_Diagonal((PyAO *)sel, offset, n-3, n-2); - Py_DECREF(sel); - if (new == NULL) { - Py_DECREF(self); - Py_DECREF(mydiagonal); - return NULL; - } - res = PyList_Append(mydiagonal, new); - Py_DECREF(new); - if (res < 0) { - Py_DECREF(self); - Py_DECREF(mydiagonal); - return NULL; - } - } - Py_DECREF(self); - Py_INCREF(typecode); - ret = PyArray_FromAny(mydiagonal, typecode, 0, 0, 0, NULL); - Py_DECREF(mydiagonal); - return ret; - } -} - -/*NUMPY_API - * Compress - */ -NPY_NO_EXPORT PyObject * -PyArray_Compress(PyArrayObject *self, PyObject *condition, int axis, - PyArrayObject *out) -{ - PyArrayObject *cond; - PyObject *res, *ret; - - cond = (PyAO *)PyArray_FROM_O(condition); - if (cond == NULL) { - return NULL; - } - if (cond->nd != 1) { - Py_DECREF(cond); - PyErr_SetString(PyExc_ValueError, - "condition must be 1-d array"); - return NULL; - } - - res = PyArray_Nonzero(cond); - Py_DECREF(cond); - if (res == NULL) { - return res; - } - ret = PyArray_TakeFrom(self, PyTuple_GET_ITEM(res, 0), axis, - out, 
NPY_RAISE); - Py_DECREF(res); - return ret; -} - -/*NUMPY_API - * Nonzero - */ -NPY_NO_EXPORT PyObject * -PyArray_Nonzero(PyArrayObject *self) -{ - int n = self->nd, j; - intp count = 0, i, size; - PyArrayIterObject *it = NULL; - PyObject *ret = NULL, *item; - intp *dptr[MAX_DIMS]; - - it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)self); - if (it == NULL) { - return NULL; - } - size = it->size; - for (i = 0; i < size; i++) { - if (self->descr->f->nonzero(it->dataptr, self)) { - count++; - } - PyArray_ITER_NEXT(it); - } - - PyArray_ITER_RESET(it); - ret = PyTuple_New(n); - if (ret == NULL) { - goto fail; - } - for (j = 0; j < n; j++) { - item = PyArray_New(Py_TYPE(self), 1, &count, - PyArray_INTP, NULL, NULL, 0, 0, - (PyObject *)self); - if (item == NULL) { - goto fail; - } - PyTuple_SET_ITEM(ret, j, item); - dptr[j] = (intp *)PyArray_DATA(item); - } - if (n == 1) { - for (i = 0; i < size; i++) { - if (self->descr->f->nonzero(it->dataptr, self)) { - *(dptr[0])++ = i; - } - PyArray_ITER_NEXT(it); - } - } - else { - /* reset contiguous so that coordinates gets updated */ - it->contiguous = 0; - for (i = 0; i < size; i++) { - if (self->descr->f->nonzero(it->dataptr, self)) { - for (j = 0; j < n; j++) { - *(dptr[j])++ = it->coordinates[j]; - } - } - PyArray_ITER_NEXT(it); - } - } - - Py_DECREF(it); - return ret; - - fail: - Py_XDECREF(ret); - Py_XDECREF(it); - return NULL; - -} - diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/iterators.c b/pythonPackages/numpy/numpy/core/src/multiarray/iterators.c deleted file mode 100755 index f841006ec1..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/iterators.c +++ /dev/null @@ -1,2120 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "arrayobject.h" -#include "iterators.h" -#include 
"ctors.h" -#include "common.h" - -#define PseudoIndex -1 -#define RubberIndex -2 -#define SingleIndex -3 - -NPY_NO_EXPORT intp -parse_subindex(PyObject *op, intp *step_size, intp *n_steps, intp max) -{ - intp index; - - if (op == Py_None) { - *n_steps = PseudoIndex; - index = 0; - } - else if (op == Py_Ellipsis) { - *n_steps = RubberIndex; - index = 0; - } - else if (PySlice_Check(op)) { - intp stop; - if (slice_GetIndices((PySliceObject *)op, max, - &index, &stop, step_size, n_steps) < 0) { - if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_IndexError, - "invalid slice"); - } - goto fail; - } - if (*n_steps <= 0) { - *n_steps = 0; - *step_size = 1; - index = 0; - } - } - else { - index = PyArray_PyIntAsIntp(op); - if (error_converting(index)) { - PyErr_SetString(PyExc_IndexError, - "each subindex must be either a "\ - "slice, an integer, Ellipsis, or "\ - "newaxis"); - goto fail; - } - *n_steps = SingleIndex; - *step_size = 0; - if (index < 0) { - index += max; - } - if (index >= max || index < 0) { - PyErr_SetString(PyExc_IndexError, "invalid index"); - goto fail; - } - } - return index; - - fail: - return -1; -} - - -NPY_NO_EXPORT int -parse_index(PyArrayObject *self, PyObject *op, - intp *dimensions, intp *strides, intp *offset_ptr) -{ - int i, j, n; - int nd_old, nd_new, n_add, n_pseudo; - intp n_steps, start, offset, step_size; - PyObject *op1 = NULL; - int is_slice; - - if (PySlice_Check(op) || op == Py_Ellipsis || op == Py_None) { - n = 1; - op1 = op; - Py_INCREF(op); - /* this relies on the fact that n==1 for loop below */ - is_slice = 1; - } - else { - if (!PySequence_Check(op)) { - PyErr_SetString(PyExc_IndexError, - "index must be either an int "\ - "or a sequence"); - return -1; - } - n = PySequence_Length(op); - is_slice = 0; - } - - nd_old = nd_new = 0; - - offset = 0; - for (i = 0; i < n; i++) { - if (!is_slice) { - if (!(op1=PySequence_GetItem(op, i))) { - PyErr_SetString(PyExc_IndexError, - "invalid index"); - return -1; - } - } - start = 
parse_subindex(op1, &step_size, &n_steps, - nd_old < self->nd ? - self->dimensions[nd_old] : 0); - Py_DECREF(op1); - if (start == -1) { - break; - } - if (n_steps == PseudoIndex) { - dimensions[nd_new] = 1; strides[nd_new] = 0; - nd_new++; - } - else { - if (n_steps == RubberIndex) { - for (j = i + 1, n_pseudo = 0; j < n; j++) { - op1 = PySequence_GetItem(op, j); - if (op1 == Py_None) { - n_pseudo++; - } - Py_DECREF(op1); - } - n_add = self->nd-(n-i-n_pseudo-1+nd_old); - if (n_add < 0) { - PyErr_SetString(PyExc_IndexError, - "too many indices"); - return -1; - } - for (j = 0; j < n_add; j++) { - dimensions[nd_new] = \ - self->dimensions[nd_old]; - strides[nd_new] = \ - self->strides[nd_old]; - nd_new++; nd_old++; - } - } - else { - if (nd_old >= self->nd) { - PyErr_SetString(PyExc_IndexError, - "too many indices"); - return -1; - } - offset += self->strides[nd_old]*start; - nd_old++; - if (n_steps != SingleIndex) { - dimensions[nd_new] = n_steps; - strides[nd_new] = step_size * \ - self->strides[nd_old-1]; - nd_new++; - } - } - } - } - if (i < n) { - return -1; - } - n_add = self->nd-nd_old; - for (j = 0; j < n_add; j++) { - dimensions[nd_new] = self->dimensions[nd_old]; - strides[nd_new] = self->strides[nd_old]; - nd_new++; - nd_old++; - } - *offset_ptr = offset; - return nd_new; -} - -static int -slice_coerce_index(PyObject *o, intp *v) -{ - *v = PyArray_PyIntAsIntp(o); - if (error_converting(*v)) { - PyErr_Clear(); - return 0; - } - return 1; -} - -/* This is basically PySlice_GetIndicesEx, but with our coercion - * of indices to integers (plus, that function is new in Python 2.3) */ -NPY_NO_EXPORT int -slice_GetIndices(PySliceObject *r, intp length, - intp *start, intp *stop, intp *step, - intp *slicelength) -{ - intp defstop; - - if (r->step == Py_None) { - *step = 1; - } - else { - if (!slice_coerce_index(r->step, step)) { - return -1; - } - if (*step == 0) { - PyErr_SetString(PyExc_ValueError, - "slice step cannot be zero"); - return -1; - } - } - /* 
defstart = *step < 0 ? length - 1 : 0; */ - defstop = *step < 0 ? -1 : length; - if (r->start == Py_None) { - *start = *step < 0 ? length-1 : 0; - } - else { - if (!slice_coerce_index(r->start, start)) { - return -1; - } - if (*start < 0) { - *start += length; - } - if (*start < 0) { - *start = (*step < 0) ? -1 : 0; - } - if (*start >= length) { - *start = (*step < 0) ? length - 1 : length; - } - } - - if (r->stop == Py_None) { - *stop = defstop; - } - else { - if (!slice_coerce_index(r->stop, stop)) { - return -1; - } - if (*stop < 0) { - *stop += length; - } - if (*stop < 0) { - *stop = -1; - } - if (*stop > length) { - *stop = length; - } - } - - if ((*step < 0 && *stop >= *start) || - (*step > 0 && *start >= *stop)) { - *slicelength = 0; - } - else if (*step < 0) { - *slicelength = (*stop - *start + 1) / (*step) + 1; - } - else { - *slicelength = (*stop - *start - 1) / (*step) + 1; - } - - return 0; -} - -/*********************** Element-wise Array Iterator ***********************/ -/* Aided by Peter J. 
Verveer's nd_image package and numpy's arraymap ****/ -/* and Python's array iterator ***/ - -/* get the dataptr from its current coordinates for simple iterator */ -static char* -get_ptr_simple(PyArrayIterObject* iter, npy_intp *coordinates) -{ - npy_intp i; - char *ret; - - ret = iter->ao->data; - - for(i = 0; i < iter->ao->nd; ++i) { - ret += coordinates[i] * iter->strides[i]; - } - - return ret; -} - -/* - * This is common initialization code between PyArrayIterObject and - * PyArrayNeighborhoodIterObject - * - * Increase ao refcount - */ -static PyObject * -array_iter_base_init(PyArrayIterObject *it, PyArrayObject *ao) -{ - int nd, i; - - nd = ao->nd; - PyArray_UpdateFlags(ao, CONTIGUOUS); - if (PyArray_ISCONTIGUOUS(ao)) { - it->contiguous = 1; - } - else { - it->contiguous = 0; - } - Py_INCREF(ao); - it->ao = ao; - it->size = PyArray_SIZE(ao); - it->nd_m1 = nd - 1; - it->factors[nd-1] = 1; - for (i = 0; i < nd; i++) { - it->dims_m1[i] = ao->dimensions[i] - 1; - it->strides[i] = ao->strides[i]; - it->backstrides[i] = it->strides[i] * it->dims_m1[i]; - if (i > 0) { - it->factors[nd-i-1] = it->factors[nd-i] * ao->dimensions[nd-i]; - } - it->bounds[i][0] = 0; - it->bounds[i][1] = ao->dimensions[i] - 1; - it->limits[i][0] = 0; - it->limits[i][1] = ao->dimensions[i] - 1; - it->limits_sizes[i] = it->limits[i][1] - it->limits[i][0] + 1; - } - - it->translate = &get_ptr_simple; - PyArray_ITER_RESET(it); - - return (PyObject *)it; -} - -static void -array_iter_base_dealloc(PyArrayIterObject *it) -{ - Py_XDECREF(it->ao); -} - -/*NUMPY_API - * Get Iterator. 
- */ -NPY_NO_EXPORT PyObject * -PyArray_IterNew(PyObject *obj) -{ - PyArrayIterObject *it; - PyArrayObject *ao = (PyArrayObject *)obj; - - if (!PyArray_Check(ao)) { - PyErr_BadInternalCall(); - return NULL; - } - - it = (PyArrayIterObject *)_pya_malloc(sizeof(PyArrayIterObject)); - PyObject_Init((PyObject *)it, &PyArrayIter_Type); - /* it = PyObject_New(PyArrayIterObject, &PyArrayIter_Type);*/ - if (it == NULL) { - return NULL; - } - - array_iter_base_init(it, ao); - return (PyObject *)it; -} - -/*NUMPY_API - * Get Iterator broadcast to a particular shape - */ -NPY_NO_EXPORT PyObject * -PyArray_BroadcastToShape(PyObject *obj, intp *dims, int nd) -{ - PyArrayIterObject *it; - int i, diff, j, compat, k; - PyArrayObject *ao = (PyArrayObject *)obj; - - if (ao->nd > nd) { - goto err; - } - compat = 1; - diff = j = nd - ao->nd; - for (i = 0; i < ao->nd; i++, j++) { - if (ao->dimensions[i] == 1) { - continue; - } - if (ao->dimensions[i] != dims[j]) { - compat = 0; - break; - } - } - if (!compat) { - goto err; - } - it = (PyArrayIterObject *)_pya_malloc(sizeof(PyArrayIterObject)); - PyObject_Init((PyObject *)it, &PyArrayIter_Type); - - if (it == NULL) { - return NULL; - } - PyArray_UpdateFlags(ao, CONTIGUOUS); - if (PyArray_ISCONTIGUOUS(ao)) { - it->contiguous = 1; - } - else { - it->contiguous = 0; - } - Py_INCREF(ao); - it->ao = ao; - it->size = PyArray_MultiplyList(dims, nd); - it->nd_m1 = nd - 1; - it->factors[nd-1] = 1; - for (i = 0; i < nd; i++) { - it->dims_m1[i] = dims[i] - 1; - k = i - diff; - if ((k < 0) || ao->dimensions[k] != dims[i]) { - it->contiguous = 0; - it->strides[i] = 0; - } - else { - it->strides[i] = ao->strides[k]; - } - it->backstrides[i] = it->strides[i] * it->dims_m1[i]; - if (i > 0) { - it->factors[nd-i-1] = it->factors[nd-i] * dims[nd-i]; - } - } - PyArray_ITER_RESET(it); - return (PyObject *)it; - - err: - PyErr_SetString(PyExc_ValueError, "array is not broadcastable to "\ - "correct shape"); - return NULL; -} - - - - - -/*NUMPY_API - * Get 
Iterator that iterates over all but one axis (don't use this with - * PyArray_ITER_GOTO1D). The axis will be over-written if negative - * with the axis having the smallest stride. - */ -NPY_NO_EXPORT PyObject * -PyArray_IterAllButAxis(PyObject *obj, int *inaxis) -{ - PyArrayIterObject *it; - int axis; - it = (PyArrayIterObject *)PyArray_IterNew(obj); - if (it == NULL) { - return NULL; - } - if (PyArray_NDIM(obj)==0) { - return (PyObject *)it; - } - if (*inaxis < 0) { - int i, minaxis = 0; - intp minstride = 0; - i = 0; - while (minstride == 0 && i < PyArray_NDIM(obj)) { - minstride = PyArray_STRIDE(obj,i); - i++; - } - for (i = 1; i < PyArray_NDIM(obj); i++) { - if (PyArray_STRIDE(obj,i) > 0 && - PyArray_STRIDE(obj, i) < minstride) { - minaxis = i; - minstride = PyArray_STRIDE(obj,i); - } - } - *inaxis = minaxis; - } - axis = *inaxis; - /* adjust so that will not iterate over axis */ - it->contiguous = 0; - if (it->size != 0) { - it->size /= PyArray_DIM(obj,axis); - } - it->dims_m1[axis] = 0; - it->backstrides[axis] = 0; - - /* - * (won't fix factors so don't use - * PyArray_ITER_GOTO1D with this iterator) - */ - return (PyObject *)it; -} - -/*NUMPY_API - * Adjusts previously broadcasted iterators so that the axis with - * the smallest sum of iterator strides is not iterated over. - * Returns dimension which is smallest in the range [0,multi->nd). - * A -1 is returned if multi->nd == 0. 
- * - * don't use with PyArray_ITER_GOTO1D because factors are not adjusted - */ -NPY_NO_EXPORT int -PyArray_RemoveSmallest(PyArrayMultiIterObject *multi) -{ - PyArrayIterObject *it; - int i, j; - int axis; - intp smallest; - intp sumstrides[NPY_MAXDIMS]; - - if (multi->nd == 0) { - return -1; - } - for (i = 0; i < multi->nd; i++) { - sumstrides[i] = 0; - for (j = 0; j < multi->numiter; j++) { - sumstrides[i] += multi->iters[j]->strides[i]; - } - } - axis = 0; - smallest = sumstrides[0]; - /* Find longest dimension */ - for (i = 1; i < multi->nd; i++) { - if (sumstrides[i] < smallest) { - axis = i; - smallest = sumstrides[i]; - } - } - for(i = 0; i < multi->numiter; i++) { - it = multi->iters[i]; - it->contiguous = 0; - if (it->size != 0) { - it->size /= (it->dims_m1[axis]+1); - } - it->dims_m1[axis] = 0; - it->backstrides[axis] = 0; - } - multi->size = multi->iters[0]->size; - return axis; -} - -/* Returns an array scalar holding the element desired */ - -static PyObject * -arrayiter_next(PyArrayIterObject *it) -{ - PyObject *ret; - - if (it->index < it->size) { - ret = PyArray_ToScalar(it->dataptr, it->ao); - PyArray_ITER_NEXT(it); - return ret; - } - return NULL; -} - -static void -arrayiter_dealloc(PyArrayIterObject *it) -{ - array_iter_base_dealloc(it); - _pya_free(it); -} - -static Py_ssize_t -iter_length(PyArrayIterObject *self) -{ - return self->size; -} - - -static PyObject * -iter_subscript_Bool(PyArrayIterObject *self, PyArrayObject *ind) -{ - intp index, strides; - int itemsize; - intp count = 0; - char *dptr, *optr; - PyObject *r; - int swap; - PyArray_CopySwapFunc *copyswap; - - - if (ind->nd != 1) { - PyErr_SetString(PyExc_ValueError, - "boolean index array should have 1 dimension"); - return NULL; - } - index = ind->dimensions[0]; - if (index > self->size) { - PyErr_SetString(PyExc_ValueError, - "too many boolean indices"); - return NULL; - } - - strides = ind->strides[0]; - dptr = ind->data; - /* Get size of return array */ - while (index--) { - if 
(*((Bool *)dptr) != 0) { - count++; - } - dptr += strides; - } - itemsize = self->ao->descr->elsize; - Py_INCREF(self->ao->descr); - r = PyArray_NewFromDescr(Py_TYPE(self->ao), - self->ao->descr, 1, &count, - NULL, NULL, - 0, (PyObject *)self->ao); - if (r == NULL) { - return NULL; - } - /* Set up loop */ - optr = PyArray_DATA(r); - index = ind->dimensions[0]; - dptr = ind->data; - copyswap = self->ao->descr->f->copyswap; - /* Loop over Boolean array */ - swap = (PyArray_ISNOTSWAPPED(self->ao) != PyArray_ISNOTSWAPPED(r)); - while (index--) { - if (*((Bool *)dptr) != 0) { - copyswap(optr, self->dataptr, swap, self->ao); - optr += itemsize; - } - dptr += strides; - PyArray_ITER_NEXT(self); - } - PyArray_ITER_RESET(self); - return r; -} - -static PyObject * -iter_subscript_int(PyArrayIterObject *self, PyArrayObject *ind) -{ - intp num; - PyObject *r; - PyArrayIterObject *ind_it; - int itemsize; - int swap; - char *optr; - intp index; - PyArray_CopySwapFunc *copyswap; - - itemsize = self->ao->descr->elsize; - if (ind->nd == 0) { - num = *((intp *)ind->data); - if (num < 0) { - num += self->size; - } - if (num < 0 || num >= self->size) { - PyErr_Format(PyExc_IndexError, - "index %"INTP_FMT" out of bounds" \ - " 0<=index<%"INTP_FMT, - num, self->size); - r = NULL; - } - else { - PyArray_ITER_GOTO1D(self, num); - r = PyArray_ToScalar(self->dataptr, self->ao); - } - PyArray_ITER_RESET(self); - return r; - } - - Py_INCREF(self->ao->descr); - r = PyArray_NewFromDescr(Py_TYPE(self->ao), self->ao->descr, - ind->nd, ind->dimensions, - NULL, NULL, - 0, (PyObject *)self->ao); - if (r == NULL) { - return NULL; - } - optr = PyArray_DATA(r); - ind_it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)ind); - if (ind_it == NULL) { - Py_DECREF(r); - return NULL; - } - index = ind_it->size; - copyswap = PyArray_DESCR(r)->f->copyswap; - swap = (PyArray_ISNOTSWAPPED(r) != PyArray_ISNOTSWAPPED(self->ao)); - while (index--) { - num = *((intp *)(ind_it->dataptr)); - if (num < 0) { - num += 
self->size; - } - if (num < 0 || num >= self->size) { - PyErr_Format(PyExc_IndexError, - "index %"INTP_FMT" out of bounds" \ - " 0<=index<%"INTP_FMT, - num, self->size); - Py_DECREF(ind_it); - Py_DECREF(r); - PyArray_ITER_RESET(self); - return NULL; - } - PyArray_ITER_GOTO1D(self, num); - copyswap(optr, self->dataptr, swap, r); - optr += itemsize; - PyArray_ITER_NEXT(ind_it); - } - Py_DECREF(ind_it); - PyArray_ITER_RESET(self); - return r; -} - -/* Always returns arrays */ -NPY_NO_EXPORT PyObject * -iter_subscript(PyArrayIterObject *self, PyObject *ind) -{ - PyArray_Descr *indtype = NULL; - intp start, step_size; - intp n_steps; - PyObject *r; - char *dptr; - int size; - PyObject *obj = NULL; - PyArray_CopySwapFunc *copyswap; - - if (ind == Py_Ellipsis) { - ind = PySlice_New(NULL, NULL, NULL); - obj = iter_subscript(self, ind); - Py_DECREF(ind); - return obj; - } - if (PyTuple_Check(ind)) { - int len; - len = PyTuple_GET_SIZE(ind); - if (len > 1) { - goto fail; - } - if (len == 0) { - Py_INCREF(self->ao); - return (PyObject *)self->ao; - } - ind = PyTuple_GET_ITEM(ind, 0); - } - - /* - * Tuples >1d not accepted --- i.e. 
no newaxis - * Could implement this with adjusted strides and dimensions in iterator - * Check for Boolean -- this is first becasue Bool is a subclass of Int - */ - PyArray_ITER_RESET(self); - - if (PyBool_Check(ind)) { - if (PyObject_IsTrue(ind)) { - return PyArray_ToScalar(self->dataptr, self->ao); - } - else { /* empty array */ - intp ii = 0; - Py_INCREF(self->ao->descr); - r = PyArray_NewFromDescr(Py_TYPE(self->ao), - self->ao->descr, - 1, &ii, - NULL, NULL, 0, - (PyObject *)self->ao); - return r; - } - } - - /* Check for Integer or Slice */ - if (PyLong_Check(ind) || PyInt_Check(ind) || PySlice_Check(ind)) { - start = parse_subindex(ind, &step_size, &n_steps, - self->size); - if (start == -1) { - goto fail; - } - if (n_steps == RubberIndex || n_steps == PseudoIndex) { - PyErr_SetString(PyExc_IndexError, - "cannot use Ellipsis or newaxes here"); - goto fail; - } - PyArray_ITER_GOTO1D(self, start) - if (n_steps == SingleIndex) { /* Integer */ - r = PyArray_ToScalar(self->dataptr, self->ao); - PyArray_ITER_RESET(self); - return r; - } - size = self->ao->descr->elsize; - Py_INCREF(self->ao->descr); - r = PyArray_NewFromDescr(Py_TYPE(self->ao), - self->ao->descr, - 1, &n_steps, - NULL, NULL, - 0, (PyObject *)self->ao); - if (r == NULL) { - goto fail; - } - dptr = PyArray_DATA(r); - copyswap = PyArray_DESCR(r)->f->copyswap; - while (n_steps--) { - copyswap(dptr, self->dataptr, 0, r); - start += step_size; - PyArray_ITER_GOTO1D(self, start) - dptr += size; - } - PyArray_ITER_RESET(self); - return r; - } - - /* convert to INTP array if Integer array scalar or List */ - indtype = PyArray_DescrFromType(PyArray_INTP); - if (PyArray_IsScalar(ind, Integer) || PyList_Check(ind)) { - Py_INCREF(indtype); - obj = PyArray_FromAny(ind, indtype, 0, 0, FORCECAST, NULL); - if (obj == NULL) { - goto fail; - } - } - else { - Py_INCREF(ind); - obj = ind; - } - - if (PyArray_Check(obj)) { - /* Check for Boolean object */ - if (PyArray_TYPE(obj)==PyArray_BOOL) { - r = 
iter_subscript_Bool(self, (PyArrayObject *)obj); - Py_DECREF(indtype); - } - /* Check for integer array */ - else if (PyArray_ISINTEGER(obj)) { - PyObject *new; - new = PyArray_FromAny(obj, indtype, 0, 0, - FORCECAST | ALIGNED, NULL); - if (new == NULL) { - goto fail; - } - Py_DECREF(obj); - obj = new; - r = iter_subscript_int(self, (PyArrayObject *)obj); - } - else { - goto fail; - } - Py_DECREF(obj); - return r; - } - else { - Py_DECREF(indtype); - } - - - fail: - if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_IndexError, "unsupported iterator index"); - } - Py_XDECREF(indtype); - Py_XDECREF(obj); - return NULL; - -} - - -static int -iter_ass_sub_Bool(PyArrayIterObject *self, PyArrayObject *ind, - PyArrayIterObject *val, int swap) -{ - intp index, strides; - char *dptr; - PyArray_CopySwapFunc *copyswap; - - if (ind->nd != 1) { - PyErr_SetString(PyExc_ValueError, - "boolean index array should have 1 dimension"); - return -1; - } - - index = ind->dimensions[0]; - if (index > self->size) { - PyErr_SetString(PyExc_ValueError, - "boolean index array has too many values"); - return -1; - } - - strides = ind->strides[0]; - dptr = ind->data; - PyArray_ITER_RESET(self); - /* Loop over Boolean array */ - copyswap = self->ao->descr->f->copyswap; - while (index--) { - if (*((Bool *)dptr) != 0) { - copyswap(self->dataptr, val->dataptr, swap, self->ao); - PyArray_ITER_NEXT(val); - if (val->index == val->size) { - PyArray_ITER_RESET(val); - } - } - dptr += strides; - PyArray_ITER_NEXT(self); - } - PyArray_ITER_RESET(self); - return 0; -} - -static int -iter_ass_sub_int(PyArrayIterObject *self, PyArrayObject *ind, - PyArrayIterObject *val, int swap) -{ - PyArray_Descr *typecode; - intp num; - PyArrayIterObject *ind_it; - intp index; - PyArray_CopySwapFunc *copyswap; - - typecode = self->ao->descr; - copyswap = self->ao->descr->f->copyswap; - if (ind->nd == 0) { - num = *((intp *)ind->data); - PyArray_ITER_GOTO1D(self, num); - copyswap(self->dataptr, val->dataptr, swap, 
self->ao); - return 0; - } - ind_it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)ind); - if (ind_it == NULL) { - return -1; - } - index = ind_it->size; - while (index--) { - num = *((intp *)(ind_it->dataptr)); - if (num < 0) { - num += self->size; - } - if ((num < 0) || (num >= self->size)) { - PyErr_Format(PyExc_IndexError, - "index %"INTP_FMT" out of bounds" \ - " 0<=index<%"INTP_FMT, num, - self->size); - Py_DECREF(ind_it); - return -1; - } - PyArray_ITER_GOTO1D(self, num); - copyswap(self->dataptr, val->dataptr, swap, self->ao); - PyArray_ITER_NEXT(ind_it); - PyArray_ITER_NEXT(val); - if (val->index == val->size) { - PyArray_ITER_RESET(val); - } - } - Py_DECREF(ind_it); - return 0; -} - -NPY_NO_EXPORT int -iter_ass_subscript(PyArrayIterObject *self, PyObject *ind, PyObject *val) -{ - PyObject *arrval = NULL; - PyArrayIterObject *val_it = NULL; - PyArray_Descr *type; - PyArray_Descr *indtype = NULL; - int swap, retval = -1; - intp start, step_size; - intp n_steps; - PyObject *obj = NULL; - PyArray_CopySwapFunc *copyswap; - - - if (ind == Py_Ellipsis) { - ind = PySlice_New(NULL, NULL, NULL); - retval = iter_ass_subscript(self, ind, val); - Py_DECREF(ind); - return retval; - } - - if (PyTuple_Check(ind)) { - int len; - len = PyTuple_GET_SIZE(ind); - if (len > 1) { - goto finish; - } - ind = PyTuple_GET_ITEM(ind, 0); - } - - type = self->ao->descr; - - /* - * Check for Boolean -- this is first becasue - * Bool is a subclass of Int - */ - if (PyBool_Check(ind)) { - retval = 0; - if (PyObject_IsTrue(ind)) { - retval = type->f->setitem(val, self->dataptr, self->ao); - } - goto finish; - } - - if (PySequence_Check(ind) || PySlice_Check(ind)) { - goto skip; - } - start = PyArray_PyIntAsIntp(ind); - if (start==-1 && PyErr_Occurred()) { - PyErr_Clear(); - } - else { - if (start < -self->size || start >= self->size) { - PyErr_Format(PyExc_ValueError, - "index (%" NPY_INTP_FMT \ - ") out of range", start); - goto finish; - } - retval = 0; - PyArray_ITER_GOTO1D(self, 
start); - retval = type->f->setitem(val, self->dataptr, self->ao); - PyArray_ITER_RESET(self); - if (retval < 0) { - PyErr_SetString(PyExc_ValueError, - "Error setting single item of array."); - } - goto finish; - } - - skip: - Py_INCREF(type); - arrval = PyArray_FromAny(val, type, 0, 0, 0, NULL); - if (arrval == NULL) { - return -1; - } - val_it = (PyArrayIterObject *)PyArray_IterNew(arrval); - if (val_it == NULL) { - goto finish; - } - if (val_it->size == 0) { - retval = 0; - goto finish; - } - - copyswap = PyArray_DESCR(arrval)->f->copyswap; - swap = (PyArray_ISNOTSWAPPED(self->ao)!=PyArray_ISNOTSWAPPED(arrval)); - - /* Check Slice */ - if (PySlice_Check(ind)) { - start = parse_subindex(ind, &step_size, &n_steps, self->size); - if (start == -1) { - goto finish; - } - if (n_steps == RubberIndex || n_steps == PseudoIndex) { - PyErr_SetString(PyExc_IndexError, - "cannot use Ellipsis or newaxes here"); - goto finish; - } - PyArray_ITER_GOTO1D(self, start); - if (n_steps == SingleIndex) { - /* Integer */ - copyswap(self->dataptr, PyArray_DATA(arrval), swap, arrval); - PyArray_ITER_RESET(self); - retval = 0; - goto finish; - } - while (n_steps--) { - copyswap(self->dataptr, val_it->dataptr, swap, arrval); - start += step_size; - PyArray_ITER_GOTO1D(self, start); - PyArray_ITER_NEXT(val_it); - if (val_it->index == val_it->size) { - PyArray_ITER_RESET(val_it); - } - } - PyArray_ITER_RESET(self); - retval = 0; - goto finish; - } - - /* convert to INTP array if Integer array scalar or List */ - indtype = PyArray_DescrFromType(PyArray_INTP); - if (PyList_Check(ind)) { - Py_INCREF(indtype); - obj = PyArray_FromAny(ind, indtype, 0, 0, FORCECAST, NULL); - } - else { - Py_INCREF(ind); - obj = ind; - } - - if (obj != NULL && PyArray_Check(obj)) { - /* Check for Boolean object */ - if (PyArray_TYPE(obj)==PyArray_BOOL) { - if (iter_ass_sub_Bool(self, (PyArrayObject *)obj, - val_it, swap) < 0) { - goto finish; - } - retval=0; - } - /* Check for integer array */ - else if 
(PyArray_ISINTEGER(obj)) { - PyObject *new; - Py_INCREF(indtype); - new = PyArray_CheckFromAny(obj, indtype, 0, 0, - FORCECAST | BEHAVED_NS, NULL); - Py_DECREF(obj); - obj = new; - if (new == NULL) { - goto finish; - } - if (iter_ass_sub_int(self, (PyArrayObject *)obj, - val_it, swap) < 0) { - goto finish; - } - retval = 0; - } - } - - finish: - if (!PyErr_Occurred() && retval < 0) { - PyErr_SetString(PyExc_IndexError, "unsupported iterator index"); - } - Py_XDECREF(indtype); - Py_XDECREF(obj); - Py_XDECREF(val_it); - Py_XDECREF(arrval); - return retval; - -} - - -static PyMappingMethods iter_as_mapping = { -#if PY_VERSION_HEX >= 0x02050000 - (lenfunc)iter_length, /*mp_length*/ -#else - (inquiry)iter_length, /*mp_length*/ -#endif - (binaryfunc)iter_subscript, /*mp_subscript*/ - (objobjargproc)iter_ass_subscript, /*mp_ass_subscript*/ -}; - - - -static PyObject * -iter_array(PyArrayIterObject *it, PyObject *NPY_UNUSED(op)) -{ - - PyObject *r; - intp size; - - /* Any argument ignored */ - - /* Two options: - * 1) underlying array is contiguous - * -- return 1-d wrapper around it - * 2) underlying array is not contiguous - * -- make new 1-d contiguous array with updateifcopy flag set - * to copy back to the old array - */ - size = PyArray_SIZE(it->ao); - Py_INCREF(it->ao->descr); - if (PyArray_ISCONTIGUOUS(it->ao)) { - r = PyArray_NewFromDescr(&PyArray_Type, - it->ao->descr, - 1, &size, - NULL, it->ao->data, - it->ao->flags, - (PyObject *)it->ao); - if (r == NULL) { - return NULL; - } - } - else { - r = PyArray_NewFromDescr(&PyArray_Type, - it->ao->descr, - 1, &size, - NULL, NULL, - 0, (PyObject *)it->ao); - if (r == NULL) { - return NULL; - } - if (_flat_copyinto(r, (PyObject *)it->ao, - PyArray_CORDER) < 0) { - Py_DECREF(r); - return NULL; - } - PyArray_FLAGS(r) |= UPDATEIFCOPY; - it->ao->flags &= ~WRITEABLE; - } - Py_INCREF(it->ao); - PyArray_BASE(r) = (PyObject *)it->ao; - return r; - -} - -static PyObject * -iter_copy(PyArrayIterObject *it, PyObject *args) -{ - if 
(!PyArg_ParseTuple(args, "")) { - return NULL; - } - return PyArray_Flatten(it->ao, 0); -} - -static PyMethodDef iter_methods[] = { - /* to get array */ - {"__array__", - (PyCFunction)iter_array, - METH_VARARGS, NULL}, - {"copy", - (PyCFunction)iter_copy, - METH_VARARGS, NULL}, - {NULL, NULL, 0, NULL} /* sentinel */ -}; - -static PyObject * -iter_richcompare(PyArrayIterObject *self, PyObject *other, int cmp_op) -{ - PyArrayObject *new; - PyObject *ret; - new = (PyArrayObject *)iter_array(self, NULL); - if (new == NULL) { - return NULL; - } - ret = array_richcompare(new, other, cmp_op); - Py_DECREF(new); - return ret; -} - - -static PyMemberDef iter_members[] = { - {"base", - T_OBJECT, - offsetof(PyArrayIterObject, ao), - READONLY, NULL}, - {"index", - T_INT, - offsetof(PyArrayIterObject, index), - READONLY, NULL}, - {NULL, 0, 0, 0, NULL}, -}; - -static PyObject * -iter_coords_get(PyArrayIterObject *self) -{ - int nd; - nd = self->ao->nd; - if (self->contiguous) { - /* - * coordinates not kept track of --- - * need to generate from index - */ - intp val; - int i; - val = self->index; - for (i = 0; i < nd; i++) { - if (self->factors[i] != 0) { - self->coordinates[i] = val / self->factors[i]; - val = val % self->factors[i]; - } else { - self->coordinates[i] = 0; - } - } - } - return PyArray_IntTupleFromIntp(nd, self->coordinates); -} - -static PyGetSetDef iter_getsets[] = { - {"coords", - (getter)iter_coords_get, - NULL, - NULL, NULL}, - {NULL, NULL, NULL, NULL, NULL}, -}; - -NPY_NO_EXPORT PyTypeObject PyArrayIter_Type = { -#if defined(NPY_PY3K) - PyVarObject_HEAD_INIT(NULL, 0) -#else - PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ -#endif - "numpy.flatiter", /* tp_name */ - sizeof(PyArrayIterObject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)arrayiter_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ -#if defined(NPY_PY3K) - 0, /* tp_reserved */ -#else - 0, /* tp_compare */ -#endif - 0, /* tp_repr */ 
- 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - &iter_as_mapping, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - (richcmpfunc)iter_richcompare, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - (iternextfunc)arrayiter_next, /* tp_iternext */ - iter_methods, /* tp_methods */ - iter_members, /* tp_members */ - iter_getsets, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* tp_version_tag */ -#endif -}; - -/** END of Array Iterator **/ - -/* Adjust dimensionality and strides for index object iterators - --- i.e. 
broadcast -*/ -/*NUMPY_API*/ -NPY_NO_EXPORT int -PyArray_Broadcast(PyArrayMultiIterObject *mit) -{ - int i, nd, k, j; - intp tmp; - PyArrayIterObject *it; - - /* Discover the broadcast number of dimensions */ - for (i = 0, nd = 0; i < mit->numiter; i++) { - nd = MAX(nd, mit->iters[i]->ao->nd); - } - mit->nd = nd; - - /* Discover the broadcast shape in each dimension */ - for (i = 0; i < nd; i++) { - mit->dimensions[i] = 1; - for (j = 0; j < mit->numiter; j++) { - it = mit->iters[j]; - /* This prepends 1 to shapes not already equal to nd */ - k = i + it->ao->nd - nd; - if (k >= 0) { - tmp = it->ao->dimensions[k]; - if (tmp == 1) { - continue; - } - if (mit->dimensions[i] == 1) { - mit->dimensions[i] = tmp; - } - else if (mit->dimensions[i] != tmp) { - PyErr_SetString(PyExc_ValueError, - "shape mismatch: objects" \ - " cannot be broadcast" \ - " to a single shape"); - return -1; - } - } - } - } - - /* - * Reset the iterator dimensions and strides of each iterator - * object -- using 0 valued strides for broadcasting - * Need to check for overflow - */ - tmp = PyArray_OverflowMultiplyList(mit->dimensions, mit->nd); - if (tmp < 0) { - PyErr_SetString(PyExc_ValueError, - "broadcast dimensions too large."); - return -1; - } - mit->size = tmp; - for (i = 0; i < mit->numiter; i++) { - it = mit->iters[i]; - it->nd_m1 = mit->nd - 1; - it->size = tmp; - nd = it->ao->nd; - it->factors[mit->nd-1] = 1; - for (j = 0; j < mit->nd; j++) { - it->dims_m1[j] = mit->dimensions[j] - 1; - k = j + nd - mit->nd; - /* - * If this dimension was added or shape of - * underlying array was 1 - */ - if ((k < 0) || - it->ao->dimensions[k] != mit->dimensions[j]) { - it->contiguous = 0; - it->strides[j] = 0; - } - else { - it->strides[j] = it->ao->strides[k]; - } - it->backstrides[j] = it->strides[j] * it->dims_m1[j]; - if (j > 0) - it->factors[mit->nd-j-1] = - it->factors[mit->nd-j] * mit->dimensions[mit->nd-j]; - } - PyArray_ITER_RESET(it); - } - return 0; -} - -/*NUMPY_API - * Get MultiIterator 
from array of Python objects and any additional - * - * PyObject **mps -- array of PyObjects - * int n - number of PyObjects in the array - * int nadd - number of additional arrays to include in the iterator. - * - * Returns a multi-iterator object. - */ -NPY_NO_EXPORT PyObject * -PyArray_MultiIterFromObjects(PyObject **mps, int n, int nadd, ...) -{ - va_list va; - PyArrayMultiIterObject *multi; - PyObject *current; - PyObject *arr; - - int i, ntot, err=0; - - ntot = n + nadd; - if (ntot < 2 || ntot > NPY_MAXARGS) { - PyErr_Format(PyExc_ValueError, - "Need between 2 and (%d) " \ - "array objects (inclusive).", NPY_MAXARGS); - return NULL; - } - multi = _pya_malloc(sizeof(PyArrayMultiIterObject)); - if (multi == NULL) { - return PyErr_NoMemory(); - } - PyObject_Init((PyObject *)multi, &PyArrayMultiIter_Type); - - for (i = 0; i < ntot; i++) { - multi->iters[i] = NULL; - } - multi->numiter = ntot; - multi->index = 0; - - va_start(va, nadd); - for (i = 0; i < ntot; i++) { - if (i < n) { - current = mps[i]; - } - else { - current = va_arg(va, PyObject *); - } - arr = PyArray_FROM_O(current); - if (arr == NULL) { - err = 1; - break; - } - else { - multi->iters[i] = (PyArrayIterObject *)PyArray_IterNew(arr); - Py_DECREF(arr); - } - } - va_end(va); - - if (!err && PyArray_Broadcast(multi) < 0) { - err = 1; - } - if (err) { - Py_DECREF(multi); - return NULL; - } - PyArray_MultiIter_RESET(multi); - return (PyObject *)multi; -} - -/*NUMPY_API - * Get MultiIterator, - */ -NPY_NO_EXPORT PyObject * -PyArray_MultiIterNew(int n, ...) 
-{ - va_list va; - PyArrayMultiIterObject *multi; - PyObject *current; - PyObject *arr; - - int i, err = 0; - - if (n < 2 || n > NPY_MAXARGS) { - PyErr_Format(PyExc_ValueError, - "Need between 2 and (%d) " \ - "array objects (inclusive).", NPY_MAXARGS); - return NULL; - } - - /* fprintf(stderr, "multi new...");*/ - - multi = _pya_malloc(sizeof(PyArrayMultiIterObject)); - if (multi == NULL) { - return PyErr_NoMemory(); - } - PyObject_Init((PyObject *)multi, &PyArrayMultiIter_Type); - - for (i = 0; i < n; i++) { - multi->iters[i] = NULL; - } - multi->numiter = n; - multi->index = 0; - - va_start(va, n); - for (i = 0; i < n; i++) { - current = va_arg(va, PyObject *); - arr = PyArray_FROM_O(current); - if (arr == NULL) { - err = 1; - break; - } - else { - multi->iters[i] = (PyArrayIterObject *)PyArray_IterNew(arr); - Py_DECREF(arr); - } - } - va_end(va); - - if (!err && PyArray_Broadcast(multi) < 0) { - err = 1; - } - if (err) { - Py_DECREF(multi); - return NULL; - } - PyArray_MultiIter_RESET(multi); - return (PyObject *)multi; -} - -static PyObject * -arraymultiter_new(PyTypeObject *NPY_UNUSED(subtype), PyObject *args, PyObject *kwds) -{ - - Py_ssize_t n, i; - PyArrayMultiIterObject *multi; - PyObject *arr; - - if (kwds != NULL) { - PyErr_SetString(PyExc_ValueError, - "keyword arguments not accepted."); - return NULL; - } - - n = PyTuple_Size(args); - if (n < 2 || n > NPY_MAXARGS) { - if (PyErr_Occurred()) { - return NULL; - } - PyErr_Format(PyExc_ValueError, - "Need at least two and fewer than (%d) " \ - "array objects.", NPY_MAXARGS); - return NULL; - } - - multi = _pya_malloc(sizeof(PyArrayMultiIterObject)); - if (multi == NULL) { - return PyErr_NoMemory(); - } - PyObject_Init((PyObject *)multi, &PyArrayMultiIter_Type); - - multi->numiter = n; - multi->index = 0; - for (i = 0; i < n; i++) { - multi->iters[i] = NULL; - } - for (i = 0; i < n; i++) { - arr = PyArray_FromAny(PyTuple_GET_ITEM(args, i), NULL, 0, 0, 0, NULL); - if (arr == NULL) { - goto fail; - } - if 
((multi->iters[i] = (PyArrayIterObject *)PyArray_IterNew(arr)) - == NULL) { - goto fail; - } - Py_DECREF(arr); - } - if (PyArray_Broadcast(multi) < 0) { - goto fail; - } - PyArray_MultiIter_RESET(multi); - return (PyObject *)multi; - - fail: - Py_DECREF(multi); - return NULL; -} - -static PyObject * -arraymultiter_next(PyArrayMultiIterObject *multi) -{ - PyObject *ret; - int i, n; - - n = multi->numiter; - ret = PyTuple_New(n); - if (ret == NULL) { - return NULL; - } - if (multi->index < multi->size) { - for (i = 0; i < n; i++) { - PyArrayIterObject *it=multi->iters[i]; - PyTuple_SET_ITEM(ret, i, - PyArray_ToScalar(it->dataptr, it->ao)); - PyArray_ITER_NEXT(it); - } - multi->index++; - return ret; - } - return NULL; -} - -static void -arraymultiter_dealloc(PyArrayMultiIterObject *multi) -{ - int i; - - for (i = 0; i < multi->numiter; i++) { - Py_XDECREF(multi->iters[i]); - } - Py_TYPE(multi)->tp_free((PyObject *)multi); -} - -static PyObject * -arraymultiter_size_get(PyArrayMultiIterObject *self) -{ -#if SIZEOF_INTP <= SIZEOF_LONG - return PyInt_FromLong((long) self->size); -#else - if (self->size < MAX_LONG) { - return PyInt_FromLong((long) self->size); - } - else { - return PyLong_FromLongLong((longlong) self->size); - } -#endif -} - -static PyObject * -arraymultiter_index_get(PyArrayMultiIterObject *self) -{ -#if SIZEOF_INTP <= SIZEOF_LONG - return PyInt_FromLong((long) self->index); -#else - if (self->size < MAX_LONG) { - return PyInt_FromLong((long) self->index); - } - else { - return PyLong_FromLongLong((longlong) self->index); - } -#endif -} - -static PyObject * -arraymultiter_shape_get(PyArrayMultiIterObject *self) -{ - return PyArray_IntTupleFromIntp(self->nd, self->dimensions); -} - -static PyObject * -arraymultiter_iters_get(PyArrayMultiIterObject *self) -{ - PyObject *res; - int i, n; - - n = self->numiter; - res = PyTuple_New(n); - if (res == NULL) { - return res; - } - for (i = 0; i < n; i++) { - Py_INCREF(self->iters[i]); - PyTuple_SET_ITEM(res, i, 
(PyObject *)self->iters[i]); - } - return res; -} - -static PyGetSetDef arraymultiter_getsetlist[] = { - {"size", - (getter)arraymultiter_size_get, - NULL, - NULL, NULL}, - {"index", - (getter)arraymultiter_index_get, - NULL, - NULL, NULL}, - {"shape", - (getter)arraymultiter_shape_get, - NULL, - NULL, NULL}, - {"iters", - (getter)arraymultiter_iters_get, - NULL, - NULL, NULL}, - {NULL, NULL, NULL, NULL, NULL}, -}; - -static PyMemberDef arraymultiter_members[] = { - {"numiter", - T_INT, - offsetof(PyArrayMultiIterObject, numiter), - READONLY, NULL}, - {"nd", - T_INT, - offsetof(PyArrayMultiIterObject, nd), - READONLY, NULL}, - {NULL, 0, 0, 0, NULL}, -}; - -static PyObject * -arraymultiter_reset(PyArrayMultiIterObject *self, PyObject *args) -{ - if (!PyArg_ParseTuple(args, "")) { - return NULL; - } - PyArray_MultiIter_RESET(self); - Py_INCREF(Py_None); - return Py_None; -} - -static PyMethodDef arraymultiter_methods[] = { - {"reset", - (PyCFunction) arraymultiter_reset, - METH_VARARGS, NULL}, - {NULL, NULL, 0, NULL}, /* sentinel */ -}; - -NPY_NO_EXPORT PyTypeObject PyArrayMultiIter_Type = { -#if defined(NPY_PY3K) - PyVarObject_HEAD_INIT(NULL, 0) -#else - PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ -#endif - "numpy.broadcast", /* tp_name */ - sizeof(PyArrayMultiIterObject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)arraymultiter_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ -#if defined(NPY_PY3K) - 0, /* tp_reserved */ -#else - 0, /* tp_compare */ -#endif - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - (iternextfunc)arraymultiter_next, /* tp_iternext */ - 
arraymultiter_methods, /* tp_methods */ - arraymultiter_members, /* tp_members */ - arraymultiter_getsetlist, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - (initproc)0, /* tp_init */ - 0, /* tp_alloc */ - arraymultiter_new, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* tp_version_tag */ -#endif -}; - -/*========================= Neighborhood iterator ======================*/ - -static void neighiter_dealloc(PyArrayNeighborhoodIterObject* iter); - -static char* _set_constant(PyArrayNeighborhoodIterObject* iter, - PyArrayObject *fill) -{ - char *ret; - PyArrayIterObject *ar = iter->_internal_iter; - int storeflags, st; - - ret = PyDataMem_NEW(ar->ao->descr->elsize); - if (ret == NULL) { - PyErr_SetNone(PyExc_MemoryError); - return NULL; - } - - if (PyArray_ISOBJECT(ar->ao)) { - memcpy(ret, fill->data, sizeof(PyObject*)); - Py_INCREF(*(PyObject**)ret); - } else { - /* Non-object types */ - - storeflags = ar->ao->flags; - ar->ao->flags |= BEHAVED; - st = ar->ao->descr->f->setitem((PyObject*)fill, ret, ar->ao); - ar->ao->flags = storeflags; - - if (st < 0) { - PyDataMem_FREE(ret); - return NULL; - } - } - - return ret; -} - -#define _INF_SET_PTR(c) \ - bd = coordinates[c] + p->coordinates[c]; \ - if (bd < p->limits[c][0] || bd > p->limits[c][1]) { \ - return niter->constant; \ - } \ - _coordinates[c] = bd; - -/* set the dataptr from its current coordinates */ -static char* -get_ptr_constant(PyArrayIterObject* _iter, npy_intp *coordinates) -{ - int i; - npy_intp bd, _coordinates[NPY_MAXDIMS]; - PyArrayNeighborhoodIterObject *niter = (PyArrayNeighborhoodIterObject*)_iter; - PyArrayIterObject *p = niter->_internal_iter; - - for(i = 0; i < niter->nd; ++i) { - _INF_SET_PTR(i) - } - - return p->translate(p, _coordinates); -} 
-#undef _INF_SET_PTR - -#define _NPY_IS_EVEN(x) ((x) % 2 == 0) - -/* For an array x of dimension n, and given index i, returns j, 0 <= j < n - * such that x[i] = x[j], with x assumed to be mirrored. For example, for x = - * {1, 2, 3} (n = 3) - * - * index -5 -4 -3 -2 -1 0 1 2 3 4 5 6 - * value 2 3 3 2 1 1 2 3 3 2 1 1 - * - * __npy_pos_remainder(4, 3) will return 1, because x[4] = x[1]*/ -static inline npy_intp -__npy_pos_remainder(npy_intp i, npy_intp n) -{ - npy_intp k, l, j; - - /* Mirror i such that it is guaranteed to be positive */ - if (i < 0) { - i = - i - 1; - } - - /* compute k and l such that i = k * n + l, 0 <= l < n */ - k = i / n; - l = i - k * n; - - if (_NPY_IS_EVEN(k)) { - j = l; - } else { - j = n - 1 - l; - } - return j; -} -#undef _NPY_IS_EVEN - -#define _INF_SET_PTR_MIRROR(c) \ - lb = p->limits[c][0]; \ - bd = coordinates[c] + p->coordinates[c] - lb; \ - _coordinates[c] = lb + __npy_pos_remainder(bd, p->limits_sizes[c]); - -/* set the dataptr from its current coordinates */ -static char* -get_ptr_mirror(PyArrayIterObject* _iter, npy_intp *coordinates) -{ - int i; - npy_intp bd, _coordinates[NPY_MAXDIMS], lb; - PyArrayNeighborhoodIterObject *niter = (PyArrayNeighborhoodIterObject*)_iter; - PyArrayIterObject *p = niter->_internal_iter; - - for(i = 0; i < niter->nd; ++i) { - _INF_SET_PTR_MIRROR(i) - } - - return p->translate(p, _coordinates); -} -#undef _INF_SET_PTR_MIRROR - -/* compute l such that i = k * n + l, 0 <= l < n */ -static inline npy_intp -__npy_euclidean_division(npy_intp i, npy_intp n) -{ - npy_intp l; - - l = i % n; - if (l < 0) { - l += n; - } - return l; -} - -#define _INF_SET_PTR_CIRCULAR(c) \ - lb = p->limits[c][0]; \ - bd = coordinates[c] + p->coordinates[c] - lb; \ - _coordinates[c] = lb + __npy_euclidean_division(bd, p->limits_sizes[c]); - -static char* -get_ptr_circular(PyArrayIterObject* _iter, npy_intp *coordinates) -{ - int i; - npy_intp bd, _coordinates[NPY_MAXDIMS], lb; - PyArrayNeighborhoodIterObject *niter = 
(PyArrayNeighborhoodIterObject*)_iter; - PyArrayIterObject *p = niter->_internal_iter; - - for(i = 0; i < niter->nd; ++i) { - _INF_SET_PTR_CIRCULAR(i) - } - return p->translate(p, _coordinates); -} - -#undef _INF_SET_PTR_CIRCULAR - -/* - * fill and x->ao should have equivalent types - */ -/*NUMPY_API - * A Neighborhood Iterator object. -*/ -NPY_NO_EXPORT PyObject* -PyArray_NeighborhoodIterNew(PyArrayIterObject *x, intp *bounds, - int mode, PyArrayObject* fill) -{ - int i; - PyArrayNeighborhoodIterObject *ret; - - ret = _pya_malloc(sizeof(*ret)); - if (ret == NULL) { - return NULL; - } - PyObject_Init((PyObject *)ret, &PyArrayNeighborhoodIter_Type); - - array_iter_base_init((PyArrayIterObject*)ret, x->ao); - Py_INCREF(x); - ret->_internal_iter = x; - - ret->nd = x->ao->nd; - - for (i = 0; i < ret->nd; ++i) { - ret->dimensions[i] = x->ao->dimensions[i]; - } - - /* Compute the neighborhood size and copy the shape */ - ret->size = 1; - for (i = 0; i < ret->nd; ++i) { - ret->bounds[i][0] = bounds[2 * i]; - ret->bounds[i][1] = bounds[2 * i + 1]; - ret->size *= (ret->bounds[i][1] - ret->bounds[i][0]) + 1; - - /* limits keep track of valid ranges for the neighborhood: if a bound - * of the neighborhood is outside the array, then limits is the same as - * boundaries. On the contrary, if a bound is strictly inside the - * array, then limits correspond to the array range. For example, for - * an array [1, 2, 3], if bounds are [-1, 3], limits will be [-1, 3], - * but if bounds are [1, 2], then limits will be [0, 2]. - * - * This is used by neighborhood iterators stacked on top of this one */ - ret->limits[i][0] = ret->bounds[i][0] < 0 ? ret->bounds[i][0] : 0; - ret->limits[i][1] = ret->bounds[i][1] >= ret->dimensions[i] - 1 ? 
- ret->bounds[i][1] : - ret->dimensions[i] - 1; - ret->limits_sizes[i] = (ret->limits[i][1] - ret->limits[i][0]) + 1; - } - - switch (mode) { - case NPY_NEIGHBORHOOD_ITER_ZERO_PADDING: - ret->constant = PyArray_Zero(x->ao); - ret->mode = mode; - ret->translate = &get_ptr_constant; - break; - case NPY_NEIGHBORHOOD_ITER_ONE_PADDING: - ret->constant = PyArray_One(x->ao); - ret->mode = mode; - ret->translate = &get_ptr_constant; - break; - case NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING: - /* New reference in returned value of _set_constant if array - * object */ - assert(PyArray_EquivArrTypes(x->ao, fill) == NPY_TRUE); - ret->constant = _set_constant(ret, fill); - if (ret->constant == NULL) { - goto clean_x; - } - ret->mode = mode; - ret->translate = &get_ptr_constant; - break; - case NPY_NEIGHBORHOOD_ITER_MIRROR_PADDING: - ret->mode = mode; - ret->constant = NULL; - ret->translate = &get_ptr_mirror; - break; - case NPY_NEIGHBORHOOD_ITER_CIRCULAR_PADDING: - ret->mode = mode; - ret->constant = NULL; - ret->translate = &get_ptr_circular; - break; - default: - PyErr_SetString(PyExc_ValueError, "Unsupported padding mode"); - goto clean_x; - } - - /* - * XXX: we force x iterator to be non contiguous because we need - * coordinates... 
Modifying the iterator here is not great - */ - x->contiguous = 0; - - PyArrayNeighborhoodIter_Reset(ret); - - return (PyObject*)ret; - -clean_x: - Py_DECREF(ret->_internal_iter); - array_iter_base_dealloc((PyArrayIterObject*)ret); - _pya_free((PyArrayObject*)ret); - return NULL; -} - -static void neighiter_dealloc(PyArrayNeighborhoodIterObject* iter) -{ - if (iter->mode == NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING) { - if (PyArray_ISOBJECT(iter->_internal_iter->ao)) { - Py_DECREF(*(PyObject**)iter->constant); - } - } - if (iter->constant != NULL) { - PyDataMem_FREE(iter->constant); - } - Py_DECREF(iter->_internal_iter); - - array_iter_base_dealloc((PyArrayIterObject*)iter); - _pya_free((PyArrayObject*)iter); -} - -NPY_NO_EXPORT PyTypeObject PyArrayNeighborhoodIter_Type = { -#if defined(NPY_PY3K) - PyVarObject_HEAD_INIT(NULL, 0) -#else - PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ -#endif - "numpy.neigh_internal_iter", /* tp_name*/ - sizeof(PyArrayNeighborhoodIterObject), /* tp_basicsize*/ - 0, /* tp_itemsize*/ - (destructor)neighiter_dealloc, /* tp_dealloc*/ - 0, /* tp_print*/ - 0, /* tp_getattr*/ - 0, /* tp_setattr*/ -#if defined(NPY_PY3K) - 0, /* tp_reserved */ -#else - 0, /* tp_compare */ -#endif - 0, /* tp_repr*/ - 0, /* tp_as_number*/ - 0, /* tp_as_sequence*/ - 0, /* tp_as_mapping*/ - 0, /* tp_hash */ - 0, /* tp_call*/ - 0, /* tp_str*/ - 0, /* tp_getattro*/ - 0, /* tp_setattro*/ - 0, /* tp_as_buffer*/ - Py_TPFLAGS_DEFAULT, /* tp_flags*/ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - (iternextfunc)0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - (initproc)0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses 
*/ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* tp_version_tag */ -#endif -}; diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/iterators.h b/pythonPackages/numpy/numpy/core/src/multiarray/iterators.h deleted file mode 100755 index 3099425c55..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/iterators.h +++ /dev/null @@ -1,22 +0,0 @@ -#ifndef _NPY_ARRAYITERATORS_H_ -#define _NPY_ARRAYITERATORS_H_ - -NPY_NO_EXPORT intp -parse_subindex(PyObject *op, intp *step_size, intp *n_steps, intp max); - -NPY_NO_EXPORT int -parse_index(PyArrayObject *self, PyObject *op, - intp *dimensions, intp *strides, intp *offset_ptr); - -NPY_NO_EXPORT PyObject -*iter_subscript(PyArrayIterObject *, PyObject *); - -NPY_NO_EXPORT int -iter_ass_subscript(PyArrayIterObject *, PyObject *, PyObject *); - -NPY_NO_EXPORT int -slice_GetIndices(PySliceObject *r, intp length, - intp *start, intp *stop, intp *step, - intp *slicelength); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/mapping.c b/pythonPackages/numpy/numpy/core/src/multiarray/mapping.c deleted file mode 100755 index 21a630418c..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/mapping.c +++ /dev/null @@ -1,1694 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include -#include "structmember.h" - -/*#include */ -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "common.h" -#include "iterators.h" -#include "mapping.h" - -#define SOBJ_NOTFANCY 0 -#define SOBJ_ISFANCY 1 -#define SOBJ_BADARRAY 2 -#define SOBJ_TOOMANY 3 -#define SOBJ_LISTTUP 4 - -static PyObject * -array_subscript_simple(PyArrayObject *self, PyObject *op); - -/****************************************************************************** - *** IMPLEMENT MAPPING PROTOCOL *** - *****************************************************************************/ - -NPY_NO_EXPORT Py_ssize_t 
-array_length(PyArrayObject *self) -{ - if (self->nd != 0) { - return self->dimensions[0]; - } else { - PyErr_SetString(PyExc_TypeError, "len() of unsized object"); - return -1; - } -} - -NPY_NO_EXPORT PyObject * -array_big_item(PyArrayObject *self, intp i) -{ - char *item; - PyArrayObject *r; - - if(self->nd == 0) { - PyErr_SetString(PyExc_IndexError, - "0-d arrays can't be indexed"); - return NULL; - } - if ((item = index2ptr(self, i)) == NULL) { - return NULL; - } - Py_INCREF(self->descr); - r = (PyArrayObject *)PyArray_NewFromDescr(Py_TYPE(self), - self->descr, - self->nd-1, - self->dimensions+1, - self->strides+1, item, - self->flags, - (PyObject *)self); - if (r == NULL) { - return NULL; - } - Py_INCREF(self); - r->base = (PyObject *)self; - PyArray_UpdateFlags(r, CONTIGUOUS | FORTRAN); - return (PyObject *)r; -} - -NPY_NO_EXPORT int -_array_ass_item(PyArrayObject *self, Py_ssize_t i, PyObject *v) -{ - return array_ass_big_item(self, (intp) i, v); -} -/* contains optimization for 1-d arrays */ -NPY_NO_EXPORT PyObject * -array_item_nice(PyArrayObject *self, Py_ssize_t i) -{ - if (self->nd == 1) { - char *item; - if ((item = index2ptr(self, i)) == NULL) { - return NULL; - } - return PyArray_Scalar(item, self->descr, (PyObject *)self); - } - else { - return PyArray_Return( - (PyArrayObject *) array_big_item(self, (intp) i)); - } -} - -NPY_NO_EXPORT int -array_ass_big_item(PyArrayObject *self, intp i, PyObject *v) -{ - PyArrayObject *tmp; - char *item; - int ret; - - if (v == NULL) { - PyErr_SetString(PyExc_ValueError, - "can't delete array elements"); - return -1; - } - if (!PyArray_ISWRITEABLE(self)) { - PyErr_SetString(PyExc_RuntimeError, - "array is not writeable"); - return -1; - } - if (self->nd == 0) { - PyErr_SetString(PyExc_IndexError, - "0-d arrays can't be indexed."); - return -1; - } - - - if (self->nd > 1) { - if((tmp = (PyArrayObject *)array_big_item(self, i)) == NULL) { - return -1; - } - ret = PyArray_CopyObject(tmp, v); - Py_DECREF(tmp); - return 
ret; - } - - if ((item = index2ptr(self, i)) == NULL) { - return -1; - } - if (self->descr->f->setitem(v, item, self) == -1) { - return -1; - } - return 0; -} - -/* -------------------------------------------------------------- */ - -static void -_swap_axes(PyArrayMapIterObject *mit, PyArrayObject **ret, int getmap) -{ - PyObject *new; - int n1, n2, n3, val, bnd; - int i; - PyArray_Dims permute; - intp d[MAX_DIMS]; - PyArrayObject *arr; - - permute.ptr = d; - permute.len = mit->nd; - - /* - * arr might not have the right number of dimensions - * and need to be reshaped first by pre-pending ones - */ - arr = *ret; - if (arr->nd != mit->nd) { - for (i = 1; i <= arr->nd; i++) { - permute.ptr[mit->nd-i] = arr->dimensions[arr->nd-i]; - } - for (i = 0; i < mit->nd-arr->nd; i++) { - permute.ptr[i] = 1; - } - new = PyArray_Newshape(arr, &permute, PyArray_ANYORDER); - Py_DECREF(arr); - *ret = (PyArrayObject *)new; - if (new == NULL) { - return; - } - } - - /* - * Setting and getting need to have different permutations. - * On the get we are permuting the returned object, but on - * setting we are permuting the object-to-be-set. - * The set permutation is the inverse of the get permutation. - */ - - /* - * For getting the array the tuple for transpose is - * (n1,...,n1+n2-1,0,...,n1-1,n1+n2,...,n3-1) - * n1 is the number of dimensions of the broadcast index array - * n2 is the number of dimensions skipped at the start - * n3 is the number of dimensions of the result - */ - - /* - * For setting the array the tuple for transpose is - * (n2,...,n1+n2-1,0,...,n2-1,n1+n2,...n3-1) - */ - n1 = mit->iters[0]->nd_m1 + 1; - n2 = mit->iteraxes[0]; - n3 = mit->nd; - - /* use n1 as the boundary if getting but n2 if setting */ - bnd = getmap ? 
n1 : n2; - val = bnd; - i = 0; - while (val < n1 + n2) { - permute.ptr[i++] = val++; - } - val = 0; - while (val < bnd) { - permute.ptr[i++] = val++; - } - val = n1 + n2; - while (val < n3) { - permute.ptr[i++] = val++; - } - new = PyArray_Transpose(*ret, &permute); - Py_DECREF(*ret); - *ret = (PyArrayObject *)new; -} - -static PyObject * -PyArray_GetMap(PyArrayMapIterObject *mit) -{ - - PyArrayObject *ret, *temp; - PyArrayIterObject *it; - int index; - int swap; - PyArray_CopySwapFunc *copyswap; - - /* Unbound map iterator --- Bind should have been called */ - if (mit->ait == NULL) { - return NULL; - } - - /* This relies on the map iterator object telling us the shape - of the new array in nd and dimensions. - */ - temp = mit->ait->ao; - Py_INCREF(temp->descr); - ret = (PyArrayObject *) - PyArray_NewFromDescr(Py_TYPE(temp), - temp->descr, - mit->nd, mit->dimensions, - NULL, NULL, - PyArray_ISFORTRAN(temp), - (PyObject *)temp); - if (ret == NULL) { - return NULL; - } - - /* - * Now just iterate through the new array filling it in - * with the next object from the original array as - * defined by the mapping iterator - */ - - if ((it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)ret)) == NULL) { - Py_DECREF(ret); - return NULL; - } - index = it->size; - swap = (PyArray_ISNOTSWAPPED(temp) != PyArray_ISNOTSWAPPED(ret)); - copyswap = ret->descr->f->copyswap; - PyArray_MapIterReset(mit); - while (index--) { - copyswap(it->dataptr, mit->dataptr, swap, ret); - PyArray_MapIterNext(mit); - PyArray_ITER_NEXT(it); - } - Py_DECREF(it); - - /* check for consecutive axes */ - if ((mit->subspace != NULL) && (mit->consec)) { - if (mit->iteraxes[0] > 0) { /* then we need to swap */ - _swap_axes(mit, &ret, 1); - } - } - return (PyObject *)ret; -} - -static int -PyArray_SetMap(PyArrayMapIterObject *mit, PyObject *op) -{ - PyObject *arr = NULL; - PyArrayIterObject *it; - int index; - int swap; - PyArray_CopySwapFunc *copyswap; - PyArray_Descr *descr; - - /* Unbound Map Iterator 
*/ - if (mit->ait == NULL) { - return -1; - } - descr = mit->ait->ao->descr; - Py_INCREF(descr); - arr = PyArray_FromAny(op, descr, 0, 0, FORCECAST, NULL); - if (arr == NULL) { - return -1; - } - if ((mit->subspace != NULL) && (mit->consec)) { - if (mit->iteraxes[0] > 0) { /* then we need to swap */ - _swap_axes(mit, (PyArrayObject **)&arr, 0); - if (arr == NULL) { - return -1; - } - } - } - - /* Be sure values array is "broadcastable" - to shape of mit->dimensions, mit->nd */ - - if ((it = (PyArrayIterObject *)\ - PyArray_BroadcastToShape(arr, mit->dimensions, mit->nd))==NULL) { - Py_DECREF(arr); - return -1; - } - - index = mit->size; - swap = (PyArray_ISNOTSWAPPED(mit->ait->ao) != - (PyArray_ISNOTSWAPPED(arr))); - copyswap = PyArray_DESCR(arr)->f->copyswap; - PyArray_MapIterReset(mit); - /* Need to decref arrays with objects in them */ - if (PyDataType_FLAGCHK(descr, NPY_ITEM_HASOBJECT)) { - while (index--) { - PyArray_Item_INCREF(it->dataptr, PyArray_DESCR(arr)); - PyArray_Item_XDECREF(mit->dataptr, PyArray_DESCR(arr)); - memmove(mit->dataptr, it->dataptr, PyArray_ITEMSIZE(arr)); - /* ignored unless VOID array with object's */ - if (swap) { - copyswap(mit->dataptr, NULL, swap, arr); - } - PyArray_MapIterNext(mit); - PyArray_ITER_NEXT(it); - } - Py_DECREF(arr); - Py_DECREF(it); - return 0; - } - while(index--) { - memmove(mit->dataptr, it->dataptr, PyArray_ITEMSIZE(arr)); - if (swap) { - copyswap(mit->dataptr, NULL, swap, arr); - } - PyArray_MapIterNext(mit); - PyArray_ITER_NEXT(it); - } - Py_DECREF(arr); - Py_DECREF(it); - return 0; -} - -NPY_NO_EXPORT int -count_new_axes_0d(PyObject *tuple) -{ - int i, argument_count; - int ellipsis_count = 0; - int newaxis_count = 0; - - argument_count = PyTuple_GET_SIZE(tuple); - for (i = 0; i < argument_count; ++i) { - PyObject *arg = PyTuple_GET_ITEM(tuple, i); - if (arg == Py_Ellipsis && !ellipsis_count) { - ellipsis_count++; - } - else if (arg == Py_None) { - newaxis_count++; - } - else { - break; - } - } - if (i < 
argument_count) { - PyErr_SetString(PyExc_IndexError, - "0-d arrays can only use a single ()" - " or a list of newaxes (and a single ...)" - " as an index"); - return -1; - } - if (newaxis_count > MAX_DIMS) { - PyErr_SetString(PyExc_IndexError, "too many dimensions"); - return -1; - } - return newaxis_count; -} - -NPY_NO_EXPORT PyObject * -add_new_axes_0d(PyArrayObject *arr, int newaxis_count) -{ - PyArrayObject *other; - intp dimensions[MAX_DIMS]; - int i; - - for (i = 0; i < newaxis_count; ++i) { - dimensions[i] = 1; - } - Py_INCREF(arr->descr); - if ((other = (PyArrayObject *) - PyArray_NewFromDescr(Py_TYPE(arr), arr->descr, - newaxis_count, dimensions, - NULL, arr->data, - arr->flags, - (PyObject *)arr)) == NULL) - return NULL; - other->base = (PyObject *)arr; - Py_INCREF(arr); - return (PyObject *)other; -} - - -/* This checks the args for any fancy indexing objects */ - -static int -fancy_indexing_check(PyObject *args) -{ - int i, n; - PyObject *obj; - int retval = SOBJ_NOTFANCY; - - if (PyTuple_Check(args)) { - n = PyTuple_GET_SIZE(args); - if (n >= MAX_DIMS) { - return SOBJ_TOOMANY; - } - for (i = 0; i < n; i++) { - obj = PyTuple_GET_ITEM(args,i); - if (PyArray_Check(obj)) { - if (PyArray_ISINTEGER(obj) || - PyArray_ISBOOL(obj)) { - retval = SOBJ_ISFANCY; - } - else { - retval = SOBJ_BADARRAY; - break; - } - } - else if (PySequence_Check(obj)) { - retval = SOBJ_ISFANCY; - } - } - } - else if (PyArray_Check(args)) { - if ((PyArray_TYPE(args)==PyArray_BOOL) || - (PyArray_ISINTEGER(args))) { - return SOBJ_ISFANCY; - } - else { - return SOBJ_BADARRAY; - } - } - else if (PySequence_Check(args)) { - /* - * Sequences < MAX_DIMS with any slice objects - * or newaxis, or Ellipsis is considered standard - * as long as there are also no Arrays and or additional - * sequences embedded. 
- */ - retval = SOBJ_ISFANCY; - n = PySequence_Size(args); - if (n < 0 || n >= MAX_DIMS) { - return SOBJ_ISFANCY; - } - for (i = 0; i < n; i++) { - obj = PySequence_GetItem(args, i); - if (obj == NULL) { - return SOBJ_ISFANCY; - } - if (PyArray_Check(obj)) { - if (PyArray_ISINTEGER(obj) || PyArray_ISBOOL(obj)) { - retval = SOBJ_LISTTUP; - } - else { - retval = SOBJ_BADARRAY; - } - } - else if (PySequence_Check(obj)) { - retval = SOBJ_LISTTUP; - } - else if (PySlice_Check(obj) || obj == Py_Ellipsis || - obj == Py_None) { - retval = SOBJ_NOTFANCY; - } - Py_DECREF(obj); - if (retval > SOBJ_ISFANCY) { - return retval; - } - } - } - return retval; -} - -/* - * Called when treating array object like a mapping -- called first from - * Python when using a[object] unless object is a standard slice object - * (not an extended one). - * - * There are two situations: - * - * 1 - the subscript is a standard view and a reference to the - * array can be returned - * - * 2 - the subscript uses Boolean masks or integer indexing and - * therefore a new array is created and returned. 
- */ - -NPY_NO_EXPORT PyObject * -array_subscript_simple(PyArrayObject *self, PyObject *op) -{ - intp dimensions[MAX_DIMS], strides[MAX_DIMS]; - intp offset; - int nd; - PyArrayObject *other; - intp value; - - value = PyArray_PyIntAsIntp(op); - if (!PyErr_Occurred()) { - return array_big_item(self, value); - } - PyErr_Clear(); - - /* Standard (view-based) Indexing */ - if ((nd = parse_index(self, op, dimensions, strides, &offset)) == -1) { - return NULL; - } - /* This will only work if new array will be a view */ - Py_INCREF(self->descr); - if ((other = (PyArrayObject *) - PyArray_NewFromDescr(Py_TYPE(self), self->descr, - nd, dimensions, - strides, self->data+offset, - self->flags, - (PyObject *)self)) == NULL) { - return NULL; - } - other->base = (PyObject *)self; - Py_INCREF(self); - PyArray_UpdateFlags(other, UPDATE_ALL); - return (PyObject *)other; -} - -NPY_NO_EXPORT PyObject * -array_subscript(PyArrayObject *self, PyObject *op) -{ - int nd, fancy; - PyArrayObject *other; - PyArrayMapIterObject *mit; - PyObject *obj; - - if (PyString_Check(op) || PyUnicode_Check(op)) { - PyObject *temp; - - if (self->descr->names) { - obj = PyDict_GetItem(self->descr->fields, op); - if (obj != NULL) { - PyArray_Descr *descr; - int offset; - PyObject *title; - - if (PyArg_ParseTuple(obj, "Oi|O", &descr, &offset, &title)) { - Py_INCREF(descr); - return PyArray_GetField(self, descr, offset); - } - } - } - - temp = op; - if (PyUnicode_Check(op)) { - temp = PyUnicode_AsUnicodeEscapeString(op); - } - PyErr_Format(PyExc_ValueError, - "field named %s not found.", - PyBytes_AsString(temp)); - if (temp != op) { - Py_DECREF(temp); - } - return NULL; - } - - /* Check for multiple field access */ - if (self->descr->names && PySequence_Check(op) && !PyTuple_Check(op)) { - int seqlen, i; - seqlen = PySequence_Size(op); - for (i = 0; i < seqlen; i++) { - obj = PySequence_GetItem(op, i); - if (!PyString_Check(obj) && !PyUnicode_Check(obj)) { - Py_DECREF(obj); - break; - } - Py_DECREF(obj); - 
} - /* - * extract multiple fields if all elements in sequence - * are either string or unicode (i.e. no break occurred). - */ - fancy = ((seqlen > 0) && (i == seqlen)); - if (fancy) { - PyObject *_numpy_internal; - _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal == NULL) { - return NULL; - } - obj = PyObject_CallMethod(_numpy_internal, - "_index_fields", "OO", self, op); - Py_DECREF(_numpy_internal); - return obj; - } - } - - if (op == Py_Ellipsis) { - Py_INCREF(self); - return (PyObject *)self; - } - - if (self->nd == 0) { - if (op == Py_None) { - return add_new_axes_0d(self, 1); - } - if (PyTuple_Check(op)) { - if (0 == PyTuple_GET_SIZE(op)) { - Py_INCREF(self); - return (PyObject *)self; - } - if ((nd = count_new_axes_0d(op)) == -1) { - return NULL; - } - return add_new_axes_0d(self, nd); - } - /* Allow Boolean mask selection also */ - if ((PyArray_Check(op) && (PyArray_DIMS(op)==0) - && PyArray_ISBOOL(op))) { - if (PyObject_IsTrue(op)) { - Py_INCREF(self); - return (PyObject *)self; - } - else { - intp oned = 0; - Py_INCREF(self->descr); - return PyArray_NewFromDescr(Py_TYPE(self), - self->descr, - 1, &oned, - NULL, NULL, - NPY_DEFAULT, - NULL); - } - } - PyErr_SetString(PyExc_IndexError, "0-d arrays can't be indexed."); - return NULL; - } - - fancy = fancy_indexing_check(op); - if (fancy != SOBJ_NOTFANCY) { - int oned; - - oned = ((self->nd == 1) && - !(PyTuple_Check(op) && PyTuple_GET_SIZE(op) > 1)); - - /* wrap arguments into a mapiter object */ - mit = (PyArrayMapIterObject *) PyArray_MapIterNew(op, oned, fancy); - if (mit == NULL) { - return NULL; - } - if (oned) { - PyArrayIterObject *it; - PyObject *rval; - it = (PyArrayIterObject *) PyArray_IterNew((PyObject *)self); - if (it == NULL) { - Py_DECREF(mit); - return NULL; - } - rval = iter_subscript(it, mit->indexobj); - Py_DECREF(it); - Py_DECREF(mit); - return rval; - } - PyArray_MapIterBind(mit, self); - other = (PyArrayObject *)PyArray_GetMap(mit); - 
Py_DECREF(mit); - return (PyObject *)other; - } - - return array_subscript_simple(self, op); -} - - -/* - * Another assignment hacked by using CopyObject. - * This only works if subscript returns a standard view. - * Again there are two cases. In the first case, PyArray_CopyObject - * can be used. In the second case, a new indexing function has to be - * used. - */ - -static int -array_ass_sub_simple(PyArrayObject *self, PyObject *index, PyObject *op) -{ - int ret; - PyArrayObject *tmp; - intp value; - - value = PyArray_PyIntAsIntp(index); - if (!error_converting(value)) { - return array_ass_big_item(self, value, op); - } - PyErr_Clear(); - - /* Rest of standard (view-based) indexing */ - - if (PyArray_CheckExact(self)) { - tmp = (PyArrayObject *)array_subscript_simple(self, index); - if (tmp == NULL) { - return -1; - } - } - else { - PyObject *tmp0; - - /* - * Note: this code path should never be reached with an index that - * produces scalars -- those are handled earlier in array_ass_sub - */ - - tmp0 = PyObject_GetItem((PyObject *)self, index); - if (tmp0 == NULL) { - return -1; - } - if (!PyArray_Check(tmp0)) { - PyErr_SetString(PyExc_RuntimeError, - "Getitem not returning array."); - Py_DECREF(tmp0); - return -1; - } - tmp = (PyArrayObject *)tmp0; - } - - if (PyArray_ISOBJECT(self) && (tmp->nd == 0)) { - ret = tmp->descr->f->setitem(op, tmp->data, tmp); - } - else { - ret = PyArray_CopyObject(tmp, op); - } - Py_DECREF(tmp); - return ret; -} - - -/* return -1 if tuple-object seq is not a tuple of integers. 
- otherwise fill vals with converted integers
-*/
-static int
-_tuple_of_integers(PyObject *seq, intp *vals, int maxvals)
-{
-    int i;
-    PyObject *obj;
-    intp temp;
-
-    for (i = 0; i < maxvals; i++) {
-        obj = PyTuple_GET_ITEM(seq, i);
-        if ((PyArray_Check(obj) && PyArray_NDIM(obj) > 0)
-            || PyList_Check(obj)) {
-            return -1;
-        }
-        temp = PyArray_PyIntAsIntp(obj);
-        if (error_converting(temp)) {
-            return -1;
-        }
-        vals[i] = temp;
-    }
-    return 0;
-}
-
-
-static int
-array_ass_sub(PyArrayObject *self, PyObject *index, PyObject *op)
-{
-    int ret, oned, fancy;
-    PyArrayMapIterObject *mit;
-    intp vals[MAX_DIMS];
-
-    if (op == NULL) {
-        PyErr_SetString(PyExc_ValueError,
-                        "cannot delete array elements");
-        return -1;
-    }
-    if (!PyArray_ISWRITEABLE(self)) {
-        PyErr_SetString(PyExc_RuntimeError,
-                        "array is not writeable");
-        return -1;
-    }
-
-    if (PyInt_Check(index) || PyArray_IsScalar(index, Integer) ||
-        PyLong_Check(index) || (PyIndex_Check(index) &&
-                                !PySequence_Check(index))) {
-        intp value;
-        value = PyArray_PyIntAsIntp(index);
-        if (PyErr_Occurred()) {
-            PyErr_Clear();
-        }
-        else {
-            return array_ass_big_item(self, value, op);
-        }
-    }
-
-    if (PyString_Check(index) || PyUnicode_Check(index)) {
-        if (self->descr->names) {
-            PyObject *obj;
-
-            obj = PyDict_GetItem(self->descr->fields, index);
-            if (obj != NULL) {
-                PyArray_Descr *descr;
-                int offset;
-                PyObject *title;
-
-                if (PyArg_ParseTuple(obj, "Oi|O", &descr, &offset, &title)) {
-                    Py_INCREF(descr);
-                    return PyArray_SetField(self, descr, offset, op);
-                }
-            }
-        }
-
-        PyErr_Format(PyExc_ValueError,
-                     "field named %s not found.",
-                     PyString_AsString(index));
-        return -1;
-    }
-
-    if (self->nd == 0) {
-        /*
-         * Several different exceptions to the 0-d no-indexing rule
-         *
-         * 1) ellipses
-         * 2) empty tuple
-         * 3) Using newaxis (None)
-         * 4) Boolean mask indexing
-         */
-        if (index == Py_Ellipsis || index == Py_None ||
-            (PyTuple_Check(index) && (0 == PyTuple_GET_SIZE(index) ||
-                                      count_new_axes_0d(index) > 0))) {
-            return self->descr->f->setitem(op, self->data, self);
-        }
-        if (PyBool_Check(index) || PyArray_IsScalar(index, 
Bool) || - (PyArray_Check(index) && (PyArray_DIMS(index)==0) && - PyArray_ISBOOL(index))) { - if (PyObject_IsTrue(index)) { - return self->descr->f->setitem(op, self->data, self); - } - else { /* don't do anything */ - return 0; - } - } - PyErr_SetString(PyExc_IndexError, "0-d arrays can't be indexed."); - return -1; - } - - /* Integer-tuple */ - if (PyTuple_Check(index) && (PyTuple_GET_SIZE(index) == self->nd) - && (_tuple_of_integers(index, vals, self->nd) >= 0)) { - int i; - char *item; - - for (i = 0; i < self->nd; i++) { - if (vals[i] < 0) { - vals[i] += self->dimensions[i]; - } - if ((vals[i] < 0) || (vals[i] >= self->dimensions[i])) { - PyErr_Format(PyExc_IndexError, - "index (%"INTP_FMT") out of range "\ - "(0<=index<%"INTP_FMT") in dimension %d", - vals[i], self->dimensions[i], i); - return -1; - } - } - item = PyArray_GetPtr(self, vals); - return self->descr->f->setitem(op, item, self); - } - PyErr_Clear(); - - fancy = fancy_indexing_check(index); - if (fancy != SOBJ_NOTFANCY) { - oned = ((self->nd == 1) && - !(PyTuple_Check(index) && PyTuple_GET_SIZE(index) > 1)); - mit = (PyArrayMapIterObject *) PyArray_MapIterNew(index, oned, fancy); - if (mit == NULL) { - return -1; - } - if (oned) { - PyArrayIterObject *it; - int rval; - - it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)self); - if (it == NULL) { - Py_DECREF(mit); - return -1; - } - rval = iter_ass_subscript(it, mit->indexobj, op); - Py_DECREF(it); - Py_DECREF(mit); - return rval; - } - PyArray_MapIterBind(mit, self); - ret = PyArray_SetMap(mit, op); - Py_DECREF(mit); - return ret; - } - - return array_ass_sub_simple(self, index, op); -} - - -/* - * There are places that require that array_subscript return a PyArrayObject - * and not possibly a scalar. 
Thus, this is the function exposed to
- * Python so that 0-dim arrays are passed as scalars
- */
-
-
-static PyObject *
-array_subscript_nice(PyArrayObject *self, PyObject *op)
-{
-
-    PyArrayObject *mp;
-    intp vals[MAX_DIMS];
-
-    if (PyInt_Check(op) || PyArray_IsScalar(op, Integer) ||
-        PyLong_Check(op) || (PyIndex_Check(op) &&
-                             !PySequence_Check(op))) {
-        intp value;
-        value = PyArray_PyIntAsIntp(op);
-        if (PyErr_Occurred()) {
-            PyErr_Clear();
-        }
-        else {
-            return array_item_nice(self, (Py_ssize_t) value);
-        }
-    }
-    /* optimization for a tuple of integers */
-    if (self->nd > 1 && PyTuple_Check(op) &&
-        (PyTuple_GET_SIZE(op) == self->nd)
-        && (_tuple_of_integers(op, vals, self->nd) >= 0)) {
-        int i;
-        char *item;
-
-        for (i = 0; i < self->nd; i++) {
-            if (vals[i] < 0) {
-                vals[i] += self->dimensions[i];
-            }
-            if ((vals[i] < 0) || (vals[i] >= self->dimensions[i])) {
-                PyErr_Format(PyExc_IndexError,
-                             "index (%"INTP_FMT") out of range "\
-                             "(0<=index<%"INTP_FMT") in dimension %d",
-                             vals[i], self->dimensions[i], i);
-                return NULL;
-            }
-        }
-        item = PyArray_GetPtr(self, vals);
-        return PyArray_Scalar(item, self->descr, (PyObject *)self);
-    }
-    PyErr_Clear();
-
-    mp = (PyArrayObject *)array_subscript(self, op);
-    /*
-     * mp could be a scalar if op is not an Int, Scalar, Long or other Index
-     * object and is still convertible to an integer (so that the code goes to
-     * array_subscript_simple). So, this cast is a bit dangerous.
-     */
-
-    /*
-     * The following is just a copy of PyArray_Return with
-     * additional logic in the nd == 0 case. 
- */ - - if (mp == NULL) { - return NULL; - } - if (PyErr_Occurred()) { - Py_XDECREF(mp); - return NULL; - } - if (PyArray_Check(mp) && mp->nd == 0) { - Bool noellipses = TRUE; - if ((op == Py_Ellipsis) || PyString_Check(op) || PyUnicode_Check(op)) { - noellipses = FALSE; - } - else if (PyBool_Check(op) || PyArray_IsScalar(op, Bool) || - (PyArray_Check(op) && (PyArray_DIMS(op)==0) && - PyArray_ISBOOL(op))) { - noellipses = FALSE; - } - else if (PySequence_Check(op)) { - Py_ssize_t n, i; - PyObject *temp; - - n = PySequence_Size(op); - i = 0; - while (i < n && noellipses) { - temp = PySequence_GetItem(op, i); - if (temp == Py_Ellipsis) { - noellipses = FALSE; - } - Py_DECREF(temp); - i++; - } - } - if (noellipses) { - PyObject *ret; - ret = PyArray_ToScalar(mp->data, mp); - Py_DECREF(mp); - return ret; - } - } - return (PyObject *)mp; -} - - -NPY_NO_EXPORT PyMappingMethods array_as_mapping = { -#if PY_VERSION_HEX >= 0x02050000 - (lenfunc)array_length, /*mp_length*/ -#else - (inquiry)array_length, /*mp_length*/ -#endif - (binaryfunc)array_subscript_nice, /*mp_subscript*/ - (objobjargproc)array_ass_sub, /*mp_ass_subscript*/ -}; - -/****************** End of Mapping Protocol ******************************/ - -/*********************** Subscript Array Iterator ************************* - * * - * This object handles subscript behavior for array objects. 
* - * It is an iterator object with a next method * - * It abstracts the n-dimensional mapping behavior to make the looping * - * code more understandable (maybe) * - * and so that indexing can be set up ahead of time * - */ - -/* - * This function takes a Boolean array and constructs index objects and - * iterators as if nonzero(Bool) had been called - */ -static int -_nonzero_indices(PyObject *myBool, PyArrayIterObject **iters) -{ - PyArray_Descr *typecode; - PyArrayObject *ba = NULL, *new = NULL; - int nd, j; - intp size, i, count; - Bool *ptr; - intp coords[MAX_DIMS], dims_m1[MAX_DIMS]; - intp *dptr[MAX_DIMS]; - - typecode=PyArray_DescrFromType(PyArray_BOOL); - ba = (PyArrayObject *)PyArray_FromAny(myBool, typecode, 0, 0, - CARRAY, NULL); - if (ba == NULL) { - return -1; - } - nd = ba->nd; - for (j = 0; j < nd; j++) { - iters[j] = NULL; - } - size = PyArray_SIZE(ba); - ptr = (Bool *)ba->data; - count = 0; - - /* pre-determine how many nonzero entries there are */ - for (i = 0; i < size; i++) { - if (*(ptr++)) { - count++; - } - } - - /* create count-sized index arrays for each dimension */ - for (j = 0; j < nd; j++) { - new = (PyArrayObject *)PyArray_New(&PyArray_Type, 1, &count, - PyArray_INTP, NULL, NULL, - 0, 0, NULL); - if (new == NULL) { - goto fail; - } - iters[j] = (PyArrayIterObject *) - PyArray_IterNew((PyObject *)new); - Py_DECREF(new); - if (iters[j] == NULL) { - goto fail; - } - dptr[j] = (intp *)iters[j]->ao->data; - coords[j] = 0; - dims_m1[j] = ba->dimensions[j]-1; - } - ptr = (Bool *)ba->data; - if (count == 0) { - goto finish; - } - - /* - * Loop through the Boolean array and copy coordinates - * for non-zero entries - */ - for (i = 0; i < size; i++) { - if (*(ptr++)) { - for (j = 0; j < nd; j++) { - *(dptr[j]++) = coords[j]; - } - } - /* Borrowed from ITER_NEXT macro */ - for (j = nd - 1; j >= 0; j--) { - if (coords[j] < dims_m1[j]) { - coords[j]++; - break; - } - else { - coords[j] = 0; - } - } - } - - finish: - Py_DECREF(ba); - return nd; - 
- fail: - for (j = 0; j < nd; j++) { - Py_XDECREF(iters[j]); - } - Py_XDECREF(ba); - return -1; -} - -/* convert an indexing object to an INTP indexing array iterator - if possible -- otherwise, it is a Slice or Ellipsis object - and has to be interpreted on bind to a particular - array so leave it NULL for now. -*/ -static int -_convert_obj(PyObject *obj, PyArrayIterObject **iter) -{ - PyArray_Descr *indtype; - PyObject *arr; - - if (PySlice_Check(obj) || (obj == Py_Ellipsis)) { - return 0; - } - else if (PyArray_Check(obj) && PyArray_ISBOOL(obj)) { - return _nonzero_indices(obj, iter); - } - else { - indtype = PyArray_DescrFromType(PyArray_INTP); - arr = PyArray_FromAny(obj, indtype, 0, 0, FORCECAST, NULL); - if (arr == NULL) { - return -1; - } - *iter = (PyArrayIterObject *)PyArray_IterNew(arr); - Py_DECREF(arr); - if (*iter == NULL) { - return -1; - } - } - return 1; -} - -/* Reset the map iterator to the beginning */ -NPY_NO_EXPORT void -PyArray_MapIterReset(PyArrayMapIterObject *mit) -{ - int i,j; intp coord[MAX_DIMS]; - PyArrayIterObject *it; - PyArray_CopySwapFunc *copyswap; - - mit->index = 0; - - copyswap = mit->iters[0]->ao->descr->f->copyswap; - - if (mit->subspace != NULL) { - memcpy(coord, mit->bscoord, sizeof(intp)*mit->ait->ao->nd); - PyArray_ITER_RESET(mit->subspace); - for (i = 0; i < mit->numiter; i++) { - it = mit->iters[i]; - PyArray_ITER_RESET(it); - j = mit->iteraxes[i]; - copyswap(coord+j,it->dataptr, !PyArray_ISNOTSWAPPED(it->ao), - it->ao); - } - PyArray_ITER_GOTO(mit->ait, coord); - mit->subspace->dataptr = mit->ait->dataptr; - mit->dataptr = mit->subspace->dataptr; - } - else { - for (i = 0; i < mit->numiter; i++) { - it = mit->iters[i]; - if (it->size != 0) { - PyArray_ITER_RESET(it); - copyswap(coord+i,it->dataptr, !PyArray_ISNOTSWAPPED(it->ao), - it->ao); - } - else { - coord[i] = 0; - } - } - PyArray_ITER_GOTO(mit->ait, coord); - mit->dataptr = mit->ait->dataptr; - } - return; -} - -/* - * This function needs to update the state of 
the map iterator - * and point mit->dataptr to the memory-location of the next object - */ -NPY_NO_EXPORT void -PyArray_MapIterNext(PyArrayMapIterObject *mit) -{ - int i, j; - intp coord[MAX_DIMS]; - PyArrayIterObject *it; - PyArray_CopySwapFunc *copyswap; - - mit->index += 1; - if (mit->index >= mit->size) { - return; - } - copyswap = mit->iters[0]->ao->descr->f->copyswap; - /* Sub-space iteration */ - if (mit->subspace != NULL) { - PyArray_ITER_NEXT(mit->subspace); - if (mit->subspace->index >= mit->subspace->size) { - /* reset coord to coordinates of beginning of the subspace */ - memcpy(coord, mit->bscoord, sizeof(intp)*mit->ait->ao->nd); - PyArray_ITER_RESET(mit->subspace); - for (i = 0; i < mit->numiter; i++) { - it = mit->iters[i]; - PyArray_ITER_NEXT(it); - j = mit->iteraxes[i]; - copyswap(coord+j,it->dataptr, !PyArray_ISNOTSWAPPED(it->ao), - it->ao); - } - PyArray_ITER_GOTO(mit->ait, coord); - mit->subspace->dataptr = mit->ait->dataptr; - } - mit->dataptr = mit->subspace->dataptr; - } - else { - for (i = 0; i < mit->numiter; i++) { - it = mit->iters[i]; - PyArray_ITER_NEXT(it); - copyswap(coord+i,it->dataptr, - !PyArray_ISNOTSWAPPED(it->ao), - it->ao); - } - PyArray_ITER_GOTO(mit->ait, coord); - mit->dataptr = mit->ait->dataptr; - } - return; -} - -/* - * Bind a mapiteration to a particular array - * - * Determine if subspace iteration is necessary. If so, - * 1) Fill in mit->iteraxes - * 2) Create subspace iterator - * 3) Update nd, dimensions, and size. - * - * Subspace iteration is necessary if: arr->nd > mit->numiter - * - * Need to check for index-errors somewhere. - * - * Let's do it at bind time and also convert all <0 values to >0 here - * as well. 
- */ -NPY_NO_EXPORT void -PyArray_MapIterBind(PyArrayMapIterObject *mit, PyArrayObject *arr) -{ - int subnd; - PyObject *sub, *obj = NULL; - int i, j, n, curraxis, ellipexp, noellip; - PyArrayIterObject *it; - intp dimsize; - intp *indptr; - - subnd = arr->nd - mit->numiter; - if (subnd < 0) { - PyErr_SetString(PyExc_ValueError, - "too many indices for array"); - return; - } - - mit->ait = (PyArrayIterObject *)PyArray_IterNew((PyObject *)arr); - if (mit->ait == NULL) { - return; - } - /* no subspace iteration needed. Finish up and Return */ - if (subnd == 0) { - n = arr->nd; - for (i = 0; i < n; i++) { - mit->iteraxes[i] = i; - } - goto finish; - } - - /* - * all indexing arrays have been converted to 0 - * therefore we can extract the subspace with a simple - * getitem call which will use view semantics - * - * But, be sure to do it with a true array. - */ - if (PyArray_CheckExact(arr)) { - sub = array_subscript_simple(arr, mit->indexobj); - } - else { - Py_INCREF(arr); - obj = PyArray_EnsureArray((PyObject *)arr); - if (obj == NULL) { - goto fail; - } - sub = array_subscript_simple((PyArrayObject *)obj, mit->indexobj); - Py_DECREF(obj); - } - - if (sub == NULL) { - goto fail; - } - mit->subspace = (PyArrayIterObject *)PyArray_IterNew(sub); - Py_DECREF(sub); - if (mit->subspace == NULL) { - goto fail; - } - /* Expand dimensions of result */ - n = mit->subspace->ao->nd; - for (i = 0; i < n; i++) { - mit->dimensions[mit->nd+i] = mit->subspace->ao->dimensions[i]; - } - mit->nd += n; - - /* - * Now, we still need to interpret the ellipsis and slice objects - * to determine which axes the indexing arrays are referring to - */ - n = PyTuple_GET_SIZE(mit->indexobj); - /* The number of dimensions an ellipsis takes up */ - ellipexp = arr->nd - n + 1; - /* - * Now fill in iteraxes -- remember indexing arrays have been - * converted to 0's in mit->indexobj - */ - curraxis = 0; - j = 0; - /* Only expand the first ellipsis */ - noellip = 1; - memset(mit->bscoord, 0, 
sizeof(intp)*arr->nd); - for (i = 0; i < n; i++) { - /* - * We need to fill in the starting coordinates for - * the subspace - */ - obj = PyTuple_GET_ITEM(mit->indexobj, i); - if (PyInt_Check(obj) || PyLong_Check(obj)) { - mit->iteraxes[j++] = curraxis++; - } - else if (noellip && obj == Py_Ellipsis) { - curraxis += ellipexp; - noellip = 0; - } - else { - intp start = 0; - intp stop, step; - /* Should be slice object or another Ellipsis */ - if (obj == Py_Ellipsis) { - mit->bscoord[curraxis] = 0; - } - else if (!PySlice_Check(obj) || - (slice_GetIndices((PySliceObject *)obj, - arr->dimensions[curraxis], - &start, &stop, &step, - &dimsize) < 0)) { - PyErr_Format(PyExc_ValueError, - "unexpected object " \ - "(%s) in selection position %d", - Py_TYPE(obj)->tp_name, i); - goto fail; - } - else { - mit->bscoord[curraxis] = start; - } - curraxis += 1; - } - } - - finish: - /* Here check the indexes (now that we have iteraxes) */ - mit->size = PyArray_OverflowMultiplyList(mit->dimensions, mit->nd); - if (mit->size < 0) { - PyErr_SetString(PyExc_ValueError, - "dimensions too large in fancy indexing"); - goto fail; - } - if (mit->ait->size == 0 && mit->size != 0) { - PyErr_SetString(PyExc_ValueError, - "invalid index into a 0-size array"); - goto fail; - } - - for (i = 0; i < mit->numiter; i++) { - intp indval; - it = mit->iters[i]; - PyArray_ITER_RESET(it); - dimsize = arr->dimensions[mit->iteraxes[i]]; - while (it->index < it->size) { - indptr = ((intp *)it->dataptr); - indval = *indptr; - if (indval < 0) { - indval += dimsize; - } - if (indval < 0 || indval >= dimsize) { - PyErr_Format(PyExc_IndexError, - "index (%"INTP_FMT") out of range "\ - "(0<=index<%"INTP_FMT") in dimension %d", - indval, (dimsize-1), mit->iteraxes[i]); - goto fail; - } - PyArray_ITER_NEXT(it); - } - PyArray_ITER_RESET(it); - } - return; - - fail: - Py_XDECREF(mit->subspace); - Py_XDECREF(mit->ait); - mit->subspace = NULL; - mit->ait = NULL; - return; -} - - -NPY_NO_EXPORT PyObject * 
-PyArray_MapIterNew(PyObject *indexobj, int oned, int fancy) -{ - PyArrayMapIterObject *mit; - PyArray_Descr *indtype; - PyObject *arr = NULL; - int i, n, started, nonindex; - - if (fancy == SOBJ_BADARRAY) { - PyErr_SetString(PyExc_IndexError, \ - "arrays used as indices must be of " \ - "integer (or boolean) type"); - return NULL; - } - if (fancy == SOBJ_TOOMANY) { - PyErr_SetString(PyExc_IndexError, "too many indices"); - return NULL; - } - - mit = (PyArrayMapIterObject *)_pya_malloc(sizeof(PyArrayMapIterObject)); - PyObject_Init((PyObject *)mit, &PyArrayMapIter_Type); - if (mit == NULL) { - return NULL; - } - for (i = 0; i < MAX_DIMS; i++) { - mit->iters[i] = NULL; - } - mit->index = 0; - mit->ait = NULL; - mit->subspace = NULL; - mit->numiter = 0; - mit->consec = 1; - Py_INCREF(indexobj); - mit->indexobj = indexobj; - - if (fancy == SOBJ_LISTTUP) { - PyObject *newobj; - newobj = PySequence_Tuple(indexobj); - if (newobj == NULL) { - goto fail; - } - Py_DECREF(indexobj); - indexobj = newobj; - mit->indexobj = indexobj; - } - -#undef SOBJ_NOTFANCY -#undef SOBJ_ISFANCY -#undef SOBJ_BADARRAY -#undef SOBJ_TOOMANY -#undef SOBJ_LISTTUP - - if (oned) { - return (PyObject *)mit; - } - /* - * Must have some kind of fancy indexing if we are here - * indexobj is either a list, an arrayobject, or a tuple - * (with at least 1 list or arrayobject or Bool object) - */ - - /* convert all inputs to iterators */ - if (PyArray_Check(indexobj) && (PyArray_TYPE(indexobj) == PyArray_BOOL)) { - mit->numiter = _nonzero_indices(indexobj, mit->iters); - if (mit->numiter < 0) { - goto fail; - } - mit->nd = 1; - mit->dimensions[0] = mit->iters[0]->dims_m1[0]+1; - Py_DECREF(mit->indexobj); - mit->indexobj = PyTuple_New(mit->numiter); - if (mit->indexobj == NULL) { - goto fail; - } - for (i = 0; i < mit->numiter; i++) { - PyTuple_SET_ITEM(mit->indexobj, i, PyInt_FromLong(0)); - } - } - - else if (PyArray_Check(indexobj) || !PyTuple_Check(indexobj)) { - mit->numiter = 1; - indtype = 
PyArray_DescrFromType(PyArray_INTP); - arr = PyArray_FromAny(indexobj, indtype, 0, 0, FORCECAST, NULL); - if (arr == NULL) { - goto fail; - } - mit->iters[0] = (PyArrayIterObject *)PyArray_IterNew(arr); - if (mit->iters[0] == NULL) { - Py_DECREF(arr); - goto fail; - } - mit->nd = PyArray_NDIM(arr); - memcpy(mit->dimensions, PyArray_DIMS(arr), mit->nd*sizeof(intp)); - mit->size = PyArray_SIZE(arr); - Py_DECREF(arr); - Py_DECREF(mit->indexobj); - mit->indexobj = Py_BuildValue("(N)", PyInt_FromLong(0)); - } - else { - /* must be a tuple */ - PyObject *obj; - PyArrayIterObject **iterp; - PyObject *new; - int numiters, j, n2; - /* - * Make a copy of the tuple -- we will be replacing - * index objects with 0's - */ - n = PyTuple_GET_SIZE(indexobj); - n2 = n; - new = PyTuple_New(n2); - if (new == NULL) { - goto fail; - } - started = 0; - nonindex = 0; - j = 0; - for (i = 0; i < n; i++) { - obj = PyTuple_GET_ITEM(indexobj,i); - iterp = mit->iters + mit->numiter; - if ((numiters=_convert_obj(obj, iterp)) < 0) { - Py_DECREF(new); - goto fail; - } - if (numiters > 0) { - started = 1; - if (nonindex) { - mit->consec = 0; - } - mit->numiter += numiters; - if (numiters == 1) { - PyTuple_SET_ITEM(new,j++, PyInt_FromLong(0)); - } - else { - /* - * we need to grow the new indexing object and fill - * it with 0s for each of the iterators produced - */ - int k; - n2 += numiters - 1; - if (_PyTuple_Resize(&new, n2) < 0) { - goto fail; - } - for (k = 0; k < numiters; k++) { - PyTuple_SET_ITEM(new, j++, PyInt_FromLong(0)); - } - } - } - else { - if (started) { - nonindex = 1; - } - Py_INCREF(obj); - PyTuple_SET_ITEM(new,j++,obj); - } - } - Py_DECREF(mit->indexobj); - mit->indexobj = new; - /* - * Store the number of iterators actually converted - * These will be mapped to actual axes at bind time - */ - if (PyArray_Broadcast((PyArrayMultiIterObject *)mit) < 0) { - goto fail; - } - } - - return (PyObject *)mit; - - fail: - Py_DECREF(mit); - return NULL; -} - - -static void 
-arraymapiter_dealloc(PyArrayMapIterObject *mit) -{ - int i; - Py_XDECREF(mit->indexobj); - Py_XDECREF(mit->ait); - Py_XDECREF(mit->subspace); - for (i = 0; i < mit->numiter; i++) { - Py_XDECREF(mit->iters[i]); - } - _pya_free(mit); -} - -/* - * The mapiter object must be created new each time. It does not work - * to bind to a new array, and continue. - * - * This was the original intention, but currently that does not work. - * Do not expose the MapIter_Type to Python. - * - * It's not very useful anyway, since mapiter(indexobj); mapiter.bind(a); - * mapiter is equivalent to a[indexobj].flat but the latter gets to use - * slice syntax. - */ -NPY_NO_EXPORT PyTypeObject PyArrayMapIter_Type = { -#if defined(NPY_PY3K) - PyVarObject_HEAD_INIT(NULL, 0) -#else - PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ -#endif - "numpy.mapiter", /* tp_name */ - sizeof(PyArrayIterObject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)arraymapiter_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ -#if defined(NPY_PY3K) - 0, /* tp_reserved */ -#else - 0, /* tp_compare */ -#endif - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* 
tp_version_tag */ -#endif -}; - -/** END of Subscript Iterator **/ - - diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/mapping.h b/pythonPackages/numpy/numpy/core/src/multiarray/mapping.h deleted file mode 100755 index d5ac74735e..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/mapping.h +++ /dev/null @@ -1,62 +0,0 @@ -#ifndef _NPY_ARRAYMAPPING_H_ -#define _NPY_ARRAYMAPPING_H_ - -#ifdef NPY_ENABLE_SEPARATE_COMPILATION -extern NPY_NO_EXPORT PyMappingMethods array_as_mapping; -#else -NPY_NO_EXPORT PyMappingMethods array_as_mapping; -#endif - -NPY_NO_EXPORT PyObject * -array_big_item(PyArrayObject *self, intp i); - -NPY_NO_EXPORT Py_ssize_t -array_length(PyArrayObject *self); - -NPY_NO_EXPORT PyObject * -array_item_nice(PyArrayObject *self, Py_ssize_t i); - -NPY_NO_EXPORT PyObject * -array_subscript(PyArrayObject *self, PyObject *op); - -NPY_NO_EXPORT int -array_ass_big_item(PyArrayObject *self, intp i, PyObject *v); - -#if PY_VERSION_HEX < 0x02050000 - #if SIZEOF_INT == SIZEOF_INTP - #define array_ass_item array_ass_big_item - #endif -#else - #if SIZEOF_SIZE_T == SIZEOF_INTP - #define array_ass_item array_ass_big_item - #endif -#endif -#ifndef array_ass_item -NPY_NO_EXPORT int -_array_ass_item(PyArrayObject *self, Py_ssize_t i, PyObject *v); -#define array_ass_item _array_ass_item -#endif - -NPY_NO_EXPORT PyObject * -add_new_axes_0d(PyArrayObject *, int); - -NPY_NO_EXPORT int -count_new_axes_0d(PyObject *tuple); - -/* - * Prototypes for Mapping calls --- not part of the C-API - * because only useful as part of a getitem call. 
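The user-visible semantics that this subscript-iterator machinery implements (boolean masks reduced to integer index lists as in `_nonzero_indices`, broadcasting of integer index arrays via `PyArray_Broadcast`, and the wrap-once bounds check in `PyArray_MapIterBind`) can be sketched from Python. This is a present-day sketch using the modern NumPy API; the array values are illustrative, not from this source:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)

# Boolean index: converted to integer indices (cf. _nonzero_indices),
# yielding a 1-D result.
assert a[a > 9].tolist() == [10, 11]

# Integer index arrays are broadcast against each other
# (cf. PyArray_Broadcast on the map iterator).
rows = np.array([[0], [2]])   # shape (2, 1)
cols = np.array([1, 3])       # shape (2,)  -> broadcast to (2, 2)
assert a[rows, cols].tolist() == [[1, 3], [9, 11]]

# Negative indices wrap once; anything still out of range raises
# IndexError (cf. the bounds check loop in PyArray_MapIterBind).
assert a[np.array([-1]), 0].tolist() == [8]
try:
    a[np.array([3]), 0]
    raised = False
except IndexError:
    raised = True
assert raised
```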
- */ -NPY_NO_EXPORT void -PyArray_MapIterReset(PyArrayMapIterObject *mit); - -NPY_NO_EXPORT void -PyArray_MapIterNext(PyArrayMapIterObject *mit); - -NPY_NO_EXPORT void -PyArray_MapIterBind(PyArrayMapIterObject *, PyArrayObject *); - -NPY_NO_EXPORT PyObject* -PyArray_MapIterNew(PyObject *, int, int); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/methods.c b/pythonPackages/numpy/numpy/core/src/multiarray/methods.c deleted file mode 100755 index 0d7180a01b..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/methods.c +++ /dev/null @@ -1,2319 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include -#include -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "common.h" -#include "ctors.h" -#include "calculation.h" - -#include "methods.h" - - -/* NpyArg_ParseKeywords - * - * Utility function that provides the keyword parsing functionality of - * PyArg_ParseTupleAndKeywords without having to have an args argument. - * - */ -static int -NpyArg_ParseKeywords(PyObject *keys, const char *format, char **kwlist, ...) 
-{ - PyObject *args = PyTuple_New(0); - int ret; - va_list va; - - if (args == NULL) { - PyErr_SetString(PyExc_RuntimeError, - "Failed to allocate new tuple"); - return 0; - } - va_start(va, kwlist); - ret = PyArg_VaParseTupleAndKeywords(args, keys, format, kwlist, va); - va_end(va); - Py_DECREF(args); - return ret; -} - -/* Should only be used if x is known to be an nd-array */ -#define _ARET(x) PyArray_Return((PyArrayObject *)(x)) - -static PyObject * -array_take(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int dimension = MAX_DIMS; - PyObject *indices; - PyArrayObject *out = NULL; - NPY_CLIPMODE mode = NPY_RAISE; - static char *kwlist[] = {"indices", "axis", "out", "mode", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O&O&O&", kwlist, - &indices, PyArray_AxisConverter, - &dimension, - PyArray_OutputConverter, - &out, - PyArray_ClipmodeConverter, - &mode)) - return NULL; - - return _ARET(PyArray_TakeFrom(self, indices, dimension, out, mode)); -} - -static PyObject * -array_fill(PyArrayObject *self, PyObject *args) -{ - PyObject *obj; - if (!PyArg_ParseTuple(args, "O", &obj)) { - return NULL; - } - if (PyArray_FillWithScalar(self, obj) < 0) { - return NULL; - } - Py_INCREF(Py_None); - return Py_None; -} - -static PyObject * -array_put(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - PyObject *indices, *values; - NPY_CLIPMODE mode = NPY_RAISE; - static char *kwlist[] = {"indices", "values", "mode", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO|O&", kwlist, - &indices, &values, - PyArray_ClipmodeConverter, - &mode)) - return NULL; - return PyArray_PutTo(self, values, indices, mode); -} - -static PyObject * -array_reshape(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - static char *keywords[] = {"order", NULL}; - PyArray_Dims newshape; - PyObject *ret; - PyArray_ORDER order = PyArray_CORDER; - Py_ssize_t n = PyTuple_Size(args); - - if (!NpyArg_ParseKeywords(kwds, "|O&", keywords, - PyArray_OrderConverter, 
&order)) { - return NULL; - } - - if (n <= 1) { - if (PyTuple_GET_ITEM(args, 0) == Py_None) { - return PyArray_View(self, NULL, NULL); - } - if (!PyArg_ParseTuple(args, "O&", PyArray_IntpConverter, - &newshape)) { - return NULL; - } - } - else { - if (!PyArray_IntpConverter(args, &newshape)) { - if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "invalid shape"); - } - goto fail; - } - } - ret = PyArray_Newshape(self, &newshape, order); - PyDimMem_FREE(newshape.ptr); - return ret; - - fail: - PyDimMem_FREE(newshape.ptr); - return NULL; -} - -static PyObject * -array_squeeze(PyArrayObject *self, PyObject *args) -{ - if (!PyArg_ParseTuple(args, "")) { - return NULL; - } - return PyArray_Squeeze(self); -} - -static PyObject * -array_view(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - PyObject *out_dtype = NULL; - PyObject *out_type = NULL; - PyArray_Descr *dtype = NULL; - - static char *kwlist[] = {"dtype", "type", NULL}; - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|OO", kwlist, - &out_dtype, - &out_type)) - return NULL; - - /* If user specified a positional argument, guess whether it - represents a type or a dtype for backward compatibility. */ - if (out_dtype) { - /* type specified? 
*/ - if (PyType_Check(out_dtype) && - PyType_IsSubtype((PyTypeObject *)out_dtype, - &PyArray_Type)) { - if (out_type) { - PyErr_SetString(PyExc_ValueError, - "Cannot specify output type twice."); - return NULL; - } - out_type = out_dtype; - out_dtype = NULL; - } - } - - if ((out_type) && (!PyType_Check(out_type) || - !PyType_IsSubtype((PyTypeObject *)out_type, - &PyArray_Type))) { - PyErr_SetString(PyExc_ValueError, - "Type must be a sub-type of ndarray type"); - return NULL; - } - - if ((out_dtype) && - (PyArray_DescrConverter(out_dtype, &dtype) == PY_FAIL)) { - PyErr_SetString(PyExc_ValueError, - "Dtype must be a numpy data-type"); - return NULL; - } - - return PyArray_View(self, dtype, (PyTypeObject*)out_type); -} - -static PyObject * -array_argmax(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis = MAX_DIMS; - PyArrayObject *out = NULL; - static char *kwlist[] = {"axis", "out", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&", kwlist, - PyArray_AxisConverter, - &axis, - PyArray_OutputConverter, - &out)) - return NULL; - - return _ARET(PyArray_ArgMax(self, axis, out)); -} - -static PyObject * -array_argmin(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis = MAX_DIMS; - PyArrayObject *out = NULL; - static char *kwlist[] = {"axis", "out", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&", kwlist, - PyArray_AxisConverter, - &axis, - PyArray_OutputConverter, - &out)) - return NULL; - - return _ARET(PyArray_ArgMin(self, axis, out)); -} - -static PyObject * -array_max(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis = MAX_DIMS; - PyArrayObject *out = NULL; - static char *kwlist[] = {"axis", "out", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&", kwlist, - PyArray_AxisConverter, - &axis, - PyArray_OutputConverter, - &out)) - return NULL; - - return PyArray_Max(self, axis, out); -} - -static PyObject * -array_ptp(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int 
axis = MAX_DIMS; - PyArrayObject *out = NULL; - static char *kwlist[] = {"axis", "out", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&", kwlist, - PyArray_AxisConverter, - &axis, - PyArray_OutputConverter, - &out)) - return NULL; - - return PyArray_Ptp(self, axis, out); -} - - -static PyObject * -array_min(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis = MAX_DIMS; - PyArrayObject *out = NULL; - static char *kwlist[] = {"axis", "out", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&", kwlist, - PyArray_AxisConverter, - &axis, - PyArray_OutputConverter, - &out)) - return NULL; - - return PyArray_Min(self, axis, out); -} - -static PyObject * -array_swapaxes(PyArrayObject *self, PyObject *args) -{ - int axis1, axis2; - - if (!PyArg_ParseTuple(args, "ii", &axis1, &axis2)) { - return NULL; - } - return PyArray_SwapAxes(self, axis1, axis2); -} - - -/* steals typed reference */ -/*NUMPY_API - Get a subset of bytes from each element of the array -*/ -NPY_NO_EXPORT PyObject * -PyArray_GetField(PyArrayObject *self, PyArray_Descr *typed, int offset) -{ - PyObject *ret = NULL; - - if (offset < 0 || (offset + typed->elsize) > self->descr->elsize) { - PyErr_Format(PyExc_ValueError, - "Need 0 <= offset <= %d for requested type " \ - "but received offset = %d", - self->descr->elsize-typed->elsize, offset); - Py_DECREF(typed); - return NULL; - } - ret = PyArray_NewFromDescr(Py_TYPE(self), - typed, - self->nd, self->dimensions, - self->strides, - self->data + offset, - self->flags, (PyObject *)self); - if (ret == NULL) { - return NULL; - } - Py_INCREF(self); - ((PyArrayObject *)ret)->base = (PyObject *)self; - - PyArray_UpdateFlags((PyArrayObject *)ret, UPDATE_ALL); - return ret; -} - -static PyObject * -array_getfield(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - - PyArray_Descr *dtype = NULL; - int offset = 0; - static char *kwlist[] = {"dtype", "offset", 0}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O&|i", 
kwlist, - PyArray_DescrConverter, - &dtype, &offset)) { - Py_XDECREF(dtype); - return NULL; - } - - return PyArray_GetField(self, dtype, offset); -} - - -/*NUMPY_API - Set a subset of bytes from each element of the array -*/ -NPY_NO_EXPORT int -PyArray_SetField(PyArrayObject *self, PyArray_Descr *dtype, - int offset, PyObject *val) -{ - PyObject *ret = NULL; - int retval = 0; - - if (offset < 0 || (offset + dtype->elsize) > self->descr->elsize) { - PyErr_Format(PyExc_ValueError, - "Need 0 <= offset <= %d for requested type " \ - "but received offset = %d", - self->descr->elsize-dtype->elsize, offset); - Py_DECREF(dtype); - return -1; - } - ret = PyArray_NewFromDescr(Py_TYPE(self), - dtype, self->nd, self->dimensions, - self->strides, self->data + offset, - self->flags, (PyObject *)self); - if (ret == NULL) { - return -1; - } - Py_INCREF(self); - ((PyArrayObject *)ret)->base = (PyObject *)self; - - PyArray_UpdateFlags((PyArrayObject *)ret, UPDATE_ALL); - retval = PyArray_CopyObject((PyArrayObject *)ret, val); - Py_DECREF(ret); - return retval; -} - -static PyObject * -array_setfield(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - PyArray_Descr *dtype = NULL; - int offset = 0; - PyObject *value; - static char *kwlist[] = {"value", "dtype", "offset", 0}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO&|i", kwlist, - &value, PyArray_DescrConverter, - &dtype, &offset)) { - Py_XDECREF(dtype); - return NULL; - } - - if (PyArray_SetField(self, dtype, offset, value) < 0) { - return NULL; - } - Py_INCREF(Py_None); - return Py_None; -} - -/* This doesn't change the descriptor just the actual data... 
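`PyArray_GetField` and `PyArray_SetField` above validate that `0 <= offset <= elsize - typed->elsize` and then build a view over a byte subrange of each element. A Python-level sketch of that behavior, using the modern NumPy API (the record layout here is illustrative, not from this source):

```python
import numpy as np

# A record with two 4-byte fields: the second starts at byte offset 4.
a = np.zeros(3, dtype=[('x', '<i4'), ('y', '<i4')])
a['y'] = [1, 2, 3]

# getfield reinterprets a subset of each element's bytes as a view.
assert a.getfield('<i4', 4).tolist() == [1, 2, 3]

# setfield writes through the same kind of view.
a.setfield([7, 8, 9], '<i4', 0)
assert a['x'].tolist() == [7, 8, 9]

# An offset that would read past the element raises ValueError
# (cf. the "Need 0 <= offset <= ..." check above).
try:
    a.getfield('<i8', 4)
    raised = False
except ValueError:
    raised = True
assert raised
```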
- */ - -/*NUMPY_API*/ -NPY_NO_EXPORT PyObject * -PyArray_Byteswap(PyArrayObject *self, Bool inplace) -{ - PyArrayObject *ret; - intp size; - PyArray_CopySwapNFunc *copyswapn; - PyArrayIterObject *it; - - copyswapn = self->descr->f->copyswapn; - if (inplace) { - if (!PyArray_ISWRITEABLE(self)) { - PyErr_SetString(PyExc_RuntimeError, - "Cannot byte-swap in-place on a " \ - "read-only array"); - return NULL; - } - size = PyArray_SIZE(self); - if (PyArray_ISONESEGMENT(self)) { - copyswapn(self->data, self->descr->elsize, NULL, -1, size, 1, self); - } - else { /* Use iterator */ - int axis = -1; - intp stride; - it = (PyArrayIterObject *) \ - PyArray_IterAllButAxis((PyObject *)self, &axis); - stride = self->strides[axis]; - size = self->dimensions[axis]; - while (it->index < it->size) { - copyswapn(it->dataptr, stride, NULL, -1, size, 1, self); - PyArray_ITER_NEXT(it); - } - Py_DECREF(it); - } - - Py_INCREF(self); - return (PyObject *)self; - } - else { - PyObject *new; - if ((ret = (PyArrayObject *)PyArray_NewCopy(self,-1)) == NULL) { - return NULL; - } - new = PyArray_Byteswap(ret, TRUE); - Py_DECREF(new); - return (PyObject *)ret; - } -} - - -static PyObject * -array_byteswap(PyArrayObject *self, PyObject *args) -{ - Bool inplace = FALSE; - - if (!PyArg_ParseTuple(args, "|O&", PyArray_BoolConverter, &inplace)) { - return NULL; - } - return PyArray_Byteswap(self, inplace); -} - -static PyObject * -array_tolist(PyArrayObject *self, PyObject *args) -{ - if (!PyArg_ParseTuple(args, "")) { - return NULL; - } - return PyArray_ToList(self); -} - - -static PyObject * -array_tostring(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - NPY_ORDER order = NPY_CORDER; - static char *kwlist[] = {"order", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&", kwlist, - PyArray_OrderConverter, - &order)) { - return NULL; - } - return PyArray_ToString(self, order); -} - - -/* This should grow an order= keyword to be consistent - */ - -static PyObject * 
-array_tofile(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int ret; - PyObject *file; - FILE *fd; - char *sep = ""; - char *format = ""; - static char *kwlist[] = {"file", "sep", "format", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|ss", kwlist, - &file, &sep, &format)) { - return NULL; - } - - if (PyBytes_Check(file) || PyUnicode_Check(file)) { - file = npy_PyFile_OpenFile(file, "wb"); - if (file == NULL) { - return NULL; - } - } - else { - Py_INCREF(file); - } -#if defined(NPY_PY3K) - fd = npy_PyFile_Dup(file, "wb"); -#else - fd = PyFile_AsFile(file); -#endif - if (fd == NULL) { - PyErr_SetString(PyExc_IOError, "first argument must be a " \ - "string or open file"); - Py_DECREF(file); - return NULL; - } - ret = PyArray_ToFile(self, fd, sep, format); -#if defined(NPY_PY3K) - fclose(fd); -#endif - Py_DECREF(file); - if (ret < 0) { - return NULL; - } - Py_INCREF(Py_None); - return Py_None; -} - - -static PyObject * -array_toscalar(PyArrayObject *self, PyObject *args) { - int n, nd; - n = PyTuple_GET_SIZE(args); - - if (n == 1) { - PyObject *obj; - obj = PyTuple_GET_ITEM(args, 0); - if (PyTuple_Check(obj)) { - args = obj; - n = PyTuple_GET_SIZE(args); - } - } - - if (n == 0) { - if (self->nd == 0 || PyArray_SIZE(self) == 1) - return self->descr->f->getitem(self->data, self); - else { - PyErr_SetString(PyExc_ValueError, - "can only convert an array " \ - " of size 1 to a Python scalar"); - return NULL; - } - } - else if (n != self->nd && (n > 1 || self->nd == 0)) { - PyErr_SetString(PyExc_ValueError, - "incorrect number of indices for " \ - "array"); - return NULL; - } - else if (n == 1) { /* allows for flat getting as well as 1-d case */ - intp value, loc, index, factor; - intp factors[MAX_DIMS]; - value = PyArray_PyIntAsIntp(PyTuple_GET_ITEM(args, 0)); - if (error_converting(value)) { - PyErr_SetString(PyExc_ValueError, "invalid integer"); - return NULL; - } - factor = PyArray_SIZE(self); - if (value < 0) value += factor; - if ((value >= 
factor) || (value < 0)) { - PyErr_SetString(PyExc_ValueError, - "index out of bounds"); - return NULL; - } - if (self->nd == 1) { - value *= self->strides[0]; - return self->descr->f->getitem(self->data + value, - self); - } - nd = self->nd; - factor = 1; - while (nd--) { - factors[nd] = factor; - factor *= self->dimensions[nd]; - } - loc = 0; - for (nd = 0; nd < self->nd; nd++) { - index = value / factors[nd]; - value = value % factors[nd]; - loc += self->strides[nd]*index; - } - - return self->descr->f->getitem(self->data + loc, - self); - - } - else { - intp loc, index[MAX_DIMS]; - nd = PyArray_IntpFromSequence(args, index, MAX_DIMS); - if (nd < n) { - return NULL; - } - loc = 0; - while (nd--) { - if (index[nd] < 0) { - index[nd] += self->dimensions[nd]; - } - if (index[nd] < 0 || - index[nd] >= self->dimensions[nd]) { - PyErr_SetString(PyExc_ValueError, - "index out of bounds"); - return NULL; - } - loc += self->strides[nd]*index[nd]; - } - return self->descr->f->getitem(self->data + loc, self); - } -} - -static PyObject * -array_setscalar(PyArrayObject *self, PyObject *args) { - int n, nd; - int ret = -1; - PyObject *obj; - n = PyTuple_GET_SIZE(args) - 1; - - if (n < 0) { - PyErr_SetString(PyExc_ValueError, - "itemset must have at least one argument"); - return NULL; - } - obj = PyTuple_GET_ITEM(args, n); - if (n == 0) { - if (self->nd == 0 || PyArray_SIZE(self) == 1) { - ret = self->descr->f->setitem(obj, self->data, self); - } - else { - PyErr_SetString(PyExc_ValueError, - "can only place a scalar for an " - " array of size 1"); - return NULL; - } - } - else if (n != self->nd && (n > 1 || self->nd == 0)) { - PyErr_SetString(PyExc_ValueError, - "incorrect number of indices for " \ - "array"); - return NULL; - } - else if (n == 1) { /* allows for flat setting as well as 1-d case */ - intp value, loc, index, factor; - intp factors[MAX_DIMS]; - PyObject *indobj; - - indobj = PyTuple_GET_ITEM(args, 0); - if (PyTuple_Check(indobj)) { - PyObject *res; - PyObject 
*newargs; - PyObject *tmp; - int i, nn; - nn = PyTuple_GET_SIZE(indobj); - newargs = PyTuple_New(nn+1); - Py_INCREF(obj); - for (i = 0; i < nn; i++) { - tmp = PyTuple_GET_ITEM(indobj, i); - Py_INCREF(tmp); - PyTuple_SET_ITEM(newargs, i, tmp); - } - PyTuple_SET_ITEM(newargs, nn, obj); - /* Call with a converted set of arguments */ - res = array_setscalar(self, newargs); - Py_DECREF(newargs); - return res; - } - value = PyArray_PyIntAsIntp(indobj); - if (error_converting(value)) { - PyErr_SetString(PyExc_ValueError, "invalid integer"); - return NULL; - } - if (value >= PyArray_SIZE(self)) { - PyErr_SetString(PyExc_ValueError, - "index out of bounds"); - return NULL; - } - if (self->nd == 1) { - value *= self->strides[0]; - ret = self->descr->f->setitem(obj, self->data + value, - self); - goto finish; - } - nd = self->nd; - factor = 1; - while (nd--) { - factors[nd] = factor; - factor *= self->dimensions[nd]; - } - loc = 0; - for (nd = 0; nd < self->nd; nd++) { - index = value / factors[nd]; - value = value % factors[nd]; - loc += self->strides[nd]*index; - } - - ret = self->descr->f->setitem(obj, self->data + loc, self); - } - else { - intp loc, index[MAX_DIMS]; - PyObject *tupargs; - tupargs = PyTuple_GetSlice(args, 0, n); - nd = PyArray_IntpFromSequence(tupargs, index, MAX_DIMS); - Py_DECREF(tupargs); - if (nd < n) { - return NULL; - } - loc = 0; - while (nd--) { - if (index[nd] < 0) { - index[nd] += self->dimensions[nd]; - } - if (index[nd] < 0 || - index[nd] >= self->dimensions[nd]) { - PyErr_SetString(PyExc_ValueError, - "index out of bounds"); - return NULL; - } - loc += self->strides[nd]*index[nd]; - } - ret = self->descr->f->setitem(obj, self->data + loc, self); - } - - finish: - if (ret < 0) { - return NULL; - } - Py_INCREF(Py_None); - return Py_None; -} - - -static PyObject * -array_cast(PyArrayObject *self, PyObject *args) -{ - PyArray_Descr *descr = NULL; - PyObject *obj; - - if (!PyArg_ParseTuple(args, "O&", PyArray_DescrConverter, - &descr)) { - 
Py_XDECREF(descr); - return NULL; - } - - if (PyArray_EquivTypes(descr, self->descr)) { - obj = _ARET(PyArray_NewCopy(self,NPY_ANYORDER)); - Py_XDECREF(descr); - return obj; - } - if (descr->names != NULL) { - int flags; - flags = NPY_FORCECAST; - if (PyArray_ISFORTRAN(self)) { - flags |= NPY_FORTRAN; - } - return PyArray_FromArray(self, descr, flags); - } - return PyArray_CastToType(self, descr, PyArray_ISFORTRAN(self)); -} - -/* default sub-type implementation */ - - -static PyObject * -array_wraparray(PyArrayObject *self, PyObject *args) -{ - PyObject *arr; - PyObject *ret; - - if (PyTuple_Size(args) < 1) { - PyErr_SetString(PyExc_TypeError, - "only accepts 1 argument"); - return NULL; - } - arr = PyTuple_GET_ITEM(args, 0); - if (arr == NULL) { - return NULL; - } - if (!PyArray_Check(arr)) { - PyErr_SetString(PyExc_TypeError, - "can only be called with ndarray object"); - return NULL; - } - - if (Py_TYPE(self) != Py_TYPE(arr)){ - Py_INCREF(PyArray_DESCR(arr)); - ret = PyArray_NewFromDescr(Py_TYPE(self), - PyArray_DESCR(arr), - PyArray_NDIM(arr), - PyArray_DIMS(arr), - PyArray_STRIDES(arr), PyArray_DATA(arr), - PyArray_FLAGS(arr), (PyObject *)self); - if (ret == NULL) { - return NULL; - } - Py_INCREF(arr); - PyArray_BASE(ret) = arr; - return ret; - } else { - /*The type was set in __array_prepare__*/ - Py_INCREF(arr); - return arr; - } -} - - -static PyObject * -array_preparearray(PyArrayObject *self, PyObject *args) -{ - PyObject *arr; - PyObject *ret; - - if (PyTuple_Size(args) < 1) { - PyErr_SetString(PyExc_TypeError, - "only accepts 1 argument"); - return NULL; - } - arr = PyTuple_GET_ITEM(args, 0); - if (!PyArray_Check(arr)) { - PyErr_SetString(PyExc_TypeError, - "can only be called with ndarray object"); - return NULL; - } - - Py_INCREF(PyArray_DESCR(arr)); - ret = PyArray_NewFromDescr(Py_TYPE(self), - PyArray_DESCR(arr), - PyArray_NDIM(arr), - PyArray_DIMS(arr), - PyArray_STRIDES(arr), PyArray_DATA(arr), - PyArray_FLAGS(arr), (PyObject *)self); - if (ret 
== NULL) { - return NULL; - } - Py_INCREF(arr); - PyArray_BASE(ret) = arr; - return ret; -} - - -static PyObject * -array_getarray(PyArrayObject *self, PyObject *args) -{ - PyArray_Descr *newtype = NULL; - PyObject *ret; - - if (!PyArg_ParseTuple(args, "|O&", PyArray_DescrConverter, - &newtype)) { - Py_XDECREF(newtype); - return NULL; - } - - /* convert to PyArray_Type */ - if (!PyArray_CheckExact(self)) { - PyObject *new; - PyTypeObject *subtype = &PyArray_Type; - - if (!PyType_IsSubtype(Py_TYPE(self), &PyArray_Type)) { - subtype = &PyArray_Type; - } - - Py_INCREF(PyArray_DESCR(self)); - new = PyArray_NewFromDescr(subtype, - PyArray_DESCR(self), - PyArray_NDIM(self), - PyArray_DIMS(self), - PyArray_STRIDES(self), - PyArray_DATA(self), - PyArray_FLAGS(self), NULL); - if (new == NULL) { - return NULL; - } - Py_INCREF(self); - PyArray_BASE(new) = (PyObject *)self; - self = (PyArrayObject *)new; - } - else { - Py_INCREF(self); - } - - if ((newtype == NULL) || - PyArray_EquivTypes(self->descr, newtype)) { - return (PyObject *)self; - } - else { - ret = PyArray_CastToType(self, newtype, 0); - Py_DECREF(self); - return ret; - } -} - - -static PyObject * -array_copy(PyArrayObject *self, PyObject *args) -{ - PyArray_ORDER fortran=PyArray_CORDER; - if (!PyArg_ParseTuple(args, "|O&", PyArray_OrderConverter, - &fortran)) { - return NULL; - } - - return PyArray_NewCopy(self, fortran); -} - -#include -static PyObject * -array_resize(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - static char *kwlist[] = {"refcheck", NULL}; - Py_ssize_t size = PyTuple_Size(args); - int refcheck = 1; - PyArray_Dims newshape; - PyObject *ret, *obj; - - - if (!NpyArg_ParseKeywords(kwds, "|i", kwlist, &refcheck)) { - return NULL; - } - - if (size == 0) { - Py_INCREF(Py_None); - return Py_None; - } - else if (size == 1) { - obj = PyTuple_GET_ITEM(args, 0); - if (obj == Py_None) { - Py_INCREF(Py_None); - return Py_None; - } - args = obj; - } - if (!PyArray_IntpConverter(args, &newshape)) { - 
if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, "invalid shape"); - } - return NULL; - } - - ret = PyArray_Resize(self, &newshape, refcheck, PyArray_CORDER); - PyDimMem_FREE(newshape.ptr); - if (ret == NULL) { - return NULL; - } - Py_DECREF(ret); - Py_INCREF(Py_None); - return Py_None; -} - -static PyObject * -array_repeat(PyArrayObject *self, PyObject *args, PyObject *kwds) { - PyObject *repeats; - int axis = MAX_DIMS; - static char *kwlist[] = {"repeats", "axis", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O&", kwlist, - &repeats, PyArray_AxisConverter, - &axis)) { - return NULL; - } - return _ARET(PyArray_Repeat(self, repeats, axis)); -} - -static PyObject * -array_choose(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - static char *keywords[] = {"out", "mode", NULL}; - PyObject *choices; - PyArrayObject *out = NULL; - NPY_CLIPMODE clipmode = NPY_RAISE; - Py_ssize_t n = PyTuple_Size(args); - - if (n <= 1) { - if (!PyArg_ParseTuple(args, "O", &choices)) { - return NULL; - } - } - else { - choices = args; - } - - if (!NpyArg_ParseKeywords(kwds, "|O&O&", keywords, - PyArray_OutputConverter, &out, - PyArray_ClipmodeConverter, &clipmode)) { - return NULL; - } - - return _ARET(PyArray_Choose(self, choices, out, clipmode)); -} - -static PyObject * -array_sort(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis=-1; - int val; - PyArray_SORTKIND which = PyArray_QUICKSORT; - PyObject *order = NULL; - PyArray_Descr *saved = NULL; - PyArray_Descr *newd; - static char *kwlist[] = {"axis", "kind", "order", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|iO&O", kwlist, &axis, - PyArray_SortkindConverter, &which, - &order)) { - return NULL; - } - if (order == Py_None) { - order = NULL; - } - if (order != NULL) { - PyObject *new_name; - PyObject *_numpy_internal; - saved = self->descr; - if (saved->names == NULL) { - PyErr_SetString(PyExc_ValueError, "Cannot specify " \ - "order when the array has no fields."); - 
return NULL; - } - _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal == NULL) { - return NULL; - } - new_name = PyObject_CallMethod(_numpy_internal, "_newnames", - "OO", saved, order); - Py_DECREF(_numpy_internal); - if (new_name == NULL) { - return NULL; - } - newd = PyArray_DescrNew(saved); - newd->names = new_name; - self->descr = newd; - } - - val = PyArray_Sort(self, axis, which); - if (order != NULL) { - Py_XDECREF(self->descr); - self->descr = saved; - } - if (val < 0) { - return NULL; - } - Py_INCREF(Py_None); - return Py_None; -} - -static PyObject * -array_argsort(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis = -1; - PyArray_SORTKIND which = PyArray_QUICKSORT; - PyObject *order = NULL, *res; - PyArray_Descr *newd, *saved=NULL; - static char *kwlist[] = {"axis", "kind", "order", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&O", kwlist, - PyArray_AxisConverter, &axis, - PyArray_SortkindConverter, &which, - &order)) { - return NULL; - } - if (order == Py_None) { - order = NULL; - } - if (order != NULL) { - PyObject *new_name; - PyObject *_numpy_internal; - saved = self->descr; - if (saved->names == NULL) { - PyErr_SetString(PyExc_ValueError, "Cannot specify " \ - "order when the array has no fields."); - return NULL; - } - _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal == NULL) { - return NULL; - } - new_name = PyObject_CallMethod(_numpy_internal, "_newnames", - "OO", saved, order); - Py_DECREF(_numpy_internal); - if (new_name == NULL) { - return NULL; - } - newd = PyArray_DescrNew(saved); - newd->names = new_name; - self->descr = newd; - } - - res = PyArray_ArgSort(self, axis, which); - if (order != NULL) { - Py_XDECREF(self->descr); - self->descr = saved; - } - return _ARET(res); -} - -static PyObject * -array_searchsorted(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - static char *kwlist[] = {"keys", "side", NULL}; - PyObject *keys; 
- NPY_SEARCHSIDE side = NPY_SEARCHLEFT; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O&:searchsorted", - kwlist, &keys, - PyArray_SearchsideConverter, &side)) { - return NULL; - } - return _ARET(PyArray_SearchSorted(self, keys, side)); -} - -static void -_deepcopy_call(char *iptr, char *optr, PyArray_Descr *dtype, - PyObject *deepcopy, PyObject *visit) -{ - if (!PyDataType_REFCHK(dtype)) { - return; - } - else if (PyDescr_HASFIELDS(dtype)) { - PyObject *key, *value, *title = NULL; - PyArray_Descr *new; - int offset; - Py_ssize_t pos = 0; - while (PyDict_Next(dtype->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) { - continue; - } - if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, - &title)) { - return; - } - _deepcopy_call(iptr + offset, optr + offset, new, - deepcopy, visit); - } - } - else { - PyObject *itemp, *otemp; - PyObject *res; - NPY_COPY_PYOBJECT_PTR(&itemp, iptr); - NPY_COPY_PYOBJECT_PTR(&otemp, optr); - Py_XINCREF(itemp); - /* call deepcopy on this argument */ - res = PyObject_CallFunctionObjArgs(deepcopy, itemp, visit, NULL); - Py_XDECREF(itemp); - Py_XDECREF(otemp); - NPY_COPY_PYOBJECT_PTR(optr, &res); - } - -} - - -static PyObject * -array_deepcopy(PyArrayObject *self, PyObject *args) -{ - PyObject* visit; - char *optr; - PyArrayIterObject *it; - PyObject *copy, *ret, *deepcopy; - - if (!PyArg_ParseTuple(args, "O", &visit)) { - return NULL; - } - ret = PyArray_Copy(self); - if (PyDataType_REFCHK(self->descr)) { - copy = PyImport_ImportModule("copy"); - if (copy == NULL) { - return NULL; - } - deepcopy = PyObject_GetAttrString(copy, "deepcopy"); - Py_DECREF(copy); - if (deepcopy == NULL) { - return NULL; - } - it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)self); - if (it == NULL) { - Py_DECREF(deepcopy); - return NULL; - } - optr = PyArray_DATA(ret); - while(it->index < it->size) { - _deepcopy_call(it->dataptr, optr, self->descr, deepcopy, visit); - optr += self->descr->elsize; - PyArray_ITER_NEXT(it); - } - 
Py_DECREF(deepcopy); - Py_DECREF(it); - } - return _ARET(ret); -} - -/* Convert Array to flat list (using getitem) */ -static PyObject * -_getlist_pkl(PyArrayObject *self) -{ - PyObject *theobject; - PyArrayIterObject *iter = NULL; - PyObject *list; - PyArray_GetItemFunc *getitem; - - getitem = self->descr->f->getitem; - iter = (PyArrayIterObject *)PyArray_IterNew((PyObject *)self); - if (iter == NULL) { - return NULL; - } - list = PyList_New(iter->size); - if (list == NULL) { - Py_DECREF(iter); - return NULL; - } - while (iter->index < iter->size) { - theobject = getitem(iter->dataptr, self); - PyList_SET_ITEM(list, (int) iter->index, theobject); - PyArray_ITER_NEXT(iter); - } - Py_DECREF(iter); - return list; -} - -static int -_setlist_pkl(PyArrayObject *self, PyObject *list) -{ - PyObject *theobject; - PyArrayIterObject *iter = NULL; - PyArray_SetItemFunc *setitem; - - setitem = self->descr->f->setitem; - iter = (PyArrayIterObject *)PyArray_IterNew((PyObject *)self); - if (iter == NULL) { - return -1; - } - while(iter->index < iter->size) { - theobject = PyList_GET_ITEM(list, (int) iter->index); - setitem(theobject, iter->dataptr, self); - PyArray_ITER_NEXT(iter); - } - Py_XDECREF(iter); - return 0; -} - - -static PyObject * -array_reduce(PyArrayObject *self, PyObject *NPY_UNUSED(args)) -{ - /* version number of this pickle type. Increment if we need to - change the format. Be sure to handle the old versions in - array_setstate. 
*/ - const int version = 1; - PyObject *ret = NULL, *state = NULL, *obj = NULL, *mod = NULL; - PyObject *mybool, *thestr = NULL; - PyArray_Descr *descr; - - /* Return a tuple of (callable object, arguments, object's state) */ - /* We will put everything in the object's state, so that on UnPickle - it can use the string object as memory without a copy */ - - ret = PyTuple_New(3); - if (ret == NULL) { - return NULL; - } - mod = PyImport_ImportModule("numpy.core.multiarray"); - if (mod == NULL) { - Py_DECREF(ret); - return NULL; - } - obj = PyObject_GetAttrString(mod, "_reconstruct"); - Py_DECREF(mod); - PyTuple_SET_ITEM(ret, 0, obj); - PyTuple_SET_ITEM(ret, 1, - Py_BuildValue("ONc", - (PyObject *)Py_TYPE(self), - Py_BuildValue("(N)", - PyInt_FromLong(0)), - /* dummy data-type */ - 'b')); - - /* Now fill in object's state. This is a tuple with - 5 arguments - - 1) an integer with the pickle version. - 2) a Tuple giving the shape - 3) a PyArray_Descr Object (with correct bytorder set) - 4) a Bool stating if Fortran or not - 5) a Python object representing the data (a string, or - a list or any user-defined object). - - Notice because Python does not describe a mechanism to write - raw data to the pickle, this performs a copy to a string first - */ - - state = PyTuple_New(5); - if (state == NULL) { - Py_DECREF(ret); - return NULL; - } - PyTuple_SET_ITEM(state, 0, PyInt_FromLong(version)); - PyTuple_SET_ITEM(state, 1, PyObject_GetAttrString((PyObject *)self, - "shape")); - descr = self->descr; - Py_INCREF(descr); - PyTuple_SET_ITEM(state, 2, (PyObject *)descr); - mybool = (PyArray_ISFORTRAN(self) ? 
Py_True : Py_False); - Py_INCREF(mybool); - PyTuple_SET_ITEM(state, 3, mybool); - if (PyDataType_FLAGCHK(self->descr, NPY_LIST_PICKLE)) { - thestr = _getlist_pkl(self); - } - else { - thestr = PyArray_ToString(self, NPY_ANYORDER); - } - if (thestr == NULL) { - Py_DECREF(ret); - Py_DECREF(state); - return NULL; - } - PyTuple_SET_ITEM(state, 4, thestr); - PyTuple_SET_ITEM(ret, 2, state); - return ret; -} - -static PyObject * -array_setstate(PyArrayObject *self, PyObject *args) -{ - PyObject *shape; - PyArray_Descr *typecode; - int version = 1; - int fortran; - PyObject *rawdata; - char *datastr; - Py_ssize_t len; - intp size, dimensions[MAX_DIMS]; - int nd; - int incref_base = 1; - - /* This will free any memory associated with a and - use the string in setstate as the (writeable) memory. - */ - if (!PyArg_ParseTuple(args, "(iO!O!iO)", &version, &PyTuple_Type, - &shape, &PyArrayDescr_Type, &typecode, - &fortran, &rawdata)) { - PyErr_Clear(); - version = 0; - if (!PyArg_ParseTuple(args, "(O!O!iO)", &PyTuple_Type, - &shape, &PyArrayDescr_Type, &typecode, - &fortran, &rawdata)) { - return NULL; - } - } - - /* If we ever need another pickle format, increment the version - number. But we should still be able to handle the old versions. - We've only got one right now. 
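`array_reduce` and `array_setstate` above implement pickling with a versioned 5-tuple state (version, shape, dtype, Fortran flag, data), so old pickles remain readable if the format ever changes. The same versioned `__reduce__`/`__setstate__` pattern can be sketched in pure Python with a toy class (`Packet` is a hypothetical name, not NumPy code):

```python
import pickle

VERSION = 1

class Packet:
    """Toy object using the versioned __reduce__/__setstate__ scheme."""
    def __init__(self, shape=(), data=b""):
        self.shape, self.data = shape, data

    def __reduce__(self):
        # (callable, args, state): the state tuple carries a version
        # number so the on-disk format can evolve while old pickles
        # are still accepted by __setstate__.
        return (Packet, (), (VERSION, self.shape, self.data))

    def __setstate__(self, state):
        version, shape, data = state
        if version not in (0, 1):
            raise ValueError("can't handle pickle version %d" % version)
        self.shape, self.data = shape, data

p = pickle.loads(pickle.dumps(Packet((2, 3), b"abcdef")))
```

As in `array_setstate`, the version check rejects formats the reader does not know rather than silently misinterpreting the payload.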
*/ - if (version != 1 && version != 0) { - PyErr_Format(PyExc_ValueError, - "can't handle version %d of numpy.ndarray pickle", - version); - return NULL; - } - - Py_XDECREF(self->descr); - self->descr = typecode; - Py_INCREF(typecode); - nd = PyArray_IntpFromSequence(shape, dimensions, MAX_DIMS); - if (nd < 0) { - return NULL; - } - size = PyArray_MultiplyList(dimensions, nd); - if (self->descr->elsize == 0) { - PyErr_SetString(PyExc_ValueError, "Invalid data-type size."); - return NULL; - } - if (size < 0 || size > MAX_INTP / self->descr->elsize) { - PyErr_NoMemory(); - return NULL; - } - - if (PyDataType_FLAGCHK(typecode, NPY_LIST_PICKLE)) { - if (!PyList_Check(rawdata)) { - PyErr_SetString(PyExc_TypeError, - "object pickle not returning list"); - return NULL; - } - } - else { -#if defined(NPY_PY3K) - /* Backward compatibility with Python 2 Numpy pickles */ - if (PyUnicode_Check(rawdata)) { - PyObject *tmp; - tmp = PyUnicode_AsLatin1String(rawdata); - rawdata = tmp; - incref_base = 0; - } -#endif - - if (!PyBytes_Check(rawdata)) { - PyErr_SetString(PyExc_TypeError, - "pickle not returning string"); - return NULL; - } - - if (PyBytes_AsStringAndSize(rawdata, &datastr, &len)) - return NULL; - - if ((len != (self->descr->elsize * size))) { - PyErr_SetString(PyExc_ValueError, - "buffer size does not" \ - " match array size"); - return NULL; - } - } - - if ((self->flags & OWNDATA)) { - if (self->data != NULL) { - PyDataMem_FREE(self->data); - } - self->flags &= ~OWNDATA; - } - Py_XDECREF(self->base); - - self->flags &= ~UPDATEIFCOPY; - - if (self->dimensions != NULL) { - PyDimMem_FREE(self->dimensions); - self->dimensions = NULL; - } - - self->flags = DEFAULT; - - self->nd = nd; - - if (nd > 0) { - self->dimensions = PyDimMem_NEW(nd * 2); - self->strides = self->dimensions + nd; - memcpy(self->dimensions, dimensions, sizeof(intp)*nd); - (void) _array_fill_strides(self->strides, dimensions, nd, - (size_t) self->descr->elsize, - (fortran ? 
FORTRAN : CONTIGUOUS), - &(self->flags)); - } - - if (!PyDataType_FLAGCHK(typecode, NPY_LIST_PICKLE)) { - int swap=!PyArray_ISNOTSWAPPED(self); - self->data = datastr; - if (!_IsAligned(self) || swap) { - intp num = PyArray_NBYTES(self); - self->data = PyDataMem_NEW(num); - if (self->data == NULL) { - self->nd = 0; - PyDimMem_FREE(self->dimensions); - return PyErr_NoMemory(); - } - if (swap) { /* byte-swap on pickle-read */ - intp numels = num / self->descr->elsize; - self->descr->f->copyswapn(self->data, self->descr->elsize, - datastr, self->descr->elsize, - numels, 1, self); - if (!PyArray_ISEXTENDED(self)) { - self->descr = PyArray_DescrFromType(self->descr->type_num); - } - else { - self->descr = PyArray_DescrNew(typecode); - if (self->descr->byteorder == PyArray_BIG) { - self->descr->byteorder = PyArray_LITTLE; - } - else if (self->descr->byteorder == PyArray_LITTLE) { - self->descr->byteorder = PyArray_BIG; - } - } - Py_DECREF(typecode); - } - else { - memcpy(self->data, datastr, num); - } - self->flags |= OWNDATA; - self->base = NULL; - } - else { - self->base = rawdata; - if (incref_base) { - Py_INCREF(self->base); - } - } - } - else { - self->data = PyDataMem_NEW(PyArray_NBYTES(self)); - if (self->data == NULL) { - self->nd = 0; - self->data = PyDataMem_NEW(self->descr->elsize); - if (self->dimensions) { - PyDimMem_FREE(self->dimensions); - } - return PyErr_NoMemory(); - } - if (PyDataType_FLAGCHK(self->descr, NPY_NEEDS_INIT)) { - memset(self->data, 0, PyArray_NBYTES(self)); - } - self->flags |= OWNDATA; - self->base = NULL; - if (_setlist_pkl(self, rawdata) < 0) { - return NULL; - } - } - - PyArray_UpdateFlags(self, UPDATE_ALL); - - Py_INCREF(Py_None); - return Py_None; -} - -/*NUMPY_API*/ -NPY_NO_EXPORT int -PyArray_Dump(PyObject *self, PyObject *file, int protocol) -{ - PyObject *cpick = NULL; - PyObject *ret; - if (protocol < 0) { - protocol = 2; - } - -#if defined(NPY_PY3K) - cpick = PyImport_ImportModule("pickle"); -#else - cpick = 
PyImport_ImportModule("cPickle"); -#endif - if (cpick == NULL) { - return -1; - } - if (PyBytes_Check(file) || PyUnicode_Check(file)) { - file = npy_PyFile_OpenFile(file, "wb"); - if (file == NULL) { - return -1; - } - } - else { - Py_INCREF(file); - } - ret = PyObject_CallMethod(cpick, "dump", "OOi", self, file, protocol); - Py_XDECREF(ret); - Py_DECREF(file); - Py_DECREF(cpick); - if (PyErr_Occurred()) { - return -1; - } - return 0; -} - -/*NUMPY_API*/ -NPY_NO_EXPORT PyObject * -PyArray_Dumps(PyObject *self, int protocol) -{ - PyObject *cpick = NULL; - PyObject *ret; - if (protocol < 0) { - protocol = 2; - } -#if defined(NPY_PY3K) - cpick = PyImport_ImportModule("pickle"); -#else - cpick = PyImport_ImportModule("cPickle"); -#endif - if (cpick == NULL) { - return NULL; - } - ret = PyObject_CallMethod(cpick, "dumps", "Oi", self, protocol); - Py_DECREF(cpick); - return ret; -} - - -static PyObject * -array_dump(PyArrayObject *self, PyObject *args) -{ - PyObject *file = NULL; - int ret; - - if (!PyArg_ParseTuple(args, "O", &file)) { - return NULL; - } - ret = PyArray_Dump((PyObject *)self, file, 2); - if (ret < 0) { - return NULL; - } - Py_INCREF(Py_None); - return Py_None; -} - - -static PyObject * -array_dumps(PyArrayObject *self, PyObject *args) -{ - if (!PyArg_ParseTuple(args, "")) { - return NULL; - } - return PyArray_Dumps((PyObject *)self, 2); -} - - -static PyObject * -array_transpose(PyArrayObject *self, PyObject *args) -{ - PyObject *shape = Py_None; - Py_ssize_t n = PyTuple_Size(args); - PyArray_Dims permute; - PyObject *ret; - - if (n > 1) { - shape = args; - } - else if (n == 1) { - shape = PyTuple_GET_ITEM(args, 0); - } - - if (shape == Py_None) { - ret = PyArray_Transpose(self, NULL); - } - else { - if (!PyArray_IntpConverter(shape, &permute)) { - return NULL; - } - ret = PyArray_Transpose(self, &permute); - PyDimMem_FREE(permute.ptr); - } - - return ret; -} - -/* Return typenumber from dtype2 unless it is NULL, then return - NPY_DOUBLE if 
dtype1->type_num is integer or bool - and dtype1->type_num otherwise. -*/ -static int -_get_type_num_double(PyArray_Descr *dtype1, PyArray_Descr *dtype2) -{ - if (dtype2 != NULL) { - return dtype2->type_num; - } - /* For integer or bool data-types */ - if (dtype1->type_num < NPY_FLOAT) { - return NPY_DOUBLE; - } - else { - return dtype1->type_num; - } -} - -#define _CHKTYPENUM(typ) ((typ) ? (typ)->type_num : PyArray_NOTYPE) - -static PyObject * -array_mean(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis = MAX_DIMS; - PyArray_Descr *dtype = NULL; - PyArrayObject *out = NULL; - int num; - static char *kwlist[] = {"axis", "dtype", "out", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&O&", kwlist, - PyArray_AxisConverter, - &axis, PyArray_DescrConverter2, - &dtype, - PyArray_OutputConverter, - &out)) { - Py_XDECREF(dtype); - return NULL; - } - - num = _get_type_num_double(self->descr, dtype); - Py_XDECREF(dtype); - return PyArray_Mean(self, axis, num, out); -} - -static PyObject * -array_sum(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis = MAX_DIMS; - PyArray_Descr *dtype = NULL; - PyArrayObject *out = NULL; - int rtype; - static char *kwlist[] = {"axis", "dtype", "out", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&O&", kwlist, - PyArray_AxisConverter, - &axis, PyArray_DescrConverter2, - &dtype, - PyArray_OutputConverter, - &out)) { - Py_XDECREF(dtype); - return NULL; - } - - rtype = _CHKTYPENUM(dtype); - Py_XDECREF(dtype); - return PyArray_Sum(self, axis, rtype, out); -} - - -static PyObject * -array_cumsum(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis = MAX_DIMS; - PyArray_Descr *dtype = NULL; - PyArrayObject *out = NULL; - int rtype; - static char *kwlist[] = {"axis", "dtype", "out", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&O&", kwlist, - PyArray_AxisConverter, - &axis, PyArray_DescrConverter2, - &dtype, - PyArray_OutputConverter, - &out)) { - 
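`_get_type_num_double` above encodes the rule that `mean` (and, below, `std`/`var`) accumulate in `NPY_DOUBLE` whenever the array's dtype is boolean or integer, so integer division cannot truncate the result. A plain-Python sketch of the same promotion rule (the `mean` helper is illustrative, not the C routine):

```python
def mean(values):
    """Average that always accumulates in float, mirroring the rule
    that integer/bool inputs are promoted to double before reducing."""
    total = 0.0          # float accumulator regardless of input type
    for v in values:
        total += v
    return total / len(values)
```

Without the promotion, an integer-only reduction such as `(1 + 2) // 2` would lose the fractional part.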
Py_XDECREF(dtype); - return NULL; - } - - rtype = _CHKTYPENUM(dtype); - Py_XDECREF(dtype); - return PyArray_CumSum(self, axis, rtype, out); -} - -static PyObject * -array_prod(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis = MAX_DIMS; - PyArray_Descr *dtype = NULL; - PyArrayObject *out = NULL; - int rtype; - static char *kwlist[] = {"axis", "dtype", "out", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&O&", kwlist, - PyArray_AxisConverter, - &axis, PyArray_DescrConverter2, - &dtype, - PyArray_OutputConverter, - &out)) { - Py_XDECREF(dtype); - return NULL; - } - - rtype = _CHKTYPENUM(dtype); - Py_XDECREF(dtype); - return PyArray_Prod(self, axis, rtype, out); -} - -static PyObject * -array_cumprod(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis = MAX_DIMS; - PyArray_Descr *dtype = NULL; - PyArrayObject *out = NULL; - int rtype; - static char *kwlist[] = {"axis", "dtype", "out", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&O&", kwlist, - PyArray_AxisConverter, - &axis, PyArray_DescrConverter2, - &dtype, - PyArray_OutputConverter, - &out)) { - Py_XDECREF(dtype); - return NULL; - } - - rtype = _CHKTYPENUM(dtype); - Py_XDECREF(dtype); - return PyArray_CumProd(self, axis, rtype, out); -} - - -static PyObject * -array_dot(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - PyObject *b; - static PyObject *numpycore = NULL; - - if (!PyArg_ParseTuple(args, "O", &b)) { - return NULL; - } - - /* Since blas-dot is exposed only on the Python side, we need to grab it - * from there */ - if (numpycore == NULL) { - numpycore = PyImport_ImportModule("numpy.core"); - if (numpycore == NULL) { - return NULL; - } - } - - return PyObject_CallMethod(numpycore, "dot", "OO", self, b); -} - - -static PyObject * -array_any(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis = MAX_DIMS; - PyArrayObject *out = NULL; - static char *kwlist[] = {"axis", "out", NULL}; - - if 
(!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&", kwlist, - PyArray_AxisConverter, - &axis, - PyArray_OutputConverter, - &out)) - return NULL; - - return PyArray_Any(self, axis, out); -} - - -static PyObject * -array_all(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis = MAX_DIMS; - PyArrayObject *out = NULL; - static char *kwlist[] = {"axis", "out", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&", kwlist, - PyArray_AxisConverter, - &axis, - PyArray_OutputConverter, - &out)) - return NULL; - - return PyArray_All(self, axis, out); -} - - -static PyObject * -array_stddev(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis = MAX_DIMS; - PyArray_Descr *dtype = NULL; - PyArrayObject *out = NULL; - int num; - int ddof = 0; - static char *kwlist[] = {"axis", "dtype", "out", "ddof", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&O&i", kwlist, - PyArray_AxisConverter, - &axis, PyArray_DescrConverter2, - &dtype, - PyArray_OutputConverter, - &out, &ddof)) { - Py_XDECREF(dtype); - return NULL; - } - - num = _get_type_num_double(self->descr, dtype); - Py_XDECREF(dtype); - return __New_PyArray_Std(self, axis, num, out, 0, ddof); -} - - -static PyObject * -array_variance(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis = MAX_DIMS; - PyArray_Descr *dtype = NULL; - PyArrayObject *out = NULL; - int num; - int ddof = 0; - static char *kwlist[] = {"axis", "dtype", "out", "ddof", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O&O&O&i", kwlist, - PyArray_AxisConverter, - &axis, PyArray_DescrConverter2, - &dtype, - PyArray_OutputConverter, - &out, &ddof)) { - Py_XDECREF(dtype); - return NULL; - } - - num = _get_type_num_double(self->descr, dtype); - Py_XDECREF(dtype); - return __New_PyArray_Std(self, axis, num, out, 1, ddof); -} - - -static PyObject * -array_compress(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis = MAX_DIMS; - PyObject *condition; - PyArrayObject *out = NULL; 
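`array_stddev` and `array_variance` above both accept a `ddof` ("delta degrees of freedom") keyword and pass it through to `__New_PyArray_Std`. Its effect is simply the divisor `n - ddof`, as in this plain-Python sketch (assumed helper name, not the C implementation):

```python
def variance(values, ddof=0):
    """Variance with 'delta degrees of freedom': divide by n - ddof.

    ddof=0 gives the population variance; ddof=1 gives the unbiased
    sample estimate.
    """
    n = len(values)
    m = sum(values) / n
    return sum((v - m) ** 2 for v in values) / (n - ddof)
```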
- static char *kwlist[] = {"condition", "axis", "out", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O&O&", kwlist, - &condition, PyArray_AxisConverter, - &axis, - PyArray_OutputConverter, - &out)) { - return NULL; - } - return _ARET(PyArray_Compress(self, condition, axis, out)); -} - - -static PyObject * -array_nonzero(PyArrayObject *self, PyObject *args) -{ - if (!PyArg_ParseTuple(args, "")) { - return NULL; - } - return PyArray_Nonzero(self); -} - - -static PyObject * -array_trace(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis1 = 0, axis2 = 1, offset = 0; - PyArray_Descr *dtype = NULL; - PyArrayObject *out = NULL; - int rtype; - static char *kwlist[] = {"offset", "axis1", "axis2", "dtype", "out", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|iiiO&O&", kwlist, - &offset, &axis1, &axis2, - PyArray_DescrConverter2, &dtype, - PyArray_OutputConverter, &out)) { - Py_XDECREF(dtype); - return NULL; - } - - rtype = _CHKTYPENUM(dtype); - Py_XDECREF(dtype); - return _ARET(PyArray_Trace(self, offset, axis1, axis2, rtype, out)); -} - -#undef _CHKTYPENUM - - -static PyObject * -array_clip(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - PyObject *min = NULL, *max = NULL; - PyArrayObject *out = NULL; - static char *kwlist[] = {"min", "max", "out", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|OOO&", kwlist, - &min, &max, - PyArray_OutputConverter, - &out)) { - return NULL; - } - if (max == NULL && min == NULL) { - PyErr_SetString(PyExc_ValueError, "One of max or min must be given."); - return NULL; - } - return _ARET(PyArray_Clip(self, min, max, out)); -} - - -static PyObject * -array_conjugate(PyArrayObject *self, PyObject *args) -{ - - PyArrayObject *out = NULL; - if (!PyArg_ParseTuple(args, "|O&", - PyArray_OutputConverter, - &out)) { - return NULL; - } - return PyArray_Conjugate(self, out); -} - - -static PyObject * -array_diagonal(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int axis1 = 0, 
axis2 = 1, offset = 0; - static char *kwlist[] = {"offset", "axis1", "axis2", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|iii", kwlist, - &offset, &axis1, &axis2)) { - return NULL; - } - return _ARET(PyArray_Diagonal(self, offset, axis1, axis2)); -} - - -static PyObject * -array_flatten(PyArrayObject *self, PyObject *args) -{ - PyArray_ORDER fortran = PyArray_CORDER; - - if (!PyArg_ParseTuple(args, "|O&", PyArray_OrderConverter, &fortran)) { - return NULL; - } - return PyArray_Flatten(self, fortran); -} - - -static PyObject * -array_ravel(PyArrayObject *self, PyObject *args) -{ - PyArray_ORDER fortran = PyArray_CORDER; - - if (!PyArg_ParseTuple(args, "|O&", PyArray_OrderConverter, - &fortran)) { - return NULL; - } - return PyArray_Ravel(self, fortran); -} - - -static PyObject * -array_round(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - int decimals = 0; - PyArrayObject *out = NULL; - static char *kwlist[] = {"decimals", "out", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|iO&", kwlist, - &decimals, PyArray_OutputConverter, - &out)) { - return NULL; - } - return _ARET(PyArray_Round(self, decimals, out)); -} - - - -static PyObject * -array_setflags(PyArrayObject *self, PyObject *args, PyObject *kwds) -{ - static char *kwlist[] = {"write", "align", "uic", NULL}; - PyObject *write = Py_None; - PyObject *align = Py_None; - PyObject *uic = Py_None; - int flagback = self->flags; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "|OOO", kwlist, - &write, &align, &uic)) - return NULL; - - if (align != Py_None) { - if (PyObject_Not(align)) { - self->flags &= ~ALIGNED; - } - else if (_IsAligned(self)) { - self->flags |= ALIGNED; - } - else { - PyErr_SetString(PyExc_ValueError, - "cannot set aligned flag of mis-"\ - "aligned array to True"); - return NULL; - } - } - - if (uic != Py_None) { - if (PyObject_IsTrue(uic)) { - self->flags = flagback; - PyErr_SetString(PyExc_ValueError, - "cannot set UPDATEIFCOPY " \ - "flag to True"); - return 
NULL; - } - else { - self->flags &= ~UPDATEIFCOPY; - Py_XDECREF(self->base); - self->base = NULL; - } - } - - if (write != Py_None) { - if (PyObject_IsTrue(write)) - if (_IsWriteable(self)) { - self->flags |= WRITEABLE; - } - else { - self->flags = flagback; - PyErr_SetString(PyExc_ValueError, - "cannot set WRITEABLE " \ - "flag to True of this " \ - "array"); \ - return NULL; - } - else - self->flags &= ~WRITEABLE; - } - - Py_INCREF(Py_None); - return Py_None; -} - - -static PyObject * -array_newbyteorder(PyArrayObject *self, PyObject *args) -{ - char endian = PyArray_SWAP; - PyArray_Descr *new; - - if (!PyArg_ParseTuple(args, "|O&", PyArray_ByteorderConverter, - &endian)) { - return NULL; - } - new = PyArray_DescrNewByteorder(self->descr, endian); - if (!new) { - return NULL; - } - return PyArray_View(self, new, NULL); - -} - -NPY_NO_EXPORT PyMethodDef array_methods[] = { - - /* for subtypes */ - {"__array__", - (PyCFunction)array_getarray, - METH_VARARGS, NULL}, - {"__array_prepare__", - (PyCFunction)array_preparearray, - METH_VARARGS, NULL}, - {"__array_wrap__", - (PyCFunction)array_wraparray, - METH_VARARGS, NULL}, - - /* for the copy module */ - {"__copy__", - (PyCFunction)array_copy, - METH_VARARGS, NULL}, - {"__deepcopy__", - (PyCFunction)array_deepcopy, - METH_VARARGS, NULL}, - - /* for Pickling */ - {"__reduce__", - (PyCFunction) array_reduce, - METH_VARARGS, NULL}, - {"__setstate__", - (PyCFunction) array_setstate, - METH_VARARGS, NULL}, - {"dumps", - (PyCFunction) array_dumps, - METH_VARARGS, NULL}, - {"dump", - (PyCFunction) array_dump, - METH_VARARGS, NULL}, - - /* Original and Extended methods added 2005 */ - {"all", - (PyCFunction)array_all, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"any", - (PyCFunction)array_any, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"argmax", - (PyCFunction)array_argmax, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"argmin", - (PyCFunction)array_argmin, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"argsort", - 
(PyCFunction)array_argsort, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"astype", - (PyCFunction)array_cast, - METH_VARARGS, NULL}, - {"byteswap", - (PyCFunction)array_byteswap, - METH_VARARGS, NULL}, - {"choose", - (PyCFunction)array_choose, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"clip", - (PyCFunction)array_clip, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"compress", - (PyCFunction)array_compress, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"conj", - (PyCFunction)array_conjugate, - METH_VARARGS, NULL}, - {"conjugate", - (PyCFunction)array_conjugate, - METH_VARARGS, NULL}, - {"copy", - (PyCFunction)array_copy, - METH_VARARGS, NULL}, - {"cumprod", - (PyCFunction)array_cumprod, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"cumsum", - (PyCFunction)array_cumsum, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"diagonal", - (PyCFunction)array_diagonal, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"dot", - (PyCFunction)array_dot, - METH_VARARGS, NULL}, - {"fill", - (PyCFunction)array_fill, - METH_VARARGS, NULL}, - {"flatten", - (PyCFunction)array_flatten, - METH_VARARGS, NULL}, - {"getfield", - (PyCFunction)array_getfield, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"item", - (PyCFunction)array_toscalar, - METH_VARARGS, NULL}, - {"itemset", - (PyCFunction) array_setscalar, - METH_VARARGS, NULL}, - {"max", - (PyCFunction)array_max, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"mean", - (PyCFunction)array_mean, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"min", - (PyCFunction)array_min, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"newbyteorder", - (PyCFunction)array_newbyteorder, - METH_VARARGS, NULL}, - {"nonzero", - (PyCFunction)array_nonzero, - METH_VARARGS, NULL}, - {"prod", - (PyCFunction)array_prod, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"ptp", - (PyCFunction)array_ptp, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"put", - (PyCFunction)array_put, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"ravel", - (PyCFunction)array_ravel, - METH_VARARGS, NULL}, - {"repeat", - (PyCFunction)array_repeat, 
- METH_VARARGS | METH_KEYWORDS, NULL}, - {"reshape", - (PyCFunction)array_reshape, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"resize", - (PyCFunction)array_resize, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"round", - (PyCFunction)array_round, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"searchsorted", - (PyCFunction)array_searchsorted, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"setfield", - (PyCFunction)array_setfield, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"setflags", - (PyCFunction)array_setflags, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"sort", - (PyCFunction)array_sort, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"squeeze", - (PyCFunction)array_squeeze, - METH_VARARGS, NULL}, - {"std", - (PyCFunction)array_stddev, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"sum", - (PyCFunction)array_sum, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"swapaxes", - (PyCFunction)array_swapaxes, - METH_VARARGS, NULL}, - {"take", - (PyCFunction)array_take, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"tofile", - (PyCFunction)array_tofile, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"tolist", - (PyCFunction)array_tolist, - METH_VARARGS, NULL}, - {"tostring", - (PyCFunction)array_tostring, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"trace", - (PyCFunction)array_trace, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"transpose", - (PyCFunction)array_transpose, - METH_VARARGS, NULL}, - {"var", - (PyCFunction)array_variance, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"view", - (PyCFunction)array_view, - METH_VARARGS | METH_KEYWORDS, NULL}, - {NULL, NULL, 0, NULL} /* sentinel */ -}; - -#undef _ARET diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/methods.h b/pythonPackages/numpy/numpy/core/src/multiarray/methods.h deleted file mode 100755 index 642265ccdc..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/methods.h +++ /dev/null @@ -1,8 +0,0 @@ -#ifndef _NPY_ARRAY_METHODS_H_ -#define _NPY_ARRAY_METHODS_H_ - -#ifdef NPY_ENABLE_SEPARATE_COMPILATION -extern NPY_NO_EXPORT PyMethodDef 
array_methods[]; -#endif - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/multiarray_tests.c.src b/pythonPackages/numpy/numpy/core/src/multiarray/multiarray_tests.c.src deleted file mode 100755 index f99cb98ad0..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/multiarray_tests.c.src +++ /dev/null @@ -1,422 +0,0 @@ -#include -#include "numpy/ndarrayobject.h" - -#include "numpy/npy_3kcompat.h" - -/* - * TODO: - * - Handle mode - */ - -/**begin repeat - * #type = double, int# - * #typenum = NPY_DOUBLE, NPY_INT# - */ -static int copy_@type@(PyArrayIterObject *itx, PyArrayNeighborhoodIterObject *niterx, - npy_intp *bounds, - PyObject **out) -{ - npy_intp i, j; - @type@ *ptr; - npy_intp odims[NPY_MAXDIMS]; - PyArrayObject *aout; - - /* - * For each point in itx, copy the current neighborhood into an array which - * is appended at the output list - */ - for (i = 0; i < itx->size; ++i) { - PyArrayNeighborhoodIter_Reset(niterx); - - for (j = 0; j < itx->ao->nd; ++j) { - odims[j] = bounds[2 * j + 1] - bounds[2 * j] + 1; - } - aout = (PyArrayObject*)PyArray_SimpleNew(itx->ao->nd, odims, @typenum@); - if (aout == NULL) { - return -1; - } - - ptr = (@type@*)aout->data; - - for (j = 0; j < niterx->size; ++j) { - *ptr = *((@type@*)niterx->dataptr); - PyArrayNeighborhoodIter_Next(niterx); - ptr += 1; - } - - Py_INCREF(aout); - PyList_Append(*out, (PyObject*)aout); - Py_DECREF(aout); - PyArray_ITER_NEXT(itx); - } - - return 0; -} -/**end repeat**/ - -static int copy_object(PyArrayIterObject *itx, PyArrayNeighborhoodIterObject *niterx, - npy_intp *bounds, - PyObject **out) -{ - npy_intp i, j; - npy_intp odims[NPY_MAXDIMS]; - PyArrayObject *aout; - PyArray_CopySwapFunc *copyswap = itx->ao->descr->f->copyswap; - npy_int itemsize = PyArray_ITEMSIZE(itx->ao); - - /* - * For each point in itx, copy the current neighborhood into an array which - * is appended at the output list - */ - for (i = 0; i < itx->size; ++i) { - 
PyArrayNeighborhoodIter_Reset(niterx); - - for (j = 0; j < itx->ao->nd; ++j) { - odims[j] = bounds[2 * j + 1] - bounds[2 * j] + 1; - } - aout = (PyArrayObject*)PyArray_SimpleNew(itx->ao->nd, odims, NPY_OBJECT); - if (aout == NULL) { - return -1; - } - - for (j = 0; j < niterx->size; ++j) { - copyswap(aout->data + j * itemsize, niterx->dataptr, 0, NULL); - PyArrayNeighborhoodIter_Next(niterx); - } - - Py_INCREF(aout); - PyList_Append(*out, (PyObject*)aout); - Py_DECREF(aout); - PyArray_ITER_NEXT(itx); - } - - return 0; -} - -static PyObject* -test_neighborhood_iterator(PyObject* NPY_UNUSED(self), PyObject* args) -{ - PyObject *x, *fill, *out, *b; - PyArrayObject *ax, *afill; - PyArrayIterObject *itx; - int i, typenum, mode, st; - npy_intp bounds[NPY_MAXDIMS*2]; - PyArrayNeighborhoodIterObject *niterx; - - if (!PyArg_ParseTuple(args, "OOOi", &x, &b, &fill, &mode)) { - return NULL; - } - - if (!PySequence_Check(b)) { - return NULL; - } - - typenum = PyArray_ObjectType(x, 0); - typenum = PyArray_ObjectType(fill, typenum); - - ax = (PyArrayObject*)PyArray_FromObject(x, typenum, 1, 10); - if (ax == NULL) { - return NULL; - } - if (PySequence_Size(b) != 2 * ax->nd) { - PyErr_SetString(PyExc_ValueError, - "bounds sequence size not compatible with x input"); - goto clean_ax; - } - - out = PyList_New(0); - if (out == NULL) { - goto clean_ax; - } - - itx = (PyArrayIterObject*)PyArray_IterNew(x); - if (itx == NULL) { - goto clean_out; - } - - /* Compute boundaries for the neighborhood iterator */ - for (i = 0; i < 2 * ax->nd; ++i) { - PyObject* bound; - bound = PySequence_GetItem(b, i); - if (bounds == NULL) { - goto clean_itx; - } - if (!PyInt_Check(bound)) { - PyErr_SetString(PyExc_ValueError, "bound not long"); - Py_DECREF(bound); - goto clean_itx; - } - bounds[i] = PyInt_AsLong(bound); - Py_DECREF(bound); - } - - /* Create the neighborhood iterator */ - afill = NULL; - if (mode == NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING) { - afill = (PyArrayObject *)PyArray_FromObject(fill, 
typenum, 0, 0); - if (afill == NULL) { - goto clean_itx; - } - } - - niterx = (PyArrayNeighborhoodIterObject*)PyArray_NeighborhoodIterNew( - (PyArrayIterObject*)itx, bounds, mode, afill); - if (niterx == NULL) { - goto clean_afill; - } - - switch (typenum) { - case NPY_OBJECT: - st = copy_object(itx, niterx, bounds, &out); - break; - case NPY_INT: - st = copy_int(itx, niterx, bounds, &out); - break; - case NPY_DOUBLE: - st = copy_double(itx, niterx, bounds, &out); - break; - default: - PyErr_SetString(PyExc_ValueError, "Type not supported"); - goto clean_niterx; - } - - if (st) { - goto clean_niterx; - } - - Py_DECREF(niterx); - Py_XDECREF(afill); - Py_DECREF(itx); - - Py_DECREF(ax); - - return out; - -clean_niterx: - Py_DECREF(niterx); -clean_afill: - Py_XDECREF(afill); -clean_itx: - Py_DECREF(itx); -clean_out: - Py_DECREF(out); -clean_ax: - Py_DECREF(ax); - return NULL; -} - -static int -copy_double_double(PyArrayNeighborhoodIterObject *itx, - PyArrayNeighborhoodIterObject *niterx, - npy_intp *bounds, - PyObject **out) -{ - npy_intp i, j; - double *ptr; - npy_intp odims[NPY_MAXDIMS]; - PyArrayObject *aout; - - /* - * For each point in itx, copy the current neighborhood into an array which - * is appended at the output list - */ - PyArrayNeighborhoodIter_Reset(itx); - for (i = 0; i < itx->size; ++i) { - for (j = 0; j < itx->ao->nd; ++j) { - odims[j] = bounds[2 * j + 1] - bounds[2 * j] + 1; - } - aout = (PyArrayObject*)PyArray_SimpleNew(itx->ao->nd, odims, NPY_DOUBLE); - if (aout == NULL) { - return -1; - } - - ptr = (double*)aout->data; - - PyArrayNeighborhoodIter_Reset(niterx); - for (j = 0; j < niterx->size; ++j) { - *ptr = *((double*)niterx->dataptr); - ptr += 1; - PyArrayNeighborhoodIter_Next(niterx); - } - Py_INCREF(aout); - PyList_Append(*out, (PyObject*)aout); - Py_DECREF(aout); - PyArrayNeighborhoodIter_Next(itx); - } - return 0; -} - -static PyObject* -test_neighborhood_iterator_oob(PyObject* NPY_UNUSED(self), PyObject* args) -{ - PyObject *x, *out, *b1, 
*b2; - PyArrayObject *ax; - PyArrayIterObject *itx; - int i, typenum, mode1, mode2, st; - npy_intp bounds[NPY_MAXDIMS*2]; - PyArrayNeighborhoodIterObject *niterx1, *niterx2; - - if (!PyArg_ParseTuple(args, "OOiOi", &x, &b1, &mode1, &b2, &mode2)) { - return NULL; - } - - if (!PySequence_Check(b1) || !PySequence_Check(b2)) { - return NULL; - } - - typenum = PyArray_ObjectType(x, 0); - - ax = (PyArrayObject*)PyArray_FromObject(x, typenum, 1, 10); - if (ax == NULL) { - return NULL; - } - if (PySequence_Size(b1) != 2 * ax->nd) { - PyErr_SetString(PyExc_ValueError, - "bounds sequence 1 size not compatible with x input"); - goto clean_ax; - } - if (PySequence_Size(b2) != 2 * ax->nd) { - PyErr_SetString(PyExc_ValueError, - "bounds sequence 2 size not compatible with x input"); - goto clean_ax; - } - - out = PyList_New(0); - if (out == NULL) { - goto clean_ax; - } - - itx = (PyArrayIterObject*)PyArray_IterNew(x); - if (itx == NULL) { - goto clean_out; - } - - /* Compute boundaries for the neighborhood iterator */ - for (i = 0; i < 2 * ax->nd; ++i) { - PyObject* bound; - bound = PySequence_GetItem(b1, i); - if (bound == NULL) { - goto clean_itx; - } - if (!PyInt_Check(bound)) { - PyErr_SetString(PyExc_ValueError, "bound not long"); - Py_DECREF(bound); - goto clean_itx; - } - bounds[i] = PyInt_AsLong(bound); - Py_DECREF(bound); - } - - /* Create the neighborhood iterator */ - niterx1 = (PyArrayNeighborhoodIterObject*)PyArray_NeighborhoodIterNew( - (PyArrayIterObject*)itx, bounds, - mode1, NULL); - if (niterx1 == NULL) { - goto clean_itx; - } - - for (i = 0; i < 2 * ax->nd; ++i) { - PyObject* bound; - bound = PySequence_GetItem(b2, i); - if (bound == NULL) { - goto clean_niterx1; - } - if (!PyInt_Check(bound)) { - PyErr_SetString(PyExc_ValueError, "bound not long"); - Py_DECREF(bound); - goto clean_niterx1; - } - bounds[i] = PyInt_AsLong(bound); - Py_DECREF(bound); - } - - niterx2 = (PyArrayNeighborhoodIterObject*)PyArray_NeighborhoodIterNew( - (PyArrayIterObject*)niterx1, bounds, - 
mode2, NULL); - if (niterx2 == NULL) { - goto clean_niterx1; - } - - switch (typenum) { - case NPY_DOUBLE: - st = copy_double_double(niterx1, niterx2, bounds, &out); - break; - default: - PyErr_SetString(PyExc_ValueError, "Type not supported"); - goto clean_niterx2; - } - - if (st) { - goto clean_niterx2; - } - - Py_DECREF(niterx2); - Py_DECREF(niterx1); - Py_DECREF(itx); - Py_DECREF(ax); - return out; - -clean_niterx2: - Py_DECREF(niterx2); -clean_niterx1: - Py_DECREF(niterx1); -clean_itx: - Py_DECREF(itx); -clean_out: - Py_DECREF(out); -clean_ax: - Py_DECREF(ax); - return NULL; -} - -static PyMethodDef Multiarray_TestsMethods[] = { - {"test_neighborhood_iterator", - test_neighborhood_iterator, - METH_VARARGS, NULL}, - {"test_neighborhood_iterator_oob", - test_neighborhood_iterator_oob, - METH_VARARGS, NULL}, - {NULL, NULL, 0, NULL} /* Sentinel */ -}; - - -#if defined(NPY_PY3K) -static struct PyModuleDef moduledef = { - PyModuleDef_HEAD_INIT, - "multiarray_tests", - NULL, - -1, - Multiarray_TestsMethods, - NULL, - NULL, - NULL, - NULL -}; -#endif - -#if defined(NPY_PY3K) -#define RETVAL m -PyObject *PyInit_multiarray_tests(void) -#else -#define RETVAL -PyMODINIT_FUNC -initmultiarray_tests(void) -#endif -{ - PyObject *m; - -#if defined(NPY_PY3K) - m = PyModule_Create(&moduledef); -#else - m = Py_InitModule("multiarray_tests", Multiarray_TestsMethods); -#endif - if (m == NULL) { - return RETVAL; - } - import_array(); - if (PyErr_Occurred()) { - PyErr_SetString(PyExc_RuntimeError, - "cannot load multiarray_tests module."); - } - return RETVAL; -} diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/multiarraymodule.c b/pythonPackages/numpy/numpy/core/src/multiarray/multiarraymodule.c deleted file mode 100755 index ad843d18c4..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/multiarraymodule.c +++ /dev/null @@ -1,3059 +0,0 @@ -/* - Python Multiarray Module -- A useful collection of functions for creating and - using ndarrays - - Original file - 
Copyright (c) 1995, 1996, 1997 Jim Hugunin, hugunin@mit.edu - - Modified for numpy in 2005 - - Travis E. Oliphant - oliphant@ee.byu.edu - Brigham Young University -*/ - -/* $Id: multiarraymodule.c,v 1.36 2005/09/14 00:14:00 teoliphant Exp $ */ - -#define PY_SSIZE_T_CLEAN -#include "Python.h" -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "numpy/npy_math.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -NPY_NO_EXPORT int NPY_NUMUSERTYPES = 0; - -#define PyAO PyArrayObject - -/* Internal APIs */ -#include "arraytypes.h" -#include "arrayobject.h" -#include "hashdescr.h" -#include "descriptor.h" -#include "calculation.h" -#include "number.h" -#include "scalartypes.h" -#include "numpymemoryview.h" - -NPY_NO_EXPORT PyTypeObject PyBigArray_Type; - -/*NUMPY_API - * Get Priority from object - */ -NPY_NO_EXPORT double -PyArray_GetPriority(PyObject *obj, double default_) -{ - PyObject *ret; - double priority = PyArray_PRIORITY; - - if (PyArray_CheckExact(obj)) - return priority; - - ret = PyObject_GetAttrString(obj, "__array_priority__"); - if (ret != NULL) { - priority = PyFloat_AsDouble(ret); - } - if (PyErr_Occurred()) { - PyErr_Clear(); - priority = default_; - } - Py_XDECREF(ret); - return priority; -} - -/*NUMPY_API - * Multiply a List of ints - */ -NPY_NO_EXPORT int -PyArray_MultiplyIntList(int *l1, int n) -{ - int s = 1; - - while (n--) { - s *= (*l1++); - } - return s; -} - -/*NUMPY_API - * Multiply a List - */ -NPY_NO_EXPORT intp -PyArray_MultiplyList(intp *l1, int n) -{ - intp s = 1; - - while (n--) { - s *= (*l1++); - } - return s; -} - -/*NUMPY_API - * Multiply a List of Non-negative numbers with over-flow detection. 
- */ -NPY_NO_EXPORT intp -PyArray_OverflowMultiplyList(intp *l1, int n) -{ - intp prod = 1; - intp imax = NPY_MAX_INTP; - int i; - - for (i = 0; i < n; i++) { - intp dim = l1[i]; - - if (dim == 0) { - return 0; - } - if (dim > imax) { - return -1; - } - imax /= dim; - prod *= dim; - } - return prod; -} - -/*NUMPY_API - * Produce a pointer into array - */ -NPY_NO_EXPORT void * -PyArray_GetPtr(PyArrayObject *obj, intp* ind) -{ - int n = obj->nd; - intp *strides = obj->strides; - char *dptr = obj->data; - - while (n--) { - dptr += (*strides++) * (*ind++); - } - return (void *)dptr; -} - -/*NUMPY_API - * Compare Lists - */ -NPY_NO_EXPORT int -PyArray_CompareLists(intp *l1, intp *l2, int n) -{ - int i; - - for (i = 0; i < n; i++) { - if (l1[i] != l2[i]) { - return 0; - } - } - return 1; -} - -/* - * simulates a C-style 1-3 dimensional array which can be accesed using - * ptr[i] or ptr[i][j] or ptr[i][j][k] -- requires pointer allocation - * for 2-d and 3-d. - * - * For 2-d and up, ptr is NOT equivalent to a statically defined - * 2-d or 3-d array. In particular, it cannot be passed into a - * function that requires a true pointer to a fixed-size array. 
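PyArray_OverflowMultiplyList above avoids a post-hoc overflow test by shrinking a running headroom value: before each multiply it checks the next dimension against the remaining headroom, then divides the headroom by that dimension. A self-contained sketch of the same technique, assuming a hypothetical name `overflow_multiply_list` with standard `intptr_t` standing in for `intp`:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the overflow-checked product used by PyArray_OverflowMultiplyList:
 * returns the product of n non-negative dimensions, 0 if any dimension is 0,
 * or -1 if the product would overflow intptr_t. */
static intptr_t overflow_multiply_list(const intptr_t *dims, int n)
{
    intptr_t prod = 1;
    intptr_t imax = INTPTR_MAX;   /* remaining headroom: prod * dim must stay <= INTPTR_MAX */
    int i;

    for (i = 0; i < n; i++) {
        intptr_t dim = dims[i];

        if (dim == 0) {
            return 0;             /* empty shape: product is exactly 0 */
        }
        if (dim > imax) {
            return -1;            /* prod * dim would exceed INTPTR_MAX */
        }
        imax /= dim;              /* shrink the headroom for later factors */
        prod *= dim;
    }
    return prod;
}
```

Dividing the headroom rather than checking `prod * dim` after the fact keeps the check free of undefined signed overflow.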
- */ - -/*NUMPY_API - * Simulate a C-array - * steals a reference to typedescr -- can be NULL - */ -NPY_NO_EXPORT int -PyArray_AsCArray(PyObject **op, void *ptr, intp *dims, int nd, - PyArray_Descr* typedescr) -{ - PyArrayObject *ap; - intp n, m, i, j; - char **ptr2; - char ***ptr3; - - if ((nd < 1) || (nd > 3)) { - PyErr_SetString(PyExc_ValueError, - "C arrays of only 1-3 dimensions available"); - Py_XDECREF(typedescr); - return -1; - } - if ((ap = (PyArrayObject*)PyArray_FromAny(*op, typedescr, nd, nd, - CARRAY, NULL)) == NULL) { - return -1; - } - switch(nd) { - case 1: - *((char **)ptr) = ap->data; - break; - case 2: - n = ap->dimensions[0]; - ptr2 = (char **)_pya_malloc(n * sizeof(char *)); - if (!ptr2) { - goto fail; - } - for (i = 0; i < n; i++) { - ptr2[i] = ap->data + i*ap->strides[0]; - } - *((char ***)ptr) = ptr2; - break; - case 3: - n = ap->dimensions[0]; - m = ap->dimensions[1]; - ptr3 = (char ***)_pya_malloc(n*(m+1) * sizeof(char *)); - if (!ptr3) { - goto fail; - } - for (i = 0; i < n; i++) { - ptr3[i] = ptr3[n + (m-1)*i]; - for (j = 0; j < m; j++) { - ptr3[i][j] = ap->data + i*ap->strides[0] + j*ap->strides[1]; - } - } - *((char ****)ptr) = ptr3; - } - memcpy(dims, ap->dimensions, nd*sizeof(intp)); - *op = (PyObject *)ap; - return 0; - - fail: - PyErr_SetString(PyExc_MemoryError, "no memory"); - return -1; -} - -/* Deprecated --- Use PyArray_AsCArray instead */ - -/*NUMPY_API - * Convert to a 1D C-array - */ -NPY_NO_EXPORT int -PyArray_As1D(PyObject **op, char **ptr, int *d1, int typecode) -{ - intp newd1; - PyArray_Descr *descr; - char msg[] = "PyArray_As1D: use PyArray_AsCArray."; - - if (DEPRECATE(msg) < 0) { - return -1; - } - descr = PyArray_DescrFromType(typecode); - if (PyArray_AsCArray(op, (void *)ptr, &newd1, 1, descr) == -1) { - return -1; - } - *d1 = (int) newd1; - return 0; -} - -/*NUMPY_API - * Convert to a 2D C-array - */ -NPY_NO_EXPORT int -PyArray_As2D(PyObject **op, char ***ptr, int *d1, int *d2, int typecode) -{ - intp newdims[2]; 
- PyArray_Descr *descr; - char msg[] = "PyArray_As2D: use PyArray_AsCArray."; - - if (DEPRECATE(msg) < 0) { - return -1; - } - descr = PyArray_DescrFromType(typecode); - if (PyArray_AsCArray(op, (void *)ptr, newdims, 2, descr) == -1) { - return -1; - } - *d1 = (int ) newdims[0]; - *d2 = (int ) newdims[1]; - return 0; -} - -/* End Deprecated */ - -/*NUMPY_API - * Free pointers created if As2D is called - */ -NPY_NO_EXPORT int -PyArray_Free(PyObject *op, void *ptr) -{ - PyArrayObject *ap = (PyArrayObject *)op; - - if ((ap->nd < 1) || (ap->nd > 3)) { - return -1; - } - if (ap->nd >= 2) { - _pya_free(ptr); - } - Py_DECREF(ap); - return 0; -} - - -static PyObject * -_swap_and_concat(PyObject *op, int axis, int n) -{ - PyObject *newtup = NULL; - PyObject *otmp, *arr; - int i; - - newtup = PyTuple_New(n); - if (newtup == NULL) { - return NULL; - } - for (i = 0; i < n; i++) { - otmp = PySequence_GetItem(op, i); - arr = PyArray_FROM_O(otmp); - Py_DECREF(otmp); - if (arr == NULL) { - goto fail; - } - otmp = PyArray_SwapAxes((PyArrayObject *)arr, axis, 0); - Py_DECREF(arr); - if (otmp == NULL) { - goto fail; - } - PyTuple_SET_ITEM(newtup, i, otmp); - } - otmp = PyArray_Concatenate(newtup, 0); - Py_DECREF(newtup); - if (otmp == NULL) { - return NULL; - } - arr = PyArray_SwapAxes((PyArrayObject *)otmp, axis, 0); - Py_DECREF(otmp); - return arr; - - fail: - Py_DECREF(newtup); - return NULL; -} - -/*NUMPY_API - * Concatenate - * - * Concatenate an arbitrary Python sequence into an array. - * op is a python object supporting the sequence interface. - * Its elements will be concatenated together to form a single - * multidimensional array. 
If axis is MAX_DIMS or bigger, then - * each sequence object will be flattened before concatenation -*/ -NPY_NO_EXPORT PyObject * -PyArray_Concatenate(PyObject *op, int axis) -{ - PyArrayObject *ret, **mps; - PyObject *otmp; - int i, n, tmp, nd = 0, new_dim; - char *data; - PyTypeObject *subtype; - double prior1, prior2; - intp numbytes; - - n = PySequence_Length(op); - if (n == -1) { - return NULL; - } - if (n == 0) { - PyErr_SetString(PyExc_ValueError, - "concatenation of zero-length sequences is "\ - "impossible"); - return NULL; - } - - if ((axis < 0) || ((0 < axis) && (axis < MAX_DIMS))) { - return _swap_and_concat(op, axis, n); - } - mps = PyArray_ConvertToCommonType(op, &n); - if (mps == NULL) { - return NULL; - } - - /* - * Make sure these arrays are legal to concatenate. - * Must have same dimensions except d0 - */ - prior1 = PyArray_PRIORITY; - subtype = &PyArray_Type; - ret = NULL; - for (i = 0; i < n; i++) { - if (axis >= MAX_DIMS) { - otmp = PyArray_Ravel(mps[i],0); - Py_DECREF(mps[i]); - mps[i] = (PyArrayObject *)otmp; - } - if (Py_TYPE(mps[i]) != subtype) { - prior2 = PyArray_GetPriority((PyObject *)(mps[i]), 0.0); - if (prior2 > prior1) { - prior1 = prior2; - subtype = Py_TYPE(mps[i]); - } - } - } - - new_dim = 0; - for (i = 0; i < n; i++) { - if (mps[i] == NULL) { - goto fail; - } - if (i == 0) { - nd = mps[i]->nd; - } - else { - if (nd != mps[i]->nd) { - PyErr_SetString(PyExc_ValueError, - "arrays must have same "\ - "number of dimensions"); - goto fail; - } - if (!PyArray_CompareLists(mps[0]->dimensions+1, - mps[i]->dimensions+1, - nd-1)) { - PyErr_SetString(PyExc_ValueError, - "array dimensions must "\ - "agree except for d_0"); - goto fail; - } - } - if (nd == 0) { - PyErr_SetString(PyExc_ValueError, - "0-d arrays can't be concatenated"); - goto fail; - } - new_dim += mps[i]->dimensions[0]; - } - tmp = mps[0]->dimensions[0]; - mps[0]->dimensions[0] = new_dim; - Py_INCREF(mps[0]->descr); - ret = (PyArrayObject *)PyArray_NewFromDescr(subtype, - 
mps[0]->descr, nd, - mps[0]->dimensions, - NULL, NULL, 0, - (PyObject *)ret); - mps[0]->dimensions[0] = tmp; - - if (ret == NULL) { - goto fail; - } - data = ret->data; - for (i = 0; i < n; i++) { - numbytes = PyArray_NBYTES(mps[i]); - memcpy(data, mps[i]->data, numbytes); - data += numbytes; - } - - PyArray_INCREF(ret); - for (i = 0; i < n; i++) { - Py_XDECREF(mps[i]); - } - PyDataMem_FREE(mps); - return (PyObject *)ret; - - fail: - Py_XDECREF(ret); - for (i = 0; i < n; i++) { - Py_XDECREF(mps[i]); - } - PyDataMem_FREE(mps); - return NULL; -} - -static int -_signbit_set(PyArrayObject *arr) -{ - static char bitmask = (char) 0x80; - char *ptr; /* points to the byte to test */ - char byteorder; - int elsize; - - elsize = arr->descr->elsize; - byteorder = arr->descr->byteorder; - ptr = arr->data; - if (elsize > 1 && - (byteorder == PyArray_LITTLE || - (byteorder == PyArray_NATIVE && - PyArray_ISNBO(PyArray_LITTLE)))) { - ptr += elsize - 1; - } - return ((*ptr & bitmask) != 0); -} - - -/*NUMPY_API - * ScalarKind - */ -NPY_NO_EXPORT NPY_SCALARKIND -PyArray_ScalarKind(int typenum, PyArrayObject **arr) -{ - if (PyTypeNum_ISSIGNED(typenum)) { - if (arr && _signbit_set(*arr)) { - return PyArray_INTNEG_SCALAR; - } - else { - return PyArray_INTPOS_SCALAR; - } - } - if (PyTypeNum_ISFLOAT(typenum)) { - return PyArray_FLOAT_SCALAR; - } - if (PyTypeNum_ISUNSIGNED(typenum)) { - return PyArray_INTPOS_SCALAR; - } - if (PyTypeNum_ISCOMPLEX(typenum)) { - return PyArray_COMPLEX_SCALAR; - } - if (PyTypeNum_ISBOOL(typenum)) { - return PyArray_BOOL_SCALAR; - } - - if (PyTypeNum_ISUSERDEF(typenum)) { - NPY_SCALARKIND retval; - PyArray_Descr* descr = PyArray_DescrFromType(typenum); - - if (descr->f->scalarkind) { - retval = descr->f->scalarkind((arr ? 
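The `_signbit_set` helper above inspects the raw bytes of the first element, stepping to the last byte when a multi-byte integer is stored little-endian so that the byte tested is the one carrying the sign bit. A minimal host-independent sketch of that byte-selection logic, assuming a hypothetical name `signbit_set` (not a NumPy API):

```c
#include <assert.h>

/* Sketch of the byte selection in _signbit_set: given the raw bytes of one
 * integer element, test the sign bit. For multi-byte little-endian storage
 * the most significant byte (the one holding the sign bit) is stored last. */
static int signbit_set(const unsigned char *data, int elsize, int little_endian)
{
    const unsigned char *ptr = data;   /* points to the byte to test */

    if (elsize > 1 && little_endian) {
        ptr += elsize - 1;             /* jump to the most significant byte */
    }
    return (*ptr & 0x80) != 0;
}
```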
*arr : NULL)); - } - else { - retval = PyArray_NOSCALAR; - } - Py_DECREF(descr); - return retval; - } - return PyArray_OBJECT_SCALAR; -} - -/*NUMPY_API*/ -NPY_NO_EXPORT int -PyArray_CanCoerceScalar(int thistype, int neededtype, - NPY_SCALARKIND scalar) -{ - PyArray_Descr* from; - int *castlist; - - if (scalar == PyArray_NOSCALAR) { - return PyArray_CanCastSafely(thistype, neededtype); - } - from = PyArray_DescrFromType(thistype); - if (from->f->cancastscalarkindto - && (castlist = from->f->cancastscalarkindto[scalar])) { - while (*castlist != PyArray_NOTYPE) { - if (*castlist++ == neededtype) { - Py_DECREF(from); - return 1; - } - } - } - Py_DECREF(from); - - switch(scalar) { - case PyArray_BOOL_SCALAR: - case PyArray_OBJECT_SCALAR: - return PyArray_CanCastSafely(thistype, neededtype); - default: - if (PyTypeNum_ISUSERDEF(neededtype)) { - return FALSE; - } - switch(scalar) { - case PyArray_INTPOS_SCALAR: - return (neededtype >= PyArray_BYTE); - case PyArray_INTNEG_SCALAR: - return (neededtype >= PyArray_BYTE) - && !(PyTypeNum_ISUNSIGNED(neededtype)); - case PyArray_FLOAT_SCALAR: - return (neededtype >= PyArray_FLOAT); - case PyArray_COMPLEX_SCALAR: - return (neededtype >= PyArray_CFLOAT); - default: - /* should never get here... */ - return 1; - } - } -} - -/* - * Make a new empty array, of the passed size, of a type that takes the - * priority of ap1 and ap2 into account. - */ -static PyArrayObject * -new_array_for_sum(PyArrayObject *ap1, PyArrayObject *ap2, - int nd, intp dimensions[], int typenum) -{ - PyArrayObject *ret; - PyTypeObject *subtype; - double prior1, prior2; - /* - * Need to choose an output array that can hold a sum - * -- use priority to determine which subtype. - */ - if (Py_TYPE(ap2) != Py_TYPE(ap1)) { - prior2 = PyArray_GetPriority((PyObject *)ap2, 0.0); - prior1 = PyArray_GetPriority((PyObject *)ap1, 0.0); - subtype = (prior2 > prior1 ? 
Py_TYPE(ap2) : Py_TYPE(ap1)); - } - else { - prior1 = prior2 = 0.0; - subtype = Py_TYPE(ap1); - } - - ret = (PyArrayObject *)PyArray_New(subtype, nd, dimensions, - typenum, NULL, NULL, 0, 0, - (PyObject *) - (prior2 > prior1 ? ap2 : ap1)); - return ret; -} - -/* Could perhaps be redone to not make contiguous arrays */ - -/*NUMPY_API - * Numeric.innerproduct(a,v) - */ -NPY_NO_EXPORT PyObject * -PyArray_InnerProduct(PyObject *op1, PyObject *op2) -{ - PyArrayObject *ap1, *ap2, *ret = NULL; - PyArrayIterObject *it1, *it2; - intp i, j, l; - int typenum, nd, axis; - intp is1, is2, os; - char *op; - intp dimensions[MAX_DIMS]; - PyArray_DotFunc *dot; - PyArray_Descr *typec; - NPY_BEGIN_THREADS_DEF; - - typenum = PyArray_ObjectType(op1, 0); - typenum = PyArray_ObjectType(op2, typenum); - - typec = PyArray_DescrFromType(typenum); - Py_INCREF(typec); - ap1 = (PyArrayObject *)PyArray_FromAny(op1, typec, 0, 0, ALIGNED, NULL); - if (ap1 == NULL) { - Py_DECREF(typec); - return NULL; - } - ap2 = (PyArrayObject *)PyArray_FromAny(op2, typec, 0, 0, ALIGNED, NULL); - if (ap2 == NULL) { - goto fail; - } - if (ap1->nd == 0 || ap2->nd == 0) { - ret = (ap1->nd == 0 ? ap1 : ap2); - ret = (PyArrayObject *)Py_TYPE(ret)->tp_as_number->nb_multiply( - (PyObject *)ap1, (PyObject *)ap2); - Py_DECREF(ap1); - Py_DECREF(ap2); - return (PyObject *)ret; - } - - l = ap1->dimensions[ap1->nd - 1]; - if (ap2->dimensions[ap2->nd - 1] != l) { - PyErr_SetString(PyExc_ValueError, "matrices are not aligned"); - goto fail; - } - - nd = ap1->nd + ap2->nd - 2; - j = 0; - for (i = 0; i < ap1->nd - 1; i++) { - dimensions[j++] = ap1->dimensions[i]; - } - for (i = 0; i < ap2->nd - 1; i++) { - dimensions[j++] = ap2->dimensions[i]; - } - - /* - * Need to choose an output array that can hold a sum - * -- use priority to determine which subtype. 
- */ - ret = new_array_for_sum(ap1, ap2, nd, dimensions, typenum); - if (ret == NULL) { - goto fail; - } - dot = (ret->descr->f->dotfunc); - if (dot == NULL) { - PyErr_SetString(PyExc_ValueError, - "dot not available for this type"); - goto fail; - } - is1 = ap1->strides[ap1->nd - 1]; - is2 = ap2->strides[ap2->nd - 1]; - op = ret->data; os = ret->descr->elsize; - axis = ap1->nd - 1; - it1 = (PyArrayIterObject *) PyArray_IterAllButAxis((PyObject *)ap1, &axis); - axis = ap2->nd - 1; - it2 = (PyArrayIterObject *) PyArray_IterAllButAxis((PyObject *)ap2, &axis); - NPY_BEGIN_THREADS_DESCR(ap2->descr); - while (1) { - while (it2->index < it2->size) { - dot(it1->dataptr, is1, it2->dataptr, is2, op, l, ret); - op += os; - PyArray_ITER_NEXT(it2); - } - PyArray_ITER_NEXT(it1); - if (it1->index >= it1->size) { - break; - } - PyArray_ITER_RESET(it2); - } - NPY_END_THREADS_DESCR(ap2->descr); - Py_DECREF(it1); - Py_DECREF(it2); - if (PyErr_Occurred()) { - goto fail; - } - Py_DECREF(ap1); - Py_DECREF(ap2); - return (PyObject *)ret; - - fail: - Py_XDECREF(ap1); - Py_XDECREF(ap2); - Py_XDECREF(ret); - return NULL; -} - - -/*NUMPY_API - *Numeric.matrixproduct(a,v) - * just like inner product but does the swapaxes stuff on the fly - */ -NPY_NO_EXPORT PyObject * -PyArray_MatrixProduct(PyObject *op1, PyObject *op2) -{ - PyArrayObject *ap1, *ap2, *ret = NULL; - PyArrayIterObject *it1, *it2; - intp i, j, l; - int typenum, nd, axis, matchDim; - intp is1, is2, os; - char *op; - intp dimensions[MAX_DIMS]; - PyArray_DotFunc *dot; - PyArray_Descr *typec; - NPY_BEGIN_THREADS_DEF; - - typenum = PyArray_ObjectType(op1, 0); - typenum = PyArray_ObjectType(op2, typenum); - typec = PyArray_DescrFromType(typenum); - Py_INCREF(typec); - ap1 = (PyArrayObject *)PyArray_FromAny(op1, typec, 0, 0, ALIGNED, NULL); - if (ap1 == NULL) { - Py_DECREF(typec); - return NULL; - } - ap2 = (PyArrayObject *)PyArray_FromAny(op2, typec, 0, 0, ALIGNED, NULL); - if (ap2 == NULL) { - goto fail; - } - if (ap1->nd == 0 || 
ap2->nd == 0) { - ret = (ap1->nd == 0 ? ap1 : ap2); - ret = (PyArrayObject *)Py_TYPE(ret)->tp_as_number->nb_multiply( - (PyObject *)ap1, (PyObject *)ap2); - Py_DECREF(ap1); - Py_DECREF(ap2); - return (PyObject *)ret; - } - l = ap1->dimensions[ap1->nd - 1]; - if (ap2->nd > 1) { - matchDim = ap2->nd - 2; - } - else { - matchDim = 0; - } - if (ap2->dimensions[matchDim] != l) { - PyErr_SetString(PyExc_ValueError, "objects are not aligned"); - goto fail; - } - nd = ap1->nd + ap2->nd - 2; - if (nd > NPY_MAXDIMS) { - PyErr_SetString(PyExc_ValueError, "dot: too many dimensions in result"); - goto fail; - } - j = 0; - for (i = 0; i < ap1->nd - 1; i++) { - dimensions[j++] = ap1->dimensions[i]; - } - for (i = 0; i < ap2->nd - 2; i++) { - dimensions[j++] = ap2->dimensions[i]; - } - if(ap2->nd > 1) { - dimensions[j++] = ap2->dimensions[ap2->nd-1]; - } - /* - fprintf(stderr, "nd=%d dimensions=", nd); - for(i=0; i<j; i++) { - fprintf(stderr, "%d ", (int)dimensions[i]); - } - fprintf(stderr, "\n"); - */ - is1 = ap1->strides[ap1->nd-1]; is2 = ap2->strides[matchDim]; - /* Choose which subtype to return */ - ret = new_array_for_sum(ap1, ap2, nd, dimensions, typenum); - if (ret == NULL) { - goto fail; - } - /* Ensure that multiarray.dot(<Nx0>,<0xM>) -> zeros((N,M)) */ - if (PyArray_SIZE(ap1) == 0 && PyArray_SIZE(ap2) == 0) { - memset(PyArray_DATA(ret), 0, PyArray_NBYTES(ret)); - } - else { - /* Ensure that multiarray.dot([],[]) -> 0 */ - memset(PyArray_DATA(ret), 0, PyArray_ITEMSIZE(ret)); - } - - dot = ret->descr->f->dotfunc; - if (dot == NULL) { - PyErr_SetString(PyExc_ValueError, - "dot not available for this type"); - goto fail; - } - - op = ret->data; os = ret->descr->elsize; - axis = ap1->nd-1; - it1 = (PyArrayIterObject *) - PyArray_IterAllButAxis((PyObject *)ap1, &axis); - it2 = (PyArrayIterObject *) - PyArray_IterAllButAxis((PyObject *)ap2, &matchDim); - NPY_BEGIN_THREADS_DESCR(ap2->descr); - while (1) { - while (it2->index < it2->size) { - dot(it1->dataptr, is1, it2->dataptr, is2, op, l, ret); - op += os; - PyArray_ITER_NEXT(it2); - } - PyArray_ITER_NEXT(it1); - if (it1->index >= 
it1->size) { - break; - } - PyArray_ITER_RESET(it2); - } - NPY_END_THREADS_DESCR(ap2->descr); - Py_DECREF(it1); - Py_DECREF(it2); - if (PyErr_Occurred()) { - /* only for OBJECT arrays */ - goto fail; - } - Py_DECREF(ap1); - Py_DECREF(ap2); - return (PyObject *)ret; - - fail: - Py_XDECREF(ap1); - Py_XDECREF(ap2); - Py_XDECREF(ret); - return NULL; -} - -/*NUMPY_API - * Fast Copy and Transpose - */ -NPY_NO_EXPORT PyObject * -PyArray_CopyAndTranspose(PyObject *op) -{ - PyObject *ret, *arr; - int nd; - intp dims[2]; - intp i,j; - int elsize, str2; - char *iptr; - char *optr; - - /* make sure it is well-behaved */ - arr = PyArray_FromAny(op, NULL, 0, 0, CARRAY, NULL); - if (arr == NULL) { - return NULL; - } - nd = PyArray_NDIM(arr); - if (nd == 1) { - /* we will give in to old behavior */ - ret = PyArray_Copy((PyArrayObject *)arr); - Py_DECREF(arr); - return ret; - } - else if (nd != 2) { - Py_DECREF(arr); - PyErr_SetString(PyExc_ValueError, - "only 2-d arrays are allowed"); - return NULL; - } - - /* Now construct output array */ - dims[0] = PyArray_DIM(arr,1); - dims[1] = PyArray_DIM(arr,0); - elsize = PyArray_ITEMSIZE(arr); - Py_INCREF(PyArray_DESCR(arr)); - ret = PyArray_NewFromDescr(Py_TYPE(arr), - PyArray_DESCR(arr), - 2, dims, - NULL, NULL, 0, arr); - if (ret == NULL) { - Py_DECREF(arr); - return NULL; - } - - /* do 2-d loop */ - NPY_BEGIN_ALLOW_THREADS; - optr = PyArray_DATA(ret); - str2 = elsize*dims[0]; - for (i = 0; i < dims[0]; i++) { - iptr = PyArray_BYTES(arr) + i*elsize; - for (j = 0; j < dims[1]; j++) { - /* optr[i,j] = iptr[j,i] */ - memcpy(optr, iptr, elsize); - optr += elsize; - iptr += str2; - } - } - NPY_END_ALLOW_THREADS; - Py_DECREF(arr); - return ret; -} - -/* - * Implementation which is common between PyArray_Correlate and PyArray_Correlate2 - * - * inverted is set to 1 if computed correlate(ap2, ap1), 0 otherwise - */ -static PyArrayObject* -_pyarray_correlate(PyArrayObject *ap1, PyArrayObject *ap2, int typenum, - int mode, int *inverted) -{ - 
PyArrayObject *ret; - intp length; - intp i, n1, n2, n, n_left, n_right; - intp is1, is2, os; - char *ip1, *ip2, *op; - PyArray_DotFunc *dot; - - NPY_BEGIN_THREADS_DEF; - - n1 = ap1->dimensions[0]; - n2 = ap2->dimensions[0]; - if (n1 < n2) { - ret = ap1; - ap1 = ap2; - ap2 = ret; - ret = NULL; - i = n1; - n1 = n2; - n2 = i; - *inverted = 1; - } else { - *inverted = 0; - } - - length = n1; - n = n2; - switch(mode) { - case 0: - length = length - n + 1; - n_left = n_right = 0; - break; - case 1: - n_left = (intp)(n/2); - n_right = n - n_left - 1; - break; - case 2: - n_right = n - 1; - n_left = n - 1; - length = length + n - 1; - break; - default: - PyErr_SetString(PyExc_ValueError, "mode must be 0, 1, or 2"); - return NULL; - } - - /* - * Need to choose an output array that can hold a sum - * -- use priority to determine which subtype. - */ - ret = new_array_for_sum(ap1, ap2, 1, &length, typenum); - if (ret == NULL) { - return NULL; - } - dot = ret->descr->f->dotfunc; - if (dot == NULL) { - PyErr_SetString(PyExc_ValueError, - "function not available for this data type"); - goto clean_ret; - } - - NPY_BEGIN_THREADS_DESCR(ret->descr); - is1 = ap1->strides[0]; - is2 = ap2->strides[0]; - op = ret->data; - os = ret->descr->elsize; - ip1 = ap1->data; - ip2 = ap2->data + n_left*is2; - n = n - n_left; - for (i = 0; i < n_left; i++) { - dot(ip1, is1, ip2, is2, op, n, ret); - n++; - ip2 -= is2; - op += os; - } - for (i = 0; i < (n1 - n2 + 1); i++) { - dot(ip1, is1, ip2, is2, op, n, ret); - ip1 += is1; - op += os; - } - for (i = 0; i < n_right; i++) { - n--; - dot(ip1, is1, ip2, is2, op, n, ret); - ip1 += is1; - op += os; - } - - NPY_END_THREADS_DESCR(ret->descr); - if (PyErr_Occurred()) { - goto clean_ret; - } - - return ret; - -clean_ret: - Py_DECREF(ret); - return NULL; -} - -/* - * Revert a one dimensional array in-place - * - * Return 0 on success, other value on failure - */ -static int -_pyarray_revert(PyArrayObject *ret) -{ - intp length; - intp i; - 
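The mode switch in `_pyarray_correlate` above fixes the output length for the three correlation modes (0/1/2, corresponding to NumPy's 'valid'/'same'/'full'). A small sketch of just that length computation, with `n1` the longer input and `n2` the shorter, under a hypothetical name `correlate_length`:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the output-length rule in _pyarray_correlate. n1 >= n2 is
 * assumed, matching the swap performed at the top of that function. */
static ptrdiff_t correlate_length(ptrdiff_t n1, ptrdiff_t n2, int mode)
{
    switch (mode) {
    case 0:  /* valid: only positions where the inputs fully overlap */
        return n1 - n2 + 1;
    case 1:  /* same: output has the length of the longer input */
        return n1;
    case 2:  /* full: every position with any overlap at all */
        return n1 + n2 - 1;
    default:
        return -1;  /* unknown mode */
    }
}
```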
PyArray_CopySwapFunc *copyswap; - char *tmp = NULL, *sw1, *sw2; - intp os; - char *op; - - length = ret->dimensions[0]; - copyswap = ret->descr->f->copyswap; - - tmp = PyArray_malloc(ret->descr->elsize); - if (tmp == NULL) { - return -1; - } - - os = ret->descr->elsize; - op = ret->data; - sw1 = op; - sw2 = op + (length - 1) * os; - if (PyArray_ISFLEXIBLE(ret) || PyArray_ISOBJECT(ret)) { - for(i = 0; i < length/2; ++i) { - memmove(tmp, sw1, os); - copyswap(tmp, NULL, 0, NULL); - memmove(sw1, sw2, os); - copyswap(sw1, NULL, 0, NULL); - memmove(sw2, tmp, os); - copyswap(sw2, NULL, 0, NULL); - sw1 += os; - sw2 -= os; - } - } else { - for(i = 0; i < length/2; ++i) { - memcpy(tmp, sw1, os); - memcpy(sw1, sw2, os); - memcpy(sw2, tmp, os); - sw1 += os; - sw2 -= os; - } - } - - PyArray_free(tmp); - return 0; -} - -/*NUMPY_API - * correlate(a1,a2,mode) - * - * This function computes the usual correlation (correlate(a1, a2) != - * correlate(a2, a1), and conjugate the second argument for complex inputs - */ -NPY_NO_EXPORT PyObject * -PyArray_Correlate2(PyObject *op1, PyObject *op2, int mode) -{ - PyArrayObject *ap1, *ap2, *ret = NULL; - int typenum; - PyArray_Descr *typec; - int inverted; - int st; - - typenum = PyArray_ObjectType(op1, 0); - typenum = PyArray_ObjectType(op2, typenum); - - typec = PyArray_DescrFromType(typenum); - Py_INCREF(typec); - ap1 = (PyArrayObject *)PyArray_FromAny(op1, typec, 1, 1, DEFAULT, NULL); - if (ap1 == NULL) { - Py_DECREF(typec); - return NULL; - } - ap2 = (PyArrayObject *)PyArray_FromAny(op2, typec, 1, 1, DEFAULT, NULL); - if (ap2 == NULL) { - goto clean_ap1; - } - - if (PyArray_ISCOMPLEX(ap2)) { - PyArrayObject *cap2; - cap2 = (PyArrayObject *)PyArray_Conjugate(ap2, NULL); - if (cap2 == NULL) { - goto clean_ap2; - } - Py_DECREF(ap2); - ap2 = cap2; - } - - ret = _pyarray_correlate(ap1, ap2, typenum, mode, &inverted); - if (ret == NULL) { - goto clean_ap2; - } - - /* - * If we inverted input orders, we need to reverse the output array (i.e. 
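The fixed-size branch of `_pyarray_revert` above reverses the buffer in place by walking pointers in from both ends and swapping whole elements through one temporary element. A self-contained sketch of that branch, assuming a hypothetical name `revert_buffer` and a contiguous buffer (the real code also handles strided flexible/object elements via copyswap):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the memcpy branch of _pyarray_revert: reverse `length` elements
 * of `elsize` bytes in place. Returns 0 on success, -1 on allocation failure. */
static int revert_buffer(char *data, size_t length, size_t elsize)
{
    char *tmp = malloc(elsize);            /* one scratch element */
    char *sw1 = data;                      /* walks forward from the front */
    char *sw2 = data + (length - 1) * elsize;  /* walks backward from the end */
    size_t i;

    if (tmp == NULL) {
        return -1;
    }
    for (i = 0; i < length / 2; ++i) {
        memcpy(tmp, sw1, elsize);
        memcpy(sw1, sw2, elsize);
        memcpy(sw2, tmp, elsize);
        sw1 += elsize;
        sw2 -= elsize;
    }
    free(tmp);
    return 0;
}
```

For odd lengths the middle element is left untouched, which is exactly what an in-place reversal requires.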
- * ret = ret[::-1]) - */ - if (inverted) { - st = _pyarray_revert(ret); - if(st) { - goto clean_ret; - } - } - - Py_DECREF(ap1); - Py_DECREF(ap2); - return (PyObject *)ret; - -clean_ret: - Py_DECREF(ret); -clean_ap2: - Py_DECREF(ap2); -clean_ap1: - Py_DECREF(ap1); - return NULL; -} - -/*NUMPY_API - * Numeric.correlate(a1,a2,mode) - */ -NPY_NO_EXPORT PyObject * -PyArray_Correlate(PyObject *op1, PyObject *op2, int mode) -{ - PyArrayObject *ap1, *ap2, *ret = NULL; - int typenum; - int unused; - PyArray_Descr *typec; - - typenum = PyArray_ObjectType(op1, 0); - typenum = PyArray_ObjectType(op2, typenum); - - typec = PyArray_DescrFromType(typenum); - Py_INCREF(typec); - ap1 = (PyArrayObject *)PyArray_FromAny(op1, typec, 1, 1, DEFAULT, NULL); - if (ap1 == NULL) { - Py_DECREF(typec); - return NULL; - } - ap2 = (PyArrayObject *)PyArray_FromAny(op2, typec, 1, 1, DEFAULT, NULL); - if (ap2 == NULL) { - goto fail; - } - - ret = _pyarray_correlate(ap1, ap2, typenum, mode, &unused); - if(ret == NULL) { - goto fail; - } - Py_DECREF(ap1); - Py_DECREF(ap2); - return (PyObject *)ret; - -fail: - Py_XDECREF(ap1); - Py_XDECREF(ap2); - Py_XDECREF(ret); - return NULL; -} - - -static PyObject * -array_putmask(PyObject *NPY_UNUSED(module), PyObject *args, PyObject *kwds) -{ - PyObject *mask, *values; - PyObject *array; - - static char *kwlist[] = {"arr", "mask", "values", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O!OO:putmask", kwlist, - &PyArray_Type, &array, &mask, &values)) { - return NULL; - } - return PyArray_PutMask((PyArrayObject *)array, values, mask); -} - -/*NUMPY_API - * Convert an object to FORTRAN / C / ANY - */ -NPY_NO_EXPORT int -PyArray_OrderConverter(PyObject *object, NPY_ORDER *val) -{ - char *str; - if (object == NULL || object == Py_None) { - *val = PyArray_ANYORDER; - } - else if (PyUnicode_Check(object)) { - PyObject *tmp; - int ret; - tmp = PyUnicode_AsASCIIString(object); - ret = PyArray_OrderConverter(tmp, val); - Py_DECREF(tmp); - return ret; - } - 
else if (!PyBytes_Check(object) || PyBytes_GET_SIZE(object) < 1) { - if (PyObject_IsTrue(object)) { - *val = PyArray_FORTRANORDER; - } - else { - *val = PyArray_CORDER; - } - if (PyErr_Occurred()) { - return PY_FAIL; - } - return PY_SUCCEED; - } - else { - str = PyBytes_AS_STRING(object); - if (str[0] == 'C' || str[0] == 'c') { - *val = PyArray_CORDER; - } - else if (str[0] == 'F' || str[0] == 'f') { - *val = PyArray_FORTRANORDER; - } - else if (str[0] == 'A' || str[0] == 'a') { - *val = PyArray_ANYORDER; - } - else { - PyErr_SetString(PyExc_TypeError, - "order not understood"); - return PY_FAIL; - } - } - return PY_SUCCEED; -} - -/*NUMPY_API - * Convert an object to NPY_RAISE / NPY_CLIP / NPY_WRAP - */ -NPY_NO_EXPORT int -PyArray_ClipmodeConverter(PyObject *object, NPY_CLIPMODE *val) -{ - if (object == NULL || object == Py_None) { - *val = NPY_RAISE; - } - else if (PyBytes_Check(object)) { - char *str; - str = PyBytes_AS_STRING(object); - if (str[0] == 'C' || str[0] == 'c') { - *val = NPY_CLIP; - } - else if (str[0] == 'W' || str[0] == 'w') { - *val = NPY_WRAP; - } - else if (str[0] == 'R' || str[0] == 'r') { - *val = NPY_RAISE; - } - else { - PyErr_SetString(PyExc_TypeError, - "clipmode not understood"); - return PY_FAIL; - } - } - else if (PyUnicode_Check(object)) { - PyObject *tmp; - int ret; - tmp = PyUnicode_AsASCIIString(object); - ret = PyArray_ClipmodeConverter(tmp, val); - Py_DECREF(tmp); - return ret; - } - else { - int number = PyInt_AsLong(object); - if (number == -1 && PyErr_Occurred()) { - goto fail; - } - if (number <= (int) NPY_RAISE - && number >= (int) NPY_CLIP) { - *val = (NPY_CLIPMODE) number; - } - else { - goto fail; - } - } - return PY_SUCCEED; - - fail: - PyErr_SetString(PyExc_TypeError, - "clipmode not understood"); - return PY_FAIL; -} - -/* - * compare the field dictionary for two types - * return 1 if the same or 0 if not - */ -static int -_equivalent_fields(PyObject *field1, PyObject *field2) { - - int same, val; - - if (field1 == 
field2) { - return 1; - } - if (field1 == NULL || field2 == NULL) { - return 0; - } -#if defined(NPY_PY3K) - val = PyObject_RichCompareBool(field1, field2, Py_EQ); - if (val != 1 || PyErr_Occurred()) { -#else - val = PyObject_Compare(field1, field2); - if (val != 0 || PyErr_Occurred()) { -#endif - same = 0; - } - else { - same = 1; - } - PyErr_Clear(); - return same; -} - - -/*NUMPY_API - * - * This function returns true if the two typecodes are - * equivalent (same basic kind and same itemsize). - */ -NPY_NO_EXPORT unsigned char -PyArray_EquivTypes(PyArray_Descr *typ1, PyArray_Descr *typ2) -{ - int typenum1 = typ1->type_num; - int typenum2 = typ2->type_num; - int size1 = typ1->elsize; - int size2 = typ2->elsize; - - if (size1 != size2) { - return FALSE; - } - if (PyArray_ISNBO(typ1->byteorder) != PyArray_ISNBO(typ2->byteorder)) { - return FALSE; - } - if (typenum1 == PyArray_VOID - || typenum2 == PyArray_VOID) { - return ((typenum1 == typenum2) - && _equivalent_fields(typ1->fields, typ2->fields)); - } - - return typ1->kind == typ2->kind; -} - -/*NUMPY_API*/ -NPY_NO_EXPORT unsigned char -PyArray_EquivTypenums(int typenum1, int typenum2) -{ - PyArray_Descr *d1, *d2; - Bool ret; - - d1 = PyArray_DescrFromType(typenum1); - d2 = PyArray_DescrFromType(typenum2); - ret = PyArray_EquivTypes(d1, d2); - Py_DECREF(d1); - Py_DECREF(d2); - return ret; -} - -/*** END C-API FUNCTIONS **/ - -static PyObject * -_prepend_ones(PyArrayObject *arr, int nd, int ndmin) -{ - intp newdims[MAX_DIMS]; - intp newstrides[MAX_DIMS]; - int i, k, num; - PyObject *ret; - - num = ndmin - nd; - for (i = 0; i < num; i++) { - newdims[i] = 1; - newstrides[i] = arr->descr->elsize; - } - for (i = num; i < ndmin; i++) { - k = i - num; - newdims[i] = arr->dimensions[k]; - newstrides[i] = arr->strides[k]; - } - Py_INCREF(arr->descr); - ret = PyArray_NewFromDescr(Py_TYPE(arr), arr->descr, ndmin, - newdims, newstrides, arr->data, arr->flags, (PyObject *)arr); - /* steals a reference to arr --- so don't 
increment here */ - PyArray_BASE(ret) = (PyObject *)arr; - return ret; -} - - -#define _ARET(x) PyArray_Return((PyArrayObject *)(x)) - -#define STRIDING_OK(op, order) ((order) == PyArray_ANYORDER || \ - ((order) == PyArray_CORDER && \ - PyArray_ISCONTIGUOUS(op)) || \ - ((order) == PyArray_FORTRANORDER && \ - PyArray_ISFORTRAN(op))) - -static PyObject * -_array_fromobject(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *kws) -{ - PyObject *op, *ret = NULL; - static char *kwd[]= {"object", "dtype", "copy", "order", "subok", - "ndmin", NULL}; - Bool subok = FALSE; - Bool copy = TRUE; - int ndmin = 0, nd; - PyArray_Descr *type = NULL; - PyArray_Descr *oldtype = NULL; - NPY_ORDER order=PyArray_ANYORDER; - int flags = 0; - - if (PyTuple_GET_SIZE(args) > 2) { - PyErr_SetString(PyExc_ValueError, - "only 2 non-keyword arguments accepted"); - return NULL; - } - if(!PyArg_ParseTupleAndKeywords(args, kws, "O|O&O&O&O&i", kwd, &op, - PyArray_DescrConverter2, &type, - PyArray_BoolConverter, ©, - PyArray_OrderConverter, &order, - PyArray_BoolConverter, &subok, - &ndmin)) { - goto clean_type; - } - - if (ndmin > NPY_MAXDIMS) { - PyErr_Format(PyExc_ValueError, - "ndmin bigger than allowable number of dimensions "\ - "NPY_MAXDIMS (=%d)", NPY_MAXDIMS); - goto clean_type; - } - /* fast exit if simple call */ - if ((subok && PyArray_Check(op)) - || (!subok && PyArray_CheckExact(op))) { - if (type == NULL) { - if (!copy && STRIDING_OK(op, order)) { - Py_INCREF(op); - ret = op; - goto finish; - } - else { - ret = PyArray_NewCopy((PyArrayObject*)op, order); - goto finish; - } - } - /* One more chance */ - oldtype = PyArray_DESCR(op); - if (PyArray_EquivTypes(oldtype, type)) { - if (!copy && STRIDING_OK(op, order)) { - Py_INCREF(op); - ret = op; - goto finish; - } - else { - ret = PyArray_NewCopy((PyArrayObject*)op, order); - if (oldtype == type) { - goto finish; - } - Py_INCREF(oldtype); - Py_DECREF(PyArray_DESCR(ret)); - PyArray_DESCR(ret) = oldtype; - goto finish; - } - } - } - - 
if (copy) { - flags = ENSURECOPY; - } - if (order == PyArray_CORDER) { - flags |= CONTIGUOUS; - } - else if ((order == PyArray_FORTRANORDER) - /* order == PyArray_ANYORDER && */ - || (PyArray_Check(op) && PyArray_ISFORTRAN(op))) { - flags |= FORTRAN; - } - if (!subok) { - flags |= ENSUREARRAY; - } - - flags |= NPY_FORCECAST; - Py_XINCREF(type); - ret = PyArray_CheckFromAny(op, type, 0, 0, flags, NULL); - - finish: - Py_XDECREF(type); - if (!ret || (nd=PyArray_NDIM(ret)) >= ndmin) { - return ret; - } - /* - * create a new array from the same data with ones in the shape - * steals a reference to ret - */ - return _prepend_ones((PyArrayObject *)ret, nd, ndmin); - -clean_type: - Py_XDECREF(type); - return NULL; -} - -static PyObject * -array_empty(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *kwds) -{ - - static char *kwlist[] = {"shape","dtype","order",NULL}; - PyArray_Descr *typecode = NULL; - PyArray_Dims shape = {NULL, 0}; - NPY_ORDER order = PyArray_CORDER; - Bool fortran; - PyObject *ret = NULL; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O&|O&O&", kwlist, - PyArray_IntpConverter, &shape, - PyArray_DescrConverter, &typecode, - PyArray_OrderConverter, &order)) { - goto fail; - } - if (order == PyArray_FORTRANORDER) { - fortran = TRUE; - } - else { - fortran = FALSE; - } - ret = PyArray_Empty(shape.len, shape.ptr, typecode, fortran); - PyDimMem_FREE(shape.ptr); - return ret; - - fail: - Py_XDECREF(typecode); - PyDimMem_FREE(shape.ptr); - return NULL; -} - -/* - * This function is needed for supporting Pickles of - * numpy scalar objects. 
- */ -static PyObject * -array_scalar(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *kwds) -{ - - static char *kwlist[] = {"dtype","obj", NULL}; - PyArray_Descr *typecode; - PyObject *obj = NULL; - int alloc = 0; - void *dptr; - PyObject *ret; - - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O!|O", kwlist, - &PyArrayDescr_Type, &typecode, &obj)) { - return NULL; - } - if (typecode->elsize == 0) { - PyErr_SetString(PyExc_ValueError, - "itemsize cannot be zero"); - return NULL; - } - - if (PyDataType_FLAGCHK(typecode, NPY_ITEM_IS_POINTER)) { - if (obj == NULL) { - obj = Py_None; - } - dptr = &obj; - } - else { - if (obj == NULL) { - dptr = _pya_malloc(typecode->elsize); - if (dptr == NULL) { - return PyErr_NoMemory(); - } - memset(dptr, '\0', typecode->elsize); - alloc = 1; - } - else { - if (!PyString_Check(obj)) { - PyErr_SetString(PyExc_TypeError, - "initializing object must be a string"); - return NULL; - } - if (PyString_GET_SIZE(obj) < typecode->elsize) { - PyErr_SetString(PyExc_ValueError, - "initialization string is too small"); - return NULL; - } - dptr = PyString_AS_STRING(obj); - } - } - ret = PyArray_Scalar(dptr, typecode, NULL); - - /* free dptr which contains zeros */ - if (alloc) { - _pya_free(dptr); - } - return ret; -} - -static PyObject * -array_zeros(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *kwds) -{ - static char *kwlist[] = {"shape","dtype","order",NULL}; /* XXX ? 
*/ - PyArray_Descr *typecode = NULL; - PyArray_Dims shape = {NULL, 0}; - NPY_ORDER order = PyArray_CORDER; - Bool fortran = FALSE; - PyObject *ret = NULL; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O&|O&O&", kwlist, - PyArray_IntpConverter, &shape, - PyArray_DescrConverter, &typecode, - PyArray_OrderConverter, &order)) { - goto fail; - } - if (order == PyArray_FORTRANORDER) { - fortran = TRUE; - } - else { - fortran = FALSE; - } - ret = PyArray_Zeros(shape.len, shape.ptr, typecode, (int) fortran); - PyDimMem_FREE(shape.ptr); - return ret; - - fail: - Py_XDECREF(typecode); - PyDimMem_FREE(shape.ptr); - return ret; -} - -static PyObject * -array_fromstring(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *keywds) -{ - char *data; - Py_ssize_t nin = -1; - char *sep = NULL; - Py_ssize_t s; - static char *kwlist[] = {"string", "dtype", "count", "sep", NULL}; - PyArray_Descr *descr = NULL; - - if (!PyArg_ParseTupleAndKeywords(args, keywds, - "s#|O&" NPY_SSIZE_T_PYFMT "s", kwlist, - &data, &s, PyArray_DescrConverter, &descr, &nin, &sep)) { - Py_XDECREF(descr); - return NULL; - } - return PyArray_FromString(data, (intp)s, descr, (intp)nin, sep); -} - - - -static PyObject * -array_fromfile(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *keywds) -{ - PyObject *file = NULL, *ret; - FILE *fp; - char *sep = ""; - Py_ssize_t nin = -1; - static char *kwlist[] = {"file", "dtype", "count", "sep", NULL}; - PyArray_Descr *type = NULL; - - if (!PyArg_ParseTupleAndKeywords(args, keywds, - "O|O&" NPY_SSIZE_T_PYFMT "s", kwlist, - &file, PyArray_DescrConverter, &type, &nin, &sep)) { - Py_XDECREF(type); - return NULL; - } - if (PyString_Check(file) || PyUnicode_Check(file)) { - file = npy_PyFile_OpenFile(file, "rb"); - if (file == NULL) { - return NULL; - } - } - else { - Py_INCREF(file); - } -#if defined(NPY_PY3K) - fp = npy_PyFile_Dup(file, "rb"); -#else - fp = PyFile_AsFile(file); -#endif - if (fp == NULL) { - PyErr_SetString(PyExc_IOError, - "first argument must 
be an open file"); - Py_DECREF(file); - return NULL; - } - if (type == NULL) { - type = PyArray_DescrFromType(PyArray_DEFAULT); - } - ret = PyArray_FromFile(fp, type, (intp) nin, sep); -#if defined(NPY_PY3K) - fclose(fp); -#endif - Py_DECREF(file); - return ret; -} - -static PyObject * -array_fromiter(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *keywds) -{ - PyObject *iter; - Py_ssize_t nin = -1; - static char *kwlist[] = {"iter", "dtype", "count", NULL}; - PyArray_Descr *descr = NULL; - - if (!PyArg_ParseTupleAndKeywords(args, keywds, - "OO&|" NPY_SSIZE_T_PYFMT, kwlist, - &iter, PyArray_DescrConverter, &descr, &nin)) { - Py_XDECREF(descr); - return NULL; - } - return PyArray_FromIter(iter, descr, (intp)nin); -} - -static PyObject * -array_frombuffer(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *keywds) -{ - PyObject *obj = NULL; - Py_ssize_t nin = -1, offset = 0; - static char *kwlist[] = {"buffer", "dtype", "count", "offset", NULL}; - PyArray_Descr *type = NULL; - - if (!PyArg_ParseTupleAndKeywords(args, keywds, - "O|O&" NPY_SSIZE_T_PYFMT NPY_SSIZE_T_PYFMT, kwlist, - &obj, PyArray_DescrConverter, &type, &nin, &offset)) { - Py_XDECREF(type); - return NULL; - } - if (type == NULL) { - type = PyArray_DescrFromType(PyArray_DEFAULT); - } - return PyArray_FromBuffer(obj, type, (intp)nin, (intp)offset); -} - -static PyObject * -array_concatenate(PyObject *NPY_UNUSED(dummy), PyObject *args, PyObject *kwds) -{ - PyObject *a0; - int axis = 0; - static char *kwlist[] = {"seq", "axis", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O&", kwlist, - &a0, PyArray_AxisConverter, &axis)) { - return NULL; - } - return PyArray_Concatenate(a0, axis); -} - -static PyObject * -array_innerproduct(PyObject *NPY_UNUSED(dummy), PyObject *args) -{ - PyObject *b0, *a0; - - if (!PyArg_ParseTuple(args, "OO", &a0, &b0)) { - return NULL; - } - return _ARET(PyArray_InnerProduct(a0, b0)); -} - -static PyObject * -array_matrixproduct(PyObject *NPY_UNUSED(dummy), 
PyObject *args) -{ - PyObject *v, *a; - - if (!PyArg_ParseTuple(args, "OO", &a, &v)) { - return NULL; - } - return _ARET(PyArray_MatrixProduct(a, v)); -} - -static PyObject * -array_fastCopyAndTranspose(PyObject *NPY_UNUSED(dummy), PyObject *args) -{ - PyObject *a0; - - if (!PyArg_ParseTuple(args, "O", &a0)) { - return NULL; - } - return _ARET(PyArray_CopyAndTranspose(a0)); -} - -static PyObject * -array_correlate(PyObject *NPY_UNUSED(dummy), PyObject *args, PyObject *kwds) -{ - PyObject *shape, *a0; - int mode = 0; - static char *kwlist[] = {"a", "v", "mode", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO|i", kwlist, - &a0, &shape, &mode)) { - return NULL; - } - return PyArray_Correlate(a0, shape, mode); -} - -static PyObject* -array_correlate2(PyObject *NPY_UNUSED(dummy), PyObject *args, PyObject *kwds) -{ - PyObject *shape, *a0; - int mode = 0; - static char *kwlist[] = {"a", "v", "mode", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO|i", kwlist, - &a0, &shape, &mode)) { - return NULL; - } - return PyArray_Correlate2(a0, shape, mode); -} - -static PyObject * -array_arange(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *kws) { - PyObject *o_start = NULL, *o_stop = NULL, *o_step = NULL, *range=NULL; - static char *kwd[]= {"start", "stop", "step", "dtype", NULL}; - PyArray_Descr *typecode = NULL; - - if(!PyArg_ParseTupleAndKeywords(args, kws, "O|OOO&", kwd, - &o_start, &o_stop, &o_step, - PyArray_DescrConverter2, &typecode)) { - Py_XDECREF(typecode); - return NULL; - } - range = PyArray_ArangeObj(o_start, o_stop, o_step, typecode); - Py_XDECREF(typecode); - return range; -} - -/*NUMPY_API - * - * Included at the very first so not auto-grabbed and thus not labeled. 
- */ -NPY_NO_EXPORT unsigned int -PyArray_GetNDArrayCVersion(void) -{ - return (unsigned int)NPY_VERSION; -} - -/*NUMPY_API - * Returns the built-in (at compilation time) C API version - */ -NPY_NO_EXPORT unsigned int -PyArray_GetNDArrayCFeatureVersion(void) -{ - return (unsigned int)NPY_FEATURE_VERSION; -} - -static PyObject * -array__get_ndarray_c_version(PyObject *NPY_UNUSED(dummy), PyObject *args, PyObject *kwds) -{ - static char *kwlist[] = {NULL}; - - if(!PyArg_ParseTupleAndKeywords(args, kwds, "", kwlist )) { - return NULL; - } - return PyInt_FromLong( (long) PyArray_GetNDArrayCVersion() ); -} - -/*NUMPY_API -*/ -NPY_NO_EXPORT int -PyArray_GetEndianness(void) -{ - const union { - npy_uint32 i; - char c[4]; - } bint = {0x01020304}; - - if (bint.c[0] == 1) { - return NPY_CPU_BIG; - } - else if (bint.c[0] == 4) { - return NPY_CPU_LITTLE; - } - else { - return NPY_CPU_UNKNOWN_ENDIAN; - } -} - -static PyObject * -array__reconstruct(PyObject *NPY_UNUSED(dummy), PyObject *args) -{ - - PyObject *ret; - PyTypeObject *subtype; - PyArray_Dims shape = {NULL, 0}; - PyArray_Descr *dtype = NULL; - - if (!PyArg_ParseTuple(args, "O!O&O&", - &PyType_Type, &subtype, - PyArray_IntpConverter, &shape, - PyArray_DescrConverter, &dtype)) { - goto fail; - } - if (!PyType_IsSubtype(subtype, &PyArray_Type)) { - PyErr_SetString(PyExc_TypeError, - "_reconstruct: First argument must be a sub-type of ndarray"); - goto fail; - } - ret = PyArray_NewFromDescr(subtype, dtype, - (int)shape.len, shape.ptr, NULL, NULL, 0, NULL); - if (shape.ptr) { - PyDimMem_FREE(shape.ptr); - } - return ret; - - fail: - Py_XDECREF(dtype); - if (shape.ptr) { - PyDimMem_FREE(shape.ptr); - } - return NULL; -} - -static PyObject * -array_set_string_function(PyObject *NPY_UNUSED(self), PyObject *args, - PyObject *kwds) -{ - PyObject *op = NULL; - int repr = 1; - static char *kwlist[] = {"f", "repr", NULL}; - - if(!PyArg_ParseTupleAndKeywords(args, kwds, "|Oi", kwlist, &op, &repr)) { - return NULL; - } - /* reset the 
array_repr function to built-in */ - if (op == Py_None) { - op = NULL; - } - if (op != NULL && !PyCallable_Check(op)) { - PyErr_SetString(PyExc_TypeError, - "Argument must be callable."); - return NULL; - } - PyArray_SetStringFunction(op, repr); - Py_INCREF(Py_None); - return Py_None; -} - -static PyObject * -array_set_ops_function(PyObject *NPY_UNUSED(self), PyObject *NPY_UNUSED(args), - PyObject *kwds) -{ - PyObject *oldops = NULL; - - if ((oldops = PyArray_GetNumericOps()) == NULL) { - return NULL; - } - /* - * Should probably ensure that objects are at least callable - * Leave this to the caller for now --- error will be raised - * later when use is attempted - */ - if (kwds && PyArray_SetNumericOps(kwds) == -1) { - Py_DECREF(oldops); - PyErr_SetString(PyExc_ValueError, - "one or more objects not callable"); - return NULL; - } - return oldops; -} - - -/*NUMPY_API - * Where - */ -NPY_NO_EXPORT PyObject * -PyArray_Where(PyObject *condition, PyObject *x, PyObject *y) -{ - PyArrayObject *arr; - PyObject *tup = NULL, *obj = NULL; - PyObject *ret = NULL, *zero = NULL; - - arr = (PyArrayObject *)PyArray_FromAny(condition, NULL, 0, 0, 0, NULL); - if (arr == NULL) { - return NULL; - } - if ((x == NULL) && (y == NULL)) { - ret = PyArray_Nonzero(arr); - Py_DECREF(arr); - return ret; - } - if ((x == NULL) || (y == NULL)) { - Py_DECREF(arr); - PyErr_SetString(PyExc_ValueError, - "either both or neither of x and y should be given"); - return NULL; - } - - - zero = PyInt_FromLong((long) 0); - obj = PyArray_EnsureAnyArray(PyArray_GenericBinaryFunction(arr, zero, - n_ops.not_equal)); - Py_DECREF(zero); - Py_DECREF(arr); - if (obj == NULL) { - return NULL; - } - tup = Py_BuildValue("(OO)", y, x); - if (tup == NULL) { - Py_DECREF(obj); - return NULL; - } - ret = PyArray_Choose((PyAO *)obj, tup, NULL, NPY_RAISE); - Py_DECREF(obj); - Py_DECREF(tup); - return ret; -} - -static PyObject * -array_where(PyObject *NPY_UNUSED(ignored), PyObject *args) -{ - PyObject *obj = NULL, *x = 
NULL, *y = NULL; - - if (!PyArg_ParseTuple(args, "O|OO", &obj, &x, &y)) { - return NULL; - } - return PyArray_Where(obj, x, y); -} - -static PyObject * -array_lexsort(PyObject *NPY_UNUSED(ignored), PyObject *args, PyObject *kwds) -{ - int axis = -1; - PyObject *obj; - static char *kwlist[] = {"keys", "axis", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|i", kwlist, &obj, &axis)) { - return NULL; - } - return _ARET(PyArray_LexSort(obj, axis)); -} - -#undef _ARET - -static PyObject * -array_can_cast_safely(PyObject *NPY_UNUSED(self), PyObject *args, - PyObject *kwds) -{ - PyArray_Descr *d1 = NULL; - PyArray_Descr *d2 = NULL; - Bool ret; - PyObject *retobj = NULL; - static char *kwlist[] = {"from", "to", NULL}; - - if(!PyArg_ParseTupleAndKeywords(args, kwds, "O&O&", kwlist, - PyArray_DescrConverter, &d1, PyArray_DescrConverter, &d2)) { - goto finish; - } - if (d1 == NULL || d2 == NULL) { - PyErr_SetString(PyExc_TypeError, - "did not understand one of the types; 'None' not accepted"); - goto finish; - } - - ret = PyArray_CanCastTo(d1, d2); - retobj = ret ? 
Py_True : Py_False; - Py_INCREF(retobj); - - finish: - Py_XDECREF(d1); - Py_XDECREF(d2); - return retobj; -} - -#if !defined(NPY_PY3K) -static PyObject * -new_buffer(PyObject *NPY_UNUSED(dummy), PyObject *args) -{ - int size; - - if(!PyArg_ParseTuple(args, "i", &size)) { - return NULL; - } - return PyBuffer_New(size); -} - -static PyObject * -buffer_buffer(PyObject *NPY_UNUSED(dummy), PyObject *args, PyObject *kwds) -{ - PyObject *obj; - Py_ssize_t offset = 0, n; - Py_ssize_t size = Py_END_OF_BUFFER; - void *unused; - static char *kwlist[] = {"object", "offset", "size", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, - "O|" NPY_SSIZE_T_PYFMT NPY_SSIZE_T_PYFMT, kwlist, - &obj, &offset, &size)) { - return NULL; - } - if (PyObject_AsWriteBuffer(obj, &unused, &n) < 0) { - PyErr_Clear(); - return PyBuffer_FromObject(obj, offset, size); - } - else { - return PyBuffer_FromReadWriteObject(obj, offset, size); - } -} -#endif - -#ifndef _MSC_VER -#include <setjmp.h> -#include <signal.h> -jmp_buf _NPY_SIGSEGV_BUF; -static void -_SigSegv_Handler(int signum) -{ - longjmp(_NPY_SIGSEGV_BUF, signum); -} -#endif - -#define _test_code() { \ - test = *((char*)memptr); \ - if (!ro) { \ - *((char *)memptr) = '\0'; \ - *((char *)memptr) = test; \ - } \ - test = *((char*)memptr+size-1); \ - if (!ro) { \ - *((char *)memptr+size-1) = '\0'; \ - *((char *)memptr+size-1) = test; \ - } \ - } - -static PyObject * -as_buffer(PyObject *NPY_UNUSED(dummy), PyObject *args, PyObject *kwds) -{ - PyObject *mem; - Py_ssize_t size; - Bool ro = FALSE, check = TRUE; - void *memptr; - static char *kwlist[] = {"mem", "size", "readonly", "check", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, - "O" NPY_SSIZE_T_PYFMT "|O&O&", kwlist, - &mem, &size, PyArray_BoolConverter, &ro, - PyArray_BoolConverter, &check)) { - return NULL; - } - memptr = PyLong_AsVoidPtr(mem); - if (memptr == NULL) { - return NULL; - } - if (check) { - /* - * Try to dereference the start and end of the memory region - * Catch segfault and report 
error if it occurs - */ - char test; - int err = 0; - -#ifdef _MSC_VER - __try { - _test_code(); - } - __except(1) { - err = 1; - } -#else - PyOS_sighandler_t _npy_sig_save; - _npy_sig_save = PyOS_setsig(SIGSEGV, _SigSegv_Handler); - if (setjmp(_NPY_SIGSEGV_BUF) == 0) { - _test_code(); - } - else { - err = 1; - } - PyOS_setsig(SIGSEGV, _npy_sig_save); -#endif - if (err) { - PyErr_SetString(PyExc_ValueError, - "cannot use memory location as a buffer."); - return NULL; - } - } - - -#if defined(NPY_PY3K) - PyErr_SetString(PyExc_RuntimeError, - "XXX -- not implemented!"); - return NULL; -#else - if (ro) { - return PyBuffer_FromMemory(memptr, size); - } - return PyBuffer_FromReadWriteMemory(memptr, size); -#endif -} - -#undef _test_code - -static PyObject * -format_longfloat(PyObject *NPY_UNUSED(dummy), PyObject *args, PyObject *kwds) -{ - PyObject *obj; - unsigned int precision; - longdouble x; - static char *kwlist[] = {"x", "precision", NULL}; - static char repr[100]; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "OI", kwlist, - &obj, &precision)) { - return NULL; - } - if (!PyArray_IsScalar(obj, LongDouble)) { - PyErr_SetString(PyExc_TypeError, - "not a longfloat"); - return NULL; - } - x = ((PyLongDoubleScalarObject *)obj)->obval; - if (precision > 70) { - precision = 70; - } - format_longdouble(repr, 100, x, precision); - return PyUString_FromString(repr); -} - -static PyObject * -compare_chararrays(PyObject *NPY_UNUSED(dummy), PyObject *args, PyObject *kwds) -{ - PyObject *array; - PyObject *other; - PyArrayObject *newarr, *newoth; - int cmp_op; - Bool rstrip; - char *cmp_str; - Py_ssize_t strlen; - PyObject *res = NULL; - static char msg[] = "comparison must be '==', '!=', '<', '>', '<=', '>='"; - static char *kwlist[] = {"a1", "a2", "cmp", "rstrip", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "OOs#O&", kwlist, - &array, &other, &cmp_str, &strlen, - PyArray_BoolConverter, &rstrip)) { - return NULL; - } - if (strlen < 1 || strlen > 2) { - goto 
err; - } - if (strlen > 1) { - if (cmp_str[1] != '=') { - goto err; - } - if (cmp_str[0] == '=') { - cmp_op = Py_EQ; - } - else if (cmp_str[0] == '!') { - cmp_op = Py_NE; - } - else if (cmp_str[0] == '<') { - cmp_op = Py_LE; - } - else if (cmp_str[0] == '>') { - cmp_op = Py_GE; - } - else { - goto err; - } - } - else { - if (cmp_str[0] == '<') { - cmp_op = Py_LT; - } - else if (cmp_str[0] == '>') { - cmp_op = Py_GT; - } - else { - goto err; - } - } - - newarr = (PyArrayObject *)PyArray_FROM_O(array); - if (newarr == NULL) { - return NULL; - } - newoth = (PyArrayObject *)PyArray_FROM_O(other); - if (newoth == NULL) { - Py_DECREF(newarr); - return NULL; - } - if (PyArray_ISSTRING(newarr) && PyArray_ISSTRING(newoth)) { - res = _strings_richcompare(newarr, newoth, cmp_op, rstrip != 0); - } - else { - PyErr_SetString(PyExc_TypeError, - "comparison of non-string arrays"); - } - Py_DECREF(newarr); - Py_DECREF(newoth); - return res; - - err: - PyErr_SetString(PyExc_ValueError, msg); - return NULL; -} - -static PyObject * -_vec_string_with_args(PyArrayObject* char_array, PyArray_Descr* type, - PyObject* method, PyObject* args) -{ - PyObject* broadcast_args[NPY_MAXARGS]; - PyArrayMultiIterObject* in_iter = NULL; - PyArrayObject* result = NULL; - PyArrayIterObject* out_iter = NULL; - PyObject* args_tuple = NULL; - Py_ssize_t i, n, nargs; - - nargs = PySequence_Size(args) + 1; - if (nargs == -1 || nargs > NPY_MAXARGS) { - PyErr_Format(PyExc_ValueError, - "len(args) must be < %d", NPY_MAXARGS - 1); - goto err; - } - - broadcast_args[0] = (PyObject*)char_array; - for (i = 1; i < nargs; i++) { - PyObject* item = PySequence_GetItem(args, i-1); - if (item == NULL) { - goto err; - } - broadcast_args[i] = item; - Py_DECREF(item); - } - in_iter = (PyArrayMultiIterObject*)PyArray_MultiIterFromObjects - (broadcast_args, nargs, 0); - if (in_iter == NULL) { - goto err; - } - n = in_iter->numiter; - - result = (PyArrayObject*)PyArray_SimpleNewFromDescr(in_iter->nd, - in_iter->dimensions, 
type); - if (result == NULL) { - goto err; - } - - out_iter = (PyArrayIterObject*)PyArray_IterNew((PyObject*)result); - if (out_iter == NULL) { - goto err; - } - - args_tuple = PyTuple_New(n); - if (args_tuple == NULL) { - goto err; - } - - while (PyArray_MultiIter_NOTDONE(in_iter)) { - PyObject* item_result; - - for (i = 0; i < n; i++) { - PyArrayIterObject* it = in_iter->iters[i]; - PyObject* arg = PyArray_ToScalar(PyArray_ITER_DATA(it), it->ao); - if (arg == NULL) { - goto err; - } - /* Steals ref to arg */ - PyTuple_SetItem(args_tuple, i, arg); - } - - item_result = PyObject_CallObject(method, args_tuple); - if (item_result == NULL) { - goto err; - } - - if (PyArray_SETITEM(result, PyArray_ITER_DATA(out_iter), item_result)) { - Py_DECREF(item_result); - PyErr_SetString( PyExc_TypeError, - "result array type does not match underlying function"); - goto err; - } - Py_DECREF(item_result); - - PyArray_MultiIter_NEXT(in_iter); - PyArray_ITER_NEXT(out_iter); - } - - Py_DECREF(in_iter); - Py_DECREF(out_iter); - Py_DECREF(args_tuple); - - return (PyObject*)result; - - err: - Py_XDECREF(in_iter); - Py_XDECREF(out_iter); - Py_XDECREF(args_tuple); - Py_XDECREF(result); - - return 0; -} - -static PyObject * -_vec_string_no_args(PyArrayObject* char_array, - PyArray_Descr* type, PyObject* method) -{ - /* - * This is a faster version of _vec_string_args to use when there - * are no additional arguments to the string method. This doesn't - * require a broadcast iterator (and broadcast iterators don't work - * with 1 argument anyway). 
- */ - PyArrayIterObject* in_iter = NULL; - PyArrayObject* result = NULL; - PyArrayIterObject* out_iter = NULL; - - in_iter = (PyArrayIterObject*)PyArray_IterNew((PyObject*)char_array); - if (in_iter == NULL) { - goto err; - } - - result = (PyArrayObject*)PyArray_SimpleNewFromDescr( - PyArray_NDIM(char_array), PyArray_DIMS(char_array), type); - if (result == NULL) { - goto err; - } - - out_iter = (PyArrayIterObject*)PyArray_IterNew((PyObject*)result); - if (out_iter == NULL) { - goto err; - } - - while (PyArray_ITER_NOTDONE(in_iter)) { - PyObject* item_result; - PyObject* item = PyArray_ToScalar(in_iter->dataptr, in_iter->ao); - if (item == NULL) { - goto err; - } - - item_result = PyObject_CallFunctionObjArgs(method, item, NULL); - Py_DECREF(item); - if (item_result == NULL) { - goto err; - } - - if (PyArray_SETITEM(result, PyArray_ITER_DATA(out_iter), item_result)) { - Py_DECREF(item_result); - PyErr_SetString( PyExc_TypeError, - "result array type does not match underlying function"); - goto err; - } - Py_DECREF(item_result); - - PyArray_ITER_NEXT(in_iter); - PyArray_ITER_NEXT(out_iter); - } - - Py_DECREF(in_iter); - Py_DECREF(out_iter); - - return (PyObject*)result; - - err: - Py_XDECREF(in_iter); - Py_XDECREF(out_iter); - Py_XDECREF(result); - - return 0; -} - -static PyObject * -_vec_string(PyObject *NPY_UNUSED(dummy), PyObject *args, PyObject *kwds) -{ - PyArrayObject* char_array = NULL; - PyArray_Descr *type = NULL; - PyObject* method_name; - PyObject* args_seq = NULL; - - PyObject* method = NULL; - PyObject* result = NULL; - - if (!PyArg_ParseTuple(args, "O&O&O|O", - PyArray_Converter, &char_array, - PyArray_DescrConverter, &type, - &method_name, &args_seq)) { - goto err; - } - - if (PyArray_TYPE(char_array) == NPY_STRING) { - method = PyObject_GetAttr((PyObject *)&PyString_Type, method_name); - } - else if (PyArray_TYPE(char_array) == NPY_UNICODE) { - method = PyObject_GetAttr((PyObject *)&PyUnicode_Type, method_name); - } - else { - 
PyErr_SetString(PyExc_TypeError, - "string operation on non-string array"); - goto err; - } - if (method == NULL) { - goto err; - } - - if (args_seq == NULL - || (PySequence_Check(args_seq) && PySequence_Size(args_seq) == 0)) { - result = _vec_string_no_args(char_array, type, method); - } - else if (PySequence_Check(args_seq)) { - result = _vec_string_with_args(char_array, type, method, args_seq); - } - else { - PyErr_SetString(PyExc_TypeError, - "'args' must be a sequence of arguments"); - goto err; - } - if (result == NULL) { - goto err; - } - - Py_DECREF(char_array); - Py_DECREF(method); - - return (PyObject*)result; - - err: - Py_XDECREF(char_array); - Py_XDECREF(method); - - return 0; -} - -#ifndef __NPY_PRIVATE_NO_SIGNAL - -SIGJMP_BUF _NPY_SIGINT_BUF; - -/*NUMPY_API - */ -NPY_NO_EXPORT void -_PyArray_SigintHandler(int signum) -{ - PyOS_setsig(signum, SIG_IGN); - SIGLONGJMP(_NPY_SIGINT_BUF, signum); -} - -/*NUMPY_API - */ -NPY_NO_EXPORT void* -_PyArray_GetSigintBuf(void) -{ - return (void *)&_NPY_SIGINT_BUF; -} - -#else - -NPY_NO_EXPORT void -_PyArray_SigintHandler(int signum) -{ - return; -} - -NPY_NO_EXPORT void* -_PyArray_GetSigintBuf(void) -{ - return NULL; -} - -#endif - - -static PyObject * -test_interrupt(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int kind = 0; - int a = 0; - - if (!PyArg_ParseTuple(args, "|i", &kind)) { - return NULL; - } - if (kind) { - Py_BEGIN_ALLOW_THREADS; - while (a >= 0) { - if ((a % 1000 == 0) && PyOS_InterruptOccurred()) { - break; - } - a += 1; - } - Py_END_ALLOW_THREADS; - } - else { - NPY_SIGINT_ON - while(a >= 0) { - a += 1; - } - NPY_SIGINT_OFF - } - return PyInt_FromLong(a); -} - -static struct PyMethodDef array_module_methods[] = { - {"_get_ndarray_c_version", - (PyCFunction)array__get_ndarray_c_version, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"_reconstruct", - (PyCFunction)array__reconstruct, - METH_VARARGS, NULL}, - {"set_string_function", - (PyCFunction)array_set_string_function, - METH_VARARGS|METH_KEYWORDS, 
NULL}, - {"set_numeric_ops", - (PyCFunction)array_set_ops_function, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"set_typeDict", - (PyCFunction)array_set_typeDict, - METH_VARARGS, NULL}, - {"array", - (PyCFunction)_array_fromobject, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"arange", - (PyCFunction)array_arange, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"zeros", - (PyCFunction)array_zeros, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"empty", - (PyCFunction)array_empty, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"scalar", - (PyCFunction)array_scalar, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"where", - (PyCFunction)array_where, - METH_VARARGS, NULL}, - {"lexsort", - (PyCFunction)array_lexsort, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"putmask", - (PyCFunction)array_putmask, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"fromstring", - (PyCFunction)array_fromstring, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"fromiter", - (PyCFunction)array_fromiter, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"concatenate", - (PyCFunction)array_concatenate, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"inner", - (PyCFunction)array_innerproduct, - METH_VARARGS, NULL}, - {"dot", - (PyCFunction)array_matrixproduct, - METH_VARARGS, NULL}, - {"_fastCopyAndTranspose", - (PyCFunction)array_fastCopyAndTranspose, - METH_VARARGS, NULL}, - {"correlate", - (PyCFunction)array_correlate, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"correlate2", - (PyCFunction)array_correlate2, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"frombuffer", - (PyCFunction)array_frombuffer, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"fromfile", - (PyCFunction)array_fromfile, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"can_cast", - (PyCFunction)array_can_cast_safely, - METH_VARARGS | METH_KEYWORDS, NULL}, -#if !defined(NPY_PY3K) - {"newbuffer", - (PyCFunction)new_buffer, - METH_VARARGS, NULL}, - {"getbuffer", - (PyCFunction)buffer_buffer, - METH_VARARGS | METH_KEYWORDS, NULL}, -#endif - {"int_asbuffer", - (PyCFunction)as_buffer, - METH_VARARGS | METH_KEYWORDS, 
NULL}, - {"format_longfloat", - (PyCFunction)format_longfloat, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"compare_chararrays", - (PyCFunction)compare_chararrays, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"_vec_string", - (PyCFunction)_vec_string, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"test_interrupt", - (PyCFunction)test_interrupt, - METH_VARARGS, NULL}, - {NULL, NULL, 0, NULL} /* sentinel */ -}; - -#include "__multiarray_api.c" - -/* Establish scalar-type hierarchy - * - * For dual inheritance we need to make sure that the objects being - * inherited from have the tp->mro object initialized. This is - * not necessarily true for the basic type objects of Python (it is - * checked for single inheritance but not dual in PyType_Ready). - * - * Thus, we call PyType_Ready on the standard Python Types, here. - */ -static int -setup_scalartypes(PyObject *NPY_UNUSED(dict)) -{ - initialize_numeric_types(); - - if (PyType_Ready(&PyBool_Type) < 0) { - return -1; - } -#if !defined(NPY_PY3K) - if (PyType_Ready(&PyInt_Type) < 0) { - return -1; - } -#endif - if (PyType_Ready(&PyFloat_Type) < 0) { - return -1; - } - if (PyType_Ready(&PyComplex_Type) < 0) { - return -1; - } - if (PyType_Ready(&PyString_Type) < 0) { - return -1; - } - if (PyType_Ready(&PyUnicode_Type) < 0) { - return -1; - } - -#define SINGLE_INHERIT(child, parent) \ - Py##child##ArrType_Type.tp_base = &Py##parent##ArrType_Type; \ - if (PyType_Ready(&Py##child##ArrType_Type) < 0) { \ - PyErr_Print(); \ - PyErr_Format(PyExc_SystemError, \ - "could not initialize Py%sArrType_Type", \ - #child); \ - return -1; \ - } - - if (PyType_Ready(&PyGenericArrType_Type) < 0) { - return -1; - } - SINGLE_INHERIT(Number, Generic); - SINGLE_INHERIT(Integer, Number); - SINGLE_INHERIT(Inexact, Number); - SINGLE_INHERIT(SignedInteger, Integer); - SINGLE_INHERIT(UnsignedInteger, Integer); - SINGLE_INHERIT(Floating, Inexact); - SINGLE_INHERIT(ComplexFloating, Inexact); - SINGLE_INHERIT(Flexible, Generic); - SINGLE_INHERIT(Character, 
Flexible); - -#define DUAL_INHERIT(child, parent1, parent2) \ - Py##child##ArrType_Type.tp_base = &Py##parent2##ArrType_Type; \ - Py##child##ArrType_Type.tp_bases = \ - Py_BuildValue("(OO)", &Py##parent2##ArrType_Type, \ - &Py##parent1##_Type); \ - if (PyType_Ready(&Py##child##ArrType_Type) < 0) { \ - PyErr_Print(); \ - PyErr_Format(PyExc_SystemError, \ - "could not initialize Py%sArrType_Type", \ - #child); \ - return -1; \ - } \ - Py##child##ArrType_Type.tp_hash = Py##parent1##_Type.tp_hash; - -#if defined(NPY_PY3K) -#define DUAL_INHERIT_COMPARE(child, parent1, parent2) -#else -#define DUAL_INHERIT_COMPARE(child, parent1, parent2) \ - Py##child##ArrType_Type.tp_compare = \ - Py##parent1##_Type.tp_compare; -#endif - -#define DUAL_INHERIT2(child, parent1, parent2) \ - Py##child##ArrType_Type.tp_base = &Py##parent1##_Type; \ - Py##child##ArrType_Type.tp_bases = \ - Py_BuildValue("(OO)", &Py##parent1##_Type, \ - &Py##parent2##ArrType_Type); \ - Py##child##ArrType_Type.tp_richcompare = \ - Py##parent1##_Type.tp_richcompare; \ - DUAL_INHERIT_COMPARE(child, parent1, parent2) \ - Py##child##ArrType_Type.tp_hash = Py##parent1##_Type.tp_hash; \ - if (PyType_Ready(&Py##child##ArrType_Type) < 0) { \ - PyErr_Print(); \ - PyErr_Format(PyExc_SystemError, \ - "could not initialize Py%sArrType_Type", \ - #child); \ - return -1; \ - } - - SINGLE_INHERIT(Bool, Generic); - SINGLE_INHERIT(Byte, SignedInteger); - SINGLE_INHERIT(Short, SignedInteger); -#if SIZEOF_INT == SIZEOF_LONG && !defined(NPY_PY3K) - DUAL_INHERIT(Int, Int, SignedInteger); -#else - SINGLE_INHERIT(Int, SignedInteger); -#endif -#if !defined(NPY_PY3K) - DUAL_INHERIT(Long, Int, SignedInteger); -#else - SINGLE_INHERIT(Long, SignedInteger); -#endif -#if SIZEOF_LONGLONG == SIZEOF_LONG && !defined(NPY_PY3K) - DUAL_INHERIT(LongLong, Int, SignedInteger); -#else - SINGLE_INHERIT(LongLong, SignedInteger); -#endif - - /* - fprintf(stderr, - "tp_free = %p, PyObject_Del = %p, int_tp_free = %p, base.tp_free = %p\n", - 
PyIntArrType_Type.tp_free, PyObject_Del, PyInt_Type.tp_free, - PySignedIntegerArrType_Type.tp_free); - */ - SINGLE_INHERIT(UByte, UnsignedInteger); - SINGLE_INHERIT(UShort, UnsignedInteger); - SINGLE_INHERIT(UInt, UnsignedInteger); - SINGLE_INHERIT(ULong, UnsignedInteger); - SINGLE_INHERIT(ULongLong, UnsignedInteger); - - SINGLE_INHERIT(Float, Floating); - DUAL_INHERIT(Double, Float, Floating); - SINGLE_INHERIT(LongDouble, Floating); - - SINGLE_INHERIT(CFloat, ComplexFloating); - DUAL_INHERIT(CDouble, Complex, ComplexFloating); - SINGLE_INHERIT(CLongDouble, ComplexFloating); - - DUAL_INHERIT2(String, String, Character); - DUAL_INHERIT2(Unicode, Unicode, Character); - - SINGLE_INHERIT(Void, Flexible); - - SINGLE_INHERIT(Object, Generic); - - return 0; - -#undef SINGLE_INHERIT -#undef DUAL_INHERIT - - /* - * Clean up string and unicode array types so they act more like - * strings -- get their tables from the standard types. - */ -} - -/* place a flag dictionary in d */ - -static void -set_flaginfo(PyObject *d) -{ - PyObject *s; - PyObject *newd; - - newd = PyDict_New(); - -#define _addnew(val, one) \ - PyDict_SetItemString(newd, #val, s=PyInt_FromLong(val)); \ - Py_DECREF(s); \ - PyDict_SetItemString(newd, #one, s=PyInt_FromLong(val)); \ - Py_DECREF(s) - -#define _addone(val) \ - PyDict_SetItemString(newd, #val, s=PyInt_FromLong(val)); \ - Py_DECREF(s) - - _addnew(OWNDATA, O); - _addnew(FORTRAN, F); - _addnew(CONTIGUOUS, C); - _addnew(ALIGNED, A); - _addnew(UPDATEIFCOPY, U); - _addnew(WRITEABLE, W); - _addone(C_CONTIGUOUS); - _addone(F_CONTIGUOUS); - -#undef _addone -#undef _addnew - - PyDict_SetItemString(d, "_flagdict", newd); - Py_DECREF(newd); - return; -} - -#if defined(NPY_PY3K) -static struct PyModuleDef moduledef = { - PyModuleDef_HEAD_INIT, - "multiarray", - NULL, - -1, - array_module_methods, - NULL, - NULL, - NULL, - NULL -}; -#endif - -/* Initialization function for the module */ -#if defined(NPY_PY3K) -#define RETVAL m -PyObject *PyInit_multiarray(void) 
{ -#else -#define RETVAL -PyMODINIT_FUNC initmultiarray(void) { -#endif - PyObject *m, *d, *s; - PyObject *c_api; - - /* Create the module and add the functions */ -#if defined(NPY_PY3K) - m = PyModule_Create(&moduledef); -#else - m = Py_InitModule("multiarray", array_module_methods); -#endif - if (!m) { - goto err; - } - -#if defined(MS_WIN64) && defined(__GNUC__) - PyErr_WarnEx(PyExc_Warning, - "Numpy built with MINGW-W64 on Windows 64 bits is experimental, " \ - "and only available for \n" \ - "testing. You are advised not to use it for production. \n\n" \ - "CRASHES ARE TO BE EXPECTED - PLEASE REPORT THEM TO NUMPY DEVELOPERS", - 1); -#endif - - /* Add some symbolic constants to the module */ - d = PyModule_GetDict(m); - if (!d) { - goto err; - } - PyArray_Type.tp_free = _pya_free; - if (PyType_Ready(&PyArray_Type) < 0) { - return RETVAL; - } - if (setup_scalartypes(d) < 0) { - goto err; - } - PyArrayIter_Type.tp_iter = PyObject_SelfIter; - PyArrayMultiIter_Type.tp_iter = PyObject_SelfIter; - PyArrayMultiIter_Type.tp_free = _pya_free; - if (PyType_Ready(&PyArrayIter_Type) < 0) { - return RETVAL; - } - if (PyType_Ready(&PyArrayMapIter_Type) < 0) { - return RETVAL; - } - if (PyType_Ready(&PyArrayMultiIter_Type) < 0) { - return RETVAL; - } - PyArrayNeighborhoodIter_Type.tp_new = PyType_GenericNew; - if (PyType_Ready(&PyArrayNeighborhoodIter_Type) < 0) { - return RETVAL; - } - - PyArrayDescr_Type.tp_hash = PyArray_DescrHash; - if (PyType_Ready(&PyArrayDescr_Type) < 0) { - return RETVAL; - } - if (PyType_Ready(&PyArrayFlags_Type) < 0) { - return RETVAL; - } -/* FIXME - * There is no error handling here - */ - c_api = NpyCapsule_FromVoidPtr((void *)PyArray_API, NULL); - PyDict_SetItemString(d, "_ARRAY_API", c_api); - Py_DECREF(c_api); - if (PyErr_Occurred()) { - goto err; - } - - /* Initialize types in numpymemoryview.c */ - if (_numpymemoryview_init(&s) < 0) { - return RETVAL; - } - if (s != NULL) { - PyDict_SetItemString(d, "memorysimpleview", s); - } - - /* - * 
PyExc_Exception should catch all the standard errors that are - * now raised instead of the string exception "multiarray.error" - - * This is for backward compatibility with existing code. - */ - PyDict_SetItemString (d, "error", PyExc_Exception); - - s = PyUString_FromString("3.1"); - PyDict_SetItemString(d, "__version__", s); - Py_DECREF(s); - - s = PyUString_InternFromString(NPY_METADATA_DTSTR); - PyDict_SetItemString(d, "METADATA_DTSTR", s); - Py_DECREF(s); - -#define ADDCONST(NAME) \ - s = PyInt_FromLong(NPY_##NAME); \ - PyDict_SetItemString(d, #NAME, s); \ - Py_DECREF(s) - - - ADDCONST(ALLOW_THREADS); - ADDCONST(BUFSIZE); - ADDCONST(CLIP); - - ADDCONST(ITEM_HASOBJECT); - ADDCONST(LIST_PICKLE); - ADDCONST(ITEM_IS_POINTER); - ADDCONST(NEEDS_INIT); - ADDCONST(NEEDS_PYAPI); - ADDCONST(USE_GETITEM); - ADDCONST(USE_SETITEM); - - ADDCONST(RAISE); - ADDCONST(WRAP); - ADDCONST(MAXDIMS); -#undef ADDCONST - - Py_INCREF(&PyArray_Type); - PyDict_SetItemString(d, "ndarray", (PyObject *)&PyArray_Type); - Py_INCREF(&PyArrayIter_Type); - PyDict_SetItemString(d, "flatiter", (PyObject *)&PyArrayIter_Type); - Py_INCREF(&PyArrayMultiIter_Type); - PyDict_SetItemString(d, "broadcast", - (PyObject *)&PyArrayMultiIter_Type); - Py_INCREF(&PyArrayDescr_Type); - PyDict_SetItemString(d, "dtype", (PyObject *)&PyArrayDescr_Type); - - Py_INCREF(&PyArrayFlags_Type); - PyDict_SetItemString(d, "flagsobj", (PyObject *)&PyArrayFlags_Type); - - set_flaginfo(d); - - if (set_typeinfo(d) != 0) { - goto err; - } - return RETVAL; - - err: - if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_RuntimeError, - "cannot load multiarray module."); - } - return RETVAL; -} diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/multiarraymodule.h b/pythonPackages/numpy/numpy/core/src/multiarray/multiarraymodule.h deleted file mode 100755 index 5a3b14b0b5..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/multiarraymodule.h +++ /dev/null @@ -1,4 +0,0 @@ -#ifndef _NPY_MULTIARRAY_H_ -#define 
_NPY_MULTIARRAY_H_ - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/multiarraymodule_onefile.c b/pythonPackages/numpy/numpy/core/src/multiarray/multiarraymodule_onefile.c deleted file mode 100755 index 604c068c18..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/multiarraymodule_onefile.c +++ /dev/null @@ -1,46 +0,0 @@ -/* - * This file includes all the .c files needed for a complete multiarray module. - * This is used in the case where separate compilation is not enabled - * - * Note that the order of the includes matters - */ - -#include "common.c" - -#include "scalartypes.c" -#include "scalarapi.c" - -#include "arraytypes.c" - -#include "hashdescr.c" -#include "numpyos.c" - -#include "descriptor.c" -#include "flagsobject.c" -#include "ctors.c" -#include "iterators.c" -#include "mapping.c" -#include "number.c" -#include "getset.c" -#include "sequence.c" -#include "methods.c" -#include "convert_datatype.c" -#include "convert.c" -#include "shape.c" -#include "item_selection.c" -#include "calculation.c" -#include "usertypes.c" -#include "refcount.c" -#include "conversion_utils.c" -#include "buffer.c" - - -#ifndef Py_UNICODE_WIDE -#include "ucsnarrow.c" -#endif - -#include "arrayobject.c" - -#include "numpymemoryview.c" - -#include "multiarraymodule.c" diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/number.c b/pythonPackages/numpy/numpy/core/src/multiarray/number.c deleted file mode 100755 index ee9bf27a46..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/number.c +++ /dev/null @@ -1,812 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include <Python.h> -#include "structmember.h" - -/*#include <stdio.h>*/ -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "number.h" - -/************************************************************************* - **************** Implement Number Protocol **************************** - 
*************************************************************************/ - -NPY_NO_EXPORT NumericOps n_ops; /* NB: static objects initialized to zero */ - -/* - * Dictionary can contain any of the numeric operations, by name. - * Those not present will not be changed - */ - -/* FIXME - macro contains a return */ -#define SET(op) temp = PyDict_GetItemString(dict, #op); \ - if (temp != NULL) { \ - if (!(PyCallable_Check(temp))) { \ - return -1; \ - } \ - Py_INCREF(temp); \ - Py_XDECREF(n_ops.op); \ - n_ops.op = temp; \ - } - - -/*NUMPY_API - *Set internal structure with number functions that all arrays will use - */ -NPY_NO_EXPORT int -PyArray_SetNumericOps(PyObject *dict) -{ - PyObject *temp = NULL; - SET(add); - SET(subtract); - SET(multiply); - SET(divide); - SET(remainder); - SET(power); - SET(square); - SET(reciprocal); - SET(ones_like); - SET(sqrt); - SET(negative); - SET(absolute); - SET(invert); - SET(left_shift); - SET(right_shift); - SET(bitwise_and); - SET(bitwise_or); - SET(bitwise_xor); - SET(less); - SET(less_equal); - SET(equal); - SET(not_equal); - SET(greater); - SET(greater_equal); - SET(floor_divide); - SET(true_divide); - SET(logical_or); - SET(logical_and); - SET(floor); - SET(ceil); - SET(maximum); - SET(minimum); - SET(rint); - SET(conjugate); - return 0; -} - -/* FIXME - macro contains goto */ -#define GET(op) if (n_ops.op && \ - (PyDict_SetItemString(dict, #op, n_ops.op)==-1)) \ - goto fail; - -/*NUMPY_API - Get dictionary showing number functions that all arrays will use -*/ -NPY_NO_EXPORT PyObject * -PyArray_GetNumericOps(void) -{ - PyObject *dict; - if ((dict = PyDict_New())==NULL) - return NULL; - GET(add); - GET(subtract); - GET(multiply); - GET(divide); - GET(remainder); - GET(power); - GET(square); - GET(reciprocal); - GET(ones_like); - GET(sqrt); - GET(negative); - GET(absolute); - GET(invert); - GET(left_shift); - GET(right_shift); - GET(bitwise_and); - GET(bitwise_or); - GET(bitwise_xor); - GET(less); - GET(less_equal); - 
GET(equal); - GET(not_equal); - GET(greater); - GET(greater_equal); - GET(floor_divide); - GET(true_divide); - GET(logical_or); - GET(logical_and); - GET(floor); - GET(ceil); - GET(maximum); - GET(minimum); - GET(rint); - GET(conjugate); - return dict; - - fail: - Py_DECREF(dict); - return NULL; -} - -static PyObject * -_get_keywords(int rtype, PyArrayObject *out) -{ - PyObject *kwds = NULL; - if (rtype != PyArray_NOTYPE || out != NULL) { - kwds = PyDict_New(); - if (rtype != PyArray_NOTYPE) { - PyArray_Descr *descr; - descr = PyArray_DescrFromType(rtype); - if (descr) { - PyDict_SetItemString(kwds, "dtype", (PyObject *)descr); - Py_DECREF(descr); - } - } - if (out != NULL) { - PyDict_SetItemString(kwds, "out", (PyObject *)out); - } - } - return kwds; -} - -NPY_NO_EXPORT PyObject * -PyArray_GenericReduceFunction(PyArrayObject *m1, PyObject *op, int axis, - int rtype, PyArrayObject *out) -{ - PyObject *args, *ret = NULL, *meth; - PyObject *kwds; - if (op == NULL) { - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - } - args = Py_BuildValue("(Oi)", m1, axis); - kwds = _get_keywords(rtype, out); - meth = PyObject_GetAttrString(op, "reduce"); - if (meth && PyCallable_Check(meth)) { - ret = PyObject_Call(meth, args, kwds); - } - Py_DECREF(args); - Py_DECREF(meth); - Py_XDECREF(kwds); - return ret; -} - - -NPY_NO_EXPORT PyObject * -PyArray_GenericAccumulateFunction(PyArrayObject *m1, PyObject *op, int axis, - int rtype, PyArrayObject *out) -{ - PyObject *args, *ret = NULL, *meth; - PyObject *kwds; - if (op == NULL) { - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - } - args = Py_BuildValue("(Oi)", m1, axis); - kwds = _get_keywords(rtype, out); - meth = PyObject_GetAttrString(op, "accumulate"); - if (meth && PyCallable_Check(meth)) { - ret = PyObject_Call(meth, args, kwds); - } - Py_DECREF(args); - Py_DECREF(meth); - Py_XDECREF(kwds); - return ret; -} - - -NPY_NO_EXPORT PyObject * -PyArray_GenericBinaryFunction(PyArrayObject *m1, PyObject *m2, 
PyObject *op) -{ - if (op == NULL) { - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - } - return PyObject_CallFunction(op, "OO", m1, m2); -} - -NPY_NO_EXPORT PyObject * -PyArray_GenericUnaryFunction(PyArrayObject *m1, PyObject *op) -{ - if (op == NULL) { - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - } - return PyObject_CallFunction(op, "(O)", m1); -} - -static PyObject * -PyArray_GenericInplaceBinaryFunction(PyArrayObject *m1, - PyObject *m2, PyObject *op) -{ - if (op == NULL) { - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - } - return PyObject_CallFunction(op, "OOO", m1, m2, m1); -} - -static PyObject * -PyArray_GenericInplaceUnaryFunction(PyArrayObject *m1, PyObject *op) -{ - if (op == NULL) { - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - } - return PyObject_CallFunction(op, "OO", m1, m1); -} - -static PyObject * -array_add(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericBinaryFunction(m1, m2, n_ops.add); -} - -static PyObject * -array_subtract(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericBinaryFunction(m1, m2, n_ops.subtract); -} - -static PyObject * -array_multiply(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericBinaryFunction(m1, m2, n_ops.multiply); -} - -static PyObject * -array_divide(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericBinaryFunction(m1, m2, n_ops.divide); -} - -static PyObject * -array_remainder(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericBinaryFunction(m1, m2, n_ops.remainder); -} - -static int -array_power_is_scalar(PyObject *o2, double* exp) -{ - PyObject *temp; - const int optimize_fpexps = 1; - - if (PyInt_Check(o2)) { - *exp = (double)PyInt_AsLong(o2); - return 1; - } - if (optimize_fpexps && PyFloat_Check(o2)) { - *exp = PyFloat_AsDouble(o2); - return 1; - } - if ((PyArray_IsZeroDim(o2) && - ((PyArray_ISINTEGER(o2) || - (optimize_fpexps && PyArray_ISFLOAT(o2))))) || - PyArray_IsScalar(o2, Integer) || - 
(optimize_fpexps && PyArray_IsScalar(o2, Floating))) { - temp = Py_TYPE(o2)->tp_as_number->nb_float(o2); - if (temp != NULL) { - *exp = PyFloat_AsDouble(o2); - Py_DECREF(temp); - return 1; - } - } -#if (PY_VERSION_HEX >= 0x02050000) - if (PyIndex_Check(o2)) { - PyObject* value = PyNumber_Index(o2); - Py_ssize_t val; - if (value==NULL) { - if (PyErr_Occurred()) { - PyErr_Clear(); - } - return 0; - } - val = PyInt_AsSsize_t(value); - if (val == -1 && PyErr_Occurred()) { - PyErr_Clear(); - return 0; - } - *exp = (double) val; - return 1; - } -#endif - return 0; -} - -/* optimize float array or complex array to a scalar power */ -static PyObject * -fast_scalar_power(PyArrayObject *a1, PyObject *o2, int inplace) -{ - double exp; - - if (PyArray_Check(a1) && array_power_is_scalar(o2, &exp)) { - PyObject *fastop = NULL; - if (PyArray_ISFLOAT(a1) || PyArray_ISCOMPLEX(a1)) { - if (exp == 1.0) { - /* we have to do this one special, as the - "copy" method of array objects isn't set - up early enough to be added - by PyArray_SetNumericOps. - */ - if (inplace) { - Py_INCREF(a1); - return (PyObject *)a1; - } else { - return PyArray_Copy(a1); - } - } - else if (exp == -1.0) { - fastop = n_ops.reciprocal; - } - else if (exp == 0.0) { - fastop = n_ops.ones_like; - } - else if (exp == 0.5) { - fastop = n_ops.sqrt; - } - else if (exp == 2.0) { - fastop = n_ops.square; - } - else { - return NULL; - } - - if (inplace) { - return PyArray_GenericInplaceUnaryFunction(a1, fastop); - } else { - return PyArray_GenericUnaryFunction(a1, fastop); - } - } - else if (exp==2.0) { - fastop = n_ops.multiply; - if (inplace) { - return PyArray_GenericInplaceBinaryFunction - (a1, (PyObject *)a1, fastop); - } - else { - return PyArray_GenericBinaryFunction - (a1, (PyObject *)a1, fastop); - } - } - } - return NULL; -} - -static PyObject * -array_power(PyArrayObject *a1, PyObject *o2, PyObject *NPY_UNUSED(modulo)) -{ - /* modulo is ignored! 
*/ - PyObject *value; - value = fast_scalar_power(a1, o2, 0); - if (!value) { - value = PyArray_GenericBinaryFunction(a1, o2, n_ops.power); - } - return value; -} - - -static PyObject * -array_negative(PyArrayObject *m1) -{ - return PyArray_GenericUnaryFunction(m1, n_ops.negative); -} - -static PyObject * -array_absolute(PyArrayObject *m1) -{ - return PyArray_GenericUnaryFunction(m1, n_ops.absolute); -} - -static PyObject * -array_invert(PyArrayObject *m1) -{ - return PyArray_GenericUnaryFunction(m1, n_ops.invert); -} - -static PyObject * -array_left_shift(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericBinaryFunction(m1, m2, n_ops.left_shift); -} - -static PyObject * -array_right_shift(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericBinaryFunction(m1, m2, n_ops.right_shift); -} - -static PyObject * -array_bitwise_and(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericBinaryFunction(m1, m2, n_ops.bitwise_and); -} - -static PyObject * -array_bitwise_or(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericBinaryFunction(m1, m2, n_ops.bitwise_or); -} - -static PyObject * -array_bitwise_xor(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericBinaryFunction(m1, m2, n_ops.bitwise_xor); -} - -static PyObject * -array_inplace_add(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericInplaceBinaryFunction(m1, m2, n_ops.add); -} - -static PyObject * -array_inplace_subtract(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericInplaceBinaryFunction(m1, m2, n_ops.subtract); -} - -static PyObject * -array_inplace_multiply(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericInplaceBinaryFunction(m1, m2, n_ops.multiply); -} - -static PyObject * -array_inplace_divide(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericInplaceBinaryFunction(m1, m2, n_ops.divide); -} - -static PyObject * -array_inplace_remainder(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericInplaceBinaryFunction(m1, m2, 
n_ops.remainder); -} - -static PyObject * -array_inplace_power(PyArrayObject *a1, PyObject *o2, PyObject *NPY_UNUSED(modulo)) -{ - /* modulo is ignored! */ - PyObject *value; - value = fast_scalar_power(a1, o2, 1); - if (!value) { - value = PyArray_GenericInplaceBinaryFunction(a1, o2, n_ops.power); - } - return value; -} - -static PyObject * -array_inplace_left_shift(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericInplaceBinaryFunction(m1, m2, n_ops.left_shift); -} - -static PyObject * -array_inplace_right_shift(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericInplaceBinaryFunction(m1, m2, n_ops.right_shift); -} - -static PyObject * -array_inplace_bitwise_and(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericInplaceBinaryFunction(m1, m2, n_ops.bitwise_and); -} - -static PyObject * -array_inplace_bitwise_or(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericInplaceBinaryFunction(m1, m2, n_ops.bitwise_or); -} - -static PyObject * -array_inplace_bitwise_xor(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericInplaceBinaryFunction(m1, m2, n_ops.bitwise_xor); -} - -static PyObject * -array_floor_divide(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericBinaryFunction(m1, m2, n_ops.floor_divide); -} - -static PyObject * -array_true_divide(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericBinaryFunction(m1, m2, n_ops.true_divide); -} - -static PyObject * -array_inplace_floor_divide(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericInplaceBinaryFunction(m1, m2, - n_ops.floor_divide); -} - -static PyObject * -array_inplace_true_divide(PyArrayObject *m1, PyObject *m2) -{ - return PyArray_GenericInplaceBinaryFunction(m1, m2, - n_ops.true_divide); -} - -static int -_array_nonzero(PyArrayObject *mp) -{ - intp n; - - n = PyArray_SIZE(mp); - if (n == 1) { - return mp->descr->f->nonzero(mp->data, mp); - } - else if (n == 0) { - return 0; - } - else { - PyErr_SetString(PyExc_ValueError, - "The truth 
value of an array " \ - "with more than one element is ambiguous. " \ - "Use a.any() or a.all()"); - return -1; - } -} - - - -static PyObject * -array_divmod(PyArrayObject *op1, PyObject *op2) -{ - PyObject *divp, *modp, *result; - - divp = array_floor_divide(op1, op2); - if (divp == NULL) { - return NULL; - } - modp = array_remainder(op1, op2); - if (modp == NULL) { - Py_DECREF(divp); - return NULL; - } - result = Py_BuildValue("OO", divp, modp); - Py_DECREF(divp); - Py_DECREF(modp); - return result; -} - - -NPY_NO_EXPORT PyObject * -array_int(PyArrayObject *v) -{ - PyObject *pv, *pv2; - if (PyArray_SIZE(v) != 1) { - PyErr_SetString(PyExc_TypeError, "only length-1 arrays can be"\ - " converted to Python scalars"); - return NULL; - } - pv = v->descr->f->getitem(v->data, v); - if (pv == NULL) { - return NULL; - } - if (Py_TYPE(pv)->tp_as_number == 0) { - PyErr_SetString(PyExc_TypeError, "cannot convert to an int; "\ - "scalar object is not a number"); - Py_DECREF(pv); - return NULL; - } - if (Py_TYPE(pv)->tp_as_number->nb_int == 0) { - PyErr_SetString(PyExc_TypeError, "don't know how to convert "\ - "scalar number to int"); - Py_DECREF(pv); - return NULL; - } - - pv2 = Py_TYPE(pv)->tp_as_number->nb_int(pv); - Py_DECREF(pv); - return pv2; -} - -static PyObject * -array_float(PyArrayObject *v) -{ - PyObject *pv, *pv2; - if (PyArray_SIZE(v) != 1) { - PyErr_SetString(PyExc_TypeError, "only length-1 arrays can "\ - "be converted to Python scalars"); - return NULL; - } - pv = v->descr->f->getitem(v->data, v); - if (pv == NULL) { - return NULL; - } - if (Py_TYPE(pv)->tp_as_number == 0) { - PyErr_SetString(PyExc_TypeError, "cannot convert to a "\ - "float; scalar object is not a number"); - Py_DECREF(pv); - return NULL; - } - if (Py_TYPE(pv)->tp_as_number->nb_float == 0) { - PyErr_SetString(PyExc_TypeError, "don't know how to convert "\ - "scalar number to float"); - Py_DECREF(pv); - return NULL; - } - pv2 = Py_TYPE(pv)->tp_as_number->nb_float(pv); - Py_DECREF(pv); - return 
pv2; -} - -#if !defined(NPY_PY3K) - -static PyObject * -array_long(PyArrayObject *v) -{ - PyObject *pv, *pv2; - if (PyArray_SIZE(v) != 1) { - PyErr_SetString(PyExc_TypeError, "only length-1 arrays can "\ - "be converted to Python scalars"); - return NULL; - } - pv = v->descr->f->getitem(v->data, v); - if (Py_TYPE(pv)->tp_as_number == 0) { - PyErr_SetString(PyExc_TypeError, "cannot convert to an int; "\ - "scalar object is not a number"); - return NULL; - } - if (Py_TYPE(pv)->tp_as_number->nb_long == 0) { - PyErr_SetString(PyExc_TypeError, "don't know how to convert "\ - "scalar number to long"); - return NULL; - } - pv2 = Py_TYPE(pv)->tp_as_number->nb_long(pv); - Py_DECREF(pv); - return pv2; -} - -static PyObject * -array_oct(PyArrayObject *v) -{ - PyObject *pv, *pv2; - if (PyArray_SIZE(v) != 1) { - PyErr_SetString(PyExc_TypeError, "only length-1 arrays can "\ - "be converted to Python scalars"); - return NULL; - } - pv = v->descr->f->getitem(v->data, v); - if (Py_TYPE(pv)->tp_as_number == 0) { - PyErr_SetString(PyExc_TypeError, "cannot convert to an int; "\ - "scalar object is not a number"); - return NULL; - } - if (Py_TYPE(pv)->tp_as_number->nb_oct == 0) { - PyErr_SetString(PyExc_TypeError, "don't know how to convert "\ - "scalar number to oct"); - return NULL; - } - pv2 = Py_TYPE(pv)->tp_as_number->nb_oct(pv); - Py_DECREF(pv); - return pv2; -} - -static PyObject * -array_hex(PyArrayObject *v) -{ - PyObject *pv, *pv2; - if (PyArray_SIZE(v) != 1) { - PyErr_SetString(PyExc_TypeError, "only length-1 arrays can "\ - "be converted to Python scalars"); - return NULL; - } - pv = v->descr->f->getitem(v->data, v); - if (Py_TYPE(pv)->tp_as_number == 0) { - PyErr_SetString(PyExc_TypeError, "cannot convert to an int; "\ - "scalar object is not a number"); - return NULL; - } - if (Py_TYPE(pv)->tp_as_number->nb_hex == 0) { - PyErr_SetString(PyExc_TypeError, "don't know how to convert "\ - "scalar number to hex"); - return NULL; - } - pv2 = 
Py_TYPE(pv)->tp_as_number->nb_hex(pv); - Py_DECREF(pv); - return pv2; -} - -#endif - -static PyObject * -_array_copy_nice(PyArrayObject *self) -{ - return PyArray_Return((PyArrayObject *) PyArray_Copy(self)); -} - -#if PY_VERSION_HEX >= 0x02050000 -static PyObject * -array_index(PyArrayObject *v) -{ - if (!PyArray_ISINTEGER(v) || PyArray_SIZE(v) != 1) { - PyErr_SetString(PyExc_TypeError, "only integer arrays with " \ - "one element can be converted to an index"); - return NULL; - } - return v->descr->f->getitem(v->data, v); -} -#endif - - -NPY_NO_EXPORT PyNumberMethods array_as_number = { - (binaryfunc)array_add, /*nb_add*/ - (binaryfunc)array_subtract, /*nb_subtract*/ - (binaryfunc)array_multiply, /*nb_multiply*/ -#if defined(NPY_PY3K) -#else - (binaryfunc)array_divide, /*nb_divide*/ -#endif - (binaryfunc)array_remainder, /*nb_remainder*/ - (binaryfunc)array_divmod, /*nb_divmod*/ - (ternaryfunc)array_power, /*nb_power*/ - (unaryfunc)array_negative, /*nb_neg*/ - (unaryfunc)_array_copy_nice, /*nb_pos*/ - (unaryfunc)array_absolute, /*(unaryfunc)array_abs,*/ - (inquiry)_array_nonzero, /*nb_nonzero*/ - (unaryfunc)array_invert, /*nb_invert*/ - (binaryfunc)array_left_shift, /*nb_lshift*/ - (binaryfunc)array_right_shift, /*nb_rshift*/ - (binaryfunc)array_bitwise_and, /*nb_and*/ - (binaryfunc)array_bitwise_xor, /*nb_xor*/ - (binaryfunc)array_bitwise_or, /*nb_or*/ -#if defined(NPY_PY3K) -#else - 0, /*nb_coerce*/ -#endif - (unaryfunc)array_int, /*nb_int*/ -#if defined(NPY_PY3K) - 0, /*nb_reserved*/ -#else - (unaryfunc)array_long, /*nb_long*/ -#endif - (unaryfunc)array_float, /*nb_float*/ -#if defined(NPY_PY3K) -#else - (unaryfunc)array_oct, /*nb_oct*/ - (unaryfunc)array_hex, /*nb_hex*/ -#endif - - /* - * This code adds augmented assignment functionality - * that was made available in Python 2.0 - */ - (binaryfunc)array_inplace_add, /*inplace_add*/ - (binaryfunc)array_inplace_subtract, /*inplace_subtract*/ - (binaryfunc)array_inplace_multiply, /*inplace_multiply*/ -#if 
defined(NPY_PY3K) -#else - (binaryfunc)array_inplace_divide, /*inplace_divide*/ -#endif - (binaryfunc)array_inplace_remainder, /*inplace_remainder*/ - (ternaryfunc)array_inplace_power, /*inplace_power*/ - (binaryfunc)array_inplace_left_shift, /*inplace_lshift*/ - (binaryfunc)array_inplace_right_shift, /*inplace_rshift*/ - (binaryfunc)array_inplace_bitwise_and, /*inplace_and*/ - (binaryfunc)array_inplace_bitwise_xor, /*inplace_xor*/ - (binaryfunc)array_inplace_bitwise_or, /*inplace_or*/ - - (binaryfunc)array_floor_divide, /*nb_floor_divide*/ - (binaryfunc)array_true_divide, /*nb_true_divide*/ - (binaryfunc)array_inplace_floor_divide, /*nb_inplace_floor_divide*/ - (binaryfunc)array_inplace_true_divide, /*nb_inplace_true_divide*/ - -#if PY_VERSION_HEX >= 0x02050000 - (unaryfunc)array_index, /* nb_index */ -#endif - -}; diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/number.h b/pythonPackages/numpy/numpy/core/src/multiarray/number.h deleted file mode 100755 index 8f1cb3b913..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/number.h +++ /dev/null @@ -1,72 +0,0 @@ -#ifndef _NPY_ARRAY_NUMBER_H_ -#define _NPY_ARRAY_NUMBER_H_ - -typedef struct { - PyObject *add; - PyObject *subtract; - PyObject *multiply; - PyObject *divide; - PyObject *remainder; - PyObject *power; - PyObject *square; - PyObject *reciprocal; - PyObject *ones_like; - PyObject *sqrt; - PyObject *negative; - PyObject *absolute; - PyObject *invert; - PyObject *left_shift; - PyObject *right_shift; - PyObject *bitwise_and; - PyObject *bitwise_xor; - PyObject *bitwise_or; - PyObject *less; - PyObject *less_equal; - PyObject *equal; - PyObject *not_equal; - PyObject *greater; - PyObject *greater_equal; - PyObject *floor_divide; - PyObject *true_divide; - PyObject *logical_or; - PyObject *logical_and; - PyObject *floor; - PyObject *ceil; - PyObject *maximum; - PyObject *minimum; - PyObject *rint; - PyObject *conjugate; -} NumericOps; - -#ifdef NPY_ENABLE_SEPARATE_COMPILATION -extern 
NPY_NO_EXPORT NumericOps n_ops; -extern NPY_NO_EXPORT PyNumberMethods array_as_number; -#else -NPY_NO_EXPORT NumericOps n_ops; -NPY_NO_EXPORT PyNumberMethods array_as_number; -#endif - -NPY_NO_EXPORT PyObject * -array_int(PyArrayObject *v); - -NPY_NO_EXPORT int -PyArray_SetNumericOps(PyObject *dict); - -NPY_NO_EXPORT PyObject * -PyArray_GetNumericOps(void); - -NPY_NO_EXPORT PyObject * -PyArray_GenericBinaryFunction(PyArrayObject *m1, PyObject *m2, PyObject *op); - -NPY_NO_EXPORT PyObject * -PyArray_GenericUnaryFunction(PyArrayObject *m1, PyObject *op); - -NPY_NO_EXPORT PyObject * -PyArray_GenericReduceFunction(PyArrayObject *m1, PyObject *op, int axis, - int rtype, PyArrayObject *out); - -NPY_NO_EXPORT PyObject * -PyArray_GenericAccumulateFunction(PyArrayObject *m1, PyObject *op, int axis, - int rtype, PyArrayObject *out); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/numpymemoryview.c b/pythonPackages/numpy/numpy/core/src/multiarray/numpymemoryview.c deleted file mode 100755 index 97d20577ed..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/numpymemoryview.c +++ /dev/null @@ -1,310 +0,0 @@ -/* - * Simple PyMemoryView'ish object for Python 2.6 compatibility. - * - * On Python >= 2.7, we can use the actual PyMemoryView objects. - * - * Some code copied from the CPython implementation. 
- */ - -#define PY_SSIZE_T_CLEAN -#include <Python.h> -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "npy_config.h" -#include "numpy/npy_3kcompat.h" - -#include "numpymemoryview.h" - - -#if (PY_VERSION_HEX >= 0x02060000) && (PY_VERSION_HEX < 0x02070000) - -/* - * Memory allocation - */ - -static int -memorysimpleview_traverse(PyMemorySimpleViewObject *self, - visitproc visit, void *arg) -{ - if (self->base != NULL) - Py_VISIT(self->base); - if (self->view.obj != NULL) - Py_VISIT(self->view.obj); - return 0; -} - -static int -memorysimpleview_clear(PyMemorySimpleViewObject *self) -{ - Py_CLEAR(self->base); - PyBuffer_Release(&self->view); - self->view.obj = NULL; - return 0; -} - -static void -memorysimpleview_dealloc(PyMemorySimpleViewObject *self) -{ - PyObject_GC_UnTrack(self); - Py_CLEAR(self->base); - if (self->view.obj != NULL) { - PyBuffer_Release(&self->view); - self->view.obj = NULL; - } - PyObject_GC_Del(self); -} - -static PyObject * -memorysimpleview_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds) -{ - PyObject *obj; - static char *kwlist[] = {"object", 0}; - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O:memorysimpleview", kwlist, - &obj)) { - return NULL; - } - return PyMemorySimpleView_FromObject(obj); -} - - -/* - * Buffer interface - */ - -static int -memorysimpleview_getbuffer(PyMemorySimpleViewObject *self, - Py_buffer *view, int flags) -{ - return PyObject_GetBuffer(self->base, view, flags); -} - -static void -memorysimpleview_releasebuffer(PyMemorySimpleViewObject *self, - Py_buffer *view) -{ - PyBuffer_Release(view); -} - -static PyBufferProcs memorysimpleview_as_buffer = { - (readbufferproc)0, /*bf_getreadbuffer*/ - (writebufferproc)0, /*bf_getwritebuffer*/ - (segcountproc)0, /*bf_getsegcount*/ - (charbufferproc)0, /*bf_getcharbuffer*/ - (getbufferproc)memorysimpleview_getbuffer, /* bf_getbuffer */ - 
(releasebufferproc)memorysimpleview_releasebuffer, /* bf_releasebuffer */ -}; - - -/* - * Getters - */ - -static PyObject * -_IntTupleFromSsizet(int len, Py_ssize_t *vals) -{ - int i; - PyObject *o; - PyObject *intTuple; - - if (vals == NULL) { - Py_INCREF(Py_None); - return Py_None; - } - intTuple = PyTuple_New(len); - if (!intTuple) return NULL; - for(i=0; iview.format); -} - -static PyObject * -memorysimpleview_itemsize_get(PyMemorySimpleViewObject *self) -{ - return PyLong_FromSsize_t(self->view.itemsize); -} - -static PyObject * -memorysimpleview_shape_get(PyMemorySimpleViewObject *self) -{ - return _IntTupleFromSsizet(self->view.ndim, self->view.shape); -} - -static PyObject * -memorysimpleview_strides_get(PyMemorySimpleViewObject *self) -{ - return _IntTupleFromSsizet(self->view.ndim, self->view.strides); -} - -static PyObject * -memorysimpleview_suboffsets_get(PyMemorySimpleViewObject *self) -{ - return _IntTupleFromSsizet(self->view.ndim, self->view.suboffsets); -} - -static PyObject * -memorysimpleview_readonly_get(PyMemorySimpleViewObject *self) -{ - return PyBool_FromLong(self->view.readonly); -} - -static PyObject * -memorysimpleview_ndim_get(PyMemorySimpleViewObject *self) -{ - return PyLong_FromLong(self->view.ndim); -} - - -static PyGetSetDef memorysimpleview_getsets[] = -{ - {"format", (getter)memorysimpleview_format_get, NULL, NULL, NULL}, - {"itemsize", (getter)memorysimpleview_itemsize_get, NULL, NULL, NULL}, - {"shape", (getter)memorysimpleview_shape_get, NULL, NULL, NULL}, - {"strides", (getter)memorysimpleview_strides_get, NULL, NULL, NULL}, - {"suboffsets", (getter)memorysimpleview_suboffsets_get, NULL, NULL, NULL}, - {"readonly", (getter)memorysimpleview_readonly_get, NULL, NULL, NULL}, - {"ndim", (getter)memorysimpleview_ndim_get, NULL, NULL, NULL}, - {NULL, NULL, NULL, NULL} -}; - -NPY_NO_EXPORT PyTypeObject PyMemorySimpleView_Type = { -#if defined(NPY_PY3K) - PyVarObject_HEAD_INIT(NULL, 0) -#else - PyObject_HEAD_INIT(NULL) - 0, /* 
ob_size */ -#endif - "numpy.memorysimpleview", - sizeof(PyMemorySimpleViewObject), - 0, /* tp_itemsize */ - /* methods */ - (destructor)memorysimpleview_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ -#if defined(NPY_PY3K) - 0, /* tp_reserved */ -#else - (cmpfunc)0, /* tp_compare */ -#endif - (reprfunc)0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - (reprfunc)0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - &memorysimpleview_as_buffer, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC - | Py_TPFLAGS_HAVE_NEWBUFFER, /* tp_flags */ - 0, /* tp_doc */ - (traverseproc)memorysimpleview_traverse, /* tp_traverse */ - (inquiry)memorysimpleview_clear, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - memorysimpleview_getsets, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - memorysimpleview_new, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* tp_version_tag */ -#endif -}; - - -/* - * Factory - */ -NPY_NO_EXPORT PyObject * -PyMemorySimpleView_FromObject(PyObject *base) -{ - PyMemorySimpleViewObject *mview = NULL; - Py_buffer view; - - if (Py_TYPE(base)->tp_as_buffer == NULL || - Py_TYPE(base)->tp_as_buffer->bf_getbuffer == NULL) { - - PyErr_SetString(PyExc_TypeError, - "cannot make memory view because object does " - "not have the buffer interface"); - return NULL; - } - - memset(&view, 0, sizeof(Py_buffer)); - if (PyObject_GetBuffer(base, &view, PyBUF_FULL_RO) < 0) - return NULL; - - mview = (PyMemorySimpleViewObject *) - 
PyObject_GC_New(PyMemorySimpleViewObject, &PyMemorySimpleView_Type); - if (mview == NULL) { - PyBuffer_Release(&view); - return NULL; - } - memcpy(&mview->view, &view, sizeof(Py_buffer)); - mview->base = base; - Py_INCREF(base); - - PyObject_GC_Track(mview); - return (PyObject *)mview; -} - - -/* - * Module initialization - */ - -NPY_NO_EXPORT int -_numpymemoryview_init(PyObject **typeobject) -{ - if (PyType_Ready(&PyMemorySimpleView_Type) < 0) { - return -1; - } - *typeobject = (PyObject*)&PyMemorySimpleView_Type; - return 0; -} - -#else - -NPY_NO_EXPORT int -_numpymemoryview_init(PyObject **typeobject) -{ - *typeobject = NULL; - return 0; -} - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/numpymemoryview.h b/pythonPackages/numpy/numpy/core/src/multiarray/numpymemoryview.h deleted file mode 100755 index 3a26617543..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/numpymemoryview.h +++ /dev/null @@ -1,29 +0,0 @@ -#ifndef _NPY_PRIVATE_NUMPYMEMORYVIEW_H_ -#define _NPY_PRIVATE_NUMPYMEMORYVIEW_H_ - -/* - * Memoryview is introduced to 2.x series only in 2.7, so for supporting 2.6, - * we need to have a minimal implementation here. 
- */ -#if (PY_VERSION_HEX >= 0x02060000) && (PY_VERSION_HEX < 0x02070000) - -typedef struct { - PyObject_HEAD - PyObject *base; - Py_buffer view; -} PyMemorySimpleViewObject; - -NPY_NO_EXPORT PyObject * -PyMemorySimpleView_FromObject(PyObject *base); - -#define PyMemorySimpleView_GET_BUFFER(op) (&((PyMemorySimpleViewObject *)(op))->view) - -#define PyMemoryView_FromObject PyMemorySimpleView_FromObject -#define PyMemoryView_GET_BUFFER PyMemorySimpleView_GET_BUFFER - -#endif - -NPY_NO_EXPORT int -_numpymemoryview_init(PyObject **typeobject); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/numpyos.c b/pythonPackages/numpy/numpy/core/src/multiarray/numpyos.c deleted file mode 100755 index b37e039422..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/numpyos.c +++ /dev/null @@ -1,690 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include - -#include -#include - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/npy_math.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -/* - * From the C99 standard, section 7.19.6: The exponent always contains at least - * two digits, and only as many more digits as necessary to represent the - * exponent. - */ - -/* We force 3 digits on windows for python < 2.6 for compatibility reason */ -#if defined(MS_WIN32) && (PY_VERSION_HEX < 0x02060000) -#define MIN_EXPONENT_DIGITS 3 -#else -#define MIN_EXPONENT_DIGITS 2 -#endif - -/* - * Ensure that any exponent, if present, is at least MIN_EXPONENT_DIGITS - * in length. - */ -static void -_ensure_minimum_exponent_length(char* buffer, size_t buf_size) -{ - char *p = strpbrk(buffer, "eE"); - if (p && (*(p + 1) == '-' || *(p + 1) == '+')) { - char *start = p + 2; - int exponent_digit_cnt = 0; - int leading_zero_cnt = 0; - int in_leading_zeros = 1; - int significant_digit_cnt; - - /* Skip over the exponent and the sign. */ - p += 2; - - /* Find the end of the exponent, keeping track of leading zeros. 
*/ - while (*p && isdigit(Py_CHARMASK(*p))) { - if (in_leading_zeros && *p == '0') { - ++leading_zero_cnt; - } - if (*p != '0') { - in_leading_zeros = 0; - } - ++p; - ++exponent_digit_cnt; - } - - significant_digit_cnt = exponent_digit_cnt - leading_zero_cnt; - if (exponent_digit_cnt == MIN_EXPONENT_DIGITS) { - /* - * If there are exactly 2 digits, we're done, - * regardless of what they contain - */ - } - else if (exponent_digit_cnt > MIN_EXPONENT_DIGITS) { - int extra_zeros_cnt; - - /* - * There are more than 2 digits in the exponent. See - * if we can delete some of the leading zeros - */ - if (significant_digit_cnt < MIN_EXPONENT_DIGITS) { - significant_digit_cnt = MIN_EXPONENT_DIGITS; - } - extra_zeros_cnt = exponent_digit_cnt - significant_digit_cnt; - - /* - * Delete extra_zeros_cnt worth of characters from the - * front of the exponent - */ - assert(extra_zeros_cnt >= 0); - - /* - * Add one to significant_digit_cnt to copy the - * trailing 0 byte, thus setting the length - */ - memmove(start, start + extra_zeros_cnt, significant_digit_cnt + 1); - } - else { - /* - * If there are fewer than 2 digits, add zeros - * until there are 2, if there's enough room - */ - int zeros = MIN_EXPONENT_DIGITS - exponent_digit_cnt; - if (start + zeros + exponent_digit_cnt + 1 < buffer + buf_size) { - memmove(start + zeros, start, exponent_digit_cnt + 1); - memset(start, '0', zeros); - } - } - } -} - -/* - * Ensure that buffer has a decimal point in it. The decimal point - * will not be in the current locale, it will always be '.' - */ -static void -_ensure_decimal_point(char* buffer, size_t buf_size) -{ - int insert_count = 0; - char* chars_to_insert; - - /* search for the first non-digit character */ - char *p = buffer; - if (*p == '-' || *p == '+') - /* - * Skip leading sign, if present. I think this could only - * ever be '-', but it can't hurt to check for both.
- */ - ++p; - while (*p && isdigit(Py_CHARMASK(*p))) { - ++p; - } - if (*p == '.') { - if (isdigit(Py_CHARMASK(*(p+1)))) { - /* - * Nothing to do, we already have a decimal - * point and a digit after it. - */ - } - else { - /* - * We have a decimal point, but no following - * digit. Insert a zero after the decimal. - */ - ++p; - chars_to_insert = "0"; - insert_count = 1; - } - } - else { - chars_to_insert = ".0"; - insert_count = 2; - } - if (insert_count) { - size_t buf_len = strlen(buffer); - if (buf_len + insert_count + 1 >= buf_size) { - /* - * If there is not enough room in the buffer - * for the additional text, just skip it. It's - * not worth generating an error over. - */ - } - else { - memmove(p + insert_count, p, buffer + strlen(buffer) - p + 1); - memcpy(p, chars_to_insert, insert_count); - } - } -} - -/* see FORMATBUFLEN in unicodeobject.c */ -#define FLOAT_FORMATBUFLEN 120 - -/* - * Given a string that may have a decimal point in the current - * locale, change it back to a dot. Since the string cannot get - * longer, no need for a maximum buffer size parameter. - */ -static void -_change_decimal_from_locale_to_dot(char* buffer) -{ - struct lconv *locale_data = localeconv(); - const char *decimal_point = locale_data->decimal_point; - - if (decimal_point[0] != '.' 
|| decimal_point[1] != 0) { - size_t decimal_point_len = strlen(decimal_point); - - if (*buffer == '+' || *buffer == '-') { - buffer++; - } - while (isdigit(Py_CHARMASK(*buffer))) { - buffer++; - } - if (strncmp(buffer, decimal_point, decimal_point_len) == 0) { - *buffer = '.'; - buffer++; - if (decimal_point_len > 1) { - /* buffer needs to get smaller */ - size_t rest_len = strlen(buffer + (decimal_point_len - 1)); - memmove(buffer, buffer + (decimal_point_len - 1), rest_len); - buffer[rest_len] = 0; - } - } - } -} - -/* - * Check that the format string is a valid one for NumPyOS_ascii_format* - */ -static int -_check_ascii_format(const char *format) -{ - char format_char; - size_t format_len = strlen(format); - - /* The last character in the format string must be the format char */ - format_char = format[format_len - 1]; - - if (format[0] != '%') { - return -1; - } - - /* - * I'm not sure why this test is here. It's ensuring that the format - * string after the first character doesn't have a single quote, a - * lowercase l, or a percent. This is the reverse of the commented-out - * test about 10 lines ago. - */ - if (strpbrk(format + 1, "'l%")) { - return -1; - } - - /* - * Also curious about this function is that it accepts format strings - * like "%xg", which are invalid for floats. In general, the - * interface to this function is not very good, but changing it is - * difficult because it's a public API. 
- */ - if (!(format_char == 'e' || format_char == 'E' - || format_char == 'f' || format_char == 'F' - || format_char == 'g' || format_char == 'G')) { - return -1; - } - - return 0; -} - -/* - * Fix the generated string: make sure the decimal is ., that the exponent has a - * minimal number of digits, and that it has a decimal + one digit after that - * decimal if the decimal argument != 0 (same effect as the 'Z' format in - * PyOS_ascii_formatd) - */ -static char* -_fix_ascii_format(char* buf, size_t buflen, int decimal) -{ - /* - * Get the current locale, and find the decimal point string. - * Convert that string back to a dot. - */ - _change_decimal_from_locale_to_dot(buf); - - /* - * If an exponent exists, ensure that the exponent is at least - * MIN_EXPONENT_DIGITS digits, providing the buffer is large enough - * for the extra zeros. Also, if there are more than - * MIN_EXPONENT_DIGITS, remove as many zeros as possible until we get - * back to MIN_EXPONENT_DIGITS - */ - _ensure_minimum_exponent_length(buf, buflen); - - if (decimal != 0) { - _ensure_decimal_point(buf, buflen); - } - - return buf; -} - -/* - * NumPyOS_ascii_format*: - * - buffer: A buffer to place the resulting string in - * - buf_size: The length of the buffer. - * - format: The printf()-style format to use for the conversion. - * - value: The value to convert - * - decimal: if != 0, always has a decimal, and at least one digit after - * the decimal. This has the same effect as passing 'Z' in the original - * PyOS_ascii_formatd - * - * This is similar to PyOS_ascii_formatd in python > 2.6, except that it does - * not handle 'n', and handles nan / inf. - * - * Converts a double to a string, using '.' as the decimal point. To format - * the number you pass in a printf()-style format string. Allowed conversion - * specifiers are 'e', 'E', 'f', 'F', 'g', 'G'. - * - * Return value: The pointer to the buffer with the converted string.
- */ -#define _ASCII_FORMAT(type, suffix, print_type) \ - NPY_NO_EXPORT char* \ - NumPyOS_ascii_format ## suffix(char *buffer, size_t buf_size, \ - const char *format, \ - type val, int decimal) \ - { \ - if (npy_isfinite(val)) { \ - if(_check_ascii_format(format)) { \ - return NULL; \ - } \ - PyOS_snprintf(buffer, buf_size, format, (print_type)val); \ - return _fix_ascii_format(buffer, buf_size, decimal); \ - } \ - else if (npy_isnan(val)){ \ - if (buf_size < 4) { \ - return NULL; \ - } \ - strcpy(buffer, "nan"); \ - } \ - else { \ - if (npy_signbit(val)) { \ - if (buf_size < 5) { \ - return NULL; \ - } \ - strcpy(buffer, "-inf"); \ - } \ - else { \ - if (buf_size < 4) { \ - return NULL; \ - } \ - strcpy(buffer, "inf"); \ - } \ - } \ - return buffer; \ - } - -_ASCII_FORMAT(float, f, float) -_ASCII_FORMAT(double, d, double) -#ifndef FORCE_NO_LONG_DOUBLE_FORMATTING -_ASCII_FORMAT(long double, l, long double) -#else -_ASCII_FORMAT(long double, l, double) -#endif - -/* - * NumPyOS_ascii_isspace: - * - * Same as isspace under C locale - */ -NPY_NO_EXPORT int -NumPyOS_ascii_isspace(char c) -{ - return c == ' ' || c == '\f' || c == '\n' || c == '\r' || c == '\t' - || c == '\v'; -} - - -/* - * NumPyOS_ascii_isalpha: - * - * Same as isalpha under C locale - */ -static int -NumPyOS_ascii_isalpha(char c) -{ - return (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z'); -} - - -/* - * NumPyOS_ascii_isdigit: - * - * Same as isdigit under C locale - */ -static int -NumPyOS_ascii_isdigit(char c) -{ - return (c >= '0' && c <= '9'); -} - - -/* - * NumPyOS_ascii_isalnum: - * - * Same as isalnum under C locale - */ -static int -NumPyOS_ascii_isalnum(char c) -{ - return NumPyOS_ascii_isdigit(c) || NumPyOS_ascii_isalpha(c); -} - - -/* - * NumPyOS_ascii_tolower: - * - * Same as tolower under C locale - */ -static char -NumPyOS_ascii_tolower(char c) -{ - if (c >= 'A' && c <= 'Z') { - return c + ('a'-'A'); - } - return c; -} - - -/* - * NumPyOS_ascii_strncasecmp: - * - * Same as strncasecmp 
under C locale - */ -static int -NumPyOS_ascii_strncasecmp(const char* s1, const char* s2, size_t len) -{ - int diff; - while (len > 0 && *s1 != '\0' && *s2 != '\0') { - diff = ((int)NumPyOS_ascii_tolower(*s1)) - - ((int)NumPyOS_ascii_tolower(*s2)); - if (diff != 0) { - return diff; - } - ++s1; - ++s2; - --len; - } - if (len > 0) { - return ((int)*s1) - ((int)*s2); - } - return 0; -} - -/* - * _NumPyOS_ascii_strtod_plain: - * - * PyOS_ascii_strtod work-alike, with no enhanced features, - * for forward compatibility with Python >= 2.7 - */ -static double -NumPyOS_ascii_strtod_plain(const char *s, char** endptr) -{ - double result; -#if PY_VERSION_HEX >= 0x02070000 - NPY_ALLOW_C_API_DEF - NPY_ALLOW_C_API - result = PyOS_string_to_double(s, endptr, NULL); - if (PyErr_Occurred()) { - if (endptr) { - *endptr = (char*)s; - } - PyErr_Clear(); - } - NPY_DISABLE_C_API -#else - result = PyOS_ascii_strtod(s, endptr); -#endif - return result; -} - -/* - * NumPyOS_ascii_strtod: - * - * Work around bugs in PyOS_ascii_strtod - */ -NPY_NO_EXPORT double -NumPyOS_ascii_strtod(const char *s, char** endptr) -{ - struct lconv *locale_data = localeconv(); - const char *decimal_point = locale_data->decimal_point; - size_t decimal_point_len = strlen(decimal_point); - - char buffer[FLOAT_FORMATBUFLEN+1]; - const char *p; - char *q; - size_t n; - double result; - - while (NumPyOS_ascii_isspace(*s)) { - ++s; - } - - /* - * ##1 - * - * Recognize POSIX inf/nan representations on all platforms. 
- */ - p = s; - result = 1.0; - if (*p == '-') { - result = -1.0; - ++p; - } - else if (*p == '+') { - ++p; - } - if (NumPyOS_ascii_strncasecmp(p, "nan", 3) == 0) { - p += 3; - if (*p == '(') { - ++p; - while (NumPyOS_ascii_isalnum(*p) || *p == '_') { - ++p; - } - if (*p == ')') { - ++p; - } - } - if (endptr != NULL) { - *endptr = (char*)p; - } - return NPY_NAN; - } - else if (NumPyOS_ascii_strncasecmp(p, "inf", 3) == 0) { - p += 3; - if (NumPyOS_ascii_strncasecmp(p, "inity", 5) == 0) { - p += 5; - } - if (endptr != NULL) { - *endptr = (char*)p; - } - return result*NPY_INFINITY; - } - /* End of ##1 */ - - /* - * ## 2 - * - * At least Python versions <= 2.5.2 and <= 2.6.1 - * - * Fails to do best-efforts parsing of strings of the form "1234<sep>" - * where <sep> is the decimal point under the foreign locale. - */ - if (decimal_point[0] != '.' || decimal_point[1] != 0) { - p = s; - if (*p == '+' || *p == '-') { - ++p; - } - while (*p >= '0' && *p <= '9') { - ++p; - } - if (strncmp(p, decimal_point, decimal_point_len) == 0) { - n = (size_t)(p - s); - if (n > FLOAT_FORMATBUFLEN) { - n = FLOAT_FORMATBUFLEN; - } - memcpy(buffer, s, n); - buffer[n] = '\0'; - result = NumPyOS_ascii_strtod_plain(buffer, &q); - if (endptr != NULL) { - *endptr = (char*)(s + (q - buffer)); - } - return result; - } - } - /* End of ##2 */ - - return NumPyOS_ascii_strtod_plain(s, endptr); -} - - -/* - * NumPyOS_ascii_ftolf: - * * fp: FILE pointer - * * value: Place to store the value read - * - * Similar to PyOS_ascii_strtod, except that it reads input from a file. - * - * Similarly to fscanf, this function always consumes leading whitespace, - * and any text that could be the leading part in valid input. - * - * Return value: similar to fscanf. - * * 0 if no number read, - * * 1 if a number read, - * * EOF if end-of-file met before reading anything.
- */ -NPY_NO_EXPORT int -NumPyOS_ascii_ftolf(FILE *fp, double *value) -{ - char buffer[FLOAT_FORMATBUFLEN + 1]; - char *endp; - char *p; - int c; - int ok; - - /* - * Pass on to PyOS_ascii_strtod the leftmost matching part in regexp - * - * \s*[+-]? ( [0-9]*\.[0-9]+([eE][+-]?[0-9]+) - * | nan ( \([:alphanum:_]*\) )? - * | inf(inity)? - * ) - * - * case-insensitively. - * - * The "do { ... } while (0)" wrapping in macros ensures that they behave - * properly eg. in "if ... else" structures. - */ - -#define END_MATCH() \ - goto buffer_filled - -#define NEXT_CHAR() \ - do { \ - if (c == EOF || endp >= buffer + FLOAT_FORMATBUFLEN) \ - END_MATCH(); \ - *endp++ = (char)c; \ - c = getc(fp); \ - } while (0) - -#define MATCH_ALPHA_STRING_NOCASE(string) \ - do { \ - for (p=(string); *p!='\0' && (c==*p || c+('a'-'A')==*p); ++p) \ - NEXT_CHAR(); \ - if (*p != '\0') END_MATCH(); \ - } while (0) - -#define MATCH_ONE_OR_NONE(condition) \ - do { if (condition) NEXT_CHAR(); } while (0) - -#define MATCH_ONE_OR_MORE(condition) \ - do { \ - ok = 0; \ - while (condition) { NEXT_CHAR(); ok = 1; } \ - if (!ok) END_MATCH(); \ - } while (0) - -#define MATCH_ZERO_OR_MORE(condition) \ - while (condition) { NEXT_CHAR(); } - - /* 1. emulate fscanf EOF handling */ - c = getc(fp); - if (c == EOF) { - return EOF; - } - /* 2. consume leading whitespace unconditionally */ - while (NumPyOS_ascii_isspace(c)) { - c = getc(fp); - } - - /* 3. 
start reading matching input to buffer */ - endp = buffer; - - /* 4.1 sign (optional) */ - MATCH_ONE_OR_NONE(c == '+' || c == '-'); - - /* 4.2 nan, inf, infinity; [case-insensitive] */ - if (c == 'n' || c == 'N') { - NEXT_CHAR(); - MATCH_ALPHA_STRING_NOCASE("an"); - - /* accept nan([:alphanum:_]*), similarly to strtod */ - if (c == '(') { - NEXT_CHAR(); - MATCH_ZERO_OR_MORE(NumPyOS_ascii_isalnum(c) || c == '_'); - if (c == ')') { - NEXT_CHAR(); - } - } - END_MATCH(); - } - else if (c == 'i' || c == 'I') { - NEXT_CHAR(); - MATCH_ALPHA_STRING_NOCASE("nfinity"); - END_MATCH(); - } - - /* 4.3 mantissa */ - MATCH_ZERO_OR_MORE(NumPyOS_ascii_isdigit(c)); - - if (c == '.') { - NEXT_CHAR(); - MATCH_ONE_OR_MORE(NumPyOS_ascii_isdigit(c)); - } - - /* 4.4 exponent */ - if (c == 'e' || c == 'E') { - NEXT_CHAR(); - MATCH_ONE_OR_NONE(c == '+' || c == '-'); - MATCH_ONE_OR_MORE(NumPyOS_ascii_isdigit(c)); - } - - END_MATCH(); - -buffer_filled: - - ungetc(c, fp); - *endp = '\0'; - - /* 5. try to convert buffer. */ - *value = NumPyOS_ascii_strtod(buffer, &p); - - /* return 1 if something read, else 0 */ - return (buffer == p) ? 
0 : 1; -} - -#undef END_MATCH -#undef NEXT_CHAR -#undef MATCH_ALPHA_STRING_NOCASE -#undef MATCH_ONE_OR_NONE -#undef MATCH_ONE_OR_MORE -#undef MATCH_ZERO_OR_MORE diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/numpyos.h b/pythonPackages/numpy/numpy/core/src/multiarray/numpyos.h deleted file mode 100755 index 6f247e6085..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/numpyos.h +++ /dev/null @@ -1,28 +0,0 @@ -#ifndef _NPY_NUMPYOS_H_ -#define _NPY_NUMPYOS_H_ - -NPY_NO_EXPORT char* -NumPyOS_ascii_formatd(char *buffer, size_t buf_size, - const char *format, - double val, int decimal); - -NPY_NO_EXPORT char* -NumPyOS_ascii_formatf(char *buffer, size_t buf_size, - const char *format, - float val, int decimal); - -NPY_NO_EXPORT char* -NumPyOS_ascii_formatl(char *buffer, size_t buf_size, - const char *format, - long double val, int decimal); - -NPY_NO_EXPORT double -NumPyOS_ascii_strtod(const char *s, char** endptr); - -NPY_NO_EXPORT int -NumPyOS_ascii_ftolf(FILE *fp, double *value); - -NPY_NO_EXPORT int -NumPyOS_ascii_isspace(char c); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/refcount.c b/pythonPackages/numpy/numpy/core/src/multiarray/refcount.c deleted file mode 100755 index 9fb4a901f4..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/refcount.c +++ /dev/null @@ -1,283 +0,0 @@ -/* - * This module corresponds to the `Special functions for PyArray_OBJECT` - * section in the numpy reference for C-API. 
- */ - -#define PY_SSIZE_T_CLEAN -#include -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -static void -_fillobject(char *optr, PyObject *obj, PyArray_Descr *dtype); - -/* Incref all objects found at this record */ -/*NUMPY_API - */ -NPY_NO_EXPORT void -PyArray_Item_INCREF(char *data, PyArray_Descr *descr) -{ - PyObject *temp; - - if (!PyDataType_REFCHK(descr)) { - return; - } - if (descr->type_num == PyArray_OBJECT) { - NPY_COPY_PYOBJECT_PTR(&temp, data); - Py_XINCREF(temp); - } - else if (PyDescr_HASFIELDS(descr)) { - PyObject *key, *value, *title = NULL; - PyArray_Descr *new; - int offset; - Py_ssize_t pos = 0; - - while (PyDict_Next(descr->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) { - continue; - } - if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, - &title)) { - return; - } - PyArray_Item_INCREF(data + offset, new); - } - } - return; -} - -/* XDECREF all objects found at this record */ -/*NUMPY_API - */ -NPY_NO_EXPORT void -PyArray_Item_XDECREF(char *data, PyArray_Descr *descr) -{ - PyObject *temp; - - if (!PyDataType_REFCHK(descr)) { - return; - } - - if (descr->type_num == PyArray_OBJECT) { - NPY_COPY_PYOBJECT_PTR(&temp, data); - Py_XDECREF(temp); - } - else if PyDescr_HASFIELDS(descr) { - PyObject *key, *value, *title = NULL; - PyArray_Descr *new; - int offset; - Py_ssize_t pos = 0; - - while (PyDict_Next(descr->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) { - continue; - } - if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, - &title)) { - return; - } - PyArray_Item_XDECREF(data + offset, new); - } - } - return; -} - -/* Used for arrays of python objects to increment the reference count of */ -/* every python object in the array. */ -/*NUMPY_API - For object arrays, increment all internal references. 
-*/ -NPY_NO_EXPORT int -PyArray_INCREF(PyArrayObject *mp) -{ - intp i, n; - PyObject **data; - PyObject *temp; - PyArrayIterObject *it; - - if (!PyDataType_REFCHK(mp->descr)) { - return 0; - } - if (mp->descr->type_num != PyArray_OBJECT) { - it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)mp); - if (it == NULL) { - return -1; - } - while(it->index < it->size) { - PyArray_Item_INCREF(it->dataptr, mp->descr); - PyArray_ITER_NEXT(it); - } - Py_DECREF(it); - return 0; - } - - if (PyArray_ISONESEGMENT(mp)) { - data = (PyObject **)mp->data; - n = PyArray_SIZE(mp); - if (PyArray_ISALIGNED(mp)) { - for (i = 0; i < n; i++, data++) { - Py_XINCREF(*data); - } - } - else { - for( i = 0; i < n; i++, data++) { - NPY_COPY_PYOBJECT_PTR(&temp, data); - Py_XINCREF(temp); - } - } - } - else { /* handles misaligned data too */ - it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)mp); - if (it == NULL) { - return -1; - } - while(it->index < it->size) { - NPY_COPY_PYOBJECT_PTR(&temp, it->dataptr); - Py_XINCREF(temp); - PyArray_ITER_NEXT(it); - } - Py_DECREF(it); - } - return 0; -} - -/*NUMPY_API - Decrement all internal references for object arrays. 
- (or arrays with object fields) -*/ -NPY_NO_EXPORT int -PyArray_XDECREF(PyArrayObject *mp) -{ - intp i, n; - PyObject **data; - PyObject *temp; - PyArrayIterObject *it; - - if (!PyDataType_REFCHK(mp->descr)) { - return 0; - } - if (mp->descr->type_num != PyArray_OBJECT) { - it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)mp); - if (it == NULL) { - return -1; - } - while(it->index < it->size) { - PyArray_Item_XDECREF(it->dataptr, mp->descr); - PyArray_ITER_NEXT(it); - } - Py_DECREF(it); - return 0; - } - - if (PyArray_ISONESEGMENT(mp)) { - data = (PyObject **)mp->data; - n = PyArray_SIZE(mp); - if (PyArray_ISALIGNED(mp)) { - for (i = 0; i < n; i++, data++) Py_XDECREF(*data); - } - else { - for (i = 0; i < n; i++, data++) { - NPY_COPY_PYOBJECT_PTR(&temp, data); - Py_XDECREF(temp); - } - } - } - else { /* handles misaligned data too */ - it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)mp); - if (it == NULL) { - return -1; - } - while(it->index < it->size) { - NPY_COPY_PYOBJECT_PTR(&temp, it->dataptr); - Py_XDECREF(temp); - PyArray_ITER_NEXT(it); - } - Py_DECREF(it); - } - return 0; -} - -/*NUMPY_API - * Assumes contiguous - */ -NPY_NO_EXPORT void -PyArray_FillObjectArray(PyArrayObject *arr, PyObject *obj) -{ - intp i,n; - n = PyArray_SIZE(arr); - if (arr->descr->type_num == PyArray_OBJECT) { - PyObject **optr; - optr = (PyObject **)(arr->data); - n = PyArray_SIZE(arr); - if (obj == NULL) { - for (i = 0; i < n; i++) { - *optr++ = NULL; - } - } - else { - for (i = 0; i < n; i++) { - Py_INCREF(obj); - *optr++ = obj; - } - } - } - else { - char *optr; - optr = arr->data; - for (i = 0; i < n; i++) { - _fillobject(optr, obj, arr->descr); - optr += arr->descr->elsize; - } - } -} - -static void -_fillobject(char *optr, PyObject *obj, PyArray_Descr *dtype) -{ - if (!PyDataType_FLAGCHK(dtype, NPY_ITEM_REFCOUNT)) { - if ((obj == Py_None) || (PyInt_Check(obj) && PyInt_AsLong(obj)==0)) { - return; - } - else { - PyObject *arr; - Py_INCREF(dtype); - arr = 
PyArray_NewFromDescr(&PyArray_Type, dtype, - 0, NULL, NULL, NULL, - 0, NULL); - if (arr!=NULL) { - dtype->f->setitem(obj, optr, arr); - } - Py_XDECREF(arr); - } - } - else if (PyDescr_HASFIELDS(dtype)) { - PyObject *key, *value, *title = NULL; - PyArray_Descr *new; - int offset; - Py_ssize_t pos = 0; - - while (PyDict_Next(dtype->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) { - continue; - } - if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, &title)) { - return; - } - _fillobject(optr + offset, obj, new); - } - } - else { - Py_XINCREF(obj); - NPY_COPY_PYOBJECT_PTR(optr, &obj); - return; - } -} - diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/refcount.h b/pythonPackages/numpy/numpy/core/src/multiarray/refcount.h deleted file mode 100755 index 761d53dd0d..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/refcount.h +++ /dev/null @@ -1,19 +0,0 @@ -#ifndef _NPY_PRIVATE_REFCOUNT_H_ -#define _NPY_PRIVATE_REFCOUNT_H_ - -NPY_NO_EXPORT void -PyArray_Item_INCREF(char *data, PyArray_Descr *descr); - -NPY_NO_EXPORT void -PyArray_Item_XDECREF(char *data, PyArray_Descr *descr); - -NPY_NO_EXPORT int -PyArray_INCREF(PyArrayObject *mp); - -NPY_NO_EXPORT int -PyArray_XDECREF(PyArrayObject *mp); - -NPY_NO_EXPORT void -PyArray_FillObjectArray(PyArrayObject *arr, PyObject *obj); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/scalarapi.c b/pythonPackages/numpy/numpy/core/src/multiarray/scalarapi.c deleted file mode 100755 index 9868709d20..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/scalarapi.c +++ /dev/null @@ -1,752 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "numpy/npy_math.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "ctors.h" -#include "descriptor.h" -#include "scalartypes.h" - -#include "common.h" - -static 
PyArray_Descr * -_descr_from_subtype(PyObject *type) -{ - PyObject *mro; - mro = ((PyTypeObject *)type)->tp_mro; - if (PyTuple_GET_SIZE(mro) < 2) { - return PyArray_DescrFromType(PyArray_OBJECT); - } - return PyArray_DescrFromTypeObject(PyTuple_GET_ITEM(mro, 1)); -} - -NPY_NO_EXPORT void * -scalar_value(PyObject *scalar, PyArray_Descr *descr) -{ - int type_num; - int align; - intp memloc; - if (descr == NULL) { - descr = PyArray_DescrFromScalar(scalar); - type_num = descr->type_num; - Py_DECREF(descr); - } - else { - type_num = descr->type_num; - } - switch (type_num) { -#define CASE(ut,lt) case NPY_##ut: return &(((Py##lt##ScalarObject *)scalar)->obval) - CASE(BOOL, Bool); - CASE(BYTE, Byte); - CASE(UBYTE, UByte); - CASE(SHORT, Short); - CASE(USHORT, UShort); - CASE(INT, Int); - CASE(UINT, UInt); - CASE(LONG, Long); - CASE(ULONG, ULong); - CASE(LONGLONG, LongLong); - CASE(ULONGLONG, ULongLong); - CASE(FLOAT, Float); - CASE(DOUBLE, Double); - CASE(LONGDOUBLE, LongDouble); - CASE(CFLOAT, CFloat); - CASE(CDOUBLE, CDouble); - CASE(CLONGDOUBLE, CLongDouble); - CASE(OBJECT, Object); -#undef CASE - case NPY_STRING: - return (void *)PyString_AS_STRING(scalar); - case NPY_UNICODE: - return (void *)PyUnicode_AS_DATA(scalar); - case NPY_VOID: - return ((PyVoidScalarObject *)scalar)->obval; - } - - /* - * Must be a user-defined type --- check to see which - * scalar it inherits from. 
- */ - -#define _CHK(cls) (PyObject_IsInstance(scalar, \ - (PyObject *)&Py##cls##ArrType_Type)) -#define _OBJ(lt) &(((Py##lt##ScalarObject *)scalar)->obval) -#define _IFCASE(cls) if _CHK(cls) return _OBJ(cls) - - if _CHK(Number) { - if _CHK(Integer) { - if _CHK(SignedInteger) { - _IFCASE(Byte); - _IFCASE(Short); - _IFCASE(Int); - _IFCASE(Long); - _IFCASE(LongLong); - } - else { - /* Unsigned Integer */ - _IFCASE(UByte); - _IFCASE(UShort); - _IFCASE(UInt); - _IFCASE(ULong); - _IFCASE(ULongLong); - } - } - else { - /* Inexact */ - if _CHK(Floating) { - _IFCASE(Float); - _IFCASE(Double); - _IFCASE(LongDouble); - } - else { - /*ComplexFloating */ - _IFCASE(CFloat); - _IFCASE(CDouble); - _IFCASE(CLongDouble); - } - } - } - else if (_CHK(Bool)) { - return _OBJ(Bool); - } - else if (_CHK(Flexible)) { - if (_CHK(String)) { - return (void *)PyString_AS_STRING(scalar); - } - if (_CHK(Unicode)) { - return (void *)PyUnicode_AS_DATA(scalar); - } - if (_CHK(Void)) { - return ((PyVoidScalarObject *)scalar)->obval; - } - } - else { - _IFCASE(Object); - } - - - /* - * Use the alignment flag to figure out where the data begins - * after a PyObject_HEAD - */ - memloc = (intp)scalar; - memloc += sizeof(PyObject); - /* now round-up to the nearest alignment value */ - align = descr->alignment; - if (align > 1) { - memloc = ((memloc + align - 1)/align)*align; - } - return (void *)memloc; -#undef _IFCASE -#undef _OBJ -#undef _CHK -} - -/*NUMPY_API - * Convert to c-type - * - * no error checking is performed -- ctypeptr must be same type as scalar - * in case of flexible type, the data is not copied - * into ctypeptr which is expected to be a pointer to pointer - */ -NPY_NO_EXPORT void -PyArray_ScalarAsCtype(PyObject *scalar, void *ctypeptr) -{ - PyArray_Descr *typecode; - void *newptr; - typecode = PyArray_DescrFromScalar(scalar); - newptr = scalar_value(scalar, typecode); - - if (PyTypeNum_ISEXTENDED(typecode->type_num)) { - void **ct = (void **)ctypeptr; - *ct = newptr; - } - else { - 
memcpy(ctypeptr, newptr, typecode->elsize); - } - Py_DECREF(typecode); - return; -} - -/*NUMPY_API - * Cast Scalar to c-type - * - * The output buffer must be large enough to receive the value, - * even for flexible types; this differs from ScalarAsCtype, - * which returns only a reference for flexible types. - * - * This may not work right on narrow builds for NumPy unicode scalars. - */ -NPY_NO_EXPORT int -PyArray_CastScalarToCtype(PyObject *scalar, void *ctypeptr, - PyArray_Descr *outcode) -{ - PyArray_Descr* descr; - PyArray_VectorUnaryFunc* castfunc; - - descr = PyArray_DescrFromScalar(scalar); - castfunc = PyArray_GetCastFunc(descr, outcode->type_num); - if (castfunc == NULL) { - return -1; - } - if (PyTypeNum_ISEXTENDED(descr->type_num) || - PyTypeNum_ISEXTENDED(outcode->type_num)) { - PyArrayObject *ain, *aout; - - ain = (PyArrayObject *)PyArray_FromScalar(scalar, NULL); - if (ain == NULL) { - Py_DECREF(descr); - return -1; - } - aout = (PyArrayObject *) - PyArray_NewFromDescr(&PyArray_Type, - outcode, - 0, NULL, - NULL, ctypeptr, - CARRAY, NULL); - if (aout == NULL) { - Py_DECREF(ain); - return -1; - } - castfunc(ain->data, aout->data, 1, ain, aout); - Py_DECREF(ain); - Py_DECREF(aout); - } - else { - castfunc(scalar_value(scalar, descr), ctypeptr, 1, NULL, NULL); - } - Py_DECREF(descr); - return 0; -} - -/*NUMPY_API - * Cast Scalar to c-type - */ -NPY_NO_EXPORT int -PyArray_CastScalarDirect(PyObject *scalar, PyArray_Descr *indescr, - void *ctypeptr, int outtype) -{ - PyArray_VectorUnaryFunc* castfunc; - void *ptr; - castfunc = PyArray_GetCastFunc(indescr, outtype); - if (castfunc == NULL) { - return -1; - } - ptr = scalar_value(scalar, indescr); - castfunc(ptr, ctypeptr, 1, NULL, NULL); - return 0; -} - -/*NUMPY_API - * Get 0-dim array from scalar - * - * A 0-dim array from an array-scalar object - * always contains a copy of the data, - * unless outcode is NULL, the scalar is of void type, and the referrer - * does not own the data either.
- * - * steals reference to outcode - */ -NPY_NO_EXPORT PyObject * -PyArray_FromScalar(PyObject *scalar, PyArray_Descr *outcode) -{ - PyArray_Descr *typecode; - PyObject *r; - char *memptr; - PyObject *ret; - - /* convert to 0-dim array of scalar typecode */ - typecode = PyArray_DescrFromScalar(scalar); - if ((typecode->type_num == PyArray_VOID) && - !(((PyVoidScalarObject *)scalar)->flags & OWNDATA) && - outcode == NULL) { - r = PyArray_NewFromDescr(&PyArray_Type, - typecode, - 0, NULL, NULL, - ((PyVoidScalarObject *)scalar)->obval, - ((PyVoidScalarObject *)scalar)->flags, - NULL); - PyArray_BASE(r) = (PyObject *)scalar; - Py_INCREF(scalar); - return r; - } - - r = PyArray_NewFromDescr(&PyArray_Type, - typecode, - 0, NULL, - NULL, NULL, 0, NULL); - if (r==NULL) { - Py_XDECREF(outcode); - return NULL; - } - if (PyDataType_FLAGCHK(typecode, NPY_USE_SETITEM)) { - if (typecode->f->setitem(scalar, PyArray_DATA(r), r) < 0) { - Py_XDECREF(outcode); Py_DECREF(r); - return NULL; - } - goto finish; - } - - memptr = scalar_value(scalar, typecode); - -#ifndef Py_UNICODE_WIDE - if (typecode->type_num == PyArray_UNICODE) { - PyUCS2Buffer_AsUCS4((Py_UNICODE *)memptr, - (PyArray_UCS4 *)PyArray_DATA(r), - PyUnicode_GET_SIZE(scalar), - PyArray_ITEMSIZE(r) >> 2); - } - else -#endif - { - memcpy(PyArray_DATA(r), memptr, PyArray_ITEMSIZE(r)); - if (PyDataType_FLAGCHK(typecode, NPY_ITEM_HASOBJECT)) { - /* Need to INCREF just the PyObject portion */ - PyArray_Item_INCREF(memptr, typecode); - } - } - -finish: - if (outcode == NULL) { - return r; - } - if (outcode->type_num == typecode->type_num) { - if (!PyTypeNum_ISEXTENDED(typecode->type_num) - || (outcode->elsize == typecode->elsize)) { - return r; - } - } - - /* cast if necessary to desired output typecode */ - ret = PyArray_CastToType((PyArrayObject *)r, outcode, 0); - Py_DECREF(r); - return ret; -} - -/*NUMPY_API - * Get an Array Scalar From a Python Object - * - * Returns NULL if unsuccessful but error is only set if another error 
occurred. - * Currently only Numeric-like objects are supported. - */ -NPY_NO_EXPORT PyObject * -PyArray_ScalarFromObject(PyObject *object) -{ - PyObject *ret=NULL; - if (PyArray_IsZeroDim(object)) { - return PyArray_ToScalar(PyArray_DATA(object), object); - } - if (PyInt_Check(object)) { - ret = PyArrayScalar_New(Long); - if (ret == NULL) { - return NULL; - } - PyArrayScalar_VAL(ret, Long) = PyInt_AS_LONG(object); - } - else if (PyFloat_Check(object)) { - ret = PyArrayScalar_New(Double); - if (ret == NULL) { - return NULL; - } - PyArrayScalar_VAL(ret, Double) = PyFloat_AS_DOUBLE(object); - } - else if (PyComplex_Check(object)) { - ret = PyArrayScalar_New(CDouble); - if (ret == NULL) { - return NULL; - } - PyArrayScalar_VAL(ret, CDouble).real = PyComplex_RealAsDouble(object); - PyArrayScalar_VAL(ret, CDouble).imag = PyComplex_ImagAsDouble(object); - } - else if (PyLong_Check(object)) { - longlong val; - val = PyLong_AsLongLong(object); - if (val==-1 && PyErr_Occurred()) { - PyErr_Clear(); - return NULL; - } - ret = PyArrayScalar_New(LongLong); - if (ret == NULL) { - return NULL; - } - PyArrayScalar_VAL(ret, LongLong) = val; - } - else if (PyBool_Check(object)) { - if (object == Py_True) { - PyArrayScalar_RETURN_TRUE; - } - else { - PyArrayScalar_RETURN_FALSE; - } - } - return ret; -} - -/*New reference */ -/*NUMPY_API - */ -NPY_NO_EXPORT PyArray_Descr * -PyArray_DescrFromTypeObject(PyObject *type) -{ - int typenum; - PyArray_Descr *new, *conv = NULL; - - /* if it's a builtin type, then use the typenumber */ - typenum = _typenum_fromtypeobj(type,1); - if (typenum != PyArray_NOTYPE) { - new = PyArray_DescrFromType(typenum); - return new; - } - - /* Check the generic types */ - if ((type == (PyObject *) &PyNumberArrType_Type) || - (type == (PyObject *) &PyInexactArrType_Type) || - (type == (PyObject *) &PyFloatingArrType_Type)) { - typenum = PyArray_DOUBLE; - } - else if (type == (PyObject *)&PyComplexFloatingArrType_Type) { - typenum = PyArray_CDOUBLE; - } - else if
((type == (PyObject *)&PyIntegerArrType_Type) || - (type == (PyObject *)&PySignedIntegerArrType_Type)) { - typenum = PyArray_LONG; - } - else if (type == (PyObject *) &PyUnsignedIntegerArrType_Type) { - typenum = PyArray_ULONG; - } - else if (type == (PyObject *) &PyCharacterArrType_Type) { - typenum = PyArray_STRING; - } - else if ((type == (PyObject *) &PyGenericArrType_Type) || - (type == (PyObject *) &PyFlexibleArrType_Type)) { - typenum = PyArray_VOID; - } - - if (typenum != PyArray_NOTYPE) { - return PyArray_DescrFromType(typenum); - } - - /* - * Otherwise --- type is a sub-type of an array scalar - * not corresponding to a registered data-type object. - */ - - /* Do special thing for VOID sub-types */ - if (PyType_IsSubtype((PyTypeObject *)type, &PyVoidArrType_Type)) { - new = PyArray_DescrNewFromType(PyArray_VOID); - conv = _arraydescr_fromobj(type); - if (conv) { - new->fields = conv->fields; - Py_INCREF(new->fields); - new->names = conv->names; - Py_INCREF(new->names); - new->elsize = conv->elsize; - new->subarray = conv->subarray; - conv->subarray = NULL; - Py_DECREF(conv); - } - Py_XDECREF(new->typeobj); - new->typeobj = (PyTypeObject *)type; - Py_INCREF(type); - return new; - } - return _descr_from_subtype(type); -} - -/*NUMPY_API - * Return the tuple of ordered field names from a dictionary. 
- */ -NPY_NO_EXPORT PyObject * -PyArray_FieldNames(PyObject *fields) -{ - PyObject *tup; - PyObject *ret; - PyObject *_numpy_internal; - - if (!PyDict_Check(fields)) { - PyErr_SetString(PyExc_TypeError, - "Fields must be a dictionary"); - return NULL; - } - _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal == NULL) { - return NULL; - } - tup = PyObject_CallMethod(_numpy_internal, "_makenames_list", "O", fields); - Py_DECREF(_numpy_internal); - if (tup == NULL) { - return NULL; - } - ret = PyTuple_GET_ITEM(tup, 0); - ret = PySequence_Tuple(ret); - Py_DECREF(tup); - return ret; -} - -/*NUMPY_API - * Return descr object from array scalar. - * - * New reference - */ -NPY_NO_EXPORT PyArray_Descr * -PyArray_DescrFromScalar(PyObject *sc) -{ - int type_num; - PyArray_Descr *descr; - - if (PyArray_IsScalar(sc, Void)) { - descr = ((PyVoidScalarObject *)sc)->descr; - Py_INCREF(descr); - return descr; - } - - descr = PyArray_DescrFromTypeObject((PyObject *)Py_TYPE(sc)); - if (descr->elsize == 0) { - PyArray_DESCR_REPLACE(descr); - type_num = descr->type_num; - if (type_num == PyArray_STRING) { - descr->elsize = PyString_GET_SIZE(sc); - } - else if (type_num == PyArray_UNICODE) { - descr->elsize = PyUnicode_GET_DATA_SIZE(sc); -#ifndef Py_UNICODE_WIDE - descr->elsize <<= 1; -#endif - } - else { - descr->elsize = Py_SIZE((PyVoidScalarObject *)sc); - descr->fields = PyObject_GetAttrString(sc, "fields"); - if (!descr->fields - || !PyDict_Check(descr->fields) - || (descr->fields == Py_None)) { - Py_XDECREF(descr->fields); - descr->fields = NULL; - } - if (descr->fields) { - descr->names = PyArray_FieldNames(descr->fields); - } - PyErr_Clear(); - } - } - return descr; -} - -/*NUMPY_API - * Get a typeobject from a type-number -- can return NULL. 
- * - * New reference - */ -NPY_NO_EXPORT PyObject * -PyArray_TypeObjectFromType(int type) -{ - PyArray_Descr *descr; - PyObject *obj; - - descr = PyArray_DescrFromType(type); - if (descr == NULL) { - return NULL; - } - obj = (PyObject *)descr->typeobj; - Py_XINCREF(obj); - Py_DECREF(descr); - return obj; -} - -/* Does nothing with descr (cannot be NULL) */ -/*NUMPY_API - Get scalar-equivalent to a region of memory described by a descriptor. -*/ -NPY_NO_EXPORT PyObject * -PyArray_Scalar(void *data, PyArray_Descr *descr, PyObject *base) -{ - PyTypeObject *type; - PyObject *obj; - void *destptr; - PyArray_CopySwapFunc *copyswap; - int type_num; - int itemsize; - int swap; - - type_num = descr->type_num; - if (type_num == PyArray_BOOL) { - PyArrayScalar_RETURN_BOOL_FROM_LONG(*(Bool*)data); - } - else if (PyDataType_FLAGCHK(descr, NPY_USE_GETITEM)) { - return descr->f->getitem(data, base); - } - itemsize = descr->elsize; - copyswap = descr->f->copyswap; - type = descr->typeobj; - swap = !PyArray_ISNBO(descr->byteorder); - if PyTypeNum_ISSTRING(type_num) { - /* Eliminate NULL bytes */ - char *dptr = data; - - dptr += itemsize - 1; - while(itemsize && *dptr-- == 0) { - itemsize--; - } - if (type_num == PyArray_UNICODE && itemsize) { - /* - * make sure itemsize is a multiple of 4 - * so round up to nearest multiple - */ - itemsize = (((itemsize - 1) >> 2) + 1) << 2; - } - } - if (type->tp_itemsize != 0) { - /* String type */ - obj = type->tp_alloc(type, itemsize); - } - else { - obj = type->tp_alloc(type, 0); - } - if (obj == NULL) { - return NULL; - } - if (PyTypeNum_ISFLEXIBLE(type_num)) { - if (type_num == PyArray_STRING) { - destptr = PyString_AS_STRING(obj); - ((PyStringObject *)obj)->ob_shash = -1; -#if !defined(NPY_PY3K) - ((PyStringObject *)obj)->ob_sstate = SSTATE_NOT_INTERNED; -#endif - memcpy(destptr, data, itemsize); - return obj; - } - else if (type_num == PyArray_UNICODE) { - PyUnicodeObject *uni = (PyUnicodeObject*)obj; - size_t length = itemsize >> 2; 
-#ifndef Py_UNICODE_WIDE - char *buffer; - int alloc = 0; - length *= 2; -#endif - /* Need an extra slot and need to use Python memory manager */ - uni->str = NULL; - destptr = PyMem_NEW(Py_UNICODE,length+1); - if (destptr == NULL) { - Py_DECREF(obj); - return PyErr_NoMemory(); - } - uni->str = (Py_UNICODE *)destptr; - uni->str[0] = 0; - uni->str[length] = 0; - uni->length = length; - uni->hash = -1; - uni->defenc = NULL; -#ifdef Py_UNICODE_WIDE - memcpy(destptr, data, itemsize); - if (swap) { - byte_swap_vector(destptr, length, 4); - } -#else - /* need aligned data buffer */ - if ((swap) || ((((intp)data) % descr->alignment) != 0)) { - buffer = _pya_malloc(itemsize); - if (buffer == NULL) { - Py_DECREF(obj); - return PyErr_NoMemory(); - } - alloc = 1; - memcpy(buffer, data, itemsize); - if (swap) { - byte_swap_vector(buffer, itemsize >> 2, 4); - } - } - else { - buffer = data; - } - - /* - * Allocated enough for 2 characters per itemsize. - * Now convert from the data buffer - */ - length = PyUCS2Buffer_FromUCS4(uni->str, - (PyArray_UCS4 *)buffer, itemsize >> 2); - if (alloc) { - _pya_free(buffer); - } - /* Resize the unicode result */ - if (MyPyUnicode_Resize(uni, length) < 0) { - Py_DECREF(obj); - return NULL; - } -#endif - return obj; - } - else { - PyVoidScalarObject *vobj = (PyVoidScalarObject *)obj; - vobj->base = NULL; - vobj->descr = descr; - Py_INCREF(descr); - vobj->obval = NULL; - Py_SIZE(vobj) = itemsize; - vobj->flags = BEHAVED | OWNDATA; - swap = 0; - if (descr->names) { - if (base) { - Py_INCREF(base); - vobj->base = base; - vobj->flags = PyArray_FLAGS(base); - vobj->flags &= ~OWNDATA; - vobj->obval = data; - return obj; - } - } - destptr = PyDataMem_NEW(itemsize); - if (destptr == NULL) { - Py_DECREF(obj); - return PyErr_NoMemory(); - } - vobj->obval = destptr; - } - } - else { - destptr = scalar_value(obj, descr); - } - /* copyswap for OBJECT increments the reference count */ - copyswap(destptr, data, swap, base); - return obj; -} - -/* Return Array Scalar if 0-d
array object is encountered */ - -/*NUMPY_API - * - *Return either an array or the appropriate Python object if the array - *is 0d and matches a Python type. - */ -NPY_NO_EXPORT PyObject * -PyArray_Return(PyArrayObject *mp) -{ - - if (mp == NULL) { - return NULL; - } - if (PyErr_Occurred()) { - Py_XDECREF(mp); - return NULL; - } - if (!PyArray_Check(mp)) { - return (PyObject *)mp; - } - if (mp->nd == 0) { - PyObject *ret; - ret = PyArray_ToScalar(mp->data, mp); - Py_DECREF(mp); - return ret; - } - else { - return (PyObject *)mp; - } -} diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/scalartypes.c.src b/pythonPackages/numpy/numpy/core/src/multiarray/scalartypes.c.src deleted file mode 100755 index cd44777cc5..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/scalartypes.c.src +++ /dev/null @@ -1,3384 +0,0 @@ -/* -*- c -*- */ -#define PY_SSIZE_T_CLEAN -#include "Python.h" -#include "structmember.h" - -#ifndef _MULTIARRAYMODULE -#define _MULTIARRAYMODULE -#endif -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/npy_math.h" -#include "numpy/arrayscalars.h" - -#include "numpy/npy_3kcompat.h" - -#include "npy_config.h" -#include "mapping.h" -#include "ctors.h" -#include "usertypes.h" -#include "numpyos.h" -#include "common.h" -#include "scalartypes.h" - -NPY_NO_EXPORT PyBoolScalarObject _PyArrayScalar_BoolValues[] = { - {PyObject_HEAD_INIT(&PyBoolArrType_Type) 0}, - {PyObject_HEAD_INIT(&PyBoolArrType_Type) 1}, -}; - -/* - * Inheritance is established later when tp_bases is set (or tp_base for - * single inheritance) - */ - -/**begin repeat - * #name = number, integer, signedinteger, unsignedinteger, inexact, - * floating, complexfloating, flexible, character, timeinteger# - * #NAME = Number, Integer, SignedInteger, UnsignedInteger, Inexact, - * Floating, ComplexFloating, Flexible, Character, TimeInteger# - */ -NPY_NO_EXPORT PyTypeObject Py@NAME@ArrType_Type = { -#if defined(NPY_PY3K) - PyVarObject_HEAD_INIT(NULL, 0) -#else 
- PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ -#endif - "numpy.@name@", /* tp_name*/ - sizeof(PyObject), /* tp_basicsize*/ - 0, /* tp_itemsize */ - /* methods */ - 0, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ -#if defined(NPY_PY3K) - 0, /* tp_reserved */ -#else - 0, /* tp_compare */ -#endif - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - 0, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* tp_version_tag */ -#endif -}; -/**end repeat**/ - -static PyObject * -gentype_alloc(PyTypeObject *type, Py_ssize_t nitems) -{ - PyObject *obj; - const size_t size = _PyObject_VAR_SIZE(type, nitems + 1); - - obj = (PyObject *)_pya_malloc(size); - memset(obj, 0, size); - if (type->tp_itemsize == 0) { - PyObject_INIT(obj, type); - } - else { - (void) PyObject_INIT_VAR((PyVarObject *)obj, type, nitems); - } - return obj; -} - -static void -gentype_dealloc(PyObject *v) -{ - Py_TYPE(v)->tp_free(v); -} - - -static PyObject * -gentype_power(PyObject *m1, PyObject *m2, PyObject *NPY_UNUSED(m3)) -{ - PyObject *arr, *ret, *arg2; - char *msg="unsupported operand type(s) for ** or pow()"; - - if (!PyArray_IsScalar(m1,Generic)) { - if (PyArray_Check(m1)) { - ret = Py_TYPE(m1)->tp_as_number->nb_power(m1,m2, Py_None); - } - else { - if 
(!PyArray_IsScalar(m2,Generic)) { - PyErr_SetString(PyExc_TypeError, msg); - return NULL; - } - arr = PyArray_FromScalar(m2, NULL); - if (arr == NULL) { - return NULL; - } - ret = Py_TYPE(arr)->tp_as_number->nb_power(m1, arr, Py_None); - Py_DECREF(arr); - } - return ret; - } - if (!PyArray_IsScalar(m2, Generic)) { - if (PyArray_Check(m2)) { - ret = Py_TYPE(m2)->tp_as_number->nb_power(m1,m2, Py_None); - } - else { - if (!PyArray_IsScalar(m1, Generic)) { - PyErr_SetString(PyExc_TypeError, msg); - return NULL; - } - arr = PyArray_FromScalar(m1, NULL); - if (arr == NULL) { - return NULL; - } - ret = Py_TYPE(arr)->tp_as_number->nb_power(arr, m2, Py_None); - Py_DECREF(arr); - } - return ret; - } - arr = arg2 = NULL; - arr = PyArray_FromScalar(m1, NULL); - arg2 = PyArray_FromScalar(m2, NULL); - if (arr == NULL || arg2 == NULL) { - Py_XDECREF(arr); - Py_XDECREF(arg2); - return NULL; - } - ret = Py_TYPE(arr)->tp_as_number->nb_power(arr, arg2, Py_None); - Py_DECREF(arr); - Py_DECREF(arg2); - return ret; -} - -static PyObject * -gentype_generic_method(PyObject *self, PyObject *args, PyObject *kwds, - char *str) -{ - PyObject *arr, *meth, *ret; - - arr = PyArray_FromScalar(self, NULL); - if (arr == NULL) { - return NULL; - } - meth = PyObject_GetAttrString(arr, str); - if (meth == NULL) { - Py_DECREF(arr); - return NULL; - } - if (kwds == NULL) { - ret = PyObject_CallObject(meth, args); - } - else { - ret = PyObject_Call(meth, args, kwds); - } - Py_DECREF(meth); - Py_DECREF(arr); - if (ret && PyArray_Check(ret)) { - return PyArray_Return((PyArrayObject *)ret); - } - else { - return ret; - } -} - -/**begin repeat - * - * #name = add, subtract, remainder, divmod, lshift, rshift, - * and, xor, or, floor_divide, true_divide# - */ -static PyObject * -gentype_@name@(PyObject *m1, PyObject *m2) -{ - return PyArray_Type.tp_as_number->nb_@name@(m1, m2); -} - -/**end repeat**/ - -#if !defined(NPY_PY3K) -/**begin repeat - * - * #name = divide# - */ -static PyObject * 
-gentype_@name@(PyObject *m1, PyObject *m2) -{ - return PyArray_Type.tp_as_number->nb_@name@(m1, m2); -} -/**end repeat**/ -#endif - -static PyObject * -gentype_multiply(PyObject *m1, PyObject *m2) -{ - PyObject *ret = NULL; - long repeat; - - if (!PyArray_IsScalar(m1, Generic) && - ((Py_TYPE(m1)->tp_as_number == NULL) || - (Py_TYPE(m1)->tp_as_number->nb_multiply == NULL))) { - /* Try to convert m2 to an int and try sequence repeat */ - repeat = PyInt_AsLong(m2); - if (repeat == -1 && PyErr_Occurred()) { - return NULL; - } - ret = PySequence_Repeat(m1, (int) repeat); - } - else if (!PyArray_IsScalar(m2, Generic) && - ((Py_TYPE(m2)->tp_as_number == NULL) || - (Py_TYPE(m2)->tp_as_number->nb_multiply == NULL))) { - /* Try to convert m1 to an int and try sequence repeat */ - repeat = PyInt_AsLong(m1); - if (repeat == -1 && PyErr_Occurred()) { - return NULL; - } - ret = PySequence_Repeat(m2, (int) repeat); - } - if (ret == NULL) { - PyErr_Clear(); /* no effect if not set */ - ret = PyArray_Type.tp_as_number->nb_multiply(m1, m2); - } - return ret; -} - -/**begin repeat - * - * #name=positive, negative, absolute, invert, int, float# - */ -static PyObject * -gentype_@name@(PyObject *m1) -{ - PyObject *arr, *ret; - - arr = PyArray_FromScalar(m1, NULL); - if (arr == NULL) { - return NULL; - } - ret = Py_TYPE(arr)->tp_as_number->nb_@name@(arr); - Py_DECREF(arr); - return ret; -} -/**end repeat**/ - -#if !defined(NPY_PY3K) -/**begin repeat - * - * #name=long, oct, hex# - */ -static PyObject * -gentype_@name@(PyObject *m1) -{ - PyObject *arr, *ret; - - arr = PyArray_FromScalar(m1, NULL); - if (arr == NULL) { - return NULL; - } - ret = Py_TYPE(arr)->tp_as_number->nb_@name@(arr); - Py_DECREF(arr); - return ret; -} -/**end repeat**/ -#endif - -static int -gentype_nonzero_number(PyObject *m1) -{ - PyObject *arr; - int ret; - - arr = PyArray_FromScalar(m1, NULL); - if (arr == NULL) { - return -1; - } -#if defined(NPY_PY3K) - ret = Py_TYPE(arr)->tp_as_number->nb_bool(arr); -#else - 
ret = Py_TYPE(arr)->tp_as_number->nb_nonzero(arr); -#endif - Py_DECREF(arr); - return ret; -} - -static PyObject * -gentype_str(PyObject *self) -{ - PyArrayObject *arr; - PyObject *ret; - - arr = (PyArrayObject *)PyArray_FromScalar(self, NULL); - if (arr == NULL) { - return NULL; - } - ret = PyObject_Str((PyObject *)arr); - Py_DECREF(arr); - return ret; -} - - -static PyObject * -gentype_repr(PyObject *self) -{ - PyArrayObject *arr; - PyObject *ret; - - arr = (PyArrayObject *)PyArray_FromScalar(self, NULL); - if (arr == NULL) { - return NULL; - } - ret = PyObject_Str((PyObject *)arr); - Py_DECREF(arr); - return ret; -} - -#ifdef FORCE_NO_LONG_DOUBLE_FORMATTING -#undef NPY_LONGDOUBLE_FMT -#define NPY_LONGDOUBLE_FMT NPY_DOUBLE_FMT -#endif - -/**begin repeat - * #name = float, double, longdouble# - * #NAME = FLOAT, DOUBLE, LONGDOUBLE# - * #type = f, d, l# - */ - -#define _FMT1 "%%.%i" NPY_@NAME@_FMT -#define _FMT2 "%%+.%i" NPY_@NAME@_FMT - -NPY_NO_EXPORT void -format_@name@(char *buf, size_t buflen, @name@ val, unsigned int prec) -{ - /* XXX: Find a correct size here for format string */ - char format[64], *res; - size_t i, cnt; - - PyOS_snprintf(format, sizeof(format), _FMT1, prec); - res = NumPyOS_ascii_format@type@(buf, buflen, format, val, 0); - if (res == NULL) { - fprintf(stderr, "Error while formatting\n"); - return; - } - - /* If nothing but digits after sign, append ".0" */ - cnt = strlen(buf); - for (i = (val < 0) ? 
1 : 0; i < cnt; ++i) { - if (!isdigit(Py_CHARMASK(buf[i]))) { - break; - } - } - if (i == cnt && buflen >= cnt + 3) { - strcpy(&buf[cnt],".0"); - } -} - -static void -format_c@name@(char *buf, size_t buflen, c@name@ val, unsigned int prec) -{ - /* XXX: Find a correct size here for format string */ - char format[64]; - char *res; - - /* - * Ideally, we should handle this nan/inf stuff in NumpyOS_ascii_format* - */ -#if PY_VERSION_HEX >= 0x02070000 - if (val.real == 0.0 && npy_signbit(val.real) == 0) { -#else - if (val.real == 0.0) { -#endif - PyOS_snprintf(format, sizeof(format), _FMT1, prec); - res = NumPyOS_ascii_format@type@(buf, buflen-1, format, val.imag, 0); - if (res == NULL) { - fprintf(stderr, "Error while formatting\n"); - return; - } -#if PY_VERSION_HEX >= 0x02060000 - if (!npy_isfinite(val.imag)) { - strncat(buf, "*", 1); - } -#endif - strncat(buf, "j", 1); - } - else { - char re[64], im[64]; - if (npy_isfinite(val.real)) { - PyOS_snprintf(format, sizeof(format), _FMT1, prec); - res = NumPyOS_ascii_format@type@(re, sizeof(re), format, val.real, 0); - if (res == NULL) { - fprintf(stderr, "Error while formatting\n"); - return; - } - } else { - if (npy_isnan(val.real)) { - strcpy(re, "nan"); - } else if (val.real > 0){ - strcpy(re, "inf"); - } else { - strcpy(re, "-inf"); - } - } - - - if (npy_isfinite(val.imag)) { - PyOS_snprintf(format, sizeof(format), _FMT2, prec); - res = NumPyOS_ascii_format@type@(im, sizeof(im), format, val.imag, 0); - if (res == NULL) { - fprintf(stderr, "Error while formatting\n"); - return; - } - } else { - if (npy_isnan(val.imag)) { - strcpy(im, "+nan"); - } else if (val.imag > 0){ - strcpy(im, "+inf"); - } else { - strcpy(im, "-inf"); - } - #if PY_VERSION_HEX >= 0x02060000 - if (!npy_isfinite(val.imag)) { - strncat(im, "*", 1); - } - #endif - } - PyOS_snprintf(buf, buflen, "(%s%sj)", re, im); - } -} - -#undef _FMT1 -#undef _FMT2 - -/**end repeat**/ - -/* - * over-ride repr and str of array-scalar strings and unicode to - * remove 
NULL bytes and then call the corresponding functions - * of string and unicode. - */ - -/**begin repeat - * #name = string*2,unicode*2# - * #form = (repr,str)*2# - * #Name = String*2,Unicode*2# - * #NAME = STRING*2,UNICODE*2# - * #extra = AndSize*2,,# - * #type = char*2, Py_UNICODE*2# - */ -static PyObject * -@name@type_@form@(PyObject *self) -{ - const @type@ *dptr, *ip; - int len; - PyObject *new; - PyObject *ret; - - ip = dptr = Py@Name@_AS_@NAME@(self); - len = Py@Name@_GET_SIZE(self); - dptr += len-1; - while(len > 0 && *dptr-- == 0) { - len--; - } - new = Py@Name@_From@Name@@extra@(ip, len); - if (new == NULL) { - return PyUString_FromString(""); - } - ret = Py@Name@_Type.tp_@form@(new); - Py_DECREF(new); - return ret; -} -/**end repeat**/ - -/* These values are finfo.precision + 2 */ -#define FLOATPREC_REPR 8 -#define FLOATPREC_STR 6 -#define DOUBLEPREC_REPR 17 -#define DOUBLEPREC_STR 12 -#if SIZEOF_LONGDOUBLE == SIZEOF_DOUBLE -#define LONGDOUBLEPREC_REPR DOUBLEPREC_REPR -#define LONGDOUBLEPREC_STR DOUBLEPREC_STR -#else /* More than probably needed on Intel FP */ -#define LONGDOUBLEPREC_REPR 20 -#define LONGDOUBLEPREC_STR 12 -#endif - -/* - * float type str and repr - * - * These functions will return NULL if PyString creation fails. 
- */ - -/**begin repeat - * #name = float, double, longdouble# - * #Name = Float, Double, LongDouble# - * #NAME = FLOAT, DOUBLE, LONGDOUBLE# - */ -/**begin repeat1 - * #kind = str, repr# - * #KIND = STR, REPR# - */ - -#define PREC @NAME@PREC_@KIND@ - -static PyObject * -@name@type_@kind@(PyObject *self) -{ - char buf[100]; - @name@ val = ((Py@Name@ScalarObject *)self)->obval; - - format_@name@(buf, sizeof(buf), val, PREC); - return PyUString_FromString(buf); -} - -static PyObject * -c@name@type_@kind@(PyObject *self) -{ - char buf[202]; - c@name@ val = ((PyC@Name@ScalarObject *)self)->obval; - - format_c@name@(buf, sizeof(buf), val, PREC); - return PyUString_FromString(buf); -} - -#undef PREC - -/**end repeat1**/ -/**end repeat**/ - -/* - * float type print (control print a, where a is a float type instance) - */ -/**begin repeat - * #name = float, double, longdouble# - * #Name = Float, Double, LongDouble# - * #NAME = FLOAT, DOUBLE, LONGDOUBLE# - */ - -static int -@name@type_print(PyObject *v, FILE *fp, int flags) -{ - char buf[100]; - @name@ val = ((Py@Name@ScalarObject *)v)->obval; - - format_@name@(buf, sizeof(buf), val, - (flags & Py_PRINT_RAW) ? @NAME@PREC_STR : @NAME@PREC_REPR); - Py_BEGIN_ALLOW_THREADS - fputs(buf, fp); - Py_END_ALLOW_THREADS - return 0; -} - -static int -c@name@type_print(PyObject *v, FILE *fp, int flags) -{ - /* Size of buf: twice sizeof(real) + 2 (for the parenthesis) */ - char buf[202]; - c@name@ val = ((PyC@Name@ScalarObject *)v)->obval; - - format_c@name@(buf, sizeof(buf), val, - (flags & Py_PRINT_RAW) ? @NAME@PREC_STR : @NAME@PREC_REPR); - Py_BEGIN_ALLOW_THREADS - fputs(buf, fp); - Py_END_ALLOW_THREADS - return 0; -} - -/**end repeat**/ - - -/* - * Could improve this with a PyLong_FromLongDouble(longdouble ldval) - * but this would need some more work... 
- */ - -/**begin repeat - * - * #name = (int, float)*2# - * #KIND = (Long, Float)*2# - * #char = ,,c*2# - * #CHAR = ,,C*2# - * #POST = ,,.real*2# - */ -static PyObject * -@char@longdoubletype_@name@(PyObject *self) -{ - double dval; - PyObject *obj, *ret; - - dval = (double)(((Py@CHAR@LongDoubleScalarObject *)self)->obval)@POST@; - obj = Py@KIND@_FromDouble(dval); - ret = Py_TYPE(obj)->tp_as_number->nb_@name@(obj); - Py_DECREF(obj); - return ret; -} -/**end repeat**/ - -#if !defined(NPY_PY3K) - -/**begin repeat - * - * #name = (long, hex, oct)*2# - * #KIND = (Long*3)*2# - * #char = ,,,c*3# - * #CHAR = ,,,C*3# - * #POST = ,,,.real*3# - */ -static PyObject * -@char@longdoubletype_@name@(PyObject *self) -{ - double dval; - PyObject *obj, *ret; - - dval = (double)(((Py@CHAR@LongDoubleScalarObject *)self)->obval)@POST@; - obj = Py@KIND@_FromDouble(dval); - ret = Py_TYPE(obj)->tp_as_number->nb_@name@(obj); - Py_DECREF(obj); - return ret; -} -/**end repeat**/ - -#endif /* !defined(NPY_PY3K) */ - -static PyNumberMethods gentype_as_number = { - (binaryfunc)gentype_add, /*nb_add*/ - (binaryfunc)gentype_subtract, /*nb_subtract*/ - (binaryfunc)gentype_multiply, /*nb_multiply*/ -#if defined(NPY_PY3K) -#else - (binaryfunc)gentype_divide, /*nb_divide*/ -#endif - (binaryfunc)gentype_remainder, /*nb_remainder*/ - (binaryfunc)gentype_divmod, /*nb_divmod*/ - (ternaryfunc)gentype_power, /*nb_power*/ - (unaryfunc)gentype_negative, - (unaryfunc)gentype_positive, /*nb_pos*/ - (unaryfunc)gentype_absolute, /*(unaryfunc)gentype_abs,*/ - (inquiry)gentype_nonzero_number, /*nb_nonzero*/ - (unaryfunc)gentype_invert, /*nb_invert*/ - (binaryfunc)gentype_lshift, /*nb_lshift*/ - (binaryfunc)gentype_rshift, /*nb_rshift*/ - (binaryfunc)gentype_and, /*nb_and*/ - (binaryfunc)gentype_xor, /*nb_xor*/ - (binaryfunc)gentype_or, /*nb_or*/ -#if defined(NPY_PY3K) -#else - 0, /*nb_coerce*/ -#endif - (unaryfunc)gentype_int, /*nb_int*/ -#if defined(NPY_PY3K) - 0, /*nb_reserved*/ -#else - (unaryfunc)gentype_long, 
/*nb_long*/ -#endif - (unaryfunc)gentype_float, /*nb_float*/ -#if defined(NPY_PY3K) -#else - (unaryfunc)gentype_oct, /*nb_oct*/ - (unaryfunc)gentype_hex, /*nb_hex*/ -#endif - 0, /*inplace_add*/ - 0, /*inplace_subtract*/ - 0, /*inplace_multiply*/ -#if defined(NPY_PY3K) -#else - 0, /*inplace_divide*/ -#endif - 0, /*inplace_remainder*/ - 0, /*inplace_power*/ - 0, /*inplace_lshift*/ - 0, /*inplace_rshift*/ - 0, /*inplace_and*/ - 0, /*inplace_xor*/ - 0, /*inplace_or*/ - (binaryfunc)gentype_floor_divide, /*nb_floor_divide*/ - (binaryfunc)gentype_true_divide, /*nb_true_divide*/ - 0, /*nb_inplace_floor_divide*/ - 0, /*nb_inplace_true_divide*/ -#if PY_VERSION_HEX >= 0x02050000 - (unaryfunc)NULL, /*nb_index*/ -#endif -}; - - -static PyObject * -gentype_richcompare(PyObject *self, PyObject *other, int cmp_op) -{ - PyObject *arr, *ret; - - arr = PyArray_FromScalar(self, NULL); - if (arr == NULL) { - return NULL; - } - ret = Py_TYPE(arr)->tp_richcompare(arr, other, cmp_op); - Py_DECREF(arr); - return ret; -} - -static PyObject * -gentype_ndim_get(PyObject *NPY_UNUSED(self)) -{ - return PyInt_FromLong(0); -} - -static PyObject * -gentype_flags_get(PyObject *NPY_UNUSED(self)) -{ - return PyArray_NewFlagsObject(NULL); -} - -static PyObject * -voidtype_flags_get(PyVoidScalarObject *self) -{ - PyObject *flagobj; - flagobj = PyArrayFlags_Type.tp_alloc(&PyArrayFlags_Type, 0); - if (flagobj == NULL) { - return NULL; - } - ((PyArrayFlagsObject *)flagobj)->arr = NULL; - ((PyArrayFlagsObject *)flagobj)->flags = self->flags; - return flagobj; -} - -static PyObject * -voidtype_dtypedescr_get(PyVoidScalarObject *self) -{ - Py_INCREF(self->descr); - return (PyObject *)self->descr; -} - - -static PyObject * -gentype_data_get(PyObject *self) -{ -#if defined(NPY_PY3K) - return PyMemoryView_FromObject(self); -#else - return PyBuffer_FromObject(self, 0, Py_END_OF_BUFFER); -#endif -} - - -static PyObject * -gentype_itemsize_get(PyObject *self) -{ - PyArray_Descr *typecode; - PyObject *ret; - int 
elsize; - - typecode = PyArray_DescrFromScalar(self); - elsize = typecode->elsize; -#ifndef Py_UNICODE_WIDE - if (typecode->type_num == NPY_UNICODE) { - elsize >>= 1; - } -#endif - ret = PyInt_FromLong((long) elsize); - Py_DECREF(typecode); - return ret; -} - -static PyObject * -gentype_size_get(PyObject *NPY_UNUSED(self)) -{ - return PyInt_FromLong(1); -} - -#if PY_VERSION_HEX >= 0x03000000 -NPY_NO_EXPORT void -gentype_struct_free(PyObject *ptr) -{ - PyArrayInterface *arrif; - PyObject *context; - - arrif = (PyArrayInterface*)PyCapsule_GetPointer(ptr, NULL); - context = (PyObject *)PyCapsule_GetContext(ptr); - Py_DECREF(context); - Py_XDECREF(arrif->descr); - _pya_free(arrif->shape); - _pya_free(arrif); -} -#else -NPY_NO_EXPORT void -gentype_struct_free(void *ptr, void *arg) -{ - PyArrayInterface *arrif = (PyArrayInterface *)ptr; - Py_DECREF((PyObject *)arg); - Py_XDECREF(arrif->descr); - _pya_free(arrif->shape); - _pya_free(arrif); -} -#endif - -static PyObject * -gentype_struct_get(PyObject *self) -{ - PyArrayObject *arr; - PyArrayInterface *inter; - PyObject *ret; - - arr = (PyArrayObject *)PyArray_FromScalar(self, NULL); - inter = (PyArrayInterface *)_pya_malloc(sizeof(PyArrayInterface)); - inter->two = 2; - inter->nd = 0; - inter->flags = arr->flags; - inter->flags &= ~(UPDATEIFCOPY | OWNDATA); - inter->flags |= NPY_NOTSWAPPED; - inter->typekind = arr->descr->kind; - inter->itemsize = arr->descr->elsize; - inter->strides = NULL; - inter->shape = NULL; - inter->data = arr->data; - inter->descr = NULL; - - ret = NpyCapsule_FromVoidPtrAndDesc(inter, arr, gentype_struct_free); - return ret; -} - -static PyObject * -gentype_priority_get(PyObject *NPY_UNUSED(self)) -{ - return PyFloat_FromDouble(NPY_SCALAR_PRIORITY); -} - -static PyObject * -gentype_shape_get(PyObject *NPY_UNUSED(self)) -{ - return PyTuple_New(0); -} - - -static PyObject * -gentype_interface_get(PyObject *self) -{ - PyArrayObject *arr; - PyObject *inter; - - arr = (PyArrayObject 
*)PyArray_FromScalar(self, NULL); - if (arr == NULL) { - return NULL; - } - inter = PyObject_GetAttrString((PyObject *)arr, "__array_interface__"); - if (inter != NULL) { - PyDict_SetItemString(inter, "__ref", (PyObject *)arr); - } - Py_DECREF(arr); - return inter; -} - - - -static PyObject * -gentype_typedescr_get(PyObject *self) -{ - return (PyObject *)PyArray_DescrFromScalar(self); -} - - -static PyObject * -gentype_base_get(PyObject *NPY_UNUSED(self)) -{ - Py_INCREF(Py_None); - return Py_None; -} - - -static PyArray_Descr * -_realdescr_fromcomplexscalar(PyObject *self, int *typenum) -{ - if (PyArray_IsScalar(self, CDouble)) { - *typenum = PyArray_CDOUBLE; - return PyArray_DescrFromType(PyArray_DOUBLE); - } - if (PyArray_IsScalar(self, CFloat)) { - *typenum = PyArray_CFLOAT; - return PyArray_DescrFromType(PyArray_FLOAT); - } - if (PyArray_IsScalar(self, CLongDouble)) { - *typenum = PyArray_CLONGDOUBLE; - return PyArray_DescrFromType(PyArray_LONGDOUBLE); - } - return NULL; -} - -static PyObject * -gentype_real_get(PyObject *self) -{ - PyArray_Descr *typecode; - PyObject *ret; - int typenum; - - if (PyArray_IsScalar(self, ComplexFloating)) { - void *ptr; - typecode = _realdescr_fromcomplexscalar(self, &typenum); - ptr = scalar_value(self, NULL); - ret = PyArray_Scalar(ptr, typecode, NULL); - Py_DECREF(typecode); - return ret; - } - else if (PyArray_IsScalar(self, Object)) { - PyObject *obj = ((PyObjectScalarObject *)self)->obval; - ret = PyObject_GetAttrString(obj, "real"); - if (ret != NULL) { - return ret; - } - PyErr_Clear(); - } - Py_INCREF(self); - return (PyObject *)self; -} - -static PyObject * -gentype_imag_get(PyObject *self) -{ - PyArray_Descr *typecode=NULL; - PyObject *ret; - int typenum; - - if (PyArray_IsScalar(self, ComplexFloating)) { - char *ptr; - typecode = _realdescr_fromcomplexscalar(self, &typenum); - ptr = (char *)scalar_value(self, NULL); - ret = PyArray_Scalar(ptr + typecode->elsize, typecode, NULL); - } - else if (PyArray_IsScalar(self, 
Object)) { - PyObject *obj = ((PyObjectScalarObject *)self)->obval; - PyArray_Descr *newtype; - ret = PyObject_GetAttrString(obj, "imag"); - if (ret == NULL) { - PyErr_Clear(); - obj = PyInt_FromLong(0); - newtype = PyArray_DescrFromType(PyArray_OBJECT); - ret = PyArray_Scalar((char *)&obj, newtype, NULL); - Py_DECREF(newtype); - Py_DECREF(obj); - } - } - else { - char *temp; - int elsize; - typecode = PyArray_DescrFromScalar(self); - elsize = typecode->elsize; - temp = PyDataMem_NEW(elsize); - memset(temp, '\0', elsize); - ret = PyArray_Scalar(temp, typecode, NULL); - PyDataMem_FREE(temp); - } - - Py_XDECREF(typecode); - return ret; -} - -static PyObject * -gentype_flat_get(PyObject *self) -{ - PyObject *ret, *arr; - - arr = PyArray_FromScalar(self, NULL); - if (arr == NULL) { - return NULL; - } - ret = PyArray_IterNew(arr); - Py_DECREF(arr); - return ret; -} - - -static PyObject * -gentype_transpose_get(PyObject *self) -{ - Py_INCREF(self); - return self; -} - - -static PyGetSetDef gentype_getsets[] = { - {"ndim", - (getter)gentype_ndim_get, - (setter) 0, - "number of array dimensions", - NULL}, - {"flags", - (getter)gentype_flags_get, - (setter)0, - "integer value of flags", - NULL}, - {"shape", - (getter)gentype_shape_get, - (setter)0, - "tuple of array dimensions", - NULL}, - {"strides", - (getter)gentype_shape_get, - (setter) 0, - "tuple of bytes steps in each dimension", - NULL}, - {"data", - (getter)gentype_data_get, - (setter) 0, - "pointer to start of data", - NULL}, - {"itemsize", - (getter)gentype_itemsize_get, - (setter)0, - "length of one element in bytes", - NULL}, - {"size", - (getter)gentype_size_get, - (setter)0, - "number of elements in the gentype", - NULL}, - {"nbytes", - (getter)gentype_itemsize_get, - (setter)0, - "length of item in bytes", - NULL}, - {"base", - (getter)gentype_base_get, - (setter)0, - "base object", - NULL}, - {"dtype", - (getter)gentype_typedescr_get, - NULL, - "get array data-descriptor", - NULL}, - {"real", - 
(getter)gentype_real_get, - (setter)0, - "real part of scalar", - NULL}, - {"imag", - (getter)gentype_imag_get, - (setter)0, - "imaginary part of scalar", - NULL}, - {"flat", - (getter)gentype_flat_get, - (setter)0, - "a 1-d view of scalar", - NULL}, - {"T", - (getter)gentype_transpose_get, - (setter)0, - "transpose", - NULL}, - {"__array_interface__", - (getter)gentype_interface_get, - NULL, - "Array protocol: Python side", - NULL}, - {"__array_struct__", - (getter)gentype_struct_get, - NULL, - "Array protocol: struct", - NULL}, - {"__array_priority__", - (getter)gentype_priority_get, - NULL, - "Array priority.", - NULL}, - {NULL, NULL, NULL, NULL, NULL} /* Sentinel */ -}; - - -/* 0-dim array from scalar object */ - -static char doc_getarray[] = "sc.__array__(|type) return 0-dim array"; - -static PyObject * -gentype_getarray(PyObject *scalar, PyObject *args) -{ - PyArray_Descr *outcode=NULL; - PyObject *ret; - - if (!PyArg_ParseTuple(args, "|O&", &PyArray_DescrConverter, - &outcode)) { - Py_XDECREF(outcode); - return NULL; - } - ret = PyArray_FromScalar(scalar, outcode); - return ret; -} - -static char doc_sc_wraparray[] = "sc.__array_wrap__(obj) return scalar from array"; - -static PyObject * -gentype_wraparray(PyObject *NPY_UNUSED(scalar), PyObject *args) -{ - PyObject *arr; - - if (PyTuple_Size(args) < 1) { - PyErr_SetString(PyExc_TypeError, - "only accepts 1 argument."); - return NULL; - } - arr = PyTuple_GET_ITEM(args, 0); - if (!PyArray_Check(arr)) { - PyErr_SetString(PyExc_TypeError, - "can only be called with ndarray object"); - return NULL; - } - - return PyArray_Scalar(PyArray_DATA(arr), PyArray_DESCR(arr), arr); -} - -/* - * These gentype_* functions do not take keyword arguments. - * The proper flag is METH_VARARGS. 
- */ -/**begin repeat - * - * #name = tolist, item, tostring, astype, copy, __deepcopy__, searchsorted, - * view, swapaxes, conj, conjugate, nonzero, flatten, ravel, fill, - * transpose, newbyteorder# - */ -static PyObject * -gentype_@name@(PyObject *self, PyObject *args) -{ - return gentype_generic_method(self, args, NULL, "@name@"); -} -/**end repeat**/ - -static PyObject * -gentype_itemset(PyObject *NPY_UNUSED(self), PyObject *NPY_UNUSED(args)) -{ - PyErr_SetString(PyExc_ValueError, "array-scalars are immutable"); - return NULL; -} - -static PyObject * -gentype_squeeze(PyObject *self, PyObject *args) -{ - if (!PyArg_ParseTuple(args, "")) { - return NULL; - } - Py_INCREF(self); - return self; -} - -static Py_ssize_t -gentype_getreadbuf(PyObject *, Py_ssize_t, void **); - -static PyObject * -gentype_byteswap(PyObject *self, PyObject *args) -{ - Bool inplace=FALSE; - - if (!PyArg_ParseTuple(args, "|O&", PyArray_BoolConverter, &inplace)) { - return NULL; - } - if (inplace) { - PyErr_SetString(PyExc_ValueError, - "cannot byteswap a scalar in-place"); - return NULL; - } - else { - /* get the data, copyswap it and pass it to a new Array scalar */ - char *data; - int numbytes; - PyArray_Descr *descr; - PyObject *new; - char *newmem; - - numbytes = gentype_getreadbuf(self, 0, (void **)&data); - descr = PyArray_DescrFromScalar(self); - newmem = _pya_malloc(descr->elsize); - if (newmem == NULL) { - Py_DECREF(descr); - return PyErr_NoMemory(); - } - else { - descr->f->copyswap(newmem, data, 1, NULL); - } - new = PyArray_Scalar(newmem, descr, NULL); - _pya_free(newmem); - Py_DECREF(descr); - return new; - } -} - - -/* - * These gentype_* functions take keyword arguments. - * The proper flag is METH_VARARGS | METH_KEYWORDS. 
- */ -/**begin repeat - * - * #name = take, getfield, put, repeat, tofile, mean, trace, diagonal, clip, - * std, var, sum, cumsum, prod, cumprod, compress, sort, argsort, - * round, argmax, argmin, max, min, ptp, any, all, resize, reshape, - * choose# - */ -static PyObject * -gentype_@name@(PyObject *self, PyObject *args, PyObject *kwds) -{ - return gentype_generic_method(self, args, kwds, "@name@"); -} -/**end repeat**/ - -static PyObject * -voidtype_getfield(PyVoidScalarObject *self, PyObject *args, PyObject *kwds) -{ - PyObject *ret, *newargs; - - newargs = PyTuple_GetSlice(args, 0, 2); - if (newargs == NULL) { - return NULL; - } - ret = gentype_generic_method((PyObject *)self, newargs, kwds, "getfield"); - Py_DECREF(newargs); - if (!ret) { - return ret; - } - if (PyArray_IsScalar(ret, Generic) && \ - (!PyArray_IsScalar(ret, Void))) { - PyArray_Descr *new; - void *ptr; - if (!PyArray_ISNBO(self->descr->byteorder)) { - new = PyArray_DescrFromScalar(ret); - ptr = scalar_value(ret, new); - byte_swap_vector(ptr, 1, new->elsize); - Py_DECREF(new); - } - } - return ret; -} - -static PyObject * -gentype_setfield(PyObject *NPY_UNUSED(self), PyObject *NPY_UNUSED(args), PyObject *NPY_UNUSED(kwds)) -{ - PyErr_SetString(PyExc_TypeError, - "Can't set fields in a non-void array scalar."); - return NULL; -} - -static PyObject * -voidtype_setfield(PyVoidScalarObject *self, PyObject *args, PyObject *kwds) -{ - PyArray_Descr *typecode = NULL; - int offset = 0; - PyObject *value, *src; - int mysize; - char *dptr; - static char *kwlist[] = {"value", "dtype", "offset", 0}; - - if ((self->flags & WRITEABLE) != WRITEABLE) { - PyErr_SetString(PyExc_RuntimeError, "Can't write to memory"); - return NULL; - } - if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO&|i", kwlist, - &value, - PyArray_DescrConverter, - &typecode, &offset)) { - Py_XDECREF(typecode); - return NULL; - } - - mysize = Py_SIZE(self); - - if (offset < 0 || (offset + typecode->elsize) > mysize) { - 
PyErr_Format(PyExc_ValueError, - "Need 0 <= offset <= %d for requested type " \ - "but received offset = %d", - mysize-typecode->elsize, offset); - Py_DECREF(typecode); - return NULL; - } - - dptr = self->obval + offset; - - if (typecode->type_num == PyArray_OBJECT) { - PyObject *temp; - Py_INCREF(value); - NPY_COPY_PYOBJECT_PTR(&temp, dptr); - Py_XDECREF(temp); - NPY_COPY_PYOBJECT_PTR(dptr, &value); - Py_DECREF(typecode); - } - else { - /* Copy data from value to correct place in dptr */ - src = PyArray_FromAny(value, typecode, 0, 0, CARRAY, NULL); - if (src == NULL) { - return NULL; - } - typecode->f->copyswap(dptr, PyArray_DATA(src), - !PyArray_ISNBO(self->descr->byteorder), - src); - Py_DECREF(src); - } - Py_INCREF(Py_None); - return Py_None; -} - - -static PyObject * -gentype_reduce(PyObject *self, PyObject *NPY_UNUSED(args)) -{ - PyObject *ret = NULL, *obj = NULL, *mod = NULL; - const char *buffer; - Py_ssize_t buflen; - - /* Return a tuple of (callable object, arguments) */ - ret = PyTuple_New(2); - if (ret == NULL) { - return NULL; - } -#if defined(NPY_PY3K) - if (PyArray_IsScalar(self, Unicode)) { - /* Unicode on Python 3 does not expose the buffer interface */ - buffer = PyUnicode_AS_DATA(self); - buflen = PyUnicode_GET_DATA_SIZE(self); - } - else -#endif - if (PyObject_AsReadBuffer(self, (const void **)&buffer, &buflen)<0) { - Py_DECREF(ret); - return NULL; - } - mod = PyImport_ImportModule("numpy.core.multiarray"); - if (mod == NULL) { - return NULL; - } - obj = PyObject_GetAttrString(mod, "scalar"); - Py_DECREF(mod); - if (obj == NULL) { - return NULL; - } - PyTuple_SET_ITEM(ret, 0, obj); - obj = PyObject_GetAttrString((PyObject *)self, "dtype"); - if (PyArray_IsScalar(self, Object)) { - mod = ((PyObjectScalarObject *)self)->obval; - PyTuple_SET_ITEM(ret, 1, Py_BuildValue("NO", obj, mod)); - } - else { -#ifndef Py_UNICODE_WIDE - /* - * We need to expand the buffer so that we always write - * UCS4 to disk for pickle of unicode scalars. 
- * - * This could be in a unicode_reduce function, but - * that would require re-factoring. - */ - int alloc = 0; - char *tmp; - int newlen; - - if (PyArray_IsScalar(self, Unicode)) { - tmp = _pya_malloc(buflen*2); - if (tmp == NULL) { - Py_DECREF(ret); - return PyErr_NoMemory(); - } - alloc = 1; - newlen = PyUCS2Buffer_AsUCS4((Py_UNICODE *)buffer, - (PyArray_UCS4 *)tmp, - buflen / 2, buflen / 2); - buflen = newlen*4; - buffer = tmp; - } -#endif - mod = PyBytes_FromStringAndSize(buffer, buflen); - if (mod == NULL) { - Py_DECREF(ret); -#ifndef Py_UNICODE_WIDE - ret = NULL; - goto fail; -#else - return NULL; -#endif - } - PyTuple_SET_ITEM(ret, 1, - Py_BuildValue("NN", obj, mod)); -#ifndef Py_UNICODE_WIDE -fail: - if (alloc) _pya_free((char *)buffer); -#endif - } - return ret; -} - -/* ignores everything */ -static PyObject * -gentype_setstate(PyObject *NPY_UNUSED(self), PyObject *NPY_UNUSED(args)) -{ - Py_INCREF(Py_None); - return (Py_None); -} - -static PyObject * -gentype_dump(PyObject *self, PyObject *args) -{ - PyObject *file = NULL; - int ret; - - if (!PyArg_ParseTuple(args, "O", &file)) { - return NULL; - } - ret = PyArray_Dump(self, file, 2); - if (ret < 0) { - return NULL; - } - Py_INCREF(Py_None); - return Py_None; -} - -static PyObject * -gentype_dumps(PyObject *self, PyObject *args) -{ - if (!PyArg_ParseTuple(args, "")) { - return NULL; - } - return PyArray_Dumps(self, 2); -} - - -/* setting flags cannot be done for scalars */ -static PyObject * -gentype_setflags(PyObject *NPY_UNUSED(self), PyObject *NPY_UNUSED(args), - PyObject *NPY_UNUSED(kwds)) -{ - Py_INCREF(Py_None); - return Py_None; -} - -/* need to fill in doc-strings for these methods on import -- copy from - array docstrings -*/ -static PyMethodDef gentype_methods[] = { - {"tolist", - (PyCFunction)gentype_tolist, - METH_VARARGS, NULL}, - {"item", - (PyCFunction)gentype_item, - METH_VARARGS, NULL}, - {"itemset", - (PyCFunction)gentype_itemset, - METH_VARARGS, NULL}, - {"tofile", - 
(PyCFunction)gentype_tofile, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"tostring", - (PyCFunction)gentype_tostring, - METH_VARARGS, NULL}, - {"byteswap", - (PyCFunction)gentype_byteswap, - METH_VARARGS, NULL}, - {"astype", - (PyCFunction)gentype_astype, - METH_VARARGS, NULL}, - {"getfield", - (PyCFunction)gentype_getfield, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"setfield", - (PyCFunction)gentype_setfield, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"copy", - (PyCFunction)gentype_copy, - METH_VARARGS, NULL}, - {"resize", - (PyCFunction)gentype_resize, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"__array__", - (PyCFunction)gentype_getarray, - METH_VARARGS, doc_getarray}, - {"__array_wrap__", - (PyCFunction)gentype_wraparray, - METH_VARARGS, doc_sc_wraparray}, - - /* for the copy module */ - {"__copy__", - (PyCFunction)gentype_copy, - METH_VARARGS, NULL}, - {"__deepcopy__", - (PyCFunction)gentype___deepcopy__, - METH_VARARGS, NULL}, - - {"__reduce__", - (PyCFunction) gentype_reduce, - METH_VARARGS, NULL}, - /* For consistency does nothing */ - {"__setstate__", - (PyCFunction) gentype_setstate, - METH_VARARGS, NULL}, - - {"dumps", - (PyCFunction) gentype_dumps, - METH_VARARGS, NULL}, - {"dump", - (PyCFunction) gentype_dump, - METH_VARARGS, NULL}, - - /* Methods for array */ - {"fill", - (PyCFunction)gentype_fill, - METH_VARARGS, NULL}, - {"transpose", - (PyCFunction)gentype_transpose, - METH_VARARGS, NULL}, - {"take", - (PyCFunction)gentype_take, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"put", - (PyCFunction)gentype_put, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"repeat", - (PyCFunction)gentype_repeat, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"choose", - (PyCFunction)gentype_choose, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"sort", - (PyCFunction)gentype_sort, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"argsort", - (PyCFunction)gentype_argsort, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"searchsorted", - (PyCFunction)gentype_searchsorted, - METH_VARARGS, NULL}, - 
{"argmax", - (PyCFunction)gentype_argmax, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"argmin", - (PyCFunction)gentype_argmin, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"reshape", - (PyCFunction)gentype_reshape, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"squeeze", - (PyCFunction)gentype_squeeze, - METH_VARARGS, NULL}, - {"view", - (PyCFunction)gentype_view, - METH_VARARGS, NULL}, - {"swapaxes", - (PyCFunction)gentype_swapaxes, - METH_VARARGS, NULL}, - {"max", - (PyCFunction)gentype_max, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"min", - (PyCFunction)gentype_min, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"ptp", - (PyCFunction)gentype_ptp, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"mean", - (PyCFunction)gentype_mean, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"trace", - (PyCFunction)gentype_trace, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"diagonal", - (PyCFunction)gentype_diagonal, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"clip", - (PyCFunction)gentype_clip, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"conj", - (PyCFunction)gentype_conj, - METH_VARARGS, NULL}, - {"conjugate", - (PyCFunction)gentype_conjugate, - METH_VARARGS, NULL}, - {"nonzero", - (PyCFunction)gentype_nonzero, - METH_VARARGS, NULL}, - {"std", - (PyCFunction)gentype_std, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"var", - (PyCFunction)gentype_var, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"sum", - (PyCFunction)gentype_sum, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"cumsum", - (PyCFunction)gentype_cumsum, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"prod", - (PyCFunction)gentype_prod, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"cumprod", - (PyCFunction)gentype_cumprod, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"all", - (PyCFunction)gentype_all, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"any", - (PyCFunction)gentype_any, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"compress", - (PyCFunction)gentype_compress, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"flatten", - (PyCFunction)gentype_flatten, - METH_VARARGS, 
NULL}, - {"ravel", - (PyCFunction)gentype_ravel, - METH_VARARGS, NULL}, - {"round", - (PyCFunction)gentype_round, - METH_VARARGS | METH_KEYWORDS, NULL}, -#if defined(NPY_PY3K) - /* Hook for the round() builtin */ - {"__round__", - (PyCFunction)gentype_round, - METH_VARARGS | METH_KEYWORDS, NULL}, -#endif - {"setflags", - (PyCFunction)gentype_setflags, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"newbyteorder", - (PyCFunction)gentype_newbyteorder, - METH_VARARGS, NULL}, - {NULL, NULL, 0, NULL} /* sentinel */ -}; - - -static PyGetSetDef voidtype_getsets[] = { - {"flags", - (getter)voidtype_flags_get, - (setter)0, - "integer value of flags", - NULL}, - {"dtype", - (getter)voidtype_dtypedescr_get, - (setter)0, - "dtype object", - NULL}, - {NULL, NULL, NULL, NULL, NULL} -}; - -static PyMethodDef voidtype_methods[] = { - {"getfield", - (PyCFunction)voidtype_getfield, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"setfield", - (PyCFunction)voidtype_setfield, - METH_VARARGS | METH_KEYWORDS, NULL}, - {NULL, NULL, 0, NULL} -}; - -/************* As_mapping functions for void array scalar ************/ - -static Py_ssize_t -voidtype_length(PyVoidScalarObject *self) -{ - if (!self->descr->names) { - return 0; - } - else { /* return the number of fields */ - return (Py_ssize_t) PyTuple_GET_SIZE(self->descr->names); - } -} - -static PyObject * -voidtype_item(PyVoidScalarObject *self, Py_ssize_t n) -{ - intp m; - PyObject *flist=NULL, *fieldinfo; - - if (!(PyDescr_HASFIELDS(self->descr))) { - PyErr_SetString(PyExc_IndexError, - "can't index void scalar without fields"); - return NULL; - } - flist = self->descr->names; - m = PyTuple_GET_SIZE(flist); - if (n < 0) { - n += m; - } - if (n < 0 || n >= m) { - PyErr_Format(PyExc_IndexError, "invalid index (%d)", (int) n); - return NULL; - } - fieldinfo = PyDict_GetItem(self->descr->fields, - PyTuple_GET_ITEM(flist, n)); - return voidtype_getfield(self, fieldinfo, NULL); -} - - -/* get field by name or number */ -static PyObject * 
-voidtype_subscript(PyVoidScalarObject *self, PyObject *ind) -{ - intp n; - PyObject *fieldinfo; - - if (!(PyDescr_HASFIELDS(self->descr))) { - PyErr_SetString(PyExc_IndexError, - "can't index void scalar without fields"); - return NULL; - } - -#if defined(NPY_PY3K) - if (PyUString_Check(ind)) { -#else - if (PyBytes_Check(ind) || PyUnicode_Check(ind)) { -#endif - /* look up in fields */ - fieldinfo = PyDict_GetItem(self->descr->fields, ind); - if (!fieldinfo) { - goto fail; - } - return voidtype_getfield(self, fieldinfo, NULL); - } - - /* try to convert it to a number */ - n = PyArray_PyIntAsIntp(ind); - if (error_converting(n)) { - goto fail; - } - return voidtype_item(self, (Py_ssize_t)n); - -fail: - PyErr_SetString(PyExc_IndexError, "invalid index"); - return NULL; -} - -static int -voidtype_ass_item(PyVoidScalarObject *self, Py_ssize_t n, PyObject *val) -{ - intp m; - PyObject *flist=NULL, *fieldinfo, *newtup; - PyObject *res; - - if (!(PyDescr_HASFIELDS(self->descr))) { - PyErr_SetString(PyExc_IndexError, - "can't index void scalar without fields"); - return -1; - } - - flist = self->descr->names; - m = PyTuple_GET_SIZE(flist); - if (n < 0) { - n += m; - } - if (n < 0 || n >= m) { - goto fail; - } - fieldinfo = PyDict_GetItem(self->descr->fields, - PyTuple_GET_ITEM(flist, n)); - newtup = Py_BuildValue("(OOO)", val, - PyTuple_GET_ITEM(fieldinfo, 0), - PyTuple_GET_ITEM(fieldinfo, 1)); - res = voidtype_setfield(self, newtup, NULL); - Py_DECREF(newtup); - if (!res) { - return -1; - } - Py_DECREF(res); - return 0; - -fail: - PyErr_Format(PyExc_IndexError, "invalid index (%d)", (int) n); - return -1; -} - -static int -voidtype_ass_subscript(PyVoidScalarObject *self, PyObject *ind, PyObject *val) -{ - intp n; - char *msg = "invalid index"; - PyObject *fieldinfo, *newtup; - PyObject *res; - - if (!PyDescr_HASFIELDS(self->descr)) { - PyErr_SetString(PyExc_IndexError, - "can't index void scalar without fields"); - return -1; - } - -#if defined(NPY_PY3K) - if 
(PyUString_Check(ind)) { -#else - if (PyBytes_Check(ind) || PyUnicode_Check(ind)) { -#endif - /* look up in fields */ - fieldinfo = PyDict_GetItem(self->descr->fields, ind); - if (!fieldinfo) { - goto fail; - } - newtup = Py_BuildValue("(OOO)", val, - PyTuple_GET_ITEM(fieldinfo, 0), - PyTuple_GET_ITEM(fieldinfo, 1)); - res = voidtype_setfield(self, newtup, NULL); - Py_DECREF(newtup); - if (!res) { - return -1; - } - Py_DECREF(res); - return 0; - } - - /* try to convert it to a number */ - n = PyArray_PyIntAsIntp(ind); - if (error_converting(n)) { - goto fail; - } - return voidtype_ass_item(self, (Py_ssize_t)n, val); - -fail: - PyErr_SetString(PyExc_IndexError, msg); - return -1; -} - -static PyMappingMethods voidtype_as_mapping = { -#if PY_VERSION_HEX >= 0x02050000 - (lenfunc)voidtype_length, /*mp_length*/ -#else - (inquiry)voidtype_length, /*mp_length*/ -#endif - (binaryfunc)voidtype_subscript, /*mp_subscript*/ - (objobjargproc)voidtype_ass_subscript, /*mp_ass_subscript*/ -}; - - -static PySequenceMethods voidtype_as_sequence = { -#if PY_VERSION_HEX >= 0x02050000 - (lenfunc)voidtype_length, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - (ssizeargfunc)voidtype_item, /*sq_item*/ - 0, /*sq_slice*/ - (ssizeobjargproc)voidtype_ass_item, /*sq_ass_item*/ -#else - (inquiry)voidtype_length, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - (intargfunc)voidtype_item, /*sq_item*/ - 0, /*sq_slice*/ - (intobjargproc)voidtype_ass_item, /*sq_ass_item*/ -#endif - 0, /* ssq_ass_slice */ - 0, /* sq_contains */ - 0, /* sq_inplace_concat */ - 0, /* sq_inplace_repeat */ -}; - - -static Py_ssize_t -gentype_getreadbuf(PyObject *self, Py_ssize_t segment, void **ptrptr) -{ - int numbytes; - PyArray_Descr *outcode; - - if (segment != 0) { - PyErr_SetString(PyExc_SystemError, - "Accessing non-existent array segment"); - return -1; - } - - outcode = PyArray_DescrFromScalar(self); - numbytes = outcode->elsize; - *ptrptr = (void *)scalar_value(self, outcode); - -#ifndef 
Py_UNICODE_WIDE - if (outcode->type_num == NPY_UNICODE) { - numbytes >>= 1; - } -#endif - Py_DECREF(outcode); - return numbytes; -} - -static Py_ssize_t -gentype_getsegcount(PyObject *self, Py_ssize_t *lenp) -{ - PyArray_Descr *outcode; - - outcode = PyArray_DescrFromScalar(self); - if (lenp) { - *lenp = outcode->elsize; -#ifndef Py_UNICODE_WIDE - if (outcode->type_num == NPY_UNICODE) { - *lenp >>= 1; - } -#endif - } - Py_DECREF(outcode); - return 1; -} - -static Py_ssize_t -gentype_getcharbuf(PyObject *self, Py_ssize_t segment, constchar **ptrptr) -{ - if (PyArray_IsScalar(self, String) || - PyArray_IsScalar(self, Unicode)) { - return gentype_getreadbuf(self, segment, (void **)ptrptr); - } - else { - PyErr_SetString(PyExc_TypeError, - "Non-character array cannot be interpreted "\ - "as character buffer."); - return -1; - } -} - -#if PY_VERSION_HEX >= 0x02060000 - -static int -gentype_getbuffer(PyObject *self, Py_buffer *view, int flags) -{ - Py_ssize_t len; - void *buf; - - /* FIXME: XXX: the format is not implemented! 
-- this needs more work */ - - len = gentype_getreadbuf(self, 0, &buf); - return PyBuffer_FillInfo(view, self, buf, len, 1, flags); -} - -/* releasebuffer is not needed */ - -#endif - -static PyBufferProcs gentype_as_buffer = { -#if !defined(NPY_PY3K) - gentype_getreadbuf, /* bf_getreadbuffer*/ - NULL, /* bf_getwritebuffer*/ - gentype_getsegcount, /* bf_getsegcount*/ - gentype_getcharbuf, /* bf_getcharbuffer*/ -#endif -#if PY_VERSION_HEX >= 0x02060000 - gentype_getbuffer, /* bf_getbuffer */ - NULL, /* bf_releasebuffer */ -#endif -}; - - -#if defined(NPY_PY3K) -#define BASEFLAGS Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE -#define LEAFFLAGS Py_TPFLAGS_DEFAULT -#else -#define BASEFLAGS Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_CHECKTYPES -#define LEAFFLAGS Py_TPFLAGS_DEFAULT | Py_TPFLAGS_CHECKTYPES -#endif - -NPY_NO_EXPORT PyTypeObject PyGenericArrType_Type = { -#if defined(NPY_PY3K) - PyVarObject_HEAD_INIT(NULL, 0) -#else - PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ -#endif - "numpy.generic", /* tp_name*/ - sizeof(PyObject), /* tp_basicsize*/ - 0, /* tp_itemsize */ - /* methods */ - 0, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ -#if defined(NPY_PY3K) - 0, /* tp_reserved */ -#else - 0, /* tp_compare */ -#endif - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - 0, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* 
tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* tp_version_tag */ -#endif -}; - -static void -void_dealloc(PyVoidScalarObject *v) -{ - if (v->flags & OWNDATA) { - PyDataMem_FREE(v->obval); - } - Py_XDECREF(v->descr); - Py_XDECREF(v->base); - Py_TYPE(v)->tp_free(v); -} - -static void -object_arrtype_dealloc(PyObject *v) -{ - Py_XDECREF(((PyObjectScalarObject *)v)->obval); - Py_TYPE(v)->tp_free(v); -} - -/* - * string and unicode inherit from Python Type first and so GET_ITEM - * is different to get to the Python Type. - * - * ok is a work-around for a bug in complex_new that doesn't allocate - * memory from the sub-types memory allocator. - */ - -#define _WORK(num) \ - if (type->tp_bases && (PyTuple_GET_SIZE(type->tp_bases)==2)) { \ - PyTypeObject *sup; \ - /* We are inheriting from a Python type as well so \ - give it first dibs on conversion */ \ - sup = (PyTypeObject *)PyTuple_GET_ITEM(type->tp_bases, num); \ - robj = sup->tp_new(type, args, kwds); \ - if (robj != NULL) goto finish; \ - if (PyTuple_GET_SIZE(args)!=1) return NULL; \ - PyErr_Clear(); \ - /* now do default conversion */ \ - } - -#define _WORK1 _WORK(1) -#define _WORKz _WORK(0) -#define _WORK0 - -/**begin repeat - * #name = byte, short, int, long, longlong, ubyte, ushort, uint, ulong, - * ulonglong, float, double, longdouble, cfloat, cdouble, clongdouble, - * string, unicode, object# - * #TYPE = BYTE, SHORT, INT, LONG, LONGLONG, UBYTE, USHORT, UINT, ULONG, - * ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, CFLOAT, CDOUBLE, CLONGDOUBLE, - * STRING, UNICODE, OBJECT# - * #work = 0,0,1,1,1,0,0,0,0,0,0,1,0,0,0,0,z,z,0# - * #default = 0*16,1*2,2# - */ - -#define _NPY_UNUSED2_1 -#define _NPY_UNUSED2_z -#define _NPY_UNUSED2_0 NPY_UNUSED -#define _NPY_UNUSED1_0 -#define _NPY_UNUSED1_1 -#define _NPY_UNUSED1_2 NPY_UNUSED - -static PyObject * -@name@_arrtype_new(PyTypeObject *_NPY_UNUSED1_@default@(type), PyObject *args, PyObject 
*_NPY_UNUSED2_@work@(kwds)) -{ - PyObject *obj = NULL; - PyObject *robj; - PyObject *arr; - PyArray_Descr *typecode = NULL; -#if !(@default@ == 2) - int itemsize; - void *dest, *src; -#endif - - /* - * allow base-class (if any) to do conversion - * If successful, this will jump to finish: - */ - _WORK@work@ - - if (!PyArg_ParseTuple(args, "|O", &obj)) { - return NULL; - } - typecode = PyArray_DescrFromType(PyArray_@TYPE@); - /* - * typecode is new reference and stolen by - * PyArray_FromAny but not PyArray_Scalar - */ - if (obj == NULL) { -#if @default@ == 0 - char *mem = malloc(sizeof(@name@)); - - memset(mem, 0, sizeof(@name@)); - robj = PyArray_Scalar(mem, typecode, NULL); - free(mem); -#elif @default@ == 1 - robj = PyArray_Scalar(NULL, typecode, NULL); -#elif @default@ == 2 - Py_INCREF(Py_None); - robj = Py_None; -#endif - Py_DECREF(typecode); - goto finish; - } - - /* - * It is expected at this point that robj is a PyArrayScalar - * (even for Object Data Type) - */ - arr = PyArray_FromAny(obj, typecode, 0, 0, FORCECAST, NULL); - if ((arr == NULL) || (PyArray_NDIM(arr) > 0)) { - return arr; - } - /* 0-d array */ - robj = PyArray_ToScalar(PyArray_DATA(arr), (NPY_AO *)arr); - Py_DECREF(arr); - -finish: - /* - * In OBJECT case, robj is no longer a - * PyArrayScalar at this point but the - * remaining code assumes it is - */ -#if @default@ == 2 - return robj; -#else - /* Normal return */ - if ((robj == NULL) || (Py_TYPE(robj) == type)) { - return robj; - } - - /* - * This return path occurs when the requested type is not created - * but another scalar object is created instead (i.e. 
when - * the base-class does the conversion in _WORK macro) - */ - - /* Need to allocate new type and copy data-area over */ - if (type->tp_itemsize) { - itemsize = PyBytes_GET_SIZE(robj); - } - else { - itemsize = 0; - } - obj = type->tp_alloc(type, itemsize); - if (obj == NULL) { - Py_DECREF(robj); - return NULL; - } - /* typecode will be NULL */ - typecode = PyArray_DescrFromType(PyArray_@TYPE@); - dest = scalar_value(obj, typecode); - src = scalar_value(robj, typecode); - Py_DECREF(typecode); -#if @default@ == 0 - *((npy_@name@ *)dest) = *((npy_@name@ *)src); -#elif @default@ == 1 /* unicode and strings */ - if (itemsize == 0) { /* unicode */ - itemsize = ((PyUnicodeObject *)robj)->length * sizeof(Py_UNICODE); - } - memcpy(dest, src, itemsize); - /* @default@ == 2 won't get here */ -#endif - Py_DECREF(robj); - return obj; -#endif -} -/**end repeat**/ - -#undef _WORK1 -#undef _WORKz -#undef _WORK0 -#undef _WORK - -/* bool->tp_new only returns Py_True or Py_False */ -static PyObject * -bool_arrtype_new(PyTypeObject *NPY_UNUSED(type), PyObject *args, PyObject *NPY_UNUSED(kwds)) -{ - PyObject *obj = NULL; - PyObject *arr; - - if (!PyArg_ParseTuple(args, "|O", &obj)) { - return NULL; - } - if (obj == NULL) { - PyArrayScalar_RETURN_FALSE; - } - if (obj == Py_False) { - PyArrayScalar_RETURN_FALSE; - } - if (obj == Py_True) { - PyArrayScalar_RETURN_TRUE; - } - arr = PyArray_FROM_OTF(obj, PyArray_BOOL, FORCECAST); - if (arr && 0 == PyArray_NDIM(arr)) { - Bool val = *((Bool *)PyArray_DATA(arr)); - Py_DECREF(arr); - PyArrayScalar_RETURN_BOOL_FROM_LONG(val); - } - return PyArray_Return((PyArrayObject *)arr); -} - -static PyObject * -bool_arrtype_and(PyObject *a, PyObject *b) -{ - if (PyArray_IsScalar(a, Bool) && PyArray_IsScalar(b, Bool)) { - PyArrayScalar_RETURN_BOOL_FROM_LONG - ((a == PyArrayScalar_True)&(b == PyArrayScalar_True)); - } - return PyGenericArrType_Type.tp_as_number->nb_and(a, b); -} - -static PyObject * -bool_arrtype_or(PyObject *a, PyObject *b) -{ - if 
(PyArray_IsScalar(a, Bool) && PyArray_IsScalar(b, Bool)) { - PyArrayScalar_RETURN_BOOL_FROM_LONG - ((a == PyArrayScalar_True)|(b == PyArrayScalar_True)); - } - return PyGenericArrType_Type.tp_as_number->nb_or(a, b); -} - -static PyObject * -bool_arrtype_xor(PyObject *a, PyObject *b) -{ - if (PyArray_IsScalar(a, Bool) && PyArray_IsScalar(b, Bool)) { - PyArrayScalar_RETURN_BOOL_FROM_LONG - ((a == PyArrayScalar_True)^(b == PyArrayScalar_True)); - } - return PyGenericArrType_Type.tp_as_number->nb_xor(a, b); -} - -static int -bool_arrtype_nonzero(PyObject *a) -{ - return a == PyArrayScalar_True; -} - -#if PY_VERSION_HEX >= 0x02050000 -/**begin repeat - * #name = byte, short, int, long, ubyte, ushort, longlong, uint, ulong, - * ulonglong# - * #Name = Byte, Short, Int, Long, UByte, UShort, LongLong, UInt, ULong, - * ULongLong# - * #type = PyInt_FromLong*6, PyLong_FromLongLong*1, PyLong_FromUnsignedLong*2, - * PyLong_FromUnsignedLongLong# - */ -static PyNumberMethods @name@_arrtype_as_number; -static PyObject * -@name@_index(PyObject *self) -{ - return @type@(PyArrayScalar_VAL(self, @Name@)); -} -/**end repeat**/ - -static PyObject * -bool_index(PyObject *a) -{ - return PyInt_FromLong(PyArrayScalar_VAL(a, Bool)); -} -#endif - -/* Arithmetic methods -- only so we can override &, |, ^. 
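The boolean `and`/`or`/`xor` overrides in this hunk all use the same trick: numpy Bool scalars are singletons, so truth reduces to pointer identity against the True singleton, and the 0/1 results combine with plain bitwise operators. A minimal sketch of the idea in standalone C (the singleton variables below are hypothetical stand-ins, not numpy's `PyArrayScalar_True`/`PyArrayScalar_False`):

```c
#include <assert.h>

/* Hypothetical stand-ins for the True/False singleton objects. */
static int true_singleton;
static int false_singleton;

/* Mirrors bool_arrtype_and above: (a == True) & (b == True). */
static int singleton_and(const int *a, const int *b)
{
    return (a == &true_singleton) & (b == &true_singleton);
}

static int singleton_or(const int *a, const int *b)
{
    return (a == &true_singleton) | (b == &true_singleton);
}

static int singleton_xor(const int *a, const int *b)
{
    return (a == &true_singleton) ^ (b == &true_singleton);
}
```

Because each identity comparison yields exactly 0 or 1, the bitwise operators behave like logical ones here, which is why the real code can fall through to `nb_and`/`nb_or`/`nb_xor` only for non-Bool operands.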
*/ -NPY_NO_EXPORT PyNumberMethods bool_arrtype_as_number = { - 0, /* nb_add */ - 0, /* nb_subtract */ - 0, /* nb_multiply */ -#if defined(NPY_PY3K) -#else - 0, /* nb_divide */ -#endif - 0, /* nb_remainder */ - 0, /* nb_divmod */ - 0, /* nb_power */ - 0, /* nb_negative */ - 0, /* nb_positive */ - 0, /* nb_absolute */ - (inquiry)bool_arrtype_nonzero, /* nb_nonzero / nb_bool */ - 0, /* nb_invert */ - 0, /* nb_lshift */ - 0, /* nb_rshift */ - (binaryfunc)bool_arrtype_and, /* nb_and */ - (binaryfunc)bool_arrtype_xor, /* nb_xor */ - (binaryfunc)bool_arrtype_or, /* nb_or */ -#if defined(NPY_PY3K) -#else - 0, /* nb_coerce */ -#endif - 0, /* nb_int */ -#if defined(NPY_PY3K) - 0, /* nb_reserved */ -#else - 0, /* nb_long */ -#endif - 0, /* nb_float */ -#if defined(NPY_PY3K) -#else - 0, /* nb_oct */ - 0, /* nb_hex */ -#endif - /* Added in release 2.0 */ - 0, /* nb_inplace_add */ - 0, /* nb_inplace_subtract */ - 0, /* nb_inplace_multiply */ -#if defined(NPY_PY3K) -#else - 0, /* nb_inplace_divide */ -#endif - 0, /* nb_inplace_remainder */ - 0, /* nb_inplace_power */ - 0, /* nb_inplace_lshift */ - 0, /* nb_inplace_rshift */ - 0, /* nb_inplace_and */ - 0, /* nb_inplace_xor */ - 0, /* nb_inplace_or */ - /* Added in release 2.2 */ - /* The following require the Py_TPFLAGS_HAVE_CLASS flag */ - 0, /* nb_floor_divide */ - 0, /* nb_true_divide */ - 0, /* nb_inplace_floor_divide */ - 0, /* nb_inplace_true_divide */ - /* Added in release 2.5 */ -#if PY_VERSION_HEX >= 0x02050000 - 0, /* nb_index */ -#endif -}; - -static PyObject * -void_arrtype_new(PyTypeObject *type, PyObject *args, PyObject *NPY_UNUSED(kwds)) -{ - PyObject *obj, *arr; - ulonglong memu = 1; - PyObject *new = NULL; - char *destptr; - - if (!PyArg_ParseTuple(args, "O", &obj)) { - return NULL; - } - /* - * For a VOID scalar first see if obj is an integer or long - * and create new memory of that size (filled with 0) for the scalar - */ - if (PyLong_Check(obj) || PyInt_Check(obj) || - PyArray_IsScalar(obj, Integer) || - 
(PyArray_Check(obj) && PyArray_NDIM(obj)==0 && - PyArray_ISINTEGER(obj))) { -#if defined(NPY_PY3K) - new = Py_TYPE(obj)->tp_as_number->nb_int(obj); -#else - new = Py_TYPE(obj)->tp_as_number->nb_long(obj); -#endif - } - if (new && PyLong_Check(new)) { - PyObject *ret; - memu = PyLong_AsUnsignedLongLong(new); - Py_DECREF(new); - if (PyErr_Occurred() || (memu > MAX_INT)) { - PyErr_Clear(); - PyErr_Format(PyExc_OverflowError, - "size must be smaller than %d", - (int) MAX_INT); - return NULL; - } - destptr = PyDataMem_NEW((int) memu); - if (destptr == NULL) { - return PyErr_NoMemory(); - } - ret = type->tp_alloc(type, 0); - if (ret == NULL) { - PyDataMem_FREE(destptr); - return PyErr_NoMemory(); - } - ((PyVoidScalarObject *)ret)->obval = destptr; - Py_SIZE((PyVoidScalarObject *)ret) = (int) memu; - ((PyVoidScalarObject *)ret)->descr = - PyArray_DescrNewFromType(PyArray_VOID); - ((PyVoidScalarObject *)ret)->descr->elsize = (int) memu; - ((PyVoidScalarObject *)ret)->flags = BEHAVED | OWNDATA; - ((PyVoidScalarObject *)ret)->base = NULL; - memset(destptr, '\0', (size_t) memu); - return ret; - } - - arr = PyArray_FROM_OTF(obj, PyArray_VOID, FORCECAST); - return PyArray_Return((PyArrayObject *)arr); -} - - -/**************** Define Hash functions ********************/ - -/**begin repeat - * #lname = bool,ubyte,ushort# - * #name = Bool,UByte, UShort# - */ -static long -@lname@_arrtype_hash(PyObject *obj) -{ - return (long)(((Py@name@ScalarObject *)obj)->obval); -} -/**end repeat**/ - -/**begin repeat - * #lname=byte,short,uint,ulong# - * #name=Byte,Short,UInt,ULong# - */ -static long -@lname@_arrtype_hash(PyObject *obj) -{ - long x = (long)(((Py@name@ScalarObject *)obj)->obval); - if (x == -1) { - x = -2; - } - return x; -} -/**end repeat**/ - -#if (SIZEOF_INT != SIZEOF_LONG) || defined(NPY_PY3K) -static long -int_arrtype_hash(PyObject *obj) -{ - long x = (long)(((PyIntScalarObject *)obj)->obval); - if (x == -1) { - x = -2; - } - return x; -} -#endif - -/**begin repeat - * 
#char = ,u# - * #Char = ,U# - * #ext = && (x >= LONG_MIN),# - */ -#if SIZEOF_LONG != SIZEOF_LONGLONG -/* we assume SIZEOF_LONGLONG=2*SIZEOF_LONG */ -static long -@char@longlong_arrtype_hash(PyObject *obj) -{ - long y; - @char@longlong x = (((Py@Char@LongLongScalarObject *)obj)->obval); - - if ((x <= LONG_MAX)@ext@) { - y = (long) x; - } - else { - union Mask { - long hashvals[2]; - @char@longlong v; - } both; - - both.v = x; - y = both.hashvals[0] + (1000003)*both.hashvals[1]; - } - if (y == -1) { - y = -2; - } - return y; -} -#else - -static long -@char@longlong_arrtype_hash(PyObject *obj) -{ - long x = (long)(((Py@Char@LongLongScalarObject *)obj)->obval); - if (x == -1) { - x = -2; - } - return x; -} - -#endif -/**end repeat**/ - - -/* Wrong thing to do for longdouble, but....*/ - -/**begin repeat - * #lname = float, longdouble# - * #name = Float, LongDouble# - */ -static long -@lname@_arrtype_hash(PyObject *obj) -{ - return _Py_HashDouble((double) ((Py@name@ScalarObject *)obj)->obval); -} - -/* borrowed from complex_hash */ -static long -c@lname@_arrtype_hash(PyObject *obj) -{ - long hashreal, hashimag, combined; - hashreal = _Py_HashDouble((double) - (((PyC@name@ScalarObject *)obj)->obval).real); - - if (hashreal == -1) { - return -1; - } - hashimag = _Py_HashDouble((double) - (((PyC@name@ScalarObject *)obj)->obval).imag); - if (hashimag == -1) { - return -1; - } - combined = hashreal + 1000003 * hashimag; - if (combined == -1) { - combined = -2; - } - return combined; -} -/**end repeat**/ - -static long -object_arrtype_hash(PyObject *obj) -{ - return PyObject_Hash(((PyObjectScalarObject *)obj)->obval); -} - -/* just hash the pointer */ -static long -void_arrtype_hash(PyObject *obj) -{ - return _Py_HashPointer((void *)(((PyVoidScalarObject *)obj)->obval)); -} - -/*object arrtype getattro and setattro */ -static PyObject * -object_arrtype_getattro(PyObjectScalarObject *obj, PyObject *attr) { - PyObject *res; - - /* first look in object and then hand off to 
generic type */ - - res = PyObject_GenericGetAttr(obj->obval, attr); - if (res) { - return res; - } - PyErr_Clear(); - return PyObject_GenericGetAttr((PyObject *)obj, attr); -} - -static int -object_arrtype_setattro(PyObjectScalarObject *obj, PyObject *attr, PyObject *val) { - int res; - /* first look in object and then hand off to generic type */ - - res = PyObject_GenericSetAttr(obj->obval, attr, val); - if (res >= 0) { - return res; - } - PyErr_Clear(); - return PyObject_GenericSetAttr((PyObject *)obj, attr, val); -} - -static PyObject * -object_arrtype_concat(PyObjectScalarObject *self, PyObject *other) -{ - return PySequence_Concat(self->obval, other); -} - -static Py_ssize_t -object_arrtype_length(PyObjectScalarObject *self) -{ - return PyObject_Length(self->obval); -} - -static PyObject * -object_arrtype_repeat(PyObjectScalarObject *self, Py_ssize_t count) -{ - return PySequence_Repeat(self->obval, count); -} - -static PyObject * -object_arrtype_subscript(PyObjectScalarObject *self, PyObject *key) -{ - return PyObject_GetItem(self->obval, key); -} - -static int -object_arrtype_ass_subscript(PyObjectScalarObject *self, PyObject *key, - PyObject *value) -{ - return PyObject_SetItem(self->obval, key, value); -} - -static int -object_arrtype_contains(PyObjectScalarObject *self, PyObject *ob) -{ - return PySequence_Contains(self->obval, ob); -} - -static PyObject * -object_arrtype_inplace_concat(PyObjectScalarObject *self, PyObject *o) -{ - return PySequence_InPlaceConcat(self->obval, o); -} - -static PyObject * -object_arrtype_inplace_repeat(PyObjectScalarObject *self, Py_ssize_t count) -{ - return PySequence_InPlaceRepeat(self->obval, count); -} - -static PySequenceMethods object_arrtype_as_sequence = { -#if PY_VERSION_HEX >= 0x02050000 - (lenfunc)object_arrtype_length, /*sq_length*/ - (binaryfunc)object_arrtype_concat, /*sq_concat*/ - (ssizeargfunc)object_arrtype_repeat, /*sq_repeat*/ - 0, /*sq_item*/ - 0, /*sq_slice*/ - 0, /* sq_ass_item */ - 0, /* 
sq_ass_slice */ - (objobjproc)object_arrtype_contains, /* sq_contains */ - (binaryfunc)object_arrtype_inplace_concat, /* sq_inplace_concat */ - (ssizeargfunc)object_arrtype_inplace_repeat, /* sq_inplace_repeat */ -#else - (inquiry)object_arrtype_length, /*sq_length*/ - (binaryfunc)object_arrtype_concat, /*sq_concat*/ - (intargfunc)object_arrtype_repeat, /*sq_repeat*/ - 0, /*sq_item*/ - 0, /*sq_slice*/ - 0, /* sq_ass_item */ - 0, /* sq_ass_slice */ - (objobjproc)object_arrtype_contains, /* sq_contains */ - (binaryfunc)object_arrtype_inplace_concat, /* sq_inplace_concat */ - (intargfunc)object_arrtype_inplace_repeat, /* sq_inplace_repeat */ -#endif -}; - -static PyMappingMethods object_arrtype_as_mapping = { -#if PY_VERSION_HEX >= 0x02050000 - (lenfunc)object_arrtype_length, - (binaryfunc)object_arrtype_subscript, - (objobjargproc)object_arrtype_ass_subscript, -#else - (inquiry)object_arrtype_length, - (binaryfunc)object_arrtype_subscript, - (objobjargproc)object_arrtype_ass_subscript, -#endif -}; - -#if !defined(NPY_PY3K) -static Py_ssize_t -object_arrtype_getsegcount(PyObjectScalarObject *self, Py_ssize_t *lenp) -{ - Py_ssize_t newlen; - int cnt; - PyBufferProcs *pb = Py_TYPE(self->obval)->tp_as_buffer; - - if (pb == NULL || - pb->bf_getsegcount == NULL || - (cnt = (*pb->bf_getsegcount)(self->obval, &newlen)) != 1) { - return 0; - } - if (lenp) { - *lenp = newlen; - } - return cnt; -} - -static Py_ssize_t -object_arrtype_getreadbuf(PyObjectScalarObject *self, Py_ssize_t segment, void **ptrptr) -{ - PyBufferProcs *pb = Py_TYPE(self->obval)->tp_as_buffer; - - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a readable buffer object"); - return -1; - } - return (*pb->bf_getreadbuffer)(self->obval, segment, ptrptr); -} - -static Py_ssize_t -object_arrtype_getwritebuf(PyObjectScalarObject *self, Py_ssize_t segment, void **ptrptr) -{ - PyBufferProcs *pb = Py_TYPE(self->obval)->tp_as_buffer; 
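The `object_arrtype_*` slots in this hunk all follow one delegation pattern: the scalar keeps the wrapped Python object in `obval` and implements each protocol call by invoking the wrapped object's own implementation. A hedged sketch of the forwarding idea, using hypothetical stand-in structs rather than the real CPython types:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical miniature of an object with its own "length" slot. */
typedef struct {
    size_t (*length)(const void *payload);
    const void *payload;
} wrapped_obj;

/* Plays the role of PyObjectScalarObject: it only holds obval. */
typedef struct {
    const wrapped_obj *obval;
} object_scalar;

static size_t string_length(const void *payload)
{
    return strlen((const char *)payload);
}

/* Forwarding slot, like object_arrtype_length calling PyObject_Length
 * on self->obval instead of answering for itself. */
static size_t scalar_length(const object_scalar *s)
{
    return s->obval->length(s->obval->payload);
}
```

The real slots add one refinement this sketch omits: on failure they clear the error and retry against the scalar itself (e.g. `PyObject_GenericGetAttr` on `obj` after `obj->obval`).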
- - if (pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a writeable buffer object"); - return -1; - } - return (*pb->bf_getwritebuffer)(self->obval, segment, ptrptr); -} - -static Py_ssize_t -object_arrtype_getcharbuf(PyObjectScalarObject *self, Py_ssize_t segment, - constchar **ptrptr) -{ - PyBufferProcs *pb = Py_TYPE(self->obval)->tp_as_buffer; - - if (pb == NULL || - pb->bf_getcharbuffer == NULL || - pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a character buffer object"); - return -1; - } - return (*pb->bf_getcharbuffer)(self->obval, segment, ptrptr); -} -#endif - -#if PY_VERSION_HEX >= 0x02060000 -static int -object_arrtype_getbuffer(PyObjectScalarObject *self, Py_buffer *view, int flags) -{ - PyBufferProcs *pb = Py_TYPE(self->obval)->tp_as_buffer; - if (pb == NULL || pb->bf_getbuffer == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a readable buffer object"); - return -1; - } - return (*pb->bf_getbuffer)(self->obval, view, flags); -} - -static void -object_arrtype_releasebuffer(PyObjectScalarObject *self, Py_buffer *view) -{ - PyBufferProcs *pb = Py_TYPE(self->obval)->tp_as_buffer; - if (pb == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a readable buffer object"); - return; - } - if (pb->bf_releasebuffer != NULL) { - (*pb->bf_releasebuffer)(self->obval, view); - } -} -#endif - -static PyBufferProcs object_arrtype_as_buffer = { -#if !defined(NPY_PY3K) -#if PY_VERSION_HEX >= 0x02050000 - (readbufferproc)object_arrtype_getreadbuf, - (writebufferproc)object_arrtype_getwritebuf, - (segcountproc)object_arrtype_getsegcount, - (charbufferproc)object_arrtype_getcharbuf, -#else - (getreadbufferproc)object_arrtype_getreadbuf, - (getwritebufferproc)object_arrtype_getwritebuf, - (getsegcountproc)object_arrtype_getsegcount, - (getcharbufferproc)object_arrtype_getcharbuf, -#endif -#endif -#if PY_VERSION_HEX >= 0x02060000 - 
(getbufferproc)object_arrtype_getbuffer, - (releasebufferproc)object_arrtype_releasebuffer, -#endif -}; - -static PyObject * -object_arrtype_call(PyObjectScalarObject *obj, PyObject *args, PyObject *kwds) -{ - return PyObject_Call(obj->obval, args, kwds); -} - -NPY_NO_EXPORT PyTypeObject PyObjectArrType_Type = { -#if defined(NPY_PY3K) - PyVarObject_HEAD_INIT(NULL, 0) -#else - PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ -#endif - "numpy.object_", /* tp_name*/ - sizeof(PyObjectScalarObject), /* tp_basicsize*/ - 0, /* tp_itemsize */ - (destructor)object_arrtype_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ -#if defined(NPY_PY3K) - 0, /* tp_reserved */ -#else - 0, /* tp_compare */ -#endif - 0, /* tp_repr */ - 0, /* tp_as_number */ - &object_arrtype_as_sequence, /* tp_as_sequence */ - &object_arrtype_as_mapping, /* tp_as_mapping */ - 0, /* tp_hash */ - (ternaryfunc)object_arrtype_call, /* tp_call */ - 0, /* tp_str */ - (getattrofunc)object_arrtype_getattro, /* tp_getattro */ - (setattrofunc)object_arrtype_setattro, /* tp_setattro */ - &object_arrtype_as_buffer, /* tp_as_buffer */ - 0, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* tp_version_tag */ -#endif -}; - -static PyObject * -gen_arrtype_subscript(PyObject *self, PyObject *key) -{ - /* - * Only [...], [...,], [, ...], - * is allowed for indexing a scalar - * - * These return a new N-d array with a copy of - * the data 
where N is the number of None's in . - */ - PyObject *res, *ret; - int N; - - if (key == Py_Ellipsis || key == Py_None || - PyTuple_Check(key)) { - res = PyArray_FromScalar(self, NULL); - } - else { - PyErr_SetString(PyExc_IndexError, - "invalid index to scalar variable."); - return NULL; - } - if (key == Py_Ellipsis) { - return res; - } - if (key == Py_None) { - ret = add_new_axes_0d((PyArrayObject *)res, 1); - Py_DECREF(res); - return ret; - } - /* Must be a Tuple */ - N = count_new_axes_0d(key); - if (N < 0) { - return NULL; - } - ret = add_new_axes_0d((PyArrayObject *)res, N); - Py_DECREF(res); - return ret; -} - - -#define NAME_bool "bool" -#define NAME_void "void" -#if defined(NPY_PY3K) -#define NAME_string "bytes" -#define NAME_unicode "str" -#else -#define NAME_string "string" -#define NAME_unicode "unicode" -#endif - -/**begin repeat - * #name = bool, string, unicode, void# - * #NAME = Bool, String, Unicode, Void# - * #ex = _,_,_,# - */ -NPY_NO_EXPORT PyTypeObject Py@NAME@ArrType_Type = { -#if defined(NPY_PY3K) - PyVarObject_HEAD_INIT(NULL, 0) -#else - PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ -#endif - "numpy." 
NAME_@name@ "@ex@", /* tp_name*/ - sizeof(Py@NAME@ScalarObject), /* tp_basicsize*/ - 0, /* tp_itemsize */ - 0, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ -#if defined(NPY_PY3K) - 0, /* tp_reserved */ -#else - 0, /* tp_compare */ -#endif - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - 0, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* tp_version_tag */ -#endif -}; -/**end repeat**/ - -#undef NAME_bool -#undef NAME_void -#undef NAME_string -#undef NAME_unicode - -/**begin repeat - * #NAME = Byte, Short, Int, Long, LongLong, UByte, UShort, UInt, ULong, - * ULongLong, Float, Double, LongDouble# - * #name = int*5, uint*5, float*3# - * #CNAME = (CHAR, SHORT, INT, LONG, LONGLONG)*2, FLOAT, DOUBLE, LONGDOUBLE# - */ -#if BITSOF_@CNAME@ == 8 -#define _THIS_SIZE "8" -#elif BITSOF_@CNAME@ == 16 -#define _THIS_SIZE "16" -#elif BITSOF_@CNAME@ == 32 -#define _THIS_SIZE "32" -#elif BITSOF_@CNAME@ == 64 -#define _THIS_SIZE "64" -#elif BITSOF_@CNAME@ == 80 -#define _THIS_SIZE "80" -#elif BITSOF_@CNAME@ == 96 -#define _THIS_SIZE "96" -#elif BITSOF_@CNAME@ == 128 -#define _THIS_SIZE "128" -#elif BITSOF_@CNAME@ == 256 -#define _THIS_SIZE "256" -#endif -NPY_NO_EXPORT PyTypeObject Py@NAME@ArrType_Type = { -#if defined(NPY_PY3K) 
- PyVarObject_HEAD_INIT(NULL, 0) -#else - PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ -#endif - "numpy.@name@" _THIS_SIZE, /* tp_name*/ - sizeof(Py@NAME@ScalarObject), /* tp_basicsize*/ - 0, /* tp_itemsize */ - 0, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ -#if defined(NPY_PY3K) - 0, /* tp_reserved */ -#else - 0, /* tp_compare */ -#endif - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - 0, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* tp_version_tag */ -#endif -}; - -#undef _THIS_SIZE -/**end repeat**/ - - -static PyMappingMethods gentype_as_mapping = { - NULL, - (binaryfunc)gen_arrtype_subscript, - NULL -}; - - -/**begin repeat - * #NAME = CFloat, CDouble, CLongDouble# - * #name = complex*3# - * #CNAME = FLOAT, DOUBLE, LONGDOUBLE# - */ -#if BITSOF_@CNAME@ == 16 -#define _THIS_SIZE2 "16" -#define _THIS_SIZE1 "32" -#elif BITSOF_@CNAME@ == 32 -#define _THIS_SIZE2 "32" -#define _THIS_SIZE1 "64" -#elif BITSOF_@CNAME@ == 64 -#define _THIS_SIZE2 "64" -#define _THIS_SIZE1 "128" -#elif BITSOF_@CNAME@ == 80 -#define _THIS_SIZE2 "80" -#define _THIS_SIZE1 "160" -#elif BITSOF_@CNAME@ == 96 -#define _THIS_SIZE2 "96" -#define _THIS_SIZE1 "192" -#elif BITSOF_@CNAME@ == 128 -#define _THIS_SIZE2 "128" -#define _THIS_SIZE1 
"256" -#elif BITSOF_@CNAME@ == 256 -#define _THIS_SIZE2 "256" -#define _THIS_SIZE1 "512" -#endif - -#define _THIS_DOC "Composed of two " _THIS_SIZE2 " bit floats" - -NPY_NO_EXPORT PyTypeObject Py@NAME@ArrType_Type = { -#if defined(NPY_PY3K) - PyVarObject_HEAD_INIT(0, 0) -#else - PyObject_HEAD_INIT(0) - 0, /* ob_size */ -#endif - "numpy.@name@" _THIS_SIZE1, /* tp_name*/ - sizeof(Py@NAME@ScalarObject), /* tp_basicsize*/ - 0, /* tp_itemsize*/ - 0, /* tp_dealloc*/ - 0, /* tp_print*/ - 0, /* tp_getattr*/ - 0, /* tp_setattr*/ -#if defined(NPY_PY3K) - 0, /* tp_reserved */ -#else - 0, /* tp_compare */ -#endif - 0, /* tp_repr*/ - 0, /* tp_as_number*/ - 0, /* tp_as_sequence*/ - 0, /* tp_as_mapping*/ - 0, /* tp_hash */ - 0, /* tp_call*/ - 0, /* tp_str*/ - 0, /* tp_getattro*/ - 0, /* tp_setattro*/ - 0, /* tp_as_buffer*/ - Py_TPFLAGS_DEFAULT, /* tp_flags*/ - _THIS_DOC, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* tp_version_tag */ -#endif -}; -#undef _THIS_SIZE1 -#undef _THIS_SIZE2 -#undef _THIS_DOC - -/**end repeat**/ - - -static PyNumberMethods longdoubletype_as_number; -static PyNumberMethods clongdoubletype_as_number; - - -NPY_NO_EXPORT void -initialize_numeric_types(void) -{ - PyGenericArrType_Type.tp_dealloc = (destructor)gentype_dealloc; - PyGenericArrType_Type.tp_as_number = &gentype_as_number; - PyGenericArrType_Type.tp_as_buffer = &gentype_as_buffer; - PyGenericArrType_Type.tp_as_mapping = &gentype_as_mapping; - 
PyGenericArrType_Type.tp_flags = BASEFLAGS; - PyGenericArrType_Type.tp_methods = gentype_methods; - PyGenericArrType_Type.tp_getset = gentype_getsets; - PyGenericArrType_Type.tp_new = NULL; - PyGenericArrType_Type.tp_alloc = gentype_alloc; - PyGenericArrType_Type.tp_free = _pya_free; - PyGenericArrType_Type.tp_repr = gentype_repr; - PyGenericArrType_Type.tp_str = gentype_str; - PyGenericArrType_Type.tp_richcompare = gentype_richcompare; - - PyBoolArrType_Type.tp_as_number = &bool_arrtype_as_number; -#if PY_VERSION_HEX >= 0x02050000 - /* - * need to add dummy versions with filled-in nb_index - * in-order for PyType_Ready to fill in .__index__() method - */ - /**begin repeat - * #name = byte, short, int, long, longlong, ubyte, ushort, - * uint, ulong, ulonglong# - * #NAME = Byte, Short, Int, Long, LongLong, UByte, UShort, - * UInt, ULong, ULongLong# - */ - Py@NAME@ArrType_Type.tp_as_number = &@name@_arrtype_as_number; - Py@NAME@ArrType_Type.tp_as_number->nb_index = (unaryfunc)@name@_index; - - /**end repeat**/ - PyBoolArrType_Type.tp_as_number->nb_index = (unaryfunc)bool_index; -#endif - - PyStringArrType_Type.tp_alloc = NULL; - PyStringArrType_Type.tp_free = NULL; - - PyStringArrType_Type.tp_repr = stringtype_repr; - PyStringArrType_Type.tp_str = stringtype_str; - - PyUnicodeArrType_Type.tp_repr = unicodetype_repr; - PyUnicodeArrType_Type.tp_str = unicodetype_str; - - PyVoidArrType_Type.tp_methods = voidtype_methods; - PyVoidArrType_Type.tp_getset = voidtype_getsets; - PyVoidArrType_Type.tp_as_mapping = &voidtype_as_mapping; - PyVoidArrType_Type.tp_as_sequence = &voidtype_as_sequence; - - /**begin repeat - * #NAME= Number, Integer, SignedInteger, UnsignedInteger, Inexact, - * Floating, ComplexFloating, Flexible, Character, TimeInteger# - */ - Py@NAME@ArrType_Type.tp_flags = BASEFLAGS; - /**end repeat**/ - - /**begin repeat - * #name = bool, byte, short, int, long, longlong, ubyte, ushort, uint, - * ulong, ulonglong, float, double, longdouble, cfloat, cdouble, - * 
clongdouble, string, unicode, void, object# - * #NAME = Bool, Byte, Short, Int, Long, LongLong, UByte, UShort, UInt, - * ULong, ULongLong, Float, Double, LongDouble, CFloat, CDouble, - * CLongDouble, String, Unicode, Void, Object# - */ - Py@NAME@ArrType_Type.tp_flags = BASEFLAGS; - Py@NAME@ArrType_Type.tp_new = @name@_arrtype_new; - Py@NAME@ArrType_Type.tp_richcompare = gentype_richcompare; - /**end repeat**/ - - /**begin repeat - * #name = bool, byte, short, ubyte, ushort, uint, ulong, ulonglong, - * float, longdouble, cfloat, clongdouble, void, object# - * #NAME = Bool, Byte, Short, UByte, UShort, UInt, ULong, ULongLong, - * Float, LongDouble, CFloat, CLongDouble, Void, Object# - */ - Py@NAME@ArrType_Type.tp_hash = @name@_arrtype_hash; - /**end repeat**/ - -#if (SIZEOF_INT != SIZEOF_LONG) || defined(NPY_PY3K) - /* We won't be inheriting from Python Int type. */ - PyIntArrType_Type.tp_hash = int_arrtype_hash; -#endif - -#if defined(NPY_PY3K) - /* We won't be inheriting from Python Int type. */ - PyLongArrType_Type.tp_hash = int_arrtype_hash; -#endif - -#if (SIZEOF_LONG != SIZEOF_LONGLONG) || defined(NPY_PY3K) - /* We won't be inheriting from Python Int type. 
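The `tp_hash` slots assigned in this block follow two conventions visible in the `*_arrtype_hash` definitions earlier in the file: a computed hash of -1 is remapped to -2, because CPython reserves -1 as the error return from `tp_hash`, and a value wider than `long` is folded from its two halves with the multiplier 1000003 (the same scheme the `longlong` union trick and the complex hash use). An illustrative standalone sketch:

```c
#include <assert.h>

/* CPython reserves -1 as tp_hash's error return, so a legitimate hash
 * of -1 must be nudged to -2, as every *_arrtype_hash above does. */
static long avoid_error_sentinel(long x)
{
    return (x == -1) ? -2 : x;
}

/* Fold two long-sized halves of a wider value into one long, using the
 * 1000003 multiplier seen in longlong_arrtype_hash and c*_arrtype_hash. */
static long fold_two_halves(long lo, long hi)
{
    return avoid_error_sentinel(lo + 1000003 * hi);
}
```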
*/ - PyLongLongArrType_Type.tp_hash = longlong_arrtype_hash; -#endif - - /**begin repeat - * #name = repr, str# - */ - PyFloatArrType_Type.tp_@name@ = floattype_@name@; - PyCFloatArrType_Type.tp_@name@ = cfloattype_@name@; - - PyDoubleArrType_Type.tp_@name@ = doubletype_@name@; - PyCDoubleArrType_Type.tp_@name@ = cdoubletype_@name@; - /**end repeat**/ - - PyFloatArrType_Type.tp_print = floattype_print; - PyDoubleArrType_Type.tp_print = doubletype_print; - PyLongDoubleArrType_Type.tp_print = longdoubletype_print; - - PyCFloatArrType_Type.tp_print = cfloattype_print; - PyCDoubleArrType_Type.tp_print = cdoubletype_print; - PyCLongDoubleArrType_Type.tp_print = clongdoubletype_print; - - /* - * These need to be coded specially because getitem does not - * return a normal Python type - */ - PyLongDoubleArrType_Type.tp_as_number = &longdoubletype_as_number; - PyCLongDoubleArrType_Type.tp_as_number = &clongdoubletype_as_number; - - /**begin repeat - * #name = int, float, repr, str# - * #kind = tp_as_number->nb*2, tp*2# - */ - PyLongDoubleArrType_Type.@kind@_@name@ = longdoubletype_@name@; - PyCLongDoubleArrType_Type.@kind@_@name@ = clongdoubletype_@name@; - /**end repeat**/ - -#if !defined(NPY_PY3K) - /**begin repeat - * #name = long, hex, oct# - * #kind = tp_as_number->nb*3# - */ - PyLongDoubleArrType_Type.@kind@_@name@ = longdoubletype_@name@; - PyCLongDoubleArrType_Type.@kind@_@name@ = clongdoubletype_@name@; - /**end repeat**/ - -#endif - - PyStringArrType_Type.tp_itemsize = sizeof(char); - PyVoidArrType_Type.tp_dealloc = (destructor) void_dealloc; - - PyArrayIter_Type.tp_iter = PyObject_SelfIter; - PyArrayMapIter_Type.tp_iter = PyObject_SelfIter; -} - - -/* the order of this table is important */ -static PyTypeObject *typeobjects[] = { - &PyBoolArrType_Type, - &PyByteArrType_Type, - &PyUByteArrType_Type, - &PyShortArrType_Type, - &PyUShortArrType_Type, - &PyIntArrType_Type, - &PyUIntArrType_Type, - &PyLongArrType_Type, - &PyULongArrType_Type, - 
&PyLongLongArrType_Type, - &PyULongLongArrType_Type, - &PyFloatArrType_Type, - &PyDoubleArrType_Type, - &PyLongDoubleArrType_Type, - &PyCFloatArrType_Type, - &PyCDoubleArrType_Type, - &PyCLongDoubleArrType_Type, - &PyObjectArrType_Type, - &PyStringArrType_Type, - &PyUnicodeArrType_Type, - &PyVoidArrType_Type -}; - -NPY_NO_EXPORT int -_typenum_fromtypeobj(PyObject *type, int user) -{ - int typenum, i; - - typenum = PyArray_NOTYPE; - i = 0; - while(i < PyArray_NTYPES) { - if (type == (PyObject *)typeobjects[i]) { - typenum = i; - break; - } - i++; - } - - if (!user) { - return typenum; - } - /* Search any registered types */ - i = 0; - while (i < PyArray_NUMUSERTYPES) { - if (type == (PyObject *)(userdescrs[i]->typeobj)) { - typenum = i + PyArray_USERDEF; - break; - } - i++; - } - return typenum; -} diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/scalartypes.h b/pythonPackages/numpy/numpy/core/src/multiarray/scalartypes.h deleted file mode 100755 index 893c0051d0..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/scalartypes.h +++ /dev/null @@ -1,24 +0,0 @@ -#ifndef _NPY_SCALARTYPES_H_ -#define _NPY_SCALARTYPES_H_ - -NPY_NO_EXPORT void -initialize_numeric_types(void); - -NPY_NO_EXPORT void -format_longdouble(char *buf, size_t buflen, longdouble val, unsigned int prec); - -#if PY_VERSION_HEX >= 0x03000000 -NPY_NO_EXPORT void -gentype_struct_free(PyObject *ptr); -#else -NPY_NO_EXPORT void -gentype_struct_free(void *ptr, void *arg); -#endif - -NPY_NO_EXPORT int -_typenum_fromtypeobj(PyObject *type, int user); - -NPY_NO_EXPORT void * -scalar_value(PyObject *scalar, PyArray_Descr *descr); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/sequence.c b/pythonPackages/numpy/numpy/core/src/multiarray/sequence.c deleted file mode 100755 index e3fff56c6e..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/sequence.c +++ /dev/null @@ -1,185 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include -#include "structmember.h" - -#define 
_MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "common.h" -#include "mapping.h" - -#include "sequence.h" - -static int -array_any_nonzero(PyArrayObject *mp); - -/************************************************************************* - **************** Implement Sequence Protocol ************************** - *************************************************************************/ - -/* Some of this is repeated in the array_as_mapping protocol. But - we fill it in here so that PySequence_XXXX calls work as expected -*/ - - -static PyObject * -array_slice(PyArrayObject *self, Py_ssize_t ilow, - Py_ssize_t ihigh) -{ - PyArrayObject *r; - Py_ssize_t l; - char *data; - - if (self->nd == 0) { - PyErr_SetString(PyExc_ValueError, "cannot slice a 0-d array"); - return NULL; - } - - l=self->dimensions[0]; - if (ilow < 0) { - ilow = 0; - } - else if (ilow > l) { - ilow = l; - } - if (ihigh < ilow) { - ihigh = ilow; - } - else if (ihigh > l) { - ihigh = l; - } - - if (ihigh != ilow) { - data = index2ptr(self, ilow); - if (data == NULL) { - return NULL; - } - } - else { - data = self->data; - } - - self->dimensions[0] = ihigh-ilow; - Py_INCREF(self->descr); - r = (PyArrayObject *) \ - PyArray_NewFromDescr(Py_TYPE(self), self->descr, - self->nd, self->dimensions, - self->strides, data, - self->flags, (PyObject *)self); - self->dimensions[0] = l; - if (r == NULL) { - return NULL; - } - r->base = (PyObject *)self; - Py_INCREF(self); - PyArray_UpdateFlags(r, UPDATE_ALL); - return (PyObject *)r; -} - - -static int -array_ass_slice(PyArrayObject *self, Py_ssize_t ilow, - Py_ssize_t ihigh, PyObject *v) { - int ret; - PyArrayObject *tmp; - - if (v == NULL) { - PyErr_SetString(PyExc_ValueError, - "cannot delete array elements"); - return -1; - } - if (!PyArray_ISWRITEABLE(self)) { - PyErr_SetString(PyExc_RuntimeError, - "array is not writeable"); - 
return -1; - } - if ((tmp = (PyArrayObject *)array_slice(self, ilow, ihigh)) == NULL) { - return -1; - } - ret = PyArray_CopyObject(tmp, v); - Py_DECREF(tmp); - - return ret; -} - -static int -array_contains(PyArrayObject *self, PyObject *el) -{ - /* equivalent to (self == el).any() */ - - PyObject *res; - int ret; - - res = PyArray_EnsureAnyArray(PyObject_RichCompare((PyObject *)self, - el, Py_EQ)); - if (res == NULL) { - return -1; - } - ret = array_any_nonzero((PyArrayObject *)res); - Py_DECREF(res); - return ret; -} - -NPY_NO_EXPORT PySequenceMethods array_as_sequence = { -#if PY_VERSION_HEX >= 0x02050000 - (lenfunc)array_length, /*sq_length*/ - (binaryfunc)NULL, /*sq_concat is handled by nb_add*/ - (ssizeargfunc)NULL, - (ssizeargfunc)array_item_nice, - (ssizessizeargfunc)array_slice, - (ssizeobjargproc)array_ass_item, /*sq_ass_item*/ - (ssizessizeobjargproc)array_ass_slice, /*sq_ass_slice*/ - (objobjproc) array_contains, /*sq_contains */ - (binaryfunc) NULL, /*sg_inplace_concat */ - (ssizeargfunc)NULL, -#else - (inquiry)array_length, /*sq_length*/ - (binaryfunc)NULL, /*sq_concat is handled by nb_add*/ - (intargfunc)NULL, /*sq_repeat is handled nb_multiply*/ - (intargfunc)array_item_nice, /*sq_item*/ - (intintargfunc)array_slice, /*sq_slice*/ - (intobjargproc)array_ass_item, /*sq_ass_item*/ - (intintobjargproc)array_ass_slice, /*sq_ass_slice*/ - (objobjproc) array_contains, /*sq_contains */ - (binaryfunc) NULL, /*sg_inplace_concat */ - (intargfunc) NULL /*sg_inplace_repeat */ -#endif -}; - - -/****************** End of Sequence Protocol ****************************/ - -/* - * Helpers - */ - -/* Array evaluates as "TRUE" if any of the elements are non-zero*/ -static int -array_any_nonzero(PyArrayObject *mp) -{ - intp index; - PyArrayIterObject *it; - Bool anyTRUE = FALSE; - - it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)mp); - if (it == NULL) { - return anyTRUE; - } - index = it->size; - while(index--) { - if (mp->descr->f->nonzero(it->dataptr, mp)) { - 
anyTRUE = TRUE; - break; - } - PyArray_ITER_NEXT(it); - } - Py_DECREF(it); - return anyTRUE; -} - diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/sequence.h b/pythonPackages/numpy/numpy/core/src/multiarray/sequence.h deleted file mode 100755 index 321c0200fc..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/sequence.h +++ /dev/null @@ -1,10 +0,0 @@ -#ifndef _NPY_ARRAY_SEQUENCE_H_ -#define _NPY_ARRAY_SEQUENCE_H_ - -#ifdef NPY_ENABLE_SEPARATE_COMPILATION -extern NPY_NO_EXPORT PySequenceMethods array_as_sequence; -#else -NPY_NO_EXPORT PySequenceMethods array_as_sequence; -#endif - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/shape.c b/pythonPackages/numpy/numpy/core/src/multiarray/shape.c deleted file mode 100755 index 671dc1538a..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/shape.c +++ /dev/null @@ -1,819 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include <Python.h> -#include "structmember.h" - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "numpy/npy_math.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -#include "ctors.h" - -#include "shape.h" - -#define PyAO PyArrayObject - -static int -_check_ones(PyArrayObject *self, int newnd, intp* newdims, intp *strides); - -static int -_fix_unknown_dimension(PyArray_Dims *newshape, intp s_original); - -static int -_attempt_nocopy_reshape(PyArrayObject *self, int newnd, intp* newdims, - intp *newstrides, int fortran); - -static void -_putzero(char *optr, PyObject *zero, PyArray_Descr *dtype); - -/*NUMPY_API - * Resize (reallocate data). Only works if nothing else is referencing this - * array and it is contiguous. If refcheck is 0, then the reference count is - * not checked and assumed to be 1. You still must own this data and have no - * weak-references and no base object.
- */ -NPY_NO_EXPORT PyObject * -PyArray_Resize(PyArrayObject *self, PyArray_Dims *newshape, int refcheck, - NPY_ORDER fortran) -{ - intp oldsize, newsize; - int new_nd=newshape->len, k, n, elsize; - int refcnt; - intp* new_dimensions=newshape->ptr; - intp new_strides[MAX_DIMS]; - size_t sd; - intp *dimptr; - char *new_data; - intp largest; - - if (!PyArray_ISONESEGMENT(self)) { - PyErr_SetString(PyExc_ValueError, - "resize only works on single-segment arrays"); - return NULL; - } - - if (self->descr->elsize == 0) { - PyErr_SetString(PyExc_ValueError, - "Bad data-type size."); - return NULL; - } - newsize = 1; - largest = MAX_INTP / self->descr->elsize; - for(k = 0; k < new_nd; k++) { - if (new_dimensions[k] == 0) { - break; - } - if (new_dimensions[k] < 0) { - PyErr_SetString(PyExc_ValueError, - "negative dimensions not allowed"); - return NULL; - } - newsize *= new_dimensions[k]; - if (newsize <= 0 || newsize > largest) { - return PyErr_NoMemory(); - } - } - oldsize = PyArray_SIZE(self); - - if (oldsize != newsize) { - if (!(self->flags & OWNDATA)) { - PyErr_SetString(PyExc_ValueError, - "cannot resize this array: it does not own its data"); - return NULL; - } - - if (refcheck) { - refcnt = REFCOUNT(self); - } - else { - refcnt = 1; - } - if ((refcnt > 2) - || (self->base != NULL) - || (self->weakreflist != NULL)) { - PyErr_SetString(PyExc_ValueError, - "cannot resize an array references or is referenced\n"\ - "by another array in this way. 
Use the resize function"); - return NULL; - } - - if (newsize == 0) { - sd = self->descr->elsize; - } - else { - sd = newsize*self->descr->elsize; - } - /* Reallocate space if needed */ - new_data = PyDataMem_RENEW(self->data, sd); - if (new_data == NULL) { - PyErr_SetString(PyExc_MemoryError, - "cannot allocate memory for array"); - return NULL; - } - self->data = new_data; - } - - if ((newsize > oldsize) && PyArray_ISWRITEABLE(self)) { - /* Fill new memory with zeros */ - elsize = self->descr->elsize; - if (PyDataType_FLAGCHK(self->descr, NPY_ITEM_REFCOUNT)) { - PyObject *zero = PyInt_FromLong(0); - char *optr; - optr = self->data + oldsize*elsize; - n = newsize - oldsize; - for (k = 0; k < n; k++) { - _putzero((char *)optr, zero, self->descr); - optr += elsize; - } - Py_DECREF(zero); - } - else{ - memset(self->data+oldsize*elsize, 0, (newsize-oldsize)*elsize); - } - } - - if (self->nd != new_nd) { - /* Different number of dimensions. */ - self->nd = new_nd; - /* Need new dimensions and strides arrays */ - dimptr = PyDimMem_RENEW(self->dimensions, 2*new_nd); - if (dimptr == NULL) { - PyErr_SetString(PyExc_MemoryError, - "cannot allocate memory for array"); - return NULL; - } - self->dimensions = dimptr; - self->strides = dimptr + new_nd; - } - - /* make new_strides variable */ - sd = (size_t) self->descr->elsize; - sd = (size_t) _array_fill_strides(new_strides, new_dimensions, new_nd, sd, - self->flags, &(self->flags)); - memmove(self->dimensions, new_dimensions, new_nd*sizeof(intp)); - memmove(self->strides, new_strides, new_nd*sizeof(intp)); - Py_INCREF(Py_None); - return Py_None; -} - -/* - * Returns a new array - * with the new shape from the data - * in the old array --- order-perspective depends on fortran argument. 
- * copy-only-if-necessary - */ - -/*NUMPY_API - * New shape for an array - */ -NPY_NO_EXPORT PyObject * -PyArray_Newshape(PyArrayObject *self, PyArray_Dims *newdims, - NPY_ORDER fortran) -{ - intp i; - intp *dimensions = newdims->ptr; - PyArrayObject *ret; - int n = newdims->len; - Bool same, incref = TRUE; - intp *strides = NULL; - intp newstrides[MAX_DIMS]; - int flags; - - if (fortran == PyArray_ANYORDER) { - fortran = PyArray_ISFORTRAN(self); - } - /* Quick check to make sure anything actually needs to be done */ - if (n == self->nd) { - same = TRUE; - i = 0; - while (same && i < n) { - if (PyArray_DIM(self,i) != dimensions[i]) { - same=FALSE; - } - i++; - } - if (same) { - return PyArray_View(self, NULL, NULL); - } - } - - /* - * Returns a pointer to an appropriate strides array - * if all we are doing is inserting ones into the shape, - * or removing ones from the shape - * or doing a combination of the two - * In this case we don't need to do anything but update strides and - * dimensions. So, we can handle non single-segment cases. - */ - i = _check_ones(self, n, dimensions, newstrides); - if (i == 0) { - strides = newstrides; - } - flags = self->flags; - - if (strides == NULL) { - /* - * we are really re-shaping not just adding ones to the shape somewhere - * fix any -1 dimensions and check new-dimensions against old size - */ - if (_fix_unknown_dimension(newdims, PyArray_SIZE(self)) < 0) { - return NULL; - } - /* - * sometimes we have to create a new copy of the array - * in order to get the right orientation and - * because we can't just re-use the buffer with the - * data in the order it is in. 
- */ - if (!(PyArray_ISONESEGMENT(self)) || - (((PyArray_CHKFLAGS(self, NPY_CONTIGUOUS) && - fortran == NPY_FORTRANORDER) || - (PyArray_CHKFLAGS(self, NPY_FORTRAN) && - fortran == NPY_CORDER)) && (self->nd > 1))) { - int success = 0; - success = _attempt_nocopy_reshape(self,n,dimensions, - newstrides,fortran); - if (success) { - /* no need to copy the array after all */ - strides = newstrides; - flags = self->flags; - } - else { - PyObject *new; - new = PyArray_NewCopy(self, fortran); - if (new == NULL) { - return NULL; - } - incref = FALSE; - self = (PyArrayObject *)new; - flags = self->flags; - } - } - - /* We always have to interpret the contiguous buffer correctly */ - - /* Make sure the flags argument is set. */ - if (n > 1) { - if (fortran == NPY_FORTRANORDER) { - flags &= ~NPY_CONTIGUOUS; - flags |= NPY_FORTRAN; - } - else { - flags &= ~NPY_FORTRAN; - flags |= NPY_CONTIGUOUS; - } - } - } - else if (n > 0) { - /* - * replace any 0-valued strides with - * appropriate value to preserve contiguousness - */ - if (fortran == PyArray_FORTRANORDER) { - if (strides[0] == 0) { - strides[0] = self->descr->elsize; - } - for (i = 1; i < n; i++) { - if (strides[i] == 0) { - strides[i] = strides[i-1] * dimensions[i-1]; - } - } - } - else { - if (strides[n-1] == 0) { - strides[n-1] = self->descr->elsize; - } - for (i = n - 2; i > -1; i--) { - if (strides[i] == 0) { - strides[i] = strides[i+1] * dimensions[i+1]; - } - } - } - } - - Py_INCREF(self->descr); - ret = (PyAO *)PyArray_NewFromDescr(Py_TYPE(self), - self->descr, - n, dimensions, - strides, - self->data, - flags, (PyObject *)self); - - if (ret == NULL) { - goto fail; - } - if (incref) { - Py_INCREF(self); - } - ret->base = (PyObject *)self; - PyArray_UpdateFlags(ret, CONTIGUOUS | FORTRAN); - return (PyObject *)ret; - - fail: - if (!incref) { - Py_DECREF(self); - } - return NULL; -} - - - -/* For back-ward compatability -- Not recommended */ - -/*NUMPY_API - * Reshape - */ -NPY_NO_EXPORT PyObject * 
-PyArray_Reshape(PyArrayObject *self, PyObject *shape) -{ - PyObject *ret; - PyArray_Dims newdims; - - if (!PyArray_IntpConverter(shape, &newdims)) { - return NULL; - } - ret = PyArray_Newshape(self, &newdims, PyArray_CORDER); - PyDimMem_FREE(newdims.ptr); - return ret; -} - -/* inserts 0 for strides where dimension will be 1 */ -static int -_check_ones(PyArrayObject *self, int newnd, intp* newdims, intp *strides) -{ - int nd; - intp *dims; - Bool done=FALSE; - int j, k; - - nd = self->nd; - dims = self->dimensions; - - for (k = 0, j = 0; !done && (j < nd || k < newnd);) { - if ((j < nd) && (k < newnd) && (newdims[k] == dims[j])) { - strides[k] = self->strides[j]; - j++; - k++; - } - else if ((k < newnd) && (newdims[k] == 1)) { - strides[k] = 0; - k++; - } - else if ((j < nd) && (dims[j] == 1)) { - j++; - } - else { - done = TRUE; - } - } - if (done) { - return -1; - } - return 0; -} - -static void -_putzero(char *optr, PyObject *zero, PyArray_Descr *dtype) -{ - if (!PyDataType_FLAGCHK(dtype, NPY_ITEM_REFCOUNT)) { - memset(optr, 0, dtype->elsize); - } - else if (PyDescr_HASFIELDS(dtype)) { - PyObject *key, *value, *title = NULL; - PyArray_Descr *new; - int offset; - Py_ssize_t pos = 0; - while (PyDict_Next(dtype->fields, &pos, &key, &value)) { - if NPY_TITLE_KEY(key, value) { - continue; - } - if (!PyArg_ParseTuple(value, "Oi|O", &new, &offset, &title)) { - return; - } - _putzero(optr + offset, zero, new); - } - } - else { - Py_INCREF(zero); - NPY_COPY_PYOBJECT_PTR(optr, &zero); - } - return; -} - - -/* - * attempt to reshape an array without copying data - * - * This function should correctly handle all reshapes, including - * axes of length 1. Zero strides should work but are untested. - * - * If a copy is needed, returns 0 - * If no copy is needed, returns 1 and fills newstrides - * with appropriate strides - * - * The "fortran" argument describes how the array should be viewed - * during the reshape, not how it is stored in memory (that - * information is in self->strides). - * - * If some output dimensions have length 1, the strides assigned to - * them are arbitrary. In the current implementation, they are the - * stride of the next-fastest index.
- */ -static int -_attempt_nocopy_reshape(PyArrayObject *self, int newnd, intp* newdims, - intp *newstrides, int fortran) -{ - int oldnd; - intp olddims[MAX_DIMS]; - intp oldstrides[MAX_DIMS]; - int oi, oj, ok, ni, nj, nk; - int np, op; - - oldnd = 0; - for (oi = 0; oi < self->nd; oi++) { - if (self->dimensions[oi]!= 1) { - olddims[oldnd] = self->dimensions[oi]; - oldstrides[oldnd] = self->strides[oi]; - oldnd++; - } - } - - /* - fprintf(stderr, "_attempt_nocopy_reshape( ("); - for (oi=0; oi<oldnd; oi++) - fprintf(stderr, "(%d,%d), ", olddims[oi], oldstrides[oi]); - fprintf(stderr, ") -> ("); - for (ni=0; ni<newnd; ni++) - fprintf(stderr, "(%d,*), ", newdims[ni]); - fprintf(stderr, "), fortran=%d)\n", fortran); - */ - - np = PyArray_MultiplyList(newdims, newnd); - op = PyArray_MultiplyList(olddims, oldnd); - if (np != op) { - /* different total sizes; no hope */ - return 0; - } - /* the current code does not handle 0-sized arrays, so give up */ - if (np == 0) { - return 0; - } - - oi = 0; - oj = 1; - ni = 0; - nj = 1; - while (ni < newnd && oi < oldnd) { - np = newdims[ni]; - op = olddims[oi]; - - while (np != op) { - if (np < op) { - np *= newdims[nj++]; - } - else { - op *= olddims[oj++]; - } - } - - for (ok = oi; ok < oj - 1; ok++) { - if (fortran) { - if (oldstrides[ok+1] != olddims[ok]*oldstrides[ok]) { - /* not contiguous enough */ - return 0; - } - } - else { - /* C order */ - if (oldstrides[ok] != olddims[ok+1]*oldstrides[ok+1]) { - /* not contiguous enough */ - return 0; - } - } - } - - if (fortran) { - newstrides[ni] = oldstrides[oi]; - for (nk = ni + 1; nk < nj; nk++) { - newstrides[nk] = newstrides[nk - 1]*newdims[nk - 1]; - } - } - else { - /* C order */ - newstrides[nj - 1] = oldstrides[oj - 1]; - for (nk = nj - 1; nk > ni; nk--) { - newstrides[nk - 1] = newstrides[nk]*newdims[nk]; - } - } - ni = nj++; - oi = oj++; - } - - /* - fprintf(stderr, "success: _attempt_nocopy_reshape ("); - for (oi=0; oi<oldnd; oi++) - fprintf(stderr, "(%d,%d), ", olddims[oi], oldstrides[oi]); - fprintf(stderr, ") -> ("); - for (ni=0; ni<newnd; ni++) - fprintf(stderr, "(%d,%d), ", newdims[ni], newstrides[ni]); - fprintf(stderr, ")\n"); - */ - - return 1; -} - -static int -_fix_unknown_dimension(PyArray_Dims *newshape, intp s_original) -{ - intp *dimensions; - intp i, s_known, i_unknown, n; - static char msg[] = "total size of new array must be unchanged"; - - dimensions = newshape->ptr; - n = newshape->len; - s_known = 1; - i_unknown = -1; - - for (i = 0; i < n; i++) { - if (dimensions[i] < 0) { - if (i_unknown == -1) { - i_unknown = i; - } - else { - PyErr_SetString(PyExc_ValueError, - "can only specify one" \ - " unknown dimension"); - return -1; - } - } - else { - s_known *= dimensions[i]; - } - } - - if (i_unknown >= 0) { - if ((s_known == 0) || (s_original % s_known != 0)) { - PyErr_SetString(PyExc_ValueError, msg); - return -1; - } - dimensions[i_unknown] = s_original/s_known; - } - else { - if (s_original != s_known) { - PyErr_SetString(PyExc_ValueError, msg); - return -1; - } - } - return 0; -} - -/*NUMPY_API - * - * return a new view of the array object with all of its unit-length - * dimensions squeezed out if needed, otherwise - * return the same array.
- */ -NPY_NO_EXPORT PyObject * -PyArray_Squeeze(PyArrayObject *self) -{ - int nd = self->nd; - int newnd = nd; - intp dimensions[MAX_DIMS]; - intp strides[MAX_DIMS]; - int i, j; - PyObject *ret; - - if (nd == 0) { - Py_INCREF(self); - return (PyObject *)self; - } - for (j = 0, i = 0; i < nd; i++) { - if (self->dimensions[i] == 1) { - newnd -= 1; - } - else { - dimensions[j] = self->dimensions[i]; - strides[j++] = self->strides[i]; - } - } - - Py_INCREF(self->descr); - ret = PyArray_NewFromDescr(Py_TYPE(self), - self->descr, - newnd, dimensions, - strides, self->data, - self->flags, - (PyObject *)self); - if (ret == NULL) { - return NULL; - } - PyArray_FLAGS(ret) &= ~OWNDATA; - PyArray_BASE(ret) = (PyObject *)self; - Py_INCREF(self); - return (PyObject *)ret; -} - -/*NUMPY_API - * SwapAxes - */ -NPY_NO_EXPORT PyObject * -PyArray_SwapAxes(PyArrayObject *ap, int a1, int a2) -{ - PyArray_Dims new_axes; - intp dims[MAX_DIMS]; - int n, i, val; - PyObject *ret; - - if (a1 == a2) { - Py_INCREF(ap); - return (PyObject *)ap; - } - - n = ap->nd; - if (n <= 1) { - Py_INCREF(ap); - return (PyObject *)ap; - } - - if (a1 < 0) { - a1 += n; - } - if (a2 < 0) { - a2 += n; - } - if ((a1 < 0) || (a1 >= n)) { - PyErr_SetString(PyExc_ValueError, - "bad axis1 argument to swapaxes"); - return NULL; - } - if ((a2 < 0) || (a2 >= n)) { - PyErr_SetString(PyExc_ValueError, - "bad axis2 argument to swapaxes"); - return NULL; - } - new_axes.ptr = dims; - new_axes.len = n; - - for (i = 0; i < n; i++) { - if (i == a1) { - val = a2; - } - else if (i == a2) { - val = a1; - } - else { - val = i; - } - new_axes.ptr[i] = val; - } - ret = PyArray_Transpose(ap, &new_axes); - return ret; -} - -/*NUMPY_API - * Return Transpose. 
- */ -NPY_NO_EXPORT PyObject * -PyArray_Transpose(PyArrayObject *ap, PyArray_Dims *permute) -{ - intp *axes, axis; - intp i, n; - intp permutation[MAX_DIMS], reverse_permutation[MAX_DIMS]; - PyArrayObject *ret = NULL; - - if (permute == NULL) { - n = ap->nd; - for (i = 0; i < n; i++) { - permutation[i] = n-1-i; - } - } - else { - n = permute->len; - axes = permute->ptr; - if (n != ap->nd) { - PyErr_SetString(PyExc_ValueError, - "axes don't match array"); - return NULL; - } - for (i = 0; i < n; i++) { - reverse_permutation[i] = -1; - } - for (i = 0; i < n; i++) { - axis = axes[i]; - if (axis < 0) { - axis = ap->nd + axis; - } - if (axis < 0 || axis >= ap->nd) { - PyErr_SetString(PyExc_ValueError, - "invalid axis for this array"); - return NULL; - } - if (reverse_permutation[axis] != -1) { - PyErr_SetString(PyExc_ValueError, - "repeated axis in transpose"); - return NULL; - } - reverse_permutation[axis] = i; - permutation[i] = axis; - } - for (i = 0; i < n; i++) { - } - } - - /* - * this allocates memory for dimensions and strides (but fills them - * incorrectly), sets up descr, and points data at ap->data. 
- */ - Py_INCREF(ap->descr); - ret = (PyArrayObject *)\ - PyArray_NewFromDescr(Py_TYPE(ap), - ap->descr, - n, ap->dimensions, - NULL, ap->data, ap->flags, - (PyObject *)ap); - if (ret == NULL) { - return NULL; - } - /* point at true owner of memory: */ - ret->base = (PyObject *)ap; - Py_INCREF(ap); - - /* fix the dimensions and strides of the return-array */ - for (i = 0; i < n; i++) { - ret->dimensions[i] = ap->dimensions[permutation[i]]; - ret->strides[i] = ap->strides[permutation[i]]; - } - PyArray_UpdateFlags(ret, CONTIGUOUS | FORTRAN); - return (PyObject *)ret; -} - -/*NUMPY_API - * Ravel - * Returns a contiguous array - */ -NPY_NO_EXPORT PyObject * -PyArray_Ravel(PyArrayObject *a, NPY_ORDER fortran) -{ - PyArray_Dims newdim = {NULL,1}; - intp val[1] = {-1}; - - if (fortran == PyArray_ANYORDER) { - fortran = PyArray_ISFORTRAN(a); - } - newdim.ptr = val; - if (!fortran && PyArray_ISCONTIGUOUS(a)) { - return PyArray_Newshape(a, &newdim, PyArray_CORDER); - } - else if (fortran && PyArray_ISFORTRAN(a)) { - return PyArray_Newshape(a, &newdim, PyArray_FORTRANORDER); - } - else { - return PyArray_Flatten(a, fortran); - } -} - -/*NUMPY_API - * Flatten - */ -NPY_NO_EXPORT PyObject * -PyArray_Flatten(PyArrayObject *a, NPY_ORDER order) -{ - PyObject *ret; - intp size; - - if (order == PyArray_ANYORDER) { - order = PyArray_ISFORTRAN(a); - } - size = PyArray_SIZE(a); - Py_INCREF(a->descr); - ret = PyArray_NewFromDescr(Py_TYPE(a), - a->descr, - 1, &size, - NULL, - NULL, - 0, (PyObject *)a); - - if (ret == NULL) { - return NULL; - } - if (_flat_copyinto(ret, (PyObject *)a, order) < 0) { - Py_DECREF(ret); - return NULL; - } - return ret; -} - - diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/shape.h b/pythonPackages/numpy/numpy/core/src/multiarray/shape.h deleted file mode 100755 index 1a5991a500..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/shape.h +++ /dev/null @@ -1,4 +0,0 @@ -#ifndef _NPY_ARRAY_SHAPE_H_ -#define _NPY_ARRAY_SHAPE_H_ - -#endif 
diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/ucsnarrow.c b/pythonPackages/numpy/numpy/core/src/multiarray/ucsnarrow.c deleted file mode 100755 index 6a17885815..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/ucsnarrow.c +++ /dev/null @@ -1,126 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include <Python.h> - -#include -#include - -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/npy_math.h" - -#include "npy_config.h" - -#include "numpy/npy_3kcompat.h" - -/* Functions only needed on narrow builds of Python - for converting back and forth between the NumPy Unicode data-type - (always 4-byte) - and the Python Unicode scalar (2-bytes on a narrow build). -*/ - -/* the ucs2 buffer must be large enough to hold 2*ucs4length characters - due to the use of surrogate pairs. - - The return value is the number of ucs2 bytes used-up which - is ucs4length + number of surrogate pairs found. - - values above 0xffff are converted to surrogate pairs. -*/ -NPY_NO_EXPORT int -PyUCS2Buffer_FromUCS4(Py_UNICODE *ucs2, PyArray_UCS4 *ucs4, int ucs4length) -{ - int i; - int numucs2 = 0; - PyArray_UCS4 chr; - for (i=0; i<ucs4length; i++) { - chr = *ucs4++; - if (chr > 0xffff) { - numucs2++; - chr -= 0x10000L; - *ucs2++ = 0xD800 + (Py_UNICODE) (chr >> 10); - *ucs2++ = 0xDC00 + (Py_UNICODE) (chr & 0x03FF); - } - else { - *ucs2++ = (Py_UNICODE) chr; - } - numucs2++; - } - return numucs2; -} - - -/* This converts a UCS2 buffer of the given length to UCS4 buffer. - It converts up to ucs4len characters of UCS2 - - It returns the number of characters converted which can - be less than ucs2len if there are surrogate pairs in ucs2. - - The return value is the actual size of the used part of the ucs4 buffer.
-*/ - -NPY_NO_EXPORT int -PyUCS2Buffer_AsUCS4(Py_UNICODE *ucs2, PyArray_UCS4 *ucs4, int ucs2len, int ucs4len) -{ - int i; - PyArray_UCS4 chr; - Py_UNICODE ch; - int numchars=0; - - for (i=0; (i < ucs2len) && (numchars < ucs4len); i++) { - ch = *ucs2++; - if (ch >= 0xd800 && ch <= 0xdfff) { - /* surrogate pair */ - chr = ((PyArray_UCS4)(ch-0xd800)) << 10; - chr += *ucs2++ + 0x2400; /* -0xdc00 + 0x10000 */ - i++; - } - else { - chr = (PyArray_UCS4) ch; - } - *ucs4++ = chr; - numchars++; - } - return numchars; -} - - -NPY_NO_EXPORT PyObject * -MyPyUnicode_New(int length) -{ - PyUnicodeObject *unicode; - unicode = PyObject_New(PyUnicodeObject, &PyUnicode_Type); - if (unicode == NULL) return NULL; - unicode->str = PyMem_NEW(Py_UNICODE, length+1); - if (!unicode->str) { - _Py_ForgetReference((PyObject *)unicode); - PyObject_Del(unicode); - return PyErr_NoMemory(); - } - unicode->str[0] = 0; - unicode->str[length] = 0; - unicode->length = length; - unicode->hash = -1; - unicode->defenc = NULL; -#if defined(NPY_PY3K) - unicode->state = 0; /* Not interned */ -#endif - return (PyObject *)unicode; -} - -NPY_NO_EXPORT int -MyPyUnicode_Resize(PyUnicodeObject *uni, int length) -{ - void *oldstr; - - oldstr = uni->str; - PyMem_RESIZE(uni->str, Py_UNICODE, length+1); - if (!uni->str) { - uni->str = oldstr; - PyErr_NoMemory(); - return -1; - } - uni->str[length] = 0; - uni->length = length; - return 0; -} diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/ucsnarrow.h b/pythonPackages/numpy/numpy/core/src/multiarray/ucsnarrow.h deleted file mode 100755 index 4b0e0c1111..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/ucsnarrow.h +++ /dev/null @@ -1,21 +0,0 @@ -#ifndef _NPY_UCSNARROW_H_ -#define _NPY_UCSNARROW_H_ - -#ifdef Py_UNICODE_WIDE -#error this should not be included if Py_UNICODE_WIDE is defined -int int int; -#endif - -NPY_NO_EXPORT int -PyUCS2Buffer_FromUCS4(Py_UNICODE *ucs2, PyArray_UCS4 *ucs4, int ucs4length); - -NPY_NO_EXPORT int 
-PyUCS2Buffer_AsUCS4(Py_UNICODE *ucs2, PyArray_UCS4 *ucs4, int ucs2len, int ucs4len); - -NPY_NO_EXPORT PyObject * -MyPyUnicode_New(int length); - -NPY_NO_EXPORT int -MyPyUnicode_Resize(PyUnicodeObject *uni, int length); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/multiarray/usertypes.c b/pythonPackages/numpy/numpy/core/src/multiarray/usertypes.c deleted file mode 100755 index 2037929149..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/usertypes.c +++ /dev/null @@ -1,262 +0,0 @@ -/* - Provide multidimensional arrays as a basic object type in python. - - Based on Original Numeric implementation - Copyright (c) 1995, 1996, 1997 Jim Hugunin, hugunin@mit.edu - - with contributions from many Numeric Python developers 1995-2004 - - Heavily modified in 2005 with inspiration from Numarray - - by - - Travis Oliphant, oliphant@ee.byu.edu - Brigham Young Univeristy - - -maintainer email: oliphant.travis@ieee.org - - Numarray design (which provided guidance) by - Space Science Telescope Institute - (J. 
Todd Miller, Perry Greenfield, Rick White) -*/ -#define PY_SSIZE_T_CLEAN -#include -#include "structmember.h" - -/*#include */ -#define _MULTIARRAYMODULE -#define NPY_NO_PREFIX -#include "numpy/arrayobject.h" -#include "numpy/arrayscalars.h" - -#include "npy_config.h" - -#include "common.h" - -#include "numpy/npy_3kcompat.h" - -#include "usertypes.h" - -NPY_NO_EXPORT PyArray_Descr **userdescrs=NULL; - -static int * -_append_new(int *types, int insert) -{ - int n = 0; - int *newtypes; - - while (types[n] != PyArray_NOTYPE) { - n++; - } - newtypes = (int *)realloc(types, (n + 2)*sizeof(int)); - newtypes[n] = insert; - newtypes[n + 1] = PyArray_NOTYPE; - return newtypes; -} - -static Bool -_default_nonzero(void *ip, void *arr) -{ - int elsize = PyArray_ITEMSIZE(arr); - char *ptr = ip; - while (elsize--) { - if (*ptr++ != 0) { - return TRUE; - } - } - return FALSE; -} - -static void -_default_copyswapn(void *dst, npy_intp dstride, void *src, - npy_intp sstride, npy_intp n, int swap, void *arr) -{ - npy_intp i; - PyArray_CopySwapFunc *copyswap; - char *dstptr = dst; - char *srcptr = src; - - copyswap = PyArray_DESCR(arr)->f->copyswap; - - for (i = 0; i < n; i++) { - copyswap(dstptr, srcptr, swap, arr); - dstptr += dstride; - srcptr += sstride; - } -} - -/*NUMPY_API - Initialize arrfuncs to NULL -*/ -NPY_NO_EXPORT void -PyArray_InitArrFuncs(PyArray_ArrFuncs *f) -{ - int i; - - for(i = 0; i < PyArray_NTYPES; i++) { - f->cast[i] = NULL; - } - f->getitem = NULL; - f->setitem = NULL; - f->copyswapn = NULL; - f->copyswap = NULL; - f->compare = NULL; - f->argmax = NULL; - f->dotfunc = NULL; - f->scanfunc = NULL; - f->fromstr = NULL; - f->nonzero = NULL; - f->fill = NULL; - f->fillwithscalar = NULL; - for(i = 0; i < PyArray_NSORTS; i++) { - f->sort[i] = NULL; - f->argsort[i] = NULL; - } - f->castdict = NULL; - f->scalarkind = NULL; - f->cancastscalarkindto = NULL; - f->cancastto = NULL; -} - -/* - returns typenum to associate with this type >=PyArray_USERDEF. 
- needs the userdecrs table and PyArray_NUMUSER variables - defined in arraytypes.inc -*/ -/*NUMPY_API - Register Data type - Does not change the reference count of descr -*/ -NPY_NO_EXPORT int -PyArray_RegisterDataType(PyArray_Descr *descr) -{ - PyArray_Descr *descr2; - int typenum; - int i; - PyArray_ArrFuncs *f; - - /* See if this type is already registered */ - for (i = 0; i < NPY_NUMUSERTYPES; i++) { - descr2 = userdescrs[i]; - if (descr2 == descr) { - return descr->type_num; - } - } - typenum = PyArray_USERDEF + NPY_NUMUSERTYPES; - descr->type_num = typenum; - if (descr->elsize == 0) { - PyErr_SetString(PyExc_ValueError, "cannot register a" \ - "flexible data-type"); - return -1; - } - f = descr->f; - if (f->nonzero == NULL) { - f->nonzero = _default_nonzero; - } - if (f->copyswapn == NULL) { - f->copyswapn = _default_copyswapn; - } - if (f->copyswap == NULL || f->getitem == NULL || - f->setitem == NULL) { - PyErr_SetString(PyExc_ValueError, "a required array function" \ - " is missing."); - return -1; - } - if (descr->typeobj == NULL) { - PyErr_SetString(PyExc_ValueError, "missing typeobject"); - return -1; - } - userdescrs = realloc(userdescrs, - (NPY_NUMUSERTYPES+1)*sizeof(void *)); - if (userdescrs == NULL) { - PyErr_SetString(PyExc_MemoryError, "RegisterDataType"); - return -1; - } - userdescrs[NPY_NUMUSERTYPES++] = descr; - return typenum; -} - -/*NUMPY_API - Register Casting Function - Replaces any function currently stored. 
-*/ -NPY_NO_EXPORT int -PyArray_RegisterCastFunc(PyArray_Descr *descr, int totype, - PyArray_VectorUnaryFunc *castfunc) -{ - PyObject *cobj, *key; - int ret; - - if (totype < PyArray_NTYPES) { - descr->f->cast[totype] = castfunc; - return 0; - } - if (!PyTypeNum_ISUSERDEF(totype)) { - PyErr_SetString(PyExc_TypeError, "invalid type number."); - return -1; - } - if (descr->f->castdict == NULL) { - descr->f->castdict = PyDict_New(); - if (descr->f->castdict == NULL) { - return -1; - } - } - key = PyInt_FromLong(totype); - if (PyErr_Occurred()) { - return -1; - } - cobj = NpyCapsule_FromVoidPtr((void *)castfunc, NULL); - if (cobj == NULL) { - Py_DECREF(key); - return -1; - } - ret = PyDict_SetItem(descr->f->castdict, key, cobj); - Py_DECREF(key); - Py_DECREF(cobj); - return ret; -} - -/*NUMPY_API - * Register a type number indicating that a descriptor can be cast - * to it safely - */ -NPY_NO_EXPORT int -PyArray_RegisterCanCast(PyArray_Descr *descr, int totype, - NPY_SCALARKIND scalar) -{ - if (scalar == PyArray_NOSCALAR) { - /* - * register with cancastto - * These lists won't be freed once created - * -- they become part of the data-type - */ - if (descr->f->cancastto == NULL) { - descr->f->cancastto = (int *)malloc(1*sizeof(int)); - descr->f->cancastto[0] = PyArray_NOTYPE; - } - descr->f->cancastto = _append_new(descr->f->cancastto, - totype); - } - else { - /* register with cancastscalarkindto */ - if (descr->f->cancastscalarkindto == NULL) { - int i; - descr->f->cancastscalarkindto = - (int **)malloc(PyArray_NSCALARKINDS* sizeof(int*)); - for (i = 0; i < PyArray_NSCALARKINDS; i++) { - descr->f->cancastscalarkindto[i] = NULL; - } - } - if (descr->f->cancastscalarkindto[scalar] == NULL) { - descr->f->cancastscalarkindto[scalar] = - (int *)malloc(1*sizeof(int)); - descr->f->cancastscalarkindto[scalar][0] = - PyArray_NOTYPE; - } - descr->f->cancastscalarkindto[scalar] = - _append_new(descr->f->cancastscalarkindto[scalar], totype); - } - return 0; -} - diff --git 
a/pythonPackages/numpy/numpy/core/src/multiarray/usertypes.h b/pythonPackages/numpy/numpy/core/src/multiarray/usertypes.h deleted file mode 100755 index 51f6a8720c..0000000000 --- a/pythonPackages/numpy/numpy/core/src/multiarray/usertypes.h +++ /dev/null @@ -1,24 +0,0 @@ -#ifndef _NPY_PRIVATE_USERTYPES_H_ -#define _NPY_PRIVATE_USERTYPES_H_ - -#ifdef NPY_ENABLE_SEPARATE_COMPILATION -extern NPY_NO_EXPORT PyArray_Descr **userdescrs; -#else -NPY_NO_EXPORT PyArray_Descr **userdescrs; -#endif - -NPY_NO_EXPORT void -PyArray_InitArrFuncs(PyArray_ArrFuncs *f); - -NPY_NO_EXPORT int -PyArray_RegisterCanCast(PyArray_Descr *descr, int totype, - NPY_SCALARKIND scalar); - -NPY_NO_EXPORT int -PyArray_RegisterDataType(PyArray_Descr *descr); - -NPY_NO_EXPORT int -PyArray_RegisterCastFunc(PyArray_Descr *descr, int totype, - PyArray_VectorUnaryFunc *castfunc); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/npymath/_signbit.c b/pythonPackages/numpy/numpy/core/src/npymath/_signbit.c deleted file mode 100755 index a2ad381627..0000000000 --- a/pythonPackages/numpy/numpy/core/src/npymath/_signbit.c +++ /dev/null @@ -1,32 +0,0 @@ -/* Adapted from cephes */ - -int -_npy_signbit_d(double x) -{ - union - { - double d; - short s[4]; - int i[2]; - } u; - - u.d = x; - -#if SIZEOF_INT == 4 - -#ifdef WORDS_BIGENDIAN /* defined in pyconfig.h */ - return u.i[0] < 0; -#else - return u.i[1] < 0; -#endif - -#else /* SIZEOF_INT != 4 */ - -#ifdef WORDS_BIGENDIAN - return u.s[0] < 0; -#else - return u.s[3] < 0; -#endif - -#endif /* SIZEOF_INT */ -} diff --git a/pythonPackages/numpy/numpy/core/src/npymath/ieee754.c.src b/pythonPackages/numpy/numpy/core/src/npymath/ieee754.c.src deleted file mode 100755 index 8df903b2bc..0000000000 --- a/pythonPackages/numpy/numpy/core/src/npymath/ieee754.c.src +++ /dev/null @@ -1,550 +0,0 @@ -/* -*- c -*- */ -/* - * vim:syntax=c - * - * Low-level routines related to IEEE-754 format - */ -#include "npy_math_common.h" -#include "npy_math_private.h" - -#ifndef 
HAVE_COPYSIGN -double npy_copysign(double x, double y) -{ - npy_uint32 hx, hy; - GET_HIGH_WORD(hx, x); - GET_HIGH_WORD(hy, y); - SET_HIGH_WORD(x, (hx & 0x7fffffff) | (hy & 0x80000000)); - return x; -} -#endif - -#if !defined(HAVE_DECL_SIGNBIT) -#include "_signbit.c" - -int _npy_signbit_f(float x) -{ - return _npy_signbit_d((double) x); -} - -int _npy_signbit_ld(long double x) -{ - return _npy_signbit_d((double) x); -} -#endif - -/* - * FIXME: There is a lot of redundancy between _next* and npy_nextafter*. - * refactor this at some point - * - * p >= 0, returnx x + nulp - * p < 0, returnx x - nulp - */ -double _next(double x, int p) -{ - volatile double t; - npy_int32 hx, hy, ix; - npy_uint32 lx; - - EXTRACT_WORDS(hx, lx, x); - ix = hx & 0x7fffffff; /* |x| */ - - if (((ix >= 0x7ff00000) && ((ix - 0x7ff00000) | lx) != 0)) /* x is nan */ - return x; - if ((ix | lx) == 0) { /* x == 0 */ - if (p >= 0) { - INSERT_WORDS(x, 0x0, 1); /* return +minsubnormal */ - } else { - INSERT_WORDS(x, 0x80000000, 1); /* return -minsubnormal */ - } - t = x * x; - if (t == x) - return t; - else - return x; /* raise underflow flag */ - } - if (p < 0) { /* x -= ulp */ - if (lx == 0) - hx -= 1; - lx -= 1; - } else { /* x += ulp */ - lx += 1; - if (lx == 0) - hx += 1; - } - hy = hx & 0x7ff00000; - if (hy >= 0x7ff00000) - return x + x; /* overflow */ - if (hy < 0x00100000) { /* underflow */ - t = x * x; - if (t != x) { /* raise underflow flag */ - INSERT_WORDS(x, hx, lx); - return x; - } - } - INSERT_WORDS(x, hx, lx); - return x; -} - -float _nextf(float x, int p) -{ - volatile float t; - npy_int32 hx, hy, ix; - - GET_FLOAT_WORD(hx, x); - ix = hx & 0x7fffffff; /* |x| */ - - if ((ix > 0x7f800000)) /* x is nan */ - return x; - if (ix == 0) { /* x == 0 */ - if (p >= 0) { - SET_FLOAT_WORD(x, 0x0 | 1); /* return +minsubnormal */ - } else { - SET_FLOAT_WORD(x, 0x80000000 | 1); /* return -minsubnormal */ - } - t = x * x; - if (t == x) - return t; - else - return x; /* raise underflow flag */ - } - if 
(p < 0) { /* x -= ulp */ - hx -= 1; - } else { /* x += ulp */ - hx += 1; - } - hy = hx & 0x7f800000; - if (hy >= 0x7f800000) - return x + x; /* overflow */ - if (hy < 0x00800000) { /* underflow */ - t = x * x; - if (t != x) { /* raise underflow flag */ - SET_FLOAT_WORD(x, hx); - return x; - } - } - SET_FLOAT_WORD(x, hx); - return x; -} - -#ifdef HAVE_LDOUBLE_DOUBLE_DOUBLE_BE - -/* - * FIXME: this is ugly and untested. The asm part only works with gcc, and we - * should consolidate the GET_LDOUBLE* / SET_LDOUBLE macros - */ -#define math_opt_barrier(x) \ - ({ __typeof (x) __x = x; __asm ("" : "+m" (__x)); __x; }) -#define math_force_eval(x) __asm __volatile ("" : : "m" (x)) - -/* only works for big endian */ -typedef union -{ - npy_longdouble value; - struct - { - npy_uint64 msw; - npy_uint64 lsw; - } parts64; - struct - { - npy_uint32 w0, w1, w2, w3; - } parts32; -} ieee854_long_double_shape_type; - -/* Get two 64 bit ints from a long double. */ - -#define GET_LDOUBLE_WORDS64(ix0,ix1,d) \ -do { \ - ieee854_long_double_shape_type qw_u; \ - qw_u.value = (d); \ - (ix0) = qw_u.parts64.msw; \ - (ix1) = qw_u.parts64.lsw; \ -} while (0) - -/* Set a long double from two 64 bit ints. 
*/ - -#define SET_LDOUBLE_WORDS64(d,ix0,ix1) \ -do { \ - ieee854_long_double_shape_type qw_u; \ - qw_u.parts64.msw = (ix0); \ - qw_u.parts64.lsw = (ix1); \ - (d) = qw_u.value; \ -} while (0) - -npy_longdouble _nextl(npy_longdouble x, int p) -{ - npy_int64 hx,ihx,ilx; - npy_uint64 lx; - - GET_LDOUBLE_WORDS64(hx, lx, x); - ihx = hx & 0x7fffffffffffffffLL; /* |hx| */ - ilx = lx & 0x7fffffffffffffffLL; /* |lx| */ - - if(((ihx & 0x7ff0000000000000LL)==0x7ff0000000000000LL)&& - ((ihx & 0x000fffffffffffffLL)!=0)) { - return x; /* signal the nan */ - } - if(ihx == 0 && ilx == 0) { /* x == 0 */ - npy_longdouble u; - SET_LDOUBLE_WORDS64(x, p, 0ULL);/* return +-minsubnormal */ - u = x * x; - if (u == x) { - return u; - } else { - return x; /* raise underflow flag */ - } - } - - npy_longdouble u; - if(p < 0) { /* p < 0, x -= ulp */ - if((hx==0xffefffffffffffffLL)&&(lx==0xfc8ffffffffffffeLL)) - return x+x; /* overflow, return -inf */ - if (hx >= 0x7ff0000000000000LL) { - SET_LDOUBLE_WORDS64(u,0x7fefffffffffffffLL,0x7c8ffffffffffffeLL); - return u; - } - if(ihx <= 0x0360000000000000LL) { /* x <= LDBL_MIN */ - u = math_opt_barrier (x); - x -= __LDBL_DENORM_MIN__; - if (ihx < 0x0360000000000000LL - || (hx > 0 && (npy_int64) lx <= 0) - || (hx < 0 && (npy_int64) lx > 1)) { - u = u * u; - math_force_eval (u); /* raise underflow flag */ - } - return x; - } - if (ihx < 0x06a0000000000000LL) { /* ulp will denormal */ - SET_LDOUBLE_WORDS64(u,(hx&0x7ff0000000000000LL),0ULL); - u *= 0x1.0000000000000p-105L; - } else - SET_LDOUBLE_WORDS64(u,(hx&0x7ff0000000000000LL)-0x0690000000000000LL,0ULL); - return x - u; - } else { /* p >= 0, x += ulp */ - if((hx==0x7fefffffffffffffLL)&&(lx==0x7c8ffffffffffffeLL)) - return x+x; /* overflow, return +inf */ - if ((npy_uint64) hx >= 0xfff0000000000000ULL) { - SET_LDOUBLE_WORDS64(u,0xffefffffffffffffLL,0xfc8ffffffffffffeLL); - return u; - } - if(ihx <= 0x0360000000000000LL) { /* x <= LDBL_MIN */ - u = math_opt_barrier (x); - x += __LDBL_DENORM_MIN__; - if 
(ihx < 0x0360000000000000LL - || (hx > 0 && (npy_int64) lx < 0 && lx != 0x8000000000000001LL) - || (hx < 0 && (npy_int64) lx >= 0)) { - u = u * u; - math_force_eval (u); /* raise underflow flag */ - } - if (x == 0.0L) /* handle negative __LDBL_DENORM_MIN__ case */ - x = -0.0L; - return x; - } - if (ihx < 0x06a0000000000000LL) { /* ulp will denormal */ - SET_LDOUBLE_WORDS64(u,(hx&0x7ff0000000000000LL),0ULL); - u *= 0x1.0000000000000p-105L; - } else - SET_LDOUBLE_WORDS64(u,(hx&0x7ff0000000000000LL)-0x0690000000000000LL,0ULL); - return x + u; - } -} -#else -npy_longdouble _nextl(npy_longdouble x, int p) -{ - volatile npy_longdouble t; - union IEEEl2bitsrep ux; - - ux.e = x; - - if ((GET_LDOUBLE_EXP(ux) == 0x7fff && - ((GET_LDOUBLE_MANH(ux) & ~LDBL_NBIT) | GET_LDOUBLE_MANL(ux)) != 0)) { - return ux.e; /* x is nan */ - } - if (ux.e == 0.0) { - SET_LDOUBLE_MANH(ux, 0); /* return +-minsubnormal */ - SET_LDOUBLE_MANL(ux, 1); - if (p >= 0) { - SET_LDOUBLE_SIGN(ux, 0); - } else { - SET_LDOUBLE_SIGN(ux, 1); - } - t = ux.e * ux.e; - if (t == ux.e) { - return t; - } else { - return ux.e; /* raise underflow flag */ - } - } - if (p < 0) { /* x -= ulp */ - if (GET_LDOUBLE_MANL(ux) == 0) { - if ((GET_LDOUBLE_MANH(ux) & ~LDBL_NBIT) == 0) { - SET_LDOUBLE_EXP(ux, GET_LDOUBLE_EXP(ux) - 1); - } - SET_LDOUBLE_MANH(ux, - (GET_LDOUBLE_MANH(ux) - 1) | - (GET_LDOUBLE_MANH(ux) & LDBL_NBIT)); - } - SET_LDOUBLE_MANL(ux, GET_LDOUBLE_MANL(ux) - 1); - } else { /* x += ulp */ - SET_LDOUBLE_MANL(ux, GET_LDOUBLE_MANL(ux) + 1); - if (GET_LDOUBLE_MANL(ux) == 0) { - SET_LDOUBLE_MANH(ux, - (GET_LDOUBLE_MANH(ux) + 1) | - (GET_LDOUBLE_MANH(ux) & LDBL_NBIT)); - if ((GET_LDOUBLE_MANH(ux) & ~LDBL_NBIT) == 0) { - SET_LDOUBLE_EXP(ux, GET_LDOUBLE_EXP(ux) + 1); - } - } - } - if (GET_LDOUBLE_EXP(ux) == 0x7fff) { - return ux.e + ux.e; /* overflow */ - } - if (GET_LDOUBLE_EXP(ux) == 0) { /* underflow */ - if (LDBL_NBIT) { - SET_LDOUBLE_MANH(ux, GET_LDOUBLE_MANH(ux) & ~LDBL_NBIT); - } - t = ux.e * ux.e; - if (t != 
ux.e) { /* raise underflow flag */ - return ux.e; - } - } - - return ux.e; -} -#endif - -/* - * nextafter code taken from BSD math lib, the code contains the following - * notice: - * - * ==================================================== - * Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved. - * - * Developed at SunPro, a Sun Microsystems, Inc. business. - * Permission to use, copy, modify, and distribute this - * software is freely granted, provided that this notice - * is preserved. - * ==================================================== - */ - -#ifndef HAVE_NEXTAFTER -double npy_nextafter(double x, double y) -{ - volatile double t; - npy_int32 hx, hy, ix, iy; - npy_uint32 lx, ly; - - EXTRACT_WORDS(hx, lx, x); - EXTRACT_WORDS(hy, ly, y); - ix = hx & 0x7fffffff; /* |x| */ - iy = hy & 0x7fffffff; /* |y| */ - - if (((ix >= 0x7ff00000) && ((ix - 0x7ff00000) | lx) != 0) || /* x is nan */ - ((iy >= 0x7ff00000) && ((iy - 0x7ff00000) | ly) != 0)) /* y is nan */ - return x + y; - if (x == y) - return y; /* x=y, return y */ - if ((ix | lx) == 0) { /* x == 0 */ - INSERT_WORDS(x, hy & 0x80000000, 1); /* return +-minsubnormal */ - t = x * x; - if (t == x) - return t; - else - return x; /* raise underflow flag */ - } - if (hx >= 0) { /* x > 0 */ - if (hx > hy || ((hx == hy) && (lx > ly))) { /* x > y, x -= ulp */ - if (lx == 0) - hx -= 1; - lx -= 1; - } else { /* x < y, x += ulp */ - lx += 1; - if (lx == 0) - hx += 1; - } - } else { /* x < 0 */ - if (hy >= 0 || hx > hy || ((hx == hy) && (lx > ly))) { /* x < y, x -= ulp */ - if (lx == 0) - hx -= 1; - lx -= 1; - } else { /* x > y, x += ulp */ - lx += 1; - if (lx == 0) - hx += 1; - } - } - hy = hx & 0x7ff00000; - if (hy >= 0x7ff00000) - return x + x; /* overflow */ - if (hy < 0x00100000) { /* underflow */ - t = x * x; - if (t != x) { /* raise underflow flag */ - INSERT_WORDS(y, hx, lx); - return y; - } - } - INSERT_WORDS(x, hx, lx); - return x; -} -#endif - -#ifndef HAVE_NEXTAFTERF -float npy_nextafterf(float x, 
float y) -{ - volatile float t; - npy_int32 hx, hy, ix, iy; - - GET_FLOAT_WORD(hx, x); - GET_FLOAT_WORD(hy, y); - ix = hx & 0x7fffffff; /* |x| */ - iy = hy & 0x7fffffff; /* |y| */ - - if ((ix > 0x7f800000) || /* x is nan */ - (iy > 0x7f800000)) /* y is nan */ - return x + y; - if (x == y) - return y; /* x=y, return y */ - if (ix == 0) { /* x == 0 */ - SET_FLOAT_WORD(x, (hy & 0x80000000) | 1); /* return +-minsubnormal */ - t = x * x; - if (t == x) - return t; - else - return x; /* raise underflow flag */ - } - if (hx >= 0) { /* x > 0 */ - if (hx > hy) { /* x > y, x -= ulp */ - hx -= 1; - } else { /* x < y, x += ulp */ - hx += 1; - } - } else { /* x < 0 */ - if (hy >= 0 || hx > hy) { /* x < y, x -= ulp */ - hx -= 1; - } else { /* x > y, x += ulp */ - hx += 1; - } - } - hy = hx & 0x7f800000; - if (hy >= 0x7f800000) - return x + x; /* overflow */ - if (hy < 0x00800000) { /* underflow */ - t = x * x; - if (t != x) { /* raise underflow flag */ - SET_FLOAT_WORD(y, hx); - return y; - } - } - SET_FLOAT_WORD(x, hx); - return x; -} -#endif - -#ifndef HAVE_NEXTAFTERL -npy_longdouble npy_nextafterl(npy_longdouble x, npy_longdouble y) -{ - volatile npy_longdouble t; - union IEEEl2bitsrep ux; - union IEEEl2bitsrep uy; - - ux.e = x; - uy.e = y; - - if ((GET_LDOUBLE_EXP(ux) == 0x7fff && - ((GET_LDOUBLE_MANH(ux) & ~LDBL_NBIT) | GET_LDOUBLE_MANL(ux)) != 0) || - (GET_LDOUBLE_EXP(uy) == 0x7fff && - ((GET_LDOUBLE_MANH(uy) & ~LDBL_NBIT) | GET_LDOUBLE_MANL(uy)) != 0)) { - return ux.e + uy.e; /* x or y is nan */ - } - if (ux.e == uy.e) { - return uy.e; /* x=y, return y */ - } - if (ux.e == 0.0) { - SET_LDOUBLE_MANH(ux, 0); /* return +-minsubnormal */ - SET_LDOUBLE_MANL(ux, 1); - SET_LDOUBLE_SIGN(ux, GET_LDOUBLE_SIGN(uy)); - t = ux.e * ux.e; - if (t == ux.e) { - return t; - } else { - return ux.e; /* raise underflow flag */ - } - } - if ((ux.e > 0.0) ^ (ux.e < uy.e)) { /* x -= ulp */ - if (GET_LDOUBLE_MANL(ux) == 0) { - if ((GET_LDOUBLE_MANH(ux) & ~LDBL_NBIT) == 0) { - SET_LDOUBLE_EXP(ux, 
GET_LDOUBLE_EXP(ux) - 1); - } - SET_LDOUBLE_MANH(ux, - (GET_LDOUBLE_MANH(ux) - 1) | - (GET_LDOUBLE_MANH(ux) & LDBL_NBIT)); - } - SET_LDOUBLE_MANL(ux, GET_LDOUBLE_MANL(ux) - 1); - } else { /* x += ulp */ - SET_LDOUBLE_MANL(ux, GET_LDOUBLE_MANL(ux) + 1); - if (GET_LDOUBLE_MANL(ux) == 0) { - SET_LDOUBLE_MANH(ux, - (GET_LDOUBLE_MANH(ux) + 1) | - (GET_LDOUBLE_MANH(ux) & LDBL_NBIT)); - if ((GET_LDOUBLE_MANH(ux) & ~LDBL_NBIT) == 0) { - SET_LDOUBLE_EXP(ux, GET_LDOUBLE_EXP(ux) + 1); - } - } - } - if (GET_LDOUBLE_EXP(ux) == 0x7fff) { - return ux.e + ux.e; /* overflow */ - } - if (GET_LDOUBLE_EXP(ux) == 0) { /* underflow */ - if (LDBL_NBIT) { - SET_LDOUBLE_MANH(ux, GET_LDOUBLE_MANH(ux) & ~LDBL_NBIT); - } - t = ux.e * ux.e; - if (t != ux.e) { /* raise underflow flag */ - return ux.e; - } - } - - return ux.e; -} -#endif - -/**begin repeat - * #suff = f,,l# - * #SUFF = F,,L# - * #type = float, double, npy_longdouble# - */ -@type@ npy_spacing@suff@(@type@ x) -{ - /* XXX: npy isnan/isinf may be optimized by bit twiddling */ - if (npy_isinf(x)) { - return NPY_NAN@SUFF@; - } - - return _next@suff@(x, 1) - x; -} -/**end repeat**/ - -/* - * Decorate all the math functions which are available on the current platform - */ - -#ifdef HAVE_NEXTAFTERF -float npy_nextafterf(float x, float y) -{ - return nextafterf(x, y); -} -#endif - -#ifdef HAVE_NEXTAFTER -double npy_nextafter(double x, double y) -{ - return nextafter(x, y); -} -#endif - -#ifdef HAVE_NEXTAFTERL -npy_longdouble npy_nextafterl(npy_longdouble x, npy_longdouble y) -{ - return nextafterl(x, y); -} -#endif diff --git a/pythonPackages/numpy/numpy/core/src/npymath/npy_math.c.src b/pythonPackages/numpy/numpy/core/src/npymath/npy_math.c.src deleted file mode 100755 index 04a09bcba1..0000000000 --- a/pythonPackages/numpy/numpy/core/src/npymath/npy_math.c.src +++ /dev/null @@ -1,490 +0,0 @@ -/* - * vim:syntax=c - * A small module to implement missing C99 math capabilities required by numpy - * - * Please keep this independant of python 
! Only basic types (npy_longdouble) - * can be used, otherwise, pure C, without any use of Python facilities - * - * How to add a function to this section - * ------------------------------------- - * - * Say you want to add `foo`; these are the steps and the reasons for them. - * - * 1) Add foo to the appropriate list in the configuration system. The - * lists can be found in numpy/core/setup.py lines 63-105. Read the - * comments that come with them, they are very helpful. - * - * 2) The configuration system will define a macro HAVE_FOO if your function - * can be linked from the math library. The result can depend on the - * optimization flags as well as the compiler, so can't be known ahead of - * time. If the function can't be linked, then either it is absent, defined - * as a macro, or is an intrinsic (hardware) function. - * - * i) Undefine any possible macros: - * - * #ifdef foo - * #undef foo - * #endif - * - * ii) Avoid as much as possible declaring any function here. Declaring - * functions is not portable: some platforms define some function inline - * with a non-standard identifier, for example, or may use another - * identifier which changes the calling convention of the function. If you - * really have to, ALWAYS declare it for the one platform you are dealing - * with: - * - * Not ok: - * double exp(double a); - * - * Ok: - * #ifdef SYMBOL_DEFINED_WEIRD_PLATFORM - * double exp(double); - * #endif - * - * Some of the code is taken from msun library in FreeBSD, with the following - * notice: - * - * ==================================================== - * Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved. - * - * Developed at SunPro, a Sun Microsystems, Inc. business. - * Permission to use, copy, modify, and distribute this - * software is freely granted, provided that this notice - * is preserved. 
- * ==================================================== - */ -#include "npy_math_private.h" - -/* - ***************************************************************************** - ** BASIC MATH FUNCTIONS ** - ***************************************************************************** - */ - -/* Original code by Konrad Hinsen. */ -#ifndef HAVE_EXPM1 -double npy_expm1(double x) -{ - const double u = npy_exp(x); - - if (u == 1.0) { - return x; - } else if (u - 1.0 == -1.0) { - return -1; - } else { - return (u - 1.0) * x/npy_log(u); - } -} -#endif - -#ifndef HAVE_LOG1P -double npy_log1p(double x) -{ - const double u = 1. + x; - const double d = u - 1.; - - if (d == 0) { - return x; - } else { - return npy_log(u) * x / d; - } -} -#endif - -/* Taken from FreeBSD mlib, adapted for numpy - * - * XXX: we could be a bit faster by reusing high/low words for inf/nan - * classification instead of calling npy_isinf/npy_isnan: we should have some - * macros for this, though, instead of doing it manually - */ -#ifndef HAVE_ATAN2 -/* XXX: we should have this in npy_math.h */ -#define NPY_DBL_EPSILON 1.2246467991473531772E-16 -double npy_atan2(double y, double x) -{ - npy_int32 k, m, iy, ix, hx, hy; - npy_uint32 lx,ly; - double z; - - EXTRACT_WORDS(hx, lx, x); - ix = hx & 0x7fffffff; - EXTRACT_WORDS(hy, ly, y); - iy = hy & 0x7fffffff; - - /* if x or y is nan, return nan */ - if (npy_isnan(x * y)) { - return x + y; - } - - if (x == 1.0) { - return npy_atan(y); - } - - m = 2 * npy_signbit(x) + npy_signbit(y); - if (y == 0.0) { - switch(m) { - case 0: - case 1: return y; /* atan(+-0,+anything)=+-0 */ - case 2: return NPY_PI;/* atan(+0,-anything) = pi */ - case 3: return -NPY_PI;/* atan(-0,-anything) =-pi */ - } - } - - if (x == 0.0) { - return y > 0 ? 
NPY_PI_2 : -NPY_PI_2; - } - - if (npy_isinf(x)) { - if (npy_isinf(y)) { - switch(m) { - case 0: return NPY_PI_4;/* atan(+INF,+INF) */ - case 1: return -NPY_PI_4;/* atan(-INF,+INF) */ - case 2: return 3.0*NPY_PI_4;/*atan(+INF,-INF)*/ - case 3: return -3.0*NPY_PI_4;/*atan(-INF,-INF)*/ - } - } else { - switch(m) { - case 0: return NPY_PZERO; /* atan(+...,+INF) */ - case 1: return NPY_NZERO; /* atan(-...,+INF) */ - case 2: return NPY_PI; /* atan(+...,-INF) */ - case 3: return -NPY_PI; /* atan(-...,-INF) */ - } - } - } - - if (npy_isinf(y)) { - return y > 0 ? NPY_PI_2 : -NPY_PI_2; - } - - /* compute y/x */ - k = (iy - ix) >> 20; - if (k > 60) { /* |y/x| > 2**60 */ - z = NPY_PI_2 + 0.5 * NPY_DBL_EPSILON; - m &= 1; - } else if (hx < 0 && k < -60) { - z = 0.0; /* 0 > |y|/x > -2**-60 */ - } else { - z = npy_atan(npy_fabs(y/x)); /* safe to do y/x */ - } - - switch (m) { - case 0: return z ; /* atan(+,+) */ - case 1: return -z ; /* atan(-,+) */ - case 2: return NPY_PI - (z - NPY_DBL_EPSILON);/* atan(+,-) */ - default: /* case 3 */ - return (z - NPY_DBL_EPSILON) - NPY_PI;/* atan(-,-) */ - } -} - -#endif - -#ifndef HAVE_HYPOT -double npy_hypot(double x, double y) -{ - double yx; - - /* Handle the case where x or y is a NaN */ - if (npy_isnan(x * y)) { - if (npy_isinf(x) || npy_isinf(y)) { - return NPY_INFINITY; - } else { - return NPY_NAN; - } - } - - x = npy_fabs(x); - y = npy_fabs(y); - if (x < y) { - double temp = x; - x = y; - y = temp; - } - if (x == 0.) 
{ - return 0.; - } - else { - yx = y/x; - return x*npy_sqrt(1.+yx*yx); - } -} -#endif - -#ifndef HAVE_ACOSH -double npy_acosh(double x) -{ - return 2*npy_log(npy_sqrt((x + 1.0)/2) + npy_sqrt((x - 1.0)/2)); -} -#endif - -#ifndef HAVE_ASINH -double npy_asinh(double xx) -{ - double x, d; - int sign; - if (xx < 0.0) { - sign = -1; - x = -xx; - } - else { - sign = 1; - x = xx; - } - if (x > 1e8) { - d = x; - } else { - d = npy_sqrt(x*x + 1); - } - return sign*npy_log1p(x*(1.0 + x/(d+1))); -} -#endif - -#ifndef HAVE_ATANH -double npy_atanh(double x) -{ - if (x > 0) { - return -0.5*npy_log1p(-2.0*x/(1.0 + x)); - } - else { - return 0.5*npy_log1p(2.0*x/(1.0 - x)); - } -} -#endif - -#ifndef HAVE_RINT -double npy_rint(double x) -{ - double y, r; - - y = npy_floor(x); - r = x - y; - - if (r > 0.5) { - y += 1.0; - } - - /* Round to nearest even */ - if (r == 0.5) { - r = y - 2.0*npy_floor(0.5*y); - if (r == 1.0) { - y += 1.0; - } - } - return y; -} -#endif - -#ifndef HAVE_TRUNC -double npy_trunc(double x) -{ - return x < 0 ? npy_ceil(x) : npy_floor(x); -} -#endif - -#ifndef HAVE_EXP2 -double npy_exp2(double x) -{ - return npy_exp(NPY_LOGE2*x); -} -#endif - -#ifndef HAVE_LOG2 -double npy_log2(double x) -{ - return NPY_LOG2E*npy_log(x); -} -#endif - -/* - * if C99 extensions not available then define dummy functions that use the - * double versions for - * - * sin, cos, tan - * sinh, cosh, tanh, - * fabs, floor, ceil, rint, trunc - * sqrt, log10, log, exp, expm1 - * asin, acos, atan, - * asinh, acosh, atanh - * - * hypot, atan2, pow, fmod, modf - * - * We assume the above are always available in their double versions. - * - * NOTE: some facilities may be available as macro only instead of functions. - * For simplicity, we define our own functions and undef the macros. We could - * instead test for the macro, but I am lazy to do that for now. 
- */ - -/**begin repeat - * #type = npy_longdouble, float# - * #TYPE = NPY_LONGDOUBLE, FLOAT# - * #c = l,f# - * #C = L,F# - */ - -/**begin repeat1 - * #kind = sin,cos,tan,sinh,cosh,tanh,fabs,floor,ceil,rint,trunc,sqrt,log10, - * log,exp,expm1,asin,acos,atan,asinh,acosh,atanh,log1p,exp2,log2# - * #KIND = SIN,COS,TAN,SINH,COSH,TANH,FABS,FLOOR,CEIL,RINT,TRUNC,SQRT,LOG10, - * LOG,EXP,EXPM1,ASIN,ACOS,ATAN,ASINH,ACOSH,ATANH,LOG1P,EXP2,LOG2# - */ - -#ifdef @kind@@c@ -#undef @kind@@c@ -#endif -#ifndef HAVE_@KIND@@C@ -@type@ npy_@kind@@c@(@type@ x) -{ - return (@type@) npy_@kind@((double)x); -} -#endif - -/**end repeat1**/ - -/**begin repeat1 - * #kind = atan2,hypot,pow,fmod,copysign# - * #KIND = ATAN2,HYPOT,POW,FMOD,COPYSIGN# - */ -#ifdef @kind@@c@ -#undef @kind@@c@ -#endif -#ifndef HAVE_@KIND@@C@ -@type@ npy_@kind@@c@(@type@ x, @type@ y) -{ - return (@type@) npy_@kind@((double)x, (double) y); -} -#endif -/**end repeat1**/ - -#ifdef modf@c@ -#undef modf@c@ -#endif -#ifndef HAVE_MODF@C@ -@type@ npy_modf@c@(@type@ x, @type@ *iptr) -{ - double niptr; - double y = npy_modf((double)x, &niptr); - *iptr = (@type@) niptr; - return (@type@) y; -} -#endif - -/**end repeat**/ - - -/* - * Decorate all the math functions which are available on the current platform - */ - -/**begin repeat - * #type = npy_longdouble,double,float# - * #c = l,,f# - * #C = L,,F# - */ -/**begin repeat1 - * #kind = sin,cos,tan,sinh,cosh,tanh,fabs,floor,ceil,rint,trunc,sqrt,log10, - * log,exp,expm1,asin,acos,atan,asinh,acosh,atanh,log1p,exp2,log2# - * #KIND = SIN,COS,TAN,SINH,COSH,TANH,FABS,FLOOR,CEIL,RINT,TRUNC,SQRT,LOG10, - * LOG,EXP,EXPM1,ASIN,ACOS,ATAN,ASINH,ACOSH,ATANH,LOG1P,EXP2,LOG2# - */ -#ifdef HAVE_@KIND@@C@ -@type@ npy_@kind@@c@(@type@ x) -{ - return @kind@@c@(x); -} -#endif - -/**end repeat1**/ - -/**begin repeat1 - * #kind = atan2,hypot,pow,fmod,copysign# - * #KIND = ATAN2,HYPOT,POW,FMOD,COPYSIGN# - */ -#ifdef HAVE_@KIND@@C@ -@type@ npy_@kind@@c@(@type@ x, @type@ y) -{ - return @kind@@c@(x, y); -} 
-#endif -/**end repeat1**/ - -#ifdef HAVE_MODF@C@ -@type@ npy_modf@c@(@type@ x, @type@ *iptr) -{ - return modf@c@(x, iptr); -} -#endif - -/**end repeat**/ - - -/* - * Non standard functions - */ - -/**begin repeat - * #type = float, double, npy_longdouble# - * #c = f, ,l# - * #C = F, ,L# - */ - -#define LOGE2 NPY_LOGE2@c@ -#define LOG2E NPY_LOG2E@c@ -#define RAD2DEG (180.0@c@/NPY_PI@c@) -#define DEG2RAD (NPY_PI@c@/180.0@c@) - -@type@ npy_rad2deg@c@(@type@ x) -{ - return x*RAD2DEG; -} - -@type@ npy_deg2rad@c@(@type@ x) -{ - return x*DEG2RAD; -} - -@type@ npy_log2_1p@c@(@type@ x) -{ - return LOG2E*npy_log1p@c@(x); -} - -@type@ npy_exp2_m1@c@(@type@ x) -{ - return npy_expm1@c@(LOGE2*x); -} - -@type@ npy_logaddexp@c@(@type@ x, @type@ y) -{ - const @type@ tmp = x - y; - if (tmp > 0) { - return x + npy_log1p@c@(npy_exp@c@(-tmp)); - } - else if (tmp <= 0) { - return y + npy_log1p@c@(npy_exp@c@(tmp)); - } - else { - /* NaNs, or infinities of the same sign involved */ - return x + y; - } -} - -@type@ npy_logaddexp2@c@(@type@ x, @type@ y) -{ - const @type@ tmp = x - y; - if (tmp > 0) { - return x + npy_log2_1p@c@(npy_exp2@c@(-tmp)); - } - else if (tmp <= 0) { - return y + npy_log2_1p@c@(npy_exp2@c@(tmp)); - } - else { - /* NaNs, or infinities of the same sign involved */ - return x + y; - } -} - -#undef LOGE2 -#undef LOG2E -#undef RAD2DEG -#undef DEG2RAD - -/**end repeat**/ diff --git a/pythonPackages/numpy/numpy/core/src/npymath/npy_math_common.h b/pythonPackages/numpy/numpy/core/src/npymath/npy_math_common.h deleted file mode 100755 index 1f555a90a6..0000000000 --- a/pythonPackages/numpy/numpy/core/src/npymath/npy_math_common.h +++ /dev/null @@ -1,9 +0,0 @@ -/* - * Common headers needed by every npy math compilation unit - */ -#include -#include -#include - -#include "npy_config.h" -#include "numpy/npy_math.h" diff --git a/pythonPackages/numpy/numpy/core/src/npymath/npy_math_complex.c.src b/pythonPackages/numpy/numpy/core/src/npymath/npy_math_complex.c.src deleted file 
mode 100755 index 68cfa6ea1e..0000000000 --- a/pythonPackages/numpy/numpy/core/src/npymath/npy_math_complex.c.src +++ /dev/null @@ -1,287 +0,0 @@ -/* - * Implement some C99-compatible complex math functions - * - * Most of the code is taken from the msun library in FreeBSD (HEAD @ 30th June - * 2009), under the following license: - * - * Copyright (c) 2007 David Schultz - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE - * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT - * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY - * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF - * SUCH DAMAGE. 
- */ -#include "npy_math_common.h" -#include "npy_math_private.h" - -/*========================================================== - * Custom implementation of missing complex C99 functions - *=========================================================*/ - -/**begin repeat - * #type = float,double,npy_longdouble# - * #ctype = npy_cfloat,npy_cdouble,npy_clongdouble# - * #c = f, , l# - * #C = F, , L# - * #TMAX = FLT_MAX, DBL_MAX, LDBL_MAX# - */ -#ifndef HAVE_CABS@C@ -@type@ npy_cabs@c@(@ctype@ z) -{ - return npy_hypot@c@(npy_creal@c@(z), npy_cimag@c@(z)); -} -#endif - -#ifndef HAVE_CARG@C@ -@type@ npy_carg@c@(@ctype@ z) -{ - return npy_atan2@c@(npy_cimag@c@(z), npy_creal@c@(z)); -} -#endif - -#ifndef HAVE_CEXP@C@ -@ctype@ npy_cexp@c@(@ctype@ z) -{ - @type@ x, c, s; - @type@ r, i; - @ctype@ ret; - - r = npy_creal@c@(z); - i = npy_cimag@c@(z); - - if (npy_isfinite(r)) { - x = npy_exp@c@(r); - - c = npy_cos@c@(i); - s = npy_sin@c@(i); - - if (npy_isfinite(i)) { - ret = npy_cpack@c@(x * c, x * s); - } else { - ret = npy_cpack@c@(NPY_NAN, npy_copysign@c@(NPY_NAN, i)); - } - - } else if (npy_isnan(r)) { - /* r is nan */ - if (i == 0) { - ret = npy_cpack@c@(r, 0); - } else { - ret = npy_cpack@c@(r, npy_copysign@c@(NPY_NAN, i)); - } - } else { - /* r is +- inf */ - if (r > 0) { - if (i == 0) { - ret = npy_cpack@c@(r, i); - } else if (npy_isfinite(i)) { - c = npy_cos@c@(i); - s = npy_sin@c@(i); - - ret = npy_cpack@c@(r * c, r * s); - } else { - /* x = +inf, y = +-inf | nan */ - ret = npy_cpack@c@(r, NPY_NAN); - } - } else { - if (npy_isfinite(i)) { - x = npy_exp@c@(r); - c = npy_cos@c@(i); - s = npy_sin@c@(i); - - ret = npy_cpack@c@(x * c, x * s); - } else { - /* x = -inf, y = nan | +i inf */ - ret = npy_cpack@c@(0, 0); - } - } - } - - return ret; -} -#endif - -#ifndef HAVE_CLOG@C@ -@ctype@ npy_clog@c@(@ctype@ z) -{ - return npy_cpack@c@(npy_log@c@ (npy_cabs@c@ (z)), npy_carg@c@ (z)); -} -#endif - -#ifndef HAVE_CSQRT@C@ - -/* We risk spurious overflow for components >= DBL_MAX / 
(1 + sqrt(2)). */ -#define THRESH (@TMAX@ / (1 + NPY_SQRT2@c@)) - -@ctype@ npy_csqrt@c@(@ctype@ z) -{ - @ctype@ result; - @type@ a, b; - @type@ t; - int scale; - - a = npy_creal@c@(z); - b = npy_cimag@c@(z); - - /* Handle special cases. */ - if (a == 0 && b == 0) - return (npy_cpack@c@(0, b)); - if (npy_isinf(b)) - return (npy_cpack@c@(NPY_INFINITY, b)); - if (npy_isnan(a)) { - t = (b - b) / (b - b); /* raise invalid if b is not a NaN */ - return (npy_cpack@c@(a, t)); /* return NaN + NaN i */ - } - if (npy_isinf(a)) { - /* - * csqrt(inf + NaN i) = inf + NaN i - * csqrt(inf + y i) = inf + 0 i - * csqrt(-inf + NaN i) = NaN +- inf i - * csqrt(-inf + y i) = 0 + inf i - */ - if (npy_signbit(a)) - return (npy_cpack@c@(npy_fabs@c@(b - b), npy_copysign@c@(a, b))); - else - return (npy_cpack@c@(a, npy_copysign@c@(b - b, b))); - } - /* - * The remaining special case (b is NaN) is handled just fine by - * the normal code path below. - */ - - /* Scale to avoid overflow. */ - if (npy_fabs@c@(a) >= THRESH || npy_fabs@c@(b) >= THRESH) { - a *= 0.25; - b *= 0.25; - scale = 1; - } else { - scale = 0; - } - - /* Algorithm 312, CACM vol 10, Oct 1967. */ - if (a >= 0) { - t = npy_sqrt@c@((a + npy_hypot@c@(a, b)) * 0.5); - result = npy_cpack@c@(t, b / (2 * t)); - } else { - t = npy_sqrt@c@((-a + npy_hypot@c@(a, b)) * 0.5); - result = npy_cpack@c@(npy_fabs@c@(b) / (2 * t), npy_copysign@c@(t, b)); - } - - /* Rescale. 
*/ - if (scale) - return (npy_cpack@c@(npy_creal@c@(result) * 2, npy_cimag@c@(result))); - else - return (result); -} -#undef THRESH -#endif - -#ifndef HAVE_CPOW@C@ -@ctype@ npy_cpow@c@ (@ctype@ x, @ctype@ y) -{ - @ctype@ b; - @type@ br, bi, yr, yi; - - yr = npy_creal@c@(y); - yi = npy_cimag@c@(y); - b = npy_clog@c@(x); - br = npy_creal@c@(b); - bi = npy_cimag@c@(b); - - return npy_cexp@c@(npy_cpack@c@(br * yr - bi * yi, br * yi + bi * yr)); -} -#endif - -#ifndef HAVE_CCOS@C@ -@ctype@ npy_ccos@c@(@ctype@ z) -{ - @type@ x, y; - x = npy_creal@c@(z); - y = npy_cimag@c@(z); - return npy_cpack@c@(npy_cos@c@(x) * npy_cosh@c@(y), -(npy_sin@c@(x) * npy_sinh@c@(y))); -} -#endif - -#ifndef HAVE_CSIN@C@ -@ctype@ npy_csin@c@(@ctype@ z) -{ - @type@ x, y; - x = npy_creal@c@(z); - y = npy_cimag@c@(z); - return npy_cpack@c@(npy_sin@c@(x) * npy_cosh@c@(y), npy_cos@c@(x) * npy_sinh@c@(y)); -} -#endif -/**end repeat**/ - -/*========================================================== - * Decorate all the functions which are available natively - *=========================================================*/ - -/**begin repeat - * #type = float, double, npy_longdouble# - * #ctype = npy_cfloat, npy_cdouble, npy_clongdouble# - * #c = f, , l# - * #C = F, , L# - */ - -/**begin repeat1 - * #kind = cabs,carg# - * #KIND = CABS,CARG# - */ -#ifdef HAVE_@KIND@@C@ -@type@ npy_@kind@@c@(@ctype@ z) -{ - __@ctype@_to_c99_cast z1 = {z}; - return @kind@@c@(z1.c99_z); -} -#endif -/**end repeat1**/ - -/**begin repeat1 - * #kind = cexp,clog,csqrt,ccos,csin# - * #KIND = CEXP,CLOG,CSQRT,CCOS,CSIN# - */ -#ifdef HAVE_@KIND@@C@ -@ctype@ npy_@kind@@c@(@ctype@ z) -{ - __@ctype@_to_c99_cast z1 = {z}; - __@ctype@_to_c99_cast ret; - ret.c99_z = @kind@@c@(z1.c99_z); - return ret.npy_z; -} -#endif -/**end repeat1**/ - -/**begin repeat1 - * #kind = cpow# - * #KIND = CPOW# - */ -#ifdef HAVE_@KIND@@C@ -@ctype@ npy_@kind@@c@(@ctype@ x, @ctype@ y) -{ - __@ctype@_to_c99_cast x1 = {x}; - __@ctype@_to_c99_cast y1 = {y}; - 
__@ctype@_to_c99_cast ret; - ret.c99_z = @kind@@c@(x1.c99_z, y1.c99_z); - return ret.npy_z; -} -#endif -/**end repeat1**/ - -/**end repeat**/ diff --git a/pythonPackages/numpy/numpy/core/src/npymath/npy_math_private.h b/pythonPackages/numpy/numpy/core/src/npymath/npy_math_private.h deleted file mode 100755 index 722d03f94b..0000000000 --- a/pythonPackages/numpy/numpy/core/src/npymath/npy_math_private.h +++ /dev/null @@ -1,481 +0,0 @@ -/* - * - * ==================================================== - * Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved. - * - * Developed at SunPro, a Sun Microsystems, Inc. business. - * Permission to use, copy, modify, and distribute this - * software is freely granted, provided that this notice - * is preserved. - * ==================================================== - */ - -/* - * from: @(#)fdlibm.h 5.1 93/09/24 - * $FreeBSD$ - */ - -#ifndef _NPY_MATH_PRIVATE_H_ -#define _NPY_MATH_PRIVATE_H_ - -#include -#include - -#include "npy_config.h" -#include "npy_fpmath.h" - -#include "numpy/npy_math.h" -#include "numpy/npy_cpu.h" -#include "numpy/npy_endian.h" -#include "numpy/npy_common.h" - -/* - * The original fdlibm code used statements like: - * n0 = ((*(int*)&one)>>29)^1; * index of high word * - * ix0 = *(n0+(int*)&x); * high word of x * - * ix1 = *((1-n0)+(int*)&x); * low word of x * - * to dig two 32 bit words out of the 64 bit IEEE floating point - * value. That is non-ANSI, and, moreover, the gcc instruction - * scheduler gets it wrong. We instead use the following macros. - * Unlike the original code, we determine the endianness at compile - * time, not at run time; I don't see much benefit to selecting - * endianness at run time. - */ - -/* - * A union which permits us to convert between a double and two 32 bit - * ints. - */ - -/* XXX: not really, but we already make this assumption elsewhere. 
Will have to - * fix this at some point */ -#define IEEE_WORD_ORDER NPY_BYTE_ORDER - -#if IEEE_WORD_ORDER == NPY_BIG_ENDIAN - -typedef union -{ - double value; - struct - { - npy_uint32 msw; - npy_uint32 lsw; - } parts; -} ieee_double_shape_type; - -#endif - -#if IEEE_WORD_ORDER == NPY_LITTLE_ENDIAN - -typedef union -{ - double value; - struct - { - npy_uint32 lsw; - npy_uint32 msw; - } parts; -} ieee_double_shape_type; - -#endif - -/* Get two 32 bit ints from a double. */ - -#define EXTRACT_WORDS(ix0,ix1,d) \ -do { \ - ieee_double_shape_type ew_u; \ - ew_u.value = (d); \ - (ix0) = ew_u.parts.msw; \ - (ix1) = ew_u.parts.lsw; \ -} while (0) - -/* Get the more significant 32 bit int from a double. */ - -#define GET_HIGH_WORD(i,d) \ -do { \ - ieee_double_shape_type gh_u; \ - gh_u.value = (d); \ - (i) = gh_u.parts.msw; \ -} while (0) - -/* Get the less significant 32 bit int from a double. */ - -#define GET_LOW_WORD(i,d) \ -do { \ - ieee_double_shape_type gl_u; \ - gl_u.value = (d); \ - (i) = gl_u.parts.lsw; \ -} while (0) - -/* Set the more significant 32 bits of a double from an int. */ - -#define SET_HIGH_WORD(d,v) \ -do { \ - ieee_double_shape_type sh_u; \ - sh_u.value = (d); \ - sh_u.parts.msw = (v); \ - (d) = sh_u.value; \ -} while (0) - -/* Set the less significant 32 bits of a double from an int. */ - -#define SET_LOW_WORD(d,v) \ -do { \ - ieee_double_shape_type sl_u; \ - sl_u.value = (d); \ - sl_u.parts.lsw = (v); \ - (d) = sl_u.value; \ -} while (0) - -/* Set a double from two 32 bit ints. */ - -#define INSERT_WORDS(d,ix0,ix1) \ -do { \ - ieee_double_shape_type iw_u; \ - iw_u.parts.msw = (ix0); \ - iw_u.parts.lsw = (ix1); \ - (d) = iw_u.value; \ -} while (0) - -/* - * A union which permits us to convert between a float and a 32 bit - * int. - */ - -typedef union -{ - float value; - /* FIXME: Assumes 32 bit int. */ - npy_uint32 word; -} ieee_float_shape_type; - -/* Get a 32 bit int from a float. 
*/ - -#define GET_FLOAT_WORD(i,d) \ -do { \ - ieee_float_shape_type gf_u; \ - gf_u.value = (d); \ - (i) = gf_u.word; \ -} while (0) - -/* Set a float from a 32 bit int. */ - -#define SET_FLOAT_WORD(d,i) \ -do { \ - ieee_float_shape_type sf_u; \ - sf_u.word = (i); \ - (d) = sf_u.value; \ -} while (0) - -#ifdef NPY_USE_C99_COMPLEX -#include -#endif - -/* - * Long double support - */ -#if defined(HAVE_LDOUBLE_INTEL_EXTENDED_12_BYTES_LE) - /* - * Intel extended 80 bits precision. Bit representation is - * | junk | s |eeeeeeeeeeeeeee|mmmmmmmm................mmmmmmm| - * | 16 bits| 1 bit | 15 bits | 64 bits | - * | a[2] | a[1] | a[0] | - * - * 16 stronger bits of a[2] are junk - */ - typedef npy_uint32 IEEEl2bitsrep_part; - -/* my machine */ - - union IEEEl2bitsrep { - npy_longdouble e; - IEEEl2bitsrep_part a[3]; - }; - - #define LDBL_MANL_INDEX 0 - #define LDBL_MANL_MASK 0xFFFFFFFF - #define LDBL_MANL_SHIFT 0 - - #define LDBL_MANH_INDEX 1 - #define LDBL_MANH_MASK 0xFFFFFFFF - #define LDBL_MANH_SHIFT 0 - - #define LDBL_EXP_INDEX 2 - #define LDBL_EXP_MASK 0x7FFF - #define LDBL_EXP_SHIFT 0 - - #define LDBL_SIGN_INDEX 2 - #define LDBL_SIGN_MASK 0x8000 - #define LDBL_SIGN_SHIFT 15 - - #define LDBL_NBIT 0x80000000 - - typedef npy_uint32 ldouble_man_t; - typedef npy_uint32 ldouble_exp_t; - typedef npy_uint32 ldouble_sign_t; -#elif defined(HAVE_LDOUBLE_INTEL_EXTENDED_16_BYTES_LE) - /* - * Intel extended 80 bits precision, 16 bytes alignment.. 
Bit representation is - * | junk | s |eeeeeeeeeeeeeee|mmmmmmmm................mmmmmmm| - * | 16 bits| 1 bit | 15 bits | 64 bits | - * | a[2] | a[1] | a[0] | - * - * a[3] and 16 stronger bits of a[2] are junk - */ - typedef npy_uint32 IEEEl2bitsrep_part; - - union IEEEl2bitsrep { - npy_longdouble e; - IEEEl2bitsrep_part a[4]; - }; - - #define LDBL_MANL_INDEX 0 - #define LDBL_MANL_MASK 0xFFFFFFFF - #define LDBL_MANL_SHIFT 0 - - #define LDBL_MANH_INDEX 1 - #define LDBL_MANH_MASK 0xFFFFFFFF - #define LDBL_MANH_SHIFT 0 - - #define LDBL_EXP_INDEX 2 - #define LDBL_EXP_MASK 0x7FFF - #define LDBL_EXP_SHIFT 0 - - #define LDBL_SIGN_INDEX 2 - #define LDBL_SIGN_MASK 0x8000 - #define LDBL_SIGN_SHIFT 15 - - #define LDBL_NBIT 0x80000000 - - typedef npy_uint32 ldouble_man_t; - typedef npy_uint32 ldouble_exp_t; - typedef npy_uint32 ldouble_sign_t; -#elif defined(HAVE_LDOUBLE_IEEE_DOUBLE_16_BYTES_BE) || \ - defined(HAVE_LDOUBLE_IEEE_DOUBLE_BE) - /* 64 bits IEEE double precision aligned on 16 bytes: used by ppc arch on - * Mac OS X */ - - /* - * IEEE double precision. Bit representation is - * | s |eeeeeeeeeee|mmmmmmmm................mmmmmmm| - * |1 bit| 11 bits | 52 bits | - * | a[0] | a[1] | - */ - typedef npy_uint32 IEEEl2bitsrep_part; - - union IEEEl2bitsrep { - npy_longdouble e; - IEEEl2bitsrep_part a[2]; - }; - - #define LDBL_MANL_INDEX 1 - #define LDBL_MANL_MASK 0xFFFFFFFF - #define LDBL_MANL_SHIFT 0 - - #define LDBL_MANH_INDEX 0 - #define LDBL_MANH_MASK 0x000FFFFF - #define LDBL_MANH_SHIFT 0 - - #define LDBL_EXP_INDEX 0 - #define LDBL_EXP_MASK 0x7FF00000 - #define LDBL_EXP_SHIFT 20 - - #define LDBL_SIGN_INDEX 0 - #define LDBL_SIGN_MASK 0x80000000 - #define LDBL_SIGN_SHIFT 31 - - #define LDBL_NBIT 0 - - typedef npy_uint32 ldouble_man_t; - typedef npy_uint32 ldouble_exp_t; - typedef npy_uint32 ldouble_sign_t; -#elif defined(HAVE_LDOUBLE_IEEE_DOUBLE_LE) - /* 64 bits IEEE double precision, Little Endian. */ - - /* - * IEEE double precision.
Bit representation is - * | s |eeeeeeeeeee|mmmmmmmm................mmmmmmm| - * |1 bit| 11 bits | 52 bits | - * | a[1] | a[0] | - */ - typedef npy_uint32 IEEEl2bitsrep_part; - - union IEEEl2bitsrep { - npy_longdouble e; - IEEEl2bitsrep_part a[2]; - }; - - #define LDBL_MANL_INDEX 0 - #define LDBL_MANL_MASK 0xFFFFFFFF - #define LDBL_MANL_SHIFT 0 - - #define LDBL_MANH_INDEX 1 - #define LDBL_MANH_MASK 0x000FFFFF - #define LDBL_MANH_SHIFT 0 - - #define LDBL_EXP_INDEX 1 - #define LDBL_EXP_MASK 0x7FF00000 - #define LDBL_EXP_SHIFT 20 - - #define LDBL_SIGN_INDEX 1 - #define LDBL_SIGN_MASK 0x80000000 - #define LDBL_SIGN_SHIFT 31 - - #define LDBL_NBIT 0x00000080 - - typedef npy_uint32 ldouble_man_t; - typedef npy_uint32 ldouble_exp_t; - typedef npy_uint32 ldouble_sign_t; -#elif defined(HAVE_LDOUBLE_IEEE_QUAD_BE) - /* - * IEEE quad precision, Big Endian. Bit representation is - * | s |eeeeeeeeeee|mmmmmmmm................mmmmmmm| - * |1 bit| 15 bits | 112 bits | - * | a[0] | a[1] | - */ - typedef npy_uint64 IEEEl2bitsrep_part; - - union IEEEl2bitsrep { - npy_longdouble e; - IEEEl2bitsrep_part a[2]; - }; - - #define LDBL_MANL_INDEX 1 - #define LDBL_MANL_MASK 0xFFFFFFFFFFFFFFFF - #define LDBL_MANL_SHIFT 0 - - #define LDBL_MANH_INDEX 0 - #define LDBL_MANH_MASK 0x0000FFFFFFFFFFFF - #define LDBL_MANH_SHIFT 0 - - #define LDBL_EXP_INDEX 0 - #define LDBL_EXP_MASK 0x7FFF000000000000 - #define LDBL_EXP_SHIFT 48 - - #define LDBL_SIGN_INDEX 0 - #define LDBL_SIGN_MASK 0x8000000000000000 - #define LDBL_SIGN_SHIFT 63 - - #define LDBL_NBIT 0 - - typedef npy_uint64 ldouble_man_t; - typedef npy_uint64 ldouble_exp_t; - typedef npy_uint32 ldouble_sign_t; -#elif defined(HAVE_LDOUBLE_IEEE_QUAD_LE) - /* - * IEEE quad precision, Little Endian. 
Bit representation is - * | s |eeeeeeeeeee|mmmmmmmm................mmmmmmm| - * |1 bit| 15 bits | 112 bits | - * | a[1] | a[0] | - */ - typedef npy_uint64 IEEEl2bitsrep_part; - - union IEEEl2bitsrep { - npy_longdouble e; - IEEEl2bitsrep_part a[2]; - }; - - #define LDBL_MANL_INDEX 0 - #define LDBL_MANL_MASK 0xFFFFFFFFFFFFFFFF - #define LDBL_MANL_SHIFT 0 - - #define LDBL_MANH_INDEX 1 - #define LDBL_MANH_MASK 0x0000FFFFFFFFFFFF - #define LDBL_MANH_SHIFT 0 - - #define LDBL_EXP_INDEX 1 - #define LDBL_EXP_MASK 0x7FFF000000000000 - #define LDBL_EXP_SHIFT 48 - - #define LDBL_SIGN_INDEX 1 - #define LDBL_SIGN_MASK 0x8000000000000000 - #define LDBL_SIGN_SHIFT 63 - - #define LDBL_NBIT 0 - - typedef npy_uint64 ldouble_man_t; - typedef npy_uint64 ldouble_exp_t; - typedef npy_uint32 ldouble_sign_t; -#endif - -#ifndef HAVE_LDOUBLE_DOUBLE_DOUBLE_BE -/* Get the sign bit of x. x should be of type IEEEl2bitsrep */ -#define GET_LDOUBLE_SIGN(x) \ - (((x).a[LDBL_SIGN_INDEX] & LDBL_SIGN_MASK) >> LDBL_SIGN_SHIFT) - -/* Set the sign bit of x to v. x should be of type IEEEl2bitsrep */ -#define SET_LDOUBLE_SIGN(x, v) \ - ((x).a[LDBL_SIGN_INDEX] = \ - ((x).a[LDBL_SIGN_INDEX] & ~LDBL_SIGN_MASK) | \ - (((IEEEl2bitsrep_part)(v) << LDBL_SIGN_SHIFT) & LDBL_SIGN_MASK)) - -/* Get the exp bits of x. x should be of type IEEEl2bitsrep */ -#define GET_LDOUBLE_EXP(x) \ - (((x).a[LDBL_EXP_INDEX] & LDBL_EXP_MASK) >> LDBL_EXP_SHIFT) - -/* Set the exp bit of x to v. x should be of type IEEEl2bitsrep */ -#define SET_LDOUBLE_EXP(x, v) \ - ((x).a[LDBL_EXP_INDEX] = \ - ((x).a[LDBL_EXP_INDEX] & ~LDBL_EXP_MASK) | \ - (((IEEEl2bitsrep_part)(v) << LDBL_EXP_SHIFT) & LDBL_EXP_MASK)) - -/* Get the manl bits of x. x should be of type IEEEl2bitsrep */ -#define GET_LDOUBLE_MANL(x) \ - (((x).a[LDBL_MANL_INDEX] & LDBL_MANL_MASK) >> LDBL_MANL_SHIFT) - -/* Set the manl bit of x to v. 
x should be of type IEEEl2bitsrep */ -#define SET_LDOUBLE_MANL(x, v) \ - ((x).a[LDBL_MANL_INDEX] = \ - ((x).a[LDBL_MANL_INDEX] & ~LDBL_MANL_MASK) | \ - (((IEEEl2bitsrep_part)(v) << LDBL_MANL_SHIFT) & LDBL_MANL_MASK)) - -/* Get the manh bits of x. x should be of type IEEEl2bitsrep */ -#define GET_LDOUBLE_MANH(x) \ - (((x).a[LDBL_MANH_INDEX] & LDBL_MANH_MASK) >> LDBL_MANH_SHIFT) - -/* Set the manh bit of x to v. x should be of type IEEEl2bitsrep */ -#define SET_LDOUBLE_MANH(x, v) \ - ((x).a[LDBL_MANH_INDEX] = \ - ((x).a[LDBL_MANH_INDEX] & ~LDBL_MANH_MASK) | \ - (((IEEEl2bitsrep_part)(v) << LDBL_MANH_SHIFT) & LDBL_MANH_MASK)) - -#endif /* #ifndef HAVE_LDOUBLE_DOUBLE_DOUBLE_BE */ - -/* - * Those unions are used to convert a pointer of npy_cdouble to native C99 - * complex or our own complex type independently on whether C99 complex - * support is available - */ -#ifdef NPY_USE_C99_COMPLEX -typedef union { - npy_cdouble npy_z; - complex double c99_z; -} __npy_cdouble_to_c99_cast; - -typedef union { - npy_cfloat npy_z; - complex float c99_z; -} __npy_cfloat_to_c99_cast; - -typedef union { - npy_clongdouble npy_z; - complex long double c99_z; -} __npy_clongdouble_to_c99_cast; -#else -typedef union { - npy_cdouble npy_z; - npy_cdouble c99_z; -} __npy_cdouble_to_c99_cast; - -typedef union { - npy_cfloat npy_z; - npy_cfloat c99_z; -} __npy_cfloat_to_c99_cast; - -typedef union { - npy_clongdouble npy_z; - npy_clongdouble c99_z; -} __npy_clongdouble_to_c99_cast; -#endif - -#endif /* !_NPY_MATH_PRIVATE_H_ */ diff --git a/pythonPackages/numpy/numpy/core/src/private/npy_config.h b/pythonPackages/numpy/numpy/core/src/private/npy_config.h deleted file mode 100755 index e164014960..0000000000 --- a/pythonPackages/numpy/numpy/core/src/private/npy_config.h +++ /dev/null @@ -1,34 +0,0 @@ -#ifndef _NPY_NPY_CONFIG_H_ -#define _NPY_NPY_CONFIG_H_ - -#include "config.h" - -/* Disable broken MS math functions */ -#if defined(_MSC_VER) || defined(__MINGW32_VERSION) -#undef HAVE_ATAN2 -#undef 
HAVE_HYPOT -#endif - -/* Disable broken Sun Workshop Pro math functions */ -#ifdef __SUNPRO_C -#undef HAVE_ATAN2 -#endif - -/* - * On Mac OS X, because there is only one configuration stage for all the archs - * in universal builds, any macro which depends on the arch needs to be - * harcoded - */ -#ifdef __APPLE__ - #undef SIZEOF_LONG - #undef SIZEOF_PY_INTPTR_T - - #ifdef __LP64__ - #define SIZEOF_LONG 8 - #define SIZEOF_PY_INTPTR_T 8 - #else - #define SIZEOF_LONG 4 - #define SIZEOF_PY_INTPTR_T 4 - #endif -#endif -#endif diff --git a/pythonPackages/numpy/numpy/core/src/private/npy_fpmath.h b/pythonPackages/numpy/numpy/core/src/private/npy_fpmath.h deleted file mode 100755 index 92338e4c7f..0000000000 --- a/pythonPackages/numpy/numpy/core/src/private/npy_fpmath.h +++ /dev/null @@ -1,47 +0,0 @@ -#ifndef _NPY_NPY_FPMATH_H_ -#define _NPY_NPY_FPMATH_H_ - -#include "npy_config.h" - -#include "numpy/npy_os.h" -#include "numpy/npy_cpu.h" -#include "numpy/npy_common.h" - -#ifdef NPY_OS_DARWIN - /* This hardcoded logic is fragile, but universal builds makes it - * difficult to detect arch-specific features */ - - /* MAC OS X < 10.4 and gcc < 4 does not support proper long double, and - * is the same as double on those platforms */ - #if NPY_BITSOF_LONGDOUBLE == NPY_BITSOF_DOUBLE - /* This assumes that FPU and ALU have the same endianness */ - #if NPY_BYTE_ORDER == NPY_LITTLE_ENDIAN - #define HAVE_LDOUBLE_IEEE_DOUBLE_LE - #elif NPY_BYTE_ORDER == NPY_BIG_ENDIAN - #define HAVE_LDOUBLE_IEEE_DOUBLE_BE - #else - #error Endianness undefined ? 
- #endif - #else - #if defined(NPY_CPU_X86) - #define HAVE_LDOUBLE_INTEL_EXTENDED_12_BYTES_LE - #elif defined(NPY_CPU_AMD64) - #define HAVE_LDOUBLE_INTEL_EXTENDED_16_BYTES_LE - #elif defined(NPY_CPU_PPC) || defined(NPY_CPU_PPC64) - #define HAVE_LDOUBLE_IEEE_DOUBLE_16_BYTES_BE - #endif - #endif -#endif - -#if !(defined(HAVE_LDOUBLE_IEEE_QUAD_BE) || \ - defined(HAVE_LDOUBLE_IEEE_QUAD_LE) || \ - defined(HAVE_LDOUBLE_IEEE_DOUBLE_LE) || \ - defined(HAVE_LDOUBLE_IEEE_DOUBLE_BE) || \ - defined(HAVE_LDOUBLE_IEEE_DOUBLE_16_BYTES_BE) || \ - defined(HAVE_LDOUBLE_INTEL_EXTENDED_16_BYTES_LE) || \ - defined(HAVE_LDOUBLE_INTEL_EXTENDED_12_BYTES_LE) || \ - defined(HAVE_LDOUBLE_DOUBLE_DOUBLE_BE)) - #error No long double representation defined -#endif - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/scalarmathmodule.c.src b/pythonPackages/numpy/numpy/core/src/scalarmathmodule.c.src deleted file mode 100755 index 99182d83f7..0000000000 --- a/pythonPackages/numpy/numpy/core/src/scalarmathmodule.c.src +++ /dev/null @@ -1,1526 +0,0 @@ -/* -*- c -*- */ - -/* The purpose of this module is to add faster math for array scalars - that does not go through the ufunc machinery - - but still supports error-modes. -*/ - -#include "Python.h" -#include "numpy/noprefix.h" -#include "numpy/ufuncobject.h" -#include "numpy/arrayscalars.h" - -#include "numpy/npy_3kcompat.h" - -/** numarray adapted routines.... 
**/ - -#if SIZEOF_LONGLONG == 64 || SIZEOF_LONGLONG == 128 -static int ulonglong_overflow(ulonglong a, ulonglong b) -{ - ulonglong ah, al, bh, bl, w, x, y, z; - -#if SIZEOF_LONGLONG == 64 - ah = (a >> 32); - al = (a & 0xFFFFFFFFL); - bh = (b >> 32); - bl = (b & 0xFFFFFFFFL); -#elif SIZEOF_LONGLONG == 128 - ah = (a >> 64); - al = (a & 0xFFFFFFFFFFFFFFFFL); - bh = (b >> 64); - bl = (b & 0xFFFFFFFFFFFFFFFFL); -#else - ah = al = bh = bl = 0; -#endif - - /* 128-bit product: z*2**64 + (x+y)*2**32 + w */ - w = al*bl; - x = bh*al; - y = ah*bl; - z = ah*bh; - - /* *c = ((x + y)<<32) + w; */ -#if SIZEOF_LONGLONG == 64 - return z || (x>>32) || (y>>32) || - (((x & 0xFFFFFFFFL) + (y & 0xFFFFFFFFL) + (w >> 32)) >> 32); -#elif SIZEOF_LONGLONG == 128 - return z || (x>>64) || (y>>64) || - (((x & 0xFFFFFFFFFFFFFFFFL) + (y & 0xFFFFFFFFFFFFFFFFL) + (w >> 64)) >> 64); -#else - return 0; -#endif - -} -#else -static int ulonglong_overflow(ulonglong NPY_UNUSED(a), ulonglong NPY_UNUSED(b)) -{ - return 0; -} -#endif - -static int slonglong_overflow(longlong a0, longlong b0) -{ - ulonglong a, b; - ulonglong ah, al, bh, bl, w, x, y, z; - - /* Convert to non-negative quantities */ - if (a0 < 0) { - a = -a0; - } - else { - a = a0; - } - if (b0 < 0) { - b = -b0; - } - else { - b = b0; - } - - -#if SIZEOF_LONGLONG == 64 - ah = (a >> 32); - al = (a & 0xFFFFFFFFL); - bh = (b >> 32); - bl = (b & 0xFFFFFFFFL); -#elif SIZEOF_LONGLONG == 128 - ah = (a >> 64); - al = (a & 0xFFFFFFFFFFFFFFFFL); - bh = (b >> 64); - bl = (b & 0xFFFFFFFFFFFFFFFFL); -#else - ah = al = bh = bl = 0; -#endif - - w = al*bl; - x = bh*al; - y = ah*bl; - z = ah*bh; - - /* - ulonglong c = ((x + y)<<32) + w; - if ((a0 < 0) ^ (b0 < 0)) - *c = -c; - else - *c = c - */ - -#if SIZEOF_LONGLONG == 64 - return z || (x>>31) || (y>>31) || - (((x & 0xFFFFFFFFL) + (y & 0xFFFFFFFFL) + (w >> 32)) >> 31); -#elif SIZEOF_LONGLONG == 128 - return z || (x>>63) || (y>>63) || - (((x & 0xFFFFFFFFFFFFFFFFL) + (y & 0xFFFFFFFFFFFFFFFFL) + (w >> 64)) >> 63); 
-#else - return 0; -#endif -} -/** end direct numarray code **/ - - -/* Basic operations: - * - * BINARY: - * - * add, subtract, multiply, divide, remainder, divmod, power, - * floor_divide, true_divide - * - * lshift, rshift, and, or, xor (integers only) - * - * UNARY: - * - * negative, positive, absolute, nonzero, invert, int, long, float, oct, hex - * - */ - -/**begin repeat - * #name = byte, short, int, long, longlong# - */ -static void -@name@_ctype_add(@name@ a, @name@ b, @name@ *out) { - *out = a + b; - if ((*out^a) >= 0 || (*out^b) >= 0) { - return; - } - generate_overflow_error(); - return; -} -static void -@name@_ctype_subtract(@name@ a, @name@ b, @name@ *out) { - *out = a - b; - if ((*out^a) >= 0 || (*out^~b) >= 0) { - return; - } - generate_overflow_error(); - return; -} -/**end repeat**/ - -/**begin repeat - * #name = ubyte, ushort, uint, ulong, ulonglong# - */ -static void -@name@_ctype_add(@name@ a, @name@ b, @name@ *out) { - *out = a + b; - if (*out >= a && *out >= b) { - return; - } - generate_overflow_error(); - return; -} -static void -@name@_ctype_subtract(@name@ a, @name@ b, @name@ *out) { - *out = a - b; - if (a >= b) { - return; - } - generate_overflow_error(); - return; -} -/**end repeat**/ - -#ifndef SIZEOF_BYTE -#define SIZEOF_BYTE 1 -#endif - -/**begin repeat - * - * #name = byte, ubyte, short, ushort, int, uint, long, ulong# - * #big = (int,uint)*2, (longlong,ulonglong)*2# - * #NAME = BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG# - * #SIZENAME = BYTE*2, SHORT*2, INT*2, LONG*2# - * #SIZE = INT*4,LONGLONG*4# - * #neg = (1,0)*4# - */ -#if SIZEOF_@SIZE@ > SIZEOF_@SIZENAME@ -static void -@name@_ctype_multiply(@name@ a, @name@ b, @name@ *out) { - @big@ temp; - temp = ((@big@) a) * ((@big@) b); - *out = (@name@) temp; -#if @neg@ - if (temp > MAX_@NAME@ || temp < MIN_@NAME@) -#else - if (temp > MAX_@NAME@) -#endif - generate_overflow_error(); - return; -} -#endif -/**end repeat**/ - -/**begin repeat - * - * #name = int, uint, long, ulong, 
longlong, ulonglong# - * #SIZE = INT*2, LONG*2, LONGLONG*2# - * #char = (s,u)*3# - */ -#if SIZEOF_LONGLONG == SIZEOF_@SIZE@ -static void -@name@_ctype_multiply(@name@ a, @name@ b, @name@ *out) { - *out = a * b; - if (@char@longlong_overflow(a, b)) { - generate_overflow_error(); - } - return; -} -#endif -/**end repeat**/ - -/**begin repeat - * - * #name = byte, ubyte, short, ushort, int, uint, long, - * ulong, longlong, ulonglong# - * #neg = (1,0)*5# - */ -static void -@name@_ctype_divide(@name@ a, @name@ b, @name@ *out) { - if (b == 0) { - generate_divbyzero_error(); - *out = 0; - } -#if @neg@ - else if (b == -1 && a < 0 && a == -a) { - generate_overflow_error(); - *out = a / b; - } -#endif - else { -#if @neg@ - @name@ tmp; - tmp = a / b; - if (((a > 0) != (b > 0)) && (a % b != 0)) { - tmp--; - } - *out = tmp; -#else - *out = a / b; -#endif - } -} - -#define @name@_ctype_floor_divide @name@_ctype_divide -static void -@name@_ctype_remainder(@name@ a, @name@ b, @name@ *out) { - if (a == 0 || b == 0) { - if (b == 0) generate_divbyzero_error(); - *out = 0; - return; - } -#if @neg@ - else if ((a > 0) == (b > 0)) { - *out = a % b; - } - else { - /* handled like Python does */ - *out = a % b; - if (*out) *out += b; - } -#else - *out = a % b; -#endif -} -/**end repeat**/ - -/**begin repeat - * - * #name = byte, ubyte, short, ushort, int, uint, long, - * ulong, longlong, ulonglong# - * #otyp = float*4, double*6# - */ -#define @name@_ctype_true_divide(a, b, out) \ - *(out) = ((@otyp@) (a)) / ((@otyp@) (b)); -/**end repeat**/ - -/* b will always be positive in this call */ -/**begin repeat - * - * #name = byte, ubyte, short, ushort, int, uint, long, ulong, longlong, ulonglong# - * #upc = BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, LONGLONG, ULONGLONG# - */ -static void -@name@_ctype_power(@name@ a, @name@ b, @name@ *out) { - @name@ temp, ix, mult; - /* code from Python's intobject.c, with overflow checking removed. 
*/ - temp = a; - ix = 1; - while (b > 0) { - if (b & 1) { - @name@_ctype_multiply(ix, temp, &mult); - ix = mult; - if (temp == 0) { - break; - } - } - b >>= 1; /* Shift exponent down by 1 bit */ - if (b==0) { - break; - } - /* Square the value of temp */ - @name@_ctype_multiply(temp, temp, &mult); - temp = mult; - } - *out = ix; -} -/**end repeat**/ - - - -/* QUESTION: Should we check for overflow / underflow in (l,r)shift? */ - -/**begin repeat - * #name = (byte,ubyte,short,ushort,int,uint,long,ulong,longlong,ulonglong)*5# - * #oper = and*10, xor*10, or*10, lshift*10, rshift*10# - * #op = &*10, ^*10, |*10, <<*10, >>*10# - */ -#define @name@_ctype_@oper@(arg1, arg2, out) *(out) = (arg1) @op@ (arg2) -/**end repeat**/ - -/**begin repeat - * #name = float, double, longdouble# - */ -static @name@ (*_basic_@name@_floor)(@name@); -static @name@ (*_basic_@name@_sqrt)(@name@); -static @name@ (*_basic_@name@_fmod)(@name@, @name@); -#define @name@_ctype_add(a, b, outp) *(outp) = a + b -#define @name@_ctype_subtract(a, b, outp) *(outp) = a - b -#define @name@_ctype_multiply(a, b, outp) *(outp) = a * b -#define @name@_ctype_divide(a, b, outp) *(outp) = a / b -#define @name@_ctype_true_divide @name@_ctype_divide -#define @name@_ctype_floor_divide(a, b, outp) \ - *(outp) = _basic_@name@_floor((a) / (b)) -/**end repeat**/ - -/**begin repeat - * #name = cfloat, cdouble, clongdouble# - * #rtype = float, double, longdouble# - * #c = f,,l# - */ -#define @name@_ctype_add(a, b, outp) do{ \ - (outp)->real = (a).real + (b).real; \ - (outp)->imag = (a).imag + (b).imag; \ - } while(0) -#define @name@_ctype_subtract(a, b, outp) do{ \ - (outp)->real = (a).real - (b).real; \ - (outp)->imag = (a).imag - (b).imag; \ - } while(0) -#define @name@_ctype_multiply(a, b, outp) do{ \ - (outp)->real = (a).real * (b).real - (a).imag * (b).imag; \ - (outp)->imag = (a).real * (b).imag + (a).imag * (b).real; \ - } while(0) -#define @name@_ctype_divide(a, b, outp) do{ \ - @rtype@ d = (b).real*(b).real + 
(b).imag*(b).imag; \ - (outp)->real = ((a).real*(b).real + (a).imag*(b).imag)/d; \ - (outp)->imag = ((a).imag*(b).real - (a).real*(b).imag)/d; \ - } while(0) -#define @name@_ctype_true_divide @name@_ctype_divide -#define @name@_ctype_floor_divide(a, b, outp) do { \ - (outp)->real = _basic_@rtype@_floor \ - (((a).real*(b).real + (a).imag*(b).imag) / \ - ((b).real*(b).real + (b).imag*(b).imag)); \ - (outp)->imag = 0; \ - } while(0) -/**end repeat**/ - -/**begin repeat - * #name = float, double, longdouble# - */ -static void -@name@_ctype_remainder(@name@ a, @name@ b, @name@ *out) { - @name@ mod; - mod = _basic_@name@_fmod(a, b); - if (mod && (((b < 0) != (mod < 0)))) { - mod += b; - } - *out = mod; -} -/**end repeat**/ - - - -/**begin repeat - * #name = byte, ubyte, short, ushort, int, uint, long, ulong, longlong, - * ulonglong, float, double, longdouble, cfloat, cdouble, clongdouble# - */ -#define @name@_ctype_divmod(a, b, out, out2) { \ - @name@_ctype_floor_divide(a, b, out); \ - @name@_ctype_remainder(a, b, out2); \ - } -/**end repeat**/ - -/**begin repeat - * #name = float, double, longdouble# - */ -static @name@ (*_basic_@name@_pow)(@name@ a, @name@ b); -static void -@name@_ctype_power(@name@ a, @name@ b, @name@ *out) { - *out = _basic_@name@_pow(a, b); -} -/**end repeat**/ - -/**begin repeat - * #name = byte, ubyte, short, ushort, int, uint, long, ulong, longlong, - * ulonglong, float, double, longdouble# - * #uns = (0,1)*5,0*3# - */ -static void -@name@_ctype_negative(@name@ a, @name@ *out) -{ -#if @uns@ - generate_overflow_error(); -#endif - *out = -a; -} -/**end repeat**/ - - -/**begin repeat - * #name = cfloat, cdouble, clongdouble# - */ -static void -@name@_ctype_negative(@name@ a, @name@ *out) -{ - out->real = -a.real; - out->imag = -a.imag; -} -/**end repeat**/ - -/**begin repeat - * #name = byte, ubyte, short, ushort, int, uint, long, ulong, longlong, - * ulonglong, float, double, longdouble# - */ -static void -@name@_ctype_positive(@name@ a, @name@ 
*out) -{ - *out = a; -} -/**end repeat**/ - -/* - * Get the nc_powf, nc_pow, and nc_powl functions from - * the data area of the power ufunc in umathmodule. - */ - -/**begin repeat - * #name = cfloat, cdouble, clongdouble# - */ -static void -@name@_ctype_positive(@name@ a, @name@ *out) -{ - out->real = a.real; - out->imag = a.imag; -} -static void (*_basic_@name@_pow)(@name@ *, @name@ *, @name@ *); -static void -@name@_ctype_power(@name@ a, @name@ b, @name@ *out) -{ - _basic_@name@_pow(&a, &b, out); -} -/**end repeat**/ - - -/**begin repeat - * #name = ubyte, ushort, uint, ulong, ulonglong# - */ -#define @name@_ctype_absolute @name@_ctype_positive -/**end repeat**/ - - -/**begin repeat - * #name = byte, short, int, long, longlong, float, double, longdouble# - */ -static void -@name@_ctype_absolute(@name@ a, @name@ *out) -{ - *out = (a < 0 ? -a : a); -} -/**end repeat**/ - -/**begin repeat - * #name = cfloat, cdouble, clongdouble# - * #rname = float, double, longdouble# - */ -static void -@name@_ctype_absolute(@name@ a, @rname@ *out) -{ - *out = _basic_@rname@_sqrt(a.real*a.real + a.imag*a.imag); -} -/**end repeat**/ - -/**begin repeat - * #name = byte, ubyte, short, ushort, int, uint, long, - * ulong, longlong, ulonglong# - */ -#define @name@_ctype_invert(a, out) *(out) = ~a; -/**end repeat**/ - -/*** END OF BASIC CODE **/ - - -/* The general strategy for commutative binary operators is to - * - * 1) Convert the types to the common type if both are scalars (0 return) - * 2) If both are not scalars use ufunc machinery (-2 return) - * 3) If both are scalars but cannot be cast to the right type - * return NotImplemented (-1 return) - * - * 4) Perform the function on the C-type. - * 5) If an error condition occurred, check to see - * what the current error-handling is and handle the error. - * - * 6) Construct and return the output scalar.
- */ - -/**begin repeat - * #name = byte, ubyte, short, ushort, int, uint, long, ulong, longlong, - * ulonglong, float, double, longdouble, cfloat, cdouble, clongdouble# - * #Name = Byte, UByte, Short, UShort, Int, UInt, Long, ULong, LongLong, - * ULongLong, Float, Double, LongDouble, CFloat, CDouble, CLongDouble# - * #NAME = BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG, LONGLONG, - * ULONGLONG, FLOAT, DOUBLE, LONGDOUBLE, CFLOAT, CDOUBLE, CLONGDOUBLE# - */ - -static int -_@name@_convert_to_ctype(PyObject *a, @name@ *arg1) -{ - PyObject *temp; - - if (PyArray_IsScalar(a, @Name@)) { - *arg1 = PyArrayScalar_VAL(a, @Name@); - return 0; - } - else if (PyArray_IsScalar(a, Generic)) { - PyArray_Descr *descr1; - - if (!PyArray_IsScalar(a, Number)) { - return -1; - } - descr1 = PyArray_DescrFromTypeObject((PyObject *)Py_TYPE(a)); - if (PyArray_CanCastSafely(descr1->type_num, PyArray_@NAME@)) { - PyArray_CastScalarDirect(a, descr1, arg1, PyArray_@NAME@); - Py_DECREF(descr1); - return 0; - } - else { - Py_DECREF(descr1); - return -1; - } - } - else if (PyArray_GetPriority(a, PyArray_SUBTYPE_PRIORITY) > - PyArray_SUBTYPE_PRIORITY) { - return -2; - } - else if ((temp = PyArray_ScalarFromObject(a)) != NULL) { - int retval = _@name@_convert_to_ctype(temp, arg1); - - Py_DECREF(temp); - return retval; - } - return -2; -} - -/**end repeat**/ - - -/**begin repeat - * #name = byte, ubyte, short, ushort, int, uint, long, ulong, - * longlong, ulonglong, float, double, cfloat, cdouble# - */ -static int -_@name@_convert2_to_ctypes(PyObject *a, @name@ *arg1, - PyObject *b, @name@ *arg2) -{ - int ret; - ret = _@name@_convert_to_ctype(a, arg1); - if (ret < 0) { - return ret; - } - ret = _@name@_convert_to_ctype(b, arg2); - if (ret < 0) { - return ret; - } - return 0; -} -/**end repeat**/ - -/**begin repeat - * #name = longdouble, clongdouble# - */ - -static int -_@name@_convert2_to_ctypes(PyObject *a, @name@ *arg1, - PyObject *b, @name@ *arg2) -{ - int ret; - ret = 
_@name@_convert_to_ctype(a, arg1); - if (ret < 0) { - return ret; - } - ret = _@name@_convert_to_ctype(b, arg2); - if (ret == -2) { - ret = -3; - } - if (ret < 0) { - return ret; - } - return 0; -} - -/**end repeat**/ - - -#if defined(NPY_PY3K) -#define CODEGEN_SKIP_divide_FLAG -#endif - -/**begin repeat - #name=(byte,ubyte,short,ushort,int,uint,long,ulong,longlong,ulonglong)*13, (float, double, longdouble, cfloat, cdouble, clongdouble)*6, (float, double, longdouble)*2# - #Name=(Byte, UByte, Short, UShort, Int, UInt, Long, ULong, LongLong, ULongLong)*13, (Float, Double, LongDouble, CFloat, CDouble, CLongDouble)*6, (Float, Double, LongDouble)*2# - #oper=add*10, subtract*10, multiply*10, divide*10, remainder*10, divmod*10, floor_divide*10, lshift*10, rshift*10, and*10, or*10, xor*10, true_divide*10, add*6, subtract*6, multiply*6, divide*6, floor_divide*6, true_divide*6, divmod*3, remainder*3# - #fperr=1*70,0*50,1*52# - #twoout=0*50,1*10,0*106,1*3,0*3# - #otyp=(byte,ubyte,short,ushort,int,uint,long,ulong,longlong,ulonglong)*12, float*4, double*6, (float, double, longdouble, cfloat, cdouble, clongdouble)*6, (float, double, longdouble)*2# - #OName=(Byte, UByte, Short, UShort, Int, UInt, Long, ULong, LongLong, ULongLong)*12, Float*4, Double*6, (Float, Double, LongDouble, CFloat, CDouble, CLongDouble)*6, (Float, Double, LongDouble)*2# -**/ - -#if !defined(CODEGEN_SKIP_@oper@_FLAG) - -static PyObject * -@name@_@oper@(PyObject *a, PyObject *b) -{ - PyObject *ret; - @name@ arg1, arg2; - @otyp@ out; -#if @twoout@ - @otyp@ out2; - PyObject *obj; -#endif - -#if @fperr@ - int retstatus; - int first; -#endif - - switch(_@name@_convert2_to_ctypes(a, &arg1, b, &arg2)) { - case 0: - break; - case -1: - /* one of them can't be cast safely must be mixed-types*/ - return PyArray_Type.tp_as_number->nb_@oper@(a,b); - case -2: - /* use default handling */ - if (PyErr_Occurred()) { - return NULL; - } - return PyGenericArrType_Type.tp_as_number->nb_@oper@(a,b); - case -3: - /* - * special 
case for longdouble and clongdouble - * because they have a recursive getitem in their dtype - */ - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - } - -#if @fperr@ - PyUFunc_clearfperr(); -#endif - - /* - * here we do the actual calculation with arg1 and arg2 - * as a function call. - */ -#if @twoout@ - @name@_ctype_@oper@(arg1, arg2, &out, &out2); -#else - @name@_ctype_@oper@(arg1, arg2, &out); -#endif - -#if @fperr@ - /* Check status flag. If it is set, then look up what to do */ - retstatus = PyUFunc_getfperr(); - if (retstatus) { - int bufsize, errmask; - PyObject *errobj; - - if (PyUFunc_GetPyValues("@name@_scalars", &bufsize, &errmask, - &errobj) < 0) { - return NULL; - } - first = 1; - if (PyUFunc_handlefperr(errmask, errobj, retstatus, &first)) { - Py_XDECREF(errobj); - return NULL; - } - Py_XDECREF(errobj); - } -#endif - - -#if @twoout@ - ret = PyTuple_New(2); - if (ret == NULL) { - return NULL; - } - obj = PyArrayScalar_New(@OName@); - if (obj == NULL) { - Py_DECREF(ret); - return NULL; - } - PyArrayScalar_ASSIGN(obj, @OName@, out); - PyTuple_SET_ITEM(ret, 0, obj); - obj = PyArrayScalar_New(@OName@); - if (obj == NULL) { - Py_DECREF(ret); - return NULL; - } - PyArrayScalar_ASSIGN(obj, @OName@, out2); - PyTuple_SET_ITEM(ret, 1, obj); -#else - ret = PyArrayScalar_New(@OName@); - if (ret == NULL) { - return NULL; - } - PyArrayScalar_ASSIGN(ret, @OName@, out); -#endif - return ret; -} -#endif - -/**end repeat**/ - -#undef CODEGEN_SKIP_divide_FLAG - -/**begin repeat - #name=byte,ubyte,short,ushort,int,uint,long,ulong,longlong,ulonglong,float, double, longdouble, cfloat, cdouble, clongdouble# - #Name=Byte, UByte, Short, UShort, Int, UInt, Long, ULong, LongLong, ULongLong, Float, Double, LongDouble, CFloat, CDouble, CLongDouble# - #otyp=float*4, double*6, float, double, longdouble, cfloat, cdouble, clongdouble# - #OName=Float*4, Double*6, Float, Double, LongDouble, CFloat, CDouble, CLongDouble# - #isint=(1,0)*5,0*6# - #cmplx=0*13,1*3# -**/ - -static 
PyObject * -@name@_power(PyObject *a, PyObject *b, PyObject *NPY_UNUSED(c)) -{ - PyObject *ret; - @name@ arg1, arg2; - int retstatus; - int first; - -#if @cmplx@ - @name@ out = {0,0}; - @otyp@ out1; - out1.real = out.imag = 0; -#else - @name@ out = 0; - @otyp@ out1=0; -#endif - - switch(_@name@_convert2_to_ctypes(a, &arg1, b, &arg2)) { - case 0: - break; - case -1: - /* can't cast both safely mixed-types? */ - return PyArray_Type.tp_as_number->nb_power(a,b,NULL); - case -2: - /* use default handling */ - if (PyErr_Occurred()) { - return NULL; - } - return PyGenericArrType_Type.tp_as_number->nb_power(a,b,NULL); - case -3: - /* - * special case for longdouble and clongdouble - * because they have a recursive getitem in their dtype - */ - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - } - - PyUFunc_clearfperr(); - - /* - * here we do the actual calculation with arg1 and arg2 - * as a function call. - */ -#if @cmplx@ - if (arg2.real == 0 && arg2.imag == 0) { - out1.real = out.real = 1; - out1.imag = out.imag = 0; - } -#else - if (arg2 == 0) { - out1 = out = 1; - } -#endif -#if @isint@ - else if (arg2 < 0) { - @name@_ctype_power(arg1, -arg2, &out); - out1 = (@otyp@) (1.0 / out); - } -#endif - else { - @name@_ctype_power(arg1, arg2, &out); - } - - /* Check status flag. 
If it is set, then look up what to do */ - retstatus = PyUFunc_getfperr(); - if (retstatus) { - int bufsize, errmask; - PyObject *errobj; - - if (PyUFunc_GetPyValues("@name@_scalars", &bufsize, &errmask, - &errobj) < 0) { - return NULL; - } - first = 1; - if (PyUFunc_handlefperr(errmask, errobj, retstatus, &first)) { - Py_XDECREF(errobj); - return NULL; - } - Py_XDECREF(errobj); - } - -#if @isint@ - if (arg2 < 0) { - ret = PyArrayScalar_New(@OName@); - if (ret == NULL) { - return NULL; - } - PyArrayScalar_ASSIGN(ret, @OName@, out1); - } - else { - ret = PyArrayScalar_New(@Name@); - if (ret == NULL) { - return NULL; - } - PyArrayScalar_ASSIGN(ret, @Name@, out); - } -#else - ret = PyArrayScalar_New(@Name@); - if (ret == NULL) { - return NULL; - } - PyArrayScalar_ASSIGN(ret, @Name@, out); -#endif - - return ret; -} -/**end repeat**/ - - -/**begin repeat - * #name = (cfloat,cdouble,clongdouble)*2# - * #oper = divmod*3,remainder*3# - */ -#define @name@_@oper@ NULL -/**end repeat**/ - -/**begin repeat - * #name = (float,double,longdouble,cfloat,cdouble,clongdouble)*5# - * #oper = lshift*6, rshift*6, and*6, or*6, xor*6# - */ -#define @name@_@oper@ NULL -/**end repeat**/ - - -/**begin repeat - * #name=(byte,ubyte,short,ushort,int,uint,long,ulong,longlong,ulonglong,float,double,longdouble,cfloat,cdouble,clongdouble)*3, byte,ubyte,short,ushort,int,uint,long,ulong,longlong,ulonglong# - * #otyp=(byte,ubyte,short,ushort,int,uint,long,ulong,longlong,ulonglong,float,double,longdouble,cfloat,cdouble,clongdouble)*2,byte,ubyte,short,ushort,int,uint,long,ulong,longlong,ulonglong,float,double,longdouble,float,double,longdouble,byte,ubyte,short,ushort,int,uint,long,ulong,longlong,ulonglong# - * #OName=(Byte, UByte, Short, UShort, Int, UInt, Long, ULong, LongLong, ULongLong, Float, Double, LongDouble, CFloat, CDouble, CLongDouble)*2, Byte, UByte, Short, UShort, Int, UInt, Long, ULong, LongLong, ULongLong, Float, Double, LongDouble, Float, Double, LongDouble, Byte, UByte, Short, UShort, 
Int, UInt, Long, ULong, LongLong, ULongLong# - * #oper=negative*16, positive*16, absolute*16, invert*10# - */ -static PyObject * -@name@_@oper@(PyObject *a) -{ - @name@ arg1; - @otyp@ out; - PyObject *ret; - - switch(_@name@_convert_to_ctype(a, &arg1)) { - case 0: - break; - case -1: - /* can't cast both safely use different add function */ - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - case -2: - /* use default handling */ - if (PyErr_Occurred()) { - return NULL; - } - return PyGenericArrType_Type.tp_as_number->nb_@oper@(a); - } - - /* - * here we do the actual calculation with arg1 and arg2 - * make it a function call. - */ - - @name@_ctype_@oper@(arg1, &out); - - ret = PyArrayScalar_New(@OName@); - PyArrayScalar_ASSIGN(ret, @OName@, out); - - return ret; -} -/**end repeat**/ - -/**begin repeat - * #name = float, double, longdouble, cfloat, cdouble, clongdouble# - */ -#define @name@_invert NULL -/**end repeat**/ - -#if defined(NPY_PY3K) -#define NONZERO_NAME(prefix, suffix) prefix##bool##suffix -#else -#define NONZERO_NAME(prefix, suffix) prefix##nonzero##suffix -#endif - -/**begin repeat - * #name = byte, ubyte, short, ushort, int, uint, long, ulong, longlong, - * ulonglong, float, double, longdouble, cfloat, cdouble, clongdouble# - * #simp=1*13,0*3# - */ -static int -NONZERO_NAME(@name@_,)(PyObject *a) -{ - int ret; - @name@ arg1; - - if (_@name@_convert_to_ctype(a, &arg1) < 0) { - if (PyErr_Occurred()) { - return -1; - } - return PyGenericArrType_Type.tp_as_number->NONZERO_NAME(nb_,)(a); - } - - /* - * here we do the actual calculation with arg1 and arg2 - * make it a function call. 
- */
-
-#if @simp@
-    ret = (arg1 != 0);
-#else
-    ret = ((arg1.real != 0) || (arg1.imag != 0));
-#endif
-
-    return ret;
-}
-/**end repeat**/
-
-/**begin repeat
- *
- * #name=byte,ubyte,short,ushort,int,uint,long,ulong,longlong,ulonglong,float,double,longdouble,cfloat,cdouble,clongdouble#
- * #Name=Byte,UByte,Short,UShort,Int,UInt,Long,ULong,LongLong,ULongLong,Float,Double,LongDouble,CFloat,CDouble,CLongDouble#
- * #cmplx=,,,,,,,,,,,,,.real,.real,.real#
- * #sign=(signed,unsigned)*5,,,,,,#
- * #unsigntyp=0,1,0,1,0,1,0,1,0,1,1*6#
- * #ctype=long*8,PY_LONG_LONG*2,double*6#
- * #realtyp=0*10,1*6#
- * #func=(PyLong_FromLong,PyLong_FromUnsignedLong)*4,PyLong_FromLongLong,PyLong_FromUnsignedLongLong,PyLong_FromDouble*6#
- */
-static PyObject *
-@name@_int(PyObject *obj)
-{
-    @sign@ @ctype@ x = PyArrayScalar_VAL(obj, @Name@)@cmplx@;
-#if @realtyp@
-    double ix;
-    modf(x, &ix);
-    x = ix;
-#endif
-
-/*
- * For unsigned types, the (@ctype@) cast just makes explicit what the
- * compiler does implicitly.
- */
-#if @unsigntyp@
-    if(LONG_MIN < (@ctype@)x && (@ctype@)x < LONG_MAX)
-        return PyInt_FromLong(x);
-#else
-    if(LONG_MIN < x && x < LONG_MAX)
-        return PyInt_FromLong(x);
-#endif
-    return @func@(x);
-}
-/**end repeat**/
-
-/**begin repeat
- *
- * #name=(byte,ubyte,short,ushort,int,uint,long,ulong,longlong,ulonglong,float,double,longdouble,cfloat,cdouble,clongdouble)*2#
- * #Name=(Byte,UByte,Short,UShort,Int,UInt,Long,ULong,LongLong,ULongLong,Float,Double,LongDouble,CFloat,CDouble,CLongDouble)*2#
- * #cmplx=(,,,,,,,,,,,,,.real,.real,.real)*2#
- * #which=long*16,float*16#
- * #func=(PyLong_FromLongLong, PyLong_FromUnsignedLongLong)*5,PyLong_FromDouble*6,PyFloat_FromDouble*16#
- */
-static PyObject *
-@name@_@which@(PyObject *obj)
-{
-    return @func@((PyArrayScalar_VAL(obj, @Name@))@cmplx@);
-}
-/**end repeat**/
-
-#if !defined(NPY_PY3K)
-
-/**begin repeat
- *
- *
#name=(byte,ubyte,short,ushort,int,uint,long,ulong,longlong,ulonglong,float,double,longdouble,cfloat,cdouble,clongdouble)*2# - * #oper=oct*16, hex*16# - * #kind=(int*5, long*5, int, long*2, int, long*2)*2# - * #cap=(Int*5, Long*5, Int, Long*2, Int, Long*2)*2# - */ -static PyObject * -@name@_@oper@(PyObject *obj) -{ - PyObject *pyint; - pyint = @name@_@kind@(obj); - if (pyint == NULL) return NULL; - return Py@cap@_Type.tp_as_number->nb_@oper@(pyint); -} -/**end repeat**/ - -#endif - -/**begin repeat - * #oper=le,ge,lt,gt,eq,ne# - * #op=<=,>=,<,>,==,!=# - */ -#define def_cmp_@oper@(arg1, arg2) (arg1 @op@ arg2) -#define cmplx_cmp_@oper@(arg1, arg2) ((arg1.real == arg2.real) ? \ - arg1.imag @op@ arg2.imag : \ - arg1.real @op@ arg2.real) -/**end repeat**/ - -/**begin repeat - * #name=byte,ubyte,short,ushort,int,uint,long,ulong,longlong,ulonglong,float,double,longdouble,cfloat,cdouble,clongdouble# - * #simp=def*13,cmplx*3# - */ -static PyObject* -@name@_richcompare(PyObject *self, PyObject *other, int cmp_op) -{ - @name@ arg1, arg2; - int out=0; - - switch(_@name@_convert2_to_ctypes(self, &arg1, other, &arg2)) { - case 0: - break; - case -1: /* can't cast both safely use different add function */ - case -2: /* use ufunc */ - if (PyErr_Occurred()) return NULL; - return PyGenericArrType_Type.tp_richcompare(self, other, cmp_op); - case -3: /* special case for longdouble and clongdouble - because they have a recursive getitem in their dtype */ - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - } - - /* here we do the actual calculation with arg1 and arg2 */ - switch (cmp_op) { - case Py_EQ: - out = @simp@_cmp_eq(arg1, arg2); - break; - case Py_NE: - out = @simp@_cmp_ne(arg1, arg2); - break; - case Py_LE: - out = @simp@_cmp_le(arg1, arg2); - break; - case Py_GE: - out = @simp@_cmp_ge(arg1, arg2); - break; - case Py_LT: - out = @simp@_cmp_lt(arg1, arg2); - break; - case Py_GT: - out = @simp@_cmp_gt(arg1, arg2); - break; - } - - if (out) { - 
PyArrayScalar_RETURN_TRUE; - } - else { - PyArrayScalar_RETURN_FALSE; - } -} -/**end repeat**/ - - -/**begin repeat - #name=byte,ubyte,short,ushort,int,uint,long,ulong,longlong,ulonglong,float,double,longdouble,cfloat,cdouble,clongdouble# -**/ -static PyNumberMethods @name@_as_number = { - (binaryfunc)@name@_add, /*nb_add*/ - (binaryfunc)@name@_subtract, /*nb_subtract*/ - (binaryfunc)@name@_multiply, /*nb_multiply*/ -#if defined(NPY_PY3K) -#else - (binaryfunc)@name@_divide, /*nb_divide*/ -#endif - (binaryfunc)@name@_remainder, /*nb_remainder*/ - (binaryfunc)@name@_divmod, /*nb_divmod*/ - (ternaryfunc)@name@_power, /*nb_power*/ - (unaryfunc)@name@_negative, - (unaryfunc)@name@_positive, /*nb_pos*/ - (unaryfunc)@name@_absolute, /*nb_abs*/ -#if defined(NPY_PY3K) - (inquiry)@name@_bool, /*nb_bool*/ -#else - (inquiry)@name@_nonzero, /*nb_nonzero*/ -#endif - (unaryfunc)@name@_invert, /*nb_invert*/ - (binaryfunc)@name@_lshift, /*nb_lshift*/ - (binaryfunc)@name@_rshift, /*nb_rshift*/ - (binaryfunc)@name@_and, /*nb_and*/ - (binaryfunc)@name@_xor, /*nb_xor*/ - (binaryfunc)@name@_or, /*nb_or*/ -#if defined(NPY_PY3K) -#else - 0, /*nb_coerce*/ -#endif - (unaryfunc)@name@_int, /*nb_int*/ -#if defined(NPY_PY3K) - (unaryfunc)0, /*nb_reserved*/ -#else - (unaryfunc)@name@_long, /*nb_long*/ -#endif - (unaryfunc)@name@_float, /*nb_float*/ -#if defined(NPY_PY3K) -#else - (unaryfunc)@name@_oct, /*nb_oct*/ - (unaryfunc)@name@_hex, /*nb_hex*/ -#endif - 0, /*inplace_add*/ - 0, /*inplace_subtract*/ - 0, /*inplace_multiply*/ -#if defined(NPY_PY3K) -#else - 0, /*inplace_divide*/ -#endif - 0, /*inplace_remainder*/ - 0, /*inplace_power*/ - 0, /*inplace_lshift*/ - 0, /*inplace_rshift*/ - 0, /*inplace_and*/ - 0, /*inplace_xor*/ - 0, /*inplace_or*/ - (binaryfunc)@name@_floor_divide, /*nb_floor_divide*/ - (binaryfunc)@name@_true_divide, /*nb_true_divide*/ - 0, /*nb_inplace_floor_divide*/ - 0, /*nb_inplace_true_divide*/ -#if PY_VERSION_HEX >= 0x02050000 - (unaryfunc)NULL, /*nb_index*/ -#endif -}; 
-/**end repeat**/ - -static void *saved_tables_arrtype[9]; - -static void -add_scalarmath(void) -{ - /**begin repeat - #name=byte,ubyte,short,ushort,int,uint,long,ulong,longlong,ulonglong,float,double,longdouble,cfloat,cdouble,clongdouble# - #NAME=Byte, UByte, Short, UShort, Int, UInt, Long, ULong, LongLong, ULongLong, Float, Double, LongDouble, CFloat, CDouble, CLongDouble# - **/ -#if PY_VERSION_HEX >= 0x02050000 - @name@_as_number.nb_index = Py@NAME@ArrType_Type.tp_as_number->nb_index; -#endif - Py@NAME@ArrType_Type.tp_as_number = &(@name@_as_number); - Py@NAME@ArrType_Type.tp_richcompare = @name@_richcompare; - /**end repeat**/ - - saved_tables_arrtype[0] = PyLongArrType_Type.tp_as_number; -#if !defined(NPY_PY3K) - saved_tables_arrtype[1] = PyLongArrType_Type.tp_compare; -#endif - saved_tables_arrtype[2] = PyLongArrType_Type.tp_richcompare; - saved_tables_arrtype[3] = PyDoubleArrType_Type.tp_as_number; -#if !defined(NPY_PY3K) - saved_tables_arrtype[4] = PyDoubleArrType_Type.tp_compare; -#endif - saved_tables_arrtype[5] = PyDoubleArrType_Type.tp_richcompare; - saved_tables_arrtype[6] = PyCDoubleArrType_Type.tp_as_number; -#if !defined(NPY_PY3K) - saved_tables_arrtype[7] = PyCDoubleArrType_Type.tp_compare; -#endif - saved_tables_arrtype[8] = PyCDoubleArrType_Type.tp_richcompare; -} - -static int -get_functions(void) -{ - PyObject *mm, *obj; - void **funcdata; - char *signatures; - int i, j; - int ret = -1; - - /* Get the nc_pow functions */ - /* Get the pow functions */ - mm = PyImport_ImportModule("numpy.core.umath"); - if (mm == NULL) return -1; - - obj = PyObject_GetAttrString(mm, "power"); - if (obj == NULL) goto fail; - funcdata = ((PyUFuncObject *)obj)->data; - signatures = ((PyUFuncObject *)obj)->types; - - i = 0; - j = 0; - while(signatures[i] != PyArray_FLOAT) {i+=3; j++;} - _basic_float_pow = funcdata[j]; - _basic_double_pow = funcdata[j+1]; - _basic_longdouble_pow = funcdata[j+2]; - _basic_cfloat_pow = funcdata[j+3]; - _basic_cdouble_pow = 
funcdata[j+4];
-    _basic_clongdouble_pow = funcdata[j+5];
-    Py_DECREF(obj);
-
-    /* Get the floor functions */
-    obj = PyObject_GetAttrString(mm, "floor");
-    if (obj == NULL) goto fail;
-    funcdata = ((PyUFuncObject *)obj)->data;
-    signatures = ((PyUFuncObject *)obj)->types;
-    i = 0;
-    j = 0;
-    while(signatures[i] != PyArray_FLOAT) {i+=2; j++;}
-    _basic_float_floor = funcdata[j];
-    _basic_double_floor = funcdata[j+1];
-    _basic_longdouble_floor = funcdata[j+2];
-    Py_DECREF(obj);
-
-    /* Get the sqrt functions */
-    obj = PyObject_GetAttrString(mm, "sqrt");
-    if (obj == NULL) goto fail;
-    funcdata = ((PyUFuncObject *)obj)->data;
-    signatures = ((PyUFuncObject *)obj)->types;
-    i = 0;
-    j = 0;
-    while(signatures[i] != PyArray_FLOAT) {i+=2; j++;}
-    _basic_float_sqrt = funcdata[j];
-    _basic_double_sqrt = funcdata[j+1];
-    _basic_longdouble_sqrt = funcdata[j+2];
-    Py_DECREF(obj);
-
-    /* Get the fmod functions */
-    obj = PyObject_GetAttrString(mm, "fmod");
-    if (obj == NULL) goto fail;
-    funcdata = ((PyUFuncObject *)obj)->data;
-    signatures = ((PyUFuncObject *)obj)->types;
-    i = 0;
-    j = 0;
-    while(signatures[i] != PyArray_FLOAT) {i+=3; j++;}
-    _basic_float_fmod = funcdata[j];
-    _basic_double_fmod = funcdata[j+1];
-    _basic_longdouble_fmod = funcdata[j+2];
-    Py_DECREF(obj);
-
-    /* success: fall through to the shared cleanup, which releases mm */
-    ret = 0;
- fail:
-    Py_DECREF(mm);
-    return ret;
-}
-
-static void *saved_tables[9];
-
-char doc_alterpyscalars[] = "";
-
-static PyObject *
-alter_pyscalars(PyObject *NPY_UNUSED(dummy), PyObject *args)
-{
-    int n;
-    PyObject *obj;
-    n = PyTuple_GET_SIZE(args);
-    while(n--) {
-        obj = PyTuple_GET_ITEM(args, n);
-#if !defined(NPY_PY3K)
-        if (obj == (PyObject *)(&PyInt_Type)) {
-            PyInt_Type.tp_as_number = PyLongArrType_Type.tp_as_number;
-            PyInt_Type.tp_compare = PyLongArrType_Type.tp_compare;
-            PyInt_Type.tp_richcompare = PyLongArrType_Type.tp_richcompare;
-        }
-        else
-#endif
-        if (obj == (PyObject *)(&PyFloat_Type)) {
-            PyFloat_Type.tp_as_number = PyDoubleArrType_Type.tp_as_number;
-#if
!defined(NPY_PY3K) - PyFloat_Type.tp_compare = PyDoubleArrType_Type.tp_compare; -#endif - PyFloat_Type.tp_richcompare = PyDoubleArrType_Type.tp_richcompare; - } - else if (obj == (PyObject *)(&PyComplex_Type)) { - PyComplex_Type.tp_as_number = PyCDoubleArrType_Type.tp_as_number; -#if !defined(NPY_PY3K) - PyComplex_Type.tp_compare = PyCDoubleArrType_Type.tp_compare; -#endif - PyComplex_Type.tp_richcompare = \ - PyCDoubleArrType_Type.tp_richcompare; - } - else { - PyErr_SetString(PyExc_ValueError, - "arguments must be int, float, or complex"); - return NULL; - } - } - Py_INCREF(Py_None); - return Py_None; -} - -char doc_restorepyscalars[] = ""; -static PyObject * -restore_pyscalars(PyObject *NPY_UNUSED(dummy), PyObject *args) -{ - int n; - PyObject *obj; - n = PyTuple_GET_SIZE(args); - while(n--) { - obj = PyTuple_GET_ITEM(args, n); -#if !defined(NPY_PY3K) - if (obj == (PyObject *)(&PyInt_Type)) { - PyInt_Type.tp_as_number = saved_tables[0]; - PyInt_Type.tp_compare = saved_tables[1]; - PyInt_Type.tp_richcompare = saved_tables[2]; - } - else -#endif - if (obj == (PyObject *)(&PyFloat_Type)) { - PyFloat_Type.tp_as_number = saved_tables[3]; -#if !defined(NPY_PY3K) - PyFloat_Type.tp_compare = saved_tables[4]; -#endif - PyFloat_Type.tp_richcompare = saved_tables[5]; - } - else if (obj == (PyObject *)(&PyComplex_Type)) { - PyComplex_Type.tp_as_number = saved_tables[6]; -#if !defined(NPY_PY3K) - PyComplex_Type.tp_compare = saved_tables[7]; -#endif - PyComplex_Type.tp_richcompare = saved_tables[8]; - } - else { - PyErr_SetString(PyExc_ValueError, - "arguments must be int, float, or complex"); - return NULL; - } - } - Py_INCREF(Py_None); - return Py_None; -} - -char doc_usepythonmath[] = ""; -static PyObject * -use_pythonmath(PyObject *NPY_UNUSED(dummy), PyObject *args) -{ - int n; - PyObject *obj; - n = PyTuple_GET_SIZE(args); - while(n--) { - obj = PyTuple_GET_ITEM(args, n); -#if !defined(NPY_PY3K) - if (obj == (PyObject *)(&PyInt_Type)) { - PyLongArrType_Type.tp_as_number 
= saved_tables[0]; - PyLongArrType_Type.tp_compare = saved_tables[1]; - PyLongArrType_Type.tp_richcompare = saved_tables[2]; - } - else -#endif - if (obj == (PyObject *)(&PyFloat_Type)) { - PyDoubleArrType_Type.tp_as_number = saved_tables[3]; -#if !defined(NPY_PY3K) - PyDoubleArrType_Type.tp_compare = saved_tables[4]; -#endif - PyDoubleArrType_Type.tp_richcompare = saved_tables[5]; - } - else if (obj == (PyObject *)(&PyComplex_Type)) { - PyCDoubleArrType_Type.tp_as_number = saved_tables[6]; -#if !defined(NPY_PY3K) - PyCDoubleArrType_Type.tp_compare = saved_tables[7]; -#endif - PyCDoubleArrType_Type.tp_richcompare = saved_tables[8]; - } - else { - PyErr_SetString(PyExc_ValueError, - "arguments must be int, float, or complex"); - return NULL; - } - } - Py_INCREF(Py_None); - return Py_None; -} - -char doc_usescalarmath[] = ""; -static PyObject * -use_scalarmath(PyObject *NPY_UNUSED(dummy), PyObject *args) -{ - int n; - PyObject *obj; - n = PyTuple_GET_SIZE(args); - while(n--) { - obj = PyTuple_GET_ITEM(args, n); -#if !defined(NPY_PY3K) - if (obj == (PyObject *)(&PyInt_Type)) { - PyLongArrType_Type.tp_as_number = saved_tables_arrtype[0]; - PyLongArrType_Type.tp_compare = saved_tables_arrtype[1]; - PyLongArrType_Type.tp_richcompare = saved_tables_arrtype[2]; - } - else -#endif - if (obj == (PyObject *)(&PyFloat_Type)) { - PyDoubleArrType_Type.tp_as_number = saved_tables_arrtype[3]; -#if !defined(NPY_PY3K) - PyDoubleArrType_Type.tp_compare = saved_tables_arrtype[4]; -#endif - PyDoubleArrType_Type.tp_richcompare = saved_tables_arrtype[5]; - } - else if (obj == (PyObject *)(&PyComplex_Type)) { - PyCDoubleArrType_Type.tp_as_number = saved_tables_arrtype[6]; -#if !defined(NPY_PY3K) - PyCDoubleArrType_Type.tp_compare = saved_tables_arrtype[7]; -#endif - PyCDoubleArrType_Type.tp_richcompare = saved_tables_arrtype[8]; - } - else { - PyErr_SetString(PyExc_ValueError, - "arguments must be int, float, or complex"); - return NULL; - } - } - Py_INCREF(Py_None); - return Py_None; -} 
- -static struct PyMethodDef methods[] = { - {"alter_pythonmath", (PyCFunction) alter_pyscalars, - METH_VARARGS, doc_alterpyscalars}, - {"restore_pythonmath", (PyCFunction) restore_pyscalars, - METH_VARARGS, doc_restorepyscalars}, - {"use_pythonmath", (PyCFunction) use_pythonmath, - METH_VARARGS, doc_usepythonmath}, - {"use_scalarmath", (PyCFunction) use_scalarmath, - METH_VARARGS, doc_usescalarmath}, - {NULL, NULL, 0, NULL} -}; - -#if defined(NPY_PY3K) -static struct PyModuleDef moduledef = { - PyModuleDef_HEAD_INIT, - "scalarmath", - NULL, - -1, - methods, - NULL, - NULL, - NULL, - NULL -}; -#endif - -#if defined(NPY_PY3K) -#define RETVAL m -PyObject *PyInit_scalarmath(void) -#else -#define RETVAL -PyMODINIT_FUNC -initscalarmath(void) -#endif -{ -#if defined(NPY_PY3K) - PyObject *m = PyModule_Create(&moduledef); - if (!m) { - return NULL; - } -#else - Py_InitModule("scalarmath", methods); -#endif - - import_array(); - import_umath(); - - if (get_functions() < 0) return RETVAL; - - add_scalarmath(); - -#if !defined(NPY_PY3K) - saved_tables[0] = PyInt_Type.tp_as_number; - saved_tables[1] = PyInt_Type.tp_compare; - saved_tables[2] = PyInt_Type.tp_richcompare; -#endif - saved_tables[3] = PyFloat_Type.tp_as_number; -#if !defined(NPY_PY3K) - saved_tables[4] = PyFloat_Type.tp_compare; -#endif - saved_tables[5] = PyFloat_Type.tp_richcompare; - saved_tables[6] = PyComplex_Type.tp_as_number; -#if !defined(NPY_PY3K) - saved_tables[7] = PyComplex_Type.tp_compare; -#endif - saved_tables[8] = PyComplex_Type.tp_richcompare; - - return RETVAL; -} diff --git a/pythonPackages/numpy/numpy/core/src/umath/funcs.inc.src b/pythonPackages/numpy/numpy/core/src/umath/funcs.inc.src deleted file mode 100755 index 5dc58e9909..0000000000 --- a/pythonPackages/numpy/numpy/core/src/umath/funcs.inc.src +++ /dev/null @@ -1,605 +0,0 @@ -/* -*- c -*- */ - -/* - * This file is for the definitions of the non-c99 functions used in ufuncs. 
- * All the complex ufuncs are defined here along with a smattering of real and - * object functions. - */ - -#include "numpy/npy_3kcompat.h" - - -/* - ***************************************************************************** - ** PYTHON OBJECT FUNCTIONS ** - ***************************************************************************** - */ - -static PyObject * -Py_square(PyObject *o) -{ - return PyNumber_Multiply(o, o); -} - -static PyObject * -Py_get_one(PyObject *NPY_UNUSED(o)) -{ - return PyInt_FromLong(1); -} - -static PyObject * -Py_reciprocal(PyObject *o) -{ - PyObject *one = PyInt_FromLong(1); - PyObject *result; - - if (!one) { - return NULL; - } -#if defined(NPY_PY3K) - result = PyNumber_TrueDivide(one, o); -#else - result = PyNumber_Divide(one, o); -#endif - Py_DECREF(one); - return result; -} - -/* - * Define numpy version of PyNumber_Power as binary function. - */ -static PyObject * -npy_ObjectPower(PyObject *x, PyObject *y) -{ - return PyNumber_Power(x, y, Py_None); -} - -/**begin repeat - * #Kind = Max, Min# - * #OP = >=, <=# - */ -static PyObject * -npy_Object@Kind@(PyObject *i1, PyObject *i2) -{ - PyObject *result; - int cmp; - - if (PyObject_Cmp(i1, i2, &cmp) < 0) { - return NULL; - } - if (cmp @OP@ 0) { - result = i1; - } - else { - result = i2; - } - Py_INCREF(result); - return result; -} -/**end repeat**/ - - -/* - ***************************************************************************** - ** COMPLEX FUNCTIONS ** - ***************************************************************************** - */ - - -/* - * Don't pass structures between functions (only pointers) because how - * structures are passed is compiler dependent and could cause segfaults if - * umath_ufunc_object.inc is compiled with a different compiler than an - * extension that makes use of the UFUNC API - */ - -/**begin repeat - * - * #typ = float, double, longdouble# - * #c = f, ,l# - * #C = F, ,L# - * #precision = 1,2,4# - */ - -/* - * Perform the operation result := 1 + 
coef * x * result, - * with real coefficient `coef`. - */ -#define SERIES_HORNER_TERM@c@(result, x, coef) \ - do { \ - nc_prod@c@((result), (x), (result)); \ - (result)->real *= (coef); \ - (result)->imag *= (coef); \ - nc_sum@c@((result), &nc_1@c@, (result)); \ - } while(0) - -/* constants */ -static c@typ@ nc_1@c@ = {1., 0.}; -static c@typ@ nc_half@c@ = {0.5, 0.}; -static c@typ@ nc_i@c@ = {0., 1.}; -static c@typ@ nc_i2@c@ = {0., 0.5}; -/* - * static c@typ@ nc_mi@c@ = {0.0@c@, -1.0@c@}; - * static c@typ@ nc_pi2@c@ = {NPY_PI_2@c@., 0.0@c@}; - */ - - -static void -nc_sum@c@(c@typ@ *a, c@typ@ *b, c@typ@ *r) -{ - r->real = a->real + b->real; - r->imag = a->imag + b->imag; - return; -} - -static void -nc_diff@c@(c@typ@ *a, c@typ@ *b, c@typ@ *r) -{ - r->real = a->real - b->real; - r->imag = a->imag - b->imag; - return; -} - -static void -nc_neg@c@(c@typ@ *a, c@typ@ *r) -{ - r->real = -a->real; - r->imag = -a->imag; - return; -} - -static void -nc_prod@c@(c@typ@ *a, c@typ@ *b, c@typ@ *r) -{ - @typ@ ar=a->real, br=b->real, ai=a->imag, bi=b->imag; - r->real = ar*br - ai*bi; - r->imag = ar*bi + ai*br; - return; -} - -static void -nc_quot@c@(c@typ@ *a, c@typ@ *b, c@typ@ *r) -{ - - @typ@ ar=a->real, br=b->real, ai=a->imag, bi=b->imag; - @typ@ d = br*br + bi*bi; - r->real = (ar*br + ai*bi)/d; - r->imag = (ai*br - ar*bi)/d; - return; -} - -static void -nc_sqrt@c@(c@typ@ *x, c@typ@ *r) -{ - *r = npy_csqrt@c@(*x); - return; -} - -static void -nc_rint@c@(c@typ@ *x, c@typ@ *r) -{ - r->real = npy_rint@c@(x->real); - r->imag = npy_rint@c@(x->imag); -} - -static void -nc_log@c@(c@typ@ *x, c@typ@ *r) -{ - *r = npy_clog@c@(*x); - return; -} - -static void -nc_log1p@c@(c@typ@ *x, c@typ@ *r) -{ - @typ@ l = npy_hypot@c@(x->real + 1,x->imag); - r->imag = npy_atan2@c@(x->imag, x->real + 1); - r->real = npy_log@c@(l); - return; -} - -static void -nc_exp@c@(c@typ@ *x, c@typ@ *r) -{ - *r = npy_cexp@c@(*x); - return; -} - -static void -nc_exp2@c@(c@typ@ *x, c@typ@ *r) -{ - c@typ@ a; - a.real = 
x->real*NPY_LOGE2@c@; - a.imag = x->imag*NPY_LOGE2@c@; - nc_exp@c@(&a, r); - return; -} - -static void -nc_expm1@c@(c@typ@ *x, c@typ@ *r) -{ - @typ@ a = npy_exp@c@(x->real); - r->real = a*npy_cos@c@(x->imag) - 1.0@c@; - r->imag = a*npy_sin@c@(x->imag); - return; -} - -static void -nc_pow@c@(c@typ@ *a, c@typ@ *b, c@typ@ *r) -{ - intp n; - @typ@ ar = npy_creal@c@(*a); - @typ@ br = npy_creal@c@(*b); - @typ@ ai = npy_cimag@c@(*a); - @typ@ bi = npy_cimag@c@(*b); - - if (br == 0. && bi == 0.) { - *r = npy_cpack@c@(1., 0.); - return; - } - if (ar == 0. && ai == 0.) { - if (br > 0 && bi == 0) { - *r = npy_cpack@c@(0., 0.); - } - else { - /* NB: there are four complex zeros; c0 = (+-0, +-0), so that unlike - * for reals, c0**p, with `p` negative is in general - * ill-defined. - * - * c0**z with z complex is also ill-defined. - */ - *r = npy_cpack@c@(NPY_NAN, NPY_NAN); - - /* Raise invalid */ - ar = NPY_INFINITY; - ar = ar - ar; - } - return; - } - if (bi == 0 && (n=(intp)br) == br) { - if (n == 1) { - /* unroll: handle inf better */ - *r = npy_cpack@c@(ar, ai); - return; - } - else if (n == 2) { - /* unroll: handle inf better */ - nc_prod@c@(a, a, r); - return; - } - else if (n == 3) { - /* unroll: handle inf better */ - nc_prod@c@(a, a, r); - nc_prod@c@(a, r, r); - return; - } - else if (n > -100 && n < 100) { - c@typ@ p, aa; - intp mask = 1; - if (n < 0) n = -n; - aa = nc_1@c@; - p = npy_cpack@c@(ar, ai); - while (1) { - if (n & mask) - nc_prod@c@(&aa,&p,&aa); - mask <<= 1; - if (n < mask || mask <= 0) break; - nc_prod@c@(&p,&p,&p); - } - *r = npy_cpack@c@(npy_creal@c@(aa), npy_cimag@c@(aa)); - if (br < 0) nc_quot@c@(&nc_1@c@, r, r); - return; - } - } - - *r = npy_cpow@c@(*a, *b); - return; -} - - -static void -nc_prodi@c@(c@typ@ *x, c@typ@ *r) -{ - @typ@ xr = x->real; - r->real = -x->imag; - r->imag = xr; - return; -} - - -static void -nc_acos@c@(c@typ@ *x, c@typ@ *r) -{ - /* - * return nc_neg(nc_prodi(nc_log(nc_sum(x,nc_prod(nc_i, - * 
nc_sqrt(nc_diff(nc_1,nc_prod(x,x)))))))); - */ - nc_prod@c@(x,x,r); - nc_diff@c@(&nc_1@c@, r, r); - nc_sqrt@c@(r, r); - nc_prodi@c@(r, r); - nc_sum@c@(x, r, r); - nc_log@c@(r, r); - nc_prodi@c@(r, r); - nc_neg@c@(r, r); - return; -} - -static void -nc_acosh@c@(c@typ@ *x, c@typ@ *r) -{ - /* - * return nc_log(nc_sum(x, - * nc_prod(nc_sqrt(nc_sum(x,nc_1)), nc_sqrt(nc_diff(x,nc_1))))); - */ - c@typ@ t; - - nc_sum@c@(x, &nc_1@c@, &t); - nc_sqrt@c@(&t, &t); - nc_diff@c@(x, &nc_1@c@, r); - nc_sqrt@c@(r, r); - nc_prod@c@(&t, r, r); - nc_sum@c@(x, r, r); - nc_log@c@(r, r); - return; -} - -static void -nc_asin@c@(c@typ@ *x, c@typ@ *r) -{ - /* - * return nc_neg(nc_prodi(nc_log(nc_sum(nc_prod(nc_i,x), - * nc_sqrt(nc_diff(nc_1,nc_prod(x,x))))))); - */ - if (fabs(x->real) > 1e-3 || fabs(x->imag) > 1e-3) { - c@typ@ a, *pa=&a; - nc_prod@c@(x, x, r); - nc_diff@c@(&nc_1@c@, r, r); - nc_sqrt@c@(r, r); - nc_prodi@c@(x, pa); - nc_sum@c@(pa, r, r); - nc_log@c@(r, r); - nc_prodi@c@(r, r); - nc_neg@c@(r, r); - } - else { - /* - * Small arguments: series expansion, to avoid loss of precision - * asin(x) = x [1 + (1/6) x^2 [1 + (9/20) x^2 [1 + ...]]] - * - * |x| < 1e-3 => |rel. 
error| < 1e-18 (f), 1e-24, 1e-36 (l) - */ - c@typ@ x2; - nc_prod@c@(x, x, &x2); - - *r = nc_1@c@; -#if @precision@ >= 3 - SERIES_HORNER_TERM@c@(r, &x2, 81.0@C@/110); - SERIES_HORNER_TERM@c@(r, &x2, 49.0@C@/72); -#endif -#if @precision@ >= 2 - SERIES_HORNER_TERM@c@(r, &x2, 25.0@C@/42); -#endif - SERIES_HORNER_TERM@c@(r, &x2, 9.0@C@/20); - SERIES_HORNER_TERM@c@(r, &x2, 1.0@C@/6); - nc_prod@c@(r, x, r); - } - return; -} - - -static void -nc_asinh@c@(c@typ@ *x, c@typ@ *r) -{ - /* - * return nc_log(nc_sum(nc_sqrt(nc_sum(nc_1,nc_prod(x,x))),x)); - */ - if (fabs(x->real) > 1e-3 || fabs(x->imag) > 1e-3) { - nc_prod@c@(x, x, r); - nc_sum@c@(&nc_1@c@, r, r); - nc_sqrt@c@(r, r); - nc_sum@c@(r, x, r); - nc_log@c@(r, r); - } - else { - /* - * Small arguments: series expansion, to avoid loss of precision - * asinh(x) = x [1 - (1/6) x^2 [1 - (9/20) x^2 [1 - ...]]] - * - * |x| < 1e-3 => |rel. error| < 1e-18 (f), 1e-24, 1e-36 (l) - */ - c@typ@ x2; - nc_prod@c@(x, x, &x2); - - *r = nc_1@c@; -#if @precision@ >= 3 - SERIES_HORNER_TERM@c@(r, &x2, -81.0@C@/110); - SERIES_HORNER_TERM@c@(r, &x2, -49.0@C@/72); -#endif -#if @precision@ >= 2 - SERIES_HORNER_TERM@c@(r, &x2, -25.0@C@/42); -#endif - SERIES_HORNER_TERM@c@(r, &x2, -9.0@C@/20); - SERIES_HORNER_TERM@c@(r, &x2, -1.0@C@/6); - nc_prod@c@(r, x, r); - } - return; -} - -static void -nc_atan@c@(c@typ@ *x, c@typ@ *r) -{ - /* - * return nc_prod(nc_i2,nc_log(nc_quot(nc_sum(nc_i,x),nc_diff(nc_i,x)))); - */ - if (fabs(x->real) > 1e-3 || fabs(x->imag) > 1e-3) { - c@typ@ a, *pa=&a; - nc_diff@c@(&nc_i@c@, x, pa); - nc_sum@c@(&nc_i@c@, x, r); - nc_quot@c@(r, pa, r); - nc_log@c@(r,r); - nc_prod@c@(&nc_i2@c@, r, r); - } - else { - /* - * Small arguments: series expansion, to avoid loss of precision - * atan(x) = x [1 - (1/3) x^2 [1 - (3/5) x^2 [1 - ...]]] - * - * |x| < 1e-3 => |rel. 
error| < 1e-18 (f), 1e-24, 1e-36 (l)
-         */
-        c@typ@ x2;
-        nc_prod@c@(x, x, &x2);
-
-        *r = nc_1@c@;
-#if @precision@ >= 3
-        SERIES_HORNER_TERM@c@(r, &x2, -9.0@C@/11);
-        SERIES_HORNER_TERM@c@(r, &x2, -7.0@C@/9);
-#endif
-#if @precision@ >= 2
-        SERIES_HORNER_TERM@c@(r, &x2, -5.0@C@/7);
-#endif
-        SERIES_HORNER_TERM@c@(r, &x2, -3.0@C@/5);
-        SERIES_HORNER_TERM@c@(r, &x2, -1.0@C@/3);
-        nc_prod@c@(r, x, r);
-    }
-    return;
-}
-
-static void
-nc_atanh@c@(c@typ@ *x, c@typ@ *r)
-{
-    /*
-     * return nc_prod(nc_half,nc_log(nc_quot(nc_sum(nc_1,x),nc_diff(nc_1,x))));
-     */
-    if (fabs(x->real) > 1e-3 || fabs(x->imag) > 1e-3) {
-        c@typ@ a, *pa=&a;
-        nc_diff@c@(&nc_1@c@, x, r);
-        nc_sum@c@(&nc_1@c@, x, pa);
-        nc_quot@c@(pa, r, r);
-        nc_log@c@(r, r);
-        nc_prod@c@(&nc_half@c@, r, r);
-    }
-    else {
-        /*
-         * Small arguments: series expansion, to avoid loss of precision
-         * atanh(x) = x [1 + (1/3) x^2 [1 + (3/5) x^2 [1 + ...]]]
-         *
-         * |x| < 1e-3 => |rel. error| < 1e-18 (f), 1e-24, 1e-36 (l)
-         */
-        c@typ@ x2;
-        nc_prod@c@(x, x, &x2);
-
-        *r = nc_1@c@;
-#if @precision@ >= 3
-        SERIES_HORNER_TERM@c@(r, &x2, 9.0@C@/11);
-        SERIES_HORNER_TERM@c@(r, &x2, 7.0@C@/9);
-#endif
-#if @precision@ >= 2
-        SERIES_HORNER_TERM@c@(r, &x2, 5.0@C@/7);
-#endif
-        SERIES_HORNER_TERM@c@(r, &x2, 3.0@C@/5);
-        SERIES_HORNER_TERM@c@(r, &x2, 1.0@C@/3);
-        nc_prod@c@(r, x, r);
-    }
-    return;
-}
-
-static void
-nc_cos@c@(c@typ@ *x, c@typ@ *r)
-{
-    @typ@ xr=x->real, xi=x->imag;
-    r->real = npy_cos@c@(xr)*npy_cosh@c@(xi);
-    r->imag = -npy_sin@c@(xr)*npy_sinh@c@(xi);
-    return;
-}
-
-static void
-nc_cosh@c@(c@typ@ *x, c@typ@ *r)
-{
-    @typ@ xr=x->real, xi=x->imag;
-    r->real = npy_cos@c@(xi)*npy_cosh@c@(xr);
-    r->imag = npy_sin@c@(xi)*npy_sinh@c@(xr);
-    return;
-}
-
-static void
-nc_log10@c@(c@typ@ *x, c@typ@ *r)
-{
-    nc_log@c@(x, r);
-    r->real *= NPY_LOG10E@c@;
-    r->imag *= NPY_LOG10E@c@;
-    return;
-}
-
-static void
-nc_log2@c@(c@typ@ *x, c@typ@ *r)
-{
-    nc_log@c@(x, r);
-    r->real *= NPY_LOG2E@c@;
-    r->imag *= NPY_LOG2E@c@;
-    return;
-} - -static void -nc_sin@c@(c@typ@ *x, c@typ@ *r) -{ - @typ@ xr=x->real, xi=x->imag; - r->real = npy_sin@c@(xr)*npy_cosh@c@(xi); - r->imag = npy_cos@c@(xr)*npy_sinh@c@(xi); - return; -} - -static void -nc_sinh@c@(c@typ@ *x, c@typ@ *r) -{ - @typ@ xr=x->real, xi=x->imag; - r->real = npy_cos@c@(xi)*npy_sinh@c@(xr); - r->imag = npy_sin@c@(xi)*npy_cosh@c@(xr); - return; -} - -static void -nc_tan@c@(c@typ@ *x, c@typ@ *r) -{ - @typ@ sr,cr,shi,chi; - @typ@ rs,is,rc,ic; - @typ@ d; - @typ@ xr=x->real, xi=x->imag; - sr = npy_sin@c@(xr); - cr = npy_cos@c@(xr); - shi = npy_sinh@c@(xi); - chi = npy_cosh@c@(xi); - rs = sr*chi; - is = cr*shi; - rc = cr*chi; - ic = -sr*shi; - d = rc*rc + ic*ic; - r->real = (rs*rc+is*ic)/d; - r->imag = (is*rc-rs*ic)/d; - return; -} - -static void -nc_tanh@c@(c@typ@ *x, c@typ@ *r) -{ - @typ@ si,ci,shr,chr; - @typ@ rs,is,rc,ic; - @typ@ d; - @typ@ xr=x->real, xi=x->imag; - si = npy_sin@c@(xi); - ci = npy_cos@c@(xi); - shr = npy_sinh@c@(xr); - chr = npy_cosh@c@(xr); - rs = ci*shr; - is = si*chr; - rc = ci*chr; - ic = si*shr; - d = rc*rc + ic*ic; - r->real = (rs*rc+is*ic)/d; - r->imag = (is*rc-rs*ic)/d; - return; -} - -#undef SERIES_HORNER_TERM@c@ - -/**end repeat**/ diff --git a/pythonPackages/numpy/numpy/core/src/umath/loops.c.src b/pythonPackages/numpy/numpy/core/src/umath/loops.c.src deleted file mode 100755 index b9b5b1d5e7..0000000000 --- a/pythonPackages/numpy/numpy/core/src/umath/loops.c.src +++ /dev/null @@ -1,1517 +0,0 @@ -/* -*- c -*- */ - -#define _UMATHMODULE - -#include "Python.h" - -#include "npy_config.h" -#ifdef ENABLE_SEPARATE_COMPILATION -#define PY_ARRAY_UNIQUE_SYMBOL _npy_umathmodule_ARRAY_API -#define NO_IMPORT_ARRAY -#endif - -#include "numpy/noprefix.h" -#include "numpy/ufuncobject.h" -#include "numpy/npy_math.h" - -#include "numpy/npy_3kcompat.h" - -#include "ufunc_object.h" - -/* - ***************************************************************************** - ** UFUNC LOOPS ** - 
***************************************************************************** - */ - -#define IS_BINARY_REDUCE ((args[0] == args[2])\ - && (steps[0] == steps[2])\ - && (steps[0] == 0)) - -#define OUTPUT_LOOP\ - char *op1 = args[1];\ - intp os1 = steps[1];\ - intp n = dimensions[0];\ - intp i;\ - for(i = 0; i < n; i++, op1 += os1) - -#define UNARY_LOOP\ - char *ip1 = args[0], *op1 = args[1];\ - intp is1 = steps[0], os1 = steps[1];\ - intp n = dimensions[0];\ - intp i;\ - for(i = 0; i < n; i++, ip1 += is1, op1 += os1) - -#define UNARY_LOOP_TWO_OUT\ - char *ip1 = args[0], *op1 = args[1], *op2 = args[2];\ - intp is1 = steps[0], os1 = steps[1], os2 = steps[2];\ - intp n = dimensions[0];\ - intp i;\ - for(i = 0; i < n; i++, ip1 += is1, op1 += os1, op2 += os2) - -#define BINARY_LOOP\ - char *ip1 = args[0], *ip2 = args[1], *op1 = args[2];\ - intp is1 = steps[0], is2 = steps[1], os1 = steps[2];\ - intp n = dimensions[0];\ - intp i;\ - for(i = 0; i < n; i++, ip1 += is1, ip2 += is2, op1 += os1) - -#define BINARY_REDUCE_LOOP(TYPE)\ - char *iop1 = args[0], *ip2 = args[1]; \ - intp is2 = steps[1]; \ - intp n = dimensions[0]; \ - intp i; \ - TYPE io1 = *(TYPE *)iop1; \ - for(i = 0; i < n; i++, ip2 += is2) - -#define BINARY_LOOP_TWO_OUT\ - char *ip1 = args[0], *ip2 = args[1], *op1 = args[2], *op2 = args[3];\ - intp is1 = steps[0], is2 = steps[1], os1 = steps[2], os2 = steps[3];\ - intp n = dimensions[0];\ - intp i;\ - for(i = 0; i < n; i++, ip1 += is1, ip2 += is2, op1 += os1, op2 += os2) - -/****************************************************************************** - ** GENERIC FLOAT LOOPS ** - *****************************************************************************/ - - -typedef float floatUnaryFunc(float x); -typedef double doubleUnaryFunc(double x); -typedef longdouble longdoubleUnaryFunc(longdouble x); -typedef float floatBinaryFunc(float x, float y); -typedef double doubleBinaryFunc(double x, double y); -typedef longdouble longdoubleBinaryFunc(longdouble x, 
longdouble y); - - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_f_f(char **args, intp *dimensions, intp *steps, void *func) -{ - floatUnaryFunc *f = (floatUnaryFunc *)func; - UNARY_LOOP { - const float in1 = *(float *)ip1; - *(float *)op1 = f(in1); - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_f_f_As_d_d(char **args, intp *dimensions, intp *steps, void *func) -{ - doubleUnaryFunc *f = (doubleUnaryFunc *)func; - UNARY_LOOP { - const float in1 = *(float *)ip1; - *(float *)op1 = (float)f((double)in1); - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_ff_f(char **args, intp *dimensions, intp *steps, void *func) -{ - floatBinaryFunc *f = (floatBinaryFunc *)func; - BINARY_LOOP { - float in1 = *(float *)ip1; - float in2 = *(float *)ip2; - *(float *)op1 = f(in1, in2); - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_ff_f_As_dd_d(char **args, intp *dimensions, intp *steps, void *func) -{ - doubleBinaryFunc *f = (doubleBinaryFunc *)func; - BINARY_LOOP { - float in1 = *(float *)ip1; - float in2 = *(float *)ip2; - *(float *)op1 = (double)f((double)in1, (double)in2); - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_d_d(char **args, intp *dimensions, intp *steps, void *func) -{ - doubleUnaryFunc *f = (doubleUnaryFunc *)func; - UNARY_LOOP { - double in1 = *(double *)ip1; - *(double *)op1 = f(in1); - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_dd_d(char **args, intp *dimensions, intp *steps, void *func) -{ - doubleBinaryFunc *f = (doubleBinaryFunc *)func; - BINARY_LOOP { - double in1 = *(double *)ip1; - double in2 = *(double *)ip2; - *(double *)op1 = f(in1, in2); - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_g_g(char **args, intp *dimensions, intp *steps, void *func) -{ - longdoubleUnaryFunc *f = (longdoubleUnaryFunc *)func; - UNARY_LOOP { - longdouble in1 = *(longdouble *)ip1; - *(longdouble *)op1 = f(in1); - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_gg_g(char **args, intp *dimensions, intp *steps, void *func) -{ - longdoubleBinaryFunc *f = 
(longdoubleBinaryFunc *)func; - BINARY_LOOP { - longdouble in1 = *(longdouble *)ip1; - longdouble in2 = *(longdouble *)ip2; - *(longdouble *)op1 = f(in1, in2); - } -} - - - -/****************************************************************************** - ** GENERIC COMPLEX LOOPS ** - *****************************************************************************/ - - -typedef void cdoubleUnaryFunc(cdouble *x, cdouble *r); -typedef void cfloatUnaryFunc(cfloat *x, cfloat *r); -typedef void clongdoubleUnaryFunc(clongdouble *x, clongdouble *r); -typedef void cdoubleBinaryFunc(cdouble *x, cdouble *y, cdouble *r); -typedef void cfloatBinaryFunc(cfloat *x, cfloat *y, cfloat *r); -typedef void clongdoubleBinaryFunc(clongdouble *x, clongdouble *y, - clongdouble *r); - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_F_F(char **args, intp *dimensions, intp *steps, void *func) -{ - cfloatUnaryFunc *f = (cfloatUnaryFunc *)func; - UNARY_LOOP { - cfloat in1 = *(cfloat *)ip1; - cfloat *out = (cfloat *)op1; - f(&in1, out); - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_F_F_As_D_D(char **args, intp *dimensions, intp *steps, void *func) -{ - cdoubleUnaryFunc *f = (cdoubleUnaryFunc *)func; - UNARY_LOOP { - cdouble tmp, out; - tmp.real = (double)((float *)ip1)[0]; - tmp.imag = (double)((float *)ip1)[1]; - f(&tmp, &out); - ((float *)op1)[0] = (float)out.real; - ((float *)op1)[1] = (float)out.imag; - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_FF_F(char **args, intp *dimensions, intp *steps, void *func) -{ - cfloatBinaryFunc *f = (cfloatBinaryFunc *)func; - BINARY_LOOP { - cfloat in1 = *(cfloat *)ip1; - cfloat in2 = *(cfloat *)ip2; - cfloat *out = (cfloat *)op1; - f(&in1, &in2, out); - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_FF_F_As_DD_D(char **args, intp *dimensions, intp *steps, void *func) -{ - cdoubleBinaryFunc *f = (cdoubleBinaryFunc *)func; - BINARY_LOOP { - cdouble tmp1, tmp2, out; - tmp1.real = (double)((float *)ip1)[0]; - tmp1.imag = (double)((float *)ip1)[1]; 
- tmp2.real = (double)((float *)ip2)[0]; - tmp2.imag = (double)((float *)ip2)[1]; - f(&tmp1, &tmp2, &out); - ((float *)op1)[0] = (float)out.real; - ((float *)op1)[1] = (float)out.imag; - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_D_D(char **args, intp *dimensions, intp *steps, void *func) -{ - cdoubleUnaryFunc *f = (cdoubleUnaryFunc *)func; - UNARY_LOOP { - cdouble in1 = *(cdouble *)ip1; - cdouble *out = (cdouble *)op1; - f(&in1, out); - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_DD_D(char **args, intp *dimensions, intp *steps, void *func) -{ - cdoubleBinaryFunc *f = (cdoubleBinaryFunc *)func; - BINARY_LOOP { - cdouble in1 = *(cdouble *)ip1; - cdouble in2 = *(cdouble *)ip2; - cdouble *out = (cdouble *)op1; - f(&in1, &in2, out); - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_G_G(char **args, intp *dimensions, intp *steps, void *func) -{ - clongdoubleUnaryFunc *f = (clongdoubleUnaryFunc *)func; - UNARY_LOOP { - clongdouble in1 = *(clongdouble *)ip1; - clongdouble *out = (clongdouble *)op1; - f(&in1, out); - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_GG_G(char **args, intp *dimensions, intp *steps, void *func) -{ - clongdoubleBinaryFunc *f = (clongdoubleBinaryFunc *)func; - BINARY_LOOP { - clongdouble in1 = *(clongdouble *)ip1; - clongdouble in2 = *(clongdouble *)ip2; - clongdouble *out = (clongdouble *)op1; - f(&in1, &in2, out); - } -} - - -/****************************************************************************** - ** GENERIC OBJECT LOOPS ** - *****************************************************************************/ - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_O_O(char **args, intp *dimensions, intp *steps, void *func) -{ - unaryfunc f = (unaryfunc)func; - UNARY_LOOP { - PyObject *in1 = *(PyObject **)ip1; - PyObject **out = (PyObject **)op1; - PyObject *ret = f(in1); - if ((ret == NULL) || PyErr_Occurred()) { - return; - } - Py_XDECREF(*out); - *out = ret; - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_O_O_method(char 
**args, intp *dimensions, intp *steps, void *func) -{ - char *meth = (char *)func; - UNARY_LOOP { - PyObject *in1 = *(PyObject **)ip1; - PyObject **out = (PyObject **)op1; - PyObject *ret = PyObject_CallMethod(in1, meth, NULL); - if (ret == NULL) { - return; - } - Py_XDECREF(*out); - *out = ret; - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_OO_O(char **args, intp *dimensions, intp *steps, void *func) -{ - binaryfunc f = (binaryfunc)func; - BINARY_LOOP { - PyObject *in1 = *(PyObject **)ip1; - PyObject *in2 = *(PyObject **)ip2; - PyObject **out = (PyObject **)op1; - PyObject *ret = f(in1, in2); - if (PyErr_Occurred()) { - return; - } - Py_XDECREF(*out); - *out = ret; - } -} - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_OO_O_method(char **args, intp *dimensions, intp *steps, void *func) -{ - char *meth = (char *)func; - BINARY_LOOP { - PyObject *in1 = *(PyObject **)ip1; - PyObject *in2 = *(PyObject **)ip2; - PyObject **out = (PyObject **)op1; - PyObject *ret = PyObject_CallMethod(in1, meth, "(O)", in2); - if (ret == NULL) { - return; - } - Py_XDECREF(*out); - *out = ret; - } -} - -/* - * A general-purpose ufunc that deals with general-purpose Python callable. 
- * func is a structure with nin, nout, and a Python callable function - */ - -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_On_Om(char **args, intp *dimensions, intp *steps, void *func) -{ - intp n = dimensions[0]; - PyUFunc_PyFuncData *data = (PyUFunc_PyFuncData *)func; - int nin = data->nin; - int nout = data->nout; - PyObject *tocall = data->callable; - char *ptrs[NPY_MAXARGS]; - PyObject *arglist, *result; - PyObject *in, **op; - intp i, j, ntot; - - ntot = nin+nout; - - for(j = 0; j < ntot; j++) { - ptrs[j] = args[j]; - } - for(i = 0; i < n; i++) { - arglist = PyTuple_New(nin); - if (arglist == NULL) { - return; - } - for(j = 0; j < nin; j++) { - in = *((PyObject **)ptrs[j]); - if (in == NULL) { - Py_DECREF(arglist); - return; - } - PyTuple_SET_ITEM(arglist, j, in); - Py_INCREF(in); - } - result = PyEval_CallObject(tocall, arglist); - Py_DECREF(arglist); - if (result == NULL) { - return; - } - if (PyTuple_Check(result)) { - if (nout != PyTuple_Size(result)) { - Py_DECREF(result); - return; - } - for(j = 0; j < nout; j++) { - op = (PyObject **)ptrs[j+nin]; - Py_XDECREF(*op); - *op = PyTuple_GET_ITEM(result, j); - Py_INCREF(*op); - } - Py_DECREF(result); - } - else { - op = (PyObject **)ptrs[nin]; - Py_XDECREF(*op); - *op = result; - } - for(j = 0; j < ntot; j++) { - ptrs[j] += steps[j]; - } - } -} - -/* - ***************************************************************************** - ** BOOLEAN LOOPS ** - ***************************************************************************** - */ - -/**begin repeat - * #kind = equal, not_equal, greater, greater_equal, less, less_equal# - * #OP = ==, !=, >, >=, <, <=# - **/ - -NPY_NO_EXPORT void -BOOL_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - Bool in1 = *((Bool *)ip1) != 0; - Bool in2 = *((Bool *)ip2) != 0; - *((Bool *)op1)= in1 @OP@ in2; - } -} -/**end repeat**/ - - -/**begin repeat - * #kind = logical_and, logical_or# - * #OP = &&, ||# - * #SC = ==, !=# - **/ - 
-NPY_NO_EXPORT void -BOOL_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - if(IS_BINARY_REDUCE) { - BINARY_REDUCE_LOOP(Bool) { - const Bool in2 = *(Bool *)ip2; - io1 = io1 @OP@ in2; - if (io1 @SC@ 0) { - break; - } - } - *((Bool *)iop1) = io1; - } - else { - BINARY_LOOP { - const Bool in1 = *(Bool *)ip1; - const Bool in2 = *(Bool *)ip2; - *((Bool *)op1) = in1 @OP@ in2; - } - } -} -/**end repeat**/ - - -NPY_NO_EXPORT void -BOOL_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - Bool in1 = *((Bool *)ip1) != 0; - Bool in2 = *((Bool *)ip2) != 0; - *((Bool *)op1)= (in1 && !in2) || (!in1 && in2); - } -} - -/**begin repeat - * #kind = maximum, minimum# - * #OP = >, <# - **/ -NPY_NO_EXPORT void -BOOL_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - Bool in1 = *((Bool *)ip1) != 0; - Bool in2 = *((Bool *)ip2) != 0; - *((Bool *)op1) = (in1 @OP@ in2) ? in1 : in2; - } -} -/**end repeat**/ - -/**begin repeat - * #kind = absolute, logical_not# - * #OP = !=, ==# - **/ -NPY_NO_EXPORT void -BOOL_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - Bool in1 = *(Bool *)ip1; - *((Bool *)op1) = in1 @OP@ 0; - } -} -/**end repeat**/ - -NPY_NO_EXPORT void -BOOL_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)) -{ - OUTPUT_LOOP { - *((Bool *)op1) = 1; - } -} - - -/* - ***************************************************************************** - ** INTEGER LOOPS - ***************************************************************************** - */ - -/**begin repeat - * #type = byte, short, int, long, longlong# - * #TYPE = BYTE, SHORT, INT, LONG, LONGLONG# - * #ftype = float, float, double, double, double# - */ - -/**begin repeat1 - * both signed and unsigned integer types - * #s = , u# - * #S = , U# - */ - -#define @S@@TYPE@_floor_divide @S@@TYPE@_divide -#define @S@@TYPE@_fmax @S@@TYPE@_maximum 
-#define @S@@TYPE@_fmin @S@@TYPE@_minimum - -NPY_NO_EXPORT void -@S@@TYPE@_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)) -{ - OUTPUT_LOOP { - *((@s@@type@ *)op1) = 1; - } -} - -NPY_NO_EXPORT void -@S@@TYPE@_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)) -{ - UNARY_LOOP { - const @s@@type@ in1 = *(@s@@type@ *)ip1; - *((@s@@type@ *)op1) = in1*in1; - } -} - -NPY_NO_EXPORT void -@S@@TYPE@_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)) -{ - UNARY_LOOP { - const @s@@type@ in1 = *(@s@@type@ *)ip1; - *((@s@@type@ *)op1) = (@s@@type@)(1.0/in1); - } -} - -NPY_NO_EXPORT void -@S@@TYPE@_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const @s@@type@ in1 = *(@s@@type@ *)ip1; - *((@s@@type@ *)op1) = in1; - } -} - -NPY_NO_EXPORT void -@S@@TYPE@_negative(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const @s@@type@ in1 = *(@s@@type@ *)ip1; - *((@s@@type@ *)op1) = (@s@@type@)(-(@type@)in1); - } -} - -NPY_NO_EXPORT void -@S@@TYPE@_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const @s@@type@ in1 = *(@s@@type@ *)ip1; - *((Bool *)op1) = !in1; - } -} - -NPY_NO_EXPORT void -@S@@TYPE@_invert(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const @s@@type@ in1 = *(@s@@type@ *)ip1; - *((@s@@type@ *)op1) = ~in1; - } -} - -/**begin repeat2 - * Arithmetic - * #kind = add, subtract, multiply, bitwise_and, bitwise_or, bitwise_xor, - * left_shift, right_shift# - * #OP = +, -,*, &, |, ^, <<, >># - */ -NPY_NO_EXPORT void -@S@@TYPE@_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - if(IS_BINARY_REDUCE) { - BINARY_REDUCE_LOOP(@s@@type@) { - io1 @OP@= *(@s@@type@ *)ip2; - } - *((@s@@type@ *)iop1) = io1; - } - else { - BINARY_LOOP { - const @s@@type@ in1 = *(@s@@type@ *)ip1; - const @s@@type@ in2 = 
*(@s@@type@ *)ip2; - *((@s@@type@ *)op1) = in1 @OP@ in2; - } - } -} -/**end repeat2**/ - -/**begin repeat2 - * #kind = equal, not_equal, greater, greater_equal, less, less_equal, - * logical_and, logical_or# - * #OP = ==, !=, >, >=, <, <=, &&, ||# - */ -NPY_NO_EXPORT void -@S@@TYPE@_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @s@@type@ in1 = *(@s@@type@ *)ip1; - const @s@@type@ in2 = *(@s@@type@ *)ip2; - *((Bool *)op1) = in1 @OP@ in2; - } -} -/**end repeat2**/ - -NPY_NO_EXPORT void -@S@@TYPE@_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @s@@type@ in1 = *(@s@@type@ *)ip1; - const @s@@type@ in2 = *(@s@@type@ *)ip2; - *((Bool *)op1)= (in1 && !in2) || (!in1 && in2); - } -} - -/**begin repeat2 - * #kind = maximum, minimum# - * #OP = >, <# - **/ -NPY_NO_EXPORT void -@S@@TYPE@_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @s@@type@ in1 = *(@s@@type@ *)ip1; - const @s@@type@ in2 = *(@s@@type@ *)ip2; - *((@s@@type@ *)op1) = (in1 @OP@ in2) ? 
in1 : in2; - } -} -/**end repeat2**/ - -NPY_NO_EXPORT void -@S@@TYPE@_true_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const double in1 = (double)(*(@s@@type@ *)ip1); - const double in2 = (double)(*(@s@@type@ *)ip2); - *((double *)op1) = in1/in2; - } -} - -NPY_NO_EXPORT void -@S@@TYPE@_power(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @ftype@ in1 = (@ftype@)*(@s@@type@ *)ip1; - const @ftype@ in2 = (@ftype@)*(@s@@type@ *)ip2; - *((@s@@type@ *)op1) = (@s@@type@) pow(in1, in2); - } -} - -NPY_NO_EXPORT void -@S@@TYPE@_fmod(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @s@@type@ in1 = *(@s@@type@ *)ip1; - const @s@@type@ in2 = *(@s@@type@ *)ip2; - if (in2 == 0) { - generate_divbyzero_error(); - *((@s@@type@ *)op1) = 0; - } - else { - *((@s@@type@ *)op1)= in1 % in2; - } - - } -} - -/**end repeat1**/ - -NPY_NO_EXPORT void -U@TYPE@_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const u@type@ in1 = *(u@type@ *)ip1; - *((u@type@ *)op1) = in1; - } -} - -NPY_NO_EXPORT void -@TYPE@_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - *((@type@ *)op1) = (in1 >= 0) ? in1 : -in1; - } -} - -NPY_NO_EXPORT void -U@TYPE@_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const u@type@ in1 = *(u@type@ *)ip1; - *((u@type@ *)op1) = in1 > 0 ? 1 : 0; - } -} - -NPY_NO_EXPORT void -@TYPE@_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - *((@type@ *)op1) = in1 > 0 ? 1 : (in1 < 0 ? 
-1 : 0); - } -} - -NPY_NO_EXPORT void -@TYPE@_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - const @type@ in2 = *(@type@ *)ip2; - /* - * FIXME: On x86 at least, dividing the smallest representable integer - * by -1 causes a SIGFPE (division overflow). We treat this case here - * (to avoid a SIGFPE crash at the Python level), but a good solution would - * be to treat integer division problems separately from FPU exceptions - * (i.e. fixing generate_divbyzero_error()). - */ - if (in2 == 0 || (in1 == NPY_MIN_@TYPE@ && in2 == -1)) { - generate_divbyzero_error(); - *((@type@ *)op1) = 0; - } - else if (((in1 > 0) != (in2 > 0)) && (in1 % in2 != 0)) { - *((@type@ *)op1) = in1/in2 - 1; - } - else { - *((@type@ *)op1) = in1/in2; - } - } -} - -NPY_NO_EXPORT void -U@TYPE@_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const u@type@ in1 = *(u@type@ *)ip1; - const u@type@ in2 = *(u@type@ *)ip2; - if (in2 == 0) { - generate_divbyzero_error(); - *((u@type@ *)op1) = 0; - } - else { - *((u@type@ *)op1)= in1/in2; - } - } -} - -NPY_NO_EXPORT void -@TYPE@_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - const @type@ in2 = *(@type@ *)ip2; - if (in2 == 0) { - generate_divbyzero_error(); - *((@type@ *)op1) = 0; - } - else { - /* handle mixed case the way Python does */ - const @type@ rem = in1 % in2; - if ((in1 > 0) == (in2 > 0) || rem == 0) { - *((@type@ *)op1) = rem; - } - else { - *((@type@ *)op1) = rem + in2; - } - } - } -} - -NPY_NO_EXPORT void -U@TYPE@_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const u@type@ in1 = *(u@type@ *)ip1; - const u@type@ in2 = *(u@type@ *)ip2; - if (in2 == 0) { - generate_divbyzero_error(); - *((@type@ *)op1) = 0; - } - else { - *((@type@ *)op1) = in1 % in2; - } - } -} - -/**end repeat**/ 
- - -/* - ***************************************************************************** - ** FLOAT LOOPS ** - ***************************************************************************** - */ - - -/**begin repeat - * Float types - * #type = float, double, longdouble# - * #TYPE = FLOAT, DOUBLE, LONGDOUBLE# - * #c = f, , l# - * #C = F, , L# - */ - - -/**begin repeat1 - * Arithmetic - * # kind = add, subtract, multiply, divide# - * # OP = +, -, *, /# - */ -NPY_NO_EXPORT void -@TYPE@_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - if(IS_BINARY_REDUCE) { - BINARY_REDUCE_LOOP(@type@) { - io1 @OP@= *(@type@ *)ip2; - } - *((@type@ *)iop1) = io1; - } - else { - BINARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - const @type@ in2 = *(@type@ *)ip2; - *((@type@ *)op1) = in1 @OP@ in2; - } - } -} -/**end repeat1**/ - -/**begin repeat1 - * #kind = equal, not_equal, less, less_equal, greater, greater_equal, - * logical_and, logical_or# - * #OP = ==, !=, <, <=, >, >=, &&, ||# - */ -NPY_NO_EXPORT void -@TYPE@_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - const @type@ in2 = *(@type@ *)ip2; - *((Bool *)op1) = in1 @OP@ in2; - } -} -/**end repeat1**/ - -NPY_NO_EXPORT void -@TYPE@_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - const @type@ in2 = *(@type@ *)ip2; - *((Bool *)op1)= (in1 && !in2) || (!in1 && in2); - } -} - -NPY_NO_EXPORT void -@TYPE@_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - *((Bool *)op1) = !in1; - } -} - -/**begin repeat1 - * #kind = isnan, isinf, isfinite, signbit# - * #func = npy_isnan, npy_isinf, npy_isfinite, npy_signbit# - **/ -NPY_NO_EXPORT void -@TYPE@_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { 
- const @type@ in1 = *(@type@ *)ip1; - *((Bool *)op1) = @func@(in1) != 0; - } -} -/**end repeat1**/ - -NPY_NO_EXPORT void -@TYPE@_spacing(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - *((@type@ *)op1) = npy_spacing@c@(in1); - } -} - -NPY_NO_EXPORT void -@TYPE@_copysign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - const @type@ in2 = *(@type@ *)ip2; - *((@type@ *)op1)= npy_copysign@c@(in1, in2); - } -} - -NPY_NO_EXPORT void -@TYPE@_nextafter(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - const @type@ in2 = *(@type@ *)ip2; - *((@type@ *)op1)= npy_nextafter@c@(in1, in2); - } -} - -/**begin repeat1 - * #kind = maximum, minimum# - * #OP = >=, <=# - **/ -NPY_NO_EXPORT void -@TYPE@_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - /* */ - BINARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - const @type@ in2 = *(@type@ *)ip2; - *((@type@ *)op1) = (in1 @OP@ in2 || npy_isnan(in1)) ? in1 : in2; - } -} -/**end repeat1**/ - -/**begin repeat1 - * #kind = fmax, fmin# - * #OP = >=, <=# - **/ -NPY_NO_EXPORT void -@TYPE@_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - /* */ - BINARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - const @type@ in2 = *(@type@ *)ip2; - *((@type@ *)op1) = (in1 @OP@ in2 || npy_isnan(in2)) ? 
in1 : in2; - } -} -/**end repeat1**/ - -NPY_NO_EXPORT void -@TYPE@_floor_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - const @type@ in2 = *(@type@ *)ip2; - *((@type@ *)op1) = npy_floor@c@(in1/in2); - } -} - -NPY_NO_EXPORT void -@TYPE@_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - const @type@ in2 = *(@type@ *)ip2; - const @type@ res = npy_fmod@c@(in1,in2); - if (res && ((in2 < 0) != (res < 0))) { - *((@type@ *)op1) = res + in2; - } - else { - *((@type@ *)op1) = res; - } - } -} - -NPY_NO_EXPORT void -@TYPE@_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)) -{ - UNARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - *((@type@ *)op1) = in1*in1; - } -} - -NPY_NO_EXPORT void -@TYPE@_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)) -{ - UNARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - *((@type@ *)op1) = 1/in1; - } -} - -NPY_NO_EXPORT void -@TYPE@_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)) -{ - OUTPUT_LOOP { - *((@type@ *)op1) = 1; - } -} - -NPY_NO_EXPORT void -@TYPE@_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - *((@type@ *)op1) = in1; - } -} - -NPY_NO_EXPORT void -@TYPE@_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - const @type@ tmp = in1 > 0 ? 
in1 : -in1; - /* add 0 to clear -0.0 */ - *((@type@ *)op1) = tmp + 0; - } -} - -NPY_NO_EXPORT void -@TYPE@_negative(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - *((@type@ *)op1) = -in1; - } -} - -NPY_NO_EXPORT void -@TYPE@_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - /* Sign of nan is nan */ - UNARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - *((@type@ *)op1) = in1 > 0 ? 1 : (in1 < 0 ? -1 : (in1 == 0 ? 0 : in1)); - } -} - -NPY_NO_EXPORT void -@TYPE@_modf(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP_TWO_OUT { - const @type@ in1 = *(@type@ *)ip1; - *((@type@ *)op1) = npy_modf@c@(in1, (@type@ *)op2); - } -} - -#ifdef HAVE_FREXP@C@ -NPY_NO_EXPORT void -@TYPE@_frexp(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP_TWO_OUT { - const @type@ in1 = *(@type@ *)ip1; - *((@type@ *)op1) = frexp@c@(in1, (int *)op2); - } -} -#endif - -#ifdef HAVE_LDEXP@C@ -NPY_NO_EXPORT void -@TYPE@_ldexp(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1 = *(@type@ *)ip1; - const int in2 = *(int *)ip2; - *((@type@ *)op1) = ldexp@c@(in1, in2); - } -} -#endif - -#define @TYPE@_true_divide @TYPE@_divide - -/**end repeat**/ - - -/* - ***************************************************************************** - ** COMPLEX LOOPS ** - ***************************************************************************** - */ - -#define CGE(xr,xi,yr,yi) ((xr > yr && !npy_isnan(xi) && !npy_isnan(yi)) \ - || (xr == yr && xi >= yi)) -#define CLE(xr,xi,yr,yi) ((xr < yr && !npy_isnan(xi) && !npy_isnan(yi)) \ - || (xr == yr && xi <= yi)) -#define CGT(xr,xi,yr,yi) ((xr > yr && !npy_isnan(xi) && !npy_isnan(yi)) \ - || (xr == yr && xi > yi)) -#define CLT(xr,xi,yr,yi) ((xr < yr && !npy_isnan(xi) && !npy_isnan(yi)) \ - || (xr == yr && xi < yi)) -#define CEQ(xr,xi,yr,yi) (xr == yr 
&& xi == yi) -#define CNE(xr,xi,yr,yi) (xr != yr || xi != yi) - -/**begin repeat - * complex types - * #type = float, double, longdouble# - * #TYPE = FLOAT, DOUBLE, LONGDOUBLE# - * #c = f, , l# - * #C = F, , L# - */ - -/**begin repeat1 - * arithmetic - * #kind = add, subtract# - * #OP = +, -# - */ -NPY_NO_EXPORT void -C@TYPE@_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = ((@type@ *)ip1)[1]; - const @type@ in2r = ((@type@ *)ip2)[0]; - const @type@ in2i = ((@type@ *)ip2)[1]; - ((@type@ *)op1)[0] = in1r @OP@ in2r; - ((@type@ *)op1)[1] = in1i @OP@ in2i; - } -} -/**end repeat1**/ - -NPY_NO_EXPORT void -C@TYPE@_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = ((@type@ *)ip1)[1]; - const @type@ in2r = ((@type@ *)ip2)[0]; - const @type@ in2i = ((@type@ *)ip2)[1]; - ((@type@ *)op1)[0] = in1r*in2r - in1i*in2i; - ((@type@ *)op1)[1] = in1r*in2i + in1i*in2r; - } -} - -NPY_NO_EXPORT void -C@TYPE@_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = ((@type@ *)ip1)[1]; - const @type@ in2r = ((@type@ *)ip2)[0]; - const @type@ in2i = ((@type@ *)ip2)[1]; - if (npy_fabs@c@(in2r) >= npy_fabs@c@(in2i)) { - const @type@ rat = in2i/in2r; - const @type@ scl = 1.0@c@/(in2r + in2i*rat); - ((@type@ *)op1)[0] = (in1r + in1i*rat)*scl; - ((@type@ *)op1)[1] = (in1i - in1r*rat)*scl; - } - else { - const @type@ rat = in2r/in2i; - const @type@ scl = 1.0@c@/(in2i + in2r*rat); - ((@type@ *)op1)[0] = (in1r*rat + in1i)*scl; - ((@type@ *)op1)[1] = (in1i*rat - in1r)*scl; - } - } -} - -NPY_NO_EXPORT void -C@TYPE@_floor_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = ((@type@ 
*)ip1)[1]; - const @type@ in2r = ((@type@ *)ip2)[0]; - const @type@ in2i = ((@type@ *)ip2)[1]; - if (npy_fabs@c@(in2r) >= npy_fabs@c@(in2i)) { - const @type@ rat = in2i/in2r; - ((@type@ *)op1)[0] = npy_floor@c@((in1r + in1i*rat)/(in2r + in2i*rat)); - ((@type@ *)op1)[1] = 0; - } - else { - const @type@ rat = in2r/in2i; - ((@type@ *)op1)[0] = npy_floor@c@((in1r*rat + in1i)/(in2i + in2r*rat)); - ((@type@ *)op1)[1] = 0; - } - } -} - -/**begin repeat1 - * #kind= greater, greater_equal, less, less_equal, equal, not_equal# - * #OP = CGT, CGE, CLT, CLE, CEQ, CNE# - */ -NPY_NO_EXPORT void -C@TYPE@_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = ((@type@ *)ip1)[1]; - const @type@ in2r = ((@type@ *)ip2)[0]; - const @type@ in2i = ((@type@ *)ip2)[1]; - *((Bool *)op1) = @OP@(in1r,in1i,in2r,in2i); - } -} -/**end repeat1**/ - -/**begin repeat1 - #kind = logical_and, logical_or# - #OP1 = ||, ||# - #OP2 = &&, ||# -*/ -NPY_NO_EXPORT void -C@TYPE@_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = ((@type@ *)ip1)[1]; - const @type@ in2r = ((@type@ *)ip2)[0]; - const @type@ in2i = ((@type@ *)ip2)[1]; - *((Bool *)op1) = (in1r @OP1@ in1i) @OP2@ (in2r @OP1@ in2i); - } -} -/**end repeat1**/ - -NPY_NO_EXPORT void -C@TYPE@_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = ((@type@ *)ip1)[1]; - const @type@ in2r = ((@type@ *)ip2)[0]; - const @type@ in2i = ((@type@ *)ip2)[1]; - const Bool tmp1 = (in1r || in1i); - const Bool tmp2 = (in2r || in2i); - *((Bool *)op1) = (tmp1 && !tmp2) || (!tmp1 && tmp2); - } -} - -NPY_NO_EXPORT void -C@TYPE@_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const @type@ in1r = ((@type@ 
*)ip1)[0]; - const @type@ in1i = ((@type@ *)ip1)[1]; - *((Bool *)op1) = !(in1r || in1i); - } -} - -/**begin repeat1 - * #kind = isnan, isinf, isfinite# - * #func = npy_isnan, npy_isinf, npy_isfinite# - * #OP = ||, ||, &&# - **/ -NPY_NO_EXPORT void -C@TYPE@_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = ((@type@ *)ip1)[1]; - *((Bool *)op1) = @func@(in1r) @OP@ @func@(in1i); - } -} -/**end repeat1**/ - -NPY_NO_EXPORT void -C@TYPE@_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)) -{ - UNARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = ((@type@ *)ip1)[1]; - ((@type@ *)op1)[0] = in1r*in1r - in1i*in1i; - ((@type@ *)op1)[1] = in1r*in1i + in1i*in1r; - } -} - -NPY_NO_EXPORT void -C@TYPE@_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)) -{ - UNARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = ((@type@ *)ip1)[1]; - if (npy_fabs@c@(in1i) <= npy_fabs@c@(in1r)) { - const @type@ r = in1i/in1r; - const @type@ d = in1r + in1i*r; - ((@type@ *)op1)[0] = 1/d; - ((@type@ *)op1)[1] = -r/d; - } else { - const @type@ r = in1r/in1i; - const @type@ d = in1r*r + in1i; - ((@type@ *)op1)[0] = r/d; - ((@type@ *)op1)[1] = -1/d; - } - } -} - -NPY_NO_EXPORT void -C@TYPE@_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)) -{ - OUTPUT_LOOP { - ((@type@ *)op1)[0] = 1; - ((@type@ *)op1)[1] = 0; - } -} - -NPY_NO_EXPORT void -C@TYPE@_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) { - UNARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = ((@type@ *)ip1)[1]; - ((@type@ *)op1)[0] = in1r; - ((@type@ *)op1)[1] = -in1i; - } -} - -NPY_NO_EXPORT void -C@TYPE@_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = 
((@type@ *)ip1)[1]; - *((@type@ *)op1) = npy_hypot@c@(in1r, in1i); - } -} - -NPY_NO_EXPORT void -C@TYPE@__arg(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - UNARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = ((@type@ *)ip1)[1]; - *((@type@ *)op1) = npy_atan2@c@(in1i, in1r); - } -} - -NPY_NO_EXPORT void -C@TYPE@_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - /* fixme: sign of nan is currently 0 */ - UNARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = ((@type@ *)ip1)[1]; - ((@type@ *)op1)[0] = CGT(in1r, in1i, 0, 0) ? 1 : - (CLT(in1r, in1i, 0, 0) ? -1 : - (CEQ(in1r, in1i, 0, 0) ? 0 : NPY_NAN@C@)); - ((@type@ *)op1)[1] = 0; - } -} - -/**begin repeat1 - * #kind = maximum, minimum# - * #OP = CGE, CLE# - */ -NPY_NO_EXPORT void -C@TYPE@_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = ((@type@ *)ip1)[1]; - const @type@ in2r = ((@type@ *)ip2)[0]; - const @type@ in2i = ((@type@ *)ip2)[1]; - if (@OP@(in1r, in1i, in2r, in2i) || npy_isnan(in1r) || npy_isnan(in1i)) { - ((@type@ *)op1)[0] = in1r; - ((@type@ *)op1)[1] = in1i; - } - else { - ((@type@ *)op1)[0] = in2r; - ((@type@ *)op1)[1] = in2i; - } - } -} -/**end repeat1**/ - -/**begin repeat1 - * #kind = fmax, fmin# - * #OP = CGE, CLE# - */ -NPY_NO_EXPORT void -C@TYPE@_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - BINARY_LOOP { - const @type@ in1r = ((@type@ *)ip1)[0]; - const @type@ in1i = ((@type@ *)ip1)[1]; - const @type@ in2r = ((@type@ *)ip2)[0]; - const @type@ in2i = ((@type@ *)ip2)[1]; - if (@OP@(in1r, in1i, in2r, in2i) || npy_isnan(in2r) || npy_isnan(in2i)) { - ((@type@ *)op1)[0] = in1r; - ((@type@ *)op1)[1] = in1i; - } - else { - ((@type@ *)op1)[0] = in2r; - ((@type@ *)op1)[1] = in2i; - } - } -} -/**end repeat1**/ - -#define C@TYPE@_true_divide C@TYPE@_divide - -/**end 
repeat**/ - -#undef CGE -#undef CLE -#undef CGT -#undef CLT -#undef CEQ -#undef CNE - -/* - ***************************************************************************** - ** OBJECT LOOPS ** - ***************************************************************************** - */ - -/**begin repeat - * #kind = equal, not_equal, greater, greater_equal, less, less_equal# - * #OP = EQ, NE, GT, GE, LT, LE# - */ -NPY_NO_EXPORT void -OBJECT_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) { - BINARY_LOOP { - PyObject *in1 = *(PyObject **)ip1; - PyObject *in2 = *(PyObject **)ip2; - int ret = PyObject_RichCompareBool(in1, in2, Py_@OP@); - if (ret == -1) { - return; - } - *((Bool *)op1) = (Bool)ret; - } -} -/**end repeat**/ - -NPY_NO_EXPORT void -OBJECT_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ -#if defined(NPY_PY3K) - PyObject *zero = PyLong_FromLong(0); - UNARY_LOOP { - PyObject *in1 = *(PyObject **)ip1; - PyObject **out = (PyObject **)op1; - int v; - PyObject *ret; - PyObject_Cmp(in1, zero, &v); - ret = PyLong_FromLong(v); - if (PyErr_Occurred()) { - return; - } - Py_XDECREF(*out); - *out = ret; - } - Py_DECREF(zero); -#else - PyObject *zero = PyInt_FromLong(0); - UNARY_LOOP { - PyObject *in1 = *(PyObject **)ip1; - PyObject **out = (PyObject **)op1; - PyObject *ret = PyInt_FromLong(PyObject_Compare(in1, zero)); - if (PyErr_Occurred()) { - return; - } - Py_XDECREF(*out); - *out = ret; - } - Py_DECREF(zero); -#endif -} - -/* - ***************************************************************************** - ** END LOOPS ** - ***************************************************************************** - */ - - diff --git a/pythonPackages/numpy/numpy/core/src/umath/loops.h b/pythonPackages/numpy/numpy/core/src/umath/loops.h deleted file mode 100755 index 216179b253..0000000000 --- a/pythonPackages/numpy/numpy/core/src/umath/loops.h +++ /dev/null @@ -1,2400 +0,0 @@ - -/* - 
***************************************************************************** - ** This file was autogenerated from a template DO NOT EDIT!!!! ** - ** Changes should be made to the original source (.src) file ** - ***************************************************************************** - */ - -#line 1 -/* -*- c -*- */ -/* - * vim:syntax=c - */ - -#ifndef _NPY_UMATH_LOOPS_H_ -#define _NPY_UMATH_LOOPS_H_ - -#define BOOL_invert BOOL_logical_not -#define BOOL_negative BOOL_logical_not -#define BOOL_add BOOL_logical_or -#define BOOL_bitwise_and BOOL_logical_and -#define BOOL_bitwise_or BOOL_logical_or -#define BOOL_bitwise_xor BOOL_logical_xor -#define BOOL_multiply BOOL_logical_and -#define BOOL_subtract BOOL_logical_xor -#define BOOL_fmax BOOL_maximum -#define BOOL_fmin BOOL_minimum - -/* - ***************************************************************************** - ** BOOLEAN LOOPS ** - ***************************************************************************** - */ - -#line 32 - -NPY_NO_EXPORT void -BOOL_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_logical_xor(char 
**args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_bitwise_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_bitwise_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_bitwise_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_fmax(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_fmin(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_invert(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_negative(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 32 - -NPY_NO_EXPORT void -BOOL_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 41 -NPY_NO_EXPORT void -BOOL_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 41 -NPY_NO_EXPORT void -BOOL_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 49 -NPY_NO_EXPORT void -BOOL_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 49 -NPY_NO_EXPORT void -BOOL_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -BOOL_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -/* - ***************************************************************************** - ** INTEGER LOOPS - ***************************************************************************** 
- */ - -#line 67 - -#line 73 - -#define BYTE_floor_divide BYTE_divide -#define BYTE_fmax BYTE_maximum -#define BYTE_fmin BYTE_minimum - -NPY_NO_EXPORT void -BYTE_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -BYTE_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -BYTE_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -BYTE_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -BYTE_negative(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -BYTE_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -BYTE_invert(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 105 -NPY_NO_EXPORT void -BYTE_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -BYTE_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -BYTE_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -BYTE_bitwise_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -BYTE_bitwise_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -BYTE_bitwise_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -BYTE_left_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -BYTE_right_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -#line 115 -NPY_NO_EXPORT void -BYTE_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void 
-BYTE_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -BYTE_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -BYTE_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -BYTE_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -BYTE_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -BYTE_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -BYTE_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -BYTE_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -BYTE_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -BYTE_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -BYTE_true_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -BYTE_power(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -BYTE_fmod(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 73 - -#define UBYTE_floor_divide UBYTE_divide -#define UBYTE_fmax UBYTE_maximum -#define UBYTE_fmin UBYTE_minimum - -NPY_NO_EXPORT void -UBYTE_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -UBYTE_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -UBYTE_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -UBYTE_conjugate(char **args, intp *dimensions, intp *steps, void 
*NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UBYTE_negative(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UBYTE_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UBYTE_invert(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 105 -NPY_NO_EXPORT void -UBYTE_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -UBYTE_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -UBYTE_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -UBYTE_bitwise_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -UBYTE_bitwise_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -UBYTE_bitwise_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -UBYTE_left_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -UBYTE_right_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -#line 115 -NPY_NO_EXPORT void -UBYTE_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -UBYTE_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -UBYTE_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -UBYTE_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -UBYTE_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -UBYTE_less_equal(char **args, intp *dimensions, intp *steps, void 
*NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -UBYTE_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -UBYTE_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -UBYTE_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -UBYTE_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -UBYTE_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -UBYTE_true_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UBYTE_power(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UBYTE_fmod(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -NPY_NO_EXPORT void -UBYTE_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -BYTE_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UBYTE_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -BYTE_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -BYTE_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UBYTE_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -BYTE_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UBYTE_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 67 - -#line 73 - -#define SHORT_floor_divide SHORT_divide -#define SHORT_fmax SHORT_maximum -#define SHORT_fmin SHORT_minimum - -NPY_NO_EXPORT void -SHORT_ones_like(char **args, intp *dimensions, intp *steps, void 
*NPY_UNUSED(data)); - -NPY_NO_EXPORT void -SHORT_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -SHORT_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -SHORT_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -SHORT_negative(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -SHORT_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -SHORT_invert(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 105 -NPY_NO_EXPORT void -SHORT_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -SHORT_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -SHORT_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -SHORT_bitwise_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -SHORT_bitwise_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -SHORT_bitwise_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -SHORT_left_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -SHORT_right_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -#line 115 -NPY_NO_EXPORT void -SHORT_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -SHORT_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -SHORT_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 
-NPY_NO_EXPORT void -SHORT_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -SHORT_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -SHORT_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -SHORT_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -SHORT_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -SHORT_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -SHORT_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -SHORT_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -SHORT_true_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -SHORT_power(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -SHORT_fmod(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 73 - -#define USHORT_floor_divide USHORT_divide -#define USHORT_fmax USHORT_maximum -#define USHORT_fmin USHORT_minimum - -NPY_NO_EXPORT void -USHORT_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -USHORT_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -USHORT_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -USHORT_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -USHORT_negative(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -USHORT_logical_not(char **args, intp *dimensions, 
intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -USHORT_invert(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 105 -NPY_NO_EXPORT void -USHORT_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -USHORT_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -USHORT_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -USHORT_bitwise_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -USHORT_bitwise_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -USHORT_bitwise_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -USHORT_left_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -USHORT_right_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -#line 115 -NPY_NO_EXPORT void -USHORT_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -USHORT_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -USHORT_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -USHORT_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -USHORT_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -USHORT_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -USHORT_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void 
-USHORT_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -USHORT_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -USHORT_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -USHORT_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -USHORT_true_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -USHORT_power(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -USHORT_fmod(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -NPY_NO_EXPORT void -USHORT_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -SHORT_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -USHORT_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -SHORT_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -SHORT_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -USHORT_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -SHORT_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -USHORT_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 67 - -#line 73 - -#define INT_floor_divide INT_divide -#define INT_fmax INT_maximum -#define INT_fmin INT_minimum - -NPY_NO_EXPORT void -INT_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -INT_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -INT_reciprocal(char **args, 
intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -INT_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -INT_negative(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -INT_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -INT_invert(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 105 -NPY_NO_EXPORT void -INT_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -INT_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -INT_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -INT_bitwise_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -INT_bitwise_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -INT_bitwise_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -INT_left_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -INT_right_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -#line 115 -NPY_NO_EXPORT void -INT_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -INT_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -INT_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -INT_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -INT_less(char **args, intp *dimensions, intp *steps, void 
*NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -INT_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -INT_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -INT_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -INT_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -INT_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -INT_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -INT_true_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -INT_power(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -INT_fmod(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 73 - -#define UINT_floor_divide UINT_divide -#define UINT_fmax UINT_maximum -#define UINT_fmin UINT_minimum - -NPY_NO_EXPORT void -UINT_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -UINT_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -UINT_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -UINT_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UINT_negative(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UINT_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UINT_invert(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 105 -NPY_NO_EXPORT void -UINT_add(char **args, intp *dimensions, intp *steps, void 
*NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -UINT_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -UINT_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -UINT_bitwise_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -UINT_bitwise_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -UINT_bitwise_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -UINT_left_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -UINT_right_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -#line 115 -NPY_NO_EXPORT void -UINT_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -UINT_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -UINT_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -UINT_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -UINT_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -UINT_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -UINT_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -UINT_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -UINT_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -UINT_maximum(char **args, intp *dimensions, 
intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -UINT_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -UINT_true_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UINT_power(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UINT_fmod(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -NPY_NO_EXPORT void -UINT_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -INT_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UINT_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -INT_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -INT_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UINT_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -INT_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -UINT_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 67 - -#line 73 - -#define LONG_floor_divide LONG_divide -#define LONG_fmax LONG_maximum -#define LONG_fmin LONG_minimum - -NPY_NO_EXPORT void -LONG_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -LONG_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -LONG_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -LONG_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONG_negative(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void 
-LONG_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONG_invert(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 105 -NPY_NO_EXPORT void -LONG_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -LONG_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -LONG_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -LONG_bitwise_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -LONG_bitwise_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -LONG_bitwise_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -LONG_left_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -LONG_right_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -#line 115 -NPY_NO_EXPORT void -LONG_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -LONG_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -LONG_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -LONG_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -LONG_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -LONG_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -LONG_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 
-NPY_NO_EXPORT void -LONG_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -LONG_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -LONG_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -LONG_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -LONG_true_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONG_power(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONG_fmod(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 73 - -#define ULONG_floor_divide ULONG_divide -#define ULONG_fmax ULONG_maximum -#define ULONG_fmin ULONG_minimum - -NPY_NO_EXPORT void -ULONG_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -ULONG_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -ULONG_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -ULONG_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONG_negative(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONG_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONG_invert(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 105 -NPY_NO_EXPORT void -ULONG_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -ULONG_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -ULONG_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - 
-#line 105 -NPY_NO_EXPORT void -ULONG_bitwise_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -ULONG_bitwise_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -ULONG_bitwise_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -ULONG_left_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -ULONG_right_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -#line 115 -NPY_NO_EXPORT void -ULONG_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -ULONG_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -ULONG_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -ULONG_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -ULONG_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -ULONG_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -ULONG_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -ULONG_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -ULONG_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -ULONG_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -ULONG_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -ULONG_true_divide(char **args, intp *dimensions, intp *steps, void 
*NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONG_power(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONG_fmod(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -NPY_NO_EXPORT void -ULONG_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONG_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONG_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONG_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONG_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONG_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONG_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONG_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 67 - -#line 73 - -#define LONGLONG_floor_divide LONGLONG_divide -#define LONGLONG_fmax LONGLONG_maximum -#define LONGLONG_fmin LONGLONG_minimum - -NPY_NO_EXPORT void -LONGLONG_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -LONGLONG_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -LONGLONG_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -LONGLONG_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONGLONG_negative(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONGLONG_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONGLONG_invert(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); 
- -#line 105 -NPY_NO_EXPORT void -LONGLONG_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -LONGLONG_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -LONGLONG_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -LONGLONG_bitwise_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -LONGLONG_bitwise_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -LONGLONG_bitwise_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -LONGLONG_left_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -LONGLONG_right_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -#line 115 -NPY_NO_EXPORT void -LONGLONG_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -LONGLONG_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -LONGLONG_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -LONGLONG_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -LONGLONG_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -LONGLONG_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -LONGLONG_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -LONGLONG_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void 
-LONGLONG_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -LONGLONG_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -LONGLONG_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -LONGLONG_true_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONGLONG_power(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONGLONG_fmod(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 73 - -#define ULONGLONG_floor_divide ULONGLONG_divide -#define ULONGLONG_fmax ULONGLONG_maximum -#define ULONGLONG_fmin ULONGLONG_minimum - -NPY_NO_EXPORT void -ULONGLONG_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -ULONGLONG_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -ULONGLONG_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -ULONGLONG_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONGLONG_negative(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONGLONG_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONGLONG_invert(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 105 -NPY_NO_EXPORT void -ULONGLONG_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -ULONGLONG_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -ULONGLONG_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void 
-ULONGLONG_bitwise_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -ULONGLONG_bitwise_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -ULONGLONG_bitwise_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -ULONGLONG_left_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 105 -NPY_NO_EXPORT void -ULONGLONG_right_shift(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -#line 115 -NPY_NO_EXPORT void -ULONGLONG_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -ULONGLONG_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -ULONGLONG_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -ULONGLONG_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -ULONGLONG_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -ULONGLONG_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -ULONGLONG_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 115 -NPY_NO_EXPORT void -ULONGLONG_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -ULONGLONG_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -ULONGLONG_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 126 -NPY_NO_EXPORT void -ULONGLONG_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -ULONGLONG_true_divide(char 
**args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONGLONG_power(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONGLONG_fmod(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -NPY_NO_EXPORT void -ULONGLONG_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONGLONG_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONGLONG_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONGLONG_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONGLONG_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONGLONG_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONGLONG_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -ULONGLONG_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -/* - ***************************************************************************** - ** FLOAT LOOPS ** - ***************************************************************************** - */ - - -#line 180 - - -#line 187 -NPY_NO_EXPORT void -FLOAT_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 187 -NPY_NO_EXPORT void -FLOAT_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 187 -NPY_NO_EXPORT void -FLOAT_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 187 -NPY_NO_EXPORT void -FLOAT_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 196 -NPY_NO_EXPORT void -FLOAT_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void 
-FLOAT_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -FLOAT_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -FLOAT_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -FLOAT_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -FLOAT_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -FLOAT_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -FLOAT_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -FLOAT_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -FLOAT_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -FLOAT_isnan(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -FLOAT_isinf(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -FLOAT_isfinite(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -FLOAT_signbit(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -FLOAT_copysign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -FLOAT_nextafter(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -FLOAT_spacing(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 218 -NPY_NO_EXPORT void -FLOAT_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 218 -NPY_NO_EXPORT void 
-FLOAT_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 226 -NPY_NO_EXPORT void -FLOAT_fmax(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 226 -NPY_NO_EXPORT void -FLOAT_fmin(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -FLOAT_floor_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -FLOAT_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -FLOAT_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -FLOAT_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - - -NPY_NO_EXPORT void -FLOAT_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -FLOAT_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -FLOAT_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -FLOAT_negative(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -FLOAT_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -FLOAT_modf(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#ifdef HAVE_FREXPF -NPY_NO_EXPORT void -FLOAT_frexp(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); -#endif - -#ifdef HAVE_LDEXPF -NPY_NO_EXPORT void -FLOAT_ldexp(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); -#endif - -#define FLOAT_true_divide FLOAT_divide - - -#line 180 - - -#line 187 -NPY_NO_EXPORT void -DOUBLE_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 187 -NPY_NO_EXPORT void -DOUBLE_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 187 -NPY_NO_EXPORT void 
-DOUBLE_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 187 -NPY_NO_EXPORT void -DOUBLE_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 196 -NPY_NO_EXPORT void -DOUBLE_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -DOUBLE_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -DOUBLE_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -DOUBLE_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -DOUBLE_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -DOUBLE_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -DOUBLE_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -DOUBLE_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -DOUBLE_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -DOUBLE_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -DOUBLE_isnan(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -DOUBLE_isinf(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -DOUBLE_isfinite(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -DOUBLE_signbit(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -DOUBLE_copysign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 
-NPY_NO_EXPORT void -DOUBLE_nextafter(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -DOUBLE_spacing(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 218 -NPY_NO_EXPORT void -DOUBLE_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 218 -NPY_NO_EXPORT void -DOUBLE_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 226 -NPY_NO_EXPORT void -DOUBLE_fmax(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 226 -NPY_NO_EXPORT void -DOUBLE_fmin(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -DOUBLE_floor_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -DOUBLE_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -DOUBLE_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -DOUBLE_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - - -NPY_NO_EXPORT void -DOUBLE_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -DOUBLE_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -DOUBLE_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -DOUBLE_negative(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -DOUBLE_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -DOUBLE_modf(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#ifdef HAVE_FREXP -NPY_NO_EXPORT void -DOUBLE_frexp(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); -#endif - -#ifdef HAVE_LDEXP -NPY_NO_EXPORT void -DOUBLE_ldexp(char **args, intp *dimensions, 
intp *steps, void *NPY_UNUSED(func)); -#endif - -#define DOUBLE_true_divide DOUBLE_divide - - -#line 180 - - -#line 187 -NPY_NO_EXPORT void -LONGDOUBLE_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 187 -NPY_NO_EXPORT void -LONGDOUBLE_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 187 -NPY_NO_EXPORT void -LONGDOUBLE_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 187 -NPY_NO_EXPORT void -LONGDOUBLE_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 196 -NPY_NO_EXPORT void -LONGDOUBLE_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -LONGDOUBLE_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -LONGDOUBLE_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -LONGDOUBLE_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -LONGDOUBLE_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -LONGDOUBLE_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -LONGDOUBLE_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 196 -NPY_NO_EXPORT void -LONGDOUBLE_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -LONGDOUBLE_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONGDOUBLE_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -LONGDOUBLE_isnan(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -LONGDOUBLE_isinf(char **args, 
intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -LONGDOUBLE_isfinite(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -LONGDOUBLE_signbit(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -LONGDOUBLE_copysign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -LONGDOUBLE_nextafter(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 210 -NPY_NO_EXPORT void -LONGDOUBLE_spacing(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 218 -NPY_NO_EXPORT void -LONGDOUBLE_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 218 -NPY_NO_EXPORT void -LONGDOUBLE_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 226 -NPY_NO_EXPORT void -LONGDOUBLE_fmax(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 226 -NPY_NO_EXPORT void -LONGDOUBLE_fmin(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -LONGDOUBLE_floor_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONGDOUBLE_remainder(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONGDOUBLE_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -LONGDOUBLE_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - - -NPY_NO_EXPORT void -LONGDOUBLE_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -LONGDOUBLE_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONGDOUBLE_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -LONGDOUBLE_negative(char 
**args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -LONGDOUBLE_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -LONGDOUBLE_modf(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#ifdef HAVE_FREXPL -NPY_NO_EXPORT void -LONGDOUBLE_frexp(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); -#endif - -#ifdef HAVE_LDEXPL -NPY_NO_EXPORT void -LONGDOUBLE_ldexp(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); -#endif - -#define LONGDOUBLE_true_divide LONGDOUBLE_divide - - - - -/* - ***************************************************************************** - ** COMPLEX LOOPS ** - ***************************************************************************** - */ - -#define CGE(xr,xi,yr,yi) (xr > yr || (xr == yr && xi >= yi)); -#define CLE(xr,xi,yr,yi) (xr < yr || (xr == yr && xi <= yi)); -#define CGT(xr,xi,yr,yi) (xr > yr || (xr == yr && xi > yi)); -#define CLT(xr,xi,yr,yi) (xr < yr || (xr == yr && xi < yi)); -#define CEQ(xr,xi,yr,yi) (xr == yr && xi == yi); -#define CNE(xr,xi,yr,yi) (xr != yr || xi != yi); - -#line 298 - -#line 304 -NPY_NO_EXPORT void -CFLOAT_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 304 -NPY_NO_EXPORT void -CFLOAT_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -NPY_NO_EXPORT void -CFLOAT_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CFLOAT_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CFLOAT_floor_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CFLOAT_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CFLOAT_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 
-NPY_NO_EXPORT void -CFLOAT_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CFLOAT_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CFLOAT_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CFLOAT_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 331 -NPY_NO_EXPORT void -CFLOAT_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 331 -NPY_NO_EXPORT void -CFLOAT_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -CFLOAT_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CFLOAT_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); -#line 345 -NPY_NO_EXPORT void -CFLOAT_isnan(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 345 -NPY_NO_EXPORT void -CFLOAT_isinf(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 345 -NPY_NO_EXPORT void -CFLOAT_isfinite(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -CFLOAT_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -CFLOAT_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -CFLOAT_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -CFLOAT_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CFLOAT_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CFLOAT__arg(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CFLOAT_sign(char **args, intp *dimensions, intp 
*steps, void *NPY_UNUSED(func)); - -#line 374 -NPY_NO_EXPORT void -CFLOAT_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 374 -NPY_NO_EXPORT void -CFLOAT_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 382 -NPY_NO_EXPORT void -CFLOAT_fmax(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 382 -NPY_NO_EXPORT void -CFLOAT_fmin(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#define CFLOAT_true_divide CFLOAT_divide - - -#line 298 - -#line 304 -NPY_NO_EXPORT void -CDOUBLE_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 304 -NPY_NO_EXPORT void -CDOUBLE_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -NPY_NO_EXPORT void -CDOUBLE_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CDOUBLE_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CDOUBLE_floor_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CDOUBLE_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CDOUBLE_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CDOUBLE_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CDOUBLE_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CDOUBLE_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CDOUBLE_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 331 -NPY_NO_EXPORT void -CDOUBLE_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 331 
-NPY_NO_EXPORT void -CDOUBLE_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -CDOUBLE_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CDOUBLE_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); -#line 345 -NPY_NO_EXPORT void -CDOUBLE_isnan(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 345 -NPY_NO_EXPORT void -CDOUBLE_isinf(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 345 -NPY_NO_EXPORT void -CDOUBLE_isfinite(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -CDOUBLE_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -CDOUBLE_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -CDOUBLE_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -CDOUBLE_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CDOUBLE_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CDOUBLE__arg(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CDOUBLE_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 374 -NPY_NO_EXPORT void -CDOUBLE_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 374 -NPY_NO_EXPORT void -CDOUBLE_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 382 -NPY_NO_EXPORT void -CDOUBLE_fmax(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 382 -NPY_NO_EXPORT void -CDOUBLE_fmin(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#define CDOUBLE_true_divide CDOUBLE_divide - - -#line 298 - -#line 304 
-NPY_NO_EXPORT void -CLONGDOUBLE_add(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 304 -NPY_NO_EXPORT void -CLONGDOUBLE_subtract(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - - -NPY_NO_EXPORT void -CLONGDOUBLE_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CLONGDOUBLE_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CLONGDOUBLE_floor_divide(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CLONGDOUBLE_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CLONGDOUBLE_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CLONGDOUBLE_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CLONGDOUBLE_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CLONGDOUBLE_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 322 -NPY_NO_EXPORT void -CLONGDOUBLE_not_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 331 -NPY_NO_EXPORT void -CLONGDOUBLE_logical_and(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 331 -NPY_NO_EXPORT void -CLONGDOUBLE_logical_or(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -CLONGDOUBLE_logical_xor(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CLONGDOUBLE_logical_not(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); -#line 345 -NPY_NO_EXPORT void -CLONGDOUBLE_isnan(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 345 -NPY_NO_EXPORT void -CLONGDOUBLE_isinf(char **args, intp 
*dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 345 -NPY_NO_EXPORT void -CLONGDOUBLE_isfinite(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -CLONGDOUBLE_square(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -CLONGDOUBLE_reciprocal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -CLONGDOUBLE_ones_like(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(data)); - -NPY_NO_EXPORT void -CLONGDOUBLE_conjugate(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CLONGDOUBLE_absolute(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CLONGDOUBLE__arg(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -NPY_NO_EXPORT void -CLONGDOUBLE_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 374 -NPY_NO_EXPORT void -CLONGDOUBLE_maximum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 374 -NPY_NO_EXPORT void -CLONGDOUBLE_minimum(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#line 382 -NPY_NO_EXPORT void -CLONGDOUBLE_fmax(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 382 -NPY_NO_EXPORT void -CLONGDOUBLE_fmin(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -#define CLONGDOUBLE_true_divide CLONGDOUBLE_divide - - - -#undef CGE -#undef CLE -#undef CGT -#undef CLT -#undef CEQ -#undef CNE - - -/* - ***************************************************************************** - ** OBJECT LOOPS ** - ***************************************************************************** - */ - -#line 407 -NPY_NO_EXPORT void -OBJECT_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 407 -NPY_NO_EXPORT void -OBJECT_not_equal(char **args, intp *dimensions, intp *steps, void 
*NPY_UNUSED(func)); - -#line 407 -NPY_NO_EXPORT void -OBJECT_greater(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 407 -NPY_NO_EXPORT void -OBJECT_greater_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 407 -NPY_NO_EXPORT void -OBJECT_less(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -#line 407 -NPY_NO_EXPORT void -OBJECT_less_equal(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - - -NPY_NO_EXPORT void -OBJECT_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)); - -/* - ***************************************************************************** - ** END LOOPS ** - ***************************************************************************** - */ - -#endif - diff --git a/pythonPackages/numpy/numpy/core/src/umath/ufunc_object.c b/pythonPackages/numpy/numpy/core/src/umath/ufunc_object.c deleted file mode 100755 index fe4ff71476..0000000000 --- a/pythonPackages/numpy/numpy/core/src/umath/ufunc_object.c +++ /dev/null @@ -1,4440 +0,0 @@ -/* - * Python Universal Functions Object -- Math for all types, plus fast - * arrays math - * - * Full description - * - * This supports mathematical (and Boolean) functions on arrays and other python - * objects. Math on large arrays of basic C types is rather efficient. - * - * Travis E. Oliphant 2005, 2006 oliphant@ee.byu.edu (oliphant.travis@ieee.org) - * Brigham Young University - * - * based on the - * - * Original Implementation: - * Copyright (c) 1995, 1996, 1997 Jim Hugunin, hugunin@mit.edu - * - * with inspiration and code from - * Numarray - * Space Science Telescope Institute - * J. 
Todd Miller - * Perry Greenfield - * Rick White - * - */ -#define _UMATHMODULE - -#include "Python.h" - -#include "npy_config.h" -#ifdef ENABLE_SEPARATE_COMPILATION -#define PY_ARRAY_UNIQUE_SYMBOL _npy_umathmodule_ARRAY_API -#define NO_IMPORT_ARRAY -#endif - -#include "numpy/npy_3kcompat.h" - -#include "numpy/noprefix.h" -#include "numpy/ufuncobject.h" - -#include "ufunc_object.h" - -#define USE_USE_DEFAULTS 1 - -/* ---------------------------------------------------------------- */ - -static int -_does_loop_use_arrays(void *data); - -/* - * fpstatus is the ufunc_formatted hardware status - * errmask is the handling mask specified by the user. - * errobj is a Python object with (string, callable object or None) - * or NULL - */ - -/* - * 2. for each of the flags - * determine whether to ignore, warn, raise error, or call Python function. - * If ignore, do nothing - * If warn, print a warning and continue - * If raise return an error - * If call, call a user-defined function with string - */ - -static int -_error_handler(int method, PyObject *errobj, char *errtype, int retstatus, int *first) -{ - PyObject *pyfunc, *ret, *args; - char *name = PyBytes_AS_STRING(PyTuple_GET_ITEM(errobj,0)); - char msg[100]; - ALLOW_C_API_DEF; - - ALLOW_C_API; - switch(method) { - case UFUNC_ERR_WARN: - PyOS_snprintf(msg, sizeof(msg), "%s encountered in %s", errtype, name); - if (PyErr_Warn(PyExc_RuntimeWarning, msg) < 0) { - goto fail; - } - break; - case UFUNC_ERR_RAISE: - PyErr_Format(PyExc_FloatingPointError, "%s encountered in %s", - errtype, name); - goto fail; - case UFUNC_ERR_CALL: - pyfunc = PyTuple_GET_ITEM(errobj, 1); - if (pyfunc == Py_None) { - PyErr_Format(PyExc_NameError, - "python callback specified for %s (in " \ - " %s) but no function found.", - errtype, name); - goto fail; - } - args = Py_BuildValue("NN", PyUString_FromString(errtype), - PyInt_FromLong((long) retstatus)); - if (args == NULL) { - goto fail; - } - ret = PyObject_CallObject(pyfunc, args); - 
Py_DECREF(args); - if (ret == NULL) { - goto fail; - } - Py_DECREF(ret); - break; - case UFUNC_ERR_PRINT: - if (*first) { - fprintf(stderr, "Warning: %s encountered in %s\n", errtype, name); - *first = 0; - } - break; - case UFUNC_ERR_LOG: - if (first) { - *first = 0; - pyfunc = PyTuple_GET_ITEM(errobj, 1); - if (pyfunc == Py_None) { - PyErr_Format(PyExc_NameError, - "log specified for %s (in %s) but no " \ - "object with write method found.", - errtype, name); - goto fail; - } - PyOS_snprintf(msg, sizeof(msg), - "Warning: %s encountered in %s\n", errtype, name); - ret = PyObject_CallMethod(pyfunc, "write", "s", msg); - if (ret == NULL) { - goto fail; - } - Py_DECREF(ret); - } - break; - } - DISABLE_C_API; - return 0; - -fail: - DISABLE_C_API; - return -1; -} - - -/*UFUNC_API*/ -NPY_NO_EXPORT int -PyUFunc_getfperr(void) -{ - int retstatus; - UFUNC_CHECK_STATUS(retstatus); - return retstatus; -} - -#define HANDLEIT(NAME, str) {if (retstatus & UFUNC_FPE_##NAME) { \ - handle = errmask & UFUNC_MASK_##NAME; \ - if (handle && \ - _error_handler(handle >> UFUNC_SHIFT_##NAME, \ - errobj, str, retstatus, first) < 0) \ - return -1; \ - }} - -/*UFUNC_API*/ -NPY_NO_EXPORT int -PyUFunc_handlefperr(int errmask, PyObject *errobj, int retstatus, int *first) -{ - int handle; - if (errmask && retstatus) { - HANDLEIT(DIVIDEBYZERO, "divide by zero"); - HANDLEIT(OVERFLOW, "overflow"); - HANDLEIT(UNDERFLOW, "underflow"); - HANDLEIT(INVALID, "invalid value"); - } - return 0; -} - -#undef HANDLEIT - - -/*UFUNC_API*/ -NPY_NO_EXPORT int -PyUFunc_checkfperr(int errmask, PyObject *errobj, int *first) -{ - int retstatus; - - /* 1. 
check hardware flag --- this is platform dependent code */ - retstatus = PyUFunc_getfperr(); - return PyUFunc_handlefperr(errmask, errobj, retstatus, first); -} - - -/* Checking the status flag clears it */ -/*UFUNC_API*/ -NPY_NO_EXPORT void -PyUFunc_clearfperr() -{ - PyUFunc_getfperr(); -} - - -#define NO_UFUNCLOOP 0 -#define ZERO_EL_REDUCELOOP 0 -#define ONE_UFUNCLOOP 1 -#define ONE_EL_REDUCELOOP 1 -#define NOBUFFER_UFUNCLOOP 2 -#define NOBUFFER_REDUCELOOP 2 -#define BUFFER_UFUNCLOOP 3 -#define BUFFER_REDUCELOOP 3 -#define SIGNATURE_NOBUFFER_UFUNCLOOP 4 - - -static char -_lowest_type(char intype) -{ - switch(intype) { - /* case PyArray_BYTE */ - case PyArray_SHORT: - case PyArray_INT: - case PyArray_LONG: - case PyArray_LONGLONG: - return PyArray_BYTE; - /* case PyArray_UBYTE */ - case PyArray_USHORT: - case PyArray_UINT: - case PyArray_ULONG: - case PyArray_ULONGLONG: - return PyArray_UBYTE; - /* case PyArray_FLOAT:*/ - case PyArray_DOUBLE: - case PyArray_LONGDOUBLE: - return PyArray_FLOAT; - /* case PyArray_CFLOAT:*/ - case PyArray_CDOUBLE: - case PyArray_CLONGDOUBLE: - return PyArray_CFLOAT; - default: - return intype; - } -} - -static char *_types_msg = "function not supported for these types, " \ - "and can't coerce safely to supported types"; - -/* - * This function analyzes the input arguments - * and determines an appropriate __array_prepare__ function to call - * for the outputs. - * - * If an output argument is provided, then it is wrapped - * with its own __array_prepare__ not with the one determined by - * the input arguments. - * - * if the provided output argument is already an ndarray, - * the wrapping function is None (which means no wrapping will - * be done --- not even PyArray_Return). - * - * A NULL is placed in output_wrap for outputs that - * should just have PyArray_Return called. 
- */ -static void -_find_array_prepare(PyObject *args, PyObject **output_wrap, int nin, int nout) -{ - Py_ssize_t nargs; - int i; - int np = 0; - PyObject *with_wrap[NPY_MAXARGS], *wraps[NPY_MAXARGS]; - PyObject *obj, *wrap = NULL; - - nargs = PyTuple_GET_SIZE(args); - for (i = 0; i < nin; i++) { - obj = PyTuple_GET_ITEM(args, i); - if (PyArray_CheckExact(obj) || PyArray_IsAnyScalar(obj)) { - continue; - } - wrap = PyObject_GetAttrString(obj, "__array_prepare__"); - if (wrap) { - if (PyCallable_Check(wrap)) { - with_wrap[np] = obj; - wraps[np] = wrap; - ++np; - } - else { - Py_DECREF(wrap); - wrap = NULL; - } - } - else { - PyErr_Clear(); - } - } - if (np > 0) { - /* If we have some wraps defined, find the one of highest priority */ - wrap = wraps[0]; - if (np > 1) { - double maxpriority = PyArray_GetPriority(with_wrap[0], - PyArray_SUBTYPE_PRIORITY); - for (i = 1; i < np; ++i) { - double priority = PyArray_GetPriority(with_wrap[i], - PyArray_SUBTYPE_PRIORITY); - if (priority > maxpriority) { - maxpriority = priority; - Py_DECREF(wrap); - wrap = wraps[i]; - } - else { - Py_DECREF(wraps[i]); - } - } - } - } - - /* - * Here wrap is the wrapping function determined from the - * input arrays (could be NULL). - * - * For all the output arrays decide what to do. - * - * 1) Use the wrap function determined from the input arrays - * This is the default if the output array is not - * passed in. - * - * 2) Use the __array_prepare__ method of the output object. - * This is special cased for - * exact ndarray so that no PyArray_Return is - * done in that case. 
- */ - for (i = 0; i < nout; i++) { - int j = nin + i; - int incref = 1; - output_wrap[i] = wrap; - if (j < nargs) { - obj = PyTuple_GET_ITEM(args, j); - if (obj == Py_None) { - continue; - } - if (PyArray_CheckExact(obj)) { - output_wrap[i] = Py_None; - } - else { - PyObject *owrap = PyObject_GetAttrString(obj, - "__array_prepare__"); - incref = 0; - if (!(owrap) || !(PyCallable_Check(owrap))) { - Py_XDECREF(owrap); - owrap = wrap; - incref = 1; - PyErr_Clear(); - } - output_wrap[i] = owrap; - } - } - if (incref) { - Py_XINCREF(output_wrap[i]); - } - } - Py_XDECREF(wrap); - return; -} - -/* - * Called for non-NULL user-defined functions. - * The object should be a CObject pointing to a linked-list of functions - * storing the function, data, and signature of all user-defined functions. - * There must be a match with the input argument types or an error - * will occur. - */ -static int -_find_matching_userloop(PyObject *obj, int *arg_types, - PyArray_SCALARKIND *scalars, - PyUFuncGenericFunction *function, void **data, - int nargs, int nin) -{ - PyUFunc_Loop1d *funcdata; - int i; - - funcdata = (PyUFunc_Loop1d *)NpyCapsule_AsVoidPtr(obj); - while (funcdata != NULL) { - for (i = 0; i < nin; i++) { - if (!PyArray_CanCoerceScalar(arg_types[i], - funcdata->arg_types[i], - scalars[i])) - break; - } - if (i == nin) { - /* match found */ - *function = funcdata->func; - *data = funcdata->data; - /* Make sure actual arg_types supported by the loop are used */ - for (i = 0; i < nargs; i++) { - arg_types[i] = funcdata->arg_types[i]; - } - return 0; - } - funcdata = funcdata->next; - } - return -1; -} - -/* - * if only one type is specified then it is the "first" output data-type - * and the first signature matching this output data-type is returned. 
- * - * if a tuple of types is specified then an exact match to the signature - * is searched and it must match exactly or an error occurs - */ -static int -extract_specified_loop(PyUFuncObject *self, int *arg_types, - PyUFuncGenericFunction *function, void **data, - PyObject *type_tup, int userdef) -{ - Py_ssize_t n = 1; - int *rtypenums; - static char msg[] = "loop written to specified type(s) not found"; - PyArray_Descr *dtype; - int nargs; - int i, j; - int strtype = 0; - - nargs = self->nargs; - if (PyTuple_Check(type_tup)) { - n = PyTuple_GET_SIZE(type_tup); - if (n != 1 && n != nargs) { - PyErr_Format(PyExc_ValueError, - "a type-tuple must be specified " \ - "of length 1 or %d for %s", nargs, - self->name ? self->name : "(unknown)"); - return -1; - } - } - else if (PyString_Check(type_tup)) { - Py_ssize_t slen; - char *thestr; - - slen = PyString_GET_SIZE(type_tup); - thestr = PyString_AS_STRING(type_tup); - for (i = 0; i < slen - 2; i++) { - if (thestr[i] == '-' && thestr[i+1] == '>') { - break; - } - } - if (i < slen-2) { - strtype = 1; - n = slen - 2; - if (i != self->nin - || slen - 2 - i != self->nout) { - PyErr_Format(PyExc_ValueError, - "a type-string for %s, " \ - "requires %d typecode(s) before " \ - "and %d after the -> sign", - self->name ?
self->name : "(unknown)", - self->nin, self->nout); - return -1; - } - } - } - rtypenums = (int *)_pya_malloc(n*sizeof(int)); - if (rtypenums == NULL) { - PyErr_NoMemory(); - return -1; - } - - if (strtype) { - char *ptr; - ptr = PyString_AS_STRING(type_tup); - i = 0; - while (i < n) { - if (*ptr == '-' || *ptr == '>') { - ptr++; - continue; - } - dtype = PyArray_DescrFromType((int) *ptr); - if (dtype == NULL) { - goto fail; - } - rtypenums[i] = dtype->type_num; - Py_DECREF(dtype); - ptr++; - i++; - } - } - else if (PyTuple_Check(type_tup)) { - for (i = 0; i < n; i++) { - if (PyArray_DescrConverter(PyTuple_GET_ITEM(type_tup, i), - &dtype) == NPY_FAIL) { - goto fail; - } - rtypenums[i] = dtype->type_num; - Py_DECREF(dtype); - } - } - else { - if (PyArray_DescrConverter(type_tup, &dtype) == NPY_FAIL) { - goto fail; - } - rtypenums[0] = dtype->type_num; - Py_DECREF(dtype); - } - - if (userdef > 0) { - /* search in the user-defined functions */ - PyObject *key, *obj; - PyUFunc_Loop1d *funcdata; - - obj = NULL; - key = PyInt_FromLong((long) userdef); - if (key == NULL) { - goto fail; - } - obj = PyDict_GetItem(self->userloops, key); - Py_DECREF(key); - if (obj == NULL) { - PyErr_SetString(PyExc_TypeError, - "user-defined type used in ufunc" \ - " with no registered loops"); - goto fail; - } - /* - * extract the correct function - * data and argtypes - */ - funcdata = (PyUFunc_Loop1d *)NpyCapsule_AsVoidPtr(obj); - while (funcdata != NULL) { - if (n != 1) { - for (i = 0; i < nargs; i++) { - if (rtypenums[i] != funcdata->arg_types[i]) { - break; - } - } - } - else if (rtypenums[0] == funcdata->arg_types[self->nin]) { - i = nargs; - } - else { - i = -1; - } - if (i == nargs) { - *function = funcdata->func; - *data = funcdata->data; - for(i = 0; i < nargs; i++) { - arg_types[i] = funcdata->arg_types[i]; - } - Py_DECREF(obj); - goto finish; - } - funcdata = funcdata->next; - } - PyErr_SetString(PyExc_TypeError, msg); - goto fail; - } - - /* look for match in self->functions 
*/ - for (j = 0; j < self->ntypes; j++) { - if (n != 1) { - for(i = 0; i < nargs; i++) { - if (rtypenums[i] != self->types[j*nargs + i]) { - break; - } - } - } - else if (rtypenums[0] == self->types[j*nargs+self->nin]) { - i = nargs; - } - else { - i = -1; - } - if (i == nargs) { - *function = self->functions[j]; - *data = self->data[j]; - for (i = 0; i < nargs; i++) { - arg_types[i] = self->types[j*nargs+i]; - } - goto finish; - } - } - PyErr_SetString(PyExc_TypeError, msg); - - fail: - _pya_free(rtypenums); - return -1; - - finish: - _pya_free(rtypenums); - return 0; -} - - -/* - * Called to determine coercion - * Can change arg_types. - */ -static int -select_types(PyUFuncObject *self, int *arg_types, - PyUFuncGenericFunction *function, void **data, - PyArray_SCALARKIND *scalars, - PyObject *typetup) -{ - int i, j; - char start_type; - int userdef = -1; - int userdef_ind = -1; - - if (self->userloops) { - for(i = 0; i < self->nin; i++) { - if (PyTypeNum_ISUSERDEF(arg_types[i])) { - userdef = arg_types[i]; - userdef_ind = i; - break; - } - } - } - - if (typetup != NULL) - return extract_specified_loop(self, arg_types, function, data, - typetup, userdef); - - if (userdef > 0) { - PyObject *key, *obj; - int ret = -1; - obj = NULL; - - /* - * Look through all the registered loops for all the user-defined - * types to find a match. - */ - while (ret == -1) { - if (userdef_ind >= self->nin) { - break; - } - userdef = arg_types[userdef_ind++]; - if (!(PyTypeNum_ISUSERDEF(userdef))) { - continue; - } - key = PyInt_FromLong((long) userdef); - if (key == NULL) { - return -1; - } - obj = PyDict_GetItem(self->userloops, key); - Py_DECREF(key); - if (obj == NULL) { - continue; - } - /* - * extract the correct function - * data and argtypes for this user-defined type. 
- */ - ret = _find_matching_userloop(obj, arg_types, scalars, - function, data, self->nargs, - self->nin); - } - if (ret == 0) { - return ret; - } - PyErr_SetString(PyExc_TypeError, _types_msg); - return ret; - } - - start_type = arg_types[0]; - /* - * If the first argument is a scalar we need to place - * the start type as the lowest type in the class - */ - if (scalars[0] != PyArray_NOSCALAR) { - start_type = _lowest_type(start_type); - } - - i = 0; - while (i < self->ntypes && start_type > self->types[i*self->nargs]) { - i++; - } - for (; i < self->ntypes; i++) { - for (j = 0; j < self->nin; j++) { - if (!PyArray_CanCoerceScalar(arg_types[j], - self->types[i*self->nargs + j], - scalars[j])) - break; - } - if (j == self->nin) { - break; - } - } - if (i >= self->ntypes) { - PyErr_SetString(PyExc_TypeError, _types_msg); - return -1; - } - for (j = 0; j < self->nargs; j++) { - arg_types[j] = self->types[i*self->nargs+j]; - } - if (self->data) { - *data = self->data[i]; - } - else { - *data = NULL; - } - *function = self->functions[i]; - - return 0; -} - -#if USE_USE_DEFAULTS==1 -static int PyUFunc_NUM_NODEFAULTS = 0; -#endif -static PyObject *PyUFunc_PYVALS_NAME = NULL; - - -static int -_extract_pyvals(PyObject *ref, char *name, int *bufsize, - int *errmask, PyObject **errobj) -{ - PyObject *retval; - - *errobj = NULL; - if (!PyList_Check(ref) || (PyList_GET_SIZE(ref)!=3)) { - PyErr_Format(PyExc_TypeError, "%s must be a length 3 list.", - UFUNC_PYVALS_NAME); - return -1; - } - - *bufsize = PyInt_AsLong(PyList_GET_ITEM(ref, 0)); - if ((*bufsize == -1) && PyErr_Occurred()) { - return -1; - } - if ((*bufsize < PyArray_MIN_BUFSIZE) - || (*bufsize > PyArray_MAX_BUFSIZE) - || (*bufsize % 16 != 0)) { - PyErr_Format(PyExc_ValueError, - "buffer size (%d) is not in range " - "(%"INTP_FMT" - %"INTP_FMT") or not a multiple of 16", - *bufsize, (intp) PyArray_MIN_BUFSIZE, - (intp) PyArray_MAX_BUFSIZE); - return -1; - } - - *errmask = PyInt_AsLong(PyList_GET_ITEM(ref, 1)); - if 
(*errmask < 0) { - if (PyErr_Occurred()) { - return -1; - } - PyErr_Format(PyExc_ValueError, - "invalid error mask (%d)", - *errmask); - return -1; - } - - retval = PyList_GET_ITEM(ref, 2); - if (retval != Py_None && !PyCallable_Check(retval)) { - PyObject *temp; - temp = PyObject_GetAttrString(retval, "write"); - if (temp == NULL || !PyCallable_Check(temp)) { - PyErr_SetString(PyExc_TypeError, - "python object must be callable or have " \ - "a callable write method"); - Py_XDECREF(temp); - return -1; - } - Py_DECREF(temp); - } - - *errobj = Py_BuildValue("NO", PyBytes_FromString(name), retval); - if (*errobj == NULL) { - return -1; - } - return 0; -} - - - -/*UFUNC_API*/ -NPY_NO_EXPORT int -PyUFunc_GetPyValues(char *name, int *bufsize, int *errmask, PyObject **errobj) -{ - PyObject *thedict; - PyObject *ref = NULL; - -#if USE_USE_DEFAULTS==1 - if (PyUFunc_NUM_NODEFAULTS != 0) { -#endif - if (PyUFunc_PYVALS_NAME == NULL) { - PyUFunc_PYVALS_NAME = PyUString_InternFromString(UFUNC_PYVALS_NAME); - } - thedict = PyThreadState_GetDict(); - if (thedict == NULL) { - thedict = PyEval_GetBuiltins(); - } - ref = PyDict_GetItem(thedict, PyUFunc_PYVALS_NAME); -#if USE_USE_DEFAULTS==1 - } -#endif - if (ref == NULL) { - *errmask = UFUNC_ERR_DEFAULT; - *errobj = Py_BuildValue("NO", PyBytes_FromString(name), Py_None); - *bufsize = PyArray_BUFSIZE; - return 0; - } - return _extract_pyvals(ref, name, bufsize, errmask, errobj); -} - -/* - * Create copies for any arrays that are less than loop->bufsize - * in total size (or core_enabled) and are mis-behaved or in need - * of casting. 
- */ -static int -_create_copies(PyUFuncLoopObject *loop, int *arg_types, PyArrayObject **mps) -{ - int nin = loop->ufunc->nin; - int i; - intp size; - PyObject *new; - PyArray_Descr *ntype; - PyArray_Descr *atype; - - for (i = 0; i < nin; i++) { - size = PyArray_SIZE(mps[i]); - /* - * if the type of mps[i] is equivalent to arg_types[i] - * then set arg_types[i] equal to type of mps[i] for later checking.... - */ - if (PyArray_TYPE(mps[i]) != arg_types[i]) { - ntype = mps[i]->descr; - atype = PyArray_DescrFromType(arg_types[i]); - if (PyArray_EquivTypes(atype, ntype)) { - arg_types[i] = ntype->type_num; - } - Py_DECREF(atype); - } - if (size < loop->bufsize || loop->ufunc->core_enabled) { - if (!(PyArray_ISBEHAVED_RO(mps[i])) - || PyArray_TYPE(mps[i]) != arg_types[i]) { - ntype = PyArray_DescrFromType(arg_types[i]); - new = PyArray_FromAny((PyObject *)mps[i], - ntype, 0, 0, - FORCECAST | ALIGNED, NULL); - if (new == NULL) { - return -1; - } - Py_DECREF(mps[i]); - mps[i] = (PyArrayObject *)new; - } - } - } - return 0; -} - -#define _GETATTR_(str, rstr) do {if (strcmp(name, #str) == 0) \ - return PyObject_HasAttrString(op, "__" #rstr "__");} while (0); - -static int -_has_reflected_op(PyObject *op, char *name) -{ - _GETATTR_(add, radd); - _GETATTR_(subtract, rsub); - _GETATTR_(multiply, rmul); - _GETATTR_(divide, rdiv); - _GETATTR_(true_divide, rtruediv); - _GETATTR_(floor_divide, rfloordiv); - _GETATTR_(remainder, rmod); - _GETATTR_(power, rpow); - _GETATTR_(left_shift, rlshift); - _GETATTR_(right_shift, rrshift); - _GETATTR_(bitwise_and, rand); - _GETATTR_(bitwise_xor, rxor); - _GETATTR_(bitwise_or, ror); - return 0; -} - -#undef _GETATTR_ - - -/* Return the position of next non-white-space char in the string */ -static int -_next_non_white_space(const char* str, int offset) -{ - int ret = offset; - while (str[ret] == ' ' || str[ret] == '\t') { - ret++; - } - return ret; -} - -static int -_is_alpha_underscore(char ch) -{ - return (ch >= 'A' && ch <= 'Z') || (ch >= 
'a' && ch <= 'z') || ch == '_'; -} - -static int -_is_alnum_underscore(char ch) -{ - return _is_alpha_underscore(ch) || (ch >= '0' && ch <= '9'); -} - -/* - * Return the ending position of a variable name - */ -static int -_get_end_of_name(const char* str, int offset) -{ - int ret = offset; - while (_is_alnum_underscore(str[ret])) { - ret++; - } - return ret; -} - -/* - * Returns 1 if the dimension names pointed to by s1 and s2 are the same, - * otherwise returns 0. - */ -static int -_is_same_name(const char* s1, const char* s2) -{ - while (_is_alnum_underscore(*s1) && _is_alnum_underscore(*s2)) { - if (*s1 != *s2) { - return 0; - } - s1++; - s2++; - } - return !_is_alnum_underscore(*s1) && !_is_alnum_underscore(*s2); -} - -/* - * Sets core_num_dim_ix, core_num_dims, core_dim_ixs, core_offsets, - * and core_signature in PyUFuncObject "self". Returns 0 unless an - * error occurred. - */ -static int -_parse_signature(PyUFuncObject *self, const char *signature) -{ - size_t len; - char const **var_names; - int nd = 0; /* number of dimensions of the current argument */ - int cur_arg = 0; /* index into core_num_dims&core_offsets */ - int cur_core_dim = 0; /* index into core_dim_ixs */ - int i = 0; - char *parse_error = NULL; - - if (signature == NULL) { - PyErr_SetString(PyExc_RuntimeError, - "_parse_signature with NULL signature"); - return -1; - } - - len = strlen(signature); - self->core_signature = _pya_malloc(sizeof(char) * (len+1)); - if (self->core_signature) { - strcpy(self->core_signature, signature); - } - /* Allocate sufficient memory to store pointers to all dimension names */ - var_names = _pya_malloc(sizeof(char const*) * len); - if (var_names == NULL) { - PyErr_NoMemory(); - return -1; - } - - self->core_enabled = 1; - self->core_num_dim_ix = 0; - self->core_num_dims = _pya_malloc(sizeof(int) * self->nargs); - self->core_dim_ixs = _pya_malloc(sizeof(int) * len); /* shrink this later */ - self->core_offsets = _pya_malloc(sizeof(int) * self->nargs); - if
(self->core_num_dims == NULL || self->core_dim_ixs == NULL - || self->core_offsets == NULL) { - PyErr_NoMemory(); - goto fail; - } - - i = _next_non_white_space(signature, 0); - while (signature[i] != '\0') { - /* loop over input/output arguments */ - if (cur_arg == self->nin) { - /* expect "->" */ - if (signature[i] != '-' || signature[i+1] != '>') { - parse_error = "expect '->'"; - goto fail; - } - i = _next_non_white_space(signature, i + 2); - } - - /* - * parse core dimensions of one argument, - * e.g. "()", "(i)", or "(i,j)" - */ - if (signature[i] != '(') { - parse_error = "expect '('"; - goto fail; - } - i = _next_non_white_space(signature, i + 1); - while (signature[i] != ')') { - /* loop over core dimensions */ - int j = 0; - if (!_is_alpha_underscore(signature[i])) { - parse_error = "expect dimension name"; - goto fail; - } - while (j < self->core_num_dim_ix) { - if (_is_same_name(signature+i, var_names[j])) { - break; - } - j++; - } - if (j >= self->core_num_dim_ix) { - var_names[j] = signature+i; - self->core_num_dim_ix++; - } - self->core_dim_ixs[cur_core_dim] = j; - cur_core_dim++; - nd++; - i = _get_end_of_name(signature, i); - i = _next_non_white_space(signature, i); - if (signature[i] != ',' && signature[i] != ')') { - parse_error = "expect ',' or ')'"; - goto fail; - } - if (signature[i] == ',') - { - i = _next_non_white_space(signature, i + 1); - if (signature[i] == ')') { - parse_error = "',' must not be followed by ')'"; - goto fail; - } - } - } - self->core_num_dims[cur_arg] = nd; - self->core_offsets[cur_arg] = cur_core_dim-nd; - cur_arg++; - nd = 0; - - i = _next_non_white_space(signature, i + 1); - if (cur_arg != self->nin && cur_arg != self->nargs) { - /* - * The list of input arguments (or output arguments) was - * only read partially - */ - if (signature[i] != ',') { - parse_error = "expect ','"; - goto fail; - } - i = _next_non_white_space(signature, i + 1); - } - } - if (cur_arg != self->nargs) { - parse_error = "incomplete signature: 
not all arguments found"; - goto fail; - } - self->core_dim_ixs = _pya_realloc(self->core_dim_ixs, - sizeof(int)*cur_core_dim); - /* check for trivial core-signature, e.g. "(),()->()" */ - if (cur_core_dim == 0) { - self->core_enabled = 0; - } - _pya_free((void*)var_names); - return 0; - -fail: - _pya_free((void*)var_names); - if (parse_error) { - char *buf = _pya_malloc(sizeof(char) * (len + 200)); - if (buf) { - sprintf(buf, "%s at position %d in \"%s\"", - parse_error, i, signature); - PyErr_SetString(PyExc_ValueError, signature); - _pya_free(buf); - } - else { - PyErr_NoMemory(); - } - } - return -1; -} - -/* - * Concatenate the loop and core dimensions of - * PyArrayMultiIterObject's iarg-th argument, to recover a full - * dimension array (used for output arguments). - */ -static npy_intp* -_compute_output_dims(PyUFuncLoopObject *loop, int iarg, - int *out_nd, npy_intp *tmp_dims) -{ - int i; - PyUFuncObject *ufunc = loop->ufunc; - if (ufunc->core_enabled == 0) { - /* case of ufunc with trivial core-signature */ - *out_nd = loop->nd; - return loop->dimensions; - } - - *out_nd = loop->nd + ufunc->core_num_dims[iarg]; - if (*out_nd > NPY_MAXARGS) { - PyErr_SetString(PyExc_ValueError, - "dimension of output variable exceeds limit"); - return NULL; - } - - /* copy loop dimensions */ - memcpy(tmp_dims, loop->dimensions, sizeof(npy_intp) * loop->nd); - - /* copy core dimension */ - for (i = 0; i < ufunc->core_num_dims[iarg]; i++) { - tmp_dims[loop->nd + i] = loop->core_dim_sizes[1 + - ufunc->core_dim_ixs[ufunc->core_offsets[iarg] + i]]; - } - return tmp_dims; -} - -/* Check and set core_dim_sizes and core_strides for the i-th argument. */ -static int -_compute_dimension_size(PyUFuncLoopObject *loop, PyArrayObject **mps, int i) -{ - PyUFuncObject *ufunc = loop->ufunc; - int j = ufunc->core_offsets[i]; - int k = PyArray_NDIM(mps[i]) - ufunc->core_num_dims[i]; - int ind; - for (ind = 0; ind < ufunc->core_num_dims[i]; ind++, j++, k++) { - npy_intp dim = k < 0 ? 
1 : PyArray_DIM(mps[i], k); - /* First element of core_dim_sizes will be used for looping */ - int dim_ix = ufunc->core_dim_ixs[j] + 1; - if (loop->core_dim_sizes[dim_ix] == 1) { - /* broadcast core dimension */ - loop->core_dim_sizes[dim_ix] = dim; - } - else if (dim != 1 && dim != loop->core_dim_sizes[dim_ix]) { - PyErr_SetString(PyExc_ValueError, "core dimensions mismatch"); - return -1; - } - /* First ufunc->nargs elements will be used for looping */ - loop->core_strides[ufunc->nargs + j] = - dim == 1 ? 0 : PyArray_STRIDE(mps[i], k); - } - return 0; -} - -/* Return a view of array "ap" with "core_nd" dimensions cut from tail. */ -static PyArrayObject * -_trunc_coredim(PyArrayObject *ap, int core_nd) -{ - PyArrayObject *ret; - int nd = ap->nd - core_nd; - - if (nd < 0) { - nd = 0; - } - /* The following code is basically taken from PyArray_Transpose */ - /* NewFromDescr will steal this reference */ - Py_INCREF(ap->descr); - ret = (PyArrayObject *) - PyArray_NewFromDescr(Py_TYPE(ap), ap->descr, - nd, ap->dimensions, - ap->strides, ap->data, ap->flags, - (PyObject *)ap); - if (ret == NULL) { - return NULL; - } - /* point at true owner of memory: */ - ret->base = (PyObject *)ap; - Py_INCREF(ap); - PyArray_UpdateFlags(ret, CONTIGUOUS | FORTRAN); - return ret; -} - -static Py_ssize_t -construct_arrays(PyUFuncLoopObject *loop, PyObject *args, PyArrayObject **mps, - PyObject *typetup) -{ - Py_ssize_t nargs; - int i; - int arg_types[NPY_MAXARGS]; - PyArray_SCALARKIND scalars[NPY_MAXARGS]; - PyArray_SCALARKIND maxarrkind, maxsckind, new; - PyUFuncObject *self = loop->ufunc; - Bool allscalars = TRUE; - PyTypeObject *subtype = &PyArray_Type; - PyObject *context = NULL; - PyObject *obj; - int flexible = 0; - int object = 0; - - npy_intp temp_dims[NPY_MAXDIMS]; - npy_intp *out_dims; - int out_nd; - PyObject *wraparr[NPY_MAXARGS]; - - /* Check number of arguments */ - nargs = PyTuple_Size(args); - if ((nargs < self->nin) || (nargs > self->nargs)) { - 
PyErr_SetString(PyExc_ValueError, "invalid number of arguments"); - return -1; - } - - /* Get each input argument */ - maxarrkind = PyArray_NOSCALAR; - maxsckind = PyArray_NOSCALAR; - for(i = 0; i < self->nin; i++) { - obj = PyTuple_GET_ITEM(args,i); - if (!PyArray_Check(obj) && !PyArray_IsScalar(obj, Generic)) { - context = Py_BuildValue("OOi", self, args, i); - } - else { - context = NULL; - } - mps[i] = (PyArrayObject *)PyArray_FromAny(obj, NULL, 0, 0, 0, context); - Py_XDECREF(context); - if (mps[i] == NULL) { - return -1; - } - arg_types[i] = PyArray_TYPE(mps[i]); - if (!flexible && PyTypeNum_ISFLEXIBLE(arg_types[i])) { - flexible = 1; - } - if (!object && PyTypeNum_ISOBJECT(arg_types[i])) { - object = 1; - } - /* - * debug - * fprintf(stderr, "array %d has reference %d\n", i, - * (mps[i])->ob_refcnt); - */ - - /* - * Scalars are 0-dimensional arrays at this point - */ - - /* - * We need to keep track of whether or not scalars - * are mixed with arrays of different kinds. - */ - - if (mps[i]->nd > 0) { - scalars[i] = PyArray_NOSCALAR; - allscalars = FALSE; - new = PyArray_ScalarKind(arg_types[i], NULL); - maxarrkind = NPY_MAX(new, maxarrkind); - } - else { - scalars[i] = PyArray_ScalarKind(arg_types[i], &(mps[i])); - maxsckind = NPY_MAX(scalars[i], maxsckind); - } - } - - /* We don't do strings */ - if (flexible && !object) { - loop->notimplemented = 1; - return nargs; - } - - /* - * If everything is a scalar, or scalars mixed with arrays of - * different kinds of lesser kinds then use normal coercion rules - */ - if (allscalars || (maxsckind > maxarrkind)) { - for (i = 0; i < self->nin; i++) { - scalars[i] = PyArray_NOSCALAR; - } - } - - /* Select an appropriate function for these argument types. 
*/ - if (select_types(loop->ufunc, arg_types, &(loop->function), - &(loop->funcdata), scalars, typetup) == -1) { - return -1; - } - /* - * FAIL with NotImplemented if the other object has - * the __r__ method and has __array_priority__ as - * an attribute (signalling it can handle ndarray's) - * and is not already an ndarray or a subtype of the same type. - */ - if ((arg_types[1] == PyArray_OBJECT) - && (loop->ufunc->nin==2) && (loop->ufunc->nout == 1)) { - PyObject *_obj = PyTuple_GET_ITEM(args, 1); - if (!PyArray_CheckExact(_obj) - /* If both are same subtype of object arrays, then proceed */ - && !(Py_TYPE(_obj) == Py_TYPE(PyTuple_GET_ITEM(args, 0))) - && PyObject_HasAttrString(_obj, "__array_priority__") - && _has_reflected_op(_obj, loop->ufunc->name)) { - loop->notimplemented = 1; - return nargs; - } - } - - /* - * Create copies for some of the arrays if they are small - * enough and not already contiguous - */ - if (_create_copies(loop, arg_types, mps) < 0) { - return -1; - } - - /* - * Only use loop dimensions when constructing Iterator: - * temporarily replace mps[i] (will be recovered below). - */ - if (self->core_enabled) { - for (i = 0; i < self->nin; i++) { - PyArrayObject *ao; - - if (_compute_dimension_size(loop, mps, i) < 0) { - return -1; - } - ao = _trunc_coredim(mps[i], self->core_num_dims[i]); - if (ao == NULL) { - return -1; - } - mps[i] = ao; - } - } - - /* Create Iterators for the Inputs */ - for (i = 0; i < self->nin; i++) { - loop->iters[i] = (PyArrayIterObject *) - PyArray_IterNew((PyObject *)mps[i]); - if (loop->iters[i] == NULL) { - return -1; - } - } - - /* Recover mps[i]. 
*/ - if (self->core_enabled) { - for (i = 0; i < self->nin; i++) { - PyArrayObject *ao = mps[i]; - mps[i] = (PyArrayObject *)mps[i]->base; - Py_DECREF(ao); - } - } - - /* Broadcast the result */ - loop->numiter = self->nin; - if (PyArray_Broadcast((PyArrayMultiIterObject *)loop) < 0) { - return -1; - } - - /* Get any return arguments */ - for (i = self->nin; i < nargs; i++) { - mps[i] = (PyArrayObject *)PyTuple_GET_ITEM(args, i); - if (((PyObject *)mps[i])==Py_None) { - mps[i] = NULL; - continue; - } - Py_INCREF(mps[i]); - if (!PyArray_Check((PyObject *)mps[i])) { - PyObject *new; - if (PyArrayIter_Check(mps[i])) { - new = PyObject_CallMethod((PyObject *)mps[i], - "__array__", NULL); - Py_DECREF(mps[i]); - mps[i] = (PyArrayObject *)new; - } - else { - PyErr_SetString(PyExc_TypeError, - "return arrays must be "\ - "of ArrayType"); - Py_DECREF(mps[i]); - mps[i] = NULL; - return -1; - } - } - - if (self->core_enabled) { - if (_compute_dimension_size(loop, mps, i) < 0) { - return -1; - } - } - out_dims = _compute_output_dims(loop, i, &out_nd, temp_dims); - if (!out_dims) { - return -1; - } - if (mps[i]->nd != out_nd - || !PyArray_CompareLists(mps[i]->dimensions, out_dims, out_nd)) { - PyErr_SetString(PyExc_ValueError, "invalid return array shape"); - Py_DECREF(mps[i]); - mps[i] = NULL; - return -1; - } - if (!PyArray_ISWRITEABLE(mps[i])) { - PyErr_SetString(PyExc_ValueError, "return array is not writeable"); - Py_DECREF(mps[i]); - mps[i] = NULL; - return -1; - } - } - - /* construct any missing return arrays and make output iterators */ - for(i = self->nin; i < self->nargs; i++) { - PyArray_Descr *ntype; - - if (mps[i] == NULL) { - out_dims = _compute_output_dims(loop, i, &out_nd, temp_dims); - if (!out_dims) { - return -1; - } - mps[i] = (PyArrayObject *)PyArray_New(subtype, - out_nd, - out_dims, - arg_types[i], - NULL, NULL, - 0, 0, NULL); - if (mps[i] == NULL) { - return -1; - } - } - - /* - * reset types for outputs that are equivalent - * -- no sense casting 
uselessly - */ - else { - if (mps[i]->descr->type_num != arg_types[i]) { - PyArray_Descr *atype; - ntype = mps[i]->descr; - atype = PyArray_DescrFromType(arg_types[i]); - if (PyArray_EquivTypes(atype, ntype)) { - arg_types[i] = ntype->type_num; - } - Py_DECREF(atype); - } - - /* still not the same -- or will we have to use buffers?*/ - if (mps[i]->descr->type_num != arg_types[i] - || !PyArray_ISBEHAVED_RO(mps[i])) { - if (loop->size < loop->bufsize || self->core_enabled) { - PyObject *new; - /* - * Copy the array to a temporary copy - * and set the UPDATEIFCOPY flag - */ - ntype = PyArray_DescrFromType(arg_types[i]); - new = PyArray_FromAny((PyObject *)mps[i], - ntype, 0, 0, - FORCECAST | ALIGNED | - UPDATEIFCOPY, NULL); - if (new == NULL) { - return -1; - } - Py_DECREF(mps[i]); - mps[i] = (PyArrayObject *)new; - } - } - } - - if (self->core_enabled) { - PyArrayObject *ao; - - /* computer for all output arguments, and set strides in "loop" */ - if (_compute_dimension_size(loop, mps, i) < 0) { - return -1; - } - ao = _trunc_coredim(mps[i], self->core_num_dims[i]); - if (ao == NULL) { - return -1; - } - /* Temporarily modify mps[i] for constructing iterator. */ - mps[i] = ao; - } - - loop->iters[i] = (PyArrayIterObject *) - PyArray_IterNew((PyObject *)mps[i]); - if (loop->iters[i] == NULL) { - return -1; - } - - /* Recover mps[i]. */ - if (self->core_enabled) { - PyArrayObject *ao = mps[i]; - mps[i] = (PyArrayObject *)mps[i]->base; - Py_DECREF(ao); - } - - } - - /* - * Use __array_prepare__ on all outputs - * if present on one of the input arguments. - * If present for multiple inputs: - * use __array_prepare__ of input object with largest - * __array_priority__ (default = 0.0) - * - * Exception: we should not wrap outputs for items already - * passed in as output-arguments. These items should either - * be left unwrapped or wrapped by calling their own __array_prepare__ - * routine. 
- * - * For each output argument, wrap will be either - * NULL --- call PyArray_Return() -- default if no output arguments given - * None --- array-object passed in don't call PyArray_Return - * method --- the __array_prepare__ method to call. - */ - _find_array_prepare(args, wraparr, loop->ufunc->nin, loop->ufunc->nout); - - /* wrap outputs */ - for (i = 0; i < loop->ufunc->nout; i++) { - int j = loop->ufunc->nin+i; - PyObject *wrap; - PyObject *res; - wrap = wraparr[i]; - if (wrap != NULL) { - if (wrap == Py_None) { - Py_DECREF(wrap); - continue; - } - res = PyObject_CallFunction(wrap, "O(OOi)", - mps[j], loop->ufunc, args, i); - Py_DECREF(wrap); - if ((res == NULL) || (res == Py_None)) { - if (!PyErr_Occurred()){ - PyErr_SetString(PyExc_TypeError, - "__array_prepare__ must return an ndarray or subclass thereof"); - } - return -1; - } - Py_DECREF(mps[j]); - mps[j] = (PyArrayObject *)res; - } - } - - /* - * If any of different type, or misaligned or swapped - * then must use buffers - */ - loop->bufcnt = 0; - loop->obj = 0; - /* Determine looping method needed */ - loop->meth = NO_UFUNCLOOP; - if (loop->size == 0) { - return nargs; - } - if (self->core_enabled) { - loop->meth = SIGNATURE_NOBUFFER_UFUNCLOOP; - } - for (i = 0; i < self->nargs; i++) { - loop->needbuffer[i] = 0; - if (arg_types[i] != mps[i]->descr->type_num - || !PyArray_ISBEHAVED_RO(mps[i])) { - if (self->core_enabled) { - PyErr_SetString(PyExc_RuntimeError, - "never reached; copy should have been made"); - return -1; - } - loop->meth = BUFFER_UFUNCLOOP; - loop->needbuffer[i] = 1; - } - if (!(loop->obj & UFUNC_OBJ_ISOBJECT) - && ((mps[i]->descr->type_num == PyArray_OBJECT) - || (arg_types[i] == PyArray_OBJECT))) { - loop->obj = UFUNC_OBJ_ISOBJECT|UFUNC_OBJ_NEEDS_API; - } - } - - if (self->core_enabled && (loop->obj & UFUNC_OBJ_ISOBJECT)) { - PyErr_SetString(PyExc_TypeError, - "Object type not allowed in ufunc with signature"); - return -1; - } - if (loop->meth == NO_UFUNCLOOP) { - loop->meth = 
ONE_UFUNCLOOP; - - /* All correct type and BEHAVED */ - /* Check for non-uniform stridedness */ - for (i = 0; i < self->nargs; i++) { - if (!(loop->iters[i]->contiguous)) { - /* - * May still have uniform stride - * if (broadcast result) <= 1-d - */ - if (mps[i]->nd != 0 && \ - (loop->iters[i]->nd_m1 > 0)) { - loop->meth = NOBUFFER_UFUNCLOOP; - break; - } - } - } - if (loop->meth == ONE_UFUNCLOOP) { - for (i = 0; i < self->nargs; i++) { - loop->bufptr[i] = mps[i]->data; - } - } - } - - loop->numiter = self->nargs; - - /* Fill in steps */ - if (loop->meth == SIGNATURE_NOBUFFER_UFUNCLOOP && loop->nd == 0) { - /* Use default core_strides */ - } - else if (loop->meth != ONE_UFUNCLOOP) { - int ldim; - intp minsum; - intp maxdim; - PyArrayIterObject *it; - intp stride_sum[NPY_MAXDIMS]; - int j; - - /* Fix iterators */ - - /* - * Optimize axis the iteration takes place over - * - * The first thought was to have the loop go - * over the largest dimension to minimize the number of loops - * - * However, on processors with slow memory bus and cache, - * the slowest loops occur when the memory access occurs for - * large strides. - * - * Thus, choose the axis for which strides of the last iterator is - * smallest but non-zero. 
- */ - for (i = 0; i < loop->nd; i++) { - stride_sum[i] = 0; - for (j = 0; j < loop->numiter; j++) { - stride_sum[i] += loop->iters[j]->strides[i]; - } - } - - ldim = loop->nd - 1; - minsum = stride_sum[loop->nd - 1]; - for (i = loop->nd - 2; i >= 0; i--) { - if (stride_sum[i] < minsum ) { - ldim = i; - minsum = stride_sum[i]; - } - } - maxdim = loop->dimensions[ldim]; - loop->size /= maxdim; - loop->bufcnt = maxdim; - loop->lastdim = ldim; - - /* - * Fix the iterators so the inner loop occurs over the - * largest dimensions -- This can be done by - * setting the size to 1 in that dimension - * (just in the iterators) - */ - for (i = 0; i < loop->numiter; i++) { - it = loop->iters[i]; - it->contiguous = 0; - it->size /= (it->dims_m1[ldim] + 1); - it->dims_m1[ldim] = 0; - it->backstrides[ldim] = 0; - - /* - * (won't fix factors because we - * don't use PyArray_ITER_GOTO1D - * so don't change them) - * - * Set the steps to the strides in that dimension - */ - loop->steps[i] = it->strides[ldim]; - } - - /* - * Set looping part of core_dim_sizes and core_strides. - */ - if (loop->meth == SIGNATURE_NOBUFFER_UFUNCLOOP) { - loop->core_dim_sizes[0] = maxdim; - for (i = 0; i < self->nargs; i++) { - loop->core_strides[i] = loop->steps[i]; - } - } - - /* - * fix up steps where we will be copying data to - * buffers and calculate the ninnerloops and leftover - * values -- if step size is already zero that is not changed... 
- */ - if (loop->meth == BUFFER_UFUNCLOOP) { - loop->leftover = maxdim % loop->bufsize; - loop->ninnerloops = (maxdim / loop->bufsize) + 1; - for (i = 0; i < self->nargs; i++) { - if (loop->needbuffer[i] && loop->steps[i]) { - loop->steps[i] = mps[i]->descr->elsize; - } - /* These are changed later if casting is needed */ - } - } - } - else if (loop->meth == ONE_UFUNCLOOP) { - /* uniformly-strided case */ - for (i = 0; i < self->nargs; i++) { - if (PyArray_SIZE(mps[i]) == 1) { - loop->steps[i] = 0; - } - else { - loop->steps[i] = mps[i]->strides[mps[i]->nd - 1]; - } - } - } - - - /* Finally, create memory for buffers if we need them */ - - /* - * Buffers for scalars are specially made small -- scalars are - * not copied multiple times - */ - if (loop->meth == BUFFER_UFUNCLOOP) { - int cnt = 0, cntcast = 0; - int scnt = 0, scntcast = 0; - char *castptr; - char *bufptr; - int last_was_scalar = 0; - int last_cast_was_scalar = 0; - int oldbufsize = 0; - int oldsize = 0; - int scbufsize = 4*sizeof(double); - int memsize; - PyArray_Descr *descr; - - /* compute the element size */ - for (i = 0; i < self->nargs; i++) { - if (!loop->needbuffer[i]) { - continue; - } - if (arg_types[i] != mps[i]->descr->type_num) { - descr = PyArray_DescrFromType(arg_types[i]); - if (loop->steps[i]) { - cntcast += descr->elsize; - } - else { - scntcast += descr->elsize; - } - if (i < self->nin) { - loop->cast[i] = PyArray_GetCastFunc(mps[i]->descr, - arg_types[i]); - } - else { - loop->cast[i] = PyArray_GetCastFunc \ - (descr, mps[i]->descr->type_num); - } - Py_DECREF(descr); - if (!loop->cast[i]) { - return -1; - } - } - loop->swap[i] = !(PyArray_ISNOTSWAPPED(mps[i])); - if (loop->steps[i]) { - cnt += mps[i]->descr->elsize; - } - else { - scnt += mps[i]->descr->elsize; - } - } - memsize = loop->bufsize*(cnt+cntcast) + scbufsize*(scnt+scntcast); - loop->buffer[0] = PyDataMem_NEW(memsize); - - /* - * debug - * fprintf(stderr, "Allocated buffer at %p of size %d, cnt=%d, cntcast=%d\n", - * 
loop->buffer[0], loop->bufsize * (cnt + cntcast), cnt, cntcast); - */ - if (loop->buffer[0] == NULL) { - PyErr_NoMemory(); - return -1; - } - if (loop->obj & UFUNC_OBJ_ISOBJECT) { - memset(loop->buffer[0], 0, memsize); - } - castptr = loop->buffer[0] + loop->bufsize*cnt + scbufsize*scnt; - bufptr = loop->buffer[0]; - loop->objfunc = 0; - for (i = 0; i < self->nargs; i++) { - if (!loop->needbuffer[i]) { - continue; - } - loop->buffer[i] = bufptr + (last_was_scalar ? scbufsize : - loop->bufsize)*oldbufsize; - last_was_scalar = (loop->steps[i] == 0); - bufptr = loop->buffer[i]; - oldbufsize = mps[i]->descr->elsize; - /* fprintf(stderr, "buffer[%d] = %p\n", i, loop->buffer[i]); */ - if (loop->cast[i]) { - PyArray_Descr *descr; - loop->castbuf[i] = castptr + (last_cast_was_scalar ? scbufsize : - loop->bufsize)*oldsize; - last_cast_was_scalar = last_was_scalar; - /* fprintf(stderr, "castbuf[%d] = %p\n", i, loop->castbuf[i]); */ - descr = PyArray_DescrFromType(arg_types[i]); - oldsize = descr->elsize; - Py_DECREF(descr); - loop->bufptr[i] = loop->castbuf[i]; - castptr = loop->castbuf[i]; - if (loop->steps[i]) { - loop->steps[i] = oldsize; - } - } - else { - loop->bufptr[i] = loop->buffer[i]; - } - if (!loop->objfunc && (loop->obj & UFUNC_OBJ_ISOBJECT)) { - if (arg_types[i] == PyArray_OBJECT) { - loop->objfunc = 1; - } - } - } - } - - if (_does_loop_use_arrays(loop->funcdata)) { - loop->funcdata = (void*)mps; - } - - return nargs; -} - -static void -ufuncreduce_dealloc(PyUFuncReduceObject *self) -{ - if (self->ufunc) { - Py_XDECREF(self->it); - Py_XDECREF(self->rit); - Py_XDECREF(self->ret); - Py_XDECREF(self->errobj); - Py_XDECREF(self->decref); - if (self->buffer) { - PyDataMem_FREE(self->buffer); - } - Py_DECREF(self->ufunc); - } - _pya_free(self); -} - -static void -ufuncloop_dealloc(PyUFuncLoopObject *self) -{ - int i; - - if (self->ufunc != NULL) { - if (self->core_dim_sizes) { - _pya_free(self->core_dim_sizes); - } - if (self->core_strides) { - 
_pya_free(self->core_strides); - } - for (i = 0; i < self->ufunc->nargs; i++) { - Py_XDECREF(self->iters[i]); - } - if (self->buffer[0]) { - PyDataMem_FREE(self->buffer[0]); - } - Py_XDECREF(self->errobj); - Py_DECREF(self->ufunc); - } - _pya_free(self); -} - -static PyUFuncLoopObject * -construct_loop(PyUFuncObject *self, PyObject *args, PyObject *kwds, PyArrayObject **mps) -{ - PyUFuncLoopObject *loop; - int i; - PyObject *typetup = NULL; - PyObject *extobj = NULL; - char *name; - - if (self == NULL) { - PyErr_SetString(PyExc_ValueError, "function not supported"); - return NULL; - } - if ((loop = _pya_malloc(sizeof(PyUFuncLoopObject))) == NULL) { - PyErr_NoMemory(); - return loop; - } - - loop->index = 0; - loop->ufunc = self; - Py_INCREF(self); - loop->buffer[0] = NULL; - for (i = 0; i < self->nargs; i++) { - loop->iters[i] = NULL; - loop->cast[i] = NULL; - } - loop->errobj = NULL; - loop->notimplemented = 0; - loop->first = 1; - loop->core_dim_sizes = NULL; - loop->core_strides = NULL; - - if (self->core_enabled) { - int num_dim_ix = 1 + self->core_num_dim_ix; - int nstrides = self->nargs + self->core_offsets[self->nargs - 1] - + self->core_num_dims[self->nargs - 1]; - loop->core_dim_sizes = _pya_malloc(sizeof(npy_intp)*num_dim_ix); - loop->core_strides = _pya_malloc(sizeof(npy_intp)*nstrides); - if (loop->core_dim_sizes == NULL || loop->core_strides == NULL) { - PyErr_NoMemory(); - goto fail; - } - memset(loop->core_strides, 0, sizeof(npy_intp) * nstrides); - for (i = 0; i < num_dim_ix; i++) { - loop->core_dim_sizes[i] = 1; - } - } - name = self->name ? self->name : ""; - - /* - * Extract sig= keyword and extobj= keyword if present. 
- * Raise an error if anything else is present in the - * keyword dictionary - */ - if (kwds != NULL) { - PyObject *key, *value; - Py_ssize_t pos = 0; - while (PyDict_Next(kwds, &pos, &key, &value)) { - char *keystring = PyString_AsString(key); - - if (keystring == NULL) { - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, "invalid keyword"); - goto fail; - } - if (strncmp(keystring,"extobj",6) == 0) { - extobj = value; - } - else if (strncmp(keystring,"sig",3) == 0) { - typetup = value; - } - else { - char *format = "'%s' is an invalid keyword to %s"; - PyErr_Format(PyExc_TypeError,format,keystring, name); - goto fail; - } - } - } - - if (extobj == NULL) { - if (PyUFunc_GetPyValues(name, - &(loop->bufsize), &(loop->errormask), - &(loop->errobj)) < 0) { - goto fail; - } - } - else { - if (_extract_pyvals(extobj, name, - &(loop->bufsize), &(loop->errormask), - &(loop->errobj)) < 0) { - goto fail; - } - } - - /* Setup the arrays */ - if (construct_arrays(loop, args, mps, typetup) < 0) { - goto fail; - } - PyUFunc_clearfperr(); - return loop; - -fail: - ufuncloop_dealloc(loop); - return NULL; -} - - -/* - static void - _printbytebuf(PyUFuncLoopObject *loop, int bufnum) - { - int i; - - fprintf(stderr, "Printing byte buffer %d\n", bufnum); - for (i = 0; i < loop->bufcnt; i++) { - fprintf(stderr, " %d\n", *(((byte *)(loop->buffer[bufnum]))+i)); - } - } - - static void - _printlongbuf(PyUFuncLoopObject *loop, int bufnum) - { - int i; - - fprintf(stderr, "Printing long buffer %d\n", bufnum); - for (i = 0; i < loop->bufcnt; i++) { - fprintf(stderr, " %ld\n", *(((long *)(loop->buffer[bufnum]))+i)); - } - } - - static void - _printlongbufptr(PyUFuncLoopObject *loop, int bufnum) - { - int i; - - fprintf(stderr, "Printing long buffer %d\n", bufnum); - for (i = 0; i < loop->bufcnt; i++) { - fprintf(stderr, " %ld\n", *(((long *)(loop->bufptr[bufnum]))+i)); - } - } - - - - static void - _printcastbuf(PyUFuncLoopObject *loop, int bufnum) - { - int i; - - fprintf(stderr, "Printing long buffer %d\n", bufnum); - 
for (i = 0; i < loop->bufcnt; i++) { - fprintf(stderr, " %ld\n", *(((long *)(loop->castbuf[bufnum]))+i)); - } - } - -*/ - - - - -/* - * currently generic ufuncs cannot be built for use on flexible arrays. - * - * The cast functions in the generic loop would need to be fixed to pass - * in something besides NULL, NULL. - * - * Also the underlying ufunc loops would not know the element-size unless - * that was passed in as data (which could be arranged). - * - */ - -/*UFUNC_API - * - * This generic function is called with the ufunc object, the arguments to it, - * and an array of (pointers to) PyArrayObjects which are NULL. The - * arguments are parsed and placed in mps in construct_loop (construct_arrays) - */ -NPY_NO_EXPORT int -PyUFunc_GenericFunction(PyUFuncObject *self, PyObject *args, PyObject *kwds, - PyArrayObject **mps) -{ - PyUFuncLoopObject *loop; - int i; - NPY_BEGIN_THREADS_DEF; - - if (!(loop = construct_loop(self, args, kwds, mps))) { - return -1; - } - if (loop->notimplemented) { - ufuncloop_dealloc(loop); - return -2; - } - if (self->core_enabled && loop->meth != SIGNATURE_NOBUFFER_UFUNCLOOP) { - PyErr_SetString(PyExc_RuntimeError, - "illegal loop method for ufunc with signature"); - goto fail; - } - - NPY_LOOP_BEGIN_THREADS; - switch(loop->meth) { - case ONE_UFUNCLOOP: - /* - * Everything is contiguous, notswapped, aligned, - * and of the right type. -- Fastest. - * Or if not contiguous, then a single-stride - * increment moves through the entire array. - */ - /*fprintf(stderr, "ONE...%d\n", loop->size);*/ - loop->function((char **)loop->bufptr, &(loop->size), - loop->steps, loop->funcdata); - UFUNC_CHECK_ERROR(loop); - break; - case NOBUFFER_UFUNCLOOP: - /* - * Everything is notswapped, aligned and of the - * right type but not contiguous. -- Almost as fast. 
- */ - /*fprintf(stderr, "NOBUFFER...%d\n", loop->size);*/ - while (loop->index < loop->size) { - for (i = 0; i < self->nargs; i++) { - loop->bufptr[i] = loop->iters[i]->dataptr; - } - loop->function((char **)loop->bufptr, &(loop->bufcnt), - loop->steps, loop->funcdata); - UFUNC_CHECK_ERROR(loop); - - /* Adjust loop pointers */ - for (i = 0; i < self->nargs; i++) { - PyArray_ITER_NEXT(loop->iters[i]); - } - loop->index++; - } - break; - case SIGNATURE_NOBUFFER_UFUNCLOOP: - while (loop->index < loop->size) { - for (i = 0; i < self->nargs; i++) { - loop->bufptr[i] = loop->iters[i]->dataptr; - } - loop->function((char **)loop->bufptr, loop->core_dim_sizes, - loop->core_strides, loop->funcdata); - UFUNC_CHECK_ERROR(loop); - - /* Adjust loop pointers */ - for (i = 0; i < self->nargs; i++) { - PyArray_ITER_NEXT(loop->iters[i]); - } - loop->index++; - } - break; - case BUFFER_UFUNCLOOP: { - /* This should be a function */ - PyArray_CopySwapNFunc *copyswapn[NPY_MAXARGS]; - PyArrayIterObject **iters=loop->iters; - int *swap=loop->swap; - char **dptr=loop->dptr; - int mpselsize[NPY_MAXARGS]; - intp laststrides[NPY_MAXARGS]; - int fastmemcpy[NPY_MAXARGS]; - int *needbuffer = loop->needbuffer; - intp index=loop->index, size=loop->size; - int bufsize; - intp bufcnt; - int copysizes[NPY_MAXARGS]; - char **bufptr = loop->bufptr; - char **buffer = loop->buffer; - char **castbuf = loop->castbuf; - intp *steps = loop->steps; - char *tptr[NPY_MAXARGS]; - int ninnerloops = loop->ninnerloops; - Bool pyobject[NPY_MAXARGS]; - int datasize[NPY_MAXARGS]; - int j, k, stopcondition; - char *myptr1, *myptr2; - - for (i = 0; i < self->nargs; i++) { - copyswapn[i] = mps[i]->descr->f->copyswapn; - mpselsize[i] = mps[i]->descr->elsize; - pyobject[i] = ((loop->obj & UFUNC_OBJ_ISOBJECT) - && (mps[i]->descr->type_num == PyArray_OBJECT)); - laststrides[i] = iters[i]->strides[loop->lastdim]; - if (steps[i] && laststrides[i] != mpselsize[i]) { - fastmemcpy[i] = 0; - } - else { - fastmemcpy[i] = 1; - } - } - /* 
Do generic buffered looping here (works for any kind of - * arrays -- some need buffers, some don't. - * - * - * New algorithm: N is the largest dimension. B is the buffer-size. - * quotient is loop->ninnerloops-1 - * remainder is loop->leftover - * - * Compute N = quotient * B + remainder. - * quotient = N / B # integer math - * (store quotient + 1) as the number of innerloops - * remainder = N % B # integer remainder - * - * On the inner-dimension we will have (quotient + 1) loops where - * the size of the inner function is B for all but the last when the niter size is - * remainder. - * - * So, the code looks very similar to NOBUFFER_LOOP except the inner-most loop is - * replaced with... - * - * for (i = 0; i < quotient + 1; i++) { - * fill buffers, cast if needed, call the inner function, copy back - * } - */ - /* - * fprintf(stderr, "BUFFERED...%d %d %d\n", loop->size, - * loop->ninnerloops, loop->leftover); - */ - /* - * for (i = 0; i < self->nargs; i++) { - * fprintf(stderr, "iters[%d]->dataptr = %p, %p of size %d\n", i, - * iters[i], iters[i]->ao->data, PyArray_NBYTES(iters[i]->ao)); - * } - */ - stopcondition = ninnerloops; - if (loop->leftover == 0) { - stopcondition--; - } - while (index < size) { - bufsize=loop->bufsize; - for (i = 0; i < self->nargs; i++) { - tptr[i] = loop->iters[i]->dataptr; - if (needbuffer[i]) { - dptr[i] = bufptr[i]; - datasize[i] = (steps[i] ? bufsize : 1); - copysizes[i] = datasize[i] * mpselsize[i]; - } - else { - dptr[i] = tptr[i]; - } - } - - /* This is the inner function over the last dimension */ - for (k = 1; k<=stopcondition; k++) { - if (k == ninnerloops) { - bufsize = loop->leftover; - for (i = 0; i < self->nargs; i++) { - if (!needbuffer[i]) { - continue; - } - datasize[i] = (steps[i] ? 
bufsize : 1); - copysizes[i] = datasize[i] * mpselsize[i]; - } - } - for (i = 0; i < self->nin; i++) { - if (!needbuffer[i]) { - continue; - } - if (fastmemcpy[i]) { - memcpy(buffer[i], tptr[i], copysizes[i]); - } - else { - myptr1 = buffer[i]; - myptr2 = tptr[i]; - for (j = 0; j < bufsize; j++) { - memcpy(myptr1, myptr2, mpselsize[i]); - myptr1 += mpselsize[i]; - myptr2 += laststrides[i]; - } - } - - /* swap the buffer if necessary */ - if (swap[i]) { - /* fprintf(stderr, "swapping...\n");*/ - copyswapn[i](buffer[i], mpselsize[i], NULL, -1, - (intp) datasize[i], 1, - mps[i]); - } - /* cast to the other buffer if necessary */ - if (loop->cast[i]) { - /* fprintf(stderr, "casting... %d, %p %p\n", i, buffer[i]); */ - loop->cast[i](buffer[i], castbuf[i], - (intp) datasize[i], - NULL, NULL); - } - } - - bufcnt = (intp) bufsize; - loop->function((char **)dptr, &bufcnt, steps, loop->funcdata); - UFUNC_CHECK_ERROR(loop); - - for (i = self->nin; i < self->nargs; i++) { - if (!needbuffer[i]) { - continue; - } - if (loop->cast[i]) { - /* fprintf(stderr, "casting back... 
%d, %p", i, castbuf[i]); */ - loop->cast[i](castbuf[i], - buffer[i], - (intp) datasize[i], - NULL, NULL); - } - if (swap[i]) { - copyswapn[i](buffer[i], mpselsize[i], NULL, -1, - (intp) datasize[i], 1, - mps[i]); - } - /* - * copy back to output arrays - * decref what's already there for object arrays - */ - if (pyobject[i]) { - myptr1 = tptr[i]; - for (j = 0; j < datasize[i]; j++) { - Py_XDECREF(*((PyObject **)myptr1)); - myptr1 += laststrides[i]; - } - } - if (fastmemcpy[i]) { - memcpy(tptr[i], buffer[i], copysizes[i]); - } - else { - myptr2 = buffer[i]; - myptr1 = tptr[i]; - for (j = 0; j < bufsize; j++) { - memcpy(myptr1, myptr2, mpselsize[i]); - myptr1 += laststrides[i]; - myptr2 += mpselsize[i]; - } - } - } - if (k == stopcondition) { - continue; - } - for (i = 0; i < self->nargs; i++) { - tptr[i] += bufsize * laststrides[i]; - if (!needbuffer[i]) { - dptr[i] = tptr[i]; - } - } - } - /* end inner function over last dimension */ - - if (loop->objfunc) { - /* - * DECREF castbuf when underlying function used - * object arrays and casting was needed to get - * to object arrays - */ - for (i = 0; i < self->nargs; i++) { - if (loop->cast[i]) { - if (steps[i] == 0) { - Py_XDECREF(*((PyObject **)castbuf[i])); - } - else { - int size = loop->bufsize; - - PyObject **objptr = (PyObject **)castbuf[i]; - /* - * size is loop->bufsize unless there - * was only one loop - */ - if (ninnerloops == 1) { - size = loop->leftover; - } - for (j = 0; j < size; j++) { - Py_XDECREF(*objptr); - *objptr = NULL; - objptr += 1; - } - } - } - } - } - /* fixme -- probably not needed here*/ - UFUNC_CHECK_ERROR(loop); - - for (i = 0; i < self->nargs; i++) { - PyArray_ITER_NEXT(loop->iters[i]); - } - index++; - } - } /* end of last case statement */ - } - - NPY_LOOP_END_THREADS; - ufuncloop_dealloc(loop); - return 0; - -fail: - NPY_LOOP_END_THREADS; - if (loop) { - ufuncloop_dealloc(loop); - } - return -1; -} - -static PyArrayObject * -_getidentity(PyUFuncObject *self, int otype, char *str) -{ 
- PyObject *obj, *arr; - PyArray_Descr *typecode; - - if (self->identity == PyUFunc_None) { - PyErr_Format(PyExc_ValueError, - "zero-size array to ufunc.%s " \ - "without identity", str); - return NULL; - } - if (self->identity == PyUFunc_One) { - obj = PyInt_FromLong((long) 1); - } else { - obj = PyInt_FromLong((long) 0); - } - - typecode = PyArray_DescrFromType(otype); - arr = PyArray_FromAny(obj, typecode, 0, 0, CARRAY, NULL); - Py_DECREF(obj); - return (PyArrayObject *)arr; -} - -static int -_create_reduce_copy(PyUFuncReduceObject *loop, PyArrayObject **arr, int rtype) -{ - intp maxsize; - PyObject *new; - PyArray_Descr *ntype; - - maxsize = PyArray_SIZE(*arr); - - if (maxsize < loop->bufsize) { - if (!(PyArray_ISBEHAVED_RO(*arr)) - || PyArray_TYPE(*arr) != rtype) { - ntype = PyArray_DescrFromType(rtype); - new = PyArray_FromAny((PyObject *)(*arr), - ntype, 0, 0, - FORCECAST | ALIGNED, NULL); - if (new == NULL) { - return -1; - } - *arr = (PyArrayObject *)new; - loop->decref = new; - } - } - - /* - * Don't decref *arr before re-assigning - * because it was not going to be DECREF'd anyway. - * - * If a copy is made, then the copy will be removed - * on deallocation of the loop structure by setting - * loop->decref. 
- */ - return 0; -} - -static PyUFuncReduceObject * -construct_reduce(PyUFuncObject *self, PyArrayObject **arr, PyArrayObject *out, - int axis, int otype, int operation, intp ind_size, char *str) -{ - PyUFuncReduceObject *loop; - PyArrayObject *idarr; - PyArrayObject *aar; - intp loop_i[MAX_DIMS], outsize = 0; - int arg_types[3]; - PyArray_SCALARKIND scalars[3] = {PyArray_NOSCALAR, PyArray_NOSCALAR, - PyArray_NOSCALAR}; - int i, j, nd; - int flags; - - /* Reduce type is the type requested of the input during reduction */ - if (self->core_enabled) { - PyErr_Format(PyExc_RuntimeError, - "construct_reduce not allowed on ufunc with signature"); - return NULL; - } - nd = (*arr)->nd; - arg_types[0] = otype; - arg_types[1] = otype; - arg_types[2] = otype; - if ((loop = _pya_malloc(sizeof(PyUFuncReduceObject))) == NULL) { - PyErr_NoMemory(); - return loop; - } - - loop->retbase = 0; - loop->swap = 0; - loop->index = 0; - loop->ufunc = self; - Py_INCREF(self); - loop->cast = NULL; - loop->buffer = NULL; - loop->ret = NULL; - loop->it = NULL; - loop->rit = NULL; - loop->errobj = NULL; - loop->first = 1; - loop->decref = NULL; - loop->N = (*arr)->dimensions[axis]; - loop->instrides = (*arr)->strides[axis]; - if (select_types(loop->ufunc, arg_types, &(loop->function), - &(loop->funcdata), scalars, NULL) == -1) { - goto fail; - } - /* - * output type may change -- if it does - * reduction is forced into that type - * and we need to select the reduction function again - */ - if (otype != arg_types[2]) { - otype = arg_types[2]; - arg_types[0] = otype; - arg_types[1] = otype; - if (select_types(loop->ufunc, arg_types, &(loop->function), - &(loop->funcdata), scalars, NULL) == -1) { - goto fail; - } - } - - /* get looping parameters from Python */ - if (PyUFunc_GetPyValues(str, &(loop->bufsize), &(loop->errormask), - &(loop->errobj)) < 0) { - goto fail; - } - /* Make copy if misbehaved or not otype for small arrays */ - if (_create_reduce_copy(loop, arr, otype) < 0) { - goto fail; - 
} - aar = *arr; - - if (loop->N == 0) { - loop->meth = ZERO_EL_REDUCELOOP; - } - else if (PyArray_ISBEHAVED_RO(aar) && (otype == (aar)->descr->type_num)) { - if (loop->N == 1) { - loop->meth = ONE_EL_REDUCELOOP; - } - else { - loop->meth = NOBUFFER_UFUNCLOOP; - loop->steps[1] = (aar)->strides[axis]; - loop->N -= 1; - } - } - else { - loop->meth = BUFFER_UFUNCLOOP; - loop->swap = !(PyArray_ISNOTSWAPPED(aar)); - } - - /* Determine if object arrays are involved */ - if (otype == PyArray_OBJECT || aar->descr->type_num == PyArray_OBJECT) { - loop->obj = UFUNC_OBJ_ISOBJECT | UFUNC_OBJ_NEEDS_API; - } - else { - loop->obj = 0; - } - if ((loop->meth == ZERO_EL_REDUCELOOP) - || ((operation == UFUNC_REDUCEAT) - && (loop->meth == BUFFER_UFUNCLOOP))) { - idarr = _getidentity(self, otype, str); - if (idarr == NULL) { - goto fail; - } - if (idarr->descr->elsize > UFUNC_MAXIDENTITY) { - PyErr_Format(PyExc_RuntimeError, - "UFUNC_MAXIDENTITY (%d) is too small "\ - "(needs to be at least %d)", - UFUNC_MAXIDENTITY, idarr->descr->elsize); - Py_DECREF(idarr); - goto fail; - } - memcpy(loop->idptr, idarr->data, idarr->descr->elsize); - Py_DECREF(idarr); - } - - /* Construct return array */ - flags = NPY_CARRAY | NPY_UPDATEIFCOPY | NPY_FORCECAST; - switch(operation) { - case UFUNC_REDUCE: - for (j = 0, i = 0; i < nd; i++) { - if (i != axis) { - loop_i[j++] = (aar)->dimensions[i]; - } - } - if (out == NULL) { - loop->ret = (PyArrayObject *) - PyArray_New(Py_TYPE(aar), aar->nd-1, loop_i, - otype, NULL, NULL, 0, 0, - (PyObject *)aar); - } - else { - outsize = PyArray_MultiplyList(loop_i, aar->nd - 1); - } - break; - case UFUNC_ACCUMULATE: - if (out == NULL) { - loop->ret = (PyArrayObject *) - PyArray_New(Py_TYPE(aar), aar->nd, aar->dimensions, - otype, NULL, NULL, 0, 0, (PyObject *)aar); - } - else { - outsize = PyArray_MultiplyList(aar->dimensions, aar->nd); - } - break; - case UFUNC_REDUCEAT: - memcpy(loop_i, aar->dimensions, nd*sizeof(intp)); - /* Index is 1-d array */ - loop_i[axis] = 
ind_size; - if (out == NULL) { - loop->ret = (PyArrayObject *) - PyArray_New(Py_TYPE(aar), aar->nd, loop_i, otype, - NULL, NULL, 0, 0, (PyObject *)aar); - } - else { - outsize = PyArray_MultiplyList(loop_i, aar->nd); - } - if (ind_size == 0) { - loop->meth = ZERO_EL_REDUCELOOP; - return loop; - } - if (loop->meth == ONE_EL_REDUCELOOP) { - loop->meth = NOBUFFER_REDUCELOOP; - } - break; - } - if (out) { - if (PyArray_SIZE(out) != outsize) { - PyErr_SetString(PyExc_ValueError, - "wrong shape for output"); - goto fail; - } - loop->ret = (PyArrayObject *) - PyArray_FromArray(out, PyArray_DescrFromType(otype), flags); - if (loop->ret && loop->ret != out) { - loop->retbase = 1; - } - } - if (loop->ret == NULL) { - goto fail; - } - loop->insize = aar->descr->elsize; - loop->outsize = loop->ret->descr->elsize; - loop->bufptr[0] = loop->ret->data; - - if (loop->meth == ZERO_EL_REDUCELOOP) { - loop->size = PyArray_SIZE(loop->ret); - return loop; - } - - loop->it = (PyArrayIterObject *)PyArray_IterNew((PyObject *)aar); - if (loop->it == NULL) { - return NULL; - } - if (loop->meth == ONE_EL_REDUCELOOP) { - loop->size = loop->it->size; - return loop; - } - - /* - * Fix iterator to loop over correct dimension - * Set size in axis dimension to 1 - */ - loop->it->contiguous = 0; - loop->it->size /= (loop->it->dims_m1[axis]+1); - loop->it->dims_m1[axis] = 0; - loop->it->backstrides[axis] = 0; - loop->size = loop->it->size; - if (operation == UFUNC_REDUCE) { - loop->steps[0] = 0; - } - else { - loop->rit = (PyArrayIterObject *) \ - PyArray_IterNew((PyObject *)(loop->ret)); - if (loop->rit == NULL) { - return NULL; - } - /* - * Fix iterator to loop over correct dimension - * Set size in axis dimension to 1 - */ - loop->rit->contiguous = 0; - loop->rit->size /= (loop->rit->dims_m1[axis] + 1); - loop->rit->dims_m1[axis] = 0; - loop->rit->backstrides[axis] = 0; - - if (operation == UFUNC_ACCUMULATE) { - loop->steps[0] = loop->ret->strides[axis]; - } - else { - loop->steps[0] = 0; - } - } 
- loop->steps[2] = loop->steps[0]; - loop->bufptr[2] = loop->bufptr[0] + loop->steps[2]; - if (loop->meth == BUFFER_UFUNCLOOP) { - int _size; - - loop->steps[1] = loop->outsize; - if (otype != aar->descr->type_num) { - _size=loop->bufsize*(loop->outsize + aar->descr->elsize); - loop->buffer = PyDataMem_NEW(_size); - if (loop->buffer == NULL) { - goto fail; - } - if (loop->obj & UFUNC_OBJ_ISOBJECT) { - memset(loop->buffer, 0, _size); - } - loop->castbuf = loop->buffer + loop->bufsize*aar->descr->elsize; - loop->bufptr[1] = loop->castbuf; - loop->cast = PyArray_GetCastFunc(aar->descr, otype); - if (loop->cast == NULL) { - goto fail; - } - } - else { - _size = loop->bufsize * loop->outsize; - loop->buffer = PyDataMem_NEW(_size); - if (loop->buffer == NULL) { - goto fail; - } - if (loop->obj & UFUNC_OBJ_ISOBJECT) { - memset(loop->buffer, 0, _size); - } - loop->bufptr[1] = loop->buffer; - } - } - PyUFunc_clearfperr(); - return loop; - - fail: - ufuncreduce_dealloc(loop); - return NULL; -} - - -/* - * We have two basic kinds of loops. One is used when arr is not-swapped - * and aligned and output type is the same as input type. The other uses - * buffers when one of these is not satisfied. - * - * Zero-length and one-length axes-to-be-reduced are handled separately. 
- */ -static PyObject * -PyUFunc_Reduce(PyUFuncObject *self, PyArrayObject *arr, PyArrayObject *out, - int axis, int otype) -{ - PyArrayObject *ret = NULL; - PyUFuncReduceObject *loop; - intp i, n; - char *dptr; - NPY_BEGIN_THREADS_DEF; - - /* Construct loop object */ - loop = construct_reduce(self, &arr, out, axis, otype, UFUNC_REDUCE, 0, - "reduce"); - if (!loop) { - return NULL; - } - - NPY_LOOP_BEGIN_THREADS; - switch(loop->meth) { - case ZERO_EL_REDUCELOOP: - /* fprintf(stderr, "ZERO..%d\n", loop->size); */ - for (i = 0; i < loop->size; i++) { - if (loop->obj & UFUNC_OBJ_ISOBJECT) { - Py_INCREF(*((PyObject **)loop->idptr)); - } - memmove(loop->bufptr[0], loop->idptr, loop->outsize); - loop->bufptr[0] += loop->outsize; - } - break; - case ONE_EL_REDUCELOOP: - /*fprintf(stderr, "ONEDIM..%d\n", loop->size); */ - while (loop->index < loop->size) { - if (loop->obj & UFUNC_OBJ_ISOBJECT) { - Py_INCREF(*((PyObject **)loop->it->dataptr)); - } - memmove(loop->bufptr[0], loop->it->dataptr, loop->outsize); - PyArray_ITER_NEXT(loop->it); - loop->bufptr[0] += loop->outsize; - loop->index++; - } - break; - case NOBUFFER_UFUNCLOOP: - /*fprintf(stderr, "NOBUFFER..%d\n", loop->size); */ - while (loop->index < loop->size) { - /* Copy first element to output */ - if (loop->obj & UFUNC_OBJ_ISOBJECT) { - Py_INCREF(*((PyObject **)loop->it->dataptr)); - } - memmove(loop->bufptr[0], loop->it->dataptr, loop->outsize); - /* Adjust input pointer */ - loop->bufptr[1] = loop->it->dataptr+loop->steps[1]; - loop->function((char **)loop->bufptr, &(loop->N), - loop->steps, loop->funcdata); - UFUNC_CHECK_ERROR(loop); - PyArray_ITER_NEXT(loop->it); - loop->bufptr[0] += loop->outsize; - loop->bufptr[2] = loop->bufptr[0]; - loop->index++; - } - break; - case BUFFER_UFUNCLOOP: - /* - * use buffer for arr - * - * For each row to reduce - * 1. copy first item over to output (casting if necessary) - * 2. Fill inner buffer - * 3. When buffer is filled or end of row - * a. 
Cast input buffers if needed - * b. Call inner function. - * 4. Repeat 2 until row is done. - */ - /* fprintf(stderr, "BUFFERED..%d %d\n", loop->size, loop->swap); */ - while(loop->index < loop->size) { - loop->inptr = loop->it->dataptr; - /* Copy (cast) First term over to output */ - if (loop->cast) { - /* A little tricky because we need to cast it first */ - arr->descr->f->copyswap(loop->buffer, loop->inptr, - loop->swap, NULL); - loop->cast(loop->buffer, loop->castbuf, 1, NULL, NULL); - if (loop->obj & UFUNC_OBJ_ISOBJECT) { - Py_XINCREF(*((PyObject **)loop->castbuf)); - } - memcpy(loop->bufptr[0], loop->castbuf, loop->outsize); - } - else { - /* Simple copy */ - arr->descr->f->copyswap(loop->bufptr[0], loop->inptr, - loop->swap, NULL); - } - loop->inptr += loop->instrides; - n = 1; - while(n < loop->N) { - /* Copy up to loop->bufsize elements to buffer */ - dptr = loop->buffer; - for (i = 0; i < loop->bufsize; i++, n++) { - if (n == loop->N) { - break; - } - arr->descr->f->copyswap(dptr, loop->inptr, - loop->swap, NULL); - loop->inptr += loop->instrides; - dptr += loop->insize; - } - if (loop->cast) { - loop->cast(loop->buffer, loop->castbuf, i, NULL, NULL); - } - loop->function((char **)loop->bufptr, &i, - loop->steps, loop->funcdata); - loop->bufptr[0] += loop->steps[0]*i; - loop->bufptr[2] += loop->steps[2]*i; - UFUNC_CHECK_ERROR(loop); - } - PyArray_ITER_NEXT(loop->it); - loop->bufptr[0] += loop->outsize; - loop->bufptr[2] = loop->bufptr[0]; - loop->index++; - } - - /* - * DECREF left-over objects if buffering was used. - * It is needed when casting created new objects in - * castbuf. Intermediate copying into castbuf (via - * loop->function) decref'd what was already there. - - * It's the final copy into the castbuf that needs a DECREF. 
- */ - - /* Only when casting needed and it is from a non-object array */ - if ((loop->obj & UFUNC_OBJ_ISOBJECT) && loop->cast && - (!PyArray_ISOBJECT(arr))) { - for (i = 0; i < loop->bufsize; i++) { - Py_CLEAR(((PyObject **)loop->castbuf)[i]); - } - } - - } - NPY_LOOP_END_THREADS; - /* Hang on to this reference -- will be decref'd with loop */ - if (loop->retbase) { - ret = (PyArrayObject *)loop->ret->base; - } - else { - ret = loop->ret; - } - Py_INCREF(ret); - ufuncreduce_dealloc(loop); - return (PyObject *)ret; - -fail: - NPY_LOOP_END_THREADS; - if (loop) { - ufuncreduce_dealloc(loop); - } - return NULL; -} - - -static PyObject * -PyUFunc_Accumulate(PyUFuncObject *self, PyArrayObject *arr, PyArrayObject *out, - int axis, int otype) -{ - PyArrayObject *ret = NULL; - PyUFuncReduceObject *loop; - intp i, n; - char *dptr; - NPY_BEGIN_THREADS_DEF; - - /* Construct loop object */ - loop = construct_reduce(self, &arr, out, axis, otype, - UFUNC_ACCUMULATE, 0, "accumulate"); - if (!loop) { - return NULL; - } - - NPY_LOOP_BEGIN_THREADS; - switch(loop->meth) { - case ZERO_EL_REDUCELOOP: - /* Accumulate */ - /* fprintf(stderr, "ZERO..%d\n", loop->size); */ - for (i = 0; i < loop->size; i++) { - if (loop->obj & UFUNC_OBJ_ISOBJECT) { - Py_INCREF(*((PyObject **)loop->idptr)); - } - memcpy(loop->bufptr[0], loop->idptr, loop->outsize); - loop->bufptr[0] += loop->outsize; - } - break; - case ONE_EL_REDUCELOOP: - /* Accumulate */ - /* fprintf(stderr, "ONEDIM..%d\n", loop->size); */ - while (loop->index < loop->size) { - if (loop->obj & UFUNC_OBJ_ISOBJECT) { - Py_INCREF(*((PyObject **)loop->it->dataptr)); - } - memmove(loop->bufptr[0], loop->it->dataptr, loop->outsize); - PyArray_ITER_NEXT(loop->it); - loop->bufptr[0] += loop->outsize; - loop->index++; - } - break; - case NOBUFFER_UFUNCLOOP: - /* Accumulate */ - /* fprintf(stderr, "NOBUFFER..%d\n", loop->size); */ - while (loop->index < loop->size) { - /* Copy first element to output */ - if (loop->obj & UFUNC_OBJ_ISOBJECT) { - 
Py_INCREF(*((PyObject **)loop->it->dataptr)); - } - memmove(loop->bufptr[0], loop->it->dataptr, loop->outsize); - /* Adjust input pointer */ - loop->bufptr[1] = loop->it->dataptr + loop->steps[1]; - loop->function((char **)loop->bufptr, &(loop->N), - loop->steps, loop->funcdata); - UFUNC_CHECK_ERROR(loop); - PyArray_ITER_NEXT(loop->it); - PyArray_ITER_NEXT(loop->rit); - loop->bufptr[0] = loop->rit->dataptr; - loop->bufptr[2] = loop->bufptr[0] + loop->steps[0]; - loop->index++; - } - break; - case BUFFER_UFUNCLOOP: - /* Accumulate - * - * use buffer for arr - * - * For each row to reduce - * 1. copy identity over to output (casting if necessary) - * 2. Fill inner buffer - * 3. When buffer is filled or end of row - * a. Cast input buffers if needed - * b. Call inner function. - * 4. Repeat 2 until row is done. - */ - /* fprintf(stderr, "BUFFERED..%d %p\n", loop->size, loop->cast); */ - while (loop->index < loop->size) { - loop->inptr = loop->it->dataptr; - /* Copy (cast) First term over to output */ - if (loop->cast) { - /* A little tricky because we need to - cast it first */ - arr->descr->f->copyswap(loop->buffer, loop->inptr, - loop->swap, NULL); - loop->cast(loop->buffer, loop->castbuf, 1, NULL, NULL); - if (loop->obj & UFUNC_OBJ_ISOBJECT) { - Py_XINCREF(*((PyObject **)loop->castbuf)); - } - memcpy(loop->bufptr[0], loop->castbuf, loop->outsize); - } - else { - /* Simple copy */ - arr->descr->f->copyswap(loop->bufptr[0], loop->inptr, - loop->swap, NULL); - } - loop->inptr += loop->instrides; - n = 1; - while (n < loop->N) { - /* Copy up to loop->bufsize elements to buffer */ - dptr = loop->buffer; - for (i = 0; i < loop->bufsize; i++, n++) { - if (n == loop->N) { - break; - } - arr->descr->f->copyswap(dptr, loop->inptr, - loop->swap, NULL); - loop->inptr += loop->instrides; - dptr += loop->insize; - } - if (loop->cast) { - loop->cast(loop->buffer, loop->castbuf, i, NULL, NULL); - } - loop->function((char **)loop->bufptr, &i, - loop->steps, loop->funcdata); - 
loop->bufptr[0] += loop->steps[0]*i; - loop->bufptr[2] += loop->steps[2]*i; - UFUNC_CHECK_ERROR(loop); - } - PyArray_ITER_NEXT(loop->it); - PyArray_ITER_NEXT(loop->rit); - loop->bufptr[0] = loop->rit->dataptr; - loop->bufptr[2] = loop->bufptr[0] + loop->steps[0]; - loop->index++; - } - - /* - * DECREF left-over objects if buffering was used. - * It is needed when casting created new objects in - * castbuf. Intermediate copying into castbuf (via - * loop->function) decref'd what was already there. - - * It's the final copy into the castbuf that needs a DECREF. - */ - - /* Only when casting needed and it is from a non-object array */ - if ((loop->obj & UFUNC_OBJ_ISOBJECT) && loop->cast && - (!PyArray_ISOBJECT(arr))) { - for (i = 0; i < loop->bufsize; i++) { - Py_CLEAR(((PyObject **)loop->castbuf)[i]); - } - } - - } - NPY_LOOP_END_THREADS; - /* Hang on to this reference -- will be decref'd with loop */ - if (loop->retbase) { - ret = (PyArrayObject *)loop->ret->base; - } - else { - ret = loop->ret; - } - Py_INCREF(ret); - ufuncreduce_dealloc(loop); - return (PyObject *)ret; - - fail: - NPY_LOOP_END_THREADS; - if (loop) { - ufuncreduce_dealloc(loop); - } - return NULL; -} - -/* - * Reduceat performs a reduce over an axis using the indices as a guide - * - * op.reduceat(array,indices) computes - * op.reduce(array[indices[i]:indices[i+1]]) - * for i=0..end with an implicit indices[i+1]=len(array) - * assumed when i=end-1 - * - * if indices[i+1] <= indices[i]+1 - * then the result is array[indices[i]] for that value - * - * op.accumulate(array) is the same as - * op.reduceat(array,indices)[::2] - * where indices is range(len(array)-1) with a zero placed in every other sample - * indices = zeros(len(array)*2-1) - * indices[1::2] = range(1,len(array)) - * - * output shape is based on the size of indices - */ -static PyObject * -PyUFunc_Reduceat(PyUFuncObject *self, PyArrayObject *arr, PyArrayObject *ind, - PyArrayObject *out, int axis, int otype) -{ - PyArrayObject *ret; - 
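The reduceat rule documented in the comment above can be modeled in a few lines of pure Python (a sketch of the semantics, not the buffered C loops; `reduceat` here is a hypothetical helper operating on plain lists):

```python
from functools import reduce
import operator

def reduceat(op, arr, indices):
    """Model of op.reduceat(array, indices): segment i is
    arr[indices[i]:indices[i+1]], the last segment runs to the end,
    and an empty/reversed segment yields arr[indices[i]] alone."""
    out = []
    for i, start in enumerate(indices):
        end = indices[i + 1] if i + 1 < len(indices) else len(arr)
        if end <= start:                      # indices[i+1] <= indices[i]
            out.append(arr[start])
        else:
            out.append(reduce(op, arr[start:end]))
    return out

# Matches np.add.reduceat(np.arange(8), [0, 4, 1, 5])
print(reduceat(operator.add, list(range(8)), [0, 4, 1, 5]))  # [6, 4, 10, 18]
```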
PyUFuncReduceObject *loop; - intp *ptr = (intp *)ind->data; - intp nn = ind->dimensions[0]; - intp mm = arr->dimensions[axis] - 1; - intp n, i, j; - char *dptr; - NPY_BEGIN_THREADS_DEF; - - /* Check for out-of-bounds values in indices array */ - for (i = 0; i < nn; i++) { - if ((*ptr < 0) || (*ptr > mm)) { - PyErr_Format(PyExc_IndexError, - "index out-of-bounds (0, %d)", (int) mm); - return NULL; - } - ptr++; - } - - ptr = (intp *)ind->data; - /* Construct loop object */ - loop = construct_reduce(self, &arr, out, axis, otype, - UFUNC_REDUCEAT, nn, "reduceat"); - if (!loop) { - return NULL; - } - - NPY_LOOP_BEGIN_THREADS; - switch(loop->meth) { - case ZERO_EL_REDUCELOOP: - /* zero-length index -- return array immediately */ - /* fprintf(stderr, "ZERO..\n"); */ - break; - case NOBUFFER_UFUNCLOOP: - /* Reduceat - * NOBUFFER -- behaved array and same type - */ - /* fprintf(stderr, "NOBUFFER..%d\n", loop->size); */ - while (loop->index < loop->size) { - ptr = (intp *)ind->data; - for (i = 0; i < nn; i++) { - loop->bufptr[1] = loop->it->dataptr + (*ptr)*loop->steps[1]; - if (loop->obj & UFUNC_OBJ_ISOBJECT) { - Py_XINCREF(*((PyObject **)loop->bufptr[1])); - } - memcpy(loop->bufptr[0], loop->bufptr[1], loop->outsize); - mm = (i == nn - 1 ? 
arr->dimensions[axis] - *ptr : - *(ptr + 1) - *ptr) - 1; - if (mm > 0) { - loop->bufptr[1] += loop->steps[1]; - loop->bufptr[2] = loop->bufptr[0]; - loop->function((char **)loop->bufptr, &mm, - loop->steps, loop->funcdata); - UFUNC_CHECK_ERROR(loop); - } - loop->bufptr[0] += loop->ret->strides[axis]; - ptr++; - } - PyArray_ITER_NEXT(loop->it); - PyArray_ITER_NEXT(loop->rit); - loop->bufptr[0] = loop->rit->dataptr; - loop->index++; - } - break; - - case BUFFER_UFUNCLOOP: - /* Reduceat - * BUFFER -- misbehaved array or different types - */ - /* fprintf(stderr, "BUFFERED..%d\n", loop->size); */ - while (loop->index < loop->size) { - ptr = (intp *)ind->data; - for (i = 0; i < nn; i++) { - if (loop->obj & UFUNC_OBJ_ISOBJECT) { - Py_XINCREF(*((PyObject **)loop->idptr)); - } - memcpy(loop->bufptr[0], loop->idptr, loop->outsize); - n = 0; - mm = (i == nn - 1 ? arr->dimensions[axis] - *ptr : - *(ptr + 1) - *ptr); - if (mm < 1) { - mm = 1; - } - loop->inptr = loop->it->dataptr + (*ptr)*loop->instrides; - while (n < mm) { - /* Copy up to loop->bufsize elements to buffer */ - dptr = loop->buffer; - for (j = 0; j < loop->bufsize; j++, n++) { - if (n == mm) { - break; - } - arr->descr->f->copyswap(dptr, loop->inptr, - loop->swap, NULL); - loop->inptr += loop->instrides; - dptr += loop->insize; - } - if (loop->cast) { - loop->cast(loop->buffer, loop->castbuf, j, NULL, NULL); - } - loop->bufptr[2] = loop->bufptr[0]; - loop->function((char **)loop->bufptr, &j, - loop->steps, loop->funcdata); - UFUNC_CHECK_ERROR(loop); - loop->bufptr[0] += j*loop->steps[0]; - } - loop->bufptr[0] += loop->ret->strides[axis]; - ptr++; - } - PyArray_ITER_NEXT(loop->it); - PyArray_ITER_NEXT(loop->rit); - loop->bufptr[0] = loop->rit->dataptr; - loop->index++; - } - - /* - * DECREF left-over objects if buffering was used. - * It is needed when casting created new objects in - * castbuf. Intermediate copying into castbuf (via - * loop->function) decref'd what was already there. 
- - * It's the final copy into the castbuf that needs a DECREF. - */ - - /* Only when casting needed and it is from a non-object array */ - if ((loop->obj & UFUNC_OBJ_ISOBJECT) && loop->cast && - (!PyArray_ISOBJECT(arr))) { - for (i = 0; i < loop->bufsize; i++) { - Py_CLEAR(((PyObject **)loop->castbuf)[i]); - } - } - - break; - } - NPY_LOOP_END_THREADS; - /* Hang on to this reference -- will be decref'd with loop */ - if (loop->retbase) { - ret = (PyArrayObject *)loop->ret->base; - } - else { - ret = loop->ret; - } - Py_INCREF(ret); - ufuncreduce_dealloc(loop); - return (PyObject *)ret; - -fail: - NPY_LOOP_END_THREADS; - if (loop) { - ufuncreduce_dealloc(loop); - } - return NULL; -} - - -/* - * This code handles reduce, reduceat, and accumulate - * (accumulate and reduce are special cases of the more general reduceat - * but they are handled separately for speed) - */ -static PyObject * -PyUFunc_GenericReduction(PyUFuncObject *self, PyObject *args, - PyObject *kwds, int operation) -{ - int axis=0; - PyArrayObject *mp, *ret = NULL; - PyObject *op, *res = NULL; - PyObject *obj_ind, *context; - PyArrayObject *indices = NULL; - PyArray_Descr *otype = NULL; - PyArrayObject *out = NULL; - static char *kwlist1[] = {"array", "axis", "dtype", "out", NULL}; - static char *kwlist2[] = {"array", "indices", "axis", "dtype", "out", NULL}; - static char *_reduce_type[] = {"reduce", "accumulate", "reduceat", NULL}; - - if (self == NULL) { - PyErr_SetString(PyExc_ValueError, "function not supported"); - return NULL; - } - if (self->core_enabled) { - PyErr_Format(PyExc_RuntimeError, - "Reduction not defined on ufunc with signature"); - return NULL; - } - if (self->nin != 2) { - PyErr_Format(PyExc_ValueError, - "%s only supported for binary functions", - _reduce_type[operation]); - return NULL; - } - if (self->nout != 1) { - PyErr_Format(PyExc_ValueError, - "%s only supported for functions " \ - "returning a single value", - _reduce_type[operation]); - return NULL; - } - - if (operation == 
UFUNC_REDUCEAT) { - PyArray_Descr *indtype; - indtype = PyArray_DescrFromType(PyArray_INTP); - if(!PyArg_ParseTupleAndKeywords(args, kwds, "OO|iO&O&", kwlist2, - &op, &obj_ind, &axis, - PyArray_DescrConverter2, - &otype, - PyArray_OutputConverter, - &out)) { - Py_XDECREF(otype); - return NULL; - } - indices = (PyArrayObject *)PyArray_FromAny(obj_ind, indtype, - 1, 1, CARRAY, NULL); - if (indices == NULL) { - Py_XDECREF(otype); - return NULL; - } - } - else { - if(!PyArg_ParseTupleAndKeywords(args, kwds, "O|iO&O&", kwlist1, - &op, &axis, - PyArray_DescrConverter2, - &otype, - PyArray_OutputConverter, - &out)) { - Py_XDECREF(otype); - return NULL; - } - } - /* Ensure input is an array */ - if (!PyArray_Check(op) && !PyArray_IsScalar(op, Generic)) { - context = Py_BuildValue("O(O)i", self, op, 0); - } - else { - context = NULL; - } - mp = (PyArrayObject *)PyArray_FromAny(op, NULL, 0, 0, 0, context); - Py_XDECREF(context); - if (mp == NULL) { - return NULL; - } - /* Check to see if input is zero-dimensional */ - if (mp->nd == 0) { - PyErr_Format(PyExc_TypeError, "cannot %s on a scalar", - _reduce_type[operation]); - Py_XDECREF(otype); - Py_DECREF(mp); - return NULL; - } - /* Check to see that type (and otype) is not FLEXIBLE */ - if (PyArray_ISFLEXIBLE(mp) || - (otype && PyTypeNum_ISFLEXIBLE(otype->type_num))) { - PyErr_Format(PyExc_TypeError, - "cannot perform %s with flexible type", - _reduce_type[operation]); - Py_XDECREF(otype); - Py_DECREF(mp); - return NULL; - } - - if (axis < 0) { - axis += mp->nd; - } - if (axis < 0 || axis >= mp->nd) { - PyErr_SetString(PyExc_ValueError, "axis not in array"); - Py_XDECREF(otype); - Py_DECREF(mp); - return NULL; - } - /* - * If out is specified it determines otype - * unless otype already specified. 
- */ - if (otype == NULL && out != NULL) { - otype = out->descr; - Py_INCREF(otype); - } - if (otype == NULL) { - /* - * For integer types --- make sure at least a long - * is used for add and multiply reduction to avoid overflow - */ - int typenum = PyArray_TYPE(mp); - if ((typenum < NPY_FLOAT) - && ((strcmp(self->name,"add") == 0) - || (strcmp(self->name,"multiply") == 0))) { - if (PyTypeNum_ISBOOL(typenum)) { - typenum = PyArray_LONG; - } - else if ((size_t)mp->descr->elsize < sizeof(long)) { - if (PyTypeNum_ISUNSIGNED(typenum)) { - typenum = PyArray_ULONG; - } - else { - typenum = PyArray_LONG; - } - } - } - otype = PyArray_DescrFromType(typenum); - } - - - switch(operation) { - case UFUNC_REDUCE: - ret = (PyArrayObject *)PyUFunc_Reduce(self, mp, out, axis, - otype->type_num); - break; - case UFUNC_ACCUMULATE: - ret = (PyArrayObject *)PyUFunc_Accumulate(self, mp, out, axis, - otype->type_num); - break; - case UFUNC_REDUCEAT: - ret = (PyArrayObject *)PyUFunc_Reduceat(self, mp, indices, out, - axis, otype->type_num); - Py_DECREF(indices); - break; - } - Py_DECREF(mp); - Py_DECREF(otype); - if (ret == NULL) { - return NULL; - } - if (Py_TYPE(op) != Py_TYPE(ret)) { - res = PyObject_CallMethod(op, "__array_wrap__", "O", ret); - if (res == NULL) { - PyErr_Clear(); - } - else if (res == Py_None) { - Py_DECREF(res); - } - else { - Py_DECREF(ret); - return res; - } - } - return PyArray_Return(ret); -} - -/* - * This function analyzes the input arguments - * and determines an appropriate __array_wrap__ function to call - * for the outputs. - * - * If an output argument is provided, then it is wrapped - * with its own __array_wrap__ not with the one determined by - * the input arguments. - * - * if the provided output argument is already an array, - * the wrapping function is None (which means no wrapping will - * be done --- not even PyArray_Return). - * - * A NULL is placed in output_wrap for outputs that - * should just have PyArray_Return called. 
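The integer-promotion rule in the `otype == NULL` branch above (widen bools and small integers to a C long for `add`/`multiply` reductions, to avoid overflow) can be sketched as follows. The string type tags and `default_reduce_type` helper are hypothetical stand-ins for the C typenum logic:

```python
def default_reduce_type(typechar, elsize, ufunc_name, sizeof_long=8):
    """Sketch of the default-otype selection: for 'add'/'multiply'
    reductions, bools become long, and integers narrower than a C long
    become (u)long; everything else keeps its input type."""
    if ufunc_name not in ("add", "multiply"):
        return typechar
    if typechar == "bool":
        return "long"
    if typechar in ("int", "uint") and elsize < sizeof_long:
        return "ulong" if typechar == "uint" else "long"
    return typechar

print(default_reduce_type("int", 1, "add"))        # int8 sums accumulate in long
print(default_reduce_type("uint", 2, "multiply"))  # uint16 products in ulong
print(default_reduce_type("float", 4, "add"))      # floats are left alone
```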
- */ -static void -_find_array_wrap(PyObject *args, PyObject **output_wrap, int nin, int nout) -{ - Py_ssize_t nargs; - int i; - int np = 0; - PyObject *with_wrap[NPY_MAXARGS], *wraps[NPY_MAXARGS]; - PyObject *obj, *wrap = NULL; - - nargs = PyTuple_GET_SIZE(args); - for (i = 0; i < nin; i++) { - obj = PyTuple_GET_ITEM(args, i); - if (PyArray_CheckExact(obj) || PyArray_IsAnyScalar(obj)) { - continue; - } - wrap = PyObject_GetAttrString(obj, "__array_wrap__"); - if (wrap) { - if (PyCallable_Check(wrap)) { - with_wrap[np] = obj; - wraps[np] = wrap; - ++np; - } - else { - Py_DECREF(wrap); - wrap = NULL; - } - } - else { - PyErr_Clear(); - } - } - if (np > 0) { - /* If we have some wraps defined, find the one of highest priority */ - wrap = wraps[0]; - if (np > 1) { - double maxpriority = PyArray_GetPriority(with_wrap[0], - PyArray_SUBTYPE_PRIORITY); - for (i = 1; i < np; ++i) { - double priority = PyArray_GetPriority(with_wrap[i], - PyArray_SUBTYPE_PRIORITY); - if (priority > maxpriority) { - maxpriority = priority; - Py_DECREF(wrap); - wrap = wraps[i]; - } - else { - Py_DECREF(wraps[i]); - } - } - } - } - - /* - * Here wrap is the wrapping function determined from the - * input arrays (could be NULL). - * - * For all the output arrays decide what to do. - * - * 1) Use the wrap function determined from the input arrays - * This is the default if the output array is not - * passed in. - * - * 2) Use the __array_wrap__ method of the output object - * passed in. -- this is special cased for - * exact ndarray so that no PyArray_Return is - * done in that case. 
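The priority-based selection that `_find_array_wrap` performs over the input arguments can be sketched in Python. This is a model of the rule only (candidates, names, and priorities here are hypothetical stand-ins for array subclasses exposing `__array_wrap__` and `__array_priority__`):

```python
def pick_wrap(candidates):
    """candidates: list of (name, priority, wrap_callable) for inputs
    that define a callable __array_wrap__. Returns the wrap of the
    highest-priority input; ties go to the earliest argument."""
    if not candidates:
        return None          # no input defines a usable wrap
    best = candidates[0]
    for cand in candidates[1:]:
        if cand[1] > best[1]:   # strictly greater, so ties keep 'best'
            best = cand
    return best[2]

wrap_a = lambda arr: ("A", arr)
wrap_b = lambda arr: ("B", arr)
print(pick_wrap([("a", 0.0, wrap_a), ("b", 15.0, wrap_b)])("result"))
```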
- */ - for (i = 0; i < nout; i++) { - int j = nin + i; - int incref = 1; - output_wrap[i] = wrap; - if (j < nargs) { - obj = PyTuple_GET_ITEM(args, j); - if (obj == Py_None) { - continue; - } - if (PyArray_CheckExact(obj)) { - output_wrap[i] = Py_None; - } - else { - PyObject *owrap = PyObject_GetAttrString(obj,"__array_wrap__"); - incref = 0; - if (!(owrap) || !(PyCallable_Check(owrap))) { - Py_XDECREF(owrap); - owrap = wrap; - incref = 1; - PyErr_Clear(); - } - output_wrap[i] = owrap; - } - } - if (incref) { - Py_XINCREF(output_wrap[i]); - } - } - Py_XDECREF(wrap); - return; -} - -static PyObject * -ufunc_generic_call(PyUFuncObject *self, PyObject *args, PyObject *kwds) -{ - int i; - PyTupleObject *ret; - PyArrayObject *mps[NPY_MAXARGS]; - PyObject *retobj[NPY_MAXARGS]; - PyObject *wraparr[NPY_MAXARGS]; - PyObject *res; - int errval; - - /* - * Initialize all array objects to NULL to make cleanup easier - * if something goes wrong. - */ - for(i = 0; i < self->nargs; i++) { - mps[i] = NULL; - } - errval = PyUFunc_GenericFunction(self, args, kwds, mps); - if (errval < 0) { - for (i = 0; i < self->nargs; i++) { - PyArray_XDECREF_ERR(mps[i]); - } - if (errval == -1) - return NULL; - else if (self->nin == 2 && self->nout == 1) { - /* To allow the other argument to be given a chance - */ - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - } - else { - PyErr_SetString(PyExc_NotImplementedError, "Not implemented for this type"); - return NULL; - } - } - for (i = 0; i < self->nin; i++) { - Py_DECREF(mps[i]); - } - /* - * Use __array_wrap__ on all outputs - * if present on one of the input arguments. - * If present for multiple inputs: - * use __array_wrap__ of input object with largest - * __array_priority__ (default = 0.0) - * - * Exception: we should not wrap outputs for items already - * passed in as output-arguments. These items should either - * be left unwrapped or wrapped by calling their own __array_wrap__ - * routine. 
- * - * For each output argument, wrap will be either - * NULL --- call PyArray_Return() -- default if no output arguments given - * None --- array-object passed in don't call PyArray_Return - * method --- the __array_wrap__ method to call. - */ - _find_array_wrap(args, wraparr, self->nin, self->nout); - - /* wrap outputs */ - for (i = 0; i < self->nout; i++) { - int j = self->nin+i; - PyObject *wrap; - /* - * check to see if any UPDATEIFCOPY flags are set - * which meant that a temporary output was generated - */ - if (mps[j]->flags & UPDATEIFCOPY) { - PyObject *old = mps[j]->base; - /* we want to hang on to this */ - Py_INCREF(old); - /* should trigger the copyback into old */ - Py_DECREF(mps[j]); - mps[j] = (PyArrayObject *)old; - } - wrap = wraparr[i]; - if (wrap != NULL) { - if (wrap == Py_None) { - Py_DECREF(wrap); - retobj[i] = (PyObject *)mps[j]; - continue; - } - res = PyObject_CallFunction(wrap, "O(OOi)", mps[j], self, args, i); - if (res == NULL && PyErr_ExceptionMatches(PyExc_TypeError)) { - PyErr_Clear(); - res = PyObject_CallFunctionObjArgs(wrap, mps[j], NULL); - } - Py_DECREF(wrap); - if (res == NULL) { - goto fail; - } - else if (res == Py_None) { - Py_DECREF(res); - } - else { - Py_DECREF(mps[j]); - retobj[i] = res; - continue; - } - } - /* default behavior */ - retobj[i] = PyArray_Return(mps[j]); - } - - if (self->nout == 1) { - return retobj[0]; - } - else { - ret = (PyTupleObject *)PyTuple_New(self->nout); - for (i = 0; i < self->nout; i++) { - PyTuple_SET_ITEM(ret, i, retobj[i]); - } - return (PyObject *)ret; - } - -fail: - for (i = self->nin; i < self->nargs; i++) { - Py_XDECREF(mps[i]); - } - return NULL; -} - -NPY_NO_EXPORT PyObject * -ufunc_geterr(PyObject *NPY_UNUSED(dummy), PyObject *args) -{ - PyObject *thedict; - PyObject *res; - - if (!PyArg_ParseTuple(args, "")) { - return NULL; - } - if (PyUFunc_PYVALS_NAME == NULL) { - PyUFunc_PYVALS_NAME = PyUString_InternFromString(UFUNC_PYVALS_NAME); - } - thedict = PyThreadState_GetDict(); - if 
(thedict == NULL) { - thedict = PyEval_GetBuiltins(); - } - res = PyDict_GetItem(thedict, PyUFunc_PYVALS_NAME); - if (res != NULL) { - Py_INCREF(res); - return res; - } - /* Construct list of defaults */ - res = PyList_New(3); - if (res == NULL) { - return NULL; - } - PyList_SET_ITEM(res, 0, PyInt_FromLong(PyArray_BUFSIZE)); - PyList_SET_ITEM(res, 1, PyInt_FromLong(UFUNC_ERR_DEFAULT)); - PyList_SET_ITEM(res, 2, Py_None); Py_INCREF(Py_None); - return res; -} - -#if USE_USE_DEFAULTS==1 -/* - * This is a strategy to buy a little speed up and avoid the dictionary - * look-up in the default case. It should work in the presence of - * threads. If it is deemed too complicated or it doesn't actually work - * it could be taken out. - */ -static int -ufunc_update_use_defaults(void) -{ - PyObject *errobj = NULL; - int errmask, bufsize; - int res; - - PyUFunc_NUM_NODEFAULTS += 1; - res = PyUFunc_GetPyValues("test", &bufsize, &errmask, &errobj); - PyUFunc_NUM_NODEFAULTS -= 1; - if (res < 0) { - Py_XDECREF(errobj); - return -1; - } - if ((errmask != UFUNC_ERR_DEFAULT) || (bufsize != PyArray_BUFSIZE) - || (PyTuple_GET_ITEM(errobj, 1) != Py_None)) { - PyUFunc_NUM_NODEFAULTS += 1; - } - else if (PyUFunc_NUM_NODEFAULTS > 0) { - PyUFunc_NUM_NODEFAULTS -= 1; - } - Py_XDECREF(errobj); - return 0; -} -#endif - -NPY_NO_EXPORT PyObject * -ufunc_seterr(PyObject *NPY_UNUSED(dummy), PyObject *args) -{ - PyObject *thedict; - int res; - PyObject *val; - static char *msg = "Error object must be a list of length 3"; - - if (!PyArg_ParseTuple(args, "O", &val)) { - return NULL; - } - if (!PyList_CheckExact(val) || PyList_GET_SIZE(val) != 3) { - PyErr_SetString(PyExc_ValueError, msg); - return NULL; - } - if (PyUFunc_PYVALS_NAME == NULL) { - PyUFunc_PYVALS_NAME = PyUString_InternFromString(UFUNC_PYVALS_NAME); - } - thedict = PyThreadState_GetDict(); - if (thedict == NULL) { - thedict = PyEval_GetBuiltins(); - } - res = PyDict_SetItem(thedict, PyUFunc_PYVALS_NAME, val); - if (res < 0) { - return 
NULL; - } -#if USE_USE_DEFAULTS==1 - if (ufunc_update_use_defaults() < 0) { - return NULL; - } -#endif - Py_INCREF(Py_None); - return Py_None; -} - - - -/*UFUNC_API*/ -NPY_NO_EXPORT int -PyUFunc_ReplaceLoopBySignature(PyUFuncObject *func, - PyUFuncGenericFunction newfunc, - int *signature, - PyUFuncGenericFunction *oldfunc) -{ - int i, j; - int res = -1; - /* Find the location of the matching signature */ - for (i = 0; i < func->ntypes; i++) { - for (j = 0; j < func->nargs; j++) { - if (signature[j] != func->types[i*func->nargs+j]) { - break; - } - } - if (j < func->nargs) { - continue; - } - if (oldfunc != NULL) { - *oldfunc = func->functions[i]; - } - func->functions[i] = newfunc; - res = 0; - break; - } - return res; -} - -/*UFUNC_API*/ -NPY_NO_EXPORT PyObject * -PyUFunc_FromFuncAndData(PyUFuncGenericFunction *func, void **data, - char *types, int ntypes, - int nin, int nout, int identity, - char *name, char *doc, int check_return) -{ - return PyUFunc_FromFuncAndDataAndSignature(func, data, types, ntypes, - nin, nout, identity, name, doc, check_return, NULL); -} - -/*UFUNC_API*/ -NPY_NO_EXPORT PyObject * -PyUFunc_FromFuncAndDataAndSignature(PyUFuncGenericFunction *func, void **data, - char *types, int ntypes, - int nin, int nout, int identity, - char *name, char *doc, - int check_return, const char *signature) -{ - PyUFuncObject *self; - - self = _pya_malloc(sizeof(PyUFuncObject)); - if (self == NULL) { - return NULL; - } - PyObject_Init((PyObject *)self, &PyUFunc_Type); - - self->nin = nin; - self->nout = nout; - self->nargs = nin+nout; - self->identity = identity; - - self->functions = func; - self->data = data; - self->types = types; - self->ntypes = ntypes; - self->check_return = check_return; - self->ptr = NULL; - self->obj = NULL; - self->userloops=NULL; - - if (name == NULL) { - self->name = "?"; - } - else { - self->name = name; - } - if (doc == NULL) { - self->doc = "NULL"; - } - else { - self->doc = doc; - } - - /* generalized ufunc */ - 
self->core_enabled = 0; - self->core_num_dim_ix = 0; - self->core_num_dims = NULL; - self->core_dim_ixs = NULL; - self->core_offsets = NULL; - self->core_signature = NULL; - if (signature != NULL) { - if (_parse_signature(self, signature) != 0) { - return NULL; - } - } - return (PyObject *)self; -} - -/* Specify that the loop specified by the given index should use the array of - * input and arrays as the data pointer to the loop. - */ -/*UFUNC_API*/ -NPY_NO_EXPORT int -PyUFunc_SetUsesArraysAsData(void **data, size_t i) -{ - data[i] = (void*)PyUFunc_SetUsesArraysAsData; - return 0; -} - -/* Return 1 if the given data pointer for the loop specifies that it needs the - * arrays as the data pointer. - */ -static int -_does_loop_use_arrays(void *data) -{ - return (data == PyUFunc_SetUsesArraysAsData); -} - - -/* - * This is the first-part of the CObject structure. - * - * I don't think this will change, but if it should, then - * this needs to be fixed. The exposed C-API was insufficient - * because I needed to replace the pointer and it wouldn't - * let me with a destructor set (even though it works fine - * with the destructor). 
- */ -typedef struct { - PyObject_HEAD - void *c_obj; -} _simple_cobj; - -#define _SETCPTR(cobj, val) ((_simple_cobj *)(cobj))->c_obj = (val) - -/* return 1 if arg1 > arg2, 0 if arg1 == arg2, and -1 if arg1 < arg2 */ -static int -cmp_arg_types(int *arg1, int *arg2, int n) -{ - for (; n > 0; n--, arg1++, arg2++) { - if (PyArray_EquivTypenums(*arg1, *arg2)) { - continue; - } - if (PyArray_CanCastSafely(*arg1, *arg2)) { - return -1; - } - return 1; - } - return 0; -} - -/* - * This frees the linked-list structure when the CObject - * is destroyed (removed from the internal dictionary) -*/ -static NPY_INLINE void -_free_loop1d_list(PyUFunc_Loop1d *data) -{ - while (data != NULL) { - PyUFunc_Loop1d *next = data->next; - _pya_free(data->arg_types); - _pya_free(data); - data = next; - } -} - -#if PY_VERSION_HEX >= 0x03000000 -static void -_loop1d_list_free(PyObject *ptr) -{ - PyUFunc_Loop1d *data = (PyUFunc_Loop1d *)PyCapsule_GetPointer(ptr, NULL); - _free_loop1d_list(data); -} -#else -static void -_loop1d_list_free(void *ptr) -{ - PyUFunc_Loop1d *data = (PyUFunc_Loop1d *)ptr; - _free_loop1d_list(data); -} -#endif - - -/*UFUNC_API*/ -NPY_NO_EXPORT int -PyUFunc_RegisterLoopForType(PyUFuncObject *ufunc, - int usertype, - PyUFuncGenericFunction function, - int *arg_types, - void *data) -{ - PyArray_Descr *descr; - PyUFunc_Loop1d *funcdata; - PyObject *key, *cobj; - int i; - int *newtypes=NULL; - - descr=PyArray_DescrFromType(usertype); - if ((usertype < PyArray_USERDEF) || (descr==NULL)) { - PyErr_SetString(PyExc_TypeError, "unknown user-defined type"); - return -1; - } - Py_DECREF(descr); - - if (ufunc->userloops == NULL) { - ufunc->userloops = PyDict_New(); - } - key = PyInt_FromLong((long) usertype); - if (key == NULL) { - return -1; - } - funcdata = _pya_malloc(sizeof(PyUFunc_Loop1d)); - if (funcdata == NULL) { - goto fail; - } - newtypes = _pya_malloc(sizeof(int)*ufunc->nargs); - if (newtypes == NULL) { - goto fail; - } - if (arg_types != NULL) { - for (i = 0; i < 
ufunc->nargs; i++) { - newtypes[i] = arg_types[i]; - } - } - else { - for (i = 0; i < ufunc->nargs; i++) { - newtypes[i] = usertype; - } - } - - funcdata->func = function; - funcdata->arg_types = newtypes; - funcdata->data = data; - funcdata->next = NULL; - - /* Get entry for this user-defined type*/ - cobj = PyDict_GetItem(ufunc->userloops, key); - /* If it's not there, then make one and return. */ - if (cobj == NULL) { - cobj = NpyCapsule_FromVoidPtr((void *)funcdata, _loop1d_list_free); - if (cobj == NULL) { - goto fail; - } - PyDict_SetItem(ufunc->userloops, key, cobj); - Py_DECREF(cobj); - Py_DECREF(key); - return 0; - } - else { - PyUFunc_Loop1d *current, *prev = NULL; - int cmp = 1; - /* - * There is already at least 1 loop. Place this one in - * lexicographic order. If the next one signature - * is exactly like this one, then just replace. - * Otherwise insert. - */ - current = (PyUFunc_Loop1d *)NpyCapsule_AsVoidPtr(cobj); - while (current != NULL) { - cmp = cmp_arg_types(current->arg_types, newtypes, ufunc->nargs); - if (cmp >= 0) { - break; - } - prev = current; - current = current->next; - } - if (cmp == 0) { - /* just replace it with new function */ - current->func = function; - current->data = data; - _pya_free(newtypes); - _pya_free(funcdata); - } - else { - /* - * insert it before the current one by hacking the internals - * of cobject to replace the function pointer --- can't use - * CObject API because destructor is set. 
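The ordered linked-list insertion described in the comment above (keep the 1-d user loops sorted by signature, replace on an exact match, otherwise splice in before the first larger entry) can be sketched with plain dicts in place of `PyUFunc_Loop1d` structs. `cmp_sigs` uses simple tuple ordering, standing in for `cmp_arg_types`, which also folds in casting rules:

```python
def cmp_sigs(a, b):
    """Three-way compare: -1, 0, or 1."""
    return (a > b) - (a < b)

def insert_loop(head, sig, func):
    """Insert (sig, func) into the sorted singly linked list at head,
    replacing the function on an exact signature match. Returns the
    (possibly new) head node."""
    node = {"sig": sig, "func": func, "next": None}
    prev, cur = None, head
    while cur is not None and cmp_sigs(cur["sig"], sig) < 0:
        prev, cur = cur, cur["next"]
    if cur is not None and cmp_sigs(cur["sig"], sig) == 0:
        cur["func"] = func       # exact match: replace in place
        return head
    node["next"] = cur           # splice before the first larger entry
    if prev is None:
        return node              # new front of list
    prev["next"] = node
    return head

head = None
for sig, f in [((2, 2), "f22"), ((1, 1), "f11"), ((3, 3), "f33"), ((2, 2), "g22")]:
    head = insert_loop(head, sig, f)
node, order = head, []
while node:
    order.append((node["sig"], node["func"]))
    node = node["next"]
print(order)  # [((1, 1), 'f11'), ((2, 2), 'g22'), ((3, 3), 'f33')]
```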
- */ - funcdata->next = current; - if (prev == NULL) { - /* place this at front */ - _SETCPTR(cobj, funcdata); - } - else { - prev->next = funcdata; - } - } - } - Py_DECREF(key); - return 0; - - fail: - Py_DECREF(key); - _pya_free(funcdata); - _pya_free(newtypes); - if (!PyErr_Occurred()) PyErr_NoMemory(); - return -1; -} - -#undef _SETCPTR - - -static void -ufunc_dealloc(PyUFuncObject *self) -{ - if (self->core_num_dims) { - _pya_free(self->core_num_dims); - } - if (self->core_dim_ixs) { - _pya_free(self->core_dim_ixs); - } - if (self->core_offsets) { - _pya_free(self->core_offsets); - } - if (self->core_signature) { - _pya_free(self->core_signature); - } - if (self->ptr) { - _pya_free(self->ptr); - } - Py_XDECREF(self->userloops); - Py_XDECREF(self->obj); - _pya_free(self); -} - -static PyObject * -ufunc_repr(PyUFuncObject *self) -{ - char buf[100]; - - sprintf(buf, "<ufunc '%s'>", self->name); - return PyUString_FromString(buf); -} - - -/****************************************************************************** - *** UFUNC METHODS *** - *****************************************************************************/ - - -/* - * op.outer(a,b) is equivalent to op(a[:,NewAxis,NewAxis,etc.],b) - * where a has b.ndim NewAxis terms appended. 
- * - * The result has dimensions a.ndim + b.ndim - */ -static PyObject * -ufunc_outer(PyUFuncObject *self, PyObject *args, PyObject *kwds) -{ - int i; - PyObject *ret; - PyArrayObject *ap1 = NULL, *ap2 = NULL, *ap_new = NULL; - PyObject *new_args, *tmp; - PyObject *shape1, *shape2, *newshape; - - if (self->core_enabled) { - PyErr_Format(PyExc_TypeError, - "method outer is not allowed in ufunc with non-trivial"\ - " signature"); - return NULL; - } - - if(self->nin != 2) { - PyErr_SetString(PyExc_ValueError, - "outer product only supported "\ - "for binary functions"); - return NULL; - } - - if (PySequence_Length(args) != 2) { - PyErr_SetString(PyExc_TypeError, "exactly two arguments expected"); - return NULL; - } - - tmp = PySequence_GetItem(args, 0); - if (tmp == NULL) { - return NULL; - } - ap1 = (PyArrayObject *) PyArray_FromObject(tmp, PyArray_NOTYPE, 0, 0); - Py_DECREF(tmp); - if (ap1 == NULL) { - return NULL; - } - tmp = PySequence_GetItem(args, 1); - if (tmp == NULL) { - Py_DECREF(ap1); - return NULL; - } - ap2 = (PyArrayObject *)PyArray_FromObject(tmp, PyArray_NOTYPE, 0, 0); - Py_DECREF(tmp); - if (ap2 == NULL) { - Py_DECREF(ap1); - return NULL; - } - /* Construct new shape tuple */ - shape1 = PyTuple_New(ap1->nd); - if (shape1 == NULL) { - goto fail; - } - for (i = 0; i < ap1->nd; i++) { - PyTuple_SET_ITEM(shape1, i, - PyLong_FromLongLong((longlong)ap1->dimensions[i])); - } - shape2 = PyTuple_New(ap2->nd); - if (shape2 == NULL) { - Py_DECREF(shape1); - goto fail; - } - for (i = 0; i < ap2->nd; i++) { - PyTuple_SET_ITEM(shape2, i, PyInt_FromLong((long) 1)); - } - newshape = PyNumber_Add(shape1, shape2); - Py_DECREF(shape1); - Py_DECREF(shape2); - if (newshape == NULL) { - goto fail; - } - ap_new = (PyArrayObject *)PyArray_Reshape(ap1, newshape); - Py_DECREF(newshape); - if (ap_new == NULL) { - goto fail; - } - new_args = Py_BuildValue("(OO)", ap_new, ap2); - Py_DECREF(ap1); - Py_DECREF(ap2); - Py_DECREF(ap_new); - ret = ufunc_generic_call(self, new_args, kwds); - 
Py_DECREF(new_args); - return ret; - - fail: - Py_XDECREF(ap1); - Py_XDECREF(ap2); - Py_XDECREF(ap_new); - return NULL; -} - - -static PyObject * -ufunc_reduce(PyUFuncObject *self, PyObject *args, PyObject *kwds) -{ - return PyUFunc_GenericReduction(self, args, kwds, UFUNC_REDUCE); -} - -static PyObject * -ufunc_accumulate(PyUFuncObject *self, PyObject *args, PyObject *kwds) -{ - return PyUFunc_GenericReduction(self, args, kwds, UFUNC_ACCUMULATE); -} - -static PyObject * -ufunc_reduceat(PyUFuncObject *self, PyObject *args, PyObject *kwds) -{ - return PyUFunc_GenericReduction(self, args, kwds, UFUNC_REDUCEAT); -} - - -static struct PyMethodDef ufunc_methods[] = { - {"reduce", - (PyCFunction)ufunc_reduce, - METH_VARARGS | METH_KEYWORDS, NULL }, - {"accumulate", - (PyCFunction)ufunc_accumulate, - METH_VARARGS | METH_KEYWORDS, NULL }, - {"reduceat", - (PyCFunction)ufunc_reduceat, - METH_VARARGS | METH_KEYWORDS, NULL }, - {"outer", - (PyCFunction)ufunc_outer, - METH_VARARGS | METH_KEYWORDS, NULL}, - {NULL, NULL, 0, NULL} /* sentinel */ -}; - - -/****************************************************************************** - *** UFUNC GETSET *** - *****************************************************************************/ - - -/* construct the string y1,y2,...,yn */ -static PyObject * -_makeargs(int num, char *ltr, int null_if_none) -{ - PyObject *str; - int i; - - switch (num) { - case 0: - if (null_if_none) { - return NULL; - } - return PyString_FromString(""); - case 1: - return PyString_FromString(ltr); - } - str = PyString_FromFormat("%s1, %s2", ltr, ltr); - for (i = 3; i <= num; ++i) { - PyString_ConcatAndDel(&str, PyString_FromFormat(", %s%d", ltr, i)); - } - return str; -} - -static char -_typecharfromnum(int num) { - PyArray_Descr *descr; - char ret; - - descr = PyArray_DescrFromType(num); - ret = descr->type; - Py_DECREF(descr); - return ret; -} - -static PyObject * -ufunc_get_doc(PyUFuncObject *self) -{ - /* - * Put docstring first or FindMethod finds 
it... could do some - * introspection on name and nin + nout to automate the first part - * of it the doc string shouldn't need the calling convention - * construct name(x1, x2, ...,[ out1, out2, ...]) __doc__ - */ - PyObject *outargs, *inargs, *doc; - outargs = _makeargs(self->nout, "out", 1); - inargs = _makeargs(self->nin, "x", 0); - if (outargs == NULL) { - doc = PyUString_FromFormat("%s(%s)\n\n%s", - self->name, - PyString_AS_STRING(inargs), - self->doc); - } - else { - doc = PyUString_FromFormat("%s(%s[, %s])\n\n%s", - self->name, - PyString_AS_STRING(inargs), - PyString_AS_STRING(outargs), - self->doc); - Py_DECREF(outargs); - } - Py_DECREF(inargs); - return doc; -} - -static PyObject * -ufunc_get_nin(PyUFuncObject *self) -{ - return PyInt_FromLong(self->nin); -} - -static PyObject * -ufunc_get_nout(PyUFuncObject *self) -{ - return PyInt_FromLong(self->nout); -} - -static PyObject * -ufunc_get_nargs(PyUFuncObject *self) -{ - return PyInt_FromLong(self->nargs); -} - -static PyObject * -ufunc_get_ntypes(PyUFuncObject *self) -{ - return PyInt_FromLong(self->ntypes); -} - -static PyObject * -ufunc_get_types(PyUFuncObject *self) -{ - /* return a list with types grouped input->output */ - PyObject *list; - PyObject *str; - int k, j, n, nt = self->ntypes; - int ni = self->nin; - int no = self->nout; - char *t; - list = PyList_New(nt); - if (list == NULL) { - return NULL; - } - t = _pya_malloc(no+ni+2); - n = 0; - for (k = 0; k < nt; k++) { - for (j = 0; j < ni; j++) { - t[j] = _typecharfromnum(self->types[n]); - n++; - } - t[ni] = '-'; - t[ni+1] = '>'; - for (j = 0; j < no; j++) { - t[ni + 2 + j] = _typecharfromnum(self->types[n]); - n++; - } - str = PyUString_FromStringAndSize(t, no + ni + 2); - PyList_SET_ITEM(list, k, str); - } - _pya_free(t); - return list; -} - -static PyObject * -ufunc_get_name(PyUFuncObject *self) -{ - return PyUString_FromString(self->name); -} - -static PyObject * -ufunc_get_identity(PyUFuncObject *self) -{ - switch(self->identity) { - case PyUFunc_One: - return PyInt_FromLong(1); - 
case PyUFunc_Zero: - return PyInt_FromLong(0); - } - Py_RETURN_NONE; -} - -static PyObject * -ufunc_get_signature(PyUFuncObject *self) -{ - if (!self->core_enabled) { - Py_RETURN_NONE; - } - return PyUString_FromString(self->core_signature); -} - -#undef _typecharfromnum - -/* - * Docstring is now set from python - * static char *Ufunctype__doc__ = NULL; - */ -static PyGetSetDef ufunc_getset[] = { - {"__doc__", - (getter)ufunc_get_doc, - NULL, NULL, NULL}, - {"nin", - (getter)ufunc_get_nin, - NULL, NULL, NULL}, - {"nout", - (getter)ufunc_get_nout, - NULL, NULL, NULL}, - {"nargs", - (getter)ufunc_get_nargs, - NULL, NULL, NULL}, - {"ntypes", - (getter)ufunc_get_ntypes, - NULL, NULL, NULL}, - {"types", - (getter)ufunc_get_types, - NULL, NULL, NULL}, - {"__name__", - (getter)ufunc_get_name, - NULL, NULL, NULL}, - {"identity", - (getter)ufunc_get_identity, - NULL, NULL, NULL}, - {"signature", - (getter)ufunc_get_signature, - NULL, NULL, NULL}, - {NULL, NULL, NULL, NULL, NULL}, /* Sentinel */ -}; - - -/****************************************************************************** - *** UFUNC TYPE OBJECT *** - *****************************************************************************/ - -NPY_NO_EXPORT PyTypeObject PyUFunc_Type = { -#if defined(NPY_PY3K) - PyVarObject_HEAD_INIT(NULL, 0) -#else - PyObject_HEAD_INIT(NULL) - 0, /* ob_size */ -#endif - "numpy.ufunc", /* tp_name */ - sizeof(PyUFuncObject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)ufunc_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ -#if defined(NPY_PY3K) - 0, /* tp_reserved */ -#else - 0, /* tp_compare */ -#endif - (reprfunc)ufunc_repr, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - (ternaryfunc)ufunc_generic_call, /* tp_call */ - (reprfunc)ufunc_repr, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ - 
0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - ufunc_methods, /* tp_methods */ - 0, /* tp_members */ - ufunc_getset, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* tp_version_tag */ -#endif -}; - -/* End of code for ufunc objects */ diff --git a/pythonPackages/numpy/numpy/core/src/umath/ufunc_object.h b/pythonPackages/numpy/numpy/core/src/umath/ufunc_object.h deleted file mode 100755 index a8886be057..0000000000 --- a/pythonPackages/numpy/numpy/core/src/umath/ufunc_object.h +++ /dev/null @@ -1,10 +0,0 @@ -#ifndef _NPY_UMATH_UFUNC_OBJECT_H_ -#define _NPY_UMATH_UFUNC_OBJECT_H_ - -NPY_NO_EXPORT PyObject * -ufunc_geterr(PyObject *NPY_UNUSED(dummy), PyObject *args); - -NPY_NO_EXPORT PyObject * -ufunc_seterr(PyObject *NPY_UNUSED(dummy), PyObject *args); - -#endif diff --git a/pythonPackages/numpy/numpy/core/src/umath/umath_tests.c.src b/pythonPackages/numpy/numpy/core/src/umath/umath_tests.c.src deleted file mode 100755 index 1fd27a2964..0000000000 --- a/pythonPackages/numpy/numpy/core/src/umath/umath_tests.c.src +++ /dev/null @@ -1,340 +0,0 @@ -/* -*- c -*- */ - -/* - ***************************************************************************** - ** INCLUDES ** - ***************************************************************************** - */ -#include "Python.h" -#include "numpy/arrayobject.h" -#include "numpy/ufuncobject.h" - -#include "numpy/npy_3kcompat.h" - -#include "npy_config.h" - -/* - ***************************************************************************** - ** BASICS ** - 
***************************************************************************** - */ - -typedef npy_intp intp; - -#define INIT_OUTER_LOOP_1 \ - intp dN = *dimensions++; \ - intp N_; \ - intp s0 = *steps++; - -#define INIT_OUTER_LOOP_2 \ - INIT_OUTER_LOOP_1 \ - intp s1 = *steps++; - -#define INIT_OUTER_LOOP_3 \ - INIT_OUTER_LOOP_2 \ - intp s2 = *steps++; - -#define INIT_OUTER_LOOP_4 \ - INIT_OUTER_LOOP_3 \ - intp s3 = *steps++; - -#define BEGIN_OUTER_LOOP_3 \ - for (N_ = 0; N_ < dN; N_++, args[0] += s0, args[1] += s1, args[2] += s2) { - -#define BEGIN_OUTER_LOOP_4 \ - for (N_ = 0; N_ < dN; N_++, args[0] += s0, args[1] += s1, args[2] += s2, args[3] += s3) { - -#define END_OUTER_LOOP } - - -/* - ***************************************************************************** - ** UFUNC LOOPS ** - ***************************************************************************** - */ - -char *inner1d_signature = "(i),(i)->()"; - -/**begin repeat - - #TYPE=LONG,DOUBLE# - #typ=npy_long, npy_double# -*/ - -/* - * This implements the function - * out[n] = sum_i { in1[n, i] * in2[n, i] }. - */ -static void -@TYPE@_inner1d(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - INIT_OUTER_LOOP_3 - intp di = dimensions[0]; - intp i; - intp is1=steps[0], is2=steps[1]; - BEGIN_OUTER_LOOP_3 - char *ip1=args[0], *ip2=args[1], *op=args[2]; - @typ@ sum = 0; - for (i = 0; i < di; i++) { - sum += (*(@typ@ *)ip1) * (*(@typ@ *)ip2); - ip1 += is1; - ip2 += is2; - } - *(@typ@ *)op = sum; - END_OUTER_LOOP -} - -/**end repeat**/ - -char *innerwt_signature = "(i),(i),(i)->()"; - -/**begin repeat - - #TYPE=LONG,DOUBLE# - #typ=npy_long, npy_double# -*/ - - -/* - * This implements the function - * out[n] = sum_i { in1[n, i] * in2[n, i] * in3[n, i] }. 
- */ - -static void -@TYPE@_innerwt(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - INIT_OUTER_LOOP_4 - intp di = dimensions[0]; - intp i; - intp is1=steps[0], is2=steps[1], is3=steps[2]; - BEGIN_OUTER_LOOP_4 - char *ip1=args[0], *ip2=args[1], *ip3=args[2], *op=args[3]; - @typ@ sum = 0; - for (i = 0; i < di; i++) { - sum += (*(@typ@ *)ip1) * (*(@typ@ *)ip2) * (*(@typ@ *)ip3); - ip1 += is1; - ip2 += is2; - ip3 += is3; - } - *(@typ@ *)op = sum; - END_OUTER_LOOP -} - -/**end repeat**/ - -char *matrix_multiply_signature = "(m,n),(n,p)->(m,p)"; - -/**begin repeat - - #TYPE=FLOAT,DOUBLE,LONG# - #typ=npy_float, npy_double,npy_long# -*/ - -/* - * This implements the function - * out[k, m, p] = sum_n { in1[k, m, n] * in2[k, n, p] }. - */ - - -static void -@TYPE@_matrix_multiply(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func)) -{ - /* no BLAS is available */ - INIT_OUTER_LOOP_3 - intp dm = dimensions[0]; - intp dn = dimensions[1]; - intp dp = dimensions[2]; - intp m,n,p; - intp is1_m=steps[0], is1_n=steps[1], is2_n=steps[2], is2_p=steps[3], - os_m=steps[4], os_p=steps[5]; - intp ib1_n = is1_n*dn; - intp ib2_n = is2_n*dn; - intp ib2_p = is2_p*dp; - intp ob_p = os_p *dp; - BEGIN_OUTER_LOOP_3 - char *ip1=args[0], *ip2=args[1], *op=args[2]; - for (m = 0; m < dm; m++) { - for (n = 0; n < dn; n++) { - @typ@ val1 = (*(@typ@ *)ip1); - for (p = 0; p < dp; p++) { - if (n == 0) *(@typ@ *)op = 0; - *(@typ@ *)op += val1 * (*(@typ@ *)ip2); - ip2 += is2_p; - op += os_p; - } - ip2 -= ib2_p; - op -= ob_p; - ip1 += is1_n; - ip2 += is2_n; - } - ip1 -= ib1_n; - ip2 -= ib2_n; - ip1 += is1_m; - op += os_m; - } - END_OUTER_LOOP -} - -/**end repeat**/ - -/* The following lines were generated using a slightly modified - version of code_generators/generate_umath.py and adding these - lines to defdict: - -defdict = { -'inner1d' : - Ufunc(2, 1, None_, - r'''inner on the last dimension and broadcast on the rest \n" - " \"(i),(i)->()\" \n''', - TD('ld'), - ), 
-'innerwt' : - Ufunc(3, 1, None_, - r'''inner1d with a weight argument \n" - " \"(i),(i),(i)->()\" \n''', - TD('ld'), - ), -} - -*/ - -static PyUFuncGenericFunction inner1d_functions[] = { LONG_inner1d, DOUBLE_inner1d }; -static void * inner1d_data[] = { (void *)NULL, (void *)NULL }; -static char inner1d_signatures[] = { PyArray_LONG, PyArray_LONG, PyArray_LONG, PyArray_DOUBLE, PyArray_DOUBLE, PyArray_DOUBLE }; -static PyUFuncGenericFunction innerwt_functions[] = { LONG_innerwt, DOUBLE_innerwt }; -static void * innerwt_data[] = { (void *)NULL, (void *)NULL }; -static char innerwt_signatures[] = { PyArray_LONG, PyArray_LONG, PyArray_LONG, PyArray_LONG, PyArray_DOUBLE, PyArray_DOUBLE, PyArray_DOUBLE, PyArray_DOUBLE }; -static PyUFuncGenericFunction matrix_multiply_functions[] = { LONG_matrix_multiply, FLOAT_matrix_multiply, DOUBLE_matrix_multiply }; -static void *matrix_multiply_data[] = { (void *)NULL, (void *)NULL, (void *)NULL }; -static char matrix_multiply_signatures[] = { PyArray_LONG, PyArray_LONG, PyArray_LONG, PyArray_FLOAT, PyArray_FLOAT, PyArray_FLOAT, PyArray_DOUBLE, PyArray_DOUBLE, PyArray_DOUBLE }; - -static void -addUfuncs(PyObject *dictionary) { - PyObject *f; - - f = PyUFunc_FromFuncAndDataAndSignature(inner1d_functions, inner1d_data, inner1d_signatures, 2, - 2, 1, PyUFunc_None, "inner1d", - "inner on the last dimension and broadcast on the rest \n"\ - " \"(i),(i)->()\" \n", - 0, inner1d_signature); - PyDict_SetItemString(dictionary, "inner1d", f); - Py_DECREF(f); - f = PyUFunc_FromFuncAndDataAndSignature(innerwt_functions, innerwt_data, innerwt_signatures, 2, - 3, 1, PyUFunc_None, "innerwt", - "inner1d with a weight argument \n"\ - " \"(i),(i),(i)->()\" \n", - 0, innerwt_signature); - PyDict_SetItemString(dictionary, "innerwt", f); - Py_DECREF(f); - f = PyUFunc_FromFuncAndDataAndSignature(matrix_multiply_functions, - matrix_multiply_data, matrix_multiply_signatures, - 3, 2, 1, PyUFunc_None, "matrix_multiply", - "matrix multiplication on last two 
dimensions \n"\ - " \"(m,n),(n,p)->(m,p)\" \n", - 0, matrix_multiply_signature); - PyDict_SetItemString(dictionary, "matrix_multiply", f); - Py_DECREF(f); -} - -/* - End of auto-generated code. -*/ - - - -static PyObject * -UMath_Tests_test_signature(PyObject *NPY_UNUSED(dummy), PyObject *args) -{ - int nin, nout; - PyObject *signature, *sig_str; - PyObject *f; - int core_enabled; - - if (!PyArg_ParseTuple(args, "iiO", &nin, &nout, &signature)) return NULL; - - - if (PyString_Check(signature)) { - sig_str = signature; - } else if (PyUnicode_Check(signature)) { - sig_str = PyUnicode_AsUTF8String(signature); - } else { - PyErr_SetString(PyExc_ValueError, "signature should be a string"); - return NULL; - } - - f = PyUFunc_FromFuncAndDataAndSignature(NULL, NULL, NULL, - 0, nin, nout, PyUFunc_None, "no name", - "doc:none", - 1, PyString_AS_STRING(sig_str)); - if (sig_str != signature) { - Py_DECREF(sig_str); - } - if (f == NULL) return NULL; - core_enabled = ((PyUFuncObject*)f)->core_enabled; - return Py_BuildValue("i", core_enabled); -} - -static PyMethodDef UMath_TestsMethods[] = { - {"test_signature", UMath_Tests_test_signature, METH_VARARGS, - "Test signature parsing of ufunc. \n" - "Arguments: nin nout signature \n" - "If it fails, it returns NULL. Otherwise it returns 0 for a scalar ufunc " - "and 1 for a generalized ufunc. 
\n", - }, - {NULL, NULL, 0, NULL} /* Sentinel */ -}; - -#if defined(NPY_PY3K) -static struct PyModuleDef moduledef = { - PyModuleDef_HEAD_INIT, - "umath_tests", - NULL, - -1, - UMath_TestsMethods, - NULL, - NULL, - NULL, - NULL -}; -#endif - -#if defined(NPY_PY3K) -#define RETVAL m -PyObject *PyInit_umath_tests(void) -#else -#define RETVAL -PyMODINIT_FUNC -initumath_tests(void) -#endif -{ - PyObject *m; - PyObject *d; - PyObject *version; - -#if defined(NPY_PY3K) - m = PyModule_Create(&moduledef); -#else - m = Py_InitModule("umath_tests", UMath_TestsMethods); -#endif - if (m == NULL) - return RETVAL; - - import_array(); - import_ufunc(); - - d = PyModule_GetDict(m); - - version = PyString_FromString("0.1"); - PyDict_SetItemString(d, "__version__", version); - Py_DECREF(version); - - /* Load the ufunc operators into the module's namespace */ - addUfuncs(d); - - if (PyErr_Occurred()) { - PyErr_SetString(PyExc_RuntimeError, - "cannot load umath_tests module."); - } - - return RETVAL; -} diff --git a/pythonPackages/numpy/numpy/core/src/umath/umathmodule.c.src b/pythonPackages/numpy/numpy/core/src/umath/umathmodule.c.src deleted file mode 100755 index 9694f521d6..0000000000 --- a/pythonPackages/numpy/numpy/core/src/umath/umathmodule.c.src +++ /dev/null @@ -1,385 +0,0 @@ -/* -*- c -*- */ - -/* - * vim:syntax=c - */ - -/* - ***************************************************************************** - ** INCLUDES ** - ***************************************************************************** - */ - -/* - * _UMATHMODULE IS needed in __ufunc_api.h, included from numpy/ufuncobject.h. - * This is a mess and it would be nice to fix it. 
It has nothing to do with - * __ufunc_api.c - */ -#define _UMATHMODULE - -#include "Python.h" - -#include "npy_config.h" -#ifdef ENABLE_SEPARATE_COMPILATION -#define PY_ARRAY_UNIQUE_SYMBOL _npy_umathmodule_ARRAY_API -#endif - -#include "numpy/noprefix.h" -#include "numpy/ufuncobject.h" -#include "abstract.h" - -#include "numpy/npy_math.h" - -/* - ***************************************************************************** - ** INCLUDE GENERATED CODE ** - ***************************************************************************** - */ -#include "funcs.inc" -#include "loops.h" -#include "ufunc_object.h" -#include "__umath_generated.c" -#include "__ufunc_api.c" - -static PyUFuncGenericFunction pyfunc_functions[] = {PyUFunc_On_Om}; - -static PyObject * -ufunc_frompyfunc(PyObject *NPY_UNUSED(dummy), PyObject *args, PyObject *NPY_UNUSED(kwds)) { - /* Keywords are ignored for now */ - - PyObject *function, *pyname = NULL; - int nin, nout, i; - PyUFunc_PyFuncData *fdata; - PyUFuncObject *self; - char *fname, *str; - Py_ssize_t fname_len = -1; - int offset[2]; - - if (!PyArg_ParseTuple(args, "Oii", &function, &nin, &nout)) { - return NULL; - } - if (!PyCallable_Check(function)) { - PyErr_SetString(PyExc_TypeError, "function must be callable"); - return NULL; - } - self = _pya_malloc(sizeof(PyUFuncObject)); - if (self == NULL) { - return NULL; - } - PyObject_Init((PyObject *)self, &PyUFunc_Type); - - self->userloops = NULL; - self->nin = nin; - self->nout = nout; - self->nargs = nin + nout; - self->identity = PyUFunc_None; - self->functions = pyfunc_functions; - self->ntypes = 1; - self->check_return = 0; - - /* generalized ufunc */ - self->core_enabled = 0; - self->core_num_dim_ix = 0; - self->core_num_dims = NULL; - self->core_dim_ixs = NULL; - self->core_offsets = NULL; - self->core_signature = NULL; - - pyname = PyObject_GetAttrString(function, "__name__"); - if (pyname) { - (void) PyString_AsStringAndSize(pyname, &fname, &fname_len); - } - if (PyErr_Occurred()) { - 
fname = "?"; - fname_len = 1; - PyErr_Clear(); - } - Py_XDECREF(pyname); - - /* - * self->ptr holds a pointer for enough memory for - * self->data[0] (fdata) - * self->data - * self->name - * self->types - * - * To be safest, all of these need their memory aligned on void * pointers - * Therefore, we may need to allocate extra space. - */ - offset[0] = sizeof(PyUFunc_PyFuncData); - i = (sizeof(PyUFunc_PyFuncData) % sizeof(void *)); - if (i) { - offset[0] += (sizeof(void *) - i); - } - offset[1] = self->nargs; - i = (self->nargs % sizeof(void *)); - if (i) { - offset[1] += (sizeof(void *)-i); - } - self->ptr = _pya_malloc(offset[0] + offset[1] + sizeof(void *) + - (fname_len + 14)); - if (self->ptr == NULL) { - return PyErr_NoMemory(); - } - Py_INCREF(function); - self->obj = function; - fdata = (PyUFunc_PyFuncData *)(self->ptr); - fdata->nin = nin; - fdata->nout = nout; - fdata->callable = function; - - self->data = (void **)(((char *)self->ptr) + offset[0]); - self->data[0] = (void *)fdata; - self->types = (char *)self->data + sizeof(void *); - for (i = 0; i < self->nargs; i++) { - self->types[i] = PyArray_OBJECT; - } - str = self->types + offset[1]; - memcpy(str, fname, fname_len); - memcpy(str+fname_len, " (vectorized)", 14); - self->name = str; - - /* Do a better job someday */ - self->doc = "dynamic ufunc based on a python function"; - - return (PyObject *)self; -} - -/* - ***************************************************************************** - ** SETUP UFUNCS ** - ***************************************************************************** - */ - -/* Less automated additions to the ufuncs */ - -static PyUFuncGenericFunction frexp_functions[] = { -#ifdef HAVE_FREXPF - FLOAT_frexp, -#endif - DOUBLE_frexp -#ifdef HAVE_FREXPL - ,LONGDOUBLE_frexp -#endif -}; - -static void * blank3_data[] = { (void *)NULL, (void *)NULL, (void *)NULL}; -static char frexp_signatures[] = { -#ifdef HAVE_FREXPF - PyArray_FLOAT, PyArray_FLOAT, PyArray_INT, -#endif - 
PyArray_DOUBLE, PyArray_DOUBLE, PyArray_INT -#ifdef HAVE_FREXPL - ,PyArray_LONGDOUBLE, PyArray_LONGDOUBLE, PyArray_INT -#endif -}; - -static PyUFuncGenericFunction ldexp_functions[] = { -#ifdef HAVE_LDEXPF - FLOAT_ldexp, -#endif - DOUBLE_ldexp -#ifdef HAVE_LDEXPL - ,LONGDOUBLE_ldexp -#endif -}; - -static char ldexp_signatures[] = { -#ifdef HAVE_LDEXPF - PyArray_FLOAT, PyArray_INT, PyArray_FLOAT, -#endif - PyArray_DOUBLE, PyArray_LONG, PyArray_DOUBLE -#ifdef HAVE_LDEXPL - ,PyArray_LONGDOUBLE, PyArray_LONG, PyArray_LONGDOUBLE -#endif -}; - -static void -InitOtherOperators(PyObject *dictionary) { - PyObject *f; - int num=1; - -#ifdef HAVE_FREXPL - num += 1; -#endif -#ifdef HAVE_FREXPF - num += 1; -#endif - f = PyUFunc_FromFuncAndData(frexp_functions, blank3_data, - frexp_signatures, num, - 1, 2, PyUFunc_None, "frexp", - "Split the number, x, into a normalized"\ - " fraction (y1) and exponent (y2)",0); - PyDict_SetItemString(dictionary, "frexp", f); - Py_DECREF(f); - - num = 1; -#ifdef HAVE_LDEXPL - num += 1; -#endif -#ifdef HAVE_LDEXPF - num += 1; -#endif - f = PyUFunc_FromFuncAndData(ldexp_functions, blank3_data, ldexp_signatures, num, - 2, 1, PyUFunc_None, "ldexp", - "Compute y = x1 * 2**x2.",0); - PyDict_SetItemString(dictionary, "ldexp", f); - Py_DECREF(f); - -#if defined(NPY_PY3K) - f = PyDict_GetItemString(dictionary, "true_divide"); - PyDict_SetItemString(dictionary, "divide", f); -#endif - return; -} - -/* Setup the umath module */ -/* Remove for time being, it is declared in __ufunc_api.h */ -/*static PyTypeObject PyUFunc_Type;*/ - -static struct PyMethodDef methods[] = { - {"frompyfunc", (PyCFunction) ufunc_frompyfunc, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"seterrobj", (PyCFunction) ufunc_seterr, - METH_VARARGS, NULL}, - {"geterrobj", (PyCFunction) ufunc_geterr, - METH_VARARGS, NULL}, - {NULL, NULL, 0, NULL} /* sentinel */ -}; - - -#if defined(NPY_PY3K) -static struct PyModuleDef moduledef = { - PyModuleDef_HEAD_INIT, - "umath", - NULL, - -1, - methods, 
- NULL, - NULL, - NULL, - NULL -}; -#endif - -#include <math.h> - -#if defined(NPY_PY3K) -#define RETVAL m -PyObject *PyInit_umath(void) -#else -#define RETVAL -PyMODINIT_FUNC initumath(void) -#endif -{ - PyObject *m, *d, *s, *s2, *c_api; - int UFUNC_FLOATING_POINT_SUPPORT = 1; - -#ifdef NO_UFUNC_FLOATING_POINT_SUPPORT - UFUNC_FLOATING_POINT_SUPPORT = 0; -#endif - /* Create the module and add the functions */ -#if defined(NPY_PY3K) - m = PyModule_Create(&moduledef); -#else - m = Py_InitModule("umath", methods); -#endif - if (!m) { - return RETVAL; - } - - /* Import the array */ - if (_import_array() < 0) { - if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, - "umath failed: Could not import array core."); - } - return RETVAL; - } - - /* Initialize the types */ - if (PyType_Ready(&PyUFunc_Type) < 0) - return RETVAL; - - /* Add some symbolic constants to the module */ - d = PyModule_GetDict(m); - - c_api = NpyCapsule_FromVoidPtr((void *)PyUFunc_API, NULL); - if (PyErr_Occurred()) { - goto err; - } - PyDict_SetItemString(d, "_UFUNC_API", c_api); - Py_DECREF(c_api); - if (PyErr_Occurred()) { - goto err; - } - - s = PyString_FromString("0.4.0"); - PyDict_SetItemString(d, "__version__", s); - Py_DECREF(s); - - /* Load the ufunc operators into the array module's namespace */ - InitOperators(d); - - InitOtherOperators(d); - - PyDict_SetItemString(d, "pi", s = PyFloat_FromDouble(NPY_PI)); - Py_DECREF(s); - PyDict_SetItemString(d, "e", s = PyFloat_FromDouble(exp(1.0))); - Py_DECREF(s); - -#define ADDCONST(str) PyModule_AddIntConstant(m, #str, UFUNC_##str) -#define ADDSCONST(str) PyModule_AddStringConstant(m, "UFUNC_" #str, UFUNC_##str) - - ADDCONST(ERR_IGNORE); - ADDCONST(ERR_WARN); - ADDCONST(ERR_CALL); - ADDCONST(ERR_RAISE); - ADDCONST(ERR_PRINT); - ADDCONST(ERR_LOG); - ADDCONST(ERR_DEFAULT); - ADDCONST(ERR_DEFAULT2); - - ADDCONST(SHIFT_DIVIDEBYZERO); - ADDCONST(SHIFT_OVERFLOW); - ADDCONST(SHIFT_UNDERFLOW); - ADDCONST(SHIFT_INVALID); - - ADDCONST(FPE_DIVIDEBYZERO); - 
ADDCONST(FPE_OVERFLOW); - ADDCONST(FPE_UNDERFLOW); - ADDCONST(FPE_INVALID); - - ADDCONST(FLOATING_POINT_SUPPORT); - - ADDSCONST(PYVALS_NAME); - -#undef ADDCONST -#undef ADDSCONST - PyModule_AddIntConstant(m, "UFUNC_BUFSIZE_DEFAULT", (long)PyArray_BUFSIZE); - - PyModule_AddObject(m, "PINF", PyFloat_FromDouble(NPY_INFINITY)); - PyModule_AddObject(m, "NINF", PyFloat_FromDouble(-NPY_INFINITY)); - PyModule_AddObject(m, "PZERO", PyFloat_FromDouble(NPY_PZERO)); - PyModule_AddObject(m, "NZERO", PyFloat_FromDouble(NPY_NZERO)); - PyModule_AddObject(m, "NAN", PyFloat_FromDouble(NPY_NAN)); - - s = PyDict_GetItemString(d, "conjugate"); - s2 = PyDict_GetItemString(d, "remainder"); - /* Setup the array object's numerical structures with appropriate - ufuncs in d*/ - PyArray_SetNumericOps(d); - - PyDict_SetItemString(d, "conj", s); - PyDict_SetItemString(d, "mod", s2); - - return RETVAL; - - err: - /* Check for errors */ - if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_RuntimeError, - "cannot load umath module."); - } - return RETVAL; -} diff --git a/pythonPackages/numpy/numpy/core/src/umath/umathmodule_onefile.c b/pythonPackages/numpy/numpy/core/src/umath/umathmodule_onefile.c deleted file mode 100755 index 722f74eec5..0000000000 --- a/pythonPackages/numpy/numpy/core/src/umath/umathmodule_onefile.c +++ /dev/null @@ -1,4 +0,0 @@ -#include "loops.c" - -#include "ufunc_object.c" -#include "umathmodule.c" diff --git a/pythonPackages/numpy/numpy/core/tests/data/astype_copy.pkl b/pythonPackages/numpy/numpy/core/tests/data/astype_copy.pkl deleted file mode 100755 index 7397c97829..0000000000 Binary files a/pythonPackages/numpy/numpy/core/tests/data/astype_copy.pkl and /dev/null differ diff --git a/pythonPackages/numpy/numpy/core/tests/data/recarray_from_file.fits b/pythonPackages/numpy/numpy/core/tests/data/recarray_from_file.fits deleted file mode 100755 index ca48ee8515..0000000000 Binary files a/pythonPackages/numpy/numpy/core/tests/data/recarray_from_file.fits and /dev/null differ 
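The `@TYPE@_inner1d` loops deleted from `umath_tests.c.src` above implement `out[n] = sum_i { in1[n, i] * in2[n, i] }` over the core signature `"(i),(i)->()"`. A minimal pure-Python sketch of that contract (plain lists, no NumPy; `inner1d` here is an illustrative stand-in for the compiled loop, not the real ufunc):

```python
def inner1d(in1, in2):
    """Model of the "(i),(i)->()" gufunc loop: an inner product over the
    last (core) dimension, looping over the leading (broadcast) dimension.
    out[n] = sum_i in1[n][i] * in2[n][i]
    """
    return [sum(a * b for a, b in zip(row1, row2))
            for row1, row2 in zip(in1, in2)]

print(inner1d([[1, 2, 3], [4, 5, 6]], [[1, 1, 1], [2, 0, 1]]))  # [6, 14]
```

The C version does the same two nested loops explicitly: the `BEGIN_OUTER_LOOP_3` macro advances the three argument pointers by their outer strides, and the inner `for` walks the core dimension `di` using the inner strides `is1`/`is2`.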
diff --git a/pythonPackages/numpy/numpy/core/tests/test_arrayprint.py b/pythonPackages/numpy/numpy/core/tests/test_arrayprint.py deleted file mode 100755 index 954869727e..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_arrayprint.py +++ /dev/null @@ -1,10 +0,0 @@ -import numpy as np -from numpy.testing import * - -class TestArrayRepr(object): - def test_nan_inf(self): - x = np.array([np.nan, np.inf]) - assert_equal(repr(x), 'array([ nan, inf])') - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/core/tests/test_blasdot.py b/pythonPackages/numpy/numpy/core/tests/test_blasdot.py deleted file mode 100755 index aeaf55fbbf..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_blasdot.py +++ /dev/null @@ -1,14 +0,0 @@ -from numpy.core import zeros, float64 -from numpy.testing import TestCase, assert_almost_equal -from numpy.core.multiarray import inner as inner_ - -DECPREC = 14 - -class TestInner(TestCase): - def test_vecself(self): - """Ticket 844.""" - # Inner product of a vector with itself segfaults or give meaningless - # result - a = zeros(shape = (1, 80), dtype = float64) - p = inner_(a, a) - assert_almost_equal(p, 0, decimal = DECPREC) diff --git a/pythonPackages/numpy/numpy/core/tests/test_defchararray.py b/pythonPackages/numpy/numpy/core/tests/test_defchararray.py deleted file mode 100755 index d4df9ae6fd..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_defchararray.py +++ /dev/null @@ -1,635 +0,0 @@ -from numpy.testing import * -from numpy.core import * -import numpy as np -import sys -from numpy.core.multiarray import _vec_string - -from numpy.compat import asbytes, asbytes_nested - -kw_unicode_true = {'unicode': True} # make 2to3 work properly -kw_unicode_false = {'unicode': False} - -class TestBasic(TestCase): - def test_from_object_array(self): - A = np.array([['abc', 2], - ['long ', '0123456789']], dtype='O') - B = np.char.array(A) - assert_equal(B.dtype.itemsize, 10) - 
assert_array_equal(B, asbytes_nested([['abc', '2'], - ['long', '0123456789']])) - - def test_from_object_array_unicode(self): - A = np.array([['abc', u'Sigma \u03a3'], - ['long ', '0123456789']], dtype='O') - self.assertRaises(ValueError, np.char.array, (A,)) - B = np.char.array(A, **kw_unicode_true) - assert_equal(B.dtype.itemsize, 10 * np.array('a', 'U').dtype.itemsize) - assert_array_equal(B, [['abc', u'Sigma \u03a3'], - ['long', '0123456789']]) - - def test_from_string_array(self): - A = np.array(asbytes_nested([['abc', 'foo'], - ['long ', '0123456789']])) - assert_equal(A.dtype.type, np.string_) - B = np.char.array(A) - assert_array_equal(B, A) - assert_equal(B.dtype, A.dtype) - assert_equal(B.shape, A.shape) - B[0,0] = 'changed' - assert B[0,0] != A[0,0] - C = np.char.asarray(A) - assert_array_equal(C, A) - assert_equal(C.dtype, A.dtype) - C[0,0] = 'changed again' - assert C[0,0] != B[0,0] - assert C[0,0] == A[0,0] - - def test_from_unicode_array(self): - A = np.array([['abc', u'Sigma \u03a3'], - ['long ', '0123456789']]) - assert_equal(A.dtype.type, np.unicode_) - B = np.char.array(A) - assert_array_equal(B, A) - assert_equal(B.dtype, A.dtype) - assert_equal(B.shape, A.shape) - B = np.char.array(A, **kw_unicode_true) - assert_array_equal(B, A) - assert_equal(B.dtype, A.dtype) - assert_equal(B.shape, A.shape) - def fail(): - B = np.char.array(A, **kw_unicode_false) - self.assertRaises(UnicodeEncodeError, fail) - - def test_unicode_upconvert(self): - A = np.char.array(['abc']) - B = np.char.array([u'\u03a3']) - assert issubclass((A + B).dtype.type, np.unicode_) - - def test_from_string(self): - A = np.char.array(asbytes('abc')) - assert_equal(len(A), 1) - assert_equal(len(A[0]), 3) - assert issubclass(A.dtype.type, np.string_) - - def test_from_unicode(self): - A = np.char.array(u'\u03a3') - assert_equal(len(A), 1) - assert_equal(len(A[0]), 1) - assert_equal(A.itemsize, 4) - assert issubclass(A.dtype.type, np.unicode_) - -class TestVecString(TestCase): - def 
test_non_existent_method(self): - def fail(): - _vec_string('a', np.string_, 'bogus') - self.assertRaises(AttributeError, fail) - - def test_non_string_array(self): - def fail(): - _vec_string(1, np.string_, 'strip') - self.assertRaises(TypeError, fail) - - def test_invalid_args_tuple(self): - def fail(): - _vec_string(['a'], np.string_, 'strip', 1) - self.assertRaises(TypeError, fail) - - def test_invalid_type_descr(self): - def fail(): - _vec_string(['a'], 'BOGUS', 'strip') - self.assertRaises(TypeError, fail) - - def test_invalid_function_args(self): - def fail(): - _vec_string(['a'], np.string_, 'strip', (1,)) - self.assertRaises(TypeError, fail) - - def test_invalid_result_type(self): - def fail(): - _vec_string(['a'], np.integer, 'strip') - self.assertRaises(TypeError, fail) - - def test_broadcast_error(self): - def fail(): - _vec_string([['abc', 'def']], np.integer, 'find', (['a', 'd', 'j'],)) - self.assertRaises(ValueError, fail) - - -class TestWhitespace(TestCase): - def setUp(self): - self.A = np.array([['abc ', '123 '], - ['789 ', 'xyz ']]).view(np.chararray) - self.B = np.array([['abc', '123'], - ['789', 'xyz']]).view(np.chararray) - - def test1(self): - assert all(self.A == self.B) - assert all(self.A >= self.B) - assert all(self.A <= self.B) - assert all(negative(self.A > self.B)) - assert all(negative(self.A < self.B)) - assert all(negative(self.A != self.B)) - -class TestChar(TestCase): - def setUp(self): - self.A = np.array('abc1', dtype='c').view(np.chararray) - - def test_it(self): - assert_equal(self.A.shape, (4,)) - assert_equal(self.A.upper()[:2].tostring(), asbytes('AB')) - -class TestComparisons(TestCase): - def setUp(self): - self.A = np.array([['abc', '123'], - ['789', 'xyz']]).view(np.chararray) - self.B = np.array([['efg', '123 '], - ['051', 'tuv']]).view(np.chararray) - - def test_not_equal(self): - assert_array_equal((self.A != self.B), [[True, False], [True, True]]) - - def test_equal(self): - assert_array_equal((self.A == self.B), 
[[False, True], [False, False]]) - - def test_greater_equal(self): - assert_array_equal((self.A >= self.B), [[False, True], [True, True]]) - - def test_less_equal(self): - assert_array_equal((self.A <= self.B), [[True, True], [False, False]]) - - def test_greater(self): - assert_array_equal((self.A > self.B), [[False, False], [True, True]]) - - def test_less(self): - assert_array_equal((self.A < self.B), [[True, False], [False, False]]) - -class TestComparisonsMixed1(TestComparisons): - """Ticket #1276""" - - def setUp(self): - TestComparisons.setUp(self) - self.B = np.array([['efg', '123 '], - ['051', 'tuv']], np.unicode_).view(np.chararray) - -class TestComparisonsMixed2(TestComparisons): - """Ticket #1276""" - - def setUp(self): - TestComparisons.setUp(self) - self.A = np.array([['abc', '123'], - ['789', 'xyz']], np.unicode_).view(np.chararray) - -class TestInformation(TestCase): - def setUp(self): - self.A = np.array([[' abc ', ''], - ['12345', 'MixedCase'], - ['123 \t 345 \0 ', 'UPPER']]).view(np.chararray) - self.B = np.array([[u' \u03a3 ', u''], - [u'12345', u'MixedCase'], - [u'123 \t 345 \0 ', u'UPPER']]).view(np.chararray) - - def test_len(self): - assert issubclass(np.char.str_len(self.A).dtype.type, np.integer) - assert_array_equal(np.char.str_len(self.A), [[5, 0], [5, 9], [12, 5]]) - assert_array_equal(np.char.str_len(self.B), [[3, 0], [5, 9], [12, 5]]) - - def test_count(self): - assert issubclass(self.A.count('').dtype.type, np.integer) - assert_array_equal(self.A.count('a'), [[1, 0], [0, 1], [0, 0]]) - assert_array_equal(self.A.count('123'), [[0, 0], [1, 0], [1, 0]]) - # Python doesn't seem to like counting NULL characters - # assert_array_equal(self.A.count('\0'), [[0, 0], [0, 0], [1, 0]]) - assert_array_equal(self.A.count('a', 0, 2), [[1, 0], [0, 0], [0, 0]]) - assert_array_equal(self.B.count('a'), [[0, 0], [0, 1], [0, 0]]) - assert_array_equal(self.B.count('123'), [[0, 0], [1, 0], [1, 0]]) - # assert_array_equal(self.B.count('\0'), [[0, 0], [0, 
0], [1, 0]]) - - def test_endswith(self): - assert issubclass(self.A.endswith('').dtype.type, np.bool_) - assert_array_equal(self.A.endswith(' '), [[1, 0], [0, 0], [1, 0]]) - assert_array_equal(self.A.endswith('3', 0, 3), [[0, 0], [1, 0], [1, 0]]) - def fail(): - self.A.endswith('3', 'fdjk') - self.assertRaises(TypeError, fail) - - def test_find(self): - assert issubclass(self.A.find('a').dtype.type, np.integer) - assert_array_equal(self.A.find('a'), [[1, -1], [-1, 6], [-1, -1]]) - assert_array_equal(self.A.find('3'), [[-1, -1], [2, -1], [2, -1]]) - assert_array_equal(self.A.find('a', 0, 2), [[1, -1], [-1, -1], [-1, -1]]) - assert_array_equal(self.A.find(['1', 'P']), [[-1, -1], [0, -1], [0, 1]]) - - def test_index(self): - def fail(): - self.A.index('a') - self.assertRaises(ValueError, fail) - assert np.char.index('abcba', 'b') == 1 - assert issubclass(np.char.index('abcba', 'b').dtype.type, np.integer) - - def test_isalnum(self): - assert issubclass(self.A.isalnum().dtype.type, np.bool_) - assert_array_equal(self.A.isalnum(), [[False, False], [True, True], [False, True]]) - - def test_isalpha(self): - assert issubclass(self.A.isalpha().dtype.type, np.bool_) - assert_array_equal(self.A.isalpha(), [[False, False], [False, True], [False, True]]) - - def test_isdigit(self): - assert issubclass(self.A.isdigit().dtype.type, np.bool_) - assert_array_equal(self.A.isdigit(), [[False, False], [True, False], [False, False]]) - - def test_islower(self): - assert issubclass(self.A.islower().dtype.type, np.bool_) - assert_array_equal(self.A.islower(), [[True, False], [False, False], [False, False]]) - - def test_isspace(self): - assert issubclass(self.A.isspace().dtype.type, np.bool_) - assert_array_equal(self.A.isspace(), [[False, False], [False, False], [False, False]]) - - def test_istitle(self): - assert issubclass(self.A.istitle().dtype.type, np.bool_) - assert_array_equal(self.A.istitle(), [[False, False], [False, False], [False, False]]) - - def test_isupper(self): - 
assert issubclass(self.A.isupper().dtype.type, np.bool_) - assert_array_equal(self.A.isupper(), [[False, False], [False, False], [False, True]]) - - def test_rfind(self): - assert issubclass(self.A.rfind('a').dtype.type, np.integer) - assert_array_equal(self.A.rfind('a'), [[1, -1], [-1, 6], [-1, -1]]) - assert_array_equal(self.A.rfind('3'), [[-1, -1], [2, -1], [6, -1]]) - assert_array_equal(self.A.rfind('a', 0, 2), [[1, -1], [-1, -1], [-1, -1]]) - assert_array_equal(self.A.rfind(['1', 'P']), [[-1, -1], [0, -1], [0, 2]]) - - def test_rindex(self): - def fail(): - self.A.rindex('a') - self.assertRaises(ValueError, fail) - assert np.char.rindex('abcba', 'b') == 3 - assert issubclass(np.char.rindex('abcba', 'b').dtype.type, np.integer) - - def test_startswith(self): - assert issubclass(self.A.startswith('').dtype.type, np.bool_) - assert_array_equal(self.A.startswith(' '), [[1, 0], [0, 0], [0, 0]]) - assert_array_equal(self.A.startswith('1', 0, 3), [[0, 0], [1, 0], [1, 0]]) - def fail(): - self.A.startswith('3', 'fdjk') - self.assertRaises(TypeError, fail) - - -class TestMethods(TestCase): - def setUp(self): - self.A = np.array([[' abc ', ''], - ['12345', 'MixedCase'], - ['123 \t 345 \0 ', 'UPPER']], - dtype='S').view(np.chararray) - self.B = np.array([[u' \u03a3 ', u''], - [u'12345', u'MixedCase'], - [u'123 \t 345 \0 ', u'UPPER']]).view(np.chararray) - - def test_capitalize(self): - assert issubclass(self.A.capitalize().dtype.type, np.string_) - assert_array_equal(self.A.capitalize(), asbytes_nested([ - [' abc ', ''], - ['12345', 'Mixedcase'], - ['123 \t 345 \0 ', 'Upper']])) - assert issubclass(self.B.capitalize().dtype.type, np.unicode_) - assert_array_equal(self.B.capitalize(), [ - [u' \u03c3 ', ''], - ['12345', 'Mixedcase'], - ['123 \t 345 \0 ', 'Upper']]) - - def test_center(self): - assert issubclass(self.A.center(10).dtype.type, np.string_) - widths = np.array([[10, 20]]) - C = self.A.center([10, 20]) - assert_array_equal(np.char.str_len(C), [[10, 20], [10, 
20], [12, 20]]) - C = self.A.center(20, asbytes('#')) - assert np.all(C.startswith(asbytes('#'))) - assert np.all(C.endswith(asbytes('#'))) - C = np.char.center(asbytes('FOO'), [[10, 20], [15, 8]]) - assert issubclass(C.dtype.type, np.string_) - assert_array_equal(C, asbytes_nested([ - [' FOO ', ' FOO '], - [' FOO ', ' FOO ']])) - - def test_decode(self): - if sys.version_info[0] >= 3: - A = np.char.array([asbytes('\\u03a3')]) - assert A.decode('unicode-escape')[0] == '\u03a3' - else: - A = np.char.array(['736563726574206d657373616765']) - assert A.decode('hex_codec')[0] == 'secret message' - - def test_encode(self): - B = self.B.encode('unicode_escape') - assert B[0][0] == asbytes(r' \u03a3 ') - - def test_expandtabs(self): - T = self.A.expandtabs() - assert T[2][0] == asbytes('123 345') - - def test_join(self): - if sys.version_info[0] >= 3: - # NOTE: list(b'123') == [49, 50, 51] - # so that b','.join(b'123') results to an error on Py3 - A0 = self.A.decode('ascii') - else: - A0 = self.A - - A = np.char.join([',', '#'], A0) - if sys.version_info[0] >= 3: - assert issubclass(A.dtype.type, np.unicode_) - else: - assert issubclass(A.dtype.type, np.string_) - assert_array_equal(np.char.join([',', '#'], A0), - [ - [' ,a,b,c, ', ''], - ['1,2,3,4,5', 'M#i#x#e#d#C#a#s#e'], - ['1,2,3, ,\t, ,3,4,5, ,\x00, ', 'U#P#P#E#R']]) - - def test_ljust(self): - assert issubclass(self.A.ljust(10).dtype.type, np.string_) - widths = np.array([[10, 20]]) - C = self.A.ljust([10, 20]) - assert_array_equal(np.char.str_len(C), [[10, 20], [10, 20], [12, 20]]) - C = self.A.ljust(20, asbytes('#')) - assert_array_equal(C.startswith(asbytes('#')), [ - [False, True], [False, False], [False, False]]) - assert np.all(C.endswith(asbytes('#'))) - C = np.char.ljust(asbytes('FOO'), [[10, 20], [15, 8]]) - assert issubclass(C.dtype.type, np.string_) - assert_array_equal(C, asbytes_nested([ - ['FOO ', 'FOO '], - ['FOO ', 'FOO ']])) - - def test_lower(self): - assert issubclass(self.A.lower().dtype.type, 
np.string_) - assert_array_equal(self.A.lower(), asbytes_nested([ - [' abc ', ''], - ['12345', 'mixedcase'], - ['123 \t 345 \0 ', 'upper']])) - assert issubclass(self.B.lower().dtype.type, np.unicode_) - assert_array_equal(self.B.lower(), [ - [u' \u03c3 ', u''], - [u'12345', u'mixedcase'], - [u'123 \t 345 \0 ', u'upper']]) - - def test_lstrip(self): - assert issubclass(self.A.lstrip().dtype.type, np.string_) - assert_array_equal(self.A.lstrip(), asbytes_nested([ - ['abc ', ''], - ['12345', 'MixedCase'], - ['123 \t 345 \0 ', 'UPPER']])) - assert_array_equal(self.A.lstrip(asbytes_nested(['1', 'M'])), - asbytes_nested([ - [' abc', ''], - ['2345', 'ixedCase'], - ['23 \t 345 \x00', 'UPPER']])) - assert issubclass(self.B.lstrip().dtype.type, np.unicode_) - assert_array_equal(self.B.lstrip(), [ - [u'\u03a3 ', ''], - ['12345', 'MixedCase'], - ['123 \t 345 \0 ', 'UPPER']]) - - def test_partition(self): - if sys.version_info >= (2, 5): - P = self.A.partition(asbytes_nested(['3', 'M'])) - assert issubclass(P.dtype.type, np.string_) - assert_array_equal(P, asbytes_nested([ - [(' abc ', '', ''), ('', '', '')], - [('12', '3', '45'), ('', 'M', 'ixedCase')], - [('12', '3', ' \t 345 \0 '), ('UPPER', '', '')]])) - - def test_replace(self): - R = self.A.replace(asbytes_nested(['3', 'a']), - asbytes_nested(['##########', '@'])) - assert issubclass(R.dtype.type, np.string_) - assert_array_equal(R, asbytes_nested([ - [' abc ', ''], - ['12##########45', 'MixedC@se'], - ['12########## \t ##########45 \x00', 'UPPER']])) - - if sys.version_info[0] < 3: - # NOTE: b'abc'.replace(b'a', 'b') is not allowed on Py3 - R = self.A.replace(asbytes('a'), u'\u03a3') - assert issubclass(R.dtype.type, np.unicode_) - assert_array_equal(R, [ - [u' \u03a3bc ', ''], - ['12345', u'MixedC\u03a3se'], - ['123 \t 345 \x00', 'UPPER']]) - - def test_rjust(self): - assert issubclass(self.A.rjust(10).dtype.type, np.string_) - widths = np.array([[10, 20]]) - C = self.A.rjust([10, 20]) - 
assert_array_equal(np.char.str_len(C), [[10, 20], [10, 20], [12, 20]]) - C = self.A.rjust(20, asbytes('#')) - assert np.all(C.startswith(asbytes('#'))) - assert_array_equal(C.endswith(asbytes('#')), - [[False, True], [False, False], [False, False]]) - C = np.char.rjust(asbytes('FOO'), [[10, 20], [15, 8]]) - assert issubclass(C.dtype.type, np.string_) - assert_array_equal(C, asbytes_nested([ - [' FOO', ' FOO'], - [' FOO', ' FOO']])) - - def test_rpartition(self): - if sys.version_info >= (2, 5): - P = self.A.rpartition(asbytes_nested(['3', 'M'])) - assert issubclass(P.dtype.type, np.string_) - assert_array_equal(P, asbytes_nested([ - [('', '', ' abc '), ('', '', '')], - [('12', '3', '45'), ('', 'M', 'ixedCase')], - [('123 \t ', '3', '45 \0 '), ('', '', 'UPPER')]])) - - def test_rsplit(self): - A = self.A.rsplit(asbytes('3')) - assert issubclass(A.dtype.type, np.object_) - assert_equal(A.tolist(), asbytes_nested([ - [[' abc '], ['']], - [['12', '45'], ['MixedCase']], - [['12', ' \t ', '45 \x00 '], ['UPPER']]])) - - def test_rstrip(self): - assert issubclass(self.A.rstrip().dtype.type, np.string_) - assert_array_equal(self.A.rstrip(), asbytes_nested([ - [' abc', ''], - ['12345', 'MixedCase'], - ['123 \t 345', 'UPPER']])) - assert_array_equal(self.A.rstrip(asbytes_nested(['5', 'ER'])), - asbytes_nested([ - [' abc ', ''], - ['1234', 'MixedCase'], - ['123 \t 345 \x00', 'UPP']])) - assert issubclass(self.B.rstrip().dtype.type, np.unicode_) - assert_array_equal(self.B.rstrip(), [ - [u' \u03a3', ''], - ['12345', 'MixedCase'], - ['123 \t 345', 'UPPER']]) - - def test_strip(self): - assert issubclass(self.A.strip().dtype.type, np.string_) - assert_array_equal(self.A.strip(), asbytes_nested([ - ['abc', ''], - ['12345', 'MixedCase'], - ['123 \t 345', 'UPPER']])) - assert_array_equal(self.A.strip(asbytes_nested(['15', 'EReM'])), - asbytes_nested([ - [' abc ', ''], - ['234', 'ixedCas'], - ['23 \t 345 \x00', 'UPP']])) - assert issubclass(self.B.strip().dtype.type, np.unicode_) - 
assert_array_equal(self.B.strip(), [ - [u'\u03a3', ''], - ['12345', 'MixedCase'], - ['123 \t 345', 'UPPER']]) - - def test_split(self): - A = self.A.split(asbytes('3')) - assert issubclass(A.dtype.type, np.object_) - assert_equal(A.tolist(), asbytes_nested([ - [[' abc '], ['']], - [['12', '45'], ['MixedCase']], - [['12', ' \t ', '45 \x00 '], ['UPPER']]])) - - def test_splitlines(self): - A = np.char.array(['abc\nfds\nwer']).splitlines() - assert issubclass(A.dtype.type, np.object_) - assert A.shape == (1,) - assert len(A[0]) == 3 - - def test_swapcase(self): - assert issubclass(self.A.swapcase().dtype.type, np.string_) - assert_array_equal(self.A.swapcase(), asbytes_nested([ - [' ABC ', ''], - ['12345', 'mIXEDcASE'], - ['123 \t 345 \0 ', 'upper']])) - assert issubclass(self.B.swapcase().dtype.type, np.unicode_) - assert_array_equal(self.B.swapcase(), [ - [u' \u03c3 ', u''], - [u'12345', u'mIXEDcASE'], - [u'123 \t 345 \0 ', u'upper']]) - - def test_title(self): - assert issubclass(self.A.title().dtype.type, np.string_) - assert_array_equal(self.A.title(), asbytes_nested([ - [' Abc ', ''], - ['12345', 'Mixedcase'], - ['123 \t 345 \0 ', 'Upper']])) - assert issubclass(self.B.title().dtype.type, np.unicode_) - assert_array_equal(self.B.title(), [ - [u' \u03a3 ', u''], - [u'12345', u'Mixedcase'], - [u'123 \t 345 \0 ', u'Upper']]) - - def test_upper(self): - assert issubclass(self.A.upper().dtype.type, np.string_) - assert_array_equal(self.A.upper(), asbytes_nested([ - [' ABC ', ''], - ['12345', 'MIXEDCASE'], - ['123 \t 345 \0 ', 'UPPER']])) - assert issubclass(self.B.upper().dtype.type, np.unicode_) - assert_array_equal(self.B.upper(), [ - [u' \u03a3 ', u''], - [u'12345', u'MIXEDCASE'], - [u'123 \t 345 \0 ', u'UPPER']]) - - def test_isnumeric(self): - def fail(): - self.A.isnumeric() - self.assertRaises(TypeError, fail) - assert issubclass(self.B.isnumeric().dtype.type, np.bool_) - assert_array_equal(self.B.isnumeric(), [ - [False, False], [True, False], [False, 
False]]) - - def test_isdecimal(self): - def fail(): - self.A.isdecimal() - self.assertRaises(TypeError, fail) - assert issubclass(self.B.isdecimal().dtype.type, np.bool_) - assert_array_equal(self.B.isdecimal(), [ - [False, False], [True, False], [False, False]]) - - -class TestOperations(TestCase): - def setUp(self): - self.A = np.array([['abc', '123'], - ['789', 'xyz']]).view(np.chararray) - self.B = np.array([['efg', '456'], - ['051', 'tuv']]).view(np.chararray) - - def test_add(self): - AB = np.array([['abcefg', '123456'], - ['789051', 'xyztuv']]).view(np.chararray) - assert_array_equal(AB, (self.A + self.B)) - assert len((self.A + self.B)[0][0]) == 6 - - def test_radd(self): - QA = np.array([['qabc', 'q123'], - ['q789', 'qxyz']]).view(np.chararray) - assert_array_equal(QA, ('q' + self.A)) - - def test_mul(self): - A = self.A - for r in (2,3,5,7,197): - Ar = np.array([[A[0,0]*r, A[0,1]*r], - [A[1,0]*r, A[1,1]*r]]).view(np.chararray) - - assert_array_equal(Ar, (self.A * r)) - - for ob in [object(), 'qrs']: - try: - A * ob - except ValueError: - pass - else: - self.fail("chararray can only be multiplied by integers") - - def test_rmul(self): - A = self.A - for r in (2,3,5,7,197): - Ar = np.array([[A[0,0]*r, A[0,1]*r], - [A[1,0]*r, A[1,1]*r]]).view(np.chararray) - assert_array_equal(Ar, (r * self.A)) - - for ob in [object(), 'qrs']: - try: - ob * A - except ValueError: - pass - else: - self.fail("chararray can only be multiplied by integers") - - def test_mod(self): - """Ticket #856""" - F = np.array([['%d', '%f'],['%s','%r']]).view(np.chararray) - C = np.array([[3,7],[19,1]]) - FC = np.array([['3', '7.000000'], - ['19', '1']]).view(np.chararray) - assert_array_equal(FC, F % C) - - A = np.array([['%.3f','%d'],['%s','%r']]).view(np.chararray) - A1 = np.array([['1.000','1'],['1','1']]).view(np.chararray) - assert_array_equal(A1, (A % 1)) - - A2 = np.array([['1.000','2'],['3','4']]).view(np.chararray) - assert_array_equal(A2, (A % [[1,2],[3,4]])) - - def 
test_rmod(self): - assert ("%s" % self.A) == str(self.A) - assert ("%r" % self.A) == repr(self.A) - - for ob in [42, object()]: - try: - ob % self.A - except TypeError: - pass - else: - self.fail("chararray __rmod__ should fail with " \ - "non-string objects") - - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/core/tests/test_dtype.py b/pythonPackages/numpy/numpy/core/tests/test_dtype.py deleted file mode 100755 index e817ce5d43..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_dtype.py +++ /dev/null @@ -1,112 +0,0 @@ -import numpy as np -from numpy.testing import * - -class TestBuiltin(TestCase): - def test_run(self): - """Only test hash runs at all.""" - for t in [np.int, np.float, np.complex, np.int32, np.str, np.object, - np.unicode]: - dt = np.dtype(t) - hash(dt) - -class TestRecord(TestCase): - def test_equivalent_record(self): - """Test whether equivalent record dtypes hash the same.""" - a = np.dtype([('yo', np.int)]) - b = np.dtype([('yo', np.int)]) - self.assertTrue(hash(a) == hash(b), - "two equivalent types do not hash to the same value !") - - def test_different_names(self): - # In theory, they may hash the same (collision) ? - a = np.dtype([('yo', np.int)]) - b = np.dtype([('ye', np.int)]) - self.assertTrue(hash(a) != hash(b), - "%s and %s hash the same !" % (a, b)) - - def test_different_titles(self): - # In theory, they may hash the same (collision) ? - a = np.dtype({'names': ['r','b'], 'formats': ['u1', 'u1'], - 'titles': ['Red pixel', 'Blue pixel']}) - b = np.dtype({'names': ['r','b'], 'formats': ['u1', 'u1'], - 'titles': ['RRed pixel', 'Blue pixel']}) - self.assertTrue(hash(a) != hash(b), - "%s and %s hash the same !" % (a, b)) - - def test_not_lists(self): - """Test if an appropriate exception is raised when passing bad values to - the dtype constructor. 
- """ - self.assertRaises(TypeError, np.dtype, - dict(names=set(['A', 'B']), formats=['f8', 'i4'])) - self.assertRaises(TypeError, np.dtype, - dict(names=['A', 'B'], formats=set(['f8', 'i4']))) - -class TestSubarray(TestCase): - def test_single_subarray(self): - a = np.dtype((np.int, (2))) - b = np.dtype((np.int, (2,))) - self.assertTrue(hash(a) == hash(b), - "two equivalent types do not hash to the same value !") - - def test_equivalent_record(self): - """Test whether equivalent subarray dtypes hash the same.""" - a = np.dtype((np.int, (2, 3))) - b = np.dtype((np.int, (2, 3))) - self.assertTrue(hash(a) == hash(b), - "two equivalent types do not hash to the same value !") - - def test_nonequivalent_record(self): - """Test whether different subarray dtypes hash differently.""" - a = np.dtype((np.int, (2, 3))) - b = np.dtype((np.int, (3, 2))) - self.assertTrue(hash(a) != hash(b), - "%s and %s hash the same !" % (a, b)) - - a = np.dtype((np.int, (2, 3))) - b = np.dtype((np.int, (2, 2))) - self.assertTrue(hash(a) != hash(b), - "%s and %s hash the same !" % (a, b)) - - a = np.dtype((np.int, (1, 2, 3))) - b = np.dtype((np.int, (1, 2))) - self.assertTrue(hash(a) != hash(b), - "%s and %s hash the same !" 
% (a, b)) - -class TestMonsterType(TestCase): - """Test deeply nested subtypes.""" - def test1(self): - simple1 = np.dtype({'names': ['r','b'], 'formats': ['u1', 'u1'], - 'titles': ['Red pixel', 'Blue pixel']}) - a = np.dtype([('yo', np.int), ('ye', simple1), - ('yi', np.dtype((np.int, (3, 2))))]) - b = np.dtype([('yo', np.int), ('ye', simple1), - ('yi', np.dtype((np.int, (3, 2))))]) - self.assertTrue(hash(a) == hash(b)) - - c = np.dtype([('yo', np.int), ('ye', simple1), - ('yi', np.dtype((a, (3, 2))))]) - d = np.dtype([('yo', np.int), ('ye', simple1), - ('yi', np.dtype((a, (3, 2))))]) - self.assertTrue(hash(c) == hash(d)) - -class TestMetadata(TestCase): - def test_no_metadata(self): - d = np.dtype(int) - self.assertEqual(d.metadata, None) - - def test_metadata_takes_dict(self): - d = np.dtype(int, metadata={'datum': 1}) - self.assertEqual(d.metadata, {'datum': 1}) - - def test_metadata_rejects_nondict(self): - self.assertRaises(TypeError, np.dtype, int, metadata='datum') - self.assertRaises(TypeError, np.dtype, int, metadata=1) - self.assertRaises(TypeError, np.dtype, int, metadata=None) - - def test_nested_metadata(self): - d = np.dtype([('a', np.dtype(int, metadata={'datum': 1}))]) - self.assertEqual(d['a'].metadata, {'datum': 1}) - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/core/tests/test_errstate.py b/pythonPackages/numpy/numpy/core/tests/test_errstate.py deleted file mode 100755 index c8e2b708f8..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_errstate.py +++ /dev/null @@ -1,57 +0,0 @@ -# The following exec statement (or something like it) is needed to -# prevent SyntaxError on Python < 2.5. Even though this is a test, -# SyntaxErrors are not acceptable; on Debian systems, they block -# byte-compilation during install and thus cause the package to fail -# to install. 
- -import sys -if sys.version_info[:2] >= (2, 5): - exec """ -from __future__ import with_statement -from numpy.core import * -from numpy.random import rand, randint -from numpy.testing import * - -class TestErrstate(TestCase): - def test_invalid(self): - with errstate(all='raise', under='ignore'): - a = -arange(3) - # This should work - with errstate(invalid='ignore'): - sqrt(a) - # While this should fail! - try: - sqrt(a) - except FloatingPointError: - pass - else: - self.fail() - - def test_divide(self): - with errstate(all='raise', under='ignore'): - a = -arange(3) - # This should work - with errstate(divide='ignore'): - a // 0 - # While this should fail! - try: - a // 0 - except FloatingPointError: - pass - else: - self.fail() - - def test_errcall(self): - def foo(*args): - print(args) - olderrcall = geterrcall() - with errstate(call=foo): - assert(geterrcall() is foo), 'call is not foo' - with errstate(call=None): - assert(geterrcall() is None), 'call is not None' - assert(geterrcall() is olderrcall), 'call is not olderrcall' - -""" - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/core/tests/test_function_base.py b/pythonPackages/numpy/numpy/core/tests/test_function_base.py deleted file mode 100755 index 67ce8953f1..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_function_base.py +++ /dev/null @@ -1,37 +0,0 @@ - -from numpy.testing import * -from numpy import logspace, linspace - -class TestLogspace(TestCase): - def test_basic(self): - y = logspace(0,6) - assert(len(y)==50) - y = logspace(0,6,num=100) - assert(y[-1] == 10**6) - y = logspace(0,6,endpoint=0) - assert(y[-1] < 10**6) - y = logspace(0,6,num=7) - assert_array_equal(y,[1,10,100,1e3,1e4,1e5,1e6]) - -class TestLinspace(TestCase): - def test_basic(self): - y = linspace(0,10) - assert(len(y)==50) - y = linspace(2,10,num=100) - assert(y[-1] == 10) - y = linspace(2,10,endpoint=0) - assert(y[-1] < 10) - - def test_corner(self): - y = 
list(linspace(0,1,1)) - assert y == [0.0], y - y = list(linspace(0,1,2.5)) - assert y == [0.0, 1.0] - - def test_type(self): - t1 = linspace(0,1,0).dtype - t2 = linspace(0,1,1).dtype - t3 = linspace(0,1,2).dtype - assert_equal(t1, t2) - assert_equal(t2, t3) - diff --git a/pythonPackages/numpy/numpy/core/tests/test_getlimits.py b/pythonPackages/numpy/numpy/core/tests/test_getlimits.py deleted file mode 100755 index f52463cbbc..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_getlimits.py +++ /dev/null @@ -1,58 +0,0 @@ -""" Test functions for limits module. -""" - -from numpy.testing import * - -from numpy.core import finfo, iinfo -from numpy import single,double,longdouble -import numpy as np - -################################################## - -class TestPythonFloat(TestCase): - def test_singleton(self): - ftype = finfo(float) - ftype2 = finfo(float) - assert_equal(id(ftype),id(ftype2)) - -class TestSingle(TestCase): - def test_singleton(self): - ftype = finfo(single) - ftype2 = finfo(single) - assert_equal(id(ftype),id(ftype2)) - -class TestDouble(TestCase): - def test_singleton(self): - ftype = finfo(double) - ftype2 = finfo(double) - assert_equal(id(ftype),id(ftype2)) - -class TestLongdouble(TestCase): - def test_singleton(self,level=2): - ftype = finfo(longdouble) - ftype2 = finfo(longdouble) - assert_equal(id(ftype),id(ftype2)) - -class TestIinfo(TestCase): - def test_basic(self): - dts = zip(['i1', 'i2', 'i4', 'i8', - 'u1', 'u2', 'u4', 'u8'], - [np.int8, np.int16, np.int32, np.int64, - np.uint8, np.uint16, np.uint32, np.uint64]) - for dt1, dt2 in dts: - assert_equal(iinfo(dt1).min, iinfo(dt2).min) - assert_equal(iinfo(dt1).max, iinfo(dt2).max) - self.assertRaises(ValueError, iinfo, 'f4') - - def test_unsigned_max(self): - types = np.sctypes['uint'] - for T in types: - assert_equal(iinfo(T).max, T(-1)) - - -def test_instances(): - iinfo(10) - finfo(3.0) - -if __name__ == "__main__": - run_module_suite() diff --git 
a/pythonPackages/numpy/numpy/core/tests/test_machar.py b/pythonPackages/numpy/numpy/core/tests/test_machar.py deleted file mode 100755 index 4175ceeac5..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_machar.py +++ /dev/null @@ -1,31 +0,0 @@ -from numpy.testing import * - -from numpy.core.machar import MachAr -import numpy.core.numerictypes as ntypes -from numpy import seterr, array - -class TestMachAr(TestCase): - def _run_machar_highprec(self): - # Instanciate MachAr instance with high enough precision to cause - # underflow - try: - hiprec = ntypes.float96 - machar = MachAr(lambda v:array([v], hiprec)) - except AttributeError: - "Skipping test: no nyptes.float96 available on this platform." - - def test_underlow(self): - """Regression testing for #759: instanciating MachAr for dtype = - np.float96 raises spurious warning.""" - serrstate = seterr(all='raise') - try: - try: - self._run_machar_highprec() - except FloatingPointError, e: - self.fail("Caught %s exception, should not have been raised." 
% e) - finally: - seterr(**serrstate) - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/core/tests/test_memmap.py b/pythonPackages/numpy/numpy/core/tests/test_memmap.py deleted file mode 100755 index 05163625bd..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_memmap.py +++ /dev/null @@ -1,94 +0,0 @@ -from tempfile import NamedTemporaryFile, mktemp -import os -import warnings - -from numpy import memmap -from numpy import arange, allclose -from numpy.testing import * - -class TestMemmap(TestCase): - def setUp(self): - self.tmpfp = NamedTemporaryFile(prefix='mmap') - self.shape = (3,4) - self.dtype = 'float32' - self.data = arange(12, dtype=self.dtype) - self.data.resize(self.shape) - - def tearDown(self): - self.tmpfp.close() - - def test_roundtrip(self): - # Write data to file - fp = memmap(self.tmpfp, dtype=self.dtype, mode='w+', - shape=self.shape) - fp[:] = self.data[:] - del fp # Test __del__ machinery, which handles cleanup - - # Read data back from file - newfp = memmap(self.tmpfp, dtype=self.dtype, mode='r', - shape=self.shape) - assert allclose(self.data, newfp) - assert_array_equal(self.data, newfp) - - def test_open_with_filename(self): - tmpname = mktemp('','mmap') - fp = memmap(tmpname, dtype=self.dtype, mode='w+', - shape=self.shape) - fp[:] = self.data[:] - del fp - os.unlink(tmpname) - - def test_attributes(self): - offset = 1 - mode = "w+" - fp = memmap(self.tmpfp, dtype=self.dtype, mode=mode, - shape=self.shape, offset=offset) - self.assertEquals(offset, fp.offset) - self.assertEquals(mode, fp.mode) - del fp - - def test_filename(self): - tmpname = mktemp('','mmap') - fp = memmap(tmpname, dtype=self.dtype, mode='w+', - shape=self.shape) - abspath = os.path.abspath(tmpname) - fp[:] = self.data[:] - self.assertEquals(abspath, fp.filename) - b = fp[:1] - self.assertEquals(abspath, b.filename) - del fp - os.unlink(tmpname) - - def test_filename_fileobj(self): - fp = memmap(self.tmpfp, 
dtype=self.dtype, mode="w+", - shape=self.shape) - self.assertEquals(fp.filename, self.tmpfp.name) - - def test_flush(self): - fp = memmap(self.tmpfp, dtype=self.dtype, mode='w+', - shape=self.shape) - fp[:] = self.data[:] - fp.flush() - - warnings.simplefilter('ignore', DeprecationWarning) - fp.sync() - warnings.simplefilter('default', DeprecationWarning) - - def test_del(self): - # Make sure a view does not delete the underlying mmap - fp_base = memmap(self.tmpfp, dtype=self.dtype, mode='w+', - shape=self.shape) - fp_view = fp_base[:] - class ViewCloseError(Exception): - pass - _close = memmap._close - def replace_close(self): - raise ViewCloseError('View should not call _close on memmap') - try: - memmap._close = replace_close - del fp_view - finally: - memmap._close = _close - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/core/tests/test_multiarray.py b/pythonPackages/numpy/numpy/core/tests/test_multiarray.py deleted file mode 100755 index 45cc1c8589..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_multiarray.py +++ /dev/null @@ -1,1768 +0,0 @@ -import tempfile -import sys -import os -import numpy as np -from numpy.testing import * -from numpy.core import * -from numpy.core.multiarray_tests import test_neighborhood_iterator, test_neighborhood_iterator_oob - -from numpy.compat import asbytes, getexception, strchar - -from test_print import in_foreign_locale - -class TestFlags(TestCase): - def setUp(self): - self.a = arange(10) - - def test_writeable(self): - mydict = locals() - self.a.flags.writeable = False - self.assertRaises(RuntimeError, runstring, 'self.a[0] = 3', mydict) - self.a.flags.writeable = True - self.a[0] = 5 - self.a[0] = 0 - - def test_otherflags(self): - assert_equal(self.a.flags.carray, True) - assert_equal(self.a.flags.farray, False) - assert_equal(self.a.flags.behaved, True) - assert_equal(self.a.flags.fnc, False) - assert_equal(self.a.flags.forc, True) - 
assert_equal(self.a.flags.owndata, True) - assert_equal(self.a.flags.writeable, True) - assert_equal(self.a.flags.aligned, True) - assert_equal(self.a.flags.updateifcopy, False) - - -class TestAttributes(TestCase): - def setUp(self): - self.one = arange(10) - self.two = arange(20).reshape(4,5) - self.three = arange(60,dtype=float64).reshape(2,5,6) - - def test_attributes(self): - assert_equal(self.one.shape, (10,)) - assert_equal(self.two.shape, (4,5)) - assert_equal(self.three.shape, (2,5,6)) - self.three.shape = (10,3,2) - assert_equal(self.three.shape, (10,3,2)) - self.three.shape = (2,5,6) - assert_equal(self.one.strides, (self.one.itemsize,)) - num = self.two.itemsize - assert_equal(self.two.strides, (5*num, num)) - num = self.three.itemsize - assert_equal(self.three.strides, (30*num, 6*num, num)) - assert_equal(self.one.ndim, 1) - assert_equal(self.two.ndim, 2) - assert_equal(self.three.ndim, 3) - num = self.two.itemsize - assert_equal(self.two.size, 20) - assert_equal(self.two.nbytes, 20*num) - assert_equal(self.two.itemsize, self.two.dtype.itemsize) - assert_equal(self.two.base, arange(20)) - - def test_dtypeattr(self): - assert_equal(self.one.dtype, dtype(int_)) - assert_equal(self.three.dtype, dtype(float_)) - assert_equal(self.one.dtype.char, 'l') - assert_equal(self.three.dtype.char, 'd') - self.assertTrue(self.three.dtype.str[0] in '<>') - assert_equal(self.one.dtype.str[1], 'i') - assert_equal(self.three.dtype.str[1], 'f') - - def test_stridesattr(self): - x = self.one - def make_array(size, offset, strides): - return ndarray([size], buffer=x, dtype=int, - offset=offset*x.itemsize, - strides=strides*x.itemsize) - assert_equal(make_array(4, 4, -1), array([4, 3, 2, 1])) - self.assertRaises(ValueError, make_array, 4, 4, -2) - self.assertRaises(ValueError, make_array, 4, 2, -1) - self.assertRaises(ValueError, make_array, 8, 3, 1) - #self.assertRaises(ValueError, make_array, 8, 3, 0) - #self.assertRaises(ValueError, lambda: ndarray([1], strides=4)) - - - 
def test_set_stridesattr(self): - x = self.one - def make_array(size, offset, strides): - try: - r = ndarray([size], dtype=int, buffer=x, offset=offset*x.itemsize) - except: - raise RuntimeError(getexception()) - r.strides = strides=strides*x.itemsize - return r - assert_equal(make_array(4, 4, -1), array([4, 3, 2, 1])) - assert_equal(make_array(7,3,1), array([3, 4, 5, 6, 7, 8, 9])) - self.assertRaises(ValueError, make_array, 4, 4, -2) - self.assertRaises(ValueError, make_array, 4, 2, -1) - self.assertRaises(RuntimeError, make_array, 8, 3, 1) - #self.assertRaises(ValueError, make_array, 8, 3, 0) - - def test_fill(self): - for t in "?bhilqpBHILQPfdgFDGO": - x = empty((3,2,1), t) - y = empty((3,2,1), t) - x.fill(1) - y[...] = 1 - assert_equal(x,y) - - x = array([(0,0.0), (1,1.0)], dtype='i4,f8') - x.fill(x[0]) - assert_equal(x['f1'][1], x['f1'][0]) - - -class TestDtypedescr(TestCase): - def test_construction(self): - d1 = dtype('i4') - assert_equal(d1, dtype(int32)) - d2 = dtype('f8') - assert_equal(d2, dtype(float64)) - -class TestZeroRank(TestCase): - def setUp(self): - self.d = array(0), array('x', object) - - def test_ellipsis_subscript(self): - a,b = self.d - self.assertEqual(a[...], 0) - self.assertEqual(b[...], 'x') - self.assertTrue(a[...] is a) - self.assertTrue(b[...] is b) - - def test_empty_subscript(self): - a,b = self.d - self.assertEqual(a[()], 0) - self.assertEqual(b[()], 'x') - self.assertTrue(type(a[()]) is a.dtype.type) - self.assertTrue(type(b[()]) is str) - - def test_invalid_subscript(self): - a,b = self.d - self.assertRaises(IndexError, lambda x: x[0], a) - self.assertRaises(IndexError, lambda x: x[0], b) - self.assertRaises(IndexError, lambda x: x[array([], int)], a) - self.assertRaises(IndexError, lambda x: x[array([], int)], b) - - def test_ellipsis_subscript_assignment(self): - a,b = self.d - a[...] = 42 - self.assertEqual(a, 42) - b[...] 
= '' - self.assertEqual(b.item(), '') - - def test_empty_subscript_assignment(self): - a,b = self.d - a[()] = 42 - self.assertEqual(a, 42) - b[()] = '' - self.assertEqual(b.item(), '') - - def test_invalid_subscript_assignment(self): - a,b = self.d - def assign(x, i, v): - x[i] = v - self.assertRaises(IndexError, assign, a, 0, 42) - self.assertRaises(IndexError, assign, b, 0, '') - self.assertRaises(ValueError, assign, a, (), '') - - def test_newaxis(self): - a,b = self.d - self.assertEqual(a[newaxis].shape, (1,)) - self.assertEqual(a[..., newaxis].shape, (1,)) - self.assertEqual(a[newaxis, ...].shape, (1,)) - self.assertEqual(a[..., newaxis].shape, (1,)) - self.assertEqual(a[newaxis, ..., newaxis].shape, (1,1)) - self.assertEqual(a[..., newaxis, newaxis].shape, (1,1)) - self.assertEqual(a[newaxis, newaxis, ...].shape, (1,1)) - self.assertEqual(a[(newaxis,)*10].shape, (1,)*10) - - def test_invalid_newaxis(self): - a,b = self.d - def subscript(x, i): x[i] - self.assertRaises(IndexError, subscript, a, (newaxis, 0)) - self.assertRaises(IndexError, subscript, a, (newaxis,)*50) - - def test_constructor(self): - x = ndarray(()) - x[()] = 5 - self.assertEqual(x[()], 5) - y = ndarray((),buffer=x) - y[()] = 6 - self.assertEqual(x[()], 6) - - def test_output(self): - x = array(2) - self.assertRaises(ValueError, add, x, [1], x) - - -class TestScalarIndexing(TestCase): - def setUp(self): - self.d = array([0,1])[0] - - def test_ellipsis_subscript(self): - a = self.d - self.assertEqual(a[...], 0) - self.assertEqual(a[...].shape,()) - - def test_empty_subscript(self): - a = self.d - self.assertEqual(a[()], 0) - self.assertEqual(a[()].shape,()) - - def test_invalid_subscript(self): - a = self.d - self.assertRaises(IndexError, lambda x: x[0], a) - self.assertRaises(IndexError, lambda x: x[array([], int)], a) - - def test_invalid_subscript_assignment(self): - a = self.d - def assign(x, i, v): - x[i] = v - self.assertRaises(TypeError, assign, a, 0, 42) - - def test_newaxis(self): - a 
= self.d - self.assertEqual(a[newaxis].shape, (1,)) - self.assertEqual(a[..., newaxis].shape, (1,)) - self.assertEqual(a[newaxis, ...].shape, (1,)) - self.assertEqual(a[..., newaxis].shape, (1,)) - self.assertEqual(a[newaxis, ..., newaxis].shape, (1,1)) - self.assertEqual(a[..., newaxis, newaxis].shape, (1,1)) - self.assertEqual(a[newaxis, newaxis, ...].shape, (1,1)) - self.assertEqual(a[(newaxis,)*10].shape, (1,)*10) - - def test_invalid_newaxis(self): - a = self.d - def subscript(x, i): x[i] - self.assertRaises(IndexError, subscript, a, (newaxis, 0)) - self.assertRaises(IndexError, subscript, a, (newaxis,)*50) - - -class TestCreation(TestCase): - def test_from_attribute(self): - class x(object): - def __array__(self, dtype=None): - pass - self.assertRaises(ValueError, array, x()) - - def test_from_string(self) : - types = np.typecodes['AllInteger'] + np.typecodes['Float'] - nstr = ['123','123'] - result = array([123, 123], dtype=int) - for type in types : - msg = 'String conversion for %s' % type - assert_equal(array(nstr, dtype=type), result, err_msg=msg) - - -class TestBool(TestCase): - def test_test_interning(self): - a0 = bool_(0) - b0 = bool_(False) - self.assertTrue(a0 is b0) - a1 = bool_(1) - b1 = bool_(True) - self.assertTrue(a1 is b1) - self.assertTrue(array([True])[0] is a1) - self.assertTrue(array(True)[()] is a1) - - -class TestMethods(TestCase): - def test_test_round(self): - assert_equal(array([1.2,1.5]).round(), [1,2]) - assert_equal(array(1.5).round(), 2) - assert_equal(array([12.2,15.5]).round(-1), [10,20]) - assert_equal(array([12.15,15.51]).round(1), [12.2,15.5]) - - def test_transpose(self): - a = array([[1,2],[3,4]]) - assert_equal(a.transpose(), [[1,3],[2,4]]) - self.assertRaises(ValueError, lambda: a.transpose(0)) - self.assertRaises(ValueError, lambda: a.transpose(0,0)) - self.assertRaises(ValueError, lambda: a.transpose(0,1,2)) - - def test_sort(self): - # test ordering for floats and complex containing nans. 
It is only - # necessary to check the less-than comparison, so sorts that - # only follow the insertion sort path are sufficient. We only - # test doubles and complex doubles as the logic is the same. - - # check doubles - msg = "Test real sort order with nans" - a = np.array([np.nan, 1, 0]) - b = sort(a) - assert_equal(b, a[::-1], msg) - # check complex - msg = "Test complex sort order with nans" - a = np.zeros(9, dtype=np.complex128) - a.real += [np.nan, np.nan, np.nan, 1, 0, 1, 1, 0, 0] - a.imag += [np.nan, 1, 0, np.nan, np.nan, 1, 0, 1, 0] - b = sort(a) - assert_equal(b, a[::-1], msg) - - # all c scalar sorts use the same code with different types - # so it suffices to run a quick check with one type. The number - # of sorted items must be greater than ~50 to check the actual - # algorithm because quick and merge sort fall back to insertion - # sort for small arrays. - a = np.arange(100) - b = a[::-1].copy() - for kind in ['q','m','h'] : - msg = "scalar sort, kind=%s" % kind - c = a.copy(); - c.sort(kind=kind) - assert_equal(c, a, msg) - c = b.copy(); - c.sort(kind=kind) - assert_equal(c, a, msg) - - # test complex sorts. These use the same code as the scalars - # but the comparison function differs. - ai = a*1j + 1 - bi = b*1j + 1 - for kind in ['q','m','h'] : - msg = "complex sort, real part == 1, kind=%s" % kind - c = ai.copy(); - c.sort(kind=kind) - assert_equal(c, ai, msg) - c = bi.copy(); - c.sort(kind=kind) - assert_equal(c, ai, msg) - ai = a + 1j - bi = b + 1j - for kind in ['q','m','h'] : - msg = "complex sort, imag part == 1, kind=%s" % kind - c = ai.copy(); - c.sort(kind=kind) - assert_equal(c, ai, msg) - c = bi.copy(); - c.sort(kind=kind) - assert_equal(c, ai, msg) - - # test string sorts.
- s = 'aaaaaaaa' - a = np.array([s + chr(i) for i in range(100)]) - b = a[::-1].copy() - for kind in ['q', 'm', 'h'] : - msg = "string sort, kind=%s" % kind - c = a.copy(); - c.sort(kind=kind) - assert_equal(c, a, msg) - c = b.copy(); - c.sort(kind=kind) - assert_equal(c, a, msg) - - # test unicode sort. - s = 'aaaaaaaa' - a = np.array([s + chr(i) for i in range(100)], dtype=np.unicode) - b = a[::-1].copy() - for kind in ['q', 'm', 'h'] : - msg = "unicode sort, kind=%s" % kind - c = a.copy(); - c.sort(kind=kind) - assert_equal(c, a, msg) - c = b.copy(); - c.sort(kind=kind) - assert_equal(c, a, msg) - - # todo, check object array sorts. - - # check axis handling. This should be the same for all type - # specific sorts, so we only check it for one type and one kind - a = np.array([[3,2],[1,0]]) - b = np.array([[1,0],[3,2]]) - c = np.array([[2,3],[0,1]]) - d = a.copy() - d.sort(axis=0) - assert_equal(d, b, "test sort with axis=0") - d = a.copy() - d.sort(axis=1) - assert_equal(d, c, "test sort with axis=1") - d = a.copy() - d.sort() - assert_equal(d, c, "test sort with default axis") - # using None is known fail at this point - # d = a.copy() - # d.sort(axis=None) - #assert_equal(d, c, "test sort with axis=None") - - - def test_sort_order(self): - # Test sorting an array with fields - x1=np.array([21,32,14]) - x2=np.array(['my','first','name']) - x3=np.array([3.1,4.5,6.2]) - r=np.rec.fromarrays([x1,x2,x3],names='id,word,number') - - r.sort(order=['id']) - assert_equal(r.id, array([14,21,32])) - assert_equal(r.word, array(['name','my','first'])) - assert_equal(r.number, array([6.2,3.1,4.5])) - - r.sort(order=['word']) - assert_equal(r.id, array([32,21,14])) - assert_equal(r.word, array(['first','my','name'])) - assert_equal(r.number, array([4.5,3.1,6.2])) - - r.sort(order=['number']) - assert_equal(r.id, array([21,32,14])) - assert_equal(r.word, array(['my','first','name'])) - assert_equal(r.number, array([3.1,4.5,6.2])) - - if sys.byteorder == 'little': - strtype = 
'>i2' - else: - strtype = '= 3: - return loads(obj, encoding='latin1') - else: - return loads(obj) - - # version 0 pickles, using protocol=2 to pickle - # version 0 doesn't have a version field - def test_version0_int8(self): - s = '\x80\x02cnumpy.core._internal\n_reconstruct\nq\x01cnumpy\nndarray\nq\x02K\x00\x85U\x01b\x87Rq\x03(K\x04\x85cnumpy\ndtype\nq\x04U\x02i1K\x00K\x01\x87Rq\x05(U\x01|NNJ\xff\xff\xff\xffJ\xff\xff\xff\xfftb\x89U\x04\x01\x02\x03\x04tb.' - a = array([1,2,3,4], dtype=int8) - p = self._loads(asbytes(s)) - assert_equal(a, p) - - def test_version0_float32(self): - s = '\x80\x02cnumpy.core._internal\n_reconstruct\nq\x01cnumpy\nndarray\nq\x02K\x00\x85U\x01b\x87Rq\x03(K\x04\x85cnumpy\ndtype\nq\x04U\x02f4K\x00K\x01\x87Rq\x05(U\x01= g2, [g1[i] >= g2[i] for i in [0,1,2]]) - assert_array_equal(g1 < g2, [g1[i] < g2[i] for i in [0,1,2]]) - assert_array_equal(g1 > g2, [g1[i] > g2[i] for i in [0,1,2]]) - - def test_mixed(self): - g1 = array(["spam","spa","spammer","and eggs"]) - g2 = "spam" - assert_array_equal(g1 == g2, [x == g2 for x in g1]) - assert_array_equal(g1 != g2, [x != g2 for x in g1]) - assert_array_equal(g1 < g2, [x < g2 for x in g1]) - assert_array_equal(g1 > g2, [x > g2 for x in g1]) - assert_array_equal(g1 <= g2, [x <= g2 for x in g1]) - assert_array_equal(g1 >= g2, [x >= g2 for x in g1]) - - - def test_unicode(self): - g1 = array([u"This",u"is",u"example"]) - g2 = array([u"This",u"was",u"example"]) - assert_array_equal(g1 == g2, [g1[i] == g2[i] for i in [0,1,2]]) - assert_array_equal(g1 != g2, [g1[i] != g2[i] for i in [0,1,2]]) - assert_array_equal(g1 <= g2, [g1[i] <= g2[i] for i in [0,1,2]]) - assert_array_equal(g1 >= g2, [g1[i] >= g2[i] for i in [0,1,2]]) - assert_array_equal(g1 < g2, [g1[i] < g2[i] for i in [0,1,2]]) - assert_array_equal(g1 > g2, [g1[i] > g2[i] for i in [0,1,2]]) - - -class TestArgmax(TestCase): - - nan_arr = [ - ([0, 1, 2, 3, np.nan], 4), - ([0, 1, 2, np.nan, 3], 3), - ([np.nan, 0, 1, 2, 3], 0), - ([np.nan, 0, np.nan, 2, 
3], 0), - ([0, 1, 2, 3, complex(0,np.nan)], 4), - ([0, 1, 2, 3, complex(np.nan,0)], 4), - ([0, 1, 2, complex(np.nan,0), 3], 3), - ([0, 1, 2, complex(0,np.nan), 3], 3), - ([complex(0,np.nan), 0, 1, 2, 3], 0), - ([complex(np.nan, np.nan), 0, 1, 2, 3], 0), - ([complex(np.nan, 0), complex(np.nan, 2), complex(np.nan, 1)], 0), - ([complex(np.nan, np.nan), complex(np.nan, 2), complex(np.nan, 1)], 0), - ([complex(np.nan, 0), complex(np.nan, 2), complex(np.nan, np.nan)], 0), - - ([complex(0, 0), complex(0, 2), complex(0, 1)], 1), - ([complex(1, 0), complex(0, 2), complex(0, 1)], 0), - ([complex(1, 0), complex(0, 2), complex(1, 1)], 2), - ] - - def test_all(self): - a = np.random.normal(0,1,(4,5,6,7,8)) - for i in xrange(a.ndim): - amax = a.max(i) - aargmax = a.argmax(i) - axes = range(a.ndim) - axes.remove(i) - assert all(amax == aargmax.choose(*a.transpose(i,*axes))) - - def test_combinations(self): - for arr, pos in self.nan_arr: - assert_equal(np.argmax(arr), pos, err_msg="%r"%arr) - assert_equal(arr[np.argmax(arr)], np.max(arr), err_msg="%r"%arr) - - -class TestMinMax(TestCase): - def test_scalar(self): - assert_raises(ValueError, np.amax, 1, 1) - assert_raises(ValueError, np.amin, 1, 1) - - assert_equal(np.amax(1, axis=0), 1) - assert_equal(np.amin(1, axis=0), 1) - assert_equal(np.amax(1, axis=None), 1) - assert_equal(np.amin(1, axis=None), 1) - - def test_axis(self): - assert_raises(ValueError, np.amax, [1,2,3], 1000) - assert_equal(np.amax([[1,2,3]], axis=1), 3) - -class TestNewaxis(TestCase): - def test_basic(self): - sk = array([0,-0.1,0.1]) - res = 250*sk[:,newaxis] - assert_almost_equal(res.ravel(),250*sk) - - -class TestClip(TestCase): - def _check_range(self,x,cmin,cmax): - assert np.all(x >= cmin) - assert np.all(x <= cmax) - - def _clip_type(self,type_group,array_max, - clip_min,clip_max,inplace=False, - expected_min=None,expected_max=None): - if expected_min is None: - expected_min = clip_min - if expected_max is None: - expected_max = clip_max - - for T in 
np.sctypes[type_group]: - if sys.byteorder == 'little': - byte_orders = ['=','>'] - else: - byte_orders = ['<','='] - - for byteorder in byte_orders: - dtype = np.dtype(T).newbyteorder(byteorder) - - x = (np.random.random(1000) * array_max).astype(dtype) - if inplace: - x.clip(clip_min,clip_max,x) - else: - x = x.clip(clip_min,clip_max) - byteorder = '=' - - if x.dtype.byteorder == '|': byteorder = '|' - assert_equal(x.dtype.byteorder,byteorder) - self._check_range(x,expected_min,expected_max) - return x - - def test_basic(self): - for inplace in [False, True]: - self._clip_type('float',1024,-12.8,100.2, inplace=inplace) - self._clip_type('float',1024,0,0, inplace=inplace) - - self._clip_type('int',1024,-120,100.5, inplace=inplace) - self._clip_type('int',1024,0,0, inplace=inplace) - - x = self._clip_type('uint',1024,-120,100,expected_min=0, inplace=inplace) - x = self._clip_type('uint',1024,0,0, inplace=inplace) - - def test_record_array(self): - rec = np.array([(-5, 2.0, 3.0), (5.0, 4.0, 3.0)], - dtype=[('x', '= 3) - x = val.clip(min=3) - assert np.all(x >= 3) - x = val.clip(max=4) - assert np.all(x <= 4) - - -class TestPutmask(TestCase): - def tst_basic(self,x,T,mask,val): - np.putmask(x,mask,val) - assert np.all(x[mask] == T(val)) - assert x.dtype == T - - def test_ip_types(self): - unchecked_types = [str, unicode, np.void, object] - - x = np.random.random(1000)*100 - mask = x < 40 - - for val in [-100,0,15]: - for types in np.sctypes.itervalues(): - for T in types: - if T not in unchecked_types: - yield self.tst_basic,x.copy().astype(T),T,mask,val - - def test_mask_size(self): - self.assertRaises(ValueError, np.putmask, - np.array([1,2,3]), [True], 5) - - def tst_byteorder(self,dtype): - x = np.array([1,2,3],dtype) - np.putmask(x,[True,False,True],-1) - assert_array_equal(x,[-1,2,-1]) - - def test_ip_byteorder(self): - for dtype in ('>i4','f8'), ('z', 'i4','f8'), ('z', '']: - for dtype in [float,int,np.complex]: - dt = np.dtype(dtype).newbyteorder(byteorder) - 
x = (np.random.random((4,7))*5).astype(dt) - buf = x.tostring() - yield self.tst_basic,buf,x.flat,{'dtype':dt} - - -class TestResize(TestCase): - def test_basic(self): - x = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) - x.resize((5,5)) - assert_array_equal(x.flat[:9],np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]).flat) - assert_array_equal(x[9:].flat,0) - - def test_check_reference(self): - x = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) - y = x - self.assertRaises(ValueError,x.resize,(5,1)) - - def test_int_shape(self): - x = np.eye(3) - x.resize(3) - assert_array_equal(x, np.eye(3)[0,:]) - - def test_none_shape(self): - x = np.eye(3) - x.resize(None) - assert_array_equal(x, np.eye(3)) - x.resize() - assert_array_equal(x, np.eye(3)) - - def test_invalid_arguements(self): - self.assertRaises(TypeError, np.eye(3).resize, 'hi') - self.assertRaises(ValueError, np.eye(3).resize, -1) - self.assertRaises(TypeError, np.eye(3).resize, order=1) - self.assertRaises(TypeError, np.eye(3).resize, refcheck='hi') - - def test_freeform_shape(self): - x = np.eye(3) - x.resize(3,2,1) - assert_(x.shape == (3,2,1)) - - def test_zeros_appended(self): - x = np.eye(3) - x.resize(2,3,3) - assert_array_equal(x[0], np.eye(3)) - assert_array_equal(x[1], np.zeros((3,3))) - - -class TestRecord(TestCase): - def test_field_rename(self): - dt = np.dtype([('f',float),('i',int)]) - dt.names = ['p','q'] - assert_equal(dt.names,['p','q']) - - if sys.version_info[0] >= 3: - def test_bytes_fields(self): - # Bytes are not allowed in field names and not recognized in titles - # on Py3 - assert_raises(TypeError, np.dtype, [(asbytes('a'), int)]) - assert_raises(TypeError, np.dtype, [(('b', asbytes('a')), int)]) - - dt = np.dtype([((asbytes('a'), 'b'), int)]) - assert_raises(ValueError, dt.__getitem__, asbytes('a')) - - x = np.array([(1,), (2,), (3,)], dtype=dt) - assert_raises(ValueError, x.__getitem__, asbytes('a')) - - y = x[0] - assert_raises(IndexError, y.__getitem__, asbytes('a')) - else: - def 
test_unicode_field_titles(self): - # Unicode field titles are added to field dict on Py2 - title = unicode('b') - dt = np.dtype([((title, 'a'), int)]) - dt[title] - dt['a'] - x = np.array([(1,), (2,), (3,)], dtype=dt) - x[title] - x['a'] - y = x[0] - y[title] - y['a'] - - def test_unicode_field_names(self): - # Unicode field names are not allowed on Py2 - title = unicode('b') - assert_raises(TypeError, np.dtype, [(title, int)]) - assert_raises(TypeError, np.dtype, [(('a', title), int)]) - -class TestView(TestCase): - def test_basic(self): - x = np.array([(1,2,3,4),(5,6,7,8)],dtype=[('r',np.int8),('g',np.int8), - ('b',np.int8),('a',np.int8)]) - # We must be specific about the endianness here: - y = x.view(dtype='= (2, 6): - - if sys.version_info[:2] == (2, 6): - from numpy.core.multiarray import memorysimpleview as memoryview - - from numpy.core._internal import _dtype_from_pep3118 - - class TestPEP3118Dtype(object): - def _check(self, spec, wanted): - dt = np.dtype(wanted) - if isinstance(wanted, list) and isinstance(wanted[-1], tuple): - if wanted[-1][0] == '': - names = list(dt.names) - names[-1] = '' - dt.names = tuple(names) - assert_equal(_dtype_from_pep3118(spec), dt, - err_msg="spec %r != dtype %r" % (spec, wanted)) - - def test_native_padding(self): - align = np.dtype('i').alignment - for j in xrange(8): - if j == 0: - s = 'bi' - else: - s = 'b%dxi' % j - self._check('@'+s, {'f0': ('i1', 0), - 'f1': ('i', align*(1 + j//align))}) - self._check('='+s, {'f0': ('i1', 0), - 'f1': ('i', 1+j)}) - - def test_native_padding_2(self): - # Native padding should work also for structs and sub-arrays - self._check('x3T{xi}', {'f0': (({'f0': ('i', 4)}, (3,)), 4)}) - self._check('^x3T{xi}', {'f0': (({'f0': ('i', 1)}, (3,)), 1)}) - - def test_trailing_padding(self): - # Trailing padding should be included, *and*, the item size - # should match the alignment if in aligned mode - align = np.dtype('i').alignment - def VV(n): - return 'V%d' % (align*(1 + (n-1)//align)) - - 
self._check('ix', [('f0', 'i'), ('', VV(1))]) - self._check('ixx', [('f0', 'i'), ('', VV(2))]) - self._check('ixxx', [('f0', 'i'), ('', VV(3))]) - self._check('ixxxx', [('f0', 'i'), ('', VV(4))]) - self._check('i7x', [('f0', 'i'), ('', VV(7))]) - - self._check('^ix', [('f0', 'i'), ('', 'V1')]) - self._check('^ixx', [('f0', 'i'), ('', 'V2')]) - self._check('^ixxx', [('f0', 'i'), ('', 'V3')]) - self._check('^ixxxx', [('f0', 'i'), ('', 'V4')]) - self._check('^i7x', [('f0', 'i'), ('', 'V7')]) - - def test_byteorder_inside_struct(self): - # The byte order after @T{=i} should be '=', not '@'. - # Check this by noting the absence of native alignment. - self._check('@T{^i}xi', {'f0': ({'f0': ('i', 0)}, 0), - 'f1': ('i', 5)}) - - def test_intra_padding(self): - # Natively aligned sub-arrays may require some internal padding - align = np.dtype('i').alignment - def VV(n): - return 'V%d' % (align*(1 + (n-1)//align)) - - self._check('(3)T{ix}', ({'f0': ('i', 0), '': (VV(1), 4)}, (3,))) - - class TestNewBufferProtocol(object): - def _check_roundtrip(self, obj): - obj = np.asarray(obj) - x = memoryview(obj) - y = np.asarray(x) - y2 = np.array(x) - assert not y.flags.owndata - assert y2.flags.owndata - assert_equal(y.dtype, obj.dtype) - assert_array_equal(obj, y) - assert_equal(y2.dtype, obj.dtype) - assert_array_equal(obj, y2) - - def test_roundtrip(self): - x = np.array([1,2,3,4,5], dtype='i4') - self._check_roundtrip(x) - - x = np.array([[1,2],[3,4]], dtype=np.float64) - self._check_roundtrip(x) - - x = np.zeros((3,3,3), dtype=np.float32)[:,0,:] - self._check_roundtrip(x) - - dt = [('a', 'b'), - ('b', 'h'), - ('c', 'i'), - ('d', 'l'), - ('dx', 'q'), - ('e', 'B'), - ('f', 'H'), - ('g', 'I'), - ('h', 'L'), - ('hx', 'Q'), - ('i', np.single), - ('j', np.double), - ('k', np.longdouble), - ('ix', np.csingle), - ('jx', np.cdouble), - ('kx', np.clongdouble), - ('l', 'S4'), - ('m', 'U4'), - ('n', 'V3'), - ('o', '?')] - x = np.array([(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, - 
asbytes('aaaa'), 'bbbb', asbytes('xxx'), True)], - dtype=dt) - self._check_roundtrip(x) - - x = np.array(([[1,2],[3,4]],), dtype=[('a', (int, (2,2)))]) - self._check_roundtrip(x) - - x = np.array([1,2,3], dtype='>i2') - self._check_roundtrip(x) - - x = np.array([1,2,3], dtype='i') - else: - assert_equal(y.format, 'i') - - x = np.array([1,2,3], dtype='0],a[1][V>0],a[2][V>0]]) == a[:,V>0]).all() - - -class TestBinaryRepr(TestCase): - def test_zero(self): - assert_equal(binary_repr(0),'0') - - def test_large(self): - assert_equal(binary_repr(10736848),'101000111101010011010000') - - def test_negative(self): - assert_equal(binary_repr(-1), '-1') - assert_equal(binary_repr(-1, width=8), '11111111') - -class TestBaseRepr(TestCase): - def test_base3(self): - assert_equal(base_repr(3**5, 3), '100000') - - def test_positive(self): - assert_equal(base_repr(12, 10), '12') - assert_equal(base_repr(12, 10, 4), '000012') - assert_equal(base_repr(12, 4), '30') - assert_equal(base_repr(3731624803700888, 36), '10QR0ROFCEW') - - def test_negative(self): - assert_equal(base_repr(-12, 10), '-12') - assert_equal(base_repr(-12, 10, 4), '-000012') - assert_equal(base_repr(-12, 4), '-30') - -class TestArrayComparisons(TestCase): - def test_array_equal(self): - res = array_equal(array([1,2]), array([1,2])) - assert res - assert type(res) is bool - res = array_equal(array([1,2]), array([1,2,3])) - assert not res - assert type(res) is bool - res = array_equal(array([1,2]), array([3,4])) - assert not res - assert type(res) is bool - res = array_equal(array([1,2]), array([1,3])) - assert not res - assert type(res) is bool - - def test_array_equiv(self): - res = array_equiv(array([1,2]), array([1,2])) - assert res - assert type(res) is bool - res = array_equiv(array([1,2]), array([1,2,3])) - assert not res - assert type(res) is bool - res = array_equiv(array([1,2]), array([3,4])) - assert not res - assert type(res) is bool - res = array_equiv(array([1,2]), array([1,3])) - assert not res - 
assert type(res) is bool - - res = array_equiv(array([1,1]), array([1])) - assert res - assert type(res) is bool - res = array_equiv(array([1,1]), array([[1],[1]])) - assert res - assert type(res) is bool - res = array_equiv(array([1,2]), array([2])) - assert not res - assert type(res) is bool - res = array_equiv(array([1,2]), array([[1],[2]])) - assert not res - assert type(res) is bool - res = array_equiv(array([1,2]), array([[1,2,3],[4,5,6],[7,8,9]])) - assert not res - assert type(res) is bool - - -def assert_array_strict_equal(x, y): - assert_array_equal(x, y) - # Check flags - assert x.flags == y.flags - # check endianness - assert x.dtype.isnative == y.dtype.isnative - - -class TestClip(TestCase): - def setUp(self): - self.nr = 5 - self.nc = 3 - - def fastclip(self, a, m, M, out=None): - if out is None: - return a.clip(m,M) - else: - return a.clip(m,M,out) - - def clip(self, a, m, M, out=None): - # use slow-clip - selector = less(a, m)+2*greater(a, M) - return selector.choose((a, m, M), out=out) - - # Handy functions - def _generate_data(self, n, m): - return randn(n, m) - - def _generate_data_complex(self, n, m): - return randn(n, m) + 1.j *rand(n, m) - - def _generate_flt_data(self, n, m): - return (randn(n, m)).astype(float32) - - def _neg_byteorder(self, a): - a = asarray(a) - if sys.byteorder == 'little': - a = a.astype(a.dtype.newbyteorder('>')) - else: - a = a.astype(a.dtype.newbyteorder('<')) - return a - - def _generate_non_native_data(self, n, m): - data = randn(n, m) - data = self._neg_byteorder(data) - assert not data.dtype.isnative - return data - - def _generate_int_data(self, n, m): - return (10 * rand(n, m)).astype(int64) - - def _generate_int32_data(self, n, m): - return (10 * rand(n, m)).astype(int32) - - # Now the real test cases - def test_simple_double(self): - """Test native double input with scalar min/max.""" - a = self._generate_data(self.nr, self.nc) - m = 0.1 - M = 0.6 - ac = self.fastclip(a, m, M) - act = self.clip(a, m, M) - 
assert_array_strict_equal(ac, act) - - def test_simple_int(self): - """Test native int input with scalar min/max.""" - a = self._generate_int_data(self.nr, self.nc) - a = a.astype(int) - m = -2 - M = 4 - ac = self.fastclip(a, m, M) - act = self.clip(a, m, M) - assert_array_strict_equal(ac, act) - - def test_array_double(self): - """Test native double input with array min/max.""" - a = self._generate_data(self.nr, self.nc) - m = zeros(a.shape) - M = m + 0.5 - ac = self.fastclip(a, m, M) - act = self.clip(a, m, M) - assert_array_strict_equal(ac, act) - - def test_simple_nonnative(self): - """Test non native double input with scalar min/max. - Test native double input with non native double scalar min/max.""" - a = self._generate_non_native_data(self.nr, self.nc) - m = -0.5 - M = 0.6 - ac = self.fastclip(a, m, M) - act = self.clip(a, m, M) - assert_array_equal(ac, act) - - "Test native double input with non native double scalar min/max." - a = self._generate_data(self.nr, self.nc) - m = -0.5 - M = self._neg_byteorder(0.6) - assert not M.dtype.isnative - ac = self.fastclip(a, m, M) - act = self.clip(a, m, M) - assert_array_equal(ac, act) - - def test_simple_complex(self): - """Test native complex input with native double scalar min/max. - Test native input with complex double scalar min/max. - """ - a = 3 * self._generate_data_complex(self.nr, self.nc) - m = -0.5 - M = 1. - ac = self.fastclip(a, m, M) - act = self.clip(a, m, M) - assert_array_strict_equal(ac, act) - - "Test native input with complex double scalar min/max." - a = 3 * self._generate_data(self.nr, self.nc) - m = -0.5 + 1.j - M = 1. 
+ 2.j - ac = self.fastclip(a, m, M) - act = self.clip(a, m, M) - assert_array_strict_equal(ac, act) - - def test_clip_non_contig(self): - """Test clip for non contiguous native input and native scalar min/max.""" - a = self._generate_data(self.nr * 2, self.nc * 3) - a = a[::2, ::3] - assert not a.flags['F_CONTIGUOUS'] - assert not a.flags['C_CONTIGUOUS'] - ac = self.fastclip(a, -1.6, 1.7) - act = self.clip(a, -1.6, 1.7) - assert_array_strict_equal(ac, act) - - def test_simple_out(self): - """Test native double input with scalar min/max.""" - a = self._generate_data(self.nr, self.nc) - m = -0.5 - M = 0.6 - ac = zeros(a.shape) - act = zeros(a.shape) - self.fastclip(a, m, M, ac) - self.clip(a, m, M, act) - assert_array_strict_equal(ac, act) - - def test_simple_int32_inout(self): - """Test native int32 input with double min/max and int32 out.""" - a = self._generate_int32_data(self.nr, self.nc) - m = float64(0) - M = float64(2) - ac = zeros(a.shape, dtype = int32) - act = ac.copy() - self.fastclip(a, m, M, ac) - self.clip(a, m, M, act) - assert_array_strict_equal(ac, act) - - def test_simple_int64_out(self): - """Test native int32 input with int32 scalar min/max and int64 out.""" - a = self._generate_int32_data(self.nr, self.nc) - m = int32(-1) - M = int32(1) - ac = zeros(a.shape, dtype = int64) - act = ac.copy() - self.fastclip(a, m, M, ac) - self.clip(a, m, M, act) - assert_array_strict_equal(ac, act) - - def test_simple_int64_inout(self): - """Test native in32 input with double array min/max and int32 out.""" - a = self._generate_int32_data(self.nr, self.nc) - m = zeros(a.shape, float64) - M = float64(1) - ac = zeros(a.shape, dtype = int32) - act = ac.copy() - self.fastclip(a, m, M, ac) - self.clip(a, m, M, act) - assert_array_strict_equal(ac, act) - - def test_simple_int32_out(self): - """Test native double input with scalar min/max and int out.""" - a = self._generate_data(self.nr, self.nc) - m = -1.0 - M = 2.0 - ac = zeros(a.shape, dtype = int32) - act = 
ac.copy() - self.fastclip(a, m, M, ac) - self.clip(a, m, M, act) - assert_array_strict_equal(ac, act) - - def test_simple_inplace_01(self): - """Test native double input with array min/max in-place.""" - a = self._generate_data(self.nr, self.nc) - ac = a.copy() - m = zeros(a.shape) - M = 1.0 - self.fastclip(a, m, M, a) - self.clip(a, m, M, ac) - assert_array_strict_equal(a, ac) - - def test_simple_inplace_02(self): - """Test native double input with scalar min/max in-place.""" - a = self._generate_data(self.nr, self.nc) - ac = a.copy() - m = -0.5 - M = 0.6 - self.fastclip(a, m, M, a) - self.clip(a, m, M, ac) - assert_array_strict_equal(a, ac) - - def test_noncontig_inplace(self): - """Test non contiguous double input with double scalar min/max in-place.""" - a = self._generate_data(self.nr * 2, self.nc * 3) - a = a[::2, ::3] - assert not a.flags['F_CONTIGUOUS'] - assert not a.flags['C_CONTIGUOUS'] - ac = a.copy() - m = -0.5 - M = 0.6 - self.fastclip(a, m, M, a) - self.clip(a, m, M, ac) - assert_array_equal(a, ac) - - def test_type_cast_01(self): - "Test native double input with scalar min/max." - a = self._generate_data(self.nr, self.nc) - m = -0.5 - M = 0.6 - ac = self.fastclip(a, m, M) - act = self.clip(a, m, M) - assert_array_strict_equal(ac, act) - - def test_type_cast_02(self): - "Test native int32 input with int32 scalar min/max." - a = self._generate_int_data(self.nr, self.nc) - a = a.astype(int32) - m = -2 - M = 4 - ac = self.fastclip(a, m, M) - act = self.clip(a, m, M) - assert_array_strict_equal(ac, act) - - def test_type_cast_03(self): - "Test native int32 input with float64 scalar min/max." - a = self._generate_int32_data(self.nr, self.nc) - m = -2 - M = 4 - ac = self.fastclip(a, float64(m), float64(M)) - act = self.clip(a, float64(m), float64(M)) - assert_array_strict_equal(ac, act) - - def test_type_cast_04(self): - "Test native int32 input with float32 scalar min/max." 
- a = self._generate_int32_data(self.nr, self.nc) - m = float32(-2) - M = float32(4) - act = self.fastclip(a,m,M) - ac = self.clip(a,m,M) - assert_array_strict_equal(ac, act) - - def test_type_cast_05(self): - "Test native int32 with double arrays min/max." - a = self._generate_int_data(self.nr, self.nc) - m = -0.5 - M = 1. - ac = self.fastclip(a, m * zeros(a.shape), M) - act = self.clip(a, m * zeros(a.shape), M) - assert_array_strict_equal(ac, act) - - def test_type_cast_06(self): - "Test native with NON native scalar min/max." - a = self._generate_data(self.nr, self.nc) - m = 0.5 - m_s = self._neg_byteorder(m) - M = 1. - act = self.clip(a, m_s, M) - ac = self.fastclip(a, m_s, M) - assert_array_strict_equal(ac, act) - - def test_type_cast_07(self): - "Test NON native with native array min/max." - a = self._generate_data(self.nr, self.nc) - m = -0.5 * ones(a.shape) - M = 1. - a_s = self._neg_byteorder(a) - assert not a_s.dtype.isnative - act = a_s.clip(m, M) - ac = self.fastclip(a_s, m, M) - assert_array_strict_equal(ac, act) - - def test_type_cast_08(self): - "Test NON native with native scalar min/max." - a = self._generate_data(self.nr, self.nc) - m = -0.5 - M = 1. - a_s = self._neg_byteorder(a) - assert not a_s.dtype.isnative - ac = self.fastclip(a_s, m , M) - act = a_s.clip(m, M) - assert_array_strict_equal(ac, act) - - def test_type_cast_09(self): - "Test native with NON native array min/max." - a = self._generate_data(self.nr, self.nc) - m = -0.5 * ones(a.shape) - M = 1. 
- m_s = self._neg_byteorder(m) - assert not m_s.dtype.isnative - ac = self.fastclip(a, m_s , M) - act = self.clip(a, m_s, M) - assert_array_strict_equal(ac, act) - - def test_type_cast_10(self): - """Test native int32 with float min/max and float out for output argument.""" - a = self._generate_int_data(self.nr, self.nc) - b = zeros(a.shape, dtype = float32) - m = float32(-0.5) - M = float32(1) - act = self.clip(a, m, M, out = b) - ac = self.fastclip(a, m , M, out = b) - assert_array_strict_equal(ac, act) - - def test_type_cast_11(self): - "Test non native with native scalar, min/max, out non native" - a = self._generate_non_native_data(self.nr, self.nc) - b = a.copy() - b = b.astype(b.dtype.newbyteorder('>')) - bt = b.copy() - m = -0.5 - M = 1. - self.fastclip(a, m , M, out = b) - self.clip(a, m, M, out = bt) - assert_array_strict_equal(b, bt) - - def test_type_cast_12(self): - "Test native int32 input and min/max and float out" - a = self._generate_int_data(self.nr, self.nc) - b = zeros(a.shape, dtype = float32) - m = int32(0) - M = int32(1) - act = self.clip(a, m, M, out = b) - ac = self.fastclip(a, m , M, out = b) - assert_array_strict_equal(ac, act) - - def test_clip_with_out_simple(self): - "Test native double input with scalar min/max" - a = self._generate_data(self.nr, self.nc) - m = -0.5 - M = 0.6 - ac = zeros(a.shape) - act = zeros(a.shape) - self.fastclip(a, m, M, ac) - self.clip(a, m, M, act) - assert_array_strict_equal(ac, act) - - def test_clip_with_out_simple2(self): - "Test native int32 input with double min/max and int32 out" - a = self._generate_int32_data(self.nr, self.nc) - m = float64(0) - M = float64(2) - ac = zeros(a.shape, dtype = int32) - act = ac.copy() - self.fastclip(a, m, M, ac) - self.clip(a, m, M, act) - assert_array_strict_equal(ac, act) - - def test_clip_with_out_simple_int32(self): - "Test native int32 input with int32 scalar min/max and int64 out" - a = self._generate_int32_data(self.nr, self.nc) - m = int32(-1) - M = int32(1) - 
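The `out=` variants above write the clipped result into a preallocated array; passing the input itself as `out` clips in place. A small sketch of both forms (a same-dtype output is used here to stay within default casting rules):

```python
import numpy as np

# clip can write into a preallocated output array, which is what the
# fastclip(..., out=...) calls above exercise.
a = np.array([-1.0, 0.2, 2.0])
out = np.empty_like(a)
res = np.clip(a, -0.5, 0.6, out=out)   # res is the same object as out

# Passing the input as out= gives the in-place variant.
np.clip(a, -0.5, 0.6, out=a)
```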
ac = zeros(a.shape, dtype = int64) - act = ac.copy() - self.fastclip(a, m, M, ac) - self.clip(a, m, M, act) - assert_array_strict_equal(ac, act) - - def test_clip_with_out_array_int32(self): - "Test native int32 input with double array min/max and int32 out" - a = self._generate_int32_data(self.nr, self.nc) - m = zeros(a.shape, float64) - M = float64(1) - ac = zeros(a.shape, dtype = int32) - act = ac.copy() - self.fastclip(a, m, M, ac) - self.clip(a, m, M, act) - assert_array_strict_equal(ac, act) - - def test_clip_with_out_array_outint32(self): - "Test native double input with scalar min/max and int out" - a = self._generate_data(self.nr, self.nc) - m = -1.0 - M = 2.0 - ac = zeros(a.shape, dtype = int32) - act = ac.copy() - self.fastclip(a, m, M, ac) - self.clip(a, m, M, act) - assert_array_strict_equal(ac, act) - - def test_clip_inplace_array(self): - "Test native double input with array min/max" - a = self._generate_data(self.nr, self.nc) - ac = a.copy() - m = zeros(a.shape) - M = 1.0 - self.fastclip(a, m, M, a) - self.clip(a, m, M, ac) - assert_array_strict_equal(a, ac) - - def test_clip_inplace_simple(self): - "Test native double input with scalar min/max" - a = self._generate_data(self.nr, self.nc) - ac = a.copy() - m = -0.5 - M = 0.6 - self.fastclip(a, m, M, a) - self.clip(a, m, M, ac) - assert_array_strict_equal(a, ac) - - def test_clip_func_takes_out(self): - """ Ensure that the clip() function takes an out= argument. 
- """ - a = self._generate_data(self.nr, self.nc) - ac = a.copy() - m = -0.5 - M = 0.6 - a2 = clip(a, m, M, out=a) - self.clip(a, m, M, ac) - assert_array_strict_equal(a2, ac) - self.assert_(a2 is a) - - -class test_allclose_inf(TestCase): - rtol = 1e-5 - atol = 1e-8 - - def tst_allclose(self,x,y): - assert allclose(x,y), "%s and %s not close" % (x,y) - - def tst_not_allclose(self,x,y): - assert not allclose(x,y), "%s and %s shouldn't be close" % (x,y) - - def test_ip_allclose(self): - """Parametric test factory.""" - arr = array([100,1000]) - aran = arange(125).reshape((5,5,5)) - - atol = self.atol - rtol = self.rtol - - data = [([1,0], [1,0]), - ([atol], [0]), - ([1], [1+rtol+atol]), - (arr, arr + arr*rtol), - (arr, arr + arr*rtol + atol*2), - (aran, aran + aran*rtol),] - - for (x,y) in data: - yield (self.tst_allclose,x,y) - - def test_ip_not_allclose(self): - """Parametric test factory.""" - aran = arange(125).reshape((5,5,5)) - - atol = self.atol - rtol = self.rtol - - data = [([inf,0], [1,inf]), - ([inf,0], [1,0]), - ([inf,inf], [1,inf]), - ([inf,inf], [1,0]), - ([-inf, 0], [inf, 0]), - ([nan,0], [nan,0]), - ([atol*2], [0]), - ([1], [1+rtol+atol*2]), - (aran, aran + aran*atol + atol*2), - (array([inf,1]), array([0,inf]))] - - for (x,y) in data: - yield (self.tst_not_allclose,x,y) - - def test_no_parameter_modification(self): - x = array([inf,1]) - y = array([0,inf]) - allclose(x,y) - assert_array_equal(x,array([inf,1])) - assert_array_equal(y,array([0,inf])) - - -class TestStdVar(TestCase): - def setUp(self): - self.A = array([1,-1,1,-1]) - self.real_var = 1 - - def test_basic(self): - assert_almost_equal(var(self.A),self.real_var) - assert_almost_equal(std(self.A)**2,self.real_var) - - def test_ddof1(self): - assert_almost_equal(var(self.A,ddof=1), - self.real_var*len(self.A)/float(len(self.A)-1)) - assert_almost_equal(std(self.A,ddof=1)**2, - self.real_var*len(self.A)/float(len(self.A)-1)) - - def test_ddof2(self): - assert_almost_equal(var(self.A,ddof=2), 
- self.real_var*len(self.A)/float(len(self.A)-2)) - assert_almost_equal(std(self.A,ddof=2)**2, - self.real_var*len(self.A)/float(len(self.A)-2)) - - -class TestStdVarComplex(TestCase): - def test_basic(self): - A = array([1,1.j,-1,-1.j]) - real_var = 1 - assert_almost_equal(var(A),real_var) - assert_almost_equal(std(A)**2,real_var) - - -class TestLikeFuncs(TestCase): - '''Test zeros_like and empty_like''' - - def setUp(self): - self.data = [(array([[1,2,3],[4,5,6]],dtype=int32), (2,3), int32), - (array([[1,2,3],[4,5,6]],dtype=float32), (2,3), float32), - ] - - def test_zeros_like(self): - for d, dshape, dtype in self.data: - dz = zeros_like(d) - assert dz.shape == dshape - assert dz.dtype.type == dtype - assert all(abs(dz) == 0) - - def test_empty_like(self): - for d, dshape, dtype in self.data: - dz = zeros_like(d) - assert dz.shape == dshape - assert dz.dtype.type == dtype - -class _TestCorrelate(TestCase): - def _setup(self, dt): - self.x = np.array([1, 2, 3, 4, 5], dtype=dt) - self.y = np.array([-1, -2, -3], dtype=dt) - self.z1 = np.array([ -3., -8., -14., -20., -26., -14., -5.], dtype=dt) - self.z2 = np.array([ -5., -14., -26., -20., -14., -8., -3.], dtype=dt) - - def test_float(self): - self._setup(np.float) - z = np.correlate(self.x, self.y, 'full', old_behavior=self.old_behavior) - assert_array_almost_equal(z, self.z1) - z = np.correlate(self.y, self.x, 'full', old_behavior=self.old_behavior) - assert_array_almost_equal(z, self.z2) - - def test_object(self): - self._setup(Decimal) - z = np.correlate(self.x, self.y, 'full', old_behavior=self.old_behavior) - assert_array_almost_equal(z, self.z1) - z = np.correlate(self.y, self.x, 'full', old_behavior=self.old_behavior) - assert_array_almost_equal(z, self.z2) - -class TestCorrelate(_TestCorrelate): - old_behavior = True - def _setup(self, dt): - # correlate uses an unconventional definition so that correlate(a, b) - # == correlate(b, a), so force the corresponding outputs to be the same - # as well - 
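The `ddof` checks in `TestStdVar` rest on the identity `var(x, ddof=d) == var(x) * n / (n - d)`: `ddof` only rescales the divisor from `n` to `n - d`. A small numeric sketch with the same `A = [1, -1, 1, -1]` data:

```python
import numpy as np

# For A = [1, -1, 1, -1] the population variance is exactly 1;
# ddof rescales the divisor from n to n - ddof.
A = np.array([1.0, -1.0, 1.0, -1.0])
n = len(A)
v0 = np.var(A)           # divisor n
v1 = np.var(A, ddof=1)   # divisor n - 1
```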
_TestCorrelate._setup(self, dt) - self.z2 = self.z1 - - @dec.deprecated() - def test_complex(self): - x = np.array([1, 2, 3, 4+1j], dtype=np.complex) - y = np.array([-1, -2j, 3+1j], dtype=np.complex) - r_z = np.array([3+1j, 6, 8-1j, 9+1j, -1-8j, -4-1j], dtype=np.complex) - z = np.correlate(x, y, 'full') - assert_array_almost_equal(z, r_z) - - @dec.deprecated() - def test_float(self): - _TestCorrelate.test_float(self) - - @dec.deprecated() - def test_object(self): - _TestCorrelate.test_object(self) - -class TestCorrelateNew(_TestCorrelate): - old_behavior = False - def test_complex(self): - x = np.array([1, 2, 3, 4+1j], dtype=np.complex) - y = np.array([-1, -2j, 3+1j], dtype=np.complex) - r_z = np.array([3-1j, 6, 8+1j, 11+5j, -5+8j, -4-1j], dtype=np.complex) - #z = np.acorrelate(x, y, 'full') - #assert_array_almost_equal(z, r_z) - - r_z = r_z[::-1].conjugate() - z = np.correlate(y, x, 'full', old_behavior=self.old_behavior) - assert_array_almost_equal(z, r_z) - -class TestArgwhere: - def test_2D(self): - x = np.arange(6).reshape((2, 3)) - assert_array_equal(np.argwhere(x > 1), - [[0, 2], - [1, 0], - [1, 1], - [1, 2]]) - - def test_list(self): - assert_equal(np.argwhere([4, 0, 2, 1, 3]), [[0], [2], [3], [4]]) - -class TestStringFunction: - def test_set_string_function(self): - a = np.array([1]) - np.set_string_function(lambda x: "FOO", repr=True) - assert_equal(repr(a), "FOO") - np.set_string_function(None, repr=True) - assert_equal(repr(a), "array([1])") - - np.set_string_function(lambda x: "FOO", repr=False) - assert_equal(str(a), "FOO") - np.set_string_function(None, repr=False) - assert_equal(str(a), "[1]") - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/core/tests/test_numerictypes.py b/pythonPackages/numpy/numpy/core/tests/test_numerictypes.py deleted file mode 100755 index 44878b117f..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_numerictypes.py +++ /dev/null @@ -1,372 +0,0 @@ -import sys -from 
numpy.testing import * -from numpy.compat import asbytes, asunicode -import numpy as np - -# This is the structure of the table used for plain objects: -# -# +-+-+-+ -# |x|y|z| -# +-+-+-+ - -# Structure of a plain array description: -Pdescr = [ - ('x', 'i4', (2,)), - ('y', 'f8', (2, 2)), - ('z', 'u1')] - -# A plain list of tuples with values for testing: -PbufferT = [ - # x y z - ([3,2], [[6.,4.],[6.,4.]], 8), - ([4,3], [[7.,5.],[7.,5.]], 9), - ] - - -# This is the structure of the table used for nested objects (DON'T PANIC!): -# -# +-+---------------------------------+-----+----------+-+-+ -# |x|Info |color|info |y|z| -# | +-----+--+----------------+----+--+ +----+-----+ | | -# | |value|y2|Info2 |name|z2| |Name|Value| | | -# | | | +----+-----+--+--+ | | | | | | | -# | | | |name|value|y3|z3| | | | | | | | -# +-+-----+--+----+-----+--+--+----+--+-----+----+-----+-+-+ -# - -# The corresponding nested array description: -Ndescr = [ - ('x', 'i4', (2,)), - ('Info', [ - ('value', 'c16'), - ('y2', 'f8'), - ('Info2', [ - ('name', 'S2'), - ('value', 'c16', (2,)), - ('y3', 'f8', (2,)), - ('z3', 'u4', (2,))]), - ('name', 'S2'), - ('z2', 'b1')]), - ('color', 'S2'), - ('info', [ - ('Name', 'U8'), - ('Value', 'c16')]), - ('y', 'f8', (2, 2)), - ('z', 'u1')] - -NbufferT = [ - # x Info color info y z - # value y2 Info2 name z2 Name Value - # name value y3 z3 - ([3,2], (6j, 6., (asbytes('nn'), [6j,4j], [6.,4.], [1,2]), asbytes('NN'), True), asbytes('cc'), (asunicode('NN'), 6j), [[6.,4.],[6.,4.]], 8), - ([4,3], (7j, 7., (asbytes('oo'), [7j,5j], [7.,5.], [2,1]), asbytes('OO'), False), asbytes('dd'), (asunicode('OO'), 7j), [[7.,5.],[7.,5.]], 9), - ] - - -byteorder = {'little':'<', 'big':'>'}[sys.byteorder] - -def normalize_descr(descr): - "Normalize a description adding the platform byteorder." 
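The plain description `Pdescr` above defines a structured dtype whose fields carry subarray shapes; `np.zeros` then yields zero-filled records. A minimal sketch of the field shapes this produces:

```python
import numpy as np

# Same plain description as Pdescr above: 'x' and 'y' are subarray
# fields, 'z' is a single unsigned byte.
Pdescr = [('x', 'i4', (2,)), ('y', 'f8', (2, 2)), ('z', 'u1')]
h = np.zeros((2,), dtype=Pdescr)
# Field shapes prepend the array shape to the subarray shape.
```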
- - out = [] - for item in descr: - dtype = item[1] - if isinstance(dtype, str): - if dtype[0] not in ['|','<','>']: - onebyte = dtype[1:] == "1" - if onebyte or dtype[0] in ['S', 'V', 'b']: - dtype = "|" + dtype - else: - dtype = byteorder + dtype - if len(item) > 2 and np.prod(item[2]) > 1: - nitem = (item[0], dtype, item[2]) - else: - nitem = (item[0], dtype) - out.append(nitem) - elif isinstance(item[1], list): - l = [] - for j in normalize_descr(item[1]): - l.append(j) - out.append((item[0], l)) - else: - raise ValueError("Expected a str or list and got %s" % \ - (type(item))) - return out - - -############################################################ -# Creation tests -############################################################ - -class create_zeros(object): - """Check the creation of heterogeneous arrays zero-valued""" - - def test_zeros0D(self): - """Check creation of 0-dimensional objects""" - h = np.zeros((), dtype=self._descr) - self.assert_(normalize_descr(self._descr) == h.dtype.descr) - self.assert_(h.dtype.fields['x'][0].name[:4] == 'void') - self.assert_(h.dtype.fields['x'][0].char == 'V') - self.assert_(h.dtype.fields['x'][0].type == np.void) - # A small check that data is ok - assert_equal(h['z'], np.zeros((), dtype='u1')) - - def test_zerosSD(self): - """Check creation of single-dimensional objects""" - h = np.zeros((2,), dtype=self._descr) - self.assert_(normalize_descr(self._descr) == h.dtype.descr) - self.assert_(h.dtype['y'].name[:4] == 'void') - self.assert_(h.dtype['y'].char == 'V') - self.assert_(h.dtype['y'].type == np.void) - # A small check that data is ok - assert_equal(h['z'], np.zeros((2,), dtype='u1')) - - def test_zerosMD(self): - """Check creation of multi-dimensional objects""" - h = np.zeros((2,3), dtype=self._descr) - self.assert_(normalize_descr(self._descr) == h.dtype.descr) - self.assert_(h.dtype['z'].name == 'uint8') - self.assert_(h.dtype['z'].char == 'B') - self.assert_(h.dtype['z'].type == np.uint8) - # A small check 
that data is ok - assert_equal(h['z'], np.zeros((2,3), dtype='u1')) - - -class test_create_zeros_plain(create_zeros, TestCase): - """Check the creation of heterogeneous arrays zero-valued (plain)""" - _descr = Pdescr - -class test_create_zeros_nested(create_zeros, TestCase): - """Check the creation of heterogeneous arrays zero-valued (nested)""" - _descr = Ndescr - - -class create_values(object): - """Check the creation of heterogeneous arrays with values""" - - def test_tuple(self): - """Check creation from tuples""" - h = np.array(self._buffer, dtype=self._descr) - self.assert_(normalize_descr(self._descr) == h.dtype.descr) - if self.multiple_rows: - self.assert_(h.shape == (2,)) - else: - self.assert_(h.shape == ()) - - def test_list_of_tuple(self): - """Check creation from list of tuples""" - h = np.array([self._buffer], dtype=self._descr) - self.assert_(normalize_descr(self._descr) == h.dtype.descr) - if self.multiple_rows: - self.assert_(h.shape == (1,2)) - else: - self.assert_(h.shape == (1,)) - - def test_list_of_list_of_tuple(self): - """Check creation from list of list of tuples""" - h = np.array([[self._buffer]], dtype=self._descr) - self.assert_(normalize_descr(self._descr) == h.dtype.descr) - if self.multiple_rows: - self.assert_(h.shape == (1,1,2)) - else: - self.assert_(h.shape == (1,1)) - - -class test_create_values_plain_single(create_values, TestCase): - """Check the creation of heterogeneous arrays (plain, single row)""" - _descr = Pdescr - multiple_rows = 0 - _buffer = PbufferT[0] - -class test_create_values_plain_multiple(create_values, TestCase): - """Check the creation of heterogeneous arrays (plain, multiple rows)""" - _descr = Pdescr - multiple_rows = 1 - _buffer = PbufferT - -class test_create_values_nested_single(create_values, TestCase): - """Check the creation of heterogeneous arrays (nested, single row)""" - _descr = Ndescr - multiple_rows = 0 - _buffer = NbufferT[0] - -class test_create_values_nested_multiple(create_values, TestCase): 
- """Check the creation of heterogeneous arrays (nested, multiple rows)""" - _descr = Ndescr - multiple_rows = 1 - _buffer = NbufferT - - -############################################################ -# Reading tests -############################################################ - -class read_values_plain(object): - """Check the reading of values in heterogeneous arrays (plain)""" - - def test_access_fields(self): - h = np.array(self._buffer, dtype=self._descr) - if not self.multiple_rows: - self.assert_(h.shape == ()) - assert_equal(h['x'], np.array(self._buffer[0], dtype='i4')) - assert_equal(h['y'], np.array(self._buffer[1], dtype='f8')) - assert_equal(h['z'], np.array(self._buffer[2], dtype='u1')) - else: - self.assert_(len(h) == 2) - assert_equal(h['x'], np.array([self._buffer[0][0], - self._buffer[1][0]], dtype='i4')) - assert_equal(h['y'], np.array([self._buffer[0][1], - self._buffer[1][1]], dtype='f8')) - assert_equal(h['z'], np.array([self._buffer[0][2], - self._buffer[1][2]], dtype='u1')) - - -class test_read_values_plain_single(read_values_plain, TestCase): - """Check the creation of heterogeneous arrays (plain, single row)""" - _descr = Pdescr - multiple_rows = 0 - _buffer = PbufferT[0] - -class test_read_values_plain_multiple(read_values_plain, TestCase): - """Check the values of heterogeneous arrays (plain, multiple rows)""" - _descr = Pdescr - multiple_rows = 1 - _buffer = PbufferT - -class read_values_nested(object): - """Check the reading of values in heterogeneous arrays (nested)""" - - - def test_access_top_fields(self): - """Check reading the top fields of a nested array""" - h = np.array(self._buffer, dtype=self._descr) - if not self.multiple_rows: - self.assert_(h.shape == ()) - assert_equal(h['x'], np.array(self._buffer[0], dtype='i4')) - assert_equal(h['y'], np.array(self._buffer[4], dtype='f8')) - assert_equal(h['z'], np.array(self._buffer[5], dtype='u1')) - else: - self.assert_(len(h) == 2) - assert_equal(h['x'], 
np.array([self._buffer[0][0], - self._buffer[1][0]], dtype='i4')) - assert_equal(h['y'], np.array([self._buffer[0][4], - self._buffer[1][4]], dtype='f8')) - assert_equal(h['z'], np.array([self._buffer[0][5], - self._buffer[1][5]], dtype='u1')) - - - def test_nested1_acessors(self): - """Check reading the nested fields of a nested array (1st level)""" - h = np.array(self._buffer, dtype=self._descr) - if not self.multiple_rows: - assert_equal(h['Info']['value'], - np.array(self._buffer[1][0], dtype='c16')) - assert_equal(h['Info']['y2'], - np.array(self._buffer[1][1], dtype='f8')) - assert_equal(h['info']['Name'], - np.array(self._buffer[3][0], dtype='U2')) - assert_equal(h['info']['Value'], - np.array(self._buffer[3][1], dtype='c16')) - else: - assert_equal(h['Info']['value'], - np.array([self._buffer[0][1][0], - self._buffer[1][1][0]], - dtype='c16')) - assert_equal(h['Info']['y2'], - np.array([self._buffer[0][1][1], - self._buffer[1][1][1]], - dtype='f8')) - assert_equal(h['info']['Name'], - np.array([self._buffer[0][3][0], - self._buffer[1][3][0]], - dtype='U2')) - assert_equal(h['info']['Value'], - np.array([self._buffer[0][3][1], - self._buffer[1][3][1]], - dtype='c16')) - - def test_nested2_acessors(self): - """Check reading the nested fields of a nested array (2nd level)""" - h = np.array(self._buffer, dtype=self._descr) - if not self.multiple_rows: - assert_equal(h['Info']['Info2']['value'], - np.array(self._buffer[1][2][1], dtype='c16')) - assert_equal(h['Info']['Info2']['z3'], - np.array(self._buffer[1][2][3], dtype='u4')) - else: - assert_equal(h['Info']['Info2']['value'], - np.array([self._buffer[0][1][2][1], - self._buffer[1][1][2][1]], - dtype='c16')) - assert_equal(h['Info']['Info2']['z3'], - np.array([self._buffer[0][1][2][3], - self._buffer[1][1][2][3]], - dtype='u4')) - - def test_nested1_descriptor(self): - """Check access nested descriptors of a nested array (1st level)""" - h = np.array(self._buffer, dtype=self._descr) - 
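The accessor tests above index nested structured dtypes one level at a time (`h['Info']['value']`, and so on). A stripped-down sketch with a two-field nesting (this small dtype is illustrative, not the full `Ndescr`):

```python
import numpy as np

# A nested structured dtype is indexed field by field, as the
# nested-accessor tests above do with h['Info']['value'].
dt = [('Info', [('value', 'c16'), ('y2', 'f8')]), ('z', 'u1')]
h = np.array([((6j, 6.0), 8)], dtype=dt)
```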
self.assert_(h.dtype['Info']['value'].name == 'complex128') - self.assert_(h.dtype['Info']['y2'].name == 'float64') - if sys.version_info[0] >= 3: - self.assert_(h.dtype['info']['Name'].name == 'str256') - else: - self.assert_(h.dtype['info']['Name'].name == 'unicode256') - self.assert_(h.dtype['info']['Value'].name == 'complex128') - - def test_nested2_descriptor(self): - """Check access nested descriptors of a nested array (2nd level)""" - h = np.array(self._buffer, dtype=self._descr) - self.assert_(h.dtype['Info']['Info2']['value'].name == 'void256') - self.assert_(h.dtype['Info']['Info2']['z3'].name == 'void64') - - -class test_read_values_nested_single(read_values_nested, TestCase): - """Check the values of heterogeneous arrays (nested, single row)""" - _descr = Ndescr - multiple_rows = False - _buffer = NbufferT[0] - -class test_read_values_nested_multiple(read_values_nested, TestCase): - """Check the values of heterogeneous arrays (nested, multiple rows)""" - _descr = Ndescr - multiple_rows = True - _buffer = NbufferT - -class TestEmptyField(TestCase): - def test_assign(self): - a = np.arange(10, dtype=np.float32) - a.dtype = [("int", "<0i4"),("float", "<2f4")] - assert(a['int'].shape == (5,0)) - assert(a['float'].shape == (5,2)) - -class TestCommonType(TestCase): - def test_scalar_loses1(self): - res = np.find_common_type(['f4','f4','i2'],['f8']) - assert(res == 'f4') - def test_scalar_loses2(self): - res = np.find_common_type(['f4','f4'],['i8']) - assert(res == 'f4') - def test_scalar_wins(self): - res = np.find_common_type(['f4','f4','i2'],['c8']) - assert(res == 'c8') - def test_scalar_wins2(self): - res = np.find_common_type(['u4','i4','i4'],['f4']) - assert(res == 'f8') - def test_scalar_wins3(self): # doesn't go up to 'f16' on purpose - res = np.find_common_type(['u8','i8','i8'],['f8']) - assert(res == 'f8') - -class TestMultipleFields(TestCase): - def setUp(self): - self.ary = np.array([(1,2,3,4),(5,6,7,8)], dtype='i4,f4,i2,c8') - def 
_bad_call(self): - return self.ary['f0','f1'] - def test_no_tuple(self): - self.assertRaises(ValueError, self._bad_call) - def test_return(self): - res = self.ary[['f0','f2']].tolist() - assert(res == [(1,3), (5,7)]) - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/core/tests/test_print.py b/pythonPackages/numpy/numpy/core/tests/test_print.py deleted file mode 100755 index d83f21cb20..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_print.py +++ /dev/null @@ -1,243 +0,0 @@ -import numpy as np -from numpy.testing import * -import nose - -import locale -import sys -from StringIO import StringIO - -_REF = {np.inf: 'inf', -np.inf: '-inf', np.nan: 'nan'} - - -def check_float_type(tp): - for x in [0, 1,-1, 1e20] : - assert_equal(str(tp(x)), str(float(x)), - err_msg='Failed str formatting for type %s' % tp) - - if tp(1e10).itemsize > 4: - assert_equal(str(tp(1e10)), str(float('1e10')), - err_msg='Failed str formatting for type %s' % tp) - else: - if sys.platform == 'win32' and sys.version_info[0] <= 2 and \ - sys.version_info[1] <= 5: - ref = '1e+010' - else: - ref = '1e+10' - assert_equal(str(tp(1e10)), ref, - err_msg='Failed str formatting for type %s' % tp) - -def test_float_types(): - """ Check formatting. - - This is only for the str function, and only for simple types. - The precision of np.float and np.longdouble aren't the same as the - python float precision. - - """ - for t in [np.float32, np.double, np.longdouble] : - yield check_float_type, t - -def check_nan_inf_float(tp): - for x in [np.inf, -np.inf, np.nan]: - assert_equal(str(tp(x)), _REF[x], - err_msg='Failed str formatting for type %s' % tp) - -def test_nan_inf_float(): - """ Check formatting of nan & inf. - - This is only for the str function, and only for simple types. - The precision of np.float and np.longdouble aren't the same as the - python float precision. 
- - """ - for t in [np.float32, np.double, np.longdouble] : - yield check_nan_inf_float, t - -def check_complex_type(tp): - for x in [0, 1,-1, 1e20] : - assert_equal(str(tp(x)), str(complex(x)), - err_msg='Failed str formatting for type %s' % tp) - assert_equal(str(tp(x*1j)), str(complex(x*1j)), - err_msg='Failed str formatting for type %s' % tp) - assert_equal(str(tp(x + x*1j)), str(complex(x + x*1j)), - err_msg='Failed str formatting for type %s' % tp) - - if tp(1e10).itemsize > 8: - assert_equal(str(tp(1e10)), str(complex(1e10)), - err_msg='Failed str formatting for type %s' % tp) - else: - if sys.platform == 'win32' and sys.version_info[0] <= 2 and \ - sys.version_info[1] <= 5: - ref = '(1e+010+0j)' - else: - ref = '(1e+10+0j)' - assert_equal(str(tp(1e10)), ref, - err_msg='Failed str formatting for type %s' % tp) - -def test_complex_types(): - """Check formatting of complex types. - - This is only for the str function, and only for simple types. - The precision of np.float and np.longdouble aren't the same as the - python float precision. 
- - """ - for t in [np.complex64, np.cdouble, np.clongdouble] : - yield check_complex_type, t - -def test_complex_inf_nan(): - """Check inf/nan formatting of complex types.""" - if sys.version_info >= (2, 6): - TESTS = { - complex(np.inf, 0): "(inf+0j)", - complex(0, np.inf): "inf*j", - complex(-np.inf, 0): "(-inf+0j)", - complex(0, -np.inf): "-inf*j", - complex(np.inf, 1): "(inf+1j)", - complex(1, np.inf): "(1+inf*j)", - complex(-np.inf, 1): "(-inf+1j)", - complex(1, -np.inf): "(1-inf*j)", - complex(np.nan, 0): "(nan+0j)", - complex(0, np.nan): "nan*j", - complex(-np.nan, 0): "(nan+0j)", - complex(0, -np.nan): "nan*j", - complex(np.nan, 1): "(nan+1j)", - complex(1, np.nan): "(1+nan*j)", - complex(-np.nan, 1): "(nan+1j)", - complex(1, -np.nan): "(1+nan*j)", - } - else: - TESTS = { - complex(np.inf, 0): "(inf+0j)", - complex(0, np.inf): "infj", - complex(-np.inf, 0): "(-inf+0j)", - complex(0, -np.inf): "-infj", - complex(np.inf, 1): "(inf+1j)", - complex(1, np.inf): "(1+infj)", - complex(-np.inf, 1): "(-inf+1j)", - complex(1, -np.inf): "(1-infj)", - complex(np.nan, 0): "(nan+0j)", - complex(0, np.nan): "nanj", - complex(-np.nan, 0): "(nan+0j)", - complex(0, -np.nan): "nanj", - complex(np.nan, 1): "(nan+1j)", - complex(1, np.nan): "(1+nanj)", - complex(-np.nan, 1): "(nan+1j)", - complex(1, -np.nan): "(1+nanj)", - } - for tp in [np.complex64, np.cdouble, np.clongdouble]: - for c, s in TESTS.items(): - yield _check_complex_inf_nan, c, s, tp - -def _check_complex_inf_nan(c, s, dtype): - assert_equal(str(dtype(c)), s) - -# print tests -def _test_redirected_print(x, tp, ref=None): - file = StringIO() - file_tp = StringIO() - stdout = sys.stdout - try: - sys.stdout = file_tp - print tp(x) - sys.stdout = file - if ref: - print ref - else: - print x - finally: - sys.stdout = stdout - - assert_equal(file.getvalue(), file_tp.getvalue(), - err_msg='print failed for type%s' % tp) - -def check_float_type_print(tp): - for x in [0, 1,-1, 1e20]: - _test_redirected_print(float(x), 
tp) - - for x in [np.inf, -np.inf, np.nan]: - _test_redirected_print(float(x), tp, _REF[x]) - - if tp(1e10).itemsize > 4: - _test_redirected_print(float(1e10), tp) - else: - if sys.platform == 'win32' and sys.version_info[0] <= 2 and \ - sys.version_info[1] <= 5: - ref = '1e+010' - else: - ref = '1e+10' - _test_redirected_print(float(1e10), tp, ref) - -def check_complex_type_print(tp): - # We do not create complex with inf/nan directly because the feature is - # missing in python < 2.6 - for x in [0, 1, -1, 1e20]: - _test_redirected_print(complex(x), tp) - - if tp(1e10).itemsize > 8: - _test_redirected_print(complex(1e10), tp) - else: - if sys.platform == 'win32' and sys.version_info[0] <= 2 and \ - sys.version_info[1] <= 5: - ref = '(1e+010+0j)' - else: - ref = '(1e+10+0j)' - _test_redirected_print(complex(1e10), tp, ref) - - _test_redirected_print(complex(np.inf, 1), tp, '(inf+1j)') - _test_redirected_print(complex(-np.inf, 1), tp, '(-inf+1j)') - _test_redirected_print(complex(-np.nan, 1), tp, '(nan+1j)') - -def test_float_type_print(): - """Check formatting when using print """ - for t in [np.float32, np.double, np.longdouble] : - yield check_float_type_print, t - -def test_complex_type_print(): - """Check formatting when using print """ - for t in [np.complex64, np.cdouble, np.clongdouble] : - yield check_complex_type_print, t - -# Locale tests: scalar types formatting should be independent of the locale -def in_foreign_locale(func): - """ - Swap LC_NUMERIC locale to one in which the decimal point is ',' and not '.' 
- If not possible, raise nose.SkipTest - - """ - if sys.platform == 'win32': - locales = ['FRENCH'] - else: - locales = ['fr_FR', 'fr_FR.UTF-8', 'fi_FI', 'fi_FI.UTF-8'] - - def wrapper(*args, **kwargs): - curloc = locale.getlocale(locale.LC_NUMERIC) - try: - for loc in locales: - try: - locale.setlocale(locale.LC_NUMERIC, loc) - break - except locale.Error: - pass - else: - raise nose.SkipTest("Skipping locale test, because " - "French locale not found") - return func(*args, **kwargs) - finally: - locale.setlocale(locale.LC_NUMERIC, locale=curloc) - return nose.tools.make_decorator(func)(wrapper) - -@in_foreign_locale -def test_locale_single(): - assert_equal(str(np.float32(1.2)), str(float(1.2))) - -@in_foreign_locale -def test_locale_double(): - assert_equal(str(np.double(1.2)), str(float(1.2))) - -@in_foreign_locale -def test_locale_longdouble(): - assert_equal(str(np.longdouble(1.2)), str(float(1.2))) - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/core/tests/test_records.py b/pythonPackages/numpy/numpy/core/tests/test_records.py deleted file mode 100755 index 5e159f3489..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_records.py +++ /dev/null @@ -1,149 +0,0 @@ -from os import path -import numpy as np -from numpy.testing import * -from numpy.compat import asbytes, asunicode - -class TestFromrecords(TestCase): - def test_fromrecords(self): - r = np.rec.fromrecords([[456, 'dbe', 1.2], [2, 'de', 1.3]], - names='col1,col2,col3') - assert_equal(r[0].item(), (456, 'dbe', 1.2)) - - def test_method_array(self): - r = np.rec.array(asbytes('abcdefg') * 100, formats='i2,a3,i4', shape=3, byteorder='big') - assert_equal(r[1].item(), (25444, asbytes('efg'), 1633837924)) - - def test_method_array2(self): - r = np.rec.array([(1, 11, 'a'), (2, 22, 'b'), (3, 33, 'c'), (4, 44, 'd'), (5, 55, 'ex'), - (6, 66, 'f'), (7, 77, 'g')], formats='u1,f4,a1') - assert_equal(r[1].item(), (2, 22.0, asbytes('b'))) - - def 
test_recarray_slices(self): - r = np.rec.array([(1, 11, 'a'), (2, 22, 'b'), (3, 33, 'c'), (4, 44, 'd'), (5, 55, 'ex'), - (6, 66, 'f'), (7, 77, 'g')], formats='u1,f4,a1') - assert_equal(r[1::2][1].item(), (4, 44.0, asbytes('d'))) - - def test_recarray_fromarrays(self): - x1 = np.array([1, 2, 3, 4]) - x2 = np.array(['a', 'dd', 'xyz', '12']) - x3 = np.array([1.1, 2, 3, 4]) - r = np.rec.fromarrays([x1, x2, x3], names='a,b,c') - assert_equal(r[1].item(), (2, 'dd', 2.0)) - x1[1] = 34 - assert_equal(r.a, np.array([1, 2, 3, 4])) - - def test_recarray_fromfile(self): - data_dir = path.join(path.dirname(__file__), 'data') - filename = path.join(data_dir, 'recarray_from_file.fits') - fd = open(filename, 'rb') - fd.seek(2880 * 2) - r = np.rec.fromfile(fd, formats='f8,i4,a5', shape=3, byteorder='big') - - def test_recarray_from_obj(self): - count = 10 - a = np.zeros(count, dtype='O') - b = np.zeros(count, dtype='f8') - c = np.zeros(count, dtype='f8') - for i in range(len(a)): - a[i] = range(1, 10) - - mine = np.rec.fromarrays([a, b, c], names='date,data1,data2') - for i in range(len(a)): - assert (mine.date[i] == range(1, 10)) - assert (mine.data1[i] == 0.0) - assert (mine.data2[i] == 0.0) - - def test_recarray_from_repr(self): - x = np.rec.array([ (1, 2)], dtype=[('a', np.int8), ('b', np.int8)]) - y = eval("np." 
+ repr(x)) - assert isinstance(y, np.recarray) - assert_equal(y, x) - - def test_recarray_from_names(self): - ra = np.rec.array([ - (1, 'abc', 3.7000002861022949, 0), - (2, 'xy', 6.6999998092651367, 1), - (0, ' ', 0.40000000596046448, 0)], - names='c1, c2, c3, c4') - pa = np.rec.fromrecords([ - (1, 'abc', 3.7000002861022949, 0), - (2, 'xy', 6.6999998092651367, 1), - (0, ' ', 0.40000000596046448, 0)], - names='c1, c2, c3, c4') - assert ra.dtype == pa.dtype - assert ra.shape == pa.shape - for k in xrange(len(ra)): - assert ra[k].item() == pa[k].item() - - def test_recarray_conflict_fields(self): - ra = np.rec.array([(1, 'abc', 2.3), (2, 'xyz', 4.2), - (3, 'wrs', 1.3)], - names='field, shape, mean') - ra.mean = [1.1, 2.2, 3.3] - assert_array_almost_equal(ra['mean'], [1.1, 2.2, 3.3]) - assert type(ra.mean) is type(ra.var) - ra.shape = (1, 3) - assert ra.shape == (1, 3) - ra.shape = ['A', 'B', 'C'] - assert_array_equal(ra['shape'], [['A', 'B', 'C']]) - ra.field = 5 - assert_array_equal(ra['field'], [[5, 5, 5]]) - assert callable(ra.field) - - def test_fromrecords_with_explicit_dtype(self): - a = np.rec.fromrecords([(1, 'a'), (2, 'bbb')], - dtype=[('a', int), ('b', np.object)]) - assert_equal(a.a, [1, 2]) - assert_equal(a[0].a, 1) - assert_equal(a.b, ['a', 'bbb']) - assert_equal(a[-1].b, 'bbb') - # - ndtype = np.dtype([('a', int), ('b', np.object)]) - a = np.rec.fromrecords([(1, 'a'), (2, 'bbb')], dtype=ndtype) - assert_equal(a.a, [1, 2]) - assert_equal(a[0].a, 1) - assert_equal(a.b, ['a', 'bbb']) - assert_equal(a[-1].b, 'bbb') - - -class TestRecord(TestCase): - def setUp(self): - self.data = np.rec.fromrecords([(1, 2, 3), (4, 5, 6)], - dtype=[("col1", "= 3: - import io - StringIO = io.BytesIO - -rlevel = 1 - -class TestRegression(TestCase): - def test_invalid_round(self,level=rlevel): - """Ticket #3""" - v = 4.7599999999999998 - assert_array_equal(np.array([v]),np.array(v)) - - def test_mem_empty(self,level=rlevel): - """Ticket #7""" - 
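The record-array tests above lean on `np.rec.fromrecords`, which builds an array whose fields are reachable both by key and as attributes. A minimal sketch using the same first record as `test_fromrecords`:

```python
import numpy as np

# np.rec.fromrecords builds a recarray; fields are accessible as
# attributes (r.col1) as well as by key (r['col1']).
r = np.rec.fromrecords([(456, 'dbe', 1.2), (2, 'de', 1.3)],
                       names='col1,col2,col3')
```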
np.empty((1,),dtype=[('x',np.int64)]) - - def test_pickle_transposed(self,level=rlevel): - """Ticket #16""" - a = np.transpose(np.array([[2,9],[7,0],[3,8]])) - f = StringIO() - pickle.dump(a,f) - f.seek(0) - b = pickle.load(f) - f.close() - assert_array_equal(a,b) - - def test_typeNA(self,level=rlevel): - """Ticket #31""" - assert_equal(np.typeNA[np.int64],'Int64') - assert_equal(np.typeNA[np.uint64],'UInt64') - - def test_dtype_names(self,level=rlevel): - """Ticket #35""" - dt = np.dtype([(('name','label'),np.int32,3)]) - - def test_reduce(self,level=rlevel): - """Ticket #40""" - assert_almost_equal(np.add.reduce([1.,.5],dtype=None), 1.5) - - def test_zeros_order(self,level=rlevel): - """Ticket #43""" - np.zeros([3], int, 'C') - np.zeros([3], order='C') - np.zeros([3], int, order='C') - - def test_sort_bigendian(self,level=rlevel): - """Ticket #47""" - a = np.linspace(0, 10, 11) - c = a.astype(np.dtype('= 3, - "numpy.intp('0xff', 16) not supported on Py3, " - "as it does not inherit from Python int") - def test_intp(self,level=rlevel): - """Ticket #99""" - i_width = np.int_(0).nbytes*2 - 1 - np.intp('0x' + 'f'*i_width,16) - self.assertRaises(OverflowError,np.intp,'0x' + 'f'*(i_width+1),16) - self.assertRaises(ValueError,np.intp,'0x1',32) - assert_equal(255,np.intp('0xFF',16)) - assert_equal(1024,np.intp(1024)) - - def test_endian_bool_indexing(self,level=rlevel): - """Ticket #105""" - a = np.arange(10.,dtype='>f8') - b = np.arange(10.,dtype='2) & (a<6)) - xb = np.where((b>2) & (b<6)) - ya = ((a>2) & (a<6)) - yb = ((b>2) & (b<6)) - assert_array_almost_equal(xa,ya.nonzero()) - assert_array_almost_equal(xb,yb.nonzero()) - assert(np.all(a[ya] > 0.5)) - assert(np.all(b[yb] > 0.5)) - - def test_mem_dot(self,level=rlevel): - """Ticket #106""" - x = np.random.randn(0,1) - y = np.random.randn(10,1) - z = np.dot(x, np.transpose(y)) - - def test_arange_endian(self,level=rlevel): - """Ticket #111""" - ref = np.arange(10) - x = np.arange(10,dtype='f8') - 
assert_array_equal(ref,x) - -# Longfloat support is not consistent enough across -# platforms for this test to be meaningful. -# def test_longfloat_repr(self,level=rlevel): -# """Ticket #112""" -# if np.longfloat(0).itemsize > 8: -# a = np.exp(np.array([1000],dtype=np.longfloat)) -# assert(str(a)[1:9] == str(a[0])[:8]) - - def test_argmax(self,level=rlevel): - """Ticket #119""" - a = np.random.normal(0,1,(4,5,6,7,8)) - for i in xrange(a.ndim): - aargmax = a.argmax(i) - - def test_mem_divmod(self,level=rlevel): - """Ticket #126""" - for i in range(10): - divmod(np.array([i])[0],10) - - - def test_hstack_invalid_dims(self,level=rlevel): - """Ticket #128""" - x = np.arange(9).reshape((3,3)) - y = np.array([0,0,0]) - self.assertRaises(ValueError,np.hstack,(x,y)) - - def test_squeeze_type(self,level=rlevel): - """Ticket #133""" - a = np.array([3]) - b = np.array(3) - assert(type(a.squeeze()) is np.ndarray) - assert(type(b.squeeze()) is np.ndarray) - - def test_add_identity(self,level=rlevel): - """Ticket #143""" - assert_equal(0,np.add.identity) - - def test_binary_repr_0(self,level=rlevel): - """Ticket #151""" - assert_equal('0',np.binary_repr(0)) - - def test_rec_iterate(self,level=rlevel): - """Ticket #160""" - descr = np.dtype([('i',int),('f',float),('s','|S3')]) - x = np.rec.array([(1,1.1,'1.0'), - (2,2.2,'2.0')],dtype=descr) - x[0].tolist() - [i for i in x[0]] - - def test_unicode_string_comparison(self,level=rlevel): - """Ticket #190""" - a = np.array('hello',np.unicode_) - b = np.array('world') - a == b - - def test_tostring_FORTRANORDER_discontiguous(self,level=rlevel): - """Fix in r2836""" - # Create discontiguous Fortran-ordered array - x = np.array(np.random.rand(3,3),order='F')[:,:2] - assert_array_almost_equal(x.ravel(),np.fromstring(x.tostring())) - - def test_flat_assignment(self,level=rlevel): - """Correct behaviour of ticket #194""" - x = np.empty((3,1)) - x.flat = np.arange(3) - assert_array_almost_equal(x,[[0],[1],[2]]) - x.flat = 
np.arange(3,dtype=float) - assert_array_almost_equal(x,[[0],[1],[2]]) - - def test_broadcast_flat_assignment(self,level=rlevel): - """Ticket #194""" - x = np.empty((3,1)) - def bfa(): x[:] = np.arange(3) - def bfb(): x[:] = np.arange(3,dtype=float) - self.assertRaises(ValueError, bfa) - self.assertRaises(ValueError, bfb) - - def test_unpickle_dtype_with_object(self,level=rlevel): - """Implemented in r2840""" - dt = np.dtype([('x',int),('y',np.object_),('z','O')]) - f = StringIO() - pickle.dump(dt,f) - f.seek(0) - dt_ = pickle.load(f) - f.close() - assert_equal(dt,dt_) - - def test_mem_array_creation_invalid_specification(self,level=rlevel): - """Ticket #196""" - dt = np.dtype([('x',int),('y',np.object_)]) - # Wrong way - self.assertRaises(ValueError, np.array, [1,'object'], dt) - # Correct way - np.array([(1,'object')],dt) - - def test_recarray_single_element(self,level=rlevel): - """Ticket #202""" - a = np.array([1,2,3],dtype=np.int32) - b = a.copy() - r = np.rec.array(a,shape=1,formats=['3i4'],names=['d']) - assert_array_equal(a,b) - assert_equal(a,r[0][0]) - - def test_zero_sized_array_indexing(self,level=rlevel): - """Ticket #205""" - tmp = np.array([]) - def index_tmp(): tmp[np.array(10)] - self.assertRaises(IndexError, index_tmp) - - def test_chararray_rstrip(self,level=rlevel): - """Ticket #222""" - x = np.chararray((1,),5) - x[0] = asbytes('a ') - x = x.rstrip() - assert_equal(x[0], asbytes('a')) - - def test_object_array_shape(self,level=rlevel): - """Ticket #239""" - assert_equal(np.array([[1,2],3,4],dtype=object).shape, (3,)) - assert_equal(np.array([[1,2],[3,4]],dtype=object).shape, (2,2)) - assert_equal(np.array([(1,2),(3,4)],dtype=object).shape, (2,2)) - assert_equal(np.array([],dtype=object).shape, (0,)) - assert_equal(np.array([[],[],[]],dtype=object).shape, (3,0)) - assert_equal(np.array([[3,4],[5,6],None],dtype=object).shape, (3,)) - - def test_mem_around(self,level=rlevel): - """Ticket #243""" - x = np.zeros((1,)) - y = [0] - decimal = 6 - 
np.around(abs(x-y),decimal) <= 10.0**(-decimal) - - def test_character_array_strip(self,level=rlevel): - """Ticket #246""" - x = np.char.array(("x","x ","x ")) - for c in x: assert_equal(c,"x") - - def test_lexsort(self,level=rlevel): - """Lexsort memory error""" - v = np.array([1,2,3,4,5,6,7,8,9,10]) - assert_equal(np.lexsort(v),0) - - def test_pickle_dtype(self,level=rlevel): - """Ticket #251""" - import pickle - pickle.dumps(np.float) - - def test_swap_real(self, level=rlevel): - """Ticket #265""" - assert_equal(np.arange(4,dtype='>c8').imag.max(),0.0) - assert_equal(np.arange(4,dtype=' 1 and x['two'] > 2) - - def test_method_args(self, level=rlevel): - # Make sure methods and functions have same default axis - # keyword and arguments - funcs1= ['argmax', 'argmin', 'sum', ('product', 'prod'), - ('sometrue', 'any'), - ('alltrue', 'all'), 'cumsum', ('cumproduct', 'cumprod'), - 'ptp', 'cumprod', 'prod', 'std', 'var', 'mean', - 'round', 'min', 'max', 'argsort', 'sort'] - funcs2 = ['compress', 'take', 'repeat'] - - for func in funcs1: - arr = np.random.rand(8,7) - arr2 = arr.copy() - if isinstance(func, tuple): - func_meth = func[1] - func = func[0] - else: - func_meth = func - res1 = getattr(arr, func_meth)() - res2 = getattr(np, func)(arr2) - if res1 is None: - assert abs(arr-res2).max() < 1e-8, func - else: - assert abs(res1-res2).max() < 1e-8, func - - for func in funcs2: - arr1 = np.random.rand(8,7) - arr2 = np.random.rand(8,7) - res1 = None - if func == 'compress': - arr1 = arr1.ravel() - res1 = getattr(arr2, func)(arr1) - else: - arr2 = (15*arr2).astype(int).ravel() - if res1 is None: - res1 = getattr(arr1, func)(arr2) - res2 = getattr(np, func)(arr1, arr2) - assert abs(res1-res2).max() < 1e-8, func - - def test_mem_lexsort_strings(self, level=rlevel): - """Ticket #298""" - lst = ['abc','cde','fgh'] - np.lexsort((lst,)) - - def test_fancy_index(self, level=rlevel): - """Ticket #302""" - x = np.array([1,2])[np.array([0])] - assert_equal(x.shape,(1,)) - - def 
test_recarray_copy(self, level=rlevel): - """Ticket #312""" - dt = [('x',np.int16),('y',np.float64)] - ra = np.array([(1,2.3)], dtype=dt) - rb = np.rec.array(ra, dtype=dt) - rb['x'] = 2. - assert ra['x'] != rb['x'] - - def test_rec_fromarray(self, level=rlevel): - """Ticket #322""" - x1 = np.array([[1,2],[3,4],[5,6]]) - x2 = np.array(['a','dd','xyz']) - x3 = np.array([1.1,2,3]) - np.rec.fromarrays([x1,x2,x3], formats="(2,)i4,a3,f8") - - def test_object_array_assign(self, level=rlevel): - x = np.empty((2,2),object) - x.flat[2] = (1,2,3) - assert_equal(x.flat[2],(1,2,3)) - - def test_ndmin_float64(self, level=rlevel): - """Ticket #324""" - x = np.array([1,2,3],dtype=np.float64) - assert_equal(np.array(x,dtype=np.float32,ndmin=2).ndim,2) - assert_equal(np.array(x,dtype=np.float64,ndmin=2).ndim,2) - - def test_mem_axis_minimization(self, level=rlevel): - """Ticket #327""" - data = np.arange(5) - data = np.add.outer(data,data) - - def test_mem_float_imag(self, level=rlevel): - """Ticket #330""" - np.float64(1.0).imag - - def test_dtype_tuple(self, level=rlevel): - """Ticket #334""" - assert np.dtype('i4') == np.dtype(('i4',())) - - def test_dtype_posttuple(self, level=rlevel): - """Ticket #335""" - np.dtype([('col1', '()i4')]) - - def test_numeric_carray_compare(self, level=rlevel): - """Ticket #341""" - assert_equal(np.array(['X'], 'c'), asbytes('X')) - - def test_string_array_size(self, level=rlevel): - """Ticket #342""" - self.assertRaises(ValueError, - np.array,[['X'],['X','X','X']],'|S1') - - def test_dtype_repr(self, level=rlevel): - """Ticket #344""" - dt1=np.dtype(('uint32', 2)) - dt2=np.dtype(('uint32', (2,))) - assert_equal(dt1.__repr__(), dt2.__repr__()) - - def test_reshape_order(self, level=rlevel): - """Make sure reshape order works.""" - a = np.arange(6).reshape(2,3,order='F') - assert_equal(a,[[0,2,4],[1,3,5]]) - a = np.array([[1,2],[3,4],[5,6],[7,8]]) - b = a[:,1] - assert_equal(b.reshape(2,2,order='F'), [[2,6],[4,8]]) - - def test_repeat_discont(self, 
level=rlevel): - """Ticket #352""" - a = np.arange(12).reshape(4,3)[:,2] - assert_equal(a.repeat(3), [2,2,2,5,5,5,8,8,8,11,11,11]) - - def test_array_index(self, level=rlevel): - """Make sure optimization is not called in this case.""" - a = np.array([1,2,3]) - a2 = np.array([[1,2,3]]) - assert_equal(a[np.where(a==3)], a2[np.where(a2==3)]) - - def test_object_argmax(self, level=rlevel): - a = np.array([1,2,3],dtype=object) - assert a.argmax() == 2 - - def test_recarray_fields(self, level=rlevel): - """Ticket #372""" - dt0 = np.dtype([('f0','i4'),('f1','i4')]) - dt1 = np.dtype([('f0','i8'),('f1','i8')]) - for a in [np.array([(1,2),(3,4)],"i4,i4"), - np.rec.array([(1,2),(3,4)],"i4,i4"), - np.rec.array([(1,2),(3,4)]), - np.rec.fromarrays([(1,2),(3,4)],"i4,i4"), - np.rec.fromarrays([(1,2),(3,4)])]: - assert(a.dtype in [dt0,dt1]) - - def test_random_shuffle(self, level=rlevel): - """Ticket #374""" - a = np.arange(5).reshape((5,1)) - b = a.copy() - np.random.shuffle(b) - assert_equal(np.sort(b, axis=0),a) - - def test_refcount_vdot(self, level=rlevel): - """Changeset #3443""" - _assert_valid_refcount(np.vdot) - - def test_startswith(self, level=rlevel): - ca = np.char.array(['Hi','There']) - assert_equal(ca.startswith('H'),[True,False]) - - def test_noncommutative_reduce_accumulate(self, level=rlevel): - """Ticket #413""" - tosubtract = np.arange(5) - todivide = np.array([2.0, 0.5, 0.25]) - assert_equal(np.subtract.reduce(tosubtract), -10) - assert_equal(np.divide.reduce(todivide), 16.0) - assert_array_equal(np.subtract.accumulate(tosubtract), - np.array([0, -1, -3, -6, -10])) - assert_array_equal(np.divide.accumulate(todivide), - np.array([2., 4., 16.])) - - def test_convolve_empty(self, level=rlevel): - """Convolve should raise an error for empty input array.""" - self.assertRaises(ValueError,np.convolve,[],[1]) - self.assertRaises(ValueError,np.convolve,[1],[]) - - def test_multidim_byteswap(self, level=rlevel): - """Ticket #449""" - r=np.array([(1,(0,1,2))], 
dtype="i2,3i2") - assert_array_equal(r.byteswap(), - np.array([(256,(0,256,512))],r.dtype)) - - def test_string_NULL(self, level=rlevel): - """Changeset 3557""" - assert_equal(np.array("a\x00\x0b\x0c\x00").item(), - 'a\x00\x0b\x0c') - - def test_junk_in_string_fields_of_recarray(self, level=rlevel): - """Ticket #483""" - r = np.array([[asbytes('abc')]], dtype=[('var1', '|S20')]) - assert asbytes(r['var1'][0][0]) == asbytes('abc') - - def test_take_output(self, level=rlevel): - """Ensure that 'take' honours output parameter.""" - x = np.arange(12).reshape((3,4)) - a = np.take(x,[0,2],axis=1) - b = np.zeros_like(a) - np.take(x,[0,2],axis=1,out=b) - assert_array_equal(a,b) - - def test_array_str_64bit(self, level=rlevel): - """Ticket #501""" - s = np.array([1, np.nan],dtype=np.float64) - errstate = np.seterr(all='raise') - try: - sstr = np.array_str(s) - finally: - np.seterr(**errstate) - - def test_frompyfunc_endian(self, level=rlevel): - """Ticket #503""" - from math import radians - uradians = np.frompyfunc(radians, 1, 1) - big_endian = np.array([83.4, 83.5], dtype='>f8') - little_endian = np.array([83.4, 83.5], dtype='f4','0)]=1.0 - self.assertRaises(ValueError,ia,x,s) - - def test_mem_scalar_indexing(self, level=rlevel): - """Ticket #603""" - x = np.array([0],dtype=float) - index = np.array(0,dtype=np.int32) - x[index] - - def test_binary_repr_0_width(self, level=rlevel): - assert_equal(np.binary_repr(0,width=3),'000') - - def test_fromstring(self, level=rlevel): - assert_equal(np.fromstring("12:09:09", dtype=int, sep=":"), - [12,9,9]) - - def test_searchsorted_variable_length(self, level=rlevel): - x = np.array(['a','aa','b']) - y = np.array(['d','e']) - assert_equal(x.searchsorted(y), [3,3]) - - def test_string_argsort_with_zeros(self, level=rlevel): - """Check argsort for strings containing zeros.""" - x = np.fromstring("\x00\x02\x00\x01", dtype="|S2") - assert_array_equal(x.argsort(kind='m'), np.array([1,0])) - assert_array_equal(x.argsort(kind='q'), 
np.array([1,0])) - - def test_string_sort_with_zeros(self, level=rlevel): - """Check sort for strings containing zeros.""" - x = np.fromstring("\x00\x02\x00\x01", dtype="|S2") - y = np.fromstring("\x00\x01\x00\x02", dtype="|S2") - assert_array_equal(np.sort(x, kind="q"), y) - - def test_copy_detection_zero_dim(self, level=rlevel): - """Ticket #658""" - np.indices((0,3,4)).T.reshape(-1,3) - - def test_flat_byteorder(self, level=rlevel): - """Ticket #657""" - x = np.arange(10) - assert_array_equal(x.astype('>i4'),x.astype('i4').flat[:],x.astype('i4')): - x = np.array([-1,0,1],dtype=dt) - assert_equal(x.flat[0].dtype, x[0].dtype) - - def test_copy_detection_corner_case(self, level=rlevel): - """Ticket #658""" - np.indices((0,3,4)).T.reshape(-1,3) - - def test_copy_detection_corner_case2(self, level=rlevel): - """Ticket #771: strides are not set correctly when reshaping 0-sized - arrays""" - b = np.indices((0,3,4)).T.reshape(-1,3) - assert_equal(b.strides, (3 * b.itemsize, b.itemsize)) - - def test_object_array_refcounting(self, level=rlevel): - """Ticket #633""" - if not hasattr(sys, 'getrefcount'): - return - - # NB. 
this is probably CPython-specific - - cnt = sys.getrefcount - - a = object() - b = object() - c = object() - - cnt0_a = cnt(a) - cnt0_b = cnt(b) - cnt0_c = cnt(c) - - # -- 0d -> 1d broadcasted slice assignment - - arr = np.zeros(5, dtype=np.object_) - - arr[:] = a - assert cnt(a) == cnt0_a + 5 - - arr[:] = b - assert cnt(a) == cnt0_a - assert cnt(b) == cnt0_b + 5 - - arr[:2] = c - assert cnt(b) == cnt0_b + 3 - assert cnt(c) == cnt0_c + 2 - - del arr - - # -- 1d -> 2d broadcasted slice assignment - - arr = np.zeros((5, 2), dtype=np.object_) - arr0 = np.zeros(2, dtype=np.object_) - - arr0[0] = a - assert cnt(a) == cnt0_a + 1 - arr0[1] = b - assert cnt(b) == cnt0_b + 1 - - arr[:,:] = arr0 - assert cnt(a) == cnt0_a + 6 - assert cnt(b) == cnt0_b + 6 - - arr[:,0] = None - assert cnt(a) == cnt0_a + 1 - - del arr, arr0 - - # -- 2d copying + flattening - - arr = np.zeros((5, 2), dtype=np.object_) - - arr[:,0] = a - arr[:,1] = b - assert cnt(a) == cnt0_a + 5 - assert cnt(b) == cnt0_b + 5 - - arr2 = arr.copy() - assert cnt(a) == cnt0_a + 10 - assert cnt(b) == cnt0_b + 10 - - arr2 = arr[:,0].copy() - assert cnt(a) == cnt0_a + 10 - assert cnt(b) == cnt0_b + 5 - - arr2 = arr.flatten() - assert cnt(a) == cnt0_a + 10 - assert cnt(b) == cnt0_b + 10 - - del arr, arr2 - - # -- concatenate, repeat, take, choose - - arr1 = np.zeros((5, 1), dtype=np.object_) - arr2 = np.zeros((5, 1), dtype=np.object_) - - arr1[...] = a - arr2[...] 
= b - assert cnt(a) == cnt0_a + 5 - assert cnt(b) == cnt0_b + 5 - - arr3 = np.concatenate((arr1, arr2)) - assert cnt(a) == cnt0_a + 5 + 5 - assert cnt(b) == cnt0_b + 5 + 5 - - arr3 = arr1.repeat(3, axis=0) - assert cnt(a) == cnt0_a + 5 + 3*5 - - arr3 = arr1.take([1,2,3], axis=0) - assert cnt(a) == cnt0_a + 5 + 3 - - x = np.array([[0],[1],[0],[1],[1]], int) - arr3 = x.choose(arr1, arr2) - assert cnt(a) == cnt0_a + 5 + 2 - assert cnt(b) == cnt0_b + 5 + 3 - - def test_mem_custom_float_to_array(self, level=rlevel): - """Ticket 702""" - class MyFloat: - def __float__(self): - return 1.0 - - tmp = np.atleast_1d([MyFloat()]) - tmp2 = tmp.astype(float) - - def test_object_array_refcount_self_assign(self, level=rlevel): - """Ticket #711""" - class VictimObject(object): - deleted = False - def __del__(self): - self.deleted = True - d = VictimObject() - arr = np.zeros(5, dtype=np.object_) - arr[:] = d - del d - arr[:] = arr # refcount of 'd' might hit zero here - assert not arr[0].deleted - arr[:] = arr # trying to induce a segfault by doing it again... 
- assert not arr[0].deleted - - def test_mem_fromiter_invalid_dtype_string(self, level=rlevel): - x = [1,2,3] - self.assertRaises(ValueError, - np.fromiter, [xi for xi in x], dtype='S') - - def test_reduce_big_object_array(self, level=rlevel): - """Ticket #713""" - oldsize = np.setbufsize(10*16) - a = np.array([None]*161, object) - assert not np.any(a) - np.setbufsize(oldsize) - - def test_mem_0d_array_index(self, level=rlevel): - """Ticket #714""" - np.zeros(10)[np.array(0)] - - def test_floats_from_string(self, level=rlevel): - """Ticket #640, floats from string""" - fsingle = np.single('1.234') - fdouble = np.double('1.234') - flongdouble = np.longdouble('1.234') - assert_almost_equal(fsingle, 1.234) - assert_almost_equal(fdouble, 1.234) - assert_almost_equal(flongdouble, 1.234) - - def test_complex_dtype_printing(self, level=rlevel): - dt = np.dtype([('top', [('tiles', ('>f4', (64, 64)), (1,)), - ('rtile', '>f4', (64, 36))], (3,)), - ('bottom', [('bleft', ('>f4', (8, 64)), (1,)), - ('bright', '>f4', (8, 36))])]) - assert_equal(str(dt), - "[('top', [('tiles', ('>f4', (64, 64)), (1,)), " - "('rtile', '>f4', (64, 36))], (3,)), " - "('bottom', [('bleft', ('>f4', (8, 64)), (1,)), " - "('bright', '>f4', (8, 36))])]") - - def test_nonnative_endian_fill(self, level=rlevel): - """ Non-native endian arrays were incorrectly filled with scalars before - r5034. - """ - if sys.byteorder == 'little': - dtype = np.dtype('>i4') - else: - dtype = np.dtype('= 3: - xp = pickle.load(open(filename, 'rb'), encoding='latin1') - else: - xp = pickle.load(open(filename)) - xpd = xp.astype(np.float64) - assert (xp.__array_interface__['data'][0] != - xpd.__array_interface__['data'][0]) - - def test_compress_small_type(self, level=rlevel): - """Ticket #789, changeset 5217. 
- """ - # compress with out argument segfaulted if cannot cast safely - import numpy as np - a = np.array([[1, 2], [3, 4]]) - b = np.zeros((2, 1), dtype = np.single) - try: - a.compress([True, False], axis = 1, out = b) - raise AssertionError("compress with an out which cannot be " \ - "safely casted should not return "\ - "successfully") - except TypeError: - pass - - def test_attributes(self, level=rlevel): - """Ticket #791 - """ - import numpy as np - class TestArray(np.ndarray): - def __new__(cls, data, info): - result = np.array(data) - result = result.view(cls) - result.info = info - return result - def __array_finalize__(self, obj): - self.info = getattr(obj, 'info', '') - dat = TestArray([[1,2,3,4],[5,6,7,8]],'jubba') - assert dat.info == 'jubba' - dat.resize((4,2)) - assert dat.info == 'jubba' - dat.sort() - assert dat.info == 'jubba' - dat.fill(2) - assert dat.info == 'jubba' - dat.put([2,3,4],[6,3,4]) - assert dat.info == 'jubba' - dat.setfield(4, np.int32,0) - assert dat.info == 'jubba' - dat.setflags() - assert dat.info == 'jubba' - assert dat.all(1).info == 'jubba' - assert dat.any(1).info == 'jubba' - assert dat.argmax(1).info == 'jubba' - assert dat.argmin(1).info == 'jubba' - assert dat.argsort(1).info == 'jubba' - assert dat.astype(TestArray).info == 'jubba' - assert dat.byteswap().info == 'jubba' - assert dat.clip(2,7).info == 'jubba' - assert dat.compress([0,1,1]).info == 'jubba' - assert dat.conj().info == 'jubba' - assert dat.conjugate().info == 'jubba' - assert dat.copy().info == 'jubba' - dat2 = TestArray([2, 3, 1, 0],'jubba') - choices = [[0, 1, 2, 3], [10, 11, 12, 13], - [20, 21, 22, 23], [30, 31, 32, 33]] - assert dat2.choose(choices).info == 'jubba' - assert dat.cumprod(1).info == 'jubba' - assert dat.cumsum(1).info == 'jubba' - assert dat.diagonal().info == 'jubba' - assert dat.flatten().info == 'jubba' - assert dat.getfield(np.int32,0).info == 'jubba' - assert dat.imag.info == 'jubba' - assert dat.max(1).info == 'jubba' - assert 
dat.mean(1).info == 'jubba' - assert dat.min(1).info == 'jubba' - assert dat.newbyteorder().info == 'jubba' - assert dat.nonzero()[0].info == 'jubba' - assert dat.nonzero()[1].info == 'jubba' - assert dat.prod(1).info == 'jubba' - assert dat.ptp(1).info == 'jubba' - assert dat.ravel().info == 'jubba' - assert dat.real.info == 'jubba' - assert dat.repeat(2).info == 'jubba' - assert dat.reshape((2,4)).info == 'jubba' - assert dat.round().info == 'jubba' - assert dat.squeeze().info == 'jubba' - assert dat.std(1).info == 'jubba' - assert dat.sum(1).info == 'jubba' - assert dat.swapaxes(0,1).info == 'jubba' - assert dat.take([2,3,5]).info == 'jubba' - assert dat.transpose().info == 'jubba' - assert dat.T.info == 'jubba' - assert dat.var(1).info == 'jubba' - assert dat.view(TestArray).info == 'jubba' - - def test_recarray_tolist(self, level=rlevel): - """Ticket #793, changeset r5215 - """ - # Comparisons fail for NaN, so we can't use random memory - # for the test. - buf = np.zeros(40, dtype=np.int8) - a = np.recarray(2, formats="i4,f8,f8", names="id,x,y", buf=buf) - b = a.tolist() - assert( a[0].tolist() == b[0]) - assert( a[1].tolist() == b[1]) - - def test_char_array_creation(self, level=rlevel): - a = np.array('123', dtype='c') - b = np.array(asbytes_nested(['1','2','3'])) - assert_equal(a,b) - - def test_unaligned_unicode_access(self, level=rlevel) : - """Ticket #825""" - for i in range(1,9) : - msg = 'unicode offset: %d chars'%i - t = np.dtype([('a','S%d'%i),('b','U2')]) - x = np.array([(asbytes('a'),u'b')], dtype=t) - if sys.version_info[0] >= 3: - assert_equal(str(x), "[(b'a', 'b')]", err_msg=msg) - else: - assert_equal(str(x), "[('a', u'b')]", err_msg=msg) - - def test_sign_for_complex_nan(self, level=rlevel): - """Ticket 794.""" - C = np.array([-np.inf, -2+1j, 0, 2-1j, np.inf, np.nan]) - have = np.sign(C) - want = np.array([-1+0j, -1+0j, 0+0j, 1+0j, 1+0j, np.nan]) - assert_equal(have, want) - - def test_for_equal_names(self, level=rlevel): - """Ticket #674""" - 
dt = np.dtype([('foo', float), ('bar', float)]) - a = np.zeros(10, dt) - b = list(a.dtype.names) - b[0] = "notfoo" - a.dtype.names = b - assert a.dtype.names[0] == "notfoo" - assert a.dtype.names[1] == "bar" - - def test_for_object_scalar_creation(self, level=rlevel): - """Ticket #816""" - a = np.object_() - b = np.object_(3) - b2 = np.object_(3.0) - c = np.object_([4,5]) - d = np.object_([None, {}, []]) - assert a is None - assert type(b) is int - assert type(b2) is float - assert type(c) is np.ndarray - assert c.dtype == object - assert d.dtype == object - - def test_array_resize_method_system_error(self): - """Ticket #840 - order should be an invalid keyword.""" - x = np.array([[0,1],[2,3]]) - self.assertRaises(TypeError, x.resize, (2,2), order='C') - - def test_for_zero_length_in_choose(self, level=rlevel): - "Ticket #882" - a = np.array(1) - self.assertRaises(ValueError, lambda x: x.choose([]), a) - - def test_array_ndmin_overflow(self): - "Ticket #947." - self.assertRaises(ValueError, lambda: np.array([1], ndmin=33)) - - def test_errobj_reference_leak(self, level=rlevel): - """Ticket #955""" - old_err = np.seterr(all="ignore") - try: - z = int(0) - p = np.int32(-1) - - gc.collect() - n_before = len(gc.get_objects()) - z**p # this shouldn't leak a reference to errobj - gc.collect() - n_after = len(gc.get_objects()) - assert n_before >= n_after, (n_before, n_after) - finally: - np.seterr(**old_err) - - def test_void_scalar_with_titles(self, level=rlevel): - """No ticket""" - data = [('john', 4), ('mary', 5)] - dtype1 = [(('source:yy', 'name'), 'O'), (('source:xx', 'id'), int)] - arr = np.array(data, dtype=dtype1) - assert arr[0][0] == 'john' - assert arr[0][1] == 4 - - def test_blasdot_uninitialized_memory(self): - """Ticket #950""" - for m in [0, 1, 2]: - for n in [0, 1, 2]: - for k in xrange(3): - # Try to ensure that x->data contains non-zero floats - x = np.array([123456789e199], dtype=np.float64) - x.resize((m, 0)) - y = np.array([123456789e199], 
dtype=np.float64) - y.resize((0, n)) - - # `dot` should just return zero (m,n) matrix - z = np.dot(x, y) - assert np.all(z == 0) - assert z.shape == (m, n) - - def test_zeros(self): - """Regression test for #1061.""" - # Set a size which cannot fit into a 64 bits signed integer - sz = 2 ** 64 - good = 'Maximum allowed dimension exceeded' - try: - np.empty(sz) - except ValueError, e: - if not str(e) == good: - self.fail("Got msg '%s', expected '%s'" % (e, good)) - except Exception, e: - self.fail("Got exception of type %s instead of ValueError" % type(e)) - - def test_huge_arange(self): - """Regression test for #1062.""" - # Set a size which cannot fit into a 64 bits signed integer - sz = 2 ** 64 - good = 'Maximum allowed size exceeded' - try: - a = np.arange(sz) - self.assertTrue(np.size == sz) - except ValueError, e: - if not str(e) == good: - self.fail("Got msg '%s', expected '%s'" % (e, good)) - except Exception, e: - self.fail("Got exception of type %s instead of ValueError" % type(e)) - - def test_fromiter_bytes(self): - """Ticket #1058""" - a = np.fromiter(range(10), dtype='b') - b = np.fromiter(range(10), dtype='B') - assert np.alltrue(a == np.array([0,1,2,3,4,5,6,7,8,9])) - assert np.alltrue(b == np.array([0,1,2,3,4,5,6,7,8,9])) - - def test_array_from_sequence_scalar_array(self): - """Ticket #1078: segfaults when creating an array with a sequence of 0d - arrays.""" - a = np.ones(2) - b = np.array(3) - assert_raises(ValueError, lambda: np.array((a, b))) - - t = ((1,), np.array(1)) - assert_raises(ValueError, lambda: np.array(t)) - - @dec.knownfailureif(True, "Fix this for 1.5.0.") - def test_array_from_sequence_scalar_array2(self): - """Ticket #1081: weird array with strange input...""" - t = np.array([np.array([]), np.array(0, object)]) - assert_raises(ValueError, lambda: np.array(t)) - - def test_array_too_big(self): - """Ticket #1080.""" - assert_raises(ValueError, np.zeros, [2**10]*10) - - def test_dtype_keyerrors_(self): - """Ticket #1106.""" - dt = 
np.dtype([('f1', np.uint)]) - assert_raises(KeyError, dt.__getitem__, "f2") - assert_raises(IndexError, dt.__getitem__, 1) - assert_raises(ValueError, dt.__getitem__, 0.0) - - def test_lexsort_buffer_length(self): - """Ticket #1217, don't segfault.""" - a = np.ones(100, dtype=np.int8) - b = np.ones(100, dtype=np.int32) - i = np.lexsort((a[::-1], b)) - assert_equal(i, np.arange(100, dtype=np.int)) - - def test_object_array_to_fixed_string(self): - """Ticket #1235.""" - a = np.array(['abcdefgh', 'ijklmnop'], dtype=np.object_) - b = np.array(a, dtype=(np.str_, 8)) - assert_equal(a, b) - c = np.array(a, dtype=(np.str_, 5)) - assert_equal(c, np.array(['abcde', 'ijklm'])) - d = np.array(a, dtype=(np.str_, 12)) - assert_equal(a, d) - e = np.empty((2, ), dtype=(np.str_, 8)) - e[:] = a[:] - assert_equal(a, e) - - def test_unicode_to_string_cast(self): - """Ticket #1240.""" - a = np.array([[u'abc', u'\u03a3'], [u'asdf', u'erw']], dtype='U') - def fail(): - b = np.array(a, 'S4') - self.assertRaises(UnicodeEncodeError, fail) - - def test_mixed_string_unicode_array_creation(self): - a = np.array(['1234', u'123']) - assert a.itemsize == 16 - a = np.array([u'123', '1234']) - assert a.itemsize == 16 - a = np.array(['1234', u'123', '12345']) - assert a.itemsize == 20 - a = np.array([u'123', '1234', u'12345']) - assert a.itemsize == 20 - a = np.array([u'123', '1234', u'1234']) - assert a.itemsize == 16 - - def test_misaligned_objects_segfault(self): - """Ticket #1198 and #1267""" - a1 = np.zeros((10,), dtype='O,c') - a2 = np.array(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'], 'S10') - a1['f0'] = a2 - r = repr(a1) - np.argmax(a1['f0']) - a1['f0'][1] = "FOO" - a1['f0'] = "FOO" - a3 = np.array(a1['f0'], dtype='S') - np.nonzero(a1['f0']) - a1.sort() - a4 = copy.deepcopy(a1) - - def test_misaligned_scalars_segfault(self): - """Ticket #1267""" - s1 = np.array(('a', 'Foo'), dtype='c,O') - s2 = np.array(('b', 'Bar'), dtype='c,O') - s1['f1'] = s2['f1'] - s1['f1'] = 'Baz' - - def 
test_misaligned_dot_product_objects(self): - """Ticket #1267""" - # This didn't require a fix, but it's worth testing anyway, because - # it may fail if .dot stops enforcing the arrays to be BEHAVED - a = np.array([[(1, 'a'), (0, 'a')], [(0, 'a'), (1, 'a')]], dtype='O,c') - b = np.array([[(4, 'a'), (1, 'a')], [(2, 'a'), (2, 'a')]], dtype='O,c') - np.dot(a['f0'], b['f0']) - - def test_byteswap_complex_scalar(self): - """Ticket #1259""" - z = np.array([-1j], 'c') - - def test_log1p_compiler_shenanigans(self): - # Check if log1p is behaving on 32 bit intel systems. - assert_(np.isfinite(np.log1p(np.exp2(-53)))) - - def test_fromiter_comparison(self, level=rlevel): - a = np.fromiter(range(10), dtype='b') - b = np.fromiter(range(10), dtype='B') - assert np.alltrue(a == np.array([0,1,2,3,4,5,6,7,8,9])) - assert np.alltrue(b == np.array([0,1,2,3,4,5,6,7,8,9])) - - def test_fromstring_crash(self): - # Ticket #1345: the following should not cause a crash - np.fromstring(asbytes('aa, aa, 1.0'), sep=',') - - def test_ticket_1539(self): - dtypes = [x for x in np.typeDict.values() - if issubclass(x, np.number)] - a = np.array([], dtypes[0]) - failures = [] - for x in dtypes: - b = a.astype(x) - for y in dtypes: - c = a.astype(y) - try: - np.dot(b, c) - except TypeError, e: - failures.append((x, y)) - if failures: - raise AssertionError("Failures: %r" % failures) - - def test_ticket_1538(self): - x = np.finfo(np.float32) - for name in 'eps epsneg max min resolution tiny'.split(): - assert_equal(type(getattr(x, name)), np.float32, - err_msg=name) - - def test_ticket_1434(self): - # Check that the out= argument in var and std has an effect - data = np.array(((1,2,3),(4,5,6),(7,8,9))) - out = np.zeros((3,)) - - ret = data.var(axis=1, out=out) - assert_(ret is out) - assert_array_equal(ret, data.var(axis=1)) - - ret = data.std(axis=1, out=out) - assert_(ret is out) - assert_array_equal(ret, data.std(axis=1)) - - def test_complex_nan_maximum(self): - cnan = complex(0, np.nan) - 
assert_equal(np.maximum(1, cnan), cnan) - - def test_subclass_int_tuple_assignment(self): - # ticket #1563 - class Subclass(np.ndarray): - def __new__(cls,i): - return np.ones((i,)).view(cls) - x = Subclass(5) - x[(0,)] = 2 # shouldn't raise an exception - assert_equal(x[0], 2) - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/core/tests/test_scalarmath.py b/pythonPackages/numpy/numpy/core/tests/test_scalarmath.py deleted file mode 100755 index 520b6eb174..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_scalarmath.py +++ /dev/null @@ -1,111 +0,0 @@ -import sys -from numpy.testing import * -import numpy as np - -types = [np.bool_, np.byte, np.ubyte, np.short, np.ushort, np.intc, np.uintc, - np.int_, np.uint, np.longlong, np.ulonglong, - np.single, np.double, np.longdouble, np.csingle, - np.cdouble, np.clongdouble] - -# This compares scalarmath against ufuncs. - -class TestTypes(TestCase): - def test_types(self, level=1): - for atype in types: - a = atype(1) - assert a == 1, "error with %r: got %r" % (atype,a) - - def test_type_add(self, level=1): - # list of types - for k, atype in enumerate(types): - vala = atype(3) - val1 = np.array([3],dtype=atype) - for l, btype in enumerate(types): - valb = btype(1) - val2 = np.array([1],dtype=btype) - val = vala+valb - valo = val1 + val2 - assert val.dtype.num == valo.dtype.num and \ - val.dtype.char == valo.dtype.char, \ - "error with (%d,%d)" % (k,l) - - def test_type_create(self, level=1): - for k, atype in enumerate(types): - a = np.array([1,2,3],atype) - b = atype([1,2,3]) - assert_equal(a,b) - - -class TestPower(TestCase): - def test_small_types(self): - for t in [np.int8, np.int16]: - a = t(3) - b = a ** 4 - assert b == 81, "error with %r: got %r" % (t,b) - - def test_large_types(self): - for t in [np.int32, np.int64, np.float32, np.float64, np.longdouble]: - a = t(51) - b = a ** 4 - msg = "error with %r: got %r" % (t,b) - if np.issubdtype(t, np.integer): - assert b 
== 6765201, msg - else: - assert_almost_equal(b, 6765201, err_msg=msg) - - -class TestConversion(TestCase): - def test_int_from_long(self): - l = [1e6, 1e12, 1e18, -1e6, -1e12, -1e18] - li = [10**6, 10**12, 10**18, -10**6, -10**12, -10**18] - for T in [None, np.float64, np.int64]: - a = np.array(l,dtype=T) - assert_equal(map(int,a), li) - - a = np.array(l[:3], dtype=np.uint64) - assert_equal(map(int,a), li[:3]) - - -#class TestRepr(TestCase): -# def test_repr(self): -# for t in types: -# val = t(1197346475.0137341) -# val_repr = repr(val) -# val2 = eval(val_repr) -# assert_equal( val, val2 ) - - -class TestRepr(TestCase): - def _test_type_repr(self, t): - finfo=np.finfo(t) - last_fraction_bit_idx = finfo.nexp + finfo.nmant - last_exponent_bit_idx = finfo.nexp - storage_bytes = np.dtype(t).itemsize*8 - # could add some more types to the list below - for which in ['small denorm','small norm']: - # Values from http://en.wikipedia.org/wiki/IEEE_754 - constr = np.array([0x00]*storage_bytes,dtype=np.uint8) - if which == 'small denorm': - byte = last_fraction_bit_idx // 8 - bytebit = 7-(last_fraction_bit_idx % 8) - constr[byte] = 1< real - n 1 negative nums + O - n 1 sign nums + O -> int - n 1 invert bool + ints + O flts raise an error - n 1 degrees real + M cmplx raise an error - n 1 radians real + M cmplx raise an error - n 1 arccos flts + M - n 1 arccosh flts + M - n 1 arcsin flts + M - n 1 arcsinh flts + M - n 1 arctan flts + M - n 1 arctanh flts + M - n 1 cos flts + M - n 1 sin flts + M - n 1 tan flts + M - n 1 cosh flts + M - n 1 sinh flts + M - n 1 tanh flts + M - n 1 exp flts + M - n 1 expm1 flts + M - n 1 log flts + M - n 1 log10 flts + M - n 1 log1p flts + M - n 1 sqrt flts + M real x < 0 raises error - n 1 ceil real + M - n 1 trunc real + M - n 1 floor real + M - n 1 fabs real + M - n 1 rint flts + M - n 1 isnan flts -> bool - n 1 isinf flts -> bool - n 1 isfinite flts -> bool - n 1 signbit real -> bool - n 1 modf real -> (frac, int) - n 1 logical_not bool + 
nums + M -> bool - n 2 left_shift ints + O flts raise an error - n 2 right_shift ints + O flts raise an error - n 2 add bool + nums + O boolean + is || - n 2 subtract bool + nums + O boolean - is ^ - n 2 multiply bool + nums + O boolean * is & - n 2 divide nums + O - n 2 floor_divide nums + O - n 2 true_divide nums + O bBhH -> f, iIlLqQ -> d - n 2 fmod nums + M - n 2 power nums + O - n 2 greater bool + nums + O -> bool - n 2 greater_equal bool + nums + O -> bool - n 2 less bool + nums + O -> bool - n 2 less_equal bool + nums + O -> bool - n 2 equal bool + nums + O -> bool - n 2 not_equal bool + nums + O -> bool - n 2 logical_and bool + nums + M -> bool - n 2 logical_or bool + nums + M -> bool - n 2 logical_xor bool + nums + M -> bool - n 2 maximum bool + nums + O - n 2 minimum bool + nums + O - n 2 bitwise_and bool + ints + O flts raise an error - n 2 bitwise_or bool + ints + O flts raise an error - n 2 bitwise_xor bool + ints + O flts raise an error - n 2 arctan2 real + M - n 2 remainder ints + real + O - n 2 hypot real + M - ===== ==== ============= =============== ======================== - - Types other than those listed will be accepted, but they are cast to - the smallest compatible type for which the function is defined. The - casting rules are: - - bool -> int8 -> float32 - ints -> double - - """ - pass - - - def test_signature(self): - # the arguments to test_signature are: nin, nout, core_signature - # pass - assert_equal(umt.test_signature(2,1,"(i),(i)->()"), 1) - - # pass. 
empty core signature; treat as plain ufunc (with trivial core) - assert_equal(umt.test_signature(2,1,"(),()->()"), 0) - - # in the following calls, a ValueError should be raised because - # of error in core signature - # error: extra parenthesis - msg = "core_sig: extra parenthesis" - try: - ret = umt.test_signature(2,1,"((i)),(i)->()") - assert_equal(ret, None, err_msg=msg) - except ValueError: None - # error: parenthesis matching - msg = "core_sig: parenthesis matching" - try: - ret = umt.test_signature(2,1,"(i),)i(->()") - assert_equal(ret, None, err_msg=msg) - except ValueError: None - # error: incomplete signature. letters outside of parenthesis are ignored - msg = "core_sig: incomplete signature" - try: - ret = umt.test_signature(2,1,"(i),->()") - assert_equal(ret, None, err_msg=msg) - except ValueError: None - # error: incomplete signature. 2 output arguments are specified - msg = "core_sig: incomplete signature" - try: - ret = umt.test_signature(2,2,"(i),(i)->()") - assert_equal(ret, None, err_msg=msg) - except ValueError: None - - # more complicated names for variables - assert_equal(umt.test_signature(2,1,"(i1,i2),(J_1)->(_kAB)"),1) - - def test_get_signature(self): - assert_equal(umt.inner1d.signature, "(i),(i)->()") - - def test_inner1d(self): - a = np.arange(6).reshape((2,3)) - assert_array_equal(umt.inner1d(a,a), np.sum(a*a,axis=-1)) - - def test_broadcast(self): - msg = "broadcast" - a = np.arange(4).reshape((2,1,2)) - b = np.arange(4).reshape((1,2,2)) - assert_array_equal(umt.inner1d(a,b), np.sum(a*b,axis=-1), err_msg=msg) - msg = "extend & broadcast loop dimensions" - b = np.arange(4).reshape((2,2)) - assert_array_equal(umt.inner1d(a,b), np.sum(a*b,axis=-1), err_msg=msg) - msg = "broadcast in core dimensions" - a = np.arange(8).reshape((4,2)) - b = np.arange(4).reshape((4,1)) - assert_array_equal(umt.inner1d(a,b), np.sum(a*b,axis=-1), err_msg=msg) - msg = "extend & broadcast core and loop dimensions" - a = np.arange(8).reshape((4,2)) - b = 
np.array(7) - assert_array_equal(umt.inner1d(a,b), np.sum(a*b,axis=-1), err_msg=msg) - msg = "broadcast should fail" - a = np.arange(2).reshape((2,1,1)) - b = np.arange(3).reshape((3,1,1)) - try: - ret = umt.inner1d(a,b) - assert_equal(ret, None, err_msg=msg) - except ValueError: None - - def test_type_cast(self): - msg = "type cast" - a = np.arange(6, dtype='short').reshape((2,3)) - assert_array_equal(umt.inner1d(a,a), np.sum(a*a,axis=-1), err_msg=msg) - msg = "type cast on one argument" - a = np.arange(6).reshape((2,3)) - b = a+0.1 - assert_array_almost_equal(umt.inner1d(a,a), np.sum(a*a,axis=-1), - err_msg=msg) - - def test_endian(self): - msg = "big endian" - a = np.arange(6, dtype='>i4').reshape((2,3)) - assert_array_equal(umt.inner1d(a,a), np.sum(a*a,axis=-1), err_msg=msg) - msg = "little endian" - a = np.arange(6, dtype=' 0), "arctan(%s, %s) is %s, not +inf" % (x, y, ncu.arctan2(x, y)) - - -def assert_arctan2_isninf(x, y): - assert (np.isinf(ncu.arctan2(x, y)) and ncu.arctan2(x, y) < 0), "arctan(%s, %s) is %s, not -inf" % (x, y, ncu.arctan2(x, y)) - - -def assert_arctan2_ispzero(x, y): - assert (ncu.arctan2(x, y) == 0 and not np.signbit(ncu.arctan2(x, y))), "arctan(%s, %s) is %s, not +0" % (x, y, ncu.arctan2(x, y)) - - -def assert_arctan2_isnzero(x, y): - assert (ncu.arctan2(x, y) == 0 and np.signbit(ncu.arctan2(x, y))), "arctan(%s, %s) is %s, not -0" % (x, y, ncu.arctan2(x, y)) - - -class TestArctan2SpecialValues(TestCase): - def test_one_one(self): - # atan2(1, 1) returns pi/4. - assert_almost_equal(ncu.arctan2(1, 1), 0.25 * np.pi) - assert_almost_equal(ncu.arctan2(-1, 1), -0.25 * np.pi) - assert_almost_equal(ncu.arctan2(1, -1), 0.75 * np.pi) - - def test_zero_nzero(self): - # atan2(+-0, -0) returns +-pi. - assert_almost_equal(ncu.arctan2(np.PZERO, np.NZERO), np.pi) - assert_almost_equal(ncu.arctan2(np.NZERO, np.NZERO), -np.pi) - - def test_zero_pzero(self): - # atan2(+-0, +0) returns +-0. 
- assert_arctan2_ispzero(np.PZERO, np.PZERO) - assert_arctan2_isnzero(np.NZERO, np.PZERO) - - def test_zero_negative(self): - # atan2(+-0, x) returns +-pi for x < 0. - assert_almost_equal(ncu.arctan2(np.PZERO, -1), np.pi) - assert_almost_equal(ncu.arctan2(np.NZERO, -1), -np.pi) - - def test_zero_positive(self): - # atan2(+-0, x) returns +-0 for x > 0. - assert_arctan2_ispzero(np.PZERO, 1) - assert_arctan2_isnzero(np.NZERO, 1) - - def test_positive_zero(self): - # atan2(y, +-0) returns +pi/2 for y > 0. - assert_almost_equal(ncu.arctan2(1, np.PZERO), 0.5 * np.pi) - assert_almost_equal(ncu.arctan2(1, np.NZERO), 0.5 * np.pi) - - def test_negative_zero(self): - # atan2(y, +-0) returns -pi/2 for y < 0. - assert_almost_equal(ncu.arctan2(-1, np.PZERO), -0.5 * np.pi) - assert_almost_equal(ncu.arctan2(-1, np.NZERO), -0.5 * np.pi) - - def test_any_ninf(self): - # atan2(+-y, -infinity) returns +-pi for finite y > 0. - assert_almost_equal(ncu.arctan2(1, np.NINF), np.pi) - assert_almost_equal(ncu.arctan2(-1, np.NINF), -np.pi) - - def test_any_pinf(self): - # atan2(+-y, +infinity) returns +-0 for finite y > 0. - assert_arctan2_ispzero(1, np.inf) - assert_arctan2_isnzero(-1, np.inf) - - def test_inf_any(self): - # atan2(+-infinity, x) returns +-pi/2 for finite x. - assert_almost_equal(ncu.arctan2( np.inf, 1), 0.5 * np.pi) - assert_almost_equal(ncu.arctan2(-np.inf, 1), -0.5 * np.pi) - - def test_inf_ninf(self): - # atan2(+-infinity, -infinity) returns +-3*pi/4. - assert_almost_equal(ncu.arctan2( np.inf, -np.inf), 0.75 * np.pi) - assert_almost_equal(ncu.arctan2(-np.inf, -np.inf), -0.75 * np.pi) - - def test_inf_pinf(self): - # atan2(+-infinity, +infinity) returns +-pi/4. 
- assert_almost_equal(ncu.arctan2( np.inf, np.inf), 0.25 * np.pi) - assert_almost_equal(ncu.arctan2(-np.inf, np.inf), -0.25 * np.pi) - - def test_nan_any(self): - # atan2(nan, x) returns nan for any x, including inf - assert_arctan2_isnan(np.nan, np.inf) - assert_arctan2_isnan(np.inf, np.nan) - assert_arctan2_isnan(np.nan, np.nan) - - -class TestLdexp(TestCase): - def test_ldexp(self): - assert_almost_equal(ncu.ldexp(2., 3), 16.) - assert_almost_equal(ncu.ldexp(np.array(2., np.float32), np.array(3, np.int16)), 16.) - assert_almost_equal(ncu.ldexp(np.array(2., np.float32), np.array(3, np.int32)), 16.) - assert_almost_equal(ncu.ldexp(np.array(2., np.float64), np.array(3, np.int16)), 16.) - assert_almost_equal(ncu.ldexp(np.array(2., np.float64), np.array(3, np.int32)), 16.) - assert_almost_equal(ncu.ldexp(np.array(2., np.longdouble), np.array(3, np.int16)), 16.) - assert_almost_equal(ncu.ldexp(np.array(2., np.longdouble), np.array(3, np.int32)), 16.) - - -class TestMaximum(TestCase): - def test_reduce_complex(self): - assert_equal(np.maximum.reduce([1,2j]),1) - assert_equal(np.maximum.reduce([1+3j,2j]),1+3j) - - def test_float_nans(self): - nan = np.nan - arg1 = np.array([0, nan, nan]) - arg2 = np.array([nan, 0, nan]) - out = np.array([nan, nan, nan]) - assert_equal(np.maximum(arg1, arg2), out) - - def test_complex_nans(self): - nan = np.nan - for cnan in [complex(nan, 0), complex(0, nan), complex(nan, nan)] : - arg1 = np.array([0, cnan, cnan], dtype=np.complex) - arg2 = np.array([cnan, 0, cnan], dtype=np.complex) - out = np.array([nan, nan, nan], dtype=np.complex) - assert_equal(np.maximum(arg1, arg2), out) - - -class TestMinimum(TestCase): - def test_reduce_complex(self): - assert_equal(np.minimum.reduce([1,2j]),2j) - assert_equal(np.minimum.reduce([1+3j,2j]),2j) - - def test_float_nans(self): - nan = np.nan - arg1 = np.array([0, nan, nan]) - arg2 = np.array([nan, 0, nan]) - out = np.array([nan, nan, nan]) - assert_equal(np.minimum(arg1, arg2), out) - - def 
test_complex_nans(self): - nan = np.nan - for cnan in [complex(nan, 0), complex(0, nan), complex(nan, nan)] : - arg1 = np.array([0, cnan, cnan], dtype=np.complex) - arg2 = np.array([cnan, 0, cnan], dtype=np.complex) - out = np.array([nan, nan, nan], dtype=np.complex) - assert_equal(np.minimum(arg1, arg2), out) - - -class TestFmax(TestCase): - def test_reduce_complex(self): - assert_equal(np.fmax.reduce([1,2j]),1) - assert_equal(np.fmax.reduce([1+3j,2j]),1+3j) - - def test_float_nans(self): - nan = np.nan - arg1 = np.array([0, nan, nan]) - arg2 = np.array([nan, 0, nan]) - out = np.array([0, 0, nan]) - assert_equal(np.fmax(arg1, arg2), out) - - def test_complex_nans(self): - nan = np.nan - for cnan in [complex(nan, 0), complex(0, nan), complex(nan, nan)] : - arg1 = np.array([0, cnan, cnan], dtype=np.complex) - arg2 = np.array([cnan, 0, cnan], dtype=np.complex) - out = np.array([0, 0, nan], dtype=np.complex) - assert_equal(np.fmax(arg1, arg2), out) - - -class TestFmin(TestCase): - def test_reduce_complex(self): - assert_equal(np.fmin.reduce([1,2j]),2j) - assert_equal(np.fmin.reduce([1+3j,2j]),2j) - - def test_float_nans(self): - nan = np.nan - arg1 = np.array([0, nan, nan]) - arg2 = np.array([nan, 0, nan]) - out = np.array([0, 0, nan]) - assert_equal(np.fmin(arg1, arg2), out) - - def test_complex_nans(self): - nan = np.nan - for cnan in [complex(nan, 0), complex(0, nan), complex(nan, nan)] : - arg1 = np.array([0, cnan, cnan], dtype=np.complex) - arg2 = np.array([cnan, 0, cnan], dtype=np.complex) - out = np.array([0, 0, nan], dtype=np.complex) - assert_equal(np.fmin(arg1, arg2), out) - - -class TestFloatingPoint(TestCase): - def test_floating_point(self): - assert_equal(ncu.FLOATING_POINT_SUPPORT, 1) - - -class TestDegrees(TestCase): - def test_degrees(self): - assert_almost_equal(ncu.degrees(np.pi), 180.0) - assert_almost_equal(ncu.degrees(-0.5*np.pi), -90.0) - - -class TestRadians(TestCase): - def test_radians(self): - assert_almost_equal(ncu.radians(180.0), np.pi) - 
assert_almost_equal(ncu.radians(-90.0), -0.5*np.pi) - - -class TestSign(TestCase): - def test_sign(self): - a = np.array([np.inf, -np.inf, np.nan, 0.0, 3.0, -3.0]) - out = np.zeros(a.shape) - tgt = np.array([1., -1., np.nan, 0.0, 1.0, -1.0]) - - olderr = np.seterr(invalid='ignore') - try: - res = ncu.sign(a) - assert_equal(res, tgt) - res = ncu.sign(a, out) - assert_equal(res, tgt) - assert_equal(out, tgt) - finally: - np.seterr(**olderr) - - -class TestSpecialMethods(TestCase): - def test_wrap(self): - class with_wrap(object): - def __array__(self): - return np.zeros(1) - def __array_wrap__(self, arr, context): - r = with_wrap() - r.arr = arr - r.context = context - return r - a = with_wrap() - x = ncu.minimum(a, a) - assert_equal(x.arr, np.zeros(1)) - func, args, i = x.context - self.assertTrue(func is ncu.minimum) - self.assertEqual(len(args), 2) - assert_equal(args[0], a) - assert_equal(args[1], a) - self.assertEqual(i, 0) - - def test_wrap_with_iterable(self): - # test fix for bug #1026: - class with_wrap(np.ndarray): - __array_priority__ = 10 - def __new__(cls): - return np.asarray(1).view(cls).copy() - def __array_wrap__(self, arr, context): - return arr.view(type(self)) - a = with_wrap() - x = ncu.multiply(a, (1, 2, 3)) - self.assertTrue(isinstance(x, with_wrap)) - assert_array_equal(x, np.array((1, 2, 3))) - - def test_priority_with_scalar(self): - # test fix for bug #826: - class A(np.ndarray): - __array_priority__ = 10 - def __new__(cls): - return np.asarray(1.0, 'float64').view(cls).copy() - a = A() - x = np.float64(1)*a - self.assertTrue(isinstance(x, A)) - assert_array_equal(x, np.array(1)) - - def test_old_wrap(self): - class with_wrap(object): - def __array__(self): - return np.zeros(1) - def __array_wrap__(self, arr): - r = with_wrap() - r.arr = arr - return r - a = with_wrap() - x = ncu.minimum(a, a) - assert_equal(x.arr, np.zeros(1)) - - def test_priority(self): - class A(object): - def __array__(self): - return np.zeros(1) - def 
__array_wrap__(self, arr, context): - r = type(self)() - r.arr = arr - r.context = context - return r - class B(A): - __array_priority__ = 20. - class C(A): - __array_priority__ = 40. - x = np.zeros(1) - a = A() - b = B() - c = C() - f = ncu.minimum - self.assertTrue(type(f(x,x)) is np.ndarray) - self.assertTrue(type(f(x,a)) is A) - self.assertTrue(type(f(x,b)) is B) - self.assertTrue(type(f(x,c)) is C) - self.assertTrue(type(f(a,x)) is A) - self.assertTrue(type(f(b,x)) is B) - self.assertTrue(type(f(c,x)) is C) - - self.assertTrue(type(f(a,a)) is A) - self.assertTrue(type(f(a,b)) is B) - self.assertTrue(type(f(b,a)) is B) - self.assertTrue(type(f(b,b)) is B) - self.assertTrue(type(f(b,c)) is C) - self.assertTrue(type(f(c,b)) is C) - self.assertTrue(type(f(c,c)) is C) - - self.assertTrue(type(ncu.exp(a) is A)) - self.assertTrue(type(ncu.exp(b) is B)) - self.assertTrue(type(ncu.exp(c) is C)) - - def test_failing_wrap(self): - class A(object): - def __array__(self): - return np.zeros(1) - def __array_wrap__(self, arr, context): - raise RuntimeError - a = A() - self.assertRaises(RuntimeError, ncu.maximum, a, a) - - def test_default_prepare(self): - class with_wrap(object): - __array_priority__ = 10 - def __array__(self): - return np.zeros(1) - def __array_wrap__(self, arr, context): - return arr - a = with_wrap() - x = ncu.minimum(a, a) - assert_equal(x, np.zeros(1)) - assert_equal(type(x), np.ndarray) - - def test_prepare(self): - class with_prepare(np.ndarray): - __array_priority__ = 10 - def __array_prepare__(self, arr, context): - # make sure we can return a new - return np.array(arr).view(type=with_prepare) - a = np.array(1).view(type=with_prepare) - x = np.add(a, a) - assert_equal(x, np.array(2)) - assert_equal(type(x), with_prepare) - - def test_failing_prepare(self): - class A(object): - def __array__(self): - return np.zeros(1) - def __array_prepare__(self, arr, context=None): - raise RuntimeError - a = A() - self.assertRaises(RuntimeError, ncu.maximum, a, a) 
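The `test_priority` cases in the deleted hunk above encode NumPy's subclass-dispatch rule: when a ufunc receives a mix of array subclasses, the result is wrapped as the type of the operand with the highest `__array_priority__` (so `f(b, c) is C` whenever `C` outranks `B`). A minimal standalone sketch of that rule — the `Low`/`High` class names are illustrative, not from the original tests:

```python
import numpy as np

# Illustrative subclasses (hypothetical names): a ufunc wraps its result
# as the type of the input with the highest __array_priority__.
class Low(np.ndarray):
    __array_priority__ = 20.0

class High(np.ndarray):
    __array_priority__ = 40.0

lo = np.zeros(3).view(Low)
hi = np.zeros(3).view(High)

# High wins regardless of operand order, mirroring f(b, c) is C above.
assert type(np.minimum(lo, hi)) is High
assert type(np.minimum(hi, lo)) is High

# A plain ndarray (priority 0.0) loses to any prioritized subclass.
assert type(np.minimum(np.zeros(3), lo)) is Low
```

The same mechanism is what `test_priority_with_scalar` exercises: even `np.float64(1) * a` defers to the higher-priority subclass `A`.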
- - def test_array_with_context(self): - class A(object): - def __array__(self, dtype=None, context=None): - func, args, i = context - self.func = func - self.args = args - self.i = i - return np.zeros(1) - class B(object): - def __array__(self, dtype=None): - return np.zeros(1, dtype) - class C(object): - def __array__(self): - return np.zeros(1) - a = A() - ncu.maximum(np.zeros(1), a) - self.assertTrue(a.func is ncu.maximum) - assert_equal(a.args[0], 0) - self.assertTrue(a.args[1] is a) - self.assertTrue(a.i == 1) - assert_equal(ncu.maximum(a, B()), 0) - assert_equal(ncu.maximum(a, C()), 0) - - -class TestChoose(TestCase): - def test_mixed(self): - c = np.array([True,True]) - a = np.array([True,True]) - assert_equal(np.choose(c, (a, 1)), np.array([1,1])) - - -def is_longdouble_finfo_bogus(): - info = np.finfo(np.longcomplex) - return not np.isfinite(np.log10(info.tiny/info.eps)) - - -class TestComplexFunctions(object): - funcs = [np.arcsin, np.arccos, np.arctan, np.arcsinh, np.arccosh, - np.arctanh, np.sin, np.cos, np.tan, np.exp, - np.exp2, np.log, np.sqrt, np.log10, np.log2, - np.log1p] - - def test_it(self): - for f in self.funcs: - if f is np.arccosh : - x = 1.5 - else : - x = .5 - fr = f(x) - fz = f(np.complex(x)) - assert_almost_equal(fz.real, fr, err_msg='real part %s'%f) - assert_almost_equal(fz.imag, 0., err_msg='imag part %s'%f) - - def test_precisions_consistent(self) : - z = 1 + 1j - for f in self.funcs : - fcf = f(np.csingle(z)) - fcd = f(np.cdouble(z)) - fcl = f(np.clongdouble(z)) - assert_almost_equal(fcf, fcd, decimal=6, err_msg='fch-fcd %s'%f) - assert_almost_equal(fcl, fcd, decimal=15, err_msg='fch-fcl %s'%f) - - def test_branch_cuts(self): - # check branch cuts and continuity on them - yield _check_branch_cut, np.log, -0.5, 1j, 1, -1 - yield _check_branch_cut, np.log2, -0.5, 1j, 1, -1 - yield _check_branch_cut, np.log10, -0.5, 1j, 1, -1 - yield _check_branch_cut, np.log1p, -1.5, 1j, 1, -1 - yield _check_branch_cut, np.sqrt, -0.5, 1j, 1, -1 - - 
yield _check_branch_cut, np.arcsin, [ -2, 2], [1j, -1j], 1, -1 - yield _check_branch_cut, np.arccos, [ -2, 2], [1j, -1j], 1, -1 - yield _check_branch_cut, np.arctan, [-2j, 2j], [1, -1 ], -1, 1 - - yield _check_branch_cut, np.arcsinh, [-2j, 2j], [-1, 1], -1, 1 - yield _check_branch_cut, np.arccosh, [ -1, 0.5], [1j, 1j], 1, -1 - yield _check_branch_cut, np.arctanh, [ -2, 2], [1j, -1j], 1, -1 - - # check against bogus branch cuts: assert continuity between quadrants - yield _check_branch_cut, np.arcsin, [-2j, 2j], [ 1, 1], 1, 1 - yield _check_branch_cut, np.arccos, [-2j, 2j], [ 1, 1], 1, 1 - yield _check_branch_cut, np.arctan, [ -2, 2], [1j, 1j], 1, 1 - - yield _check_branch_cut, np.arcsinh, [ -2, 2, 0], [1j, 1j, 1 ], 1, 1 - yield _check_branch_cut, np.arccosh, [-2j, 2j, 2], [1, 1, 1j], 1, 1 - yield _check_branch_cut, np.arctanh, [-2j, 2j, 0], [1, 1, 1j], 1, 1 - - @dec.knownfailureif(True, "These branch cuts are known to fail") - def test_branch_cuts_failing(self): - # XXX: signed zero not OK with ICC on 64-bit platform for log, see - # http://permalink.gmane.org/gmane.comp.python.numeric.general/25335 - yield _check_branch_cut, np.log, -0.5, 1j, 1, -1, True - yield _check_branch_cut, np.log2, -0.5, 1j, 1, -1, True - yield _check_branch_cut, np.log10, -0.5, 1j, 1, -1, True - yield _check_branch_cut, np.log1p, -1.5, 1j, 1, -1, True - # XXX: signed zeros are not OK for sqrt or for the arc* functions - yield _check_branch_cut, np.sqrt, -0.5, 1j, 1, -1, True - yield _check_branch_cut, np.arcsin, [ -2, 2], [1j, -1j], 1, -1, True - yield _check_branch_cut, np.arccos, [ -2, 2], [1j, -1j], 1, -1, True - yield _check_branch_cut, np.arctan, [-2j, 2j], [1, -1 ], -1, 1, True - yield _check_branch_cut, np.arcsinh, [-2j, 2j], [-1, 1], -1, 1, True - yield _check_branch_cut, np.arccosh, [ -1, 0.5], [1j, 1j], 1, -1, True - yield _check_branch_cut, np.arctanh, [ -2, 2], [1j, -1j], 1, -1, True - - def test_against_cmath(self): - import cmath, sys - - # cmath.asinh is broken in some 
versions of Python, see - # http://bugs.python.org/issue1381 - broken_cmath_asinh = False - if sys.version_info < (2,6): - broken_cmath_asinh = True - - points = [-1-1j, -1+1j, +1-1j, +1+1j] - name_map = {'arcsin': 'asin', 'arccos': 'acos', 'arctan': 'atan', - 'arcsinh': 'asinh', 'arccosh': 'acosh', 'arctanh': 'atanh'} - atol = 4*np.finfo(np.complex).eps - for func in self.funcs: - fname = func.__name__.split('.')[-1] - cname = name_map.get(fname, fname) - try: - cfunc = getattr(cmath, cname) - except AttributeError: - continue - for p in points: - a = complex(func(np.complex_(p))) - b = cfunc(p) - - if cname == 'asinh' and broken_cmath_asinh: - continue - - assert abs(a - b) < atol, "%s %s: %s; cmath: %s"%(fname,p,a,b) - - def check_loss_of_precision(self, dtype): - """Check loss of precision in complex arc* functions""" - - # Check against known-good functions - - info = np.finfo(dtype) - real_dtype = dtype(0.).real.dtype - eps = info.eps - - def check(x, rtol): - x = x.astype(real_dtype) - - z = x.astype(dtype) - d = np.absolute(np.arcsinh(x)/np.arcsinh(z).real - 1) - assert np.all(d < rtol), (np.argmax(d), x[np.argmax(d)], d.max(), - 'arcsinh') - - z = (1j*x).astype(dtype) - d = np.absolute(np.arcsinh(x)/np.arcsin(z).imag - 1) - assert np.all(d < rtol), (np.argmax(d), x[np.argmax(d)], d.max(), - 'arcsin') - - z = x.astype(dtype) - d = np.absolute(np.arctanh(x)/np.arctanh(z).real - 1) - assert np.all(d < rtol), (np.argmax(d), x[np.argmax(d)], d.max(), - 'arctanh') - - z = (1j*x).astype(dtype) - d = np.absolute(np.arctanh(x)/np.arctan(z).imag - 1) - assert np.all(d < rtol), (np.argmax(d), x[np.argmax(d)], d.max(), - 'arctan') - - # The switchover was chosen as 1e-3; hence there can be up to - # ~eps/1e-3 of relative cancellation error before it - - x_series = np.logspace(-20, -3.001, 200) - x_basic = np.logspace(-2.999, 0, 10, endpoint=False) - - if dtype is np.longcomplex: - # It's not guaranteed that the system-provided arc functions - # are accurate down to a 
few epsilons. (Eg. on Linux 64-bit) - # So, give more leeway for long complex tests here: - check(x_series, 50*eps) - else: - check(x_series, 2*eps) - check(x_basic, 2*eps/1e-3) - - # Check a few points - - z = np.array([1e-5*(1+1j)], dtype=dtype) - p = 9.999999999333333333e-6 + 1.000000000066666666e-5j - d = np.absolute(1-np.arctanh(z)/p) - assert np.all(d < 1e-15) - - p = 1.0000000000333333333e-5 + 9.999999999666666667e-6j - d = np.absolute(1-np.arcsinh(z)/p) - assert np.all(d < 1e-15) - - p = 9.999999999333333333e-6j + 1.000000000066666666e-5 - d = np.absolute(1-np.arctan(z)/p) - assert np.all(d < 1e-15) - - p = 1.0000000000333333333e-5j + 9.999999999666666667e-6 - d = np.absolute(1-np.arcsin(z)/p) - assert np.all(d < 1e-15) - - # Check continuity across switchover points - - def check(func, z0, d=1): - z0 = np.asarray(z0, dtype=dtype) - zp = z0 + abs(z0) * d * eps * 2 - zm = z0 - abs(z0) * d * eps * 2 - assert np.all(zp != zm), (zp, zm) - - # NB: the cancellation error at the switchover is at least eps - good = (abs(func(zp) - func(zm)) < 2*eps) - assert np.all(good), (func, z0[~good]) - - for func in (np.arcsinh,np.arcsinh,np.arcsin,np.arctanh,np.arctan): - pts = [rp+1j*ip for rp in (-1e-3,0,1e-3) for ip in(-1e-3,0,1e-3) - if rp != 0 or ip != 0] - check(func, pts, 1) - check(func, pts, 1j) - check(func, pts, 1+1j) - - def test_loss_of_precision(self): - for dtype in [np.complex64, np.complex_]: - yield self.check_loss_of_precision, dtype - - @dec.knownfailureif(is_longdouble_finfo_bogus(), "Bogus long double finfo") - def test_loss_of_precision_longcomplex(self): - self.check_loss_of_precision(np.longcomplex) - - -class TestAttributes(TestCase): - def test_attributes(self): - add = ncu.add - assert_equal(add.__name__, 'add') - assert add.__doc__.startswith('add(x1, x2[, out])\n\n') - self.assertTrue(add.ntypes >= 18) # don't fail if types added - self.assertTrue('ii->i' in add.types) - assert_equal(add.nin, 2) - assert_equal(add.nout, 1) - 
assert_equal(add.identity, 0) - - -class TestSubclass(TestCase): - def test_subclass_op(self): - class simple(np.ndarray): - def __new__(subtype, shape): - self = np.ndarray.__new__(subtype, shape, dtype=object) - self.fill(0) - return self - a = simple((3,4)) - assert_equal(a+a, a) - -def _check_branch_cut(f, x0, dx, re_sign=1, im_sign=-1, sig_zero_ok=False, - dtype=np.complex): - """ - Check for a branch cut in a function. - - Assert that `x0` lies on a branch cut of function `f` and `f` is - continuous from the direction `dx`. - - Parameters - ---------- - f : func - Function to check - x0 : array-like - Point on branch cut - dx : array-like - Direction to check continuity in - re_sign, im_sign : {1, -1} - Change of sign of the real or imaginary part expected - sig_zero_ok : bool - Whether to check if the branch cut respects signed zero (if applicable) - dtype : dtype - Dtype to check (should be complex) - - """ - x0 = np.atleast_1d(x0).astype(dtype) - dx = np.atleast_1d(dx).astype(dtype) - - scale = np.finfo(dtype).eps * 1e3 - atol = 1e-4 - - y0 = f(x0) - yp = f(x0 + dx*scale*np.absolute(x0)/np.absolute(dx)) - ym = f(x0 - dx*scale*np.absolute(x0)/np.absolute(dx)) - - assert np.all(np.absolute(y0.real - yp.real) < atol), (y0, yp) - assert np.all(np.absolute(y0.imag - yp.imag) < atol), (y0, yp) - assert np.all(np.absolute(y0.real - ym.real*re_sign) < atol), (y0, ym) - assert np.all(np.absolute(y0.imag - ym.imag*im_sign) < atol), (y0, ym) - - if sig_zero_ok: - # check that signed zeros also work as a displacement - jr = (x0.real == 0) & (dx.real != 0) - ji = (x0.imag == 0) & (dx.imag != 0) - - x = -x0 - x.real[jr] = 0.*dx.real - x.imag[ji] = 0.*dx.imag - x = -x - ym = f(x) - ym = ym[jr | ji] - y0 = y0[jr | ji] - assert np.all(np.absolute(y0.real - ym.real*re_sign) < atol), (y0, ym) - assert np.all(np.absolute(y0.imag - ym.imag*im_sign) < atol), (y0, ym) - -def test_copysign(): - assert np.copysign(1, -1) == -1 - old_err = np.seterr(divide="ignore") - try: - assert 
1 / np.copysign(0, -1) < 0 - assert 1 / np.copysign(0, 1) > 0 - finally: - np.seterr(**old_err) - assert np.signbit(np.copysign(np.nan, -1)) - assert not np.signbit(np.copysign(np.nan, 1)) - -def _test_nextafter(t): - one = t(1) - two = t(2) - zero = t(0) - eps = np.finfo(t).eps - assert np.nextafter(one, two) - one == eps - assert np.nextafter(one, zero) - one < 0 - assert np.isnan(np.nextafter(np.nan, one)) - assert np.isnan(np.nextafter(one, np.nan)) - assert np.nextafter(one, one) == one - -def test_nextafter(): - return _test_nextafter(np.float64) - -def test_nextafterf(): - return _test_nextafter(np.float32) - -@dec.knownfailureif(sys.platform == 'win32', "Long double support buggy on win32") -def test_nextafterl(): - return _test_nextafter(np.longdouble) - -def _test_spacing(t): - err = np.seterr(invalid='ignore') - one = t(1) - eps = np.finfo(t).eps - nan = t(np.nan) - inf = t(np.inf) - try: - assert np.spacing(one) == eps - assert np.isnan(np.spacing(nan)) - assert np.isnan(np.spacing(inf)) - assert np.isnan(np.spacing(-inf)) - assert np.spacing(t(1e30)) != 0 - finally: - np.seterr(**err) - -def test_spacing(): - return _test_spacing(np.float64) - -def test_spacingf(): - return _test_spacing(np.float32) - -@dec.knownfailureif(sys.platform == 'win32', "Long double support buggy on win32") -def test_spacingl(): - return _test_spacing(np.longdouble) - -def test_spacing_gfortran(): - # Reference from this fortran file, built with gfortran 4.3.3 on linux - # 32bits: - # PROGRAM test_spacing - # INTEGER, PARAMETER :: SGL = SELECTED_REAL_KIND(p=6, r=37) - # INTEGER, PARAMETER :: DBL = SELECTED_REAL_KIND(p=13, r=200) - # - # WRITE(*,*) spacing(0.00001_DBL) - # WRITE(*,*) spacing(1.0_DBL) - # WRITE(*,*) spacing(1000._DBL) - # WRITE(*,*) spacing(10500._DBL) - # - # WRITE(*,*) spacing(0.00001_SGL) - # WRITE(*,*) spacing(1.0_SGL) - # WRITE(*,*) spacing(1000._SGL) - # WRITE(*,*) spacing(10500._SGL) - # END PROGRAM - ref = {} - ref[np.float64] = 
[1.69406589450860068E-021, - 2.22044604925031308E-016, - 1.13686837721616030E-013, - 1.81898940354585648E-012] - ref[np.float32] = [ - 9.09494702E-13, - 1.19209290E-07, - 6.10351563E-05, - 9.76562500E-04] - - for dt, dec in zip([np.float32, np.float64], (10, 20)): - x = np.array([1e-5, 1, 1000, 10500], dtype=dt) - assert_array_almost_equal(np.spacing(x), ref[dt], decimal=dec) - -def test_nextafter_vs_spacing(): - # XXX: spacing does not handle long double yet - for t in [np.float32, np.float64]: - for _f in [1, 1e-5, 1000]: - f = t(_f) - f1 = t(_f + 1) - assert np.nextafter(f, f1) - f == np.spacing(f) - -def test_pos_nan(): - """Check np.nan is a positive nan.""" - assert np.signbit(np.nan) == 0 - -def test_reduceat(): - """Test bug in reduceat when structured arrays are not copied.""" - db = np.dtype([('name', 'S11'),('time', np.int64), ('value', np.float32)]) - a = np.empty([100], dtype=db) - a['name'] = 'Simple' - a['time'] = 10 - a['value'] = 100 - indx = [0,7,15,25] - - h2 = [] - val1 = indx[0] - for val2 in indx[1:]: - h2.append(np.add.reduce(a['value'][val1:val2])) - val1 = val2 - h2.append(np.add.reduce(a['value'][val1:])) - h2 = np.array(h2) - - # test buffered -- this should work - h1 = np.add.reduceat(a['value'], indx) - assert_array_almost_equal(h1, h2) - - # This is when the error occurs. 
- # test no buffer - res = np.setbufsize(32) - h1 = np.add.reduceat(a['value'], indx) - np.setbufsize(np.UFUNC_BUFSIZE_DEFAULT) - assert_array_almost_equal(h1, h2) - - -def test_complex_nan_comparisons(): - nans = [complex(np.nan, 0), complex(0, np.nan), complex(np.nan, np.nan)] - fins = [complex(1, 0), complex(-1, 0), complex(0, 1), complex(0, -1), - complex(1, 1), complex(-1, -1), complex(0, 0)] - - for x in nans + fins: - x = np.array([x]) - for y in nans + fins: - y = np.array([y]) - - if np.isfinite(x) and np.isfinite(y): - continue - - assert_equal(x < y, False, err_msg="%r < %r" % (x, y)) - assert_equal(x > y, False, err_msg="%r > %r" % (x, y)) - assert_equal(x <= y, False, err_msg="%r <= %r" % (x, y)) - assert_equal(x >= y, False, err_msg="%r >= %r" % (x, y)) - assert_equal(x == y, False, err_msg="%r == %r" % (x, y)) - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/core/tests/test_umath_complex.py b/pythonPackages/numpy/numpy/core/tests/test_umath_complex.py deleted file mode 100755 index ad05906d95..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_umath_complex.py +++ /dev/null @@ -1,546 +0,0 @@ -from numpy.testing import * -import numpy.core.umath as ncu -import numpy as np - -# TODO: branch cuts (use Pauli code) -# TODO: conj 'symmetry' -# TODO: FPU exceptions - -class TestCexp(object): - def test_simple(self): - check = check_complex_value - f = np.exp - - yield check, f, 1, 0, np.exp(1), 0, False - yield check, f, 0, 1, np.cos(1), np.sin(1), False - - ref = np.exp(1) * np.complex(np.cos(1), np.sin(1)) - yield check, f, 1, 1, ref.real, ref.imag, False - - def test_special_values(self): - # C99: Section G 6.3.1 - - check = check_complex_value - f = np.exp - - # cexp(+-0 + 0i) is 1 + 0i - yield check, f, np.PZERO, 0, 1, 0, False - yield check, f, np.NZERO, 0, 1, 0, False - - # cexp(x + infi) is nan + nani for finite x and raises 'invalid' FPU - # exception - yield check, f, 1, np.inf, np.nan, np.nan - 
yield check, f, -1, np.inf, np.nan, np.nan - yield check, f, 0, np.inf, np.nan, np.nan - - # cexp(inf + 0i) is inf + 0i - yield check, f, np.inf, 0, np.inf, 0 - - # cexp(-inf + yi) is +0 * (cos(y) + i sin(y)) for finite y - ref = np.complex(np.cos(1.), np.sin(1.)) - yield check, f, -np.inf, 1, np.PZERO, np.PZERO - - ref = np.complex(np.cos(np.pi * 0.75), np.sin(np.pi * 0.75)) - yield check, f, -np.inf, 0.75 * np.pi, np.NZERO, np.PZERO - - # cexp(inf + yi) is +inf * (cos(y) + i sin(y)) for finite y - ref = np.complex(np.cos(1.), np.sin(1.)) - yield check, f, np.inf, 1, np.inf, np.inf - - ref = np.complex(np.cos(np.pi * 0.75), np.sin(np.pi * 0.75)) - yield check, f, np.inf, 0.75 * np.pi, -np.inf, np.inf - - # cexp(-inf + inf i) is +-0 +- 0i (signs unspecified) - def _check_ninf_inf(dummy): - msgform = "cexp(-inf, inf) is (%f, %f), expected (+-0, +-0)" - err = np.seterr(invalid='ignore') - try: - z = f(np.array(np.complex(-np.inf, np.inf))) - if z.real != 0 or z.imag != 0: - raise AssertionError(msgform % (z.real, z.imag)) - finally: - np.seterr(**err) - - yield _check_ninf_inf, None - - # cexp(inf + inf i) is +-inf + NaNi and raises the 'invalid' FPU exception.
- def _check_inf_inf(dummy): - msgform = "cexp(inf, inf) is (%f, %f), expected (+-inf, nan)" - err = np.seterr(invalid='ignore') - try: - z = f(np.array(np.complex(np.inf, np.inf))) - if not np.isinf(z.real) or not np.isnan(z.imag): - raise AssertionError(msgform % (z.real, z.imag)) - finally: - np.seterr(**err) - - yield _check_inf_inf, None - - # cexp(-inf + nan i) is +-0 +- 0i - def _check_ninf_nan(dummy): - msgform = "cexp(-inf, nan) is (%f, %f), expected (+-0, +-0)" - err = np.seterr(invalid='ignore') - try: - z = f(np.array(np.complex(-np.inf, np.nan))) - if z.real != 0 or z.imag != 0: - raise AssertionError(msgform % (z.real, z.imag)) - finally: - np.seterr(**err) - - yield _check_ninf_nan, None - - # cexp(inf + nan i) is +-inf + nan - def _check_inf_nan(dummy): - msgform = "cexp(inf, nan) is (%f, %f), expected (+-inf, nan)" - err = np.seterr(invalid='ignore') - try: - z = f(np.array(np.complex(np.inf, np.nan))) - if not np.isinf(z.real) or not np.isnan(z.imag): - raise AssertionError(msgform % (z.real, z.imag)) - finally: - np.seterr(**err) - - yield _check_inf_nan, None - - # cexp(nan + yi) is nan + nani for y != 0 (optional: raises invalid FPU - # ex) - yield check, f, np.nan, 1, np.nan, np.nan - yield check, f, np.nan, -1, np.nan, np.nan - - yield check, f, np.nan, np.inf, np.nan, np.nan - yield check, f, np.nan, -np.inf, np.nan, np.nan - - # cexp(nan + nani) is nan + nani - yield check, f, np.nan, np.nan, np.nan, np.nan - - @dec.knownfailureif(True, "cexp(nan + 0I) is wrong on most implementations") - def test_special_values2(self): - # XXX: most implementations get it wrong here (including glibc <= 2.10) - # cexp(nan + 0i) is nan + 0i - check = check_complex_value - f = np.exp - yield check, f, np.nan, 0, np.nan, 0 - -class TestClog(TestCase): - def test_simple(self): - x = np.array([1+0j, 1+2j]) - y_r = np.log(np.abs(x)) + 1j * np.angle(x) - y = np.log(x) - for i in range(len(x)): - assert_almost_equal(y[i], y_r[i]) - - def test_special_values(self): - xl = [] - yl = [] - - # From C99 std
(Sec 6.3.2) - # XXX: check exceptions raised - # --- raise for invalid fails. - - # clog(-0 + i0) returns -inf + i pi and raises the 'divide-by-zero' - # floating-point exception. - err = np.seterr(divide='raise') - try: - x = np.array([np.NZERO], dtype=np.complex) - y = np.complex(-np.inf, np.pi) - self.assertRaises(FloatingPointError, np.log, x) - np.seterr(divide='ignore') - assert_almost_equal(np.log(x), y) - finally: - np.seterr(**err) - - xl.append(x) - yl.append(y) - - # clog(+0 + i0) returns -inf + i0 and raises the 'divide-by-zero' - # floating-point exception. - err = np.seterr(divide='raise') - try: - x = np.array([0], dtype=np.complex) - y = np.complex(-np.inf, 0) - self.assertRaises(FloatingPointError, np.log, x) - np.seterr(divide='ignore') - assert_almost_equal(np.log(x), y) - finally: - np.seterr(**err) - - xl.append(x) - yl.append(y) - - # clog(x + i inf) returns +inf + i pi/2, for finite x. - x = np.array([complex(1, np.inf)], dtype=np.complex) - y = np.complex(np.inf, 0.5 * np.pi) - assert_almost_equal(np.log(x), y) - xl.append(x) - yl.append(y) - - x = np.array([complex(-1, np.inf)], dtype=np.complex) - assert_almost_equal(np.log(x), y) - xl.append(x) - yl.append(y) - - # clog(x + iNaN) returns NaN + iNaN and optionally raises the - # 'invalid' floating-point exception, for finite x. - err = np.seterr(invalid='raise') - try: - x = np.array([complex(1., np.nan)], dtype=np.complex) - y = np.complex(np.nan, np.nan) - #self.assertRaises(FloatingPointError, np.log, x) - np.seterr(invalid='ignore') - assert_almost_equal(np.log(x), y) - finally: - np.seterr(**err) - - xl.append(x) - yl.append(y) - - err = np.seterr(invalid='raise') - try: - x = np.array([np.inf + 1j * np.nan], dtype=np.complex) - #self.assertRaises(FloatingPointError, np.log, x) - np.seterr(invalid='ignore') - assert_almost_equal(np.log(x), y) - finally: - np.seterr(**err) - - xl.append(x) - yl.append(y) - - # clog(-inf + iy) returns +inf + i pi, for finite positive-signed y.
- x = np.array([-np.inf + 1j], dtype=np.complex) - y = np.complex(np.inf, np.pi) - assert_almost_equal(np.log(x), y) - xl.append(x) - yl.append(y) - - # clog(+ inf + iy) returns +inf + i0, for finite positive-signed y. - x = np.array([np.inf + 1j], dtype=np.complex) - y = np.complex(np.inf, 0) - assert_almost_equal(np.log(x), y) - xl.append(x) - yl.append(y) - - # clog(- inf + i inf) returns +inf + i3pi /4. - x = np.array([complex(-np.inf, np.inf)], dtype=np.complex) - y = np.complex(np.inf, 0.75 * np.pi) - assert_almost_equal(np.log(x), y) - xl.append(x) - yl.append(y) - - # clog(+ inf + i inf) returns +inf + ipi /4. - x = np.array([complex(np.inf, np.inf)], dtype=np.complex) - y = np.complex(np.inf, 0.25 * np.pi) - assert_almost_equal(np.log(x), y) - xl.append(x) - yl.append(y) - - # clog(+/- inf + iNaN) returns +inf + iNaN. - x = np.array([complex(np.inf, np.nan)], dtype=np.complex) - y = np.complex(np.inf, np.nan) - assert_almost_equal(np.log(x), y) - xl.append(x) - yl.append(y) - - x = np.array([complex(-np.inf, np.nan)], dtype=np.complex) - assert_almost_equal(np.log(x), y) - xl.append(x) - yl.append(y) - - # clog(NaN + iy) returns NaN + iNaN and optionally raises the - # 'invalid' floating-point exception, for finite y. - x = np.array([complex(np.nan, 1)], dtype=np.complex) - y = np.complex(np.nan, np.nan) - assert_almost_equal(np.log(x), y) - xl.append(x) - yl.append(y) - - # clog(NaN + i inf) returns +inf + iNaN. - x = np.array([complex(np.nan, np.inf)], dtype=np.complex) - y = np.complex(np.inf, np.nan) - assert_almost_equal(np.log(x), y) - xl.append(x) - yl.append(y) - - # clog(NaN + iNaN) returns NaN + iNaN. - x = np.array([complex(np.nan, np.nan)], dtype=np.complex) - y = np.complex(np.nan, np.nan) - assert_almost_equal(np.log(x), y) - xl.append(x) - yl.append(y) - - # clog(conj(z)) = conj(clog(z)). 
- xa = np.array(xl, dtype=np.complex) - ya = np.array(yl, dtype=np.complex) - err = np.seterr(divide='ignore') - try: - for i in range(len(xa)): - assert_almost_equal(np.log(np.conj(xa[i])), np.conj(np.log(xa[i]))) - finally: - np.seterr(**err) - -class TestCsqrt(object): - - def test_simple(self): - # sqrt(1) - yield check_complex_value, np.sqrt, 1, 0, 1, 0 - - # sqrt(1i) - yield check_complex_value, np.sqrt, 0, 1, 0.5*np.sqrt(2), 0.5*np.sqrt(2), False - - # sqrt(-1) - yield check_complex_value, np.sqrt, -1, 0, 0, 1 - - def test_simple_conjugate(self): - ref = np.conj(np.sqrt(np.complex(1, 1))) - def f(z): - return np.sqrt(np.conj(z)) - yield check_complex_value, f, 1, 1, ref.real, ref.imag, False - - #def test_branch_cut(self): - # _check_branch_cut(f, -1, 0, 1, -1) - - def test_special_values(self): - check = check_complex_value - f = np.sqrt - - # C99: Sec G 6.4.2 - x, y = [], [] - - # csqrt(+-0 + 0i) is 0 + 0i - yield check, f, np.PZERO, 0, 0, 0 - yield check, f, np.NZERO, 0, 0, 0 - - # csqrt(x + infi) is inf + infi for any x (including NaN) - yield check, f, 1, np.inf, np.inf, np.inf - yield check, f, -1, np.inf, np.inf, np.inf - - yield check, f, np.PZERO, np.inf, np.inf, np.inf - yield check, f, np.NZERO, np.inf, np.inf, np.inf - yield check, f, np.inf, np.inf, np.inf, np.inf - yield check, f, -np.inf, np.inf, np.inf, np.inf - yield check, f, -np.nan, np.inf, np.inf, np.inf - - # csqrt(x + nani) is nan + nani for any finite x - yield check, f, 1, np.nan, np.nan, np.nan - yield check, f, -1, np.nan, np.nan, np.nan - yield check, f, 0, np.nan, np.nan, np.nan - - # csqrt(-inf + yi) is +0 + infi for any finite y > 0 - yield check, f, -np.inf, 1, np.PZERO, np.inf - - # csqrt(inf + yi) is +inf + 0i for any finite y > 0 - yield check, f, np.inf, 1, np.inf, np.PZERO - - # csqrt(-inf + nani) is nan +- infi (both +i infi are valid) - def _check_ninf_nan(dummy): - msgform = "csqrt(-inf, nan) is (%f, %f), expected (nan, +-inf)" - z = 
np.sqrt(np.array(np.complex(-np.inf, np.nan))) - #Fixme: ugly workaround for isinf bug. - err = np.seterr(invalid='ignore') - try: - if not (np.isnan(z.real) and np.isinf(z.imag)): - raise AssertionError(msgform % (z.real, z.imag)) - finally: - np.seterr(**err) - - yield _check_ninf_nan, None - - # csqrt(+inf + nani) is inf + nani - yield check, f, np.inf, np.nan, np.inf, np.nan - - # csqrt(nan + yi) is nan + nani for any finite y (infinite handled in x - # + nani) - yield check, f, np.nan, 0, np.nan, np.nan - yield check, f, np.nan, 1, np.nan, np.nan - yield check, f, np.nan, np.nan, np.nan, np.nan - - # XXX: check for conj(csqrt(z)) == csqrt(conj(z)) (need to fix branch - # cuts first) - -class TestCpow(TestCase): - def test_simple(self): - x = np.array([1+1j, 0+2j, 1+2j, np.inf, np.nan]) - err = np.seterr(invalid='ignore') - try: - y_r = x ** 2 - y = np.power(x, 2) - for i in range(len(x)): - assert_almost_equal(y[i], y_r[i]) - finally: - np.seterr(**err) - - def test_scalar(self): - x = np.array([1, 1j, 2, 2.5+.37j, np.inf, np.nan]) - y = np.array([1, 1j, -0.5+1.5j, -0.5+1.5j, 2, 3]) - lx = range(len(x)) - # Compute the values for complex type in python - p_r = [complex(x[i]) ** complex(y[i]) for i in lx] - # Substitute a result allowed by C99 standard - p_r[4] = complex(np.inf, np.nan) - # Do the same with numpy complex scalars - err = np.seterr(invalid='ignore') - try: - n_r = [x[i] ** y[i] for i in lx] - for i in lx: - assert_almost_equal(n_r[i], p_r[i], err_msg='Loop %d\n' % i) - finally: - np.seterr(**err) - - def test_array(self): - x = np.array([1, 1j, 2, 2.5+.37j, np.inf, np.nan]) - y = np.array([1, 1j, -0.5+1.5j, -0.5+1.5j, 2, 3]) - lx = range(len(x)) - # Compute the values for complex type in python - p_r = [complex(x[i]) ** complex(y[i]) for i in lx] - # Substitute a result allowed by C99 standard - p_r[4] = complex(np.inf, np.nan) - # Do the same with numpy arrays - err = np.seterr(invalid='ignore') - try: - n_r = x ** y - for i in lx: - 
assert_almost_equal(n_r[i], p_r[i], err_msg='Loop %d\n' % i) - finally: - np.seterr(**err) - -class TestCabs(object): - def test_simple(self): - x = np.array([1+1j, 0+2j, 1+2j, np.inf, np.nan]) - y_r = np.array([np.sqrt(2.), 2, np.sqrt(5), np.inf, np.nan]) - y = np.abs(x) - for i in range(len(x)): - assert_almost_equal(y[i], y_r[i]) - - def test_fabs(self): - # Test that np.abs(x +- 0j) == np.abs(x) (as mandated by C99 for cabs) - x = np.array([1+0j], dtype=np.complex) - assert_array_equal(np.abs(x), np.real(x)) - - x = np.array([complex(1, np.NZERO)], dtype=np.complex) - assert_array_equal(np.abs(x), np.real(x)) - - x = np.array([complex(np.inf, np.NZERO)], dtype=np.complex) - assert_array_equal(np.abs(x), np.real(x)) - - x = np.array([complex(np.nan, np.NZERO)], dtype=np.complex) - assert_array_equal(np.abs(x), np.real(x)) - - def test_cabs_inf_nan(self): - x, y = [], [] - - # cabs(+-nan + nani) returns nan - x.append(np.nan) - y.append(np.nan) - yield check_real_value, np.abs, np.nan, np.nan, np.nan - - x.append(np.nan) - y.append(-np.nan) - yield check_real_value, np.abs, -np.nan, np.nan, np.nan - - # According to C99 standard, if exactly one of the real/part is inf and - # the other nan, then cabs should return inf - x.append(np.inf) - y.append(np.nan) - yield check_real_value, np.abs, np.inf, np.nan, np.inf - - x.append(-np.inf) - y.append(np.nan) - yield check_real_value, np.abs, -np.inf, np.nan, np.inf - - # cabs(conj(z)) == conj(cabs(z)) (= cabs(z)) - def f(a): - return np.abs(np.conj(a)) - def g(a, b): - return np.abs(np.complex(a, b)) - - xa = np.array(x, dtype=np.complex) - ya = np.array(x, dtype=np.complex) - for i in range(len(xa)): - ref = g(x[i], y[i]) - yield check_real_value, f, x[i], y[i], ref - -class TestCarg(object): - def test_simple(self): - check_real_value(ncu._arg, 1, 0, 0, False) - check_real_value(ncu._arg, 0, 1, 0.5*np.pi, False) - - check_real_value(ncu._arg, 1, 1, 0.25*np.pi, False) - check_real_value(ncu._arg, np.PZERO, np.PZERO, 
np.PZERO) - - @dec.knownfailureif(True, - "Complex arithmetic with signed zero is buggy on most implementations") - def test_zero(self): - # carg(-0 +- 0i) returns +- pi - yield check_real_value, ncu._arg, np.NZERO, np.PZERO, np.pi, False - yield check_real_value, ncu._arg, np.NZERO, np.NZERO, -np.pi, False - - # carg(+0 +- 0i) returns +- 0 - yield check_real_value, ncu._arg, np.PZERO, np.PZERO, np.PZERO - yield check_real_value, ncu._arg, np.PZERO, np.NZERO, np.NZERO - - # carg(x +- 0i) returns +- 0 for x > 0 - yield check_real_value, ncu._arg, 1, np.PZERO, np.PZERO, False - yield check_real_value, ncu._arg, 1, np.NZERO, np.NZERO, False - - # carg(x +- 0i) returns +- pi for x < 0 - yield check_real_value, ncu._arg, -1, np.PZERO, np.pi, False - yield check_real_value, ncu._arg, -1, np.NZERO, -np.pi, False - - # carg(+- 0 + yi) returns pi/2 for y > 0 - yield check_real_value, ncu._arg, np.PZERO, 1, 0.5 * np.pi, False - yield check_real_value, ncu._arg, np.NZERO, 1, 0.5 * np.pi, False - - # carg(+- 0 + yi) returns -pi/2 for y < 0 - yield check_real_value, ncu._arg, np.PZERO, -1, -0.5 * np.pi, False - yield check_real_value, ncu._arg, np.NZERO, -1, -0.5 * np.pi, False - - #def test_branch_cuts(self): - # _check_branch_cut(ncu._arg, -1, 1j, -1, 1) - - def test_special_values(self): - # carg(-np.inf +- yi) returns +-pi for finite y > 0 - yield check_real_value, ncu._arg, -np.inf, 1, np.pi, False - yield check_real_value, ncu._arg, -np.inf, -1, -np.pi, False - - # carg(np.inf +- yi) returns +-0 for finite y > 0 - yield check_real_value, ncu._arg, np.inf, 1, np.PZERO, False - yield check_real_value, ncu._arg, np.inf, -1, np.NZERO, False - - # carg(x +- np.infi) returns +-pi/2 for finite x - yield check_real_value, ncu._arg, 1, np.inf, 0.5 * np.pi, False - yield check_real_value, ncu._arg, 1, -np.inf, -0.5 * np.pi, False - - # carg(-np.inf +- np.infi) returns +-3pi/4 - yield check_real_value, ncu._arg, -np.inf, np.inf, 0.75 * np.pi, False - yield check_real_value, ncu._arg,
-np.inf, -np.inf, -0.75 * np.pi, False - - # carg(np.inf +- np.infi) returns +-pi/4 - yield check_real_value, ncu._arg, np.inf, np.inf, 0.25 * np.pi, False - yield check_real_value, ncu._arg, np.inf, -np.inf, -0.25 * np.pi, False - - # carg(x + yi) returns np.nan if x or y is nan - yield check_real_value, ncu._arg, np.nan, 0, np.nan, False - yield check_real_value, ncu._arg, 0, np.nan, np.nan, False - - yield check_real_value, ncu._arg, np.nan, np.inf, np.nan, False - yield check_real_value, ncu._arg, np.inf, np.nan, np.nan, False - -def check_real_value(f, x1, y1, x, exact=True): - z1 = np.array([complex(x1, y1)]) - if exact: - assert_equal(f(z1), x) - else: - assert_almost_equal(f(z1), x) - -def check_complex_value(f, x1, y1, x2, y2, exact=True): - err = np.seterr(invalid='ignore') - z1 = np.array([complex(x1, y1)]) - z2 = np.complex(x2, y2) - try: - if exact: - assert_equal(f(z1), z2) - else: - assert_almost_equal(f(z1), z2) - finally: - np.seterr(**err) - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/core/tests/test_unicode.py b/pythonPackages/numpy/numpy/core/tests/test_unicode.py deleted file mode 100755 index 5ae4ad9dd9..0000000000 --- a/pythonPackages/numpy/numpy/core/tests/test_unicode.py +++ /dev/null @@ -1,341 +0,0 @@ -import sys - -from numpy.testing import * -from numpy.core import * -from numpy.compat import asbytes - -# Guess the UCS length for this python interpreter -if sys.version_info[0] >= 3: - import array as _array - ucs4 = (_array.array('u').itemsize == 4) - def buffer_length(arr): - if isinstance(arr, unicode): - return _array.array('u').itemsize * len(arr) - v = memoryview(arr) - if v.shape is None: - return len(v) * v.itemsize - else: - return prod(v.shape) * v.itemsize -else: - if len(buffer(u'u')) == 4: - ucs4 = True - else: - ucs4 = False - def buffer_length(arr): - if isinstance(arr, ndarray): - return len(arr.data) - return len(buffer(arr)) - -# Value that can be represented in UCS2 
interpreters -ucs2_value = u'\uFFFF' -# Value that cannot be represented in UCS2 interpreters (but can in UCS4) -ucs4_value = u'\U0010FFFF' - - -############################################################ -# Creation tests -############################################################ - -class create_zeros(object): - """Check the creation of zero-valued arrays""" - - def content_check(self, ua, ua_scalar, nbytes): - - # Check the length of the unicode base type - self.assert_(int(ua.dtype.str[2:]) == self.ulen) - # Check the length of the data buffer - self.assert_(buffer_length(ua) == nbytes) - # Small check that data in array element is ok - self.assert_(ua_scalar == u'') - # Encode to ascii and double check - self.assert_(ua_scalar.encode('ascii') == asbytes('')) - # Check buffer lengths for scalars - if ucs4: - self.assert_(buffer_length(ua_scalar) == 0) - else: - self.assert_(buffer_length(ua_scalar) == 0) - - def test_zeros0D(self): - """Check creation of 0-dimensional objects""" - ua = zeros((), dtype='U%s' % self.ulen) - self.content_check(ua, ua[()], 4*self.ulen) - - def test_zerosSD(self): - """Check creation of single-dimensional objects""" - ua = zeros((2,), dtype='U%s' % self.ulen) - self.content_check(ua, ua[0], 4*self.ulen*2) - self.content_check(ua, ua[1], 4*self.ulen*2) - - def test_zerosMD(self): - """Check creation of multi-dimensional objects""" - ua = zeros((2,3,4), dtype='U%s' % self.ulen) - self.content_check(ua, ua[0,0,0], 4*self.ulen*2*3*4) - self.content_check(ua, ua[-1,-1,-1], 4*self.ulen*2*3*4) - - -class test_create_zeros_1(create_zeros, TestCase): - """Check the creation of zero-valued arrays (size 1)""" - ulen = 1 - - -class test_create_zeros_2(create_zeros, TestCase): - """Check the creation of zero-valued arrays (size 2)""" - ulen = 2 - - -class test_create_zeros_1009(create_zeros, TestCase): - """Check the creation of zero-valued arrays (size 1009)""" - ulen = 1009 - - -class create_values(object): - """Check the creation of 
unicode arrays with values""" - - def content_check(self, ua, ua_scalar, nbytes): - - # Check the length of the unicode base type - self.assert_(int(ua.dtype.str[2:]) == self.ulen) - # Check the length of the data buffer - self.assert_(buffer_length(ua) == nbytes) - # Small check that data in array element is ok - self.assert_(ua_scalar == self.ucs_value*self.ulen) - # Encode to UTF-8 and double check - self.assert_(ua_scalar.encode('utf-8') == \ - (self.ucs_value*self.ulen).encode('utf-8')) - # Check buffer lengths for scalars - if ucs4: - self.assert_(buffer_length(ua_scalar) == 4*self.ulen) - else: - if self.ucs_value == ucs4_value: - # In UCS2, the \U0010FFFF will be represented using a - # surrogate *pair* - self.assert_(buffer_length(ua_scalar) == 2*2*self.ulen) - else: - # In UCS2, the \uFFFF will be represented using a - # regular 2-byte word - self.assert_(buffer_length(ua_scalar) == 2*self.ulen) - - def test_values0D(self): - """Check creation of 0-dimensional objects with values""" - ua = array(self.ucs_value*self.ulen, dtype='U%s' % self.ulen) - self.content_check(ua, ua[()], 4*self.ulen) - - def test_valuesSD(self): - """Check creation of single-dimensional objects with values""" - ua = array([self.ucs_value*self.ulen]*2, dtype='U%s' % self.ulen) - self.content_check(ua, ua[0], 4*self.ulen*2) - self.content_check(ua, ua[1], 4*self.ulen*2) - - def test_valuesMD(self): - """Check creation of multi-dimensional objects with values""" - ua = array([[[self.ucs_value*self.ulen]*2]*3]*4, dtype='U%s' % self.ulen) - self.content_check(ua, ua[0,0,0], 4*self.ulen*2*3*4) - self.content_check(ua, ua[-1,-1,-1], 4*self.ulen*2*3*4) - - -class test_create_values_1_ucs2(create_values, TestCase): - """Check the creation of valued arrays (size 1, UCS2 values)""" - ulen = 1 - ucs_value = ucs2_value - - -class test_create_values_1_ucs4(create_values, TestCase): - """Check the creation of valued arrays (size 1, UCS4 values)""" - ulen = 1 - ucs_value = ucs4_value - - -class 
test_create_values_2_ucs2(create_values, TestCase): - """Check the creation of valued arrays (size 2, UCS2 values)""" - ulen = 2 - ucs_value = ucs2_value - - -class test_create_values_2_ucs4(create_values, TestCase): - """Check the creation of valued arrays (size 2, UCS4 values)""" - ulen = 2 - ucs_value = ucs4_value - - -class test_create_values_1009_ucs2(create_values, TestCase): - """Check the creation of valued arrays (size 1009, UCS2 values)""" - ulen = 1009 - ucs_value = ucs2_value - - -class test_create_values_1009_ucs4(create_values, TestCase): - """Check the creation of valued arrays (size 1009, UCS4 values)""" - ulen = 1009 - ucs_value = ucs4_value - - -############################################################ -# Assignment tests -############################################################ - -class assign_values(object): - """Check the assignment of unicode arrays with values""" - - def content_check(self, ua, ua_scalar, nbytes): - - # Check the length of the unicode base type - self.assert_(int(ua.dtype.str[2:]) == self.ulen) - # Check the length of the data buffer - self.assert_(buffer_length(ua) == nbytes) - # Small check that data in array element is ok - self.assert_(ua_scalar == self.ucs_value*self.ulen) - # Encode to UTF-8 and double check - self.assert_(ua_scalar.encode('utf-8') == \ - (self.ucs_value*self.ulen).encode('utf-8')) - # Check buffer lengths for scalars - if ucs4: - self.assert_(buffer_length(ua_scalar) == 4*self.ulen) - else: - if self.ucs_value == ucs4_value: - # In UCS2, the \U0010FFFF will be represented using a - # surrogate *pair* - self.assert_(buffer_length(ua_scalar) == 2*2*self.ulen) - else: - # In UCS2, the \uFFFF will be represented using a - # regular 2-byte word - self.assert_(buffer_length(ua_scalar) == 2*self.ulen) - - def test_values0D(self): - """Check assignment of 0-dimensional objects with values""" - ua = zeros((), dtype='U%s' % self.ulen) - ua[()] = self.ucs_value*self.ulen - self.content_check(ua, ua[()], 
4*self.ulen) - - def test_valuesSD(self): - """Check assignment of single-dimensional objects with values""" - ua = zeros((2,), dtype='U%s' % self.ulen) - ua[0] = self.ucs_value*self.ulen - self.content_check(ua, ua[0], 4*self.ulen*2) - ua[1] = self.ucs_value*self.ulen - self.content_check(ua, ua[1], 4*self.ulen*2) - - def test_valuesMD(self): - """Check assignment of multi-dimensional objects with values""" - ua = zeros((2,3,4), dtype='U%s' % self.ulen) - ua[0,0,0] = self.ucs_value*self.ulen - self.content_check(ua, ua[0,0,0], 4*self.ulen*2*3*4) - ua[-1,-1,-1] = self.ucs_value*self.ulen - self.content_check(ua, ua[-1,-1,-1], 4*self.ulen*2*3*4) - - -class test_assign_values_1_ucs2(assign_values, TestCase): - """Check the assignment of valued arrays (size 1, UCS2 values)""" - ulen = 1 - ucs_value = ucs2_value - - -class test_assign_values_1_ucs4(assign_values, TestCase): - """Check the assignment of valued arrays (size 1, UCS4 values)""" - ulen = 1 - ucs_value = ucs4_value - - -class test_assign_values_2_ucs2(assign_values, TestCase): - """Check the assignment of valued arrays (size 2, UCS2 values)""" - ulen = 2 - ucs_value = ucs2_value - - -class test_assign_values_2_ucs4(assign_values, TestCase): - """Check the assignment of valued arrays (size 2, UCS4 values)""" - ulen = 2 - ucs_value = ucs4_value - - -class test_assign_values_1009_ucs2(assign_values, TestCase): - """Check the assignment of valued arrays (size 1009, UCS2 values)""" - ulen = 1009 - ucs_value = ucs2_value - - -class test_assign_values_1009_ucs4(assign_values, TestCase): - """Check the assignment of valued arrays (size 1009, UCS4 values)""" - ulen = 1009 - ucs_value = ucs4_value - - - -############################################################ -# Byteorder tests -############################################################ - -class byteorder_values: - """Check the byteorder of unicode arrays in round-trip conversions""" - - def test_values0D(self): - """Check byteorder of 0-dimensional objects""" 
- ua = array(self.ucs_value*self.ulen, dtype='U%s' % self.ulen) - ua2 = ua.newbyteorder() - # This changes the interpretation of the data region (but not the - # actual data), therefore the returned scalars are not - # the same (they are byte-swapped versions of each other). - self.assert_(ua[()] != ua2[()]) - ua3 = ua2.newbyteorder() - # Arrays must be equal after the round-trip - assert_equal(ua, ua3) - - def test_valuesSD(self): - """Check byteorder of single-dimensional objects""" - ua = array([self.ucs_value*self.ulen]*2, dtype='U%s' % self.ulen) - ua2 = ua.newbyteorder() - self.assert_(ua[0] != ua2[0]) - self.assert_(ua[-1] != ua2[-1]) - ua3 = ua2.newbyteorder() - # Arrays must be equal after the round-trip - assert_equal(ua, ua3) - - def test_valuesMD(self): - """Check byteorder of multi-dimensional objects""" - ua = array([[[self.ucs_value*self.ulen]*2]*3]*4, - dtype='U%s' % self.ulen) - ua2 = ua.newbyteorder() - self.assert_(ua[0,0,0] != ua2[0,0,0]) - self.assert_(ua[-1,-1,-1] != ua2[-1,-1,-1]) - ua3 = ua2.newbyteorder() - # Arrays must be equal after the round-trip - assert_equal(ua, ua3) - - -class test_byteorder_1_ucs2(byteorder_values, TestCase): - """Check the byteorder in unicode (size 1, UCS2 values)""" - ulen = 1 - ucs_value = ucs2_value - - -class test_byteorder_1_ucs4(byteorder_values, TestCase): - """Check the byteorder in unicode (size 1, UCS4 values)""" - ulen = 1 - ucs_value = ucs4_value - - -class test_byteorder_2_ucs2(byteorder_values, TestCase): - """Check the byteorder in unicode (size 2, UCS2 values)""" - ulen = 2 - ucs_value = ucs2_value - - -class test_byteorder_2_ucs4(byteorder_values, TestCase): - """Check the byteorder in unicode (size 2, UCS4 values)""" - ulen = 2 - ucs_value = ucs4_value - - -class test_byteorder_1009_ucs2(byteorder_values, TestCase): - """Check the byteorder in unicode (size 1009, UCS2 values)""" - ulen = 1009 - ucs_value = ucs2_value - - -class test_byteorder_1009_ucs4(byteorder_values, TestCase): - """Check the 
byteorder in unicode (size 1009, UCS4 values)""" - ulen = 1009 - ucs_value = ucs4_value - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/ctypeslib.py b/pythonPackages/numpy/numpy/ctypeslib.py deleted file mode 100755 index ccaa3cfd6d..0000000000 --- a/pythonPackages/numpy/numpy/ctypeslib.py +++ /dev/null @@ -1,420 +0,0 @@ -""" -============================ -``ctypes`` Utility Functions -============================ - -See Also ---------- -load_library : Load a C library. -ndpointer : Array restype/argtype with verification. -as_ctypes : Create a ctypes array from an ndarray. -as_array : Create an ndarray from a ctypes array. - -References ----------- -.. [1] "SciPy Cookbook: ctypes", http://www.scipy.org/Cookbook/Ctypes - -Examples --------- -Load the C library: - ->>> _lib = np.ctypeslib.load_library('libmystuff', '.') #doctest: +SKIP - -Our result type, an ndarray that must be of type double, be 1-dimensional -and is C-contiguous in memory: - ->>> array_1d_double = np.ctypeslib.ndpointer( -... dtype=np.double, -... ndim=1, flags='CONTIGUOUS') #doctest: +SKIP - -Our C-function typically takes an array and updates its values -in-place. 
For example:: - - void foo_func(double* x, int length) - { - int i; - for (i = 0; i < length; i++) { - x[i] = i*i; - } - } - -We wrap it using: - ->>> _lib.foo_func.restype = None #doctest: +SKIP ->>> _lib.foo_func.argtypes = [array_1d_double, c_int] #doctest: +SKIP - -Then, we're ready to call ``foo_func``: - ->>> out = np.empty(15, dtype=np.double) ->>> _lib.foo_func(out, len(out)) #doctest: +SKIP - -""" -__all__ = ['load_library', 'ndpointer', 'test', 'ctypes_load_library', - 'c_intp', 'as_ctypes', 'as_array'] - -import sys, os -from numpy import integer, ndarray, dtype as _dtype, deprecate, array -from numpy.core.multiarray import _flagdict, flagsobj - -try: - import ctypes -except ImportError: - ctypes = None - -if ctypes is None: - def _dummy(*args, **kwds): - """ - Dummy object that raises an ImportError if ctypes is not available. - - Raises - ------ - ImportError - If ctypes is not available. - - """ - raise ImportError, "ctypes is not available." - ctypes_load_library = _dummy - load_library = _dummy - as_ctypes = _dummy - as_array = _dummy - from numpy import intp as c_intp - _ndptr_base = object -else: - import numpy.core._internal as nic - c_intp = nic._getintp_ctype() - del nic - _ndptr_base = ctypes.c_void_p - - # Adapted from Albert Strasheim - def load_library(libname, loader_path): - if ctypes.__version__ < '1.0.1': - import warnings - warnings.warn("All features of ctypes interface may not work " \ - "with ctypes < 1.0.1") - - ext = os.path.splitext(libname)[1] - - if not ext: - # Try to load library with platform-specific name, otherwise - # default to libname.[so|pyd]. Sometimes, these files are built - # erroneously on non-linux platforms.
- libname_ext = ['%s.so' % libname, '%s.pyd' % libname] - if sys.platform == 'win32': - libname_ext.insert(0, '%s.dll' % libname) - elif sys.platform == 'darwin': - libname_ext.insert(0, '%s.dylib' % libname) - else: - libname_ext = [libname] - - loader_path = os.path.abspath(loader_path) - if not os.path.isdir(loader_path): - libdir = os.path.dirname(loader_path) - else: - libdir = loader_path - - for ln in libname_ext: - try: - libpath = os.path.join(libdir, ln) - return ctypes.cdll[libpath] - except OSError, e: - pass - - raise e - - ctypes_load_library = deprecate(load_library, 'ctypes_load_library', - 'load_library') - -def _num_fromflags(flaglist): - num = 0 - for val in flaglist: - num += _flagdict[val] - return num - -_flagnames = ['C_CONTIGUOUS', 'F_CONTIGUOUS', 'ALIGNED', 'WRITEABLE', - 'OWNDATA', 'UPDATEIFCOPY'] -def _flags_fromnum(num): - res = [] - for key in _flagnames: - value = _flagdict[key] - if (num & value): - res.append(key) - return res - - -class _ndptr(_ndptr_base): - - def _check_retval_(self): - """This method is called when this class is used as the .restype - attribute for a shared-library function.
It constructs a numpy - array from a void pointer.""" - return array(self) - - @property - def __array_interface__(self): - return {'descr': self._dtype_.descr, - '__ref': self, - 'strides': None, - 'shape': self._shape_, - 'version': 3, - 'typestr': self._dtype_.descr[0][1], - 'data': (self.value, False), - } - - @classmethod - def from_param(cls, obj): - if not isinstance(obj, ndarray): - raise TypeError, "argument must be an ndarray" - if cls._dtype_ is not None \ - and obj.dtype != cls._dtype_: - raise TypeError, "array must have data type %s" % cls._dtype_ - if cls._ndim_ is not None \ - and obj.ndim != cls._ndim_: - raise TypeError, "array must have %d dimension(s)" % cls._ndim_ - if cls._shape_ is not None \ - and obj.shape != cls._shape_: - raise TypeError, "array must have shape %s" % str(cls._shape_) - if cls._flags_ is not None \ - and ((obj.flags.num & cls._flags_) != cls._flags_): - raise TypeError, "array must have flags %s" % \ - _flags_fromnum(cls._flags_) - return obj.ctypes - - -# Factory for an array-checking class with from_param defined for -# use with ctypes argtypes mechanism -_pointer_type_cache = {} -def ndpointer(dtype=None, ndim=None, shape=None, flags=None): - """ - Array-checking restype/argtypes. - - An ndpointer instance is used to describe an ndarray in restypes - and argtypes specifications. This approach is more flexible than - using, for example, ``POINTER(c_double)``, since several restrictions - can be specified, which are verified upon calling the ctypes function. - These include data type, number of dimensions, shape and flags. If a - given array does not satisfy the specified restrictions, - a ``TypeError`` is raised. - - Parameters - ---------- - dtype : data-type, optional - Array data-type. - ndim : int, optional - Number of array dimensions. - shape : tuple of ints, optional - Array shape. 
- flags : str or tuple of str - Array flags; may be one or more of: - - - C_CONTIGUOUS / C / CONTIGUOUS - - F_CONTIGUOUS / F / FORTRAN - - OWNDATA / O - - WRITEABLE / W - - ALIGNED / A - - UPDATEIFCOPY / U - - Returns - ------- - klass : ndpointer type object - A type object, which is an ``_ndptr`` instance containing - dtype, ndim, shape and flags information. - - Raises - ------ - TypeError - If a given array does not satisfy the specified restrictions. - - Examples - -------- - >>> clib.somefunc.argtypes = [np.ctypeslib.ndpointer(dtype=np.float64, - ... ndim=1, - ... flags='C_CONTIGUOUS')] - ... #doctest: +SKIP - >>> clib.somefunc(np.array([1, 2, 3], dtype=np.float64)) - ... #doctest: +SKIP - - """ - - if dtype is not None: - dtype = _dtype(dtype) - num = None - if flags is not None: - if isinstance(flags, str): - flags = flags.split(',') - elif isinstance(flags, (int, integer)): - num = flags - flags = _flags_fromnum(num) - elif isinstance(flags, flagsobj): - num = flags.num - flags = _flags_fromnum(num) - if num is None: - try: - flags = [x.strip().upper() for x in flags] - except: - raise TypeError, "invalid flags specification" - num = _num_fromflags(flags) - try: - return _pointer_type_cache[(dtype, ndim, shape, num)] - except KeyError: - pass - if dtype is None: - name = 'any' - elif dtype.names: - name = str(id(dtype)) - else: - name = dtype.str - if ndim is not None: - name += "_%dd" % ndim - if shape is not None: - try: - strshape = [str(x) for x in shape] - except TypeError: - strshape = [str(shape)] - shape = (shape,) - shape = tuple(shape) - name += "_"+"x".join(strshape) - if flags is not None: - name += "_"+"_".join(flags) - else: - flags = [] - klass = type("ndpointer_%s"%name, (_ndptr,), - {"_dtype_": dtype, - "_shape_" : shape, - "_ndim_" : ndim, - "_flags_" : num}) - _pointer_type_cache[(dtype, ndim, shape, num)] = klass - return klass - -if ctypes is not None: - ct = ctypes - ################################################################ - # simple types - - # 
maps the numpy typecodes like '=2.3 distutils. - # Any changes here should be applied also to fcompiler.compile - # method to support pre Python 2.3 distutils. - if not sources: - return [] - # FIXME:RELATIVE_IMPORT - if sys.version_info[0] < 3: - from fcompiler import FCompiler - else: - from numpy.distutils.fcompiler import FCompiler - if isinstance(self, FCompiler): - display = [] - for fc in ['f77','f90','fix']: - fcomp = getattr(self,'compiler_'+fc) - if fcomp is None: - continue - display.append("Fortran %s compiler: %s" % (fc, ' '.join(fcomp))) - display = '\n'.join(display) - else: - ccomp = self.compiler_so - display = "C compiler: %s\n" % (' '.join(ccomp),) - log.info(display) - macros, objects, extra_postargs, pp_opts, build = \ - self._setup_compile(output_dir, macros, include_dirs, sources, - depends, extra_postargs) - cc_args = self._get_cc_args(pp_opts, debug, extra_preargs) - display = "compile options: '%s'" % (' '.join(cc_args)) - if extra_postargs: - display += "\nextra options: '%s'" % (' '.join(extra_postargs)) - log.info(display) - - # build any sources in same order as they were originally specified - # especially important for fortran .f90 files using modules - if isinstance(self, FCompiler): - objects_to_build = build.keys() - for obj in objects: - if obj in objects_to_build: - src, ext = build[obj] - if self.compiler_type=='absoft': - obj = cyg2win32(obj) - src = cyg2win32(src) - self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts) - else: - for obj, (src, ext) in build.items(): - self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts) - - # Return *all* object filenames, not just the ones we just built. - return objects - -replace_method(CCompiler, 'compile', CCompiler_compile) - -def CCompiler_customize_cmd(self, cmd, ignore=()): - """ - Customize compiler using distutils command. - - Parameters - ---------- - cmd : class instance - An instance inheriting from `distutils.cmd.Command`. 
- ignore : sequence of str, optional - List of `CCompiler` commands (without ``'set_'``) that should not be - altered. Strings that are checked for are: - ``('include_dirs', 'define', 'undef', 'libraries', 'library_dirs', - 'rpath', 'link_objects')``. - - Returns - ------- - None - - """ - log.info('customize %s using %s' % (self.__class__.__name__, - cmd.__class__.__name__)) - def allow(attr): - return getattr(cmd, attr, None) is not None and attr not in ignore - - if allow('include_dirs'): - self.set_include_dirs(cmd.include_dirs) - if allow('define'): - for (name,value) in cmd.define: - self.define_macro(name, value) - if allow('undef'): - for macro in cmd.undef: - self.undefine_macro(macro) - if allow('libraries'): - self.set_libraries(self.libraries + cmd.libraries) - if allow('library_dirs'): - self.set_library_dirs(self.library_dirs + cmd.library_dirs) - if allow('rpath'): - self.set_runtime_library_dirs(cmd.rpath) - if allow('link_objects'): - self.set_link_objects(cmd.link_objects) - -replace_method(CCompiler, 'customize_cmd', CCompiler_customize_cmd) - -def _compiler_to_string(compiler): - props = [] - mx = 0 - keys = compiler.executables.keys() - for key in ['version','libraries','library_dirs', - 'object_switch','compile_switch', - 'include_dirs','define','undef','rpath','link_objects']: - if key not in keys: - keys.append(key) - for key in keys: - if hasattr(compiler,key): - v = getattr(compiler, key) - mx = max(mx,len(key)) - props.append((key,repr(v))) - lines = [] - format = '%-' + repr(mx+1) + 's = %s' - for prop in props: - lines.append(format % prop) - return '\n'.join(lines) - -def CCompiler_show_customization(self): - """ - Print the compiler customizations to stdout. - - Parameters - ---------- - None - - Returns - ------- - None - - Notes - ----- - Printing is only done if the distutils log threshold is < 2. 
- - """ - if 0: - for attrname in ['include_dirs','define','undef', - 'libraries','library_dirs', - 'rpath','link_objects']: - attr = getattr(self,attrname,None) - if not attr: - continue - log.info("compiler '%s' is set to %s" % (attrname,attr)) - try: - self.get_version() - except: - pass - if log._global_log.threshold<2: - print('*'*80) - print(self.__class__) - print(_compiler_to_string(self)) - print('*'*80) - -replace_method(CCompiler, 'show_customization', CCompiler_show_customization) - -def CCompiler_customize(self, dist, need_cxx=0): - """ - Do any platform-specific customization of a compiler instance. - - This method calls `distutils.sysconfig.customize_compiler` for - platform-specific customization, as well as optionally remove a flag - to suppress spurious warnings in case C++ code is being compiled. - - Parameters - ---------- - dist : object - This parameter is not used for anything. - need_cxx : bool, optional - Whether or not C++ has to be compiled. If so (True), the - ``"-Wstrict-prototypes"`` option is removed to prevent spurious - warnings. Default is False. - - Returns - ------- - None - - Notes - ----- - All the default options used by distutils can be extracted with:: - - from distutils import sysconfig - sysconfig.get_config_vars('CC', 'CXX', 'OPT', 'BASECFLAGS', - 'CCSHARED', 'LDSHARED', 'SO') - - """ - # See FCompiler.customize for suggested usage. - log.info('customize %s' % (self.__class__.__name__)) - customize_compiler(self) - if need_cxx: - # In general, distutils uses -Wstrict-prototypes, but this option is - # not valid for C++ code, only for C. Remove it if it's there to - # avoid a spurious warning on every compilation. 
All the default - # options used by distutils can be extracted with: - - # from distutils import sysconfig - # sysconfig.get_config_vars('CC', 'CXX', 'OPT', 'BASECFLAGS', - # 'CCSHARED', 'LDSHARED', 'SO') - try: - self.compiler_so.remove('-Wstrict-prototypes') - except (AttributeError, ValueError): - pass - - if hasattr(self,'compiler') and 'cc' in self.compiler[0]: - if not self.compiler_cxx: - if self.compiler[0].startswith('gcc'): - a, b = 'gcc', 'g++' - else: - a, b = 'cc', 'c++' - self.compiler_cxx = [self.compiler[0].replace(a,b)]\ - + self.compiler[1:] - else: - if hasattr(self,'compiler'): - log.warn("#### %s #######" % (self.compiler,)) - log.warn('Missing compiler_cxx fix for '+self.__class__.__name__) - return - -replace_method(CCompiler, 'customize', CCompiler_customize) - -def simple_version_match(pat=r'[-.\d]+', ignore='', start=''): - """ - Simple matching of version numbers, for use in CCompiler and FCompiler. - - Parameters - ---------- - pat : str, optional - A regular expression matching version numbers. - Default is ``r'[-.\\d]+'``. - ignore : str, optional - A regular expression matching patterns to skip. - Default is ``''``, in which case nothing is skipped. - start : str, optional - A regular expression matching the start of where to start looking - for version numbers. - Default is ``''``, in which case searching is started at the - beginning of the version string given to `matcher`. - - Returns - ------- - matcher : callable - A function that is appropriate to use as the ``.version_match`` - attribute of a `CCompiler` class. `matcher` takes a single parameter, - a version string. 
- - """ - def matcher(self, version_string): - # version string may appear in the second line, so getting rid - # of new lines: - version_string = version_string.replace('\n',' ') - pos = 0 - if start: - m = re.match(start, version_string) - if not m: - return None - pos = m.end() - while 1: - m = re.search(pat, version_string[pos:]) - if not m: - return None - if ignore and re.match(ignore, m.group(0)): - pos = m.end() - continue - break - return m.group(0) - return matcher - -def CCompiler_get_version(self, force=False, ok_status=[0]): - """ - Return compiler version, or None if compiler is not available. - - Parameters - ---------- - force : bool, optional - If True, force a new determination of the version, even if the - compiler already has a version attribute. Default is False. - ok_status : list of int, optional - The list of status values returned by the version look-up process - for which a version string is returned. If the status value is not - in `ok_status`, None is returned. Default is ``[0]``. - - Returns - ------- - version : str or None - Version string, in the format of `distutils.version.LooseVersion`. 
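Stripped of the `CCompiler` plumbing, the matcher closure returned by `simple_version_match` above behaves like this sketch; note that because each search runs on a slice, the slice-relative offset is accumulated into `pos` so the skip loop makes progress:

```python
import re

def simple_version_match(pat=r'[-.\d]+', ignore='', start=''):
    """Build a matcher that pulls the first plausible version number
    out of a compiler's version banner, skipping `ignore` matches."""
    def matcher(version_string):
        # the version may appear on a later line, so flatten newlines first
        version_string = version_string.replace('\n', ' ')
        pos = 0
        if start:
            m = re.match(start, version_string)
            if not m:
                return None
            pos = m.end()
        while True:
            m = re.search(pat, version_string[pos:])
            if not m:
                return None
            if ignore and re.match(ignore, m.group(0)):
                pos += m.end()  # accumulate: m.end() is relative to the slice
                continue
            return m.group(0)
    return matcher

match = simple_version_match(start=r'gcc')
print(match('gcc (GCC) 4.2.1 20070719'))  # -> 4.2.1
```

The `ignore` pattern is what lets a matcher step over build dates or vendor numbers that also look like `[-.\d]+` runs.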
- - """ - if not force and hasattr(self,'version'): - return self.version - self.find_executables() - try: - version_cmd = self.version_cmd - except AttributeError: - return None - if not version_cmd or not version_cmd[0]: - return None - try: - matcher = self.version_match - except AttributeError: - try: - pat = self.version_pattern - except AttributeError: - return None - def matcher(version_string): - m = re.match(pat, version_string) - if not m: - return None - version = m.group('version') - return version - - status, output = exec_command(version_cmd,use_tee=0) - - version = None - if status in ok_status: - version = matcher(output) - if version: - version = LooseVersion(version) - self.version = version - return version - -replace_method(CCompiler, 'get_version', CCompiler_get_version) - -def CCompiler_cxx_compiler(self): - """ - Return the C++ compiler. - - Parameters - ---------- - None - - Returns - ------- - cxx : class instance - The C++ compiler, as a `CCompiler` instance. - - """ - if self.compiler_type=='msvc': return self - cxx = copy(self) - cxx.compiler_so = [cxx.compiler_cxx[0]] + cxx.compiler_so[1:] - if sys.platform.startswith('aix') and 'ld_so_aix' in cxx.linker_so[0]: - # AIX needs the ld_so_aix script included with Python - cxx.linker_so = [cxx.linker_so[0], cxx.compiler_cxx[0]] \ - + cxx.linker_so[2:] - else: - cxx.linker_so = [cxx.compiler_cxx[0]] + cxx.linker_so[1:] - return cxx - -replace_method(CCompiler, 'cxx_compiler', CCompiler_cxx_compiler) - -compiler_class['intel'] = ('intelccompiler','IntelCCompiler', - "Intel C Compiler for 32-bit applications") -compiler_class['intele'] = ('intelccompiler','IntelItaniumCCompiler', - "Intel C Itanium Compiler for Itanium-based applications") -ccompiler._default_compilers += (('linux.*','intel'),('linux.*','intele')) - -if sys.platform == 'win32': - compiler_class['mingw32'] = ('mingw32ccompiler', 'Mingw32CCompiler', - "Mingw32 port of GNU C Compiler for Win32"\ - "(for MSC built Python)") - if 
mingw32(): - # On windows platforms, we want to default to mingw32 (gcc) - # because msvc can't build blitz stuff. - log.info('Setting mingw32 as default compiler for nt.') - ccompiler._default_compilers = (('nt', 'mingw32'),) \ - + ccompiler._default_compilers - - -_distutils_new_compiler = new_compiler -def new_compiler (plat=None, - compiler=None, - verbose=0, - dry_run=0, - force=0): - # Try first C compilers from numpy.distutils. - if plat is None: - plat = os.name - try: - if compiler is None: - compiler = get_default_compiler(plat) - (module_name, class_name, long_description) = compiler_class[compiler] - except KeyError: - msg = "don't know how to compile C/C++ code on platform '%s'" % plat - if compiler is not None: - msg = msg + " with '%s' compiler" % compiler - raise DistutilsPlatformError(msg) - module_name = "numpy.distutils." + module_name - try: - __import__ (module_name) - except ImportError: - msg = str(get_exception()) - log.info('%s in numpy.distutils; trying from distutils', - str(msg)) - module_name = module_name[6:] - try: - __import__(module_name) - except ImportError: - msg = str(get_exception()) - raise DistutilsModuleError("can't compile C/C++ code: unable to load module '%s'" % \ - module_name) - try: - module = sys.modules[module_name] - klass = vars(module)[class_name] - except KeyError: - raise DistutilsModuleError(("can't compile C/C++ code: unable to find class '%s' " + - "in module '%s'") % (class_name, module_name)) - compiler = klass(None, dry_run, force) - log.debug('new_compiler returns %s' % (klass)) - return compiler - -ccompiler.new_compiler = new_compiler - -_distutils_gen_lib_options = gen_lib_options -def gen_lib_options(compiler, library_dirs, runtime_library_dirs, libraries): - library_dirs = quote_args(library_dirs) - runtime_library_dirs = quote_args(runtime_library_dirs) - r = _distutils_gen_lib_options(compiler, library_dirs, - runtime_library_dirs, libraries) - lib_opts = [] - for i in r: - if is_sequence(i): - 
lib_opts.extend(list(i)) - else: - lib_opts.append(i) - return lib_opts -ccompiler.gen_lib_options = gen_lib_options - -# Also fix up the various compiler modules, which do -# from distutils.ccompiler import gen_lib_options -# Don't bother with mwerks, as we don't support Classic Mac. -for _cc in ['msvc', 'bcpp', 'cygwinc', 'emxc', 'unixc']: - _m = sys.modules.get('distutils.'+_cc+'compiler') - if _m is not None: - setattr(_m, 'gen_lib_options', gen_lib_options) - -_distutils_gen_preprocess_options = gen_preprocess_options -def gen_preprocess_options (macros, include_dirs): - include_dirs = quote_args(include_dirs) - return _distutils_gen_preprocess_options(macros, include_dirs) -ccompiler.gen_preprocess_options = gen_preprocess_options - -##Fix distutils.util.split_quoted: -# NOTE: I removed this fix in revision 4481 (see ticket #619), but it appears -# that removing this fix causes f2py problems on Windows XP (see ticket #723). -# Specifically, on WinXP when gfortran is installed in a directory path, which -# contains spaces, then f2py is unable to find it. 
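The `gen_lib_options` wrapper above flattens one level of nesting in the option list returned by distutils; that loop, with a plain `isinstance` check standing in for numpy.distutils' `is_sequence`, reduces to:

```python
def flatten_lib_options(opts):
    """Flatten one level of nesting, mirroring the loop in the
    gen_lib_options wrapper: sequences are spliced in, scalars kept."""
    lib_opts = []
    for item in opts:
        if isinstance(item, (list, tuple)):  # stand-in for is_sequence()
            lib_opts.extend(list(item))
        else:
            lib_opts.append(item)
    return lib_opts

print(flatten_lib_options(['-L/usr/lib', ('-lm', '-lc'), '-lpthread']))
# -> ['-L/usr/lib', '-lm', '-lc', '-lpthread']
```

Only one level is flattened because compiler classes return at most a tuple of options per library entry.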
-import re -import string -_wordchars_re = re.compile(r'[^\\\'\"%s ]*' % string.whitespace) -_squote_re = re.compile(r"'(?:[^'\\]|\\.)*'") -_dquote_re = re.compile(r'"(?:[^"\\]|\\.)*"') -_has_white_re = re.compile(r'\s') -def split_quoted(s): - s = s.strip() - words = [] - pos = 0 - - while s: - m = _wordchars_re.match(s, pos) - end = m.end() - if end == len(s): - words.append(s[:end]) - break - - if s[end] in string.whitespace: # unescaped, unquoted whitespace: now - words.append(s[:end]) # we definitely have a word delimiter - s = s[end:].lstrip() - pos = 0 - - elif s[end] == '\\': # preserve whatever is being escaped; - # will become part of the current word - s = s[:end] + s[end+1:] - pos = end+1 - - else: - if s[end] == "'": # slurp singly-quoted string - m = _squote_re.match(s, end) - elif s[end] == '"': # slurp doubly-quoted string - m = _dquote_re.match(s, end) - else: - raise RuntimeError("this can't happen (bad char '%c')" % s[end]) - - if m is None: - raise ValueError("bad string (mismatched %s quotes?)" % s[end]) - - (beg, end) = m.span() - if _has_white_re.search(s[beg+1:end-1]): - s = s[:beg] + s[beg+1:end-1] + s[end:] - pos = m.end() - 2 - else: - # Keeping quotes when a quoted word does not contain - # white-space. 
XXX: send a patch to distutils - pos = m.end() - - if pos >= len(s): - words.append(s) - break - - return words -ccompiler.split_quoted = split_quoted -##Fix distutils.util.split_quoted: - -# define DISTUTILS_USE_SDK when necessary to workaround distutils/msvccompiler.py bug -msvc_on_amd64() diff --git a/pythonPackages/numpy/numpy/distutils/command/__init__.py b/pythonPackages/numpy/numpy/distutils/command/__init__.py deleted file mode 100755 index 87546aeeef..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -"""distutils.command - -Package containing implementation of all the standard Distutils -commands.""" - -__revision__ = "$Id: __init__.py,v 1.3 2005/05/16 11:08:49 pearu Exp $" - -distutils_all = [ 'build_py', - 'clean', - 'install_clib', - 'install_scripts', - 'bdist', - 'bdist_dumb', - 'bdist_wininst', - ] - -__import__('distutils.command',globals(),locals(),distutils_all) - -__all__ = ['build', - 'config_compiler', - 'config', - 'build_src', - 'build_ext', - 'build_clib', - 'build_scripts', - 'install', - 'install_data', - 'install_headers', - 'install_lib', - 'bdist_rpm', - 'sdist', - ] + distutils_all diff --git a/pythonPackages/numpy/numpy/distutils/command/autodist.py b/pythonPackages/numpy/numpy/distutils/command/autodist.py deleted file mode 100755 index fe40119efb..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/autodist.py +++ /dev/null @@ -1,39 +0,0 @@ -"""This module implements additional tests ala autoconf which can be useful.""" - -# We put them here since they could be easily reused outside numpy.distutils - -def check_inline(cmd): - """Return the inline identifier (may be empty).""" - cmd._check_compiler() - body = """ -#ifndef __cplusplus -static %(inline)s int static_func (void) -{ - return 0; -} -%(inline)s int nostatic_func (void) -{ - return 0; -} -#endif""" - - for kw in ['inline', '__inline__', '__inline']: - st = cmd.try_compile(body % {'inline': kw}, None, None) - 
if st: - return kw - - return '' - -def check_compiler_gcc4(cmd): - """Return True if the C compiler is GCC 4.x.""" - cmd._check_compiler() - body = """ -int -main() -{ -#if !(defined(__GNUC__) && (__GNUC__ >= 4)) -die in an horrible death -#endif -} -""" - return cmd.try_compile(body, None, None) diff --git a/pythonPackages/numpy/numpy/distutils/command/bdist_rpm.py b/pythonPackages/numpy/numpy/distutils/command/bdist_rpm.py deleted file mode 100755 index 60e9b57527..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/bdist_rpm.py +++ /dev/null @@ -1,22 +0,0 @@ -import os -import sys -if 'setuptools' in sys.modules: - from setuptools.command.bdist_rpm import bdist_rpm as old_bdist_rpm -else: - from distutils.command.bdist_rpm import bdist_rpm as old_bdist_rpm - -class bdist_rpm(old_bdist_rpm): - - def _make_spec_file(self): - spec_file = old_bdist_rpm._make_spec_file(self) - - # Replace hardcoded setup.py script name - # with the real setup script name. - setup_py = os.path.basename(sys.argv[0]) - if setup_py == 'setup.py': - return spec_file - new_spec_file = [] - for line in spec_file: - line = line.replace('setup.py',setup_py) - new_spec_file.append(line) - return new_spec_file diff --git a/pythonPackages/numpy/numpy/distutils/command/build.py b/pythonPackages/numpy/numpy/distutils/command/build.py deleted file mode 100755 index 5d986570c9..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/build.py +++ /dev/null @@ -1,37 +0,0 @@ -import os -import sys -from distutils.command.build import build as old_build -from distutils.util import get_platform -from numpy.distutils.command.config_compiler import show_fortran_compilers - -class build(old_build): - - sub_commands = [('config_cc', lambda *args: True), - ('config_fc', lambda *args: True), - ('build_src', old_build.has_ext_modules), - ] + old_build.sub_commands - - user_options = old_build.user_options + [ - ('fcompiler=', None, - "specify the Fortran compiler type"), - ] - - help_options = 
old_build.help_options + [ - ('help-fcompiler',None, "list available Fortran compilers", - show_fortran_compilers), - ] - - def initialize_options(self): - old_build.initialize_options(self) - self.fcompiler = None - - def finalize_options(self): - build_scripts = self.build_scripts - old_build.finalize_options(self) - plat_specifier = ".%s-%s" % (get_platform(), sys.version[0:3]) - if build_scripts is None: - self.build_scripts = os.path.join(self.build_base, - 'scripts' + plat_specifier) - - def run(self): - old_build.run(self) diff --git a/pythonPackages/numpy/numpy/distutils/command/build_clib.py b/pythonPackages/numpy/numpy/distutils/command/build_clib.py deleted file mode 100755 index 71ca1e0961..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/build_clib.py +++ /dev/null @@ -1,277 +0,0 @@ -""" Modified version of build_clib that handles fortran source files. -""" - -import os -from glob import glob -import shutil -from distutils.command.build_clib import build_clib as old_build_clib -from distutils.errors import DistutilsSetupError, DistutilsError, \ - DistutilsFileError - -from numpy.distutils import log -from distutils.dep_util import newer_group -from numpy.distutils.misc_util import filter_sources, has_f_sources,\ - has_cxx_sources, all_strings, get_lib_source_files, is_sequence, \ - get_numpy_include_dirs - -# Fix Python distutils bug sf #1718574: -_l = old_build_clib.user_options -for _i in range(len(_l)): - if _l[_i][0] in ['build-clib', 'build-temp']: - _l[_i] = (_l[_i][0]+'=',)+_l[_i][1:] -# - -class build_clib(old_build_clib): - - description = "build C/C++/F libraries used by Python extensions" - - user_options = old_build_clib.user_options + [ - ('fcompiler=', None, - "specify the Fortran compiler type"), - ('inplace', 'i', 'Build in-place'), - ] - - boolean_options = old_build_clib.boolean_options + ['inplace'] - - def initialize_options(self): - old_build_clib.initialize_options(self) - self.fcompiler = None - self.inplace = 0 - 
return - - def have_f_sources(self): - for (lib_name, build_info) in self.libraries: - if has_f_sources(build_info.get('sources',[])): - return True - return False - - def have_cxx_sources(self): - for (lib_name, build_info) in self.libraries: - if has_cxx_sources(build_info.get('sources',[])): - return True - return False - - def run(self): - if not self.libraries: - return - - # Make sure that library sources are complete. - languages = [] - - # Make sure that extension sources are complete. - self.run_command('build_src') - - for (lib_name, build_info) in self.libraries: - l = build_info.get('language',None) - if l and l not in languages: languages.append(l) - - from distutils.ccompiler import new_compiler - self.compiler = new_compiler(compiler=self.compiler, - dry_run=self.dry_run, - force=self.force) - self.compiler.customize(self.distribution, - need_cxx=self.have_cxx_sources()) - - libraries = self.libraries - self.libraries = None - self.compiler.customize_cmd(self) - self.libraries = libraries - - self.compiler.show_customization() - - if self.have_f_sources(): - from numpy.distutils.fcompiler import new_fcompiler - self.fcompiler = new_fcompiler(compiler=self.fcompiler, - verbose=self.verbose, - dry_run=self.dry_run, - force=self.force, - requiref90='f90' in languages, - c_compiler=self.compiler) - if self.fcompiler is not None: - self.fcompiler.customize(self.distribution) - - libraries = self.libraries - self.libraries = None - self.fcompiler.customize_cmd(self) - self.libraries = libraries - - self.fcompiler.show_customization() - - self.build_libraries(self.libraries) - - if self.inplace: - for l in self.distribution.installed_libraries: - libname = self.compiler.library_filename(l.name) - source = os.path.join(self.build_clib, libname) - target = os.path.join(l.target_dir, libname) - self.mkpath(l.target_dir) - shutil.copy(source, target) - - def get_source_files(self): - self.check_library_list(self.libraries) - filenames = [] - for lib in 
self.libraries: - filenames.extend(get_lib_source_files(lib)) - return filenames - - def build_libraries(self, libraries): - for (lib_name, build_info) in libraries: - self.build_a_library(build_info, lib_name, libraries) - - def build_a_library(self, build_info, lib_name, libraries): - # default compilers - compiler = self.compiler - fcompiler = self.fcompiler - - sources = build_info.get('sources') - if sources is None or not is_sequence(sources): - raise DistutilsSetupError(("in 'libraries' option (library '%s'), " + - "'sources' must be present and must be " + - "a list of source filenames") % lib_name) - sources = list(sources) - - c_sources, cxx_sources, f_sources, fmodule_sources \ - = filter_sources(sources) - requiref90 = not not fmodule_sources or \ - build_info.get('language','c')=='f90' - - # save source type information so that build_ext can use it. - source_languages = [] - if c_sources: source_languages.append('c') - if cxx_sources: source_languages.append('c++') - if requiref90: source_languages.append('f90') - elif f_sources: source_languages.append('f77') - build_info['source_languages'] = source_languages - - lib_file = compiler.library_filename(lib_name, - output_dir=self.build_clib) - depends = sources + build_info.get('depends',[]) - if not (self.force or newer_group(depends, lib_file, 'newer')): - log.debug("skipping '%s' library (up-to-date)", lib_name) - return - else: - log.info("building '%s' library", lib_name) - - config_fc = build_info.get('config_fc',{}) - if fcompiler is not None and config_fc: - log.info('using additional config_fc from setup script '\ - 'for fortran compiler: %s' \ - % (config_fc,)) - from numpy.distutils.fcompiler import new_fcompiler - fcompiler = new_fcompiler(compiler=fcompiler.compiler_type, - verbose=self.verbose, - dry_run=self.dry_run, - force=self.force, - requiref90=requiref90, - c_compiler=self.compiler) - if fcompiler is not None: - dist = self.distribution - base_config_fc = 
dist.get_option_dict('config_fc').copy() - base_config_fc.update(config_fc) - fcompiler.customize(base_config_fc) - - # check availability of Fortran compilers - if (f_sources or fmodule_sources) and fcompiler is None: - raise DistutilsError("library %s has Fortran sources"\ - " but no Fortran compiler found" % (lib_name)) - - macros = build_info.get('macros') - include_dirs = build_info.get('include_dirs') - if include_dirs is None: - include_dirs = [] - extra_postargs = build_info.get('extra_compiler_args') or [] - - include_dirs.extend(get_numpy_include_dirs()) - # where compiled F90 module files are: - module_dirs = build_info.get('module_dirs') or [] - module_build_dir = os.path.dirname(lib_file) - if requiref90: self.mkpath(module_build_dir) - - if compiler.compiler_type=='msvc': - # this hack works around the msvc compiler attributes - # problem, msvc uses its own convention :( - c_sources += cxx_sources - cxx_sources = [] - - objects = [] - if c_sources: - log.info("compiling C sources") - objects = compiler.compile(c_sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=include_dirs, - debug=self.debug, - extra_postargs=extra_postargs) - - if cxx_sources: - log.info("compiling C++ sources") - cxx_compiler = compiler.cxx_compiler() - cxx_objects = cxx_compiler.compile(cxx_sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=include_dirs, - debug=self.debug, - extra_postargs=extra_postargs) - objects.extend(cxx_objects) - - if f_sources or fmodule_sources: - extra_postargs = [] - f_objects = [] - - if requiref90: - if fcompiler.module_dir_switch is None: - existing_modules = glob('*.mod') - extra_postargs += fcompiler.module_options(\ - module_dirs,module_build_dir) - - if fmodule_sources: - log.info("compiling Fortran 90 module sources") - f_objects += fcompiler.compile(fmodule_sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=include_dirs, - debug=self.debug, - extra_postargs=extra_postargs) - - if 
requiref90 and self.fcompiler.module_dir_switch is None: - # move new compiled F90 module files to module_build_dir - for f in glob('*.mod'): - if f in existing_modules: - continue - t = os.path.join(module_build_dir, f) - if os.path.abspath(f)==os.path.abspath(t): - continue - if os.path.isfile(t): - os.remove(t) - try: - self.move_file(f, module_build_dir) - except DistutilsFileError: - log.warn('failed to move %r to %r' \ - % (f, module_build_dir)) - - if f_sources: - log.info("compiling Fortran sources") - f_objects += fcompiler.compile(f_sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=include_dirs, - debug=self.debug, - extra_postargs=extra_postargs) - else: - f_objects = [] - - objects.extend(f_objects) - - # assume that default linker is suitable for - # linking Fortran object files - compiler.create_static_lib(objects, lib_name, - output_dir=self.build_clib, - debug=self.debug) - - # fix library dependencies - clib_libraries = build_info.get('libraries',[]) - for lname, binfo in libraries: - if lname in clib_libraries: - clib_libraries.extend(binfo[1].get('libraries',[])) - if clib_libraries: - build_info['libraries'] = clib_libraries diff --git a/pythonPackages/numpy/numpy/distutils/command/build_ext.py b/pythonPackages/numpy/numpy/distutils/command/build_ext.py deleted file mode 100755 index 840d43716d..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/build_ext.py +++ /dev/null @@ -1,503 +0,0 @@ -""" Modified version of build_ext that handles fortran source files. 
-""" - -import os -import sys -from glob import glob - -from distutils.dep_util import newer_group -from distutils.command.build_ext import build_ext as old_build_ext -from distutils.errors import DistutilsFileError, DistutilsSetupError,\ - DistutilsError -from distutils.file_util import copy_file - -from numpy.distutils import log -from numpy.distutils.exec_command import exec_command -from numpy.distutils.system_info import combine_paths -from numpy.distutils.misc_util import filter_sources, has_f_sources, \ - has_cxx_sources, get_ext_source_files, \ - get_numpy_include_dirs, is_sequence, get_build_architecture, \ - msvc_version -from numpy.distutils.command.config_compiler import show_fortran_compilers - -try: - set -except NameError: - from sets import Set as set - -class build_ext (old_build_ext): - - description = "build C/C++/F extensions (compile/link to build directory)" - - user_options = old_build_ext.user_options + [ - ('fcompiler=', None, - "specify the Fortran compiler type"), - ] - - help_options = old_build_ext.help_options + [ - ('help-fcompiler',None, "list available Fortran compilers", - show_fortran_compilers), - ] - - def initialize_options(self): - old_build_ext.initialize_options(self) - self.fcompiler = None - - def finalize_options(self): - incl_dirs = self.include_dirs - old_build_ext.finalize_options(self) - if incl_dirs is not None: - self.include_dirs.extend(self.distribution.include_dirs or []) - - def run(self): - if not self.extensions: - return - - # Make sure that extension sources are complete. 
- self.run_command('build_src') - - if self.distribution.has_c_libraries(): - if self.inplace: - if self.distribution.have_run.get('build_clib'): - log.warn('build_clib already run, it is too late to ' \ - 'ensure in-place build of build_clib') - build_clib = self.distribution.get_command_obj('build_clib') - else: - build_clib = self.distribution.get_command_obj('build_clib') - build_clib.inplace = 1 - build_clib.ensure_finalized() - build_clib.run() - self.distribution.have_run['build_clib'] = 1 - - else: - self.run_command('build_clib') - build_clib = self.get_finalized_command('build_clib') - self.library_dirs.append(build_clib.build_clib) - else: - build_clib = None - - # Not including C libraries to the list of - # extension libraries automatically to prevent - # bogus linking commands. Extensions must - # explicitly specify the C libraries that they use. - - from distutils.ccompiler import new_compiler - from numpy.distutils.fcompiler import new_fcompiler - - compiler_type = self.compiler - # Initialize C compiler: - self.compiler = new_compiler(compiler=compiler_type, - verbose=self.verbose, - dry_run=self.dry_run, - force=self.force) - self.compiler.customize(self.distribution) - self.compiler.customize_cmd(self) - self.compiler.show_customization() - - # Create mapping of libraries built by build_clib: - clibs = {} - if build_clib is not None: - for libname,build_info in build_clib.libraries or []: - if libname in clibs and clibs[libname] != build_info: - log.warn('library %r defined more than once,'\ - ' overwriting build_info\n%s... \nwith\n%s...' \ - % (libname, repr(clibs[libname])[:300], repr(build_info)[:300])) - clibs[libname] = build_info - # .. and distribution libraries: - for libname,build_info in self.distribution.libraries or []: - if libname in clibs: - # build_clib libraries have a precedence before distribution ones - continue - clibs[libname] = build_info - - # Determine if C++/Fortran 77/Fortran 90 compilers are needed. 
- # Update extension libraries, library_dirs, and macros. - all_languages = set() - for ext in self.extensions: - ext_languages = set() - c_libs = [] - c_lib_dirs = [] - macros = [] - for libname in ext.libraries: - if libname in clibs: - binfo = clibs[libname] - c_libs += binfo.get('libraries',[]) - c_lib_dirs += binfo.get('library_dirs',[]) - for m in binfo.get('macros',[]): - if m not in macros: - macros.append(m) - - for l in clibs.get(libname,{}).get('source_languages',[]): - ext_languages.add(l) - if c_libs: - new_c_libs = ext.libraries + c_libs - log.info('updating extension %r libraries from %r to %r' - % (ext.name, ext.libraries, new_c_libs)) - ext.libraries = new_c_libs - ext.library_dirs = ext.library_dirs + c_lib_dirs - if macros: - log.info('extending extension %r defined_macros with %r' - % (ext.name, macros)) - ext.define_macros = ext.define_macros + macros - - # determine extension languages - if has_f_sources(ext.sources): - ext_languages.add('f77') - if has_cxx_sources(ext.sources): - ext_languages.add('c++') - l = ext.language or self.compiler.detect_language(ext.sources) - if l: - ext_languages.add(l) - # reset language attribute for choosing proper linker - if 'c++' in ext_languages: - ext_language = 'c++' - elif 'f90' in ext_languages: - ext_language = 'f90' - elif 'f77' in ext_languages: - ext_language = 'f77' - else: - ext_language = 'c' # default - if l and l != ext_language and ext.language: - log.warn('resetting extension %r language from %r to %r.' 
% - (ext.name,l,ext_language)) - ext.language = ext_language - # global language - all_languages.update(ext_languages) - - need_f90_compiler = 'f90' in all_languages - need_f77_compiler = 'f77' in all_languages - need_cxx_compiler = 'c++' in all_languages - - # Initialize C++ compiler: - if need_cxx_compiler: - self._cxx_compiler = new_compiler(compiler=compiler_type, - verbose=self.verbose, - dry_run=self.dry_run, - force=self.force) - compiler = self._cxx_compiler - compiler.customize(self.distribution,need_cxx=need_cxx_compiler) - compiler.customize_cmd(self) - compiler.show_customization() - self._cxx_compiler = compiler.cxx_compiler() - else: - self._cxx_compiler = None - - # Initialize Fortran 77 compiler: - if need_f77_compiler: - ctype = self.fcompiler - self._f77_compiler = new_fcompiler(compiler=self.fcompiler, - verbose=self.verbose, - dry_run=self.dry_run, - force=self.force, - requiref90=False, - c_compiler=self.compiler) - fcompiler = self._f77_compiler - if fcompiler: - ctype = fcompiler.compiler_type - fcompiler.customize(self.distribution) - if fcompiler and fcompiler.get_version(): - fcompiler.customize_cmd(self) - fcompiler.show_customization() - else: - self.warn('f77_compiler=%s is not available.' % - (ctype)) - self._f77_compiler = None - else: - self._f77_compiler = None - - # Initialize Fortran 90 compiler: - if need_f90_compiler: - ctype = self.fcompiler - self._f90_compiler = new_fcompiler(compiler=self.fcompiler, - verbose=self.verbose, - dry_run=self.dry_run, - force=self.force, - requiref90=True, - c_compiler = self.compiler) - fcompiler = self._f90_compiler - if fcompiler: - ctype = fcompiler.compiler_type - fcompiler.customize(self.distribution) - if fcompiler and fcompiler.get_version(): - fcompiler.customize_cmd(self) - fcompiler.show_customization() - else: - self.warn('f90_compiler=%s is not available.' 
% - (ctype)) - self._f90_compiler = None - else: - self._f90_compiler = None - - # Build extensions - self.build_extensions() - - # Make sure that scons based extensions are complete. - if self.inplace: - cmd = self.reinitialize_command('scons') - cmd.inplace = 1 - self.run_command('scons') - - def swig_sources(self, sources): - # Do nothing. Swig sources have beed handled in build_src command. - return sources - - def build_extension(self, ext): - sources = ext.sources - if sources is None or not is_sequence(sources): - raise DistutilsSetupError( - ("in 'ext_modules' option (extension '%s'), " + - "'sources' must be present and must be " + - "a list of source filenames") % ext.name) - sources = list(sources) - - if not sources: - return - - fullname = self.get_ext_fullname(ext.name) - if self.inplace: - modpath = fullname.split('.') - package = '.'.join(modpath[0:-1]) - base = modpath[-1] - build_py = self.get_finalized_command('build_py') - package_dir = build_py.get_package_dir(package) - ext_filename = os.path.join(package_dir, - self.get_ext_filename(base)) - else: - ext_filename = os.path.join(self.build_lib, - self.get_ext_filename(fullname)) - depends = sources + ext.depends - - if not (self.force or newer_group(depends, ext_filename, 'newer')): - log.debug("skipping '%s' extension (up-to-date)", ext.name) - return - else: - log.info("building '%s' extension", ext.name) - - extra_args = ext.extra_compile_args or [] - macros = ext.define_macros[:] - for undef in ext.undef_macros: - macros.append((undef,)) - - c_sources, cxx_sources, f_sources, fmodule_sources = \ - filter_sources(ext.sources) - - - - if self.compiler.compiler_type=='msvc': - if cxx_sources: - # Needed to compile kiva.agg._agg extension. - extra_args.append('/Zm1000') - # this hack works around the msvc compiler attributes - # problem, msvc uses its own convention :( - c_sources += cxx_sources - cxx_sources = [] - - # Set Fortran/C++ compilers for compilation and linking. 
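The per-extension language resolution above uses a fixed priority when picking the linker language: C++ first, then Fortran 90, then Fortran 77, with C as the default. As a standalone sketch:

```python
def resolve_linker_language(ext_languages):
    """Reproduce the linker-language choice above: 'c++' beats 'f90',
    which beats 'f77', and 'c' is the fallback when none are present."""
    for lang in ('c++', 'f90', 'f77'):
        if lang in ext_languages:
            return lang
    return 'c'


picked = resolve_linker_language({'f77', 'c++'})
print(picked)    # c++ wins even when Fortran sources are present
```

This ordering matters because linking mixed C++/Fortran extensions with the Fortran linker would typically miss the C++ runtime libraries.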
- if ext.language=='f90': - fcompiler = self._f90_compiler - elif ext.language=='f77': - fcompiler = self._f77_compiler - else: # in case ext.language is c++, for instance - fcompiler = self._f90_compiler or self._f77_compiler - cxx_compiler = self._cxx_compiler - - # check for the availability of required compilers - if cxx_sources and cxx_compiler is None: - raise DistutilsError("extension %r has C++ sources" \ - "but no C++ compiler found" % (ext.name)) - if (f_sources or fmodule_sources) and fcompiler is None: - raise DistutilsError("extension %r has Fortran sources " \ - "but no Fortran compiler found" % (ext.name)) - if ext.language in ['f77','f90'] and fcompiler is None: - self.warn("extension %r has Fortran libraries " \ - "but no Fortran linker found, using default linker" % (ext.name)) - if ext.language=='c++' and cxx_compiler is None: - self.warn("extension %r has C++ libraries " \ - "but no C++ linker found, using default linker" % (ext.name)) - - kws = {'depends':ext.depends} - output_dir = self.build_temp - - include_dirs = ext.include_dirs + get_numpy_include_dirs() - - c_objects = [] - if c_sources: - log.info("compiling C sources") - c_objects = self.compiler.compile(c_sources, - output_dir=output_dir, - macros=macros, - include_dirs=include_dirs, - debug=self.debug, - extra_postargs=extra_args, - **kws) - - if cxx_sources: - log.info("compiling C++ sources") - c_objects += cxx_compiler.compile(cxx_sources, - output_dir=output_dir, - macros=macros, - include_dirs=include_dirs, - debug=self.debug, - extra_postargs=extra_args, - **kws) - - extra_postargs = [] - f_objects = [] - if fmodule_sources: - log.info("compiling Fortran 90 module sources") - module_dirs = ext.module_dirs[:] - module_build_dir = os.path.join( - self.build_temp,os.path.dirname( - self.get_ext_filename(fullname))) - - self.mkpath(module_build_dir) - if fcompiler.module_dir_switch is None: - existing_modules = glob('*.mod') - extra_postargs += fcompiler.module_options( - 
module_dirs,module_build_dir) - f_objects += fcompiler.compile(fmodule_sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=include_dirs, - debug=self.debug, - extra_postargs=extra_postargs, - depends=ext.depends) - - if fcompiler.module_dir_switch is None: - for f in glob('*.mod'): - if f in existing_modules: - continue - t = os.path.join(module_build_dir, f) - if os.path.abspath(f)==os.path.abspath(t): - continue - if os.path.isfile(t): - os.remove(t) - try: - self.move_file(f, module_build_dir) - except DistutilsFileError: - log.warn('failed to move %r to %r' % - (f, module_build_dir)) - if f_sources: - log.info("compiling Fortran sources") - f_objects += fcompiler.compile(f_sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=include_dirs, - debug=self.debug, - extra_postargs=extra_postargs, - depends=ext.depends) - - objects = c_objects + f_objects - - if ext.extra_objects: - objects.extend(ext.extra_objects) - extra_args = ext.extra_link_args or [] - libraries = self.get_libraries(ext)[:] - library_dirs = ext.library_dirs[:] - - linker = self.compiler.link_shared_object - # Always use system linker when using MSVC compiler. 
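When the Fortran compiler has no module-directory switch, the code above snapshots `glob('*.mod')` before compiling and afterwards moves only the newly generated module files into the build directory, replacing stale copies. A simplified, self-contained sketch of that clean-up (helper and variable names are illustrative):

```python
import os
import shutil
import tempfile


def move_new_mod_files(existing, workdir, module_build_dir):
    """Move every *.mod file that appeared in `workdir` after the compile
    (i.e. is not in the `existing` snapshot) into the module build dir,
    removing any stale copy first -- a simplified version of the logic above."""
    moved = []
    for f in sorted(os.listdir(workdir)):
        if not f.endswith('.mod') or f in existing:
            continue
        target = os.path.join(module_build_dir, f)
        if os.path.isfile(target):
            os.remove(target)                # drop the stale copy first
        shutil.move(os.path.join(workdir, f), target)
        moved.append(f)
    return moved


work = tempfile.mkdtemp()
build = tempfile.mkdtemp()
open(os.path.join(work, 'old.mod'), 'w').close()
snapshot = set(os.listdir(work))             # like glob('*.mod') pre-compile
open(os.path.join(work, 'fresh.mod'), 'w').close()   # "compiler output"
print(move_new_mod_files(snapshot, work, build))     # ['fresh.mod']
```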
- if self.compiler.compiler_type=='msvc': - # expand libraries with fcompiler libraries as we are - # not using fcompiler linker - self._libs_with_msvc_and_fortran(fcompiler, libraries, library_dirs) - - elif ext.language in ['f77','f90'] and fcompiler is not None: - linker = fcompiler.link_shared_object - if ext.language=='c++' and cxx_compiler is not None: - linker = cxx_compiler.link_shared_object - - if sys.version[:3]>='2.3': - kws = {'target_lang':ext.language} - else: - kws = {} - - linker(objects, ext_filename, - libraries=libraries, - library_dirs=library_dirs, - runtime_library_dirs=ext.runtime_library_dirs, - extra_postargs=extra_args, - export_symbols=self.get_export_symbols(ext), - debug=self.debug, - build_temp=self.build_temp,**kws) - - def _add_dummy_mingwex_sym(self, c_sources): - build_src = self.get_finalized_command("build_src").build_src - build_clib = self.get_finalized_command("build_clib").build_clib - objects = self.compiler.compile([os.path.join(build_src, - "gfortran_vs2003_hack.c")], - output_dir=self.build_temp) - self.compiler.create_static_lib(objects, "_gfortran_workaround", output_dir=build_clib, debug=self.debug) - - def _libs_with_msvc_and_fortran(self, fcompiler, c_libraries, - c_library_dirs): - if fcompiler is None: return - - for libname in c_libraries: - if libname.startswith('msvc'): continue - fileexists = False - for libdir in c_library_dirs or []: - libfile = os.path.join(libdir,'%s.lib' % (libname)) - if os.path.isfile(libfile): - fileexists = True - break - if fileexists: continue - # make g77-compiled static libs available to MSVC - fileexists = False - for libdir in c_library_dirs: - libfile = os.path.join(libdir,'lib%s.a' % (libname)) - if os.path.isfile(libfile): - # copy libname.a file to name.lib so that MSVC linker - # can find it - libfile2 = os.path.join(self.build_temp, libname + '.lib') - copy_file(libfile, libfile2) - if self.build_temp not in c_library_dirs: - c_library_dirs.append(self.build_temp) - 
fileexists = True - break - if fileexists: continue - log.warn('could not find library %r in directories %s' - % (libname, c_library_dirs)) - - # Always use system linker when using MSVC compiler. - f_lib_dirs = [] - for dir in fcompiler.library_dirs: - # correct path when compiling in Cygwin but with normal Win - # Python - if dir.startswith('/usr/lib'): - s,o = exec_command(['cygpath', '-w', dir], use_tee=False) - if not s: - dir = o - f_lib_dirs.append(dir) - c_library_dirs.extend(f_lib_dirs) - - # make g77-compiled static libs available to MSVC - for lib in fcompiler.libraries: - if not lib.startswith('msvc'): - c_libraries.append(lib) - p = combine_paths(f_lib_dirs, 'lib' + lib + '.a') - if p: - dst_name = os.path.join(self.build_temp, lib + '.lib') - if not os.path.isfile(dst_name): - copy_file(p[0], dst_name) - if self.build_temp not in c_library_dirs: - c_library_dirs.append(self.build_temp) - - def get_source_files (self): - self.check_extensions_list(self.extensions) - filenames = [] - for ext in self.extensions: - filenames.extend(get_ext_source_files(ext)) - return filenames - - def get_outputs (self): - self.check_extensions_list(self.extensions) - - outputs = [] - for ext in self.extensions: - if not ext.sources: - continue - fullname = self.get_ext_fullname(ext.name) - outputs.append(os.path.join(self.build_lib, - self.get_ext_filename(fullname))) - return outputs diff --git a/pythonPackages/numpy/numpy/distutils/command/build_py.py b/pythonPackages/numpy/numpy/distutils/command/build_py.py deleted file mode 100755 index 0da23a513b..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/build_py.py +++ /dev/null @@ -1,25 +0,0 @@ - -from distutils.command.build_py import build_py as old_build_py -from numpy.distutils.misc_util import is_string - -class build_py(old_build_py): - - def find_package_modules(self, package, package_dir): - modules = old_build_py.find_package_modules(self, package, package_dir) - - # Find build_src generated *.py 
files. - build_src = self.get_finalized_command('build_src') - modules += build_src.py_modules_dict.get(package,[]) - - return modules - - def find_modules(self): - old_py_modules = self.py_modules[:] - new_py_modules = filter(is_string, self.py_modules) - self.py_modules[:] = new_py_modules - modules = old_build_py.find_modules(self) - self.py_modules[:] = old_py_modules - return modules - - # XXX: Fix find_source_files for item in py_modules such that item is 3-tuple - # and item[2] is source file. diff --git a/pythonPackages/numpy/numpy/distutils/command/build_scripts.py b/pythonPackages/numpy/numpy/distutils/command/build_scripts.py deleted file mode 100755 index 99134f2026..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/build_scripts.py +++ /dev/null @@ -1,49 +0,0 @@ -""" Modified version of build_scripts that handles building scripts from functions. -""" - -from distutils.command.build_scripts import build_scripts as old_build_scripts -from numpy.distutils import log -from numpy.distutils.misc_util import is_string - -class build_scripts(old_build_scripts): - - def generate_scripts(self, scripts): - new_scripts = [] - func_scripts = [] - for script in scripts: - if is_string(script): - new_scripts.append(script) - else: - func_scripts.append(script) - if not func_scripts: - return new_scripts - - build_dir = self.build_dir - self.mkpath(build_dir) - for func in func_scripts: - script = func(build_dir) - if not script: - continue - if is_string(script): - log.info(" adding '%s' to scripts" % (script,)) - new_scripts.append(script) - else: - [log.info(" adding '%s' to scripts" % (s,)) for s in script] - new_scripts.extend(list(script)) - return new_scripts - - def run (self): - if not self.scripts: - return - - self.scripts = self.generate_scripts(self.scripts) - # Now make sure that the distribution object has this list of scripts. - # setuptools' develop command requires that this be a list of filenames, - # not functions. 
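The `generate_scripts()` method above accepts a mixed list of script filenames and script-generating callables; each callable receives the build directory and may return a single name, a sequence of names, or a false value. The dispatch reduces to:

```python
def generate_scripts(scripts, build_dir):
    """Mirror of the generate_scripts() logic above: plain string names pass
    through; callables are invoked with the build directory and may return a
    name, a sequence of names, or a false value (meaning: nothing to add)."""
    new_scripts = []
    for script in scripts:
        if isinstance(script, str):
            new_scripts.append(script)
            continue
        result = script(build_dir)
        if not result:
            continue
        if isinstance(result, str):
            new_scripts.append(result)
        else:
            new_scripts.extend(result)
    return new_scripts


made = generate_scripts(
    ['f2py', lambda d: d + '/gen.py', lambda d: None],
    'build/scripts')
print(made)    # ['f2py', 'build/scripts/gen.py']
```

As the comment in `run()` notes, the resulting flat list of filenames is written back to `self.distribution.scripts` so that tools expecting plain filenames (e.g. setuptools' develop command) keep working.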
- self.distribution.scripts = self.scripts - - return old_build_scripts.run(self) - - def get_source_files(self): - from numpy.distutils.misc_util import get_script_files - return get_script_files(self.scripts) diff --git a/pythonPackages/numpy/numpy/distutils/command/build_src.py b/pythonPackages/numpy/numpy/distutils/command/build_src.py deleted file mode 100755 index c6aaf079a4..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/build_src.py +++ /dev/null @@ -1,798 +0,0 @@ -""" Build swig, f2py, pyrex sources. -""" - -import os -import re -import sys -import shlex -import copy - -from distutils.command import build_ext -from distutils.dep_util import newer_group, newer -from distutils.util import get_platform -from distutils.errors import DistutilsError, DistutilsSetupError - -def have_pyrex(): - try: - import Pyrex.Compiler.Main - return True - except ImportError: - return False - -# this import can't be done here, as it uses numpy stuff only available -# after it's installed -#import numpy.f2py -from numpy.distutils import log -from numpy.distutils.misc_util import fortran_ext_match, \ - appendpath, is_string, is_sequence, get_cmd -from numpy.distutils.from_template import process_file as process_f_file -from numpy.distutils.conv_template import process_file as process_c_file - -def subst_vars(target, source, d): - """Substitute any occurence of @foo@ by d['foo'] from source file into - target.""" - var = re.compile('@([a-zA-Z_]+)@') - fs = open(source, 'r') - try: - ft = open(target, 'w') - try: - for l in fs.readlines(): - m = var.search(l) - if m: - ft.write(l.replace('@%s@' % m.group(1), d[m.group(1)])) - else: - ft.write(l) - finally: - ft.close() - finally: - fs.close() - -class build_src(build_ext.build_ext): - - description = "build sources from SWIG, F2PY files or a function" - - user_options = [ - ('build-src=', 'd', "directory to \"build\" sources to"), - ('f2py-opts=', None, "list of f2py command line options"), - ('swig=', None, "path 
to the SWIG executable"), - ('swig-opts=', None, "list of SWIG command line options"), - ('swig-cpp', None, "make SWIG create C++ files (default is autodetected from sources)"), - ('f2pyflags=', None, "additional flags to f2py (use --f2py-opts= instead)"), # obsolete - ('swigflags=', None, "additional flags to swig (use --swig-opts= instead)"), # obsolete - ('force', 'f', "forcibly build everything (ignore file timestamps)"), - ('inplace', 'i', - "ignore build-lib and put compiled extensions into the source " + - "directory alongside your pure Python modules"), - ] - - boolean_options = ['force','inplace'] - - help_options = [] - - def initialize_options(self): - self.extensions = None - self.package = None - self.py_modules = None - self.py_modules_dict = None - self.build_src = None - self.build_lib = None - self.build_base = None - self.force = None - self.inplace = None - self.package_dir = None - self.f2pyflags = None # obsolete - self.f2py_opts = None - self.swigflags = None # obsolete - self.swig_opts = None - self.swig_cpp = None - self.swig = None - - def finalize_options(self): - self.set_undefined_options('build', - ('build_base', 'build_base'), - ('build_lib', 'build_lib'), - ('force', 'force')) - if self.package is None: - self.package = self.distribution.ext_package - self.extensions = self.distribution.ext_modules - self.libraries = self.distribution.libraries or [] - self.py_modules = self.distribution.py_modules or [] - self.data_files = self.distribution.data_files or [] - - if self.build_src is None: - plat_specifier = ".%s-%s" % (get_platform(), sys.version[0:3]) - self.build_src = os.path.join(self.build_base, 'src'+plat_specifier) - - # py_modules_dict is used in build_py.find_package_modules - self.py_modules_dict = {} - - if self.f2pyflags: - if self.f2py_opts: - log.warn('ignoring --f2pyflags as --f2py-opts already used') - else: - self.f2py_opts = self.f2pyflags - self.f2pyflags = None - if self.f2py_opts is None: - self.f2py_opts = [] - 
else: - self.f2py_opts = shlex.split(self.f2py_opts) - - if self.swigflags: - if self.swig_opts: - log.warn('ignoring --swigflags as --swig-opts already used') - else: - self.swig_opts = self.swigflags - self.swigflags = None - - if self.swig_opts is None: - self.swig_opts = [] - else: - self.swig_opts = shlex.split(self.swig_opts) - - # use options from build_ext command - build_ext = self.get_finalized_command('build_ext') - if self.inplace is None: - self.inplace = build_ext.inplace - if self.swig_cpp is None: - self.swig_cpp = build_ext.swig_cpp - for c in ['swig','swig_opt']: - o = '--'+c.replace('_','-') - v = getattr(build_ext,c,None) - if v: - if getattr(self,c): - log.warn('both build_src and build_ext define %s option' % (o)) - else: - log.info('using "%s=%s" option from build_ext command' % (o,v)) - setattr(self, c, v) - - def run(self): - log.info("build_src") - if not (self.extensions or self.libraries): - return - self.build_sources() - - def build_sources(self): - - if self.inplace: - self.get_package_dir = \ - self.get_finalized_command('build_py').get_package_dir - - self.build_py_modules_sources() - - for libname_info in self.libraries: - self.build_library_sources(*libname_info) - - if self.extensions: - self.check_extensions_list(self.extensions) - - for ext in self.extensions: - self.build_extension_sources(ext) - - self.build_data_files_sources() - self.build_npy_pkg_config() - - def build_data_files_sources(self): - if not self.data_files: - return - log.info('building data_files sources') - from numpy.distutils.misc_util import get_data_files - new_data_files = [] - for data in self.data_files: - if isinstance(data,str): - new_data_files.append(data) - elif isinstance(data,tuple): - d,files = data - if self.inplace: - build_dir = self.get_package_dir('.'.join(d.split(os.sep))) - else: - build_dir = os.path.join(self.build_src,d) - funcs = filter(lambda f:hasattr(f, '__call__'), files) - files = filter(lambda f:not hasattr(f, '__call__'), 
files) - for f in funcs: - if f.func_code.co_argcount==1: - s = f(build_dir) - else: - s = f() - if s is not None: - if isinstance(s,list): - files.extend(s) - elif isinstance(s,str): - files.append(s) - else: - raise TypeError(repr(s)) - filenames = get_data_files((d,files)) - new_data_files.append((d, filenames)) - else: - raise TypeError(repr(data)) - self.data_files[:] = new_data_files - - - def _build_npy_pkg_config(self, info, gd): - import shutil - template, install_dir, subst_dict = info - template_dir = os.path.dirname(template) - for k, v in gd.items(): - subst_dict[k] = v - - if self.inplace == 1: - generated_dir = os.path.join(template_dir, install_dir) - else: - generated_dir = os.path.join(self.build_src, template_dir, - install_dir) - generated = os.path.basename(os.path.splitext(template)[0]) - generated_path = os.path.join(generated_dir, generated) - if not os.path.exists(generated_dir): - os.makedirs(generated_dir) - - subst_vars(generated_path, template, subst_dict) - - # Where to install relatively to install prefix - full_install_dir = os.path.join(template_dir, install_dir) - return full_install_dir, generated_path - - def build_npy_pkg_config(self): - log.info('build_src: building npy-pkg config files') - - # XXX: another ugly workaround to circumvent distutils brain damage. We - # need the install prefix here, but finalizing the options of the - # install command when only building sources cause error. Instead, we - # copy the install command instance, and finalize the copy so that it - # does not disrupt how distutils want to do things when with the - # original install command instance. - install_cmd = copy.copy(get_cmd('install')) - if not install_cmd.finalized == 1: - install_cmd.finalize_options() - build_npkg = False - gd = {} - if self.inplace == 1: - top_prefix = '.' 
- build_npkg = True - elif hasattr(install_cmd, 'install_libbase'): - top_prefix = install_cmd.install_libbase - build_npkg = True - - if build_npkg: - for pkg, infos in self.distribution.installed_pkg_config.items(): - pkg_path = self.distribution.package_dir[pkg] - prefix = os.path.join(os.path.abspath(top_prefix), pkg_path) - d = {'prefix': prefix} - for info in infos: - install_dir, generated = self._build_npy_pkg_config(info, d) - self.distribution.data_files.append((install_dir, - [generated])) - - def build_py_modules_sources(self): - if not self.py_modules: - return - log.info('building py_modules sources') - new_py_modules = [] - for source in self.py_modules: - if is_sequence(source) and len(source)==3: - package, module_base, source = source - if self.inplace: - build_dir = self.get_package_dir(package) - else: - build_dir = os.path.join(self.build_src, - os.path.join(*package.split('.'))) - if hasattr(source, '__call__'): - target = os.path.join(build_dir, module_base + '.py') - source = source(target) - if source is None: - continue - modules = [(package, module_base, source)] - if package not in self.py_modules_dict: - self.py_modules_dict[package] = [] - self.py_modules_dict[package] += modules - else: - new_py_modules.append(source) - self.py_modules[:] = new_py_modules - - def build_library_sources(self, lib_name, build_info): - sources = list(build_info.get('sources',[])) - - if not sources: - return - - log.info('building library "%s" sources' % (lib_name)) - - sources = self.generate_sources(sources, (lib_name, build_info)) - - sources = self.template_sources(sources, (lib_name, build_info)) - - sources, h_files = self.filter_h_files(sources) - - if h_files: - log.info('%s - nothing done with h_files = %s', - self.package, h_files) - - #for f in h_files: - # self.distribution.headers.append((lib_name,f)) - - build_info['sources'] = sources - return - - def build_extension_sources(self, ext): - - sources = list(ext.sources) - - log.info('building 
extension "%s" sources' % (ext.name)) - - fullname = self.get_ext_fullname(ext.name) - - modpath = fullname.split('.') - package = '.'.join(modpath[0:-1]) - - if self.inplace: - self.ext_target_dir = self.get_package_dir(package) - - sources = self.generate_sources(sources, ext) - - sources = self.template_sources(sources, ext) - - sources = self.swig_sources(sources, ext) - - sources = self.f2py_sources(sources, ext) - - sources = self.pyrex_sources(sources, ext) - - sources, py_files = self.filter_py_files(sources) - - if package not in self.py_modules_dict: - self.py_modules_dict[package] = [] - modules = [] - for f in py_files: - module = os.path.splitext(os.path.basename(f))[0] - modules.append((package, module, f)) - self.py_modules_dict[package] += modules - - sources, h_files = self.filter_h_files(sources) - - if h_files: - log.info('%s - nothing done with h_files = %s', - package, h_files) - #for f in h_files: - # self.distribution.headers.append((package,f)) - - ext.sources = sources - - def generate_sources(self, sources, extension): - new_sources = [] - func_sources = [] - for source in sources: - if is_string(source): - new_sources.append(source) - else: - func_sources.append(source) - if not func_sources: - return new_sources - if self.inplace and not is_sequence(extension): - build_dir = self.ext_target_dir - else: - if is_sequence(extension): - name = extension[0] - # if 'include_dirs' not in extension[1]: - # extension[1]['include_dirs'] = [] - # incl_dirs = extension[1]['include_dirs'] - else: - name = extension.name - # incl_dirs = extension.include_dirs - #if self.build_src not in incl_dirs: - # incl_dirs.append(self.build_src) - build_dir = os.path.join(*([self.build_src]\ - +name.split('.')[:-1])) - self.mkpath(build_dir) - for func in func_sources: - source = func(extension, build_dir) - if not source: - continue - if is_sequence(source): - [log.info(" adding '%s' to sources." 
% (s,)) for s in source] - new_sources.extend(source) - else: - log.info(" adding '%s' to sources." % (source,)) - new_sources.append(source) - - return new_sources - - def filter_py_files(self, sources): - return self.filter_files(sources,['.py']) - - def filter_h_files(self, sources): - return self.filter_files(sources,['.h','.hpp','.inc']) - - def filter_files(self, sources, exts = []): - new_sources = [] - files = [] - for source in sources: - (base, ext) = os.path.splitext(source) - if ext in exts: - files.append(source) - else: - new_sources.append(source) - return new_sources, files - - def template_sources(self, sources, extension): - new_sources = [] - if is_sequence(extension): - depends = extension[1].get('depends') - include_dirs = extension[1].get('include_dirs') - else: - depends = extension.depends - include_dirs = extension.include_dirs - for source in sources: - (base, ext) = os.path.splitext(source) - if ext == '.src': # Template file - if self.inplace: - target_dir = os.path.dirname(base) - else: - target_dir = appendpath(self.build_src, os.path.dirname(base)) - self.mkpath(target_dir) - target_file = os.path.join(target_dir,os.path.basename(base)) - if (self.force or newer_group([source] + depends, target_file)): - if _f_pyf_ext_match(base): - log.info("from_template:> %s" % (target_file)) - outstr = process_f_file(source) - else: - log.info("conv_template:> %s" % (target_file)) - outstr = process_c_file(source) - fid = open(target_file,'w') - fid.write(outstr) - fid.close() - if _header_ext_match(target_file): - d = os.path.dirname(target_file) - if d not in include_dirs: - log.info(" adding '%s' to include_dirs." 
% (d)) - include_dirs.append(d) - new_sources.append(target_file) - else: - new_sources.append(source) - return new_sources - - def pyrex_sources(self, sources, extension): - new_sources = [] - ext_name = extension.name.split('.')[-1] - for source in sources: - (base, ext) = os.path.splitext(source) - if ext == '.pyx': - target_file = self.generate_a_pyrex_source(base, ext_name, - source, - extension) - new_sources.append(target_file) - else: - new_sources.append(source) - return new_sources - - def generate_a_pyrex_source(self, base, ext_name, source, extension): - if self.inplace or not have_pyrex(): - target_dir = os.path.dirname(base) - else: - target_dir = appendpath(self.build_src, os.path.dirname(base)) - target_file = os.path.join(target_dir, ext_name + '.c') - depends = [source] + extension.depends - if self.force or newer_group(depends, target_file, 'newer'): - if have_pyrex(): - import Pyrex.Compiler.Main - log.info("pyrexc:> %s" % (target_file)) - self.mkpath(target_dir) - options = Pyrex.Compiler.Main.CompilationOptions( - defaults=Pyrex.Compiler.Main.default_options, - include_path=extension.include_dirs, - output_file=target_file) - pyrex_result = Pyrex.Compiler.Main.compile(source, - options=options) - if pyrex_result.num_errors != 0: - raise DistutilsError("%d errors while compiling %r with Pyrex" \ - % (pyrex_result.num_errors, source)) - elif os.path.isfile(target_file): - log.warn("Pyrex required for compiling %r but not available,"\ - " using old target %r"\ - % (source, target_file)) - else: - raise DistutilsError("Pyrex required for compiling %r"\ - " but notavailable" % (source,)) - return target_file - - def f2py_sources(self, sources, extension): - new_sources = [] - f2py_sources = [] - f_sources = [] - f2py_targets = {} - target_dirs = [] - ext_name = extension.name.split('.')[-1] - skip_f2py = 0 - - for source in sources: - (base, ext) = os.path.splitext(source) - if ext == '.pyf': # F2PY interface file - if self.inplace: - target_dir = 
os.path.dirname(base) - else: - target_dir = appendpath(self.build_src, os.path.dirname(base)) - if os.path.isfile(source): - name = get_f2py_modulename(source) - if name != ext_name: - raise DistutilsSetupError('mismatch of extension names: %s ' - 'provides %r but expected %r' % ( - source, name, ext_name)) - target_file = os.path.join(target_dir,name+'module.c') - else: - log.debug(' source %s does not exist: skipping f2py\'ing.' \ - % (source)) - name = ext_name - skip_f2py = 1 - target_file = os.path.join(target_dir,name+'module.c') - if not os.path.isfile(target_file): - log.warn(' target %s does not exist:\n '\ - 'Assuming %smodule.c was generated with '\ - '"build_src --inplace" command.' \ - % (target_file, name)) - target_dir = os.path.dirname(base) - target_file = os.path.join(target_dir,name+'module.c') - if not os.path.isfile(target_file): - raise DistutilsSetupError("%r missing" % (target_file,)) - log.info(' Yes! Using %r as up-to-date target.' \ - % (target_file)) - target_dirs.append(target_dir) - f2py_sources.append(source) - f2py_targets[source] = target_file - new_sources.append(target_file) - elif fortran_ext_match(ext): - f_sources.append(source) - else: - new_sources.append(source) - - if not (f2py_sources or f_sources): - return new_sources - - for d in target_dirs: - self.mkpath(d) - - f2py_options = extension.f2py_options + self.f2py_opts - - if self.distribution.libraries: - for name,build_info in self.distribution.libraries: - if name in extension.libraries: - f2py_options.extend(build_info.get('f2py_options',[])) - - log.info("f2py options: %s" % (f2py_options)) - - if f2py_sources: - if len(f2py_sources) != 1: - raise DistutilsSetupError( - 'only one .pyf file is allowed per extension module but got'\ - ' more: %r' % (f2py_sources,)) - source = f2py_sources[0] - target_file = f2py_targets[source] - target_dir = os.path.dirname(target_file) or '.' 
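The f2py branch above relies on `get_f2py_modulename()` to check that a `.pyf` interface file declares the module name the extension expects. A simplified sketch of that lookup, assuming the interface file contains a `python module <name>` statement and that the `__user__` callback helper modules f2py emits should be skipped (this is our reconstruction, not numpy's exact implementation):

```python
import re

# Hypothetical, simplified version of get_f2py_modulename(): scan the lines
# of a .pyf interface file for the "python module <name>" statement,
# skipping the __user__ helper modules generated for callbacks.
_module_re = re.compile(r'\s*python\s*module\s*(?P<name>[\w_]+)', re.I)


def get_f2py_modulename(lines):
    for line in lines:
        m = _module_re.match(line)
        if m and '__user__' not in m.group('name'):
            return m.group('name')
    return None


pyf = [
    'python module flib__user__routines',   # callback helper: skipped
    'python module flib   ! in flib.pyf',
]
print(get_f2py_modulename(pyf))    # prints: flib
```

A mismatch between this name and the extension's last dotted component is what triggers the `DistutilsSetupError` raised above.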
- depends = [source] + extension.depends - if (self.force or newer_group(depends, target_file,'newer')) \ - and not skip_f2py: - log.info("f2py: %s" % (source)) - import numpy.f2py - numpy.f2py.run_main(f2py_options - + ['--build-dir',target_dir,source]) - else: - log.debug(" skipping '%s' f2py interface (up-to-date)" % (source)) - else: - #XXX TODO: --inplace support for sdist command - if is_sequence(extension): - name = extension[0] - else: name = extension.name - target_dir = os.path.join(*([self.build_src]\ - +name.split('.')[:-1])) - target_file = os.path.join(target_dir,ext_name + 'module.c') - new_sources.append(target_file) - depends = f_sources + extension.depends - if (self.force or newer_group(depends, target_file, 'newer')) \ - and not skip_f2py: - log.info("f2py:> %s" % (target_file)) - self.mkpath(target_dir) - import numpy.f2py - numpy.f2py.run_main(f2py_options + ['--lower', - '--build-dir',target_dir]+\ - ['-m',ext_name]+f_sources) - else: - log.debug(" skipping f2py fortran files for '%s' (up-to-date)"\ - % (target_file)) - - if not os.path.isfile(target_file): - raise DistutilsError("f2py target file %r not generated" % (target_file,)) - - target_c = os.path.join(self.build_src,'fortranobject.c') - target_h = os.path.join(self.build_src,'fortranobject.h') - log.info(" adding '%s' to sources." % (target_c)) - new_sources.append(target_c) - if self.build_src not in extension.include_dirs: - log.info(" adding '%s' to include_dirs." 
\ - % (self.build_src)) - extension.include_dirs.append(self.build_src) - - if not skip_f2py: - import numpy.f2py - d = os.path.dirname(numpy.f2py.__file__) - source_c = os.path.join(d,'src','fortranobject.c') - source_h = os.path.join(d,'src','fortranobject.h') - if newer(source_c,target_c) or newer(source_h,target_h): - self.mkpath(os.path.dirname(target_c)) - self.copy_file(source_c,target_c) - self.copy_file(source_h,target_h) - else: - if not os.path.isfile(target_c): - raise DistutilsSetupError("f2py target_c file %r not found" % (target_c,)) - if not os.path.isfile(target_h): - raise DistutilsSetupError("f2py target_h file %r not found" % (target_h,)) - - for name_ext in ['-f2pywrappers.f','-f2pywrappers2.f90']: - filename = os.path.join(target_dir,ext_name + name_ext) - if os.path.isfile(filename): - log.info(" adding '%s' to sources." % (filename)) - f_sources.append(filename) - - return new_sources + f_sources - - def swig_sources(self, sources, extension): - # Assuming SWIG 1.3.14 or later. 
See compatibility note in - # http://www.swig.org/Doc1.3/Python.html#Python_nn6 - - new_sources = [] - swig_sources = [] - swig_targets = {} - target_dirs = [] - py_files = [] # swig generated .py files - target_ext = '.c' - if self.swig_cpp: - typ = 'c++' - is_cpp = True - else: - typ = None - is_cpp = False - skip_swig = 0 - ext_name = extension.name.split('.')[-1] - - for source in sources: - (base, ext) = os.path.splitext(source) - if ext == '.i': # SWIG interface file - if self.inplace: - target_dir = os.path.dirname(base) - py_target_dir = self.ext_target_dir - else: - target_dir = appendpath(self.build_src, os.path.dirname(base)) - py_target_dir = target_dir - if os.path.isfile(source): - name = get_swig_modulename(source) - if name != ext_name[1:]: - raise DistutilsSetupError( - 'mismatch of extension names: %s provides %r' - ' but expected %r' % (source, name, ext_name[1:])) - if typ is None: - typ = get_swig_target(source) - is_cpp = typ=='c++' - if is_cpp: target_ext = '.cpp' - else: - typ2 = get_swig_target(source) - if typ!=typ2: - log.warn('expected %r but source %r defines %r swig target' \ - % (typ, source, typ2)) - if typ2=='c++': - log.warn('resetting swig target to c++ (some targets may have .c extension)') - is_cpp = True - target_ext = '.cpp' - else: - log.warn('assuming that %r has c++ swig target' % (source)) - target_file = os.path.join(target_dir,'%s_wrap%s' \ - % (name, target_ext)) - else: - log.warn(' source %s does not exist: skipping swig\'ing.' \ - % (source)) - name = ext_name[1:] - skip_swig = 1 - target_file = _find_swig_target(target_dir, name) - if not os.path.isfile(target_file): - log.warn(' target %s does not exist:\n '\ - 'Assuming %s_wrap.{c,cpp} was generated with '\ - '"build_src --inplace" command.' 
\ - % (target_file, name)) - target_dir = os.path.dirname(base) - target_file = _find_swig_target(target_dir, name) - if not os.path.isfile(target_file): - raise DistutilsSetupError("%r missing" % (target_file,)) - log.warn(' Yes! Using %r as up-to-date target.' \ - % (target_file)) - target_dirs.append(target_dir) - new_sources.append(target_file) - py_files.append(os.path.join(py_target_dir, name+'.py')) - swig_sources.append(source) - swig_targets[source] = new_sources[-1] - else: - new_sources.append(source) - - if not swig_sources: - return new_sources - - if skip_swig: - return new_sources + py_files - - for d in target_dirs: - self.mkpath(d) - - swig = self.swig or self.find_swig() - swig_cmd = [swig, "-python"] - if is_cpp: - swig_cmd.append('-c++') - for d in extension.include_dirs: - swig_cmd.append('-I'+d) - for source in swig_sources: - target = swig_targets[source] - depends = [source] + extension.depends - if self.force or newer_group(depends, target, 'newer'): - log.info("%s: %s" % (os.path.basename(swig) \ - + (is_cpp and '++' or ''), source)) - self.spawn(swig_cmd + self.swig_opts \ - + ["-o", target, '-outdir', py_target_dir, source]) - else: - log.debug(" skipping '%s' swig interface (up-to-date)" \ - % (source)) - - return new_sources + py_files - -_f_pyf_ext_match = re.compile(r'.*[.](f90|f95|f77|for|ftn|f|pyf)\Z',re.I).match -_header_ext_match = re.compile(r'.*[.](inc|h|hpp)\Z',re.I).match - -#### SWIG related auxiliary functions #### -_swig_module_name_match = re.compile(r'\s*%module\s*(.*\(\s*package\s*=\s*"(?P[\w_]+)".*\)|)\s*(?P[\w_]+)', - re.I).match -_has_c_header = re.compile(r'-[*]-\s*c\s*-[*]-',re.I).search -_has_cpp_header = re.compile(r'-[*]-\s*c[+][+]\s*-[*]-',re.I).search - -def get_swig_target(source): - f = open(source,'r') - result = 'c' - line = f.readline() - if _has_cpp_header(line): - result = 'c++' - if _has_c_header(line): - result = 'c' - f.close() - return result - -def get_swig_modulename(source): - f = 
open(source,'r') - f_readlines = getattr(f,'xreadlines',f.readlines) - name = None - for line in f_readlines(): - m = _swig_module_name_match(line) - if m: - name = m.group('name') - break - f.close() - return name - -def _find_swig_target(target_dir,name): - for ext in ['.cpp','.c']: - target = os.path.join(target_dir,'%s_wrap%s' % (name, ext)) - if os.path.isfile(target): - break - return target - -#### F2PY related auxiliary functions #### - -_f2py_module_name_match = re.compile(r'\s*python\s*module\s*(?P<name>[\w_]+)', - re.I).match -_f2py_user_module_name_match = re.compile(r'\s*python\s*module\s*(?P<name>[\w_]*?'\ - '__user__[\w_]*)',re.I).match - -def get_f2py_modulename(source): - name = None - f = open(source) - f_readlines = getattr(f,'xreadlines',f.readlines) - for line in f_readlines(): - m = _f2py_module_name_match(line) - if m: - if _f2py_user_module_name_match(line): # skip *__user__* names - continue - name = m.group('name') - break - f.close() - return name - -########################################## diff --git a/pythonPackages/numpy/numpy/distutils/command/config.py b/pythonPackages/numpy/numpy/distutils/command/config.py deleted file mode 100755 index c99c966ac2..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/config.py +++ /dev/null @@ -1,421 +0,0 @@ -# Added Fortran compiler support to config. Currently useful only for -# try_compile call. try_run works but is untested for most of Fortran -# compilers (they must define linker_exe first). 
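get_f2py_modulename above scans a `.pyf` signature file for its `python module` declaration while skipping the generated `*__user__*` callback modules. A self-contained sketch of that scan, operating on a list of lines instead of an open file (the regexes mirror the ones above, with the `(?P<name>...)` group written out):

```python
import re

# Same patterns as the f2py helpers above.
_f2py_module_name_match = re.compile(
    r'\s*python\s*module\s*(?P<name>[\w_]+)', re.I).match
_f2py_user_module_name_match = re.compile(
    r'\s*python\s*module\s*(?P<name>[\w_]*?__user__[\w_]*)', re.I).match

def f2py_modulename_from_lines(lines):
    """Return the first non-__user__ module name declared in a .pyf source."""
    for line in lines:
        m = _f2py_module_name_match(line)
        if m:
            if _f2py_user_module_name_match(line):
                continue  # skip *__user__* callback modules
            return m.group('name')
    return None

pyf = [
    "! -*- f90 -*-",
    "python module foo__user__routines",  # skipped
    "python module example",              # this one wins
]
```

Calling `f2py_modulename_from_lines(pyf)` here yields `'example'`; the build then checks that name against the extension name and raises `DistutilsSetupError` on a mismatch.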
-# Pearu Peterson - -import os, signal -import warnings -import sys - -from distutils.command.config import config as old_config -from distutils.command.config import LANG_EXT -from distutils import log -from distutils.file_util import copy_file -from distutils.ccompiler import CompileError, LinkError -import distutils -from numpy.distutils.exec_command import exec_command -from numpy.distutils.mingw32ccompiler import generate_manifest -from numpy.distutils.command.autodist import check_inline, check_compiler_gcc4 -from numpy.distutils.compat import get_exception - -LANG_EXT['f77'] = '.f' -LANG_EXT['f90'] = '.f90' - -class config(old_config): - old_config.user_options += [ - ('fcompiler=', None, "specify the Fortran compiler type"), - ] - - def initialize_options(self): - self.fcompiler = None - old_config.initialize_options(self) - - def try_run(self, body, headers=None, include_dirs=None, - libraries=None, library_dirs=None, lang="c"): - warnings.warn("\n+++++++++++++++++++++++++++++++++++++++++++++++++\n" \ - "Usage of try_run is deprecated: please do not \n" \ - "use it anymore, and avoid configuration checks \n" \ - "involving running executable on the target machine.\n" \ - "+++++++++++++++++++++++++++++++++++++++++++++++++\n", - DeprecationWarning) - return old_config.try_run(self, body, headers, include_dirs, libraries, - library_dirs, lang) - - def _check_compiler (self): - old_config._check_compiler(self) - from numpy.distutils.fcompiler import FCompiler, new_fcompiler - - if sys.platform == 'win32' and self.compiler.compiler_type == 'msvc': - # XXX: hack to circumvent a python 2.6 bug with msvc9compiler: - # initialize calls query_vcvarsall, which throws an IOError, and - # causes an error along the way without much information. We try to - # catch it here, hoping it is early enough, and print a helpful - # message instead of Error: None. 
- if not self.compiler.initialized: - try: - self.compiler.initialize() - except IOError: - e = get_exception() - msg = """\ -Could not initialize compiler instance: do you have Visual Studio -installed ? If you are trying to build with mingw, please use python setup.py -build -c mingw32 instead ). If you have Visual Studio installed, check it is -correctly installed, and the right version (VS 2008 for python 2.6, VS 2003 for -2.5, etc...). Original exception was: %s, and the Compiler -class was %s -============================================================================""" \ - % (e, self.compiler.__class__.__name__) - print ("""\ -============================================================================""") - raise distutils.errors.DistutilsPlatformError(msg) - - if not isinstance(self.fcompiler, FCompiler): - self.fcompiler = new_fcompiler(compiler=self.fcompiler, - dry_run=self.dry_run, force=1, - c_compiler=self.compiler) - if self.fcompiler is not None: - self.fcompiler.customize(self.distribution) - if self.fcompiler.get_version(): - self.fcompiler.customize_cmd(self) - self.fcompiler.show_customization() - - def _wrap_method(self,mth,lang,args): - from distutils.ccompiler import CompileError - from distutils.errors import DistutilsExecError - save_compiler = self.compiler - if lang in ['f77','f90']: - self.compiler = self.fcompiler - try: - ret = mth(*((self,)+args)) - except (DistutilsExecError,CompileError): - msg = str(get_exception()) - self.compiler = save_compiler - raise CompileError - self.compiler = save_compiler - return ret - - def _compile (self, body, headers, include_dirs, lang): - return self._wrap_method(old_config._compile,lang, - (body, headers, include_dirs, lang)) - - def _link (self, body, - headers, include_dirs, - libraries, library_dirs, lang): - if self.compiler.compiler_type=='msvc': - libraries = (libraries or [])[:] - library_dirs = (library_dirs or [])[:] - if lang in ['f77','f90']: - lang = 'c' # always use system linker 
when using MSVC compiler - if self.fcompiler: - for d in self.fcompiler.library_dirs or []: - # correct path when compiling in Cygwin but with - # normal Win Python - if d.startswith('/usr/lib'): - s,o = exec_command(['cygpath', '-w', d], - use_tee=False) - if not s: d = o - library_dirs.append(d) - for libname in self.fcompiler.libraries or []: - if libname not in libraries: - libraries.append(libname) - for libname in libraries: - if libname.startswith('msvc'): continue - fileexists = False - for libdir in library_dirs or []: - libfile = os.path.join(libdir,'%s.lib' % (libname)) - if os.path.isfile(libfile): - fileexists = True - break - if fileexists: continue - # make g77-compiled static libs available to MSVC - fileexists = False - for libdir in library_dirs: - libfile = os.path.join(libdir,'lib%s.a' % (libname)) - if os.path.isfile(libfile): - # copy libname.a file to name.lib so that MSVC linker - # can find it - libfile2 = os.path.join(libdir,'%s.lib' % (libname)) - copy_file(libfile, libfile2) - self.temp_files.append(libfile2) - fileexists = True - break - if fileexists: continue - log.warn('could not find library %r in directories %s' \ - % (libname, library_dirs)) - elif self.compiler.compiler_type == 'mingw32': - generate_manifest(self) - return self._wrap_method(old_config._link,lang, - (body, headers, include_dirs, - libraries, library_dirs, lang)) - - def check_header(self, header, include_dirs=None, library_dirs=None, lang='c'): - self._check_compiler() - return self.try_compile( - "/* we need a dummy line to make distutils happy */", - [header], include_dirs) - - def check_decl(self, symbol, - headers=None, include_dirs=None): - self._check_compiler() - body = """ -int main() -{ -#ifndef %s - (void) %s; -#endif - ; - return 0; -}""" % (symbol, symbol) - - return self.try_compile(body, headers, include_dirs) - - def check_type(self, type_name, headers=None, include_dirs=None, - library_dirs=None): - """Check type availability. 
Return True if the type can be compiled, - False otherwise""" - self._check_compiler() - - # First check the type can be compiled - body = r""" -int main() { - if ((%(name)s *) 0) - return 0; - if (sizeof (%(name)s)) - return 0; -} -""" % {'name': type_name} - - st = False - try: - try: - self._compile(body % {'type': type_name}, - headers, include_dirs, 'c') - st = True - except distutils.errors.CompileError: - st = False - finally: - self._clean() - - return st - - def check_type_size(self, type_name, headers=None, include_dirs=None, library_dirs=None, expected=None): - """Check size of a given type.""" - self._check_compiler() - - # First check the type can be compiled - body = r""" -typedef %(type)s npy_check_sizeof_type; -int main () -{ - static int test_array [1 - 2 * !(((long) (sizeof (npy_check_sizeof_type))) >= 0)]; - test_array [0] = 0 - - ; - return 0; -} -""" - self._compile(body % {'type': type_name}, - headers, include_dirs, 'c') - self._clean() - - if expected: - body = r""" -typedef %(type)s npy_check_sizeof_type; -int main () -{ - static int test_array [1 - 2 * !(((long) (sizeof (npy_check_sizeof_type))) == %(size)s)]; - test_array [0] = 0 - - ; - return 0; -} -""" - for size in expected: - try: - self._compile(body % {'type': type_name, 'size': size}, - headers, include_dirs, 'c') - self._clean() - return size - except CompileError: - pass - - # this fails to *compile* if size > sizeof(type) - body = r""" -typedef %(type)s npy_check_sizeof_type; -int main () -{ - static int test_array [1 - 2 * !(((long) (sizeof (npy_check_sizeof_type))) <= %(size)s)]; - test_array [0] = 0 - - ; - return 0; -} -""" - - # The principle is simple: we first find low and high bounds of size - # for the type, where low/high are looked up on a log scale. 
Then, we - # do a binary search to find the exact size between low and high - low = 0 - mid = 0 - while True: - try: - self._compile(body % {'type': type_name, 'size': mid}, - headers, include_dirs, 'c') - self._clean() - break - except CompileError: - #log.info("failure to test for bound %d" % mid) - low = mid + 1 - mid = 2 * mid + 1 - - high = mid - # Binary search: - while low != high: - mid = (high - low) / 2 + low - try: - self._compile(body % {'type': type_name, 'size': mid}, - headers, include_dirs, 'c') - self._clean() - high = mid - except CompileError: - low = mid + 1 - return low - - def check_func(self, func, - headers=None, include_dirs=None, - libraries=None, library_dirs=None, - decl=False, call=False, call_args=None): - # clean up distutils's config a bit: add void to main(), and - # return a value. - self._check_compiler() - body = [] - if decl: - body.append("int %s (void);" % func) - # Handle MSVC intrinsics: force MS compiler to make a function call. - # Useful to test for some functions when built with optimization on, to - # avoid build error because the intrinsic and our 'fake' test - # declaration do not match. - body.append("#ifdef _MSC_VER") - body.append("#pragma function(%s)" % func) - body.append("#endif") - body.append("int main (void) {") - if call: - if call_args is None: - call_args = '' - body.append(" %s(%s);" % (func, call_args)) - else: - body.append(" %s;" % func) - body.append(" return 0;") - body.append("}") - body = '\n'.join(body) + "\n" - - return self.try_link(body, headers, include_dirs, - libraries, library_dirs) - - def check_funcs_once(self, funcs, - headers=None, include_dirs=None, - libraries=None, library_dirs=None, - decl=False, call=False, call_args=None): - """Check a list of functions at once. - - This is useful to speed up things, since all the functions in the funcs - list will be put in one compilation unit. 
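check_type_size above first grows `mid` exponentially (`2 * mid + 1`) until the `sizeof(type) <= size` snippet compiles, then binary-searches between `low` and `high`. The same search, with "the snippet compiles" replaced by an ordinary predicate so it can run without a C compiler, can be sketched as:

```python
def find_size(fits):
    """Find the smallest n for which fits(n) is True.

    Mirrors the bracketing + binary search in check_type_size:
    fits(mid) plays the role of "the snippet with size `mid` compiles",
    i.e. sizeof(type) <= mid.
    """
    low, mid = 0, 0
    # Exponential search for an upper bound on the size.
    while not fits(mid):
        low = mid + 1
        mid = 2 * mid + 1
    high = mid
    # Binary search between the established bounds.
    while low != high:
        mid = (high - low) // 2 + low
        if fits(mid):
            high = mid
        else:
            low = mid + 1
    return low

sizeof_double = 8  # pretend the compiler is probing a C double
size = find_size(lambda n: n >= sizeof_double)
```

Here `find_size` returns 8: the predicate succeeds exactly when the candidate is at least the real size, just as the generated C snippet compiles exactly when `sizeof(type) <= size`.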
- - Arguments - --------- - funcs: seq - list of functions to test - include_dirs : seq - list of header paths - libraries : seq - list of libraries to link the code snippet to - library_dirs : seq - list of library paths - decl : dict - for every (key, value), the declaration in the value will be - used for function in key. If a function is not in the - dictionary, no declaration will be used. - call : dict - for every item (f, value), if the value is True, a call will be - done to the function f. - """ - self._check_compiler() - body = [] - if decl: - for f, v in decl.items(): - if v: - body.append("int %s (void);" % f) - - # Handle MS intrinsics. See check_func for more info. - body.append("#ifdef _MSC_VER") - for func in funcs: - body.append("#pragma function(%s)" % func) - body.append("#endif") - - body.append("int main (void) {") - if call: - for f in funcs: - if f in call and call[f]: - if not (call_args and f in call_args and call_args[f]): - args = '' - else: - args = call_args[f] - body.append(" %s(%s);" % (f, args)) - else: - body.append(" %s;" % f) - else: - for f in funcs: - body.append(" %s;" % f) - body.append(" return 0;") - body.append("}") - body = '\n'.join(body) + "\n" - - return self.try_link(body, headers, include_dirs, - libraries, library_dirs) - - def check_inline(self): - """Return the inline keyword recognized by the compiler, empty string - otherwise.""" - return check_inline(self) - - def check_compiler_gcc4(self): - """Return True if the C compiler is gcc >= 4.""" - return check_compiler_gcc4(self) - - def get_output(self, body, headers=None, include_dirs=None, - libraries=None, library_dirs=None, - lang="c"): - """Try to compile, link to an executable, and run a program - built from 'body' and 'headers'. Returns the exit status code - of the program and its output. 
- """ - warnings.warn("\n+++++++++++++++++++++++++++++++++++++++++++++++++\n" \ - "Usage of get_output is deprecated: please do not \n" \ - "use it anymore, and avoid configuration checks \n" \ - "involving running executable on the target machine.\n" \ - "+++++++++++++++++++++++++++++++++++++++++++++++++\n", - DeprecationWarning) - from distutils.ccompiler import CompileError, LinkError - self._check_compiler() - exitcode, output = 255, '' - try: - src, obj, exe = self._link(body, headers, include_dirs, - libraries, library_dirs, lang) - exe = os.path.join('.', exe) - exitstatus, output = exec_command(exe, execute_in='.') - if hasattr(os, 'WEXITSTATUS'): - exitcode = os.WEXITSTATUS(exitstatus) - if os.WIFSIGNALED(exitstatus): - sig = os.WTERMSIG(exitstatus) - log.error('subprocess exited with signal %d' % (sig,)) - if sig == signal.SIGINT: - # control-C - raise KeyboardInterrupt - else: - exitcode = exitstatus - log.info("success!") - except (CompileError, LinkError): - log.info("failure.") - - self._clean() - return exitcode, output diff --git a/pythonPackages/numpy/numpy/distutils/command/config_compiler.py b/pythonPackages/numpy/numpy/distutils/command/config_compiler.py deleted file mode 100755 index e7fee94dfb..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/config_compiler.py +++ /dev/null @@ -1,123 +0,0 @@ -from distutils.core import Command -from numpy.distutils import log - -#XXX: Linker flags - -def show_fortran_compilers(_cache=[]): - # Using cache to prevent infinite recursion - if _cache: return - _cache.append(1) - from numpy.distutils.fcompiler import show_fcompilers - import distutils.core - dist = distutils.core._setup_distribution - show_fcompilers(dist) - -class config_fc(Command): - """ Distutils command to hold user specified options - to Fortran compilers. - - config_fc command is used by the FCompiler.customize() method. 
- """ - - description = "specify Fortran 77/Fortran 90 compiler information" - - user_options = [ - ('fcompiler=',None,"specify Fortran compiler type"), - ('f77exec=', None, "specify F77 compiler command"), - ('f90exec=', None, "specify F90 compiler command"), - ('f77flags=',None,"specify F77 compiler flags"), - ('f90flags=',None,"specify F90 compiler flags"), - ('opt=',None,"specify optimization flags"), - ('arch=',None,"specify architecture specific optimization flags"), - ('debug','g',"compile with debugging information"), - ('noopt',None,"compile without optimization"), - ('noarch',None,"compile without arch-dependent optimization"), - ] - - help_options = [ - ('help-fcompiler',None, "list available Fortran compilers", - show_fortran_compilers), - ] - - boolean_options = ['debug','noopt','noarch'] - - def initialize_options(self): - self.fcompiler = None - self.f77exec = None - self.f90exec = None - self.f77flags = None - self.f90flags = None - self.opt = None - self.arch = None - self.debug = None - self.noopt = None - self.noarch = None - - def finalize_options(self): - log.info('unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options') - build_clib = self.get_finalized_command('build_clib') - build_ext = self.get_finalized_command('build_ext') - config = self.get_finalized_command('config') - build = self.get_finalized_command('build') - cmd_list = [self, config, build_clib, build_ext, build] - for a in ['fcompiler']: - l = [] - for c in cmd_list: - v = getattr(c,a) - if v is not None: - if not isinstance(v, str): v = v.compiler_type - if v not in l: l.append(v) - if not l: v1 = None - else: v1 = l[0] - if len(l)>1: - log.warn(' commands have different --%s options: %s'\ - ', using first in list as default' % (a, l)) - if v1: - for c in cmd_list: - if getattr(c,a) is None: setattr(c, a, v1) - - def run(self): - # Do nothing. 
- return - -class config_cc(Command): - """ Distutils command to hold user specified options - to C/C++ compilers. - """ - - description = "specify C/C++ compiler information" - - user_options = [ - ('compiler=',None,"specify C/C++ compiler type"), - ] - - def initialize_options(self): - self.compiler = None - - def finalize_options(self): - log.info('unifying config_cc, config, build_clib, build_ext, build commands --compiler options') - build_clib = self.get_finalized_command('build_clib') - build_ext = self.get_finalized_command('build_ext') - config = self.get_finalized_command('config') - build = self.get_finalized_command('build') - cmd_list = [self, config, build_clib, build_ext, build] - for a in ['compiler']: - l = [] - for c in cmd_list: - v = getattr(c,a) - if v is not None: - if not isinstance(v, str): v = v.compiler_type - if v not in l: l.append(v) - if not l: v1 = None - else: v1 = l[0] - if len(l)>1: - log.warn(' commands have different --%s options: %s'\ - ', using first in list as default' % (a, l)) - if v1: - for c in cmd_list: - if getattr(c,a) is None: setattr(c, a, v1) - return - - def run(self): - # Do nothing. - return diff --git a/pythonPackages/numpy/numpy/distutils/command/develop.py b/pythonPackages/numpy/numpy/distutils/command/develop.py deleted file mode 100755 index 1677066719..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/develop.py +++ /dev/null @@ -1,15 +0,0 @@ -""" Override the develop command from setuptools so we can ensure that our -generated files (from build_src or build_scripts) are properly converted to real -files with filenames. -""" - -from setuptools.command.develop import develop as old_develop - -class develop(old_develop): - __doc__ = old_develop.__doc__ - def install_for_development(self): - # Build sources in-place, too. - self.reinitialize_command('build_src', inplace=1) - # Make sure scripts are built. 
- self.run_command('build_scripts') - old_develop.install_for_development(self) diff --git a/pythonPackages/numpy/numpy/distutils/command/egg_info.py b/pythonPackages/numpy/numpy/distutils/command/egg_info.py deleted file mode 100755 index 687faf080a..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/egg_info.py +++ /dev/null @@ -1,9 +0,0 @@ -from setuptools.command.egg_info import egg_info as _egg_info - -class egg_info(_egg_info): - def run(self): - # We need to ensure that build_src has been executed in order to give - # setuptools' egg_info command real filenames instead of functions which - # generate files. - self.run_command("build_src") - _egg_info.run(self) diff --git a/pythonPackages/numpy/numpy/distutils/command/install.py b/pythonPackages/numpy/numpy/distutils/command/install.py deleted file mode 100755 index ad3cc507db..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/install.py +++ /dev/null @@ -1,77 +0,0 @@ -import sys -if 'setuptools' in sys.modules: - import setuptools.command.install as old_install_mod - have_setuptools = True -else: - import distutils.command.install as old_install_mod - have_setuptools = False -old_install = old_install_mod.install -from distutils.file_util import write_file - -class install(old_install): - - # Always run install_clib - the command is cheap, so no need to bypass it; - # but it's not run by setuptools -- so it's run again in install_data - sub_commands = old_install.sub_commands + [ - ('install_clib', lambda x: True) - ] - - def finalize_options (self): - old_install.finalize_options(self) - self.install_lib = self.install_libbase - - def setuptools_run(self): - """ The setuptools version of the .run() method. - - We must pull in the entire code so we can override the level used in the - _getframe() call since we wrap this call by one more level. - """ - # Explicit request for old-style install? 
Just do it - if self.old_and_unmanageable or self.single_version_externally_managed: - return old_install_mod._install.run(self) - - # Attempt to detect whether we were called from setup() or by another - # command. If we were called by setup(), our caller will be the - # 'run_command' method in 'distutils.dist', and *its* caller will be - # the 'run_commands' method. If we were called any other way, our - # immediate caller *might* be 'run_command', but it won't have been - # called by 'run_commands'. This is slightly kludgy, but seems to - # work. - # - caller = sys._getframe(3) - caller_module = caller.f_globals.get('__name__','') - caller_name = caller.f_code.co_name - - if caller_module != 'distutils.dist' or caller_name!='run_commands': - # We weren't called from the command line or setup(), so we - # should run in backward-compatibility mode to support bdist_* - # commands. - old_install_mod._install.run(self) - else: - self.do_egg_install() - - def run(self): - if not have_setuptools: - r = old_install.run(self) - else: - r = self.setuptools_run() - if self.record: - # bdist_rpm fails when INSTALLED_FILES contains - # paths with spaces. Such paths must be enclosed - # with double-quotes. 
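The record rewrite above exists because bdist_rpm chokes on INSTALLED_FILES entries that contain spaces; such paths must be wrapped in double quotes. The quoting rule on its own, as a small sketch (the sample paths are made up for illustration):

```python
def quote_record_lines(lines):
    """Wrap any installed-file path containing a space in double quotes,
    as bdist_rpm requires; other paths pass through unchanged."""
    out = []
    need_rewrite = False
    for line in lines:
        line = line.rstrip()
        if ' ' in line:
            need_rewrite = True
            line = '"%s"' % line
        out.append(line)
    return out, need_rewrite

paths = ['/usr/lib/python/foo.py', '/usr/share/doc/My Project/README\n']
quoted, changed = quote_record_lines(paths)
```

Only when `changed` comes back True does the command re-execute `write_file` on the record, which keeps the no-space common case a no-op.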
- f = open(self.record,'r') - lines = [] - need_rewrite = False - for l in f.readlines(): - l = l.rstrip() - if ' ' in l: - need_rewrite = True - l = '"%s"' % (l) - lines.append(l) - f.close() - if need_rewrite: - self.execute(write_file, - (self.record, lines), - "re-writing list of installed files to '%s'" % - self.record) - return r diff --git a/pythonPackages/numpy/numpy/distutils/command/install_clib.py b/pythonPackages/numpy/numpy/distutils/command/install_clib.py deleted file mode 100755 index 638d4beacb..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/install_clib.py +++ /dev/null @@ -1,37 +0,0 @@ -import os -from distutils.core import Command -from distutils.ccompiler import new_compiler -from numpy.distutils.misc_util import get_cmd - -class install_clib(Command): - description = "Command to install installable C libraries" - - user_options = [] - - def initialize_options(self): - self.install_dir = None - self.outfiles = [] - - def finalize_options(self): - self.set_undefined_options('install', ('install_lib', 'install_dir')) - - def run (self): - build_clib_cmd = get_cmd("build_clib") - build_dir = build_clib_cmd.build_clib - - # We need the compiler to get the library name -> filename association - if not build_clib_cmd.compiler: - compiler = new_compiler(compiler=None) - compiler.customize(self.distribution) - else: - compiler = build_clib_cmd.compiler - - for l in self.distribution.installed_libraries: - target_dir = os.path.join(self.install_dir, l.target_dir) - name = compiler.library_filename(l.name) - source = os.path.join(build_dir, name) - self.mkpath(target_dir) - self.outfiles.append(self.copy_file(source, target_dir)[0]) - - def get_outputs(self): - return self.outfiles diff --git a/pythonPackages/numpy/numpy/distutils/command/install_data.py b/pythonPackages/numpy/numpy/distutils/command/install_data.py deleted file mode 100755 index 0a2e68ae19..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/install_data.py +++ 
/dev/null @@ -1,24 +0,0 @@ -import sys -have_setuptools = ('setuptools' in sys.modules) - -from distutils.command.install_data import install_data as old_install_data - -#data installer with improved intelligence over distutils -#data files are copied into the project directory instead -#of willy-nilly -class install_data (old_install_data): - - def run(self): - old_install_data.run(self) - - if have_setuptools: - # Run install_clib again, since setuptools does not run sub-commands - # of install automatically - self.run_command('install_clib') - - def finalize_options (self): - self.set_undefined_options('install', - ('install_lib', 'install_dir'), - ('root', 'root'), - ('force', 'force'), - ) diff --git a/pythonPackages/numpy/numpy/distutils/command/install_headers.py b/pythonPackages/numpy/numpy/distutils/command/install_headers.py deleted file mode 100755 index 58ace10644..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/install_headers.py +++ /dev/null @@ -1,25 +0,0 @@ -import os -from distutils.command.install_headers import install_headers as old_install_headers - -class install_headers (old_install_headers): - - def run (self): - headers = self.distribution.headers - if not headers: - return - - prefix = os.path.dirname(self.install_dir) - for header in headers: - if isinstance(header,tuple): - # Kind of a hack, but I don't know where else to change this... 
- if header[0] == 'numpy.core': - header = ('numpy', header[1]) - if os.path.splitext(header[1])[1] == '.inc': - continue - d = os.path.join(*([prefix]+header[0].split('.'))) - header = header[1] - else: - d = self.install_dir - self.mkpath(d) - (out, _) = self.copy_file(header, d) - self.outfiles.append(out) diff --git a/pythonPackages/numpy/numpy/distutils/command/scons.py b/pythonPackages/numpy/numpy/distutils/command/scons.py deleted file mode 100755 index d7bbec35e5..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/scons.py +++ /dev/null @@ -1,589 +0,0 @@ -import os -import sys -import os.path -from os.path import join as pjoin, dirname as pdirname - -from distutils.errors import DistutilsPlatformError -from distutils.errors import DistutilsExecError, DistutilsSetupError - -from numpy.distutils.command.build_ext import build_ext as old_build_ext -from numpy.distutils.ccompiler import CCompiler, new_compiler -from numpy.distutils.fcompiler import FCompiler, new_fcompiler -from numpy.distutils.exec_command import find_executable -from numpy.distutils import log -from numpy.distutils.misc_util import is_bootstrapping, get_cmd -from numpy.distutils.misc_util import get_numpy_include_dirs as _incdir -from numpy.distutils.compat import get_exception - -# A few notes: -# - numscons is not mandatory to build numpy, so we cannot import it here. -# Any numscons import has to happen once we check numscons is available and -# is required for the build (call through setupscons.py or native numscons -# build). -def get_scons_build_dir(): - """Return the top path where everything produced by scons will be put. - - The path is relative to the top setup.py""" - from numscons import get_scons_build_dir - return get_scons_build_dir() - -def get_scons_pkg_build_dir(pkg): - """Return the build directory for the given package (foo.bar). 
- - The path is relative to the top setup.py""" - from numscons.core.utils import pkg_to_path - return pjoin(get_scons_build_dir(), pkg_to_path(pkg)) - -def get_scons_configres_dir(): - """Return the top path where everything produced by scons will be put. - - The path is relative to the top setup.py""" - from numscons import get_scons_configres_dir - return get_scons_configres_dir() - -def get_scons_configres_filename(): - """Return the top path where everything produced by scons will be put. - - The path is relative to the top setup.py""" - from numscons import get_scons_configres_filename - return get_scons_configres_filename() - -def get_scons_local_path(): - """This returns the full path where scons.py for scons-local is located.""" - from numscons import get_scons_path - return get_scons_path() - -def _get_top_dir(pkg): - # XXX: this mess is necessary because scons is launched per package, and - # has no knowledge outside its build dir, which is package dependent. If - # one day numscons does not launch one process/package, this will be - # unnecessary. 
- from numscons import get_scons_build_dir - from numscons.core.utils import pkg_to_path - scdir = pjoin(get_scons_build_dir(), pkg_to_path(pkg)) - n = scdir.count(os.sep) - return os.sep.join([os.pardir for i in range(n+1)]) - -def get_distutils_libdir(cmd, pkg): - """Returns the path where distutils installs libraries, relative to the - scons build directory.""" - return pjoin(_get_top_dir(pkg), cmd.build_lib) - -def get_distutils_clibdir(cmd, pkg): - """Returns the path where distutils puts pure C libraries.""" - return pjoin(_get_top_dir(pkg), cmd.build_clib) - -def get_distutils_install_prefix(pkg, inplace): - """Returns the installation path for the current package.""" - from numscons.core.utils import pkg_to_path - if inplace == 1: - return pkg_to_path(pkg) - else: - install_cmd = get_cmd('install').get_finalized_command('install') - return pjoin(install_cmd.install_libbase, pkg_to_path(pkg)) - -def get_python_exec_invoc(): - """This returns the python executable from which this file is invoked.""" - # Do we need to take into account the PYTHONPATH, in a cross platform way, - # that is, the string returned can be executed directly on supported - # platforms, and the sys.path of the executed python should be the same - # as the caller's? This may not be necessary, since os.system is said to - # take into account os.environ. This actually also works for my way of - # using "local python", using the alias facility of bash. - return sys.executable - -def get_numpy_include_dirs(sconscript_path): - """Return include dirs for numpy.
- - The paths are relatively to the setup.py script path.""" - from numscons import get_scons_build_dir - scdir = pjoin(get_scons_build_dir(), pdirname(sconscript_path)) - n = scdir.count(os.sep) - - dirs = _incdir() - rdirs = [] - for d in dirs: - rdirs.append(pjoin(os.sep.join([os.pardir for i in range(n+1)]), d)) - return rdirs - -def dirl_to_str(dirlist): - """Given a list of directories, returns a string where the paths are - concatenated by the path separator. - - example: ['foo/bar', 'bar/foo'] will return 'foo/bar:bar/foo'.""" - return os.pathsep.join(dirlist) - -def dist2sconscc(compiler): - """This converts the name passed to distutils to scons name convention (C - compiler). compiler should be a CCompiler instance. - - Example: - --compiler=intel -> intelc""" - compiler_type = compiler.compiler_type - if compiler_type == 'msvc': - return 'msvc' - elif compiler_type == 'intel': - return 'intelc' - else: - return compiler.compiler[0] - -def dist2sconsfc(compiler): - """This converts the name passed to distutils to scons name convention - (Fortran compiler). The argument should be a FCompiler instance. - - Example: - --fcompiler=intel -> ifort on linux, ifl on windows""" - if compiler.compiler_type == 'intel': - #raise NotImplementedError('FIXME: intel fortran compiler name ?') - return 'ifort' - elif compiler.compiler_type == 'gnu': - return 'g77' - elif compiler.compiler_type == 'gnu95': - return 'gfortran' - elif compiler.compiler_type == 'sun': - return 'sunf77' - else: - # XXX: Just give up for now, and use generic fortran compiler - return 'fortran' - -def dist2sconscxx(compiler): - """This converts the name passed to distutils to scons name convention - (C++ compiler). 
The argument should be a Compiler instance.""" - if compiler.compiler_type == 'msvc': - return compiler.compiler_type - - return compiler.compiler_cxx[0] - -def get_compiler_executable(compiler): - """For any given CCompiler instance, this gives us the name of the C compiler - (the actual executable). - - NOTE: does NOT work with FCompiler instances.""" - # Geez, why does distutils have no common way to get the compiler name... - if compiler.compiler_type == 'msvc': - # this is hardcoded in distutils... A cleaner way would be to - # initialize the compiler instance and then get compiler.cc, but this - # may be costly: we really just want a string. - # XXX: we need to initialize the compiler anyway, so do not use - # hardcoded string - #compiler.initialize() - #print compiler.cc - return 'cl.exe' - else: - return compiler.compiler[0] - -def get_f77_compiler_executable(compiler): - """For any given FCompiler instance, this gives us the name of the F77 compiler - (the actual executable).""" - return compiler.compiler_f77[0] - -def get_cxxcompiler_executable(compiler): - """For any given CCompiler instance, this gives us the name of the CXX compiler - (the actual executable). - - NOTE: does NOT work with FCompiler instances.""" - # Geez, why does distutils have no common way to get the compiler name... - if compiler.compiler_type == 'msvc': - # this is hardcoded in distutils... A cleaner way would be to - # initialize the compiler instance and then get compiler.cc, but this - # may be costly: we really just want a string. - # XXX: we need to initialize the compiler anyway, so do not use - # hardcoded string - #compiler.initialize() - #print compiler.cc - return 'cl.exe' - else: - return compiler.compiler_cxx[0] - -def get_tool_path(compiler): - """Given a distutils.ccompiler.CCompiler class, returns the path of the - toolset related to C compilation.""" - fullpath_exec = find_executable(get_compiler_executable(compiler)) - if fullpath_exec: - fullpath = pdirname(fullpath_exec) - else: - raise DistutilsSetupError("Could not find compiler executable info for scons") - return fullpath - -def get_f77_tool_path(compiler): - """Given a distutils.ccompiler.FCompiler class, returns the path of the - toolset related to F77 compilation.""" - fullpath_exec = find_executable(get_f77_compiler_executable(compiler)) - if fullpath_exec: - fullpath = pdirname(fullpath_exec) - else: - raise DistutilsSetupError("Could not find F77 compiler executable "\ - "info for scons") - return fullpath - -def get_cxx_tool_path(compiler): - """Given a distutils.ccompiler.CCompiler class, returns the path of the - toolset related to C++ compilation.""" - fullpath_exec = find_executable(get_cxxcompiler_executable(compiler)) - if fullpath_exec: - fullpath = pdirname(fullpath_exec) - else: - raise DistutilsSetupError("Could not find compiler executable info for scons") - return fullpath - -def protect_path(path): - """Convert path (given as a string) to something the shell will have no - problem understanding (spaces, etc.).""" - if path: - # XXX: to do this correctly; this is totally bogus for now (does not check for - # an already quoted path, for example). - return '"' + path + '"' - else: - return '""' - -def parse_package_list(pkglist): - return pkglist.split(",") - -def find_common(seq1, seq2): - """Given two lists, return the indices of the common items. - - The indices are relative to seq1.
- - Note: does not handle duplicate items.""" - dict2 = dict([(i, None) for i in seq2]) - - return [i for i in range(len(seq1)) if dict2.has_key(seq1[i])] - -def select_packages(sconspkg, pkglist): - """Given a list of packages in pkglist, return the list of packages which - match this list.""" - common = find_common(sconspkg, pkglist) - if not len(common) == len(pkglist): - msg = "the package list contains a package not found in "\ - "the current list. The current list is %s" % sconspkg - raise ValueError(msg) - return common - -def check_numscons(minver): - """Check that we can use numscons. - - minver is a 3-integer tuple which defines the minimum version.""" - try: - import numscons - except ImportError: - e = get_exception() - raise RuntimeError("importing numscons failed (error was %s), using " \ - "scons within distutils is not possible without " - "this package " % str(e)) - - try: - # version_info was added in 0.10.0 - from numscons import version_info - # Stupid me used strings instead of numbers in version_info in - # dev versions of 0.10.0 - if isinstance(version_info[0], str): - raise ValueError("Numscons %s or above expected " \ - "(detected 0.10.0)" % str(minver)) - # Stupid me used a list instead of a tuple in numscons - version_info = tuple(version_info) - if version_info[:3] < minver: - raise ValueError("Numscons %s or above expected (got %s) " - % (str(minver), str(version_info[:3]))) - except ImportError: - raise RuntimeError("You need numscons >= %s to build numpy "\ - "with numscons (imported numscons path " \ - "is %s)." % (minver, numscons.__file__)) - -# XXX: this is a gigantic mess. Refactor this at some point. -class scons(old_build_ext): - # XXX: add an option to the scons command for configuration (auto/force/cache). - description = "Scons builder" - - library_options = [ - ('with-perflib=', None, - 'Specify which performance library to use for BLAS/LAPACK/etc...'
\ - 'Examples: mkl/atlas/sunper/accelerate'), - ('with-mkl-lib=', None, 'TODO'), - ('with-mkl-include=', None, 'TODO'), - ('with-mkl-libraries=', None, 'TODO'), - ('with-atlas-lib=', None, 'TODO'), - ('with-atlas-include=', None, 'TODO'), - ('with-atlas-libraries=', None, 'TODO') - ] - user_options = [ - ('jobs=', 'j', "specify number of worker threads when executing" \ - "scons"), - ('inplace', 'i', 'If specified, build in place.'), - ('import-env', 'e', 'If specified, import user environment into scons env["ENV"].'), - ('bypass', 'b', 'Bypass distutils compiler detection (experimental).'), - ('scons-tool-path=', None, 'specify additional path '\ - '(absolute) to look for scons tools'), - ('silent=', None, 'specify whether scons output should less verbose'\ - '(1), silent (2), super silent (3) or not (0, default)'), - ('log-level=', None, 'specify log level for numscons. Any value ' \ - 'valid for the logging python module is valid'), - ('package-list=', None, - 'If specified, only run scons on the given '\ - 'packages (example: --package-list=scipy.cluster). If empty, '\ - 'no package is built'), - ('fcompiler=', None, "specify the Fortran compiler type"), - ('compiler=', None, "specify the C compiler type"), - ('cxxcompiler=', None, - "specify the C++ compiler type (same as C by default)"), - ('debug', 'g', - "compile/link with debugging information"), - ] + library_options - - def initialize_options(self): - old_build_ext.initialize_options(self) - self.build_clib = None - - self.debug = 0 - - self.compiler = None - self.cxxcompiler = None - self.fcompiler = None - - self.jobs = None - self.silent = 0 - self.import_env = 0 - self.scons_tool_path = '' - # If true, we bypass distutils to find the c compiler altogether. This - # is to be used in desperate cases (like incompatible visual studio - # version). 
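The `check_numscons` helper earlier in this file has to normalize `version_info` (early numscons releases exposed strings or lists) before comparing against the minimum version. The underlying check reduces to lexicographic tuple comparison; a standalone sketch of that normalization (the helper name `version_at_least` is hypothetical, not part of the original module):

```python
def version_at_least(version_info, minver):
    # Coerce each of the first three components to int so that
    # ('0', '11', '0') and [0, 11, 0] both compare correctly, then
    # rely on Python's lexicographic tuple ordering.
    v = tuple(int(x) for x in version_info[:3])
    return v >= tuple(minver)
```

Truncating to three components also lets trailing release tags (e.g. a fourth `'dev'` element) pass through without breaking the comparison.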
- self._bypass_distutils_cc = False - - # scons compilers - self.scons_compiler = None - self.scons_compiler_path = None - self.scons_fcompiler = None - self.scons_fcompiler_path = None - self.scons_cxxcompiler = None - self.scons_cxxcompiler_path = None - - self.package_list = None - self.inplace = 0 - self.bypass = 0 - - # Only critical things - self.log_level = 50 - - # library options - self.with_perflib = [] - self.with_mkl_lib = [] - self.with_mkl_include = [] - self.with_mkl_libraries = [] - self.with_atlas_lib = [] - self.with_atlas_include = [] - self.with_atlas_libraries = [] - - def _init_ccompiler(self, compiler_type): - # XXX: The logic to bypass distutils is ... not so logic. - if compiler_type == 'msvc': - self._bypass_distutils_cc = True - try: - distutils_compiler = new_compiler(compiler=compiler_type, - verbose=self.verbose, - dry_run=self.dry_run, - force=self.force) - distutils_compiler.customize(self.distribution) - # This initialization seems necessary, sometimes, for find_executable to work... 
- if hasattr(distutils_compiler, 'initialize'): - distutils_compiler.initialize() - self.scons_compiler = dist2sconscc(distutils_compiler) - self.scons_compiler_path = protect_path(get_tool_path(distutils_compiler)) - except DistutilsPlatformError: - e = get_exception() - if not self._bypass_distutils_cc: - raise e - else: - self.scons_compiler = compiler_type - - def _init_fcompiler(self, compiler_type): - self.fcompiler = new_fcompiler(compiler = compiler_type, - verbose = self.verbose, - dry_run = self.dry_run, - force = self.force) - - if self.fcompiler is not None: - self.fcompiler.customize(self.distribution) - self.scons_fcompiler = dist2sconsfc(self.fcompiler) - self.scons_fcompiler_path = protect_path(get_f77_tool_path(self.fcompiler)) - - def _init_cxxcompiler(self, compiler_type): - cxxcompiler = new_compiler(compiler = compiler_type, - verbose = self.verbose, - dry_run = self.dry_run, - force = self.force) - if cxxcompiler is not None: - cxxcompiler.customize(self.distribution, need_cxx = 1) - cxxcompiler.customize_cmd(self) - self.cxxcompiler = cxxcompiler.cxx_compiler() - try: - get_cxx_tool_path(self.cxxcompiler) - except DistutilsSetupError: - self.cxxcompiler = None - - if self.cxxcompiler: - self.scons_cxxcompiler = dist2sconscxx(self.cxxcompiler) - self.scons_cxxcompiler_path = protect_path(get_cxx_tool_path(self.cxxcompiler)) - - def finalize_options(self): - old_build_ext.finalize_options(self) - - self.sconscripts = [] - self.pre_hooks = [] - self.post_hooks = [] - self.pkg_names = [] - self.pkg_paths = [] - - if self.distribution.has_scons_scripts(): - for i in self.distribution.scons_data: - self.sconscripts.append(i.scons_path) - self.pre_hooks.append(i.pre_hook) - self.post_hooks.append(i.post_hook) - self.pkg_names.append(i.parent_name) - self.pkg_paths.append(i.pkg_path) - # This crap is needed to get the build_clib - # directory - build_clib_cmd = get_cmd("build_clib").get_finalized_command("build_clib") - self.build_clib = 
build_clib_cmd.build_clib - - if not self.cxxcompiler: - self.cxxcompiler = self.compiler - - # To avoid trouble, just don't do anything if no sconscripts are used. - # This is useful when, for example, f2py uses numpy.distutils, because - # f2py does not pass compiler information to the scons command, and the - # compilation setup below can crash in some situations. - if len(self.sconscripts) > 0: - if self.bypass: - self.scons_compiler = self.compiler - self.scons_fcompiler = self.fcompiler - self.scons_cxxcompiler = self.cxxcompiler - else: - # Try to get the same compilers as the ones used by distutils: this is - # non-trivial because distutils and scons have totally different - # conventions on this one (distutils uses PATH from the user's environment, - # whereas scons uses standard locations). The way we do it is: once we - # have the C compiler used, we use a numpy.distutils function to get the - # full path, and add the path to the env['PATH'] variable in the env - # instance (this is done in the numpy.distutils.scons module). - - self._init_ccompiler(self.compiler) - self._init_fcompiler(self.fcompiler) - self._init_cxxcompiler(self.cxxcompiler) - - if self.package_list: - self.package_list = parse_package_list(self.package_list) - - def _call_scons(self, scons_exec, sconscript, pkg_name, pkg_path, bootstrapping): - # XXX: when a scons script is missing, scons only prints warnings, and - # does not return a failure (status is 0). We have to detect this from - # distutils (this cannot work for recursive scons builds...) - - # XXX: passing everything on the command line may cause some trouble where - # there is a size limitation? What is the standard solution in this - # case?
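`_call_scons` passes its entire configuration to scons as `key=value` tokens on the command line (hence the comment above about size limits). Stripped of the compiler-specific branches, the construction reduces to joining option tokens; a simplified sketch (the function name `build_scons_cmdline` is hypothetical):

```python
def build_scons_cmdline(scons_exec, sconscript, **options):
    # scons receives its configuration as key=value tokens appended
    # after the standard "-f <sconscript> -I." arguments; sorting keeps
    # the output deterministic for this sketch.
    cmd = [scons_exec, '-f', sconscript, '-I.']
    cmd.extend('%s=%s' % (k, v) for k, v in sorted(options.items()))
    return ' '.join(cmd)
```

The real method additionally quotes paths via `protect_path` and maps distutils compiler names to scons tool names before appending them.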
- - cmd = [scons_exec, "-f", sconscript, '-I.'] - if self.jobs: - cmd.append(" --jobs=%d" % int(self.jobs)) - if self.inplace: - cmd.append("inplace=1") - cmd.append('scons_tool_path="%s"' % self.scons_tool_path) - cmd.append('src_dir="%s"' % pdirname(sconscript)) - cmd.append('pkg_path="%s"' % pkg_path) - cmd.append('pkg_name="%s"' % pkg_name) - cmd.append('log_level=%s' % self.log_level) - #cmd.append('distutils_libdir=%s' % protect_path(pjoin(self.build_lib, - # pdirname(sconscript)))) - cmd.append('distutils_libdir=%s' % - protect_path(get_distutils_libdir(self, pkg_name))) - cmd.append('distutils_clibdir=%s' % - protect_path(get_distutils_clibdir(self, pkg_name))) - prefix = get_distutils_install_prefix(pkg_name, self.inplace) - cmd.append('distutils_install_prefix=%s' % protect_path(prefix)) - - if not self._bypass_distutils_cc: - cmd.append('cc_opt=%s' % self.scons_compiler) - if self.scons_compiler_path: - cmd.append('cc_opt_path=%s' % self.scons_compiler_path) - else: - cmd.append('cc_opt=%s' % self.scons_compiler) - - cmd.append('debug=%s' % self.debug) - - if self.scons_fcompiler: - cmd.append('f77_opt=%s' % self.scons_fcompiler) - if self.scons_fcompiler_path: - cmd.append('f77_opt_path=%s' % self.scons_fcompiler_path) - - if self.scons_cxxcompiler: - cmd.append('cxx_opt=%s' % self.scons_cxxcompiler) - if self.scons_cxxcompiler_path: - cmd.append('cxx_opt_path=%s' % self.scons_cxxcompiler_path) - - cmd.append('include_bootstrap=%s' % dirl_to_str(get_numpy_include_dirs(sconscript))) - cmd.append('bypass=%s' % self.bypass) - cmd.append('import_env=%s' % self.import_env) - if self.silent: - if int(self.silent) == 2: - cmd.append('-Q') - elif int(self.silent) == 3: - cmd.append('-s') - cmd.append('silent=%d' % int(self.silent)) - cmd.append('bootstrapping=%d' % bootstrapping) - cmdstr = ' '.join(cmd) - if int(self.silent) < 1: - log.info("Executing scons command (pkg is %s): %s ", pkg_name, cmdstr) - else: - log.info("======== Executing scons command for 
pkg %s =========", pkg_name) - st = os.system(cmdstr) - if st: - #print "status is %d" % st - msg = "Error while executing scons command." - msg += " See above for more information.\n" - msg += """\ -If you think it is a problem in numscons, you can also try executing the scons -command with the --log-level option for more detailed output of what numscons is -doing, for example --log-level=0; the lower the level, the more detailed -the output is.""" - raise DistutilsExecError(msg) - - def run(self): - if len(self.sconscripts) < 1: - # nothing to do, just leave it here. - return - - check_numscons(minver=(0, 11, 0)) - - if self.package_list is not None: - id = select_packages(self.pkg_names, self.package_list) - sconscripts = [self.sconscripts[i] for i in id] - pre_hooks = [self.pre_hooks[i] for i in id] - post_hooks = [self.post_hooks[i] for i in id] - pkg_names = [self.pkg_names[i] for i in id] - pkg_paths = [self.pkg_paths[i] for i in id] - else: - sconscripts = self.sconscripts - pre_hooks = self.pre_hooks - post_hooks = self.post_hooks - pkg_names = self.pkg_names - pkg_paths = self.pkg_paths - - if is_bootstrapping(): - bootstrapping = 1 - else: - bootstrapping = 0 - - scons_exec = get_python_exec_invoc() - scons_exec += ' ' + protect_path(pjoin(get_scons_local_path(), 'scons.py')) - - for sconscript, pre_hook, post_hook, pkg_name, pkg_path in zip(sconscripts, - pre_hooks, post_hooks, - pkg_names, pkg_paths): - if pre_hook: - pre_hook() - - if sconscript: - self._call_scons(scons_exec, sconscript, pkg_name, pkg_path, bootstrapping) - - if post_hook: - post_hook(**{'pkg_name': pkg_name, 'scons_cmd' : self}) - diff --git a/pythonPackages/numpy/numpy/distutils/command/sdist.py b/pythonPackages/numpy/numpy/distutils/command/sdist.py deleted file mode 100755 index 62fce95744..0000000000 --- a/pythonPackages/numpy/numpy/distutils/command/sdist.py +++ /dev/null @@ -1,27 +0,0 @@ -import sys -if 'setuptools' in sys.modules: - from setuptools.command.sdist import sdist
as old_sdist -else: - from distutils.command.sdist import sdist as old_sdist - -from numpy.distutils.misc_util import get_data_files - -class sdist(old_sdist): - - def add_defaults (self): - old_sdist.add_defaults(self) - - dist = self.distribution - - if dist.has_data_files(): - for data in dist.data_files: - self.filelist.extend(get_data_files(data)) - - if dist.has_headers(): - headers = [] - for h in dist.headers: - if isinstance(h,str): headers.append(h) - else: headers.append(h[1]) - self.filelist.extend(headers) - - return diff --git a/pythonPackages/numpy/numpy/distutils/compat.py b/pythonPackages/numpy/numpy/distutils/compat.py deleted file mode 100755 index 1c37dc2b9a..0000000000 --- a/pythonPackages/numpy/numpy/distutils/compat.py +++ /dev/null @@ -1,7 +0,0 @@ -"""Small modules to cope with python 2 vs 3 incompatibilities inside -numpy.distutils -""" -import sys - -def get_exception(): - return sys.exc_info()[1] diff --git a/pythonPackages/numpy/numpy/distutils/conv_template.py b/pythonPackages/numpy/numpy/distutils/conv_template.py deleted file mode 100755 index 368cdd4570..0000000000 --- a/pythonPackages/numpy/numpy/distutils/conv_template.py +++ /dev/null @@ -1,335 +0,0 @@ -#!/usr/bin/python -""" -takes templated file .xxx.src and produces .xxx file where .xxx is -.i or .c or .h, using the following template rules - -/**begin repeat -- on a line by itself marks the start of a repeated code - segment -/**end repeat**/ -- on a line by itself marks it's end - -After the /**begin repeat and before the */, all the named templates are placed -these should all have the same number of replacements - -Repeat blocks can be nested, with each nested block labeled with its depth, -i.e. -/**begin repeat1 - *.... - */ -/**end repeat1**/ - -When using nested loops, you can optionally exlude particular -combinations of the variables using (inside the comment portion of the inner loop): - - :exclude: var1=value1, var2=value2, ... 
- -This will exlude the pattern where var1 is value1 and var2 is value2 when -the result is being generated. - - -In the main body each replace will use one entry from the list of named replacements - - Note that all #..# forms in a block must have the same number of - comma-separated entries. - -Example: - - An input file containing - - /**begin repeat - * #a = 1,2,3# - * #b = 1,2,3# - */ - - /**begin repeat1 - * #c = ted, jim# - */ - @a@, @b@, @c@ - /**end repeat1**/ - - /**end repeat**/ - - produces - - line 1 "template.c.src" - - /* - ********************************************************************* - ** This file was autogenerated from a template DO NOT EDIT!!** - ** Changes should be made to the original source (.src) file ** - ********************************************************************* - */ - - #line 9 - 1, 1, ted - - #line 9 - 1, 1, jim - - #line 9 - 2, 2, ted - - #line 9 - 2, 2, jim - - #line 9 - 3, 3, ted - - #line 9 - 3, 3, jim - -""" - -__all__ = ['process_str', 'process_file'] - -import os -import sys -import re - -from numpy.distutils.compat import get_exception - -# names for replacement that are already global. -global_names = {} - -# header placed at the front of head processed file -header =\ -""" -/* - ***************************************************************************** - ** This file was autogenerated from a template DO NOT EDIT!!!! ** - ** Changes should be made to the original source (.src) file ** - ***************************************************************************** - */ - -""" -# Parse string for repeat loops -def parse_structure(astr, level): - """ - The returned line number is from the beginning of the string, starting - at zero. Returns an empty list if no loops found. 
- - """ - if level == 0 : - loopbeg = "/**begin repeat" - loopend = "/**end repeat**/" - else : - loopbeg = "/**begin repeat%d" % level - loopend = "/**end repeat%d**/" % level - - ind = 0 - line = 0 - spanlist = [] - while 1: - start = astr.find(loopbeg, ind) - if start == -1: - break - start2 = astr.find("*/",start) - start2 = astr.find("\n",start2) - fini1 = astr.find(loopend,start2) - fini2 = astr.find("\n",fini1) - line += astr.count("\n", ind, start2+1) - spanlist.append((start, start2+1, fini1, fini2+1, line)) - line += astr.count("\n", start2+1, fini2) - ind = fini2 - spanlist.sort() - return spanlist - - -def paren_repl(obj): - torep = obj.group(1) - numrep = obj.group(2) - return ','.join([torep]*int(numrep)) - -parenrep = re.compile(r"[(]([^)]*)[)]\*(\d+)") -plainrep = re.compile(r"([^*]+)\*(\d+)") -def parse_values(astr): - # replaces all occurrences of '(a,b,c)*4' in astr - # with 'a,b,c,a,b,c,a,b,c,a,b,c'. Empty braces generate - # empty values, i.e., ()*4 yields ',,,'. The result is - # split at ',' and a list of values returned. - astr = parenrep.sub(paren_repl, astr) - # replaces occurences of xxx*3 with xxx, xxx, xxx - astr = ','.join([plainrep.sub(paren_repl,x.strip()) - for x in astr.split(',')]) - return astr.split(',') - - -stripast = re.compile(r"\n\s*\*?") -named_re = re.compile(r"#\s*(\w*)\s*=([^#]*)#") -exclude_vars_re = re.compile(r"(\w*)=(\w*)") -exclude_re = re.compile(":exclude:") -def parse_loop_header(loophead) : - """Find all named replacements in the header - - Returns a list of dictionaries, one for each loop iteration, - where each key is a name to be substituted and the corresponding - value is the replacement string. - - Also return a list of exclusions. The exclusions are dictionaries - of key value pairs. There can be more than one exclusion. - [{'var1':'value1', 'var2', 'value2'[,...]}, ...] - - """ - # Strip out '\n' and leading '*', if any, in continuation lines. 
- # This should not effect code previous to this change as - # continuation lines were not allowed. - loophead = stripast.sub("", loophead) - # parse out the names and lists of values - names = [] - reps = named_re.findall(loophead) - nsub = None - for rep in reps: - name = rep[0] - vals = parse_values(rep[1]) - size = len(vals) - if nsub is None : - nsub = size - elif nsub != size : - msg = "Mismatch in number of values:\n%s = %s" % (name, vals) - raise ValueError(msg) - names.append((name,vals)) - - - # Find any exclude variables - excludes = [] - - for obj in exclude_re.finditer(loophead): - span = obj.span() - # find next newline - endline = loophead.find('\n', span[1]) - substr = loophead[span[1]:endline] - ex_names = exclude_vars_re.findall(substr) - excludes.append(dict(ex_names)) - - # generate list of dictionaries, one for each template iteration - dlist = [] - if nsub is None : - raise ValueError("No substitution variables found") - for i in range(nsub) : - tmp = {} - for name,vals in names : - tmp[name] = vals[i] - dlist.append(tmp) - return dlist - -replace_re = re.compile(r"@([\w]+)@") -def parse_string(astr, env, level, line) : - lineno = "#line %d\n" % line - - # local function for string replacement, uses env - def replace(match): - name = match.group(1) - try : - val = env[name] - except KeyError: - msg = 'line %d: no definition of key "%s"'%(line, name) - raise ValueError(msg) - return val - - code = [lineno] - struct = parse_structure(astr, level) - if struct : - # recurse over inner loops - oldend = 0 - newlevel = level + 1 - for sub in struct: - pref = astr[oldend:sub[0]] - head = astr[sub[0]:sub[1]] - text = astr[sub[1]:sub[2]] - oldend = sub[3] - newline = line + sub[4] - code.append(replace_re.sub(replace, pref)) - try : - envlist = parse_loop_header(head) - except ValueError: - e = get_exception() - msg = "line %d: %s" % (newline, e) - raise ValueError(msg) - for newenv in envlist : - newenv.update(env) - newcode = parse_string(text, 
newenv, newlevel, newline) - code.extend(newcode) - suff = astr[oldend:] - code.append(replace_re.sub(replace, suff)) - else : - # replace keys - code.append(replace_re.sub(replace, astr)) - code.append('\n') - return ''.join(code) - -def process_str(astr): - code = [header] - code.extend(parse_string(astr, global_names, 0, 1)) - return ''.join(code) - - -include_src_re = re.compile(r"(\n|\A)#include\s*['\"]" - r"(?P[\w\d./\\]+[.]src)['\"]", re.I) - -def resolve_includes(source): - d = os.path.dirname(source) - fid = open(source) - lines = [] - for line in fid.readlines(): - m = include_src_re.match(line) - if m: - fn = m.group('name') - if not os.path.isabs(fn): - fn = os.path.join(d,fn) - if os.path.isfile(fn): - print ('Including file',fn) - lines.extend(resolve_includes(fn)) - else: - lines.append(line) - else: - lines.append(line) - fid.close() - return lines - -def process_file(source): - lines = resolve_includes(source) - sourcefile = os.path.normcase(source).replace("\\","\\\\") - try: - code = process_str(''.join(lines)) - except ValueError: - e = get_exception() - raise ValueError('In "%s" loop at %s' % (sourcefile, e)) - return '#line 1 "%s"\n%s' % (sourcefile, code) - - -def unique_key(adict): - # this obtains a unique key given a dictionary - # currently it works by appending together n of the letters of the - # current keys and increasing n until a unique key is found - # -- not particularly quick - allkeys = adict.keys() - done = False - n = 1 - while not done: - newkey = "".join([x[:n] for x in allkeys]) - if newkey in allkeys: - n += 1 - else: - done = True - return newkey - - -if __name__ == "__main__": - - try: - file = sys.argv[1] - except IndexError: - fid = sys.stdin - outfile = sys.stdout - else: - fid = open(file,'r') - (base, ext) = os.path.splitext(file) - newname = base - outfile = open(newname,'w') - - allstr = fid.read() - try: - writestr = process_str(allstr) - except ValueError: - e = get_exception() - raise ValueError("In %s loop at 
%s" % (file, e)) - outfile.write(writestr) diff --git a/pythonPackages/numpy/numpy/distutils/core.py b/pythonPackages/numpy/numpy/distutils/core.py deleted file mode 100755 index e617589a24..0000000000 --- a/pythonPackages/numpy/numpy/distutils/core.py +++ /dev/null @@ -1,227 +0,0 @@ - -import sys -from distutils.core import * - -if 'setuptools' in sys.modules: - have_setuptools = True - from setuptools import setup as old_setup - # easy_install imports math, it may be picked up from cwd - from setuptools.command import easy_install - try: - # very old versions of setuptools don't have this - from setuptools.command import bdist_egg - except ImportError: - have_setuptools = False -else: - from distutils.core import setup as old_setup - have_setuptools = False - -import warnings -import distutils.core -import distutils.dist - -from numpy.distutils.extension import Extension -from numpy.distutils.numpy_distribution import NumpyDistribution -from numpy.distutils.command import config, config_compiler, \ - build, build_py, build_ext, build_clib, build_src, build_scripts, \ - sdist, install_data, install_headers, install, bdist_rpm, scons, \ - install_clib -from numpy.distutils.misc_util import get_data_files, is_sequence, is_string - -numpy_cmdclass = {'build': build.build, - 'build_src': build_src.build_src, - 'build_scripts': build_scripts.build_scripts, - 'config_cc': config_compiler.config_cc, - 'config_fc': config_compiler.config_fc, - 'config': config.config, - 'build_ext': build_ext.build_ext, - 'build_py': build_py.build_py, - 'build_clib': build_clib.build_clib, - 'sdist': sdist.sdist, - 'scons': scons.scons, - 'install_data': install_data.install_data, - 'install_headers': install_headers.install_headers, - 'install_clib': install_clib.install_clib, - 'install': install.install, - 'bdist_rpm': bdist_rpm.bdist_rpm, - } -if have_setuptools: - # Use our own versions of develop and egg_info to ensure that build_src is - # handled appropriately. 
- from numpy.distutils.command import develop, egg_info - numpy_cmdclass['bdist_egg'] = bdist_egg.bdist_egg - numpy_cmdclass['develop'] = develop.develop - numpy_cmdclass['easy_install'] = easy_install.easy_install - numpy_cmdclass['egg_info'] = egg_info.egg_info - -def _dict_append(d, **kws): - for k,v in kws.items(): - if k not in d: - d[k] = v - continue - dv = d[k] - if isinstance(dv, tuple): - d[k] = dv + tuple(v) - elif isinstance(dv, list): - d[k] = dv + list(v) - elif isinstance(dv, dict): - _dict_append(dv, **v) - elif is_string(dv): - d[k] = dv + v - else: - raise TypeError(repr(type(dv))) - -def _command_line_ok(_cache=[]): - """ Return True if command line does not contain any - help or display requests. - """ - if _cache: - return _cache[0] - ok = True - display_opts = ['--'+n for n in Distribution.display_option_names] - for o in Distribution.display_options: - if o[1]: - display_opts.append('-'+o[1]) - for arg in sys.argv: - if arg.startswith('--help') or arg=='-h' or arg in display_opts: - ok = False - break - _cache.append(ok) - return ok - -def get_distribution(always=False): - dist = distutils.core._setup_distribution - # XXX Hack to get numpy installable with easy_install. - # The problem is easy_install runs it's own setup(), which - # sets up distutils.core._setup_distribution. However, - # when our setup() runs, that gets overwritten and lost. 
- # We can't use isinstance, as the DistributionWithoutHelpCommands - # class is local to a function in setuptools.command.easy_install - if dist is not None and \ - 'DistributionWithoutHelpCommands' in repr(dist): - #raise NotImplementedError("setuptools not supported yet for numpy.scons branch") - dist = None - if always and dist is None: - dist = NumpyDistribution() - return dist - -def _exit_interactive_session(_cache=[]): - if _cache: - return # been here - _cache.append(1) - print('-'*72) - raw_input('Press ENTER to close the interactive session..') - print('='*72) - -def setup(**attr): - - if len(sys.argv)<=1 and not attr.get('script_args',[]): - from interactive import interactive_sys_argv - import atexit - atexit.register(_exit_interactive_session) - sys.argv[:] = interactive_sys_argv(sys.argv) - if len(sys.argv)>1: - return setup(**attr) - - cmdclass = numpy_cmdclass.copy() - - new_attr = attr.copy() - if 'cmdclass' in new_attr: - cmdclass.update(new_attr['cmdclass']) - new_attr['cmdclass'] = cmdclass - - if 'configuration' in new_attr: - # To avoid calling configuration if there are any errors - # or help request in command in the line. 
- configuration = new_attr.pop('configuration') - - old_dist = distutils.core._setup_distribution - old_stop = distutils.core._setup_stop_after - distutils.core._setup_distribution = None - distutils.core._setup_stop_after = "commandline" - try: - dist = setup(**new_attr) - finally: - distutils.core._setup_distribution = old_dist - distutils.core._setup_stop_after = old_stop - if dist.help or not _command_line_ok(): - # probably displayed help, skip running any commands - return dist - - # create setup dictionary and append to new_attr - config = configuration() - if hasattr(config,'todict'): - config = config.todict() - _dict_append(new_attr, **config) - - # Move extension source libraries to libraries - libraries = [] - for ext in new_attr.get('ext_modules',[]): - new_libraries = [] - for item in ext.libraries: - if is_sequence(item): - lib_name, build_info = item - _check_append_ext_library(libraries, lib_name, build_info) - new_libraries.append(lib_name) - elif is_string(item): - new_libraries.append(item) - else: - raise TypeError("invalid description of extension module " - "library %r" % (item,)) - ext.libraries = new_libraries - if libraries: - if 'libraries' not in new_attr: - new_attr['libraries'] = [] - for item in libraries: - _check_append_library(new_attr['libraries'], item) - - # sources in ext_modules or libraries may contain header files - if ('ext_modules' in new_attr or 'libraries' in new_attr) \ - and 'headers' not in new_attr: - new_attr['headers'] = [] - - # Use our custom NumpyDistribution class instead of distutils' one - new_attr['distclass'] = NumpyDistribution - - return old_setup(**new_attr) - -def _check_append_library(libraries, item): - for libitem in libraries: - if is_sequence(libitem): - if is_sequence(item): - if item[0]==libitem[0]: - if item[1] is libitem[1]: - return - warnings.warn("[0] libraries list contains %r with" - " different build_info" % (item[0],)) - break - else: - if item==libitem[0]: - warnings.warn("[1] libraries 
list contains %r with" - " no build_info" % (item[0],)) - break - else: - if is_sequence(item): - if item[0]==libitem: - warnings.warn("[2] libraries list contains %r with" - " no build_info" % (item[0],)) - break - else: - if item==libitem: - return - libraries.append(item) - -def _check_append_ext_library(libraries, lib_name, build_info): - for item in libraries: - if is_sequence(item): - if item[0]==lib_name: - if item[1] is build_info: - return - warnings.warn("[3] libraries list contains %r with" - " different build_info" % (lib_name,)) - break - elif item==lib_name: - warnings.warn("[4] libraries list contains %r with" - " no build_info" % (lib_name,)) - break - libraries.append((lib_name,build_info)) diff --git a/pythonPackages/numpy/numpy/distutils/cpuinfo.py b/pythonPackages/numpy/numpy/distutils/cpuinfo.py deleted file mode 100755 index a9b2af1080..0000000000 --- a/pythonPackages/numpy/numpy/distutils/cpuinfo.py +++ /dev/null @@ -1,685 +0,0 @@ -#!/usr/bin/env python -""" -cpuinfo - -Copyright 2002 Pearu Peterson all rights reserved, -Pearu Peterson -Permission to use, modify, and distribute this software is given under the -terms of the NumPy (BSD style) license. See LICENSE.txt that came with -this distribution for specifics. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. 
-Pearu Peterson -""" - -__all__ = ['cpu'] - -import sys, re, types -import os -if sys.version_info[0] < 3: - from commands import getstatusoutput -else: - from subprocess import getstatusoutput -import warnings -import platform - -from numpy.distutils.compat import get_exception - -def getoutput(cmd, successful_status=(0,), stacklevel=1): - try: - status, output = getstatusoutput(cmd) - except EnvironmentError: - e = get_exception() - warnings.warn(str(e), UserWarning, stacklevel=stacklevel) - return False, output - if os.WIFEXITED(status) and os.WEXITSTATUS(status) in successful_status: - return True, output - return False, output - -def command_info(successful_status=(0,), stacklevel=1, **kw): - info = {} - for key in kw: - ok, output = getoutput(kw[key], successful_status=successful_status, - stacklevel=stacklevel+1) - if ok: - info[key] = output.strip() - return info - -def command_by_line(cmd, successful_status=(0,), stacklevel=1): - ok, output = getoutput(cmd, successful_status=successful_status, - stacklevel=stacklevel+1) - if not ok: - return - for line in output.splitlines(): - yield line.strip() - -def key_value_from_command(cmd, sep, successful_status=(0,), - stacklevel=1): - d = {} - for line in command_by_line(cmd, successful_status=successful_status, - stacklevel=stacklevel+1): - l = [s.strip() for s in line.split(sep, 1)] - if len(l) == 2: - d[l[0]] = l[1] - return d - -class CPUInfoBase(object): - """Holds CPU information and provides methods for requiring - the availability of various CPU features. 
- """ - - def _try_call(self,func): - try: - return func() - except: - pass - - def __getattr__(self,name): - if not name.startswith('_'): - if hasattr(self,'_'+name): - attr = getattr(self,'_'+name) - if type(attr) is types.MethodType: - return lambda func=self._try_call,attr=attr : func(attr) - else: - return lambda : None - raise AttributeError(name) - - def _getNCPUs(self): - return 1 - - def __get_nbits(self): - abits = platform.architecture()[0] - nbits = re.compile('(\d+)bit').search(abits).group(1) - return nbits - - def _is_32bit(self): - return self.__get_nbits() == '32' - - def _is_64bit(self): - return self.__get_nbits() == '64' - -class LinuxCPUInfo(CPUInfoBase): - - info = None - - def __init__(self): - if self.info is not None: - return - info = [ {} ] - ok, output = getoutput('uname -m') - if ok: - info[0]['uname_m'] = output.strip() - try: - fo = open('/proc/cpuinfo') - except EnvironmentError: - e = get_exception() - warnings.warn(str(e), UserWarning) - else: - for line in fo: - name_value = [s.strip() for s in line.split(':', 1)] - if len(name_value) != 2: - continue - name, value = name_value - if not info or name in info[-1]: # next processor - info.append({}) - info[-1][name] = value - fo.close() - self.__class__.info = info - - def _not_impl(self): pass - - # Athlon - - def _is_AMD(self): - return self.info[0]['vendor_id']=='AuthenticAMD' - - def _is_AthlonK6_2(self): - return self._is_AMD() and self.info[0]['model'] == '2' - - def _is_AthlonK6_3(self): - return self._is_AMD() and self.info[0]['model'] == '3' - - def _is_AthlonK6(self): - return re.match(r'.*?AMD-K6',self.info[0]['model name']) is not None - - def _is_AthlonK7(self): - return re.match(r'.*?AMD-K7',self.info[0]['model name']) is not None - - def _is_AthlonMP(self): - return re.match(r'.*?Athlon\(tm\) MP\b', - self.info[0]['model name']) is not None - - def _is_AMD64(self): - return self.is_AMD() and self.info[0]['family'] == '15' - - def _is_Athlon64(self): - return 
re.match(r'.*?Athlon\(tm\) 64\b', - self.info[0]['model name']) is not None - - def _is_AthlonHX(self): - return re.match(r'.*?Athlon HX\b', - self.info[0]['model name']) is not None - - def _is_Opteron(self): - return re.match(r'.*?Opteron\b', - self.info[0]['model name']) is not None - - def _is_Hammer(self): - return re.match(r'.*?Hammer\b', - self.info[0]['model name']) is not None - - # Alpha - - def _is_Alpha(self): - return self.info[0]['cpu']=='Alpha' - - def _is_EV4(self): - return self.is_Alpha() and self.info[0]['cpu model'] == 'EV4' - - def _is_EV5(self): - return self.is_Alpha() and self.info[0]['cpu model'] == 'EV5' - - def _is_EV56(self): - return self.is_Alpha() and self.info[0]['cpu model'] == 'EV56' - - def _is_PCA56(self): - return self.is_Alpha() and self.info[0]['cpu model'] == 'PCA56' - - # Intel - - #XXX - _is_i386 = _not_impl - - def _is_Intel(self): - return self.info[0]['vendor_id']=='GenuineIntel' - - def _is_i486(self): - return self.info[0]['cpu']=='i486' - - def _is_i586(self): - return self.is_Intel() and self.info[0]['cpu family'] == '5' - - def _is_i686(self): - return self.is_Intel() and self.info[0]['cpu family'] == '6' - - def _is_Celeron(self): - return re.match(r'.*?Celeron', - self.info[0]['model name']) is not None - - def _is_Pentium(self): - return re.match(r'.*?Pentium', - self.info[0]['model name']) is not None - - def _is_PentiumII(self): - return re.match(r'.*?Pentium.*?II\b', - self.info[0]['model name']) is not None - - def _is_PentiumPro(self): - return re.match(r'.*?PentiumPro\b', - self.info[0]['model name']) is not None - - def _is_PentiumMMX(self): - return re.match(r'.*?Pentium.*?MMX\b', - self.info[0]['model name']) is not None - - def _is_PentiumIII(self): - return re.match(r'.*?Pentium.*?III\b', - self.info[0]['model name']) is not None - - def _is_PentiumIV(self): - return re.match(r'.*?Pentium.*?(IV|4)\b', - self.info[0]['model name']) is not None - - def _is_PentiumM(self): - return 
re.match(r'.*?Pentium.*?M\b', - self.info[0]['model name']) is not None - - def _is_Prescott(self): - return self.is_PentiumIV() and self.has_sse3() - - def _is_Nocona(self): - return self.is_Intel() \ - and (self.info[0]['cpu family'] == '6' \ - or self.info[0]['cpu family'] == '15' ) \ - and (self.has_sse3() and not self.has_ssse3())\ - and re.match(r'.*?\blm\b',self.info[0]['flags']) is not None - - def _is_Core2(self): - return self.is_64bit() and self.is_Intel() and \ - re.match(r'.*?Core\(TM\)2\b', \ - self.info[0]['model name']) is not None - - def _is_Itanium(self): - return re.match(r'.*?Itanium\b', - self.info[0]['family']) is not None - - def _is_XEON(self): - return re.match(r'.*?XEON\b', - self.info[0]['model name'],re.IGNORECASE) is not None - - _is_Xeon = _is_XEON - - # Varia - - def _is_singleCPU(self): - return len(self.info) == 1 - - def _getNCPUs(self): - return len(self.info) - - def _has_fdiv_bug(self): - return self.info[0]['fdiv_bug']=='yes' - - def _has_f00f_bug(self): - return self.info[0]['f00f_bug']=='yes' - - def _has_mmx(self): - return re.match(r'.*?\bmmx\b',self.info[0]['flags']) is not None - - def _has_sse(self): - return re.match(r'.*?\bsse\b',self.info[0]['flags']) is not None - - def _has_sse2(self): - return re.match(r'.*?\bsse2\b',self.info[0]['flags']) is not None - - def _has_sse3(self): - return re.match(r'.*?\bpni\b',self.info[0]['flags']) is not None - - def _has_ssse3(self): - return re.match(r'.*?\bssse3\b',self.info[0]['flags']) is not None - - def _has_3dnow(self): - return re.match(r'.*?\b3dnow\b',self.info[0]['flags']) is not None - - def _has_3dnowext(self): - return re.match(r'.*?\b3dnowext\b',self.info[0]['flags']) is not None - -class IRIXCPUInfo(CPUInfoBase): - info = None - - def __init__(self): - if self.info is not None: - return - info = key_value_from_command('sysconf', sep=' ', - successful_status=(0,1)) - self.__class__.info = info - - def _not_impl(self): pass - - def _is_singleCPU(self): - return 
self.info.get('NUM_PROCESSORS') == '1' - - def _getNCPUs(self): - return int(self.info.get('NUM_PROCESSORS', 1)) - - def __cputype(self,n): - return self.info.get('PROCESSORS').split()[0].lower() == 'r%s' % (n) - def _is_r2000(self): return self.__cputype(2000) - def _is_r3000(self): return self.__cputype(3000) - def _is_r3900(self): return self.__cputype(3900) - def _is_r4000(self): return self.__cputype(4000) - def _is_r4100(self): return self.__cputype(4100) - def _is_r4300(self): return self.__cputype(4300) - def _is_r4400(self): return self.__cputype(4400) - def _is_r4600(self): return self.__cputype(4600) - def _is_r4650(self): return self.__cputype(4650) - def _is_r5000(self): return self.__cputype(5000) - def _is_r6000(self): return self.__cputype(6000) - def _is_r8000(self): return self.__cputype(8000) - def _is_r10000(self): return self.__cputype(10000) - def _is_r12000(self): return self.__cputype(12000) - def _is_rorion(self): return self.__cputype('orion') - - def get_ip(self): - try: return self.info.get('MACHINE') - except: pass - def __machine(self,n): - return self.info.get('MACHINE').lower() == 'ip%s' % (n) - def _is_IP19(self): return self.__machine(19) - def _is_IP20(self): return self.__machine(20) - def _is_IP21(self): return self.__machine(21) - def _is_IP22(self): return self.__machine(22) - def _is_IP22_4k(self): return self.__machine(22) and self._is_r4000() - def _is_IP22_5k(self): return self.__machine(22) and self._is_r5000() - def _is_IP24(self): return self.__machine(24) - def _is_IP25(self): return self.__machine(25) - def _is_IP26(self): return self.__machine(26) - def _is_IP27(self): return self.__machine(27) - def _is_IP28(self): return self.__machine(28) - def _is_IP30(self): return self.__machine(30) - def _is_IP32(self): return self.__machine(32) - def _is_IP32_5k(self): return self.__machine(32) and self._is_r5000() - def _is_IP32_10k(self): return self.__machine(32) and self._is_r10000() - - -class DarwinCPUInfo(CPUInfoBase): 
- info = None - - def __init__(self): - if self.info is not None: - return - info = command_info(arch='arch', - machine='machine') - info['sysctl_hw'] = key_value_from_command('sysctl hw', sep='=') - self.__class__.info = info - - def _not_impl(self): pass - - def _getNCPUs(self): - return int(self.info['sysctl_hw'].get('hw.ncpu', 1)) - - def _is_Power_Macintosh(self): - return self.info['sysctl_hw']['hw.machine']=='Power Macintosh' - - def _is_i386(self): - return self.info['arch']=='i386' - def _is_ppc(self): - return self.info['arch']=='ppc' - - def __machine(self,n): - return self.info['machine'] == 'ppc%s'%n - def _is_ppc601(self): return self.__machine(601) - def _is_ppc602(self): return self.__machine(602) - def _is_ppc603(self): return self.__machine(603) - def _is_ppc603e(self): return self.__machine('603e') - def _is_ppc604(self): return self.__machine(604) - def _is_ppc604e(self): return self.__machine('604e') - def _is_ppc620(self): return self.__machine(620) - def _is_ppc630(self): return self.__machine(630) - def _is_ppc740(self): return self.__machine(740) - def _is_ppc7400(self): return self.__machine(7400) - def _is_ppc7450(self): return self.__machine(7450) - def _is_ppc750(self): return self.__machine(750) - def _is_ppc403(self): return self.__machine(403) - def _is_ppc505(self): return self.__machine(505) - def _is_ppc801(self): return self.__machine(801) - def _is_ppc821(self): return self.__machine(821) - def _is_ppc823(self): return self.__machine(823) - def _is_ppc860(self): return self.__machine(860) - - -class SunOSCPUInfo(CPUInfoBase): - - info = None - - def __init__(self): - if self.info is not None: - return - info = command_info(arch='arch', - mach='mach', - uname_i='uname_i', - isainfo_b='isainfo -b', - isainfo_n='isainfo -n', - ) - info['uname_X'] = key_value_from_command('uname -X', sep='=') - for line in command_by_line('psrinfo -v 0'): - m = re.match(r'\s*The (?P

<p>[\w\d]+) processor operates at', line) - if m: - info['processor'] = m.group('p') - break - self.__class__.info = info - - def _not_impl(self): pass - - def _is_i386(self): - return self.info['isainfo_n']=='i386' - def _is_sparc(self): - return self.info['isainfo_n']=='sparc' - def _is_sparcv9(self): - return self.info['isainfo_n']=='sparcv9' - - def _getNCPUs(self): - return int(self.info['uname_X'].get('NumCPU', 1)) - - def _is_sun4(self): - return self.info['arch']=='sun4' - - def _is_SUNW(self): - return re.match(r'SUNW',self.info['uname_i']) is not None - def _is_sparcstation5(self): - return re.match(r'.*SPARCstation-5',self.info['uname_i']) is not None - def _is_ultra1(self): - return re.match(r'.*Ultra-1',self.info['uname_i']) is not None - def _is_ultra250(self): - return re.match(r'.*Ultra-250',self.info['uname_i']) is not None - def _is_ultra2(self): - return re.match(r'.*Ultra-2',self.info['uname_i']) is not None - def _is_ultra30(self): - return re.match(r'.*Ultra-30',self.info['uname_i']) is not None - def _is_ultra4(self): - return re.match(r'.*Ultra-4',self.info['uname_i']) is not None - def _is_ultra5_10(self): - return re.match(r'.*Ultra-5_10',self.info['uname_i']) is not None - def _is_ultra5(self): - return re.match(r'.*Ultra-5',self.info['uname_i']) is not None - def _is_ultra60(self): - return re.match(r'.*Ultra-60',self.info['uname_i']) is not None - def _is_ultra80(self): - return re.match(r'.*Ultra-80',self.info['uname_i']) is not None - def _is_ultraenterprice(self): - return re.match(r'.*Ultra-Enterprise',self.info['uname_i']) is not None - def _is_ultraenterprice10k(self): - return re.match(r'.*Ultra-Enterprise-10000',self.info['uname_i']) is not None - def _is_sunfire(self): - return re.match(r'.*Sun-Fire',self.info['uname_i']) is not None - def _is_ultra(self): - return re.match(r'.*Ultra',self.info['uname_i']) is not None - - def _is_cpusparcv7(self): - return self.info['processor']=='sparcv7' - def _is_cpusparcv8(self): - return 
self.info['processor']=='sparcv8' - def _is_cpusparcv9(self): - return self.info['processor']=='sparcv9' - -class Win32CPUInfo(CPUInfoBase): - - info = None - pkey = r"HARDWARE\DESCRIPTION\System\CentralProcessor" - # XXX: what does the value of - # HKEY_LOCAL_MACHINE\HARDWARE\DESCRIPTION\System\CentralProcessor\0 - # mean? - - def __init__(self): - if self.info is not None: - return - info = [] - try: - #XXX: Bad style to use so long `try:...except:...`. Fix it! - import _winreg - prgx = re.compile(r"family\s+(?P<FML>\d+)\s+model\s+(?P<MDL>\d+)"\ - "\s+stepping\s+(?P<STP>\d+)",re.IGNORECASE) - chnd=_winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, self.pkey) - pnum=0 - while 1: - try: - proc=_winreg.EnumKey(chnd,pnum) - except _winreg.error: - break - else: - pnum+=1 - info.append({"Processor":proc}) - phnd=_winreg.OpenKey(chnd,proc) - pidx=0 - while True: - try: - name,value,vtpe=_winreg.EnumValue(phnd,pidx) - except _winreg.error: - break - else: - pidx=pidx+1 - info[-1][name]=value - if name=="Identifier": - srch=prgx.search(value) - if srch: - info[-1]["Family"]=int(srch.group("FML")) - info[-1]["Model"]=int(srch.group("MDL")) - info[-1]["Stepping"]=int(srch.group("STP")) - except: - print(sys.exc_value,'(ignoring)') - self.__class__.info = info - - def _not_impl(self): pass - - # Athlon - - def _is_AMD(self): - return self.info[0]['VendorIdentifier']=='AuthenticAMD' - - def _is_Am486(self): - return self.is_AMD() and self.info[0]['Family']==4 - - def _is_Am5x86(self): - return self.is_AMD() and self.info[0]['Family']==4 - - def _is_AMDK5(self): - return self.is_AMD() and self.info[0]['Family']==5 \ - and self.info[0]['Model'] in [0,1,2,3] - - def _is_AMDK6(self): - return self.is_AMD() and self.info[0]['Family']==5 \ - and self.info[0]['Model'] in [6,7] - - def _is_AMDK6_2(self): - return self.is_AMD() and self.info[0]['Family']==5 \ - and self.info[0]['Model']==8 - - def _is_AMDK6_3(self): - return self.is_AMD() and self.info[0]['Family']==5 \ - and self.info[0]['Model']==9 - - 
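As a side note on the deleted `Win32CPUInfo.__init__` above: it parses the registry's `Identifier` string with a regex whose named groups (`FML`, `MDL`, `STP`) feed the later `srch.group("FML")` calls. A minimal, platform-independent sketch of that parse, using an illustrative sample `Identifier` value rather than a real registry read:

```python
import re

# Same pattern shape the deleted class compiles, with the named groups
# that srch.group("FML") / ("MDL") / ("STP") expect.
prgx = re.compile(r"family\s+(?P<FML>\d+)\s+model\s+(?P<MDL>\d+)"
                  r"\s+stepping\s+(?P<STP>\d+)", re.IGNORECASE)

identifier = "x86 Family 6 Model 15 Stepping 11"  # illustrative registry value
m = prgx.search(identifier)
family = int(m.group("FML"))    # 6
model = int(m.group("MDL"))     # 15
stepping = int(m.group("STP"))  # 11
```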
def _is_AMDK7(self): - return self.is_AMD() and self.info[0]['Family'] == 6 - - # To reliably distinguish between the different types of AMD64 chips - # (Athlon64, Operton, Athlon64 X2, Semperon, Turion 64, etc.) would - # require looking at the 'brand' from cpuid - - def _is_AMD64(self): - return self.is_AMD() and self.info[0]['Family'] == 15 - - # Intel - - def _is_Intel(self): - return self.info[0]['VendorIdentifier']=='GenuineIntel' - - def _is_i386(self): - return self.info[0]['Family']==3 - - def _is_i486(self): - return self.info[0]['Family']==4 - - def _is_i586(self): - return self.is_Intel() and self.info[0]['Family']==5 - - def _is_i686(self): - return self.is_Intel() and self.info[0]['Family']==6 - - def _is_Pentium(self): - return self.is_Intel() and self.info[0]['Family']==5 - - def _is_PentiumMMX(self): - return self.is_Intel() and self.info[0]['Family']==5 \ - and self.info[0]['Model']==4 - - def _is_PentiumPro(self): - return self.is_Intel() and self.info[0]['Family']==6 \ - and self.info[0]['Model']==1 - - def _is_PentiumII(self): - return self.is_Intel() and self.info[0]['Family']==6 \ - and self.info[0]['Model'] in [3,5,6] - - def _is_PentiumIII(self): - return self.is_Intel() and self.info[0]['Family']==6 \ - and self.info[0]['Model'] in [7,8,9,10,11] - - def _is_PentiumIV(self): - return self.is_Intel() and self.info[0]['Family']==15 - - def _is_PentiumM(self): - return self.is_Intel() and self.info[0]['Family'] == 6 \ - and self.info[0]['Model'] in [9, 13, 14] - - def _is_Core2(self): - return self.is_Intel() and self.info[0]['Family'] == 6 \ - and self.info[0]['Model'] in [15, 16, 17] - - # Varia - - def _is_singleCPU(self): - return len(self.info) == 1 - - def _getNCPUs(self): - return len(self.info) - - def _has_mmx(self): - if self.is_Intel(): - return (self.info[0]['Family']==5 and self.info[0]['Model']==4) \ - or (self.info[0]['Family'] in [6,15]) - elif self.is_AMD(): - return self.info[0]['Family'] in [5,6,15] - else: - return False - 
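All of these `_is_XXX` / `_has_XXX` methods become public probes through `CPUInfoBase.__getattr__`, which maps `cpu.is_XXX()` to `_is_XXX` wrapped in a swallow-everything `_try_call`, so a missing `/proc` field or registry key yields `None` instead of raising. A self-contained sketch of that delegation (`FakeCPUInfo` and its vendor data are invented for illustration):

```python
import types

class FakeCPUInfo(object):
    # Stand-in for the real info list built from /proc/cpuinfo or the registry.
    info = [{'vendor_id': 'GenuineIntel'}]

    def _try_call(self, func):
        try:
            return func()
        except Exception:
            pass  # mimic the bare except: missing keys become None

    def __getattr__(self, name):
        # Only reached when normal lookup fails, i.e. for public probe names.
        if not name.startswith('_'):
            attr = getattr(self, '_' + name, None)
            if isinstance(attr, types.MethodType):
                return lambda func=self._try_call, attr=attr: func(attr)
            return lambda: None  # unknown probe: callable that returns None
        raise AttributeError(name)

    def _is_Intel(self):
        return self.info[0]['vendor_id'] == 'GenuineIntel'

    def _is_AMD(self):
        return self.info[0]['vendor_id'] == 'AuthenticAMD'

cpu = FakeCPUInfo()
print(cpu.is_Intel())  # -> True
print(cpu.is_AMD())    # -> False
print(cpu.is_blaa())   # unknown probe -> None, never AttributeError
```

This is why callers can interrogate any `is_*` feature on any platform without guarding against `AttributeError`.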
- def _has_sse(self): - if self.is_Intel(): - return (self.info[0]['Family']==6 and \ - self.info[0]['Model'] in [7,8,9,10,11]) \ - or self.info[0]['Family']==15 - elif self.is_AMD(): - return (self.info[0]['Family']==6 and \ - self.info[0]['Model'] in [6,7,8,10]) \ - or self.info[0]['Family']==15 - else: - return False - - def _has_sse2(self): - if self.is_Intel(): - return self.is_Pentium4() or self.is_PentiumM() \ - or self.is_Core2() - elif self.is_AMD(): - return self.is_AMD64() - else: - return False - - def _has_3dnow(self): - return self.is_AMD() and self.info[0]['Family'] in [5,6,15] - - def _has_3dnowext(self): - return self.is_AMD() and self.info[0]['Family'] in [6,15] - -if sys.platform.startswith('linux'): # variations: linux2,linux-i386 (any others?) - cpuinfo = LinuxCPUInfo -elif sys.platform.startswith('irix'): - cpuinfo = IRIXCPUInfo -elif sys.platform == 'darwin': - cpuinfo = DarwinCPUInfo -elif sys.platform.startswith('sunos'): - cpuinfo = SunOSCPUInfo -elif sys.platform.startswith('win32'): - cpuinfo = Win32CPUInfo -elif sys.platform.startswith('cygwin'): - cpuinfo = LinuxCPUInfo -#XXX: other OS's. Eg. use _winreg on Win32. Or os.uname on unices. 
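The module-level `if/elif` chain above selects an info class by `sys.platform` prefix, with `cygwin` deliberately reusing `LinuxCPUInfo` (Cygwin exposes `/proc/cpuinfo`) and everything unmatched falling back to the do-nothing base class. The same dispatch can be sketched as a prefix table; the classes here are empty stand-ins for the real ones:

```python
# Stand-in stubs for the classes defined in cpuinfo.py.
class CPUInfoBase(object): pass
class LinuxCPUInfo(CPUInfoBase): pass
class IRIXCPUInfo(CPUInfoBase): pass
class DarwinCPUInfo(CPUInfoBase): pass
class SunOSCPUInfo(CPUInfoBase): pass
class Win32CPUInfo(CPUInfoBase): pass

_DISPATCH = (
    ('linux', LinuxCPUInfo),    # linux2, linux-i386, ...
    ('irix', IRIXCPUInfo),
    ('darwin', DarwinCPUInfo),
    ('sunos', SunOSCPUInfo),
    ('win32', Win32CPUInfo),
    ('cygwin', LinuxCPUInfo),   # /proc/cpuinfo exists under Cygwin
)

def select_cpuinfo(platform):
    for prefix, cls in _DISPATCH:
        if platform.startswith(prefix):
            return cls
    return CPUInfoBase  # unknown OS: every probe just returns None

print(select_cpuinfo('linux2').__name__)  # -> LinuxCPUInfo
print(select_cpuinfo('beos5').__name__)   # -> CPUInfoBase
```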
-else: - cpuinfo = CPUInfoBase - -cpu = cpuinfo() - -#if __name__ == "__main__": -# -# cpu.is_blaa() -# cpu.is_Intel() -# cpu.is_Alpha() -# -# print 'CPU information:', -# for name in dir(cpuinfo): -# if name[0]=='_' and name[1]!='_': -# r = getattr(cpu,name[1:])() -# if r: -# if r!=1: -# print '%s=%s' %(name[1:],r), -# else: -# print name[1:], -# print diff --git a/pythonPackages/numpy/numpy/distutils/environment.py b/pythonPackages/numpy/numpy/distutils/environment.py deleted file mode 100755 index c701bce472..0000000000 --- a/pythonPackages/numpy/numpy/distutils/environment.py +++ /dev/null @@ -1,70 +0,0 @@ -import os -from distutils.dist import Distribution - -__metaclass__ = type - -class EnvironmentConfig: - def __init__(self, distutils_section='ALL', **kw): - self._distutils_section = distutils_section - self._conf_keys = kw - self._conf = None - self._hook_handler = None - - def dump_variable(self, name): - conf_desc = self._conf_keys[name] - hook, envvar, confvar, convert = conf_desc - if not convert: - convert = lambda x : x - print('%s.%s:' % (self._distutils_section, name)) - v = self._hook_handler(name, hook) - print(' hook : %s' % (convert(v),)) - if envvar: - v = os.environ.get(envvar, None) - print(' environ: %s' % (convert(v),)) - if confvar and self._conf: - v = self._conf.get(confvar, (None, None))[1] - print(' config : %s' % (convert(v),)) - - def dump_variables(self): - for name in self._conf_keys: - self.dump_variable(name) - - def __getattr__(self, name): - try: - conf_desc = self._conf_keys[name] - except KeyError: - raise AttributeError(name) - return self._get_var(name, conf_desc) - - def get(self, name, default=None): - try: - conf_desc = self._conf_keys[name] - except KeyError: - return default - var = self._get_var(name, conf_desc) - if var is None: - var = default - return var - - def _get_var(self, name, conf_desc): - hook, envvar, confvar, convert = conf_desc - var = self._hook_handler(name, hook) - if envvar is not None: - var = 
os.environ.get(envvar, var) - if confvar is not None and self._conf: - var = self._conf.get(confvar, (None, var))[1] - if convert is not None: - var = convert(var) - return var - - def clone(self, hook_handler): - ec = self.__class__(distutils_section=self._distutils_section, - **self._conf_keys) - ec._hook_handler = hook_handler - return ec - - def use_distribution(self, dist): - if isinstance(dist, Distribution): - self._conf = dist.get_option_dict(self._distutils_section) - else: - self._conf = dist diff --git a/pythonPackages/numpy/numpy/distutils/exec_command.py b/pythonPackages/numpy/numpy/distutils/exec_command.py deleted file mode 100755 index e0c3e1c97a..0000000000 --- a/pythonPackages/numpy/numpy/distutils/exec_command.py +++ /dev/null @@ -1,596 +0,0 @@ -#!/usr/bin/env python -""" -exec_command - -Implements exec_command function that is (almost) equivalent to -commands.getstatusoutput function but on NT, DOS systems the -returned status is actually correct (though, the returned status -values may be different by a factor). In addition, exec_command -takes keyword arguments for (re-)defining environment variables. - -Provides functions: - exec_command --- execute command in a specified directory and - in the modified environment. - find_executable --- locate a command using info from environment - variable PATH. Equivalent to posix `which` - command. - -Author: Pearu Peterson -Created: 11 January 2003 - -Requires: Python 2.x - -Succesfully tested on: - os.name | sys.platform | comments - --------+--------------+---------- - posix | linux2 | Debian (sid) Linux, Python 2.1.3+, 2.2.3+, 2.3.3 - PyCrust 0.9.3, Idle 1.0.2 - posix | linux2 | Red Hat 9 Linux, Python 2.1.3, 2.2.2, 2.3.2 - posix | sunos5 | SunOS 5.9, Python 2.2, 2.3.2 - posix | darwin | Darwin 7.2.0, Python 2.3 - nt | win32 | Windows Me - Python 2.3(EE), Idle 1.0, PyCrust 0.7.2 - Python 2.1.1 Idle 0.8 - nt | win32 | Windows 98, Python 2.1.1. 
Idle 0.8 - nt | win32 | Cygwin 98-4.10, Python 2.1.1(MSC) - echo tests - fail i.e. redefining environment variables may - not work. FIXED: don't use cygwin echo! - Comment: also `cmd /c echo` will not work - but redefining environment variables do work. - posix | cygwin | Cygwin 98-4.10, Python 2.3.3(cygming special) - nt | win32 | Windows XP, Python 2.3.3 - -Known bugs: -- Tests, that send messages to stderr, fail when executed from MSYS prompt - because the messages are lost at some point. -""" - -__all__ = ['exec_command','find_executable'] - -import os -import sys -import shlex - -from numpy.distutils.misc_util import is_sequence, make_temp_file -from numpy.distutils import log -from numpy.distutils.compat import get_exception - -from numpy.compat import open_latin1 - -def temp_file_name(): - fo, name = make_temp_file() - fo.close() - return name - -def get_pythonexe(): - pythonexe = sys.executable - if os.name in ['nt','dos']: - fdir,fn = os.path.split(pythonexe) - fn = fn.upper().replace('PYTHONW','PYTHON') - pythonexe = os.path.join(fdir,fn) - assert os.path.isfile(pythonexe), '%r is not a file' % (pythonexe,) - return pythonexe - -def splitcmdline(line): - import warnings - warnings.warn('splitcmdline is deprecated; use shlex.split', - DeprecationWarning) - return shlex.split(line) - -def find_executable(exe, path=None, _cache={}): - """Return full path of a executable or None. - - Symbolic links are not followed. 
- """ - key = exe, path - try: - return _cache[key] - except KeyError: - pass - log.debug('find_executable(%r)' % exe) - orig_exe = exe - - if path is None: - path = os.environ.get('PATH',os.defpath) - if os.name=='posix': - realpath = os.path.realpath - else: - realpath = lambda a:a - - if exe.startswith('"'): - exe = exe[1:-1] - - suffixes = [''] - if os.name in ['nt','dos','os2']: - fn,ext = os.path.splitext(exe) - extra_suffixes = ['.exe','.com','.bat'] - if ext.lower() not in extra_suffixes: - suffixes = extra_suffixes - - if os.path.isabs(exe): - paths = [''] - else: - paths = [ os.path.abspath(p) for p in path.split(os.pathsep) ] - - for path in paths: - fn = os.path.join(path, exe) - for s in suffixes: - f_ext = fn+s - if not os.path.islink(f_ext): - f_ext = realpath(f_ext) - if os.path.isfile(f_ext) and os.access(f_ext, os.X_OK): - log.good('Found executable %s' % f_ext) - _cache[key] = f_ext - return f_ext - - log.warn('Could not locate executable %s' % orig_exe) - return None - -############################################################ - -def _preserve_environment( names ): - log.debug('_preserve_environment(%r)' % (names)) - env = {} - for name in names: - env[name] = os.environ.get(name) - return env - -def _update_environment( **env ): - log.debug('_update_environment(...)') - for name,value in env.items(): - os.environ[name] = value or '' - -def exec_command( command, - execute_in='', use_shell=None, use_tee = None, - _with_python = 1, - **env ): - """ Return (status,output) of executed command. - - command is a concatenated string of executable and arguments. - The output contains both stdout and stderr messages. - The following special keyword arguments can be used: - use_shell - execute `sh -c command` - use_tee - pipe the output of command through tee - execute_in - before run command `cd execute_in` and after `cd -`. - - On NT, DOS systems the returned status is correct for external commands. 
- Wild cards will not work for non-posix systems or when use_shell=0. - """ - log.debug('exec_command(%r,%s)' % (command,\ - ','.join(['%s=%r'%kv for kv in env.items()]))) - - if use_tee is None: - use_tee = os.name=='posix' - if use_shell is None: - use_shell = os.name=='posix' - execute_in = os.path.abspath(execute_in) - oldcwd = os.path.abspath(os.getcwd()) - - if __name__[-12:] == 'exec_command': - exec_dir = os.path.dirname(os.path.abspath(__file__)) - elif os.path.isfile('exec_command.py'): - exec_dir = os.path.abspath('.') - else: - exec_dir = os.path.abspath(sys.argv[0]) - if os.path.isfile(exec_dir): - exec_dir = os.path.dirname(exec_dir) - - if oldcwd!=execute_in: - os.chdir(execute_in) - log.debug('New cwd: %s' % execute_in) - else: - log.debug('Retaining cwd: %s' % oldcwd) - - oldenv = _preserve_environment( env.keys() ) - _update_environment( **env ) - - try: - # _exec_command is robust but slow, it relies on - # usable sys.std*.fileno() descriptors. If they - # are bad (like in win32 Idle, PyCrust environments) - # then _exec_command_python (even slower) - # will be used as a last resort. - # - # _exec_command_posix uses os.system and is faster - # but not on all platforms os.system will return - # a correct status. 
- if _with_python and (0 or sys.__stdout__.fileno()==-1): - st = _exec_command_python(command, - exec_command_dir = exec_dir, - **env) - elif os.name=='posix': - st = _exec_command_posix(command, - use_shell=use_shell, - use_tee=use_tee, - **env) - else: - st = _exec_command(command, use_shell=use_shell, - use_tee=use_tee,**env) - finally: - if oldcwd!=execute_in: - os.chdir(oldcwd) - log.debug('Restored cwd to %s' % oldcwd) - _update_environment(**oldenv) - - return st - -def _exec_command_posix( command, - use_shell = None, - use_tee = None, - **env ): - log.debug('_exec_command_posix(...)') - - if is_sequence(command): - command_str = ' '.join(list(command)) - else: - command_str = command - - tmpfile = temp_file_name() - stsfile = None - if use_tee: - stsfile = temp_file_name() - filter = '' - if use_tee == 2: - filter = r'| tr -cd "\n" | tr "\n" "."; echo' - command_posix = '( %s ; echo $? > %s ) 2>&1 | tee %s %s'\ - % (command_str,stsfile,tmpfile,filter) - else: - stsfile = temp_file_name() - command_posix = '( %s ; echo $? 
> %s ) > %s 2>&1'\ - % (command_str,stsfile,tmpfile) - #command_posix = '( %s ) > %s 2>&1' % (command_str,tmpfile) - - log.debug('Running os.system(%r)' % (command_posix)) - status = os.system(command_posix) - - if use_tee: - if status: - # if command_tee fails then fall back to robust exec_command - log.warn('_exec_command_posix failed (status=%s)' % status) - return _exec_command(command, use_shell=use_shell, **env) - - if stsfile is not None: - f = open_latin1(stsfile,'r') - status_text = f.read() - status = int(status_text) - f.close() - os.remove(stsfile) - - f = open_latin1(tmpfile,'r') - text = f.read() - f.close() - os.remove(tmpfile) - - if text[-1:]=='\n': - text = text[:-1] - - return status, text - - -def _exec_command_python(command, - exec_command_dir='', **env): - log.debug('_exec_command_python(...)') - - python_exe = get_pythonexe() - cmdfile = temp_file_name() - stsfile = temp_file_name() - outfile = temp_file_name() - - f = open(cmdfile,'w') - f.write('import os\n') - f.write('import sys\n') - f.write('sys.path.insert(0,%r)\n' % (exec_command_dir)) - f.write('from exec_command import exec_command\n') - f.write('del sys.path[0]\n') - f.write('cmd = %r\n' % command) - f.write('os.environ = %r\n' % (os.environ)) - f.write('s,o = exec_command(cmd, _with_python=0, **%r)\n' % (env)) - f.write('f=open(%r,"w")\nf.write(str(s))\nf.close()\n' % (stsfile)) - f.write('f=open(%r,"w")\nf.write(o)\nf.close()\n' % (outfile)) - f.close() - - cmd = '%s %s' % (python_exe, cmdfile) - status = os.system(cmd) - if status: - raise RuntimeError("%r failed" % (cmd,)) - os.remove(cmdfile) - - f = open_latin1(stsfile,'r') - status = int(f.read()) - f.close() - os.remove(stsfile) - - f = open_latin1(outfile,'r') - text = f.read() - f.close() - os.remove(outfile) - - return status, text - -def quote_arg(arg): - if arg[0]!='"' and ' ' in arg: - return '"%s"' % arg - return arg - -def _exec_command( command, use_shell=None, use_tee = None, **env ): - 
log.debug('_exec_command(...)') - - if use_shell is None: - use_shell = os.name=='posix' - if use_tee is None: - use_tee = os.name=='posix' - using_command = 0 - if use_shell: - # We use shell (unless use_shell==0) so that wildcards can be - # used. - sh = os.environ.get('SHELL','/bin/sh') - if is_sequence(command): - argv = [sh,'-c',' '.join(list(command))] - else: - argv = [sh,'-c',command] - else: - # On NT, DOS we avoid using command.com as its exit status is - # not related to the exit status of a command. - if is_sequence(command): - argv = command[:] - else: - argv = shlex.split(command) - - if hasattr(os,'spawnvpe'): - spawn_command = os.spawnvpe - else: - spawn_command = os.spawnve - argv[0] = find_executable(argv[0]) or argv[0] - if not os.path.isfile(argv[0]): - log.warn('Executable %s does not exist' % (argv[0])) - if os.name in ['nt','dos']: - # argv[0] might be internal command - argv = [os.environ['COMSPEC'],'/C'] + argv - using_command = 1 - - # sys.__std*__ is used instead of sys.std* because environments - # like IDLE, PyCrust, etc overwrite sys.std* commands. - so_fileno = sys.__stdout__.fileno() - se_fileno = sys.__stderr__.fileno() - so_flush = sys.__stdout__.flush - se_flush = sys.__stderr__.flush - so_dup = os.dup(so_fileno) - se_dup = os.dup(se_fileno) - - outfile = temp_file_name() - fout = open(outfile,'w') - if using_command: - errfile = temp_file_name() - ferr = open(errfile,'w') - - log.debug('Running %s(%s,%r,%r,os.environ)' \ - % (spawn_command.__name__,os.P_WAIT,argv[0],argv)) - - argv0 = argv[0] - if not using_command: - argv[0] = quote_arg(argv0) - - so_flush() - se_flush() - os.dup2(fout.fileno(),so_fileno) - if using_command: - #XXX: disabled for now as it does not work from cmd under win32.
- # Tests fail on msys - os.dup2(ferr.fileno(),se_fileno) - else: - os.dup2(fout.fileno(),se_fileno) - try: - status = spawn_command(os.P_WAIT,argv0,argv,os.environ) - except OSError: - errmess = str(get_exception()) - status = 999 - sys.stderr.write('%s: %s'%(errmess,argv[0])) - - so_flush() - se_flush() - os.dup2(so_dup,so_fileno) - os.dup2(se_dup,se_fileno) - - fout.close() - fout = open_latin1(outfile,'r') - text = fout.read() - fout.close() - os.remove(outfile) - - if using_command: - ferr.close() - ferr = open_latin1(errfile,'r') - errmess = ferr.read() - ferr.close() - os.remove(errfile) - if errmess and not status: - # Not sure how to handle the case where errmess - # contains only warning messages and that should - # not be treated as errors. - #status = 998 - if text: - text = text + '\n' - #text = '%sCOMMAND %r FAILED: %s' %(text,command,errmess) - text = text + errmess - print (errmess) - if text[-1:]=='\n': - text = text[:-1] - if status is None: - status = 0 - - if use_tee: - print (text) - - return status, text - - -def test_nt(**kws): - pythonexe = get_pythonexe() - echo = find_executable('echo') - using_cygwin_echo = echo != 'echo' - if using_cygwin_echo: - log.warn('Using cygwin echo in win32 environment is not supported') - - s,o=exec_command(pythonexe\ - +' -c "import os;print os.environ.get(\'AAA\',\'\')"') - assert s==0 and o=='',(s,o) - - s,o=exec_command(pythonexe\ - +' -c "import os;print os.environ.get(\'AAA\')"', - AAA='Tere') - assert s==0 and o=='Tere',(s,o) - - os.environ['BBB'] = 'Hi' - s,o=exec_command(pythonexe\ - +' -c "import os;print os.environ.get(\'BBB\',\'\')"') - assert s==0 and o=='Hi',(s,o) - - s,o=exec_command(pythonexe\ - +' -c "import os;print os.environ.get(\'BBB\',\'\')"', - BBB='Hey') - assert s==0 and o=='Hey',(s,o) - - s,o=exec_command(pythonexe\ - +' -c "import os;print os.environ.get(\'BBB\',\'\')"') - assert s==0 and o=='Hi',(s,o) - elif 0: - s,o=exec_command('echo Hello') - assert s==0 and o=='Hello',(s,o) - - 
s,o=exec_command('echo a%AAA%') - assert s==0 and o=='a',(s,o) - - s,o=exec_command('echo a%AAA%',AAA='Tere') - assert s==0 and o=='aTere',(s,o) - - os.environ['BBB'] = 'Hi' - s,o=exec_command('echo a%BBB%') - assert s==0 and o=='aHi',(s,o) - - s,o=exec_command('echo a%BBB%',BBB='Hey') - assert s==0 and o=='aHey', (s,o) - s,o=exec_command('echo a%BBB%') - assert s==0 and o=='aHi',(s,o) - - s,o=exec_command('this_is_not_a_command') - assert s and o!='',(s,o) - - s,o=exec_command('type not_existing_file') - assert s and o!='',(s,o) - - s,o=exec_command('echo path=%path%') - assert s==0 and o!='',(s,o) - - s,o=exec_command('%s -c "import sys;sys.stderr.write(sys.platform)"' \ - % pythonexe) - assert s==0 and o=='win32',(s,o) - - s,o=exec_command('%s -c "raise \'Ignore me.\'"' % pythonexe) - assert s==1 and o,(s,o) - - s,o=exec_command('%s -c "import sys;sys.stderr.write(\'0\');sys.stderr.write(\'1\');sys.stderr.write(\'2\')"'\ - % pythonexe) - assert s==0 and o=='012',(s,o) - - s,o=exec_command('%s -c "import sys;sys.exit(15)"' % pythonexe) - assert s==15 and o=='',(s,o) - - s,o=exec_command('%s -c "print \'Heipa\'"' % pythonexe) - assert s==0 and o=='Heipa',(s,o) - - print ('ok') - -def test_posix(**kws): - s,o=exec_command("echo Hello",**kws) - assert s==0 and o=='Hello',(s,o) - - s,o=exec_command('echo $AAA',**kws) - assert s==0 and o=='',(s,o) - - s,o=exec_command('echo "$AAA"',AAA='Tere',**kws) - assert s==0 and o=='Tere',(s,o) - - - s,o=exec_command('echo "$AAA"',**kws) - assert s==0 and o=='',(s,o) - - os.environ['BBB'] = 'Hi' - s,o=exec_command('echo "$BBB"',**kws) - assert s==0 and o=='Hi',(s,o) - - s,o=exec_command('echo "$BBB"',BBB='Hey',**kws) - assert s==0 and o=='Hey',(s,o) - - s,o=exec_command('echo "$BBB"',**kws) - assert s==0 and o=='Hi',(s,o) - - - s,o=exec_command('this_is_not_a_command',**kws) - assert s!=0 and o!='',(s,o) - - s,o=exec_command('echo path=$PATH',**kws) - assert s==0 and o!='',(s,o) - - s,o=exec_command('python -c "import 
sys,os;sys.stderr.write(os.name)"',**kws) - assert s==0 and o=='posix',(s,o) - - s,o=exec_command('python -c "raise \'Ignore me.\'"',**kws) - assert s==1 and o,(s,o) - - s,o=exec_command('python -c "import sys;sys.stderr.write(\'0\');sys.stderr.write(\'1\');sys.stderr.write(\'2\')"',**kws) - assert s==0 and o=='012',(s,o) - - s,o=exec_command('python -c "import sys;sys.exit(15)"',**kws) - assert s==15 and o=='',(s,o) - - s,o=exec_command('python -c "print \'Heipa\'"',**kws) - assert s==0 and o=='Heipa',(s,o) - - print ('ok') - -def test_execute_in(**kws): - pythonexe = get_pythonexe() - tmpfile = temp_file_name() - fn = os.path.basename(tmpfile) - tmpdir = os.path.dirname(tmpfile) - f = open(tmpfile,'w') - f.write('Hello') - f.close() - - s,o = exec_command('%s -c "print \'Ignore the following IOError:\','\ - 'open(%r,\'r\')"' % (pythonexe,fn),**kws) - assert s and o!='',(s,o) - s,o = exec_command('%s -c "print open(%r,\'r\').read()"' % (pythonexe,fn), - execute_in = tmpdir,**kws) - assert s==0 and o=='Hello',(s,o) - os.remove(tmpfile) - print ('ok') - -def test_svn(**kws): - s,o = exec_command(['svn','status'],**kws) - assert s,(s,o) - print ('svn ok') - -def test_cl(**kws): - if os.name=='nt': - s,o = exec_command(['cl','/V'],**kws) - assert s,(s,o) - print ('cl ok') - -if os.name=='posix': - test = test_posix -elif os.name in ['nt','dos']: - test = test_nt -else: - raise NotImplementedError('exec_command tests for ', os.name) - -############################################################ - -if __name__ == "__main__": - - test(use_tee=0) - test(use_tee=1) - test_execute_in(use_tee=0) - test_execute_in(use_tee=1) - test_svn(use_tee=1) - test_cl(use_tee=1) diff --git a/pythonPackages/numpy/numpy/distutils/extension.py b/pythonPackages/numpy/numpy/distutils/extension.py deleted file mode 100755 index 2db62969ee..0000000000 --- a/pythonPackages/numpy/numpy/distutils/extension.py +++ /dev/null @@ -1,74 +0,0 @@ -"""distutils.extension - -Provides the Extension class, 
used to describe C/C++ extension -modules in setup scripts. - -Overridden to support f2py. -""" - -__revision__ = "$Id: extension.py,v 1.1 2005/04/09 19:29:34 pearu Exp $" - -from distutils.extension import Extension as old_Extension - -import re -cxx_ext_re = re.compile(r'.*[.](cpp|cxx|cc)\Z',re.I).match -fortran_pyf_ext_re = re.compile(r'.*[.](f90|f95|f77|for|ftn|f|pyf)\Z',re.I).match - -class Extension(old_Extension): - def __init__ (self, name, sources, - include_dirs=None, - define_macros=None, - undef_macros=None, - library_dirs=None, - libraries=None, - runtime_library_dirs=None, - extra_objects=None, - extra_compile_args=None, - extra_link_args=None, - export_symbols=None, - swig_opts=None, - depends=None, - language=None, - f2py_options=None, - module_dirs=None, - ): - old_Extension.__init__(self,name, [], - include_dirs, - define_macros, - undef_macros, - library_dirs, - libraries, - runtime_library_dirs, - extra_objects, - extra_compile_args, - extra_link_args, - export_symbols) - # Avoid assert statements checking that sources contains strings: - self.sources = sources - - # Python 2.4 distutils new features - self.swig_opts = swig_opts or [] - - # Python 2.3 distutils new features - self.depends = depends or [] - self.language = language - - # numpy_distutils features - self.f2py_options = f2py_options or [] - self.module_dirs = module_dirs or [] - - return - - def has_cxx_sources(self): - for source in self.sources: - if cxx_ext_re(str(source)): - return True - return False - - def has_f2py_sources(self): - for source in self.sources: - if fortran_pyf_ext_re(source): - return True - return False - -# class Extension diff --git a/pythonPackages/numpy/numpy/distutils/fcompiler/__init__.py b/pythonPackages/numpy/numpy/distutils/fcompiler/__init__.py deleted file mode 100755 index 26ea73895b..0000000000 --- a/pythonPackages/numpy/numpy/distutils/fcompiler/__init__.py +++ /dev/null @@ -1,961 +0,0 @@ -"""numpy.distutils.fcompiler - -Contains FCompiler, an 
abstract base class that defines the interface -for the numpy.distutils Fortran compiler abstraction model. - -Terminology: - -To be consistent, where the term 'executable' is used, it means the single -file, like 'gcc', that is executed, and should be a string. In contrast, -'command' means the entire command line, like ['gcc', '-c', 'file.c'], and -should be a list. - -But note that FCompiler.executables is actually a dictionary of commands. -""" - -__all__ = ['FCompiler','new_fcompiler','show_fcompilers', - 'dummy_fortran_file'] - -import os -import sys -import re -import types -try: - set -except NameError: - from sets import Set as set - -from distutils.sysconfig import get_config_var, get_python_lib -from distutils.fancy_getopt import FancyGetopt -from distutils.errors import DistutilsModuleError, \ - DistutilsExecError, CompileError, LinkError, DistutilsPlatformError -from distutils.util import split_quoted, strtobool - -from numpy.distutils.ccompiler import CCompiler, gen_lib_options -from numpy.distutils import log -from numpy.distutils.misc_util import is_string, all_strings, is_sequence, make_temp_file -from numpy.distutils.environment import EnvironmentConfig -from numpy.distutils.exec_command import find_executable -from numpy.distutils.compat import get_exception - -__metaclass__ = type - -class CompilerNotFound(Exception): - pass - -def flaglist(s): - if is_string(s): - return split_quoted(s) - else: - return s - -def str2bool(s): - if is_string(s): - return strtobool(s) - return bool(s) - -def is_sequence_of_strings(seq): - return is_sequence(seq) and all_strings(seq) - -class FCompiler(CCompiler): - """Abstract base class to define the interface that must be implemented - by real Fortran compiler classes. 
- - Methods that subclasses may redefine: - - update_executables(), find_executables(), get_version() - get_flags(), get_flags_opt(), get_flags_arch(), get_flags_debug() - get_flags_f77(), get_flags_opt_f77(), get_flags_arch_f77(), - get_flags_debug_f77(), get_flags_f90(), get_flags_opt_f90(), - get_flags_arch_f90(), get_flags_debug_f90(), - get_flags_fix(), get_flags_linker_so() - - DON'T call these methods (except get_version) after - constructing a compiler instance or inside any other method. - All methods, except update_executables() and find_executables(), - may call the get_version() method. - - After constructing a compiler instance, always call the customize(dist=None) - method, which finalizes compiler construction and makes the following - attributes available: - compiler_f77 - compiler_f90 - compiler_fix - linker_so - archiver - ranlib - libraries - library_dirs - """ - - # These are the environment variables and distutils keys used. - # Each configuration description is - # (<hook name>, <environment variable>, <distutils config key>, <convert>) - # The hook names are handled by the self._environment_hook method. - # - names starting with 'self.' call methods in this class - # - names starting with 'exe.' return the key in the executables dict - # - names like 'flags.YYY' return self.get_flags_YYY() - # <convert> is either None or a function to convert a string to the - # appropriate type used.
- - distutils_vars = EnvironmentConfig( - distutils_section='config_fc', - noopt = (None, None, 'noopt', str2bool), - noarch = (None, None, 'noarch', str2bool), - debug = (None, None, 'debug', str2bool), - verbose = (None, None, 'verbose', str2bool), - ) - - command_vars = EnvironmentConfig( - distutils_section='config_fc', - compiler_f77 = ('exe.compiler_f77', 'F77', 'f77exec', None), - compiler_f90 = ('exe.compiler_f90', 'F90', 'f90exec', None), - compiler_fix = ('exe.compiler_fix', 'F90', 'f90exec', None), - version_cmd = ('exe.version_cmd', None, None, None), - linker_so = ('exe.linker_so', 'LDSHARED', 'ldshared', None), - linker_exe = ('exe.linker_exe', 'LD', 'ld', None), - archiver = (None, 'AR', 'ar', None), - ranlib = (None, 'RANLIB', 'ranlib', None), - ) - - flag_vars = EnvironmentConfig( - distutils_section='config_fc', - f77 = ('flags.f77', 'F77FLAGS', 'f77flags', flaglist), - f90 = ('flags.f90', 'F90FLAGS', 'f90flags', flaglist), - free = ('flags.free', 'FREEFLAGS', 'freeflags', flaglist), - fix = ('flags.fix', None, None, flaglist), - opt = ('flags.opt', 'FOPT', 'opt', flaglist), - opt_f77 = ('flags.opt_f77', None, None, flaglist), - opt_f90 = ('flags.opt_f90', None, None, flaglist), - arch = ('flags.arch', 'FARCH', 'arch', flaglist), - arch_f77 = ('flags.arch_f77', None, None, flaglist), - arch_f90 = ('flags.arch_f90', None, None, flaglist), - debug = ('flags.debug', 'FDEBUG', 'fdebug', flaglist), - debug_f77 = ('flags.debug_f77', None, None, flaglist), - debug_f90 = ('flags.debug_f90', None, None, flaglist), - flags = ('self.get_flags', 'FFLAGS', 'fflags', flaglist), - linker_so = ('flags.linker_so', 'LDFLAGS', 'ldflags', flaglist), - linker_exe = ('flags.linker_exe', 'LDFLAGS', 'ldflags', flaglist), - ar = ('flags.ar', 'ARFLAGS', 'arflags', flaglist), - ) - - language_map = {'.f':'f77', - '.for':'f77', - '.F':'f77', # XXX: needs preprocessor - '.ftn':'f77', - '.f77':'f77', - '.f90':'f90', - '.F90':'f90', # XXX: needs preprocessor - '.f95':'f90', - } 
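The language_map attribute above (together with the language_order list that follows in the class) is what decides whether a source file is handled as Fortran 77 or Fortran 90, with f90 winning when both flavors appear. A minimal standalone sketch of that selection logic; the classify_sources helper is illustrative and not part of numpy.distutils:

```python
import os

# Mirrors the FCompiler.language_map / language_order attributes above.
language_map = {'.f': 'f77', '.for': 'f77', '.F': 'f77', '.ftn': 'f77',
                '.f77': 'f77', '.f90': 'f90', '.F90': 'f90', '.f95': 'f90'}
language_order = ['f90', 'f77']  # when both flavors appear, prefer f90

def classify_sources(sources):
    """Return the preferred Fortran dialect for a list of source files."""
    seen = set()
    for src in sources:
        # Map the file extension to a dialect; unknown extensions are skipped.
        lang = language_map.get(os.path.splitext(src)[1])
        if lang:
            seen.add(lang)
    for lang in language_order:  # first match in priority order wins
        if lang in seen:
            return lang
    return None

print(classify_sources(['solver.f', 'driver.f90']))  # -> f90
```

A mixed f77/f90 extension is thus classified as f90, which matches the comment later in the class that a compiler lacking f90 support may suggest another compiler (suggested_f90_compiler) when F90 sources are present.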
- language_order = ['f90','f77'] - - - # These will be set by the subclass - - compiler_type = None - compiler_aliases = () - version_pattern = None - - possible_executables = [] - executables = { - 'version_cmd' : ["f77", "-v"], - 'compiler_f77' : ["f77"], - 'compiler_f90' : ["f90"], - 'compiler_fix' : ["f90", "-fixed"], - 'linker_so' : ["f90", "-shared"], - 'linker_exe' : ["f90"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : None, - } - - # If compiler does not support compiling Fortran 90 then it can - # suggest using another compiler. For example, gnu would suggest - # gnu95 compiler type when there are F90 sources. - suggested_f90_compiler = None - - compile_switch = "-c" - object_switch = "-o " # Ending space matters! It will be stripped - # but if it is missing then object_switch - # will be prefixed to object file name by - # string concatenation. - library_switch = "-o " # Ditto! - - # Switch to specify where module files are created and searched - # for USE statement. Normally it is a string and also here ending - # space matters. See above. - module_dir_switch = None - - # Switch to specify where module files are searched for USE statement. - module_include_switch = '-I' - - pic_flags = [] # Flags to create position-independent code - - src_extensions = ['.for','.ftn','.f77','.f','.f90','.f95','.F','.F90'] - obj_extension = ".o" - shared_lib_extension = get_config_var('SO') # or .dll - static_lib_extension = ".a" # or .lib - static_lib_format = "lib%s%s" # or %s%s - shared_lib_format = "%s%s" - exe_extension = "" - - _exe_cache = {} - - _executable_keys = ['version_cmd', 'compiler_f77', 'compiler_f90', - 'compiler_fix', 'linker_so', 'linker_exe', 'archiver', - 'ranlib'] - - # This will be set by new_fcompiler when called in - # command/{build_ext.py, build_clib.py, config.py} files. 
- c_compiler = None - - def __init__(self, *args, **kw): - CCompiler.__init__(self, *args, **kw) - self.distutils_vars = self.distutils_vars.clone(self._environment_hook) - self.command_vars = self.command_vars.clone(self._environment_hook) - self.flag_vars = self.flag_vars.clone(self._environment_hook) - self.executables = self.executables.copy() - for e in self._executable_keys: - if e not in self.executables: - self.executables[e] = None - - # Some methods depend on .customize() being called first, so - # this keeps track of whether that's happened yet. - self._is_customised = False - - def __copy__(self): - obj = self.__new__(self.__class__) - obj.__dict__.update(self.__dict__) - obj.distutils_vars = obj.distutils_vars.clone(obj._environment_hook) - obj.command_vars = obj.command_vars.clone(obj._environment_hook) - obj.flag_vars = obj.flag_vars.clone(obj._environment_hook) - obj.executables = obj.executables.copy() - return obj - - def copy(self): - return self.__copy__() - - # Use properties for the attributes used by CCompiler. Setting them - # as attributes from the self.executables dictionary is error-prone, - # so we get them from there each time. - def _command_property(key): - def fget(self): - assert self._is_customised - return self.executables[key] - return property(fget=fget) - version_cmd = _command_property('version_cmd') - compiler_f77 = _command_property('compiler_f77') - compiler_f90 = _command_property('compiler_f90') - compiler_fix = _command_property('compiler_fix') - linker_so = _command_property('linker_so') - linker_exe = _command_property('linker_exe') - archiver = _command_property('archiver') - ranlib = _command_property('ranlib') - - # Make our terminology consistent. 
- def set_executable(self, key, value): - self.set_command(key, value) - - def set_commands(self, **kw): - for k, v in kw.items(): - self.set_command(k, v) - - def set_command(self, key, value): - if not key in self._executable_keys: - raise ValueError( - "unknown executable '%s' for class %s" % - (key, self.__class__.__name__)) - if is_string(value): - value = split_quoted(value) - assert value is None or is_sequence_of_strings(value[1:]), (key, value) - self.executables[key] = value - - ###################################################################### - ## Methods that subclasses may redefine. But don't call these methods! - ## They are private to FCompiler class and may return unexpected - ## results if used elsewhere. So, you have been warned.. - - def find_executables(self): - """Go through the self.executables dictionary, and attempt to - find and assign appropriate executables. - - Executable names are looked for in the environment (environment - variables, the distutils.cfg, and command line), the 0th-element of - the command list, and the self.possible_executables list. - - Also, if the 0th element is "<F77>" or "<F90>", the Fortran 77 - or the Fortran 90 compiler executable is used, unless overridden - by an environment setting. - - Subclasses should call this if overridden.
- """ - assert self._is_customised - exe_cache = self._exe_cache - def cached_find_executable(exe): - if exe in exe_cache: - return exe_cache[exe] - fc_exe = find_executable(exe) - exe_cache[exe] = exe_cache[fc_exe] = fc_exe - return fc_exe - def verify_command_form(name, value): - if value is not None and not is_sequence_of_strings(value): - raise ValueError( - "%s value %r is invalid in class %s" % - (name, value, self.__class__.__name__)) - def set_exe(exe_key, f77=None, f90=None): - cmd = self.executables.get(exe_key, None) - if not cmd: - return None - # Note that we get cmd[0] here if the environment doesn't - # have anything set - exe_from_environ = getattr(self.command_vars, exe_key) - if not exe_from_environ: - possibles = [f90, f77] + self.possible_executables - else: - possibles = [exe_from_environ] + self.possible_executables - - seen = set() - unique_possibles = [] - for e in possibles: - if e == '': - e = f77 - elif e == '': - e = f90 - if not e or e in seen: - continue - seen.add(e) - unique_possibles.append(e) - - for exe in unique_possibles: - fc_exe = cached_find_executable(exe) - if fc_exe: - cmd[0] = fc_exe - return fc_exe - self.set_command(exe_key, None) - return None - - ctype = self.compiler_type - f90 = set_exe('compiler_f90') - if not f90: - f77 = set_exe('compiler_f77') - if f77: - log.warn('%s: no Fortran 90 compiler found' % ctype) - else: - raise CompilerNotFound('%s: f90 nor f77' % ctype) - else: - f77 = set_exe('compiler_f77', f90=f90) - if not f77: - log.warn('%s: no Fortran 77 compiler found' % ctype) - set_exe('compiler_fix', f90=f90) - - set_exe('linker_so', f77=f77, f90=f90) - set_exe('linker_exe', f77=f77, f90=f90) - set_exe('version_cmd', f77=f77, f90=f90) - set_exe('archiver') - set_exe('ranlib') - - def update_executables(elf): - """Called at the beginning of customisation. Subclasses should - override this if they need to set up the executables dictionary. 
- - Note that self.find_executables() is run afterwards, so the - self.executables dictionary values can contain <F77> or <F90> as - the command, which will be replaced by the found F77 or F90 - compiler. - """ - pass - - def get_flags(self): - """List of flags common to all compiler types.""" - return [] + self.pic_flags - - def _get_command_flags(self, key): - cmd = self.executables.get(key, None) - if cmd is None: - return [] - return cmd[1:] - - def get_flags_f77(self): - """List of Fortran 77 specific flags.""" - return self._get_command_flags('compiler_f77') - def get_flags_f90(self): - """List of Fortran 90 specific flags.""" - return self._get_command_flags('compiler_f90') - def get_flags_free(self): - """List of Fortran 90 free format specific flags.""" - return [] - def get_flags_fix(self): - """List of Fortran 90 fixed format specific flags.""" - return self._get_command_flags('compiler_fix') - def get_flags_linker_so(self): - """List of linker flags to build a shared library.""" - return self._get_command_flags('linker_so') - def get_flags_linker_exe(self): - """List of linker flags to build an executable.""" - return self._get_command_flags('linker_exe') - def get_flags_ar(self): - """List of archiver flags.
""" - return self._get_command_flags('archiver') - def get_flags_opt(self): - """List of architecture independent compiler flags.""" - return [] - def get_flags_arch(self): - """List of architecture dependent compiler flags.""" - return [] - def get_flags_debug(self): - """List of compiler flags to compile with debugging information.""" - return [] - - get_flags_opt_f77 = get_flags_opt_f90 = get_flags_opt - get_flags_arch_f77 = get_flags_arch_f90 = get_flags_arch - get_flags_debug_f77 = get_flags_debug_f90 = get_flags_debug - - def get_libraries(self): - """List of compiler libraries.""" - return self.libraries[:] - def get_library_dirs(self): - """List of compiler library directories.""" - return self.library_dirs[:] - - def get_version(self, force=False, ok_status=[0]): - assert self._is_customised - version = CCompiler.get_version(self, force=force, ok_status=ok_status) - if version is None: - raise CompilerNotFound() - return version - - ############################################################ - - ## Public methods: - - def customize(self, dist = None): - """Customize Fortran compiler. - - This method gets Fortran compiler specific information from - (i) class definition, (ii) environment, (iii) distutils config - files, and (iv) command line (later overrides earlier). - - This method should be always called after constructing a - compiler instance. But not in __init__ because Distribution - instance is needed for (iii) and (iv). 
- """ - log.info('customize %s' % (self.__class__.__name__)) - - self._is_customised = True - - self.distutils_vars.use_distribution(dist) - self.command_vars.use_distribution(dist) - self.flag_vars.use_distribution(dist) - - self.update_executables() - - # find_executables takes care of setting the compiler commands, - # version_cmd, linker_so, linker_exe, ar, and ranlib - self.find_executables() - - noopt = self.distutils_vars.get('noopt', False) - noarch = self.distutils_vars.get('noarch', noopt) - debug = self.distutils_vars.get('debug', False) - - f77 = self.command_vars.compiler_f77 - f90 = self.command_vars.compiler_f90 - - f77flags = [] - f90flags = [] - freeflags = [] - fixflags = [] - - if f77: - f77flags = self.flag_vars.f77 - if f90: - f90flags = self.flag_vars.f90 - freeflags = self.flag_vars.free - # XXX Assuming that free format is default for f90 compiler. - fix = self.command_vars.compiler_fix - if fix: - fixflags = self.flag_vars.fix + f90flags - - oflags, aflags, dflags = [], [], [] - # examine get_flags__ for extra flags - # only add them if the method is different from get_flags_ - def get_flags(tag, flags): - # note that self.flag_vars. 
calls self.get_flags_() - flags.extend(getattr(self.flag_vars, tag)) - this_get = getattr(self, 'get_flags_' + tag) - for name, c, flagvar in [('f77', f77, f77flags), - ('f90', f90, f90flags), - ('f90', fix, fixflags)]: - t = '%s_%s' % (tag, name) - if c and this_get is not getattr(self, 'get_flags_' + t): - flagvar.extend(getattr(self.flag_vars, t)) - if not noopt: - get_flags('opt', oflags) - if not noarch: - get_flags('arch', aflags) - if debug: - get_flags('debug', dflags) - - fflags = self.flag_vars.flags + dflags + oflags + aflags - - if f77: - self.set_commands(compiler_f77=[f77]+f77flags+fflags) - if f90: - self.set_commands(compiler_f90=[f90]+freeflags+f90flags+fflags) - if fix: - self.set_commands(compiler_fix=[fix]+fixflags+fflags) - - - #XXX: Do we need LDSHARED->SOSHARED, LDFLAGS->SOFLAGS - linker_so = self.linker_so - if linker_so: - linker_so_flags = self.flag_vars.linker_so - if sys.platform.startswith('aix'): - python_lib = get_python_lib(standard_lib=1) - ld_so_aix = os.path.join(python_lib, 'config', 'ld_so_aix') - python_exp = os.path.join(python_lib, 'config', 'python.exp') - linker_so = [ld_so_aix] + linker_so + ['-bI:'+python_exp] - self.set_commands(linker_so=linker_so+linker_so_flags) - - linker_exe = self.linker_exe - if linker_exe: - linker_exe_flags = self.flag_vars.linker_exe - self.set_commands(linker_exe=linker_exe+linker_exe_flags) - - ar = self.command_vars.archiver - if ar: - arflags = self.flag_vars.ar - self.set_commands(archiver=[ar]+arflags) - - self.set_library_dirs(self.get_library_dirs()) - self.set_libraries(self.get_libraries()) - - def dump_properties(self): - """Print out the attributes of a compiler instance.""" - props = [] - for key in self.executables.keys() + \ - ['version','libraries','library_dirs', - 'object_switch','compile_switch']: - if hasattr(self,key): - v = getattr(self,key) - props.append((key, None, '= '+repr(v))) - props.sort() - - pretty_printer = FancyGetopt(props) - for l in 
pretty_printer.generate_help("%s instance properties:" \ - % (self.__class__.__name__)): - if l[:4]==' --': - l = ' ' + l[4:] - print(l) - - ################### - - def _compile(self, obj, src, ext, cc_args, extra_postargs, pp_opts): - """Compile 'src' to produce 'obj'.""" - src_flags = {} - if is_f_file(src) and not has_f90_header(src): - flavor = ':f77' - compiler = self.compiler_f77 - src_flags = get_f77flags(src) - elif is_free_format(src): - flavor = ':f90' - compiler = self.compiler_f90 - if compiler is None: - raise DistutilsExecError('f90 not supported by %s needed for %s'\ - % (self.__class__.__name__,src)) - else: - flavor = ':fix' - compiler = self.compiler_fix - if compiler is None: - raise DistutilsExecError('f90 (fixed) not supported by %s needed for %s'\ - % (self.__class__.__name__,src)) - if self.object_switch[-1]==' ': - o_args = [self.object_switch.strip(),obj] - else: - o_args = [self.object_switch.strip()+obj] - - assert self.compile_switch.strip() - s_args = [self.compile_switch, src] - - extra_flags = src_flags.get(self.compiler_type,[]) - if extra_flags: - log.info('using compile options from source: %r' \ - % ' '.join(extra_flags)) - - command = compiler + cc_args + extra_flags + s_args + o_args \ - + extra_postargs - - display = '%s: %s' % (os.path.basename(compiler[0]) + flavor, - src) - try: - self.spawn(command,display=display) - except DistutilsExecError: - msg = str(get_exception()) - raise CompileError(msg) - - def module_options(self, module_dirs, module_build_dir): - options = [] - if self.module_dir_switch is not None: - if self.module_dir_switch[-1]==' ': - options.extend([self.module_dir_switch.strip(),module_build_dir]) - else: - options.append(self.module_dir_switch.strip()+module_build_dir) - else: - print('XXX: module_build_dir=%r option ignored' % (module_build_dir)) - print('XXX: Fix module_dir_switch for ',self.__class__.__name__) - if self.module_include_switch is not None: - for d in [module_build_dir]+module_dirs: -
options.append('%s%s' % (self.module_include_switch, d)) - else: - print('XXX: module_dirs=%r option ignored' % (module_dirs)) - print('XXX: Fix module_include_switch for ',self.__class__.__name__) - return options - - def library_option(self, lib): - return "-l" + lib - def library_dir_option(self, dir): - return "-L" + dir - - def link(self, target_desc, objects, - output_filename, output_dir=None, libraries=None, - library_dirs=None, runtime_library_dirs=None, - export_symbols=None, debug=0, extra_preargs=None, - extra_postargs=None, build_temp=None, target_lang=None): - objects, output_dir = self._fix_object_args(objects, output_dir) - libraries, library_dirs, runtime_library_dirs = \ - self._fix_lib_args(libraries, library_dirs, runtime_library_dirs) - - lib_opts = gen_lib_options(self, library_dirs, runtime_library_dirs, - libraries) - if is_string(output_dir): - output_filename = os.path.join(output_dir, output_filename) - elif output_dir is not None: - raise TypeError("'output_dir' must be a string or None") - - if self._need_link(objects, output_filename): - if self.library_switch[-1]==' ': - o_args = [self.library_switch.strip(),output_filename] - else: - o_args = [self.library_switch.strip()+output_filename] - - if is_string(self.objects): - ld_args = objects + [self.objects] - else: - ld_args = objects + self.objects - ld_args = ld_args + lib_opts + o_args - if debug: - ld_args[:0] = ['-g'] - if extra_preargs: - ld_args[:0] = extra_preargs - if extra_postargs: - ld_args.extend(extra_postargs) - self.mkpath(os.path.dirname(output_filename)) - if target_desc == CCompiler.EXECUTABLE: - linker = self.linker_exe[:] - else: - linker = self.linker_so[:] - command = linker + ld_args - try: - self.spawn(command) - except DistutilsExecError: - msg = str(get_exception()) - raise LinkError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - def _environment_hook(self, name, hook_name): - if hook_name is None: - return None - if 
is_string(hook_name): - if hook_name.startswith('self.'): - hook_name = hook_name[5:] - hook = getattr(self, hook_name) - return hook() - elif hook_name.startswith('exe.'): - hook_name = hook_name[4:] - var = self.executables[hook_name] - if var: - return var[0] - else: - return None - elif hook_name.startswith('flags.'): - hook_name = hook_name[6:] - hook = getattr(self, 'get_flags_' + hook_name) - return hook() - else: - return hook_name() - - ## class FCompiler - -_default_compilers = ( - # sys.platform mappings - ('win32', ('gnu','intelv','absoft','compaqv','intelev','gnu95','g95', - 'intelvem', 'intelem')), - ('cygwin.*', ('gnu','intelv','absoft','compaqv','intelev','gnu95','g95')), - ('linux.*', ('gnu','intel','lahey','pg','absoft','nag','vast','compaq', - 'intele','intelem','gnu95','g95')), - ('darwin.*', ('nag', 'absoft', 'ibm', 'intel', 'gnu', 'gnu95', 'g95')), - ('sunos.*', ('sun','gnu','gnu95','g95')), - ('irix.*', ('mips','gnu','gnu95',)), - ('aix.*', ('ibm','gnu','gnu95',)), - # os.name mappings - ('posix', ('gnu','gnu95',)), - ('nt', ('gnu','gnu95',)), - ('mac', ('gnu','gnu95',)), - ) - -fcompiler_class = None -fcompiler_aliases = None - -def load_all_fcompiler_classes(): - """Cache all the FCompiler classes found in modules in the - numpy.distutils.fcompiler package. - """ - from glob import glob - global fcompiler_class, fcompiler_aliases - if fcompiler_class is not None: - return - pys = os.path.join(os.path.dirname(__file__), '*.py') - fcompiler_class = {} - fcompiler_aliases = {} - for fname in glob(pys): - module_name, ext = os.path.splitext(os.path.basename(fname)) - module_name = 'numpy.distutils.fcompiler.' 
+ module_name - __import__ (module_name) - module = sys.modules[module_name] - if hasattr(module, 'compilers'): - for cname in module.compilers: - klass = getattr(module, cname) - desc = (klass.compiler_type, klass, klass.description) - fcompiler_class[klass.compiler_type] = desc - for alias in klass.compiler_aliases: - if alias in fcompiler_aliases: - raise ValueError("alias %r defined for both %s and %s" - % (alias, klass.__name__, - fcompiler_aliases[alias][1].__name__)) - fcompiler_aliases[alias] = desc - -def _find_existing_fcompiler(compiler_types, - osname=None, platform=None, - requiref90=False, - c_compiler=None): - from numpy.distutils.core import get_distribution - dist = get_distribution(always=True) - for compiler_type in compiler_types: - v = None - try: - c = new_fcompiler(plat=platform, compiler=compiler_type, - c_compiler=c_compiler) - c.customize(dist) - v = c.get_version() - if requiref90 and c.compiler_f90 is None: - v = None - new_compiler = c.suggested_f90_compiler - if new_compiler: - log.warn('Trying %r compiler as suggested by %r ' - 'compiler for f90 support.' % (compiler_type, - new_compiler)) - c = new_fcompiler(plat=platform, compiler=new_compiler, - c_compiler=c_compiler) - c.customize(dist) - v = c.get_version() - if v is not None: - compiler_type = new_compiler - if requiref90 and c.compiler_f90 is None: - raise ValueError('%s does not support compiling f90 codes, ' - 'skipping.' 
% (c.__class__.__name__)) - except DistutilsModuleError: - log.debug("_find_existing_fcompiler: compiler_type='%s' raised DistutilsModuleError", compiler_type) - except CompilerNotFound: - log.debug("_find_existing_fcompiler: compiler_type='%s' not found", compiler_type) - if v is not None: - return compiler_type - return None - -def available_fcompilers_for_platform(osname=None, platform=None): - if osname is None: - osname = os.name - if platform is None: - platform = sys.platform - matching_compiler_types = [] - for pattern, compiler_type in _default_compilers: - if re.match(pattern, platform) or re.match(pattern, osname): - for ct in compiler_type: - if ct not in matching_compiler_types: - matching_compiler_types.append(ct) - if not matching_compiler_types: - matching_compiler_types.append('gnu') - return matching_compiler_types - -def get_default_fcompiler(osname=None, platform=None, requiref90=False, - c_compiler=None): - """Determine the default Fortran compiler to use for the given - platform.""" - matching_compiler_types = available_fcompilers_for_platform(osname, - platform) - compiler_type = _find_existing_fcompiler(matching_compiler_types, - osname=osname, - platform=platform, - requiref90=requiref90, - c_compiler=c_compiler) - return compiler_type - -def new_fcompiler(plat=None, - compiler=None, - verbose=0, - dry_run=0, - force=0, - requiref90=False, - c_compiler = None): - """Generate an instance of some FCompiler subclass for the supplied - platform/compiler combination. 
- """ - load_all_fcompiler_classes() - if plat is None: - plat = os.name - if compiler is None: - compiler = get_default_fcompiler(plat, requiref90=requiref90, - c_compiler=c_compiler) - if compiler in fcompiler_class: - module_name, klass, long_description = fcompiler_class[compiler] - elif compiler in fcompiler_aliases: - module_name, klass, long_description = fcompiler_aliases[compiler] - else: - msg = "don't know how to compile Fortran code on platform '%s'" % plat - if compiler is not None: - msg = msg + " with '%s' compiler." % compiler - msg = msg + " Supported compilers are: %s)" \ - % (','.join(fcompiler_class.keys())) - log.warn(msg) - return None - - compiler = klass(verbose=verbose, dry_run=dry_run, force=force) - compiler.c_compiler = c_compiler - return compiler - -def show_fcompilers(dist=None): - """Print list of available compilers (used by the "--help-fcompiler" - option to "config_fc"). - """ - if dist is None: - from distutils.dist import Distribution - from numpy.distutils.command.config_compiler import config_fc - dist = Distribution() - dist.script_name = os.path.basename(sys.argv[0]) - dist.script_args = ['config_fc'] + sys.argv[1:] - try: - dist.script_args.remove('--help-fcompiler') - except ValueError: - pass - dist.cmdclass['config_fc'] = config_fc - dist.parse_config_files() - dist.parse_command_line() - compilers = [] - compilers_na = [] - compilers_ni = [] - if not fcompiler_class: - load_all_fcompiler_classes() - platform_compilers = available_fcompilers_for_platform() - for compiler in platform_compilers: - v = None - log.set_verbosity(-2) - try: - c = new_fcompiler(compiler=compiler, verbose=dist.verbose) - c.customize(dist) - v = c.get_version() - except (DistutilsModuleError, CompilerNotFound): - e = get_exception() - log.debug("show_fcompilers: %s not found" % (compiler,)) - log.debug(repr(e)) - - if v is None: - compilers_na.append(("fcompiler="+compiler, None, - fcompiler_class[compiler][2])) - else: - c.dump_properties() - 
compilers.append(("fcompiler="+compiler, None, - fcompiler_class[compiler][2] + ' (%s)' % v)) - - compilers_ni = list(set(fcompiler_class.keys()) - set(platform_compilers)) - compilers_ni = [("fcompiler="+fc, None, fcompiler_class[fc][2]) - for fc in compilers_ni] - - compilers.sort() - compilers_na.sort() - compilers_ni.sort() - pretty_printer = FancyGetopt(compilers) - pretty_printer.print_help("Fortran compilers found:") - pretty_printer = FancyGetopt(compilers_na) - pretty_printer.print_help("Compilers available for this " - "platform, but not found:") - if compilers_ni: - pretty_printer = FancyGetopt(compilers_ni) - pretty_printer.print_help("Compilers not available on this platform:") - print("For compiler details, run 'config_fc --verbose' setup command.") - - -def dummy_fortran_file(): - fo, name = make_temp_file(suffix='.f') - fo.write(" subroutine dummy()\n end\n") - fo.close() - return name[:-2] - - -is_f_file = re.compile(r'.*[.](for|ftn|f77|f)\Z',re.I).match -_has_f_header = re.compile(r'-[*]-\s*fortran\s*-[*]-',re.I).search -_has_f90_header = re.compile(r'-[*]-\s*f90\s*-[*]-',re.I).search -_has_fix_header = re.compile(r'-[*]-\s*fix\s*-[*]-',re.I).search -_free_f90_start = re.compile(r'[^c*!]\s*[^\s\d\t]',re.I).match - -def is_free_format(file): - """Check if file is in free format Fortran.""" - # f90 allows both fixed and free format, assuming fixed unless - # signs of free format are detected. 
- result = 0 - f = open(file,'r') - line = f.readline() - n = 10000 # the number of non-comment lines to scan for hints - if _has_f_header(line): - n = 0 - elif _has_f90_header(line): - n = 0 - result = 1 - while n>0 and line: - line = line.rstrip() - if line and line[0]!='!': - n -= 1 - if (line[0]!='\t' and _free_f90_start(line[:5])) or line[-1:]=='&': - result = 1 - break - line = f.readline() - f.close() - return result - -def has_f90_header(src): - f = open(src,'r') - line = f.readline() - f.close() - return _has_f90_header(line) or _has_fix_header(line) - -_f77flags_re = re.compile(r'(c|)f77flags\s*\(\s*(?P<fcname>\w+)\s*\)\s*=\s*(?P<fflags>.*)',re.I) -def get_f77flags(src): - """ - Search the first 20 lines of fortran 77 code for line pattern - `CF77FLAGS(<fcompiler type>)=<f77 flags>` - Return a dictionary {<fcompiler type>:<f77 flags>}. - """ - flags = {} - f = open(src,'r') - i = 0 - for line in f.readlines(): - i += 1 - if i>20: break - m = _f77flags_re.match(line) - if not m: continue - fcname = m.group('fcname').strip() - fflags = m.group('fflags').strip() - flags[fcname] = split_quoted(fflags) - f.close() - return flags - -if __name__ == '__main__': - show_fcompilers() diff --git a/pythonPackages/numpy/numpy/distutils/fcompiler/absoft.py b/pythonPackages/numpy/numpy/distutils/fcompiler/absoft.py deleted file mode 100755 index f14502d105..0000000000 --- a/pythonPackages/numpy/numpy/distutils/fcompiler/absoft.py +++ /dev/null @@ -1,157 +0,0 @@ - -# http://www.absoft.com/literature/osxuserguide.pdf -# http://www.absoft.com/documentation.html - -# Notes: -# - when using -g77 then use -DUNDERSCORE_G77 to compile f2py -# generated extension modules (works for f2py v2.45.241_1936 and up) - -import os - -from numpy.distutils.cpuinfo import cpu -from numpy.distutils.fcompiler import FCompiler, dummy_fortran_file -from numpy.distutils.misc_util import cyg2win32 - -compilers = ['AbsoftFCompiler'] - -class AbsoftFCompiler(FCompiler): - - compiler_type = 'absoft' - description = 'Absoft Corp Fortran Compiler' - #version_pattern = 
r'FORTRAN 77 Compiler (?P<version>[^\s*,]*).*?Absoft Corp' - version_pattern = r'(f90:.*?(Absoft Pro FORTRAN Version|FORTRAN 77 Compiler|Absoft Fortran Compiler Version|Copyright Absoft Corporation.*?Version))'+\ - r' (?P<version>[^\s*,]*)(.*?Absoft Corp|)' - - # on windows: f90 -V -c dummy.f - # f90: Copyright Absoft Corporation 1994-1998 mV2; Cray Research, Inc. 1994-1996 CF90 (2.x.x.x f36t87) Version 2.3 Wed Apr 19, 2006 13:05:16 - - # samt5735(8)$ f90 -V -c dummy.f - # f90: Copyright Absoft Corporation 1994-2002; Absoft Pro FORTRAN Version 8.0 - # Note that fink installs g77 as f77, so need to use f90 for detection. - - executables = { - 'version_cmd' : None, # set by update_executables - 'compiler_f77' : ["f77"], - 'compiler_fix' : ["f90"], - 'compiler_f90' : ["f90"], - 'linker_so' : ["<F90>"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"] - } - - if os.name=='nt': - library_switch = '/out:' #No space after /out:! - - module_dir_switch = None - module_include_switch = '-p' - - def update_executables(self): - f = cyg2win32(dummy_fortran_file()) - self.executables['version_cmd'] = ['<F90>', '-V', '-c', - f+'.f', '-o', f+'.o'] - - def get_flags_linker_so(self): - if os.name=='nt': - opt = ['/dll'] - # The "-K shared" switches are being left in for pre-9.0 versions - # of Absoft though I don't think versions earlier than 9 can - # actually be used to build shared libraries. In fact, version - # 8 of Absoft doesn't recognize "-K shared" and will fail. 
- elif self.get_version() >= '9.0': - opt = ['-shared'] - else: - opt = ["-K","shared"] - return opt - - def library_dir_option(self, dir): - if os.name=='nt': - return ['-link','/PATH:"%s"' % (dir)] - return "-L" + dir - - def library_option(self, lib): - if os.name=='nt': - return '%s.lib' % (lib) - return "-l" + lib - - def get_library_dirs(self): - opt = FCompiler.get_library_dirs(self) - d = os.environ.get('ABSOFT') - if d: - if self.get_version() >= '10.0': - # use shared libraries, the static libraries were not compiled -fPIC - prefix = 'sh' - else: - prefix = '' - if cpu.is_64bit(): - suffix = '64' - else: - suffix = '' - opt.append(os.path.join(d, '%slib%s' % (prefix, suffix))) - return opt - - def get_libraries(self): - opt = FCompiler.get_libraries(self) - if self.get_version() >= '10.0': - opt.extend(['af90math', 'afio', 'af77math', 'U77']) - elif self.get_version() >= '8.0': - opt.extend(['f90math','fio','f77math','U77']) - else: - opt.extend(['fio','f90math','fmath','U77']) - if os.name =='nt': - opt.append('COMDLG32') - return opt - - def get_flags(self): - opt = FCompiler.get_flags(self) - if os.name != 'nt': - opt.extend(['-s']) - if self.get_version(): - if self.get_version()>='8.2': - opt.append('-fpic') - return opt - - def get_flags_f77(self): - opt = FCompiler.get_flags_f77(self) - opt.extend(['-N22','-N90','-N110']) - v = self.get_version() - if os.name == 'nt': - if v and v>='8.0': - opt.extend(['-f','-N15']) - else: - opt.append('-f') - if v: - if v<='4.6': - opt.append('-B108') - else: - # Though -N15 is undocumented, it works with - # Absoft 8.0 on Linux - opt.append('-N15') - return opt - - def get_flags_f90(self): - opt = FCompiler.get_flags_f90(self) - opt.extend(["-YCFRL=1","-YCOM_NAMES=LCS","-YCOM_PFX","-YEXT_PFX", - "-YCOM_SFX=_","-YEXT_SFX=_","-YEXT_NAMES=LCS"]) - if self.get_version(): - if self.get_version()>'4.6': - opt.extend(["-YDEALLOC=ALL"]) - return opt - - def get_flags_fix(self): - opt = FCompiler.get_flags_fix(self) - 
opt.extend(["-YCFRL=1","-YCOM_NAMES=LCS","-YCOM_PFX","-YEXT_PFX", - "-YCOM_SFX=_","-YEXT_SFX=_","-YEXT_NAMES=LCS"]) - opt.extend(["-f","fixed"]) - return opt - - def get_flags_opt(self): - opt = ['-O'] - return opt - -if __name__ == '__main__': - from distutils import log - log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler - compiler = new_fcompiler(compiler='absoft') - compiler.customize() - print(compiler.get_version()) diff --git a/pythonPackages/numpy/numpy/distutils/fcompiler/compaq.py b/pythonPackages/numpy/numpy/distutils/fcompiler/compaq.py deleted file mode 100755 index a00d8bdb8d..0000000000 --- a/pythonPackages/numpy/numpy/distutils/fcompiler/compaq.py +++ /dev/null @@ -1,127 +0,0 @@ - -#http://www.compaq.com/fortran/docs/ - -import os -import sys - -from numpy.distutils.fcompiler import FCompiler -from numpy.distutils.compat import get_exception -from distutils.errors import DistutilsPlatformError - -compilers = ['CompaqFCompiler'] -if os.name != 'posix' or sys.platform[:6] == 'cygwin' : - # Otherwise we'd get a false positive on posix systems with - # case-insensitive filesystems (like darwin), because we'll pick - # up /bin/df - compilers.append('CompaqVisualFCompiler') - -class CompaqFCompiler(FCompiler): - - compiler_type = 'compaq' - description = 'Compaq Fortran Compiler' - version_pattern = r'Compaq Fortran (?P<version>[^\s]*).*' - - if sys.platform[:5]=='linux': - fc_exe = 'fort' - else: - fc_exe = 'f90' - - executables = { - 'version_cmd' : ['<F90>', "-version"], - 'compiler_f77' : [fc_exe, "-f77rtl","-fixed"], - 'compiler_fix' : [fc_exe, "-fixed"], - 'compiler_f90' : [fc_exe], - 'linker_so' : ['<F90>'], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"] - } - - module_dir_switch = '-module ' # not tested - module_include_switch = '-I' - - def get_flags(self): - return ['-assume no2underscore','-nomixed_str_len_arg'] - def get_flags_debug(self): - return ['-g','-check bounds'] - def get_flags_opt(self): - return ['-O4','-align 
dcommons','-assume bigarrays', - '-assume nozsize','-math_library fast'] - def get_flags_arch(self): - return ['-arch host', '-tune host'] - def get_flags_linker_so(self): - if sys.platform[:5]=='linux': - return ['-shared'] - return ['-shared','-Wl,-expect_unresolved,*'] - -class CompaqVisualFCompiler(FCompiler): - - compiler_type = 'compaqv' - description = 'DIGITAL or Compaq Visual Fortran Compiler' - version_pattern = r'(DIGITAL|Compaq) Visual Fortran Optimizing Compiler'\ - ' Version (?P<version>[^\s]*).*' - - compile_switch = '/compile_only' - object_switch = '/object:' - library_switch = '/OUT:' #No space after /OUT:! - - static_lib_extension = ".lib" - static_lib_format = "%s%s" - module_dir_switch = '/module:' - module_include_switch = '/I' - - ar_exe = 'lib.exe' - fc_exe = 'DF' - - if sys.platform=='win32': - from distutils.msvccompiler import MSVCCompiler - - try: - m = MSVCCompiler() - m.initialize() - ar_exe = m.lib - except DistutilsPlatformError: - pass - except AttributeError: - msg = get_exception() - if '_MSVCCompiler__root' in str(msg): - print('Ignoring "%s" (I think it is msvccompiler.py bug)' % (msg)) - else: - raise - except IOError: - e = get_exception() - if not "vcvarsall.bat" in str(e): - print("Unexpected IOError in", __file__) - raise e - except ValueError: - e = get_exception() - if not "path']" in str(e): - print("Unexpected ValueError in", __file__) - raise e - - executables = { - 'version_cmd' : ['<F90>', "/what"], - 'compiler_f77' : [fc_exe, "/f77rtl","/fixed"], - 'compiler_fix' : [fc_exe, "/fixed"], - 'compiler_f90' : [fc_exe], - 'linker_so' : ['<F90>'], - 'archiver' : [ar_exe, "/OUT:"], - 'ranlib' : None - } - - def get_flags(self): - return ['/nologo','/MD','/WX','/iface=(cref,nomixed_str_len_arg)', - '/names:lowercase','/assume:underscore'] - def get_flags_opt(self): - return ['/Ox','/fast','/optimize:5','/unroll:0','/math_library:fast'] - def get_flags_arch(self): - return ['/threads'] - def get_flags_debug(self): - return ['/debug'] - -if 
__name__ == '__main__': - from distutils import log - log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler - compiler = new_fcompiler(compiler='compaq') - compiler.customize() - print(compiler.get_version()) diff --git a/pythonPackages/numpy/numpy/distutils/fcompiler/g95.py b/pythonPackages/numpy/numpy/distutils/fcompiler/g95.py deleted file mode 100755 index 9352a0b7b5..0000000000 --- a/pythonPackages/numpy/numpy/distutils/fcompiler/g95.py +++ /dev/null @@ -1,44 +0,0 @@ -# http://g95.sourceforge.net/ - -from numpy.distutils.fcompiler import FCompiler - -compilers = ['G95FCompiler'] - -class G95FCompiler(FCompiler): - compiler_type = 'g95' - description = 'G95 Fortran Compiler' - -# version_pattern = r'G95 \((GCC (?P<gccversion>[\d.]+)|.*?) \(g95!\) (?P<version>.*)\).*' - # $ g95 --version - # G95 (GCC 4.0.3 (g95!) May 22 2006) - - version_pattern = r'G95 \((GCC (?P<gccversion>[\d.]+)|.*?) \(g95 (?P<version>.*)!\) (?P<date>.*)\).*' - # $ g95 --version - # G95 (GCC 4.0.3 (g95 0.90!) Aug 22 2006) - - executables = { - 'version_cmd' : ["<F90>", "--version"], - 'compiler_f77' : ["g95", "-ffixed-form"], - 'compiler_fix' : ["g95", "-ffixed-form"], - 'compiler_f90' : ["g95"], - 'linker_so' : ["<F90>","-shared"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"] - } - pic_flags = ['-fpic'] - module_dir_switch = '-fmod=' - module_include_switch = '-I' - - def get_flags(self): - return ['-fno-second-underscore'] - def get_flags_opt(self): - return ['-O'] - def get_flags_debug(self): - return ['-g'] - -if __name__ == '__main__': - from distutils import log - log.set_verbosity(2) - compiler = G95FCompiler() - compiler.customize() - print(compiler.get_version()) diff --git a/pythonPackages/numpy/numpy/distutils/fcompiler/gnu.py b/pythonPackages/numpy/numpy/distutils/fcompiler/gnu.py deleted file mode 100755 index f2ae17fadd..0000000000 --- a/pythonPackages/numpy/numpy/distutils/fcompiler/gnu.py +++ /dev/null @@ -1,361 +0,0 @@ -import re -import os -import sys -import warnings -import platform -import tempfile 
-from subprocess import Popen, PIPE, STDOUT - -from numpy.distutils.cpuinfo import cpu -from numpy.distutils.fcompiler import FCompiler -from numpy.distutils.exec_command import exec_command -from numpy.distutils.misc_util import msvc_runtime_library -from numpy.distutils.compat import get_exception - -compilers = ['GnuFCompiler', 'Gnu95FCompiler'] - -TARGET_R = re.compile("Target: ([a-zA-Z0-9_\-]*)") - -# XXX: handle cross compilation -def is_win64(): - return sys.platform == "win32" and platform.architecture()[0] == "64bit" - -if is_win64(): - #_EXTRAFLAGS = ["-fno-leading-underscore"] - _EXTRAFLAGS = [] -else: - _EXTRAFLAGS = [] - -class GnuFCompiler(FCompiler): - compiler_type = 'gnu' - compiler_aliases = ('g77',) - description = 'GNU Fortran 77 compiler' - - def gnu_version_match(self, version_string): - """Handle the different versions of GNU fortran compilers""" - m = re.match(r'GNU Fortran', version_string) - if not m: - return None - m = re.match(r'GNU Fortran\s+95.*?([0-9-.]+)', version_string) - if m: - return ('gfortran', m.group(1)) - m = re.match(r'GNU Fortran.*?([0-9-.]+)', version_string) - if m: - v = m.group(1) - if v.startswith('0') or v.startswith('2') or v.startswith('3'): - # the '0' is for early g77's - return ('g77', v) - else: - # at some point in the 4.x series, the ' 95' was dropped - # from the version string - return ('gfortran', v) - - def version_match(self, version_string): - v = self.gnu_version_match(version_string) - if not v or v[0] != 'g77': - return None - return v[1] - - # 'g77 --version' results - # SunOS: GNU Fortran (GCC 3.2) 3.2 20020814 (release) - # Debian: GNU Fortran (GCC) 3.3.3 20040110 (prerelease) (Debian) - # GNU Fortran (GCC) 3.3.3 (Debian 20040401) - # GNU Fortran 0.5.25 20010319 (prerelease) - # Redhat: GNU Fortran (GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)) 3.2.2 20030222 (Red Hat Linux 3.2.2-5) - # GNU Fortran (GCC) 3.4.2 (mingw-special) - - possible_executables = ['g77', 'f77'] - executables = { - 
'version_cmd' : [None, "--version"], - 'compiler_f77' : [None, "-g", "-Wall -m32", "-fno-second-underscore"], - 'compiler_f90' : None, # Use --fcompiler=gnu95 for f90 codes - 'compiler_fix' : None, - 'linker_so' : [None, "-g", "-Wall"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"], - 'linker_exe' : [None, "-g", "-Wall"] - } - module_dir_switch = None - module_include_switch = None - - # Cygwin: f771: warning: -fPIC ignored for target (all code is - # position independent) - if os.name != 'nt' and sys.platform != 'cygwin': - pic_flags = ['-fPIC'] - - # use -mno-cygwin for g77 when Python is not Cygwin-Python - if sys.platform == 'win32': - for key in ['version_cmd', 'compiler_f77', 'linker_so', 'linker_exe']: - executables[key].append('-mno-cygwin') - - g2c = 'g2c' - - suggested_f90_compiler = 'gnu95' - - #def get_linker_so(self): - # # win32 linking should be handled by standard linker - # # Darwin g77 cannot be used as a linker. - # #if re.match(r'(darwin)', sys.platform): - # # return - # return FCompiler.get_linker_so(self) - - def get_flags_linker_so(self): - opt = self.linker_so[1:] - if sys.platform=='darwin': - target = os.environ.get('MACOSX_DEPLOYMENT_TARGET', None) - # If MACOSX_DEPLOYMENT_TARGET is set, we simply trust the value - # and leave it alone. But, distutils will complain if the - # environment's value is different from the one in the Python - # Makefile used to build Python. We let distutils handle this - # error checking. - if not target: - # If MACOSX_DEPLOYMENT_TARGET is not set in the environment, - # we try to get it first from the Python Makefile and then we - # fall back to setting it to 10.3 to maximize the set of - # versions we can work with. This is a reasonable default - # even when using the official Python dist and those derived - # from it. 
- import distutils.sysconfig as sc - g = {} - filename = sc.get_makefile_filename() - sc.parse_makefile(filename, g) - target = g.get('MACOSX_DEPLOYMENT_TARGET', '10.3') - os.environ['MACOSX_DEPLOYMENT_TARGET'] = target - if target == '10.3': - s = 'Env. variable MACOSX_DEPLOYMENT_TARGET set to 10.3' - warnings.warn(s) - - opt.extend(['-undefined', 'dynamic_lookup', '-bundle']) - else: - opt.append("-shared") - if sys.platform.startswith('sunos'): - # SunOS often has dynamically loaded symbols defined in the - # static library libg2c.a The linker doesn't like this. To - # ignore the problem, use the -mimpure-text flag. It isn't - # the safest thing, but seems to work. 'man gcc' says: - # ".. Instead of using -mimpure-text, you should compile all - # source code with -fpic or -fPIC." - opt.append('-mimpure-text') - return opt - - def get_libgcc_dir(self): - status, output = exec_command(self.compiler_f77 + - ['-print-libgcc-file-name'], - use_tee=0) - if not status: - return os.path.dirname(output) - return None - - def get_library_dirs(self): - opt = [] - if sys.platform[:5] != 'linux': - d = self.get_libgcc_dir() - if d: - # if windows and not cygwin, libg2c lies in a different folder - if sys.platform == 'win32' and not d.startswith('/usr/lib'): - d = os.path.normpath(d) - if not os.path.exists(os.path.join(d, "lib%s.a" % self.g2c)): - d2 = os.path.abspath(os.path.join(d, - '../../../../lib')) - if os.path.exists(os.path.join(d2, "lib%s.a" % self.g2c)): - opt.append(d2) - opt.append(d) - return opt - - def get_libraries(self): - opt = [] - d = self.get_libgcc_dir() - if d is not None: - g2c = self.g2c + '-pic' - f = self.static_lib_format % (g2c, self.static_lib_extension) - if not os.path.isfile(os.path.join(d,f)): - g2c = self.g2c - else: - g2c = self.g2c - - if g2c is not None: - opt.append(g2c) - c_compiler = self.c_compiler - if sys.platform == 'win32' and c_compiler and \ - c_compiler.compiler_type=='msvc': - # the following code is not needed (read: 
breaks) when using MinGW - # in case want to link F77 compiled code with MSVC - opt.append('gcc') - runtime_lib = msvc_runtime_library() - if runtime_lib: - opt.append(runtime_lib) - if sys.platform == 'darwin': - opt.append('cc_dynamic') - return opt - - def get_flags_debug(self): - return ['-g'] - - def get_flags_opt(self): - v = self.get_version() - if v and v<='3.3.3': - # With this compiler version building Fortran BLAS/LAPACK - # with -O3 caused failures in lib.lapack heevr,syevr tests. - opt = ['-O2'] - else: - opt = ['-O3'] - opt.append('-funroll-loops') - return opt - - def get_flags_arch(self): - return [] - -class Gnu95FCompiler(GnuFCompiler): - compiler_type = 'gnu95' - compiler_aliases = ('gfortran',) - description = 'GNU Fortran 95 compiler' - - def version_match(self, version_string): - v = self.gnu_version_match(version_string) - if not v or v[0] != 'gfortran': - return None - return v[1] - - # 'gfortran --version' results: - # XXX is the below right? - # Debian: GNU Fortran 95 (GCC 4.0.3 20051023 (prerelease) (Debian 4.0.2-3)) - # GNU Fortran 95 (GCC) 4.1.2 20061115 (prerelease) (Debian 4.1.1-21) - # OS X: GNU Fortran 95 (GCC) 4.1.0 - # GNU Fortran 95 (GCC) 4.2.0 20060218 (experimental) - # GNU Fortran (GCC) 4.3.0 20070316 (experimental) - - possible_executables = ['gfortran', 'f95'] - executables = { - 'version_cmd' : ["<F90>", "--version"], - 'compiler_f77' : [None, "-Wall -m32", "-ffixed-form", - "-fno-second-underscore"] + _EXTRAFLAGS, - 'compiler_f90' : [None, "-Wall -m32", "-fno-second-underscore"] + _EXTRAFLAGS, - 'compiler_fix' : [None, "-Wall -m32", "-ffixed-form", - "-fno-second-underscore"] + _EXTRAFLAGS, - 'linker_so' : ["<F90>", "-Wall"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"], - 'linker_exe' : [None, "-Wall"] - } - - # use -mno-cygwin flag for g77 when Python is not Cygwin-Python - if sys.platform == 'win32': - for key in ['version_cmd', 'compiler_f77', 'compiler_f90', - 'compiler_fix', 'linker_so', 'linker_exe']: - 
executables[key].append('-mno-cygwin') - - module_dir_switch = '-J' - module_include_switch = '-I' - - g2c = 'gfortran -m32' - - def _universal_flags(self, cmd): - """Return a list of -arch flags for every supported architecture.""" - if not sys.platform == 'darwin': - return [] - arch_flags = [] - for arch in ["ppc", "i686", "x86_64", "ppc64"]: - if _can_target(cmd, arch): - arch_flags.extend(["-arch", arch]) - return arch_flags - - def get_flags(self): - flags = GnuFCompiler.get_flags(self) - arch_flags = self._universal_flags(self.compiler_f90) - if arch_flags: - flags[:0] = arch_flags - return flags - - def get_flags_linker_so(self): - flags = GnuFCompiler.get_flags_linker_so(self) - arch_flags = self._universal_flags(self.linker_so) - if arch_flags: - flags[:0] = arch_flags - return flags - - def get_library_dirs(self): - opt = GnuFCompiler.get_library_dirs(self) - if sys.platform == 'win32': - c_compiler = self.c_compiler - if c_compiler and c_compiler.compiler_type == "msvc": - target = self.get_target() - if target: - d = os.path.normpath(self.get_libgcc_dir()) - root = os.path.join(d, os.pardir, os.pardir, os.pardir, os.pardir) - mingwdir = os.path.normpath(os.path.join(root, target, "lib")) - full = os.path.join(mingwdir, "libmingwex.a") - if os.path.exists(full): - opt.append(mingwdir) - return opt - - def get_libraries(self): - opt = GnuFCompiler.get_libraries(self) - if sys.platform == 'darwin': - opt.remove('cc_dynamic') - if sys.platform == 'win32': - c_compiler = self.c_compiler - if c_compiler and c_compiler.compiler_type == "msvc": - if "gcc" in opt: - i = opt.index("gcc") - opt.insert(i+1, "mingwex") - opt.insert(i+1, "mingw32") - # XXX: fix this mess, does not work for mingw - if is_win64(): - c_compiler = self.c_compiler - if c_compiler and c_compiler.compiler_type == "msvc": - return [] - else: - raise NotImplementedError("Only MS compiler supported with gfortran on win64") - return opt - - def get_target(self): - status, output = 
exec_command(self.compiler_f77 + - ['-v'], - use_tee=0) - if not status: - m = TARGET_R.search(output) - if m: - return m.group(1) - return "" - - def get_flags_opt(self): - if is_win64(): - return ['-O0'] - else: - return GnuFCompiler.get_flags_opt(self) - -def _can_target(cmd, arch): - """Return true if the command supports the -arch flag for the given - architecture.""" - newcmd = cmd[:] - fid, filename = tempfile.mkstemp(suffix=".f") - try: - d = os.path.dirname(filename) - output = os.path.splitext(filename)[0] + ".o" - try: - newcmd.extend(["-arch", arch, "-c", filename]) - p = Popen(newcmd, stderr=STDOUT, stdout=PIPE, cwd=d) - p.communicate() - return p.returncode == 0 - finally: - if os.path.exists(output): - os.remove(output) - finally: - os.remove(filename) - return False - -if __name__ == '__main__': - from distutils import log - log.set_verbosity(2) - compiler = GnuFCompiler() - compiler.customize() - print(compiler.get_version()) - raw_input('Press ENTER to continue...') - try: - compiler = Gnu95FCompiler() - compiler.customize() - print(compiler.get_version()) - except Exception: - msg = get_exception() - print(msg) - raw_input('Press ENTER to continue...') diff --git a/pythonPackages/numpy/numpy/distutils/fcompiler/hpux.py b/pythonPackages/numpy/numpy/distutils/fcompiler/hpux.py deleted file mode 100755 index a3db3493b9..0000000000 --- a/pythonPackages/numpy/numpy/distutils/fcompiler/hpux.py +++ /dev/null @@ -1,43 +0,0 @@ -from numpy.distutils.fcompiler import FCompiler - -compilers = ['HPUXFCompiler'] - -class HPUXFCompiler(FCompiler): - - compiler_type = 'hpux' - description = 'HP Fortran 90 Compiler' - version_pattern = r'HP F90 (?P<version>[^\s*,]*)' - - executables = { - 'version_cmd' : ["<F90>", "+version"], - 'compiler_f77' : ["f90"], - 'compiler_fix' : ["f90"], - 'compiler_f90' : ["f90"], - 'linker_so' : None, - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"] - } - module_dir_switch = None #XXX: fix me - module_include_switch = None #XXX: fix me - 
pic_flags = ['+pic=long'] - def get_flags(self): - return self.pic_flags + ['+ppu', '+DD64'] - def get_flags_opt(self): - return ['-O3'] - def get_libraries(self): - return ['m'] - def get_library_dirs(self): - opt = ['/usr/lib/hpux64'] - return opt - def get_version(self, force=0, ok_status=[256,0,1]): - # XXX status==256 may indicate 'unrecognized option' or - # 'no input file'. So, version_cmd needs more work. - return FCompiler.get_version(self,force,ok_status) - -if __name__ == '__main__': - from distutils import log - log.set_verbosity(10) - from numpy.distutils.fcompiler import new_fcompiler - compiler = new_fcompiler(compiler='hpux') - compiler.customize() - print(compiler.get_version()) diff --git a/pythonPackages/numpy/numpy/distutils/fcompiler/ibm.py b/pythonPackages/numpy/numpy/distutils/fcompiler/ibm.py deleted file mode 100755 index cbbf52bf7d..0000000000 --- a/pythonPackages/numpy/numpy/distutils/fcompiler/ibm.py +++ /dev/null @@ -1,95 +0,0 @@ -import os -import re -import sys - -from numpy.distutils.fcompiler import FCompiler -from numpy.distutils.exec_command import exec_command, find_executable -from numpy.distutils.misc_util import make_temp_file -from distutils import log - -compilers = ['IBMFCompiler'] - -class IBMFCompiler(FCompiler): - compiler_type = 'ibm' - description = 'IBM XL Fortran Compiler' - version_pattern = r'(xlf\(1\)\s*|)IBM XL Fortran ((Advanced Edition |)Version |Enterprise Edition V)(?P<version>[^\s*]*)' - #IBM XL Fortran Enterprise Edition V10.1 for AIX \nVersion: 10.01.0000.0004 - - executables = { - 'version_cmd' : ["<F90>", "-qversion"], - 'compiler_f77' : ["xlf"], - 'compiler_fix' : ["xlf90", "-qfixed"], - 'compiler_f90' : ["xlf90"], - 'linker_so' : ["xlf95"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"] - } - - def get_version(self,*args,**kwds): - version = FCompiler.get_version(self,*args,**kwds) - - if version is None and sys.platform.startswith('aix'): - # use lslpp to find out xlf version - lslpp = 
find_executable('lslpp') - xlf = find_executable('xlf') - if os.path.exists(xlf) and os.path.exists(lslpp): - s,o = exec_command(lslpp + ' -Lc xlfcmp') - m = re.search('xlfcmp:(?P<version>\d+([.]\d+)+)', o) - if m: version = m.group('version') - - xlf_dir = '/etc/opt/ibmcmp/xlf' - if version is None and os.path.isdir(xlf_dir): - # linux: - # If the output of xlf does not contain version info - # (that's the case with xlf 8.1, for instance) then - # let's try another method: - l = os.listdir(xlf_dir) - l.sort() - l.reverse() - l = [d for d in l if os.path.isfile(os.path.join(xlf_dir,d,'xlf.cfg'))] - if l: - from distutils.version import LooseVersion - self.version = version = LooseVersion(l[0]) - return version - - def get_flags(self): - return ['-qextname'] - - def get_flags_debug(self): - return ['-g'] - - def get_flags_linker_so(self): - opt = [] - if sys.platform=='darwin': - opt.append('-Wl,-bundle,-flat_namespace,-undefined,suppress') - else: - opt.append('-bshared') - version = self.get_version(ok_status=[0,40]) - if version is not None: - if sys.platform.startswith('aix'): - xlf_cfg = '/etc/xlf.cfg' - else: - xlf_cfg = '/etc/opt/ibmcmp/xlf/%s/xlf.cfg' % version - fo, new_cfg = make_temp_file(suffix='_xlf.cfg') - log.info('Creating '+new_cfg) - fi = open(xlf_cfg,'r') - crt1_match = re.compile(r'\s*crt\s*[=]\s*(?P<path>.*)/crt1.o').match - for line in fi.readlines(): - m = crt1_match(line) - if m: - fo.write('crt = %s/bundle1.o\n' % (m.group('path'))) - else: - fo.write(line) - fi.close() - fo.close() - opt.append('-F'+new_cfg) - return opt - - def get_flags_opt(self): - return ['-O5'] - -if __name__ == '__main__': - log.set_verbosity(2) - compiler = IBMFCompiler() - compiler.customize() - print(compiler.get_version()) diff --git a/pythonPackages/numpy/numpy/distutils/fcompiler/intel.py b/pythonPackages/numpy/numpy/distutils/fcompiler/intel.py deleted file mode 100755 index d7effb01e0..0000000000 --- a/pythonPackages/numpy/numpy/distutils/fcompiler/intel.py +++ /dev/null @@ 
-1,252 +0,0 @@ -# http://developer.intel.com/software/products/compilers/flin/ - -import sys - -from numpy.distutils.cpuinfo import cpu -from numpy.distutils.ccompiler import simple_version_match -from numpy.distutils.fcompiler import FCompiler, dummy_fortran_file - -compilers = ['IntelFCompiler', 'IntelVisualFCompiler', - 'IntelItaniumFCompiler', 'IntelItaniumVisualFCompiler', - 'IntelEM64VisualFCompiler', 'IntelEM64TFCompiler'] - -def intel_version_match(type): - # Match against the important stuff in the version string - return simple_version_match(start=r'Intel.*?Fortran.*?(?:%s).*?Version' % (type,)) - -class BaseIntelFCompiler(FCompiler): - def update_executables(self): - f = dummy_fortran_file() - self.executables['version_cmd'] = ['', '-FI', '-V', '-c', - f + '.f', '-o', f + '.o'] - -class IntelFCompiler(BaseIntelFCompiler): - - compiler_type = 'intel' - compiler_aliases = ('ifort',) - description = 'Intel Fortran Compiler for 32-bit apps' - version_match = intel_version_match('32-bit|IA-32') - - possible_executables = ['ifort', 'ifc'] - - executables = { - 'version_cmd' : None, # set by update_executables - 'compiler_f77' : [None, "-72", "-w90", "-w95"], - 'compiler_f90' : [None], - 'compiler_fix' : [None, "-FI"], - 'linker_so' : ["", "-shared"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"] - } - - pic_flags = ['-fPIC'] - module_dir_switch = '-module ' # Don't remove ending space! - module_include_switch = '-I' - - def get_flags(self): - v = self.get_version() - if v >= '10.0': - # Use -fPIC instead of -KPIC. 
- pic_flags = ['-fPIC'] - else: - pic_flags = ['-KPIC'] - opt = pic_flags + ["-cm"] - return opt - - def get_flags_free(self): - return ["-FR"] - - def get_flags_opt(self): - return ['-O3','-unroll'] - - def get_flags_arch(self): - v = self.get_version() - opt = [] - if cpu.has_fdiv_bug(): - opt.append('-fdiv_check') - if cpu.has_f00f_bug(): - opt.append('-0f_check') - if cpu.is_PentiumPro() or cpu.is_PentiumII() or cpu.is_PentiumIII(): - opt.extend(['-tpp6']) - elif cpu.is_PentiumM(): - opt.extend(['-tpp7','-xB']) - elif cpu.is_Pentium(): - opt.append('-tpp5') - elif cpu.is_PentiumIV() or cpu.is_Xeon(): - opt.extend(['-tpp7','-xW']) - if v and v <= '7.1': - if cpu.has_mmx() and (cpu.is_PentiumII() or cpu.is_PentiumIII()): - opt.append('-xM') - elif v and v >= '8.0': - if cpu.is_PentiumIII(): - opt.append('-xK') - if cpu.has_sse3(): - opt.extend(['-xP']) - elif cpu.is_PentiumIV(): - opt.append('-xW') - if cpu.has_sse2(): - opt.append('-xN') - elif cpu.is_PentiumM(): - opt.extend(['-xB']) - if (cpu.is_Xeon() or cpu.is_Core2() or cpu.is_Core2Extreme()) and cpu.getNCPUs()==2: - opt.extend(['-xT']) - if cpu.has_sse3() and (cpu.is_PentiumIV() or cpu.is_CoreDuo() or cpu.is_CoreSolo()): - opt.extend(['-xP']) - - if cpu.has_sse2(): - opt.append('-arch SSE2') - elif cpu.has_sse(): - opt.append('-arch SSE') - return opt - - def get_flags_linker_so(self): - opt = FCompiler.get_flags_linker_so(self) - v = self.get_version() - if v and v >= '8.0': - opt.append('-nofor_main') - if sys.platform == 'darwin': - # Here, it's -dynamiclib - try: - idx = opt.index('-shared') - opt.remove('-shared') - except ValueError: - idx = 0 - opt[idx:idx] = ['-dynamiclib', '-Wl,-undefined,dynamic_lookup', '-Wl,-framework,Python'] - return opt - -class IntelItaniumFCompiler(IntelFCompiler): - compiler_type = 'intele' - compiler_aliases = () - description = 'Intel Fortran Compiler for Itanium apps' - - version_match = intel_version_match('Itanium|IA-64') - - possible_executables = ['ifort', 'efort', 
'efc'] - - executables = { - 'version_cmd' : None, - 'compiler_f77' : [None, "-FI", "-w90", "-w95"], - 'compiler_fix' : [None, "-FI"], - 'compiler_f90' : [None], - 'linker_so' : ['', "-shared"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"] - } - -class IntelEM64TFCompiler(IntelFCompiler): - compiler_type = 'intelem' - compiler_aliases = () - description = 'Intel Fortran Compiler for EM64T-based apps' - - version_match = intel_version_match('EM64T-based|Intel\\(R\\) 64') - - possible_executables = ['ifort', 'efort', 'efc'] - - executables = { - 'version_cmd' : None, - 'compiler_f77' : [None, "-FI", "-w90", "-w95"], - 'compiler_fix' : [None, "-FI"], - 'compiler_f90' : [None], - 'linker_so' : ['', "-shared"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"] - } - - def get_flags_arch(self): - opt = [] - if cpu.is_PentiumIV() or cpu.is_Xeon(): - opt.extend(['-tpp7', '-xW']) - return opt - -# Is there no difference in the version string between the above compilers -# and the Visual compilers? - -class IntelVisualFCompiler(BaseIntelFCompiler): - compiler_type = 'intelv' - description = 'Intel Visual Fortran Compiler for 32-bit apps' - version_match = intel_version_match('32-bit|IA-32') - - def update_executables(self): - f = dummy_fortran_file() - self.executables['version_cmd'] = ['', '/FI', '/c', - f + '.f', '/o', f + '.o'] - - ar_exe = 'lib.exe' - possible_executables = ['ifort', 'ifl'] - - executables = { - 'version_cmd' : None, - 'compiler_f77' : [None,"-FI","-w90","-w95"], - 'compiler_fix' : [None,"-FI","-4L72","-w"], - 'compiler_f90' : [None], - 'linker_so' : ['', "-shared"], - 'archiver' : [ar_exe, "/verbose", "/OUT:"], - 'ranlib' : None - } - - compile_switch = '/c ' - object_switch = '/Fo' #No space after /Fo! - library_switch = '/OUT:' #No space after /OUT:! 
- module_dir_switch = '/module:' #No space after /module: - module_include_switch = '/I' - - def get_flags(self): - opt = ['/nologo','/MD','/nbs','/Qlowercase','/us'] - return opt - - def get_flags_free(self): - return ["-FR"] - - def get_flags_debug(self): - return ['/4Yb','/d2'] - - def get_flags_opt(self): - return ['/O3','/Qip'] - - def get_flags_arch(self): - opt = [] - if cpu.is_PentiumPro() or cpu.is_PentiumII(): - opt.extend(['/G6','/Qaxi']) - elif cpu.is_PentiumIII(): - opt.extend(['/G6','/QaxK']) - elif cpu.is_Pentium(): - opt.append('/G5') - elif cpu.is_PentiumIV(): - opt.extend(['/G7','/QaxW']) - if cpu.has_mmx(): - opt.append('/QaxM') - return opt - -class IntelItaniumVisualFCompiler(IntelVisualFCompiler): - compiler_type = 'intelev' - description = 'Intel Visual Fortran Compiler for Itanium apps' - - version_match = intel_version_match('Itanium') - - possible_executables = ['efl'] # XXX this is a wild guess - ar_exe = IntelVisualFCompiler.ar_exe - - executables = { - 'version_cmd' : None, - 'compiler_f77' : [None,"-FI","-w90","-w95"], - 'compiler_fix' : [None,"-FI","-4L72","-w"], - 'compiler_f90' : [None], - 'linker_so' : ['',"-shared"], - 'archiver' : [ar_exe, "/verbose", "/OUT:"], - 'ranlib' : None - } - -class IntelEM64VisualFCompiler(IntelVisualFCompiler): - compiler_type = 'intelvem' - description = 'Intel Visual Fortran Compiler for 64-bit apps' - - version_match = simple_version_match(start='Intel\(R\).*?64,') - - -if __name__ == '__main__': - from distutils import log - log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler - compiler = new_fcompiler(compiler='intel') - compiler.customize() - print(compiler.get_version()) diff --git a/pythonPackages/numpy/numpy/distutils/fcompiler/lahey.py b/pythonPackages/numpy/numpy/distutils/fcompiler/lahey.py deleted file mode 100755 index cf29506241..0000000000 --- a/pythonPackages/numpy/numpy/distutils/fcompiler/lahey.py +++ /dev/null @@ -1,47 +0,0 @@ -import os - -from 
numpy.distutils.fcompiler import FCompiler - -compilers = ['LaheyFCompiler'] - -class LaheyFCompiler(FCompiler): - - compiler_type = 'lahey' - description = 'Lahey/Fujitsu Fortran 95 Compiler' - version_pattern = r'Lahey/Fujitsu Fortran 95 Compiler Release (?P<version>[^\s*]*)' - - executables = { - 'version_cmd' : ["<F90>", "--version"], - 'compiler_f77' : ["lf95", "--fix"], - 'compiler_fix' : ["lf95", "--fix"], - 'compiler_f90' : ["lf95"], - 'linker_so' : ["lf95","-shared"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"] - } - - module_dir_switch = None #XXX Fix me - module_include_switch = None #XXX Fix me - - def get_flags_opt(self): - return ['-O'] - def get_flags_debug(self): - return ['-g','--chk','--chkglobal'] - def get_library_dirs(self): - opt = [] - d = os.environ.get('LAHEY') - if d: - opt.append(os.path.join(d,'lib')) - return opt - def get_libraries(self): - opt = [] - opt.extend(['fj9f6', 'fj9i6', 'fj9ipp', 'fj9e6']) - return opt - -if __name__ == '__main__': - from distutils import log - log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler - compiler = new_fcompiler(compiler='lahey') - compiler.customize() - print(compiler.get_version()) diff --git a/pythonPackages/numpy/numpy/distutils/fcompiler/mips.py b/pythonPackages/numpy/numpy/distutils/fcompiler/mips.py deleted file mode 100755 index 3c2e9ac84b..0000000000 --- a/pythonPackages/numpy/numpy/distutils/fcompiler/mips.py +++ /dev/null @@ -1,56 +0,0 @@ -from numpy.distutils.cpuinfo import cpu -from numpy.distutils.fcompiler import FCompiler - -compilers = ['MIPSFCompiler'] - -class MIPSFCompiler(FCompiler): - - compiler_type = 'mips' - description = 'MIPSpro Fortran Compiler' - version_pattern = r'MIPSpro Compilers: Version (?P<version>[^\s*,]*)' - - executables = { - 'version_cmd' : ["<F90>", "-version"], - 'compiler_f77' : ["f77", "-f77"], - 'compiler_fix' : ["f90", "-fixedform"], - 'compiler_f90' : ["f90"], - 'linker_so' : ["f90","-shared"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : None - } 
- module_dir_switch = None #XXX: fix me - module_include_switch = None #XXX: fix me - pic_flags = ['-KPIC'] - - def get_flags(self): - return self.pic_flags + ['-n32'] - def get_flags_opt(self): - return ['-O3'] - def get_flags_arch(self): - opt = [] - for a in '19 20 21 22_4k 22_5k 24 25 26 27 28 30 32_5k 32_10k'.split(): - if getattr(cpu,'is_IP%s'%a)(): - opt.append('-TARG:platform=IP%s' % a) - break - return opt - def get_flags_arch_f77(self): - r = None - if cpu.is_r10000(): r = 10000 - elif cpu.is_r12000(): r = 12000 - elif cpu.is_r8000(): r = 8000 - elif cpu.is_r5000(): r = 5000 - elif cpu.is_r4000(): r = 4000 - if r is not None: - return ['r%s' % (r)] - return [] - def get_flags_arch_f90(self): - r = self.get_flags_arch_f77() - if r: - r[0] = '-' + r[0] - return r - -if __name__ == '__main__': - from numpy.distutils.fcompiler import new_fcompiler - compiler = new_fcompiler(compiler='mips') - compiler.customize() - print(compiler.get_version()) diff --git a/pythonPackages/numpy/numpy/distutils/fcompiler/nag.py b/pythonPackages/numpy/numpy/distutils/fcompiler/nag.py deleted file mode 100755 index 4aca48450f..0000000000 --- a/pythonPackages/numpy/numpy/distutils/fcompiler/nag.py +++ /dev/null @@ -1,43 +0,0 @@ -import sys -from numpy.distutils.fcompiler import FCompiler - -compilers = ['NAGFCompiler'] - -class NAGFCompiler(FCompiler): - - compiler_type = 'nag' - description = 'NAGWare Fortran 95 Compiler' - version_pattern = r'NAGWare Fortran 95 compiler Release (?P<version>[^\s]*)' - - executables = { - 'version_cmd' : ["<F90>", "-V"], - 'compiler_f77' : ["f95", "-fixed"], - 'compiler_fix' : ["f95", "-fixed"], - 'compiler_f90' : ["f95"], - 'linker_so' : ["<F90>"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"] - } - - def get_flags_linker_so(self): - if sys.platform=='darwin': - return ['-unsharedf95','-Wl,-bundle,-flat_namespace,-undefined,suppress'] - return ["-Wl,-shared"] - def get_flags_opt(self): - return ['-O4'] - def get_flags_arch(self): - version = 
self.get_version() - if version and version < '5.1': - return ['-target=native'] - else: - return [''] - def get_flags_debug(self): - return ['-g','-gline','-g90','-nan','-C'] - -if __name__ == '__main__': - from distutils import log - log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler - compiler = new_fcompiler(compiler='nag') - compiler.customize() - print(compiler.get_version()) diff --git a/pythonPackages/numpy/numpy/distutils/fcompiler/none.py b/pythonPackages/numpy/numpy/distutils/fcompiler/none.py deleted file mode 100755 index 526b42d497..0000000000 --- a/pythonPackages/numpy/numpy/distutils/fcompiler/none.py +++ /dev/null @@ -1,30 +0,0 @@ - -from numpy.distutils.fcompiler import FCompiler - -compilers = ['NoneFCompiler'] - -class NoneFCompiler(FCompiler): - - compiler_type = 'none' - description = 'Fake Fortran compiler' - - executables = {'compiler_f77' : None, - 'compiler_f90' : None, - 'compiler_fix' : None, - 'linker_so' : None, - 'linker_exe' : None, - 'archiver' : None, - 'ranlib' : None, - 'version_cmd' : None, - } - - def find_executables(self): - pass - - -if __name__ == '__main__': - from distutils import log - log.set_verbosity(2) - compiler = NoneFCompiler() - compiler.customize() - print(compiler.get_version()) diff --git a/pythonPackages/numpy/numpy/distutils/fcompiler/pg.py b/pythonPackages/numpy/numpy/distutils/fcompiler/pg.py deleted file mode 100755 index 60c7f4e4b7..0000000000 --- a/pythonPackages/numpy/numpy/distutils/fcompiler/pg.py +++ /dev/null @@ -1,41 +0,0 @@ - -# http://www.pgroup.com - -from numpy.distutils.fcompiler import FCompiler - -compilers = ['PGroupFCompiler'] - -class PGroupFCompiler(FCompiler): - - compiler_type = 'pg' - description = 'Portland Group Fortran Compiler' - version_pattern = r'\s*pg(f77|f90|hpf) (?P[\d.-]+).*' - - executables = { - 'version_cmd' : ["", "-V 2>/dev/null"], - 'compiler_f77' : ["pgf77"], - 'compiler_fix' : ["pgf90", "-Mfixed"], - 'compiler_f90' : ["pgf90"], - 'linker_so' 
: ["pgf90","-shared","-fpic"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"] - } - pic_flags = ['-fpic'] - module_dir_switch = '-module ' - module_include_switch = '-I' - - def get_flags(self): - opt = ['-Minform=inform','-Mnosecond_underscore'] - return self.pic_flags + opt - def get_flags_opt(self): - return ['-fast'] - def get_flags_debug(self): - return ['-g'] - -if __name__ == '__main__': - from distutils import log - log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler - compiler = new_fcompiler(compiler='pg') - compiler.customize() - print(compiler.get_version()) diff --git a/pythonPackages/numpy/numpy/distutils/fcompiler/sun.py b/pythonPackages/numpy/numpy/distutils/fcompiler/sun.py deleted file mode 100755 index 85e2c33772..0000000000 --- a/pythonPackages/numpy/numpy/distutils/fcompiler/sun.py +++ /dev/null @@ -1,50 +0,0 @@ -from numpy.distutils.ccompiler import simple_version_match -from numpy.distutils.fcompiler import FCompiler - -compilers = ['SunFCompiler'] - -class SunFCompiler(FCompiler): - - compiler_type = 'sun' - description = 'Sun or Forte Fortran 95 Compiler' - # ex: - # f90: Sun WorkShop 6 update 2 Fortran 95 6.2 Patch 111690-10 2003/08/28 - version_match = simple_version_match( - start=r'f9[05]: (Sun|Forte|WorkShop).*Fortran 95') - - executables = { - 'version_cmd' : ["", "-V"], - 'compiler_f77' : ["f90"], - 'compiler_fix' : ["f90", "-fixed"], - 'compiler_f90' : ["f90"], - 'linker_so' : ["","-Bdynamic","-G"], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"] - } - module_dir_switch = '-moddir=' - module_include_switch = '-M' - pic_flags = ['-xcode=pic32'] - - def get_flags_f77(self): - ret = ["-ftrap=%none"] - if (self.get_version() or '') >= '7': - ret.append("-f77") - else: - ret.append("-fixed") - return ret - def get_opt(self): - return ['-fast','-dalign'] - def get_arch(self): - return ['-xtarget=generic'] - def get_libraries(self): - opt = [] - opt.extend(['fsu','sunmath','mvec']) - return opt - -if 
__name__ == '__main__': - from distutils import log - log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler - compiler = new_fcompiler(compiler='sun') - compiler.customize() - print(compiler.get_version()) diff --git a/pythonPackages/numpy/numpy/distutils/fcompiler/vast.py b/pythonPackages/numpy/numpy/distutils/fcompiler/vast.py deleted file mode 100755 index a7b99ce73e..0000000000 --- a/pythonPackages/numpy/numpy/distutils/fcompiler/vast.py +++ /dev/null @@ -1,54 +0,0 @@ -import os - -from numpy.distutils.fcompiler.gnu import GnuFCompiler - -compilers = ['VastFCompiler'] - -class VastFCompiler(GnuFCompiler): - compiler_type = 'vast' - compiler_aliases = () - description = 'Pacific-Sierra Research Fortran 90 Compiler' - version_pattern = r'\s*Pacific-Sierra Research vf90 '\ - '(Personal|Professional)\s+(?P[^\s]*)' - - # VAST f90 does not support -o with -c. So, object files are created - # to the current directory and then moved to build directory - object_switch = ' && function _mvfile { mv -v `basename $1` $1 ; } && _mvfile ' - - executables = { - 'version_cmd' : ["vf90", "-v"], - 'compiler_f77' : ["g77"], - 'compiler_fix' : ["f90", "-Wv,-ya"], - 'compiler_f90' : ["f90"], - 'linker_so' : [""], - 'archiver' : ["ar", "-cr"], - 'ranlib' : ["ranlib"] - } - module_dir_switch = None #XXX Fix me - module_include_switch = None #XXX Fix me - - def find_executables(self): - pass - - def get_version_cmd(self): - f90 = self.compiler_f90[0] - d, b = os.path.split(f90) - vf90 = os.path.join(d, 'v'+b) - return vf90 - - def get_flags_arch(self): - vast_version = self.get_version() - gnu = GnuFCompiler() - gnu.customize(None) - self.version = gnu.get_version() - opt = GnuFCompiler.get_flags_arch(self) - self.version = vast_version - return opt - -if __name__ == '__main__': - from distutils import log - log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler - compiler = new_fcompiler(compiler='vast') - compiler.customize() - 
print(compiler.get_version()) diff --git a/pythonPackages/numpy/numpy/distutils/from_template.py b/pythonPackages/numpy/numpy/distutils/from_template.py deleted file mode 100755 index 413f0721df..0000000000 --- a/pythonPackages/numpy/numpy/distutils/from_template.py +++ /dev/null @@ -1,256 +0,0 @@ -#!/usr/bin/python -""" - -process_file(filename) - - takes templated file .xxx.src and produces .xxx file where .xxx - is .pyf .f90 or .f using the following template rules: - - '<..>' denotes a template. - - All function and subroutine blocks in a source file with names that - contain '<..>' will be replicated according to the rules in '<..>'. - - The number of comma-separeted words in '<..>' will determine the number of - replicates. - - '<..>' may have two different forms, named and short. For example, - - named: - where anywhere inside a block '
<p>' will be replaced with - 'd', 's', 'z', and 'c' for each replicate of the block. - - <_c> is already defined: <_c=s,d,c,z> - <_t> is already defined: <_t=real,double precision,complex,double complex> - - short: - <s,d,c,z>, a short form of the named, useful when no <p>
appears inside - a block. - - In general, '<..>' contains a comma-separated list of arbitrary - expressions. If these expressions must contain a comma|leftarrow|rightarrow, - then prepend the comma|leftarrow|rightarrow with a backslash. - - If an expression matches '\\<index>' then it will be replaced - by the <index>-th expression. - - Note that all '<..>' forms in a block must have the same number of - comma-separated entries. - - Predefined named template rules: - <prefix=s,d,c,z> - <ftype=real,double precision,complex,double complex> - <ftypereal=real,double precision,\\0,\\1> - <ctype=float,double,complex_float,complex_double> - <ctypereal=float,double,\\0,\\1> - -""" - -__all__ = ['process_str','process_file'] - -import os -import sys -import re - -routine_start_re = re.compile(r'(\n|\A)((     (\$|\*))|)\s*(subroutine|function)\b',re.I) -routine_end_re = re.compile(r'\n\s*end\s*(subroutine|function)\b.*(\n|\Z)',re.I) -function_start_re = re.compile(r'\n     (\$|\*)\s*function\b',re.I) - -def parse_structure(astr): - """ Return a list of tuples for each function or subroutine each - tuple is the start and end of a subroutine or function to be - expanded. - """ - - spanlist = [] - ind = 0 - while 1: - m = routine_start_re.search(astr,ind) - if m is None: - break - start = m.start() - if function_start_re.match(astr,start,m.end()): - while 1: - i = astr.rfind('\n',ind,start) - if i==-1: - break - start = i - if astr[i:i+7]!='\n     $': - break - start += 1 - m = routine_end_re.search(astr,m.end()) - ind = end = m and m.end()-1 or len(astr) - spanlist.append((start,end)) - return spanlist - -template_re = re.compile(r"<\s*(\w[\w\d]*)\s*>") -named_re = re.compile(r"<\s*(\w[\w\d]*)\s*=\s*(.*?)\s*>") -list_re = re.compile(r"<\s*((.*?))\s*>") - -def find_repl_patterns(astr): - reps = named_re.findall(astr) - names = {} - for rep in reps: - name = rep[0].strip() or unique_key(names) - repl = rep[1].replace('\,','@comma@') - thelist = conv(repl) - names[name] = thelist - return names - -item_re = re.compile(r"\A\\(?P<index>\d+)\Z") -def conv(astr): - b = astr.split(',') - l = [x.strip() for x in b] - for i in range(len(l)): - m = item_re.match(l[i]) - if m: - j = int(m.group('index')) - l[i] = 
l[j] - return ','.join(l) - -def unique_key(adict): - """ Obtain a unique key given a dictionary.""" - allkeys = adict.keys() - done = False - n = 1 - while not done: - newkey = '__l%s' % (n) - if newkey in allkeys: - n += 1 - else: - done = True - return newkey - - -template_name_re = re.compile(r'\A\s*(\w[\w\d]*)\s*\Z') -def expand_sub(substr,names): - substr = substr.replace('\>','@rightarrow@') - substr = substr.replace('\<','@leftarrow@') - lnames = find_repl_patterns(substr) - substr = named_re.sub(r"<\1>",substr) # get rid of definition templates - - def listrepl(mobj): - thelist = conv(mobj.group(1).replace('\,','@comma@')) - if template_name_re.match(thelist): - return "<%s>" % (thelist) - name = None - for key in lnames.keys(): # see if list is already in dictionary - if lnames[key] == thelist: - name = key - if name is None: # this list is not in the dictionary yet - name = unique_key(lnames) - lnames[name] = thelist - return "<%s>" % name - - substr = list_re.sub(listrepl, substr) # convert all lists to named templates - # newnames are constructed as needed - - numsubs = None - base_rule = None - rules = {} - for r in template_re.findall(substr): - if r not in rules: - thelist = lnames.get(r,names.get(r,None)) - if thelist is None: - raise ValueError('No replicates found for <%s>' % (r)) - if r not in names and not thelist.startswith('_'): - names[r] = thelist - rule = [i.replace('@comma@',',') for i in thelist.split(',')] - num = len(rule) - - if numsubs is None: - numsubs = num - rules[r] = rule - base_rule = r - elif num == numsubs: - rules[r] = rule - else: - print("Mismatch in number of replacements (base <%s=%s>)"\ - " for <%s=%s>. Ignoring." 
% (base_rule, - ','.join(rules[base_rule]), - r,thelist)) - if not rules: - return substr - - def namerepl(mobj): - name = mobj.group(1) - return rules.get(name,(k+1)*[name])[k] - - newstr = '' - for k in range(numsubs): - newstr += template_re.sub(namerepl, substr) + '\n\n' - - newstr = newstr.replace('@rightarrow@','>') - newstr = newstr.replace('@leftarrow@','<') - return newstr - -def process_str(allstr): - newstr = allstr - writestr = '' #_head # using _head will break free-format files - - struct = parse_structure(newstr) - - oldend = 0 - names = {} - names.update(_special_names) - for sub in struct: - writestr += newstr[oldend:sub[0]] - names.update(find_repl_patterns(newstr[oldend:sub[0]])) - writestr += expand_sub(newstr[sub[0]:sub[1]],names) - oldend = sub[1] - writestr += newstr[oldend:] - - return writestr - -include_src_re = re.compile(r"(\n|\A)\s*include\s*['\"](?P<name>[\w\d./\\]+[.]src)['\"]",re.I) - -def resolve_includes(source): - d = os.path.dirname(source) - fid = open(source) - lines = [] - for line in fid.readlines(): - m = include_src_re.match(line) - if m: - fn = m.group('name') - if not os.path.isabs(fn): - fn = os.path.join(d,fn) - if os.path.isfile(fn): - print ('Including file',fn) - lines.extend(resolve_includes(fn)) - else: - lines.append(line) - else: - lines.append(line) - fid.close() - return lines - -def process_file(source): - lines = resolve_includes(source) - return process_str(''.join(lines)) - -_special_names = find_repl_patterns(''' -<_c=s,d,c,z> -<_t=real,double precision,complex,double complex> -<prefix=s,d,c,z> -<ftype=real,double precision,complex,double complex> -<ftypereal=real,double precision,\\0,\\1> -<ctype=float,double,complex_float,complex_double> -<ctypereal=float,double,\\0,\\1> -''') - -if __name__ == "__main__": - - try: - file = sys.argv[1] - except IndexError: - fid = sys.stdin - outfile = sys.stdout - else: - fid = open(file,'r') - (base, ext) = os.path.splitext(file) - newname = base - outfile = open(newname,'w') - - allstr = fid.read() - writestr = process_str(allstr) - outfile.write(writestr) diff --git a/pythonPackages/numpy/numpy/distutils/info.py b/pythonPackages/numpy/numpy/distutils/info.py 
deleted file mode 100755 index 3d27a8092b..0000000000 --- a/pythonPackages/numpy/numpy/distutils/info.py +++ /dev/null @@ -1,5 +0,0 @@ -""" -Enhanced distutils with Fortran compilers support and more. -""" - -postpone_import = True diff --git a/pythonPackages/numpy/numpy/distutils/intelccompiler.py b/pythonPackages/numpy/numpy/distutils/intelccompiler.py deleted file mode 100755 index e03c5beba7..0000000000 --- a/pythonPackages/numpy/numpy/distutils/intelccompiler.py +++ /dev/null @@ -1,29 +0,0 @@ - -from distutils.unixccompiler import UnixCCompiler -from numpy.distutils.exec_command import find_executable - -class IntelCCompiler(UnixCCompiler): - - """ A modified Intel compiler compatible with an gcc built Python. - """ - - compiler_type = 'intel' - cc_exe = 'icc' - - def __init__ (self, verbose=0, dry_run=0, force=0): - UnixCCompiler.__init__ (self, verbose,dry_run, force) - compiler = self.cc_exe - self.set_executables(compiler=compiler, - compiler_so=compiler, - compiler_cxx=compiler, - linker_exe=compiler, - linker_so=compiler + ' -shared') - -class IntelItaniumCCompiler(IntelCCompiler): - compiler_type = 'intele' - - # On Itanium, the Intel Compiler used to be called ecc, let's search for - # it (now it's also icc, so ecc is last in the search). 
- for cc_exe in map(find_executable,['icc','ecc']): - if cc_exe: - break diff --git a/pythonPackages/numpy/numpy/distutils/interactive.py b/pythonPackages/numpy/numpy/distutils/interactive.py deleted file mode 100755 index e3dba04eb4..0000000000 --- a/pythonPackages/numpy/numpy/distutils/interactive.py +++ /dev/null @@ -1,188 +0,0 @@ -import os -import sys -from pprint import pformat - -__all__ = ['interactive_sys_argv'] - -def show_information(*args): - print 'Python',sys.version - for a in ['platform','prefix','byteorder','path']: - print 'sys.%s = %s' % (a,pformat(getattr(sys,a))) - for a in ['name']: - print 'os.%s = %s' % (a,pformat(getattr(os,a))) - if hasattr(os,'uname'): - print 'system,node,release,version,machine = ',os.uname() - -def show_environ(*args): - for k,i in os.environ.items(): - print ' %s = %s' % (k, i) - -def show_fortran_compilers(*args): - from fcompiler import show_fcompilers - show_fcompilers() - -def show_compilers(*args): - from distutils.ccompiler import show_compilers - show_compilers() - -def show_tasks(argv,ccompiler,fcompiler): - print """\ - -Tasks: - i - Show python/platform/machine information - ie - Show environment information - c - Show C compilers information - c - Set C compiler (current:%s) - f - Show Fortran compilers information - f - Set Fortran compiler (current:%s) - e - Edit proposed sys.argv[1:]. - -Task aliases: - 0 - Configure - 1 - Build - 2 - Install - 2 - Install with prefix. 
- 3 - Inplace build - 4 - Source distribution - 5 - Binary distribution - -Proposed sys.argv = %s - """ % (ccompiler, fcompiler, argv) - - -import shlex - -def edit_argv(*args): - argv = args[0] - readline = args[1] - if readline is not None: - readline.add_history(' '.join(argv[1:])) - try: - s = raw_input('Edit argv [UpArrow to retrive %r]: ' % (' '.join(argv[1:]))) - except EOFError: - return - if s: - argv[1:] = shlex.split(s) - return - -def interactive_sys_argv(argv): - print '='*72 - print 'Starting interactive session' - print '-'*72 - - readline = None - try: - try: - import readline - except ImportError: - pass - else: - import tempfile - tdir = tempfile.gettempdir() - username = os.environ.get('USER',os.environ.get('USERNAME','UNKNOWN')) - histfile = os.path.join(tdir,".pyhist_interactive_setup-" + username) - try: - try: readline.read_history_file(histfile) - except IOError: pass - import atexit - atexit.register(readline.write_history_file, histfile) - except AttributeError: pass - except Exception, msg: - print msg - - task_dict = {'i':show_information, - 'ie':show_environ, - 'f':show_fortran_compilers, - 'c':show_compilers, - 'e':edit_argv, - } - c_compiler_name = None - f_compiler_name = None - - while 1: - show_tasks(argv,c_compiler_name, f_compiler_name) - try: - task = raw_input('Choose a task (^D to quit, Enter to continue with setup): ') - except EOFError: - print - task = 'quit' - ltask = task.lower() - if task=='': break - if ltask=='quit': sys.exit() - task_func = task_dict.get(ltask,None) - if task_func is None: - if ltask[0]=='c': - c_compiler_name = task[1:] - if c_compiler_name=='none': - c_compiler_name = None - continue - if ltask[0]=='f': - f_compiler_name = task[1:] - if f_compiler_name=='none': - f_compiler_name = None - continue - if task[0]=='2' and len(task)>1: - prefix = task[1:] - task = task[0] - else: - prefix = None - if task == '4': - argv[1:] = ['sdist','-f'] - continue - elif task in '01235': - cmd_opts = 
{'config':[],'config_fc':[], - 'build_ext':[],'build_src':[], - 'build_clib':[]} - if c_compiler_name is not None: - c = '--compiler=%s' % (c_compiler_name) - cmd_opts['config'].append(c) - if task != '0': - cmd_opts['build_ext'].append(c) - cmd_opts['build_clib'].append(c) - if f_compiler_name is not None: - c = '--fcompiler=%s' % (f_compiler_name) - cmd_opts['config_fc'].append(c) - if task != '0': - cmd_opts['build_ext'].append(c) - cmd_opts['build_clib'].append(c) - if task=='3': - cmd_opts['build_ext'].append('--inplace') - cmd_opts['build_src'].append('--inplace') - conf = [] - sorted_keys = ['config','config_fc','build_src', - 'build_clib','build_ext'] - for k in sorted_keys: - opts = cmd_opts[k] - if opts: conf.extend([k]+opts) - if task=='0': - if 'config' not in conf: - conf.append('config') - argv[1:] = conf - elif task=='1': - argv[1:] = conf+['build'] - elif task=='2': - if prefix is not None: - argv[1:] = conf+['install','--prefix=%s' % (prefix)] - else: - argv[1:] = conf+['install'] - elif task=='3': - argv[1:] = conf+['build'] - elif task=='5': - if sys.platform=='win32': - argv[1:] = conf+['bdist_wininst'] - else: - argv[1:] = conf+['bdist'] - else: - print 'Skipping unknown task:',`task` - else: - print '-'*68 - try: - task_func(argv,readline) - except Exception,msg: - print 'Failed running task %s: %s' % (task,msg) - break - print '-'*68 - print - - print '-'*72 - return argv diff --git a/pythonPackages/numpy/numpy/distutils/lib2def.py b/pythonPackages/numpy/numpy/distutils/lib2def.py deleted file mode 100755 index a486b13bde..0000000000 --- a/pythonPackages/numpy/numpy/distutils/lib2def.py +++ /dev/null @@ -1,114 +0,0 @@ -import re -import sys -import os -import subprocess - -__doc__ = """This module generates a DEF file from the symbols in -an MSVC-compiled DLL import library. It correctly discriminates between -data and functions. The data is collected from the output of the program -nm(1). 
- -Usage: - python lib2def.py [libname.lib] [output.def] -or - python lib2def.py [libname.lib] > output.def - -libname.lib defaults to python.lib and output.def defaults to stdout - -Author: Robert Kern -Last Update: April 30, 1999 -""" - -__version__ = '0.1a' - -py_ver = "%d%d" % tuple(sys.version_info[:2]) - -DEFAULT_NM = 'nm -Cs' - -DEF_HEADER = """LIBRARY python%s.dll -;CODE PRELOAD MOVEABLE DISCARDABLE -;DATA PRELOAD SINGLE - -EXPORTS -""" % py_ver -# the header of the DEF file - -FUNC_RE = re.compile(r"^(.*) in python%s\.dll" % py_ver, re.MULTILINE) -DATA_RE = re.compile(r"^_imp__(.*) in python%s\.dll" % py_ver, re.MULTILINE) - -def parse_cmd(): - """Parses the command-line arguments. - -libfile, deffile = parse_cmd()""" - if len(sys.argv) == 3: - if sys.argv[1][-4:] == '.lib' and sys.argv[2][-4:] == '.def': - libfile, deffile = sys.argv[1:] - elif sys.argv[1][-4:] == '.def' and sys.argv[2][-4:] == '.lib': - deffile, libfile = sys.argv[1:] - else: - print "I'm assuming that your first argument is the library" - print "and the second is the DEF file." - elif len(sys.argv) == 2: - if sys.argv[1][-4:] == '.def': - deffile = sys.argv[1] - libfile = 'python%s.lib' % py_ver - elif sys.argv[1][-4:] == '.lib': - deffile = None - libfile = sys.argv[1] - else: - libfile = 'python%s.lib' % py_ver - deffile = None - return libfile, deffile - -def getnm(nm_cmd = ['nm', '-Cs', 'python%s.lib' % py_ver]): - """Returns the output of nm_cmd via a pipe. - -nm_output = getnm(nm_cmd = 'nm -Cs py_lib')""" - f = subprocess.Popen(nm_cmd, shell=True, stdout=subprocess.PIPE) - nm_output = f.stdout.read() - f.stdout.close() - return nm_output - -def parse_nm(nm_output): - """Returns a tuple of lists: dlist for the list of data -symbols and flist for the list of function symbols.
- -dlist, flist = parse_nm(nm_output)""" - data = DATA_RE.findall(nm_output) - func = FUNC_RE.findall(nm_output) - - flist = [] - for sym in data: - if sym in func and (sym[:2] == 'Py' or sym[:3] == '_Py' or sym[:4] == 'init'): - flist.append(sym) - - dlist = [] - for sym in data: - if sym not in flist and (sym[:2] == 'Py' or sym[:3] == '_Py'): - dlist.append(sym) - - dlist.sort() - flist.sort() - return dlist, flist - -def output_def(dlist, flist, header, file = sys.stdout): - """Outputs the final DEF file to a file defaulting to stdout. - -output_def(dlist, flist, header, file = sys.stdout)""" - for data_sym in dlist: - header = header + '\t%s DATA\n' % data_sym - header = header + '\n' # blank line - for func_sym in flist: - header = header + '\t%s\n' % func_sym - file.write(header) - -if __name__ == '__main__': - libfile, deffile = parse_cmd() - if deffile is None: - deffile = sys.stdout - else: - deffile = open(deffile, 'w') - nm_cmd = [str(DEFAULT_NM), str(libfile)] - nm_output = getnm(nm_cmd) - dlist, flist = parse_nm(nm_output) - output_def(dlist, flist, DEF_HEADER, deffile) diff --git a/pythonPackages/numpy/numpy/distutils/line_endings.py b/pythonPackages/numpy/numpy/distutils/line_endings.py deleted file mode 100755 index 4e6c1f38ec..0000000000 --- a/pythonPackages/numpy/numpy/distutils/line_endings.py +++ /dev/null @@ -1,74 +0,0 @@ -""" Functions for converting from DOS to UNIX line endings -""" - -import sys, re, os - -def dos2unix(file): - "Replace CRLF with LF in argument files. Print names of changed files." - if os.path.isdir(file): - print file, "Directory!" - return - - data = open(file, "rb").read() - if '\0' in data: - print file, "Binary!" 
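lib2def's classification step can be exercised in isolation. A simplified, hypothetical sketch with an assumed Python version of 2.7 and invented nm output: in an MSVC import library, real function symbols show up both bare and behind the `_imp__` import-thunk prefix, while pure data symbols appear only behind `_imp__`:

```python
import re

# Assumed Python version for the example; lib2def derives this from
# sys.version_info at runtime.
PY_VER = '27'
FUNC_RE = re.compile(r"^(.*) in python%s\.dll" % PY_VER, re.MULTILINE)
DATA_RE = re.compile(r"^_imp__(.*) in python%s\.dll" % PY_VER, re.MULTILINE)

def split_symbols(nm_output):
    """Split nm output into (data exports, function exports), in the
    spirit of lib2def.parse_nm."""
    data = DATA_RE.findall(nm_output)
    funcs = FUNC_RE.findall(nm_output)
    # a symbol seen both bare and via _imp__ is a function export
    flist = sorted(s for s in data
                   if s in funcs and s.startswith(('Py', '_Py', 'init')))
    # remaining Py/_Py symbols are data exports
    dlist = sorted(s for s in data
                   if s not in flist and s.startswith(('Py', '_Py')))
    return dlist, flist
```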
- return - - newdata = re.sub("\r\n", "\n", data) - if newdata != data: - print 'dos2unix:', file - f = open(file, "wb") - f.write(newdata) - f.close() - return file - else: - print file, 'ok' - -def dos2unix_one_dir(modified_files,dir_name,file_names): - for file in file_names: - full_path = os.path.join(dir_name,file) - file = dos2unix(full_path) - if file is not None: - modified_files.append(file) - -def dos2unix_dir(dir_name): - modified_files = [] - os.path.walk(dir_name,dos2unix_one_dir,modified_files) - return modified_files -#---------------------------------- - -def unix2dos(file): - "Replace LF with CRLF in argument files. Print names of changed files." - if os.path.isdir(file): - print file, "Directory!" - return - - data = open(file, "rb").read() - if '\0' in data: - print file, "Binary!" - return - newdata = re.sub("\r\n", "\n", data) - newdata = re.sub("\n", "\r\n", newdata) - if newdata != data: - print 'unix2dos:', file - f = open(file, "wb") - f.write(newdata) - f.close() - return file - else: - print file, 'ok' - -def unix2dos_one_dir(modified_files,dir_name,file_names): - for file in file_names: - full_path = os.path.join(dir_name,file) - file = unix2dos(full_path) - if file is not None: - modified_files.append(file) - -def unix2dos_dir(dir_name): - modified_files = [] - os.path.walk(dir_name,unix2dos_one_dir,modified_files) - return modified_files - -if __name__ == "__main__": - dos2unix_dir(sys.argv[1]) diff --git a/pythonPackages/numpy/numpy/distutils/log.py b/pythonPackages/numpy/numpy/distutils/log.py deleted file mode 100755 index fe44bb4433..0000000000 --- a/pythonPackages/numpy/numpy/distutils/log.py +++ /dev/null @@ -1,81 +0,0 @@ -# Colored log, requires Python 2.3 or up.
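The conversion core of line_endings.py, restated as a hedged Python 3 sketch that works on bytes, so the NUL-byte binary check stays encoding-agnostic (`dos2unix_bytes` is an invented name; the original operates on whole files):

```python
def dos2unix_bytes(data):
    """Convert CRLF line endings to LF in a byte string.

    Returns None for binary data (detected, as in line_endings.py,
    by the presence of a NUL byte) so callers can skip such files.
    """
    if b'\0' in data:
        return None  # binary: leave untouched
    return data.replace(b'\r\n', b'\n')
```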
- -import sys -from distutils.log import * -from distutils.log import Log as old_Log -from distutils.log import _global_log - -if sys.version_info[0] < 3: - from misc_util import red_text, default_text, cyan_text, green_text, is_sequence, is_string -else: - from numpy.distutils.misc_util import red_text, default_text, cyan_text, green_text, is_sequence, is_string - - -def _fix_args(args,flag=1): - if is_string(args): - return args.replace('%','%%') - if flag and is_sequence(args): - return tuple([_fix_args(a,flag=0) for a in args]) - return args - -class Log(old_Log): - def _log(self, level, msg, args): - if level >= self.threshold: - if args: - msg = msg % _fix_args(args) - if 0: - if msg.startswith('copying ') and msg.find(' -> ') != -1: - return - if msg.startswith('byte-compiling '): - return - print(_global_color_map[level](msg)) - sys.stdout.flush() - - def good(self, msg, *args): - """If we'd log WARN messages, log this message as a 'nice' anti-warn - message. - """ - if WARN >= self.threshold: - if args: - print(green_text(msg % _fix_args(args))) - else: - print(green_text(msg)) - sys.stdout.flush() -_global_log.__class__ = Log - -good = _global_log.good - -def set_threshold(level, force=False): - prev_level = _global_log.threshold - if prev_level > DEBUG or force: - # If we're running at DEBUG, don't change the threshold, as there's - # likely a good reason why we're running at this level. 
- _global_log.threshold = level - if level <= DEBUG: - info('set_threshold: setting threshold to DEBUG level, it can be changed only with force argument') - else: - info('set_threshold: not changing threshold from DEBUG level %s to %s' % (prev_level,level)) - return prev_level - -def set_verbosity(v, force=False): - prev_level = _global_log.threshold - if v < 0: - set_threshold(ERROR, force) - elif v == 0: - set_threshold(WARN, force) - elif v == 1: - set_threshold(INFO, force) - elif v >= 2: - set_threshold(DEBUG, force) - return {FATAL:-2,ERROR:-1,WARN:0,INFO:1,DEBUG:2}.get(prev_level,1) - -_global_color_map = { - DEBUG:cyan_text, - INFO:default_text, - WARN:red_text, - ERROR:red_text, - FATAL:red_text -} - -# don't use INFO,.. flags in set_verbosity, these flags are for set_threshold. -set_verbosity(0, force=True) diff --git a/pythonPackages/numpy/numpy/distutils/mingw/gfortran_vs2003_hack.c b/pythonPackages/numpy/numpy/distutils/mingw/gfortran_vs2003_hack.c deleted file mode 100755 index 15ed7e6863..0000000000 --- a/pythonPackages/numpy/numpy/distutils/mingw/gfortran_vs2003_hack.c +++ /dev/null @@ -1,6 +0,0 @@ -int _get_output_format(void) -{ - return 0; -} - -int _imp____lc_codepage = 0; diff --git a/pythonPackages/numpy/numpy/distutils/mingw32ccompiler.py b/pythonPackages/numpy/numpy/distutils/mingw32ccompiler.py deleted file mode 100755 index ebc9c0e2bf..0000000000 --- a/pythonPackages/numpy/numpy/distutils/mingw32ccompiler.py +++ /dev/null @@ -1,477 +0,0 @@ -""" -Support code for building Python extensions on Windows. - - # NT stuff - # 1. Make sure libpython.a exists for gcc. If not, build it. - # 2. Force windows to use gcc (we're struggling with MSVC and g77 support) - # 3.
Force windows to use g77 - -""" - -import os -import subprocess -import sys -import subprocess -import re - -# Overwrite certain distutils.ccompiler functions: -import numpy.distutils.ccompiler - -if sys.version_info[0] < 3: - import log -else: - from numpy.distutils import log -# NT stuff -# 1. Make sure libpython.a exists for gcc. If not, build it. -# 2. Force windows to use gcc (we're struggling with MSVC and g77 support) -# --> this is done in numpy/distutils/ccompiler.py -# 3. Force windows to use g77 - -import distutils.cygwinccompiler -from distutils.version import StrictVersion -from numpy.distutils.ccompiler import gen_preprocess_options, gen_lib_options -from distutils.errors import DistutilsExecError, CompileError, UnknownFileError - -from distutils.unixccompiler import UnixCCompiler -from distutils.msvccompiler import get_build_version as get_build_msvc_version -from numpy.distutils.misc_util import msvc_runtime_library, get_build_architecture - -# Useful to generate table of symbols from a dll -_START = re.compile(r'\[Ordinal/Name Pointer\] Table') -_TABLE = re.compile(r'^\s+\[([\s*[0-9]*)\] ([a-zA-Z0-9_]*)') - -# the same as cygwin plus some additional parameters -class Mingw32CCompiler(distutils.cygwinccompiler.CygwinCCompiler): - """ A modified MingW32 compiler compatible with an MSVC built Python. 
- - """ - - compiler_type = 'mingw32' - - def __init__ (self, - verbose=0, - dry_run=0, - force=0): - - distutils.cygwinccompiler.CygwinCCompiler.__init__ (self, - verbose,dry_run, force) - - # we need to support 3.2 which doesn't match the standard - # get_versions methods regex - if self.gcc_version is None: - import re - p = subprocess.Popen(['gcc', '-dumpversion'], shell=True, - stdout=subprocess.PIPE) - out_string = p.stdout.read() - p.stdout.close() - result = re.search('(\d+\.\d+)',out_string) - if result: - self.gcc_version = StrictVersion(result.group(1)) - - # A real mingw32 doesn't need to specify a different entry point, - # but cygwin 2.91.57 in no-cygwin-mode needs it. - if self.gcc_version <= "2.91.57": - entry_point = '--entry _DllMain@12' - else: - entry_point = '' - - if self.linker_dll == 'dllwrap': - # Commented out '--driver-name g++' part that fixes weird - # g++.exe: g++: No such file or directory - # error (mingw 1.0 in Enthon24 tree, gcc-3.4.5). - # If the --driver-name part is required for some environment - # then make the inclusion of this part specific to that environment. - self.linker = 'dllwrap' # --driver-name g++' - elif self.linker_dll == 'gcc': - self.linker = 'g++' - - # **changes: eric jones 4/11/01 - # 1. Check for import library on Windows. Build if it doesn't exist. - - build_import_library() - - # **changes: eric jones 4/11/01 - # 2. increased optimization and turned off all warnings - # 3. also added --driver-name g++ - #self.set_executables(compiler='gcc -mno-cygwin -O2 -w', - # compiler_so='gcc -mno-cygwin -mdll -O2 -w', - # linker_exe='gcc -mno-cygwin', - # linker_so='%s --driver-name g++ -mno-cygwin -mdll -static %s' - # % (self.linker, entry_point)) - - # MS_WIN64 should be defined when building for amd64 on windows, but - # python headers define it only for MS compilers, which has all kind of - # bad consequences, like using Py_ModuleInit4 instead of - # Py_ModuleInit4_64, etc... 
So we add it here - if get_build_architecture() == 'AMD64': - self.set_executables( - compiler='gcc -g -DDEBUG -DMS_WIN64 -mno-cygwin -O0 -Wall', - compiler_so='gcc -g -DDEBUG -DMS_WIN64 -mno-cygwin -O0 -Wall -Wstrict-prototypes', - linker_exe='gcc -g -mno-cygwin', - linker_so='gcc -g -mno-cygwin -shared') - else: - if self.gcc_version <= "3.0.0": - self.set_executables(compiler='gcc -mno-cygwin -O2 -w', - compiler_so='gcc -mno-cygwin -mdll -O2 -w -Wstrict-prototypes', - linker_exe='g++ -mno-cygwin', - linker_so='%s -mno-cygwin -mdll -static %s' - % (self.linker, entry_point)) - else: - self.set_executables(compiler='gcc -mno-cygwin -O2 -Wall', - compiler_so='gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes', - linker_exe='g++ -mno-cygwin', - linker_so='g++ -mno-cygwin -shared') - # added for python2.3 support - # we can't pass it through set_executables because pre 2.2 would fail - self.compiler_cxx = ['g++'] - - # Maybe we should also append -mthreads, but then the finished - # dlls need another dll (mingwm10.dll see Mingw32 docs) - # (-mthreads: Support thread-safe exception handling on `Mingw32') - - # no additional libraries needed - #self.dll_libraries=[] - return - - # __init__ () - - def link(self, - target_desc, - objects, - output_filename, - output_dir, - libraries, - library_dirs, - runtime_library_dirs, - export_symbols = None, - debug=0, - extra_preargs=None, - extra_postargs=None, - build_temp=None, - target_lang=None): - # Include the appropiate MSVC runtime library if Python was built - # with MSVC >= 7.0 (MinGW standard is msvcrt) - runtime_library = msvc_runtime_library() - if runtime_library: - if not libraries: - libraries = [] - libraries.append(runtime_library) - args = (self, - target_desc, - objects, - output_filename, - output_dir, - libraries, - library_dirs, - runtime_library_dirs, - None, #export_symbols, we do this in our def-file - debug, - extra_preargs, - extra_postargs, - build_temp, - target_lang) - if self.gcc_version < "3.0.0": - 
func = distutils.cygwinccompiler.CygwinCCompiler.link - else: - func = UnixCCompiler.link - func(*args[:func.im_func.func_code.co_argcount]) - return - - def object_filenames (self, - source_filenames, - strip_dir=0, - output_dir=''): - if output_dir is None: output_dir = '' - obj_names = [] - for src_name in source_filenames: - # use normcase to make sure '.rc' is really '.rc' and not '.RC' - (base, ext) = os.path.splitext (os.path.normcase(src_name)) - - # added these lines to strip off windows drive letters - # without it, .o files are placed next to .c files - # instead of the build directory - drv,base = os.path.splitdrive(base) - if drv: - base = base[1:] - - if ext not in (self.src_extensions + ['.rc','.res']): - raise UnknownFileError( - "unknown file type '%s' (from '%s')" % \ - (ext, src_name)) - if strip_dir: - base = os.path.basename (base) - if ext == '.res' or ext == '.rc': - # these need to be compiled to object files - obj_names.append (os.path.join (output_dir, - base + ext + self.obj_extension)) - else: - obj_names.append (os.path.join (output_dir, - base + self.obj_extension)) - return obj_names - - # object_filenames () - - -def find_python_dll(): - maj, min, micro = [int(i) for i in sys.version_info[:3]] - dllname = 'python%d%d.dll' % (maj, min) - print ("Looking for %s" % dllname) - - # We can't do much here: - # - find it in python main dir - # - in system32, - # - ortherwise (Sxs), I don't know how to get it. 
- lib_dirs = [] - lib_dirs.append(os.path.join(sys.prefix, 'lib')) - try: - lib_dirs.append(os.path.join(os.environ['SYSTEMROOT'], 'system32')) - except KeyError: - pass - - for d in lib_dirs: - dll = os.path.join(d, dllname) - if os.path.exists(dll): - return dll - - raise ValueError("%s not found in %s" % (dllname, lib_dirs)) - -def dump_table(dll): - st = subprocess.Popen(["objdump.exe", "-p", dll], stdout=subprocess.PIPE) - return st.stdout.readlines() - -def generate_def(dll, dfile): - """Given a dll file location, get all its exported symbols and dump them - into the given def file. - - The .def file will be overwritten""" - dump = dump_table(dll) - for i in range(len(dump)): - if _START.match(dump[i]): - break - - if i == len(dump): - raise ValueError("Symbol table not found") - - syms = [] - for j in range(i+1, len(dump)): - m = _TABLE.match(dump[j]) - if m: - syms.append((int(m.group(1).strip()), m.group(2))) - else: - break - - if len(syms) == 0: - log.warn('No symbols found in %s' % dll) - - d = open(dfile, 'w') - d.write('LIBRARY %s\n' % os.path.basename(dll)) - d.write(';CODE PRELOAD MOVEABLE DISCARDABLE\n') - d.write(';DATA PRELOAD SINGLE\n') - d.write('\nEXPORTS\n') - for s in syms: - #d.write('@%d %s\n' % (s[0], s[1])) - d.write('%s\n' % s[1]) - d.close() - -def build_import_library(): - if os.name != 'nt': - return - - arch = get_build_architecture() - if arch == 'AMD64': - return _build_import_library_amd64() - elif arch == 'Intel': - return _build_import_library_x86() - else: - raise ValueError("Unhandled arch %s" % arch) - -def _build_import_library_amd64(): - dll_file = find_python_dll() - - out_name = "libpython%d%d.a" % tuple(sys.version_info[:2]) - out_file = os.path.join(sys.prefix, 'libs', out_name) - if os.path.isfile(out_file): - log.debug('Skip building import library: "%s" exists' % (out_file)) - return - - def_name = "python%d%d.def" % tuple(sys.version_info[:2]) - def_file = os.path.join(sys.prefix,'libs',def_name) - - 
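generate_def's scan of objdump output can be sketched independently: find the export-table header, then collect (ordinal, name) rows until the first non-matching line. The regexes below are simplified from `_START`/`_TABLE`, and the sample lines in the test are invented:

```python
import re

# Simplified versions of the module-level _START/_TABLE patterns used
# to locate and parse the export table in `objdump -p` output.
_START = re.compile(r'\[Ordinal/Name Pointer\] Table')
_TABLE = re.compile(r'^\s+\[\s*([0-9]+)\] ([a-zA-Z0-9_]+)')

def parse_exports(lines):
    """Return the (ordinal, symbol) pairs listed in an export table."""
    syms = []
    in_table = False
    for line in lines:
        if not in_table:
            if _START.search(line):
                in_table = True
            continue
        m = _TABLE.match(line)
        if not m:
            break  # the table ends at the first non-matching row
        syms.append((int(m.group(1)), m.group(2)))
    return syms
```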
log.info('Building import library (arch=AMD64): "%s" (from %s)' \ - % (out_file, dll_file)) - - generate_def(dll_file, def_file) - - cmd = ['dlltool', '-d', def_file, '-l', out_file] - subprocess.Popen(cmd) - -def _build_import_library_x86(): - """ Build the import libraries for Mingw32-gcc on Windows - """ - lib_name = "python%d%d.lib" % tuple(sys.version_info[:2]) - lib_file = os.path.join(sys.prefix,'libs',lib_name) - out_name = "libpython%d%d.a" % tuple(sys.version_info[:2]) - out_file = os.path.join(sys.prefix,'libs',out_name) - if not os.path.isfile(lib_file): - log.warn('Cannot build import library: "%s" not found' % (lib_file)) - return - if os.path.isfile(out_file): - log.debug('Skip building import library: "%s" exists' % (out_file)) - return - log.info('Building import library (ARCH=x86): "%s"' % (out_file)) - - from numpy.distutils import lib2def - - def_name = "python%d%d.def" % tuple(sys.version_info[:2]) - def_file = os.path.join(sys.prefix,'libs',def_name) - nm_cmd = '%s %s' % (lib2def.DEFAULT_NM, lib_file) - nm_output = lib2def.getnm(nm_cmd) - dlist, flist = lib2def.parse_nm(nm_output) - lib2def.output_def(dlist, flist, lib2def.DEF_HEADER, open(def_file, 'w')) - - dll_name = "python%d%d.dll" % tuple(sys.version_info[:2]) - args = (dll_name,def_file,out_file) - cmd = 'dlltool --dllname %s --def %s --output-lib %s' % args - status = os.system(cmd) - # for now, fail silently - if status: - log.warn('Failed to build import library for gcc. Linking will fail.') - #if not success: - # msg = "Couldn't find import library, and failed to build it." - # raise DistutilsPlatformError, msg - return - -#===================================== -# Dealing with Visual Studio MANIFESTS -#===================================== - -# Functions to deal with visual studio manifests. Manifest are a mechanism to -# enforce strong DLL versioning on windows, and has nothing to do with -# distutils MANIFEST. 
manifests are XML files with version info, and used by -# the OS loader; they are necessary when linking against a DLL not in the -# system path; in particular, official python 2.6 binary is built against the -# MS runtime 9 (the one from VS 2008), which is not available on most windows -# systems; python 2.6 installer does install it in the Win SxS (Side by side) -# directory, but this requires the manifest for this to work. This is a big -# mess, thanks MS for a wonderful system. - -# XXX: ideally, we should use exactly the same version as used by python. I -# submitted a patch to get this version, but it was only included for python -# 2.6.1 and above. So for versions below, we use a "best guess". -_MSVCRVER_TO_FULLVER = {} -if sys.platform == 'win32': - try: - import msvcrt - if hasattr(msvcrt, "CRT_ASSEMBLY_VERSION"): - _MSVCRVER_TO_FULLVER['90'] = msvcrt.CRT_ASSEMBLY_VERSION - else: - _MSVCRVER_TO_FULLVER['90'] = "9.0.21022.8" - # I took one version in my SxS directory: no idea if it is the good - # one, and we can't retrieve it from python - _MSVCRVER_TO_FULLVER['80'] = "8.0.50727.42" - except ImportError: - # If we are here, means python was not built with MSVC. Not sure what to do - # in that case: manifest building will fail, but it should not be used in - # that case anyway - log.warn('Cannot import msvcrt: using manifest will not be possible') - -def msvc_manifest_xml(maj, min): - """Given a major and minor version of the MSVCR, returns the - corresponding XML file.""" - try: - fullver = _MSVCRVER_TO_FULLVER[str(maj * 10 + min)] - except KeyError: - raise ValueError("Version %d,%d of MSVCRT not supported yet" \ - % (maj, min)) - # Don't be fooled, it looks like an XML, but it is not. In particular, it - # should not have any space before starting, and its size should be - # divisible by 4, most likely for alignement constraints when the xml is - # embedded in the binary... 
- # This template was copied directly from the python 2.6 binary (using - # strings.exe from mingw on python.exe). - template = """\ -<?xml version='1.0' encoding='UTF-8' standalone='yes'?> -<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> - <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> - <security> - <requestedPrivileges> - <requestedExecutionLevel level="asInvoker" uiAccess="false"></requestedExecutionLevel> - </requestedPrivileges> - </security> - </trustInfo> - <dependency> - <dependentAssembly> - <assemblyIdentity type="win32" name="Microsoft.VC%(maj)d%(min)d.CRT" version="%(fullver)s" processorArchitecture="*" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> - </dependentAssembly> - </dependency> -</assembly>""" - - return template % {'fullver': fullver, 'maj': maj, 'min': min} - -def manifest_rc(name, type='dll'): - """Return the rc file used to generate the res file which will be embedded - as manifest for given manifest file name, of given type ('dll' or - 'exe'). - - Parameters - ---------- - name: str - name of the manifest file to embed - type: str ('dll', 'exe') - type of the binary which will embed the manifest""" - if type == 'dll': - rctype = 2 - elif type == 'exe': - rctype = 1 - else: - raise ValueError("Type %s not supported" % type) - - return """\ -#include "winuser.h" -%d RT_MANIFEST %s""" % (rctype, name) - -def check_embedded_msvcr_match_linked(msver): - """msver is the ms runtime version used for the MANIFEST.""" - # check msvcr major version are the same for linking and - # embedding - msvcv = msvc_runtime_library() - if msvcv: - maj = int(msvcv[5:6]) - if not maj == int(msver): - raise ValueError( - "Discrepancy between linked msvcr " \ - "(%d) and the one about to be embedded " \ - "(%d)" % (int(msver), maj)) - -def configtest_name(config): - base = os.path.basename(config._gen_temp_sourcefile("yo", [], "c")) - return os.path.splitext(base)[0] - -def manifest_name(config): - # Get configtest name (including suffix) - root = configtest_name(config) - exext = config.compiler.exe_extension - return root + exext + ".manifest" - -def rc_name(config): - # Get configtest name (including suffix) - root = configtest_name(config) - return root + ".rc" - -def generate_manifest(config): - msver = get_build_msvc_version() - if msver is not None: - if msver >= 8: - check_embedded_msvcr_match_linked(msver) - ma = int(msver) - mi = int((msver - ma) * 10) - # Write the manifest file - manxml = msvc_manifest_xml(ma, mi) - man = open(manifest_name(config), "w") - config.temp_files.append(manifest_name(config))
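generate_manifest's version arithmetic in miniature: a sketch of how a float MSVC version becomes the (major, minor) pair for the manifest template and the two-digit key into `_MSVCRVER_TO_FULLVER` (both helper names below are invented):

```python
def split_msvc_version(msver):
    """Split a float MSVC version (e.g. 9.0) into (major, minor),
    as generate_manifest does before filling in the XML template."""
    maj = int(msver)
    minor = int((msver - maj) * 10)
    return maj, minor

def crt_lookup_key(maj, minor):
    # _MSVCRVER_TO_FULLVER is keyed by strings like '90' for MSVC 9.0
    return str(maj * 10 + minor)
```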
man.write(manxml) - man.close() - # # Write the rc file - # manrc = manifest_rc(manifest_name(self), "exe") - # rc = open(rc_name(self), "w") - # self.temp_files.append(manrc) - # rc.write(manrc) - # rc.close() diff --git a/pythonPackages/numpy/numpy/distutils/misc_util.py b/pythonPackages/numpy/numpy/distutils/misc_util.py deleted file mode 100755 index 7197570477..0000000000 --- a/pythonPackages/numpy/numpy/distutils/misc_util.py +++ /dev/null @@ -1,2268 +0,0 @@ -import os -import re -import sys -import imp -import copy -import glob -import atexit -import tempfile -import subprocess -import shutil - -from distutils.errors import DistutilsError - -try: - set -except NameError: - from sets import Set as set - -from numpy.distutils.compat import get_exception - -__all__ = ['Configuration', 'get_numpy_include_dirs', 'default_config_dict', - 'dict_append', 'appendpath', 'generate_config_py', - 'get_cmd', 'allpath', 'get_mathlibs', - 'terminal_has_colors', 'red_text', 'green_text', 'yellow_text', - 'blue_text', 'cyan_text', 'cyg2win32','mingw32','all_strings', - 'has_f_sources', 'has_cxx_sources', 'filter_sources', - 'get_dependencies', 'is_local_src_dir', 'get_ext_source_files', - 'get_script_files', 'get_lib_source_files', 'get_data_files', - 'dot_join', 'get_frame', 'minrelpath','njoin', - 'is_sequence', 'is_string', 'as_list', 'gpaths', 'get_language', - 'quote_args', 'get_build_architecture', 'get_info', 'get_pkg_info'] - -class InstallableLib: - """ - Container to hold information on an installable library. - - Parameters - ---------- - name : str - Name of the installed library. - build_info : dict - Dictionary holding build information. - target_dir : str - Absolute path specifying where to install the library. - - See Also - -------- - Configuration.add_installed_library - - Notes - ----- - The three parameters are stored as attributes with the same names. 
- - """ - def __init__(self, name, build_info, target_dir): - self.name = name - self.build_info = build_info - self.target_dir = target_dir - -def quote_args(args): - # don't used _nt_quote_args as it does not check if - # args items already have quotes or not. - args = list(args) - for i in range(len(args)): - a = args[i] - if ' ' in a and a[0] not in '"\'': - args[i] = '"%s"' % (a) - return args - -def allpath(name): - "Convert a /-separated pathname to one using the OS's path separator." - splitted = name.split('/') - return os.path.join(*splitted) - -def rel_path(path, parent_path): - """Return path relative to parent_path. - """ - pd = os.path.abspath(parent_path) - apath = os.path.abspath(path) - if len(apath)= 0 - and curses.tigetnum("pairs") >= 0 - and ((curses.tigetstr("setf") is not None - and curses.tigetstr("setb") is not None) - or (curses.tigetstr("setaf") is not None - and curses.tigetstr("setab") is not None) - or curses.tigetstr("scp") is not None)): - return 1 - except Exception: - pass - return 0 - -if terminal_has_colors(): - _colour_codes = dict(black=0, red=1, green=2, yellow=3, - blue=4, magenta=5, cyan=6, white=7, default=9) - def colour_text(s, fg=None, bg=None, bold=False): - seq = [] - if bold: - seq.append('1') - if fg: - fgcode = 30 + _colour_codes.get(fg.lower(), 0) - seq.append(str(fgcode)) - if bg: - bgcode = 40 + _colour_codes.get(fg.lower(), 7) - seq.append(str(bgcode)) - if seq: - return '\x1b[%sm%s\x1b[0m' % (';'.join(seq), s) - else: - return s -else: - def colour_text(s, fg=None, bg=None): - return s - -def default_text(s): - return colour_text(s, 'default') -def red_text(s): - return colour_text(s, 'red') -def green_text(s): - return colour_text(s, 'green') -def yellow_text(s): - return colour_text(s, 'yellow') -def cyan_text(s): - return colour_text(s, 'cyan') -def blue_text(s): - return colour_text(s, 'blue') - -######################### - -def cyg2win32(path): - if sys.platform=='cygwin' and path.startswith('/cygdrive'): - 
path = path[10] + ':' + os.path.normcase(path[11:]) - return path - -def mingw32(): - """Return true when using mingw32 environment. - """ - if sys.platform=='win32': - if os.environ.get('OSTYPE','')=='msys': - return True - if os.environ.get('MSYSTEM','')=='MINGW32': - return True - return False - -def msvc_runtime_library(): - "Return name of MSVC runtime library if Python was built with MSVC >= 7" - msc_pos = sys.version.find('MSC v.') - if msc_pos != -1: - msc_ver = sys.version[msc_pos+6:msc_pos+10] - lib = {'1300' : 'msvcr70', # MSVC 7.0 - '1310' : 'msvcr71', # MSVC 7.1 - '1400' : 'msvcr80', # MSVC 8 - '1500' : 'msvcr90', # MSVC 9 (VS 2008) - }.get(msc_ver, None) - else: - lib = None - return lib - -def msvc_on_amd64(): - if not (sys.platform=='win32' or os.name=='nt'): - return - if get_build_architecture() != 'AMD64': - return - if 'DISTUTILS_USE_SDK' in os.environ: - return - # try to avoid _MSVCCompiler__root attribute error - print('Forcing DISTUTILS_USE_SDK=1') - os.environ['DISTUTILS_USE_SDK']='1' - return - -######################### - -#XXX need support for .C that is also C++ -cxx_ext_match = re.compile(r'.*[.](cpp|cxx|cc)\Z',re.I).match -fortran_ext_match = re.compile(r'.*[.](f90|f95|f77|for|ftn|f)\Z',re.I).match -f90_ext_match = re.compile(r'.*[.](f90|f95)\Z',re.I).match -f90_module_name_match = re.compile(r'\s*module\s*(?P[\w_]+)',re.I).match -def _get_f90_modules(source): - """Return a list of Fortran f90 module names that - given source file defines. - """ - if not f90_ext_match(source): - return [] - modules = [] - f = open(source,'r') - f_readlines = getattr(f,'xreadlines',f.readlines) - for line in f_readlines(): - m = f90_module_name_match(line) - if m: - name = m.group('name') - modules.append(name) - # break # XXX can we assume that there is one module per file? - f.close() - return modules - -def is_string(s): - return isinstance(s, str) - -def all_strings(lst): - """Return True if all items in lst are string objects. 
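msvc_runtime_library keys off the `MSC v.NNNN` marker that an MSVC-built Python embeds in sys.version. A standalone sketch of that lookup (the sample version strings in the test are invented):

```python
# The four-digit MSC build number maps onto a msvcrXX runtime name,
# mirroring the table inside msvc_runtime_library.
_MSC_TO_RUNTIME = {
    '1300': 'msvcr70',  # MSVC 7.0
    '1310': 'msvcr71',  # MSVC 7.1
    '1400': 'msvcr80',  # MSVC 8 (VS 2005)
    '1500': 'msvcr90',  # MSVC 9 (VS 2008)
}

def runtime_from_version_string(version):
    """Return the MSVC runtime name for a sys.version-style string,
    or None when the interpreter was not built with MSVC."""
    pos = version.find('MSC v.')
    if pos == -1:
        return None
    return _MSC_TO_RUNTIME.get(version[pos + 6:pos + 10])
```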
""" - for item in lst: - if not is_string(item): - return False - return True - -def is_sequence(seq): - if is_string(seq): - return False - try: - len(seq) - except: - return False - return True - -def is_glob_pattern(s): - return is_string(s) and ('*' in s or '?' is s) - -def as_list(seq): - if is_sequence(seq): - return list(seq) - else: - return [seq] - -def get_language(sources): - # not used in numpy/scipy packages, use build_ext.detect_language instead - """Determine language value (c,f77,f90) from sources """ - language = None - for source in sources: - if isinstance(source, str): - if f90_ext_match(source): - language = 'f90' - break - elif fortran_ext_match(source): - language = 'f77' - return language - -def has_f_sources(sources): - """Return True if sources contains Fortran files """ - for source in sources: - if fortran_ext_match(source): - return True - return False - -def has_cxx_sources(sources): - """Return True if sources contains C++ files """ - for source in sources: - if cxx_ext_match(source): - return True - return False - -def filter_sources(sources): - """Return four lists of filenames containing - C, C++, Fortran, and Fortran 90 module sources, - respectively. - """ - c_sources = [] - cxx_sources = [] - f_sources = [] - fmodule_sources = [] - for source in sources: - if fortran_ext_match(source): - modules = _get_f90_modules(source) - if modules: - fmodule_sources.append(source) - else: - f_sources.append(source) - elif cxx_ext_match(source): - cxx_sources.append(source) - else: - c_sources.append(source) - return c_sources, cxx_sources, f_sources, fmodule_sources - - -def _get_headers(directory_list): - # get *.h files from list of directories - headers = [] - for d in directory_list: - head = glob.glob(os.path.join(d,"*.h")) #XXX: *.hpp files?? - headers.extend(head) - return headers - -def _get_directories(list_of_sources): - # get unique directories from list of sources. 
- direcs = [] - for f in list_of_sources: - d = os.path.split(f) - if d[0] != '' and not d[0] in direcs: - direcs.append(d[0]) - return direcs - -def get_dependencies(sources): - #XXX scan sources for include statements - return _get_headers(_get_directories(sources)) - -def is_local_src_dir(directory): - """Return true if directory is local directory. - """ - if not is_string(directory): - return False - abs_dir = os.path.abspath(directory) - c = os.path.commonprefix([os.getcwd(),abs_dir]) - new_dir = abs_dir[len(c):].split(os.sep) - if new_dir and not new_dir[0]: - new_dir = new_dir[1:] - if new_dir and new_dir[0]=='build': - return False - new_dir = os.sep.join(new_dir) - return os.path.isdir(new_dir) - -def general_source_files(top_path): - pruned_directories = {'CVS':1, '.svn':1, 'build':1} - prune_file_pat = re.compile(r'(?:[~#]|\.py[co]|\.o)$') - for dirpath, dirnames, filenames in os.walk(top_path, topdown=True): - pruned = [ d for d in dirnames if d not in pruned_directories ] - dirnames[:] = pruned - for f in filenames: - if not prune_file_pat.search(f): - yield os.path.join(dirpath, f) - -def general_source_directories_files(top_path): - """Return a directory name relative to top_path and - files contained. 
- """ - pruned_directories = ['CVS','.svn','build'] - prune_file_pat = re.compile(r'(?:[~#]|\.py[co]|\.o)$') - for dirpath, dirnames, filenames in os.walk(top_path, topdown=True): - pruned = [ d for d in dirnames if d not in pruned_directories ] - dirnames[:] = pruned - for d in dirnames: - dpath = os.path.join(dirpath, d) - rpath = rel_path(dpath, top_path) - files = [] - for f in os.listdir(dpath): - fn = os.path.join(dpath,f) - if os.path.isfile(fn) and not prune_file_pat.search(fn): - files.append(fn) - yield rpath, files - dpath = top_path - rpath = rel_path(dpath, top_path) - filenames = [os.path.join(dpath,f) for f in os.listdir(dpath) \ - if not prune_file_pat.search(f)] - files = [f for f in filenames if os.path.isfile(f)] - yield rpath, files - - -def get_ext_source_files(ext): - # Get sources and any include files in the same directory. - filenames = [] - sources = filter(is_string, ext.sources) - filenames.extend(sources) - filenames.extend(get_dependencies(sources)) - for d in ext.depends: - if is_local_src_dir(d): - filenames.extend(list(general_source_files(d))) - elif os.path.isfile(d): - filenames.append(d) - return filenames - -def get_script_files(scripts): - scripts = filter(is_string, scripts) - return scripts - -def get_lib_source_files(lib): - filenames = [] - sources = lib[1].get('sources',[]) - sources = filter(is_string, sources) - filenames.extend(sources) - filenames.extend(get_dependencies(sources)) - depends = lib[1].get('depends',[]) - for d in depends: - if is_local_src_dir(d): - filenames.extend(list(general_source_files(d))) - elif os.path.isfile(d): - filenames.append(d) - return filenames - -def get_data_files(data): - if is_string(data): - return [data] - sources = data[1] - filenames = [] - for s in sources: - if hasattr(s, '__call__'): - continue - if is_local_src_dir(s): - filenames.extend(list(general_source_files(s))) - elif is_string(s): - if os.path.isfile(s): - filenames.append(s) - else: - print('Not existing data 
file:',s) - else: - raise TypeError(repr(s)) - return filenames - -def dot_join(*args): - return '.'.join([a for a in args if a]) - -def get_frame(level=0): - """Return frame object from call stack with given level. - """ - try: - return sys._getframe(level+1) - except AttributeError: - frame = sys.exc_info()[2].tb_frame - for _ in range(level+1): - frame = frame.f_back - return frame - -class SconsInfo(object): - """ - Container object holding build info for building a package with scons. - - Parameters - ---------- - scons_path : str or None - Path to scons script, relative to the directory of setup.py. - If None, no scons script is specified. This can be useful to add only - pre- and post-hooks to a configuration. - parent_name : str or None - Name of the parent package (for example "numpy"). - pre_hook : sequence of callables or None - Callables that are executed before scons is invoked. - Each callable should be defined as ``callable(*args, **kw)``. - post_hook : sequence of callables or None - Callables that are executed after scons is invoked. - Each callable should be defined as ``callable(*args, **kw)``. - source_files : list of str or None - List of paths to source files, relative to the directory of setup.py. - pkg_path : str or None - Path to the package for which the `SconsInfo` instance holds the - build info, relative to the directory of setup.py. - - Notes - ----- - All parameters are available as attributes of a `SconsInfo` instance. 
- - """ - def __init__(self, scons_path, parent_name, pre_hook, - post_hook, source_files, pkg_path): - self.scons_path = scons_path - self.parent_name = parent_name - self.pre_hook = pre_hook - self.post_hook = post_hook - self.source_files = source_files - if pkg_path: - self.pkg_path = pkg_path - else: - if scons_path: - self.pkg_path = os.path.dirname(scons_path) - else: - self.pkg_path = '' - -###################### - -class Configuration(object): - - _list_keys = ['packages', 'ext_modules', 'data_files', 'include_dirs', - 'libraries', 'headers', 'scripts', 'py_modules', 'scons_data', - 'installed_libraries'] - _dict_keys = ['package_dir', 'installed_pkg_config'] - _extra_keys = ['name', 'version'] - - numpy_include_dirs = [] - - def __init__(self, - package_name=None, - parent_name=None, - top_path=None, - package_path=None, - caller_level=1, - setup_name='setup.py', - **attrs): - """Construct configuration instance of a package. - - package_name -- name of the package - Ex.: 'distutils' - parent_name -- name of the parent package - Ex.: 'numpy' - top_path -- directory of the toplevel package - Ex.: the directory where the numpy package source sits - package_path -- directory of package. Will be computed by magic from the - directory of the caller module if not specified - Ex.: the directory where numpy.distutils is - caller_level -- frame level to caller namespace, internal parameter. - """ - self.name = dot_join(parent_name, package_name) - self.version = None - - caller_frame = get_frame(caller_level) - self.local_path = get_path_from_frame(caller_frame, top_path) - # local_path -- directory of a file (usually setup.py) that - # defines a configuration() function. - # local_path -- directory of a file (usually setup.py) that - # defines a configuration() function. 
- if top_path is None: - top_path = self.local_path - self.local_path = '' - if package_path is None: - package_path = self.local_path - elif os.path.isdir(njoin(self.local_path,package_path)): - package_path = njoin(self.local_path,package_path) - if not os.path.isdir(package_path or '.'): - raise ValueError("%r is not a directory" % (package_path,)) - self.top_path = top_path - self.package_path = package_path - # this is the relative path in the installed package - self.path_in_package = os.path.join(*self.name.split('.')) - - self.list_keys = self._list_keys[:] - self.dict_keys = self._dict_keys[:] - - for n in self.list_keys: - v = copy.copy(attrs.get(n, [])) - setattr(self, n, as_list(v)) - - for n in self.dict_keys: - v = copy.copy(attrs.get(n, {})) - setattr(self, n, v) - - known_keys = self.list_keys + self.dict_keys - self.extra_keys = self._extra_keys[:] - for n in attrs.keys(): - if n in known_keys: - continue - a = attrs[n] - setattr(self,n,a) - if isinstance(a, list): - self.list_keys.append(n) - elif isinstance(a, dict): - self.dict_keys.append(n) - else: - self.extra_keys.append(n) - - if os.path.exists(njoin(package_path,'__init__.py')): - self.packages.append(self.name) - self.package_dir[self.name] = package_path - - self.options = dict( - ignore_setup_xxx_py = False, - assume_default_configuration = False, - delegate_options_to_subpackages = False, - quiet = False, - ) - - caller_instance = None - for i in range(1,3): - try: - f = get_frame(i) - except ValueError: - break - try: - caller_instance = eval('self',f.f_globals,f.f_locals) - break - except NameError: - pass - if isinstance(caller_instance, self.__class__): - if caller_instance.options['delegate_options_to_subpackages']: - self.set_options(**caller_instance.options) - - self.setup_name = setup_name - - def todict(self): - """ - Return a dictionary compatible with the keyword arguments of distutils - setup function. 
- - Examples - -------- - >>> setup(**config.todict()) #doctest: +SKIP - """ - - self._optimize_data_files() - d = {} - known_keys = self.list_keys + self.dict_keys + self.extra_keys - for n in known_keys: - a = getattr(self,n) - if a: - d[n] = a - return d - - def info(self, message): - if not self.options['quiet']: - print(message) - - def warn(self, message): - sys.stderr.write('Warning: %s' % (message,)) - - def set_options(self, **options): - """ - Configure Configuration instance. - - The following options are available: - - ignore_setup_xxx_py - - assume_default_configuration - - delegate_options_to_subpackages - - quiet - - """ - for key, value in options.items(): - if key in self.options: - self.options[key] = value - else: - raise ValueError('Unknown option: '+key) - - def get_distribution(self): - """Return the distutils distribution object for self.""" - from numpy.distutils.core import get_distribution - return get_distribution() - - def _wildcard_get_subpackage(self, subpackage_name, - parent_name, - caller_level = 1): - l = subpackage_name.split('.') - subpackage_path = njoin([self.local_path]+l) - dirs = filter(os.path.isdir,glob.glob(subpackage_path)) - config_list = [] - for d in dirs: - if not os.path.isfile(njoin(d,'__init__.py')): - continue - if 'build' in d.split(os.sep): - continue - n = '.'.join(d.split(os.sep)[-len(l):]) - c = self.get_subpackage(n, - parent_name = parent_name, - caller_level = caller_level+1) - config_list.extend(c) - return config_list - - def _get_configuration_from_setup_py(self, setup_py, - subpackage_name, - subpackage_path, - parent_name, - caller_level = 1): - # In case setup_py imports local modules: - sys.path.insert(0,os.path.dirname(setup_py)) - try: - fo_setup_py = open(setup_py, 'U') - setup_name = os.path.splitext(os.path.basename(setup_py))[0] - n = dot_join(self.name,subpackage_name,setup_name) - setup_module = imp.load_module('_'.join(n.split('.')), - fo_setup_py, - setup_py, - ('.py', 'U', 1)) - 
fo_setup_py.close() - if not hasattr(setup_module,'configuration'): - if not self.options['assume_default_configuration']: - self.warn('Assuming default configuration '\ - '(%s does not define configuration())'\ - % (setup_module)) - config = Configuration(subpackage_name, parent_name, - self.top_path, subpackage_path, - caller_level = caller_level + 1) - else: - pn = dot_join(*([parent_name] + subpackage_name.split('.')[:-1])) - args = (pn,) - def fix_args_py2(args): - if setup_module.configuration.func_code.co_argcount > 1: - args = args + (self.top_path,) - return args - def fix_args_py3(args): - if setup_module.configuration.__code__.co_argcount > 1: - args = args + (self.top_path,) - return args - if sys.version_info[0] < 3: - args = fix_args_py2(args) - else: - args = fix_args_py3(args) - config = setup_module.configuration(*args) - if config.name!=dot_join(parent_name,subpackage_name): - self.warn('Subpackage %r configuration returned as %r' % \ - (dot_join(parent_name,subpackage_name), config.name)) - finally: - del sys.path[0] - return config - - def get_subpackage(self,subpackage_name, - subpackage_path=None, - parent_name=None, - caller_level = 1): - """Return list of subpackage configurations. - - Parameters - ---------- - subpackage_name: str,None - Name of the subpackage to get the configuration. '*' in - subpackage_name is handled as a wildcard. - subpackage_path: str - If None, then the path is assumed to be the local path plus the - subpackage_name. If a setup.py file is not found in the - subpackage_path, then a default configuration is used. - parent_name: str - Parent name. 
- """ - if subpackage_name is None: - if subpackage_path is None: - raise ValueError( - "either subpackage_name or subpackage_path must be specified") - subpackage_name = os.path.basename(subpackage_path) - - # handle wildcards - l = subpackage_name.split('.') - if subpackage_path is None and '*' in subpackage_name: - return self._wildcard_get_subpackage(subpackage_name, - parent_name, - caller_level = caller_level+1) - assert '*' not in subpackage_name,repr((subpackage_name, subpackage_path,parent_name)) - if subpackage_path is None: - subpackage_path = njoin([self.local_path] + l) - else: - subpackage_path = njoin([subpackage_path] + l[:-1]) - subpackage_path = self.paths([subpackage_path])[0] - setup_py = njoin(subpackage_path, self.setup_name) - if not self.options['ignore_setup_xxx_py']: - if not os.path.isfile(setup_py): - setup_py = njoin(subpackage_path, - 'setup_%s.py' % (subpackage_name)) - if not os.path.isfile(setup_py): - if not self.options['assume_default_configuration']: - self.warn('Assuming default configuration '\ - '(%s/{setup_%s,setup}.py was not found)' \ - % (os.path.dirname(setup_py), subpackage_name)) - config = Configuration(subpackage_name, parent_name, - self.top_path, subpackage_path, - caller_level = caller_level+1) - else: - config = self._get_configuration_from_setup_py( - setup_py, - subpackage_name, - subpackage_path, - parent_name, - caller_level = caller_level + 1) - if config: - return [config] - else: - return [] - - def add_subpackage(self,subpackage_name, - subpackage_path=None, - standalone = False): - """Add a sub-package to the current Configuration instance. - - This is useful in a setup.py script for adding sub-packages to a - package. - - Parameters - ---------- - subpackage_name: str - name of the subpackage - subpackage_path: str - if given, the subpackage path such as the subpackage is in - subpackage_path / subpackage_name. If None,the subpackage is - assumed to be located in the local path / subpackage_name. 
- standalone: bool - """ - - if standalone: - parent_name = None - else: - parent_name = self.name - config_list = self.get_subpackage(subpackage_name,subpackage_path, - parent_name = parent_name, - caller_level = 2) - if not config_list: - self.warn('No configuration returned, assuming unavailable.') - for config in config_list: - d = config - if isinstance(config, Configuration): - d = config.todict() - assert isinstance(d,dict),repr(type(d)) - - self.info('Appending %s configuration to %s' \ - % (d.get('name'), self.name)) - self.dict_append(**d) - - dist = self.get_distribution() - if dist is not None: - self.warn('distutils distribution has been initialized,'\ - ' it may be too late to add a subpackage '+ subpackage_name) - - def add_data_dir(self,data_path): - """Recursively add files under data_path to data_files list. - - Recursively add files under data_path to the list of data_files to be - installed (and distributed). The data_path can be either a relative - path-name, or an absolute path-name, or a 2-tuple where the first - argument shows where in the install directory the data directory - should be installed to. - - Parameters - ---------- - data_path: seq,str - Argument can be either - - * 2-sequence (<datadir suffix>, <path to data directory>) - * path to data directory where python datadir suffix defaults - to package dir. 
- - Notes - ----- - Rules for installation paths: - foo/bar -> (foo/bar, foo/bar) -> parent/foo/bar - (gun, foo/bar) -> parent/gun - foo/* -> (foo/a, foo/a), (foo/b, foo/b) -> parent/foo/a, parent/foo/b - (gun, foo/*) -> (gun, foo/a), (gun, foo/b) -> gun - (gun/*, foo/*) -> parent/gun/a, parent/gun/b - /foo/bar -> (bar, /foo/bar) -> parent/bar - (gun, /foo/bar) -> parent/gun - (fun/*/gun/*, sun/foo/bar) -> parent/fun/foo/gun/bar - - Examples - -------- - For example suppose the source directory contains fun/foo.dat and - fun/bar/car.dat:: - - >>> self.add_data_dir('fun') #doctest: +SKIP - >>> self.add_data_dir(('sun', 'fun')) #doctest: +SKIP - >>> self.add_data_dir(('gun', '/full/path/to/fun'))#doctest: +SKIP - - Will install data-files to the locations:: - - <package install directory>/ - fun/ - foo.dat - bar/ - car.dat - sun/ - foo.dat - bar/ - car.dat - gun/ - foo.dat - car.dat - """ - if is_sequence(data_path): - d, data_path = data_path - else: - d = None - if is_sequence(data_path): - [self.add_data_dir((d,p)) for p in data_path] - return - if not is_string(data_path): - raise TypeError("not a string: %r" % (data_path,)) - if d is None: - if os.path.isabs(data_path): - return self.add_data_dir((os.path.basename(data_path), data_path)) - return self.add_data_dir((data_path, data_path)) - paths = self.paths(data_path, include_non_existing=False) - if is_glob_pattern(data_path): - if is_glob_pattern(d): - pattern_list = allpath(d).split(os.sep) - pattern_list.reverse() - # /a/*//b/ -> /a/*/b - rl = list(range(len(pattern_list)-1)); rl.reverse() - for i in rl: - if not pattern_list[i]: - del pattern_list[i] - # - for path in paths: - if not os.path.isdir(path): - print('Not a directory, skipping',path) - continue - rpath = rel_path(path, self.local_path) - path_list = rpath.split(os.sep) - path_list.reverse() - target_list = [] - i = 0 - for s in pattern_list: - if is_glob_pattern(s): - if i>=len(path_list): - raise ValueError('cannot fill pattern %r with %r' \ - % (d, path)) - 
target_list.append(path_list[i]) - else: - assert s==path_list[i],repr((s,path_list[i],data_path,d,path,rpath)) - target_list.append(s) - i += 1 - if path_list[i:]: - self.warn('mismatch of pattern_list=%s and path_list=%s'\ - % (pattern_list,path_list)) - target_list.reverse() - self.add_data_dir((os.sep.join(target_list),path)) - else: - for path in paths: - self.add_data_dir((d,path)) - return - assert not is_glob_pattern(d),repr(d) - - dist = self.get_distribution() - if dist is not None and dist.data_files is not None: - data_files = dist.data_files - else: - data_files = self.data_files - - for path in paths: - for d1,f in list(general_source_directories_files(path)): - target_path = os.path.join(self.path_in_package,d,d1) - data_files.append((target_path, f)) - - def _optimize_data_files(self): - data_dict = {} - for p,files in self.data_files: - if p not in data_dict: - data_dict[p] = set() - for f in files: - data_dict[p].add(f) - self.data_files[:] = [(p,list(files)) for p,files in data_dict.items()] - - def add_data_files(self,*files): - """Add data files to configuration data_files. - - Parameters - ---------- - files: sequence - Argument(s) can be either - - * 2-sequence (<datadir prefix>, <path to data file(s)>) - * paths to data files where python datadir prefix defaults - to package dir. - - Notes - ----- - The form of each element of the files sequence is very flexible - allowing many combinations of where to get the files from the package - and where they should ultimately be installed on the system. The most - basic usage is for an element of the files argument sequence to be a - simple filename. This will cause that file from the local path to be - installed to the installation path of the self.name package (package - path). The file argument can also be a relative path in which case the - entire relative path will be installed into the package directory. 
- Finally, the file can be an absolute path name in which case the file - will be found at the absolute path name but installed to the package - path. - - This basic behavior can be augmented by passing a 2-tuple in as the - file argument. The first element of the tuple should specify the - relative path (under the package install directory) where the - remaining sequence of files should be installed to (it has nothing to - do with the file-names in the source distribution). The second element - of the tuple is the sequence of files that should be installed. The - files in this sequence can be filenames, relative paths, or absolute - paths. For absolute paths the file will be installed in the top-level - package installation directory (regardless of the first argument). - Filenames and relative path names will be installed in the package - install directory under the path name given as the first element of - the tuple. - - Rules for installation paths: - - #. file.txt -> (., file.txt)-> parent/file.txt - #. foo/file.txt -> (foo, foo/file.txt) -> parent/foo/file.txt - #. /foo/bar/file.txt -> (., /foo/bar/file.txt) -> parent/file.txt - #. *.txt -> parent/a.txt, parent/b.txt - #. foo/*.txt -> parent/foo/a.txt, parent/foo/b.txt - #. */*.txt -> (*, */*.txt) -> parent/c/a.txt, parent/d/b.txt - #. (sun, file.txt) -> parent/sun/file.txt - #. (sun, bar/file.txt) -> parent/sun/file.txt - #. (sun, /foo/bar/file.txt) -> parent/sun/file.txt - #. (sun, *.txt) -> parent/sun/a.txt, parent/sun/b.txt - #. (sun, bar/*.txt) -> parent/sun/a.txt, parent/sun/b.txt - #. (sun/*, */*.txt) -> parent/sun/c/a.txt, parent/d/b.txt - - An additional feature is that the path to a data-file can actually be - a function that takes no arguments and returns the actual path(s) to - the data-files. This is useful when the data files are generated while - building the package. - - Examples - -------- - Add files to the list of data_files to be included with the package. 
- - >>> self.add_data_files('foo.dat', - ... ('fun', ['gun.dat', 'nun/pun.dat', '/tmp/sun.dat']), - ... 'bar/cat.dat', - ... '/full/path/to/can.dat') #doctest: +SKIP - - will install these data files to:: - - <package install directory>/ - foo.dat - fun/ - gun.dat - nun/ - pun.dat - sun.dat - bar/ - cat.dat - can.dat - - where <package install directory> is the package (or sub-package) - directory such as '/usr/lib/python2.4/site-packages/mypackage' - ('C:\\Python2.4\\Lib\\site-packages\\mypackage') or - '/usr/lib/python2.4/site-packages/mypackage/mysubpackage' - ('C:\\Python2.4\\Lib\\site-packages\\mypackage\\mysubpackage'). - """ - - if len(files)>1: - for f in files: - self.add_data_files(f) - return - assert len(files)==1 - if is_sequence(files[0]): - d,files = files[0] - else: - d = None - if is_string(files): - filepat = files - elif is_sequence(files): - if len(files)==1: - filepat = files[0] - else: - for f in files: - self.add_data_files((d,f)) - return - else: - raise TypeError(repr(type(files))) - - if d is None: - if hasattr(filepat, '__call__'): - d = '' - elif os.path.isabs(filepat): - d = '' - else: - d = os.path.dirname(filepat) - self.add_data_files((d,files)) - return - - paths = self.paths(filepat, include_non_existing=False) - if is_glob_pattern(filepat): - if is_glob_pattern(d): - pattern_list = d.split(os.sep) - pattern_list.reverse() - for path in paths: - path_list = path.split(os.sep) - path_list.reverse() - path_list.pop() # filename - target_list = [] - i = 0 - for s in pattern_list: - if is_glob_pattern(s): - target_list.append(path_list[i]) - i += 1 - else: - target_list.append(s) - target_list.reverse() - self.add_data_files((os.sep.join(target_list), path)) - else: - self.add_data_files((d,paths)) - return - assert not is_glob_pattern(d),repr((d,filepat)) - - dist = self.get_distribution() - if dist is not None and dist.data_files is not None: - data_files = dist.data_files - else: - data_files = self.data_files - - data_files.append((os.path.join(self.path_in_package,d),paths)) 
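The path-mapping rules documented for `add_data_dir`/`add_data_files` above can be condensed into a small standalone sketch. This is an illustration of the documented rules only, not numpy's implementation; the helper name `data_files_target` and the literal `"parent"` install directory are made up for the example, and glob/tuple-pattern cases are omitted:

```python
import os

def data_files_target(spec, package_path="parent"):
    """Compute where a single data file lands under the package install dir.

    `spec` is either a plain path (installed under the package dir, with the
    relative-path rule: foo/file.txt -> parent/foo/file.txt) or a 2-tuple
    (suffix, path) naming the install subdirectory explicitly.  Absolute
    source paths keep only their basename.  Simplified sketch, not numpy code.
    """
    if isinstance(spec, tuple):
        suffix, path = spec
    else:
        suffix, path = None, spec
    if suffix is None:
        # /foo/bar/file.txt -> (., file.txt); foo/file.txt -> (foo, foo/file.txt)
        suffix = "" if os.path.isabs(path) else os.path.dirname(path)
    return os.path.join(package_path, suffix, os.path.basename(path))

# Mirrors rules 1-3 and 8 from the docstring above:
print(data_files_target("file.txt"))             # parent/file.txt
print(data_files_target("/foo/bar/file.txt"))    # parent/file.txt
print(data_files_target(("sun", "bar/file.txt")))  # parent/sun/file.txt
```

The real methods additionally expand glob patterns via `self.paths(...)` and defer callables until build time, but the suffix/basename bookkeeping is the core of the mapping.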
- - ### XXX Implement add_py_modules - - def add_include_dirs(self,*paths): - """Add paths to configuration include directories. - - Add the given sequence of paths to the beginning of the include_dirs - list. This list will be visible to all extension modules of the - current package. - """ - include_dirs = self.paths(paths) - dist = self.get_distribution() - if dist is not None: - if dist.include_dirs is None: - dist.include_dirs = [] - dist.include_dirs.extend(include_dirs) - else: - self.include_dirs.extend(include_dirs) - - def add_numarray_include_dirs(self): - import numpy.numarray.util as nnu - self.add_include_dirs(*nnu.get_numarray_include_dirs()) - - def add_headers(self,*files): - """Add installable headers to configuration. - - Add the given sequence of files to the beginning of the headers list. - By default, headers will be installed under the - <python-include>/<self.name.replace('.','/')>/ directory. If an item of files - is a tuple, then its first argument specifies the actual installation - location relative to the <python-include> path. - - Parameters - ---------- - files: str, seq - Argument(s) can be either: - - * 2-sequence (<includedir suffix>, <path to header file(s)>) - * path(s) to header file(s) where python includedir suffix will - default to package name. - """ - headers = [] - for path in files: - if is_string(path): - [headers.append((self.name,p)) for p in self.paths(path)] - else: - if not isinstance(path, (tuple, list)) or len(path) != 2: - raise TypeError(repr(path)) - [headers.append((path[0],p)) for p in self.paths(path[1])] - dist = self.get_distribution() - if dist is not None: - if dist.headers is None: - dist.headers = [] - dist.headers.extend(headers) - else: - self.headers.extend(headers) - - def paths(self,*paths,**kws): - """Apply glob to paths and prepend local_path if needed. - - Applies glob.glob(...) to each path in the sequence (if needed) and - pre-pends the local_path if needed. 
Because this is called on all - source lists, this allows wildcard characters to be specified in lists - of sources for extension modules and libraries and scripts and allows - path-names be relative to the source directory. - - """ - include_non_existing = kws.get('include_non_existing',True) - return gpaths(paths, - local_path = self.local_path, - include_non_existing=include_non_existing) - - def _fix_paths_dict(self,kw): - for k in kw.keys(): - v = kw[k] - if k in ['sources','depends','include_dirs','library_dirs', - 'module_dirs','extra_objects']: - new_v = self.paths(v) - kw[k] = new_v - - def add_extension(self,name,sources,**kw): - """Add extension to configuration. - - Create and add an Extension instance to the ext_modules list. This - method also takes the following optional keyword arguments that are - passed on to the Extension constructor. - - Parameters - ---------- - name: str - name of the extension - sources: seq - list of the sources. The list of sources may contain functions - (called source generators) which must take an extension instance - and a build directory as inputs and return a source file or list of - source files or None. If None is returned then no sources are - generated. If the Extension instance has no sources after - processing all source generators, then no extension module is - built. - include_dirs: - define_macros: - undef_macros: - library_dirs: - libraries: - runtime_library_dirs: - extra_objects: - extra_compile_args: - extra_link_args: - export_symbols: - swig_opts: - depends: - The depends list contains paths to files or directories that the - sources of the extension module depend on. If any path in the - depends list is newer than the extension module, then the module - will be rebuilt. - language: - f2py_options: - module_dirs: - extra_info: dict,list - dict or list of dict of keywords to be appended to keywords. - - Notes - ----- - The self.paths(...) method is applied to all lists that may contain - paths. 
- """ - ext_args = copy.copy(kw) - ext_args['name'] = dot_join(self.name,name) - ext_args['sources'] = sources - - if 'extra_info' in ext_args: - extra_info = ext_args['extra_info'] - del ext_args['extra_info'] - if isinstance(extra_info, dict): - extra_info = [extra_info] - for info in extra_info: - assert isinstance(info, dict), repr(info) - dict_append(ext_args,**info) - - self._fix_paths_dict(ext_args) - - # Resolve out-of-tree dependencies - libraries = ext_args.get('libraries',[]) - libnames = [] - ext_args['libraries'] = [] - for libname in libraries: - if isinstance(libname,tuple): - self._fix_paths_dict(libname[1]) - - # Handle library names of the form libname@relative/path/to/library - if '@' in libname: - lname,lpath = libname.split('@',1) - lpath = os.path.abspath(njoin(self.local_path,lpath)) - if os.path.isdir(lpath): - c = self.get_subpackage(None,lpath, - caller_level = 2) - if isinstance(c,Configuration): - c = c.todict() - for l in [l[0] for l in c.get('libraries',[])]: - llname = l.split('__OF__',1)[0] - if llname == lname: - c.pop('name',None) - dict_append(ext_args,**c) - break - continue - libnames.append(libname) - - ext_args['libraries'] = libnames + ext_args['libraries'] - - from numpy.distutils.core import Extension - ext = Extension(**ext_args) - self.ext_modules.append(ext) - - dist = self.get_distribution() - if dist is not None: - self.warn('distutils distribution has been initialized,'\ - ' it may be too late to add an extension '+name) - return ext - - def add_library(self,name,sources,**build_info): - """ - Add library to configuration. - - Parameters - ---------- - name : str - Name of the extension. - sources : sequence - List of the sources. The list of sources may contain functions - (called source generators) which must take an extension instance - and a build directory as inputs and return a source file or list of - source files or None. If None is returned then no sources are - generated. 
If the Extension instance has no sources after - processing all source generators, then no extension module is - built. - build_info : dict, optional - The following keys are allowed: - - * depends - * macros - * include_dirs - * extra_compiler_args - * f2py_options - * language - - """ - self._add_library(name, sources, None, build_info) - - dist = self.get_distribution() - if dist is not None: - self.warn('distutils distribution has been initialized,'\ - ' it may be too late to add a library '+ name) - - def _add_library(self, name, sources, install_dir, build_info): - """Common implementation for add_library and add_installed_library. Do - not use directly""" - build_info = copy.copy(build_info) - name = name #+ '__OF__' + self.name - build_info['sources'] = sources - - # Sometimes, depends is not set up to an empty list by default, and if - # depends is not given to add_library, distutils barfs (#1134) - if not 'depends' in build_info: - build_info['depends'] = [] - - self._fix_paths_dict(build_info) - - # Add to libraries list so that it is build with build_clib - self.libraries.append((name, build_info)) - - def add_installed_library(self, name, sources, install_dir, build_info=None): - """ - Similar to add_library, but the specified library is installed. - - Most C libraries used with `distutils` are only used to build python - extensions, but libraries built through this method will be installed - so that they can be reused by third-party packages. - - Parameters - ---------- - name : str - Name of the installed library. - sources : sequence - List of the library's source files. See `add_library` for details. - install_dir : str - Path to install the library, relative to the current sub-package. 
- build_info : dict, optional - The following keys are allowed: - - * depends - * macros - * include_dirs - * extra_compiler_args - * f2py_options - * language - - Returns - ------- - None - - See Also - -------- - add_library, add_npy_pkg_config, get_info - - Notes - ----- - The best way to encode the options required to link against the specified - C libraries is to use a "libname.ini" file, and use `get_info` to - retrieve the required options (see `add_npy_pkg_config` for more - information). - - """ - if not build_info: - build_info = {} - - install_dir = os.path.join(self.package_path, install_dir) - self._add_library(name, sources, install_dir, build_info) - self.installed_libraries.append(InstallableLib(name, build_info, install_dir)) - - def add_npy_pkg_config(self, template, install_dir, subst_dict=None): - """ - Generate and install a npy-pkg config file from a template. - - The config file generated from `template` is installed in the - given install directory, using `subst_dict` for variable substitution. - - Parameters - ---------- - template : str - The path of the template, relative to the current package path. - install_dir : str - Where to install the npy-pkg config file, relative to the current - package path. - subst_dict : dict, optional - If given, any string of the form ``@key@`` will be replaced by - ``subst_dict[key]`` in the template file when installed. The install - prefix is always available through the variable ``@prefix@``, since the - install prefix is not easy to get reliably from setup.py. - - See also - -------- - add_installed_library, get_info - - Notes - ----- - This works for both standard installs and in-place builds, i.e. the - ``@prefix@`` refers to the source directory for in-place builds. 
- - Examples - -------- - :: - - config.add_npy_pkg_config('foo.ini.in', 'lib', {'foo': bar}) - - Assuming the foo.ini.in file has the following content:: - - [meta] - Name=@foo@ - Version=1.0 - Description=dummy description - - [default] - Cflags=-I@prefix@/include - Libs= - - The generated file will have the following content:: - - [meta] - Name=bar - Version=1.0 - Description=dummy description - - [default] - Cflags=-Iprefix_dir/include - Libs= - - and will be installed as foo.ini in the 'lib' subpath. - - """ - if subst_dict is None: - subst_dict = {} - basename = os.path.splitext(template)[0] - template = os.path.join(self.package_path, template) - - if self.name in self.installed_pkg_config: - self.installed_pkg_config[self.name].append((template, install_dir, - subst_dict)) - else: - self.installed_pkg_config[self.name] = [(template, install_dir, - subst_dict)] - - def add_scons_installed_library(self, name, install_dir): - """ - Add a scons-built installable library to distutils. - - Parameters - ---------- - name : str - The name of the library. - install_dir : str - Path to install the library, relative to the current sub-package. - - """ - install_dir = os.path.join(self.package_path, install_dir) - self.installed_libraries.append(InstallableLib(name, {}, install_dir)) - - def add_sconscript(self, sconscript, subpackage_path=None, - standalone = False, pre_hook = None, - post_hook = None, source_files = None, package_path=None): - """Add a sconscript to configuration. - - pre_hook and post hook should be sequences of callable, which will be - use before and after executing scons. The callable should be defined as - callable(*args, **kw). It is ugly, but well, hooks are ugly anyway... 
- - sconscript can be None, which can be useful to add only post/pre - hooks.""" - if standalone: - parent_name = None - else: - parent_name = self.name - - dist = self.get_distribution() - # Convert the sconscript name to a relative filename (relative from top - # setup.py's directory) - fullsconsname = self.paths(sconscript)[0] - - # XXX: Think about a way to automatically register source files from - # scons... - full_source_files = [] - if source_files: - full_source_files.extend([self.paths(i)[0] for i in source_files]) - - scons_info = SconsInfo(fullsconsname, parent_name, - pre_hook, post_hook, - full_source_files, package_path) - if dist is not None: - if dist.scons_data is None: - dist.scons_data = [] - dist.scons_data.append(scons_info) - self.warn('distutils distribution has been initialized,'\ - ' it may be too late to add a sconscript '+ str(sconscript)) - # XXX: we add a fake extension, to correctly initialize some - # options in distutils command. - dist.add_extension('', sources = []) - else: - self.scons_data.append(scons_info) - # XXX: we add a fake extension, to correctly initialize some - # options in distutils command. - self.add_extension('', sources = []) - - def add_scripts(self,*files): - """Add scripts to configuration. - - Add the sequence of files to the beginning of the scripts list. - Scripts will be installed under the <prefix>/bin/ directory. 
- - """ - scripts = self.paths(files) - dist = self.get_distribution() - if dist is not None: - if dist.scripts is None: - dist.scripts = [] - dist.scripts.extend(scripts) - else: - self.scripts.extend(scripts) - - def dict_append(self,**dict): - for key in self.list_keys: - a = getattr(self,key) - a.extend(dict.get(key,[])) - for key in self.dict_keys: - a = getattr(self,key) - a.update(dict.get(key,{})) - known_keys = self.list_keys + self.dict_keys + self.extra_keys - for key in dict.keys(): - if key not in known_keys: - a = getattr(self, key, None) - if a and a==dict[key]: continue - self.warn('Inheriting attribute %r=%r from %r' \ - % (key,dict[key],dict.get('name','?'))) - setattr(self,key,dict[key]) - self.extra_keys.append(key) - elif key in self.extra_keys: - self.info('Ignoring attempt to set %r (from %r to %r)' \ - % (key, getattr(self,key), dict[key])) - elif key in known_keys: - # key is already processed above - pass - else: - raise ValueError("Don't know about key=%r" % (key)) - - def __str__(self): - from pprint import pformat - known_keys = self.list_keys + self.dict_keys + self.extra_keys - s = '<'+5*'-' + '\n' - s += 'Configuration of '+self.name+':\n' - known_keys.sort() - for k in known_keys: - a = getattr(self,k,None) - if a: - s += '%s = %s\n' % (k,pformat(a)) - s += 5*'-' + '>' - return s - - def get_config_cmd(self): - """ - Returns the numpy.distutils config command instance. - """ - cmd = get_cmd('config') - cmd.ensure_finalized() - cmd.dump_source = 0 - cmd.noisy = 0 - old_path = os.environ.get('PATH') - if old_path: - path = os.pathsep.join(['.',old_path]) - os.environ['PATH'] = path - return cmd - - def get_build_temp_dir(self): - """ - Return a path to a temporary directory where temporary files should be - placed. - """ - cmd = get_cmd('build') - cmd.ensure_finalized() - return cmd.build_temp - - def have_f77c(self): - """Check for availability of Fortran 77 compiler. 
- - Use it inside source generating function to ensure that - setup distribution instance has been initialized. - - Notes - ----- - True if a Fortran 77 compiler is available (because a simple Fortran 77 - code was able to be compiled successfully). - """ - simple_fortran_subroutine = ''' - subroutine simple - end - ''' - config_cmd = self.get_config_cmd() - flag = config_cmd.try_compile(simple_fortran_subroutine,lang='f77') - return flag - - def have_f90c(self): - """Check for availability of Fortran 90 compiler. - - Use it inside source generating function to ensure that - setup distribution instance has been initialized. - - Notes - ----- - True if a Fortran 90 compiler is available (because a simple Fortran - 90 code was able to be compiled successfully) - """ - simple_fortran_subroutine = ''' - subroutine simple - end - ''' - config_cmd = self.get_config_cmd() - flag = config_cmd.try_compile(simple_fortran_subroutine,lang='f90') - return flag - - def append_to(self, extlib): - """Append libraries, include_dirs to extension or library item. - """ - if is_sequence(extlib): - lib_name, build_info = extlib - dict_append(build_info, - libraries=self.libraries, - include_dirs=self.include_dirs) - else: - from numpy.distutils.core import Extension - assert isinstance(extlib,Extension), repr(extlib) - extlib.libraries.extend(self.libraries) - extlib.include_dirs.extend(self.include_dirs) - - def _get_svn_revision(self,path): - """Return path's SVN revision number. 
- """ - revision = None - m = None - try: - p = subprocess.Popen(['svnversion'], shell=True, - stdout=subprocess.PIPE, stderr=STDOUT, - close_fds=True) - sout = p.stdout - m = re.match(r'(?P<revision>\d+)', sout.read()) - except: - pass - if m: - revision = int(m.group('revision')) - return revision - if sys.platform=='win32' and os.environ.get('SVN_ASP_DOT_NET_HACK',None): - entries = njoin(path,'_svn','entries') - else: - entries = njoin(path,'.svn','entries') - if os.path.isfile(entries): - f = open(entries) - fstr = f.read() - f.close() - if fstr[:5] == '<?xml':  # pre 1.4 - m = re.search(r'revision="(?P<revision>\d+)"',fstr) - if m: - revision = int(m.group('revision')) - else: # non-xml entries file --- check to be sure that - m = re.search(r'dir[\n\r]+(?P<revision>\d+)', fstr) - if m: - revision = int(m.group('revision')) - return revision - - def get_version(self, version_file=None, version_variable=None): - """Try to get version string of a package. - - Return a version string of the current package or None if the version - information could not be detected. - - Notes - ----- - This method scans files named - __version__.py, _version.py, version.py, and - __svn_version__.py for string variables version, __version\__, and - _version, until a version number is found. - """ - version = getattr(self,'version',None) - if version is not None: - return version - - # Get version from version file. 
- if version_file is None: - files = ['__version__.py', - self.name.split('.')[-1]+'_version.py', - 'version.py', - '__svn_version__.py'] - else: - files = [version_file] - if version_variable is None: - version_vars = ['version', - '__version__', - self.name.split('.')[-1]+'_version'] - else: - version_vars = [version_variable] - for f in files: - fn = njoin(self.local_path,f) - if os.path.isfile(fn): - info = (open(fn),fn,('.py','U',1)) - name = os.path.splitext(os.path.basename(fn))[0] - n = dot_join(self.name,name) - try: - version_module = imp.load_module('_'.join(n.split('.')),*info) - except ImportError: - msg = get_exception() - self.warn(str(msg)) - version_module = None - if version_module is None: - continue - - for a in version_vars: - version = getattr(version_module,a,None) - if version is not None: - break - if version is not None: - break - - if version is not None: - self.version = version - return version - - # Get version as SVN revision number - revision = self._get_svn_revision(self.local_path) - if revision is not None: - version = str(revision) - self.version = version - - return version - - def make_svn_version_py(self, delete=True): - """Appends a data function to the data_files list that will generate - __svn_version__.py file to the current package directory. - - Generate package __svn_version__.py file from SVN revision number, - it will be removed after python exits but will be available - when sdist, etc commands are executed. - - Notes - ----- - If __svn_version__.py existed before, nothing is done. - - This is - intended for working with source directories that are in an SVN - repository. 
- """ - target = njoin(self.local_path,'__svn_version__.py') - revision = self._get_svn_revision(self.local_path) - if os.path.isfile(target) or revision is None: - return - else: - def generate_svn_version_py(): - if not os.path.isfile(target): - version = str(revision) - self.info('Creating %s (version=%r)' % (target,version)) - f = open(target,'w') - f.write('version = %r\n' % (version)) - f.close() - - import atexit - def rm_file(f=target,p=self.info): - if delete: - try: os.remove(f); p('removed '+f) - except OSError: pass - try: os.remove(f+'c'); p('removed '+f+'c') - except OSError: pass - - atexit.register(rm_file) - - return target - - self.add_data_files(('', generate_svn_version_py())) - - def make_config_py(self,name='__config__'): - """Generate package __config__.py file containing system_info - information used during building the package. - - This file is installed to the - package installation directory. - - """ - self.py_modules.append((self.name,name,generate_config_py)) - - def scons_make_config_py(self, name = '__config__'): - """Generate package __config__.py file containing system_info - information used during building the package. - """ - self.py_modules.append((self.name, name, scons_generate_config_py)) - - def get_info(self,*names): - """Get resources information. - - Return information (from system_info.get_info) for all of the names in - the argument list in a single dictionary. 
- """ - from system_info import get_info, dict_append - info_dict = {} - for a in names: - dict_append(info_dict,**get_info(a)) - return info_dict - - -def get_cmd(cmdname, _cache={}): - if cmdname not in _cache: - import distutils.core - dist = distutils.core._setup_distribution - if dist is None: - from distutils.errors import DistutilsInternalError - raise DistutilsInternalError( - 'setup distribution instance not initialized') - cmd = dist.get_command_obj(cmdname) - _cache[cmdname] = cmd - return _cache[cmdname] - -def get_numpy_include_dirs(): - # numpy_include_dirs are set by numpy/core/setup.py, otherwise [] - include_dirs = Configuration.numpy_include_dirs[:] - if not include_dirs: - import numpy - include_dirs = [ numpy.get_include() ] - # else running numpy/core/setup.py - return include_dirs - -def get_npy_pkg_dir(): - """Return the path where to find the npy-pkg-config directory.""" - # XXX: import here for bootstrapping reasons - import numpy - d = os.path.join(os.path.dirname(numpy.__file__), - 'core', 'lib', 'npy-pkg-config') - return d - -def get_pkg_info(pkgname, dirs=None): - """ - Return library info for the given package. - - Parameters - ---------- - pkgname : str - Name of the package (should match the name of the .ini file, without - the extension, e.g. foo for the file foo.ini). - dirs : sequence, optional - If given, should be a sequence of additional directories where to look - for npy-pkg-config files. Those directories are searched prior to the - NumPy directory. - - Returns - ------- - pkginfo : class instance - The `LibraryInfo` instance containing the build information. - - Raises - ------ - PkgNotFound - If the package is not found. 
- - See Also - -------- - Configuration.add_npy_pkg_config, Configuration.add_installed_library, - get_info - - """ - from numpy.distutils.npy_pkg_config import read_config - - if dirs: - dirs.append(get_npy_pkg_dir()) - else: - dirs = [get_npy_pkg_dir()] - return read_config(pkgname, dirs) - -def get_info(pkgname, dirs=None): - """ - Return an info dict for a given C library. - - The info dict contains the necessary options to use the C library. - - Parameters - ---------- - pkgname : str - Name of the package (should match the name of the .ini file, without - the extension, e.g. foo for the file foo.ini). - dirs : sequence, optional - If given, should be a sequence of additional directories where to look - for npy-pkg-config files. Those directories are searched prior to the - NumPy directory. - - Returns - ------- - info : dict - The dictionary with build information. - - Raises - ------ - PkgNotFound - If the package is not found. - - See Also - -------- - Configuration.add_npy_pkg_config, Configuration.add_installed_library, - get_pkg_info - - Examples - -------- - To get the necessary information for the npymath library from NumPy: - - >>> npymath_info = np.distutils.misc_util.get_info('npymath') - >>> npymath_info #doctest: +SKIP - {'define_macros': [], 'libraries': ['npymath'], 'library_dirs': - ['.../numpy/core/lib'], 'include_dirs': ['.../numpy/core/include']} - - This info dict can then be used as input to a `Configuration` instance:: - - config.add_extension('foo', sources=['foo.c'], extra_info=npymath_info) - - """ - from numpy.distutils.npy_pkg_config import parse_flags - pkg_info = get_pkg_info(pkgname, dirs) - - # Translate LibraryInfo instance into a build_info dict - info = parse_flags(pkg_info.cflags()) - for k, v in parse_flags(pkg_info.libs()).items(): - info[k].extend(v) - - # add_extension extra_info argument is ANAL - info['define_macros'] = info['macros'] - del info['macros'] - del info['ignored'] - - return info - -def is_bootstrapping(): 
- import __builtin__ - try: - __builtin__.__NUMPY_SETUP__ - return True - except AttributeError: - return False - __NUMPY_SETUP__ = False - -def scons_generate_config_py(target): - """generate config.py file containing system_info information - used during building the package. - - usage: - config['py_modules'].append((packagename, '__config__',generate_config_py)) - """ - from distutils.dir_util import mkpath - from numscons import get_scons_configres_dir, get_scons_configres_filename - d = {} - mkpath(os.path.dirname(target)) - f = open(target, 'w') - f.write('# this file is generated by %s\n' % (os.path.abspath(sys.argv[0]))) - f.write('# it contains system_info results at the time of building this package.\n') - f.write('__all__ = ["show"]\n\n') - confdir = get_scons_configres_dir() - confilename = get_scons_configres_filename() - for root, dirs, files in os.walk(confdir): - if files: - file = os.path.join(root, confilename) - assert root.startswith(confdir) - pkg_name = '.'.join(root[len(confdir)+1:].split(os.sep)) - fid = open(file, 'r') - try: - cnt = fid.read() - d[pkg_name] = eval(cnt) - finally: - fid.close() - # d is a dictionary whose keys are package names, and values the - # corresponding configuration. Each configuration is itself a dictionary - # (lib : libinfo) - f.write('_config = %s\n' % d) - f.write(r''' -def show(): - for pkg, config in _config.items(): - print("package %s configuration:" % pkg) - for lib, libc in config.items(): - print(' %s' % lib) - for line in libc.split('\n'): - print('\t%s' % line) - ''') - f.close() - return target - -######################### - -def default_config_dict(name = None, parent_name = None, local_path=None): - """Return a configuration dictionary for usage in - configuration() function defined in file setup_.py. 
- """ - import warnings - warnings.warn('Use Configuration(%r,%r,top_path=%r) instead of '\ - 'deprecated default_config_dict(%r,%r,%r)' - % (name, parent_name, local_path, - name, parent_name, local_path, - )) - c = Configuration(name, parent_name, local_path) - return c.todict() - - -def dict_append(d, **kws): - for k, v in kws.items(): - if k in d: - ov = d[k] - if isinstance(ov,str): - d[k] = v - else: - d[k].extend(v) - else: - d[k] = v - -def appendpath(prefix, path): - if os.path.sep != '/': - prefix = prefix.replace('/', os.path.sep) - path = path.replace('/', os.path.sep) - drive = '' - if os.path.isabs(path): - drive = os.path.splitdrive(prefix)[0] - absprefix = os.path.splitdrive(os.path.abspath(prefix))[1] - pathdrive, path = os.path.splitdrive(path) - d = os.path.commonprefix([absprefix, path]) - if os.path.join(absprefix[:len(d)], absprefix[len(d):]) != absprefix \ - or os.path.join(path[:len(d)], path[len(d):]) != path: - # Handle invalid paths - d = os.path.dirname(d) - subpath = path[len(d):] - if os.path.isabs(subpath): - subpath = subpath[1:] - else: - subpath = path - return os.path.normpath(njoin(drive + prefix, subpath)) - -def generate_config_py(target): - """Generate config.py file containing system_info information - used during building the package. 
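An editorial aside, not part of the diff: the module-level `dict_append` helper shown above merges keyword arguments into a build-info dict, replacing string values wholesale but extending list-like values in place. A minimal standalone sketch of that behavior (reimplemented here so it runs without numpy.distutils):

```python
def dict_append(d, **kws):
    # Mirror of the helper above: string values are replaced,
    # other (list-like) values are extended in place.
    for k, v in kws.items():
        if k in d:
            if isinstance(d[k], str):
                d[k] = v
            else:
                d[k].extend(v)
        else:
            d[k] = v

info = {'libraries': ['m'], 'define_macros': []}
dict_append(info, libraries=['npymath'], include_dirs=['/usr/include'])
# info['libraries'] is now ['m', 'npymath']; 'include_dirs' was
# absent, so it is simply set to ['/usr/include'].
```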
- - Usage: - config['py_modules'].append((packagename, '__config__',generate_config_py)) - """ - from numpy.distutils.system_info import system_info - from distutils.dir_util import mkpath - mkpath(os.path.dirname(target)) - f = open(target, 'w') - f.write('# This file is generated by %s\n' % (os.path.abspath(sys.argv[0]))) - f.write('# It contains system_info results at the time of building this package.\n') - f.write('__all__ = ["get_info","show"]\n\n') - for k, i in system_info.saved_results.items(): - f.write('%s=%r\n' % (k, i)) - f.write(r''' -def get_info(name): - g = globals() - return g.get(name, g.get(name + "_info", {})) - -def show(): - for name,info_dict in globals().items(): - if name[0] == "_" or type(info_dict) is not type({}): continue - print(name + ":") - if not info_dict: - print(" NOT AVAILABLE") - for k,v in info_dict.items(): - v = str(v) - if k == "sources" and len(v) > 200: - v = v[:60] + " ...\n... " + v[-60:] - print(" %s = %s" % (k,v)) - ''') - - f.close() - return target - -def msvc_version(compiler): - """Return version major and minor of compiler instance if it is - MSVC, raise an exception otherwise.""" - if not compiler.compiler_type == "msvc": - raise ValueError("Compiler instance is not msvc (%s)"\ - % compiler.compiler_type) - return compiler._MSVCCompiler__version - -if sys.version[:3] >= '2.5': - def get_build_architecture(): - from distutils.msvccompiler import get_build_architecture - return get_build_architecture() -else: - #copied from python 2.5.1 distutils/msvccompiler.py - def get_build_architecture(): - """Return the processor architecture. - - Possible results are "Intel", "Itanium", or "AMD64". 
- """ - prefix = " bit (" - i = sys.version.find(prefix) - if i == -1: - return "Intel" - j = sys.version.find(")", i) - return sys.version[i+len(prefix):j] diff --git a/pythonPackages/numpy/numpy/distutils/npy_pkg_config.py b/pythonPackages/numpy/numpy/distutils/npy_pkg_config.py deleted file mode 100755 index e559e85887..0000000000 --- a/pythonPackages/numpy/numpy/distutils/npy_pkg_config.py +++ /dev/null @@ -1,455 +0,0 @@ -import sys -if sys.version_info[0] < 3: - from ConfigParser import SafeConfigParser, NoOptionError -else: - from configparser import SafeConfigParser, NoOptionError -import re -import os -import shlex - -__all__ = ['FormatError', 'PkgNotFound', 'LibraryInfo', 'VariableSet', - 'read_config', 'parse_flags'] - -_VAR = re.compile('\$\{([a-zA-Z0-9_-]+)\}') - -class FormatError(IOError): - """ - Exception thrown when there is a problem parsing a configuration file. - - """ - def __init__(self, msg): - self.msg = msg - - def __str__(self): - return self.msg - -class PkgNotFound(IOError): - """Exception raised when a package can not be located.""" - def __init__(self, msg): - self.msg = msg - - def __str__(self): - return self.msg - -def parse_flags(line): - """ - Parse a line from a config file containing compile flags. - - Parameters - ---------- - line : str - A single line containing one or more compile flags. - - Returns - ------- - d : dict - Dictionary of parsed flags, split into relevant categories. 
- These categories are the keys of `d`: - - * 'include_dirs' - * 'library_dirs' - * 'libraries' - * 'macros' - * 'ignored' - - """ - lexer = shlex.shlex(line) - lexer.whitespace_split = True - - d = {'include_dirs': [], 'library_dirs': [], 'libraries': [], - 'macros': [], 'ignored': []} - def next_token(t): - if t.startswith('-I'): - if len(t) > 2: - d['include_dirs'].append(t[2:]) - else: - t = lexer.get_token() - d['include_dirs'].append(t) - elif t.startswith('-L'): - if len(t) > 2: - d['library_dirs'].append(t[2:]) - else: - t = lexer.get_token() - d['library_dirs'].append(t) - elif t.startswith('-l'): - d['libraries'].append(t[2:]) - elif t.startswith('-D'): - d['macros'].append(t[2:]) - else: - d['ignored'].append(t) - return lexer.get_token() - - t = lexer.get_token() - while t: - t = next_token(t) - - return d - -def _escape_backslash(val): - return val.replace('\\', '\\\\') - -class LibraryInfo(object): - """ - Object containing build information about a library. - - Parameters - ---------- - name : str - The library name. - description : str - Description of the library. - version : str - Version string. - sections : dict - The sections of the configuration file for the library. The keys are - the section headers, the values the text under each header. - vars : class instance - A `VariableSet` instance, which contains ``(name, value)`` pairs for - variables defined in the configuration file for the library. - requires : sequence, optional - The required libraries for the library to be installed. - - Notes - ----- - All input parameters (except "sections" which is a method) are available as - attributes of the same name. 
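As a runnable sketch of the flag-splitting logic in `parse_flags` above (reimplemented standalone with `shlex.split` rather than a `shlex.shlex` lexer; the category names match the dict keys the original returns):

```python
import shlex

def parse_flags(line):
    # Split a compile-flags line into the same categories as above:
    # -I -> include_dirs, -L -> library_dirs, -l -> libraries,
    # -D -> macros, anything else -> ignored.
    d = {'include_dirs': [], 'library_dirs': [], 'libraries': [],
         'macros': [], 'ignored': []}
    tokens = shlex.split(line)
    i = 0
    while i < len(tokens):
        t = tokens[i]
        if t.startswith('-I'):
            arg = t[2:]
            if not arg:  # '-I /path' form: the value is the next token
                i += 1
                arg = tokens[i]
            d['include_dirs'].append(arg)
        elif t.startswith('-L'):
            arg = t[2:]
            if not arg:
                i += 1
                arg = tokens[i]
            d['library_dirs'].append(arg)
        elif t.startswith('-l'):
            d['libraries'].append(t[2:])
        elif t.startswith('-D'):
            d['macros'].append(t[2:])
        else:
            d['ignored'].append(t)
        i += 1
    return d

flags = parse_flags('-I/usr/include -L/usr/lib -lnpymath -DNDEBUG foo.o')
# flags['libraries'] == ['npymath'], flags['ignored'] == ['foo.o']
```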
- - """ - def __init__(self, name, description, version, sections, vars, requires=None): - self.name = name - self.description = description - if requires: - self.requires = requires - else: - self.requires = [] - self.version = version - self._sections = sections - self.vars = vars - - def sections(self): - """ - Return the section headers of the config file. - - Parameters - ---------- - None - - Returns - ------- - keys : list of str - The list of section headers. - - """ - return self._sections.keys() - - def cflags(self, section="default"): - val = self.vars.interpolate(self._sections[section]['cflags']) - return _escape_backslash(val) - - def libs(self, section="default"): - val = self.vars.interpolate(self._sections[section]['libs']) - return _escape_backslash(val) - - def __str__(self): - m = ['Name: %s' % self.name] - m.append('Description: %s' % self.description) - if self.requires: - m.append('Requires:') - else: - m.append('Requires: %s' % ",".join(self.requires)) - m.append('Version: %s' % self.version) - - return "\n".join(m) - -class VariableSet(object): - """ - Container object for the variables defined in a config file. - - `VariableSet` can be used as a plain dictionary, with the variable names - as keys. - - Parameters - ---------- - d : dict - Dict of items in the "variables" section of the configuration file. 
- - """ - def __init__(self, d): - self._raw_data = dict([(k, v) for k, v in d.items()]) - - self._re = {} - self._re_sub = {} - - self._init_parse() - - def _init_parse(self): - for k, v in self._raw_data.items(): - self._init_parse_var(k, v) - - def _init_parse_var(self, name, value): - self._re[name] = re.compile(r'\$\{%s\}' % name) - self._re_sub[name] = value - - def interpolate(self, value): - # Brute force: we keep interpolating until there is no '${var}' anymore - # or until interpolated string is equal to input string - def _interpolate(value): - for k in self._re.keys(): - value = self._re[k].sub(self._re_sub[k], value) - return value - while _VAR.search(value): - nvalue = _interpolate(value) - if nvalue == value: - break - value = nvalue - - return value - - def variables(self): - """ - Return the list of variable names. - - Parameters - ---------- - None - - Returns - ------- - names : list of str - The names of all variables in the `VariableSet` instance. - - """ - return self._raw_data.keys() - - # Emulate a dict to set/get variables values - def __getitem__(self, name): - return self._raw_data[name] - - def __setitem__(self, name, value): - self._raw_data[name] = value - self._init_parse_var(name, value) - -def parse_meta(config): - if not config.has_section('meta'): - raise FormatError("No meta section found !") - - d = {} - for name, value in config.items('meta'): - d[name] = value - - for k in ['name', 'description', 'version']: - if not d.has_key(k): - raise FormatError("Option %s (section [meta]) is mandatory, " - "but not found" % k) - - if not d.has_key('requires'): - d['requires'] = [] - - return d - -def parse_variables(config): - if not config.has_section('variables'): - raise FormatError("No variables section found !") - - d = {} - - for name, value in config.items("variables"): - d[name] = value - - return VariableSet(d) - -def parse_sections(config): - return meta_d, r - -def pkg_to_filename(pkg_name): - return "%s.ini" % pkg_name - 
-def parse_config(filename, dirs=None): - if dirs: - filenames = [os.path.join(d, filename) for d in dirs] - else: - filenames = [filename] - - config = SafeConfigParser() - n = config.read(filenames) - if not len(n) >= 1: - raise PkgNotFound("Could not find file(s) %s" % str(filenames)) - - # Parse meta and variables sections - meta = parse_meta(config) - - vars = {} - if config.has_section('variables'): - for name, value in config.items("variables"): - vars[name] = _escape_backslash(value) - - # Parse "normal" sections - secs = [s for s in config.sections() if not s in ['meta', 'variables']] - sections = {} - - requires = {} - for s in secs: - d = {} - if config.has_option(s, "requires"): - requires[s] = config.get(s, 'requires') - - for name, value in config.items(s): - d[name] = value - sections[s] = d - - return meta, vars, sections, requires - -def _read_config_imp(filenames, dirs=None): - def _read_config(f): - meta, vars, sections, reqs = parse_config(f, dirs) - # recursively add sections and variables of required libraries - for rname, rvalue in reqs.items(): - nmeta, nvars, nsections, nreqs = _read_config(pkg_to_filename(rvalue)) - - # Update var dict for variables not in 'top' config file - for k, v in nvars.items(): - if not vars.has_key(k): - vars[k] = v - - # Update sec dict - for oname, ovalue in nsections[rname].items(): - sections[rname][oname] += ' %s' % ovalue - - return meta, vars, sections, reqs - - meta, vars, sections, reqs = _read_config(filenames) - - # FIXME: document this. If pkgname is defined in the variables section, and - # there is no pkgdir variable defined, pkgdir is automatically defined to - # the path of pkgname. 
This requires the package to be imported to work - if not vars.has_key("pkgdir") and vars.has_key("pkgname"): - pkgname = vars["pkgname"] - if not pkgname in sys.modules: - raise ValueError("You should import %s to get information on %s" % - (pkgname, meta["name"])) - - mod = sys.modules[pkgname] - vars["pkgdir"] = _escape_backslash(os.path.dirname(mod.__file__)) - - return LibraryInfo(name=meta["name"], description=meta["description"], - version=meta["version"], sections=sections, vars=VariableSet(vars)) - -# Trivial cache to cache LibraryInfo instances creation. To be really -# efficient, the cache should be handled in read_config, since a same file can -# be parsed many time outside LibraryInfo creation, but I doubt this will be a -# problem in practice -_CACHE = {} -def read_config(pkgname, dirs=None): - """ - Return library info for a package from its configuration file. - - Parameters - ---------- - pkgname : str - Name of the package (should match the name of the .ini file, without - the extension, e.g. foo for the file foo.ini). - dirs : sequence, optional - If given, should be a sequence of directories - usually including - the NumPy base directory - where to look for npy-pkg-config files. - - Returns - ------- - pkginfo : class instance - The `LibraryInfo` instance containing the build information. - - Raises - ------ - PkgNotFound - If the package is not found. 
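The `${var}` substitution performed by `VariableSet.interpolate` above can be sketched standalone. Like the original, it re-substitutes until no `${...}` pattern remains or a full pass makes no progress (a guard against substitution cycles); `interpolate` and `_VAR` here mirror the names in the diff but this is a simplified reimplementation:

```python
import re

_VAR = re.compile(r'\$\{([a-zA-Z0-9_-]+)\}')

def interpolate(value, variables):
    # Repeatedly substitute ${name} -> variables[name], stopping when
    # no pattern is left or a pass changes nothing (cycle guard).
    while _VAR.search(value):
        nvalue = value
        for name, repl in variables.items():
            nvalue = re.sub(r'\$\{%s\}' % re.escape(name), repl, nvalue)
        if nvalue == value:
            break
        value = nvalue
    return value

cflags = interpolate('-I${prefix}/include',
                     {'prefix': '${base}/npy', 'base': '/opt'})
# -> '-I/opt/npy/include': 'prefix' expands to a value that itself
# contains ${base}, which the loop resolves in turn.
```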
- - See Also - -------- - misc_util.get_info, misc_util.get_pkg_info - - Examples - -------- - >>> npymath_info = np.distutils.npy_pkg_config.read_config('npymath') - >>> type(npymath_info) - <class 'numpy.distutils.npy_pkg_config.LibraryInfo'> - >>> print npymath_info - Name: npymath - Description: Portable, core math library implementing C99 standard - Requires: - Version: 0.1 #random - - """ - try: - return _CACHE[pkgname] - except KeyError: - v = _read_config_imp(pkg_to_filename(pkgname), dirs) - _CACHE[pkgname] = v - return v - -# TODO: -# - implements version comparison (modversion + atleast) - -# pkg-config simple emulator - useful for debugging, and maybe later to query -# the system -if __name__ == '__main__': - import sys - from optparse import OptionParser - import glob - - parser = OptionParser() - parser.add_option("--cflags", dest="cflags", action="store_true", - help="output all preprocessor and compiler flags") - parser.add_option("--libs", dest="libs", action="store_true", - help="output all linker flags") - parser.add_option("--use-section", dest="section", - help="use this section instead of default for options") - parser.add_option("--version", dest="version", action="store_true", - help="output version") - parser.add_option("--atleast-version", dest="min_version", - help="Minimal version") - parser.add_option("--list-all", dest="list_all", action="store_true", - help="Minimal version") - parser.add_option("--define-variable", dest="define_variable", - help="Replace variable with the given value") - - (options, args) = parser.parse_args(sys.argv) - - if len(args) < 2: - raise ValueError("Expect package name on the command line:") - - if options.list_all: - files = glob.glob("*.ini") - for f in files: - info = read_config(f) - print ("%s\t%s - %s" % (info.name, info.name, info.description)) - - pkg_name = args[1] - import os - d = os.environ.get('NPY_PKG_CONFIG_PATH') - if d: - info = read_config(pkg_name, ['numpy/core/lib/npy-pkg-config', '.', d]) - else: - info = read_config(pkg_name, 
['numpy/core/lib/npy-pkg-config', '.']) - - if options.section: - section = options.section - else: - section = "default" - - if options.define_variable: - m = re.search('([\S]+)=([\S]+)', options.define_variable) - if not m: - raise ValueError("--define-variable option should be of " \ - "the form --define-variable=foo=bar") - else: - name = m.group(1) - value = m.group(2) - info.vars[name] = value - - if options.cflags: - print (info.cflags(section)) - if options.libs: - print (info.libs(section)) - if options.version: - print (info.version) - if options.min_version: - print (info.version >= options.min_version) diff --git a/pythonPackages/numpy/numpy/distutils/numpy_distribution.py b/pythonPackages/numpy/numpy/distutils/numpy_distribution.py deleted file mode 100755 index ea8182659c..0000000000 --- a/pythonPackages/numpy/numpy/distutils/numpy_distribution.py +++ /dev/null @@ -1,17 +0,0 @@ -# XXX: Handle setuptools ? -from distutils.core import Distribution - -# This class is used because we add new files (sconscripts, and so on) with the -# scons command -class NumpyDistribution(Distribution): - def __init__(self, attrs = None): - # A list of (sconscripts, pre_hook, post_hook, src, parent_names) - self.scons_data = [] - # A list of installable libraries - self.installed_libraries = [] - # A dict of pkg_config files to generate/install - self.installed_pkg_config = {} - Distribution.__init__(self, attrs) - - def has_scons_scripts(self): - return bool(self.scons_data) diff --git a/pythonPackages/numpy/numpy/distutils/setup.py b/pythonPackages/numpy/numpy/distutils/setup.py deleted file mode 100755 index afc1fadd23..0000000000 --- a/pythonPackages/numpy/numpy/distutils/setup.py +++ /dev/null @@ -1,16 +0,0 @@ -#!/usr/bin/env python - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('distutils',parent_package,top_path) - config.add_subpackage('command') - 
config.add_subpackage('fcompiler') - config.add_data_dir('tests') - config.add_data_files('site.cfg') - config.add_data_files('mingw/gfortran_vs2003_hack.c') - config.make_config_py() - return config - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/distutils/setupscons.py b/pythonPackages/numpy/numpy/distutils/setupscons.py deleted file mode 100755 index 938f07ead0..0000000000 --- a/pythonPackages/numpy/numpy/distutils/setupscons.py +++ /dev/null @@ -1,17 +0,0 @@ -#!/usr/bin/env python -import os.path - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('distutils',parent_package,top_path) - config.add_subpackage('command') - config.add_subpackage('fcompiler') - config.add_data_dir('tests') - if os.path.exists("site.cfg"): - config.add_data_files('site.cfg') - config.make_config_py() - return config - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/distutils/system_info.py b/pythonPackages/numpy/numpy/distutils/system_info.py deleted file mode 100755 index e0fd213254..0000000000 --- a/pythonPackages/numpy/numpy/distutils/system_info.py +++ /dev/null @@ -1,2016 +0,0 @@ -#!/bin/env python -""" -This file defines a set of system_info classes for getting -information about various resources (libraries, library directories, -include directories, etc.) in the system. 
Currently, the following -classes are available: - - atlas_info - atlas_threads_info - atlas_blas_info - atlas_blas_threads_info - lapack_atlas_info - blas_info - lapack_info - blas_opt_info # usage recommended - lapack_opt_info # usage recommended - fftw_info,dfftw_info,sfftw_info - fftw_threads_info,dfftw_threads_info,sfftw_threads_info - djbfft_info - x11_info - lapack_src_info - blas_src_info - numpy_info - numarray_info - numpy_info - boost_python_info - agg2_info - wx_info - gdk_pixbuf_xlib_2_info - gdk_pixbuf_2_info - gdk_x11_2_info - gtkp_x11_2_info - gtkp_2_info - xft_info - freetype2_info - umfpack_info - -Usage: - info_dict = get_info(<name>) - where <name> is a string 'atlas','x11','fftw','lapack','blas', - 'lapack_src', 'blas_src', etc. For a complete list of allowed names, - see the definition of get_info() function below. - - Returned info_dict is a dictionary which is compatible with - distutils.setup keyword arguments. If info_dict == {}, then the - asked resource is not available (system_info could not find it). - - Several *_info classes specify an environment variable to specify - the locations of software. When setting the corresponding environment - variable to 'None' then the software will be ignored, even when it - is available in system. - -Global parameters: - system_info.search_static_first - search static libraries (.a) - in precedence to shared ones (.so, .sl) if enabled. - system_info.verbosity - output the results to stdout if enabled. - -The file 'site.cfg' is looked for in - -1) Directory of main setup.py file being run. -2) Home directory of user running the setup.py file as ~/.numpy-site.cfg -3) System wide directory (location of this file...) - -The first one found is used to get system configuration options. The -format is that used by ConfigParser (i.e., Windows .INI style). The -section ALL has options that are the default for each section. The -available sections are fftw, atlas, and x11. 
Appropriate defaults are -used if nothing is specified. - -The order of finding the locations of resources is the following: - 1. environment variable - 2. section in site.cfg - 3. ALL section in site.cfg -Only the first complete match is returned. - -Example: ----------- -[ALL] -library_dirs = /usr/lib:/usr/local/lib:/opt/lib -include_dirs = /usr/include:/usr/local/include:/opt/include -src_dirs = /usr/local/src:/opt/src -# search static libraries (.a) in preference to shared ones (.so) -search_static_first = 0 - -[fftw] -fftw_libs = rfftw, fftw -fftw_opt_libs = rfftw_threaded, fftw_threaded -# if the above aren't found, look for {s,d}fftw_libs and {s,d}fftw_opt_libs - -[atlas] -library_dirs = /usr/lib/3dnow:/usr/lib/3dnow/atlas -# for overriding the names of the atlas libraries -atlas_libs = lapack, f77blas, cblas, atlas - -[x11] -library_dirs = /usr/X11R6/lib -include_dirs = /usr/X11R6/include ----------- - -Authors: - Pearu Peterson , February 2002 - David M. Cooke , April 2002 - -Copyright 2002 Pearu Peterson all rights reserved, -Pearu Peterson -Permission to use, modify, and distribute this software is given under the -terms of the NumPy (BSD style) license. See LICENSE.txt that came with -this distribution for specifics. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.
-""" - -import sys -import os -import re -import copy -import warnings -from glob import glob -if sys.version_info[0] < 3: - from ConfigParser import SafeConfigParser, NoOptionError, ConfigParser -else: - from configparser import SafeConfigParser, NoOptionError, ConfigParser - -from distutils.errors import DistutilsError -from distutils.dist import Distribution -import distutils.sysconfig -from distutils import log - -from numpy.distutils.exec_command import \ - find_executable, exec_command, get_pythonexe -from numpy.distutils.misc_util import is_sequence, is_string -from numpy.distutils.command.config import config as cmd_config -from numpy.distutils.compat import get_exception - -if sys.version_info[0] >= 3: - from functools import reduce - -# Determine number of bits -import platform -_bits = {'32bit':32,'64bit':64} -platform_bits = _bits[platform.architecture()[0]] - -def libpaths(paths,bits): - """Return a list of library paths valid on 32 or 64 bit systems. - - Inputs: - paths : sequence - A sequence of strings (typically paths) - bits : int - An integer, the only valid values are 32 or 64. A ValueError exception - is raised otherwise. 
- - Examples: - - Consider a list of directories - >>> paths = ['/usr/X11R6/lib','/usr/X11/lib','/usr/lib'] - - For a 32-bit platform, this is already valid: - >>> np.distutils.system_info.libpaths(paths,32) - ['/usr/X11R6/lib', '/usr/X11/lib', '/usr/lib'] - - On 64 bits, we prepend the '64' postfix - >>> np.distutils.system_info.libpaths(paths,64) - ['/usr/X11R6/lib64', '/usr/X11R6/lib', '/usr/X11/lib64', '/usr/X11/lib', - '/usr/lib64', '/usr/lib'] - """ - if bits not in (32, 64): - raise ValueError("Invalid bit size in libpaths: 32 or 64 only") - - # Handle 32bit case - if bits==32: - return paths - - # Handle 64bit case - out = [] - for p in paths: - out.extend([p+'64', p]) - - return out - - -if sys.platform == 'win32': - default_lib_dirs = ['C:\\', - os.path.join(distutils.sysconfig.EXEC_PREFIX, - 'libs')] - default_include_dirs = [] - default_src_dirs = ['.'] - default_x11_lib_dirs = [] - default_x11_include_dirs = [] -else: - default_lib_dirs = libpaths(['/usr/local/lib','/opt/lib','/usr/lib', - '/opt/local/lib','/sw/lib'], platform_bits) - default_include_dirs = ['/usr/local/include', - '/opt/include', '/usr/include', - '/opt/local/include', '/sw/include', - '/usr/include/suitesparse'] - default_src_dirs = ['.','/usr/local/src', '/opt/src','/sw/src'] - - default_x11_lib_dirs = libpaths(['/usr/X11R6/lib','/usr/X11/lib', - '/usr/lib'], platform_bits) - default_x11_include_dirs = ['/usr/X11R6/include','/usr/X11/include', - '/usr/include'] - -if os.path.join(sys.prefix, 'lib') not in default_lib_dirs: - default_lib_dirs.insert(0,os.path.join(sys.prefix, 'lib')) - default_include_dirs.append(os.path.join(sys.prefix, 'include')) - default_src_dirs.append(os.path.join(sys.prefix, 'src')) - -default_lib_dirs = filter(os.path.isdir, default_lib_dirs) -default_include_dirs = filter(os.path.isdir, default_include_dirs) -default_src_dirs = filter(os.path.isdir, default_src_dirs) - -so_ext = distutils.sysconfig.get_config_vars('SO')[0] or '' - -def 
get_standard_file(fname): - """Returns a list of files named 'fname' from - 1) System-wide directory (directory-location of this module) - 2) Users HOME directory (os.environ['HOME']) - 3) Local directory - """ - # System-wide file - filenames = [] - try: - f = __file__ - except NameError: - f = sys.argv[0] - else: - sysfile = os.path.join(os.path.split(os.path.abspath(f))[0], - fname) - if os.path.isfile(sysfile): - filenames.append(sysfile) - - # Home directory - # And look for the user config file - try: - f = os.environ['HOME'] - except KeyError: - pass - else: - user_file = os.path.join(f, fname) - if os.path.isfile(user_file): - filenames.append(user_file) - - # Local file - if os.path.isfile(fname): - filenames.append(os.path.abspath(fname)) - - return filenames - -def get_info(name,notfound_action=0): - """ - notfound_action: - 0 - do nothing - 1 - display warning message - 2 - raise error - """ - cl = {'atlas':atlas_info, # use lapack_opt or blas_opt instead - 'atlas_threads':atlas_threads_info, # ditto - 'atlas_blas':atlas_blas_info, - 'atlas_blas_threads':atlas_blas_threads_info, - 'lapack_atlas':lapack_atlas_info, # use lapack_opt instead - 'lapack_atlas_threads':lapack_atlas_threads_info, # ditto - 'mkl':mkl_info, - 'lapack_mkl':lapack_mkl_info, # use lapack_opt instead - 'blas_mkl':blas_mkl_info, # use blas_opt instead - 'x11':x11_info, - 'fft_opt':fft_opt_info, - 'fftw':fftw_info, - 'fftw2':fftw2_info, - 'fftw3':fftw3_info, - 'dfftw':dfftw_info, - 'sfftw':sfftw_info, - 'fftw_threads':fftw_threads_info, - 'dfftw_threads':dfftw_threads_info, - 'sfftw_threads':sfftw_threads_info, - 'djbfft':djbfft_info, - 'blas':blas_info, # use blas_opt instead - 'lapack':lapack_info, # use lapack_opt instead - 'lapack_src':lapack_src_info, - 'blas_src':blas_src_info, - 'numpy':numpy_info, - 'f2py':f2py_info, - 'Numeric':Numeric_info, - 'numeric':Numeric_info, - 'numarray':numarray_info, - 'numerix':numerix_info, - 'lapack_opt':lapack_opt_info, - 
'blas_opt':blas_opt_info, - 'boost_python':boost_python_info, - 'agg2':agg2_info, - 'wx':wx_info, - 'gdk_pixbuf_xlib_2':gdk_pixbuf_xlib_2_info, - 'gdk-pixbuf-xlib-2.0':gdk_pixbuf_xlib_2_info, - 'gdk_pixbuf_2':gdk_pixbuf_2_info, - 'gdk-pixbuf-2.0':gdk_pixbuf_2_info, - 'gdk':gdk_info, - 'gdk_2':gdk_2_info, - 'gdk-2.0':gdk_2_info, - 'gdk_x11_2':gdk_x11_2_info, - 'gdk-x11-2.0':gdk_x11_2_info, - 'gtkp_x11_2':gtkp_x11_2_info, - 'gtk+-x11-2.0':gtkp_x11_2_info, - 'gtkp_2':gtkp_2_info, - 'gtk+-2.0':gtkp_2_info, - 'xft':xft_info, - 'freetype2':freetype2_info, - 'umfpack':umfpack_info, - 'amd':amd_info, - }.get(name.lower(),system_info) - return cl().get_info(notfound_action) - -class NotFoundError(DistutilsError): - """Some third-party program or library is not found.""" - -class AtlasNotFoundError(NotFoundError): - """ - Atlas (http://math-atlas.sourceforge.net/) libraries not found. - Directories to search for the libraries can be specified in the - numpy/distutils/site.cfg file (section [atlas]) or by setting - the ATLAS environment variable.""" - -class LapackNotFoundError(NotFoundError): - """ - Lapack (http://www.netlib.org/lapack/) libraries not found. - Directories to search for the libraries can be specified in the - numpy/distutils/site.cfg file (section [lapack]) or by setting - the LAPACK environment variable.""" - -class LapackSrcNotFoundError(LapackNotFoundError): - """ - Lapack (http://www.netlib.org/lapack/) sources not found. - Directories to search for the sources can be specified in the - numpy/distutils/site.cfg file (section [lapack_src]) or by setting - the LAPACK_SRC environment variable.""" - -class BlasNotFoundError(NotFoundError): - """ - Blas (http://www.netlib.org/blas/) libraries not found. 
- Directories to search for the libraries can be specified in the - numpy/distutils/site.cfg file (section [blas]) or by setting - the BLAS environment variable.""" - -class BlasSrcNotFoundError(BlasNotFoundError): - """ - Blas (http://www.netlib.org/blas/) sources not found. - Directories to search for the sources can be specified in the - numpy/distutils/site.cfg file (section [blas_src]) or by setting - the BLAS_SRC environment variable.""" - -class FFTWNotFoundError(NotFoundError): - """ - FFTW (http://www.fftw.org/) libraries not found. - Directories to search for the libraries can be specified in the - numpy/distutils/site.cfg file (section [fftw]) or by setting - the FFTW environment variable.""" - -class DJBFFTNotFoundError(NotFoundError): - """ - DJBFFT (http://cr.yp.to/djbfft.html) libraries not found. - Directories to search for the libraries can be specified in the - numpy/distutils/site.cfg file (section [djbfft]) or by setting - the DJBFFT environment variable.""" - -class NumericNotFoundError(NotFoundError): - """ - Numeric (http://www.numpy.org/) module not found. - Get it from above location, install it, and retry setup.py.""" - -class X11NotFoundError(NotFoundError): - """X11 libraries not found.""" - -class UmfpackNotFoundError(NotFoundError): - """ - UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) - not found. Directories to search for the libraries can be specified in the - numpy/distutils/site.cfg file (section [umfpack]) or by setting - the UMFPACK environment variable.""" - -class system_info: - - """ get_info() is the only public method. Don't use others. - """ - section = 'ALL' - dir_env_var = None - search_static_first = 0 # XXX: disabled by default, may disappear in - # future unless it is proved to be useful. 
- verbosity = 1 - saved_results = {} - - notfounderror = NotFoundError - - def __init__ (self, - default_lib_dirs=default_lib_dirs, - default_include_dirs=default_include_dirs, - verbosity = 1, - ): - self.__class__.info = {} - self.local_prefixes = [] - defaults = {} - defaults['libraries'] = '' - defaults['library_dirs'] = os.pathsep.join(default_lib_dirs) - defaults['include_dirs'] = os.pathsep.join(default_include_dirs) - defaults['src_dirs'] = os.pathsep.join(default_src_dirs) - defaults['search_static_first'] = str(self.search_static_first) - self.cp = ConfigParser(defaults) - self.files = [] - self.files.extend(get_standard_file('.numpy-site.cfg')) - self.files.extend(get_standard_file('site.cfg')) - self.parse_config_files() - if self.section is not None: - self.search_static_first = self.cp.getboolean(self.section, - 'search_static_first') - assert isinstance(self.search_static_first, int) - - def parse_config_files(self): - self.cp.read(self.files) - if not self.cp.has_section(self.section): - if self.section is not None: - self.cp.add_section(self.section) - - def calc_libraries_info(self): - libs = self.get_libraries() - dirs = self.get_lib_dirs() - info = {} - for lib in libs: - i = None - for d in dirs: - i = self.check_libs(d,[lib]) - if i is not None: - break - if i is not None: - dict_append(info,**i) - else: - log.info('Library %s was not found. Ignoring' % (lib)) - return info - - def set_info(self,**info): - if info: - lib_info = self.calc_libraries_info() - dict_append(info,**lib_info) - self.saved_results[self.__class__.__name__] = info - - def has_info(self): - return self.__class__.__name__ in self.saved_results - - def get_info(self,notfound_action=0): - """ Return a dictionary with items that are compatible - with numpy.distutils.setup keyword arguments.
- """ - flag = 0 - if not self.has_info(): - flag = 1 - log.info(self.__class__.__name__ + ':') - if hasattr(self, 'calc_info'): - self.calc_info() - if notfound_action: - if not self.has_info(): - if notfound_action==1: - warnings.warn(self.notfounderror.__doc__) - elif notfound_action==2: - raise self.notfounderror(self.notfounderror.__doc__) - else: - raise ValueError(repr(notfound_action)) - - if not self.has_info(): - log.info(' NOT AVAILABLE') - self.set_info() - else: - log.info(' FOUND:') - - res = self.saved_results.get(self.__class__.__name__) - if self.verbosity>0 and flag: - for k,v in res.items(): - v = str(v) - if k in ['sources','libraries'] and len(v)>270: - v = v[:120]+'...\n...\n...'+v[-120:] - log.info(' %s = %s', k, v) - log.info('') - - return copy.deepcopy(res) - - def get_paths(self, section, key): - dirs = self.cp.get(section, key).split(os.pathsep) - env_var = self.dir_env_var - if env_var: - if is_sequence(env_var): - e0 = env_var[-1] - for e in env_var: - if e in os.environ: - e0 = e - break - if not env_var[0]==e0: - log.info('Setting %s=%s' % (env_var[0],e0)) - env_var = e0 - if env_var and env_var in os.environ: - d = os.environ[env_var] - if d=='None': - log.info('Disabled %s: %s',self.__class__.__name__,'(%s is None)' \ - % (env_var,)) - return [] - if os.path.isfile(d): - dirs = [os.path.dirname(d)] + dirs - l = getattr(self,'_lib_names',[]) - if len(l)==1: - b = os.path.basename(d) - b = os.path.splitext(b)[0] - if b[:3]=='lib': - log.info('Replacing _lib_names[0]==%r with %r' \ - % (self._lib_names[0], b[3:])) - self._lib_names[0] = b[3:] - else: - ds = d.split(os.pathsep) - ds2 = [] - for d in ds: - if os.path.isdir(d): - ds2.append(d) - for dd in ['include','lib']: - d1 = os.path.join(d,dd) - if os.path.isdir(d1): - ds2.append(d1) - dirs = ds2 + dirs - default_dirs = self.cp.get(self.section, key).split(os.pathsep) - dirs.extend(default_dirs) - ret = [] - for d in dirs: - if not os.path.isdir(d): - warnings.warn('Specified path 
%s is invalid.' % d) - continue - - if d not in ret: - ret.append(d) - - log.debug('( %s = %s )', key, ':'.join(ret)) - return ret - - def get_lib_dirs(self, key='library_dirs'): - return self.get_paths(self.section, key) - - def get_include_dirs(self, key='include_dirs'): - return self.get_paths(self.section, key) - - def get_src_dirs(self, key='src_dirs'): - return self.get_paths(self.section, key) - - def get_libs(self, key, default): - try: - libs = self.cp.get(self.section, key) - except NoOptionError: - if not default: - return [] - if is_string(default): - return [default] - return default - return [b for b in [a.strip() for a in libs.split(',')] if b] - - def get_libraries(self, key='libraries'): - return self.get_libs(key,'') - - def library_extensions(self): - static_exts = ['.a'] - if sys.platform == 'win32': - static_exts.append('.lib') # .lib is used by MSVC - if self.search_static_first: - exts = static_exts + [so_ext] - else: - exts = [so_ext] + static_exts - if sys.platform == 'cygwin': - exts.append('.dll.a') - if sys.platform == 'darwin': - exts.append('.dylib') - # Debian and Ubuntu added a g3f suffix to shared library to deal with - # g77 -> gfortran ABI transition - # XXX: disabled, it hides more problem than it solves. - #if sys.platform[:5] == 'linux': - # exts.append('.so.3gf') - return exts - - def check_libs(self,lib_dir,libs,opt_libs =[]): - """If static or shared libraries are available then return - their info dictionary. - - Checks for all libraries as shared libraries first, then - static (or vice versa if self.search_static_first is True). - """ - exts = self.library_extensions() - info = None - for ext in exts: - info = self._check_libs(lib_dir,libs,opt_libs,[ext]) - if info is not None: - break - if not info: - log.info(' libraries %s not found in %s', ','.join(libs), lib_dir) - return info - - def check_libs2(self, lib_dir, libs, opt_libs =[]): - """If static or shared libraries are available then return - their info dictionary. 
- - Checks each library for shared or static. - """ - exts = self.library_extensions() - info = self._check_libs(lib_dir,libs,opt_libs,exts) - if not info: - log.info(' libraries %s not found in %s', ','.join(libs), lib_dir) - return info - - def _lib_list(self, lib_dir, libs, exts): - assert is_string(lib_dir) - liblist = [] - # under windows first try without 'lib' prefix - if sys.platform == 'win32': - lib_prefixes = ['', 'lib'] - else: - lib_prefixes = ['lib'] - # for each library name, see if we can find a file for it. - for l in libs: - for ext in exts: - for prefix in lib_prefixes: - p = self.combine_paths(lib_dir, prefix+l+ext) - if p: - break - if p: - assert len(p)==1 - # ??? splitext on p[0] would do this for cygwin - # doesn't seem correct - if ext == '.dll.a': - l += '.dll' - liblist.append(l) - break - return liblist - - def _check_libs(self, lib_dir, libs, opt_libs, exts): - found_libs = self._lib_list(lib_dir, libs, exts) - if len(found_libs) == len(libs): - info = {'libraries' : found_libs, 'library_dirs' : [lib_dir]} - opt_found_libs = self._lib_list(lib_dir, opt_libs, exts) - if len(opt_found_libs) == len(opt_libs): - info['libraries'].extend(opt_found_libs) - return info - else: - return None - - def combine_paths(self,*args): - """Return a list of existing paths composed by all combinations - of items from the arguments. 
- """ - return combine_paths(*args,**{'verbosity':self.verbosity}) - - -class fft_opt_info(system_info): - - def calc_info(self): - info = {} - fftw_info = get_info('fftw3') or get_info('fftw2') or get_info('dfftw') - djbfft_info = get_info('djbfft') - if fftw_info: - dict_append(info,**fftw_info) - if djbfft_info: - dict_append(info,**djbfft_info) - self.set_info(**info) - return - - -class fftw_info(system_info): - #variables to override - section = 'fftw' - dir_env_var = 'FFTW' - notfounderror = FFTWNotFoundError - ver_info = [ { 'name':'fftw3', - 'libs':['fftw3'], - 'includes':['fftw3.h'], - 'macros':[('SCIPY_FFTW3_H',None)]}, - { 'name':'fftw2', - 'libs':['rfftw', 'fftw'], - 'includes':['fftw.h','rfftw.h'], - 'macros':[('SCIPY_FFTW_H',None)]}] - - def __init__(self): - system_info.__init__(self) - - def calc_ver_info(self,ver_param): - """Returns True on successful version detection, else False""" - lib_dirs = self.get_lib_dirs() - incl_dirs = self.get_include_dirs() - incl_dir = None - libs = self.get_libs(self.section+'_libs', ver_param['libs']) - info = None - for d in lib_dirs: - r = self.check_libs(d,libs) - if r is not None: - info = r - break - if info is not None: - flag = 0 - for d in incl_dirs: - if len(self.combine_paths(d,ver_param['includes']))==len(ver_param['includes']): - dict_append(info,include_dirs=[d]) - flag = 1 - incl_dirs = [d] - incl_dir = d - break - if flag: - dict_append(info,define_macros=ver_param['macros']) - else: - info = None - if info is not None: - self.set_info(**info) - return True - else: - log.info(' %s not found' % (ver_param['name'])) - return False - - def calc_info(self): - for i in self.ver_info: - if self.calc_ver_info(i): - break - -class fftw2_info(fftw_info): - #variables to override - section = 'fftw' - dir_env_var = 'FFTW' - notfounderror = FFTWNotFoundError - ver_info = [ { 'name':'fftw2', - 'libs':['rfftw', 'fftw'], - 'includes':['fftw.h','rfftw.h'], - 'macros':[('SCIPY_FFTW_H',None)]} - ] - -class 
fftw3_info(fftw_info): - #variables to override - section = 'fftw3' - dir_env_var = 'FFTW3' - notfounderror = FFTWNotFoundError - ver_info = [ { 'name':'fftw3', - 'libs':['fftw3'], - 'includes':['fftw3.h'], - 'macros':[('SCIPY_FFTW3_H',None)]}, - ] - -class dfftw_info(fftw_info): - section = 'fftw' - dir_env_var = 'FFTW' - ver_info = [ { 'name':'dfftw', - 'libs':['drfftw','dfftw'], - 'includes':['dfftw.h','drfftw.h'], - 'macros':[('SCIPY_DFFTW_H',None)]} ] - -class sfftw_info(fftw_info): - section = 'fftw' - dir_env_var = 'FFTW' - ver_info = [ { 'name':'sfftw', - 'libs':['srfftw','sfftw'], - 'includes':['sfftw.h','srfftw.h'], - 'macros':[('SCIPY_SFFTW_H',None)]} ] - -class fftw_threads_info(fftw_info): - section = 'fftw' - dir_env_var = 'FFTW' - ver_info = [ { 'name':'fftw threads', - 'libs':['rfftw_threads','fftw_threads'], - 'includes':['fftw_threads.h','rfftw_threads.h'], - 'macros':[('SCIPY_FFTW_THREADS_H',None)]} ] - -class dfftw_threads_info(fftw_info): - section = 'fftw' - dir_env_var = 'FFTW' - ver_info = [ { 'name':'dfftw threads', - 'libs':['drfftw_threads','dfftw_threads'], - 'includes':['dfftw_threads.h','drfftw_threads.h'], - 'macros':[('SCIPY_DFFTW_THREADS_H',None)]} ] - -class sfftw_threads_info(fftw_info): - section = 'fftw' - dir_env_var = 'FFTW' - ver_info = [ { 'name':'sfftw threads', - 'libs':['srfftw_threads','sfftw_threads'], - 'includes':['sfftw_threads.h','srfftw_threads.h'], - 'macros':[('SCIPY_SFFTW_THREADS_H',None)]} ] - -class djbfft_info(system_info): - section = 'djbfft' - dir_env_var = 'DJBFFT' - notfounderror = DJBFFTNotFoundError - - def get_paths(self, section, key): - pre_dirs = system_info.get_paths(self, section, key) - dirs = [] - for d in pre_dirs: - dirs.extend(self.combine_paths(d,['djbfft'])+[d]) - return [ d for d in dirs if os.path.isdir(d) ] - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - incl_dirs = self.get_include_dirs() - info = None - for d in lib_dirs: - p = self.combine_paths (d,['djbfft.a']) - if p: - 
info = {'extra_objects':p} - break - p = self.combine_paths (d,['libdjbfft.a','libdjbfft'+so_ext]) - if p: - info = {'libraries':['djbfft'],'library_dirs':[d]} - break - if info is None: - return - for d in incl_dirs: - if len(self.combine_paths(d,['fftc8.h','fftfreq.h']))==2: - dict_append(info,include_dirs=[d], - define_macros=[('SCIPY_DJBFFT_H',None)]) - self.set_info(**info) - return - return - -class mkl_info(system_info): - section = 'mkl' - dir_env_var = 'MKL' - _lib_mkl = ['mkl','vml','guide'] - - def get_mkl_rootdir(self): - mklroot = os.environ.get('MKLROOT',None) - if mklroot is not None: - return mklroot - paths = os.environ.get('LD_LIBRARY_PATH','').split(os.pathsep) - ld_so_conf = '/etc/ld.so.conf' - if os.path.isfile(ld_so_conf): - for d in open(ld_so_conf,'r').readlines(): - d = d.strip() - if d: paths.append(d) - intel_mkl_dirs = [] - for path in paths: - path_atoms = path.split(os.sep) - for m in path_atoms: - if m.startswith('mkl'): - d = os.sep.join(path_atoms[:path_atoms.index(m)+2]) - intel_mkl_dirs.append(d) - break - for d in paths: - dirs = glob(os.path.join(d,'mkl','*')) + glob(os.path.join(d,'mkl*')) - for d in dirs: - if os.path.isdir(os.path.join(d,'lib')): - return d - return None - - def __init__(self): - mklroot = self.get_mkl_rootdir() - if mklroot is None: - system_info.__init__(self) - else: - from cpuinfo import cpu - l = 'mkl' # use shared library - if cpu.is_Itanium(): - plt = '64' - #l = 'mkl_ipf' - elif cpu.is_Xeon(): - plt = 'em64t' - #l = 'mkl_em64t' - else: - plt = '32' - #l = 'mkl_ia32' - if l not in self._lib_mkl: - self._lib_mkl.insert(0,l) - system_info.__init__(self, - default_lib_dirs=[os.path.join(mklroot,'lib',plt)], - default_include_dirs=[os.path.join(mklroot,'include')]) - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - incl_dirs = self.get_include_dirs() - mkl_libs = self.get_libs('mkl_libs',self._lib_mkl) - mkl = None - for d in lib_dirs: - mkl = self.check_libs2(d,mkl_libs) - if mkl is not None: - 
break - if mkl is None: - return - info = {} - dict_append(info,**mkl) - dict_append(info, - define_macros=[('SCIPY_MKL_H',None)], - include_dirs = incl_dirs) - if sys.platform == 'win32': - pass # win32 has no pthread library - else: - dict_append(info, libraries=['pthread']) - self.set_info(**info) - -class lapack_mkl_info(mkl_info): - - def calc_info(self): - mkl = get_info('mkl') - if not mkl: - return - if sys.platform == 'win32': - lapack_libs = self.get_libs('lapack_libs',['mkl_lapack']) - else: - lapack_libs = self.get_libs('lapack_libs',['mkl_lapack32','mkl_lapack64']) - - info = {'libraries': lapack_libs} - dict_append(info,**mkl) - self.set_info(**info) - -class blas_mkl_info(mkl_info): - pass - -class atlas_info(system_info): - section = 'atlas' - dir_env_var = 'ATLAS' - _lib_names = ['f77blas','cblas'] - if sys.platform[:7]=='freebsd': - _lib_atlas = ['atlas_r'] - _lib_lapack = ['alapack_r'] - else: - _lib_atlas = ['atlas'] - _lib_lapack = ['lapack'] - - notfounderror = AtlasNotFoundError - - def get_paths(self, section, key): - pre_dirs = system_info.get_paths(self, section, key) - dirs = [] - for d in pre_dirs: - dirs.extend(self.combine_paths(d,['atlas*','ATLAS*', - 'sse','3dnow','sse2'])+[d]) - return [ d for d in dirs if os.path.isdir(d) ] - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - info = {} - atlas_libs = self.get_libs('atlas_libs', - self._lib_names + self._lib_atlas) - lapack_libs = self.get_libs('lapack_libs',self._lib_lapack) - atlas = None - lapack = None - atlas_1 = None - for d in lib_dirs: - atlas = self.check_libs2(d,atlas_libs,[]) - lapack_atlas = self.check_libs2(d,['lapack_atlas'],[]) - if atlas is not None: - lib_dirs2 = [d] + self.combine_paths(d,['atlas*','ATLAS*']) - for d2 in lib_dirs2: - lapack = self.check_libs2(d2,lapack_libs,[]) - if lapack is not None: - break - else: - lapack = None - if lapack is not None: - break - if atlas: - atlas_1 = atlas - log.info(self.__class__) - if atlas is None: - atlas = 
atlas_1 - if atlas is None: - return - include_dirs = self.get_include_dirs() - h = (self.combine_paths(lib_dirs+include_dirs,'cblas.h') or [None])[0] - if h: - h = os.path.dirname(h) - dict_append(info,include_dirs=[h]) - info['language'] = 'c' - if lapack is not None: - dict_append(info,**lapack) - dict_append(info,**atlas) - elif 'lapack_atlas' in atlas['libraries']: - dict_append(info,**atlas) - dict_append(info,define_macros=[('ATLAS_WITH_LAPACK_ATLAS',None)]) - self.set_info(**info) - return - else: - dict_append(info,**atlas) - dict_append(info,define_macros=[('ATLAS_WITHOUT_LAPACK',None)]) - message = """ -********************************************************************* - Could not find lapack library within the ATLAS installation. -********************************************************************* -""" - warnings.warn(message) - self.set_info(**info) - return - - # Check if lapack library is complete, only warn if it is not. - lapack_dir = lapack['library_dirs'][0] - lapack_name = lapack['libraries'][0] - lapack_lib = None - lib_prefixes = ['lib'] - if sys.platform == 'win32': - lib_prefixes.append('') - for e in self.library_extensions(): - for prefix in lib_prefixes: - fn = os.path.join(lapack_dir,prefix+lapack_name+e) - if os.path.exists(fn): - lapack_lib = fn - break - if lapack_lib: - break - if lapack_lib is not None: - sz = os.stat(lapack_lib)[6] - if sz <= 4000*1024: - message = """ -********************************************************************* - Lapack library (from ATLAS) is probably incomplete: - size of %s is %sk (expected >4000k) - - Follow the instructions in the KNOWN PROBLEMS section of the file - numpy/INSTALL.txt. 
-********************************************************************* -""" % (lapack_lib,sz/1024) - warnings.warn(message) - else: - info['language'] = 'f77' - - self.set_info(**info) - -class atlas_blas_info(atlas_info): - _lib_names = ['f77blas','cblas'] - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - info = {} - atlas_libs = self.get_libs('atlas_libs', - self._lib_names + self._lib_atlas) - atlas = None - for d in lib_dirs: - atlas = self.check_libs2(d,atlas_libs,[]) - if atlas is not None: - break - if atlas is None: - return - include_dirs = self.get_include_dirs() - h = (self.combine_paths(lib_dirs+include_dirs,'cblas.h') or [None])[0] - if h: - h = os.path.dirname(h) - dict_append(info,include_dirs=[h]) - info['language'] = 'c' - - dict_append(info,**atlas) - - self.set_info(**info) - return - - -class atlas_threads_info(atlas_info): - dir_env_var = ['PTATLAS','ATLAS'] - _lib_names = ['ptf77blas','ptcblas'] - -class atlas_blas_threads_info(atlas_blas_info): - dir_env_var = ['PTATLAS','ATLAS'] - _lib_names = ['ptf77blas','ptcblas'] - -class lapack_atlas_info(atlas_info): - _lib_names = ['lapack_atlas'] + atlas_info._lib_names - -class lapack_atlas_threads_info(atlas_threads_info): - _lib_names = ['lapack_atlas'] + atlas_threads_info._lib_names - -class lapack_info(system_info): - section = 'lapack' - dir_env_var = 'LAPACK' - _lib_names = ['lapack'] - notfounderror = LapackNotFoundError - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - - lapack_libs = self.get_libs('lapack_libs', self._lib_names) - for d in lib_dirs: - lapack = self.check_libs(d,lapack_libs,[]) - if lapack is not None: - info = lapack - break - else: - return - info['language'] = 'f77' - self.set_info(**info) - -class lapack_src_info(system_info): - section = 'lapack_src' - dir_env_var = 'LAPACK_SRC' - notfounderror = LapackSrcNotFoundError - - def get_paths(self, section, key): - pre_dirs = system_info.get_paths(self, section, key) - dirs = [] - for d in pre_dirs: - 
dirs.extend([d] + self.combine_paths(d,['LAPACK*/SRC','SRC'])) - return [ d for d in dirs if os.path.isdir(d) ] - - def calc_info(self): - src_dirs = self.get_src_dirs() - src_dir = '' - for d in src_dirs: - if os.path.isfile(os.path.join(d,'dgesv.f')): - src_dir = d - break - if not src_dir: - #XXX: Get sources from netlib. May be ask first. - return - # The following is extracted from LAPACK-3.0/SRC/Makefile. - # Added missing names from lapack-lite-3.1.1/SRC/Makefile - # while keeping removed names for Lapack-3.0 compatibility. - allaux=''' - ilaenv ieeeck lsame lsamen xerbla - iparmq - ''' # *.f - laux = ''' - bdsdc bdsqr disna labad lacpy ladiv lae2 laebz laed0 laed1 - laed2 laed3 laed4 laed5 laed6 laed7 laed8 laed9 laeda laev2 - lagtf lagts lamch lamrg lanst lapy2 lapy3 larnv larrb larre - larrf lartg laruv las2 lascl lasd0 lasd1 lasd2 lasd3 lasd4 - lasd5 lasd6 lasd7 lasd8 lasd9 lasda lasdq lasdt laset lasq1 - lasq2 lasq3 lasq4 lasq5 lasq6 lasr lasrt lassq lasv2 pttrf - stebz stedc steqr sterf - - larra larrc larrd larr larrk larrj larrr laneg laisnan isnan - lazq3 lazq4 - ''' # [s|d]*.f - lasrc = ''' - gbbrd gbcon gbequ gbrfs gbsv gbsvx gbtf2 gbtrf gbtrs gebak - gebal gebd2 gebrd gecon geequ gees geesx geev geevx gegs gegv - gehd2 gehrd gelq2 gelqf gels gelsd gelss gelsx gelsy geql2 - geqlf geqp3 geqpf geqr2 geqrf gerfs gerq2 gerqf gesc2 gesdd - gesv gesvd gesvx getc2 getf2 getrf getri getrs ggbak ggbal - gges ggesx ggev ggevx ggglm gghrd gglse ggqrf ggrqf ggsvd - ggsvp gtcon gtrfs gtsv gtsvx gttrf gttrs gtts2 hgeqz hsein - hseqr labrd lacon laein lags2 lagtm lahqr lahrd laic1 lals0 - lalsa lalsd langb lange langt lanhs lansb lansp lansy lantb - lantp lantr lapll lapmt laqgb laqge laqp2 laqps laqsb laqsp - laqsy lar1v lar2v larf larfb larfg larft larfx largv larrv - lartv larz larzb larzt laswp lasyf latbs latdf latps latrd - latrs latrz latzm lauu2 lauum pbcon pbequ pbrfs pbstf pbsv - pbsvx pbtf2 pbtrf pbtrs pocon poequ porfs posv posvx potf2 - potrf potri 
potrs ppcon ppequ pprfs ppsv ppsvx pptrf pptri - pptrs ptcon pteqr ptrfs ptsv ptsvx pttrs ptts2 spcon sprfs - spsv spsvx sptrf sptri sptrs stegr stein sycon syrfs sysv - sysvx sytf2 sytrf sytri sytrs tbcon tbrfs tbtrs tgevc tgex2 - tgexc tgsen tgsja tgsna tgsy2 tgsyl tpcon tprfs tptri tptrs - trcon trevc trexc trrfs trsen trsna trsyl trti2 trtri trtrs - tzrqf tzrzf - - lacn2 lahr2 stemr laqr0 laqr1 laqr2 laqr3 laqr4 laqr5 - ''' # [s|c|d|z]*.f - sd_lasrc = ''' - laexc lag2 lagv2 laln2 lanv2 laqtr lasy2 opgtr opmtr org2l - org2r orgbr orghr orgl2 orglq orgql orgqr orgr2 orgrq orgtr - orm2l orm2r ormbr ormhr orml2 ormlq ormql ormqr ormr2 ormr3 - ormrq ormrz ormtr rscl sbev sbevd sbevx sbgst sbgv sbgvd sbgvx - sbtrd spev spevd spevx spgst spgv spgvd spgvx sptrd stev stevd - stevr stevx syev syevd syevr syevx sygs2 sygst sygv sygvd - sygvx sytd2 sytrd - ''' # [s|d]*.f - cz_lasrc = ''' - bdsqr hbev hbevd hbevx hbgst hbgv hbgvd hbgvx hbtrd hecon heev - heevd heevr heevx hegs2 hegst hegv hegvd hegvx herfs hesv - hesvx hetd2 hetf2 hetrd hetrf hetri hetrs hpcon hpev hpevd - hpevx hpgst hpgv hpgvd hpgvx hprfs hpsv hpsvx hptrd hptrf - hptri hptrs lacgv lacp2 lacpy lacrm lacrt ladiv laed0 laed7 - laed8 laesy laev2 lahef lanhb lanhe lanhp lanht laqhb laqhe - laqhp larcm larnv lartg lascl laset lasr lassq pttrf rot spmv - spr stedc steqr symv syr ung2l ung2r ungbr unghr ungl2 unglq - ungql ungqr ungr2 ungrq ungtr unm2l unm2r unmbr unmhr unml2 - unmlq unmql unmqr unmr2 unmr3 unmrq unmrz unmtr upgtr upmtr - ''' # [c|z]*.f - ####### - sclaux = laux + ' econd ' # s*.f - dzlaux = laux + ' secnd ' # d*.f - slasrc = lasrc + sd_lasrc # s*.f - dlasrc = lasrc + sd_lasrc # d*.f - clasrc = lasrc + cz_lasrc + ' srot srscl ' # c*.f - zlasrc = lasrc + cz_lasrc + ' drot drscl ' # z*.f - oclasrc = ' icmax1 scsum1 ' # *.f - ozlasrc = ' izmax1 dzsum1 ' # *.f - sources = ['s%s.f'%f for f in (sclaux+slasrc).split()] \ - + ['d%s.f'%f for f in (dzlaux+dlasrc).split()] \ - + ['c%s.f'%f for f in 
(clasrc).split()] \ - + ['z%s.f'%f for f in (zlasrc).split()] - + ['%s.f'%f for f in (allaux+oclasrc+ozlasrc).split()] - sources = [os.path.join(src_dir,f) for f in sources] - # Lapack 3.1: - src_dir2 = os.path.join(src_dir,'..','INSTALL') - sources += [os.path.join(src_dir2,p+'lamch.f') for p in 'sdcz'] - # Lapack 3.2.1: - sources += [os.path.join(src_dir,p+'larfp.f') for p in 'sdcz'] - sources += [os.path.join(src_dir,'ila'+p+'lr.f') for p in 'sdcz'] - sources += [os.path.join(src_dir,'ila'+p+'lc.f') for p in 'sdcz'] - # Should we check here actual existence of source files? - # Yes, the file listing is different between 3.0 and 3.1 - # versions. - sources = [f for f in sources if os.path.isfile(f)] - info = {'sources':sources,'language':'f77'} - self.set_info(**info) - -atlas_version_c_text = r''' -/* This file is generated from numpy/distutils/system_info.py */ -void ATL_buildinfo(void); -int main(void) { - ATL_buildinfo(); - return 0; -} -''' - -_cached_atlas_version = {} -def get_atlas_version(**config): - libraries = config.get('libraries', []) - library_dirs = config.get('library_dirs', []) - key = (tuple(libraries), tuple(library_dirs)) - if key in _cached_atlas_version: - return _cached_atlas_version[key] - c = cmd_config(Distribution()) - atlas_version = None - try: - s, o = c.get_output(atlas_version_c_text, - libraries=libraries, library_dirs=library_dirs) - except: # failed to get version from file -- maybe on Windows - # look at directory name - for o in library_dirs: - m = re.search(r'ATLAS_(?P<version>\d+[.]\d+[.]\d+)_',o) - if m: - atlas_version = m.group('version') - if atlas_version is not None: - break - # final choice --- look at ATLAS_VERSION environment - # variable - if atlas_version is None: - atlas_version = os.environ.get('ATLAS_VERSION',None) - return atlas_version or '?.?.?'
- - if not s: - m = re.search(r'ATLAS version (?P<version>\d+[.]\d+[.]\d+)',o) - if m: - atlas_version = m.group('version') - if atlas_version is None: - if re.search(r'undefined symbol: ATL_buildinfo',o,re.M): - atlas_version = '3.2.1_pre3.3.6' - else: - log.info('Status: %d', s) - log.info('Output: %s', o) - _cached_atlas_version[key] = atlas_version - return atlas_version - -from distutils.util import get_platform - -class lapack_opt_info(system_info): - - notfounderror = LapackNotFoundError - - def calc_info(self): - - if sys.platform=='darwin' and not os.environ.get('ATLAS',None): - args = [] - link_args = [] - if get_platform()[-4:] == 'i386': - intel = 1 - else: - intel = 0 - if os.path.exists('/System/Library/Frameworks/Accelerate.framework/'): - if intel: - args.extend(['-msse3']) - else: - args.extend(['-faltivec']) - link_args.extend(['-Wl,-framework','-Wl,Accelerate']) - elif os.path.exists('/System/Library/Frameworks/vecLib.framework/'): - if intel: - args.extend(['-msse3']) - else: - args.extend(['-faltivec']) - link_args.extend(['-Wl,-framework','-Wl,vecLib']) - if args: - self.set_info(extra_compile_args=args, - extra_link_args=link_args, - define_macros=[('NO_ATLAS_INFO',3)]) - return - - lapack_mkl_info = get_info('lapack_mkl') - if lapack_mkl_info: - self.set_info(**lapack_mkl_info) - return - - atlas_info = get_info('atlas_threads') - if not atlas_info: - atlas_info = get_info('atlas') - #atlas_info = {} ## uncomment for testing - atlas_version = None - need_lapack = 0 - need_blas = 0 - info = {} - if atlas_info: - version_info = atlas_info.copy() - atlas_version = get_atlas_version(**version_info) - if 'define_macros' not in atlas_info: - atlas_info['define_macros'] = [] - if atlas_version is None: - atlas_info['define_macros'].append(('NO_ATLAS_INFO',2)) - else: - atlas_info['define_macros'].append(('ATLAS_INFO', - '"\\"%s\\""' % atlas_version)) - if atlas_version=='3.2.1_pre3.3.6': - atlas_info['define_macros'].append(('NO_ATLAS_INFO',4)) - l =
atlas_info.get('define_macros',[]) - if ('ATLAS_WITH_LAPACK_ATLAS',None) in l \ - or ('ATLAS_WITHOUT_LAPACK',None) in l: - need_lapack = 1 - info = atlas_info - else: - warnings.warn(AtlasNotFoundError.__doc__) - need_blas = 1 - need_lapack = 1 - dict_append(info,define_macros=[('NO_ATLAS_INFO',1)]) - - if need_lapack: - lapack_info = get_info('lapack') - #lapack_info = {} ## uncomment for testing - if lapack_info: - dict_append(info,**lapack_info) - else: - warnings.warn(LapackNotFoundError.__doc__) - lapack_src_info = get_info('lapack_src') - if not lapack_src_info: - warnings.warn(LapackSrcNotFoundError.__doc__) - return - dict_append(info,libraries=[('flapack_src',lapack_src_info)]) - - if need_blas: - blas_info = get_info('blas') - #blas_info = {} ## uncomment for testing - if blas_info: - dict_append(info,**blas_info) - else: - warnings.warn(BlasNotFoundError.__doc__) - blas_src_info = get_info('blas_src') - if not blas_src_info: - warnings.warn(BlasSrcNotFoundError.__doc__) - return - dict_append(info,libraries=[('fblas_src',blas_src_info)]) - - self.set_info(**info) - return - - -class blas_opt_info(system_info): - - notfounderror = BlasNotFoundError - - def calc_info(self): - - if sys.platform=='darwin' and not os.environ.get('ATLAS',None): - args = [] - link_args = [] - if get_platform()[-4:] == 'i386': - intel = 1 - else: - intel = 0 - if os.path.exists('/System/Library/Frameworks/Accelerate.framework/'): - if intel: - args.extend(['-msse3']) - else: - args.extend(['-faltivec']) - args.extend([ - '-I/System/Library/Frameworks/vecLib.framework/Headers']) - link_args.extend(['-Wl,-framework','-Wl,Accelerate']) - elif os.path.exists('/System/Library/Frameworks/vecLib.framework/'): - if intel: - args.extend(['-msse3']) - else: - args.extend(['-faltivec']) - args.extend([ - '-I/System/Library/Frameworks/vecLib.framework/Headers']) - link_args.extend(['-Wl,-framework','-Wl,vecLib']) - if args: - self.set_info(extra_compile_args=args, - 
extra_link_args=link_args, - define_macros=[('NO_ATLAS_INFO',3)]) - return - - blas_mkl_info = get_info('blas_mkl') - if blas_mkl_info: - self.set_info(**blas_mkl_info) - return - - atlas_info = get_info('atlas_blas_threads') - if not atlas_info: - atlas_info = get_info('atlas_blas') - atlas_version = None - need_blas = 0 - info = {} - if atlas_info: - version_info = atlas_info.copy() - atlas_version = get_atlas_version(**version_info) - if 'define_macros' not in atlas_info: - atlas_info['define_macros'] = [] - if atlas_version is None: - atlas_info['define_macros'].append(('NO_ATLAS_INFO',2)) - else: - atlas_info['define_macros'].append(('ATLAS_INFO', - '"\\"%s\\""' % atlas_version)) - info = atlas_info - else: - warnings.warn(AtlasNotFoundError.__doc__) - need_blas = 1 - dict_append(info,define_macros=[('NO_ATLAS_INFO',1)]) - - if need_blas: - blas_info = get_info('blas') - if blas_info: - dict_append(info,**blas_info) - else: - warnings.warn(BlasNotFoundError.__doc__) - blas_src_info = get_info('blas_src') - if not blas_src_info: - warnings.warn(BlasSrcNotFoundError.__doc__) - return - dict_append(info,libraries=[('fblas_src',blas_src_info)]) - - self.set_info(**info) - return - - -class blas_info(system_info): - section = 'blas' - dir_env_var = 'BLAS' - _lib_names = ['blas'] - notfounderror = BlasNotFoundError - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - - blas_libs = self.get_libs('blas_libs', self._lib_names) - for d in lib_dirs: - blas = self.check_libs(d,blas_libs,[]) - if blas is not None: - info = blas - break - else: - return - info['language'] = 'f77' # XXX: is it generally true? 
- self.set_info(**info) - - -class blas_src_info(system_info): - section = 'blas_src' - dir_env_var = 'BLAS_SRC' - notfounderror = BlasSrcNotFoundError - - def get_paths(self, section, key): - pre_dirs = system_info.get_paths(self, section, key) - dirs = [] - for d in pre_dirs: - dirs.extend([d] + self.combine_paths(d,['blas'])) - return [ d for d in dirs if os.path.isdir(d) ] - - def calc_info(self): - src_dirs = self.get_src_dirs() - src_dir = '' - for d in src_dirs: - if os.path.isfile(os.path.join(d,'daxpy.f')): - src_dir = d - break - if not src_dir: - #XXX: Get sources from netlib. May be ask first. - return - blas1 = ''' - caxpy csscal dnrm2 dzasum saxpy srotg zdotc ccopy cswap drot - dznrm2 scasum srotm zdotu cdotc dasum drotg icamax scnrm2 - srotmg zdrot cdotu daxpy drotm idamax scopy sscal zdscal crotg - dcabs1 drotmg isamax sdot sswap zrotg cscal dcopy dscal izamax - snrm2 zaxpy zscal csrot ddot dswap sasum srot zcopy zswap - scabs1 - ''' - blas2 = ''' - cgbmv chpmv ctrsv dsymv dtrsv sspr2 strmv zhemv ztpmv cgemv - chpr dgbmv dsyr lsame ssymv strsv zher ztpsv cgerc chpr2 dgemv - dsyr2 sgbmv ssyr xerbla zher2 ztrmv cgeru ctbmv dger dtbmv - sgemv ssyr2 zgbmv zhpmv ztrsv chbmv ctbsv dsbmv dtbsv sger - stbmv zgemv zhpr chemv ctpmv dspmv dtpmv ssbmv stbsv zgerc - zhpr2 cher ctpsv dspr dtpsv sspmv stpmv zgeru ztbmv cher2 - ctrmv dspr2 dtrmv sspr stpsv zhbmv ztbsv - ''' - blas3 = ''' - cgemm csymm ctrsm dsyrk sgemm strmm zhemm zsyr2k chemm csyr2k - dgemm dtrmm ssymm strsm zher2k zsyrk cher2k csyrk dsymm dtrsm - ssyr2k zherk ztrmm cherk ctrmm dsyr2k ssyrk zgemm zsymm ztrsm - ''' - sources = [os.path.join(src_dir,f+'.f') \ - for f in (blas1+blas2+blas3).split()] - #XXX: should we check here actual existence of source files? 
- sources = [f for f in sources if os.path.isfile(f)] - info = {'sources':sources,'language':'f77'} - self.set_info(**info) - -class x11_info(system_info): - section = 'x11' - notfounderror = X11NotFoundError - - def __init__(self): - system_info.__init__(self, - default_lib_dirs=default_x11_lib_dirs, - default_include_dirs=default_x11_include_dirs) - - def calc_info(self): - if sys.platform in ['win32']: - return - lib_dirs = self.get_lib_dirs() - include_dirs = self.get_include_dirs() - x11_libs = self.get_libs('x11_libs', ['X11']) - for lib_dir in lib_dirs: - info = self.check_libs(lib_dir, x11_libs, []) - if info is not None: - break - else: - return - inc_dir = None - for d in include_dirs: - if self.combine_paths(d, 'X11/X.h'): - inc_dir = d - break - if inc_dir is not None: - dict_append(info, include_dirs=[inc_dir]) - self.set_info(**info) - -class _numpy_info(system_info): - section = 'Numeric' - modulename = 'Numeric' - notfounderror = NumericNotFoundError - - def __init__(self): - include_dirs = [] - try: - module = __import__(self.modulename) - prefix = [] - for name in module.__file__.split(os.sep): - if name=='lib': - break - prefix.append(name) - - # Ask numpy for its own include path before attempting anything else - try: - include_dirs.append(getattr(module, 'get_include')()) - except AttributeError: - pass - - include_dirs.append(distutils.sysconfig.get_python_inc( - prefix=os.sep.join(prefix))) - except ImportError: - pass - py_incl_dir = distutils.sysconfig.get_python_inc() - include_dirs.append(py_incl_dir) - for d in default_include_dirs: - d = os.path.join(d, os.path.basename(py_incl_dir)) - if d not in include_dirs: - include_dirs.append(d) - system_info.__init__(self, - default_lib_dirs=[], - default_include_dirs=include_dirs) - - def calc_info(self): - try: - module = __import__(self.modulename) - except ImportError: - return - info = {} - macros = [] - for v in ['__version__','version']: - vrs = getattr(module,v,None) - if vrs is None: - 
continue - macros = [(self.modulename.upper()+'_VERSION', - '"\\"%s\\""' % (vrs)), - (self.modulename.upper(),None)] - break -## try: -## macros.append( -## (self.modulename.upper()+'_VERSION_HEX', -## hex(vstr2hex(module.__version__))), -## ) -## except Exception,msg: -## print msg - dict_append(info, define_macros = macros) - include_dirs = self.get_include_dirs() - inc_dir = None - for d in include_dirs: - if self.combine_paths(d, - os.path.join(self.modulename, - 'arrayobject.h')): - inc_dir = d - break - if inc_dir is not None: - dict_append(info, include_dirs=[inc_dir]) - if info: - self.set_info(**info) - return - -class numarray_info(_numpy_info): - section = 'numarray' - modulename = 'numarray' - -class Numeric_info(_numpy_info): - section = 'Numeric' - modulename = 'Numeric' - -class numpy_info(_numpy_info): - section = 'numpy' - modulename = 'numpy' - -class numerix_info(system_info): - section = 'numerix' - def calc_info(self): - which = None, None - if os.getenv("NUMERIX"): - which = os.getenv("NUMERIX"), "environment var" - # If all the above fail, default to numpy. - if which[0] is None: - which = "numpy", "defaulted" - try: - import numpy - which = "numpy", "defaulted" - except ImportError: - msg1 = str(get_exception()) - try: - import Numeric - which = "numeric", "defaulted" - except ImportError: - msg2 = str(get_exception()) - try: - import numarray - which = "numarray", "defaulted" - except ImportError: - msg3 = str(get_exception()) - log.info(msg1) - log.info(msg2) - log.info(msg3) - which = which[0].strip().lower(), which[1] - if which[0] not in ["numeric", "numarray", "numpy"]: - raise ValueError("numerix selector must be either 'Numeric' " - "or 'numarray' or 'numpy' but the value obtained" - " from the %s was '%s'." 
% (which[1], which[0])) - os.environ['NUMERIX'] = which[0] - self.set_info(**get_info(which[0])) - -class f2py_info(system_info): - def calc_info(self): - try: - import numpy.f2py as f2py - except ImportError: - return - f2py_dir = os.path.join(os.path.dirname(f2py.__file__),'src') - self.set_info(sources = [os.path.join(f2py_dir,'fortranobject.c')], - include_dirs = [f2py_dir]) - return - -class boost_python_info(system_info): - section = 'boost_python' - dir_env_var = 'BOOST' - - def get_paths(self, section, key): - pre_dirs = system_info.get_paths(self, section, key) - dirs = [] - for d in pre_dirs: - dirs.extend([d] + self.combine_paths(d,['boost*'])) - return [ d for d in dirs if os.path.isdir(d) ] - - def calc_info(self): - src_dirs = self.get_src_dirs() - src_dir = '' - for d in src_dirs: - if os.path.isfile(os.path.join(d,'libs','python','src','module.cpp')): - src_dir = d - break - if not src_dir: - return - py_incl_dir = distutils.sysconfig.get_python_inc() - srcs_dir = os.path.join(src_dir,'libs','python','src') - bpl_srcs = glob(os.path.join(srcs_dir,'*.cpp')) - bpl_srcs += glob(os.path.join(srcs_dir,'*','*.cpp')) - info = {'libraries':[('boost_python_src',{'include_dirs':[src_dir,py_incl_dir], - 'sources':bpl_srcs})], - 'include_dirs':[src_dir], - } - if info: - self.set_info(**info) - return - -class agg2_info(system_info): - section = 'agg2' - dir_env_var = 'AGG2' - - def get_paths(self, section, key): - pre_dirs = system_info.get_paths(self, section, key) - dirs = [] - for d in pre_dirs: - dirs.extend([d] + self.combine_paths(d,['agg2*'])) - return [ d for d in dirs if os.path.isdir(d) ] - - def calc_info(self): - src_dirs = self.get_src_dirs() - src_dir = '' - for d in src_dirs: - if os.path.isfile(os.path.join(d,'src','agg_affine_matrix.cpp')): - src_dir = d - break - if not src_dir: - return - if sys.platform=='win32': - agg2_srcs = glob(os.path.join(src_dir,'src','platform','win32','agg_win32_bmp.cpp')) - else: - agg2_srcs = 
glob(os.path.join(src_dir,'src','*.cpp')) - agg2_srcs += [os.path.join(src_dir,'src','platform','X11','agg_platform_support.cpp')] - - info = {'libraries':[('agg2_src',{'sources':agg2_srcs, - 'include_dirs':[os.path.join(src_dir,'include')], - })], - 'include_dirs':[os.path.join(src_dir,'include')], - } - if info: - self.set_info(**info) - return - -class _pkg_config_info(system_info): - section = None - config_env_var = 'PKG_CONFIG' - default_config_exe = 'pkg-config' - append_config_exe = '' - version_macro_name = None - release_macro_name = None - version_flag = '--modversion' - cflags_flag = '--cflags' - - def get_config_exe(self): - if self.config_env_var in os.environ: - return os.environ[self.config_env_var] - return self.default_config_exe - def get_config_output(self, config_exe, option): - s,o = exec_command(config_exe+' '+self.append_config_exe+' '+option,use_tee=0) - if not s: - return o - - def calc_info(self): - config_exe = find_executable(self.get_config_exe()) - if not config_exe: - log.warn('File not found: %s. Cannot determine %s info.' 
\ - % (config_exe, self.section)) - return - info = {} - macros = [] - libraries = [] - library_dirs = [] - include_dirs = [] - extra_link_args = [] - extra_compile_args = [] - version = self.get_config_output(config_exe,self.version_flag) - if version: - macros.append((self.__class__.__name__.split('.')[-1].upper(), - '"\\"%s\\""' % (version))) - if self.version_macro_name: - macros.append((self.version_macro_name+'_%s' % (version.replace('.','_')),None)) - if self.release_macro_name: - release = self.get_config_output(config_exe,'--release') - if release: - macros.append((self.release_macro_name+'_%s' % (release.replace('.','_')),None)) - opts = self.get_config_output(config_exe,'--libs') - if opts: - for opt in opts.split(): - if opt[:2]=='-l': - libraries.append(opt[2:]) - elif opt[:2]=='-L': - library_dirs.append(opt[2:]) - else: - extra_link_args.append(opt) - opts = self.get_config_output(config_exe,self.cflags_flag) - if opts: - for opt in opts.split(): - if opt[:2]=='-I': - include_dirs.append(opt[2:]) - elif opt[:2]=='-D': - if '=' in opt: - n,v = opt[2:].split('=') - macros.append((n,v)) - else: - macros.append((opt[2:],None)) - else: - extra_compile_args.append(opt) - if macros: dict_append(info, define_macros = macros) - if libraries: dict_append(info, libraries = libraries) - if library_dirs: dict_append(info, library_dirs = library_dirs) - if include_dirs: dict_append(info, include_dirs = include_dirs) - if extra_link_args: dict_append(info, extra_link_args = extra_link_args) - if extra_compile_args: dict_append(info, extra_compile_args = extra_compile_args) - if info: - self.set_info(**info) - return - -class wx_info(_pkg_config_info): - section = 'wx' - config_env_var = 'WX_CONFIG' - default_config_exe = 'wx-config' - append_config_exe = '' - version_macro_name = 'WX_VERSION' - release_macro_name = 'WX_RELEASE' - version_flag = '--version' - cflags_flag = '--cxxflags' - -class gdk_pixbuf_xlib_2_info(_pkg_config_info): - section = 
'gdk_pixbuf_xlib_2' - append_config_exe = 'gdk-pixbuf-xlib-2.0' - version_macro_name = 'GDK_PIXBUF_XLIB_VERSION' - -class gdk_pixbuf_2_info(_pkg_config_info): - section = 'gdk_pixbuf_2' - append_config_exe = 'gdk-pixbuf-2.0' - version_macro_name = 'GDK_PIXBUF_VERSION' - -class gdk_x11_2_info(_pkg_config_info): - section = 'gdk_x11_2' - append_config_exe = 'gdk-x11-2.0' - version_macro_name = 'GDK_X11_VERSION' - -class gdk_2_info(_pkg_config_info): - section = 'gdk_2' - append_config_exe = 'gdk-2.0' - version_macro_name = 'GDK_VERSION' - -class gdk_info(_pkg_config_info): - section = 'gdk' - append_config_exe = 'gdk' - version_macro_name = 'GDK_VERSION' - -class gtkp_x11_2_info(_pkg_config_info): - section = 'gtkp_x11_2' - append_config_exe = 'gtk+-x11-2.0' - version_macro_name = 'GTK_X11_VERSION' - - -class gtkp_2_info(_pkg_config_info): - section = 'gtkp_2' - append_config_exe = 'gtk+-2.0' - version_macro_name = 'GTK_VERSION' - -class xft_info(_pkg_config_info): - section = 'xft' - append_config_exe = 'xft' - version_macro_name = 'XFT_VERSION' - -class freetype2_info(_pkg_config_info): - section = 'freetype2' - append_config_exe = 'freetype2' - version_macro_name = 'FREETYPE2_VERSION' - -class amd_info(system_info): - section = 'amd' - dir_env_var = 'AMD' - _lib_names = ['amd'] - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - - amd_libs = self.get_libs('amd_libs', self._lib_names) - for d in lib_dirs: - amd = self.check_libs(d,amd_libs,[]) - if amd is not None: - info = amd - break - else: - return - - include_dirs = self.get_include_dirs() - - inc_dir = None - for d in include_dirs: - p = self.combine_paths(d,'amd.h') - if p: - inc_dir = os.path.dirname(p[0]) - break - if inc_dir is not None: - dict_append(info, include_dirs=[inc_dir], - define_macros=[('SCIPY_AMD_H',None)], - swig_opts = ['-I' + inc_dir]) - - self.set_info(**info) - return - -class umfpack_info(system_info): - section = 'umfpack' - dir_env_var = 'UMFPACK' - notfounderror = 
UmfpackNotFoundError - _lib_names = ['umfpack'] - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - - umfpack_libs = self.get_libs('umfpack_libs', self._lib_names) - for d in lib_dirs: - umf = self.check_libs(d,umfpack_libs,[]) - if umf is not None: - info = umf - break - else: - return - - include_dirs = self.get_include_dirs() - - inc_dir = None - for d in include_dirs: - p = self.combine_paths(d,['','umfpack'],'umfpack.h') - if p: - inc_dir = os.path.dirname(p[0]) - break - if inc_dir is not None: - dict_append(info, include_dirs=[inc_dir], - define_macros=[('SCIPY_UMFPACK_H',None)], - swig_opts = ['-I' + inc_dir]) - - amd = get_info('amd') - dict_append(info, **get_info('amd')) - - self.set_info(**info) - return - -## def vstr2hex(version): -## bits = [] -## n = [24,16,8,4,0] -## r = 0 -## for s in version.split('.'): -## r |= int(s) << n[0] -## del n[0] -## return r - -#-------------------------------------------------------------------- - -def combine_paths(*args,**kws): - """ Return a list of existing paths composed by all combinations of - items from arguments. 
- """ - r = [] - for a in args: - if not a: continue - if is_string(a): - a = [a] - r.append(a) - args = r - if not args: return [] - if len(args)==1: - result = reduce(lambda a,b:a+b,map(glob,args[0]),[]) - elif len (args)==2: - result = [] - for a0 in args[0]: - for a1 in args[1]: - result.extend(glob(os.path.join(a0,a1))) - else: - result = combine_paths(*(combine_paths(args[0],args[1])+args[2:])) - verbosity = kws.get('verbosity',1) - log.debug('(paths: %s)', ','.join(result)) - return result - -language_map = {'c':0,'c++':1,'f77':2,'f90':3} -inv_language_map = {0:'c',1:'c++',2:'f77',3:'f90'} -def dict_append(d,**kws): - languages = [] - for k,v in kws.items(): - if k=='language': - languages.append(v) - continue - if k in d: - if k in ['library_dirs','include_dirs','define_macros']: - [d[k].append(vv) for vv in v if vv not in d[k]] - else: - d[k].extend(v) - else: - d[k] = v - if languages: - l = inv_language_map[max([language_map.get(l,0) for l in languages])] - d['language'] = l - return - -def parseCmdLine(argv=(None,)): - import optparse - parser = optparse.OptionParser("usage: %prog [-v] [info objs]") - parser.add_option('-v', '--verbose', action='store_true', dest='verbose', - default=False, - help='be verbose and print more messages') - - opts, args = parser.parse_args(args=argv[1:]) - return opts, args - -def show_all(argv=None): - import inspect - if argv is None: - argv = sys.argv - opts, args = parseCmdLine(argv) - if opts.verbose: - log.set_threshold(log.DEBUG) - else: - log.set_threshold(log.INFO) - show_only = [] - for n in args: - if n[-5:] != '_info': - n = n + '_info' - show_only.append(n) - show_all = not show_only - _gdict_ = globals().copy() - for name, c in _gdict_.iteritems(): - if not inspect.isclass(c): - continue - if not issubclass(c, system_info) or c is system_info: - continue - if not show_all: - if name not in show_only: - continue - del show_only[show_only.index(name)] - conf = c() - conf.verbosity = 2 - r = conf.get_info() - if 
show_only: - log.info('Info classes not defined: %s',','.join(show_only)) - -if __name__ == "__main__": - show_all() diff --git a/pythonPackages/numpy/numpy/distutils/tests/f2py_ext/__init__.py b/pythonPackages/numpy/numpy/distutils/tests/f2py_ext/__init__.py deleted file mode 100755 index e69de29bb2..0000000000 diff --git a/pythonPackages/numpy/numpy/distutils/tests/f2py_ext/setup.py b/pythonPackages/numpy/numpy/distutils/tests/f2py_ext/setup.py deleted file mode 100755 index e3dfddb747..0000000000 --- a/pythonPackages/numpy/numpy/distutils/tests/f2py_ext/setup.py +++ /dev/null @@ -1,11 +0,0 @@ -#!/usr/bin/env python -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('f2py_ext',parent_package,top_path) - config.add_extension('fib2', ['src/fib2.pyf','src/fib1.f']) - config.add_data_dir('tests') - return config - -if __name__ == "__main__": - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/distutils/tests/f2py_f90_ext/__init__.py b/pythonPackages/numpy/numpy/distutils/tests/f2py_f90_ext/__init__.py deleted file mode 100755 index e69de29bb2..0000000000 diff --git a/pythonPackages/numpy/numpy/distutils/tests/f2py_f90_ext/setup.py b/pythonPackages/numpy/numpy/distutils/tests/f2py_f90_ext/setup.py deleted file mode 100755 index ee56cc3a61..0000000000 --- a/pythonPackages/numpy/numpy/distutils/tests/f2py_f90_ext/setup.py +++ /dev/null @@ -1,16 +0,0 @@ -#!/usr/bin/env python -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('f2py_f90_ext',parent_package,top_path) - config.add_extension('foo', - ['src/foo_free.f90'], - include_dirs=['include'], - f2py_options=['--include_paths', - config.paths('include')[0]] - ) - config.add_data_dir('tests') - return config - -if __name__ == "__main__": - from numpy.distutils.core import setup - 
setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/distutils/tests/gen_ext/__init__.py b/pythonPackages/numpy/numpy/distutils/tests/gen_ext/__init__.py deleted file mode 100755 index e69de29bb2..0000000000 diff --git a/pythonPackages/numpy/numpy/distutils/tests/gen_ext/setup.py b/pythonPackages/numpy/numpy/distutils/tests/gen_ext/setup.py deleted file mode 100755 index bf029062c6..0000000000 --- a/pythonPackages/numpy/numpy/distutils/tests/gen_ext/setup.py +++ /dev/null @@ -1,47 +0,0 @@ -#!/usr/bin/env python - -fib3_f = ''' -C FILE: FIB3.F - SUBROUTINE FIB(A,N) -C -C CALCULATE FIRST N FIBONACCI NUMBERS -C - INTEGER N - REAL*8 A(N) -Cf2py intent(in) n -Cf2py intent(out) a -Cf2py depend(n) a - DO I=1,N - IF (I.EQ.1) THEN - A(I) = 0.0D0 - ELSEIF (I.EQ.2) THEN - A(I) = 1.0D0 - ELSE - A(I) = A(I-1) + A(I-2) - ENDIF - ENDDO - END -C END FILE FIB3.F -''' - -def source_func(ext, build_dir): - import os - from distutils.dep_util import newer - target = os.path.join(build_dir,'fib3.f') - if newer(__file__, target): - f = open(target,'w') - f.write(fib3_f) - f.close() - return [target] - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('gen_ext',parent_package,top_path) - config.add_extension('fib3', - [source_func] - ) - return config - -if __name__ == "__main__": - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/distutils/tests/pyrex_ext/__init__.py b/pythonPackages/numpy/numpy/distutils/tests/pyrex_ext/__init__.py deleted file mode 100755 index e69de29bb2..0000000000 diff --git a/pythonPackages/numpy/numpy/distutils/tests/pyrex_ext/primes.pyx b/pythonPackages/numpy/numpy/distutils/tests/pyrex_ext/primes.pyx deleted file mode 100755 index 2ada0c5a08..0000000000 --- a/pythonPackages/numpy/numpy/distutils/tests/pyrex_ext/primes.pyx +++ /dev/null @@ -1,22 +0,0 @@ -# -# Calculate prime numbers -# - -def 
primes(int kmax): - cdef int n, k, i - cdef int p[1000] - result = [] - if kmax > 1000: - kmax = 1000 - k = 0 - n = 2 - while k < kmax: - i = 0 - while i < k and n % p[i] <> 0: - i = i + 1 - if i == k: - p[k] = n - k = k + 1 - result.append(n) - n = n + 1 - return result diff --git a/pythonPackages/numpy/numpy/distutils/tests/pyrex_ext/setup.py b/pythonPackages/numpy/numpy/distutils/tests/pyrex_ext/setup.py deleted file mode 100755 index 5b348b916b..0000000000 --- a/pythonPackages/numpy/numpy/distutils/tests/pyrex_ext/setup.py +++ /dev/null @@ -1,12 +0,0 @@ -#!/usr/bin/env python -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('pyrex_ext',parent_package,top_path) - config.add_extension('primes', - ['primes.pyx']) - config.add_data_dir('tests') - return config - -if __name__ == "__main__": - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/distutils/tests/setup.py b/pythonPackages/numpy/numpy/distutils/tests/setup.py deleted file mode 100755 index 89d73800ed..0000000000 --- a/pythonPackages/numpy/numpy/distutils/tests/setup.py +++ /dev/null @@ -1,14 +0,0 @@ -#!/usr/bin/env python -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('testnumpydistutils',parent_package,top_path) - config.add_subpackage('pyrex_ext') - config.add_subpackage('f2py_ext') - #config.add_subpackage('f2py_f90_ext') - config.add_subpackage('swig_ext') - config.add_subpackage('gen_ext') - return config - -if __name__ == "__main__": - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/distutils/tests/swig_ext/__init__.py b/pythonPackages/numpy/numpy/distutils/tests/swig_ext/__init__.py deleted file mode 100755 index e69de29bb2..0000000000 diff --git 
a/pythonPackages/numpy/numpy/distutils/tests/swig_ext/setup.py b/pythonPackages/numpy/numpy/distutils/tests/swig_ext/setup.py deleted file mode 100755 index 7f0dbe6271..0000000000 --- a/pythonPackages/numpy/numpy/distutils/tests/swig_ext/setup.py +++ /dev/null @@ -1,18 +0,0 @@ -#!/usr/bin/env python -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('swig_ext',parent_package,top_path) - config.add_extension('_example', - ['src/example.i','src/example.c'] - ) - config.add_extension('_example2', - ['src/zoo.i','src/zoo.cc'], - depends=['src/zoo.h'], - include_dirs=['src'] - ) - config.add_data_dir('tests') - return config - -if __name__ == "__main__": - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/distutils/tests/test_fcompiler_gnu.py b/pythonPackages/numpy/numpy/distutils/tests/test_fcompiler_gnu.py deleted file mode 100755 index 3d727cd94e..0000000000 --- a/pythonPackages/numpy/numpy/distutils/tests/test_fcompiler_gnu.py +++ /dev/null @@ -1,50 +0,0 @@ -from numpy.testing import * - -import numpy.distutils.fcompiler - -g77_version_strings = [ - ('GNU Fortran 0.5.25 20010319 (prerelease)', '0.5.25'), - ('GNU Fortran (GCC 3.2) 3.2 20020814 (release)', '3.2'), - ('GNU Fortran (GCC) 3.3.3 20040110 (prerelease) (Debian)', '3.3.3'), - ('GNU Fortran (GCC) 3.3.3 (Debian 20040401)', '3.3.3'), - ('GNU Fortran (GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)) 3.2.2' - ' 20030222 (Red Hat Linux 3.2.2-5)', '3.2.2'), -] - -gfortran_version_strings = [ - ('GNU Fortran 95 (GCC 4.0.3 20051023 (prerelease) (Debian 4.0.2-3))', - '4.0.3'), - ('GNU Fortran 95 (GCC) 4.1.0', '4.1.0'), - ('GNU Fortran 95 (GCC) 4.2.0 20060218 (experimental)', '4.2.0'), - ('GNU Fortran (GCC) 4.3.0 20070316 (experimental)', '4.3.0'), -] - -class TestG77Versions(TestCase): - def test_g77_version(self): - fc = numpy.distutils.fcompiler.new_fcompiler(compiler='gnu') - 
for vs, version in g77_version_strings: - v = fc.version_match(vs) - assert v == version, (vs, v) - - def test_not_g77(self): - fc = numpy.distutils.fcompiler.new_fcompiler(compiler='gnu') - for vs, _ in gfortran_version_strings: - v = fc.version_match(vs) - assert v is None, (vs, v) - -class TestGortranVersions(TestCase): - def test_gfortran_version(self): - fc = numpy.distutils.fcompiler.new_fcompiler(compiler='gnu95') - for vs, version in gfortran_version_strings: - v = fc.version_match(vs) - assert v == version, (vs, v) - - def test_not_gfortran(self): - fc = numpy.distutils.fcompiler.new_fcompiler(compiler='gnu95') - for vs, _ in g77_version_strings: - v = fc.version_match(vs) - assert v is None, (vs, v) - - -if __name__ == '__main__': - run_module_suite() diff --git a/pythonPackages/numpy/numpy/distutils/tests/test_misc_util.py b/pythonPackages/numpy/numpy/distutils/tests/test_misc_util.py deleted file mode 100755 index 6a671a9315..0000000000 --- a/pythonPackages/numpy/numpy/distutils/tests/test_misc_util.py +++ /dev/null @@ -1,58 +0,0 @@ -#!/usr/bin/env python - -from numpy.testing import * -from numpy.distutils.misc_util import appendpath, minrelpath, gpaths, rel_path -from os.path import join, sep, dirname - -ajoin = lambda *paths: join(*((sep,)+paths)) - -class TestAppendpath(TestCase): - - def test_1(self): - assert_equal(appendpath('prefix','name'),join('prefix','name')) - assert_equal(appendpath('/prefix','name'),ajoin('prefix','name')) - assert_equal(appendpath('/prefix','/name'),ajoin('prefix','name')) - assert_equal(appendpath('prefix','/name'),join('prefix','name')) - - def test_2(self): - assert_equal(appendpath('prefix/sub','name'), - join('prefix','sub','name')) - assert_equal(appendpath('prefix/sub','sup/name'), - join('prefix','sub','sup','name')) - assert_equal(appendpath('/prefix/sub','/prefix/name'), - ajoin('prefix','sub','name')) - - def test_3(self): - assert_equal(appendpath('/prefix/sub','/prefix/sup/name'), - 
ajoin('prefix','sub','sup','name')) - assert_equal(appendpath('/prefix/sub/sub2','/prefix/sup/sup2/name'), - ajoin('prefix','sub','sub2','sup','sup2','name')) - assert_equal(appendpath('/prefix/sub/sub2','/prefix/sub/sup/name'), - ajoin('prefix','sub','sub2','sup','name')) - -class TestMinrelpath(TestCase): - - def test_1(self): - n = lambda path: path.replace('/',sep) - assert_equal(minrelpath(n('aa/bb')),n('aa/bb')) - assert_equal(minrelpath('..'),'..') - assert_equal(minrelpath(n('aa/..')),'') - assert_equal(minrelpath(n('aa/../bb')),'bb') - assert_equal(minrelpath(n('aa/bb/..')),'aa') - assert_equal(minrelpath(n('aa/bb/../..')),'') - assert_equal(minrelpath(n('aa/bb/../cc/../dd')),n('aa/dd')) - assert_equal(minrelpath(n('.././..')),n('../..')) - assert_equal(minrelpath(n('aa/bb/.././../dd')),n('dd')) - -class TestGpaths(TestCase): - - def test_gpaths(self): - local_path = minrelpath(join(dirname(__file__),'..')) - ls = gpaths('command/*.py', local_path) - assert join(local_path,'command','build_src.py') in ls,`ls` - f = gpaths('system_info.py', local_path) - assert join(local_path,'system_info.py')==f[0],`f` - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/distutils/tests/test_npy_pkg_config.py b/pythonPackages/numpy/numpy/distutils/tests/test_npy_pkg_config.py deleted file mode 100755 index 6122e303bf..0000000000 --- a/pythonPackages/numpy/numpy/distutils/tests/test_npy_pkg_config.py +++ /dev/null @@ -1,96 +0,0 @@ -import os -from tempfile import mkstemp - -from numpy.testing import * -from numpy.distutils.npy_pkg_config import read_config, parse_flags - -simple = """\ -[meta] -Name = foo -Description = foo lib -Version = 0.1 - -[default] -cflags = -I/usr/include -libs = -L/usr/lib -""" -simple_d = {'cflags': '-I/usr/include', 'libflags': '-L/usr/lib', - 'version': '0.1', 'name': 'foo'} - -simple_variable = """\ -[meta] -Name = foo -Description = foo lib -Version = 0.1 - -[variables] -prefix = /foo/bar -libdir = 
${prefix}/lib -includedir = ${prefix}/include - -[default] -cflags = -I${includedir} -libs = -L${libdir} -""" -simple_variable_d = {'cflags': '-I/foo/bar/include', 'libflags': '-L/foo/bar/lib', - 'version': '0.1', 'name': 'foo'} - -class TestLibraryInfo(TestCase): - def test_simple(self): - fd, filename = mkstemp('foo.ini') - try: - pkg = os.path.splitext(filename)[0] - try: - os.write(fd, simple.encode('ascii')) - finally: - os.close(fd) - - out = read_config(pkg) - self.assertTrue(out.cflags() == simple_d['cflags']) - self.assertTrue(out.libs() == simple_d['libflags']) - self.assertTrue(out.name == simple_d['name']) - self.assertTrue(out.version == simple_d['version']) - finally: - os.remove(filename) - - def test_simple_variable(self): - fd, filename = mkstemp('foo.ini') - try: - pkg = os.path.splitext(filename)[0] - try: - os.write(fd, simple_variable.encode('ascii')) - finally: - os.close(fd) - - out = read_config(pkg) - self.assertTrue(out.cflags() == simple_variable_d['cflags']) - self.assertTrue(out.libs() == simple_variable_d['libflags']) - self.assertTrue(out.name == simple_variable_d['name']) - self.assertTrue(out.version == simple_variable_d['version']) - - out.vars['prefix'] = '/Users/david' - self.assertTrue(out.cflags() == '-I/Users/david/include') - finally: - os.remove(filename) - -class TestParseFlags(TestCase): - def test_simple_cflags(self): - d = parse_flags("-I/usr/include") - self.assertTrue(d['include_dirs'] == ['/usr/include']) - - d = parse_flags("-I/usr/include -DFOO") - self.assertTrue(d['include_dirs'] == ['/usr/include']) - self.assertTrue(d['macros'] == ['FOO']) - - d = parse_flags("-I /usr/include -DFOO") - self.assertTrue(d['include_dirs'] == ['/usr/include']) - self.assertTrue(d['macros'] == ['FOO']) - - def test_simple_lflags(self): - d = parse_flags("-L/usr/lib -lfoo -L/usr/lib -lbar") - self.assertTrue(d['library_dirs'] == ['/usr/lib', '/usr/lib']) - self.assertTrue(d['libraries'] == ['foo', 'bar']) - - d = parse_flags("-L 
/usr/lib -lfoo -L/usr/lib -lbar") - self.assertTrue(d['library_dirs'] == ['/usr/lib', '/usr/lib']) - self.assertTrue(d['libraries'] == ['foo', 'bar']) diff --git a/pythonPackages/numpy/numpy/distutils/unixccompiler.py b/pythonPackages/numpy/numpy/distutils/unixccompiler.py deleted file mode 100755 index cc1a9a4f9c..0000000000 --- a/pythonPackages/numpy/numpy/distutils/unixccompiler.py +++ /dev/null @@ -1,99 +0,0 @@ -""" -unixccompiler - can handle very long argument lists for ar. -""" - -import os -import sys - -from distutils.errors import DistutilsExecError, CompileError -from distutils.unixccompiler import * -from numpy.distutils.ccompiler import replace_method -from numpy.distutils.compat import get_exception - -if sys.version_info[0] < 3: - import log -else: - from numpy.distutils import log - -# Note that UnixCCompiler._compile appeared in Python 2.3 -def UnixCCompiler__compile(self, obj, src, ext, cc_args, extra_postargs, pp_opts): - """Compile a single source file with a Unix-style compiler.""" - display = '%s: %s' % (os.path.basename(self.compiler_so[0]),src) - try: - self.spawn(self.compiler_so + cc_args + [src, '-o', obj] + - extra_postargs, display = display) - except DistutilsExecError: - msg = str(get_exception()) - raise CompileError(msg) - -replace_method(UnixCCompiler, '_compile', UnixCCompiler__compile) - - -def UnixCCompiler_create_static_lib(self, objects, output_libname, - output_dir=None, debug=0, target_lang=None): - """ - Build a static library in a separate sub-process. - - Parameters - ---------- - objects : list or tuple of str - List of paths to object files used to build the static library. - output_libname : str - The library name as an absolute or relative (if `output_dir` is used) - path. - output_dir : str, optional - The path to the output directory. Default is None, in which case - the ``output_dir`` attribute of the UnixCCompiler instance is used. - debug : bool, optional - This parameter is not used.
- target_lang : str, optional - This parameter is not used. - - Returns - ------- - None - - """ - objects, output_dir = self._fix_object_args(objects, output_dir) - - output_filename = \ - self.library_filename(output_libname, output_dir=output_dir) - - if self._need_link(objects, output_filename): - try: - # previous .a may be screwed up; best to remove it first - # and recreate. - # Also, ar on OS X doesn't handle updating universal archives - os.unlink(output_filename) - except (IOError, OSError): - pass - self.mkpath(os.path.dirname(output_filename)) - tmp_objects = objects + self.objects - while tmp_objects: - objects = tmp_objects[:50] - tmp_objects = tmp_objects[50:] - display = '%s: adding %d object files to %s' % ( - os.path.basename(self.archiver[0]), - len(objects), output_filename) - self.spawn(self.archiver + [output_filename] + objects, - display = display) - - # Not many Unices required ranlib anymore -- SunOS 4.x is, I - # think the only major Unix that does. Maybe we need some - # platform intelligence here to skip ranlib if it's not - # needed -- or maybe Python's configure script took care of - # it for us, hence the check for leading colon. 
- if self.ranlib: - display = '%s:@ %s' % (os.path.basename(self.ranlib[0]), - output_filename) - try: - self.spawn(self.ranlib + [output_filename], - display = display) - except DistutilsExecError: - msg = str(get_exception()) - raise LibError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - return - -replace_method(UnixCCompiler, 'create_static_lib', - UnixCCompiler_create_static_lib) diff --git a/pythonPackages/numpy/numpy/doc/__init__.py b/pythonPackages/numpy/numpy/doc/__init__.py deleted file mode 100755 index 6589b54929..0000000000 --- a/pythonPackages/numpy/numpy/doc/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -import os - -ref_dir = os.path.join(os.path.dirname(__file__)) - -__all__ = [f[:-3] for f in os.listdir(ref_dir) if f.endswith('.py') and - not f.startswith('__')] -__all__.sort() - -for f in __all__: - __import__(__name__ + '.' + f) - -del f, ref_dir - -__doc__ = """\ -Topical documentation -===================== - -The following topics are available: -%s - -You can view them by - ->>> help(np.doc.TOPIC) #doctest: +SKIP - -""" % '\n- '.join([''] + __all__) - -__all__.extend(['__doc__']) diff --git a/pythonPackages/numpy/numpy/doc/basics.py b/pythonPackages/numpy/numpy/doc/basics.py deleted file mode 100755 index ea651bbc7b..0000000000 --- a/pythonPackages/numpy/numpy/doc/basics.py +++ /dev/null @@ -1,136 +0,0 @@ -""" -============ -Array basics -============ - -Array types and conversions between types -========================================= - -Numpy supports a much greater variety of numerical types than Python does. -This section shows which are available, and how to modify an array's data-type. 
- -========== ========================================================= -Data type Description -========== ========================================================= -bool Boolean (True or False) stored as a byte -int Platform integer (normally either ``int32`` or ``int64``) -int8 Byte (-128 to 127) -int16 Integer (-32768 to 32767) -int32 Integer (-2147483648 to 2147483647) -int64 Integer (-9223372036854775808 to 9223372036854775807) -uint8 Unsigned integer (0 to 255) -uint16 Unsigned integer (0 to 65535) -uint32 Unsigned integer (0 to 4294967295) -uint64 Unsigned integer (0 to 18446744073709551615) -float Shorthand for ``float64``. -float32 Single precision float: sign bit, 8 bits exponent, - 23 bits mantissa -float64 Double precision float: sign bit, 11 bits exponent, - 52 bits mantissa -complex Shorthand for ``complex128``. -complex64 Complex number, represented by two 32-bit floats (real - and imaginary components) -complex128 Complex number, represented by two 64-bit floats (real - and imaginary components) -========== ========================================================= - -Numpy numerical types are instances of ``dtype`` (data-type) objects, each -having unique characteristics. Once you have imported NumPy using - - :: - - >>> import numpy as np - -the dtypes are available as ``np.bool``, ``np.float32``, etc. - -Advanced types, not listed in the table above, are explored in -section :ref:`structured_arrays`. - -There are 5 basic numerical types representing booleans (bool), integers (int), -unsigned integers (uint), floating point (float) and complex. Those with numbers -in their name indicate the bitsize of the type (i.e. how many bits are needed -to represent a single value in memory). Some types, such as ``int`` and -``intp``, have differing bitsizes, dependent on the platforms (e.g. 32-bit -vs. 64-bit machines). This should be taken into account when interfacing -with low-level code (such as C or Fortran) where the raw memory is addressed.
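As an illustrative aside (not part of the original text), the platform dependence mentioned above can be observed with Python's standard-library ``struct`` module, whose format codes report the sizes of the underlying C types; ``'P'`` corresponds to a pointer, the C type behind ``intp``:

```python
import struct

# Sizes in bytes of the C types that back platform-dependent NumPy types.
# On 64-bit systems 'l' (C long) is typically 8 on Linux/macOS but 4 on
# Windows, which is why code interfacing with C must not assume a bitsize.
for code, cname in [("i", "C int"), ("l", "C long"), ("P", "pointer (intp)")]:
    print("%-15s %d bytes" % (cname, struct.calcsize(code)))
```

The exact numbers printed depend on your platform and compiler, which is precisely the point being made above.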
- -Data-types can be used as functions to convert python numbers to array scalars -(see the array scalar section for an explanation), python sequences of numbers -to arrays of that type, or as arguments to the dtype keyword that many numpy -functions or methods accept. Some examples:: - - >>> import numpy as np - >>> x = np.float32(1.0) - >>> x - 1.0 - >>> y = np.int_([1,2,4]) - >>> y - array([1, 2, 4]) - >>> z = np.arange(3, dtype=np.uint8) - >>> z - array([0, 1, 2], dtype=uint8) - -Array types can also be referred to by character codes, mostly to retain -backward compatibility with older packages such as Numeric. Some -documentation may still refer to these, for example:: - - >>> np.array([1, 2, 3], dtype='f') - array([ 1., 2., 3.], dtype=float32) - -We recommend using dtype objects instead. - -To convert the type of an array, use the .astype() method (preferred) or -the type itself as a function. For example: :: - - >>> z.astype(float) #doctest: +NORMALIZE_WHITESPACE - array([ 0., 1., 2.]) - >>> np.int8(z) - array([0, 1, 2], dtype=int8) - -Note that, above, we use the *Python* float object as a dtype. NumPy knows -that ``int`` refers to ``np.int``, ``bool`` means ``np.bool`` and -that ``float`` is ``np.float``. The other data-types do not have Python -equivalents. - -To determine the type of an array, look at the dtype attribute:: - - >>> z.dtype - dtype('uint8') - -dtype objects also contain information about the type, such as its bit-width -and its byte-order. The data type can also be used indirectly to query -properties of the type, such as whether it is an integer:: - - >>> d = np.dtype(int) - >>> d - dtype('int32') - - >>> np.issubdtype(d, int) - True - - >>> np.issubdtype(d, float) - False - - -Array Scalars -============= - -Numpy generally returns elements of arrays as array scalars (a scalar -with an associated dtype). 
Array scalars differ from Python scalars, but -for the most part they can be used interchangeably (the primary -exception is for versions of Python older than v2.x, where integer array -scalars cannot act as indices for lists and tuples). There are some -exceptions, such as when code requires very specific attributes of a scalar -or when it checks specifically whether a value is a Python scalar. Generally, -problems are easily fixed by explicitly converting array scalars -to Python scalars, using the corresponding Python type function -(e.g., ``int``, ``float``, ``complex``, ``str``, ``unicode``). - -The primary advantage of using array scalars is that -they preserve the array type (Python may not have a matching scalar type -available, e.g. ``int16``). Therefore, the use of array scalars ensures -identical behaviour between arrays and scalars, irrespective of whether the -value is inside an array or not. NumPy scalars also have many of the same -methods arrays do. - -""" diff --git a/pythonPackages/numpy/numpy/doc/broadcasting.py b/pythonPackages/numpy/numpy/doc/broadcasting.py deleted file mode 100755 index 7b61796636..0000000000 --- a/pythonPackages/numpy/numpy/doc/broadcasting.py +++ /dev/null @@ -1,177 +0,0 @@ -""" -======================== -Broadcasting over arrays -======================== - -The term broadcasting describes how numpy treats arrays with different -shapes during arithmetic operations. Subject to certain constraints, -the smaller array is "broadcast" across the larger array so that they -have compatible shapes. Broadcasting provides a means of vectorizing -array operations so that looping occurs in C instead of Python. It does -this without making needless copies of data and usually leads to -efficient algorithm implementations. There are, however, cases where -broadcasting is a bad idea because it leads to inefficient use of memory -that slows computation. - -NumPy operations are usually done on pairs of arrays on an -element-by-element basis. 
In the simplest case, the two arrays must -have exactly the same shape, as in the following example: - - >>> a = np.array([1.0, 2.0, 3.0]) - >>> b = np.array([2.0, 2.0, 2.0]) - >>> a * b - array([ 2., 4., 6.]) - -NumPy's broadcasting rule relaxes this constraint when the arrays' -shapes meet certain constraints. The simplest broadcasting example occurs -when an array and a scalar value are combined in an operation: - ->>> a = np.array([1.0, 2.0, 3.0]) ->>> b = 2.0 ->>> a * b -array([ 2., 4., 6.]) - -The result is equivalent to the previous example where ``b`` was an array. -We can think of the scalar ``b`` being *stretched* during the arithmetic -operation into an array with the same shape as ``a``. The new elements in -``b`` are simply copies of the original scalar. The stretching analogy is -only conceptual. NumPy is smart enough to use the original scalar value -without actually making copies, so that broadcasting operations are as -memory and computationally efficient as possible. - -The code in the second example is more efficient than that in the first -because broadcasting moves less memory around during the multiplication -(``b`` is a scalar rather than an array). - -General Broadcasting Rules -========================== -When operating on two arrays, NumPy compares their shapes element-wise. -It starts with the trailing dimensions, and works its way forward. Two -dimensions are compatible when - -1) they are equal, or -2) one of them is 1 - -If these conditions are not met, a -``ValueError: frames are not aligned`` exception is thrown, indicating that -the arrays have incompatible shapes. The size of the resulting array -is the maximum size along each dimension of the input arrays. - -Arrays do not need to have the same *number* of dimensions. For example, -if you have a ``256x256x3`` array of RGB values, and you want to scale -each color in the image by a different value, you can multiply the image -by a one-dimensional array with 3 values. 
Lining up the sizes of the -trailing axes of these arrays according to the broadcast rules, shows that -they are compatible:: - - Image (3d array): 256 x 256 x 3 - Scale (1d array): 3 - Result (3d array): 256 x 256 x 3 - -When either of the dimensions compared is one, the larger of the two is -used. In other words, the smaller of two axes is stretched or "copied" -to match the other. - -In the following example, both the ``A`` and ``B`` arrays have axes with -length one that are expanded to a larger size during the broadcast -operation:: - - A (4d array): 8 x 1 x 6 x 1 - B (3d array): 7 x 1 x 5 - Result (4d array): 8 x 7 x 6 x 5 - -Here are some more examples:: - - A (2d array): 5 x 4 - B (1d array): 1 - Result (2d array): 5 x 4 - - A (2d array): 5 x 4 - B (1d array): 4 - Result (2d array): 5 x 4 - - A (3d array): 15 x 3 x 5 - B (3d array): 15 x 1 x 5 - Result (3d array): 15 x 3 x 5 - - A (3d array): 15 x 3 x 5 - B (2d array): 3 x 5 - Result (3d array): 15 x 3 x 5 - - A (3d array): 15 x 3 x 5 - B (2d array): 3 x 1 - Result (3d array): 15 x 3 x 5 - -Here are examples of shapes that do not broadcast:: - - A (1d array): 3 - B (1d array): 4 # trailing dimensions do not match - - A (2d array): 2 x 1 - B (3d array): 8 x 4 x 3 # second from last dimensions mismatched - -An example of broadcasting in practice:: - - >>> x = np.arange(4) - >>> xx = x.reshape(4,1) - >>> y = np.ones(5) - >>> z = np.ones((3,4)) - - >>> x.shape - (4,) - - >>> y.shape - (5,) - - >>> x + y - <type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape - - >>> xx.shape - (4, 1) - - >>> y.shape - (5,) - - >>> (xx + y).shape - (4, 5) - - >>> xx + y - array([[ 1., 1., 1., 1., 1.], - [ 2., 2., 2., 2., 2.], - [ 3., 3., 3., 3., 3.], - [ 4., 4., 4., 4., 4.]]) - - >>> x.shape - (4,) - - >>> z.shape - (3, 4) - - >>> (x + z).shape - (3, 4) - - >>> x + z - array([[ 1., 2., 3., 4.], - [ 1., 2., 3., 4.], - [ 1., 2., 3., 4.]]) - -Broadcasting provides a convenient way of taking the outer product (or -any
other outer operation) of two arrays. The following example shows an -outer addition operation of two 1-d arrays:: - - >>> a = np.array([0.0, 10.0, 20.0, 30.0]) - >>> b = np.array([1.0, 2.0, 3.0]) - >>> a[:, np.newaxis] + b - array([[ 1., 2., 3.], - [ 11., 12., 13.], - [ 21., 22., 23.], - [ 31., 32., 33.]]) - -Here the ``newaxis`` index operator inserts a new axis into ``a``, -making it a two-dimensional ``4x1`` array. Combining the ``4x1`` array -with ``b``, which has shape ``(3,)``, yields a ``4x3`` array. - -See `this article `_ -for illustrations of broadcasting concepts. - -""" diff --git a/pythonPackages/numpy/numpy/doc/byteswapping.py b/pythonPackages/numpy/numpy/doc/byteswapping.py deleted file mode 100755 index 23e7d7f6ee..0000000000 --- a/pythonPackages/numpy/numpy/doc/byteswapping.py +++ /dev/null @@ -1,137 +0,0 @@ -''' - -============================= - Byteswapping and byte order -============================= - -Introduction to byte ordering and ndarrays -========================================== - -The ``ndarray`` is an object that provides a Python array interface to data -in memory. - -It often happens that the memory that you want to view with an array is -not of the same byte ordering as the computer on which you are running -Python. - -For example, I might be working on a computer with a little-endian CPU - -such as an Intel Pentium, but I have loaded some data from a file -written by a computer that is big-endian. Let's say I have loaded 4 -bytes from a file written by a Sun (big-endian) computer. I know that -these 4 bytes represent two 16-bit integers. On a big-endian machine, a -two-byte integer is stored with the Most Significant Byte (MSB) first, -and then the Least Significant Byte (LSB). Thus the bytes are, in memory order: - -#. MSB integer 1 -#. LSB integer 1 -#. MSB integer 2 -#. LSB integer 2 - -Let's say the two integers were in fact 1 and 770. Because 770 = 256 * -3 + 2, the 4 bytes in memory would contain respectively: 0, 1, 3, 2.
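As a quick check of the byte arithmetic above (an added aside using only the standard library, not part of the original text), ``struct`` can pack the two big-endian 16-bit integers and show the resulting memory bytes:

```python
import struct

# '>hh' = two big-endian ('>') signed 16-bit integers ('h'): 1 and 770.
packed = struct.pack(">hh", 1, 770)
print(list(packed))  # -> [0, 1, 3, 2]

# 770 = 3 * 256 + 2, so big-endian order stores byte 3 before byte 2.
assert packed == bytes([0, 1, 3, 2])
```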
The bytes I have loaded from the file would have these contents: - ->>> big_end_str = chr(0) + chr(1) + chr(3) + chr(2) ->>> big_end_str -'\\x00\\x01\\x03\\x02' - -We might want to use an ``ndarray`` to access these integers. In that -case, we can create an array around this memory, and tell numpy that -there are two integers, and that they are 16 bit and big-endian: - ->>> import numpy as np ->>> big_end_arr = np.ndarray(shape=(2,),dtype='>i2', buffer=big_end_str) ->>> big_end_arr[0] -1 ->>> big_end_arr[1] -770 - -Note the array ``dtype`` above of ``>i2``. The ``>`` means 'big-endian' -(``<`` is little-endian) and ``i2`` means 'signed 2-byte integer'. For -example, if our data represented a single unsigned 4-byte little-endian -integer, the dtype string would be ``<u4``. - ->>> little_end_u4 = np.ndarray(shape=(1,),dtype='<u4', buffer=big_end_str) ->>> little_end_u4[0] == 1 * 256**1 + 3 * 256**2 + 2 * 256**3 -True - -Returning to our ``big_end_arr`` - in this case our underlying data is -big-endian (data endianness) and we've set the dtype to match (the dtype -is also big-endian). However, sometimes you need to flip these around. - -Changing byte ordering -====================== - -As you can imagine from the introduction, there are two ways you can -affect the relationship between the byte ordering of the array and the -underlying memory it is looking at: - -* Change the byte-ordering information in the array dtype so that it - interprets the underlying data as being in a different byte order. - This is the role of ``arr.newbyteorder()`` -* Change the byte-ordering of the underlying data, leaving the dtype - interpretation as it was. This is what ``arr.byteswap()`` does. - -The common situations in which you need to change byte ordering are: - -#. Your data and dtype endianness don't match, and you want to change - the dtype so that it matches the data. -#. Your data and dtype endianness don't match, and you want to swap the - data so that they match the dtype. -#. 
Your data and dtype endianness match, but you want the data swapped - and the dtype to reflect this - -Data and dtype endianness don't match, change dtype to match data ------------------------------------------------------------------ - -We make something where they don't match: - ->>> wrong_end_dtype_arr = np.ndarray(shape=(2,),dtype='<i2', buffer=big_end_str) ->>> wrong_end_dtype_arr[0] -256 - -The obvious fix for this situation is to change the dtype so it gives -the correct endianness: - ->>> fixed_end_dtype_arr = wrong_end_dtype_arr.newbyteorder() ->>> fixed_end_dtype_arr[0] -1 - -Note that the array has not changed in memory: - ->>> fixed_end_dtype_arr.tostring() == big_end_str -True - -Data and type endianness don't match, change data to match dtype ---------------------------------------------------------------- - -You might want to do this if you need the data in memory to be a certain -ordering. For example you might be writing the memory out to a file -that needs a certain byte ordering. - ->>> fixed_end_mem_arr = wrong_end_dtype_arr.byteswap() ->>> fixed_end_mem_arr[0] -1 - -Now the array *has* changed in memory: - ->>> fixed_end_mem_arr.tostring() == big_end_str -False - -Data and dtype endianness match, swap data and dtype ----------------------------------------------------- - -You may have a correctly specified array dtype, but you need the array -to have the opposite byte order in memory, and you want the dtype to -match so the array values make sense.
In this case you just do both of -the previous operations: - ->>> swapped_end_arr = big_end_arr.byteswap().newbyteorder() ->>> swapped_end_arr[0] -1 ->>> swapped_end_arr.tostring() == big_end_str -False - -''' diff --git a/pythonPackages/numpy/numpy/doc/constants.py b/pythonPackages/numpy/numpy/doc/constants.py deleted file mode 100755 index 722147dd8a..0000000000 --- a/pythonPackages/numpy/numpy/doc/constants.py +++ /dev/null @@ -1,391 +0,0 @@ -""" -========= -Constants -========= - -Numpy includes several constants: - -%(constant_list)s -""" -# -# Note: the docstring is autogenerated. -# -import textwrap, re - -# Maintain same format as in numpy.add_newdocs -constants = [] -def add_newdoc(module, name, doc): - constants.append((name, doc)) - -add_newdoc('numpy', 'Inf', - """ - IEEE 754 floating point representation of (positive) infinity. - - Use `inf` because `Inf`, `Infinity`, `PINF` and `infty` are aliases for - `inf`. For more details, see `inf`. - - See Also - -------- - inf - - """) - -add_newdoc('numpy', 'Infinity', - """ - IEEE 754 floating point representation of (positive) infinity. - - Use `inf` because `Inf`, `Infinity`, `PINF` and `infty` are aliases for - `inf`. For more details, see `inf`. - - See Also - -------- - inf - - """) - -add_newdoc('numpy', 'NAN', - """ - IEEE 754 floating point representation of Not a Number (NaN). - - `NaN` and `NAN` are equivalent definitions of `nan`. Please use - `nan` instead of `NAN`. - - See Also - -------- - nan - - """) - -add_newdoc('numpy', 'NINF', - """ - IEEE 754 floating point representation of negative infinity. - - Returns - ------- - y : float - A floating point representation of negative infinity. 
- - See Also - -------- - isinf : Shows which elements are positive or negative infinity - - isposinf : Shows which elements are positive infinity - - isneginf : Shows which elements are negative infinity - - isnan : Shows which elements are Not a Number - - isfinite : Shows which elements are finite (not one of Not a Number, - positive infinity and negative infinity) - - Notes - ----- - Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic - (IEEE 754). This means that Not a Number is not equivalent to infinity. - Also that positive infinity is not equivalent to negative infinity. But - infinity is equivalent to positive infinity. - - Examples - -------- - >>> np.NINF - -inf - >>> np.log(0) - -inf - - """) - -add_newdoc('numpy', 'NZERO', - """ - IEEE 754 floating point representation of negative zero. - - Returns - ------- - y : float - A floating point representation of negative zero. - - See Also - -------- - PZERO : Defines positive zero. - - isinf : Shows which elements are positive or negative infinity. - - isposinf : Shows which elements are positive infinity. - - isneginf : Shows which elements are negative infinity. - - isnan : Shows which elements are Not a Number. - - isfinite : Shows which elements are finite - not one of - Not a Number, positive infinity and negative infinity. - - Notes - ----- - Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic - (IEEE 754). Negative zero is considered to be a finite number. - - Examples - -------- - >>> np.NZERO - -0.0 - >>> np.PZERO - 0.0 - - >>> np.isfinite([np.NZERO]) - array([ True], dtype=bool) - >>> np.isnan([np.NZERO]) - array([False], dtype=bool) - >>> np.isinf([np.NZERO]) - array([False], dtype=bool) - - """) - -add_newdoc('numpy', 'NaN', - """ - IEEE 754 floating point representation of Not a Number (NaN). - - `NaN` and `NAN` are equivalent definitions of `nan`. Please use - `nan` instead of `NaN`. 
- - See Also - -------- - nan - - """) - -add_newdoc('numpy', 'PINF', - """ - IEEE 754 floating point representation of (positive) infinity. - - Use `inf` because `Inf`, `Infinity`, `PINF` and `infty` are aliases for - `inf`. For more details, see `inf`. - - See Also - -------- - inf - - """) - -add_newdoc('numpy', 'PZERO', - """ - IEEE 754 floating point representation of positive zero. - - Returns - ------- - y : float - A floating point representation of positive zero. - - See Also - -------- - NZERO : Defines negative zero. - - isinf : Shows which elements are positive or negative infinity. - - isposinf : Shows which elements are positive infinity. - - isneginf : Shows which elements are negative infinity. - - isnan : Shows which elements are Not a Number. - - isfinite : Shows which elements are finite - not one of - Not a Number, positive infinity and negative infinity. - - Notes - ----- - Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic - (IEEE 754). Positive zero is considered to be a finite number. - - Examples - -------- - >>> np.PZERO - 0.0 - >>> np.NZERO - -0.0 - - >>> np.isfinite([np.PZERO]) - array([ True], dtype=bool) - >>> np.isnan([np.PZERO]) - array([False], dtype=bool) - >>> np.isinf([np.PZERO]) - array([False], dtype=bool) - - """) - -add_newdoc('numpy', 'e', - """ - Euler's constant, base of natural logarithms, Napier's constant. - - ``e = 2.71828182845904523536028747135266249775724709369995...`` - - See Also - -------- - exp : Exponential function - log : Natural logarithm - - References - ---------- - .. [1] http://en.wikipedia.org/wiki/Napier_constant - - """) - -add_newdoc('numpy', 'inf', - """ - IEEE 754 floating point representation of (positive) infinity. - - Returns - ------- - y : float - A floating point representation of positive infinity. 
- - See Also - -------- - isinf : Shows which elements are positive or negative infinity - - isposinf : Shows which elements are positive infinity - - isneginf : Shows which elements are negative infinity - - isnan : Shows which elements are Not a Number - - isfinite : Shows which elements are finite (not one of Not a Number, - positive infinity and negative infinity) - - Notes - ----- - Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic - (IEEE 754). This means that Not a Number is not equivalent to infinity. - Also that positive infinity is not equivalent to negative infinity. But - infinity is equivalent to positive infinity. - - `Inf`, `Infinity`, `PINF` and `infty` are aliases for `inf`. - - Examples - -------- - >>> np.inf - inf - >>> np.array([1]) / 0. - array([ Inf]) - - """) - -add_newdoc('numpy', 'infty', - """ - IEEE 754 floating point representation of (positive) infinity. - - Use `inf` because `Inf`, `Infinity`, `PINF` and `infty` are aliases for - `inf`. For more details, see `inf`. - - See Also - -------- - inf - - """) - -add_newdoc('numpy', 'nan', - """ - IEEE 754 floating point representation of Not a Number (NaN). - - Returns - ------- - y : A floating point representation of Not a Number. - - See Also - -------- - isnan : Shows which elements are Not a Number. - isfinite : Shows which elements are finite (not one of - Not a Number, positive infinity and negative infinity) - - Notes - ----- - Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic - (IEEE 754). This means that Not a Number is not equivalent to infinity. - - `NaN` and `NAN` are aliases of `nan`. - - Examples - -------- - >>> np.nan - nan - >>> np.log(-1) - nan - >>> np.log([-1, 1, 2]) - array([ NaN, 0. , 0.69314718]) - - """) - -add_newdoc('numpy', 'newaxis', - """ - A convenient alias for None, useful for indexing arrays. 
- - See Also - -------- - `numpy.doc.indexing` - - Examples - -------- - >>> newaxis is None - True - >>> x = np.arange(3) - >>> x - array([0, 1, 2]) - >>> x[:, newaxis] - array([[0], - [1], - [2]]) - >>> x[:, newaxis, newaxis] - array([[[0]], - [[1]], - [[2]]]) - >>> x[:, newaxis] * x - array([[0, 0, 0], - [0, 1, 2], - [0, 2, 4]]) - - Outer product, same as ``outer(x, y)``: - - >>> y = np.arange(3, 6) - >>> x[:, newaxis] * y - array([[ 0, 0, 0], - [ 3, 4, 5], - [ 6, 8, 10]]) - - ``x[newaxis, :]`` is equivalent to ``x[newaxis]`` and ``x[None]``: - - >>> x[newaxis, :].shape - (1, 3) - >>> x[newaxis].shape - (1, 3) - >>> x[None].shape - (1, 3) - >>> x[:, newaxis].shape - (3, 1) - - """) - -if __doc__: - constants_str = [] - constants.sort() - for name, doc in constants: - s = textwrap.dedent(doc).replace("\n", "\n ") - - # Replace sections by rubrics - lines = s.split("\n") - new_lines = [] - for line in lines: - m = re.match(r'^(\s+)[-=]+\s*$', line) - if m and new_lines: - prev = textwrap.dedent(new_lines.pop()) - new_lines.append('%s.. rubric:: %s' % (m.group(1), prev)) - new_lines.append('') - else: - new_lines.append(line) - s = "\n".join(new_lines) - - # Done. - constants_str.append(""".. const:: %s\n %s""" % (name, s)) - constants_str = "\n".join(constants_str) - - __doc__ = __doc__ % dict(constant_list=constants_str) - del constants_str, name, doc - del line, lines, new_lines, m, s, prev - -del constants, add_newdoc diff --git a/pythonPackages/numpy/numpy/doc/creation.py b/pythonPackages/numpy/numpy/doc/creation.py deleted file mode 100755 index 9a204e252a..0000000000 --- a/pythonPackages/numpy/numpy/doc/creation.py +++ /dev/null @@ -1,143 +0,0 @@ -""" -============== -Array Creation -============== - -Introduction -============ - -There are 5 general mechanisms for creating arrays: - -1) Conversion from other Python structures (e.g., lists, tuples) -2) Intrinsic numpy array array creation objects (e.g., arange, ones, zeros, - etc.) 
-3) Reading arrays from disk, either from standard or custom formats -4) Creating arrays from raw bytes through the use of strings or buffers -5) Use of special library functions (e.g., random) - -This section will not cover means of replicating, joining, or otherwise -expanding or mutating existing arrays. Nor will it cover creating object -arrays or record arrays. Both of those are covered in their own sections. - -Converting Python array_like Objects to Numpy Arrays -==================================================== - -In general, numerical data arranged in an array-like structure in Python can -be converted to arrays through the use of the array() function. The most -obvious examples are lists and tuples. See the documentation for array() for -details on its use. Some objects may support the array-protocol and allow -conversion to arrays this way. A simple way to find out if the object can be -converted to a numpy array using array() is simply to try it interactively and -see if it works! (The Python Way). - -Examples: :: - - >>> x = np.array([2,3,1,0]) - >>> x = np.array([2, 3, 1, 0]) - >>> x = np.array([[1,2.0],[0,0],(1+1j,3.)]) # note mix of tuple and lists, - and types - >>> x = np.array([[ 1.+0.j, 2.+0.j], [ 0.+0.j, 0.+0.j], [ 1.+1.j, 3.+0.j]]) - -Intrinsic Numpy Array Creation -============================== - -Numpy has built-in functions for creating arrays from scratch: - -zeros(shape) will create an array filled with 0 values with the specified -shape. The default dtype is float64: :: - - >>> np.zeros((2, 3)) - array([[ 0., 0., 0.], - [ 0., 0., 0.]]) - -ones(shape) will create an array filled with 1 values. It is identical to -zeros in all other respects. - -arange() will create arrays with regularly incrementing values. Check the -docstring for complete information on the various ways it can be used.
A few -examples will be given here: :: - - >>> np.arange(10) - array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - >>> np.arange(2, 10, dtype=np.float) - array([ 2., 3., 4., 5., 6., 7., 8., 9.]) - >>> np.arange(2, 3, 0.1) - array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9]) - -Note that there are some subtleties regarding the last usage that the user -should be aware of; they are described in the arange docstring. - -linspace() will create arrays with a specified number of elements, spaced -equally between the specified beginning and end values. For -example: :: - - >>> np.linspace(1., 4., 6) - array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ]) - -The advantage of this creation function is that one can guarantee the -number of elements and the starting and end point, which arange() -generally will not do for arbitrary start, stop, and step values. - -indices() will create a set of arrays (stacked as a one-higher-dimensional -array), one per dimension with each representing variation in that dimension. -An example illustrates much better than a verbal description: :: - - >>> np.indices((3,3)) - array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]], [[0, 1, 2], [0, 1, 2], [0, 1, 2]]]) - -This is particularly useful for evaluating functions of multiple dimensions on -a regular grid. - -Reading Arrays From Disk -======================== - -This is presumably the most common case of large array creation. The details, -of course, depend greatly on the format of data on disk and so this section -can only give general pointers on how to handle various formats. - -Standard Binary Formats ----------------------- - -Various fields have standard formats for array data.
The following lists the -ones with known Python libraries to read them and return numpy arrays (there -may be others for which it is possible to read and convert to numpy arrays, so -check the last section as well) -:: - - HDF5: PyTables - FITS: PyFITS - -Examples of formats that cannot be read directly, but for which conversion is -not hard, are those handled by libraries like PIL (which can read and write -many image formats such as jpg, png, etc). - -Common ASCII Formats ------------------------- - -Comma Separated Value files (CSV) are widely used (and an export and import -option for programs like Excel). There are a number of ways of reading these -files in Python. There are CSV functions in Python and functions in pylab -(part of matplotlib). - -More generic ASCII files can be read using the io package in scipy. - -Custom Binary Formats --------------------- - -There are a variety of approaches one can use. If the file has a relatively -simple format then one can write a simple I/O library and use the numpy -fromfile() function and .tofile() method to read and write numpy arrays -directly (mind your byteorder though!). If a good C or C++ library exists that -reads the data, one can wrap that library with a variety of techniques, though -that certainly is much more work and requires significantly more advanced -knowledge to interface with C or C++. - -Use of Special Libraries ------------------------- - -There are libraries that can be used to generate arrays for special purposes, -and it isn't possible to enumerate all of them. The most common use is of the -many array generation functions in random that can generate arrays of -random values, and some utility functions to generate special matrices (e.g. -diagonal).
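The fromfile()/tofile() round trip and the special-purpose generators described above can be sketched as follows (a minimal illustration; the temporary file path is incidental, and note that the raw file carries no dtype, shape, or byte-order metadata, so those must be re-supplied on read):

```python
import os
import tempfile
import numpy as np

# Write an array as raw bytes and read it back with np.fromfile().
a = np.arange(12, dtype=np.float64).reshape(3, 4)
path = os.path.join(tempfile.mkdtemp(), 'data.bin')
a.tofile(path)

# The raw file stores neither dtype nor shape, so both are supplied
# again on read; byte order must also match the writing machine.
b = np.fromfile(path, dtype=np.float64).reshape(3, 4)

# Special-purpose generators: random values and a diagonal matrix.
r = np.random.random((2, 2))   # 2x2 array of uniform values in [0, 1)
d = np.diag([1, 2, 3])         # 3x3 array with the given diagonal
```

The same caveat about byte order applies to files produced on another machine: a dtype such as `'>f8'` can be passed to fromfile() to read big-endian data.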
- -""" diff --git a/pythonPackages/numpy/numpy/doc/glossary.py b/pythonPackages/numpy/numpy/doc/glossary.py deleted file mode 100755 index dc7c75a0a3..0000000000 --- a/pythonPackages/numpy/numpy/doc/glossary.py +++ /dev/null @@ -1,415 +0,0 @@ -""" -======== -Glossary -======== - -along an axis - Axes are defined for arrays with more than one dimension. A - 2-dimensional array has two corresponding axes: the first running - vertically downwards across rows (axis 0), and the second running - horizontally across columns (axis 1). - - Many operations can take place along one of these axes. For example, - we can sum each row of an array, in which case we operate along - columns, or axis 1:: - - >>> x = np.arange(12).reshape((3,4)) - - >>> x - array([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]]) - - >>> x.sum(axis=1) - array([ 6, 22, 38]) - -array - A homogeneous container of numerical elements. Each element in the - array occupies a fixed amount of memory (hence homogeneous), and - can be a numerical element of a single type (such as float, int - or complex) or a combination (such as ``(float, int, float)``).
Each - array has an associated data-type (or ``dtype``), which describes - the numerical type of its elements:: - - >>> x = np.array([1, 2, 3], float) - - >>> x - array([ 1., 2., 3.]) - - >>> x.dtype # floating point number, 64 bits of memory per element - dtype('float64') - - - # More complicated data type: each array element is a combination of - # an integer and a floating point number - >>> np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)]) - array([(1, 2.0), (3, 4.0)], - dtype=[('x', '<i4'), ('y', '<f8')]) - -attribute - A property of an object that can be accessed using ``obj.attribute``, - e.g., ``shape`` is an attribute of an array:: - - >>> x = np.array([1, 2, 3]) - >>> x.shape - (3,) - -BLAS - `Basic Linear Algebra Subprograms `_ - -broadcast - NumPy can do operations on arrays whose shapes are mismatched:: - - >>> x = np.array([1, 2]) - >>> y = np.array([[3], [4]]) - - >>> x - array([1, 2]) - - >>> y - array([[3], - [4]]) - - >>> x + y - array([[4, 5], - [5, 6]]) - - See `doc.broadcasting`_ for more information. - -C order - See `row-major` - -column-major - A way to represent items in an N-dimensional array in the 1-dimensional - computer memory. In column-major order, the leftmost index "varies the - fastest": for example the array:: - - [[1, 2, 3], - [4, 5, 6]] - - is represented in the column-major order as:: - - [1, 4, 2, 5, 3, 6] - - Column-major order is also known as the Fortran order, as the Fortran - programming language uses it. - -decorator - An operator that transforms a function. For example, a ``log`` - decorator may be defined to print debugging information upon - function execution:: - - >>> def log(f): - ... def new_logging_func(*args, **kwargs): - ... print "Logging call with parameters:", args, kwargs - ... return f(*args, **kwargs) - ... - ... return new_logging_func - - Now, when we define a function, we can "decorate" it using ``log``:: - - >>> @log - ... def add(a, b): - ...
return a + b - - Calling ``add`` then yields: - - >>> add(1, 2) - Logging call with parameters: (1, 2) {} - 3 - -dictionary - Resembling a language dictionary, which provides a mapping between - words and descriptions thereof, a Python dictionary is a mapping - between two objects:: - - >>> x = {1: 'one', 'two': [1, 2]} - - Here, `x` is a dictionary mapping keys to values, in this case - the integer 1 to the string "one", and the string "two" to - the list ``[1, 2]``. The values may be accessed using their - corresponding keys:: - - >>> x[1] - 'one' - - >>> x['two'] - [1, 2] - - Note that dictionaries are not stored in any specific order. Also, - most mutable (see *immutable* below) objects, such as lists, may not - be used as keys. - - For more information on dictionaries, read the - `Python tutorial `_. - -Fortran order - See `column-major` - -flattened - Collapsed to a one-dimensional array. See `ndarray.flatten`_ for details. - -immutable - An object that cannot be modified after creation is called - immutable. Two common examples are strings and tuples. - -instance - A class definition gives the blueprint for constructing an object:: - - >>> class House(object): - ... wall_colour = 'white' - - Yet, we have to *build* a house before it exists:: - - >>> h = House() # build a house - - Now, ``h`` is called a ``House`` instance. An instance is therefore - a specific realisation of a class. - -iterable - A sequence that allows "walking" (iterating) over items, typically - using a loop such as:: - - >>> x = [1, 2, 3] - >>> [item**2 for item in x] - [1, 4, 9] - - It is often used in combination with ``enumerate``:: - - >>> keys = ['a','b','c'] - >>> for n, k in enumerate(keys): - ... print "Key %d: %s" % (n, k) - ... - Key 0: a - Key 1: b - Key 2: c - -list - A Python container that can hold any number of objects or items.
- The items do not have to be of the same type, and can even be - lists themselves:: - - >>> x = [2, 2.0, "two", [2, 2.0]] - - The list `x` contains 4 items, each of which can be accessed individually:: - - >>> x[2] # the string 'two' - 'two' - - >>> x[3] # a list, containing an integer 2 and a float 2.0 - [2, 2.0] - - It is also possible to select more than one item at a time, - using *slicing*:: - - >>> x[0:2] # or, equivalently, x[:2] - [2, 2.0] - - In code, arrays are often conveniently expressed as nested lists:: - - - >>> np.array([[1, 2], [3, 4]]) - array([[1, 2], - [3, 4]]) - - For more information, read the section on lists in the `Python - tutorial `_. For a mapping - type (key-value), see *dictionary*. - -mask - A boolean array, used to select only certain elements for an operation:: - - >>> x = np.arange(5) - >>> x - array([0, 1, 2, 3, 4]) - - >>> mask = (x > 2) - >>> mask - array([False, False, False, True, True], dtype=bool) - - >>> x[mask] = -1 - >>> x - array([ 0, 1, 2, -1, -1]) - -masked array - An array that suppresses the values indicated by a mask:: - - >>> x = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True]) - >>> x - masked_array(data = [-- 2.0 --], - mask = [ True False True], - fill_value = 1e+20) - - - >>> x + [1, 2, 3] - masked_array(data = [-- 4.0 --], - mask = [ True False True], - fill_value = 1e+20) - - - - Masked arrays are often used when operating on arrays containing - missing or invalid entries. - -matrix - A 2-dimensional ndarray that preserves its two-dimensional nature - throughout operations. It has certain special operations, such as ``*`` - (matrix multiplication) and ``**`` (matrix power), defined:: - - >>> x = np.mat([[1, 2], [3, 4]]) - - >>> x - matrix([[1, 2], - [3, 4]]) - - >>> x**2 - matrix([[ 7, 10], - [15, 22]]) - -method - A function associated with an object.
For example, each ndarray has a - method called ``repeat``:: - - >>> x = np.array([1, 2, 3]) - - >>> x.repeat(2) - array([1, 1, 2, 2, 3, 3]) - -ndarray - See *array*. - -reference - If ``a`` is a reference to ``b``, then ``(a is b) == True``. Therefore, - ``a`` and ``b`` are different names for the same Python object. - -row-major - A way to represent items in an N-dimensional array in the 1-dimensional - computer memory. In row-major order, the rightmost index "varies - the fastest": for example the array:: - - [[1, 2, 3], - [4, 5, 6]] - - is represented in the row-major order as:: - - [1, 2, 3, 4, 5, 6] - - Row-major order is also known as the C order, as the C programming - language uses it. New Numpy arrays are by default in row-major order. - -self - Often seen in method signatures, ``self`` refers to the instance - of the associated class. For example: - - >>> class Paintbrush(object): - ... color = 'blue' - ... - ... def paint(self): - ... print "Painting the city %s!" % self.color - ... - >>> p = Paintbrush() - >>> p.color = 'red' - >>> p.paint() # self refers to 'p' - Painting the city red! - -slice - Used to select only certain elements from a sequence:: - - >>> x = range(5) - >>> x - [0, 1, 2, 3, 4] - - >>> x[1:3] # slice from 1 to 3 (excluding 3 itself) - [1, 2] - - >>> x[1:5:2] # slice from 1 to 5, but skipping every second element - [1, 3] - - >>> x[::-1] # slice a sequence in reverse - [4, 3, 2, 1, 0] - - Arrays may have more than one dimension, each of which can be sliced - individually:: - - >>> x = np.array([[1, 2], [3, 4]]) - >>> x - array([[1, 2], - [3, 4]]) - - >>> x[:, 1] - array([2, 4]) - -tuple - A sequence that may contain a variable number of types of any - kind. A tuple is immutable, i.e., once constructed it cannot be - changed.
Similar to a list, it can be indexed and sliced:: - - >>> x = (1, 'one', [1, 2]) - - >>> x - (1, 'one', [1, 2]) - - >>> x[0] - 1 - - >>> x[:2] - (1, 'one') - - A useful concept is "tuple unpacking", which allows variables to - be assigned to the contents of a tuple:: - - >>> x, y = (1, 2) - >>> x, y = 1, 2 - - This is often used when a function returns multiple values: - - >>> def return_many(): - ... return 1, 'alpha', None - - >>> a, b, c = return_many() - >>> a, b, c - (1, 'alpha', None) - - >>> a - 1 - >>> b - 'alpha' - -ufunc - Universal function. A fast element-wise array operation. Examples include - ``add``, ``sin`` and ``logical_or``. - -view - An array that does not own its data, but refers to another array's - data instead. For example, we may create a view that only shows - every second element of another array:: - - >>> x = np.arange(5) - >>> x - array([0, 1, 2, 3, 4]) - - >>> y = x[::2] - >>> y - array([0, 2, 4]) - - >>> x[0] = 3 # changing x changes y as well, since y is a view on x - >>> y - array([3, 2, 4]) - -wrapper - Python is a high-level (highly abstracted, or English-like) language. - This abstraction comes at a price in execution speed, and sometimes - it becomes necessary to use lower level languages to do fast - computations. A wrapper is code that provides a bridge between - high- and low-level languages, allowing, e.g., Python to execute - code written in C or Fortran. - - Examples include ctypes, SWIG and Cython (which wraps C and C++) - and f2py (which wraps Fortran). - -""" diff --git a/pythonPackages/numpy/numpy/doc/howtofind.py b/pythonPackages/numpy/numpy/doc/howtofind.py deleted file mode 100755 index 29ad05318e..0000000000 --- a/pythonPackages/numpy/numpy/doc/howtofind.py +++ /dev/null @@ -1,9 +0,0 @@ -""" - -================= -How to Find Stuff -================= - -How to find things in NumPy.
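One common route for finding things is NumPy's own introspection helper, np.info, which can be pointed at any documented object; a small sketch (the choice of np.sum here is arbitrary):

```python
import io
import numpy as np

# np.info() prints an object's signature and docstring; it accepts an
# output stream, so the help text can be captured instead of printed.
buf = io.StringIO()
np.info(np.sum, output=buf)
text = buf.getvalue()
print(text.splitlines()[0])  # first line of the captured help text

# The generic Python tools apply as well:
# help(np.sum), dir(np), np.sum.__doc__
```

The standard `help()` and `dir()` built-ins work on any NumPy object, so no NumPy-specific tooling is strictly required.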
- -""" diff --git a/pythonPackages/numpy/numpy/doc/indexing.py b/pythonPackages/numpy/numpy/doc/indexing.py deleted file mode 100755 index 99def88892..0000000000 --- a/pythonPackages/numpy/numpy/doc/indexing.py +++ /dev/null @@ -1,407 +0,0 @@ -""" -============== -Array indexing -============== - -Array indexing refers to any use of the square brackets ([]) to index -array values. There are many options for indexing, which give numpy -indexing great power, but with power comes some complexity and the -potential for confusion. This section is just an overview of the -various options and issues related to indexing. Aside from single -element indexing, the details on most of these options are to be -found in related sections. - -Assignment vs referencing -========================= - -Most of the following examples show the use of indexing when -referencing data in an array. The examples work just as well -when assigning to an array. See the section at the end for -specific examples and explanations on how assignments work. - -Single element indexing -======================= - -Single element indexing for a 1-D array is what one expects. It works -exactly like that for other standard Python sequences. It is 0-based, -and accepts negative indices for indexing from the end of the array. :: - - >>> x = np.arange(10) - >>> x[2] - 2 - >>> x[-2] - 8 - -Unlike lists and tuples, numpy arrays support multidimensional indexing -for multidimensional arrays. That means that it is not necessary to -separate each dimension's index into its own set of square brackets. :: - - >>> x.shape = (2,5) # now x is 2-dimensional - >>> x[1,3] - 8 - >>> x[1,-1] - 9 - -Note that if one indexes a multidimensional array with fewer indices -than dimensions, one gets a subdimensional array. For example: :: - - >>> x[0] - array([0, 1, 2, 3, 4]) - -That is, each index specified selects the array corresponding to the -rest of the dimensions selected.
In the above example, choosing 0 -means that the remaining dimension of length 5 is being left unspecified, -and that what is returned is an array of that dimensionality and size. -It must be noted that the returned array is not a copy of the original, -but points to the same values in memory as does the original array. -In this case, the 1-D array at the first position (0) is returned. -So using a single index on the returned array, results in a single -element being returned. That is: :: - - >>> x[0][2] - 2 - -So note that ``x[0,2] == x[0][2]``, though the second case is less -efficient: a new temporary array is created after the first index -and is subsequently indexed by 2. - -Note to those used to IDL or Fortran memory order as it relates to -indexing. Numpy uses C-order indexing. That means that the last -index usually represents the most rapidly changing memory location, -unlike Fortran or IDL, where the first index represents the most -rapidly changing location in memory. This difference represents a -great potential for confusion. - -Other indexing options -====================== - -It is possible to slice and stride arrays to extract arrays of the -same number of dimensions, but of different sizes than the original. -The slicing and striding works exactly the same way it does for lists -and tuples except that they can be applied to multiple dimensions as -well. A few examples illustrate this best: :: - - >>> x = np.arange(10) - >>> x[2:5] - array([2, 3, 4]) - >>> x[:-7] - array([0, 1, 2]) - >>> x[1:7:2] - array([1, 3, 5]) - >>> y = np.arange(35).reshape(5,7) - >>> y[1:5:2,::3] - array([[ 7, 10, 13], - [21, 24, 27]]) - -Note that slices of arrays do not copy the internal array data but -only produce new views of the original data. - -It is possible to index arrays with other arrays for the purposes of -selecting lists of values out of arrays into new arrays. There are -two different ways of accomplishing this. One uses one or more arrays -of index values.
The other involves giving a boolean array of the proper -shape to indicate the values to be selected. Index arrays are a very -powerful tool that allow one to avoid looping over individual elements in -arrays and thus greatly improve performance. - -It is possible to use special features to effectively increase the -number of dimensions in an array through indexing so the resulting -array acquires the shape needed for use in an expression or with a -specific function. - -Index arrays -============ - -Numpy arrays may be indexed with other arrays (or any other sequence- -like object that can be converted to an array, such as lists, with the -exception of tuples; see the end of this document for why this is). The -use of index arrays ranges from simple, straightforward cases to -complex, hard-to-understand cases. For all cases of index arrays, what -is returned is a copy of the original data, not a view as one gets for -slices. - -Index arrays must be of integer type. Each value in the index array -indicates which value in the indexed array to use in place of the -index. To illustrate: :: - - >>> x = np.arange(10,1,-1) - >>> x - array([10, 9, 8, 7, 6, 5, 4, 3, 2]) - >>> x[np.array([3, 3, 1, 8])] - array([7, 7, 9, 2]) - - -The index array consisting of the values 3, 3, 1 and 8 correspondingly -creates an array of length 4 (same as the index array) where each index -is replaced by the value the index array has in the array being indexed. - -Negative values are permitted and work as they do with single indices -or slices: :: - - >>> x[np.array([3,3,-3,8])] - array([7, 7, 4, 2]) - -It is an error to have index values out of bounds: :: - - >>> x[np.array([3, 3, 20, 8])] - IndexError: index 20 out of bounds 0<=index<9 - -Generally speaking, what is returned when index arrays are used is -an array with the same shape as the index array, but with the type -and values of the array being indexed.
As an example, we can use a -multidimensional index array instead: :: - - >>> x[np.array([[1,1],[2,3]])] - array([[9, 9], - [8, 7]]) - -Indexing Multi-dimensional arrays -================================= - -Things become more complex when multidimensional arrays are indexed, -particularly with multidimensional index arrays. These tend to be -more unusual uses, but they are permitted, and they are useful for some -problems. We'll start with the simplest multidimensional case (using -the array y from the previous examples): :: - - >>> y[np.array([0,2,4]), np.array([0,1,2])] - array([ 0, 15, 30]) - -In this case, if the index arrays have a matching shape, and there is -an index array for each dimension of the array being indexed, the -resultant array has the same shape as the index arrays, and the values -correspond to the index set for each position in the index arrays. In -this example, the first index value is 0 for both index arrays, and -thus the first value of the resultant array is y[0,0]. The next value -is y[2,1], and the last is y[4,2]. - -If the index arrays do not have the same shape, there is an attempt to -broadcast them to the same shape. If they cannot be broadcast to the -same shape, an exception is raised: :: - - >>> y[np.array([0,2,4]), np.array([0,1])] - ValueError: shape mismatch: objects cannot be - broadcast to a single shape - -The broadcasting mechanism permits index arrays to be combined with -scalars for other indices. The effect is that the scalar value is used -for all the corresponding values of the index arrays: :: - - >>> y[np.array([0,2,4]), 1] - array([ 1, 15, 29]) - -Jumping to the next level of complexity, it is possible to only -partially index an array with index arrays. It takes a bit of thought -to understand what happens in such cases.
For example if we just use -one index array with y: :: - - >>> y[np.array([0,2,4])] - array([[ 0, 1, 2, 3, 4, 5, 6], - [14, 15, 16, 17, 18, 19, 20], - [28, 29, 30, 31, 32, 33, 34]]) - -What results is the construction of a new array where each value of -the index array selects one row from the array being indexed and the -resultant array has the resulting shape (number of index elements, -size of row). - -An example of where this may be useful is for a color lookup table -where we want to map the values of an image into RGB triples for -display. The lookup table could have a shape (nlookup, 3). Indexing -such an array with an image with shape (ny, nx) with dtype=np.uint8 -(or any integer type so long as values are within the bounds of the -lookup table) will result in an array of shape (ny, nx, 3) where a -triple of RGB values is associated with each pixel location. - -In general, the shape of the resultant array will be the concatenation -of the shape of the index array (or the shape that all the index arrays -were broadcast to) with the shape of any unused dimensions (those not -indexed) in the array being indexed. - -Boolean or "mask" index arrays -============================== - -Boolean arrays used as indices are treated in a different manner -entirely than index arrays. Boolean arrays must be of the same shape -as the array being indexed, or broadcastable to the same shape. In the -most straightforward case, the boolean array has the same shape: :: - - >>> b = y>20 - >>> y[b] - array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34]) - -The result is a 1-D array containing all the elements in the indexed -array corresponding to all the true elements in the boolean array. As -with index arrays, what is returned is a copy of the data, not a view -as one gets with slices. - -With broadcasting, multidimensional arrays may be the result.
For -example: :: - - >>> b[:,5] # use a 1-D boolean that broadcasts with y - array([False, False, False, True, True], dtype=bool) - >>> y[b[:,5]] - array([[21, 22, 23, 24, 25, 26, 27], - [28, 29, 30, 31, 32, 33, 34]]) - -Here the 4th and 5th rows are selected from the indexed array and -combined to make a 2-D array. - -Combining index arrays with slices -================================== - -Index arrays may be combined with slices. For example: :: - - >>> y[np.array([0,2,4]),1:3] - array([[ 1, 2], - [15, 16], - [29, 30]]) - -In effect, the slice is converted to an index array -np.array([[1,2]]) (shape (1,2)) that is broadcast with the index array -to produce a resultant array of shape (3,2). - -Likewise, slicing can be combined with broadcasted boolean indices: :: - - >>> y[b[:,5],1:3] - array([[22, 23], - [29, 30]]) - -Structural indexing tools -========================= - -To facilitate easy matching of array shapes with expressions and in -assignments, the np.newaxis object can be used within array indices -to add new dimensions with a size of 1. For example: :: - - >>> y.shape - (5, 7) - >>> y[:,np.newaxis,:].shape - (5, 1, 7) - -Note that there are no new elements in the array, just that the -dimensionality is increased. This can be handy to combine two -arrays in a way that otherwise would require explicit reshaping -operations. For example: :: - - >>> x = np.arange(5) - >>> x[:,np.newaxis] + x[np.newaxis,:] - array([[0, 1, 2, 3, 4], - [1, 2, 3, 4, 5], - [2, 3, 4, 5, 6], - [3, 4, 5, 6, 7], - [4, 5, 6, 7, 8]]) - -The ellipsis syntax may be used to indicate selecting in full any -remaining unspecified dimensions.
For example: :: - - >>> z = np.arange(81).reshape(3,3,3,3) - >>> z[1,...,2] - array([[29, 32, 35], - [38, 41, 44], - [47, 50, 53]]) - -This is equivalent to: :: - - >>> z[1,:,:,2] - array([[29, 32, 35], - [38, 41, 44], - [47, 50, 53]]) - -Assigning values to indexed arrays -================================== - -As mentioned, one can select a subset of an array to assign to using -a single index, slices, and index and mask arrays. The value being -assigned to the indexed array must be shape consistent (the same shape -or broadcastable to the shape the index produces). For example, it is -permitted to assign a constant to a slice: :: - - >>> x = np.arange(10) - >>> x[2:7] = 1 - -or an array of the right size: :: - - >>> x[2:7] = np.arange(5) - -Note that assignments may result in changes if assigning -higher types to lower types (like floats to ints) or even -exceptions (assigning complex to floats or ints): :: - - >>> x[1] = 1.2 - >>> x[1] - 1 - >>> x[1] = 1.2j - TypeError: can't convert complex to long; use - long(abs(z)) - - -Unlike some of the references (such as array and mask indices) -assignments are always made to the original data in the array -(indeed, nothing else would make sense!). Note though, that some -actions may not work as one may naively expect. This particular -example is often surprising to people: :: - - >>> x = np.arange(0, 50, 10) - >>> x - array([ 0, 10, 20, 30, 40]) - >>> x[np.array([1, 1, 3, 1])] += 1 - >>> x - array([ 0, 11, 20, 31, 40]) - -People often expect that the 1st location will be incremented by 3; -in fact, it is only incremented by 1. The reason is that -a new array is extracted from the original (as a temporary) containing -the values at 1, 1, 3, 1, then the value 1 is added to the temporary, -and then the temporary is assigned back to the original array. Thus -the value x[1]+1 is assigned to x[1] three times, -rather than x[1] being incremented 3 times.
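The buffered behaviour just described can be checked directly. As an aside (this is not part of the original text, and the method is only available in later NumPy releases), the ufunc ``.at()`` methods perform the same operation unbuffered, so repeated indices do accumulate:

```python
import numpy as np

# Buffered fancy-index +=: index 1 appears three times, but x[1] is
# incremented only once, as described above.
x = np.arange(0, 50, 10)
x[np.array([1, 1, 3, 1])] += 1
print(x)  # [ 0 11 20 31 40]

# np.add.at applies the addition unbuffered, so the three occurrences
# of index 1 each contribute an increment.
y = np.arange(0, 50, 10)
np.add.at(y, np.array([1, 1, 3, 1]), 1)
print(y)  # [ 0 13 20 31 40]
```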
- -Dealing with variable numbers of indices within programs -======================================================== - -The index syntax is very powerful but limiting when dealing with -a variable number of indices. For example, if you want to write -a function that can handle arguments with various numbers of -dimensions without having to write special case code for each -number of possible dimensions, how can that be done? If one -supplies a tuple to the index, the tuple will be interpreted -as a list of indices. For example (using the previous definition -for the array z): :: - - >>> indices = (1,1,1,1) - >>> z[indices] - 40 - -So one can use code to construct tuples of any number of indices -and then use these within an index. - -Slices can be specified within programs by using the slice() function -in Python. For example: :: - - >>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2] - >>> z[indices] - array([39, 40]) - -Likewise, ellipsis can be specified by code by using the Ellipsis -object: :: - - >>> indices = (1, Ellipsis, 1) # same as [1,...,1] - >>> z[indices] - array([[28, 31, 34], - [37, 40, 43], - [46, 49, 52]]) - -For this reason it is possible to use the output from the np.where() -function directly as an index since it always returns a tuple of index -arrays. - -Because of the special treatment of tuples, they are not automatically -converted to an array as a list would be. As an example: :: - - >>> z[[1,1,1,1]] # produces a large array - array([[[[27, 28, 29], - [30, 31, 32], ...
- >>> z[(1,1,1,1)] # returns a single value - 40 - -""" diff --git a/pythonPackages/numpy/numpy/doc/internals.py b/pythonPackages/numpy/numpy/doc/internals.py deleted file mode 100755 index a744293683..0000000000 --- a/pythonPackages/numpy/numpy/doc/internals.py +++ /dev/null @@ -1,162 +0,0 @@ -""" -=============== -Array Internals -=============== - -Internal organization of numpy arrays -===================================== - -It helps to understand a bit about how numpy arrays are handled under the covers. This section will not go into great detail. Those wishing to understand the full details are referred to Travis Oliphant's book "Guide to Numpy". - -Numpy arrays consist of two major components, the raw array data (from now on, -referred to as the data buffer), and the information about the raw array data. -The data buffer is typically what people think of as arrays in C or Fortran, -a contiguous (and fixed) block of memory containing fixed sized data items. -Numpy also contains a significant set of data that describes how to interpret -the data in the data buffer. This extra information contains (among other things): - - 1) The basic data element's size in bytes - 2) The start of the data within the data buffer (an offset relative to the - beginning of the data buffer). - 3) The number of dimensions and the size of each dimension - 4) The separation between elements for each dimension (the 'stride'). This - does not have to be a multiple of the element size - 5) The byte order of the data (which may not be the native byte order) - 6) Whether the buffer is read-only - 7) Information (via the dtype object) about the interpretation of the basic - data element. The basic data element may be as simple as a int or a float, - or it may be a compound object (e.g., struct-like), a fixed character field, - or Python object pointers. - 8) Whether the array is to be interpreted as C-order or Fortran-order.
- -This arrangement allows for very flexible use of arrays. One thing that it allows -is simple changes of the metadata to change the interpretation of the array buffer. -Changing the byteorder of the array is a simple change involving no rearrangement -of the data. The shape of the array can be changed very easily without changing -anything in the data buffer or any data copying at all. - -Among other things, this makes it possible to create a new array metadata -object that uses the same data buffer -to create a new view of that data buffer that has a different interpretation -of the buffer (e.g., different shape, offset, byte order, strides, etc.) but -shares the same data bytes. Many operations in numpy do just this, such as -slices. Other operations, such as transpose, don't move data elements -around in the array, but rather change the information about the shape and strides so that the indexing of the array changes, but the data in the buffer doesn't move. - -Typically, these new arrays with new metadata but the same data buffer are -new 'views' into the data buffer. There is a different ndarray object, but it -uses the same data buffer. This is why it is necessary to force copies through -use of the .copy() method if one really wants to make a new and independent -copy of the data buffer. - -New views into arrays mean that the reference counts for the data buffer -increase. Simply doing away with the original array object will not remove the -data buffer if other views of it still exist. - -Multidimensional Array Indexing Order Issues -============================================ - -What is the right way to index -multi-dimensional arrays? Before you jump to conclusions about the one -true way to index multi-dimensional arrays, it pays to understand why this is -a confusing issue.
This section will try to explain in detail how numpy -indexing works and why we adopt the convention we do for images, and when it -may be appropriate to adopt other conventions. - -The first thing to understand is -that there are two conflicting conventions for indexing 2-dimensional arrays. -Matrix notation uses the first index to indicate which row is being selected and -the second index to indicate which column is selected. This is the opposite of the -geometrically oriented convention for images, where people generally think of the -first index as representing x position (i.e., column) and the second as representing y -position (i.e., row). This alone is the source of much confusion; -matrix-oriented users and image-oriented users expect two different things with -regard to indexing. - -The second issue to understand is how indices correspond -to the order in which the array is stored in memory. In Fortran the first index is the -most rapidly varying index when moving through the elements of a two -dimensional array as it is stored in memory. If you adopt the matrix -convention for indexing, then this means the matrix is stored one column at a -time (since the first index moves to the next row as it changes). Thus Fortran -is considered a column-major language. C has just the opposite convention. In -C, the last index changes most rapidly as one moves through the array as -stored in memory. Thus C is a row-major language. The matrix is stored by -rows. Note that in both cases this presumes that the matrix convention for -indexing is being used, i.e., for both Fortran and C, the first index is the -row. Note that this convention implies that the indexing convention is invariant -and that the data order changes to keep that so. - -But that's not the only way -to look at it. Suppose one has large two-dimensional arrays (images or -matrices) stored in data files. Suppose the data are stored by rows rather than -by columns.
If we are to preserve our index convention (whether matrix or -image) that means that depending on the language we use, we may be forced to -reorder the data if it is read into memory to preserve our indexing -convention. For example if we read row-ordered data into memory without -reordering, it will match the matrix indexing convention for C, but not for -Fortran. Conversely, it will match the image indexing convention for Fortran, -but not for C. For C, if one is using data stored in row order, and one wants -to preserve the image index convention, the data must be reordered when -reading into memory. - -In the end, which you do for Fortran or C depends on -which is more important, not reordering data or preserving the indexing -convention. For large images, reordering data is potentially expensive, and -often the indexing convention is inverted to avoid that. - -The situation with -numpy makes this issue yet more complicated. The internal machinery of numpy -arrays is flexible enough to accept any ordering of indices. One can simply -reorder indices by manipulating the internal stride information for arrays -without reordering the data at all. Numpy will know how to map the new index -order to the data without moving the data. - -So if this is true, why not choose -the index order that matches what you most expect? In particular, why not define -row-ordered images to use the image convention? (This is sometimes referred -to as the Fortran convention vs the C convention, thus the 'C' and 'FORTRAN' -order options for array ordering in numpy.) The drawback of doing this is -potential performance penalties. It's common to access the data sequentially, -either implicitly in array operations or explicitly by looping over rows of an -image. When that is done, then the data will be accessed in non-optimal order. 
-As the first index is incremented, what is actually happening is that elements -spaced far apart in memory are being sequentially accessed, with usually poor -memory access speeds. Consider, for example, a two-dimensional image 'im' defined so -that im[0, 10] represents the value at x=0, y=10. To be consistent with usual -Python behavior, im[0] would then represent a column at x=0. Yet that data -would be spread over the whole array since the data are stored in row order. -Despite the flexibility of numpy's indexing, it can't really paper over the fact that -basic operations are rendered inefficient because of data order, or that getting -contiguous subarrays is still awkward (e.g., im[:,0] for the first row, vs -im[0]); thus one can't use an idiom such as ``for row in im``. ``for col in im`` does -work, but doesn't yield contiguous column data. - -As it turns out, numpy is -smart enough when dealing with ufuncs to determine which index is the most -rapidly varying one in memory and use that for the innermost loop. Thus for -ufuncs there is no large intrinsic advantage to either approach in most cases. -On the other hand, use of .flat with a FORTRAN-ordered array will lead to -non-optimal memory access, as adjacent elements in the flattened array (iterator, -actually) are not contiguous in memory. - -Indeed, the fact is that Python -indexing on lists and other sequences naturally leads to an outside-to-inside -ordering (the first index gets the largest grouping, the next the next largest, -and the last gets the smallest element). Since image data are normally stored -by rows, this corresponds to position within rows being the last item indexed.
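The stride metadata discussed above makes the row-major vs. column-major trade-off concrete; a small sketch (stride values assume 8-byte floats):

```python
import numpy as np

a_c = np.arange(6, dtype=np.float64).reshape(2, 3)  # C (row-major) order
a_f = np.asfortranarray(a_c)                        # same values, Fortran order

# In C order the last index varies fastest in memory, so stepping the
# first index jumps a whole row (3 * 8 bytes).  In Fortran order the
# first index varies fastest, so it steps a single element (8 bytes).
print(a_c.strides)                # (24, 8)
print(a_f.strides)                # (8, 16)

# Both arrays index identically -- only the memory layout differs.
print(np.array_equal(a_c, a_f))   # True

# A row of the C-ordered array is contiguous in memory; so is a
# column of the Fortran-ordered one.
print(a_c[0].flags['C_CONTIGUOUS'])      # True
print(a_f[:, 0].flags['F_CONTIGUOUS'])   # True
```

This is exactly why sequential access along the "wrong" axis is slow: the strides show how far apart in memory those elements really are.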
- -If you do want to use Fortran ordering, realize that -there are two approaches to consider: 1) accept that the first index is just not -the most rapidly changing in memory and have all your I/O routines reorder -your data when going from memory to disk or vice versa, or 2) use numpy's -mechanism for mapping the first index to the most rapidly varying data. We -recommend the former if possible. The disadvantage of the latter is that many -of numpy's functions will yield arrays without Fortran ordering unless you are -careful to use the 'order' keyword. Doing this would be highly inconvenient. - -Otherwise we recommend simply learning to reverse the usual order of indices -when accessing elements of an array. Granted, it goes against the grain, but -it is more in line with Python semantics and the natural order of the data. - -""" diff --git a/pythonPackages/numpy/numpy/doc/io.py b/pythonPackages/numpy/numpy/doc/io.py deleted file mode 100755 index 3cde40bd0b..0000000000 --- a/pythonPackages/numpy/numpy/doc/io.py +++ /dev/null @@ -1,9 +0,0 @@ -""" - -========= -Array I/O -========= - -Placeholder for array I/O documentation. - -""" diff --git a/pythonPackages/numpy/numpy/doc/jargon.py b/pythonPackages/numpy/numpy/doc/jargon.py deleted file mode 100755 index e13ff5686a..0000000000 --- a/pythonPackages/numpy/numpy/doc/jargon.py +++ /dev/null @@ -1,9 +0,0 @@ -""" - -====== -Jargon -====== - -Placeholder for computer science, engineering and other jargon. - -""" diff --git a/pythonPackages/numpy/numpy/doc/methods_vs_functions.py b/pythonPackages/numpy/numpy/doc/methods_vs_functions.py deleted file mode 100755 index 22eadccf72..0000000000 --- a/pythonPackages/numpy/numpy/doc/methods_vs_functions.py +++ /dev/null @@ -1,9 +0,0 @@ -""" - -===================== -Methods vs. Functions -===================== - -Placeholder for Methods vs. Functions documentation.
- -""" diff --git a/pythonPackages/numpy/numpy/doc/misc.py b/pythonPackages/numpy/numpy/doc/misc.py deleted file mode 100755 index 81d7a54afe..0000000000 --- a/pythonPackages/numpy/numpy/doc/misc.py +++ /dev/null @@ -1,224 +0,0 @@ -""" -============= -Miscellaneous -============= - -IEEE 754 Floating Point Special Values: ------------------------------------------------ - -Special values defined in numpy: nan, inf, - -NaNs can be used as a poor-man's mask (if you don't care what the -original value was) - -Note: cannot use equality to test NaNs. E.g.: :: - - >>> myarr = np.array([1., 0., np.nan, 3.]) - >>> np.where(myarr == np.nan) - >>> np.nan == np.nan # is always False! Use special numpy functions instead. - False - >>> myarr[myarr == np.nan] = 0. # doesn't work - >>> myarr - array([ 1., 0., NaN, 3.]) - >>> myarr[np.isnan(myarr)] = 0. # use this instead find - >>> myarr - array([ 1., 0., 0., 3.]) - -Other related special value functions: :: - - isinf(): True if value is inf - isfinite(): True if not nan or inf - nan_to_num(): Map nan to 0, inf to max float, -inf to min float - -The following corresponds to the usual functions except that nans are excluded -from the results: :: - - nansum() - nanmax() - nanmin() - nanargmax() - nanargmin() - - >>> x = np.arange(10.) - >>> x[3] = np.nan - >>> x.sum() - nan - >>> np.nansum(x) - 42.0 - -How numpy handles numerical exceptions - -Default is to "warn" -But this can be changed, and it can be set individually for different kinds -of exceptions. The different behaviors are: :: - - 'ignore' : ignore completely - 'warn' : print a warning (once only) - 'raise' : raise an exception - 'call' : call a user-supplied function (set using seterrcall()) - -These behaviors can be set for all kinds of errors or specific ones: :: - - all: apply to all numeric exceptions - invalid: when NaNs are generated - divide: divide by zero (for integers as well!) 
- overflow: floating point overflows - underflow: floating point underflows - -Note that integer divide-by-zero is handled by the same machinery. -These behaviors are set on a per-thread basis. - -Examples: ------------- - -:: - - >>> oldsettings = np.seterr(all='warn') - >>> np.zeros(5,dtype=np.float32)/0. - invalid value encountered in divide - >>> j = np.seterr(under='ignore') - >>> np.array([1.e-100])**10 - >>> j = np.seterr(invalid='raise') - >>> np.sqrt(np.array([-1.])) - FloatingPointError: invalid value encountered in sqrt - >>> def errorhandler(errstr, errflag): - ... print "saw stupid error!" - >>> np.seterrcall(errorhandler) - - >>> j = np.seterr(all='call') - >>> np.zeros(5, dtype=np.int32)/0 - FloatingPointError: invalid value encountered in divide - saw stupid error! - >>> j = np.seterr(**oldsettings) # restore previous - ... # error-handling settings - -Interfacing to C: ------------------ -Only a survey of the choices. Little detail on how each works. - -1) Bare metal, wrap your own C-code manually. - - - Plusses: - - - Efficient - - No dependencies on other tools - - - Minuses: - - - Lots of learning overhead: - - - need to learn basics of Python C API - - need to learn basics of numpy C API - - need to learn how to handle reference counting and love it. - - - Reference counting often difficult to get right. - - - getting it wrong leads to memory leaks, and worse, segfaults - - - API will change for Python 3.0! 
- -2) pyrex - - Plusses: - - - avoid learning C APIs - - no dealing with reference counting - - can code in pseudo-Python and generate C code - - can also interface to existing C code - - should shield you from changes to the Python C api - - has become pretty popular within the Python community - - - Minuses: - - - Can write code in non-standard form which may become obsolete - - Not as flexible as manual wrapping - - Maintainers not easily adaptable to new features - -Thus: - -3) cython - fork of pyrex to allow needed features for SAGE - - - being considered as the standard scipy/numpy wrapping tool - - fast indexing support for arrays - -4) ctypes - - - Plusses: - - - part of Python standard library - - good for interfacing to existing sharable libraries, particularly - Windows DLLs - - avoids API/reference counting issues - - good numpy support: arrays have all these in their ctypes - attribute: :: - - a.ctypes.data a.ctypes.get_strides - a.ctypes.data_as a.ctypes.shape - a.ctypes.get_as_parameter a.ctypes.shape_as - a.ctypes.get_data a.ctypes.strides - a.ctypes.get_shape a.ctypes.strides_as - - - Minuses: - - - can't use for writing code to be turned into C extensions, only a wrapper - tool. - -5) SWIG (automatic wrapper generator) - - - Plusses: - - - around a long time - - multiple scripting language support - - C++ support - - Good for wrapping large (many functions) existing C libraries - - - Minuses: - - - generates lots of code between Python and the C code - - can cause performance problems that are nearly impossible to optimize - out - - interface files can be hard to write - - doesn't necessarily avoid reference counting issues or needing to know - APIs - -7) Weave - - - Plusses: - - - Phenomenal tool - - can turn many numpy expressions into C code - - dynamic compiling and loading of generated C code - - can embed pure C code in Python module and have weave extract, generate - interfaces and compile, etc.
- - - Minuses: - - - Future uncertain--lacks a champion - -8) Psyco - - - Plusses: - - - Turns pure python into efficient machine code through jit-like - optimizations - - very fast when it optimizes well - - - Minuses: - - - Only on intel (windows?) - - Doesn't do much for numpy? - -Interfacing to Fortran: ------------------------ -Fortran: Clear choice is f2py. (Pyfort is an older alternative, but not -supported any longer) - -Interfacing to C++: -------------------- - 1) CXX - 2) Boost.python - 3) SWIG - 4) Sage has used cython to wrap C++ (not pretty, but it can be done) - 5) SIP (used mainly in PyQT) - -""" diff --git a/pythonPackages/numpy/numpy/doc/performance.py b/pythonPackages/numpy/numpy/doc/performance.py deleted file mode 100755 index 1429e232ff..0000000000 --- a/pythonPackages/numpy/numpy/doc/performance.py +++ /dev/null @@ -1,9 +0,0 @@ -""" - -=========== -Performance -=========== - -Placeholder for Improving Performance documentation. - -""" diff --git a/pythonPackages/numpy/numpy/doc/structured_arrays.py b/pythonPackages/numpy/numpy/doc/structured_arrays.py deleted file mode 100755 index 21fdf87eac..0000000000 --- a/pythonPackages/numpy/numpy/doc/structured_arrays.py +++ /dev/null @@ -1,176 +0,0 @@ -""" -===================================== -Structured Arrays (aka Record Arrays) -===================================== - -Introduction -============ - -Numpy provides powerful capabilities to create arrays of structs or records. -These arrays permit one to manipulate the data by the structs or by fields of -the struct. A simple example will show what is meant.: :: - - >>> x = np.zeros((2,),dtype=('i4,f4,a10')) - >>> x[:] = [(1,2.,'Hello'),(2,3.,"World")] - >>> x - array([(1, 2.0, 'Hello'), (2, 3.0, 'World')], - dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')]) - -Here we have created a one-dimensional array of length 2. 
Each element of -this array is a record that contains three items, a 32-bit integer, a 32-bit -float, and a string of length 10 or less. If we index this array at the second -position we get the second record: :: - - >>> x[1] - (2,3.,"World") - -Conveniently, one can access any field of the array by indexing using the -string that names that field. In this case the fields have received the -default names 'f0', 'f1' and 'f2'. - - >>> y = x['f1'] - >>> y - array([ 2., 3.], dtype=float32) - >>> y[:] = 2*y - >>> y - array([ 4., 6.], dtype=float32) - >>> x - array([(1, 4.0, 'Hello'), (2, 6.0, 'World')], - dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')]) - -In these examples, y is a simple float array consisting of the 2nd field -in the record. But, rather than being a copy of the data in the structured -array, it is a view, i.e., it shares exactly the same memory locations. -Thus, when we updated this array by doubling its values, the structured -array shows the corresponding values as doubled as well. Likewise, if one -changes the record, the field view also changes: :: - - >>> x[1] = (-1,-1.,"Master") - >>> x - array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')], - dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')]) - >>> y - array([ 4., -1.], dtype=float32) - -Defining Structured Arrays -========================== - -One defines a structured array through the dtype object. There are -**several** alternative ways to define the fields of a record. Some of -these variants provide backward compatibility with Numeric, numarray, or -another module, and should not be used except for such purposes. These -will be so noted. One specifies record structure in -one of four alternative ways, using an argument (as supplied to a dtype -function keyword or a dtype object constructor itself). This -argument must be one of the following: 1) string, 2) tuple, 3) list, or -4) dictionary. Each of these is briefly described below. - -1) String argument (as used in the above examples). 
-In this case, the constructor expects a comma-separated list of type -specifiers, optionally with extra shape information. -The type specifiers can take 4 different forms: :: - - a) b1, i1, i2, i4, i8, u1, u2, u4, u8, f4, f8, c8, c16, a - (representing bytes, ints, unsigned ints, floats, complex and - fixed length strings of specified byte lengths) - b) int8,...,uint8,...,float32, float64, complex64, complex128 - (this time with bit sizes) - c) older Numeric/numarray type specifications (e.g. Float32). - Don't use these in new code! - d) Single character type specifiers (e.g., H for unsigned short ints). - Avoid using these unless you must. Details can be found in the - Numpy book. - -These different styles can be mixed within the same string (but why would you -want to do that?). Furthermore, each type specifier can be prefixed -with a repetition number, or a shape. In these cases an array -element is created, i.e., an array within a record. That array -is still referred to as a single field. An example: :: - - >>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64') - >>> x - array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), - ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), - ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])], - dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))]) - -Using strings to define the record structure precludes naming the -fields in the original definition. The names can -be changed as shown later, however. - -2) Tuple argument: The only relevant tuple case that applies to record -structures is when a structure is mapped to an existing data type. This -is done by pairing, in a tuple, an existing data type with a matching -dtype definition (using any of the variants being described here).
As -an example (using a definition using a list, so see 3) for further -details): :: - - >>> x = np.zeros(3, dtype=('i4',[('r','u1'), ('g','u1'), ('b','u1'), ('a','u1')])) - >>> x - array([0, 0, 0]) - >>> x['r'] - array([0, 0, 0], dtype=uint8) - -In this case, an array is produced that looks and acts like a simple int32 array, -but also has definitions for fields that use only one byte of the int32 (a bit -like Fortran equivalencing). - -3) List argument: In this case the record structure is defined with a list of -tuples. Each tuple has 2 or 3 elements specifying: 1) The name of the field -('' is permitted), 2) the type of the field, and 3) the shape (optional). -For example: - - >>> x = np.zeros(3, dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))]) - >>> x - array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]), - (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]), - (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])], - dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))]) - -4) Dictionary argument: two different forms are permitted. The first consists -of a dictionary with two required keys ('names' and 'formats'), each having an -equal sized list of values. The format list contains any type/shape specifier -allowed in other contexts. The names must be strings. There are two optional -keys: 'offsets' and 'titles'. Each must be a correspondingly matching list to -the required two where offsets contain integer offsets for each field, and -titles are objects containing metadata for each field (these do not have -to be strings), where the value of None is permitted. As an example: :: - - >>> x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']}) - >>> x - array([(0, 0.0), (0, 0.0), (0, 0.0)], - dtype=[('col1', '>i4'), ('col2', '>f4')]) - -The other dictionary form permitted is a dictionary of name keys with tuple -values specifying type, offset, and an optional title. 
- - >>> x = np.zeros(3, dtype={'col1':('i1',0,'title 1'), 'col2':('f4',1,'title 2')}) - >>> x - array([(0, 0.0), (0, 0.0), (0, 0.0)], - dtype=[(('title 1', 'col1'), '|i1'), (('title 2', 'col2'), '>f4')]) - -Accessing and modifying field names -=================================== - -The field names are an attribute of the dtype object defining the record structure. -For the last example: :: - - >>> x.dtype.names - ('col1', 'col2') - >>> x.dtype.names = ('x', 'y') - >>> x - array([(0, 0.0), (0, 0.0), (0, 0.0)], - dtype=[(('title 1', 'x'), '|i1'), (('title 2', 'y'), '>f4')]) - >>> x.dtype.names = ('x', 'y', 'z') # wrong number of names - : must replace all names at once with a sequence of length 2 - -Accessing field titles -==================================== - -The field titles provide a standard place to put associated info for fields. -They do not have to be strings. - - >>> x.dtype.fields['x'][2] - 'title 1' - -""" diff --git a/pythonPackages/numpy/numpy/doc/subclassing.py b/pythonPackages/numpy/numpy/doc/subclassing.py deleted file mode 100755 index de0338060c..0000000000 --- a/pythonPackages/numpy/numpy/doc/subclassing.py +++ /dev/null @@ -1,559 +0,0 @@ -""" -============================= -Subclassing ndarray in python -============================= - -Credits -------- - -This page is based with thanks on the wiki page on subclassing by Pierre -Gerard-Marchant - http://www.scipy.org/Subclasses. - -Introduction ------------- - -Subclassing ndarray is relatively simple, but it has some complications -compared to other Python objects. On this page we explain the machinery -that allows you to subclass ndarray, and the implications for -implementing a subclass. - -ndarrays and object creation -============================ - -Subclassing ndarray is complicated by the fact that new instances of -ndarray classes can come about in three different ways. These are: - -#. Explicit constructor call - as in ``MySubClass(params)``. 
This is - the usual route to Python instance creation. -#. View casting - casting an existing ndarray as a given subclass -#. New from template - creating a new instance from a template - instance. Examples include returning slices from a subclassed array, - creating return types from ufuncs, and copying arrays. See - :ref:`new-from-template` for more details - -The last two are characteristics of ndarrays - in order to support -things like array slicing. The complications of subclassing ndarray are -due to the mechanisms numpy has to support these latter two routes of -instance creation. - -.. _view-casting: - -View casting ------------- - -*View casting* is the standard ndarray mechanism by which you take an -ndarray of any subclass, and return a view of the array as another -(specified) subclass: - ->>> import numpy as np ->>> # create a completely useless ndarray subclass ->>> class C(np.ndarray): pass ->>> # create a standard ndarray ->>> arr = np.zeros((3,)) ->>> # take a view of it, as our useless subclass ->>> c_arr = arr.view(C) ->>> type(c_arr) -<class 'C'> - -.. _new-from-template: - -Creating new from template --------------------------- - -New instances of an ndarray subclass can also come about by a very -similar mechanism to :ref:`view-casting`, when numpy finds it needs to -create a new instance from a template instance. The most obvious place -this has to happen is when you are taking slices of subclassed arrays. -For example: - ->>> v = c_arr[1:] ->>> type(v) # the view is of type 'C' -<class 'C'> ->>> v is c_arr # but it's a new instance -False - -The slice is a *view* onto the original ``c_arr`` data. So, when we -take a view from the ndarray, we return a new ndarray, of the same -class, that points to the data in the original. - -There are other points in the use of ndarrays where we need such views, -such as copying arrays (``c_arr.copy()``), creating ufunc output arrays -(see also :ref:`array-wrap`), and reducing methods (like -``c_arr.mean()``).
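Both behaviours can be checked directly; a short runnable sketch using a do-nothing subclass like the ``C`` of the doctests:

```python
import numpy as np

class C(np.ndarray):
    """A completely useless ndarray subclass, as in the text."""
    pass

arr = np.zeros((3,))
c_arr = arr.view(C)          # view casting produces the requested class
assert type(c_arr) is C
assert c_arr.base is arr     # ...and the view shares arr's data buffer

v = c_arr[1:]                # slicing: new-from-template
assert type(v) is C          # the slice preserves the subclass...
assert v is not c_arr        # ...but is a distinct instance

# The view really does point at the original data:
c_arr[1] = 99.0
assert arr[1] == 99.0
print("view casting and new-from-template both preserve class C")
```

The `base` attribute check makes the "points to the data in the original" claim explicit: the view holds a reference to the array it was taken from.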
- -Relationship of view casting and new-from-template --------------------------------------------------- - -These paths both use the same machinery. We make the distinction here, -because they result in different input to your methods. Specifically, -:ref:`view-casting` means you have created a new instance of your array -type from any potential subclass of ndarray. :ref:`new-from-template` -means you have created a new instance of your class from a pre-existing -instance, allowing you - for example - to copy across attributes that -are particular to your subclass. - -Implications for subclassing ----------------------------- - -If we subclass ndarray, we need to deal not only with explicit -construction of our array type, but also :ref:`view-casting` or -:ref:`new-from-template`. Numpy has the machinery to do this, and it is -this machinery that makes subclassing slightly non-standard. - -There are two aspects to the machinery that ndarray uses to support -views and new-from-template in subclasses. - -The first is the use of the ``ndarray.__new__`` method for the main work -of object initialization, rather than the more usual ``__init__`` -method. The second is the use of the ``__array_finalize__`` method to -allow subclasses to clean up after the creation of views and new -instances from templates. - -A brief Python primer on ``__new__`` and ``__init__`` -===================================================== - -``__new__`` is a standard Python method, and, if present, is called -before ``__init__`` when we create a class instance. See the `python -__new__ documentation -`_ for more detail. - -For example, consider the following Python code: - -..
testcode:: - - class C(object): - def __new__(cls, *args): - print 'Cls in __new__:', cls - print 'Args in __new__:', args - return object.__new__(cls, *args) - - def __init__(self, *args): - print 'type(self) in __init__:', type(self) - print 'Args in __init__:', args - -meaning that we get: - ->>> c = C('hello') -Cls in __new__: <class 'C'> -Args in __new__: ('hello',) -type(self) in __init__: <class 'C'> -Args in __init__: ('hello',) - -When we call ``C('hello')``, the ``__new__`` method gets its own class -as first argument, and the passed argument, which is the string -``'hello'``. After python calls ``__new__``, it usually (see below) -calls our ``__init__`` method, with the output of ``__new__`` as the -first argument (now a class instance), and the passed arguments -following. - -As you can see, the object can be initialized in the ``__new__`` -method or the ``__init__`` method, or both, and in fact ndarray does -not have an ``__init__`` method, because all the initialization is -done in the ``__new__`` method. - -Why use ``__new__`` rather than just the usual ``__init__``? Because -in some cases, as for ndarray, we want to be able to return an object -of some other class. Consider the following: - -.. testcode:: - - class D(C): - def __new__(cls, *args): - print 'D cls is:', cls - print 'D args in __new__:', args - return C.__new__(C, *args) - - def __init__(self, *args): - # we never get here - print 'In D __init__' - -meaning that: - ->>> obj = D('hello') -D cls is: <class 'D'> -D args in __new__: ('hello',) -Cls in __new__: <class 'C'> -Args in __new__: ('hello',) ->>> type(obj) -<class 'C'> - -The definition of ``C`` is the same as before, but for ``D``, the -``__new__`` method returns an instance of class ``C`` rather than -``D``. Note that the ``__init__`` method of ``D`` does not get -called. In general, when the ``__new__`` method returns an object of -class other than the class in which it is defined, the ``__init__`` -method of that class is not called.
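The doctests above are written for Python 2; the rule they illustrate (``__init__`` is skipped whenever ``__new__`` returns an instance of some other class) can be verified with a compact Python 3 sketch. The class names here are illustrative, not part of any API:

```python
class C:
    def __init__(self, *args):
        # Record that __init__ ran, so we can detect when it is skipped.
        self.init_called = True

class D(C):
    def __new__(cls, *args):
        # Return an instance of C, not D.  Because the result is not an
        # instance of D, Python will NOT call __init__ on it.
        return object.__new__(C)

obj = D('hello')
assert type(obj) is C                    # we got a C back, not a D
assert not hasattr(obj, 'init_called')   # and __init__ never ran
```

This is the same mechanism that lets ``ndarray.__new__(subtype, ...)`` hand back a fully formed subclass instance without any ``__init__`` step.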
- -This is how subclasses of the ndarray class are able to return views -that preserve the class type. When taking a view, the standard -ndarray machinery creates the new ndarray object with something -like:: - - obj = ndarray.__new__(subtype, shape, ... - -where ``subtype`` is the subclass. Thus the returned view is of the -same class as the subclass, rather than being of class ``ndarray``. - -That solves the problem of returning views of the same type, but now -we have a new problem. The machinery of ndarray can set the class -this way, in its standard methods for taking views, but the ndarray -``__new__`` method knows nothing of what we have done in our own -``__new__`` method in order to set attributes, and so on. (Aside - -why not call ``obj = subtype.__new__(...`` then? Because we may not -have a ``__new__`` method with the same call signature). - -The role of ``__array_finalize__`` -================================== - -``__array_finalize__`` is the mechanism that numpy provides to allow -subclasses to handle the various ways that new instances get created. - -Remember that subclass instances can come about in these three ways: - -#. explicit constructor call (``obj = MySubClass(params)``). This will - call the usual sequence of ``MySubClass.__new__`` then (if it exists) - ``MySubClass.__init__``. -#. :ref:`view-casting` -#. :ref:`new-from-template` - -Our ``MySubClass.__new__`` method only gets called in the case of the -explicit constructor call, so we can't rely on ``MySubClass.__new__`` or -``MySubClass.__init__`` to deal with the view casting and -new-from-template. It turns out that ``MySubClass.__array_finalize__`` -*does* get called for all three methods of object creation, so this is -where our object creation housekeeping usually goes. - -* For the explicit constructor call, our subclass will need to create a - new ndarray instance of its own class.
In practice this means that - we, the authors of the code, will need to make a call to - ``ndarray.__new__(MySubClass,...)``, or do view casting of an existing - array (see below) -* For view casting and new-from-template, the equivalent of - ``ndarray.__new__(MySubClass,...`` is called, at the C level. - -The arguments that ``__array_finalize__`` receives differ for the three -methods of instance creation above. - -The following code allows us to look at the call sequences and arguments: - -.. testcode:: - - import numpy as np - - class C(np.ndarray): - def __new__(cls, *args, **kwargs): - print 'In __new__ with class %s' % cls - return np.ndarray.__new__(cls, *args, **kwargs) - - def __init__(self, *args, **kwargs): - # in practice you probably will not need or want an __init__ - # method for your subclass - print 'In __init__ with class %s' % self.__class__ - - def __array_finalize__(self, obj): - print 'In array_finalize:' - print ' self type is %s' % type(self) - print ' obj type is %s' % type(obj) - - -Now: - ->>> # Explicit constructor ->>> c = C((10,)) -In __new__ with class <class 'C'> -In array_finalize: - self type is <class 'C'> - obj type is <type 'NoneType'> -In __init__ with class <class 'C'> ->>> # View casting ->>> a = np.arange(10) ->>> cast_a = a.view(C) -In array_finalize: - self type is <class 'C'> - obj type is <type 'numpy.ndarray'> ->>> # Slicing (example of new-from-template) ->>> cv = c[:1] -In array_finalize: - self type is <class 'C'> - obj type is <class 'C'> - -The signature of ``__array_finalize__`` is:: - - def __array_finalize__(self, obj): - -``ndarray.__new__`` passes ``__array_finalize__`` the new object, of our -own class (``self``) as well as the object from which the view has been -taken (``obj``).
As you can see from the output above, the ``self`` is -always a newly created instance of our subclass, and the type of ``obj`` -differs for the three instance creation methods: - -* When called from the explicit constructor, ``obj`` is ``None`` -* When called from view casting, ``obj`` can be an instance of any - subclass of ndarray, including our own. -* When called in new-from-template, ``obj`` is another instance of our - own subclass, that we might use to update the new ``self`` instance. - -Because ``__array_finalize__`` is the only method that always sees new -instances being created, it is the sensible place to fill in instance -defaults for new object attributes, among other tasks. - -This may be clearer with an example. - -Simple example - adding an extra attribute to ndarray ------------------------------------------------------ - -.. testcode:: - - import numpy as np - - class InfoArray(np.ndarray): - - def __new__(subtype, shape, dtype=float, buffer=None, offset=0, - strides=None, order=None, info=None): - # Create the ndarray instance of our type, given the usual - # ndarray input arguments. This will call the standard - # ndarray constructor, but return an object of our type. - # It also triggers a call to InfoArray.__array_finalize__ - obj = np.ndarray.__new__(subtype, shape, dtype, buffer, offset, strides, - order) - # set the new 'info' attribute to the value passed - obj.info = info - # Finally, we must return the newly created object: - return obj - - def __array_finalize__(self, obj): - # ``self`` is a new object resulting from - # ndarray.__new__(InfoArray, ...), therefore it only has - # attributes that the ndarray.__new__ constructor gave it - - # i.e. those of a standard ndarray. - # - # We could have got to the ndarray.__new__ call in 3 ways: - # From an explicit constructor - e.g. 
InfoArray():
-          #    obj is None
-          #    (we're in the middle of the InfoArray.__new__
-          #    constructor, and self.info will be set when we return to
-          #    InfoArray.__new__)
-          if obj is None: return
-          # From view casting - e.g. arr.view(InfoArray):
-          #    obj is arr
-          #    (type(obj) can be InfoArray)
-          # From new-from-template - e.g. infoarr[:3]
-          #    type(obj) is InfoArray
-          #
-          # Note that it is here, rather than in the __new__ method,
-          # that we set the default value for 'info', because this
-          # method sees all creation of default objects - with the
-          # InfoArray.__new__ constructor, but also with
-          # arr.view(InfoArray).
-          self.info = getattr(obj, 'info', None)
-          # We do not need to return anything
-
-
-Using the object looks like this:
-
-  >>> obj = InfoArray(shape=(3,)) # explicit constructor
-  >>> type(obj)
-  <class 'InfoArray'>
-  >>> obj.info is None
-  True
-  >>> obj = InfoArray(shape=(3,), info='information')
-  >>> obj.info
-  'information'
-  >>> v = obj[1:] # new-from-template - here - slicing
-  >>> type(v)
-  <class 'InfoArray'>
-  >>> v.info
-  'information'
-  >>> arr = np.arange(10)
-  >>> cast_arr = arr.view(InfoArray) # view casting
-  >>> type(cast_arr)
-  <class 'InfoArray'>
-  >>> cast_arr.info is None
-  True
-
-This class isn't very useful, because it has the same constructor as the
-bare ndarray object, including passing in buffers and shapes and so on.
-We would probably prefer the constructor to be able to take an already
-formed ndarray from the usual numpy calls to ``np.array`` and return an
-object.
-
-Slightly more realistic example - attribute added to existing array
--------------------------------------------------------------------
-
-Here is a class that takes a standard ndarray that already exists, casts
-as our type, and adds an extra attribute.
-
-.. 
testcode::
-
-  import numpy as np
-
-  class RealisticInfoArray(np.ndarray):
-
-      def __new__(cls, input_array, info=None):
-          # Input array is an already formed ndarray instance
-          # We first cast to be our class type
-          obj = np.asarray(input_array).view(cls)
-          # add the new attribute to the created instance
-          obj.info = info
-          # Finally, we must return the newly created object:
-          return obj
-
-      def __array_finalize__(self, obj):
-          # see InfoArray.__array_finalize__ for comments
-          if obj is None: return
-          self.info = getattr(obj, 'info', None)
-
-
-So:
-
-  >>> arr = np.arange(5)
-  >>> obj = RealisticInfoArray(arr, info='information')
-  >>> type(obj)
-  <class 'RealisticInfoArray'>
-  >>> obj.info
-  'information'
-  >>> v = obj[1:]
-  >>> type(v)
-  <class 'RealisticInfoArray'>
-  >>> v.info
-  'information'
-
-.. _array-wrap:
-
-``__array_wrap__`` for ufuncs
--------------------------------------------------------
-
-``__array_wrap__`` gets called at the end of numpy ufuncs and other numpy
-functions, to allow a subclass to set the type of the return value
-and update attributes and metadata.  Let's show how this works with an example.
-First we make the same subclass as above, but with a different name and
-some print statements:
-
-.. 
testcode:: - - import numpy as np - - class MySubClass(np.ndarray): - - def __new__(cls, input_array, info=None): - obj = np.asarray(input_array).view(cls) - obj.info = info - return obj - - def __array_finalize__(self, obj): - print 'In __array_finalize__:' - print ' self is %s' % repr(self) - print ' obj is %s' % repr(obj) - if obj is None: return - self.info = getattr(obj, 'info', None) - - def __array_wrap__(self, out_arr, context=None): - print 'In __array_wrap__:' - print ' self is %s' % repr(self) - print ' arr is %s' % repr(out_arr) - # then just call the parent - return np.ndarray.__array_wrap__(self, out_arr, context) - -We run a ufunc on an instance of our new array: - ->>> obj = MySubClass(np.arange(5), info='spam') -In __array_finalize__: - self is MySubClass([0, 1, 2, 3, 4]) - obj is array([0, 1, 2, 3, 4]) ->>> arr2 = np.arange(5)+1 ->>> ret = np.add(arr2, obj) -In __array_wrap__: - self is MySubClass([0, 1, 2, 3, 4]) - arr is array([1, 3, 5, 7, 9]) -In __array_finalize__: - self is MySubClass([1, 3, 5, 7, 9]) - obj is MySubClass([0, 1, 2, 3, 4]) ->>> ret -MySubClass([1, 3, 5, 7, 9]) ->>> ret.info -'spam' - -Note that the ufunc (``np.add``) has called the ``__array_wrap__`` method of the -input with the highest ``__array_priority__`` value, in this case -``MySubClass.__array_wrap__``, with arguments ``self`` as ``obj``, and -``out_arr`` as the (ndarray) result of the addition. In turn, the -default ``__array_wrap__`` (``ndarray.__array_wrap__``) has cast the -result to class ``MySubClass``, and called ``__array_finalize__`` - -hence the copying of the ``info`` attribute. This has all happened at the C level. - -But, we could do anything we wanted: - -.. 
testcode::
-
-  class SillySubClass(np.ndarray):
-
-      def __array_wrap__(self, arr, context=None):
-          return 'I lost your data'
-
->>> arr1 = np.arange(5)
->>> obj = arr1.view(SillySubClass)
->>> arr2 = np.arange(5)
->>> ret = np.multiply(obj, arr2)
->>> ret
-'I lost your data'
-
-So, by defining a specific ``__array_wrap__`` method for our subclass,
-we can tweak the output from ufuncs. The ``__array_wrap__`` method
-requires ``self``, then an argument - which is the result of the ufunc -
-and an optional parameter *context*. This parameter is passed by some
-ufuncs as a 3-element tuple: (name of the ufunc, arguments of the ufunc,
-domain of the ufunc). ``__array_wrap__`` should return an instance of
-its containing class.  See the masked array subclass for an
-implementation.
-
-In addition to ``__array_wrap__``, which is called on the way out of the
-ufunc, there is also an ``__array_prepare__`` method which is called on
-the way into the ufunc, after the output arrays are created but before any
-computation has been performed. The default implementation does nothing
-but pass through the array. ``__array_prepare__`` should not attempt to
-access the array data or resize the array, it is intended for setting the
-output array type, updating attributes and metadata, and performing any
-checks based on the input that may be desired before computation begins.
-Like ``__array_wrap__``, ``__array_prepare__`` must return an ndarray or
-subclass thereof or raise an error.
-
-Extra gotchas - custom ``__del__`` methods and ndarray.base
------------------------------------------------------------
-
-One of the problems that ndarray solves is keeping track of memory
-ownership of ndarrays and their views.  Consider the case where we have
-created an ndarray, ``arr`` and have taken a slice with ``v = arr[1:]``.
-The two objects are looking at the same memory.
Numpy keeps track of
-where the data came from for a particular array or view, with the
-``base`` attribute:
-
->>> # A normal ndarray, that owns its own data
->>> arr = np.zeros((4,))
->>> # In this case, base is None
->>> arr.base is None
-True
->>> # We take a view
->>> v1 = arr[1:]
->>> # base now points to the array that it derived from
->>> v1.base is arr
-True
->>> # Take a view of a view
->>> v2 = v1[1:]
->>> # base points to the view it derived from
->>> v2.base is v1
-True
-
-In general, if the array owns its own memory, as for ``arr`` in this
-case, then ``arr.base`` will be None - there are some exceptions to this
-- see the numpy book for more details.
-
-The ``base`` attribute is useful in being able to tell whether we have
-a view or the original array.  This in turn can be useful if we need
-to know whether or not to do some specific cleanup when the subclassed
-array is deleted.  For example, we may only want to do the cleanup if
-the original array is deleted, but not the views.  For an example of
-how this can work, have a look at the ``memmap`` class in
-``numpy.core``.
-
-
-"""
diff --git a/pythonPackages/numpy/numpy/doc/ufuncs.py b/pythonPackages/numpy/numpy/doc/ufuncs.py
deleted file mode 100755
index e85b477635..0000000000
--- a/pythonPackages/numpy/numpy/doc/ufuncs.py
+++ /dev/null
@@ -1,137 +0,0 @@
-"""
-===================
-Universal Functions
-===================
-
-Ufuncs are, generally speaking, mathematical functions or operations that are
-applied element-by-element to the contents of an array. That is, the result
-in each output array element only depends on the value in the corresponding
-input array (or arrays) and on no other array elements. Numpy comes with a
-large suite of ufuncs, and scipy extends that suite substantially. The simplest
-example is the addition operator: ::
-
-  >>> np.array([0,2,3,4]) + np.array([1,1,-1,2])
-  array([1, 3, 2, 6])
-
-The ufunc module lists all the available ufuncs in numpy.
Documentation on
-the specific ufuncs may be found in those modules. This documentation is
-intended to address the more general aspects of ufuncs common to most of
-them. All of the ufuncs that make use of Python operators (e.g., +, -, etc.)
-have equivalent functions defined (e.g., add() for +).
-
-Type coercion
-=============
-
-What happens when a binary operator (e.g., +,-,\\*,/, etc) deals with arrays of
-two different types? What is the type of the result? Typically, the result is
-the higher of the two types. For example: ::
-
-  float32 + float64 -> float64
-  int8 + int32 -> int32
-  int16 + float32 -> float32
-  float32 + complex64 -> complex64
-
-There are some less obvious cases generally involving mixes of types
-(e.g. uints, ints and floats) where equal bit sizes for each are not
-capable of saving all the information in a different type of equivalent
-bit size. Some examples are int32 vs float32 or uint32 vs int32.
-Generally, the result is the higher type of larger size than both
-(if available). So: ::
-
-  int32 + float32 -> float64
-  uint32 + int32 -> int64
-
-Finally, the type coercion behavior when expressions involve Python
-scalars is different than that seen for arrays. Since Python has a
-limited number of types, combining a Python int with a dtype=np.int8
-array does not coerce to the higher type but instead, the type of the
-array prevails. So the rule for Python scalars combined with arrays is
-that the result will be of the array-equivalent type of the Python
-scalar if the Python scalar is of a higher 'kind' than the array (e.g.,
-float vs. int); otherwise the resultant type will be that of the array.
-For example: ::
-
-  Python int + int8 -> int8
-  Python float + int8 -> float64
-
-ufunc methods
-=============
-
-Binary ufuncs support 4 methods.
-
-**.reduce(arr)** applies the binary operator to elements of the array in
-  sequence.
For example: ::
-
-  >>> np.add.reduce(np.arange(10))  # adds all elements of array
-  45
-
-For multidimensional arrays, the first dimension is reduced by default: ::
-
-  >>> np.add.reduce(np.arange(10).reshape(2,5))
-  array([ 5,  7,  9, 11, 13])
-
-The axis keyword can be used to specify different axes to reduce: ::
-
-  >>> np.add.reduce(np.arange(10).reshape(2,5),axis=1)
-  array([10, 35])
-
-**.accumulate(arr)** applies the binary operator and generates an
-equivalently shaped array that includes the accumulated amount for each
-element of the array. A couple of examples: ::
-
-  >>> np.add.accumulate(np.arange(10))
-  array([ 0,  1,  3,  6, 10, 15, 21, 28, 36, 45])
-  >>> np.multiply.accumulate(np.arange(1,9))
-  array([    1,     2,     6,    24,   120,   720,  5040, 40320])
-
-The behavior for multidimensional arrays is the same as for .reduce(),
-as is the use of the axis keyword.
-
-**.reduceat(arr,indices)** allows one to apply reduce to selected parts
-  of an array. It is a difficult method to understand. See the documentation
-  at:
-
-**.outer(arr1,arr2)** generates an outer operation on the two arrays arr1 and
-  arr2. It will work on multidimensional arrays (the shape of the result is
-  the concatenation of the two input shapes.): ::
-
-  >>> np.multiply.outer(np.arange(3),np.arange(4))
-  array([[0, 0, 0, 0],
-         [0, 1, 2, 3],
-         [0, 2, 4, 6]])
-
-Output arguments
-================
-
-All ufuncs accept an optional output array. The array must be of the expected
-output shape. Beware that if the type of the output array is of a different
-(and lower) type than the output result, the results may be silently truncated
-or otherwise corrupted in the downcast to the lower type. This usage is useful
-when one wants to avoid creating large temporary arrays and instead allows one
-to reuse the same array memory repeatedly (at the expense of not being able to
-use more convenient operator notation in expressions).
Note that when the -output argument is used, the ufunc still returns a reference to the result. - - >>> x = np.arange(2) - >>> np.add(np.arange(2),np.arange(2.),x) - array([0, 2]) - >>> x - array([0, 2]) - -and & or as ufuncs -================== - -Invariably people try to use the python 'and' and 'or' as logical operators -(and quite understandably). But these operators do not behave as normal -operators since Python treats these quite differently. They cannot be -overloaded with array equivalents. Thus using 'and' or 'or' with an array -results in an error. There are two alternatives: - - 1) use the ufunc functions logical_and() and logical_or(). - 2) use the bitwise operators & and \\|. The drawback of these is that if - the arguments to these operators are not boolean arrays, the result is - likely incorrect. On the other hand, most usages of logical_and and - logical_or are with boolean arrays. As long as one is careful, this is - a convenient way to apply these operators. - -""" diff --git a/pythonPackages/numpy/numpy/dual.py b/pythonPackages/numpy/numpy/dual.py deleted file mode 100755 index 3c863bf6f5..0000000000 --- a/pythonPackages/numpy/numpy/dual.py +++ /dev/null @@ -1,69 +0,0 @@ -""" -Aliases for functions which may be accelerated by Scipy. - -Scipy_ can be built to use accelerated or otherwise improved libraries -for FFTs, linear algebra, and special functions. This module allows -developers to transparently support these accelerated functions when -scipy is available but still support users who have only installed -Numpy. - -.. _Scipy : http://www.scipy.org - -""" -# This module should be used for functions both in numpy and scipy if -# you want to use the numpy version if available but the scipy version -# otherwise. 
-# Usage --- from numpy.dual import fft, inv - -__all__ = ['fft','ifft','fftn','ifftn','fft2','ifft2', - 'norm','inv','svd','solve','det','eig','eigvals', - 'eigh','eigvalsh','lstsq', 'pinv','cholesky','i0'] - -import numpy.linalg as linpkg -import numpy.fft as fftpkg -from numpy.lib import i0 -import sys - - -fft = fftpkg.fft -ifft = fftpkg.ifft -fftn = fftpkg.fftn -ifftn = fftpkg.ifftn -fft2 = fftpkg.fft2 -ifft2 = fftpkg.ifft2 - -norm = linpkg.norm -inv = linpkg.inv -svd = linpkg.svd -solve = linpkg.solve -det = linpkg.det -eig = linpkg.eig -eigvals = linpkg.eigvals -eigh = linpkg.eigh -eigvalsh = linpkg.eigvalsh -lstsq = linpkg.lstsq -pinv = linpkg.pinv -cholesky = linpkg.cholesky - -_restore_dict = {} - -def register_func(name, func): - if name not in __all__: - raise ValueError, "%s not a dual function." % name - f = sys._getframe(0).f_globals - _restore_dict[name] = f[name] - f[name] = func - -def restore_func(name): - if name not in __all__: - raise ValueError, "%s not a dual function." % name - try: - val = _restore_dict[name] - except KeyError: - return - else: - sys._getframe(0).f_globals[name] = val - -def restore_all(): - for name in _restore_dict.keys(): - restore_func(name) diff --git a/pythonPackages/numpy/numpy/f2py/__init__.py b/pythonPackages/numpy/numpy/f2py/__init__.py deleted file mode 100755 index 220cb3d879..0000000000 --- a/pythonPackages/numpy/numpy/f2py/__init__.py +++ /dev/null @@ -1,48 +0,0 @@ -#!/usr/bin/env python - -__all__ = ['run_main','compile','f2py_testing'] - -import os -import sys -import commands - -import f2py2e -import f2py_testing -import diagnose - -from info import __doc__ - -run_main = f2py2e.run_main -main = f2py2e.main - -def compile(source, - modulename = 'untitled', - extra_args = '', - verbose = 1, - source_fn = None - ): - ''' Build extension module from processing source with f2py. - Read the source of this function for more information. 
- ''' - from numpy.distutils.exec_command import exec_command - import tempfile - if source_fn is None: - fname = os.path.join(tempfile.mktemp()+'.f') - else: - fname = source_fn - - f = open(fname,'w') - f.write(source) - f.close() - - args = ' -c -m %s %s %s'%(modulename,fname,extra_args) - c = '%s -c "import numpy.f2py as f2py2e;f2py2e.main()" %s' %(sys.executable,args) - s,o = exec_command(c) - if source_fn is None: - try: os.remove(fname) - except OSError: pass - return s - -from numpy.testing import Tester -test = Tester().test -bench = Tester().bench diff --git a/pythonPackages/numpy/numpy/f2py/__version__.py b/pythonPackages/numpy/numpy/f2py/__version__.py deleted file mode 100755 index cc30f8380d..0000000000 --- a/pythonPackages/numpy/numpy/f2py/__version__.py +++ /dev/null @@ -1,8 +0,0 @@ -major = 1 - -try: - from __svn_version__ import version - version_info = (major, version) - version = '%s_%s' % version_info -except (ImportError, ValueError): - version = str(major) diff --git a/pythonPackages/numpy/numpy/f2py/auxfuncs.py b/pythonPackages/numpy/numpy/f2py/auxfuncs.py deleted file mode 100755 index ac95669b78..0000000000 --- a/pythonPackages/numpy/numpy/f2py/auxfuncs.py +++ /dev/null @@ -1,695 +0,0 @@ -#!/usr/bin/env python -""" - -Auxiliary functions for f2py2e. - -Copyright 1999,2000 Pearu Peterson all rights reserved, -Pearu Peterson -Permission to use, modify, and distribute this software is given under the -terms of the NumPy (BSD style) LICENSE. - - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. 
-$Date: 2005/07/24 19:01:55 $ -Pearu Peterson -""" -__version__ = "$Revision: 1.65 $"[10:-1] - -import __version__ -f2py_version = __version__.version - -import pprint -import sys -import types -import cfuncs - - -errmess=sys.stderr.write -#outmess=sys.stdout.write -show=pprint.pprint - -options={} -debugoptions=[] -wrapfuncs = 1 - -if sys.version_info[0] >= 3: - from functools import reduce - -def outmess(t): - if options.get('verbose',1): - sys.stdout.write(t) - -def debugcapi(var): - return 'capi' in debugoptions - -def _isstring(var): - return 'typespec' in var and var['typespec']=='character' and (not isexternal(var)) - -def isstring(var): - return _isstring(var) and not isarray(var) - -def ischaracter(var): - return isstring(var) and 'charselector' not in var - -def isstringarray(var): - return isarray(var) and _isstring(var) - -def isarrayofstrings(var): - # leaving out '*' for now so that - # `character*(*) a(m)` and `character a(m,*)` - # are treated differently. Luckily `character**` is illegal. 
- return isstringarray(var) and var['dimension'][-1]=='(*)' - -def isarray(var): - return 'dimension' in var and (not isexternal(var)) - -def isscalar(var): - return not (isarray(var) or isstring(var) or isexternal(var)) - -def iscomplex(var): - return isscalar(var) and var.get('typespec') in ['complex','double complex'] - -def islogical(var): - return isscalar(var) and var.get('typespec')=='logical' - -def isinteger(var): - return isscalar(var) and var.get('typespec')=='integer' - -def isreal(var): - return isscalar(var) and var.get('typespec')=='real' - -def get_kind(var): - try: - return var['kindselector']['*'] - except KeyError: - try: - return var['kindselector']['kind'] - except KeyError: - pass - -def islong_long(var): - if not isscalar(var): - return 0 - if var.get('typespec') not in ['integer','logical']: - return 0 - return get_kind(var)=='8' - -def isunsigned_char(var): - if not isscalar(var): - return 0 - if var.get('typespec') != 'integer': - return 0 - return get_kind(var)=='-1' - -def isunsigned_short(var): - if not isscalar(var): - return 0 - if var.get('typespec') != 'integer': - return 0 - return get_kind(var)=='-2' - -def isunsigned(var): - if not isscalar(var): - return 0 - if var.get('typespec') != 'integer': - return 0 - return get_kind(var)=='-4' - -def isunsigned_long_long(var): - if not isscalar(var): - return 0 - if var.get('typespec') != 'integer': - return 0 - return get_kind(var)=='-8' - -def isdouble(var): - if not isscalar(var): - return 0 - if not var.get('typespec')=='real': - return 0 - return get_kind(var)=='8' - -def islong_double(var): - if not isscalar(var): - return 0 - if not var.get('typespec')=='real': - return 0 - return get_kind(var)=='16' - -def islong_complex(var): - if not iscomplex(var): - return 0 - return get_kind(var)=='32' - -def iscomplexarray(var): - return isarray(var) and var.get('typespec') in ['complex','double complex'] - -def isint1array(var): - return isarray(var) and var.get('typespec')=='integer' \ - 
and get_kind(var)=='1' - -def isunsigned_chararray(var): - return isarray(var) and var.get('typespec') in ['integer', 'logical']\ - and get_kind(var)=='-1' - -def isunsigned_shortarray(var): - return isarray(var) and var.get('typespec') in ['integer', 'logical']\ - and get_kind(var)=='-2' - -def isunsignedarray(var): - return isarray(var) and var.get('typespec') in ['integer', 'logical']\ - and get_kind(var)=='-4' - -def isunsigned_long_longarray(var): - return isarray(var) and var.get('typespec') in ['integer', 'logical']\ - and get_kind(var)=='-8' - -def issigned_chararray(var): - return isarray(var) and var.get('typespec') in ['integer', 'logical']\ - and get_kind(var)=='1' - -def issigned_shortarray(var): - return isarray(var) and var.get('typespec') in ['integer', 'logical']\ - and get_kind(var)=='2' - -def issigned_array(var): - return isarray(var) and var.get('typespec') in ['integer', 'logical']\ - and get_kind(var)=='4' - -def issigned_long_longarray(var): - return isarray(var) and var.get('typespec') in ['integer', 'logical']\ - and get_kind(var)=='8' - -def isallocatable(var): - return 'attrspec' in var and 'allocatable' in var['attrspec'] - -def ismutable(var): - return not (not 'dimension' in var or isstring(var)) - -def ismoduleroutine(rout): - return 'modulename' in rout - -def ismodule(rout): - return ('block' in rout and 'module'==rout['block']) - -def isfunction(rout): - return ('block' in rout and 'function'==rout['block']) - -#def isfunction_wrap(rout): -# return wrapfuncs and (iscomplexfunction(rout) or isstringfunction(rout)) and (not isexternal(rout)) - -def isfunction_wrap(rout): - if isintent_c(rout): - return 0 - return wrapfuncs and isfunction(rout) and (not isexternal(rout)) - -def issubroutine(rout): - return ('block' in rout and 'subroutine'==rout['block']) - -def isroutine(rout): - return isfunction(rout) or issubroutine(rout) - -def islogicalfunction(rout): - if not isfunction(rout): - return 0 - if 'result' in rout: - 
a=rout['result'] - else: - a=rout['name'] - if a in rout['vars']: - return islogical(rout['vars'][a]) - return 0 - -def islong_longfunction(rout): - if not isfunction(rout): - return 0 - if 'result' in rout: - a=rout['result'] - else: - a=rout['name'] - if a in rout['vars']: - return islong_long(rout['vars'][a]) - return 0 - -def islong_doublefunction(rout): - if not isfunction(rout): - return 0 - if 'result' in rout: - a=rout['result'] - else: - a=rout['name'] - if a in rout['vars']: - return islong_double(rout['vars'][a]) - return 0 - -def iscomplexfunction(rout): - if not isfunction(rout): - return 0 - if 'result' in rout: - a=rout['result'] - else: - a=rout['name'] - if a in rout['vars']: - return iscomplex(rout['vars'][a]) - return 0 - -def iscomplexfunction_warn(rout): - if iscomplexfunction(rout): - outmess("""\ - ************************************************************** - Warning: code with a function returning complex value - may not work correctly with your Fortran compiler. - Run the following test before using it in your applications: - $(f2py install dir)/test-site/{b/runme_scalar,e/runme} - When using GNU gcc/g77 compilers, codes should work correctly. 
- **************************************************************\n""") - return 1 - return 0 - -def isstringfunction(rout): - if not isfunction(rout): - return 0 - if 'result' in rout: - a=rout['result'] - else: - a=rout['name'] - if a in rout['vars']: - return isstring(rout['vars'][a]) - return 0 - -def hasexternals(rout): - return 'externals' in rout and rout['externals'] - -def isthreadsafe(rout): - return 'f2pyenhancements' in rout and 'threadsafe' in rout['f2pyenhancements'] - -def hasvariables(rout): - return 'vars' in rout and rout['vars'] - -def isoptional(var): - return ('attrspec' in var and 'optional' in var['attrspec'] and 'required' not in var['attrspec']) and isintent_nothide(var) - -def isexternal(var): - return ('attrspec' in var and 'external' in var['attrspec']) - -def isrequired(var): - return not isoptional(var) and isintent_nothide(var) - -def isintent_in(var): - if 'intent' not in var: - return 1 - if 'hide' in var['intent']: - return 0 - if 'inplace' in var['intent']: - return 0 - if 'in' in var['intent']: - return 1 - if 'out' in var['intent']: - return 0 - if 'inout' in var['intent']: - return 0 - if 'outin' in var['intent']: - return 0 - return 1 - -def isintent_inout(var): - return 'intent' in var and ('inout' in var['intent'] or 'outin' in var['intent']) and 'in' not in var['intent'] and 'hide' not in var['intent'] and 'inplace' not in var['intent'] - -def isintent_out(var): - return 'out' in var.get('intent',[]) - -def isintent_hide(var): - return ('intent' in var and ('hide' in var['intent'] or ('out' in var['intent'] and 'in' not in var['intent'] and (not l_or(isintent_inout,isintent_inplace)(var))))) - -def isintent_nothide(var): - return not isintent_hide(var) - -def isintent_c(var): - return 'c' in var.get('intent',[]) - -# def isintent_f(var): -# return not isintent_c(var) - -def isintent_cache(var): - return 'cache' in var.get('intent',[]) - -def isintent_copy(var): - return 'copy' in var.get('intent',[]) - -def 
isintent_overwrite(var): - return 'overwrite' in var.get('intent',[]) - -def isintent_callback(var): - return 'callback' in var.get('intent',[]) - -def isintent_inplace(var): - return 'inplace' in var.get('intent',[]) - -def isintent_aux(var): - return 'aux' in var.get('intent',[]) - -def isintent_aligned4(var): - return 'aligned4' in var.get('intent',[]) -def isintent_aligned8(var): - return 'aligned8' in var.get('intent',[]) -def isintent_aligned16(var): - return 'aligned16' in var.get('intent',[]) - -isintent_dict = {isintent_in:'INTENT_IN',isintent_inout:'INTENT_INOUT', - isintent_out:'INTENT_OUT',isintent_hide:'INTENT_HIDE', - isintent_cache:'INTENT_CACHE', - isintent_c:'INTENT_C',isoptional:'OPTIONAL', - isintent_inplace:'INTENT_INPLACE', - isintent_aligned4:'INTENT_ALIGNED4', - isintent_aligned8:'INTENT_ALIGNED8', - isintent_aligned16:'INTENT_ALIGNED16', - } - -def isprivate(var): - return 'attrspec' in var and 'private' in var['attrspec'] - -def hasinitvalue(var): - return '=' in var - -def hasinitvalueasstring(var): - if not hasinitvalue(var): - return 0 - return var['='][0] in ['"',"'"] - -def hasnote(var): - return 'note' in var - -def hasresultnote(rout): - if not isfunction(rout): - return 0 - if 'result' in rout: - a=rout['result'] - else: - a=rout['name'] - if a in rout['vars']: - return hasnote(rout['vars'][a]) - return 0 - -def hascommon(rout): - return 'common' in rout - -def containscommon(rout): - if hascommon(rout): - return 1 - if hasbody(rout): - for b in rout['body']: - if containscommon(b): - return 1 - return 0 - -def containsmodule(block): - if ismodule(block): - return 1 - if not hasbody(block): - return 0 - for b in block['body']: - if containsmodule(b): - return 1 - return 0 - -def hasbody(rout): - return 'body' in rout - -def hascallstatement(rout): - return getcallstatement(rout) is not None - -def istrue(var): - return 1 - -def isfalse(var): - return 0 - -class F2PYError(Exception): - pass - -class throw_error: - def 
__init__(self,mess): - self.mess = mess - def __call__(self,var): - mess = '\n\n var = %s\n Message: %s\n' % (var,self.mess) - raise F2PYError,mess - -def l_and(*f): - l,l2='lambda v',[] - for i in range(len(f)): - l='%s,f%d=f[%d]'%(l,i,i) - l2.append('f%d(v)'%(i)) - return eval('%s:%s'%(l,' and '.join(l2))) - -def l_or(*f): - l,l2='lambda v',[] - for i in range(len(f)): - l='%s,f%d=f[%d]'%(l,i,i) - l2.append('f%d(v)'%(i)) - return eval('%s:%s'%(l,' or '.join(l2))) - -def l_not(f): - return eval('lambda v,f=f:not f(v)') - -def isdummyroutine(rout): - try: - return rout['f2pyenhancements']['fortranname']=='' - except KeyError: - return 0 - -def getfortranname(rout): - try: - name = rout['f2pyenhancements']['fortranname'] - if name=='': - raise KeyError - if not name: - errmess('Failed to use fortranname from %s\n'%(rout['f2pyenhancements'])) - raise KeyError - except KeyError: - name = rout['name'] - return name - -def getmultilineblock(rout,blockname,comment=1,counter=0): - try: - r = rout['f2pyenhancements'].get(blockname) - except KeyError: - return - if not r: return - if counter>0 and type(r) is type(''): - return - if type(r) is type([]): - if counter>=len(r): return - r = r[counter] - if r[:3]=="'''": - if comment: - r = '\t/* start ' + blockname + ' multiline ('+`counter`+') */\n' + r[3:] - else: - r = r[3:] - if r[-3:]=="'''": - if comment: - r = r[:-3] + '\n\t/* end multiline ('+`counter`+')*/' - else: - r = r[:-3] - else: - errmess("%s multiline block should end with `'''`: %s\n" \ - % (blockname,repr(r))) - return r - -def getcallstatement(rout): - return getmultilineblock(rout,'callstatement') - -def getcallprotoargument(rout,cb_map={}): - r = getmultilineblock(rout,'callprotoargument',comment=0) - if r: return r - if hascallstatement(rout): - outmess('warning: callstatement is defined without callprotoargument\n') - return - from capi_maps import getctype - arg_types,arg_types2 = [],[] - if l_and(isstringfunction,l_not(isfunction_wrap))(rout): - 
arg_types.extend(['char*','size_t']) - for n in rout['args']: - var = rout['vars'][n] - if isintent_callback(var): - continue - if n in cb_map: - ctype = cb_map[n]+'_typedef' - else: - ctype = getctype(var) - if l_and(isintent_c,l_or(isscalar,iscomplex))(var): - pass - elif isstring(var): - pass - #ctype = 'void*' - else: - ctype = ctype+'*' - if isstring(var) or isarrayofstrings(var): - arg_types2.append('size_t') - arg_types.append(ctype) - - proto_args = ','.join(arg_types+arg_types2) - if not proto_args: - proto_args = 'void' - #print proto_args - return proto_args - -def getusercode(rout): - return getmultilineblock(rout,'usercode') - -def getusercode1(rout): - return getmultilineblock(rout,'usercode',counter=1) - -def getpymethoddef(rout): - return getmultilineblock(rout,'pymethoddef') - -def getargs(rout): - sortargs,args=[],[] - if 'args' in rout: - args=rout['args'] - if 'sortvars' in rout: - for a in rout['sortvars']: - if a in args: sortargs.append(a) - for a in args: - if a not in sortargs: - sortargs.append(a) - else: sortargs=rout['args'] - return args,sortargs - -def getargs2(rout): - sortargs,args=[],rout.get('args',[]) - auxvars = [a for a in rout['vars'].keys() if isintent_aux(rout['vars'][a])\ - and a not in args] - args = auxvars + args - if 'sortvars' in rout: - for a in rout['sortvars']: - if a in args: sortargs.append(a) - for a in args: - if a not in sortargs: - sortargs.append(a) - else: sortargs=auxvars + rout['args'] - return args,sortargs - -def getrestdoc(rout): - if 'f2pymultilines' not in rout: - return None - k = None - if rout['block']=='python module': - k = rout['block'],rout['name'] - return rout['f2pymultilines'].get(k,None) - -def gentitle(name): - l=(80-len(name)-6)//2 - return '/*%s %s %s*/'%(l*'*',name,l*'*') - -def flatlist(l): - if type(l)==types.ListType: - return reduce(lambda x,y,f=flatlist:x+f(y),l,[]) - return [l] - -def stripcomma(s): - if s and s[-1]==',': return s[:-1] - return s - -def 
replace(str,d,defaultsep=''): - if type(d)==types.ListType: - return map(lambda d,f=replace,sep=defaultsep,s=str:f(s,d,sep),d) - if type(str)==types.ListType: - return map(lambda s,f=replace,sep=defaultsep,d=d:f(s,d,sep),str) - for k in 2*d.keys(): - if k=='separatorsfor': - continue - if 'separatorsfor' in d and k in d['separatorsfor']: - sep=d['separatorsfor'][k] - else: - sep=defaultsep - if type(d[k])==types.ListType: - str=str.replace('#%s#'%(k),sep.join(flatlist(d[k]))) - else: - str=str.replace('#%s#'%(k),d[k]) - return str - -def dictappend(rd,ar): - if type(ar)==types.ListType: - for a in ar: - rd=dictappend(rd,a) - return rd - for k in ar.keys(): - if k[0]=='_': - continue - if k in rd: - if type(rd[k])==str: - rd[k]=[rd[k]] - if type(rd[k])==types.ListType: - if type(ar[k])==types.ListType: - rd[k]=rd[k]+ar[k] - else: - rd[k].append(ar[k]) - elif type(rd[k])==types.DictType: - if type(ar[k])==types.DictType: - if k=='separatorsfor': - for k1 in ar[k].keys(): - if k1 not in rd[k]: - rd[k][k1]=ar[k][k1] - else: - rd[k]=dictappend(rd[k],ar[k]) - else: - rd[k]=ar[k] - return rd - -def applyrules(rules,d,var={}): - ret={} - if type(rules)==types.ListType: - for r in rules: - rr=applyrules(r,d,var) - ret=dictappend(ret,rr) - if '_break' in rr: - break - return ret - if '_check' in rules and (not rules['_check'](var)): - return ret - if 'need' in rules: - res = applyrules({'needs':rules['need']},d,var) - if 'needs' in res: - cfuncs.append_needs(res['needs']) - - for k in rules.keys(): - if k=='separatorsfor': - ret[k]=rules[k]; continue - if type(rules[k])==str: - ret[k]=replace(rules[k],d) - elif type(rules[k])==types.ListType: - ret[k]=[] - for i in rules[k]: - ar=applyrules({k:i},d,var) - if k in ar: - ret[k].append(ar[k]) - elif k[0]=='_': - continue - elif type(rules[k])==types.DictType: - ret[k]=[] - for k1 in rules[k].keys(): - if type(k1)==types.FunctionType and k1(var): - if type(rules[k][k1])==types.ListType: - for i in rules[k][k1]: - if 
type(i)==types.DictType: - res=applyrules({'supertext':i},d,var) - if 'supertext' in res: - i=res['supertext'] - else: i='' - ret[k].append(replace(i,d)) - else: - i=rules[k][k1] - if type(i)==types.DictType: - res=applyrules({'supertext':i},d) - if 'supertext' in res: - i=res['supertext'] - else: i='' - ret[k].append(replace(i,d)) - else: - errmess('applyrules: ignoring rule %s.\n'%`rules[k]`) - if type(ret[k])==types.ListType: - if len(ret[k])==1: - ret[k]=ret[k][0] - if ret[k]==[]: - del ret[k] - return ret diff --git a/pythonPackages/numpy/numpy/f2py/capi_maps.py b/pythonPackages/numpy/numpy/f2py/capi_maps.py deleted file mode 100755 index a3ab3e380d..0000000000 --- a/pythonPackages/numpy/numpy/f2py/capi_maps.py +++ /dev/null @@ -1,766 +0,0 @@ -#!/usr/bin/env python -""" - -Copyright 1999,2000 Pearu Peterson all rights reserved, -Pearu Peterson -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -$Date: 2005/05/06 10:57:33 $ -Pearu Peterson -""" - -__version__ = "$Revision: 1.60 $"[10:-1] - -import __version__ -f2py_version = __version__.version - -import copy -import re -import os -import sys -from auxfuncs import * -from crackfortran import markoutercomma -import cb_rules - -# Numarray and Numeric users should set this False -using_newcore = True - -depargs=[] -lcb_map={} -lcb2_map={} -# forced casting: mainly caused by the fact that Python or Numeric -# C/APIs do not support the corresponding C types. 
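The deleted auxfuncs helpers above build combined predicates (`l_and`, `l_or`, `l_not`) by assembling lambda source strings and passing them to `eval`. As an editorial aside, the same idea can be sketched with plain closures; the `isstring`/`isarray` stand-ins below are simplified assumptions, not the real f2py predicates:

```python
# Closure-based sketch of the predicate combinators from the deleted
# auxfuncs helpers (the originals built lambdas via eval()).
def l_and(*preds):
    """Predicate true when every pred accepts the value."""
    return lambda v: all(p(v) for p in preds)

def l_or(*preds):
    """Predicate true when any pred accepts the value."""
    return lambda v: any(p(v) for p in preds)

def l_not(pred):
    return lambda v: not pred(v)

# Simplified stand-ins for f2py's variable-classification predicates:
isstring = lambda var: var.get('typespec') == 'character'
isarray = lambda var: 'dimension' in var

is_string_array = l_and(isstring, isarray)
print(is_string_array({'typespec': 'character', 'dimension': ['10']}))  # True
print(is_string_array({'typespec': 'real'}))                            # False
```

The combinators compose exactly like the originals, e.g. `l_and(isstringfunction, l_not(isfunction_wrap))(rout)` in `getcallprotoargument` above.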
-c2py_map={'double':'float', - 'float':'float', # forced casting - 'long_double':'float', # forced casting - 'char':'int', # forced casting - 'signed_char':'int', # forced casting - 'unsigned_char':'int', # forced casting - 'short':'int', # forced casting - 'unsigned_short':'int', # forced casting - 'int':'int', # (forced casting) - 'long':'int', - 'long_long':'long', - 'unsigned':'int', # forced casting - 'complex_float':'complex', # forced casting - 'complex_double':'complex', - 'complex_long_double':'complex', # forced casting - 'string':'string', - } -c2capi_map={'double':'PyArray_DOUBLE', - 'float':'PyArray_FLOAT', - 'long_double':'PyArray_DOUBLE', # forced casting - 'char':'PyArray_CHAR', - 'unsigned_char':'PyArray_UBYTE', - 'signed_char':'PyArray_SBYTE', - 'short':'PyArray_SHORT', - 'unsigned_short':'PyArray_USHORT', - 'int':'PyArray_INT', - 'unsigned':'PyArray_UINT', - 'long':'PyArray_LONG', - 'long_long':'PyArray_LONG', # forced casting - 'complex_float':'PyArray_CFLOAT', - 'complex_double':'PyArray_CDOUBLE', - 'complex_long_double':'PyArray_CDOUBLE', # forced casting - 'string':'PyArray_CHAR'} - -#These new maps aren't used anyhere yet, but should be by default -# unless building numeric or numarray extensions. 
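The `using_newcore` version of `c2pycode_map` that follows replaces the old Numeric typecodes with standard struct/array typecodes (`'h'` for `short`, `'q'` for `long_long`, and so on). A small sanity sketch, using a hand-copied subset of that table, checks the codes against the platform's native C sizes via the `struct` module:

```python
import struct

# Subset of the new-core c2pycode_map from the deleted capi_maps.py;
# the values are standard struct typecodes.
c2pycode_subset = {
    'float': 'f',
    'double': 'd',
    'short': 'h',
    'int': 'i',
    'long_long': 'q',
}

# Native sizes vary by platform, but the ordering must hold.
sizes = {ctype: struct.calcsize(code) for ctype, code in c2pycode_subset.items()}
for ctype, size in sizes.items():
    print('%s -> %d bytes' % (ctype, size))
assert sizes['short'] <= sizes['int'] <= sizes['long_long']
assert sizes['float'] <= sizes['double']
```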
-if using_newcore: - c2capi_map={'double':'PyArray_DOUBLE', - 'float':'PyArray_FLOAT', - 'long_double':'PyArray_LONGDOUBLE', - 'char':'PyArray_BYTE', - 'unsigned_char':'PyArray_UBYTE', - 'signed_char':'PyArray_BYTE', - 'short':'PyArray_SHORT', - 'unsigned_short':'PyArray_USHORT', - 'int':'PyArray_INT', - 'unsigned':'PyArray_UINT', - 'long':'PyArray_LONG', - 'unsigned_long':'PyArray_ULONG', - 'long_long':'PyArray_LONGLONG', - 'unsigned_long_long':'Pyarray_ULONGLONG', - 'complex_float':'PyArray_CFLOAT', - 'complex_double':'PyArray_CDOUBLE', - 'complex_long_double':'PyArray_CDOUBLE', - 'string':'PyArray_CHAR', # f2py 2e is not ready for PyArray_STRING (must set itemisize etc) - #'string':'PyArray_STRING' - - } -c2pycode_map={'double':'d', - 'float':'f', - 'long_double':'d', # forced casting - 'char':'1', - 'signed_char':'1', - 'unsigned_char':'b', - 'short':'s', - 'unsigned_short':'w', - 'int':'i', - 'unsigned':'u', - 'long':'l', - 'long_long':'L', - 'complex_float':'F', - 'complex_double':'D', - 'complex_long_double':'D', # forced casting - 'string':'c' - } -if using_newcore: - c2pycode_map={'double':'d', - 'float':'f', - 'long_double':'g', - 'char':'b', - 'unsigned_char':'B', - 'signed_char':'b', - 'short':'h', - 'unsigned_short':'H', - 'int':'i', - 'unsigned':'I', - 'long':'l', - 'unsigned_long':'L', - 'long_long':'q', - 'unsigned_long_long':'Q', - 'complex_float':'F', - 'complex_double':'D', - 'complex_long_double':'G', - 'string':'S'} -c2buildvalue_map={'double':'d', - 'float':'f', - 'char':'b', - 'signed_char':'b', - 'short':'h', - 'int':'i', - 'long':'l', - 'long_long':'L', - 'complex_float':'N', - 'complex_double':'N', - 'complex_long_double':'N', - 'string':'z'} - -if sys.version_info[0] >= 3: - # Bytes, not Unicode strings - c2buildvalue_map['string'] = 'y' - -if using_newcore: - #c2buildvalue_map=??? 
- pass - -f2cmap_all={'real':{'':'float','4':'float','8':'double','12':'long_double','16':'long_double'}, - 'integer':{'':'int','1':'signed_char','2':'short','4':'int','8':'long_long', - '-1':'unsigned_char','-2':'unsigned_short','-4':'unsigned', - '-8':'unsigned_long_long'}, - 'complex':{'':'complex_float','8':'complex_float', - '16':'complex_double','24':'complex_long_double', - '32':'complex_long_double'}, - 'complexkind':{'':'complex_float','4':'complex_float', - '8':'complex_double','12':'complex_long_double', - '16':'complex_long_double'}, - 'logical':{'':'int','1':'char','2':'short','4':'int','8':'long_long'}, - 'double complex':{'':'complex_double'}, - 'double precision':{'':'double'}, - 'byte':{'':'char'}, - 'character':{'':'string'} - } - -if os.path.isfile('.f2py_f2cmap'): - # User defined additions to f2cmap_all. - # .f2py_f2cmap must contain a dictionary of dictionaries, only. - # For example, {'real':{'low':'float'}} means that Fortran 'real(low)' is - # interpreted as C 'float'. - # This feature is useful for F90/95 users if they use PARAMETERSs - # in type specifications. - try: - outmess('Reading .f2py_f2cmap ...\n') - f = open('.f2py_f2cmap','r') - d = eval(f.read(),{},{}) - f.close() - for k,d1 in d.items(): - for k1 in d1.keys(): - d1[k1.lower()] = d1[k1] - d[k.lower()] = d[k] - for k in d.keys(): - if k not in f2cmap_all: - f2cmap_all[k]={} - for k1 in d[k].keys(): - if d[k][k1] in c2py_map: - if k1 in f2cmap_all[k]: - outmess("\tWarning: redefinition of {'%s':{'%s':'%s'->'%s'}}\n"%(k,k1,f2cmap_all[k][k1],d[k][k1])) - f2cmap_all[k][k1] = d[k][k1] - outmess('\tMapping "%s(kind=%s)" to "%s"\n' % (k,k1,d[k][k1])) - else: - errmess("\tIgnoring map {'%s':{'%s':'%s'}}: '%s' must be in %s\n"%(k,k1,d[k][k1],d[k][k1],c2py_map.keys())) - outmess('Succesfully applied user defined changes from .f2py_f2cmap\n') - except: - errmess('Failed to apply user defined changes from .f2py_f2cmap. 
Skipping.\n') -cformat_map={'double':'%g', - 'float':'%g', - 'long_double':'%Lg', - 'char':'%d', - 'signed_char':'%d', - 'unsigned_char':'%hhu', - 'short':'%hd', - 'unsigned_short':'%hu', - 'int':'%d', - 'unsigned':'%u', - 'long':'%ld', - 'unsigned_long':'%lu', - 'long_long':'%ld', - 'complex_float':'(%g,%g)', - 'complex_double':'(%g,%g)', - 'complex_long_double':'(%Lg,%Lg)', - 'string':'%s', - } - -############### Auxiliary functions -def getctype(var): - """ - Determines C type - """ - ctype='void' - if isfunction(var): - if 'result' in var: - a=var['result'] - else: - a=var['name'] - if a in var['vars']: - return getctype(var['vars'][a]) - else: - errmess('getctype: function %s has no return value?!\n'%a) - elif issubroutine(var): - return ctype - elif 'typespec' in var and var['typespec'].lower() in f2cmap_all: - typespec = var['typespec'].lower() - f2cmap=f2cmap_all[typespec] - ctype=f2cmap[''] # default type - if 'kindselector' in var: - if '*' in var['kindselector']: - try: - ctype=f2cmap[var['kindselector']['*']] - except KeyError: - errmess('getctype: "%s %s %s" not supported.\n'%(var['typespec'],'*',var['kindselector']['*'])) - elif 'kind' in var['kindselector']: - if typespec+'kind' in f2cmap_all: - f2cmap=f2cmap_all[typespec+'kind'] - try: - ctype=f2cmap[var['kindselector']['kind']] - except KeyError: - if typespec in f2cmap_all: - f2cmap=f2cmap_all[typespec] - try: - ctype=f2cmap[str(var['kindselector']['kind'])] - except KeyError: - errmess('getctype: "%s(kind=%s)" not supported (use .f2py_f2cmap).\n'\ - %(typespec,var['kindselector']['kind'])) - - else: - if not isexternal(var): - errmess('getctype: No C-type found in "%s", assuming void.\n'%var) - return ctype - -def getstrlength(var): - if isstringfunction(var): - if 'result' in var: - a=var['result'] - else: - a=var['name'] - if a in var['vars']: - return getstrlength(var['vars'][a]) - else: - errmess('getstrlength: function %s has no return value?!\n'%a) - if not isstring(var): - 
errmess('getstrlength: expected a signature of a string but got: %s\n'%(`var`)) - len='1' - if 'charselector' in var: - a=var['charselector'] - if '*' in a: - len=a['*'] - elif 'len' in a: - len=a['len'] - if re.match(r'\(\s*([*]|[:])\s*\)',len) or re.match(r'([*]|[:])',len): - #if len in ['(*)','*','(:)',':']: - if isintent_hide(var): - errmess('getstrlength:intent(hide): expected a string with defined length but got: %s\n'%(`var`)) - len='-1' - return len - -def getarrdims(a,var,verbose=0): - global depargs - ret={} - if isstring(var) and not isarray(var): - ret['dims']=getstrlength(var) - ret['size']=ret['dims'] - ret['rank']='1' - elif isscalar(var): - ret['size']='1' - ret['rank']='0' - ret['dims']='' - elif isarray(var): -# if not isintent_c(var): -# var['dimension'].reverse() - dim=copy.copy(var['dimension']) - ret['size']='*'.join(dim) - try: ret['size']=`eval(ret['size'])` - except: pass - ret['dims']=','.join(dim) - ret['rank']=`len(dim)` - ret['rank*[-1]']=`len(dim)*[-1]`[1:-1] - for i in range(len(dim)): # solve dim for dependecies - v=[] - if dim[i] in depargs: v=[dim[i]] - else: - for va in depargs: - if re.match(r'.*?\b%s\b.*'%va,dim[i]): - v.append(va) - for va in v: - if depargs.index(va)>depargs.index(a): - dim[i]='*' - break - ret['setdims'],i='',-1 - for d in dim: - i=i+1 - if d not in ['*',':','(*)','(:)']: - ret['setdims']='%s#varname#_Dims[%d]=%s,'%(ret['setdims'],i,d) - if ret['setdims']: ret['setdims']=ret['setdims'][:-1] - ret['cbsetdims'],i='',-1 - for d in var['dimension']: - i=i+1 - if d not in ['*',':','(*)','(:)']: - ret['cbsetdims']='%s#varname#_Dims[%d]=%s,'%(ret['cbsetdims'],i,d) - elif isintent_in(var): - outmess('getarrdims:warning: assumed shape array, using 0 instead of %r\n' \ - % (d)) - ret['cbsetdims']='%s#varname#_Dims[%d]=%s,'%(ret['cbsetdims'],i,0) - elif verbose : - errmess('getarrdims: If in call-back function: array argument %s must have bounded dimensions: got %s\n'%(`a`,`d`)) - if ret['cbsetdims']: 
ret['cbsetdims']=ret['cbsetdims'][:-1] -# if not isintent_c(var): -# var['dimension'].reverse() - return ret - -def getpydocsign(a,var): - global lcb_map - if isfunction(var): - if 'result' in var: - af=var['result'] - else: - af=var['name'] - if af in var['vars']: - return getpydocsign(af,var['vars'][af]) - else: - errmess('getctype: function %s has no return value?!\n'%af) - return '','' - sig,sigout=a,a - opt='' - if isintent_in(var): opt='input' - elif isintent_inout(var): opt='in/output' - out_a = a - if isintent_out(var): - for k in var['intent']: - if k[:4]=='out=': - out_a = k[4:] - break - init='' - ctype=getctype(var) - - if hasinitvalue(var): - init,showinit=getinit(a,var) - init='= %s'%(showinit) - if isscalar(var): - if isintent_inout(var): - sig='%s :%s %s rank-0 array(%s,\'%s\')'%(a,init,opt,c2py_map[ctype], - c2pycode_map[ctype],) - else: - sig='%s :%s %s %s'%(a,init,opt,c2py_map[ctype]) - sigout='%s : %s'%(out_a,c2py_map[ctype]) - elif isstring(var): - if isintent_inout(var): - sig='%s :%s %s rank-0 array(string(len=%s),\'c\')'%(a,init,opt,getstrlength(var)) - else: - sig='%s :%s %s string(len=%s)'%(a,init,opt,getstrlength(var)) - sigout='%s : string(len=%s)'%(out_a,getstrlength(var)) - elif isarray(var): - dim=var['dimension'] - rank=`len(dim)` - sig='%s :%s %s rank-%s array(\'%s\') with bounds (%s)'%(a,init,opt,rank, - c2pycode_map[ctype], - ','.join(dim)) - if a==out_a: - sigout='%s : rank-%s array(\'%s\') with bounds (%s)'\ - %(a,rank,c2pycode_map[ctype],','.join(dim)) - else: - sigout='%s : rank-%s array(\'%s\') with bounds (%s) and %s storage'\ - %(out_a,rank,c2pycode_map[ctype],','.join(dim),a) - elif isexternal(var): - ua='' - if a in lcb_map and lcb_map[a] in lcb2_map and 'argname' in lcb2_map[lcb_map[a]]: - ua=lcb2_map[lcb_map[a]]['argname'] - if not ua==a: ua=' => %s'%ua - else: ua='' - sig='%s : call-back function%s'%(a,ua) - sigout=sig - else: - errmess('getpydocsign: Could not resolve docsignature for "%s".\\n'%a) - return sig,sigout 
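`getstrlength` above treats `*`, `(*)`, `:` and `(:)` as assumed-length character specifiers and maps them to `-1`. A minimal sketch of that classification, reusing the two regular expressions from the deleted code (the function name here is an illustrative assumption):

```python
import re

# The two patterns used by the deleted getstrlength() to detect
# assumed-length Fortran character declarations.
def is_assumed_length(spec):
    """True for '*', '(*)', ':' or '(:)' style length specifiers."""
    return bool(re.match(r'\(\s*([*]|[:])\s*\)', spec)
                or re.match(r'([*]|[:])', spec))

for spec in ['10', '(*)', '*', '(:)', ':']:
    print(spec, is_assumed_length(spec))
# '10' is an explicit length; the other four are assumed-length.
```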
- -def getarrdocsign(a,var): - ctype=getctype(var) - if isstring(var) and (not isarray(var)): - sig='%s : rank-0 array(string(len=%s),\'c\')'%(a,getstrlength(var)) - elif isscalar(var): - sig='%s : rank-0 array(%s,\'%s\')'%(a,c2py_map[ctype], - c2pycode_map[ctype],) - elif isarray(var): - dim=var['dimension'] - rank=`len(dim)` - sig='%s : rank-%s array(\'%s\') with bounds (%s)'%(a,rank, - c2pycode_map[ctype], - ','.join(dim)) - return sig - -def getinit(a,var): - if isstring(var): init,showinit='""',"''" - else: init,showinit='','' - if hasinitvalue(var): - init=var['='] - showinit=init - if iscomplex(var) or iscomplexarray(var): - ret={} - - try: - v = var["="] - if ',' in v: - ret['init.r'],ret['init.i']=markoutercomma(v[1:-1]).split('@,@') - else: - v = eval(v,{},{}) - ret['init.r'],ret['init.i']=str(v.real),str(v.imag) - except: - raise ValueError('sign2map: expected complex number `(r,i)\' but got `%s\' as initial value of %r.' % (init, a)) - if isarray(var): - init='(capi_c.r=%s,capi_c.i=%s,capi_c)'%(ret['init.r'],ret['init.i']) - elif isstring(var): - if not init: init,showinit='""',"''" - if init[0]=="'": - init='"%s"'%(init[1:-1].replace('"','\\"')) - if init[0]=='"': showinit="'%s'"%(init[1:-1]) - return init,showinit - -def sign2map(a,var): - """ - varname,ctype,atype - init,init.r,init.i,pytype - vardebuginfo,vardebugshowvalue,varshowvalue - varrfromat - intent - """ - global lcb_map,cb_map - out_a = a - if isintent_out(var): - for k in var['intent']: - if k[:4]=='out=': - out_a = k[4:] - break - ret={'varname':a,'outvarname':out_a} - ret['ctype']=getctype(var) - intent_flags = [] - for f,s in isintent_dict.items(): - if f(var): intent_flags.append('F2PY_%s'%s) - if intent_flags: - #XXX: Evaluate intent_flags here. 
- ret['intent'] = '|'.join(intent_flags) - else: - ret['intent'] = 'F2PY_INTENT_IN' - if isarray(var): ret['varrformat']='N' - elif ret['ctype'] in c2buildvalue_map: - ret['varrformat']=c2buildvalue_map[ret['ctype']] - else: ret['varrformat']='O' - ret['init'],ret['showinit']=getinit(a,var) - if hasinitvalue(var) and iscomplex(var) and not isarray(var): - ret['init.r'],ret['init.i'] = markoutercomma(ret['init'][1:-1]).split('@,@') - if isexternal(var): - ret['cbnamekey']=a - if a in lcb_map: - ret['cbname']=lcb_map[a] - ret['maxnofargs']=lcb2_map[lcb_map[a]]['maxnofargs'] - ret['nofoptargs']=lcb2_map[lcb_map[a]]['nofoptargs'] - ret['cbdocstr']=lcb2_map[lcb_map[a]]['docstr'] - ret['cblatexdocstr']=lcb2_map[lcb_map[a]]['latexdocstr'] - else: - ret['cbname']=a - errmess('sign2map: Confused: external %s is not in lcb_map%s.\n'%(a,lcb_map.keys())) - if isstring(var): - ret['length']=getstrlength(var) - if isarray(var): - ret=dictappend(ret,getarrdims(a,var)) - dim=copy.copy(var['dimension']) - if ret['ctype'] in c2capi_map: - ret['atype']=c2capi_map[ret['ctype']] - # Debug info - if debugcapi(var): - il=[isintent_in,'input',isintent_out,'output', - isintent_inout,'inoutput',isrequired,'required', - isoptional,'optional',isintent_hide,'hidden', - iscomplex,'complex scalar', - l_and(isscalar,l_not(iscomplex)),'scalar', - isstring,'string',isarray,'array', - iscomplexarray,'complex array',isstringarray,'string array', - iscomplexfunction,'complex function', - l_and(isfunction,l_not(iscomplexfunction)),'function', - isexternal,'callback', - isintent_callback,'callback', - isintent_aux,'auxiliary', - #ismutable,'mutable',l_not(ismutable),'immutable', - ] - rl=[] - for i in range(0,len(il),2): - if il[i](var): rl.append(il[i+1]) - if isstring(var): - rl.append('slen(%s)=%s'%(a,ret['length'])) - if isarray(var): -# if not isintent_c(var): -# var['dimension'].reverse() - ddim=','.join(map(lambda x,y:'%s|%s'%(x,y),var['dimension'],dim)) - rl.append('dims(%s)'%ddim) -# if not 
isintent_c(var): -# var['dimension'].reverse() - if isexternal(var): - ret['vardebuginfo']='debug-capi:%s=>%s:%s'%(a,ret['cbname'],','.join(rl)) - else: - ret['vardebuginfo']='debug-capi:%s %s=%s:%s'%(ret['ctype'],a,ret['showinit'],','.join(rl)) - if isscalar(var): - if ret['ctype'] in cformat_map: - ret['vardebugshowvalue']='debug-capi:%s=%s'%(a,cformat_map[ret['ctype']]) - if isstring(var): - ret['vardebugshowvalue']='debug-capi:slen(%s)=%%d %s=\\"%%s\\"'%(a,a) - if isexternal(var): - ret['vardebugshowvalue']='debug-capi:%s=%%p'%(a) - if ret['ctype'] in cformat_map: - ret['varshowvalue']='#name#:%s=%s'%(a,cformat_map[ret['ctype']]) - ret['showvalueformat']='%s'%(cformat_map[ret['ctype']]) - if isstring(var): - ret['varshowvalue']='#name#:slen(%s)=%%d %s=\\"%%s\\"'%(a,a) - ret['pydocsign'],ret['pydocsignout']=getpydocsign(a,var) - if hasnote(var): - ret['note']=var['note'] - return ret - -def routsign2map(rout): - """ - name,NAME,begintitle,endtitle - rname,ctype,rformat - routdebugshowvalue - """ - global lcb_map - name = rout['name'] - fname = getfortranname(rout) - ret={'name':name, - 'texname':name.replace('_','\\_'), - 'name_lower':name.lower(), - 'NAME':name.upper(), - 'begintitle':gentitle(name), - 'endtitle':gentitle('end of %s'%name), - 'fortranname':fname, - 'FORTRANNAME':fname.upper(), - 'callstatement':getcallstatement(rout) or '', - 'usercode':getusercode(rout) or '', - 'usercode1':getusercode1(rout) or '', - } - if '_' in fname: - ret['F_FUNC'] = 'F_FUNC_US' - else: - ret['F_FUNC'] = 'F_FUNC' - if '_' in name: - ret['F_WRAPPEDFUNC'] = 'F_WRAPPEDFUNC_US' - else: - ret['F_WRAPPEDFUNC'] = 'F_WRAPPEDFUNC' - lcb_map={} - if 'use' in rout: - for u in rout['use'].keys(): - if u in cb_rules.cb_map: - for un in cb_rules.cb_map[u]: - ln=un[0] - if 'map' in rout['use'][u]: - for k in rout['use'][u]['map'].keys(): - if rout['use'][u]['map'][k]==un[0]: ln=k;break - lcb_map[ln]=un[1] - #else: - # errmess('routsign2map: cb_map does not contain module "%s" used in 
"use" statement.\n'%(u)) - elif 'externals' in rout and rout['externals']: - errmess('routsign2map: Confused: function %s has externals %s but no "use" statement.\n'%(ret['name'],`rout['externals']`)) - ret['callprotoargument'] = getcallprotoargument(rout,lcb_map) or '' - if isfunction(rout): - if 'result' in rout: - a=rout['result'] - else: - a=rout['name'] - ret['rname']=a - ret['pydocsign'],ret['pydocsignout']=getpydocsign(a,rout) - ret['ctype']=getctype(rout['vars'][a]) - if hasresultnote(rout): - ret['resultnote']=rout['vars'][a]['note'] - rout['vars'][a]['note']=['See elsewhere.'] - if ret['ctype'] in c2buildvalue_map: - ret['rformat']=c2buildvalue_map[ret['ctype']] - else: - ret['rformat']='O' - errmess('routsign2map: no c2buildvalue key for type %s\n'%(`ret['ctype']`)) - if debugcapi(rout): - if ret['ctype'] in cformat_map: - ret['routdebugshowvalue']='debug-capi:%s=%s'%(a,cformat_map[ret['ctype']]) - if isstringfunction(rout): - ret['routdebugshowvalue']='debug-capi:slen(%s)=%%d %s=\\"%%s\\"'%(a,a) - if isstringfunction(rout): - ret['rlength']=getstrlength(rout['vars'][a]) - if ret['rlength']=='-1': - errmess('routsign2map: expected explicit specification of the length of the string returned by the fortran function %s; taking 10.\n'%(`rout['name']`)) - ret['rlength']='10' - if hasnote(rout): - ret['note']=rout['note'] - rout['note']=['See elsewhere.'] - return ret - -def modsign2map(m): - """ - modulename - """ - if ismodule(m): - ret={'f90modulename':m['name'], - 'F90MODULENAME':m['name'].upper(), - 'texf90modulename':m['name'].replace('_','\\_')} - else: - ret={'modulename':m['name'], - 'MODULENAME':m['name'].upper(), - 'texmodulename':m['name'].replace('_','\\_')} - ret['restdoc'] = getrestdoc(m) or [] - if hasnote(m): - ret['note']=m['note'] - #m['note']=['See elsewhere.'] - ret['usercode'] = getusercode(m) or '' - ret['usercode1'] = getusercode1(m) or '' - if m['body']: - ret['interface_usercode'] = getusercode(m['body'][0]) or '' - else: - 
ret['interface_usercode'] = '' - ret['pymethoddef'] = getpymethoddef(m) or '' - return ret - -def cb_sign2map(a,var,index=None): - ret={'varname':a} - if index is None or 1: # disable 7712 patch - ret['varname_i'] = ret['varname'] - else: - ret['varname_i'] = ret['varname'] + '_' + str(index) - ret['ctype']=getctype(var) - if ret['ctype'] in c2capi_map: - ret['atype']=c2capi_map[ret['ctype']] - if ret['ctype'] in cformat_map: - ret['showvalueformat']='%s'%(cformat_map[ret['ctype']]) - if isarray(var): - ret=dictappend(ret,getarrdims(a,var)) - ret['pydocsign'],ret['pydocsignout']=getpydocsign(a,var) - if hasnote(var): - ret['note']=var['note'] - var['note']=['See elsewhere.'] - return ret - -def cb_routsign2map(rout,um): - """ - name,begintitle,endtitle,argname - ctype,rctype,maxnofargs,nofoptargs,returncptr - """ - ret={'name':'cb_%s_in_%s'%(rout['name'],um), - 'returncptr':''} - if isintent_callback(rout): - if '_' in rout['name']: - F_FUNC='F_FUNC_US' - else: - F_FUNC='F_FUNC' - ret['callbackname'] = '%s(%s,%s)' \ - % (F_FUNC, - rout['name'].lower(), - rout['name'].upper(), - ) - ret['static'] = 'extern' - else: - ret['callbackname'] = ret['name'] - ret['static'] = 'static' - ret['argname']=rout['name'] - ret['begintitle']=gentitle(ret['name']) - ret['endtitle']=gentitle('end of %s'%ret['name']) - ret['ctype']=getctype(rout) - ret['rctype']='void' - if ret['ctype']=='string': ret['rctype']='void' - else: - ret['rctype']=ret['ctype'] - if ret['rctype']!='void': - if iscomplexfunction(rout): - ret['returncptr'] = """ -#ifdef F2PY_CB_RETURNCOMPLEX -return_value= -#endif -""" - else: - ret['returncptr'] = 'return_value=' - if ret['ctype'] in cformat_map: - ret['showvalueformat']='%s'%(cformat_map[ret['ctype']]) - if isstringfunction(rout): - ret['strlength']=getstrlength(rout) - if isfunction(rout): - if 'result' in rout: - a=rout['result'] - else: - a=rout['name'] - if hasnote(rout['vars'][a]): - ret['note']=rout['vars'][a]['note'] - rout['vars'][a]['note']=['See 
elsewhere.'] - ret['rname']=a - ret['pydocsign'],ret['pydocsignout']=getpydocsign(a,rout) - if iscomplexfunction(rout): - ret['rctype']=""" -#ifdef F2PY_CB_RETURNCOMPLEX -#ctype# -#else -void -#endif -""" - else: - if hasnote(rout): - ret['note']=rout['note'] - rout['note']=['See elsewhere.'] - nofargs=0 - nofoptargs=0 - if 'args' in rout and 'vars' in rout: - for a in rout['args']: - var=rout['vars'][a] - if l_or(isintent_in,isintent_inout)(var): - nofargs=nofargs+1 - if isoptional(var): - nofoptargs=nofoptargs+1 - ret['maxnofargs']=`nofargs` - ret['nofoptargs']=`nofoptargs` - if hasnote(rout) and isfunction(rout) and 'result' in rout: - ret['routnote']=rout['note'] - rout['note']=['See elsewhere.'] - return ret - -def common_sign2map(a,var): # obsolute - ret={'varname':a} - ret['ctype']=getctype(var) - if isstringarray(var): - ret['ctype']='char' - if ret['ctype'] in c2capi_map: - ret['atype']=c2capi_map[ret['ctype']] - if ret['ctype'] in cformat_map: - ret['showvalueformat']='%s'%(cformat_map[ret['ctype']]) - if isarray(var): - ret=dictappend(ret,getarrdims(a,var)) - elif isstring(var): - ret['size']=getstrlength(var) - ret['rank']='1' - ret['pydocsign'],ret['pydocsignout']=getpydocsign(a,var) - if hasnote(var): - ret['note']=var['note'] - var['note']=['See elsewhere.'] - ret['arrdocstr']=getarrdocsign(a,var) # for strings this returns 0-rank but actually is 1-rank - return ret diff --git a/pythonPackages/numpy/numpy/f2py/cb_rules.py b/pythonPackages/numpy/numpy/f2py/cb_rules.py deleted file mode 100755 index 99742cb467..0000000000 --- a/pythonPackages/numpy/numpy/f2py/cb_rules.py +++ /dev/null @@ -1,539 +0,0 @@ -#!/usr/bin/env python -""" - -Build call-back mechanism for f2py2e. - -Copyright 2000 Pearu Peterson all rights reserved, -Pearu Peterson -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. 
-$Date: 2005/07/20 11:27:58 $ -Pearu Peterson -""" - -__version__ = "$Revision: 1.53 $"[10:-1] - -import __version__ -f2py_version = __version__.version - - -import pprint -import sys -import types -errmess=sys.stderr.write -outmess=sys.stdout.write -show=pprint.pprint - -from auxfuncs import * -import cfuncs - -################## Rules for callback function ############## - -cb_routine_rules={ - 'cbtypedefs':'typedef #rctype#(*#name#_typedef)(#optargs_td##args_td##strarglens_td##noargs#);', - 'body':""" -#begintitle# -PyObject *#name#_capi = NULL;/*was Py_None*/ -PyTupleObject *#name#_args_capi = NULL; -int #name#_nofargs = 0; -jmp_buf #name#_jmpbuf; -/*typedef #rctype#(*#name#_typedef)(#optargs_td##args_td##strarglens_td##noargs#);*/ -#static# #rctype# #callbackname# (#optargs##args##strarglens##noargs#) { -\tPyTupleObject *capi_arglist = #name#_args_capi; -\tPyObject *capi_return = NULL; -\tPyObject *capi_tmp = NULL; -\tint capi_j,capi_i = 0; -\tint capi_longjmp_ok = 1; -#decl# -#ifdef F2PY_REPORT_ATEXIT -f2py_cb_start_clock(); -#endif -\tCFUNCSMESS(\"cb:Call-back function #name# (maxnofargs=#maxnofargs#(-#nofoptargs#))\\n\"); -\tCFUNCSMESSPY(\"cb:#name#_capi=\",#name#_capi); -\tif (#name#_capi==NULL) { -\t\tcapi_longjmp_ok = 0; -\t\t#name#_capi = PyObject_GetAttrString(#modulename#_module,\"#argname#\"); -\t} -\tif (#name#_capi==NULL) { -\t\tPyErr_SetString(#modulename#_error,\"cb: Callback #argname# not defined (as an argument or module #modulename# attribute).\\n\"); -\t\tgoto capi_fail; -\t} -\tif (F2PyCapsule_Check(#name#_capi)) { -\t#name#_typedef #name#_cptr; -\t#name#_cptr = F2PyCapsule_AsVoidPtr(#name#_capi); -\t#returncptr#(*#name#_cptr)(#optargs_nm##args_nm##strarglens_nm#); -\t#return# -\t} -\tif (capi_arglist==NULL) { -\t\tcapi_longjmp_ok = 0; -\t\tcapi_tmp = PyObject_GetAttrString(#modulename#_module,\"#argname#_extra_args\"); -\t\tif (capi_tmp) { -\t\t\tcapi_arglist = (PyTupleObject *)PySequence_Tuple(capi_tmp); -\t\t\tif (capi_arglist==NULL) { 
-\t\t\t\tPyErr_SetString(#modulename#_error,\"Failed to convert #modulename#.#argname#_extra_args to tuple.\\n\"); -\t\t\t\tgoto capi_fail; -\t\t\t} -\t\t} else { -\t\t\tPyErr_Clear(); -\t\t\tcapi_arglist = (PyTupleObject *)Py_BuildValue(\"()\"); -\t\t} -\t} -\tif (capi_arglist == NULL) { -\t\tPyErr_SetString(#modulename#_error,\"Callback #argname# argument list is not set.\\n\"); -\t\tgoto capi_fail; -\t} -#setdims# -#pyobjfrom# -\tCFUNCSMESSPY(\"cb:capi_arglist=\",capi_arglist); -\tCFUNCSMESS(\"cb:Call-back calling Python function #argname#.\\n\"); -#ifdef F2PY_REPORT_ATEXIT -f2py_cb_start_call_clock(); -#endif -\tcapi_return = PyObject_CallObject(#name#_capi,(PyObject *)capi_arglist); -#ifdef F2PY_REPORT_ATEXIT -f2py_cb_stop_call_clock(); -#endif -\tCFUNCSMESSPY(\"cb:capi_return=\",capi_return); -\tif (capi_return == NULL) { -\t\tfprintf(stderr,\"capi_return is NULL\\n\"); -\t\tgoto capi_fail; -\t} -\tif (capi_return == Py_None) { -\t\tPy_DECREF(capi_return); -\t\tcapi_return = Py_BuildValue(\"()\"); -\t} -\telse if (!PyTuple_Check(capi_return)) { -\t\tcapi_return = Py_BuildValue(\"(N)\",capi_return); -\t} -\tcapi_j = PyTuple_Size(capi_return); -\tcapi_i = 0; -#frompyobj# -\tCFUNCSMESS(\"cb:#name#:successful\\n\"); -\tPy_DECREF(capi_return); -#ifdef F2PY_REPORT_ATEXIT -f2py_cb_stop_clock(); -#endif -\tgoto capi_return_pt; -capi_fail: -\tfprintf(stderr,\"Call-back #name# failed.\\n\"); -\tPy_XDECREF(capi_return); -\tif (capi_longjmp_ok) -\t\tlongjmp(#name#_jmpbuf,-1); -capi_return_pt: -\t; -#return# -} -#endtitle# -""", - 'need':['setjmp.h','CFUNCSMESS'], - 'maxnofargs':'#maxnofargs#', - 'nofoptargs':'#nofoptargs#', - 'docstr':"""\ -\tdef #argname#(#docsignature#): return #docreturn#\\n\\ -#docstrsigns#""", - 'latexdocstr':""" -{{}\\verb@def #argname#(#latexdocsignature#): return #docreturn#@{}} -#routnote# - -#latexdocstrsigns#""", - 'docstrshort':'def #argname#(#docsignature#): return #docreturn#' - } -cb_rout_rules=[ - {# Init - 'separatorsfor':{'decl':'\n', - 
'args':',','optargs':'','pyobjfrom':'\n','freemem':'\n', - 'args_td':',','optargs_td':'', - 'args_nm':',','optargs_nm':'', - 'frompyobj':'\n','setdims':'\n', - 'docstrsigns':'\\n"\n"', - 'latexdocstrsigns':'\n', - 'latexdocstrreq':'\n','latexdocstropt':'\n', - 'latexdocstrout':'\n','latexdocstrcbs':'\n', - }, - 'decl':'/*decl*/','pyobjfrom':'/*pyobjfrom*/','frompyobj':'/*frompyobj*/', - 'args':[],'optargs':'','return':'','strarglens':'','freemem':'/*freemem*/', - 'args_td':[],'optargs_td':'','strarglens_td':'', - 'args_nm':[],'optargs_nm':'','strarglens_nm':'', - 'noargs':'', - 'setdims':'/*setdims*/', - 'docstrsigns':'','latexdocstrsigns':'', - 'docstrreq':'\tRequired arguments:', - 'docstropt':'\tOptional arguments:', - 'docstrout':'\tReturn objects:', - 'docstrcbs':'\tCall-back functions:', - 'docreturn':'','docsign':'','docsignopt':'', - 'latexdocstrreq':'\\noindent Required arguments:', - 'latexdocstropt':'\\noindent Optional arguments:', - 'latexdocstrout':'\\noindent Return objects:', - 'latexdocstrcbs':'\\noindent Call-back functions:', - 'routnote':{hasnote:'--- #note#',l_not(hasnote):''}, - },{ # Function - 'decl':'\t#ctype# return_value;', - 'frompyobj':[{debugcapi:'\tCFUNCSMESS("cb:Getting return_value->");'}, - '\tif (capi_j>capi_i)\n\t\tGETSCALARFROMPYTUPLE(capi_return,capi_i++,&return_value,#ctype#,"#ctype#_from_pyobj failed in converting return_value of call-back function #name# to C #ctype#\\n");', - {debugcapi:'\tfprintf(stderr,"#showvalueformat#.\\n",return_value);'} - ], - 'need':['#ctype#_from_pyobj',{debugcapi:'CFUNCSMESS'},'GETSCALARFROMPYTUPLE'], - 'return':'\treturn return_value;', - '_check':l_and(isfunction,l_not(isstringfunction),l_not(iscomplexfunction)) - }, - {# String function - 'pyobjfrom':{debugcapi:'\tfprintf(stderr,"debug-capi:cb:#name#:%d:\\n",return_value_len);'}, - 'args':'#ctype# return_value,int return_value_len', - 'args_nm':'return_value,&return_value_len', - 'args_td':'#ctype# ,int', - 
'frompyobj':[{debugcapi:'\tCFUNCSMESS("cb:Getting return_value->\\"");'}, - """\tif (capi_j>capi_i) -\t\tGETSTRFROMPYTUPLE(capi_return,capi_i++,return_value,return_value_len);""", - {debugcapi:'\tfprintf(stderr,"#showvalueformat#\\".\\n",return_value);'} - ], - 'need':['#ctype#_from_pyobj',{debugcapi:'CFUNCSMESS'}, - 'string.h','GETSTRFROMPYTUPLE'], - 'return':'return;', - '_check':isstringfunction - }, - {# Complex function - 'optargs':""" -#ifndef F2PY_CB_RETURNCOMPLEX -#ctype# *return_value -#endif -""", - 'optargs_nm':""" -#ifndef F2PY_CB_RETURNCOMPLEX -return_value -#endif -""", - 'optargs_td':""" -#ifndef F2PY_CB_RETURNCOMPLEX -#ctype# * -#endif -""", - 'decl':""" -#ifdef F2PY_CB_RETURNCOMPLEX -\t#ctype# return_value; -#endif -""", - 'frompyobj':[{debugcapi:'\tCFUNCSMESS("cb:Getting return_value->");'}, - """\ -\tif (capi_j>capi_i) -#ifdef F2PY_CB_RETURNCOMPLEX -\t\tGETSCALARFROMPYTUPLE(capi_return,capi_i++,&return_value,#ctype#,\"#ctype#_from_pyobj failed in converting return_value of call-back function #name# to C #ctype#\\n\"); -#else -\t\tGETSCALARFROMPYTUPLE(capi_return,capi_i++,return_value,#ctype#,\"#ctype#_from_pyobj failed in converting return_value of call-back function #name# to C #ctype#\\n\"); -#endif -""", - {debugcapi:""" -#ifdef F2PY_CB_RETURNCOMPLEX -\tfprintf(stderr,\"#showvalueformat#.\\n\",(return_value).r,(return_value).i); -#else -\tfprintf(stderr,\"#showvalueformat#.\\n\",(*return_value).r,(*return_value).i); -#endif - -"""} - ], - 'return':""" -#ifdef F2PY_CB_RETURNCOMPLEX -\treturn return_value; -#else -\treturn; -#endif -""", - 'need':['#ctype#_from_pyobj',{debugcapi:'CFUNCSMESS'}, - 'string.h','GETSCALARFROMPYTUPLE','#ctype#'], - '_check':iscomplexfunction - }, - {'docstrout':'\t\t#pydocsignout#', - 'latexdocstrout':['\\item[]{{}\\verb@#pydocsignout#@{}}', - {hasnote:'--- #note#'}], - 'docreturn':'#rname#,', - '_check':isfunction}, - {'_check':issubroutine,'return':'return;'} - ] - -cb_arg_rules=[ - { # Doc - 
'docstropt':{l_and(isoptional,isintent_nothide):'\t\t#pydocsign#'}, - 'docstrreq':{l_and(isrequired,isintent_nothide):'\t\t#pydocsign#'}, - 'docstrout':{isintent_out:'\t\t#pydocsignout#'}, - 'latexdocstropt':{l_and(isoptional,isintent_nothide):['\\item[]{{}\\verb@#pydocsign#@{}}', - {hasnote:'--- #note#'}]}, - 'latexdocstrreq':{l_and(isrequired,isintent_nothide):['\\item[]{{}\\verb@#pydocsign#@{}}', - {hasnote:'--- #note#'}]}, - 'latexdocstrout':{isintent_out:['\\item[]{{}\\verb@#pydocsignout#@{}}', - {l_and(hasnote,isintent_hide):'--- #note#', - l_and(hasnote,isintent_nothide):'--- See above.'}]}, - 'docsign':{l_and(isrequired,isintent_nothide):'#varname#,'}, - 'docsignopt':{l_and(isoptional,isintent_nothide):'#varname#,'}, - 'depend':'' - }, - { - 'args':{ - l_and (isscalar,isintent_c):'#ctype# #varname_i#', - l_and (isscalar,l_not(isintent_c)):'#ctype# *#varname_i#_cb_capi', - isarray:'#ctype# *#varname_i#', - isstring:'#ctype# #varname_i#' - }, - 'args_nm':{ - l_and (isscalar,isintent_c):'#varname_i#', - l_and (isscalar,l_not(isintent_c)):'#varname_i#_cb_capi', - isarray:'#varname_i#', - isstring:'#varname_i#' - }, - 'args_td':{ - l_and (isscalar,isintent_c):'#ctype#', - l_and (isscalar,l_not(isintent_c)):'#ctype# *', - isarray:'#ctype# *', - isstring:'#ctype#' - }, - 'strarglens':{isstring:',int #varname_i#_cb_len'}, # untested with multiple args - 'strarglens_td':{isstring:',int'}, # untested with multiple args - 'strarglens_nm':{isstring:',#varname_i#_cb_len'}, # untested with multiple args - }, - { # Scalars - 'decl':{l_not(isintent_c):'\t#ctype# #varname_i#=(*#varname_i#_cb_capi);'}, - 'error': {l_and(isintent_c,isintent_out, - throw_error('intent(c,out) is forbidden for callback scalar arguments')):\ - ''}, - 'frompyobj':[{debugcapi:'\tCFUNCSMESS("cb:Getting #varname#->");'}, - {isintent_out:'\tif (capi_j>capi_i)\n\t\tGETSCALARFROMPYTUPLE(capi_return,capi_i++,#varname_i#_cb_capi,#ctype#,"#ctype#_from_pyobj failed in converting argument #varname# of 
call-back function #name# to C #ctype#\\n");'}, - {l_and(debugcapi,l_and(l_not(iscomplex),isintent_c)):'\tfprintf(stderr,"#showvalueformat#.\\n",#varname_i#);'}, - {l_and(debugcapi,l_and(l_not(iscomplex),l_not(isintent_c))):'\tfprintf(stderr,"#showvalueformat#.\\n",*#varname_i#_cb_capi);'}, - {l_and(debugcapi,l_and(iscomplex,isintent_c)):'\tfprintf(stderr,"#showvalueformat#.\\n",(#varname_i#).r,(#varname_i#).i);'}, - {l_and(debugcapi,l_and(iscomplex,l_not(isintent_c))):'\tfprintf(stderr,"#showvalueformat#.\\n",(*#varname_i#_cb_capi).r,(*#varname_i#_cb_capi).i);'}, - ], - 'need':[{isintent_out:['#ctype#_from_pyobj','GETSCALARFROMPYTUPLE']}, - {debugcapi:'CFUNCSMESS'}], - '_check':isscalar - },{ - 'pyobjfrom':[{isintent_in:"""\ -\tif (#name#_nofargs>capi_i) -\t\tif (PyTuple_SetItem((PyObject *)capi_arglist,capi_i++,pyobj_from_#ctype#1(#varname_i#))) -\t\t\tgoto capi_fail;"""}, - {isintent_inout:"""\ -\tif (#name#_nofargs>capi_i) -\t\tif (PyTuple_SetItem((PyObject *)capi_arglist,capi_i++,pyarr_from_p_#ctype#1(#varname_i#_cb_capi))) -\t\t\tgoto capi_fail;"""}], - 'need':[{isintent_in:'pyobj_from_#ctype#1'}, - {isintent_inout:'pyarr_from_p_#ctype#1'}, - {iscomplex:'#ctype#'}], - '_check':l_and(isscalar,isintent_nothide), - '_optional':'' - },{# String - 'frompyobj':[{debugcapi:'\tCFUNCSMESS("cb:Getting #varname#->\\"");'}, - """\tif (capi_j>capi_i) -\t\tGETSTRFROMPYTUPLE(capi_return,capi_i++,#varname_i#,#varname_i#_cb_len);""", - {debugcapi:'\tfprintf(stderr,"#showvalueformat#\\":%d:.\\n",#varname_i#,#varname_i#_cb_len);'}, - ], - 'need':['#ctype#','GETSTRFROMPYTUPLE', - {debugcapi:'CFUNCSMESS'},'string.h'], - '_check':l_and(isstring,isintent_out) - },{ - 'pyobjfrom':[{debugcapi:'\tfprintf(stderr,"debug-capi:cb:#varname#=\\"#showvalueformat#\\":%d:\\n",#varname_i#,#varname_i#_cb_len);'}, - {isintent_in:"""\ -\tif (#name#_nofargs>capi_i) -\t\tif (PyTuple_SetItem((PyObject *)capi_arglist,capi_i++,pyobj_from_#ctype#1(#varname_i#))) -\t\t\tgoto capi_fail;"""}, - 
{isintent_inout:"""\ -\tif (#name#_nofargs>capi_i) { -\t\tint #varname_i#_cb_dims[] = {#varname_i#_cb_len}; -\t\tif (PyTuple_SetItem((PyObject *)capi_arglist,capi_i++,pyarr_from_p_#ctype#1(#varname_i#,#varname_i#_cb_dims))) -\t\t\tgoto capi_fail; -\t}"""}], - 'need':[{isintent_in:'pyobj_from_#ctype#1'}, - {isintent_inout:'pyarr_from_p_#ctype#1'}], - '_check':l_and(isstring,isintent_nothide), - '_optional':'' - }, -# Array ... - { - 'decl':'\tnpy_intp #varname_i#_Dims[#rank#] = {#rank*[-1]#};', - 'setdims':'\t#cbsetdims#;', - '_check':isarray, - '_depend':'' - }, - { - 'pyobjfrom':[{debugcapi:'\tfprintf(stderr,"debug-capi:cb:#varname#\\n");'}, - {isintent_c:"""\ -\tif (#name#_nofargs>capi_i) { -\t\tPyArrayObject *tmp_arr = (PyArrayObject *)PyArray_New(&PyArray_Type,#rank#,#varname_i#_Dims,#atype#,NULL,(char*)#varname_i#,0,NPY_CARRAY,NULL); /*XXX: Hmm, what will destroy this array??? */ -""", - l_not(isintent_c):"""\ -\tif (#name#_nofargs>capi_i) { -\t\tPyArrayObject *tmp_arr = (PyArrayObject *)PyArray_New(&PyArray_Type,#rank#,#varname_i#_Dims,#atype#,NULL,(char*)#varname_i#,0,NPY_FARRAY,NULL); /*XXX: Hmm, what will destroy this array??? 
*/ -""", - }, - """ -\t\tif (tmp_arr==NULL) -\t\t\tgoto capi_fail; -\t\tif (PyTuple_SetItem((PyObject *)capi_arglist,capi_i++,(PyObject *)tmp_arr)) -\t\t\tgoto capi_fail; -}"""], - '_check':l_and(isarray,isintent_nothide,l_or(isintent_in,isintent_inout)), - '_optional':'', - },{ - 'frompyobj':[{debugcapi:'\tCFUNCSMESS("cb:Getting #varname#->");'}, - """\tif (capi_j>capi_i) { -\t\tPyArrayObject *rv_cb_arr = NULL; -\t\tif ((capi_tmp = PyTuple_GetItem(capi_return,capi_i++))==NULL) goto capi_fail; -\t\trv_cb_arr = array_from_pyobj(#atype#,#varname_i#_Dims,#rank#,F2PY_INTENT_IN""", - {isintent_c:'|F2PY_INTENT_C'}, - """,capi_tmp); -\t\tif (rv_cb_arr == NULL) { -\t\t\tfprintf(stderr,\"rv_cb_arr is NULL\\n\"); -\t\t\tgoto capi_fail; -\t\t} -\t\tMEMCOPY(#varname_i#,rv_cb_arr->data,PyArray_NBYTES(rv_cb_arr)); -\t\tif (capi_tmp != (PyObject *)rv_cb_arr) { -\t\t\tPy_DECREF(rv_cb_arr); -\t\t} -\t}""", - {debugcapi:'\tfprintf(stderr,"<-.\\n");'}, - ], - 'need':['MEMCOPY',{iscomplexarray:'#ctype#'}], - '_check':l_and(isarray,isintent_out) - },{ - 'docreturn':'#varname#,', - '_check':isintent_out - } - ] - -################## Build call-back module ############# -cb_map={} -def buildcallbacks(m): - global cb_map - cb_map[m['name']]=[] - for bi in m['body']: - if bi['block']=='interface': - for b in bi['body']: - if b: - buildcallback(b,m['name']) - else: - errmess('warning: empty body for %s\n' % (m['name'])) - -def buildcallback(rout,um): - global cb_map - import capi_maps - - outmess('\tConstructing call-back function "cb_%s_in_%s"\n'%(rout['name'],um)) - args,depargs=getargs(rout) - capi_maps.depargs=depargs - var=rout['vars'] - vrd=capi_maps.cb_routsign2map(rout,um) - rd=dictappend({},vrd) - cb_map[um].append([rout['name'],rd['name']]) - for r in cb_rout_rules: - if ('_check' in r and r['_check'](rout)) or ('_check' not in r): - ar=applyrules(r,vrd,rout) - rd=dictappend(rd,ar) - savevrd={} - for i,a in enumerate(args): - vrd=capi_maps.cb_sign2map(a,var[a], index=i) - 
savevrd[a]=vrd - for r in cb_arg_rules: - if '_depend' in r: - continue - if '_optional' in r and isoptional(var[a]): - continue - if ('_check' in r and r['_check'](var[a])) or ('_check' not in r): - ar=applyrules(r,vrd,var[a]) - rd=dictappend(rd,ar) - if '_break' in r: - break - for a in args: - vrd=savevrd[a] - for r in cb_arg_rules: - if '_depend' in r: - continue - if ('_optional' not in r) or ('_optional' in r and isrequired(var[a])): - continue - if ('_check' in r and r['_check'](var[a])) or ('_check' not in r): - ar=applyrules(r,vrd,var[a]) - rd=dictappend(rd,ar) - if '_break' in r: - break - for a in depargs: - vrd=savevrd[a] - for r in cb_arg_rules: - if '_depend' not in r: - continue - if '_optional' in r: - continue - if ('_check' in r and r['_check'](var[a])) or ('_check' not in r): - ar=applyrules(r,vrd,var[a]) - rd=dictappend(rd,ar) - if '_break' in r: - break - if 'args' in rd and 'optargs' in rd: - if type(rd['optargs'])==type([]): - rd['optargs']=rd['optargs']+[""" -#ifndef F2PY_CB_RETURNCOMPLEX -, -#endif -"""] - rd['optargs_nm']=rd['optargs_nm']+[""" -#ifndef F2PY_CB_RETURNCOMPLEX -, -#endif -"""] - rd['optargs_td']=rd['optargs_td']+[""" -#ifndef F2PY_CB_RETURNCOMPLEX -, -#endif -"""] - if type(rd['docreturn'])==types.ListType: - rd['docreturn']=stripcomma(replace('#docreturn#',{'docreturn':rd['docreturn']})) - optargs=stripcomma(replace('#docsignopt#', - {'docsignopt':rd['docsignopt']} - )) - if optargs=='': - rd['docsignature']=stripcomma(replace('#docsign#',{'docsign':rd['docsign']})) - else: - rd['docsignature']=replace('#docsign#[#docsignopt#]', - {'docsign':rd['docsign'], - 'docsignopt':optargs, - }) - rd['latexdocsignature']=rd['docsignature'].replace('_','\\_') - rd['latexdocsignature']=rd['latexdocsignature'].replace(',',', ') - rd['docstrsigns']=[] - rd['latexdocstrsigns']=[] - for k in ['docstrreq','docstropt','docstrout','docstrcbs']: - if k in rd and type(rd[k])==types.ListType: - rd['docstrsigns']=rd['docstrsigns']+rd[k] - 
k='latex'+k - if k in rd and type(rd[k])==types.ListType: - rd['latexdocstrsigns']=rd['latexdocstrsigns']+rd[k][0:1]+\ - ['\\begin{description}']+rd[k][1:]+\ - ['\\end{description}'] - if 'args' not in rd: - rd['args']='' - rd['args_td']='' - rd['args_nm']='' - if not (rd.get('args') or rd.get('optargs') or rd.get('strarglens')): - rd['noargs'] = 'void' - - ar=applyrules(cb_routine_rules,rd) - cfuncs.callbacks[rd['name']]=ar['body'] - if type(ar['need'])==str: - ar['need']=[ar['need']] - - if 'need' in rd: - for t in cfuncs.typedefs.keys(): - if t in rd['need']: - ar['need'].append(t) - - cfuncs.typedefs_generated[rd['name']+'_typedef'] = ar['cbtypedefs'] - ar['need'].append(rd['name']+'_typedef') - cfuncs.needs[rd['name']]=ar['need'] - - capi_maps.lcb2_map[rd['name']]={'maxnofargs':ar['maxnofargs'], - 'nofoptargs':ar['nofoptargs'], - 'docstr':ar['docstr'], - 'latexdocstr':ar['latexdocstr'], - 'argname':rd['argname'] - } - outmess('\t %s\n'%(ar['docstrshort'])) - #print ar['body'] - return -################## Build call-back function ############# diff --git a/pythonPackages/numpy/numpy/f2py/cfuncs.py b/pythonPackages/numpy/numpy/f2py/cfuncs.py deleted file mode 100755 index 722760ad04..0000000000 --- a/pythonPackages/numpy/numpy/f2py/cfuncs.py +++ /dev/null @@ -1,1192 +0,0 @@ -#!/usr/bin/env python -""" - -C declarations, CPP macros, and C functions for f2py2e. -Only required declarations/macros/functions will be used. - -Copyright 1999,2000 Pearu Peterson all rights reserved, -Pearu Peterson -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. 
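The `buildcallback` machinery above works by walking lists of rule dictionaries (`cb_rout_rules`, `cb_arg_rules`): each rule's `_check` predicate decides whether it applies to the current routine or variable, string values containing `#key#` placeholders are substituted from the sign-map dicts, and the fragments are merged with `dictappend`. The following is a minimal sketch of that rule-engine pattern, deliberately simplified relative to f2py's real `applyrules`/`dictappend` (the helper name `applyrules_sketch` and the toy rules are illustrative, not f2py API):

```python
# Sketch of the f2py rule-engine pattern (NOT the real applyrules):
# each rule is a dict; '_check' selects whether it applies, '#name#'
# placeholders in string values are substituted from the variable
# dict, and matching fragments accumulate per output key.

def applyrules_sketch(rules, vrd):
    out = {}
    for rule in rules:
        check = rule.get('_check')
        if check is not None and not check(vrd):
            continue  # rule does not apply to this variable
        for key, template in rule.items():
            if key.startswith('_'):
                continue  # '_check', '_depend', ... are meta-keys
            text = template
            for name, value in vrd.items():
                # substitute '#ctype#', '#varname#', ... placeholders
                text = text.replace('#%s#' % name, str(value))
            out.setdefault(key, []).append(text)
    return out

# Two toy rules mimicking the scalar-vs-array split seen above.
rules = [
    {'_check': lambda v: v['isscalar'], 'decl': '\t#ctype# #varname#;'},
    {'_check': lambda v: not v['isscalar'], 'decl': '\t#ctype# *#varname#;'},
]
print(applyrules_sketch(rules, {'isscalar': True, 'ctype': 'double', 'varname': 'x'}))
```

The real engine additionally distinguishes `_depend` and `_optional` rules (applied in the separate passes visible in `buildcallback`) and honours `_break` to stop after the first matching rule.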
-$Date: 2005/05/06 11:42:34 $ -Pearu Peterson -""" - -__version__ = "$Revision: 1.75 $"[10:-1] - -import __version__ -f2py_version = __version__.version - -import types -import sys -import copy -errmess=sys.stderr.write - -##################### Definitions ################## - -outneeds={'includes0':[],'includes':[],'typedefs':[],'typedefs_generated':[], - 'userincludes':[], - 'cppmacros':[],'cfuncs':[],'callbacks':[],'f90modhooks':[], - 'commonhooks':[]} -needs={} -includes0={'includes0':'/*need_includes0*/'} -includes={'includes':'/*need_includes*/'} -userincludes={'userincludes':'/*need_userincludes*/'} -typedefs={'typedefs':'/*need_typedefs*/'} -typedefs_generated={'typedefs_generated':'/*need_typedefs_generated*/'} -cppmacros={'cppmacros':'/*need_cppmacros*/'} -cfuncs={'cfuncs':'/*need_cfuncs*/'} -callbacks={'callbacks':'/*need_callbacks*/'} -f90modhooks={'f90modhooks':'/*need_f90modhooks*/', - 'initf90modhooksstatic':'/*initf90modhooksstatic*/', - 'initf90modhooksdynamic':'/*initf90modhooksdynamic*/', - } -commonhooks={'commonhooks':'/*need_commonhooks*/', - 'initcommonhooks':'/*need_initcommonhooks*/', - } - -############ Includes ################### - -includes0['math.h']='#include ' -includes0['string.h']='#include ' -includes0['setjmp.h']='#include ' - -includes['Python.h']='#include "Python.h"' -needs['arrayobject.h']=['Python.h'] -includes['arrayobject.h']='''#define PY_ARRAY_UNIQUE_SYMBOL PyArray_API -#include "arrayobject.h"''' - -includes['arrayobject.h']='#include "fortranobject.h"' - -############# Type definitions ############### - -typedefs['unsigned_char']='typedef unsigned char unsigned_char;' -typedefs['unsigned_short']='typedef unsigned short unsigned_short;' -typedefs['unsigned_long']='typedef unsigned long unsigned_long;' -typedefs['signed_char']='typedef signed char signed_char;' -typedefs['long_long']="""\ -#ifdef _WIN32 -typedef __int64 long_long; -#else -typedef long long long_long; -typedef unsigned long long unsigned_long_long; 
-#endif -""" -typedefs['insinged_long_long']="""\ -#ifdef _WIN32 -typedef __uint64 long_long; -#else -typedef unsigned long long unsigned_long_long; -#endif -""" -typedefs['long_double']="""\ -#ifndef _LONG_DOUBLE -typedef long double long_double; -#endif -""" -typedefs['complex_long_double']='typedef struct {long double r,i;} complex_long_double;' -typedefs['complex_float']='typedef struct {float r,i;} complex_float;' -typedefs['complex_double']='typedef struct {double r,i;} complex_double;' -typedefs['string']="""typedef char * string;""" - - -############### CPP macros #################### -cppmacros['CFUNCSMESS']="""\ -#ifdef DEBUGCFUNCS -#define CFUNCSMESS(mess) fprintf(stderr,\"debug-capi:\"mess); -#define CFUNCSMESSPY(mess,obj) CFUNCSMESS(mess) \\ -\tPyObject_Print((PyObject *)obj,stderr,Py_PRINT_RAW);\\ -\tfprintf(stderr,\"\\n\"); -#else -#define CFUNCSMESS(mess) -#define CFUNCSMESSPY(mess,obj) -#endif -""" -cppmacros['F_FUNC']="""\ -#if defined(PREPEND_FORTRAN) -#if defined(NO_APPEND_FORTRAN) -#if defined(UPPERCASE_FORTRAN) -#define F_FUNC(f,F) _##F -#else -#define F_FUNC(f,F) _##f -#endif -#else -#if defined(UPPERCASE_FORTRAN) -#define F_FUNC(f,F) _##F##_ -#else -#define F_FUNC(f,F) _##f##_ -#endif -#endif -#else -#if defined(NO_APPEND_FORTRAN) -#if defined(UPPERCASE_FORTRAN) -#define F_FUNC(f,F) F -#else -#define F_FUNC(f,F) f -#endif -#else -#if defined(UPPERCASE_FORTRAN) -#define F_FUNC(f,F) F##_ -#else -#define F_FUNC(f,F) f##_ -#endif -#endif -#endif -#if defined(UNDERSCORE_G77) -#define F_FUNC_US(f,F) F_FUNC(f##_,F##_) -#else -#define F_FUNC_US(f,F) F_FUNC(f,F) -#endif -""" -cppmacros['F_WRAPPEDFUNC']="""\ -#if defined(PREPEND_FORTRAN) -#if defined(NO_APPEND_FORTRAN) -#if defined(UPPERCASE_FORTRAN) -#define F_WRAPPEDFUNC(f,F) _F2PYWRAP##F -#else -#define F_WRAPPEDFUNC(f,F) _f2pywrap##f -#endif -#else -#if defined(UPPERCASE_FORTRAN) -#define F_WRAPPEDFUNC(f,F) _F2PYWRAP##F##_ -#else -#define F_WRAPPEDFUNC(f,F) _f2pywrap##f##_ -#endif -#endif -#else 
-#if defined(NO_APPEND_FORTRAN) -#if defined(UPPERCASE_FORTRAN) -#define F_WRAPPEDFUNC(f,F) F2PYWRAP##F -#else -#define F_WRAPPEDFUNC(f,F) f2pywrap##f -#endif -#else -#if defined(UPPERCASE_FORTRAN) -#define F_WRAPPEDFUNC(f,F) F2PYWRAP##F##_ -#else -#define F_WRAPPEDFUNC(f,F) f2pywrap##f##_ -#endif -#endif -#endif -#if defined(UNDERSCORE_G77) -#define F_WRAPPEDFUNC_US(f,F) F_WRAPPEDFUNC(f##_,F##_) -#else -#define F_WRAPPEDFUNC_US(f,F) F_WRAPPEDFUNC(f,F) -#endif -""" -cppmacros['F_MODFUNC']="""\ -#if defined(F90MOD2CCONV1) /*E.g. Compaq Fortran */ -#if defined(NO_APPEND_FORTRAN) -#define F_MODFUNCNAME(m,f) $ ## m ## $ ## f -#else -#define F_MODFUNCNAME(m,f) $ ## m ## $ ## f ## _ -#endif -#endif - -#if defined(F90MOD2CCONV2) /*E.g. IBM XL Fortran, not tested though */ -#if defined(NO_APPEND_FORTRAN) -#define F_MODFUNCNAME(m,f) __ ## m ## _MOD_ ## f -#else -#define F_MODFUNCNAME(m,f) __ ## m ## _MOD_ ## f ## _ -#endif -#endif - -#if defined(F90MOD2CCONV3) /*E.g. MIPSPro Compilers */ -#if defined(NO_APPEND_FORTRAN) -#define F_MODFUNCNAME(m,f) f ## .in. ## m -#else -#define F_MODFUNCNAME(m,f) f ## .in. 
## m ## _ -#endif -#endif -/* -#if defined(UPPERCASE_FORTRAN) -#define F_MODFUNC(m,M,f,F) F_MODFUNCNAME(M,F) -#else -#define F_MODFUNC(m,M,f,F) F_MODFUNCNAME(m,f) -#endif -*/ - -#define F_MODFUNC(m,f) (*(f2pymodstruct##m##.##f)) -""" -cppmacros['SWAPUNSAFE']="""\ -#define SWAP(a,b) (size_t)(a) = ((size_t)(a) ^ (size_t)(b));\\ - (size_t)(b) = ((size_t)(a) ^ (size_t)(b));\\ - (size_t)(a) = ((size_t)(a) ^ (size_t)(b)) -""" -cppmacros['SWAP']="""\ -#define SWAP(a,b,t) {\\ -\tt *c;\\ -\tc = a;\\ -\ta = b;\\ -\tb = c;} -""" -#cppmacros['ISCONTIGUOUS']='#define ISCONTIGUOUS(m) ((m)->flags & NPY_CONTIGUOUS)' -cppmacros['PRINTPYOBJERR']="""\ -#define PRINTPYOBJERR(obj)\\ -\tfprintf(stderr,\"#modulename#.error is related to \");\\ -\tPyObject_Print((PyObject *)obj,stderr,Py_PRINT_RAW);\\ -\tfprintf(stderr,\"\\n\"); -""" -cppmacros['MINMAX']="""\ -#ifndef max -#define max(a,b) ((a > b) ? (a) : (b)) -#endif -#ifndef min -#define min(a,b) ((a < b) ? (a) : (b)) -#endif -#ifndef MAX -#define MAX(a,b) ((a > b) ? (a) : (b)) -#endif -#ifndef MIN -#define MIN(a,b) ((a < b) ? 
(a) : (b)) -#endif -""" -cppmacros['len..']="""\ -#define rank(var) var ## _Rank -#define shape(var,dim) var ## _Dims[dim] -#define old_rank(var) (((PyArrayObject *)(capi_ ## var ## _tmp))->nd) -#define old_shape(var,dim) (((PyArrayObject *)(capi_ ## var ## _tmp))->dimensions[dim]) -#define fshape(var,dim) shape(var,rank(var)-dim-1) -#define len(var) shape(var,0) -#define flen(var) fshape(var,0) -#define size(var) PyArray_SIZE((PyArrayObject *)(capi_ ## var ## _tmp)) -/* #define index(i) capi_i ## i */ -#define slen(var) capi_ ## var ## _len -""" - -cppmacros['pyobj_from_char1']='#define pyobj_from_char1(v) (PyInt_FromLong(v))' -cppmacros['pyobj_from_short1']='#define pyobj_from_short1(v) (PyInt_FromLong(v))' -needs['pyobj_from_int1']=['signed_char'] -cppmacros['pyobj_from_int1']='#define pyobj_from_int1(v) (PyInt_FromLong(v))' -cppmacros['pyobj_from_long1']='#define pyobj_from_long1(v) (PyLong_FromLong(v))' -needs['pyobj_from_long_long1']=['long_long'] -cppmacros['pyobj_from_long_long1']="""\ -#ifdef HAVE_LONG_LONG -#define pyobj_from_long_long1(v) (PyLong_FromLongLong(v)) -#else -#warning HAVE_LONG_LONG is not available. Redefining pyobj_from_long_long. 
-#define pyobj_from_long_long1(v) (PyLong_FromLong(v)) -#endif -""" -needs['pyobj_from_long_double1']=['long_double'] -cppmacros['pyobj_from_long_double1']='#define pyobj_from_long_double1(v) (PyFloat_FromDouble(v))' -cppmacros['pyobj_from_double1']='#define pyobj_from_double1(v) (PyFloat_FromDouble(v))' -cppmacros['pyobj_from_float1']='#define pyobj_from_float1(v) (PyFloat_FromDouble(v))' -needs['pyobj_from_complex_long_double1']=['complex_long_double'] -cppmacros['pyobj_from_complex_long_double1']='#define pyobj_from_complex_long_double1(v) (PyComplex_FromDoubles(v.r,v.i))' -needs['pyobj_from_complex_double1']=['complex_double'] -cppmacros['pyobj_from_complex_double1']='#define pyobj_from_complex_double1(v) (PyComplex_FromDoubles(v.r,v.i))' -needs['pyobj_from_complex_float1']=['complex_float'] -cppmacros['pyobj_from_complex_float1']='#define pyobj_from_complex_float1(v) (PyComplex_FromDoubles(v.r,v.i))' -needs['pyobj_from_string1']=['string'] -cppmacros['pyobj_from_string1']='#define pyobj_from_string1(v) (PyString_FromString((char *)v))' -needs['TRYPYARRAYTEMPLATE']=['PRINTPYOBJERR'] -cppmacros['TRYPYARRAYTEMPLATE']="""\ -/* New SciPy */ -#define TRYPYARRAYTEMPLATECHAR case PyArray_STRING: *(char *)(arr->data)=*v; break; -#define TRYPYARRAYTEMPLATELONG case PyArray_LONG: *(long *)(arr->data)=*v; break; -#define TRYPYARRAYTEMPLATEOBJECT case PyArray_OBJECT: (arr->descr->f->setitem)(pyobj_from_ ## ctype ## 1(*v),arr->data); break; - -#define TRYPYARRAYTEMPLATE(ctype,typecode) \\ - PyArrayObject *arr = NULL;\\ - if (!obj) return -2;\\ - if (!PyArray_Check(obj)) return -1;\\ - if (!(arr=(PyArrayObject *)obj)) {fprintf(stderr,\"TRYPYARRAYTEMPLATE:\");PRINTPYOBJERR(obj);return 0;}\\ - if (arr->descr->type==typecode) {*(ctype *)(arr->data)=*v; return 1;}\\ - switch (arr->descr->type_num) {\\ - case PyArray_DOUBLE: *(double *)(arr->data)=*v; break;\\ - case PyArray_INT: *(int *)(arr->data)=*v; break;\\ - case PyArray_LONG: *(long *)(arr->data)=*v; break;\\ - case 
PyArray_FLOAT: *(float *)(arr->data)=*v; break;\\ - case PyArray_CDOUBLE: *(double *)(arr->data)=*v; break;\\ - case PyArray_CFLOAT: *(float *)(arr->data)=*v; break;\\ - case PyArray_BOOL: *(npy_bool *)(arr->data)=(*v!=0); break;\\ - case PyArray_UBYTE: *(unsigned char *)(arr->data)=*v; break;\\ - case PyArray_BYTE: *(signed char *)(arr->data)=*v; break;\\ - case PyArray_SHORT: *(short *)(arr->data)=*v; break;\\ - case PyArray_USHORT: *(npy_ushort *)(arr->data)=*v; break;\\ - case PyArray_UINT: *(npy_uint *)(arr->data)=*v; break;\\ - case PyArray_ULONG: *(npy_ulong *)(arr->data)=*v; break;\\ - case PyArray_LONGLONG: *(npy_longlong *)(arr->data)=*v; break;\\ - case PyArray_ULONGLONG: *(npy_ulonglong *)(arr->data)=*v; break;\\ - case PyArray_LONGDOUBLE: *(npy_longdouble *)(arr->data)=*v; break;\\ - case PyArray_CLONGDOUBLE: *(npy_longdouble *)(arr->data)=*v; break;\\ - case PyArray_OBJECT: (arr->descr->f->setitem)(pyobj_from_ ## ctype ## 1(*v),arr->data, arr); break;\\ - default: return -2;\\ - };\\ - return 1 -""" - -needs['TRYCOMPLEXPYARRAYTEMPLATE']=['PRINTPYOBJERR'] -cppmacros['TRYCOMPLEXPYARRAYTEMPLATE']="""\ -#define TRYCOMPLEXPYARRAYTEMPLATEOBJECT case PyArray_OBJECT: (arr->descr->f->setitem)(pyobj_from_complex_ ## ctype ## 1((*v)),arr->data, arr); break; -#define TRYCOMPLEXPYARRAYTEMPLATE(ctype,typecode)\\ - PyArrayObject *arr = NULL;\\ - if (!obj) return -2;\\ - if (!PyArray_Check(obj)) return -1;\\ - if (!(arr=(PyArrayObject *)obj)) {fprintf(stderr,\"TRYCOMPLEXPYARRAYTEMPLATE:\");PRINTPYOBJERR(obj);return 0;}\\ - if (arr->descr->type==typecode) {\\ - *(ctype *)(arr->data)=(*v).r;\\ - *(ctype *)(arr->data+sizeof(ctype))=(*v).i;\\ - return 1;\\ - }\\ - switch (arr->descr->type_num) {\\ - case PyArray_CDOUBLE: *(double *)(arr->data)=(*v).r;*(double *)(arr->data+sizeof(double))=(*v).i;break;\\ - case PyArray_CFLOAT: *(float *)(arr->data)=(*v).r;*(float *)(arr->data+sizeof(float))=(*v).i;break;\\ - case PyArray_DOUBLE: *(double *)(arr->data)=(*v).r; break;\\ - 
case PyArray_LONG: *(long *)(arr->data)=(*v).r; break;\\ - case PyArray_FLOAT: *(float *)(arr->data)=(*v).r; break;\\ - case PyArray_INT: *(int *)(arr->data)=(*v).r; break;\\ - case PyArray_SHORT: *(short *)(arr->data)=(*v).r; break;\\ - case PyArray_UBYTE: *(unsigned char *)(arr->data)=(*v).r; break;\\ - case PyArray_BYTE: *(signed char *)(arr->data)=(*v).r; break;\\ - case PyArray_BOOL: *(npy_bool *)(arr->data)=((*v).r!=0 && (*v).i!=0); break;\\ - case PyArray_USHORT: *(npy_ushort *)(arr->data)=(*v).r; break;\\ - case PyArray_UINT: *(npy_uint *)(arr->data)=(*v).r; break;\\ - case PyArray_ULONG: *(npy_ulong *)(arr->data)=(*v).r; break;\\ - case PyArray_LONGLONG: *(npy_longlong *)(arr->data)=(*v).r; break;\\ - case PyArray_ULONGLONG: *(npy_ulonglong *)(arr->data)=(*v).r; break;\\ - case PyArray_LONGDOUBLE: *(npy_longdouble *)(arr->data)=(*v).r; break;\\ - case PyArray_CLONGDOUBLE: *(npy_longdouble *)(arr->data)=(*v).r;*(npy_longdouble *)(arr->data+sizeof(npy_longdouble))=(*v).i;break;\\ - case PyArray_OBJECT: (arr->descr->f->setitem)(pyobj_from_complex_ ## ctype ## 1((*v)),arr->data, arr); break;\\ - default: return -2;\\ - };\\ - return -1; -""" -## cppmacros['NUMFROMARROBJ']="""\ -## #define NUMFROMARROBJ(typenum,ctype) \\ -## \tif (PyArray_Check(obj)) arr = (PyArrayObject *)obj;\\ -## \telse arr = (PyArrayObject *)PyArray_ContiguousFromObject(obj,typenum,0,0);\\ -## \tif (arr) {\\ -## \t\tif (arr->descr->type_num==PyArray_OBJECT) {\\ -## \t\t\tif (!ctype ## _from_pyobj(v,(arr->descr->getitem)(arr->data),\"\"))\\ -## \t\t\tgoto capi_fail;\\ -## \t\t} else {\\ -## \t\t\t(arr->descr->cast[typenum])(arr->data,1,(char*)v,1,1);\\ -## \t\t}\\ -## \t\tif ((PyObject *)arr != obj) { Py_DECREF(arr); }\\ -## \t\treturn 1;\\ -## \t} -## """ -## #XXX: Note that CNUMFROMARROBJ is identical with NUMFROMARROBJ -## cppmacros['CNUMFROMARROBJ']="""\ -## #define CNUMFROMARROBJ(typenum,ctype) \\ -## \tif (PyArray_Check(obj)) arr = (PyArrayObject *)obj;\\ -## \telse arr = 
(PyArrayObject *)PyArray_ContiguousFromObject(obj,typenum,0,0);\\ -## \tif (arr) {\\ -## \t\tif (arr->descr->type_num==PyArray_OBJECT) {\\ -## \t\t\tif (!ctype ## _from_pyobj(v,(arr->descr->getitem)(arr->data),\"\"))\\ -## \t\t\tgoto capi_fail;\\ -## \t\t} else {\\ -## \t\t\t(arr->descr->cast[typenum])((void *)(arr->data),1,(void *)(v),1,1);\\ -## \t\t}\\ -## \t\tif ((PyObject *)arr != obj) { Py_DECREF(arr); }\\ -## \t\treturn 1;\\ -## \t} -## """ - - -needs['GETSTRFROMPYTUPLE']=['STRINGCOPYN','PRINTPYOBJERR'] -cppmacros['GETSTRFROMPYTUPLE']="""\ -#define GETSTRFROMPYTUPLE(tuple,index,str,len) {\\ -\t\tPyObject *rv_cb_str = PyTuple_GetItem((tuple),(index));\\ -\t\tif (rv_cb_str == NULL)\\ -\t\t\tgoto capi_fail;\\ -\t\tif (PyString_Check(rv_cb_str)) {\\ -\t\t\tstr[len-1]='\\0';\\ -\t\t\tSTRINGCOPYN((str),PyString_AS_STRING((PyStringObject*)rv_cb_str),(len));\\ -\t\t} else {\\ -\t\t\tPRINTPYOBJERR(rv_cb_str);\\ -\t\t\tPyErr_SetString(#modulename#_error,\"string object expected\");\\ -\t\t\tgoto capi_fail;\\ -\t\t}\\ -\t} -""" -cppmacros['GETSCALARFROMPYTUPLE']="""\ -#define GETSCALARFROMPYTUPLE(tuple,index,var,ctype,mess) {\\ -\t\tif ((capi_tmp = PyTuple_GetItem((tuple),(index)))==NULL) goto capi_fail;\\ -\t\tif (!(ctype ## _from_pyobj((var),capi_tmp,mess)))\\ -\t\t\tgoto capi_fail;\\ -\t} -""" - -cppmacros['FAILNULL']="""\\ -#define FAILNULL(p) do { \\ - if ((p) == NULL) { \\ - PyErr_SetString(PyExc_MemoryError, "NULL pointer found"); \\ - goto capi_fail; \\ - } \\ -} while (0) -""" -needs['MEMCOPY']=['string.h', 'FAILNULL'] -cppmacros['MEMCOPY']="""\ -#define MEMCOPY(to,from,n)\\ - do { FAILNULL(to); FAILNULL(from); (void)memcpy(to,from,n); } while (0) -""" -cppmacros['STRINGMALLOC']="""\ -#define STRINGMALLOC(str,len)\\ -\tif ((str = (string)malloc(sizeof(char)*(len+1))) == NULL) {\\ -\t\tPyErr_SetString(PyExc_MemoryError, \"out of memory\");\\ -\t\tgoto capi_fail;\\ -\t} else {\\ -\t\t(str)[len] = '\\0';\\ -\t} -""" -cppmacros['STRINGFREE']="""\ -#define 
STRINGFREE(str) do {if (!(str == NULL)) free(str);} while (0) -""" -needs['STRINGCOPYN']=['string.h', 'FAILNULL'] -cppmacros['STRINGCOPYN']="""\ -#define STRINGCOPYN(to,from,buf_size) \\ - do { \\ - int _m = (buf_size); \\ - char *_to = (to); \\ - char *_from = (from); \\ - FAILNULL(_to); FAILNULL(_from); \\ - (void)strncpy(_to, _from, sizeof(char)*_m); \\ - _to[_m-1] = '\\0'; \\ - /* Padding with spaces instead of nulls */ \\ - for (_m -= 2; _m >= 0 && _to[_m] == '\\0'; _m--) { \\ - _to[_m] = ' '; \\ - } \\ - } while (0) -""" -needs['STRINGCOPY']=['string.h', 'FAILNULL'] -cppmacros['STRINGCOPY']="""\ -#define STRINGCOPY(to,from)\\ - do { FAILNULL(to); FAILNULL(from); (void)strcpy(to,from); } while (0) -""" -cppmacros['CHECKGENERIC']="""\ -#define CHECKGENERIC(check,tcheck,name) \\ -\tif (!(check)) {\\ -\t\tPyErr_SetString(#modulename#_error,\"(\"tcheck\") failed for \"name);\\ -\t\t/*goto capi_fail;*/\\ -\t} else """ -cppmacros['CHECKARRAY']="""\ -#define CHECKARRAY(check,tcheck,name) \\ -\tif (!(check)) {\\ -\t\tPyErr_SetString(#modulename#_error,\"(\"tcheck\") failed for \"name);\\ -\t\t/*goto capi_fail;*/\\ -\t} else """ -cppmacros['CHECKSTRING']="""\ -#define CHECKSTRING(check,tcheck,name,show,var)\\ -\tif (!(check)) {\\ -\t\tchar errstring[256];\\ -\t\tsprintf(errstring, \"%s: \"show, \"(\"tcheck\") failed for \"name, slen(var), var);\\ -\t\tPyErr_SetString(#modulename#_error, errstring);\\ -\t\t/*goto capi_fail;*/\\ -\t} else """ -cppmacros['CHECKSCALAR']="""\ -#define CHECKSCALAR(check,tcheck,name,show,var)\\ -\tif (!(check)) {\\ -\t\tchar errstring[256];\\ -\t\tsprintf(errstring, \"%s: \"show, \"(\"tcheck\") failed for \"name, var);\\ -\t\tPyErr_SetString(#modulename#_error,errstring);\\ -\t\t/*goto capi_fail;*/\\ -\t} else """ -## cppmacros['CHECKDIMS']="""\ -## #define CHECKDIMS(dims,rank) \\ -## \tfor (int i=0;i<(rank);i++)\\ -## \t\tif (dims[i]<0) {\\ -## \t\t\tfprintf(stderr,\"Unspecified array argument requires a complete dimension 
specification.\\n\");\\ -## \t\t\tgoto capi_fail;\\ -## \t\t} -## """ -cppmacros['ARRSIZE']='#define ARRSIZE(dims,rank) (_PyArray_multiply_list(dims,rank))' -cppmacros['OLDPYNUM']="""\ -#ifdef OLDPYNUM -#error You need to intall Numeric Python version 13 or higher. Get it from http:/sourceforge.net/project/?group_id=1369 -#endif -""" -################# C functions ############### - -cfuncs['calcarrindex']="""\ -static int calcarrindex(int *i,PyArrayObject *arr) { -\tint k,ii = i[0]; -\tfor (k=1; k < arr->nd; k++) -\t\tii += (ii*(arr->dimensions[k] - 1)+i[k]); /* assuming contiguous arr */ -\treturn ii; -}""" -cfuncs['calcarrindextr']="""\ -static int calcarrindextr(int *i,PyArrayObject *arr) { -\tint k,ii = i[arr->nd-1]; -\tfor (k=1; k < arr->nd; k++) -\t\tii += (ii*(arr->dimensions[arr->nd-k-1] - 1)+i[arr->nd-k-1]); /* assuming contiguous arr */ -\treturn ii; -}""" -cfuncs['forcomb']="""\ -static struct { int nd;npy_intp *d;int *i,*i_tr,tr; } forcombcache; -static int initforcomb(npy_intp *dims,int nd,int tr) { - int k; - if (dims==NULL) return 0; - if (nd<0) return 0; - forcombcache.nd = nd; - forcombcache.d = dims; - forcombcache.tr = tr; - if ((forcombcache.i = (int *)malloc(sizeof(int)*nd))==NULL) return 0; - if ((forcombcache.i_tr = (int *)malloc(sizeof(int)*nd))==NULL) return 0; - for (k=1;kdata,str,PyArray_NBYTES(arr)); } -\treturn 1; -capi_fail: -\tPRINTPYOBJERR(obj); -\tPyErr_SetString(#modulename#_error,\"try_pyarr_from_string failed\"); -\treturn 0; -} -""" -needs['string_from_pyobj']=['string','STRINGMALLOC','STRINGCOPYN'] -cfuncs['string_from_pyobj']="""\ -static int string_from_pyobj(string *str,int *len,const string inistr,PyObject *obj,const char *errmess) { -\tPyArrayObject *arr = NULL; -\tPyObject *tmp = NULL; -#ifdef DEBUGCFUNCS -fprintf(stderr,\"string_from_pyobj(str='%s',len=%d,inistr='%s',obj=%p)\\n\",(char*)str,*len,(char *)inistr,obj); -#endif -\tif (obj == Py_None) { -\t\tif (*len == -1) -\t\t\t*len = strlen(inistr); /* Will this cause 
problems? */ -\t\tSTRINGMALLOC(*str,*len); -\t\tSTRINGCOPYN(*str,inistr,*len+1); -\t\treturn 1; -\t} -\tif (PyArray_Check(obj)) { -\t\tif ((arr = (PyArrayObject *)obj) == NULL) -\t\t\tgoto capi_fail; -\t\tif (!ISCONTIGUOUS(arr)) { -\t\t\tPyErr_SetString(PyExc_ValueError,\"array object is non-contiguous.\"); -\t\t\tgoto capi_fail; -\t\t} -\t\tif (*len == -1) -\t\t\t*len = (arr->descr->elsize)*PyArray_SIZE(arr); -\t\tSTRINGMALLOC(*str,*len); -\t\tSTRINGCOPYN(*str,arr->data,*len+1); -\t\treturn 1; -\t} -\tif (PyString_Check(obj)) { -\t\ttmp = obj; -\t\tPy_INCREF(tmp); -\t} -#if PY_VERSION_HEX >= 0x03000000 -\telse if (PyUnicode_Check(obj)) { -\t\ttmp = PyUnicode_AsASCIIString(obj); -\t} -\telse { -\t\tPyObject *tmp2; -\t\ttmp2 = PyObject_Str(obj); -\t\tif (tmp2) { -\t\t\ttmp = PyUnicode_AsASCIIString(tmp2); -\t\t\tPy_DECREF(tmp2); -\t\t} -\t\telse { -\t\t\ttmp = NULL; -\t\t} -\t} -#else -\telse { -\t\ttmp = PyObject_Str(obj); -\t} -#endif -\tif (tmp == NULL) goto capi_fail; -\tif (*len == -1) -\t\t*len = PyString_GET_SIZE(tmp); -\tSTRINGMALLOC(*str,*len); -\tSTRINGCOPYN(*str,PyString_AS_STRING(tmp),*len+1); -\tPy_DECREF(tmp); -\treturn 1; -capi_fail: -\tPy_XDECREF(tmp); -\t{ -\t\tPyObject* err = PyErr_Occurred(); -\t\tif (err==NULL) err = #modulename#_error; -\t\tPyErr_SetString(err,errmess); -\t} -\treturn 0; -} -""" -needs['char_from_pyobj']=['int_from_pyobj'] -cfuncs['char_from_pyobj']="""\ -static int char_from_pyobj(char* v,PyObject *obj,const char *errmess) { -\tint i=0; -\tif (int_from_pyobj(&i,obj,errmess)) { -\t\t*v = (char)i; -\t\treturn 1; -\t} -\treturn 0; -} -""" -needs['signed_char_from_pyobj']=['int_from_pyobj','signed_char'] -cfuncs['signed_char_from_pyobj']="""\ -static int signed_char_from_pyobj(signed_char* v,PyObject *obj,const char *errmess) { -\tint i=0; -\tif (int_from_pyobj(&i,obj,errmess)) { -\t\t*v = (signed_char)i; -\t\treturn 1; -\t} -\treturn 0; -} -""" -needs['short_from_pyobj']=['int_from_pyobj'] -cfuncs['short_from_pyobj']="""\ -static 
int short_from_pyobj(short* v,PyObject *obj,const char *errmess) { -\tint i=0; -\tif (int_from_pyobj(&i,obj,errmess)) { -\t\t*v = (short)i; -\t\treturn 1; -\t} -\treturn 0; -} -""" -cfuncs['int_from_pyobj']="""\ -static int int_from_pyobj(int* v,PyObject *obj,const char *errmess) { -\tPyObject* tmp = NULL; -\tif (PyInt_Check(obj)) { -\t\t*v = (int)PyInt_AS_LONG(obj); -\t\treturn 1; -\t} -\ttmp = PyNumber_Int(obj); -\tif (tmp) { -\t\t*v = PyInt_AS_LONG(tmp); -\t\tPy_DECREF(tmp); -\t\treturn 1; -\t} -\tif (PyComplex_Check(obj)) -\t\ttmp = PyObject_GetAttrString(obj,\"real\"); -\telse if (PyString_Check(obj) || PyUnicode_Check(obj)) -\t\t/*pass*/; -\telse if (PySequence_Check(obj)) -\t\ttmp = PySequence_GetItem(obj,0); -\tif (tmp) { -\t\tPyErr_Clear(); -\t\tif (int_from_pyobj(v,tmp,errmess)) {Py_DECREF(tmp); return 1;} -\t\tPy_DECREF(tmp); -\t} -\t{ -\t\tPyObject* err = PyErr_Occurred(); -\t\tif (err==NULL) err = #modulename#_error; -\t\tPyErr_SetString(err,errmess); -\t} -\treturn 0; -} -""" -cfuncs['long_from_pyobj']="""\ -static int long_from_pyobj(long* v,PyObject *obj,const char *errmess) { -\tPyObject* tmp = NULL; -\tif (PyInt_Check(obj)) { -\t\t*v = PyInt_AS_LONG(obj); -\t\treturn 1; -\t} -\ttmp = PyNumber_Int(obj); -\tif (tmp) { -\t\t*v = PyInt_AS_LONG(tmp); -\t\tPy_DECREF(tmp); -\t\treturn 1; -\t} -\tif (PyComplex_Check(obj)) -\t\ttmp = PyObject_GetAttrString(obj,\"real\"); -\telse if (PyString_Check(obj) || PyUnicode_Check(obj)) -\t\t/*pass*/; -\telse if (PySequence_Check(obj)) -\t\ttmp = PySequence_GetItem(obj,0); -\tif (tmp) { -\t\tPyErr_Clear(); -\t\tif (long_from_pyobj(v,tmp,errmess)) {Py_DECREF(tmp); return 1;} -\t\tPy_DECREF(tmp); -\t} -\t{ -\t\tPyObject* err = PyErr_Occurred(); -\t\tif (err==NULL) err = #modulename#_error; -\t\tPyErr_SetString(err,errmess); -\t} -\treturn 0; -} -""" -needs['long_long_from_pyobj']=['long_long'] -cfuncs['long_long_from_pyobj']="""\ -static int long_long_from_pyobj(long_long* v,PyObject *obj,const char *errmess) { 
-\tPyObject* tmp = NULL; -\tif (PyLong_Check(obj)) { -\t\t*v = PyLong_AsLongLong(obj); -\t\treturn (!PyErr_Occurred()); -\t} -\tif (PyInt_Check(obj)) { -\t\t*v = (long_long)PyInt_AS_LONG(obj); -\t\treturn 1; -\t} -\ttmp = PyNumber_Long(obj); -\tif (tmp) { -\t\t*v = PyLong_AsLongLong(tmp); -\t\tPy_DECREF(tmp); -\t\treturn (!PyErr_Occurred()); -\t} -\tif (PyComplex_Check(obj)) -\t\ttmp = PyObject_GetAttrString(obj,\"real\"); -\telse if (PyString_Check(obj) || PyUnicode_Check(obj)) -\t\t/*pass*/; -\telse if (PySequence_Check(obj)) -\t\ttmp = PySequence_GetItem(obj,0); -\tif (tmp) { -\t\tPyErr_Clear(); -\t\tif (long_long_from_pyobj(v,tmp,errmess)) {Py_DECREF(tmp); return 1;} -\t\tPy_DECREF(tmp); -\t} -\t{ -\t\tPyObject* err = PyErr_Occurred(); -\t\tif (err==NULL) err = #modulename#_error; -\t\tPyErr_SetString(err,errmess); -\t} -\treturn 0; -} -""" -needs['long_double_from_pyobj']=['double_from_pyobj','long_double'] -cfuncs['long_double_from_pyobj']="""\ -static int long_double_from_pyobj(long_double* v,PyObject *obj,const char *errmess) { -\tdouble d=0; -\tif (PyArray_CheckScalar(obj)){ -\t\tif PyArray_IsScalar(obj, LongDouble) { -\t\t\tPyArray_ScalarAsCtype(obj, v); -\t\t\treturn 1; -\t\t} -\t\telse if (PyArray_Check(obj) && PyArray_TYPE(obj)==PyArray_LONGDOUBLE) { -\t\t\t(*v) = *((npy_longdouble *)PyArray_DATA(obj)); -\t\t\treturn 1; -\t\t} -\t} -\tif (double_from_pyobj(&d,obj,errmess)) { -\t\t*v = (long_double)d; -\t\treturn 1; -\t} -\treturn 0; -} -""" -cfuncs['double_from_pyobj']="""\ -static int double_from_pyobj(double* v,PyObject *obj,const char *errmess) { -\tPyObject* tmp = NULL; -\tif (PyFloat_Check(obj)) { -#ifdef __sgi -\t\t*v = PyFloat_AsDouble(obj); -#else -\t\t*v = PyFloat_AS_DOUBLE(obj); -#endif -\t\treturn 1; -\t} -\ttmp = PyNumber_Float(obj); -\tif (tmp) { -#ifdef __sgi -\t\t*v = PyFloat_AsDouble(tmp); -#else -\t\t*v = PyFloat_AS_DOUBLE(tmp); -#endif -\t\tPy_DECREF(tmp); -\t\treturn 1; -\t} -\tif (PyComplex_Check(obj)) -\t\ttmp = 
PyObject_GetAttrString(obj,\"real\"); -\telse if (PyString_Check(obj) || PyUnicode_Check(obj)) -\t\t/*pass*/; -\telse if (PySequence_Check(obj)) -\t\ttmp = PySequence_GetItem(obj,0); -\tif (tmp) { -\t\tPyErr_Clear(); -\t\tif (double_from_pyobj(v,tmp,errmess)) {Py_DECREF(tmp); return 1;} -\t\tPy_DECREF(tmp); -\t} -\t{ -\t\tPyObject* err = PyErr_Occurred(); -\t\tif (err==NULL) err = #modulename#_error; -\t\tPyErr_SetString(err,errmess); -\t} -\treturn 0; -} -""" -needs['float_from_pyobj']=['double_from_pyobj'] -cfuncs['float_from_pyobj']="""\ -static int float_from_pyobj(float* v,PyObject *obj,const char *errmess) { -\tdouble d=0.0; -\tif (double_from_pyobj(&d,obj,errmess)) { -\t\t*v = (float)d; -\t\treturn 1; -\t} -\treturn 0; -} -""" -needs['complex_long_double_from_pyobj']=['complex_long_double','long_double', - 'complex_double_from_pyobj'] -cfuncs['complex_long_double_from_pyobj']="""\ -static int complex_long_double_from_pyobj(complex_long_double* v,PyObject *obj,const char *errmess) { -\tcomplex_double cd={0.0,0.0}; -\tif (PyArray_CheckScalar(obj)){ -\t\tif PyArray_IsScalar(obj, CLongDouble) { -\t\t\tPyArray_ScalarAsCtype(obj, v); -\t\t\treturn 1; -\t\t} -\t\telse if (PyArray_Check(obj) && PyArray_TYPE(obj)==PyArray_CLONGDOUBLE) { -\t\t\t(*v).r = ((npy_clongdouble *)PyArray_DATA(obj))->real; -\t\t\t(*v).i = ((npy_clongdouble *)PyArray_DATA(obj))->imag; -\t\t\treturn 1; -\t\t} -\t} -\tif (complex_double_from_pyobj(&cd,obj,errmess)) { -\t\t(*v).r = (long_double)cd.r; -\t\t(*v).i = (long_double)cd.i; -\t\treturn 1; -\t} -\treturn 0; -} -""" -needs['complex_double_from_pyobj']=['complex_double'] -cfuncs['complex_double_from_pyobj']="""\ -static int complex_double_from_pyobj(complex_double* v,PyObject *obj,const char *errmess) { -\tPy_complex c; -\tif (PyComplex_Check(obj)) { -\t\tc=PyComplex_AsCComplex(obj); -\t\t(*v).r=c.real, (*v).i=c.imag; -\t\treturn 1; -\t} -\tif (PyArray_IsScalar(obj, ComplexFloating)) { -\t\tif (PyArray_IsScalar(obj, CFloat)) { 
-\t\t\tnpy_cfloat new; -\t\t\tPyArray_ScalarAsCtype(obj, &new); -\t\t\t(*v).r = (double)new.real; -\t\t\t(*v).i = (double)new.imag; -\t\t} -\t\telse if (PyArray_IsScalar(obj, CLongDouble)) { -\t\t\tnpy_clongdouble new; -\t\t\tPyArray_ScalarAsCtype(obj, &new); -\t\t\t(*v).r = (double)new.real; -\t\t\t(*v).i = (double)new.imag; -\t\t} -\t\telse { /* if (PyArray_IsScalar(obj, CDouble)) */ -\t\t\tPyArray_ScalarAsCtype(obj, v); -\t\t} -\t\treturn 1; -\t} -\tif (PyArray_CheckScalar(obj)) { /* 0-dim array or still array scalar */ -\t\tPyObject *arr; -\t\tif (PyArray_Check(obj)) { -\t\t\tarr = PyArray_Cast((PyArrayObject *)obj, PyArray_CDOUBLE); -\t\t} -\t\telse { -\t\t\tarr = PyArray_FromScalar(obj, PyArray_DescrFromType(PyArray_CDOUBLE)); -\t\t} -\t\tif (arr==NULL) return 0; -\t\t(*v).r = ((npy_cdouble *)PyArray_DATA(arr))->real; -\t\t(*v).i = ((npy_cdouble *)PyArray_DATA(arr))->imag; -\t\treturn 1; -\t} -\t/* Python does not provide PyNumber_Complex function :-( */ -\t(*v).i=0.0; -\tif (PyFloat_Check(obj)) { -#ifdef __sgi -\t\t(*v).r = PyFloat_AsDouble(obj); -#else -\t\t(*v).r = PyFloat_AS_DOUBLE(obj); -#endif -\t\treturn 1; -\t} -\tif (PyInt_Check(obj)) { -\t\t(*v).r = (double)PyInt_AS_LONG(obj); -\t\treturn 1; -\t} -\tif (PyLong_Check(obj)) { -\t\t(*v).r = PyLong_AsDouble(obj); -\t\treturn (!PyErr_Occurred()); -\t} -\tif (PySequence_Check(obj) && !(PyString_Check(obj) || PyUnicode_Check(obj))) { -\t\tPyObject *tmp = PySequence_GetItem(obj,0); -\t\tif (tmp) { -\t\t\tif (complex_double_from_pyobj(v,tmp,errmess)) { -\t\t\t\tPy_DECREF(tmp); -\t\t\t\treturn 1; -\t\t\t} -\t\t\tPy_DECREF(tmp); -\t\t} -\t} -\t{ -\t\tPyObject* err = PyErr_Occurred(); -\t\tif (err==NULL) -\t\t\terr = PyExc_TypeError; -\t\tPyErr_SetString(err,errmess); -\t} -\treturn 0; -} -""" -needs['complex_float_from_pyobj']=['complex_float','complex_double_from_pyobj'] -cfuncs['complex_float_from_pyobj']="""\ -static int complex_float_from_pyobj(complex_float* v,PyObject *obj,const char *errmess) { 
-\tcomplex_double cd={0.0,0.0}; -\tif (complex_double_from_pyobj(&cd,obj,errmess)) { -\t\t(*v).r = (float)cd.r; -\t\t(*v).i = (float)cd.i; -\t\treturn 1; -\t} -\treturn 0; -} -""" -needs['try_pyarr_from_char']=['pyobj_from_char1','TRYPYARRAYTEMPLATE'] -cfuncs['try_pyarr_from_char']='static int try_pyarr_from_char(PyObject* obj,char* v) {\n\tTRYPYARRAYTEMPLATE(char,\'c\');\n}\n' -needs['try_pyarr_from_signed_char']=['TRYPYARRAYTEMPLATE','unsigned_char'] -cfuncs['try_pyarr_from_unsigned_char']='static int try_pyarr_from_unsigned_char(PyObject* obj,unsigned_char* v) {\n\tTRYPYARRAYTEMPLATE(unsigned_char,\'b\');\n}\n' -needs['try_pyarr_from_signed_char']=['TRYPYARRAYTEMPLATE','signed_char'] -cfuncs['try_pyarr_from_signed_char']='static int try_pyarr_from_signed_char(PyObject* obj,signed_char* v) {\n\tTRYPYARRAYTEMPLATE(signed_char,\'1\');\n}\n' -needs['try_pyarr_from_short']=['pyobj_from_short1','TRYPYARRAYTEMPLATE'] -cfuncs['try_pyarr_from_short']='static int try_pyarr_from_short(PyObject* obj,short* v) {\n\tTRYPYARRAYTEMPLATE(short,\'s\');\n}\n' -needs['try_pyarr_from_int']=['pyobj_from_int1','TRYPYARRAYTEMPLATE'] -cfuncs['try_pyarr_from_int']='static int try_pyarr_from_int(PyObject* obj,int* v) {\n\tTRYPYARRAYTEMPLATE(int,\'i\');\n}\n' -needs['try_pyarr_from_long']=['pyobj_from_long1','TRYPYARRAYTEMPLATE'] -cfuncs['try_pyarr_from_long']='static int try_pyarr_from_long(PyObject* obj,long* v) {\n\tTRYPYARRAYTEMPLATE(long,\'l\');\n}\n' -needs['try_pyarr_from_long_long']=['pyobj_from_long_long1','TRYPYARRAYTEMPLATE','long_long'] -cfuncs['try_pyarr_from_long_long']='static int try_pyarr_from_long_long(PyObject* obj,long_long* v) {\n\tTRYPYARRAYTEMPLATE(long_long,\'L\');\n}\n' -needs['try_pyarr_from_float']=['pyobj_from_float1','TRYPYARRAYTEMPLATE'] -cfuncs['try_pyarr_from_float']='static int try_pyarr_from_float(PyObject* obj,float* v) {\n\tTRYPYARRAYTEMPLATE(float,\'f\');\n}\n' -needs['try_pyarr_from_double']=['pyobj_from_double1','TRYPYARRAYTEMPLATE'] 
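The `*_from_pyobj` helpers deleted above all follow one coercion cascade: accept the exact Python type, then try the number protocol, then fall back to the `real` part of a complex, then to the first element of a sequence (recursing once per fallback). A rough, hypothetical Python sketch of that order (the function name and the `(ok, value)` return convention are illustrative, not part of f2py):

```python
def int_from_pyobj(obj):
    """Mimic the generated C helper's coercion order: return (ok, value)."""
    if isinstance(obj, int):               # PyInt_Check fast path
        return True, obj
    try:
        return True, int(obj)              # PyNumber_Int analogue
    except (TypeError, ValueError):
        pass
    if isinstance(obj, complex):           # fall back to the real part
        return int_from_pyobj(obj.real)
    if isinstance(obj, (list, tuple)) and obj:
        return int_from_pyobj(obj[0])      # PySequence_GetItem(obj, 0)
    return False, 0                        # C caller raises #modulename#_error

print(int_from_pyobj(7))         # (True, 7)
print(int_from_pyobj(3 + 0j))    # (True, 3)  -- via .real
print(int_from_pyobj(["9"]))     # (True, 9)  -- first element, then coercion
print(int_from_pyobj(object()))  # (False, 0)
```

The C versions differ in detail (strings are filtered before the sequence check, and failure sets `#modulename#_error`), but the fallback order is the same.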
-cfuncs['try_pyarr_from_double']='static int try_pyarr_from_double(PyObject* obj,double* v) {\n\tTRYPYARRAYTEMPLATE(double,\'d\');\n}\n' -needs['try_pyarr_from_complex_float']=['pyobj_from_complex_float1','TRYCOMPLEXPYARRAYTEMPLATE','complex_float'] -cfuncs['try_pyarr_from_complex_float']='static int try_pyarr_from_complex_float(PyObject* obj,complex_float* v) {\n\tTRYCOMPLEXPYARRAYTEMPLATE(float,\'F\');\n}\n' -needs['try_pyarr_from_complex_double']=['pyobj_from_complex_double1','TRYCOMPLEXPYARRAYTEMPLATE','complex_double'] -cfuncs['try_pyarr_from_complex_double']='static int try_pyarr_from_complex_double(PyObject* obj,complex_double* v) {\n\tTRYCOMPLEXPYARRAYTEMPLATE(double,\'D\');\n}\n' - -needs['create_cb_arglist']=['CFUNCSMESS','PRINTPYOBJERR','MINMAX'] -cfuncs['create_cb_arglist']="""\ -static int create_cb_arglist(PyObject* fun,PyTupleObject* xa,const int maxnofargs,const int nofoptargs,int *nofargs,PyTupleObject **args,const char *errmess) { -\tPyObject *tmp = NULL; -\tPyObject *tmp_fun = NULL; -\tint tot,opt,ext,siz,i,di=0; -\tCFUNCSMESS(\"create_cb_arglist\\n\"); -\ttot=opt=ext=siz=0; -\t/* Get the total number of arguments */ -\tif (PyFunction_Check(fun)) -\t\ttmp_fun = fun; -\telse { -\t\tdi = 1; -\t\tif (PyObject_HasAttrString(fun,\"im_func\")) { -\t\t\ttmp_fun = PyObject_GetAttrString(fun,\"im_func\"); -\t\t} -\t\telse if (PyObject_HasAttrString(fun,\"__call__\")) { -\t\t\ttmp = PyObject_GetAttrString(fun,\"__call__\"); -\t\t\tif (PyObject_HasAttrString(tmp,\"im_func\")) -\t\t\t\ttmp_fun = PyObject_GetAttrString(tmp,\"im_func\"); -\t\t\telse { -\t\t\t\ttmp_fun = fun; /* built-in function */ -\t\t\t\ttot = maxnofargs; -\t\t\t\tif (xa != NULL) -\t\t\t\t\ttot += PyTuple_Size((PyObject *)xa); -\t\t\t} -\t\t\tPy_XDECREF(tmp); -\t\t} -\t\telse if (PyFortran_Check(fun) || PyFortran_Check1(fun)) { -\t\t\ttot = maxnofargs; -\t\t\tif (xa != NULL) -\t\t\t\ttot += PyTuple_Size((PyObject *)xa); -\t\t\ttmp_fun = fun; -\t\t} -\t\telse if (F2PyCapsule_Check(fun)) { 
-\t\t\ttot = maxnofargs; -\t\t\tif (xa != NULL) -\t\t\t\text = PyTuple_Size((PyObject *)xa); -\t\t\tif(ext>0) { -\t\t\t\tfprintf(stderr,\"extra arguments tuple cannot be used with CObject call-back\\n\"); -\t\t\t\tgoto capi_fail; -\t\t\t} -\t\t\ttmp_fun = fun; -\t\t} -\t} -if (tmp_fun==NULL) { -fprintf(stderr,\"Call-back argument must be function|instance|instance.__call__|f2py-function but got %s.\\n\",(fun==NULL?\"NULL\":Py_TYPE(fun)->tp_name)); -goto capi_fail; -} -#if PY_VERSION_HEX >= 0x03000000 -\tif (PyObject_HasAttrString(tmp_fun,\"__code__\")) { -\t\tif (PyObject_HasAttrString(tmp = PyObject_GetAttrString(tmp_fun,\"__code__\"),\"co_argcount\")) -#else -\tif (PyObject_HasAttrString(tmp_fun,\"func_code\")) { -\t\tif (PyObject_HasAttrString(tmp = PyObject_GetAttrString(tmp_fun,\"func_code\"),\"co_argcount\")) -#endif -\t\t\ttot = PyInt_AsLong(PyObject_GetAttrString(tmp,\"co_argcount\")) - di; -\t\tPy_XDECREF(tmp); -\t} -\t/* Get the number of optional arguments */ -#if PY_VERSION_HEX >= 0x03000000 -\tif (PyObject_HasAttrString(tmp_fun,\"__defaults__\")) -\t\tif (PyTuple_Check(tmp = PyObject_GetAttrString(tmp_fun,\"__defaults__\"))) -#else -\tif (PyObject_HasAttrString(tmp_fun,\"func_defaults\")) -\t\tif (PyTuple_Check(tmp = PyObject_GetAttrString(tmp_fun,\"func_defaults\"))) -#endif -\t\t\topt = PyTuple_Size(tmp); -\t\tPy_XDECREF(tmp); -\t/* Get the number of extra arguments */ -\tif (xa != NULL) -\t\text = PyTuple_Size((PyObject *)xa); -\t/* Calculate the size of call-backs argument list */ -\tsiz = MIN(maxnofargs+ext,tot); -\t*nofargs = MAX(0,siz-ext); -#ifdef DEBUGCFUNCS -\tfprintf(stderr,\"debug-capi:create_cb_arglist:maxnofargs(-nofoptargs),tot,opt,ext,siz,nofargs=%d(-%d),%d,%d,%d,%d,%d\\n\",maxnofargs,nofoptargs,tot,opt,ext,siz,*nofargs); -#endif -\tif (siz0: - if outneeds[n][0] not in needs: - out.append(outneeds[n][0]) - del outneeds[n][0] - else: - flag=0 - for k in outneeds[n][1:]: - if k in needs[outneeds[n][0]]: - flag=1 - break - if flag: - 
outneeds[n]=outneeds[n][1:]+[outneeds[n][0]] - else: - out.append(outneeds[n][0]) - del outneeds[n][0] - if saveout and (0 not in map(lambda x,y:x==y,saveout,outneeds[n])) \ - and outneeds[n] != []: - print n,saveout - errmess('get_needs: no progress in sorting needs, probably circular dependence, skipping.\n') - out=out+saveout - break - saveout=copy.copy(outneeds[n]) - if out==[]: - out=[n] - res[n]=out - return res diff --git a/pythonPackages/numpy/numpy/f2py/common_rules.py b/pythonPackages/numpy/numpy/f2py/common_rules.py deleted file mode 100755 index 3295676ef2..0000000000 --- a/pythonPackages/numpy/numpy/f2py/common_rules.py +++ /dev/null @@ -1,130 +0,0 @@ -#!/usr/bin/env python -""" - -Build common block mechanism for f2py2e. - -Copyright 2000 Pearu Peterson all rights reserved, -Pearu Peterson -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -$Date: 2005/05/06 10:57:33 $ -Pearu Peterson -""" - -__version__ = "$Revision: 1.19 $"[10:-1] - -import __version__ -f2py_version = __version__.version - -import pprint -import sys -errmess=sys.stderr.write -outmess=sys.stdout.write -show=pprint.pprint - -from auxfuncs import * -import capi_maps -import func2subr -from crackfortran import rmbadname -############## - -def findcommonblocks(block,top=1): - ret = [] - if hascommon(block): - for n in block['common'].keys(): - vars={} - for v in block['common'][n]: - vars[v]=block['vars'][v] - ret.append((n,block['common'][n],vars)) - elif hasbody(block): - for b in block['body']: - ret=ret+findcommonblocks(b,0) - if top: - tret=[] - names=[] - for t in ret: - if t[0] not in names: - names.append(t[0]) - tret.append(t) - return tret - return ret - -def buildhooks(m): - ret = {'commonhooks':[],'initcommonhooks':[],'docs':['"COMMON blocks:\\n"']} - fwrap = [''] - def fadd(line,s=fwrap): s[0] = '%s\n %s'%(s[0],line) - chooks = [''] - def cadd(line,s=chooks): 
s[0] = '%s\n%s'%(s[0],line) - ihooks = [''] - def iadd(line,s=ihooks): s[0] = '%s\n%s'%(s[0],line) - doc = [''] - def dadd(line,s=doc): s[0] = '%s\n%s'%(s[0],line) - for (name,vnames,vars) in findcommonblocks(m): - lower_name = name.lower() - hnames,inames = [],[] - for n in vnames: - if isintent_hide(vars[n]): hnames.append(n) - else: inames.append(n) - if hnames: - outmess('\t\tConstructing COMMON block support for "%s"...\n\t\t %s\n\t\t Hidden: %s\n'%(name,','.join(inames),','.join(hnames))) - else: - outmess('\t\tConstructing COMMON block support for "%s"...\n\t\t %s\n'%(name,','.join(inames))) - fadd('subroutine f2pyinit%s(setupfunc)'%name) - fadd('external setupfunc') - for n in vnames: - fadd(func2subr.var2fixfortran(vars,n)) - if name=='_BLNK_': - fadd('common %s'%(','.join(vnames))) - else: - fadd('common /%s/ %s'%(name,','.join(vnames))) - fadd('call setupfunc(%s)'%(','.join(inames))) - fadd('end\n') - cadd('static FortranDataDef f2py_%s_def[] = {'%(name)) - idims=[] - for n in inames: - ct = capi_maps.getctype(vars[n]) - at = capi_maps.c2capi_map[ct] - dm = capi_maps.getarrdims(n,vars[n]) - if dm['dims']: idims.append('(%s)'%(dm['dims'])) - else: idims.append('') - dms=dm['dims'].strip() - if not dms: dms='-1' - cadd('\t{\"%s\",%s,{{%s}},%s},'%(n,dm['rank'],dms,at)) - cadd('\t{NULL}\n};') - inames1 = rmbadname(inames) - inames1_tps = ','.join(map(lambda s:'char *'+s,inames1)) - cadd('static void f2py_setup_%s(%s) {'%(name,inames1_tps)) - cadd('\tint i_f2py=0;') - for n in inames1: - cadd('\tf2py_%s_def[i_f2py++].data = %s;'%(name,n)) - cadd('}') - if '_' in lower_name: - F_FUNC='F_FUNC_US' - else: - F_FUNC='F_FUNC' - cadd('extern void %s(f2pyinit%s,F2PYINIT%s)(void(*)(%s));'\ - %(F_FUNC,lower_name,name.upper(), - ','.join(['char*']*len(inames1)))) - cadd('static void f2py_init_%s(void) {'%name) - cadd('\t%s(f2pyinit%s,F2PYINIT%s)(f2py_setup_%s);'\ - %(F_FUNC,lower_name,name.upper(),name)) - cadd('}\n') - iadd('\tF2PyDict_SetItemString(d, \"%s\", 
PyFortranObject_New(f2py_%s_def,f2py_init_%s));'%(name,name,name)) - tname = name.replace('_','\\_') - dadd('\\subsection{Common block \\texttt{%s}}\n'%(tname)) - dadd('\\begin{description}') - for n in inames: - dadd('\\item[]{{}\\verb@%s@{}}'%(capi_maps.getarrdocsign(n,vars[n]))) - if hasnote(vars[n]): - note = vars[n]['note'] - if type(note) is type([]): note='\n'.join(note) - dadd('--- %s'%(note)) - dadd('\\end{description}') - ret['docs'].append('"\t/%s/ %s\\n"'%(name,','.join(map(lambda v,d:v+d,inames,idims)))) - ret['commonhooks']=chooks - ret['initcommonhooks']=ihooks - ret['latexdoc']=doc[0] - if len(ret['docs'])<=1: ret['docs']='' - return ret,fwrap[0] diff --git a/pythonPackages/numpy/numpy/f2py/crackfortran.py b/pythonPackages/numpy/numpy/f2py/crackfortran.py deleted file mode 100755 index 9f6f7c7104..0000000000 --- a/pythonPackages/numpy/numpy/f2py/crackfortran.py +++ /dev/null @@ -1,2772 +0,0 @@ -#!/usr/bin/env python -""" -crackfortran --- read fortran (77,90) code and extract declaration information. - Usage is explained in the comment block below. - -Copyright 1999-2004 Pearu Peterson all rights reserved, -Pearu Peterson -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. 
-$Date: 2005/09/27 07:13:49 $ -Pearu Peterson -""" -__version__ = "$Revision: 1.177 $"[10:-1] - -import __version__ -f2py_version = __version__.version - -""" - Usage of crackfortran: - ====================== - Command line keys: -quiet,-verbose,-fix,-f77,-f90,-show,-h <pyffilename> - -m <module name for f77 routines>,--ignore-contains - Functions: crackfortran, crack2fortran - The following Fortran statements/constructions are supported - (or will be if needed): - block data,byte,call,character,common,complex,contains,data, - dimension,double complex,double precision,end,external,function, - implicit,integer,intent,interface,intrinsic, - logical,module,optional,parameter,private,public, - program,real,(sequence?),subroutine,type,use,virtual, - include,pythonmodule - Note: 'virtual' is mapped to 'dimension'. - Note: 'implicit integer (z) static (z)' is 'implicit static (z)' (this is a minor bug). - Note: code after 'contains' will be ignored until its scope ends. - Note: 'common' statement is extended: dimensions are moved to variable definitions - Note: f2py directive: <commentchar>f2py<line> is read as <line> - Note: pythonmodule is introduced to represent Python module - - Usage: - `postlist=crackfortran(files,funcs)` - `postlist` contains declaration information read from the list of files `files`. 
- `crack2fortran(postlist)` returns a fortran code to be saved to pyf-file - - `postlist` has the following structure: - *** it is a list of dictionaries containing `blocks': - B = {'block','body','vars','parent_block'[,'name','prefix','args','result', - 'implicit','externals','interfaced','common','sortvars', - 'commonvars','note']} - B['block'] = 'interface' | 'function' | 'subroutine' | 'module' | - 'program' | 'block data' | 'type' | 'pythonmodule' - B['body'] --- list containing `subblocks' with the same structure as `blocks' - B['parent_block'] --- dictionary of a parent block: - C['body'][<index>]['parent_block'] is C - B['vars'] --- dictionary of variable definitions - B['sortvars'] --- dictionary of variable definitions sorted by dependence (independent first) - B['name'] --- name of the block (not if B['block']=='interface') - B['prefix'] --- prefix string (only if B['block']=='function') - B['args'] --- list of argument names if B['block']== 'function' | 'subroutine' - B['result'] --- name of the return value (only if B['block']=='function') - B['implicit'] --- dictionary {'a':<variable definition>,'b':...} | None - B['externals'] --- list of variables being external - B['interfaced'] --- list of variables being external and defined - B['common'] --- dictionary of common blocks (list of objects) - B['commonvars'] --- list of variables used in common blocks (dimensions are moved to variable definitions) - B['from'] --- string showing the 'parents' of the current block - B['use'] --- dictionary of modules used in current block: - {<modulename>:{['only':<0|1>],['map':{<local_name>:<use_name>,...}]}} - B['note'] --- list of LaTeX comments on the block - B['f2pyenhancements'] --- optional dictionary - {'threadsafe':'','fortranname':<name>, - 'callstatement':<C-expr>|<multi-line block>, - 'callprotoargument':<C-typespec-list>, - 'usercode':<multi-line block>|<list of multi-line blocks>, - 'pymethoddef':<multi-line block> - } - B['entry'] --- dictionary {entryname:argslist,..} - B['varnames'] --- list of variable names given in the order of reading the - Fortran code, useful for derived types. 
- *** Variable definition is a dictionary - D = B['vars'][<name>] = - {'typespec'[,'attrspec','kindselector','charselector','=','typename']} - D['typespec'] = 'byte' | 'character' | 'complex' | 'double complex' | - 'double precision' | 'integer' | 'logical' | 'real' | 'type' - D['attrspec'] --- list of attributes (e.g. 'dimension(<arrayspec>)', - 'external','intent(in|out|inout|hide|c|callback|cache|aligned4|aligned8|aligned16)', - 'optional','required', etc) - K = D['kindselector'] = {['*','kind']} (only if D['typespec'] = - 'complex' | 'integer' | 'logical' | 'real' ) - C = D['charselector'] = {['*','len','kind']} - (only if D['typespec']=='character') - D['='] --- initialization expression string - D['typename'] --- name of the type if D['typespec']=='type' - D['dimension'] --- list of dimension bounds - D['intent'] --- list of intent specifications - D['depend'] --- list of variable names on which current variable depends on - D['check'] --- list of C-expressions; if C-expr returns zero, exception is raised - D['note'] --- list of LaTeX comments on the variable - *** Meaning of kind/char selectors (few examples): - D['typespec']*K['*'] - D['typespec'](kind=K['kind']) - character*C['*'] - character(len=C['len'],kind=C['kind']) - (see also fortran type declaration statement formats below) - - Fortran 90 type declaration statement format (F77 is subset of F90) -==================================================================== - (Main source: IBM XL Fortran 5.1 Language Reference Manual) - type declaration = <typespec> [[<attrspec>]::] <entitydecl> - <typespec> = byte | - character[<charselector>] | - complex[<kindselector>] | - double complex | - double precision | - integer[<kindselector>] | - logical[<kindselector>] | - real[<kindselector>] | - type(<typename>) - <charselector> = * <charlen> | - ([len=]<len>[,[kind=]<kind>]) | - (kind=<kind>[,len=<len>]) - <kindselector> = * <intlen> | - ([kind=]<kind>) - <attrspec> = comma separated list of attributes. - Only the following attributes are used in - building up the interface: - external - (parameter --- affects '=' key) - optional - intent - Other attributes are ignored. 
- <intentspec> = in | out | inout - <arrayspec> = comma separated list of dimension bounds. - <entitydecl> = <name> [[*][<len>][(<arrayspec>)] | [(<arrayspec>)]*<len>] - [/<init_expr>/ | =<init_expr>] [,<entitydecl>] - - In addition, the following attributes are used: check,depend,note - - TODO: - * Apply 'parameter' attribute (e.g. 'integer parameter :: i=2' 'real x(i)' - -> 'real x(2)') - The above may be solved by creating appropriate preprocessor program, for example. -""" -# -import sys -import string -import fileinput -import re -import pprint -import os -import copy -from auxfuncs import * - -# Global flags: -strictf77=1 # Ignore `!' comments unless line[0]=='!' -sourcecodeform='fix' # 'fix','free' -quiet=0 # Be verbose if 0 (Obsolete: not used any more) -verbose=1 # Be quiet if 0, extra verbose if > 1. -tabchar=4*' ' -pyffilename='' -f77modulename='' -skipemptyends=0 # for old F77 programs without 'program' statement -ignorecontains=1 -dolowercase=1 -debug=[] -## do_analyze = 1 - -###### global variables - -## use reload(crackfortran) to reset these variables - -groupcounter=0 -grouplist={groupcounter:[]} -neededmodule=-1 -expectbegin=1 -skipblocksuntil=-1 -usermodules=[] -f90modulevars={} -gotnextfile=1 -filepositiontext='' -currentfilename='' -skipfunctions=[] -skipfuncs=[] -onlyfuncs=[] -include_paths=[] -previous_context = None - -###### Some helper functions -def show(o,f=0):pprint.pprint(o) -errmess=sys.stderr.write -def outmess(line,flag=1): - global filepositiontext - if not verbose: return - if not quiet: - if flag:sys.stdout.write(filepositiontext) - sys.stdout.write(line) -re._MAXCACHE=50 -defaultimplicitrules={} -for c in "abcdefghopqrstuvwxyz$_": defaultimplicitrules[c]={'typespec':'real'} -for c in "ijklmn": defaultimplicitrules[c]={'typespec':'integer'} -del c -badnames={} -invbadnames={} -for n in ['int','double','float','char','short','long','void','case','while', - 'return','signed','unsigned','if','for','typedef','sizeof','union', - 'struct','static','register','new','break','do','goto','switch', - 
'continue','else','inline','extern','delete','const','auto', - 'len','rank','shape','index','slen','size','_i', - 'max', 'min', - 'flen','fshape', - 'string','complex_double','float_double','stdin','stderr','stdout', - 'type','default']: - badnames[n]=n+'_bn' - invbadnames[n+'_bn']=n -def rmbadname1(name): - if name in badnames: - errmess('rmbadname1: Replacing "%s" with "%s".\n'%(name,badnames[name])) - return badnames[name] - return name -def rmbadname(names): return map(rmbadname1,names) - -def undo_rmbadname1(name): - if name in invbadnames: - errmess('undo_rmbadname1: Replacing "%s" with "%s".\n'\ - %(name,invbadnames[name])) - return invbadnames[name] - return name -def undo_rmbadname(names): return map(undo_rmbadname1,names) - -def getextension(name): - i=name.rfind('.') - if i==-1: return '' - if '\\' in name[i:]: return '' - if '/' in name[i:]: return '' - return name[i+1:] - -is_f_file = re.compile(r'.*[.](for|ftn|f77|f)\Z',re.I).match -_has_f_header = re.compile(r'-[*]-\s*fortran\s*-[*]-',re.I).search -_has_f90_header = re.compile(r'-[*]-\s*f90\s*-[*]-',re.I).search -_has_fix_header = re.compile(r'-[*]-\s*fix\s*-[*]-',re.I).search -_free_f90_start = re.compile(r'[^c*]\s*[^\s\d\t]',re.I).match -def is_free_format(file): - """Check if file is in free format Fortran.""" - # f90 allows both fixed and free format, assuming fixed unless - # signs of free format are detected. - result = 0 - f = open(file,'r') - line = f.readline() - n = 15 # the number of non-comment lines to scan for hints - if _has_f_header(line): - n = 0 - elif _has_f90_header(line): - n = 0 - result = 1 - while n>0 and line: - if line[0]!='!' 
and line.strip(): - n -= 1 - if (line[0]!='\t' and _free_f90_start(line[:5])) or line[-2:-1]=='&': - result = 1 - break - line = f.readline() - f.close() - return result - - -####### Read fortran (77,90) code -def readfortrancode(ffile,dowithline=show,istop=1): - """ - Read fortran codes from files and - 1) Get rid of comments, line continuations, and empty lines; lower cases. - 2) Call dowithline(line) on every line. - 3) Recursively call itself when statement \"include '<filename>'\" is met. - """ - global gotnextfile,filepositiontext,currentfilename,sourcecodeform,strictf77,\ - beginpattern,quiet,verbose,dolowercase,include_paths - if not istop: - saveglobals=gotnextfile,filepositiontext,currentfilename,sourcecodeform,strictf77,\ - beginpattern,quiet,verbose,dolowercase - if ffile==[]: return - localdolowercase = dolowercase - cont=0 - finalline='' - ll='' - commentline=re.compile(r'(?P<line>([^"]*"[^"]*"[^"!]*|[^\']*\'[^\']*\'[^\'!]*|[^!]*))!{1}(?P<rest>.*)') - includeline=re.compile(r'\s*include\s*(\'|")(?P<name>[^\'"]*)(\'|")',re.I) - cont1=re.compile(r'(?P<line>.*)&\s*\Z') - cont2=re.compile(r'(\s*&|)(?P<line>.*)') - mline_mark = re.compile(r".*?'''") - if istop: dowithline('',-1) - ll,l1='','' - spacedigits=[' ']+map(str,range(10)) - filepositiontext='' - fin=fileinput.FileInput(ffile) - while 1: - l=fin.readline() - if not l: break - if fin.isfirstline(): - filepositiontext='' - currentfilename=fin.filename() - gotnextfile=1 - l1=l - strictf77=0 - sourcecodeform='fix' - ext = os.path.splitext(currentfilename)[1] - if is_f_file(currentfilename) and \ - not (_has_f90_header(l) or _has_fix_header(l)): - strictf77=1 - elif is_free_format(currentfilename) and not _has_fix_header(l): - sourcecodeform='free' - if strictf77: beginpattern=beginpattern77 - else: beginpattern=beginpattern90 - outmess('\tReading file %s (format:%s%s)\n'\ - %(`currentfilename`,sourcecodeform, - strictf77 and ',strict' or '')) - - l=l.expandtabs().replace('\xa0',' ') - while not l=='': # Get rid of newline characters - if 
l[-1] not in "\n\r\f": break - l=l[:-1] - if not strictf77: - r=commentline.match(l) - if r: - l=r.group('line')+' ' # Strip comments starting with `!' - rl=r.group('rest') - if rl[:4].lower()=='f2py': # f2py directive - l = l + 4*' ' - r=commentline.match(rl[4:]) - if r: l=l+r.group('line') - else: l = l + rl[4:] - if l.strip()=='': # Skip empty line - cont=0 - continue - if sourcecodeform=='fix': - if l[0] in ['*','c','!','C','#']: - if l[1:5].lower()=='f2py': # f2py directive - l=' '+l[5:] - else: # Skip comment line - cont=0 - continue - elif strictf77: - if len(l)>72: l=l[:72] - if not (l[0] in spacedigits): - raise Exception('readfortrancode: Found non-(space,digit) char ' - 'in the first column.\n\tAre you sure that ' - 'this code is in fix form?\n\tline=%s' % `l`) - - if (not cont or strictf77) and (len(l)>5 and not l[5]==' '): - # Continuation of a previous line - ll=ll+l[6:] - finalline='' - origfinalline='' - else: - if not strictf77: - # F90 continuation - r=cont1.match(l) - if r: l=r.group('line') # Continuation follows .. - if cont: - ll=ll+cont2.match(l).group('line') - finalline='' - origfinalline='' - else: - l=' '+l[5:] # clean up line beginning from possible digits. - if localdolowercase: finalline=ll.lower() - else: finalline=ll - origfinalline=ll - ll=l - cont=(r is not None) - else: - l=' '+l[5:] # clean up line beginning from possible digits. - if localdolowercase: finalline=ll.lower() - else: finalline=ll - origfinalline =ll - ll=l - - elif sourcecodeform=='free': - if not cont and ext=='.pyf' and mline_mark.match(l): - l = l + '\n' - while 1: - lc = fin.readline() - if not lc: - errmess('Unexpected end of file when reading multiline\n') - break - l = l + lc - if mline_mark.match(lc): - break - l = l.rstrip() - r=cont1.match(l) - if r: l=r.group('line') # Continuation follows .. 
- if cont: - ll=ll+cont2.match(l).group('line') - finalline='' - origfinalline='' - else: - if localdolowercase: finalline=ll.lower() - else: finalline=ll - origfinalline =ll - ll=l - cont=(r is not None) - else: - raise ValueError,"Flag sourcecodeform must be either 'fix' or 'free': %s"%`sourcecodeform` - filepositiontext='Line #%d in %s:"%s"\n\t' % (fin.filelineno()-1,currentfilename,l1) - m=includeline.match(origfinalline) - if m: - fn=m.group('name') - if os.path.isfile(fn): - readfortrancode(fn,dowithline=dowithline,istop=0) - else: - include_dirs = [os.path.dirname(currentfilename)] + include_paths - foundfile = 0 - for inc_dir in include_dirs: - fn1 = os.path.join(inc_dir,fn) - if os.path.isfile(fn1): - foundfile = 1 - readfortrancode(fn1,dowithline=dowithline,istop=0) - break - if not foundfile: - outmess('readfortrancode: could not find include file %s. Ignoring.\n'%(`fn`)) - else: - dowithline(finalline) - l1=ll - if localdolowercase: - finalline=ll.lower() - else: finalline=ll - origfinalline = ll - filepositiontext='Line #%d in %s:"%s"\n\t' % (fin.filelineno()-1,currentfilename,l1) - m=includeline.match(origfinalline) - if m: - fn=m.group('name') - fn1=os.path.join(os.path.dirname(currentfilename),fn) - if os.path.isfile(fn): - readfortrancode(fn,dowithline=dowithline,istop=0) - elif os.path.isfile(fn1): - readfortrancode(fn1,dowithline=dowithline,istop=0) - else: - outmess('readfortrancode: could not find include file %s. 
Ignoring.\n'%(`fn`)) - else: - dowithline(finalline) - filepositiontext='' - fin.close() - if istop: dowithline('',1) - else: - gotnextfile,filepositiontext,currentfilename,sourcecodeform,strictf77,\ - beginpattern,quiet,verbose,dolowercase=saveglobals - -########### Crack line -beforethisafter=r'\s*(?P<before>%s(?=\s*(\b(%s)\b)))'+ \ - r'\s*(?P<this>(\b(%s)\b))'+ \ - r'\s*(?P<after>%s)\s*\Z' -## -fortrantypes='character|logical|integer|real|complex|double\s*(precision\s*(complex|)|complex)|type(?=\s*\([\w\s,=(*)]*\))|byte' -typespattern=re.compile(beforethisafter%('',fortrantypes,fortrantypes,'.*'),re.I),'type' -typespattern4implicit=re.compile(beforethisafter%('',fortrantypes+'|static|automatic|undefined',fortrantypes+'|static|automatic|undefined','.*'),re.I) -# -functionpattern=re.compile(beforethisafter%('([a-z]+[\w\s(=*+-/)]*?|)','function','function','.*'),re.I),'begin' -subroutinepattern=re.compile(beforethisafter%('[a-z\s]*?','subroutine','subroutine','.*'),re.I),'begin' -#modulepattern=re.compile(beforethisafter%('[a-z\s]*?','module','module','.*'),re.I),'begin' -# -groupbegins77=r'program|block\s*data' -beginpattern77=re.compile(beforethisafter%('',groupbegins77,groupbegins77,'.*'),re.I),'begin' -groupbegins90=groupbegins77+r'|module(?!\s*procedure)|python\s*module|interface|type(?!\s*\()' -beginpattern90=re.compile(beforethisafter%('',groupbegins90,groupbegins90,'.*'),re.I),'begin' -groupends=r'end|endprogram|endblockdata|endmodule|endpythonmodule|endinterface' -endpattern=re.compile(beforethisafter%('',groupends,groupends,'[\w\s]*'),re.I),'end' -#endifs='end\s*(if|do|where|select|while|forall)' -endifs='(end\s*(if|do|where|select|while|forall))|(module\s*procedure)' -endifpattern=re.compile(beforethisafter%('[\w]*?',endifs,endifs,'[\w\s]*'),re.I),'endif' -# -implicitpattern=re.compile(beforethisafter%('','implicit','implicit','.*'),re.I),'implicit' -dimensionpattern=re.compile(beforethisafter%('','dimension|virtual','dimension|virtual','.*'),re.I),'dimension' 
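Every statement pattern in the deleted module is an instance of the one `beforethisafter` template: three named groups capture any prefix before the keyword (`before`), the keyword itself (`this`), and the rest of the statement (`after`), and `crackline`/`analyzeline` later dispatch on `m.group('this')` and `m.group('after')`. A minimal, self-contained sketch of the scheme (the group names are inferred from that downstream usage, since the original named-group tags are garbled in this diff):

```python
import re

# Template with three named groups: text before the keyword, the keyword
# itself, and the remainder of the statement. The empty first slot means
# "no prefix allowed" for most statement kinds.
beforethisafter = (r'\s*(?P<before>%s(?=\s*(\b(%s)\b)))'
                   r'\s*(?P<this>(\b(%s)\b))'
                   r'\s*(?P<after>%s)\s*\Z')

# Same construction as the module's dimensionpattern (keyword alternatives
# are substituted twice: once inside the lookahead, once for the match).
dimensionpattern = re.compile(
    beforethisafter % ('', 'dimension|virtual', 'dimension|virtual', '.*'),
    re.I)

m = dimensionpattern.match('dimension a(10), b(n)')
print(m.group('this'))   # the keyword that matched: 'dimension'
print(m.group('after'))  # the rest of the statement: 'a(10), b(n)'
```

Because the keyword is matched both in a lookahead (inside `before`) and as `this`, a non-empty `before` slot (as used for `functionpattern`) can consume a type prefix such as `real*8` while the keyword itself still lands in its own group.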
-externalpattern=re.compile(beforethisafter%('','external','external','.*'),re.I),'external' -optionalpattern=re.compile(beforethisafter%('','optional','optional','.*'),re.I),'optional' -requiredpattern=re.compile(beforethisafter%('','required','required','.*'),re.I),'required' -publicpattern=re.compile(beforethisafter%('','public','public','.*'),re.I),'public' -privatepattern=re.compile(beforethisafter%('','private','private','.*'),re.I),'private' -intrisicpattern=re.compile(beforethisafter%('','intrisic','intrisic','.*'),re.I),'intrisic' -intentpattern=re.compile(beforethisafter%('','intent|depend|note|check','intent|depend|note|check','\s*\(.*?\).*'),re.I),'intent' -parameterpattern=re.compile(beforethisafter%('','parameter','parameter','\s*\(.*'),re.I),'parameter' -datapattern=re.compile(beforethisafter%('','data','data','.*'),re.I),'data' -callpattern=re.compile(beforethisafter%('','call','call','.*'),re.I),'call' -entrypattern=re.compile(beforethisafter%('','entry','entry','.*'),re.I),'entry' -callfunpattern=re.compile(beforethisafter%('','callfun','callfun','.*'),re.I),'callfun' -commonpattern=re.compile(beforethisafter%('','common','common','.*'),re.I),'common' -usepattern=re.compile(beforethisafter%('','use','use','.*'),re.I),'use' -containspattern=re.compile(beforethisafter%('','contains','contains',''),re.I),'contains' -formatpattern=re.compile(beforethisafter%('','format','format','.*'),re.I),'format' -## Non-fortran and f2py-specific statements -f2pyenhancementspattern=re.compile(beforethisafter%('','threadsafe|fortranname|callstatement|callprotoargument|usercode|pymethoddef','threadsafe|fortranname|callstatement|callprotoargument|usercode|pymethoddef','.*'),re.I|re.S),'f2pyenhancements' -multilinepattern = re.compile(r"\s*(?P<before>''')(?P<this>.*?)(?P<after>''')\s*\Z",re.S),'multiline' -## - -def _simplifyargs(argsline): - a = [] - for n in markoutercomma(argsline).split('@,@'): - for r in '(),': - n = n.replace(r,'_') - a.append(n) - return ','.join(a) - 
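`_simplifyargs` depends on `markoutercomma`, defined elsewhere in this module, which tags only the commas at nesting depth zero with `@,@` so an argument list can be split without tearing apart array references like `y(n,m)`. A runnable sketch of the idea, where `mark_outer_comma` is a simplified stand-in (the real helper also tracks brackets and quoted strings):

```python
def mark_outer_comma(line):
    # Simplified stand-in for f2py's markoutercomma: tag only commas that
    # sit at parenthesis depth zero so the caller can split on '@,@'.
    out, depth = [], 0
    for c in line:
        if c == '(':
            depth += 1
        elif c == ')':
            depth -= 1
        if c == ',' and depth == 0:
            out.append('@,@')
        else:
            out.append(c)
    return ''.join(out)

def simplify_args(argsline):
    # Mirrors _simplifyargs above: each top-level argument is reduced to a
    # bare identifier by mapping '(', ')' and ',' to '_'.
    a = []
    for n in mark_outer_comma(argsline).split('@,@'):
        for r in '(),':
            n = n.replace(r, '_')
        a.append(n)
    return ','.join(a)

print(simplify_args('x, y(n,m), z'))  # 'x, y_n_m_, z'
```

The flattened names are only used to rebuild a synthetic `callfun` statement in `crackline`, so losing the parenthesis structure is deliberate: the result just has to survive a second pass through the statement regexes.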
-crackline_re_1 = re.compile(r'\s*(?P<result>\b[a-z]+[\w]*\b)\s*[=].*',re.I) -def crackline(line,reset=0): - """ - reset=-1 --- initialize - reset=0 --- crack the line - reset=1 --- final check if mismatch of blocks occured - - Cracked data is saved in grouplist[0]. - """ - global beginpattern,groupcounter,groupname,groupcache,grouplist,gotnextfile,\ - filepositiontext,currentfilename,neededmodule,expectbegin,skipblocksuntil,\ - skipemptyends,previous_context - if ';' in line and not (f2pyenhancementspattern[0].match(line) or - multilinepattern[0].match(line)): - for l in line.split(';'): - assert reset==0,`reset` # XXX: non-zero reset values need testing - crackline(l,reset) - return - if reset<0: - groupcounter=0 - groupname={groupcounter:''} - groupcache={groupcounter:{}} - grouplist={groupcounter:[]} - groupcache[groupcounter]['body']=[] - groupcache[groupcounter]['vars']={} - groupcache[groupcounter]['block']='' - groupcache[groupcounter]['name']='' - neededmodule=-1 - skipblocksuntil=-1 - return - if reset>0: - fl=0 - if f77modulename and neededmodule==groupcounter: fl=2 - while groupcounter>fl: - outmess('crackline: groupcounter=%s groupname=%s\n'%(`groupcounter`,`groupname`)) - outmess('crackline: Mismatch of blocks encountered. 
Trying to fix it by assuming "end" statement.\n') - grouplist[groupcounter-1].append(groupcache[groupcounter]) - grouplist[groupcounter-1][-1]['body']=grouplist[groupcounter] - del grouplist[groupcounter] - groupcounter=groupcounter-1 - if f77modulename and neededmodule==groupcounter: - grouplist[groupcounter-1].append(groupcache[groupcounter]) - grouplist[groupcounter-1][-1]['body']=grouplist[groupcounter] - del grouplist[groupcounter] - groupcounter=groupcounter-1 # end interface - grouplist[groupcounter-1].append(groupcache[groupcounter]) - grouplist[groupcounter-1][-1]['body']=grouplist[groupcounter] - del grouplist[groupcounter] - groupcounter=groupcounter-1 # end module - neededmodule=-1 - return - if line=='': return - flag=0 - for pat in [dimensionpattern,externalpattern,intentpattern,optionalpattern, - requiredpattern, - parameterpattern,datapattern,publicpattern,privatepattern, - intrisicpattern, - endifpattern,endpattern, - formatpattern, - beginpattern,functionpattern,subroutinepattern, - implicitpattern,typespattern,commonpattern, - callpattern,usepattern,containspattern, - entrypattern, - f2pyenhancementspattern, - multilinepattern - ]: - m = pat[0].match(line) - if m: - break - flag=flag+1 - if not m: - re_1 = crackline_re_1 - if 0<=skipblocksuntil<=groupcounter:return - if 'externals' in groupcache[groupcounter]: - for name in groupcache[groupcounter]['externals']: - if name in invbadnames: - name=invbadnames[name] - if 'interfaced' in groupcache[groupcounter] and name in groupcache[groupcounter]['interfaced']: - continue - m1=re.match(r'(?P<before>[^"]*)\b%s\b\s*@\(@(?P<args>[^@]*)@\)@.*\Z'%name,markouterparen(line),re.I) - if m1: - m2 = re_1.match(m1.group('before')) - a = _simplifyargs(m1.group('args')) - if m2: - line='callfun %s(%s) result (%s)'%(name,a,m2.group('result')) - else: line='callfun %s(%s)'%(name,a) - m = callfunpattern[0].match(line) - if not m: - outmess('crackline: could not resolve function call for line=%s.\n'%`line`) - return - 
analyzeline(m,'callfun',line) - return - if verbose>1: - previous_context = None - outmess('crackline:%d: No pattern for line\n'%(groupcounter)) - return - elif pat[1]=='end': - if 0<=skipblocksuntil(@\(@.*?@\)@|[*][\d*]+|[*]\s*@\(@.*?@\)@|))(?P<after>.*)\Z',re.I) -nameargspattern=re.compile(r'\s*(?P<name>\b[\w$]+\b)\s*(@\(@\s*(?P<args>[\w\s,]*)\s*@\)@|)\s*(result(\s*@\(@\s*(?P<result>\b[\w$]+\b)\s*@\)@|))*\s*\Z',re.I) -callnameargspattern=re.compile(r'\s*(?P<name>\b[\w$]+\b)\s*@\(@\s*(?P<args>.*)\s*@\)@\s*\Z',re.I) -real16pattern = re.compile(r'([-+]?(?:\d+(?:\.\d*)?|\d*\.\d+))[dD]((?:[-+]?\d+)?)') -real8pattern = re.compile(r'([-+]?((?:\d+(?:\.\d*)?|\d*\.\d+))[eE]((?:[-+]?\d+)?)|(\d+\.\d*))') - -_intentcallbackpattern = re.compile(r'intent\s*\(.*?\bcallback\b',re.I) -def _is_intent_callback(vdecl): - for a in vdecl.get('attrspec',[]): - if _intentcallbackpattern.match(a): - return 1 - return 0 - -def _resolvenameargspattern(line): - line = markouterparen(line) - m1=nameargspattern.match(line) - if m1: return m1.group('name'),m1.group('args'),m1.group('result') - m1=callnameargspattern.match(line) - if m1: return m1.group('name'),m1.group('args'),None - return None,[],None - -def analyzeline(m,case,line): - global groupcounter,groupname,groupcache,grouplist,filepositiontext,\ - currentfilename,f77modulename,neededinterface,neededmodule,expectbegin,\ - gotnextfile,previous_context - block=m.group('this') - if case != 'multiline': - previous_context = None - if expectbegin and case not in ['begin','call','callfun','type'] \ - and not skipemptyends and groupcounter<1: - newname=os.path.basename(currentfilename).split('.')[0] - outmess('analyzeline: no group yet. 
Creating program group with name "%s".\n'%newname) - gotnextfile=0 - groupcounter=groupcounter+1 - groupname[groupcounter]='program' - groupcache[groupcounter]={} - grouplist[groupcounter]=[] - groupcache[groupcounter]['body']=[] - groupcache[groupcounter]['vars']={} - groupcache[groupcounter]['block']='program' - groupcache[groupcounter]['name']=newname - groupcache[groupcounter]['from']='fromsky' - expectbegin=0 - if case in ['begin','call','callfun']: - # Crack line => block,name,args,result - block = block.lower() - if re.match(r'block\s*data',block,re.I): block='block data' - if re.match(r'python\s*module',block,re.I): block='python module' - name,args,result = _resolvenameargspattern(m.group('after')) - if name is None: - if block=='block data': - name = '_BLOCK_DATA_' - else: - name = '' - if block not in ['interface','block data']: - outmess('analyzeline: No name/args pattern found for line.\n') - - previous_context = (block,name,groupcounter) - if args: args=rmbadname([x.strip() for x in markoutercomma(args).split('@,@')]) - else: args=[] - if '' in args: - while '' in args: - args.remove('') - outmess('analyzeline: argument list is malformed (missing argument).\n') - - # end of crack line => block,name,args,result - needmodule=0 - needinterface=0 - - if case in ['call','callfun']: - needinterface=1 - if 'args' not in groupcache[groupcounter]: - return - if name not in groupcache[groupcounter]['args']: - return - for it in grouplist[groupcounter]: - if it['name']==name: - return - if name in groupcache[groupcounter]['interfaced']: - return - block={'call':'subroutine','callfun':'function'}[case] - if f77modulename and neededmodule==-1 and groupcounter<=1: - neededmodule=groupcounter+2 - needmodule=1 - needinterface=1 - # Create new block(s) - groupcounter=groupcounter+1 - groupcache[groupcounter]={} - grouplist[groupcounter]=[] - if needmodule: - if verbose>1: - outmess('analyzeline: Creating module block %s\n'%`f77modulename`,0) - 
groupname[groupcounter]='module' - groupcache[groupcounter]['block']='python module' - groupcache[groupcounter]['name']=f77modulename - groupcache[groupcounter]['from']='' - groupcache[groupcounter]['body']=[] - groupcache[groupcounter]['externals']=[] - groupcache[groupcounter]['interfaced']=[] - groupcache[groupcounter]['vars']={} - groupcounter=groupcounter+1 - groupcache[groupcounter]={} - grouplist[groupcounter]=[] - if needinterface: - if verbose>1: - outmess('analyzeline: Creating additional interface block (groupcounter=%s).\n' % (groupcounter),0) - groupname[groupcounter]='interface' - groupcache[groupcounter]['block']='interface' - groupcache[groupcounter]['name']='unknown_interface' - groupcache[groupcounter]['from']='%s:%s'%(groupcache[groupcounter-1]['from'],groupcache[groupcounter-1]['name']) - groupcache[groupcounter]['body']=[] - groupcache[groupcounter]['externals']=[] - groupcache[groupcounter]['interfaced']=[] - groupcache[groupcounter]['vars']={} - groupcounter=groupcounter+1 - groupcache[groupcounter]={} - grouplist[groupcounter]=[] - groupname[groupcounter]=block - groupcache[groupcounter]['block']=block - if not name: name='unknown_'+block - groupcache[groupcounter]['prefix']=m.group('before') - groupcache[groupcounter]['name']=rmbadname1(name) - groupcache[groupcounter]['result']=result - if groupcounter==1: - groupcache[groupcounter]['from']=currentfilename - else: - if f77modulename and groupcounter==3: - groupcache[groupcounter]['from']='%s:%s'%(groupcache[groupcounter-1]['from'],currentfilename) - else: - groupcache[groupcounter]['from']='%s:%s'%(groupcache[groupcounter-1]['from'],groupcache[groupcounter-1]['name']) - for k in groupcache[groupcounter].keys(): - if not groupcache[groupcounter][k]: del groupcache[groupcounter][k] - groupcache[groupcounter]['args']=args - groupcache[groupcounter]['body']=[] - groupcache[groupcounter]['externals']=[] - groupcache[groupcounter]['interfaced']=[] - groupcache[groupcounter]['vars']={} - 
groupcache[groupcounter]['entry']={} - # end of creation - if block=='type': - groupcache[groupcounter]['varnames'] = [] - - if case in ['call','callfun']: # set parents variables - if name not in groupcache[groupcounter-2]['externals']: - groupcache[groupcounter-2]['externals'].append(name) - groupcache[groupcounter]['vars']=copy.deepcopy(groupcache[groupcounter-2]['vars']) - #try: del groupcache[groupcounter]['vars'][groupcache[groupcounter-2]['name']] - #except: pass - try: del groupcache[groupcounter]['vars'][name][groupcache[groupcounter]['vars'][name]['attrspec'].index('external')] - except: pass - if block in ['function','subroutine']: # set global attributes - try: groupcache[groupcounter]['vars'][name]=appenddecl(groupcache[groupcounter]['vars'][name],groupcache[groupcounter-2]['vars']['']) - except: pass - if case=='callfun': # return type - if result and result in groupcache[groupcounter]['vars']: - if not name==result: - groupcache[groupcounter]['vars'][name]=appenddecl(groupcache[groupcounter]['vars'][name],groupcache[groupcounter]['vars'][result]) - #if groupcounter>1: # name is interfaced - try: groupcache[groupcounter-2]['interfaced'].append(name) - except: pass - if block=='function': - t=typespattern[0].match(m.group('before')+' '+name) - if t: - typespec,selector,attr,edecl=cracktypespec0(t.group('this'),t.group('after')) - updatevars(typespec,selector,attr,edecl) - if case in ['call','callfun']: - grouplist[groupcounter-1].append(groupcache[groupcounter]) - grouplist[groupcounter-1][-1]['body']=grouplist[groupcounter] - del grouplist[groupcounter] - groupcounter=groupcounter-1 # end routine - grouplist[groupcounter-1].append(groupcache[groupcounter]) - grouplist[groupcounter-1][-1]['body']=grouplist[groupcounter] - del grouplist[groupcounter] - groupcounter=groupcounter-1 # end interface - elif case=='entry': - name,args,result=_resolvenameargspattern(m.group('after')) - if name is not None: - if args: - args=rmbadname([x.strip() for x in 
markoutercomma(args).split('@,@')]) - else: args=[] - assert result is None,`result` - groupcache[groupcounter]['entry'][name] = args - previous_context = ('entry',name,groupcounter) - elif case=='type': - typespec,selector,attr,edecl=cracktypespec0(block,m.group('after')) - last_name = updatevars(typespec,selector,attr,edecl) - if last_name is not None: - previous_context = ('variable',last_name,groupcounter) - elif case in ['dimension','intent','optional','required','external','public','private','intrisic']: - edecl=groupcache[groupcounter]['vars'] - ll=m.group('after').strip() - i=ll.find('::') - if i<0 and case=='intent': - i=markouterparen(ll).find('@)@')-2 - ll=ll[:i+1]+'::'+ll[i+1:] - i=ll.find('::') - if ll[i:]=='::' and 'args' in groupcache[groupcounter]: - outmess('All arguments will have attribute %s%s\n'%(m.group('this'),ll[:i])) - ll = ll + ','.join(groupcache[groupcounter]['args']) - if i<0:i=0;pl='' - else: pl=ll[:i].strip();ll=ll[i+2:] - ch = markoutercomma(pl).split('@,@') - if len(ch)>1: - pl = ch[0] - outmess('analyzeline: cannot handle multiple attributes without type specification. Ignoring %r.\n' % (','.join(ch[1:]))) - last_name = None - - for e in [x.strip() for x in markoutercomma(ll).split('@,@')]: - m1=namepattern.match(e) - if not m1: - if case in ['public','private']: k='' - else: - print m.groupdict() - outmess('analyzeline: no name pattern found in %s statement for %s. 
Skipping.\n'%(case,`e`)) - continue - else: - k=rmbadname1(m1.group('name')) - if k not in edecl: - edecl[k]={} - if case=='dimension': - ap=case+m1.group('after') - if case=='intent': - ap=m.group('this')+pl - if _intentcallbackpattern.match(ap): - if k not in groupcache[groupcounter]['args']: - if groupcounter>1: - outmess('analyzeline: appending intent(callback) %s'\ - ' to %s arguments\n' % (k,groupcache[groupcounter]['name'])) - if '__user__' not in groupcache[groupcounter-2]['name']: - outmess('analyzeline: missing __user__ module (could be nothing)\n') - groupcache[groupcounter]['args'].append(k) - else: - errmess('analyzeline: intent(callback) %s is ignored' % (k)) - else: - errmess('analyzeline: intent(callback) %s is already'\ - ' in argument list' % (k)) - if case in ['optional','required','public','external','private','intrisic']: - ap=case - if 'attrspec' in edecl[k]: - edecl[k]['attrspec'].append(ap) - else: - edecl[k]['attrspec']=[ap] - if case=='external': - if groupcache[groupcounter]['block']=='program': - outmess('analyzeline: ignoring program arguments\n') - continue - if k not in groupcache[groupcounter]['args']: - #outmess('analyzeline: ignoring external %s (not in arguments list)\n'%(`k`)) - continue - if 'externals' not in groupcache[groupcounter]: - groupcache[groupcounter]['externals']=[] - groupcache[groupcounter]['externals'].append(k) - last_name = k - groupcache[groupcounter]['vars']=edecl - if last_name is not None: - previous_context = ('variable',last_name,groupcounter) - elif case=='parameter': - edecl=groupcache[groupcounter]['vars'] - ll=m.group('after').strip()[1:-1] - last_name = None - for e in markoutercomma(ll).split('@,@'): - try: - k,initexpr=[x.strip() for x in e.split('=')] - except: - outmess('analyzeline: could not extract name,expr in parameter statement "%s" of "%s"\n'%(e,ll));continue - params = get_parameters(edecl) - k=rmbadname1(k) - if k not in edecl: - edecl[k]={} - if '=' in edecl[k] and (not 
edecl[k]['=']==initexpr): - outmess('analyzeline: Overwriting the value of parameter "%s" ("%s") with "%s".\n'%(k,edecl[k]['='],initexpr)) - t = determineexprtype(initexpr,params) - if t: - if t.get('typespec')=='real': - tt = list(initexpr) - for m in real16pattern.finditer(initexpr): - tt[m.start():m.end()] = list(\ - initexpr[m.start():m.end()].lower().replace('d', 'e')) - initexpr = ''.join(tt) - elif t.get('typespec')=='complex': - initexpr = initexpr[1:].lower().replace('d','e').\ - replace(',','+1j*(') - try: - v = eval(initexpr,{},params) - except (SyntaxError,NameError,TypeError),msg: - errmess('analyzeline: Failed to evaluate %r. Ignoring: %s\n'\ - % (initexpr, msg)) - continue - edecl[k]['='] = repr(v) - if 'attrspec' in edecl[k]: - edecl[k]['attrspec'].append('parameter') - else: edecl[k]['attrspec']=['parameter'] - last_name = k - groupcache[groupcounter]['vars']=edecl - if last_name is not None: - previous_context = ('variable',last_name,groupcounter) - elif case=='implicit': - if m.group('after').strip().lower()=='none': - groupcache[groupcounter]['implicit']=None - elif m.group('after'): - if 'implicit' in groupcache[groupcounter]: - impl=groupcache[groupcounter]['implicit'] - else: impl={} - if impl is None: - outmess('analyzeline: Overwriting earlier "implicit none" statement.\n') - impl={} - for e in markoutercomma(m.group('after')).split('@,@'): - decl={} - m1=re.match(r'\s*(?P<this>.*?)\s*(\(\s*(?P<after>[a-z-, ]+)\s*\)\s*|)\Z',e,re.I) - if not m1: - outmess('analyzeline: could not extract info of implicit statement part "%s"\n'%(e));continue - m2=typespattern4implicit.match(m1.group('this')) - if not m2: - outmess('analyzeline: could not extract types pattern of implicit statement part "%s"\n'%(e));continue - typespec,selector,attr,edecl=cracktypespec0(m2.group('this'),m2.group('after')) - kindselect,charselect,typename=cracktypespec(typespec,selector) - decl['typespec']=typespec - decl['kindselector']=kindselect - decl['charselector']=charselect - 
decl['typename']=typename - for k in decl.keys(): - if not decl[k]: del decl[k] - for r in markoutercomma(m1.group('after')).split('@,@'): - if '-' in r: - try: begc,endc=[x.strip() for x in r.split('-')] - except: - outmess('analyzeline: expected "-" instead of "%s" in range list of implicit statement\n'%r);continue - else: begc=endc=r.strip() - if not len(begc)==len(endc)==1: - outmess('analyzeline: expected "-" instead of "%s" in range list of implicit statement (2)\n'%r);continue - for o in range(ord(begc),ord(endc)+1): - impl[chr(o)]=decl - groupcache[groupcounter]['implicit']=impl - elif case=='data': - ll=[] - dl='';il='';f=0;fc=1;inp=0 - for c in m.group('after'): - if not inp: - if c=="'": fc=not fc - if c=='/' and fc: f=f+1;continue - if c=='(': inp = inp + 1 - elif c==')': inp = inp - 1 - if f==0: dl=dl+c - elif f==1: il=il+c - elif f==2: - dl = dl.strip() - if dl.startswith(','): - dl = dl[1:].strip() - ll.append([dl,il]) - dl=c;il='';f=0 - if f==2: - dl = dl.strip() - if dl.startswith(','): - dl = dl[1:].strip() - ll.append([dl,il]) - vars={} - if 'vars' in groupcache[groupcounter]: - vars=groupcache[groupcounter]['vars'] - last_name = None - for l in ll: - l=[x.strip() for x in l] - if l[0][0]==',':l[0]=l[0][1:] - if l[0][0]=='(': - outmess('analyzeline: implied-DO list "%s" is not supported. Skipping.\n'%l[0]) - continue - #if '(' in l[0]: - # #outmess('analyzeline: ignoring this data statement.\n') - # continue - i=0;j=0;llen=len(l[1]) - for v in rmbadname([x.strip() for x in markoutercomma(l[0]).split('@,@')]): - if v[0]=='(': - outmess('analyzeline: implied-DO list "%s" is not supported. Skipping.\n'%v) - # XXX: subsequent init expressions may get wrong values. - # Ignoring since data statements are irrelevant for wrapping. 
- continue - fc=0 - while (i=3: - bn = bn.strip() - if not bn: bn='_BLNK_' - cl.append([bn,ol]) - f=f-2;bn='';ol='' - if f%2: bn=bn+c - else: ol=ol+c - bn = bn.strip() - if not bn: bn='_BLNK_' - cl.append([bn,ol]) - commonkey={} - if 'common' in groupcache[groupcounter]: - commonkey=groupcache[groupcounter]['common'] - for c in cl: - if c[0] in commonkey: - outmess('analyzeline: previously defined common block encountered. Skipping.\n') - continue - commonkey[c[0]]=[] - for i in [x.strip() for x in markoutercomma(c[1]).split('@,@')]: - if i: commonkey[c[0]].append(i) - groupcache[groupcounter]['common']=commonkey - previous_context = ('common',bn,groupcounter) - elif case=='use': - m1=re.match(r'\A\s*(?P<name>\b[\w]+\b)\s*((,(\s*\bonly\b\s*:|(?P<notonly>))\s*(?P<list>.*))|)\s*\Z',m.group('after'),re.I) - if m1: - mm=m1.groupdict() - if 'use' not in groupcache[groupcounter]: - groupcache[groupcounter]['use']={} - name=m1.group('name') - groupcache[groupcounter]['use'][name]={} - isonly=0 - if 'list' in mm and mm['list'] is not None: - if 'notonly' in mm and mm['notonly'] is None: - isonly=1 - groupcache[groupcounter]['use'][name]['only']=isonly - ll=[x.strip() for x in mm['list'].split(',')] - rl={} - for l in ll: - if '=' in l: - m2=re.match(r'\A\s*(?P<local>\b[\w]+\b)\s*=\s*>\s*(?P<use>\b[\w]+\b)\s*\Z',l,re.I) - if m2: rl[m2.group('local').strip()]=m2.group('use').strip() - else: - outmess('analyzeline: Not local=>use pattern found in %s\n'%`l`) - else: - rl[l]=l - groupcache[groupcounter]['use'][name]['map']=rl - else: - pass - - else: - print m.groupdict() - outmess('analyzeline: Could not crack the use statement.\n') - elif case in ['f2pyenhancements']: - if 'f2pyenhancements' not in groupcache[groupcounter]: - groupcache[groupcounter]['f2pyenhancements'] = {} - d = groupcache[groupcounter]['f2pyenhancements'] - if m.group('this')=='usercode' and 'usercode' in d: - if type(d['usercode']) is type(''): - d['usercode'] = [d['usercode']] - d['usercode'].append(m.group('after')) - else: - 
d[m.group('this')] = m.group('after') - elif case=='multiline': - if previous_context is None: - if verbose: - outmess('analyzeline: No context for multiline block.\n') - return - gc = groupcounter - #gc = previous_context[2] - appendmultiline(groupcache[gc], - previous_context[:2], - m.group('this')) - else: - if verbose>1: - print m.groupdict() - outmess('analyzeline: No code implemented for line.\n') - -def appendmultiline(group, context_name,ml): - if 'f2pymultilines' not in group: - group['f2pymultilines'] = {} - d = group['f2pymultilines'] - if context_name not in d: - d[context_name] = [] - d[context_name].append(ml) - return - -def cracktypespec0(typespec,ll): - selector=None - attr=None - if re.match(r'double\s*complex',typespec,re.I): typespec='double complex' - elif re.match(r'double\s*precision',typespec,re.I): typespec='double precision' - else: typespec=typespec.strip().lower() - m1=selectpattern.match(markouterparen(ll)) - if not m1: - outmess('cracktypespec0: no kind/char_selector pattern found for line.\n') - return - d=m1.groupdict() - for k in d.keys(): d[k]=unmarkouterparen(d[k]) - if typespec in ['complex','integer','logical','real','character','type']: - selector=d['this'] - ll=d['after'] - i=ll.find('::') - if i>=0: - attr=ll[:i].strip() - ll=ll[i+2:] - return typespec,selector,attr,ll -##### -namepattern=re.compile(r'\s*(?P<name>\b[\w]+\b)\s*(?P<after>.*)\s*\Z',re.I) -kindselector=re.compile(r'\s*(\(\s*(kind\s*=)?\s*(?P<kind>.*)\s*\)|[*]\s*(?P<kind2>.*?))\s*\Z',re.I) -charselector=re.compile(r'\s*(\((?P<lenkind>.*)\)|[*]\s*(?P<charlen>.*))\s*\Z',re.I) -lenkindpattern=re.compile(r'\s*(kind\s*=\s*(?P<kind>.*?)\s*(@,@\s*len\s*=\s*(?P<len>.*)|)|(len\s*=\s*|)(?P<len2>.*?)\s*(@,@\s*(kind\s*=\s*|)(?P<kind2>.*)|))\s*\Z',re.I) -lenarraypattern=re.compile(r'\s*(@\(@\s*(?!/)\s*(?P<array>.*?)\s*@\)@\s*[*]\s*(?P<len>.*?)|([*]\s*(?P<len2>.*?)|)\s*(@\(@\s*(?!/)\s*(?P<array2>.*?)\s*@\)@|))\s*(=\s*(?P<init>.*?)|(@\(@|)/\s*(?P<init2>.*?)\s*/(@\)@|)|)\s*\Z',re.I) -def removespaces(expr): - expr=expr.strip() - if len(expr)<=1: return expr - expr2=expr[0] - for i in 
range(1,len(expr)-1): - if expr[i]==' ' and \ - ((expr[i+1] in "()[]{}=+-/* ") or (expr[i-1] in "()[]{}=+-/* ")): continue - expr2=expr2+expr[i] - expr2=expr2+expr[-1] - return expr2 -def markinnerspaces(line): - l='';f=0 - cc='\'' - cc1='"' - cb='' - for c in line: - if cb=='\\' and c in ['\\','\'','"']: - l=l+c; - cb=c - continue - if f==0 and c in ['\'','"']: cc=c; cc1={'\'':'"','"':'\''}[c] - if c==cc:f=f+1 - elif c==cc:f=f-1 - elif c==' ' and f==1: l=l+'@_@'; continue - l=l+c;cb=c - return l -def updatevars(typespec,selector,attrspec,entitydecl): - global groupcache,groupcounter - last_name = None - kindselect,charselect,typename=cracktypespec(typespec,selector) - if attrspec: - attrspec=[x.strip() for x in markoutercomma(attrspec).split('@,@')] - l = [] - c = re.compile(r'(?P<start>[a-zA-Z]+)') - for a in attrspec: - m = c.match(a) - if m: - s = m.group('start').lower() - a = s + a[len(s):] - l.append(a) - attrspec = l - el=[x.strip() for x in markoutercomma(entitydecl).split('@,@')] - el1=[] - for e in el: - for e1 in [x.strip() for x in markoutercomma(removespaces(markinnerspaces(e)),comma=' ').split('@ @')]: - if e1: el1.append(e1.replace('@_@',' ')) - for e in el1: - m=namepattern.match(e) - if not m: - outmess('updatevars: no name pattern found for entity=%s. Skipping.\n'%(`e`)) - continue - ename=rmbadname1(m.group('name')) - edecl={} - if ename in groupcache[groupcounter]['vars']: - edecl=groupcache[groupcounter]['vars'][ename].copy() - not_has_typespec = 'typespec' not in edecl - if not_has_typespec: - edecl['typespec']=typespec - elif typespec and (not typespec==edecl['typespec']): - outmess('updatevars: attempt to change the type of "%s" ("%s") to "%s". 
Ignoring.\n' % (ename,edecl['typespec'],typespec)) - if 'kindselector' not in edecl: - edecl['kindselector']=copy.copy(kindselect) - elif kindselect: - for k in kindselect.keys(): - if k in edecl['kindselector'] and (not kindselect[k]==edecl['kindselector'][k]): - outmess('updatevars: attempt to change the kindselector "%s" of "%s" ("%s") to "%s". Ignoring.\n' % (k,ename,edecl['kindselector'][k],kindselect[k])) - else: edecl['kindselector'][k]=copy.copy(kindselect[k]) - if 'charselector' not in edecl and charselect: - if not_has_typespec: - edecl['charselector']=charselect - else: - errmess('updatevars:%s: attempt to change empty charselector to %r. Ignoring.\n' \ - %(ename,charselect)) - elif charselect: - for k in charselect.keys(): - if k in edecl['charselector'] and (not charselect[k]==edecl['charselector'][k]): - outmess('updatevars: attempt to change the charselector "%s" of "%s" ("%s") to "%s". Ignoring.\n' % (k,ename,edecl['charselector'][k],charselect[k])) - else: edecl['charselector'][k]=copy.copy(charselect[k]) - if 'typename' not in edecl: - edecl['typename']=typename - elif typename and (not edecl['typename']==typename): - outmess('updatevars: attempt to change the typename of "%s" ("%s") to "%s". 
Ignoring.\n' % (ename,edecl['typename'],typename)) - if 'attrspec' not in edecl: - edecl['attrspec']=copy.copy(attrspec) - elif attrspec: - for a in attrspec: - if a not in edecl['attrspec']: - edecl['attrspec'].append(a) - else: - edecl['typespec']=copy.copy(typespec) - edecl['kindselector']=copy.copy(kindselect) - edecl['charselector']=copy.copy(charselect) - edecl['typename']=typename - edecl['attrspec']=copy.copy(attrspec) - if m.group('after'): - m1=lenarraypattern.match(markouterparen(m.group('after'))) - if m1: - d1=m1.groupdict() - for lk in ['len','array','init']: - if d1[lk+'2'] is not None: d1[lk]=d1[lk+'2']; del d1[lk+'2'] - for k in d1.keys(): - if d1[k] is not None: d1[k]=unmarkouterparen(d1[k]) - else: del d1[k] - if 'len' in d1 and 'array' in d1: - if d1['len']=='': - d1['len']=d1['array'] - del d1['array'] - else: - d1['array']=d1['array']+','+d1['len'] - del d1['len'] - errmess('updatevars: "%s %s" is mapped to "%s %s(%s)"\n'%(typespec,e,typespec,ename,d1['array'])) - if 'array' in d1: - dm = 'dimension(%s)'%d1['array'] - if 'attrspec' not in edecl or (not edecl['attrspec']): - edecl['attrspec']=[dm] - else: - edecl['attrspec'].append(dm) - for dm1 in edecl['attrspec']: - if dm1[:9]=='dimension' and dm1!=dm: - del edecl['attrspec'][-1] - errmess('updatevars:%s: attempt to change %r to %r. Ignoring.\n' \ - % (ename,dm1,dm)) - break - - if 'len' in d1: - if typespec in ['complex','integer','logical','real']: - if ('kindselector' not in edecl) or (not edecl['kindselector']): - edecl['kindselector']={} - edecl['kindselector']['*']=d1['len'] - elif typespec == 'character': - if ('charselector' not in edecl) or (not edecl['charselector']): - edecl['charselector']={} - if 'len' in edecl['charselector']: - del edecl['charselector']['len'] - edecl['charselector']['*']=d1['len'] - if 'init' in d1: - if '=' in edecl and (not edecl['=']==d1['init']): - outmess('updatevars: attempt to change the init expression of "%s" ("%s") to "%s". 
Ignoring.\n' % (ename,edecl['='],d1['init'])) - else: - edecl['=']=d1['init'] - else: - outmess('updatevars: could not crack entity declaration "%s". Ignoring.\n'%(ename+m.group('after'))) - for k in edecl.keys(): - if not edecl[k]: - del edecl[k] - groupcache[groupcounter]['vars'][ename]=edecl - if 'varnames' in groupcache[groupcounter]: - groupcache[groupcounter]['varnames'].append(ename) - last_name = ename - return last_name - -def cracktypespec(typespec,selector): - kindselect=None - charselect=None - typename=None - if selector: - if typespec in ['complex','integer','logical','real']: - kindselect=kindselector.match(selector) - if not kindselect: - outmess('cracktypespec: no kindselector pattern found for %s\n'%(`selector`)) - return - kindselect=kindselect.groupdict() - kindselect['*']=kindselect['kind2'] - del kindselect['kind2'] - for k in kindselect.keys(): - if not kindselect[k]: del kindselect[k] - for k,i in kindselect.items(): - kindselect[k] = rmbadname1(i) - elif typespec=='character': - charselect=charselector.match(selector) - if not charselect: - outmess('cracktypespec: no charselector pattern found for %s\n'%(`selector`)) - return - charselect=charselect.groupdict() - charselect['*']=charselect['charlen'] - del charselect['charlen'] - if charselect['lenkind']: - lenkind=lenkindpattern.match(markoutercomma(charselect['lenkind'])) - lenkind=lenkind.groupdict() - for lk in ['len','kind']: - if lenkind[lk+'2']: - lenkind[lk]=lenkind[lk+'2'] - charselect[lk]=lenkind[lk] - del lenkind[lk+'2'] - del charselect['lenkind'] - for k in charselect.keys(): - if not charselect[k]: del charselect[k] - for k,i in charselect.items(): - charselect[k] = rmbadname1(i) - elif typespec=='type': - typename=re.match(r'\s*\(\s*(?P\w+)\s*\)',selector,re.I) - if typename: typename=typename.group('name') - else: outmess('cracktypespec: no typename found in %s\n'%(`typespec+selector`)) - else: - outmess('cracktypespec: no selector used for %s\n'%(`selector`)) - return 
kindselect,charselect,typename -###### -def setattrspec(decl,attr,force=0): - if not decl: - decl={} - if not attr: - return decl - if 'attrspec' not in decl: - decl['attrspec']=[attr] - return decl - if force: decl['attrspec'].append(attr) - if attr in decl['attrspec']: return decl - if attr=='static' and 'automatic' not in decl['attrspec']: - decl['attrspec'].append(attr) - elif attr=='automatic' and 'static' not in decl['attrspec']: - decl['attrspec'].append(attr) - elif attr=='public' and 'private' not in decl['attrspec']: - decl['attrspec'].append(attr) - elif attr=='private' and 'public' not in decl['attrspec']: - decl['attrspec'].append(attr) - else: - decl['attrspec'].append(attr) - return decl - -def setkindselector(decl,sel,force=0): - if not decl: - decl={} - if not sel: - return decl - if 'kindselector' not in decl: - decl['kindselector']=sel - return decl - for k in sel.keys(): - if force or k not in decl['kindselector']: - decl['kindselector'][k]=sel[k] - return decl - -def setcharselector(decl,sel,force=0): - if not decl: - decl={} - if not sel: - return decl - if 'charselector' not in decl: - decl['charselector']=sel - return decl - for k in sel.keys(): - if force or k not in decl['charselector']: - decl['charselector'][k]=sel[k] - return decl - -def getblockname(block,unknown='unknown'): - if 'name' in block: - return block['name'] - return unknown - -###### post processing - -def setmesstext(block): - global filepositiontext - try: - filepositiontext='In: %s:%s\n'%(block['from'],block['name']) - except: - pass - -def get_usedict(block): - usedict = {} - if 'parent_block' in block: - usedict = get_usedict(block['parent_block']) - if 'use' in block: - usedict.update(block['use']) - return usedict - -def get_useparameters(block, param_map=None): - global f90modulevars - if param_map is None: - param_map = {} - usedict = get_usedict(block) - if not usedict: - return param_map - for usename,mapping in usedict.items(): - usename = usename.lower() - if 
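The `cracktypespec` routine above splits a Fortran type selector such as `(kind=8)` or `(len=10,kind=1)` into kind/char dictionaries using regexes. A minimal sketch of the same idea — `KIND_RE`, `CHAR_RE`, and `crack_selector` are illustrative names with deliberately simplified patterns, not the real `kindselector`/`charselector` regexes from the file:

```python
import re

# Simplified stand-ins for crackfortran's kindselector/charselector
# patterns; the real patterns are considerably more permissive.
KIND_RE = re.compile(r'\s*\(\s*(?:kind\s*=\s*)?(?P<kind>\w+)\s*\)\s*', re.I)
CHAR_RE = re.compile(
    r'\s*\(\s*(?:len\s*=\s*)?(?P<len>\w+|\*|:)'
    r'(?:\s*,\s*(?:kind\s*=\s*)?(?P<kind>\w+))?\s*\)\s*', re.I)

def crack_selector(typespec, selector):
    """Return (kindselect, charselect) dicts for a type selector string."""
    if typespec in ('integer', 'real', 'complex', 'logical'):
        m = KIND_RE.match(selector)
        return ({'kind': m.group('kind')} if m else None), None
    if typespec == 'character':
        m = CHAR_RE.match(selector)
        if not m:
            return None, None
        # Drop empty groups, as the original does with its selectors.
        return None, {k: v for k, v in m.groupdict().items() if v}
    return None, None
```

As in the original, numeric types yield only a kind selector while `character` yields a length (and optional kind) selector.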
usename not in f90modulevars: - continue - mvars = f90modulevars[usename] - params = get_parameters(mvars) - if not params: - continue - # XXX: apply mapping - if mapping: - errmess('get_useparameters: mapping for %s not impl.' % (mapping)) - for k,v in params.items(): - if k in param_map: - outmess('get_useparameters: overriding parameter %s with'\ - ' value from module %s' % (`k`,`usename`)) - param_map[k] = v - return param_map - -def postcrack2(block,tab='',param_map=None): - global f90modulevars - if not f90modulevars: - return block - if type(block)==types.ListType: - ret = [] - for g in block: - g = postcrack2(g,tab=tab+'\t',param_map=param_map) - ret.append(g) - return ret - setmesstext(block) - outmess('%sBlock: %s\n'%(tab,block['name']),0) - - if param_map is None: - param_map = get_useparameters(block) - - if param_map is not None and 'vars' in block: - vars = block['vars'] - for n in vars.keys(): - var = vars[n] - if 'kindselector' in var: - kind = var['kindselector'] - if 'kind' in kind: - val = kind['kind'] - if val in param_map: - kind['kind'] = param_map[val] - new_body = [] - for b in block['body']: - b = postcrack2(b,tab=tab+'\t',param_map=param_map) - new_body.append(b) - block['body'] = new_body - - return block - -def postcrack(block,args=None,tab=''): - """ - TODO: - function return values - determine expression types if in argument list - """ - global usermodules,onlyfunctions - if type(block)==types.ListType: - gret=[] - uret=[] - for g in block: - setmesstext(g) - g=postcrack(g,tab=tab+'\t') - if 'name' in g and '__user__' in g['name']: # sort user routines to appear first - uret.append(g) - else: - gret.append(g) - return uret+gret - setmesstext(block) - if (not type(block)==types.DictType) and 'block' not in block: - raise Exception('postcrack: Expected block dictionary instead of ' + \ - str(block)) - if 'name' in block and not block['name']=='unknown_interface': - outmess('%sBlock: %s\n'%(tab,block['name']),0) - blocktype=block['block'] 
- block=analyzeargs(block) - block=analyzecommon(block) - block['vars']=analyzevars(block) - block['sortvars']=sortvarnames(block['vars']) - if 'args' in block and block['args']: - args=block['args'] - block['body']=analyzebody(block,args,tab=tab) - - userisdefined=[] -## fromuser = [] - if 'use' in block: - useblock=block['use'] - for k in useblock.keys(): - if '__user__' in k: - userisdefined.append(k) -## if 'map' in useblock[k]: -## for n in useblock[k]['map'].values(): -## if n not in fromuser: fromuser.append(n) - else: useblock={} - name='' - if 'name' in block: - name=block['name'] - if 'externals' in block and block['externals']:# and not userisdefined: # Build a __user__ module - interfaced=[] - if 'interfaced' in block: - interfaced=block['interfaced'] - mvars=copy.copy(block['vars']) - if name: - mname=name+'__user__routines' - else: - mname='unknown__user__routines' - if mname in userisdefined: - i=1 - while '%s_%i'%(mname,i) in userisdefined: i=i+1 - mname='%s_%i'%(mname,i) - interface={'block':'interface','body':[],'vars':{},'name':name+'_user_interface'} - for e in block['externals']: -## if e in fromuser: -## outmess(' Skipping %s that is defined explicitly in another use statement\n'%(`e`)) -## continue - if e in interfaced: - edef=[] - j=-1 - for b in block['body']: - j=j+1 - if b['block']=='interface': - i=-1 - for bb in b['body']: - i=i+1 - if 'name' in bb and bb['name']==e: - edef=copy.copy(bb) - del b['body'][i] - break - if edef: - if not b['body']: del block['body'][j] - del interfaced[interfaced.index(e)] - break - interface['body'].append(edef) - else: - if e in mvars and not isexternal(mvars[e]): - interface['vars'][e]=mvars[e] - if interface['vars'] or interface['body']: - block['interfaced']=interfaced - mblock={'block':'python module','body':[interface],'vars':{},'name':mname,'interfaced':block['externals']} - useblock[mname]={} - usermodules.append(mblock) - if useblock: - block['use']=useblock - return block - -def 
sortvarnames(vars): - indep = [] - dep = [] - for v in vars.keys(): - if 'depend' in vars[v] and vars[v]['depend']: - dep.append(v) - #print '%s depends on %s'%(v,vars[v]['depend']) - else: indep.append(v) - n = len(dep) - i = 0 - while dep: #XXX: How to catch dependence cycles correctly? - v = dep[0] - fl = 0 - for w in dep[1:]: - if w in vars[v]['depend']: - fl = 1 - break - if fl: - dep = dep[1:]+[v] - i = i + 1 - if i>n: - errmess('sortvarnames: failed to compute dependencies because' - ' of cyclic dependencies between ' - +', '.join(dep)+'\n') - indep = indep + dep - break - else: - indep.append(v) - dep = dep[1:] - n = len(dep) - i = 0 - #print indep - return indep - -def analyzecommon(block): - if not hascommon(block): return block - commonvars=[] - for k in block['common'].keys(): - comvars=[] - for e in block['common'][k]: - m=re.match(r'\A\s*\b(?P.*?)\b\s*(\((?P.*?)\)|)\s*\Z',e,re.I) - if m: - dims=[] - if m.group('dims'): - dims=[x.strip() for x in markoutercomma(m.group('dims')).split('@,@')] - n=m.group('name').strip() - if n in block['vars']: - if 'attrspec' in block['vars'][n]: - block['vars'][n]['attrspec'].append('dimension(%s)'%(','.join(dims))) - else: - block['vars'][n]['attrspec']=['dimension(%s)'%(','.join(dims))] - else: - if dims: - block['vars'][n]={'attrspec':['dimension(%s)'%(','.join(dims))]} - else: block['vars'][n]={} - if n not in commonvars: commonvars.append(n) - else: - n=e - errmess('analyzecommon: failed to extract "[()]" from "%s" in common /%s/.\n'%(e,k)) - comvars.append(n) - block['common'][k]=comvars - if 'commonvars' not in block: - block['commonvars']=commonvars - else: - block['commonvars']=block['commonvars']+commonvars - return block - -def analyzebody(block,args,tab=''): - global usermodules,skipfuncs,onlyfuncs,f90modulevars - setmesstext(block) - body=[] - for b in block['body']: - b['parent_block'] = block - if b['block'] in ['function','subroutine']: - if args is not None and b['name'] not in args: - continue - 
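`sortvarnames` orders variables so that each one follows its dependencies, rotating the queue until it stabilizes and bailing out when a full pass makes no progress (a dependency cycle). The same rotate-until-stable scheme can be sketched in a few lines (`sort_by_depend` is an illustrative name, and the cycle handling mirrors the original's "append the cyclic rest unsorted" fallback):

```python
def sort_by_depend(vars):
    """Order names so every name follows the names it depends on,
    using a rotate-until-stable pass like crackfortran's sortvarnames;
    cyclic groups are appended unsorted."""
    indep = [v for v in vars if not vars[v].get('depend')]
    dep = [v for v in vars if vars[v].get('depend')]
    n, i = len(dep), 0
    while dep:
        v = dep[0]
        if any(w in vars[v].get('depend', []) for w in dep[1:]):
            dep = dep[1:] + [v]      # v is still blocked: rotate it to the back
            i += 1
            if i > n:                # a whole revolution with no progress: cycle
                indep += dep
                break
        else:
            indep.append(v)          # all of v's dependencies already emitted
            dep = dep[1:]
            n, i = len(dep), 0
    return indep
```

For `{'a': depends on 'n', 'n': independent}` this yields `['n', 'a']`, so `n` can be declared before the array it dimensions.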
else: - as_=b['args'] - if b['name'] in skipfuncs: - continue - if onlyfuncs and b['name'] not in onlyfuncs: - continue - else: as_=args - b=postcrack(b,as_,tab=tab+'\t') - if b['block']=='interface' and not b['body']: - if 'f2pyenhancements' not in b: - continue - if b['block'].replace(' ','')=='pythonmodule': - usermodules.append(b) - else: - if b['block']=='module': - f90modulevars[b['name']] = b['vars'] - body.append(b) - return body - -def buildimplicitrules(block): - setmesstext(block) - implicitrules=defaultimplicitrules - attrrules={} - if 'implicit' in block: - if block['implicit'] is None: - implicitrules=None - if verbose>1: - outmess('buildimplicitrules: no implicit rules for routine %s.\n'%`block['name']`) - else: - for k in block['implicit'].keys(): - if block['implicit'][k].get('typespec') not in ['static','automatic']: - implicitrules[k]=block['implicit'][k] - else: - attrrules[k]=block['implicit'][k]['typespec'] - return implicitrules,attrrules - -def myeval(e,g=None,l=None): - r = eval(e,g,l) - if type(r) in [type(0),type(0.0)]: - return r - raise ValueError('r=%r' % (r)) - -getlincoef_re_1 = re.compile(r'\A\b\w+\b\Z',re.I) -def getlincoef(e,xset): # e = a*x+b ; x in xset - try: - c = int(myeval(e,{},{})) - return 0,c,None - except: pass - if getlincoef_re_1.match(e): - return 1,0,e - len_e = len(e) - for x in xset: - if len(x)>len_e: continue - if re.search(r'\w\s*\([^)]*\b'+x+r'\b', e): - # skip function calls having x as an argument, e.g max(1, x) - continue - re_1 = re.compile(r'(?P.*?)\b'+x+r'\b(?P.*)',re.I) - m = re_1.match(e) - if m: - try: - m1 = re_1.match(e) - while m1: - ee = '%s(%s)%s'%(m1.group('before'),0,m1.group('after')) - m1 = re_1.match(ee) - b = myeval(ee,{},{}) - m1 = re_1.match(e) - while m1: - ee = '%s(%s)%s'%(m1.group('before'),1,m1.group('after')) - m1 = re_1.match(ee) - a = myeval(ee,{},{}) - b - m1 = re_1.match(e) - while m1: - ee = '%s(%s)%s'%(m1.group('before'),0.5,m1.group('after')) - m1 = re_1.match(ee) - c = 
myeval(ee,{},{}) - # computing another point to be sure that expression is linear - m1 = re_1.match(e) - while m1: - ee = '%s(%s)%s'%(m1.group('before'),1.5,m1.group('after')) - m1 = re_1.match(ee) - c2 = myeval(ee,{},{}) - if (a*0.5+b==c and a*1.5+b==c2): - return a,b,x - except: pass - break - return None,None,None - -_varname_match = re.compile(r'\A[a-z]\w*\Z').match -def getarrlen(dl,args,star='*'): - edl = [] - try: edl.append(myeval(dl[0],{},{})) - except: edl.append(dl[0]) - try: edl.append(myeval(dl[1],{},{})) - except: edl.append(dl[1]) - if type(edl[0]) is type(0): - p1 = 1-edl[0] - if p1==0: d = str(dl[1]) - elif p1<0: d = '%s-%s'%(dl[1],-p1) - else: d = '%s+%s'%(dl[1],p1) - elif type(edl[1]) is type(0): - p1 = 1+edl[1] - if p1==0: d='-(%s)' % (dl[0]) - else: d='%s-(%s)' % (p1,dl[0]) - else: d = '%s-(%s)+1'%(dl[1],dl[0]) - try: return `myeval(d,{},{})`,None,None - except: pass - d1,d2=getlincoef(dl[0],args),getlincoef(dl[1],args) - if None not in [d1[0],d2[0]]: - if (d1[0],d2[0])==(0,0): - return `d2[1]-d1[1]+1`,None,None - b = d2[1] - d1[1] + 1 - d1 = (d1[0],0,d1[2]) - d2 = (d2[0],b,d2[2]) - if d1[0]==0 and d2[2] in args: - if b<0: return '%s * %s - %s'%(d2[0],d2[2],-b),d2[2],'+%s)/(%s)'%(-b,d2[0]) - elif b: return '%s * %s + %s'%(d2[0],d2[2],b),d2[2],'-%s)/(%s)'%(b,d2[0]) - else: return '%s * %s'%(d2[0],d2[2]),d2[2],')/(%s)'%(d2[0]) - if d2[0]==0 and d1[2] in args: - - if b<0: return '%s * %s - %s'%(-d1[0],d1[2],-b),d1[2],'+%s)/(%s)'%(-b,-d1[0]) - elif b: return '%s * %s + %s'%(-d1[0],d1[2],b),d1[2],'-%s)/(%s)'%(b,-d1[0]) - else: return '%s * %s'%(-d1[0],d1[2]),d1[2],')/(%s)'%(-d1[0]) - if d1[2]==d2[2] and d1[2] in args: - a = d2[0] - d1[0] - if not a: return `b`,None,None - if b<0: return '%s * %s - %s'%(a,d1[2],-b),d2[2],'+%s)/(%s)'%(-b,a) - elif b: return '%s * %s + %s'%(a,d1[2],b),d2[2],'-%s)/(%s)'%(b,a) - else: return '%s * %s'%(a,d1[2]),d2[2],')/(%s)'%(a) - if d1[0]==d2[0]==1: - c = str(d1[2]) - if c not in args: - if _varname_match(c): - 
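`getlincoef` recovers `a` and `b` from an expression assumed to be `a*x + b` by substituting sample values and then checking two extra points (0.5 and 1.5) to confirm the expression really is linear. A compact sketch of that probe-and-verify trick (`lincoef` is an illustrative name; the original substitutes textually via regex rather than through an eval namespace):

```python
def lincoef(expr, name):
    """Recover (a, b) assuming expr == a*name + b, by evaluating at
    0 and 1 and verifying the linear fit at 0.5 and 1.5, the same
    check crackfortran's getlincoef performs.  Returns None if the
    expression is not linear in `name`."""
    def at(x):
        return eval(expr, {}, {name: x})
    b = at(0)
    a = at(1) - b
    if at(0.5) == a * 0.5 + b and at(1.5) == a * 1.5 + b:
        return a, b
    return None
```

The two verification points are what let the original reject expressions like `n*n` that happen to agree with a line at 0 and 1 but diverge elsewhere.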
outmess('\tgetarrlen:variable "%s" undefined\n' % (c)) - c = '(%s)'%c - if b==0: d='%s-%s' % (d2[2],c) - elif b<0: d='%s-%s-%s' % (d2[2],c,-b) - else: d='%s-%s+%s' % (d2[2],c,b) - elif d1[0]==0: - c2 = str(d2[2]) - if c2 not in args: - if _varname_match(c2): - outmess('\tgetarrlen:variable "%s" undefined\n' % (c2)) - c2 = '(%s)'%c2 - if d2[0]==1: pass - elif d2[0]==-1: c2='-%s' %c2 - else: c2='%s*%s'%(d2[0],c2) - - if b==0: d=c2 - elif b<0: d='%s-%s' % (c2,-b) - else: d='%s+%s' % (c2,b) - elif d2[0]==0: - c1 = str(d1[2]) - if c1 not in args: - if _varname_match(c1): - outmess('\tgetarrlen:variable "%s" undefined\n' % (c1)) - c1 = '(%s)'%c1 - if d1[0]==1: c1='-%s'%c1 - elif d1[0]==-1: c1='+%s'%c1 - elif d1[0]<0: c1='+%s*%s'%(-d1[0],c1) - else: c1 = '-%s*%s' % (d1[0],c1) - - if b==0: d=c1 - elif b<0: d='%s-%s' % (c1,-b) - else: d='%s+%s' % (c1,b) - else: - c1 = str(d1[2]) - if c1 not in args: - if _varname_match(c1): - outmess('\tgetarrlen:variable "%s" undefined\n' % (c1)) - c1 = '(%s)'%c1 - if d1[0]==1: c1='-%s'%c1 - elif d1[0]==-1: c1='+%s'%c1 - elif d1[0]<0: c1='+%s*%s'%(-d1[0],c1) - else: c1 = '-%s*%s' % (d1[0],c1) - - c2 = str(d2[2]) - if c2 not in args: - if _varname_match(c2): - outmess('\tgetarrlen:variable "%s" undefined\n' % (c2)) - c2 = '(%s)'%c2 - if d2[0]==1: pass - elif d2[0]==-1: c2='-%s' %c2 - else: c2='%s*%s'%(d2[0],c2) - - if b==0: d='%s%s' % (c2,c1) - elif b<0: d='%s%s-%s' % (c2,c1,-b) - else: d='%s%s+%s' % (c2,c1,b) - return d,None,None - -word_pattern = re.compile(r'\b[a-z][\w$]*\b',re.I) - -def _get_depend_dict(name, vars, deps): - if name in vars: - words = vars[name].get('depend',[]) - - if '=' in vars[name] and not isstring(vars[name]): - for word in word_pattern.findall(vars[name]['=']): - if word not in words and word in vars: - words.append(word) - for word in words[:]: - for w in deps.get(word,[]) \ - or _get_depend_dict(word, vars, deps): - if w not in words: - words.append(w) - else: - outmess('_get_depend_dict: no dependence info for 
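All of `getarrlen`'s symbolic branches above compute one quantity: the length of a Fortran dimension `lower:upper`, which is `upper - lower + 1`. A trivial sketch covering the numeric case, with the symbolic fallback string the original also emits (`arrlen` is an illustrative name):

```python
def arrlen(lower, upper):
    """Length of a Fortran dimension lower:upper.  Numeric bounds are
    folded to a constant; otherwise a symbolic expression string is
    returned, as getarrlen does for non-constant bounds."""
    try:
        return str(int(upper) - int(lower) + 1)
    except (TypeError, ValueError):
        return '%s-(%s)+1' % (upper, lower)
```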
%s\n' % (`name`)) - words = [] - deps[name] = words - return words - -def _calc_depend_dict(vars): - names = vars.keys() - depend_dict = {} - for n in names: - _get_depend_dict(n, vars, depend_dict) - return depend_dict - -def get_sorted_names(vars): - """ - """ - depend_dict = _calc_depend_dict(vars) - names = [] - for name in depend_dict.keys(): - if not depend_dict[name]: - names.append(name) - del depend_dict[name] - while depend_dict: - for name, lst in depend_dict.items(): - new_lst = [n for n in lst if n in depend_dict] - if not new_lst: - names.append(name) - del depend_dict[name] - else: - depend_dict[name] = new_lst - return [name for name in names if name in vars] - -def _kind_func(string): - #XXX: return something sensible. - if string[0] in "'\"": - string = string[1:-1] - if real16pattern.match(string): - return 8 - elif real8pattern.match(string): - return 4 - return 'kind('+string+')' - -def _selected_int_kind_func(r): - #XXX: This should be processor dependent - m = 10**r - if m<=2**8: return 1 - if m<=2**16: return 2 - if m<=2**32: return 4 - if m<=2**64: return 8 - if m<=2**128: return 16 - return -1 - -def get_parameters(vars, global_params={}): - params = copy.copy(global_params) - g_params = copy.copy(global_params) - for name,func in [('kind',_kind_func), - ('selected_int_kind',_selected_int_kind_func), - ]: - if name not in g_params: - g_params[name] = func - param_names = [] - for n in get_sorted_names(vars): - if 'attrspec' in vars[n] and 'parameter' in vars[n]['attrspec']: - param_names.append(n) - kind_re = re.compile(r'\bkind\s*\(\s*(?P.*)\s*\)',re.I) - selected_int_kind_re = re.compile(r'\bselected_int_kind\s*\(\s*(?P.*)\s*\)',re.I) - for n in param_names: - if '=' in vars[n]: - v = vars[n]['='] - if islogical(vars[n]): - v = v.lower() - for repl in [ - ('.false.','False'), - ('.true.','True'), - #TODO: test .eq., .neq., etc replacements. 
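`_selected_int_kind_func` above maps a requested decimal range `r` onto the smallest kind (byte size) whose bit width can hold `10**r`, with the comment conceding the mapping is really processor dependent. The same table-driven lookup, written as a loop (this mirrors the source's thresholds, including its use of unsigned-looking bounds like `2**8` for kind 1):

```python
def selected_int_kind(r):
    """Smallest integer kind holding values up to 10**r, mirroring the
    processor-dependent mapping crackfortran assumes; -1 means no kind
    is large enough."""
    m = 10 ** r
    for bits, kind in ((8, 1), (16, 2), (32, 4), (64, 8), (128, 16)):
        if m <= 2 ** bits:
            return kind
    return -1
```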
- ]: - v = v.replace(*repl) - v = kind_re.sub(r'kind("\1")',v) - v = selected_int_kind_re.sub(r'selected_int_kind(\1)',v) - if isinteger(vars[n]) and not selected_int_kind_re.match(v): - v = v.split('_')[0] - if isdouble(vars[n]): - tt = list(v) - for m in real16pattern.finditer(v): - tt[m.start():m.end()] = list(\ - v[m.start():m.end()].lower().replace('d', 'e')) - v = ''.join(tt) - if iscomplex(vars[n]): - if v[0]=='(' and v[-1]==')': - l = markoutercomma(v[1:-1]).split('@,@') - print n,params - try: - params[n] = eval(v,g_params,params) - except Exception,msg: - params[n] = v - #print params - outmess('get_parameters: got "%s" on %s\n' % (msg,`v`)) - if isstring(vars[n]) and type(params[n]) is type(0): - params[n] = chr(params[n]) - nl = n.lower() - if nl!=n: - params[nl] = params[n] - else: - print vars[n] - outmess('get_parameters:parameter %s does not have value?!\n'%(`n`)) - return params - -def _eval_length(length,params): - if length in ['(:)','(*)','*']: - return '(*)' - return _eval_scalar(length,params) - -_is_kind_number = re.compile('\d+_').match - -def _eval_scalar(value,params): - if _is_kind_number(value): - value = value.split('_')[0] - try: - value = str(eval(value,{},params)) - except (NameError, SyntaxError): - return value - except Exception,msg: - errmess('"%s" in evaluating %r '\ - '(available names: %s)\n' \ - % (msg,value,params.keys())) - return value - -def analyzevars(block): - global f90modulevars - setmesstext(block) - implicitrules,attrrules=buildimplicitrules(block) - vars=copy.copy(block['vars']) - if block['block']=='function' and block['name'] not in vars: - vars[block['name']]={} - if '' in block['vars']: - del vars[''] - if 'attrspec' in block['vars']['']: - gen=block['vars']['']['attrspec'] - for n in vars.keys(): - for k in ['public','private']: - if k in gen: - vars[n]=setattrspec(vars[n],k) - svars=[] - args = block['args'] - for a in args: - try: - vars[a] - svars.append(a) - except KeyError: - pass - for n in vars.keys(): 
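`get_parameters` evaluates Fortran PARAMETER expressions by first translating them into Python syntax: logical constants become `True`/`False` and `d`-exponents become `e`-exponents before the string is handed to `eval`. A sketch of just that literal translation (`fortran_value` and `D_EXP` are illustrative names; the real routine also rewrites `kind(...)` and `selected_int_kind(...)` calls):

```python
import re

# A Fortran double-precision exponent: 1.5d2, .5D-1, 1d0, ...
D_EXP = re.compile(r'(\d+\.\d*|\.\d+|\d+)[dD]([+-]?\d+)')

def fortran_value(text):
    """Evaluate a Fortran constant expression in Python, assuming only
    the literal translations crackfortran performs here: logical
    constants and d-exponents."""
    text = text.lower().replace('.false.', 'False').replace('.true.', 'True')
    text = D_EXP.sub(r'\1e\2', text)
    return eval(text, {}, {})
```

So `1.5d2` is rewritten to `1.5e2` before evaluation, and `.TRUE.` becomes the Python constant `True`.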
- if n not in args: svars.append(n) - - params = get_parameters(vars, get_useparameters(block)) - - dep_matches = {} - name_match = re.compile(r'\w[\w\d_$]*').match - for v in vars.keys(): - m = name_match(v) - if m: - n = v[m.start():m.end()] - try: - dep_matches[n] - except KeyError: - dep_matches[n] = re.compile(r'.*\b%s\b'%(v),re.I).match - for n in svars: - if n[0] in attrrules.keys(): - vars[n]=setattrspec(vars[n],attrrules[n[0]]) - if 'typespec' not in vars[n]: - if not('attrspec' in vars[n] and 'external' in vars[n]['attrspec']): - if implicitrules: - ln0 = n[0].lower() - for k in implicitrules[ln0].keys(): - if k=='typespec' and implicitrules[ln0][k]=='undefined': - continue - if k not in vars[n]: - vars[n][k]=implicitrules[ln0][k] - elif k=='attrspec': - for l in implicitrules[ln0][k]: - vars[n]=setattrspec(vars[n],l) - elif n in block['args']: - outmess('analyzevars: typespec of variable %s is not defined in routine %s.\n'%(`n`,block['name'])) - - if 'charselector' in vars[n]: - if 'len' in vars[n]['charselector']: - l = vars[n]['charselector']['len'] - try: - l = str(eval(l,{},params)) - except: - pass - vars[n]['charselector']['len'] = l - - if 'kindselector' in vars[n]: - if 'kind' in vars[n]['kindselector']: - l = vars[n]['kindselector']['kind'] - try: - l = str(eval(l,{},params)) - except: - pass - vars[n]['kindselector']['kind'] = l - - savelindims = {} - if 'attrspec' in vars[n]: - attr=vars[n]['attrspec'] - attr.reverse() - vars[n]['attrspec']=[] - dim,intent,depend,check,note=None,None,None,None,None - for a in attr: - if a[:9]=='dimension': dim=(a[9:].strip())[1:-1] - elif a[:6]=='intent': intent=(a[6:].strip())[1:-1] - elif a[:6]=='depend': depend=(a[6:].strip())[1:-1] - elif a[:5]=='check': check=(a[5:].strip())[1:-1] - elif a[:4]=='note': note=(a[4:].strip())[1:-1] - else: vars[n]=setattrspec(vars[n],a) - if intent: - if 'intent' not in vars[n]: - vars[n]['intent']=[] - for c in [x.strip() for x in markoutercomma(intent).split('@,@')]: - if 
not c in vars[n]['intent']: - vars[n]['intent'].append(c) - intent=None - if note: - note=note.replace('\\n\\n','\n\n') - note=note.replace('\\n ','\n') - if 'note' not in vars[n]: - vars[n]['note']=[note] - else: - vars[n]['note'].append(note) - note=None - if depend is not None: - if 'depend' not in vars[n]: - vars[n]['depend']=[] - for c in rmbadname([x.strip() for x in markoutercomma(depend).split('@,@')]): - if c not in vars[n]['depend']: - vars[n]['depend'].append(c) - depend=None - if check is not None: - if 'check' not in vars[n]: - vars[n]['check']=[] - for c in [x.strip() for x in markoutercomma(check).split('@,@')]: - if not c in vars[n]['check']: - vars[n]['check'].append(c) - check=None - if dim and 'dimension' not in vars[n]: - vars[n]['dimension']=[] - for d in rmbadname([x.strip() for x in markoutercomma(dim).split('@,@')]): - star = '*' - if d==':': - star=':' - if d in params: - d = str(params[d]) - for p in params.keys(): - m = re.match(r'(?P.*?)\b'+p+r'\b(?P.*)',d,re.I) - if m: - #outmess('analyzevars:replacing parameter %s in %s (dimension of %s) with %s\n'%(`p`,`d`,`n`,`params[p]`)) - d = m.group('before')+str(params[p])+m.group('after') - if d==star: - dl = [star] - else: - dl=markoutercomma(d,':').split('@:@') - if len(dl)==2 and '*' in dl: # e.g. 
dimension(5:*) - dl = ['*'] - d = '*' - if len(dl)==1 and not dl[0]==star: dl = ['1',dl[0]] - if len(dl)==2: - d,v,di = getarrlen(dl,block['vars'].keys()) - if d[:4] == '1 * ': d = d[4:] - if di and di[-4:] == '/(1)': di = di[:-4] - if v: savelindims[d] = v,di - vars[n]['dimension'].append(d) - if 'dimension' in vars[n]: - if isintent_c(vars[n]): - shape_macro = 'shape' - else: - shape_macro = 'shape'#'fshape' - if isstringarray(vars[n]): - if 'charselector' in vars[n]: - d = vars[n]['charselector'] - if '*' in d: - d = d['*'] - errmess('analyzevars: character array "character*%s %s(%s)" is considered as "character %s(%s)"; "intent(c)" is forced.\n'\ - %(d,n, - ','.join(vars[n]['dimension']), - n,','.join(vars[n]['dimension']+[d]))) - vars[n]['dimension'].append(d) - del vars[n]['charselector'] - if 'intent' not in vars[n]: - vars[n]['intent'] = [] - if 'c' not in vars[n]['intent']: - vars[n]['intent'].append('c') - else: - errmess("analyzevars: charselector=%r unhandled." % (d)) - if 'check' not in vars[n] and 'args' in block and n in block['args']: - flag = 'depend' not in vars[n] - if flag: - vars[n]['depend']=[] - vars[n]['check']=[] - if 'dimension' in vars[n]: - #/----< no check - #vars[n]['check'].append('rank(%s)==%s'%(n,len(vars[n]['dimension']))) - i=-1; ni=len(vars[n]['dimension']) - for d in vars[n]['dimension']: - ddeps=[] # dependecies of 'd' - ad='' - pd='' - #origd = d - if d not in vars: - if d in savelindims: - pd,ad='(',savelindims[d][1] - d = savelindims[d][0] - else: - for r in block['args']: - #for r in block['vars'].keys(): - if r not in vars: - continue - if re.match(r'.*?\b'+r+r'\b',d,re.I): - ddeps.append(r) - if d in vars: - if 'attrspec' in vars[d]: - for aa in vars[d]['attrspec']: - if aa[:6]=='depend': - ddeps += aa[6:].strip()[1:-1].split(',') - if 'depend' in vars[d]: - ddeps=ddeps+vars[d]['depend'] - i=i+1 - if d in vars and ('depend' not in vars[d]) \ - and ('=' not in vars[d]) and (d not in vars[n]['depend']) \ - and 
l_or(isintent_in, isintent_inout, isintent_inplace)(vars[n]): - vars[d]['depend']=[n] - if ni>1: - vars[d]['=']='%s%s(%s,%s)%s'% (pd,shape_macro,n,i,ad) - else: - vars[d]['=']='%slen(%s)%s'% (pd,n,ad) - # /---< no check - if 1 and 'check' not in vars[d]: - if ni>1: - vars[d]['check']=['%s%s(%s,%i)%s==%s'\ - %(pd,shape_macro,n,i,ad,d)] - else: - vars[d]['check']=['%slen(%s)%s>=%s'%(pd,n,ad,d)] - if 'attrspec' not in vars[d]: - vars[d]['attrspec']=['optional'] - if ('optional' not in vars[d]['attrspec']) and\ - ('required' not in vars[d]['attrspec']): - vars[d]['attrspec'].append('optional') - elif d not in ['*',':']: - #/----< no check - #if ni>1: vars[n]['check'].append('shape(%s,%i)==%s'%(n,i,d)) - #else: vars[n]['check'].append('len(%s)>=%s'%(n,d)) - if flag: - if d in vars: - if n not in ddeps: - vars[n]['depend'].append(d) - else: - vars[n]['depend'] = vars[n]['depend'] + ddeps - elif isstring(vars[n]): - length='1' - if 'charselector' in vars[n]: - if '*' in vars[n]['charselector']: - length = _eval_length(vars[n]['charselector']['*'], - params) - vars[n]['charselector']['*']=length - elif 'len' in vars[n]['charselector']: - length = _eval_length(vars[n]['charselector']['len'], - params) - del vars[n]['charselector']['len'] - vars[n]['charselector']['*']=length - - if not vars[n]['check']: - del vars[n]['check'] - if flag and not vars[n]['depend']: - del vars[n]['depend'] - if '=' in vars[n]: - if 'attrspec' not in vars[n]: - vars[n]['attrspec']=[] - if ('optional' not in vars[n]['attrspec']) and \ - ('required' not in vars[n]['attrspec']): - vars[n]['attrspec'].append('optional') - if 'depend' not in vars[n]: - vars[n]['depend']=[] - for v,m in dep_matches.items(): - if m(vars[n]['=']): vars[n]['depend'].append(v) - if not vars[n]['depend']: del vars[n]['depend'] - if isscalar(vars[n]): - vars[n]['='] = _eval_scalar(vars[n]['='],params) - - for n in vars.keys(): - if n==block['name']: # n is block name - if 'note' in vars[n]: - block['note']=vars[n]['note'] - 
if block['block']=='function': - if 'result' in block and block['result'] in vars: - vars[n]=appenddecl(vars[n],vars[block['result']]) - if 'prefix' in block: - pr=block['prefix']; ispure=0; isrec=1 - pr1=pr.replace('pure','') - ispure=(not pr==pr1) - pr=pr1.replace('recursive','') - isrec=(not pr==pr1) - m=typespattern[0].match(pr) - if m: - typespec,selector,attr,edecl=cracktypespec0(m.group('this'),m.group('after')) - kindselect,charselect,typename=cracktypespec(typespec,selector) - vars[n]['typespec']=typespec - if kindselect: - if 'kind' in kindselect: - try: - kindselect['kind'] = eval(kindselect['kind'],{},params) - except: - pass - vars[n]['kindselector']=kindselect - if charselect: vars[n]['charselector']=charselect - if typename: vars[n]['typename']=typename - if ispure: vars[n]=setattrspec(vars[n],'pure') - if isrec: vars[n]=setattrspec(vars[n],'recursive') - else: - outmess('analyzevars: prefix (%s) were not used\n'%`block['prefix']`) - if not block['block'] in ['module','pythonmodule','python module','block data']: - if 'commonvars' in block: - neededvars=copy.copy(block['args']+block['commonvars']) - else: - neededvars=copy.copy(block['args']) - for n in vars.keys(): - if l_or(isintent_callback,isintent_aux)(vars[n]): - neededvars.append(n) - if 'entry' in block: - neededvars.extend(block['entry'].keys()) - for k in block['entry'].keys(): - for n in block['entry'][k]: - if n not in neededvars: - neededvars.append(n) - if block['block']=='function': - if 'result' in block: - neededvars.append(block['result']) - else: - neededvars.append(block['name']) - if block['block'] in ['subroutine','function']: - name = block['name'] - if name in vars and 'intent' in vars[name]: - block['intent'] = vars[name]['intent'] - if block['block'] == 'type': - neededvars.extend(vars.keys()) - for n in vars.keys(): - if n not in neededvars: - del vars[n] - return vars - -analyzeargs_re_1 = re.compile(r'\A[a-z]+[\w$]*\Z',re.I) -def analyzeargs(block): - setmesstext(block) - 
implicitrules,attrrules=buildimplicitrules(block) - if 'args' not in block: - block['args']=[] - args=[] - re_1 = analyzeargs_re_1 - for a in block['args']: - if not re_1.match(a): # `a` is an expression - at=determineexprtype(a,block['vars'],implicitrules) - na='e_' - for c in a: - if c not in string.lowercase+string.digits: c='_' - na=na+c - if na[-1]=='_': na=na+'e' - else: na=na+'_e' - a=na - while a in block['vars'] or a in block['args']: - a=a+'r' - block['vars'][a]=at - args.append(a) - if a not in block['vars']: - block['vars'][a]={} - if 'externals' in block and a in block['externals']+block['interfaced']: - block['vars'][a]=setattrspec(block['vars'][a],'external') - block['args']=args - - if 'entry' in block: - for k,args1 in block['entry'].items(): - for a in args1: - if a not in block['vars']: - block['vars'][a]={} - - for b in block['body']: - if b['name'] in args: - if 'externals' not in block: - block['externals']=[] - if b['name'] not in block['externals']: - block['externals'].append(b['name']) - if 'result' in block and block['result'] not in block['vars']: - block['vars'][block['result']]={} - return block - -determineexprtype_re_1 = re.compile(r'\A\(.+?[,].+?\)\Z',re.I) -determineexprtype_re_2 = re.compile(r'\A[+-]?\d+(_(P[\w]+)|)\Z',re.I) -determineexprtype_re_3 = re.compile(r'\A[+-]?[\d.]+[\d+-de.]*(_(P[\w]+)|)\Z',re.I) -determineexprtype_re_4 = re.compile(r'\A\(.*\)\Z',re.I) -determineexprtype_re_5 = re.compile(r'\A(?P\w+)\s*\(.*?\)\s*\Z',re.I) -def _ensure_exprdict(r): - if type(r) is type(0): - return {'typespec':'integer'} - if type(r) is type(0.0): - return {'typespec':'real'} - if type(r) is type(0j): - return {'typespec':'complex'} - assert type(r) is type({}),`r` - return r - -def determineexprtype(expr,vars,rules={}): - if expr in vars: - return _ensure_exprdict(vars[expr]) - expr=expr.strip() - if determineexprtype_re_1.match(expr): - return {'typespec':'complex'} - m=determineexprtype_re_2.match(expr) - if m: - if 'name' in 
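When `analyzeargs` meets an argument that is an expression rather than a plain name, it manufactures a synthetic identifier: `e_` plus the expression with non-alphanumerics mapped to `_`, closed with `_e`, then suffixed with `r` until it is unique. A sketch of that naming scheme (`exprarg_name` is an illustrative name; the Python 2 original uses `string.lowercase`, spelled `string.ascii_lowercase` here):

```python
import string

def exprarg_name(a, taken=()):
    """Build the synthetic name analyzeargs gives an expression used as
    an argument: 'e_' + sanitized text + '_e', appending 'r' until the
    name collides with nothing in `taken`."""
    na = 'e_'
    for c in a:
        if c not in string.ascii_lowercase + string.digits:
            c = '_'                        # punctuation becomes '_'
        na += c
    na = na + 'e' if na.endswith('_') else na + '_e'
    while na in taken:
        na += 'r'
    return na
```

For example the argument expression `n+1` becomes the variable `e_n_1_e`.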
m.groupdict() and m.group('name'): - outmess('determineexprtype: selected kind types not supported (%s)\n'%`expr`) - return {'typespec':'integer'} - m = determineexprtype_re_3.match(expr) - if m: - if 'name' in m.groupdict() and m.group('name'): - outmess('determineexprtype: selected kind types not supported (%s)\n'%`expr`) - return {'typespec':'real'} - for op in ['+','-','*','/']: - for e in [x.strip() for x in markoutercomma(expr,comma=op).split('@'+op+'@')]: - if e in vars: - return _ensure_exprdict(vars[e]) - t={} - if determineexprtype_re_4.match(expr): # in parenthesis - t=determineexprtype(expr[1:-1],vars,rules) - else: - m = determineexprtype_re_5.match(expr) - if m: - rn=m.group('name') - t=determineexprtype(m.group('name'),vars,rules) - if t and 'attrspec' in t: - del t['attrspec'] - if not t: - if rn[0] in rules: - return _ensure_exprdict(rules[rn[0]]) - if expr[0] in '\'"': - return {'typespec':'character','charselector':{'*':'*'}} - if not t: - outmess('determineexprtype: could not determine expressions (%s) type.\n'%(`expr`)) - return t - -###### -def crack2fortrangen(block,tab='\n'): - global skipfuncs, onlyfuncs - setmesstext(block) - ret='' - if isinstance(block, list): - for g in block: - if g and g['block'] in ['function','subroutine']: - if g['name'] in skipfuncs: - continue - if onlyfuncs and g['name'] not in onlyfuncs: - continue - ret=ret+crack2fortrangen(g,tab) - return ret - prefix='' - name='' - args='' - blocktype=block['block'] - if blocktype=='program': return '' - al=[] - if 'name' in block: - name=block['name'] - if 'args' in block: - vars = block['vars'] - al = [a for a in block['args'] if not isintent_callback(vars[a])] - if block['block']=='function' or al: - args='(%s)'%','.join(al) - f2pyenhancements = '' - if 'f2pyenhancements' in block: - for k in block['f2pyenhancements'].keys(): - f2pyenhancements = '%s%s%s %s'%(f2pyenhancements,tab+tabchar,k,block['f2pyenhancements'][k]) - intent_lst = block.get('intent',[])[:] - if 
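`determineexprtype` classifies a constant by trying a cascade of regexes: a parenthesized pair is complex, then integer, then real, then a quoted string is character. A simplified sketch of that cascade (`exprtype` and the `*_RE` patterns are illustrative; kind-suffix handling such as `123_8` is dropped for brevity):

```python
import re

INT_RE = re.compile(r'\A[+-]?\d+\Z')
REAL_RE = re.compile(r'\A[+-]?(\d+\.?\d*|\.\d+)([de][+-]?\d+)?\Z', re.I)
COMPLEX_RE = re.compile(r'\A\(.+?,.+?\)\Z')

def exprtype(expr):
    """Classify a Fortran constant the way determineexprtype does,
    returning the typespec string or None when nothing matches."""
    expr = expr.strip()
    if COMPLEX_RE.match(expr):       # (re, im) pair
        return 'complex'
    if INT_RE.match(expr):
        return 'integer'
    if REAL_RE.match(expr):          # decimal point and/or d/e exponent
        return 'real'
    if expr[:1] in '\'"':
        return 'character'
    return None
```

Order matters: the integer pattern must run before the real one, since every integer literal also matches the real pattern.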
blocktype=='function' and 'callback' in intent_lst: - intent_lst.remove('callback') - if intent_lst: - f2pyenhancements = '%s%sintent(%s) %s'%\ - (f2pyenhancements,tab+tabchar, - ','.join(intent_lst),name) - use='' - if 'use' in block: - use=use2fortran(block['use'],tab+tabchar) - common='' - if 'common' in block: - common=common2fortran(block['common'],tab+tabchar) - if name=='unknown_interface': name='' - result='' - if 'result' in block: - result=' result (%s)'%block['result'] - if block['result'] not in al: - al.append(block['result']) - #if 'prefix' in block: - # prefix=block['prefix']+' ' - body=crack2fortrangen(block['body'],tab+tabchar) - vars=vars2fortran(block,block['vars'],al,tab+tabchar) - mess='' - if 'from' in block: - mess='! in %s'%block['from'] - if 'entry' in block: - entry_stmts = '' - for k,i in block['entry'].items(): - entry_stmts = '%s%sentry %s(%s)' \ - % (entry_stmts,tab+tabchar,k,','.join(i)) - body = body + entry_stmts - if blocktype=='block data' and name=='_BLOCK_DATA_': - name = '' - ret='%s%s%s %s%s%s %s%s%s%s%s%s%send %s %s'%(tab,prefix,blocktype,name,args,result,mess,f2pyenhancements,use,vars,common,body,tab,blocktype,name) - return ret - -def common2fortran(common,tab=''): - ret='' - for k in common.keys(): - if k=='_BLNK_': - ret='%s%scommon %s'%(ret,tab,','.join(common[k])) - else: - ret='%s%scommon /%s/ %s'%(ret,tab,k,','.join(common[k])) - return ret - -def use2fortran(use,tab=''): - ret='' - for m in use.keys(): - ret='%s%suse %s,'%(ret,tab,m) - if use[m]=={}: - if ret and ret[-1]==',': ret=ret[:-1] - continue - if 'only' in use[m] and use[m]['only']: - ret='%s,only:'%(ret) - if 'map' in use[m] and use[m]['map']: - c=' ' - for k in use[m]['map'].keys(): - if k==use[m]['map'][k]: - ret='%s%s%s'%(ret,c,k); c=',' - else: - ret='%s%s%s=>%s'%(ret,c,k,use[m]['map'][k]); c=',' - if ret and ret[-1]==',': ret=ret[:-1] - return ret - -def true_intent_list(var): - lst = var['intent'] - ret = [] - for intent in lst: - try: - exec('c = 
isintent_%s(var)' % intent) - except NameError: - c = 0 - if c: - ret.append(intent) - return ret - -def vars2fortran(block,vars,args,tab=''): - """ - TODO: - public sub - ... - """ - setmesstext(block) - ret='' - nout=[] - for a in args: - if a in block['vars']: - nout.append(a) - if 'commonvars' in block: - for a in block['commonvars']: - if a in vars: - if a not in nout: - nout.append(a) - else: - errmess('vars2fortran: Confused?!: "%s" is not defined in vars.\n'%a) - if 'varnames' in block: - nout.extend(block['varnames']) - for a in vars.keys(): - if a not in nout: - nout.append(a) - for a in nout: - if 'depend' in vars[a]: - for d in vars[a]['depend']: - if d in vars and 'depend' in vars[d] and a in vars[d]['depend']: - errmess('vars2fortran: Warning: cross-dependence between variables "%s" and "%s"\n'%(a,d)) - if 'externals' in block and a in block['externals']: - if isintent_callback(vars[a]): - ret='%s%sintent(callback) %s'%(ret,tab,a) - ret='%s%sexternal %s'%(ret,tab,a) - if isoptional(vars[a]): - ret='%s%soptional %s'%(ret,tab,a) - if a in vars and 'typespec' not in vars[a]: - continue - cont=1 - for b in block['body']: - if a==b['name'] and b['block']=='function': - cont=0;break - if cont: - continue - if a not in vars: - show(vars) - outmess('vars2fortran: No definition for argument "%s".\n'%a) - continue - if a==block['name'] and not block['block']=='function': - continue - if 'typespec' not in vars[a]: - if 'attrspec' in vars[a] and 'external' in vars[a]['attrspec']: - if a in args: - ret='%s%sexternal %s'%(ret,tab,a) - continue - show(vars[a]) - outmess('vars2fortran: No typespec for argument "%s".\n'%a) - continue - vardef=vars[a]['typespec'] - if vardef=='type' and 'typename' in vars[a]: - vardef='%s(%s)'%(vardef,vars[a]['typename']) - selector={} - if 'kindselector' in vars[a]: - selector=vars[a]['kindselector'] - elif 'charselector' in vars[a]: - selector=vars[a]['charselector'] - if '*' in selector: - if selector['*'] in ['*',':']: - 
vardef='%s*(%s)'%(vardef,selector['*']) - else: - vardef='%s*%s'%(vardef,selector['*']) - else: - if 'len' in selector: - vardef='%s(len=%s'%(vardef,selector['len']) - if 'kind' in selector: - vardef='%s,kind=%s)'%(vardef,selector['kind']) - else: - vardef='%s)'%(vardef) - elif 'kind' in selector: - vardef='%s(kind=%s)'%(vardef,selector['kind']) - c=' ' - if 'attrspec' in vars[a]: - attr=[] - for l in vars[a]['attrspec']: - if l not in ['external']: - attr.append(l) - if attr: - vardef='%s %s'%(vardef,','.join(attr)) - c=',' - if 'dimension' in vars[a]: -# if not isintent_c(vars[a]): -# vars[a]['dimension'].reverse() - vardef='%s%sdimension(%s)'%(vardef,c,','.join(vars[a]['dimension'])) - c=',' - if 'intent' in vars[a]: - lst = true_intent_list(vars[a]) - if lst: - vardef='%s%sintent(%s)'%(vardef,c,','.join(lst)) - c=',' - if 'check' in vars[a]: - vardef='%s%scheck(%s)'%(vardef,c,','.join(vars[a]['check'])) - c=',' - if 'depend' in vars[a]: - vardef='%s%sdepend(%s)'%(vardef,c,','.join(vars[a]['depend'])) - c=',' - if '=' in vars[a]: - v = vars[a]['='] - if vars[a]['typespec'] in ['complex','double complex']: - try: - v = eval(v) - v = '(%s,%s)' % (v.real,v.imag) - except: - pass - vardef='%s :: %s=%s'%(vardef,a,v) - else: - vardef='%s :: %s'%(vardef,a) - ret='%s%s%s'%(ret,tab,vardef) - return ret -###### - -def crackfortran(files): - global usermodules - outmess('Reading fortran codes...\n',0) - readfortrancode(files,crackline) - outmess('Post-processing...\n',0) - usermodules=[] - postlist=postcrack(grouplist[0]) - outmess('Post-processing (stage 2)...\n',0) - postlist=postcrack2(postlist) - return usermodules+postlist - -def crack2fortran(block): - global f2py_version - pyf=crack2fortrangen(block)+'\n' - header="""! -*- f90 -*- -! Note: the context of this file is case sensitive. -""" - footer=""" -! This file was auto-generated with f2py (version:%s). -! 
See http://cens.ioc.ee/projects/f2py2e/ -"""%(f2py_version) - return header+pyf+footer - -if __name__ == "__main__": - files=[] - funcs=[] - f=1;f2=0;f3=0 - showblocklist=0 - for l in sys.argv[1:]: - if l=='': pass - elif l[0]==':': - f=0 - elif l=='-quiet': - quiet=1 - verbose=0 - elif l=='-verbose': - verbose=2 - quiet=0 - elif l=='-fix': - if strictf77: - outmess('Use option -f90 before -fix if Fortran 90 code is in fix form.\n',0) - skipemptyends=1 - sourcecodeform='fix' - elif l=='-skipemptyends': - skipemptyends=1 - elif l=='--ignore-contains': - ignorecontains=1 - elif l=='-f77': - strictf77=1 - sourcecodeform='fix' - elif l=='-f90': - strictf77=0 - sourcecodeform='free' - skipemptyends=1 - elif l=='-h': - f2=1 - elif l=='-show': - showblocklist=1 - elif l=='-m': - f3=1 - elif l[0]=='-': - errmess('Unknown option %s\n'%`l`) - elif f2: - f2=0 - pyffilename=l - elif f3: - f3=0 - f77modulename=l - elif f: - try: - open(l).close() - files.append(l) - except IOError,detail: - errmess('IOError: %s\n'%str(detail)) - else: - funcs.append(l) - if not strictf77 and f77modulename and not skipemptyends: - outmess("""\ - Warning: You have specifyied module name for non Fortran 77 code - that should not need one (expect if you are scanning F90 code - for non module blocks but then you should use flag -skipemptyends - and also be sure that the files do not contain programs without program statement). 
-""",0) - - postlist=crackfortran(files,funcs) - if pyffilename: - outmess('Writing fortran code to file %s\n'%`pyffilename`,0) - pyf=crack2fortran(postlist) - f=open(pyffilename,'w') - f.write(pyf) - f.close() - if showblocklist: - show(postlist) diff --git a/pythonPackages/numpy/numpy/f2py/diagnose.py b/pythonPackages/numpy/numpy/f2py/diagnose.py deleted file mode 100755 index 3b517a5c9b..0000000000 --- a/pythonPackages/numpy/numpy/f2py/diagnose.py +++ /dev/null @@ -1,148 +0,0 @@ -#!/usr/bin/env python - -import os -import sys -import tempfile - -def run_command(cmd): - print 'Running %r:' % (cmd) - s = os.system(cmd) - print '------' -def run(): - _path = os.getcwd() - os.chdir(tempfile.gettempdir()) - print '------' - print 'os.name=%r' % (os.name) - print '------' - print 'sys.platform=%r' % (sys.platform) - print '------' - print 'sys.version:' - print sys.version - print '------' - print 'sys.prefix:' - print sys.prefix - print '------' - print 'sys.path=%r' % (':'.join(sys.path)) - print '------' - - try: - import numpy - has_newnumpy = 1 - except ImportError: - print 'Failed to import new numpy:', sys.exc_value - has_newnumpy = 0 - - try: - from numpy.f2py import f2py2e - has_f2py2e = 1 - except ImportError: - print 'Failed to import f2py2e:',sys.exc_value - has_f2py2e = 0 - - try: - import numpy.distutils - has_numpy_distutils = 2 - except ImportError: - try: - import numpy_distutils - has_numpy_distutils = 1 - except ImportError: - print 'Failed to import numpy_distutils:',sys.exc_value - has_numpy_distutils = 0 - - if has_newnumpy: - try: - print 'Found new numpy version %r in %s' % \ - (numpy.__version__, numpy.__file__) - except Exception,msg: - print 'error:', msg - print '------' - - if has_f2py2e: - try: - print 'Found f2py2e version %r in %s' % \ - (f2py2e.__version__.version,f2py2e.__file__) - except Exception,msg: - print 'error:',msg - print '------' - - if has_numpy_distutils: - try: - if has_numpy_distutils == 2: - print 'Found 
numpy.distutils version %r in %r' % (\ - numpy.distutils.__version__, - numpy.distutils.__file__) - else: - print 'Found numpy_distutils version %r in %r' % (\ - numpy_distutils.numpy_distutils_version.numpy_distutils_version, - numpy_distutils.__file__) - print '------' - except Exception,msg: - print 'error:',msg - print '------' - try: - if has_numpy_distutils == 1: - print 'Importing numpy_distutils.command.build_flib ...', - import numpy_distutils.command.build_flib as build_flib - print 'ok' - print '------' - try: - print 'Checking availability of supported Fortran compilers:' - for compiler_class in build_flib.all_compilers: - compiler_class(verbose=1).is_available() - print '------' - except Exception,msg: - print 'error:',msg - print '------' - except Exception,msg: - print 'error:',msg,'(ignore it, build_flib is obsolute for numpy.distutils 0.2.2 and up)' - print '------' - try: - if has_numpy_distutils == 2: - print 'Importing numpy.distutils.fcompiler ...', - import numpy.distutils.fcompiler as fcompiler - else: - print 'Importing numpy_distutils.fcompiler ...', - import numpy_distutils.fcompiler as fcompiler - print 'ok' - print '------' - try: - print 'Checking availability of supported Fortran compilers:' - fcompiler.show_fcompilers() - print '------' - except Exception,msg: - print 'error:',msg - print '------' - except Exception,msg: - print 'error:',msg - print '------' - try: - if has_numpy_distutils == 2: - print 'Importing numpy.distutils.cpuinfo ...', - from numpy.distutils.cpuinfo import cpuinfo - print 'ok' - print '------' - else: - try: - print 'Importing numpy_distutils.command.cpuinfo ...', - from numpy_distutils.command.cpuinfo import cpuinfo - print 'ok' - print '------' - except Exception,msg: - print 'error:',msg,'(ignore it)' - print 'Importing numpy_distutils.cpuinfo ...', - from numpy_distutils.cpuinfo import cpuinfo - print 'ok' - print '------' - cpu = cpuinfo() - print 'CPU information:', - for name in dir(cpuinfo): - if 
name[0]=='_' and name[1]!='_' and getattr(cpu,name[1:])(): - print name[1:], - print '------' - except Exception,msg: - print 'error:',msg - print '------' - os.chdir(_path) -if __name__ == "__main__": - run() diff --git a/pythonPackages/numpy/numpy/f2py/docs/FAQ.txt b/pythonPackages/numpy/numpy/f2py/docs/FAQ.txt deleted file mode 100755 index 416560e920..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/FAQ.txt +++ /dev/null @@ -1,615 +0,0 @@ - -====================================================================== - F2PY Frequently Asked Questions -====================================================================== - -.. contents:: - -General information -=================== - -Q: How to get started? ----------------------- - -First, install__ F2PY. Then check that F2PY installation works -properly (see below__). Try out a `simple example`__. - -Read `F2PY Users Guide and Reference Manual`__. It contains lots -of complete examples. - -If you have any questions/problems when using F2PY, don't hesitate to -turn to `F2PY users mailing list`__ or directly to me. - -__ index.html#installation -__ #testing -__ index.html#usage -__ usersguide/index.html -__ index.html#mailing-list - -Q: When to report bugs? ------------------------ - -* If F2PY scanning fails on Fortran sources that otherwise compile - fine. - -* After checking that you have the latest version of F2PY from its - CVS. It is possible that a bug has been fixed already. See also the - log entries in the file `HISTORY.txt`_ (`HISTORY.txt in CVS`_). - -* After checking that your Python and Numerical Python installations - work correctly. - -* After checking that your C and Fortran compilers work correctly. - - -Q: How to report bugs? ----------------------- - -You can send bug reports directly to me. Please, include information -about your platform (operating system, version) and -compilers/linkers, e.g. 
the output (both stdout/stderr) of -:: - - python -c 'import f2py2e.diagnose;f2py2e.diagnose.run()' - -Feel free to add any other relevant information. However, avoid -sending the output of F2PY generated ``.pyf`` files (unless they are -manually modified) or any binary files like shared libraries or object -codes. - -While reporting bugs, you may find the following notes useful: - -* `How To Ask Questions The Smart Way`__ by E. S. Raymond and R. Moen. - -* `How to Report Bugs Effectively`__ by S. Tatham. - -__ http://www.catb.org/~esr/faqs/smart-questions.html -__ http://www.chiark.greenend.org.uk/~sgtatham/bugs.html - -Installation -============ - -Q: How to use F2PY with different Python versions? --------------------------------------------------- - -Run the installation command using the corresponding Python -executable. For example, -:: - - python2.1 setup.py install - -installs the ``f2py`` script as ``f2py2.1``. - -See `Distutils User Documentation`__ for more information on how to -install Python modules to non-standard locations. - -__ http://www.python.org/sigs/distutils-sig/doc/inst/inst.html - - -Q: Why is F2PY not working after upgrading? -------------------------------------------- - -If upgrading from F2PY version 2.3.321 or earlier, remove all f2py-specific files from the ``/path/to/python/bin`` directory before -running the installation command. - -Q: How to get/upgrade numpy_distutils when using F2PY from CVS? ---------------------------------------------------------------- - -To get numpy_distutils from the SciPy CVS repository, run -:: - - cd cvs/f2py2e/ - make numpy_distutils - -This will check out numpy_distutils into the current directory. - -You can upgrade numpy_distutils by executing -:: - - cd cvs/f2py2e/numpy_distutils - cvs update -Pd - -and install it by executing -:: - - cd cvs/f2py2e/numpy_distutils - python setup_numpy_distutils.py install - -Most of the time, f2py2e and numpy_distutils can be upgraded -independently.
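As an aside on the bug-report checklist above: the environment details that ``f2py2e.diagnose`` prints (the module appears earlier in this diff) can be approximated in a few lines of portable Python. This is only a sketch, not f2py's actual code, and the ``report`` helper is a made-up name for illustration.

```python
# Minimal sketch of the environment report that f2py's diagnose module
# prints for bug reports; `report` is a hypothetical helper name.
import os
import sys

def report():
    # Mirror the first few fields printed by diagnose.run()
    lines = [
        'os.name=%r' % (os.name,),
        'sys.platform=%r' % (sys.platform,),
        'sys.prefix=%r' % (sys.prefix,),
    ]
    return '\n'.join(lines)

print(report())
```

Attaching output like this to a bug report gives the maintainer the platform context the FAQ asks for.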
- -Testing -======= - -Q: How to test if F2PY is installed correctly? ----------------------------------------------- - -Run -:: - - f2py - -without arguments. If F2PY is installed correctly, it should print -the usage information for f2py. - -Q: How to test if F2PY is working correctly? --------------------------------------------- - -For a quick test, try out an example problem from the Usage__ -section in `README.txt`_. - -__ index.html#usage - -For running F2PY unit tests, see `TESTING.txt`_. - - -Q: How to run tests and examples in f2py2e/test-suite/ directory? ---------------------------------------------------------------------- - -You shouldn't. These tests are obsolete and I have no intention to -make them work. They will be removed in the future. - - -Compiler/Platform-specific issues -================================= - -Q: What are supported platforms and compilers? ----------------------------------------------- - -F2PY is developed on a Linux system with the GCC compiler (versions -2.95.x, 3.x). Fortran 90 related hooks are tested against the Intel -Fortran Compiler. F2PY should work on any platform where Python and -Numeric are installed and a supported Fortran compiler is available.
- -To see a list of supported compilers, execute:: - - f2py -c --help-fcompiler - -Example output:: - - List of available Fortran compilers: - --fcompiler=gnu GNU Fortran Compiler (3.3.4) - --fcompiler=intel Intel Fortran Compiler for 32-bit apps (8.0) - List of unavailable Fortran compilers: - --fcompiler=absoft Absoft Corp Fortran Compiler - --fcompiler=compaq Compaq Fortran Compiler - --fcompiler=compaqv DIGITAL|Compaq Visual Fortran Compiler - --fcompiler=hpux HP Fortran 90 Compiler - --fcompiler=ibm IBM XL Fortran Compiler - --fcompiler=intele Intel Fortran Compiler for Itanium apps - --fcompiler=intelev Intel Visual Fortran Compiler for Itanium apps - --fcompiler=intelv Intel Visual Fortran Compiler for 32-bit apps - --fcompiler=lahey Lahey/Fujitsu Fortran 95 Compiler - --fcompiler=mips MIPSpro Fortran Compiler - --fcompiler=nag NAGWare Fortran 95 Compiler - --fcompiler=pg Portland Group Fortran Compiler - --fcompiler=sun Sun|Forte Fortran 95 Compiler - --fcompiler=vast Pacific-Sierra Research Fortran 90 Compiler - List of unimplemented Fortran compilers: - --fcompiler=f Fortran Company/NAG F Compiler - For compiler details, run 'config_fc --verbose' setup command. - - -Q: How to use the F compiler in F2PY? -------------------------------------- - -Read `f2py2e/doc/using_F_compiler.txt`__. It describes why the F -compiler cannot be used in a normal way (i.e. using ``-c`` switch) to -build F2PY generated modules. It also gives a workaround to this -problem. - -__ http://cens.ioc.ee/cgi-bin/viewcvs.cgi/python/f2py2e/doc/using_F_compiler.txt?rev=HEAD&content-type=text/vnd.viewcvs-markup - -Q: How to use F2PY under Windows? ---------------------------------- - -F2PY can be used both within Cygwin__ and MinGW__ environments under -Windows, F2PY can be used also in Windows native terminal. -See the section `Setting up environment`__ for Cygwin and MinGW. 
- -__ http://cygwin.com/ -__ http://www.mingw.org/ -__ http://cens.ioc.ee/~pearu/numpy/BUILD_WIN32.html#setting-up-environment - -Install numpy_distutils and F2PY. Win32 installers of these packages -are provided in the `F2PY Download`__ section. - -__ http://cens.ioc.ee/projects/f2py2e/#download - -Use the ``--compiler=`` and ``--fcompiler`` F2PY command line switches -to specify which C and Fortran compilers F2PY should use, respectively. - -Under the MinGW environment, ``mingw32`` is the default C compiler. - -Supported and Unsupported Features -================================== - -Q: Does F2PY support ``ENTRY`` statements? ------------------------------------------- - -Yes, starting with F2PY versions higher than 2.39.235_1706. - -Q: Does F2PY support derived types in F90 code? ----------------------------------------------- - -Not yet. However, I do have plans to implement support for F90 TYPE -constructs in the future. But note that the task is non-trivial and may -require the next edition of F2PY, for which I don't have the resources -to work on at the moment. - -Jeffrey Hagelberg from LLNL has made progress on adding -support for derived types to f2py. He writes: - - At this point, I have a version of f2py that supports derived types - for most simple cases. I have multidimensional arrays of derived - types and allocatable arrays of derived types working. I'm just now - starting to work on getting nested derived types to work. I also - haven't tried putting complex number in derived types yet. - -Hopefully he can contribute his changes to f2py soon. - -Q: Does F2PY support pointer data in F90 code? ------------------------------------------------ - -No. I have never needed it and I haven't studied whether there are any -obstacles to adding pointer data support to F2PY. - -Q: What if Fortran 90 code uses ``<type spec>(kind=KIND(..))``?
------------------------------------------------------------------- - -Currently, F2PY can handle only ``<type spec>(kind=<kindselector>)`` -declarations where ``<kindselector>`` is a numeric integer (e.g. 1, 2, -4,...) but not a function call ``KIND(..)`` or any other -expression. F2PY needs to know what the corresponding C type would be, -and a general solution for that would be too complicated to implement. - -However, F2PY provides a hook to overcome this difficulty: namely, -users can define their own <Fortran type> to <C type> maps. For -example, if Fortran 90 code contains:: - - REAL(kind=KIND(0.0D0)) ... - -then create a file ``.f2py_f2cmap`` (in the working directory) -containing a Python dictionary:: - - {'real':{'KIND(0.0D0)':'double'}} - -for instance. - -More generally, the file ``.f2py_f2cmap`` must contain a dictionary -with items:: - - <Fortran typespec> : {<selector_expr>:<C type>} - -that defines a mapping between the Fortran type:: - - <Fortran typespec>([kind=]<selector_expr>) - -and the corresponding ``<C type>``. ``<C type>`` can be one of the -following:: - - char - signed_char - short - int - long_long - float - double - long_double - complex_float - complex_double - complex_long_double - string - -For more information, see ``f2py2e/capi_maps.py``. - -Related software -================ - -Q: How does F2PY differ from Pyfort? --------------------------------------- - -F2PY and Pyfort have very similar aims and ideologies about how they are -targeted. Both projects started to evolve independently in the same year, -1999. When we discovered each other's projects, a discussion about -joining them started, but it unfortunately failed for various reasons: -both projects had evolved so far that merging the tools would have been -impractical, and giving up the efforts that the developers of both -projects had made was unacceptable to both parties. And so, nowadays we -have two tools for connecting Fortran with Python, and this fact will -hardly change in the near future. Which one to choose is a matter of -taste; I can only recommend trying out both before making your choice.
- -At the moment F2PY can handle more wrapping tasks than Pyfort; -e.g. with F2PY one can wrap Fortran 77 common blocks, Fortran 90 -module routines, and Fortran 90 module data (including allocatable -arrays), and one can call Python from Fortran, etc. F2PY scans Fortran -codes to create signature (.pyf) files. F2PY is free from most of the -limitations listed in `the corresponding section of the Pyfort -Reference Manual`__. - -__ http://pyfortran.sourceforge.net/pyfort/pyfort_reference.htm#pgfId-296925 - -There is a conceptual difference in how F2PY and Pyfort handle the -issue of different data ordering in Fortran and C multi-dimensional -arrays. Pyfort-generated wrapper functions have optional arguments -TRANSPOSE and MIRROR that can be used to control explicitly how the array -arguments and their dimensions are passed to the Fortran routine in order -to deal with the C/Fortran data ordering issue. F2PY-generated wrapper -functions hide the whole issue from the end-user, so that the translation -between Fortran and C/Python loops and array element access code is -one-to-one. How the F2PY-generated wrappers deal with the issue is -determined by the person who creates the signature file, using -attributes like ``intent(c)``, ``intent(copy|overwrite)``, -``intent(inout|in,out|inplace)``, etc. - -For example, let's consider a typical usage of both F2PY and Pyfort -when wrapping the following simple Fortran code: - -.. include:: simple.f - :literal: - -The comment lines starting with ``cf2py`` are read by F2PY (so that we -don't need to generate/handwrite an intermediate signature file in -this simple case), while for a Fortran compiler they are just comment -lines. - -And here is a Python version of the Fortran code: - -.. include:: pytest.py - :literal: - -To generate a wrapper for subroutine ``foo`` using F2PY, execute:: - - $ f2py -m f2pytest simple.f -c - -which will generate an extension module ``f2pytest`` in the current -directory.
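As a side note, when driving f2py from a Python build script rather than a shell, the command line shown above is easy to assemble programmatically. This is a minimal sketch; ``build_wrapper`` is a hypothetical helper, and actually running the command requires f2py on the PATH.

```python
# Sketch: composing the `f2py -m f2pytest simple.f -c` command shown
# above from a Python build script; `build_wrapper` is hypothetical.
def build_wrapper(module_name, sources):
    # -m names the generated extension module; -c compiles it.
    return ['f2py', '-m', module_name] + list(sources) + ['-c']

cmd = build_wrapper('f2pytest', ['simple.f'])
print(' '.join(cmd))  # f2py -m f2pytest simple.f -c
```

In a real script one would hand ``cmd`` to ``subprocess.run(cmd, check=True)`` instead of printing it.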
- -To generate a wrapper using Pyfort, create the following file - -.. include:: pyforttest.pyf - :literal: - -and execute:: - - $ pyfort pyforttest - -In the Pyfort GUI, add ``simple.f`` to the list of Fortran sources and -check that the signature file is in free format. Then copy -``pyforttest.so`` from the build directory to the current directory. - -Now, in Python - -.. include:: simple_session.dat - :literal: - -Q: Can Pyfort .pyf files be used with F2PY and vice versa? ------------------------------------------------------------ - -After some simple modifications, yes. You should take into account the -following differences between Pyfort and F2PY .pyf files. - -+ An F2PY signature file contains ``python module`` and ``interface`` - blocks that are equivalent to Pyfort's ``module`` block usage. - -+ The F2PY attribute ``intent(inplace)`` is equivalent to Pyfort's - ``intent(inout)``. F2PY's ``intent(inout)`` is a strict (but safe) - version of ``intent(inplace)``: any mismatch in arguments with respect - to the expected type, size, or contiguity will trigger an exception, - while ``intent(inplace)`` (dangerously) modifies argument - attributes in-place. - -Misc -==== - -Q: How to establish which Fortran compiler F2PY will use? ---------------------------------------------------------- - -This question may be relevant when using F2PY in Makefiles.
Here -follows a script demonstrating how to determine which Fortran compiler -and flags F2PY will use:: - - # Using post-0.2.2 numpy_distutils - from numpy_distutils.fcompiler import new_fcompiler - compiler = new_fcompiler() # or new_fcompiler(compiler='intel') - compiler.dump_properties() - - # Using pre-0.2.2 numpy_distutils - import os - from numpy_distutils.command.build_flib import find_fortran_compiler - def main(): - fcompiler = os.environ.get('FC_VENDOR') - fcompiler_exec = os.environ.get('F77') - f90compiler_exec = os.environ.get('F90') - fc = find_fortran_compiler(fcompiler, - fcompiler_exec, - f90compiler_exec, - verbose = 0) - print 'FC=',fc.f77_compiler - print 'FFLAGS=',fc.f77_switches - print 'FOPT=',fc.f77_opt - if __name__ == "__main__": - main() - -Users feedback -============== - -Q: Where to find additional information on using F2PY? ------------------------------------------------------- - -There are several F2PY related tutorials, slides, papers, etc -available: - -+ `Fortran to Python Interface Generator with an Application to - Aerospace Engineering`__ by P. Peterson, J. R. R. A. Martins, and - J. J. Alonso in `In Proceedings of the 9th International Python - Conference`__, Long Beach, California, 2001. - -__ http://www.python9.org/p9-cdrom/07/index.htm -__ http://www.python9.org/ - -+ Section `Adding Fortran90 code`__ in the UG of `The Bolometer Data - Analysis Project`__. - -__ http://www.astro.rub.de/laboca/download/boa_master_doc/7_4Adding_Fortran90_code.html -__ http://www.openboa.de/ - -+ Powerpoint presentation `Python for Scientific Computing`__ by Eric - Jones in `The Ninth International Python Conference`__. - -__ http://www.python9.org/p9-jones.ppt -__ http://www.python9.org/ - -+ Paper `Scripting a Large Fortran Code with Python`__ by Alvaro Caceres - Calleja in `International Workshop on Software Engineering for High - Performance Computing System Applications`__. 
- -__ http://csdl.ics.hawaii.edu/se-hpcs/pdf/calleja.pdf -__ http://csdl.ics.hawaii.edu/se-hpcs/ - -+ Section `Automatic building of C/Fortran extension for Python`__ by - Simon Lacoste-Julien in `Summer 2002 Report about Hybrid Systems - Modelling`__. - -__ http://moncs.cs.mcgill.ca/people/slacoste/research/report/SummerReport.html#tth_sEc3.4 -__ http://moncs.cs.mcgill.ca/people/slacoste/research/report/SummerReport.html - -+ `Scripting for Computational Science`__ by Hans Petter Langtangen - (see the `Mixed language programming`__ and `NumPy array programming`__ - sections for examples on using F2PY). - -__ http://www.ifi.uio.no/~inf3330/lecsplit/ -__ http://www.ifi.uio.no/~inf3330/lecsplit/slide662.html -__ http://www.ifi.uio.no/~inf3330/lecsplit/slide718.html - -+ Chapters 5 and 9 of `Python Scripting for Computational Science`__ - by H. P. Langtangen for case studies on using F2PY. - -__ http://www.springeronline.com/3-540-43508-5 - -+ Section `Fortran Wrapping`__ in `Continuity`__, a computational tool - for continuum problems in bioengineering and physiology. - -__ http://www.continuity.ucsd.edu/cont6_html/docs_fram.html -__ http://www.continuity.ucsd.edu/ - -+ Presentation `PYFORT and F2PY: 2 ways to bind C and Fortran with Python`__ - by Reiner Vogelsang. - -__ http://www.prism.enes.org/WPs/WP4a/Slides/pyfort/pyfort.html - -+ Lecture slides of `Extending Python: speed it up`__. - -__ http://www.astro.uni-bonn.de/~heith/lecture_pdf/friedrich5.pdf - -+ Wiki topics on `Wrapping Tools`__ and `Wrapping Bemchmarks`__ for Climate - System Center at the University of Chicago. - -__ https://geodoc.uchicago.edu/climatewiki/DiscussWrappingTools -__ https://geodoc.uchicago.edu/climatewiki/WrappingBenchmarks - -+ `Performance Python with Weave`__ by Prabhu Ramachandran. 
- -__ http://www.numpy.org/documentation/weave/weaveperformance.html - -+ `How To Install py-f2py on Mac OSX`__ - -__ http://py-f2py.darwinports.com/ - -Please, let me know if there are any other sites that document F2PY -usage in one or another way. - -Q: What projects use F2PY? --------------------------- - -+ `SciPy: Scientific tools for Python`__ - -__ http://www.numpy.org/ - -+ `The Bolometer Data Analysis Project`__ - -__ http://www.openboa.de/ - -+ `pywavelet`__ - -__ http://www.met.wau.nl/index.html?http://www.met.wau.nl/medewerkers/moenea/python/pywavelet.html - -+ `PyARTS: an ARTS related Python package`__. - -__ http://www.met.ed.ac.uk/~cory/PyARTS/ - -+ `Python interface to PSPLINE`__, a collection of Spline and - Hermite interpolation tools for 1D, 2D, and 3D datasets on - rectilinear grids. - -__ http://pypspline.sourceforge.net - -+ `Markovian Analysis Package for Python`__. - -__ http://pymc.sourceforge.net - -+ `Modular toolkit for Data Processing (MDP)`__ - -__ http://mdp-toolkit.sourceforge.net/ - - -Please, send me a note if you are using F2PY in your project. - -Q: What people think about F2PY? --------------------------------- - -*F2PY is GOOD*: - -Here are some comments people have posted to f2py mailing list and c.l.py: - -+ Ryan Krauss: I really appreciate f2py. It seems weird to say, but I - am excited about relearning FORTRAN to compliment my python stuff. - -+ Fabien Wahl: f2py is great, and is used extensively over here... - -+ Fernando Perez: Anyway, many many thanks for this amazing tool. - - I haven't used pyfort, but I can definitely vouch for the amazing quality of - f2py. And since f2py is actively used by numpy, it won't go unmaintained. - It's quite impressive, and very easy to use. - -+ Kevin Mueller: First off, thanks to those responsible for F2PY; - its been an integral tool of my research for years now. - -+ David Linke: Best regards and thanks for the great tool! - -+ Perrin Meyer: F2Py is really useful! 
- -+ Hans Petter Langtangen: First of all, thank you for developing - F2py. This is a very important contribution to the scientific - computing community. We are using F2py a lot and are very happy with - it. - -+ Berthold Höllmann: Thank's alot. It seems it is also working in my - 'real' application :-) - -+ John Hunter: At first I wrapped them with f2py (unbelievably easy!)... - -+ Cameron Laird: Among many other features, Python boasts a mature - f2py, which makes it particularly rewarding to yoke Fortran- and - Python-coded modules into finished applications. - -+ Ryan Gutenkunst: f2py is sweet magic. - -*F2PY is BAD*: - -+ `Is it worth using on a large scale python drivers for Fortran - subroutines, interfaced with f2py?`__ - -__ http://sepwww.stanford.edu/internal/computing/python.html - -Additional comments on F2PY, good or bad, are welcome! - -.. References: -.. _README.txt: index.html -.. _HISTORY.txt: HISTORY.html -.. _HISTORY.txt in CVS: http://cens.ioc.ee/cgi-bin/cvsweb/python/f2py2e/docs/HISTORY.txt?rev=HEAD&content-type=text/x-cvsweb-markup -.. _TESTING.txt: TESTING.html diff --git a/pythonPackages/numpy/numpy/f2py/docs/HISTORY.txt b/pythonPackages/numpy/numpy/f2py/docs/HISTORY.txt deleted file mode 100755 index 72b683eb01..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/HISTORY.txt +++ /dev/null @@ -1,1044 +0,0 @@ -.. -*- rest -*- - -========================= - F2PY History -========================= - -:Author: Pearu Peterson -:Web-site: http://cens.ioc.ee/projects/f2py2e/ -:Date: $Date: 2005/09/16 08:36:45 $ -:Revision: $Revision: 1.191 $ - -.. Contents:: - -Release 2.46.243 -===================== - -* common_rules.py - - - Fixed compiler warnings. - -* fortranobject.c - - - Fixed another dims calculation bug. - - Fixed dims calculation bug and added the corresponding check. - - Accept higher dimensional arrays if their effective rank matches. - Effective rank is multiplication of non-unit dimensions. 
- -* f2py2e.py - - - Added support for numpy.distutils version 0.4.0. - -* Documentation - - - Added example about ``intent(callback,hide)`` usage. Updates. - - Updated FAQ. - -* cb_rules.py - - - Fixed missing need kw error. - - Fixed getting callback non-existing extra arguments. - - External callback functions and extra_args can be set via - ext.module namespace. - - Avoid crash when external callback function is not set. - -* rules.py - - - Enabled ``intent(out)`` for ``intent(aux)`` non-complex scalars. - - Fixed splitting lines in F90 fixed form mode. - - Fixed FORTRANAME typo, relevant when wrapping scalar functions with - ``--no-wrap-functions``. - - Improved failure handling for callback functions. - - Fixed bug in writing F90 wrapper functions when a line length - is exactly 66. - -* cfuncs.py - - - Fixed dependency issue with typedefs. - - Introduced ``-DUNDERSCORE_G77`` that causes an extra underscore to be - used for external names that contain an underscore. - -* capi_maps.py - - - Fixed typos. - - Fixed using complex cb functions. - -* crackfortran.py - - - Introduced parent_block key. Get ``use`` statements recursively - from parent blocks. - - Apply parameter values to kindselectors. - - Fixed bug evaluating ``selected_int_kind`` function. - - Ignore Name and Syntax errors when evaluating scalars. - - Treat ``_intType`` as ```` in get_parameters. - - Added support for F90 line continuation in fix format mode. - - Include optional attribute of external to signature file. - - Add ``entry`` arguments to variable lists. - - Treat \xa0 character as space. - - Fixed bug where __user__ callback subroutine was added to its - argument list. - - In strict 77 mode read only the first 72 columns. - - Fixed parsing ``v(i) = func(r)``. - - Fixed parsing ``integer*4::``. - - Fixed parsing ``1.d-8`` when used as a parameter value. - -Release 2.45.241_1926 -===================== - -* diagnose.py - - - Clean up output.
- -* cb_rules.py - - - Fixed ``_cpointer`` usage for subroutines. - - Fortran function ``_cpointer`` can be used for callbacks. - -* func2subr.py - - - Use result name when wrapping functions with subroutines. - -* f2py2e.py - - - Fixed ``--help-link`` switch. - - Fixed ``--[no-]lower`` usage with ``-c`` option. - - Added support for ``.pyf.src`` template files. - -* __init__.py - - - Using ``exec_command`` in ``compile()``. - -* setup.py - - - Clean up. - - Disabled ``need_numpy_distutils`` function. From now on it is assumed - that a proper version of ``numpy_distutils`` is already installed. - -* capi_maps.py - - - Added support for wrapping unsigned integers. In a .pyf file - ``integer(-1)``, ``integer(-2)``, ``integer(-4)`` correspond to - ``unsigned char``, ``unsigned short``, ``unsigned`` C types, - respectively. - -* tests/c/return_real.py - - - Added tests to wrap C functions returning float/double. - -* fortranobject.c - - - Added ``_cpointer`` attribute to wrapped objects. - -* rules.py - - - ``_cpointer`` feature for wrapped module functions is not - functional at the moment. - - Introduced ``intent(aux)`` attribute. Useful to save a value - of a parameter to an auxiliary C variable. Note that ``intent(aux)`` - implies ``intent(c)``. - - Added ``usercode`` section. When ``usercode`` is used in a ``python - module`` block twice, then the contents of the second multi-line - block is inserted after the definition of external routines. - - Call-back function arguments can be CObjects. - -* cfuncs.py - - - Allow call-back function arguments to be fortran objects. - - Allow call-back function arguments to be built-in functions. - -* crackfortran.py - - - Fixed detection of a function signature from usage example. - - Cleaned up -h output for intent(callback) variables. - - Repair malformed argument list (missing argument name). - - Warn on the usage of multiple attributes without type specification. - - Evaluate only scalars ```` (e.g. not of strings).
- - Evaluate ```` using parameters name space. - - Fixed resolving `()[result()]` pattern. - - ``usercode`` can be used more than once in the same context. - -Release 2.43.239_1831 -===================== - -* auxfuncs.py - - - Made ``intent(in,inplace)`` mean ``intent(inplace)``. - -* f2py2e.py - - - Introduced ``--help-link`` and ``--link-`` - switches to link the generated extension module with system - ```` as defined by numpy_distutils/system_info.py. - -* fortranobject.c - - - Patch to make PyArray_CanCastSafely safe on 64-bit machines. - Fixes incorrect results when passing ``array('l')`` to - ``real*8 intent(in,out,overwrite)`` arguments. - -* rules.py - - - Avoid empty continuation lines in Fortran wrappers. - -* cfuncs.py - - - Adding ``\0`` at the end of a space-padded string, fixes tests - on 64-bit Gentoo. - -* crackfortran.py - - - Fixed splitting lines with string parameters. - -Release 2.43.239_1806 -===================== - -* Tests - - - Fixed test site that failed after padding strings with spaces - instead of zeros. - -* Documentation - - - Documented ``intent(inplace)`` attribute. - - Documented ``intent(callback)`` attribute. - - Updated FAQ, added Users Feedback section. - -* cfuncs.py - - - Padding longer (than provided from Python side) strings with spaces - (that is Fortran behavior) instead of nulls (that is C strncpy behavior). - -* f90mod_rules.py - - - Undoing rmbadnames in Python and Fortran layers. - -* common_rules.py - - - Renaming common block items that have names identical to C keywords. - - Fixed wrapping blank common blocks. - -* fortranobject.h - - - Updated numarray (0.9, 1.0, 1.1) support (patch by Todd Miller). - -* fortranobject.c - - - Introduced ``intent(inplace)`` feature. - - Fix numarray reference counts (patch by Todd). - - Updated numarray (0.9, 1.0, 1.1) support (patch by Todd Miller). - - Enabled F2PY_REPORT_ON_ARRAY_COPY for Numarray. - -* capi_maps.py - - - Always normalize .f2py_f2cmap keys to lower case.
- -* rules.py - - - Disabled ``index`` macro as it conflicts with the one defined - in string.h. - - Moved ``externroutines`` up to make it visible to ``usercode``. - - Fixed bug in f90 code generation: no empty line continuation is - allowed. - - Fixed undefined symbols failure when ``fortranname`` is used - to rename a wrapped function. - - Support for ``entry`` statement. - -* auxfuncs.py - - - Made is* functions more robust with respect to parameters that - have no typespec specified. - - Using ``size_t`` instead of ``int`` as the type of string - length. Fixes issues on 64-bit platforms. - -* setup.py - - - Fixed a bug that installed the ``f2py`` script as an ``.exe`` file. - -* f2py2e.py - - - ``--compiler=`` and ``--fcompiler=`` can be specified at the same time. - -* crackfortran.py - - - Fixed dependency detection for non-intent(in|inout|inplace) arguments. - They must depend on their dimensions, not vice versa. - - Don't match ``!!f2py`` as a start of f2py directive. - - Only effective intent attributes will be output to ``-h`` target. - - Introduced ``intent(callback)`` to build interface between Python - functions and Fortran external routines. - - Avoid including external arguments to __user__ modules. - - Initial hooks to evaluate ``kind`` and ``selected_int_kind``. - - Evaluating parameters in {char,kind}selectors and applying rmbadname. - - Evaluating parameters using also module parameters. Fixed the order - of parameter evaluation. - - Fixed silly bug: when block name was not lower cased, it was not - recognized correctly. - - Applying mapping '.false.'->'False', '.true.'->'True' to logical - parameters. TODO: Support for logical expressions is needed. - - Added support for multiple statements in one line (separated with semicolon). - - Impl. get_useparameters function for using parameter values from - other f90 modules. - - Applied Berthold's patch to fix bug in evaluating expressions - like ``1.d0/dvar``. - - Fixed bug in reading string parameters.
- - Evaluating parameters in charselector. Code cleanup. - - Using F90 module parameters to resolve kindselectors. - - Made the evaluation of module data init-expression more robust. - - Support for ``entry`` statement. - - Fixed ``determineexprtype`` that in the case of parameters - returned non-dictionary objects. - - Use ``-*- fix -*-`` to specify that a file is in fixed format. - -Release 2.39.235_1693 -===================== - -* fortranobject.{h,c} - - - Support for allocatable string arrays. - -* cfuncs.py - - - Call-back arguments can now also be instances that have ``__call__`` method - as well as instance methods. - -* f2py2e.py - - - Introduced ``--include_paths ::..`` command line - option. - - Added ``--compiler=`` support to change the C/C++ compiler from - f2py command line. - -* capi_maps.py - - - Handle ``XDY`` parameter constants. - -* crackfortran.py - - - Handle ``XDY`` parameter constants. - - - Introduced formatpattern to work around a corner case where reserved - keywords are used in format statement. Other than that, format pattern - has no use. - - - Parameters are now fully evaluated. - -* More splitting of documentation strings. - -* func2subr.py - fixed bug for function names that f77 compiler - would set ``integer`` type. - -Release 2.39.235_1660 -===================== - -* f2py2e.py - - - Fixed bug in using --f90flags=.. - -* f90mod_rules.py - - - Split generated documentation strings (to avoid MSVC issue when - string length>2k) - - - Ignore ``private`` module data. - -Release 2.39.235_1644 -===================== - -:Date: 24 February 2004 - -* Character arrays: - - - Finished complete support for character arrays and arrays of strings. - - ``character*n a(m)`` is treated like ``character a(m,n)`` with ``intent(c)``. - - Character arrays are now considered as ordinary arrays (not as arrays - of strings which actually didn't work). - -* docs - - - Initial f2py manpage file f2py.1.
- - Updated usersguide and other docs when using numpy_distutils 0.2.2 - and up. - -* capi_maps.py - - - Try harder to use .f2py_f2cmap mappings when kind is used. - -* crackfortran.py - - - Included files are first searched for in the current directory and - then in the source file directory. - - Ignoring dimension and character selector changes. - - Fixed bug in Fortran 90 comments of fixed format. - - Warn when .pyf signatures contain undefined symbols. - - Better detection of source code formats. Using ``-*- fortran -*-`` - or ``-*- f90 -*-`` in the first line of a Fortran source file is - recommended to help f2py detect the format, fixed or free, - respectively, correctly. - -* cfuncs.py - - - Fixed intent(inout) scalars when typecode=='l'. - - Fixed intent(inout) scalars when not using numarray. - - Fixed intent(inout) scalars when using numarray. - -* diagnose.py - - - Updated for numpy_distutils 0.2.2 and up. - - Added numarray support to diagnose. - -* fortranobject.c - - - Fixed nasty bug with intent(in,copy) complex slice arrays. - - Applied Todd's patch to support numarray's byteswapped or - misaligned arrays, requires numarray-0.8 or higher. - -* f2py2e.py - - - Applying new hooks for numpy_distutils 0.2.2 and up, keeping - backward compatibility with deprecation messages. - - Always using os.system on non-posix platforms in f2py2e.compile - function. - -* rules.py - - - Changed the order of buildcallback and usercode chunks. - -* setup.cfg - - - Added so that docs/ and tests/ directories are included in RPMs. - -* setup.py - - - Installing f2py.py instead of f2py.bat under NT. - - Introduced ``--with-numpy_distutils`` that is useful when making - f2py tar-ball with numpy_distutils included. - -Release 2.37.233-1545 -===================== - -:Date: 11 September 2003 - -* rules.py - - - Introduced ``interface_usercode`` replacement.
When ``usercode`` - statement is used inside the first interface block, its contents - will be inserted at the end of the initialization function of an F2PY - generated extension module (feature request: Berthold Höllmann). - - Introduced auxiliary function ``as_column_major_storage`` that - converts an input array to an array with column major storage order - (feature request: Hans Petter Langtangen). - -* crackfortran.py - - - Introduced ``pymethoddef`` statement. - -* cfuncs.py - - - Fixed "#ifdef in #define TRYPYARRAYTEMPLATE" bug (patch thanks - to Bernhard Gschaider) - -* auxfuncs.py - - - Introduced ``getpymethod`` function. - - Enabled multi-line blocks in ``callprotoargument`` statement. - -* f90mod_rules.py - - - Undid "Fixed Warning 43 emitted by Intel Fortran compiler" that - causes (curious) segfaults. - -* fortranobject.c - - - Fixed segfaults (that were introduced with recent memory leak - fixes) when using allocatable arrays. - - Introduced F2PY_REPORT_ON_ARRAY_COPY CPP macro int-variable. If defined - then a message is printed to stderr whenever a copy of an array is - made and the array size is larger than F2PY_REPORT_ON_ARRAY_COPY. - -Release 2.35.229-1505 -===================== - -:Date: 5 August 2003 - -* General - - - Introduced ``usercode`` statement (dropped ``c_code`` hooks). - -* setup.py - - - Updated the CVS location of numpy_distutils. - -* auxfuncs.py - - - Introduced ``isint1array(var)`` for fixing ``integer*1 intent(out)`` - support. - -* tests/f77/callback.py - - Introduced some basic tests. - -* src/fortranobject.{c,h} - - - Fixed memory leaks when getting/setting allocatable arrays. - (Bug report by Bernhard Gschaider) - - - Initial support for numarray (Todd Miller's patch). Use -DNUMARRAY - on the f2py command line to enable numarray support. Note that - there is no character array support and these hooks are not - tested with F90 compilers yet.
- -* cfuncs.py - - - Fixed a reference counting bug that appeared when constructing extra - argument list to callback functions. - - Added ``PyArray_LONG != PyArray_INT`` test. - -* f2py2e.py - - Undocumented ``--f90compiler``. - -* crackfortran.py - - - Introduced ``usercode`` statement. - - Fixed newlines when outputting multi-line blocks. - - Optimized ``getlincoef`` loop and ``analyzevars`` for cases where - len(vars) is large. - - Fixed callback string argument detection. - - Fixed evaluating expressions: only int|float expressions are - evaluated successfully. - -* docs - - Documented -DF2PY_REPORT_ATEXIT feature. - -* diagnose.py - - Added CPU information and sys.prefix printout. - -* tests/run_all.py - - Added cwd to PYTHONPATH. - -* tests/f??/return_{real,complex}.py - - Pass "infinity" check in SunOS. - -* rules.py - - - Fixed ``integer*1 intent(out)`` support - - Fixed free format continuation of f2py generated F90 files. - -* tests/mixed/ - - Introduced tests for mixing Fortran 77, Fortran 90 fixed and free - format codes in one module. - -* f90mod_rules.py - - - Fixed non-prototype warnings. - - Fixed Warning 43 emitted by Intel Fortran compiler. - - Avoid long lines in Fortran codes to reduce possible problems with - continuations of lines. - -Public Release 2.32.225-1419 -============================ - -:Date: 8 December 2002 - -* docs/usersguide/ - - Complete revision of F2PY Users Guide - -* tests/run_all.py - - - New file. A Python script to run all f2py unit tests. - -* Removed files: buildmakefile.py, buildsetup.py. - -* tests/f77/ - - - Added intent(out) scalar tests. - -* f2py_testing.py - - - Introduced. It contains jiffies, memusage, run, cmdline functions - useful for f2py unit tests site. - -* setup.py - - - Install numpy_distutils only if it is missing or is too old - for f2py. - -* f90modrules.py - - - Fixed wrapping f90 module data. - - Fixed wrapping f90 module subroutines.
- - Fixed f90 compiler warnings for wrapped functions by using interface - instead of external stmt for functions. - -* tests/f90/ - - - Introduced return_*.py tests. - -* func2subr.py - - - Added optional signature argument to createfuncwrapper. - - In f2pywrappers routines, declare external, scalar, remaining - arguments in that order. Fixes compiler error 'Invalid declaration' - for:: - - real function foo(a,b) - integer b - real a(b) - end - -* crackfortran.py - - - Removed first-line comment information support. - - Introduced multiline block. Currently usable only for - ``callstatement`` statement. - - Improved array length calculation in getarrlen(..). - - "From sky" program group is created only if ``groupcounter<1``. - See TODO.txt. - - Added support for ``dimension(n:*)``, ``dimension(*:n)``. They are - treated as ``dimension(*)`` by f2py. - - Fixed parameter substitution (this fixes TODO item by Patrick - LeGresley, 22 Aug 2001). - -* f2py2e.py - - - Disabled all makefile, setup, manifest file generation hooks. - - Disabled --[no]-external-modroutines option. All F90 module - subroutines will have Fortran/C interface hooks. - - --build-dir can be used with -c option. - - only/skip modes can be used with -c option. - - Fixed and documented `-h stdout` feature. - - Documented extra options. - - Introduced --quiet and --verbose flags. - -* cb_rules.py - - - Fixed debugcapi hooks for intent(c) scalar call-back arguments - (bug report: Pierre Schnizer). - - Fixed intent(c) for scalar call-back arguments. - - Improved failure reports. - -* capi_maps.py - - - Fixed complex(kind=..) to C type mapping bug. The following hold: - complex==complex(kind=4)==complex*8, complex(kind=8)==complex*16 - - Using signed_char for integer*1 (bug report: Steve M. Robbins). - - Fixed logical*8 function bug: changed its C correspondence to - long_long. - - Fixed memory leak when returning complex scalar.
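The complex kind-to-C mapping noted above can be sanity-checked from Python against NumPy's dtype sizes; a small illustrative sketch (assumes NumPy is installed):

```python
import numpy as np

# complex == complex(kind=4) == complex*8: two 4-byte floats, 8 bytes total
assert np.dtype(np.complex64).itemsize == 8

# complex(kind=8) == complex*16: two 8-byte floats, 16 bytes total
assert np.dtype(np.complex128).itemsize == 16

# integer*1 corresponds to a signed char, i.e. one byte
assert np.dtype(np.int8).itemsize == 1
```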
- -* __init__.py - - - Introduced a new function (for f2py test site, but could be useful - in general) ``compile(source[,modulename,extra_args])`` for - compiling fortran source codes directly from Python. - -* src/fortranobject.c - - - Multi-dimensional common block members and allocatable arrays - are returned as Fortran-contiguous arrays. - - Fixed NULL return to Python without exception. - - Fixed memory leak in getattr(,'__doc__'). - - .__doc__ is saved to .__dict__ (previously - it was generated each time when requested). - - Fixed a nasty typo from the previous item that caused data - corruption and occasional SEGFAULTs. - - array_from_pyobj accepts arbitrary rank arrays if the last dimension - is undefined. E.g. dimension(3,*) accepts a(3,4,5) and the result is - an array with dimension(3,20). - - Fixed (void*) casts to make g++ happy (bug report: eric). - - Changed the interface of ARR_IS_NULL macro to avoid "``NULL used in - arithmetics``" warnings from g++. - -* src/fortranobject.h - - - Undid the previous item. Defining NO_IMPORT_ARRAY for - src/fortranobject.c (bug report: travis) - - Ensured that PY_ARRAY_UNIQUE_SYMBOL is defined only for - src/fortranobject.c (bug report: eric). - -* rules.py - - - Introduced dummy routine feature. - - F77 and F90 wrapper subroutines (if any) are saved to different - files, -f2pywrappers.f and -f2pywrappers2.f90, - respectively. Therefore, wrapping F90 requires numpy_distutils >= - 0.2.0_alpha_2.229. - - Fixed compiler warnings about meaningless ``const void (*f2py_func)(..)``. - - Improved error messages for ``*_from_pyobj``. - - Changed __CPLUSPLUS__ macros to __cplusplus (bug report: eric). - - Changed (void*) casts to (f2py_init_func) (bug report: eric). - - Removed unnecessary (void*) cast for f2py_has_column_major_storage - in f2py_module_methods definition (bug report: eric). - - Changed the interface of f2py_has_column_major_storage function: - removed const from the 1st argument.
- -* cfuncs.py - - - Introduced -DPREPEND_FORTRAN. - - Fixed bus error on SGI by using PyFloat_AsDouble when ``__sgi`` is defined. - This seems to be a `known bug`__ with Python 2.1 and SGI. - - string_from_pyobj accepts only arrays whose element size==sizeof(char). - - logical scalars (intent(in),function) are normalized to 0 or 1. - - Removed NUMFROMARROBJ macro. - - (char|short)_from_pyobj now use int_from_pyobj. - - (float|long_double)_from_pyobj now use double_from_pyobj. - - complex_(float|long_double)_from_pyobj now use complex_double_from_pyobj. - - Rewrote ``*_from_pyobj`` to be more robust. This fixes segfaults if - getting * from a string. Note that int_from_pyobj differs - from PyNumber_Int in that it also accepts complex arguments - (takes the real part) and sequences (takes the 1st element). - - Removed unnecessary void* casts in NUMFROMARROBJ. - - Fixed casts in ``*_from_pyobj`` functions. - - Replaced CNUMFROMARROBJ with NUMFROMARROBJ. - -.. __: http://sourceforge.net/tracker/index.php?func=detail&aid=435026&group_id=5470&atid=105470 - -* auxfuncs.py - - - Introduced isdummyroutine(). - - Fixed islong_* functions. - - Fixed isintent_in for intent(c) arguments (bug report: Pierre Schnizer). - - Introduced F2PYError and throw_error. Using throw_error, f2py - rejects illegal .pyf file constructs that otherwise would cause - compilation failures or python crashes. - - Fixed islong_long(logical*8)->True. - - Introduced islogical() and islogicalfunction(). - - Fixed prototype string argument (bug report: eric). - -* Updated README.txt and doc strings. Starting to use docutils. - -* Speed up for ``*_from_pyobj`` functions if obj is a sequence. - -* Fixed SegFault (reported by M.Braun) due to invalid ``Py_DECREF`` - in ``GETSCALARFROMPYTUPLE``. - -Older Releases -============== - -:: - - *** Fixed missing includes when wrapping F90 module data. - *** Fixed typos in docs of build_flib options.
- *** Implemented prototype calculator if no callstatement or - callprotoargument statements are used. A warning is issued if - callstatement is used without callprotoargument. - *** Fixed transposing issue with array arguments in callback functions. - *** Removed -pyinc command line option. - *** Complete tests for Fortran 77 functions returning scalars. - *** Fixed returning character bug if --no-wrap-functions. - *** Described how to wrap F-compiled Fortran F90 module procedures - with F2PY. See doc/using_F_compiler.txt. - *** Fixed the order of build_flib options when using --fcompiler=... - *** Recognize .f95 and .F95 files as Fortran sources with free format. - *** Cleaned up the output of 'f2py -h': removed obsolete items, - added build_flib options section. - *** Added --help-compiler option: it lists available Fortran compilers - as detected by numpy_distutils/command/build_flib.py. This option - is available only with -c option. - - -:Release: 2.13.175-1250 -:Date: 4 April 2002 - -:: - - *** Fixed copying of non-contiguous 1-dimensional arrays bug. - (Thanks to Travis O.). - - -:Release: 2.13.175-1242 -:Date: 26 March 2002 - -:: - - *** Fixed ignoring type declarations. - *** Turned F2PY_REPORT_ATEXIT off by default. - *** Made MAX,MIN macros available by default so that they can - always be used in signature files. - *** Disabled F2PY_REPORT_ATEXIT for FreeBSD. - - -:Release: 2.13.175-1233 -:Date: 13 March 2002 - -:: - - *** Fixed Win32 port when using f2py.bat. (Thanks to Erik Wilsher). - *** F2PY_REPORT_ATEXIT is disabled for MACs. - *** Fixed incomplete dependency calculator. - - -:Release: 2.13.175-1222 -:Date: 3 March 2002 - -:: - - *** Plugged a memory leak for intent(out) arrays with overwrite=0. - *** Introduced CDOUBLE_to_CDOUBLE,.. functions for copy_ND_array. - These cast functions probably work incorrectly in Numeric. - - -:Release: 2.13.175-1212 -:Date: 23 February 2002 - -:: - - *** Updated f2py for the latest numpy_distutils.
- *** A nasty bug with multi-dimensional Fortran arrays is fixed - (intent(out) arrays had wrong shapes). (Thanks to Eric for - pointing out this bug). - *** F2PY_REPORT_ATEXIT is disabled by default for __WIN32__. - - -:Release: 2.11.174-1161 -:Date: 14 February 2002 - -:: - - *** Updated f2py for the latest numpy_distutils. - *** Fixed raising an error when the -m flag is missing. - *** Script name `f2py' now depends on the name of python executable. - For example, `python2.2 setup.py install' will create an f2py - script with the name `f2py2.2'. - *** Introduced 'callprotoargument' statement so that proper prototypes - can be declared. This is crucial when wrapping C functions as it - will fix segmentation faults when these wrappers use non-pointer - arguments (thanks to R. Clint Whaley for explaining this to me). - Note that in the f2py generated wrapper, the prototypes have - the following forms: - extern #rtype# #fortranname#(#callprotoargument#); - or - extern #rtype# F_FUNC(#fortranname#,#FORTRANNAME#)(#callprotoargument#); - *** Cosmetic fixes to F2PY_REPORT_ATEXIT feature.
It is introduced to provide a hack - to construct wrappers that may have a very different signature - pattern from the wrapped function. Currently 'callstatement' can - be used only inside a subroutine or function block (it should be enough - though) and must fit on one continuous line. The syntax of the - statement is: callstatement ; - - -:Release: 2.11.174 -:Date: 18 January 2002 - -:: - - *** Fixed memory-leak for PyFortranObject. - *** Introduced extra keyword argument copy_ for intent(copy) - variables. It defaults to 1 and forces a copy to be made for - intent(in) variables when passing on to wrapped functions (in case - they undesirably change the variable in-situ). - *** Introduced has_column_major_storage member function for all f2py - generated extension modules. It is equivalent to the Python call - 'transpose(obj).iscontiguous()' but very efficient. - *** Introduced -DF2PY_REPORT_ATEXIT. If this is used when compiling, - a report is printed to stderr as python exits. The report includes - the following timings: - 1) time spent in all wrapped function calls; - 2) time spent in f2py generated interface around the wrapped - functions. This gives a hint whether one should worry - about storing data in proper order (C or Fortran). - 3) time spent in Python functions called by wrapped functions - through call-back interface. - 4) time spent in f2py generated call-back interface. - For now, -DF2PY_REPORT_ATEXIT is enabled by default. Use - -DF2PY_REPORT_ATEXIT_DISABLE to disable it (I am not sure if - Windows has the needed tools, let me know). - Also, I would appreciate it if you could send me the output of 'F2PY - performance report' (with CPU and platform information) so that I - could optimize f2py generated interfaces for future releases. - *** Extension modules can be linked with dmalloc library. Use - -DDMALLOC when compiling. - *** Moved array_from_pyobj to fortranobject.c.
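For illustration, the 'callstatement' and 'callprotoargument' statements described in these notes sit inside a signature file roughly like this (a hypothetical ``.pyf`` sketch; the module and function names are made up):

```fortran
python module example   ! hypothetical module name
  interface
    ! wraps a C function:  void c_incr(int *n);
    subroutine c_incr(n)
      intent(c) c_incr              ! no Fortran name mangling
      integer intent(in,out) :: n
      callstatement (*f2py_func)(&n);
      callprotoargument int*
    end subroutine c_incr
  end interface
end python module example
```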
- *** Usage of intent(inout) arguments is made more strict -- only - contiguous arrays of the proper type are accepted. In general, - you should avoid using the intent(inout) attribute as it makes - wrappers of C and Fortran functions asymmetric. I recommend using - intent(in,out) instead. - *** intent(..) has new keywords: copy,cache. - intent(copy,in) - forces a copy of an input argument; this - may be useful for cases where the wrapped function changes - the argument in situ and this may not be a desired side effect. - Otherwise, it is safe to not use intent(copy) for the sake - of better performance. - intent(cache,hide|optional) - just creates a chunk of memory. - It does not care about proper storage order. Can also be - intent(in) but then the corresponding argument must be a - contiguous array with a proper elsize. - *** intent(c) can be used also for subroutine names so that - -DNO_APPEND_FORTRAN can be avoided for C functions. - - *** IMPORTANT BREAKING GOOD ... NEWS!!!: - - From now on you don't have to worry about the proper storage order - in multi-dimensional arrays that was earlier a real headache when - wrapping Fortran functions. Now f2py generated modules take care - of the proper conversions when needed. I have carefully designed - and optimized this interface to avoid any unnecessary memory usage - or copying of data. However, it is wise to use input arrays that - have proper storage order: for C arguments it is row-major and for - Fortran arguments it is column-major. But you don't need to worry - about that when developing your programs. The optimization of - initializing the program with proper data for possibly better - memory usage can be safely postponed until the program is working. - - This change also affects the signatures in .pyf files. If you have - created wrappers that take multi-dimensional arrays in arguments, - it is better to let f2py re-generate these files.
Otherwise you have to - make the following changes manually: reverse the axes indices in all - 'shape' macros. For example, if you have defined an array A(n,m) - and n=shape(A,1), m=shape(A,0) then you must change the last - statements to n=shape(A,0), m=shape(A,1). - - -:Release: 2.8.172 -:Date: 13 January 2002 - -:: - - *** Fixed -c process. Removed pyf_extensions function and pyf_file class. - *** Reorganized setup.py. It generates f2py or f2py.bat scripts - depending on the OS and the location of the python executable. - *** Started to use update_version from numpy_distutils that makes - f2py startup faster. As a side effect, the version number system - changed. - *** Introduced test-site/test_f2py2e.py script that runs all - tests. - *** Fixed global variables initialization problem in crackfortran - when run_main is called several times. - *** Added 'import Numeric' to C/API init function. - *** Fixed f2py.bat in setup.py. - *** Switched over to numpy_distutils and dropped fortran_support. - *** On Windows create f2py.bat file. - *** Introduced -c option: read fortran or pyf files, construct extension - modules, build, and save them to current directory. - In one word: do-it-all-in-one-call. - *** Introduced pyf_extensions(sources,f2py_opts) function. It simplifies - the extension building process considerably. Only for internal use. - *** Converted tests to use numpy_distutils in order to improve portability: - a,b,c - *** f2py2e.run_main() returns a pyf_file class instance containing - information about f2py generated files. - *** Introduced `--build-dir ' command line option. - *** Fixed setup.py for bdist_rpm command. - *** Added --numpy-setup command line option. - *** Fixed crackfortran that did not recognize capitalized type - specification with --no-lower flag. - *** `-h stdout' writes signature to stdout. - *** Fixed incorrect message for check() with empty name list.
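The automatic storage-order handling described in the notes above (row-major for C, column-major for Fortran) is easy to observe from Python; a minimal NumPy sketch (assumes NumPy is installed):

```python
import numpy as np

# NumPy arrays are C (row-major) ordered by default.
a = np.arange(6).reshape(2, 3)
assert a.flags['C_CONTIGUOUS']

# f2py-generated wrappers convert such input to Fortran (column-major)
# order when a Fortran routine needs it; the same conversion is exposed
# in NumPy as asfortranarray.
fa = np.asfortranarray(a)
assert fa.flags['F_CONTIGUOUS']

# Only the memory layout differs; the values are identical.
assert (a == fa).all()
```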
- - -:Release: 2.4.366 -:Date: 17 December 2001 - -:: - - *** Added command line option --[no-]manifest. - *** `make test' should run on Windows, but the results are not truthful. - *** Reorganized f2py2e.py a bit. Introduced run_main(comline_list) function - that can be useful when running f2py from another Python module. - *** Removed command line options -f77,-fix,-f90 as the file format - is determined from the extension of the fortran file - or from its header (first line starting with `!%' and containing keywords - free, fix, or f77). The latter overrides the former. - *** Introduced command line options --[no-]makefile,--[no-]latex-doc. - Users must explicitly use --makefile,--latex-doc if Makefile-, - module.tex is desired. --setup is default. Use --no-setup - to disable setup_.py generation. --overwrite-makefile - will set --makefile. - *** Added `f2py_rout_' to #capiname# in rules.py. - *** intent(...) statement with empty namelist forces intent(...) attribute for - all arguments. - *** Dropped DL_IMPORT and DL_EXPORT in fortranobject.h. - *** Added missing PyFortran_Type.ob_type initialization. - *** Added gcc-3.0 support. - *** Raising non-existing/broken Numeric as a FatalError exception. - *** Fixed Python 2.x specific += construct in fortran_support.py. - *** Fixed copy_ND_array for 1-rank arrays that used to call calloc(0,..) - and caused a core dump with a non-gcc compiler (Thanks to Pierre Schnizer - for reporting this bug). - *** Fixed "warning: variable `..' might be clobbered by `longjmp' or `vfork'": - - Reorganized the structure of wrapper functions to get rid of - `goto capi_fail' statements that caused the above warning. - -:Release: 2.3.343 -:Date: 12 December 2001 - -:: - - *** Issues with the Win32 support (thanks to Eric Jones and Tiffany Kamm): - - Using DL_EXPORT macro for init#modulename#. - - Changed PyObject_HEAD_INIT(&PyType_Type) to PyObject_HEAD_INIT(0). - - Initializing #name#_capi=NULL instead of Py_None in cb hooks.
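The run_main(comline_list) entry point mentioned in the 2.4.366 notes above survives in modern NumPy as ``numpy.f2py.run_main``; a minimal sketch that only generates a signature file, so no Fortran compiler is required (assumes NumPy is installed; file and module names are illustrative):

```python
import os
import tempfile

import numpy.f2py

# A tiny fixed-form Fortran 77 subroutine to analyze.
SRC = """      subroutine twice(x, y)
      double precision x, y
cf2py intent(out) y
      y = 2*x
      end
"""

with tempfile.TemporaryDirectory() as tmp:
    src_path = os.path.join(tmp, "twice.f")
    with open(src_path, "w") as fh:
        fh.write(SRC)
    sig_path = os.path.join(tmp, "twice.pyf")
    # Equivalent of the command line: f2py -h twice.pyf -m twice_mod twice.f
    numpy.f2py.run_main(["-h", sig_path, "-m", "twice_mod", src_path])
    with open(sig_path) as fh:
        signature = fh.read()

# The generated signature file declares the wrapped subroutine.
assert "twice" in signature
```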
- *** Fixed some 'warning: function declaration isn't a prototype', mainly - in fortranobject.{c,h}. - *** Fixed 'warning: missing braces around initializer'. - *** Fixed reading a line containing only a label. - *** Fixed nonportable 'cp -fv' to shutil.copy in f2py2e.py. - *** Replaced PyEval_CallObject with PyObject_CallObject in cb_rules. - *** Replaced Py_DECREF with Py_XDECREF when freeing hidden arguments. - (Reason: Py_DECREF caused segfault when an error was raised) - *** Impl. support for `include "file"' (in addition to `include 'file'') - *** Fixed bugs (buildsetup.py missing in Makefile, in generated MANIFEST.in) - - -:Release: 2.3.327 -:Date: 4 December 2001 - -:: - - *** Sending out the third public release of f2py. - *** Support for Intel(R) Fortran Compiler (thanks to Patrick LeGresley). - *** Introduced `threadsafe' statement to pyf-files (or to be used with - the 'f2py' directive in fortran codes) to force - Py_BEGIN|END_ALLOW_THREADS block around the Fortran subroutine - calling statement in Python C/API. `threadsafe' statement has - an effect only inside a subroutine block. - *** Introduced `fortranname <name>' statement to be used only within - pyf-files. This is useful when the wrapper (Python C/API) function - has a different name from the wrapped (Fortran) function. - *** Introduced `intent(c)' directive and statement. It is useful when - wrapping C functions. Use intent(c) for arguments that are - scalars (not pointers) or arrays (with row-ordering of elements). - - -:Release: 2.3.321 -:Date: 3 December 2001 - -:: - - *** f2py2e can be installed using distutils (run `python setup.py install'). - *** f2py builds setup_<modulename>.py. Use --[no-]setup to control this - feature. setup_<modulename>.py uses fortran_support module (from SciPy), - but for your convenience it is also included with f2py as an additional - package. 
Note that it does not support as many compilers as the - Makefile-<modulename> approach, but new compilers should be added to - the fortran_support module, not to the f2py2e package. - *** Fixed some compiler warnings about else statements. - diff --git a/pythonPackages/numpy/numpy/f2py/docs/OLDNEWS.txt b/pythonPackages/numpy/numpy/f2py/docs/OLDNEWS.txt deleted file mode 100755 index 401d2dcee4..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/OLDNEWS.txt +++ /dev/null @@ -1,63 +0,0 @@ - -.. topic:: Old F2PY NEWS - - March 30, 2004 - F2PY bug fix release (version 2.39.235-1693). Two new command line switches: - ``--compiler`` and ``--include_paths``. Support for allocatable string arrays. - Callback arguments may now be arbitrary callable objects. Win32 installers - for F2PY and Scipy_core are provided. - `Differences with the previous release (version 2.37.235-1660)`__. - - __ http://cens.ioc.ee/cgi-bin/cvsweb/python/f2py2e/docs/HISTORY.txt.diff?r1=1.98&r2=1.87&f=h - - - March 9, 2004 - F2PY bug fix release (version 2.39.235-1660). - `Differences with the previous release (version 2.37.235-1644)`__. - - __ http://cens.ioc.ee/cgi-bin/cvsweb/python/f2py2e/docs/HISTORY.txt.diff?r1=1.87&r2=1.83&f=h - - February 24, 2004 - Latest F2PY release (version 2.39.235-1644). - Support for numpy_distutils 0.2.2 and up (e.g. compiler flags can be - changed via f2py command line options). Implemented support for - character arrays and arrays of strings (e.g. ``character*(*) a(m,..)``). - *Important bug fixes regarding complex arguments, upgrading is - highly recommended*. Documentation updates. - `Differences with the previous release (version 2.37.233-1545)`__. - - __ http://cens.ioc.ee/cgi-bin/cvsweb/python/f2py2e/docs/HISTORY.txt.diff?r1=1.83&r2=1.58&f=h - - September 11, 2003 - Latest F2PY release (version 2.37.233-1545). - New statements: ``pymethoddef`` and ``usercode`` in interface blocks. - New function: ``as_column_major_storage``. - New CPP macro: ``F2PY_REPORT_ON_ARRAY_COPY``. 
- Bug fixes. - `Differences with the previous release (version 2.35.229-1505)`__. - - __ http://cens.ioc.ee/cgi-bin/cvsweb/python/f2py2e/docs/HISTORY.txt.diff?r1=1.58&r2=1.49&f=h - - August 2, 2003 - Latest F2PY release (version 2.35.229-1505). - `Differences with the previous release (version 2.32.225-1419)`__. - - __ http://cens.ioc.ee/cgi-bin/cvsweb/python/f2py2e/docs/HISTORY.txt.diff?r1=1.49&r2=1.28&f=h - - April 2, 2003 - Initial support for Numarray_ (thanks to Todd Miller). - - December 8, 2002 - Sixth public release of F2PY (version 2.32.225-1419). Comes with - revised `F2PY Users Guide`__, `new testing site`__, lots of fixes - and other improvements, see `HISTORY.txt`_ for details. - - __ usersguide/index.html - __ TESTING.txt_ - -.. References - ========== - -.. _HISTORY.txt: HISTORY.html -.. _Numarray: http://www.stsci.edu/resources/software_hardware/numarray -.. _TESTING.txt: TESTING.html \ No newline at end of file diff --git a/pythonPackages/numpy/numpy/f2py/docs/README.txt b/pythonPackages/numpy/numpy/f2py/docs/README.txt deleted file mode 100755 index cec8a6ec09..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/README.txt +++ /dev/null @@ -1,461 +0,0 @@ -.. -*- rest -*- - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - F2PY: Fortran to Python interface generator -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -:Author: Pearu Peterson -:License: NumPy License -:Web-site: http://cens.ioc.ee/projects/f2py2e/ -:Discussions to: `f2py-users mailing list`_ -:Documentation: `User's Guide`__, FAQ__ -:Platforms: All -:Date: $Date: 2005/01/30 18:54:53 $ - -.. _f2py-users mailing list: http://cens.ioc.ee/mailman/listinfo/f2py-users/ -__ usersguide/index.html -__ FAQ.html - -------------------------------- - -.. topic:: NEWS!!! - - January 5, 2006 - - WARNING -- these notes are out of date! The package structure for NumPy and - SciPy has changed considerably. Much of this information is now incorrect. 
- - January 30, 2005 - - Latest F2PY release (version 2.45.241_1926). - New features: wrapping unsigned integers, support for ``.pyf.src`` template files, - callback arguments can now be CObjects, fortran objects, built-in functions. - Introduced ``intent(aux)`` attribute. Wrapped objects have ``_cpointer`` - attribute holding C pointer to wrapped functions or variables. - Many bug fixes and improvements, updated documentation. - `Differences with the previous release (version 2.43.239_1831)`__. - - __ http://cens.ioc.ee/cgi-bin/cvsweb/python/f2py2e/docs/HISTORY.txt.diff?r1=1.163&r2=1.137&f=h - - October 4, 2004 - F2PY bug fix release (version 2.43.239_1831). - Better support for 64-bit platforms. - Introduced ``--help-link`` and ``--link-<resource>`` options. - Bug fixes. - `Differences with the previous release (version 2.43.239_1806)`__. - - __ http://cens.ioc.ee/cgi-bin/cvsweb/python/f2py2e/docs/HISTORY.txt.diff?r1=1.137&r2=1.131&f=h - - September 25, 2004 - Latest F2PY release (version 2.43.239_1806). - Support for ``ENTRY`` statement. New attributes: - ``intent(inplace)``, ``intent(callback)``. Supports Numarray 1.1. - Introduced ``-*- fix -*-`` header content. Improved ``PARAMETER`` support. - Documentation updates. `Differences with the previous release - (version 2.39.235-1693)`__. - - __ http://cens.ioc.ee/cgi-bin/cvsweb/python/f2py2e/docs/HISTORY.txt.diff?r1=1.131&r2=1.98&f=h - - `History of NEWS`__ - - __ OLDNEWS.html - -------------------------------- - -.. Contents:: - -============== - Introduction -============== - -The purpose of the F2PY --*Fortran to Python interface generator*-- -project is to provide a connection between the Python_ and Fortran -languages. F2PY is a Python extension tool for creating Python C/API -modules from (handwritten or F2PY generated) signature files (or -directly from Fortran sources). The generated extension modules -facilitate: - -* Calling Fortran 77/90/95, Fortran 90/95 module, and C functions from - Python. 
- -* Accessing Fortran 77 ``COMMON`` blocks and Fortran 90/95 module - data (including allocatable arrays) from Python. - -* Calling Python functions from Fortran or C (call-backs). - -* Automatically handling the difference in the data storage order of - multi-dimensional Fortran and Numerical Python (i.e. C) arrays. - -In addition, F2PY can build the generated extension modules to shared -libraries with one command. F2PY uses the ``numpy_distutils`` module -from SciPy_, which supports a number of major Fortran compilers. - -.. - (see `COMPILERS.txt`_ for more information). - -F2PY generated extension modules depend on the NumPy_ package, which -provides a fast multi-dimensional array facility to Python. - - ---------------- - Main features ---------------- - -Here follows a more detailed list of F2PY features: - -* F2PY scans real Fortran codes to produce the so-called signature - files (.pyf files). The signature files contain all the information - (function names, arguments and their types, etc.) that is needed to - construct Python bindings to Fortran (or C) functions. - - The syntax of signature files is borrowed from the - Fortran 90/95 language specification and has some F2PY specific - extensions. The signature files can be modified to dictate how - Fortran (or C) programs are called from Python: - - + F2PY solves dependencies between arguments (this is relevant for - the order of initializing variables in extension modules). - - + Arguments can be specified to be optional or hidden, which - simplifies calling Fortran programs from Python considerably. - - + In principle, one can design any Python signature for a given - Fortran function, e.g. change the order of arguments, introduce - auxiliary arguments, hide the arguments, process the arguments - before passing to Fortran, return arguments as output of F2PY - generated functions, etc. - -* F2PY automatically generates __doc__ strings (and optionally LaTeX - documentation) for extension modules. 
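The storage-order handling mentioned above can be illustrated directly in Python. This is only a sketch using modern NumPy (`np.asfortranarray`) for illustration; the README itself dates from the Numeric era, so the names here are not the ones this document assumes:

```python
import numpy as np

# A 2x3 array in default (C) order: rows are contiguous in memory.
a = np.arange(6, dtype=np.float32).reshape(2, 3)

# The same values in Fortran order: columns are contiguous, which is the
# layout a wrapped Fortran routine expects. F2PY transparently copies a
# C-ordered input array to this layout before the Fortran call.
f = np.asfortranarray(a)

print(a.flags['C_CONTIGUOUS'], f.flags['F_CONTIGUOUS'])  # True True
```

Passing an already Fortran-contiguous array avoids the copy entirely, which is why the `intent(copy)`/`intent(overwrite)` attributes discussed later matter for large arrays.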
- -* F2PY generated functions accept arbitrary (but sensible) Python - objects as arguments. The F2PY interface automatically takes care of - type-casting and handling of non-contiguous arrays. - -* The following Fortran constructs are recognized by F2PY: - - + All basic Fortran types:: - - integer[ | *1 | *2 | *4 | *8 ], logical[ | *1 | *2 | *4 | *8 ] - integer*([ -1 | -2 | -4 | -8 ]) - character[ | *(*) | *1 | *2 | *3 | ... ] - real[ | *4 | *8 | *16 ], double precision - complex[ | *8 | *16 | *32 ] - - Negative ``integer`` kinds are used to wrap unsigned integers. - - + Multi-dimensional arrays of all basic types with the following - dimension specifications:: - - <dim> | <start>:<end> | * | : - - + Attributes and statements:: - - intent([ in | inout | out | hide | in,out | inout,out | c | - copy | cache | callback | inplace | aux ]) - dimension(<dimspec>) - common, parameter - allocatable - optional, required, external - depend([<names>]) - check([<C-booleanexpr>]) - note(<LaTeX text>) - usercode, callstatement, callprotoargument, threadsafe, fortranname - pymethoddef - entry - -* Because there are only small (and easily handled) differences - between calling C and Fortran functions from F2PY generated - extension modules, F2PY is also well suited for wrapping C - libraries to Python. - -* Practice has shown that F2PY generated interfaces (to C or Fortran - functions) are less error prone and even more efficient than - handwritten extension modules. The F2PY generated interfaces are - easy to maintain and any future optimization of F2PY generated - interfaces transparently applies to extension modules by just - regenerating them with the latest version of F2PY. - -* `F2PY Users Guide and Reference Manual`_ - - -=============== - Prerequisites -=============== - -F2PY requires the following software installed: - -* Python_ (versions 1.5.2 or later; 2.1 and up are recommended). - You must have the python-dev package installed. 
-* NumPy_ (versions 13 or later; 20.x, 21.x, 22.x, 23.x are recommended) -* Numarray_ (version 0.9 and up), optional, partial support. -* Scipy_distutils (version 0.2.2 and up are recommended) from SciPy_ - project. Get it from Scipy CVS or download it below. - -Python 1.x users also need distutils_. - -Of course, to build extension modules, you'll also need working C -and/or Fortran compilers installed. - -========== - Download -========== - -You can download the sources for the latest F2PY and numpy_distutils -releases as: - -* `2.x`__/`F2PY-2-latest.tar.gz`__ -* `2.x`__/`numpy_distutils-latest.tar.gz`__ - -Windows users might be interested in Win32 installer for F2PY and -Scipy_distutils (these installers are built using Python 2.3): - -* `2.x`__/`F2PY-2-latest.win32.exe`__ -* `2.x`__/`numpy_distutils-latest.win32.exe`__ - -Older releases are also available in the directories -`rel-0.x`__, `rel-1.x`__, `rel-2.x`__, `rel-3.x`__, `rel-4.x`__, `rel-5.x`__, -if you need them. - -.. __: 2.x/ -.. __: 2.x/F2PY-2-latest.tar.gz -.. __: 2.x/ -.. __: 2.x/numpy_distutils-latest.tar.gz -.. __: 2.x/ -.. __: 2.x/F2PY-2-latest.win32.exe -.. __: 2.x/ -.. __: 2.x/numpy_distutils-latest.win32.exe -.. __: rel-0.x -.. __: rel-1.x -.. __: rel-2.x -.. __: rel-3.x -.. __: rel-4.x -.. __: rel-5.x - -Development version of F2PY from CVS is available as `f2py2e.tar.gz`__. - -__ http://cens.ioc.ee/cgi-bin/viewcvs.cgi/python/f2py2e/f2py2e.tar.gz?tarball=1 - -Debian Sid users can simply install the ``python-f2py`` package. - -============== - Installation -============== - -Unpack the source file, change to directory ``F2PY-?-???/`` and run -(you may need to become root):: - - python setup.py install - -The F2PY installation installs a Python package ``f2py2e`` to your -Python ``site-packages`` directory and a script ``f2py`` to your -Python executable path. - -See also the Installation__ section in `F2PY FAQ`_. - -.. 
__: FAQ.html#installation - -Similarly, to install ``numpy_distutils``, unpack its tar-ball and run:: - - python setup.py install - -======= - Usage -======= - -To check if F2PY is installed correctly, run -:: - - f2py - -without any arguments. This should print out the usage information of -the ``f2py`` program. - -Next, try out the following three steps: - -1) Create a Fortran file `hello.f`__ that contains:: - - C File hello.f - subroutine foo (a) - integer a - print*, "Hello from Fortran!" - print*, "a=",a - end - -__ hello.f - -2) Run - - :: - - f2py -c -m hello hello.f - - This will build an extension module ``hello.so`` (or ``hello.sl``, - or ``hello.pyd``, etc. depending on your platform) into the current - directory. - -3) Now in Python try:: - - >>> import hello - >>> print hello.__doc__ - >>> print hello.foo.__doc__ - >>> hello.foo(4) - Hello from Fortran! - a= 4 - >>> - -If the above works, then you can try out more thorough -`F2PY unit tests`__ and read the `F2PY Users Guide and Reference Manual`_. - -__ FAQ.html#q-how-to-test-if-f2py-is-working-correctly - -=============== - Documentation -=============== - -The documentation of the F2PY project is collected in ``f2py2e/docs/`` -directory. It contains the following documents: - -`README.txt`_ (in CVS__) - The first thing to read about F2PY -- this document. - -__ http://cens.ioc.ee/cgi-bin/cvsweb/python/f2py2e/docs/README.txt?rev=HEAD&content-type=text/x-cvsweb-markup - -`usersguide/index.txt`_, `usersguide/f2py_usersguide.pdf`_ - F2PY Users Guide and Reference Manual. Contains lots of examples. - -`FAQ.txt`_ (in CVS__) - F2PY Frequently Asked Questions. - -__ http://cens.ioc.ee/cgi-bin/cvsweb/python/f2py2e/docs/FAQ.txt?rev=HEAD&content-type=text/x-cvsweb-markup - -`TESTING.txt`_ (in CVS__) - About F2PY testing site. What tests are available and how to run them. 
- -__ http://cens.ioc.ee/cgi-bin/cvsweb/python/f2py2e/docs/TESTING.txt?rev=HEAD&content-type=text/x-cvsweb-markup - -`HISTORY.txt`_ (in CVS__) - A list of the latest changes in F2PY. This is the most up-to-date - document on F2PY. - -__ http://cens.ioc.ee/cgi-bin/cvsweb/python/f2py2e/docs/HISTORY.txt?rev=HEAD&content-type=text/x-cvsweb-markup - -`THANKS.txt`_ - Acknowledgments. - -.. - `COMPILERS.txt`_ - Compiler and platform specific notes. - -=============== - Mailing list -=============== - -A mailing list f2py-users@cens.ioc.ee is open for F2PY related -discussion/questions/etc. - -* `Subscribe..`__ -* `Archives..`__ - -__ http://cens.ioc.ee/mailman/listinfo/f2py-users -__ http://cens.ioc.ee/pipermail/f2py-users - - -===== - CVS -===== - -F2PY is being developed under CVS_. The CVS version of F2PY can be -obtained as follows: - -1) First you need to login (the password is ``guest``):: - - cvs -d :pserver:anonymous@cens.ioc.ee:/home/cvs login - -2) and then do the checkout:: - - cvs -z6 -d :pserver:anonymous@cens.ioc.ee:/home/cvs checkout f2py2e - -3) You can update your local F2PY tree ``f2py2e/`` by executing:: - - cvs -z6 update -P -d - -You can browse the `F2PY CVS`_ repository. - -=============== - Contributions -=============== - -* `A short introduction to F2PY`__ by Pierre Schnizer. - -* `F2PY notes`__ by Fernando Perez. - -* `Debian packages of F2PY`__ by José Fonseca. [OBSOLETE, Debian Sid - ships python-f2py package] - -__ http://fubphpc.tu-graz.ac.at/~pierre/f2py_tutorial.tar.gz -__ http://cens.ioc.ee/pipermail/f2py-users/2003-April/000472.html -__ http://jrfonseca.dyndns.org/debian/ - - -=============== - Related sites -=============== - -* `Numerical Python`_ -- adds a fast array facility to the Python language. -* Pyfort_ -- A Python-Fortran connection tool. -* SciPy_ -- An open source library of scientific tools for Python. -* `Scientific Python`_ -- A collection of Python modules that are - useful for scientific computing. 
-* `The Fortran Company`_ -- A place to find products, services, and general - information related to the Fortran programming language. -* `American National Standard Programming Language FORTRAN ANSI(R) X3.9-1978`__ -* `J3`_ -- The US Fortran standards committee. -* SWIG_ -- A software development tool that connects programs written - in C and C++ with a variety of high-level programming languages. -* `Mathtools.net`_ -- A technical computing portal for all scientific - and engineering needs. - -.. __: http://www.fortran.com/fortran/F77_std/rjcnf.html - -.. References - ========== - - -.. _F2PY Users Guide and Reference Manual: usersguide/index.html -.. _usersguide/index.txt: usersguide/index.html -.. _usersguide/f2py_usersguide.pdf: usersguide/f2py_usersguide.pdf -.. _README.txt: README.html -.. _COMPILERS.txt: COMPILERS.html -.. _F2PY FAQ: -.. _FAQ.txt: FAQ.html -.. _HISTORY.txt: HISTORY.html -.. _HISTORY.txt from CVS: http://cens.ioc.ee/cgi-bin/cvsweb/python/f2py2e/docs/HISTORY.txt?rev=HEAD&content-type=text/x-cvsweb-markup -.. _THANKS.txt: THANKS.html -.. _TESTING.txt: TESTING.html -.. _F2PY CVS2: http://cens.ioc.ee/cgi-bin/cvsweb/python/f2py2e/ -.. _F2PY CVS: http://cens.ioc.ee/cgi-bin/viewcvs.cgi/python/f2py2e/ - -.. _CVS: http://www.cvshome.org/ -.. _Python: http://www.python.org/ -.. _SciPy: http://www.numpy.org/ -.. _NumPy: http://www.numpy.org/ -.. _Numarray: http://www.stsci.edu/resources/software_hardware/numarray -.. _docutils: http://docutils.sourceforge.net/ -.. _distutils: http://www.python.org/sigs/distutils-sig/ -.. _Numerical Python: http://www.numpy.org/ -.. _Pyfort: http://pyfortran.sourceforge.net/ -.. _Scientific Python: - http://starship.python.net/crew/hinsen/scientific.html -.. _The Fortran Company: http://www.fortran.com/fortran/ -.. _J3: http://www.j3-fortran.org/ -.. _Mathtools.net: http://www.mathtools.net/ -.. _SWIG: http://www.swig.org/ - -.. 
- Local Variables: - mode: indented-text - indent-tabs-mode: nil - sentence-end-double-space: t - fill-column: 70 - End: diff --git a/pythonPackages/numpy/numpy/f2py/docs/TESTING.txt b/pythonPackages/numpy/numpy/f2py/docs/TESTING.txt deleted file mode 100755 index d905211754..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/TESTING.txt +++ /dev/null @@ -1,108 +0,0 @@ - -======================================================= - F2PY unit testing site -======================================================= - -.. Contents:: - -Tests ----- - -* To run all F2PY unit tests in one command:: - - cd tests - python run_all.py [<options>] - - For example:: - - localhost:~/src_cvs/f2py2e/tests$ python2.2 run_all.py 100 --quiet - ********************************************** - Running '/usr/bin/python2.2 f77/return_integer.py 100 --quiet' - run 1000 tests in 1.87 seconds - initial virtual memory size: 3952640 bytes - current virtual memory size: 3952640 bytes - ok - ********************************************** - Running '/usr/bin/python2.2 f77/return_logical.py 100 --quiet' - run 1000 tests in 1.47 seconds - initial virtual memory size: 3952640 bytes - current virtual memory size: 3952640 bytes - ok - ... - - If some tests fail, try to run the failing tests separately (without - the ``--quiet`` option) as described below to get more information - about the failure. - -* Test intent(in), intent(out) scalar arguments, - scalars returned by F77 functions - and F90 module functions:: - - tests/f77/return_integer.py - tests/f77/return_real.py - tests/f77/return_logical.py - tests/f77/return_complex.py - tests/f77/return_character.py - tests/f90/return_integer.py - tests/f90/return_real.py - tests/f90/return_logical.py - tests/f90/return_complex.py - tests/f90/return_character.py - - Change to tests/ directory and run:: - - python f77/return_<type>.py [<options>] - python f90/return_<type>.py [<options>] - - where ``<type>`` is integer, real, logical, complex, or character. - Test script options are described below. 
- - A test is considered successful if the last printed line is "ok". - - If you get import errors like:: - - ImportError: No module named f77_ext_return_integer - - but ``f77_ext_return_integer.so`` exists in the current directory then - it means that the current directory is not included in `sys.path` - in your Python installation. As a fix, prepend ``.`` to the ``PYTHONPATH`` - environment variable and rerun the tests. For example:: - - PYTHONPATH=. python f77/return_integer.py - -* Test mixing Fortran 77, Fortran 90 fixed and free format codes:: - - tests/mixed/run.py - -* Test basic callback hooks:: - - tests/f77/callback.py - -Options ------- - -You may want to use the following options when running the test -scripts: - -``<n>`` - Run tests ``<n>`` times. Useful for detecting memory leaks. Under - Linux, test scripts output the virtual memory size state of the process - before and after calling the wrapped functions. - -``--quiet`` - Suppress all messages. On success only "ok" should be displayed. - -``--fcompiler=<name>`` - Use:: - - f2py -c --help-fcompiler - - to find out what compilers are available (or more precisely, which - ones are recognized by ``numpy_distutils``). - -Reporting failures ------------------- - -XXX: (1) make sure that failures are due to f2py and (2) send full -stdout/stderr messages to me. Also add compiler, python, and platform -information. diff --git a/pythonPackages/numpy/numpy/f2py/docs/THANKS.txt b/pythonPackages/numpy/numpy/f2py/docs/THANKS.txt deleted file mode 100755 index 0a3f0b9d66..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/THANKS.txt +++ /dev/null @@ -1,63 +0,0 @@ - -================= - Acknowledgments -================= - -F2PY__ is an open source Python package and command line tool developed and -maintained by Pearu Peterson (me__). - -.. __: http://cens.ioc.ee/projects/f2py2e/ -.. 
__: http://cens.ioc.ee/~pearu/ - -Many people have contributed to the F2PY project in terms of interest, -encouragement, suggestions, criticism, bug reports, code -contributions, and keeping me busy with developing F2PY. For all that -I thank - - James Amundson, John Barnard, David Beazley, Frank Bertoldi, Roman - Bertle, James Boyle, Moritz Braun, Rolv Erlend Bredesen, John - Chaffer, Fred Clare, Adam Collard, Ben Cornett, Jose L Gomez Dans, - Jaime D. Perea Duarte, Paul F Dubois, Thilo Ernst, Bonilla Fabian, - Martin Gelfand, Eduardo A. Gonzalez, Siegfried Gonzi, Bernhard - Gschaider, Charles Doutriaux, Jeff Hagelberg, Janko Hauser, Thomas - Hauser, Heiko Henkelmann, William Henney, Yueqiang Huang, Asim - Hussain, Berthold Höllmann, Vladimir Janku, Henk Jansen, Curtis - Jensen, Eric Jones, Tiffany Kamm, Andrey Khavryuchenko, Greg - Kochanski, Jochen Küpper, Simon Lacoste-Julien, Tim Lahey, Hans - Petter Langtangen, Jeff Layton, Matthew Lewis, Patrick LeGresley, - Joaquim R R A Martins, Paul Magwene, Lionel Maziere, Craig McNeile, - Todd Miller, David C. Morrill, Dirk Muders, Kevin Mueller, Andrew - Mullhaupt, Vijayendra Munikoti, Travis Oliphant, Kevin O'Mara, Arno - Paehler, Fernando Perez, Didrik Pinte, Todd Alan Pitts, Prabhu - Ramachandran, Brad Reisfeld, Steve M. Robbins, Theresa Robinson, - Pedro Rodrigues, Les Schaffer, Christoph Scheurer, Herb Schilling, - Pierre Schnizer, Kevin Smith, Paulo Teotonio Sobrinho, José Rui - Faustino de Sousa, Andrew Swan, Dustin Tang, Charlie Taylor, Paul le - Texier, Michael Tiller, Semen Trygubenko, Ravi C Venkatesan, Peter - Verveer, Nils Wagner, R. Clint Whaley, Erik Wilsher, Martin - Wiechert, Gilles Zerah, SungPil Yoon. - -(This list may not be complete. Please forgive me if I have left you -out and let me know, I'll add your name.) - -Special thanks are due to ... - -Eric Jones - he and Travis O. 
are responsible for starting the -numpy_distutils project that made it possible to move most of the platform- and -compiler-specific code out of F2PY. This simplified maintaining the -F2PY project considerably. - -Joaquim R R A Martins - he made it possible for me to test F2PY on the IRIX64 -platform. He also presented our paper about F2PY at the 9th Python -Conference that I planned to attend but had to cancel at the very last -minute. - -Travis Oliphant - his knowledge of and experience with the Numerical Python -C/API has been invaluable in the early development of the F2PY program. -His major contributions are the call-back mechanism and the copying of N-D arrays -of arbitrary types. - -Todd Miller - he is responsible for Numarray support in F2PY. - -Thanks! - Pearu diff --git a/pythonPackages/numpy/numpy/f2py/docs/default.css b/pythonPackages/numpy/numpy/f2py/docs/default.css deleted file mode 100755 index 9289e28260..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/default.css +++ /dev/null @@ -1,180 +0,0 @@ -/* -:Author: David Goodger -:Contact: goodger@users.sourceforge.net -:date: $Date: 2002/08/01 20:52:44 $ -:version: $Revision: 1.1 $ -:copyright: This stylesheet has been placed in the public domain. - -Default cascading style sheet for the HTML output of Docutils. 
-*/ - -body { - background: #FFFFFF ; - color: #000000 -} - -a.footnote-reference { - font-size: smaller ; - vertical-align: super } - -a.target { - color: blue } - -a.toc-backref { - text-decoration: none ; - color: black } - -dd { - margin-bottom: 0.5em } - -div.abstract { - margin: 2em 5em } - -div.abstract p.topic-title { - font-weight: bold ; - text-align: center } - -div.attention, div.caution, div.danger, div.error, div.hint, -div.important, div.note, div.tip, div.warning { - margin: 2em ; - border: medium outset ; - padding: 1em } - -div.attention p.admonition-title, div.caution p.admonition-title, -div.danger p.admonition-title, div.error p.admonition-title, -div.warning p.admonition-title { - color: red ; - font-weight: bold ; - font-family: sans-serif } - -div.hint p.admonition-title, div.important p.admonition-title, -div.note p.admonition-title, div.tip p.admonition-title { - font-weight: bold ; - font-family: sans-serif } - -div.dedication { - margin: 2em 5em ; - text-align: center ; - font-style: italic } - -div.dedication p.topic-title { - font-weight: bold ; - font-style: normal } - -div.figure { - margin-left: 2em } - -div.footer, div.header { - font-size: smaller } - -div.system-messages { - margin: 5em } - -div.system-messages h1 { - color: red } - -div.system-message { - border: medium outset ; - padding: 1em } - -div.system-message p.system-message-title { - color: red ; - font-weight: bold } - -div.topic { - margin: 2em } - -h1.title { - text-align: center } - -h2.subtitle { - text-align: center } - -hr { - width: 75% } - -ol.simple, ul.simple { - margin-bottom: 1em } - -ol.arabic { - list-style: decimal } - -ol.loweralpha { - list-style: lower-alpha } - -ol.upperalpha { - list-style: upper-alpha } - -ol.lowerroman { - list-style: lower-roman } - -ol.upperroman { - list-style: upper-roman } - -p.caption { - font-style: italic } - -p.credits { - font-style: italic ; - font-size: smaller } - -p.first { - margin-top: 0 } - -p.label { - 
white-space: nowrap } - -p.topic-title { - font-weight: bold } - -pre.literal-block, pre.doctest-block { - margin-left: 2em ; - margin-right: 2em ; - background-color: #eeeeee } - -span.classifier { - font-family: sans-serif ; - font-style: oblique } - -span.classifier-delimiter { - font-family: sans-serif ; - font-weight: bold } - -span.field-argument { - font-style: italic } - -span.interpreted { - font-family: sans-serif } - -span.option-argument { - font-style: italic } - -span.problematic { - color: red } - -table { - margin-top: 0.5em ; - margin-bottom: 0.5em } - -table.citation { - border-left: solid thin gray ; - padding-left: 0.5ex } - -table.docinfo { - margin: 2em 4em } - -table.footnote { - border-left: solid thin black ; - padding-left: 0.5ex } - -td, th { - padding-left: 0.5em ; - padding-right: 0.5em ; - vertical-align: baseline } - -td.docinfo-name { - font-weight: bold ; - text-align: right } - -td.field-name { - font-weight: bold } diff --git a/pythonPackages/numpy/numpy/f2py/docs/docutils.conf b/pythonPackages/numpy/numpy/f2py/docs/docutils.conf deleted file mode 100755 index 4e5a8425bb..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/docutils.conf +++ /dev/null @@ -1,16 +0,0 @@ -[general] - -# These entries affect all processing: -#source-link: 1 -datestamp: %Y-%m-%d %H:%M UTC -generator: 1 - -# These entries affect HTML output: -#stylesheet-path: pearu_style.css -output-encoding: latin-1 - -# These entries affect reStructuredText-style PEPs: -#pep-template: pep-html-template -#pep-stylesheet-path: stylesheets/pep.css -#python-home: http://www.python.org -#no-random: 1 diff --git a/pythonPackages/numpy/numpy/f2py/docs/hello.f b/pythonPackages/numpy/numpy/f2py/docs/hello.f deleted file mode 100755 index 3e0dc6d212..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/hello.f +++ /dev/null @@ -1,7 +0,0 @@ -C File hello.f - subroutine foo (a) - integer a - print*, "Hello from Fortran!" 
- print*, "a=",a - end - diff --git a/pythonPackages/numpy/numpy/f2py/docs/pyforttest.pyf b/pythonPackages/numpy/numpy/f2py/docs/pyforttest.pyf deleted file mode 100755 index 79a9ae205f..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/pyforttest.pyf +++ /dev/null @@ -1,5 +0,0 @@ -subroutine foo(a,m,n) -integer m = size(a,1) -integer n = size(a,2) -real, intent(inout) :: a(m,n) -end subroutine foo diff --git a/pythonPackages/numpy/numpy/f2py/docs/pytest.py b/pythonPackages/numpy/numpy/f2py/docs/pytest.py deleted file mode 100755 index abd3487dfb..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/pytest.py +++ /dev/null @@ -1,10 +0,0 @@ -#File: pytest.py -import Numeric -def foo(a): - a = Numeric.array(a) - m,n = a.shape - for i in range(m): - for j in range(n): - a[i,j] = a[i,j] + 10*(i+1) + (j+1) - return a -#eof diff --git a/pythonPackages/numpy/numpy/f2py/docs/simple.f b/pythonPackages/numpy/numpy/f2py/docs/simple.f deleted file mode 100755 index ba468a509c..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/simple.f +++ /dev/null @@ -1,13 +0,0 @@ -cFile: simple.f - subroutine foo(a,m,n) - integer m,n,i,j - real a(m,n) -cf2py intent(in,out) a -cf2py intent(hide) m,n - do i=1,m - do j=1,n - a(i,j) = a(i,j) + 10*i+j - enddo - enddo - end -cEOF diff --git a/pythonPackages/numpy/numpy/f2py/docs/simple_session.dat b/pythonPackages/numpy/numpy/f2py/docs/simple_session.dat deleted file mode 100755 index 10d9dc9627..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/simple_session.dat +++ /dev/null @@ -1,51 +0,0 @@ ->>> import pytest ->>> import f2pytest ->>> import pyforttest ->>> print f2pytest.foo.__doc__ -foo - Function signature: - a = foo(a) -Required arguments: - a : input rank-2 array('f') with bounds (m,n) -Return objects: - a : rank-2 array('f') with bounds (m,n) - ->>> print pyforttest.foo.__doc__ -foo(a) - ->>> pytest.foo([[1,2],[3,4]]) -array([[12, 14], - [24, 26]]) ->>> f2pytest.foo([[1,2],[3,4]]) # F2PY can handle arbitrary input sequences 
-array([[ 12., 14.], - [ 24., 26.]],'f') ->>> pyforttest.foo([[1,2],[3,4]]) -Traceback (most recent call last): - File "", line 1, in ? -pyforttest.error: foo, argument A: Argument intent(inout) must be an array. - ->>> import Numeric ->>> a=Numeric.array([[1,2],[3,4]],'f') ->>> f2pytest.foo(a) -array([[ 12., 14.], - [ 24., 26.]],'f') ->>> a # F2PY makes a copy when input array is not Fortran contiguous -array([[ 1., 2.], - [ 3., 4.]],'f') ->>> a=Numeric.transpose(Numeric.array([[1,3],[2,4]],'f')) ->>> a -array([[ 1., 2.], - [ 3., 4.]],'f') ->>> f2pytest.foo(a) -array([[ 12., 14.], - [ 24., 26.]],'f') ->>> a # F2PY passes Fortran contiguous input array directly to Fortran -array([[ 12., 14.], - [ 24., 26.]],'f') -# See intent(copy), intent(overwrite), intent(inplace), intent(inout) -# attributes documentation to enhance the above behavior. - ->>> a=Numeric.array([[1,2],[3,4]],'f') ->>> pyforttest.foo(a) ->>> a # Huh? Pyfort 8.5 gives wrong results.. -array([[ 12., 23.], - [ 15., 26.]],'f') diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/allocarr.f90 b/pythonPackages/numpy/numpy/f2py/docs/usersguide/allocarr.f90 deleted file mode 100755 index e0d6c2ec85..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/allocarr.f90 +++ /dev/null @@ -1,16 +0,0 @@ -module mod - real, allocatable, dimension(:,:) :: b -contains - subroutine foo - integer k - if (allocated(b)) then - print*, "b=[" - do k = 1,size(b,1) - print*, b(k,1:size(b,2)) - enddo - print*, "]" - else - print*, "b is not allocated" - endif - end subroutine foo -end module mod diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/allocarr_session.dat b/pythonPackages/numpy/numpy/f2py/docs/usersguide/allocarr_session.dat deleted file mode 100755 index fc91959b73..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/allocarr_session.dat +++ /dev/null @@ -1,27 +0,0 @@ ->>> import allocarr ->>> print allocarr.mod.__doc__ -b - 'f'-array(-1,-1), not allocated -foo - Function 
signature: - foo() - ->>> allocarr.mod.foo() - b is not allocated ->>> allocarr.mod.b = [[1,2,3],[4,5,6]] # allocate/initialize b ->>> allocarr.mod.foo() - b=[ - 1.000000 2.000000 3.000000 - 4.000000 5.000000 6.000000 - ] ->>> allocarr.mod.b # b is Fortran-contiguous -array([[ 1., 2., 3.], - [ 4., 5., 6.]],'f') ->>> allocarr.mod.b = [[1,2,3],[4,5,6],[7,8,9]] # reallocate/initialize b ->>> allocarr.mod.foo() - b=[ - 1.000000 2.000000 3.000000 - 4.000000 5.000000 6.000000 - 7.000000 8.000000 9.000000 - ] ->>> allocarr.mod.b = None # deallocate array ->>> allocarr.mod.foo() - b is not allocated diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/array.f b/pythonPackages/numpy/numpy/f2py/docs/usersguide/array.f deleted file mode 100755 index ef20c9c206..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/array.f +++ /dev/null @@ -1,17 +0,0 @@ -C FILE: ARRAY.F - SUBROUTINE FOO(A,N,M) -C -C INCREMENT THE FIRST ROW AND DECREMENT THE FIRST COLUMN OF A -C - INTEGER N,M,I,J - REAL*8 A(N,M) -Cf2py intent(in,out,copy) a -Cf2py integer intent(hide),depend(a) :: n=shape(a,0), m=shape(a,1) - DO J=1,M - A(1,J) = A(1,J) + 1D0 - ENDDO - DO I=1,N - A(I,1) = A(I,1) - 1D0 - ENDDO - END -C END OF FILE ARRAY.F diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/array_session.dat b/pythonPackages/numpy/numpy/f2py/docs/usersguide/array_session.dat deleted file mode 100755 index f649334821..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/array_session.dat +++ /dev/null @@ -1,65 +0,0 @@ ->>> import arr ->>> from Numeric import array ->>> print arr.foo.__doc__ -foo - Function signature: - a = foo(a,[overwrite_a]) -Required arguments: - a : input rank-2 array('d') with bounds (n,m) -Optional arguments: - overwrite_a := 0 input int -Return objects: - a : rank-2 array('d') with bounds (n,m) - ->>> a=arr.foo([[1,2,3], -... [4,5,6]]) -copied an array using PyArray_CopyFromObject: size=6, elsize=8 ->>> print a -[[ 1. 3. 4.] - [ 3. 5. 
6.]] ->>> a.iscontiguous(), arr.has_column_major_storage(a) -(0, 1) ->>> b=arr.foo(a) # even if a is proper-contiguous -... # and has proper type, a copy is made -... # forced by intent(copy) attribute -... # to preserve its original contents -... -copied an array using copy_ND_array: size=6, elsize=8 ->>> print a -[[ 1. 3. 4.] - [ 3. 5. 6.]] ->>> print b -[[ 1. 4. 5.] - [ 2. 5. 6.]] ->>> b=arr.foo(a,overwrite_a=1) # a is passed directly to Fortran -... # routine and its contents are discarded -... ->>> print a -[[ 1. 4. 5.] - [ 2. 5. 6.]] ->>> print b -[[ 1. 4. 5.] - [ 2. 5. 6.]] ->>> a is b # a and b are actually the same objects -1 ->>> print arr.foo([1,2,3]) # different rank arrays are allowed -copied an array using PyArray_CopyFromObject: size=3, elsize=8 -[ 1. 1. 2.] ->>> print arr.foo([[[1],[2],[3]]]) -copied an array using PyArray_CopyFromObject: size=3, elsize=8 -[ [[ 1.] - [ 3.] - [ 4.]]] ->>> ->>> # Creating arrays with column major data storage order: -... ->>> s = arr.as_column_major_storage(array([[1,2,3],[4,5,6]])) -copied an array using copy_ND_array: size=6, elsize=4 ->>> arr.has_column_major_storage(s) -1 ->>> print s -[[1 2 3] - [4 5 6]] ->>> s2 = arr.as_column_major_storage(s) ->>> s2 is s # an array with column major storage order - # is returned immediately -1 \ No newline at end of file diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/calculate.f b/pythonPackages/numpy/numpy/f2py/docs/usersguide/calculate.f deleted file mode 100755 index 1cda1c8ddd..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/calculate.f +++ /dev/null @@ -1,14 +0,0 @@ - subroutine calculate(x,n) -cf2py intent(callback) func - external func -c The following lines define the signature of func for F2PY: -cf2py real*8 y -cf2py y = func(y) -c -cf2py intent(in,out,copy) x - integer n,i - real*8 x(n) - do i=1,n - x(i) = func(x(i)) - end do - end diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/calculate_session.dat
b/pythonPackages/numpy/numpy/f2py/docs/usersguide/calculate_session.dat deleted file mode 100755 index 2fe64f5224..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/calculate_session.dat +++ /dev/null @@ -1,6 +0,0 @@ ->>> import foo ->>> foo.calculate(range(5), lambda x: x*x) -array([ 0., 1., 4., 9., 16.]) ->>> import math ->>> foo.calculate(range(5), math.exp) -array([ 1. , 2.71828175, 7.38905621, 20.08553696, 54.59814835]) diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/callback.f b/pythonPackages/numpy/numpy/f2py/docs/usersguide/callback.f deleted file mode 100755 index 6e9bfb920c..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/callback.f +++ /dev/null @@ -1,12 +0,0 @@ -C FILE: CALLBACK.F - SUBROUTINE FOO(FUN,R) - EXTERNAL FUN - INTEGER I - REAL*8 R -Cf2py intent(out) r - R = 0D0 - DO I=-5,5 - R = R + FUN(I) - ENDDO - END -C END OF FILE CALLBACK.F diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/callback2.pyf b/pythonPackages/numpy/numpy/f2py/docs/usersguide/callback2.pyf deleted file mode 100755 index 3d77eed24f..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/callback2.pyf +++ /dev/null @@ -1,19 +0,0 @@ -! 
-*- f90 -*- -python module __user__routines - interface - function fun(i) result (r) - integer :: i - real*8 :: r - end function fun - end interface -end python module __user__routines - -python module callback2 - interface - subroutine foo(f,r) - use __user__routines, f=>fun - external f - real*8 intent(out) :: r - end subroutine foo - end interface -end python module callback2 diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/callback_session.dat b/pythonPackages/numpy/numpy/f2py/docs/usersguide/callback_session.dat deleted file mode 100755 index cd2f260849..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/callback_session.dat +++ /dev/null @@ -1,23 +0,0 @@ ->>> import callback ->>> print callback.foo.__doc__ -foo - Function signature: - r = foo(fun,[fun_extra_args]) -Required arguments: - fun : call-back function -Optional arguments: - fun_extra_args := () input tuple -Return objects: - r : float -Call-back functions: - def fun(i): return r - Required arguments: - i : input int - Return objects: - r : float - ->>> def f(i): return i*i -... 
->>> print callback.foo(f) -110.0 ->>> print callback.foo(lambda i:1) -11.0 diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/common.f b/pythonPackages/numpy/numpy/f2py/docs/usersguide/common.f deleted file mode 100755 index b098ab20cf..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/common.f +++ /dev/null @@ -1,13 +0,0 @@ -C FILE: COMMON.F - SUBROUTINE FOO - INTEGER I,X - REAL A - COMMON /DATA/ I,X(4),A(2,3) - PRINT*, "I=",I - PRINT*, "X=[",X,"]" - PRINT*, "A=[" - PRINT*, "[",A(1,1),",",A(1,2),",",A(1,3),"]" - PRINT*, "[",A(2,1),",",A(2,2),",",A(2,3),"]" - PRINT*, "]" - END -C END OF COMMON.F diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/common_session.dat b/pythonPackages/numpy/numpy/f2py/docs/usersguide/common_session.dat deleted file mode 100755 index 846fdaa076..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/common_session.dat +++ /dev/null @@ -1,27 +0,0 @@ ->>> import common ->>> print common.data.__doc__ -i - 'i'-scalar -x - 'i'-array(4) -a - 'f'-array(2,3) - ->>> common.data.i = 5 ->>> common.data.x[1] = 2 ->>> common.data.a = [[1,2,3],[4,5,6]] ->>> common.foo() - I= 5 - X=[ 0 2 0 0] - A=[ - [ 1., 2., 3.] - [ 4., 5., 6.] - ] ->>> common.data.a[1] = 45 ->>> common.foo() - I= 5 - X=[ 0 2 0 0] - A=[ - [ 1., 2., 3.] - [ 45., 45., 45.] - ] ->>> common.data.a # a is Fortran-contiguous -array([[ 1., 2., 3.], - [ 45., 45., 45.]],'f') diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/compile_session.dat b/pythonPackages/numpy/numpy/f2py/docs/usersguide/compile_session.dat deleted file mode 100755 index 0d84081988..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/compile_session.dat +++ /dev/null @@ -1,11 +0,0 @@ ->>> import f2py2e ->>> fsource = ''' -... subroutine foo -... print*, "Hello world!" -... end -... ''' ->>> f2py2e.compile(fsource,modulename='hello',verbose=0) -0 ->>> import hello ->>> hello.foo() - Hello world! 
diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/default.css b/pythonPackages/numpy/numpy/f2py/docs/usersguide/default.css deleted file mode 100755 index bb7226161d..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/default.css +++ /dev/null @@ -1,180 +0,0 @@ -/* -:Author: David Goodger -:Contact: goodger@users.sourceforge.net -:date: $Date: 2002/12/07 23:59:33 $ -:version: $Revision: 1.2 $ -:copyright: This stylesheet has been placed in the public domain. - -Default cascading style sheet for the HTML output of Docutils. -*/ - -body { - background: #FFFFFF ; - color: #000000 -} - -a.footnote-reference { - font-size: smaller ; - vertical-align: super } - -a.target { - color: blue } - -a.toc-backref { - text-decoration: none ; - color: black } - -dd { - margin-bottom: 0.5em } - -div.abstract { - margin: 2em 5em } - -div.abstract p.topic-title { - font-weight: bold ; - text-align: center } - -div.attention, div.caution, div.danger, div.error, div.hint, -div.important, div.note, div.tip, div.warning { - margin: 2em ; - border: medium outset ; - padding: 1em } - -div.attention p.admonition-title, div.caution p.admonition-title, -div.danger p.admonition-title, div.error p.admonition-title, -div.warning p.admonition-title { - color: red ; - font-weight: bold ; - font-family: sans-serif } - -div.hint p.admonition-title, div.important p.admonition-title, -div.note p.admonition-title, div.tip p.admonition-title { - font-weight: bold ; - font-family: sans-serif } - -div.dedication { - margin: 2em 5em ; - text-align: center ; - font-style: italic } - -div.dedication p.topic-title { - font-weight: bold ; - font-style: normal } - -div.figure { - margin-left: 2em } - -div.footer, div.header { - font-size: smaller } - -div.system-messages { - margin: 5em } - -div.system-messages h1 { - color: red } - -div.system-message { - border: medium outset ; - padding: 1em } - -div.system-message p.system-message-title { - color: red ; - font-weight: bold } - 
-div.topic { - margin: 2em } - -h1.title { - text-align: center } - -h2.subtitle { - text-align: center } - -hr { - width: 75% } - -ol.simple, ul.simple { - margin-bottom: 1em } - -ol.arabic { - list-style: decimal } - -ol.loweralpha { - list-style: lower-alpha } - -ol.upperalpha { - list-style: upper-alpha } - -ol.lowerroman { - list-style: lower-roman } - -ol.upperroman { - list-style: upper-roman } - -p.caption { - font-style: italic } - -p.credits { - font-style: italic ; - font-size: smaller } - -p.first { - margin-top: 0 } - -p.label { - white-space: nowrap } - -p.topic-title { - font-weight: bold } - -pre.literal-block, pre.doctest-block { - margin-left: 2em ; - margin-right: 2em ; - background-color: #ee9e9e } - -span.classifier { - font-family: sans-serif ; - font-style: oblique } - -span.classifier-delimiter { - font-family: sans-serif ; - font-weight: bold } - -span.field-argument { - font-style: italic } - -span.interpreted { - font-family: sans-serif } - -span.option-argument { - font-style: italic } - -span.problematic { - color: red } - -table { - margin-top: 0.5em ; - margin-bottom: 0.5em } - -table.citation { - border-left: solid thin gray ; - padding-left: 0.5ex } - -table.docinfo { - margin: 2em 4em } - -table.footnote { - border-left: solid thin black ; - padding-left: 0.5ex } - -td, th { - padding-left: 0.5em ; - padding-right: 0.5em ; - vertical-align: baseline } - -td.docinfo-name { - font-weight: bold ; - text-align: right } - -td.field-name { - font-weight: bold } diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/docutils.conf b/pythonPackages/numpy/numpy/f2py/docs/usersguide/docutils.conf deleted file mode 100755 index b772fd1376..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/docutils.conf +++ /dev/null @@ -1,16 +0,0 @@ -[general] - -# These entries affect all processing: -#source-link: 1 -datestamp: %Y-%m-%d %H:%M UTC -generator: 1 - -# These entries affect HTML output: -#stylesheet-path: f2py_style.css 
-output-encoding: latin-1 - -# These entries affect reStructuredText-style PEPs: -#pep-template: pep-html-template -#pep-stylesheet-path: stylesheets/pep.css -#python-home: http://www.python.org -#no-random: 1 diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/extcallback.f b/pythonPackages/numpy/numpy/f2py/docs/usersguide/extcallback.f deleted file mode 100755 index 9a800628e0..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/extcallback.f +++ /dev/null @@ -1,14 +0,0 @@ - subroutine f1() - print *, "in f1, calling f2 twice.." - call f2() - call f2() - return - end - - subroutine f2() -cf2py intent(callback, hide) fpy - external fpy - print *, "in f2, calling f2py.." - call fpy() - return - end diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/extcallback_session.dat b/pythonPackages/numpy/numpy/f2py/docs/usersguide/extcallback_session.dat deleted file mode 100755 index c22935ea0f..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/extcallback_session.dat +++ /dev/null @@ -1,19 +0,0 @@ ->>> import pfromf ->>> pfromf.f2() -Traceback (most recent call last): - File "", line 1, in ? -pfromf.error: Callback fpy not defined (as an argument or module pfromf attribute). - ->>> def f(): print "python f" -... ->>> pfromf.fpy = f ->>> pfromf.f2() - in f2, calling f2py.. -python f ->>> pfromf.f1() - in f1, calling f2 twice.. - in f2, calling f2py.. -python f - in f2, calling f2py.. 
-python f ->>> \ No newline at end of file diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/fib1.f b/pythonPackages/numpy/numpy/f2py/docs/usersguide/fib1.f deleted file mode 100755 index cfbb1eea0d..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/fib1.f +++ /dev/null @@ -1,18 +0,0 @@ -C FILE: FIB1.F - SUBROUTINE FIB(A,N) -C -C CALCULATE FIRST N FIBONACCI NUMBERS -C - INTEGER N - REAL*8 A(N) - DO I=1,N - IF (I.EQ.1) THEN - A(I) = 0.0D0 - ELSEIF (I.EQ.2) THEN - A(I) = 1.0D0 - ELSE - A(I) = A(I-1) + A(I-2) - ENDIF - ENDDO - END -C END FILE FIB1.F diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/fib1.pyf b/pythonPackages/numpy/numpy/f2py/docs/usersguide/fib1.pyf deleted file mode 100755 index 3d6cc0a548..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/fib1.pyf +++ /dev/null @@ -1,12 +0,0 @@ -! -*- f90 -*- -python module fib2 ! in - interface ! in :fib2 - subroutine fib(a,n) ! in :fib2:fib1.f - real*8 dimension(n) :: a - integer optional,check(len(a)>=n),depend(a) :: n=len(a) - end subroutine fib - end interface -end python module fib2 - -! This file was auto-generated with f2py (version:2.28.198-1366). -! See http://cens.ioc.ee/projects/f2py2e/ diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/fib2.pyf b/pythonPackages/numpy/numpy/f2py/docs/usersguide/fib2.pyf deleted file mode 100755 index 4a5ae29f1e..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/fib2.pyf +++ /dev/null @@ -1,9 +0,0 @@ -! 
-*- f90 -*- -python module fib2 - interface - subroutine fib(a,n) - real*8 dimension(n),intent(out),depend(n) :: a - integer intent(in) :: n - end subroutine fib - end interface -end python module fib2 diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/fib3.f b/pythonPackages/numpy/numpy/f2py/docs/usersguide/fib3.f deleted file mode 100755 index 08b050cd26..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/fib3.f +++ /dev/null @@ -1,21 +0,0 @@ -C FILE: FIB3.F - SUBROUTINE FIB(A,N) -C -C CALCULATE FIRST N FIBONACCI NUMBERS -C - INTEGER N - REAL*8 A(N) -Cf2py intent(in) n -Cf2py intent(out) a -Cf2py depend(n) a - DO I=1,N - IF (I.EQ.1) THEN - A(I) = 0.0D0 - ELSEIF (I.EQ.2) THEN - A(I) = 1.0D0 - ELSE - A(I) = A(I-1) + A(I-2) - ENDIF - ENDDO - END -C END FILE FIB3.F diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/ftype.f b/pythonPackages/numpy/numpy/f2py/docs/usersguide/ftype.f deleted file mode 100755 index cabbb9e2d5..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/ftype.f +++ /dev/null @@ -1,9 +0,0 @@ -C FILE: FTYPE.F - SUBROUTINE FOO(N) - INTEGER N -Cf2py integer optional,intent(in) :: n = 13 - REAL A,X - COMMON /DATA/ A,X(3) - PRINT*, "IN FOO: N=",N," A=",A," X=[",X(1),X(2),X(3),"]" - END -C END OF FTYPE.F diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/ftype_session.dat b/pythonPackages/numpy/numpy/f2py/docs/usersguide/ftype_session.dat deleted file mode 100755 index 01f9febaf4..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/ftype_session.dat +++ /dev/null @@ -1,21 +0,0 @@ ->>> import ftype ->>> print ftype.__doc__ -This module 'ftype' is auto-generated with f2py (version:2.28.198-1366). -Functions: - foo(n=13) -COMMON blocks: - /data/ a,x(3) -. ->>> type(ftype.foo),type(ftype.data) -(, ) ->>> ftype.foo() - IN FOO: N= 13 A= 0. X=[ 0. 0. 0.] ->>> ftype.data.a = 3 ->>> ftype.data.x = [1,2,3] ->>> ftype.foo() - IN FOO: N= 13 A= 3. X=[ 1. 2. 3.] 
->>> ftype.data.x[1] = 45 ->>> ftype.foo(24) - IN FOO: N= 24 A= 3. X=[ 1. 45. 3.] ->>> ftype.data.x -array([ 1., 45., 3.],'f') diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/index.txt b/pythonPackages/numpy/numpy/f2py/docs/usersguide/index.txt deleted file mode 100755 index 5a8d12c68e..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/index.txt +++ /dev/null @@ -1,1772 +0,0 @@ -.. -*- rest -*- - -////////////////////////////////////////////////////////////////////// - F2PY Users Guide and Reference Manual -////////////////////////////////////////////////////////////////////// - -:Author: Pearu Peterson -:Contact: pearu@cens.ioc.ee -:Web site: http://cens.ioc.ee/projects/f2py2e/ -:Date: $Date: 2005/04/02 10:03:26 $ -:Revision: $Revision: 1.27 $ - - -.. section-numbering:: - -.. Contents:: - - -================ - Introduction -================ - -The purpose of the F2PY_ --*Fortran to Python interface generator*-- -project is to provide a connection between the Python and Fortran -languages. F2PY is a Python_ package (with a command line tool -``f2py`` and a module ``f2py2e``) that facilitates creating/building -Python C/API extension modules that make it possible - -* to call Fortran 77/90/95 external subroutines and Fortran 90/95 - module subroutines as well as C functions; -* to access Fortran 77 ``COMMON`` blocks and Fortran 90/95 module data, - including allocatable arrays - -from Python. See the F2PY_ web site for more information and installation -instructions. - -====================================== - Three ways to wrap - getting started -====================================== - -Wrapping Fortran or C functions to Python using F2PY consists of the -following steps: - -* Creating the so-called signature file that contains descriptions of - wrappers to Fortran or C functions, also called the signatures of the - functions.
In the case of Fortran routines, F2PY can create an initial - signature file by scanning Fortran source codes and - catching all relevant information needed to create wrapper - functions. - -* Optionally, F2PY-created signature files can be edited to optimize - wrapper functions, make them "smarter" and more "Pythonic". - -* F2PY reads a signature file and writes a Python C/API module containing - Fortran/C/Python bindings. - -* F2PY compiles all sources and builds an extension module containing - the wrappers. In building extension modules, F2PY uses - ``numpy_distutils``, which supports a number of Fortran 77/90/95 - compilers, including GNU, Intel, - Sun Forte, SGI MIPSpro, Absoft, NAG, Compaq etc. compilers. - -Depending on the particular situation, these steps can be carried out -either in just one command or step-by-step; some steps can be -omitted or combined with others. - -Below I'll describe three typical approaches to using F2PY. -The following `example Fortran 77 code`__ will be used for -illustration: - -.. include:: fib1.f - :literal: - -__ fib1.f - -The quick way -============== - -The quickest way to wrap the Fortran subroutine ``FIB`` to Python is -to run - -:: - - f2py -c fib1.f -m fib1 - -This command builds (see the ``-c`` flag; execute ``f2py`` without -arguments to see the explanation of command line options) an extension -module ``fib1.so`` (see the ``-m`` flag) in the current directory. Now, in -Python the Fortran subroutine ``FIB`` is accessible via ``fib1.fib``:: - - >>> import Numeric - >>> import fib1 - >>> print fib1.fib.__doc__ - fib - Function signature: - fib(a,[n]) - Required arguments: - a : input rank-1 array('d') with bounds (n) - Optional arguments: - n := len(a) input int - - >>> a=Numeric.zeros(8,'d') - >>> fib1.fib(a) - >>> print a - [ 0. 1. 1. 2. 3. 5. 8. 13.] - -.. topic:: Comments - - * Note that F2PY found that the second argument ``n`` is the - dimension of the first array argument ``a``.
Since by default all - arguments are input-only arguments, F2PY concludes that ``n`` can - be optional with the default value ``len(a)``. - - * One can use different values for optional ``n``:: - - >>> a1=Numeric.zeros(8,'d') - >>> fib1.fib(a1,6) - >>> print a1 - [ 0. 1. 1. 2. 3. 5. 0. 0.] - - but an exception is raised when it is incompatible with the input - array ``a``:: - - >>> fib1.fib(a,10) - fib:n=10 - Traceback (most recent call last): - File "", line 1, in ? - fib.error: (len(a)>=n) failed for 1st keyword n - >>> - - This demonstrates one of the useful features of F2PY: it implements - basic compatibility checks between related - arguments in order to avoid any unexpected crashes. - - * When a Numeric array that is Fortran contiguous and has a typecode - corresponding to the presumed Fortran type is used as an input array - argument, then its C pointer is directly passed to Fortran. - - Otherwise F2PY makes a contiguous copy (with a proper typecode) of - the input array and passes the C pointer of the copy to the Fortran - subroutine. As a result, any possible changes to the (copy of) - input array have no effect on the original argument, as - demonstrated below:: - - >>> a=Numeric.ones(8,'i') - >>> fib1.fib(a) - >>> print a - [1 1 1 1 1 1 1 1] - - Clearly, this is not the expected behaviour. The fact that the - above example worked with ``typecode='d'`` is considered - accidental. - - F2PY provides an ``intent(inplace)`` attribute that modifies - the attributes of an input array so that any changes made by the - Fortran routine also take effect in the input argument. For example, - if one specifies ``intent(inplace) a`` (see below, how), then - the example above would read: - - >>> a=Numeric.ones(8,'i') - >>> fib1.fib(a) - >>> print a - [ 0. 1. 1. 2. 3. 5. 8. 13.] - - However, the recommended way to get changes made by a Fortran - subroutine back to Python is to use the ``intent(out)`` attribute. It - is more efficient and a cleaner solution.
- - * The usage of ``fib1.fib`` in Python is very similar to using - ``FIB`` in Fortran. However, using *in situ* output arguments in - Python indicates poor style, as there is no safety mechanism - in Python with respect to wrong argument types. When using Fortran - or C, compilers naturally discover any type mismatches at - compile time, but in Python the types must be checked at - runtime. So, using *in situ* output arguments in Python may cause - difficult-to-find bugs, not to mention that the codes will be less - readable when all required type checks are implemented. - - Though the demonstrated way of wrapping Fortran routines to Python - is very straightforward, it has several drawbacks (see the comments - above). These drawbacks are due to the fact that there is no way - that F2PY can determine the actual intention of a given - argument: is it an input or output argument, or both, or - something else? So, F2PY conservatively assumes that all arguments - are input arguments by default. - - However, there are ways (see below) to "teach" F2PY about the - true intentions (among other things) of function arguments; and then - F2PY is able to generate more Pythonic (more explicit, easier to - use, and less error prone) wrappers to Fortran functions. - -The smart way -============== - -Let's apply the steps of wrapping Fortran functions to Python one by -one. - -* First, we create a signature file from ``fib1.f`` by running - - :: - - f2py fib1.f -m fib2 -h fib1.pyf - - The signature file is saved to ``fib1.pyf`` (see the ``-h`` flag) and - its contents are shown below. - - .. include:: fib1.pyf - :literal: - -* Next, we'll teach F2PY that the argument ``n`` is an input argument - (use the ``intent(in)`` attribute) and that the result, i.e. the - contents of ``a`` after calling the Fortran function ``FIB``, should be - returned to Python (use the ``intent(out)`` attribute).
In addition, an - array ``a`` should be created dynamically using the size given by - the input argument ``n`` (use the ``depend(n)`` attribute to indicate - the dependence relation). - - The content of a modified version of ``fib1.pyf`` (saved as - ``fib2.pyf``) is as follows: - - .. include:: fib2.pyf - :literal: - -* And finally, we build the extension module by running - - :: - - f2py -c fib2.pyf fib1.f - -In Python:: - - >>> import fib2 - >>> print fib2.fib.__doc__ - fib - Function signature: - a = fib(n) - Required arguments: - n : input int - Return objects: - a : rank-1 array('d') with bounds (n) - - >>> print fib2.fib(8) - [ 0. 1. 1. 2. 3. 5. 8. 13.] - -.. topic:: Comments - - * Clearly, the signature of ``fib2.fib`` now corresponds to the - intention of the Fortran subroutine ``FIB`` more closely: given the - number ``n``, ``fib2.fib`` returns the first ``n`` Fibonacci numbers - as a Numeric array. Also, the new Python signature ``fib2.fib`` - rules out any surprises that we experienced with ``fib1.fib``. - - * Note that by default, using ``intent(out)`` alone also implies - ``intent(hide)``. An argument that has the ``intent(hide)`` attribute - will not be listed in the argument list of a wrapper - function. - -The quick and smart way -======================== - -The "smart way" of wrapping Fortran functions, as explained above, is -suitable for wrapping (e.g. third party) Fortran codes for which -modifications to their source codes are neither desirable nor even -possible. - -However, if editing Fortran codes is acceptable, then the generation -of an intermediate signature file can be skipped in most -cases. Namely, F2PY-specific attributes can be inserted directly into -Fortran source codes using the so-called F2PY directive. An F2PY -directive defines special comment lines (starting with ``Cf2py``, for -example) which are ignored by Fortran compilers but F2PY interprets -them as normal lines.
- -Here is a `modified version of the example Fortran code`__, saved -as ``fib3.f``: - -.. include:: fib3.f - :literal: - -__ fib3.f - -Building the extension module can now be carried out in one command:: - - f2py -c -m fib3 fib3.f - -Notice that the resulting wrapper to ``FIB`` is as "smart" as in the -previous case:: - - >>> import fib3 - >>> print fib3.fib.__doc__ - fib - Function signature: - a = fib(n) - Required arguments: - n : input int - Return objects: - a : rank-1 array('d') with bounds (n) - - >>> print fib3.fib(8) - [ 0. 1. 1. 2. 3. 5. 8. 13.] - - -================== - Signature file -================== - -The syntax specification for signature files (.pyf files) is borrowed -from the Fortran 90/95 language specification. Almost all Fortran -90/95 standard constructs are understood, both in free and fixed -format (recall that Fortran 77 is a subset of Fortran 90/95). F2PY -also introduces some extensions to the Fortran 90/95 language -specification that help in designing the Fortran-to-Python interface and make it -more "Pythonic". - -Signature files may contain arbitrary Fortran code (so that Fortran -codes can be considered as signature files). F2PY silently ignores -Fortran constructs that are irrelevant for creating the interface. -However, this also includes syntax errors. So, be careful not to make -any ;-). - -In general, the contents of signature files are case-sensitive. When -scanning Fortran codes and writing a signature file, F2PY lowers all -cases automatically except in multi-line blocks or when the ``--no-lower`` -option is used. - -The syntax of signature files is outlined below. - -Python module block -===================== - -A signature file may contain one (recommended) or more ``python -module`` blocks. A ``python module`` block describes the contents of -a Python/C extension module ``module.c`` that F2PY -generates.
- -Exception: if ```` contains a substring ``__user__``, then -the corresponding ``python module`` block describes the signatures of -so-called call-back functions (see `Call-back arguments`_). - -A ``python module`` block has the following structure:: - - python module - []... - [ - interface - - - - end [interface] - ]... - [ - interface - module - [] - [] - end [module []] - end [interface] - ]... - end [python module []] - -Here brackets ``[]`` indicate an optional part, dots ``...`` indicate -one or more of a previous part. So, ``[]...`` reads zero or more of a -previous part. - - -Fortran/C routine signatures -============================= - -The signature of a Fortran routine has the following structure:: - - [] function | subroutine \ - [ ( [] ) ] [ result ( ) ] - [] - [] - [] - [] - [] - end [ function | subroutine [] ] - -From a Fortran routine signature F2PY generates a Python/C extension -function that has the following signature:: - - def ([,]): - ... - return - -The signature of a Fortran block data has the following structure:: - - block data [ ] - [] - [] - [] - [] - [] - end [ block data [] ] - -Type declarations -------------------- - - The definition of the ```` part - is - - :: - - [ [] :: ] - - where - - :: - - := byte | character [] - | complex [] | real [] - | double complex | double precision - | integer [] | logical [] - - := * - | ( [len=] [ , [kind=] ] ) - | ( kind= [ , len= ] ) - := * | ( [kind=] ) - - := [ [ * ] [ ( ) ] - | [ ( ) ] * ] - | [ / / | = ] \ - [ , ] - - and - - + ```` is a comma separated list of attributes_; - - + ```` is a comma separated list of dimension bounds; - - + ```` is a `C expression`__. - - + ```` may be a negative integer for ``integer`` type - specifications. In such cases ``integer*`` represents - unsigned C integers. - -__ `C expressions`_ - - If an argument has no ````, its type is - determined by applying ``implicit`` rules to its name.
-
-
-Statements
-------------
-
-Attribute statements:
-
-  The ``<argument/variable attribute statement>`` is an
-  ``<argument/variable type declaration>`` without ``<typespec>``.
-  In addition, in an attribute statement one cannot use other
-  attributes, and ``<entitydecl>`` can be only a list of names.
-
-Use statements:
-
-  The definition of the ``<use statement>`` part is
-
-  ::
-
-    use <modulename> [ , <rename_list> | , ONLY : <only_list> ]
-
-  where
-
-  ::
-
-    <rename_list> := <local_name> => <use_name> [ , <rename_list> ]
-
-  Currently F2PY uses the ``use`` statement only for linking call-back
-  modules and ``external`` arguments (call-back functions), see
-  `Call-back arguments`_.
-
-Common block statements:
-
-  The definition of the ``<common block statement>`` part is
-
-  ::
-
-    common / <common name> / <shortentitydecl>
-
-  where
-
-  ::
-
-    <shortentitydecl> := <name> [ ( <arrayspec> ) ] [ , <shortentitydecl> ]
-
-  One ``python module`` block should not contain two or more
-  ``common`` blocks with the same name. Otherwise, the latter ones are
-  ignored. The types of variables in ``<shortentitydecl>`` are defined
-  using ``<argument type declarations>``. Note that the corresponding
-  ``<argument type declarations>`` may contain array specifications;
-  then you don't need to specify these in ``<shortentitydecl>``.
-
-Other statements:
-
-  The ``<other statement>`` part refers to any other Fortran language
-  constructs that are not described above. F2PY ignores most of them
-  except
-
-  + ``call`` statements and function calls of ``external`` arguments
-    (`more details`__?);
-
-__ external_
-
-  + ``include`` statements
-
-    ::
-
-      include '<filename>'
-      include "<filename>"
-
-    If a file ``<filename>`` does not exist, the ``include``
-    statement is ignored. Otherwise, the file ``<filename>`` is
-    included into the signature file. ``include`` statements can be used
-    in any part of a signature file, also outside the Fortran/C
-    routine signature blocks.
-
-  + ``implicit`` statements
-
-    ::
-
-      implicit none
-      implicit <list of implicit maps>
-
-    where
-
-    ::
-
-      <implicit map> := <typespec> ( <list of letters or range of letters> )
-
-    Implicit rules are used to determine the type specification of
-    a variable (from the first letter of its name) if the variable
-    is not defined using ``<argument/variable type declarations>``.
The default
-implicit rule is given by
-
-    ::
-
-      implicit real (a-h,o-z,$_), integer (i-m)
-
-  + ``entry`` statements
-
-    ::
-
-      entry <entry name> [([<arguments>])]
-
-    F2PY generates wrappers for all entry names using the signature
-    of the routine block.
-
-    Tip: the ``entry`` statement can be used to describe the signature
-    of an arbitrary routine, allowing F2PY to generate a number of
-    wrappers from only one routine block signature. There are a few
-    restrictions while doing this: ``fortranname`` cannot be used,
-    ``callstatement`` and ``callprotoargument`` can be used only if
-    they are valid for all entry routines, etc.
-
-  In addition, F2PY introduces the following statements:
-
-  + ``threadsafe``
-    Use a ``Py_BEGIN_ALLOW_THREADS .. Py_END_ALLOW_THREADS`` block
-    around the call to the Fortran/C function.
-
-  + ``callstatement <C-expr|multi-line block>``
-    Replace the F2PY generated call statement to the Fortran/C function
-    with ``<C-expr|multi-line block>``. The wrapped Fortran/C function
-    is available as ``(*f2py_func)``. To raise an exception, set
-    ``f2py_success = 0`` in ``<C-expr|multi-line block>``.
-
-  + ``callprotoargument <C-typespecs>``
-    When the ``callstatement`` statement is used, F2PY may not
-    generate proper prototypes for Fortran/C functions (because
-    ``<C-expr>`` may contain arbitrary function calls and F2PY has no way
-    to determine what the proper prototype should be). With this
-    statement you can explicitly specify the arguments of the
-    corresponding prototype::
-
-      extern FUNC_F(<fortranname>,<FORTRANNAME>)(<callprotoargument>);
-
-  + ``fortranname [<actual Fortran/C routine name>]``
-    You can use an arbitrary ``<routine name>`` for a given Fortran/C
-    function. Then you have to specify
-    ``<actual Fortran/C routine name>`` with this statement.
-
-    If the ``fortranname`` statement is used without
-    ``<actual Fortran/C routine name>`` then a dummy wrapper is
-    generated.
-
-  + ``usercode <multi-line block>``
-    When used inside a ``python module`` block, the given C code
-    will be inserted into the generated C/API source just before the
-    wrapper function definitions. Here you can define arbitrary
-    C functions to be used in the initialization of optional arguments,
-    for example.
If ``usercode`` is used twice inside a ``python
-    module`` block then the second multi-line block is inserted
-    after the definition of the external routines.
-
-    When used inside a ``<routine signature>``, the given C code will
-    be inserted into the corresponding wrapper function just after
-    declaring variables but before any C statements. So, a ``usercode``
-    follow-up can contain both declarations and C statements.
-
-    When used inside the first ``interface`` block, the given C
-    code will be inserted at the end of the initialization
-    function of the extension module. Here you can modify the extension
-    module's dictionary, for example, to define additional variables.
-
-  + ``pymethoddef <multiline block>``
-    The multi-line block will be inserted into the definition of the
-    module methods ``PyMethodDef``-array. It must be a
-    comma-separated list of C arrays (see the `Extending and Embedding`__
-    Python documentation for details).
-    The ``pymethoddef`` statement can be used only inside a
-    ``python module`` block.
-
-  __ http://www.python.org/doc/current/ext/ext.html
-
-Attributes
-------------
-
-The following attributes are used by F2PY:
-
-``optional``
-  The corresponding argument is moved to the end of the ``<optional
-  arguments>`` list. A default value for an optional argument can be
-  specified via ``<init_expr>``, see the ``entitydecl`` definition. Note
-  that the default value must be given as a valid C expression.
-
-  Note that whenever ``<init_expr>`` is used, the ``optional`` attribute
-  is set automatically by F2PY.
-
-  For an optional array argument, all its dimensions must be bounded.
-
-``required``
-  The corresponding argument is considered a required one. This is
-  the default. You need to specify ``required`` only if there is a need
-  to disable the automatic ``optional`` setting when ``<init_expr>`` is
-  used.
-
-  If the Python ``None`` object is used as a required argument, the
-  argument is treated as optional. That is, in the case of an array
-  argument, the memory is allocated. And if ``<init_expr>`` is given,
-  the corresponding initialization is carried out.
-
-``dimension(<arrayspec>)``
-  The corresponding variable is considered an array with the given
-  dimensions in ``<arrayspec>``.
-
-``intent(<intentspec>)``
-  This specifies the "intention" of the corresponding
-  argument. ``<intentspec>`` is a comma separated list of the
-  following keys:
-
-  + ``in``
-    The argument is considered an input-only argument. It means
-    that the value of the argument is passed to the Fortran/C function
-    and that the function is expected not to change the value of the
-    argument.
-
-  + ``inout``
-    The argument is considered an input/output or *in situ*
-    output argument. ``intent(inout)`` arguments can only be
-    "contiguous" Numeric arrays with proper type and size. Here
-    "contiguous" can be either in the Fortran or the C sense. The latter
-    coincides with the contiguity concept used in Numeric and is
-    effective only if ``intent(c)`` is used. Fortran contiguity
-    is assumed by default.
-
-    Using ``intent(inout)`` is generally not recommended; use
-    ``intent(in,out)`` instead. See also the ``intent(inplace)`` attribute.
-
-  + ``inplace``
-    The argument is considered an input/output or *in situ*
-    output argument. ``intent(inplace)`` arguments must be
-    Numeric arrays with proper size. If the type of an array is
-    not "proper" or the array is non-contiguous then the array
-    will be changed in place to fix the type and make it contiguous.
-
-    Using ``intent(inplace)`` is generally not recommended either.
-    For example, when slices have been taken from an
-    ``intent(inplace)`` argument then, after the in-place changes, the
-    slices' data pointers may point to an unallocated memory area.
-
-  + ``out``
-    The argument is considered a return variable. It is appended
-    to the ``<returned variables>`` list. Using ``intent(out)``
-    sets ``intent(hide)`` automatically, unless
-    ``intent(in)`` or ``intent(inout)`` is also used.
-
-    By default, returned multidimensional arrays are
-    Fortran-contiguous. If ``intent(c)`` is used, then returned
-    multi-dimensional arrays are C-contiguous.
-
-  + ``hide``
-    The argument is removed from the list of required or optional
-    arguments. Typically ``intent(hide)`` is used with ``intent(out)``
-    or when ``<init_expr>`` completely determines the value of the
-    argument, as in the following example::
-
-      integer intent(hide),depend(a) :: n = len(a)
-      real intent(in),dimension(n) :: a
-
-  + ``c``
-    The argument is treated as a C scalar or C array argument. In
-    the case of a scalar argument, its value is passed to the C function
-    as a C scalar argument (recall that Fortran scalar arguments are
-    actually C pointer arguments). In the case of an array
-    argument, the wrapper function is assumed to treat
-    multi-dimensional arrays as C-contiguous arrays.
-
-    There is no need to use ``intent(c)`` for one-dimensional
-    arrays, no matter whether the wrapped function is a Fortran or
-    a C function. This is because the concepts of Fortran and
-    C contiguity overlap in one-dimensional cases.
-
-    If ``intent(c)`` is used as a statement but without an entity
-    declaration list, then F2PY adds the ``intent(c)`` attribute to all
-    arguments.
-
-    Also, when wrapping C functions, one must use the ``intent(c)``
-    attribute for ``<routine name>`` in order to disable the Fortran
-    specific ``F_FUNC(..,..)`` macros.
-
-  + ``cache``
-    The argument is treated as a chunk of memory. No Fortran or C
-    contiguity checks are carried out. Using ``intent(cache)``
-    makes sense only for array arguments, also in connection with the
-    ``intent(hide)`` or ``optional`` attributes.
-
-  + ``copy``
-    Ensure that the original contents of an ``intent(in)`` argument are
-    preserved. Typically used in connection with the ``intent(in,out)``
-    attribute. F2PY creates an optional argument
-    ``overwrite_<argument name>`` with the default value ``0``.
-
-  + ``overwrite``
-    The original contents of an ``intent(in)`` argument may be
-    altered by the Fortran/C function. F2PY creates an optional
-    argument ``overwrite_<argument name>`` with the default value
-    ``1``.
-
-  + ``out=<new name>``
-    Replace the return name with ``<new name>`` in the ``__doc__``
-    string of the wrapper function.
-
-  + ``callback``
-    Construct an external function suitable for calling a Python function
-    from Fortran. ``intent(callback)`` must be specified before the
-    corresponding ``external`` statement. If the argument is not in the
-    argument list then it will be added to the Python wrapper, but only
-    for initializing the external function.
-
-    Use ``intent(callback)`` in situations where a Fortran/C code
-    assumes that the user implements a function with a given prototype
-    and links it to an executable. Don't use ``intent(callback)``
-    if the function appears in the argument list of a Fortran routine.
-
-    With ``intent(hide)`` or ``optional`` attributes specified, and
-    when using a wrapper function without specifying the callback
-    argument in the argument list, the call-back function is looked up
-    in the namespace of the F2PY generated extension module, where it
-    can be set as a module attribute by the user.
-
-  + ``aux``
-    Define an auxiliary C variable in the F2PY generated wrapper
-    function. Useful for saving parameter values so that they can be
-    accessed in the initialization expressions of other variables. Note
-    that ``intent(aux)`` silently implies ``intent(c)``.
-
-  The following rules apply:
-
-  + If no ``intent(in | inout | out | hide)`` is specified,
-    ``intent(in)`` is assumed.
-  + ``intent(in,inout)`` is ``intent(in)``.
-  + ``intent(in,hide)`` or ``intent(inout,hide)`` is
-    ``intent(hide)``.
-  + ``intent(out)`` is ``intent(out,hide)`` unless ``intent(in)`` or
-    ``intent(inout)`` is specified.
-  + If ``intent(copy)`` or ``intent(overwrite)`` is used, then an
-    additional optional argument is introduced with the name
-    ``overwrite_<argument name>`` and a default value of 0 or 1,
-    respectively.
-  + ``intent(inout,inplace)`` is ``intent(inplace)``.
-  + ``intent(in,inplace)`` is ``intent(inplace)``.
-  + ``intent(hide)`` disables ``optional`` and ``required``.
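The intent-resolution rules above can be sketched as a small Python helper. This is an illustrative model of the rules as stated, not F2PY's actual implementation:

```python
def normalize_intent(keys):
    """Apply the intent-resolution rules to a collection of intent
    keys and return the resulting set. Sketch only -- not F2PY code."""
    keys = set(keys)
    if not keys & {"in", "inout", "out", "hide"}:
        keys.add("in")                     # default is intent(in)
    if {"in", "inout"} <= keys:
        keys.discard("inout")              # intent(in,inout) is intent(in)
    if "hide" in keys:
        # intent(in,hide) / intent(inout,hide) is intent(hide);
        # hide also disables optional and required
        keys -= {"in", "inout", "optional", "required"}
    if "out" in keys and not keys & {"in", "inout"}:
        keys.add("hide")                   # intent(out) is intent(out,hide)
    if "inplace" in keys:
        keys -= {"in", "inout"}            # intent(in|inout,inplace) -> inplace
    return keys

print(sorted(normalize_intent([])))        # ['in']
print(sorted(normalize_intent(["out"])))   # ['hide', 'out']
```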
-
-``check([<C-booleanexpr>])``
-  Perform a consistency check on the arguments by evaluating
-  ``<C-booleanexpr>``; if ``<C-booleanexpr>`` returns 0, an exception
-  is raised.
-
-  If ``check(..)`` is not used then F2PY automatically generates a few
-  standard checks (e.g. in the case of an array argument, a check for
-  the proper shape and size). Use ``check()`` to disable the checks
-  generated by F2PY.
-
-``depend([<names>])``
-  This declares that the corresponding argument depends on the values
-  of the variables in the list ``<names>``. For example, ``<init_expr>``
-  may use the values of other arguments. Using the information given by
-  the ``depend(..)`` attributes, F2PY ensures that arguments are
-  initialized in a proper order. If the ``depend(..)`` attribute is not
-  used then F2PY determines the dependence relations automatically. Use
-  ``depend()`` to disable the dependence relations generated by F2PY.
-
-  When you edit dependence relations that were initially generated by
-  F2PY, be careful not to break the dependence relations of other
-  relevant variables. Another thing to watch out for is cyclic
-  dependencies. F2PY is able to detect cyclic dependencies
-  when constructing wrappers and it complains if any are found.
-
-``allocatable``
-  The corresponding variable is a Fortran 90 allocatable array defined
-  as Fortran 90 module data.
-
-.. _external:
-
-``external``
-  The corresponding argument is a function provided by the user. The
-  signature of this so-called call-back function can be defined
-
-  - in a ``__user__`` module block,
-
-  - or by a demonstrative (or real, if the signature file is real Fortran
-    code) call in the ``<other statements>`` block.
-
-  For example, F2PY generates from
-
-  ::
-
-    external cb_sub, cb_fun
-    integer n
-    real a(n),r
-    call cb_sub(a,n)
-    r = cb_fun(4)
-
-  the following call-back signatures::
-
-    subroutine cb_sub(a,n)
-        real dimension(n) :: a
-        integer optional,check(len(a)>=n),depend(a) :: n=len(a)
-    end subroutine cb_sub
-    function cb_fun(e_4_e) result (r)
-        integer :: e_4_e
-        real :: r
-    end function cb_fun
-
-  The corresponding user-provided Python functions are then::
-
-    def cb_sub(a,[n]):
-        ...
-        return
-    def cb_fun(e_4_e):
-        ...
-        return r
-
-  See also the ``intent(callback)`` attribute.
-
-``parameter``
-  The corresponding variable is a parameter and it must have a fixed
-  value. F2PY replaces all parameter occurrences by their
-  corresponding values.
-
-Extensions
-============
-
-F2PY directives
------------------
-
-The so-called F2PY directives allow using F2PY signature file
-constructs also in Fortran 77/90 source codes. With this feature you
-can skip (almost) completely the intermediate signature file generation
-and apply F2PY directly to Fortran source codes.
-
-An F2PY directive has the following form::
-
-  <comment char>f2py ...
-
-where the allowed comment characters for fixed and free format Fortran
-codes are ``cC*!#`` and ``!``, respectively. Everything that follows
-``<comment char>f2py`` is ignored by a compiler but read by F2PY as a
-normal Fortran (non-comment) line:
-
-  When F2PY finds a line with an F2PY directive, the directive is first
-  replaced by 5 spaces and then the line is reread.
-
-For fixed format Fortran codes, ``<comment char>`` must be at the
-first column of a file, of course. For free format Fortran codes,
-F2PY directives can appear anywhere in a file.
-
-C expressions
---------------
-
-C expressions are used in the following parts of signature files:
-
-* ``<init_expr>`` for variable initialization;
-* ``<C-booleanexpr>`` of the ``check`` attribute;
-* ``<arrayspec>`` of the ``dimension`` attribute;
-* ``callstatement`` statement; here also a C multi-line block can be used.
-
-A C expression may contain:
-
-* standard C constructs;
-* functions from ``math.h`` and ``Python.h``;
-* variables from the argument list, presumably initialized before
-  according to the given dependence relations;
-* the following CPP macros:
-
-  ``rank(<name>)``
-    Returns the rank of an array ``<name>``.
-  ``shape(<name>,<n>)``
-    Returns the ``<n>``-th dimension of an array ``<name>``.
-  ``len(<name>)``
-    Returns the length of an array ``<name>``.
-  ``size(<name>)``
-    Returns the size of an array ``<name>``.
-  ``slen(<name>)``
-    Returns the length of a string ``<name>``.
-
-For initializing an array ``<array name>``, F2PY generates a loop over
-all indices and dimensions that executes the following
-pseudo-statement::
-
-  <array name>(_i[0],_i[1],...) = <init_expr>;
-
-where ``_i[<i>]`` refers to the ``<i>``-th index value that runs
-from ``0`` to ``shape(<array name>,<i>)-1``.
-
-For example, a function ``myrange(n)`` generated from the following
-signature
-
-::
-
-  subroutine myrange(a,n)
-    fortranname ! myrange is a dummy wrapper
-    integer intent(in) :: n
-    real*8 intent(c,out),dimension(n),depend(n) :: a = _i[0]
-  end subroutine myrange
-
-is equivalent to ``Numeric.arange(n,typecode='d')``.
-
-.. topic:: Warning!
-
-  F2PY may lower cases also in C expressions when scanning Fortran codes
-  (see the ``--[no]-lower`` option).
-
-Multi-line blocks
-------------------
-
-A multi-line block starts with ``'''`` (triple single-quotes) and ends
-with ``'''`` in some *strictly* subsequent line. Multi-line blocks can
-be used only within .pyf files. The contents of a multi-line block can
-be arbitrary (except that it cannot contain ``'''``) and no
-transformations (e.g. lowering of cases) are applied to it.
-
-Currently, multi-line blocks can be used in the following constructs:
-
-+ as a C expression of the ``callstatement`` statement;
-
-+ as a C type specification of the ``callprotoargument`` statement;
-
-+ as a C code block of the ``usercode`` statement;
-
-+ as a list of C arrays of the ``pymethoddef`` statement;
-
-+ as a documentation string.
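For illustration, the initialization loop behind the ``myrange`` example above can be mimicked in Python, using modern ``numpy`` in place of ``Numeric`` (a sketch of the semantics, not the code F2PY emits):

```python
import numpy as np

def myrange(n):
    """Pure-Python model of the wrapper generated from the ``myrange``
    signature: each element a(_i[0]) is initialized with the C
    expression _i[0]. Sketch only -- not F2PY-generated code."""
    a = np.empty(n, dtype=np.float64)   # real*8, dimension(n)
    for i in range(n):                  # loop over all indices
        a[i] = i                        # a(_i[0]) = _i[0]
    return a

print(myrange(4).tolist())   # same values as np.arange(4, dtype='d')
```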
-
-==================================
-Using F2PY bindings in Python
-==================================
-
-All wrappers (to Fortran/C routines, to common blocks, or to Fortran
-90 module data) generated by F2PY are exposed to Python as ``fortran``
-type objects. Routine wrappers are callable ``fortran`` type objects
-while wrappers to Fortran data have attributes referring to data
-objects.
-
-All ``fortran`` type objects have an attribute ``_cpointer`` that
-contains a CObject referring to the C pointer of the corresponding
-Fortran/C function or variable at the C level. Such CObjects can be
-used as callback arguments of F2PY generated functions to bypass the
-Python C/API layer for calling Python functions from Fortran or C,
-when the computational part of such functions is implemented in C or
-Fortran and wrapped with F2PY (or any other tool capable of providing
-the CObject of a function).
-
-.. topic:: Example
-
-  Consider a `Fortran 77 file`__ ``ftype.f``:
-
-  .. include:: ftype.f
-     :literal:
-
-  and build a wrapper using::
-
-    f2py -c ftype.f -m ftype
-
-  __ ftype.f
-
-  In Python:
-
-  .. include:: ftype_session.dat
-     :literal:
-
-
-Scalar arguments
-=================
-
-In general, a scalar argument of an F2PY generated wrapper function can
-be an ordinary Python scalar (integer, float, complex number) as well as
-an arbitrary sequence object (list, tuple, array, string) of
-scalars. In the latter case, the first element of the sequence object
-is passed to the Fortran routine as a scalar argument.
-
-Note that when type-casting is required and there is a possible loss of
-information (e.g. when type-casting a float to an integer or a complex
-number to a float), F2PY does not raise any exception. In complex to
-real type-casting only the real part of a complex number is used.
-
-``intent(inout)`` scalar arguments are assumed to be array objects in
-order for *in situ* changes to be effective. It is recommended to use
-arrays with proper type, but other types also work.
-
-.. topic:: Example
-
-  Consider the following `Fortran 77 code`__:
-
-  .. include:: scalar.f
-     :literal:
-
-  and wrap it using ``f2py -c -m scalar scalar.f``.
-
-  __ scalar.f
-
-  In Python:
-
-  .. include:: scalar_session.dat
-     :literal:
-
-
-String arguments
-=================
-
-F2PY generated wrapper functions accept (almost) any Python object as
-a string argument; ``str`` is applied to non-string objects.
-Exceptions are Numeric arrays that must have type code ``'c'`` or
-``'1'`` when used as string arguments.
-
-A string can have arbitrary length when used as a string argument
-to an F2PY generated wrapper function. If the length is greater than
-expected, the string is truncated. If the length is smaller than
-expected, additional memory is allocated and filled with ``\0``.
-
-Because Python strings are immutable, an ``intent(inout)`` argument
-expects an array version of a string in order for *in situ* changes to
-be effective.
-
-.. topic:: Example
-
-  Consider the following `Fortran 77 code`__:
-
-  .. include:: string.f
-     :literal:
-
-  and wrap it using ``f2py -c -m mystring string.f``.
-
-  __ string.f
-
-  Python session:
-
-  .. include:: string_session.dat
-     :literal:
-
-
-Array arguments
-================
-
-In general, array arguments of F2PY generated wrapper functions accept
-arbitrary sequences that can be transformed to Numeric array objects.
-An exception is ``intent(inout)`` array arguments, which must always be
-proper-contiguous and have a proper type; otherwise an exception is
-raised. Another exception is ``intent(inplace)`` array arguments, whose
-attributes will be changed in situ if the argument has a different type
-than expected (see the ``intent(inplace)`` attribute for more
-information).
-
-In general, if a Numeric array is proper-contiguous and has a proper
-type then it is directly passed to the wrapped Fortran/C function.
-
-Otherwise, an element-wise copy of the input array is made and the
-copy, being proper-contiguous and of proper type, is used as the
-array argument.
-
-There are two types of proper-contiguous Numeric arrays:
-
-* Fortran-contiguous arrays, where data is stored column-wise,
-  i.e. indexing of data as stored in memory starts from the lowest
-  dimension;
-* C-contiguous, or simply contiguous arrays, where data is stored
-  row-wise, i.e. indexing of data as stored in memory starts from the
-  highest dimension.
-
-For one-dimensional arrays these notions coincide.
-
-For example, a 2x2 array ``A`` is Fortran-contiguous if its elements
-are stored in memory in the following order::
-
-  A[0,0] A[1,0] A[0,1] A[1,1]
-
-and C-contiguous if the order is as follows::
-
-  A[0,0] A[0,1] A[1,0] A[1,1]
-
-To test whether an array is C-contiguous, use the ``.iscontiguous()``
-method of Numeric arrays. To test for Fortran contiguity, all
-F2PY generated extension modules provide a function
-``has_column_major_storage(<array>)``. This function is equivalent to
-``Numeric.transpose(<array>).iscontiguous()`` but more efficient.
-
-Usually there is no need to worry about how the arrays are stored in
-memory and whether the wrapped functions, being either Fortran or C
-functions, assume one or another storage order. F2PY automatically
-ensures that wrapped functions get arguments with the proper storage
-order; the corresponding algorithm is designed to make copies of
-arrays only when absolutely necessary. However, when dealing with very
-large multi-dimensional input arrays with sizes close to the size of
-the physical memory in your computer, care must be taken to always use
-proper-contiguous and proper type arguments.
-
-To transform input arrays to column major storage order before passing
-them to Fortran routines, use the function
-``as_column_major_storage(<array>)`` that is provided by all F2PY
-generated extension modules.
-
-.. topic:: Example
-
-  Consider the `Fortran 77 code`__:
-
-  .. include:: array.f
-     :literal:
-
-  and wrap it using ``f2py -c -m arr array.f -DF2PY_REPORT_ON_ARRAY_COPY=1``.
-
-  __ array.f
-
-  In Python:
-
-  .. include:: array_session.dat
-     :literal:
-
-Call-back arguments
-====================
-
-F2PY supports calling Python functions from Fortran or C codes.
-
-
-.. topic:: Example
-
-  Consider the following `Fortran 77 code`__
-
-  .. include:: callback.f
-     :literal:
-
-  and wrap it using ``f2py -c -m callback callback.f``.
-
-  __ callback.f
-
-  In Python:
-
-  .. include:: callback_session.dat
-     :literal:
-
-In the above example F2PY was able to accurately guess the signature
-of the call-back function. However, sometimes F2PY cannot establish the
-signature as one would wish, and then the signature of the call-back
-function must be modified manually in the signature file. Namely,
-signature files may contain special modules (the names of such modules
-contain the substring ``__user__``) that collect various signatures of
-call-back functions. Callback arguments in routine signatures have the
-attribute ``external`` (see also the ``intent(callback)`` attribute). To
-relate a callback argument to its signature in a ``__user__`` module
-block, use the ``use`` statement as illustrated below. The same signature
-of a callback argument can be referred to in different routine
-signatures.
-
-.. topic:: Example
-
-  We use the same `Fortran 77 code`__ as in the previous example but now
-  we'll pretend that F2PY was not able to guess the signatures of
-  call-back arguments correctly. First, we create an initial signature
-  file ``callback2.pyf`` using F2PY::
-
-    f2py -m callback2 -h callback2.pyf callback.f
-
-  Then modify it as follows
-
-  .. include:: callback2.pyf
-     :literal:
-
-  Finally, build the extension module using::
-
-    f2py -c callback2.pyf callback.f
-
-  An example Python session would be identical to the previous example
-  except that the argument names would differ.
-
-  __ callback.f
-
-Sometimes a Fortran package may require that users provide routines
-that the package will use. F2PY can construct an interface to such
-routines so that Python functions can be called from Fortran.
-
-.. topic:: Example
-
-  Consider the following `Fortran 77 subroutine`__ that takes an array
-  and applies a function ``func`` to its elements.
-
-  .. include:: calculate.f
-     :literal:
-
-  __ calculate.f
-
-  It is expected that the function ``func`` has been defined
-  externally. In order to use a Python function as ``func``, it must
-  have the attribute ``intent(callback)`` (it must be specified before
-  the ``external`` statement).
-
-  Finally, build an extension module using::
-
-    f2py -c -m foo calculate.f
-
-  In Python:
-
-  .. include:: calculate_session.dat
-     :literal:
-
-The function is included as an argument to the Python function call to
-the Fortran subroutine even though it was NOT in the Fortran subroutine
-argument list. The "external" refers to the C function generated by
-F2PY, not the Python function itself. The Python function must be
-supplied to the C function.
-
-The callback function may also be explicitly set in the module.
-Then it is not necessary to pass the function in the argument list to
-the Fortran function. This may be desired if the Fortran function calling
-the Python callback function is itself called by another Fortran function.
-
-.. topic:: Example
-
-  Consider the following `Fortran 77 subroutine`__.
-
-  .. include:: extcallback.f
-     :literal:
-
-  __ extcallback.f
-
-  and wrap it using ``f2py -c -m pfromf extcallback.f``.
-
-  In Python:
-
-  .. include:: extcallback_session.dat
-     :literal:
-
-Resolving arguments to call-back functions
-------------------------------------------
-
-The F2PY generated interface is very flexible with respect to call-back
-arguments. For each call-back argument an additional optional
-argument ``<name>_extra_args`` is introduced by F2PY.
This argument
-can be used to pass extra arguments to the user provided call-back
-functions.
-
-If an F2PY generated wrapper function expects the following call-back
-argument::
-
-  def fun(a_1,...,a_n):
-     ...
-     return x_1,...,x_k
-
-but the following Python function
-
-::
-
-  def gun(b_1,...,b_m):
-     ...
-     return y_1,...,y_l
-
-is provided by a user, and in addition,
-
-::
-
-  fun_extra_args = (e_1,...,e_p)
-
-is used, then the following rules are applied when a Fortran or C
-function calls the call-back argument ``gun``:
-
-* If ``p==0`` then ``gun(a_1,...,a_q)`` is called, here
-  ``q=min(m,n)``.
-* If ``n+p<=m`` then ``gun(a_1,...,a_n,e_1,...,e_p)`` is called.
-* If ``p<=m<n+p`` then ``gun(a_1,...,a_{m-p},e_1,...,e_p)`` is called.
-* If ``p>m`` then ``gun(e_1,...,e_m)`` is called.
-* If ``n+p`` is less than the number of required arguments to ``gun``
-  then an exception is raised.
-
-The function ``gun`` may return any number of objects as a tuple. Then
-the following rules are applied:
-
-* If ``k<l``, then ``y_{k+1},...,y_l`` are ignored.
-* If ``k>l``, then only ``x_1,...,x_l`` are set.
-
-
-
-Common blocks
-==============
-
-F2PY generates wrappers to ``common`` blocks defined in a routine
-signature block. Common blocks are visible to all Fortran codes linked
-with the current extension module, but not to other extension modules
-(this restriction is due to how Python imports shared libraries). In
-Python, the F2PY wrappers to ``common`` blocks are ``fortran`` type
-objects that have (dynamic) attributes related to the data members of
-the common blocks. When accessed, these attributes return Numeric array
-objects (multi-dimensional arrays are Fortran-contiguous) that
-directly link to the data members in the common blocks. Data members can
-be changed by direct assignment or by in-place changes to the
-corresponding array objects.
-
-.. topic:: Example
-
-  Consider the following `Fortran 77 code`__
-
-  .. include:: common.f
-     :literal:
-
-  and wrap it using ``f2py -c -m common common.f``.
-
-  __ common.f
-
-  In Python:
-
-  .. include:: common_session.dat
-     :literal:
-
-Fortran 90 module data
-=======================
-
-The F2PY interface to Fortran 90 module data is similar to the
-interface to Fortran 77 common blocks.
-
-.. topic:: Example
-
-  Consider the following `Fortran 90 code`__
-
-  .. include:: moddata.f90
-     :literal:
-
-  and wrap it using ``f2py -c -m moddata moddata.f90``.
-
-  __ moddata.f90
-
-  In Python:
-
-  .. include:: moddata_session.dat
-     :literal:
-
-Allocatable arrays
--------------------
-
-F2PY has basic support for Fortran 90 module allocatable arrays.
-
-.. topic:: Example
-
-  Consider the following `Fortran 90 code`__
-
-  .. include:: allocarr.f90
-     :literal:
-
-  and wrap it using ``f2py -c -m allocarr allocarr.f90``.
-
-  __ allocarr.f90
-
-  In Python:
-
-  .. include:: allocarr_session.dat
-     :literal:
-
-
-===========
-Using F2PY
-===========
-
-F2PY can be used either as a command line tool ``f2py`` or as a Python
-module ``f2py2e``.
-
-Command ``f2py``
-=================
-
-When used as a command line tool, ``f2py`` has three major modes,
-distinguished by the usage of the ``-c`` and ``-h`` switches:
-
-1. To scan Fortran sources and generate a signature file, use
-
-   ::
-
-     f2py -h <signature file> <fortran files> \
-       [[ only: <fortran functions> : ] \
-        [ skip: <fortran functions> : ]]... \
-       [<fortran files> ...]
-
-   Note that a Fortran source file can contain many routines, and not
-   all routines necessarily need to be used from Python. So, you
-   can either specify which routines should be wrapped (in the
-   ``only: .. :`` part) or which routines F2PY should ignore (in the
-   ``skip: .. :`` part).
-
-   If ``<signature file>`` is specified as ``stdout`` then signatures
-   are sent to standard output instead of a file.
-
-   Among other options (see below), the following options can be used
-   in this mode:
-
-   ``--overwrite-signature``
-     Overwrite an existing signature file.
-
-2. To construct an extension module, use
-
-   ::
-
-     f2py <fortran files> \
-       [[ only: <fortran functions> : ] \
-        [ skip: <fortran functions> : ]]... \
-       [<fortran files> ...]
-
-   The constructed extension module is saved as
-   ``<modulename>module.c`` to the current directory.
-
-   Here ``<fortran files>`` may also contain signature files.
-   Among other options (see below), the following options can be used
-   in this mode:
-
-   ``--debug-capi``
-     Add debugging hooks to the extension module. When using this
-     extension module, various information about the wrapper is printed
-     to standard output, for example, the values of variables, the
-     steps taken, etc.
-
-   ``-include'<includefile>'``
-     Add a CPP ``#include`` statement to the extension module source.
-     ``<includefile>`` should be given in one of the following forms::
-
-       "filename.ext"
-       <filename.ext>
-
-     The include statement is inserted just before the wrapper
-     functions. This feature enables using arbitrary C functions
-     (defined in ``<includefile>``) in F2PY generated wrappers.
-
-     This option is deprecated. Use the ``usercode`` statement to
-     specify C code snippets directly in signature files.
-
-   ``--[no-]wrap-functions``
-     Create Fortran subroutine wrappers to Fortran functions.
-     ``--wrap-functions`` is the default because it ensures maximum
-     portability and compiler independence.
-
-   ``--include-paths <path1>:<path2>:..``
-     Search include files in the given directories.
-
-   ``--help-link [<list of resources names>]``
-     List system resources found by ``numpy_distutils/system_info.py``.
-     For example, try ``f2py --help-link lapack_opt``.
-
-3. To build an extension module, use
-
-   ::
-
-     f2py -c <fortran files> \
-       [[ only: <fortran functions> : ] \
-        [ skip: <fortran functions> : ]]... \
-       [ <fortran/c sources> ] [ <.o, .a, .so files> ]
-
-   If ``<fortran files>`` contains a signature file, then a source for
-   an extension module is constructed, all Fortran and C sources are
-   compiled, and finally all object and library files are linked to the
-   extension module ``<modulename>.so`` which is saved into the current
-   directory.
-
-   If ``<fortran files>`` does not contain a signature file, then an
-   extension module is constructed by scanning all Fortran source codes
-   for routine signatures.
-
-   Among other options (see below) and the options described for the
-   previous mode, the following options can be used in this mode:
-
-   ``--help-fcompiler``
-     List available Fortran compilers.
- ``--help-compiler`` [depreciated] - List available Fortran compilers. - ``--fcompiler=`` - Specify Fortran compiler type by vendor. - ``--f77exec=`` - Specify the path to F77 compiler - ``--fcompiler-exec=`` [depreciated] - Specify the path to F77 compiler - ``--f90exec=`` - Specify the path to F90 compiler - ``--f90compiler-exec=`` [depreciated] - Specify the path to F90 compiler - - ``--f77flags=`` - Specify F77 compiler flags - ``--f90flags=`` - Specify F90 compiler flags - ``--opt=`` - Specify optimization flags - ``--arch=`` - Specify architecture specific optimization flags - ``--noopt`` - Compile without optimization - ``--noarch`` - Compile without arch-dependent optimization - ``--debug`` - Compile with debugging information - - ``-l`` - Use the library ```` when linking. - ``-D[=]`` - Define macro ```` as ````. - ``-U`` - Define macro ```` - ``-I

    `` - Append directory ```` to the list of directories searched for - include files. - ``-L`` - Add directory ```` to the list of directories to be searched - for ``-l``. - - ``link-`` - - Link extension module with as defined by - ``numpy_distutils/system_info.py``. E.g. to link with optimized - LAPACK libraries (vecLib on MacOSX, ATLAS elsewhere), use - ``--link-lapack_opt``. See also ``--help-link`` switch. - - When building an extension module, a combination of the following - macros may be required for non-gcc Fortran compilers:: - - -DPREPEND_FORTRAN - -DNO_APPEND_FORTRAN - -DUPPERCASE_FORTRAN - - To test the performance of F2PY generated interfaces, use - ``-DF2PY_REPORT_ATEXIT``. Then a report of various timings is - printed out at the exit of Python. This feature may not work on - all platforms, currently only Linux platform is supported. - - To see whether F2PY generated interface performs copies of array - arguments, use ``-DF2PY_REPORT_ON_ARRAY_COPY=``. When the size - of an array argument is larger than ````, a message about - the coping is sent to ``stderr``. - -Other options: - -``-m `` - Name of an extension module. Default is ``untitled``. Don't use this option - if a signature file (*.pyf) is used. -``--[no-]lower`` - Do [not] lower the cases in ````. By default, - ``--lower`` is assumed with ``-h`` switch, and ``--no-lower`` - without the ``-h`` switch. -``--build-dir `` - All F2PY generated files are created in ````. Default is - ``tempfile.mktemp()``. -``--quiet`` - Run quietly. -``--verbose`` - Run with extra verbosity. -``-v`` - Print f2py version ID and exit. - -Execute ``f2py`` without any options to get an up-to-date list of -available options. - -Python module ``f2py2e`` -========================= - -.. topic:: Warning - - The current Python interface to ``f2py2e`` module is not mature and - may change in future depending on users needs. 
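The command-line options above can also be driven programmatically by assembling an argument list before invoking ``f2py``. A minimal sketch follows; the helper name, its defaults, and the argument order are invented for illustration and are not part of f2py itself — only the option spellings (``-m``, ``-h``, ``--build-dir``, ``--quiet``, ``only: .. :``) come from the text above:

```python
import subprocess

def build_f2py_command(sources, module_name="untitled", signature=None,
                       only=None, build_dir=None, quiet=False):
    """Assemble an ``f2py`` command line from the options described above.

    Illustrative helper only; option spellings follow the documentation,
    everything else (names, defaults) is hypothetical.
    """
    cmd = ["f2py", "-m", module_name]
    if signature is not None:
        cmd += ["-h", signature]          # write signatures and exit
    if build_dir is not None:
        cmd += ["--build-dir", build_dir]
    if quiet:
        cmd.append("--quiet")
    cmd += list(sources)                  # Fortran sources and/or .pyf files
    if only:                              # wrap only the named routines
        cmd += ["only:"] + list(only) + [":"]
    return cmd

cmd = build_f2py_command(["scalar.f"], module_name="scalar",
                         signature="scalar.pyf", only=["foo"])
print(cmd)

# To actually run it (requires f2py on the PATH):
# subprocess.run(cmd, check=True)
```

This keeps the option handling in one place, which is convenient when the same build is repeated for several source files.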
-
-The following functions are provided by the ``f2py2e`` module:
-
-``run_main(<list>)``
-  Equivalent to running::
-
-    f2py <args>
-
-  where ``<args> = string.join(<list>,' ')``, but in Python. Unless
-  ``-h`` is used, this function returns a dictionary containing
-  information on the generated modules and their dependencies on source
-  files. For example, the command ``f2py -m scalar scalar.f`` can be
-  executed from Python as follows
-
-  .. include:: run_main_session.dat
-     :literal:
-
-  You cannot build extension modules with this function, that is,
-  using ``-c`` is not allowed. Use the ``compile`` command instead; see
-  below.
-
-``compile(source, modulename='untitled', extra_args='', verbose=1, source_fn=None)``
-
-  Build an extension module from the Fortran 77 source string ``source``.
-  Return 0 if successful.
-  Note that this function actually calls ``f2py -c ..`` from the shell to
-  ensure the safety of the current Python process.
-  For example,
-
-  .. include:: compile_session.dat
-     :literal:
-
-==========================
-Using ``numpy_distutils``
-==========================
-
-``numpy_distutils`` is part of the SciPy_ project and aims to extend
-the standard Python ``distutils`` to deal with Fortran sources and F2PY
-signature files, e.g. compile Fortran sources, call F2PY to construct
-extension modules, etc.
-
-.. topic:: Example
-
-  Consider the following `setup file`__:
-
-  .. include:: setup_example.py
-     :literal:
-
-  Running
-
-  ::
-
-    python setup_example.py build
-
-  will build two extension modules ``scalar`` and ``fib2`` in the
-  build directory.
-
-  __ setup_example.py
-
-``numpy_distutils`` extends ``distutils`` with the following features:
-
-* The ``Extension`` class argument ``sources`` may contain Fortran source
-  files. In addition, the list ``sources`` may contain at most one
-  F2PY signature file, in which case the name of the Extension module must
-  match the ``<modulename>`` used in the signature file. It is
-  assumed that an F2PY signature file contains exactly one ``python
-  module`` block. 
-
-  If ``sources`` does not contain a signature file, then F2PY is used
-  to scan the Fortran source files for routine signatures to construct
-  wrappers to the Fortran codes.
-
-  Additional options to the F2PY process can be given using the
-  ``Extension`` class argument ``f2py_options``.
-
-``numpy_distutils`` 0.2.2 and up
-================================
-
-* The following new ``distutils`` commands are defined:
-
-  ``build_src``
-    to construct Fortran wrapper extension modules, among many other things.
-  ``config_fc``
-    to change Fortran compiler options
-
-  In addition, the ``build_ext`` and ``build_clib`` commands are enhanced
-  to support Fortran sources.
-
-  Run
-
-  ::
-
-    python <setup.py file> config_fc build_src build_ext --help
-
-  to see the available options for these commands.
-
-* When building Python packages containing Fortran sources, one
-  can choose different Fortran compilers by using the ``build_ext``
-  command option ``--fcompiler=<vendor>``. Here ``<vendor>`` can be one of the
-  following names::
-
-    absoft sun mips intel intelv intele intelev nag compaq compaqv gnu vast pg hpux
-
-  See ``numpy_distutils/fcompiler.py`` for an up-to-date list of
-  supported compilers, or run
-
-  ::
-
-    f2py -c --help-fcompiler
-
-``numpy_distutils`` pre 0.2.2
-=============================
-
-* The following new ``distutils`` commands are defined:
-
-  ``build_flib``
-    to build f77/f90 libraries used by Python extensions;
-  ``run_f2py``
-    to construct Fortran wrapper extension modules.
-
-  Run
-
-  ::
-
-    python <setup.py file> build_flib run_f2py --help
-
-  to see the available options for these commands.
-
-* When building Python packages containing Fortran sources, one
-  can choose different Fortran compilers either by using the ``build_flib``
-  command option ``--fcompiler=<vendor>`` or by defining the environment
-  variable ``FC_VENDOR=<vendor>``. 
Here ``<vendor>`` can be one of the
-  following names::
-
-    Absoft Sun SGI Intel Itanium NAG Compaq Digital Gnu VAST PG
-
-  See ``numpy_distutils/command/build_flib.py`` for an up-to-date list of
-  supported compilers.
-
-======================
- Extended F2PY usages
-======================
-
-Adding self-written functions to F2PY generated modules
-=======================================================
-
-Self-written Python C/API functions can be defined inside
-signature files using the ``usercode`` and ``pymethoddef`` statements
-(they must be used inside the ``python module`` block). For
-example, the following signature file ``spam.pyf``
-
-.. include:: spam.pyf
-   :literal:
-
-wraps the C library function ``system()``::
-
-  f2py -c spam.pyf
-
-In Python:
-
-.. include:: spam_session.dat
-   :literal:
-
-Modifying the dictionary of an F2PY generated module
-====================================================
-
-The following example illustrates how to add user-defined
-variables to an F2PY generated extension module. Given the following
-signature file
-
-.. include:: var.pyf
-   :literal:
-
-compile it as ``f2py -c var.pyf``.
-
-Notice that the second ``usercode`` statement must be defined inside
-an ``interface`` block, where the module dictionary is available through
-the variable ``d`` (see the ``f2py var.pyf``-generated ``varmodule.c`` for
-additional details).
-
-In Python:
-
-.. include:: var_session.dat
-   :literal:
-
-.. References
-   ==========
-.. _F2PY: http://cens.ioc.ee/projects/f2py2e/
-.. _Python: http://www.python.org/
-.. _NumPy: http://www.numpy.org/
-.. 
_SciPy: http://www.numpy.org/ diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/moddata.f90 b/pythonPackages/numpy/numpy/f2py/docs/usersguide/moddata.f90 deleted file mode 100755 index 0e98f04674..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/moddata.f90 +++ /dev/null @@ -1,18 +0,0 @@ -module mod - integer i - integer :: x(4) - real, dimension(2,3) :: a - real, allocatable, dimension(:,:) :: b -contains - subroutine foo - integer k - print*, "i=",i - print*, "x=[",x,"]" - print*, "a=[" - print*, "[",a(1,1),",",a(1,2),",",a(1,3),"]" - print*, "[",a(2,1),",",a(2,2),",",a(2,3),"]" - print*, "]" - print*, "Setting a(1,2)=a(1,2)+3" - a(1,2) = a(1,2)+3 - end subroutine foo -end module mod diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/moddata_session.dat b/pythonPackages/numpy/numpy/f2py/docs/usersguide/moddata_session.dat deleted file mode 100755 index 1ec212f8bd..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/moddata_session.dat +++ /dev/null @@ -1,23 +0,0 @@ ->>> import moddata ->>> print moddata.mod.__doc__ -i - 'i'-scalar -x - 'i'-array(4) -a - 'f'-array(2,3) -foo - Function signature: - foo() - - ->>> moddata.mod.i = 5 ->>> moddata.mod.x[:2] = [1,2] ->>> moddata.mod.a = [[1,2,3],[4,5,6]] ->>> moddata.mod.foo() - i= 5 - x=[ 1 2 0 0 ] - a=[ - [ 1.000000 , 2.000000 , 3.000000 ] - [ 4.000000 , 5.000000 , 6.000000 ] - ] - Setting a(1,2)=a(1,2)+3 ->>> moddata.mod.a # a is Fortran-contiguous -array([[ 1., 5., 3.], - [ 4., 5., 6.]],'f') diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/run_main_session.dat b/pythonPackages/numpy/numpy/f2py/docs/usersguide/run_main_session.dat deleted file mode 100755 index 29ecc3dfe4..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/run_main_session.dat +++ /dev/null @@ -1,14 +0,0 @@ ->>> import f2py2e ->>> r=f2py2e.run_main(['-m','scalar','docs/usersguide/scalar.f']) -Reading fortran codes... 
- Reading file 'docs/usersguide/scalar.f' -Post-processing... - Block: scalar - Block: FOO -Building modules... - Building module "scalar"... - Wrote C/API module "scalar" to file "./scalarmodule.c" ->>> print r -{'scalar': {'h': ['/home/users/pearu/src_cvs/f2py2e/src/fortranobject.h'], - 'csrc': ['./scalarmodule.c', - '/home/users/pearu/src_cvs/f2py2e/src/fortranobject.c']}} diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/scalar.f b/pythonPackages/numpy/numpy/f2py/docs/usersguide/scalar.f deleted file mode 100755 index c22f639edb..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/scalar.f +++ /dev/null @@ -1,12 +0,0 @@ -C FILE: SCALAR.F - SUBROUTINE FOO(A,B) - REAL*8 A, B -Cf2py intent(in) a -Cf2py intent(inout) b - PRINT*, " A=",A," B=",B - PRINT*, "INCREMENT A AND B" - A = A + 1D0 - B = B + 1D0 - PRINT*, "NEW A=",A," B=",B - END -C END OF FILE SCALAR.F diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/scalar_session.dat b/pythonPackages/numpy/numpy/f2py/docs/usersguide/scalar_session.dat deleted file mode 100755 index 4fe8c03b1d..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/scalar_session.dat +++ /dev/null @@ -1,21 +0,0 @@ ->>> import scalar ->>> print scalar.foo.__doc__ -foo - Function signature: - foo(a,b) -Required arguments: - a : input float - b : in/output rank-0 array(float,'d') - ->>> scalar.foo(2,3) - A= 2. B= 3. - INCREMENT A AND B - NEW A= 3. B= 4. ->>> import Numeric ->>> a=Numeric.array(2) # these are integer rank-0 arrays ->>> b=Numeric.array(3) ->>> scalar.foo(a,b) - A= 2. B= 3. - INCREMENT A AND B - NEW A= 3. B= 4. 
->>> print a,b # note that only b is changed in situ -2 4 \ No newline at end of file diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/setup_example.py b/pythonPackages/numpy/numpy/f2py/docs/usersguide/setup_example.py deleted file mode 100755 index e5f5e84413..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/setup_example.py +++ /dev/null @@ -1,19 +0,0 @@ -#!/usr/bin/env python -# File: setup_example.py - -from numpy_distutils.core import Extension - -ext1 = Extension(name = 'scalar', - sources = ['scalar.f']) -ext2 = Extension(name = 'fib2', - sources = ['fib2.pyf','fib1.f']) - -if __name__ == "__main__": - from numpy_distutils.core import setup - setup(name = 'f2py_example', - description = "F2PY Users Guide examples", - author = "Pearu Peterson", - author_email = "pearu@cens.ioc.ee", - ext_modules = [ext1,ext2] - ) -# End of setup_example.py diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/spam.pyf b/pythonPackages/numpy/numpy/f2py/docs/usersguide/spam.pyf deleted file mode 100755 index 21ea18b77f..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/spam.pyf +++ /dev/null @@ -1,19 +0,0 @@ -! 
-*- f90 -*- -python module spam - usercode ''' - static char doc_spam_system[] = "Execute a shell command."; - static PyObject *spam_system(PyObject *self, PyObject *args) - { - char *command; - int sts; - - if (!PyArg_ParseTuple(args, "s", &command)) - return NULL; - sts = system(command); - return Py_BuildValue("i", sts); - } - ''' - pymethoddef ''' - {"system", spam_system, METH_VARARGS, doc_spam_system}, - ''' -end python module spam diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/spam_session.dat b/pythonPackages/numpy/numpy/f2py/docs/usersguide/spam_session.dat deleted file mode 100755 index 7f99d13f9a..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/spam_session.dat +++ /dev/null @@ -1,5 +0,0 @@ ->>> import spam ->>> status = spam.system('whoami') -pearu ->> status = spam.system('blah') -sh: line 1: blah: command not found \ No newline at end of file diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/string.f b/pythonPackages/numpy/numpy/f2py/docs/usersguide/string.f deleted file mode 100755 index 9246f02e78..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/string.f +++ /dev/null @@ -1,21 +0,0 @@ -C FILE: STRING.F - SUBROUTINE FOO(A,B,C,D) - CHARACTER*5 A, B - CHARACTER*(*) C,D -Cf2py intent(in) a,c -Cf2py intent(inout) b,d - PRINT*, "A=",A - PRINT*, "B=",B - PRINT*, "C=",C - PRINT*, "D=",D - PRINT*, "CHANGE A,B,C,D" - A(1:1) = 'A' - B(1:1) = 'B' - C(1:1) = 'C' - D(1:1) = 'D' - PRINT*, "A=",A - PRINT*, "B=",B - PRINT*, "C=",C - PRINT*, "D=",D - END -C END OF FILE STRING.F diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/string_session.dat b/pythonPackages/numpy/numpy/f2py/docs/usersguide/string_session.dat deleted file mode 100755 index 64ebcb3f4a..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/string_session.dat +++ /dev/null @@ -1,27 +0,0 @@ ->>> import mystring ->>> print mystring.foo.__doc__ -foo - Function signature: - foo(a,b,c,d) -Required arguments: - a : input 
string(len=5) - b : in/output rank-0 array(string(len=5),'c') - c : input string(len=-1) - d : in/output rank-0 array(string(len=-1),'c') - ->>> import Numeric ->>> a=Numeric.array('123') ->>> b=Numeric.array('123') ->>> c=Numeric.array('123') ->>> d=Numeric.array('123') ->>> mystring.foo(a,b,c,d) - A=123 - B=123 - C=123 - D=123 - CHANGE A,B,C,D - A=A23 - B=B23 - C=C23 - D=D23 ->>> a.tostring(),b.tostring(),c.tostring(),d.tostring() -('123', 'B23', '123', 'D23') \ No newline at end of file diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/var.pyf b/pythonPackages/numpy/numpy/f2py/docs/usersguide/var.pyf deleted file mode 100755 index 8275ff3afe..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/var.pyf +++ /dev/null @@ -1,11 +0,0 @@ -! -*- f90 -*- -python module var - usercode ''' - int BAR = 5; - ''' - interface - usercode ''' - PyDict_SetItemString(d,"BAR",PyInt_FromLong(BAR)); - ''' - end interface -end python module diff --git a/pythonPackages/numpy/numpy/f2py/docs/usersguide/var_session.dat b/pythonPackages/numpy/numpy/f2py/docs/usersguide/var_session.dat deleted file mode 100755 index fb0f798bf8..0000000000 --- a/pythonPackages/numpy/numpy/f2py/docs/usersguide/var_session.dat +++ /dev/null @@ -1,3 +0,0 @@ ->>> import var ->>> var.BAR -5 \ No newline at end of file diff --git a/pythonPackages/numpy/numpy/f2py/f2py.1 b/pythonPackages/numpy/numpy/f2py/f2py.1 deleted file mode 100755 index b9391e5920..0000000000 --- a/pythonPackages/numpy/numpy/f2py/f2py.1 +++ /dev/null @@ -1,209 +0,0 @@ -.TH "F2PY" 1 -.SH NAME -f2py \- Fortran to Python interface generator -.SH SYNOPSIS -(1) To construct extension module sources: - -.B f2py -[] [[[only:]||[skip:]] ] [: ...] 
- -(2) To compile fortran files and build extension modules: - -.B f2py --c [, , ] - -(3) To generate signature files: - -.B f2py --h ...< same options as in (1) > -.SH DESCRIPTION -This program generates a Python C/API file (module.c) -that contains wrappers for given Fortran or C functions so that they -can be called from Python. -With the \-c option the corresponding -extension modules are built. -.SH OPTIONS -.TP -.B \-h -Write signatures of the fortran routines to file and -exit. You can then edit and use it instead of . If ==stdout then the signatures are printed to -stdout. -.TP -.B -Names of fortran routines for which Python C/API functions will be -generated. Default is all that are found in . -.TP -.B skip: -Ignore fortran functions that follow until `:'. -.TP -.B only: -Use only fortran functions that follow until `:'. -.TP -.B : -Get back to mode. -.TP -.B \-m -Name of the module; f2py generates a Python/C API file -module.c or extension module . Default is -\'untitled\'. -.TP -.B \-\-[no\-]lower -Do [not] lower the cases in . By default, \-\-lower is -assumed with \-h key, and \-\-no\-lower without \-h key. -.TP -.B \-\-build\-dir -All f2py generated files are created in . Default is tempfile.mktemp(). -.TP -.B \-\-overwrite\-signature -Overwrite existing signature file. -.TP -.B \-\-[no\-]latex\-doc -Create (or not) module.tex. Default is \-\-no\-latex\-doc. -.TP -.B \-\-short\-latex -Create 'incomplete' LaTeX document (without commands \\documentclass, -\\tableofcontents, and \\begin{document}, \\end{document}). -.TP -.B \-\-[no\-]rest\-doc -Create (or not) module.rst. Default is \-\-no\-rest\-doc. -.TP -.B \-\-debug\-capi -Create C/API code that reports the state of the wrappers during -runtime. Useful for debugging. -.TP -.B \-include\'\' -Add CPP #include statement to the C/API code. should be -in the format of either `"filename.ext"' or `'. As a -result will be included just before wrapper functions -part in the C/API code. 
The option is deprecated; use the `usercode`
-statement in signature files instead.
-.TP
-.B \-\-[no\-]wrap\-functions
-Create Fortran subroutine wrappers to Fortran 77
-functions. \-\-wrap\-functions is the default because it ensures maximum
-portability/compiler independence.
-.TP
-.B \-\-help\-link [..]
-List system resources found by system_info.py. [..] may contain
-a list of resource names. See also the \-\-link\-<resource> switch below.
-.TP
-.B \-\-quiet
-Run quietly.
-.TP
-.B \-\-verbose
-Run with extra verbosity.
-.TP
-.B \-v
-Print f2py version ID and exit.
-.TP
-.B \-\-include_paths <path1>:<path2>:...
-Search include files (that f2py will scan) from the given directories.
-.SH "CONFIG_FC OPTIONS"
-The following options are effective only when the \-c switch is used.
-.TP
-.B \-\-help-compiler
-List available Fortran compilers [DEPRECATED].
-.TP
-.B \-\-fcompiler=<vendor>
-Specify Fortran compiler type by vendor.
-.TP
-.B \-\-compiler=<compiler type>
-Specify C compiler type (as defined by distutils).
-.TP
-.B \-\-fcompiler-exec=<path>
-Specify the path to the F77 compiler [DEPRECATED].
-.TP
-.B \-\-f90compiler\-exec=<path>
-Specify the path to the F90 compiler [DEPRECATED].
-.TP
-.B \-\-help\-fcompiler
-List available Fortran compilers and exit.
-.TP
-.B \-\-f77exec=<path>
-Specify the path to the F77 compiler.
-.TP
-.B \-\-f90exec=<path>
-Specify the path to the F90 compiler.
-.TP
-.B \-\-f77flags="..."
-Specify F77 compiler flags.
-.TP
-.B \-\-f90flags="..."
-Specify F90 compiler flags.
-.TP
-.B \-\-opt="..."
-Specify optimization flags.
-.TP
-.B \-\-arch="..."
-Specify architecture specific optimization flags.
-.TP
-.B \-\-noopt
-Compile without optimization.
-.TP
-.B \-\-noarch
-Compile without arch-dependent optimization.
-.TP
-.B \-\-debug
-Compile with debugging information.
-.SH "EXTRA OPTIONS"
-The following options are effective only when the \-c switch is used.
-.TP
-.B \-\-link-<resource>
-Link the extension module with <resource> as defined by
-numpy_distutils/system_info.py. E.g. 
to link with optimized LAPACK -libraries (vecLib on MacOSX, ATLAS elsewhere), use -\-\-link\-lapack_opt. See also \-\-help\-link switch. - -.TP -.B -L/path/to/lib/ -l -.TP -.B -D -U -I/path/to/include/ -.TP -.B .o .so .a - -.TP -.B -DPREPEND_FORTRAN -DNO_APPEND_FORTRAN -DUPPERCASE_FORTRAN -DUNDERSCORE_G77 -Macros that might be required with non-gcc Fortran compilers. - -.TP -.B -DF2PY_REPORT_ATEXIT -To print out a performance report of F2PY interface when python -exits. Available for Linux. - -.TP -.B -DF2PY_REPORT_ON_ARRAY_COPY= -To send a message to stderr whenever F2PY interface makes a copy of an -array. Integer sets the threshold for array sizes when a message -should be shown. - -.SH REQUIREMENTS -Python 1.5.2 or higher (2.x is supported). - -Numerical Python 13 or higher (20.x,21.x,22.x,23.x are supported). - -Optional Numarray 0.9 or higher partially supported. - -numpy_distutils from Scipy (can be downloaded from F2PY homepage) -.SH "SEE ALSO" -python(1) -.SH BUGS -For instructions on reporting bugs, see - - http://cens.ioc.ee/projects/f2py2e/FAQ.html -.SH AUTHOR -Pearu Peterson -.SH "INTERNET RESOURCES" -Main website: http://cens.ioc.ee/projects/f2py2e/ - -User's Guide: http://cens.ioc.ee/projects/f2py2e/usersguide/ - -Mailing list: http://cens.ioc.ee/mailman/listinfo/f2py-users/ - -Scipy website: http://www.numpy.org -.SH COPYRIGHT -Copyright (c) 1999, 2000, 2001, 2002, 2003, 2004, 2005 Pearu Peterson -.SH LICENSE -NumPy License -.SH VERSION -2.45.241 diff --git a/pythonPackages/numpy/numpy/f2py/f2py2e.py b/pythonPackages/numpy/numpy/f2py/f2py2e.py deleted file mode 100755 index 8c220565fc..0000000000 --- a/pythonPackages/numpy/numpy/f2py/f2py2e.py +++ /dev/null @@ -1,569 +0,0 @@ -#!/usr/bin/env python -""" - -f2py2e - Fortran to Python C/API generator. 2nd Edition. - See __usage__ below. 
- -Copyright 1999--2005 Pearu Peterson all rights reserved, -Pearu Peterson -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -$Date: 2005/05/06 08:31:19 $ -Pearu Peterson -""" -__version__ = "$Revision: 1.90 $"[10:-1] - -import __version__ -f2py_version = __version__.version - -import sys -import os -import pprint -import types -import re -errmess=sys.stderr.write -#outmess=sys.stdout.write -show=pprint.pprint - -import crackfortran -import rules -import cb_rules -import auxfuncs -import cfuncs -import f90mod_rules - -outmess = auxfuncs.outmess - -try: - from numpy import __version__ as numpy_version -except ImportError: - numpy_version = 'N/A' - -__usage__ = """\ -Usage: - -1) To construct extension module sources: - - f2py [] [[[only:]||[skip:]] \\ - ] \\ - [: ...] - -2) To compile fortran files and build extension modules: - - f2py -c [, , ] - -3) To generate signature files: - - f2py -h ...< same options as in (1) > - -Description: This program generates a Python C/API file (module.c) - that contains wrappers for given fortran functions so that they - can be called from Python. With the -c option the corresponding - extension modules are built. - -Options: - - --2d-numpy Use numpy.f2py tool with NumPy support. [DEFAULT] - --2d-numeric Use f2py2e tool with Numeric support. - --2d-numarray Use f2py2e tool with Numarray support. - --g3-numpy Use 3rd generation f2py from the separate f2py package. - [NOT AVAILABLE YET] - - -h Write signatures of the fortran routines to file - and exit. You can then edit and use it instead - of . If ==stdout then the - signatures are printed to stdout. - Names of fortran routines for which Python C/API - functions will be generated. Default is all that are found - in . - Paths to fortran/signature files that will be scanned for - in order to determine their signatures. 
- skip: Ignore fortran functions that follow until `:'. - only: Use only fortran functions that follow until `:'. - : Get back to mode. - - -m Name of the module; f2py generates a Python/C API - file module.c or extension module . - Default is 'untitled'. - - --[no-]lower Do [not] lower the cases in . By default, - --lower is assumed with -h key, and --no-lower without -h key. - - --build-dir All f2py generated files are created in . - Default is tempfile.mktemp(). - - --overwrite-signature Overwrite existing signature file. - - --[no-]latex-doc Create (or not) module.tex. - Default is --no-latex-doc. - --short-latex Create 'incomplete' LaTeX document (without commands - \\documentclass, \\tableofcontents, and \\begin{document}, - \\end{document}). - - --[no-]rest-doc Create (or not) module.rst. - Default is --no-rest-doc. - - --debug-capi Create C/API code that reports the state of the wrappers - during runtime. Useful for debugging. - - --[no-]wrap-functions Create Fortran subroutine wrappers to Fortran 77 - functions. --wrap-functions is default because it ensures - maximum portability/compiler independence. - - --include_paths ::... Search include files from the given - directories. - - --help-link [..] List system resources found by system_info.py. See also - --link- switch below. [..] is optional list - of resources names. E.g. try 'f2py --help-link lapack_opt'. - - --quiet Run quietly. - --verbose Run with extra verbosity. - -v Print f2py version ID and exit. 
- - -numpy.distutils options (only effective with -c): - - --fcompiler= Specify Fortran compiler type by vendor - --compiler= Specify C compiler type (as defined by distutils) - - --help-fcompiler List available Fortran compilers and exit - --f77exec= Specify the path to F77 compiler - --f90exec= Specify the path to F90 compiler - --f77flags= Specify F77 compiler flags - --f90flags= Specify F90 compiler flags - --opt= Specify optimization flags - --arch= Specify architecture specific optimization flags - --noopt Compile without optimization - --noarch Compile without arch-dependent optimization - --debug Compile with debugging information - -Extra options (only effective with -c): - - --link- Link extension module with as defined - by numpy.distutils/system_info.py. E.g. to link - with optimized LAPACK libraries (vecLib on MacOSX, - ATLAS elsewhere), use --link-lapack_opt. - See also --help-link switch. - - -L/path/to/lib/ -l - -D -U - -I/path/to/include/ - .o .so .a - - Using the following macros may be required with non-gcc Fortran - compilers: - -DPREPEND_FORTRAN -DNO_APPEND_FORTRAN -DUPPERCASE_FORTRAN - -DUNDERSCORE_G77 - - When using -DF2PY_REPORT_ATEXIT, a performance report of F2PY - interface is printed out at exit (platforms: Linux). - - When using -DF2PY_REPORT_ON_ARRAY_COPY=, a message is - sent to stderr whenever F2PY interface makes a copy of an - array. Integer sets the threshold for array sizes when - a message should be shown. - -Version: %s -numpy Version: %s -Requires: Python 2.3 or higher. -License: NumPy license (see LICENSE.txt in the NumPy source code) -Copyright 1999 - 2005 Pearu Peterson all rights reserved. -http://cens.ioc.ee/projects/f2py2e/"""%(f2py_version, numpy_version) - - -def scaninputline(inputline): - files,funcs,skipfuncs,onlyfuncs,debug=[],[],[],[],[] - f,f2,f3,f4,f5,f6,f7=1,0,0,0,0,0,0 - verbose = 1 - dolc=-1 - dolatexdoc = 0 - dorestdoc = 0 - wrapfuncs = 1 - buildpath = '.' 
- include_paths = [] - signsfile,modulename=None,None - options = {'buildpath':buildpath} - for l in inputline: - if l=='': pass - elif l=='only:': f=0 - elif l=='skip:': f=-1 - elif l==':': f=1;f4=0 - elif l[:8]=='--debug-': debug.append(l[8:]) - elif l=='--lower': dolc=1 - elif l=='--build-dir': f6=1 - elif l=='--no-lower': dolc=0 - elif l=='--quiet': verbose = 0 - elif l=='--verbose': verbose += 1 - elif l=='--latex-doc': dolatexdoc=1 - elif l=='--no-latex-doc': dolatexdoc=0 - elif l=='--rest-doc': dorestdoc=1 - elif l=='--no-rest-doc': dorestdoc=0 - elif l=='--wrap-functions': wrapfuncs=1 - elif l=='--no-wrap-functions': wrapfuncs=0 - elif l=='--short-latex': options['shortlatex']=1 - elif l=='--overwrite-signature': options['h-overwrite']=1 - elif l=='-h': f2=1 - elif l=='-m': f3=1 - elif l[:2]=='-v': - print f2py_version - sys.exit() - elif l=='--show-compilers': - f5=1 - elif l[:8]=='-include': - cfuncs.outneeds['userincludes'].append(l[9:-1]) - cfuncs.userincludes[l[9:-1]]='#include '+l[8:] - elif l[:15]=='--include_paths': - f7=1 - elif l[0]=='-': - errmess('Unknown option %s\n'%`l`) - sys.exit() - elif f2: f2=0;signsfile=l - elif f3: f3=0;modulename=l - elif f6: f6=0;buildpath=l - elif f7: f7=0;include_paths.extend(l.split(os.pathsep)) - elif f==1: - try: - open(l).close() - files.append(l) - except IOError,detail: - errmess('IOError: %s. Skipping file "%s".\n'%(str(detail),l)) - elif f==-1: skipfuncs.append(l) - elif f==0: onlyfuncs.append(l) - if not f5 and not files and not modulename: - print __usage__ - sys.exit() - if not os.path.isdir(buildpath): - if not verbose: - outmess('Creating build directory %s'%(buildpath)) - os.mkdir(buildpath) - if signsfile: - signsfile = os.path.join(buildpath,signsfile) - if signsfile and os.path.isfile(signsfile) and 'h-overwrite' not in options: - errmess('Signature file "%s" exists!!! 
Use --overwrite-signature to overwrite.\n'%(signsfile)) - sys.exit() - - options['debug']=debug - options['verbose']=verbose - if dolc==-1 and not signsfile: options['do-lower']=0 - else: options['do-lower']=dolc - if modulename: options['module']=modulename - if signsfile: options['signsfile']=signsfile - if onlyfuncs: options['onlyfuncs']=onlyfuncs - if skipfuncs: options['skipfuncs']=skipfuncs - options['dolatexdoc'] = dolatexdoc - options['dorestdoc'] = dorestdoc - options['wrapfuncs'] = wrapfuncs - options['buildpath']=buildpath - options['include_paths']=include_paths - return files,options - -def callcrackfortran(files,options): - rules.options=options - funcs=[] - crackfortran.debug=options['debug'] - crackfortran.verbose=options['verbose'] - if 'module' in options: - crackfortran.f77modulename=options['module'] - if 'skipfuncs' in options: - crackfortran.skipfuncs=options['skipfuncs'] - if 'onlyfuncs' in options: - crackfortran.onlyfuncs=options['onlyfuncs'] - crackfortran.include_paths[:]=options['include_paths'] - crackfortran.dolowercase=options['do-lower'] - postlist=crackfortran.crackfortran(files) - if 'signsfile' in options: - outmess('Saving signatures to file "%s"\n'%(options['signsfile'])) - pyf=crackfortran.crack2fortran(postlist) - if options['signsfile'][-6:]=='stdout': - sys.stdout.write(pyf) - else: - f=open(options['signsfile'],'w') - f.write(pyf) - f.close() - return postlist - -def buildmodules(lst): - cfuncs.buildcfuncs() - outmess('Building modules...\n') - modules,mnames,isusedby=[],[],{} - for i in range(len(lst)): - if '__user__' in lst[i]['name']: - cb_rules.buildcallbacks(lst[i]) - else: - if 'use' in lst[i]: - for u in lst[i]['use'].keys(): - if u not in isusedby: - isusedby[u]=[] - isusedby[u].append(lst[i]['name']) - modules.append(lst[i]) - mnames.append(lst[i]['name']) - ret = {} - for i in range(len(mnames)): - if mnames[i] in isusedby: - outmess('\tSkipping module "%s" which is used by %s.\n'%(mnames[i],','.join(map(lambda 
s:'"%s"'%s,isusedby[mnames[i]])))) - else: - um=[] - if 'use' in modules[i]: - for u in modules[i]['use'].keys(): - if u in isusedby and u in mnames: - um.append(modules[mnames.index(u)]) - else: - outmess('\tModule "%s" uses nonexisting "%s" which will be ignored.\n'%(mnames[i],u)) - ret[mnames[i]] = {} - dict_append(ret[mnames[i]],rules.buildmodule(modules[i],um)) - return ret - -def dict_append(d_out,d_in): - for (k,v) in d_in.items(): - if k not in d_out: - d_out[k] = [] - if type(v) is types.ListType: - d_out[k] = d_out[k] + v - else: - d_out[k].append(v) - -def run_main(comline_list): - """Run f2py as if string.join(comline_list,' ') is used as a command line. - In case of using -h flag, return None. - """ - if sys.version_info[0] >= 3: - import imp - imp.reload(crackfortran) - else: - reload(crackfortran) - f2pydir=os.path.dirname(os.path.abspath(cfuncs.__file__)) - fobjhsrc = os.path.join(f2pydir,'src','fortranobject.h') - fobjcsrc = os.path.join(f2pydir,'src','fortranobject.c') - files,options=scaninputline(comline_list) - auxfuncs.options=options - postlist=callcrackfortran(files,options) - isusedby={} - for i in range(len(postlist)): - if 'use' in postlist[i]: - for u in postlist[i]['use'].keys(): - if u not in isusedby: - isusedby[u]=[] - isusedby[u].append(postlist[i]['name']) - for i in range(len(postlist)): - if postlist[i]['block']=='python module' and '__user__' in postlist[i]['name']: - if postlist[i]['name'] in isusedby: - #if not quiet: - outmess('Skipping Makefile build for module "%s" which is used by %s\n'%(postlist[i]['name'],','.join(map(lambda s:'"%s"'%s,isusedby[postlist[i]['name']])))) - if 'signsfile' in options: - if options['verbose']>1: - outmess('Stopping. 
Edit the signature file and then run f2py on the signature file: ') - outmess('%s %s\n'%(os.path.basename(sys.argv[0]),options['signsfile'])) - return - for i in range(len(postlist)): - if postlist[i]['block']!='python module': - if 'python module' not in options: - errmess('Tip: If your original code is Fortran source then you must use -m option.\n') - raise TypeError,'All blocks must be python module blocks but got %s'%(`postlist[i]['block']`) - auxfuncs.debugoptions=options['debug'] - f90mod_rules.options=options - auxfuncs.wrapfuncs=options['wrapfuncs'] - - ret=buildmodules(postlist) - - for mn in ret.keys(): - dict_append(ret[mn],{'csrc':fobjcsrc,'h':fobjhsrc}) - return ret - -def filter_files(prefix,suffix,files,remove_prefix=None): - """ - Filter files by prefix and suffix. - """ - filtered,rest = [],[] - match = re.compile(prefix+r'.*'+suffix+r'\Z').match - if remove_prefix: - ind = len(prefix) - else: - ind = 0 - for file in [x.strip() for x in files]: - if match(file): filtered.append(file[ind:]) - else: rest.append(file) - return filtered,rest - -def get_prefix(module): - p = os.path.dirname(os.path.dirname(module.__file__)) - return p - -def run_compile(): - """ - Do it all in one call! 
- """ - import tempfile - - i = sys.argv.index('-c') - del sys.argv[i] - - remove_build_dir = 0 - try: i = sys.argv.index('--build-dir') - except ValueError: i=None - if i is not None: - build_dir = sys.argv[i+1] - del sys.argv[i+1] - del sys.argv[i] - else: - remove_build_dir = 1 - build_dir = os.path.join(tempfile.mktemp()) - - sysinfo_flags = filter(re.compile(r'[-][-]link[-]').match,sys.argv[1:]) - sys.argv = filter(lambda a,flags=sysinfo_flags:a not in flags,sys.argv) - if sysinfo_flags: - sysinfo_flags = [f[7:] for f in sysinfo_flags] - - f2py_flags = filter(re.compile(r'[-][-]((no[-]|)(wrap[-]functions|lower)|debug[-]capi|quiet)|[-]include').match,sys.argv[1:]) - sys.argv = filter(lambda a,flags=f2py_flags:a not in flags,sys.argv) - f2py_flags2 = [] - fl = 0 - for a in sys.argv[1:]: - if a in ['only:','skip:']: - fl = 1 - elif a==':': - fl = 0 - if fl or a==':': - f2py_flags2.append(a) - if f2py_flags2 and f2py_flags2[-1]!=':': - f2py_flags2.append(':') - f2py_flags.extend(f2py_flags2) - - sys.argv = filter(lambda a,flags=f2py_flags2:a not in flags,sys.argv) - - flib_flags = filter(re.compile(r'[-][-]((f(90)?compiler([-]exec|)|compiler)=|help[-]compiler)').match,sys.argv[1:]) - sys.argv = filter(lambda a,flags=flib_flags:a not in flags,sys.argv) - fc_flags = filter(re.compile(r'[-][-]((f(77|90)(flags|exec)|opt|arch)=|(debug|noopt|noarch|help[-]fcompiler))').match,sys.argv[1:]) - sys.argv = filter(lambda a,flags=fc_flags:a not in flags,sys.argv) - - if 1: - del_list = [] - for s in flib_flags: - v = '--fcompiler=' - if s[:len(v)]==v: - from numpy.distutils import fcompiler - fcompiler.load_all_fcompiler_classes() - allowed_keys = fcompiler.fcompiler_class.keys() - nv = ov = s[len(v):].lower() - if ov not in allowed_keys: - vmap = {} # XXX - try: - nv = vmap[ov] - except KeyError: - if ov not in vmap.values(): - print 'Unknown vendor: "%s"' % (s[len(v):]) - nv = ov - i = flib_flags.index(s) - flib_flags[i] = '--fcompiler=' + nv - continue - for s in del_list: 
- i = flib_flags.index(s) - del flib_flags[i] - assert len(flib_flags)<=2,`flib_flags` - setup_flags = filter(re.compile(r'[-][-](verbose)').match,sys.argv[1:]) - sys.argv = filter(lambda a,flags=setup_flags:a not in flags,sys.argv) - if '--quiet' in f2py_flags: - setup_flags.append('--quiet') - - modulename = 'untitled' - sources = sys.argv[1:] - if '-m' in sys.argv: - i = sys.argv.index('-m') - modulename = sys.argv[i+1] - del sys.argv[i+1],sys.argv[i] - sources = sys.argv[1:] - else: - from numpy.distutils.command.build_src import get_f2py_modulename - pyf_files,sources = filter_files('','[.]pyf([.]src|)',sources) - sources = pyf_files + sources - for f in pyf_files: - modulename = get_f2py_modulename(f) - if modulename: - break - - extra_objects, sources = filter_files('','[.](o|a|so)',sources) - include_dirs, sources = filter_files('-I','',sources,remove_prefix=1) - library_dirs, sources = filter_files('-L','',sources,remove_prefix=1) - libraries, sources = filter_files('-l','',sources,remove_prefix=1) - undef_macros, sources = filter_files('-U','',sources,remove_prefix=1) - define_macros, sources = filter_files('-D','',sources,remove_prefix=1) - using_numarray = 0 - using_numeric = 0 - for i in range(len(define_macros)): - name_value = define_macros[i].split('=',1) - if len(name_value)==1: - name_value.append(None) - if len(name_value)==2: - define_macros[i] = tuple(name_value) - else: - print 'Invalid use of -D:',name_value - - from numpy.distutils.system_info import get_info - - num_include_dir = None - num_info = {} - #import numpy - #n = 'numpy' - #p = get_prefix(numpy) - #from numpy.distutils.misc_util import get_numpy_include_dirs - #num_info = {'include_dirs': get_numpy_include_dirs()} - - if num_info: - include_dirs.extend(num_info.get('include_dirs',[])) - - from numpy.distutils.core import setup,Extension - ext_args = {'name':modulename,'sources':sources, - 'include_dirs': include_dirs, - 'library_dirs': library_dirs, - 'libraries': libraries, - 
'define_macros': define_macros, - 'undef_macros': undef_macros, - 'extra_objects': extra_objects, - 'f2py_options': f2py_flags, - } - - if sysinfo_flags: - from numpy.distutils.misc_util import dict_append - for n in sysinfo_flags: - i = get_info(n) - if not i: - outmess('No %s resources found in system'\ - ' (try `f2py --help-link`)\n' % (`n`)) - dict_append(ext_args,**i) - - ext = Extension(**ext_args) - sys.argv = [sys.argv[0]] + setup_flags - sys.argv.extend(['build', - '--build-temp',build_dir, - '--build-base',build_dir, - '--build-platlib','.']) - if fc_flags: - sys.argv.extend(['config_fc']+fc_flags) - if flib_flags: - sys.argv.extend(['build_ext']+flib_flags) - - setup(ext_modules = [ext]) - - if remove_build_dir and os.path.exists(build_dir): - import shutil - outmess('Removing build directory %s\n'%(build_dir)) - shutil.rmtree(build_dir) - -def main(): - if '--help-link' in sys.argv[1:]: - sys.argv.remove('--help-link') - from numpy.distutils.system_info import show_all - show_all() - return - if '-c' in sys.argv[1:]: - run_compile() - else: - run_main(sys.argv[1:]) - -#if __name__ == "__main__": -# main() - - -# EOF diff --git a/pythonPackages/numpy/numpy/f2py/f2py_testing.py b/pythonPackages/numpy/numpy/f2py/f2py_testing.py deleted file mode 100755 index 0c78f35946..0000000000 --- a/pythonPackages/numpy/numpy/f2py/f2py_testing.py +++ /dev/null @@ -1,44 +0,0 @@ -import sys -import re - -from numpy.testing.utils import jiffies, memusage - -def cmdline(): - m=re.compile(r'\A\d+\Z') - args = [] - repeat = 1 - for a in sys.argv[1:]: - if m.match(a): - repeat = eval(a) - else: - args.append(a) - f2py_opts = ' '.join(args) - return repeat,f2py_opts - -def run(runtest,test_functions,repeat=1): - l = [(t,repr(t.__doc__.split('\n')[1].strip())) for t in test_functions] - #l = [(t,'') for t in test_functions] - start_memusage = memusage() - diff_memusage = None - start_jiffies = jiffies() - i = 0 - while i -Permission to use, modify, and distribute this software 
is given under the -terms of the NumPy License. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -$Date: 2005/02/03 19:30:23 $ -Pearu Peterson -""" - -__version__ = "$Revision: 1.27 $"[10:-1] - -f2py_version='See `f2py -v`' - -import pprint -import sys -errmess=sys.stderr.write -outmess=sys.stdout.write -show=pprint.pprint - -from auxfuncs import * -import numpy as np -import capi_maps -import func2subr -from crackfortran import undo_rmbadname, undo_rmbadname1 - -options={} - -def findf90modules(m): - if ismodule(m): return [m] - if not hasbody(m): return [] - ret = [] - for b in m['body']: - if ismodule(b): ret.append(b) - else: ret=ret+findf90modules(b) - return ret - -fgetdims1 = """\ - external f2pysetdata - logical ns - integer r,i,j - integer(%d) s(*) - ns = .FALSE. - if (allocated(d)) then - do i=1,r - if ((size(d,i).ne.s(i)).and.(s(i).ge.0)) then - ns = .TRUE. - end if - end do - if (ns) then - deallocate(d) - end if - end if - if ((.not.allocated(d)).and.(s(1).ge.1)) then""" % np.intp().itemsize - -fgetdims2="""\ - end if - if (allocated(d)) then - do i=1,r - s(i) = size(d,i) - end do - end if - flag = 1 - call f2pysetdata(d,allocated(d))""" - -fgetdims2_sa="""\ - end if - if (allocated(d)) then - do i=1,r - s(i) = size(d,i) - end do - !s(r) must be equal to len(d(1)) - end if - flag = 2 - call f2pysetdata(d,allocated(d))""" - - -def buildhooks(pymod): - global fgetdims1,fgetdims2 - import rules - ret = {'f90modhooks':[],'initf90modhooks':[],'body':[], - 'need':['F_FUNC','arrayobject.h'], - 'separatorsfor':{'includes0':'\n','includes':'\n'}, - 'docs':['"Fortran 90/95 modules:\\n"'], - 'latexdoc':[]} - fhooks=[''] - def fadd(line,s=fhooks): s[0] = '%s\n %s'%(s[0],line) - doc = [''] - def dadd(line,s=doc): s[0] = '%s\n%s'%(s[0],line) - for m in findf90modules(pymod): - sargs,fargs,efargs,modobjs,notvars,onlyvars=[],[],[],[],[m['name']],[] - sargsp = [] - ifargs = [] - mfargs = [] - if hasbody(m): - for b in m['body']: notvars.append(b['name']) - 
for n in m['vars'].keys(): - var = m['vars'][n] - if (n not in notvars) and (not l_or(isintent_hide,isprivate)(var)): - onlyvars.append(n) - mfargs.append(n) - outmess('\t\tConstructing F90 module support for "%s"...\n'%(m['name'])) - if onlyvars: - outmess('\t\t Variables: %s\n'%(' '.join(onlyvars))) - chooks=[''] - def cadd(line,s=chooks): s[0] = '%s\n%s'%(s[0],line) - ihooks=[''] - def iadd(line,s=ihooks): s[0] = '%s\n%s'%(s[0],line) - - vrd=capi_maps.modsign2map(m) - cadd('static FortranDataDef f2py_%s_def[] = {'%(m['name'])) - dadd('\\subsection{Fortran 90/95 module \\texttt{%s}}\n'%(m['name'])) - if hasnote(m): - note = m['note'] - if type(note) is type([]): note='\n'.join(note) - dadd(note) - if onlyvars: - dadd('\\begin{description}') - for n in onlyvars: - var = m['vars'][n] - modobjs.append(n) - ct = capi_maps.getctype(var) - at = capi_maps.c2capi_map[ct] - dm = capi_maps.getarrdims(n,var) - dms = dm['dims'].replace('*','-1').strip() - dms = dms.replace(':','-1').strip() - if not dms: dms='-1' - use_fgetdims2 = fgetdims2 - if isstringarray(var): - if 'charselector' in var and 'len' in var['charselector']: - cadd('\t{"%s",%s,{{%s,%s}},%s},'\ - %(undo_rmbadname1(n),dm['rank'],dms,var['charselector']['len'],at)) - use_fgetdims2 = fgetdims2_sa - else: - cadd('\t{"%s",%s,{{%s}},%s},'%(undo_rmbadname1(n),dm['rank'],dms,at)) - else: - cadd('\t{"%s",%s,{{%s}},%s},'%(undo_rmbadname1(n),dm['rank'],dms,at)) - dadd('\\item[]{{}\\verb@%s@{}}'%(capi_maps.getarrdocsign(n,var))) - if hasnote(var): - note = var['note'] - if type(note) is type([]): note='\n'.join(note) - dadd('--- %s'%(note)) - if isallocatable(var): - fargs.append('f2py_%s_getdims_%s'%(m['name'],n)) - efargs.append(fargs[-1]) - sargs.append('void (*%s)(int*,int*,void(*)(char*,int*),int*)'%(n)) - sargsp.append('void (*)(int*,int*,void(*)(char*,int*),int*)') - iadd('\tf2py_%s_def[i_f2py++].func = %s;'%(m['name'],n)) - fadd('subroutine %s(r,s,f2pysetdata,flag)'%(fargs[-1])) - fadd('use %s, only: d => 
%s\n'%(m['name'],undo_rmbadname1(n))) - fadd('integer flag\n') - fhooks[0]=fhooks[0]+fgetdims1 - dms = eval('range(1,%s+1)'%(dm['rank'])) - fadd(' allocate(d(%s))\n'%(','.join(map(lambda i:'s(%s)'%i,dms)))) - fhooks[0]=fhooks[0]+use_fgetdims2 - fadd('end subroutine %s'%(fargs[-1])) - else: - fargs.append(n) - sargs.append('char *%s'%(n)) - sargsp.append('char*') - iadd('\tf2py_%s_def[i_f2py++].data = %s;'%(m['name'],n)) - if onlyvars: - dadd('\\end{description}') - if hasbody(m): - for b in m['body']: - if not isroutine(b): - print 'Skipping',b['block'],b['name'] - continue - modobjs.append('%s()'%(b['name'])) - b['modulename'] = m['name'] - api,wrap=rules.buildapi(b) - if isfunction(b): - fhooks[0]=fhooks[0]+wrap - fargs.append('f2pywrap_%s_%s'%(m['name'],b['name'])) - #efargs.append(fargs[-1]) - ifargs.append(func2subr.createfuncwrapper(b,signature=1)) - else: - fargs.append(b['name']) - mfargs.append(fargs[-1]) - #if '--external-modroutines' in options and options['--external-modroutines']: - # outmess('\t\t\tapplying --external-modroutines for %s\n'%(b['name'])) - # efargs.append(fargs[-1]) - api['externroutines']=[] - ar=applyrules(api,vrd) - ar['docs']=[] - ar['docshort']=[] - ret=dictappend(ret,ar) - cadd('\t{"%s",-1,{{-1}},0,NULL,(void *)f2py_rout_#modulename#_%s_%s,doc_f2py_rout_#modulename#_%s_%s},'%(b['name'],m['name'],b['name'],m['name'],b['name'])) - sargs.append('char *%s'%(b['name'])) - sargsp.append('char *') - iadd('\tf2py_%s_def[i_f2py++].data = %s;'%(m['name'],b['name'])) - cadd('\t{NULL}\n};\n') - iadd('}') - ihooks[0]='static void f2py_setup_%s(%s) {\n\tint i_f2py=0;%s'%(m['name'],','.join(sargs),ihooks[0]) - if '_' in m['name']: - F_FUNC='F_FUNC_US' - else: - F_FUNC='F_FUNC' - iadd('extern void %s(f2pyinit%s,F2PYINIT%s)(void (*)(%s));'\ - %(F_FUNC,m['name'],m['name'].upper(),','.join(sargsp))) - iadd('static void f2py_init_%s(void) {'%(m['name'])) - iadd('\t%s(f2pyinit%s,F2PYINIT%s)(f2py_setup_%s);'\ - 
%(F_FUNC,m['name'],m['name'].upper(),m['name'])) - iadd('}\n') - ret['f90modhooks']=ret['f90modhooks']+chooks+ihooks - ret['initf90modhooks']=['\tPyDict_SetItemString(d, "%s", PyFortranObject_New(f2py_%s_def,f2py_init_%s));'%(m['name'],m['name'],m['name'])]+ret['initf90modhooks'] - fadd('') - fadd('subroutine f2pyinit%s(f2pysetupfunc)'%(m['name'])) - #fadd('use %s'%(m['name'])) - if mfargs: - for a in undo_rmbadname(mfargs): - fadd('use %s, only : %s'%(m['name'],a)) - if ifargs: - fadd(' '.join(['interface']+ifargs)) - fadd('end interface') - fadd('external f2pysetupfunc') - if efargs: - for a in undo_rmbadname(efargs): - fadd('external %s'%(a)) - fadd('call f2pysetupfunc(%s)'%(','.join(undo_rmbadname(fargs)))) - fadd('end subroutine f2pyinit%s\n'%(m['name'])) - - dadd('\n'.join(ret['latexdoc']).replace(r'\subsection{',r'\subsubsection{')) - - ret['latexdoc']=[] - ret['docs'].append('"\t%s --- %s"'%(m['name'], - ','.join(undo_rmbadname(modobjs)))) - - ret['routine_defs']='' - ret['doc']=[] - ret['docshort']=[] - ret['latexdoc']=doc[0] - if len(ret['docs'])<=1: ret['docs']='' - return ret,fhooks[0] diff --git a/pythonPackages/numpy/numpy/f2py/func2subr.py b/pythonPackages/numpy/numpy/f2py/func2subr.py deleted file mode 100755 index 7ce30bc705..0000000000 --- a/pythonPackages/numpy/numpy/f2py/func2subr.py +++ /dev/null @@ -1,167 +0,0 @@ -#!/usr/bin/env python -""" - -Rules for building C/API module with f2py2e. - -Copyright 1999,2000 Pearu Peterson all rights reserved, -Pearu Peterson -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. 
-$Date: 2004/11/26 11:13:06 $ -Pearu Peterson -""" - -__version__ = "$Revision: 1.16 $"[10:-1] - -f2py_version='See `f2py -v`' - -import pprint -import copy -import sys -errmess=sys.stderr.write -outmess=sys.stdout.write -show=pprint.pprint - -from auxfuncs import * -def var2fixfortran(vars,a,fa=None,f90mode=None): - if fa is None: - fa = a - if a not in vars: - show(vars) - outmess('var2fixfortran: No definition for argument "%s".\n'%a) - return '' - if 'typespec' not in vars[a]: - show(vars[a]) - outmess('var2fixfortran: No typespec for argument "%s".\n'%a) - return '' - vardef=vars[a]['typespec'] - if vardef=='type' and 'typename' in vars[a]: - vardef='%s(%s)'%(vardef,vars[a]['typename']) - selector={} - lk = '' - if 'kindselector' in vars[a]: - selector=vars[a]['kindselector'] - lk = 'kind' - elif 'charselector' in vars[a]: - selector=vars[a]['charselector'] - lk = 'len' - if '*' in selector: - if f90mode: - if selector['*'] in ['*',':','(*)']: - vardef='%s(len=*)'%(vardef) - else: - vardef='%s(%s=%s)'%(vardef,lk,selector['*']) - else: - if selector['*'] in ['*',':']: - vardef='%s*(%s)'%(vardef,selector['*']) - else: - vardef='%s*%s'%(vardef,selector['*']) - else: - if 'len' in selector: - vardef='%s(len=%s'%(vardef,selector['len']) - if 'kind' in selector: - vardef='%s,kind=%s)'%(vardef,selector['kind']) - else: - vardef='%s)'%(vardef) - elif 'kind' in selector: - vardef='%s(kind=%s)'%(vardef,selector['kind']) - - vardef='%s %s'%(vardef,fa) - if 'dimension' in vars[a]: - vardef='%s(%s)'%(vardef,','.join(vars[a]['dimension'])) - return vardef - -def createfuncwrapper(rout,signature=0): - assert isfunction(rout) - ret = [''] - def add(line,ret=ret): - ret[0] = '%s\n %s'%(ret[0],line) - name = rout['name'] - fortranname = getfortranname(rout) - f90mode = ismoduleroutine(rout) - newname = '%sf2pywrap'%(name) - vars = rout['vars'] - if newname not in vars: - vars[newname] = vars[name] - args = [newname]+rout['args'][1:] - else: - args = [newname]+rout['args'] - - l 
= var2fixfortran(vars,name,newname,f90mode) - return_char_star = 0 - if l[:13]=='character*(*)': - return_char_star = 1 - if f90mode: l = 'character(len=10)'+l[13:] - else: l = 'character*10'+l[13:] - charselect = vars[name]['charselector'] - if charselect.get('*','')=='(*)': - charselect['*'] = '10' - if f90mode: - sargs = ', '.join(args) - add('subroutine f2pywrap_%s_%s (%s)'%(rout['modulename'],name,sargs)) - if not signature: - add('use %s, only : %s'%(rout['modulename'],fortranname)) - else: - add('subroutine f2pywrap%s (%s)'%(name,', '.join(args))) - add('external %s'%(fortranname)) - #if not return_char_star: - l = l + ', '+fortranname - args = args[1:] - dumped_args = [] - for a in args: - if isexternal(vars[a]): - add('external %s'%(a)) - dumped_args.append(a) - for a in args: - if a in dumped_args: continue - if isscalar(vars[a]): - add(var2fixfortran(vars,a,f90mode=f90mode)) - dumped_args.append(a) - for a in args: - if a in dumped_args: continue - add(var2fixfortran(vars,a,f90mode=f90mode)) - - add(l) - - if not signature: - if islogicalfunction(rout): - add('%s = .not.(.not.%s(%s))'%(newname,fortranname,', '.join(args))) - else: - add('%s = %s(%s)'%(newname,fortranname,', '.join(args))) - if f90mode: - add('end subroutine f2pywrap_%s_%s'%(rout['modulename'],name)) - else: - add('end') - #print '**'*10 - #print ret[0] - #print '**'*10 - return ret[0] - -def assubr(rout): - if not isfunction_wrap(rout): return rout,'' - fortranname = getfortranname(rout) - name = rout['name'] - outmess('\t\tCreating wrapper for Fortran function "%s"("%s")...\n'%(name,fortranname)) - rout = copy.copy(rout) - fname = name - rname = fname - if 'result' in rout: - rname = rout['result'] - rout['vars'][fname]=rout['vars'][rname] - fvar = rout['vars'][fname] - if not isintent_out(fvar): - if 'intent' not in fvar: - fvar['intent']=[] - fvar['intent'].append('out') - flag=1 - for i in fvar['intent']: - if i.startswith('out='): - flag = 0 - break - if flag: - 
fvar['intent'].append('out=%s' % (rname)) - - rout['args'] = [fname] + rout['args'] - return rout,createfuncwrapper(rout) diff --git a/pythonPackages/numpy/numpy/f2py/info.py b/pythonPackages/numpy/numpy/f2py/info.py deleted file mode 100755 index 8beaba2280..0000000000 --- a/pythonPackages/numpy/numpy/f2py/info.py +++ /dev/null @@ -1,5 +0,0 @@ -"""Fortran to Python Interface Generator. - -""" - -postpone_import = True diff --git a/pythonPackages/numpy/numpy/f2py/rules.py b/pythonPackages/numpy/numpy/f2py/rules.py deleted file mode 100755 index ae3871df52..0000000000 --- a/pythonPackages/numpy/numpy/f2py/rules.py +++ /dev/null @@ -1,1411 +0,0 @@ -#!/usr/bin/env python -""" - -Rules for building C/API module with f2py2e. - -Here is a skeleton of a new wrapper function (13Dec2001): - -wrapper_function(args) - declarations - get_python_arguments, say, `a' and `b' - - get_a_from_python - if (successful) { - - get_b_from_python - if (successful) { - - callfortran - if (succesful) { - - put_a_to_python - if (succesful) { - - put_b_to_python - if (succesful) { - - buildvalue = ... - - } - - } - - } - - } - cleanup_b - - } - cleanup_a - - return buildvalue -""" -""" -Copyright 1999,2000 Pearu Peterson all rights reserved, -Pearu Peterson -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. 
-$Date: 2005/08/30 08:58:42 $ -Pearu Peterson -""" - -__version__ = "$Revision: 1.129 $"[10:-1] - -import __version__ -f2py_version = __version__.version - -import pprint -import sys -import time -import types -import copy -errmess=sys.stderr.write -outmess=sys.stdout.write -show=pprint.pprint - -from auxfuncs import * -import capi_maps -from capi_maps import * -import cfuncs -import common_rules -import use_rules -import f90mod_rules -import func2subr -options={} - -sepdict={} -#for k in ['need_cfuncs']: sepdict[k]=',' -for k in ['decl', - 'frompyobj', - 'cleanupfrompyobj', - 'topyarr','method', - 'pyobjfrom','closepyobjfrom', - 'freemem', - 'userincludes', - 'includes0','includes','typedefs','typedefs_generated', - 'cppmacros','cfuncs','callbacks', - 'latexdoc', - 'restdoc', - 'routine_defs','externroutines', - 'initf2pywraphooks', - 'commonhooks','initcommonhooks', - 'f90modhooks','initf90modhooks']: - sepdict[k]='\n' - -#################### Rules for C/API module ################# - -module_rules={ - 'modulebody':"""\ -/* File: #modulename#module.c - * This file is auto-generated with f2py (version:#f2py_version#). - * f2py is a Fortran to Python Interface Generator (FPIG), Second Edition, - * written by Pearu Peterson . - * See http://cens.ioc.ee/projects/f2py2e/ - * Generation date: """+time.asctime(time.localtime(time.time()))+""" - * $R"""+"""evision:$ - * $D"""+"""ate:$ - * Do not edit this file directly unless you know what you are doing!!! 
- */ -#ifdef __cplusplus -extern \"C\" { -#endif - -"""+gentitle("See f2py2e/cfuncs.py: includes")+""" -#includes# -#includes0# - -"""+gentitle("See f2py2e/rules.py: mod_rules['modulebody']")+""" -static PyObject *#modulename#_error; -static PyObject *#modulename#_module; - -"""+gentitle("See f2py2e/cfuncs.py: typedefs")+""" -#typedefs# - -"""+gentitle("See f2py2e/cfuncs.py: typedefs_generated")+""" -#typedefs_generated# - -"""+gentitle("See f2py2e/cfuncs.py: cppmacros")+""" -#cppmacros# - -"""+gentitle("See f2py2e/cfuncs.py: cfuncs")+""" -#cfuncs# - -"""+gentitle("See f2py2e/cfuncs.py: userincludes")+""" -#userincludes# - -"""+gentitle("See f2py2e/capi_rules.py: usercode")+""" -#usercode# - -/* See f2py2e/rules.py */ -#externroutines# - -"""+gentitle("See f2py2e/capi_rules.py: usercode1")+""" -#usercode1# - -"""+gentitle("See f2py2e/cb_rules.py: buildcallback")+""" -#callbacks# - -"""+gentitle("See f2py2e/rules.py: buildapi")+""" -#body# - -"""+gentitle("See f2py2e/f90mod_rules.py: buildhooks")+""" -#f90modhooks# - -"""+gentitle("See f2py2e/rules.py: module_rules['modulebody']")+""" - -"""+gentitle("See f2py2e/common_rules.py: buildhooks")+""" -#commonhooks# - -"""+gentitle("See f2py2e/rules.py")+""" - -static FortranDataDef f2py_routine_defs[] = { -#routine_defs# -\t{NULL} -}; - -static PyMethodDef f2py_module_methods[] = { -#pymethoddef# -\t{NULL,NULL} -}; - -#if PY_VERSION_HEX >= 0x03000000 -static struct PyModuleDef moduledef = { -\tPyModuleDef_HEAD_INIT, -\t"#modulename#", -\tNULL, -\t-1, -\tf2py_module_methods, -\tNULL, -\tNULL, -\tNULL, -\tNULL -}; -#endif - -#if PY_VERSION_HEX >= 0x03000000 -#define RETVAL m -PyObject *PyInit_#modulename#(void) { -#else -#define RETVAL -PyMODINIT_FUNC init#modulename#(void) { -#endif -\tint i; -\tPyObject *m,*d, *s; -#if PY_VERSION_HEX >= 0x03000000 -\tm = #modulename#_module = PyModule_Create(&moduledef); -#else -\tm = #modulename#_module = Py_InitModule(\"#modulename#\", f2py_module_methods); -#endif 
-\tPy_TYPE(&PyFortran_Type) = &PyType_Type; -\timport_array(); -\tif (PyErr_Occurred()) -\t\t{PyErr_SetString(PyExc_ImportError, \"can't initialize module #modulename# (failed to import numpy)\"); return RETVAL;} -\td = PyModule_GetDict(m); -\ts = PyString_FromString(\"$R"""+"""evision: $\"); -\tPyDict_SetItemString(d, \"__version__\", s); -#if PY_VERSION_HEX >= 0x03000000 -\ts = PyUnicode_FromString( -#else -\ts = PyString_FromString( -#endif -\t\t\"This module '#modulename#' is auto-generated with f2py (version:#f2py_version#).\\nFunctions:\\n\"\n#docs#\".\"); -\tPyDict_SetItemString(d, \"__doc__\", s); -\t#modulename#_error = PyErr_NewException (\"#modulename#.error\", NULL, NULL); -\tPy_DECREF(s); -\tfor(i=0;f2py_routine_defs[i].name!=NULL;i++) -\t\tPyDict_SetItemString(d, f2py_routine_defs[i].name,PyFortranObject_NewAsAttr(&f2py_routine_defs[i])); -#initf2pywraphooks# -#initf90modhooks# -#initcommonhooks# -#interface_usercode# - -#ifdef F2PY_REPORT_ATEXIT -\tif (! PyErr_Occurred()) -\t\ton_exit(f2py_report_on_exit,(void*)\"#modulename#\"); -#endif - -\treturn RETVAL; -} -#ifdef __cplusplus -} -#endif -""", - 'separatorsfor':{'latexdoc':'\n\n', - 'restdoc':'\n\n'}, - 'latexdoc':['\\section{Module \\texttt{#texmodulename#}}\n', - '#modnote#\n', - '#latexdoc#'], - 'restdoc':['Module #modulename#\n'+'='*80, - '\n#restdoc#'] - } - -defmod_rules=[ - {'body':'/*eof body*/', - 'method':'/*eof method*/', - 'externroutines':'/*eof externroutines*/', - 'routine_defs':'/*eof routine_defs*/', - 'initf90modhooks':'/*eof initf90modhooks*/', - 'initf2pywraphooks':'/*eof initf2pywraphooks*/', - 'initcommonhooks':'/*eof initcommonhooks*/', - 'latexdoc':'', - 'restdoc':'', - 'modnote':{hasnote:'#note#',l_not(hasnote):''}, - } - ] - -routine_rules={ - 'separatorsfor':sepdict, - 'body':""" -#begintitle# -static char doc_#apiname#[] = \"\\\nFunction signature:\\n\\\n\t#docreturn##name#(#docsignatureshort#)\\n\\\n#docstrsigns#\"; -/* #declfortranroutine# */ -static PyObject 
*#apiname#(const PyObject *capi_self, - PyObject *capi_args, - PyObject *capi_keywds, - #functype# (*f2py_func)(#callprotoargument#)) { -\tPyObject * volatile capi_buildvalue = NULL; -\tvolatile int f2py_success = 1; -#decl# -\tstatic char *capi_kwlist[] = {#kwlist##kwlistopt##kwlistxa#NULL}; -#usercode# -#routdebugenter# -#ifdef F2PY_REPORT_ATEXIT -f2py_start_clock(); -#endif -\tif (!PyArg_ParseTupleAndKeywords(capi_args,capi_keywds,\\ -\t\t\"#argformat##keyformat##xaformat#:#pyname#\",\\ -\t\tcapi_kwlist#args_capi##keys_capi##keys_xa#))\n\t\treturn NULL; -#frompyobj# -/*end of frompyobj*/ -#ifdef F2PY_REPORT_ATEXIT -f2py_start_call_clock(); -#endif -#callfortranroutine# -if (PyErr_Occurred()) - f2py_success = 0; -#ifdef F2PY_REPORT_ATEXIT -f2py_stop_call_clock(); -#endif -/*end of callfortranroutine*/ -\t\tif (f2py_success) { -#pyobjfrom# -/*end of pyobjfrom*/ -\t\tCFUNCSMESS(\"Building return value.\\n\"); -\t\tcapi_buildvalue = Py_BuildValue(\"#returnformat#\"#return#); -/*closepyobjfrom*/ -#closepyobjfrom# -\t\t} /*if (f2py_success) after callfortranroutine*/ -/*cleanupfrompyobj*/ -#cleanupfrompyobj# -\tif (capi_buildvalue == NULL) { -#routdebugfailure# -\t} else { -#routdebugleave# -\t} -\tCFUNCSMESS(\"Freeing memory.\\n\"); -#freemem# -#ifdef F2PY_REPORT_ATEXIT -f2py_stop_clock(); -#endif -\treturn capi_buildvalue; -} -#endtitle# -""", - 'routine_defs':'#routine_def#', - 'initf2pywraphooks':'#initf2pywraphook#', - 'externroutines':'#declfortranroutine#', - 'doc':'#docreturn##name#(#docsignature#)', - 'docshort':'#docreturn##name#(#docsignatureshort#)', - 'docs':'"\t#docreturn##name#(#docsignature#)\\n"\n', - 'need':['arrayobject.h','CFUNCSMESS','MINMAX'], - 'cppmacros':{debugcapi:'#define DEBUGCFUNCS'}, - 'latexdoc':['\\subsection{Wrapper function \\texttt{#texname#}}\n', - """ -\\noindent{{}\\verb@#docreturn##name#@{}}\\texttt{(#latexdocsignatureshort#)} -#routnote# - -#latexdocstrsigns# -"""], - 'restdoc':['Wrapped function ``#name#``\n'+'-'*80, - - ] - } 
- -################## Rules for C/API function ############## - -rout_rules=[ - { # Init - 'separatorsfor': {'callfortranroutine':'\n','routdebugenter':'\n','decl':'\n', - 'routdebugleave':'\n','routdebugfailure':'\n', - 'setjmpbuf':' || ', - 'docstrreq':'\n','docstropt':'\n','docstrout':'\n', - 'docstrcbs':'\n','docstrsigns':'\\n"\n"', - 'latexdocstrsigns':'\n', - 'latexdocstrreq':'\n','latexdocstropt':'\n', - 'latexdocstrout':'\n','latexdocstrcbs':'\n', - }, - 'kwlist':'','kwlistopt':'','callfortran':'','callfortranappend':'', - 'docsign':'','docsignopt':'','decl':'/*decl*/', - 'freemem':'/*freemem*/', - 'docsignshort':'','docsignoptshort':'', - 'docstrsigns':'','latexdocstrsigns':'', - 'docstrreq':'Required arguments:', - 'docstropt':'Optional arguments:', - 'docstrout':'Return objects:', - 'docstrcbs':'Call-back functions:', - 'latexdocstrreq':'\\noindent Required arguments:', - 'latexdocstropt':'\\noindent Optional arguments:', - 'latexdocstrout':'\\noindent Return objects:', - 'latexdocstrcbs':'\\noindent Call-back functions:', - 'args_capi':'','keys_capi':'','functype':'', - 'frompyobj':'/*frompyobj*/', - 'cleanupfrompyobj':['/*end of cleanupfrompyobj*/'], #this list will be reversed - 'pyobjfrom':'/*pyobjfrom*/', - 'closepyobjfrom':['/*end of closepyobjfrom*/'], #this list will be reversed - 'topyarr':'/*topyarr*/','routdebugleave':'/*routdebugleave*/', - 'routdebugenter':'/*routdebugenter*/', - 'routdebugfailure':'/*routdebugfailure*/', - 'callfortranroutine':'/*callfortranroutine*/', - 'argformat':'','keyformat':'','need_cfuncs':'', - 'docreturn':'','return':'','returnformat':'','rformat':'', - 'kwlistxa':'','keys_xa':'','xaformat':'','docsignxa':'','docsignxashort':'', - 'initf2pywraphook':'', - 'routnote':{hasnote:'--- #note#',l_not(hasnote):''}, - },{ - 'apiname':'f2py_rout_#modulename#_#name#', - 'pyname':'#modulename#.#name#', - 'decl':'', - '_check':l_not(ismoduleroutine) - },{ - 'apiname':'f2py_rout_#modulename#_#f90modulename#_#name#', - 
'pyname':'#modulename#.#f90modulename#.#name#', - 'decl':'', - '_check':ismoduleroutine - },{ # Subroutine - 'functype':'void', - 'declfortranroutine':{l_and(l_not(l_or(ismoduleroutine,isintent_c)),l_not(isdummyroutine)):'extern void #F_FUNC#(#fortranname#,#FORTRANNAME#)(#callprotoargument#);', - l_and(l_not(ismoduleroutine),isintent_c,l_not(isdummyroutine)):'extern void #fortranname#(#callprotoargument#);', - ismoduleroutine:'', - isdummyroutine:'' - }, - 'routine_def':{l_not(l_or(ismoduleroutine,isintent_c,isdummyroutine)):'\t{\"#name#\",-1,{{-1}},0,(char *)#F_FUNC#(#fortranname#,#FORTRANNAME#),(f2py_init_func)#apiname#,doc_#apiname#},', - l_and(l_not(ismoduleroutine),isintent_c,l_not(isdummyroutine)):'\t{\"#name#\",-1,{{-1}},0,(char *)#fortranname#,(f2py_init_func)#apiname#,doc_#apiname#},', - l_and(l_not(ismoduleroutine),isdummyroutine):'\t{\"#name#\",-1,{{-1}},0,NULL,(f2py_init_func)#apiname#,doc_#apiname#},', - }, - 'need':{l_and(l_not(l_or(ismoduleroutine,isintent_c)),l_not(isdummyroutine)):'F_FUNC'}, - 'callfortranroutine':[ - {debugcapi:["""\tfprintf(stderr,\"debug-capi:Fortran subroutine `#fortranname#(#callfortran#)\'\\n\");"""]}, - {hasexternals:"""\ -\t\tif (#setjmpbuf#) { -\t\t\tf2py_success = 0; -\t\t} else {"""}, - {isthreadsafe:'\t\t\tPy_BEGIN_ALLOW_THREADS'}, - {hascallstatement:'''\t\t\t\t#callstatement#; -\t\t\t\t/*(*f2py_func)(#callfortran#);*/'''}, - {l_not(l_or(hascallstatement,isdummyroutine)):'\t\t\t\t(*f2py_func)(#callfortran#);'}, - {isthreadsafe:'\t\t\tPy_END_ALLOW_THREADS'}, - {hasexternals:"""\t\t}"""} - ], - '_check':issubroutine, - },{ # Wrapped function - 'functype':'void', - 'declfortranroutine':{l_not(l_or(ismoduleroutine,isdummyroutine)):'extern void #F_WRAPPEDFUNC#(#name_lower#,#NAME#)(#callprotoargument#);', - isdummyroutine:'', - }, - - 'routine_def':{l_not(l_or(ismoduleroutine,isdummyroutine)):'\t{\"#name#\",-1,{{-1}},0,(char *)#F_WRAPPEDFUNC#(#name_lower#,#NAME#),(f2py_init_func)#apiname#,doc_#apiname#},', - 
isdummyroutine:'\t{\"#name#\",-1,{{-1}},0,NULL,(f2py_init_func)#apiname#,doc_#apiname#},', - }, - 'initf2pywraphook':{l_not(l_or(ismoduleroutine,isdummyroutine)):''' - { - extern #ctype# #F_FUNC#(#name_lower#,#NAME#)(void); - PyObject* o = PyDict_GetItemString(d,"#name#"); - PyObject_SetAttrString(o,"_cpointer", F2PyCapsule_FromVoidPtr((void*)#F_FUNC#(#name_lower#,#NAME#),NULL)); -#if PY_VERSION_HEX >= 0x03000000 - PyObject_SetAttrString(o,"__name__", PyUnicode_FromString("#name#")); -#else - PyObject_SetAttrString(o,"__name__", PyString_FromString("#name#")); -#endif - } - '''}, - 'need':{l_not(l_or(ismoduleroutine,isdummyroutine)):['F_WRAPPEDFUNC','F_FUNC']}, - 'callfortranroutine':[ - {debugcapi:["""\tfprintf(stderr,\"debug-capi:Fortran subroutine `f2pywrap#name_lower#(#callfortran#)\'\\n\");"""]}, - {hasexternals:"""\ -\tif (#setjmpbuf#) { -\t\tf2py_success = 0; -\t} else {"""}, - {isthreadsafe:'\tPy_BEGIN_ALLOW_THREADS'}, - {l_not(l_or(hascallstatement,isdummyroutine)):'\t(*f2py_func)(#callfortran#);'}, - {hascallstatement:'\t#callstatement#;\n\t/*(*f2py_func)(#callfortran#);*/'}, - {isthreadsafe:'\tPy_END_ALLOW_THREADS'}, - {hasexternals:'\t}'} - ], - '_check':isfunction_wrap, - },{ # Function - 'functype':'#ctype#', - 'docreturn':{l_not(isintent_hide):'#rname#,'}, - 'docstrout':'\t#pydocsignout#', - 'latexdocstrout':['\\item[]{{}\\verb@#pydocsignout#@{}}', - {hasresultnote:'--- #resultnote#'}], - 'callfortranroutine':[{l_and(debugcapi,isstringfunction):"""\ -#ifdef USESCOMPAQFORTRAN -\tfprintf(stderr,\"debug-capi:Fortran function #ctype# #fortranname#(#callcompaqfortran#)\\n\"); -#else -\tfprintf(stderr,\"debug-capi:Fortran function #ctype# #fortranname#(#callfortran#)\\n\"); -#endif -"""}, - {l_and(debugcapi,l_not(isstringfunction)):"""\ -\tfprintf(stderr,\"debug-capi:Fortran function #ctype# #fortranname#(#callfortran#)\\n\"); -"""} - ], - '_check':l_and(isfunction,l_not(isfunction_wrap)) - },{ # Scalar function - 
'declfortranroutine':{l_and(l_not(l_or(ismoduleroutine,isintent_c)),l_not(isdummyroutine)):'extern #ctype# #F_FUNC#(#fortranname#,#FORTRANNAME#)(#callprotoargument#);', - l_and(l_not(ismoduleroutine),isintent_c,l_not(isdummyroutine)):'extern #ctype# #fortranname#(#callprotoargument#);', - isdummyroutine:'' - }, - 'routine_def':{l_and(l_not(l_or(ismoduleroutine,isintent_c)),l_not(isdummyroutine)):'\t{\"#name#\",-1,{{-1}},0,(char *)#F_FUNC#(#fortranname#,#FORTRANNAME#),(f2py_init_func)#apiname#,doc_#apiname#},', - l_and(l_not(ismoduleroutine),isintent_c,l_not(isdummyroutine)):'\t{\"#name#\",-1,{{-1}},0,(char *)#fortranname#,(f2py_init_func)#apiname#,doc_#apiname#},', - isdummyroutine:'\t{\"#name#\",-1,{{-1}},0,NULL,(f2py_init_func)#apiname#,doc_#apiname#},', - }, - 'decl':[{iscomplexfunction_warn:'\t#ctype# #name#_return_value={0,0};', - l_not(iscomplexfunction):'\t#ctype# #name#_return_value=0;'}, - {iscomplexfunction:'\tPyObject *#name#_return_value_capi = Py_None;'} - ], - 'callfortranroutine':[ - {hasexternals:"""\ -\tif (#setjmpbuf#) { -\t\tf2py_success = 0; -\t} else {"""}, - {isthreadsafe:'\tPy_BEGIN_ALLOW_THREADS'}, - {hascallstatement:'''\t#callstatement#; -/*\t#name#_return_value = (*f2py_func)(#callfortran#);*/ -'''}, - {l_not(l_or(hascallstatement,isdummyroutine)):'\t#name#_return_value = (*f2py_func)(#callfortran#);'}, - {isthreadsafe:'\tPy_END_ALLOW_THREADS'}, - {hasexternals:'\t}'}, - {l_and(debugcapi,iscomplexfunction):'\tfprintf(stderr,"#routdebugshowvalue#\\n",#name#_return_value.r,#name#_return_value.i);'}, - {l_and(debugcapi,l_not(iscomplexfunction)):'\tfprintf(stderr,"#routdebugshowvalue#\\n",#name#_return_value);'}], - 'pyobjfrom':{iscomplexfunction:'\t#name#_return_value_capi = pyobj_from_#ctype#1(#name#_return_value);'}, - 'need':[{l_not(isdummyroutine):'F_FUNC'}, - {iscomplexfunction:'pyobj_from_#ctype#1'}, - {islong_longfunction:'long_long'}, - {islong_doublefunction:'long_double'}], - 'returnformat':{l_not(isintent_hide):'#rformat#'}, - 
'return':{iscomplexfunction:',#name#_return_value_capi', - l_not(l_or(iscomplexfunction,isintent_hide)):',#name#_return_value'}, - '_check':l_and(isfunction,l_not(isstringfunction),l_not(isfunction_wrap)) - },{ # String function # in use for --no-wrap - 'declfortranroutine':'extern void #F_FUNC#(#fortranname#,#FORTRANNAME#)(#callprotoargument#);', - 'routine_def':{l_not(l_or(ismoduleroutine,isintent_c)): -# '\t{\"#name#\",-1,{{-1}},0,(char *)F_FUNC(#fortranname#,#FORTRANNAME#),(void *)#apiname#,doc_#apiname#},', - '\t{\"#name#\",-1,{{-1}},0,(char *)#F_FUNC#(#fortranname#,#FORTRANNAME#),(f2py_init_func)#apiname#,doc_#apiname#},', - l_and(l_not(ismoduleroutine),isintent_c): -# '\t{\"#name#\",-1,{{-1}},0,(char *)#fortranname#,(void *)#apiname#,doc_#apiname#},' - '\t{\"#name#\",-1,{{-1}},0,(char *)#fortranname#,(f2py_init_func)#apiname#,doc_#apiname#},' - }, - 'decl':['\t#ctype# #name#_return_value = NULL;', - '\tint #name#_return_value_len = 0;'], - 'callfortran':'#name#_return_value,#name#_return_value_len,', - 'callfortranroutine':['\t#name#_return_value_len = #rlength#;', - '\tif ((#name#_return_value = (string)malloc(sizeof(char)*(#name#_return_value_len+1))) == NULL) {', - '\t\tPyErr_SetString(PyExc_MemoryError, \"out of memory\");', - '\t\tf2py_success = 0;', - '\t} else {', - "\t\t(#name#_return_value)[#name#_return_value_len] = '\\0';", - '\t}', - '\tif (f2py_success) {', - {hasexternals:"""\ -\t\tif (#setjmpbuf#) { -\t\t\tf2py_success = 0; -\t\t} else {"""}, - {isthreadsafe:'\t\tPy_BEGIN_ALLOW_THREADS'}, - """\ -#ifdef USESCOMPAQFORTRAN -\t\t(*f2py_func)(#callcompaqfortran#); -#else -\t\t(*f2py_func)(#callfortran#); -#endif -""", - {isthreadsafe:'\t\tPy_END_ALLOW_THREADS'}, - {hasexternals:'\t\t}'}, - {debugcapi:'\t\tfprintf(stderr,"#routdebugshowvalue#\\n",#name#_return_value_len,#name#_return_value);'}, - '\t} /* if (f2py_success) after (string)malloc */', - ], - 'returnformat':'#rformat#', - 'return':',#name#_return_value', - 
'freemem':'\tSTRINGFREE(#name#_return_value);', - 'need':['F_FUNC','#ctype#','STRINGFREE'], - '_check':l_and(isstringfunction,l_not(isfunction_wrap)) # ???obsolete - }, - { # Debugging - 'routdebugenter':'\tfprintf(stderr,"debug-capi:Python C/API function #modulename#.#name#(#docsignature#)\\n");', - 'routdebugleave':'\tfprintf(stderr,"debug-capi:Python C/API function #modulename#.#name#: successful.\\n");', - 'routdebugfailure':'\tfprintf(stderr,"debug-capi:Python C/API function #modulename#.#name#: failure.\\n");', - '_check':debugcapi - } - ] - -################ Rules for arguments ################## - -typedef_need_dict = {islong_long:'long_long', - islong_double:'long_double', - islong_complex:'complex_long_double', - isunsigned_char:'unsigned_char', - isunsigned_short:'unsigned_short', - isunsigned:'unsigned', - isunsigned_long_long:'unsigned_long_long', - isunsigned_chararray:'unsigned_char', - isunsigned_shortarray:'unsigned_short', - isunsigned_long_longarray:'unsigned_long_long', - issigned_long_longarray:'long_long', - } - -aux_rules=[ - { - 'separatorsfor':sepdict - }, - { # Common - 'frompyobj':['\t/* Processing auxiliary variable #varname# */', - {debugcapi:'\tfprintf(stderr,"#vardebuginfo#\\n");'},], - 'cleanupfrompyobj':'\t/* End of cleaning variable #varname# */', - 'need':typedef_need_dict, - }, -# Scalars (not complex) - { # Common - 'decl':'\t#ctype# #varname# = 0;', - 'need':{hasinitvalue:'math.h'}, - 'frompyobj':{hasinitvalue:'\t#varname# = #init#;'}, - '_check':l_and(isscalar,l_not(iscomplex)), - }, - { - 'return':',#varname#', - 'docstrout':'\t#pydocsignout#', - 'docreturn':'#outvarname#,', - 'returnformat':'#varrformat#', - '_check':l_and(isscalar,l_not(iscomplex),isintent_out), - }, -# Complex scalars - { # Common - 'decl':'\t#ctype# #varname#;', - 'frompyobj': {hasinitvalue:'\t#varname#.r = #init.r#, #varname#.i = #init.i#;'}, - '_check':iscomplex - }, -# String - { # Common - 'decl':['\t#ctype# #varname# = NULL;', - '\tint 
slen(#varname#);', - ], - 'need':['len..'], - '_check':isstring - }, -# Array - { # Common - 'decl':['\t#ctype# *#varname# = NULL;', - '\tnpy_intp #varname#_Dims[#rank#] = {#rank*[-1]#};', - '\tconst int #varname#_Rank = #rank#;', - ], - 'need':['len..',{hasinitvalue:'forcomb'},{hasinitvalue:'CFUNCSMESS'}], - '_check':isarray - }, -# Scalararray - { # Common - '_check':l_and(isarray,l_not(iscomplexarray)) - },{ # Not hidden - '_check':l_and(isarray,l_not(iscomplexarray),isintent_nothide) - }, -# Integer*1 array - {'need':'#ctype#', - '_check':isint1array, - '_depend':'' - }, -# Integer*-1 array - {'need':'#ctype#', - '_check':isunsigned_chararray, - '_depend':'' - }, -# Integer*-2 array - {'need':'#ctype#', - '_check':isunsigned_shortarray, - '_depend':'' - }, -# Integer*-8 array - {'need':'#ctype#', - '_check':isunsigned_long_longarray, - '_depend':'' - }, -# Complexarray - {'need':'#ctype#', - '_check':iscomplexarray, - '_depend':'' - }, -# Stringarray - { - 'callfortranappend':{isarrayofstrings:'flen(#varname#),'}, - 'need':'string', - '_check':isstringarray - } - ] - -arg_rules=[ - { - 'separatorsfor':sepdict - }, - { # Common - 'frompyobj':['\t/* Processing variable #varname# */', - {debugcapi:'\tfprintf(stderr,"#vardebuginfo#\\n");'},], - 'cleanupfrompyobj':'\t/* End of cleaning variable #varname# */', - '_depend':'', - 'need':typedef_need_dict, - }, -# Doc signatures - { - 'docstropt':{l_and(isoptional,isintent_nothide):'\t#pydocsign#'}, - 'docstrreq':{l_and(isrequired,isintent_nothide):'\t#pydocsign#'}, - 'docstrout':{isintent_out:'\t#pydocsignout#'}, - 'latexdocstropt':{l_and(isoptional,isintent_nothide):['\\item[]{{}\\verb@#pydocsign#@{}}', - {hasnote:'--- #note#'}]}, - 'latexdocstrreq':{l_and(isrequired,isintent_nothide):['\\item[]{{}\\verb@#pydocsign#@{}}', - {hasnote:'--- #note#'}]}, - 'latexdocstrout':{isintent_out:['\\item[]{{}\\verb@#pydocsignout#@{}}', - {l_and(hasnote,isintent_hide):'--- #note#', - l_and(hasnote,isintent_nothide):'--- See 
above.'}]}, - 'depend':'' - }, -# Required/Optional arguments - { - 'kwlist':'"#varname#",', - 'docsign':'#varname#,', - '_check':l_and(isintent_nothide,l_not(isoptional)) - }, - { - 'kwlistopt':'"#varname#",', - 'docsignopt':'#varname#=#showinit#,', - 'docsignoptshort':'#varname#,', - '_check':l_and(isintent_nothide,isoptional) - }, -# Docstring/BuildValue - { - 'docreturn':'#outvarname#,', - 'returnformat':'#varrformat#', - '_check':isintent_out - }, -# Externals (call-back functions) - { # Common - 'docsignxa':{isintent_nothide:'#varname#_extra_args=(),'}, - 'docsignxashort':{isintent_nothide:'#varname#_extra_args,'}, - 'docstropt':{isintent_nothide:'\t#varname#_extra_args := () input tuple'}, - 'docstrcbs':'#cbdocstr#', - 'latexdocstrcbs':'\\item[] #cblatexdocstr#', - 'latexdocstropt':{isintent_nothide:'\\item[]{{}\\verb@#varname#_extra_args := () input tuple@{}} --- Extra arguments for call-back function {{}\\verb@#varname#@{}}.'}, - 'decl':['\tPyObject *#varname#_capi = Py_None;', - '\tPyTupleObject *#varname#_xa_capi = NULL;', - '\tPyTupleObject *#varname#_args_capi = NULL;', - '\tint #varname#_nofargs_capi = 0;', - {l_not(isintent_callback):'\t#cbname#_typedef #varname#_cptr;'} - ], - 'kwlistxa':{isintent_nothide:'"#varname#_extra_args",'}, - 'argformat':{isrequired:'O'}, - 'keyformat':{isoptional:'O'}, - 'xaformat':{isintent_nothide:'O!'}, - 'args_capi':{isrequired:',&#varname#_capi'}, - 'keys_capi':{isoptional:',&#varname#_capi'}, - 'keys_xa':',&PyTuple_Type,&#varname#_xa_capi', - 'setjmpbuf':'(setjmp(#cbname#_jmpbuf))', - 'callfortran':{l_not(isintent_callback):'#varname#_cptr,'}, - 'need':['#cbname#','setjmp.h'], - '_check':isexternal - }, - { - 'frompyobj':[{l_not(isintent_callback):"""\ -if(F2PyCapsule_Check(#varname#_capi)) { - #varname#_cptr = F2PyCapsule_AsVoidPtr(#varname#_capi); -} else { - #varname#_cptr = #cbname#; -} -"""},{isintent_callback:"""\ -if (#varname#_capi==Py_None) { - #varname#_capi = 
PyObject_GetAttrString(#modulename#_module,\"#varname#\"); - if (#varname#_capi) { - if (#varname#_xa_capi==NULL) { - if (PyObject_HasAttrString(#modulename#_module,\"#varname#_extra_args\")) { - PyObject* capi_tmp = PyObject_GetAttrString(#modulename#_module,\"#varname#_extra_args\"); - if (capi_tmp) - #varname#_xa_capi = (PyTupleObject *)PySequence_Tuple(capi_tmp); - else - #varname#_xa_capi = (PyTupleObject *)Py_BuildValue(\"()\"); - if (#varname#_xa_capi==NULL) { - PyErr_SetString(#modulename#_error,\"Failed to convert #modulename#.#varname#_extra_args to tuple.\\n\"); - return NULL; - } - } - } - } - if (#varname#_capi==NULL) { - PyErr_SetString(#modulename#_error,\"Callback #varname# not defined (as an argument or module #modulename# attribute).\\n\"); - return NULL; - } -} -"""}, -## {l_not(isintent_callback):"""\ -## if (#varname#_capi==Py_None) { -## printf(\"hoi\\n\"); -## } -## """}, -"""\ -\t#varname#_nofargs_capi = #cbname#_nofargs; -\tif (create_cb_arglist(#varname#_capi,#varname#_xa_capi,#maxnofargs#,#nofoptargs#,&#cbname#_nofargs,&#varname#_args_capi,\"failed in processing argument list for call-back #varname#.\")) { -\t\tjmp_buf #varname#_jmpbuf;""", -{debugcapi:["""\ -\t\tfprintf(stderr,\"debug-capi:Assuming %d arguments; at most #maxnofargs#(-#nofoptargs#) is expected.\\n\",#cbname#_nofargs); -\t\tCFUNCSMESSPY(\"for #varname#=\",#cbname#_capi);""", -{l_not(isintent_callback):"""\t\tfprintf(stderr,\"#vardebugshowvalue# (call-back in C).\\n\",#cbname#);"""}]}, - """\ -\t\tCFUNCSMESS(\"Saving jmpbuf for `#varname#`.\\n\"); -\t\tSWAP(#varname#_capi,#cbname#_capi,PyObject); -\t\tSWAP(#varname#_args_capi,#cbname#_args_capi,PyTupleObject); -\t\tmemcpy(&#varname#_jmpbuf,&#cbname#_jmpbuf,sizeof(jmp_buf));""", - ], -'cleanupfrompyobj': -"""\ -\t\tCFUNCSMESS(\"Restoring jmpbuf for `#varname#`.\\n\"); -\t\t#cbname#_capi = #varname#_capi; -\t\tPy_DECREF(#cbname#_args_capi); -\t\t#cbname#_args_capi = #varname#_args_capi; -\t\t#cbname#_nofargs = 
#varname#_nofargs_capi; -\t\tmemcpy(&#cbname#_jmpbuf,&#varname#_jmpbuf,sizeof(jmp_buf)); -\t}""", - 'need':['SWAP','create_cb_arglist'], - '_check':isexternal, - '_depend':'' - }, -# Scalars (not complex) - { # Common - 'decl':'\t#ctype# #varname# = 0;', - 'pyobjfrom':{debugcapi:'\tfprintf(stderr,"#vardebugshowvalue#\\n",#varname#);'}, - 'callfortran':{isintent_c:'#varname#,',l_not(isintent_c):'&#varname#,'}, - 'return':{isintent_out:',#varname#'}, - '_check':l_and(isscalar,l_not(iscomplex)) - },{ - 'need':{hasinitvalue:'math.h'}, - '_check':l_and(isscalar,l_not(iscomplex)), - #'_depend':'' - },{ # Not hidden - 'decl':'\tPyObject *#varname#_capi = Py_None;', - 'argformat':{isrequired:'O'}, - 'keyformat':{isoptional:'O'}, - 'args_capi':{isrequired:',&#varname#_capi'}, - 'keys_capi':{isoptional:',&#varname#_capi'}, - 'pyobjfrom':{isintent_inout:"""\ -\tf2py_success = try_pyarr_from_#ctype#(#varname#_capi,&#varname#); -\tif (f2py_success) {"""}, - 'closepyobjfrom':{isintent_inout:"\t} /*if (f2py_success) of #varname# pyobjfrom*/"}, - 'need':{isintent_inout:'try_pyarr_from_#ctype#'}, - '_check':l_and(isscalar,l_not(iscomplex),isintent_nothide) - },{ - 'frompyobj':[ -# hasinitvalue... -# if pyobj is None: -# varname = init -# else -# from_pyobj(varname) -# -# isoptional and noinitvalue... -# if pyobj is not None: -# from_pyobj(varname) -# else: -# varname is uninitialized -# -# ... 
-# from_pyobj(varname) -# - {hasinitvalue:'\tif (#varname#_capi == Py_None) #varname# = #init#; else', - '_depend':''}, - {l_and(isoptional,l_not(hasinitvalue)):'\tif (#varname#_capi != Py_None)', - '_depend':''}, - {l_not(islogical):'''\ -\t\tf2py_success = #ctype#_from_pyobj(&#varname#,#varname#_capi,"#pyname#() #nth# (#varname#) can\'t be converted to #ctype#"); -\tif (f2py_success) {'''}, - {islogical:'''\ -\t\t#varname# = (#ctype#)PyObject_IsTrue(#varname#_capi); -\t\tf2py_success = 1; -\tif (f2py_success) {'''}, - ], - 'cleanupfrompyobj':'\t} /*if (f2py_success) of #varname#*/', - 'need':{l_not(islogical):'#ctype#_from_pyobj'}, - '_check':l_and(isscalar,l_not(iscomplex),isintent_nothide), - '_depend':'' -# },{ # Hidden -# '_check':l_and(isscalar,l_not(iscomplex),isintent_hide) - },{ # Hidden - 'frompyobj':{hasinitvalue:'\t#varname# = #init#;'}, - 'need':typedef_need_dict, - '_check':l_and(isscalar,l_not(iscomplex),isintent_hide), - '_depend':'' - },{ # Common - 'frompyobj':{debugcapi:'\tfprintf(stderr,"#vardebugshowvalue#\\n",#varname#);'}, - '_check':l_and(isscalar,l_not(iscomplex)), - '_depend':'' - }, -# Complex scalars - { # Common - 'decl':'\t#ctype# #varname#;', - 'callfortran':{isintent_c:'#varname#,',l_not(isintent_c):'&#varname#,'}, - 'pyobjfrom':{debugcapi:'\tfprintf(stderr,"#vardebugshowvalue#\\n",#varname#.r,#varname#.i);'}, - 'return':{isintent_out:',#varname#_capi'}, - '_check':iscomplex - },{ # Not hidden - 'decl':'\tPyObject *#varname#_capi = Py_None;', - 'argformat':{isrequired:'O'}, - 'keyformat':{isoptional:'O'}, - 'args_capi':{isrequired:',&#varname#_capi'}, - 'keys_capi':{isoptional:',&#varname#_capi'}, - 'need':{isintent_inout:'try_pyarr_from_#ctype#'}, - 'pyobjfrom':{isintent_inout:"""\ -\t\tf2py_success = try_pyarr_from_#ctype#(#varname#_capi,&#varname#); -\t\tif (f2py_success) {"""}, - 'closepyobjfrom':{isintent_inout:"\t\t} /*if (f2py_success) of #varname# pyobjfrom*/"}, - '_check':l_and(iscomplex,isintent_nothide) - },{ - 
'frompyobj':[{hasinitvalue:'\tif (#varname#_capi==Py_None) {#varname#.r = #init.r#, #varname#.i = #init.i#;} else'}, - {l_and(isoptional,l_not(hasinitvalue)):'\tif (#varname#_capi != Py_None)'}, -# '\t\tf2py_success = #ctype#_from_pyobj(&#varname#,#varname#_capi,"#ctype#_from_pyobj failed in converting #nth# `#varname#\' of #pyname# to C #ctype#\\n");' - '\t\tf2py_success = #ctype#_from_pyobj(&#varname#,#varname#_capi,"#pyname#() #nth# (#varname#) can\'t be converted to #ctype#");' - '\n\tif (f2py_success) {'], - 'cleanupfrompyobj':'\t} /*if (f2py_success) of #varname# frompyobj*/', - 'need':['#ctype#_from_pyobj'], - '_check':l_and(iscomplex,isintent_nothide), - '_depend':'' - },{ # Hidden - 'decl':{isintent_out:'\tPyObject *#varname#_capi = Py_None;'}, - '_check':l_and(iscomplex,isintent_hide) - },{ - 'frompyobj': {hasinitvalue:'\t#varname#.r = #init.r#, #varname#.i = #init.i#;'}, - '_check':l_and(iscomplex,isintent_hide), - '_depend':'' - },{ # Common - 'pyobjfrom':{isintent_out:'\t#varname#_capi = pyobj_from_#ctype#1(#varname#);'}, - 'need':['pyobj_from_#ctype#1'], - '_check':iscomplex - },{ - 'frompyobj':{debugcapi:'\tfprintf(stderr,"#vardebugshowvalue#\\n",#varname#.r,#varname#.i);'}, - '_check':iscomplex, - '_depend':'' - }, -# String - { # Common - 'decl':['\t#ctype# #varname# = NULL;', - '\tint slen(#varname#);', - '\tPyObject *#varname#_capi = Py_None;'], - 'callfortran':'#varname#,', - 'callfortranappend':'slen(#varname#),', - 'pyobjfrom':{debugcapi:'\tfprintf(stderr,"#vardebugshowvalue#\\n",slen(#varname#),#varname#);'}, -# 'freemem':'\tSTRINGFREE(#varname#);', - 'return':{isintent_out:',#varname#'}, - 'need':['len..'],#'STRINGFREE'], - '_check':isstring - },{ # Common - 'frompyobj':"""\ -\tslen(#varname#) = #length#; -\tf2py_success = #ctype#_from_pyobj(&#varname#,&slen(#varname#),#init#,#varname#_capi,\"#ctype#_from_pyobj failed in converting #nth# `#varname#\' of #pyname# to C #ctype#\"); -\tif (f2py_success) {""", - 'cleanupfrompyobj':"""\ 
-\t\tSTRINGFREE(#varname#); -\t} /*if (f2py_success) of #varname#*/""", - 'need':['#ctype#_from_pyobj','len..','STRINGFREE'], - '_check':isstring, - '_depend':'' - },{ # Not hidden - 'argformat':{isrequired:'O'}, - 'keyformat':{isoptional:'O'}, - 'args_capi':{isrequired:',&#varname#_capi'}, - 'keys_capi':{isoptional:',&#varname#_capi'}, - 'pyobjfrom':{isintent_inout:'''\ -\tf2py_success = try_pyarr_from_#ctype#(#varname#_capi,#varname#); -\tif (f2py_success) {'''}, - 'closepyobjfrom':{isintent_inout:'\t} /*if (f2py_success) of #varname# pyobjfrom*/'}, - 'need':{isintent_inout:'try_pyarr_from_#ctype#'}, - '_check':l_and(isstring,isintent_nothide) - },{ # Hidden - '_check':l_and(isstring,isintent_hide) - },{ - 'frompyobj':{debugcapi:'\tfprintf(stderr,"#vardebugshowvalue#\\n",slen(#varname#),#varname#);'}, - '_check':isstring, - '_depend':'' - }, -# Array - { # Common - 'decl':['\t#ctype# *#varname# = NULL;', - '\tnpy_intp #varname#_Dims[#rank#] = {#rank*[-1]#};', - '\tconst int #varname#_Rank = #rank#;', - '\tPyArrayObject *capi_#varname#_tmp = NULL;', - '\tint capi_#varname#_intent = 0;', - ], - 'callfortran':'#varname#,', - 'return':{isintent_out:',capi_#varname#_tmp'}, - 'need':'len..', - '_check':isarray - },{ # intent(overwrite) array - 'decl':'\tint capi_overwrite_#varname# = 1;', - 'kwlistxa':'"overwrite_#varname#",', - 'xaformat':'i', - 'keys_xa':',&capi_overwrite_#varname#', - 'docsignxa':'overwrite_#varname#=1,', - 'docsignxashort':'overwrite_#varname#,', - 'docstropt':'\toverwrite_#varname# := 1 input int', - '_check':l_and(isarray,isintent_overwrite), - },{ - 'frompyobj':'\tcapi_#varname#_intent |= (capi_overwrite_#varname#?0:F2PY_INTENT_COPY);', - '_check':l_and(isarray,isintent_overwrite), - '_depend':'', - }, - { # intent(copy) array - 'decl':'\tint capi_overwrite_#varname# = 0;', - 'kwlistxa':'"overwrite_#varname#",', - 'xaformat':'i', - 'keys_xa':',&capi_overwrite_#varname#', - 'docsignxa':'overwrite_#varname#=0,', - 
'docsignxashort':'overwrite_#varname#,', - 'docstropt':'\toverwrite_#varname# := 0 input int', - '_check':l_and(isarray,isintent_copy), - },{ - 'frompyobj':'\tcapi_#varname#_intent |= (capi_overwrite_#varname#?0:F2PY_INTENT_COPY);', - '_check':l_and(isarray,isintent_copy), - '_depend':'', - },{ - 'need':[{hasinitvalue:'forcomb'},{hasinitvalue:'CFUNCSMESS'}], - '_check':isarray, - '_depend':'' - },{ # Not hidden - 'decl':'\tPyObject *#varname#_capi = Py_None;', - 'argformat':{isrequired:'O'}, - 'keyformat':{isoptional:'O'}, - 'args_capi':{isrequired:',&#varname#_capi'}, - 'keys_capi':{isoptional:',&#varname#_capi'}, -# 'pyobjfrom':{isintent_inout:"""\ -# /* Partly because of the following hack, intent(inout) is depreciated, -# Use intent(in,out) instead. - -# \tif ((#varname#_capi != Py_None) && PyArray_Check(#varname#_capi) \\ -# \t\t&& (#varname#_capi != (PyObject *)capi_#varname#_tmp)) { -# \t\tif (((PyArrayObject *)#varname#_capi)->nd != capi_#varname#_tmp->nd) { -# \t\t\tif (#varname#_capi != capi_#varname#_tmp->base) -# \t\t\t\tcopy_ND_array((PyArrayObject *)capi_#varname#_tmp->base,(PyArrayObject *)#varname#_capi); -# \t\t} else -# \t\t\tcopy_ND_array(capi_#varname#_tmp,(PyArrayObject *)#varname#_capi); -# \t} -# */ -# """}, -# 'need':{isintent_inout:'copy_ND_array'}, - '_check':l_and(isarray,isintent_nothide) - },{ - 'frompyobj':['\t#setdims#;', - '\tcapi_#varname#_intent |= #intent#;', - {isintent_hide:'\tcapi_#varname#_tmp = array_from_pyobj(#atype#,#varname#_Dims,#varname#_Rank,capi_#varname#_intent,Py_None);'}, - {isintent_nothide:'\tcapi_#varname#_tmp = array_from_pyobj(#atype#,#varname#_Dims,#varname#_Rank,capi_#varname#_intent,#varname#_capi);'}, - """\ -\tif (capi_#varname#_tmp == NULL) { -\t\tif (!PyErr_Occurred()) -\t\t\tPyErr_SetString(#modulename#_error,\"failed in converting #nth# `#varname#\' of #pyname# to C/Fortran array\" ); -\t} else { -\t\t#varname# = (#ctype# *)(capi_#varname#_tmp->data); -""", -{hasinitvalue:[ - {isintent_nothide:'\tif 
(#varname#_capi == Py_None) {'}, - {isintent_hide:'\t{'}, - {iscomplexarray:'\t\t#ctype# capi_c;'}, - """\ -\t\tint *_i,capi_i=0; -\t\tCFUNCSMESS(\"#name#: Initializing #varname#=#init#\\n\"); -\t\tif (initforcomb(capi_#varname#_tmp->dimensions,capi_#varname#_tmp->nd,1)) { -\t\t\twhile ((_i = nextforcomb())) -\t\t\t\t#varname#[capi_i++] = #init#; /* fortran way */ -\t\t} else { -\t\t\tif (!PyErr_Occurred()) -\t\t\t\tPyErr_SetString(#modulename#_error,\"Initialization of #nth# #varname# failed (initforcomb).\"); -\t\t\tf2py_success = 0; -\t\t} -\t} -\tif (f2py_success) {"""]}, - ], - 'cleanupfrompyobj':[ # note that this list will be reversed - '\t} /*if (capi_#varname#_tmp == NULL) ... else of #varname#*/', - {l_not(l_or(isintent_out,isintent_hide)):"""\ -\tif((PyObject *)capi_#varname#_tmp!=#varname#_capi) { -\t\tPy_XDECREF(capi_#varname#_tmp); }"""}, - {l_and(isintent_hide,l_not(isintent_out)):"""\t\tPy_XDECREF(capi_#varname#_tmp);"""}, - {hasinitvalue:'\t} /*if (f2py_success) of #varname# init*/'}, - ], - '_check':isarray, - '_depend':'' - }, -# { # Hidden -# 'freemem':{l_not(isintent_out):'\tPy_XDECREF(capi_#varname#_tmp);'}, -# '_check':l_and(isarray,isintent_hide) -# }, -# Scalararray - { # Common - '_check':l_and(isarray,l_not(iscomplexarray)) - },{ # Not hidden - '_check':l_and(isarray,l_not(iscomplexarray),isintent_nothide) - }, -# Integer*1 array - {'need':'#ctype#', - '_check':isint1array, - '_depend':'' - }, -# Integer*-1 array - {'need':'#ctype#', - '_check':isunsigned_chararray, - '_depend':'' - }, -# Integer*-2 array - {'need':'#ctype#', - '_check':isunsigned_shortarray, - '_depend':'' - }, -# Integer*-8 array - {'need':'#ctype#', - '_check':isunsigned_long_longarray, - '_depend':'' - }, -# Complexarray - {'need':'#ctype#', - '_check':iscomplexarray, - '_depend':'' - }, -# Stringarray - { - 'callfortranappend':{isarrayofstrings:'flen(#varname#),'}, - 'need':'string', - '_check':isstringarray - } - ] - -################# Rules for checking 
############### - -check_rules=[ - { - 'frompyobj':{debugcapi:'\tfprintf(stderr,\"debug-capi:Checking `#check#\'\\n\");'}, - 'need':'len..' - },{ - 'frompyobj':'\tCHECKSCALAR(#check#,\"#check#\",\"#nth# #varname#\",\"#varshowvalue#\",#varname#) {', - 'cleanupfrompyobj':'\t} /*CHECKSCALAR(#check#)*/', - 'need':'CHECKSCALAR', - '_check':l_and(isscalar,l_not(iscomplex)), - '_break':'' - },{ - 'frompyobj':'\tCHECKSTRING(#check#,\"#check#\",\"#nth# #varname#\",\"#varshowvalue#\",#varname#) {', - 'cleanupfrompyobj':'\t} /*CHECKSTRING(#check#)*/', - 'need':'CHECKSTRING', - '_check':isstring, - '_break':'' - },{ - 'need':'CHECKARRAY', - 'frompyobj':'\tCHECKARRAY(#check#,\"#check#\",\"#nth# #varname#\") {', - 'cleanupfrompyobj':'\t} /*CHECKARRAY(#check#)*/', - '_check':isarray, - '_break':'' - },{ - 'need':'CHECKGENERIC', - 'frompyobj':'\tCHECKGENERIC(#check#,\"#check#\",\"#nth# #varname#\") {', - 'cleanupfrompyobj':'\t} /*CHECKGENERIC(#check#)*/', - } -] - -########## Applying the rules. No need to modify what follows ############# - -#################### Build C/API module ####################### - -def buildmodule(m,um): - """ - Return - """ - global f2py_version,options - outmess('\tBuilding module "%s"...\n'%(m['name'])) - ret = {} - mod_rules=defmod_rules[:] - vrd=modsign2map(m) - rd=dictappend({'f2py_version':f2py_version},vrd) - funcwrappers = [] - funcwrappers2 = [] # F90 codes - for n in m['interfaced']: - nb=None - for bi in m['body']: - if not bi['block']=='interface': - errmess('buildmodule: Expected interface block. Skipping.\n') - continue - for b in bi['body']: - if b['name']==n: nb=b;break - - if not nb: - errmess('buildmodule: Could not found the body of interfaced routine "%s". 
Skipping.\n'%(n)) - continue - nb_list = [nb] - if 'entry' in nb: - for k,a in nb['entry'].items(): - nb1 = copy.deepcopy(nb) - del nb1['entry'] - nb1['name'] = k - nb1['args'] = a - nb_list.append(nb1) - for nb in nb_list: - api,wrap=buildapi(nb) - if wrap: - if ismoduleroutine(nb): - funcwrappers2.append(wrap) - else: - funcwrappers.append(wrap) - ar=applyrules(api,vrd) - rd=dictappend(rd,ar) - - # Construct COMMON block support - cr,wrap = common_rules.buildhooks(m) - if wrap: - funcwrappers.append(wrap) - ar=applyrules(cr,vrd) - rd=dictappend(rd,ar) - - # Construct F90 module support - mr,wrap = f90mod_rules.buildhooks(m) - if wrap: - funcwrappers2.append(wrap) - ar=applyrules(mr,vrd) - rd=dictappend(rd,ar) - - for u in um: - ar=use_rules.buildusevars(u,m['use'][u['name']]) - rd=dictappend(rd,ar) - - needs=cfuncs.get_needs() - code={} - for n in needs.keys(): - code[n]=[] - for k in needs[n]: - c='' - if k in cfuncs.includes0: - c=cfuncs.includes0[k] - elif k in cfuncs.includes: - c=cfuncs.includes[k] - elif k in cfuncs.userincludes: - c=cfuncs.userincludes[k] - elif k in cfuncs.typedefs: - c=cfuncs.typedefs[k] - elif k in cfuncs.typedefs_generated: - c=cfuncs.typedefs_generated[k] - elif k in cfuncs.cppmacros: - c=cfuncs.cppmacros[k] - elif k in cfuncs.cfuncs: - c=cfuncs.cfuncs[k] - elif k in cfuncs.callbacks: - c=cfuncs.callbacks[k] - elif k in cfuncs.f90modhooks: - c=cfuncs.f90modhooks[k] - elif k in cfuncs.commonhooks: - c=cfuncs.commonhooks[k] - else: - errmess('buildmodule: unknown need %s.\n'%(`k`));continue - code[n].append(c) - mod_rules.append(code) - for r in mod_rules: - if ('_check' in r and r['_check'](m)) or ('_check' not in r): - ar=applyrules(r,vrd,m) - rd=dictappend(rd,ar) - ar=applyrules(module_rules,rd) - - fn = os.path.join(options['buildpath'],vrd['modulename']+'module.c') - ret['csrc'] = fn - f=open(fn,'w') - f.write(ar['modulebody'].replace('\t',2*' ')) - f.close() - outmess('\tWrote C/API module "%s" to file 
"%s/%smodule.c"\n'%(m['name'],options['buildpath'],vrd['modulename'])) - - if options['dorestdoc']: - fn = os.path.join(options['buildpath'],vrd['modulename']+'module.rest') - f=open(fn,'w') - f.write('.. -*- rest -*-\n') - f.write('\n'.join(ar['restdoc'])) - f.close() - outmess('\tReST Documentation is saved to file "%s/%smodule.rest"\n'%(options['buildpath'],vrd['modulename'])) - if options['dolatexdoc']: - fn = os.path.join(options['buildpath'],vrd['modulename']+'module.tex') - ret['ltx'] = fn - f=open(fn,'w') - f.write('%% This file is auto-generated with f2py (version:%s)\n'%(f2py_version)) - if 'shortlatex' not in options: - f.write('\\documentclass{article}\n\\usepackage{a4wide}\n\\begin{document}\n\\tableofcontents\n\n') - f.write('\n'.join(ar['latexdoc'])) - if 'shortlatex' not in options: - f.write('\\end{document}') - f.close() - outmess('\tDocumentation is saved to file "%s/%smodule.tex"\n'%(options['buildpath'],vrd['modulename'])) - if funcwrappers: - wn = os.path.join(options['buildpath'],'%s-f2pywrappers.f'%(vrd['modulename'])) - ret['fsrc'] = wn - f=open(wn,'w') - f.write('C -*- fortran -*-\n') - f.write('C This file is autogenerated with f2py (version:%s)\n'%(f2py_version)) - f.write('C It contains Fortran 77 wrappers to fortran functions.\n') - lines = [] - for l in ('\n\n'.join(funcwrappers)+'\n').split('\n'): - if l and l[0]==' ': - while len(l)>=66: - lines.append(l[:66]+'\n &') - l = l[66:] - lines.append(l+'\n') - else: lines.append(l+'\n') - lines = ''.join(lines).replace('\n &\n','\n') - f.write(lines) - f.close() - outmess('\tFortran 77 wrappers are saved to "%s"\n'%(wn)) - if funcwrappers2: - wn = os.path.join(options['buildpath'],'%s-f2pywrappers2.f90'%(vrd['modulename'])) - ret['fsrc'] = wn - f=open(wn,'w') - f.write('! -*- f90 -*-\n') - f.write('! This file is autogenerated with f2py (version:%s)\n'%(f2py_version)) - f.write('! 
It contains Fortran 90 wrappers to fortran functions.\n') - lines = [] - for l in ('\n\n'.join(funcwrappers2)+'\n').split('\n'): - if len(l)>72 and l[0]==' ': - lines.append(l[:72]+'&\n &') - l = l[72:] - while len(l)>66: - lines.append(l[:66]+'&\n &') - l = l[66:] - lines.append(l+'\n') - else: lines.append(l+'\n') - lines = ''.join(lines).replace('\n &\n','\n') - f.write(lines) - f.close() - outmess('\tFortran 90 wrappers are saved to "%s"\n'%(wn)) - return ret - -################## Build C/API function ############# - -stnd={1:'st',2:'nd',3:'rd',4:'th',5:'th',6:'th',7:'th',8:'th',9:'th',0:'th'} -def buildapi(rout): - rout,wrap = func2subr.assubr(rout) - args,depargs=getargs2(rout) - capi_maps.depargs=depargs - var=rout['vars'] - auxvars = [a for a in var.keys() if isintent_aux(var[a])] - - if ismoduleroutine(rout): - outmess('\t\t\tConstructing wrapper function "%s.%s"...\n'%(rout['modulename'],rout['name'])) - else: - outmess('\t\tConstructing wrapper function "%s"...\n'%(rout['name'])) - # Routine - vrd=routsign2map(rout) - rd=dictappend({},vrd) - for r in rout_rules: - if ('_check' in r and r['_check'](rout)) or ('_check' not in r): - ar=applyrules(r,vrd,rout) - rd=dictappend(rd,ar) - - # Args - nth,nthk=0,0 - savevrd={} - for a in args: - vrd=sign2map(a,var[a]) - if isintent_aux(var[a]): - _rules = aux_rules - else: - _rules = arg_rules - if not isintent_hide(var[a]): - if not isoptional(var[a]): - nth=nth+1 - vrd['nth']=`nth`+stnd[nth%10]+' argument' - else: - nthk=nthk+1 - vrd['nth']=`nthk`+stnd[nthk%10]+' keyword' - else: vrd['nth']='hidden' - savevrd[a]=vrd - for r in _rules: - if '_depend' in r: - continue - if ('_check' in r and r['_check'](var[a])) or ('_check' not in r): - ar=applyrules(r,vrd,var[a]) - rd=dictappend(rd,ar) - if '_break' in r: - break - for a in depargs: - if isintent_aux(var[a]): - _rules = aux_rules - else: - _rules = arg_rules - vrd=savevrd[a] - for r in _rules: - if '_depend' not in r: - continue - if ('_check' in r and 
r['_check'](var[a])) or ('_check' not in r): - ar=applyrules(r,vrd,var[a]) - rd=dictappend(rd,ar) - if '_break' in r: - break - if 'check' in var[a]: - for c in var[a]['check']: - vrd['check']=c - ar=applyrules(check_rules,vrd,var[a]) - rd=dictappend(rd,ar) - if type(rd['cleanupfrompyobj']) is types.ListType: - rd['cleanupfrompyobj'].reverse() - if type(rd['closepyobjfrom']) is types.ListType: - rd['closepyobjfrom'].reverse() - rd['docsignature']=stripcomma(replace('#docsign##docsignopt##docsignxa#', - {'docsign':rd['docsign'], - 'docsignopt':rd['docsignopt'], - 'docsignxa':rd['docsignxa']})) - optargs=stripcomma(replace('#docsignopt##docsignxa#', - {'docsignxa':rd['docsignxashort'], - 'docsignopt':rd['docsignoptshort']} - )) - if optargs=='': - rd['docsignatureshort']=stripcomma(replace('#docsign#',{'docsign':rd['docsign']})) - else: - rd['docsignatureshort']=replace('#docsign#[#docsignopt#]', - {'docsign':rd['docsign'], - 'docsignopt':optargs, - }) - rd['latexdocsignatureshort']=rd['docsignatureshort'].replace('_','\\_') - rd['latexdocsignatureshort']=rd['latexdocsignatureshort'].replace(',',', ') - cfs=stripcomma(replace('#callfortran##callfortranappend#',{'callfortran':rd['callfortran'],'callfortranappend':rd['callfortranappend']})) - if len(rd['callfortranappend'])>1: - rd['callcompaqfortran']=stripcomma(replace('#callfortran# 0,#callfortranappend#',{'callfortran':rd['callfortran'],'callfortranappend':rd['callfortranappend']})) - else: - rd['callcompaqfortran']=cfs - rd['callfortran']=cfs - if type(rd['docreturn'])==types.ListType: - rd['docreturn']=stripcomma(replace('#docreturn#',{'docreturn':rd['docreturn']}))+' = ' - rd['docstrsigns']=[] - rd['latexdocstrsigns']=[] - for k in ['docstrreq','docstropt','docstrout','docstrcbs']: - if k in rd and type(rd[k])==types.ListType: - rd['docstrsigns']=rd['docstrsigns']+rd[k] - k='latex'+k - if k in rd and type(rd[k])==types.ListType: - rd['latexdocstrsigns']=rd['latexdocstrsigns']+rd[k][0:1]+\ - 
['\\begin{description}']+rd[k][1:]+\ - ['\\end{description}'] - - # Workaround for Python 2.6, 2.6.1 bug: http://bugs.python.org/issue4720 - if rd['keyformat'] or rd['xaformat']: - argformat = rd['argformat'] - if isinstance(argformat, list): - argformat.append('|') - else: - assert isinstance(argformat, str),repr((argformat, type(argformat))) - rd['argformat'] += '|' - - ar=applyrules(routine_rules,rd) - if ismoduleroutine(rout): - outmess('\t\t\t %s\n'%(ar['docshort'])) - else: - outmess('\t\t %s\n'%(ar['docshort'])) - return ar,wrap - - -#################### EOF rules.py ####################### diff --git a/pythonPackages/numpy/numpy/f2py/setup.py b/pythonPackages/numpy/numpy/f2py/setup.py deleted file mode 100755 index 500ab58ec8..0000000000 --- a/pythonPackages/numpy/numpy/f2py/setup.py +++ /dev/null @@ -1,127 +0,0 @@ -#!/usr/bin/env python -""" -setup.py for installing F2PY - -Usage: - python setup.py install - -Copyright 2001-2005 Pearu Peterson all rights reserved, -Pearu Peterson -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. 
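The `stnd` table at the top of `buildapi` (rules.py, above) turns argument positions into ordinal labels ("1st argument", "2nd keyword", ...) for the generated error messages. A minimal standalone sketch of that lookup; the helper name `nth_label` is illustrative, not part of f2py:

```python
# Ordinal-suffix table keyed on the last decimal digit, as in rules.py.
STND = {1: 'st', 2: 'nd', 3: 'rd', 4: 'th', 5: 'th',
        6: 'th', 7: 'th', 8: 'th', 9: 'th', 0: 'th'}

def nth_label(n, kind='argument'):
    """Build the label used in f2py error messages, e.g. '3rd argument'."""
    return '%d%s %s' % (n, STND[n % 10], kind)
```

Note the table keys only on the last digit, so position 11 would yield "11st", faithful to the original code.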
-$Revision: 1.32 $ -$Date: 2005/01/30 17:22:14 $ -Pearu Peterson -""" - -__version__ = "$Id: setup.py,v 1.32 2005/01/30 17:22:14 pearu Exp $" - -import os -import sys -from distutils.dep_util import newer -from numpy.distutils import log -from numpy.distutils.core import setup -from numpy.distutils.misc_util import Configuration - -from __version__ import version - -def configuration(parent_package='',top_path=None): - config = Configuration('f2py', parent_package, top_path) - - config.add_data_dir('docs') - config.add_data_dir('tests') - - config.add_data_files('src/fortranobject.c', - 'src/fortranobject.h', - 'f2py.1' - ) - - config.make_svn_version_py() - - def generate_f2py_py(build_dir): - f2py_exe = 'f2py'+os.path.basename(sys.executable)[6:] - if f2py_exe[-4:]=='.exe': - f2py_exe = f2py_exe[:-4] + '.py' - if 'bdist_wininst' in sys.argv and f2py_exe[-3:] != '.py': - f2py_exe = f2py_exe + '.py' - target = os.path.join(build_dir,f2py_exe) - if newer(__file__,target): - log.info('Creating %s', target) - f = open(target,'w') - f.write('''\ -#!/usr/bin/env %s -# See http://cens.ioc.ee/projects/f2py2e/ -import os, sys -for mode in ["g3-numpy", "2e-numeric", "2e-numarray", "2e-numpy"]: - try: - i=sys.argv.index("--"+mode) - del sys.argv[i] - break - except ValueError: pass -os.environ["NO_SCIPY_IMPORT"]="f2py" -if mode=="g3-numpy": - print >> sys.stderr, "G3 f2py support is not implemented, yet." 
- sys.exit(1) -elif mode=="2e-numeric": - from f2py2e import main -elif mode=="2e-numarray": - sys.argv.append("-DNUMARRAY") - from f2py2e import main -elif mode=="2e-numpy": - from numpy.f2py import main -else: - print >> sys.stderr, "Unknown mode:",`mode` - sys.exit(1) -main() -'''%(os.path.basename(sys.executable))) - f.close() - return target - - config.add_scripts(generate_f2py_py) - - log.info('F2PY Version %s', config.get_version()) - - return config - -if __name__ == "__main__": - - config = configuration(top_path='') - version = config.get_version() - print('F2PY Version',version) - config = config.todict() - - if sys.version[:3]>='2.3': - config['download_url'] = "http://cens.ioc.ee/projects/f2py2e/2.x"\ - "/F2PY-2-latest.tar.gz" - config['classifiers'] = [ - 'Development Status :: 5 - Production/Stable', - 'Intended Audience :: Developers', - 'Intended Audience :: Science/Research', - 'License :: OSI Approved :: NumPy License', - 'Natural Language :: English', - 'Operating System :: OS Independent', - 'Programming Language :: C', - 'Programming Language :: Fortran', - 'Programming Language :: Python', - 'Topic :: Scientific/Engineering', - 'Topic :: Software Development :: Code Generators', - ] - setup(version=version, - description = "F2PY - Fortran to Python Interface Generaton", - author = "Pearu Peterson", - author_email = "pearu@cens.ioc.ee", - maintainer = "Pearu Peterson", - maintainer_email = "pearu@cens.ioc.ee", - license = "BSD", - platforms = "Unix, Windows (mingw|cygwin), Mac OSX", - long_description = """\ -The Fortran to Python Interface Generator, or F2PY for short, is a -command line tool (f2py) for generating Python C/API modules for -wrapping Fortran 77/90/95 subroutines, accessing common blocks from -Python, and calling Python functions from Fortran (call-backs). 
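The generated `f2py` script above selects a backend by scanning `sys.argv` for one of four `--mode` flags, removing the first match, and falling through to `2e-numpy` when none is given (the loop variable keeps its last value after a normal loop exit). A Python 3 sketch of that dispatch, operating on a hypothetical `argv` list rather than the real `sys.argv`:

```python
def pick_mode(argv):
    """Return (mode, argv-without-flag). The last mode in the tuple is the
    default, mirroring the for-loop fallthrough in the generated script."""
    argv = list(argv)
    for mode in ("g3-numpy", "2e-numeric", "2e-numarray", "2e-numpy"):
        try:
            argv.remove("--" + mode)  # same effect as index() + del in the original
            break
        except ValueError:
            pass
    return mode, argv
```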
-Interfacing subroutines/data from Fortran 90/95 modules is supported.""", - url = "http://cens.ioc.ee/projects/f2py2e/", - keywords = ['Fortran','f2py'], - **config) diff --git a/pythonPackages/numpy/numpy/f2py/setupscons.py b/pythonPackages/numpy/numpy/f2py/setupscons.py deleted file mode 100755 index e30fd87433..0000000000 --- a/pythonPackages/numpy/numpy/f2py/setupscons.py +++ /dev/null @@ -1,124 +0,0 @@ -#!/usr/bin/env python -""" -setup.py for installing F2PY - -Usage: - python setup.py install - -Copyright 2001-2005 Pearu Peterson all rights reserved, -Pearu Peterson -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -$Revision: 1.32 $ -$Date: 2005/01/30 17:22:14 $ -Pearu Peterson -""" - -__version__ = "$Id: setup.py,v 1.32 2005/01/30 17:22:14 pearu Exp $" - -import os -import sys -from distutils.dep_util import newer -from numpy.distutils import log -from numpy.distutils.core import setup -from numpy.distutils.misc_util import Configuration - -from __version__ import version - -def configuration(parent_package='',top_path=None): - config = Configuration('f2py', parent_package, top_path) - - config.add_data_dir('docs') - - config.add_data_files('src/fortranobject.c', - 'src/fortranobject.h', - 'f2py.1' - ) - - config.make_svn_version_py() - - def generate_f2py_py(build_dir): - f2py_exe = 'f2py'+os.path.basename(sys.executable)[6:] - if f2py_exe[-4:]=='.exe': - f2py_exe = f2py_exe[:-4] + '.py' - if 'bdist_wininst' in sys.argv and f2py_exe[-3:] != '.py': - f2py_exe = f2py_exe + '.py' - target = os.path.join(build_dir,f2py_exe) - if newer(__file__,target): - log.info('Creating %s', target) - f = open(target,'w') - f.write('''\ -#!/usr/bin/env %s -# See http://cens.ioc.ee/projects/f2py2e/ -import os, sys -for mode in ["g3-numpy", "2e-numeric", "2e-numarray", "2e-numpy"]: - try: - i=sys.argv.index("--"+mode) - del sys.argv[i] - break - except 
ValueError: pass -os.environ["NO_SCIPY_IMPORT"]="f2py" -if mode=="g3-numpy": - print >> sys.stderr, "G3 f2py support is not implemented, yet." - sys.exit(1) -elif mode=="2e-numeric": - from f2py2e import main -elif mode=="2e-numarray": - sys.argv.append("-DNUMARRAY") - from f2py2e import main -elif mode=="2e-numpy": - from numpy.f2py import main -else: - print >> sys.stderr, "Unknown mode:",`mode` - sys.exit(1) -main() -'''%(os.path.basename(sys.executable))) - f.close() - return target - - config.add_scripts(generate_f2py_py) - - return config - -if __name__ == "__main__": - - config = configuration(top_path='') - version = config.get_version() - print 'F2PY Version',version - config = config.todict() - - if sys.version[:3]>='2.3': - config['download_url'] = "http://cens.ioc.ee/projects/f2py2e/2.x"\ - "/F2PY-2-latest.tar.gz" - config['classifiers'] = [ - 'Development Status :: 5 - Production/Stable', - 'Intended Audience :: Developers', - 'Intended Audience :: Science/Research', - 'License :: OSI Approved :: NumPy License', - 'Natural Language :: English', - 'Operating System :: OS Independent', - 'Programming Language :: C', - 'Programming Language :: Fortran', - 'Programming Language :: Python', - 'Topic :: Scientific/Engineering', - 'Topic :: Software Development :: Code Generators', - ] - setup(version=version, - description = "F2PY - Fortran to Python Interface Generaton", - author = "Pearu Peterson", - author_email = "pearu@cens.ioc.ee", - maintainer = "Pearu Peterson", - maintainer_email = "pearu@cens.ioc.ee", - license = "BSD", - platforms = "Unix, Windows (mingw|cygwin), Mac OSX", - long_description = """\ -The Fortran to Python Interface Generator, or F2PY for short, is a -command line tool (f2py) for generating Python C/API modules for -wrapping Fortran 77/90/95 subroutines, accessing common blocks from -Python, and calling Python functions from Fortran (call-backs). 
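Both setup scripts derive the installed script name in `generate_f2py_py` by stripping the leading `python` from the interpreter basename: `'f2py' + os.path.basename(sys.executable)[6:]` turns `python2.5` into `f2py2.5`, a trailing `.exe` becomes `.py`, and `bdist_wininst` forces a `.py` suffix. Sketched standalone (the function name is illustrative):

```python
import os

def f2py_script_name(executable, wininst=False):
    """Mirror generate_f2py_py's naming: strip the leading 'python' from
    the interpreter basename and append the remainder to 'f2py'."""
    name = 'f2py' + os.path.basename(executable)[6:]
    if name.endswith('.exe'):            # Windows interpreter: ship a .py script
        name = name[:-4] + '.py'
    if wininst and not name.endswith('.py'):
        name = name + '.py'              # bdist_wininst needs the extension
    return name
```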
-Interfacing subroutines/data from Fortran 90/95 modules is supported.""", - url = "http://cens.ioc.ee/projects/f2py2e/", - keywords = ['Fortran','f2py'], - **config) diff --git a/pythonPackages/numpy/numpy/f2py/src/fortranobject.c b/pythonPackages/numpy/numpy/f2py/src/fortranobject.c deleted file mode 100755 index ff80fa7e58..0000000000 --- a/pythonPackages/numpy/numpy/f2py/src/fortranobject.c +++ /dev/null @@ -1,974 +0,0 @@ -#define FORTRANOBJECT_C -#include "fortranobject.h" - -#ifdef __cplusplus -extern "C" { -#endif - -/* - This file implements: FortranObject, array_from_pyobj, copy_ND_array - - Author: Pearu Peterson - $Revision: 1.52 $ - $Date: 2005/07/11 07:44:20 $ -*/ - -int -F2PyDict_SetItemString(PyObject *dict, char *name, PyObject *obj) -{ - if (obj==NULL) { - fprintf(stderr, "Error loading %s\n", name); - if (PyErr_Occurred()) { - PyErr_Print(); - PyErr_Clear(); - } - return -1; - } - return PyDict_SetItemString(dict, name, obj); -} - -/************************* FortranObject *******************************/ - -typedef PyObject *(*fortranfunc)(PyObject *,PyObject *,PyObject *,void *); - -PyObject * -PyFortranObject_New(FortranDataDef* defs, f2py_void_func init) { - int i; - PyFortranObject *fp = NULL; - PyObject *v = NULL; - if (init!=NULL) /* Initialize F90 module objects */ - (*(init))(); - if ((fp = PyObject_New(PyFortranObject, &PyFortran_Type))==NULL) return NULL; - if ((fp->dict = PyDict_New())==NULL) return NULL; - fp->len = 0; - while (defs[fp->len].name != NULL) fp->len++; - if (fp->len == 0) goto fail; - fp->defs = defs; - for (i=0;ilen;i++) - if (fp->defs[i].rank == -1) { /* Is Fortran routine */ - v = PyFortranObject_NewAsAttr(&(fp->defs[i])); - if (v==NULL) return NULL; - PyDict_SetItemString(fp->dict,fp->defs[i].name,v); - } else - if ((fp->defs[i].data)!=NULL) { /* Is Fortran variable or array (not allocatable) */ - if (fp->defs[i].type == PyArray_STRING) { - int n = fp->defs[i].rank-1; - v = PyArray_New(&PyArray_Type, n, 
fp->defs[i].dims.d, - PyArray_STRING, NULL, fp->defs[i].data, fp->defs[i].dims.d[n], - NPY_FARRAY, NULL); - } - else { - v = PyArray_New(&PyArray_Type, fp->defs[i].rank, fp->defs[i].dims.d, - fp->defs[i].type, NULL, fp->defs[i].data, 0, NPY_FARRAY, - NULL); - } - if (v==NULL) return NULL; - PyDict_SetItemString(fp->dict,fp->defs[i].name,v); - } - Py_XDECREF(v); - return (PyObject *)fp; - fail: - Py_XDECREF(v); - return NULL; -} - -PyObject * -PyFortranObject_NewAsAttr(FortranDataDef* defs) { /* used for calling F90 module routines */ - PyFortranObject *fp = NULL; - fp = PyObject_New(PyFortranObject, &PyFortran_Type); - if (fp == NULL) return NULL; - if ((fp->dict = PyDict_New())==NULL) return NULL; - fp->len = 1; - fp->defs = defs; - return (PyObject *)fp; -} - -/* Fortran methods */ - -static void -fortran_dealloc(PyFortranObject *fp) { - Py_XDECREF(fp->dict); - PyMem_Del(fp); -} - - -#if PY_VERSION_HEX >= 0x03000000 -#else -static PyMethodDef fortran_methods[] = { - {NULL, NULL} /* sentinel */ -}; -#endif - - -static PyObject * -fortran_doc (FortranDataDef def) { - char *p; - /* - p is used as a buffer to hold generated documentation strings. - A common operation in generating the documentation strings, is - appending a string to the buffer p. Earlier, the following - idiom was: - - sprintf(p, "%s", p); - - but this does not work when _FORTIFY_SOURCE=2 is enabled: instead - of appending the string, the string is inserted. 
- - As a fix, the following idiom should be used for appending - strings to a buffer p: - - sprintf(p + strlen(p), ""); - */ - PyObject *s = NULL; - int i; - unsigned size=100; - if (def.doc!=NULL) - size += strlen(def.doc); - p = (char*)malloc (size); - p[0] = '\0'; /* make sure that the buffer has zero length */ - if (sprintf(p,"%s - ",def.name)==0) goto fail; - if (def.rank==-1) { - if (def.doc==NULL) { - if (sprintf(p+strlen(p),"no docs available")==0) - goto fail; - } else { - if (sprintf(p+strlen(p),"%s",def.doc)==0) - goto fail; - } - } else { - PyArray_Descr *d = PyArray_DescrFromType(def.type); - if (sprintf(p+strlen(p),"'%c'-",d->type)==0) { - Py_DECREF(d); - goto fail; - } - Py_DECREF(d); - if (def.data==NULL) { - if (sprintf(p+strlen(p),"array(%" NPY_INTP_FMT,def.dims.d[0])==0) - goto fail; - for(i=1;i0) { - if (sprintf(p+strlen(p),"array(%"NPY_INTP_FMT,def.dims.d[0])==0) - goto fail; - for(i=1;isize) { - fprintf(stderr,"fortranobject.c:fortran_doc:len(p)=%zd>%d(size):"\ - " too long doc string required, increase size\n",\ - strlen(p),size); - goto fail; - } -#if PY_VERSION_HEX >= 0x03000000 - s = PyUnicode_FromString(p); -#else - s = PyString_FromString(p); -#endif - fail: - free(p); - return s; -} - -static FortranDataDef *save_def; /* save pointer of an allocatable array */ -static void set_data(char *d,npy_intp *f) { /* callback from Fortran */ - if (*f) /* In fortran f=allocated(d) */ - save_def->data = d; - else - save_def->data = NULL; - /* printf("set_data: d=%p,f=%d\n",d,*f); */ -} - -static PyObject * -fortran_getattr(PyFortranObject *fp, char *name) { - int i,j,k,flag; - if (fp->dict != NULL) { - PyObject *v = PyDict_GetItemString(fp->dict, name); - if (v != NULL) { - Py_INCREF(v); - return v; - } - } - for (i=0,j=1;ilen && (j=strcmp(name,fp->defs[i].name));i++); - if (j==0) - if (fp->defs[i].rank!=-1) { /* F90 allocatable array */ - if (fp->defs[i].func==NULL) return NULL; - for(k=0;kdefs[i].rank;++k) - fp->defs[i].dims.d[k]=-1; - save_def = 
&fp->defs[i]; - (*(fp->defs[i].func))(&fp->defs[i].rank,fp->defs[i].dims.d,set_data,&flag); - if (flag==2) - k = fp->defs[i].rank + 1; - else - k = fp->defs[i].rank; - if (fp->defs[i].data !=NULL) { /* array is allocated */ - PyObject *v = PyArray_New(&PyArray_Type, k, fp->defs[i].dims.d, - fp->defs[i].type, NULL, fp->defs[i].data, 0, NPY_FARRAY, - NULL); - if (v==NULL) return NULL; - /* Py_INCREF(v); */ - return v; - } else { /* array is not allocated */ - Py_INCREF(Py_None); - return Py_None; - } - } - if (strcmp(name,"__dict__")==0) { - Py_INCREF(fp->dict); - return fp->dict; - } - if (strcmp(name,"__doc__")==0) { -#if PY_VERSION_HEX >= 0x03000000 - PyObject *s = PyUnicode_FromString(""), *s2, *s3; - for (i=0;ilen;i++) { - s2 = fortran_doc(fp->defs[i]); - s3 = PyUnicode_Concat(s, s2); - Py_DECREF(s2); - Py_DECREF(s); - s = s3; - } -#else - PyObject *s = PyString_FromString(""); - for (i=0;ilen;i++) - PyString_ConcatAndDel(&s,fortran_doc(fp->defs[i])); -#endif - if (PyDict_SetItemString(fp->dict, name, s)) - return NULL; - return s; - } - if ((strcmp(name,"_cpointer")==0) && (fp->len==1)) { - PyObject *cobj = F2PyCapsule_FromVoidPtr((void *)(fp->defs[0].data),NULL); - if (PyDict_SetItemString(fp->dict, name, cobj)) - return NULL; - return cobj; - } -#if PY_VERSION_HEX >= 0x03000000 - if (1) { - PyObject *str, *ret; - str = PyUnicode_FromString(name); - ret = PyObject_GenericGetAttr((PyObject *)fp, str); - Py_DECREF(str); - return ret; - } -#else - return Py_FindMethod(fortran_methods, (PyObject *)fp, name); -#endif -} - -static int -fortran_setattr(PyFortranObject *fp, char *name, PyObject *v) { - int i,j,flag; - PyArrayObject *arr = NULL; - for (i=0,j=1;ilen && (j=strcmp(name,fp->defs[i].name));i++); - if (j==0) { - if (fp->defs[i].rank==-1) { - PyErr_SetString(PyExc_AttributeError,"over-writing fortran routine"); - return -1; - } - if (fp->defs[i].func!=NULL) { /* is allocatable array */ - npy_intp dims[F2PY_MAX_DIMS]; - int k; - save_def = &fp->defs[i]; - if 
(v!=Py_None) { /* set new value (reallocate if needed -- - see f2py generated code for more - details ) */ - for(k=0;kdefs[i].rank;k++) dims[k]=-1; - if ((arr = array_from_pyobj(fp->defs[i].type,dims,fp->defs[i].rank,F2PY_INTENT_IN,v))==NULL) - return -1; - (*(fp->defs[i].func))(&fp->defs[i].rank,arr->dimensions,set_data,&flag); - } else { /* deallocate */ - for(k=0;kdefs[i].rank;k++) dims[k]=0; - (*(fp->defs[i].func))(&fp->defs[i].rank,dims,set_data,&flag); - for(k=0;kdefs[i].rank;k++) dims[k]=-1; - } - memcpy(fp->defs[i].dims.d,dims,fp->defs[i].rank*sizeof(npy_intp)); - } else { /* not allocatable array */ - if ((arr = array_from_pyobj(fp->defs[i].type,fp->defs[i].dims.d,fp->defs[i].rank,F2PY_INTENT_IN,v))==NULL) - return -1; - } - if (fp->defs[i].data!=NULL) { /* copy Python object to Fortran array */ - npy_intp s = PyArray_MultiplyList(fp->defs[i].dims.d,arr->nd); - if (s==-1) - s = PyArray_MultiplyList(arr->dimensions,arr->nd); - if (s<0 || - (memcpy(fp->defs[i].data,arr->data,s*PyArray_ITEMSIZE(arr)))==NULL) { - if ((PyObject*)arr!=v) { - Py_DECREF(arr); - } - return -1; - } - if ((PyObject*)arr!=v) { - Py_DECREF(arr); - } - } else return (fp->defs[i].func==NULL?-1:0); - return 0; /* succesful */ - } - if (fp->dict == NULL) { - fp->dict = PyDict_New(); - if (fp->dict == NULL) - return -1; - } - if (v == NULL) { - int rv = PyDict_DelItemString(fp->dict, name); - if (rv < 0) - PyErr_SetString(PyExc_AttributeError,"delete non-existing fortran attribute"); - return rv; - } - else - return PyDict_SetItemString(fp->dict, name, v); -} - -static PyObject* -fortran_call(PyFortranObject *fp, PyObject *arg, PyObject *kw) { - int i = 0; - /* printf("fortran call - name=%s,func=%p,data=%p,%p\n",fp->defs[i].name, - fp->defs[i].func,fp->defs[i].data,&fp->defs[i].data); */ - if (fp->defs[i].rank==-1) {/* is Fortran routine */ - if ((fp->defs[i].func==NULL)) { - PyErr_Format(PyExc_RuntimeError, "no function to call"); - return NULL; - } - else if (fp->defs[i].data==NULL) - /* 
dummy routine */ - return (*((fortranfunc)(fp->defs[i].func)))((PyObject *)fp,arg,kw,NULL); - else - return (*((fortranfunc)(fp->defs[i].func)))((PyObject *)fp,arg,kw, - (void *)fp->defs[i].data); - } - PyErr_Format(PyExc_TypeError, "this fortran object is not callable"); - return NULL; -} - -static PyObject * -fortran_repr(PyFortranObject *fp) -{ - PyObject *name = NULL, *repr = NULL; - name = PyObject_GetAttrString((PyObject *)fp, "__name__"); - PyErr_Clear(); -#if PY_VERSION_HEX >= 0x03000000 - if (name != NULL && PyUnicode_Check(name)) { - repr = PyUnicode_FromFormat("", name); - } - else { - repr = PyUnicode_FromString(""); - } -#else - if (name != NULL && PyString_Check(name)) { - repr = PyString_FromFormat("", PyString_AsString(name)); - } - else { - repr = PyString_FromString(""); - } -#endif - Py_XDECREF(name); - return repr; -} - - -PyTypeObject PyFortran_Type = { -#if PY_VERSION_HEX >= 0x03000000 - PyVarObject_HEAD_INIT(NULL, 0) -#else - PyObject_HEAD_INIT(0) - 0, /*ob_size*/ -#endif - "fortran", /*tp_name*/ - sizeof(PyFortranObject), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - /* methods */ - (destructor)fortran_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)fortran_getattr, /*tp_getattr*/ - (setattrfunc)fortran_setattr, /*tp_setattr*/ - 0, /*tp_compare/tp_reserved*/ - (reprfunc)fortran_repr, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - (ternaryfunc)fortran_call, /*tp_call*/ -}; - -/************************* f2py_report_atexit *******************************/ - -#ifdef F2PY_REPORT_ATEXIT -static int passed_time = 0; -static int passed_counter = 0; -static int passed_call_time = 0; -static struct timeb start_time; -static struct timeb stop_time; -static struct timeb start_call_time; -static struct timeb stop_call_time; -static int cb_passed_time = 0; -static int cb_passed_counter = 0; -static int cb_passed_call_time = 0; -static struct timeb cb_start_time; -static struct timeb cb_stop_time; 
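The `F2PY_REPORT_ATEXIT` clocks below accumulate elapsed wall-clock milliseconds from `ftime()` pairs as `1000 * (seconds delta) + (milliseconds delta)`. The same bookkeeping sketched in Python, with a hypothetical `Timeb` tuple standing in for C's `struct timeb`:

```python
from collections import namedtuple

# Stand-in for C's `struct timeb`: whole seconds plus a millisecond part.
Timeb = namedtuple('Timeb', ['time', 'millitm'])

def elapsed_ms(start, stop):
    """Elapsed milliseconds between two ftime() samples, as accumulated
    by f2py_stop_clock / f2py_stop_call_clock."""
    return 1000 * (stop.time - start.time) + (stop.millitm - start.millitm)
```

The millisecond delta may be negative when the second boundary is crossed; the seconds term compensates, as in the C code.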
-static struct timeb cb_start_call_time; -static struct timeb cb_stop_call_time; - -extern void f2py_start_clock(void) { ftime(&start_time); } -extern -void f2py_start_call_clock(void) { - f2py_stop_clock(); - ftime(&start_call_time); -} -extern -void f2py_stop_clock(void) { - ftime(&stop_time); - passed_time += 1000*(stop_time.time - start_time.time); - passed_time += stop_time.millitm - start_time.millitm; -} -extern -void f2py_stop_call_clock(void) { - ftime(&stop_call_time); - passed_call_time += 1000*(stop_call_time.time - start_call_time.time); - passed_call_time += stop_call_time.millitm - start_call_time.millitm; - passed_counter += 1; - f2py_start_clock(); -} - -extern void f2py_cb_start_clock(void) { ftime(&cb_start_time); } -extern -void f2py_cb_start_call_clock(void) { - f2py_cb_stop_clock(); - ftime(&cb_start_call_time); -} -extern -void f2py_cb_stop_clock(void) { - ftime(&cb_stop_time); - cb_passed_time += 1000*(cb_stop_time.time - cb_start_time.time); - cb_passed_time += cb_stop_time.millitm - cb_start_time.millitm; -} -extern -void f2py_cb_stop_call_clock(void) { - ftime(&cb_stop_call_time); - cb_passed_call_time += 1000*(cb_stop_call_time.time - cb_start_call_time.time); - cb_passed_call_time += cb_stop_call_time.millitm - cb_start_call_time.millitm; - cb_passed_counter += 1; - f2py_cb_start_clock(); -} - -static int f2py_report_on_exit_been_here = 0; -extern -void f2py_report_on_exit(int exit_flag,void *name) { - if (f2py_report_on_exit_been_here) { - fprintf(stderr," %s\n",(char*)name); - return; - } - f2py_report_on_exit_been_here = 1; - fprintf(stderr," /-----------------------\\\n"); - fprintf(stderr," < F2PY performance report >\n"); - fprintf(stderr," \\-----------------------/\n"); - fprintf(stderr,"Overall time spent in ...\n"); - fprintf(stderr,"(a) wrapped (Fortran/C) functions : %8d msec\n", - passed_call_time); - fprintf(stderr,"(b) f2py interface, %6d calls : %8d msec\n", - passed_counter,passed_time); - fprintf(stderr,"(c) call-back 
(Python) functions : %8d msec\n", - cb_passed_call_time); - fprintf(stderr,"(d) f2py call-back interface, %6d calls : %8d msec\n", - cb_passed_counter,cb_passed_time); - - fprintf(stderr,"(e) wrapped (Fortran/C) functions (acctual) : %8d msec\n\n", - passed_call_time-cb_passed_call_time-cb_passed_time); - fprintf(stderr,"Use -DF2PY_REPORT_ATEXIT_DISABLE to disable this message.\n"); - fprintf(stderr,"Exit status: %d\n",exit_flag); - fprintf(stderr,"Modules : %s\n",(char*)name); -} -#endif - -/********************** report on array copy ****************************/ - -#ifdef F2PY_REPORT_ON_ARRAY_COPY -static void f2py_report_on_array_copy(PyArrayObject* arr) { - const long arr_size = PyArray_Size((PyObject *)arr); - if (arr_size>F2PY_REPORT_ON_ARRAY_COPY) { - fprintf(stderr,"copied an array: size=%ld, elsize=%d\n", - arr_size, PyArray_ITEMSIZE(arr)); - } -} -static void f2py_report_on_array_copy_fromany(void) { - fprintf(stderr,"created an array from object\n"); -} - -#define F2PY_REPORT_ON_ARRAY_COPY_FROMARR f2py_report_on_array_copy((PyArrayObject *)arr) -#define F2PY_REPORT_ON_ARRAY_COPY_FROMANY f2py_report_on_array_copy_fromany() -#else -#define F2PY_REPORT_ON_ARRAY_COPY_FROMARR -#define F2PY_REPORT_ON_ARRAY_COPY_FROMANY -#endif - - -/************************* array_from_obj *******************************/ - -/* - * File: array_from_pyobj.c - * - * Description: - * ------------ - * Provides array_from_pyobj function that returns a contigious array - * object with the given dimensions and required storage order, either - * in row-major (C) or column-major (Fortran) order. The function - * array_from_pyobj is very flexible about its Python object argument - * that can be any number, list, tuple, or array. - * - * array_from_pyobj is used in f2py generated Python extension - * modules. 
- * - * Author: Pearu Peterson - * Created: 13-16 January 2002 - * $Id: fortranobject.c,v 1.52 2005/07/11 07:44:20 pearu Exp $ - */ - -static int -count_nonpos(const int rank, - const npy_intp *dims) { - int i=0,r=0; - while (ind; - npy_intp size = PyArray_Size((PyObject *)arr); - printf("\trank = %d, flags = %d, size = %" NPY_INTP_FMT "\n", - rank,arr->flags,size); - printf("\tstrides = "); - dump_dims(rank,arr->strides); - printf("\tdimensions = "); - dump_dims(rank,arr->dimensions); -} -#endif - -#define SWAPTYPE(a,b,t) {t c; c = (a); (a) = (b); (b) = c; } - -static int swap_arrays(PyArrayObject* arr1, PyArrayObject* arr2) { - SWAPTYPE(arr1->data,arr2->data,char*); - SWAPTYPE(arr1->nd,arr2->nd,int); - SWAPTYPE(arr1->dimensions,arr2->dimensions,npy_intp*); - SWAPTYPE(arr1->strides,arr2->strides,npy_intp*); - SWAPTYPE(arr1->base,arr2->base,PyObject*); - SWAPTYPE(arr1->descr,arr2->descr,PyArray_Descr*); - SWAPTYPE(arr1->flags,arr2->flags,int); - /* SWAPTYPE(arr1->weakreflist,arr2->weakreflist,PyObject*); */ - return 0; -} - -#define ARRAY_ISCOMPATIBLE(arr,type_num) \ - ( (PyArray_ISINTEGER(arr) && PyTypeNum_ISINTEGER(type_num)) \ - ||(PyArray_ISFLOAT(arr) && PyTypeNum_ISFLOAT(type_num)) \ - ||(PyArray_ISCOMPLEX(arr) && PyTypeNum_ISCOMPLEX(type_num)) \ - ||(PyArray_ISBOOL(arr) && PyTypeNum_ISBOOL(type_num)) \ - ) - -extern -PyArrayObject* array_from_pyobj(const int type_num, - npy_intp *dims, - const int rank, - const int intent, - PyObject *obj) { - /* Note about reference counting - ----------------------------- - If the caller returns the array to Python, it must be done with - Py_BuildValue("N",arr). - Otherwise, if obj!=arr then the caller must call Py_DECREF(arr). - - Note on intent(cache,out,..) - --------------------- - Don't expect correct data when returning intent(cache) array. 
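Two guards from `array_from_pyobj` above are worth isolating: `count_nonpos` rejects `intent(cache|hide)` arrays whose dimensions are not all defined (f2py uses -1 as the "blank, fill in from the input" marker), and `ARRAY_ISCOMPATIBLE` allows reuse of an input array only when its type and the requested type share a numeric family. Pure-Python sketches; the type-name table is illustrative, the real macro works on NumPy type numbers:

```python
def count_nonpos(dims):
    """Count non-positive dimensions; intent(cache|hide) arrays must have
    every dimension defined, so any hit is an error."""
    return sum(1 for d in dims if d <= 0)

# Numeric family per type name, standing in for the PyTypeNum_IS* predicates.
KIND = {'int8': 'int', 'int32': 'int', 'int64': 'int',
        'float32': 'float', 'float64': 'float',
        'complex64': 'complex', 'complex128': 'complex',
        'bool': 'bool'}

def array_is_compatible(arr_type, req_type):
    """ARRAY_ISCOMPATIBLE: reusable when both types sit in the same numeric
    family; the itemsize may still differ and is adjusted separately."""
    ka, kr = KIND.get(arr_type), KIND.get(req_type)
    return ka is not None and ka == kr
```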
- - */ - char mess[200]; - PyArrayObject *arr = NULL; - PyArray_Descr *descr; - char typechar; - int elsize; - - if ((intent & F2PY_INTENT_HIDE) - || ((intent & F2PY_INTENT_CACHE) && (obj==Py_None)) - || ((intent & F2PY_OPTIONAL) && (obj==Py_None)) - ) { - /* intent(cache), optional, intent(hide) */ - if (count_nonpos(rank,dims)) { - int i; - sprintf(mess,"failed to create intent(cache|hide)|optional array" - "-- must have defined dimensions but got ("); - for(i=0;ielsize; - typechar = descr->type; - Py_DECREF(descr); - if (PyArray_Check(obj)) { - arr = (PyArrayObject *)obj; - - if (intent & F2PY_INTENT_CACHE) { - /* intent(cache) */ - if (PyArray_ISONESEGMENT(obj) - && PyArray_ITEMSIZE((PyArrayObject *)obj)>=elsize) { - if (check_and_fix_dimensions((PyArrayObject *)obj,rank,dims)) { - return NULL; /*XXX: set exception */ - } - if (intent & F2PY_INTENT_OUT) - Py_INCREF(obj); - return (PyArrayObject *)obj; - } - sprintf(mess,"failed to initialize intent(cache) array"); - if (!PyArray_ISONESEGMENT(obj)) - sprintf(mess+strlen(mess)," -- input must be in one segment"); - if (PyArray_ITEMSIZE(arr)descr->type,typechar); - if (!(F2PY_CHECK_ALIGNMENT(arr, intent))) - sprintf(mess+strlen(mess)," -- input not %d-aligned", F2PY_GET_ALIGNMENT(intent)); - PyErr_SetString(PyExc_ValueError,mess); - return NULL; - } - - /* here we have always intent(in) or intent(inplace) */ - - { - PyArrayObject *retarr = (PyArrayObject *) \ - PyArray_New(&PyArray_Type, arr->nd, arr->dimensions, type_num, - NULL,NULL,0, - !(intent&F2PY_INTENT_C), - NULL); - if (retarr==NULL) - return NULL; - F2PY_REPORT_ON_ARRAY_COPY_FROMARR; - if (PyArray_CopyInto(retarr, arr)) { - Py_DECREF(retarr); - return NULL; - } - if (intent & F2PY_INTENT_INPLACE) { - if (swap_arrays(arr,retarr)) - return NULL; /* XXX: set exception */ - Py_XDECREF(retarr); - if (intent & F2PY_INTENT_OUT) - Py_INCREF(arr); - } else { - arr = retarr; - } - } - return arr; - } - - if ((intent & F2PY_INTENT_INOUT) - || (intent & 
F2PY_INTENT_INPLACE) - || (intent & F2PY_INTENT_CACHE)) { - sprintf(mess,"failed to initialize intent(inout|inplace|cache) array" - " -- input must be array but got %s", - PyString_AsString(PyObject_Str(PyObject_Type(obj))) - ); - PyErr_SetString(PyExc_TypeError,mess); - return NULL; - } - - { - F2PY_REPORT_ON_ARRAY_COPY_FROMANY; - arr = (PyArrayObject *) \ - PyArray_FromAny(obj,PyArray_DescrFromType(type_num), 0,0, - ((intent & F2PY_INTENT_C)?NPY_CARRAY:NPY_FARRAY) \ - | NPY_FORCECAST, NULL); - if (arr==NULL) - return NULL; - if (check_and_fix_dimensions(arr,rank,dims)) - return NULL; /*XXX: set exception */ - return arr; - } - -} - -/*****************************************/ -/* Helper functions for array_from_pyobj */ -/*****************************************/ - -static -int check_and_fix_dimensions(const PyArrayObject* arr,const int rank,npy_intp *dims) { - /* - This function fills in blanks (that are -1\'s) in dims list using - the dimensions from arr. It also checks that non-blank dims will - match with the corresponding values in arr dimensions. - */ - const npy_intp arr_size = (arr->nd)?PyArray_Size((PyObject *)arr):1; -#ifdef DEBUG_COPY_ND_ARRAY - dump_attrs(arr); - printf("check_and_fix_dimensions:init: dims="); - dump_dims(rank,dims); -#endif - if (rank > arr->nd) { /* [1,2] -> [[1],[2]]; 1 -> [[1]] */ - npy_intp new_size = 1; - int free_axe = -1; - int i; - npy_intp d; - /* Fill dims where -1 or 0; check dimensions; calc new_size; */ - for(i=0;ind;++i) { - d = arr->dimensions[i]; - if (dims[i] >= 0) { - if (d>1 && dims[i]!=d) { - fprintf(stderr,"%d-th dimension must be fixed to %" NPY_INTP_FMT - " but got %" NPY_INTP_FMT "\n", - i,dims[i], d); - return 1; - } - if (!dims[i]) dims[i] = 1; - } else { - dims[i] = d ? 
d : 1; - } - new_size *= dims[i]; - } - for(i=arr->nd;i1) { - fprintf(stderr,"%d-th dimension must be %" NPY_INTP_FMT - " but got 0 (not defined).\n", - i,dims[i]); - return 1; - } else if (free_axe<0) - free_axe = i; - else - dims[i] = 1; - if (free_axe>=0) { - dims[free_axe] = arr_size/new_size; - new_size *= dims[free_axe]; - } - if (new_size != arr_size) { - fprintf(stderr,"unexpected array size: new_size=%" NPY_INTP_FMT - ", got array with arr_size=%" NPY_INTP_FMT " (maybe too many free" - " indices)\n", new_size,arr_size); - return 1; - } - } else if (rank==arr->nd) { - npy_intp new_size = 1; - int i; - npy_intp d; - for (i=0; idimensions[i]; - if (dims[i]>=0) { - if (d > 1 && d!=dims[i]) { - fprintf(stderr,"%d-th dimension must be fixed to %" NPY_INTP_FMT - " but got %" NPY_INTP_FMT "\n", - i,dims[i],d); - return 1; - } - if (!dims[i]) dims[i] = 1; - } else dims[i] = d; - new_size *= dims[i]; - } - if (new_size != arr_size) { - fprintf(stderr,"unexpected array size: new_size=%" NPY_INTP_FMT - ", got array with arr_size=%" NPY_INTP_FMT "\n", new_size,arr_size); - return 1; - } - } else { /* [[1,2]] -> [[1],[2]] */ - int i,j; - npy_intp d; - int effrank; - npy_intp size; - for (i=0,effrank=0;ind;++i) - if (arr->dimensions[i]>1) ++effrank; - if (dims[rank-1]>=0) - if (effrank>rank) { - fprintf(stderr,"too many axes: %d (effrank=%d), expected rank=%d\n", - arr->nd,effrank,rank); - return 1; - } - - for (i=0,j=0;ind && arr->dimensions[j]<2) ++j; - if (j>=arr->nd) d = 1; - else d = arr->dimensions[j++]; - if (dims[i]>=0) { - if (d>1 && d!=dims[i]) { - fprintf(stderr,"%d-th dimension must be fixed to %" NPY_INTP_FMT - " but got %" NPY_INTP_FMT " (real index=%d)\n", - i,dims[i],d,j-1); - return 1; - } - if (!dims[i]) dims[i] = 1; - } else - dims[i] = d; - } - - for (i=rank;ind;++i) { /* [[1,2],[3,4]] -> [1,2,3,4] */ - while (jnd && arr->dimensions[j]<2) ++j; - if (j>=arr->nd) d = 1; - else d = arr->dimensions[j++]; - dims[rank-1] *= d; - } - for (i=0,size=1;ind); - 
for (i=0;ind;++i) fprintf(stderr," %" NPY_INTP_FMT,arr->dimensions[i]); - fprintf(stderr," ]\n"); - return 1; - } - } -#ifdef DEBUG_COPY_ND_ARRAY - printf("check_and_fix_dimensions:end: dims="); - dump_dims(rank,dims); -#endif - return 0; -} - -/* End of file: array_from_pyobj.c */ - -/************************* copy_ND_array *******************************/ - -extern -int copy_ND_array(const PyArrayObject *arr, PyArrayObject *out) -{ - F2PY_REPORT_ON_ARRAY_COPY_FROMARR; - return PyArray_CopyInto(out, (PyArrayObject *)arr); -} - -/*********************************************/ -/* Compatibility functions for Python >= 3.0 */ -/*********************************************/ - -#if PY_VERSION_HEX >= 0x03000000 - -PyObject * -F2PyCapsule_FromVoidPtr(void *ptr, void (*dtor)(PyObject *)) -{ - PyObject *ret = PyCapsule_New(ptr, NULL, dtor); - if (ret == NULL) { - PyErr_Clear(); - } - return ret; -} - -void * -F2PyCapsule_AsVoidPtr(PyObject *obj) -{ - void *ret = PyCapsule_GetPointer(obj, NULL); - if (ret == NULL) { - PyErr_Clear(); - } - return ret; -} - -int -F2PyCapsule_Check(PyObject *ptr) -{ - return PyCapsule_CheckExact(ptr); -} - -#else - -PyObject * -F2PyCapsule_FromVoidPtr(void *ptr, void (*dtor)(void *)) -{ - return PyCObject_FromVoidPtr(ptr, dtor); -} - -void * -F2PyCapsule_AsVoidPtr(PyObject *ptr) -{ - return PyCObject_AsVoidPtr(ptr); -} - -int -F2PyCapsule_Check(PyObject *ptr) -{ - return PyCObject_Check(ptr); -} - -#endif - - -#ifdef __cplusplus -} -#endif -/************************* EOF fortranobject.c *******************************/ diff --git a/pythonPackages/numpy/numpy/f2py/src/fortranobject.h b/pythonPackages/numpy/numpy/f2py/src/fortranobject.h deleted file mode 100755 index 283021aa12..0000000000 --- a/pythonPackages/numpy/numpy/f2py/src/fortranobject.h +++ /dev/null @@ -1,178 +0,0 @@ -#ifndef Py_FORTRANOBJECT_H -#define Py_FORTRANOBJECT_H -#ifdef __cplusplus -extern "C" { -#endif - -#include "Python.h" - -#ifdef FORTRANOBJECT_C -#define 
NO_IMPORT_ARRAY -#endif -#define PY_ARRAY_UNIQUE_SYMBOL PyArray_API -#include "numpy/arrayobject.h" - -/* - * Python 3 support macros - */ -#if PY_VERSION_HEX >= 0x03000000 -#define PyString_Check PyBytes_Check -#define PyString_GET_SIZE PyBytes_GET_SIZE -#define PyString_AS_STRING PyBytes_AS_STRING -#define PyString_FromString PyBytes_FromString -#define PyString_ConcatAndDel PyBytes_ConcatAndDel -#define PyString_AsString PyBytes_AsString - -#define PyInt_Check PyLong_Check -#define PyInt_FromLong PyLong_FromLong -#define PyInt_AS_LONG PyLong_AsLong -#define PyInt_AsLong PyLong_AsLong - -#define PyNumber_Int PyNumber_Long -#endif - -#if (PY_VERSION_HEX < 0x02060000) -#define Py_TYPE(o) (((PyObject*)(o))->ob_type) -#define Py_REFCNT(o) (((PyObject*)(o))->ob_refcnt) -#define Py_SIZE(o) (((PyVarObject*)(o))->ob_size) -#endif - - /* -#ifdef F2PY_REPORT_ATEXIT_DISABLE -#undef F2PY_REPORT_ATEXIT -#else - -#ifndef __FreeBSD__ -#ifndef __WIN32__ -#ifndef __APPLE__ -#define F2PY_REPORT_ATEXIT -#endif -#endif -#endif - -#endif - */ - -#ifdef F2PY_REPORT_ATEXIT -#include - extern void f2py_start_clock(void); - extern void f2py_stop_clock(void); - extern void f2py_start_call_clock(void); - extern void f2py_stop_call_clock(void); - extern void f2py_cb_start_clock(void); - extern void f2py_cb_stop_clock(void); - extern void f2py_cb_start_call_clock(void); - extern void f2py_cb_stop_call_clock(void); - extern void f2py_report_on_exit(int,void*); -#endif - -#ifdef DMALLOC -#include "dmalloc.h" -#endif - -/* Fortran object interface */ - -/* -123456789-123456789-123456789-123456789-123456789-123456789-123456789-12 - -PyFortranObject represents various Fortran objects: -Fortran (module) routines, COMMON blocks, module data. 
- -Author: Pearu Peterson -*/ - -#define F2PY_MAX_DIMS 40 - -typedef void (*f2py_set_data_func)(char*,npy_intp*); -typedef void (*f2py_void_func)(void); -typedef void (*f2py_init_func)(int*,npy_intp*,f2py_set_data_func,int*); - - /*typedef void* (*f2py_c_func)(void*,...);*/ - -typedef void *(*f2pycfunc)(void); - -typedef struct { - char *name; /* attribute (array||routine) name */ - int rank; /* array rank, 0 for scalar, max is F2PY_MAX_DIMS, - || rank=-1 for Fortran routine */ - struct {npy_intp d[F2PY_MAX_DIMS];} dims; /* dimensions of the array, || not used */ - int type; /* PyArray_ || not used */ - char *data; /* pointer to array || Fortran routine */ - f2py_init_func func; /* initialization function for - allocatable arrays: - func(&rank,dims,set_ptr_func,name,len(name)) - || C/API wrapper for Fortran routine */ - char *doc; /* documentation string; only recommended - for routines. */ -} FortranDataDef; - -typedef struct { - PyObject_HEAD - int len; /* Number of attributes */ - FortranDataDef *defs; /* An array of FortranDataDef's */ - PyObject *dict; /* Fortran object attribute dictionary */ -} PyFortranObject; - -#define PyFortran_Check(op) (Py_TYPE(op) == &PyFortran_Type) -#define PyFortran_Check1(op) (0==strcmp(Py_TYPE(op)->tp_name,"fortran")) - - extern PyTypeObject PyFortran_Type; - extern int F2PyDict_SetItemString(PyObject* dict, char *name, PyObject *obj); - extern PyObject * PyFortranObject_New(FortranDataDef* defs, f2py_void_func init); - extern PyObject * PyFortranObject_NewAsAttr(FortranDataDef* defs); - -#if PY_VERSION_HEX >= 0x03000000 - -PyObject * F2PyCapsule_FromVoidPtr(void *ptr, void (*dtor)(PyObject *)); -void * F2PyCapsule_AsVoidPtr(PyObject *obj); -int F2PyCapsule_Check(PyObject *ptr); - -#else - -PyObject * F2PyCapsule_FromVoidPtr(void *ptr, void (*dtor)(void *)); -void * F2PyCapsule_AsVoidPtr(PyObject *ptr); -int F2PyCapsule_Check(PyObject *ptr); - -#endif - -#define ISCONTIGUOUS(m) ((m)->flags & NPY_CONTIGUOUS) -#define 
F2PY_INTENT_IN 1 -#define F2PY_INTENT_INOUT 2 -#define F2PY_INTENT_OUT 4 -#define F2PY_INTENT_HIDE 8 -#define F2PY_INTENT_CACHE 16 -#define F2PY_INTENT_COPY 32 -#define F2PY_INTENT_C 64 -#define F2PY_OPTIONAL 128 -#define F2PY_INTENT_INPLACE 256 -#define F2PY_INTENT_ALIGNED4 512 -#define F2PY_INTENT_ALIGNED8 1024 -#define F2PY_INTENT_ALIGNED16 2048 - -#define ARRAY_ISALIGNED(ARR, SIZE) ((size_t)(PyArray_DATA(ARR)) % (SIZE) == 0) -#define F2PY_ALIGN4(intent) (intent & F2PY_INTENT_ALIGNED4) -#define F2PY_ALIGN8(intent) (intent & F2PY_INTENT_ALIGNED8) -#define F2PY_ALIGN16(intent) (intent & F2PY_INTENT_ALIGNED16) - -#define F2PY_GET_ALIGNMENT(intent) \ - (F2PY_ALIGN4(intent) ? 4 : \ - (F2PY_ALIGN8(intent) ? 8 : \ - (F2PY_ALIGN16(intent) ? 16 : 1) )) -#define F2PY_CHECK_ALIGNMENT(arr, intent) ARRAY_ISALIGNED(arr, F2PY_GET_ALIGNMENT(intent)) - - extern PyArrayObject* array_from_pyobj(const int type_num, - npy_intp *dims, - const int rank, - const int intent, - PyObject *obj); - extern int copy_ND_array(const PyArrayObject *in, PyArrayObject *out); - -#ifdef DEBUG_COPY_ND_ARRAY - extern void dump_attrs(const PyArrayObject* arr); -#endif - - -#ifdef __cplusplus -} -#endif -#endif /* !Py_FORTRANOBJECT_H */ diff --git a/pythonPackages/numpy/numpy/f2py/tests/test_array_from_pyobj.py b/pythonPackages/numpy/numpy/f2py/tests/test_array_from_pyobj.py deleted file mode 100755 index 3b11a5b140..0000000000 --- a/pythonPackages/numpy/numpy/f2py/tests/test_array_from_pyobj.py +++ /dev/null @@ -1,542 +0,0 @@ -import unittest -import os -import sys -import copy - -import nose - -from numpy.testing import * -from numpy import array, alltrue, ndarray, asarray, can_cast,zeros, dtype -from numpy.core.multiarray import typeinfo - -import util - -wrap = None -def setup(): - """ - Build the required testing extension module - - """ - global wrap - - # Check compiler availability first - if not util.has_c_compiler(): - raise nose.SkipTest("No C compiler available") - - if wrap is None: - 
config_code = """ - config.add_extension('test_array_from_pyobj_ext', - sources=['wrapmodule.c', 'fortranobject.c'], - define_macros=[]) - """ - d = os.path.dirname(__file__) - src = [os.path.join(d, 'src', 'array_from_pyobj', 'wrapmodule.c'), - os.path.join(d, '..', 'src', 'fortranobject.c'), - os.path.join(d, '..', 'src', 'fortranobject.h')] - wrap = util.build_module_distutils(src, config_code, - 'test_array_from_pyobj_ext') - -def flags_info(arr): - flags = wrap.array_attrs(arr)[6] - return flags2names(flags) - -def flags2names(flags): - info = [] - for flagname in ['CONTIGUOUS','FORTRAN','OWNDATA','ENSURECOPY', - 'ENSUREARRAY','ALIGNED','NOTSWAPPED','WRITEABLE', - 'UPDATEIFCOPY','BEHAVED','BEHAVED_RO', - 'CARRAY','FARRAY' - ]: - if abs(flags) & getattr(wrap,flagname): - info.append(flagname) - return info - -class Intent: - def __init__(self,intent_list=[]): - self.intent_list = intent_list[:] - flags = 0 - for i in intent_list: - if i=='optional': - flags |= wrap.F2PY_OPTIONAL - else: - flags |= getattr(wrap,'F2PY_INTENT_'+i.upper()) - self.flags = flags - def __getattr__(self,name): - name = name.lower() - if name=='in_': name='in' - return self.__class__(self.intent_list+[name]) - def __str__(self): - return 'intent(%s)' % (','.join(self.intent_list)) - def __repr__(self): - return 'Intent(%r)' % (self.intent_list) - def is_intent(self,*names): - for name in names: - if name not in self.intent_list: - return False - return True - def is_intent_exact(self,*names): - return len(self.intent_list)==len(names) and self.is_intent(*names) - -intent = Intent() - -class Type(object): - _type_names = ['BOOL','BYTE','UBYTE','SHORT','USHORT','INT','UINT', - 'LONG','ULONG','LONGLONG','ULONGLONG', - 'FLOAT','DOUBLE','LONGDOUBLE','CFLOAT','CDOUBLE', - 'CLONGDOUBLE'] - _type_cache = {} - - _cast_dict = {'BOOL':['BOOL']} - _cast_dict['BYTE'] = _cast_dict['BOOL'] + ['BYTE'] - _cast_dict['UBYTE'] = _cast_dict['BOOL'] + ['UBYTE'] - _cast_dict['BYTE'] = ['BYTE'] - 
_cast_dict['UBYTE'] = ['UBYTE'] - _cast_dict['SHORT'] = _cast_dict['BYTE'] + ['UBYTE','SHORT'] - _cast_dict['USHORT'] = _cast_dict['UBYTE'] + ['BYTE','USHORT'] - _cast_dict['INT'] = _cast_dict['SHORT'] + ['USHORT','INT'] - _cast_dict['UINT'] = _cast_dict['USHORT'] + ['SHORT','UINT'] - - _cast_dict['LONG'] = _cast_dict['INT'] + ['LONG'] - _cast_dict['ULONG'] = _cast_dict['UINT'] + ['ULONG'] - - _cast_dict['LONGLONG'] = _cast_dict['LONG'] + ['LONGLONG'] - _cast_dict['ULONGLONG'] = _cast_dict['ULONG'] + ['ULONGLONG'] - - _cast_dict['FLOAT'] = _cast_dict['SHORT'] + ['USHORT','FLOAT'] - _cast_dict['DOUBLE'] = _cast_dict['INT'] + ['UINT','FLOAT','DOUBLE'] - _cast_dict['LONGDOUBLE'] = _cast_dict['LONG'] + ['ULONG','FLOAT','DOUBLE','LONGDOUBLE'] - - _cast_dict['CFLOAT'] = _cast_dict['FLOAT'] + ['CFLOAT'] - _cast_dict['CDOUBLE'] = _cast_dict['DOUBLE'] + ['CFLOAT','CDOUBLE'] - _cast_dict['CLONGDOUBLE'] = _cast_dict['LONGDOUBLE'] + ['CFLOAT','CDOUBLE','CLONGDOUBLE'] - - - def __new__(cls,name): - if isinstance(name,dtype): - dtype0 = name - name = None - for n,i in typeinfo.items(): - if isinstance(i,tuple) and dtype0.type is i[-1]: - name = n - break - obj = cls._type_cache.get(name.upper(),None) - if obj is not None: - return obj - obj = object.__new__(cls) - obj._init(name) - cls._type_cache[name.upper()] = obj - return obj - - def _init(self,name): - self.NAME = name.upper() - self.type_num = getattr(wrap,'PyArray_'+self.NAME) - assert_equal(self.type_num,typeinfo[self.NAME][1]) - self.dtype = typeinfo[self.NAME][-1] - self.elsize = typeinfo[self.NAME][2] / 8 - self.dtypechar = typeinfo[self.NAME][0] - - def cast_types(self): - return map(self.__class__,self._cast_dict[self.NAME]) - - def all_types(self): - return map(self.__class__,self._type_names) - - def smaller_types(self): - bits = typeinfo[self.NAME][3] - types = [] - for name in self._type_names: - if typeinfo[name][3]<bits: - types.append(Type(name)) - return types - - def larger_types(self): - bits = typeinfo[self.NAME][3] - types = [] - for name in self._type_names: - if typeinfo[name][3]>bits: - types.append(Type(name)) - return types - -class Array: - def 
__init__(self,typ,dims,intent,obj): - self.type = typ - self.dims = dims - self.intent = intent - self.obj_copy = copy.deepcopy(obj) - self.obj = obj - - # arr.dtypechar may be different from typ.dtypechar - self.arr = wrap.call(typ.type_num,dims,intent.flags,obj) - - self.arr_attr = wrap.array_attrs(self.arr) - - if len(dims)>1: - if self.intent.is_intent('c'): - assert intent.flags & wrap.F2PY_INTENT_C - assert not self.arr.flags['FORTRAN'],`self.arr.flags,obj.flags` - assert self.arr.flags['CONTIGUOUS'] - assert not self.arr_attr[6] & wrap.FORTRAN - else: - assert not intent.flags & wrap.F2PY_INTENT_C - assert self.arr.flags['FORTRAN'] - assert not self.arr.flags['CONTIGUOUS'] - assert self.arr_attr[6] & wrap.FORTRAN - - if obj is None: - self.pyarr = None - self.pyarr_attr = None - return - - if intent.is_intent('cache'): - assert isinstance(obj,ndarray),`type(obj)` - self.pyarr = array(obj).reshape(*dims).copy() - else: - self.pyarr = array(array(obj, - dtype = typ.dtypechar).reshape(*dims), - order=self.intent.is_intent('c') and 'C' or 'F') - assert self.pyarr.dtype == typ, \ - `self.pyarr.dtype,typ` - assert self.pyarr.flags['OWNDATA'], (obj, intent) - self.pyarr_attr = wrap.array_attrs(self.pyarr) - - if len(dims)>1: - if self.intent.is_intent('c'): - assert not self.pyarr.flags['FORTRAN'] - assert self.pyarr.flags['CONTIGUOUS'] - assert not self.pyarr_attr[6] & wrap.FORTRAN - else: - assert self.pyarr.flags['FORTRAN'] - assert not self.pyarr.flags['CONTIGUOUS'] - assert self.pyarr_attr[6] & wrap.FORTRAN - - - assert self.arr_attr[1]==self.pyarr_attr[1] # nd - assert self.arr_attr[2]==self.pyarr_attr[2] # dimensions - if self.arr_attr[1]<=1: - assert self.arr_attr[3]==self.pyarr_attr[3],\ - `self.arr_attr[3],self.pyarr_attr[3],self.arr.tostring(),self.pyarr.tostring()` # strides - assert self.arr_attr[5][-2:]==self.pyarr_attr[5][-2:],\ - `self.arr_attr[5],self.pyarr_attr[5]` # descr - assert self.arr_attr[6]==self.pyarr_attr[6],\ - 
`self.arr_attr[6],self.pyarr_attr[6],flags2names(0*self.arr_attr[6]-self.pyarr_attr[6]),flags2names(self.arr_attr[6]),intent` # flags - - if intent.is_intent('cache'): - assert self.arr_attr[5][3]>=self.type.elsize,\ - `self.arr_attr[5][3],self.type.elsize` - else: - assert self.arr_attr[5][3]==self.type.elsize,\ - `self.arr_attr[5][3],self.type.elsize` - assert self.arr_equal(self.pyarr,self.arr) - - if isinstance(self.obj,ndarray): - if typ.elsize==Type(obj.dtype).elsize: - if not intent.is_intent('copy') and self.arr_attr[1]<=1: - assert self.has_shared_memory() - - def arr_equal(self,arr1,arr2): - if arr1.shape != arr2.shape: - return False - s = arr1==arr2 - return alltrue(s.flatten()) - - def __str__(self): - return str(self.arr) - - def has_shared_memory(self): - """Check that created array shares data with input array. - """ - if self.obj is self.arr: - return True - if not isinstance(self.obj,ndarray): - return False - obj_attr = wrap.array_attrs(self.obj) - return obj_attr[0]==self.arr_attr[0] - -################################################## - -class test_intent(unittest.TestCase): - def test_in_out(self): - assert_equal(str(intent.in_.out),'intent(in,out)') - assert intent.in_.c.is_intent('c') - assert not intent.in_.c.is_intent_exact('c') - assert intent.in_.c.is_intent_exact('c','in') - assert intent.in_.c.is_intent_exact('in','c') - assert not intent.in_.is_intent('c') - -class _test_shared_memory: - num2seq = [1,2] - num23seq = [[1,2,3],[4,5,6]] - def test_in_from_2seq(self): - a = self.array([2],intent.in_,self.num2seq) - assert not a.has_shared_memory() - - def test_in_from_2casttype(self): - for t in self.type.cast_types(): - obj = array(self.num2seq,dtype=t.dtype) - a = self.array([len(self.num2seq)],intent.in_,obj) - if t.elsize==self.type.elsize: - assert a.has_shared_memory(),`self.type.dtype,t.dtype` - else: - assert not a.has_shared_memory(),`t.dtype` - - def test_inout_2seq(self): - obj = array(self.num2seq,dtype=self.type.dtype) - a = 
self.array([len(self.num2seq)],intent.inout,obj) - assert a.has_shared_memory() - - try: - a = self.array([2],intent.in_.inout,self.num2seq) - except TypeError,msg: - if not str(msg).startswith('failed to initialize intent(inout|inplace|cache) array'): - raise - else: - raise SystemError,'intent(inout) should have failed on sequence' - - def test_f_inout_23seq(self): - obj = array(self.num23seq,dtype=self.type.dtype,order='F') - shape = (len(self.num23seq),len(self.num23seq[0])) - a = self.array(shape,intent.in_.inout,obj) - assert a.has_shared_memory() - - obj = array(self.num23seq,dtype=self.type.dtype,order='C') - shape = (len(self.num23seq),len(self.num23seq[0])) - try: - a = self.array(shape,intent.in_.inout,obj) - except ValueError,msg: - if not str(msg).startswith('failed to initialize intent(inout) array'): - raise - else: - raise SystemError,'intent(inout) should have failed on improper array' - - def test_c_inout_23seq(self): - obj = array(self.num23seq,dtype=self.type.dtype) - shape = (len(self.num23seq),len(self.num23seq[0])) - a = self.array(shape,intent.in_.c.inout,obj) - assert a.has_shared_memory() - - def test_in_copy_from_2casttype(self): - for t in self.type.cast_types(): - obj = array(self.num2seq,dtype=t.dtype) - a = self.array([len(self.num2seq)],intent.in_.copy,obj) - assert not a.has_shared_memory(),`t.dtype` - - def test_c_in_from_23seq(self): - a = self.array([len(self.num23seq),len(self.num23seq[0])], - intent.in_,self.num23seq) - assert not a.has_shared_memory() - - def test_in_from_23casttype(self): - for t in self.type.cast_types(): - obj = array(self.num23seq,dtype=t.dtype) - a = self.array([len(self.num23seq),len(self.num23seq[0])], - intent.in_,obj) - assert not a.has_shared_memory(),`t.dtype` - - def test_f_in_from_23casttype(self): - for t in self.type.cast_types(): - obj = array(self.num23seq,dtype=t.dtype,order='F') - a = self.array([len(self.num23seq),len(self.num23seq[0])], - intent.in_,obj) - if t.elsize==self.type.elsize: - 
assert a.has_shared_memory(),`t.dtype` - else: - assert not a.has_shared_memory(),`t.dtype` - - def test_c_in_from_23casttype(self): - for t in self.type.cast_types(): - obj = array(self.num23seq,dtype=t.dtype) - a = self.array([len(self.num23seq),len(self.num23seq[0])], - intent.in_.c,obj) - if t.elsize==self.type.elsize: - assert a.has_shared_memory(),`t.dtype` - else: - assert not a.has_shared_memory(),`t.dtype` - - def test_f_copy_in_from_23casttype(self): - for t in self.type.cast_types(): - obj = array(self.num23seq,dtype=t.dtype,order='F') - a = self.array([len(self.num23seq),len(self.num23seq[0])], - intent.in_.copy,obj) - assert not a.has_shared_memory(),`t.dtype` - - def test_c_copy_in_from_23casttype(self): - for t in self.type.cast_types(): - obj = array(self.num23seq,dtype=t.dtype) - a = self.array([len(self.num23seq),len(self.num23seq[0])], - intent.in_.c.copy,obj) - assert not a.has_shared_memory(),`t.dtype` - - def test_in_cache_from_2casttype(self): - for t in self.type.all_types(): - if t.elsize != self.type.elsize: - continue - obj = array(self.num2seq,dtype=t.dtype) - shape = (len(self.num2seq),) - a = self.array(shape,intent.in_.c.cache,obj) - assert a.has_shared_memory(),`t.dtype` - - a = self.array(shape,intent.in_.cache,obj) - assert a.has_shared_memory(),`t.dtype` - - obj = array(self.num2seq,dtype=t.dtype,order='F') - a = self.array(shape,intent.in_.c.cache,obj) - assert a.has_shared_memory(),`t.dtype` - - a = self.array(shape,intent.in_.cache,obj) - assert a.has_shared_memory(),`t.dtype` - - try: - a = self.array(shape,intent.in_.cache,obj[::-1]) - except ValueError,msg: - if not str(msg).startswith('failed to initialize intent(cache) array'): - raise - else: - raise SystemError,'intent(cache) should have failed on multisegmented array' - def test_in_cache_from_2casttype_failure(self): - for t in self.type.all_types(): - if t.elsize >= self.type.elsize: - continue - obj = array(self.num2seq,dtype=t.dtype) - shape = (len(self.num2seq),) - 
try: - a = self.array(shape,intent.in_.cache,obj) - except ValueError,msg: - if not str(msg).startswith('failed to initialize intent(cache) array'): - raise - else: - raise SystemError,'intent(cache) should have failed on smaller array' - - def test_cache_hidden(self): - shape = (2,) - a = self.array(shape,intent.cache.hide,None) - assert a.arr.shape==shape - - shape = (2,3) - a = self.array(shape,intent.cache.hide,None) - assert a.arr.shape==shape - - shape = (-1,3) - try: - a = self.array(shape,intent.cache.hide,None) - except ValueError,msg: - if not str(msg).startswith('failed to create intent(cache|hide)|optional array'): - raise - else: - raise SystemError,'intent(cache) should have failed on undefined dimensions' - - def test_hidden(self): - shape = (2,) - a = self.array(shape,intent.hide,None) - assert a.arr.shape==shape - assert a.arr_equal(a.arr,zeros(shape,dtype=self.type.dtype)) - - shape = (2,3) - a = self.array(shape,intent.hide,None) - assert a.arr.shape==shape - assert a.arr_equal(a.arr,zeros(shape,dtype=self.type.dtype)) - assert a.arr.flags['FORTRAN'] and not a.arr.flags['CONTIGUOUS'] - - shape = (2,3) - a = self.array(shape,intent.c.hide,None) - assert a.arr.shape==shape - assert a.arr_equal(a.arr,zeros(shape,dtype=self.type.dtype)) - assert not a.arr.flags['FORTRAN'] and a.arr.flags['CONTIGUOUS'] - - shape = (-1,3) - try: - a = self.array(shape,intent.hide,None) - except ValueError,msg: - if not str(msg).startswith('failed to create intent(cache|hide)|optional array'): - raise - else: - raise SystemError,'intent(hide) should have failed on undefined dimensions' - - def test_optional_none(self): - shape = (2,) - a = self.array(shape,intent.optional,None) - assert a.arr.shape==shape - assert a.arr_equal(a.arr,zeros(shape,dtype=self.type.dtype)) - - shape = (2,3) - a = self.array(shape,intent.optional,None) - assert a.arr.shape==shape - assert a.arr_equal(a.arr,zeros(shape,dtype=self.type.dtype)) - assert a.arr.flags['FORTRAN'] and not 
a.arr.flags['CONTIGUOUS'] - - shape = (2,3) - a = self.array(shape,intent.c.optional,None) - assert a.arr.shape==shape - assert a.arr_equal(a.arr,zeros(shape,dtype=self.type.dtype)) - assert not a.arr.flags['FORTRAN'] and a.arr.flags['CONTIGUOUS'] - - def test_optional_from_2seq(self): - obj = self.num2seq - shape = (len(obj),) - a = self.array(shape,intent.optional,obj) - assert a.arr.shape==shape - assert not a.has_shared_memory() - - def test_optional_from_23seq(self): - obj = self.num23seq - shape = (len(obj),len(obj[0])) - a = self.array(shape,intent.optional,obj) - assert a.arr.shape==shape - assert not a.has_shared_memory() - - a = self.array(shape,intent.optional.c,obj) - assert a.arr.shape==shape - assert not a.has_shared_memory() - - def test_inplace(self): - obj = array(self.num23seq,dtype=self.type.dtype) - assert not obj.flags['FORTRAN'] and obj.flags['CONTIGUOUS'] - shape = obj.shape - a = self.array(shape,intent.inplace,obj) - assert obj[1][2]==a.arr[1][2],`obj,a.arr` - a.arr[1][2]=54 - assert obj[1][2]==a.arr[1][2]==array(54,dtype=self.type.dtype),`obj,a.arr` - assert a.arr is obj - assert obj.flags['FORTRAN'] # obj attributes are changed inplace! - assert not obj.flags['CONTIGUOUS'] - - def test_inplace_from_casttype(self): - for t in self.type.cast_types(): - if t is self.type: - continue - obj = array(self.num23seq,dtype=t.dtype) - assert obj.dtype.type==t.dtype - assert obj.dtype.type is not self.type.dtype - assert not obj.flags['FORTRAN'] and obj.flags['CONTIGUOUS'] - shape = obj.shape - a = self.array(shape,intent.inplace,obj) - assert obj[1][2]==a.arr[1][2],`obj,a.arr` - a.arr[1][2]=54 - assert obj[1][2]==a.arr[1][2]==array(54,dtype=self.type.dtype),`obj,a.arr` - assert a.arr is obj - assert obj.flags['FORTRAN'] # obj attributes are changed inplace! - assert not obj.flags['CONTIGUOUS'] - assert obj.dtype.type is self.type.dtype # obj type is changed inplace! 
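The tests above lean on the `Intent` helper defined earlier in this file, which turns attribute chaining (`intent.in_.c.copy`) into an accumulated f2py intent-flag bitmask. A minimal standalone sketch of that chaining idiom, with the flag values copied from the `F2PY_INTENT_*` defines in fortranobject.h (the real tests read them from the compiled `wrap` extension instead):

```python
# Intent-flag values mirroring the F2PY_INTENT_* defines in
# fortranobject.h; in the real tests they come from the wrap module.
_FLAGS = {'in': 1, 'inout': 2, 'out': 4, 'hide': 8, 'cache': 16,
          'copy': 32, 'c': 64, 'optional': 128, 'inplace': 256}

class Intent:
    def __init__(self, intent_list=None):
        self.intent_list = list(intent_list or [])
        self.flags = 0
        for name in self.intent_list:
            self.flags |= _FLAGS[name]

    def __getattr__(self, name):
        # Attribute access extends the chain: intent.in_.c.copy builds
        # Intent(['in', 'c', 'copy']).  'in_' stands in for the Python
        # keyword 'in'.
        name = name.lower()
        if name == 'in_':
            name = 'in'
        return self.__class__(self.intent_list + [name])

    def __str__(self):
        return 'intent(%s)' % ','.join(self.intent_list)

    def is_intent(self, *names):
        return all(name in self.intent_list for name in names)

    def is_intent_exact(self, *names):
        return len(self.intent_list) == len(names) and self.is_intent(*names)

intent = Intent()
print(intent.in_.c.copy)            # intent(in,c,copy)
print(intent.in_.c.is_intent('c'))  # True
```

Each chained attribute returns a fresh `Intent`, so partial chains like `intent.in_` can be reused safely across tests without mutating shared state.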
- - -for t in Type._type_names: - exec '''\ -class test_%s_gen(unittest.TestCase, - _test_shared_memory - ): - def setUp(self): - self.type = Type(%r) - array = lambda self,dims,intent,obj: Array(Type(%r),dims,intent,obj) -''' % (t,t,t) - -if __name__ == "__main__": - import nose - nose.runmodule() diff --git a/pythonPackages/numpy/numpy/f2py/tests/test_callback.py b/pythonPackages/numpy/numpy/f2py/tests/test_callback.py deleted file mode 100755 index b42e707760..0000000000 --- a/pythonPackages/numpy/numpy/f2py/tests/test_callback.py +++ /dev/null @@ -1,75 +0,0 @@ -from numpy.testing import * -from numpy import array -import math -import util - -class TestF77Callback(util.F2PyTest): - code = """ - subroutine t(fun,a) - integer a -cf2py intent(out) a - external fun - call fun(a) - end - - subroutine func(a) -cf2py intent(in,out) a - integer a - a = a + 11 - end - - subroutine func0(a) -cf2py intent(out) a - integer a - a = 11 - end - - subroutine t2(a) -cf2py intent(callback) fun - integer a -cf2py intent(out) a - external fun - call fun(a) - end - """ - - @dec.slow - def test_all(self): - for name in "t,t2".split(","): - self.check_function(name) - - def check_function(self, name): - t = getattr(self.module, name) - r = t(lambda : 4) - assert r==4,`r` - r = t(lambda a:5,fun_extra_args=(6,)) - assert r==5,`r` - r = t(lambda a:a,fun_extra_args=(6,)) - assert r==6,`r` - r = t(lambda a:5+a,fun_extra_args=(7,)) - assert r==12,`r` - r = t(lambda a:math.degrees(a),fun_extra_args=(math.pi,)) - assert r==180,`r` - r = t(math.degrees,fun_extra_args=(math.pi,)) - assert r==180,`r` - - r = t(self.module.func, fun_extra_args=(6,)) - assert r==17,`r` - r = t(self.module.func0) - assert r==11,`r` - r = t(self.module.func0._cpointer) - assert r==11,`r` - class A: - def __call__(self): - return 7 - def mth(self): - return 9 - a = A() - r = t(a) - assert r==7,`r` - r = t(a.mth) - assert r==9,`r` - -if __name__ == "__main__": - import nose - nose.runmodule() diff --git 
a/pythonPackages/numpy/numpy/f2py/tests/test_mixed.py b/pythonPackages/numpy/numpy/f2py/tests/test_mixed.py deleted file mode 100755 index e7a64c080d..0000000000 --- a/pythonPackages/numpy/numpy/f2py/tests/test_mixed.py +++ /dev/null @@ -1,25 +0,0 @@ -import os -import math - -from numpy.testing import * -from numpy import array - -import util - -def _path(*a): - return os.path.join(*((os.path.dirname(__file__),) + a)) - -class TestMixed(util.F2PyTest): - sources = [_path('src', 'mixed', 'foo.f'), - _path('src', 'mixed', 'foo_fixed.f90'), - _path('src', 'mixed', 'foo_free.f90')] - - @dec.slow - def test_all(self): - assert self.module.bar11() == 11 - assert self.module.foo_fixed.bar12() == 12 - assert self.module.foo_free.bar13() == 13 - -if __name__ == "__main__": - import nose - nose.runmodule() diff --git a/pythonPackages/numpy/numpy/f2py/tests/test_return_character.py b/pythonPackages/numpy/numpy/f2py/tests/test_return_character.py deleted file mode 100755 index a8ab4aaaca..0000000000 --- a/pythonPackages/numpy/numpy/f2py/tests/test_return_character.py +++ /dev/null @@ -1,140 +0,0 @@ -from numpy.testing import * -from numpy import array -from numpy.compat import asbytes -import util - -class TestReturnCharacter(util.F2PyTest): - def check_function(self, t): - tname = t.__doc__.split()[0] - if tname in ['t0','t1','s0','s1']: - assert t(23)==asbytes('2') - r = t('ab');assert r==asbytes('a'),`r` - r = t(array('ab'));assert r==asbytes('a'),`r` - r = t(array(77,'u1'));assert r==asbytes('M'),`r` - #assert_raises(ValueError, t, array([77,87])) - #assert_raises(ValueError, t, array(77)) - elif tname in ['ts','ss']: - assert t(23)==asbytes('23 '),`t(23)` - assert t('123456789abcdef')==asbytes('123456789a') - elif tname in ['t5','s5']: - assert t(23)==asbytes('23 '),`t(23)` - assert t('ab')==asbytes('ab '),`t('ab')` - assert t('123456789abcdef')==asbytes('12345') - else: - raise NotImplementedError - -class TestF77ReturnCharacter(TestReturnCharacter): - code = """ - 
function t0(value) - character value - character t0 - t0 = value - end - function t1(value) - character*1 value - character*1 t1 - t1 = value - end - function t5(value) - character*5 value - character*5 t5 - t5 = value - end - function ts(value) - character*(*) value - character*(*) ts - ts = value - end - - subroutine s0(t0,value) - character value - character t0 -cf2py intent(out) t0 - t0 = value - end - subroutine s1(t1,value) - character*1 value - character*1 t1 -cf2py intent(out) t1 - t1 = value - end - subroutine s5(t5,value) - character*5 value - character*5 t5 -cf2py intent(out) t5 - t5 = value - end - subroutine ss(ts,value) - character*(*) value - character*10 ts -cf2py intent(out) ts - ts = value - end - """ - - @dec.slow - def test_all(self): - for name in "t0,t1,t5,s0,s1,s5,ss".split(","): - self.check_function(getattr(self.module, name)) - -class TestF90ReturnCharacter(TestReturnCharacter): - suffix = ".f90" - code = """ -module f90_return_char - contains - function t0(value) - character :: value - character :: t0 - t0 = value - end function t0 - function t1(value) - character(len=1) :: value - character(len=1) :: t1 - t1 = value - end function t1 - function t5(value) - character(len=5) :: value - character(len=5) :: t5 - t5 = value - end function t5 - function ts(value) - character(len=*) :: value - character(len=10) :: ts - ts = value - end function ts - - subroutine s0(t0,value) - character :: value - character :: t0 -!f2py intent(out) t0 - t0 = value - end subroutine s0 - subroutine s1(t1,value) - character(len=1) :: value - character(len=1) :: t1 -!f2py intent(out) t1 - t1 = value - end subroutine s1 - subroutine s5(t5,value) - character(len=5) :: value - character(len=5) :: t5 -!f2py intent(out) t5 - t5 = value - end subroutine s5 - subroutine ss(ts,value) - character(len=*) :: value - character(len=10) :: ts -!f2py intent(out) ts - ts = value - end subroutine ss -end module f90_return_char - """ - - @dec.slow - def test_all(self): - for name in 
"t0,t1,t5,ts,s0,s1,s5,ss".split(","): - self.check_function(getattr(self.module.f90_return_char, name)) - -if __name__ == "__main__": - import nose - nose.runmodule() diff --git a/pythonPackages/numpy/numpy/f2py/tests/test_return_complex.py b/pythonPackages/numpy/numpy/f2py/tests/test_return_complex.py deleted file mode 100755 index cff10fcded..0000000000 --- a/pythonPackages/numpy/numpy/f2py/tests/test_return_complex.py +++ /dev/null @@ -1,166 +0,0 @@ -from numpy.testing import * -from numpy import array -import util - -class TestReturnComplex(util.F2PyTest): - def check_function(self, t): - tname = t.__doc__.split()[0] - if tname in ['t0','t8','s0','s8']: - err = 1e-5 - else: - err = 0.0 - assert abs(t(234j)-234.0j)<=err - assert abs(t(234.6)-234.6)<=err - assert abs(t(234l)-234.0)<=err - assert abs(t(234.6+3j)-(234.6+3j))<=err - #assert abs(t('234')-234.)<=err - #assert abs(t('234.6')-234.6)<=err - assert abs(t(-234)+234.)<=err - assert abs(t([234])-234.)<=err - assert abs(t((234,))-234.)<=err - assert abs(t(array(234))-234.)<=err - assert abs(t(array(23+4j,'F'))-(23+4j))<=err - assert abs(t(array([234]))-234.)<=err - assert abs(t(array([[234]]))-234.)<=err - assert abs(t(array([234],'b'))+22.)<=err - assert abs(t(array([234],'h'))-234.)<=err - assert abs(t(array([234],'i'))-234.)<=err - assert abs(t(array([234],'l'))-234.)<=err - assert abs(t(array([234],'q'))-234.)<=err - assert abs(t(array([234],'f'))-234.)<=err - assert abs(t(array([234],'d'))-234.)<=err - assert abs(t(array([234+3j],'F'))-(234+3j))<=err - assert abs(t(array([234],'D'))-234.)<=err - - #assert_raises(TypeError, t, array([234], 'a1')) - assert_raises(TypeError, t, 'abc') - - assert_raises(IndexError, t, []) - assert_raises(IndexError, t, ()) - - assert_raises(TypeError, t, t) - assert_raises(TypeError, t, {}) - - try: - r = t(10l**400) - assert `r` in ['(inf+0j)','(Infinity+0j)'],`r` - except OverflowError: - pass - - -class TestF77ReturnComplex(TestReturnComplex): - code = """ - function 
t0(value) - complex value - complex t0 - t0 = value - end - function t8(value) - complex*8 value - complex*8 t8 - t8 = value - end - function t16(value) - complex*16 value - complex*16 t16 - t16 = value - end - function td(value) - double complex value - double complex td - td = value - end - - subroutine s0(t0,value) - complex value - complex t0 -cf2py intent(out) t0 - t0 = value - end - subroutine s8(t8,value) - complex*8 value - complex*8 t8 -cf2py intent(out) t8 - t8 = value - end - subroutine s16(t16,value) - complex*16 value - complex*16 t16 -cf2py intent(out) t16 - t16 = value - end - subroutine sd(td,value) - double complex value - double complex td -cf2py intent(out) td - td = value - end - """ - - @dec.slow - def test_all(self): - for name in "t0,t8,t16,td,s0,s8,s16,sd".split(","): - self.check_function(getattr(self.module, name)) - - -class TestF90ReturnComplex(TestReturnComplex): - suffix = ".f90" - code = """ -module f90_return_complex - contains - function t0(value) - complex :: value - complex :: t0 - t0 = value - end function t0 - function t8(value) - complex(kind=4) :: value - complex(kind=4) :: t8 - t8 = value - end function t8 - function t16(value) - complex(kind=8) :: value - complex(kind=8) :: t16 - t16 = value - end function t16 - function td(value) - double complex :: value - double complex :: td - td = value - end function td - - subroutine s0(t0,value) - complex :: value - complex :: t0 -!f2py intent(out) t0 - t0 = value - end subroutine s0 - subroutine s8(t8,value) - complex(kind=4) :: value - complex(kind=4) :: t8 -!f2py intent(out) t8 - t8 = value - end subroutine s8 - subroutine s16(t16,value) - complex(kind=8) :: value - complex(kind=8) :: t16 -!f2py intent(out) t16 - t16 = value - end subroutine s16 - subroutine sd(td,value) - double complex :: value - double complex :: td -!f2py intent(out) td - td = value - end subroutine sd -end module f90_return_complex - """ - - @dec.slow - def test_all(self): - for name in 
"t0,t8,t16,td,s0,s8,s16,sd".split(","): - self.check_function(getattr(self.module.f90_return_complex, name)) - -if __name__ == "__main__": - import nose - nose.runmodule() diff --git a/pythonPackages/numpy/numpy/f2py/tests/test_return_integer.py b/pythonPackages/numpy/numpy/f2py/tests/test_return_integer.py deleted file mode 100755 index 9285cf47f5..0000000000 --- a/pythonPackages/numpy/numpy/f2py/tests/test_return_integer.py +++ /dev/null @@ -1,175 +0,0 @@ -from numpy.testing import * -from numpy import array -import util - -class TestReturnInteger(util.F2PyTest): - def check_function(self, t): - assert t(123)==123,`t(123)` - assert t(123.6)==123 - assert t(123l)==123 - assert t('123')==123 - assert t(-123)==-123 - assert t([123])==123 - assert t((123,))==123 - assert t(array(123))==123 - assert t(array([123]))==123 - assert t(array([[123]]))==123 - assert t(array([123],'b'))==123 - assert t(array([123],'h'))==123 - assert t(array([123],'i'))==123 - assert t(array([123],'l'))==123 - assert t(array([123],'B'))==123 - assert t(array([123],'f'))==123 - assert t(array([123],'d'))==123 - - #assert_raises(ValueError, t, array([123],'S3')) - assert_raises(ValueError, t, 'abc') - - assert_raises(IndexError, t, []) - assert_raises(IndexError, t, ()) - - assert_raises(Exception, t, t) - assert_raises(Exception, t, {}) - - if t.__doc__.split()[0] in ['t8','s8']: - assert_raises(OverflowError, t, 100000000000000000000000l) - assert_raises(OverflowError, t, 10000000011111111111111.23) - -class TestF77ReturnInteger(TestReturnInteger): - code = """ - function t0(value) - integer value - integer t0 - t0 = value - end - function t1(value) - integer*1 value - integer*1 t1 - t1 = value - end - function t2(value) - integer*2 value - integer*2 t2 - t2 = value - end - function t4(value) - integer*4 value - integer*4 t4 - t4 = value - end - function t8(value) - integer*8 value - integer*8 t8 - t8 = value - end - - subroutine s0(t0,value) - integer value - integer t0 -cf2py intent(out) 
t0 - t0 = value - end - subroutine s1(t1,value) - integer*1 value - integer*1 t1 -cf2py intent(out) t1 - t1 = value - end - subroutine s2(t2,value) - integer*2 value - integer*2 t2 -cf2py intent(out) t2 - t2 = value - end - subroutine s4(t4,value) - integer*4 value - integer*4 t4 -cf2py intent(out) t4 - t4 = value - end - subroutine s8(t8,value) - integer*8 value - integer*8 t8 -cf2py intent(out) t8 - t8 = value - end - """ - - @dec.slow - def test_all(self): - for name in "t0,t1,t2,t4,t8,s0,s1,s2,s4,s8".split(","): - self.check_function(getattr(self.module, name)) - - -class TestF90ReturnInteger(TestReturnInteger): - suffix = ".f90" - code = """ -module f90_return_integer - contains - function t0(value) - integer :: value - integer :: t0 - t0 = value - end function t0 - function t1(value) - integer(kind=1) :: value - integer(kind=1) :: t1 - t1 = value - end function t1 - function t2(value) - integer(kind=2) :: value - integer(kind=2) :: t2 - t2 = value - end function t2 - function t4(value) - integer(kind=4) :: value - integer(kind=4) :: t4 - t4 = value - end function t4 - function t8(value) - integer(kind=8) :: value - integer(kind=8) :: t8 - t8 = value - end function t8 - - subroutine s0(t0,value) - integer :: value - integer :: t0 -!f2py intent(out) t0 - t0 = value - end subroutine s0 - subroutine s1(t1,value) - integer(kind=1) :: value - integer(kind=1) :: t1 -!f2py intent(out) t1 - t1 = value - end subroutine s1 - subroutine s2(t2,value) - integer(kind=2) :: value - integer(kind=2) :: t2 -!f2py intent(out) t2 - t2 = value - end subroutine s2 - subroutine s4(t4,value) - integer(kind=4) :: value - integer(kind=4) :: t4 -!f2py intent(out) t4 - t4 = value - end subroutine s4 - subroutine s8(t8,value) - integer(kind=8) :: value - integer(kind=8) :: t8 -!f2py intent(out) t8 - t8 = value - end subroutine s8 -end module f90_return_integer - """ - - @dec.slow - def test_all(self): - for name in "t0,t1,t2,t4,t8,s0,s1,s2,s4,s8".split(","): - 
self.check_function(getattr(self.module.f90_return_integer, name)) - -if __name__ == "__main__": - import nose - nose.runmodule() diff --git a/pythonPackages/numpy/numpy/f2py/tests/test_return_logical.py b/pythonPackages/numpy/numpy/f2py/tests/test_return_logical.py deleted file mode 100755 index ab24517f2c..0000000000 --- a/pythonPackages/numpy/numpy/f2py/tests/test_return_logical.py +++ /dev/null @@ -1,184 +0,0 @@ -from numpy.testing import * -from numpy import array -import util - -class TestReturnLogical(util.F2PyTest): - def check_function(self, t): - assert t(True)==1,`t(True)` - assert t(False)==0,`t(False)` - assert t(0)==0 - assert t(None)==0 - assert t(0.0)==0 - assert t(0j)==0 - assert t(1j)==1 - assert t(234)==1 - assert t(234.6)==1 - assert t(234l)==1 - assert t(234.6+3j)==1 - assert t('234')==1 - assert t('aaa')==1 - assert t('')==0 - assert t([])==0 - assert t(())==0 - assert t({})==0 - assert t(t)==1 - assert t(-234)==1 - assert t(10l**100)==1 - assert t([234])==1 - assert t((234,))==1 - assert t(array(234))==1 - assert t(array([234]))==1 - assert t(array([[234]]))==1 - assert t(array([234],'b'))==1 - assert t(array([234],'h'))==1 - assert t(array([234],'i'))==1 - assert t(array([234],'l'))==1 - assert t(array([234],'f'))==1 - assert t(array([234],'d'))==1 - assert t(array([234+3j],'F'))==1 - assert t(array([234],'D'))==1 - assert t(array(0))==0 - assert t(array([0]))==0 - assert t(array([[0]]))==0 - assert t(array([0j]))==0 - assert t(array([1]))==1 - assert_raises(ValueError, t, array([0,0])) - - -class TestF77ReturnLogical(TestReturnLogical): - code = """ - function t0(value) - logical value - logical t0 - t0 = value - end - function t1(value) - logical*1 value - logical*1 t1 - t1 = value - end - function t2(value) - logical*2 value - logical*2 t2 - t2 = value - end - function t4(value) - logical*4 value - logical*4 t4 - t4 = value - end -c function t8(value) -c logical*8 value -c logical*8 t8 -c t8 = value -c end - - subroutine s0(t0,value) - 
logical value - logical t0 -cf2py intent(out) t0 - t0 = value - end - subroutine s1(t1,value) - logical*1 value - logical*1 t1 -cf2py intent(out) t1 - t1 = value - end - subroutine s2(t2,value) - logical*2 value - logical*2 t2 -cf2py intent(out) t2 - t2 = value - end - subroutine s4(t4,value) - logical*4 value - logical*4 t4 -cf2py intent(out) t4 - t4 = value - end -c subroutine s8(t8,value) -c logical*8 value -c logical*8 t8 -cf2py intent(out) t8 -c t8 = value -c end - """ - - @dec.slow - def test_all(self): - for name in "t0,t1,t2,t4,s0,s1,s2,s4".split(","): - self.check_function(getattr(self.module, name)) - -class TestF90ReturnLogical(TestReturnLogical): - suffix = ".f90" - code = """ -module f90_return_logical - contains - function t0(value) - logical :: value - logical :: t0 - t0 = value - end function t0 - function t1(value) - logical(kind=1) :: value - logical(kind=1) :: t1 - t1 = value - end function t1 - function t2(value) - logical(kind=2) :: value - logical(kind=2) :: t2 - t2 = value - end function t2 - function t4(value) - logical(kind=4) :: value - logical(kind=4) :: t4 - t4 = value - end function t4 - function t8(value) - logical(kind=8) :: value - logical(kind=8) :: t8 - t8 = value - end function t8 - - subroutine s0(t0,value) - logical :: value - logical :: t0 -!f2py intent(out) t0 - t0 = value - end subroutine s0 - subroutine s1(t1,value) - logical(kind=1) :: value - logical(kind=1) :: t1 -!f2py intent(out) t1 - t1 = value - end subroutine s1 - subroutine s2(t2,value) - logical(kind=2) :: value - logical(kind=2) :: t2 -!f2py intent(out) t2 - t2 = value - end subroutine s2 - subroutine s4(t4,value) - logical(kind=4) :: value - logical(kind=4) :: t4 -!f2py intent(out) t4 - t4 = value - end subroutine s4 - subroutine s8(t8,value) - logical(kind=8) :: value - logical(kind=8) :: t8 -!f2py intent(out) t8 - t8 = value - end subroutine s8 -end module f90_return_logical - """ - - @dec.slow - def test_all(self): - for name in 
"t0,t1,t2,t4,t8,s0,s1,s2,s4,s8".split(","): - self.check_function(getattr(self.module.f90_return_logical, name)) - -if __name__ == "__main__": - import nose - nose.runmodule() diff --git a/pythonPackages/numpy/numpy/f2py/tests/test_return_real.py b/pythonPackages/numpy/numpy/f2py/tests/test_return_real.py deleted file mode 100755 index eae098add7..0000000000 --- a/pythonPackages/numpy/numpy/f2py/tests/test_return_real.py +++ /dev/null @@ -1,200 +0,0 @@ -from numpy.testing import * -from numpy import array -import math -import util - -class TestReturnReal(util.F2PyTest): - def check_function(self, t): - if t.__doc__.split()[0] in ['t0','t4','s0','s4']: - err = 1e-5 - else: - err = 0.0 - assert abs(t(234)-234.0)<=err - assert abs(t(234.6)-234.6)<=err - assert abs(t(234l)-234.0)<=err - assert abs(t('234')-234)<=err - assert abs(t('234.6')-234.6)<=err - assert abs(t(-234)+234)<=err - assert abs(t([234])-234)<=err - assert abs(t((234,))-234.)<=err - assert abs(t(array(234))-234.)<=err - assert abs(t(array([234]))-234.)<=err - assert abs(t(array([[234]]))-234.)<=err - assert abs(t(array([234],'b'))+22)<=err - assert abs(t(array([234],'h'))-234.)<=err - assert abs(t(array([234],'i'))-234.)<=err - assert abs(t(array([234],'l'))-234.)<=err - assert abs(t(array([234],'B'))-234.)<=err - assert abs(t(array([234],'f'))-234.)<=err - assert abs(t(array([234],'d'))-234.)<=err - if t.__doc__.split()[0] in ['t0','t4','s0','s4']: - assert t(1e200)==t(1e300) # inf - - #assert_raises(ValueError, t, array([234], 'S1')) - assert_raises(ValueError, t, 'abc') - - assert_raises(IndexError, t, []) - assert_raises(IndexError, t, ()) - - assert_raises(Exception, t, t) - assert_raises(Exception, t, {}) - - try: - r = t(10l**400) - assert `r` in ['inf','Infinity'],`r` - except OverflowError: - pass - -class TestCReturnReal(TestReturnReal): - suffix = ".pyf" - module_name = "c_ext_return_real" - code = """ -python module c_ext_return_real -usercode \'\'\' -float t4(float value) { return value; } 
-void s4(float *t4, float value) { *t4 = value; } -double t8(double value) { return value; } -void s8(double *t8, double value) { *t8 = value; } -\'\'\' -interface - function t4(value) - real*4 intent(c) :: t4,value - end - function t8(value) - real*8 intent(c) :: t8,value - end - subroutine s4(t4,value) - intent(c) s4 - real*4 intent(out) :: t4 - real*4 intent(c) :: value - end - subroutine s8(t8,value) - intent(c) s8 - real*8 intent(out) :: t8 - real*8 intent(c) :: value - end -end interface -end python module c_ext_return_real - """ - - @dec.slow - def test_all(self): - for name in "t4,t8,s4,s8".split(","): - self.check_function(getattr(self.module, name)) - -class TestF77ReturnReal(TestReturnReal): - code = """ - function t0(value) - real value - real t0 - t0 = value - end - function t4(value) - real*4 value - real*4 t4 - t4 = value - end - function t8(value) - real*8 value - real*8 t8 - t8 = value - end - function td(value) - double precision value - double precision td - td = value - end - - subroutine s0(t0,value) - real value - real t0 -cf2py intent(out) t0 - t0 = value - end - subroutine s4(t4,value) - real*4 value - real*4 t4 -cf2py intent(out) t4 - t4 = value - end - subroutine s8(t8,value) - real*8 value - real*8 t8 -cf2py intent(out) t8 - t8 = value - end - subroutine sd(td,value) - double precision value - double precision td -cf2py intent(out) td - td = value - end - """ - - @dec.slow - def test_all(self): - for name in "t0,t4,t8,td,s0,s4,s8,sd".split(","): - self.check_function(getattr(self.module, name)) - -class TestF90ReturnReal(TestReturnReal): - suffix = ".f90" - code = """ -module f90_return_real - contains - function t0(value) - real :: value - real :: t0 - t0 = value - end function t0 - function t4(value) - real(kind=4) :: value - real(kind=4) :: t4 - t4 = value - end function t4 - function t8(value) - real(kind=8) :: value - real(kind=8) :: t8 - t8 = value - end function t8 - function td(value) - double precision :: value - double precision 
:: td - td = value - end function td - - subroutine s0(t0,value) - real :: value - real :: t0 -!f2py intent(out) t0 - t0 = value - end subroutine s0 - subroutine s4(t4,value) - real(kind=4) :: value - real(kind=4) :: t4 -!f2py intent(out) t4 - t4 = value - end subroutine s4 - subroutine s8(t8,value) - real(kind=8) :: value - real(kind=8) :: t8 -!f2py intent(out) t8 - t8 = value - end subroutine s8 - subroutine sd(td,value) - double precision :: value - double precision :: td -!f2py intent(out) td - td = value - end subroutine sd -end module f90_return_real - """ - - @dec.slow - def test_all(self): - for name in "t0,t4,t8,td,s0,s4,s8,sd".split(","): - self.check_function(getattr(self.module.f90_return_real, name)) - - -if __name__ == "__main__": - import nose - nose.runmodule() diff --git a/pythonPackages/numpy/numpy/f2py/tests/util.py b/pythonPackages/numpy/numpy/f2py/tests/util.py deleted file mode 100755 index 422d5965e9..0000000000 --- a/pythonPackages/numpy/numpy/f2py/tests/util.py +++ /dev/null @@ -1,346 +0,0 @@ -""" -Utility functions for - -- building and importing modules on test time, using a temporary location -- detecting if compilers are present - -""" - -import os -import sys -import subprocess -import tempfile -import shutil -import atexit -import textwrap -import re -import random - -import nose - -from numpy.compat import asbytes, asstr -import numpy.f2py - -try: - from hashlib import md5 -except ImportError: - from md5 import new as md5 - -# -# Maintaining a temporary module directory -# - -_module_dir = None - -def _cleanup(): - global _module_dir - if _module_dir is not None: - try: - sys.path.remove(_module_dir) - except ValueError: - pass - try: - shutil.rmtree(_module_dir) - except (IOError, OSError): - pass - _module_dir = None - -def get_module_dir(): - global _module_dir - if _module_dir is None: - _module_dir = tempfile.mkdtemp() - atexit.register(_cleanup) - if _module_dir not in sys.path: - sys.path.insert(0, _module_dir) - return 
_module_dir - -def get_temp_module_name(): - # Assume single-threaded, and the module dir usable only by this thread - d = get_module_dir() - for j in xrange(5403, 9999999): - name = "_test_ext_module_%d" % j - fn = os.path.join(d, name) - if name not in sys.modules and not os.path.isfile(fn+'.py'): - return name - raise RuntimeError("Failed to create a temporary module name") - -def _memoize(func): - memo = {} - def wrapper(*a, **kw): - key = repr((a, kw)) - if key not in memo: - try: - memo[key] = func(*a, **kw) - except Exception, e: - memo[key] = e - raise - ret = memo[key] - if isinstance(ret, Exception): - raise ret - return ret - wrapper.__name__ = func.__name__ - return wrapper - -# -# Building modules -# - -@_memoize -def build_module(source_files, options=[], skip=[], only=[], module_name=None): - """ - Compile and import a f2py module, built from the given files. - - """ - - code = ("import sys; sys.path = %s; import numpy.f2py as f2py2e; " - "f2py2e.main()" % repr(sys.path)) - - d = get_module_dir() - - # Copy files - dst_sources = [] - for fn in source_files: - if not os.path.isfile(fn): - raise RuntimeError("%s is not a file" % fn) - dst = os.path.join(d, os.path.basename(fn)) - shutil.copyfile(fn, dst) - dst_sources.append(dst) - - # Prepare options - if module_name is None: - module_name = get_temp_module_name() - f2py_opts = ['-c', '-m', module_name] + options + dst_sources - if skip: - f2py_opts += ['skip:'] + skip - if only: - f2py_opts += ['only:'] + only - - # Build - cwd = os.getcwd() - try: - os.chdir(d) - cmd = [sys.executable, '-c', code] + f2py_opts - p = subprocess.Popen(cmd, stdout=subprocess.PIPE, - stderr=subprocess.STDOUT) - out, err = p.communicate() - if p.returncode != 0: - raise RuntimeError("Running f2py failed: %s\n%s" - % (cmd[4:], asstr(out))) - finally: - os.chdir(cwd) - - # Partial cleanup - for fn in dst_sources: - os.unlink(fn) - - # Import - __import__(module_name) - return sys.modules[module_name] - -@_memoize -def 
build_code(source_code, options=[], skip=[], only=[], suffix=None, - module_name=None): - """ - Compile and import Fortran code using f2py. - - """ - if suffix is None: - suffix = '.f' - - fd, tmp_fn = tempfile.mkstemp(suffix=suffix) - os.write(fd, asbytes(source_code)) - os.close(fd) - - try: - return build_module([tmp_fn], options=options, skip=skip, only=only, - module_name=module_name) - finally: - os.unlink(tmp_fn) - -# -# Check if compilers are available at all... -# - -_compiler_status = None -def _get_compiler_status(): - global _compiler_status - if _compiler_status is not None: - return _compiler_status - - _compiler_status = (False, False, False) - - # XXX: this is really ugly. But I don't know how to invoke Distutils - # in a safer way... - code = """ -import os -import sys -sys.path = %(syspath)s - -def configuration(parent_name='',top_path=None): - global config - from numpy.distutils.misc_util import Configuration - config = Configuration('', parent_name, top_path) - return config - -from numpy.distutils.core import setup -setup(configuration=configuration) - -config_cmd = config.get_config_cmd() -have_c = config_cmd.try_compile('void foo() {}') -print('COMPILERS:%%d,%%d,%%d' %% (have_c, - config.have_f77c(), - config.have_f90c())) -sys.exit(99) -""" - code = code % dict(syspath=repr(sys.path)) - - fd, script = tempfile.mkstemp(suffix='.py') - os.write(fd, asbytes(code)) - os.close(fd) - - try: - cmd = [sys.executable, script, 'config'] - p = subprocess.Popen(cmd, stdout=subprocess.PIPE, - stderr=subprocess.STDOUT) - out, err = p.communicate() - m = re.search(asbytes(r'COMPILERS:(\d+),(\d+),(\d+)'), out) - if m: - _compiler_status = (bool(int(m.group(1))), bool(int(m.group(2))), - bool(int(m.group(3)))) - finally: - os.unlink(script) - - # Finished - return _compiler_status - -def has_c_compiler(): - return _get_compiler_status()[0] - -def has_f77_compiler(): - return _get_compiler_status()[1] - -def has_f90_compiler(): - return 
_get_compiler_status()[2] - -# -# Building with distutils -# - -@_memoize -def build_module_distutils(source_files, config_code, module_name, **kw): - """ - Build a module via distutils and import it. - - """ - from numpy.distutils.misc_util import Configuration - from numpy.distutils.core import setup - - d = get_module_dir() - - # Copy files - dst_sources = [] - for fn in source_files: - if not os.path.isfile(fn): - raise RuntimeError("%s is not a file" % fn) - dst = os.path.join(d, os.path.basename(fn)) - shutil.copyfile(fn, dst) - dst_sources.append(dst) - - # Build script - config_code = textwrap.dedent(config_code).replace("\n", "\n ") - - code = """\ -import os -import sys -sys.path = %(syspath)s - -def configuration(parent_name='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('', parent_name, top_path) - %(config_code)s - return config - -if __name__ == "__main__": - from numpy.distutils.core import setup - setup(configuration=configuration) -""" % dict(config_code=config_code, syspath = repr(sys.path)) - - script = os.path.join(d, get_temp_module_name() + '.py') - dst_sources.append(script) - f = open(script, 'wb') - f.write(asbytes(code)) - f.close() - - # Build - cwd = os.getcwd() - try: - os.chdir(d) - cmd = [sys.executable, script, 'build_ext', '-i'] - p = subprocess.Popen(cmd, stdout=subprocess.PIPE, - stderr=subprocess.STDOUT) - out, err = p.communicate() - if p.returncode != 0: - raise RuntimeError("Running distutils build failed: %s\n%s" - % (cmd[4:], asstr(out))) - finally: - os.chdir(cwd) - - # Partial cleanup - for fn in dst_sources: - os.unlink(fn) - - # Import - __import__(module_name) - return sys.modules[module_name] - -# -# Unittest convenience -# - -class F2PyTest(object): - code = None - sources = None - options = [] - skip = [] - only = [] - suffix = '.f' - module = None - module_name = None - - def setUp(self): - if self.module is not None: - return - - # Check compiler availability first 
- if not has_c_compiler(): - raise nose.SkipTest("No C compiler available") - - codes = [] - if self.sources: - codes.extend(self.sources) - if self.code is not None: - codes.append(self.suffix) - - needs_f77 = False - needs_f90 = False - for fn in codes: - if fn.endswith('.f'): - needs_f77 = True - elif fn.endswith('.f90'): - needs_f90 = True - if needs_f77 and not has_f77_compiler(): - raise nose.SkipTest("No Fortran 77 compiler available") - if needs_f90 and not has_f90_compiler(): - raise nose.SkipTest("No Fortran 90 compiler available") - - # Build the module - if self.code is not None: - self.module = build_code(self.code, options=self.options, - skip=self.skip, only=self.only, - suffix=self.suffix, - module_name=self.module_name) - - if self.sources is not None: - self.module = build_module(self.sources, options=self.options, - skip=self.skip, only=self.only, - module_name=self.module_name) diff --git a/pythonPackages/numpy/numpy/f2py/use_rules.py b/pythonPackages/numpy/numpy/f2py/use_rules.py deleted file mode 100755 index 021d08601d..0000000000 --- a/pythonPackages/numpy/numpy/f2py/use_rules.py +++ /dev/null @@ -1,107 +0,0 @@ -#!/usr/bin/env python -""" - -Build 'use others module data' mechanism for f2py2e. - -Unfinished. - -Copyright 2000 Pearu Peterson all rights reserved, -Pearu Peterson -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. 
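The `F2PyTest.setUp` logic above gates each test on compiler availability: `.f` sources require a Fortran 77 compiler, `.f90` sources a Fortran 90 one, and a missing compiler raises `nose.SkipTest`. As an editorial aside, a minimal stdlib-only sketch of that suffix dispatch (the function name and file names here are hypothetical, not part of `util.py`):

```python
def needed_compilers(filenames):
    """Return (needs_f77, needs_f90) for a list of Fortran source files,
    mirroring the suffix checks in F2PyTest.setUp above."""
    needs_f77 = any(fn.endswith('.f') for fn in filenames)
    needs_f90 = any(fn.endswith('.f90') for fn in filenames)
    return needs_f77, needs_f90

# '.f' sources need an F77 compiler, '.f90' sources an F90 compiler.
needed_compilers(['return_real.f', 'return_complex.f90'])  # → (True, True)
```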
-$Date: 2000/09/10 12:35:43 $ -Pearu Peterson -""" - -__version__ = "$Revision: 1.3 $"[10:-1] - -f2py_version='See `f2py -v`' - -import pprint -import sys -errmess=sys.stderr.write -outmess=sys.stdout.write -show=pprint.pprint - -from auxfuncs import * -############## - -usemodule_rules={ - 'body':""" -#begintitle# -static char doc_#apiname#[] = \"\\\nVariable wrapper signature:\\n\\ -\t #name# = get_#name#()\\n\\ -Arguments:\\n\\ -#docstr#\"; -extern F_MODFUNC(#usemodulename#,#USEMODULENAME#,#realname#,#REALNAME#); -static PyObject *#apiname#(PyObject *capi_self, PyObject *capi_args) { -/*#decl#*/ -\tif (!PyArg_ParseTuple(capi_args, \"\")) goto capi_fail; -printf(\"c: %d\\n\",F_MODFUNC(#usemodulename#,#USEMODULENAME#,#realname#,#REALNAME#)); -\treturn Py_BuildValue(\"\"); -capi_fail: -\treturn NULL; -} -""", - 'method':'\t{\"get_#name#\",#apiname#,METH_VARARGS|METH_KEYWORDS,doc_#apiname#},', - 'need':['F_MODFUNC'] - } - -################ - -def buildusevars(m,r): - ret={} - outmess('\t\tBuilding use variable hooks for module "%s" (feature only for F90/F95)...\n'%(m['name'])) - varsmap={} - revmap={} - if 'map' in r: - for k in r['map'].keys(): - if r['map'][k] in revmap: - outmess('\t\t\tVariable "%s<=%s" is already mapped by "%s". Skipping.\n'%(r['map'][k],k,revmap[r['map'][k]])) - else: - revmap[r['map'][k]]=k - if 'only' in r and r['only']: - for v in r['map'].keys(): - if r['map'][v] in m['vars']: - - if revmap[r['map'][v]]==v: - varsmap[v]=r['map'][v] - else: - outmess('\t\t\tIgnoring map "%s=>%s". See above.\n'%(v,r['map'][v])) - else: - outmess('\t\t\tNo definition for variable "%s=>%s". 
Skipping.\n'%(v,r['map'][v])) - else: - for v in m['vars'].keys(): - if v in revmap: - varsmap[v]=revmap[v] - else: - varsmap[v]=v - for v in varsmap.keys(): - ret=dictappend(ret,buildusevar(v,varsmap[v],m['vars'],m['name'])) - return ret -def buildusevar(name,realname,vars,usemodulename): - outmess('\t\t\tConstructing wrapper function for variable "%s=>%s"...\n'%(name,realname)) - ret={} - vrd={'name':name, - 'realname':realname, - 'REALNAME':realname.upper(), - 'usemodulename':usemodulename, - 'USEMODULENAME':usemodulename.upper(), - 'texname':name.replace('_','\\_'), - 'begintitle':gentitle('%s=>%s'%(name,realname)), - 'endtitle':gentitle('end of %s=>%s'%(name,realname)), - 'apiname':'#modulename#_use_%s_from_%s'%(realname,usemodulename) - } - nummap={0:'Ro',1:'Ri',2:'Rii',3:'Riii',4:'Riv',5:'Rv',6:'Rvi',7:'Rvii',8:'Rviii',9:'Rix'} - vrd['texnamename']=name - for i in nummap.keys(): - vrd['texnamename']=vrd['texnamename'].replace(`i`,nummap[i]) - if hasnote(vars[realname]): vrd['note']=vars[realname]['note'] - rd=dictappend({},vrd) - var=vars[realname] - - print name,realname,vars[realname] - ret=applyrules(usemodule_rules,rd) - return ret diff --git a/pythonPackages/numpy/numpy/fft/SConscript b/pythonPackages/numpy/numpy/fft/SConscript deleted file mode 100755 index adceea0112..0000000000 --- a/pythonPackages/numpy/numpy/fft/SConscript +++ /dev/null @@ -1,8 +0,0 @@ -# Last Change: Thu Jun 12 06:00 PM 2008 J -# vim:syntax=python -from numscons import GetNumpyEnvironment - -env = GetNumpyEnvironment(ARGUMENTS) - -env.NumpyPythonExtension('fftpack_lite', - source = ['fftpack_litemodule.c', 'fftpack.c']) diff --git a/pythonPackages/numpy/numpy/fft/SConstruct b/pythonPackages/numpy/numpy/fft/SConstruct deleted file mode 100755 index a377d8391b..0000000000 --- a/pythonPackages/numpy/numpy/fft/SConstruct +++ /dev/null @@ -1,2 +0,0 @@ -from numscons import GetInitEnvironment -GetInitEnvironment(ARGUMENTS).DistutilsSConscript('SConscript') diff --git 
a/pythonPackages/numpy/numpy/fft/__init__.py b/pythonPackages/numpy/numpy/fft/__init__.py deleted file mode 100755 index 324e39f4d0..0000000000 --- a/pythonPackages/numpy/numpy/fft/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# To get sub-modules -from info import __doc__ - -from fftpack import * -from helper import * - -from numpy.testing import Tester -test = Tester().test -bench = Tester().bench diff --git a/pythonPackages/numpy/numpy/fft/fftpack.c b/pythonPackages/numpy/numpy/fft/fftpack.c deleted file mode 100755 index 9c8fd118a2..0000000000 --- a/pythonPackages/numpy/numpy/fft/fftpack.c +++ /dev/null @@ -1,1501 +0,0 @@ -/* -fftpack.c : A set of FFT routines in C. -Algorithmically based on Fortran-77 FFTPACK by Paul N. Swarztrauber (Version 4, 1985). - -*/ - -/* isign is +1 for backward and -1 for forward transforms */ - -#include -#include -#define DOUBLE - -#ifdef DOUBLE -#define Treal double -#else -#define Treal float -#endif - - -#define ref(u,a) u[a] - -#define MAXFAC 13 /* maximum number of factors in factorization of n */ -#define NSPECIAL 4 /* number of factors for which we have special-case routines */ - -#ifdef __cplusplus -extern "C" { -#endif - - -/* ---------------------------------------------------------------------- - passf2, passf3, passf4, passf5, passf. Complex FFT passes fwd and bwd. 
----------------------------------------------------------------------- */ - -static void passf2(int ido, int l1, const Treal cc[], Treal ch[], const Treal wa1[], int isign) - /* isign==+1 for backward transform */ - { - int i, k, ah, ac; - Treal ti2, tr2; - if (ido <= 2) { - for (k=0; k= l1) { - for (j=1; j idp) idlj -= idp; - war = wa[idlj - 2]; - wai = wa[idlj-1]; - for (ik=0; ik= l1) { - for (j=1; j= l1) { - for (k=0; k= l1) { - for (j=1; j= l1) { - for (k=0; k= l1) { - for (j=1; j= l1) { - for (j=1; j 5) { - wa[i1-1] = wa[i-1]; - wa[i1] = wa[i]; - } - } - l1 = l2; - } - } /* cffti1 */ - - -void cffti(int n, Treal wsave[]) - { - int iw1, iw2; - if (n == 1) return; - iw1 = 2*n; - iw2 = iw1 + 2*n; - cffti1(n, wsave+iw1, (int*)(wsave+iw2)); - } /* cffti */ - - /* ---------------------------------------------------------------------- -rfftf1, rfftb1, rfftf, rfftb, rffti1, rffti. Treal FFTs. ----------------------------------------------------------------------- */ - -static void rfftf1(int n, Treal c[], Treal ch[], const Treal wa[], const int ifac[MAXFAC+2]) - { - int i; - int k1, l1, l2, na, kh, nf, ip, iw, ix2, ix3, ix4, ido, idl1; - Treal *cinput, *coutput; - nf = ifac[1]; - na = 1; - l2 = n; - iw = n-1; - for (k1 = 1; k1 <= nf; ++k1) { - kh = nf - k1; - ip = ifac[kh + 2]; - l1 = l2 / ip; - ido = n / l2; - idl1 = ido*l1; - iw -= (ip - 1)*ido; - na = !na; - if (na) { - cinput = ch; - coutput = c; - } else { - cinput = c; - coutput = ch; - } - switch (ip) { - case 4: - ix2 = iw + ido; - ix3 = ix2 + ido; - radf4(ido, l1, cinput, coutput, &wa[iw], &wa[ix2], &wa[ix3]); - break; - case 2: - radf2(ido, l1, cinput, coutput, &wa[iw]); - break; - case 3: - ix2 = iw + ido; - radf3(ido, l1, cinput, coutput, &wa[iw], &wa[ix2]); - break; - case 5: - ix2 = iw + ido; - ix3 = ix2 + ido; - ix4 = ix3 + ido; - radf5(ido, l1, cinput, coutput, &wa[iw], &wa[ix2], &wa[ix3], &wa[ix4]); - break; - default: - if (ido == 1) - na = !na; - if (na == 0) { - radfg(ido, ip, l1, idl1, c, ch, 
&wa[iw]); - na = 1; - } else { - radfg(ido, ip, l1, idl1, ch, c, &wa[iw]); - na = 0; - } - } - l2 = l1; - } - if (na == 1) return; - for (i = 0; i < n; i++) c[i] = ch[i]; - } /* rfftf1 */ - - -void rfftb1(int n, Treal c[], Treal ch[], const Treal wa[], const int ifac[MAXFAC+2]) - { - int i; - int k1, l1, l2, na, nf, ip, iw, ix2, ix3, ix4, ido, idl1; - Treal *cinput, *coutput; - nf = ifac[1]; - na = 0; - l1 = 1; - iw = 0; - for (k1=1; k1<=nf; k1++) { - ip = ifac[k1 + 1]; - l2 = ip*l1; - ido = n / l2; - idl1 = ido*l1; - if (na) { - cinput = ch; - coutput = c; - } else { - cinput = c; - coutput = ch; - } - switch (ip) { - case 4: - ix2 = iw + ido; - ix3 = ix2 + ido; - radb4(ido, l1, cinput, coutput, &wa[iw], &wa[ix2], &wa[ix3]); - na = !na; - break; - case 2: - radb2(ido, l1, cinput, coutput, &wa[iw]); - na = !na; - break; - case 3: - ix2 = iw + ido; - radb3(ido, l1, cinput, coutput, &wa[iw], &wa[ix2]); - na = !na; - break; - case 5: - ix2 = iw + ido; - ix3 = ix2 + ido; - ix4 = ix3 + ido; - radb5(ido, l1, cinput, coutput, &wa[iw], &wa[ix2], &wa[ix3], &wa[ix4]); - na = !na; - break; - default: - radbg(ido, ip, l1, idl1, cinput, coutput, &wa[iw]); - if (ido == 1) na = !na; - } - l1 = l2; - iw += (ip - 1)*ido; - } - if (na == 0) return; - for (i=0; i n: - index = [slice(None)]*len(s) - index[axis] = slice(0,n) - a = a[index] - else: - index = [slice(None)]*len(s) - index[axis] = slice(0,s[axis]) - s[axis] = n - z = zeros(s, a.dtype.char) - z[index] = a - a = z - - if axis != -1: - a = swapaxes(a, axis, -1) - r = work_function(a, wsave) - if axis != -1: - r = swapaxes(r, axis, -1) - return r - - -def fft(a, n=None, axis=-1): - """ - Compute the one-dimensional discrete Fourier Transform. - - This function computes the one-dimensional *n*-point discrete Fourier - Transform (DFT) with the efficient Fast Fourier Transform (FFT) - algorithm [CT]. - - Parameters - ---------- - a : array_like - Input array, can be complex. 
- n : int, optional - Length of the transformed axis of the output. - If `n` is smaller than the length of the input, the input is cropped. - If it is larger, the input is padded with zeros. If `n` is not given, - the length of the input (along the axis specified by `axis`) is used. - axis : int, optional - Axis over which to compute the FFT. If not given, the last axis is - used. - - Returns - ------- - out : complex ndarray - The truncated or zero-padded input, transformed along the axis - indicated by `axis`, or the last one if `axis` is not specified. - - Raises - ------ - IndexError - if `axes` is larger than the last axis of `a`. - - See Also - -------- - numpy.fft : for definition of the DFT and conventions used. - ifft : The inverse of `fft`. - fft2 : The two-dimensional FFT. - fftn : The *n*-dimensional FFT. - rfftn : The *n*-dimensional FFT of real input. - fftfreq : Frequency bins for given FFT parameters. - - Notes - ----- - FFT (Fast Fourier Transform) refers to a way the discrete Fourier - Transform (DFT) can be calculated efficiently, by using symmetries in the - calculated terms. The symmetry is highest when `n` is a power of 2, and - the transform is therefore most efficient for these sizes. - - The DFT is defined, with the conventions used in this implementation, in - the documentation for the `numpy.fft` module. - - References - ---------- - .. [CT] Cooley, James W., and John W. Tukey, 1965, "An algorithm for the - machine calculation of complex Fourier series," *Math. Comput.* - 19: 297-301. 
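The docstring above states the DFT convention from [CT]. As an editorial aside, a direct O(n²) evaluation of that formula, using only the standard library, shows exactly what `np.fft.fft` computes — a sketch of the definition, not the FFTPACK algorithm the module actually wraps:

```python
import cmath

def naive_dft(a):
    """Direct O(n^2) DFT with the convention used by np.fft.fft:
    A[k] = sum_m a[m] * exp(-2j*pi*m*k/n)."""
    n = len(a)
    return [sum(a[m] * cmath.exp(-2j * cmath.pi * m * k / n)
                for m in range(n))
            for k in range(n)]

# A unit impulse transforms to an all-ones spectrum, matching
# np.fft.fft([1, 0, 0, 0]) up to floating-point error.
spectrum = naive_dft([1, 0, 0, 0])
```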
- - Examples - -------- - >>> np.fft.fft(np.exp(2j * np.pi * np.arange(8) / 8)) - array([ -3.44505240e-16 +1.14383329e-17j, - 8.00000000e+00 -5.71092652e-15j, - 2.33482938e-16 +1.22460635e-16j, - 1.64863782e-15 +1.77635684e-15j, - 9.95839695e-17 +2.33482938e-16j, - 0.00000000e+00 +1.66837030e-15j, - 1.14383329e-17 +1.22460635e-16j, - -1.64863782e-15 +1.77635684e-15j]) - - >>> import matplotlib.pyplot as plt - >>> t = np.arange(256) - >>> sp = np.fft.fft(np.sin(t)) - >>> freq = np.fft.fftfreq(t.shape[-1]) - >>> plt.plot(freq, sp.real, freq, sp.imag) - [, ] - >>> plt.show() - - In this example, real input has an FFT which is Hermitian, i.e., symmetric - in the real part and anti-symmetric in the imaginary part, as described in - the `numpy.fft` documentation. - - """ - - return _raw_fft(a, n, axis, fftpack.cffti, fftpack.cfftf, _fft_cache) - - -def ifft(a, n=None, axis=-1): - """ - Compute the one-dimensional inverse discrete Fourier Transform. - - This function computes the inverse of the one-dimensional *n*-point - discrete Fourier transform computed by `fft`. In other words, - ``ifft(fft(a)) == a`` to within numerical accuracy. - For a general description of the algorithm and definitions, - see `numpy.fft`. - - The input should be ordered in the same way as is returned by `fft`, - i.e., ``a[0]`` should contain the zero frequency term, - ``a[1:n/2+1]`` should contain the positive-frequency terms, and - ``a[n/2+1:]`` should contain the negative-frequency terms, in order of - decreasingly negative frequency. See `numpy.fft` for details. - - Parameters - ---------- - a : array_like - Input array, can be complex. - n : int, optional - Length of the transformed axis of the output. - If `n` is smaller than the length of the input, the input is cropped. - If it is larger, the input is padded with zeros. If `n` is not given, - the length of the input (along the axis specified by `axis`) is used. - See notes about padding issues. 
- axis : int, optional - Axis over which to compute the inverse DFT. If not given, the last - axis is used. - - Returns - ------- - out : complex ndarray - The truncated or zero-padded input, transformed along the axis - indicated by `axis`, or the last one if `axis` is not specified. - - Raises - ------ - IndexError - If `axes` is larger than the last axis of `a`. - - See Also - -------- - numpy.fft : An introduction, with definitions and general explanations. - fft : The one-dimensional (forward) FFT, of which `ifft` is the inverse - ifft2 : The two-dimensional inverse FFT. - ifftn : The n-dimensional inverse FFT. - - Notes - ----- - If the input parameter `n` is larger than the size of the input, the input - is padded by appending zeros at the end. Even though this is the common - approach, it might lead to surprising results. If a different padding is - desired, it must be performed before calling `ifft`. - - Examples - -------- - >>> np.fft.ifft([0, 4, 0, 0]) - array([ 1.+0.j, 0.+1.j, -1.+0.j, 0.-1.j]) - - Create and plot a band-limited signal with random phases: - - >>> import matplotlib.pyplot as plt - >>> t = np.arange(400) - >>> n = np.zeros((400,), dtype=complex) - >>> n[40:60] = np.exp(1j*np.random.uniform(0, 2*np.pi, (20,))) - >>> s = np.fft.ifft(n) - >>> plt.plot(t, s.real, 'b-', t, s.imag, 'r--') - [, ] - >>> plt.legend(('real', 'imaginary')) - - >>> plt.show() - - """ - - a = asarray(a).astype(complex) - if n is None: - n = shape(a)[axis] - return _raw_fft(a, n, axis, fftpack.cffti, fftpack.cfftb, _fft_cache) / n - - -def rfft(a, n=None, axis=-1): - """ - Compute the one-dimensional discrete Fourier Transform for real input. - - This function computes the one-dimensional *n*-point discrete Fourier - Transform (DFT) of a real-valued array by means of an efficient algorithm - called the Fast Fourier Transform (FFT). 
-
- Parameters
- ----------
- a : array_like
- Input array.
- n : int, optional
- Number of points along transformation axis in the input to use.
- If `n` is smaller than the length of the input, the input is cropped.
- If it is larger, the input is padded with zeros. If `n` is not given,
- the length of the input (along the axis specified by `axis`) is used.
- axis : int, optional
- Axis over which to compute the FFT. If not given, the last axis is
- used.
-
- Returns
- -------
- out : complex ndarray
- The truncated or zero-padded input, transformed along the axis
- indicated by `axis`, or the last one if `axis` is not specified.
- The length of the transformed axis is ``n/2+1``.
-
- Raises
- ------
- IndexError
- If `axis` is larger than the last axis of `a`.
-
- See Also
- --------
- numpy.fft : For definition of the DFT and conventions used.
- irfft : The inverse of `rfft`.
- fft : The one-dimensional FFT of general (complex) input.
- fftn : The *n*-dimensional FFT.
- rfftn : The *n*-dimensional FFT of real input.
-
- Notes
- -----
- When the DFT is computed for purely real input, the output is
- Hermite-symmetric, i.e. the negative frequency terms are just the complex
- conjugates of the corresponding positive-frequency terms, and the
- negative-frequency terms are therefore redundant. This function does not
- compute the negative frequency terms, and the length of the transformed
- axis of the output is therefore ``n/2+1``.
-
- When ``A = rfft(a)``, ``A[0]`` contains the zero-frequency term, which
- must be purely real due to the Hermite symmetry.
-
- If `n` is even, ``A[-1]`` contains the term for frequencies ``n/2`` and
- ``-n/2``, and must also be purely real. If `n` is odd, ``A[-1]``
- contains the term for frequency ``(n-1)/2``, and is complex in the
- general case.
-
- If the input `a` contains an imaginary part, it is silently discarded.
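The Hermitian redundancy described in these notes can be sketched directly (a small illustration with arbitrary sizes, assuming NumPy as `np`):

```python
import numpy as np

x = np.random.rand(8)       # purely real input, n = 8
A = np.fft.fft(x)           # full complex spectrum, length 8
R = np.fft.rfft(x)          # non-negative frequencies only, length 8/2 + 1 = 5
assert R.shape == (5,)

# rfft keeps exactly the first n/2+1 terms of the full FFT ...
assert np.allclose(A[:5], R)
# ... and the dropped negative-frequency terms are the complex
# conjugates of the kept positive-frequency terms.
assert np.allclose(A[5:], A[1:4][::-1].conj())
```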
- - Examples - -------- - >>> np.fft.fft([0, 1, 0, 0]) - array([ 1.+0.j, 0.-1.j, -1.+0.j, 0.+1.j]) - >>> np.fft.rfft([0, 1, 0, 0]) - array([ 1.+0.j, 0.-1.j, -1.+0.j]) - - Notice how the final element of the `fft` output is the complex conjugate - of the second element, for real input. For `rfft`, this symmetry is - exploited to compute only the non-negative frequency terms. - - """ - - a = asarray(a).astype(float) - return _raw_fft(a, n, axis, fftpack.rffti, fftpack.rfftf, _real_fft_cache) - - -def irfft(a, n=None, axis=-1): - """ - Compute the inverse of the n-point DFT for real input. - - This function computes the inverse of the one-dimensional *n*-point - discrete Fourier Transform of real input computed by `rfft`. - In other words, ``irfft(rfft(a), len(a)) == a`` to within numerical - accuracy. (See Notes below for why ``len(a)`` is necessary here.) - - The input is expected to be in the form returned by `rfft`, i.e. the - real zero-frequency term followed by the complex positive frequency terms - in order of increasing frequency. Since the discrete Fourier Transform of - real input is Hermite-symmetric, the negative frequency terms are taken - to be the complex conjugates of the corresponding positive frequency terms. - - Parameters - ---------- - a : array_like - The input array. - n : int, optional - Length of the transformed axis of the output. - For `n` output points, ``n/2+1`` input points are necessary. If the - input is longer than this, it is cropped. If it is shorter than this, - it is padded with zeros. If `n` is not given, it is determined from - the length of the input (along the axis specified by `axis`). - axis : int, optional - Axis over which to compute the inverse FFT. - - Returns - ------- - out : ndarray - The truncated or zero-padded input, transformed along the axis - indicated by `axis`, or the last one if `axis` is not specified. 
- The length of the transformed axis is `n`, or, if `n` is not given, - ``2*(m-1)`` where `m` is the length of the transformed axis of the - input. To get an odd number of output points, `n` must be specified. - - Raises - ------ - IndexError - If `axis` is larger than the last axis of `a`. - - See Also - -------- - numpy.fft : For definition of the DFT and conventions used. - rfft : The one-dimensional FFT of real input, of which `irfft` is inverse. - fft : The one-dimensional FFT. - irfft2 : The inverse of the two-dimensional FFT of real input. - irfftn : The inverse of the *n*-dimensional FFT of real input. - - Notes - ----- - Returns the real valued `n`-point inverse discrete Fourier transform - of `a`, where `a` contains the non-negative frequency terms of a - Hermite-symmetric sequence. `n` is the length of the result, not the - input. - - If you specify an `n` such that `a` must be zero-padded or truncated, the - extra/removed values will be added/removed at high frequencies. One can - thus resample a series to `m` points via Fourier interpolation by: - ``a_resamp = irfft(rfft(a), m)``. - - - Examples - -------- - >>> np.fft.ifft([1, -1j, -1, 1j]) - array([ 0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j]) - >>> np.fft.irfft([1, -1j, -1]) - array([ 0., 1., 0., 0.]) - - Notice how the last term in the input to the ordinary `ifft` is the - complex conjugate of the second term, and the output has zero imaginary - part everywhere. When calling `irfft`, the negative frequencies are not - specified, and the output array is purely real. - - """ - - a = asarray(a).astype(complex) - if n is None: - n = (shape(a)[axis] - 1) * 2 - return _raw_fft(a, n, axis, fftpack.rffti, fftpack.rfftb, - _real_fft_cache) / n - - -def hfft(a, n=None, axis=-1): - """ - Compute the FFT of a signal whose spectrum has Hermitian symmetry. - - Parameters - ---------- - a : array_like - The input array. - n : int, optional - The length of the FFT. 
- axis : int, optional - The axis over which to compute the FFT, assuming Hermitian symmetry - of the spectrum. Default is the last axis. - - Returns - ------- - out : ndarray - The transformed input. - - See also - -------- - rfft : Compute the one-dimensional FFT for real input. - ihfft : The inverse of `hfft`. - - Notes - ----- - `hfft`/`ihfft` are a pair analogous to `rfft`/`irfft`, but for the - opposite case: here the signal is real in the frequency domain and has - Hermite symmetry in the time domain. So here it's `hfft` for which - you must supply the length of the result if it is to be odd: - ``ihfft(hfft(a), len(a)) == a``, within numerical accuracy. - - Examples - -------- - >>> signal = np.array([[1, 1.j], [-1.j, 2]]) - >>> np.conj(signal.T) - signal # check Hermitian symmetry - array([[ 0.-0.j, 0.+0.j], - [ 0.+0.j, 0.-0.j]]) - >>> freq_spectrum = np.fft.hfft(signal) - >>> freq_spectrum - array([[ 1., 1.], - [ 2., -2.]]) - - """ - - a = asarray(a).astype(complex) - if n is None: - n = (shape(a)[axis] - 1) * 2 - return irfft(conjugate(a), n, axis) * n - - -def ihfft(a, n=None, axis=-1): - """ - Compute the inverse FFT of a signal whose spectrum has Hermitian symmetry. - - Parameters - ---------- - a : array_like - Input array. - n : int, optional - Length of the inverse FFT. - axis : int, optional - Axis over which to compute the inverse FFT, assuming Hermitian - symmetry of the spectrum. Default is the last axis. - - Returns - ------- - out : ndarray - The transformed input. - - See also - -------- - hfft, irfft - - Notes - ----- - `hfft`/`ihfft` are a pair analogous to `rfft`/`irfft`, but for the - opposite case: here the signal is real in the frequency domain and has - Hermite symmetry in the time domain. So here it's `hfft` for which - you must supply the length of the result if it is to be odd: - ``ihfft(hfft(a), len(a)) == a``, within numerical accuracy. 
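The `hfft`/`ihfft` pairing and the odd-length caveat repeated in both docstrings can be sketched as follows (an illustration with arbitrary sizes, assuming NumPy as `np`):

```python
import numpy as np

sig = np.random.rand(16)      # a real-valued "spectrum"
half = np.fft.ihfft(sig)      # Hermitian half-spectrum, length 16/2 + 1 = 9
assert half.shape == (9,)
back = np.fft.hfft(half)      # default output length is 2*(9 - 1) = 16
assert np.allclose(back, sig)

# For an odd original length the size cannot be inferred from the
# half-spectrum and must be passed explicitly:
sig_odd = np.random.rand(15)
assert np.allclose(np.fft.hfft(np.fft.ihfft(sig_odd), 15), sig_odd)
```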
- - """ - - a = asarray(a).astype(float) - if n is None: - n = shape(a)[axis] - return conjugate(rfft(a, n, axis))/n - - -def _cook_nd_args(a, s=None, axes=None, invreal=0): - if s is None: - shapeless = 1 - if axes is None: - s = list(a.shape) - else: - s = take(a.shape, axes) - else: - shapeless = 0 - s = list(s) - if axes is None: - axes = range(-len(s), 0) - if len(s) != len(axes): - raise ValueError, "Shape and axes have different lengths." - if invreal and shapeless: - s[axes[-1]] = (s[axes[-1]] - 1) * 2 - return s, axes - - -def _raw_fftnd(a, s=None, axes=None, function=fft): - a = asarray(a) - s, axes = _cook_nd_args(a, s, axes) - itl = range(len(axes)) - itl.reverse() - for ii in itl: - a = function(a, n=s[ii], axis=axes[ii]) - return a - - -def fftn(a, s=None, axes=None): - """ - Compute the N-dimensional discrete Fourier Transform. - - This function computes the *N*-dimensional discrete Fourier Transform over - any number of axes in an *M*-dimensional array by means of the Fast Fourier - Transform (FFT). - - Parameters - ---------- - a : array_like - Input array, can be complex. - s : sequence of ints, optional - Shape (length of each transformed axis) of the output - (`s[0]` refers to axis 0, `s[1]` to axis 1, etc.). - This corresponds to `n` for `fft(x, n)`. - Along any axis, if the given shape is smaller than that of the input, - the input is cropped. If it is larger, the input is padded with zeros. - if `s` is not given, the shape of the input (along the axes specified - by `axes`) is used. - axes : sequence of ints, optional - Axes over which to compute the FFT. If not given, the last ``len(s)`` - axes are used, or all axes if `s` is also not specified. - Repeated indices in `axes` means that the transform over that axis is - performed multiple times. 
-
- Returns
- -------
- out : complex ndarray
- The truncated or zero-padded input, transformed along the axes
- indicated by `axes`, or by a combination of `s` and `a`,
- as explained in the parameters section above.
-
- Raises
- ------
- ValueError
- If `s` and `axes` have different length.
- IndexError
- If an element of `axes` is larger than the number of axes of `a`.
-
- See Also
- --------
- numpy.fft : Overall view of discrete Fourier transforms, with definitions
- and conventions used.
- ifftn : The inverse of `fftn`, the inverse *n*-dimensional FFT.
- fft : The one-dimensional FFT, with definitions and conventions used.
- rfftn : The *n*-dimensional FFT of real input.
- fft2 : The two-dimensional FFT.
- fftshift : Shifts zero-frequency terms to centre of array.
-
- Notes
- -----
- The output, analogously to `fft`, contains the term for zero frequency in
- the low-order corner of all axes, the positive frequency terms in the
- first half of all axes, the term for the Nyquist frequency in the middle
- of all axes and the negative frequency terms in the second half of all
- axes, in order of decreasingly negative frequency.
-
- See `numpy.fft` for details, definitions and conventions used.
-
- Examples
- --------
- >>> a = np.mgrid[:3, :3, :3][0]
- >>> np.fft.fftn(a, axes=(1, 2))
- array([[[ 0.+0.j, 0.+0.j, 0.+0.j],
- [ 0.+0.j, 0.+0.j, 0.+0.j],
- [ 0.+0.j, 0.+0.j, 0.+0.j]],
- [[ 9.+0.j, 0.+0.j, 0.+0.j],
- [ 0.+0.j, 0.+0.j, 0.+0.j],
- [ 0.+0.j, 0.+0.j, 0.+0.j]],
- [[ 18.+0.j, 0.+0.j, 0.+0.j],
- [ 0.+0.j, 0.+0.j, 0.+0.j],
- [ 0.+0.j, 0.+0.j, 0.+0.j]]])
- >>> np.fft.fftn(a, (2, 2), axes=(0, 1))
- array([[[ 2.+0.j, 2.+0.j, 2.+0.j],
- [ 0.+0.j, 0.+0.j, 0.+0.j]],
- [[-2.+0.j, -2.+0.j, -2.+0.j],
- [ 0.+0.j, 0.+0.j, 0.+0.j]]])
-
- >>> import matplotlib.pyplot as plt
- >>> [X, Y] = np.meshgrid(2 * np.pi * np.arange(200) / 12,
- ...
2 * np.pi * np.arange(200) / 34)
- >>> S = np.sin(X) + np.cos(Y) + np.random.uniform(0, 1, X.shape)
- >>> FS = np.fft.fftn(S)
- >>> plt.imshow(np.log(np.abs(np.fft.fftshift(FS))**2))
- <matplotlib.image.AxesImage object at 0x...>
- >>> plt.show()
-
- """
-
- return _raw_fftnd(a,s,axes,fft)
-
-def ifftn(a, s=None, axes=None):
- """
- Compute the N-dimensional inverse discrete Fourier Transform.
-
- This function computes the inverse of the N-dimensional discrete
- Fourier Transform over any number of axes in an M-dimensional array by
- means of the Fast Fourier Transform (FFT). In other words,
- ``ifftn(fftn(a)) == a`` to within numerical accuracy.
- For a description of the definitions and conventions used, see `numpy.fft`.
-
- The input, analogously to `ifft`, should be ordered in the same way as is
- returned by `fftn`, i.e. it should have the term for zero frequency
- in all axes in the low-order corner, the positive frequency terms in the
- first half of all axes, the term for the Nyquist frequency in the middle
- of all axes and the negative frequency terms in the second half of all
- axes, in order of decreasingly negative frequency.
-
- Parameters
- ----------
- a : array_like
- Input array, can be complex.
- s : sequence of ints, optional
- Shape (length of each transformed axis) of the output
- (``s[0]`` refers to axis 0, ``s[1]`` to axis 1, etc.).
- This corresponds to ``n`` for ``ifft(x, n)``.
- Along any axis, if the given shape is smaller than that of the input,
- the input is cropped. If it is larger, the input is padded with zeros.
- If `s` is not given, the shape of the input (along the axes specified
- by `axes`) is used. See notes for issue on `ifft` zero padding.
- axes : sequence of ints, optional
- Axes over which to compute the IFFT. If not given, the last ``len(s)``
- axes are used, or all axes if `s` is also not specified.
- Repeated indices in `axes` mean that the inverse transform over that
- axis is performed multiple times.
-
- Returns
- -------
- out : complex ndarray
- The truncated or zero-padded input, transformed along the axes
- indicated by `axes`, or by a combination of `s` or `a`,
- as explained in the parameters section above.
-
- Raises
- ------
- ValueError
- If `s` and `axes` have different length.
- IndexError
- If an element of `axes` is larger than the number of axes of `a`.
-
- See Also
- --------
- numpy.fft : Overall view of discrete Fourier transforms, with definitions
- and conventions used.
- fftn : The forward *n*-dimensional FFT, of which `ifftn` is the inverse.
- ifft : The one-dimensional inverse FFT.
- ifft2 : The two-dimensional inverse FFT.
- ifftshift : Undoes `fftshift`, shifts zero-frequency terms to beginning
- of array.
-
- Notes
- -----
- See `numpy.fft` for definitions and conventions used.
-
- Zero-padding, analogously with `ifft`, is performed by appending zeros to
- the input along the specified dimension. Although this is the common
- approach, it might lead to surprising results. If another form of zero
- padding is desired, it must be performed before `ifftn` is called.
-
- Examples
- --------
- >>> a = np.eye(4)
- >>> np.fft.ifftn(np.fft.fftn(a, axes=(0,)), axes=(1,))
- array([[ 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
- [ 0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j],
- [ 0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j],
- [ 0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j]])
-
-
- Create and plot an image with band-limited frequency content:
-
- >>> import matplotlib.pyplot as plt
- >>> n = np.zeros((200,200), dtype=complex)
- >>> n[60:80, 20:40] = np.exp(1j*np.random.uniform(0, 2*np.pi, (20, 20)))
- >>> im = np.fft.ifftn(n).real
- >>> plt.imshow(im)
- <matplotlib.image.AxesImage object at 0x...>
- >>> plt.show()
-
- """
-
- return _raw_fftnd(a, s, axes, ifft)
-
-
-def fft2(a, s=None, axes=(-2,-1)):
- """
- Compute the 2-dimensional discrete Fourier Transform.
-
- This function computes the *n*-dimensional discrete Fourier Transform
- over any axes in an *M*-dimensional array by means of the
- Fast Fourier Transform (FFT).
By default, the transform is computed over
- the last two axes of the input array, i.e., a 2-dimensional FFT.
-
- Parameters
- ----------
- a : array_like
- Input array, can be complex.
- s : sequence of ints, optional
- Shape (length of each transformed axis) of the output
- (`s[0]` refers to axis 0, `s[1]` to axis 1, etc.).
- This corresponds to `n` for `fft(x, n)`.
- Along each axis, if the given shape is smaller than that of the input,
- the input is cropped. If it is larger, the input is padded with zeros.
- If `s` is not given, the shape of the input (along the axes specified
- by `axes`) is used.
- axes : sequence of ints, optional
- Axes over which to compute the FFT. If not given, the last two
- axes are used. A repeated index in `axes` means the transform over
- that axis is performed multiple times. A one-element sequence means
- that a one-dimensional FFT is performed.
-
- Returns
- -------
- out : complex ndarray
- The truncated or zero-padded input, transformed along the axes
- indicated by `axes`, or the last two axes if `axes` is not given.
-
- Raises
- ------
- ValueError
- If `s` and `axes` have different length, or `axes` not given and
- ``len(s) != 2``.
- IndexError
- If an element of `axes` is larger than the number of axes of `a`.
-
- See Also
- --------
- numpy.fft : Overall view of discrete Fourier transforms, with definitions
- and conventions used.
- ifft2 : The inverse two-dimensional FFT.
- fft : The one-dimensional FFT.
- fftn : The *n*-dimensional FFT.
- fftshift : Shifts zero-frequency terms to the center of the array.
- For two-dimensional input, swaps first and third quadrants, and second
- and fourth quadrants.
-
- Notes
- -----
- `fft2` is just `fftn` with a different default for `axes`.
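That equivalence is easy to confirm directly (a quick sketch, assuming NumPy as `np` and an arbitrary 2-D array):

```python
import numpy as np

a = np.random.rand(4, 6) + 1j * np.random.rand(4, 6)
# For a 2-D array, fft2 and fftn agree, since fftn's default is to
# transform all axes -- here exactly the last two.
assert np.allclose(np.fft.fft2(a), np.fft.fftn(a))
assert np.allclose(np.fft.fft2(a), np.fft.fftn(a, axes=(-2, -1)))
```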
- - The output, analogously to `fft`, contains the term for zero frequency in - the low-order corner of the transformed axes, the positive frequency terms - in the first half of these axes, the term for the Nyquist frequency in the - middle of the axes and the negative frequency terms in the second half of - the axes, in order of decreasingly negative frequency. - - See `fftn` for details and a plotting example, and `numpy.fft` for - definitions and conventions used. - - - Examples - -------- - >>> a = np.mgrid[:5, :5][0] - >>> np.fft.fft2(a) - array([[ 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], - [ 5.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], - [ 10.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], - [ 15.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], - [ 20.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]]) - - """ - - return _raw_fftnd(a,s,axes,fft) - - -def ifft2(a, s=None, axes=(-2,-1)): - """ - Compute the 2-dimensional inverse discrete Fourier Transform. - - This function computes the inverse of the 2-dimensional discrete Fourier - Transform over any number of axes in an M-dimensional array by means of - the Fast Fourier Transform (FFT). In other words, ``ifft2(fft2(a)) == a`` - to within numerical accuracy. By default, the inverse transform is - computed over the last two axes of the input array. - - The input, analogously to `ifft`, should be ordered in the same way as is - returned by `fft2`, i.e. it should have the term for zero frequency - in the low-order corner of the two axes, the positive frequency terms in - the first half of these axes, the term for the Nyquist frequency in the - middle of the axes and the negative frequency terms in the second half of - both axes, in order of decreasingly negative frequency. - - Parameters - ---------- - a : array_like - Input array, can be complex. - s : sequence of ints, optional - Shape (length of each axis) of the output (``s[0]`` refers to axis 0, - ``s[1]`` to axis 1, etc.). This corresponds to `n` for ``ifft(x, n)``. 
-
- Along each axis, if the given shape is smaller than that of the input,
- the input is cropped. If it is larger, the input is padded with zeros.
- If `s` is not given, the shape of the input (along the axes specified
- by `axes`) is used. See notes for issue on `ifft` zero padding.
- axes : sequence of ints, optional
- Axes over which to compute the FFT. If not given, the last two
- axes are used. A repeated index in `axes` means the transform over
- that axis is performed multiple times. A one-element sequence means
- that a one-dimensional FFT is performed.
-
- Returns
- -------
- out : complex ndarray
- The truncated or zero-padded input, transformed along the axes
- indicated by `axes`, or the last two axes if `axes` is not given.
-
- Raises
- ------
- ValueError
- If `s` and `axes` have different length, or `axes` not given and
- ``len(s) != 2``.
- IndexError
- If an element of `axes` is larger than the number of axes of `a`.
-
- See Also
- --------
- numpy.fft : Overall view of discrete Fourier transforms, with definitions
- and conventions used.
- fft2 : The forward 2-dimensional FFT, of which `ifft2` is the inverse.
- ifftn : The inverse of the *n*-dimensional FFT.
- fft : The one-dimensional FFT.
- ifft : The one-dimensional inverse FFT.
-
- Notes
- -----
- `ifft2` is just `ifftn` with a different default for `axes`.
-
- See `ifftn` for details and a plotting example, and `numpy.fft` for
- definitions and conventions used.
-
- Zero-padding, analogously with `ifft`, is performed by appending zeros to
- the input along the specified dimension. Although this is the common
- approach, it might lead to surprising results. If another form of zero
- padding is desired, it must be performed before `ifft2` is called.
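The `ifft2`/`ifftn` relationship and the round trip with `fft2` can be sketched as follows (assuming NumPy as `np` and an arbitrary 2-D array):

```python
import numpy as np

a = np.random.rand(4, 4) + 1j * np.random.rand(4, 4)
# ifft2 inverts fft2 to within numerical accuracy ...
assert np.allclose(np.fft.ifft2(np.fft.fft2(a)), a)
# ... and is the same as ifftn restricted to the last two axes.
assert np.allclose(np.fft.ifft2(a), np.fft.ifftn(a, axes=(-2, -1)))
```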
-
- Examples
- --------
- >>> a = 4 * np.eye(4)
- >>> np.fft.ifft2(a)
- array([[ 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
- [ 0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j],
- [ 0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j],
- [ 0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j]])
-
- """
-
- return _raw_fftnd(a, s, axes, ifft)
-
-
-def rfftn(a, s=None, axes=None):
- """
- Compute the N-dimensional discrete Fourier Transform for real input.
-
- This function computes the N-dimensional discrete Fourier Transform over
- any number of axes in an M-dimensional real array by means of the Fast
- Fourier Transform (FFT). By default, all axes are transformed, with the
- real transform performed over the last axis, while the remaining
- transforms are complex.
-
- Parameters
- ----------
- a : array_like
- Input array, taken to be real.
- s : sequence of ints, optional
- Shape (length along each transformed axis) to use from the input.
- (``s[0]`` refers to axis 0, ``s[1]`` to axis 1, etc.).
- The final element of `s` corresponds to `n` for ``rfft(x, n)``, while
- for the remaining axes, it corresponds to `n` for ``fft(x, n)``.
- Along any axis, if the given shape is smaller than that of the input,
- the input is cropped. If it is larger, the input is padded with zeros.
- If `s` is not given, the shape of the input (along the axes specified
- by `axes`) is used.
- axes : sequence of ints, optional
- Axes over which to compute the FFT. If not given, the last ``len(s)``
- axes are used, or all axes if `s` is also not specified.
-
- Returns
- -------
- out : complex ndarray
- The truncated or zero-padded input, transformed along the axes
- indicated by `axes`, or by a combination of `s` and `a`,
- as explained in the parameters section above.
- The length of the last axis transformed will be ``s[-1]//2+1``,
- while the remaining transformed axes will have lengths according to
- `s`, or unchanged from the input.
-
- Raises
- ------
- ValueError
- If `s` and `axes` have different length.
-
- IndexError
- If an element of `axes` is larger than the number of axes of `a`.
-
- See Also
- --------
- irfftn : The inverse of `rfftn`, i.e. the inverse of the n-dimensional FFT
- of real input.
- fft : The one-dimensional FFT, with definitions and conventions used.
- rfft : The one-dimensional FFT of real input.
- fftn : The n-dimensional FFT.
- rfft2 : The two-dimensional FFT of real input.
-
- Notes
- -----
- The transform for real input is performed over the last transformation
- axis, as by `rfft`, then the transform over the remaining axes is
- performed as by `fftn`. The order of the output is as for `rfft` for the
- final transformation axis, and as for `fftn` for the remaining
- transformation axes.
-
- See `fft` for details, definitions and conventions used.
-
- Examples
- --------
- >>> a = np.ones((2, 2, 2))
- >>> np.fft.rfftn(a)
- array([[[ 8.+0.j, 0.+0.j],
- [ 0.+0.j, 0.+0.j]],
- [[ 0.+0.j, 0.+0.j],
- [ 0.+0.j, 0.+0.j]]])
-
- >>> np.fft.rfftn(a, axes=(2, 0))
- array([[[ 4.+0.j, 0.+0.j],
- [ 4.+0.j, 0.+0.j]],
- [[ 0.+0.j, 0.+0.j],
- [ 0.+0.j, 0.+0.j]]])
-
- """
-
- a = asarray(a).astype(float)
- s, axes = _cook_nd_args(a, s, axes)
- a = rfft(a, s[-1], axes[-1])
- for ii in range(len(axes)-1):
- a = fft(a, s[ii], axes[ii])
- return a
-
-def rfft2(a, s=None, axes=(-2,-1)):
- """
- Compute the 2-dimensional FFT of a real array.
-
- Parameters
- ----------
- a : array
- Input array, taken to be real.
- s : sequence of ints, optional
- Shape of the FFT.
- axes : sequence of ints, optional
- Axes over which to compute the FFT.
-
- Returns
- -------
- out : ndarray
- The result of the real 2-D FFT.
-
- See Also
- --------
- rfftn : Compute the N-dimensional discrete Fourier Transform for real
- input.
-
- Notes
- -----
- This is really just `rfftn` with different default behavior.
- For more details see `rfftn`.
- - """ - - return rfftn(a, s, axes) - -def irfftn(a, s=None, axes=None): - """ - Compute the inverse of the N-dimensional FFT of real input. - - This function computes the inverse of the N-dimensional discrete - Fourier Transform for real input over any number of axes in an - M-dimensional array by means of the Fast Fourier Transform (FFT). In - other words, ``irfftn(rfftn(a), a.shape) == a`` to within numerical - accuracy. (The ``a.shape`` is necessary like ``len(a)`` is for `irfft`, - and for the same reason.) - - The input should be ordered in the same way as is returned by `rfftn`, - i.e. as for `irfft` for the final transformation axis, and as for `ifftn` - along all the other axes. - - Parameters - ---------- - a : array_like - Input array. - s : sequence of ints, optional - Shape (length of each transformed axis) of the output - (``s[0]`` refers to axis 0, ``s[1]`` to axis 1, etc.). `s` is also the - number of input points used along this axis, except for the last axis, - where ``s[-1]//2+1`` points of the input are used. - Along any axis, if the shape indicated by `s` is smaller than that of - the input, the input is cropped. If it is larger, the input is padded - with zeros. If `s` is not given, the shape of the input (along the - axes specified by `axes`) is used. - axes : sequence of ints, optional - Axes over which to compute the inverse FFT. If not given, the last - `len(s)` axes are used, or all axes if `s` is also not specified. - Repeated indices in `axes` means that the inverse transform over that - axis is performed multiple times. - - Returns - ------- - out : ndarray - The truncated or zero-padded input, transformed along the axes - indicated by `axes`, or by a combination of `s` or `a`, - as explained in the parameters section above. - The length of each transformed axis is as given by the corresponding - element of `s`, or the length of the input in every axis except for the - last one if `s` is not given. 
In the final transformed axis the length
- of the output when `s` is not given is ``2*(m-1)`` where `m` is the
- length of the final transformed axis of the input. To get an odd
- number of output points in the final axis, `s` must be specified.
-
- Raises
- ------
- ValueError
- If `s` and `axes` have different length.
- IndexError
- If an element of `axes` is larger than the number of axes of `a`.
-
- See Also
- --------
- rfftn : The forward n-dimensional FFT of real input,
- of which `irfftn` is the inverse.
- fft : The one-dimensional FFT, with definitions and conventions used.
- irfft : The inverse of the one-dimensional FFT of real input.
- irfft2 : The inverse of the two-dimensional FFT of real input.
-
- Notes
- -----
- See `fft` for definitions and conventions used.
-
- See `rfft` for definitions and conventions used for real input.
-
- Examples
- --------
- >>> a = np.zeros((3, 2, 2))
- >>> a[0, 0, 0] = 3 * 2 * 2
- >>> np.fft.irfftn(a)
- array([[[ 1., 1.],
- [ 1., 1.]],
- [[ 1., 1.],
- [ 1., 1.]],
- [[ 1., 1.],
- [ 1., 1.]]])
-
- """
-
- a = asarray(a).astype(complex)
- s, axes = _cook_nd_args(a, s, axes, invreal=1)
- for ii in range(len(axes)-1):
- a = ifft(a, s[ii], axes[ii])
- a = irfft(a, s[-1], axes[-1])
- return a
-
-def irfft2(a, s=None, axes=(-2,-1)):
- """
- Compute the 2-dimensional inverse FFT of a real array.
-
- Parameters
- ----------
- a : array_like
- The input array.
- s : sequence of ints, optional
- Shape of the inverse FFT.
- axes : sequence of ints, optional
- The axes over which to compute the inverse FFT.
- Default is the last two axes.
-
- Returns
- -------
- out : ndarray
- The result of the inverse real 2-D FFT.
-
- See Also
- --------
- irfftn : Compute the inverse of the N-dimensional FFT of real input.
-
- Notes
- -----
- This is really `irfftn` with different defaults.
- For more details see `irfftn`.
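The `rfftn`/`irfftn` round trip and the last-axis shape ambiguity discussed above can be sketched as follows (an illustration with an arbitrary odd-length last axis, assuming NumPy as `np`):

```python
import numpy as np

a = np.random.rand(3, 4, 5)           # odd-length last axis
spec = np.fft.rfftn(a)                # last axis becomes 5//2 + 1 = 3
assert spec.shape == (3, 4, 3)

# Without the original shape, the last axis is reconstructed as the
# even length 2*(3 - 1) = 4, so a is NOT recovered:
assert np.fft.irfftn(spec).shape == (3, 4, 4)

# Passing a.shape resolves the ambiguity and restores the input:
assert np.allclose(np.fft.irfftn(spec, a.shape), a)
```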
- - """ - - return irfftn(a, s, axes) - -# Deprecated names -from numpy import deprecate -refft = deprecate(rfft, 'refft', 'rfft') -irefft = deprecate(irfft, 'irefft', 'irfft') -refft2 = deprecate(rfft2, 'refft2', 'rfft2') -irefft2 = deprecate(irfft2, 'irefft2', 'irfft2') -refftn = deprecate(rfftn, 'refftn', 'rfftn') -irefftn = deprecate(irfftn, 'irefftn', 'irfftn') diff --git a/pythonPackages/numpy/numpy/fft/fftpack_litemodule.c b/pythonPackages/numpy/numpy/fft/fftpack_litemodule.c deleted file mode 100755 index 21343574d8..0000000000 --- a/pythonPackages/numpy/numpy/fft/fftpack_litemodule.c +++ /dev/null @@ -1,353 +0,0 @@ -#include "fftpack.h" -#include "Python.h" -#include "numpy/arrayobject.h" - -static PyObject *ErrorObject; - -/* ----------------------------------------------------- */ - -static char fftpack_cfftf__doc__[] = ""; - -PyObject * -fftpack_cfftf(PyObject *NPY_UNUSED(self), PyObject *args) -{ - PyObject *op1, *op2; - PyArrayObject *data; - PyArray_Descr *descr; - double *wsave, *dptr; - npy_intp nsave; - int npts, nrepeats, i; - - if(!PyArg_ParseTuple(args, "OO", &op1, &op2)) { - return NULL; - } - data = (PyArrayObject *)PyArray_CopyFromObject(op1, - PyArray_CDOUBLE, 1, 0); - if (data == NULL) { - return NULL; - } - descr = PyArray_DescrFromType(PyArray_DOUBLE); - if (PyArray_AsCArray(&op2, (void *)&wsave, &nsave, 1, descr) == -1) { - goto fail; - } - if (data == NULL) { - goto fail; - } - - npts = data->dimensions[data->nd - 1]; - if (nsave != npts*4 + 15) { - PyErr_SetString(ErrorObject, "invalid work array for fft size"); - goto fail; - } - - nrepeats = PyArray_SIZE(data)/npts; - dptr = (double *)data->data; - NPY_SIGINT_ON; - for (i = 0; i < nrepeats; i++) { - cfftf(npts, dptr, wsave); - dptr += npts*2; - } - NPY_SIGINT_OFF; - PyArray_Free(op2, (char *)wsave); - return (PyObject *)data; - -fail: - PyArray_Free(op2, (char *)wsave); - Py_DECREF(data); - return NULL; -} - -static char fftpack_cfftb__doc__[] = ""; - -PyObject * 
-fftpack_cfftb(PyObject *NPY_UNUSED(self), PyObject *args) -{ - PyObject *op1, *op2; - PyArrayObject *data; - PyArray_Descr *descr; - double *wsave, *dptr; - npy_intp nsave; - int npts, nrepeats, i; - - if(!PyArg_ParseTuple(args, "OO", &op1, &op2)) { - return NULL; - } - data = (PyArrayObject *)PyArray_CopyFromObject(op1, - PyArray_CDOUBLE, 1, 0); - if (data == NULL) { - return NULL; - } - descr = PyArray_DescrFromType(PyArray_DOUBLE); - if (PyArray_AsCArray(&op2, (void *)&wsave, &nsave, 1, descr) == -1) { - goto fail; - } - if (data == NULL) { - goto fail; - } - - npts = data->dimensions[data->nd - 1]; - if (nsave != npts*4 + 15) { - PyErr_SetString(ErrorObject, "invalid work array for fft size"); - goto fail; - } - - nrepeats = PyArray_SIZE(data)/npts; - dptr = (double *)data->data; - NPY_SIGINT_ON; - for (i = 0; i < nrepeats; i++) { - cfftb(npts, dptr, wsave); - dptr += npts*2; - } - NPY_SIGINT_OFF; - PyArray_Free(op2, (char *)wsave); - return (PyObject *)data; - -fail: - PyArray_Free(op2, (char *)wsave); - Py_DECREF(data); - return NULL; -} - -static char fftpack_cffti__doc__[] =""; - -static PyObject * -fftpack_cffti(PyObject *NPY_UNUSED(self), PyObject *args) -{ - PyArrayObject *op; - npy_intp dim; - long n; - - if (!PyArg_ParseTuple(args, "l", &n)) { - return NULL; - } - /*Magic size needed by cffti*/ - dim = 4*n + 15; - /*Create a 1 dimensional array of dimensions of type double*/ - op = (PyArrayObject *)PyArray_SimpleNew(1, &dim, PyArray_DOUBLE); - if (op == NULL) { - return NULL; - } - - NPY_SIGINT_ON; - cffti(n, (double *)((PyArrayObject*)op)->data); - NPY_SIGINT_OFF; - - return (PyObject *)op; -} - -static char fftpack_rfftf__doc__[] =""; - -PyObject * -fftpack_rfftf(PyObject *NPY_UNUSED(self), PyObject *args) -{ - PyObject *op1, *op2; - PyArrayObject *data, *ret; - PyArray_Descr *descr; - double *wsave, *dptr, *rptr; - npy_intp nsave; - int npts, nrepeats, i, rstep; - - if(!PyArg_ParseTuple(args, "OO", &op1, &op2)) { - return NULL; - } - data = 
(PyArrayObject *)PyArray_ContiguousFromObject(op1, - PyArray_DOUBLE, 1, 0); - if (data == NULL) { - return NULL; - } - npts = data->dimensions[data->nd-1]; - data->dimensions[data->nd - 1] = npts/2 + 1; - ret = (PyArrayObject *)PyArray_Zeros(data->nd, data->dimensions, - PyArray_DescrFromType(PyArray_CDOUBLE), 0); - data->dimensions[data->nd - 1] = npts; - rstep = (ret->dimensions[ret->nd - 1])*2; - - descr = PyArray_DescrFromType(PyArray_DOUBLE); - if (PyArray_AsCArray(&op2, (void *)&wsave, &nsave, 1, descr) == -1) { - goto fail; - } - if (data == NULL || ret == NULL) { - goto fail; - } - if (nsave != npts*2+15) { - PyErr_SetString(ErrorObject, "invalid work array for fft size"); - goto fail; - } - - nrepeats = PyArray_SIZE(data)/npts; - rptr = (double *)ret->data; - dptr = (double *)data->data; - - - NPY_SIGINT_ON; - for (i = 0; i < nrepeats; i++) { - memcpy((char *)(rptr+1), dptr, npts*sizeof(double)); - rfftf(npts, rptr+1, wsave); - rptr[0] = rptr[1]; - rptr[1] = 0.0; - rptr += rstep; - dptr += npts; - } - NPY_SIGINT_OFF; - PyArray_Free(op2, (char *)wsave); - Py_DECREF(data); - return (PyObject *)ret; - -fail: - PyArray_Free(op2, (char *)wsave); - Py_XDECREF(data); - Py_XDECREF(ret); - return NULL; -} - -static char fftpack_rfftb__doc__[] =""; - - -PyObject * -fftpack_rfftb(PyObject *NPY_UNUSED(self), PyObject *args) -{ - PyObject *op1, *op2; - PyArrayObject *data, *ret; - PyArray_Descr *descr; - double *wsave, *dptr, *rptr; - npy_intp nsave; - int npts, nrepeats, i; - - if(!PyArg_ParseTuple(args, "OO", &op1, &op2)) { - return NULL; - } - data = (PyArrayObject *)PyArray_ContiguousFromObject(op1, - PyArray_CDOUBLE, 1, 0); - if (data == NULL) { - return NULL; - } - npts = data->dimensions[data->nd - 1]; - ret = (PyArrayObject *)PyArray_Zeros(data->nd, data->dimensions, - PyArray_DescrFromType(PyArray_DOUBLE), 0); - - descr = PyArray_DescrFromType(PyArray_DOUBLE); - if (PyArray_AsCArray(&op2, (void *)&wsave, &nsave, 1, descr) == -1) { - goto fail; - } - if (data 
== NULL || ret == NULL) { - goto fail; - } - if (nsave != npts*2 + 15) { - PyErr_SetString(ErrorObject, "invalid work array for fft size"); - goto fail; - } - - nrepeats = PyArray_SIZE(ret)/npts; - rptr = (double *)ret->data; - dptr = (double *)data->data; - - NPY_SIGINT_ON; - for (i = 0; i < nrepeats; i++) { - memcpy((char *)(rptr + 1), (dptr + 2), (npts - 1)*sizeof(double)); - rptr[0] = dptr[0]; - rfftb(npts, rptr, wsave); - rptr += npts; - dptr += npts*2; - } - NPY_SIGINT_OFF; - PyArray_Free(op2, (char *)wsave); - Py_DECREF(data); - return (PyObject *)ret; - -fail: - PyArray_Free(op2, (char *)wsave); - Py_XDECREF(data); - Py_XDECREF(ret); - return NULL; -} - - -static char fftpack_rffti__doc__[] =""; - -static PyObject * -fftpack_rffti(PyObject *NPY_UNUSED(self), PyObject *args) -{ - PyArrayObject *op; - npy_intp dim; - long n; - - if (!PyArg_ParseTuple(args, "l", &n)) { - return NULL; - } - /*Magic size needed by rffti*/ - dim = 2*n + 15; - /*Create a 1 dimensional array of dimensions of type double*/ - op = (PyArrayObject *)PyArray_SimpleNew(1, &dim, PyArray_DOUBLE); - if (op == NULL) { - return NULL; - } - NPY_SIGINT_ON; - rffti(n, (double *)((PyArrayObject*)op)->data); - NPY_SIGINT_OFF; - - return (PyObject *)op; -} - - -/* List of methods defined in the module */ - -static struct PyMethodDef fftpack_methods[] = { - {"cfftf", fftpack_cfftf, 1, fftpack_cfftf__doc__}, - {"cfftb", fftpack_cfftb, 1, fftpack_cfftb__doc__}, - {"cffti", fftpack_cffti, 1, fftpack_cffti__doc__}, - {"rfftf", fftpack_rfftf, 1, fftpack_rfftf__doc__}, - {"rfftb", fftpack_rfftb, 1, fftpack_rfftb__doc__}, - {"rffti", fftpack_rffti, 1, fftpack_rffti__doc__}, - {NULL, NULL, 0, NULL} /* sentinel */ -}; - - -/* Initialization function for the module (*must* be called initfftpack) */ - -static char fftpack_module_documentation[] = "" ; - -#if PY_MAJOR_VERSION >= 3 -static struct PyModuleDef moduledef = { - PyModuleDef_HEAD_INIT, - "fftpack_lite", - NULL, - -1, - fftpack_methods, - NULL, - NULL, 
- NULL, - NULL -}; -#endif - -/* Initialization function for the module */ -#if PY_MAJOR_VERSION >= 3 -#define RETVAL m -PyObject *PyInit_fftpack_lite(void) -#else -#define RETVAL -PyMODINIT_FUNC -initfftpack_lite(void) -#endif -{ - PyObject *m,*d; -#if PY_MAJOR_VERSION >= 3 - m = PyModule_Create(&moduledef); -#else - m = Py_InitModule4("fftpack_lite", fftpack_methods, - fftpack_module_documentation, - (PyObject*)NULL,PYTHON_API_VERSION); -#endif - - /* Import the array object */ - import_array(); - - /* Add some symbolic constants to the module */ - d = PyModule_GetDict(m); - ErrorObject = PyErr_NewException("fftpack.error", NULL, NULL); - PyDict_SetItemString(d, "error", ErrorObject); - - /* XXXX Add constants here */ - - return RETVAL; -} diff --git a/pythonPackages/numpy/numpy/fft/helper.py b/pythonPackages/numpy/numpy/fft/helper.py deleted file mode 100755 index f6c5704450..0000000000 --- a/pythonPackages/numpy/numpy/fft/helper.py +++ /dev/null @@ -1,162 +0,0 @@ -""" -Discrete Fourier Transforms - helper.py -""" -# Created by Pearu Peterson, September 2002 - -__all__ = ['fftshift','ifftshift','fftfreq'] - -from numpy.core import asarray, concatenate, arange, take, \ - integer, empty -import numpy.core.numerictypes as nt -import types - -def fftshift(x,axes=None): - """ - Shift the zero-frequency component to the center of the spectrum. - - This function swaps half-spaces for all axes listed (defaults to all). - Note that ``y[0]`` is the Nyquist component only if ``len(x)`` is even. - - Parameters - ---------- - x : array_like - Input array. - axes : int or shape tuple, optional - Axes over which to shift. Default is None, which shifts all axes. - - Returns - ------- - y : ndarray - The shifted array. - - See Also - -------- - ifftshift : The inverse of `fftshift`. 
- - Examples - -------- - >>> freqs = np.fft.fftfreq(10, 0.1) - >>> freqs - array([ 0., 1., 2., 3., 4., -5., -4., -3., -2., -1.]) - >>> np.fft.fftshift(freqs) - array([-5., -4., -3., -2., -1., 0., 1., 2., 3., 4.]) - - Shift the zero-frequency component only along the second axis: - - >>> freqs = np.fft.fftfreq(9, d=1./9).reshape(3, 3) - >>> freqs - array([[ 0., 1., 2.], - [ 3., 4., -4.], - [-3., -2., -1.]]) - >>> np.fft.fftshift(freqs, axes=(1,)) - array([[ 2., 0., 1.], - [-4., 3., 4.], - [-1., -3., -2.]]) - - """ - tmp = asarray(x) - ndim = len(tmp.shape) - if axes is None: - axes = range(ndim) - elif isinstance(axes, (int, nt.integer)): - axes = (axes,) - y = tmp - for k in axes: - n = tmp.shape[k] - p2 = (n+1)//2 - mylist = concatenate((arange(p2,n),arange(p2))) - y = take(y,mylist,k) - return y - - -def ifftshift(x,axes=None): - """ - The inverse of fftshift. - - Parameters - ---------- - x : array_like - Input array. - axes : int or shape tuple, optional - Axes over which to calculate. Defaults to None, which shifts all axes. - - Returns - ------- - y : ndarray - The shifted array. - - See Also - -------- - fftshift : Shift zero-frequency component to the center of the spectrum. - - Examples - -------- - >>> freqs = np.fft.fftfreq(9, d=1./9).reshape(3, 3) - >>> freqs - array([[ 0., 1., 2.], - [ 3., 4., -4.], - [-3., -2., -1.]]) - >>> np.fft.ifftshift(np.fft.fftshift(freqs)) - array([[ 0., 1., 2.], - [ 3., 4., -4.], - [-3., -2., -1.]]) - - """ - tmp = asarray(x) - ndim = len(tmp.shape) - if axes is None: - axes = range(ndim) - elif isinstance(axes, (int, nt.integer)): - axes = (axes,) - y = tmp - for k in axes: - n = tmp.shape[k] - p2 = n-(n+1)//2 - mylist = concatenate((arange(p2,n),arange(p2))) - y = take(y,mylist,k) - return y - -def fftfreq(n,d=1.0): - """ - Return the Discrete Fourier Transform sample frequencies. 
- - The returned float array contains the frequency bins in - cycles/unit (with zero at the start) given a window length `n` and a - sample spacing `d`:: - - f = [0, 1, ..., n/2-1, -n/2, ..., -1] / (d*n) if n is even - f = [0, 1, ..., (n-1)/2, -(n-1)/2, ..., -1] / (d*n) if n is odd - - Parameters - ---------- - n : int - Window length. - d : scalar - Sample spacing. - - Returns - ------- - out : ndarray - The array of length `n`, containing the sample frequencies. - - Examples - -------- - >>> signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5], dtype=float) - >>> fourier = np.fft.fft(signal) - >>> n = signal.size - >>> timestep = 0.1 - >>> freq = np.fft.fftfreq(n, d=timestep) - >>> freq - array([ 0. , 1.25, 2.5 , 3.75, -5. , -3.75, -2.5 , -1.25]) - - """ - assert isinstance(n,types.IntType) or isinstance(n, integer) - val = 1.0/(n*d) - results = empty(n, int) - N = (n-1)//2 + 1 - p1 = arange(0,N,dtype=int) - results[:N] = p1 - p2 = arange(-(n//2),0,dtype=int) - results[N:] = p2 - return results * val - #return hstack((arange(0,(n-1)/2 + 1), arange(-(n/2),0))) / (n*d) diff --git a/pythonPackages/numpy/numpy/fft/info.py b/pythonPackages/numpy/numpy/fft/info.py deleted file mode 100755 index 890b2add22..0000000000 --- a/pythonPackages/numpy/numpy/fft/info.py +++ /dev/null @@ -1,173 +0,0 @@ -""" -Discrete Fourier Transform (:mod:`numpy.fft`) -============================================= - -.. currentmodule:: numpy.fft - - -Standard FFTs -------------- - -.. autosummary:: - :toctree: generated/ - - fft Discrete Fourier transform. - ifft Inverse discrete Fourier transform. - fft2 Discrete Fourier transform in two dimensions. - ifft2 Inverse discrete Fourier transform in two dimensions. - fftn Discrete Fourier transform in N-dimensions. - ifftn Inverse discrete Fourier transform in N dimensions. - -Real FFTs ---------- - -.. autosummary:: - :toctree: generated/ - - rfft Real discrete Fourier transform. - irfft Inverse real discrete Fourier transform. 
- rfft2 Real discrete Fourier transform in two dimensions. - irfft2 Inverse real discrete Fourier transform in two dimensions. - rfftn Real discrete Fourier transform in N dimensions. - irfftn Inverse real discrete Fourier transform in N dimensions. - - -Hermitian FFTs --------------- - -.. autosummary:: - :toctree: generated/ - - hfft Hermitian discrete Fourier transform. - ihfft Inverse Hermitian discrete Fourier transform. - - -Helper routines ---------------- - -.. autosummary:: - :toctree: generated/ - - fftfreq Discrete Fourier Transform sample frequencies. - fftshift Shift zero-frequency component to center of spectrum. - ifftshift Inverse of fftshift. - -Background information ----------------------- - -Fourier analysis is fundamentally a method for expressing a function as a -sum of periodic components, and for recovering the signal from those -components. When both the function and its Fourier transform are -replaced with discretized counterparts, it is called the discrete Fourier -transform (DFT). The DFT has become a mainstay of numerical computing in -part because of a very fast algorithm for computing it, called the Fast -Fourier Transform (FFT), which was known to Gauss (1805) and was brought -to light in its current form by Cooley and Tukey [CT]_. Press et al. [NR]_ -provide an accessible introduction to Fourier analysis and its -applications. - -Because the discrete Fourier transform separates its input into -components that contribute at discrete frequencies, it has a great number -of applications in digital signal processing, e.g., for filtering, and in -this context the discretized input to the transform is customarily -referred to as a *signal*, which exists in the *time domain*. The output -is called a *spectrum* or *transform* and exists in the *frequency -domain*. - -There are many ways to define the DFT, varying in the sign of the -exponent, normalization, etc. In this implementation, the DFT is defined -as - -.. 
math:: - A_k = \\sum_{m=0}^{n-1} a_m \\exp\\left\\{-2\\pi i{mk \\over n}\\right\\} - \\qquad k = 0,\\ldots,n-1. - -The DFT is in general defined for complex inputs and outputs, and a -single-frequency component at linear frequency :math:`f` is -represented by a complex exponential -:math:`a_m = \\exp\\{2\\pi i\\,f m\\Delta t\\}`, where :math:`\\Delta t` -is the sampling interval. - -The values in the result follow so-called "standard" order: If ``A = -fft(a, n)``, then ``A[0]`` contains the zero-frequency term (the mean of -the signal), which is always purely real for real inputs. Then ``A[1:n/2]`` -contains the positive-frequency terms, and ``A[n/2+1:]`` contains the -negative-frequency terms, in order of decreasingly negative frequency. -For an even number of input points, ``A[n/2]`` represents both positive and -negative Nyquist frequency, and is also purely real for real input. For -an odd number of input points, ``A[(n-1)/2]`` contains the largest positive -frequency, while ``A[(n+1)/2]`` contains the largest negative frequency. -The routine ``np.fft.fftfreq(A)`` returns an array giving the frequencies -of corresponding elements in the output. The routine -``np.fft.fftshift(A)`` shifts transforms and their frequencies to put the -zero-frequency components in the middle, and ``np.fft.ifftshift(A)`` undoes -that shift. - -When the input `a` is a time-domain signal and ``A = fft(a)``, ``np.abs(A)`` -is its amplitude spectrum and ``np.abs(A)**2`` is its power spectrum. -The phase spectrum is obtained by ``np.angle(A)``. - -The inverse DFT is defined as - -.. math:: - a_m = \\frac{1}{n}\\sum_{k=0}^{n-1}A_k\\exp\\left\\{2\\pi i{mk\\over n}\\right\\} - \\qquad n = 0,\\ldots,n-1. - -It differs from the forward transform by the sign of the exponential -argument and the normalization by :math:`1/n`. 
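The ordering, frequency-helper, and round-trip conventions described above can be checked directly. This is an illustrative sketch only (not part of the deleted module), written against the modern `numpy.fft` API; the sample array is arbitrary:

```python
# Sketch of the conventions described above: "standard" output ordering,
# fftfreq/fftshift frequency bookkeeping, and the forward/inverse round trip.
import numpy as np

a = np.array([1.0, 2.0, 1.0, -1.0, 1.5, 1.0])
A = np.fft.fft(a)

# A[0] is the zero-frequency term: the plain sum of the input samples,
# and it is purely real for real input.
assert np.allclose(A[0], a.sum())
assert np.isclose(A[0].imag, 0.0)

# fftfreq gives the frequency of each output bin in "standard" order;
# fftshift reorders so that the zero-frequency bin sits in the middle.
freqs = np.fft.fftfreq(a.size)
shifted = np.fft.fftshift(freqs)
assert shifted[a.size // 2] == 0.0

# The inverse transform (with its 1/n normalization) undoes the forward
# one, up to floating-point rounding.
assert np.allclose(np.fft.ifft(A), a)
```

The same symmetry noted later for real input can be seen here: `A[k]` equals the complex conjugate of `A[-k]` for the real-valued `a`.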
- -Real and Hermitian transforms -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -When the input is purely real, its transform is Hermitian, i.e., the -component at frequency :math:`f_k` is the complex conjugate of the -component at frequency :math:`-f_k`, which means that for real -inputs there is no information in the negative frequency components that -is not already available from the positive frequency components. -The family of `rfft` functions is -designed to operate on real inputs, and exploits this symmetry by -computing only the positive frequency components, up to and including the -Nyquist frequency. Thus, ``n`` input points produce ``n/2+1`` complex -output points. The inverses of this family assumes the same symmetry of -its input, and for an output of ``n`` points uses ``n/2+1`` input points. - -Correspondingly, when the spectrum is purely real, the signal is -Hermitian. The `hfft` family of functions exploits this symmetry by -using ``n/2+1`` complex points in the input (time) domain for ``n`` real -points in the frequency domain. - -In higher dimensions, FFTs are used, e.g., for image analysis and -filtering. The computational efficiency of the FFT means that it can -also be a faster way to compute large convolutions, using the property -that a convolution in the time domain is equivalent to a point-by-point -multiplication in the frequency domain. - -In two dimensions, the DFT is defined as - -.. math:: - A_{kl} = \\sum_{m=0}^{M-1} \\sum_{n=0}^{N-1} - a_{mn}\\exp\\left\\{-2\\pi i \\left({mk\\over M}+{nl\\over N}\\right)\\right\\} - \\qquad k = 0, \\ldots, N-1;\\quad l = 0, \\ldots, M-1, - -which extends in the obvious way to higher dimensions, and the inverses -in higher dimensions also extend in the same way. - -References -^^^^^^^^^^ - -.. [CT] Cooley, James W., and John W. Tukey, 1965, "An algorithm for the - machine calculation of complex Fourier series," *Math. Comput.* - 19: 297-301. - -.. 
[NR] Press, W., Teukolsky, S., Vetterline, W.T., and Flannery, B.P., - 2007, *Numerical Recipes: The Art of Scientific Computing*, ch. - 12-13. Cambridge Univ. Press, Cambridge, UK. - -Examples -^^^^^^^^ - -For examples, see the various functions. - -""" - -depends = ['core'] diff --git a/pythonPackages/numpy/numpy/fft/setup.py b/pythonPackages/numpy/numpy/fft/setup.py deleted file mode 100755 index 6acad7c9a6..0000000000 --- a/pythonPackages/numpy/numpy/fft/setup.py +++ /dev/null @@ -1,19 +0,0 @@ - - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('fft',parent_package,top_path) - - config.add_data_dir('tests') - - # Configure fftpack_lite - config.add_extension('fftpack_lite', - sources=['fftpack_litemodule.c', 'fftpack.c'] - ) - - - return config - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/fft/setupscons.py b/pythonPackages/numpy/numpy/fft/setupscons.py deleted file mode 100755 index 54551b0a33..0000000000 --- a/pythonPackages/numpy/numpy/fft/setupscons.py +++ /dev/null @@ -1,15 +0,0 @@ -def configuration(parent_package = '', top_path = None): - from numpy.distutils.misc_util import Configuration, get_numpy_include_dirs - config = Configuration('fft', parent_package, top_path) - - config.add_data_dir('tests') - - config.add_sconscript('SConstruct', - source_files = ['fftpack_litemodule.c', 'fftpack.c', - 'fftpack.h']) - - return config - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/fft/tests/test_fftpack.py b/pythonPackages/numpy/numpy/fft/tests/test_fftpack.py deleted file mode 100755 index 4f70d3bc57..0000000000 --- a/pythonPackages/numpy/numpy/fft/tests/test_fftpack.py +++ /dev/null @@ -1,23 +0,0 @@ -from numpy.testing import * -import numpy as np - -def fft1(x): - L = len(x) 
- phase = -2j*np.pi*(np.arange(L)/float(L)) - phase = np.arange(L).reshape(-1,1) * phase - return np.sum(x*np.exp(phase),axis=1) - -class TestFFTShift(TestCase): - def test_fft_n(self): - self.assertRaises(ValueError,np.fft.fft,[1,2,3],0) - - -class TestFFT1D(TestCase): - def test_basic(self): - rand = np.random.random - x = rand(30) + 1j*rand(30) - assert_array_almost_equal(fft1(x), np.fft.fft(x)) - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/fft/tests/test_helper.py b/pythonPackages/numpy/numpy/fft/tests/test_helper.py deleted file mode 100755 index 8ddac931fa..0000000000 --- a/pythonPackages/numpy/numpy/fft/tests/test_helper.py +++ /dev/null @@ -1,50 +0,0 @@ -#!/usr/bin/env python -# Copied from fftpack.helper by Pearu Peterson, October 2005 -""" Test functions for fftpack.helper module -""" - -from numpy.testing import * -from numpy.fft import fftshift,ifftshift,fftfreq - -from numpy import pi - -def random(size): - return rand(*size) - -class TestFFTShift(TestCase): - def test_definition(self): - x = [0,1,2,3,4,-4,-3,-2,-1] - y = [-4,-3,-2,-1,0,1,2,3,4] - assert_array_almost_equal(fftshift(x),y) - assert_array_almost_equal(ifftshift(y),x) - x = [0,1,2,3,4,-5,-4,-3,-2,-1] - y = [-5,-4,-3,-2,-1,0,1,2,3,4] - assert_array_almost_equal(fftshift(x),y) - assert_array_almost_equal(ifftshift(y),x) - - def test_inverse(self): - for n in [1,4,9,100,211]: - x = random((n,)) - assert_array_almost_equal(ifftshift(fftshift(x)),x) - - def test_axes_keyword(self): - freqs = [[ 0, 1, 2], [ 3, 4, -4], [-3, -2, -1]] - shifted = [[-1, -3, -2], [ 2, 0, 1], [-4, 3, 4]] - assert_array_almost_equal(fftshift(freqs, axes=(0, 1)), shifted) - assert_array_almost_equal(fftshift(freqs, axes=0), fftshift(freqs, axes=(0,))) - assert_array_almost_equal(ifftshift(shifted, axes=(0, 1)), freqs) - assert_array_almost_equal(ifftshift(shifted, axes=0), ifftshift(shifted, axes=(0,))) - - -class TestFFTFreq(TestCase): - def test_definition(self): - x = 
[0,1,2,3,4,-4,-3,-2,-1] - assert_array_almost_equal(9*fftfreq(9),x) - assert_array_almost_equal(9*pi*fftfreq(9,pi),x) - x = [0,1,2,3,4,-5,-4,-3,-2,-1] - assert_array_almost_equal(10*fftfreq(10),x) - assert_array_almost_equal(10*pi*fftfreq(10,pi),x) - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/SConscript b/pythonPackages/numpy/numpy/lib/SConscript deleted file mode 100755 index 2d1ed55766..0000000000 --- a/pythonPackages/numpy/numpy/lib/SConscript +++ /dev/null @@ -1,7 +0,0 @@ -# Last Change: Thu Jun 12 06:00 PM 2008 J -# vim:syntax=python -from numscons import GetNumpyEnvironment - -env = GetNumpyEnvironment(ARGUMENTS) -env.Prepend(CPPPATH=["#$build_prefix/numpy/core/src/private"]) -env.NumpyPythonExtension('_compiled_base', source = ['src/_compiled_base.c']) diff --git a/pythonPackages/numpy/numpy/lib/SConstruct b/pythonPackages/numpy/numpy/lib/SConstruct deleted file mode 100755 index a377d8391b..0000000000 --- a/pythonPackages/numpy/numpy/lib/SConstruct +++ /dev/null @@ -1,2 +0,0 @@ -from numscons import GetInitEnvironment -GetInitEnvironment(ARGUMENTS).DistutilsSConscript('SConscript') diff --git a/pythonPackages/numpy/numpy/lib/__init__.py b/pythonPackages/numpy/numpy/lib/__init__.py deleted file mode 100755 index 1fd94a0135..0000000000 --- a/pythonPackages/numpy/numpy/lib/__init__.py +++ /dev/null @@ -1,38 +0,0 @@ -from info import __doc__ -from numpy.version import version as __version__ - -from type_check import * -from index_tricks import * -from function_base import * -from shape_base import * -from stride_tricks import * -from twodim_base import * -from ufunclike import * - -import scimath as emath -from polynomial import * -#import convertcode -from utils import * -from arraysetops import * -from npyio import * -from financial import * -import math -from arrayterator import * - -__all__ = ['emath','math'] -__all__ += type_check.__all__ -__all__ += index_tricks.__all__ -__all__ += function_base.__all__ 
-__all__ += shape_base.__all__ -__all__ += stride_tricks.__all__ -__all__ += twodim_base.__all__ -__all__ += ufunclike.__all__ -__all__ += polynomial.__all__ -__all__ += utils.__all__ -__all__ += arraysetops.__all__ -__all__ += npyio.__all__ -__all__ += financial.__all__ - -from numpy.testing import Tester -test = Tester().test -bench = Tester().bench diff --git a/pythonPackages/numpy/numpy/lib/_datasource.py b/pythonPackages/numpy/numpy/lib/_datasource.py deleted file mode 100755 index ce6d2391b0..0000000000 --- a/pythonPackages/numpy/numpy/lib/_datasource.py +++ /dev/null @@ -1,639 +0,0 @@ -"""A file interface for handling local and remote data files. -The goal of datasource is to abstract some of the file system operations when -dealing with data files so the researcher doesn't have to know all the -low-level details. Through datasource, a researcher can obtain and use a -file with one function call, regardless of location of the file. - -DataSource is meant to augment standard python libraries, not replace them. -It should work seemlessly with standard file IO operations and the os module. - -DataSource files can originate locally or remotely: - -- local files : '/home/guido/src/local/data.txt' -- URLs (http, ftp, ...) : 'http://www.scipy.org/not/real/data.txt' - -DataSource files can also be compressed or uncompressed. Currently only gzip -and bz2 are supported. - -Example:: - - >>> # Create a DataSource, use os.curdir (default) for local storage. - >>> ds = datasource.DataSource() - >>> - >>> # Open a remote file. - >>> # DataSource downloads the file, stores it locally in: - >>> # './www.google.com/index.html' - >>> # opens the file and returns a file object. 
- >>> fp = ds.open('http://www.google.com/index.html') - >>> - >>> # Use the file as you normally would - >>> fp.read() - >>> fp.close() - -""" - -__docformat__ = "restructuredtext en" - -import os -from shutil import rmtree, copyfile, copyfileobj - -_open = open - -# Using a class instead of a module-level dictionary -# to reduce the inital 'import numpy' overhead by -# deferring the import of bz2 and gzip until needed - -# TODO: .zip support, .tar support? -class _FileOpeners(object): - """ - Container for different methods to open (un-)compressed files. - - `_FileOpeners` contains a dictionary that holds one method for each - supported file format. Attribute lookup is implemented in such a way that - an instance of `_FileOpeners` itself can be indexed with the keys of that - dictionary. Currently uncompressed files as well as files - compressed with ``gzip`` or ``bz2`` compression are supported. - - Notes - ----- - `_file_openers`, an instance of `_FileOpeners`, is made available for - use in the `_datasource` module. - - Examples - -------- - >>> np.lib._datasource._file_openers.keys() - [None, '.bz2', '.gz'] - >>> np.lib._datasource._file_openers['.gz'] is gzip.open - True - - """ - def __init__(self): - self._loaded = False - self._file_openers = {None: open} - def _load(self): - if self._loaded: - return - try: - import bz2 - self._file_openers[".bz2"] = bz2.BZ2File - except ImportError: - pass - try: - import gzip - self._file_openers[".gz"] = gzip.open - except ImportError: - pass - self._loaded = True - - def keys(self): - """ - Return the keys of currently supported file openers. - - Parameters - ---------- - None - - Returns - ------- - keys : list - The keys are None for uncompressed files and the file extension - strings (i.e. ``'.gz'``, ``'.bz2'``) for supported compression - methods. 
- - """ - self._load() - return self._file_openers.keys() - def __getitem__(self, key): - self._load() - return self._file_openers[key] - -_file_openers = _FileOpeners() - -def open(path, mode='r', destpath=os.curdir): - """ - Open `path` with `mode` and return the file object. - - If ``path`` is an URL, it will be downloaded, stored in the `DataSource` - `destpath` directory and opened from there. - - Parameters - ---------- - path : str - Local file path or URL to open. - mode : str, optional - Mode to open `path`. Mode 'r' for reading, 'w' for writing, 'a' to - append. Available modes depend on the type of object specified by path. - Default is 'r'. - destpath : str, optional - Path to the directory where the source file gets downloaded to for use. - If `destpath` is None, a temporary directory will be created. The - default path is the current directory. - - Returns - ------- - out : file object - The opened file. - - Notes - ----- - This is a convenience function that instantiates a `DataSource` and - returns the file object from ``DataSource.open(path)``. - - """ - - ds = DataSource(destpath) - return ds.open(path, mode) - - -class DataSource (object): - """ - DataSource(destpath='.') - - A generic data source file (file, http, ftp, ...). - - DataSources can be local files or remote files/URLs. The files may - also be compressed or uncompressed. DataSource hides some of the low-level - details of downloading the file, allowing you to simply pass in a valid - file path (or URL) and obtain a file object. - - Parameters - ---------- - destpath : str or None, optional - Path to the directory where the source file gets downloaded to for use. - If `destpath` is None, a temporary directory will be created. - The default path is the current directory. 
- - Notes - ----- - URLs require a scheme string (``http://``) to be used, without it they - will fail:: - - >>> repos = DataSource() - >>> repos.exists('www.google.com/index.html') - False - >>> repos.exists('http://www.google.com/index.html') - True - - Temporary directories are deleted when the DataSource is deleted. - - Examples - -------- - :: - - >>> ds = DataSource('/home/guido') - >>> urlname = 'http://www.google.com/index.html' - >>> gfile = ds.open('http://www.google.com/index.html') # remote file - >>> ds.abspath(urlname) - '/home/guido/www.google.com/site/index.html' - - >>> ds = DataSource(None) # use with temporary file - >>> ds.open('/home/guido/foobar.txt') - - >>> ds.abspath('/home/guido/foobar.txt') - '/tmp/tmpy4pgsP/home/guido/foobar.txt' - - """ - - def __init__(self, destpath=os.curdir): - """Create a DataSource with a local path at destpath.""" - if destpath: - self._destpath = os.path.abspath(destpath) - self._istmpdest = False - else: - import tempfile # deferring import to improve startup time - self._destpath = tempfile.mkdtemp() - self._istmpdest = True - - def __del__(self): - # Remove temp directories - if self._istmpdest: - rmtree(self._destpath) - - def _iszip(self, filename): - """Test if the filename is a zip file by looking at the file extension. - """ - fname, ext = os.path.splitext(filename) - return ext in _file_openers.keys() - - def _iswritemode(self, mode): - """Test if the given mode will open a file for writing.""" - - # Currently only used to test the bz2 files. - _writemodes = ("w", "+") - for c in mode: - if c in _writemodes: - return True - return False - - def _splitzipext(self, filename): - """Split zip extension from filename and return filename. 
- - *Returns*: - base, zip_ext : {tuple} - - """ - - if self._iszip(filename): - return os.path.splitext(filename) - else: - return filename, None - - def _possible_names(self, filename): - """Return a tuple containing compressed filename variations.""" - names = [filename] - if not self._iszip(filename): - for zipext in _file_openers.keys(): - if zipext: - names.append(filename+zipext) - return names - - def _isurl(self, path): - """Test if path is a net location. Tests the scheme and netloc.""" - - # We do this here to reduce the 'import numpy' initial import time. - from urlparse import urlparse - - # BUG : URLs require a scheme string ('http://') to be used. - # www.google.com will fail. - # Should we prepend the scheme for those that don't have it and - # test that also? Similar to the way we append .gz and test for - # for compressed versions of files. - - scheme, netloc, upath, uparams, uquery, ufrag = urlparse(path) - return bool(scheme and netloc) - - def _cache(self, path): - """Cache the file specified by path. - - Creates a copy of the file in the datasource cache. - - """ - # We import these here because importing urllib2 is slow and - # a significant fraction of numpy's total import time. - from urllib2 import urlopen - from urllib2 import URLError - - upath = self.abspath(path) - - # ensure directory exists - if not os.path.exists(os.path.dirname(upath)): - os.makedirs(os.path.dirname(upath)) - - # TODO: Doesn't handle compressed files! - if self._isurl(path): - try: - openedurl = urlopen(path) - f = _open(upath, 'wb') - try: - copyfileobj(openedurl, f) - finally: - f.close() - except URLError: - raise URLError("URL not found: %s" % path) - else: - shutil.copyfile(path, upath) - return upath - - def _findfile(self, path): - """Searches for ``path`` and returns full path if found. - - If path is an URL, _findfile will cache a local copy and return - the path to the cached file. 
- If path is a local file, _findfile will return a path to that local - file. - - The search will include possible compressed versions of the file and - return the first occurence found. - - """ - - # Build list of possible local file paths - if not self._isurl(path): - # Valid local paths - filelist = self._possible_names(path) - # Paths in self._destpath - filelist += self._possible_names(self.abspath(path)) - else: - # Cached URLs in self._destpath - filelist = self._possible_names(self.abspath(path)) - # Remote URLs - filelist = filelist + self._possible_names(path) - - for name in filelist: - if self.exists(name): - if self._isurl(name): - name = self._cache(name) - return name - return None - - def abspath(self, path): - """ - Return absolute path of file in the DataSource directory. - - If `path` is an URL, then `abspath` will return either the location - the file exists locally or the location it would exist when opened - using the `open` method. - - Parameters - ---------- - path : str - Can be a local file or a remote URL. - - Returns - ------- - out : str - Complete path, including the `DataSource` destination directory. - - Notes - ----- - The functionality is based on `os.path.abspath`. - - """ - # We do this here to reduce the 'import numpy' initial import time. - from urlparse import urlparse - - - # TODO: This should be more robust. Handles case where path includes - # the destpath, but not other sub-paths. 
Failing case: - # path = /home/guido/datafile.txt - # destpath = /home/alex/ - # upath = self.abspath(path) - # upath == '/home/alex/home/guido/datafile.txt' - - # handle case where path includes self._destpath - splitpath = path.split(self._destpath, 2) - if len(splitpath) > 1: - path = splitpath[1] - scheme, netloc, upath, uparams, uquery, ufrag = urlparse(path) - netloc = self._sanitize_relative_path(netloc) - upath = self._sanitize_relative_path(upath) - return os.path.join(self._destpath, netloc, upath) - - def _sanitize_relative_path(self, path): - """Return a sanitised relative path for which - os.path.abspath(os.path.join(base, path)).startswith(base) - """ - last = None - path = os.path.normpath(path) - while path != last: - last = path - # Note: os.path.join treats '/' as os.sep on Windows - path = path.lstrip(os.sep).lstrip('/') - path = path.lstrip(os.pardir).lstrip('..') - drive, path = os.path.splitdrive(path) # for Windows - return path - - def exists(self, path): - """ - Test if path exists. - - Test if `path` exists as (and in this order): - - - a local file. - - a remote URL that has been downloaded and stored locally in the - `DataSource` directory. - - a remote URL that has not been downloaded, but is valid and accessible. - - Parameters - ---------- - path : str - Can be a local file or a remote URL. - - Returns - ------- - out : bool - True if `path` exists. - - Notes - ----- - When `path` is an URL, `exists` will return True if it's either stored - locally in the `DataSource` directory, or is a valid remote URL. - `DataSource` does not discriminate between the two, the file is accessible - if it exists in either location. - - """ - # We import this here because importing urllib2 is slow and - # a significant fraction of numpy's total import time. 
-        from urllib2 import urlopen
-        from urllib2 import URLError
-
-        # Test local path
-        if os.path.exists(path):
-            return True
-
-        # Test cached url
-        upath = self.abspath(path)
-        if os.path.exists(upath):
-            return True
-
-        # Test remote url
-        if self._isurl(path):
-            try:
-                netfile = urlopen(path)
-                del(netfile)
-                return True
-            except URLError:
-                return False
-        return False
-
-    def open(self, path, mode='r'):
-        """
-        Open and return file-like object.
-
-        If `path` is an URL, it will be downloaded, stored in the `DataSource`
-        directory and opened from there.
-
-        Parameters
-        ----------
-        path : str
-            Local file path or URL to open.
-        mode : {'r', 'w', 'a'}, optional
-            Mode to open `path`. Mode 'r' for reading, 'w' for writing, 'a' to
-            append. Available modes depend on the type of object specified by
-            `path`. Default is 'r'.
-
-        Returns
-        -------
-        out : file object
-            File object.
-
-        """
-
-        # TODO: There is no support for opening a file for writing which
-        # doesn't exist yet (creating a file). Should there be?
-
-        # TODO: Add a ``subdir`` parameter for specifying the subdirectory
-        # used to store URLs in self._destpath.
-
-        if self._isurl(path) and self._iswritemode(mode):
-            raise ValueError("URLs are not writeable")
-
-        # NOTE: _findfile will fail on a new file opened for writing.
-        found = self._findfile(path)
-        if found:
-            _fname, ext = self._splitzipext(found)
-            if ext == 'bz2':
-                # bz2 does not support '+' modes, so strip it (and keep
-                # the result -- a bare mode.replace() would be a no-op).
-                mode = mode.replace("+", "")
-            return _file_openers[ext](found, mode=mode)
-        else:
-            raise IOError("%s not found." % path)
-
-
-class Repository (DataSource):
-    """
-    Repository(baseurl, destpath='.')
-
-    A data repository where multiple DataSource's share a base URL/directory.
-
-    `Repository` extends `DataSource` by prepending a base URL (or directory)
-    to all the files it handles. Use `Repository` when you will be working
-    with multiple files from one base URL. Initialize `Repository` with the
-    base URL, then refer to each file by its filename only.
- - Parameters - ---------- - baseurl : str - Path to the local directory or remote location that contains the - data files. - destpath : str or None, optional - Path to the directory where the source file gets downloaded to for use. - If `destpath` is None, a temporary directory will be created. - The default path is the current directory. - - Examples - -------- - To analyze all files in the repository, do something like this - (note: this is not self-contained code):: - - >>> repos = np.lib._datasource.Repository('/home/user/data/dir/') - >>> for filename in filelist: - ... fp = repos.open(filename) - ... fp.analyze() - ... fp.close() - - Similarly you could use a URL for a repository:: - - >>> repos = np.lib._datasource.Repository('http://www.xyz.edu/data') - - """ - - def __init__(self, baseurl, destpath=os.curdir): - """Create a Repository with a shared url or directory of baseurl.""" - DataSource.__init__(self, destpath=destpath) - self._baseurl = baseurl - - def __del__(self): - DataSource.__del__(self) - - def _fullpath(self, path): - """Return complete path for path. Prepends baseurl if necessary.""" - splitpath = path.split(self._baseurl, 2) - if len(splitpath) == 1: - result = os.path.join(self._baseurl, path) - else: - result = path # path contains baseurl already - return result - - def _findfile(self, path): - """Extend DataSource method to prepend baseurl to ``path``.""" - return DataSource._findfile(self, self._fullpath(path)) - - def abspath(self, path): - """ - Return absolute path of file in the Repository directory. - - If `path` is an URL, then `abspath` will return either the location - the file exists locally or the location it would exist when opened - using the `open` method. - - Parameters - ---------- - path : str - Can be a local file or a remote URL. This may, but does not have - to, include the `baseurl` with which the `Repository` was initialized. 
- - Returns - ------- - out : str - Complete path, including the `DataSource` destination directory. - - """ - return DataSource.abspath(self, self._fullpath(path)) - - def exists(self, path): - """ - Test if path exists prepending Repository base URL to path. - - Test if `path` exists as (and in this order): - - - a local file. - - a remote URL that has been downloaded and stored locally in the - `DataSource` directory. - - a remote URL that has not been downloaded, but is valid and - accessible. - - Parameters - ---------- - path : str - Can be a local file or a remote URL. This may, but does not have - to, include the `baseurl` with which the `Repository` was initialized. - - Returns - ------- - out : bool - True if `path` exists. - - Notes - ----- - When `path` is an URL, `exists` will return True if it's either stored - locally in the `DataSource` directory, or is a valid remote URL. - `DataSource` does not discriminate between the two, the file is accessible - if it exists in either location. - - """ - return DataSource.exists(self, self._fullpath(path)) - - def open(self, path, mode='r'): - """ - Open and return file-like object prepending Repository base URL. - - If `path` is an URL, it will be downloaded, stored in the DataSource - directory and opened from there. - - Parameters - ---------- - path : str - Local file path or URL to open. This may, but does not have to, - include the `baseurl` with which the `Repository` was initialized. - mode : {'r', 'w', 'a'}, optional - Mode to open `path`. Mode 'r' for reading, 'w' for writing, 'a' to - append. Available modes depend on the type of object specified by - `path`. Default is 'r'. - - Returns - ------- - out : file object - File object. - - """ - return DataSource.open(self, self._fullpath(path), mode) - - def listdir(self): - """ - List files in the source Repository. - - Returns - ------- - files : list of str - List of file names (not containing a directory part). 
- - Notes - ----- - Does not currently work for remote repositories. - - """ - if self._isurl(self._baseurl): - raise NotImplementedError, \ - "Directory listing of URLs, not supported yet." - else: - return os.listdir(self._baseurl) diff --git a/pythonPackages/numpy/numpy/lib/_iotools.py b/pythonPackages/numpy/numpy/lib/_iotools.py deleted file mode 100755 index c9d81048f2..0000000000 --- a/pythonPackages/numpy/numpy/lib/_iotools.py +++ /dev/null @@ -1,835 +0,0 @@ -"""A collection of functions designed to help I/O with ascii files.""" -__docformat__ = "restructuredtext en" - -import sys -import numpy as np -import numpy.core.numeric as nx -from __builtin__ import bool, int, long, float, complex, object, unicode, str - -from numpy.compat import asbytes, bytes, asbytes_nested - -if sys.version_info[0] >= 3: - def _bytes_to_complex(s): - return complex(s.decode('ascii')) - def _bytes_to_name(s): - return s.decode('ascii') -else: - _bytes_to_complex = complex - _bytes_to_name = str - -def _is_string_like(obj): - """ - Check whether obj behaves like a string. - """ - try: - obj + '' - except (TypeError, ValueError): - return False - return True - -def _is_bytes_like(obj): - """ - Check whether obj behaves like a bytes object. - """ - try: - obj + asbytes('') - except (TypeError, ValueError): - return False - return True - - -def _to_filehandle(fname, flag='r', return_opened=False): - """ - Returns the filehandle corresponding to a string or a file. - If the string ends in '.gz', the file is automatically unzipped. - - Parameters - ---------- - fname : string, filehandle - Name of the file whose filehandle must be returned. - flag : string, optional - Flag indicating the status of the file ('r' for read, 'w' for write). - return_opened : boolean, optional - Whether to return the opening status of the file. 
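The extension-based dispatch of `_to_filehandle` above can be sketched in modern Python 3 (the Py2 `file()` builtin is gone, so plain `open` stands in; the function name is illustrative, not part of the module):

```python
import bz2
import gzip
import os
import tempfile

def to_filehandle(fname, flag='r'):
    # Anything with a seek() method is assumed to already be open.
    if hasattr(fname, 'seek'):
        return fname
    if not isinstance(fname, str):
        raise ValueError('fname must be a string or file handle')
    # Dispatch on the extension: '.gz' and '.bz2' get transparent
    # decompression, everything else is opened directly.
    if fname.endswith('.gz'):
        return gzip.open(fname, flag)
    if fname.endswith('.bz2'):
        return bz2.BZ2File(fname)
    return open(fname, flag)

# Round-trip through a gzip file to exercise the dispatch.
path = os.path.join(tempfile.mkdtemp(), 'data.gz')
with gzip.open(path, 'wt') as fh:
    fh.write('1 2 3\n')
line = to_filehandle(path, 'rt').readline()
print(line)
```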
- """ - if _is_string_like(fname): - if fname.endswith('.gz'): - import gzip - fhd = gzip.open(fname, flag) - elif fname.endswith('.bz2'): - import bz2 - fhd = bz2.BZ2File(fname) - else: - fhd = file(fname, flag) - opened = True - elif hasattr(fname, 'seek'): - fhd = fname - opened = False - else: - raise ValueError('fname must be a string or file handle') - if return_opened: - return fhd, opened - return fhd - - -def has_nested_fields(ndtype): - """ - Returns whether one or several fields of a dtype are nested. - - Parameters - ---------- - ndtype : dtype - Data-type of a structured array. - - Raises - ------ - AttributeError : If `ndtype` does not have a `names` attribute. - - Examples - -------- - >>> dt = np.dtype([('name', 'S4'), ('x', float), ('y', float)]) - >>> np.lib._iotools.has_nested_fields(dt) - False - - """ - for name in ndtype.names or (): - if ndtype[name].names: - return True - return False - - -def flatten_dtype(ndtype, flatten_base=False): - """ - Unpack a structured data-type by collapsing nested fields and/or fields - with a shape. - - Note that the field names are lost. - - Parameters - ---------- - ndtype : dtype - The datatype to collapse - flatten_base : {False, True}, optional - Whether to transform a field with a shape into several fields or not. - - Examples - -------- - >>> dt = np.dtype([('name', 'S4'), ('x', float), ('y', float), - ... 
('block', int, (2, 3))]) - >>> np.lib._iotools.flatten_dtype(dt) - [dtype('|S4'), dtype('float64'), dtype('float64'), dtype('int32')] - >>> np.lib._iotools.flatten_dtype(dt, flatten_base=True) - [dtype('|S4'), dtype('float64'), dtype('float64'), dtype('int32'), - dtype('int32'), dtype('int32'), dtype('int32'), dtype('int32'), - dtype('int32')] - - """ - names = ndtype.names - if names is None: - if flatten_base: - return [ndtype.base] * int(np.prod(ndtype.shape)) - return [ndtype.base] - else: - types = [] - for field in names: - (typ, _) = ndtype.fields[field] - flat_dt = flatten_dtype(typ, flatten_base) - types.extend(flat_dt) - return types - - - - - - -class LineSplitter: - """ - Object to split a string at a given delimiter or at given places. - - Parameters - ---------- - delimiter : str, int, or sequence of ints, optional - If a string, character used to delimit consecutive fields. - If an integer or a sequence of integers, width(s) of each field. - comment : str, optional - Character used to mark the beginning of a comment. Default is '#'. - autostrip : bool, optional - Whether to strip each individual field. Default is True. - - """ - - def autostrip(self, method): - """ - Wrapper to strip each member of the output of `method`. - - Parameters - ---------- - method : function - Function that takes a single argument and returns a sequence of - strings. - - Returns - ------- - wrapped : function - The result of wrapping `method`. `wrapped` takes a single input - argument and returns a list of strings that are stripped of - white-space. 
- - """ - return lambda input: [_.strip() for _ in method(input)] - # - def __init__(self, delimiter=None, comments=asbytes('#'), autostrip=True): - self.comments = comments - # Delimiter is a character - if isinstance(delimiter, unicode): - delimiter = delimiter.encode('ascii') - if (delimiter is None) or _is_bytes_like(delimiter): - delimiter = delimiter or None - _handyman = self._delimited_splitter - # Delimiter is a list of field widths - elif hasattr(delimiter, '__iter__'): - _handyman = self._variablewidth_splitter - idx = np.cumsum([0] + list(delimiter)) - delimiter = [slice(i, j) for (i, j) in zip(idx[:-1], idx[1:])] - # Delimiter is a single integer - elif int(delimiter): - (_handyman, delimiter) = (self._fixedwidth_splitter, int(delimiter)) - else: - (_handyman, delimiter) = (self._delimited_splitter, None) - self.delimiter = delimiter - if autostrip: - self._handyman = self.autostrip(_handyman) - else: - self._handyman = _handyman - # - def _delimited_splitter(self, line): - line = line.split(self.comments)[0].strip(asbytes(" \r\n")) - if not line: - return [] - return line.split(self.delimiter) - # - def _fixedwidth_splitter(self, line): - line = line.split(self.comments)[0] - if not line: - return [] - fixed = self.delimiter - slices = [slice(i, i + fixed) for i in range(0, len(line), fixed)] - return [line[s] for s in slices] - # - def _variablewidth_splitter(self, line): - line = line.split(self.comments)[0] - if not line: - return [] - slices = self.delimiter - return [line[s] for s in slices] - # - def __call__(self, line): - return self._handyman(line) - - - -class NameValidator: - """ - Object to validate a list of strings to use as field names. - - The strings are stripped of any non alphanumeric character, and spaces - are replaced by '_'. During instantiation, the user can define a list of - names to exclude, as well as a list of invalid characters. Names in the - exclusion list are appended a '_' character. 
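The splitting strategies of `LineSplitter` above can be sketched independently of the class; `itertools.accumulate` stands in for the `np.cumsum` call, and the helper names are invented for illustration:

```python
from itertools import accumulate

def delimited(line, delimiter=None, comments='#'):
    # Drop any trailing comment, then split on the delimiter and
    # autostrip each field, as LineSplitter does by default.
    line = line.split(comments)[0].strip(' \r\n')
    return [f.strip() for f in line.split(delimiter)] if line else []

def variable_width(line, widths, comments='#'):
    # Turn a sequence of field widths into slices, exactly how
    # LineSplitter builds them from np.cumsum of the widths.
    line = line.split(comments)[0]
    idx = [0] + list(accumulate(widths))
    slices = [slice(i, j) for i, j in zip(idx[:-1], idx[1:])]
    return [line[s].strip() for s in slices]

print(delimited('1, 2, 3  # trailing comment', delimiter=','))
print(variable_width('AB123WXYZ', [2, 3, 4]))
```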
- - Once an instance has been created, it can be called with a list of names, - and a list of valid names will be created. - The `__call__` method accepts an optional keyword "default" that sets - the default name in case of ambiguity. By default this is 'f', so - that names will default to `f0`, `f1`, etc. - - Parameters - ---------- - excludelist : sequence, optional - A list of names to exclude. This list is appended to the default list - ['return', 'file', 'print']. Excluded names are appended an underscore: - for example, `file` becomes `file_` if supplied. - deletechars : str, optional - A string combining invalid characters that must be deleted from the - names. - casesensitive : {True, False, 'upper', 'lower'}, optional - * If True, field names are case-sensitive. - * If False or 'upper', field names are converted to upper case. - * If 'lower', field names are converted to lower case. - - The default value is True. - replace_space: '_', optional - Character(s) used in replacement of white spaces. - - Notes - ----- - Calling an instance of `NameValidator` is the same as calling its method - `validate`. - - Examples - -------- - >>> validator = np.lib._iotools.NameValidator() - >>> validator(['file', 'field2', 'with space', 'CaSe']) - ['file_', 'field2', 'with_space', 'CaSe'] - - >>> validator = np.lib._iotools.NameValidator(excludelist=['excl'], - deletechars='q', - case_sensitive='False') - >>> validator(['excl', 'field2', 'no_q', 'with space', 'CaSe']) - ['excl_', 'field2', 'no_', 'with_space', 'case'] - - """ - # - defaultexcludelist = ['return', 'file', 'print'] - defaultdeletechars = set("""~!@#$%^&*()-=+~\|]}[{';: /?.>,<""") - # - def __init__(self, excludelist=None, deletechars=None, - case_sensitive=None, replace_space='_'): - # Process the exclusion list .. 
- if excludelist is None: - excludelist = [] - excludelist.extend(self.defaultexcludelist) - self.excludelist = excludelist - # Process the list of characters to delete - if deletechars is None: - delete = self.defaultdeletechars - else: - delete = set(deletechars) - delete.add('"') - self.deletechars = delete - # Process the case option ..... - if (case_sensitive is None) or (case_sensitive is True): - self.case_converter = lambda x: x - elif (case_sensitive is False) or ('u' in case_sensitive): - self.case_converter = lambda x: x.upper() - elif 'l' in case_sensitive: - self.case_converter = lambda x: x.lower() - else: - self.case_converter = lambda x: x - # - self.replace_space = replace_space - - def validate(self, names, defaultfmt="f%i", nbfields=None): - """ - Validate a list of strings to use as field names for a structured array. - - Parameters - ---------- - names : sequence of str - Strings to be validated. - defaultfmt : str, optional - Default format string, used if validating a given string reduces its - length to zero. - nboutput : integer, optional - Final number of validated names, used to expand or shrink the initial - list of names. - - Returns - ------- - validatednames : list of str - The list of validated field names. - - Notes - ----- - A `NameValidator` instance can be called directly, which is the same as - calling `validate`. For examples, see `NameValidator`. - - """ - # Initial checks .............. - if (names is None): - if (nbfields is None): - return None - names = [] - if isinstance(names, basestring): - names = [names, ] - if nbfields is not None: - nbnames = len(names) - if (nbnames < nbfields): - names = list(names) + [''] * (nbfields - nbnames) - elif (nbnames > nbfields): - names = names[:nbfields] - # Set some shortcuts ........... - deletechars = self.deletechars - excludelist = self.excludelist - case_converter = self.case_converter - replace_space = self.replace_space - # Initializes some variables ... 
- validatednames = [] - seen = dict() - nbempty = 0 - # - for item in names: - item = case_converter(item).strip() - if replace_space: - item = item.replace(' ', replace_space) - item = ''.join([c for c in item if c not in deletechars]) - if item == '': - item = defaultfmt % nbempty - while item in names: - nbempty += 1 - item = defaultfmt % nbempty - nbempty += 1 - elif item in excludelist: - item += '_' - cnt = seen.get(item, 0) - if cnt > 0: - validatednames.append(item + '_%d' % cnt) - else: - validatednames.append(item) - seen[item] = cnt + 1 - return tuple(validatednames) - # - def __call__(self, names, defaultfmt="f%i", nbfields=None): - return self.validate(names, defaultfmt=defaultfmt, nbfields=nbfields) - - - -def str2bool(value): - """ - Tries to transform a string supposed to represent a boolean to a boolean. - - Parameters - ---------- - value : str - The string that is transformed to a boolean. - - Returns - ------- - boolval : bool - The boolean representation of `value`. - - Raises - ------ - ValueError - If the string is not 'True' or 'False' (case independent) - - Examples - -------- - >>> np.lib._iotools.str2bool('TRUE') - True - >>> np.lib._iotools.str2bool('false') - False - - """ - value = value.upper() - if value == asbytes('TRUE'): - return True - elif value == asbytes('FALSE'): - return False - else: - raise ValueError("Invalid boolean") - - -class ConverterError(Exception): - """ - Exception raised when an error occurs in a converter for string values. - - """ - pass - -class ConverterLockError(ConverterError): - """ - Exception raised when an attempt is made to upgrade a locked converter. - - """ - pass - -class ConversionWarning(UserWarning): - """ - Warning issued when a string converter has a problem. - - Notes - ----- - In `genfromtxt` a `ConversionWarning` is issued if raising exceptions - is explicitly suppressed with the "invalid_raise" keyword. 
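The renaming rules in `NameValidator.validate` above boil down to: scrub, case-convert, underscore excluded names, and suffix duplicates with `_<n>`. A minimal sketch of the duplicate-suffixing part (function name invented for illustration; the scrubbing and case logic are omitted):

```python
def dedupe(names, excludelist=('return', 'file', 'print')):
    # Track how many times each candidate name has been seen and
    # suffix repeats, mirroring the `seen` dict in validate().
    seen = {}
    out = []
    for item in names:
        item = item.strip().replace(' ', '_')
        if item in excludelist:
            item += '_'
        cnt = seen.get(item, 0)
        out.append(item if cnt == 0 else '%s_%d' % (item, cnt))
        seen[item] = cnt + 1
    return out

print(dedupe(['file', 'field2', 'with space', 'field2']))
# ['file_', 'field2', 'with_space', 'field2_1']
```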
- - """ - pass - - - -class StringConverter: - """ - Factory class for function transforming a string into another object (int, - float). - - After initialization, an instance can be called to transform a string - into another object. If the string is recognized as representing a missing - value, a default value is returned. - - Attributes - ---------- - func : function - Function used for the conversion. - default : any - Default value to return when the input corresponds to a missing value. - type : type - Type of the output. - _status : int - Integer representing the order of the conversion. - _mapper : sequence of tuples - Sequence of tuples (dtype, function, default value) to evaluate in - order. - _locked : bool - Holds `locked` parameter. - - Parameters - ---------- - dtype_or_func : {None, dtype, function}, optional - If a `dtype`, specifies the input data type, used to define a basic - function and a default value for missing data. For example, when - `dtype` is float, the `func` attribute is set to `float` and the - default value to `np.nan`. - If a function, this function is used to convert a string to another - object. In this case, it is recommended to give an associated default - value as input. - default : any, optional - Value to return by default, that is, when the string to be converted - is flagged as missing. If not given, `StringConverter` tries to supply - a reasonable default value. - missing_values : sequence of str, optional - Sequence of strings indicating a missing value. - locked : bool, optional - Whether the StringConverter should be locked to prevent automatic - upgrade or not. Default is False. 
-
-    """
-    #
-    _mapper = [(nx.bool_, str2bool, False),
-               (nx.integer, int, -1),
-               (nx.floating, float, nx.nan),
-               (complex, _bytes_to_complex, nx.nan + 0j),
-               (nx.string_, bytes, asbytes('???'))]
-    (_defaulttype, _defaultfunc, _defaultfill) = zip(*_mapper)
-    #
-    @classmethod
-    def _getsubdtype(cls, val):
-        """Returns the type of the dtype of the input variable."""
-        return np.array(val).dtype.type
-    #
-    @classmethod
-    def upgrade_mapper(cls, func, default=None):
-        """
-        Upgrade the mapper of a StringConverter by adding a new function and
-        its corresponding default.
-
-        The input function (or sequence of functions) and its associated
-        default value (if any) are inserted in penultimate position of the
-        mapper. The corresponding type is estimated from the dtype of the
-        default value.
-
-        Parameters
-        ----------
-        func : var
-            Function, or sequence of functions
-
-        Examples
-        --------
-        >>> import dateutil.parser
-        >>> import datetime
-        >>> dateparser = dateutil.parser.parse
-        >>> defaultdate = datetime.date(2000, 1, 1)
-        >>> StringConverter.upgrade_mapper(dateparser, default=defaultdate)
-        """
-        # Func is a single function
-        if hasattr(func, '__call__'):
-            cls._mapper.insert(-1, (cls._getsubdtype(default), func, default))
-            return
-        elif hasattr(func, '__iter__'):
-            if isinstance(func[0], (tuple, list)):
-                for _ in func:
-                    cls._mapper.insert(-1, _)
-                return
-            if default is None:
-                default = [None] * len(func)
-            else:
-                default = list(default)
-                default.append([None] * (len(func) - len(default)))
-            for (fct, dft) in zip(func, default):
-                cls._mapper.insert(-1, (cls._getsubdtype(dft), fct, dft))
-    #
-    def __init__(self, dtype_or_func=None, default=None, missing_values=None,
-                 locked=False):
-        # Convert unicode (for Py3)
-        if isinstance(missing_values, unicode):
-            missing_values = asbytes(missing_values)
-        elif isinstance(missing_values, (list, tuple)):
-            missing_values = asbytes_nested(missing_values)
-        # Defines a lock for upgrade
-        self._locked = bool(locked)
-        # No input dtype: minimal initialization
-        if dtype_or_func is None:
-            self.func = str2bool
-            self._status = 0
-            self.default = default or False
-            ttype = np.bool
-        else:
-            # Is the input a np.dtype ?
-            try:
-                self.func = None
-                ttype = np.dtype(dtype_or_func).type
-            except TypeError:
-                # dtype_or_func must be a function, then
-                if not hasattr(dtype_or_func, '__call__'):
-                    errmsg = "The input argument `dtype` is neither a function"\
-                             " nor a dtype (got '%s' instead)"
-                    raise TypeError(errmsg % type(dtype_or_func))
-                # Set the function
-                self.func = dtype_or_func
-                # If we don't have a default, try to guess it or set it to None
-                if default is None:
-                    try:
-                        default = self.func(asbytes('0'))
-                    except ValueError:
-                        default = None
-                ttype = self._getsubdtype(default)
-        # Set the status according to the dtype
-        _status = -1
-        for (i, (deftype, func, default_def)) in enumerate(self._mapper):
-            if np.issubdtype(ttype, deftype):
-                _status = i
-                if default is None:
-                    self.default = default_def
-                else:
-                    self.default = default
-                break
-        if _status == -1:
-            # We never found a match in the _mapper...
-            _status = 0
-            self.default = default
-        self._status = _status
-        # If the input was a dtype, set the function to the last we saw
-        if self.func is None:
-            self.func = func
-        # If the status is 1 (int), change the function to something more robust
-        if self.func == self._mapper[1][1]:
-            self.func = lambda x : int(float(x))
-        # Store the list of strings corresponding to missing values.
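The `_mapper` above defines an upgrade ladder: each failed conversion bumps `_status` to the next, more permissive entry. Restated without the numpy scalar types (names invented; note the int rung uses plain `int`, which is why `'1.5'` upgrades past it -- the class only swaps in `int(float(x))` for the converter it actually keeps):

```python
def str2bool(value):
    # Mirrors the module-level str2bool above, minus the bytes handling.
    value = value.upper()
    if value == 'TRUE':
        return True
    if value == 'FALSE':
        return False
    raise ValueError('Invalid boolean')

# bool -> int -> float -> complex -> str, like StringConverter._mapper.
LADDER = [str2bool, int, float, complex, str]

def best_converter(value, status=0):
    # Walk up the ladder until a converter accepts the value,
    # the way upgrade() advances _status on ValueError.
    for i in range(status, len(LADDER)):
        try:
            LADDER[i](value)
            return i
        except ValueError:
            continue
    raise RuntimeError('Could not find a valid conversion function')

print(best_converter('False'))  # 0: stays a boolean column
print(best_converter('1.5'))    # 2: upgraded past bool and int
print(best_converter('1+2j'))   # 3: complex
```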
- if missing_values is None: - self.missing_values = set([asbytes('')]) - else: - if isinstance(missing_values, bytes): - missing_values = missing_values.split(asbytes(",")) - self.missing_values = set(list(missing_values) + [asbytes('')]) - # - self._callingfunction = self._strict_call - self.type = ttype - self._checked = False - self._initial_default = default - # - def _loose_call(self, value): - try: - return self.func(value) - except ValueError: - return self.default - # - def _strict_call(self, value): - try: - return self.func(value) - except ValueError: - if value.strip() in self.missing_values: - if not self._status: - self._checked = False - return self.default - raise ValueError("Cannot convert string '%s'" % value) - # - def __call__(self, value): - return self._callingfunction(value) - # - def upgrade(self, value): - """ - Try to find the best converter for a given string, and return the result. - - The supplied string `value` is converted by testing different - converters in order. First the `func` method of the `StringConverter` - instance is tried, if this fails other available converters are tried. - The order in which these other converters are tried is determined by the - `_status` attribute of the instance. - - Parameters - ---------- - value : str - The string to convert. - - Returns - ------- - out : any - The result of converting `value` with the appropriate converter. - - """ - self._checked = True - try: - self._strict_call(value) - except ValueError: - # Raise an exception if we locked the converter... 
- if self._locked: - errmsg = "Converter is locked and cannot be upgraded" - raise ConverterLockError(errmsg) - _statusmax = len(self._mapper) - # Complains if we try to upgrade by the maximum - _status = self._status - if _status == _statusmax: - errmsg = "Could not find a valid conversion function" - raise ConverterError(errmsg) - elif _status < _statusmax - 1: - _status += 1 - (self.type, self.func, default) = self._mapper[_status] - self._status = _status - if self._initial_default is not None: - self.default = self._initial_default - else: - self.default = default - self.upgrade(value) - - def iterupgrade(self, value): - self._checked = True - if not hasattr(value, '__iter__'): - value = (value,) - _strict_call = self._strict_call - try: - map(_strict_call, value) - except ValueError: - # Raise an exception if we locked the converter... - if self._locked: - errmsg = "Converter is locked and cannot be upgraded" - raise ConverterLockError(errmsg) - _statusmax = len(self._mapper) - # Complains if we try to upgrade by the maximum - _status = self._status - if _status == _statusmax: - raise ConverterError("Could not find a valid conversion function") - elif _status < _statusmax - 1: - _status += 1 - (self.type, self.func, default) = self._mapper[_status] - if self._initial_default is not None: - self.default = self._initial_default - else: - self.default = default - self._status = _status - self.iterupgrade(value) - - def update(self, func, default=None, missing_values=asbytes(''), - locked=False): - """ - Set StringConverter attributes directly. - - Parameters - ---------- - func : function - Conversion function. - default : any, optional - Value to return by default, that is, when the string to be converted - is flagged as missing. If not given, `StringConverter` tries to supply - a reasonable default value. - missing_values : sequence of str, optional - Sequence of strings indicating a missing value. 
- locked : bool, optional - Whether the StringConverter should be locked to prevent automatic - upgrade or not. Default is False. - - Notes - ----- - `update` takes the same parameters as the constructor of `StringConverter`, - except that `func` does not accept a `dtype` whereas `dtype_or_func` in - the constructor does. - - """ - self.func = func - self._locked = locked - # Don't reset the default to None if we can avoid it - if default is not None: - self.default = default - self.type = self._getsubdtype(default) - else: - try: - tester = func(asbytes('1')) - except (TypeError, ValueError): - tester = None - self.type = self._getsubdtype(tester) - # Add the missing values to the existing set - if missing_values is not None: - if _is_bytes_like(missing_values): - self.missing_values.add(missing_values) - elif hasattr(missing_values, '__iter__'): - for val in missing_values: - self.missing_values.add(val) - else: - self.missing_values = [] - - - -def easy_dtype(ndtype, names=None, defaultfmt="f%i", **validationargs): - """ - Convenience function to create a `np.dtype` object. - - The function processes the input `dtype` and matches it with the given - names. - - Parameters - ---------- - ndtype : var - Definition of the dtype. Can be any string or dictionary - recognized by the `np.dtype` function, or a sequence of types. - names : str or sequence, optional - Sequence of strings to use as field names for a structured dtype. - For convenience, `names` can be a string of a comma-separated list of - names. - defaultfmt : str, optional - Format string used to define missing names, such as ``"f%i"`` - (default) or ``"fields_%02i"``. - validationargs : optional - A series of optional arguments used to initialize a `NameValidator`. 
-
-    Examples
-    --------
-    >>> np.lib._iotools.easy_dtype(float)
-    dtype('float64')
-    >>> np.lib._iotools.easy_dtype("i4, f8")
-    dtype([('f0', '<i4'), ('f1', '<f8')])
-    >>> np.lib._iotools.easy_dtype("i4, f8", defaultfmt="field_%03i")
-    dtype([('field_000', '<i4'), ('field_001', '<f8')])
-    >>> np.lib._iotools.easy_dtype((int, float, float), names="a,b,c")
-    dtype([('a', '<i8'), ('b', '<f8'), ('c', '<f8')])
-    >>> np.lib._iotools.easy_dtype(float, names="a,b,c")
-    dtype([('a', '<f8'), ('b', '<f8'), ('c', '<f8')])
-
-    """
-    try:
-        ndtype = np.dtype(ndtype)
-    except TypeError:
-        validate = NameValidator(**validationargs)
-        nbfields = len(ndtype)
-        if names is None:
-            names = [''] * len(ndtype)
-        elif isinstance(names, basestring):
-            names = names.split(",")
-        names = validate(names, nbfields=nbfields, defaultfmt=defaultfmt)
-        ndtype = np.dtype(dict(formats=ndtype, names=names))
-    else:
-        nbtypes = len(ndtype)
-        # Explicit names
-        if names is not None:
-            validate = NameValidator(**validationargs)
-            if isinstance(names, basestring):
-                names = names.split(",")
-            # Simple dtype: repeat to match the nb of names
-            if nbtypes == 0:
-                formats = tuple([ndtype.type] * len(names))
-                names = validate(names, defaultfmt=defaultfmt)
-                ndtype = np.dtype(zip(names, formats))
-            # Structured dtype: just validate the names
-            else:
-                ndtype.names = validate(names, nbfields=nbtypes,
-                                        defaultfmt=defaultfmt)
-        # No implicit names
-        elif (nbtypes > 0):
-            validate = NameValidator(**validationargs)
-            # Default initial names : should we change the format ?
-            if (ndtype.names == tuple("f%i" % i for i in range(nbtypes))) and \
-               (defaultfmt != "f%i"):
-                ndtype.names = validate([''] * nbtypes, defaultfmt=defaultfmt)
-            # Explicit initial names : just validate
-            else:
-                ndtype.names = validate(ndtype.names, defaultfmt=defaultfmt)
-    return ndtype
-
diff --git a/pythonPackages/numpy/numpy/lib/arraysetops.py b/pythonPackages/numpy/numpy/lib/arraysetops.py
deleted file mode 100755
index 042866f302..0000000000
--- a/pythonPackages/numpy/numpy/lib/arraysetops.py
+++ /dev/null
@@ -1,488 +0,0 @@
-"""
-Set operations for 1D numeric arrays based on sorting.
-
-:Contains:
-  ediff1d,
-  unique,
-  intersect1d,
-  setxor1d,
-  in1d,
-  union1d,
-  setdiff1d
-
-:Deprecated:
-  unique1d,
-  intersect1d_nu,
-  setmember1d
-
-:Notes:
-
-For floating point arrays, inaccurate results may appear due to usual round-off
-and floating point comparison issues.
-
-Speed could be gained in some operations by an implementation of
-sort() that directly provides the permutation vectors, thus avoiding
-calls to argsort().
-
-To do: Optionally return indices analogously to unique for all functions.
-
-:Author: Robert Cimrman
-"""
-__all__ = ['ediff1d', 'unique1d', 'intersect1d', 'intersect1d_nu', 'setxor1d',
-           'setmember1d', 'union1d', 'setdiff1d', 'unique', 'in1d']
-
-import numpy as np
-from numpy.lib.utils import deprecate
-
-def ediff1d(ary, to_end=None, to_begin=None):
-    """
-    The differences between consecutive elements of an array.
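For reference, `ediff1d` is loosely `ary.flat[1:] - ary.flat[:-1]` with optional padding hstack'ed on; a quick equivalence check (assumes a working numpy install, any modern version):

```python
import numpy as np

x = np.array([1, 2, 4, 7, 0])

# Plain differences...
d = np.ediff1d(x)
# ...and the same thing spelled out with slicing.
ref = x[1:] - x[:-1]
assert (d == ref).all()

# to_begin/to_end are simply prepended/appended to the result.
padded = np.ediff1d(x, to_begin=-99, to_end=[88, 99])
print(padded.tolist())  # [-99, 1, 2, 3, -7, 88, 99]
```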
- - Parameters - ---------- - ary : array_like - If necessary, will be flattened before the differences are taken. - to_end : array_like, optional - Number(s) to append at the end of the returned differences. - to_begin : array_like, optional - Number(s) to prepend at the beginning of the returned differences. - - Returns - ------- - ed : ndarray - The differences. Loosely, this is ``ary.flat[1:] - ary.flat[:-1]``. - - See Also - -------- - diff, gradient - - Notes - ----- - When applied to masked arrays, this function drops the mask information - if the `to_begin` and/or `to_end` parameters are used. - - Examples - -------- - >>> x = np.array([1, 2, 4, 7, 0]) - >>> np.ediff1d(x) - array([ 1, 2, 3, -7]) - - >>> np.ediff1d(x, to_begin=-99, to_end=np.array([88, 99])) - array([-99, 1, 2, 3, -7, 88, 99]) - - The returned array is always 1D. - - >>> y = [[1, 2, 4], [1, 6, 24]] - >>> np.ediff1d(y) - array([ 1, 2, -3, 5, 18]) - - """ - ary = np.asanyarray(ary).flat - ed = ary[1:] - ary[:-1] - arrays = [ed] - if to_begin is not None: - arrays.insert(0, to_begin) - if to_end is not None: - arrays.append(to_end) - - if len(arrays) != 1: - # We'll save ourselves a copy of a potentially large array in - # the common case where neither to_begin or to_end was given. - ed = np.hstack(arrays) - - return ed - -def unique(ar, return_index=False, return_inverse=False): - """ - Find the unique elements of an array. - - Returns the sorted unique elements of an array. There are two optional - outputs in addition to the unique elements: the indices of the input array - that give the unique values, and the indices of the unique array that - reconstruct the input array. - - Parameters - ---------- - ar : array_like - Input array. This will be flattened if it is not already 1-D. - return_index : bool, optional - If True, also return the indices of `ar` that result in the unique - array. 
- return_inverse : bool, optional - If True, also return the indices of the unique array that can be used - to reconstruct `ar`. - - Returns - ------- - unique : ndarray - The sorted unique values. - unique_indices : ndarray, optional - The indices of the unique values in the (flattened) original array. - Only provided if `return_index` is True. - unique_inverse : ndarray, optional - The indices to reconstruct the (flattened) original array from the - unique array. Only provided if `return_inverse` is True. - - See Also - -------- - numpy.lib.arraysetops : Module with a number of other functions for - performing set operations on arrays. - - Examples - -------- - >>> np.unique([1, 1, 2, 2, 3, 3]) - array([1, 2, 3]) - >>> a = np.array([[1, 1], [2, 3]]) - >>> np.unique(a) - array([1, 2, 3]) - - Return the indices of the original array that give the unique values: - - >>> a = np.array(['a', 'b', 'b', 'c', 'a']) - >>> u, indices = np.unique(a, return_index=True) - >>> u - array(['a', 'b', 'c'], - dtype='|S1') - >>> indices - array([0, 1, 3]) - >>> a[indices] - array(['a', 'b', 'c'], - dtype='|S1') - - Reconstruct the input array from the unique values: - - >>> a = np.array([1, 2, 6, 4, 2, 3, 2]) - >>> u, indices = np.unique(a, return_inverse=True) - >>> u - array([1, 2, 3, 4, 6]) - >>> indices - array([0, 1, 4, 3, 1, 2, 1]) - >>> u[indices] - array([1, 2, 6, 4, 2, 3, 2]) - - """ - try: - ar = ar.flatten() - except AttributeError: - if not return_inverse and not return_index: - items = sorted(set(ar)) - return np.asarray(items) - else: - ar = np.asanyarray(ar).flatten() - - if ar.size == 0: - if return_inverse and return_index: - return ar, np.empty(0, np.bool), np.empty(0, np.bool) - elif return_inverse or return_index: - return ar, np.empty(0, np.bool) - else: - return ar - - if return_inverse or return_index: - perm = ar.argsort() - aux = ar[perm] - flag = np.concatenate(([True], aux[1:] != aux[:-1])) - if return_inverse: - iflag = np.cumsum(flag) - 1 - iperm = 
perm.argsort() - if return_index: - return aux[flag], perm[flag], iflag[iperm] - else: - return aux[flag], iflag[iperm] - else: - return aux[flag], perm[flag] - - else: - ar.sort() - flag = np.concatenate(([True], ar[1:] != ar[:-1])) - return ar[flag] - - -def intersect1d(ar1, ar2, assume_unique=False): - """ - Find the intersection of two arrays. - - Return the sorted, unique values that are in both of the input arrays. - - Parameters - ---------- - ar1, ar2 : array_like - Input arrays. - assume_unique : bool - If True, the input arrays are both assumed to be unique, which - can speed up the calculation. Default is False. - - Returns - ------- - out : ndarray - Sorted 1D array of common and unique elements. - - See Also - -------- - numpy.lib.arraysetops : Module with a number of other functions for - performing set operations on arrays. - - Examples - -------- - >>> np.intersect1d([1, 3, 4, 3], [3, 1, 2, 1]) - array([1, 3]) - - """ - if not assume_unique: - # Might be faster than unique( intersect1d( ar1, ar2 ) )? - ar1 = unique(ar1) - ar2 = unique(ar2) - aux = np.concatenate( (ar1, ar2) ) - aux.sort() - return aux[aux[1:] == aux[:-1]] - -def setxor1d(ar1, ar2, assume_unique=False): - """ - Find the set exclusive-or of two arrays. - - Return the sorted, unique values that are in only one (not both) of the - input arrays. - - Parameters - ---------- - ar1, ar2 : array_like - Input arrays. - assume_unique : bool - If True, the input arrays are both assumed to be unique, which - can speed up the calculation. Default is False. - - Returns - ------- - xor : ndarray - Sorted 1D array of unique values that are in only one of the input - arrays. 
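The concatenate-and-sort trick that `intersect1d` uses above can be sketched on its own. This is an illustrative reimplementation, not the library function itself; the name `intersect_sorted` is invented for the sketch:

```python
import numpy as np

def intersect_sorted(a, b):
    """Sketch of the sort-based intersection: after deduplicating both
    inputs, any value that appears twice in the sorted concatenation
    must occur in both arrays."""
    a = np.unique(a)
    b = np.unique(b)
    aux = np.concatenate((a, b))
    aux.sort()
    # adjacent equal values mark elements common to both inputs
    return aux[:-1][aux[1:] == aux[:-1]]

print(intersect_sorted([1, 3, 4, 3], [3, 1, 2, 1]))  # -> [1 3]
```

Note the sketch masks `aux[:-1]` rather than `aux`, since the comparison produces one fewer element than `aux` itself; the historical code above relied on older NumPy accepting a short boolean index.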
- - Examples - -------- - >>> a = np.array([1, 2, 3, 2, 4]) - >>> b = np.array([2, 3, 5, 7, 5]) - >>> np.setxor1d(a,b) - array([1, 4, 5, 7]) - - """ - if not assume_unique: - ar1 = unique(ar1) - ar2 = unique(ar2) - - aux = np.concatenate( (ar1, ar2) ) - if aux.size == 0: - return aux - - aux.sort() -# flag = ediff1d( aux, to_end = 1, to_begin = 1 ) == 0 - flag = np.concatenate( ([True], aux[1:] != aux[:-1], [True] ) ) -# flag2 = ediff1d( flag ) == 0 - flag2 = flag[1:] == flag[:-1] - return aux[flag2] - -def in1d(ar1, ar2, assume_unique=False): - """ - Test whether each element of a 1D array is also present in a second array. - - Returns a boolean array the same length as `ar1` that is True - where an element of `ar1` is in `ar2` and False otherwise. - - Parameters - ---------- - ar1 : array_like, shape (M,) - Input array. - ar2 : array_like - The values against which to test each value of `ar1`. - assume_unique : bool, optional - If True, the input arrays are both assumed to be unique, which - can speed up the calculation. Default is False. - - Returns - ------- - mask : ndarray of bools, shape(M,) - The values `ar1[mask]` are in `ar2`. - - See Also - -------- - numpy.lib.arraysetops : Module with a number of other functions for - performing set operations on arrays. - - Notes - ----- - `in1d` can be considered as an element-wise function version of the - python keyword `in`, for 1D sequences. ``in1d(a, b)`` is roughly - equivalent to ``np.array([item in b for item in a])``. - - .. versionadded:: 1.4.0 - - Examples - -------- - >>> test = np.array([0, 1, 2, 5, 0]) - >>> states = [0, 2] - >>> mask = np.in1d(test, states) - >>> mask - array([ True, False, True, False, True], dtype=bool) - >>> test[mask] - array([0, 2, 0]) - - """ - if not assume_unique: - ar1, rev_idx = np.unique(ar1, return_inverse=True) - ar2 = np.unique(ar2) - - ar = np.concatenate( (ar1, ar2) ) - # We need this to be a stable sort, so always use 'mergesort' - # here. 
The values from the first array should always come before - # the values from the second array. - order = ar.argsort(kind='mergesort') - sar = ar[order] - equal_adj = (sar[1:] == sar[:-1]) - flag = np.concatenate( (equal_adj, [False] ) ) - indx = order.argsort(kind='mergesort')[:len( ar1 )] - - if assume_unique: - return flag[indx] - else: - return flag[indx][rev_idx] - -def union1d(ar1, ar2): - """ - Find the union of two arrays. - - Return the unique, sorted array of values that are in either of the two - input arrays. - - Parameters - ---------- - ar1, ar2 : array_like - Input arrays. They are flattened if they are not already 1D. - - Returns - ------- - union : ndarray - Unique, sorted union of the input arrays. - - See Also - -------- - numpy.lib.arraysetops : Module with a number of other functions for - performing set operations on arrays. - - Examples - -------- - >>> np.union1d([-1, 0, 1], [-2, 0, 2]) - array([-2, -1, 0, 1, 2]) - - """ - return unique( np.concatenate( (ar1, ar2) ) ) - -def setdiff1d(ar1, ar2, assume_unique=False): - """ - Find the set difference of two arrays. - - Return the sorted, unique values in `ar1` that are not in `ar2`. - - Parameters - ---------- - ar1 : array_like - Input array. - ar2 : array_like - Input comparison array. - assume_unique : bool - If True, the input arrays are both assumed to be unique, which - can speed up the calculation. Default is False. - - Returns - ------- - difference : ndarray - Sorted 1D array of values in `ar1` that are not in `ar2`. - - See Also - -------- - numpy.lib.arraysetops : Module with a number of other functions for - performing set operations on arrays. 
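The stable-mergesort membership trick that `in1d` builds on can be isolated in a short sketch (an illustrative reduction assuming unique inputs, i.e. the `assume_unique=True` path; the function name is invented):

```python
import numpy as np

def in1d_sketch(ar1, ar2):
    """Membership test via one stable sort: concatenate, sort with
    'mergesort' so that ties keep ar1's values ahead of equal ar2
    values, then mark positions whose successor is equal -- those are
    exactly the ar1 values that also occur in ar2."""
    ar = np.concatenate((ar1, ar2))
    order = ar.argsort(kind='mergesort')   # stable: ar1 entries first among ties
    sar = ar[order]
    equal_adj = np.concatenate((sar[1:] == sar[:-1], [False]))
    # invert the sort and keep only the slots that came from ar1
    return equal_adj[order.argsort(kind='mergesort')][:len(ar1)]

mask = in1d_sketch(np.array([0, 1, 2, 5]), np.array([0, 2]))
print(mask)  # True where the element of ar1 is present in ar2
```

Stability is the whole point: with an unstable sort, an `ar2` value could land before an equal `ar1` value and the "successor equal" flag would end up attached to the wrong array's slot.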
- - Examples - -------- - >>> a = np.array([1, 2, 3, 2, 4, 1]) - >>> b = np.array([3, 4, 5, 6]) - >>> np.setdiff1d(a, b) - array([1, 2]) - - """ - if not assume_unique: - ar1 = unique(ar1) - ar2 = unique(ar2) - aux = in1d(ar1, ar2, assume_unique=True) - if aux.size == 0: - return aux - else: - return np.asarray(ar1)[aux == 0] - -@deprecate -def unique1d(ar1, return_index=False, return_inverse=False): - """ - This function is deprecated. Use unique() instead. - """ - if return_index: - import warnings - warnings.warn("The order of the output arguments for " - "`return_index` has changed. Before, " - "the output was (indices, unique_arr), but " - "has now been reversed to be more consistent.") - - ar = np.asanyarray(ar1).flatten() - if ar.size == 0: - if return_inverse and return_index: - return ar, np.empty(0, np.bool), np.empty(0, np.bool) - elif return_inverse or return_index: - return ar, np.empty(0, np.bool) - else: - return ar - - if return_inverse or return_index: - perm = ar.argsort() - aux = ar[perm] - flag = np.concatenate(([True], aux[1:] != aux[:-1])) - if return_inverse: - iflag = np.cumsum(flag) - 1 - iperm = perm.argsort() - if return_index: - return aux[flag], perm[flag], iflag[iperm] - else: - return aux[flag], iflag[iperm] - else: - return aux[flag], perm[flag] - - else: - ar.sort() - flag = np.concatenate(([True], ar[1:] != ar[:-1])) - return ar[flag] - -@deprecate -def intersect1d_nu(ar1, ar2): - """ - This function is deprecated. Use intersect1d() - instead. - """ - # Might be faster than unique1d( intersect1d( ar1, ar2 ) )? - aux = np.concatenate((unique1d(ar1), unique1d(ar2))) - aux.sort() - return aux[aux[1:] == aux[:-1]] - -@deprecate -def setmember1d(ar1, ar2): - """ - This function is deprecated. Use in1d(assume_unique=True) - instead. - """ - # We need this to be a stable sort, so always use 'mergesort' here. The - # values from the first array should always come before the values from the - # second array. 
- ar = np.concatenate( (ar1, ar2 ) ) - order = ar.argsort(kind='mergesort') - sar = ar[order] - equal_adj = (sar[1:] == sar[:-1]) - flag = np.concatenate( (equal_adj, [False] ) ) - - indx = order.argsort(kind='mergesort')[:len( ar1 )] - return flag[indx] diff --git a/pythonPackages/numpy/numpy/lib/arrayterator.py b/pythonPackages/numpy/numpy/lib/arrayterator.py deleted file mode 100755 index 3f07cd263c..0000000000 --- a/pythonPackages/numpy/numpy/lib/arrayterator.py +++ /dev/null @@ -1,195 +0,0 @@ -""" -A buffered iterator for big arrays. - -This module solves the problem of iterating over a big file-based array -without having to read it into memory. The `Arrayterator` class wraps -an array object, and when iterated it will return sub-arrays with at most -a user-specified number of elements. - -""" - -from __future__ import division - -from operator import mul - -__all__ = ['Arrayterator'] - -import sys -if sys.version_info[0] >= 3: - from functools import reduce - -class Arrayterator(object): - """ - Buffered iterator for big arrays. - - `Arrayterator` creates a buffered iterator for reading big arrays in small - contiguous blocks. The class is useful for objects stored in the - file system. It allows iteration over the object *without* reading - everything in memory; instead, small blocks are read and iterated over. - - `Arrayterator` can be used with any object that supports multidimensional - slices. This includes NumPy arrays, but also variables from - Scientific.IO.NetCDF or pynetcdf for example. - - Parameters - ---------- - var : array_like - The object to iterate over. - buf_size : int, optional - The buffer size. If `buf_size` is supplied, the maximum amount of - data that will be read into memory is `buf_size` elements. - Default is None, which will read as many element as possible - into memory. - - Attributes - ---------- - var - buf_size - start - stop - step - shape - flat - - See Also - -------- - ndenumerate : Multidimensional array iterator. 
- flatiter : Flat array iterator. - memmap : Create a memory-map to an array stored in a binary file on disk. - - Notes - ----- - The algorithm works by first finding a "running dimension", along which - the blocks will be extracted. Given an array of dimensions - ``(d1, d2, ..., dn)``, e.g. if `buf_size` is smaller than ``d1``, the - first dimension will be used. If, on the other hand, - ``d1 < buf_size < d1*d2`` the second dimension will be used, and so on. - Blocks are extracted along this dimension, and when the last block is - returned the process continues from the next dimension, until all - elements have been read. - - Examples - -------- - >>> import numpy as np - >>> a = np.arange(3 * 4 * 5 * 6).reshape(3, 4, 5, 6) - >>> a_itor = np.lib.arrayterator.Arrayterator(a, 2) - >>> a_itor.shape - (3, 4, 5, 6) - - Now we can iterate over ``a_itor``, and it will return arrays of size - two. Since `buf_size` was smaller than any dimension, the first - dimension will be iterated over first: - - >>> for subarr in a_itor: - ... if not subarr.all(): - ... print subarr, subarr.shape - ... - [[[[0 1]]]] (1, 1, 1, 2) - - """ - - def __init__(self, var, buf_size=None): - self.var = var - self.buf_size = buf_size - - self.start = [0 for dim in var.shape] - self.stop = [dim for dim in var.shape] - self.step = [1 for dim in var.shape] - - def __getattr__(self, attr): - return getattr(self.var, attr) - - def __getitem__(self, index): - """ - Return a new arrayterator. - - """ - # Fix index, handling ellipsis and incomplete slices. 
- if not isinstance(index, tuple): index = (index,) - fixed = [] - length, dims = len(index), len(self.shape) - for slice_ in index: - if slice_ is Ellipsis: - fixed.extend([slice(None)] * (dims-length+1)) - length = len(fixed) - elif isinstance(slice_, (int, long)): - fixed.append(slice(slice_, slice_+1, 1)) - else: - fixed.append(slice_) - index = tuple(fixed) - if len(index) < dims: - index += (slice(None),) * (dims-len(index)) - - # Return a new arrayterator object. - out = self.__class__(self.var, self.buf_size) - for i, (start, stop, step, slice_) in enumerate( - zip(self.start, self.stop, self.step, index)): - out.start[i] = start + (slice_.start or 0) - out.step[i] = step * (slice_.step or 1) - out.stop[i] = start + (slice_.stop or stop-start) - out.stop[i] = min(stop, out.stop[i]) - return out - - def __array__(self): - """ - Return corresponding data. - - """ - slice_ = tuple(slice(*t) for t in zip( - self.start, self.stop, self.step)) - return self.var[slice_] - - @property - def flat(self): - for block in self: - for value in block.flat: - yield value - - @property - def shape(self): - return tuple(((stop-start-1)//step+1) for start, stop, step in - zip(self.start, self.stop, self.step)) - - def __iter__(self): - # Skip arrays with degenerate dimensions - if [dim for dim in self.shape if dim <= 0]: raise StopIteration - - start = self.start[:] - stop = self.stop[:] - step = self.step[:] - ndims = len(self.var.shape) - - while 1: - count = self.buf_size or reduce(mul, self.shape) - - # iterate over each dimension, looking for the - # running dimension (ie, the dimension along which - # the blocks will be built from) - rundim = 0 - for i in range(ndims-1, -1, -1): - # if count is zero we ran out of elements to read - # along higher dimensions, so we read only a single position - if count == 0: - stop[i] = start[i]+1 - elif count <= self.shape[i]: # limit along this dimension - stop[i] = start[i] + count*step[i] - rundim = i - else: - stop[i] = 
self.stop[i] # read everything along this - # dimension - stop[i] = min(self.stop[i], stop[i]) - count = count//self.shape[i] - - # yield a block - slice_ = tuple(slice(*t) for t in zip(start, stop, step)) - yield self.var[slice_] - - # Update start position, taking care of overflow to - # other dimensions - start[rundim] = stop[rundim] # start where we stopped - for i in range(ndims-1, 0, -1): - if start[i] >= self.stop[i]: - start[i] = self.start[i] - start[i-1] += self.step[i-1] - if start[0] >= self.stop[0]: - raise StopIteration diff --git a/pythonPackages/numpy/numpy/lib/benchmarks/bench_arraysetops.py b/pythonPackages/numpy/numpy/lib/benchmarks/bench_arraysetops.py deleted file mode 100755 index 2c77fb758f..0000000000 --- a/pythonPackages/numpy/numpy/lib/benchmarks/bench_arraysetops.py +++ /dev/null @@ -1,65 +0,0 @@ -import numpy as np -import time -from numpy.lib.arraysetops import * - -def bench_unique1d( plot_results = False ): - exponents = np.linspace( 2, 7, 9 ) - ratios = [] - nItems = [] - dt1s = [] - dt2s = [] - for ii in exponents: - - nItem = 10 ** ii - print 'using %d items:' % nItem - a = np.fix( nItem / 10 * np.random.random( nItem ) ) - - print 'unique:' - tt = time.clock() - b = np.unique( a ) - dt1 = time.clock() - tt - print dt1 - - print 'unique1d:' - tt = time.clock() - c = unique1d( a ) - dt2 = time.clock() - tt - print dt2 - - - if dt1 < 1e-8: - ratio = 'ND' - else: - ratio = dt2 / dt1 - print 'ratio:', ratio - print 'nUnique: %d == %d\n' % (len( b ), len( c )) - - nItems.append( nItem ) - ratios.append( ratio ) - dt1s.append( dt1 ) - dt2s.append( dt2 ) - - assert np.alltrue( b == c ) - - print nItems - print dt1s - print dt2s - print ratios - - if plot_results: - import pylab - - def plotMe( fig, fun, nItems, dt1s, dt2s ): - pylab.figure( fig ) - fun( nItems, dt1s, 'g-o', linewidth = 2, markersize = 8 ) - fun( nItems, dt2s, 'b-x', linewidth = 2, markersize = 8 ) - pylab.legend( ('unique', 'unique1d' ) ) - pylab.xlabel( 'nItem' ) - 
pylab.ylabel( 'time [s]' ) - - plotMe( 1, pylab.loglog, nItems, dt1s, dt2s ) - plotMe( 2, pylab.plot, nItems, dt1s, dt2s ) - pylab.show() - -if __name__ == '__main__': - bench_unique1d( plot_results = True ) diff --git a/pythonPackages/numpy/numpy/lib/financial.py b/pythonPackages/numpy/numpy/lib/financial.py deleted file mode 100755 index 55ed2839ed..0000000000 --- a/pythonPackages/numpy/numpy/lib/financial.py +++ /dev/null @@ -1,658 +0,0 @@ -# Some simple financial calculations -# patterned after spreadsheet computations. - -# There is some complexity in each function -# so that the functions behave like ufuncs with -# broadcasting and being able to be called with scalars -# or arrays (or other sequences). -import numpy as np - -__all__ = ['fv', 'pmt', 'nper', 'ipmt', 'ppmt', 'pv', 'rate', - 'irr', 'npv', 'mirr'] - -_when_to_num = {'end':0, 'begin':1, - 'e':0, 'b':1, - 0:0, 1:1, - 'beginning':1, - 'start':1, - 'finish':0} - -def _convert_when(when): - try: - return _when_to_num[when] - except KeyError: - return [_when_to_num[x] for x in when] - - -def fv(rate, nper, pmt, pv, when='end'): - """ - Compute the future value. - - Given: - * a present value, `pv` - * an interest `rate` compounded once per period, of which - there are - * `nper` total - * a (fixed) payment, `pmt`, paid either - * at the beginning (`when` = {'begin', 1}) or the end - (`when` = {'end', 0}) of each period - - Return: - the value at the end of the `nper` periods - - Parameters - ---------- - rate : scalar or array_like of shape(M, ) - Rate of interest as decimal (not per cent) per period - nper : scalar or array_like of shape(M, ) - Number of compounding periods - pmt : scalar or array_like of shape(M, ) - Payment - pv : scalar or array_like of shape(M, ) - Present value - when : {{'begin', 1}, {'end', 0}}, {string, int}, optional - When payments are due ('begin' (1) or 'end' (0)). - Defaults to {'end', 0}. - - Returns - ------- - out : ndarray - Future values. 
If all input is scalar, returns a scalar float. If - any input is array_like, returns future values for each input element. - If multiple inputs are array_like, they all must have the same shape. - - Notes - ----- - The future value is computed by solving the equation:: - - fv + - pv*(1+rate)**nper + - pmt*(1 + rate*when)/rate*((1 + rate)**nper - 1) == 0 - - or, when ``rate == 0``:: - - fv + pv + pmt * nper == 0 - - References - ---------- - .. [WRW] Wheeler, D. A., E. Rathke, and R. Weir (Eds.) (2009, May). - Open Document Format for Office Applications (OpenDocument)v1.2, - Part 2: Recalculated Formula (OpenFormula) Format - Annotated Version, - Pre-Draft 12. Organization for the Advancement of Structured Information - Standards (OASIS). Billerica, MA, USA. [ODT Document]. - Available: - http://www.oasis-open.org/committees/documents.php?wg_abbrev=office-formula - OpenDocument-formula-20090508.odt - - Examples - -------- - What is the future value after 10 years of saving $100 now, with - an additional monthly savings of $100. Assume the interest rate is - 5% (annually) compounded monthly? - - >>> np.fv(0.05/12, 10*12, -100, -100) - 15692.928894335748 - - By convention, the negative sign represents cash flow out (i.e. money not - available today). Thus, saving $100 a month at 5% annual interest leads - to $15,692.93 available to spend in 10 years. - - If any input is array_like, returns an array of equal shape. Let's - compare different interest rates from the example above. 
- - >>> a = np.array((0.05, 0.06, 0.07))/12 - >>> np.fv(a, 10*12, -100, -100) - array([ 15692.92889434, 16569.87435405, 17509.44688102]) - - """ - when = _convert_when(when) - rate, nper, pmt, pv, when = map(np.asarray, [rate, nper, pmt, pv, when]) - temp = (1+rate)**nper - miter = np.broadcast(rate, nper, pmt, pv, when) - zer = np.zeros(miter.shape) - fact = np.where(rate==zer, nper+zer, (1+rate*when)*(temp-1)/rate+zer) - return -(pv*temp + pmt*fact) - -def pmt(rate, nper, pv, fv=0, when='end'): - """ - Compute the payment against loan principal plus interest. - - Given: - * a present value, `pv` (e.g., an amount borrowed) - * a future value, `fv` (e.g., 0) - * an interest `rate` compounded once per period, of which - there are - * `nper` total - * and (optional) specification of whether payment is made - at the beginning (`when` = {'begin', 1}) or the end - (`when` = {'end', 0}) of each period - - Return: - the (fixed) periodic payment. - - Parameters - ---------- - rate : array_like - Rate of interest (per period) - nper : array_like - Number of compounding periods - pv : array_like - Present value - fv : array_like (optional) - Future value (default = 0) - when : {{'begin', 1}, {'end', 0}}, {string, int} - When payments are due ('begin' (1) or 'end' (0)) - - Returns - ------- - out : ndarray - Payment against loan plus interest. If all input is scalar, returns a - scalar float. If any input is array_like, returns payment for each - input element. If multiple inputs are array_like, they all must have - the same shape. - - Notes - ----- - The payment is computed by solving the equation:: - - fv + - pv*(1 + rate)**nper + - pmt*(1 + rate*when)/rate*((1 + rate)**nper - 1) == 0 - - or, when ``rate == 0``:: - - fv + pv + pmt * nper == 0 - - for ``pmt``. - - Note that computing a monthly mortgage payment is only - one use for this function. 
For example, pmt returns the - periodic deposit one must make to achieve a specified - future balance given an initial deposit, a fixed, - periodically compounded interest rate, and the total - number of periods. - - References - ---------- - .. [WRW] Wheeler, D. A., E. Rathke, and R. Weir (Eds.) (2009, May). - Open Document Format for Office Applications (OpenDocument)v1.2, - Part 2: Recalculated Formula (OpenFormula) Format - Annotated Version, - Pre-Draft 12. Organization for the Advancement of Structured Information - Standards (OASIS). Billerica, MA, USA. [ODT Document]. - Available: - http://www.oasis-open.org/committees/documents.php - ?wg_abbrev=office-formulaOpenDocument-formula-20090508.odt - - Examples - -------- - What is the monthly payment needed to pay off a $200,000 loan in 15 - years at an annual interest rate of 7.5%? - - >>> np.pmt(0.075/12, 12*15, 200000) - -1854.0247200054619 - - In order to pay-off (i.e., have a future-value of 0) the $200,000 obtained - today, a monthly payment of $1,854.02 would be required. Note that this - example illustrates usage of `fv` having a default value of 0. - - """ - when = _convert_when(when) - rate, nper, pv, fv, when = map(np.asarray, [rate, nper, pv, fv, when]) - temp = (1+rate)**nper - miter = np.broadcast(rate, nper, pv, fv, when) - zer = np.zeros(miter.shape) - fact = np.where(rate==zer, nper+zer, (1+rate*when)*(temp-1)/rate+zer) - return -(fv + pv*temp) / fact - -def nper(rate, pmt, pv, fv=0, when='end'): - """ - Compute the number of periodic payments. 
- - Parameters - ---------- - rate : array_like - Rate of interest (per period) - pmt : array_like - Payment - pv : array_like - Present value - fv : array_like, optional - Future value - when : {{'begin', 1}, {'end', 0}}, {string, int}, optional - When payments are due ('begin' (1) or 'end' (0)) - - Notes - ----- - The number of periods ``nper`` is computed by solving the equation:: - - fv + pv*(1+rate)**nper + pmt*(1+rate*when)/rate*((1+rate)**nper-1) = 0 - - but if ``rate = 0`` then:: - - fv + pv + pmt*nper = 0 - - Examples - -------- - If you only had $150/month to pay towards the loan, how long would it take - to pay-off a loan of $8,000 at 7% annual interest? - - >>> np.nper(0.07/12, -150, 8000) - 64.073348770661852 - - So, over 64 months would be required to pay off the loan. - - The same analysis could be done with several different interest rates - and/or payments and/or total amounts to produce an entire table. - - >>> np.nper(*(np.ogrid[0.07/12: 0.08/12: 0.01/12, - ... -150 : -99 : 50 , - ... 8000 : 9001 : 1000])) - array([[[ 64.07334877, 74.06368256], - [ 108.07548412, 127.99022654]], - [[ 66.12443902, 76.87897353], - [ 114.70165583, 137.90124779]]]) - - """ - when = _convert_when(when) - rate, pmt, pv, fv, when = map(np.asarray, [rate, pmt, pv, fv, when]) - - use_zero_rate = False - old_err = np.seterr(divide="raise") - try: - try: - z = pmt*(1.0+rate*when)/rate - except FloatingPointError: - use_zero_rate = True - finally: - np.seterr(**old_err) - - if use_zero_rate: - return (-fv + pv) / (pmt + 0.0) - else: - A = -(fv + pv)/(pmt+0.0) - B = np.log((-fv+z) / (pv+z))/np.log(1.0+rate) - miter = np.broadcast(rate, pmt, pv, fv, when) - zer = np.zeros(miter.shape) - return np.where(rate==zer, A+zer, B+zer) + 0.0 - -def ipmt(rate, per, nper, pv, fv=0.0, when='end'): - """ - Not implemented. Compute the payment portion for loan interest. 
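The closed-form expression that `nper` applies in its non-zero-rate branch can be checked directly. This is a hand-rolled scalar sketch under the assumption `when='end'` (so the `(1 + rate*when)` factor drops out); the helper name is invented:

```python
import math

def nper_sketch(rate, pmt, pv, fv=0.0):
    """Solve fv + pv*(1+rate)**n + pmt*((1+rate)**n - 1)/rate = 0
    for n, with payments due at period end and rate != 0."""
    z = pmt / rate
    return math.log((-fv + z) / (pv + z)) / math.log(1.0 + rate)

# $8,000 loan at 7%/year, paying $150/month -- the docstring example above
n = nper_sketch(0.07 / 12, -150.0, 8000.0)
print(round(n, 4))  # roughly 64.07 months
```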
- - Parameters - ---------- - rate : scalar or array_like of shape(M, ) - Rate of interest as decimal (not per cent) per period - per : scalar or array_like of shape(M, ) - Interest paid against the loan changes during the life or the loan. - The `per` is the payment period to calculate the interest amount. - nper : scalar or array_like of shape(M, ) - Number of compounding periods - pv : scalar or array_like of shape(M, ) - Present value - fv : scalar or array_like of shape(M, ), optional - Future value - when : {{'begin', 1}, {'end', 0}}, {string, int}, optional - When payments are due ('begin' (1) or 'end' (0)). - Defaults to {'end', 0}. - - Returns - ------- - out : ndarray - Interest portion of payment. If all input is scalar, returns a scalar - float. If any input is array_like, returns interest payment for each - input element. If multiple inputs are array_like, they all must have - the same shape. - - See Also - -------- - ppmt, pmt, pv - - Notes - ----- - The total payment is made up of payment against principal plus interest. - - ``pmt = ppmt + ipmt`` - - """ - total = pmt(rate, nper, pv, fv, when) - # Now, compute the nth step in the amortization - raise NotImplementedError - -def ppmt(rate, per, nper, pv, fv=0.0, when='end'): - """ - Not implemented. Compute the payment against loan principal. - - Parameters - ---------- - rate : array_like - Rate of interest (per period) - per : array_like, int - Amount paid against the loan changes. The `per` is the period of - interest. - nper : array_like - Number of compounding periods - pv : array_like - Present value - fv : array_like, optional - Future value - when : {{'begin', 1}, {'end', 0}}, {string, int} - When payments are due ('begin' (1) or 'end' (0)) - - See Also - -------- - pmt, pv, ipmt - - """ - total = pmt(rate, nper, pv, fv, when) - return total - ipmt(rate, per, nper, pv, fv, when) - -def pv(rate, nper, pmt, fv=0.0, when='end'): - """ - Compute the present value. 
- - Given: - * a future value, `fv` - * an interest `rate` compounded once per period, of which - there are - * `nper` total - * a (fixed) payment, `pmt`, paid either - * at the beginning (`when` = {'begin', 1}) or the end - (`when` = {'end', 0}) of each period - - Return: - the value now - - Parameters - ---------- - rate : array_like - Rate of interest (per period) - nper : array_like - Number of compounding periods - pmt : array_like - Payment - fv : array_like, optional - Future value - when : {{'begin', 1}, {'end', 0}}, {string, int}, optional - When payments are due ('begin' (1) or 'end' (0)) - - Returns - ------- - out : ndarray, float - Present value of a series of payments or investments. - - Notes - ----- - The present value is computed by solving the equation:: - - fv + - pv*(1 + rate)**nper + - pmt*(1 + rate*when)/rate*((1 + rate)**nper - 1) = 0 - - or, when ``rate = 0``:: - - fv + pv + pmt * nper = 0 - - for `pv`, which is then returned. - - References - ---------- - .. [WRW] Wheeler, D. A., E. Rathke, and R. Weir (Eds.) (2009, May). - Open Document Format for Office Applications (OpenDocument)v1.2, - Part 2: Recalculated Formula (OpenFormula) Format - Annotated Version, - Pre-Draft 12. Organization for the Advancement of Structured Information - Standards (OASIS). Billerica, MA, USA. [ODT Document]. - Available: - http://www.oasis-open.org/committees/documents.php?wg_abbrev=office-formula - OpenDocument-formula-20090508.odt - - Examples - -------- - What is the present value (e.g., the initial investment) - of an investment that needs to total $15692.93 - after 10 years of saving $100 every month? Assume the - interest rate is 5% (annually) compounded monthly. - - >>> np.pv(0.05/12, 10*12, -100, 15692.93) - -100.00067131625819 - - By convention, the negative sign represents cash flow out - (i.e., money not available today). 
Thus, to end up with - $15,692.93 in 10 years saving $100 a month at 5% annual - interest, one's initial deposit should also be $100. - - If any input is array_like, ``pv`` returns an array of equal shape. - Let's compare different interest rates in the example above: - - >>> a = np.array((0.05, 0.04, 0.03))/12 - >>> np.pv(a, 10*12, -100, 15692.93) - array([ -100.00067132, -649.26771385, -1273.78633713]) - - So, to end up with the same $15692.93 under the same $100 per month - "savings plan," for annual interest rates of 4% and 3%, one would - need initial investments of $649.27 and $1273.79, respectively. - - """ - when = _convert_when(when) - rate, nper, pmt, fv, when = map(np.asarray, [rate, nper, pmt, fv, when]) - temp = (1+rate)**nper - miter = np.broadcast(rate, nper, pmt, fv, when) - zer = np.zeros(miter.shape) - fact = np.where(rate == zer, nper+zer, (1+rate*when)*(temp-1)/rate+zer) - return -(fv + pmt*fact)/temp - -# Computed with Sage -# (y + (r + 1)^n*x + p*((r + 1)^n - 1)*(r*w + 1)/r)/(n*(r + 1)^(n - 1)*x - p*((r + 1)^n - 1)*(r*w + 1)/r^2 + n*p*(r + 1)^(n - 1)*(r*w + 1)/r + p*((r + 1)^n - 1)*w/r) - -def _g_div_gp(r, n, p, x, y, w): - t1 = (r+1)**n - t2 = (r+1)**(n-1) - return (y + t1*x + p*(t1 - 1)*(r*w + 1)/r)/(n*t2*x - p*(t1 - 1)*(r*w + 1)/(r**2) + n*p*t2*(r*w + 1)/r + p*(t1 - 1)*w/r) - -# Use Newton's iteration until the change is less than 1e-6 -# for all values or a maximum of 100 iterations is reached. -# Newton's rule is -# r_{n+1} = r_{n} - g(r_n)/g'(r_n) -# where -# g(r) is the formula -# g'(r) is the derivative with respect to r. -def rate(nper, pmt, pv, fv, when='end', guess=0.10, tol=1e-6, maxiter=100): - """ - Compute the rate of interest per period. 
- - Parameters - ---------- - nper : array_like - Number of compounding periods - pmt : array_like - Payment - pv : array_like - Present value - fv : array_like - Future value - when : {{'begin', 1}, {'end', 0}}, {string, int}, optional - When payments are due ('begin' (1) or 'end' (0)) - guess : float, optional - Starting guess for solving the rate of interest - tol : float, optional - Required tolerance for the solution - maxiter : int, optional - Maximum iterations in finding the solution - - Notes - ----- - The rate of interest is computed by iteratively solving the - (non-linear) equation:: - - fv + pv*(1+rate)**nper + pmt*(1+rate*when)/rate * ((1+rate)**nper - 1) = 0 - - for ``rate``. - - References - ---------- - Wheeler, D. A., E. Rathke, and R. Weir (Eds.) (2009, May). Open Document - Format for Office Applications (OpenDocument)v1.2, Part 2: Recalculated - Formula (OpenFormula) Format - Annotated Version, Pre-Draft 12. - Organization for the Advancement of Structured Information Standards - (OASIS). Billerica, MA, USA. [ODT Document]. Available: - http://www.oasis-open.org/committees/documents.php?wg_abbrev=office-formula - OpenDocument-formula-20090508.odt - - """ - when = _convert_when(when) - nper, pmt, pv, fv, when = map(np.asarray, [nper, pmt, pv, fv, when]) - rn = guess - iter = 0 - close = False - while (iter < maxiter) and not close: - rnp1 = rn - _g_div_gp(rn, nper, pmt, pv, fv, when) - diff = abs(rnp1-rn) - close = np.all(diff<tol) - iter += 1 - rn = rnp1 - if not close: - # Return nan's in array of the same shape as rn - return np.nan + rn - else: - return rn - -def irr(values): - """ - Return the Internal Rate of Return (IRR). - - This is the "average" periodically compounded rate of return - that gives a net present value of 0.0; for a more complete - explanation, see Notes below. - - Parameters - ---------- - values : array_like, shape(N,) - Input cash flows per time period. By convention, net "deposits" - are negative and net "withdrawals" are positive. Thus, for example, - at least the first element of `values`, which represents the initial - investment, will typically be negative. - - Returns - ------- - out : float - Internal Rate of Return for periodic input values. - - Notes - ----- - The IRR is perhaps best understood through an example (illustrated using - np.irr in the Examples section below). Suppose one invests 100 units and - then makes the following withdrawals at regular (fixed) intervals: 39, - 59, 55, 20. Assuming the ending value is 0, the "average" rate of return - is neither simply 0.73/4 nor (1.73)^0.25-1, due to the combination of - compounding and the periodic withdrawals. Rather, for `values` - :math:`= [v_0, v_1, ... v_M]`, irr is the solution of the equation: [G]_ - - .. math:: \\sum_{t=0}^M{\\frac{v_t}{(1+irr)^{t}}} = 0 - - References - ---------- - .. [G] L. J. Gitman, "Principles of Managerial Finance, Brief," 3rd ed., - Addison-Wesley, 2003, pg. 348. - - Examples - -------- - >>> np.irr([-100, 39, 59, 55, 20]) - 0.2809484211599611 - - (Compare with the Example given for numpy.lib.financial.npv) - - """ - res = np.roots(values[::-1]) - # Find the root(s) between 0 and 1 - mask = (res.imag == 0) & (res.real > 0) & (res.real <= 1) - res = res[mask].real - if res.size == 0: - return np.nan - rate = 1.0/res - 1 - if rate.size == 1: - rate = rate.item() - return rate - -def npv(rate, values): - """ - Returns the NPV (Net Present Value) of a cash flow series.
- - Parameters - ---------- - rate : scalar - The discount rate. - values : array_like, shape(M, ) - The values of the time series of cash flows. The (fixed) time - interval between cash flow "events" must be the same as that - for which `rate` is given (i.e., if `rate` is per year, then - precisely a year is understood to elapse between each cash flow - event). By convention, investments or "deposits" are negative, - income or "withdrawals" are positive; `values` must begin with - the initial investment, thus `values[0]` will typically be - negative. - - Returns - ------- - out : float - The NPV of the input cash flow series `values` at the discount `rate`. - - Notes - ----- - Returns the result of: [G]_ - - .. math :: \\sum_{t=0}^M{\\frac{values_t}{(1+rate)^{t}}} - - References - ---------- - .. [G] L. J. Gitman, "Principles of Managerial Finance, Brief," 3rd ed., - Addison-Wesley, 2003, pg. 346. - - Examples - -------- - >>> np.npv(0.281,[-100, 39, 59, 55, 20]) - -0.0066187288356340801 - - (Compare with the Example given for numpy.lib.financial.irr) - - """ - values = np.asarray(values) - return (values / (1+rate)**np.arange(1,len(values)+1)).sum(axis=0) - -def mirr(values, finance_rate, reinvest_rate): - """ - Modified internal rate of return. - - Parameters - ---------- - values : array_like - Cash flows (must contain at least one positive and one negative value) - or nan is returned. The first value is considered a sunk cost at time zero. 
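A minimal plain-Python sketch of the NPV computation above. Note that, matching the implementation and its doctest rather than the ``t=0`` start of the summation shown in the Notes, the first cash flow is discounted by one full period; ``npv_sketch`` is an illustrative name:

```python
def npv_sketch(rate, values):
    # values[t] is discounted by (1+rate)**(t+1), i.e. exponents run 1..M,
    # exactly as in (values / (1+rate)**np.arange(1, len(values)+1)).sum().
    return sum(v / (1.0 + rate) ** (t + 1) for t, v in enumerate(values))
```

With the doctest's inputs this reproduces the documented value: ``npv_sketch(0.281, [-100, 39, 59, 55, 20])`` is approximately ``-0.00662``.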
- finance_rate : scalar - Interest rate paid on the cash flows - reinvest_rate : scalar - Interest rate received on the cash flows upon reinvestment - - Returns - ------- - out : float - Modified internal rate of return - - """ - - values = np.asarray(values, dtype=np.double) - n = values.size - pos = values > 0 - neg = values < 0 - if not (pos.any() and neg.any()): - return np.nan - numer = np.abs(npv(reinvest_rate, values*pos))*(1 + reinvest_rate) - denom = np.abs(npv(finance_rate, values*neg))*(1 + finance_rate) - return (numer/denom)**(1.0/(n - 1))*(1 + reinvest_rate) - 1 - diff --git a/pythonPackages/numpy/numpy/lib/format.py b/pythonPackages/numpy/numpy/lib/format.py deleted file mode 100755 index 1e508f3e5d..0000000000 --- a/pythonPackages/numpy/numpy/lib/format.py +++ /dev/null @@ -1,577 +0,0 @@ -""" -Define a simple format for saving numpy arrays to disk with the full -information about them. - -The ``.npy`` format is the standard binary file format in NumPy for -persisting a *single* arbitrary NumPy array on disk. The format stores all -of the shape and dtype information necessary to reconstruct the array -correctly even on another machine with a different architecture. -The format is designed to be as simple as possible while achieving -its limited goals. - -The ``.npz`` format is the standard format for persisting *multiple* NumPy -arrays on disk. A ``.npz`` file is a zip file containing multiple ``.npy`` -files, one for each array. - -Capabilities ------------- - -- Can represent all NumPy arrays including nested record arrays and - object arrays. - -- Represents the data in its native binary form. - -- Supports Fortran-contiguous arrays directly. - -- Stores all of the necessary information to reconstruct the array - including shape and dtype on a machine of a different - architecture. Both little-endian and big-endian arrays are - supported, and a file with little-endian numbers will yield - a little-endian array on any machine reading the file. 
The - types are described in terms of their actual sizes. For example, - if a machine with a 64-bit C "long int" writes out an array with - "long ints", a reading machine with 32-bit C "long ints" will yield - an array with 64-bit integers. - -- Is straightforward to reverse engineer. Datasets often live longer than - the programs that created them. A competent developer should be - able to create a solution in their preferred programming language to - read most ``.npy`` files that they have been given without much - documentation. - -- Allows memory-mapping of the data. See `open_memmap`. - -- Can be read from a filelike stream object instead of an actual file. - -- Stores object arrays, i.e. arrays containing elements that are arbitrary - Python objects. Files with object arrays are not mmapable, but - can be read and written to disk. - -Limitations ------------ - -- Arbitrary subclasses of numpy.ndarray are not completely preserved. - Subclasses will be accepted for writing, but only the array data will - be written out. A regular numpy.ndarray object will be created - upon reading the file. - -.. warning:: - - Due to limitations in the interpretation of structured dtypes, dtypes - with fields with empty names will have the names replaced by 'f0', 'f1', - etc. Such arrays will not round-trip through the format entirely - accurately. The data is intact; only the field names will differ. We are - working on a fix for this. This fix will not require a change in the - file format. The arrays with such structures can still be saved and - restored, and the correct dtype may be restored by using the - ``loadedarray.view(correct_dtype)`` method. - -File extensions ---------------- - -We recommend using the ``.npy`` and ``.npz`` extensions for files saved -in this format. This is by no means a requirement; applications may wish -to use these file formats but use an extension specific to the -application.
In the absence of an obvious alternative, however, -we suggest using ``.npy`` and ``.npz``. - -Version numbering ------------------ - -The version numbering of these formats is independent of NumPy version -numbering. If the format is upgraded, the code in `numpy.io` will still -be able to read and write Version 1.0 files. - -Format Version 1.0 ------------------- - -The first 6 bytes are a magic string: exactly ``\\x93NUMPY``. - -The next 1 byte is an unsigned byte: the major version number of the file -format, e.g. ``\\x01``. - -The next 1 byte is an unsigned byte: the minor version number of the file -format, e.g. ``\\x00``. Note: the version of the file format is not tied -to the version of the numpy package. - -The next 2 bytes form a little-endian unsigned short int: the length of -the header data HEADER_LEN. - -The next HEADER_LEN bytes form the header data describing the array's -format. It is an ASCII string which contains a Python literal expression -of a dictionary. It is terminated by a newline (``\\n``) and padded with -spaces (``\\x20``) to make the total length of -``magic string + 4 + HEADER_LEN`` be evenly divisible by 16 for alignment -purposes. - -The dictionary contains three keys: - - "descr" : dtype.descr - An object that can be passed as an argument to the `numpy.dtype` - constructor to create the array's dtype. - "fortran_order" : bool - Whether the array data is Fortran-contiguous or not. Since - Fortran-contiguous arrays are a common form of non-C-contiguity, - we allow them to be written directly to disk for efficiency. - "shape" : tuple of int - The shape of the array. - -For repeatability and readability, the dictionary keys are sorted in -alphabetic order. This is for convenience only. A writer SHOULD implement -this if possible. A reader MUST NOT depend on this. - -Following the header comes the array data. If the dtype contains Python -objects (i.e. ``dtype.hasobject is True``), then the data is a Python -pickle of the array. 
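The Version 1.0 preamble described above can be exercised with only the standard library. This sketch builds and parses the magic string, version bytes, little-endian header length, and padded literal-dict header; ``build_header_v1_0`` and ``parse_header_v1_0`` are illustrative names, and ``ast.literal_eval`` stands in for ``safe_eval``:

```python
import ast
import struct

MAGIC = b'\x93NUMPY'  # the 6-byte magic string

def build_header_v1_0(descr, fortran_order, shape):
    # Dictionary keys in alphabetic order, as the format recommends.
    header = "{'descr': %r, 'fortran_order': %r, 'shape': %r, }" % (
        descr, fortran_order, shape)
    # Pad with spaces plus a final newline so that
    # magic (6) + version (2) + length field (2) + HEADER_LEN
    # is evenly divisible by 16.
    current = len(MAGIC) + 2 + 2 + len(header) + 1
    header += ' ' * (16 - current % 16) + '\n'
    return (MAGIC + struct.pack('<BB', 1, 0)
            + struct.pack('<H', len(header)) + header.encode('latin1'))

def parse_header_v1_0(buf):
    assert buf[:6] == MAGIC, "not an .npy preamble"
    major, minor = buf[6], buf[7]
    (hlen,) = struct.unpack('<H', buf[8:10])
    # The header is a Python literal expression of a dict.
    d = ast.literal_eval(buf[10:10 + hlen].decode('latin1'))
    return major, minor, d
```

Round-tripping a header for a little-endian int32 array of shape (3, 4) yields a preamble whose total length is a multiple of 16, as required.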
Otherwise the data is the contiguous (either C- -or Fortran-, depending on ``fortran_order``) bytes of the array. -Consumers can figure out the number of bytes by multiplying the number -of elements given by the shape (noting that ``shape=()`` means there is -1 element) by ``dtype.itemsize``. - -Notes ------ -The ``.npy`` format, including reasons for creating it and a comparison of -alternatives, is described fully in the "npy-format" NEP. - -""" - -import cPickle - -import numpy -import sys -from numpy.lib.utils import safe_eval -from numpy.compat import asbytes, isfileobj - -MAGIC_PREFIX = asbytes('\x93NUMPY') -MAGIC_LEN = len(MAGIC_PREFIX) + 2 - -def magic(major, minor): - """ Return the magic string for the given file format version. - - Parameters - ---------- - major : int in [0, 255] - minor : int in [0, 255] - - Returns - ------- - magic : str - - Raises - ------ - ValueError if the version cannot be formatted. - """ - if major < 0 or major > 255: - raise ValueError("major version must be 0 <= major < 256") - if minor < 0 or minor > 255: - raise ValueError("minor version must be 0 <= minor < 256") - if sys.version_info[0] < 3: - return MAGIC_PREFIX + chr(major) + chr(minor) - else: - return MAGIC_PREFIX + bytes([major, minor]) - -def read_magic(fp): - """ Read the magic string to get the version of the file format. 
- - Parameters - ---------- - fp : filelike object - - Returns - ------- - major : int - minor : int - """ - magic_str = fp.read(MAGIC_LEN) - if len(magic_str) != MAGIC_LEN: - msg = "could not read %d characters for the magic string; got %r" - raise ValueError(msg % (MAGIC_LEN, magic_str)) - if magic_str[:-2] != MAGIC_PREFIX: - msg = "the magic string is not correct; expected %r, got %r" - raise ValueError(msg % (MAGIC_PREFIX, magic_str[:-2])) - if sys.version_info[0] < 3: - major, minor = map(ord, magic_str[-2:]) - else: - major, minor = magic_str[-2:] - return major, minor - -def dtype_to_descr(dtype): - """ - Get a serializable descriptor from the dtype. - - The .descr attribute of a dtype object cannot be round-tripped through - the dtype() constructor. Simple types, like dtype('float32'), have - a descr which looks like a record array with one field with '' as - a name. The dtype() constructor interprets this as a request to give - a default name. Instead, we construct descriptor that can be passed to - dtype(). - - Parameters - ---------- - dtype : dtype - The dtype of the array that will be written to disk. - - Returns - ------- - descr : object - An object that can be passed to `numpy.dtype()` in order to - replicate the input dtype. - - """ - if dtype.names is not None: - # This is a record array. The .descr is fine. - # XXX: parts of the record array with an empty name, like padding bytes, - # still get fiddled with. This needs to be fixed in the C implementation - # of dtype(). - return dtype.descr - else: - return dtype.str - -def header_data_from_array_1_0(array): - """ Get the dictionary of header metadata from a numpy.ndarray. - - Parameters - ---------- - array : numpy.ndarray - - Returns - ------- - d : dict - This has the appropriate entries for writing its string representation - to the header of the file. 
- """ - d = {} - d['shape'] = array.shape - if array.flags.c_contiguous: - d['fortran_order'] = False - elif array.flags.f_contiguous: - d['fortran_order'] = True - else: - # Totally non-contiguous data. We will have to make it C-contiguous - # before writing. Note that we need to test for C_CONTIGUOUS first - # because a 1-D array is both C_CONTIGUOUS and F_CONTIGUOUS. - d['fortran_order'] = False - - d['descr'] = dtype_to_descr(array.dtype) - return d - -def write_array_header_1_0(fp, d): - """ Write the header for an array using the 1.0 format. - - Parameters - ---------- - fp : filelike object - d : dict - This has the appropriate entries for writing its string representation - to the header of the file. - """ - import struct - header = ["{"] - for key, value in sorted(d.items()): - # Need to use repr here, since we eval these when reading - header.append("'%s': %s, " % (key, repr(value))) - header.append("}") - header = "".join(header) - # Pad the header with spaces and a final newline such that the magic - # string, the header-length short and the header are aligned on a 16-byte - # boundary. Hopefully, some system, possibly memory-mapping, can take - # advantage of our premature optimization. - current_header_len = MAGIC_LEN + 2 + len(header) + 1 # 1 for the newline - topad = 16 - (current_header_len % 16) - header = asbytes(header + ' '*topad + '\n') - if len(header) >= (256*256): - raise ValueError("header does not fit inside %s bytes" % (256*256)) - header_len_str = struct.pack('>> np.iterable([1, 2, 3]) - 1 - >>> np.iterable(2) - 0 - - """ - try: iter(y) - except: return 0 - return 1 - -def histogram(a, bins=10, range=None, normed=False, weights=None): - """ - Compute the histogram of a set of data. - - Parameters - ---------- - a : array_like - Input data. The histogram is computed over the flattened array. 
- bins : int or sequence of scalars, optional - If `bins` is an int, it defines the number of equal-width - bins in the given range (10, by default). If `bins` is a sequence, - it defines the bin edges, including the rightmost edge, allowing - for non-uniform bin widths. - range : (float, float), optional - The lower and upper range of the bins. If not provided, range - is simply ``(a.min(), a.max())``. Values outside the range are - ignored. - normed : bool, optional - If False, the result will contain the number of samples - in each bin. If True, the result is the value of the - probability *density* function at the bin, normalized such that - the *integral* over the range is 1. Note that the sum of the - histogram values will not be equal to 1 unless bins of unity - width are chosen; it is not a probability *mass* function. - weights : array_like, optional - An array of weights, of the same shape as `a`. Each value in `a` - only contributes its associated weight towards the bin count - (instead of 1). If `normed` is True, the weights are normalized, - so that the integral of the density over the range remains 1 - - Returns - ------- - hist : array - The values of the histogram. See `normed` and `weights` for a - description of the possible semantics. - bin_edges : array of dtype float - Return the bin edges ``(length(hist)+1)``. - - - See Also - -------- - histogramdd, bincount, searchsorted - - Notes - ----- - All but the last (righthand-most) bin is half-open. In other words, if - `bins` is:: - - [1, 2, 3, 4] - - then the first bin is ``[1, 2)`` (including 1, but excluding 2) and the - second ``[2, 3)``. The last bin, however, is ``[3, 4]``, which *includes* - 4. 
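The bin semantics spelled out in the Notes (all bins half-open except the last, which also includes its right edge) can be sketched without NumPy; ``simple_histogram`` is an illustrative name, not part of the API:

```python
def simple_histogram(data, edges):
    # Bins are [edges[i], edges[i+1]) except the last, which is
    # [edges[-2], edges[-1]] and so *includes* the right edge.
    counts = [0] * (len(edges) - 1)
    for v in data:
        if v < edges[0] or v > edges[-1]:
            continue  # values outside the range are ignored
        if v == edges[-1]:
            counts[-1] += 1  # right edge counted in the last bin
            continue
        for i in range(len(edges) - 1):
            if edges[i] <= v < edges[i + 1]:
                counts[i] += 1
                break
    return counts
```

This reproduces the first doctest above: ``simple_histogram([1, 2, 1], [0, 1, 2, 3])`` gives ``[0, 2, 1]``.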
- - Examples - -------- - >>> np.histogram([1, 2, 1], bins=[0, 1, 2, 3]) - (array([0, 2, 1]), array([0, 1, 2, 3])) - >>> np.histogram(np.arange(4), bins=np.arange(5), normed=True) - (array([ 0.25, 0.25, 0.25, 0.25]), array([0, 1, 2, 3, 4])) - >>> np.histogram([[1, 2, 1], [1, 0, 1]], bins=[0,1,2,3]) - (array([1, 4, 1]), array([0, 1, 2, 3])) - - >>> a = np.arange(5) - >>> hist, bin_edges = np.histogram(a, normed=True) - >>> hist - array([ 0.5, 0. , 0.5, 0. , 0. , 0.5, 0. , 0.5, 0. , 0.5]) - >>> hist.sum() - 2.4999999999999996 - >>> np.sum(hist*np.diff(bin_edges)) - 1.0 - - """ - - a = asarray(a) - if weights is not None: - weights = asarray(weights) - if np.any(weights.shape != a.shape): - raise ValueError( - 'weights should have the same shape as a.') - weights = weights.ravel() - a = a.ravel() - - if (range is not None): - mn, mx = range - if (mn > mx): - raise AttributeError( - 'max must be larger than min in range parameter.') - - if not iterable(bins): - if range is None: - range = (a.min(), a.max()) - mn, mx = [mi+0.0 for mi in range] - if mn == mx: - mn -= 0.5 - mx += 0.5 - bins = linspace(mn, mx, bins+1, endpoint=True) - else: - bins = asarray(bins) - if (np.diff(bins) < 0).any(): - raise AttributeError( - 'bins must increase monotonically.') - - # Histogram is an integer or a float array depending on the weights. 
- if weights is None: - ntype = int - else: - ntype = weights.dtype - n = np.zeros(bins.shape, ntype) - - block = 65536 - if weights is None: - for i in arange(0, len(a), block): - sa = sort(a[i:i+block]) - n += np.r_[sa.searchsorted(bins[:-1], 'left'), \ - sa.searchsorted(bins[-1], 'right')] - else: - zero = array(0, dtype=ntype) - for i in arange(0, len(a), block): - tmp_a = a[i:i+block] - tmp_w = weights[i:i+block] - sorting_index = np.argsort(tmp_a) - sa = tmp_a[sorting_index] - sw = tmp_w[sorting_index] - cw = np.concatenate(([zero,], sw.cumsum())) - bin_index = np.r_[sa.searchsorted(bins[:-1], 'left'), \ - sa.searchsorted(bins[-1], 'right')] - n += cw[bin_index] - - n = np.diff(n) - - if normed: - db = array(np.diff(bins), float) - return n/(n*db).sum(), bins - else: - return n, bins - - -def histogramdd(sample, bins=10, range=None, normed=False, weights=None): - """ - Compute the multidimensional histogram of some data. - - Parameters - ---------- - sample : array_like - The data to be histogrammed. It must be an (N,D) array or data - that can be converted to such. The rows of the resulting array - are the coordinates of points in a D dimensional polytope. - bins : sequence or int, optional - The bin specification: - - * A sequence of arrays describing the bin edges along each dimension. - * The number of bins for each dimension (nx, ny, ... =bins) - * The number of bins for all dimensions (nx=ny=...=bins). - - range : sequence, optional - A sequence of lower and upper bin edges to be used if the edges are - not given explicitely in `bins`. Defaults to the minimum and maximum - values along each dimension. - normed : boolean, optional - If False, returns the number of samples in each bin. If True, returns - the bin density, ie, the bin count divided by the bin hypervolume. - weights : array_like (N,), optional - An array of values `w_i` weighing each sample `(x_i, y_i, z_i, ...)`. - Weights are normalized to 1 if normed is True. 
If normed is False, the - values of the returned histogram are equal to the sum of the weights - belonging to the samples falling into each bin. - - Returns - ------- - H : ndarray - The multidimensional histogram of sample x. See normed and weights for - the different possible semantics. - edges : list - A list of D arrays describing the bin edges for each dimension. - - See Also - -------- - histogram: 1D histogram - histogram2d: 2D histogram - - Examples - -------- - >>> r = np.random.randn(100,3) - >>> H, edges = np.histogramdd(r, bins = (5, 8, 4)) - >>> H.shape, edges[0].size, edges[1].size, edges[2].size - ((5, 8, 4), 6, 9, 5) - - """ - - try: - # Sample is an ND-array. - N, D = sample.shape - except (AttributeError, ValueError): - # Sample is a sequence of 1D arrays. - sample = atleast_2d(sample).T - N, D = sample.shape - - nbin = empty(D, int) - edges = D*[None] - dedges = D*[None] - if weights is not None: - weights = asarray(weights) - - try: - M = len(bins) - if M != D: - raise AttributeError( - 'The dimension of bins must be equal'\ - ' to the dimension of the sample x.') - except TypeError: - bins = D*[bins] - - # Select range for each dimension - # Used only if number of bins is given. - if range is None: - smin = atleast_1d(array(sample.min(0), float)) - smax = atleast_1d(array(sample.max(0), float)) - else: - smin = zeros(D) - smax = zeros(D) - for i in arange(D): - smin[i], smax[i] = range[i] - - # Make sure the bins have a finite width. - for i in arange(len(smin)): - if smin[i] == smax[i]: - smin[i] = smin[i] - .5 - smax[i] = smax[i] + .5 - - # Create edge arrays - for i in arange(D): - if isscalar(bins[i]): - nbin[i] = bins[i] + 2 # +2 for outlier bins - edges[i] = linspace(smin[i], smax[i], nbin[i]-1) - else: - edges[i] = asarray(bins[i], float) - nbin[i] = len(edges[i])+1 # +1 for outlier bins - dedges[i] = diff(edges[i]) - - nbin = asarray(nbin) - - # Compute the bin number each sample falls into. 
- Ncount = {} - for i in arange(D): - Ncount[i] = digitize(sample[:,i], edges[i]) - - # Using digitize, values that fall on an edge are put in the right bin. - # For the rightmost bin, we want values equal to the right - # edge to be counted in the last bin, and not as an outlier. - outliers = zeros(N, int) - for i in arange(D): - # Rounding precision - decimal = int(-log10(dedges[i].min())) +6 - # Find which points are on the rightmost edge. - on_edge = where(around(sample[:,i], decimal) == around(edges[i][-1], - decimal))[0] - # Shift these points one bin to the left. - Ncount[i][on_edge] -= 1 - - # Flattened histogram matrix (1D) - # Reshape is used so that overlarge arrays - # will raise an error. - hist = zeros(nbin, float).reshape(-1) - - # Compute the sample indices in the flattened histogram matrix. - ni = nbin.argsort() - shape = [] - xy = zeros(N, int) - for i in arange(0, D-1): - xy += Ncount[ni[i]] * nbin[ni[i+1:]].prod() - xy += Ncount[ni[-1]] - - # Compute the number of repetitions in xy and assign it to the - # flattened histmat. - if len(xy) == 0: - return zeros(nbin-2, int), edges - - flatcount = bincount(xy, weights) - a = arange(len(flatcount)) - hist[a] = flatcount - - # Shape into a proper matrix - hist = hist.reshape(sort(nbin)) - for i in arange(nbin.size): - j = ni.argsort()[i] - hist = hist.swapaxes(i,j) - ni[i],ni[j] = ni[j],ni[i] - - # Remove outliers (indices 0 and -1 for each dimension). - core = D*[slice(1,-1)] - hist = hist[core] - - # Normalize if normed is True - if normed: - s = hist.sum() - for i in arange(D): - shape = ones(D, int) - shape[i] = nbin[i] - 2 - hist = hist / dedges[i].reshape(shape) - hist /= s - - if (hist.shape != nbin - 2).any(): - raise RuntimeError( - "Internal Shape Error") - return hist, edges - - -def average(a, axis=None, weights=None, returned=False): - """ - Compute the weighted average along the specified axis. - - Parameters - ---------- - a : array_like - Array containing data to be averaged. 
If `a` is not an array, a - conversion is attempted. - axis : int, optional - Axis along which to average `a`. If `None`, averaging is done over - the flattened array. - weights : array_like, optional - An array of weights associated with the values in `a`. Each value in - `a` contributes to the average according to its associated weight. - The weights array can either be 1-D (in which case its length must be - the size of `a` along the given axis) or of the same shape as `a`. - If `weights=None`, then all data in `a` are assumed to have a - weight equal to one. - returned : bool, optional - Default is `False`. If `True`, the tuple (`average`, `sum_of_weights`) - is returned, otherwise only the average is returned. - If `weights=None`, `sum_of_weights` is equivalent to the number of - elements over which the average is taken. - - - Returns - ------- - average, [sum_of_weights] : {array_type, double} - Return the average along the specified axis. When returned is `True`, - return a tuple with the average as the first element and the sum - of the weights as the second element. The return type is `Float` - if `a` is of integer type, otherwise it is of the same type as `a`. - `sum_of_weights` is of the same type as `average`. - - Raises - ------ - ZeroDivisionError - When all weights along axis are zero. See `numpy.ma.average` for a - version robust to this type of error. - TypeError - When the length of 1D `weights` is not the same as the shape of `a` - along axis. - - See Also - -------- - mean - - ma.average : average for masked arrays - - Examples - -------- - >>> data = range(1,5) - >>> data - [1, 2, 3, 4] - >>> np.average(data) - 2.5 - >>> np.average(range(1,11), weights=range(10,0,-1)) - 4.0 - - >>> data = np.arange(6).reshape((3,2)) - >>> data - array([[0, 1], - [2, 3], - [4, 5]]) - >>> np.average(data, axis=1, weights=[1./4, 3./4]) - array([ 0.75, 2.75, 4.75]) - >>> np.average(data, weights=[1./4, 3./4]) - Traceback (most recent call last): - ... 
- TypeError: Axis must be specified when shapes of a and weights differ. - - """ - if not isinstance(a, np.matrix) : - a = np.asarray(a) - - if weights is None : - avg = a.mean(axis) - scl = avg.dtype.type(a.size/avg.size) - else : - a = a + 0.0 - wgt = np.array(weights, dtype=a.dtype, copy=0) - - # Sanity checks - if a.shape != wgt.shape : - if axis is None : - raise TypeError( - "Axis must be specified when shapes of a "\ - "and weights differ.") - if wgt.ndim != 1 : - raise TypeError( - "1D weights expected when shapes of a and "\ - "weights differ.") - if wgt.shape[0] != a.shape[axis] : - raise ValueError( - "Length of weights not compatible with "\ - "specified axis.") - - # setup wgt to broadcast along axis - wgt = np.array(wgt, copy=0, ndmin=a.ndim).swapaxes(-1, axis) - - scl = wgt.sum(axis=axis) - if (scl == 0.0).any(): - raise ZeroDivisionError( - "Weights sum to zero, can't be normalized") - - avg = np.multiply(a, wgt).sum(axis)/scl - - if returned: - scl = np.multiply(avg, 0) + scl - return avg, scl - else: - return avg - -def asarray_chkfinite(a): - """ - Convert the input to an array, checking for NaNs or Infs. - - Parameters - ---------- - a : array_like - Input data, in any form that can be converted to an array. This - includes lists, lists of tuples, tuples, tuples of tuples, tuples - of lists and ndarrays. Success requires no NaNs or Infs. - dtype : data-type, optional - By default, the data-type is inferred from the input data. - order : {'C', 'F'}, optional - Whether to use row-major ('C') or column-major ('FORTRAN') memory - representation. Defaults to 'C'. - - Returns - ------- - out : ndarray - Array interpretation of `a`. No copy is performed if the input - is already an ndarray. If `a` is a subclass of ndarray, a base - class ndarray is returned. - - Raises - ------ - ValueError - Raises ValueError if `a` contains NaN (Not a Number) or Inf (Infinity). - - See Also - -------- - asarray : Create and array. 
- asanyarray : Similar function which passes through subclasses. - ascontiguousarray : Convert input to a contiguous array. - asfarray : Convert input to a floating point ndarray. - asfortranarray : Convert input to an ndarray with column-major - memory order. - fromiter : Create an array from an iterator. - fromfunction : Construct an array by executing a function on grid - positions. - - Examples - -------- - Convert a list into an array. If all elements are finite - ``asarray_chkfinite`` is identical to ``asarray``. - - >>> a = [1, 2] - >>> np.asarray_chkfinite(a) - array([1, 2]) - - Raises ValueError if array_like contains Nans or Infs. - - >>> a = [1, 2, np.inf] - >>> try: - ... np.asarray_chkfinite(a) - ... except ValueError: - ... print 'ValueError' - ... - ValueError - - """ - a = asarray(a) - if (a.dtype.char in typecodes['AllFloat']) \ - and (_nx.isnan(a).any() or _nx.isinf(a).any()): - raise ValueError( - "array must not contain infs or NaNs") - return a - -def piecewise(x, condlist, funclist, *args, **kw): - """ - Evaluate a piecewise-defined function. - - Given a set of conditions and corresponding functions, evaluate each - function on the input data wherever its condition is true. - - Parameters - ---------- - x : ndarray - The input domain. - condlist : list of bool arrays - Each boolean array corresponds to a function in `funclist`. Wherever - `condlist[i]` is True, `funclist[i](x)` is used as the output value. - - Each boolean array in `condlist` selects a piece of `x`, - and should therefore be of the same shape as `x`. - - The length of `condlist` must correspond to that of `funclist`. - If one extra function is given, i.e. if - ``len(funclist) - len(condlist) == 1``, then that extra function - is the default value, used wherever all conditions are false. - funclist : list of callables, f(x,*args,**kw), or scalars - Each function is evaluated over `x` wherever its corresponding - condition is True. 
It should take an array as input and give an array - or a scalar value as output. If, instead of a callable, - a scalar is provided then a constant function (``lambda x: scalar``) is - assumed. - args : tuple, optional - Any further arguments given to `piecewise` are passed to the functions - upon execution, i.e., if called ``piecewise(..., ..., 1, 'a')``, then - each function is called as ``f(x, 1, 'a')``. - kw : dict, optional - Keyword arguments used in calling `piecewise` are passed to the - functions upon execution, i.e., if called - ``piecewise(..., ..., lambda=1)``, then each function is called as - ``f(x, lambda=1)``. - - Returns - ------- - out : ndarray - The output is the same shape and type as x and is found by - calling the functions in `funclist` on the appropriate portions of `x`, - as defined by the boolean arrays in `condlist`. Portions not covered - by any condition have undefined values. - - - See Also - -------- - choose, select, where - - Notes - ----- - This is similar to choose or select, except that functions are - evaluated on elements of `x` that satisfy the corresponding condition from - `condlist`. - - The result is:: - - |-- - |funclist[0](x[condlist[0]]) - out = |funclist[1](x[condlist[1]]) - |... - |funclist[n2](x[condlist[n2]]) - |-- - - Examples - -------- - Define the sigma function, which is -1 for ``x < 0`` and +1 for ``x >= 0``. - - >>> x = np.arange(6) - 2.5 - >>> np.piecewise(x, [x < 0, x >= 0], [-1, 1]) - array([-1., -1., -1., 1., 1., 1.]) - - Define the absolute value, which is ``-x`` for ``x <0`` and ``x`` for - ``x >= 0``. 
- - >>> np.piecewise(x, [x < 0, x >= 0], [lambda x: -x, lambda x: x]) - array([ 2.5, 1.5, 0.5, 0.5, 1.5, 2.5]) - - """ - x = asanyarray(x) - n2 = len(funclist) - if isscalar(condlist) or \ - not (isinstance(condlist[0], list) or - isinstance(condlist[0], ndarray)): - condlist = [condlist] - condlist = [asarray(c, dtype=bool) for c in condlist] - n = len(condlist) - if n == n2-1: # compute the "otherwise" condition. - totlist = condlist[0] - for k in range(1, n): - totlist |= condlist[k] - condlist.append(~totlist) - n += 1 - if (n != n2): - raise ValueError( - "function list and condition list must be the same") - zerod = False - # This is a hack to work around problems with NumPy's - # handling of 0-d arrays and boolean indexing with - # numpy.bool_ scalars - if x.ndim == 0: - x = x[None] - zerod = True - newcondlist = [] - for k in range(n): - if condlist[k].ndim == 0: - condition = condlist[k][None] - else: - condition = condlist[k] - newcondlist.append(condition) - condlist = newcondlist - - y = zeros(x.shape, x.dtype) - for k in range(n): - item = funclist[k] - if not callable(item): - y[condlist[k]] = item - else: - vals = x[condlist[k]] - if vals.size > 0: - y[condlist[k]] = item(vals, *args, **kw) - if zerod: - y = y.squeeze() - return y - -def select(condlist, choicelist, default=0): - """ - Return an array drawn from elements in choicelist, depending on conditions. - - Parameters - ---------- - condlist : list of bool ndarrays - The list of conditions which determine from which array in `choicelist` - the output elements are taken. When multiple conditions are satisfied, - the first one encountered in `condlist` is used. - choicelist : list of ndarrays - The list of arrays from which the output elements are taken. It has - to be of the same length as `condlist`. - default : scalar, optional - The element inserted in `output` when all conditions evaluate to False. 
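The piecewise evaluation above can be sketched in plain Python over lists; ``piecewise_eval`` is an illustrative stand-in that, like the real function, accepts either callables or scalar constants in the function list:

```python
def piecewise_eval(xs, conds, funcs):
    # conds[i] is a boolean list parallel to xs; wherever conds[i] is True,
    # funcs[i] supplies the output (called if callable, used as a constant
    # otherwise). Positions covered by no condition stay at 0.
    out = [0] * len(xs)
    for cond, func in zip(conds, funcs):
        for i, (x, c) in enumerate(zip(xs, cond)):
            if c:
                out[i] = func(x) if callable(func) else func
    return out
```

With the doctest's inputs (``x = np.arange(6) - 2.5`` expressed as a list), the absolute-value and sign examples come out as documented.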
- - Returns - ------- - output : ndarray - The output at position m is the m-th element of the array in - `choicelist` where the m-th element of the corresponding array in - `condlist` is True. - - See Also - -------- - where : Return elements from one of two arrays depending on condition. - take, choose, compress, diag, diagonal - - Examples - -------- - >>> x = np.arange(10) - >>> condlist = [x<3, x>5] - >>> choicelist = [x, x**2] - >>> np.select(condlist, choicelist) - array([ 0, 1, 2, 0, 0, 0, 36, 49, 64, 81]) - - """ - n = len(condlist) - n2 = len(choicelist) - if n2 != n: - raise ValueError( - "list of cases must be same length as list of conditions") - choicelist = [default] + choicelist - S = 0 - pfac = 1 - for k in range(1, n+1): - S += k * pfac * asarray(condlist[k-1]) - if k < n: - pfac *= (1-asarray(condlist[k-1])) - # handle special case of a 1-element condition but - # a multi-element choice - if type(S) in ScalarType or max(asarray(S).shape)==1: - pfac = asarray(1) - for k in range(n2+1): - pfac = pfac + asarray(choicelist[k]) - if type(S) in ScalarType: - S = S*ones(asarray(pfac).shape, type(S)) - else: - S = S*ones(asarray(pfac).shape, S.dtype) - return choose(S, tuple(choicelist)) - -def copy(a): - """ - Return an array copy of the given object. - - Parameters - ---------- - a : array_like - Input data. - - Returns - ------- - arr : ndarray - Array interpretation of `a`. - - Notes - ----- - This is equivalent to - - >>> np.array(a, copy=True) #doctest: +SKIP - - Examples - -------- - Create an array x, with a reference y and a copy z: - - >>> x = np.array([1, 2, 3]) - >>> y = x - >>> z = np.copy(x) - - Note that, when we modify x, y changes, but not z: - - >>> x[0] = 10 - >>> x[0] == y[0] - True - >>> x[0] == z[0] - False - - """ - return array(a, copy=True) - -# Basic operations - -def gradient(f, *varargs): - """ - Return the gradient of an N-dimensional array. 
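In one dimension, gradient's scheme (central differences in the interior, first differences at the two endpoints) reduces to the following sketch; ``gradient1d`` is an illustrative name:

```python
def gradient1d(f, dx=1.0):
    # Central differences inside, one-sided differences at the boundaries,
    # so the result has the same length as the input.
    n = len(f)
    g = [0.0] * n
    g[0] = (f[1] - f[0]) / dx           # forward difference at the left end
    g[-1] = (f[-1] - f[-2]) / dx        # backward difference at the right end
    for i in range(1, n - 1):
        g[i] = (f[i + 1] - f[i - 1]) / (2.0 * dx)
    return g
```

This reproduces the 1-D doctest above: ``gradient1d([1, 2, 4, 7, 11, 16])`` gives ``[1.0, 1.5, 2.5, 3.5, 4.5, 5.0]``.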
- - The gradient is computed using central differences in the interior - and first differences at the boundaries. The returned gradient hence has - the same shape as the input array. - - Parameters - ---------- - f : array_like - An N-dimensional array containing samples of a scalar function. - `*varargs` : scalars - 0, 1, or N scalars specifying the sample distances in each direction, - that is: `dx`, `dy`, `dz`, ... The default distance is 1. - - - Returns - ------- - g : ndarray - N arrays of the same shape as `f` giving the derivative of `f` with - respect to each dimension. - - Examples - -------- - >>> x = np.array([1, 2, 4, 7, 11, 16], dtype=np.float) - >>> np.gradient(x) - array([ 1. , 1.5, 2.5, 3.5, 4.5, 5. ]) - >>> np.gradient(x, 2) - array([ 0.5 , 0.75, 1.25, 1.75, 2.25, 2.5 ]) - - >>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=np.float)) - [array([[ 2., 2., -1.], - [ 2., 2., -1.]]), - array([[ 1. , 2.5, 4. ], - [ 1. , 1. , 1. ]])] - - """ - N = len(f.shape) # number of dimensions - n = len(varargs) - if n == 0: - dx = [1.0]*N - elif n == 1: - dx = [varargs[0]]*N - elif n == N: - dx = list(varargs) - else: - raise SyntaxError( - "invalid number of arguments") - - # use central differences on interior and first differences on endpoints - - outvals = [] - - # create slice objects --- initially all are [:, :, ..., :] - slice1 = [slice(None)]*N - slice2 = [slice(None)]*N - slice3 = [slice(None)]*N - - otype = f.dtype.char - if otype not in ['f', 'd', 'F', 'D']: - otype = 'd' - - for axis in range(N): - # select out appropriate parts for this dimension - out = np.zeros_like(f).astype(otype) - slice1[axis] = slice(1, -1) - slice2[axis] = slice(2, None) - slice3[axis] = slice(None, -2) - # 1D equivalent -- out[1:-1] = (f[2:] - f[:-2])/2.0 - out[slice1] = (f[slice2] - f[slice3])/2.0 - slice1[axis] = 0 - slice2[axis] = 1 - slice3[axis] = 0 - # 1D equivalent -- out[0] = (f[1] - f[0]) - out[slice1] = (f[slice2] - f[slice3]) - slice1[axis] = -1 - 
slice2[axis] = -1 - slice3[axis] = -2 - # 1D equivalent -- out[-1] = (f[-1] - f[-2]) - out[slice1] = (f[slice2] - f[slice3]) - - # divide by step size - outvals.append(out / dx[axis]) - - # reset the slice object in this dimension to ":" - slice1[axis] = slice(None) - slice2[axis] = slice(None) - slice3[axis] = slice(None) - - if N == 1: - return outvals[0] - else: - return outvals - - -def diff(a, n=1, axis=-1): - """ - Calculate the n-th order discrete difference along given axis. - - The first order difference is given by ``out[n] = a[n+1] - a[n]`` along - the given axis, higher order differences are calculated by using `diff` - recursively. - - Parameters - ---------- - a : array_like - Input array - n : int, optional - The number of times values are differenced. - axis : int, optional - The axis along which the difference is taken, default is the last axis. - - Returns - ------- - out : ndarray - The `n` order differences. The shape of the output is the same as `a` - except along `axis` where the dimension is smaller by `n`. - - See Also - -------- - gradient, ediff1d - - Examples - -------- - >>> x = np.array([1, 2, 4, 7, 0]) - >>> np.diff(x) - array([ 1, 2, 3, -7]) - >>> np.diff(x, n=2) - array([ 1, 1, -10]) - - >>> x = np.array([[1, 3, 6, 10], [0, 5, 6, 8]]) - >>> np.diff(x) - array([[2, 3, 4], - [5, 1, 2]]) - >>> np.diff(x, axis=0) - array([[-1, 2, 0, -2]]) - - """ - if n == 0: - return a - if n < 0: - raise ValueError( - "order must be non-negative but got " + repr(n)) - a = asanyarray(a) - nd = len(a.shape) - slice1 = [slice(None)]*nd - slice2 = [slice(None)]*nd - slice1[axis] = slice(1, None) - slice2[axis] = slice(None, -1) - slice1 = tuple(slice1) - slice2 = tuple(slice2) - if n > 1: - return diff(a[slice1]-a[slice2], n-1, axis=axis) - else: - return a[slice1]-a[slice2] - -def interp(x, xp, fp, left=None, right=None): - """ - One-dimensional linear interpolation. 
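The piecewise-linear rule that `interp` implements can be sketched in pure Python for a single scalar `x` (an illustrative model using `bisect`; the real function defers to the compiled `compiled_interp` and also accepts array `x`):

```python
from bisect import bisect_right

def interp_sketch(x, xp, fp, left=None, right=None):
    """Linearly interpolate a scalar x over increasing sample points (xp, fp)."""
    if len(xp) != len(fp):
        raise ValueError("xp and fp must have the same length")
    if x <= xp[0]:
        # below the range: fp[0] unless an explicit `left` is given
        return fp[0] if left is None or x == xp[0] else left
    if x >= xp[-1]:
        # above the range: fp[-1] unless an explicit `right` is given
        return fp[-1] if right is None or x == xp[-1] else right
    i = bisect_right(xp, x) - 1            # interval containing x
    t = (x - xp[i]) / (xp[i + 1] - xp[i])  # fractional position in interval
    return fp[i] + t * (fp[i + 1] - fp[i])

print(interp_sketch(2.5, [1, 2, 3], [3, 2, 0]))  # -> 1.0
```

As with `np.interp`, nothing here checks that `xp` is increasing; garbage in, garbage out.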
- 
-     Returns the one-dimensional piecewise linear interpolant to a function
-     with given values at discrete data-points.
- 
-     Parameters
-     ----------
-     x : array_like
-         The x-coordinates of the interpolated values.
- 
-     xp : 1-D sequence of floats
-         The x-coordinates of the data points, must be increasing.
- 
-     fp : 1-D sequence of floats
-         The y-coordinates of the data points, same length as `xp`.
- 
-     left : float, optional
-         Value to return for `x < xp[0]`, default is `fp[0]`.
- 
-     right : float, optional
-         Value to return for `x > xp[-1]`, default is `fp[-1]`.
- 
-     Returns
-     -------
-     y : {float, ndarray}
-         The interpolated values, same shape as `x`.
- 
-     Raises
-     ------
-     ValueError
-         If `xp` and `fp` have different lengths.
- 
-     Notes
-     -----
-     Does not check that the x-coordinate sequence `xp` is increasing.
-     If `xp` is not increasing, the results are nonsense.
-     A simple check that `xp` is increasing is::
- 
-         np.all(np.diff(xp) > 0)
- 
- 
-     Examples
-     --------
-     >>> xp = [1, 2, 3]
-     >>> fp = [3, 2, 0]
-     >>> np.interp(2.5, xp, fp)
-     1.0
-     >>> np.interp([0, 1, 1.5, 2.72, 3.14], xp, fp)
-     array([ 3.  ,  3.  ,  2.5 ,  0.56,  0.  ])
-     >>> UNDEF = -99.0
-     >>> np.interp(3.14, xp, fp, right=UNDEF)
-     -99.0
- 
-     Plot an interpolant to the sine function:
- 
-     >>> x = np.linspace(0, 2*np.pi, 10)
-     >>> y = np.sin(x)
-     >>> xvals = np.linspace(0, 2*np.pi, 50)
-     >>> yinterp = np.interp(xvals, x, y)
-     >>> import matplotlib.pyplot as plt
-     >>> plt.plot(x, y, 'o')
-     [<matplotlib.lines.Line2D object at 0x...>]
-     >>> plt.plot(xvals, yinterp, '-x')
-     [<matplotlib.lines.Line2D object at 0x...>]
-     >>> plt.show()
- 
-     """
-     if isinstance(x, (float, int, number)):
-         return compiled_interp([x], xp, fp, left, right).item()
-     elif isinstance(x, np.ndarray) and x.ndim == 0:
-         return compiled_interp([x], xp, fp, left, right).item()
-     else:
-         return compiled_interp(x, xp, fp, left, right)
- 
- 
- def angle(z, deg=0):
-     """
-     Return the angle of the complex argument.
- 
-     Parameters
-     ----------
-     z : array_like
-         A complex number or sequence of complex numbers.
- deg : bool, optional - Return angle in degrees if True, radians if False (default). - - Returns - ------- - angle : {ndarray, scalar} - The counterclockwise angle from the positive real axis on - the complex plane, with dtype as numpy.float64. - - See Also - -------- - arctan2 - absolute - - - - Examples - -------- - >>> np.angle([1.0, 1.0j, 1+1j]) # in radians - array([ 0. , 1.57079633, 0.78539816]) - >>> np.angle(1+1j, deg=True) # in degrees - 45.0 - - """ - if deg: - fact = 180/pi - else: - fact = 1.0 - z = asarray(z) - if (issubclass(z.dtype.type, _nx.complexfloating)): - zimag = z.imag - zreal = z.real - else: - zimag = 0 - zreal = z - return arctan2(zimag, zreal) * fact - -def unwrap(p, discont=pi, axis=-1): - """ - Unwrap by changing deltas between values to 2*pi complement. - - Unwrap radian phase `p` by changing absolute jumps greater than - `discont` to their 2*pi complement along the given axis. - - Parameters - ---------- - p : array_like - Input array. - discont : float, optional - Maximum discontinuity between values, default is ``pi``. - axis : int, optional - Axis along which unwrap will operate, default is the last axis. - - Returns - ------- - out : ndarray - Output array. - - See Also - -------- - rad2deg, deg2rad - - Notes - ----- - If the discontinuity in `p` is smaller than ``pi``, but larger than - `discont`, no unwrapping is done because taking the 2*pi complement - would only make the discontinuity larger. - - Examples - -------- - >>> phase = np.linspace(0, np.pi, num=5) - >>> phase[3:] += np.pi - >>> phase - array([ 0. , 0.78539816, 1.57079633, 5.49778714, 6.28318531]) - >>> np.unwrap(phase) - array([ 0. , 0.78539816, 1.57079633, -0.78539816, 0. 
])
- 
-     """
-     p = asarray(p)
-     nd = len(p.shape)
-     dd = diff(p, axis=axis)
-     slice1 = [slice(None, None)]*nd     # full slices
-     slice1[axis] = slice(1, None)
-     ddmod = mod(dd+pi, 2*pi)-pi
-     _nx.putmask(ddmod, (ddmod==-pi) & (dd > 0), pi)
-     ph_correct = ddmod - dd
-     _nx.putmask(ph_correct, abs(dd)<discont, 0)
-     up = array(p, copy=True, dtype='d')
-     up[slice1] = p[slice1] + ph_correct.cumsum(axis)
-     return up
- 
- def sort_complex(a):
-     """
-     Sort a complex array using the real part first, then the imaginary part.
- 
-     Parameters
-     ----------
-     a : array_like
-         Input array
- 
-     Returns
-     -------
-     out : complex ndarray
-         Always returns a sorted complex array.
- 
-     Examples
-     --------
-     >>> np.sort_complex([5, 3, 6, 2, 1])
-     array([ 1.+0.j,  2.+0.j,  3.+0.j,  5.+0.j,  6.+0.j])
- 
-     >>> np.sort_complex([1 + 2j, 2 - 1j, 3 - 2j, 3 - 3j, 3 + 5j])
-     array([ 1.+2.j,  2.-1.j,  3.-3.j,  3.-2.j,  3.+5.j])
- 
-     """
-     b = array(a, copy=True)
-     b.sort()
-     if not issubclass(b.dtype.type, _nx.complexfloating):
-         if b.dtype.char in 'bhBH':
-             return b.astype('F')
-         elif b.dtype.char == 'g':
-             return b.astype('G')
-         else:
-             return b.astype('D')
-     else:
-         return b
- 
- def trim_zeros(filt, trim='fb'):
-     """
-     Trim the leading and/or trailing zeros from a 1-D array or sequence.
- 
-     Parameters
-     ----------
-     filt : 1-D array or sequence
-         Input array.
-     trim : str, optional
-         A string with 'f' representing trim from front and 'b' to trim from
-         back. Default is 'fb', trim zeros from both front and back of the
-         array.
- 
-     Returns
-     -------
-     trimmed : 1-D array or sequence
-         The result of trimming the input. The input data type is preserved.
- 
-     Examples
-     --------
-     >>> a = np.array((0, 0, 0, 1, 2, 3, 0, 2, 1, 0))
-     >>> np.trim_zeros(a)
-     array([1, 2, 3, 0, 2, 1])
- 
-     >>> np.trim_zeros(a, 'b')
-     array([0, 0, 0, 1, 2, 3, 0, 2, 1])
- 
-     The input data type is preserved, list/tuple in means list/tuple out.
- 
-     >>> np.trim_zeros([0, 1, 2, 0])
-     [1, 2]
- 
-     """
-     first = 0
-     trim = trim.upper()
-     if 'F' in trim:
-         for i in filt:
-             if i != 0.: break
-             else: first = first + 1
-     last = len(filt)
-     if 'B' in trim:
-         for i in filt[::-1]:
-             if i != 0.: break
-             else: last = last - 1
-     return filt[first:last]
- 
- import sys
- if sys.hexversion < 0x2040000:
-     from sets import Set as set
- 
- @deprecate
- def unique(x):
-     """
-     This function is deprecated.  Use numpy.lib.arraysetops.unique()
-     instead.
- """ - try: - tmp = x.flatten() - if tmp.size == 0: - return tmp - tmp.sort() - idx = concatenate(([True],tmp[1:]!=tmp[:-1])) - return tmp[idx] - except AttributeError: - items = list(set(x)) - items.sort() - return asarray(items) - -def extract(condition, arr): - """ - Return the elements of an array that satisfy some condition. - - This is equivalent to ``np.compress(ravel(condition), ravel(arr))``. If - `condition` is boolean ``np.extract`` is equivalent to ``arr[condition]``. - - Parameters - ---------- - condition : array_like - An array whose nonzero or True entries indicate the elements of `arr` - to extract. - arr : array_like - Input array of the same size as `condition`. - - See Also - -------- - take, put, putmask, compress - - Examples - -------- - >>> arr = np.arange(12).reshape((3, 4)) - >>> arr - array([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]]) - >>> condition = np.mod(arr, 3)==0 - >>> condition - array([[ True, False, False, True], - [False, False, True, False], - [False, True, False, False]], dtype=bool) - >>> np.extract(condition, arr) - array([0, 3, 6, 9]) - - - If `condition` is boolean: - - >>> arr[condition] - array([0, 3, 6, 9]) - - """ - return _nx.take(ravel(arr), nonzero(ravel(condition))[0]) - -def place(arr, mask, vals): - """ - Change elements of an array based on conditional and input values. - - Similar to ``np.putmask(arr, mask, vals)``, the difference is that `place` - uses the first N elements of `vals`, where N is the number of True values - in `mask`, while `putmask` uses the elements where `mask` is True. - - Note that `extract` does the exact opposite of `place`. - - Parameters - ---------- - arr : array_like - Array to put data into. - mask : array_like - Boolean mask array. Must have the same size as `a`. - vals : 1-D sequence - Values to put into `a`. Only the first N elements are used, where - N is the number of True values in `mask`. If `vals` is smaller - than N it will be repeated. 
- - See Also - -------- - putmask, put, take, extract - - Examples - -------- - >>> arr = np.arange(6).reshape(2, 3) - >>> np.place(arr, arr>2, [44, 55]) - >>> arr - array([[ 0, 1, 2], - [44, 55, 44]]) - - """ - return _insert(arr, mask, vals) - -def _nanop(op, fill, a, axis=None): - """ - General operation on arrays with not-a-number values. - - Parameters - ---------- - op : callable - Operation to perform. - fill : float - NaN values are set to fill before doing the operation. - a : array-like - Input array. - axis : {int, None}, optional - Axis along which the operation is computed. - By default the input is flattened. - - Returns - ------- - y : {ndarray, scalar} - Processed data. - - """ - y = array(a, subok=True) - - # We only need to take care of NaN's in floating point arrays - if np.issubdtype(y.dtype, np.integer): - return op(y, axis=axis) - mask = isnan(a) - # y[mask] = fill - # We can't use fancy indexing here as it'll mess w/ MaskedArrays - # Instead, let's fill the array directly... - np.putmask(y, mask, fill) - res = op(y, axis=axis) - mask_all_along_axis = mask.all(axis=axis) - - # Along some axes, only nan's were encountered. As such, any values - # calculated along that axis should be set to nan. - if mask_all_along_axis.any(): - if np.isscalar(res): - res = np.nan - else: - res[mask_all_along_axis] = np.nan - - return res - -def nansum(a, axis=None): - """ - Return the sum of array elements over a given axis treating - Not a Numbers (NaNs) as zero. - - Parameters - ---------- - a : array_like - Array containing numbers whose sum is desired. If `a` is not an - array, a conversion is attempted. - axis : int, optional - Axis along which the sum is computed. The default is to compute - the sum of the flattened array. - - Returns - ------- - y : ndarray - An array with the same shape as a, with the specified axis removed. - If a is a 0-d array, or if axis is None, a scalar is returned with - the same dtype as `a`. 
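The `_nanop` helper above follows a fill-then-reduce strategy: replace NaNs with a neutral fill value, apply the reduction, then restore NaN wherever every input along the axis was NaN. For the 1-D sum case that amounts to the following stdlib-only sketch (`nansum_sketch` is an illustrative name):

```python
import math

def nansum_sketch(values):
    """Sum a 1-D sequence treating NaN as 0; all-NaN input yields NaN."""
    mask = [math.isnan(v) for v in values]
    if all(mask):
        # only NaNs were encountered, so the result is NaN
        return math.nan
    # fill NaN positions with the neutral element for addition (0.0)
    return sum(0.0 if m else v for v, m in zip(values, mask))

print(nansum_sketch([1, math.nan]))  # -> 1.0
```

The same pattern covers `nanmin`/`nanmax` by swapping the fill value for `+inf`/`-inf` and the reduction for `min`/`max`.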
- - See Also - -------- - numpy.sum : Sum across array including Not a Numbers. - isnan : Shows which elements are Not a Number (NaN). - isfinite: Shows which elements are not: Not a Number, positive and - negative infinity - - Notes - ----- - Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic - (IEEE 754). This means that Not a Number is not equivalent to infinity. - If positive or negative infinity are present the result is positive or - negative infinity. But if both positive and negative infinity are present, - the result is Not A Number (NaN). - - Arithmetic is modular when using integer types (all elements of `a` must - be finite i.e. no elements that are NaNs, positive infinity and negative - infinity because NaNs are floating point types), and no error is raised - on overflow. - - - Examples - -------- - >>> np.nansum(1) - 1 - >>> np.nansum([1]) - 1 - >>> np.nansum([1, np.nan]) - 1.0 - >>> a = np.array([[1, 1], [1, np.nan]]) - >>> np.nansum(a) - 3.0 - >>> np.nansum(a, axis=0) - array([ 2., 1.]) - - When positive infinity and negative infinity are present - - >>> np.nansum([1, np.nan, np.inf]) - inf - >>> np.nansum([1, np.nan, np.NINF]) - -inf - >>> np.nansum([1, np.nan, np.inf, np.NINF]) - nan - - """ - return _nanop(np.sum, 0, a, axis) - -def nanmin(a, axis=None): - """ - Return the minimum of an array or minimum along an axis ignoring any NaNs. - - Parameters - ---------- - a : array_like - Array containing numbers whose minimum is desired. - axis : int, optional - Axis along which the minimum is computed.The default is to compute - the minimum of the flattened array. - - Returns - ------- - nanmin : ndarray - A new array or a scalar array with the result. - - See Also - -------- - numpy.amin : Minimum across array including any Not a Numbers. - numpy.nanmax : Maximum across array ignoring any Not a Numbers. - isnan : Shows which elements are Not a Number (NaN). 
- isfinite: Shows which elements are not: Not a Number, positive and - negative infinity - - Notes - ----- - Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic - (IEEE 754). This means that Not a Number is not equivalent to infinity. - Positive infinity is treated as a very large number and negative infinity - is treated as a very small (i.e. negative) number. - - If the input has a integer type, an integer type is returned unless - the input contains NaNs and infinity. - - - Examples - -------- - >>> a = np.array([[1, 2], [3, np.nan]]) - >>> np.nanmin(a) - 1.0 - >>> np.nanmin(a, axis=0) - array([ 1., 2.]) - >>> np.nanmin(a, axis=1) - array([ 1., 3.]) - - When positive infinity and negative infinity are present: - - >>> np.nanmin([1, 2, np.nan, np.inf]) - 1.0 - >>> np.nanmin([1, 2, np.nan, np.NINF]) - -inf - - """ - return _nanop(np.min, np.inf, a, axis) - -def nanargmin(a, axis=None): - """ - Return indices of the minimum values over an axis, ignoring NaNs. - - Parameters - ---------- - a : array_like - Input data. - axis : int, optional - Axis along which to operate. By default flattened input is used. - - Returns - ------- - index_array : ndarray - An array of indices or a single index value. - - See Also - -------- - argmin, nanargmax - - Examples - -------- - >>> a = np.array([[np.nan, 4], [2, 3]]) - >>> np.argmin(a) - 0 - >>> np.nanargmin(a) - 2 - >>> np.nanargmin(a, axis=0) - array([1, 1]) - >>> np.nanargmin(a, axis=1) - array([1, 0]) - - """ - return _nanop(np.argmin, np.inf, a, axis) - -def nanmax(a, axis=None): - """ - Return the maximum of an array or maximum along an axis ignoring any NaNs. - - Parameters - ---------- - a : array_like - Array containing numbers whose maximum is desired. If `a` is not - an array, a conversion is attempted. - axis : int, optional - Axis along which the maximum is computed. The default is to compute - the maximum of the flattened array. 
- 
-     Returns
-     -------
-     nanmax : ndarray
-         An array with the same shape as `a`, with the specified axis removed.
-         If `a` is a 0-d array, or if axis is None, an ndarray scalar is
-         returned with the same dtype as `a`.
- 
-     See Also
-     --------
-     numpy.amax : Maximum across array including any Not a Numbers.
-     numpy.nanmin : Minimum across array ignoring any Not a Numbers.
-     isnan : Shows which elements are Not a Number (NaN).
-     isfinite: Shows which elements are not: Not a Number, positive and
-         negative infinity
- 
-     Notes
-     -----
-     Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic
-     (IEEE 754). This means that Not a Number is not equivalent to infinity.
-     Positive infinity is treated as a very large number and negative infinity
-     is treated as a very small (i.e. negative) number.
- 
-     If the input has an integer type, an integer type is returned unless
-     the input contains NaNs and infinity.
- 
-     Examples
-     --------
-     >>> a = np.array([[1, 2], [3, np.nan]])
-     >>> np.nanmax(a)
-     3.0
-     >>> np.nanmax(a, axis=0)
-     array([ 3.,  2.])
-     >>> np.nanmax(a, axis=1)
-     array([ 2.,  3.])
- 
-     When positive infinity and negative infinity are present:
- 
-     >>> np.nanmax([1, 2, np.nan, np.NINF])
-     2.0
-     >>> np.nanmax([1, 2, np.nan, np.inf])
-     inf
- 
-     """
-     return _nanop(np.max, -np.inf, a, axis)
- 
- def nanargmax(a, axis=None):
-     """
-     Return indices of the maximum values over an axis, ignoring NaNs.
- 
-     Parameters
-     ----------
-     a : array_like
-         Input data.
-     axis : int, optional
-         Axis along which to operate.  By default flattened input is used.
- 
-     Returns
-     -------
-     index_array : ndarray
-         An array of indices or a single index value.
- 
-     See Also
-     --------
-     argmax, nanargmin
- 
-     Examples
-     --------
-     >>> a = np.array([[np.nan, 4], [2, 3]])
-     >>> np.argmax(a)
-     0
-     >>> np.nanargmax(a)
-     1
-     >>> np.nanargmax(a, axis=0)
-     array([1, 0])
-     >>> np.nanargmax(a, axis=1)
-     array([1, 1])
- 
-     """
-     return _nanop(np.argmax, -np.inf, a, axis)
- 
- def disp(mesg, device=None, linefeed=True):
-     """
-     Display a message on a device.
- 
-     Parameters
-     ----------
-     mesg : str
-         Message to display.
-     device : object
-         Device to write message. If None, defaults to ``sys.stdout`` which is
-         very similar to ``print``. `device` needs to have ``write()`` and
-         ``flush()`` methods.
-     linefeed : bool, optional
-         Option whether to print a line feed or not. Defaults to True.
- 
-     Raises
-     ------
-     AttributeError
-         If `device` does not have a ``write()`` or ``flush()`` method.
- 
-     Examples
-     --------
-     Besides ``sys.stdout``, a file-like object can also be used as it has
-     both required methods:
- 
-     >>> from StringIO import StringIO
-     >>> buf = StringIO()
-     >>> np.disp('"Display" in a file', device=buf)
-     >>> buf.getvalue()
-     '"Display" in a file\\n'
- 
-     """
-     if device is None:
-         import sys
-         device = sys.stdout
-     if linefeed:
-         device.write('%s\n' % mesg)
-     else:
-         device.write('%s' % mesg)
-     device.flush()
-     return
- 
- # return number of input arguments and
- # number of default arguments
- 
- def _get_nargs(obj):
-     import re
- 
-     terr = re.compile(r'.*? takes (exactly|at least) (?P<exargs>(\d+)|(\w+))' +
-                       r' argument(s|) \((?P<gargs>(\d+)|(\w+)) given\)')
-     def _convert_to_int(strval):
-         try:
-             result = int(strval)
-         except ValueError:
-             if strval=='zero':
-                 result = 0
-             elif strval=='one':
-                 result = 1
-             elif strval=='two':
-                 result = 2
-             # How high to go? English only?
-             else:
-                 raise
-         return result
- 
-     if not callable(obj):
-         raise TypeError(
-             "Object is not callable.")
-     if sys.version_info[0] >= 3:
-         # inspect currently fails for binary extensions
-         # like math.cos. So fall back to other methods if
-         # it fails.
- import inspect - try: - spec = inspect.getargspec(obj) - nargs = len(spec.args) - if spec.defaults: - ndefaults = len(spec.defaults) - else: - ndefaults = 0 - if inspect.ismethod(obj): - nargs -= 1 - return nargs, ndefaults - except: - pass - - if hasattr(obj,'func_code'): - fcode = obj.func_code - nargs = fcode.co_argcount - if obj.func_defaults is not None: - ndefaults = len(obj.func_defaults) - else: - ndefaults = 0 - if isinstance(obj, types.MethodType): - nargs -= 1 - return nargs, ndefaults - - try: - obj() - return 0, 0 - except TypeError, msg: - m = terr.match(str(msg)) - if m: - nargs = _convert_to_int(m.group('exargs')) - ndefaults = _convert_to_int(m.group('gargs')) - if isinstance(obj, types.MethodType): - nargs -= 1 - return nargs, ndefaults - - raise ValueError( - "failed to determine the number of arguments for %s" % (obj)) - - -class vectorize(object): - """ - vectorize(pyfunc, otypes='', doc=None) - - Generalized function class. - - Define a vectorized function which takes a nested sequence - of objects or numpy arrays as inputs and returns a - numpy array as output. The vectorized function evaluates `pyfunc` over - successive tuples of the input arrays like the python map function, - except it uses the broadcasting rules of numpy. - - The data type of the output of `vectorized` is determined by calling - the function with the first element of the input. This can be avoided - by specifying the `otypes` argument. - - Parameters - ---------- - pyfunc : callable - A python function or method. - otypes : str or list of dtypes, optional - The output data type. It must be specified as either a string of - typecode characters or a list of data type specifiers. There should - be one data type specifier for each output. - doc : str, optional - The docstring for the function. If None, the docstring will be the - `pyfunc` one. - - Examples - -------- - >>> def myfunc(a, b): - ... \"\"\"Return a-b if a>b, otherwise return a+b\"\"\" - ... if a > b: - ... 
return a - b
-     ...     else:
-     ...         return a + b
- 
-     >>> vfunc = np.vectorize(myfunc)
-     >>> vfunc([1, 2, 3, 4], 2)
-     array([3, 4, 1, 2])
- 
-     The docstring is taken from the input function to `vectorize` unless it
-     is specified
- 
-     >>> vfunc.__doc__
-     'Return a-b if a>b, otherwise return a+b'
-     >>> vfunc = np.vectorize(myfunc, doc='Vectorized `myfunc`')
-     >>> vfunc.__doc__
-     'Vectorized `myfunc`'
- 
-     The output type is determined by evaluating the first element of the input,
-     unless it is specified
- 
-     >>> out = vfunc([1, 2, 3, 4], 2)
-     >>> type(out[0])
-     <type 'numpy.int32'>
-     >>> vfunc = np.vectorize(myfunc, otypes=[np.float])
-     >>> out = vfunc([1, 2, 3, 4], 2)
-     >>> type(out[0])
-     <type 'numpy.float64'>
- 
-     """
-     def __init__(self, pyfunc, otypes='', doc=None):
-         self.thefunc = pyfunc
-         self.ufunc = None
-         nin, ndefault = _get_nargs(pyfunc)
-         if nin == 0 and ndefault == 0:
-             self.nin = None
-             self.nin_wo_defaults = None
-         else:
-             self.nin = nin
-             self.nin_wo_defaults = nin - ndefault
-         self.nout = None
-         if doc is None:
-             self.__doc__ = pyfunc.__doc__
-         else:
-             self.__doc__ = doc
-         if isinstance(otypes, str):
-             self.otypes = otypes
-             for char in self.otypes:
-                 if char not in typecodes['All']:
-                     raise ValueError(
-                         "invalid otype specified")
-         elif iterable(otypes):
-             self.otypes = ''.join([_nx.dtype(x).char for x in otypes])
-         else:
-             raise ValueError(
-                 "Invalid otype specification")
-         self.lastcallargs = 0
- 
-     def __call__(self, *args):
-         # get number of outputs and output types by calling
-         # the function on the first entries of args
-         nargs = len(args)
-         if self.nin:
-             if (nargs > self.nin) or (nargs < self.nin_wo_defaults):
-                 raise ValueError(
-                     "Invalid number of arguments")
- 
-         # we need a new ufunc if this is being called with more arguments.
- if (self.lastcallargs != nargs): - self.lastcallargs = nargs - self.ufunc = None - self.nout = None - - if self.nout is None or self.otypes == '': - newargs = [] - for arg in args: - newargs.append(asarray(arg).flat[0]) - theout = self.thefunc(*newargs) - if isinstance(theout, tuple): - self.nout = len(theout) - else: - self.nout = 1 - theout = (theout,) - if self.otypes == '': - otypes = [] - for k in range(self.nout): - otypes.append(asarray(theout[k]).dtype.char) - self.otypes = ''.join(otypes) - - # Create ufunc if not already created - if (self.ufunc is None): - self.ufunc = frompyfunc(self.thefunc, nargs, self.nout) - - # Convert to object arrays first - newargs = [array(arg,copy=False,subok=True,dtype=object) for arg in args] - if self.nout == 1: - _res = array(self.ufunc(*newargs),copy=False, - subok=True,dtype=self.otypes[0]) - else: - _res = tuple([array(x,copy=False,subok=True,dtype=c) \ - for x, c in zip(self.ufunc(*newargs), self.otypes)]) - return _res - -def cov(m, y=None, rowvar=1, bias=0, ddof=None): - """ - Estimate a covariance matrix, given data. - - Covariance indicates the level to which two variables vary together. - If we examine N-dimensional samples, :math:`X = [x_1, x_2, ... x_N]^T`, - then the covariance matrix element :math:`C_{ij}` is the covariance of - :math:`x_i` and :math:`x_j`. The element :math:`C_{ii}` is the variance - of :math:`x_i`. - - Parameters - ---------- - m : array_like - A 1-D or 2-D array containing multiple variables and observations. - Each row of `m` represents a variable, and each column a single - observation of all those variables. Also see `rowvar` below. - y : array_like, optional - An additional set of variables and observations. `y` has the same - form as that of `m`. - rowvar : int, optional - If `rowvar` is non-zero (default), then each row represents a - variable, with observations in the columns. 
Otherwise, the relationship - is transposed: each column represents a variable, while the rows - contain observations. - bias : int, optional - Default normalization is by ``(N - 1)``, where ``N`` is the number of - observations given (unbiased estimate). If `bias` is 1, then - normalization is by ``N``. These values can be overridden by using - the keyword ``ddof`` in numpy versions >= 1.5. - ddof : int, optional - .. versionadded:: 1.5 - If not ``None`` normalization is by ``(N - ddof)``, where ``N`` is - the number of observations; this overrides the value implied by - ``bias``. The default value is ``None``. - - Returns - ------- - out : ndarray - The covariance matrix of the variables. - - See Also - -------- - corrcoef : Normalized covariance matrix - - Examples - -------- - Consider two variables, :math:`x_0` and :math:`x_1`, which - correlate perfectly, but in opposite directions: - - >>> x = np.array([[0, 2], [1, 1], [2, 0]]).T - >>> x - array([[0, 1, 2], - [2, 1, 0]]) - - Note how :math:`x_0` increases while :math:`x_1` decreases. The covariance - matrix shows this clearly: - - >>> np.cov(x) - array([[ 1., -1.], - [-1., 1.]]) - - Note that element :math:`C_{0,1}`, which shows the correlation between - :math:`x_0` and :math:`x_1`, is negative. 
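For two 1-D variables, the normalization `cov` applies (subtract the means, accumulate cross products, divide by ``N - ddof``) reduces to a short stdlib-only sketch (`cov_2var` is an illustrative name):

```python
def cov_2var(x, y, ddof=1):
    """Covariance of two equal-length 1-D samples, normalized by (N - ddof)."""
    n = len(x)
    mx = sum(x) / float(n)                 # mean of each variable
    my = sum(y) / float(n)
    # sum of products of deviations, divided by N - ddof (default: unbiased)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - ddof)

print(round(cov_2var([-2.1, -1, 4.3], [3, 1.1, 0.12]), 3))  # -> -4.286
```

`cov_2var(x, x)` gives the variance of `x`, i.e. the diagonal entries of the full covariance matrix; the printed value matches the off-diagonal entry in the `np.cov(x, y)` example below.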
- - Further, note how `x` and `y` are combined: - - >>> x = [-2.1, -1, 4.3] - >>> y = [3, 1.1, 0.12] - >>> X = np.vstack((x,y)) - >>> print np.cov(X) - [[ 11.71 -4.286 ] - [ -4.286 2.14413333]] - >>> print np.cov(x, y) - [[ 11.71 -4.286 ] - [ -4.286 2.14413333]] - >>> print np.cov(x) - 11.71 - - """ - # Check inputs - if ddof is not None and ddof != int(ddof): - raise ValueError("ddof must be integer") - - X = array(m, ndmin=2, dtype=float) - if X.shape[0] == 1: - rowvar = 1 - if rowvar: - axis = 0 - tup = (slice(None),newaxis) - else: - axis = 1 - tup = (newaxis, slice(None)) - - - if y is not None: - y = array(y, copy=False, ndmin=2, dtype=float) - X = concatenate((X,y), axis) - - X -= X.mean(axis=1-axis)[tup] - if rowvar: - N = X.shape[1] - else: - N = X.shape[0] - - if ddof is None: - if bias == 0: - ddof = 1 - else: - ddof = 0 - fact = float(N - ddof) - - if not rowvar: - return (dot(X.T, X.conj()) / fact).squeeze() - else: - return (dot(X, X.T.conj()) / fact).squeeze() - - -def corrcoef(x, y=None, rowvar=1, bias=0, ddof=None): - """ - Return correlation coefficients. - - Please refer to the documentation for `cov` for more detail. The - relationship between the correlation coefficient matrix, `P`, and the - covariance matrix, `C`, is - - .. math:: P_{ij} = \\frac{ C_{ij} } { \\sqrt{ C_{ii} * C_{jj} } } - - The values of `P` are between -1 and 1, inclusive. - - Parameters - ---------- - m : array_like - A 1-D or 2-D array containing multiple variables and observations. - Each row of `m` represents a variable, and each column a single - observation of all those variables. Also see `rowvar` below. - y : array_like, optional - An additional set of variables and observations. `y` has the same - shape as `m`. - rowvar : int, optional - If `rowvar` is non-zero (default), then each row represents a - variable, with observations in the columns. Otherwise, the relationship - is transposed: each column represents a variable, while the rows - contain observations. 
- bias : int, optional - Default normalization is by ``(N - 1)``, where ``N`` is the number of - observations (unbiased estimate). If `bias` is 1, then - normalization is by ``N``. These values can be overridden by using - the keyword ``ddof`` in numpy versions >= 1.5. - ddof : {None, int}, optional - .. versionadded:: 1.5 - If not ``None`` normalization is by ``(N - ddof)``, where ``N`` is - the number of observations; this overrides the value implied by - ``bias``. The default value is ``None``. - - Returns - ------- - out : ndarray - The correlation coefficient matrix of the variables. - - See Also - -------- - cov : Covariance matrix - - """ - c = cov(x, y, rowvar, bias, ddof) - try: - d = diag(c) - except ValueError: # scalar covariance - return 1 - return c/sqrt(multiply.outer(d,d)) - -def blackman(M): - """ - Return the Blackman window. - - The Blackman window is a taper formed by using the the first three - terms of a summation of cosines. It was designed to have close to the - minimal leakage possible. It is close to optimal, only slightly worse - than a Kaiser window. - - Parameters - ---------- - M : int - Number of points in the output window. If zero or less, an empty - array is returned. - - Returns - ------- - out : ndarray - The window, normalized to one (the value one appears only if the - number of samples is odd). - - See Also - -------- - bartlett, hamming, hanning, kaiser - - Notes - ----- - The Blackman window is defined as - - .. math:: w(n) = 0.42 - 0.5 \\cos(2\\pi n/M) + 0.08 \\cos(4\\pi n/M) - - Most references to the Blackman window come from the signal processing - literature, where it is used as one of many windowing functions for - smoothing values. It is also known as an apodization (which means - "removing the foot", i.e. smoothing discontinuities at the beginning - and end of the sampled signal) or tapering function. It is known as a - "near optimal" tapering function, almost as good (by some measures) - as the kaiser window. 
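The closed form used by `blackman` can be reproduced with the standard library alone. This sketch (illustrative name `blackman_sketch`) mirrors the implementation, including its `M-1` denominator:

```python
import math

def blackman_sketch(M):
    """w(n) = 0.42 - 0.5*cos(2*pi*n/(M-1)) + 0.08*cos(4*pi*n/(M-1))."""
    if M < 1:
        return []
    if M == 1:
        return [1.0]
    return [0.42 - 0.5 * math.cos(2.0 * math.pi * n / (M - 1))
            + 0.08 * math.cos(4.0 * math.pi * n / (M - 1))
            for n in range(M)]

w = blackman_sketch(12)
```

The endpoints come out at (numerically) zero because the three cosine terms cancel there (0.42 - 0.5 + 0.08), and the interior samples match the `blackman(12)` values listed in the docstring example.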
- - References - ---------- - Blackman, R.B. and Tukey, J.W., (1958) The measurement of power spectra, - Dover Publications, New York. - - Oppenheim, A.V., and R.W. Schafer. Discrete-Time Signal Processing. - Upper Saddle River, NJ: Prentice-Hall, 1999, pp. 468-471. - - Examples - -------- - >>> from numpy import blackman - >>> blackman(12) - array([ -1.38777878e-17, 3.26064346e-02, 1.59903635e-01, - 4.14397981e-01, 7.36045180e-01, 9.67046769e-01, - 9.67046769e-01, 7.36045180e-01, 4.14397981e-01, - 1.59903635e-01, 3.26064346e-02, -1.38777878e-17]) - - - Plot the window and the frequency response: - - >>> from numpy import clip, log10, array, blackman, linspace - >>> from numpy.fft import fft, fftshift - >>> import matplotlib.pyplot as plt - - >>> window = blackman(51) - >>> plt.plot(window) - [] - >>> plt.title("Blackman window") - - >>> plt.ylabel("Amplitude") - - >>> plt.xlabel("Sample") - - >>> plt.show() - - >>> plt.figure() - - >>> A = fft(window, 2048) / 25.5 - >>> mag = abs(fftshift(A)) - >>> freq = linspace(-0.5,0.5,len(A)) - >>> response = 20*log10(mag) - >>> response = clip(response,-100,100) - >>> plt.plot(freq, response) - [] - >>> plt.title("Frequency response of Blackman window") - - >>> plt.ylabel("Magnitude [dB]") - - >>> plt.xlabel("Normalized frequency [cycles per sample]") - - >>> plt.axis('tight') - (-0.5, 0.5, -100.0, ...) - >>> plt.show() - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1, float) - n = arange(0,M) - return 0.42-0.5*cos(2.0*pi*n/(M-1)) + 0.08*cos(4.0*pi*n/(M-1)) - -def bartlett(M): - """ - Return the Bartlett window. - - The Bartlett window is very similar to a triangular window, except - that the end points are at zero. It is often used in signal - processing for tapering a signal, without generating too much - ripple in the frequency domain. - - Parameters - ---------- - M : int - Number of points in the output window. If zero or less, an - empty array is returned. 
- - Returns - ------- - out : array - The triangular window, normalized to one (the value one - appears only if the number of samples is odd), with the first - and last samples equal to zero. - - See Also - -------- - blackman, hamming, hanning, kaiser - - Notes - ----- - The Bartlett window is defined as - - .. math:: w(n) = \\frac{2}{M-1} \\left( - \\frac{M-1}{2} - \\left|n - \\frac{M-1}{2}\\right| - \\right) - - Most references to the Bartlett window come from the signal - processing literature, where it is used as one of many windowing - functions for smoothing values. Note that convolution with this - window produces linear interpolation. It is also known as an - apodization (which means "removing the foot", i.e. smoothing - discontinuities at the beginning and end of the sampled signal) or - tapering function. The Fourier transform of the Bartlett window is the product - of two sinc functions. - Note the excellent discussion in Kanasewich. - - References - ---------- - .. [1] M.S. Bartlett, "Periodogram Analysis and Continuous Spectra", - Biometrika 37, 1-16, 1950. - .. [2] E.R. Kanasewich, "Time Sequence Analysis in Geophysics", - The University of Alberta Press, 1975, pp. 109-110. - .. [3] A.V. Oppenheim and R.W. Schafer, "Discrete-Time Signal - Processing", Prentice-Hall, 1999, pp. 468-471. - .. [4] Wikipedia, "Window function", - http://en.wikipedia.org/wiki/Window_function - .. [5] W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling, - "Numerical Recipes", Cambridge University Press, 1986, page 429. - - - Examples - -------- - >>> np.bartlett(12) - array([ 0. , 0.18181818, 0.36363636, 0.54545455, 0.72727273, - 0.90909091, 0.90909091, 0.72727273, 0.54545455, 0.36363636, - 0.18181818, 0. 
]) - - Plot the window and its frequency response (requires matplotlib): - - >>> from numpy import clip, log10, array, bartlett, linspace - >>> from numpy.fft import fft, fftshift - >>> import matplotlib.pyplot as plt - - >>> window = bartlett(51) - >>> plt.plot(window) - [] - >>> plt.title("Bartlett window") - - >>> plt.ylabel("Amplitude") - - >>> plt.xlabel("Sample") - - >>> plt.show() - - >>> plt.figure() - - >>> A = fft(window, 2048) / 25.5 - >>> mag = abs(fftshift(A)) - >>> freq = linspace(-0.5,0.5,len(A)) - >>> response = 20*log10(mag) - >>> response = clip(response,-100,100) - >>> plt.plot(freq, response) - [] - >>> plt.title("Frequency response of Bartlett window") - - >>> plt.ylabel("Magnitude [dB]") - - >>> plt.xlabel("Normalized frequency [cycles per sample]") - - >>> plt.axis('tight') - (-0.5, 0.5, -100.0, ...) - >>> plt.show() - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1, float) - n = arange(0,M) - return where(less_equal(n,(M-1)/2.0),2.0*n/(M-1),2.0-2.0*n/(M-1)) - -def hanning(M): - """ - Return the Hanning window. - - The Hanning window is a taper formed by using a weighted cosine. - - Parameters - ---------- - M : int - Number of points in the output window. If zero or less, an - empty array is returned. - - Returns - ------- - out : ndarray, shape(M,) - The window, normalized to one (the value one - appears only if `M` is odd). - - See Also - -------- - bartlett, blackman, hamming, kaiser - - Notes - ----- - The Hanning window is defined as - - .. math:: w(n) = 0.5 - 0.5cos\\left(\\frac{2\\pi{n}}{M-1}\\right) - \\qquad 0 \\leq n \\leq M-1 - - The Hanning was named for Julius von Hann, an Austrian meteorologist. It is - also known as the Cosine Bell. Some authors prefer that it be called a - Hann window, to help avoid confusion with the very similar Hamming window. 
- - Most references to the Hanning window come from the signal processing - literature, where it is used as one of many windowing functions for - smoothing values. It is also known as an apodization (which means - "removing the foot", i.e. smoothing discontinuities at the beginning - and end of the sampled signal) or tapering function. - - References - ---------- - .. [1] Blackman, R.B. and Tukey, J.W., (1958) The measurement of power - spectra, Dover Publications, New York. - .. [2] E.R. Kanasewich, "Time Sequence Analysis in Geophysics", - The University of Alberta Press, 1975, pp. 106-108. - .. [3] Wikipedia, "Window function", - http://en.wikipedia.org/wiki/Window_function - .. [4] W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling, - "Numerical Recipes", Cambridge University Press, 1986, page 425. - - Examples - -------- - >>> from numpy import hanning - >>> hanning(12) - array([ 0. , 0.07937323, 0.29229249, 0.57115742, 0.82743037, - 0.97974649, 0.97974649, 0.82743037, 0.57115742, 0.29229249, - 0.07937323, 0. ]) - - Plot the window and its frequency response: - - >>> from numpy.fft import fft, fftshift - >>> import matplotlib.pyplot as plt - - >>> window = np.hanning(51) - >>> plt.plot(window) - [] - >>> plt.title("Hann window") - - >>> plt.ylabel("Amplitude") - - >>> plt.xlabel("Sample") - - >>> plt.show() - - >>> plt.figure() - - >>> A = fft(window, 2048) / 25.5 - >>> mag = abs(fftshift(A)) - >>> freq = np.linspace(-0.5,0.5,len(A)) - >>> response = 20*np.log10(mag) - >>> response = np.clip(response,-100,100) - >>> plt.plot(freq, response) - [] - >>> plt.title("Frequency response of the Hann window") - - >>> plt.ylabel("Magnitude [dB]") - - >>> plt.xlabel("Normalized frequency [cycles per sample]") - - >>> plt.axis('tight') - (-0.5, 0.5, -100.0, ...) - >>> plt.show() - - """ - # XXX: this docstring is inconsistent with other filter windows, e.g. - # Blackman and Bartlett - they should all follow the same convention for - # clarity. Either use np. 
for all numpy members (as above), or import all - # numpy members (as in Blackman and Bartlett examples) - if M < 1: - return array([]) - if M == 1: - return ones(1, float) - n = arange(0,M) - return 0.5-0.5*cos(2.0*pi*n/(M-1)) - -def hamming(M): - """ - Return the Hamming window. - - The Hamming window is a taper formed by using a weighted cosine. - - Parameters - ---------- - M : int - Number of points in the output window. If zero or less, an - empty array is returned. - - Returns - ------- - out : ndarray - The window, normalized to one (the value one - appears only if the number of samples is odd). - - See Also - -------- - bartlett, blackman, hanning, kaiser - - Notes - ----- - The Hamming window is defined as - - .. math:: w(n) = 0.54 - 0.46cos\\left(\\frac{2\\pi{n}}{M-1}\\right) - \\qquad 0 \\leq n \\leq M-1 - - The Hamming was named for R. W. Hamming, an associate of J. W. Tukey and - is described in Blackman and Tukey. It was recommended for smoothing the - truncated autocovariance function in the time domain. - Most references to the Hamming window come from the signal processing - literature, where it is used as one of many windowing functions for - smoothing values. It is also known as an apodization (which means - "removing the foot", i.e. smoothing discontinuities at the beginning - and end of the sampled signal) or tapering function. - - References - ---------- - .. [1] Blackman, R.B. and Tukey, J.W., (1958) The measurement of power - spectra, Dover Publications, New York. - .. [2] E.R. Kanasewich, "Time Sequence Analysis in Geophysics", The - University of Alberta Press, 1975, pp. 109-110. - .. [3] Wikipedia, "Window function", - http://en.wikipedia.org/wiki/Window_function - .. [4] W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling, - "Numerical Recipes", Cambridge University Press, 1986, page 425. 
- - Examples - -------- - >>> np.hamming(12) - array([ 0.08 , 0.15302337, 0.34890909, 0.60546483, 0.84123594, - 0.98136677, 0.98136677, 0.84123594, 0.60546483, 0.34890909, - 0.15302337, 0.08 ]) - - Plot the window and the frequency response: - - >>> from numpy.fft import fft, fftshift - >>> import matplotlib.pyplot as plt - - >>> window = np.hamming(51) - >>> plt.plot(window) - [] - >>> plt.title("Hamming window") - - >>> plt.ylabel("Amplitude") - - >>> plt.xlabel("Sample") - - >>> plt.show() - - >>> plt.figure() - - >>> A = fft(window, 2048) / 25.5 - >>> mag = np.abs(fftshift(A)) - >>> freq = np.linspace(-0.5, 0.5, len(A)) - >>> response = 20 * np.log10(mag) - >>> response = np.clip(response, -100, 100) - >>> plt.plot(freq, response) - [] - >>> plt.title("Frequency response of Hamming window") - - >>> plt.ylabel("Magnitude [dB]") - - >>> plt.xlabel("Normalized frequency [cycles per sample]") - - >>> plt.axis('tight') - (-0.5, 0.5, -100.0, ...) - >>> plt.show() - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,float) - n = arange(0,M) - return 0.54-0.46*cos(2.0*pi*n/(M-1)) - -## Code from cephes for i0 - -_i0A = [ --4.41534164647933937950E-18, - 3.33079451882223809783E-17, --2.43127984654795469359E-16, - 1.71539128555513303061E-15, --1.16853328779934516808E-14, - 7.67618549860493561688E-14, --4.85644678311192946090E-13, - 2.95505266312963983461E-12, --1.72682629144155570723E-11, - 9.67580903537323691224E-11, --5.18979560163526290666E-10, - 2.65982372468238665035E-9, --1.30002500998624804212E-8, - 6.04699502254191894932E-8, --2.67079385394061173391E-7, - 1.11738753912010371815E-6, --4.41673835845875056359E-6, - 1.64484480707288970893E-5, --5.75419501008210370398E-5, - 1.88502885095841655729E-4, --5.76375574538582365885E-4, - 1.63947561694133579842E-3, --4.32430999505057594430E-3, - 1.05464603945949983183E-2, --2.37374148058994688156E-2, - 4.93052842396707084878E-2, --9.49010970480476444210E-2, - 1.71620901522208775349E-1, 
--3.04682672343198398683E-1, - 6.76795274409476084995E-1] - -_i0B = [ --7.23318048787475395456E-18, --4.83050448594418207126E-18, - 4.46562142029675999901E-17, - 3.46122286769746109310E-17, --2.82762398051658348494E-16, --3.42548561967721913462E-16, - 1.77256013305652638360E-15, - 3.81168066935262242075E-15, --9.55484669882830764870E-15, --4.15056934728722208663E-14, - 1.54008621752140982691E-14, - 3.85277838274214270114E-13, - 7.18012445138366623367E-13, --1.79417853150680611778E-12, --1.32158118404477131188E-11, --3.14991652796324136454E-11, - 1.18891471078464383424E-11, - 4.94060238822496958910E-10, - 3.39623202570838634515E-9, - 2.26666899049817806459E-8, - 2.04891858946906374183E-7, - 2.89137052083475648297E-6, - 6.88975834691682398426E-5, - 3.36911647825569408990E-3, - 8.04490411014108831608E-1] - -def _chbevl(x, vals): - b0 = vals[0] - b1 = 0.0 - - for i in xrange(1,len(vals)): - b2 = b1 - b1 = b0 - b0 = x*b1 - b2 + vals[i] - - return 0.5*(b0 - b2) - -def _i0_1(x): - return exp(x) * _chbevl(x/2.0-2, _i0A) - -def _i0_2(x): - return exp(x) * _chbevl(32.0/x - 2.0, _i0B) / sqrt(x) - -def i0(x): - """ - Modified Bessel function of the first kind, order 0. - - Usually denoted :math:`I_0`. This function does broadcast, but will *not* - "up-cast" int dtype arguments unless accompanied by at least one float or - complex dtype argument (see Raises below). - - Parameters - ---------- - x : array_like, dtype float or complex - Argument of the Bessel function. - - Returns - ------- - out : ndarray, shape = x.shape, dtype = x.dtype - The modified Bessel function evaluated at each of the elements of `x`. - - Raises - ------ - TypeError: array cannot be safely cast to required type - If argument consists exclusively of int dtypes. 
- - See Also - -------- - scipy.special.iv, scipy.special.ive - - Notes - ----- - We use the algorithm published by Clenshaw [1]_ and referenced by - Abramowitz and Stegun [2]_, for which the function domain is partitioned - into the two intervals [0,8] and (8,inf), and Chebyshev polynomial - expansions are employed in each interval. Relative error on the domain - [0,30] using IEEE arithmetic is documented [3]_ as having a peak of 5.8e-16 - with an rms of 1.4e-16 (n = 30000). - - References - ---------- - .. [1] C. W. Clenshaw, "Chebyshev series for mathematical functions," in - *National Physical Laboratory Mathematical Tables*, vol. 5, London: - Her Majesty's Stationery Office, 1962. - .. [2] M. Abramowitz and I. A. Stegun, *Handbook of Mathematical - Functions*, 10th printing, New York: Dover, 1964, pp. 379. - http://www.math.sfu.ca/~cbm/aands/page_379.htm - .. [3] http://kobesearch.cpan.org/htdocs/Math-Cephes/Math/Cephes.html - - Examples - -------- - >>> np.i0([0.]) - array(1.0) - >>> np.i0([0., 1. + 2j]) - array([ 1.00000000+0.j , 0.18785373+0.64616944j]) - - """ - x = atleast_1d(x).copy() - y = empty_like(x) - ind = (x<0) - x[ind] = -x[ind] - ind = (x<=8.0) - y[ind] = _i0_1(x[ind]) - ind2 = ~ind - y[ind2] = _i0_2(x[ind2]) - return y.squeeze() - -## End of cephes code for i0 - -def kaiser(M,beta): - """ - Return the Kaiser window. - - The Kaiser window is a taper formed by using a Bessel function. - - Parameters - ---------- - M : int - Number of points in the output window. If zero or less, an - empty array is returned. - beta : float - Shape parameter for window. - - Returns - ------- - out : array - The window, normalized to one (the value one - appears only if the number of samples is odd). - - See Also - -------- - bartlett, blackman, hamming, hanning - - Notes - ----- - The Kaiser window is defined as - - .. math:: w(n) = I_0\\left( \\beta \\sqrt{1-\\frac{4n^2}{(M-1)^2}} - \\right)/I_0(\\beta) - - with - - .. 
math:: \\quad -\\frac{M-1}{2} \\leq n \\leq \\frac{M-1}{2}, - - where :math:`I_0` is the modified zeroth-order Bessel function. - - The Kaiser was named for Jim Kaiser, who discovered a simple approximation - to the DPSS window based on Bessel functions. - The Kaiser window is a very good approximation to the Digital Prolate - Spheroidal Sequence, or Slepian window, which is the transform which - maximizes the energy in the main lobe of the window relative to total - energy. - - The Kaiser can approximate many other windows by varying the beta - parameter. - - ==== ======================= - beta Window shape - ==== ======================= - 0 Rectangular - 5 Similar to a Hamming - 6 Similar to a Hanning - 8.6 Similar to a Blackman - ==== ======================= - - A beta value of 14 is probably a good starting point. Note that as beta - gets large, the window narrows, and so the number of samples needs to be - large enough to sample the increasingly narrow spike, otherwise nans will - get returned. - - - Most references to the Kaiser window come from the signal processing - literature, where it is used as one of many windowing functions for - smoothing values. It is also known as an apodization (which means - "removing the foot", i.e. smoothing discontinuities at the beginning - and end of the sampled signal) or tapering function. - - References - ---------- - .. [1] J. F. Kaiser, "Digital Filters" - Ch 7 in "Systems analysis by - digital computer", Editors: F.F. Kuo and J.F. Kaiser, p 218-285. - John Wiley and Sons, New York, (1966). - .. [2] E.R. Kanasewich, "Time Sequence Analysis in Geophysics", The - University of Alberta Press, 1975, pp. 177-178. - .. 
[3] Wikipedia, "Window function", - http://en.wikipedia.org/wiki/Window_function - - Examples - -------- - >>> from numpy import kaiser - >>> kaiser(12, 14) - array([ 7.72686684e-06, 3.46009194e-03, 4.65200189e-02, - 2.29737120e-01, 5.99885316e-01, 9.45674898e-01, - 9.45674898e-01, 5.99885316e-01, 2.29737120e-01, - 4.65200189e-02, 3.46009194e-03, 7.72686684e-06]) - - - Plot the window and the frequency response: - - >>> from numpy import clip, log10, array, kaiser, linspace - >>> from numpy.fft import fft, fftshift - >>> import matplotlib.pyplot as plt - - >>> window = kaiser(51, 14) - >>> plt.plot(window) - [] - >>> plt.title("Kaiser window") - - >>> plt.ylabel("Amplitude") - - >>> plt.xlabel("Sample") - - >>> plt.show() - - >>> plt.figure() - - >>> A = fft(window, 2048) / 25.5 - >>> mag = abs(fftshift(A)) - >>> freq = linspace(-0.5,0.5,len(A)) - >>> response = 20*log10(mag) - >>> response = clip(response,-100,100) - >>> plt.plot(freq, response) - [] - >>> plt.title("Frequency response of Kaiser window") - - >>> plt.ylabel("Magnitude [dB]") - - >>> plt.xlabel("Normalized frequency [cycles per sample]") - - >>> plt.axis('tight') - (-0.5, 0.5, -100.0, ...) - >>> plt.show() - - """ - from numpy.dual import i0 - if M == 1: - return np.array([1.]) - n = arange(0,M) - alpha = (M-1)/2.0 - return i0(beta * sqrt(1-((n-alpha)/alpha)**2.0))/i0(float(beta)) - -def sinc(x): - """ - Return the sinc function. - - The sinc function is :math:`\\sin(\\pi x)/(\\pi x)`. - - Parameters - ---------- - x : ndarray - Array (possibly multi-dimensional) of values for which to - calculate ``sinc(x)``. - - Returns - ------- - out : ndarray - ``sinc(x)``, which has the same shape as the input. - - Notes - ----- - ``sinc(0)`` is the limit value 1. - - The name sinc is short for "sine cardinal" or "sinus cardinalis". 
- - The sinc function is used in various signal processing applications, - including in anti-aliasing, in the construction of a - Lanczos resampling filter, and in interpolation. - - For bandlimited interpolation of discrete-time signals, the ideal - interpolation kernel is proportional to the sinc function. - - References - ---------- - .. [1] Weisstein, Eric W. "Sinc Function." From MathWorld--A Wolfram Web - Resource. http://mathworld.wolfram.com/SincFunction.html - .. [2] Wikipedia, "Sinc function", - http://en.wikipedia.org/wiki/Sinc_function - - Examples - -------- - >>> x = np.arange(-20., 21.)/5. - >>> np.sinc(x) - array([ -3.89804309e-17, -4.92362781e-02, -8.40918587e-02, - -8.90384387e-02, -5.84680802e-02, 3.89804309e-17, - 6.68206631e-02, 1.16434881e-01, 1.26137788e-01, - 8.50444803e-02, -3.89804309e-17, -1.03943254e-01, - -1.89206682e-01, -2.16236208e-01, -1.55914881e-01, - 3.89804309e-17, 2.33872321e-01, 5.04551152e-01, - 7.56826729e-01, 9.35489284e-01, 1.00000000e+00, - 9.35489284e-01, 7.56826729e-01, 5.04551152e-01, - 2.33872321e-01, 3.89804309e-17, -1.55914881e-01, - -2.16236208e-01, -1.89206682e-01, -1.03943254e-01, - -3.89804309e-17, 8.50444803e-02, 1.26137788e-01, - 1.16434881e-01, 6.68206631e-02, 3.89804309e-17, - -5.84680802e-02, -8.90384387e-02, -8.40918587e-02, - -4.92362781e-02, -3.89804309e-17]) - - >>> import matplotlib.pyplot as plt - >>> plt.plot(x, np.sinc(x)) - [] - >>> plt.title("Sinc Function") - - >>> plt.ylabel("Amplitude") - - >>> plt.xlabel("X") - - >>> plt.show() - - It works in 2-D as well: - - >>> x = np.arange(-200., 201.)/50. - >>> xx = np.outer(x, x) - >>> plt.imshow(np.sinc(xx)) - - - """ - x = np.asanyarray(x) - y = pi* where(x == 0, 1.0e-20, x) - return sin(y)/y - -def msort(a): - """ - Return a copy of an array sorted along the first axis. - - Parameters - ---------- - a : array_like - Array to be sorted. - - Returns - ------- - sorted_array : ndarray - Array of the same type and shape as `a`. 
- - See Also - -------- - sort - - Notes - ----- - ``np.msort(a)`` is equivalent to ``np.sort(a, axis=0)``. - - """ - b = array(a,subok=True,copy=True) - b.sort(0) - return b - -def median(a, axis=None, out=None, overwrite_input=False): - """ - Compute the median along the specified axis. - - Returns the median of the array elements. - - Parameters - ---------- - a : array_like - Input array or object that can be converted to an array. - axis : {None, int}, optional - Axis along which the medians are computed. The default (axis=None) - is to compute the median along a flattened version of the array. - out : ndarray, optional - Alternative output array in which to place the result. It must - have the same shape and buffer length as the expected output, - but the type (of the output) will be cast if necessary. - overwrite_input : {False, True}, optional - If True, then allow use of memory of input array (a) for - calculations. The input array will be modified by the call to - median. This will save memory when you do not need to preserve - the contents of the input array. Treat the input as undefined, - but it will probably be fully or partially sorted. Default is - False. Note that, if `overwrite_input` is True and the input - is not already an ndarray, an error will be raised. - - Returns - ------- - median : ndarray - A new array holding the result (unless `out` is specified, in - which case that array is returned instead). If the input contains - integers, or floats of smaller precision than 64, then the output - data-type is float64. Otherwise, the output data-type is the same - as that of the input. - - See Also - -------- - mean, percentile - - Notes - ----- - Given a vector V of length N, the median of V is the middle value of - a sorted copy of V, ``V_sorted`` - i.e., ``V_sorted[(N-1)/2]``, when N is - odd. When N is even, it is the average of the two middle values of - ``V_sorted``. 
- - Examples - -------- - >>> a = np.array([[10, 7, 4], [3, 2, 1]]) - >>> a - array([[10, 7, 4], - [ 3, 2, 1]]) - >>> np.median(a) - 3.5 - >>> np.median(a, axis=0) - array([ 6.5, 4.5, 2.5]) - >>> np.median(a, axis=1) - array([ 7., 2.]) - >>> m = np.median(a, axis=0) - >>> out = np.zeros_like(m) - >>> np.median(a, axis=0, out=m) - array([ 6.5, 4.5, 2.5]) - >>> m - array([ 6.5, 4.5, 2.5]) - >>> b = a.copy() - >>> np.median(b, axis=1, overwrite_input=True) - array([ 7., 2.]) - >>> assert not np.all(a==b) - >>> b = a.copy() - >>> np.median(b, axis=None, overwrite_input=True) - 3.5 - >>> assert not np.all(a==b) - - """ - if overwrite_input: - if axis is None: - sorted = a.ravel() - sorted.sort() - else: - a.sort(axis=axis) - sorted = a - else: - sorted = sort(a, axis=axis) - if axis is None: - axis = 0 - indexer = [slice(None)] * sorted.ndim - index = int(sorted.shape[axis]/2) - if sorted.shape[axis] % 2 == 1: - # index with slice to allow mean (below) to work - indexer[axis] = slice(index, index+1) - else: - indexer[axis] = slice(index-1, index+1) - # Use mean in odd and even case to coerce data type - # and check, use out array. - return mean(sorted[indexer], axis=axis, out=out) - -def percentile(a, q, axis=None, out=None, overwrite_input=False): - """ - Compute the qth percentile of the data along the specified axis. - - Returns the qth percentile of the array elements. - - Parameters - ---------- - a : array_like - Input array or object that can be converted to an array. - q : float in range of [0,100] (or sequence of floats) - percentile to compute which must be between 0 and 100 inclusive - axis : {None, int}, optional - Axis along which the percentiles are computed. The default (axis=None) - is to compute the median along a flattened version of the array. - out : ndarray, optional - Alternative output array in which to place the result. 
It must - have the same shape and buffer length as the expected output, - but the type (of the output) will be cast if necessary. - overwrite_input : {False, True}, optional - If True, then allow use of memory of input array (a) for - calculations. The input array will be modified by the call to - percentile. This will save memory when you do not need to preserve - the contents of the input array. Treat the input as undefined, - but it will probably be fully or partially sorted. Default is - False. Note that, if `overwrite_input` is True and the input - is not already an ndarray, an error will be raised. - - Returns - ------- - pcntile : ndarray - A new array holding the result (unless `out` is specified, in - which case that array is returned instead). If the input contains - integers, or floats of smaller precision than 64, then the output - data-type is float64. Otherwise, the output data-type is the same - as that of the input. - - See Also - -------- - mean, median - - Notes - ----- - Given a vector V of length N, the qth percentile of V is the qth ranked - value in a sorted copy of V. A weighted average of the two nearest neighbors - is used if the normalized ranking does not match q exactly. 
- The same as the median if q is 50; the same as the min if q is 0; - and the same as the max if q is 100. - - Examples - -------- - >>> a = np.array([[10, 7, 4], [3, 2, 1]]) - >>> a - array([[10, 7, 4], - [ 3, 2, 1]]) - >>> np.percentile(a, 50) - 3.5 - >>> np.percentile(a, 50, axis=0) - array([ 6.5, 4.5, 2.5]) - >>> np.percentile(a, 50, axis=1) - array([ 7., 2.]) - >>> m = np.percentile(a, 50, axis=0) - >>> out = np.zeros_like(m) - >>> np.percentile(a, 50, axis=0, out=m) - array([ 6.5, 4.5, 2.5]) - >>> m - array([ 6.5, 4.5, 2.5]) - >>> b = a.copy() - >>> np.percentile(b, 50, axis=1, overwrite_input=True) - array([ 7., 2.]) - >>> assert not np.all(a==b) - >>> b = a.copy() - >>> np.percentile(b, 50, axis=None, overwrite_input=True) - 3.5 - >>> assert not np.all(a==b) - - """ - a = np.asarray(a) - - if q == 0: - return a.min(axis=axis, out=out) - elif q == 100: - return a.max(axis=axis, out=out) - - if overwrite_input: - if axis is None: - sorted = a.ravel() - sorted.sort() - else: - a.sort(axis=axis) - sorted = a - else: - sorted = sort(a, axis=axis) - if axis is None: - axis = 0 - - return _compute_qth_percentile(sorted, q, axis, out) - -# handle sequence of q's without calling sort multiple times -def _compute_qth_percentile(sorted, q, axis, out): - if not isscalar(q): - p = [_compute_qth_percentile(sorted, qi, axis, None) - for qi in q] - - if out is not None: - out.flat = p - - return p - - q = q / 100.0 - if (q < 0) or (q > 1): - raise ValueError, "percentile must be in the range [0,100]" - - indexer = [slice(None)] * sorted.ndim - Nx = sorted.shape[axis] - index = q*(Nx-1) - i = int(index) - if i == index: - indexer[axis] = slice(i, i+1) - weights = array(1) - sumval = 1.0 - else: - indexer[axis] = slice(i, i+2) - j = i + 1 - weights = array([(j - index), (index - i)],float) - wshape = [1]*sorted.ndim - wshape[axis] = 2 - weights.shape = wshape - sumval = weights.sum() - - # Use add.reduce in both cases to coerce data type as well as - # check and 
use out array. - return add.reduce(sorted[indexer]*weights, axis=axis, out=out)/sumval - -def trapz(y, x=None, dx=1.0, axis=-1): - """ - Integrate along the given axis using the composite trapezoidal rule. - - Integrate `y` (`x`) along given axis. - - Parameters - ---------- - y : array_like - Input array to integrate. - x : array_like, optional - If `x` is None, then spacing between all `y` elements is `dx`. - dx : scalar, optional - If `x` is None, spacing given by `dx` is assumed. Default is 1. - axis : int, optional - Specify the axis. - - Returns - ------- - out : float - Definite integral as approximated by trapezoidal rule. - - See Also - -------- - sum, cumsum - - Notes - ----- - Image [2]_ illustrates trapezoidal rule -- y-axis locations of points will - be taken from `y` array, by default x-axis distances between points will be - 1.0, alternatively they can be provided with `x` array or with `dx` scalar. - Return value will be equal to combined area under the red lines. - - - References - ---------- - .. [1] Wikipedia page: http://en.wikipedia.org/wiki/Trapezoidal_rule - - .. 
[2] Illustration image: - http://en.wikipedia.org/wiki/File:Composite_trapezoidal_rule_illustration.png - - Examples - -------- - >>> np.trapz([1,2,3]) - 4.0 - >>> np.trapz([1,2,3], x=[4,6,8]) - 8.0 - >>> np.trapz([1,2,3], dx=2) - 8.0 - >>> a = np.arange(6).reshape(2, 3) - >>> a - array([[0, 1, 2], - [3, 4, 5]]) - >>> np.trapz(a, axis=0) - array([ 1.5, 2.5, 3.5]) - >>> np.trapz(a, axis=1) - array([ 2., 8.]) - - """ - y = asanyarray(y) - if x is None: - d = dx - else: - x = asanyarray(x) - if x.ndim == 1: - d = diff(x) - # reshape to correct shape - shape = [1]*y.ndim - shape[axis] = d.shape[0] - d = d.reshape(shape) - else: - d = diff(x, axis=axis) - nd = len(y.shape) - slice1 = [slice(None)]*nd - slice2 = [slice(None)]*nd - slice1[axis] = slice(1,None) - slice2[axis] = slice(None,-1) - try: - ret = (d * (y[slice1] +y [slice2]) / 2.0).sum(axis) - except ValueError: # Operations didn't work, cast to ndarray - d = np.asarray(d) - y = np.asarray(y) - ret = add.reduce(d * (y[slice1]+y[slice2])/2.0, axis) - return ret - -#always succeed -def add_newdoc(place, obj, doc): - """Adds documentation to obj which is in module place. - - If doc is a string add it to obj as a docstring - - If doc is a tuple, then the first element is interpreted as - an attribute of obj and the second as the docstring - (method, docstring) - - If doc is a list, then each element of the list should be a - sequence of length two --> [(method1, docstring1), - (method2, docstring2), ...] - - This routine never raises an error. - """ - try: - new = {} - exec 'from %s import %s' % (place, obj) in new - if isinstance(doc, str): - add_docstring(new[obj], doc.strip()) - elif isinstance(doc, tuple): - add_docstring(getattr(new[obj], doc[0]), doc[1].strip()) - elif isinstance(doc, list): - for val in doc: - add_docstring(getattr(new[obj], val[0]), val[1].strip()) - except: - pass - - -# From matplotlib -def meshgrid(x,y): - """ - Return coordinate matrices from two coordinate vectors. 
- - Parameters - ---------- - x, y : ndarray - Two 1-D arrays representing the x and y coordinates of a grid. - - Returns - ------- - X, Y : ndarray - For vectors `x`, `y` with lengths ``Nx=len(x)`` and ``Ny=len(y)``, - return `X`, `Y` where `X` and `Y` are ``(Ny, Nx)`` shaped arrays - with the elements of `x` and y repeated to fill the matrix along - the first dimension for `x`, the second for `y`. - - See Also - -------- - index_tricks.mgrid : Construct a multi-dimensional "meshgrid" - using indexing notation. - index_tricks.ogrid : Construct an open multi-dimensional "meshgrid" - using indexing notation. - - Examples - -------- - >>> X, Y = np.meshgrid([1,2,3], [4,5,6,7]) - >>> X - array([[1, 2, 3], - [1, 2, 3], - [1, 2, 3], - [1, 2, 3]]) - >>> Y - array([[4, 4, 4], - [5, 5, 5], - [6, 6, 6], - [7, 7, 7]]) - - `meshgrid` is very useful to evaluate functions on a grid. - - >>> x = np.arange(-5, 5, 0.1) - >>> y = np.arange(-5, 5, 0.1) - >>> xx, yy = np.meshgrid(x, y) - >>> z = np.sin(xx**2+yy**2)/(xx**2+yy**2) - - """ - x = asarray(x) - y = asarray(y) - numRows, numCols = len(y), len(x) # yes, reversed - x = x.reshape(1,numCols) - X = x.repeat(numRows, axis=0) - - y = y.reshape(numRows,1) - Y = y.repeat(numCols, axis=1) - return X, Y - -def delete(arr, obj, axis=None): - """ - Return a new array with sub-arrays along an axis deleted. - - Parameters - ---------- - arr : array_like - Input array. - obj : slice, int or array of ints - Indicate which sub-arrays to remove. - axis : int, optional - The axis along which to delete the subarray defined by `obj`. - If `axis` is None, `obj` is applied to the flattened array. - - Returns - ------- - out : ndarray - A copy of `arr` with the elements specified by `obj` removed. Note - that `delete` does not occur in-place. If `axis` is None, `out` is - a flattened array. - - See Also - -------- - insert : Insert elements into an array. - append : Append elements at the end of an array. 
- - Examples - -------- - >>> arr = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]]) - >>> arr - array([[ 1, 2, 3, 4], - [ 5, 6, 7, 8], - [ 9, 10, 11, 12]]) - >>> np.delete(arr, 1, 0) - array([[ 1, 2, 3, 4], - [ 9, 10, 11, 12]]) - - >>> np.delete(arr, np.s_[::2], 1) - array([[ 2, 4], - [ 6, 8], - [10, 12]]) - >>> np.delete(arr, [1,3,5], None) - array([ 1, 3, 5, 7, 8, 9, 10, 11, 12]) - - """ - wrap = None - if type(arr) is not ndarray: - try: - wrap = arr.__array_wrap__ - except AttributeError: - pass - - - arr = asarray(arr) - ndim = arr.ndim - if axis is None: - if ndim != 1: - arr = arr.ravel() - ndim = arr.ndim - axis = ndim-1 - if ndim == 0: - if wrap: - return wrap(arr) - else: - return arr.copy() - slobj = [slice(None)]*ndim - N = arr.shape[axis] - newshape = list(arr.shape) - if isinstance(obj, (int, long, integer)): - if (obj < 0): obj += N - if (obj < 0 or obj >=N): - raise ValueError( - "invalid entry") - newshape[axis] -= 1 - new = empty(newshape, arr.dtype, arr.flags.fnc) - slobj[axis] = slice(None, obj) - new[slobj] = arr[slobj] - slobj[axis] = slice(obj,None) - slobj2 = [slice(None)]*ndim - slobj2[axis] = slice(obj+1,None) - new[slobj] = arr[slobj2] - elif isinstance(obj, slice): - start, stop, step = obj.indices(N) - numtodel = len(xrange(start, stop, step)) - if numtodel <= 0: - # nothing to delete; `new` has not been built yet, so return a copy - if wrap: - return wrap(arr.copy()) - else: - return arr.copy() - newshape[axis] -= numtodel - new = empty(newshape, arr.dtype, arr.flags.fnc) - # copy initial chunk - if start == 0: - pass - else: - slobj[axis] = slice(None, start) - new[slobj] = arr[slobj] - # copy end chunk - if stop == N: - pass - else: - slobj[axis] = slice(stop-numtodel,None) - slobj2 = [slice(None)]*ndim - slobj2[axis] = slice(stop, None) - new[slobj] = arr[slobj2] - # copy middle pieces - if step == 1: - pass - else: # use array indexing. 
- obj = arange(start, stop, step, dtype=intp) - all = arange(start, stop, dtype=intp) - obj = setdiff1d(all, obj) - slobj[axis] = slice(start, stop-numtodel) - slobj2 = [slice(None)]*ndim - slobj2[axis] = obj - new[slobj] = arr[slobj2] - else: # default behavior - obj = array(obj, dtype=intp, copy=0, ndmin=1) - all = arange(N, dtype=intp) - obj = setdiff1d(all, obj) - slobj[axis] = obj - new = arr[slobj] - if wrap: - return wrap(new) - else: - return new - -def insert(arr, obj, values, axis=None): - """ - Insert values along the given axis before the given indices. - - Parameters - ---------- - arr : array_like - Input array. - obj : int, slice or sequence of ints - Object that defines the index or indices before which `values` is - inserted. - values : array_like - Values to insert into `arr`. If the type of `values` is different - from that of `arr`, `values` is converted to the type of `arr`. - axis : int, optional - Axis along which to insert `values`. If `axis` is None then `arr` - is flattened first. - - Returns - ------- - out : ndarray - A copy of `arr` with `values` inserted. Note that `insert` - does not occur in-place: a new array is returned. If - `axis` is None, `out` is a flattened array. - - See Also - -------- - append : Append elements at the end of an array. - delete : Delete elements from an array. 
- - Examples - -------- - >>> a = np.array([[1, 1], [2, 2], [3, 3]]) - >>> a - array([[1, 1], - [2, 2], - [3, 3]]) - >>> np.insert(a, 1, 5) - array([1, 5, 1, 2, 2, 3, 3]) - >>> np.insert(a, 1, 5, axis=1) - array([[1, 5, 1], - [2, 5, 2], - [3, 5, 3]]) - - >>> b = a.flatten() - >>> b - array([1, 1, 2, 2, 3, 3]) - >>> np.insert(b, [2, 2], [5, 6]) - array([1, 1, 5, 6, 2, 2, 3, 3]) - - >>> np.insert(b, slice(2, 4), [5, 6]) - array([1, 1, 5, 2, 6, 2, 3, 3]) - - >>> np.insert(b, [2, 2], [7.13, False]) # type casting - array([1, 1, 7, 0, 2, 2, 3, 3]) - - >>> x = np.arange(8).reshape(2, 4) - >>> idx = (1, 3) - >>> np.insert(x, idx, 999, axis=1) - array([[ 0, 999, 1, 2, 999, 3], - [ 4, 999, 5, 6, 999, 7]]) - - """ - wrap = None - if type(arr) is not ndarray: - try: - wrap = arr.__array_wrap__ - except AttributeError: - pass - - arr = asarray(arr) - ndim = arr.ndim - if axis is None: - if ndim != 1: - arr = arr.ravel() - ndim = arr.ndim - axis = ndim-1 - if (ndim == 0): - arr = arr.copy() - arr[...] 
= values - if wrap: - return wrap(arr) - else: - return arr - slobj = [slice(None)]*ndim - N = arr.shape[axis] - newshape = list(arr.shape) - if isinstance(obj, (int, long, integer)): - if (obj < 0): obj += N - if obj < 0 or obj > N: - raise ValueError( - "index (%d) out of range (0<=index<=%d) "\ - "in dimension %d" % (obj, N, axis)) - newshape[axis] += 1; - new = empty(newshape, arr.dtype, arr.flags.fnc) - slobj[axis] = slice(None, obj) - new[slobj] = arr[slobj] - slobj[axis] = obj - new[slobj] = values - slobj[axis] = slice(obj+1,None) - slobj2 = [slice(None)]*ndim - slobj2[axis] = slice(obj,None) - new[slobj] = arr[slobj2] - if wrap: - return wrap(new) - return new - - elif isinstance(obj, slice): - # turn it into a range object - obj = arange(*obj.indices(N),**{'dtype':intp}) - - # get two sets of indices - # one is the indices which will hold the new stuff - # two is the indices where arr will be copied over - - obj = asarray(obj, dtype=intp) - numnew = len(obj) - index1 = obj + arange(numnew) - index2 = setdiff1d(arange(numnew+N),index1) - newshape[axis] += numnew - new = empty(newshape, arr.dtype, arr.flags.fnc) - slobj2 = [slice(None)]*ndim - slobj[axis] = index1 - slobj2[axis] = index2 - new[slobj] = values - new[slobj2] = arr - - if wrap: - return wrap(new) - return new - -def append(arr, values, axis=None): - """ - Append values to the end of an array. - - Parameters - ---------- - arr : array_like - Values are appended to a copy of this array. - values : array_like - These values are appended to a copy of `arr`. It must be of the - correct shape (the same shape as `arr`, excluding `axis`). If `axis` - is not specified, `values` can be any shape and will be flattened - before use. - axis : int, optional - The axis along which `values` are appended. If `axis` is not given, - both `arr` and `values` are flattened before use. - - Returns - ------- - out : ndarray - A copy of `arr` with `values` appended to `axis`. 
Note that `append` - does not occur in-place: a new array is allocated and filled. If - `axis` is None, `out` is a flattened array. - - See Also - -------- - insert : Insert elements into an array. - delete : Delete elements from an array. - - Examples - -------- - >>> np.append([1, 2, 3], [[4, 5, 6], [7, 8, 9]]) - array([1, 2, 3, 4, 5, 6, 7, 8, 9]) - - When `axis` is specified, `values` must have the correct shape. - - >>> np.append([[1, 2, 3], [4, 5, 6]], [[7, 8, 9]], axis=0) - array([[1, 2, 3], - [4, 5, 6], - [7, 8, 9]]) - >>> np.append([[1, 2, 3], [4, 5, 6]], [7, 8, 9], axis=0) - Traceback (most recent call last): - ... - ValueError: arrays must have same number of dimensions - - """ - arr = asanyarray(arr) - if axis is None: - if arr.ndim != 1: - arr = arr.ravel() - values = ravel(values) - axis = arr.ndim-1 - return concatenate((arr, values), axis=axis) diff --git a/pythonPackages/numpy/numpy/lib/index_tricks.py b/pythonPackages/numpy/numpy/lib/index_tricks.py deleted file mode 100755 index eb1ab22e97..0000000000 --- a/pythonPackages/numpy/numpy/lib/index_tricks.py +++ /dev/null @@ -1,901 +0,0 @@ -__all__ = ['unravel_index', - 'mgrid', - 'ogrid', - 'r_', 'c_', 's_', - 'index_exp', 'ix_', - 'ndenumerate','ndindex', - 'fill_diagonal','diag_indices','diag_indices_from'] - -import sys -import numpy.core.numeric as _nx -from numpy.core.numeric import ( asarray, ScalarType, array, alltrue, cumprod, - arange ) -from numpy.core.numerictypes import find_common_type -import math - -import function_base -import numpy.matrixlib as matrix -from function_base import diff -makemat = matrix.matrix - -# contributed by Stefan van der Walt -def unravel_index(x,dims): - """ - Convert a flat index to an index tuple for an array of given shape. - - Parameters - ---------- - x : int - Flattened index. - dims : tuple of ints - Input shape, the shape of an array into which indexing is - required. 
- - Returns - ------- - idx : tuple of ints - Tuple of the same shape as `dims`, containing the unraveled index. - - Notes - ----- - In the Examples section, since ``arr.flat[x] == arr.max()`` it may be - easier to use flattened indexing than to re-map the index to a tuple. - - Examples - -------- - >>> arr = np.arange(20).reshape(5, 4) - >>> arr - array([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11], - [12, 13, 14, 15], - [16, 17, 18, 19]]) - >>> x = arr.argmax() - >>> x - 19 - >>> dims = arr.shape - >>> idx = np.unravel_index(x, dims) - >>> idx - (4, 3) - >>> arr[idx] == arr.max() - True - - """ - if x > _nx.prod(dims)-1 or x < 0: - raise ValueError("Invalid index, must be 0 <= x <= number of elements.") - - idx = _nx.empty_like(dims) - - # Take dimensions - # [a,b,c,d] - # Reverse and drop first element - # [d,c,b] - # Prepend [1] - # [1,d,c,b] - # Calculate cumulative product - # [1,d,dc,dcb] - # Reverse - # [dcb,dc,d,1] - dim_prod = _nx.cumprod([1] + list(dims)[:0:-1])[::-1] - # Indices become [x/dcb % a, x/dc % b, x/d % c, x/1 % d] - return tuple(x//dim_prod % dims) - -def ix_(*args): - """ - Construct an open mesh from multiple sequences. - - This function takes N 1-D sequences and returns N outputs with N - dimensions each, such that the shape is 1 in all but one dimension - and the dimension with the non-unit shape value cycles through all - N dimensions. - - Using `ix_` one can quickly construct index arrays that will index - the cross product. ``a[np.ix_([1,3],[2,5])]`` returns the array - ``[[a[1,2] a[1,5]], [a[3,2] a[3,5]]]``. - - Parameters - ---------- - args : 1-D sequences - - Returns - ------- - out : tuple of ndarrays - N arrays with N dimensions each, with N the number of input - sequences. Together these arrays form an open mesh. 
- - See Also - -------- - ogrid, mgrid, meshgrid - - Examples - -------- - >>> a = np.arange(10).reshape(2, 5) - >>> a - array([[0, 1, 2, 3, 4], - [5, 6, 7, 8, 9]]) - >>> ixgrid = np.ix_([0,1], [2,4]) - >>> ixgrid - (array([[0], - [1]]), array([[2, 4]])) - >>> ixgrid[0].shape, ixgrid[1].shape - ((2, 1), (1, 2)) - >>> a[ixgrid] - array([[2, 4], - [7, 9]]) - - """ - out = [] - nd = len(args) - baseshape = [1]*nd - for k in range(nd): - new = _nx.asarray(args[k]) - if (new.ndim != 1): - raise ValueError, "Cross index must be 1 dimensional" - if issubclass(new.dtype.type, _nx.bool_): - new = new.nonzero()[0] - baseshape[k] = len(new) - new = new.reshape(tuple(baseshape)) - out.append(new) - baseshape[k] = 1 - return tuple(out) - -class nd_grid(object): - """ - Construct a multi-dimensional "meshgrid". - - ``grid = nd_grid()`` creates an instance which will return a mesh-grid - when indexed. The dimension and number of the output arrays are equal - to the number of indexing dimensions. If the step length is not a - complex number, then the stop is not inclusive. - - However, if the step length is a **complex number** (e.g. 5j), then the - integer part of its magnitude is interpreted as specifying the - number of points to create between the start and stop values, where - the stop value **is inclusive**. - - If instantiated with an argument of ``sparse=True``, the mesh-grid is - open (or not fleshed out) so that only one-dimension of each returned - argument is greater than 1. - - Parameters - ---------- - sparse : bool, optional - Whether the grid is sparse or not. Default is False. - - Notes - ----- - Two instances of `nd_grid` are made available in the NumPy namespace, - `mgrid` and `ogrid`:: - - mgrid = nd_grid(sparse=False) - ogrid = nd_grid(sparse=True) - - Users should use these pre-defined instances instead of using `nd_grid` - directly. 
- - Examples - -------- - >>> mgrid = np.lib.index_tricks.nd_grid() - >>> mgrid[0:5,0:5] - array([[[0, 0, 0, 0, 0], - [1, 1, 1, 1, 1], - [2, 2, 2, 2, 2], - [3, 3, 3, 3, 3], - [4, 4, 4, 4, 4]], - [[0, 1, 2, 3, 4], - [0, 1, 2, 3, 4], - [0, 1, 2, 3, 4], - [0, 1, 2, 3, 4], - [0, 1, 2, 3, 4]]]) - >>> mgrid[-1:1:5j] - array([-1. , -0.5, 0. , 0.5, 1. ]) - - >>> ogrid = np.lib.index_tricks.nd_grid(sparse=True) - >>> ogrid[0:5,0:5] - [array([[0], - [1], - [2], - [3], - [4]]), array([[0, 1, 2, 3, 4]])] - - """ - def __init__(self, sparse=False): - self.sparse = sparse - def __getitem__(self,key): - try: - size = [] - typ = int - for k in range(len(key)): - step = key[k].step - start = key[k].start - if start is None: start=0 - if step is None: step=1 - if isinstance(step, complex): - size.append(int(abs(step))) - typ = float - else: - size.append(math.ceil((key[k].stop - start)/(step*1.0))) - if isinstance(step, float) or \ - isinstance(start, float) or \ - isinstance(key[k].stop, float): - typ = float - if self.sparse: - nn = map(lambda x,t: _nx.arange(x, dtype=t), size, \ - (typ,)*len(size)) - else: - nn = _nx.indices(size, typ) - for k in range(len(size)): - step = key[k].step - start = key[k].start - if start is None: start=0 - if step is None: step=1 - if isinstance(step, complex): - step = int(abs(step)) - if step != 1: - step = (key[k].stop - start)/float(step-1) - nn[k] = (nn[k]*step+start) - if self.sparse: - slobj = [_nx.newaxis]*len(size) - for k in range(len(size)): - slobj[k] = slice(None,None) - nn[k] = nn[k][slobj] - slobj[k] = _nx.newaxis - return nn - except (IndexError, TypeError): - step = key.step - stop = key.stop - start = key.start - if start is None: start = 0 - if isinstance(step, complex): - step = abs(step) - length = int(step) - if step != 1: - step = (key.stop-start)/float(step-1) - stop = key.stop+step - return _nx.arange(0, length,1, float)*step + start - else: - return _nx.arange(start, stop, step) - - def __getslice__(self,i,j): - return 
_nx.arange(i,j)
-
- def __len__(self):
- return 0
-
-mgrid = nd_grid(sparse=False)
-ogrid = nd_grid(sparse=True)
-mgrid.__doc__ = None # set in numpy.add_newdocs
-ogrid.__doc__ = None # set in numpy.add_newdocs
-
-class AxisConcatenator(object):
- """
- Translates slice objects to concatenation along an axis.
-
- For detailed documentation on usage, see `r_`.
-
- """
- def _retval(self, res):
- if self.matrix:
- oldndim = res.ndim
- res = makemat(res)
- if oldndim == 1 and self.col:
- res = res.T
- self.axis = self._axis
- self.matrix = self._matrix
- self.col = 0
- return res
-
- def __init__(self, axis=0, matrix=False, ndmin=1, trans1d=-1):
- self._axis = axis
- self._matrix = matrix
- self.axis = axis
- self.matrix = matrix
- self.col = 0
- self.trans1d = trans1d
- self.ndmin = ndmin
-
- def __getitem__(self,key):
- trans1d = self.trans1d
- ndmin = self.ndmin
- if isinstance(key, str):
- frame = sys._getframe().f_back
- mymat = matrix.bmat(key,frame.f_globals,frame.f_locals)
- return mymat
- if type(key) is not tuple:
- key = (key,)
- objs = []
- scalars = []
- arraytypes = []
- scalartypes = []
- for k in range(len(key)):
- scalar = False
- if type(key[k]) is slice:
- step = key[k].step
- start = key[k].start
- stop = key[k].stop
- if start is None: start = 0
- if step is None:
- step = 1
- if isinstance(step, complex):
- size = int(abs(step))
- newobj = function_base.linspace(start, stop, num=size)
- else:
- newobj = _nx.arange(start, stop, step)
- if ndmin > 1:
- newobj = array(newobj,copy=False,ndmin=ndmin)
- if trans1d != -1:
- newobj = newobj.swapaxes(-1,trans1d)
- elif isinstance(key[k],str):
- if k != 0:
- raise ValueError, "special directives must be the "\
- "first entry."
- key0 = key[0]
- if key0 in 'rc':
- self.matrix = True
- self.col = (key0 == 'c')
- continue
- if ',' in key0:
- vec = key0.split(',')
- try:
- self.axis, ndmin = \
- [int(x) for x in vec[:2]]
- if len(vec) == 3:
- trans1d = int(vec[2])
- continue
- except:
- raise ValueError, "unknown special directive"
- try:
- self.axis = int(key[k])
- continue
- except (ValueError, TypeError):
- raise ValueError, "unknown special directive"
- elif type(key[k]) in ScalarType:
- newobj = array(key[k],ndmin=ndmin)
- scalars.append(k)
- scalar = True
- scalartypes.append(newobj.dtype)
- else:
- newobj = key[k]
- if ndmin > 1:
- tempobj = array(newobj, copy=False, subok=True)
- newobj = array(newobj, copy=False, subok=True,
- ndmin=ndmin)
- if trans1d != -1 and tempobj.ndim < ndmin:
- k2 = ndmin-tempobj.ndim
- if (trans1d < 0):
- trans1d += k2 + 1
- defaxes = range(ndmin)
- k1 = trans1d
- axes = defaxes[:k1] + defaxes[k2:] + \
- defaxes[k1:k2]
- newobj = newobj.transpose(axes)
- del tempobj
- objs.append(newobj)
- if not scalar and isinstance(newobj, _nx.ndarray):
- arraytypes.append(newobj.dtype)
-
- # Ensure that scalars won't up-cast unless warranted
- final_dtype = find_common_type(arraytypes, scalartypes)
- if final_dtype is not None:
- for k in scalars:
- objs[k] = objs[k].astype(final_dtype)
-
- res = _nx.concatenate(tuple(objs),axis=self.axis)
- return self._retval(res)
-
- def __getslice__(self,i,j):
- res = _nx.arange(i,j)
- return self._retval(res)
-
- def __len__(self):
- return 0
-
-# separate classes are used here instead of just making r_ = concatenator(0),
-# etc. because otherwise we couldn't get the doc string to come out right
-# in help(r_)
-
-class RClass(AxisConcatenator):
- """
- Translates slice objects to concatenation along the first axis.
-
- This is a simple way to build up arrays quickly. There are two use cases.
-
- 1. If the index expression contains comma separated arrays, then stack
- them along their first axis.
- 2.
If the index expression contains slice notation or scalars then create - a 1-D array with a range indicated by the slice notation. - - If slice notation is used, the syntax ``start:stop:step`` is equivalent - to ``np.arange(start, stop, step)`` inside of the brackets. However, if - ``step`` is an imaginary number (i.e. 100j) then its integer portion is - interpreted as a number-of-points desired and the start and stop are - inclusive. In other words ``start:stop:stepj`` is interpreted as - ``np.linspace(start, stop, step, endpoint=1)`` inside of the brackets. - After expansion of slice notation, all comma separated sequences are - concatenated together. - - Optional character strings placed as the first element of the index - expression can be used to change the output. The strings 'r' or 'c' result - in matrix output. If the result is 1-D and 'r' is specified a 1 x N (row) - matrix is produced. If the result is 1-D and 'c' is specified, then a N x 1 - (column) matrix is produced. If the result is 2-D then both provide the - same matrix result. - - A string integer specifies which axis to stack multiple comma separated - arrays along. A string of two comma-separated integers allows indication - of the minimum number of dimensions to force each entry into as the - second integer (the axis to concatenate along is still the first integer). - - A string with three comma-separated integers allows specification of the - axis to concatenate along, the minimum number of dimensions to force the - entries to, and which axis should contain the start of the arrays which - are less than the specified number of dimensions. In other words the third - integer allows you to specify where the 1's should be placed in the shape - of the arrays that have their shapes upgraded. By default, they are placed - in the front of the shape tuple. The third argument allows you to specify - where the start of the array should be instead. 
Thus, a third argument of - '0' would place the 1's at the end of the array shape. Negative integers - specify where in the new shape tuple the last dimension of upgraded arrays - should be placed, so the default is '-1'. - - Parameters - ---------- - Not a function, so takes no parameters - - - Returns - ------- - A concatenated ndarray or matrix. - - See Also - -------- - concatenate : Join a sequence of arrays together. - c_ : Translates slice objects to concatenation along the second axis. - - Examples - -------- - >>> np.r_[np.array([1,2,3]), 0, 0, np.array([4,5,6])] - array([1, 2, 3, 0, 0, 4, 5, 6]) - >>> np.r_[-1:1:6j, [0]*3, 5, 6] - array([-1. , -0.6, -0.2, 0.2, 0.6, 1. , 0. , 0. , 0. , 5. , 6. ]) - - String integers specify the axis to concatenate along or the minimum - number of dimensions to force entries into. - - >>> a = np.array([[0, 1, 2], [3, 4, 5]]) - >>> np.r_['-1', a, a] # concatenate along last axis - array([[0, 1, 2, 0, 1, 2], - [3, 4, 5, 3, 4, 5]]) - >>> np.r_['0,2', [1,2,3], [4,5,6]] # concatenate along first axis, dim>=2 - array([[1, 2, 3], - [4, 5, 6]]) - - >>> np.r_['0,2,0', [1,2,3], [4,5,6]] - array([[1], - [2], - [3], - [4], - [5], - [6]]) - >>> np.r_['1,2,0', [1,2,3], [4,5,6]] - array([[1, 4], - [2, 5], - [3, 6]]) - - Using 'r' or 'c' as a first string argument creates a matrix. - - >>> np.r_['r',[1,2,3], [4,5,6]] - matrix([[1, 2, 3, 4, 5, 6]]) - - """ - def __init__(self): - AxisConcatenator.__init__(self, 0) - -r_ = RClass() - -class CClass(AxisConcatenator): - """ - Translates slice objects to concatenation along the second axis. - - This is short-hand for ``np.r_['-1,2,0', index expression]``, which is - useful because of its common occurrence. In particular, arrays will be - stacked along their last axis after being upgraded to at least 2-D with - 1's post-pended to the shape (column vectors made out of 1-D arrays). - - For detailed documentation, see `r_`. 
- - Examples - -------- - >>> np.c_[np.array([[1,2,3]]), 0, 0, np.array([[4,5,6]])] - array([[1, 2, 3, 0, 0, 4, 5, 6]]) - - """ - def __init__(self): - AxisConcatenator.__init__(self, -1, ndmin=2, trans1d=0) - -c_ = CClass() - -class ndenumerate(object): - """ - Multidimensional index iterator. - - Return an iterator yielding pairs of array coordinates and values. - - Parameters - ---------- - a : ndarray - Input array. - - See Also - -------- - ndindex, flatiter - - Examples - -------- - >>> a = np.array([[1, 2], [3, 4]]) - >>> for index, x in np.ndenumerate(a): - ... print index, x - (0, 0) 1 - (0, 1) 2 - (1, 0) 3 - (1, 1) 4 - - """ - def __init__(self, arr): - self.iter = asarray(arr).flat - - def next(self): - """ - Standard iterator method, returns the index tuple and array value. - - Returns - ------- - coords : tuple of ints - The indices of the current iteration. - val : scalar - The array element of the current iteration. - - """ - return self.iter.coords, self.iter.next() - - def __iter__(self): - return self - - -class ndindex(object): - """ - An N-dimensional iterator object to index arrays. - - Given the shape of an array, an `ndindex` instance iterates over - the N-dimensional index of the array. At each iteration a tuple - of indices is returned, the last dimension is iterated over first. - - Parameters - ---------- - `*args` : ints - The size of each dimension of the array. - - See Also - -------- - ndenumerate, flatiter - - Examples - -------- - >>> for index in np.ndindex(3, 2, 1): - ... 
print index - (0, 0, 0) - (0, 1, 0) - (1, 0, 0) - (1, 1, 0) - (2, 0, 0) - (2, 1, 0) - - """ - - def __init__(self, *args): - if len(args) == 1 and isinstance(args[0], tuple): - args = args[0] - self.nd = len(args) - self.ind = [0]*self.nd - self.index = 0 - self.maxvals = args - tot = 1 - for k in range(self.nd): - tot *= args[k] - self.total = tot - - def _incrementone(self, axis): - if (axis < 0): # base case - return - if (self.ind[axis] < self.maxvals[axis]-1): - self.ind[axis] += 1 - else: - self.ind[axis] = 0 - self._incrementone(axis-1) - - def ndincr(self): - """ - Increment the multi-dimensional index by one. - - `ndincr` takes care of the "wrapping around" of the axes. - It is called by `ndindex.next` and not normally used directly. - - """ - self._incrementone(self.nd-1) - - def next(self): - """ - Standard iterator method, updates the index and returns the index tuple. - - Returns - ------- - val : tuple of ints - Returns a tuple containing the indices of the current iteration. - - """ - if (self.index >= self.total): - raise StopIteration - val = tuple(self.ind) - self.index += 1 - self.ndincr() - return val - - def __iter__(self): - return self - - - - -# You can do all this with slice() plus a few special objects, -# but there's a lot to remember. This version is simpler because -# it uses the standard array indexing syntax. -# -# Written by Konrad Hinsen -# last revision: 1999-7-23 -# -# Cosmetic changes by T. Oliphant 2001 -# -# - -class IndexExpression(object): - """ - A nicer way to build up index tuples for arrays. - - .. note:: - Use one of the two predefined instances `index_exp` or `s_` - rather than directly using `IndexExpression`. - - For any index combination, including slicing and axis insertion, - ``a[indices]`` is the same as ``a[np.index_exp[indices]]`` for any - array `a`. 
However, ``np.index_exp[indices]`` can be used anywhere - in Python code and returns a tuple of slice objects that can be - used in the construction of complex index expressions. - - Parameters - ---------- - maketuple : bool - If True, always returns a tuple. - - See Also - -------- - index_exp : Predefined instance that always returns a tuple: - `index_exp = IndexExpression(maketuple=True)`. - s_ : Predefined instance without tuple conversion: - `s_ = IndexExpression(maketuple=False)`. - - Notes - ----- - You can do all this with `slice()` plus a few special objects, - but there's a lot to remember and this version is simpler because - it uses the standard array indexing syntax. - - Examples - -------- - >>> np.s_[2::2] - slice(2, None, 2) - >>> np.index_exp[2::2] - (slice(2, None, 2),) - - >>> np.array([0, 1, 2, 3, 4])[np.s_[2::2]] - array([2, 4]) - - """ - maxint = sys.maxint - def __init__(self, maketuple): - self.maketuple = maketuple - - def __getitem__(self, item): - if self.maketuple and type(item) != type(()): - return (item,) - else: - return item - - def __len__(self): - return self.maxint - - def __getslice__(self, start, stop): - if stop == self.maxint: - stop = None - return self[start:stop:None] - -index_exp = IndexExpression(maketuple=True) -s_ = IndexExpression(maketuple=False) - -# End contribution from Konrad. - - -# The following functions complement those in twodim_base, but are -# applicable to N-dimensions. - -def fill_diagonal(a, val): - """ - Fill the main diagonal of the given array of any dimensionality. - - For an array `a` with ``a.ndim > 2``, the diagonal is the list of - locations with indices ``a[i, i, ..., i]`` all identical. This function - modifies the input array in-place, it does not return a value. - - Parameters - ---------- - a : array, at least 2-D. - Array whose diagonal is to be filled, it gets modified in-place. 
- - val : scalar - Value to be written on the diagonal, its type must be compatible with - that of the array a. - - See also - -------- - diag_indices, diag_indices_from - - Notes - ----- - .. versionadded:: 1.4.0 - - This functionality can be obtained via `diag_indices`, but internally - this version uses a much faster implementation that never constructs the - indices and uses simple slicing. - - Examples - -------- - >>> a = np.zeros((3, 3), int) - >>> np.fill_diagonal(a, 5) - >>> a - array([[5, 0, 0], - [0, 5, 0], - [0, 0, 5]]) - - The same function can operate on a 4-D array: - - >>> a = np.zeros((3, 3, 3, 3), int) - >>> np.fill_diagonal(a, 4) - - We only show a few blocks for clarity: - - >>> a[0, 0] - array([[4, 0, 0], - [0, 0, 0], - [0, 0, 0]]) - >>> a[1, 1] - array([[0, 0, 0], - [0, 4, 0], - [0, 0, 0]]) - >>> a[2, 2] - array([[0, 0, 0], - [0, 0, 0], - [0, 0, 4]]) - - """ - if a.ndim < 2: - raise ValueError("array must be at least 2-d") - if a.ndim == 2: - # Explicit, fast formula for the common case. For 2-d arrays, we - # accept rectangular ones. - step = a.shape[1] + 1 - else: - # For more than d=2, the strided formula is only valid for arrays with - # all dimensions equal, so we check first. - if not alltrue(diff(a.shape)==0): - raise ValueError("All dimensions of input must be of equal length") - step = 1 + (cumprod(a.shape[:-1])).sum() - - # Write the value out into the diagonal. - a.flat[::step] = val - - -def diag_indices(n, ndim=2): - """ - Return the indices to access the main diagonal of an array. - - This returns a tuple of indices that can be used to access the main - diagonal of an array `a` with ``a.ndim >= 2`` dimensions and shape - (n, n, ..., n). For ``a.ndim = 2`` this is the usual diagonal, for - ``a.ndim > 2`` this is the set of indices to access ``a[i, i, ..., i]`` - for ``i = [0..n-1]``. - - Parameters - ---------- - n : int - The size, along each dimension, of the arrays for which the returned - indices can be used. 
- - ndim : int, optional - The number of dimensions. - - See also - -------- - diag_indices_from - - Notes - ----- - .. versionadded:: 1.4.0 - - Examples - -------- - Create a set of indices to access the diagonal of a (4, 4) array: - - >>> di = np.diag_indices(4) - >>> di - (array([0, 1, 2, 3]), array([0, 1, 2, 3])) - >>> a = np.arange(16).reshape(4, 4) - >>> a - array([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11], - [12, 13, 14, 15]]) - >>> a[di] = 100 - >>> a - array([[100, 1, 2, 3], - [ 4, 100, 6, 7], - [ 8, 9, 100, 11], - [ 12, 13, 14, 100]]) - - Now, we create indices to manipulate a 3-D array: - - >>> d3 = np.diag_indices(2, 3) - >>> d3 - (array([0, 1]), array([0, 1]), array([0, 1])) - - And use it to set the diagonal of an array of zeros to 1: - - >>> a = np.zeros((2, 2, 2), dtype=np.int) - >>> a[d3] = 1 - >>> a - array([[[1, 0], - [0, 0]], - [[0, 0], - [0, 1]]]) - - """ - idx = arange(n) - return (idx,) * ndim - - -def diag_indices_from(arr): - """ - Return the indices to access the main diagonal of an n-dimensional array. - - See `diag_indices` for full details. - - Parameters - ---------- - arr : array, at least 2-D - - See Also - -------- - diag_indices - - Notes - ----- - .. versionadded:: 1.4.0 - - """ - - if not arr.ndim >= 2: - raise ValueError("input array must be at least 2-d") - # For more than d=2, the strided formula is only valid for arrays with - # all dimensions equal, so we check first. - if not alltrue(diff(arr.shape) == 0): - raise ValueError("All dimensions of input must be of equal length") - - return diag_indices(arr.shape[0], arr.ndim) diff --git a/pythonPackages/numpy/numpy/lib/info.py b/pythonPackages/numpy/numpy/lib/info.py deleted file mode 100755 index 4a781a2ca4..0000000000 --- a/pythonPackages/numpy/numpy/lib/info.py +++ /dev/null @@ -1,150 +0,0 @@ -""" -Basic functions used by several sub-packages and -useful to have in the main name-space. 
- -Type Handling -------------- -================ =================== -iscomplexobj Test for complex object, scalar result -isrealobj Test for real object, scalar result -iscomplex Test for complex elements, array result -isreal Test for real elements, array result -imag Imaginary part -real Real part -real_if_close Turns complex number with tiny imaginary part to real -isneginf Tests for negative infinity, array result -isposinf Tests for positive infinity, array result -isnan Tests for nans, array result -isinf Tests for infinity, array result -isfinite Tests for finite numbers, array result -isscalar True if argument is a scalar -nan_to_num Replaces NaN's with 0 and infinities with large numbers -cast Dictionary of functions to force cast to each type -common_type Determine the minimum common type code for a group - of arrays -mintypecode Return minimal allowed common typecode. -================ =================== - -Index Tricks ------------- -================ =================== -mgrid Method which allows easy construction of N-d - 'mesh-grids' -``r_`` Append and construct arrays: turns slice objects into - ranges and concatenates them, for 2d arrays appends rows. -index_exp Konrad Hinsen's index_expression class instance which - can be useful for building complicated slicing syntax. 
-================ ===================
-
-Useful Functions
-----------------
-================ ===================
-select Extension of where to multiple conditions and choices
-extract Extract 1d array from flattened array according to mask
-insert Insert 1d array of values into Nd array according to mask
-linspace Evenly spaced samples in linear space
-logspace Evenly spaced samples in logarithmic space
-fix Round x to nearest integer towards zero
-mod Modulo mod(x,y) = x % y except keeps sign of y
-amax Array maximum along axis
-amin Array minimum along axis
-ptp Array max-min along axis
-cumsum Cumulative sum along axis
-prod Product of elements along axis
-cumprod Cumulative product along axis
-diff Discrete differences along axis
-angle Returns angle of complex argument
-unwrap Unwrap phase along given axis (1-d algorithm)
-sort_complex Sort a complex-array (based on real, then imaginary)
-trim_zeros Trim the leading and trailing zeros from 1D array.
-vectorize A class that wraps a Python function taking scalar
- arguments into a generalized function which can handle
- arrays of arguments using the broadcast rules of
- numerix Python.
-================ ===================
-
-Shape Manipulation
-------------------
-================ ===================
-squeeze Return a with length-one dimensions removed.
-atleast_1d Force arrays to be > 1D
-atleast_2d Force arrays to be > 2D
-atleast_3d Force arrays to be > 3D
-vstack Stack arrays vertically (row on row)
-hstack Stack arrays horizontally (column on column)
-column_stack Stack 1D arrays as columns into 2D array
-dstack Stack arrays depthwise (along third dimension)
-split Divide array into a list of sub-arrays
-hsplit Split into columns
-vsplit Split into rows
-dsplit Split along third dimension
-================ ===================
-
-Matrix (2D Array) Manipulations
--------------------------------
-================ ===================
-fliplr 2D array with columns flipped
-flipud 2D array with rows flipped
-rot90 Rotate a 2D array a multiple of 90 degrees
-eye Return a 2D array with ones down a given diagonal
-diag Construct a 2D array from a vector, or return a given
- diagonal from a 2D array.
-mat Construct a Matrix
-bmat Build a Matrix from blocks
-================ ===================
-
-Polynomials
------------
-================ ===================
-poly1d A one-dimensional polynomial class
-poly Return polynomial coefficients from roots
-roots Find roots of polynomial given coefficients
-polyint Integrate polynomial
-polyder Differentiate polynomial
-polyadd Add polynomials
-polysub Subtract polynomials
-polymul Multiply polynomials
-polydiv Divide polynomials
-polyval Evaluate polynomial at given argument
-================ ===================
-
-Import Tricks
--------------
-================ ===================
-ppimport Postpone module import until trying to use it
-ppimport_attr Postpone module import until trying to use its attribute
-ppresolve Import postponed module and return it.
-================ =================== - -Machine Arithmetics -------------------- -================ =================== -machar_single Single precision floating point arithmetic parameters -machar_double Double precision floating point arithmetic parameters -================ =================== - -Threading Tricks ----------------- -================ =================== -ParallelExec Execute commands in parallel thread. -================ =================== - -1D Array Set Operations ------------------------ -Set operations for 1D numeric arrays based on sort() function. - -================ =================== -ediff1d Array difference (auxiliary function). -unique Unique elements of an array. -intersect1d Intersection of 1D arrays with unique elements. -setxor1d Set exclusive-or of 1D arrays with unique elements. -in1d Test whether elements in a 1D array are also present in - another array. -union1d Union of 1D arrays with unique elements. -setdiff1d Set difference of 1D arrays with unique elements. 
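The sort-based 1D set operations in this table remain in NumPy essentially unchanged, except that `in1d` has since been superseded by `np.isin`. A brief sketch:

```python
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([3, 4, 5])

print(np.intersect1d(a, b))  # [3 4]
print(np.union1d(a, b))      # [1 2 3 4 5]
print(np.setdiff1d(a, b))    # [1 2]
print(np.setxor1d(a, b))     # [1 2 5]
print(np.isin(a, b))         # elementwise membership test, successor of in1d
```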
-================ =================== - -""" - -depends = ['core','testing'] -global_symbols = ['*'] diff --git a/pythonPackages/numpy/numpy/lib/npyio.py b/pythonPackages/numpy/numpy/lib/npyio.py deleted file mode 100755 index 3881c88829..0000000000 --- a/pythonPackages/numpy/numpy/lib/npyio.py +++ /dev/null @@ -1,1615 +0,0 @@ -__all__ = ['savetxt', 'loadtxt', 'genfromtxt', 'ndfromtxt', 'mafromtxt', - 'recfromtxt', 'recfromcsv', 'load', 'loads', 'save', 'savez', - 'packbits', 'unpackbits', 'fromregex', 'DataSource'] - -import numpy as np -import format -import sys -import os -import sys -import itertools -import warnings -from operator import itemgetter - -from cPickle import load as _cload, loads -from _datasource import DataSource -from _compiled_base import packbits, unpackbits - -from _iotools import LineSplitter, NameValidator, StringConverter, \ - ConverterError, ConverterLockError, ConversionWarning, \ - _is_string_like, has_nested_fields, flatten_dtype, \ - easy_dtype, _bytes_to_name - -from numpy.compat import asbytes, asstr, asbytes_nested, bytes - -if sys.version_info[0] >= 3: - from io import BytesIO -else: - from cStringIO import StringIO as BytesIO - -_string_like = _is_string_like - -def seek_gzip_factory(f): - """Use this factory to produce the class so that we can do a lazy - import on gzip. 
- - """ - import gzip - - def seek(self, offset, whence=0): - # figure out new position (we can only seek forwards) - if whence == 1: - offset = self.offset + offset - - if whence not in [0, 1]: - raise IOError, "Illegal argument" - - if offset < self.offset: - # for negative seek, rewind and do positive seek - self.rewind() - count = offset - self.offset - for i in range(count // 1024): - self.read(1024) - self.read(count % 1024) - - def tell(self): - return self.offset - - if isinstance(f, str): - f = gzip.GzipFile(f) - - if sys.version_info[0] >= 3: - import types - f.seek = types.MethodType(seek, f) - f.tell = types.MethodType(tell, f) - else: - import new - f.seek = new.instancemethod(seek, f) - f.tell = new.instancemethod(tell, f) - - return f - - - -class BagObj(object): - """ - BagObj(obj) - - Convert attribute look-ups to getitems on the object passed in. - - Parameters - ---------- - obj : class instance - Object on which attribute look-up is performed. - - Examples - -------- - >>> from numpy.lib.npyio import BagObj as BO - >>> class BagDemo(object): - ... def __getitem__(self, key): # An instance of BagObj(BagDemo) - ... # will call this method when any - ... # attribute look-up is required - ... result = "Doesn't matter what you want, " - ... return result + "you're gonna get this" - ... - >>> demo_obj = BagDemo() - >>> bagobj = BO(demo_obj) - >>> bagobj.hello_there - "Doesn't matter what you want, you're gonna get this" - >>> bagobj.I_can_be_anything - "Doesn't matter what you want, you're gonna get this" - - """ - def __init__(self, obj): - self._obj = obj - def __getattribute__(self, key): - try: - return object.__getattribute__(self, '_obj')[key] - except KeyError: - raise AttributeError, key - - - -class NpzFile(object): - """ - NpzFile(fid) - - A dictionary-like object with lazy-loading of files in the zipped - archive provided on construction. - - `NpzFile` is used to load files in the NumPy ``.npz`` data archive - format. 
It assumes that files in the archive have a ".npy" extension, - other files are ignored. - - The arrays and file strings are lazily loaded on either - getitem access using ``obj['key']`` or attribute lookup using - ``obj.f.key``. A list of all files (without ".npy" extensions) can - be obtained with ``obj.files`` and the ZipFile object itself using - ``obj.zip``. - - Attributes - ---------- - files : list of str - List of all files in the archive with a ".npy" extension. - zip : ZipFile instance - The ZipFile object initialized with the zipped archive. - f : BagObj instance - An object on which attribute can be performed as an alternative - to getitem access on the `NpzFile` instance itself. - - Parameters - ---------- - fid : file or str - The zipped archive to open. This is either a file-like object - or a string containing the path to the archive. - - Examples - -------- - >>> from tempfile import TemporaryFile - >>> outfile = TemporaryFile() - >>> x = np.arange(10) - >>> y = np.sin(x) - >>> np.savez(outfile, x=x, y=y) - >>> outfile.seek(0) - - >>> npz = np.load(outfile) - >>> isinstance(npz, np.lib.io.NpzFile) - True - >>> npz.files - ['y', 'x'] - >>> npz['x'] # getitem access - array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - >>> npz.f.x # attribute lookup - array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - - """ - def __init__(self, fid): - # Import is postponed to here since zipfile depends on gzip, an optional - # component of the so-called standard library. - import zipfile - _zip = zipfile.ZipFile(fid) - self._files = _zip.namelist() - self.files = [] - for x in self._files: - if x.endswith('.npy'): - self.files.append(x[:-4]) - else: - self.files.append(x) - self.zip = _zip - self.f = BagObj(self) - - def __getitem__(self, key): - # FIXME: This seems like it will copy strings around - # more than is strictly necessary. The zipfile - # will read the string and then - # the format.read_array will copy the string - # to another place in memory. 
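The `NpzFile` behaviour documented above (lazy `getitem` access, `.files` listing, attribute-style lookup through the `BagObj` in `.f`) can still be exercised end-to-end against an in-memory file with the modern Python 3 API; a minimal round-trip sketch:

```python
import io
import numpy as np

buf = io.BytesIO()
x = np.arange(10)
y = np.sin(x)
np.savez(buf, x=x, y=y)    # write a .npz archive into the buffer
buf.seek(0)

npz = np.load(buf)         # returns an NpzFile
print(sorted(npz.files))   # ['x', 'y'] -- member names without the .npy suffix
print(npz['x'])            # getitem access
print(npz.f.x)             # attribute-style access via the BagObj
```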
- # It would be better if the zipfile could read - # (or at least uncompress) the data - # directly into the array memory. - member = 0 - if key in self._files: - member = 1 - elif key in self.files: - member = 1 - key += '.npy' - if member: - bytes = self.zip.read(key) - if bytes.startswith(format.MAGIC_PREFIX): - value = BytesIO(bytes) - return format.read_array(value) - else: - return bytes - else: - raise KeyError, "%s is not a file in the archive" % key - - - def __iter__(self): - return iter(self.files) - - def items(self): - """ - Return a list of tuples, with each tuple (filename, array in file). - - """ - return [(f, self[f]) for f in self.files] - - def iteritems(self): - """Generator that returns tuples (filename, array in file).""" - for f in self.files: - yield (f, self[f]) - - def keys(self): - """Return files in the archive with a ".npy" extension.""" - return self.files - - def iterkeys(self): - """Return an iterator over the files in the archive.""" - return self.__iter__() - - def __contains__(self, key): - return self.files.__contains__(key) - - -def load(file, mmap_mode=None): - """ - Load a pickled, ``.npy``, or ``.npz`` binary file. - - Parameters - ---------- - file : file-like object or string - The file to read. It must support ``seek()`` and ``read()`` methods. - If the filename extension is ``.gz``, the file is first decompressed. - mmap_mode: {None, 'r+', 'r', 'w+', 'c'}, optional - If not None, then memory-map the file, using the given mode - (see `numpy.memmap`). The mode has no effect for pickled or - zipped files. - A memory-mapped array is stored on disk, and not directly loaded - into memory. However, it can be accessed and sliced like any - ndarray. Memory mapping is especially useful for accessing - small fragments of large files without reading the entire file - into memory. - - Returns - ------- - result : array, tuple, dict, etc. - Data stored in the file. 
- - Raises - ------ - IOError - If the input file does not exist or cannot be read. - - See Also - -------- - save, savez, loadtxt - memmap : Create a memory-map to an array stored in a file on disk. - - Notes - ----- - - If the file contains pickle data, then whatever is stored in the - pickle is returned. - - If the file is a ``.npy`` file, then an array is returned. - - If the file is a ``.npz`` file, then a dictionary-like object is - returned, containing ``{filename: array}`` key-value pairs, one for - each file in the archive. - - Examples - -------- - Store data to disk, and load it again: - - >>> np.save('/tmp/123', np.array([[1, 2, 3], [4, 5, 6]])) - >>> np.load('/tmp/123.npy') - array([[1, 2, 3], - [4, 5, 6]]) - - Mem-map the stored array, and then access the second row - directly from disk: - - >>> X = np.load('/tmp/123.npy', mmap_mode='r') - >>> X[1, :] - memmap([4, 5, 6]) - - """ - import gzip - - if isinstance(file, basestring): - fid = open(file, "rb") - elif isinstance(file, gzip.GzipFile): - fid = seek_gzip_factory(file) - else: - fid = file - - # Code to distinguish from NumPy binary files and pickles. - _ZIP_PREFIX = asbytes('PK\x03\x04') - N = len(format.MAGIC_PREFIX) - magic = fid.read(N) - fid.seek(-N, 1) # back-up - if magic.startswith(_ZIP_PREFIX): # zip-file (assume .npz) - return NpzFile(fid) - elif magic == format.MAGIC_PREFIX: # .npy file - if mmap_mode: - return format.open_memmap(file, mode=mmap_mode) - else: - return format.read_array(fid) - else: # Try a pickle - try: - return _cload(fid) - except: - raise IOError, \ - "Failed to interpret file %s as a pickle" % repr(file) - -def save(file, arr): - """ - Save an array to a binary file in NumPy ``.npy`` format. - - Parameters - ---------- - file : file or str - File or filename to which the data is saved. If file is a file-object, - then the filename is unchanged. If file is a string, a ``.npy`` - extension will be appended to the file name if it does not already - have one. 
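`load` distinguishes the on-disk formats by peeking at a magic prefix, as the implementation above shows: `PK\x03\x04` marks a zip (`.npz`) archive and `format.MAGIC_PREFIX` marks a `.npy` file, with anything else falling through to pickle. Both prefixes are stable parts of the respective formats and can be observed directly:

```python
import io
import numpy as np

npy = io.BytesIO()
np.save(npy, np.arange(3))
print(npy.getvalue()[:6])   # the .npy magic string

npz = io.BytesIO()
np.savez(npz, a=np.arange(3))
print(npz.getvalue()[:4])   # zip local-file-header signature
```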
- arr : array_like - Array data to be saved. - - See Also - -------- - savez : Save several arrays into a ``.npz`` compressed archive - savetxt, load - - Notes - ----- - For a description of the ``.npy`` format, see `format`. - - Examples - -------- - >>> from tempfile import TemporaryFile - >>> outfile = TemporaryFile() - - >>> x = np.arange(10) - >>> np.save(outfile, x) - - >>> outfile.seek(0) # Only needed here to simulate closing & reopening file - >>> np.load(outfile) - array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - - """ - if isinstance(file, basestring): - if not file.endswith('.npy'): - file = file + '.npy' - fid = open(file, "wb") - else: - fid = file - - arr = np.asanyarray(arr) - format.write_array(fid, arr) - -def savez(file, *args, **kwds): - """ - Save several arrays into a single, archive file in ``.npz`` format. - - If arguments are passed in with no keywords, the corresponding variable - names, in the .npz file, are 'arr_0', 'arr_1', etc. If keyword arguments - are given, the corresponding variable names, in the ``.npz`` file will - match the keyword names. - - Parameters - ---------- - file : str or file - Either the file name (string) or an open file (file-like object) - where the data will be saved. If file is a string, the ``.npz`` - extension will be appended to the file name if it is not already there. - \\*args : Arguments, optional - Arrays to save to the file. Since it is not possible for Python to - know the names of the arrays outside `savez`, the arrays will be saved - with names "arr_0", "arr_1", and so on. These arguments can be any - expression. - \\*\\*kwds : Keyword arguments, optional - Arrays to save to the file. Arrays will be saved in the file with the - keyword names. - - Returns - ------- - None - - See Also - -------- - save : Save a single array to a binary file in NumPy format. - savetxt : Save an array to a file as plain text. 
- - Notes - ----- - The ``.npz`` file format is a zipped archive of files named after the - variables they contain. The archive is not compressed and each file - in the archive contains one variable in ``.npy`` format. For a - description of the ``.npy`` format, see `format`. - - When opening the saved ``.npz`` file with `load` a `NpzFile` object is - returned. This is a dictionary-like object which can be queried for - its list of arrays (with the ``.files`` attribute), and for the arrays - themselves. - - Examples - -------- - >>> from tempfile import TemporaryFile - >>> outfile = TemporaryFile() - >>> x = np.arange(10) - >>> y = np.sin(x) - - Using `savez` with \\*args, the arrays are saved with default names. - - >>> np.savez(outfile, x, y) - >>> outfile.seek(0) # Only needed here to simulate closing & reopening file - >>> npzfile = np.load(outfile) - >>> npzfile.files - ['arr_1', 'arr_0'] - >>> npzfile['arr_0'] - array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - - Using `savez` with \\*\\*kwds, the arrays are saved with the keyword names. - - >>> outfile = TemporaryFile() - >>> np.savez(outfile, x=x, y=y) - >>> outfile.seek(0) - >>> npzfile = np.load(outfile) - >>> npzfile.files - ['y', 'x'] - >>> npzfile['x'] - array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - - """ - - # Import is postponed to here since zipfile depends on gzip, an optional - # component of the so-called standard library. - import zipfile - # Import deferred for startup time improvement - import tempfile - - if isinstance(file, basestring): - if not file.endswith('.npz'): - file = file + '.npz' - - namedict = kwds - for i, val in enumerate(args): - key = 'arr_%d' % i - if key in namedict.keys(): - raise ValueError, "Cannot use un-named variables and keyword %s" % key - namedict[key] = val - - zip = zipfile.ZipFile(file, mode="w") - - # Stage arrays in a temporary file on disk, before writing to zip. 
- fd, tmpfile = tempfile.mkstemp(suffix='-numpy.npy') - os.close(fd) - try: - for key, val in namedict.iteritems(): - fname = key + '.npy' - fid = open(tmpfile, 'wb') - try: - format.write_array(fid, np.asanyarray(val)) - fid.close() - fid = None - zip.write(tmpfile, arcname=fname) - finally: - if fid: - fid.close() - finally: - os.remove(tmpfile) - - zip.close() - -# Adapted from matplotlib - -def _getconv(dtype): - typ = dtype.type - if issubclass(typ, np.bool_): - return lambda x: bool(int(x)) - if issubclass(typ, np.integer): - return lambda x: int(float(x)) - elif issubclass(typ, np.floating): - return float - elif issubclass(typ, np.complex): - return complex - elif issubclass(typ, np.bytes_): - return bytes - else: - return str - - - -def loadtxt(fname, dtype=float, comments='#', delimiter=None, - converters=None, skiprows=0, usecols=None, unpack=False): - """ - Load data from a text file. - - Each row in the text file must have the same number of values. - - Parameters - ---------- - fname : file or str - File or filename to read. If the filename extension is ``.gz`` or - ``.bz2``, the file is first decompressed. - dtype : data-type, optional - Data-type of the resulting array; default: float. If this is a record - data-type, the resulting array will be 1-dimensional, and each row - will be interpreted as an element of the array. In this case, the - number of columns used must match the number of fields in the - data-type. - comments : str, optional - The character used to indicate the start of a comment; default: '#'. - delimiter : str, optional - The string used to separate values. By default, this is any - whitespace. - converters : dict, optional - A dictionary mapping column number to a function that will convert - that column to a float. E.g., if column 0 is a date string: - ``converters = {0: datestr2num}``. Converters can also be used to - provide a default value for missing data: - ``converters = {3: lambda s: float(s or 0)}``. Default: None. 
- skiprows : int, optional - Skip the first `skiprows` lines; default: 0. - usecols : sequence, optional - Which columns to read, with 0 being the first. For example, - ``usecols = (1,4,5)`` will extract the 2nd, 5th and 6th columns. - The default, None, results in all columns being read. - unpack : bool, optional - If True, the returned array is transposed, so that arguments may be - unpacked using ``x, y, z = loadtxt(...)``. The default is False. - - Returns - ------- - out : ndarray - Data read from the text file. - - See Also - -------- - load, fromstring, fromregex - genfromtxt : Load data with missing values handled as specified. - scipy.io.loadmat : reads MATLAB data files - - Notes - ----- - This function aims to be a fast reader for simply formatted files. The - `genfromtxt` function provides more sophisticated handling of, e.g., - lines with missing values. - - Examples - -------- - >>> from StringIO import StringIO # StringIO behaves like a file object - >>> c = StringIO("0 1\\n2 3") - >>> np.loadtxt(c) - array([[ 0., 1.], - [ 2., 3.]]) - - >>> d = StringIO("M 21 72\\nF 35 58") - >>> np.loadtxt(d, dtype={'names': ('gender', 'age', 'weight'), - ... 
'formats': ('S1', 'i4', 'f4')}) - array([('M', 21, 72.0), ('F', 35, 58.0)], - dtype=[('gender', '|S1'), ('age', '>> c = StringIO("1,0,2\\n3,0,4") - >>> x, y = np.loadtxt(c, delimiter=',', usecols=(0, 2), unpack=True) - >>> x - array([ 1., 3.]) - >>> y - array([ 2., 4.]) - - """ - # Type conversions for Py3 convenience - comments = asbytes(comments) - if delimiter is not None: - delimiter = asbytes(delimiter) - - user_converters = converters - - if usecols is not None: - usecols = list(usecols) - - isstring = False - if _is_string_like(fname): - isstring = True - if fname.endswith('.gz'): - import gzip - fh = seek_gzip_factory(fname) - elif fname.endswith('.bz2'): - import bz2 - fh = bz2.BZ2File(fname) - else: - fh = open(fname, 'U') - elif hasattr(fname, 'readline'): - fh = fname - else: - raise ValueError('fname must be a string or file handle') - X = [] - - def flatten_dtype(dt): - """Unpack a structured data-type.""" - if dt.names is None: - # If the dtype is flattened, return. - # If the dtype has a shape, the dtype occurs - # in the list more than once. - return [dt.base] * int(np.prod(dt.shape)) - else: - types = [] - for field in dt.names: - tp, bytes = dt.fields[field] - flat_dt = flatten_dtype(tp) - types.extend(flat_dt) - return types - - def split_line(line): - """Chop off comments, strip, and split at delimiter.""" - line = asbytes(line).split(comments)[0].strip() - if line: - return line.split(delimiter) - else: - return [] - - try: - # Make sure we're dealing with a proper dtype - dtype = np.dtype(dtype) - defconv = _getconv(dtype) - - # Skip the first `skiprows` lines - for i in xrange(skiprows): - fh.readline() - - # Read until we find a line with some values, and use - # it to estimate the number of columns, N. 
- first_vals = None - while not first_vals: - first_line = fh.readline() - if not first_line: # EOF reached - raise IOError('End-of-file reached before encountering data.') - first_vals = split_line(first_line) - N = len(usecols or first_vals) - - dtype_types = flatten_dtype(dtype) - if len(dtype_types) > 1: - # We're dealing with a structured array, each field of - # the dtype matches a column - converters = [_getconv(dt) for dt in dtype_types] - else: - # All fields have the same dtype - converters = [defconv for i in xrange(N)] - - # By preference, use the converters specified by the user - for i, conv in (user_converters or {}).iteritems(): - if usecols: - try: - i = usecols.index(i) - except ValueError: - # Unused converter specified - continue - converters[i] = conv - - # Parse each line, including the first - for i, line in enumerate(itertools.chain([first_line], fh)): - vals = split_line(line) - if len(vals) == 0: - continue - - if usecols: - vals = [vals[i] for i in usecols] - - # Convert each value according to its column and store - X.append(tuple([conv(val) for (conv, val) in zip(converters, vals)])) - finally: - if isstring: - fh.close() - - if len(dtype_types) > 1: - # We're dealing with a structured array, with a dtype such as - # [('x', int), ('y', [('s', int), ('t', float)])] - # - # First, create the array using a flattened dtype: - # [('x', int), ('s', int), ('t', float)] - # - # Then, view the array using the specified dtype. - try: - X = np.array(X, dtype=np.dtype([('', t) for t in dtype_types])) - X = X.view(dtype) - except TypeError: - # In the case we have an object dtype - X = np.array(X, dtype=dtype) - else: - X = np.array(X, dtype) - - X = np.squeeze(X) - if unpack: - return X.T - else: - return X - - -def savetxt(fname, X, fmt='%.18e', delimiter=' ', newline='\n'): - """ - Save an array to a text file. 
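The `loadtxt` parsing path above (column estimation, per-column converters, `usecols` remapping, structured-dtype view) still behaves as its docstring describes. A sketch under Python 3, where the docstring's `'S1'` format would yield bytes, so `'U1'` is used instead:

```python
import io
import numpy as np

# usecols + unpack, as in the docstring examples
c = io.StringIO("1,0,2\n3,0,4")
x, y = np.loadtxt(c, delimiter=',', usecols=(0, 2), unpack=True)
print(x)   # [1. 3.]
print(y)   # [2. 4.]

# a structured dtype makes each row one record
d = io.StringIO("M 21 72\nF 35 58")
rec = np.loadtxt(d, dtype={'names': ('gender', 'age', 'weight'),
                           'formats': ('U1', 'i4', 'f4')})
print(rec['age'])   # [21 35]
```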
- - Parameters - ---------- - fname : filename or file handle - If the filename ends in ``.gz``, the file is automatically saved in - compressed gzip format. `loadtxt` understands gzipped files - transparently. - X : array_like - Data to be saved to a text file. - fmt : str or sequence of strs - A single format (%10.5f), a sequence of formats, or a - multi-format string, e.g. 'Iteration %d -- %10.5f', in which - case `delimiter` is ignored. - delimiter : str - Character separating columns. - newline : str - .. versionadded:: 2.0 - - Character separating lines. - - - See Also - -------- - save : Save an array to a binary file in NumPy ``.npy`` format - savez : Save several arrays into a ``.npz`` compressed archive - - Notes - ----- - Further explanation of the `fmt` parameter - (``%[flag]width[.precision]specifier``): - - flags: - ``-`` : left justify - - ``+`` : Forces to preceed result with + or -. - - ``0`` : Left pad the number with zeros instead of space (see width). - - width: - Minimum number of characters to be printed. The value is not truncated - if it has more characters. - - precision: - - For integer specifiers (eg. ``d,i,o,x``), the minimum number of - digits. - - For ``e, E`` and ``f`` specifiers, the number of digits to print - after the decimal point. - - For ``g`` and ``G``, the maximum number of significant digits. - - For ``s``, the maximum number of characters. - - specifiers: - ``c`` : character - - ``d`` or ``i`` : signed decimal integer - - ``e`` or ``E`` : scientific notation with ``e`` or ``E``. - - ``f`` : decimal floating point - - ``g,G`` : use the shorter of ``e,E`` or ``f`` - - ``o`` : signed octal - - ``s`` : string of characters - - ``u`` : unsigned decimal integer - - ``x,X`` : unsigned hexadecimal integer - - This explanation of ``fmt`` is not complete, for an exhaustive - specification see [1]_. - - References - ---------- - .. [1] `Format Specification Mini-Language - `_, Python Documentation. 
- - Examples - -------- - >>> x = y = z = np.arange(0.0,5.0,1.0) - >>> np.savetxt('test.out', x, delimiter=',') # X is an array - >>> np.savetxt('test.out', (x,y,z)) # x,y,z equal sized 1D arrays - >>> np.savetxt('test.out', x, fmt='%1.4e') # use exponential notation - - """ - - # Py3 conversions first - if isinstance(fmt, bytes): - fmt = asstr(fmt) - delimiter = asstr(delimiter) - - if _is_string_like(fname): - if fname.endswith('.gz'): - import gzip - fh = gzip.open(fname, 'wb') - else: - if sys.version_info[0] >= 3: - fh = open(fname, 'wb') - else: - fh = open(fname, 'w') - elif hasattr(fname, 'seek'): - fh = fname - else: - raise ValueError('fname must be a string or file handle') - - X = np.asarray(X) - - # Handle 1-dimensional arrays - if X.ndim == 1: - # Common case -- 1d array of numbers - if X.dtype.names is None: - X = np.atleast_2d(X).T - ncol = 1 - - # Complex dtype -- each field indicates a separate column - else: - ncol = len(X.dtype.descr) - else: - ncol = X.shape[1] - - # `fmt` can be a string with multiple insertion points or a list of formats. - # E.g. '%10.5f\t%10d' or ('%10.5f', '$10d') - if type(fmt) in (list, tuple): - if len(fmt) != ncol: - raise AttributeError('fmt has wrong shape. %s' % str(fmt)) - format = asstr(delimiter).join(map(asstr, fmt)) - elif type(fmt) is str: - if fmt.count('%') == 1: - fmt = [fmt, ]*ncol - format = delimiter.join(fmt) - elif fmt.count('%') != ncol: - raise AttributeError('fmt has wrong number of %% formats. %s' - % fmt) - else: - format = fmt - - for row in X: - fh.write(asbytes(format % tuple(row) + newline)) - -import re -def fromregex(file, regexp, dtype): - """ - Construct an array from a text file, using regular expression parsing. - - The returned array is always a structured array, and is constructed from - all matches of the regular expression in the file. Groups in the regular - expression are converted to fields of the structured array. 
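The `fmt` handling in `savetxt` above replicates a single one-`%` format across all columns and joins with `delimiter`; a multi-`%` string is used verbatim. A small sketch against an in-memory text buffer:

```python
import io
import numpy as np

x = np.arange(0.0, 3.0)
buf = io.StringIO()
# one '%' in fmt -> replicated per column and joined with the delimiter
np.savetxt(buf, np.column_stack((x, x**2)), fmt='%.1f', delimiter=',')
print(buf.getvalue())   # 0.0,0.0 / 1.0,1.0 / 2.0,4.0 on separate lines
```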
- - Parameters - ---------- - file : str or file - File name or file object to read. - regexp : str or regexp - Regular expression used to parse the file. - Groups in the regular expression correspond to fields in the dtype. - dtype : dtype or list of dtypes - Dtype for the structured array. - - Returns - ------- - output : ndarray - The output array, containing the part of the content of `file` that - was matched by `regexp`. `output` is always a structured array. - - Raises - ------ - TypeError - When `dtype` is not a valid dtype for a structured array. - - See Also - -------- - fromstring, loadtxt - - Notes - ----- - Dtypes for structured arrays can be specified in several forms, but all - forms specify at least the data type and field name. For details see - `doc.structured_arrays`. - - Examples - -------- - >>> f = open('test.dat', 'w') - >>> f.write("1312 foo\\n1534 bar\\n444 qux") - >>> f.close() - - >>> regexp = r"(\\d+)\\s+(...)" # match [digits, whitespace, anything] - >>> output = np.fromregex('test.dat', regexp, - ... [('num', np.int64), ('key', 'S3')]) - >>> output - array([(1312L, 'foo'), (1534L, 'bar'), (444L, 'qux')], - dtype=[('num', '>> output['num'] - array([1312, 1534, 444], dtype=int64) - - """ - if not hasattr(file, "read"): - file = open(file, 'rb') - if not hasattr(regexp, 'match'): - regexp = re.compile(asbytes(regexp)) - if not isinstance(dtype, np.dtype): - dtype = np.dtype(dtype) - - seq = regexp.findall(file.read()) - if seq and not isinstance(seq[0], tuple): - # Only one group is in the regexp. - # Create the new array as a single data-type and then - # re-interpret as a single-field structured array. 
- newdtype = np.dtype(dtype[dtype.names[0]]) - output = np.array(seq, dtype=newdtype) - output.dtype = dtype - else: - output = np.array(seq, dtype=dtype) - - return output - - - - -#####-------------------------------------------------------------------------- -#---- --- ASCII functions --- -#####-------------------------------------------------------------------------- - - - -def genfromtxt(fname, dtype=float, comments='#', delimiter=None, - skiprows=0, skip_header=0, skip_footer=0, converters=None, - missing='', missing_values=None, filling_values=None, - usecols=None, names=None, - excludelist=None, deletechars=None, replace_space='_', - autostrip=False, case_sensitive=True, defaultfmt="f%i", - unpack=None, usemask=False, loose=True, invalid_raise=True): - """ - Load data from a text file, with missing values handled as specified. - - Each line past the first `skiprows` lines is split at the `delimiter` - character, and characters following the `comments` character are discarded. - - Parameters - ---------- - fname : file or str - File or filename to read. If the filename extension is `.gz` or - `.bz2`, the file is first decompressed. - dtype : dtype, optional - Data type of the resulting array. - If None, the dtypes will be determined by the contents of each - column, individually. - comments : str, optional - The character used to indicate the start of a comment. - All the characters occurring on a line after a comment are discarded - delimiter : str, int, or sequence, optional - The string used to separate values. By default, any consecutive - whitespaces act as delimiter. An integer or sequence of integers - can also be provided as width(s) of each field. - skip_header : int, optional - The numbers of lines to skip at the beginning of the file. - skip_footer : int, optional - The numbers of lines to skip at the end of the file - converters : variable or None, optional - The set of functions that convert the data of a column to a value. 
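The `fromregex` example from the docstring above can be reproduced without touching the filesystem by passing any object with a `read` method; groups in the pattern map one-to-one onto the fields of the structured dtype:

```python
import io
import numpy as np

data = io.BytesIO(b"1312 foo\n1534 bar\n444 qux")
regexp = rb"(\d+)\s+(...)"   # match [digits, whitespace, any three chars]
output = np.fromregex(data, regexp, [('num', np.int64), ('key', 'S3')])
print(output['num'])    # [1312 1534  444]
print(output['key'])
```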
- The converters can also be used to provide a default value - for missing data: ``converters = {3: lambda s: float(s or 0)}``. - missing_values : variable or None, optional - The set of strings corresponding to missing data. - filling_values : variable or None, optional - The set of values to be used as default when the data are missing. - usecols : sequence or None, optional - Which columns to read, with 0 being the first. For example, - ``usecols = (1, 4, 5)`` will extract the 2nd, 5th and 6th columns. - names : {None, True, str, sequence}, optional - If `names` is True, the field names are read from the first valid line - after the first `skiprows` lines. - If `names` is a sequence or a single-string of comma-separated names, - the names will be used to define the field names in a structured dtype. - If `names` is None, the names of the dtype fields will be used, if any. - excludelist : sequence, optional - A list of names to exclude. This list is appended to the default list - ['return','file','print']. Excluded names are appended an underscore: - for example, `file` would become `file_`. - deletechars : str, optional - A string combining invalid characters that must be deleted from the - names. - defaultfmt : str, optional - A format used to define default field names, such as "f%i" or "f_%02i". - autostrip : bool, optional - Whether to automatically strip white spaces from the variables. - replace_space : char, optional - Character(s) used in replacement of white spaces in the variables names. - By default, use a '_'. - case_sensitive : {True, False, 'upper', 'lower'}, optional - If True, field names are case sensitive. - If False or 'upper', field names are converted to upper case. - If 'lower', field names are converted to lower case. - unpack : bool, optional - If True, the returned array is transposed, so that arguments may be - unpacked using ``x, y, z = loadtxt(...)`` - usemask : bool, optional - If True, return a masked array. 
- If False, return a regular array. - invalid_raise : bool, optional - If True, an exception is raised if an inconsistency is detected in the - number of columns. - If False, a warning is emitted and the offending lines are skipped. - - Returns - ------- - out : ndarray - Data read from the text file. If `usemask` is True, this is a - masked array. - - See Also - -------- - numpy.loadtxt : equivalent function when no data is missing. - - Notes - ----- - * When spaces are used as delimiters, or when no delimiter has been given - as input, there should not be any missing data between two fields. - * When the variables are named (either by a flexible dtype or with `names`, - there must not be any header in the file (else a ValueError - exception is raised). - * Individual values are not stripped of spaces by default. - When using a custom converter, make sure the function does remove spaces. - - Examples - --------- - >>> from StringIO import StringIO - >>> import numpy as np - - Comma delimited file with mixed dtype - - >>> s = StringIO("1,1.3,abcde") - >>> data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'), - ... ('mystring','S5')], delimiter=",") - >>> data - array((1, 1.3, 'abcde'), - dtype=[('myint', '>> s.seek(0) # needed for StringIO example only - >>> data = np.genfromtxt(s, dtype=None, - ... names = ['myint','myfloat','mystring'], delimiter=",") - >>> data - array((1, 1.3, 'abcde'), - dtype=[('myint', '>> s.seek(0) - >>> data = np.genfromtxt(s, dtype="i8,f8,S5", - ... names=['myint','myfloat','mystring'], delimiter=",") - >>> data - array((1, 1.3, 'abcde'), - dtype=[('myint', '>> s = StringIO("11.3abcde") - >>> data = np.genfromtxt(s, dtype=None, names=['intvar','fltvar','strvar'], - ... 
delimiter=[1,3,5]) - >>> data - array((1, 1.3, 'abcde'), - dtype=[('intvar', '<i8'), ('fltvar', '<f8'), ('strvar', '|S5')]) - - """ - # If the dtype is not None, make sure we update it - if (dtype is not None) and (len(dtype) > nbcols): - descr = dtype.descr - dtype = np.dtype([descr[_] for _ in usecols]) - names = list(dtype.names) - # If `names` is not None, update the names - elif (names is not None) and (len(names) > nbcols): - names = [names[_] for _ in usecols] - - - # Process the missing values ............................... - # Rename missing_values for convenience - user_missing_values = missing_values or () - - # Define the list of missing_values (one column: one list) - missing_values = [list([asbytes('')]) for _ in range(nbcols)] - - # We have a dictionary: process it field by field - if isinstance(user_missing_values, dict): - # Loop on the items - for (key, val) in user_missing_values.items(): - # Is the key a string ? - if _is_string_like(key): - try: - # Transform it into an integer - key = names.index(key) - except ValueError: - # We couldn't find it: the name must have been dropped - continue - # Redefine the key as needed if it's a column number - if usecols: - try: - key = usecols.index(key) - except ValueError: - pass - # Transform the value into a list of strings - if isinstance(val, (list, tuple)): - val = [str(_) for _ in val] - else: - val = [str(val), ] - # Add the value(s) to the current list of missing - if key is None: - # None acts as default - for miss in missing_values: - miss.extend(val) - else: - missing_values[key].extend(val) - # We have a sequence : each item matches a column - elif isinstance(user_missing_values, (list, tuple)): - for (value, entry) in zip(user_missing_values, missing_values): - value = str(value) - if value not in entry: - entry.append(value) - # We have a string : apply it to all entries - elif isinstance(user_missing_values, bytes): - user_value = user_missing_values.split(asbytes(",")) - for entry in missing_values: - entry.extend(user_value) - # We have something else: apply it to all entries - else: - for entry in missing_values: -
entry.extend([str(user_missing_values)]) - - # Process the deprecated `missing` - if missing != asbytes(''): - warnings.warn("The use of `missing` is deprecated.\n"\ - "Please use `missing_values` instead.", - DeprecationWarning) - values = [str(_) for _ in missing.split(asbytes(","))] - for entry in missing_values: - entry.extend(values) - - # Process the filling_values ............................... - # Rename the input for convenience - user_filling_values = filling_values or [] - # Define the default - filling_values = [None] * nbcols - # We have a dictionary : update each entry individually - if isinstance(user_filling_values, dict): - for (key, val) in user_filling_values.items(): - if _is_string_like(key): - try: - # Transform it into an integer - key = names.index(key) - except ValueError: - # We couldn't find it: the name must have been dropped, then - continue - # Redefine the key if it's a column number and usecols is defined - if usecols: - try: - key = usecols.index(key) - except ValueError: - pass - # Add the value to the list - filling_values[key] = val - # We have a sequence : update on a one-to-one basis - elif isinstance(user_filling_values, (list, tuple)): - n = len(user_filling_values) - if (n <= nbcols): - filling_values[:n] = user_filling_values - else: - filling_values = user_filling_values[:nbcols] - # We have something else : use it for all entries - else: - filling_values = [user_filling_values] * nbcols - - # Initialize the converters ................................ - if dtype is None: - # Note: we can't use a [...]*nbcols, as we would have 3 times the same - # ... converter, instead of 3 different converters. 
- converters = [StringConverter(None, missing_values=miss, default=fill) - for (miss, fill) in zip(missing_values, filling_values)] - else: - dtype_flat = flatten_dtype(dtype, flatten_base=True) - # Initialize the converters - if len(dtype_flat) > 1: - # Flexible type : get a converter from each dtype - zipit = zip(dtype_flat, missing_values, filling_values) - converters = [StringConverter(dt, locked=True, - missing_values=miss, default=fill) - for (dt, miss, fill) in zipit] - else: - # Set to a default converter (but w/ different missing values) - zipit = zip(missing_values, filling_values) - converters = [StringConverter(dtype, locked=True, - missing_values=miss, default=fill) - for (miss, fill) in zipit] - # Update the converters to use the user-defined ones - uc_update = [] - for (i, conv) in user_converters.items(): - # If the converter is specified by column names, use the index instead - if _is_string_like(i): - try: - i = names.index(i) - except ValueError: - continue - elif usecols: - try: - i = usecols.index(i) - except ValueError: - # Unused converter specified - continue - converters[i].update(conv, locked=True, - default=filling_values[i], - missing_values=missing_values[i],) - uc_update.append((i, conv)) - # Make sure we have the corrected keys in user_converters... - user_converters.update(uc_update) - - miss_chars = [_.missing_values for _ in converters] - - - # Initialize the output lists ... - # ... rows - rows = [] - append_to_rows = rows.append - # ... masks - if usemask: - masks = [] - append_to_masks = masks.append - # ... 
invalid - invalid = [] - append_to_invalid = invalid.append - - # Parse each line - for (i, line) in enumerate(itertools.chain([first_line, ], fhd)): - values = split_line(line) - nbvalues = len(values) - # Skip an empty line - if nbvalues == 0: - continue - # Select only the columns we need - if usecols: - try: - values = [values[_] for _ in usecols] - except IndexError: - append_to_invalid((i, nbvalues)) - continue - elif nbvalues != nbcols: - append_to_invalid((i, nbvalues)) - continue - # Store the values - append_to_rows(tuple(values)) - if usemask: - append_to_masks(tuple([v.strip() in m - for (v, m) in zip(values, missing_values)])) - - # Strip the last skip_footer data - if skip_footer > 0: - rows = rows[:-skip_footer] - if usemask: - masks = masks[:-skip_footer] - - # Upgrade the converters (if needed) - if dtype is None: - for (i, converter) in enumerate(converters): - current_column = map(itemgetter(i), rows) - try: - converter.iterupgrade(current_column) - except ConverterLockError: - errmsg = "Converter #%i is locked and cannot be upgraded: " % i - current_column = itertools.imap(itemgetter(i), rows) - for (j, value) in enumerate(current_column): - try: - converter.upgrade(value) - except (ConverterError, ValueError): - errmsg += "(occurred line #%i for value '%s')" - errmsg %= (j + 1 + skip_header, value) - raise ConverterError(errmsg) - - # Check that we don't have invalid values - if len(invalid) > 0: - nbrows = len(rows) - # Construct the error message - template = " Line #%%i (got %%i columns instead of %i)" % nbcols - if skip_footer > 0: - nbrows -= skip_footer - errmsg = [template % (i + skip_header + 1, nb) - for (i, nb) in invalid if i < nbrows] - else: - errmsg = [template % (i + skip_header + 1, nb) - for (i, nb) in invalid] - if len(errmsg): - errmsg.insert(0, "Some errors were detected !") - errmsg = "\n".join(errmsg) - # Raise an exception ? - if invalid_raise: - raise ValueError(errmsg) - # Issue a warning ? 
- else: - warnings.warn(errmsg, ConversionWarning) - - # Convert each value according to the converter: - # We want to modify the list in place to avoid creating a new one... -# if loose: -# conversionfuncs = [conv._loose_call for conv in converters] -# else: -# conversionfuncs = [conv._strict_call for conv in converters] -# for (i, vals) in enumerate(rows): -# rows[i] = tuple([convert(val) -# for (convert, val) in zip(conversionfuncs, vals)]) - if loose: - rows = zip(*[map(converter._loose_call, map(itemgetter(i), rows)) - for (i, converter) in enumerate(converters)]) - else: - rows = zip(*[map(converter._strict_call, map(itemgetter(i), rows)) - for (i, converter) in enumerate(converters)]) - # Reset the dtype - data = rows - if dtype is None: - # Get the dtypes from the types of the converters - column_types = [conv.type for conv in converters] - # Find the columns with strings... - strcolidx = [i for (i, v) in enumerate(column_types) - if v in (type('S'), np.string_)] - # ... and take the largest number of chars. - for i in strcolidx: - column_types[i] = "|S%i" % max(len(row[i]) for row in data) - # - if names is None: - # If the dtype is uniform, don't define names, else use '' - base = set([c.type for c in converters if c._checked]) - if len(base) == 1: - (ddtype, mdtype) = (list(base)[0], np.bool) - else: - ddtype = [(defaultfmt % i, dt) - for (i, dt) in enumerate(column_types)] - if usemask: - mdtype = [(defaultfmt % i, np.bool) - for (i, dt) in enumerate(column_types)] - else: - ddtype = zip(names, column_types) - mdtype = zip(names, [np.bool] * len(column_types)) - output = np.array(data, dtype=ddtype) - if usemask: - outputmask = np.array(masks, dtype=mdtype) - else: - # Overwrite the initial dtype names if needed - if names and dtype.names: - dtype.names = names - # Case 1. 
We have a structured type - if len(dtype_flat) > 1: - # Nested dtype, eg [('a', int), ('b', [('b0', int), ('b1', 'f4')])] - # First, create the array using a flattened dtype: - # [('a', int), ('b1', int), ('b2', float)] - # Then, view the array using the specified dtype. - if 'O' in (_.char for _ in dtype_flat): - if has_nested_fields(dtype): - errmsg = "Nested fields involving objects "\ - "are not supported..." - raise NotImplementedError(errmsg) - else: - output = np.array(data, dtype=dtype) - else: - rows = np.array(data, dtype=[('', _) for _ in dtype_flat]) - output = rows.view(dtype) - # Now, process the rowmasks the same way - if usemask: - rowmasks = np.array(masks, - dtype=np.dtype([('', np.bool) - for t in dtype_flat])) - # Construct the new dtype - mdtype = make_mask_descr(dtype) - outputmask = rowmasks.view(mdtype) - # Case #2. We have a basic dtype - else: - # We used some user-defined converters - if user_converters: - ishomogeneous = True - descr = [] - for (i, ttype) in enumerate([conv.type for conv in converters]): - # Keep the dtype of the current converter - if i in user_converters: - ishomogeneous &= (ttype == dtype.type) - if ttype == np.string_: - ttype = "|S%i" % max(len(row[i]) for row in data) - descr.append(('', ttype)) - else: - descr.append(('', dtype)) - # So we changed the dtype ? - if not ishomogeneous: - # We have more than one field - if len(descr) > 1: - dtype = np.dtype(descr) - # We have only one field: drop the name if not needed. 
- else: - dtype = np.dtype(ttype) - # - output = np.array(data, dtype) - if usemask: - if dtype.names: - mdtype = [(_, np.bool) for _ in dtype.names] - else: - mdtype = np.bool - outputmask = np.array(masks, dtype=mdtype) - # Try to take care of the missing data we missed - names = output.dtype.names - if usemask and names: - for (name, conv) in zip(names or (), converters): - missing_values = [conv(_) for _ in conv.missing_values - if _ != asbytes('')] - for mval in missing_values: - outputmask[name] |= (output[name] == mval) - # Construct the final array - if usemask: - output = output.view(MaskedArray) - output._mask = outputmask - if unpack: - return output.squeeze().T - return output.squeeze() - - - -def ndfromtxt(fname, **kwargs): - """ - Load ASCII data stored in a file and return it as a single array. - - Complete description of all the optional input parameters is available in - the docstring of the `genfromtxt` function. - - See Also - -------- - numpy.genfromtxt : generic function. - - """ - kwargs['usemask'] = False - return genfromtxt(fname, **kwargs) - - -def mafromtxt(fname, **kwargs): - """ - Load ASCII data stored in a text file and return a masked array. - - For a complete description of all the input parameters, see `genfromtxt`. - - See Also - -------- - numpy.genfromtxt : generic function to load ASCII data. - - """ - kwargs['usemask'] = True - return genfromtxt(fname, **kwargs) - - -def recfromtxt(fname, **kwargs): - """ - Load ASCII data from a file and return it in a record array. - - If ``usemask=False`` a standard `recarray` is returned, - if ``usemask=True`` a MaskedRecords array is returned. - - Complete description of all the optional input parameters is available in - the docstring of the `genfromtxt` function. - - See Also - -------- - numpy.genfromtxt : generic function - - Notes - ----- - By default, `dtype` is None, which means that the data-type of the output - array will be determined from the data. 
- - """ - kwargs.update(dtype=kwargs.get('dtype', None)) - usemask = kwargs.get('usemask', False) - output = genfromtxt(fname, **kwargs) - if usemask: - from numpy.ma.mrecords import MaskedRecords - output = output.view(MaskedRecords) - else: - output = output.view(np.recarray) - return output - - -def recfromcsv(fname, **kwargs): - """ - Load ASCII data stored in a comma-separated file. - - The returned array is a record array (if ``usemask=False``, see - `recarray`) or a masked record array (if ``usemask=True``, - see `ma.mrecords.MaskedRecords`). - - For a complete description of all the input parameters, see `genfromtxt`. - - See Also - -------- - numpy.genfromtxt : generic function to load ASCII data. - - """ - case_sensitive = kwargs.get('case_sensitive', "lower") or "lower" - names = kwargs.get('names', True) - if names is None: - names = True - kwargs.update(dtype=kwargs.get('update', None), - delimiter=kwargs.get('delimiter', ",") or ",", - names=names, - case_sensitive=case_sensitive) - usemask = kwargs.get("usemask", False) - output = genfromtxt(fname, **kwargs) - if usemask: - from numpy.ma.mrecords import MaskedRecords - output = output.view(MaskedRecords) - else: - output = output.view(np.recarray) - return output diff --git a/pythonPackages/numpy/numpy/lib/polynomial.py b/pythonPackages/numpy/numpy/lib/polynomial.py deleted file mode 100755 index 603655ec2c..0000000000 --- a/pythonPackages/numpy/numpy/lib/polynomial.py +++ /dev/null @@ -1,1235 +0,0 @@ -""" -Functions to operate on polynomials. 
-""" - -__all__ = ['poly', 'roots', 'polyint', 'polyder', 'polyadd', - 'polysub', 'polymul', 'polydiv', 'polyval', 'poly1d', - 'polyfit', 'RankWarning'] - -import re -import warnings -import numpy.core.numeric as NX - -from numpy.core import isscalar, abs, finfo, atleast_1d, hstack -from numpy.lib.twodim_base import diag, vander -from numpy.lib.function_base import trim_zeros, sort_complex -from numpy.lib.type_check import iscomplex, real, imag -from numpy.linalg import eigvals, lstsq - -class RankWarning(UserWarning): - """ - Issued by `polyfit` when the Vandermonde matrix is rank deficient. - - For more information, a way to suppress the warning, and an example of - `RankWarning` being issued, see `polyfit`. - - """ - pass - -def poly(seq_of_zeros): - """ - Find the coefficients of a polynomial with the given sequence of roots. - - Returns the coefficients of the polynomial whose leading coefficient - is one for the given sequence of zeros (multiple roots must be included - in the sequence as many times as their multiplicity; see Examples). - A square matrix (or array, which will be treated as a matrix) can also - be given, in which case the coefficients of the characteristic polynomial - of the matrix are returned. - - Parameters - ---------- - seq_of_zeros : array_like, shape (N,) or (N, N) - A sequence of polynomial roots, or a square array or matrix object. - - Returns - ------- - c : ndarray - 1D array of polynomial coefficients from highest to lowest degree: - - ``c[0] * x**(N) + c[1] * x**(N-1) + ... + c[N-1] * x + c[N]`` - where c[0] always equals 1. - - Raises - ------ - ValueError - If input is the wrong shape (the input must be a 1-D or square - 2-D array). - - See Also - -------- - polyval : Evaluate a polynomial at a point. - roots : Return the roots of a polynomial. - polyfit : Least squares polynomial fit. - poly1d : A one-dimensional polynomial class. 
- - Notes - ----- - Specifying the roots of a polynomial still leaves one degree of - freedom, typically represented by an undetermined leading - coefficient. [1]_ In the case of this function, that coefficient - - the first one in the returned array - is always taken as one. (If - for some reason you have one other point, the only automatic way - presently to leverage that information is to use ``polyfit``.) - - The characteristic polynomial, :math:`p_a(t)`, of an `n`-by-`n` - matrix **A** is given by - - :math:`p_a(t) = \\mathrm{det}(t\\, \\mathbf{I} - \\mathbf{A})`, - - where **I** is the `n`-by-`n` identity matrix. [2]_ - - References - ---------- - .. [1] M. Sullivan and M. Sullivan, III, "Algebra and Trigonometry, - Enhanced With Graphing Utilities," Prentice-Hall, pg. 318, 1996. - - .. [2] G. Strang, "Linear Algebra and Its Applications, 2nd Edition," - Academic Press, pg. 182, 1980. - - Examples - -------- - Given a sequence of a polynomial's zeros: - - >>> np.poly((0, 0, 0)) # Multiple root example - array([1, 0, 0, 0]) - - The line above represents z**3 + 0*z**2 + 0*z + 0. - - >>> np.poly((-1./2, 0, 1./2)) - array([ 1. , 0. , -0.25, 0. ]) - - The line above represents z**3 - z/4. - - >>> np.poly((np.random.random(1.)[0], 0, np.random.random(1.)[0])) - array([ 1. , -0.77086955, 0.08618131, 0. ]) #random - - Given a square array object: - - >>> P = np.array([[0, 1./3], [-1./2, 0]]) - >>> np.poly(P) - array([ 1. , 0. , 0.16666667]) - - Or a square matrix object: - - >>> np.poly(np.matrix(P)) - array([ 1. , 0. , 0.16666667]) - - Note how in all cases the leading coefficient is always 1. - - """ - seq_of_zeros = atleast_1d(seq_of_zeros) - sh = seq_of_zeros.shape - if len(sh) == 2 and sh[0] == sh[1] and sh[0] != 0: - seq_of_zeros = eigvals(seq_of_zeros) - elif len(sh) == 1: - pass - else: - raise ValueError, "input must be 1d or square 2d array."
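The coefficient construction in the body of `poly` multiplies out one linear factor `(x - r)` at a time; each step is a convolution of coefficient arrays. A minimal sketch of that step in plain NumPy (modern Python syntax, not the original Python 2 source):

```python
import numpy as np

# Multiply out (x - 2)(x - 3) one factor at a time, exactly as
# poly() does with NX.convolve(a, [1, -root]).
a = np.array([1.0])
for r in (2.0, 3.0):               # the roots
    a = np.convolve(a, [1.0, -r])  # multiply current poly by (x - r)
# a is now [1., -5., 6.], i.e. x**2 - 5x + 6
```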
- - if len(seq_of_zeros) == 0: - return 1.0 - - a = [1] - for k in range(len(seq_of_zeros)): - a = NX.convolve(a, [1, -seq_of_zeros[k]], mode='full') - - if issubclass(a.dtype.type, NX.complexfloating): - # if complex roots are all complex conjugates, the roots are real. - roots = NX.asarray(seq_of_zeros, complex) - pos_roots = sort_complex(NX.compress(roots.imag > 0, roots)) - neg_roots = NX.conjugate(sort_complex( - NX.compress(roots.imag < 0,roots))) - if (len(pos_roots) == len(neg_roots) and - NX.alltrue(neg_roots == pos_roots)): - a = a.real.copy() - - return a - -def roots(p): - """ - Return the roots of a polynomial with coefficients given in p. - - The values in the rank-1 array `p` are coefficients of a polynomial. - If the length of `p` is n+1 then the polynomial is described by - p[0] * x**n + p[1] * x**(n-1) + ... + p[n-1]*x + p[n] - - Parameters - ---------- - p : array_like of shape(M,) - Rank-1 array of polynomial co-efficients. - - Returns - ------- - out : ndarray - An array containing the complex roots of the polynomial. - - Raises - ------ - ValueError: - When `p` cannot be converted to a rank-1 array. - - See also - -------- - - poly : Find the coefficients of a polynomial with - a given sequence of roots. - polyval : Evaluate a polynomial at a point. - polyfit : Least squares polynomial fit. - poly1d : A one-dimensional polynomial class. - - Notes - ----- - - The algorithm relies on computing the eigenvalues of the - companion matrix [1]_. - - References - ---------- - .. [1] Wikipedia, "Companion matrix", - http://en.wikipedia.org/wiki/Companion_matrix - - Examples - -------- - - >>> coeff = [3.2, 2, 1] - >>> np.roots(coeff) - array([-0.3125+0.46351241j, -0.3125-0.46351241j]) - - """ - # If input is scalar, this makes it an array - p = atleast_1d(p) - if len(p.shape) != 1: - raise ValueError,"Input must be a rank-1 array." 
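The Notes of `roots` say the algorithm computes the eigenvalues of the companion matrix, which is exactly what the implementation does (subdiagonal of ones, first row from the normalized coefficients). Re-deriving it for a small quadratic shows the idea; this is a sketch in modern NumPy, not the original code:

```python
import numpy as np

# Roots of p(x) = x**2 - 3x + 2 via its companion matrix.
p = np.array([1.0, -3.0, 2.0])
N = len(p)
A = np.diag(np.ones(N - 2), -1)  # ones on the first subdiagonal
A[0, :] = -p[1:] / p[0]          # first row from normalized coefficients
r = np.sort(np.linalg.eigvals(A).real)
# r is [1., 2.], the roots of x**2 - 3x + 2
```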
- - # find non-zero array entries - non_zero = NX.nonzero(NX.ravel(p))[0] - - # Return an empty array if polynomial is all zeros - if len(non_zero) == 0: - return NX.array([]) - - # find the number of trailing zeros -- this is the number of roots at 0. - trailing_zeros = len(p) - non_zero[-1] - 1 - - # strip leading and trailing zeros - p = p[int(non_zero[0]):int(non_zero[-1])+1] - - # casting: if incoming array isn't floating point, make it floating point. - if not issubclass(p.dtype.type, (NX.floating, NX.complexfloating)): - p = p.astype(float) - - N = len(p) - if N > 1: - # build companion matrix and find its eigenvalues (the roots) - A = diag(NX.ones((N-2,), p.dtype), -1) - A[0, :] = -p[1:] / p[0] - roots = eigvals(A) - else: - roots = NX.array([]) - - # tack any zeros onto the back of the array - roots = hstack((roots, NX.zeros(trailing_zeros, roots.dtype))) - return roots - -def polyint(p, m=1, k=None): - """ - Return an antiderivative (indefinite integral) of a polynomial. - - The returned order `m` antiderivative `P` of polynomial `p` satisfies - :math:`\\frac{d^m}{dx^m}P(x) = p(x)` and is defined up to `m - 1` - integration constants `k`. The constants determine the low-order - polynomial part - - .. math:: \\frac{k_{m-1}}{0!} x^0 + \\ldots + \\frac{k_0}{(m-1)!}x^{m-1} - - of `P` so that :math:`P^{(j)}(0) = k_{m-j-1}`. - - Parameters - ---------- - p : {array_like, poly1d} - Polynomial to differentiate. - A sequence is interpreted as polynomial coefficients, see `poly1d`. - m : int, optional - Order of the antiderivative. (Default: 1) - k : {None, list of `m` scalars, scalar}, optional - Integration constants. They are given in the order of integration: - those corresponding to highest-order terms come first. - - If ``None`` (default), all constants are assumed to be zero. - If `m = 1`, a single scalar can be given instead of a list. 
- - See Also - -------- - polyder : derivative of a polynomial - poly1d.integ : equivalent method - - Examples - -------- - The defining property of the antiderivative: - - >>> p = np.poly1d([1,1,1]) - >>> P = np.polyint(p) - >>> P - poly1d([ 0.33333333, 0.5 , 1. , 0. ]) - >>> np.polyder(P) == p - True - - The integration constants default to zero, but can be specified: - - >>> P = np.polyint(p, 3) - >>> P(0) - 0.0 - >>> np.polyder(P)(0) - 0.0 - >>> np.polyder(P, 2)(0) - 0.0 - >>> P = np.polyint(p, 3, k=[6,5,3]) - >>> P - poly1d([ 0.01666667, 0.04166667, 0.16666667, 3. , 5. , 3. ]) - - Note that 3 = 6 / 2!, and that the constants are given in the order of - integrations. Constant of the highest-order polynomial term comes first: - - >>> np.polyder(P, 2)(0) - 6.0 - >>> np.polyder(P, 1)(0) - 5.0 - >>> P(0) - 3.0 - - """ - m = int(m) - if m < 0: - raise ValueError, "Order of integral must be positive (see polyder)" - if k is None: - k = NX.zeros(m, float) - k = atleast_1d(k) - if len(k) == 1 and m > 1: - k = k[0]*NX.ones(m, float) - if len(k) < m: - raise ValueError, \ - "k must be a scalar or a rank-1 array of length 1 or >m." - - truepoly = isinstance(p, poly1d) - p = NX.asarray(p) - if m == 0: - if truepoly: - return poly1d(p) - return p - else: - # Note: this must work also with object and integer arrays - y = NX.concatenate((p.__truediv__(NX.arange(len(p), 0, -1)), [k[0]])) - val = polyint(y, m - 1, k=k[1:]) - if truepoly: - return poly1d(val) - return val - -def polyder(p, m=1): - """ - Return the derivative of the specified order of a polynomial. - - Parameters - ---------- - p : poly1d or sequence - Polynomial to differentiate. - A sequence is interpreted as polynomial coefficients, see `poly1d`. - m : int, optional - Order of differentiation (default: 1) - - Returns - ------- - der : poly1d - A new polynomial representing the derivative. - - See Also - -------- - polyint : Anti-derivative of a polynomial. - poly1d : Class for one-dimensional polynomials. 
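The single integration step inside `polyint` divides each coefficient by its new power (`arange(len(p), 0, -1)`) and appends the integration constant. Checking that against the docstring example, as a standalone sketch:

```python
import numpy as np

# One antiderivative step for p(x) = x**2 + x + 1 with constant k = 0:
# divide each coefficient by its new power, then append k.
p = np.array([1.0, 1.0, 1.0])
P = np.concatenate((p / np.arange(len(p), 0, -1), [0.0]))
# P is [1/3, 1/2, 1, 0], matching np.polyint(np.poly1d([1, 1, 1]))
```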
- - Examples - -------- - The derivative of the polynomial :math:`x^3 + x^2 + x^1 + 1` is: - - >>> p = np.poly1d([1,1,1,1]) - >>> p2 = np.polyder(p) - >>> p2 - poly1d([3, 2, 1]) - - which evaluates to: - - >>> p2(2.) - 17.0 - - We can verify this, approximating the derivative with - ``(f(x + h) - f(x))/h``: - - >>> (p(2. + 0.001) - p(2.)) / 0.001 - 17.007000999997857 - - The fourth-order derivative of a 3rd-order polynomial is zero: - - >>> np.polyder(p, 2) - poly1d([6, 2]) - >>> np.polyder(p, 3) - poly1d([6]) - >>> np.polyder(p, 4) - poly1d([ 0.]) - - """ - m = int(m) - if m < 0: - raise ValueError, "Order of derivative must be positive (see polyint)" - - truepoly = isinstance(p, poly1d) - p = NX.asarray(p) - n = len(p) - 1 - y = p[:-1] * NX.arange(n, 0, -1) - if m == 0: - val = p - else: - val = polyder(y, m - 1) - if truepoly: - val = poly1d(val) - return val - -def polyfit(x, y, deg, rcond=None, full=False): - """ - Least squares polynomial fit. - - Fit a polynomial ``p(x) = p[0] * x**deg + ... + p[deg]`` of degree `deg` - to points `(x, y)`. Returns a vector of coefficients `p` that minimises - the squared error. - - Parameters - ---------- - x : array_like, shape (M,) - x-coordinates of the M sample points ``(x[i], y[i])``. - y : array_like, shape (M,) or (M, K) - y-coordinates of the sample points. Several data sets of sample - points sharing the same x-coordinates can be fitted at once by - passing in a 2D-array that contains one dataset per column. - deg : int - Degree of the fitting polynomial - rcond : float, optional - Relative condition number of the fit. Singular values smaller than this - relative to the largest singular value will be ignored. The default - value is len(x)*eps, where eps is the relative precision of the float - type, about 2e-16 in most cases. - full : bool, optional - Switch determining nature of return value. 
When it is - False (the default) just the coefficients are returned, when True - diagnostic information from the singular value decomposition is also - returned. - - Returns - ------- - p : ndarray, shape (deg + 1,) or (deg + 1, K) - Polynomial coefficients, highest power first. - If `y` was 2-D, the coefficients for `k`-th data set are in ``p[:,k]``. - - residuals, rank, singular_values, rcond : present only if `full` = True - Residuals of the least-squares fit, the effective rank of the scaled - Vandermonde coefficient matrix, its singular values, and the specified - value of `rcond`. For more details, see `linalg.lstsq`. - - Warns - ----- - RankWarning - The rank of the coefficient matrix in the least-squares fit is - deficient. The warning is only raised if `full` = False. - - The warnings can be turned off by - - >>> import warnings - >>> warnings.simplefilter('ignore', np.RankWarning) - - See Also - -------- - polyval : Computes polynomial values. - linalg.lstsq : Computes a least-squares fit. - scipy.interpolate.UnivariateSpline : Computes spline fits. - - Notes - ----- - The solution minimizes the squared error - - .. math :: - E = \\sum_{j=0}^k |p(x_j) - y_j|^2 - - in the equations:: - - x[0]**n * p[n] + ... + x[0] * p[1] + p[0] = y[0] - x[1]**n * p[n] + ... + x[1] * p[1] + p[0] = y[1] - ... - x[k]**n * p[n] + ... + x[k] * p[1] + p[0] = y[k] - - The coefficient matrix of the coefficients `p` is a Vandermonde matrix. - - `polyfit` issues a `RankWarning` when the least-squares fit is badly - conditioned. This implies that the best fit is not well-defined due - to numerical error. The results may be improved by lowering the polynomial - degree or by replacing `x` by `x` - `x`.mean(). The `rcond` parameter - can also be set to a value smaller than its default, but the resulting - fit may be spurious: including contributions from the small singular - values can add numerical noise to the result.
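The implementation scales `x` by its largest magnitude before solving and then divides the coefficients by a Vandermonde row to undo the substitution `x -> x/scale`; that rescaling identity can be checked directly. A sketch using `numpy.linalg.lstsq` and `numpy.vander` (not the original code, which works on the unscaled problem only when `scale == 0`):

```python
import numpy as np

# Fit y = 2x**2 + 3x + 1 on rescaled abscissae, then undo the
# substitution: vander([scale], order)[0] is [scale**2, scale, 1].
x = np.array([0.0, 10.0, 20.0, 30.0])
y = 2.0 * x**2 + 3.0 * x + 1.0
order = 3                       # deg + 1
scale = np.abs(x).max()
c_scaled = np.linalg.lstsq(np.vander(x / scale, order), y, rcond=None)[0]
c = c_scaled / np.vander([scale], order)[0]
# c recovers [2., 3., 1.]
```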
- - Note that fitting polynomial coefficients is inherently badly conditioned - when the degree of the polynomial is large or the interval of sample points - is badly centered. The quality of the fit should always be checked in these - cases. When polynomial fits are not satisfactory, splines may be a good - alternative. - - References - ---------- - .. [1] Wikipedia, "Curve fitting", - http://en.wikipedia.org/wiki/Curve_fitting - .. [2] Wikipedia, "Polynomial interpolation", - http://en.wikipedia.org/wiki/Polynomial_interpolation - - Examples - -------- - >>> x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]) - >>> y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0]) - >>> z = np.polyfit(x, y, 3) - >>> z - array([ 0.08703704, -0.81349206, 1.69312169, -0.03968254]) - - It is convenient to use `poly1d` objects for dealing with polynomials: - - >>> p = np.poly1d(z) - >>> p(0.5) - 0.6143849206349179 - >>> p(3.5) - -0.34732142857143039 - >>> p(10) - 22.579365079365115 - - High-order polynomials may oscillate wildly: - - >>> p30 = np.poly1d(np.polyfit(x, y, 30)) - /... RankWarning: Polyfit may be poorly conditioned... - >>> p30(4) - -0.80000000000000204 - >>> p30(5) - -0.99999999999999445 - >>> p30(4.5) - -0.10547061179440398 - - Illustration: - - >>> import matplotlib.pyplot as plt - >>> xp = np.linspace(-2, 6, 100) - >>> plt.plot(x, y, '.', xp, p(xp), '-', xp, p30(xp), '--') - [<matplotlib.lines.Line2D object at 0x...>, <matplotlib.lines.Line2D object at 0x...>, <matplotlib.lines.Line2D object at 0x...>] - >>> plt.ylim(-2,2) - (-2, 2) - >>> plt.show() - - """ - order = int(deg) + 1 - x = NX.asarray(x) + 0.0 - y = NX.asarray(y) + 0.0 - - # check arguments.
- if deg < 0 : - raise ValueError, "expected deg >= 0" - if x.ndim != 1: - raise TypeError, "expected 1D vector for x" - if x.size == 0: - raise TypeError, "expected non-empty vector for x" - if y.ndim < 1 or y.ndim > 2 : - raise TypeError, "expected 1D or 2D array for y" - if x.shape[0] != y.shape[0] : - raise TypeError, "expected x and y to have same length" - - # set rcond - if rcond is None : - rcond = len(x)*finfo(x.dtype).eps - - # scale x to improve condition number - scale = abs(x).max() - if scale != 0 : - x /= scale - - # solve least squares equation for powers of x - v = vander(x, order) - c, resids, rank, s = lstsq(v, y, rcond) - - # warn on rank reduction, which indicates an ill conditioned matrix - if rank != order and not full: - msg = "Polyfit may be poorly conditioned" - warnings.warn(msg, RankWarning) - - # scale returned coefficients - if scale != 0 : - if c.ndim == 1 : - c /= vander([scale], order)[0] - else : - c /= vander([scale], order).T - - if full : - return c, resids, rank, s, rcond - else : - return c - - - -def polyval(p, x): - """ - Evaluate a polynomial at specific values. - - If `p` is of length N, this function returns the value: - - ``p[0]*x**(N-1) + p[1]*x**(N-2) + ... + p[N-2]*x + p[N-1]`` - - If `x` is a sequence, then `p(x)` is returned for each element of `x`. - If `x` is another polynomial then the composite polynomial `p(x(t))` - is returned. - - Parameters - ---------- - p : array_like or poly1d object - 1D array of polynomial coefficients (including coefficients equal - to zero) from highest degree to the constant term, or an - instance of poly1d. - x : array_like or poly1d object - A number, a 1D array of numbers, or an instance of poly1d, "at" - which to evaluate `p`. - - Returns - ------- - values : ndarray or poly1d - If `x` is a poly1d instance, the result is the composition of the two - polynomials, i.e., `x` is "substituted" in `p` and the simplified - result is returned. 
In addition, the type of `x` - array_like or - poly1d - governs the type of the output: `x` array_like => `values` - array_like, `x` a poly1d object => `values` is also. - - See Also - -------- - poly1d: A polynomial class. - - Notes - ----- - Horner's scheme [1]_ is used to evaluate the polynomial. Even so, - for polynomials of high degree the values may be inaccurate due to - rounding errors. Use carefully. - - References - ---------- - .. [1] I. N. Bronshtein, K. A. Semendyayev, and K. A. Hirsch (Eng. - trans. Ed.), *Handbook of Mathematics*, New York, Van Nostrand - Reinhold Co., 1985, pg. 720. - - Examples - -------- - >>> np.polyval([3,0,1], 5) # 3 * 5**2 + 0 * 5**1 + 1 - 76 - >>> np.polyval([3,0,1], np.poly1d(5)) - poly1d([ 76.]) - >>> np.polyval(np.poly1d([3,0,1]), 5) - 76 - >>> np.polyval(np.poly1d([3,0,1]), np.poly1d(5)) - poly1d([ 76.]) - - """ - p = NX.asarray(p) - if isinstance(x, poly1d): - y = 0 - else: - x = NX.asarray(x) - y = NX.zeros_like(x) - for i in range(len(p)): - y = x * y + p[i] - return y - -def polyadd(a1, a2): - """ - Find the sum of two polynomials. - - Returns the polynomial resulting from the sum of two input polynomials. - Each input must be either a poly1d object or a 1D sequence of polynomial - coefficients, from highest to lowest degree. - - Parameters - ---------- - a1, a2 : array_like or poly1d object - Input polynomials. - - Returns - ------- - out : ndarray or poly1d object - The sum of the inputs. If either input is a poly1d object, then the - output is also a poly1d object. Otherwise, it is a 1D array of - polynomial coefficients from highest to lowest degree. - - See Also - -------- - poly1d : A one-dimensional polynomial class. 
- poly, polyadd, polyder, polydiv, polyfit, polyint, polysub, polyval - - Examples - -------- - >>> np.polyadd([1, 2], [9, 5, 4]) - array([9, 6, 6]) - - Using poly1d objects: - - >>> p1 = np.poly1d([1, 2]) - >>> p2 = np.poly1d([9, 5, 4]) - >>> print p1 - 1 x + 2 - >>> print p2 - 2 - 9 x + 5 x + 4 - >>> print np.polyadd(p1, p2) - 2 - 9 x + 6 x + 6 - - """ - truepoly = (isinstance(a1, poly1d) or isinstance(a2, poly1d)) - a1 = atleast_1d(a1) - a2 = atleast_1d(a2) - diff = len(a2) - len(a1) - if diff == 0: - val = a1 + a2 - elif diff > 0: - zr = NX.zeros(diff, a1.dtype) - val = NX.concatenate((zr, a1)) + a2 - else: - zr = NX.zeros(abs(diff), a2.dtype) - val = a1 + NX.concatenate((zr, a2)) - if truepoly: - val = poly1d(val) - return val - -def polysub(a1, a2): - """ - Difference (subtraction) of two polynomials. - - Given two polynomials `a1` and `a2`, returns ``a1 - a2``. - `a1` and `a2` can be either array_like sequences of the polynomials' - coefficients (including coefficients equal to zero), or `poly1d` objects. - - Parameters - ---------- - a1, a2 : array_like or poly1d - Minuend and subtrahend polynomials, respectively. - - Returns - ------- - out : ndarray or poly1d - Array or `poly1d` object of the difference polynomial's coefficients. - - See Also - -------- - polyval, polydiv, polymul, polyadd - - Examples - -------- - .. math:: (2 x^2 + 10 x - 2) - (3 x^2 + 10 x -4) = (-x^2 + 2) - - >>> np.polysub([2, 10, -2], [3, 10, -4]) - array([-1, 0, 2]) - - """ - truepoly = (isinstance(a1, poly1d) or isinstance(a2, poly1d)) - a1 = atleast_1d(a1) - a2 = atleast_1d(a2) - diff = len(a2) - len(a1) - if diff == 0: - val = a1 - a2 - elif diff > 0: - zr = NX.zeros(diff, a1.dtype) - val = NX.concatenate((zr, a1)) - a2 - else: - zr = NX.zeros(abs(diff), a2.dtype) - val = a1 - NX.concatenate((zr, a2)) - if truepoly: - val = poly1d(val) - return val - - -def polymul(a1, a2): - """ - Find the product of two polynomials. 
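Both `polyadd` and `polysub` above align coefficient arrays of different lengths by left-padding the shorter one with zeros (the `diff` branches), so matching degrees line up before the elementwise operation. A sketch of just that alignment step (the helper name `pad_to_match` is illustrative):

```python
import numpy as np

def pad_to_match(a1, a2):
    """Left-pad the shorter coefficient sequence with zeros so both
    cover the same degrees, mirroring the `diff` logic in polyadd."""
    a1, a2 = np.atleast_1d(a1), np.atleast_1d(a2)
    diff = len(a2) - len(a1)
    if diff > 0:
        a1 = np.concatenate((np.zeros(diff, a1.dtype), a1))
    elif diff < 0:
        a2 = np.concatenate((np.zeros(-diff, a2.dtype), a2))
    return a1, a2

a, b = pad_to_match([1, 2], [9, 5, 4])
total = a + b  # same result as np.polyadd([1, 2], [9, 5, 4])
```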
- - Finds the polynomial resulting from the multiplication of the two input - polynomials. Each input must be either a poly1d object or a 1D sequence - of polynomial coefficients, from highest to lowest degree. - - Parameters - ---------- - a1, a2 : array_like or poly1d object - Input polynomials. - - Returns - ------- - out : ndarray or poly1d object - The polynomial resulting from the multiplication of the inputs. If - either inputs is a poly1d object, then the output is also a poly1d - object. Otherwise, it is a 1D array of polynomial coefficients from - highest to lowest degree. - - See Also - -------- - poly1d : A one-dimensional polynomial class. - poly, polyadd, polyder, polydiv, polyfit, polyint, polysub, - polyval - - Examples - -------- - >>> np.polymul([1, 2, 3], [9, 5, 1]) - array([ 9, 23, 38, 17, 3]) - - Using poly1d objects: - - >>> p1 = np.poly1d([1, 2, 3]) - >>> p2 = np.poly1d([9, 5, 1]) - >>> print p1 - 2 - 1 x + 2 x + 3 - >>> print p2 - 2 - 9 x + 5 x + 1 - >>> print np.polymul(p1, p2) - 4 3 2 - 9 x + 23 x + 38 x + 17 x + 3 - - """ - truepoly = (isinstance(a1, poly1d) or isinstance(a2, poly1d)) - a1,a2 = poly1d(a1),poly1d(a2) - val = NX.convolve(a1, a2) - if truepoly: - val = poly1d(val) - return val - -def polydiv(u, v): - """ - Returns the quotient and remainder of polynomial division. - - The input arrays are the coefficients (including any coefficients - equal to zero) of the "numerator" (dividend) and "denominator" - (divisor) polynomials, respectively. - - Parameters - ---------- - u : array_like or poly1d - Dividend polynomial's coefficients. - - v : array_like or poly1d - Divisor polynomial's coefficients. - - Returns - ------- - q : ndarray - Coefficients, including those equal to zero, of the quotient. - r : ndarray - Coefficients, including those equal to zero, of the remainder. 
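As the `polymul` body shows, multiplying two polynomials is just a convolution of their coefficient sequences, since the coefficient of `x**k` in the product sums all pairs of terms whose degrees add to `k`:

```python
import numpy as np

# Polynomial multiplication as convolution, as polymul does internally.
p1 = [1, 2, 3]  # x**2 + 2x + 3
p2 = [9, 5, 1]  # 9x**2 + 5x + 1
product = np.convolve(p1, p2)  # coefficients of the degree-4 product
```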
- - See Also - -------- - poly, polyadd, polyder, polydiv, polyfit, polyint, polymul, polysub, - polyval - - Notes - ----- - Both `u` and `v` must be 0-d or 1-d (ndim = 0 or 1), but `u.ndim` need - not equal `v.ndim`. In other words, all four possible combinations - - ``u.ndim = v.ndim = 0``, ``u.ndim = v.ndim = 1``, - ``u.ndim = 1, v.ndim = 0``, and ``u.ndim = 0, v.ndim = 1`` - work. - - Examples - -------- - .. math:: \\frac{3x^2 + 5x + 2}{2x + 1} = 1.5x + 1.75, remainder 0.25 - - >>> x = np.array([3.0, 5.0, 2.0]) - >>> y = np.array([2.0, 1.0]) - >>> np.polydiv(x, y) - (array([ 1.5 , 1.75]), array([ 0.25])) - - """ - truepoly = (isinstance(u, poly1d) or isinstance(u, poly1d)) - u = atleast_1d(u) + 0.0 - v = atleast_1d(v) + 0.0 - # w has the common type - w = u[0] + v[0] - m = len(u) - 1 - n = len(v) - 1 - scale = 1. / v[0] - q = NX.zeros((max(m - n + 1, 1),), w.dtype) - r = u.copy() - for k in range(0, m-n+1): - d = scale * r[k] - q[k] = d - r[k:k+n+1] -= d*v - while NX.allclose(r[0], 0, rtol=1e-14) and (r.shape[-1] > 1): - r = r[1:] - if truepoly: - return poly1d(q), poly1d(r) - return q, r - -_poly_mat = re.compile(r"[*][*]([0-9]*)") -def _raise_power(astr, wrap=70): - n = 0 - line1 = '' - line2 = '' - output = ' ' - while 1: - mat = _poly_mat.search(astr, n) - if mat is None: - break - span = mat.span() - power = mat.groups()[0] - partstr = astr[n:span[0]] - n = span[1] - toadd2 = partstr + ' '*(len(power)-1) - toadd1 = ' '*(len(partstr)-1) + power - if ((len(line2)+len(toadd2) > wrap) or \ - (len(line1)+len(toadd1) > wrap)): - output += line1 + "\n" + line2 + "\n " - line1 = toadd1 - line2 = toadd2 - else: - line2 += partstr + ' '*(len(power)-1) - line1 += ' '*(len(partstr)-1) + power - output += line1 + "\n" + line2 - return output + astr[n:] - - -class poly1d(object): - """ - A one-dimensional polynomial class. 
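The `polydiv` loop above is synthetic long division: each step divides the leading remainder coefficient by the divisor's leading coefficient, records it as a quotient coefficient, and subtracts that multiple of the divisor from the remainder. A self-contained sketch (the name `long_division` is illustrative):

```python
import numpy as np

def long_division(u, v):
    """Polynomial long division, following polydiv's loop: peel off one
    quotient coefficient per step and subtract d*v from the remainder."""
    u = np.atleast_1d(u).astype(float)
    v = np.atleast_1d(v).astype(float)
    m, n = len(u) - 1, len(v) - 1
    q = np.zeros(max(m - n + 1, 1))
    r = u.copy()
    for k in range(0, m - n + 1):
        d = r[k] / v[0]
        q[k] = d
        r[k:k + n + 1] -= d * v
    # trim leading (near-)zero coefficients from the remainder
    while np.allclose(r[0], 0, rtol=1e-14) and r.shape[-1] > 1:
        r = r[1:]
    return q, r

# (3x**2 + 5x + 2) / (2x + 1) = 1.5x + 1.75, remainder 0.25
q, r = long_division([3.0, 5.0, 2.0], [2.0, 1.0])
```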
- - A convenience class, used to encapsulate "natural" operations on - polynomials so that said operations may take on their customary - form in code (see Examples). - - Parameters - ---------- - c_or_r : array_like - The polynomial's coefficients, in decreasing powers, or if - the value of the second parameter is True, the polynomial's - roots (values where the polynomial evaluates to 0). For example, - ``poly1d([1, 2, 3])`` returns an object that represents - :math:`x^2 + 2x + 3`, whereas ``poly1d([1, 2, 3], True)`` returns - one that represents :math:`(x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x -6`. - r : bool, optional - If True, `c_or_r` specifies the polynomial's roots; the default - is False. - variable : str, optional - Changes the variable used when printing `p` from `x` to `variable` - (see Examples). - - Examples - -------- - Construct the polynomial :math:`x^2 + 2x + 3`: - - >>> p = np.poly1d([1, 2, 3]) - >>> print np.poly1d(p) - 2 - 1 x + 2 x + 3 - - Evaluate the polynomial at :math:`x = 0.5`: - - >>> p(0.5) - 4.25 - - Find the roots: - - >>> p.r - array([-1.+1.41421356j, -1.-1.41421356j]) - >>> p(p.r) - array([ -4.44089210e-16+0.j, -4.44089210e-16+0.j]) - - These numbers in the previous line represent (0, 0) to machine precision - - Show the coefficients: - - >>> p.c - array([1, 2, 3]) - - Display the order (the leading zero-coefficients are removed): - - >>> p.order - 2 - - Show the coefficient of the k-th power in the polynomial - (which is equivalent to ``p.c[-(i+1)]``): - - >>> p[1] - 2 - - Polynomials can be added, subtracted, multiplied, and divided - (returns quotient and remainder): - - >>> p * p - poly1d([ 1, 4, 10, 12, 9]) - - >>> (p**3 + 4) / p - (poly1d([ 1., 4., 10., 12., 9.]), poly1d([ 4.])) - - ``asarray(p)`` gives the coefficient array, so polynomials can be - used in all functions that accept arrays: - - >>> p**2 # square of polynomial - poly1d([ 1, 4, 10, 12, 9]) - - >>> np.square(p) # square of individual coefficients - array([1, 4, 9]) - - 
The variable used in the string representation of `p` can be modified, - using the `variable` parameter: - - >>> p = np.poly1d([1,2,3], variable='z') - >>> print p - 2 - 1 z + 2 z + 3 - - Construct a polynomial from its roots: - - >>> np.poly1d([1, 2], True) - poly1d([ 1, -3, 2]) - - This is the same polynomial as obtained by: - - >>> np.poly1d([1, -1]) * np.poly1d([1, -2]) - poly1d([ 1, -3, 2]) - - """ - coeffs = None - order = None - variable = None - def __init__(self, c_or_r, r=0, variable=None): - if isinstance(c_or_r, poly1d): - for key in c_or_r.__dict__.keys(): - self.__dict__[key] = c_or_r.__dict__[key] - if variable is not None: - self.__dict__['variable'] = variable - return - if r: - c_or_r = poly(c_or_r) - c_or_r = atleast_1d(c_or_r) - if len(c_or_r.shape) > 1: - raise ValueError, "Polynomial must be 1d only." - c_or_r = trim_zeros(c_or_r, trim='f') - if len(c_or_r) == 0: - c_or_r = NX.array([0.]) - self.__dict__['coeffs'] = c_or_r - self.__dict__['order'] = len(c_or_r) - 1 - if variable is None: - variable = 'x' - self.__dict__['variable'] = variable - - def __array__(self, t=None): - if t: - return NX.asarray(self.coeffs, t) - else: - return NX.asarray(self.coeffs) - - def __repr__(self): - vals = repr(self.coeffs) - vals = vals[6:-1] - return "poly1d(%s)" % vals - - def __len__(self): - return self.order - - def __str__(self): - thestr = "0" - var = self.variable - - # Remove leading zeros - coeffs = self.coeffs[NX.logical_or.accumulate(self.coeffs != 0)] - N = len(coeffs)-1 - - def fmt_float(q): - s = '%.4g' % q - if s.endswith('.0000'): - s = s[:-5] - return s - - for k in range(len(coeffs)): - if not iscomplex(coeffs[k]): - coefstr = fmt_float(real(coeffs[k])) - elif real(coeffs[k]) == 0: - coefstr = '%sj' % fmt_float(imag(coeffs[k])) - else: - coefstr = '(%s + %sj)' % (fmt_float(real(coeffs[k])), - fmt_float(imag(coeffs[k]))) - - power = (N-k) - if power == 0: - if coefstr != '0': - newstr = '%s' % (coefstr,) - else: - if k == 0: - newstr = '0' 
- else: - newstr = '' - elif power == 1: - if coefstr == '0': - newstr = '' - elif coefstr == 'b': - newstr = var - else: - newstr = '%s %s' % (coefstr, var) - else: - if coefstr == '0': - newstr = '' - elif coefstr == 'b': - newstr = '%s**%d' % (var, power,) - else: - newstr = '%s %s**%d' % (coefstr, var, power) - - if k > 0: - if newstr != '': - if newstr.startswith('-'): - thestr = "%s - %s" % (thestr, newstr[1:]) - else: - thestr = "%s + %s" % (thestr, newstr) - else: - thestr = newstr - return _raise_power(thestr) - - - def __call__(self, val): - return polyval(self.coeffs, val) - - def __neg__(self): - return poly1d(-self.coeffs) - - def __pos__(self): - return self - - def __mul__(self, other): - if isscalar(other): - return poly1d(self.coeffs * other) - else: - other = poly1d(other) - return poly1d(polymul(self.coeffs, other.coeffs)) - - def __rmul__(self, other): - if isscalar(other): - return poly1d(other * self.coeffs) - else: - other = poly1d(other) - return poly1d(polymul(self.coeffs, other.coeffs)) - - def __add__(self, other): - other = poly1d(other) - return poly1d(polyadd(self.coeffs, other.coeffs)) - - def __radd__(self, other): - other = poly1d(other) - return poly1d(polyadd(self.coeffs, other.coeffs)) - - def __pow__(self, val): - if not isscalar(val) or int(val) != val or val < 0: - raise ValueError, "Power to non-negative integers only." 
- res = [1] - for _ in range(val): - res = polymul(self.coeffs, res) - return poly1d(res) - - def __sub__(self, other): - other = poly1d(other) - return poly1d(polysub(self.coeffs, other.coeffs)) - - def __rsub__(self, other): - other = poly1d(other) - return poly1d(polysub(other.coeffs, self.coeffs)) - - def __div__(self, other): - if isscalar(other): - return poly1d(self.coeffs/other) - else: - other = poly1d(other) - return polydiv(self, other) - - __truediv__ = __div__ - - def __rdiv__(self, other): - if isscalar(other): - return poly1d(other/self.coeffs) - else: - other = poly1d(other) - return polydiv(other, self) - - __rtruediv__ = __rdiv__ - - def __eq__(self, other): - return NX.alltrue(self.coeffs == other.coeffs) - - def __ne__(self, other): - return NX.any(self.coeffs != other.coeffs) - - def __setattr__(self, key, val): - raise ValueError, "Attributes cannot be changed this way." - - def __getattr__(self, key): - if key in ['r', 'roots']: - return roots(self.coeffs) - elif key in ['c','coef','coefficients']: - return self.coeffs - elif key in ['o']: - return self.order - else: - try: - return self.__dict__[key] - except KeyError: - raise AttributeError("'%s' has no attribute '%s'" % (self.__class__, key)) - - def __getitem__(self, val): - ind = self.order - val - if val > self.order: - return 0 - if val < 0: - return 0 - return self.coeffs[ind] - - def __setitem__(self, key, val): - ind = self.order - key - if key < 0: - raise ValueError, "Does not support negative powers." - if key > self.order: - zr = NX.zeros(key-self.order, self.coeffs.dtype) - self.__dict__['coeffs'] = NX.concatenate((zr, self.coeffs)) - self.__dict__['order'] = key - ind = 0 - self.__dict__['coeffs'][ind] = val - return - - def __iter__(self): - return iter(self.coeffs) - - def integ(self, m=1, k=0): - """ - Return an antiderivative (indefinite integral) of this polynomial. - - Refer to `polyint` for full documentation. 
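The `__getitem__`/`__setitem__` pair above gives `poly1d` coefficient-by-power indexing: `p[k]` is the coefficient of `x**k` (so the indexing runs from the constant term upward, the reverse of `p.coeffs`), reads past the order return 0, and assigning past the order grows the polynomial by zero-padding:

```python
import numpy as np

p = np.poly1d([1, 2, 3])  # x**2 + 2x + 3
c1 = p[1]                 # coefficient of x**1 -> 2
c5 = p[5]                 # beyond the current order -> 0
p[3] = 4                  # grows p to 4x**3 + x**2 + 2x + 3
```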
- - See Also - -------- - polyint : equivalent function - - """ - return poly1d(polyint(self.coeffs, m=m, k=k)) - - def deriv(self, m=1): - """ - Return a derivative of this polynomial. - - Refer to `polyder` for full documentation. - - See Also - -------- - polyder : equivalent function - - """ - return poly1d(polyder(self.coeffs, m=m)) - -# Stuff to do on module import - -warnings.simplefilter('always',RankWarning) diff --git a/pythonPackages/numpy/numpy/lib/recfunctions.py b/pythonPackages/numpy/numpy/lib/recfunctions.py deleted file mode 100755 index 3c0094aaed..0000000000 --- a/pythonPackages/numpy/numpy/lib/recfunctions.py +++ /dev/null @@ -1,995 +0,0 @@ -""" -Collection of utilities to manipulate structured arrays. - -Most of these functions were initially implemented by John Hunter for matplotlib. -They have been rewritten and extended for convenience. - - -""" - -import sys -import itertools -import numpy as np -import numpy.ma as ma -from numpy import ndarray, recarray -from numpy.ma import MaskedArray -from numpy.ma.mrecords import MaskedRecords -from numpy.lib._iotools import _is_string_like - -_check_fill_value = np.ma.core._check_fill_value - -__all__ = ['append_fields', - 'drop_fields', - 'find_duplicates', - 'get_fieldstructure', - 'join_by', - 'merge_arrays', - 'rec_append_fields', 'rec_drop_fields', 'rec_join', - 'recursive_fill_fields', 'rename_fields', - 'stack_arrays', - ] - - -def recursive_fill_fields(input, output): - """ - Fills fields from output with fields from input, - with support for nested structures. - - Parameters - ---------- - input : ndarray - Input array. - output : ndarray - Output array. 
- - Notes - ----- - * `output` should be at least the same size as `input` - - Examples - -------- - >>> from numpy.lib import recfunctions as rfn - >>> a = np.array([(1, 10.), (2, 20.)], dtype=[('A', int), ('B', float)]) - >>> b = np.zeros((3,), dtype=a.dtype) - >>> rfn.recursive_fill_fields(a, b) - array([(1, 10.0), (2, 20.0), (0, 0.0)], - dtype=[('A', '>> from numpy.lib import recfunctions as rfn - >>> rfn.get_names(np.empty((1,), dtype=int)) is None - True - >>> rfn.get_names(np.empty((1,), dtype=[('A',int), ('B', float)])) - ('A', 'B') - >>> adtype = np.dtype([('a', int), ('b', [('ba', int), ('bb', int)])]) - >>> rfn.get_names(adtype) - ('a', ('b', ('ba', 'bb'))) - """ - listnames = [] - names = adtype.names - for name in names: - current = adtype[name] - if current.names: - listnames.append((name, tuple(get_names(current)))) - else: - listnames.append(name) - return tuple(listnames) or None - - -def get_names_flat(adtype): - """ - Returns the field names of the input datatype as a tuple. Nested structure - are flattend beforehand. - - Parameters - ---------- - adtype : dtype - Input datatype - - Examples - -------- - >>> from numpy.lib import recfunctions as rfn - >>> rfn.get_names_flat(np.empty((1,), dtype=int)) is None - True - >>> rfn.get_names_flat(np.empty((1,), dtype=[('A',int), ('B', float)])) - ('A', 'B') - >>> adtype = np.dtype([('a', int), ('b', [('ba', int), ('bb', int)])]) - >>> rfn.get_names_flat(adtype) - ('a', 'b', 'ba', 'bb') - """ - listnames = [] - names = adtype.names - for name in names: - listnames.append(name) - current = adtype[name] - if current.names: - listnames.extend(get_names_flat(current)) - return tuple(listnames) or None - - -def flatten_descr(ndtype): - """ - Flatten a structured data-type description. 
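`get_names_flat` above walks a structured dtype depth-first, appending each field name and recursing into fields that themselves have names. A self-contained sketch of the same traversal (the helper name `names_flat` is illustrative):

```python
import numpy as np

def names_flat(adtype):
    """Collect field names depth-first, flattening nested structures,
    mirroring get_names_flat above."""
    out = []
    for name in adtype.names:
        out.append(name)
        current = adtype[name]
        if current.names:  # structured sub-dtype: recurse
            out.extend(names_flat(current))
    return tuple(out)

adtype = np.dtype([('a', int), ('b', [('ba', int), ('bb', int)])])
flat = names_flat(adtype)
```

The `current.names` truth test is what distinguishes a nested structured field (a tuple of names) from a plain field (`names` is `None`).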
- - Examples - -------- - >>> from numpy.lib import recfunctions as rfn - >>> ndtype = np.dtype([('a', '<i4'), ('b', [('ba', '<f8'), ('bb', '<i4')])]) - >>> rfn.flatten_descr(ndtype) - (('a', dtype('int32')), ('ba', dtype('float64')), ('bb', dtype('int32'))) - - """ - names = ndtype.names - if names is None: - return ndtype.descr - else: - descr = [] - for field in names: - (typ, _) = ndtype.fields[field] - if typ.names: - descr.extend(flatten_descr(typ)) - else: - descr.append((field, typ)) - return tuple(descr) - - -def zip_descr(seqarrays, flatten=False): - """ - Combine the dtype description of a series of arrays. - - Parameters - ---------- - seqarrays : sequence of arrays - Sequence of arrays - flatten : {boolean}, optional - Whether to collapse nested descriptions. - """ - newdtype = [] - if flatten: - for a in seqarrays: - newdtype.extend(flatten_descr(a.dtype)) - else: - for a in seqarrays: - current = a.dtype - names = current.names or () - if len(names) > 1: - newdtype.append(('', current.descr)) - else: - newdtype.extend(current.descr) - return np.dtype(newdtype).descr - - -def get_fieldstructure(adtype, lastname=None, parents=None,): - """ - Returns a dictionary with fields as keys and a list of parent fields as values. - - This function is used to simplify access to fields nested in other fields. - - Parameters - ---------- - adtype : np.dtype - Input datatype - lastname : optional - Last processed field name (used internally during recursion). - parents : dictionary - Dictionary of parent fields (used internally during recursion). - - Examples - -------- - >>> from numpy.lib import recfunctions as rfn - >>> ndtype = np.dtype([('A', int), - ... ('B', [('BA', int), - ... ('BB', [('BBA', int), ('BBB', int)])])]) - >>> rfn.get_fieldstructure(ndtype) - ... 
# XXX: possible regression, order of BBA and BBB is swapped - {'A': [], 'B': [], 'BA': ['B'], 'BB': ['B'], 'BBA': ['B', 'BB'], 'BBB': ['B', 'BB']} - - """ - if parents is None: - parents = {} - names = adtype.names - for name in names: - current = adtype[name] - if current.names: - if lastname: - parents[name] = [lastname, ] - else: - parents[name] = [] - parents.update(get_fieldstructure(current, name, parents)) - else: - lastparent = [_ for _ in (parents.get(lastname, []) or [])] - if lastparent: -# if (lastparent[-1] != lastname): - lastparent.append(lastname) - elif lastname: - lastparent = [lastname, ] - parents[name] = lastparent or [] - return parents or None - - -def _izip_fields_flat(iterable): - """ - Returns an iterator of concatenated fields from a sequence of arrays, - collapsing any nested structure. - """ - for element in iterable: - if isinstance(element, np.void): - for f in _izip_fields_flat(tuple(element)): - yield f - else: - yield element - - -def _izip_fields(iterable): - """ - Returns an iterator of concatenated fields from a sequence of arrays. - """ - for element in iterable: - if hasattr(element, '__iter__') and not isinstance(element, basestring): - for f in _izip_fields(element): - yield f - elif isinstance(element, np.void) and len(tuple(element)) == 1: - for f in _izip_fields(element): - yield f - else: - yield element - - -def izip_records(seqarrays, fill_value=None, flatten=True): - """ - Returns an iterator of concatenated items from a sequence of arrays. - - Parameters - ---------- - seqarray : sequence of arrays - Sequence of arrays. - fill_value : {None, integer} - Value used to pad shorter iterables. 
- flatten : {True, False}, - Whether to - """ - # OK, that's a complete ripoff from Python2.6 itertools.izip_longest - def sentinel(counter=([fill_value] * (len(seqarrays) - 1)).pop): - "Yields the fill_value or raises IndexError" - yield counter() - # - fillers = itertools.repeat(fill_value) - iters = [itertools.chain(it, sentinel(), fillers) for it in seqarrays] - # Should we flatten the items, or just use a nested approach - if flatten: - zipfunc = _izip_fields_flat - else: - zipfunc = _izip_fields - # - try: - for tup in itertools.izip(*iters): - yield tuple(zipfunc(tup)) - except IndexError: - pass - - -def _fix_output(output, usemask=True, asrecarray=False): - """ - Private function: return a recarray, a ndarray, a MaskedArray - or a MaskedRecords depending on the input parameters - """ - if not isinstance(output, MaskedArray): - usemask = False - if usemask: - if asrecarray: - output = output.view(MaskedRecords) - else: - output = ma.filled(output) - if asrecarray: - output = output.view(recarray) - return output - - -def _fix_defaults(output, defaults=None): - """ - Update the fill_value and masked data of `output` - from the default given in a dictionary defaults. - """ - names = output.dtype.names - (data, mask, fill_value) = (output.data, output.mask, output.fill_value) - for (k, v) in (defaults or {}).iteritems(): - if k in names: - fill_value[k] = v - data[k][mask[k]] = v - return output - - - -def merge_arrays(seqarrays, - fill_value= -1, flatten=False, usemask=False, asrecarray=False): - """ - Merge arrays field by field. - - Parameters - ---------- - seqarrays : sequence of ndarrays - Sequence of arrays - fill_value : {float}, optional - Filling value used to pad missing data on the shorter arrays. - flatten : {False, True}, optional - Whether to collapse nested fields. - usemask : {False, True}, optional - Whether to return a masked array or not. - asrecarray : {False, True}, optional - Whether to return a recarray (MaskedRecords) or not. 
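The `izip_records` generator above pads the shorter input iterables with `fill_value`, using a hand-rolled sentinel/filler chain copied from Python 2.6's `itertools.izip_longest`. On modern Python the same padding behaviour is a one-liner:

```python
from itertools import zip_longest

# Pad the shorter sequence with a fill value while zipping -- the same
# behaviour izip_records builds by hand with sentinel() and fillers.
a = [1, 2]
b = [10.0, 20.0, 30.0]
rows = list(zip_longest(a, b, fillvalue=-1))
```

This padding is what lets `merge_arrays` below combine arrays of unequal length, with the missing slots filled (or masked).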
- - Examples - -------- - >>> from numpy.lib import recfunctions as rfn - >>> rfn.merge_arrays((np.array([1, 2]), np.array([10., 20., 30.]))) - masked_array(data = [(1, 10.0) (2, 20.0) (--, 30.0)], - mask = [(False, False) (False, False) (True, False)], - fill_value = (999999, 1e+20), - dtype = [('f0', '>> rfn.merge_arrays((np.array([1, 2]), np.array([10., 20., 30.])), - ... usemask=False) - array([(1, 10.0), (2, 20.0), (-1, 30.0)], - dtype=[('f0', '>> rfn.merge_arrays((np.array([1, 2]).view([('a', int)]), - ... np.array([10., 20., 30.])), - ... usemask=False, asrecarray=True) - rec.array([(1, 10.0), (2, 20.0), (-1, 30.0)], - dtype=[('a', '>> from numpy.lib import recfunctions as rfn - >>> a = np.array([(1, (2, 3.0)), (4, (5, 6.0))], - ... dtype=[('a', int), ('b', [('ba', float), ('bb', int)])]) - >>> rfn.drop_fields(a, 'a') - array([((2.0, 3),), ((5.0, 6),)], - dtype=[('b', [('ba', '>> rfn.drop_fields(a, 'ba') - array([(1, (3,)), (4, (6,))], - dtype=[('a', '>> rfn.drop_fields(a, ['ba', 'bb']) - array([(1,), (4,)], - dtype=[('a', '>> from numpy.lib import recfunctions as rfn - >>> a = np.array([(1, (2, [3.0, 30.])), (4, (5, [6.0, 60.]))], - ... dtype=[('a', int),('b', [('ba', float), ('bb', (float, 2))])]) - >>> rfn.rename_fields(a, {'a':'A', 'bb':'BB'}) - array([(1, (2.0, [3.0, 30.0])), (4, (5.0, [6.0, 60.0]))], - dtype=[('A', ' 1: - data = merge_arrays(data, flatten=True, usemask=usemask, - fill_value=fill_value) - else: - data = data.pop() - # - output = ma.masked_all(max(len(base), len(data)), - dtype=base.dtype.descr + data.dtype.descr) - output = recursive_fill_fields(base, output) - output = recursive_fill_fields(data, output) - # - return _fix_output(output, usemask=usemask, asrecarray=asrecarray) - - - -def rec_append_fields(base, names, data, dtypes=None): - """ - Add new fields to an existing array. - - The names of the fields are given with the `names` arguments, - the corresponding values with the `data` arguments. 
- If a single field is appended, `names`, `data` and `dtypes` do not have - to be lists but just values. - - Parameters - ---------- - base : array - Input array to extend. - names : string, sequence - String or sequence of strings corresponding to the names - of the new fields. - data : array or sequence of arrays - Array or sequence of arrays storing the fields to add to the base. - dtypes : sequence of datatypes, optional - Datatype or sequence of datatypes. - If None, the datatypes are estimated from the `data`. - - See Also - -------- - append_fields - - Returns - ------- - appended_array : np.recarray - """ - return append_fields(base, names, data=data, dtypes=dtypes, - asrecarray=True, usemask=False) - - - -def stack_arrays(arrays, defaults=None, usemask=True, asrecarray=False, - autoconvert=False): - """ - Superposes arrays fields by fields - - Parameters - ---------- - seqarrays : array or sequence - Sequence of input arrays. - defaults : dictionary, optional - Dictionary mapping field names to the corresponding default values. - usemask : {True, False}, optional - Whether to return a MaskedArray (or MaskedRecords is `asrecarray==True`) - or a ndarray. - asrecarray : {False, True}, optional - Whether to return a recarray (or MaskedRecords if `usemask==True`) or - just a flexible-type ndarray. - autoconvert : {False, True}, optional - Whether automatically cast the type of the field to the maximum. - - Examples - -------- - >>> from numpy.lib import recfunctions as rfn - >>> x = np.array([1, 2,]) - >>> rfn.stack_arrays(x) is x - True - >>> z = np.array([('A', 1), ('B', 2)], dtype=[('A', '|S3'), ('B', float)]) - >>> zz = np.array([('a', 10., 100.), ('b', 20., 200.), ('c', 30., 300.)], - ... 
dtype=[('A', '|S3'), ('B', float), ('C', float)]) - >>> test = rfn.stack_arrays((z,zz)) - >>> test - masked_array(data = [('A', 1.0, --) ('B', 2.0, --) ('a', 10.0, 100.0) ('b', 20.0, 200.0) - ('c', 30.0, 300.0)], - mask = [(False, False, True) (False, False, True) (False, False, False) - (False, False, False) (False, False, False)], - fill_value = ('N/A', 1e+20, 1e+20), - dtype = [('A', '|S3'), ('B', ' np.dtype(current_descr[-1]): - current_descr = list(current_descr) - current_descr[-1] = descr[1] - newdescr[nameidx] = tuple(current_descr) - elif descr[1] != current_descr[-1]: - raise TypeError("Incompatible type '%s' <> '%s'" % \ - (dict(newdescr)[name], descr[1])) - # Only one field: use concatenate - if len(newdescr) == 1: - output = ma.concatenate(seqarrays) - else: - # - output = ma.masked_all((np.sum(nrecords),), newdescr) - offset = np.cumsum(np.r_[0, nrecords]) - seen = [] - for (a, n, i, j) in zip(seqarrays, fldnames, offset[:-1], offset[1:]): - names = a.dtype.names - if names is None: - output['f%i' % len(seen)][i:j] = a - else: - for name in n: - output[name][i:j] = a[name] - if name not in seen: - seen.append(name) - # - return _fix_output(_fix_defaults(output, defaults), - usemask=usemask, asrecarray=asrecarray) - - - -def find_duplicates(a, key=None, ignoremask=True, return_index=False): - """ - Find the duplicates in a structured array along a given key - - Parameters - ---------- - a : array-like - Input array - key : {string, None}, optional - Name of the fields along which to check the duplicates. - If None, the search is performed by records - ignoremask : {True, False}, optional - Whether masked data should be discarded or considered as duplicates. - return_index : {False, True}, optional - Whether to return the indices of the duplicated values. - - Examples - -------- - >>> from numpy.lib import recfunctions as rfn - >>> ndtype = [('a', int)] - >>> a = np.ma.array([1, 1, 1, 2, 2, 3, 3], - ... 
mask=[0, 0, 1, 0, 0, 0, 1]).view(ndtype) - >>> rfn.find_duplicates(a, ignoremask=True, return_index=True) - ... # XXX: judging by the output, the ignoremask flag has no effect - """ - a = np.asanyarray(a).ravel() - # Get a dictionary of fields - fields = get_fieldstructure(a.dtype) - # Get the sorting data (by selecting the corresponding field) - base = a - if key: - for f in fields[key]: - base = base[f] - base = base[key] - # Get the sorting indices and the sorted data - sortidx = base.argsort() - sortedbase = base[sortidx] - sorteddata = sortedbase.filled() - # Compare the sorting data - flag = (sorteddata[:-1] == sorteddata[1:]) - # If masked data must be ignored, set the flag to false where needed - if ignoremask: - sortedmask = sortedbase.recordmask - flag[sortedmask[1:]] = False - flag = np.concatenate(([False], flag)) - # We need to take the point on the left as well (else we're missing it) - flag[:-1] = flag[:-1] + flag[1:] - duplicates = a[sortidx][flag] - if return_index: - return (duplicates, sortidx[flag]) - else: - return duplicates - - - -def join_by(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2', - defaults=None, usemask=True, asrecarray=False): - """ - Join arrays `r1` and `r2` on key `key`. - - The key should be either a string or a sequence of string corresponding - to the fields used to join the array. - An exception is raised if the `key` field cannot be found in the two input - arrays. - Neither `r1` nor `r2` should have any duplicates along `key`: the presence - of duplicates will make the output quite unreliable. Note that duplicates - are not looked for by the algorithm. - - Parameters - ---------- - key : {string, sequence} - A string or a sequence of strings corresponding to the fields used - for comparison. - r1, r2 : arrays - Structured arrays. - jointype : {'inner', 'outer', 'leftouter'}, optional - If 'inner', returns the elements common to both r1 and r2. 
- If 'outer', returns the common elements as well as the elements of r1 - not in r2 and the elements of r2 not in r1. - If 'leftouter', returns the common elements and the elements of r1 not - in r2. - r1postfix : string, optional - String appended to the names of the fields of r1 that are present in r2 - but absent from the key. - r2postfix : string, optional - String appended to the names of the fields of r2 that are present in r1 - but absent from the key. - defaults : {dictionary}, optional - Dictionary mapping field names to the corresponding default values. - usemask : {True, False}, optional - Whether to return a MaskedArray (or MaskedRecords if `asrecarray==True`) - or a ndarray. - asrecarray : {False, True}, optional - Whether to return a recarray (or MaskedRecords if `usemask==True`) or - just a flexible-type ndarray. - - Notes - ----- - * The output is sorted along the key. - * A temporary array is formed by dropping the fields not in the key for the - two arrays and concatenating the result. This array is then sorted, and - the common entries selected. The output is constructed by filling the fields - with the selected entries. Matching is not preserved if there are some - duplicates... 
- - """ - # Check jointype - if jointype not in ('inner', 'outer', 'leftouter'): - raise ValueError("The 'jointype' argument should be in 'inner', "\ - "'outer' or 'leftouter' (got '%s' instead)" % jointype) - # If we have a single key, put it in a tuple - if isinstance(key, basestring): - key = (key,) - - # Check the keys - for name in key: - if name not in r1.dtype.names: - raise ValueError('r1 does not have key field %s' % name) - if name not in r2.dtype.names: - raise ValueError('r2 does not have key field %s' % name) - - # Make sure we work with ravelled arrays - r1 = r1.ravel() - r2 = r2.ravel() - (nb1, nb2) = (len(r1), len(r2)) - (r1names, r2names) = (r1.dtype.names, r2.dtype.names) - - # Make temporary arrays of just the keys - r1k = drop_fields(r1, [n for n in r1names if n not in key]) - r2k = drop_fields(r2, [n for n in r2names if n not in key]) - - # Concatenate the two arrays for comparison - aux = ma.concatenate((r1k, r2k)) - idx_sort = aux.argsort(order=key) - aux = aux[idx_sort] - # - # Get the common keys - flag_in = ma.concatenate(([False], aux[1:] == aux[:-1])) - flag_in[:-1] = flag_in[1:] + flag_in[:-1] - idx_in = idx_sort[flag_in] - idx_1 = idx_in[(idx_in < nb1)] - idx_2 = idx_in[(idx_in >= nb1)] - nb1 - (r1cmn, r2cmn) = (len(idx_1), len(idx_2)) - if jointype == 'inner': - (r1spc, r2spc) = (0, 0) - elif jointype == 'outer': - idx_out = idx_sort[~flag_in] - idx_1 = np.concatenate((idx_1, idx_out[(idx_out < nb1)])) - idx_2 = np.concatenate((idx_2, idx_out[(idx_out >= nb1)] - nb1)) - (r1spc, r2spc) = (len(idx_1) - r1cmn, len(idx_2) - r2cmn) - elif jointype == 'leftouter': - idx_out = idx_sort[~flag_in] - idx_1 = np.concatenate((idx_1, idx_out[(idx_out < nb1)])) - (r1spc, r2spc) = (len(idx_1) - r1cmn, 0) - # Select the entries from each input - (s1, s2) = (r1[idx_1], r2[idx_2]) - # - # Build the new description of the output array ....... 
- # Start with the key fields - ndtype = [list(_) for _ in r1k.dtype.descr] - # Add the other fields - ndtype.extend(list(_) for _ in r1.dtype.descr if _[0] not in key) - # Find the new list of names (it may be different from r1names) - names = list(_[0] for _ in ndtype) - for desc in r2.dtype.descr: - desc = list(desc) - name = desc[0] - # Have we seen the current name already? - if name in names: - nameidx = names.index(name) - current = ndtype[nameidx] - # The current field is part of the key: take the largest dtype - if name in key: - current[-1] = max(desc[1], current[-1]) - # The current field is not part of the key: add the suffixes - else: - current[0] += r1postfix - desc[0] += r2postfix - ndtype.insert(nameidx + 1, desc) - #... we haven't: just add the description to the current list - else: - names.append(desc[0]) - ndtype.append(desc) - # Revert the elements to tuples - ndtype = [tuple(_) for _ in ndtype] - # Find the largest number of common fields: r1cmn and r2cmn should be equal, but... - cmn = max(r1cmn, r2cmn) - # Construct an empty array - output = ma.masked_all((cmn + r1spc + r2spc,), dtype=ndtype) - names = output.dtype.names - for f in r1names: - selected = s1[f] - if f not in names: - f += r1postfix - current = output[f] - current[:r1cmn] = selected[:r1cmn] - if jointype in ('outer', 'leftouter'): - current[cmn:cmn + r1spc] = selected[r1cmn:] - for f in r2names: - selected = s2[f] - if f not in names: - f += r2postfix - current = output[f] - current[:r2cmn] = selected[:r2cmn] - if (jointype == 'outer') and r2spc: - current[-r2spc:] = selected[r2cmn:] - # Sort and finalize the output - output.sort(order=key) - kwargs = dict(usemask=usemask, asrecarray=asrecarray) - return _fix_output(_fix_defaults(output, defaults), **kwargs) - - -def rec_join(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2', - defaults=None): - """ - Join arrays `r1` and `r2` on keys. - Alternative to join_by that always returns a np.recarray. 
- - See Also - -------- - join_by : equivalent function - """ - kwargs = dict(jointype=jointype, r1postfix=r1postfix, r2postfix=r2postfix, - defaults=defaults, usemask=False, asrecarray=True) - return join_by(key, r1, r2, **kwargs) diff --git a/pythonPackages/numpy/numpy/lib/scimath.py b/pythonPackages/numpy/numpy/lib/scimath.py deleted file mode 100755 index 48ed1dc25c..0000000000 --- a/pythonPackages/numpy/numpy/lib/scimath.py +++ /dev/null @@ -1,559 +0,0 @@ -""" -Wrapper functions to more user-friendly calling of certain math functions -whose output data-type is different than the input data-type in certain -domains of the input. - -For example, for functions like `log` with branch cuts, the versions in this -module provide the mathematically valid answers in the complex plane:: - - >>> import math - >>> from numpy.lib import scimath - >>> scimath.log(-math.exp(1)) == (1+1j*math.pi) - True - -Similarly, `sqrt`, other base logarithms, `power` and trig functions are -correctly handled. See their respective docstrings for specific examples. - -""" - -__all__ = ['sqrt', 'log', 'log2', 'logn','log10', 'power', 'arccos', - 'arcsin', 'arctanh'] - -import numpy.core.numeric as nx -import numpy.core.numerictypes as nt -from numpy.core.numeric import asarray, any -from numpy.lib.type_check import isreal - -_ln2 = nx.log(2.0) - -def _tocomplex(arr): - """Convert its input `arr` to a complex array. - - The input is returned as a complex array of the smallest type that will fit - the original data: types like single, byte, short, etc. become csingle, - while others become cdouble. - - A copy of the input is always made. - - Parameters - ---------- - arr : array - - Returns - ------- - array - An array with the same input data as the input but in complex form. 
- - Examples - -------- - - First, consider an input of type short: - - >>> a = np.array([1,2,3],np.short) - - >>> ac = np.lib.scimath._tocomplex(a); ac - array([ 1.+0.j, 2.+0.j, 3.+0.j], dtype=complex64) - - >>> ac.dtype - dtype('complex64') - - If the input is of type double, the output is correspondingly of the - complex double type as well: - - >>> b = np.array([1,2,3],np.double) - - >>> bc = np.lib.scimath._tocomplex(b); bc - array([ 1.+0.j, 2.+0.j, 3.+0.j]) - - >>> bc.dtype - dtype('complex128') - - Note that even if the input was complex to begin with, a copy is still - made, since the astype() method always copies: - - >>> c = np.array([1,2,3],np.csingle) - - >>> cc = np.lib.scimath._tocomplex(c); cc - array([ 1.+0.j, 2.+0.j, 3.+0.j], dtype=complex64) - - >>> c *= 2; c - array([ 2.+0.j, 4.+0.j, 6.+0.j], dtype=complex64) - - >>> cc - array([ 1.+0.j, 2.+0.j, 3.+0.j], dtype=complex64) - """ - if issubclass(arr.dtype.type, (nt.single, nt.byte, nt.short, nt.ubyte, - nt.ushort,nt.csingle)): - return arr.astype(nt.csingle) - else: - return arr.astype(nt.cdouble) - -def _fix_real_lt_zero(x): - """Convert `x` to complex if it has real, negative components. - - Otherwise, output is just the array version of the input (via asarray). - - Parameters - ---------- - x : array_like - - Returns - ------- - array - - Examples - -------- - >>> np.lib.scimath._fix_real_lt_zero([1,2]) - array([1, 2]) - - >>> np.lib.scimath._fix_real_lt_zero([-1,2]) - array([-1.+0.j, 2.+0.j]) - """ - x = asarray(x) - if any(isreal(x) & (x<0)): - x = _tocomplex(x) - return x - -def _fix_int_lt_zero(x): - """Convert `x` to double if it has real, negative components. - - Otherwise, output is just the array version of the input (via asarray). 
- - Parameters - ---------- - x : array_like - - Returns - ------- - array - - Examples - -------- - >>> np.lib.scimath._fix_int_lt_zero([1,2]) - array([1, 2]) - - >>> np.lib.scimath._fix_int_lt_zero([-1,2]) - array([-1., 2.]) - """ - x = asarray(x) - if any(isreal(x) & (x < 0)): - x = x * 1.0 - return x - -def _fix_real_abs_gt_1(x): - """Convert `x` to complex if it has real components x_i with abs(x_i)>1. - - Otherwise, output is just the array version of the input (via asarray). - - Parameters - ---------- - x : array_like - - Returns - ------- - array - - Examples - -------- - >>> np.lib.scimath._fix_real_abs_gt_1([0,1]) - array([0, 1]) - - >>> np.lib.scimath._fix_real_abs_gt_1([0,2]) - array([ 0.+0.j, 2.+0.j]) - """ - x = asarray(x) - if any(isreal(x) & (abs(x)>1)): - x = _tocomplex(x) - return x - -def sqrt(x): - """ - Compute the square root of x. - - For negative input elements, a complex value is returned - (unlike `numpy.sqrt` which returns NaN). - - Parameters - ---------- - x : array_like - The input value(s). - - Returns - ------- - out : ndarray or scalar - The square root of `x`. If `x` was a scalar, so is `out`, - otherwise an array is returned. - - See Also - -------- - numpy.sqrt - - Examples - -------- - For real, non-negative inputs this works just like `numpy.sqrt`: - - >>> np.lib.scimath.sqrt(1) - 1.0 - >>> np.lib.scimath.sqrt([1, 4]) - array([ 1., 2.]) - - But it automatically handles negative inputs: - - >>> np.lib.scimath.sqrt(-1) - (0.0+1.0j) - >>> np.lib.scimath.sqrt([-1,4]) - array([ 0.+1.j, 2.+0.j]) - - """ - x = _fix_real_lt_zero(x) - return nx.sqrt(x) - -def log(x): - """ - Compute the natural logarithm of `x`. - - Return the "principal value" (for a description of this, see `numpy.log`) - of :math:`log_e(x)`. For real `x > 0`, this is a real number (``log(0)`` - returns ``-inf`` and ``log(np.inf)`` returns ``inf``). Otherwise, the - complex principle value is returned. 
- - Parameters - ---------- - x : array_like - The value(s) whose log is (are) required. - - Returns - ------- - out : ndarray or scalar - The log of the `x` value(s). If `x` was a scalar, so is `out`, - otherwise an array is returned. - - See Also - -------- - numpy.log - - Notes - ----- - For a log() that returns ``NAN`` when real `x < 0`, use `numpy.log` - (note, however, that otherwise `numpy.log` and this `log` are identical, - i.e., both return ``-inf`` for `x = 0`, ``inf`` for `x = inf`, and, - notably, the complex principle value if ``x.imag != 0``). - - Examples - -------- - >>> np.emath.log(np.exp(1)) - 1.0 - - Negative arguments are handled "correctly" (recall that - ``exp(log(x)) == x`` does *not* hold for real ``x < 0``): - - >>> np.emath.log(-np.exp(1)) == (1 + np.pi * 1j) - True - - """ - x = _fix_real_lt_zero(x) - return nx.log(x) - -def log10(x): - """ - Compute the logarithm base 10 of `x`. - - Return the "principal value" (for a description of this, see - `numpy.log10`) of :math:`log_{10}(x)`. For real `x > 0`, this - is a real number (``log10(0)`` returns ``-inf`` and ``log10(np.inf)`` - returns ``inf``). Otherwise, the complex principle value is returned. - - Parameters - ---------- - x : array_like or scalar - The value(s) whose log base 10 is (are) required. - - Returns - ------- - out : ndarray or scalar - The log base 10 of the `x` value(s). If `x` was a scalar, so is `out`, - otherwise an array object is returned. - - See Also - -------- - numpy.log10 - - Notes - ----- - For a log10() that returns ``NAN`` when real `x < 0`, use `numpy.log10` - (note, however, that otherwise `numpy.log10` and this `log10` are - identical, i.e., both return ``-inf`` for `x = 0`, ``inf`` for `x = inf`, - and, notably, the complex principle value if ``x.imag != 0``). 
- - Examples - -------- - - (We set the printing precision so the example can be auto-tested) - - >>> np.set_printoptions(precision=4) - - >>> np.emath.log10(10**1) - 1.0 - - >>> np.emath.log10([-10**1, -10**2, 10**2]) - array([ 1.+1.3644j, 2.+1.3644j, 2.+0.j ]) - - """ - x = _fix_real_lt_zero(x) - return nx.log10(x) - -def logn(n, x): - """ - Take log base n of x. - - If `x` contains negative inputs, the answer is computed and returned in the - complex domain. - - Parameters - ---------- - n : int - The base in which the log is taken. - x : array_like - The value(s) whose log base `n` is (are) required. - - Returns - ------- - out : ndarray or scalar - The log base `n` of the `x` value(s). If `x` was a scalar, so is - `out`, otherwise an array is returned. - - Examples - -------- - >>> np.set_printoptions(precision=4) - - >>> np.lib.scimath.logn(2, [4, 8]) - array([ 2., 3.]) - >>> np.lib.scimath.logn(2, [-4, -8, 8]) - array([ 2.+4.5324j, 3.+4.5324j, 3.+0.j ]) - - """ - x = _fix_real_lt_zero(x) - n = _fix_real_lt_zero(n) - return nx.log(x)/nx.log(n) - -def log2(x): - """ - Compute the logarithm base 2 of `x`. - - Return the "principal value" (for a description of this, see - `numpy.log2`) of :math:`log_2(x)`. For real `x > 0`, this is - a real number (``log2(0)`` returns ``-inf`` and ``log2(np.inf)`` returns - ``inf``). Otherwise, the complex principle value is returned. - - Parameters - ---------- - x : array_like - The value(s) whose log base 2 is (are) required. - - Returns - ------- - out : ndarray or scalar - The log base 2 of the `x` value(s). If `x` was a scalar, so is `out`, - otherwise an array is returned. - - See Also - -------- - numpy.log2 - - Notes - ----- - For a log2() that returns ``NAN`` when real `x < 0`, use `numpy.log2` - (note, however, that otherwise `numpy.log2` and this `log2` are - identical, i.e., both return ``-inf`` for `x = 0`, ``inf`` for `x = inf`, - and, notably, the complex principle value if ``x.imag != 0``). 
- - Examples - -------- - We set the printing precision so the example can be auto-tested: - - >>> np.set_printoptions(precision=4) - - >>> np.emath.log2(8) - 3.0 - >>> np.emath.log2([-4, -8, 8]) - array([ 2.+4.5324j, 3.+4.5324j, 3.+0.j ]) - - """ - x = _fix_real_lt_zero(x) - return nx.log2(x) - -def power(x, p): - """ - Return x to the power p, (x**p). - - If `x` contains negative values, the output is converted to the - complex domain. - - Parameters - ---------- - x : array_like - The input value(s). - p : array_like of ints - The power(s) to which `x` is raised. If `x` contains multiple values, - `p` has to either be a scalar, or contain the same number of values - as `x`. In the latter case, the result is - ``x[0]**p[0], x[1]**p[1], ...``. - - Returns - ------- - out : ndarray or scalar - The result of ``x**p``. If `x` and `p` are scalars, so is `out`, - otherwise an array is returned. - - See Also - -------- - numpy.power - - Examples - -------- - >>> np.set_printoptions(precision=4) - - >>> np.lib.scimath.power([2, 4], 2) - array([ 4, 16]) - >>> np.lib.scimath.power([2, 4], -2) - array([ 0.25 , 0.0625]) - >>> np.lib.scimath.power([-2, 4], 2) - array([ 4.+0.j, 16.+0.j]) - - """ - x = _fix_real_lt_zero(x) - p = _fix_int_lt_zero(p) - return nx.power(x, p) - -def arccos(x): - """ - Compute the inverse cosine of x. - - Return the "principal value" (for a description of this, see - `numpy.arccos`) of the inverse cosine of `x`. For real `x` such that - `abs(x) <= 1`, this is a real number in the closed interval - :math:`[0, \\pi]`. Otherwise, the complex principle value is returned. - - Parameters - ---------- - x : array_like or scalar - The value(s) whose arccos is (are) required. - - Returns - ------- - out : ndarray or scalar - The inverse cosine(s) of the `x` value(s). If `x` was a scalar, so - is `out`, otherwise an array object is returned. 
- - See Also - -------- - numpy.arccos - - Notes - ----- - For an arccos() that returns ``NAN`` when real `x` is not in the - interval ``[-1,1]``, use `numpy.arccos`. - - Examples - -------- - >>> np.set_printoptions(precision=4) - - >>> np.emath.arccos(1) # a scalar is returned - 0.0 - - >>> np.emath.arccos([1,2]) - array([ 0.-0.j , 0.+1.317j]) - - """ - x = _fix_real_abs_gt_1(x) - return nx.arccos(x) - -def arcsin(x): - """ - Compute the inverse sine of x. - - Return the "principal value" (for a description of this, see - `numpy.arcsin`) of the inverse sine of `x`. For real `x` such that - `abs(x) <= 1`, this is a real number in the closed interval - :math:`[-\\pi/2, \\pi/2]`. Otherwise, the complex principle value is - returned. - - Parameters - ---------- - x : array_like or scalar - The value(s) whose arcsin is (are) required. - - Returns - ------- - out : ndarray or scalar - The inverse sine(s) of the `x` value(s). If `x` was a scalar, so - is `out`, otherwise an array object is returned. - - See Also - -------- - numpy.arcsin - - Notes - ----- - For an arcsin() that returns ``NAN`` when real `x` is not in the - interval ``[-1,1]``, use `numpy.arcsin`. - - Examples - -------- - >>> np.set_printoptions(precision=4) - - >>> np.emath.arcsin(0) - 0.0 - - >>> np.emath.arcsin([0,1]) - array([ 0. , 1.5708]) - - """ - x = _fix_real_abs_gt_1(x) - return nx.arcsin(x) - -def arctanh(x): - """ - Compute the inverse hyperbolic tangent of `x`. - - Return the "principal value" (for a description of this, see - `numpy.arctanh`) of `arctanh(x)`. For real `x` such that - `abs(x) < 1`, this is a real number. If `abs(x) > 1`, or if `x` is - complex, the result is complex. Finally, `x = 1` returns``inf`` and - `x=-1` returns ``-inf``. - - Parameters - ---------- - x : array_like - The value(s) whose arctanh is (are) required. - - Returns - ------- - out : ndarray or scalar - The inverse hyperbolic tangent(s) of the `x` value(s). 
If `x` was - a scalar so is `out`, otherwise an array is returned. - - - See Also - -------- - numpy.arctanh - - Notes - ----- - For an arctanh() that returns ``NAN`` when real `x` is not in the - interval ``(-1,1)``, use `numpy.arctanh` (this latter, however, does - return +/-inf for `x = +/-1`). - - Examples - -------- - >>> np.set_printoptions(precision=4) - - >>> np.emath.arctanh(np.matrix(np.eye(2))) - array([[ Inf, 0.], - [ 0., Inf]]) - >>> np.emath.arctanh([1j]) - array([ 0.+0.7854j]) - - """ - x = _fix_real_abs_gt_1(x) - return nx.arctanh(x) diff --git a/pythonPackages/numpy/numpy/lib/setup.py b/pythonPackages/numpy/numpy/lib/setup.py deleted file mode 100755 index e85fdb517d..0000000000 --- a/pythonPackages/numpy/numpy/lib/setup.py +++ /dev/null @@ -1,22 +0,0 @@ -from os.path import join - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - - config = Configuration('lib',parent_package,top_path) - - config.add_include_dirs(join('..','core','include')) - - - config.add_extension('_compiled_base', - sources=[join('src','_compiled_base.c')] - ) - - config.add_data_dir('benchmarks') - config.add_data_dir('tests') - - return config - -if __name__=='__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/lib/setupscons.py b/pythonPackages/numpy/numpy/lib/setupscons.py deleted file mode 100755 index 4f31f6e8ab..0000000000 --- a/pythonPackages/numpy/numpy/lib/setupscons.py +++ /dev/null @@ -1,16 +0,0 @@ -from os.path import join - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - - config = Configuration('lib',parent_package,top_path) - - config.add_sconscript('SConstruct', - source_files = [join('src', '_compiled_base.c')]) - config.add_data_dir('tests') - - return config - -if __name__=='__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) 
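All of the scimath wrappers deleted above share one mechanism: inspect the input, promote it to a complex (or floating) type whenever the real-valued ufunc would be undefined on part of its domain, then defer to the ordinary numpy function. A minimal sketch of that pattern for `sqrt` follows; the helper name `scimath_sqrt` is ours, not part of the module, and unlike the real `_tocomplex` it always promotes to `complex128` rather than choosing `csingle` for small input types.

```python
import numpy as np

def scimath_sqrt(x):
    # Simplified sketch of the _fix_real_lt_zero + nx.sqrt pattern:
    # promote real input containing negative values to complex, then defer
    # to the regular ufunc so the branch cut is handled in the complex plane.
    x = np.asarray(x)
    if not np.iscomplexobj(x) and np.any(x < 0):
        x = x.astype(np.complex128)
    return np.sqrt(x)

print(scimath_sqrt([-1.0, 4.0]))  # complex result instead of nan
print(scimath_sqrt([1.0, 4.0]))   # non-negative real input stays real
```

The same promote-then-dispatch shape underlies `log`, `log10`, `logn`, `log2`, and `power` (via `_fix_real_lt_zero` / `_fix_int_lt_zero`) and the inverse trig functions (via `_fix_real_abs_gt_1`).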
diff --git a/pythonPackages/numpy/numpy/lib/shape_base.py b/pythonPackages/numpy/numpy/lib/shape_base.py deleted file mode 100755 index 5ea2648cbe..0000000000 --- a/pythonPackages/numpy/numpy/lib/shape_base.py +++ /dev/null @@ -1,839 +0,0 @@ -__all__ = ['column_stack','row_stack', 'dstack','array_split','split','hsplit', - 'vsplit','dsplit','apply_over_axes','expand_dims', - 'apply_along_axis', 'kron', 'tile', 'get_array_wrap'] - -import numpy.core.numeric as _nx -from numpy.core.numeric import asarray, zeros, newaxis, outer, \ - concatenate, isscalar, array, asanyarray -from numpy.core.fromnumeric import product, reshape -from numpy.core import hstack, vstack, atleast_3d - -def apply_along_axis(func1d,axis,arr,*args): - """ - Apply a function to 1-D slices along the given axis. - - Execute `func1d(a, *args)` where `func1d` operates on 1-D arrays and `a` - is a 1-D slice of `arr` along `axis`. - - Parameters - ---------- - func1d : function - This function should accept 1-D arrays. It is applied to 1-D - slices of `arr` along the specified axis. - axis : integer - Axis along which `arr` is sliced. - arr : ndarray - Input array. - args : any - Additional arguments to `func1d`. - - Returns - ------- - outarr : ndarray - The output array. The shape of `outarr` is identical to the shape of - `arr`, except along the `axis` dimension, where the length of `outarr` - is equal to the size of the return value of `func1d`. If `func1d` - returns a scalar `outarr` will have one fewer dimensions than `arr`. - - See Also - -------- - apply_over_axes : Apply a function repeatedly over multiple axes. - - Examples - -------- - >>> def my_func(a): - ... \"\"\"Average first and last element of a 1-D array\"\"\" - ... 
return (a[0] + a[-1]) * 0.5 - >>> b = np.array([[1,2,3], [4,5,6], [7,8,9]]) - >>> np.apply_along_axis(my_func, 0, b) - array([ 4., 5., 6.]) - >>> np.apply_along_axis(my_func, 1, b) - array([ 2., 5., 8.]) - - For a function that doesn't return a scalar, the number of dimensions in - `outarr` is the same as `arr`. - - >>> def new_func(a): - ... \"\"\"Divide elements of a by 2.\"\"\" - ... return a * 0.5 - >>> b = np.array([[1,2,3], [4,5,6], [7,8,9]]) - >>> np.apply_along_axis(new_func, 0, b) - array([[ 0.5, 1. , 1.5], - [ 2. , 2.5, 3. ], - [ 3.5, 4. , 4.5]]) - - """ - arr = asarray(arr) - nd = arr.ndim - if axis < 0: - axis += nd - if (axis >= nd): - raise ValueError("axis must be less than arr.ndim; axis=%d, rank=%d." - % (axis,nd)) - ind = [0]*(nd-1) - i = zeros(nd,'O') - indlist = range(nd) - indlist.remove(axis) - i[axis] = slice(None,None) - outshape = asarray(arr.shape).take(indlist) - i.put(indlist, ind) - res = func1d(arr[tuple(i.tolist())],*args) - # if res is a number, then we have a smaller output array - if isscalar(res): - outarr = zeros(outshape,asarray(res).dtype) - outarr[tuple(ind)] = res - Ntot = product(outshape) - k = 1 - while k < Ntot: - # increment the index - ind[-1] += 1 - n = -1 - while (ind[n] >= outshape[n]) and (n > (1-nd)): - ind[n-1] += 1 - ind[n] = 0 - n -= 1 - i.put(indlist,ind) - res = func1d(arr[tuple(i.tolist())],*args) - outarr[tuple(ind)] = res - k += 1 - return outarr - else: - Ntot = product(outshape) - holdshape = outshape - outshape = list(arr.shape) - outshape[axis] = len(res) - outarr = zeros(outshape,asarray(res).dtype) - outarr[tuple(i.tolist())] = res - k = 1 - while k < Ntot: - # increment the index - ind[-1] += 1 - n = -1 - while (ind[n] >= holdshape[n]) and (n > (1-nd)): - ind[n-1] += 1 - ind[n] = 0 - n -= 1 - i.put(indlist, ind) - res = func1d(arr[tuple(i.tolist())],*args) - outarr[tuple(i.tolist())] = res - k += 1 - return outarr - - -def apply_over_axes(func, a, axes): - """ - Apply a function repeatedly over 
multiple axes. - - `func` is called as `res = func(a, axis)`, where `axis` is the first - element of `axes`. The result `res` of the function call must have - either the same dimensions as `a` or one less dimension. If `res` - has one less dimension than `a`, a dimension is inserted before - `axis`. The call to `func` is then repeated for each axis in `axes`, - with `res` as the first argument. - - Parameters - ---------- - func : function - This function must take two arguments, `func(a, axis)`. - a : array_like - Input array. - axes : array_like - Axes over which `func` is applied; the elements must be integers. - - Returns - ------- - val : ndarray - The output array. The number of dimensions is the same as `a`, - but the shape can be different. This depends on whether `func` - changes the shape of its output with respect to its input. - - See Also - -------- - apply_along_axis : - Apply a function to 1-D slices of an array along the given axis. - - Examples - -------- - >>> a = np.arange(24).reshape(2,3,4) - >>> a - array([[[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]], - [[12, 13, 14, 15], - [16, 17, 18, 19], - [20, 21, 22, 23]]]) - - Sum over axes 0 and 2. The result has same number of dimensions - as the original array: - - >>> np.apply_over_axes(np.sum, a, [0,2]) - array([[[ 60], - [ 92], - [124]]]) - - """ - val = asarray(a) - N = a.ndim - if array(axes).ndim == 0: - axes = (axes,) - for axis in axes: - if axis < 0: axis = N + axis - args = (val, axis) - res = func(*args) - if res.ndim == val.ndim: - val = res - else: - res = expand_dims(res,axis) - if res.ndim == val.ndim: - val = res - else: - raise ValueError, "function is not returning"\ - " an array of correct shape" - return val - -def expand_dims(a, axis): - """ - Expand the shape of an array. - - Insert a new axis, corresponding to a given position in the array shape. - - Parameters - ---------- - a : array_like - Input array. 
- axis : int - Position (amongst axes) where new axis is to be inserted. - - Returns - ------- - res : ndarray - Output array. The number of dimensions is one greater than that of - the input array. - - See Also - -------- - doc.indexing, atleast_1d, atleast_2d, atleast_3d - - Examples - -------- - >>> x = np.array([1,2]) - >>> x.shape - (2,) - - The following is equivalent to ``x[np.newaxis,:]`` or ``x[np.newaxis]``: - - >>> y = np.expand_dims(x, axis=0) - >>> y - array([[1, 2]]) - >>> y.shape - (1, 2) - - >>> y = np.expand_dims(x, axis=1) # Equivalent to x[:,newaxis] - >>> y - array([[1], - [2]]) - >>> y.shape - (2, 1) - - Note that some examples may use ``None`` instead of ``np.newaxis``. These - are the same objects: - - >>> np.newaxis is None - True - - """ - a = asarray(a) - shape = a.shape - if axis < 0: - axis = axis + len(shape) + 1 - return a.reshape(shape[:axis] + (1,) + shape[axis:]) - -row_stack = vstack - -def column_stack(tup): - """ - Stack 1-D arrays as columns into a 2-D array. - - Take a sequence of 1-D arrays and stack them as columns - to make a single 2-D array. 2-D arrays are stacked as-is, - just like with `hstack`. 1-D arrays are turned into 2-D columns - first. - - Parameters - ---------- - tup : sequence of 1-D or 2-D arrays. - Arrays to stack. All of them must have the same first dimension. - - Returns - ------- - stacked : 2-D array - The array formed by stacking the given arrays. - - See Also - -------- - hstack, vstack, concatenate - - Notes - ----- - This function is equivalent to ``np.vstack(tup).T``. - - Examples - -------- - >>> a = np.array((1,2,3)) - >>> b = np.array((2,3,4)) - >>> np.column_stack((a,b)) - array([[1, 2], - [2, 3], - [3, 4]]) - - """ - arrays = [] - for v in tup: - arr = array(v,copy=False,subok=True) - if arr.ndim < 2: - arr = array(arr,copy=False,subok=True,ndmin=2).T - arrays.append(arr) - return _nx.concatenate(arrays,1) - -def dstack(tup): - """ - Stack arrays in sequence depth wise (along third axis). 
- - Takes a sequence of arrays and stack them along the third axis - to make a single array. Rebuilds arrays divided by `dsplit`. - This is a simple way to stack 2D arrays (images) into a single - 3D array for processing. - - Parameters - ---------- - tup : sequence of arrays - Arrays to stack. All of them must have the same shape along all - but the third axis. - - Returns - ------- - stacked : ndarray - The array formed by stacking the given arrays. - - See Also - -------- - vstack : Stack along first axis. - hstack : Stack along second axis. - concatenate : Join arrays. - dsplit : Split array along third axis. - - Notes - ----- - Equivalent to ``np.concatenate(tup, axis=2)``. - - Examples - -------- - >>> a = np.array((1,2,3)) - >>> b = np.array((2,3,4)) - >>> np.dstack((a,b)) - array([[[1, 2], - [2, 3], - [3, 4]]]) - - >>> a = np.array([[1],[2],[3]]) - >>> b = np.array([[2],[3],[4]]) - >>> np.dstack((a,b)) - array([[[1, 2]], - [[2, 3]], - [[3, 4]]]) - - """ - return _nx.concatenate(map(atleast_3d,tup),2) - -def _replace_zero_by_x_arrays(sub_arys): - for i in range(len(sub_arys)): - if len(_nx.shape(sub_arys[i])) == 0: - sub_arys[i] = _nx.array([]) - elif _nx.sometrue(_nx.equal(_nx.shape(sub_arys[i]),0)): - sub_arys[i] = _nx.array([]) - return sub_arys - -def array_split(ary,indices_or_sections,axis = 0): - """ - Split an array into multiple sub-arrays of equal or near-equal size. - - Please refer to the ``split`` documentation. The only difference - between these functions is that ``array_split`` allows - `indices_or_sections` to be an integer that does *not* equally - divide the axis. - - See Also - -------- - split : Split array into multiple sub-arrays of equal size. - - Examples - -------- - >>> x = np.arange(8.0) - >>> np.array_split(x, 3) - [array([ 0., 1., 2.]), array([ 3., 4., 5.]), array([ 6., 7.])] - - """ - try: - Ntotal = ary.shape[axis] - except AttributeError: - Ntotal = len(ary) - try: # handle scalar case. 
- Nsections = len(indices_or_sections) + 1 - div_points = [0] + list(indices_or_sections) + [Ntotal] - except TypeError: #indices_or_sections is a scalar, not an array. - Nsections = int(indices_or_sections) - if Nsections <= 0: - raise ValueError, 'number of sections must be larger than 0.' - Neach_section,extras = divmod(Ntotal,Nsections) - section_sizes = [0] + \ - extras * [Neach_section+1] + \ - (Nsections-extras) * [Neach_section] - div_points = _nx.array(section_sizes).cumsum() - - sub_arys = [] - sary = _nx.swapaxes(ary,axis,0) - for i in range(Nsections): - st = div_points[i]; end = div_points[i+1] - sub_arys.append(_nx.swapaxes(sary[st:end],axis,0)) - - # there is a weird issue with array slicing that allows - # 0x10 arrays and other such things. The following kludge is needed - # to get around this issue. - sub_arys = _replace_zero_by_x_arrays(sub_arys) - # end kludge. - - return sub_arys - -def split(ary,indices_or_sections,axis=0): - """ - Split an array into multiple sub-arrays of equal size. - - Parameters - ---------- - ary : ndarray - Array to be divided into sub-arrays. - indices_or_sections : int or 1-D array - If `indices_or_sections` is an integer, N, the array will be divided - into N equal arrays along `axis`. If such a split is not possible, - an error is raised. - - If `indices_or_sections` is a 1-D array of sorted integers, the entries - indicate where along `axis` the array is split. For example, - ``[2, 3]`` would, for ``axis=0``, result in - - - ary[:2] - - ary[2:3] - - ary[3:] - - If an index exceeds the dimension of the array along `axis`, - an empty sub-array is returned correspondingly. - axis : int, optional - The axis along which to split, default is 0. - - Returns - ------- - sub-arrays : list of ndarrays - A list of sub-arrays. - - Raises - ------ - ValueError - If `indices_or_sections` is given as an integer, but - a split does not result in equal division. 
- - See Also - -------- - array_split : Split an array into multiple sub-arrays of equal or - near-equal size. Does not raise an exception if - an equal division cannot be made. - hsplit : Split array into multiple sub-arrays horizontally (column-wise). - vsplit : Split array into multiple sub-arrays vertically (row wise). - dsplit : Split array into multiple sub-arrays along the 3rd axis (depth). - concatenate : Join arrays together. - hstack : Stack arrays in sequence horizontally (column wise). - vstack : Stack arrays in sequence vertically (row wise). - dstack : Stack arrays in sequence depth wise (along third dimension). - - Examples - -------- - >>> x = np.arange(9.0) - >>> np.split(x, 3) - [array([ 0., 1., 2.]), array([ 3., 4., 5.]), array([ 6., 7., 8.])] - - >>> x = np.arange(8.0) - >>> np.split(x, [3, 5, 6, 10]) - [array([ 0., 1., 2.]), - array([ 3., 4.]), - array([ 5.]), - array([ 6., 7.]), - array([], dtype=float64)] - - """ - try: len(indices_or_sections) - except TypeError: - sections = indices_or_sections - N = ary.shape[axis] - if N % sections: - raise ValueError, 'array split does not result in an equal division' - res = array_split(ary,indices_or_sections,axis) - return res - -def hsplit(ary,indices_or_sections): - """ - Split an array into multiple sub-arrays horizontally (column-wise). - - Please refer to the `split` documentation. `hsplit` is equivalent - to `split` with ``axis=1``, the array is always split along the second - axis regardless of the array dimension. - - See Also - -------- - split : Split an array into multiple sub-arrays of equal size. 
- - Examples - -------- - >>> x = np.arange(16.0).reshape(4, 4) - >>> x - array([[ 0., 1., 2., 3.], - [ 4., 5., 6., 7.], - [ 8., 9., 10., 11.], - [ 12., 13., 14., 15.]]) - >>> np.hsplit(x, 2) - [array([[ 0., 1.], - [ 4., 5.], - [ 8., 9.], - [ 12., 13.]]), - array([[ 2., 3.], - [ 6., 7.], - [ 10., 11.], - [ 14., 15.]])] - >>> np.hsplit(x, np.array([3, 6])) - [array([[ 0., 1., 2.], - [ 4., 5., 6.], - [ 8., 9., 10.], - [ 12., 13., 14.]]), - array([[ 3.], - [ 7.], - [ 11.], - [ 15.]]), - array([], dtype=float64)] - - With a higher dimensional array the split is still along the second axis. - - >>> x = np.arange(8.0).reshape(2, 2, 2) - >>> x - array([[[ 0., 1.], - [ 2., 3.]], - [[ 4., 5.], - [ 6., 7.]]]) - >>> np.hsplit(x, 2) - [array([[[ 0., 1.]], - [[ 4., 5.]]]), - array([[[ 2., 3.]], - [[ 6., 7.]]])] - - """ - if len(_nx.shape(ary)) == 0: - raise ValueError, 'hsplit only works on arrays of 1 or more dimensions' - if len(ary.shape) > 1: - return split(ary,indices_or_sections,1) - else: - return split(ary,indices_or_sections,0) - -def vsplit(ary,indices_or_sections): - """ - Split an array into multiple sub-arrays vertically (row-wise). - - Please refer to the ``split`` documentation. ``vsplit`` is equivalent - to ``split`` with `axis=0` (default), the array is always split along the - first axis regardless of the array dimension. - - See Also - -------- - split : Split an array into multiple sub-arrays of equal size. - - Examples - -------- - >>> x = np.arange(16.0).reshape(4, 4) - >>> x - array([[ 0., 1., 2., 3.], - [ 4., 5., 6., 7.], - [ 8., 9., 10., 11.], - [ 12., 13., 14., 15.]]) - >>> np.vsplit(x, 2) - [array([[ 0., 1., 2., 3.], - [ 4., 5., 6., 7.]]), - array([[ 8., 9., 10., 11.], - [ 12., 13., 14., 15.]])] - >>> np.vsplit(x, np.array([3, 6])) - [array([[ 0., 1., 2., 3.], - [ 4., 5., 6., 7.], - [ 8., 9., 10., 11.]]), - array([[ 12., 13., 14., 15.]]), - array([], dtype=float64)] - - With a higher dimensional array the split is still along the first axis. 
- - >>> x = np.arange(8.0).reshape(2, 2, 2) - >>> x - array([[[ 0., 1.], - [ 2., 3.]], - [[ 4., 5.], - [ 6., 7.]]]) - >>> np.vsplit(x, 2) - [array([[[ 0., 1.], - [ 2., 3.]]]), - array([[[ 4., 5.], - [ 6., 7.]]])] - - """ - if len(_nx.shape(ary)) < 2: - raise ValueError, 'vsplit only works on arrays of 2 or more dimensions' - return split(ary,indices_or_sections,0) - -def dsplit(ary,indices_or_sections): - """ - Split array into multiple sub-arrays along the 3rd axis (depth). - - Please refer to the `split` documentation. `dsplit` is equivalent - to `split` with ``axis=2``, the array is always split along the third - axis provided the array dimension is greater than or equal to 3. - - See Also - -------- - split : Split an array into multiple sub-arrays of equal size. - - Examples - -------- - >>> x = np.arange(16.0).reshape(2, 2, 4) - >>> x - array([[[ 0., 1., 2., 3.], - [ 4., 5., 6., 7.]], - [[ 8., 9., 10., 11.], - [ 12., 13., 14., 15.]]]) - >>> np.dsplit(x, 2) - [array([[[ 0., 1.], - [ 4., 5.]], - [[ 8., 9.], - [ 12., 13.]]]), - array([[[ 2., 3.], - [ 6., 7.]], - [[ 10., 11.], - [ 14., 15.]]])] - >>> np.dsplit(x, np.array([3, 6])) - [array([[[ 0., 1., 2.], - [ 4., 5., 6.]], - [[ 8., 9., 10.], - [ 12., 13., 14.]]]), - array([[[ 3.], - [ 7.]], - [[ 11.], - [ 15.]]]), - array([], dtype=float64)] - - """ - if len(_nx.shape(ary)) < 3: - raise ValueError, 'dsplit only works on arrays of 3 or more dimensions' - return split(ary,indices_or_sections,2) - -def get_array_prepare(*args): - """Find the wrapper for the array with the highest priority. - - In case of ties, leftmost wins. If no wrapper is found, return None - """ - wrappers = [(getattr(x, '__array_priority__', 0), -i, - x.__array_prepare__) for i, x in enumerate(args) - if hasattr(x, '__array_prepare__')] - wrappers.sort() - if wrappers: - return wrappers[-1][-1] - return None - -def get_array_wrap(*args): - """Find the wrapper for the array with the highest priority. - - In case of ties, leftmost wins.
If no wrapper is found, return None - """ - wrappers = [(getattr(x, '__array_priority__', 0), -i, - x.__array_wrap__) for i, x in enumerate(args) - if hasattr(x, '__array_wrap__')] - wrappers.sort() - if wrappers: - return wrappers[-1][-1] - return None - -def kron(a,b): - """ - Kronecker product of two arrays. - - Computes the Kronecker product, a composite array made of blocks of the - second array scaled by the first. - - Parameters - ---------- - a, b : array_like - - Returns - ------- - out : ndarray - - See Also - -------- - - outer : The outer product - - Notes - ----- - - The function assumes that the number of dimensions of `a` and `b` - are the same, if necessary prepending the smallest with ones. - If `a.shape = (r0,r1,..,rN)` and `b.shape = (s0,s1,...,sN)`, - the Kronecker product has shape `(r0*s0, r1*s1, ..., rN*sN)`. - The elements are products of elements from `a` and `b`, organized - explicitly by:: - - kron(a,b)[k0,k1,...,kN] = a[i0,i1,...,iN] * b[j0,j1,...,jN] - - where:: - - kt = it * st + jt, t = 0,...,N - - In the common 2-D case (N=1), the block structure can be visualized:: - - [[ a[0,0]*b, a[0,1]*b, ... , a[0,-1]*b ], - [ ... ... ], - [ a[-1,0]*b, a[-1,1]*b, ...
, a[-1,-1]*b ]] - - - Examples - -------- - >>> np.kron([1,10,100], [5,6,7]) - array([ 5, 6, 7, 50, 60, 70, 500, 600, 700]) - >>> np.kron([5,6,7], [1,10,100]) - array([ 5, 50, 500, 6, 60, 600, 7, 70, 700]) - - >>> np.kron(np.eye(2), np.ones((2,2))) - array([[ 1., 1., 0., 0.], - [ 1., 1., 0., 0.], - [ 0., 0., 1., 1.], - [ 0., 0., 1., 1.]]) - - >>> a = np.arange(100).reshape((2,5,2,5)) - >>> b = np.arange(24).reshape((2,3,4)) - >>> c = np.kron(a,b) - >>> c.shape - (2, 10, 6, 20) - >>> I = (1,3,0,2) - >>> J = (0,2,1) - >>> J1 = (0,) + J # extend to ndim=4 - >>> S1 = (1,) + b.shape - >>> K = tuple(np.array(I) * np.array(S1) + np.array(J1)) - >>> c[K] == a[I]*b[J] - True - - """ - b = asanyarray(b) - a = array(a,copy=False,subok=True,ndmin=b.ndim) - ndb, nda = b.ndim, a.ndim - if (nda == 0 or ndb == 0): - return _nx.multiply(a,b) - as_ = a.shape - bs = b.shape - if not a.flags.contiguous: - a = reshape(a, as_) - if not b.flags.contiguous: - b = reshape(b, bs) - nd = ndb - if (ndb != nda): - if (ndb > nda): - as_ = (1,)*(ndb-nda) + as_ - else: - bs = (1,)*(nda-ndb) + bs - nd = nda - result = outer(a,b).reshape(as_+bs) - axis = nd-1 - for _ in xrange(nd): - result = concatenate(result, axis=axis) - wrapper = get_array_prepare(a, b) - if wrapper is not None: - result = wrapper(result) - wrapper = get_array_wrap(a, b) - if wrapper is not None: - result = wrapper(result) - return result - - -def tile(A, reps): - """ - Construct an array by repeating A the number of times given by reps. - - If `reps` has length ``d``, the result will have dimension of - ``max(d, A.ndim)``. - - If ``A.ndim < d``, `A` is promoted to be d-dimensional by prepending new - axes. So a shape (3,) array is promoted to (1, 3) for 2-D replication, - or shape (1, 1, 3) for 3-D replication. If this is not the desired - behavior, promote `A` to d-dimensions manually before calling this - function. - - If ``A.ndim > d``, `reps` is promoted to `A`.ndim by pre-pending 1's to it. 
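The `tile` promotion rules just described (pad the shorter of `A.shape` and `reps` with leading 1s, then multiply axis-by-axis) can be modelled as a pure-Python shape calculation. `tile_shape` is a hypothetical helper for illustration only, not part of NumPy:

```python
def tile_shape(a_shape, reps):
    """Result shape of tile(A, reps) for an array of shape a_shape."""
    try:
        reps = tuple(reps)
    except TypeError:
        reps = (reps,)          # a scalar reps behaves like a 1-tuple
    d = max(len(reps), len(a_shape))
    a_shape = (1,) * (d - len(a_shape)) + tuple(a_shape)  # promote A
    reps = (1,) * (d - len(reps)) + reps                  # promote reps
    return tuple(r * s for r, s in zip(reps, a_shape))
```

For instance, a shape (2, 3, 4, 5) array tiled with reps (2, 2) yields shape (2, 3, 8, 10), exactly as the docstring's (1, 1, 2, 2) promotion predicts.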
- Thus for an `A` of shape (2, 3, 4, 5), a `reps` of (2, 2) is treated as - (1, 1, 2, 2). - - Parameters - ---------- - A : array_like - The input array. - reps : array_like - The number of repetitions of `A` along each axis. - - Returns - ------- - c : ndarray - The tiled output array. - - See Also - -------- - repeat : Repeat elements of an array. - - Examples - -------- - >>> a = np.array([0, 1, 2]) - >>> np.tile(a, 2) - array([0, 1, 2, 0, 1, 2]) - >>> np.tile(a, (2, 2)) - array([[0, 1, 2, 0, 1, 2], - [0, 1, 2, 0, 1, 2]]) - >>> np.tile(a, (2, 1, 2)) - array([[[0, 1, 2, 0, 1, 2]], - [[0, 1, 2, 0, 1, 2]]]) - - >>> b = np.array([[1, 2], [3, 4]]) - >>> np.tile(b, 2) - array([[1, 2, 1, 2], - [3, 4, 3, 4]]) - >>> np.tile(b, (2, 1)) - array([[1, 2], - [3, 4], - [1, 2], - [3, 4]]) - - """ - try: - tup = tuple(reps) - except TypeError: - tup = (reps,) - d = len(tup) - c = _nx.array(A,copy=False,subok=True,ndmin=d) - shape = list(c.shape) - n = max(c.size,1) - if (d < c.ndim): - tup = (1,)*(c.ndim-d) + tup - for i, nrep in enumerate(tup): - if nrep!=1: - c = c.reshape(-1,n).repeat(nrep,0) - dim_in = shape[i] - dim_out = dim_in*nrep - shape[i] = dim_out - n /= max(dim_in,1) - return c.reshape(shape) diff --git a/pythonPackages/numpy/numpy/lib/src/_compiled_base.c b/pythonPackages/numpy/numpy/lib/src/_compiled_base.c deleted file mode 100755 index 466fa0202c..0000000000 --- a/pythonPackages/numpy/numpy/lib/src/_compiled_base.c +++ /dev/null @@ -1,988 +0,0 @@ -#include "Python.h" -#include "structmember.h" -#include "numpy/noprefix.h" -#include "npy_config.h" - -static intp -incr_slot_(double x, double *bins, intp lbins) -{ - intp i; - - for ( i = 0; i < lbins; i ++ ) { - if ( x < bins [i] ) { - return i; - } - } - return lbins; -} - -static intp -decr_slot_(double x, double * bins, intp lbins) -{ - intp i; - - for ( i = lbins - 1; i >= 0; i -- ) { - if (x < bins [i]) { - return i + 1; - } - } - return 0; -} - -static int -monotonic_(double * a, int lena) -{ - int i; - - if 
(a [0] <= a [1]) { - /* possibly monotonic increasing */ - for (i = 1; i < lena - 1; i ++) { - if (a [i] > a [i + 1]) { - return 0; - } - } - return 1; - } - else { - /* possibly monotonic decreasing */ - for (i = 1; i < lena - 1; i ++) { - if (a [i] < a [i + 1]) { - return 0; - } - } - return -1; - } -} - - - -/* find the index of the maximum element of an integer array */ -static intp -mxx (intp *i , intp len) -{ - intp mx = 0, max = i[0]; - intp j; - - for ( j = 1; j < len; j ++ ) { - if ( i [j] > max ) { - max = i [j]; - mx = j; - } - } - return mx; -} - -/* find the index of the minimum element of an integer array */ -static intp -mnx (intp *i , intp len) -{ - intp mn = 0, min = i [0]; - intp j; - - for ( j = 1; j < len; j ++ ) - if ( i [j] < min ) - {min = i [j]; - mn = j;} - return mn; -} - - -/* - * arr_bincount is registered as bincount. - * - * bincount accepts one or two arguments. The first is an array of - * non-negative integers and the second, if present, is an array of weights, - * which must be promotable to double. Call these arguments list and - * weight. Both must be one-dimensional with len(weight) == len(list). If - * weight is not present then bincount(list)[i] is the number of occurrences - * of i in list. If weight is present then bincount(self,list, weight)[i] - * is the sum of all weight[j] where list [j] == i. Self is not used. 
- */ -static PyObject * -arr_bincount(PyObject *NPY_UNUSED(self), PyObject *args, PyObject *kwds) -{ - PyArray_Descr *type; - PyObject *list = NULL, *weight=Py_None; - PyObject *lst=NULL, *ans=NULL, *wts=NULL; - intp *numbers, *ians, len , mxi, mni, ans_size; - int i; - double *weights , *dans; - static char *kwlist[] = {"list", "weights", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O", - kwlist, &list, &weight)) { - goto fail; - } - if (!(lst = PyArray_ContiguousFromAny(list, PyArray_INTP, 1, 1))) { - goto fail; - } - len = PyArray_SIZE(lst); - if (len < 1) { - PyErr_SetString(PyExc_ValueError, - "The first argument cannot be empty."); - goto fail; - } - numbers = (intp *) PyArray_DATA(lst); - mxi = mxx(numbers, len); - mni = mnx(numbers, len); - if (numbers[mni] < 0) { - PyErr_SetString(PyExc_ValueError, - "The first argument of bincount must be non-negative"); - goto fail; - } - ans_size = numbers [mxi] + 1; - type = PyArray_DescrFromType(PyArray_INTP); - if (weight == Py_None) { - if (!(ans = PyArray_Zeros(1, &ans_size, type, 0))) { - goto fail; - } - ians = (intp *)(PyArray_DATA(ans)); - for (i = 0; i < len; i++) - ians [numbers [i]] += 1; - Py_DECREF(lst); - } - else { - if (!(wts = PyArray_ContiguousFromAny(weight, PyArray_DOUBLE, 1, 1))) { - goto fail; - } - weights = (double *)PyArray_DATA (wts); - if (PyArray_SIZE(wts) != len) { - PyErr_SetString(PyExc_ValueError, - "The weights and list don't have the same length."); - goto fail; - } - type = PyArray_DescrFromType(PyArray_DOUBLE); - if (!(ans = PyArray_Zeros(1, &ans_size, type, 0))) { - goto fail; - } - dans = (double *)PyArray_DATA (ans); - for (i = 0; i < len; i++) { - dans[numbers[i]] += weights[i]; - } - Py_DECREF(lst); - Py_DECREF(wts); - } - return ans; - -fail: - Py_XDECREF(lst); - Py_XDECREF(wts); - Py_XDECREF(ans); - return NULL; -} - - -/* - * digitize (x, bins) returns an array of python integers the same - * length of x. 
The values i returned are such that bins [i - 1] <= x < - * bins [i] if bins is monotonically increasing, or bins [i - 1] > x >= - * bins [i] if bins is monotonically decreasing. Beyond the bounds of - * bins, returns either i = 0 or i = len (bins) as appropriate. - */ -static PyObject * -arr_digitize(PyObject *NPY_UNUSED(self), PyObject *args, PyObject *kwds) -{ - /* self is not used */ - PyObject *ox, *obins; - PyObject *ax = NULL, *abins = NULL, *aret = NULL; - double *dx, *dbins; - intp lbins, lx; /* lengths */ - intp *iret; - int m, i; - static char *kwlist[] = {"x", "bins", NULL}; - PyArray_Descr *type; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO", kwlist, &ox, &obins)) { - goto fail; - } - type = PyArray_DescrFromType(PyArray_DOUBLE); - if (!(ax = PyArray_FromAny(ox, type, 1, 1, CARRAY, NULL))) { - goto fail; - } - Py_INCREF(type); - if (!(abins = PyArray_FromAny(obins, type, 1, 1, CARRAY, NULL))) { - goto fail; - } - - lx = PyArray_SIZE(ax); - dx = (double *)PyArray_DATA(ax); - lbins = PyArray_SIZE(abins); - dbins = (double *)PyArray_DATA(abins); - if (!(aret = PyArray_SimpleNew(1, &lx, PyArray_INTP))) { - goto fail; - } - iret = (intp *)PyArray_DATA(aret); - - if (lx <= 0 || lbins <= 0) { - PyErr_SetString(PyExc_ValueError, - "Both x and bins must have non-zero length"); - goto fail; - } - - if (lbins == 1) { - for (i = 0; i < lx; i++) { - if (dx [i] >= dbins[0]) { - iret[i] = 1; - } - else { - iret[i] = 0; - } - } - } - else { - m = monotonic_ (dbins, lbins); - if ( m == -1 ) { - for ( i = 0; i < lx; i ++ ) { - iret [i] = decr_slot_ ((double)dx[i], dbins, lbins); - } - } - else if ( m == 1 ) { - for ( i = 0; i < lx; i ++ ) { - iret [i] = incr_slot_ ((double)dx[i], dbins, lbins); - } - } - else { - PyErr_SetString(PyExc_ValueError, - "The bins must be monotonically increasing or decreasing"); - goto fail; - } - } - - Py_DECREF(ax); - Py_DECREF(abins); - return aret; - -fail: - Py_XDECREF(ax); - Py_XDECREF(abins); - Py_XDECREF(aret); - return NULL;
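For the increasing-bins case, the behaviour of `incr_slot_` driven by `arr_digitize` above reduces to a linear scan. A pure-Python sketch (hypothetical `digitize_ref`, for illustration only):

```python
def digitize_ref(x, bins):
    """Index i for each value v such that bins[i-1] <= v < bins[i],
    assuming bins is monotonically increasing (mirrors incr_slot_)."""
    result = []
    for v in x:
        i = 0
        while i < len(bins) and v >= bins[i]:
            i += 1
        result.append(i)  # 0 or len(bins) beyond the bounds of bins
    return result
```

The decreasing-bins case (`decr_slot_`) is symmetric: it scans from the end and returns `i + 1` for the first bin edge below the value.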
-} - - - -static char arr_insert__doc__[] = "Insert vals sequentially into equivalent 1-d positions indicated by mask."; - -/* - * Returns input array with values inserted sequentially into places - * indicated by the mask - */ -static PyObject * -arr_insert(PyObject *NPY_UNUSED(self), PyObject *args, PyObject *kwdict) -{ - PyObject *mask = NULL, *vals = NULL; - PyArrayObject *ainput = NULL, *amask = NULL, *avals = NULL, *tmp = NULL; - int numvals, totmask, sameshape; - char *input_data, *mptr, *vptr, *zero = NULL; - int melsize, delsize, copied, nd; - intp *instrides, *inshape; - int mindx, rem_indx, indx, i, k, objarray; - - static char *kwlist[] = {"input", "mask", "vals", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwdict, "O&OO", kwlist, - PyArray_Converter, &ainput, - &mask, &vals)) { - goto fail; - } - - amask = (PyArrayObject *) PyArray_FROM_OF(mask, CARRAY); - if (amask == NULL) { - goto fail; - } - /* Cast an object array */ - if (amask->descr->type_num == PyArray_OBJECT) { - tmp = (PyArrayObject *)PyArray_Cast(amask, PyArray_INTP); - if (tmp == NULL) { - goto fail; - } - Py_DECREF(amask); - amask = tmp; - } - - sameshape = 1; - if (amask->nd == ainput->nd) { - for (k = 0; k < amask->nd; k++) { - if (amask->dimensions[k] != ainput->dimensions[k]) { - sameshape = 0; - } - } - } - else { - /* Test to see if amask is 1d */ - if (amask->nd != 1) { - sameshape = 0; - } - else if ((PyArray_SIZE(ainput)) != PyArray_SIZE(amask)) { - sameshape = 0; - } - } - if (!sameshape) { - PyErr_SetString(PyExc_TypeError, - "mask array must be 1-d or same shape as input array"); - goto fail; - } - - avals = (PyArrayObject *)PyArray_FromObject(vals, ainput->descr->type_num, 0, 1); - if (avals == NULL) { - goto fail; - } - numvals = PyArray_SIZE(avals); - nd = ainput->nd; - input_data = ainput->data; - mptr = amask->data; - melsize = amask->descr->elsize; - vptr = avals->data; - delsize = avals->descr->elsize; - zero = PyArray_Zero(amask); - if (zero == NULL) { - goto 
fail; - } - objarray = (ainput->descr->type_num == PyArray_OBJECT); - - /* Handle zero-dimensional case separately */ - if (nd == 0) { - if (memcmp(mptr,zero,melsize) != 0) { - /* Copy value element over to input array */ - memcpy(input_data,vptr,delsize); - if (objarray) { - Py_INCREF(*((PyObject **)vptr)); - } - } - Py_DECREF(amask); - Py_DECREF(avals); - PyDataMem_FREE(zero); - Py_DECREF(ainput); - Py_INCREF(Py_None); - return Py_None; - } - - /* - * Walk through mask array, when non-zero is encountered - * copy next value in the vals array to the input array. - * If we get through the value array, repeat it as necessary. - */ - totmask = (int) PyArray_SIZE(amask); - copied = 0; - instrides = ainput->strides; - inshape = ainput->dimensions; - for (mindx = 0; mindx < totmask; mindx++) { - if (memcmp(mptr,zero,melsize) != 0) { - /* compute indx into input array */ - rem_indx = mindx; - indx = 0; - for(i = nd - 1; i > 0; --i) { - indx += (rem_indx % inshape[i]) * instrides[i]; - rem_indx /= inshape[i]; - } - indx += rem_indx * instrides[0]; - /* fprintf(stderr, "mindx = %d, indx=%d\n", mindx, indx); */ - /* Copy value element over to input array */ - memcpy(input_data+indx,vptr,delsize); - if (objarray) { - Py_INCREF(*((PyObject **)vptr)); - } - vptr += delsize; - copied += 1; - /* If we move past value data. Reset */ - if (copied >= numvals) { - vptr = avals->data; - } - } - mptr += melsize; - } - - Py_DECREF(amask); - Py_DECREF(avals); - PyDataMem_FREE(zero); - Py_DECREF(ainput); - Py_INCREF(Py_None); - return Py_None; - -fail: - PyDataMem_FREE(zero); - Py_XDECREF(ainput); - Py_XDECREF(amask); - Py_XDECREF(avals); - return NULL; -} - -/** @brief Use bisection on a sorted array to find first entry > key. - * - * Use bisection to find an index i s.t. arr[i] <= key < arr[i + 1]. 
If there is - * no such i the error returns are: - * key < arr[0] -- -1 - * key == arr[len - 1] -- len - 1 - * key > arr[len - 1] -- len - * The array is assumed contiguous and sorted in ascending order. - * - * @param key key value. - * @param arr contiguous sorted array to be searched. - * @param len length of the array. - * @return index - */ -static npy_intp -binary_search(double key, double arr [], npy_intp len) -{ - npy_intp imin = 0; - npy_intp imax = len; - - if (key > arr[len - 1]) { - return len; - } - while (imin < imax) { - npy_intp imid = imin + ((imax - imin) >> 1); - if (key >= arr[imid]) { - imin = imid + 1; - } - else { - imax = imid; - } - } - return imin - 1; -} - -static PyObject * -arr_interp(PyObject *NPY_UNUSED(self), PyObject *args, PyObject *kwdict) -{ - - PyObject *fp, *xp, *x; - PyObject *left = NULL, *right = NULL; - PyArrayObject *afp = NULL, *axp = NULL, *ax = NULL, *af = NULL; - npy_intp i, lenx, lenxp, indx; - double lval, rval; - double *dy, *dx, *dz, *dres, *slopes; - - static char *kwlist[] = {"x", "xp", "fp", "left", "right", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwdict, "OOO|OO", kwlist, - &x, &xp, &fp, &left, &right)) { - return NULL; - } - - afp = (NPY_AO*)PyArray_ContiguousFromAny(fp, NPY_DOUBLE, 1, 1); - if (afp == NULL) { - return NULL; - } - axp = (NPY_AO*)PyArray_ContiguousFromAny(xp, NPY_DOUBLE, 1, 1); - if (axp == NULL) { - goto fail; - } - ax = (NPY_AO*)PyArray_ContiguousFromAny(x, NPY_DOUBLE, 1, 0); - if (ax == NULL) { - goto fail; - } - lenxp = axp->dimensions[0]; - if (lenxp == 0) { - PyErr_SetString(PyExc_ValueError, - "array of sample points is empty"); - goto fail; - } - if (afp->dimensions[0] != lenxp) { - PyErr_SetString(PyExc_ValueError, - "fp and xp are not of the same length."); - goto fail; - } - - af = (NPY_AO*)PyArray_SimpleNew(ax->nd, ax->dimensions, NPY_DOUBLE); - if (af == NULL) { - goto fail; - } - lenx = PyArray_SIZE(ax); - - dy = (double *)PyArray_DATA(afp); - dx = (double 
*)PyArray_DATA(axp); - dz = (double *)PyArray_DATA(ax); - dres = (double *)PyArray_DATA(af); - - /* Get left and right fill values. */ - if ((left == NULL) || (left == Py_None)) { - lval = dy[0]; - } - else { - lval = PyFloat_AsDouble(left); - if ((lval == -1) && PyErr_Occurred()) { - goto fail; - } - } - if ((right == NULL) || (right == Py_None)) { - rval = dy[lenxp-1]; - } - else { - rval = PyFloat_AsDouble(right); - if ((rval == -1) && PyErr_Occurred()) { - goto fail; - } - } - - slopes = (double *) PyDataMem_NEW((lenxp - 1)*sizeof(double)); - for (i = 0; i < lenxp - 1; i++) { - slopes[i] = (dy[i + 1] - dy[i])/(dx[i + 1] - dx[i]); - } - for (i = 0; i < lenx; i++) { - indx = binary_search(dz[i], dx, lenxp); - if (indx == -1) { - dres[i] = lval; - } - else if (indx == lenxp - 1) { - dres[i] = dy[indx]; - } - else if (indx == lenxp) { - dres[i] = rval; - } - else { - dres[i] = slopes[indx]*(dz[i] - dx[indx]) + dy[indx]; - } - } - - PyDataMem_FREE(slopes); - Py_DECREF(afp); - Py_DECREF(axp); - Py_DECREF(ax); - return (PyObject *)af; - -fail: - Py_XDECREF(afp); - Py_XDECREF(axp); - Py_XDECREF(ax); - Py_XDECREF(af); - return NULL; -} - - - -static PyTypeObject *PyMemberDescr_TypePtr = NULL; -static PyTypeObject *PyGetSetDescr_TypePtr = NULL; -static PyTypeObject *PyMethodDescr_TypePtr = NULL; - -/* Can only be called if doc is currently NULL */ -static PyObject * -arr_add_docstring(PyObject *NPY_UNUSED(dummy), PyObject *args) -{ - PyObject *obj; - PyObject *str; - char *docstr; - static char *msg = "already has a docstring"; - - /* Don't add docstrings */ - if (Py_OptimizeFlag > 1) { - Py_INCREF(Py_None); - return Py_None; - } -#if defined(NPY_PY3K) - if (!PyArg_ParseTuple(args, "OO!", &obj, &PyUnicode_Type, &str)) { - return NULL; - } - - docstr = PyBytes_AS_STRING(PyUnicode_AsUTF8String(str)); -#else - if (!PyArg_ParseTuple(args, "OO!", &obj, &PyString_Type, &str)) { - return NULL; - } - - docstr = PyString_AS_STRING(str); -#endif - -#define _TESTDOC1(typebase) 
(obj->ob_type == &Py##typebase##_Type) -#define _TESTDOC2(typebase) (obj->ob_type == Py##typebase##_TypePtr) -#define _ADDDOC(typebase, doc, name) do { \ - Py##typebase##Object *new = (Py##typebase##Object *)obj; \ - if (!(doc)) { \ - doc = docstr; \ - } \ - else { \ - PyErr_Format(PyExc_RuntimeError, "%s method %s", name, msg); \ - return NULL; \ - } \ - } while (0) - - if (_TESTDOC1(CFunction)) { - _ADDDOC(CFunction, new->m_ml->ml_doc, new->m_ml->ml_name); - } - else if (_TESTDOC1(Type)) { - _ADDDOC(Type, new->tp_doc, new->tp_name); - } - else if (_TESTDOC2(MemberDescr)) { - _ADDDOC(MemberDescr, new->d_member->doc, new->d_member->name); - } - else if (_TESTDOC2(GetSetDescr)) { - _ADDDOC(GetSetDescr, new->d_getset->doc, new->d_getset->name); - } - else if (_TESTDOC2(MethodDescr)) { - _ADDDOC(MethodDescr, new->d_method->ml_doc, new->d_method->ml_name); - } - else { - PyObject *doc_attr; - - doc_attr = PyObject_GetAttrString(obj, "__doc__"); - if (doc_attr != NULL && doc_attr != Py_None) { - PyErr_Format(PyExc_RuntimeError, "object %s", msg); - return NULL; - } - Py_XDECREF(doc_attr); - - if (PyObject_SetAttrString(obj, "__doc__", str) < 0) { - PyErr_SetString(PyExc_TypeError, - "Cannot set a docstring for that object"); - return NULL; - } - Py_INCREF(Py_None); - return Py_None; - } - -#undef _TESTDOC1 -#undef _TESTDOC2 -#undef _ADDDOC - - Py_INCREF(str); - Py_INCREF(Py_None); - return Py_None; -} - - -/* PACKBITS - * - * This function packs binary (0 or 1) 1-bit per pixel arrays - * into contiguous bytes. - * - */ - -static void -_packbits( void *In, - int element_size, /* in bytes */ - npy_intp in_N, - npy_intp in_stride, - void *Out, - npy_intp out_N, - npy_intp out_stride -) -{ - char build; - int i, index; - npy_intp out_Nm1; - int maxi, remain, nonzero, j; - char *outptr,*inptr; - - outptr = Out; /* pointer to output buffer */ - inptr = In; /* pointer to input buffer */ - - /* - * Loop through the elements of In - * Determine whether or not it is nonzero. 
- - * Yes: set corresponding bit (and adjust build value) - * No: move on - * Every 8th value, set the value of build and increment the outptr - */ - - remain = in_N % 8; /* uneven bits */ - if (remain == 0) { - remain = 8; - } - out_Nm1 = out_N - 1; - for (index = 0; index < out_N; index++) { - build = 0; - maxi = (index != out_Nm1 ? 8 : remain); - for (i = 0; i < maxi; i++) { - build <<= 1; - nonzero = 0; - for (j = 0; j < element_size; j++) { - nonzero += (*(inptr++) != 0); - } - inptr += (in_stride - element_size); - build += (nonzero != 0); - } - if (index == out_Nm1) build <<= (8-remain); - /* printf("Here: %d %d %d %d\n",build,slice,index,maxi); */ - *outptr = build; - outptr += out_stride; - } - return; -} - - -static void -_unpackbits(void *In, - int NPY_UNUSED(el_size), /* unused */ - npy_intp in_N, - npy_intp in_stride, - void *Out, - npy_intp NPY_UNUSED(out_N), - npy_intp out_stride - ) -{ - unsigned char mask; - int i, index; - char *inptr, *outptr; - - outptr = Out; - inptr = In; - for (index = 0; index < in_N; index++) { - mask = 128; - for (i = 0; i < 8; i++) { - *outptr = ((mask & (unsigned char)(*inptr)) != 0); - outptr += out_stride; - mask >>= 1; - } - inptr += in_stride; - } - return; -} - -/* Fixme -- pack and unpack should be separate routines */ -static PyObject * -pack_or_unpack_bits(PyObject *input, int axis, int unpack) -{ - PyArrayObject *inp; - PyObject *new = NULL; - PyObject *out = NULL; - npy_intp outdims[MAX_DIMS]; - int i; - void (*thefunc)(void *, int, npy_intp, npy_intp, void *, npy_intp, npy_intp); - PyArrayIterObject *it, *ot; - - inp = (PyArrayObject *)PyArray_FROM_O(input); - - if (inp == NULL) { - return NULL; - } - if (unpack) { - if (PyArray_TYPE(inp) != NPY_UBYTE) { - PyErr_SetString(PyExc_TypeError, - "Expected an input array of unsigned byte data type"); - goto fail; - } - } - else if (!PyArray_ISINTEGER(inp)) { - PyErr_SetString(PyExc_TypeError, - "Expected an input array of integer data type"); - goto fail; - } - - new
= PyArray_CheckAxis(inp, &axis, 0); - Py_DECREF(inp); - if (new == NULL) { - return NULL; - } - /* Handle zero-dim array separately */ - if (PyArray_SIZE(new) == 0) { - return PyArray_Copy((PyArrayObject *)new); - } - - if (PyArray_NDIM(new) == 0) { - if (unpack) { - /* Handle 0-d array by converting it to a 1-d array */ - PyObject *temp; - PyArray_Dims newdim = {NULL, 1}; - npy_intp shape = 1; - - newdim.ptr = &shape; - temp = PyArray_Newshape((PyArrayObject *)new, &newdim, NPY_CORDER); - if (temp == NULL) { - goto fail; - } - Py_DECREF(new); - new = temp; - } - else { - ubyte *optr, *iptr; - out = PyArray_New(new->ob_type, 0, NULL, NPY_UBYTE, - NULL, NULL, 0, 0, NULL); - if (out == NULL) { - goto fail; - } - optr = PyArray_DATA(out); - iptr = PyArray_DATA(new); - *optr = 0; - for (i = 0; i < PyArray_ITEMSIZE(new); i++) { - if (*iptr != 0) { - *optr = 1; - break; - } - iptr++; - } - goto finish; - } - } - - /* Setup output shape */ - for (i = 0; i < PyArray_NDIM(new); i++) { - outdims[i] = PyArray_DIM(new, i); - } - - if (unpack) { - /* Multiply axis dimension by 8 */ - outdims[axis] <<= 3; - thefunc = _unpackbits; - } - else { - /* - * Divide axis dimension by 8 - * 8 -> 1, 9 -> 2, 16 -> 2, 17 -> 3 etc.. - */ - outdims[axis] = ((outdims[axis] - 1) >> 3) + 1; - thefunc = _packbits; - } - - /* Create output array */ - out = PyArray_New(new->ob_type, PyArray_NDIM(new), outdims, PyArray_UBYTE, - NULL, NULL, 0, PyArray_ISFORTRAN(new), NULL); - if (out == NULL) { - goto fail; - } - /* Setup iterators to iterate over all but given axis */ - it = (PyArrayIterObject *)PyArray_IterAllButAxis((PyObject *)new, &axis); - ot = (PyArrayIterObject *)PyArray_IterAllButAxis((PyObject *)out, &axis); - if (it == NULL || ot == NULL) { - Py_XDECREF(it); - Py_XDECREF(ot); - goto fail; - } - - while(PyArray_ITER_NOTDONE(it)) { - thefunc(PyArray_ITER_DATA(it), PyArray_ITEMSIZE(new), - PyArray_DIM(new, axis), PyArray_STRIDE(new, axis), - PyArray_ITER_DATA(ot), PyArray_DIM(out, axis), - PyArray_STRIDE(out, axis)); - PyArray_ITER_NEXT(it); - PyArray_ITER_NEXT(ot); - } - Py_DECREF(it); - Py_DECREF(ot); - -finish: - Py_DECREF(new); - return out; - -fail: - Py_XDECREF(new); - Py_XDECREF(out); - return NULL; -} - - -static PyObject * -io_pack(PyObject *NPY_UNUSED(self), PyObject *args, PyObject *kwds) -{ - PyObject *obj; - int axis = NPY_MAXDIMS; - static char *kwlist[] =
{"in", "axis", NULL}; - - if (!PyArg_ParseTupleAndKeywords( args, kwds, "O|O&" , kwlist, - &obj, PyArray_AxisConverter, &axis)) { - return NULL; - } - return pack_or_unpack_bits(obj, axis, 0); -} - -static PyObject * -io_unpack(PyObject *NPY_UNUSED(self), PyObject *args, PyObject *kwds) -{ - PyObject *obj; - int axis = NPY_MAXDIMS; - static char *kwlist[] = {"in", "axis", NULL}; - - if (!PyArg_ParseTupleAndKeywords( args, kwds, "O|O&" , kwlist, - &obj, PyArray_AxisConverter, &axis)) { - return NULL; - } - return pack_or_unpack_bits(obj, axis, 1); -} - -static struct PyMethodDef methods[] = { - {"_insert", (PyCFunction)arr_insert, - METH_VARARGS | METH_KEYWORDS, arr_insert__doc__}, - {"bincount", (PyCFunction)arr_bincount, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"digitize", (PyCFunction)arr_digitize, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"interp", (PyCFunction)arr_interp, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"add_docstring", (PyCFunction)arr_add_docstring, - METH_VARARGS, NULL}, - {"packbits", (PyCFunction)io_pack, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"unpackbits", (PyCFunction)io_unpack, - METH_VARARGS | METH_KEYWORDS, NULL}, - {NULL, NULL, 0, NULL} /* sentinel */ -}; - -static void -define_types(void) -{ - PyObject *tp_dict; - PyObject *myobj; - - tp_dict = PyArrayDescr_Type.tp_dict; - /* Get "subdescr" */ - myobj = PyDict_GetItemString(tp_dict, "fields"); - if (myobj == NULL) { - return; - } - PyGetSetDescr_TypePtr = myobj->ob_type; - myobj = PyDict_GetItemString(tp_dict, "alignment"); - if (myobj == NULL) { - return; - } - PyMemberDescr_TypePtr = myobj->ob_type; - myobj = PyDict_GetItemString(tp_dict, "newbyteorder"); - if (myobj == NULL) { - return; - } - PyMethodDescr_TypePtr = myobj->ob_type; - return; -} - -#if defined(NPY_PY3K) -static struct PyModuleDef moduledef = { - PyModuleDef_HEAD_INIT, - "_compiled_base", - NULL, - -1, - methods, - NULL, - NULL, - NULL, - NULL -}; -#endif - -#if defined(NPY_PY3K) -#define RETVAL m -PyObject 
*PyInit__compiled_base(void) -#else -#define RETVAL -PyMODINIT_FUNC -init_compiled_base(void) -#endif -{ - PyObject *m, *d; - -#if defined(NPY_PY3K) - m = PyModule_Create(&moduledef); -#else - m = Py_InitModule("_compiled_base", methods); -#endif - if (!m) { - return RETVAL; - } - - /* Import the array objects */ - import_array(); - - /* Add some symbolic constants to the module */ - d = PyModule_GetDict(m); - - /* - * PyExc_Exception should catch all the standard errors that are - * now raised instead of the string exception "numpy.lib.error". - * This is for backward compatibility with existing code. - */ - PyDict_SetItemString(d, "error", PyExc_Exception); - - - /* define PyGetSetDescr_Type and PyMemberDescr_Type */ - define_types(); - - return RETVAL; -} diff --git a/pythonPackages/numpy/numpy/lib/stride_tricks.py b/pythonPackages/numpy/numpy/lib/stride_tricks.py deleted file mode 100755 index 7358be2226..0000000000 --- a/pythonPackages/numpy/numpy/lib/stride_tricks.py +++ /dev/null @@ -1,115 +0,0 @@ -""" -Utilities that manipulate strides to achieve desirable effects. - -An explanation of strides can be found in the "ndarray.rst" file in the -NumPy reference guide. - -""" -import numpy as np - -__all__ = ['broadcast_arrays'] - -class DummyArray(object): - """ Dummy object that just exists to hang __array_interface__ dictionaries - and possibly keep alive a reference to a base array. - """ - def __init__(self, interface, base=None): - self.__array_interface__ = interface - self.base = base - -def as_strided(x, shape=None, strides=None): - """ Make an ndarray from the given array with the given shape and strides. - """ - interface = dict(x.__array_interface__) - if shape is not None: - interface['shape'] = tuple(shape) - if strides is not None: - interface['strides'] = tuple(strides) - return np.asarray(DummyArray(interface, base=x)) - -def broadcast_arrays(*args): - """ - Broadcast any number of arrays against each other. 
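The shape-matching rule that `broadcast_arrays` implements further down (pad shapes with leading 1s, then each axis must agree or be 1) can be modelled with a small pure-Python helper. `broadcast_shape` is a hypothetical name for illustration only:

```python
def broadcast_shape(*shapes):
    """Common shape under broadcasting: pad with leading 1s, then on each
    axis the lengths must agree or be 1 (a 1 stretches to the other length)."""
    ndim = max(len(s) for s in shapes)
    padded = [(1,) * (ndim - len(s)) + tuple(s) for s in shapes]
    result = []
    for axis, dims in enumerate(zip(*padded)):
        non_ones = set(d for d in dims if d != 1)
        if len(non_ones) > 1:
            raise ValueError("shape mismatch: two or more arrays have "
                             "incompatible dimensions on axis %r." % (axis,))
        result.append(non_ones.pop() if non_ones else 1)
    return tuple(result)
```

For the docstring's example, shapes (1, 3) and (3, 1) broadcast to (3, 3); the stride trick then realizes that shape with zero strides on the stretched axes.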
- - Parameters - ---------- - `*args` : array_likes - The arrays to broadcast. - - Returns - ------- - broadcasted : list of arrays - These arrays are views on the original arrays. They are typically - not contiguous. Furthermore, more than one element of a - broadcasted array may refer to a single memory location. If you - need to write to the arrays, make copies first. - - Examples - -------- - >>> x = np.array([[1,2,3]]) - >>> y = np.array([[1],[2],[3]]) - >>> np.broadcast_arrays(x, y) - [array([[1, 2, 3], - [1, 2, 3], - [1, 2, 3]]), array([[1, 1, 1], - [2, 2, 2], - [3, 3, 3]])] - - Here is a useful idiom for getting contiguous copies instead of - non-contiguous views. - - >>> map(np.array, np.broadcast_arrays(x, y)) - [array([[1, 2, 3], - [1, 2, 3], - [1, 2, 3]]), array([[1, 1, 1], - [2, 2, 2], - [3, 3, 3]])] - - """ - args = map(np.asarray, args) - shapes = [x.shape for x in args] - if len(set(shapes)) == 1: - # Common case where nothing needs to be broadcasted. - return args - shapes = [list(s) for s in shapes] - strides = [list(x.strides) for x in args] - nds = [len(s) for s in shapes] - biggest = max(nds) - # Go through each array and prepend dimensions of length 1 to each of the - # shapes in order to make the number of dimensions equal. - for i in range(len(args)): - diff = biggest - nds[i] - if diff > 0: - shapes[i] = [1] * diff + shapes[i] - strides[i] = [0] * diff + strides[i] - # Check each dimension for compatibility. A dimension length of 1 is - # accepted as compatible with any other length. - common_shape = [] - for axis in range(biggest): - lengths = [s[axis] for s in shapes] - unique = set(lengths + [1]) - if len(unique) > 2: - # There must be at least two non-1 lengths for this axis. - raise ValueError("shape mismatch: two or more arrays have " - "incompatible dimensions on axis %r." % (axis,)) - elif len(unique) == 2: - # There is exactly one non-1 length. The common shape will take this - # value.
- unique.remove(1) - new_length = unique.pop() - common_shape.append(new_length) - # For each array, if this axis is being broadcasted from a length of - # 1, then set its stride to 0 so that it repeats its data. - for i in range(len(args)): - if shapes[i][axis] == 1: - shapes[i][axis] = new_length - strides[i][axis] = 0 - else: - # Every array has a length of 1 on this axis. Strides can be left - # alone as nothing is broadcasted. - common_shape.append(1) - - # Construct the new arrays. - broadcasted = [as_strided(x, shape=sh, strides=st) for (x,sh,st) in - zip(args, shapes, strides)] - return broadcasted diff --git a/pythonPackages/numpy/numpy/lib/tests/test__datasource.py b/pythonPackages/numpy/numpy/lib/tests/test__datasource.py deleted file mode 100755 index 29dbb7ae56..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test__datasource.py +++ /dev/null @@ -1,319 +0,0 @@ -import os -from tempfile import mkdtemp, mkstemp, NamedTemporaryFile -from shutil import rmtree -from urlparse import urlparse -from urllib2 import URLError -import urllib2 - -from numpy.testing import * - -from numpy.compat import asbytes - -import numpy.lib._datasource as datasource - -def urlopen_stub(url, data=None): - '''Stub to replace urlopen for testing.''' - if url == valid_httpurl(): - tmpfile = NamedTemporaryFile(prefix='urltmp_') - return tmpfile - else: - raise URLError('Name or service not known') - -old_urlopen = None -def setup(): - global old_urlopen - old_urlopen = urllib2.urlopen - urllib2.urlopen = urlopen_stub - -def teardown(): - urllib2.urlopen = old_urlopen - -# A valid website for more robust testing -http_path = 'http://www.google.com/' -http_file = 'index.html' - -http_fakepath = 'http://fake.abc.web/site/' -http_fakefile = 'fake.txt' - -malicious_files = ['/etc/shadow', '../../shadow', - '..\\system.dat', 'c:\\windows\\system.dat'] - -magic_line = asbytes('three is the magic number') - - -# Utility functions used by many TestCases -def valid_textfile(filedir): - 
# Generate and return a valid temporary file. - fd, path = mkstemp(suffix='.txt', prefix='dstmp_', dir=filedir, text=True) - os.close(fd) - return path - -def invalid_textfile(filedir): - # Generate and return an invalid filename. - fd, path = mkstemp(suffix='.txt', prefix='dstmp_', dir=filedir) - os.close(fd) - os.remove(path) - return path - -def valid_httpurl(): - return http_path+http_file - -def invalid_httpurl(): - return http_fakepath+http_fakefile - -def valid_baseurl(): - return http_path - -def invalid_baseurl(): - return http_fakepath - -def valid_httpfile(): - return http_file - -def invalid_httpfile(): - return http_fakefile - -class TestDataSourceOpen(TestCase): - def setUp(self): - self.tmpdir = mkdtemp() - self.ds = datasource.DataSource(self.tmpdir) - - def tearDown(self): - rmtree(self.tmpdir) - del self.ds - - def test_ValidHTTP(self): - assert self.ds.open(valid_httpurl()) - - def test_InvalidHTTP(self): - url = invalid_httpurl() - self.assertRaises(IOError, self.ds.open, url) - try: - self.ds.open(url) - except IOError, e: - # Regression test for bug fixed in r4342. - assert e.errno is None - - def test_InvalidHTTPCacheURLError(self): - self.assertRaises(URLError, self.ds._cache, invalid_httpurl()) - - def test_ValidFile(self): - local_file = valid_textfile(self.tmpdir) - assert self.ds.open(local_file) - - def test_InvalidFile(self): - invalid_file = invalid_textfile(self.tmpdir) - self.assertRaises(IOError, self.ds.open, invalid_file) - - def test_ValidGzipFile(self): - try: - import gzip - except ImportError: - # We don't have the gzip capabilities to test. - import nose - raise nose.SkipTest - # Test datasource's internal file_opener for Gzip files. 
- filepath = os.path.join(self.tmpdir, 'foobar.txt.gz') - fp = gzip.open(filepath, 'w') - fp.write(magic_line) - fp.close() - fp = self.ds.open(filepath) - result = fp.readline() - fp.close() - self.assertEqual(magic_line, result) - - def test_ValidBz2File(self): - try: - import bz2 - except ImportError: - # We don't have the bz2 capabilities to test. - import nose - raise nose.SkipTest - # Test datasource's internal file_opener for BZip2 files. - filepath = os.path.join(self.tmpdir, 'foobar.txt.bz2') - fp = bz2.BZ2File(filepath, 'w') - fp.write(magic_line) - fp.close() - fp = self.ds.open(filepath) - result = fp.readline() - fp.close() - self.assertEqual(magic_line, result) - - -class TestDataSourceExists(TestCase): - def setUp(self): - self.tmpdir = mkdtemp() - self.ds = datasource.DataSource(self.tmpdir) - - def tearDown(self): - rmtree(self.tmpdir) - del self.ds - - def test_ValidHTTP(self): - assert self.ds.exists(valid_httpurl()) - - def test_InvalidHTTP(self): - self.assertEqual(self.ds.exists(invalid_httpurl()), False) - - def test_ValidFile(self): - # Test valid file in destpath - tmpfile = valid_textfile(self.tmpdir) - assert self.ds.exists(tmpfile) - # Test valid local file not in destpath - localdir = mkdtemp() - tmpfile = valid_textfile(localdir) - assert self.ds.exists(tmpfile) - rmtree(localdir) - - def test_InvalidFile(self): - tmpfile = invalid_textfile(self.tmpdir) - self.assertEqual(self.ds.exists(tmpfile), False) - - -class TestDataSourceAbspath(TestCase): - def setUp(self): - self.tmpdir = os.path.abspath(mkdtemp()) - self.ds = datasource.DataSource(self.tmpdir) - - def tearDown(self): - rmtree(self.tmpdir) - del self.ds - - def test_ValidHTTP(self): - scheme, netloc, upath, pms, qry, frg = urlparse(valid_httpurl()) - local_path = os.path.join(self.tmpdir, netloc, - upath.strip(os.sep).strip('/')) - self.assertEqual(local_path, self.ds.abspath(valid_httpurl())) - - def test_ValidFile(self): - tmpfile = valid_textfile(self.tmpdir) - tmpfilename 
= os.path.split(tmpfile)[-1] - # Test with filename only - self.assertEqual(tmpfile, self.ds.abspath(os.path.split(tmpfile)[-1])) - # Test filename with complete path - self.assertEqual(tmpfile, self.ds.abspath(tmpfile)) - - def test_InvalidHTTP(self): - scheme, netloc, upath, pms, qry, frg = urlparse(invalid_httpurl()) - invalidhttp = os.path.join(self.tmpdir, netloc, - upath.strip(os.sep).strip('/')) - self.assertNotEqual(invalidhttp, self.ds.abspath(valid_httpurl())) - - def test_InvalidFile(self): - invalidfile = valid_textfile(self.tmpdir) - tmpfile = valid_textfile(self.tmpdir) - tmpfilename = os.path.split(tmpfile)[-1] - # Test with filename only - self.assertNotEqual(invalidfile, self.ds.abspath(tmpfilename)) - # Test filename with complete path - self.assertNotEqual(invalidfile, self.ds.abspath(tmpfile)) - - def test_sandboxing(self): - tmpfile = valid_textfile(self.tmpdir) - tmpfilename = os.path.split(tmpfile)[-1] - - tmp_path = lambda x: os.path.abspath(self.ds.abspath(x)) - - assert tmp_path(valid_httpurl()).startswith(self.tmpdir) - assert tmp_path(invalid_httpurl()).startswith(self.tmpdir) - assert tmp_path(tmpfile).startswith(self.tmpdir) - assert tmp_path(tmpfilename).startswith(self.tmpdir) - for fn in malicious_files: - assert tmp_path(http_path+fn).startswith(self.tmpdir) - assert tmp_path(fn).startswith(self.tmpdir) - - def test_windows_os_sep(self): - orig_os_sep = os.sep - try: - os.sep = '\\' - self.test_ValidHTTP() - self.test_ValidFile() - self.test_InvalidHTTP() - self.test_InvalidFile() - self.test_sandboxing() - finally: - os.sep = orig_os_sep - - -class TestRepositoryAbspath(TestCase): - def setUp(self): - self.tmpdir = os.path.abspath(mkdtemp()) - self.repos = datasource.Repository(valid_baseurl(), self.tmpdir) - - def tearDown(self): - rmtree(self.tmpdir) - del self.repos - - def test_ValidHTTP(self): - scheme, netloc, upath, pms, qry, frg = urlparse(valid_httpurl()) - local_path = os.path.join(self.repos._destpath, netloc, \ - 
upath.strip(os.sep).strip('/')) - filepath = self.repos.abspath(valid_httpfile()) - self.assertEqual(local_path, filepath) - - def test_sandboxing(self): - tmp_path = lambda x: os.path.abspath(self.repos.abspath(x)) - assert tmp_path(valid_httpfile()).startswith(self.tmpdir) - for fn in malicious_files: - assert tmp_path(http_path+fn).startswith(self.tmpdir) - assert tmp_path(fn).startswith(self.tmpdir) - - def test_windows_os_sep(self): - orig_os_sep = os.sep - try: - os.sep = '\\' - self.test_ValidHTTP() - self.test_sandboxing() - finally: - os.sep = orig_os_sep - - -class TestRepositoryExists(TestCase): - def setUp(self): - self.tmpdir = mkdtemp() - self.repos = datasource.Repository(valid_baseurl(), self.tmpdir) - - def tearDown(self): - rmtree(self.tmpdir) - del self.repos - - def test_ValidFile(self): - # Create local temp file - tmpfile = valid_textfile(self.tmpdir) - assert self.repos.exists(tmpfile) - - def test_InvalidFile(self): - tmpfile = invalid_textfile(self.tmpdir) - self.assertEqual(self.repos.exists(tmpfile), False) - - def test_RemoveHTTPFile(self): - assert self.repos.exists(valid_httpurl()) - - def test_CachedHTTPFile(self): - localfile = valid_httpurl() - # Create a locally cached temp file with an URL based - # directory structure. This is similar to what Repository.open - # would do. 
- scheme, netloc, upath, pms, qry, frg = urlparse(localfile) - local_path = os.path.join(self.repos._destpath, netloc) - os.mkdir(local_path, 0700) - tmpfile = valid_textfile(local_path) - assert self.repos.exists(tmpfile) - -class TestOpenFunc(TestCase): - def setUp(self): - self.tmpdir = mkdtemp() - - def tearDown(self): - rmtree(self.tmpdir) - - def test_DataSourceOpen(self): - local_file = valid_textfile(self.tmpdir) - # Test case where destpath is passed in - assert datasource.open(local_file, destpath=self.tmpdir) - # Test case where default destpath is used - assert datasource.open(local_file) - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/tests/test__iotools.py b/pythonPackages/numpy/numpy/lib/tests/test__iotools.py deleted file mode 100755 index 7c45b35278..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test__iotools.py +++ /dev/null @@ -1,289 +0,0 @@ -import sys -if sys.version_info[0] >= 3: - from io import BytesIO - def StringIO(s=""): - return BytesIO(asbytes(s)) -else: - from StringIO import StringIO - -from datetime import date -import time - -import numpy as np -from numpy.lib._iotools import LineSplitter, NameValidator, StringConverter,\ - has_nested_fields, easy_dtype -from numpy.testing import * - -from numpy.compat import asbytes, asbytes_nested - -class TestLineSplitter(TestCase): - "Tests the LineSplitter class." 
- # - def test_no_delimiter(self): - "Test LineSplitter w/o delimiter" - strg = asbytes(" 1 2 3 4 5 # test") - test = LineSplitter()(strg) - assert_equal(test, asbytes_nested(['1', '2', '3', '4', '5'])) - test = LineSplitter('')(strg) - assert_equal(test, asbytes_nested(['1', '2', '3', '4', '5'])) - - def test_space_delimiter(self): - "Test space delimiter" - strg = asbytes(" 1 2 3 4 5 # test") - test = LineSplitter(asbytes(' '))(strg) - assert_equal(test, asbytes_nested(['1', '2', '3', '4', '', '5'])) - test = LineSplitter(asbytes(' '))(strg) - assert_equal(test, asbytes_nested(['1 2 3 4', '5'])) - - def test_tab_delimiter(self): - "Test tab delimiter" - strg= asbytes(" 1\t 2\t 3\t 4\t 5 6") - test = LineSplitter(asbytes('\t'))(strg) - assert_equal(test, asbytes_nested(['1', '2', '3', '4', '5 6'])) - strg= asbytes(" 1 2\t 3 4\t 5 6") - test = LineSplitter(asbytes('\t'))(strg) - assert_equal(test, asbytes_nested(['1 2', '3 4', '5 6'])) - - def test_other_delimiter(self): - "Test LineSplitter on delimiter" - strg = asbytes("1,2,3,4,,5") - test = LineSplitter(asbytes(','))(strg) - assert_equal(test, asbytes_nested(['1', '2', '3', '4', '', '5'])) - # - strg = asbytes(" 1,2,3,4,,5 # test") - test = LineSplitter(asbytes(','))(strg) - assert_equal(test, asbytes_nested(['1', '2', '3', '4', '', '5'])) - - def test_constant_fixed_width(self): - "Test LineSplitter w/ fixed-width fields" - strg = asbytes(" 1 2 3 4 5 # test") - test = LineSplitter(3)(strg) - assert_equal(test, asbytes_nested(['1', '2', '3', '4', '', '5', ''])) - # - strg = asbytes(" 1 3 4 5 6# test") - test = LineSplitter(20)(strg) - assert_equal(test, asbytes_nested(['1 3 4 5 6'])) - # - strg = asbytes(" 1 3 4 5 6# test") - test = LineSplitter(30)(strg) - assert_equal(test, asbytes_nested(['1 3 4 5 6'])) - - def test_variable_fixed_width(self): - strg = asbytes(" 1 3 4 5 6# test") - test = LineSplitter((3,6,6,3))(strg) - assert_equal(test, asbytes_nested(['1', '3', '4 5', '6'])) - # - strg = asbytes(" 1 3 4 5 
6# test") - test = LineSplitter((6,6,9))(strg) - assert_equal(test, asbytes_nested(['1', '3 4', '5 6'])) - - -#------------------------------------------------------------------------------- - -class TestNameValidator(TestCase): - # - def test_case_sensitivity(self): - "Test case sensitivity" - names = ['A', 'a', 'b', 'c'] - test = NameValidator().validate(names) - assert_equal(test, ['A', 'a', 'b', 'c']) - test = NameValidator(case_sensitive=False).validate(names) - assert_equal(test, ['A', 'A_1', 'B', 'C']) - test = NameValidator(case_sensitive='upper').validate(names) - assert_equal(test, ['A', 'A_1', 'B', 'C']) - test = NameValidator(case_sensitive='lower').validate(names) - assert_equal(test, ['a', 'a_1', 'b', 'c']) - # - def test_excludelist(self): - "Test excludelist" - names = ['dates', 'data', 'Other Data', 'mask'] - validator = NameValidator(excludelist = ['dates', 'data', 'mask']) - test = validator.validate(names) - assert_equal(test, ['dates_', 'data_', 'Other_Data', 'mask_']) - # - def test_missing_names(self): - "Test validate missing names" - namelist = ('a', 'b', 'c') - validator = NameValidator() - assert_equal(validator(namelist), ['a', 'b', 'c']) - namelist = ('', 'b', 'c') - assert_equal(validator(namelist), ['f0', 'b', 'c']) - namelist = ('a', 'b', '') - assert_equal(validator(namelist), ['a', 'b', 'f0']) - namelist = ('', 'f0', '') - assert_equal(validator(namelist), ['f1', 'f0', 'f2']) - # - def test_validate_nb_names(self): - "Test validate nb names" - namelist = ('a', 'b', 'c') - validator = NameValidator() - assert_equal(validator(namelist, nbfields=1), ('a', )) - assert_equal(validator(namelist, nbfields=5, defaultfmt="g%i"), - ['a', 'b', 'c', 'g0', 'g1']) - # - def test_validate_wo_names(self): - "Test validate no names" - namelist = None - validator = NameValidator() - assert(validator(namelist) is None) - assert_equal(validator(namelist, nbfields=3), ['f0', 'f1', 'f2']) - - - - 
-#------------------------------------------------------------------------------- - -def _bytes_to_date(s): - if sys.version_info[0] >= 3: - return date(*time.strptime(s.decode('latin1'), "%Y-%m-%d")[:3]) - else: - return date(*time.strptime(s, "%Y-%m-%d")[:3]) - -class TestStringConverter(TestCase): - "Test StringConverter" - # - def test_creation(self): - "Test creation of a StringConverter" - converter = StringConverter(int, -99999) - assert_equal(converter._status, 1) - assert_equal(converter.default, -99999) - # - def test_upgrade(self): - "Tests the upgrade method." - converter = StringConverter() - assert_equal(converter._status, 0) - converter.upgrade(asbytes('0')) - assert_equal(converter._status, 1) - converter.upgrade(asbytes('0.')) - assert_equal(converter._status, 2) - converter.upgrade(asbytes('0j')) - assert_equal(converter._status, 3) - converter.upgrade(asbytes('a')) - assert_equal(converter._status, len(converter._mapper)-1) - # - def test_missing(self): - "Tests the use of missing values." 
- converter = StringConverter(missing_values=(asbytes('missing'), - asbytes('missed'))) - converter.upgrade(asbytes('0')) - assert_equal(converter(asbytes('0')), 0) - assert_equal(converter(asbytes('')), converter.default) - assert_equal(converter(asbytes('missing')), converter.default) - assert_equal(converter(asbytes('missed')), converter.default) - try: - converter('miss') - except ValueError: - pass - # - def test_upgrademapper(self): - "Tests updatemapper" - dateparser = _bytes_to_date - StringConverter.upgrade_mapper(dateparser, date(2000,1,1)) - convert = StringConverter(dateparser, date(2000, 1, 1)) - test = convert(asbytes('2001-01-01')) - assert_equal(test, date(2001, 01, 01)) - test = convert(asbytes('2009-01-01')) - assert_equal(test, date(2009, 01, 01)) - test = convert(asbytes('')) - assert_equal(test, date(2000, 01, 01)) - # - def test_string_to_object(self): - "Make sure that string-to-object functions are properly recognized" - conv = StringConverter(_bytes_to_date) - assert_equal(conv._mapper[-2][0](0), 0j) - assert(hasattr(conv, 'default')) - # - def test_keep_default(self): - "Make sure we don't lose an explicit default" - converter = StringConverter(None, missing_values=asbytes(''), - default=-999) - converter.upgrade(asbytes('3.14159265')) - assert_equal(converter.default, -999) - assert_equal(converter.type, np.dtype(float)) - # - converter = StringConverter(None, missing_values=asbytes(''), default=0) - converter.upgrade(asbytes('3.14159265')) - assert_equal(converter.default, 0) - assert_equal(converter.type, np.dtype(float)) - # - def test_keep_default_zero(self): - "Check that we don't lose a default of 0" - converter = StringConverter(int, default=0, - missing_values=asbytes("N/A")) - assert_equal(converter.default, 0) - # - def test_keep_missing_values(self): - "Check that we're not losing missing values" - converter = StringConverter(int, default=0, - missing_values=asbytes("N/A")) - assert_equal(converter.missing_values, 
set(asbytes_nested(['', 'N/A']))) - -#------------------------------------------------------------------------------- - -class TestMiscFunctions(TestCase): - # - def test_has_nested_dtype(self): - "Test has_nested_dtype" - ndtype = np.dtype(np.float) - assert_equal(has_nested_fields(ndtype), False) - ndtype = np.dtype([('A', '|S3'), ('B', float)]) - assert_equal(has_nested_fields(ndtype), False) - ndtype = np.dtype([('A', int), ('B', [('BA', float), ('BB', '|S1')])]) - assert_equal(has_nested_fields(ndtype), True) - - def test_easy_dtype(self): - "Test ndtype on dtypes" - # Simple case - ndtype = float - assert_equal(easy_dtype(ndtype), np.dtype(float)) - # As string w/o names - ndtype = "i4, f8" - assert_equal(easy_dtype(ndtype), - np.dtype([('f0', "i4"), ('f1', "f8")])) - # As string w/o names but different default format - assert_equal(easy_dtype(ndtype, defaultfmt="field_%03i"), - np.dtype([('field_000', "i4"), ('field_001', "f8")])) - # As string w/ names - ndtype = "i4, f8" - assert_equal(easy_dtype(ndtype, names="a, b"), - np.dtype([('a', "i4"), ('b', "f8")])) - # As string w/ names (too many) - ndtype = "i4, f8" - assert_equal(easy_dtype(ndtype, names="a, b, c"), - np.dtype([('a', "i4"), ('b', "f8")])) - # As string w/ names (not enough) - ndtype = "i4, f8" - assert_equal(easy_dtype(ndtype, names=", b"), - np.dtype([('f0', "i4"), ('b', "f8")])) - # ... 
(with different default format) - assert_equal(easy_dtype(ndtype, names="a", defaultfmt="f%02i"), - np.dtype([('a', "i4"), ('f00', "f8")])) - # As list of tuples w/o names - ndtype = [('A', int), ('B', float)] - assert_equal(easy_dtype(ndtype), np.dtype([('A', int), ('B', float)])) - # As list of tuples w/ names - assert_equal(easy_dtype(ndtype, names="a,b"), - np.dtype([('a', int), ('b', float)])) - # As list of tuples w/ not enough names - assert_equal(easy_dtype(ndtype, names="a"), - np.dtype([('a', int), ('f0', float)])) - # As list of tuples w/ too many names - assert_equal(easy_dtype(ndtype, names="a,b,c"), - np.dtype([('a', int), ('b', float)])) - # As list of types w/o names - ndtype = (int, float, float) - assert_equal(easy_dtype(ndtype), - np.dtype([('f0', int), ('f1', float), ('f2', float)])) - # As list of types w names - ndtype = (int, float, float) - assert_equal(easy_dtype(ndtype, names="a, b, c"), - np.dtype([('a', int), ('b', float), ('c', float)])) - # As simple dtype w/ names - ndtype = np.dtype(float) - assert_equal(easy_dtype(ndtype, names="a, b, c"), - np.dtype([(_, float) for _ in ('a', 'b', 'c')])) - # As simple dtype w/o names (but multiple fields) - ndtype = np.dtype(float) - assert_equal(easy_dtype(ndtype, names=['', '', ''], defaultfmt="f%02i"), - np.dtype([(_, float) for _ in ('f00', 'f01', 'f02')])) - diff --git a/pythonPackages/numpy/numpy/lib/tests/test_arraysetops.py b/pythonPackages/numpy/numpy/lib/tests/test_arraysetops.py deleted file mode 100755 index 92305129ac..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_arraysetops.py +++ /dev/null @@ -1,245 +0,0 @@ -""" Test functions for 1D array set operations. 
- -""" - -from numpy.testing import * -import numpy as np -from numpy.lib.arraysetops import * - -import warnings - -class TestAso(TestCase): - def test_unique( self ): - a = np.array( [5, 7, 1, 2, 1, 5, 7] ) - - ec = np.array( [1, 2, 5, 7] ) - c = unique( a ) - assert_array_equal( c, ec ) - - warnings.simplefilter('ignore', Warning) - vals, indices = unique( a, return_index=True ) - warnings.resetwarnings() - - ed = np.array( [2, 3, 0, 1] ) - assert_array_equal(vals, ec) - assert_array_equal(indices, ed) - - warnings.simplefilter('ignore', Warning) - vals, ind0, ind1 = unique( a, return_index=True, - return_inverse=True ) - warnings.resetwarnings() - - ee = np.array( [2, 3, 0, 1, 0, 2, 3] ) - assert_array_equal(vals, ec) - assert_array_equal(ind0, ed) - assert_array_equal(ind1, ee) - - assert_array_equal([], unique([])) - - def test_intersect1d( self ): - # unique inputs - a = np.array( [5, 7, 1, 2] ) - b = np.array( [2, 4, 3, 1, 5] ) - - ec = np.array( [1, 2, 5] ) - c = intersect1d( a, b, assume_unique=True ) - assert_array_equal( c, ec ) - - # non-unique inputs - a = np.array( [5, 5, 7, 1, 2] ) - b = np.array( [2, 1, 4, 3, 3, 1, 5] ) - - ed = np.array( [1, 2, 5] ) - c = intersect1d( a, b ) - assert_array_equal( c, ed ) - - assert_array_equal([], intersect1d([],[])) - - def test_intersect1d_nu( self ): - # This should be removed when intersect1d_nu is removed. 
- a = np.array( [5, 5, 7, 1, 2] ) - b = np.array( [2, 1, 4, 3, 3, 1, 5] ) - - ec = np.array( [1, 2, 5] ) - warnings.simplefilter('ignore', Warning) - c = intersect1d_nu( a, b ) - warnings.resetwarnings() - assert_array_equal( c, ec ) - - assert_array_equal([], intersect1d_nu([],[])) - - def test_setxor1d( self ): - a = np.array( [5, 7, 1, 2] ) - b = np.array( [2, 4, 3, 1, 5] ) - - ec = np.array( [3, 4, 7] ) - c = setxor1d( a, b ) - assert_array_equal( c, ec ) - - a = np.array( [1, 2, 3] ) - b = np.array( [6, 5, 4] ) - - ec = np.array( [1, 2, 3, 4, 5, 6] ) - c = setxor1d( a, b ) - assert_array_equal( c, ec ) - - a = np.array( [1, 8, 2, 3] ) - b = np.array( [6, 5, 4, 8] ) - - ec = np.array( [1, 2, 3, 4, 5, 6] ) - c = setxor1d( a, b ) - assert_array_equal( c, ec ) - - assert_array_equal([], setxor1d([],[])) - - def test_ediff1d(self): - zero_elem = np.array([]) - one_elem = np.array([1]) - two_elem = np.array([1,2]) - - assert_array_equal([],ediff1d(zero_elem)) - assert_array_equal([0],ediff1d(zero_elem,to_begin=0)) - assert_array_equal([0],ediff1d(zero_elem,to_end=0)) - assert_array_equal([-1,0],ediff1d(zero_elem,to_begin=-1,to_end=0)) - assert_array_equal([],ediff1d(one_elem)) - assert_array_equal([1],ediff1d(two_elem)) - - def test_setmember1d( self ): - # This should be removed when setmember1d is removed. 
- a = np.array( [5, 7, 1, 2] ) - b = np.array( [2, 4, 3, 1, 5] ) - - ec = np.array( [True, False, True, True] ) - warnings.simplefilter('ignore', Warning) - c = setmember1d( a, b ) - warnings.resetwarnings() - assert_array_equal( c, ec ) - - a[0] = 8 - ec = np.array( [False, False, True, True] ) - c = setmember1d( a, b ) - assert_array_equal( c, ec ) - - a[0], a[3] = 4, 8 - ec = np.array( [True, False, True, False] ) - c = setmember1d( a, b ) - assert_array_equal( c, ec ) - - assert_array_equal([], setmember1d([],[])) - - def test_in1d(self): - a = np.array( [5, 7, 1, 2] ) - b = np.array( [2, 4, 3, 1, 5] ) - - ec = np.array( [True, False, True, True] ) - c = in1d( a, b, assume_unique=True ) - assert_array_equal( c, ec ) - - a[0] = 8 - ec = np.array( [False, False, True, True] ) - c = in1d( a, b, assume_unique=True ) - assert_array_equal( c, ec ) - - a[0], a[3] = 4, 8 - ec = np.array( [True, False, True, False] ) - c = in1d( a, b, assume_unique=True ) - assert_array_equal( c, ec ) - - a = np.array([5,4,5,3,4,4,3,4,3,5,2,1,5,5]) - b = [2,3,4] - - ec = [False, True, False, True, True, True, True, True, True, False, - True, False, False, False] - c = in1d(a, b) - assert_array_equal(c, ec) - - b = b + [5, 5, 4] - - ec = [True, True, True, True, True, True, True, True, True, True, - True, False, True, True] - c = in1d(a, b) - assert_array_equal(c, ec) - - a = np.array([5, 7, 1, 2]) - b = np.array([2, 4, 3, 1, 5]) - - ec = np.array([True, False, True, True]) - c = in1d(a, b) - assert_array_equal(c, ec) - - a = np.array([5, 7, 1, 1, 2]) - b = np.array([2, 4, 3, 3, 1, 5]) - - ec = np.array([True, False, True, True, True]) - c = in1d(a, b) - assert_array_equal(c, ec) - - a = np.array([5]) - b = np.array([2]) - - ec = np.array([False]) - c = in1d(a, b) - assert_array_equal(c, ec) - - a = np.array([5, 5]) - b = np.array([2, 2]) - - ec = np.array([False, False]) - c = in1d(a, b) - assert_array_equal(c, ec) - - assert_array_equal(in1d([], []), []) - - def test_in1d_char_array( 
self ): - a = np.array(['a', 'b', 'c','d','e','c','e','b']) - b = np.array(['a','c']) - - ec = np.array([True, False, True, False, False, True, False, False]) - c = in1d(a, b) - - assert_array_equal(c, ec) - - def test_union1d( self ): - a = np.array( [5, 4, 7, 1, 2] ) - b = np.array( [2, 4, 3, 3, 2, 1, 5] ) - - ec = np.array( [1, 2, 3, 4, 5, 7] ) - c = union1d( a, b ) - assert_array_equal( c, ec ) - - assert_array_equal([], union1d([],[])) - - def test_setdiff1d( self ): - a = np.array( [6, 5, 4, 7, 1, 2, 7, 4] ) - b = np.array( [2, 4, 3, 3, 2, 1, 5] ) - - ec = np.array( [6, 7] ) - c = setdiff1d( a, b ) - assert_array_equal( c, ec ) - - a = np.arange( 21 ) - b = np.arange( 19 ) - ec = np.array( [19, 20] ) - c = setdiff1d( a, b ) - assert_array_equal( c, ec ) - - assert_array_equal([], setdiff1d([],[])) - - def test_setdiff1d_char_array(self): - a = np.array(['a','b','c']) - b = np.array(['a','b','s']) - assert_array_equal(setdiff1d(a,b),np.array(['c'])) - - def test_manyways( self ): - a = np.array( [5, 7, 1, 2, 8] ) - b = np.array( [9, 8, 2, 4, 3, 1, 5] ) - - c1 = setxor1d( a, b ) - aux1 = intersect1d( a, b ) - aux2 = union1d( a, b ) - c2 = setdiff1d( aux2, aux1 ) - assert_array_equal( c1, c2 ) - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/tests/test_arrayterator.py b/pythonPackages/numpy/numpy/lib/tests/test_arrayterator.py deleted file mode 100755 index 3dce009d31..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_arrayterator.py +++ /dev/null @@ -1,51 +0,0 @@ -from operator import mul - -import numpy as np -from numpy.random import randint -from numpy.lib import Arrayterator - -import sys -if sys.version_info[0] >= 3: - from functools import reduce - -def test(): - np.random.seed(np.arange(10)) - - # Create a random array - ndims = randint(5)+1 - shape = tuple(randint(10)+1 for dim in range(ndims)) - els = reduce(mul, shape) - a = np.arange(els) - a.shape = shape - - buf_size = randint(2*els) - b = 
Arrayterator(a, buf_size) - - # Check that each block has at most ``buf_size`` elements - for block in b: - assert len(block.flat) <= (buf_size or els) - - # Check that all elements are iterated correctly - assert list(b.flat) == list(a.flat) - - # Slice arrayterator - start = [randint(dim) for dim in shape] - stop = [randint(dim)+1 for dim in shape] - step = [randint(dim)+1 for dim in shape] - slice_ = tuple(slice(*t) for t in zip(start, stop, step)) - c = b[slice_] - d = a[slice_] - - # Check that each block has at most ``buf_size`` elements - for block in c: - assert len(block.flat) <= (buf_size or els) - - # Check that the arrayterator is sliced correctly - assert np.all(c.__array__() == d) - - # Check that all elements are iterated correctly - assert list(c.flat) == list(d.flat) - -if __name__ == '__main__': - from numpy.testing import run_module_suite - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/tests/test_financial.py b/pythonPackages/numpy/numpy/lib/tests/test_financial.py deleted file mode 100755 index f3143049fe..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_financial.py +++ /dev/null @@ -1,62 +0,0 @@ -from numpy.testing import * -import numpy as np - -class TestFinancial(TestCase): - def test_rate(self): - assert_almost_equal(np.rate(10,0,-3500,10000), - 0.1107, 4) - - def test_irr(self): - v = [-150000, 15000, 25000, 35000, 45000, 60000] - assert_almost_equal(np.irr(v), - 0.0524, 2) - - def test_pv(self): - assert_almost_equal(np.pv(0.07,20,12000,0), - -127128.17, 2) - - def test_fv(self): - assert_almost_equal(np.fv(0.075, 20, -2000,0,0), - 86609.36, 2) - - def test_pmt(self): - assert_almost_equal(np.pmt(0.08/12,5*12,15000), - -304.146, 3) - - def test_nper(self): - assert_almost_equal(np.nper(0.075,-2000,0,100000.), - 21.54, 2) - - def test_nper2(self): - assert_almost_equal(np.nper(0.0,-2000,0,100000.), - 50.0, 1) - - def test_npv(self): - assert_almost_equal(np.npv(0.05,[-15000,1500,2500,3500,4500,6000]), - 117.04, 
2) - - def test_mirr(self): - val = [-4500,-800,800,800,600,600,800,800,700,3000] - assert_almost_equal(np.mirr(val, 0.08, 0.055), 0.0666, 4) - - val = [-120000,39000,30000,21000,37000,46000] - assert_almost_equal(np.mirr(val, 0.10, 0.12), 0.126094, 6) - - val = [100,200,-50,300,-200] - assert_almost_equal(np.mirr(val, 0.05, 0.06), 0.3428, 4) - - val = [39000,30000,21000,37000,46000] - assert_(np.isnan(np.mirr(val, 0.10, 0.12))) - - - -def test_unimplemented(): - # np.round(np.ppmt(0.1/12,1,60,55000),2) == 710.25 - assert_raises(NotImplementedError, np.ppmt, 0.1/12, 1, 60, 55000) - - # np.round(np.ipmt(0.1/12,1,24,2000),2) == 16.67 - assert_raises(NotImplementedError, np.ipmt, 0.1/12, 1, 24, 2000) - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/tests/test_format.py b/pythonPackages/numpy/numpy/lib/tests/test_format.py deleted file mode 100755 index 2e69351412..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_format.py +++ /dev/null @@ -1,564 +0,0 @@ -r''' Test the .npy file format. - -Set up: - - >>> import sys - >>> if sys.version_info[0] >= 3: - ... from io import BytesIO as StringIO - ... else: - ... from cStringIO import StringIO - >>> from numpy.lib import format - >>> - >>> scalars = [ - ... np.uint8, - ... np.int8, - ... np.uint16, - ... np.int16, - ... np.uint32, - ... np.int32, - ... np.uint64, - ... np.int64, - ... np.float32, - ... np.float64, - ... np.complex64, - ... np.complex128, - ... object, - ... ] - >>> - >>> basic_arrays = [] - >>> - >>> for scalar in scalars: - ... for endian in '<>': - ... dtype = np.dtype(scalar).newbyteorder(endian) - ... basic = np.arange(15).astype(dtype) - ... basic_arrays.extend([ - ... np.array([], dtype=dtype), - ... np.array(10, dtype=dtype), - ... basic, - ... basic.reshape((3,5)), - ... basic.reshape((3,5)).T, - ... basic.reshape((3,5))[::-1,::2], - ... ]) - ... - >>> - >>> Pdescr = [ - ... ('x', 'i4', (2,)), - ... ('y', 'f8', (2, 2)), - ... 
('z', 'u1')] - >>> - >>> - >>> PbufferT = [ - ... ([3,2], [[6.,4.],[6.,4.]], 8), - ... ([4,3], [[7.,5.],[7.,5.]], 9), - ... ] - >>> - >>> - >>> Ndescr = [ - ... ('x', 'i4', (2,)), - ... ('Info', [ - ... ('value', 'c16'), - ... ('y2', 'f8'), - ... ('Info2', [ - ... ('name', 'S2'), - ... ('value', 'c16', (2,)), - ... ('y3', 'f8', (2,)), - ... ('z3', 'u4', (2,))]), - ... ('name', 'S2'), - ... ('z2', 'b1')]), - ... ('color', 'S2'), - ... ('info', [ - ... ('Name', 'U8'), - ... ('Value', 'c16')]), - ... ('y', 'f8', (2, 2)), - ... ('z', 'u1')] - >>> - >>> - >>> NbufferT = [ - ... ([3,2], (6j, 6., ('nn', [6j,4j], [6.,4.], [1,2]), 'NN', True), 'cc', ('NN', 6j), [[6.,4.],[6.,4.]], 8), - ... ([4,3], (7j, 7., ('oo', [7j,5j], [7.,5.], [2,1]), 'OO', False), 'dd', ('OO', 7j), [[7.,5.],[7.,5.]], 9), - ... ] - >>> - >>> - >>> record_arrays = [ - ... np.array(PbufferT, dtype=np.dtype(Pdescr).newbyteorder('<')), - ... np.array(NbufferT, dtype=np.dtype(Ndescr).newbyteorder('<')), - ... np.array(PbufferT, dtype=np.dtype(Pdescr).newbyteorder('>')), - ... np.array(NbufferT, dtype=np.dtype(Ndescr).newbyteorder('>')), - ... ] - -Test the magic string writing. - - >>> format.magic(1, 0) - '\x93NUMPY\x01\x00' - >>> format.magic(0, 0) - '\x93NUMPY\x00\x00' - >>> format.magic(255, 255) - '\x93NUMPY\xff\xff' - >>> format.magic(2, 5) - '\x93NUMPY\x02\x05' - -Test the magic string reading. - - >>> format.read_magic(StringIO(format.magic(1, 0))) - (1, 0) - >>> format.read_magic(StringIO(format.magic(0, 0))) - (0, 0) - >>> format.read_magic(StringIO(format.magic(255, 255))) - (255, 255) - >>> format.read_magic(StringIO(format.magic(2, 5))) - (2, 5) - -Test the header writing. - - >>> for arr in basic_arrays + record_arrays: - ... f = StringIO() - ... format.write_array_header_1_0(f, arr) # XXX: arr is not a dict, items gets called on it - ... print repr(f.getvalue()) - ... 
- "F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '|u1', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (15,)} \n" - "F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (3, 5)} \n" - "F\x00{'descr': '|u1', 'fortran_order': True, 'shape': (5, 3)} \n" - "F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (3, 3)} \n" - "F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '|u1', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (15,)} \n" - "F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (3, 5)} \n" - "F\x00{'descr': '|u1', 'fortran_order': True, 'shape': (5, 3)} \n" - "F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (3, 3)} \n" - "F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '|i1', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (15,)} \n" - "F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (3, 5)} \n" - "F\x00{'descr': '|i1', 'fortran_order': True, 'shape': (5, 3)} \n" - "F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (3, 3)} \n" - "F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '|i1', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (15,)} \n" - "F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (3, 5)} \n" - "F\x00{'descr': '|i1', 'fortran_order': True, 'shape': (5, 3)} \n" - "F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (3, 3)} \n" - "F\x00{'descr': 'u2', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '>u2', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '>u2', 'fortran_order': False, 'shape': (15,)} \n" - "F\x00{'descr': '>u2', 'fortran_order': False, 'shape': (3, 5)} \n" - "F\x00{'descr': '>u2', 'fortran_order': True, 'shape': (5, 3)} \n" - 
"F\x00{'descr': '>u2', 'fortran_order': False, 'shape': (3, 3)} \n" - "F\x00{'descr': 'i2', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '>i2', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '>i2', 'fortran_order': False, 'shape': (15,)} \n" - "F\x00{'descr': '>i2', 'fortran_order': False, 'shape': (3, 5)} \n" - "F\x00{'descr': '>i2', 'fortran_order': True, 'shape': (5, 3)} \n" - "F\x00{'descr': '>i2', 'fortran_order': False, 'shape': (3, 3)} \n" - "F\x00{'descr': 'u4', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '>u4', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '>u4', 'fortran_order': False, 'shape': (15,)} \n" - "F\x00{'descr': '>u4', 'fortran_order': False, 'shape': (3, 5)} \n" - "F\x00{'descr': '>u4', 'fortran_order': True, 'shape': (5, 3)} \n" - "F\x00{'descr': '>u4', 'fortran_order': False, 'shape': (3, 3)} \n" - "F\x00{'descr': 'i4', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '>i4', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '>i4', 'fortran_order': False, 'shape': (15,)} \n" - "F\x00{'descr': '>i4', 'fortran_order': False, 'shape': (3, 5)} \n" - "F\x00{'descr': '>i4', 'fortran_order': True, 'shape': (5, 3)} \n" - "F\x00{'descr': '>i4', 'fortran_order': False, 'shape': (3, 3)} \n" - "F\x00{'descr': 'u8', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '>u8', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '>u8', 'fortran_order': False, 'shape': (15,)} \n" - "F\x00{'descr': '>u8', 'fortran_order': False, 'shape': (3, 5)} \n" - "F\x00{'descr': '>u8', 'fortran_order': True, 'shape': (5, 3)} \n" - "F\x00{'descr': '>u8', 'fortran_order': False, 'shape': (3, 3)} \n" - "F\x00{'descr': 'i8', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '>i8', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '>i8', 'fortran_order': False, 'shape': (15,)} \n" - "F\x00{'descr': '>i8', 'fortran_order': False, 'shape': (3, 5)} \n" - 
"F\x00{'descr': '>i8', 'fortran_order': True, 'shape': (5, 3)} \n" - "F\x00{'descr': '>i8', 'fortran_order': False, 'shape': (3, 3)} \n" - "F\x00{'descr': 'f4', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '>f4', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '>f4', 'fortran_order': False, 'shape': (15,)} \n" - "F\x00{'descr': '>f4', 'fortran_order': False, 'shape': (3, 5)} \n" - "F\x00{'descr': '>f4', 'fortran_order': True, 'shape': (5, 3)} \n" - "F\x00{'descr': '>f4', 'fortran_order': False, 'shape': (3, 3)} \n" - "F\x00{'descr': 'f8', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '>f8', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '>f8', 'fortran_order': False, 'shape': (15,)} \n" - "F\x00{'descr': '>f8', 'fortran_order': False, 'shape': (3, 5)} \n" - "F\x00{'descr': '>f8', 'fortran_order': True, 'shape': (5, 3)} \n" - "F\x00{'descr': '>f8', 'fortran_order': False, 'shape': (3, 3)} \n" - "F\x00{'descr': 'c8', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '>c8', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '>c8', 'fortran_order': False, 'shape': (15,)} \n" - "F\x00{'descr': '>c8', 'fortran_order': False, 'shape': (3, 5)} \n" - "F\x00{'descr': '>c8', 'fortran_order': True, 'shape': (5, 3)} \n" - "F\x00{'descr': '>c8', 'fortran_order': False, 'shape': (3, 3)} \n" - "F\x00{'descr': 'c16', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '>c16', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '>c16', 'fortran_order': False, 'shape': (15,)} \n" - "F\x00{'descr': '>c16', 'fortran_order': False, 'shape': (3, 5)} \n" - "F\x00{'descr': '>c16', 'fortran_order': True, 'shape': (5, 3)} \n" - "F\x00{'descr': '>c16', 'fortran_order': False, 'shape': (3, 3)} \n" - "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (15,)} \n" - 
"F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (3, 5)} \n" - "F\x00{'descr': '|O4', 'fortran_order': True, 'shape': (5, 3)} \n" - "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (3, 3)} \n" - "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (0,)} \n" - "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': ()} \n" - "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (15,)} \n" - "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (3, 5)} \n" - "F\x00{'descr': '|O4', 'fortran_order': True, 'shape': (5, 3)} \n" - "F\x00{'descr': '|O4', 'fortran_order': False, 'shape': (3, 3)} \n" - "v\x00{'descr': [('x', 'i4', (2,)), ('y', '>f8', (2, 2)), ('z', '|u1')],\n 'fortran_order': False,\n 'shape': (2,)} \n" - "\x16\x02{'descr': [('x', '>i4', (2,)),\n ('Info',\n [('value', '>c16'),\n ('y2', '>f8'),\n ('Info2',\n [('name', '|S2'),\n ('value', '>c16', (2,)),\n ('y3', '>f8', (2,)),\n ('z3', '>u4', (2,))]),\n ('name', '|S2'),\n ('z2', '|b1')]),\n ('color', '|S2'),\n ('info', [('Name', '>U8'), ('Value', '>c16')]),\n ('y', '>f8', (2, 2)),\n ('z', '|u1')],\n 'fortran_order': False,\n 'shape': (2,)} \n" -''' - - -import sys -import os -import shutil -import tempfile - -if sys.version_info[0] >= 3: - from io import BytesIO as StringIO -else: - from cStringIO import StringIO - -import numpy as np -from numpy.testing import * - -from numpy.lib import format - -from numpy.compat import asbytes, asbytes_nested - - -tempdir = None - -# Module-level setup. -def setup_module(): - global tempdir - tempdir = tempfile.mkdtemp() - -def teardown_module(): - global tempdir - if tempdir is not None and os.path.isdir(tempdir): - shutil.rmtree(tempdir) - tempdir = None - - -# Generate some basic arrays to test with. 
-scalars = [ - np.uint8, - np.int8, - np.uint16, - np.int16, - np.uint32, - np.int32, - np.uint64, - np.int64, - np.float32, - np.float64, - np.complex64, - np.complex128, - object, -] -basic_arrays = [] -for scalar in scalars: - for endian in '<>': - dtype = np.dtype(scalar).newbyteorder(endian) - basic = np.arange(15).astype(dtype) - basic_arrays.extend([ - # Empty - np.array([], dtype=dtype), - # Rank-0 - np.array(10, dtype=dtype), - # 1-D - basic, - # 2-D C-contiguous - basic.reshape((3,5)), - # 2-D F-contiguous - basic.reshape((3,5)).T, - # 2-D non-contiguous - basic.reshape((3,5))[::-1,::2], - ]) - -# More complicated record arrays. -# This is the structure of the table used for plain objects: -# -# +-+-+-+ -# |x|y|z| -# +-+-+-+ - -# Structure of a plain array description: -Pdescr = [ - ('x', 'i4', (2,)), - ('y', 'f8', (2, 2)), - ('z', 'u1')] - -# A plain list of tuples with values for testing: -PbufferT = [ - # x y z - ([3,2], [[6.,4.],[6.,4.]], 8), - ([4,3], [[7.,5.],[7.,5.]], 9), - ] - - -# This is the structure of the table used for nested objects (DON'T PANIC!): -# -# +-+---------------------------------+-----+----------+-+-+ -# |x|Info |color|info |y|z| -# | +-----+--+----------------+----+--+ +----+-----+ | | -# | |value|y2|Info2 |name|z2| |Name|Value| | | -# | | | +----+-----+--+--+ | | | | | | | -# | | | |name|value|y3|z3| | | | | | | | -# +-+-----+--+----+-----+--+--+----+--+-----+----+-----+-+-+ -# - -# The corresponding nested array description: -Ndescr = [ - ('x', 'i4', (2,)), - ('Info', [ - ('value', 'c16'), - ('y2', 'f8'), - ('Info2', [ - ('name', 'S2'), - ('value', 'c16', (2,)), - ('y3', 'f8', (2,)), - ('z3', 'u4', (2,))]), - ('name', 'S2'), - ('z2', 'b1')]), - ('color', 'S2'), - ('info', [ - ('Name', 'U8'), - ('Value', 'c16')]), - ('y', 'f8', (2, 2)), - ('z', 'u1')] - -NbufferT = [ - # x Info color info y z - # value y2 Info2 name z2 Name Value - # name value y3 z3 - ([3,2], (6j, 6., ('nn', [6j,4j], [6.,4.], [1,2]), 'NN', True), 'cc', ('NN', 
6j), [[6.,4.],[6.,4.]], 8), - ([4,3], (7j, 7., ('oo', [7j,5j], [7.,5.], [2,1]), 'OO', False), 'dd', ('OO', 7j), [[7.,5.],[7.,5.]], 9), - ] - -record_arrays = [ - np.array(PbufferT, dtype=np.dtype(Pdescr).newbyteorder('<')), - np.array(NbufferT, dtype=np.dtype(Ndescr).newbyteorder('<')), - np.array(PbufferT, dtype=np.dtype(Pdescr).newbyteorder('>')), - np.array(NbufferT, dtype=np.dtype(Ndescr).newbyteorder('>')), -] - -def roundtrip(arr): - f = StringIO() - format.write_array(f, arr) - f2 = StringIO(f.getvalue()) - arr2 = format.read_array(f2) - return arr2 - -def assert_equal(o1, o2): - assert o1 == o2 - - -def test_roundtrip(): - for arr in basic_arrays + record_arrays: - arr2 = roundtrip(arr) - yield assert_array_equal, arr, arr2 - -def test_memmap_roundtrip(): - # XXX: test crashes nose on windows. Fix this - if not (sys.platform == 'win32' or sys.platform == 'cygwin'): - for arr in basic_arrays + record_arrays: - if arr.dtype.hasobject: - # Skip these since they can't be mmap'ed. - continue - # Write it out normally and through mmap. - nfn = os.path.join(tempdir, 'normal.npy') - mfn = os.path.join(tempdir, 'memmap.npy') - fp = open(nfn, 'wb') - try: - format.write_array(fp, arr) - finally: - fp.close() - - fortran_order = (arr.flags.f_contiguous and not arr.flags.c_contiguous) - ma = format.open_memmap(mfn, mode='w+', dtype=arr.dtype, - shape=arr.shape, fortran_order=fortran_order) - ma[...] = arr - del ma - - # Check that both of these files' contents are the same. - fp = open(nfn, 'rb') - normal_bytes = fp.read() - fp.close() - fp = open(mfn, 'rb') - memmap_bytes = fp.read() - fp.close() - yield assert_equal, normal_bytes, memmap_bytes - - # Check that reading the file using memmap works. - ma = format.open_memmap(nfn, mode='r') - #yield assert_array_equal, ma, arr - #del ma - - -def test_write_version_1_0(): - f = StringIO() - arr = np.arange(1) - # These should pass. 
- format.write_array(f, arr, version=(1, 0)) - format.write_array(f, arr) - - # These should all fail. - bad_versions = [ - (1, 1), - (0, 0), - (0, 1), - (2, 0), - (2, 2), - (255, 255), - ] - for version in bad_versions: - try: - format.write_array(f, arr, version=version) - except ValueError: - pass - else: - raise AssertionError("we should have raised a ValueError for the bad version %r" % (version,)) - - -bad_version_magic = asbytes_nested([ - '\x93NUMPY\x01\x01', - '\x93NUMPY\x00\x00', - '\x93NUMPY\x00\x01', - '\x93NUMPY\x02\x00', - '\x93NUMPY\x02\x02', - '\x93NUMPY\xff\xff', -]) -malformed_magic = asbytes_nested([ - '\x92NUMPY\x01\x00', - '\x00NUMPY\x01\x00', - '\x93numpy\x01\x00', - '\x93MATLB\x01\x00', - '\x93NUMPY\x01', - '\x93NUMPY', - '', -]) - -def test_read_magic_bad_magic(): - for magic in malformed_magic: - f = StringIO(magic) - yield raises(ValueError)(format.read_magic), f - -def test_read_version_1_0_bad_magic(): - for magic in bad_version_magic + malformed_magic: - f = StringIO(magic) - yield raises(ValueError)(format.read_array), f - -def test_bad_magic_args(): - assert_raises(ValueError, format.magic, -1, 1) - assert_raises(ValueError, format.magic, 256, 1) - assert_raises(ValueError, format.magic, 1, -1) - assert_raises(ValueError, format.magic, 1, 256) - -def test_large_header(): - s = StringIO() - d = {'a':1,'b':2} - format.write_array_header_1_0(s,d) - - s = StringIO() - d = {'a':1,'b':2,'c':'x'*256*256} - assert_raises(ValueError, format.write_array_header_1_0, s, d) - -def test_bad_header(): - # header of length less than 2 should fail - s = StringIO() - assert_raises(ValueError, format.read_array_header_1_0, s) - s = StringIO(asbytes('1')) - assert_raises(ValueError, format.read_array_header_1_0, s) - - # header shorter than indicated size should fail - s = StringIO(asbytes('\x01\x00')) - assert_raises(ValueError, format.read_array_header_1_0, s) - - # headers without the exact keys required should fail - d = {"shape":(1,2), - 
"descr":"x"} - s = StringIO() - format.write_array_header_1_0(s,d) - assert_raises(ValueError, format.read_array_header_1_0, s) - - d = {"shape":(1,2), - "fortran_order":False, - "descr":"x", - "extrakey":-1} - s = StringIO() - format.write_array_header_1_0(s,d) - assert_raises(ValueError, format.read_array_header_1_0, s) - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/tests/test_function_base.py b/pythonPackages/numpy/numpy/lib/tests/test_function_base.py deleted file mode 100755 index 1d0d61be37..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_function_base.py +++ /dev/null @@ -1,1073 +0,0 @@ -import warnings - -from numpy.testing import * -import numpy.lib -from numpy.lib import * -from numpy.core import * -from numpy import matrix, asmatrix - -import numpy as np - -class TestAny(TestCase): - def test_basic(self): - y1 = [0, 0, 1, 0] - y2 = [0, 0, 0, 0] - y3 = [1, 0, 1, 0] - assert(any(y1)) - assert(any(y3)) - assert(not any(y2)) - - def test_nd(self): - y1 = [[0, 0, 0], [0, 1, 0], [1, 1, 0]] - assert(any(y1)) - assert_array_equal(sometrue(y1, axis=0), [1, 1, 0]) - assert_array_equal(sometrue(y1, axis=1), [0, 1, 1]) - - -class TestAll(TestCase): - def test_basic(self): - y1 = [0, 1, 1, 0] - y2 = [0, 0, 0, 0] - y3 = [1, 1, 1, 1] - assert(not all(y1)) - assert(all(y3)) - assert(not all(y2)) - assert(all(~array(y2))) - - def test_nd(self): - y1 = [[0, 0, 1], [0, 1, 1], [1, 1, 1]] - assert(not all(y1)) - assert_array_equal(alltrue(y1, axis=0), [0, 0, 1]) - assert_array_equal(alltrue(y1, axis=1), [0, 0, 1]) - - -class TestAverage(TestCase): - def test_basic(self): - y1 = array([1, 2, 3]) - assert(average(y1, axis=0) == 2.) - y2 = array([1., 2., 3.]) - assert(average(y2, axis=0) == 2.) - y3 = [0., 0., 0.] - assert(average(y3, axis=0) == 0.) 
- - y4 = ones((4, 4)) - y4[0, 1] = 0 - y4[1, 0] = 2 - assert_almost_equal(y4.mean(0), average(y4, 0)) - assert_almost_equal(y4.mean(1), average(y4, 1)) - - y5 = rand(5, 5) - assert_almost_equal(y5.mean(0), average(y5, 0)) - assert_almost_equal(y5.mean(1), average(y5, 1)) - - y6 = matrix(rand(5, 5)) - assert_array_equal(y6.mean(0), average(y6, 0)) - - def test_weights(self): - y = arange(10) - w = arange(10) - actual = average(y, weights=w) - desired = (arange(10) ** 2).sum()*1. / arange(10).sum() - assert_almost_equal(actual, desired) - - y1 = array([[1, 2, 3], [4, 5, 6]]) - w0 = [1, 2] - actual = average(y1, weights=w0, axis=0) - desired = array([3., 4., 5.]) - assert_almost_equal(actual, desired) - - w1 = [0, 0, 1] - actual = average(y1, weights=w1, axis=1) - desired = array([3., 6.]) - assert_almost_equal(actual, desired) - - # This should raise an error. Can we test for that? - # assert_equal(average(y1, weights=w1), 9./2.) - - # 2D Case - w2 = [[0, 0, 1], [0, 0, 2]] - desired = array([3., 6.]) - assert_array_equal(average(y1, weights=w2, axis=1), desired) - assert_equal(average(y1, weights=w2), 5.) - - def test_returned(self): - y = array([[1, 2, 3], [4, 5, 6]]) - - # No weights - avg, scl = average(y, returned=True) - assert_equal(scl, 6.)
- - avg, scl = average(y, 0, returned=True) - assert_array_equal(scl, array([2., 2., 2.])) - - avg, scl = average(y, 1, returned=True) - assert_array_equal(scl, array([3., 3.])) - - # With weights - w0 = [1, 2] - avg, scl = average(y, weights=w0, axis=0, returned=True) - assert_array_equal(scl, array([3., 3., 3.])) - - w1 = [1, 2, 3] - avg, scl = average(y, weights=w1, axis=1, returned=True) - assert_array_equal(scl, array([6., 6.])) - - w2 = [[0, 0, 1], [1, 2, 3]] - avg, scl = average(y, weights=w2, axis=1, returned=True) - assert_array_equal(scl, array([1., 6.])) - - -class TestSelect(TestCase): - def _select(self, cond, values, default=0): - output = [] - for m in range(len(cond)): - output += [V[m] for V, C in zip(values, cond) if C[m]] or [default] - return output - - def test_basic(self): - choices = [array([1, 2, 3]), - array([4, 5, 6]), - array([7, 8, 9])] - conditions = [array([0, 0, 0]), - array([0, 1, 0]), - array([0, 0, 1])] - assert_array_equal(select(conditions, choices, default=15), - self._select(conditions, choices, default=15)) - - assert_equal(len(choices), 3) - assert_equal(len(conditions), 3) - - -class TestInsert(TestCase): - def test_basic(self): - a = [1, 2, 3] - assert_equal(insert(a, 0, 1), [1, 1, 2, 3]) - assert_equal(insert(a, 3, 1), [1, 2, 3, 1]) - assert_equal(insert(a, [1, 1, 1], [1, 2, 3]), [1, 1, 2, 3, 2, 3]) - - -class TestAmax(TestCase): - def test_basic(self): - a = [3, 4, 5, 10, -3, -5, 6.0] - assert_equal(amax(a), 10.0) - b = [[3, 6.0, 9.0], - [4, 10.0, 5.0], - [8, 3.0, 2.0]] - assert_equal(amax(b, axis=0), [8.0, 10.0, 9.0]) - assert_equal(amax(b, axis=1), [9.0, 10.0, 8.0]) - - -class TestAmin(TestCase): - def test_basic(self): - a = [3, 4, 5, 10, -3, -5, 6.0] - assert_equal(amin(a), -5.0) - b = [[3, 6.0, 9.0], - [4, 10.0, 5.0], - [8, 3.0, 2.0]] - assert_equal(amin(b, axis=0), [3.0, 3.0, 2.0]) - assert_equal(amin(b, axis=1), [3.0, 4.0, 2.0]) - - -class TestPtp(TestCase): - def test_basic(self): - a = [3, 4, 5, 10, -3, -5, 6.0] 
- assert_equal(ptp(a, axis=0), 15.0) - b = [[3, 6.0, 9.0], - [4, 10.0, 5.0], - [8, 3.0, 2.0]] - assert_equal(ptp(b, axis=0), [5.0, 7.0, 7.0]) - assert_equal(ptp(b, axis= -1), [6.0, 6.0, 6.0]) - - -class TestCumsum(TestCase): - def test_basic(self): - ba = [1, 2, 10, 11, 6, 5, 4] - ba2 = [[1, 2, 3, 4], [5, 6, 7, 9], [10, 3, 4, 5]] - for ctype in [int8, uint8, int16, uint16, int32, uint32, - float32, float64, complex64, complex128]: - a = array(ba, ctype) - a2 = array(ba2, ctype) - assert_array_equal(cumsum(a, axis=0), array([1, 3, 13, 24, 30, 35, 39], ctype)) - assert_array_equal(cumsum(a2, axis=0), array([[1, 2, 3, 4], [6, 8, 10, 13], - [16, 11, 14, 18]], ctype)) - assert_array_equal(cumsum(a2, axis=1), - array([[1, 3, 6, 10], - [5, 11, 18, 27], - [10, 13, 17, 22]], ctype)) - - -class TestProd(TestCase): - def test_basic(self): - ba = [1, 2, 10, 11, 6, 5, 4] - ba2 = [[1, 2, 3, 4], [5, 6, 7, 9], [10, 3, 4, 5]] - for ctype in [int16, uint16, int32, uint32, - float32, float64, complex64, complex128]: - a = array(ba, ctype) - a2 = array(ba2, ctype) - if ctype in ['1', 'b']: - self.assertRaises(ArithmeticError, prod, a) - self.assertRaises(ArithmeticError, prod, a2, 1) - self.assertRaises(ArithmeticError, prod, a) - else: - assert_equal(prod(a, axis=0), 26400) - assert_array_equal(prod(a2, axis=0), - array([50, 36, 84, 180], ctype)) - assert_array_equal(prod(a2, axis= -1), array([24, 1890, 600], ctype)) - - -class TestCumprod(TestCase): - def test_basic(self): - ba = [1, 2, 10, 11, 6, 5, 4] - ba2 = [[1, 2, 3, 4], [5, 6, 7, 9], [10, 3, 4, 5]] - for ctype in [int16, uint16, int32, uint32, - float32, float64, complex64, complex128]: - a = array(ba, ctype) - a2 = array(ba2, ctype) - if ctype in ['1', 'b']: - self.assertRaises(ArithmeticError, cumprod, a) - self.assertRaises(ArithmeticError, cumprod, a2, 1) - self.assertRaises(ArithmeticError, cumprod, a) - else: - assert_array_equal(cumprod(a, axis= -1), - array([1, 2, 20, 220, - 1320, 6600, 26400], ctype)) - 
assert_array_equal(cumprod(a2, axis=0), - array([[ 1, 2, 3, 4], - [ 5, 12, 21, 36], - [50, 36, 84, 180]], ctype)) - assert_array_equal(cumprod(a2, axis= -1), - array([[ 1, 2, 6, 24], - [ 5, 30, 210, 1890], - [10, 30, 120, 600]], ctype)) - - -class TestDiff(TestCase): - def test_basic(self): - x = [1, 4, 6, 7, 12] - out = array([3, 2, 1, 5]) - out2 = array([-1, -1, 4]) - out3 = array([0, 5]) - assert_array_equal(diff(x), out) - assert_array_equal(diff(x, n=2), out2) - assert_array_equal(diff(x, n=3), out3) - - def test_nd(self): - x = 20 * rand(10, 20, 30) - out1 = x[:, :, 1:] - x[:, :, :-1] - out2 = out1[:, :, 1:] - out1[:, :, :-1] - out3 = x[1:, :, :] - x[:-1, :, :] - out4 = out3[1:, :, :] - out3[:-1, :, :] - assert_array_equal(diff(x), out1) - assert_array_equal(diff(x, n=2), out2) - assert_array_equal(diff(x, axis=0), out3) - assert_array_equal(diff(x, n=2, axis=0), out4) - - -class TestGradient(TestCase): - def test_basic(self): - x = array([[1, 1], [3, 4]]) - dx = [array([[2., 3.], [2., 3.]]), - array([[0., 0.], [1., 1.]])] - assert_array_equal(gradient(x), dx) - - def test_badargs(self): - # for 2D array, gradient can take 0,1, or 2 extra args - x = array([[1, 1], [3, 4]]) - assert_raises(SyntaxError, gradient, x, array([1., 1.]), - array([1., 1.]), array([1., 1.])) - - def test_masked(self): - # Make sure that gradient supports subclasses like masked arrays - x = np.ma.array([[1, 1], [3, 4]]) - assert_equal(type(gradient(x)[0]), type(x)) - - -class TestAngle(TestCase): - def test_basic(self): - x = [1 + 3j, sqrt(2) / 2.0 + 1j * sqrt(2) / 2, 1, 1j, -1, -1j, 1 - 3j, -1 + 3j] - y = angle(x) - yo = [arctan(3.0 / 1.0), arctan(1.0), 0, pi / 2, pi, -pi / 2.0, - - arctan(3.0 / 1.0), pi - arctan(3.0 / 1.0)] - z = angle(x, deg=1) - zo = array(yo) * 180 / pi - assert_array_almost_equal(y, yo, 11) - assert_array_almost_equal(z, zo, 11) - - -class TestTrimZeros(TestCase): - """ only testing for integer splits. 
- """ - def test_basic(self): - a = array([0, 0, 1, 2, 3, 4, 0]) - res = trim_zeros(a) - assert_array_equal(res, array([1, 2, 3, 4])) - - def test_leading_skip(self): - a = array([0, 0, 1, 0, 2, 3, 4, 0]) - res = trim_zeros(a) - assert_array_equal(res, array([1, 0, 2, 3, 4])) - - def test_trailing_skip(self): - a = array([0, 0, 1, 0, 2, 3, 0, 4, 0]) - res = trim_zeros(a) - assert_array_equal(res, array([1, 0, 2, 3, 0, 4])) - - -class TestExtins(TestCase): - def test_basic(self): - a = array([1, 3, 2, 1, 2, 3, 3]) - b = extract(a > 1, a) - assert_array_equal(b, [3, 2, 2, 3, 3]) - - def test_place(self): - a = array([1, 4, 3, 2, 5, 8, 7]) - place(a, [0, 1, 0, 1, 0, 1, 0], [2, 4, 6]) - assert_array_equal(a, [1, 2, 3, 4, 5, 6, 7]) - - def test_both(self): - a = rand(10) - mask = a > 0.5 - ac = a.copy() - c = extract(mask, a) - place(a, mask, 0) - place(a, mask, c) - assert_array_equal(a, ac) - - -class TestVectorize(TestCase): - def test_simple(self): - def addsubtract(a, b): - if a > b: - return a - b - else: - return a + b - f = vectorize(addsubtract) - r = f([0, 3, 6, 9], [1, 3, 5, 7]) - assert_array_equal(r, [1, 6, 1, 2]) - - def test_scalar(self): - def addsubtract(a, b): - if a > b: - return a - b - else: - return a + b - f = vectorize(addsubtract) - r = f([0, 3, 6, 9], 5) - assert_array_equal(r, [5, 8, 1, 4]) - - def test_large(self): - x = linspace(-3, 2, 10000) - f = vectorize(lambda x: x) - y = f(x) - assert_array_equal(y, x) - - def test_ufunc(self): - import math - f = vectorize(math.cos) - args = array([0, 0.5*pi, pi, 1.5*pi, 2*pi]) - r1 = f(args) - r2 = cos(args) - assert_array_equal(r1, r2) - - def test_keywords(self): - import math - def foo(a, b=1): - return a + b - f = vectorize(foo) - args = array([1,2,3]) - r1 = f(args) - r2 = array([2,3,4]) - assert_array_equal(r1, r2) - r1 = f(args, 2) - r2 = array([3,4,5]) - assert_array_equal(r1, r2) - - def test_keywords_no_func_code(self): - # This needs to test a function that has keywords but - # no 
func_code attribute, since otherwise vectorize will - inspect the func_code. - import random - try: - f = vectorize(random.randrange) - except: - raise AssertionError() - - -class TestDigitize(TestCase): - def test_forward(self): - x = arange(-6, 5) - bins = arange(-5, 5) - assert_array_equal(digitize(x, bins), arange(11)) - - def test_reverse(self): - x = arange(5, -6, -1) - bins = arange(5, -5, -1) - assert_array_equal(digitize(x, bins), arange(11)) - - def test_random(self): - x = rand(10) - bin = linspace(x.min(), x.max(), 10) - assert all(digitize(x, bin) != 0) - - -class TestUnwrap(TestCase): - def test_simple(self): - #check that unwrap removes jumps greater than 2*pi - assert_array_equal(unwrap([1, 1 + 2 * pi]), [1, 1]) - #check that unwrap maintains continuity - assert(all(diff(unwrap(rand(10) * 100)) < pi)) - - -class TestFilterwindows(TestCase): - def test_hanning(self): - #check symmetry - w = hanning(10) - assert_array_almost_equal(w, flipud(w), 7) - #check known value - assert_almost_equal(sum(w, axis=0), 4.500, 4) - - def test_hamming(self): - #check symmetry - w = hamming(10) - assert_array_almost_equal(w, flipud(w), 7) - #check known value - assert_almost_equal(sum(w, axis=0), 4.9400, 4) - - def test_bartlett(self): - #check symmetry - w = bartlett(10) - assert_array_almost_equal(w, flipud(w), 7) - #check known value - assert_almost_equal(sum(w, axis=0), 4.4444, 4) - - def test_blackman(self): - #check symmetry - w = blackman(10) - assert_array_almost_equal(w, flipud(w), 7) - #check known value - assert_almost_equal(sum(w, axis=0), 3.7800, 4) - - -class TestTrapz(TestCase): - def test_simple(self): - r = trapz(exp(-1.0 / 2 * (arange(-10, 10, .1)) ** 2) / sqrt(2 * pi), dx=0.1) - #check integral of normal equals 1 - assert_almost_equal(sum(r, axis=0), 1, 7) - - def test_ndim(self): - x = linspace(0, 1, 3) - y = linspace(0, 2, 8) - z = linspace(0, 3, 13) - - wx = ones_like(x) * (x[1] - x[0]) - wx[0] /= 2 - wx[-1] /= 2 - wy = ones_like(y) * (y[1] - 
y[0]) - wy[0] /= 2 - wy[-1] /= 2 - wz = ones_like(z) * (z[1] - z[0]) - wz[0] /= 2 - wz[-1] /= 2 - - q = x[:, None, None] + y[None, :, None] + z[None, None, :] - - qx = (q * wx[:, None, None]).sum(axis=0) - qy = (q * wy[None, :, None]).sum(axis=1) - qz = (q * wz[None, None, :]).sum(axis=2) - - # n-d `x` - r = trapz(q, x=x[:, None, None], axis=0) - assert_almost_equal(r, qx) - r = trapz(q, x=y[None, :, None], axis=1) - assert_almost_equal(r, qy) - r = trapz(q, x=z[None, None, :], axis=2) - assert_almost_equal(r, qz) - - # 1-d `x` - r = trapz(q, x=x, axis=0) - assert_almost_equal(r, qx) - r = trapz(q, x=y, axis=1) - assert_almost_equal(r, qy) - r = trapz(q, x=z, axis=2) - assert_almost_equal(r, qz) - - def test_masked(self): - #Testing that masked arrays behave as if the function is 0 where - #masked - x = arange(5) - y = x * x - mask = x == 2 - ym = np.ma.array(y, mask=mask) - r = 13.0 # sum(0.5 * (0 + 1) * 1.0 + 0.5 * (9 + 16)) - assert_almost_equal(trapz(ym, x), r) - - xm = np.ma.array(x, mask=mask) - assert_almost_equal(trapz(ym, xm), r) - - xm = np.ma.array(x, mask=mask) - assert_almost_equal(trapz(y, xm), r) - - def test_matrix(self): - #Test to make sure matrices give the same answer as ndarrays - x = linspace(0, 5) - y = x * x - r = trapz(y, x) - mx = matrix(x) - my = matrix(y) - mr = trapz(my, mx) - assert_almost_equal(mr, r) - - -class TestSinc(TestCase): - def test_simple(self): - assert(sinc(0) == 1) - w = sinc(linspace(-1, 1, 100)) - #check symmetry - assert_array_almost_equal(w, flipud(w), 7) - - def test_array_like(self): - x = [0, 0.5] - y1 = sinc(array(x)) - y2 = sinc(list(x)) - y3 = sinc(tuple(x)) - assert_array_equal(y1, y2) - assert_array_equal(y1, y3) - -class TestHistogram(TestCase): - def setUp(self): - pass - - def tearDown(self): - pass - - def test_simple(self): - n = 100 - v = rand(n) - (a, b) = histogram(v) - #check if the sum of the bins equals the number of samples - assert_equal(sum(a, axis=0), n) - #check that the bin counts are evenly 
spaced when the data is from a - # linear function - (a, b) = histogram(linspace(0, 10, 100)) - assert_array_equal(a, 10) - - def test_one_bin(self): - # Ticket 632 - hist, edges = histogram([1, 2, 3, 4], [1, 2]) - assert_array_equal(hist, [2, ]) - assert_array_equal(edges, [1, 2]) - - def test_normed(self): - # Check that the integral of the density equals 1. - n = 100 - v = rand(n) - a, b = histogram(v, normed=True) - area = sum(a * diff(b)) - assert_almost_equal(area, 1) - - # Check with non constant bin width - v = rand(n) * 10 - bins = [0, 1, 5, 9, 10] - a, b = histogram(v, bins, normed=True) - area = sum(a * diff(b)) - assert_almost_equal(area, 1) - - def test_outliers(self): - # Check that outliers are not tallied - a = arange(10) + .5 - - # Lower outliers - h, b = histogram(a, range=[0, 9]) - assert_equal(h.sum(), 9) - - # Upper outliers - h, b = histogram(a, range=[1, 10]) - assert_equal(h.sum(), 9) - - # Normalization - h, b = histogram(a, range=[1, 9], normed=True) - assert_equal((h * diff(b)).sum(), 1) - - # Weights - w = arange(10) + .5 - h, b = histogram(a, range=[1, 9], weights=w, normed=True) - assert_equal((h * diff(b)).sum(), 1) - - h, b = histogram(a, bins=8, range=[1, 9], weights=w) - assert_equal(h, w[1:-1]) - - def test_type(self): - # Check the type of the returned histogram - a = arange(10) + .5 - h, b = histogram(a) - assert(issubdtype(h.dtype, int)) - - h, b = histogram(a, normed=True) - assert(issubdtype(h.dtype, float)) - - h, b = histogram(a, weights=ones(10, int)) - assert(issubdtype(h.dtype, int)) - - h, b = histogram(a, weights=ones(10, float)) - assert(issubdtype(h.dtype, float)) - - def test_weights(self): - v = rand(100) - w = ones(100) * 5 - a, b = histogram(v) - na, nb = histogram(v, normed=True) - wa, wb = histogram(v, weights=w) - nwa, nwb = histogram(v, weights=w, normed=True) - assert_array_almost_equal(a * 5, wa) - assert_array_almost_equal(na, nwa) - - # Check weights are properly applied. 
- v = linspace(0, 10, 10) - w = concatenate((zeros(5), ones(5))) - wa, wb = histogram(v, bins=arange(11), weights=w) - assert_array_almost_equal(wa, w) - - # Check with integer weights - wa, wb = histogram([1, 2, 2, 4], bins=4, weights=[4, 3, 2, 1]) - assert_array_equal(wa, [4, 5, 0, 1]) - wa, wb = histogram([1, 2, 2, 4], bins=4, weights=[4, 3, 2, 1], normed=True) - assert_array_equal(wa, array([4, 5, 0, 1]) / 10. / 3. * 4) - - -class TestHistogramdd(TestCase): - def test_simple(self): - x = array([[-.5, .5, 1.5], [-.5, 1.5, 2.5], [-.5, 2.5, .5], \ - [.5, .5, 1.5], [.5, 1.5, 2.5], [.5, 2.5, 2.5]]) - H, edges = histogramdd(x, (2, 3, 3), range=[[-1, 1], [0, 3], [0, 3]]) - answer = asarray([[[0, 1, 0], [0, 0, 1], [1, 0, 0]], [[0, 1, 0], [0, 0, 1], - [0, 0, 1]]]) - assert_array_equal(H, answer) - # Check normalization - ed = [[-2, 0, 2], [0, 1, 2, 3], [0, 1, 2, 3]] - H, edges = histogramdd(x, bins=ed, normed=True) - assert(all(H == answer / 12.)) - # Check that H has the correct shape. - H, edges = histogramdd(x, (2, 3, 4), range=[[-1, 1], [0, 3], [0, 4]], - normed=True) - answer = asarray([[[0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0]], [[0, 1, 0, 0], - [0, 0, 1, 0], [0, 0, 1, 0]]]) - assert_array_almost_equal(H, answer / 6., 4) - # Check that a sequence of arrays is accepted and H has the correct - # shape. - z = [squeeze(y) for y in split(x, 3, axis=1)] - H, edges = histogramdd(z, bins=(4, 3, 2), range=[[-2, 2], [0, 3], [0, 2]]) - answer = asarray([[[0, 0], [0, 0], [0, 0]], - [[0, 1], [0, 0], [1, 0]], - [[0, 1], [0, 0], [0, 0]], - [[0, 0], [0, 0], [0, 0]]]) - assert_array_equal(H, answer) - - Z = zeros((5, 5, 5)) - Z[range(5), range(5), range(5)] = 1. - H, edges = histogramdd([arange(5), arange(5), arange(5)], 5) - assert_array_equal(H, Z) - - def test_shape_3d(self): - # All possible permutations for bins of different lengths in 3D. 
- bins = ((5, 4, 6), (6, 4, 5), (5, 6, 4), (4, 6, 5), (6, 5, 4), - (4, 5, 6)) - r = rand(10, 3) - for b in bins: - H, edges = histogramdd(r, b) - assert(H.shape == b) - - def test_shape_4d(self): - # All possible permutations for bins of different lengths in 4D. - bins = ((7, 4, 5, 6), (4, 5, 7, 6), (5, 6, 4, 7), (7, 6, 5, 4), - (5, 7, 6, 4), (4, 6, 7, 5), (6, 5, 7, 4), (7, 5, 4, 6), - (7, 4, 6, 5), (6, 4, 7, 5), (6, 7, 5, 4), (4, 6, 5, 7), - (4, 7, 5, 6), (5, 4, 6, 7), (5, 7, 4, 6), (6, 7, 4, 5), - (6, 5, 4, 7), (4, 7, 6, 5), (4, 5, 6, 7), (7, 6, 4, 5), - (5, 4, 7, 6), (5, 6, 7, 4), (6, 4, 5, 7), (7, 5, 6, 4)) - - r = rand(10, 4) - for b in bins: - H, edges = histogramdd(r, b) - assert(H.shape == b) - - def test_weights(self): - v = rand(100, 2) - hist, edges = histogramdd(v) - n_hist, edges = histogramdd(v, normed=True) - w_hist, edges = histogramdd(v, weights=ones(100)) - assert_array_equal(w_hist, hist) - w_hist, edges = histogramdd(v, weights=ones(100) * 2, normed=True) - assert_array_equal(w_hist, n_hist) - w_hist, edges = histogramdd(v, weights=ones(100, int) * 2) - assert_array_equal(w_hist, 2 * hist) - - def test_identical_samples(self): - x = zeros((10, 2), int) - hist, edges = histogramdd(x, bins=2) - assert_array_equal(edges[0], array([-0.5, 0. 
, 0.5])) - - -class TestUnique(TestCase): - def test_simple(self): - x = array([4, 3, 2, 1, 1, 2, 3, 4, 0]) - assert(all(unique(x) == [0, 1, 2, 3, 4])) - assert(unique(array([1, 1, 1, 1, 1])) == array([1])) - x = ['widget', 'ham', 'foo', 'bar', 'foo', 'ham'] - assert(all(unique(x) == ['bar', 'foo', 'ham', 'widget'])) - x = array([5 + 6j, 1 + 1j, 1 + 10j, 10, 5 + 6j]) - assert(all(unique(x) == [1 + 1j, 1 + 10j, 5 + 6j, 10])) - - -class TestCheckFinite(TestCase): - def test_simple(self): - a = [1, 2, 3] - b = [1, 2, inf] - c = [1, 2, nan] - numpy.lib.asarray_chkfinite(a) - assert_raises(ValueError, numpy.lib.asarray_chkfinite, b) - assert_raises(ValueError, numpy.lib.asarray_chkfinite, c) - - -class TestNaNFuncts(TestCase): - def setUp(self): - self.A = array([[[ nan, 0.01319214, 0.01620964], - [ 0.11704017, nan, 0.75157887], - [ 0.28333658, 0.1630199 , nan ]], - [[ 0.59541557, nan, 0.37910852], - [ nan, 0.87964135, nan ], - [ 0.70543747, nan, 0.34306596]], - [[ 0.72687499, 0.91084584, nan ], - [ 0.84386844, 0.38944762, 0.23913896], - [ nan, 0.37068164, 0.33850425]]]) - - def test_nansum(self): - assert_almost_equal(nansum(self.A), 8.0664079100000006) - assert_almost_equal(nansum(self.A, 0), - array([[ 1.32229056, 0.92403798, 0.39531816], - [ 0.96090861, 1.26908897, 0.99071783], - [ 0.98877405, 0.53370154, 0.68157021]])) - assert_almost_equal(nansum(self.A, 1), - array([[ 0.40037675, 0.17621204, 0.76778851], - [ 1.30085304, 0.87964135, 0.72217448], - [ 1.57074343, 1.6709751 , 0.57764321]])) - assert_almost_equal(nansum(self.A, 2), - array([[ 0.02940178, 0.86861904, 0.44635648], - [ 0.97452409, 0.87964135, 1.04850343], - [ 1.63772083, 1.47245502, 0.70918589]])) - - def test_nanmin(self): - assert_almost_equal(nanmin(self.A), 0.01319214) - assert_almost_equal(nanmin(self.A, 0), - array([[ 0.59541557, 0.01319214, 0.01620964], - [ 0.11704017, 0.38944762, 0.23913896], - [ 0.28333658, 0.1630199 , 0.33850425]])) - assert_almost_equal(nanmin(self.A, 1), - array([[ 
0.11704017, 0.01319214, 0.01620964], - [ 0.59541557, 0.87964135, 0.34306596], - [ 0.72687499, 0.37068164, 0.23913896]])) - assert_almost_equal(nanmin(self.A, 2), - array([[ 0.01319214, 0.11704017, 0.1630199 ], - [ 0.37910852, 0.87964135, 0.34306596], - [ 0.72687499, 0.23913896, 0.33850425]])) - assert nanmin([nan, nan]) is nan - - def test_nanargmin(self): - assert_almost_equal(nanargmin(self.A), 1) - assert_almost_equal(nanargmin(self.A, 0), - array([[1, 0, 0], - [0, 2, 2], - [0, 0, 2]])) - assert_almost_equal(nanargmin(self.A, 1), - array([[1, 0, 0], - [0, 1, 2], - [0, 2, 1]])) - assert_almost_equal(nanargmin(self.A, 2), - array([[1, 0, 1], - [2, 1, 2], - [0, 2, 2]])) - - def test_nanmax(self): - assert_almost_equal(nanmax(self.A), 0.91084584000000002) - assert_almost_equal(nanmax(self.A, 0), - array([[ 0.72687499, 0.91084584, 0.37910852], - [ 0.84386844, 0.87964135, 0.75157887], - [ 0.70543747, 0.37068164, 0.34306596]])) - assert_almost_equal(nanmax(self.A, 1), - array([[ 0.28333658, 0.1630199 , 0.75157887], - [ 0.70543747, 0.87964135, 0.37910852], - [ 0.84386844, 0.91084584, 0.33850425]])) - assert_almost_equal(nanmax(self.A, 2), - array([[ 0.01620964, 0.75157887, 0.28333658], - [ 0.59541557, 0.87964135, 0.70543747], - [ 0.91084584, 0.84386844, 0.37068164]])) - - def test_nanmin_allnan_on_axis(self): - assert_array_equal(isnan(nanmin([[nan] * 2] * 3, axis=1)), - [True, True, True]) - - def test_nanmin_masked(self): - a = np.ma.fix_invalid([[2, 1, 3, nan], [5, 2, 3, nan]]) - ctrl_mask = a._mask.copy() - test = np.nanmin(a, axis=1) - assert_equal(test, [1, 2]) - assert_equal(a._mask, ctrl_mask) - assert_equal(np.isinf(a), np.zeros((2, 4), dtype=bool)) - - -class TestNanFunctsIntTypes(TestCase): - - int_types = (int8, int16, int32, int64, uint8, uint16, uint32, uint64) - - def setUp(self, *args, **kwargs): - self.A = array([127, 39, 93, 87, 46]) - - def integer_arrays(self): - for dtype in self.int_types: - yield self.A.astype(dtype) - - def test_nanmin(self): - 
min_value = min(self.A) - for A in self.integer_arrays(): - assert_equal(nanmin(A), min_value) - - def test_nanmax(self): - max_value = max(self.A) - for A in self.integer_arrays(): - assert_equal(nanmax(A), max_value) - - def test_nanargmin(self): - min_arg = argmin(self.A) - for A in self.integer_arrays(): - assert_equal(nanargmin(A), min_arg) - - def test_nanargmax(self): - max_arg = argmax(self.A) - for A in self.integer_arrays(): - assert_equal(nanargmax(A), max_arg) - - -class TestCorrCoef(TestCase): - A = array([[ 0.15391142, 0.18045767, 0.14197213], - [ 0.70461506, 0.96474128, 0.27906989], - [ 0.9297531 , 0.32296769, 0.19267156]]) - B = array([[ 0.10377691, 0.5417086 , 0.49807457], - [ 0.82872117, 0.77801674, 0.39226705], - [ 0.9314666 , 0.66800209, 0.03538394]]) - res1 = array([[ 1. , 0.9379533 , -0.04931983], - [ 0.9379533 , 1. , 0.30007991], - [-0.04931983, 0.30007991, 1. ]]) - res2 = array([[ 1. , 0.9379533 , -0.04931983, - 0.30151751, 0.66318558, 0.51532523], - [ 0.9379533 , 1. , 0.30007991, - - 0.04781421, 0.88157256, 0.78052386], - [-0.04931983, 0.30007991, 1. , - - 0.96717111, 0.71483595, 0.83053601], - [ 0.30151751, -0.04781421, -0.96717111, - 1. , -0.51366032, -0.66173113], - [ 0.66318558, 0.88157256, 0.71483595, - - 0.51366032, 1. , 0.98317823], - [ 0.51532523, 0.78052386, 0.83053601, - - 0.66173113, 0.98317823, 1. 
]]) - - def test_simple(self): - assert_almost_equal(corrcoef(self.A), self.res1) - assert_almost_equal(corrcoef(self.A, self.B), self.res2) - - def test_ddof(self): - assert_almost_equal(corrcoef(self.A, ddof=-1), self.res1) - assert_almost_equal(corrcoef(self.A, self.B, ddof=-1), self.res2) - - -class Test_i0(TestCase): - def test_simple(self): - assert_almost_equal(i0(0.5), array(1.0634833707413234)) - A = array([ 0.49842636, 0.6969809 , 0.22011976, 0.0155549]) - assert_almost_equal(i0(A), - array([ 1.06307822, 1.12518299, 1.01214991, 1.00006049])) - B = array([[ 0.827002 , 0.99959078], - [ 0.89694769, 0.39298162], - [ 0.37954418, 0.05206293], - [ 0.36465447, 0.72446427], - [ 0.48164949, 0.50324519]]) - assert_almost_equal(i0(B), - array([[ 1.17843223, 1.26583466], - [ 1.21147086, 1.0389829 ], - [ 1.03633899, 1.00067775], - [ 1.03352052, 1.13557954], - [ 1.0588429 , 1.06432317]])) - - -class TestKaiser(TestCase): - def test_simple(self): - assert_almost_equal(kaiser(0, 1.0), array([])) - assert isfinite(kaiser(1, 1.0)) - assert_almost_equal(kaiser(2, 1.0), array([ 0.78984831, 0.78984831])) - assert_almost_equal(kaiser(5, 1.0), - array([ 0.78984831, 0.94503323, 1. , - 0.94503323, 0.78984831])) - assert_almost_equal(kaiser(5, 1.56789), - array([ 0.58285404, 0.88409679, 1. 
, - 0.88409679, 0.58285404])) - - def test_int_beta(self): - kaiser(3, 4) - - -class TestMsort(TestCase): - def test_simple(self): - A = array([[ 0.44567325, 0.79115165, 0.5490053 ], - [ 0.36844147, 0.37325583, 0.96098397], - [ 0.64864341, 0.52929049, 0.39172155]]) - assert_almost_equal(msort(A), - array([[ 0.36844147, 0.37325583, 0.39172155], - [ 0.44567325, 0.52929049, 0.5490053 ], - [ 0.64864341, 0.79115165, 0.96098397]])) - - -class TestMeshgrid(TestCase): - def test_simple(self): - [X, Y] = meshgrid([1, 2, 3], [4, 5, 6, 7]) - assert all(X == array([[1, 2, 3], - [1, 2, 3], - [1, 2, 3], - [1, 2, 3]])) - assert all(Y == array([[4, 4, 4], - [5, 5, 5], - [6, 6, 6], - [7, 7, 7]])) - - -class TestPiecewise(TestCase): - def test_simple(self): - # Condition is single bool list - x = piecewise([0, 0], [True, False], [1]) - assert_array_equal(x, [1, 0]) - - # List of conditions: single bool list - x = piecewise([0, 0], [[True, False]], [1]) - assert_array_equal(x, [1, 0]) - - # Conditions is single bool array - x = piecewise([0, 0], array([True, False]), [1]) - assert_array_equal(x, [1, 0]) - - # Condition is single int array - x = piecewise([0, 0], array([1, 0]), [1]) - assert_array_equal(x, [1, 0]) - - # List of conditions: int array - x = piecewise([0, 0], [array([1, 0])], [1]) - assert_array_equal(x, [1, 0]) - - - x = piecewise([0, 0], [[False, True]], [lambda x:-1]) - assert_array_equal(x, [0, -1]) - - x = piecewise([1, 2], [[True, False], [False, True]], [3, 4]) - assert_array_equal(x, [3, 4]) - - def test_default(self): - # No value specified for x[1], should be 0 - x = piecewise([1, 2], [True, False], [2]) - assert_array_equal(x, [2, 0]) - - # Should set x[1] to 3 - x = piecewise([1, 2], [True, False], [2, 3]) - assert_array_equal(x, [2, 3]) - - def test_0d(self): - x = array(3) - y = piecewise(x, x > 3, [4, 0]) - assert y.ndim == 0 - assert y == 0 - - -class TestBincount(TestCase): - def test_simple(self): - y = np.bincount(np.arange(4)) - assert_array_equal(y, 
np.ones(4)) - - def test_simple2(self): - y = np.bincount(np.array([1, 5, 2, 4, 1])) - assert_array_equal(y, np.array([0, 2, 1, 0, 1, 1])) - - def test_simple_weight(self): - x = np.arange(4) - w = np.array([0.2, 0.3, 0.5, 0.1]) - y = np.bincount(x, w) - assert_array_equal(y, w) - - def test_simple_weight2(self): - x = np.array([1, 2, 4, 5, 2]) - w = np.array([0.2, 0.3, 0.5, 0.1, 0.2]) - y = np.bincount(x, w) - assert_array_equal(y, np.array([0, 0.2, 0.5, 0, 0.5, 0.1])) - - -class TestInterp(TestCase): - def test_exceptions(self): - assert_raises(ValueError, interp, 0, [], []) - assert_raises(ValueError, interp, 0, [0], [1, 2]) - - def test_basic(self): - x = np.linspace(0, 1, 5) - y = np.linspace(0, 1, 5) - x0 = np.linspace(0, 1, 50) - assert_almost_equal(np.interp(x0, x, y), x0) - - def test_right_left_behavior(self): - assert_equal(interp([-1, 0, 1], [0], [1]), [1,1,1]) - assert_equal(interp([-1, 0, 1], [0], [1], left=0), [0,1,1]) - assert_equal(interp([-1, 0, 1], [0], [1], right=0), [1,1,0]) - assert_equal(interp([-1, 0, 1], [0], [1], left=0, right=0), [0,1,0]) - - def test_scalar_interpolation_point(self): - x = np.linspace(0, 1, 5) - y = np.linspace(0, 1, 5) - x0 = 0 - assert_almost_equal(np.interp(x0, x, y), x0) - x0 = .3 - assert_almost_equal(np.interp(x0, x, y), x0) - x0 = np.float32(.3) - assert_almost_equal(np.interp(x0, x, y), x0) - x0 = np.float64(.3) - assert_almost_equal(np.interp(x0, x, y), x0) - - def test_zero_dimensional_interpolation_point(self): - x = np.linspace(0, 1, 5) - y = np.linspace(0, 1, 5) - x0 = np.array(.3) - assert_almost_equal(np.interp(x0, x, y), x0) - x0 = np.array(.3, dtype=object) - assert_almost_equal(np.interp(x0, x, y), .3) - - -def compare_results(res, desired): - for i in range(len(desired)): - assert_array_equal(res[i], desired[i]) - - -def test_percentile_list(): - assert_equal(np.percentile([1, 2, 3], 0), 1) - -def test_percentile_out(): - x = np.array([1, 2, 3]) - y = np.zeros((3,)) - p = (1, 2, 3) - np.percentile(x, 
p, out=y) - assert_equal(y, np.percentile(x, p)) - - x = np.array([[1, 2, 3], - [4, 5, 6]]) - - y = np.zeros((3, 3)) - np.percentile(x, p, axis=0, out=y) - assert_equal(y, np.percentile(x, p, axis=0)) - - y = np.zeros((3, 2)) - np.percentile(x, p, axis=1, out=y) - assert_equal(y, np.percentile(x, p, axis=1)) - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/tests/test_index_tricks.py b/pythonPackages/numpy/numpy/lib/tests/test_index_tricks.py deleted file mode 100755 index 3307cef3e0..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_index_tricks.py +++ /dev/null @@ -1,129 +0,0 @@ -from numpy.testing import * -import numpy as np -from numpy import ( array, ones, r_, mgrid, unravel_index, zeros, where, - ndenumerate, fill_diagonal, diag_indices, - diag_indices_from ) - -class TestUnravelIndex(TestCase): - def test_basic(self): - assert unravel_index(2,(2,2)) == (1,0) - assert unravel_index(254,(17,94)) == (2, 66) - assert_raises(ValueError, unravel_index, 4,(2,2)) - - -class TestGrid(TestCase): - def test_basic(self): - a = mgrid[-1:1:10j] - b = mgrid[-1:1:0.1] - assert(a.shape == (10,)) - assert(b.shape == (20,)) - assert(a[0] == -1) - assert_almost_equal(a[-1],1) - assert(b[0] == -1) - assert_almost_equal(b[1]-b[0],0.1,11) - assert_almost_equal(b[-1],b[0]+19*0.1,11) - assert_almost_equal(a[1]-a[0],2.0/9.0,11) - - def test_linspace_equivalence(self): - y,st = np.linspace(2,10,retstep=1) - assert_almost_equal(st,8/49.0) - assert_array_almost_equal(y,mgrid[2:10:50j],13) - - def test_nd(self): - c = mgrid[-1:1:10j,-2:2:10j] - d = mgrid[-1:1:0.1,-2:2:0.2] - assert(c.shape == (2,10,10)) - assert(d.shape == (2,20,20)) - assert_array_equal(c[0][0,:],-ones(10,'d')) - assert_array_equal(c[1][:,0],-2*ones(10,'d')) - assert_array_almost_equal(c[0][-1,:],ones(10,'d'),11) - assert_array_almost_equal(c[1][:,-1],2*ones(10,'d'),11) - assert_array_almost_equal(d[0,1,:]-d[0,0,:], 0.1*ones(20,'d'),11) - 
assert_array_almost_equal(d[1,:,1]-d[1,:,0], 0.2*ones(20,'d'),11) - - -class TestConcatenator(TestCase): - def test_1d(self): - assert_array_equal(r_[1,2,3,4,5,6],array([1,2,3,4,5,6])) - b = ones(5) - c = r_[b,0,0,b] - assert_array_equal(c,[1,1,1,1,1,0,0,1,1,1,1,1]) - - def test_mixed_type(self): - g = r_[10.1, 1:10] - assert(g.dtype == 'f8') - - def test_more_mixed_type(self): - g = r_[-10.1, array([1]), array([2,3,4]), 10.0] - assert(g.dtype == 'f8') - - def test_2d(self): - b = rand(5,5) - c = rand(5,5) - d = r_['1',b,c] # append columns - assert(d.shape == (5,10)) - assert_array_equal(d[:,:5],b) - assert_array_equal(d[:,5:],c) - d = r_[b,c] - assert(d.shape == (10,5)) - assert_array_equal(d[:5,:],b) - assert_array_equal(d[5:,:],c) - - -class TestNdenumerate(TestCase): - def test_basic(self): - a = array([[1,2], [3,4]]) - assert_equal(list(ndenumerate(a)), - [((0,0), 1), ((0,1), 2), ((1,0), 3), ((1,1), 4)]) - - -def test_fill_diagonal(): - a = zeros((3, 3),int) - fill_diagonal(a, 5) - yield (assert_array_equal, a, - array([[5, 0, 0], - [0, 5, 0], - [0, 0, 5]])) - - # The same function can operate on a 4-d array: - a = zeros((3, 3, 3, 3), int) - fill_diagonal(a, 4) - i = array([0, 1, 2]) - yield (assert_equal, where(a != 0), (i, i, i, i)) - - -def test_diag_indices(): - di = diag_indices(4) - a = array([[1, 2, 3, 4], - [5, 6, 7, 8], - [9, 10, 11, 12], - [13, 14, 15, 16]]) - a[di] = 100 - yield (assert_array_equal, a, - array([[100, 2, 3, 4], - [ 5, 100, 7, 8], - [ 9, 10, 100, 12], - [ 13, 14, 15, 100]])) - - # Now, we create indices to manipulate a 3-d array: - d3 = diag_indices(2, 3) - - # And use it to set the diagonal of a zeros array to 1: - a = zeros((2, 2, 2),int) - a[d3] = 1 - yield (assert_array_equal, a, - array([[[1, 0], - [0, 0]], - - [[0, 0], - [0, 1]]]) ) - -def test_diag_indices_from(): - x = np.random.random((4, 4)) - r, c = diag_indices_from(x) - assert_array_equal(r, np.arange(4)) - assert_array_equal(c, np.arange(4)) - - -if __name__ == 
"__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/tests/test_io.py b/pythonPackages/numpy/numpy/lib/tests/test_io.py deleted file mode 100755 index 25fa6faf45..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_io.py +++ /dev/null @@ -1,1256 +0,0 @@ -import numpy as np -import numpy.ma as ma -from numpy.ma.testutils import * -from numpy.testing import assert_warns - -import sys - -import gzip -import os -import threading - -from tempfile import mkstemp, NamedTemporaryFile -import time -from datetime import datetime - -from numpy.lib._iotools import ConverterError, ConverterLockError, \ - ConversionWarning -from numpy.compat import asbytes, asbytes_nested, bytes - -if sys.version_info[0] >= 3: - from io import BytesIO - def StringIO(s=""): - return BytesIO(asbytes(s)) -else: - from StringIO import StringIO - BytesIO = StringIO - -MAJVER, MINVER = sys.version_info[:2] - -def strptime(s, fmt=None): - """This function is available in the datetime module only - from Python >= 2.5. - - """ - if sys.version_info[0] >= 3: - return datetime(*time.strptime(s.decode('latin1'), fmt)[:3]) - else: - return datetime(*time.strptime(s, fmt)[:3]) - -class RoundtripTest(object): - def roundtrip(self, save_func, *args, **kwargs): - """ - save_func : callable - Function used to save arrays to file. - file_on_disk : bool - If true, store the file on disk, instead of in a - string buffer. - save_kwds : dict - Parameters passed to `save_func`. - load_kwds : dict - Parameters passed to `numpy.load`. - args : tuple of arrays - Arrays stored to file. - - """ - save_kwds = kwargs.get('save_kwds', {}) - load_kwds = kwargs.get('load_kwds', {}) - file_on_disk = kwargs.get('file_on_disk', False) - - if file_on_disk: - # Do not delete the file on windows, because we can't - # reopen an already opened file on that platform, so we - # need to close the file and reopen it, implying no - # automatic deletion. 
- if sys.platform == 'win32' and MAJVER >= 2 and MINVER >= 6: - target_file = NamedTemporaryFile(delete=False) - else: - target_file = NamedTemporaryFile() - load_file = target_file.name - else: - target_file = StringIO() - load_file = target_file - - arr = args - - save_func(target_file, *arr, **save_kwds) - target_file.flush() - target_file.seek(0) - - if sys.platform == 'win32' and not isinstance(target_file, BytesIO): - target_file.close() - - arr_reloaded = np.load(load_file, **load_kwds) - - self.arr = arr - self.arr_reloaded = arr_reloaded - - def test_array(self): - a = np.array([[1, 2], [3, 4]], float) - self.roundtrip(a) - - a = np.array([[1, 2], [3, 4]], int) - self.roundtrip(a) - - a = np.array([[1 + 5j, 2 + 6j], [3 + 7j, 4 + 8j]], dtype=np.csingle) - self.roundtrip(a) - - a = np.array([[1 + 5j, 2 + 6j], [3 + 7j, 4 + 8j]], dtype=np.cdouble) - self.roundtrip(a) - - def test_1D(self): - a = np.array([1, 2, 3, 4], int) - self.roundtrip(a) - - @np.testing.dec.knownfailureif(sys.platform == 'win32', "Fail on Win32") - def test_mmap(self): - a = np.array([[1, 2.5], [4, 7.3]]) - self.roundtrip(a, file_on_disk=True, load_kwds={'mmap_mode': 'r'}) - - def test_record(self): - a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) - self.roundtrip(a) - -class TestSaveLoad(RoundtripTest, TestCase): - def roundtrip(self, *args, **kwargs): - RoundtripTest.roundtrip(self, np.save, *args, **kwargs) - assert_equal(self.arr[0], self.arr_reloaded) - -class TestSavezLoad(RoundtripTest, TestCase): - def roundtrip(self, *args, **kwargs): - RoundtripTest.roundtrip(self, np.savez, *args, **kwargs) - for n, arr in enumerate(self.arr): - assert_equal(arr, self.arr_reloaded['arr_%d' % n]) - - def test_multiple_arrays(self): - a = np.array([[1, 2], [3, 4]], float) - b = np.array([[1 + 2j, 2 + 7j], [3 - 6j, 4 + 12j]], complex) - self.roundtrip(a, b) - - def test_named_arrays(self): - a = np.array([[1, 2], [3, 4]], float) - b = np.array([[1 + 2j, 2 + 7j], [3 - 6j, 4 + 
12j]], complex) - c = StringIO() - np.savez(c, file_a=a, file_b=b) - c.seek(0) - l = np.load(c) - assert_equal(a, l['file_a']) - assert_equal(b, l['file_b']) - - def test_savez_filename_clashes(self): - # Test that issue #852 is fixed - # and savez functions in multithreaded environment - - def writer(error_list): - fd, tmp = mkstemp(suffix='.npz') - os.close(fd) - try: - arr = np.random.randn(500, 500) - try: - np.savez(tmp, arr=arr) - except OSError, err: - error_list.append(err) - finally: - os.remove(tmp) - - errors = [] - threads = [threading.Thread(target=writer, args=(errors,)) - for j in xrange(3)] - for t in threads: - t.start() - for t in threads: - t.join() - - if errors: - raise AssertionError(errors) - -class TestSaveTxt(TestCase): - def test_array(self): - a = np.array([[1, 2], [3, 4]], float) - fmt = "%.18e" - c = StringIO() - np.savetxt(c, a, fmt=fmt) - c.seek(0) - assert_equal(c.readlines(), - asbytes_nested( - [(fmt + ' ' + fmt + '\n') % (1, 2), - (fmt + ' ' + fmt + '\n') % (3, 4)])) - - a = np.array([[1, 2], [3, 4]], int) - c = StringIO() - np.savetxt(c, a, fmt='%d') - c.seek(0) - assert_equal(c.readlines(), asbytes_nested(['1 2\n', '3 4\n'])) - - def test_1D(self): - a = np.array([1, 2, 3, 4], int) - c = StringIO() - np.savetxt(c, a, fmt='%d') - c.seek(0) - lines = c.readlines() - assert_equal(lines, asbytes_nested(['1\n', '2\n', '3\n', '4\n'])) - - def test_record(self): - a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) - c = StringIO() - np.savetxt(c, a, fmt='%d') - c.seek(0) - assert_equal(c.readlines(), asbytes_nested(['1 2\n', '3 4\n'])) - - def test_delimiter(self): - a = np.array([[1., 2.], [3., 4.]]) - c = StringIO() - np.savetxt(c, a, delimiter=asbytes(','), fmt='%d') - c.seek(0) - assert_equal(c.readlines(), asbytes_nested(['1,2\n', '3,4\n'])) - - def test_format(self): - a = np.array([(1, 2), (3, 4)]) - c = StringIO() - # Sequence of formats - np.savetxt(c, a, fmt=['%02d', '%3.1f']) - c.seek(0) - 
assert_equal(c.readlines(), asbytes_nested(['01 2.0\n', '03 4.0\n'])) - - # A single multiformat string - c = StringIO() - np.savetxt(c, a, fmt='%02d : %3.1f') - c.seek(0) - lines = c.readlines() - assert_equal(lines, asbytes_nested(['01 : 2.0\n', '03 : 4.0\n'])) - - # Specify delimiter, should be overridden - c = StringIO() - np.savetxt(c, a, fmt='%02d : %3.1f', delimiter=',') - c.seek(0) - lines = c.readlines() - assert_equal(lines, asbytes_nested(['01 : 2.0\n', '03 : 4.0\n'])) - - def test_file_roundtrip(self): - f, name = mkstemp() - os.close(f) - try: - a = np.array([(1, 2), (3, 4)]) - np.savetxt(name, a) - b = np.loadtxt(name) - assert_array_equal(a, b) - finally: - os.unlink(name) - - -class TestLoadTxt(TestCase): - def test_record(self): - c = StringIO() - c.write(asbytes('1 2\n3 4')) - c.seek(0) - x = np.loadtxt(c, dtype=[('x', np.int32), ('y', np.int32)]) - a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) - assert_array_equal(x, a) - - d = StringIO() - d.write(asbytes('M 64.0 75.0\nF 25.0 60.0')) - d.seek(0) - mydescriptor = {'names': ('gender', 'age', 'weight'), - 'formats': ('S1', - 'i4', 'f4')} - b = np.array([('M', 64.0, 75.0), - ('F', 25.0, 60.0)], dtype=mydescriptor) - y = np.loadtxt(d, dtype=mydescriptor) - assert_array_equal(y, b) - - def test_array(self): - c = StringIO() - c.write(asbytes('1 2\n3 4')) - - c.seek(0) - x = np.loadtxt(c, dtype=int) - a = np.array([[1, 2], [3, 4]], int) - assert_array_equal(x, a) - - c.seek(0) - x = np.loadtxt(c, dtype=float) - a = np.array([[1, 2], [3, 4]], float) - assert_array_equal(x, a) - - def test_1D(self): - c = StringIO() - c.write(asbytes('1\n2\n3\n4\n')) - c.seek(0) - x = np.loadtxt(c, dtype=int) - a = np.array([1, 2, 3, 4], int) - assert_array_equal(x, a) - - c = StringIO() - c.write(asbytes('1,2,3,4\n')) - c.seek(0) - x = np.loadtxt(c, dtype=int, delimiter=',') - a = np.array([1, 2, 3, 4], int) - assert_array_equal(x, a) - - def test_missing(self): - c = StringIO() - 
c.write(asbytes('1,2,3,,5\n')) - c.seek(0) - x = np.loadtxt(c, dtype=int, delimiter=',', \ - converters={3:lambda s: int(s or - 999)}) - a = np.array([1, 2, 3, -999, 5], int) - assert_array_equal(x, a) - - def test_converters_with_usecols(self): - c = StringIO() - c.write(asbytes('1,2,3,,5\n6,7,8,9,10\n')) - c.seek(0) - x = np.loadtxt(c, dtype=int, delimiter=',', \ - converters={3:lambda s: int(s or - 999)}, \ - usecols=(1, 3,)) - a = np.array([[2, -999], [7, 9]], int) - assert_array_equal(x, a) - - def test_comments(self): - c = StringIO() - c.write(asbytes('# comment\n1,2,3,5\n')) - c.seek(0) - x = np.loadtxt(c, dtype=int, delimiter=',', \ - comments='#') - a = np.array([1, 2, 3, 5], int) - assert_array_equal(x, a) - - def test_skiprows(self): - c = StringIO() - c.write(asbytes('comment\n1,2,3,5\n')) - c.seek(0) - x = np.loadtxt(c, dtype=int, delimiter=',', \ - skiprows=1) - a = np.array([1, 2, 3, 5], int) - assert_array_equal(x, a) - - c = StringIO() - c.write(asbytes('# comment\n1,2,3,5\n')) - c.seek(0) - x = np.loadtxt(c, dtype=int, delimiter=',', \ - skiprows=1) - a = np.array([1, 2, 3, 5], int) - assert_array_equal(x, a) - - def test_usecols(self): - a = np.array([[1, 2], [3, 4]], float) - c = StringIO() - np.savetxt(c, a) - c.seek(0) - x = np.loadtxt(c, dtype=float, usecols=(1,)) - assert_array_equal(x, a[:, 1]) - - a = np.array([[1, 2, 3], [3, 4, 5]], float) - c = StringIO() - np.savetxt(c, a) - c.seek(0) - x = np.loadtxt(c, dtype=float, usecols=(1, 2)) - assert_array_equal(x, a[:, 1:]) - - # Testing with arrays instead of tuples. - c.seek(0) - x = np.loadtxt(c, dtype=float, usecols=np.array([1, 2])) - assert_array_equal(x, a[:, 1:]) - - # Checking with dtypes defined converters. 
- data = '''JOE 70.1 25.3 - BOB 60.5 27.9 - ''' - c = StringIO(data) - names = ['stid', 'temp'] - dtypes = ['S4', 'f8'] - arr = np.loadtxt(c, usecols=(0, 2), dtype=zip(names, dtypes)) - assert_equal(arr['stid'], asbytes_nested(["JOE", "BOB"])) - assert_equal(arr['temp'], [25.3, 27.9]) - - def test_fancy_dtype(self): - c = StringIO() - c.write(asbytes('1,2,3.0\n4,5,6.0\n')) - c.seek(0) - dt = np.dtype([('x', int), ('y', [('t', int), ('s', float)])]) - x = np.loadtxt(c, dtype=dt, delimiter=',') - a = np.array([(1, (2, 3.0)), (4, (5, 6.0))], dt) - assert_array_equal(x, a) - - def test_shaped_dtype(self): - c = StringIO("aaaa 1.0 8.0 1 2 3 4 5 6") - dt = np.dtype([('name', 'S4'), ('x', float), ('y', float), - ('block', int, (2, 3))]) - x = np.loadtxt(c, dtype=dt) - a = np.array([('aaaa', 1.0, 8.0, [[1, 2, 3], [4, 5, 6]])], - dtype=dt) - assert_array_equal(x, a) - - def test_empty_file(self): - c = StringIO() - assert_raises(IOError, np.loadtxt, c) - - def test_unused_converter(self): - c = StringIO() - c.writelines([asbytes('1 21\n'), asbytes('3 42\n')]) - c.seek(0) - data = np.loadtxt(c, usecols=(1,), - converters={0: lambda s: int(s, 16)}) - assert_array_equal(data, [21, 42]) - - c.seek(0) - data = np.loadtxt(c, usecols=(1,), - converters={1: lambda s: int(s, 16)}) - assert_array_equal(data, [33, 66]) - - def test_dtype_with_object(self): - "Test using an explicit dtype with an object" - from datetime import date - import time - data = """ - 1; 2001-01-01 - 2; 2002-01-31 - """ - ndtype = [('idx', int), ('code', np.object)] - func = lambda s: strptime(s.strip(), "%Y-%m-%d") - converters = {1: func} - test = np.loadtxt(StringIO(data), delimiter=";", dtype=ndtype, - converters=converters) - control = np.array([(1, datetime(2001, 1, 1)), (2, datetime(2002, 1, 31))], - dtype=ndtype) - assert_equal(test, control) - - def test_universal_newline(self): - f, name = mkstemp() - os.write(f, asbytes('1 21\r3 42\r')) - os.close(f) - - try: - data = np.loadtxt(name) - 
assert_array_equal(data, [[1, 21], [3, 42]]) - finally: - os.unlink(name) - - -class Testfromregex(TestCase): - def test_record(self): - c = StringIO() - c.write(asbytes('1.312 foo\n1.534 bar\n4.444 qux')) - c.seek(0) - - dt = [('num', np.float64), ('val', 'S3')] - x = np.fromregex(c, r"([0-9.]+)\s+(...)", dt) - a = np.array([(1.312, 'foo'), (1.534, 'bar'), (4.444, 'qux')], - dtype=dt) - assert_array_equal(x, a) - - def test_record_2(self): - c = StringIO() - c.write(asbytes('1312 foo\n1534 bar\n4444 qux')) - c.seek(0) - - dt = [('num', np.int32), ('val', 'S3')] - x = np.fromregex(c, r"(\d+)\s+(...)", dt) - a = np.array([(1312, 'foo'), (1534, 'bar'), (4444, 'qux')], - dtype=dt) - assert_array_equal(x, a) - - def test_record_3(self): - c = StringIO() - c.write(asbytes('1312 foo\n1534 bar\n4444 qux')) - c.seek(0) - - dt = [('num', np.float64)] - x = np.fromregex(c, r"(\d+)\s+...", dt) - a = np.array([(1312,), (1534,), (4444,)], dtype=dt) - assert_array_equal(x, a) - - -#####-------------------------------------------------------------------------- - - -class TestFromTxt(TestCase): - # - def test_record(self): - "Test w/ explicit dtype" - data = StringIO(asbytes('1 2\n3 4')) -# data.seek(0) - test = np.ndfromtxt(data, dtype=[('x', np.int32), ('y', np.int32)]) - control = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) - assert_equal(test, control) - # - data = StringIO('M 64.0 75.0\nF 25.0 60.0') -# data.seek(0) - descriptor = {'names': ('gender', 'age', 'weight'), - 'formats': ('S1', 'i4', 'f4')} - control = np.array([('M', 64.0, 75.0), ('F', 25.0, 60.0)], - dtype=descriptor) - test = np.ndfromtxt(data, dtype=descriptor) - assert_equal(test, control) - - def test_array(self): - "Test outputting a standard ndarray" - data = StringIO('1 2\n3 4') - control = np.array([[1, 2], [3, 4]], dtype=int) - test = np.ndfromtxt(data, dtype=int) - assert_array_equal(test, control) - # - data.seek(0) - control = np.array([[1, 2], [3, 4]], dtype=float) - test = 
np.loadtxt(data, dtype=float) - assert_array_equal(test, control) - - def test_1D(self): - "Test squeezing to 1D" - control = np.array([1, 2, 3, 4], int) - # - data = StringIO('1\n2\n3\n4\n') - test = np.ndfromtxt(data, dtype=int) - assert_array_equal(test, control) - # - data = StringIO('1,2,3,4\n') - test = np.ndfromtxt(data, dtype=int, delimiter=asbytes(',')) - assert_array_equal(test, control) - - def test_comments(self): - "Test the stripping of comments" - control = np.array([1, 2, 3, 5], int) - # Comment on its own line - data = StringIO('# comment\n1,2,3,5\n') - test = np.ndfromtxt(data, dtype=int, delimiter=asbytes(','), comments=asbytes('#')) - assert_equal(test, control) - # Comment at the end of a line - data = StringIO('1,2,3,5# comment\n') - test = np.ndfromtxt(data, dtype=int, delimiter=asbytes(','), comments=asbytes('#')) - assert_equal(test, control) - - def test_skiprows(self): - "Test row skipping" - control = np.array([1, 2, 3, 5], int) - kwargs = dict(dtype=int, delimiter=asbytes(',')) - # - data = StringIO('comment\n1,2,3,5\n') - test = np.ndfromtxt(data, skip_header=1, **kwargs) - assert_equal(test, control) - # - data = StringIO('# comment\n1,2,3,5\n') - test = np.loadtxt(data, skiprows=1, **kwargs) - assert_equal(test, control) - - def test_skip_footer(self): - data = ["# %i" % i for i in range(1, 6)] - data.append("A, B, C") - data.extend(["%i,%3.1f,%03s" % (i, i, i) for i in range(51)]) - data[-1] = "99,99" - kwargs = dict(delimiter=",", names=True, skip_header=5, skip_footer=10) - test = np.genfromtxt(StringIO(asbytes("\n".join(data))), **kwargs) - ctrl = np.array([("%f" % i, "%f" % i, "%f" % i) for i in range(40)], - dtype=[(_, float) for _ in "ABC"]) - assert_equal(test, ctrl) - - def test_header(self): - "Test retrieving a header" - data = StringIO('gender age weight\nM 64.0 75.0\nF 25.0 60.0') - test = np.ndfromtxt(data, dtype=None, names=True) - control = {'gender': np.array(asbytes_nested(['M', 'F'])), - 'age': np.array([64.0, 
25.0]), - 'weight': np.array([75.0, 60.0])} - assert_equal(test['gender'], control['gender']) - assert_equal(test['age'], control['age']) - assert_equal(test['weight'], control['weight']) - - def test_auto_dtype(self): - "Test the automatic definition of the output dtype" - data = StringIO('A 64 75.0 3+4j True\nBCD 25 60.0 5+6j False') - test = np.ndfromtxt(data, dtype=None) - control = [np.array(asbytes_nested(['A', 'BCD'])), - np.array([64, 25]), - np.array([75.0, 60.0]), - np.array([3 + 4j, 5 + 6j]), - np.array([True, False]), ] - assert_equal(test.dtype.names, ['f0', 'f1', 'f2', 'f3', 'f4']) - for (i, ctrl) in enumerate(control): - assert_equal(test['f%i' % i], ctrl) - - - def test_auto_dtype_uniform(self): - "Tests whether the output dtype can be uniformized" - data = StringIO('1 2 3 4\n5 6 7 8\n') - test = np.ndfromtxt(data, dtype=None) - control = np.array([[1, 2, 3, 4], [5, 6, 7, 8]]) - assert_equal(test, control) - - - def test_fancy_dtype(self): - "Check that a nested dtype isn't MIA" - data = StringIO('1,2,3.0\n4,5,6.0\n') - fancydtype = np.dtype([('x', int), ('y', [('t', int), ('s', float)])]) - test = np.ndfromtxt(data, dtype=fancydtype, delimiter=',') - control = np.array([(1, (2, 3.0)), (4, (5, 6.0))], dtype=fancydtype) - assert_equal(test, control) - - - def test_names_overwrite(self): - "Test overwriting the names of the dtype" - descriptor = {'names': ('g', 'a', 'w'), - 'formats': ('S1', 'i4', 'f4')} - data = StringIO('M 64.0 75.0\nF 25.0 60.0') - names = ('gender', 'age', 'weight') - test = np.ndfromtxt(data, dtype=descriptor, names=names) - descriptor['names'] = names - control = np.array([('M', 64.0, 75.0), - ('F', 25.0, 60.0)], dtype=descriptor) - assert_equal(test, control) - - - def test_commented_header(self): - "Check that names can be retrieved even if the line is commented out." 
- data = StringIO(""" -#gender age weight -M 21 72.100000 -F 35 58.330000 -M 33 21.99 - """) - # The # is part of the first name and should be deleted automatically. - test = np.genfromtxt(data, names=True, dtype=None) - ctrl = np.array([('M', 21, 72.1), ('F', 35, 58.33), ('M', 33, 21.99)], - dtype=[('gender', '|S1'), ('age', int), ('weight', float)]) - assert_equal(test, ctrl) - # Ditto, but we should get rid of the first element - data = StringIO(""" -# gender age weight -M 21 72.100000 -F 35 58.330000 -M 33 21.99 - """) - test = np.genfromtxt(data, names=True, dtype=None) - assert_equal(test, ctrl) - - - def test_autonames_and_usecols(self): - "Tests names and usecols" - data = StringIO('A B C D\n aaaa 121 45 9.1') - test = np.ndfromtxt(data, usecols=('A', 'C', 'D'), - names=True, dtype=None) - control = np.array(('aaaa', 45, 9.1), - dtype=[('A', '|S4'), ('C', int), ('D', float)]) - assert_equal(test, control) - - - def test_converters_with_usecols(self): - "Test the combination user-defined converters and usecol" - data = StringIO('1,2,3,,5\n6,7,8,9,10\n') - test = np.ndfromtxt(data, dtype=int, delimiter=',', - converters={3:lambda s: int(s or - 999)}, - usecols=(1, 3,)) - control = np.array([[2, -999], [7, 9]], int) - assert_equal(test, control) - - def test_converters_with_usecols_and_names(self): - "Tests names and usecols" - data = StringIO('A B C D\n aaaa 121 45 9.1') - test = np.ndfromtxt(data, usecols=('A', 'C', 'D'), names=True, - dtype=None, converters={'C':lambda s: 2 * int(s)}) - control = np.array(('aaaa', 90, 9.1), - dtype=[('A', '|S4'), ('C', int), ('D', float)]) - assert_equal(test, control) - - def test_converters_cornercases(self): - "Test the conversion to datetime." 
- converter = {'date': lambda s: strptime(s, '%Y-%m-%d %H:%M:%SZ')} - data = StringIO('2009-02-03 12:00:00Z, 72214.0') - test = np.ndfromtxt(data, delimiter=',', dtype=None, - names=['date', 'stid'], converters=converter) - control = np.array((datetime(2009, 2, 3), 72214.), - dtype=[('date', np.object_), ('stid', float)]) - assert_equal(test, control) - - - def test_unused_converter(self): - "Test whether unused converters are forgotten" - data = StringIO("1 21\n 3 42\n") - test = np.ndfromtxt(data, usecols=(1,), - converters={0: lambda s: int(s, 16)}) - assert_equal(test, [21, 42]) - # - data.seek(0) - test = np.ndfromtxt(data, usecols=(1,), - converters={1: lambda s: int(s, 16)}) - assert_equal(test, [33, 66]) - - - def test_invalid_converter(self): - strip_rand = lambda x : float((asbytes('r') in x.lower() and x.split()[-1]) or - (not asbytes('r') in x.lower() and x.strip() or 0.0)) - strip_per = lambda x : float((asbytes('%') in x.lower() and x.split()[0]) or - (not asbytes('%') in x.lower() and x.strip() or 0.0)) - s = StringIO("D01N01,10/1/2003 ,1 %,R 75,400,600\r\n" \ - "L24U05,12/5/2003, 2 %,1,300, 150.5\r\n" - "D02N03,10/10/2004,R 1,,7,145.55") - kwargs = dict(converters={2 : strip_per, 3 : strip_rand}, delimiter=",", - dtype=None) - assert_raises(ConverterError, np.genfromtxt, s, **kwargs) - - - def test_dtype_with_converters(self): - dstr = "2009; 23; 46" - test = np.ndfromtxt(StringIO(dstr,), - delimiter=";", dtype=float, converters={0:bytes}) - control = np.array([('2009', 23., 46)], - dtype=[('f0', '|S4'), ('f1', float), ('f2', float)]) - assert_equal(test, control) - test = np.ndfromtxt(StringIO(dstr,), - delimiter=";", dtype=float, converters={0:float}) - control = np.array([2009., 23., 46],) - assert_equal(test, control) - - - def test_dtype_with_object(self): - "Test using an explicit dtype with an object" - from datetime import date - import time - data = asbytes(""" - 1; 2001-01-01 - 2; 2002-01-31 - """) - ndtype = [('idx', int), ('code',
np.object)] - func = lambda s: strptime(s.strip(), "%Y-%m-%d") - converters = {1: func} - test = np.genfromtxt(StringIO(data), delimiter=";", dtype=ndtype, - converters=converters) - control = np.array([(1, datetime(2001, 1, 1)), (2, datetime(2002, 1, 31))], - dtype=ndtype) - assert_equal(test, control) - # - ndtype = [('nest', [('idx', int), ('code', np.object)])] - try: - test = np.genfromtxt(StringIO(data), delimiter=";", - dtype=ndtype, converters=converters) - except NotImplementedError: - pass - else: - errmsg = "Nested dtype involving objects should be supported." - raise AssertionError(errmsg) - - - def test_userconverters_with_explicit_dtype(self): - "Test user_converters w/ explicit (standard) dtype" - data = StringIO('skip,skip,2001-01-01,1.0,skip') - test = np.genfromtxt(data, delimiter=",", names=None, dtype=float, - usecols=(2, 3), converters={2: bytes}) - control = np.array([('2001-01-01', 1.)], - dtype=[('', '|S10'), ('', float)]) - assert_equal(test, control) - - - def test_spacedelimiter(self): - "Test space delimiter" - data = StringIO("1 2 3 4 5\n6 7 8 9 10") - test = np.ndfromtxt(data) - control = np.array([[ 1., 2., 3., 4., 5.], - [ 6., 7., 8., 9., 10.]]) - assert_equal(test, control) - - - def test_missing(self): - data = StringIO('1,2,3,,5\n') - test = np.ndfromtxt(data, dtype=int, delimiter=',', \ - converters={3:lambda s: int(s or - 999)}) - control = np.array([1, 2, 3, -999, 5], int) - assert_equal(test, control) - - - def test_missing_with_tabs(self): - "Test w/ a delimiter tab" - txt = "1\t2\t3\n\t2\t\n1\t\t3" - test = np.genfromtxt(StringIO(txt), delimiter="\t", - usemask=True,) - ctrl_d = np.array([(1, 2, 3), (np.nan, 2, np.nan), (1, np.nan, 3)],) - ctrl_m = np.array([(0, 0, 0), (1, 0, 1), (0, 1, 0)], dtype=bool) - assert_equal(test.data, ctrl_d) - assert_equal(test.mask, ctrl_m) - - - def test_usecols(self): - "Test the selection of columns" - # Select 1 column - control = np.array([[1, 2], [3, 4]], float) - data = StringIO() - 
np.savetxt(data, control) - data.seek(0) - test = np.ndfromtxt(data, dtype=float, usecols=(1,)) - assert_equal(test, control[:, 1]) - # - control = np.array([[1, 2, 3], [3, 4, 5]], float) - data = StringIO() - np.savetxt(data, control) - data.seek(0) - test = np.ndfromtxt(data, dtype=float, usecols=(1, 2)) - assert_equal(test, control[:, 1:]) - # Testing with arrays instead of tuples. - data.seek(0) - test = np.ndfromtxt(data, dtype=float, usecols=np.array([1, 2])) - assert_equal(test, control[:, 1:]) - - def test_usecols_as_css(self): - "Test giving usecols with a comma-separated string" - data = "1 2 3\n4 5 6" - test = np.genfromtxt(StringIO(data), - names="a, b, c", usecols="a, c") - ctrl = np.array([(1, 3), (4, 6)], dtype=[(_, float) for _ in "ac"]) - assert_equal(test, ctrl) - - def test_usecols_with_structured_dtype(self): - "Test usecols with an explicit structured dtype" - data = StringIO("""JOE 70.1 25.3\nBOB 60.5 27.9""") - names = ['stid', 'temp'] - dtypes = ['S4', 'f8'] - test = np.ndfromtxt(data, usecols=(0, 2), dtype=zip(names, dtypes)) - assert_equal(test['stid'], asbytes_nested(["JOE", "BOB"])) - assert_equal(test['temp'], [25.3, 27.9]) - - def test_usecols_with_integer(self): - "Test usecols with an integer" - test = np.genfromtxt(StringIO("1 2 3\n4 5 6"), usecols=0) - assert_equal(test, np.array([1., 4.])) - - def test_usecols_with_named_columns(self): - "Test usecols with named columns" - ctrl = np.array([(1, 3), (4, 6)], dtype=[('a', float), ('c', float)]) - data = "1 2 3\n4 5 6" - kwargs = dict(names="a, b, c") - test = np.genfromtxt(StringIO(data), usecols=(0, -1), **kwargs) - assert_equal(test, ctrl) - test = np.genfromtxt(StringIO(data), - usecols=('a', 'c'), **kwargs) - assert_equal(test, ctrl) - - - def test_empty_file(self): - "Test that an empty file raises the proper exception" - data = StringIO() - assert_raises(IOError, np.ndfromtxt, data) - - - def test_fancy_dtype_alt(self): - "Check that a nested dtype isn't MIA" - data = 
StringIO('1,2,3.0\n4,5,6.0\n') - fancydtype = np.dtype([('x', int), ('y', [('t', int), ('s', float)])]) - test = np.mafromtxt(data, dtype=fancydtype, delimiter=',') - control = ma.array([(1, (2, 3.0)), (4, (5, 6.0))], dtype=fancydtype) - assert_equal(test, control) - - - def test_shaped_dtype(self): - c = StringIO("aaaa 1.0 8.0 1 2 3 4 5 6") - dt = np.dtype([('name', 'S4'), ('x', float), ('y', float), - ('block', int, (2, 3))]) - x = np.ndfromtxt(c, dtype=dt) - a = np.array([('aaaa', 1.0, 8.0, [[1, 2, 3], [4, 5, 6]])], - dtype=dt) - assert_array_equal(x, a) - - - def test_withmissing(self): - data = StringIO('A,B\n0,1\n2,N/A') - kwargs = dict(delimiter=",", missing_values="N/A", names=True) - test = np.mafromtxt(data, dtype=None, **kwargs) - control = ma.array([(0, 1), (2, -1)], - mask=[(False, False), (False, True)], - dtype=[('A', np.int), ('B', np.int)]) - assert_equal(test, control) - assert_equal(test.mask, control.mask) - # - data.seek(0) - test = np.mafromtxt(data, **kwargs) - control = ma.array([(0, 1), (2, -1)], - mask=[(False, False), (False, True)], - dtype=[('A', np.float), ('B', np.float)]) - assert_equal(test, control) - assert_equal(test.mask, control.mask) - - - def test_user_missing_values(self): - data = "A, B, C\n0, 0., 0j\n1, N/A, 1j\n-9, 2.2, N/A\n3, -99, 3j" - basekwargs = dict(dtype=None, delimiter=",", names=True,) - mdtype = [('A', int), ('B', float), ('C', complex)] - # - test = np.mafromtxt(StringIO(data), missing_values="N/A", - **basekwargs) - control = ma.array([(0, 0.0, 0j), (1, -999, 1j), - (-9, 2.2, -999j), (3, -99, 3j)], - mask=[(0, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)], - dtype=mdtype) - assert_equal(test, control) - # - basekwargs['dtype'] = mdtype - test = np.mafromtxt(StringIO(data), - missing_values={0:-9, 1:-99, 2:-999j}, **basekwargs) - control = ma.array([(0, 0.0, 0j), (1, -999, 1j), - (-9, 2.2, -999j), (3, -99, 3j)], - mask=[(0, 0, 0), (0, 1, 0), (1, 0, 1), (0, 1, 0)], - dtype=mdtype) - assert_equal(test, control) - # - 
test = np.mafromtxt(StringIO(data), - missing_values={0:-9, 'B':-99, 'C':-999j}, - **basekwargs) - control = ma.array([(0, 0.0, 0j), (1, -999, 1j), - (-9, 2.2, -999j), (3, -99, 3j)], - mask=[(0, 0, 0), (0, 1, 0), (1, 0, 1), (0, 1, 0)], - dtype=mdtype) - assert_equal(test, control) - - def test_user_filling_values(self): - "Test with missing and filling values" - ctrl = np.array([(0, 3), (4, -999)], dtype=[('a', int), ('b', int)]) - data = "N/A, 2, 3\n4, ,???" - kwargs = dict(delimiter=",", - dtype=int, - names="a,b,c", - missing_values={0:"N/A", 'b':" ", 2:"???"}, - filling_values={0:0, 'b':0, 2:-999}) - test = np.genfromtxt(StringIO(data), **kwargs) - ctrl = np.array([(0, 2, 3), (4, 0, -999)], - dtype=[(_, int) for _ in "abc"]) - assert_equal(test, ctrl) - # - test = np.genfromtxt(StringIO(data), usecols=(0, -1), **kwargs) - ctrl = np.array([(0, 3), (4, -999)], dtype=[(_, int) for _ in "ac"]) - assert_equal(test, ctrl) - - - def test_withmissing_float(self): - data = StringIO('A,B\n0,1.5\n2,-999.00') - test = np.mafromtxt(data, dtype=None, delimiter=',', - missing_values='-999.0', names=True,) - control = ma.array([(0, 1.5), (2, -1.)], - mask=[(False, False), (False, True)], - dtype=[('A', np.int), ('B', np.float)]) - assert_equal(test, control) - assert_equal(test.mask, control.mask) - - - def test_with_masked_column_uniform(self): - "Test masked column" - data = StringIO('1 2 3\n4 5 6\n') - test = np.genfromtxt(data, dtype=None, - missing_values='2,5', usemask=True) - control = ma.array([[1, 2, 3], [4, 5, 6]], mask=[[0, 1, 0], [0, 1, 0]]) - assert_equal(test, control) - - def test_with_masked_column_various(self): - "Test masked column" - data = StringIO('True 2 3\nFalse 5 6\n') - test = np.genfromtxt(data, dtype=None, - missing_values='2,5', usemask=True) - control = ma.array([(1, 2, 3), (0, 5, 6)], - mask=[(0, 1, 0), (0, 1, 0)], - dtype=[('f0', bool), ('f1', bool), ('f2', int)]) - assert_equal(test, control) - - - def test_invalid_raise(self): - "Test invalid 
raise" - data = ["1, 1, 1, 1, 1"] * 50 - for i in range(5): - data[10 * i] = "2, 2, 2, 2 2" - data.insert(0, "a, b, c, d, e") - mdata = StringIO("\n".join(data)) - # - kwargs = dict(delimiter=",", dtype=None, names=True) - # XXX: is there a better way to get the return value of the callable in - # assert_warns ? - ret = {} - def f(_ret={}): - _ret['mtest'] = np.ndfromtxt(mdata, invalid_raise=False, **kwargs) - assert_warns(ConversionWarning, f, _ret=ret) - mtest = ret['mtest'] - assert_equal(len(mtest), 45) - assert_equal(mtest, np.ones(45, dtype=[(_, int) for _ in 'abcde'])) - # - mdata.seek(0) - assert_raises(ValueError, np.ndfromtxt, mdata, - delimiter=",", names=True) - - def test_invalid_raise_with_usecols(self): - "Test invalid_raise with usecols" - data = ["1, 1, 1, 1, 1"] * 50 - for i in range(5): - data[10 * i] = "2, 2, 2, 2 2" - data.insert(0, "a, b, c, d, e") - mdata = StringIO("\n".join(data)) - kwargs = dict(delimiter=",", dtype=None, names=True, - invalid_raise=False) - # XXX: is there a better way to get the return value of the callable in - # assert_warns ? 
- ret = {} - def f(_ret={}): - _ret['mtest'] = np.ndfromtxt(mdata, usecols=(0, 4), **kwargs) - assert_warns(ConversionWarning, f, _ret=ret) - mtest = ret['mtest'] - assert_equal(len(mtest), 45) - assert_equal(mtest, np.ones(45, dtype=[(_, int) for _ in 'ae'])) - # - mdata.seek(0) - mtest = np.ndfromtxt(mdata, usecols=(0, 1), **kwargs) - assert_equal(len(mtest), 50) - control = np.ones(50, dtype=[(_, int) for _ in 'ab']) - control[[10 * _ for _ in range(5)]] = (2, 2) - assert_equal(mtest, control) - - - def test_inconsistent_dtype(self): - "Test inconsistent dtype" - data = ["1, 1, 1, 1, -1.1"] * 50 - mdata = StringIO("\n".join(data)) - - converters = {4: lambda x:"(%s)" % x} - kwargs = dict(delimiter=",", converters=converters, - dtype=[(_, int) for _ in 'abcde'],) - assert_raises(TypeError, np.genfromtxt, mdata, **kwargs) - - - def test_default_field_format(self): - "Test default format" - data = "0, 1, 2.3\n4, 5, 6.7" - mtest = np.ndfromtxt(StringIO(data), - delimiter=",", dtype=None, defaultfmt="f%02i") - ctrl = np.array([(0, 1, 2.3), (4, 5, 6.7)], - dtype=[("f00", int), ("f01", int), ("f02", float)]) - assert_equal(mtest, ctrl) - - def test_single_dtype_wo_names(self): - "Test single dtype w/o names" - data = "0, 1, 2.3\n4, 5, 6.7" - mtest = np.ndfromtxt(StringIO(data), - delimiter=",", dtype=float, defaultfmt="f%02i") - ctrl = np.array([[0., 1., 2.3], [4., 5., 6.7]], dtype=float) - assert_equal(mtest, ctrl) - - def test_single_dtype_w_explicit_names(self): - "Test single dtype w explicit names" - data = "0, 1, 2.3\n4, 5, 6.7" - mtest = np.ndfromtxt(StringIO(data), - delimiter=",", dtype=float, names="a, b, c") - ctrl = np.array([(0., 1., 2.3), (4., 5., 6.7)], - dtype=[(_, float) for _ in "abc"]) - assert_equal(mtest, ctrl) - - def test_single_dtype_w_implicit_names(self): - "Test single dtype w implicit names" - data = "a, b, c\n0, 1, 2.3\n4, 5, 6.7" - mtest = np.ndfromtxt(StringIO(data), - delimiter=",", dtype=float, names=True) - ctrl = np.array([(0., 1., 
2.3), (4., 5., 6.7)], - dtype=[(_, float) for _ in "abc"]) - assert_equal(mtest, ctrl) - - def test_easy_structured_dtype(self): - "Test easy structured dtype" - data = "0, 1, 2.3\n4, 5, 6.7" - mtest = np.ndfromtxt(StringIO(data), delimiter=",", - dtype=(int, float, float), defaultfmt="f_%02i") - ctrl = np.array([(0, 1., 2.3), (4, 5., 6.7)], - dtype=[("f_00", int), ("f_01", float), ("f_02", float)]) - assert_equal(mtest, ctrl) - - def test_autostrip(self): - "Test autostrip" - data = "01/01/2003 , 1.3, abcde" - kwargs = dict(delimiter=",", dtype=None) - mtest = np.ndfromtxt(StringIO(data), **kwargs) - ctrl = np.array([('01/01/2003 ', 1.3, ' abcde')], - dtype=[('f0', '|S12'), ('f1', float), ('f2', '|S8')]) - assert_equal(mtest, ctrl) - mtest = np.ndfromtxt(StringIO(data), autostrip=True, **kwargs) - ctrl = np.array([('01/01/2003', 1.3, 'abcde')], - dtype=[('f0', '|S10'), ('f1', float), ('f2', '|S5')]) - assert_equal(mtest, ctrl) - - def test_replace_space(self): - "Test the 'replace_space' option" - txt = "A.A, B (B), C:C\n1, 2, 3.14" - # Test default: replace ' ' by '_' and delete non-alphanum chars - test = np.genfromtxt(StringIO(txt), - delimiter=",", names=True, dtype=None) - ctrl_dtype = [("AA", int), ("B_B", int), ("CC", float)] - ctrl = np.array((1, 2, 3.14), dtype=ctrl_dtype) - assert_equal(test, ctrl) - # Test: no replace, no delete - test = np.genfromtxt(StringIO(txt), - delimiter=",", names=True, dtype=None, - replace_space='', deletechars='') - ctrl_dtype = [("A.A", int), ("B (B)", int), ("C:C", float)] - ctrl = np.array((1, 2, 3.14), dtype=ctrl_dtype) - assert_equal(test, ctrl) - # Test: no delete (spaces are replaced by _) - test = np.genfromtxt(StringIO(txt), - delimiter=",", names=True, dtype=None, - deletechars='') - ctrl_dtype = [("A.A", int), ("B_(B)", int), ("C:C", float)] - ctrl = np.array((1, 2, 3.14), dtype=ctrl_dtype) - assert_equal(test, ctrl) - - def test_incomplete_names(self): - "Test w/ incomplete names" - data = "A,,C\n0,1,2\n3,4,5" - 
kwargs = dict(delimiter=",", names=True) - # w/ dtype=None - ctrl = np.array([(0, 1, 2), (3, 4, 5)], - dtype=[(_, int) for _ in ('A', 'f0', 'C')]) - test = np.ndfromtxt(StringIO(data), dtype=None, **kwargs) - assert_equal(test, ctrl) - # w/ default dtype - ctrl = np.array([(0, 1, 2), (3, 4, 5)], - dtype=[(_, float) for _ in ('A', 'f0', 'C')]) - test = np.ndfromtxt(StringIO(data), **kwargs) - - def test_names_auto_completion(self): - "Make sure that names are properly completed" - data = "1 2 3\n 4 5 6" - test = np.genfromtxt(StringIO(data), - dtype=(int, float, int), names="a") - ctrl = np.array([(1, 2, 3), (4, 5, 6)], - dtype=[('a', int), ('f0', float), ('f1', int)]) - assert_equal(test, ctrl) - - def test_fixed_width_names(self): - "Test fix-width w/ names" - data = " A B C\n 0 1 2.3\n 45 67 9." - kwargs = dict(delimiter=(5, 5, 4), names=True, dtype=None) - ctrl = np.array([(0, 1, 2.3), (45, 67, 9.)], - dtype=[('A', int), ('B', int), ('C', float)]) - test = np.ndfromtxt(StringIO(data), **kwargs) - assert_equal(test, ctrl) - # - kwargs = dict(delimiter=5, names=True, dtype=None) - ctrl = np.array([(0, 1, 2.3), (45, 67, 9.)], - dtype=[('A', int), ('B', int), ('C', float)]) - test = np.ndfromtxt(StringIO(data), **kwargs) - assert_equal(test, ctrl) - - def test_filling_values(self): - "Test missing values" - data = "1, 2, 3\n1, , 5\n0, 6, \n" - kwargs = dict(delimiter=",", dtype=None, filling_values= -999) - ctrl = np.array([[1, 2, 3], [1, -999, 5], [0, 6, -999]], dtype=int) - test = np.ndfromtxt(StringIO(data), **kwargs) - assert_equal(test, ctrl) - - - def test_recfromtxt(self): - # - data = StringIO('A,B\n0,1\n2,3') - kwargs = dict(delimiter=",", missing_values="N/A", names=True) - test = np.recfromtxt(data, **kwargs) - control = np.array([(0, 1), (2, 3)], - dtype=[('A', np.int), ('B', np.int)]) - self.assertTrue(isinstance(test, np.recarray)) - assert_equal(test, control) - # - data = StringIO('A,B\n0,1\n2,N/A') - test = np.recfromtxt(data, dtype=None, 
usemask=True, **kwargs) - control = ma.array([(0, 1), (2, -1)], - mask=[(False, False), (False, True)], - dtype=[('A', np.int), ('B', np.int)]) - assert_equal(test, control) - assert_equal(test.mask, control.mask) - assert_equal(test.A, [0, 2]) - - - def test_recfromcsv(self): - # - data = StringIO('A,B\n0,1\n2,3') - kwargs = dict(missing_values="N/A", names=True, case_sensitive=True) - test = np.recfromcsv(data, dtype=None, **kwargs) - control = np.array([(0, 1), (2, 3)], - dtype=[('A', np.int), ('B', np.int)]) - self.assertTrue(isinstance(test, np.recarray)) - assert_equal(test, control) - # - data = StringIO('A,B\n0,1\n2,N/A') - test = np.recfromcsv(data, dtype=None, usemask=True, **kwargs) - control = ma.array([(0, 1), (2, -1)], - mask=[(False, False), (False, True)], - dtype=[('A', np.int), ('B', np.int)]) - assert_equal(test, control) - assert_equal(test.mask, control.mask) - assert_equal(test.A, [0, 2]) - # - data = StringIO('A,B\n0,1\n2,3') - test = np.recfromcsv(data, missing_values='N/A',) - control = np.array([(0, 1), (2, 3)], - dtype=[('a', np.int), ('b', np.int)]) - self.assertTrue(isinstance(test, np.recarray)) - assert_equal(test, control) - - -def test_gzip_load(): - a = np.random.random((5, 5)) - - s = StringIO() - f = gzip.GzipFile(fileobj=s, mode="w") - - np.save(f, a) - f.close() - s.seek(0) - - f = gzip.GzipFile(fileobj=s, mode="r") - assert_array_equal(np.load(f), a) - - -def test_gzip_loadtxt(): - # Thanks to another Windows brokenness, we can't use - # NamedTemporaryFile: a file created from this function cannot be - # reopened by another open call.
So we first put the gzipped string - # of the test reference array, write it to a securely opened file, - # which is then read from by the loadtxt function - s = StringIO() - g = gzip.GzipFile(fileobj=s, mode='w') - g.write(asbytes('1 2 3\n')) - g.close() - s.seek(0) - - f, name = mkstemp(suffix='.gz') - try: - os.write(f, s.read()) - s.close() - assert_array_equal(np.loadtxt(name), [1, 2, 3]) - finally: - os.close(f) - os.unlink(name) - -def test_gzip_loadtxt_from_string(): - s = StringIO() - f = gzip.GzipFile(fileobj=s, mode="w") - f.write(asbytes('1 2 3\n')) - f.close() - s.seek(0) - - f = gzip.GzipFile(fileobj=s, mode="r") - assert_array_equal(np.loadtxt(f), [1, 2, 3]) - -def test_npzfile_dict(): - s = StringIO() - x = np.zeros((3, 3)) - y = np.zeros((3, 3)) - - np.savez(s, x=x, y=y) - s.seek(0) - - z = np.load(s) - - assert 'x' in z - assert 'y' in z - assert 'x' in z.keys() - assert 'y' in z.keys() - - for f, a in z.iteritems(): - assert f in ['x', 'y'] - assert_equal(a.shape, (3, 3)) - - assert len(z.items()) == 2 - - for f in z: - assert f in ['x', 'y'] - - assert 'x' in list(z.iterkeys()) - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/tests/test_polynomial.py b/pythonPackages/numpy/numpy/lib/tests/test_polynomial.py deleted file mode 100755 index 130d11dabf..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_polynomial.py +++ /dev/null @@ -1,153 +0,0 @@ -""" ->>> import numpy.core as nx ->>> from numpy.lib.polynomial import poly1d, polydiv - ->>> p = poly1d([1.,2,3]) ->>> p -poly1d([ 1., 2., 3.]) ->>> print(p) - 2 -1 x + 2 x + 3 ->>> q = poly1d([3.,2,1]) ->>> q -poly1d([ 3., 2., 1.]) ->>> print(q) - 2 -3 x + 2 x + 1 ->>> print(poly1d([1.89999+2j, -3j, -5.12345678, 2+1j])) - 3 2 -(1.9 + 2j) x - 3j x - 5.123 x + (2 + 1j) ->>> print(poly1d([100e-90, 1.234567e-9j+3, -1234.999e8])) - 2 -1e-88 x + (3 + 1.235e-09j) x - 1.235e+11 ->>> print(poly1d([-3, -2, -1])) - 2 --3 x - 2 x - 1 - ->>> p(0) -3.0 ->>> p(5) 
-38.0 ->>> q(0) -1.0 ->>> q(5) -86.0 - ->>> p * q -poly1d([ 3., 8., 14., 8., 3.]) ->>> p / q -(poly1d([ 0.33333333]), poly1d([ 1.33333333, 2.66666667])) ->>> p + q -poly1d([ 4., 4., 4.]) ->>> p - q -poly1d([-2., 0., 2.]) ->>> p ** 4 -poly1d([ 1., 8., 36., 104., 214., 312., 324., 216., 81.]) - ->>> p(q) -poly1d([ 9., 12., 16., 8., 6.]) ->>> q(p) -poly1d([ 3., 12., 32., 40., 34.]) - ->>> nx.asarray(p) -array([ 1., 2., 3.]) ->>> len(p) -2 - ->>> p[0], p[1], p[2], p[3] -(3.0, 2.0, 1.0, 0) - ->>> p.integ() -poly1d([ 0.33333333, 1. , 3. , 0. ]) ->>> p.integ(1) -poly1d([ 0.33333333, 1. , 3. , 0. ]) ->>> p.integ(5) -poly1d([ 0.00039683, 0.00277778, 0.025 , 0. , 0. , - 0. , 0. , 0. ]) ->>> p.deriv() -poly1d([ 2., 2.]) ->>> p.deriv(2) -poly1d([ 2.]) - ->>> q = poly1d([1.,2,3], variable='y') ->>> print(q) - 2 -1 y + 2 y + 3 ->>> q = poly1d([1.,2,3], variable='lambda') ->>> print(q) - 2 -1 lambda + 2 lambda + 3 - ->>> polydiv(poly1d([1,0,-1]), poly1d([1,1])) -(poly1d([ 1., -1.]), poly1d([ 0.])) -""" - -from numpy.testing import * -import numpy as np - -class TestDocs(TestCase): - def test_doctests(self): - return rundocs() - - def test_roots(self): - assert_array_equal(np.roots([1,0,0]), [0,0]) - - def test_str_leading_zeros(self): - p = np.poly1d([4,3,2,1]) - p[3] = 0 - assert_equal(str(p), - " 2\n" - "3 x + 2 x + 1") - - p = np.poly1d([1,2]) - p[0] = 0 - p[1] = 0 - assert_equal(str(p), " \n0") - - def test_polyfit(self) : - c = np.array([3., 2., 1.]) - x = np.linspace(0,2,5) - y = np.polyval(c,x) - # check 1D case - assert_almost_equal(c, np.polyfit(x,y,2)) - # check 2D (n,1) case - y = y[:,np.newaxis] - c = c[:,np.newaxis] - assert_almost_equal(c, np.polyfit(x,y,2)) - # check 2D (n,2) case - yy = np.concatenate((y,y), axis=1) - cc = np.concatenate((c,c), axis=1) - assert_almost_equal(cc, np.polyfit(x,yy,2)) - - def test_objects(self): - from decimal import Decimal - p = np.poly1d([Decimal('4.0'), Decimal('3.0'), Decimal('2.0')]) - p2 = p * Decimal('1.333333333333333') - 
assert p2[1] == Decimal("3.9999999999999990") - p2 = p.deriv() - assert p2[1] == Decimal('8.0') - p2 = p.integ() - assert p2[3] == Decimal("1.333333333333333333333333333") - assert p2[2] == Decimal('1.5') - assert np.issubdtype(p2.coeffs.dtype, np.object_) - - def test_complex(self): - p = np.poly1d([3j, 2j, 1j]) - p2 = p.integ() - assert (p2.coeffs == [1j,1j,1j,0]).all() - p2 = p.deriv() - assert (p2.coeffs == [6j,2j]).all() - - def test_integ_coeffs(self): - p = np.poly1d([3,2,1]) - p2 = p.integ(3, k=[9,7,6]) - assert (p2.coeffs == [1/4./5.,1/3./4.,1/2./3.,9/1./2.,7,6]).all() - - def test_zero_dims(self): - try: - np.poly(np.zeros((0, 0))) - except ValueError: - pass - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/tests/test_recfunctions.py b/pythonPackages/numpy/numpy/lib/tests/test_recfunctions.py deleted file mode 100755 index 0a22e2a34e..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_recfunctions.py +++ /dev/null @@ -1,608 +0,0 @@ -import sys - -import numpy as np -import numpy.ma as ma -from numpy.ma.testutils import * - -from numpy.ma.mrecords import MaskedRecords - -from numpy.lib.recfunctions import * -get_names = np.lib.recfunctions.get_names -get_names_flat = np.lib.recfunctions.get_names_flat -zip_descr = np.lib.recfunctions.zip_descr - -class TestRecFunctions(TestCase): - """ - Misc tests - """ - # - def setUp(self): - x = np.array([1, 2, ]) - y = np.array([10, 20, 30]) - z = np.array([('A', 1.), ('B', 2.)], - dtype=[('A', '|S3'), ('B', float)]) - w = np.array([(1, (2, 3.0)), (4, (5, 6.0))], - dtype=[('a', int), ('b', [('ba', float), ('bb', int)])]) - self.data = (w, x, y, z) - - - def test_zip_descr(self): - "Test zip_descr" - (w, x, y, z) = self.data - # Std array - test = zip_descr((x, x), flatten=True) - assert_equal(test, - np.dtype([('', int), ('', int)])) - test = zip_descr((x, x), flatten=False) - assert_equal(test, - np.dtype([('', int), ('', int)])) - # Std & flexible-dtype - test = 
zip_descr((x, z), flatten=True) - assert_equal(test, - np.dtype([('', int), ('A', '|S3'), ('B', float)])) - test = zip_descr((x, z), flatten=False) - assert_equal(test, - np.dtype([('', int), - ('', [('A', '|S3'), ('B', float)])])) - # Standard & nested dtype - test = zip_descr((x, w), flatten=True) - assert_equal(test, - np.dtype([('', int), - ('a', int), - ('ba', float), ('bb', int)])) - test = zip_descr((x, w), flatten=False) - assert_equal(test, - np.dtype([('', int), - ('', [('a', int), - ('b', [('ba', float), ('bb', int)])])])) - - - def test_drop_fields(self): - "Test drop_fields" - a = np.array([(1, (2, 3.0)), (4, (5, 6.0))], - dtype=[('a', int), ('b', [('ba', float), ('bb', int)])]) - # A basic field - test = drop_fields(a, 'a') - control = np.array([((2, 3.0),), ((5, 6.0),)], - dtype=[('b', [('ba', float), ('bb', int)])]) - assert_equal(test, control) - # Another basic field (but nesting two fields) - test = drop_fields(a, 'b') - control = np.array([(1,), (4,)], dtype=[('a', int)]) - assert_equal(test, control) - # A nested sub-field - test = drop_fields(a, ['ba', ]) - control = np.array([(1, (3.0,)), (4, (6.0,))], - dtype=[('a', int), ('b', [('bb', int)])]) - assert_equal(test, control) - # All the nested sub-field from a field: zap that field - test = drop_fields(a, ['ba', 'bb']) - control = np.array([(1,), (4,)], dtype=[('a', int)]) - assert_equal(test, control) - # - test = drop_fields(a, ['a', 'b']) - assert(test is None) - - - def test_rename_fields(self): - "Tests rename fields" - a = np.array([(1, (2, [3.0, 30.])), (4, (5, [6.0, 60.]))], - dtype=[('a', int), - ('b', [('ba', float), ('bb', (float, 2))])]) - test = rename_fields(a, {'a':'A', 'bb':'BB'}) - newdtype = [('A', int), ('b', [('ba', float), ('BB', (float, 2))])] - control = a.view(newdtype) - assert_equal(test.dtype, newdtype) - assert_equal(test, control) - - - def test_get_names(self): - "Tests get_names" - ndtype = np.dtype([('A', '|S3'), ('B', float)]) - test = get_names(ndtype) - 
assert_equal(test, ('A', 'B')) - # - ndtype = np.dtype([('a', int), ('b', [('ba', float), ('bb', int)])]) - test = get_names(ndtype) - assert_equal(test, ('a', ('b', ('ba', 'bb')))) - - - def test_get_names_flat(self): - "Test get_names_flat" - ndtype = np.dtype([('A', '|S3'), ('B', float)]) - test = get_names_flat(ndtype) - assert_equal(test, ('A', 'B')) - # - ndtype = np.dtype([('a', int), ('b', [('ba', float), ('bb', int)])]) - test = get_names_flat(ndtype) - assert_equal(test, ('a', 'b', 'ba', 'bb')) - - - def test_get_fieldstructure(self): - "Test get_fieldstructure" - # No nested fields - ndtype = np.dtype([('A', '|S3'), ('B', float)]) - test = get_fieldstructure(ndtype) - assert_equal(test, {'A':[], 'B':[]}) - # One 1-nested field - ndtype = np.dtype([('A', int), ('B', [('BA', float), ('BB', '|S1')])]) - test = get_fieldstructure(ndtype) - assert_equal(test, {'A': [], 'B': [], 'BA':['B', ], 'BB':['B']}) - # One 2-nested fields - ndtype = np.dtype([('A', int), - ('B', [('BA', int), - ('BB', [('BBA', int), ('BBB', int)])])]) - test = get_fieldstructure(ndtype) - control = {'A': [], 'B': [], 'BA': ['B'], 'BB': ['B'], - 'BBA': ['B', 'BB'], 'BBB': ['B', 'BB']} - assert_equal(test, control) - - def test_find_duplicates(self): - "Test find_duplicates" - a = ma.array([(2, (2., 'B')), (1, (2., 'B')), (2, (2., 'B')), - (1, (1., 'B')), (2, (2., 'B')), (2, (2., 'C'))], - mask=[(0, (0, 0)), (0, (0, 0)), (0, (0, 0)), - (0, (0, 0)), (1, (0, 0)), (0, (1, 0))], - dtype=[('A', int), ('B', [('BA', float), ('BB', '|S1')])]) - # - test = find_duplicates(a, ignoremask=False, return_index=True) - control = [0, 2] - assert_equal(sorted(test[-1]), control) - assert_equal(test[0], a[test[-1]]) - # - test = find_duplicates(a, key='A', return_index=True) - control = [0, 1, 2, 3, 5] - assert_equal(sorted(test[-1]), control) - assert_equal(test[0], a[test[-1]]) - # - test = find_duplicates(a, key='B', return_index=True) - control = [0, 1, 2, 4] - assert_equal(sorted(test[-1]), control) - 
assert_equal(test[0], a[test[-1]]) - # - test = find_duplicates(a, key='BA', return_index=True) - control = [0, 1, 2, 4] - assert_equal(sorted(test[-1]), control) - assert_equal(test[0], a[test[-1]]) - # - test = find_duplicates(a, key='BB', return_index=True) - control = [0, 1, 2, 3, 4] - assert_equal(sorted(test[-1]), control) - assert_equal(test[0], a[test[-1]]) - - def test_find_duplicates_ignoremask(self): - "Test the ignoremask option of find_duplicates" - ndtype = [('a', int)] - a = ma.array([1, 1, 1, 2, 2, 3, 3], - mask=[0, 0, 1, 0, 0, 0, 1]).view(ndtype) - test = find_duplicates(a, ignoremask=True, return_index=True) - control = [0, 1, 3, 4] - assert_equal(sorted(test[-1]), control) - assert_equal(test[0], a[test[-1]]) - # - test = find_duplicates(a, ignoremask=False, return_index=True) - control = [0, 1, 2, 3, 4, 6] - assert_equal(sorted(test[-1]), control) - assert_equal(test[0], a[test[-1]]) - - -class TestRecursiveFillFields(TestCase): - """ - Test recursive_fill_fields. - """ - def test_simple_flexible(self): - "Test recursive_fill_fields on flexible-array" - a = np.array([(1, 10.), (2, 20.)], dtype=[('A', int), ('B', float)]) - b = np.zeros((3,), dtype=a.dtype) - test = recursive_fill_fields(a, b) - control = np.array([(1, 10.), (2, 20.), (0, 0.)], - dtype=[('A', int), ('B', float)]) - assert_equal(test, control) - # - def test_masked_flexible(self): - "Test recursive_fill_fields on masked flexible-array" - a = ma.array([(1, 10.), (2, 20.)], mask=[(0, 1), (1, 0)], - dtype=[('A', int), ('B', float)]) - b = ma.zeros((3,), dtype=a.dtype) - test = recursive_fill_fields(a, b) - control = ma.array([(1, 10.), (2, 20.), (0, 0.)], - mask=[(0, 1), (1, 0), (0, 0)], - dtype=[('A', int), ('B', float)]) - assert_equal(test, control) - # - - - -class TestMergeArrays(TestCase): - """ - Test merge_arrays - """ - def setUp(self): - x = np.array([1, 2, ]) - y = np.array([10, 20, 30]) - z = np.array([('A', 1.), ('B', 2.)], dtype=[('A', '|S3'), ('B', float)]) - w = 
np.array([(1, (2, 3.0)), (4, (5, 6.0))], - dtype=[('a', int), ('b', [('ba', float), ('bb', int)])]) - self.data = (w, x, y, z) - # - def test_solo(self): - "Test merge_arrays on a single array." - (_, x, _, z) = self.data - # - test = merge_arrays(x) - control = np.array([(1,), (2,)], dtype=[('f0', int)]) - assert_equal(test, control) - test = merge_arrays((x,)) - assert_equal(test, control) - # - test = merge_arrays(z, flatten=False) - assert_equal(test, z) - test = merge_arrays(z, flatten=True) - assert_equal(test, z) - # - def test_solo_w_flatten(self): - "Test merge_arrays on a single array w & w/o flattening" - w = self.data[0] - test = merge_arrays(w, flatten=False) - assert_equal(test, w) - # - test = merge_arrays(w, flatten=True) - control = np.array([(1, 2, 3.0), (4, 5, 6.0)], - dtype=[('a', int), ('ba', float), ('bb', int)]) - assert_equal(test, control) - # - def test_standard(self): - "Test standard & standard" - # Test merge arrays - (_, x, y, _) = self.data - test = merge_arrays((x, y), usemask=False) - control = np.array([(1, 10), (2, 20), (-1, 30)], - dtype=[('f0', int), ('f1', int)]) - assert_equal(test, control) - # - test = merge_arrays((x, y), usemask=True) - control = ma.array([(1, 10), (2, 20), (-1, 30)], - mask=[(0, 0), (0, 0), (1, 0)], - dtype=[('f0', int), ('f1', int)]) - assert_equal(test, control) - assert_equal(test.mask, control.mask) - # - def test_flatten(self): - "Test standard & flexible" - (_, x, _, z) = self.data - test = merge_arrays((x, z), flatten=True) - control = np.array([(1, 'A', 1.), (2, 'B', 2.)], - dtype=[('f0', int), ('A', '|S3'), ('B', float)]) - assert_equal(test, control) - # - test = merge_arrays((x, z), flatten=False) - control = np.array([(1, ('A', 1.)), (2, ('B', 2.))], - dtype=[('f0', int), - ('f1', [('A', '|S3'), ('B', float)])]) - assert_equal(test, control) - # - def test_flatten_wflexible(self): - "Test flatten standard & nested" - (w, x, _, _) = self.data - test = merge_arrays((x, w), flatten=True) - 
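For context on what `merge_arrays` is expected to produce in these cases, a small sketch using equal-length inputs (assuming current `numpy.lib.recfunctions` semantics still match the expectations encoded in these tests, including auto-naming an unnamed input `'f0'`):

```python
import numpy as np
from numpy.lib.recfunctions import merge_arrays

x = np.array([1, 2])
z = np.array([('A', 1.), ('B', 2.)], dtype=[('A', 'S3'), ('B', float)])

# flatten=True splices z's fields next to the unnamed input x,
# which is auto-named 'f0'; usemask=False yields a plain ndarray.
merged = merge_arrays((x, z), flatten=True, usemask=False)
print(merged.dtype.names)     # ('f0', 'A', 'B')
print(merged['f0'].tolist())  # [1, 2]
```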
control = np.array([(1, 1, 2, 3.0), (2, 4, 5, 6.0)], - dtype=[('f0', int), - ('a', int), ('ba', float), ('bb', int)]) - assert_equal(test, control) - # - test = merge_arrays((x, w), flatten=False) - controldtype = [('f0', int), - ('f1', [('a', int), - ('b', [('ba', float), ('bb', int)])])] - control = np.array([(1, (1, (2, 3.0))), (2, (4, (5, 6.0)))], - dtype=controldtype) - assert_equal(test, control) - # - def test_wmasked_arrays(self): - "Test merge_arrays masked arrays" - (_, x, _, _) = self.data - mx = ma.array([1, 2, 3], mask=[1, 0, 0]) - test = merge_arrays((x, mx), usemask=True) - control = ma.array([(1, 1), (2, 2), (-1, 3)], - mask=[(0, 1), (0, 0), (1, 0)], - dtype=[('f0', int), ('f1', int)]) - assert_equal(test, control) - test = merge_arrays((x, mx), usemask=True, asrecarray=True) - assert_equal(test, control) - assert(isinstance(test, MaskedRecords)) - # - def test_w_singlefield(self): - "Test single field" - test = merge_arrays((np.array([1, 2]).view([('a', int)]), - np.array([10., 20., 30.])),) - control = ma.array([(1, 10.), (2, 20.), (-1, 30.)], - mask=[(0, 0), (0, 0), (1, 0)], - dtype=[('a', int), ('f1', float)]) - assert_equal(test, control) - # - def test_w_shorter_flex(self): - "Test merge_arrays w/ a shorter flexible ndarray." 
- z = self.data[-1] - test = merge_arrays((z, np.array([10, 20, 30]).view([('C', int)]))) - control = np.array([('A', 1., 10), ('B', 2., 20), ('-1', -1., 30)], - dtype=[('A', '|S3'), ('B', float), ('C', int)]) - assert_equal(test, control) - # - def test_singlerecord(self): - (_, x, y, z) = self.data - test = merge_arrays((x[0], y[0], z[0]), usemask=False) - control = np.array([(1, 10, ('A', 1))], - dtype=[('f0', int), - ('f1', int), - ('f2', [('A', '|S3'), ('B', float)])]) - assert_equal(test, control) - - - -class TestAppendFields(TestCase): - """ - Test append_fields - """ - def setUp(self): - x = np.array([1, 2, ]) - y = np.array([10, 20, 30]) - z = np.array([('A', 1.), ('B', 2.)], dtype=[('A', '|S3'), ('B', float)]) - w = np.array([(1, (2, 3.0)), (4, (5, 6.0))], - dtype=[('a', int), ('b', [('ba', float), ('bb', int)])]) - self.data = (w, x, y, z) - # - def test_append_single(self): - "Test simple case" - (_, x, _, _) = self.data - test = append_fields(x, 'A', data=[10, 20, 30]) - control = ma.array([(1, 10), (2, 20), (-1, 30)], - mask=[(0, 0), (0, 0), (1, 0)], - dtype=[('f0', int), ('A', int)],) - assert_equal(test, control) - # - def test_append_double(self): - "Test appending two fields at once" - (_, x, _, _) = self.data - test = append_fields(x, ('A', 'B'), data=[[10, 20, 30], [100, 200]]) - control = ma.array([(1, 10, 100), (2, 20, 200), (-1, 30, -1)], - mask=[(0, 0, 0), (0, 0, 0), (1, 0, 1)], - dtype=[('f0', int), ('A', int), ('B', int)],) - assert_equal(test, control) - # - def test_append_on_flex(self): - "Test append_fields on flexible type arrays" - z = self.data[-1] - test = append_fields(z, 'C', data=[10, 20, 30]) - control = ma.array([('A', 1., 10), ('B', 2., 20), (-1, -1., 30)], - mask=[(0, 0, 0), (0, 0, 0), (1, 1, 0)], - dtype=[('A', '|S3'), ('B', float), ('C', int)],) - assert_equal(test, control) - # - def test_append_on_nested(self): - "Test append_fields on nested fields" - w = self.data[0] - test = append_fields(w, 'C', data=[10, 20, 30]) - control = ma.array([(1, (2, 3.0), 10), - (4, 
(5, 6.0), 20), - (-1, (-1, -1.), 30)], - mask=[(0, (0, 0), 0), (0, (0, 0), 0), (1, (1, 1), 0)], - dtype=[('a', int), - ('b', [('ba', float), ('bb', int)]), - ('C', int)],) - assert_equal(test, control) - - - -class TestStackArrays(TestCase): - """ - Test stack_arrays - """ - def setUp(self): - x = np.array([1, 2, ]) - y = np.array([10, 20, 30]) - z = np.array([('A', 1.), ('B', 2.)], dtype=[('A', '|S3'), ('B', float)]) - w = np.array([(1, (2, 3.0)), (4, (5, 6.0))], - dtype=[('a', int), ('b', [('ba', float), ('bb', int)])]) - self.data = (w, x, y, z) - # - def test_solo(self): - "Test stack_arrays on single arrays" - (_, x, _, _) = self.data - test = stack_arrays((x,)) - assert_equal(test, x) - self.assertTrue(test is x) - # - test = stack_arrays(x) - assert_equal(test, x) - self.assertTrue(test is x) - # - def test_unnamed_fields(self): - "Tests combinations of arrays w/o named fields" - (_, x, y, _) = self.data - # - test = stack_arrays((x, x), usemask=False) - control = np.array([1, 2, 1, 2]) - assert_equal(test, control) - # - test = stack_arrays((x, y), usemask=False) - control = np.array([1, 2, 10, 20, 30]) - assert_equal(test, control) - # - test = stack_arrays((y, x), usemask=False) - control = np.array([10, 20, 30, 1, 2]) - assert_equal(test, control) - # - def test_unnamed_and_named_fields(self): - "Test combination of arrays w/ & w/o named fields" - (_, x, _, z) = self.data - # - test = stack_arrays((x, z)) - control = ma.array([(1, -1, -1), (2, -1, -1), - (-1, 'A', 1), (-1, 'B', 2)], - mask=[(0, 1, 1), (0, 1, 1), - (1, 0, 0), (1, 0, 0)], - dtype=[('f0', int), ('A', '|S3'), ('B', float)]) - assert_equal(test, control) - assert_equal(test.mask, control.mask) - # - test = stack_arrays((z, x)) - control = ma.array([('A', 1, -1), ('B', 2, -1), - (-1, -1, 1), (-1, -1, 2), ], - mask=[(0, 0, 1), (0, 0, 1), - (1, 1, 0), (1, 1, 0)], - dtype=[('A', '|S3'), ('B', float), ('f2', int)]) - assert_equal(test, control) - assert_equal(test.mask, control.mask) - # - test = 
stack_arrays((z, z, x)) - control = ma.array([('A', 1, -1), ('B', 2, -1), - ('A', 1, -1), ('B', 2, -1), - (-1, -1, 1), (-1, -1, 2), ], - mask=[(0, 0, 1), (0, 0, 1), - (0, 0, 1), (0, 0, 1), - (1, 1, 0), (1, 1, 0)], - dtype=[('A', '|S3'), ('B', float), ('f2', int)]) - assert_equal(test, control) - # - def test_matching_named_fields(self): - "Test combination of arrays w/ matching field names" - (_, x, _, z) = self.data - zz = np.array([('a', 10., 100.), ('b', 20., 200.), ('c', 30., 300.)], - dtype=[('A', '|S3'), ('B', float), ('C', float)]) - test = stack_arrays((z, zz)) - control = ma.array([('A', 1, -1), ('B', 2, -1), - ('a', 10., 100.), ('b', 20., 200.), ('c', 30., 300.)], - dtype=[('A', '|S3'), ('B', float), ('C', float)], - mask=[(0, 0, 1), (0, 0, 1), - (0, 0, 0), (0, 0, 0), (0, 0, 0)]) - assert_equal(test, control) - assert_equal(test.mask, control.mask) - # - test = stack_arrays((z, zz, x)) - ndtype = [('A', '|S3'), ('B', float), ('C', float), ('f3', int)] - control = ma.array([('A', 1, -1, -1), ('B', 2, -1, -1), - ('a', 10., 100., -1), ('b', 20., 200., -1), - ('c', 30., 300., -1), - (-1, -1, -1, 1), (-1, -1, -1, 2)], - dtype=ndtype, - mask=[(0, 0, 1, 1), (0, 0, 1, 1), - (0, 0, 0, 1), (0, 0, 0, 1), (0, 0, 0, 1), - (1, 1, 1, 0), (1, 1, 1, 0)]) - assert_equal(test, control) - assert_equal(test.mask, control.mask) - - - def test_defaults(self): - "Test defaults: no exception raised if keys of defaults are not fields." 
- (_, _, _, z) = self.data - zz = np.array([('a', 10., 100.), ('b', 20., 200.), ('c', 30., 300.)], - dtype=[('A', '|S3'), ('B', float), ('C', float)]) - defaults = {'A':'???', 'B':-999., 'C':-9999., 'D':-99999.} - test = stack_arrays((z, zz), defaults=defaults) - control = ma.array([('A', 1, -9999.), ('B', 2, -9999.), - ('a', 10., 100.), ('b', 20., 200.), ('c', 30., 300.)], - dtype=[('A', '|S3'), ('B', float), ('C', float)], - mask=[(0, 0, 1), (0, 0, 1), - (0, 0, 0), (0, 0, 0), (0, 0, 0)]) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - - - def test_autoconversion(self): - "Tests autoconversion" - adtype = [('A', int), ('B', bool), ('C', float)] - a = ma.array([(1, 2, 3)], mask=[(0, 1, 0)], dtype=adtype) - bdtype = [('A', int), ('B', float), ('C', float)] - b = ma.array([(4, 5, 6)], dtype=bdtype) - control = ma.array([(1, 2, 3), (4, 5, 6)], mask=[(0, 1, 0), (0, 0, 0)], - dtype=bdtype) - test = stack_arrays((a, b), autoconvert=True) - assert_equal(test, control) - assert_equal(test.mask, control.mask) - try: - test = stack_arrays((a, b), autoconvert=False) - except TypeError: - pass - else: - raise AssertionError - - - def test_checktitles(self): - "Test using titles in the field names" - adtype = [(('a', 'A'), int), (('b', 'B'), bool), (('c', 'C'), float)] - a = ma.array([(1, 2, 3)], mask=[(0, 1, 0)], dtype=adtype) - bdtype = [(('a', 'A'), int), (('b', 'B'), bool), (('c', 'C'), float)] - b = ma.array([(4, 5, 6)], dtype=bdtype) - test = stack_arrays((a, b)) - control = ma.array([(1, 2, 3), (4, 5, 6)], mask=[(0, 1, 0), (0, 0, 0)], - dtype=bdtype) - assert_equal(test, control) - assert_equal(test.mask, control.mask) - - -class TestJoinBy(TestCase): - # - def test_base(self): - "Basic test of join_by" - a = np.array(zip(np.arange(10), np.arange(50, 60), np.arange(100, 110)), - dtype=[('a', int), ('b', int), ('c', int)]) - b = np.array(zip(np.arange(5, 15), np.arange(65, 75), np.arange(100, 110)), - 
dtype=[('a', int), ('b', int), ('d', int)]) - # - test = join_by('a', a, b, jointype='inner') - control = np.array([(5, 55, 65, 105, 100), (6, 56, 66, 106, 101), - (7, 57, 67, 107, 102), (8, 58, 68, 108, 103), - (9, 59, 69, 109, 104)], - dtype=[('a', int), ('b1', int), ('b2', int), - ('c', int), ('d', int)]) - assert_equal(test, control) - # - test = join_by(('a', 'b'), a, b) - control = np.array([(5, 55, 105, 100), (6, 56, 106, 101), - (7, 57, 107, 102), (8, 58, 108, 103), - (9, 59, 109, 104)], - dtype=[('a', int), ('b', int), - ('c', int), ('d', int)]) - # - test = join_by(('a', 'b'), a, b, 'outer') - control = ma.array([(0, 50, 100, -1), (1, 51, 101, -1), - (2, 52, 102, -1), (3, 53, 103, -1), - (4, 54, 104, -1), (5, 55, 105, -1), - (5, 65, -1, 100), (6, 56, 106, -1), - (6, 66, -1, 101), (7, 57, 107, -1), - (7, 67, -1, 102), (8, 58, 108, -1), - (8, 68, -1, 103), (9, 59, 109, -1), - (9, 69, -1, 104), (10, 70, -1, 105), - (11, 71, -1, 106), (12, 72, -1, 107), - (13, 73, -1, 108), (14, 74, -1, 109)], - mask=[(0, 0, 0, 1), (0, 0, 0, 1), - (0, 0, 0, 1), (0, 0, 0, 1), - (0, 0, 0, 1), (0, 0, 0, 1), - (0, 0, 1, 0), (0, 0, 0, 1), - (0, 0, 1, 0), (0, 0, 0, 1), - (0, 0, 1, 0), (0, 0, 0, 1), - (0, 0, 1, 0), (0, 0, 0, 1), - (0, 0, 1, 0), (0, 0, 1, 0), - (0, 0, 1, 0), (0, 0, 1, 0), - (0, 0, 1, 0), (0, 0, 1, 0)], - dtype=[('a', int), ('b', int), - ('c', int), ('d', int)]) - assert_equal(test, control) - # - test = join_by(('a', 'b'), a, b, 'leftouter') - control = ma.array([(0, 50, 100, -1), (1, 51, 101, -1), - (2, 52, 102, -1), (3, 53, 103, -1), - (4, 54, 104, -1), (5, 55, 105, -1), - (6, 56, 106, -1), (7, 57, 107, -1), - (8, 58, 108, -1), (9, 59, 109, -1)], - mask=[(0, 0, 0, 1), (0, 0, 0, 1), - (0, 0, 0, 1), (0, 0, 0, 1), - (0, 0, 0, 1), (0, 0, 0, 1), - (0, 0, 0, 1), (0, 0, 0, 1), - (0, 0, 0, 1), (0, 0, 0, 1)], - dtype=[('a', int), ('b', int), ('c', int), ('d', int)]) - - - - -if __name__ == '__main__': - run_module_suite() diff --git 
a/pythonPackages/numpy/numpy/lib/tests/test_regression.py b/pythonPackages/numpy/numpy/lib/tests/test_regression.py deleted file mode 100755 index a8804ac3a4..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_regression.py +++ /dev/null @@ -1,193 +0,0 @@ -from numpy.testing import * -from numpy.testing.utils import _assert_valid_refcount -import numpy as np - -rlevel = 1 - -class TestRegression(TestCase): - def test_poly1d(self,level=rlevel): - """Ticket #28""" - assert_equal(np.poly1d([1]) - np.poly1d([1,0]), - np.poly1d([-1,1])) - - def test_cov_parameters(self,level=rlevel): - """Ticket #91""" - x = np.random.random((3,3)) - y = x.copy() - np.cov(x, rowvar=1) - np.cov(y, rowvar=0) - assert_array_equal(x,y) - - def test_mem_digitize(self,level=rlevel): - """Ticket #95""" - for i in range(100): - np.digitize([1,2,3,4],[1,3]) - np.digitize([0,1,2,3,4],[1,3]) - - def test_unique_zero_sized(self,level=rlevel): - """Ticket #205""" - assert_array_equal([], np.unique(np.array([]))) - - def test_mem_vectorise(self, level=rlevel): - """Ticket #325""" - vt = np.vectorize(lambda *args: args) - vt(np.zeros((1,2,1)), np.zeros((2,1,1)), np.zeros((1,1,2))) - vt(np.zeros((1,2,1)), np.zeros((2,1,1)), np.zeros((1,1,2)), np.zeros((2,2))) - - def test_mgrid_single_element(self, level=rlevel): - """Ticket #339""" - assert_array_equal(np.mgrid[0:0:1j],[0]) - assert_array_equal(np.mgrid[0:0],[]) - - def test_refcount_vectorize(self, level=rlevel): - """Ticket #378""" - def p(x,y): return 123 - v = np.vectorize(p) - _assert_valid_refcount(v) - - def test_poly1d_nan_roots(self, level=rlevel): - """Ticket #396""" - p = np.poly1d([np.nan,np.nan,1], r=0) - self.assertRaises(np.linalg.LinAlgError,getattr,p,"r") - - def test_mem_polymul(self, level=rlevel): - """Ticket #448""" - np.polymul([],[1.]) - - def test_mem_string_concat(self, level=rlevel): - """Ticket #469""" - x = np.array([]) - np.append(x,'asdasd\tasdasd') - - def test_poly_div(self, level=rlevel): - """Ticket #553""" - 
u = np.poly1d([1,2,3]) - v = np.poly1d([1,2,3,4,5]) - q,r = np.polydiv(u,v) - assert_equal(q*v + r, u) - - def test_poly_eq(self, level=rlevel): - """Ticket #554""" - x = np.poly1d([1,2,3]) - y = np.poly1d([3,4]) - assert x != y - assert x == x - - def test_mem_insert(self, level=rlevel): - """Ticket #572""" - np.lib.place(1,1,1) - - def test_polyfit_build(self): - """Ticket #628""" - ref = [-1.06123820e-06, 5.70886914e-04, -1.13822012e-01, - 9.95368241e+00, -3.14526520e+02] - x = [90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, - 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, - 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 129, - 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, - 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, - 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, - 170, 171, 172, 173, 174, 175, 176] - y = [9.0, 3.0, 7.0, 4.0, 4.0, 8.0, 6.0, 11.0, 9.0, 8.0, 11.0, 5.0, - 6.0, 5.0, 9.0, 8.0, 6.0, 10.0, 6.0, 10.0, 7.0, 6.0, 6.0, 6.0, - 13.0, 4.0, 9.0, 11.0, 4.0, 5.0, 8.0, 5.0, 7.0, 7.0, 6.0, 12.0, - 7.0, 7.0, 9.0, 4.0, 12.0, 6.0, 6.0, 4.0, 3.0, 9.0, 8.0, 8.0, - 6.0, 7.0, 9.0, 10.0, 6.0, 8.0, 4.0, 7.0, 7.0, 10.0, 8.0, 8.0, - 6.0, 3.0, 8.0, 4.0, 5.0, 7.0, 8.0, 6.0, 6.0, 4.0, 12.0, 9.0, - 8.0, 8.0, 8.0, 6.0, 7.0, 4.0, 4.0, 5.0, 7.0] - tested = np.polyfit(x, y, 4) - assert_array_almost_equal(ref, tested) - - - def test_polydiv_type(self) : - """Make polydiv work for complex types""" - msg = "Wrong type, should be complex" - x = np.ones(3, dtype=np.complex) - q,r = np.polydiv(x,x) - assert_(q.dtype == np.complex, msg) - msg = "Wrong type, should be float" - x = np.ones(3, dtype=np.int) - q,r = np.polydiv(x,x) - assert_(q.dtype == np.float, msg) - - def test_histogramdd_too_many_bins(self) : - """Ticket 928.""" - assert_raises(ValueError, np.histogramdd, np.ones((1,10)), bins=2**10) - - def test_polyint_type(self) : - """Ticket #944""" - msg = "Wrong type, should be complex" - x = np.ones(3, 
dtype=np.complex) - assert_(np.polyint(x).dtype == np.complex, msg) - msg = "Wrong type, should be float" - x = np.ones(3, dtype=np.int) - assert_(np.polyint(x).dtype == np.float, msg) - - def test_ndenumerate_crash(self): - """Ticket 1140""" - # Shouldn't crash: - list(np.ndenumerate(np.array([[]]))) - - def test_asfarray_none(self, level=rlevel): - """Test for changeset r5065""" - assert_array_equal(np.array([np.nan]), np.asfarray([None])) - - def test_large_fancy_indexing(self, level=rlevel): - # Large enough to fail on 64-bit. - nbits = np.dtype(np.intp).itemsize * 8 - thesize = int((2**nbits)**(1.0/5.0)+1) - def dp(): - n = 3 - a = np.ones((n,)*5) - i = np.random.randint(0,n,size=thesize) - a[np.ix_(i,i,i,i,i)] = 0 - def dp2(): - n = 3 - a = np.ones((n,)*5) - i = np.random.randint(0,n,size=thesize) - g = a[np.ix_(i,i,i,i,i)] - self.assertRaises(ValueError, dp) - self.assertRaises(ValueError, dp2) - - def test_void_coercion(self, level=rlevel): - dt = np.dtype([('a','f4'),('b','i4')]) - x = np.zeros((1,),dt) - assert(np.r_[x,x].dtype == dt) - - def test_who_with_0dim_array(self, level=rlevel) : - """ticket #1243""" - import os, sys - - sys.stdout = open(os.devnull, 'w') - try : - tmp = np.who({'foo' : np.array(1)}) - sys.stdout = sys.__stdout__ - except : - sys.stdout = sys.__stdout__ - raise AssertionError("ticket #1243") - - def test_bincount_empty(self): - """Ticket #1387: empty array as input for bincount.""" - assert_raises(ValueError, lambda : np.bincount(np.array([], dtype=np.intp))) - - @dec.deprecated() - def test_include_dirs(self): - """As a sanity check, just test that get_include and - get_numarray_include include something reasonable. 
Somewhat - related to ticket #1405.""" - include_dirs = [np.get_include(), np.get_numarray_include(), - np.get_numpy_include()] - for path in include_dirs: - assert isinstance(path, (str, unicode)) - assert path != '' - - def test_polyder_return_type(self): - """Ticket #1249""" - assert_(isinstance(np.polyder(np.poly1d([1]), 0), np.poly1d)) - assert_(isinstance(np.polyder([1], 0), np.ndarray)) - assert_(isinstance(np.polyder(np.poly1d([1]), 1), np.poly1d)) - assert_(isinstance(np.polyder([1], 1), np.ndarray)) - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/tests/test_shape_base.py b/pythonPackages/numpy/numpy/lib/tests/test_shape_base.py deleted file mode 100755 index 403761e93b..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_shape_base.py +++ /dev/null @@ -1,316 +0,0 @@ -from numpy.testing import * -from numpy.lib import * -from numpy.core import * -from numpy import matrix, asmatrix - -class TestApplyAlongAxis(TestCase): - def test_simple(self): - a = ones((20,10),'d') - assert_array_equal(apply_along_axis(len,0,a),len(a)*ones(shape(a)[1])) - - def test_simple101(self,level=11): - a = ones((10,101),'d') - assert_array_equal(apply_along_axis(len,0,a),len(a)*ones(shape(a)[1])) - - def test_3d(self): - a = arange(27).reshape((3,3,3)) - assert_array_equal(apply_along_axis(sum,0,a), - [[27,30,33],[36,39,42],[45,48,51]]) - - -class TestApplyOverAxes(TestCase): - def test_simple(self): - a = arange(24).reshape(2,3,4) - aoa_a = apply_over_axes(sum, a, [0,2]) - assert_array_equal(aoa_a, array([[[60],[92],[124]]])) - - -class TestArraySplit(TestCase): - def test_integer_0_split(self): - a = arange(10) - try: - res = array_split(a,0) - assert(0) # it should have thrown a value error - except ValueError: - pass - - def test_integer_split(self): - a = arange(10) - res = array_split(a,1) - desired = [arange(10)] - compare_results(res,desired) - - res = array_split(a,2) - desired = [arange(5),arange(5,10)] - 
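The splitting contract these cases pin down — `array_split` tolerating unequal divisions where `split` refuses — can be stated directly (the exact chunk sizes below follow NumPy's documented rule of larger leading chunks):

```python
import numpy as np

a = np.arange(10)

# array_split allows unequal parts: 10 elements into 3 -> sizes 4, 3, 3.
parts = np.array_split(a, 3)
print([p.tolist() for p in parts])  # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]

# split demands an exact division and raises ValueError otherwise.
try:
    np.split(a, 3)
except ValueError as exc:
    print('split raised:', exc)
```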
compare_results(res,desired) - - res = array_split(a,3) - desired = [arange(4),arange(4,7),arange(7,10)] - compare_results(res,desired) - - res = array_split(a,4) - desired = [arange(3),arange(3,6),arange(6,8),arange(8,10)] - compare_results(res,desired) - - res = array_split(a,5) - desired = [arange(2),arange(2,4),arange(4,6),arange(6,8),arange(8,10)] - compare_results(res,desired) - - res = array_split(a,6) - desired = [arange(2),arange(2,4),arange(4,6),arange(6,8),arange(8,9), - arange(9,10)] - compare_results(res,desired) - - res = array_split(a,7) - desired = [arange(2),arange(2,4),arange(4,6),arange(6,7),arange(7,8), - arange(8,9), arange(9,10)] - compare_results(res,desired) - - res = array_split(a,8) - desired = [arange(2),arange(2,4),arange(4,5),arange(5,6),arange(6,7), - arange(7,8), arange(8,9), arange(9,10)] - compare_results(res,desired) - - res = array_split(a,9) - desired = [arange(2),arange(2,3),arange(3,4),arange(4,5),arange(5,6), - arange(6,7), arange(7,8), arange(8,9), arange(9,10)] - compare_results(res,desired) - - res = array_split(a,10) - desired = [arange(1),arange(1,2),arange(2,3),arange(3,4), - arange(4,5),arange(5,6), arange(6,7), arange(7,8), - arange(8,9), arange(9,10)] - compare_results(res,desired) - - res = array_split(a,11) - desired = [arange(1),arange(1,2),arange(2,3),arange(3,4), - arange(4,5),arange(5,6), arange(6,7), arange(7,8), - arange(8,9), arange(9,10),array([])] - compare_results(res,desired) - - def test_integer_split_2D_rows(self): - a = array([arange(10),arange(10)]) - res = array_split(a,3,axis=0) - desired = [array([arange(10)]),array([arange(10)]),array([])] - compare_results(res,desired) - - def test_integer_split_2D_cols(self): - a = array([arange(10),arange(10)]) - res = array_split(a,3,axis=-1) - desired = [array([arange(4),arange(4)]), - array([arange(4,7),arange(4,7)]), - array([arange(7,10),arange(7,10)])] - compare_results(res,desired) - - def test_integer_split_2D_default(self): - """ This will fail if we 
change default axis - """ - a = array([arange(10),arange(10)]) - res = array_split(a,3) - desired = [array([arange(10)]),array([arange(10)]),array([])] - compare_results(res,desired) - #perhaps should check higher dimensions - - def test_index_split_simple(self): - a = arange(10) - indices = [1,5,7] - res = array_split(a,indices,axis=-1) - desired = [arange(0,1),arange(1,5),arange(5,7),arange(7,10)] - compare_results(res,desired) - - def test_index_split_low_bound(self): - a = arange(10) - indices = [0,5,7] - res = array_split(a,indices,axis=-1) - desired = [array([]),arange(0,5),arange(5,7),arange(7,10)] - compare_results(res,desired) - - def test_index_split_high_bound(self): - a = arange(10) - indices = [0,5,7,10,12] - res = array_split(a,indices,axis=-1) - desired = [array([]),arange(0,5),arange(5,7),arange(7,10), - array([]),array([])] - compare_results(res,desired) - - -class TestSplit(TestCase): - """* This function is essentially the same as array_split, - except that it test if splitting will result in an - equal split. Only test for this case. 
- *""" - def test_equal_split(self): - a = arange(10) - res = split(a,2) - desired = [arange(5),arange(5,10)] - compare_results(res,desired) - - def test_unequal_split(self): - a = arange(10) - try: - res = split(a,3) - assert(0) # should raise an error - except ValueError: - pass - - -class TestDstack(TestCase): - def test_0D_array(self): - a = array(1); b = array(2); - res=dstack([a,b]) - desired = array([[[1,2]]]) - assert_array_equal(res,desired) - - def test_1D_array(self): - a = array([1]); b = array([2]); - res=dstack([a,b]) - desired = array([[[1,2]]]) - assert_array_equal(res,desired) - - def test_2D_array(self): - a = array([[1],[2]]); b = array([[1],[2]]); - res=dstack([a,b]) - desired = array([[[1,1]],[[2,2,]]]) - assert_array_equal(res,desired) - - def test_2D_array2(self): - a = array([1,2]); b = array([1,2]); - res=dstack([a,b]) - desired = array([[[1,1],[2,2]]]) - assert_array_equal(res,desired) - -""" array_split has more comprehensive test of splitting. - only do simple test on hsplit, vsplit, and dsplit -""" -class TestHsplit(TestCase): - """ only testing for integer splits. - """ - def test_0D_array(self): - a= array(1) - try: - hsplit(a,2) - assert(0) - except ValueError: - pass - - def test_1D_array(self): - a= array([1,2,3,4]) - res = hsplit(a,2) - desired = [array([1,2]),array([3,4])] - compare_results(res,desired) - - def test_2D_array(self): - a= array([[1,2,3,4], - [1,2,3,4]]) - res = hsplit(a,2) - desired = [array([[1,2],[1,2]]),array([[3,4],[3,4]])] - compare_results(res,desired) - - -class TestVsplit(TestCase): - """ only testing for integer splits. - """ - def test_1D_array(self): - a= array([1,2,3,4]) - try: - vsplit(a,2) - assert(0) - except ValueError: - pass - - def test_2D_array(self): - a= array([[1,2,3,4], - [1,2,3,4]]) - res = vsplit(a,2) - desired = [array([[1,2,3,4]]),array([[1,2,3,4]])] - compare_results(res,desired) - - -class TestDsplit(TestCase): - """ only testing for integer splits. 
- """ - def test_2D_array(self): - a= array([[1,2,3,4], - [1,2,3,4]]) - try: - dsplit(a,2) - assert(0) - except ValueError: - pass - - def test_3D_array(self): - a= array([[[1,2,3,4], - [1,2,3,4]], - [[1,2,3,4], - [1,2,3,4]]]) - res = dsplit(a,2) - desired = [array([[[1,2],[1,2]],[[1,2],[1,2]]]), - array([[[3,4],[3,4]],[[3,4],[3,4]]])] - compare_results(res,desired) - - -class TestSqueeze(TestCase): - def test_basic(self): - a = rand(20,10,10,1,1) - b = rand(20,1,10,1,20) - c = rand(1,1,20,10) - assert_array_equal(squeeze(a),reshape(a,(20,10,10))) - assert_array_equal(squeeze(b),reshape(b,(20,10,20))) - assert_array_equal(squeeze(c),reshape(c,(20,10))) - - -class TestKron(TestCase): - def test_return_type(self): - a = ones([2,2]) - m = asmatrix(a) - assert_equal(type(kron(a,a)), ndarray) - assert_equal(type(kron(m,m)), matrix) - assert_equal(type(kron(a,m)), matrix) - assert_equal(type(kron(m,a)), matrix) - class myarray(ndarray): - __array_priority__ = 0.0 - ma = myarray(a.shape, a.dtype, a.data) - assert_equal(type(kron(a,a)), ndarray) - assert_equal(type(kron(ma,ma)), myarray) - assert_equal(type(kron(a,ma)), ndarray) - assert_equal(type(kron(ma,a)), myarray) - - -class TestTile(TestCase): - def test_basic(self): - a = array([0,1,2]) - b = [[1,2],[3,4]] - assert_equal(tile(a,2), [0,1,2,0,1,2]) - assert_equal(tile(a,(2,2)), [[0,1,2,0,1,2],[0,1,2,0,1,2]]) - assert_equal(tile(a,(1,2)), [[0,1,2,0,1,2]]) - assert_equal(tile(b, 2), [[1,2,1,2],[3,4,3,4]]) - assert_equal(tile(b,(2,1)),[[1,2],[3,4],[1,2],[3,4]]) - assert_equal(tile(b,(2,2)),[[1,2,1,2],[3,4,3,4], - [1,2,1,2],[3,4,3,4]]) - - def test_empty(self): - a = array([[[]]]) - d = tile(a,(3,2,5)).shape - assert_equal(d,(3,2,0)) - - def test_kroncompare(self): - import numpy.random as nr - reps=[(2,),(1,2),(2,1),(2,2),(2,3,2),(3,2)] - shape=[(3,),(2,3),(3,4,3),(3,2,3),(4,3,2,4),(2,2)] - for s in shape: - b = nr.randint(0,10,size=s) - for r in reps: - a = ones(r, b.dtype) - large = tile(b, r) - klarge = kron(a, b) - 
assert_equal(large, klarge) - - -# Utility -def compare_results(res,desired): - for i in range(len(desired)): - assert_array_equal(res[i],desired[i]) - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/tests/test_stride_tricks.py b/pythonPackages/numpy/numpy/lib/tests/test_stride_tricks.py deleted file mode 100755 index 8f0ac52b86..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_stride_tricks.py +++ /dev/null @@ -1,208 +0,0 @@ -import numpy as np -from numpy.testing import * -from numpy.lib.stride_tricks import broadcast_arrays - - -def assert_shapes_correct(input_shapes, expected_shape): - """ Broadcast a list of arrays with the given input shapes and check the - common output shape. - """ - inarrays = [np.zeros(s) for s in input_shapes] - outarrays = broadcast_arrays(*inarrays) - outshapes = [a.shape for a in outarrays] - expected = [expected_shape] * len(inarrays) - assert outshapes == expected - -def assert_incompatible_shapes_raise(input_shapes): - """ Broadcast a list of arrays with the given (incompatible) input shapes - and check that they raise a ValueError. - """ - inarrays = [np.zeros(s) for s in input_shapes] - assert_raises(ValueError, broadcast_arrays, *inarrays) - -def assert_same_as_ufunc(shape0, shape1, transposed=False, flipped=False): - """ Broadcast two shapes against each other and check that the data layout - is the same as if a ufunc did the broadcasting. - """ - x0 = np.zeros(shape0, dtype=int) - # Note that multiply.reduce's identity element is 1.0, so when shape1==(), - # this gives the desired n==1. - n = int(np.multiply.reduce(shape1)) - x1 = np.arange(n).reshape(shape1) - if transposed: - x0 = x0.T - x1 = x1.T - if flipped: - x0 = x0[::-1] - x1 = x1[::-1] - # Use the add ufunc to do the broadcasting. Since we're adding 0s to x1, the - # result should be exactly the same as the broadcasted view of x1. 
- y = x0 + x1 - b0, b1 = broadcast_arrays(x0, x1) - assert_array_equal(y, b1) - - -def test_same(): - x = np.arange(10) - y = np.arange(10) - bx, by = broadcast_arrays(x, y) - assert_array_equal(x, bx) - assert_array_equal(y, by) - -def test_one_off(): - x = np.array([[1,2,3]]) - y = np.array([[1],[2],[3]]) - bx, by = broadcast_arrays(x, y) - bx0 = np.array([[1,2,3],[1,2,3],[1,2,3]]) - by0 = bx0.T - assert_array_equal(bx0, bx) - assert_array_equal(by0, by) - -def test_same_input_shapes(): - """ Check that the final shape is just the input shape. - """ - data = [ - (), - (1,), - (3,), - (0,1), - (0,3), - (1,0), - (3,0), - (1,3), - (3,1), - (3,3), - ] - for shape in data: - input_shapes = [shape] - # Single input. - yield assert_shapes_correct, input_shapes, shape - # Double input. - input_shapes2 = [shape, shape] - yield assert_shapes_correct, input_shapes2, shape - # Triple input. - input_shapes3 = [shape, shape, shape] - yield assert_shapes_correct, input_shapes3, shape - -def test_two_compatible_by_ones_input_shapes(): - """ Check that two different input shapes (of the same length but some have - 1s) broadcast to the correct shape. - """ - data = [ - [[(1,), (3,)], (3,)], - [[(1,3), (3,3)], (3,3)], - [[(3,1), (3,3)], (3,3)], - [[(1,3), (3,1)], (3,3)], - [[(1,1), (3,3)], (3,3)], - [[(1,1), (1,3)], (1,3)], - [[(1,1), (3,1)], (3,1)], - [[(1,0), (0,0)], (0,0)], - [[(0,1), (0,0)], (0,0)], - [[(1,0), (0,1)], (0,0)], - [[(1,1), (0,0)], (0,0)], - [[(1,1), (1,0)], (1,0)], - [[(1,1), (0,1)], (0,1)], - ] - for input_shapes, expected_shape in data: - yield assert_shapes_correct, input_shapes, expected_shape - # Reverse the input shapes since broadcasting should be symmetric. - yield assert_shapes_correct, input_shapes[::-1], expected_shape - -def test_two_compatible_by_prepending_ones_input_shapes(): - """ Check that two different input shapes (of different lengths) broadcast - to the correct shape. 
- """ - data = [ - [[(), (3,)], (3,)], - [[(3,), (3,3)], (3,3)], - [[(3,), (3,1)], (3,3)], - [[(1,), (3,3)], (3,3)], - [[(), (3,3)], (3,3)], - [[(1,1), (3,)], (1,3)], - [[(1,), (3,1)], (3,1)], - [[(1,), (1,3)], (1,3)], - [[(), (1,3)], (1,3)], - [[(), (3,1)], (3,1)], - [[(), (0,)], (0,)], - [[(0,), (0,0)], (0,0)], - [[(0,), (0,1)], (0,0)], - [[(1,), (0,0)], (0,0)], - [[(), (0,0)], (0,0)], - [[(1,1), (0,)], (1,0)], - [[(1,), (0,1)], (0,1)], - [[(1,), (1,0)], (1,0)], - [[(), (1,0)], (1,0)], - [[(), (0,1)], (0,1)], - ] - for input_shapes, expected_shape in data: - yield assert_shapes_correct, input_shapes, expected_shape - # Reverse the input shapes since broadcasting should be symmetric. - yield assert_shapes_correct, input_shapes[::-1], expected_shape - -def test_incompatible_shapes_raise_valueerror(): - """ Check that a ValueError is raised for incompatible shapes. - """ - data = [ - [(3,), (4,)], - [(2,3), (2,)], - [(3,), (3,), (4,)], - [(1,3,4), (2,3,3)], - ] - for input_shapes in data: - yield assert_incompatible_shapes_raise, input_shapes - # Reverse the input shapes since broadcasting should be symmetric. - yield assert_incompatible_shapes_raise, input_shapes[::-1] - -def test_same_as_ufunc(): - """ Check that the data layout is the same as if a ufunc did the operation. 
- """ - data = [ - [[(1,), (3,)], (3,)], - [[(1,3), (3,3)], (3,3)], - [[(3,1), (3,3)], (3,3)], - [[(1,3), (3,1)], (3,3)], - [[(1,1), (3,3)], (3,3)], - [[(1,1), (1,3)], (1,3)], - [[(1,1), (3,1)], (3,1)], - [[(1,0), (0,0)], (0,0)], - [[(0,1), (0,0)], (0,0)], - [[(1,0), (0,1)], (0,0)], - [[(1,1), (0,0)], (0,0)], - [[(1,1), (1,0)], (1,0)], - [[(1,1), (0,1)], (0,1)], - [[(), (3,)], (3,)], - [[(3,), (3,3)], (3,3)], - [[(3,), (3,1)], (3,3)], - [[(1,), (3,3)], (3,3)], - [[(), (3,3)], (3,3)], - [[(1,1), (3,)], (1,3)], - [[(1,), (3,1)], (3,1)], - [[(1,), (1,3)], (1,3)], - [[(), (1,3)], (1,3)], - [[(), (3,1)], (3,1)], - [[(), (0,)], (0,)], - [[(0,), (0,0)], (0,0)], - [[(0,), (0,1)], (0,0)], - [[(1,), (0,0)], (0,0)], - [[(), (0,0)], (0,0)], - [[(1,1), (0,)], (1,0)], - [[(1,), (0,1)], (0,1)], - [[(1,), (1,0)], (1,0)], - [[(), (1,0)], (1,0)], - [[(), (0,1)], (0,1)], - ] - for input_shapes, expected_shape in data: - yield assert_same_as_ufunc, input_shapes[0], input_shapes[1] - # Reverse the input shapes since broadcasting should be symmetric. - yield assert_same_as_ufunc, input_shapes[1], input_shapes[0] - # Try them transposed, too. - yield assert_same_as_ufunc, input_shapes[0], input_shapes[1], True - # ... and flipped for non-rank-0 inputs in order to test negative - # strides. 
- if () not in input_shapes: - yield assert_same_as_ufunc, input_shapes[0], input_shapes[1], False, True - yield assert_same_as_ufunc, input_shapes[0], input_shapes[1], True, True - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/tests/test_twodim_base.py b/pythonPackages/numpy/numpy/lib/tests/test_twodim_base.py deleted file mode 100755 index 11e4ece6c3..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_twodim_base.py +++ /dev/null @@ -1,307 +0,0 @@ -""" Test functions for matrix module - -""" - -from numpy.testing import * - -from numpy import ( arange, rot90, add, fliplr, flipud, zeros, ones, eye, - array, diag, histogram2d, tri, mask_indices, triu_indices, - triu_indices_from, tril_indices, tril_indices_from ) - -import numpy as np -from numpy.compat import asbytes, asbytes_nested - -def get_mat(n): - data = arange(n) - data = add.outer(data,data) - return data - -class TestEye(TestCase): - def test_basic(self): - assert_equal(eye(4),array([[1,0,0,0], - [0,1,0,0], - [0,0,1,0], - [0,0,0,1]])) - assert_equal(eye(4,dtype='f'),array([[1,0,0,0], - [0,1,0,0], - [0,0,1,0], - [0,0,0,1]],'f')) - assert_equal(eye(3) == 1, eye(3,dtype=bool)) - - def test_diag(self): - assert_equal(eye(4,k=1),array([[0,1,0,0], - [0,0,1,0], - [0,0,0,1], - [0,0,0,0]])) - assert_equal(eye(4,k=-1),array([[0,0,0,0], - [1,0,0,0], - [0,1,0,0], - [0,0,1,0]])) - def test_2d(self): - assert_equal(eye(4,3),array([[1,0,0], - [0,1,0], - [0,0,1], - [0,0,0]])) - assert_equal(eye(3,4),array([[1,0,0,0], - [0,1,0,0], - [0,0,1,0]])) - def test_diag2d(self): - assert_equal(eye(3,4,k=2),array([[0,0,1,0], - [0,0,0,1], - [0,0,0,0]])) - assert_equal(eye(4,3,k=-2),array([[0,0,0], - [0,0,0], - [1,0,0], - [0,1,0]])) - - def test_eye_bounds(self): - assert_equal(eye(2, 2, 1), [[0, 1], [0, 0]]) - assert_equal(eye(2, 2, -1), [[0, 0], [1, 0]]) - assert_equal(eye(2, 2, 2), [[0, 0], [0, 0]]) - assert_equal(eye(2, 2, -2), [[0, 0], [0, 0]]) - assert_equal(eye(3, 2, 2), 
[[0, 0], [0, 0], [0, 0]]) - assert_equal(eye(3, 2, 1), [[0, 1], [0, 0], [0, 0]]) - assert_equal(eye(3, 2, -1), [[0, 0], [1, 0], [0, 1]]) - assert_equal(eye(3, 2, -2), [[0, 0], [0, 0], [1, 0]]) - assert_equal(eye(3, 2, -3), [[0, 0], [0, 0], [0, 0]]) - - def test_strings(self): - assert_equal(eye(2, 2, dtype='S3'), - asbytes_nested([['1', ''], ['', '1']])) - - def test_bool(self): - assert_equal(eye(2, 2, dtype=bool), [[True, False], [False, True]]) - -class TestDiag(TestCase): - def test_vector(self): - vals = (100 * arange(5)).astype('l') - b = zeros((5, 5)) - for k in range(5): - b[k, k] = vals[k] - assert_equal(diag(vals), b) - b = zeros((7, 7)) - c = b.copy() - for k in range(5): - b[k, k + 2] = vals[k] - c[k + 2, k] = vals[k] - assert_equal(diag(vals, k=2), b) - assert_equal(diag(vals, k=-2), c) - - def test_matrix(self, vals=None): - if vals is None: - vals = (100 * get_mat(5) + 1).astype('l') - b = zeros((5,)) - for k in range(5): - b[k] = vals[k,k] - assert_equal(diag(vals), b) - b = b * 0 - for k in range(3): - b[k] = vals[k, k + 2] - assert_equal(diag(vals, 2), b[:3]) - for k in range(3): - b[k] = vals[k + 2, k] - assert_equal(diag(vals, -2), b[:3]) - - def test_fortran_order(self): - vals = array((100 * get_mat(5) + 1), order='F', dtype='l') - self.test_matrix(vals) - - def test_diag_bounds(self): - A = [[1, 2], [3, 4], [5, 6]] - assert_equal(diag(A, k=2), []) - assert_equal(diag(A, k=1), [2]) - assert_equal(diag(A, k=0), [1, 4]) - assert_equal(diag(A, k=-1), [3, 6]) - assert_equal(diag(A, k=-2), [5]) - assert_equal(diag(A, k=-3), []) - - def test_failure(self): - self.assertRaises(ValueError, diag, [[[1]]]) - -class TestFliplr(TestCase): - def test_basic(self): - self.assertRaises(ValueError, fliplr, ones(4)) - a = get_mat(4) - b = a[:,::-1] - assert_equal(fliplr(a),b) - a = [[0,1,2], - [3,4,5]] - b = [[2,1,0], - [5,4,3]] - assert_equal(fliplr(a),b) - -class TestFlipud(TestCase): - def test_basic(self): - a = get_mat(4) - b = a[::-1,:] - 
assert_equal(flipud(a),b) - a = [[0,1,2], - [3,4,5]] - b = [[3,4,5], - [0,1,2]] - assert_equal(flipud(a),b) - -class TestRot90(TestCase): - def test_basic(self): - self.assertRaises(ValueError, rot90, ones(4)) - - a = [[0,1,2], - [3,4,5]] - b1 = [[2,5], - [1,4], - [0,3]] - b2 = [[5,4,3], - [2,1,0]] - b3 = [[3,0], - [4,1], - [5,2]] - b4 = [[0,1,2], - [3,4,5]] - - for k in range(-3,13,4): - assert_equal(rot90(a,k=k),b1) - for k in range(-2,13,4): - assert_equal(rot90(a,k=k),b2) - for k in range(-1,13,4): - assert_equal(rot90(a,k=k),b3) - for k in range(0,13,4): - assert_equal(rot90(a,k=k),b4) - - def test_axes(self): - a = ones((50,40,3)) - assert_equal(rot90(a).shape,(40,50,3)) - -class TestHistogram2d(TestCase): - def test_simple(self): - x = array([ 0.41702200, 0.72032449, 0.00011437481, 0.302332573, 0.146755891]) - y = array([ 0.09233859, 0.18626021, 0.34556073, 0.39676747, 0.53881673]) - xedges = np.linspace(0,1,10) - yedges = np.linspace(0,1,10) - H = histogram2d(x, y, (xedges, yedges))[0] - answer = array([[0, 0, 0, 1, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 1, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 0, 1, 0, 0, 0, 0, 0, 0], - [0, 1, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0]]) - assert_array_equal(H.T, answer) - H = histogram2d(x, y, xedges)[0] - assert_array_equal(H.T, answer) - H,xedges,yedges = histogram2d(range(10),range(10)) - assert_array_equal(H, eye(10,10)) - assert_array_equal(xedges, np.linspace(0,9,11)) - assert_array_equal(yedges, np.linspace(0,9,11)) - - def test_asym(self): - x = array([1, 1, 2, 3, 4, 4, 4, 5]) - y = array([1, 3, 2, 0, 1, 2, 3, 4]) - H, xed, yed = histogram2d(x,y, (6, 5), range = [[0,6],[0,5]], normed=True) - answer = array([[0.,0,0,0,0], - [0,1,0,1,0], - [0,0,1,0,0], - [1,0,0,0,0], - [0,1,1,1,0], - [0,0,0,0,1]]) - assert_array_almost_equal(H, answer/8., 3) - assert_array_equal(xed, np.linspace(0,6,7)) - assert_array_equal(yed, 
np.linspace(0,5,6)) - def test_norm(self): - x = array([1,2,3,1,2,3,1,2,3]) - y = array([1,1,1,2,2,2,3,3,3]) - H, xed, yed = histogram2d(x,y,[[1,2,3,5], [1,2,3,5]], normed=True) - answer=array([[1,1,.5], - [1,1,.5], - [.5,.5,.25]])/9. - assert_array_almost_equal(H, answer, 3) - - def test_all_outliers(self): - r = rand(100)+1. - H, xed, yed = histogram2d(r, r, (4, 5), range=([0,1], [0,1])) - assert_array_equal(H, 0) - - -class TestTri(TestCase): - def test_dtype(self): - out = array([[1,0,0], - [1,1,0], - [1,1,1]]) - assert_array_equal(tri(3),out) - assert_array_equal(tri(3,dtype=bool),out.astype(bool)) - - -def test_mask_indices(): - # simple test without offset - iu = mask_indices(3, np.triu) - a = np.arange(9).reshape(3, 3) - yield (assert_array_equal, a[iu], array([0, 1, 2, 4, 5, 8])) - # Now with an offset - iu1 = mask_indices(3, np.triu, 1) - yield (assert_array_equal, a[iu1], array([1, 2, 5])) - - -def test_tril_indices(): - # indices without and with offset - il1 = tril_indices(4) - il2 = tril_indices(4, 2) - - a = np.array([[1, 2, 3, 4], - [5, 6, 7, 8], - [9, 10, 11, 12], - [13, 14, 15, 16]]) - - # indexing: - yield (assert_array_equal, a[il1], - array([ 1, 5, 6, 9, 10, 11, 13, 14, 15, 16]) ) - - # And for assigning values: - a[il1] = -1 - yield (assert_array_equal, a, - array([[-1, 2, 3, 4], - [-1, -1, 7, 8], - [-1, -1, -1, 12], - [-1, -1, -1, -1]]) ) - - # These cover almost the whole array (two diagonals right of the main one): - a[il2] = -10 - yield (assert_array_equal, a, - array([[-10, -10, -10, 4], - [-10, -10, -10, -10], - [-10, -10, -10, -10], - [-10, -10, -10, -10]]) ) - - -def test_triu_indices(): - iu1 = triu_indices(4) - iu2 = triu_indices(4, 2) - - a = np.array([[1, 2, 3, 4], - [5, 6, 7, 8], - [9, 10, 11, 12], - [13, 14, 15, 16]]) - - # Both for indexing: - yield (assert_array_equal, a[iu1], - array([1, 2, 3, 4, 6, 7, 8, 11, 12, 16])) - - # And for assigning values: - a[iu1] = -1 - yield (assert_array_equal, a, - array([[-1, -1, -1, -1], - [ 
5, -1, -1, -1], - [ 9, 10, -1, -1], - [13, 14, 15, -1]]) ) - - # These cover almost the whole array (two diagonals right of the main one): - a[iu2] = -10 - yield ( assert_array_equal, a, - array([[ -1, -1, -10, -10], - [ 5, -1, -1, -10], - [ 9, 10, -1, -1], - [ 13, 14, 15, -1]]) ) - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/tests/test_type_check.py b/pythonPackages/numpy/numpy/lib/tests/test_type_check.py deleted file mode 100755 index 09a9f8a90a..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_type_check.py +++ /dev/null @@ -1,380 +0,0 @@ -from numpy.testing import * -from numpy.lib import * -from numpy.core import * -from numpy.compat import asbytes - -def assert_all(x): - assert(all(x)), x - - -class TestCommonType(TestCase): - def test_basic(self): - ai32 = array([[1,2],[3,4]], dtype=int32) - af32 = array([[1,2],[3,4]], dtype=float32) - af64 = array([[1,2],[3,4]], dtype=float64) - acs = array([[1+5j,2+6j],[3+7j,4+8j]], dtype=csingle) - acd = array([[1+5j,2+6j],[3+7j,4+8j]], dtype=cdouble) - assert common_type(af32) == float32 - assert common_type(af64) == float64 - assert common_type(acs) == csingle - assert common_type(acd) == cdouble - - - -class TestMintypecode(TestCase): - - def test_default_1(self): - for itype in '1bcsuwil': - assert_equal(mintypecode(itype),'d') - assert_equal(mintypecode('f'),'f') - assert_equal(mintypecode('d'),'d') - assert_equal(mintypecode('F'),'F') - assert_equal(mintypecode('D'),'D') - - def test_default_2(self): - for itype in '1bcsuwil': - assert_equal(mintypecode(itype+'f'),'f') - assert_equal(mintypecode(itype+'d'),'d') - assert_equal(mintypecode(itype+'F'),'F') - assert_equal(mintypecode(itype+'D'),'D') - assert_equal(mintypecode('ff'),'f') - assert_equal(mintypecode('fd'),'d') - assert_equal(mintypecode('fF'),'F') - assert_equal(mintypecode('fD'),'D') - assert_equal(mintypecode('df'),'d') - assert_equal(mintypecode('dd'),'d') - 
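Aside on the `tril_indices`/`triu_indices` tests above, which index a 4x4 array holding 1..16: the index sets they exercise can be modeled in pure Python as (row, col) pairs with `col <= row + k` (lower triangle) or `col >= row + k` (upper). The helper names are mine, and NumPy itself returns a pair of index arrays rather than pairs, but the selected elements are the same:

```python
def tril_index_pairs(n, k=0):
    """(row, col) pairs of the lower triangle of an n x n array,
    i.e. entries with col <= row + k, in row-major order."""
    return [(i, j) for i in range(n) for j in range(n) if j <= i + k]

def triu_index_pairs(n, k=0):
    """(row, col) pairs of the upper triangle: col >= row + k."""
    return [(i, j) for i in range(n) for j in range(n) if j >= i + k]

# The 4x4 example from the tests: values 1..16 in row-major order.
a = [[4 * i + j + 1 for j in range(4)] for i in range(4)]
assert [a[i][j] for i, j in tril_index_pairs(4)] == \
    [1, 5, 6, 9, 10, 11, 13, 14, 15, 16]
assert [a[i][j] for i, j in triu_index_pairs(4)] == \
    [1, 2, 3, 4, 6, 7, 8, 11, 12, 16]
```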
#assert_equal(mintypecode('dF',savespace=1),'F') - assert_equal(mintypecode('dF'),'D') - assert_equal(mintypecode('dD'),'D') - assert_equal(mintypecode('Ff'),'F') - #assert_equal(mintypecode('Fd',savespace=1),'F') - assert_equal(mintypecode('Fd'),'D') - assert_equal(mintypecode('FF'),'F') - assert_equal(mintypecode('FD'),'D') - assert_equal(mintypecode('Df'),'D') - assert_equal(mintypecode('Dd'),'D') - assert_equal(mintypecode('DF'),'D') - assert_equal(mintypecode('DD'),'D') - - def test_default_3(self): - assert_equal(mintypecode('fdF'),'D') - #assert_equal(mintypecode('fdF',savespace=1),'F') - assert_equal(mintypecode('fdD'),'D') - assert_equal(mintypecode('fFD'),'D') - assert_equal(mintypecode('dFD'),'D') - - assert_equal(mintypecode('ifd'),'d') - assert_equal(mintypecode('ifF'),'F') - assert_equal(mintypecode('ifD'),'D') - assert_equal(mintypecode('idF'),'D') - #assert_equal(mintypecode('idF',savespace=1),'F') - assert_equal(mintypecode('idD'),'D') - - -class TestIsscalar(TestCase): - - def test_basic(self): - assert(isscalar(3)) - assert(not isscalar([3])) - assert(not isscalar((3,))) - assert(isscalar(3j)) - assert(isscalar(10L)) - assert(isscalar(4.0)) - - -class TestReal(TestCase): - - def test_real(self): - y = rand(10,) - assert_array_equal(y,real(y)) - - def test_cmplx(self): - y = rand(10,)+1j*rand(10,) - assert_array_equal(y.real,real(y)) - - -class TestImag(TestCase): - - def test_real(self): - y = rand(10,) - assert_array_equal(0,imag(y)) - - def test_cmplx(self): - y = rand(10,)+1j*rand(10,) - assert_array_equal(y.imag,imag(y)) - - -class TestIscomplex(TestCase): - - def test_fail(self): - z = array([-1,0,1]) - res = iscomplex(z) - assert(not sometrue(res,axis=0)) - def test_pass(self): - z = array([-1j,1,0]) - res = iscomplex(z) - assert_array_equal(res,[1,0,0]) - - -class TestIsreal(TestCase): - - def test_pass(self): - z = array([-1,0,1j]) - res = isreal(z) - assert_array_equal(res,[1,1,0]) - def test_fail(self): - z = array([-1j,1,0]) - res = 
isreal(z) - assert_array_equal(res,[0,1,1]) - - -class TestIscomplexobj(TestCase): - - def test_basic(self): - z = array([-1,0,1]) - assert(not iscomplexobj(z)) - z = array([-1j,0,-1]) - assert(iscomplexobj(z)) - - - -class TestIsrealobj(TestCase): - def test_basic(self): - z = array([-1,0,1]) - assert(isrealobj(z)) - z = array([-1j,0,-1]) - assert(not isrealobj(z)) - - -class TestIsnan(TestCase): - - def test_goodvalues(self): - z = array((-1.,0.,1.)) - res = isnan(z) == 0 - assert_all(alltrue(res,axis=0)) - - def test_posinf(self): - olderr = seterr(divide='ignore') - try: - assert_all(isnan(array((1.,))/0.) == 0) - finally: - seterr(**olderr) - - def test_neginf(self): - olderr = seterr(divide='ignore') - try: - assert_all(isnan(array((-1.,))/0.) == 0) - finally: - seterr(**olderr) - - def test_ind(self): - olderr = seterr(divide='ignore', invalid='ignore') - try: - assert_all(isnan(array((0.,))/0.) == 1) - finally: - seterr(**olderr) - - #def test_qnan(self): log(-1) return pi*j now - # assert_all(isnan(log(-1.)) == 1) - - def test_integer(self): - assert_all(isnan(1) == 0) - - def test_complex(self): - assert_all(isnan(1+1j) == 0) - - def test_complex1(self): - olderr = seterr(divide='ignore', invalid='ignore') - try: - assert_all(isnan(array(0+0j)/0.) == 1) - finally: - seterr(**olderr) - - -class TestIsfinite(TestCase): - - def test_goodvalues(self): - z = array((-1.,0.,1.)) - res = isfinite(z) == 1 - assert_all(alltrue(res,axis=0)) - - def test_posinf(self): - olderr = seterr(divide='ignore', invalid='ignore') - try: - assert_all(isfinite(array((1.,))/0.) == 0) - finally: - seterr(**olderr) - - def test_neginf(self): - olderr = seterr(divide='ignore', invalid='ignore') - try: - assert_all(isfinite(array((-1.,))/0.) == 0) - finally: - seterr(**olderr) - - def test_ind(self): - olderr = seterr(divide='ignore', invalid='ignore') - try: - assert_all(isfinite(array((0.,))/0.) 
== 0) - finally: - seterr(**olderr) - - #def test_qnan(self): - # assert_all(isfinite(log(-1.)) == 0) - - def test_integer(self): - assert_all(isfinite(1) == 1) - - def test_complex(self): - assert_all(isfinite(1+1j) == 1) - - def test_complex1(self): - olderr = seterr(divide='ignore', invalid='ignore') - try: - assert_all(isfinite(array(1+1j)/0.) == 0) - finally: - seterr(**olderr) - - -class TestIsinf(TestCase): - - def test_goodvalues(self): - z = array((-1.,0.,1.)) - res = isinf(z) == 0 - assert_all(alltrue(res,axis=0)) - - def test_posinf(self): - olderr = seterr(divide='ignore', invalid='ignore') - try: - assert_all(isinf(array((1.,))/0.) == 1) - finally: - seterr(**olderr) - - def test_posinf_scalar(self): - olderr = seterr(divide='ignore', invalid='ignore') - try: - assert_all(isinf(array(1.,)/0.) == 1) - finally: - seterr(**olderr) - - def test_neginf(self): - olderr = seterr(divide='ignore', invalid='ignore') - try: - assert_all(isinf(array((-1.,))/0.) == 1) - finally: - seterr(**olderr) - - def test_neginf_scalar(self): - olderr = seterr(divide='ignore', invalid='ignore') - try: - assert_all(isinf(array(-1.)/0.) == 1) - finally: - seterr(**olderr) - - def test_ind(self): - olderr = seterr(divide='ignore', invalid='ignore') - try: - assert_all(isinf(array((0.,))/0.) == 0) - finally: - seterr(**olderr) - - #def test_qnan(self): - # assert_all(isinf(log(-1.)) == 0) - # assert_all(isnan(log(-1.)) == 1) - - -class TestIsposinf(TestCase): - - def test_generic(self): - olderr = seterr(divide='ignore', invalid='ignore') - try: - vals = isposinf(array((-1.,0,1))/0.) - finally: - seterr(**olderr) - assert(vals[0] == 0) - assert(vals[1] == 0) - assert(vals[2] == 1) - - -class TestIsneginf(TestCase): - def test_generic(self): - olderr = seterr(divide='ignore', invalid='ignore') - try: - vals = isneginf(array((-1.,0,1))/0.) 
- finally: - seterr(**olderr) - assert(vals[0] == 1) - assert(vals[1] == 0) - assert(vals[2] == 0) - - -class TestNanToNum(TestCase): - - def test_generic(self): - olderr = seterr(divide='ignore', invalid='ignore') - try: - vals = nan_to_num(array((-1.,0,1))/0.) - finally: - seterr(**olderr) - assert_all(vals[0] < -1e10) and assert_all(isfinite(vals[0])) - assert(vals[1] == 0) - assert_all(vals[2] > 1e10) and assert_all(isfinite(vals[2])) - - def test_integer(self): - vals = nan_to_num(1) - assert_all(vals == 1) - - def test_complex_good(self): - vals = nan_to_num(1+1j) - assert_all(vals == 1+1j) - - def test_complex_bad(self): - v = 1+1j - olderr = seterr(divide='ignore', invalid='ignore') - try: - v += array(0+1.j)/0. - finally: - seterr(**olderr) - vals = nan_to_num(v) - # !! This is actually (unexpectedly) zero - assert_all(isfinite(vals)) - - def test_complex_bad2(self): - v = 1+1j - olderr = seterr(divide='ignore', invalid='ignore') - try: - v += array(-1+1.j)/0. - finally: - seterr(**olderr) - vals = nan_to_num(v) - assert_all(isfinite(vals)) - #assert_all(vals.imag > 1e10) and assert_all(isfinite(vals)) - # !! This is actually (unexpectedly) positive - # !! inf. Comment out for now, and see if it - # !! 
changes - #assert_all(vals.real < -1e10) and assert_all(isfinite(vals)) - - -class TestRealIfClose(TestCase): - - def test_basic(self): - a = rand(10) - b = real_if_close(a+1e-15j) - assert_all(isrealobj(b)) - assert_array_equal(a,b) - b = real_if_close(a+1e-7j) - assert_all(iscomplexobj(b)) - b = real_if_close(a+1e-7j,tol=1e-6) - assert_all(isrealobj(b)) - - -class TestArrayConversion(TestCase): - - def test_asfarray(self): - a = asfarray(array([1,2,3])) - assert_equal(a.__class__,ndarray) - assert issubdtype(a.dtype,float) - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/tests/test_ufunclike.py b/pythonPackages/numpy/numpy/lib/tests/test_ufunclike.py deleted file mode 100755 index e9c41d09cb..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_ufunclike.py +++ /dev/null @@ -1,71 +0,0 @@ -from numpy.testing import * -import numpy.core as nx -import numpy.lib.ufunclike as ufl -from numpy.testing.decorators import deprecated - -class TestUfunclike(TestCase): - - def test_isposinf(self): - a = nx.array([nx.inf, -nx.inf, nx.nan, 0.0, 3.0, -3.0]) - out = nx.zeros(a.shape, bool) - tgt = nx.array([True, False, False, False, False, False]) - - res = ufl.isposinf(a) - assert_equal(res, tgt) - res = ufl.isposinf(a, out) - assert_equal(res, tgt) - assert_equal(out, tgt) - - def test_isneginf(self): - a = nx.array([nx.inf, -nx.inf, nx.nan, 0.0, 3.0, -3.0]) - out = nx.zeros(a.shape, bool) - tgt = nx.array([False, True, False, False, False, False]) - - res = ufl.isneginf(a) - assert_equal(res, tgt) - res = ufl.isneginf(a, out) - assert_equal(res, tgt) - assert_equal(out, tgt) - - @deprecated() - def test_log2(self): - a = nx.array([4.5, 2.3, 6.5]) - out = nx.zeros(a.shape, float) - tgt = nx.array([2.169925, 1.20163386, 2.70043972]) - res = ufl.log2(a) - assert_almost_equal(res, tgt) - res = ufl.log2(a, out) - assert_almost_equal(res, tgt) - assert_almost_equal(out, tgt) - - def test_fix(self): - a = nx.array([[1.0, 1.1, 
1.5, 1.8], [-1.0, -1.1, -1.5, -1.8]]) - out = nx.zeros(a.shape, float) - tgt = nx.array([[ 1., 1., 1., 1.], [-1., -1., -1., -1.]]) - - res = ufl.fix(a) - assert_equal(res, tgt) - res = ufl.fix(a, out) - assert_equal(res, tgt) - assert_equal(out, tgt) - assert_equal(ufl.fix(3.14), 3) - - def test_fix_with_subclass(self): - class MyArray(nx.ndarray): - def __new__(cls, data, metadata=None): - res = nx.array(data, copy=True).view(cls) - res.metadata = metadata - return res - def __array_wrap__(self, obj, context=None): - obj.metadata = self.metadata - return obj - - a = nx.array([1.1, -1.1]) - m = MyArray(a, metadata='foo') - f = ufl.fix(m) - assert_array_equal(f, nx.array([1,-1])) - assert isinstance(f, MyArray) - assert_equal(f.metadata, 'foo') - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/lib/tests/test_utils.py b/pythonPackages/numpy/numpy/lib/tests/test_utils.py deleted file mode 100755 index 6a09c6dbd9..0000000000 --- a/pythonPackages/numpy/numpy/lib/tests/test_utils.py +++ /dev/null @@ -1,38 +0,0 @@ -from numpy.testing import * -import numpy.lib.utils as utils -from numpy.lib import deprecate - -from StringIO import StringIO - -def test_lookfor(): - out = StringIO() - utils.lookfor('eigenvalue', module='numpy', output=out, - import_modules=False) - out = out.getvalue() - assert 'numpy.linalg.eig' in out - - -@deprecate -def old_func(self, x): - return x - -@deprecate(message="Rather use new_func2") -def old_func2(self, x): - return x - -def old_func3(self, x): - return x -new_func3 = deprecate(old_func3, old_name="old_func3", new_name="new_func3") - -def test_deprecate_decorator(): - assert 'deprecated' in old_func.__doc__ - -def test_deprecate_decorator_message(): - assert 'Rather use new_func2' in old_func2.__doc__ - -def test_deprecate_fn(): - assert 'old_func3' in new_func3.__doc__ - assert 'new_func3' in new_func3.__doc__ - -if __name__ == "__main__": - run_module_suite() diff --git 
a/pythonPackages/numpy/numpy/lib/twodim_base.py b/pythonPackages/numpy/numpy/lib/twodim_base.py deleted file mode 100755 index de7d14072c..0000000000 --- a/pythonPackages/numpy/numpy/lib/twodim_base.py +++ /dev/null @@ -1,879 +0,0 @@ -""" Basic functions for manipulating 2d arrays - -""" - -__all__ = ['diag','diagflat','eye','fliplr','flipud','rot90','tri','triu', - 'tril','vander','histogram2d','mask_indices', - 'tril_indices','tril_indices_from','triu_indices','triu_indices_from', - ] - -from numpy.core.numeric import asanyarray, equal, subtract, arange, \ - zeros, greater_equal, multiply, ones, asarray, alltrue, where, \ - empty - -def fliplr(m): - """ - Flip array in the left/right direction. - - Flip the entries in each row in the left/right direction. - Columns are preserved, but appear in a different order than before. - - Parameters - ---------- - m : array_like - Input array. - - Returns - ------- - f : ndarray - A view of `m` with the columns reversed. Since a view - is returned, this operation is :math:`\\mathcal O(1)`. - - See Also - -------- - flipud : Flip array in the up/down direction. - rot90 : Rotate array counterclockwise. - - Notes - ----- - Equivalent to A[:,::-1]. Does not require the array to be - two-dimensional. - - Examples - -------- - >>> A = np.diag([1.,2.,3.]) - >>> A - array([[ 1., 0., 0.], - [ 0., 2., 0.], - [ 0., 0., 3.]]) - >>> np.fliplr(A) - array([[ 0., 0., 1.], - [ 0., 2., 0.], - [ 3., 0., 0.]]) - - >>> A = np.random.randn(2,3,5) - >>> np.all(np.fliplr(A)==A[:,::-1,...]) - True - - """ - m = asanyarray(m) - if m.ndim < 2: - raise ValueError, "Input must be >= 2-d." - return m[:, ::-1] - -def flipud(m): - """ - Flip array in the up/down direction. - - Flip the entries in each column in the up/down direction. - Rows are preserved, but appear in a different order than before. - - Parameters - ---------- - m : array_like - Input array. - - Returns - ------- - out : array_like - A view of `m` with the rows reversed. 
Since a view is - returned, this operation is :math:`\\mathcal O(1)`. - - See Also - -------- - fliplr : Flip array in the left/right direction. - rot90 : Rotate array counterclockwise. - - Notes - ----- - Equivalent to ``A[::-1,...]``. - Does not require the array to be two-dimensional. - - Examples - -------- - >>> A = np.diag([1.0, 2, 3]) - >>> A - array([[ 1., 0., 0.], - [ 0., 2., 0.], - [ 0., 0., 3.]]) - >>> np.flipud(A) - array([[ 0., 0., 3.], - [ 0., 2., 0.], - [ 1., 0., 0.]]) - - >>> A = np.random.randn(2,3,5) - >>> np.all(np.flipud(A)==A[::-1,...]) - True - - >>> np.flipud([1,2]) - array([2, 1]) - - """ - m = asanyarray(m) - if m.ndim < 1: - raise ValueError, "Input must be >= 1-d." - return m[::-1,...] - -def rot90(m, k=1): - """ - Rotate an array by 90 degrees in the counter-clockwise direction. - - The first two dimensions are rotated; therefore, the array must be at - least 2-D. - - Parameters - ---------- - m : array_like - Array of two or more dimensions. - k : integer - Number of times the array is rotated by 90 degrees. - - Returns - ------- - y : ndarray - Rotated array. - - See Also - -------- - fliplr : Flip an array horizontally. - flipud : Flip an array vertically. - - Examples - -------- - >>> m = np.array([[1,2],[3,4]], int) - >>> m - array([[1, 2], - [3, 4]]) - >>> np.rot90(m) - array([[2, 4], - [1, 3]]) - >>> np.rot90(m, 2) - array([[4, 3], - [2, 1]]) - - """ - m = asanyarray(m) - if m.ndim < 2: - raise ValueError, "Input must be >= 2-d." - k = k % 4 - if k == 0: return m - elif k == 1: return fliplr(m).swapaxes(0,1) - elif k == 2: return fliplr(flipud(m)) - else: return fliplr(m.swapaxes(0,1)) # k==3 - -def eye(N, M=None, k=0, dtype=float): - """ - Return a 2-D array with ones on the diagonal and zeros elsewhere. - - Parameters - ---------- - N : int - Number of rows in the output. - M : int, optional - Number of columns in the output. If None, defaults to `N`.
- k : int, optional - Index of the diagonal: 0 (the default) refers to the main diagonal, - a positive value refers to an upper diagonal, and a negative value - to a lower diagonal. - dtype : data-type, optional - Data-type of the returned array. - - Returns - ------- - I : ndarray of shape (N,M) - An array where all elements are equal to zero, except for the `k`-th - diagonal, whose values are equal to one. - - See Also - -------- - identity : (almost) equivalent function - diag : diagonal 2-D array from a 1-D array specified by the user. - - Examples - -------- - >>> np.eye(2, dtype=int) - array([[1, 0], - [0, 1]]) - >>> np.eye(3, k=1) - array([[ 0., 1., 0.], - [ 0., 0., 1.], - [ 0., 0., 0.]]) - - """ - if M is None: - M = N - m = zeros((N, M), dtype=dtype) - if k >= M: - return m - if k >= 0: - i = k - else: - i = (-k) * M - m[:M-k].flat[i::M+1] = 1 - return m - -def diag(v, k=0): - """ - Extract a diagonal or construct a diagonal array. - - Parameters - ---------- - v : array_like - If `v` is a 2-D array, return a copy of its `k`-th diagonal. - If `v` is a 1-D array, return a 2-D array with `v` on the `k`-th - diagonal. - k : int, optional - Diagonal in question. The default is 0. Use `k>0` for diagonals - above the main diagonal, and `k<0` for diagonals below the main - diagonal. - - Returns - ------- - out : ndarray - The extracted diagonal or constructed diagonal array. - - See Also - -------- - diagonal : Return specified diagonals. - diagflat : Create a 2-D array with the flattened input as a diagonal. - trace : Sum along diagonals. - triu : Upper triangle of an array. - tril : Lower triangle of an array.
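As a cross-check of `diag`'s dual behavior described above (extract the `k`-th diagonal from 2-D input, construct a 2-D array from 1-D input), here is a pure-Python model on nested lists. `diag_model` is an illustrative name of mine, not NumPy API, and it sketches only the semantics, not the deleted flat-indexing implementation:

```python
def diag_model(v, k=0):
    """Model of np.diag on nested lists: extract the k-th diagonal of a
    2-D input, or build a square 2-D array with a 1-D input on diagonal k."""
    if v and isinstance(v[0], list):             # 2-D input: extract
        rows, cols = len(v), len(v[0])
        return [v[i][i + k] for i in range(rows) if 0 <= i + k < cols]
    n = len(v) + abs(k)                          # 1-D input: construct
    out = [[0] * n for _ in range(n)]
    for i, val in enumerate(v):
        if k >= 0:
            out[i][i + k] = val
        else:
            out[i - k][i] = val
    return out

x = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
assert diag_model(x) == [0, 4, 8]
assert diag_model(x, k=1) == [1, 5]
assert diag_model(x, k=-1) == [3, 7]
assert diag_model([1, 2], 1) == [[0, 1, 0], [0, 0, 2], [0, 0, 0]]
```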
- - Examples - -------- - >>> x = np.arange(9).reshape((3,3)) - >>> x - array([[0, 1, 2], - [3, 4, 5], - [6, 7, 8]]) - - >>> np.diag(x) - array([0, 4, 8]) - >>> np.diag(x, k=1) - array([1, 5]) - >>> np.diag(x, k=-1) - array([3, 7]) - - >>> np.diag(np.diag(x)) - array([[0, 0, 0], - [0, 4, 0], - [0, 0, 8]]) - - """ - v = asarray(v) - s = v.shape - if len(s) == 1: - n = s[0]+abs(k) - res = zeros((n,n), v.dtype) - if k >= 0: - i = k - else: - i = (-k) * n - res[:n-k].flat[i::n+1] = v - return res - elif len(s) == 2: - if k >= s[1]: - return empty(0, dtype=v.dtype) - if v.flags.f_contiguous: - # faster slicing - v, k, s = v.T, -k, s[::-1] - if k >= 0: - i = k - else: - i = (-k) * s[1] - return v[:s[1]-k].flat[i::s[1]+1] - else: - raise ValueError, "Input must be 1- or 2-d." - -def diagflat(v,k=0): - """ - Create a two-dimensional array with the flattened input as a diagonal. - - Parameters - ---------- - v : array_like - Input data, which is flattened and set as the `k`-th - diagonal of the output. - k : int, optional - Diagonal to set; 0, the default, corresponds to the "main" diagonal, - a positive (negative) `k` giving the number of the diagonal above - (below) the main. - - Returns - ------- - out : ndarray - The 2-D output array. - - See Also - -------- - diag : MATLAB work-alike for 1-D and 2-D arrays. - diagonal : Return specified diagonals. - trace : Sum along diagonals. 
- - Examples - -------- - >>> np.diagflat([[1,2], [3,4]]) - array([[1, 0, 0, 0], - [0, 2, 0, 0], - [0, 0, 3, 0], - [0, 0, 0, 4]]) - - >>> np.diagflat([1,2], 1) - array([[0, 1, 0], - [0, 0, 2], - [0, 0, 0]]) - - """ - try: - wrap = v.__array_wrap__ - except AttributeError: - wrap = None - v = asarray(v).ravel() - s = len(v) - n = s + abs(k) - res = zeros((n,n), v.dtype) - if (k>=0): - i = arange(0,n-k) - fi = i+k+i*n - else: - i = arange(0,n+k) - fi = i+(i-k)*n - res.flat[fi] = v - if not wrap: - return res - return wrap(res) - -def tri(N, M=None, k=0, dtype=float): - """ - An array with ones at and below the given diagonal and zeros elsewhere. - - Parameters - ---------- - N : int - Number of rows in the array. - M : int, optional - Number of columns in the array. - By default, `M` is taken equal to `N`. - k : int, optional - The sub-diagonal at and below which the array is filled. - `k` = 0 is the main diagonal, while `k` < 0 is below it, - and `k` > 0 is above. The default is 0. - dtype : dtype, optional - Data type of the returned array. The default is float. - - Returns - ------- - T : ndarray of shape (N, M) - Array with its lower triangle filled with ones and zero elsewhere; - in other words ``T[i,j] == 1`` for ``j <= i + k``, 0 otherwise. - - Examples - -------- - >>> np.tri(3, 5, 2, dtype=int) - array([[1, 1, 1, 0, 0], - [1, 1, 1, 1, 0], - [1, 1, 1, 1, 1]]) - - >>> np.tri(3, 5, -1) - array([[ 0., 0., 0., 0., 0.], - [ 1., 0., 0., 0., 0.], - [ 1., 1., 0., 0., 0.]]) - - """ - if M is None: M = N - m = greater_equal(subtract.outer(arange(N), arange(M)),-k) - return m.astype(dtype) - -def tril(m, k=0): - """ - Lower triangle of an array. - - Return a copy of an array with elements above the `k`-th diagonal zeroed. - - Parameters - ---------- - m : array_like, shape (M, N) - Input array. - k : int, optional - Diagonal above which to zero elements. `k = 0` (the default) is the - main diagonal, `k < 0` is below it and `k > 0` is above.
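The `tri` implementation above reduces to the elementwise predicate `i - j >= -k`, i.e. ones exactly where `j <= i + k`, and `tril`/`triu` simply mask an input with that predicate. A pure-Python sketch (helper names are mine, not NumPy API):

```python
def tri_model(N, M=None, k=0):
    """Model of np.tri: ones at and below the k-th diagonal,
    i.e. T[i][j] == 1 exactly when j <= i + k."""
    if M is None:
        M = N
    return [[1 if j <= i + k else 0 for j in range(M)] for i in range(N)]

def tril_model(m, k=0):
    """Model of np.tril: zero out entries above the k-th diagonal."""
    return [[v if j <= i + k else 0 for j, v in enumerate(row)]
            for i, row in enumerate(m)]

assert tri_model(3, 5, 2) == [[1, 1, 1, 0, 0],
                              [1, 1, 1, 1, 0],
                              [1, 1, 1, 1, 1]]
assert tri_model(3, 5, -1) == [[0, 0, 0, 0, 0],
                               [1, 0, 0, 0, 0],
                               [1, 1, 0, 0, 0]]
expected = [[0, 0, 0], [4, 0, 0], [7, 8, 0], [10, 11, 12]]
assert tril_model([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]], -1) == expected
```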
- - Returns - ------- - L : ndarray, shape (M, N) - Lower triangle of `m`, of same shape and data-type as `m`. - - See Also - -------- - triu : same thing, only for the upper triangle - - Examples - -------- - >>> np.tril([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1) - array([[ 0, 0, 0], - [ 4, 0, 0], - [ 7, 8, 0], - [10, 11, 12]]) - - """ - m = asanyarray(m) - out = multiply(tri(m.shape[0], m.shape[1], k=k, dtype=int),m) - return out - -def triu(m, k=0): - """ - Upper triangle of an array. - - Return a copy of a matrix with the elements below the `k`-th diagonal - zeroed. - - Please refer to the documentation for `tril` for further details. - - See Also - -------- - tril : lower triangle of an array - - Examples - -------- - >>> np.triu([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1) - array([[ 1, 2, 3], - [ 4, 5, 6], - [ 0, 8, 9], - [ 0, 0, 12]]) - - """ - m = asanyarray(m) - out = multiply((1-tri(m.shape[0], m.shape[1], k-1, int)),m) - return out - -# borrowed from John Hunter and matplotlib -def vander(x, N=None): - """ - Generate a Van der Monde matrix. - - The columns of the output matrix are decreasing powers of the input - vector. Specifically, the `i`-th output column is the input vector - raised element-wise to the power of ``N - i - 1``. Such a matrix with - a geometric progression in each row is named for Alexandre-Theophile - Vandermonde. - - Parameters - ---------- - x : array_like - 1-D input array. - N : int, optional - Order of (number of columns in) the output. If `N` is not specified, - a square array is returned (``N = len(x)``). - - Returns - ------- - out : ndarray - Van der Monde matrix of order `N`. The first column is ``x^(N-1)``, - the second ``x^(N-2)`` and so forth. 
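The column rule stated above (the `i`-th column is `x` raised elementwise to `N - i - 1`, so the last column is all ones) can be written directly in pure Python; `vander_model` is an illustrative name of mine:

```python
def vander_model(x, N=None):
    """Model of np.vander: entry [i][j] is x[i] ** (N - 1 - j)."""
    if N is None:
        N = len(x)
    return [[xi ** (N - 1 - j) for j in range(N)] for xi in x]

assert vander_model([1, 2, 3, 5], 3) == [[1, 1, 1],
                                         [4, 2, 1],
                                         [9, 3, 1],
                                         [25, 5, 1]]
assert vander_model([1, 2, 3, 5]) == [[1, 1, 1, 1],
                                      [8, 4, 2, 1],
                                      [27, 9, 3, 1],
                                      [125, 25, 5, 1]]
```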
- - Examples - -------- - >>> x = np.array([1, 2, 3, 5]) - >>> N = 3 - >>> np.vander(x, N) - array([[ 1, 1, 1], - [ 4, 2, 1], - [ 9, 3, 1], - [25, 5, 1]]) - - >>> np.column_stack([x**(N-1-i) for i in range(N)]) - array([[ 1, 1, 1], - [ 4, 2, 1], - [ 9, 3, 1], - [25, 5, 1]]) - - >>> x = np.array([1, 2, 3, 5]) - >>> np.vander(x) - array([[ 1, 1, 1, 1], - [ 8, 4, 2, 1], - [ 27, 9, 3, 1], - [125, 25, 5, 1]]) - - The determinant of a square Vandermonde matrix is the product - of the differences between the values of the input vector: - - >>> np.linalg.det(np.vander(x)) - 48.000000000000043 - >>> (5-3)*(5-2)*(5-1)*(3-2)*(3-1)*(2-1) - 48 - - """ - x = asarray(x) - if N is None: N=len(x) - X = ones( (len(x),N), x.dtype) - for i in range(N-1): - X[:,i] = x**(N-i-1) - return X - - -def histogram2d(x,y, bins=10, range=None, normed=False, weights=None): - """ - Compute the bi-dimensional histogram of two data samples. - - Parameters - ---------- - x : array_like, shape(N,) - A sequence of values to be histogrammed along the first dimension. - y : array_like, shape(M,) - A sequence of values to be histogrammed along the second dimension. - bins : int or [int, int] or array_like or [array, array], optional - The bin specification: - - * If int, the number of bins for the two dimensions (nx=ny=bins). - * If [int, int], the number of bins in each dimension (nx, ny = bins). - * If array_like, the bin edges for the two dimensions (x_edges=y_edges=bins). - * If [array, array], the bin edges in each dimension (x_edges, y_edges = bins). - - range : array_like, shape(2,2), optional - The leftmost and rightmost edges of the bins along each dimension - (if not specified explicitly in the `bins` parameters): - ``[[xmin, xmax], [ymin, ymax]]``. All values outside of this range - will be considered outliers and not tallied in the histogram. - normed : bool, optional - If False, returns the number of samples in each bin. If True, returns - the bin density, i.e. 
the bin count divided by the bin area. - weights : array_like, shape(N,), optional - An array of values ``w_i`` weighing each sample ``(x_i, y_i)``. Weights - are normalized to 1 if `normed` is True. If `normed` is False, the - values of the returned histogram are equal to the sum of the weights - belonging to the samples falling into each bin. - - Returns - ------- - H : ndarray, shape(nx, ny) - The bi-dimensional histogram of samples `x` and `y`. Values in `x` - are histogrammed along the first dimension and values in `y` are - histogrammed along the second dimension. - xedges : ndarray, shape(nx+1,) - The bin edges along the first dimension. - yedges : ndarray, shape(ny+1,) - The bin edges along the second dimension. - - See Also - -------- - histogram: 1D histogram - histogramdd: Multidimensional histogram - - Notes - ----- - When `normed` is True, then the returned histogram is the sample density, - defined such that: - - .. math:: - \\sum_{i=0}^{nx-1} \\sum_{j=0}^{ny-1} H_{i,j} \\Delta x_i \\Delta y_j = 1 - - where `H` is the histogram array and :math:`\\Delta x_i \\Delta y_j` - the area of bin `{i,j}`. - - Please note that the histogram does not follow the Cartesian convention - where `x` values are on the abscissa and `y` values on the ordinate axis. - Rather, `x` is histogrammed along the first dimension of the array - (vertical), and `y` along the second dimension of the array (horizontal). - This ensures compatibility with `histogramdd`.
- - Examples - -------- - >>> x, y = np.random.randn(2, 100) - >>> H, xedges, yedges = np.histogram2d(x, y, bins=(5, 8)) - >>> H.shape, xedges.shape, yedges.shape - ((5, 8), (6,), (9,)) - - We can now use Matplotlib to visualize this 2-dimensional histogram: - - >>> extent = [yedges[0], yedges[-1], xedges[-1], xedges[0]] - >>> import matplotlib.pyplot as plt - >>> plt.imshow(H, extent=extent, interpolation='nearest') - - >>> plt.colorbar() - - >>> plt.show() - - """ - from numpy import histogramdd - - try: - N = len(bins) - except TypeError: - N = 1 - - if N != 1 and N != 2: - xedges = yedges = asarray(bins, float) - bins = [xedges, yedges] - hist, edges = histogramdd([x,y], bins, range, normed, weights) - return hist, edges[0], edges[1] - - -def mask_indices(n,mask_func,k=0): - """ - Return the indices to access (n, n) arrays, given a masking function. - - Assume `mask_func` is a function that, for a square array `a` of size - ``(n, n)`` with a possible offset argument `k`, when called as - ``mask_func(a, k)`` returns a new array with zeros in certain locations - (functions like `triu` or `tril` do precisely this). Then this function - returns the indices where the non-zero values would be located. - - Parameters - ---------- - n : int - The returned indices will be valid to access arrays of shape (n, n). - mask_func : callable - A function whose call signature is similar to that of `triu`, `tril`. - That is, ``mask_func(x, k)`` returns a boolean array, shaped like `x`. - `k` is an optional argument to the function. - k : scalar - An optional argument which is passed through to `mask_func`. Functions - like `triu`, `tril` take a second argument that is interpreted as an - offset. - - Returns - ------- - indices : tuple of arrays. - The `n` arrays of indices corresponding to the locations where - ``mask_func(np.ones((n, n)), k)`` is True. - - See Also - -------- - triu, tril, triu_indices, tril_indices - - Notes - ----- - ..
versionadded:: 1.4.0 - - Examples - -------- - These are the indices that would allow you to access the upper triangular - part of any 3x3 array: - - >>> iu = np.mask_indices(3, np.triu) - - For example, if `a` is a 3x3 array: - - >>> a = np.arange(9).reshape(3, 3) - >>> a - array([[0, 1, 2], - [3, 4, 5], - [6, 7, 8]]) - >>> a[iu] - array([0, 1, 2, 4, 5, 8]) - - An offset can be passed also to the masking function. This gets us the - indices starting on the first diagonal right of the main one: - - >>> iu1 = np.mask_indices(3, np.triu, 1) - - with which we now extract only three elements: - - >>> a[iu1] - array([1, 2, 5]) - - """ - m = ones((n,n),int) - a = mask_func(m,k) - return where(a != 0) - - -def tril_indices(n,k=0): - """ - Return the indices for the lower-triangle of an (n, n) array. - - Parameters - ---------- - n : int - Sets the size of the arrays for which the returned indices will be valid. - k : int, optional - Diagonal offset (see `tril` for details). - - Returns - ------- - inds : tuple of arrays - The indices for the triangle. The returned tuple contains two arrays, - each with the indices along one dimension of the array. - - See also - -------- - triu_indices : similar function, for upper-triangular. - mask_indices : generic function accepting an arbitrary mask function. - tril, triu - - Notes - ----- - .. 
versionadded:: 1.4.0 - - Examples - -------- - Compute two different sets of indices to access 4x4 arrays, one for the - lower triangular part starting at the main diagonal, and one starting two - diagonals further right: - - >>> il1 = np.tril_indices(4) - >>> il2 = np.tril_indices(4, 2) - - Here is how they can be used with a sample array: - - >>> a = np.arange(16).reshape(4, 4) - >>> a - array([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11], - [12, 13, 14, 15]]) - - Both for indexing: - - >>> a[il1] - array([ 0, 4, 5, 8, 9, 10, 12, 13, 14, 15]) - - And for assigning values: - - >>> a[il1] = -1 - >>> a - array([[-1, 1, 2, 3], - [-1, -1, 6, 7], - [-1, -1, -1, 11], - [-1, -1, -1, -1]]) - - These cover almost the whole array (two diagonals right of the main one): - - >>> a[il2] = -10 - >>> a - array([[-10, -10, -10, 3], - [-10, -10, -10, -10], - [-10, -10, -10, -10], - [-10, -10, -10, -10]]) - - """ - return mask_indices(n,tril,k) - - -def tril_indices_from(arr,k=0): - """ - Return the indices for the lower-triangle of an (n, n) array. - - See `tril_indices` for full details. - - Parameters - ---------- - arr : ndarray - The returned indices will be valid for square arrays of the same - shape as `arr`. - k : int, optional - Diagonal offset (see `tril` for details). - - See Also - -------- - tril_indices, tril - - Notes - ----- - .. versionadded:: 1.4.0 - - """ - if not (arr.ndim == 2 and arr.shape[0] == arr.shape[1]): - raise ValueError("input array must be 2-d and square") - return tril_indices(arr.shape[0],k) - - -def triu_indices(n,k=0): - """ - Return the indices for the upper-triangle of an (n, n) array. - - Parameters - ---------- - n : int - Sets the size of the arrays for which the returned indices will be valid. - k : int, optional - Diagonal offset (see `triu` for details). - - Returns - ------- - inds : tuple of arrays - The indices for the triangle. The returned tuple contains two arrays, - each with the indices along one dimension of the array.
- - See Also - -------- - tril_indices : similar function, for lower-triangular. - mask_indices : generic function accepting an arbitrary mask function. - triu, tril - - Notes - ----- - .. versionadded:: 1.4.0 - - Examples - -------- - Compute two different sets of indices to access 4x4 arrays, one for the - upper triangular part starting at the main diagonal, and one starting two - diagonals further right: - - >>> iu1 = np.triu_indices(4) - >>> iu2 = np.triu_indices(4, 2) - - Here is how they can be used with a sample array: - - >>> a = np.arange(16).reshape(4, 4) - >>> a - array([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11], - [12, 13, 14, 15]]) - - Both for indexing: - - >>> a[iu1] - array([ 0, 1, 2, 3, 5, 6, 7, 10, 11, 15]) - - And for assigning values: - - >>> a[iu1] = -1 - >>> a - array([[-1, -1, -1, -1], - [ 4, -1, -1, -1], - [ 8, 9, -1, -1], - [12, 13, 14, -1]]) - - These cover only a small part of the whole array (two diagonals right - of the main one): - - >>> a[iu2] = -10 - >>> a - array([[ -1, -1, -10, -10], - [ 4, -1, -1, -10], - [ 8, 9, -1, -1], - [ 12, 13, 14, -1]]) - - """ - return mask_indices(n,triu,k) - - -def triu_indices_from(arr,k=0): - """ - Return the indices for the upper-triangle of an (n, n) array. - - See `triu_indices` for full details. - - Parameters - ---------- - arr : ndarray - The returned indices will be valid for square arrays of the same - shape as `arr`. - k : int, optional - Diagonal offset (see `triu` for details). - - See Also - -------- - triu_indices, triu - - Notes - ----- - ..
versionadded:: 1.4.0 - - """ - if not (arr.ndim == 2 and arr.shape[0] == arr.shape[1]): - raise ValueError("input array must be 2-d and square") - return triu_indices(arr.shape[0],k) - diff --git a/pythonPackages/numpy/numpy/lib/type_check.py b/pythonPackages/numpy/numpy/lib/type_check.py deleted file mode 100755 index 24701574ae..0000000000 --- a/pythonPackages/numpy/numpy/lib/type_check.py +++ /dev/null @@ -1,648 +0,0 @@ -## Automatically adapted for numpy Sep 19, 2005 by convertcode.py - -__all__ = ['iscomplexobj','isrealobj','imag','iscomplex', - 'isreal','nan_to_num','real','real_if_close', - 'typename','asfarray','mintypecode','asscalar', - 'common_type', 'datetime_data'] - -import numpy.core.numeric as _nx -from numpy.core.numeric import asarray, asanyarray, array, isnan, \ - obj2sctype, zeros -from numpy.core.multiarray import METADATA_DTSTR -from ufunclike import isneginf, isposinf - -_typecodes_by_elsize = 'GDFgdfQqLlIiHhBb?' - -def mintypecode(typechars,typeset='GDFgdf',default='d'): - """ - Return the character for the minimum-size type to which given types can - be safely cast. - - The returned type character must represent the smallest size dtype such - that an array of the returned type can handle the data from an array of - all types in `typechars` (or if `typechars` is an array, then its - dtype.char). - - Parameters - ---------- - typechars : list of str or array_like - If a list of strings, each string should represent a dtype. - If array_like, the character representation of the array dtype is used. - typeset : str or list of str, optional - The set of characters that the returned character is chosen from. - The default set is 'GDFgdf'. - default : str, optional - The default character, this is returned if none of the characters in - `typechars` matches a character in `typeset`. - - Returns - ------- - typechar : str - The character representing the minimum-size type that was found.
- - See Also - -------- - dtype, sctype2char, maximum_sctype - - Examples - -------- - >>> np.mintypecode(['d', 'f', 'S']) - 'd' - >>> x = np.array([1.1, 2-3.j]) - >>> np.mintypecode(x) - 'D' - - >>> np.mintypecode('abceh', default='G') - 'G' - - """ - typecodes = [(type(t) is type('') and t) or asarray(t).dtype.char\ - for t in typechars] - intersection = [t for t in typecodes if t in typeset] - if not intersection: - return default - if 'F' in intersection and 'd' in intersection: - return 'D' - l = [] - for t in intersection: - i = _typecodes_by_elsize.index(t) - l.append((i,t)) - l.sort() - return l[0][1] - -def asfarray(a, dtype=_nx.float_): - """ - Return an array converted to a float type. - - Parameters - ---------- - a : array_like - The input array. - dtype : str or dtype object, optional - Float type code to coerce input array `a`. If `dtype` is one of the - 'int' dtypes, it is replaced with float64. - - Returns - ------- - out : ndarray - The input `a` as a float ndarray. - - Examples - -------- - >>> np.asfarray([2, 3]) - array([ 2., 3.]) - >>> np.asfarray([2, 3], dtype='float') - array([ 2., 3.]) - >>> np.asfarray([2, 3], dtype='int8') - array([ 2., 3.]) - - """ - dtype = _nx.obj2sctype(dtype) - if not issubclass(dtype, _nx.inexact): - dtype = _nx.float_ - return asarray(a,dtype=dtype) - -def real(val): - """ - Return the real part of the elements of the array. - - Parameters - ---------- - val : array_like - Input array. - - Returns - ------- - out : ndarray - Output array. If `val` is real, the type of `val` is used for the - output. If `val` has complex elements, the returned type is float. 
- - See Also - -------- - real_if_close, imag, angle - - Examples - -------- - >>> a = np.array([1+2j, 3+4j, 5+6j]) - >>> a.real - array([ 1., 3., 5.]) - >>> a.real = 9 - >>> a - array([ 9.+2.j, 9.+4.j, 9.+6.j]) - >>> a.real = np.array([9, 8, 7]) - >>> a - array([ 9.+2.j, 8.+4.j, 7.+6.j]) - - """ - return asanyarray(val).real - -def imag(val): - """ - Return the imaginary part of the elements of the array. - - Parameters - ---------- - val : array_like - Input array. - - Returns - ------- - out : ndarray - Output array. If `val` is real, the type of `val` is used for the - output. If `val` has complex elements, the returned type is float. - - See Also - -------- - real, angle, real_if_close - - Examples - -------- - >>> a = np.array([1+2j, 3+4j, 5+6j]) - >>> a.imag - array([ 2., 4., 6.]) - >>> a.imag = np.array([8, 10, 12]) - >>> a - array([ 1. +8.j, 3.+10.j, 5.+12.j]) - - """ - return asanyarray(val).imag - -def iscomplex(x): - """ - Returns a bool array, with True where the input element is complex. - - What is tested is whether the input has a non-zero imaginary part, not if - the input type is complex. - - Parameters - ---------- - x : array_like - Input array. - - Returns - ------- - out : ndarray of bools - Output array. - - See Also - -------- - isreal - iscomplexobj : Return True if x is a complex type or an array of complex - numbers. - - Examples - -------- - >>> np.iscomplex([1+1j, 1+0j, 4.5, 3, 2, 2j]) - array([ True, False, False, False, False, True], dtype=bool) - - """ - ax = asanyarray(x) - if issubclass(ax.dtype.type, _nx.complexfloating): - return ax.imag != 0 - res = zeros(ax.shape, bool) - return +res # convert to array-scalar if needed - -def isreal(x): - """ - Returns a bool array, with True where the input element is real. - - If an element has a complex type with zero imaginary part, the return - value for that element is True. - - Parameters - ---------- - x : array_like - Input array.
- - Returns - ------- - out : ndarray, bool - Boolean array of same shape as `x`. - - See Also - -------- - iscomplex - isrealobj : Return True if x is not a complex type. - - Examples - -------- - >>> np.isreal([1+1j, 1+0j, 4.5, 3, 2, 2j]) - array([False, True, True, True, True, False], dtype=bool) - - """ - return imag(x) == 0 - -def iscomplexobj(x): - """ - Return True if x is a complex type or an array of complex numbers. - - The type of the input is checked, not the value. So even if the input - has an imaginary part equal to zero, `iscomplexobj` evaluates to True - if the data type is complex. - - Parameters - ---------- - x : any - The input can be of any type and shape. - - Returns - ------- - y : bool - The return value, True if `x` is of a complex type. - - See Also - -------- - isrealobj, iscomplex - - Examples - -------- - >>> np.iscomplexobj(1) - False - >>> np.iscomplexobj(1+0j) - True - >>> np.iscomplexobj([3, 1+0j, True]) - True - - """ - return issubclass( asarray(x).dtype.type, _nx.complexfloating) - -def isrealobj(x): - """ - Return True if x is not a complex type and not an array of complex numbers. - - The type of the input is checked, not the value. So even if the input - has an imaginary part equal to zero, `isrealobj` evaluates to False - if the data type is complex. - - Parameters - ---------- - x : any - The input can be of any type and shape. - - Returns - ------- - y : bool - The return value, False if `x` is of a complex type. - - See Also - -------- - iscomplexobj, isreal - - Examples - -------- - >>> np.isrealobj(1) - True - >>> np.isrealobj(1+0j) - False - >>> np.isrealobj([3, 1+0j, True]) - False - - """ - return not issubclass( asarray(x).dtype.type, _nx.complexfloating) - -#----------------------------------------------------------------------------- - -def _getmaxmin(t): - from numpy.core import getlimits - f = getlimits.finfo(t) - return f.max, f.min - -def nan_to_num(x): - """ - Replace nan with zero and inf with finite numbers.
- - Returns an array or scalar replacing Not a Number (NaN) with zero, - (positive) infinity with a very large number and negative infinity - with a very small (or negative) number. - - Parameters - ---------- - x : array_like - Input data. - - Returns - ------- - out : ndarray, float - Array with the same shape as `x` and dtype of the element in `x` with - the greatest precision. NaN is replaced by zero, and infinity - (-infinity) is replaced by the largest (smallest or most negative) - floating point value that fits in the output dtype. All finite numbers - are upcast to the output dtype (default float64). - - See Also - -------- - isinf : Shows which elements are negative or negative infinity. - isneginf : Shows which elements are negative infinity. - isposinf : Shows which elements are positive infinity. - isnan : Shows which elements are Not a Number (NaN). - isfinite : Shows which elements are finite (not NaN, not infinity) - - Notes - ----- - Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic - (IEEE 754). This means that Not a Number is not equivalent to infinity. 
- - - Examples - -------- - >>> np.set_printoptions(precision=8) - >>> x = np.array([np.inf, -np.inf, np.nan, -128, 128]) - >>> np.nan_to_num(x) - array([ 1.79769313e+308, -1.79769313e+308, 0.00000000e+000, - -1.28000000e+002, 1.28000000e+002]) - - """ - try: - t = x.dtype.type - except AttributeError: - t = obj2sctype(type(x)) - if issubclass(t, _nx.complexfloating): - return nan_to_num(x.real) + 1j * nan_to_num(x.imag) - else: - try: - y = x.copy() - except AttributeError: - y = array(x) - if not issubclass(t, _nx.integer): - if not y.shape: - y = array([x]) - scalar = True - else: - scalar = False - are_inf = isposinf(y) - are_neg_inf = isneginf(y) - are_nan = isnan(y) - maxf, minf = _getmaxmin(y.dtype.type) - y[are_nan] = 0 - y[are_inf] = maxf - y[are_neg_inf] = minf - if scalar: - y = y[0] - return y - -#----------------------------------------------------------------------------- - -def real_if_close(a,tol=100): - """ - If complex input returns a real array if complex parts are close to zero. - - "Close to zero" is defined as `tol` * (machine epsilon of the type for - `a`). - - Parameters - ---------- - a : array_like - Input array. - tol : float - Tolerance in machine epsilons for the complex part of the elements - in the array. - - Returns - ------- - out : ndarray - If `a` is real, the type of `a` is used for the output. If `a` - has complex elements, the returned type is float. - - See Also - -------- - real, imag, angle - - Notes - ----- - Machine epsilon varies from machine to machine and between data types - but Python floats on most platforms have a machine epsilon equal to - 2.2204460492503131e-16. You can use 'np.finfo(np.float).eps' to print - out the machine epsilon for floats. 
- - Examples - -------- - >>> np.finfo(np.float).eps - 2.2204460492503131e-16 - - >>> np.real_if_close([2.1 + 4e-14j], tol=1000) - array([ 2.1]) - >>> np.real_if_close([2.1 + 4e-13j], tol=1000) - array([ 2.1 +4.00000000e-13j]) - - """ - a = asanyarray(a) - if not issubclass(a.dtype.type, _nx.complexfloating): - return a - if tol > 1: - from numpy.core import getlimits - f = getlimits.finfo(a.dtype.type) - tol = f.eps * tol - if _nx.allclose(a.imag, 0, atol=tol): - a = a.real - return a - - -def asscalar(a): - """ - Convert an array of size 1 to its scalar equivalent. - - Parameters - ---------- - a : ndarray - Input array of size 1. - - Returns - ------- - out : scalar - Scalar representation of `a`. The input data type is preserved. - - Examples - -------- - >>> np.asscalar(np.array([24])) - 24 - - """ - return a.item() - -#----------------------------------------------------------------------------- - -_namefromtype = {'S1' : 'character', - '?' : 'bool', - 'b' : 'signed char', - 'B' : 'unsigned char', - 'h' : 'short', - 'H' : 'unsigned short', - 'i' : 'integer', - 'I' : 'unsigned integer', - 'l' : 'long integer', - 'L' : 'unsigned long integer', - 'q' : 'long long integer', - 'Q' : 'unsigned long long integer', - 'f' : 'single precision', - 'd' : 'double precision', - 'g' : 'long precision', - 'F' : 'complex single precision', - 'D' : 'complex double precision', - 'G' : 'complex long double precision', - 'S' : 'string', - 'U' : 'unicode', - 'V' : 'void', - 'O' : 'object' - } - -def typename(char): - """ - Return a description for the given data type code. - - Parameters - ---------- - char : str - Data type code. - - Returns - ------- - out : str - Description of the input data type code. - - See Also - -------- - dtype, typecodes - - Examples - -------- - >>> typechars = ['S1', '?', 'B', 'D', 'G', 'F', 'I', 'H', 'L', 'O', 'Q', - ... 'S', 'U', 'V', 'b', 'd', 'g', 'f', 'i', 'h', 'l', 'q'] - >>> for typechar in typechars: - ... 
print typechar, ' : ', np.typename(typechar) - ... - S1 : character - ? : bool - B : unsigned char - D : complex double precision - G : complex long double precision - F : complex single precision - I : unsigned integer - H : unsigned short - L : unsigned long integer - O : object - Q : unsigned long long integer - S : string - U : unicode - V : void - b : signed char - d : double precision - g : long precision - f : single precision - i : integer - h : short - l : long integer - q : long long integer - - """ - return _namefromtype[char] - -#----------------------------------------------------------------------------- - -#determine the "minimum common type" for a group of arrays. -array_type = [[_nx.single, _nx.double, _nx.longdouble], - [_nx.csingle, _nx.cdouble, _nx.clongdouble]] -array_precision = {_nx.single : 0, - _nx.double : 1, - _nx.longdouble : 2, - _nx.csingle : 0, - _nx.cdouble : 1, - _nx.clongdouble : 2} -def common_type(*arrays): - """ - Return a scalar type which is common to the input arrays. - - The return type will always be an inexact (i.e. floating point) scalar - type, even if all the arrays are integer arrays. If one of the inputs is - an integer array, the minimum precision type that is returned is a - 64-bit floating point dtype. - - All input arrays can be safely cast to the returned dtype without loss - of information. - - Parameters - ---------- - array1, array2, ... : ndarrays - Input arrays. - - Returns - ------- - out : data type code - Data type code. 
- - See Also - -------- - dtype, mintypecode - - Examples - -------- - >>> np.common_type(np.arange(2, dtype=np.float32)) - - >>> np.common_type(np.arange(2, dtype=np.float32), np.arange(2)) - - >>> np.common_type(np.arange(4), np.array([45, 6.j]), np.array([45.0])) - - - """ - is_complex = False - precision = 0 - for a in arrays: - t = a.dtype.type - if iscomplexobj(a): - is_complex = True - if issubclass(t, _nx.integer): - p = 1 - else: - p = array_precision.get(t, None) - if p is None: - raise TypeError("can't get common type for non-numeric array") - precision = max(precision, p) - if is_complex: - return array_type[1][precision] - else: - return array_type[0][precision] - -def datetime_data(dtype): - """Return (unit, numerator, denominator, events) from a datetime dtype - """ - try: - import ctypes - except ImportError: - raise RuntimeError, "Cannot access date-time internals without ctypes installed" - - if dtype.kind not in ['m','M']: - raise ValueError, "Not a date-time dtype" - - obj = dtype.metadata[METADATA_DTSTR] - class DATETIMEMETA(ctypes.Structure): - _fields_ = [('base', ctypes.c_int), - ('num', ctypes.c_int), - ('den', ctypes.c_int), - ('events', ctypes.c_int)] - - import sys - if sys.version_info[:2] >= (3, 0): - func = ctypes.pythonapi.PyCapsule_GetPointer - func.argtypes = [ctypes.py_object, ctypes.c_char_p] - func.restype = ctypes.c_void_p - result = func(ctypes.py_object(obj), ctypes.c_char_p(None)) - else: - func = ctypes.pythonapi.PyCObject_AsVoidPtr - func.argtypes = [ctypes.py_object] - func.restype = ctypes.c_void_p - result = func(ctypes.py_object(obj)) - result = ctypes.cast(ctypes.c_void_p(result), ctypes.POINTER(DATETIMEMETA)) - - struct = result[0] - base = struct.base - - # FIXME: This needs to be kept consistent with enum in ndarrayobject.h - from numpy.core.multiarray import DATETIMEUNITS - obj = ctypes.py_object(DATETIMEUNITS) - if sys.version_info[:2] >= (2,7): - result = func(obj, ctypes.c_char_p(None)) - else: - result = 
func(obj) - _unitnum2name = ctypes.cast(ctypes.c_void_p(result), ctypes.POINTER(ctypes.c_char_p)) - - return (_unitnum2name[base], struct.num, struct.den, struct.events) - diff --git a/pythonPackages/numpy/numpy/lib/ufunclike.py b/pythonPackages/numpy/numpy/lib/ufunclike.py deleted file mode 100755 index 8f81fe4b25..0000000000 --- a/pythonPackages/numpy/numpy/lib/ufunclike.py +++ /dev/null @@ -1,216 +0,0 @@ -""" -Module of functions that are like ufuncs in acting on arrays and optionally -storing results in an output array. -""" -__all__ = ['fix', 'isneginf', 'isposinf'] - -import numpy.core.numeric as nx - -def fix(x, y=None): - """ - Round to nearest integer towards zero. - - Round an array of floats element-wise to nearest integer towards zero. - The rounded values are returned as floats. - - Parameters - ---------- - x : array_like - An array of floats to be rounded - y : ndarray, optional - Output array - - Returns - ------- - out : ndarray of floats - The array of rounded numbers - - See Also - -------- - trunc, floor, ceil - around : Round to given number of decimals - - Examples - -------- - >>> np.fix(3.14) - 3.0 - >>> np.fix(3) - 3.0 - >>> np.fix([2.1, 2.9, -2.1, -2.9]) - array([ 2., 2., -2., -2.]) - - """ - x = nx.asanyarray(x) - y1 = nx.floor(x) - y2 = nx.ceil(x) - if y is None: - y = nx.asanyarray(y1) - y[...] = nx.where(x >= 0, y1, y2) - return y - -def isposinf(x, y=None): - """ - Test element-wise for positive infinity, return result as bool array. - - Parameters - ---------- - x : array_like - The input array. - y : array_like, optional - A boolean array with the same shape as `x` to store the result. - - Returns - ------- - y : ndarray - A boolean array with the same dimensions as the input. - If second argument is not supplied then a boolean array is returned - with values True where the corresponding element of the input is - positive infinity and values False where the element of the input is - not positive infinity. 
- - If a second argument is supplied the result is stored there. If the - type of that array is a numeric type the result is represented as zeros - and ones, if the type is boolean then as False and True. - The return value `y` is then a reference to that array. - - See Also - -------- - isinf, isneginf, isfinite, isnan - - Notes - ----- - Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic - (IEEE 754). - - Errors result if the second argument is also supplied when `x` is a - scalar input, or if first and second arguments have different shapes. - - Examples - -------- - >>> np.isposinf(np.PINF) - array(True, dtype=bool) - >>> np.isposinf(np.inf) - array(True, dtype=bool) - >>> np.isposinf(np.NINF) - array(False, dtype=bool) - >>> np.isposinf([-np.inf, 0., np.inf]) - array([False, False, True], dtype=bool) - - >>> x = np.array([-np.inf, 0., np.inf]) - >>> y = np.array([2, 2, 2]) - >>> np.isposinf(x, y) - array([0, 0, 1]) - >>> y - array([0, 0, 1]) - - """ - if y is None: - x = nx.asarray(x) - y = nx.empty(x.shape, dtype=nx.bool_) - nx.logical_and(nx.isinf(x), ~nx.signbit(x), y) - return y - -def isneginf(x, y=None): - """ - Test element-wise for negative infinity, return result as bool array. - - Parameters - ---------- - x : array_like - The input array. - y : array_like, optional - A boolean array with the same shape and type as `x` to store the - result. - - Returns - ------- - y : ndarray - A boolean array with the same dimensions as the input. - If second argument is not supplied then a numpy boolean array is - returned with values True where the corresponding element of the - input is negative infinity and values False where the element of - the input is not negative infinity. - - If a second argument is supplied the result is stored there. If the - type of that array is a numeric type the result is represented as - zeros and ones, if the type is boolean then as False and True. The - return value `y` is then a reference to that array. 
- - See Also - -------- - isinf, isposinf, isnan, isfinite - - Notes - ----- - Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic - (IEEE 754). - - Errors result if the second argument is also supplied when x is a scalar - input, or if first and second arguments have different shapes. - - Examples - -------- - >>> np.isneginf(np.NINF) - array(True, dtype=bool) - >>> np.isneginf(np.inf) - array(False, dtype=bool) - >>> np.isneginf(np.PINF) - array(False, dtype=bool) - >>> np.isneginf([-np.inf, 0., np.inf]) - array([ True, False, False], dtype=bool) - - >>> x = np.array([-np.inf, 0., np.inf]) - >>> y = np.array([2, 2, 2]) - >>> np.isneginf(x, y) - array([1, 0, 0]) - >>> y - array([1, 0, 0]) - - """ - if y is None: - x = nx.asarray(x) - y = nx.empty(x.shape, dtype=nx.bool_) - nx.logical_and(nx.isinf(x), nx.signbit(x), y) - return y - - -_log2 = nx.log(2) -def log2(x, y=None): - """ - Return the base 2 logarithm of the input array, element-wise. - This function is now deprecated, use the np.log2 ufunc instead. - - Parameters - ---------- - x : array_like - Input array. - y : array_like - Optional output array with the same shape as `x`. - - Returns - ------- - y : ndarray - The logarithm to the base 2 of `x` element-wise. - NaNs are returned where `x` is negative. - - See Also - -------- - log, log1p, log10 - - Examples - -------- - >>> np.log2([-1, 2, 4]) - array([ NaN, 1., 2.]) - - """ - import warnings - msg = "numpy.lib.log2 is deprecated, use np.log2 instead." - warnings.warn(msg, DeprecationWarning) - - x = nx.asanyarray(x) - if y is None: - y = nx.log(x) - else: - nx.log(x, y) - y /= _log2 - return y diff --git a/pythonPackages/numpy/numpy/lib/user_array.py b/pythonPackages/numpy/numpy/lib/user_array.py deleted file mode 100755 index 43e9da3f2e..0000000000 --- a/pythonPackages/numpy/numpy/lib/user_array.py +++ /dev/null @@ -1,217 +0,0 @@ -""" -Standard container-class for easy multiple-inheritance. 
-Try to inherit from the ndarray instead of using this class as this is not -complete. -""" - -from numpy.core import array, asarray, absolute, add, subtract, multiply, \ - divide, remainder, power, left_shift, right_shift, bitwise_and, \ - bitwise_or, bitwise_xor, invert, less, less_equal, not_equal, equal, \ - greater, greater_equal, shape, reshape, arange, sin, sqrt, transpose - -class container(object): - def __init__(self, data, dtype=None, copy=True): - self.array = array(data, dtype, copy=copy) - - def __repr__(self): - if len(self.shape) > 0: - return self.__class__.__name__+repr(self.array)[len("array"):] - else: - return self.__class__.__name__+"("+repr(self.array)+")" - - def __array__(self,t=None): - if t: return self.array.astype(t) - return self.array - - # Array as sequence - def __len__(self): return len(self.array) - - def __getitem__(self, index): - return self._rc(self.array[index]) - - def __getslice__(self, i, j): - return self._rc(self.array[i:j]) - - - def __setitem__(self, index, value): - self.array[index] = asarray(value,self.dtype) - def __setslice__(self, i, j, value): - self.array[i:j] = asarray(value,self.dtype) - - def __abs__(self): - return self._rc(absolute(self.array)) - def __neg__(self): - return self._rc(-self.array) - - def __add__(self, other): - return self._rc(self.array+asarray(other)) - __radd__ = __add__ - - def __iadd__(self, other): - add(self.array, other, self.array) - return self - - def __sub__(self, other): - return self._rc(self.array-asarray(other)) - def __rsub__(self, other): - return self._rc(asarray(other)-self.array) - def __isub__(self, other): - subtract(self.array, other, self.array) - return self - - def __mul__(self, other): - return self._rc(multiply(self.array,asarray(other))) - __rmul__ = __mul__ - def __imul__(self, other): - multiply(self.array, other, self.array) - return self - - def __div__(self, other): - return self._rc(divide(self.array,asarray(other))) - def __rdiv__(self, other): - return 
self._rc(divide(asarray(other),self.array)) - def __idiv__(self, other): - divide(self.array, other, self.array) - return self - - def __mod__(self, other): - return self._rc(remainder(self.array, other)) - def __rmod__(self, other): - return self._rc(remainder(other, self.array)) - def __imod__(self, other): - remainder(self.array, other, self.array) - return self - - def __divmod__(self, other): - return (self._rc(divide(self.array,other)), - self._rc(remainder(self.array, other))) - def __rdivmod__(self, other): - return (self._rc(divide(other, self.array)), - self._rc(remainder(other, self.array))) - - def __pow__(self,other): - return self._rc(power(self.array,asarray(other))) - def __rpow__(self,other): - return self._rc(power(asarray(other),self.array)) - def __ipow__(self,other): - power(self.array, other, self.array) - return self - - def __lshift__(self,other): - return self._rc(left_shift(self.array, other)) - def __rshift__(self,other): - return self._rc(right_shift(self.array, other)) - def __rlshift__(self,other): - return self._rc(left_shift(other, self.array)) - def __rrshift__(self,other): - return self._rc(right_shift(other, self.array)) - def __ilshift__(self,other): - left_shift(self.array, other, self.array) - return self - def __irshift__(self,other): - right_shift(self.array, other, self.array) - return self - - def __and__(self, other): - return self._rc(bitwise_and(self.array, other)) - def __rand__(self, other): - return self._rc(bitwise_and(other, self.array)) - def __iand__(self, other): - bitwise_and(self.array, other, self.array) - return self - - def __xor__(self, other): - return self._rc(bitwise_xor(self.array, other)) - def __rxor__(self, other): - return self._rc(bitwise_xor(other, self.array)) - def __ixor__(self, other): - bitwise_xor(self.array, other, self.array) - return self - - def __or__(self, other): - return self._rc(bitwise_or(self.array, other)) - def __ror__(self, other): - return self._rc(bitwise_or(other, 
self.array)) - def __ior__(self, other): - bitwise_or(self.array, other, self.array) - return self - - def __neg__(self): - return self._rc(-self.array) - def __pos__(self): - return self._rc(self.array) - def __abs__(self): - return self._rc(abs(self.array)) - def __invert__(self): - return self._rc(invert(self.array)) - - def _scalarfunc(self, func): - if len(self.shape) == 0: - return func(self[0]) - else: - raise TypeError, "only rank-0 arrays can be converted to Python scalars." - - def __complex__(self): return self._scalarfunc(complex) - def __float__(self): return self._scalarfunc(float) - def __int__(self): return self._scalarfunc(int) - def __long__(self): return self._scalarfunc(long) - def __hex__(self): return self._scalarfunc(hex) - def __oct__(self): return self._scalarfunc(oct) - - def __lt__(self,other): return self._rc(less(self.array,other)) - def __le__(self,other): return self._rc(less_equal(self.array,other)) - def __eq__(self,other): return self._rc(equal(self.array,other)) - def __ne__(self,other): return self._rc(not_equal(self.array,other)) - def __gt__(self,other): return self._rc(greater(self.array,other)) - def __ge__(self,other): return self._rc(greater_equal(self.array,other)) - - def copy(self): return self._rc(self.array.copy()) - - def tostring(self): return self.array.tostring() - - def byteswap(self): return self._rc(self.array.byteswap()) - - def astype(self, typecode): return self._rc(self.array.astype(typecode)) - - def _rc(self, a): - if len(shape(a)) == 0: return a - else: return self.__class__(a) - - def __array_wrap__(self, *args): - return self.__class__(args[0]) - - def __setattr__(self,attr,value): - if attr == 'array': - object.__setattr__(self, attr, value) - return - try: - self.array.__setattr__(attr, value) - except AttributeError: - object.__setattr__(self, attr, value) - - # Only called after other approaches fail. 
- def __getattr__(self,attr): - if (attr == 'array'): - return object.__getattribute__(self, attr) - return self.array.__getattribute__(attr) - -############################################################# -# Test of class container -############################################################# -if __name__ == '__main__': - temp=reshape(arange(10000),(100,100)) - - ua=container(temp) - # new object created begin test - print dir(ua) - print shape(ua),ua.shape # I have changed Numeric.py - - ua_small=ua[:3,:5] - print ua_small - ua_small[0,0]=10 # this did not change ua[0,0], which is not normal behavior - print ua_small[0,0],ua[0,0] - print sin(ua_small)/3.*6.+sqrt(ua_small**2) - print less(ua_small,103),type(less(ua_small,103)) - print type(ua_small*reshape(arange(15),shape(ua_small))) - print reshape(ua_small,(5,3)) - print transpose(ua_small) diff --git a/pythonPackages/numpy/numpy/lib/utils.py b/pythonPackages/numpy/numpy/lib/utils.py deleted file mode 100755 index fb597a7ed3..0000000000 --- a/pythonPackages/numpy/numpy/lib/utils.py +++ /dev/null @@ -1,1146 +0,0 @@ -import os -import sys -import types -import re - -from numpy.core.numerictypes import issubclass_, issubsctype, issubdtype -from numpy.core import product, ndarray, ufunc - -__all__ = ['issubclass_', 'get_numpy_include', 'issubsctype', 'issubdtype', - 'deprecate', 'deprecate_with_doc', 'get_numarray_include', - 'get_include', 'info', 'source', 'who', 'lookfor', 'byte_bounds', - 'may_share_memory', 'safe_eval'] - -def get_include(): - """ - Return the directory that contains the NumPy \\*.h header files. - - Extension modules that need to compile against NumPy should use this - function to locate the appropriate include directory. - - Notes - ----- - When using ``distutils``, for example in ``setup.py``. - :: - - import numpy as np - ... - Extension('extension_name', ... - include_dirs=[np.get_include()]) - ... 
- - """ - import numpy - if numpy.show_config is None: - # running from numpy source directory - d = os.path.join(os.path.dirname(numpy.__file__), 'core', 'include') - else: - # using installed numpy core headers - import numpy.core as core - d = os.path.join(os.path.dirname(core.__file__), 'include') - return d - -def get_numarray_include(type=None): - """ - Return the directory that contains the numarray \\*.h header files. - - Extension modules that need to compile against numarray should use this - function to locate the appropriate include directory. - - Parameters - ---------- - type : any, optional - If `type` is not None, the location of the NumPy headers is returned - as well. - - Returns - ------- - dirs : str or list of str - If `type` is None, `dirs` is a string containing the path to the - numarray headers. - If `type` is not None, `dirs` is a list of strings with first the - path(s) to the numarray headers, followed by the path to the NumPy - headers. - - Notes - ----- - Useful when using ``distutils``, for example in ``setup.py``. - :: - - import numpy as np - ... - Extension('extension_name', ... - include_dirs=[np.get_numarray_include()]) - ... - - """ - from numpy.numarray import get_numarray_include_dirs - include_dirs = get_numarray_include_dirs() - if type is None: - return include_dirs[0] - else: - return include_dirs + [get_include()] - - -if sys.version_info < (2, 4): - # Can't set __name__ in 2.3 - import new - def _set_function_name(func, name): - func = new.function(func.func_code, func.func_globals, - name, func.func_defaults, func.func_closure) - return func -else: - def _set_function_name(func, name): - func.__name__ = name - return func - -class _Deprecate(object): - """ - Decorator class to deprecate old functions. - - Refer to `deprecate` for details. 
- - See Also - -------- - deprecate - - """ - def __init__(self, old_name=None, new_name=None, message=None): - self.old_name = old_name - self.new_name = new_name - self.message = message - - def __call__(self, func, *args, **kwargs): - """ - Decorator call. Refer to ``decorate``. - - """ - old_name = self.old_name - new_name = self.new_name - message = self.message - - import warnings - if old_name is None: - try: - old_name = func.func_name - except AttributeError: - old_name = func.__name__ - if new_name is None: - depdoc = "`%s` is deprecated!" % old_name - else: - depdoc = "`%s` is deprecated, use `%s` instead!" % \ - (old_name, new_name) - - if message is not None: - depdoc += "\n" + message - - def newfunc(*args,**kwds): - """`arrayrange` is deprecated, use `arange` instead!""" - warnings.warn(depdoc, DeprecationWarning) - return func(*args, **kwds) - - newfunc = _set_function_name(newfunc, old_name) - doc = func.__doc__ - if doc is None: - doc = depdoc - else: - doc = '\n\n'.join([depdoc, doc]) - newfunc.__doc__ = doc - try: - d = func.__dict__ - except AttributeError: - pass - else: - newfunc.__dict__.update(d) - return newfunc - -def deprecate(*args, **kwargs): - """ - Issues a DeprecationWarning, adds warning to `old_name`'s - docstring, rebinds ``old_name.__name__`` and returns the new - function object. - - This function may also be used as a decorator. - - Parameters - ---------- - func : function - The function to be deprecated. - old_name : str, optional - The name of the function to be deprecated. Default is None, in which - case the name of `func` is used. - new_name : str, optional - The new name for the function. Default is None, in which case - the deprecation message is that `old_name` is deprecated. If given, - the deprecation message is that `old_name` is deprecated and `new_name` - should be used instead. - message : str, optional - Additional explanation of the deprecation. Displayed in the docstring - after the warning. 
- - Returns - ------- - old_func : function - The deprecated function. - - Examples - -------- - Note that ``olduint`` returns a value after printing Deprecation Warning: - - >>> olduint = np.deprecate(np.uint) - >>> olduint(6) - /usr/lib/python2.5/site-packages/numpy/lib/utils.py:114: - DeprecationWarning: uint32 is deprecated - warnings.warn(str1, DeprecationWarning) - 6 - - """ - # Deprecate may be run as a function or as a decorator - # If run as a function, we initialise the decorator class - # and execute its __call__ method. - - if args: - fn = args[0] - args = args[1:] - - # backward compatibility -- can be removed - # after next release - if 'newname' in kwargs: - kwargs['new_name'] = kwargs.pop('newname') - if 'oldname' in kwargs: - kwargs['old_name'] = kwargs.pop('oldname') - - return _Deprecate(*args, **kwargs)(fn) - else: - return _Deprecate(*args, **kwargs) - -deprecate_with_doc = lambda msg: _Deprecate(message=msg) -get_numpy_include = deprecate(get_include, 'get_numpy_include', 'get_include') - - -#-------------------------------------------- -# Determine if two arrays can share memory -#-------------------------------------------- - -def byte_bounds(a): - """ - Returns pointers to the end-points of an array. - - Parameters - ---------- - a : ndarray - Input array. It must conform to the Python-side of the array interface. - - Returns - ------- - (low, high) : tuple of 2 integers - The first integer is the first byte of the array, the second integer is - just past the last byte of the array. If `a` is not contiguous it - will not use every byte between the (`low`, `high`) values. 
- - Examples - -------- - >>> I = np.eye(2, dtype='f'); I.dtype - dtype('float32') - >>> low, high = np.byte_bounds(I) - >>> high - low == I.size*I.itemsize - True - >>> I = np.eye(2, dtype='G'); I.dtype - dtype('complex192') - >>> low, high = np.byte_bounds(I) - >>> high - low == I.size*I.itemsize - True - - """ - ai = a.__array_interface__ - a_data = ai['data'][0] - astrides = ai['strides'] - ashape = ai['shape'] - nd_a = len(ashape) - bytes_a = int(ai['typestr'][2:]) - - a_low = a_high = a_data - if astrides is None: # contiguous case - a_high += product(ashape, dtype=int)*bytes_a - else: - for shape, stride in zip(ashape, astrides): - if stride < 0: - a_low += (shape-1)*stride - else: - a_high += (shape-1)*stride - a_high += bytes_a - return a_low, a_high - - -def may_share_memory(a, b): - """ - Determine if two arrays can share memory - - The memory-bounds of a and b are computed. If they overlap then - this function returns True. Otherwise, it returns False. - - A return of True does not necessarily mean that the two arrays - share any element. It just means that they *might*. - - Parameters - ---------- - a, b : ndarray - - Returns - ------- - out : bool - - Examples - -------- - >>> np.may_share_memory(np.array([1,2]), np.array([5,8,9])) - False - - """ - a_low, a_high = byte_bounds(a) - b_low, b_high = byte_bounds(b) - if b_low >= a_high or a_low >= b_high: - return False - return True - -#----------------------------------------------------------------------------- -# Function for output and information on the variables used. -#----------------------------------------------------------------------------- - - -def who(vardict=None): - """ - Print the Numpy arrays in the given dictionary. - - If there is no dictionary passed in or `vardict` is None then returns - Numpy arrays in the globals() dictionary (all Numpy arrays in the - namespace). - - Parameters - ---------- - vardict : dict, optional - A dictionary possibly containing ndarrays. 
Default is globals(). - - Returns - ------- - out : None - Returns 'None'. - - Notes - ----- - Prints out the name, shape, bytes and type of all of the ndarrays present - in `vardict`. - - Examples - -------- - >>> a = np.arange(10) - >>> b = np.ones(20) - >>> np.who() - Name Shape Bytes Type - =========================================================== - a 10 40 int32 - b 20 160 float64 - Upper bound on total bytes = 200 - - >>> d = {'x': np.arange(2.0), 'y': np.arange(3.0), 'txt': 'Some str', - ... 'idx':5} - >>> np.who(d) - Name Shape Bytes Type - =========================================================== - y 3 24 float64 - x 2 16 float64 - Upper bound on total bytes = 40 - - """ - if vardict is None: - frame = sys._getframe().f_back - vardict = frame.f_globals - sta = [] - cache = {} - for name in vardict.keys(): - if isinstance(vardict[name],ndarray): - var = vardict[name] - idv = id(var) - if idv in cache.keys(): - namestr = name + " (%s)" % cache[idv] - original=0 - else: - cache[idv] = name - namestr = name - original=1 - shapestr = " x ".join(map(str, var.shape)) - bytestr = str(var.nbytes) - sta.append([namestr, shapestr, bytestr, var.dtype.name, - original]) - - maxname = 0 - maxshape = 0 - maxbyte = 0 - totalbytes = 0 - for k in range(len(sta)): - val = sta[k] - if maxname < len(val[0]): - maxname = len(val[0]) - if maxshape < len(val[1]): - maxshape = len(val[1]) - if maxbyte < len(val[2]): - maxbyte = len(val[2]) - if val[4]: - totalbytes += int(val[2]) - - if len(sta) > 0: - sp1 = max(10,maxname) - sp2 = max(10,maxshape) - sp3 = max(10,maxbyte) - prval = "Name %s Shape %s Bytes %s Type" % (sp1*' ', sp2*' ', sp3*' ') - print prval + "\n" + "="*(len(prval)+5) + "\n" - - for k in range(len(sta)): - val = sta[k] - print "%s %s %s %s %s %s %s" % (val[0], ' '*(sp1-len(val[0])+4), - val[1], ' '*(sp2-len(val[1])+5), - val[2], ' '*(sp3-len(val[2])+5), - val[3]) - print "\nUpper bound on total bytes = %d" % totalbytes - return - 
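
The deleted `byte_bounds`/`may_share_memory` pair above reduces to a simple interval test: compute the byte range each array touches, then check whether the ranges intersect. A sketch of that logic (not part of the patch) in pure modern Python, describing an array by the same fields the real code reads from `a.__array_interface__` — base address, shape, strides in bytes, and itemsize:

```python
# Sketch (not part of the patch): the interval test behind the deleted
# byte_bounds()/may_share_memory().  An "array" here is the tuple
# (data_ptr, shape, strides, itemsize), mirroring __array_interface__.

def byte_span(data, shape, strides, itemsize):
    """Return (low, high): the byte range the array's elements touch."""
    low = high = data
    if strides is None:                      # C-contiguous: one dense block
        n = 1
        for dim in shape:
            n *= dim
        high += n * itemsize
    else:
        for dim, stride in zip(shape, strides):
            if stride < 0:
                low += (dim - 1) * stride    # negative stride extends downward
            else:
                high += (dim - 1) * stride
        high += itemsize                     # last element still occupies itemsize bytes
    return low, high

def may_overlap(a, b):
    """True if the byte ranges of a and b intersect (they *might* share data)."""
    a_lo, a_hi = byte_span(*a)
    b_lo, b_hi = byte_span(*b)
    return not (b_lo >= a_hi or a_lo >= b_hi)

# Two hypothetical views of one buffer at address 1000, 8-byte items:
first = (1000, (5,), None, 8)                # bytes 1000..1040
second = (1032, (6,), None, 8)               # bytes 1032..1080
print(may_overlap(first, second))            # True  -- ranges intersect
print(may_overlap(first, (2000, (3,), None, 8)))  # False -- disjoint buffers
```

As the original docstring notes, an overlap of byte ranges does not prove the arrays share an element (strided views can interleave without touching the same bytes); it only proves they might, which is the conservative answer aliasing checks need.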
-#----------------------------------------------------------------------------- - - -# NOTE: pydoc defines a help function which works simliarly to this -# except it uses a pager to take over the screen. - -# combine name and arguments and split to multiple lines of -# width characters. End lines on a comma and begin argument list -# indented with the rest of the arguments. -def _split_line(name, arguments, width): - firstwidth = len(name) - k = firstwidth - newstr = name - sepstr = ", " - arglist = arguments.split(sepstr) - for argument in arglist: - if k == firstwidth: - addstr = "" - else: - addstr = sepstr - k = k + len(argument) + len(addstr) - if k > width: - k = firstwidth + 1 + len(argument) - newstr = newstr + ",\n" + " "*(firstwidth+2) + argument - else: - newstr = newstr + addstr + argument - return newstr - -_namedict = None -_dictlist = None - -# Traverse all module directories underneath globals -# to see if something is defined -def _makenamedict(module='numpy'): - module = __import__(module, globals(), locals(), []) - thedict = {module.__name__:module.__dict__} - dictlist = [module.__name__] - totraverse = [module.__dict__] - while 1: - if len(totraverse) == 0: - break - thisdict = totraverse.pop(0) - for x in thisdict.keys(): - if isinstance(thisdict[x],types.ModuleType): - modname = thisdict[x].__name__ - if modname not in dictlist: - moddict = thisdict[x].__dict__ - dictlist.append(modname) - totraverse.append(moddict) - thedict[modname] = moddict - return thedict, dictlist - -def info(object=None,maxwidth=76,output=sys.stdout,toplevel='numpy'): - """ - Get help information for a function, class, or module. - - Parameters - ---------- - object : object or str, optional - Input object or name to get information about. If `object` is a - numpy object, its docstring is given. If it is a string, available - modules are searched for matching objects. - If None, information about `info` itself is returned. - maxwidth : int, optional - Printing width. 
- output : file like object, optional - File like object that the output is written to, default is ``stdout``. - The object has to be opened in 'w' or 'a' mode. - toplevel : str, optional - Start search at this level. - - See Also - -------- - source, lookfor - - Notes - ----- - When used interactively with an object, ``np.info(obj)`` is equivalent to - ``help(obj)`` on the Python prompt or ``obj?`` on the IPython prompt. - - Examples - -------- - >>> np.info(np.polyval) # doctest: +SKIP - polyval(p, x) - Evaluate the polynomial p at x. - ... - - When using a string for `object` it is possible to get multiple results. - - >>> np.info('fft') # doctest: +SKIP - *** Found in numpy *** - Core FFT routines - ... - *** Found in numpy.fft *** - fft(a, n=None, axis=-1) - ... - *** Repeat reference found in numpy.fft.fftpack *** - *** Total of 3 references found. *** - - """ - global _namedict, _dictlist - # Local import to speed up numpy's import time. - import pydoc, inspect - - if hasattr(object,'_ppimport_importer') or \ - hasattr(object, '_ppimport_module'): - object = object._ppimport_module - elif hasattr(object, '_ppimport_attr'): - object = object._ppimport_attr - - if object is None: - info(info) - elif isinstance(object, ndarray): - import numpy.numarray as nn - nn.info(object, output=output, numpy=1) - elif isinstance(object, str): - if _namedict is None: - _namedict, _dictlist = _makenamedict(toplevel) - numfound = 0 - objlist = [] - for namestr in _dictlist: - try: - obj = _namedict[namestr][object] - if id(obj) in objlist: - print >> output, "\n *** Repeat reference found in %s *** " % namestr - else: - objlist.append(id(obj)) - print >> output, " *** Found in %s ***" % namestr - info(obj) - print >> output, "-"*maxwidth - numfound += 1 - except KeyError: - pass - if numfound == 0: - print >> output, "Help for %s not found." % object - else: - print >> output, "\n *** Total of %d references found. 
***" % numfound - - elif inspect.isfunction(object): - name = object.func_name - arguments = inspect.formatargspec(*inspect.getargspec(object)) - - if len(name+arguments) > maxwidth: - argstr = _split_line(name, arguments, maxwidth) - else: - argstr = name + arguments - - print >> output, " " + argstr + "\n" - print >> output, inspect.getdoc(object) - - elif inspect.isclass(object): - name = object.__name__ - arguments = "()" - try: - if hasattr(object, '__init__'): - arguments = inspect.formatargspec(*inspect.getargspec(object.__init__.im_func)) - arglist = arguments.split(', ') - if len(arglist) > 1: - arglist[1] = "("+arglist[1] - arguments = ", ".join(arglist[1:]) - except: - pass - - if len(name+arguments) > maxwidth: - argstr = _split_line(name, arguments, maxwidth) - else: - argstr = name + arguments - - print >> output, " " + argstr + "\n" - doc1 = inspect.getdoc(object) - if doc1 is None: - if hasattr(object,'__init__'): - print >> output, inspect.getdoc(object.__init__) - else: - print >> output, inspect.getdoc(object) - - methods = pydoc.allmethods(object) - if methods != []: - print >> output, "\n\nMethods:\n" - for meth in methods: - if meth[0] == '_': - continue - thisobj = getattr(object, meth, None) - if thisobj is not None: - methstr, other = pydoc.splitdoc(inspect.getdoc(thisobj) or "None") - print >> output, " %s -- %s" % (meth, methstr) - - elif type(object) is types.InstanceType: ## check for __call__ method - print >> output, "Instance of class: ", object.__class__.__name__ - print >> output - if hasattr(object, '__call__'): - arguments = inspect.formatargspec(*inspect.getargspec(object.__call__.im_func)) - arglist = arguments.split(', ') - if len(arglist) > 1: - arglist[1] = "("+arglist[1] - arguments = ", ".join(arglist[1:]) - else: - arguments = "()" - - if hasattr(object,'name'): - name = "%s" % object.name - else: - name = "" - if len(name+arguments) > maxwidth: - argstr = _split_line(name, arguments, maxwidth) - else: - argstr = name + 
arguments - - print >> output, " " + argstr + "\n" - doc = inspect.getdoc(object.__call__) - if doc is not None: - print >> output, inspect.getdoc(object.__call__) - print >> output, inspect.getdoc(object) - - else: - print >> output, inspect.getdoc(object) - - elif inspect.ismethod(object): - name = object.__name__ - arguments = inspect.formatargspec(*inspect.getargspec(object.im_func)) - arglist = arguments.split(', ') - if len(arglist) > 1: - arglist[1] = "("+arglist[1] - arguments = ", ".join(arglist[1:]) - else: - arguments = "()" - - if len(name+arguments) > maxwidth: - argstr = _split_line(name, arguments, maxwidth) - else: - argstr = name + arguments - - print >> output, " " + argstr + "\n" - print >> output, inspect.getdoc(object) - - elif hasattr(object, '__doc__'): - print >> output, inspect.getdoc(object) - - -def source(object, output=sys.stdout): - """ - Print or write to a file the source code for a Numpy object. - - The source code is only returned for objects written in Python. Many - functions and classes are defined in C and will therefore not return - useful information. - - Parameters - ---------- - object : numpy object - Input object. This can be any object (function, class, module, ...). - output : file object, optional - If `output` not supplied then source code is printed to screen - (sys.stdout). File object must be created with either write 'w' or - append 'a' modes. - - See Also - -------- - lookfor, info - - Examples - -------- - >>> np.source(np.interp) #doctest: +SKIP - In file: /usr/lib/python2.6/dist-packages/numpy/lib/function_base.py - def interp(x, xp, fp, left=None, right=None): - \"\"\".... (full docstring printed)\"\"\" - if isinstance(x, (float, int, number)): - return compiled_interp([x], xp, fp, left, right).item() - else: - return compiled_interp(x, xp, fp, left, right) - - The source code is only returned for objects written in Python. - - >>> np.source(np.array) #doctest: +SKIP - Not available for this object. 
- - """ - # Local import to speed up numpy's import time. - import inspect - try: - print >> output, "In file: %s\n" % inspect.getsourcefile(object) - print >> output, inspect.getsource(object) - except: - print >> output, "Not available for this object." - - -# Cache for lookfor: {id(module): {name: (docstring, kind, index), ...}...} -# where kind: "func", "class", "module", "object" -# and index: index in breadth-first namespace traversal -_lookfor_caches = {} - -# regexp whose match indicates that the string may contain a function signature -_function_signature_re = re.compile(r"[a-z0-9_]+\(.*[,=].*\)", re.I) - -def lookfor(what, module=None, import_modules=True, regenerate=False, - output=None): - """ - Do a keyword search on docstrings. - - A list of of objects that matched the search is displayed, - sorted by relevance. All given keywords need to be found in the - docstring for it to be returned as a result, but the order does - not matter. - - Parameters - ---------- - what : str - String containing words to look for. - module : str or list, optional - Name of module(s) whose docstrings to go through. - import_modules : bool, optional - Whether to import sub-modules in packages. Default is True. - regenerate : bool, optional - Whether to re-generate the docstring cache. Default is False. - output : file-like, optional - File-like object to write the output to. If omitted, use a pager. - - See Also - -------- - source, info - - Notes - ----- - Relevance is determined only roughly, by checking if the keywords occur - in the function name, at the start of a docstring, etc. - - Examples - -------- - >>> np.lookfor('binary representation') - Search results for 'binary representation' - ------------------------------------------ - numpy.binary_repr - Return the binary representation of the input number as a string. 
- numpy.core.setup_common.long_double_representation - Given a binary dump as given by GNU od -b, look for long double - numpy.base_repr - Return a string representation of a number in the given base system. - ... - - """ - import pydoc - - # Cache - cache = _lookfor_generate_cache(module, import_modules, regenerate) - - # Search - # XXX: maybe using a real stemming search engine would be better? - found = [] - whats = str(what).lower().split() - if not whats: return - - for name, (docstring, kind, index) in cache.iteritems(): - if kind in ('module', 'object'): - # don't show modules or objects - continue - ok = True - doc = docstring.lower() - for w in whats: - if w not in doc: - ok = False - break - if ok: - found.append(name) - - # Relevance sort - # XXX: this is full Harrison-Stetson heuristics now, - # XXX: it probably could be improved - - kind_relevance = {'func': 1000, 'class': 1000, - 'module': -1000, 'object': -1000} - - def relevance(name, docstr, kind, index): - r = 0 - # do the keywords occur within the start of the docstring? - first_doc = "\n".join(docstr.lower().strip().split("\n")[:3]) - r += sum([200 for w in whats if w in first_doc]) - # do the keywords occur in the function name? - r += sum([30 for w in whats if w in name]) - # is the full name long? - r += -len(name) * 5 - # is the object of bad type? - r += kind_relevance.get(kind, -1000) - # is the object deep in namespace hierarchy? 
- r += -name.count('.') * 10 - r += max(-index / 100, -100) - return r - - def relevance_value(a): - return relevance(a, *cache[a]) - found.sort(key=relevance_value) - - # Pretty-print - s = "Search results for '%s'" % (' '.join(whats)) - help_text = [s, "-"*len(s)] - for name in found[::-1]: - doc, kind, ix = cache[name] - - doclines = [line.strip() for line in doc.strip().split("\n") - if line.strip()] - - # find a suitable short description - try: - first_doc = doclines[0].strip() - if _function_signature_re.search(first_doc): - first_doc = doclines[1].strip() - except IndexError: - first_doc = "" - help_text.append("%s\n %s" % (name, first_doc)) - - if not found: - help_text.append("Nothing found.") - - # Output - if output is not None: - output.write("\n".join(help_text)) - elif len(help_text) > 10: - pager = pydoc.getpager() - pager("\n".join(help_text)) - else: - print "\n".join(help_text) - -def _lookfor_generate_cache(module, import_modules, regenerate): - """ - Generate docstring cache for given module. - - Parameters - ---------- - module : str, None, module - Module for which to generate docstring cache - import_modules : bool - Whether to import sub-modules in packages. - regenerate: bool - Re-generate the docstring cache - - Returns - ------- - cache : dict {obj_full_name: (docstring, kind, index), ...} - Docstring cache for the module, either cached one (regenerate=False) - or newly generated. - - """ - global _lookfor_caches - # Local import to speed up numpy's import time. 
- import inspect - from cStringIO import StringIO - - if module is None: - module = "numpy" - - if isinstance(module, str): - try: - __import__(module) - except ImportError: - return {} - module = sys.modules[module] - elif isinstance(module, list) or isinstance(module, tuple): - cache = {} - for mod in module: - cache.update(_lookfor_generate_cache(mod, import_modules, - regenerate)) - return cache - - if id(module) in _lookfor_caches and not regenerate: - return _lookfor_caches[id(module)] - - # walk items and collect docstrings - cache = {} - _lookfor_caches[id(module)] = cache - seen = {} - index = 0 - stack = [(module.__name__, module)] - while stack: - name, item = stack.pop(0) - if id(item) in seen: continue - seen[id(item)] = True - - index += 1 - kind = "object" - - if inspect.ismodule(item): - kind = "module" - try: - _all = item.__all__ - except AttributeError: - _all = None - - # import sub-packages - if import_modules and hasattr(item, '__path__'): - for pth in item.__path__: - for mod_path in os.listdir(pth): - this_py = os.path.join(pth, mod_path) - init_py = os.path.join(pth, mod_path, '__init__.py') - if os.path.isfile(this_py) and mod_path.endswith('.py'): - to_import = mod_path[:-3] - elif os.path.isfile(init_py): - to_import = mod_path - else: - continue - if to_import == '__init__': - continue - - try: - # Catch SystemExit, too - base_exc = BaseException - except NameError: - # Python 2.4 doesn't have BaseException - base_exc = Exception - - try: - old_stdout = sys.stdout - old_stderr = sys.stderr - try: - sys.stdout = StringIO() - sys.stderr = StringIO() - __import__("%s.%s" % (name, to_import)) - finally: - sys.stdout = old_stdout - sys.stderr = old_stderr - except base_exc: - continue - - for n, v in _getmembers(item): - item_name = getattr(v, '__name__', "%s.%s" % (name, n)) - mod_name = getattr(v, '__module__', None) - if '.' 
not in item_name and mod_name: - item_name = "%s.%s" % (mod_name, item_name) - - if not item_name.startswith(name + '.'): - # don't crawl "foreign" objects - if isinstance(v, ufunc): - # ... unless they are ufuncs - pass - else: - continue - elif not (inspect.ismodule(v) or _all is None or n in _all): - continue - stack.append(("%s.%s" % (name, n), v)) - elif inspect.isclass(item): - kind = "class" - for n, v in _getmembers(item): - stack.append(("%s.%s" % (name, n), v)) - elif hasattr(item, "__call__"): - kind = "func" - - doc = inspect.getdoc(item) - if doc is not None: - cache[name] = (doc, kind, index) - - return cache - -def _getmembers(item): - import inspect - try: - members = inspect.getmembers(item) - except AttributeError: - members = [(x, getattr(item, x)) for x in dir(item) - if hasattr(item, x)] - return members - -#----------------------------------------------------------------------------- - -# The following SafeEval class and company are adapted from Michael Spencer's -# ASPN Python Cookbook recipe: -# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/364469 -# Accordingly it is mostly Copyright 2006 by Michael Spencer. -# The recipe, like most of the other ASPN Python Cookbook recipes was made -# available under the Python license. -# http://www.python.org/license - -# It has been modified to: -# * handle unary -/+ -# * support True/False/None -# * raise SyntaxError instead of a custom exception. - -class SafeEval(object): - """ - Object to evaluate constant string expressions. - - This includes strings with lists, dicts and tuples using the abstract - syntax tree created by ``compiler.parse``. - - For an example of usage, see `safe_eval`. 
- - See Also - -------- - safe_eval - - """ - - if sys.version_info[0] < 3: - def visit(self, node, **kw): - cls = node.__class__ - meth = getattr(self,'visit'+cls.__name__,self.default) - return meth(node, **kw) - - def default(self, node, **kw): - raise SyntaxError("Unsupported source construct: %s" - % node.__class__) - - def visitExpression(self, node, **kw): - for child in node.getChildNodes(): - return self.visit(child, **kw) - - def visitConst(self, node, **kw): - return node.value - - def visitDict(self, node,**kw): - return dict([(self.visit(k),self.visit(v)) for k,v in node.items]) - - def visitTuple(self, node, **kw): - return tuple([self.visit(i) for i in node.nodes]) - - def visitList(self, node, **kw): - return [self.visit(i) for i in node.nodes] - - def visitUnaryAdd(self, node, **kw): - return +self.visit(node.getChildNodes()[0]) - - def visitUnarySub(self, node, **kw): - return -self.visit(node.getChildNodes()[0]) - - def visitName(self, node, **kw): - if node.name == 'False': - return False - elif node.name == 'True': - return True - elif node.name == 'None': - return None - else: - raise SyntaxError("Unknown name: %s" % node.name) - else: - - def visit(self, node): - cls = node.__class__ - meth = getattr(self, 'visit' + cls.__name__, self.default) - return meth(node) - - def default(self, node): - raise SyntaxError("Unsupported source construct: %s" - % node.__class__) - - def visitExpression(self, node): - return self.visit(node.body) - - def visitNum(self, node): - return node.n - - def visitStr(self, node): - return node.s - - def visitBytes(self, node): - return node.s - - def visitDict(self, node,**kw): - return dict([(self.visit(k), self.visit(v)) - for k, v in zip(node.keys, node.values)]) - - def visitTuple(self, node): - return tuple([self.visit(i) for i in node.elts]) - - def visitList(self, node): - return [self.visit(i) for i in node.elts] - - def visitUnaryOp(self, node): - import ast - if isinstance(node.op, ast.UAdd): - return 
+self.visit(node.operand) - elif isinstance(node.op, ast.USub): - return -self.visit(node.operand) - else: - raise SyntaxError("Unknown unary op: %r" % node.op) - - def visitName(self, node): - if node.id == 'False': - return False - elif node.id == 'True': - return True - elif node.id == 'None': - return None - else: - raise SyntaxError("Unknown name: %s" % node.id) - -def safe_eval(source): - """ - Protected string evaluation. - - Evaluate a string containing a Python literal expression without - allowing the execution of arbitrary non-literal code. - - Parameters - ---------- - source : str - The string to evaluate. - - Returns - ------- - obj : object - The result of evaluating `source`. - - Raises - ------ - SyntaxError - If the code has invalid Python syntax, or if it contains non-literal - code. - - Examples - -------- - >>> np.safe_eval('1') - 1 - >>> np.safe_eval('[1, 2, 3]') - [1, 2, 3] - >>> np.safe_eval('{"foo": ("bar", 10.0)}') - {'foo': ('bar', 10.0)} - - >>> np.safe_eval('import os') - Traceback (most recent call last): - ... - SyntaxError: invalid syntax - - >>> np.safe_eval('open("/home/user/.ssh/id_dsa").read()') - Traceback (most recent call last): - ... - SyntaxError: Unsupported source construct: compiler.ast.CallFunc - - """ - # Local import to speed up numpy's import time. 
- try: - import compiler - except ImportError: - import ast as compiler - walker = SafeEval() - try: - ast = compiler.parse(source, mode="eval") - except SyntaxError, err: - raise - try: - return walker.visit(ast) - except SyntaxError, err: - raise - -#----------------------------------------------------------------------------- diff --git a/pythonPackages/numpy/numpy/linalg/SConscript b/pythonPackages/numpy/numpy/linalg/SConscript deleted file mode 100755 index 78c4d569b8..0000000000 --- a/pythonPackages/numpy/numpy/linalg/SConscript +++ /dev/null @@ -1,23 +0,0 @@ -# Last Change: Thu Jun 12 06:00 PM 2008 J -# vim:syntax=python -from numscons import GetNumpyEnvironment, scons_get_mathlib -from numscons import CheckF77LAPACK -from numscons import write_info - -env = GetNumpyEnvironment(ARGUMENTS) - -config = env.NumpyConfigure(custom_tests = {'CheckLAPACK' : CheckF77LAPACK}) - -use_lapack = config.CheckLAPACK() - -mlib = scons_get_mathlib(env) -env.AppendUnique(LIBS = mlib) - -config.Finish() -write_info(env) - -sources = ['lapack_litemodule.c'] -if not use_lapack: - sources.extend(['python_xerbla.c', 'zlapack_lite.c', 'dlapack_lite.c', - 'blas_lite.c', 'dlamch.c', 'f2c_lite.c']) -env.NumpyPythonExtension('lapack_lite', source = sources) diff --git a/pythonPackages/numpy/numpy/linalg/SConstruct b/pythonPackages/numpy/numpy/linalg/SConstruct deleted file mode 100755 index a377d8391b..0000000000 --- a/pythonPackages/numpy/numpy/linalg/SConstruct +++ /dev/null @@ -1,2 +0,0 @@ -from numscons import GetInitEnvironment -GetInitEnvironment(ARGUMENTS).DistutilsSConscript('SConscript') diff --git a/pythonPackages/numpy/numpy/linalg/__init__.py b/pythonPackages/numpy/numpy/linalg/__init__.py deleted file mode 100755 index a74a31950f..0000000000 --- a/pythonPackages/numpy/numpy/linalg/__init__.py +++ /dev/null @@ -1,52 +0,0 @@ -""" -Core Linear Algebra Tools -========================= - -=============== ========================================================== -Linear algebra 
basics -========================================================================== -norm Vector or matrix norm -inv Inverse of a square matrix -solve Solve a linear system of equations -det Determinant of a square matrix -slogdet Logarithm of the determinant of a square matrix -lstsq Solve linear least-squares problem -pinv Pseudo-inverse (Moore-Penrose) calculated using a singular - value decomposition -matrix_power Integer power of a square matrix -=============== ========================================================== - -=============== ========================================================== -Eigenvalues and decompositions -========================================================================== -eig Eigenvalues and vectors of a square matrix -eigh Eigenvalues and eigenvectors of a Hermitian matrix -eigvals Eigenvalues of a square matrix -eigvalsh Eigenvalues of a Hermitian matrix -qr QR decomposition of a matrix -svd Singular value decomposition of a matrix -cholesky Cholesky decomposition of a matrix -=============== ========================================================== - -=============== ========================================================== -Tensor operations -========================================================================== -tensorsolve Solve a linear tensor equation -tensorinv Calculate an inverse of a tensor -=============== ========================================================== - -=============== ========================================================== -Exceptions -========================================================================== -LinAlgError Indicates a failed linear algebra operation -=============== ========================================================== - -""" -# To get sub-modules -from info import __doc__ - -from linalg import * - -from numpy.testing import Tester -test = Tester().test -bench = Tester().test diff --git a/pythonPackages/numpy/numpy/linalg/blas_lite.c 
b/pythonPackages/numpy/numpy/linalg/blas_lite.c deleted file mode 100755 index d0de434789..0000000000 --- a/pythonPackages/numpy/numpy/linalg/blas_lite.c +++ /dev/null @@ -1,10660 +0,0 @@ -/* -NOTE: This is generated code. Look in Misc/lapack_lite for information on - remaking this file. -*/ -#include "f2c.h" - -#ifdef HAVE_CONFIG -#include "config.h" -#else -extern doublereal dlamch_(char *); -#define EPSILON dlamch_("Epsilon") -#define SAFEMINIMUM dlamch_("Safe minimum") -#define PRECISION dlamch_("Precision") -#define BASE dlamch_("Base") -#endif - -extern doublereal dlapy2_(doublereal *x, doublereal *y); - - - -/* Table of constant values */ - -static integer c__1 = 1; -static doublecomplex c_b359 = {1.,0.}; - -/* Subroutine */ int daxpy_(integer *n, doublereal *da, doublereal *dx, - integer *incx, doublereal *dy, integer *incy) -{ - /* System generated locals */ - integer i__1; - - /* Local variables */ - static integer i__, m, ix, iy, mp1; - - -/* - constant times a vector plus a vector. - uses unrolled loops for increments equal to one. - jack dongarra, linpack, 3/11/78. - modified 12/3/93, array(1) declarations changed to array(*) -*/ - - - /* Parameter adjustments */ - --dy; - --dx; - - /* Function Body */ - if (*n <= 0) { - return 0; - } - if (*da == 0.) 
{ - return 0; - } - if ((*incx == 1 && *incy == 1)) { - goto L20; - } - -/* - code for unequal increments or equal increments - not equal to 1 -*/ - - ix = 1; - iy = 1; - if (*incx < 0) { - ix = (-(*n) + 1) * *incx + 1; - } - if (*incy < 0) { - iy = (-(*n) + 1) * *incy + 1; - } - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - dy[iy] += *da * dx[ix]; - ix += *incx; - iy += *incy; -/* L10: */ - } - return 0; - -/* - code for both increments equal to 1 - - - clean-up loop -*/ - -L20: - m = *n % 4; - if (m == 0) { - goto L40; - } - i__1 = m; - for (i__ = 1; i__ <= i__1; ++i__) { - dy[i__] += *da * dx[i__]; -/* L30: */ - } - if (*n < 4) { - return 0; - } -L40: - mp1 = m + 1; - i__1 = *n; - for (i__ = mp1; i__ <= i__1; i__ += 4) { - dy[i__] += *da * dx[i__]; - dy[i__ + 1] += *da * dx[i__ + 1]; - dy[i__ + 2] += *da * dx[i__ + 2]; - dy[i__ + 3] += *da * dx[i__ + 3]; -/* L50: */ - } - return 0; -} /* daxpy_ */ - -doublereal dcabs1_(doublecomplex *z__) -{ - /* System generated locals */ - doublereal ret_val; - static doublecomplex equiv_0[1]; - - /* Local variables */ -#define t ((doublereal *)equiv_0) -#define zz (equiv_0) - - zz->r = z__->r, zz->i = z__->i; - ret_val = abs(t[0]) + abs(t[1]); - return ret_val; -} /* dcabs1_ */ - -#undef zz -#undef t - - -/* Subroutine */ int dcopy_(integer *n, doublereal *dx, integer *incx, - doublereal *dy, integer *incy) -{ - /* System generated locals */ - integer i__1; - - /* Local variables */ - static integer i__, m, ix, iy, mp1; - - -/* - copies a vector, x, to a vector, y. - uses unrolled loops for increments equal to one. - jack dongarra, linpack, 3/11/78. 
- modified 12/3/93, array(1) declarations changed to array(*) -*/ - - - /* Parameter adjustments */ - --dy; - --dx; - - /* Function Body */ - if (*n <= 0) { - return 0; - } - if ((*incx == 1 && *incy == 1)) { - goto L20; - } - -/* - code for unequal increments or equal increments - not equal to 1 -*/ - - ix = 1; - iy = 1; - if (*incx < 0) { - ix = (-(*n) + 1) * *incx + 1; - } - if (*incy < 0) { - iy = (-(*n) + 1) * *incy + 1; - } - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - dy[iy] = dx[ix]; - ix += *incx; - iy += *incy; -/* L10: */ - } - return 0; - -/* - code for both increments equal to 1 - - - clean-up loop -*/ - -L20: - m = *n % 7; - if (m == 0) { - goto L40; - } - i__1 = m; - for (i__ = 1; i__ <= i__1; ++i__) { - dy[i__] = dx[i__]; -/* L30: */ - } - if (*n < 7) { - return 0; - } -L40: - mp1 = m + 1; - i__1 = *n; - for (i__ = mp1; i__ <= i__1; i__ += 7) { - dy[i__] = dx[i__]; - dy[i__ + 1] = dx[i__ + 1]; - dy[i__ + 2] = dx[i__ + 2]; - dy[i__ + 3] = dx[i__ + 3]; - dy[i__ + 4] = dx[i__ + 4]; - dy[i__ + 5] = dx[i__ + 5]; - dy[i__ + 6] = dx[i__ + 6]; -/* L50: */ - } - return 0; -} /* dcopy_ */ - -doublereal ddot_(integer *n, doublereal *dx, integer *incx, doublereal *dy, - integer *incy) -{ - /* System generated locals */ - integer i__1; - doublereal ret_val; - - /* Local variables */ - static integer i__, m, ix, iy, mp1; - static doublereal dtemp; - - -/* - forms the dot product of two vectors. - uses unrolled loops for increments equal to one. - jack dongarra, linpack, 3/11/78. 
- modified 12/3/93, array(1) declarations changed to array(*) -*/ - - - /* Parameter adjustments */ - --dy; - --dx; - - /* Function Body */ - ret_val = 0.; - dtemp = 0.; - if (*n <= 0) { - return ret_val; - } - if ((*incx == 1 && *incy == 1)) { - goto L20; - } - -/* - code for unequal increments or equal increments - not equal to 1 -*/ - - ix = 1; - iy = 1; - if (*incx < 0) { - ix = (-(*n) + 1) * *incx + 1; - } - if (*incy < 0) { - iy = (-(*n) + 1) * *incy + 1; - } - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - dtemp += dx[ix] * dy[iy]; - ix += *incx; - iy += *incy; -/* L10: */ - } - ret_val = dtemp; - return ret_val; - -/* - code for both increments equal to 1 - - - clean-up loop -*/ - -L20: - m = *n % 5; - if (m == 0) { - goto L40; - } - i__1 = m; - for (i__ = 1; i__ <= i__1; ++i__) { - dtemp += dx[i__] * dy[i__]; -/* L30: */ - } - if (*n < 5) { - goto L60; - } -L40: - mp1 = m + 1; - i__1 = *n; - for (i__ = mp1; i__ <= i__1; i__ += 5) { - dtemp = dtemp + dx[i__] * dy[i__] + dx[i__ + 1] * dy[i__ + 1] + dx[ - i__ + 2] * dy[i__ + 2] + dx[i__ + 3] * dy[i__ + 3] + dx[i__ + - 4] * dy[i__ + 4]; -/* L50: */ - } -L60: - ret_val = dtemp; - return ret_val; -} /* ddot_ */ - -/* Subroutine */ int dgemm_(char *transa, char *transb, integer *m, integer * - n, integer *k, doublereal *alpha, doublereal *a, integer *lda, - doublereal *b, integer *ldb, doublereal *beta, doublereal *c__, - integer *ldc) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, c_dim1, c_offset, i__1, i__2, - i__3; - - /* Local variables */ - static integer i__, j, l, info; - static logical nota, notb; - static doublereal temp; - static integer ncola; - extern logical lsame_(char *, char *); - static integer nrowa, nrowb; - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - Purpose - ======= - - DGEMM performs one of the matrix-matrix operations - - C := alpha*op( A )*op( B ) + beta*C, - - where op( X ) is one of - - op( X ) = X or op( X ) = X', - - alpha 
and beta are scalars, and A, B and C are matrices, with op( A ) - an m by k matrix, op( B ) a k by n matrix and C an m by n matrix. - - Parameters - ========== - - TRANSA - CHARACTER*1. - On entry, TRANSA specifies the form of op( A ) to be used in - the matrix multiplication as follows: - - TRANSA = 'N' or 'n', op( A ) = A. - - TRANSA = 'T' or 't', op( A ) = A'. - - TRANSA = 'C' or 'c', op( A ) = A'. - - Unchanged on exit. - - TRANSB - CHARACTER*1. - On entry, TRANSB specifies the form of op( B ) to be used in - the matrix multiplication as follows: - - TRANSB = 'N' or 'n', op( B ) = B. - - TRANSB = 'T' or 't', op( B ) = B'. - - TRANSB = 'C' or 'c', op( B ) = B'. - - Unchanged on exit. - - M - INTEGER. - On entry, M specifies the number of rows of the matrix - op( A ) and of the matrix C. M must be at least zero. - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the number of columns of the matrix - op( B ) and the number of columns of the matrix C. N must be - at least zero. - Unchanged on exit. - - K - INTEGER. - On entry, K specifies the number of columns of the matrix - op( A ) and the number of rows of the matrix op( B ). K must - be at least zero. - Unchanged on exit. - - ALPHA - DOUBLE PRECISION. - On entry, ALPHA specifies the scalar alpha. - Unchanged on exit. - - A - DOUBLE PRECISION array of DIMENSION ( LDA, ka ), where ka is - k when TRANSA = 'N' or 'n', and is m otherwise. - Before entry with TRANSA = 'N' or 'n', the leading m by k - part of the array A must contain the matrix A, otherwise - the leading k by m part of the array A must contain the - matrix A. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. When TRANSA = 'N' or 'n' then - LDA must be at least max( 1, m ), otherwise LDA must be at - least max( 1, k ). - Unchanged on exit. - - B - DOUBLE PRECISION array of DIMENSION ( LDB, kb ), where kb is - n when TRANSB = 'N' or 'n', and is k otherwise. 
- Before entry with TRANSB = 'N' or 'n', the leading k by n - part of the array B must contain the matrix B, otherwise - the leading n by k part of the array B must contain the - matrix B. - Unchanged on exit. - - LDB - INTEGER. - On entry, LDB specifies the first dimension of B as declared - in the calling (sub) program. When TRANSB = 'N' or 'n' then - LDB must be at least max( 1, k ), otherwise LDB must be at - least max( 1, n ). - Unchanged on exit. - - BETA - DOUBLE PRECISION. - On entry, BETA specifies the scalar beta. When BETA is - supplied as zero then C need not be set on input. - Unchanged on exit. - - C - DOUBLE PRECISION array of DIMENSION ( LDC, n ). - Before entry, the leading m by n part of the array C must - contain the matrix C, except when beta is zero, in which - case C need not be set on entry. - On exit, the array C is overwritten by the m by n matrix - ( alpha*op( A )*op( B ) + beta*C ). - - LDC - INTEGER. - On entry, LDC specifies the first dimension of C as declared - in the calling (sub) program. LDC must be at least - max( 1, m ). - Unchanged on exit. - - - Level 3 Blas routine. - - -- Written on 8-February-1989. - Jack Dongarra, Argonne National Laboratory. - Iain Duff, AERE Harwell. - Jeremy Du Croz, Numerical Algorithms Group Ltd. - Sven Hammarling, Numerical Algorithms Group Ltd. - - - Set NOTA and NOTB as true if A and B respectively are not - transposed and set NROWA, NCOLA and NROWB as the number of rows - and columns of A and the number of rows of B respectively. 
-*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - - /* Function Body */ - nota = lsame_(transa, "N"); - notb = lsame_(transb, "N"); - if (nota) { - nrowa = *m; - ncola = *k; - } else { - nrowa = *k; - ncola = *m; - } - if (notb) { - nrowb = *k; - } else { - nrowb = *n; - } - -/* Test the input parameters. */ - - info = 0; - if (((! nota && ! lsame_(transa, "C")) && ! lsame_( - transa, "T"))) { - info = 1; - } else if (((! notb && ! lsame_(transb, "C")) && ! - lsame_(transb, "T"))) { - info = 2; - } else if (*m < 0) { - info = 3; - } else if (*n < 0) { - info = 4; - } else if (*k < 0) { - info = 5; - } else if (*lda < max(1,nrowa)) { - info = 8; - } else if (*ldb < max(1,nrowb)) { - info = 10; - } else if (*ldc < max(1,*m)) { - info = 13; - } - if (info != 0) { - xerbla_("DGEMM ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*m == 0 || *n == 0 || ((*alpha == 0. || *k == 0) && *beta == 1.)) { - return 0; - } - -/* And if alpha.eq.zero. */ - - if (*alpha == 0.) { - if (*beta == 0.) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = 0.; -/* L10: */ - } -/* L20: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = *beta * c__[i__ + j * c_dim1]; -/* L30: */ - } -/* L40: */ - } - } - return 0; - } - -/* Start the operations. */ - - if (notb) { - if (nota) { - -/* Form C := alpha*A*B + beta*C. */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (*beta == 0.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = 0.; -/* L50: */ - } - } else if (*beta != 1.) 
{ - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = *beta * c__[i__ + j * c_dim1]; -/* L60: */ - } - } - i__2 = *k; - for (l = 1; l <= i__2; ++l) { - if (b[l + j * b_dim1] != 0.) { - temp = *alpha * b[l + j * b_dim1]; - i__3 = *m; - for (i__ = 1; i__ <= i__3; ++i__) { - c__[i__ + j * c_dim1] += temp * a[i__ + l * - a_dim1]; -/* L70: */ - } - } -/* L80: */ - } -/* L90: */ - } - } else { - -/* Form C := alpha*A'*B + beta*C */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - temp = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - temp += a[l + i__ * a_dim1] * b[l + j * b_dim1]; -/* L100: */ - } - if (*beta == 0.) { - c__[i__ + j * c_dim1] = *alpha * temp; - } else { - c__[i__ + j * c_dim1] = *alpha * temp + *beta * c__[ - i__ + j * c_dim1]; - } -/* L110: */ - } -/* L120: */ - } - } - } else { - if (nota) { - -/* Form C := alpha*A*B' + beta*C */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (*beta == 0.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = 0.; -/* L130: */ - } - } else if (*beta != 1.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = *beta * c__[i__ + j * c_dim1]; -/* L140: */ - } - } - i__2 = *k; - for (l = 1; l <= i__2; ++l) { - if (b[j + l * b_dim1] != 0.) { - temp = *alpha * b[j + l * b_dim1]; - i__3 = *m; - for (i__ = 1; i__ <= i__3; ++i__) { - c__[i__ + j * c_dim1] += temp * a[i__ + l * - a_dim1]; -/* L150: */ - } - } -/* L160: */ - } -/* L170: */ - } - } else { - -/* Form C := alpha*A'*B' + beta*C */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - temp = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - temp += a[l + i__ * a_dim1] * b[j + l * b_dim1]; -/* L180: */ - } - if (*beta == 0.) 
{ - c__[i__ + j * c_dim1] = *alpha * temp; - } else { - c__[i__ + j * c_dim1] = *alpha * temp + *beta * c__[ - i__ + j * c_dim1]; - } -/* L190: */ - } -/* L200: */ - } - } - } - - return 0; - -/* End of DGEMM . */ - -} /* dgemm_ */ - -/* Subroutine */ int dgemv_(char *trans, integer *m, integer *n, doublereal * - alpha, doublereal *a, integer *lda, doublereal *x, integer *incx, - doublereal *beta, doublereal *y, integer *incy) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2; - - /* Local variables */ - static integer i__, j, ix, iy, jx, jy, kx, ky, info; - static doublereal temp; - static integer lenx, leny; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - Purpose - ======= - - DGEMV performs one of the matrix-vector operations - - y := alpha*A*x + beta*y, or y := alpha*A'*x + beta*y, - - where alpha and beta are scalars, x and y are vectors and A is an - m by n matrix. - - Parameters - ========== - - TRANS - CHARACTER*1. - On entry, TRANS specifies the operation to be performed as - follows: - - TRANS = 'N' or 'n' y := alpha*A*x + beta*y. - - TRANS = 'T' or 't' y := alpha*A'*x + beta*y. - - TRANS = 'C' or 'c' y := alpha*A'*x + beta*y. - - Unchanged on exit. - - M - INTEGER. - On entry, M specifies the number of rows of the matrix A. - M must be at least zero. - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the number of columns of the matrix A. - N must be at least zero. - Unchanged on exit. - - ALPHA - DOUBLE PRECISION. - On entry, ALPHA specifies the scalar alpha. - Unchanged on exit. - - A - DOUBLE PRECISION array of DIMENSION ( LDA, n ). - Before entry, the leading m by n part of the array A must - contain the matrix of coefficients. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. LDA must be at least - max( 1, m ). - Unchanged on exit. 
- - X - DOUBLE PRECISION array of DIMENSION at least - ( 1 + ( n - 1 )*abs( INCX ) ) when TRANS = 'N' or 'n' - and at least - ( 1 + ( m - 1 )*abs( INCX ) ) otherwise. - Before entry, the incremented array X must contain the - vector x. - Unchanged on exit. - - INCX - INTEGER. - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. - Unchanged on exit. - - BETA - DOUBLE PRECISION. - On entry, BETA specifies the scalar beta. When BETA is - supplied as zero then Y need not be set on input. - Unchanged on exit. - - Y - DOUBLE PRECISION array of DIMENSION at least - ( 1 + ( m - 1 )*abs( INCY ) ) when TRANS = 'N' or 'n' - and at least - ( 1 + ( n - 1 )*abs( INCY ) ) otherwise. - Before entry with BETA non-zero, the incremented array Y - must contain the vector y. On exit, Y is overwritten by the - updated vector y. - - INCY - INTEGER. - On entry, INCY specifies the increment for the elements of - Y. INCY must not be zero. - Unchanged on exit. - - - Level 2 Blas routine. - - -- Written on 22-October-1986. - Jack Dongarra, Argonne National Lab. - Jeremy Du Croz, Nag Central Office. - Sven Hammarling, Nag Central Office. - Richard Hanson, Sandia National Labs. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --x; - --y; - - /* Function Body */ - info = 0; - if (((! lsame_(trans, "N") && ! lsame_(trans, "T")) && ! lsame_(trans, "C"))) { - info = 1; - } else if (*m < 0) { - info = 2; - } else if (*n < 0) { - info = 3; - } else if (*lda < max(1,*m)) { - info = 6; - } else if (*incx == 0) { - info = 8; - } else if (*incy == 0) { - info = 11; - } - if (info != 0) { - xerbla_("DGEMV ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*m == 0 || *n == 0 || (*alpha == 0. && *beta == 1.)) { - return 0; - } - -/* - Set LENX and LENY, the lengths of the vectors x and y, and set - up the start points in X and Y. 
-*/ - - if (lsame_(trans, "N")) { - lenx = *n; - leny = *m; - } else { - lenx = *m; - leny = *n; - } - if (*incx > 0) { - kx = 1; - } else { - kx = 1 - (lenx - 1) * *incx; - } - if (*incy > 0) { - ky = 1; - } else { - ky = 1 - (leny - 1) * *incy; - } - -/* - Start the operations. In this version the elements of A are - accessed sequentially with one pass through A. - - First form y := beta*y. -*/ - - if (*beta != 1.) { - if (*incy == 1) { - if (*beta == 0.) { - i__1 = leny; - for (i__ = 1; i__ <= i__1; ++i__) { - y[i__] = 0.; -/* L10: */ - } - } else { - i__1 = leny; - for (i__ = 1; i__ <= i__1; ++i__) { - y[i__] = *beta * y[i__]; -/* L20: */ - } - } - } else { - iy = ky; - if (*beta == 0.) { - i__1 = leny; - for (i__ = 1; i__ <= i__1; ++i__) { - y[iy] = 0.; - iy += *incy; -/* L30: */ - } - } else { - i__1 = leny; - for (i__ = 1; i__ <= i__1; ++i__) { - y[iy] = *beta * y[iy]; - iy += *incy; -/* L40: */ - } - } - } - } - if (*alpha == 0.) { - return 0; - } - if (lsame_(trans, "N")) { - -/* Form y := alpha*A*x + y. */ - - jx = kx; - if (*incy == 1) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (x[jx] != 0.) { - temp = *alpha * x[jx]; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - y[i__] += temp * a[i__ + j * a_dim1]; -/* L50: */ - } - } - jx += *incx; -/* L60: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (x[jx] != 0.) { - temp = *alpha * x[jx]; - iy = ky; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - y[iy] += temp * a[i__ + j * a_dim1]; - iy += *incy; -/* L70: */ - } - } - jx += *incx; -/* L80: */ - } - } - } else { - -/* Form y := alpha*A'*x + y. 
*/ - - jy = ky; - if (*incx == 1) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - temp = 0.; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - temp += a[i__ + j * a_dim1] * x[i__]; -/* L90: */ - } - y[jy] += *alpha * temp; - jy += *incy; -/* L100: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - temp = 0.; - ix = kx; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - temp += a[i__ + j * a_dim1] * x[ix]; - ix += *incx; -/* L110: */ - } - y[jy] += *alpha * temp; - jy += *incy; -/* L120: */ - } - } - } - - return 0; - -/* End of DGEMV . */ - -} /* dgemv_ */ - -/* Subroutine */ int dger_(integer *m, integer *n, doublereal *alpha, - doublereal *x, integer *incx, doublereal *y, integer *incy, - doublereal *a, integer *lda) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2; - - /* Local variables */ - static integer i__, j, ix, jy, kx, info; - static doublereal temp; - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - Purpose - ======= - - DGER performs the rank 1 operation - - A := alpha*x*y' + A, - - where alpha is a scalar, x is an m element vector, y is an n element - vector and A is an m by n matrix. - - Parameters - ========== - - M - INTEGER. - On entry, M specifies the number of rows of the matrix A. - M must be at least zero. - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the number of columns of the matrix A. - N must be at least zero. - Unchanged on exit. - - ALPHA - DOUBLE PRECISION. - On entry, ALPHA specifies the scalar alpha. - Unchanged on exit. - - X - DOUBLE PRECISION array of dimension at least - ( 1 + ( m - 1 )*abs( INCX ) ). - Before entry, the incremented array X must contain the m - element vector x. - Unchanged on exit. - - INCX - INTEGER. - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. - Unchanged on exit. - - Y - DOUBLE PRECISION array of dimension at least - ( 1 + ( n - 1 )*abs( INCY ) ). 
- Before entry, the incremented array Y must contain the n - element vector y. - Unchanged on exit. - - INCY - INTEGER. - On entry, INCY specifies the increment for the elements of - Y. INCY must not be zero. - Unchanged on exit. - - A - DOUBLE PRECISION array of DIMENSION ( LDA, n ). - Before entry, the leading m by n part of the array A must - contain the matrix of coefficients. On exit, A is - overwritten by the updated matrix. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. LDA must be at least - max( 1, m ). - Unchanged on exit. - - - Level 2 Blas routine. - - -- Written on 22-October-1986. - Jack Dongarra, Argonne National Lab. - Jeremy Du Croz, Nag Central Office. - Sven Hammarling, Nag Central Office. - Richard Hanson, Sandia National Labs. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --x; - --y; - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - - /* Function Body */ - info = 0; - if (*m < 0) { - info = 1; - } else if (*n < 0) { - info = 2; - } else if (*incx == 0) { - info = 5; - } else if (*incy == 0) { - info = 7; - } else if (*lda < max(1,*m)) { - info = 9; - } - if (info != 0) { - xerbla_("DGER ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*m == 0 || *n == 0 || *alpha == 0.) { - return 0; - } - -/* - Start the operations. In this version the elements of A are - accessed sequentially with one pass through A. -*/ - - if (*incy > 0) { - jy = 1; - } else { - jy = 1 - (*n - 1) * *incy; - } - if (*incx == 1) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (y[jy] != 0.) { - temp = *alpha * y[jy]; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] += x[i__] * temp; -/* L10: */ - } - } - jy += *incy; -/* L20: */ - } - } else { - if (*incx > 0) { - kx = 1; - } else { - kx = 1 - (*m - 1) * *incx; - } - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (y[jy] != 0.) 
{ - temp = *alpha * y[jy]; - ix = kx; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] += x[ix] * temp; - ix += *incx; -/* L30: */ - } - } - jy += *incy; -/* L40: */ - } - } - - return 0; - -/* End of DGER . */ - -} /* dger_ */ - -doublereal dnrm2_(integer *n, doublereal *x, integer *incx) -{ - /* System generated locals */ - integer i__1, i__2; - doublereal ret_val, d__1; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static integer ix; - static doublereal ssq, norm, scale, absxi; - - -/* - DNRM2 returns the euclidean norm of a vector via the function - name, so that - - DNRM2 := sqrt( x'*x ) - - - -- This version written on 25-October-1982. - Modified on 14-October-1993 to inline the call to DLASSQ. - Sven Hammarling, Nag Ltd. -*/ - - - /* Parameter adjustments */ - --x; - - /* Function Body */ - if (*n < 1 || *incx < 1) { - norm = 0.; - } else if (*n == 1) { - norm = abs(x[1]); - } else { - scale = 0.; - ssq = 1.; -/* - The following loop is equivalent to this call to the LAPACK - auxiliary routine: - CALL DLASSQ( N, X, INCX, SCALE, SSQ ) -*/ - - i__1 = (*n - 1) * *incx + 1; - i__2 = *incx; - for (ix = 1; i__2 < 0 ? ix >= i__1 : ix <= i__1; ix += i__2) { - if (x[ix] != 0.) { - absxi = (d__1 = x[ix], abs(d__1)); - if (scale < absxi) { -/* Computing 2nd power */ - d__1 = scale / absxi; - ssq = ssq * (d__1 * d__1) + 1.; - scale = absxi; - } else { -/* Computing 2nd power */ - d__1 = absxi / scale; - ssq += d__1 * d__1; - } - } -/* L10: */ - } - norm = scale * sqrt(ssq); - } - - ret_val = norm; - return ret_val; - -/* End of DNRM2. */ - -} /* dnrm2_ */ - -/* Subroutine */ int drot_(integer *n, doublereal *dx, integer *incx, - doublereal *dy, integer *incy, doublereal *c__, doublereal *s) -{ - /* System generated locals */ - integer i__1; - - /* Local variables */ - static integer i__, ix, iy; - static doublereal dtemp; - - -/* - applies a plane rotation. - jack dongarra, linpack, 3/11/78. 
- modified 12/3/93, array(1) declarations changed to array(*) -*/ - - - /* Parameter adjustments */ - --dy; - --dx; - - /* Function Body */ - if (*n <= 0) { - return 0; - } - if ((*incx == 1 && *incy == 1)) { - goto L20; - } - -/* - code for unequal increments or equal increments not equal - to 1 -*/ - - ix = 1; - iy = 1; - if (*incx < 0) { - ix = (-(*n) + 1) * *incx + 1; - } - if (*incy < 0) { - iy = (-(*n) + 1) * *incy + 1; - } - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - dtemp = *c__ * dx[ix] + *s * dy[iy]; - dy[iy] = *c__ * dy[iy] - *s * dx[ix]; - dx[ix] = dtemp; - ix += *incx; - iy += *incy; -/* L10: */ - } - return 0; - -/* code for both increments equal to 1 */ - -L20: - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - dtemp = *c__ * dx[i__] + *s * dy[i__]; - dy[i__] = *c__ * dy[i__] - *s * dx[i__]; - dx[i__] = dtemp; -/* L30: */ - } - return 0; -} /* drot_ */ - -/* Subroutine */ int dscal_(integer *n, doublereal *da, doublereal *dx, - integer *incx) -{ - /* System generated locals */ - integer i__1, i__2; - - /* Local variables */ - static integer i__, m, mp1, nincx; - - -/* - scales a vector by a constant. - uses unrolled loops for increment equal to one. - jack dongarra, linpack, 3/11/78. - modified 3/93 to return if incx .le. 0. - modified 12/3/93, array(1) declarations changed to array(*) -*/ - - - /* Parameter adjustments */ - --dx; - - /* Function Body */ - if (*n <= 0 || *incx <= 0) { - return 0; - } - if (*incx == 1) { - goto L20; - } - -/* code for increment not equal to 1 */ - - nincx = *n * *incx; - i__1 = nincx; - i__2 = *incx; - for (i__ = 1; i__2 < 0 ? 
i__ >= i__1 : i__ <= i__1; i__ += i__2) { - dx[i__] = *da * dx[i__]; -/* L10: */ - } - return 0; - -/* - code for increment equal to 1 - - - clean-up loop -*/ - -L20: - m = *n % 5; - if (m == 0) { - goto L40; - } - i__2 = m; - for (i__ = 1; i__ <= i__2; ++i__) { - dx[i__] = *da * dx[i__]; -/* L30: */ - } - if (*n < 5) { - return 0; - } -L40: - mp1 = m + 1; - i__2 = *n; - for (i__ = mp1; i__ <= i__2; i__ += 5) { - dx[i__] = *da * dx[i__]; - dx[i__ + 1] = *da * dx[i__ + 1]; - dx[i__ + 2] = *da * dx[i__ + 2]; - dx[i__ + 3] = *da * dx[i__ + 3]; - dx[i__ + 4] = *da * dx[i__ + 4]; -/* L50: */ - } - return 0; -} /* dscal_ */ - -/* Subroutine */ int dswap_(integer *n, doublereal *dx, integer *incx, - doublereal *dy, integer *incy) -{ - /* System generated locals */ - integer i__1; - - /* Local variables */ - static integer i__, m, ix, iy, mp1; - static doublereal dtemp; - - -/* - interchanges two vectors. - uses unrolled loops for increments equal one. - jack dongarra, linpack, 3/11/78. - modified 12/3/93, array(1) declarations changed to array(*) -*/ - - - /* Parameter adjustments */ - --dy; - --dx; - - /* Function Body */ - if (*n <= 0) { - return 0; - } - if ((*incx == 1 && *incy == 1)) { - goto L20; - } - -/* - code for unequal increments or equal increments not equal - to 1 -*/ - - ix = 1; - iy = 1; - if (*incx < 0) { - ix = (-(*n) + 1) * *incx + 1; - } - if (*incy < 0) { - iy = (-(*n) + 1) * *incy + 1; - } - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - dtemp = dx[ix]; - dx[ix] = dy[iy]; - dy[iy] = dtemp; - ix += *incx; - iy += *incy; -/* L10: */ - } - return 0; - -/* - code for both increments equal to 1 - - - clean-up loop -*/ - -L20: - m = *n % 3; - if (m == 0) { - goto L40; - } - i__1 = m; - for (i__ = 1; i__ <= i__1; ++i__) { - dtemp = dx[i__]; - dx[i__] = dy[i__]; - dy[i__] = dtemp; -/* L30: */ - } - if (*n < 3) { - return 0; - } -L40: - mp1 = m + 1; - i__1 = *n; - for (i__ = mp1; i__ <= i__1; i__ += 3) { - dtemp = dx[i__]; - dx[i__] = dy[i__]; - dy[i__] = 
dtemp; - dtemp = dx[i__ + 1]; - dx[i__ + 1] = dy[i__ + 1]; - dy[i__ + 1] = dtemp; - dtemp = dx[i__ + 2]; - dx[i__ + 2] = dy[i__ + 2]; - dy[i__ + 2] = dtemp; -/* L50: */ - } - return 0; -} /* dswap_ */ - -/* Subroutine */ int dsymv_(char *uplo, integer *n, doublereal *alpha, - doublereal *a, integer *lda, doublereal *x, integer *incx, doublereal - *beta, doublereal *y, integer *incy) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2; - - /* Local variables */ - static integer i__, j, ix, iy, jx, jy, kx, ky, info; - static doublereal temp1, temp2; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - Purpose - ======= - - DSYMV performs the matrix-vector operation - - y := alpha*A*x + beta*y, - - where alpha and beta are scalars, x and y are n element vectors and - A is an n by n symmetric matrix. - - Parameters - ========== - - UPLO - CHARACTER*1. - On entry, UPLO specifies whether the upper or lower - triangular part of the array A is to be referenced as - follows: - - UPLO = 'U' or 'u' Only the upper triangular part of A - is to be referenced. - - UPLO = 'L' or 'l' Only the lower triangular part of A - is to be referenced. - - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the order of the matrix A. - N must be at least zero. - Unchanged on exit. - - ALPHA - DOUBLE PRECISION. - On entry, ALPHA specifies the scalar alpha. - Unchanged on exit. - - A - DOUBLE PRECISION array of DIMENSION ( LDA, n ). - Before entry with UPLO = 'U' or 'u', the leading n by n - upper triangular part of the array A must contain the upper - triangular part of the symmetric matrix and the strictly - lower triangular part of A is not referenced. - Before entry with UPLO = 'L' or 'l', the leading n by n - lower triangular part of the array A must contain the lower - triangular part of the symmetric matrix and the strictly - upper triangular part of A is not referenced. - Unchanged on exit. 
- - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. LDA must be at least - max( 1, n ). - Unchanged on exit. - - X - DOUBLE PRECISION array of dimension at least - ( 1 + ( n - 1 )*abs( INCX ) ). - Before entry, the incremented array X must contain the n - element vector x. - Unchanged on exit. - - INCX - INTEGER. - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. - Unchanged on exit. - - BETA - DOUBLE PRECISION. - On entry, BETA specifies the scalar beta. When BETA is - supplied as zero then Y need not be set on input. - Unchanged on exit. - - Y - DOUBLE PRECISION array of dimension at least - ( 1 + ( n - 1 )*abs( INCY ) ). - Before entry, the incremented array Y must contain the n - element vector y. On exit, Y is overwritten by the updated - vector y. - - INCY - INTEGER. - On entry, INCY specifies the increment for the elements of - Y. INCY must not be zero. - Unchanged on exit. - - - Level 2 Blas routine. - - -- Written on 22-October-1986. - Jack Dongarra, Argonne National Lab. - Jeremy Du Croz, Nag Central Office. - Sven Hammarling, Nag Central Office. - Richard Hanson, Sandia National Labs. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --x; - --y; - - /* Function Body */ - info = 0; - if ((! lsame_(uplo, "U") && ! lsame_(uplo, "L"))) { - info = 1; - } else if (*n < 0) { - info = 2; - } else if (*lda < max(1,*n)) { - info = 5; - } else if (*incx == 0) { - info = 7; - } else if (*incy == 0) { - info = 10; - } - if (info != 0) { - xerbla_("DSYMV ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0 || (*alpha == 0. && *beta == 1.)) { - return 0; - } - -/* Set up the start points in X and Y. 
*/ - - if (*incx > 0) { - kx = 1; - } else { - kx = 1 - (*n - 1) * *incx; - } - if (*incy > 0) { - ky = 1; - } else { - ky = 1 - (*n - 1) * *incy; - } - -/* - Start the operations. In this version the elements of A are - accessed sequentially with one pass through the triangular part - of A. - - First form y := beta*y. -*/ - - if (*beta != 1.) { - if (*incy == 1) { - if (*beta == 0.) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - y[i__] = 0.; -/* L10: */ - } - } else { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - y[i__] = *beta * y[i__]; -/* L20: */ - } - } - } else { - iy = ky; - if (*beta == 0.) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - y[iy] = 0.; - iy += *incy; -/* L30: */ - } - } else { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - y[iy] = *beta * y[iy]; - iy += *incy; -/* L40: */ - } - } - } - } - if (*alpha == 0.) { - return 0; - } - if (lsame_(uplo, "U")) { - -/* Form y when A is stored in upper triangle. */ - - if ((*incx == 1 && *incy == 1)) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - temp1 = *alpha * x[j]; - temp2 = 0.; - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - y[i__] += temp1 * a[i__ + j * a_dim1]; - temp2 += a[i__ + j * a_dim1] * x[i__]; -/* L50: */ - } - y[j] = y[j] + temp1 * a[j + j * a_dim1] + *alpha * temp2; -/* L60: */ - } - } else { - jx = kx; - jy = ky; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - temp1 = *alpha * x[jx]; - temp2 = 0.; - ix = kx; - iy = ky; - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - y[iy] += temp1 * a[i__ + j * a_dim1]; - temp2 += a[i__ + j * a_dim1] * x[ix]; - ix += *incx; - iy += *incy; -/* L70: */ - } - y[jy] = y[jy] + temp1 * a[j + j * a_dim1] + *alpha * temp2; - jx += *incx; - jy += *incy; -/* L80: */ - } - } - } else { - -/* Form y when A is stored in lower triangle. 
*/ - - if ((*incx == 1 && *incy == 1)) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - temp1 = *alpha * x[j]; - temp2 = 0.; - y[j] += temp1 * a[j + j * a_dim1]; - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - y[i__] += temp1 * a[i__ + j * a_dim1]; - temp2 += a[i__ + j * a_dim1] * x[i__]; -/* L90: */ - } - y[j] += *alpha * temp2; -/* L100: */ - } - } else { - jx = kx; - jy = ky; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - temp1 = *alpha * x[jx]; - temp2 = 0.; - y[jy] += temp1 * a[j + j * a_dim1]; - ix = jx; - iy = jy; - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - ix += *incx; - iy += *incy; - y[iy] += temp1 * a[i__ + j * a_dim1]; - temp2 += a[i__ + j * a_dim1] * x[ix]; -/* L110: */ - } - y[jy] += *alpha * temp2; - jx += *incx; - jy += *incy; -/* L120: */ - } - } - } - - return 0; - -/* End of DSYMV . */ - -} /* dsymv_ */ - -/* Subroutine */ int dsyr2_(char *uplo, integer *n, doublereal *alpha, - doublereal *x, integer *incx, doublereal *y, integer *incy, - doublereal *a, integer *lda) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2; - - /* Local variables */ - static integer i__, j, ix, iy, jx, jy, kx, ky, info; - static doublereal temp1, temp2; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - Purpose - ======= - - DSYR2 performs the symmetric rank 2 operation - - A := alpha*x*y' + alpha*y*x' + A, - - where alpha is a scalar, x and y are n element vectors and A is an n - by n symmetric matrix. - - Parameters - ========== - - UPLO - CHARACTER*1. - On entry, UPLO specifies whether the upper or lower - triangular part of the array A is to be referenced as - follows: - - UPLO = 'U' or 'u' Only the upper triangular part of A - is to be referenced. - - UPLO = 'L' or 'l' Only the lower triangular part of A - is to be referenced. - - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the order of the matrix A. - N must be at least zero. - Unchanged on exit. 
- - ALPHA - DOUBLE PRECISION. - On entry, ALPHA specifies the scalar alpha. - Unchanged on exit. - - X - DOUBLE PRECISION array of dimension at least - ( 1 + ( n - 1 )*abs( INCX ) ). - Before entry, the incremented array X must contain the n - element vector x. - Unchanged on exit. - - INCX - INTEGER. - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. - Unchanged on exit. - - Y - DOUBLE PRECISION array of dimension at least - ( 1 + ( n - 1 )*abs( INCY ) ). - Before entry, the incremented array Y must contain the n - element vector y. - Unchanged on exit. - - INCY - INTEGER. - On entry, INCY specifies the increment for the elements of - Y. INCY must not be zero. - Unchanged on exit. - - A - DOUBLE PRECISION array of DIMENSION ( LDA, n ). - Before entry with UPLO = 'U' or 'u', the leading n by n - upper triangular part of the array A must contain the upper - triangular part of the symmetric matrix and the strictly - lower triangular part of A is not referenced. On exit, the - upper triangular part of the array A is overwritten by the - upper triangular part of the updated matrix. - Before entry with UPLO = 'L' or 'l', the leading n by n - lower triangular part of the array A must contain the lower - triangular part of the symmetric matrix and the strictly - upper triangular part of A is not referenced. On exit, the - lower triangular part of the array A is overwritten by the - lower triangular part of the updated matrix. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. LDA must be at least - max( 1, n ). - Unchanged on exit. - - - Level 2 Blas routine. - - -- Written on 22-October-1986. - Jack Dongarra, Argonne National Lab. - Jeremy Du Croz, Nag Central Office. - Sven Hammarling, Nag Central Office. - Richard Hanson, Sandia National Labs. - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - --x; - --y; - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - - /* Function Body */ - info = 0; - if ((! lsame_(uplo, "U") && ! lsame_(uplo, "L"))) { - info = 1; - } else if (*n < 0) { - info = 2; - } else if (*incx == 0) { - info = 5; - } else if (*incy == 0) { - info = 7; - } else if (*lda < max(1,*n)) { - info = 9; - } - if (info != 0) { - xerbla_("DSYR2 ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0 || *alpha == 0.) { - return 0; - } - -/* - Set up the start points in X and Y if the increments are not both - unity. -*/ - - if (*incx != 1 || *incy != 1) { - if (*incx > 0) { - kx = 1; - } else { - kx = 1 - (*n - 1) * *incx; - } - if (*incy > 0) { - ky = 1; - } else { - ky = 1 - (*n - 1) * *incy; - } - jx = kx; - jy = ky; - } - -/* - Start the operations. In this version the elements of A are - accessed sequentially with one pass through the triangular part - of A. -*/ - - if (lsame_(uplo, "U")) { - -/* Form A when A is stored in the upper triangle. */ - - if ((*incx == 1 && *incy == 1)) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (x[j] != 0. || y[j] != 0.) { - temp1 = *alpha * y[j]; - temp2 = *alpha * x[j]; - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] = a[i__ + j * a_dim1] + x[i__] * - temp1 + y[i__] * temp2; -/* L10: */ - } - } -/* L20: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (x[jx] != 0. || y[jy] != 0.) { - temp1 = *alpha * y[jy]; - temp2 = *alpha * x[jx]; - ix = kx; - iy = ky; - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] = a[i__ + j * a_dim1] + x[ix] * - temp1 + y[iy] * temp2; - ix += *incx; - iy += *incy; -/* L30: */ - } - } - jx += *incx; - jy += *incy; -/* L40: */ - } - } - } else { - -/* Form A when A is stored in the lower triangle. */ - - if ((*incx == 1 && *incy == 1)) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (x[j] != 0. || y[j] != 0.) 
{ - temp1 = *alpha * y[j]; - temp2 = *alpha * x[j]; - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] = a[i__ + j * a_dim1] + x[i__] * - temp1 + y[i__] * temp2; -/* L50: */ - } - } -/* L60: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (x[jx] != 0. || y[jy] != 0.) { - temp1 = *alpha * y[jy]; - temp2 = *alpha * x[jx]; - ix = jx; - iy = jy; - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] = a[i__ + j * a_dim1] + x[ix] * - temp1 + y[iy] * temp2; - ix += *incx; - iy += *incy; -/* L70: */ - } - } - jx += *incx; - jy += *incy; -/* L80: */ - } - } - } - - return 0; - -/* End of DSYR2 . */ - -} /* dsyr2_ */ - -/* Subroutine */ int dsyr2k_(char *uplo, char *trans, integer *n, integer *k, - doublereal *alpha, doublereal *a, integer *lda, doublereal *b, - integer *ldb, doublereal *beta, doublereal *c__, integer *ldc) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, c_dim1, c_offset, i__1, i__2, - i__3; - - /* Local variables */ - static integer i__, j, l, info; - static doublereal temp1, temp2; - extern logical lsame_(char *, char *); - static integer nrowa; - static logical upper; - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - Purpose - ======= - - DSYR2K performs one of the symmetric rank 2k operations - - C := alpha*A*B' + alpha*B*A' + beta*C, - - or - - C := alpha*A'*B + alpha*B'*A + beta*C, - - where alpha and beta are scalars, C is an n by n symmetric matrix - and A and B are n by k matrices in the first case and k by n - matrices in the second case. - - Parameters - ========== - - UPLO - CHARACTER*1. - On entry, UPLO specifies whether the upper or lower - triangular part of the array C is to be referenced as - follows: - - UPLO = 'U' or 'u' Only the upper triangular part of C - is to be referenced. - - UPLO = 'L' or 'l' Only the lower triangular part of C - is to be referenced. - - Unchanged on exit. - - TRANS - CHARACTER*1. 
- On entry, TRANS specifies the operation to be performed as - follows: - - TRANS = 'N' or 'n' C := alpha*A*B' + alpha*B*A' + - beta*C. - - TRANS = 'T' or 't' C := alpha*A'*B + alpha*B'*A + - beta*C. - - TRANS = 'C' or 'c' C := alpha*A'*B + alpha*B'*A + - beta*C. - - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the order of the matrix C. N must be - at least zero. - Unchanged on exit. - - K - INTEGER. - On entry with TRANS = 'N' or 'n', K specifies the number - of columns of the matrices A and B, and on entry with - TRANS = 'T' or 't' or 'C' or 'c', K specifies the number - of rows of the matrices A and B. K must be at least zero. - Unchanged on exit. - - ALPHA - DOUBLE PRECISION. - On entry, ALPHA specifies the scalar alpha. - Unchanged on exit. - - A - DOUBLE PRECISION array of DIMENSION ( LDA, ka ), where ka is - k when TRANS = 'N' or 'n', and is n otherwise. - Before entry with TRANS = 'N' or 'n', the leading n by k - part of the array A must contain the matrix A, otherwise - the leading k by n part of the array A must contain the - matrix A. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. When TRANS = 'N' or 'n' - then LDA must be at least max( 1, n ), otherwise LDA must - be at least max( 1, k ). - Unchanged on exit. - - B - DOUBLE PRECISION array of DIMENSION ( LDB, kb ), where kb is - k when TRANS = 'N' or 'n', and is n otherwise. - Before entry with TRANS = 'N' or 'n', the leading n by k - part of the array B must contain the matrix B, otherwise - the leading k by n part of the array B must contain the - matrix B. - Unchanged on exit. - - LDB - INTEGER. - On entry, LDB specifies the first dimension of B as declared - in the calling (sub) program. When TRANS = 'N' or 'n' - then LDB must be at least max( 1, n ), otherwise LDB must - be at least max( 1, k ). - Unchanged on exit. - - BETA - DOUBLE PRECISION. - On entry, BETA specifies the scalar beta. 
- Unchanged on exit. - - C - DOUBLE PRECISION array of DIMENSION ( LDC, n ). - Before entry with UPLO = 'U' or 'u', the leading n by n - upper triangular part of the array C must contain the upper - triangular part of the symmetric matrix and the strictly - lower triangular part of C is not referenced. On exit, the - upper triangular part of the array C is overwritten by the - upper triangular part of the updated matrix. - Before entry with UPLO = 'L' or 'l', the leading n by n - lower triangular part of the array C must contain the lower - triangular part of the symmetric matrix and the strictly - upper triangular part of C is not referenced. On exit, the - lower triangular part of the array C is overwritten by the - lower triangular part of the updated matrix. - - LDC - INTEGER. - On entry, LDC specifies the first dimension of C as declared - in the calling (sub) program. LDC must be at least - max( 1, n ). - Unchanged on exit. - - - Level 3 Blas routine. - - - -- Written on 8-February-1989. - Jack Dongarra, Argonne National Laboratory. - Iain Duff, AERE Harwell. - Jeremy Du Croz, Numerical Algorithms Group Ltd. - Sven Hammarling, Numerical Algorithms Group Ltd. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - - /* Function Body */ - if (lsame_(trans, "N")) { - nrowa = *n; - } else { - nrowa = *k; - } - upper = lsame_(uplo, "U"); - - info = 0; - if ((! upper && ! lsame_(uplo, "L"))) { - info = 1; - } else if (((! lsame_(trans, "N") && ! lsame_(trans, - "T")) && ! 
lsame_(trans, "C"))) { - info = 2; - } else if (*n < 0) { - info = 3; - } else if (*k < 0) { - info = 4; - } else if (*lda < max(1,nrowa)) { - info = 7; - } else if (*ldb < max(1,nrowa)) { - info = 9; - } else if (*ldc < max(1,*n)) { - info = 12; - } - if (info != 0) { - xerbla_("DSYR2K", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0 || ((*alpha == 0. || *k == 0) && *beta == 1.)) { - return 0; - } - -/* And when alpha.eq.zero. */ - - if (*alpha == 0.) { - if (upper) { - if (*beta == 0.) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = 0.; -/* L10: */ - } -/* L20: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = *beta * c__[i__ + j * c_dim1]; -/* L30: */ - } -/* L40: */ - } - } - } else { - if (*beta == 0.) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = 0.; -/* L50: */ - } -/* L60: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = *beta * c__[i__ + j * c_dim1]; -/* L70: */ - } -/* L80: */ - } - } - } - return 0; - } - -/* Start the operations. */ - - if (lsame_(trans, "N")) { - -/* Form C := alpha*A*B' + alpha*B*A' + C. */ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (*beta == 0.) { - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = 0.; -/* L90: */ - } - } else if (*beta != 1.) { - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = *beta * c__[i__ + j * c_dim1]; -/* L100: */ - } - } - i__2 = *k; - for (l = 1; l <= i__2; ++l) { - if (a[j + l * a_dim1] != 0. || b[j + l * b_dim1] != 0.) 
{ - temp1 = *alpha * b[j + l * b_dim1]; - temp2 = *alpha * a[j + l * a_dim1]; - i__3 = j; - for (i__ = 1; i__ <= i__3; ++i__) { - c__[i__ + j * c_dim1] = c__[i__ + j * c_dim1] + a[ - i__ + l * a_dim1] * temp1 + b[i__ + l * - b_dim1] * temp2; -/* L110: */ - } - } -/* L120: */ - } -/* L130: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (*beta == 0.) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = 0.; -/* L140: */ - } - } else if (*beta != 1.) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = *beta * c__[i__ + j * c_dim1]; -/* L150: */ - } - } - i__2 = *k; - for (l = 1; l <= i__2; ++l) { - if (a[j + l * a_dim1] != 0. || b[j + l * b_dim1] != 0.) { - temp1 = *alpha * b[j + l * b_dim1]; - temp2 = *alpha * a[j + l * a_dim1]; - i__3 = *n; - for (i__ = j; i__ <= i__3; ++i__) { - c__[i__ + j * c_dim1] = c__[i__ + j * c_dim1] + a[ - i__ + l * a_dim1] * temp1 + b[i__ + l * - b_dim1] * temp2; -/* L160: */ - } - } -/* L170: */ - } -/* L180: */ - } - } - } else { - -/* Form C := alpha*A'*B + alpha*B'*A + C. */ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - temp1 = 0.; - temp2 = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - temp1 += a[l + i__ * a_dim1] * b[l + j * b_dim1]; - temp2 += b[l + i__ * b_dim1] * a[l + j * a_dim1]; -/* L190: */ - } - if (*beta == 0.) { - c__[i__ + j * c_dim1] = *alpha * temp1 + *alpha * - temp2; - } else { - c__[i__ + j * c_dim1] = *beta * c__[i__ + j * c_dim1] - + *alpha * temp1 + *alpha * temp2; - } -/* L200: */ - } -/* L210: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - temp1 = 0.; - temp2 = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - temp1 += a[l + i__ * a_dim1] * b[l + j * b_dim1]; - temp2 += b[l + i__ * b_dim1] * a[l + j * a_dim1]; -/* L220: */ - } - if (*beta == 0.) 
{ - c__[i__ + j * c_dim1] = *alpha * temp1 + *alpha * - temp2; - } else { - c__[i__ + j * c_dim1] = *beta * c__[i__ + j * c_dim1] - + *alpha * temp1 + *alpha * temp2; - } -/* L230: */ - } -/* L240: */ - } - } - } - - return 0; - -/* End of DSYR2K. */ - -} /* dsyr2k_ */ - -/* Subroutine */ int dsyrk_(char *uplo, char *trans, integer *n, integer *k, - doublereal *alpha, doublereal *a, integer *lda, doublereal *beta, - doublereal *c__, integer *ldc) -{ - /* System generated locals */ - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__, j, l, info; - static doublereal temp; - extern logical lsame_(char *, char *); - static integer nrowa; - static logical upper; - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - Purpose - ======= - - DSYRK performs one of the symmetric rank k operations - - C := alpha*A*A' + beta*C, - - or - - C := alpha*A'*A + beta*C, - - where alpha and beta are scalars, C is an n by n symmetric matrix - and A is an n by k matrix in the first case and a k by n matrix - in the second case. - - Parameters - ========== - - UPLO - CHARACTER*1. - On entry, UPLO specifies whether the upper or lower - triangular part of the array C is to be referenced as - follows: - - UPLO = 'U' or 'u' Only the upper triangular part of C - is to be referenced. - - UPLO = 'L' or 'l' Only the lower triangular part of C - is to be referenced. - - Unchanged on exit. - - TRANS - CHARACTER*1. - On entry, TRANS specifies the operation to be performed as - follows: - - TRANS = 'N' or 'n' C := alpha*A*A' + beta*C. - - TRANS = 'T' or 't' C := alpha*A'*A + beta*C. - - TRANS = 'C' or 'c' C := alpha*A'*A + beta*C. - - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the order of the matrix C. N must be - at least zero. - Unchanged on exit. - - K - INTEGER. 
- On entry with TRANS = 'N' or 'n', K specifies the number - of columns of the matrix A, and on entry with - TRANS = 'T' or 't' or 'C' or 'c', K specifies the number - of rows of the matrix A. K must be at least zero. - Unchanged on exit. - - ALPHA - DOUBLE PRECISION. - On entry, ALPHA specifies the scalar alpha. - Unchanged on exit. - - A - DOUBLE PRECISION array of DIMENSION ( LDA, ka ), where ka is - k when TRANS = 'N' or 'n', and is n otherwise. - Before entry with TRANS = 'N' or 'n', the leading n by k - part of the array A must contain the matrix A, otherwise - the leading k by n part of the array A must contain the - matrix A. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. When TRANS = 'N' or 'n' - then LDA must be at least max( 1, n ), otherwise LDA must - be at least max( 1, k ). - Unchanged on exit. - - BETA - DOUBLE PRECISION. - On entry, BETA specifies the scalar beta. - Unchanged on exit. - - C - DOUBLE PRECISION array of DIMENSION ( LDC, n ). - Before entry with UPLO = 'U' or 'u', the leading n by n - upper triangular part of the array C must contain the upper - triangular part of the symmetric matrix and the strictly - lower triangular part of C is not referenced. On exit, the - upper triangular part of the array C is overwritten by the - upper triangular part of the updated matrix. - Before entry with UPLO = 'L' or 'l', the leading n by n - lower triangular part of the array C must contain the lower - triangular part of the symmetric matrix and the strictly - upper triangular part of C is not referenced. On exit, the - lower triangular part of the array C is overwritten by the - lower triangular part of the updated matrix. - - LDC - INTEGER. - On entry, LDC specifies the first dimension of C as declared - in the calling (sub) program. LDC must be at least - max( 1, n ). - Unchanged on exit. - - - Level 3 Blas routine. - - -- Written on 8-February-1989. 
- Jack Dongarra, Argonne National Laboratory. - Iain Duff, AERE Harwell. - Jeremy Du Croz, Numerical Algorithms Group Ltd. - Sven Hammarling, Numerical Algorithms Group Ltd. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - - /* Function Body */ - if (lsame_(trans, "N")) { - nrowa = *n; - } else { - nrowa = *k; - } - upper = lsame_(uplo, "U"); - - info = 0; - if ((! upper && ! lsame_(uplo, "L"))) { - info = 1; - } else if (((! lsame_(trans, "N") && ! lsame_(trans, - "T")) && ! lsame_(trans, "C"))) { - info = 2; - } else if (*n < 0) { - info = 3; - } else if (*k < 0) { - info = 4; - } else if (*lda < max(1,nrowa)) { - info = 7; - } else if (*ldc < max(1,*n)) { - info = 10; - } - if (info != 0) { - xerbla_("DSYRK ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0 || ((*alpha == 0. || *k == 0) && *beta == 1.)) { - return 0; - } - -/* And when alpha.eq.zero. */ - - if (*alpha == 0.) { - if (upper) { - if (*beta == 0.) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = 0.; -/* L10: */ - } -/* L20: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = *beta * c__[i__ + j * c_dim1]; -/* L30: */ - } -/* L40: */ - } - } - } else { - if (*beta == 0.) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = 0.; -/* L50: */ - } -/* L60: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = *beta * c__[i__ + j * c_dim1]; -/* L70: */ - } -/* L80: */ - } - } - } - return 0; - } - -/* Start the operations. */ - - if (lsame_(trans, "N")) { - -/* Form C := alpha*A*A' + beta*C. 
*/ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (*beta == 0.) { - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = 0.; -/* L90: */ - } - } else if (*beta != 1.) { - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = *beta * c__[i__ + j * c_dim1]; -/* L100: */ - } - } - i__2 = *k; - for (l = 1; l <= i__2; ++l) { - if (a[j + l * a_dim1] != 0.) { - temp = *alpha * a[j + l * a_dim1]; - i__3 = j; - for (i__ = 1; i__ <= i__3; ++i__) { - c__[i__ + j * c_dim1] += temp * a[i__ + l * - a_dim1]; -/* L110: */ - } - } -/* L120: */ - } -/* L130: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (*beta == 0.) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = 0.; -/* L140: */ - } - } else if (*beta != 1.) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] = *beta * c__[i__ + j * c_dim1]; -/* L150: */ - } - } - i__2 = *k; - for (l = 1; l <= i__2; ++l) { - if (a[j + l * a_dim1] != 0.) { - temp = *alpha * a[j + l * a_dim1]; - i__3 = *n; - for (i__ = j; i__ <= i__3; ++i__) { - c__[i__ + j * c_dim1] += temp * a[i__ + l * - a_dim1]; -/* L160: */ - } - } -/* L170: */ - } -/* L180: */ - } - } - } else { - -/* Form C := alpha*A'*A + beta*C. */ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - temp = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - temp += a[l + i__ * a_dim1] * a[l + j * a_dim1]; -/* L190: */ - } - if (*beta == 0.) { - c__[i__ + j * c_dim1] = *alpha * temp; - } else { - c__[i__ + j * c_dim1] = *alpha * temp + *beta * c__[ - i__ + j * c_dim1]; - } -/* L200: */ - } -/* L210: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - temp = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - temp += a[l + i__ * a_dim1] * a[l + j * a_dim1]; -/* L220: */ - } - if (*beta == 0.) 
{ - c__[i__ + j * c_dim1] = *alpha * temp; - } else { - c__[i__ + j * c_dim1] = *alpha * temp + *beta * c__[ - i__ + j * c_dim1]; - } -/* L230: */ - } -/* L240: */ - } - } - } - - return 0; - -/* End of DSYRK . */ - -} /* dsyrk_ */ - -/* Subroutine */ int dtrmm_(char *side, char *uplo, char *transa, char *diag, - integer *m, integer *n, doublereal *alpha, doublereal *a, integer * - lda, doublereal *b, integer *ldb) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__, j, k, info; - static doublereal temp; - static logical lside; - extern logical lsame_(char *, char *); - static integer nrowa; - static logical upper; - extern /* Subroutine */ int xerbla_(char *, integer *); - static logical nounit; - - -/* - Purpose - ======= - - DTRMM performs one of the matrix-matrix operations - - B := alpha*op( A )*B, or B := alpha*B*op( A ), - - where alpha is a scalar, B is an m by n matrix, A is a unit, or - non-unit, upper or lower triangular matrix and op( A ) is one of - - op( A ) = A or op( A ) = A'. - - Parameters - ========== - - SIDE - CHARACTER*1. - On entry, SIDE specifies whether op( A ) multiplies B from - the left or right as follows: - - SIDE = 'L' or 'l' B := alpha*op( A )*B. - - SIDE = 'R' or 'r' B := alpha*B*op( A ). - - Unchanged on exit. - - UPLO - CHARACTER*1. - On entry, UPLO specifies whether the matrix A is an upper or - lower triangular matrix as follows: - - UPLO = 'U' or 'u' A is an upper triangular matrix. - - UPLO = 'L' or 'l' A is a lower triangular matrix. - - Unchanged on exit. - - TRANSA - CHARACTER*1. - On entry, TRANSA specifies the form of op( A ) to be used in - the matrix multiplication as follows: - - TRANSA = 'N' or 'n' op( A ) = A. - - TRANSA = 'T' or 't' op( A ) = A'. - - TRANSA = 'C' or 'c' op( A ) = A'. - - Unchanged on exit. - - DIAG - CHARACTER*1. 
- On entry, DIAG specifies whether or not A is unit triangular - as follows: - - DIAG = 'U' or 'u' A is assumed to be unit triangular. - - DIAG = 'N' or 'n' A is not assumed to be unit - triangular. - - Unchanged on exit. - - M - INTEGER. - On entry, M specifies the number of rows of B. M must be at - least zero. - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the number of columns of B. N must be - at least zero. - Unchanged on exit. - - ALPHA - DOUBLE PRECISION. - On entry, ALPHA specifies the scalar alpha. When alpha is - zero then A is not referenced and B need not be set before - entry. - Unchanged on exit. - - A - DOUBLE PRECISION array of DIMENSION ( LDA, k ), where k is m - when SIDE = 'L' or 'l' and is n when SIDE = 'R' or 'r'. - Before entry with UPLO = 'U' or 'u', the leading k by k - upper triangular part of the array A must contain the upper - triangular matrix and the strictly lower triangular part of - A is not referenced. - Before entry with UPLO = 'L' or 'l', the leading k by k - lower triangular part of the array A must contain the lower - triangular matrix and the strictly upper triangular part of - A is not referenced. - Note that when DIAG = 'U' or 'u', the diagonal elements of - A are not referenced either, but are assumed to be unity. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. When SIDE = 'L' or 'l' then - LDA must be at least max( 1, m ), when SIDE = 'R' or 'r' - then LDA must be at least max( 1, n ). - Unchanged on exit. - - B - DOUBLE PRECISION array of DIMENSION ( LDB, n ). - Before entry, the leading m by n part of the array B must - contain the matrix B, and on exit is overwritten by the - transformed matrix. - - LDB - INTEGER. - On entry, LDB specifies the first dimension of B as declared - in the calling (sub) program. LDB must be at least - max( 1, m ). - Unchanged on exit. - - - Level 3 Blas routine. 
- - -- Written on 8-February-1989. - Jack Dongarra, Argonne National Laboratory. - Iain Duff, AERE Harwell. - Jeremy Du Croz, Numerical Algorithms Group Ltd. - Sven Hammarling, Numerical Algorithms Group Ltd. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - - /* Function Body */ - lside = lsame_(side, "L"); - if (lside) { - nrowa = *m; - } else { - nrowa = *n; - } - nounit = lsame_(diag, "N"); - upper = lsame_(uplo, "U"); - - info = 0; - if ((! lside && ! lsame_(side, "R"))) { - info = 1; - } else if ((! upper && ! lsame_(uplo, "L"))) { - info = 2; - } else if (((! lsame_(transa, "N") && ! lsame_( - transa, "T")) && ! lsame_(transa, "C"))) { - info = 3; - } else if ((! lsame_(diag, "U") && ! lsame_(diag, - "N"))) { - info = 4; - } else if (*m < 0) { - info = 5; - } else if (*n < 0) { - info = 6; - } else if (*lda < max(1,nrowa)) { - info = 9; - } else if (*ldb < max(1,*m)) { - info = 11; - } - if (info != 0) { - xerbla_("DTRMM ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0) { - return 0; - } - -/* And when alpha.eq.zero. */ - - if (*alpha == 0.) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] = 0.; -/* L10: */ - } -/* L20: */ - } - return 0; - } - -/* Start the operations. */ - - if (lside) { - if (lsame_(transa, "N")) { - -/* Form B := alpha*A*B. */ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (k = 1; k <= i__2; ++k) { - if (b[k + j * b_dim1] != 0.) 
{ - temp = *alpha * b[k + j * b_dim1]; - i__3 = k - 1; - for (i__ = 1; i__ <= i__3; ++i__) { - b[i__ + j * b_dim1] += temp * a[i__ + k * - a_dim1]; -/* L30: */ - } - if (nounit) { - temp *= a[k + k * a_dim1]; - } - b[k + j * b_dim1] = temp; - } -/* L40: */ - } -/* L50: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - for (k = *m; k >= 1; --k) { - if (b[k + j * b_dim1] != 0.) { - temp = *alpha * b[k + j * b_dim1]; - b[k + j * b_dim1] = temp; - if (nounit) { - b[k + j * b_dim1] *= a[k + k * a_dim1]; - } - i__2 = *m; - for (i__ = k + 1; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] += temp * a[i__ + k * - a_dim1]; -/* L60: */ - } - } -/* L70: */ - } -/* L80: */ - } - } - } else { - -/* Form B := alpha*A'*B. */ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - for (i__ = *m; i__ >= 1; --i__) { - temp = b[i__ + j * b_dim1]; - if (nounit) { - temp *= a[i__ + i__ * a_dim1]; - } - i__2 = i__ - 1; - for (k = 1; k <= i__2; ++k) { - temp += a[k + i__ * a_dim1] * b[k + j * b_dim1]; -/* L90: */ - } - b[i__ + j * b_dim1] = *alpha * temp; -/* L100: */ - } -/* L110: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - temp = b[i__ + j * b_dim1]; - if (nounit) { - temp *= a[i__ + i__ * a_dim1]; - } - i__3 = *m; - for (k = i__ + 1; k <= i__3; ++k) { - temp += a[k + i__ * a_dim1] * b[k + j * b_dim1]; -/* L120: */ - } - b[i__ + j * b_dim1] = *alpha * temp; -/* L130: */ - } -/* L140: */ - } - } - } - } else { - if (lsame_(transa, "N")) { - -/* Form B := alpha*B*A. */ - - if (upper) { - for (j = *n; j >= 1; --j) { - temp = *alpha; - if (nounit) { - temp *= a[j + j * a_dim1]; - } - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - b[i__ + j * b_dim1] = temp * b[i__ + j * b_dim1]; -/* L150: */ - } - i__1 = j - 1; - for (k = 1; k <= i__1; ++k) { - if (a[k + j * a_dim1] != 0.) 
{ - temp = *alpha * a[k + j * a_dim1]; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] += temp * b[i__ + k * - b_dim1]; -/* L160: */ - } - } -/* L170: */ - } -/* L180: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - temp = *alpha; - if (nounit) { - temp *= a[j + j * a_dim1]; - } - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] = temp * b[i__ + j * b_dim1]; -/* L190: */ - } - i__2 = *n; - for (k = j + 1; k <= i__2; ++k) { - if (a[k + j * a_dim1] != 0.) { - temp = *alpha * a[k + j * a_dim1]; - i__3 = *m; - for (i__ = 1; i__ <= i__3; ++i__) { - b[i__ + j * b_dim1] += temp * b[i__ + k * - b_dim1]; -/* L200: */ - } - } -/* L210: */ - } -/* L220: */ - } - } - } else { - -/* Form B := alpha*B*A'. */ - - if (upper) { - i__1 = *n; - for (k = 1; k <= i__1; ++k) { - i__2 = k - 1; - for (j = 1; j <= i__2; ++j) { - if (a[j + k * a_dim1] != 0.) { - temp = *alpha * a[j + k * a_dim1]; - i__3 = *m; - for (i__ = 1; i__ <= i__3; ++i__) { - b[i__ + j * b_dim1] += temp * b[i__ + k * - b_dim1]; -/* L230: */ - } - } -/* L240: */ - } - temp = *alpha; - if (nounit) { - temp *= a[k + k * a_dim1]; - } - if (temp != 1.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + k * b_dim1] = temp * b[i__ + k * b_dim1]; -/* L250: */ - } - } -/* L260: */ - } - } else { - for (k = *n; k >= 1; --k) { - i__1 = *n; - for (j = k + 1; j <= i__1; ++j) { - if (a[j + k * a_dim1] != 0.) { - temp = *alpha * a[j + k * a_dim1]; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] += temp * b[i__ + k * - b_dim1]; -/* L270: */ - } - } -/* L280: */ - } - temp = *alpha; - if (nounit) { - temp *= a[k + k * a_dim1]; - } - if (temp != 1.) { - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - b[i__ + k * b_dim1] = temp * b[i__ + k * b_dim1]; -/* L290: */ - } - } -/* L300: */ - } - } - } - } - - return 0; - -/* End of DTRMM . 
*/ - -} /* dtrmm_ */ - -/* Subroutine */ int dtrmv_(char *uplo, char *trans, char *diag, integer *n, - doublereal *a, integer *lda, doublereal *x, integer *incx) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2; - - /* Local variables */ - static integer i__, j, ix, jx, kx, info; - static doublereal temp; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int xerbla_(char *, integer *); - static logical nounit; - - -/* - Purpose - ======= - - DTRMV performs one of the matrix-vector operations - - x := A*x, or x := A'*x, - - where x is an n element vector and A is an n by n unit, or non-unit, - upper or lower triangular matrix. - - Parameters - ========== - - UPLO - CHARACTER*1. - On entry, UPLO specifies whether the matrix is an upper or - lower triangular matrix as follows: - - UPLO = 'U' or 'u' A is an upper triangular matrix. - - UPLO = 'L' or 'l' A is a lower triangular matrix. - - Unchanged on exit. - - TRANS - CHARACTER*1. - On entry, TRANS specifies the operation to be performed as - follows: - - TRANS = 'N' or 'n' x := A*x. - - TRANS = 'T' or 't' x := A'*x. - - TRANS = 'C' or 'c' x := A'*x. - - Unchanged on exit. - - DIAG - CHARACTER*1. - On entry, DIAG specifies whether or not A is unit - triangular as follows: - - DIAG = 'U' or 'u' A is assumed to be unit triangular. - - DIAG = 'N' or 'n' A is not assumed to be unit - triangular. - - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the order of the matrix A. - N must be at least zero. - Unchanged on exit. - - A - DOUBLE PRECISION array of DIMENSION ( LDA, n ). - Before entry with UPLO = 'U' or 'u', the leading n by n - upper triangular part of the array A must contain the upper - triangular matrix and the strictly lower triangular part of - A is not referenced. 
- Before entry with UPLO = 'L' or 'l', the leading n by n - lower triangular part of the array A must contain the lower - triangular matrix and the strictly upper triangular part of - A is not referenced. - Note that when DIAG = 'U' or 'u', the diagonal elements of - A are not referenced either, but are assumed to be unity. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. LDA must be at least - max( 1, n ). - Unchanged on exit. - - X - DOUBLE PRECISION array of dimension at least - ( 1 + ( n - 1 )*abs( INCX ) ). - Before entry, the incremented array X must contain the n - element vector x. On exit, X is overwritten with the - transformed vector x. - - INCX - INTEGER. - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. - Unchanged on exit. - - - Level 2 Blas routine. - - -- Written on 22-October-1986. - Jack Dongarra, Argonne National Lab. - Jeremy Du Croz, Nag Central Office. - Sven Hammarling, Nag Central Office. - Richard Hanson, Sandia National Labs. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --x; - - /* Function Body */ - info = 0; - if ((! lsame_(uplo, "U") && ! lsame_(uplo, "L"))) { - info = 1; - } else if (((! lsame_(trans, "N") && ! lsame_(trans, - "T")) && ! lsame_(trans, "C"))) { - info = 2; - } else if ((! lsame_(diag, "U") && ! lsame_(diag, - "N"))) { - info = 3; - } else if (*n < 0) { - info = 4; - } else if (*lda < max(1,*n)) { - info = 6; - } else if (*incx == 0) { - info = 8; - } - if (info != 0) { - xerbla_("DTRMV ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0) { - return 0; - } - - nounit = lsame_(diag, "N"); - -/* - Set up the start point in X if the increment is not unity. This - will be ( N - 1 )*INCX too small for descending loops. 
-*/ - - if (*incx <= 0) { - kx = 1 - (*n - 1) * *incx; - } else if (*incx != 1) { - kx = 1; - } - -/* - Start the operations. In this version the elements of A are - accessed sequentially with one pass through A. -*/ - - if (lsame_(trans, "N")) { - -/* Form x := A*x. */ - - if (lsame_(uplo, "U")) { - if (*incx == 1) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (x[j] != 0.) { - temp = x[j]; - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - x[i__] += temp * a[i__ + j * a_dim1]; -/* L10: */ - } - if (nounit) { - x[j] *= a[j + j * a_dim1]; - } - } -/* L20: */ - } - } else { - jx = kx; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (x[jx] != 0.) { - temp = x[jx]; - ix = kx; - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - x[ix] += temp * a[i__ + j * a_dim1]; - ix += *incx; -/* L30: */ - } - if (nounit) { - x[jx] *= a[j + j * a_dim1]; - } - } - jx += *incx; -/* L40: */ - } - } - } else { - if (*incx == 1) { - for (j = *n; j >= 1; --j) { - if (x[j] != 0.) { - temp = x[j]; - i__1 = j + 1; - for (i__ = *n; i__ >= i__1; --i__) { - x[i__] += temp * a[i__ + j * a_dim1]; -/* L50: */ - } - if (nounit) { - x[j] *= a[j + j * a_dim1]; - } - } -/* L60: */ - } - } else { - kx += (*n - 1) * *incx; - jx = kx; - for (j = *n; j >= 1; --j) { - if (x[jx] != 0.) { - temp = x[jx]; - ix = kx; - i__1 = j + 1; - for (i__ = *n; i__ >= i__1; --i__) { - x[ix] += temp * a[i__ + j * a_dim1]; - ix -= *incx; -/* L70: */ - } - if (nounit) { - x[jx] *= a[j + j * a_dim1]; - } - } - jx -= *incx; -/* L80: */ - } - } - } - } else { - -/* Form x := A'*x. 
*/ - - if (lsame_(uplo, "U")) { - if (*incx == 1) { - for (j = *n; j >= 1; --j) { - temp = x[j]; - if (nounit) { - temp *= a[j + j * a_dim1]; - } - for (i__ = j - 1; i__ >= 1; --i__) { - temp += a[i__ + j * a_dim1] * x[i__]; -/* L90: */ - } - x[j] = temp; -/* L100: */ - } - } else { - jx = kx + (*n - 1) * *incx; - for (j = *n; j >= 1; --j) { - temp = x[jx]; - ix = jx; - if (nounit) { - temp *= a[j + j * a_dim1]; - } - for (i__ = j - 1; i__ >= 1; --i__) { - ix -= *incx; - temp += a[i__ + j * a_dim1] * x[ix]; -/* L110: */ - } - x[jx] = temp; - jx -= *incx; -/* L120: */ - } - } - } else { - if (*incx == 1) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - temp = x[j]; - if (nounit) { - temp *= a[j + j * a_dim1]; - } - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - temp += a[i__ + j * a_dim1] * x[i__]; -/* L130: */ - } - x[j] = temp; -/* L140: */ - } - } else { - jx = kx; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - temp = x[jx]; - ix = jx; - if (nounit) { - temp *= a[j + j * a_dim1]; - } - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - ix += *incx; - temp += a[i__ + j * a_dim1] * x[ix]; -/* L150: */ - } - x[jx] = temp; - jx += *incx; -/* L160: */ - } - } - } - } - - return 0; - -/* End of DTRMV . 
*/ - -} /* dtrmv_ */ - -/* Subroutine */ int dtrsm_(char *side, char *uplo, char *transa, char *diag, - integer *m, integer *n, doublereal *alpha, doublereal *a, integer * - lda, doublereal *b, integer *ldb) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__, j, k, info; - static doublereal temp; - static logical lside; - extern logical lsame_(char *, char *); - static integer nrowa; - static logical upper; - extern /* Subroutine */ int xerbla_(char *, integer *); - static logical nounit; - - -/* - Purpose - ======= - - DTRSM solves one of the matrix equations - - op( A )*X = alpha*B, or X*op( A ) = alpha*B, - - where alpha is a scalar, X and B are m by n matrices, A is a unit, or - non-unit, upper or lower triangular matrix and op( A ) is one of - - op( A ) = A or op( A ) = A'. - - The matrix X is overwritten on B. - - Parameters - ========== - - SIDE - CHARACTER*1. - On entry, SIDE specifies whether op( A ) appears on the left - or right of X as follows: - - SIDE = 'L' or 'l' op( A )*X = alpha*B. - - SIDE = 'R' or 'r' X*op( A ) = alpha*B. - - Unchanged on exit. - - UPLO - CHARACTER*1. - On entry, UPLO specifies whether the matrix A is an upper or - lower triangular matrix as follows: - - UPLO = 'U' or 'u' A is an upper triangular matrix. - - UPLO = 'L' or 'l' A is a lower triangular matrix. - - Unchanged on exit. - - TRANSA - CHARACTER*1. - On entry, TRANSA specifies the form of op( A ) to be used in - the matrix multiplication as follows: - - TRANSA = 'N' or 'n' op( A ) = A. - - TRANSA = 'T' or 't' op( A ) = A'. - - TRANSA = 'C' or 'c' op( A ) = A'. - - Unchanged on exit. - - DIAG - CHARACTER*1. - On entry, DIAG specifies whether or not A is unit triangular - as follows: - - DIAG = 'U' or 'u' A is assumed to be unit triangular. - - DIAG = 'N' or 'n' A is not assumed to be unit - triangular. - - Unchanged on exit. - - M - INTEGER. 
- On entry, M specifies the number of rows of B. M must be at - least zero. - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the number of columns of B. N must be - at least zero. - Unchanged on exit. - - ALPHA - DOUBLE PRECISION. - On entry, ALPHA specifies the scalar alpha. When alpha is - zero then A is not referenced and B need not be set before - entry. - Unchanged on exit. - - A - DOUBLE PRECISION array of DIMENSION ( LDA, k ), where k is m - when SIDE = 'L' or 'l' and is n when SIDE = 'R' or 'r'. - Before entry with UPLO = 'U' or 'u', the leading k by k - upper triangular part of the array A must contain the upper - triangular matrix and the strictly lower triangular part of - A is not referenced. - Before entry with UPLO = 'L' or 'l', the leading k by k - lower triangular part of the array A must contain the lower - triangular matrix and the strictly upper triangular part of - A is not referenced. - Note that when DIAG = 'U' or 'u', the diagonal elements of - A are not referenced either, but are assumed to be unity. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. When SIDE = 'L' or 'l' then - LDA must be at least max( 1, m ), when SIDE = 'R' or 'r' - then LDA must be at least max( 1, n ). - Unchanged on exit. - - B - DOUBLE PRECISION array of DIMENSION ( LDB, n ). - Before entry, the leading m by n part of the array B must - contain the right-hand side matrix B, and on exit is - overwritten by the solution matrix X. - - LDB - INTEGER. - On entry, LDB specifies the first dimension of B as declared - in the calling (sub) program. LDB must be at least - max( 1, m ). - Unchanged on exit. - - - Level 3 Blas routine. - - - -- Written on 8-February-1989. - Jack Dongarra, Argonne National Laboratory. - Iain Duff, AERE Harwell. - Jeremy Du Croz, Numerical Algorithms Group Ltd. - Sven Hammarling, Numerical Algorithms Group Ltd. - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - - /* Function Body */ - lside = lsame_(side, "L"); - if (lside) { - nrowa = *m; - } else { - nrowa = *n; - } - nounit = lsame_(diag, "N"); - upper = lsame_(uplo, "U"); - - info = 0; - if ((! lside && ! lsame_(side, "R"))) { - info = 1; - } else if ((! upper && ! lsame_(uplo, "L"))) { - info = 2; - } else if (((! lsame_(transa, "N") && ! lsame_( - transa, "T")) && ! lsame_(transa, "C"))) { - info = 3; - } else if ((! lsame_(diag, "U") && ! lsame_(diag, - "N"))) { - info = 4; - } else if (*m < 0) { - info = 5; - } else if (*n < 0) { - info = 6; - } else if (*lda < max(1,nrowa)) { - info = 9; - } else if (*ldb < max(1,*m)) { - info = 11; - } - if (info != 0) { - xerbla_("DTRSM ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0) { - return 0; - } - -/* And when alpha.eq.zero. */ - - if (*alpha == 0.) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] = 0.; -/* L10: */ - } -/* L20: */ - } - return 0; - } - -/* Start the operations. */ - - if (lside) { - if (lsame_(transa, "N")) { - -/* Form B := alpha*inv( A )*B. */ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (*alpha != 1.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] = *alpha * b[i__ + j * b_dim1] - ; -/* L30: */ - } - } - for (k = *m; k >= 1; --k) { - if (b[k + j * b_dim1] != 0.) { - if (nounit) { - b[k + j * b_dim1] /= a[k + k * a_dim1]; - } - i__2 = k - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] -= b[k + j * b_dim1] * a[ - i__ + k * a_dim1]; -/* L40: */ - } - } -/* L50: */ - } -/* L60: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (*alpha != 1.) 
{ - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] = *alpha * b[i__ + j * b_dim1] - ; -/* L70: */ - } - } - i__2 = *m; - for (k = 1; k <= i__2; ++k) { - if (b[k + j * b_dim1] != 0.) { - if (nounit) { - b[k + j * b_dim1] /= a[k + k * a_dim1]; - } - i__3 = *m; - for (i__ = k + 1; i__ <= i__3; ++i__) { - b[i__ + j * b_dim1] -= b[k + j * b_dim1] * a[ - i__ + k * a_dim1]; -/* L80: */ - } - } -/* L90: */ - } -/* L100: */ - } - } - } else { - -/* Form B := alpha*inv( A' )*B. */ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - temp = *alpha * b[i__ + j * b_dim1]; - i__3 = i__ - 1; - for (k = 1; k <= i__3; ++k) { - temp -= a[k + i__ * a_dim1] * b[k + j * b_dim1]; -/* L110: */ - } - if (nounit) { - temp /= a[i__ + i__ * a_dim1]; - } - b[i__ + j * b_dim1] = temp; -/* L120: */ - } -/* L130: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - for (i__ = *m; i__ >= 1; --i__) { - temp = *alpha * b[i__ + j * b_dim1]; - i__2 = *m; - for (k = i__ + 1; k <= i__2; ++k) { - temp -= a[k + i__ * a_dim1] * b[k + j * b_dim1]; -/* L140: */ - } - if (nounit) { - temp /= a[i__ + i__ * a_dim1]; - } - b[i__ + j * b_dim1] = temp; -/* L150: */ - } -/* L160: */ - } - } - } - } else { - if (lsame_(transa, "N")) { - -/* Form B := alpha*B*inv( A ). */ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (*alpha != 1.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] = *alpha * b[i__ + j * b_dim1] - ; -/* L170: */ - } - } - i__2 = j - 1; - for (k = 1; k <= i__2; ++k) { - if (a[k + j * a_dim1] != 0.) { - i__3 = *m; - for (i__ = 1; i__ <= i__3; ++i__) { - b[i__ + j * b_dim1] -= a[k + j * a_dim1] * b[ - i__ + k * b_dim1]; -/* L180: */ - } - } -/* L190: */ - } - if (nounit) { - temp = 1. 
/ a[j + j * a_dim1]; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] = temp * b[i__ + j * b_dim1]; -/* L200: */ - } - } -/* L210: */ - } - } else { - for (j = *n; j >= 1; --j) { - if (*alpha != 1.) { - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - b[i__ + j * b_dim1] = *alpha * b[i__ + j * b_dim1] - ; -/* L220: */ - } - } - i__1 = *n; - for (k = j + 1; k <= i__1; ++k) { - if (a[k + j * a_dim1] != 0.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] -= a[k + j * a_dim1] * b[ - i__ + k * b_dim1]; -/* L230: */ - } - } -/* L240: */ - } - if (nounit) { - temp = 1. / a[j + j * a_dim1]; - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - b[i__ + j * b_dim1] = temp * b[i__ + j * b_dim1]; -/* L250: */ - } - } -/* L260: */ - } - } - } else { - -/* Form B := alpha*B*inv( A' ). */ - - if (upper) { - for (k = *n; k >= 1; --k) { - if (nounit) { - temp = 1. / a[k + k * a_dim1]; - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - b[i__ + k * b_dim1] = temp * b[i__ + k * b_dim1]; -/* L270: */ - } - } - i__1 = k - 1; - for (j = 1; j <= i__1; ++j) { - if (a[j + k * a_dim1] != 0.) { - temp = a[j + k * a_dim1]; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] -= temp * b[i__ + k * - b_dim1]; -/* L280: */ - } - } -/* L290: */ - } - if (*alpha != 1.) { - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - b[i__ + k * b_dim1] = *alpha * b[i__ + k * b_dim1] - ; -/* L300: */ - } - } -/* L310: */ - } - } else { - i__1 = *n; - for (k = 1; k <= i__1; ++k) { - if (nounit) { - temp = 1. / a[k + k * a_dim1]; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + k * b_dim1] = temp * b[i__ + k * b_dim1]; -/* L320: */ - } - } - i__2 = *n; - for (j = k + 1; j <= i__2; ++j) { - if (a[j + k * a_dim1] != 0.) { - temp = a[j + k * a_dim1]; - i__3 = *m; - for (i__ = 1; i__ <= i__3; ++i__) { - b[i__ + j * b_dim1] -= temp * b[i__ + k * - b_dim1]; -/* L330: */ - } - } -/* L340: */ - } - if (*alpha != 1.) 
{ - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + k * b_dim1] = *alpha * b[i__ + k * b_dim1] - ; -/* L350: */ - } - } -/* L360: */ - } - } - } - } - - return 0; - -/* End of DTRSM . */ - -} /* dtrsm_ */ - -doublereal dzasum_(integer *n, doublecomplex *zx, integer *incx) -{ - /* System generated locals */ - integer i__1; - doublereal ret_val; - - /* Local variables */ - static integer i__, ix; - static doublereal stemp; - extern doublereal dcabs1_(doublecomplex *); - - -/* - takes the sum of the absolute values. - jack dongarra, 3/11/78. - modified 3/93 to return if incx .le. 0. - modified 12/3/93, array(1) declarations changed to array(*) -*/ - - - /* Parameter adjustments */ - --zx; - - /* Function Body */ - ret_val = 0.; - stemp = 0.; - if (*n <= 0 || *incx <= 0) { - return ret_val; - } - if (*incx == 1) { - goto L20; - } - -/* code for increment not equal to 1 */ - - ix = 1; - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - stemp += dcabs1_(&zx[ix]); - ix += *incx; -/* L10: */ - } - ret_val = stemp; - return ret_val; - -/* code for increment equal to 1 */ - -L20: - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - stemp += dcabs1_(&zx[i__]); -/* L30: */ - } - ret_val = stemp; - return ret_val; -} /* dzasum_ */ - -doublereal dznrm2_(integer *n, doublecomplex *x, integer *incx) -{ - /* System generated locals */ - integer i__1, i__2, i__3; - doublereal ret_val, d__1; - - /* Builtin functions */ - double d_imag(doublecomplex *), sqrt(doublereal); - - /* Local variables */ - static integer ix; - static doublereal ssq, temp, norm, scale; - - -/* - DZNRM2 returns the euclidean norm of a vector via the function - name, so that - - DZNRM2 := sqrt( conjg( x' )*x ) - - - -- This version written on 25-October-1982. - Modified on 14-October-1993 to inline the call to ZLASSQ. - Sven Hammarling, Nag Ltd. 
-*/ - - - /* Parameter adjustments */ - --x; - - /* Function Body */ - if (*n < 1 || *incx < 1) { - norm = 0.; - } else { - scale = 0.; - ssq = 1.; -/* - The following loop is equivalent to this call to the LAPACK - auxiliary routine: - CALL ZLASSQ( N, X, INCX, SCALE, SSQ ) -*/ - - i__1 = (*n - 1) * *incx + 1; - i__2 = *incx; - for (ix = 1; i__2 < 0 ? ix >= i__1 : ix <= i__1; ix += i__2) { - i__3 = ix; - if (x[i__3].r != 0.) { - i__3 = ix; - temp = (d__1 = x[i__3].r, abs(d__1)); - if (scale < temp) { -/* Computing 2nd power */ - d__1 = scale / temp; - ssq = ssq * (d__1 * d__1) + 1.; - scale = temp; - } else { -/* Computing 2nd power */ - d__1 = temp / scale; - ssq += d__1 * d__1; - } - } - if (d_imag(&x[ix]) != 0.) { - temp = (d__1 = d_imag(&x[ix]), abs(d__1)); - if (scale < temp) { -/* Computing 2nd power */ - d__1 = scale / temp; - ssq = ssq * (d__1 * d__1) + 1.; - scale = temp; - } else { -/* Computing 2nd power */ - d__1 = temp / scale; - ssq += d__1 * d__1; - } - } -/* L10: */ - } - norm = scale * sqrt(ssq); - } - - ret_val = norm; - return ret_val; - -/* End of DZNRM2. */ - -} /* dznrm2_ */ - -integer idamax_(integer *n, doublereal *dx, integer *incx) -{ - /* System generated locals */ - integer ret_val, i__1; - doublereal d__1; - - /* Local variables */ - static integer i__, ix; - static doublereal dmax__; - - -/* - finds the index of element having max. absolute value. - jack dongarra, linpack, 3/11/78. - modified 3/93 to return if incx .le. 0. 
- modified 12/3/93, array(1) declarations changed to array(*) -*/ - - - /* Parameter adjustments */ - --dx; - - /* Function Body */ - ret_val = 0; - if (*n < 1 || *incx <= 0) { - return ret_val; - } - ret_val = 1; - if (*n == 1) { - return ret_val; - } - if (*incx == 1) { - goto L20; - } - -/* code for increment not equal to 1 */ - - ix = 1; - dmax__ = abs(dx[1]); - ix += *incx; - i__1 = *n; - for (i__ = 2; i__ <= i__1; ++i__) { - if ((d__1 = dx[ix], abs(d__1)) <= dmax__) { - goto L5; - } - ret_val = i__; - dmax__ = (d__1 = dx[ix], abs(d__1)); -L5: - ix += *incx; -/* L10: */ - } - return ret_val; - -/* code for increment equal to 1 */ - -L20: - dmax__ = abs(dx[1]); - i__1 = *n; - for (i__ = 2; i__ <= i__1; ++i__) { - if ((d__1 = dx[i__], abs(d__1)) <= dmax__) { - goto L30; - } - ret_val = i__; - dmax__ = (d__1 = dx[i__], abs(d__1)); -L30: - ; - } - return ret_val; -} /* idamax_ */ - -integer izamax_(integer *n, doublecomplex *zx, integer *incx) -{ - /* System generated locals */ - integer ret_val, i__1; - - /* Local variables */ - static integer i__, ix; - static doublereal smax; - extern doublereal dcabs1_(doublecomplex *); - - -/* - finds the index of element having max. absolute value. - jack dongarra, 1/15/85. - modified 3/93 to return if incx .le. 0. 
- modified 12/3/93, array(1) declarations changed to array(*) -*/ - - - /* Parameter adjustments */ - --zx; - - /* Function Body */ - ret_val = 0; - if (*n < 1 || *incx <= 0) { - return ret_val; - } - ret_val = 1; - if (*n == 1) { - return ret_val; - } - if (*incx == 1) { - goto L20; - } - -/* code for increment not equal to 1 */ - - ix = 1; - smax = dcabs1_(&zx[1]); - ix += *incx; - i__1 = *n; - for (i__ = 2; i__ <= i__1; ++i__) { - if (dcabs1_(&zx[ix]) <= smax) { - goto L5; - } - ret_val = i__; - smax = dcabs1_(&zx[ix]); -L5: - ix += *incx; -/* L10: */ - } - return ret_val; - -/* code for increment equal to 1 */ - -L20: - smax = dcabs1_(&zx[1]); - i__1 = *n; - for (i__ = 2; i__ <= i__1; ++i__) { - if (dcabs1_(&zx[i__]) <= smax) { - goto L30; - } - ret_val = i__; - smax = dcabs1_(&zx[i__]); -L30: - ; - } - return ret_val; -} /* izamax_ */ - -logical lsame_(char *ca, char *cb) -{ - /* System generated locals */ - logical ret_val; - - /* Local variables */ - static integer inta, intb, zcode; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - LSAME returns .TRUE. if CA is the same letter as CB regardless of - case. - - Arguments - ========= - - CA (input) CHARACTER*1 - CB (input) CHARACTER*1 - CA and CB specify the single characters to be compared. - - ===================================================================== - - - Test if the characters are equal -*/ - - ret_val = *(unsigned char *)ca == *(unsigned char *)cb; - if (ret_val) { - return ret_val; - } - -/* Now test for equivalence if both characters are alphabetic. */ - - zcode = 'Z'; - -/* - Use 'Z' rather than 'A' so that ASCII can be detected on Prime - machines, on which ICHAR returns a value with bit 8 set. - ICHAR('A') on Prime machines returns 193 which is the same as - ICHAR('A') on an EBCDIC machine. 
-*/ - - inta = *(unsigned char *)ca; - intb = *(unsigned char *)cb; - - if (zcode == 90 || zcode == 122) { - -/* - ASCII is assumed - ZCODE is the ASCII code of either lower or - upper case 'Z'. -*/ - - if ((inta >= 97 && inta <= 122)) { - inta += -32; - } - if ((intb >= 97 && intb <= 122)) { - intb += -32; - } - - } else if (zcode == 233 || zcode == 169) { - -/* - EBCDIC is assumed - ZCODE is the EBCDIC code of either lower or - upper case 'Z'. -*/ - - if ((inta >= 129 && inta <= 137) || (inta >= 145 && inta <= 153) || ( - inta >= 162 && inta <= 169)) { - inta += 64; - } - if ((intb >= 129 && intb <= 137) || (intb >= 145 && intb <= 153) || ( - intb >= 162 && intb <= 169)) { - intb += 64; - } - - } else if (zcode == 218 || zcode == 250) { - -/* - ASCII is assumed, on Prime machines - ZCODE is the ASCII code - plus 128 of either lower or upper case 'Z'. -*/ - - if ((inta >= 225 && inta <= 250)) { - inta += -32; - } - if ((intb >= 225 && intb <= 250)) { - intb += -32; - } - } - ret_val = inta == intb; - -/* - RETURN - - End of LSAME -*/ - - return ret_val; -} /* lsame_ */ - -/* Using xerbla_ from pythonxerbla.c */ -/* Subroutine */ int xerbla_DISABLE(char *srname, integer *info) -{ - /* Format strings */ - static char fmt_9999[] = "(\002 ** On entry to \002,a6,\002 parameter nu" - "mber \002,i2,\002 had \002,\002an illegal value\002)"; - - /* Builtin functions */ - integer s_wsfe(cilist *), do_fio(integer *, char *, ftnlen), e_wsfe(void); - /* Subroutine */ int s_stop(char *, ftnlen); - - /* Fortran I/O blocks */ - static cilist io___147 = { 0, 6, 0, fmt_9999, 0 }; - - -/* - -- LAPACK auxiliary routine (preliminary version) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - XERBLA is an error handler for the LAPACK routines. - It is called by an LAPACK routine if an input parameter has an - invalid value. 
A message is printed and execution stops. - - Installers may consider modifying the STOP statement in order to - call system-specific exception-handling facilities. - - Arguments - ========= - - SRNAME (input) CHARACTER*6 - The name of the routine which called XERBLA. - - INFO (input) INTEGER - The position of the invalid parameter in the parameter list - of the calling routine. -*/ - - - s_wsfe(&io___147); - do_fio(&c__1, srname, (ftnlen)6); - do_fio(&c__1, (char *)&(*info), (ftnlen)sizeof(integer)); - e_wsfe(); - - s_stop("", (ftnlen)0); - - -/* End of XERBLA */ - - return 0; -} /* xerbla_ */ - -/* Subroutine */ int zaxpy_(integer *n, doublecomplex *za, doublecomplex *zx, - integer *incx, doublecomplex *zy, integer *incy) -{ - /* System generated locals */ - integer i__1, i__2, i__3, i__4; - doublecomplex z__1, z__2; - - /* Local variables */ - static integer i__, ix, iy; - extern doublereal dcabs1_(doublecomplex *); - - -/* - constant times a vector plus a vector. - jack dongarra, 3/11/78. - modified 12/3/93, array(1) declarations changed to array(*) -*/ - - /* Parameter adjustments */ - --zy; - --zx; - - /* Function Body */ - if (*n <= 0) { - return 0; - } - if (dcabs1_(za) == 0.) 
{ - return 0; - } - if ((*incx == 1 && *incy == 1)) { - goto L20; - } - -/* - code for unequal increments or equal increments - not equal to 1 -*/ - - ix = 1; - iy = 1; - if (*incx < 0) { - ix = (-(*n) + 1) * *incx + 1; - } - if (*incy < 0) { - iy = (-(*n) + 1) * *incy + 1; - } - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = iy; - i__3 = iy; - i__4 = ix; - z__2.r = za->r * zx[i__4].r - za->i * zx[i__4].i, z__2.i = za->r * zx[ - i__4].i + za->i * zx[i__4].r; - z__1.r = zy[i__3].r + z__2.r, z__1.i = zy[i__3].i + z__2.i; - zy[i__2].r = z__1.r, zy[i__2].i = z__1.i; - ix += *incx; - iy += *incy; -/* L10: */ - } - return 0; - -/* code for both increments equal to 1 */ - -L20: - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__; - i__3 = i__; - i__4 = i__; - z__2.r = za->r * zx[i__4].r - za->i * zx[i__4].i, z__2.i = za->r * zx[ - i__4].i + za->i * zx[i__4].r; - z__1.r = zy[i__3].r + z__2.r, z__1.i = zy[i__3].i + z__2.i; - zy[i__2].r = z__1.r, zy[i__2].i = z__1.i; -/* L30: */ - } - return 0; -} /* zaxpy_ */ - -/* Subroutine */ int zcopy_(integer *n, doublecomplex *zx, integer *incx, - doublecomplex *zy, integer *incy) -{ - /* System generated locals */ - integer i__1, i__2, i__3; - - /* Local variables */ - static integer i__, ix, iy; - - -/* - copies a vector, x, to a vector, y. - jack dongarra, linpack, 4/11/78. 
- modified 12/3/93, array(1) declarations changed to array(*) -*/ - - - /* Parameter adjustments */ - --zy; - --zx; - - /* Function Body */ - if (*n <= 0) { - return 0; - } - if ((*incx == 1 && *incy == 1)) { - goto L20; - } - -/* - code for unequal increments or equal increments - not equal to 1 -*/ - - ix = 1; - iy = 1; - if (*incx < 0) { - ix = (-(*n) + 1) * *incx + 1; - } - if (*incy < 0) { - iy = (-(*n) + 1) * *incy + 1; - } - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = iy; - i__3 = ix; - zy[i__2].r = zx[i__3].r, zy[i__2].i = zx[i__3].i; - ix += *incx; - iy += *incy; -/* L10: */ - } - return 0; - -/* code for both increments equal to 1 */ - -L20: - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__; - i__3 = i__; - zy[i__2].r = zx[i__3].r, zy[i__2].i = zx[i__3].i; -/* L30: */ - } - return 0; -} /* zcopy_ */ - -/* Double Complex */ VOID zdotc_(doublecomplex * ret_val, integer *n, - doublecomplex *zx, integer *incx, doublecomplex *zy, integer *incy) -{ - /* System generated locals */ - integer i__1, i__2; - doublecomplex z__1, z__2, z__3; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, ix, iy; - static doublecomplex ztemp; - - -/* - forms the dot product of a vector. - jack dongarra, 3/11/78. 
- modified 12/3/93, array(1) declarations changed to array(*) -*/ - - /* Parameter adjustments */ - --zy; - --zx; - - /* Function Body */ - ztemp.r = 0., ztemp.i = 0.; - ret_val->r = 0., ret_val->i = 0.; - if (*n <= 0) { - return ; - } - if ((*incx == 1 && *incy == 1)) { - goto L20; - } - -/* - code for unequal increments or equal increments - not equal to 1 -*/ - - ix = 1; - iy = 1; - if (*incx < 0) { - ix = (-(*n) + 1) * *incx + 1; - } - if (*incy < 0) { - iy = (-(*n) + 1) * *incy + 1; - } - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - d_cnjg(&z__3, &zx[ix]); - i__2 = iy; - z__2.r = z__3.r * zy[i__2].r - z__3.i * zy[i__2].i, z__2.i = z__3.r * - zy[i__2].i + z__3.i * zy[i__2].r; - z__1.r = ztemp.r + z__2.r, z__1.i = ztemp.i + z__2.i; - ztemp.r = z__1.r, ztemp.i = z__1.i; - ix += *incx; - iy += *incy; -/* L10: */ - } - ret_val->r = ztemp.r, ret_val->i = ztemp.i; - return ; - -/* code for both increments equal to 1 */ - -L20: - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - d_cnjg(&z__3, &zx[i__]); - i__2 = i__; - z__2.r = z__3.r * zy[i__2].r - z__3.i * zy[i__2].i, z__2.i = z__3.r * - zy[i__2].i + z__3.i * zy[i__2].r; - z__1.r = ztemp.r + z__2.r, z__1.i = ztemp.i + z__2.i; - ztemp.r = z__1.r, ztemp.i = z__1.i; -/* L30: */ - } - ret_val->r = ztemp.r, ret_val->i = ztemp.i; - return ; -} /* zdotc_ */ - -/* Double Complex */ VOID zdotu_(doublecomplex * ret_val, integer *n, - doublecomplex *zx, integer *incx, doublecomplex *zy, integer *incy) -{ - /* System generated locals */ - integer i__1, i__2, i__3; - doublecomplex z__1, z__2; - - /* Local variables */ - static integer i__, ix, iy; - static doublecomplex ztemp; - - -/* - forms the dot product of two vectors. - jack dongarra, 3/11/78. 
- modified 12/3/93, array(1) declarations changed to array(*) -*/ - - /* Parameter adjustments */ - --zy; - --zx; - - /* Function Body */ - ztemp.r = 0., ztemp.i = 0.; - ret_val->r = 0., ret_val->i = 0.; - if (*n <= 0) { - return ; - } - if ((*incx == 1 && *incy == 1)) { - goto L20; - } - -/* - code for unequal increments or equal increments - not equal to 1 -*/ - - ix = 1; - iy = 1; - if (*incx < 0) { - ix = (-(*n) + 1) * *incx + 1; - } - if (*incy < 0) { - iy = (-(*n) + 1) * *incy + 1; - } - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = ix; - i__3 = iy; - z__2.r = zx[i__2].r * zy[i__3].r - zx[i__2].i * zy[i__3].i, z__2.i = - zx[i__2].r * zy[i__3].i + zx[i__2].i * zy[i__3].r; - z__1.r = ztemp.r + z__2.r, z__1.i = ztemp.i + z__2.i; - ztemp.r = z__1.r, ztemp.i = z__1.i; - ix += *incx; - iy += *incy; -/* L10: */ - } - ret_val->r = ztemp.r, ret_val->i = ztemp.i; - return ; - -/* code for both increments equal to 1 */ - -L20: - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__; - i__3 = i__; - z__2.r = zx[i__2].r * zy[i__3].r - zx[i__2].i * zy[i__3].i, z__2.i = - zx[i__2].r * zy[i__3].i + zx[i__2].i * zy[i__3].r; - z__1.r = ztemp.r + z__2.r, z__1.i = ztemp.i + z__2.i; - ztemp.r = z__1.r, ztemp.i = z__1.i; -/* L30: */ - } - ret_val->r = ztemp.r, ret_val->i = ztemp.i; - return ; -} /* zdotu_ */ - -/* Subroutine */ int zdscal_(integer *n, doublereal *da, doublecomplex *zx, - integer *incx) -{ - /* System generated locals */ - integer i__1, i__2, i__3; - doublecomplex z__1, z__2; - - /* Local variables */ - static integer i__, ix; - - -/* - scales a vector by a constant. - jack dongarra, 3/11/78. - modified 3/93 to return if incx .le. 0. 
- modified 12/3/93, array(1) declarations changed to array(*) -*/ - - - /* Parameter adjustments */ - --zx; - - /* Function Body */ - if (*n <= 0 || *incx <= 0) { - return 0; - } - if (*incx == 1) { - goto L20; - } - -/* code for increment not equal to 1 */ - - ix = 1; - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = ix; - z__2.r = *da, z__2.i = 0.; - i__3 = ix; - z__1.r = z__2.r * zx[i__3].r - z__2.i * zx[i__3].i, z__1.i = z__2.r * - zx[i__3].i + z__2.i * zx[i__3].r; - zx[i__2].r = z__1.r, zx[i__2].i = z__1.i; - ix += *incx; -/* L10: */ - } - return 0; - -/* code for increment equal to 1 */ - -L20: - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__; - z__2.r = *da, z__2.i = 0.; - i__3 = i__; - z__1.r = z__2.r * zx[i__3].r - z__2.i * zx[i__3].i, z__1.i = z__2.r * - zx[i__3].i + z__2.i * zx[i__3].r; - zx[i__2].r = z__1.r, zx[i__2].i = z__1.i; -/* L30: */ - } - return 0; -} /* zdscal_ */ - -/* Subroutine */ int zgemm_(char *transa, char *transb, integer *m, integer * - n, integer *k, doublecomplex *alpha, doublecomplex *a, integer *lda, - doublecomplex *b, integer *ldb, doublecomplex *beta, doublecomplex * - c__, integer *ldc) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, c_dim1, c_offset, i__1, i__2, - i__3, i__4, i__5, i__6; - doublecomplex z__1, z__2, z__3, z__4; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, j, l, info; - static logical nota, notb; - static doublecomplex temp; - static logical conja, conjb; - static integer ncola; - extern logical lsame_(char *, char *); - static integer nrowa, nrowb; - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - Purpose - ======= - - ZGEMM performs one of the matrix-matrix operations - - C := alpha*op( A )*op( B ) + beta*C, - - where op( X ) is one of - - op( X ) = X or op( X ) = X' or op( X ) = conjg( X' ), - - alpha and beta are scalars, and A, B and C are matrices, with 
op( A ) - an m by k matrix, op( B ) a k by n matrix and C an m by n matrix. - - Parameters - ========== - - TRANSA - CHARACTER*1. - On entry, TRANSA specifies the form of op( A ) to be used in - the matrix multiplication as follows: - - TRANSA = 'N' or 'n', op( A ) = A. - - TRANSA = 'T' or 't', op( A ) = A'. - - TRANSA = 'C' or 'c', op( A ) = conjg( A' ). - - Unchanged on exit. - - TRANSB - CHARACTER*1. - On entry, TRANSB specifies the form of op( B ) to be used in - the matrix multiplication as follows: - - TRANSB = 'N' or 'n', op( B ) = B. - - TRANSB = 'T' or 't', op( B ) = B'. - - TRANSB = 'C' or 'c', op( B ) = conjg( B' ). - - Unchanged on exit. - - M - INTEGER. - On entry, M specifies the number of rows of the matrix - op( A ) and of the matrix C. M must be at least zero. - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the number of columns of the matrix - op( B ) and the number of columns of the matrix C. N must be - at least zero. - Unchanged on exit. - - K - INTEGER. - On entry, K specifies the number of columns of the matrix - op( A ) and the number of rows of the matrix op( B ). K must - be at least zero. - Unchanged on exit. - - ALPHA - COMPLEX*16 . - On entry, ALPHA specifies the scalar alpha. - Unchanged on exit. - - A - COMPLEX*16 array of DIMENSION ( LDA, ka ), where ka is - k when TRANSA = 'N' or 'n', and is m otherwise. - Before entry with TRANSA = 'N' or 'n', the leading m by k - part of the array A must contain the matrix A, otherwise - the leading k by m part of the array A must contain the - matrix A. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. When TRANSA = 'N' or 'n' then - LDA must be at least max( 1, m ), otherwise LDA must be at - least max( 1, k ). - Unchanged on exit. - - B - COMPLEX*16 array of DIMENSION ( LDB, kb ), where kb is - n when TRANSB = 'N' or 'n', and is k otherwise. 
- Before entry with TRANSB = 'N' or 'n', the leading k by n - part of the array B must contain the matrix B, otherwise - the leading n by k part of the array B must contain the - matrix B. - Unchanged on exit. - - LDB - INTEGER. - On entry, LDB specifies the first dimension of B as declared - in the calling (sub) program. When TRANSB = 'N' or 'n' then - LDB must be at least max( 1, k ), otherwise LDB must be at - least max( 1, n ). - Unchanged on exit. - - BETA - COMPLEX*16 . - On entry, BETA specifies the scalar beta. When BETA is - supplied as zero then C need not be set on input. - Unchanged on exit. - - C - COMPLEX*16 array of DIMENSION ( LDC, n ). - Before entry, the leading m by n part of the array C must - contain the matrix C, except when beta is zero, in which - case C need not be set on entry. - On exit, the array C is overwritten by the m by n matrix - ( alpha*op( A )*op( B ) + beta*C ). - - LDC - INTEGER. - On entry, LDC specifies the first dimension of C as declared - in the calling (sub) program. LDC must be at least - max( 1, m ). - Unchanged on exit. - - - Level 3 Blas routine. - - -- Written on 8-February-1989. - Jack Dongarra, Argonne National Laboratory. - Iain Duff, AERE Harwell. - Jeremy Du Croz, Numerical Algorithms Group Ltd. - Sven Hammarling, Numerical Algorithms Group Ltd. - - - Set NOTA and NOTB as true if A and B respectively are not - conjugated or transposed, set CONJA and CONJB as true if A and - B respectively are to be transposed but not conjugated and set - NROWA, NCOLA and NROWB as the number of rows and columns of A - and the number of rows of B respectively. 
-*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - - /* Function Body */ - nota = lsame_(transa, "N"); - notb = lsame_(transb, "N"); - conja = lsame_(transa, "C"); - conjb = lsame_(transb, "C"); - if (nota) { - nrowa = *m; - ncola = *k; - } else { - nrowa = *k; - ncola = *m; - } - if (notb) { - nrowb = *k; - } else { - nrowb = *n; - } - -/* Test the input parameters. */ - - info = 0; - if (((! nota && ! conja) && ! lsame_(transa, "T"))) - { - info = 1; - } else if (((! notb && ! conjb) && ! lsame_(transb, "T"))) { - info = 2; - } else if (*m < 0) { - info = 3; - } else if (*n < 0) { - info = 4; - } else if (*k < 0) { - info = 5; - } else if (*lda < max(1,nrowa)) { - info = 8; - } else if (*ldb < max(1,nrowb)) { - info = 10; - } else if (*ldc < max(1,*m)) { - info = 13; - } - if (info != 0) { - xerbla_("ZGEMM ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*m == 0 || *n == 0 || (((alpha->r == 0. && alpha->i == 0.) || *k == 0) - && ((beta->r == 1. && beta->i == 0.)))) { - return 0; - } - -/* And when alpha.eq.zero. */ - - if ((alpha->r == 0. && alpha->i == 0.)) { - if ((beta->r == 0. && beta->i == 0.)) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - c__[i__3].r = 0., c__[i__3].i = 0.; -/* L10: */ - } -/* L20: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - z__1.r = beta->r * c__[i__4].r - beta->i * c__[i__4].i, - z__1.i = beta->r * c__[i__4].i + beta->i * c__[ - i__4].r; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L30: */ - } -/* L40: */ - } - } - return 0; - } - -/* Start the operations. */ - - if (notb) { - if (nota) { - -/* Form C := alpha*A*B + beta*C. 
*/ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if ((beta->r == 0. && beta->i == 0.)) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - c__[i__3].r = 0., c__[i__3].i = 0.; -/* L50: */ - } - } else if (beta->r != 1. || beta->i != 0.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - z__1.r = beta->r * c__[i__4].r - beta->i * c__[i__4] - .i, z__1.i = beta->r * c__[i__4].i + beta->i * - c__[i__4].r; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L60: */ - } - } - i__2 = *k; - for (l = 1; l <= i__2; ++l) { - i__3 = l + j * b_dim1; - if (b[i__3].r != 0. || b[i__3].i != 0.) { - i__3 = l + j * b_dim1; - z__1.r = alpha->r * b[i__3].r - alpha->i * b[i__3].i, - z__1.i = alpha->r * b[i__3].i + alpha->i * b[ - i__3].r; - temp.r = z__1.r, temp.i = z__1.i; - i__3 = *m; - for (i__ = 1; i__ <= i__3; ++i__) { - i__4 = i__ + j * c_dim1; - i__5 = i__ + j * c_dim1; - i__6 = i__ + l * a_dim1; - z__2.r = temp.r * a[i__6].r - temp.i * a[i__6].i, - z__2.i = temp.r * a[i__6].i + temp.i * a[ - i__6].r; - z__1.r = c__[i__5].r + z__2.r, z__1.i = c__[i__5] - .i + z__2.i; - c__[i__4].r = z__1.r, c__[i__4].i = z__1.i; -/* L70: */ - } - } -/* L80: */ - } -/* L90: */ - } - } else if (conja) { - -/* Form C := alpha*conjg( A' )*B + beta*C. */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - temp.r = 0., temp.i = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - d_cnjg(&z__3, &a[l + i__ * a_dim1]); - i__4 = l + j * b_dim1; - z__2.r = z__3.r * b[i__4].r - z__3.i * b[i__4].i, - z__2.i = z__3.r * b[i__4].i + z__3.i * b[i__4] - .r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L100: */ - } - if ((beta->r == 0. 
&& beta->i == 0.)) { - i__3 = i__ + j * c_dim1; - z__1.r = alpha->r * temp.r - alpha->i * temp.i, - z__1.i = alpha->r * temp.i + alpha->i * - temp.r; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } else { - i__3 = i__ + j * c_dim1; - z__2.r = alpha->r * temp.r - alpha->i * temp.i, - z__2.i = alpha->r * temp.i + alpha->i * - temp.r; - i__4 = i__ + j * c_dim1; - z__3.r = beta->r * c__[i__4].r - beta->i * c__[i__4] - .i, z__3.i = beta->r * c__[i__4].i + beta->i * - c__[i__4].r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } -/* L110: */ - } -/* L120: */ - } - } else { - -/* Form C := alpha*A'*B + beta*C */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - temp.r = 0., temp.i = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - i__4 = l + i__ * a_dim1; - i__5 = l + j * b_dim1; - z__2.r = a[i__4].r * b[i__5].r - a[i__4].i * b[i__5] - .i, z__2.i = a[i__4].r * b[i__5].i + a[i__4] - .i * b[i__5].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L130: */ - } - if ((beta->r == 0. && beta->i == 0.)) { - i__3 = i__ + j * c_dim1; - z__1.r = alpha->r * temp.r - alpha->i * temp.i, - z__1.i = alpha->r * temp.i + alpha->i * - temp.r; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } else { - i__3 = i__ + j * c_dim1; - z__2.r = alpha->r * temp.r - alpha->i * temp.i, - z__2.i = alpha->r * temp.i + alpha->i * - temp.r; - i__4 = i__ + j * c_dim1; - z__3.r = beta->r * c__[i__4].r - beta->i * c__[i__4] - .i, z__3.i = beta->r * c__[i__4].i + beta->i * - c__[i__4].r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } -/* L140: */ - } -/* L150: */ - } - } - } else if (nota) { - if (conjb) { - -/* Form C := alpha*A*conjg( B' ) + beta*C. */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if ((beta->r == 0. 
&& beta->i == 0.)) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - c__[i__3].r = 0., c__[i__3].i = 0.; -/* L160: */ - } - } else if (beta->r != 1. || beta->i != 0.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - z__1.r = beta->r * c__[i__4].r - beta->i * c__[i__4] - .i, z__1.i = beta->r * c__[i__4].i + beta->i * - c__[i__4].r; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L170: */ - } - } - i__2 = *k; - for (l = 1; l <= i__2; ++l) { - i__3 = j + l * b_dim1; - if (b[i__3].r != 0. || b[i__3].i != 0.) { - d_cnjg(&z__2, &b[j + l * b_dim1]); - z__1.r = alpha->r * z__2.r - alpha->i * z__2.i, - z__1.i = alpha->r * z__2.i + alpha->i * - z__2.r; - temp.r = z__1.r, temp.i = z__1.i; - i__3 = *m; - for (i__ = 1; i__ <= i__3; ++i__) { - i__4 = i__ + j * c_dim1; - i__5 = i__ + j * c_dim1; - i__6 = i__ + l * a_dim1; - z__2.r = temp.r * a[i__6].r - temp.i * a[i__6].i, - z__2.i = temp.r * a[i__6].i + temp.i * a[ - i__6].r; - z__1.r = c__[i__5].r + z__2.r, z__1.i = c__[i__5] - .i + z__2.i; - c__[i__4].r = z__1.r, c__[i__4].i = z__1.i; -/* L180: */ - } - } -/* L190: */ - } -/* L200: */ - } - } else { - -/* Form C := alpha*A*B' + beta*C */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if ((beta->r == 0. && beta->i == 0.)) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - c__[i__3].r = 0., c__[i__3].i = 0.; -/* L210: */ - } - } else if (beta->r != 1. || beta->i != 0.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - z__1.r = beta->r * c__[i__4].r - beta->i * c__[i__4] - .i, z__1.i = beta->r * c__[i__4].i + beta->i * - c__[i__4].r; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L220: */ - } - } - i__2 = *k; - for (l = 1; l <= i__2; ++l) { - i__3 = j + l * b_dim1; - if (b[i__3].r != 0. || b[i__3].i != 0.) 
{ - i__3 = j + l * b_dim1; - z__1.r = alpha->r * b[i__3].r - alpha->i * b[i__3].i, - z__1.i = alpha->r * b[i__3].i + alpha->i * b[ - i__3].r; - temp.r = z__1.r, temp.i = z__1.i; - i__3 = *m; - for (i__ = 1; i__ <= i__3; ++i__) { - i__4 = i__ + j * c_dim1; - i__5 = i__ + j * c_dim1; - i__6 = i__ + l * a_dim1; - z__2.r = temp.r * a[i__6].r - temp.i * a[i__6].i, - z__2.i = temp.r * a[i__6].i + temp.i * a[ - i__6].r; - z__1.r = c__[i__5].r + z__2.r, z__1.i = c__[i__5] - .i + z__2.i; - c__[i__4].r = z__1.r, c__[i__4].i = z__1.i; -/* L230: */ - } - } -/* L240: */ - } -/* L250: */ - } - } - } else if (conja) { - if (conjb) { - -/* Form C := alpha*conjg( A' )*conjg( B' ) + beta*C. */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - temp.r = 0., temp.i = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - d_cnjg(&z__3, &a[l + i__ * a_dim1]); - d_cnjg(&z__4, &b[j + l * b_dim1]); - z__2.r = z__3.r * z__4.r - z__3.i * z__4.i, z__2.i = - z__3.r * z__4.i + z__3.i * z__4.r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L260: */ - } - if ((beta->r == 0. 
&& beta->i == 0.)) { - i__3 = i__ + j * c_dim1; - z__1.r = alpha->r * temp.r - alpha->i * temp.i, - z__1.i = alpha->r * temp.i + alpha->i * - temp.r; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } else { - i__3 = i__ + j * c_dim1; - z__2.r = alpha->r * temp.r - alpha->i * temp.i, - z__2.i = alpha->r * temp.i + alpha->i * - temp.r; - i__4 = i__ + j * c_dim1; - z__3.r = beta->r * c__[i__4].r - beta->i * c__[i__4] - .i, z__3.i = beta->r * c__[i__4].i + beta->i * - c__[i__4].r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } -/* L270: */ - } -/* L280: */ - } - } else { - -/* Form C := alpha*conjg( A' )*B' + beta*C */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - temp.r = 0., temp.i = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - d_cnjg(&z__3, &a[l + i__ * a_dim1]); - i__4 = j + l * b_dim1; - z__2.r = z__3.r * b[i__4].r - z__3.i * b[i__4].i, - z__2.i = z__3.r * b[i__4].i + z__3.i * b[i__4] - .r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L290: */ - } - if ((beta->r == 0. 
&& beta->i == 0.)) { - i__3 = i__ + j * c_dim1; - z__1.r = alpha->r * temp.r - alpha->i * temp.i, - z__1.i = alpha->r * temp.i + alpha->i * - temp.r; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } else { - i__3 = i__ + j * c_dim1; - z__2.r = alpha->r * temp.r - alpha->i * temp.i, - z__2.i = alpha->r * temp.i + alpha->i * - temp.r; - i__4 = i__ + j * c_dim1; - z__3.r = beta->r * c__[i__4].r - beta->i * c__[i__4] - .i, z__3.i = beta->r * c__[i__4].i + beta->i * - c__[i__4].r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } -/* L300: */ - } -/* L310: */ - } - } - } else { - if (conjb) { - -/* Form C := alpha*A'*conjg( B' ) + beta*C */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - temp.r = 0., temp.i = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - i__4 = l + i__ * a_dim1; - d_cnjg(&z__3, &b[j + l * b_dim1]); - z__2.r = a[i__4].r * z__3.r - a[i__4].i * z__3.i, - z__2.i = a[i__4].r * z__3.i + a[i__4].i * - z__3.r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L320: */ - } - if ((beta->r == 0. 
&& beta->i == 0.)) { - i__3 = i__ + j * c_dim1; - z__1.r = alpha->r * temp.r - alpha->i * temp.i, - z__1.i = alpha->r * temp.i + alpha->i * - temp.r; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } else { - i__3 = i__ + j * c_dim1; - z__2.r = alpha->r * temp.r - alpha->i * temp.i, - z__2.i = alpha->r * temp.i + alpha->i * - temp.r; - i__4 = i__ + j * c_dim1; - z__3.r = beta->r * c__[i__4].r - beta->i * c__[i__4] - .i, z__3.i = beta->r * c__[i__4].i + beta->i * - c__[i__4].r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } -/* L330: */ - } -/* L340: */ - } - } else { - -/* Form C := alpha*A'*B' + beta*C */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - temp.r = 0., temp.i = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - i__4 = l + i__ * a_dim1; - i__5 = j + l * b_dim1; - z__2.r = a[i__4].r * b[i__5].r - a[i__4].i * b[i__5] - .i, z__2.i = a[i__4].r * b[i__5].i + a[i__4] - .i * b[i__5].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L350: */ - } - if ((beta->r == 0. && beta->i == 0.)) { - i__3 = i__ + j * c_dim1; - z__1.r = alpha->r * temp.r - alpha->i * temp.i, - z__1.i = alpha->r * temp.i + alpha->i * - temp.r; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } else { - i__3 = i__ + j * c_dim1; - z__2.r = alpha->r * temp.r - alpha->i * temp.i, - z__2.i = alpha->r * temp.i + alpha->i * - temp.r; - i__4 = i__ + j * c_dim1; - z__3.r = beta->r * c__[i__4].r - beta->i * c__[i__4] - .i, z__3.i = beta->r * c__[i__4].i + beta->i * - c__[i__4].r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } -/* L360: */ - } -/* L370: */ - } - } - } - - return 0; - -/* End of ZGEMM . 
*/ - -} /* zgemm_ */ - -/* Subroutine */ int zgemv_(char *trans, integer *m, integer *n, - doublecomplex *alpha, doublecomplex *a, integer *lda, doublecomplex * - x, integer *incx, doublecomplex *beta, doublecomplex *y, integer * - incy) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4, i__5; - doublecomplex z__1, z__2, z__3; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, j, ix, iy, jx, jy, kx, ky, info; - static doublecomplex temp; - static integer lenx, leny; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int xerbla_(char *, integer *); - static logical noconj; - - -/* - Purpose - ======= - - ZGEMV performs one of the matrix-vector operations - - y := alpha*A*x + beta*y, or y := alpha*A'*x + beta*y, or - - y := alpha*conjg( A' )*x + beta*y, - - where alpha and beta are scalars, x and y are vectors and A is an - m by n matrix. - - Parameters - ========== - - TRANS - CHARACTER*1. - On entry, TRANS specifies the operation to be performed as - follows: - - TRANS = 'N' or 'n' y := alpha*A*x + beta*y. - - TRANS = 'T' or 't' y := alpha*A'*x + beta*y. - - TRANS = 'C' or 'c' y := alpha*conjg( A' )*x + beta*y. - - Unchanged on exit. - - M - INTEGER. - On entry, M specifies the number of rows of the matrix A. - M must be at least zero. - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the number of columns of the matrix A. - N must be at least zero. - Unchanged on exit. - - ALPHA - COMPLEX*16 . - On entry, ALPHA specifies the scalar alpha. - Unchanged on exit. - - A - COMPLEX*16 array of DIMENSION ( LDA, n ). - Before entry, the leading m by n part of the array A must - contain the matrix of coefficients. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. LDA must be at least - max( 1, m ). - Unchanged on exit. 
- - X - COMPLEX*16 array of DIMENSION at least - ( 1 + ( n - 1 )*abs( INCX ) ) when TRANS = 'N' or 'n' - and at least - ( 1 + ( m - 1 )*abs( INCX ) ) otherwise. - Before entry, the incremented array X must contain the - vector x. - Unchanged on exit. - - INCX - INTEGER. - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. - Unchanged on exit. - - BETA - COMPLEX*16 . - On entry, BETA specifies the scalar beta. When BETA is - supplied as zero then Y need not be set on input. - Unchanged on exit. - - Y - COMPLEX*16 array of DIMENSION at least - ( 1 + ( m - 1 )*abs( INCY ) ) when TRANS = 'N' or 'n' - and at least - ( 1 + ( n - 1 )*abs( INCY ) ) otherwise. - Before entry with BETA non-zero, the incremented array Y - must contain the vector y. On exit, Y is overwritten by the - updated vector y. - - INCY - INTEGER. - On entry, INCY specifies the increment for the elements of - Y. INCY must not be zero. - Unchanged on exit. - - - Level 2 Blas routine. - - -- Written on 22-October-1986. - Jack Dongarra, Argonne National Lab. - Jeremy Du Croz, Nag Central Office. - Sven Hammarling, Nag Central Office. - Richard Hanson, Sandia National Labs. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --x; - --y; - - /* Function Body */ - info = 0; - if (((! lsame_(trans, "N") && ! lsame_(trans, "T")) && ! lsame_(trans, "C"))) { - info = 1; - } else if (*m < 0) { - info = 2; - } else if (*n < 0) { - info = 3; - } else if (*lda < max(1,*m)) { - info = 6; - } else if (*incx == 0) { - info = 8; - } else if (*incy == 0) { - info = 11; - } - if (info != 0) { - xerbla_("ZGEMV ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*m == 0 || *n == 0 || ((alpha->r == 0. && alpha->i == 0.) && (( - beta->r == 1. 
&& beta->i == 0.)))) { - return 0; - } - - noconj = lsame_(trans, "T"); - -/* - Set LENX and LENY, the lengths of the vectors x and y, and set - up the start points in X and Y. -*/ - - if (lsame_(trans, "N")) { - lenx = *n; - leny = *m; - } else { - lenx = *m; - leny = *n; - } - if (*incx > 0) { - kx = 1; - } else { - kx = 1 - (lenx - 1) * *incx; - } - if (*incy > 0) { - ky = 1; - } else { - ky = 1 - (leny - 1) * *incy; - } - -/* - Start the operations. In this version the elements of A are - accessed sequentially with one pass through A. - - First form y := beta*y. -*/ - - if (beta->r != 1. || beta->i != 0.) { - if (*incy == 1) { - if ((beta->r == 0. && beta->i == 0.)) { - i__1 = leny; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__; - y[i__2].r = 0., y[i__2].i = 0.; -/* L10: */ - } - } else { - i__1 = leny; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__; - i__3 = i__; - z__1.r = beta->r * y[i__3].r - beta->i * y[i__3].i, - z__1.i = beta->r * y[i__3].i + beta->i * y[i__3] - .r; - y[i__2].r = z__1.r, y[i__2].i = z__1.i; -/* L20: */ - } - } - } else { - iy = ky; - if ((beta->r == 0. && beta->i == 0.)) { - i__1 = leny; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = iy; - y[i__2].r = 0., y[i__2].i = 0.; - iy += *incy; -/* L30: */ - } - } else { - i__1 = leny; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = iy; - i__3 = iy; - z__1.r = beta->r * y[i__3].r - beta->i * y[i__3].i, - z__1.i = beta->r * y[i__3].i + beta->i * y[i__3] - .r; - y[i__2].r = z__1.r, y[i__2].i = z__1.i; - iy += *incy; -/* L40: */ - } - } - } - } - if ((alpha->r == 0. && alpha->i == 0.)) { - return 0; - } - if (lsame_(trans, "N")) { - -/* Form y := alpha*A*x + y. */ - - jx = kx; - if (*incy == 1) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = jx; - if (x[i__2].r != 0. || x[i__2].i != 0.) 
{ - i__2 = jx; - z__1.r = alpha->r * x[i__2].r - alpha->i * x[i__2].i, - z__1.i = alpha->r * x[i__2].i + alpha->i * x[i__2] - .r; - temp.r = z__1.r, temp.i = z__1.i; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__; - i__4 = i__; - i__5 = i__ + j * a_dim1; - z__2.r = temp.r * a[i__5].r - temp.i * a[i__5].i, - z__2.i = temp.r * a[i__5].i + temp.i * a[i__5] - .r; - z__1.r = y[i__4].r + z__2.r, z__1.i = y[i__4].i + - z__2.i; - y[i__3].r = z__1.r, y[i__3].i = z__1.i; -/* L50: */ - } - } - jx += *incx; -/* L60: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = jx; - if (x[i__2].r != 0. || x[i__2].i != 0.) { - i__2 = jx; - z__1.r = alpha->r * x[i__2].r - alpha->i * x[i__2].i, - z__1.i = alpha->r * x[i__2].i + alpha->i * x[i__2] - .r; - temp.r = z__1.r, temp.i = z__1.i; - iy = ky; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = iy; - i__4 = iy; - i__5 = i__ + j * a_dim1; - z__2.r = temp.r * a[i__5].r - temp.i * a[i__5].i, - z__2.i = temp.r * a[i__5].i + temp.i * a[i__5] - .r; - z__1.r = y[i__4].r + z__2.r, z__1.i = y[i__4].i + - z__2.i; - y[i__3].r = z__1.r, y[i__3].i = z__1.i; - iy += *incy; -/* L70: */ - } - } - jx += *incx; -/* L80: */ - } - } - } else { - -/* Form y := alpha*A'*x + y or y := alpha*conjg( A' )*x + y. 
*/ - - jy = ky; - if (*incx == 1) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - temp.r = 0., temp.i = 0.; - if (noconj) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__; - z__2.r = a[i__3].r * x[i__4].r - a[i__3].i * x[i__4] - .i, z__2.i = a[i__3].r * x[i__4].i + a[i__3] - .i * x[i__4].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L90: */ - } - } else { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - d_cnjg(&z__3, &a[i__ + j * a_dim1]); - i__3 = i__; - z__2.r = z__3.r * x[i__3].r - z__3.i * x[i__3].i, - z__2.i = z__3.r * x[i__3].i + z__3.i * x[i__3] - .r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L100: */ - } - } - i__2 = jy; - i__3 = jy; - z__2.r = alpha->r * temp.r - alpha->i * temp.i, z__2.i = - alpha->r * temp.i + alpha->i * temp.r; - z__1.r = y[i__3].r + z__2.r, z__1.i = y[i__3].i + z__2.i; - y[i__2].r = z__1.r, y[i__2].i = z__1.i; - jy += *incy; -/* L110: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - temp.r = 0., temp.i = 0.; - ix = kx; - if (noconj) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = ix; - z__2.r = a[i__3].r * x[i__4].r - a[i__3].i * x[i__4] - .i, z__2.i = a[i__3].r * x[i__4].i + a[i__3] - .i * x[i__4].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + z__2.i; - temp.r = z__1.r, temp.i = z__1.i; - ix += *incx; -/* L120: */ - } - } else { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - d_cnjg(&z__3, &a[i__ + j * a_dim1]); - i__3 = ix; - z__2.r = z__3.r * x[i__3].r - z__3.i * x[i__3].i, - z__2.i = z__3.r * x[i__3].i + z__3.i * x[i__3] - .r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + z__2.i; - temp.r = z__1.r, temp.i = z__1.i; - ix += *incx; -/* L130: */ - } - } - i__2 = jy; - i__3 = jy; - z__2.r = alpha->r * temp.r - alpha->i * temp.i, z__2.i = - alpha->r * temp.i + alpha->i * temp.r; - z__1.r = y[i__3].r + z__2.r, z__1.i = 
y[i__3].i + z__2.i; - y[i__2].r = z__1.r, y[i__2].i = z__1.i; - jy += *incy; -/* L140: */ - } - } - } - - return 0; - -/* End of ZGEMV . */ - -} /* zgemv_ */ - -/* Subroutine */ int zgerc_(integer *m, integer *n, doublecomplex *alpha, - doublecomplex *x, integer *incx, doublecomplex *y, integer *incy, - doublecomplex *a, integer *lda) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4, i__5; - doublecomplex z__1, z__2; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, j, ix, jy, kx, info; - static doublecomplex temp; - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - Purpose - ======= - - ZGERC performs the rank 1 operation - - A := alpha*x*conjg( y' ) + A, - - where alpha is a scalar, x is an m element vector, y is an n element - vector and A is an m by n matrix. - - Parameters - ========== - - M - INTEGER. - On entry, M specifies the number of rows of the matrix A. - M must be at least zero. - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the number of columns of the matrix A. - N must be at least zero. - Unchanged on exit. - - ALPHA - COMPLEX*16 . - On entry, ALPHA specifies the scalar alpha. - Unchanged on exit. - - X - COMPLEX*16 array of dimension at least - ( 1 + ( m - 1 )*abs( INCX ) ). - Before entry, the incremented array X must contain the m - element vector x. - Unchanged on exit. - - INCX - INTEGER. - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. - Unchanged on exit. - - Y - COMPLEX*16 array of dimension at least - ( 1 + ( n - 1 )*abs( INCY ) ). - Before entry, the incremented array Y must contain the n - element vector y. - Unchanged on exit. - - INCY - INTEGER. - On entry, INCY specifies the increment for the elements of - Y. INCY must not be zero. - Unchanged on exit. - - A - COMPLEX*16 array of DIMENSION ( LDA, n ). 
- Before entry, the leading m by n part of the array A must - contain the matrix of coefficients. On exit, A is - overwritten by the updated matrix. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. LDA must be at least - max( 1, m ). - Unchanged on exit. - - - Level 2 Blas routine. - - -- Written on 22-October-1986. - Jack Dongarra, Argonne National Lab. - Jeremy Du Croz, Nag Central Office. - Sven Hammarling, Nag Central Office. - Richard Hanson, Sandia National Labs. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --x; - --y; - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - - /* Function Body */ - info = 0; - if (*m < 0) { - info = 1; - } else if (*n < 0) { - info = 2; - } else if (*incx == 0) { - info = 5; - } else if (*incy == 0) { - info = 7; - } else if (*lda < max(1,*m)) { - info = 9; - } - if (info != 0) { - xerbla_("ZGERC ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*m == 0 || *n == 0 || (alpha->r == 0. && alpha->i == 0.)) { - return 0; - } - -/* - Start the operations. In this version the elements of A are - accessed sequentially with one pass through A. -*/ - - if (*incy > 0) { - jy = 1; - } else { - jy = 1 - (*n - 1) * *incy; - } - if (*incx == 1) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = jy; - if (y[i__2].r != 0. || y[i__2].i != 0.) 
{ - d_cnjg(&z__2, &y[jy]); - z__1.r = alpha->r * z__2.r - alpha->i * z__2.i, z__1.i = - alpha->r * z__2.i + alpha->i * z__2.r; - temp.r = z__1.r, temp.i = z__1.i; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__ + j * a_dim1; - i__5 = i__; - z__2.r = x[i__5].r * temp.r - x[i__5].i * temp.i, z__2.i = - x[i__5].r * temp.i + x[i__5].i * temp.r; - z__1.r = a[i__4].r + z__2.r, z__1.i = a[i__4].i + z__2.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L10: */ - } - } - jy += *incy; -/* L20: */ - } - } else { - if (*incx > 0) { - kx = 1; - } else { - kx = 1 - (*m - 1) * *incx; - } - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = jy; - if (y[i__2].r != 0. || y[i__2].i != 0.) { - d_cnjg(&z__2, &y[jy]); - z__1.r = alpha->r * z__2.r - alpha->i * z__2.i, z__1.i = - alpha->r * z__2.i + alpha->i * z__2.r; - temp.r = z__1.r, temp.i = z__1.i; - ix = kx; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__ + j * a_dim1; - i__5 = ix; - z__2.r = x[i__5].r * temp.r - x[i__5].i * temp.i, z__2.i = - x[i__5].r * temp.i + x[i__5].i * temp.r; - z__1.r = a[i__4].r + z__2.r, z__1.i = a[i__4].i + z__2.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; - ix += *incx; -/* L30: */ - } - } - jy += *incy; -/* L40: */ - } - } - - return 0; - -/* End of ZGERC . 
*/ - -} /* zgerc_ */ - -/* Subroutine */ int zgeru_(integer *m, integer *n, doublecomplex *alpha, - doublecomplex *x, integer *incx, doublecomplex *y, integer *incy, - doublecomplex *a, integer *lda) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4, i__5; - doublecomplex z__1, z__2; - - /* Local variables */ - static integer i__, j, ix, jy, kx, info; - static doublecomplex temp; - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - Purpose - ======= - - ZGERU performs the rank 1 operation - - A := alpha*x*y' + A, - - where alpha is a scalar, x is an m element vector, y is an n element - vector and A is an m by n matrix. - - Parameters - ========== - - M - INTEGER. - On entry, M specifies the number of rows of the matrix A. - M must be at least zero. - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the number of columns of the matrix A. - N must be at least zero. - Unchanged on exit. - - ALPHA - COMPLEX*16 . - On entry, ALPHA specifies the scalar alpha. - Unchanged on exit. - - X - COMPLEX*16 array of dimension at least - ( 1 + ( m - 1 )*abs( INCX ) ). - Before entry, the incremented array X must contain the m - element vector x. - Unchanged on exit. - - INCX - INTEGER. - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. - Unchanged on exit. - - Y - COMPLEX*16 array of dimension at least - ( 1 + ( n - 1 )*abs( INCY ) ). - Before entry, the incremented array Y must contain the n - element vector y. - Unchanged on exit. - - INCY - INTEGER. - On entry, INCY specifies the increment for the elements of - Y. INCY must not be zero. - Unchanged on exit. - - A - COMPLEX*16 array of DIMENSION ( LDA, n ). - Before entry, the leading m by n part of the array A must - contain the matrix of coefficients. On exit, A is - overwritten by the updated matrix. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. 
LDA must be at least - max( 1, m ). - Unchanged on exit. - - - Level 2 Blas routine. - - -- Written on 22-October-1986. - Jack Dongarra, Argonne National Lab. - Jeremy Du Croz, Nag Central Office. - Sven Hammarling, Nag Central Office. - Richard Hanson, Sandia National Labs. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --x; - --y; - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - - /* Function Body */ - info = 0; - if (*m < 0) { - info = 1; - } else if (*n < 0) { - info = 2; - } else if (*incx == 0) { - info = 5; - } else if (*incy == 0) { - info = 7; - } else if (*lda < max(1,*m)) { - info = 9; - } - if (info != 0) { - xerbla_("ZGERU ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*m == 0 || *n == 0 || (alpha->r == 0. && alpha->i == 0.)) { - return 0; - } - -/* - Start the operations. In this version the elements of A are - accessed sequentially with one pass through A. -*/ - - if (*incy > 0) { - jy = 1; - } else { - jy = 1 - (*n - 1) * *incy; - } - if (*incx == 1) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = jy; - if (y[i__2].r != 0. || y[i__2].i != 0.) { - i__2 = jy; - z__1.r = alpha->r * y[i__2].r - alpha->i * y[i__2].i, z__1.i = - alpha->r * y[i__2].i + alpha->i * y[i__2].r; - temp.r = z__1.r, temp.i = z__1.i; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__ + j * a_dim1; - i__5 = i__; - z__2.r = x[i__5].r * temp.r - x[i__5].i * temp.i, z__2.i = - x[i__5].r * temp.i + x[i__5].i * temp.r; - z__1.r = a[i__4].r + z__2.r, z__1.i = a[i__4].i + z__2.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L10: */ - } - } - jy += *incy; -/* L20: */ - } - } else { - if (*incx > 0) { - kx = 1; - } else { - kx = 1 - (*m - 1) * *incx; - } - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = jy; - if (y[i__2].r != 0. || y[i__2].i != 0.) 
{ - i__2 = jy; - z__1.r = alpha->r * y[i__2].r - alpha->i * y[i__2].i, z__1.i = - alpha->r * y[i__2].i + alpha->i * y[i__2].r; - temp.r = z__1.r, temp.i = z__1.i; - ix = kx; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__ + j * a_dim1; - i__5 = ix; - z__2.r = x[i__5].r * temp.r - x[i__5].i * temp.i, z__2.i = - x[i__5].r * temp.i + x[i__5].i * temp.r; - z__1.r = a[i__4].r + z__2.r, z__1.i = a[i__4].i + z__2.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; - ix += *incx; -/* L30: */ - } - } - jy += *incy; -/* L40: */ - } - } - - return 0; - -/* End of ZGERU . */ - -} /* zgeru_ */ - -/* Subroutine */ int zhemv_(char *uplo, integer *n, doublecomplex *alpha, - doublecomplex *a, integer *lda, doublecomplex *x, integer *incx, - doublecomplex *beta, doublecomplex *y, integer *incy) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4, i__5; - doublereal d__1; - doublecomplex z__1, z__2, z__3, z__4; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, j, ix, iy, jx, jy, kx, ky, info; - static doublecomplex temp1, temp2; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - Purpose - ======= - - ZHEMV performs the matrix-vector operation - - y := alpha*A*x + beta*y, - - where alpha and beta are scalars, x and y are n element vectors and - A is an n by n hermitian matrix. - - Parameters - ========== - - UPLO - CHARACTER*1. - On entry, UPLO specifies whether the upper or lower - triangular part of the array A is to be referenced as - follows: - - UPLO = 'U' or 'u' Only the upper triangular part of A - is to be referenced. - - UPLO = 'L' or 'l' Only the lower triangular part of A - is to be referenced. - - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the order of the matrix A. - N must be at least zero. - Unchanged on exit. - - ALPHA - COMPLEX*16 . 
- On entry, ALPHA specifies the scalar alpha. - Unchanged on exit. - - A - COMPLEX*16 array of DIMENSION ( LDA, n ). - Before entry with UPLO = 'U' or 'u', the leading n by n - upper triangular part of the array A must contain the upper - triangular part of the hermitian matrix and the strictly - lower triangular part of A is not referenced. - Before entry with UPLO = 'L' or 'l', the leading n by n - lower triangular part of the array A must contain the lower - triangular part of the hermitian matrix and the strictly - upper triangular part of A is not referenced. - Note that the imaginary parts of the diagonal elements need - not be set and are assumed to be zero. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. LDA must be at least - max( 1, n ). - Unchanged on exit. - - X - COMPLEX*16 array of dimension at least - ( 1 + ( n - 1 )*abs( INCX ) ). - Before entry, the incremented array X must contain the n - element vector x. - Unchanged on exit. - - INCX - INTEGER. - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. - Unchanged on exit. - - BETA - COMPLEX*16 . - On entry, BETA specifies the scalar beta. When BETA is - supplied as zero then Y need not be set on input. - Unchanged on exit. - - Y - COMPLEX*16 array of dimension at least - ( 1 + ( n - 1 )*abs( INCY ) ). - Before entry, the incremented array Y must contain the n - element vector y. On exit, Y is overwritten by the updated - vector y. - - INCY - INTEGER. - On entry, INCY specifies the increment for the elements of - Y. INCY must not be zero. - Unchanged on exit. - - - Level 2 Blas routine. - - -- Written on 22-October-1986. - Jack Dongarra, Argonne National Lab. - Jeremy Du Croz, Nag Central Office. - Sven Hammarling, Nag Central Office. - Richard Hanson, Sandia National Labs. - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --x; - --y; - - /* Function Body */ - info = 0; - if ((! lsame_(uplo, "U") && ! lsame_(uplo, "L"))) { - info = 1; - } else if (*n < 0) { - info = 2; - } else if (*lda < max(1,*n)) { - info = 5; - } else if (*incx == 0) { - info = 7; - } else if (*incy == 0) { - info = 10; - } - if (info != 0) { - xerbla_("ZHEMV ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0 || ((alpha->r == 0. && alpha->i == 0.) && ((beta->r == 1. && - beta->i == 0.)))) { - return 0; - } - -/* Set up the start points in X and Y. */ - - if (*incx > 0) { - kx = 1; - } else { - kx = 1 - (*n - 1) * *incx; - } - if (*incy > 0) { - ky = 1; - } else { - ky = 1 - (*n - 1) * *incy; - } - -/* - Start the operations. In this version the elements of A are - accessed sequentially with one pass through the triangular part - of A. - - First form y := beta*y. -*/ - - if (beta->r != 1. || beta->i != 0.) { - if (*incy == 1) { - if ((beta->r == 0. && beta->i == 0.)) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__; - y[i__2].r = 0., y[i__2].i = 0.; -/* L10: */ - } - } else { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__; - i__3 = i__; - z__1.r = beta->r * y[i__3].r - beta->i * y[i__3].i, - z__1.i = beta->r * y[i__3].i + beta->i * y[i__3] - .r; - y[i__2].r = z__1.r, y[i__2].i = z__1.i; -/* L20: */ - } - } - } else { - iy = ky; - if ((beta->r == 0. && beta->i == 0.)) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = iy; - y[i__2].r = 0., y[i__2].i = 0.; - iy += *incy; -/* L30: */ - } - } else { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = iy; - i__3 = iy; - z__1.r = beta->r * y[i__3].r - beta->i * y[i__3].i, - z__1.i = beta->r * y[i__3].i + beta->i * y[i__3] - .r; - y[i__2].r = z__1.r, y[i__2].i = z__1.i; - iy += *incy; -/* L40: */ - } - } - } - } - if ((alpha->r == 0. 
&& alpha->i == 0.)) { - return 0; - } - if (lsame_(uplo, "U")) { - -/* Form y when A is stored in upper triangle. */ - - if ((*incx == 1 && *incy == 1)) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - z__1.r = alpha->r * x[i__2].r - alpha->i * x[i__2].i, z__1.i = - alpha->r * x[i__2].i + alpha->i * x[i__2].r; - temp1.r = z__1.r, temp1.i = z__1.i; - temp2.r = 0., temp2.i = 0.; - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__; - i__4 = i__; - i__5 = i__ + j * a_dim1; - z__2.r = temp1.r * a[i__5].r - temp1.i * a[i__5].i, - z__2.i = temp1.r * a[i__5].i + temp1.i * a[i__5] - .r; - z__1.r = y[i__4].r + z__2.r, z__1.i = y[i__4].i + z__2.i; - y[i__3].r = z__1.r, y[i__3].i = z__1.i; - d_cnjg(&z__3, &a[i__ + j * a_dim1]); - i__3 = i__; - z__2.r = z__3.r * x[i__3].r - z__3.i * x[i__3].i, z__2.i = - z__3.r * x[i__3].i + z__3.i * x[i__3].r; - z__1.r = temp2.r + z__2.r, z__1.i = temp2.i + z__2.i; - temp2.r = z__1.r, temp2.i = z__1.i; -/* L50: */ - } - i__2 = j; - i__3 = j; - i__4 = j + j * a_dim1; - d__1 = a[i__4].r; - z__3.r = d__1 * temp1.r, z__3.i = d__1 * temp1.i; - z__2.r = y[i__3].r + z__3.r, z__2.i = y[i__3].i + z__3.i; - z__4.r = alpha->r * temp2.r - alpha->i * temp2.i, z__4.i = - alpha->r * temp2.i + alpha->i * temp2.r; - z__1.r = z__2.r + z__4.r, z__1.i = z__2.i + z__4.i; - y[i__2].r = z__1.r, y[i__2].i = z__1.i; -/* L60: */ - } - } else { - jx = kx; - jy = ky; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = jx; - z__1.r = alpha->r * x[i__2].r - alpha->i * x[i__2].i, z__1.i = - alpha->r * x[i__2].i + alpha->i * x[i__2].r; - temp1.r = z__1.r, temp1.i = z__1.i; - temp2.r = 0., temp2.i = 0.; - ix = kx; - iy = ky; - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = iy; - i__4 = iy; - i__5 = i__ + j * a_dim1; - z__2.r = temp1.r * a[i__5].r - temp1.i * a[i__5].i, - z__2.i = temp1.r * a[i__5].i + temp1.i * a[i__5] - .r; - z__1.r = y[i__4].r + z__2.r, z__1.i = y[i__4].i + z__2.i; - y[i__3].r = z__1.r, y[i__3].i = z__1.i; - 
d_cnjg(&z__3, &a[i__ + j * a_dim1]); - i__3 = ix; - z__2.r = z__3.r * x[i__3].r - z__3.i * x[i__3].i, z__2.i = - z__3.r * x[i__3].i + z__3.i * x[i__3].r; - z__1.r = temp2.r + z__2.r, z__1.i = temp2.i + z__2.i; - temp2.r = z__1.r, temp2.i = z__1.i; - ix += *incx; - iy += *incy; -/* L70: */ - } - i__2 = jy; - i__3 = jy; - i__4 = j + j * a_dim1; - d__1 = a[i__4].r; - z__3.r = d__1 * temp1.r, z__3.i = d__1 * temp1.i; - z__2.r = y[i__3].r + z__3.r, z__2.i = y[i__3].i + z__3.i; - z__4.r = alpha->r * temp2.r - alpha->i * temp2.i, z__4.i = - alpha->r * temp2.i + alpha->i * temp2.r; - z__1.r = z__2.r + z__4.r, z__1.i = z__2.i + z__4.i; - y[i__2].r = z__1.r, y[i__2].i = z__1.i; - jx += *incx; - jy += *incy; -/* L80: */ - } - } - } else { - -/* Form y when A is stored in lower triangle. */ - - if ((*incx == 1 && *incy == 1)) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - z__1.r = alpha->r * x[i__2].r - alpha->i * x[i__2].i, z__1.i = - alpha->r * x[i__2].i + alpha->i * x[i__2].r; - temp1.r = z__1.r, temp1.i = z__1.i; - temp2.r = 0., temp2.i = 0.; - i__2 = j; - i__3 = j; - i__4 = j + j * a_dim1; - d__1 = a[i__4].r; - z__2.r = d__1 * temp1.r, z__2.i = d__1 * temp1.i; - z__1.r = y[i__3].r + z__2.r, z__1.i = y[i__3].i + z__2.i; - y[i__2].r = z__1.r, y[i__2].i = z__1.i; - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - i__3 = i__; - i__4 = i__; - i__5 = i__ + j * a_dim1; - z__2.r = temp1.r * a[i__5].r - temp1.i * a[i__5].i, - z__2.i = temp1.r * a[i__5].i + temp1.i * a[i__5] - .r; - z__1.r = y[i__4].r + z__2.r, z__1.i = y[i__4].i + z__2.i; - y[i__3].r = z__1.r, y[i__3].i = z__1.i; - d_cnjg(&z__3, &a[i__ + j * a_dim1]); - i__3 = i__; - z__2.r = z__3.r * x[i__3].r - z__3.i * x[i__3].i, z__2.i = - z__3.r * x[i__3].i + z__3.i * x[i__3].r; - z__1.r = temp2.r + z__2.r, z__1.i = temp2.i + z__2.i; - temp2.r = z__1.r, temp2.i = z__1.i; -/* L90: */ - } - i__2 = j; - i__3 = j; - z__2.r = alpha->r * temp2.r - alpha->i * temp2.i, z__2.i = - alpha->r * temp2.i + alpha->i * 
temp2.r; - z__1.r = y[i__3].r + z__2.r, z__1.i = y[i__3].i + z__2.i; - y[i__2].r = z__1.r, y[i__2].i = z__1.i; -/* L100: */ - } - } else { - jx = kx; - jy = ky; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = jx; - z__1.r = alpha->r * x[i__2].r - alpha->i * x[i__2].i, z__1.i = - alpha->r * x[i__2].i + alpha->i * x[i__2].r; - temp1.r = z__1.r, temp1.i = z__1.i; - temp2.r = 0., temp2.i = 0.; - i__2 = jy; - i__3 = jy; - i__4 = j + j * a_dim1; - d__1 = a[i__4].r; - z__2.r = d__1 * temp1.r, z__2.i = d__1 * temp1.i; - z__1.r = y[i__3].r + z__2.r, z__1.i = y[i__3].i + z__2.i; - y[i__2].r = z__1.r, y[i__2].i = z__1.i; - ix = jx; - iy = jy; - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - ix += *incx; - iy += *incy; - i__3 = iy; - i__4 = iy; - i__5 = i__ + j * a_dim1; - z__2.r = temp1.r * a[i__5].r - temp1.i * a[i__5].i, - z__2.i = temp1.r * a[i__5].i + temp1.i * a[i__5] - .r; - z__1.r = y[i__4].r + z__2.r, z__1.i = y[i__4].i + z__2.i; - y[i__3].r = z__1.r, y[i__3].i = z__1.i; - d_cnjg(&z__3, &a[i__ + j * a_dim1]); - i__3 = ix; - z__2.r = z__3.r * x[i__3].r - z__3.i * x[i__3].i, z__2.i = - z__3.r * x[i__3].i + z__3.i * x[i__3].r; - z__1.r = temp2.r + z__2.r, z__1.i = temp2.i + z__2.i; - temp2.r = z__1.r, temp2.i = z__1.i; -/* L110: */ - } - i__2 = jy; - i__3 = jy; - z__2.r = alpha->r * temp2.r - alpha->i * temp2.i, z__2.i = - alpha->r * temp2.i + alpha->i * temp2.r; - z__1.r = y[i__3].r + z__2.r, z__1.i = y[i__3].i + z__2.i; - y[i__2].r = z__1.r, y[i__2].i = z__1.i; - jx += *incx; - jy += *incy; -/* L120: */ - } - } - } - - return 0; - -/* End of ZHEMV . 
*/ - -} /* zhemv_ */ - -/* Subroutine */ int zher2_(char *uplo, integer *n, doublecomplex *alpha, - doublecomplex *x, integer *incx, doublecomplex *y, integer *incy, - doublecomplex *a, integer *lda) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4, i__5, i__6; - doublereal d__1; - doublecomplex z__1, z__2, z__3, z__4; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, j, ix, iy, jx, jy, kx, ky, info; - static doublecomplex temp1, temp2; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - Purpose - ======= - - ZHER2 performs the hermitian rank 2 operation - - A := alpha*x*conjg( y' ) + conjg( alpha )*y*conjg( x' ) + A, - - where alpha is a scalar, x and y are n element vectors and A is an n - by n hermitian matrix. - - Parameters - ========== - - UPLO - CHARACTER*1. - On entry, UPLO specifies whether the upper or lower - triangular part of the array A is to be referenced as - follows: - - UPLO = 'U' or 'u' Only the upper triangular part of A - is to be referenced. - - UPLO = 'L' or 'l' Only the lower triangular part of A - is to be referenced. - - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the order of the matrix A. - N must be at least zero. - Unchanged on exit. - - ALPHA - COMPLEX*16 . - On entry, ALPHA specifies the scalar alpha. - Unchanged on exit. - - X - COMPLEX*16 array of dimension at least - ( 1 + ( n - 1 )*abs( INCX ) ). - Before entry, the incremented array X must contain the n - element vector x. - Unchanged on exit. - - INCX - INTEGER. - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. - Unchanged on exit. - - Y - COMPLEX*16 array of dimension at least - ( 1 + ( n - 1 )*abs( INCY ) ). - Before entry, the incremented array Y must contain the n - element vector y. - Unchanged on exit. - - INCY - INTEGER. 
- On entry, INCY specifies the increment for the elements of - Y. INCY must not be zero. - Unchanged on exit. - - A - COMPLEX*16 array of DIMENSION ( LDA, n ). - Before entry with UPLO = 'U' or 'u', the leading n by n - upper triangular part of the array A must contain the upper - triangular part of the hermitian matrix and the strictly - lower triangular part of A is not referenced. On exit, the - upper triangular part of the array A is overwritten by the - upper triangular part of the updated matrix. - Before entry with UPLO = 'L' or 'l', the leading n by n - lower triangular part of the array A must contain the lower - triangular part of the hermitian matrix and the strictly - upper triangular part of A is not referenced. On exit, the - lower triangular part of the array A is overwritten by the - lower triangular part of the updated matrix. - Note that the imaginary parts of the diagonal elements need - not be set, they are assumed to be zero, and on exit they - are set to zero. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. LDA must be at least - max( 1, n ). - Unchanged on exit. - - - Level 2 Blas routine. - - -- Written on 22-October-1986. - Jack Dongarra, Argonne National Lab. - Jeremy Du Croz, Nag Central Office. - Sven Hammarling, Nag Central Office. - Richard Hanson, Sandia National Labs. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --x; - --y; - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - - /* Function Body */ - info = 0; - if ((! lsame_(uplo, "U") && ! lsame_(uplo, "L"))) { - info = 1; - } else if (*n < 0) { - info = 2; - } else if (*incx == 0) { - info = 5; - } else if (*incy == 0) { - info = 7; - } else if (*lda < max(1,*n)) { - info = 9; - } - if (info != 0) { - xerbla_("ZHER2 ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0 || (alpha->r == 0. 
&& alpha->i == 0.)) { - return 0; - } - -/* - Set up the start points in X and Y if the increments are not both - unity. -*/ - - if (*incx != 1 || *incy != 1) { - if (*incx > 0) { - kx = 1; - } else { - kx = 1 - (*n - 1) * *incx; - } - if (*incy > 0) { - ky = 1; - } else { - ky = 1 - (*n - 1) * *incy; - } - jx = kx; - jy = ky; - } - -/* - Start the operations. In this version the elements of A are - accessed sequentially with one pass through the triangular part - of A. -*/ - - if (lsame_(uplo, "U")) { - -/* Form A when A is stored in the upper triangle. */ - - if ((*incx == 1 && *incy == 1)) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - i__3 = j; - if (x[i__2].r != 0. || x[i__2].i != 0. || (y[i__3].r != 0. || - y[i__3].i != 0.)) { - d_cnjg(&z__2, &y[j]); - z__1.r = alpha->r * z__2.r - alpha->i * z__2.i, z__1.i = - alpha->r * z__2.i + alpha->i * z__2.r; - temp1.r = z__1.r, temp1.i = z__1.i; - i__2 = j; - z__2.r = alpha->r * x[i__2].r - alpha->i * x[i__2].i, - z__2.i = alpha->r * x[i__2].i + alpha->i * x[i__2] - .r; - d_cnjg(&z__1, &z__2); - temp2.r = z__1.r, temp2.i = z__1.i; - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__ + j * a_dim1; - i__5 = i__; - z__3.r = x[i__5].r * temp1.r - x[i__5].i * temp1.i, - z__3.i = x[i__5].r * temp1.i + x[i__5].i * - temp1.r; - z__2.r = a[i__4].r + z__3.r, z__2.i = a[i__4].i + - z__3.i; - i__6 = i__; - z__4.r = y[i__6].r * temp2.r - y[i__6].i * temp2.i, - z__4.i = y[i__6].r * temp2.i + y[i__6].i * - temp2.r; - z__1.r = z__2.r + z__4.r, z__1.i = z__2.i + z__4.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L10: */ - } - i__2 = j + j * a_dim1; - i__3 = j + j * a_dim1; - i__4 = j; - z__2.r = x[i__4].r * temp1.r - x[i__4].i * temp1.i, - z__2.i = x[i__4].r * temp1.i + x[i__4].i * - temp1.r; - i__5 = j; - z__3.r = y[i__5].r * temp2.r - y[i__5].i * temp2.i, - z__3.i = y[i__5].r * temp2.i + y[i__5].i * - temp2.r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - d__1 = 
a[i__3].r + z__1.r; - a[i__2].r = d__1, a[i__2].i = 0.; - } else { - i__2 = j + j * a_dim1; - i__3 = j + j * a_dim1; - d__1 = a[i__3].r; - a[i__2].r = d__1, a[i__2].i = 0.; - } -/* L20: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = jx; - i__3 = jy; - if (x[i__2].r != 0. || x[i__2].i != 0. || (y[i__3].r != 0. || - y[i__3].i != 0.)) { - d_cnjg(&z__2, &y[jy]); - z__1.r = alpha->r * z__2.r - alpha->i * z__2.i, z__1.i = - alpha->r * z__2.i + alpha->i * z__2.r; - temp1.r = z__1.r, temp1.i = z__1.i; - i__2 = jx; - z__2.r = alpha->r * x[i__2].r - alpha->i * x[i__2].i, - z__2.i = alpha->r * x[i__2].i + alpha->i * x[i__2] - .r; - d_cnjg(&z__1, &z__2); - temp2.r = z__1.r, temp2.i = z__1.i; - ix = kx; - iy = ky; - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__ + j * a_dim1; - i__5 = ix; - z__3.r = x[i__5].r * temp1.r - x[i__5].i * temp1.i, - z__3.i = x[i__5].r * temp1.i + x[i__5].i * - temp1.r; - z__2.r = a[i__4].r + z__3.r, z__2.i = a[i__4].i + - z__3.i; - i__6 = iy; - z__4.r = y[i__6].r * temp2.r - y[i__6].i * temp2.i, - z__4.i = y[i__6].r * temp2.i + y[i__6].i * - temp2.r; - z__1.r = z__2.r + z__4.r, z__1.i = z__2.i + z__4.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; - ix += *incx; - iy += *incy; -/* L30: */ - } - i__2 = j + j * a_dim1; - i__3 = j + j * a_dim1; - i__4 = jx; - z__2.r = x[i__4].r * temp1.r - x[i__4].i * temp1.i, - z__2.i = x[i__4].r * temp1.i + x[i__4].i * - temp1.r; - i__5 = jy; - z__3.r = y[i__5].r * temp2.r - y[i__5].i * temp2.i, - z__3.i = y[i__5].r * temp2.i + y[i__5].i * - temp2.r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - d__1 = a[i__3].r + z__1.r; - a[i__2].r = d__1, a[i__2].i = 0.; - } else { - i__2 = j + j * a_dim1; - i__3 = j + j * a_dim1; - d__1 = a[i__3].r; - a[i__2].r = d__1, a[i__2].i = 0.; - } - jx += *incx; - jy += *incy; -/* L40: */ - } - } - } else { - -/* Form A when A is stored in the lower triangle. 
*/ - - if ((*incx == 1 && *incy == 1)) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - i__3 = j; - if (x[i__2].r != 0. || x[i__2].i != 0. || (y[i__3].r != 0. || - y[i__3].i != 0.)) { - d_cnjg(&z__2, &y[j]); - z__1.r = alpha->r * z__2.r - alpha->i * z__2.i, z__1.i = - alpha->r * z__2.i + alpha->i * z__2.r; - temp1.r = z__1.r, temp1.i = z__1.i; - i__2 = j; - z__2.r = alpha->r * x[i__2].r - alpha->i * x[i__2].i, - z__2.i = alpha->r * x[i__2].i + alpha->i * x[i__2] - .r; - d_cnjg(&z__1, &z__2); - temp2.r = z__1.r, temp2.i = z__1.i; - i__2 = j + j * a_dim1; - i__3 = j + j * a_dim1; - i__4 = j; - z__2.r = x[i__4].r * temp1.r - x[i__4].i * temp1.i, - z__2.i = x[i__4].r * temp1.i + x[i__4].i * - temp1.r; - i__5 = j; - z__3.r = y[i__5].r * temp2.r - y[i__5].i * temp2.i, - z__3.i = y[i__5].r * temp2.i + y[i__5].i * - temp2.r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - d__1 = a[i__3].r + z__1.r; - a[i__2].r = d__1, a[i__2].i = 0.; - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__ + j * a_dim1; - i__5 = i__; - z__3.r = x[i__5].r * temp1.r - x[i__5].i * temp1.i, - z__3.i = x[i__5].r * temp1.i + x[i__5].i * - temp1.r; - z__2.r = a[i__4].r + z__3.r, z__2.i = a[i__4].i + - z__3.i; - i__6 = i__; - z__4.r = y[i__6].r * temp2.r - y[i__6].i * temp2.i, - z__4.i = y[i__6].r * temp2.i + y[i__6].i * - temp2.r; - z__1.r = z__2.r + z__4.r, z__1.i = z__2.i + z__4.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L50: */ - } - } else { - i__2 = j + j * a_dim1; - i__3 = j + j * a_dim1; - d__1 = a[i__3].r; - a[i__2].r = d__1, a[i__2].i = 0.; - } -/* L60: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = jx; - i__3 = jy; - if (x[i__2].r != 0. || x[i__2].i != 0. || (y[i__3].r != 0. 
|| - y[i__3].i != 0.)) { - d_cnjg(&z__2, &y[jy]); - z__1.r = alpha->r * z__2.r - alpha->i * z__2.i, z__1.i = - alpha->r * z__2.i + alpha->i * z__2.r; - temp1.r = z__1.r, temp1.i = z__1.i; - i__2 = jx; - z__2.r = alpha->r * x[i__2].r - alpha->i * x[i__2].i, - z__2.i = alpha->r * x[i__2].i + alpha->i * x[i__2] - .r; - d_cnjg(&z__1, &z__2); - temp2.r = z__1.r, temp2.i = z__1.i; - i__2 = j + j * a_dim1; - i__3 = j + j * a_dim1; - i__4 = jx; - z__2.r = x[i__4].r * temp1.r - x[i__4].i * temp1.i, - z__2.i = x[i__4].r * temp1.i + x[i__4].i * - temp1.r; - i__5 = jy; - z__3.r = y[i__5].r * temp2.r - y[i__5].i * temp2.i, - z__3.i = y[i__5].r * temp2.i + y[i__5].i * - temp2.r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - d__1 = a[i__3].r + z__1.r; - a[i__2].r = d__1, a[i__2].i = 0.; - ix = jx; - iy = jy; - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - ix += *incx; - iy += *incy; - i__3 = i__ + j * a_dim1; - i__4 = i__ + j * a_dim1; - i__5 = ix; - z__3.r = x[i__5].r * temp1.r - x[i__5].i * temp1.i, - z__3.i = x[i__5].r * temp1.i + x[i__5].i * - temp1.r; - z__2.r = a[i__4].r + z__3.r, z__2.i = a[i__4].i + - z__3.i; - i__6 = iy; - z__4.r = y[i__6].r * temp2.r - y[i__6].i * temp2.i, - z__4.i = y[i__6].r * temp2.i + y[i__6].i * - temp2.r; - z__1.r = z__2.r + z__4.r, z__1.i = z__2.i + z__4.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L70: */ - } - } else { - i__2 = j + j * a_dim1; - i__3 = j + j * a_dim1; - d__1 = a[i__3].r; - a[i__2].r = d__1, a[i__2].i = 0.; - } - jx += *incx; - jy += *incy; -/* L80: */ - } - } - } - - return 0; - -/* End of ZHER2 . 
*/ - -} /* zher2_ */ - -/* Subroutine */ int zher2k_(char *uplo, char *trans, integer *n, integer *k, - doublecomplex *alpha, doublecomplex *a, integer *lda, doublecomplex * - b, integer *ldb, doublereal *beta, doublecomplex *c__, integer *ldc) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, c_dim1, c_offset, i__1, i__2, - i__3, i__4, i__5, i__6, i__7; - doublereal d__1; - doublecomplex z__1, z__2, z__3, z__4, z__5, z__6; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, j, l, info; - static doublecomplex temp1, temp2; - extern logical lsame_(char *, char *); - static integer nrowa; - static logical upper; - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - Purpose - ======= - - ZHER2K performs one of the hermitian rank 2k operations - - C := alpha*A*conjg( B' ) + conjg( alpha )*B*conjg( A' ) + beta*C, - - or - - C := alpha*conjg( A' )*B + conjg( alpha )*conjg( B' )*A + beta*C, - - where alpha and beta are scalars with beta real, C is an n by n - hermitian matrix and A and B are n by k matrices in the first case - and k by n matrices in the second case. - - Parameters - ========== - - UPLO - CHARACTER*1. - On entry, UPLO specifies whether the upper or lower - triangular part of the array C is to be referenced as - follows: - - UPLO = 'U' or 'u' Only the upper triangular part of C - is to be referenced. - - UPLO = 'L' or 'l' Only the lower triangular part of C - is to be referenced. - - Unchanged on exit. - - TRANS - CHARACTER*1. - On entry, TRANS specifies the operation to be performed as - follows: - - TRANS = 'N' or 'n' C := alpha*A*conjg( B' ) + - conjg( alpha )*B*conjg( A' ) + - beta*C. - - TRANS = 'C' or 'c' C := alpha*conjg( A' )*B + - conjg( alpha )*conjg( B' )*A + - beta*C. - - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the order of the matrix C. N must be - at least zero. - Unchanged on exit. - - K - INTEGER. 
- On entry with TRANS = 'N' or 'n', K specifies the number - of columns of the matrices A and B, and on entry with - TRANS = 'C' or 'c', K specifies the number of rows of the - matrices A and B. K must be at least zero. - Unchanged on exit. - - ALPHA - COMPLEX*16 . - On entry, ALPHA specifies the scalar alpha. - Unchanged on exit. - - A - COMPLEX*16 array of DIMENSION ( LDA, ka ), where ka is - k when TRANS = 'N' or 'n', and is n otherwise. - Before entry with TRANS = 'N' or 'n', the leading n by k - part of the array A must contain the matrix A, otherwise - the leading k by n part of the array A must contain the - matrix A. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. When TRANS = 'N' or 'n' - then LDA must be at least max( 1, n ), otherwise LDA must - be at least max( 1, k ). - Unchanged on exit. - - B - COMPLEX*16 array of DIMENSION ( LDB, kb ), where kb is - k when TRANS = 'N' or 'n', and is n otherwise. - Before entry with TRANS = 'N' or 'n', the leading n by k - part of the array B must contain the matrix B, otherwise - the leading k by n part of the array B must contain the - matrix B. - Unchanged on exit. - - LDB - INTEGER. - On entry, LDB specifies the first dimension of B as declared - in the calling (sub) program. When TRANS = 'N' or 'n' - then LDB must be at least max( 1, n ), otherwise LDB must - be at least max( 1, k ). - Unchanged on exit. - - BETA - DOUBLE PRECISION . - On entry, BETA specifies the scalar beta. - Unchanged on exit. - - C - COMPLEX*16 array of DIMENSION ( LDC, n ). - Before entry with UPLO = 'U' or 'u', the leading n by n - upper triangular part of the array C must contain the upper - triangular part of the hermitian matrix and the strictly - lower triangular part of C is not referenced. On exit, the - upper triangular part of the array C is overwritten by the - upper triangular part of the updated matrix. 
- Before entry with UPLO = 'L' or 'l', the leading n by n - lower triangular part of the array C must contain the lower - triangular part of the hermitian matrix and the strictly - upper triangular part of C is not referenced. On exit, the - lower triangular part of the array C is overwritten by the - lower triangular part of the updated matrix. - Note that the imaginary parts of the diagonal elements need - not be set, they are assumed to be zero, and on exit they - are set to zero. - - LDC - INTEGER. - On entry, LDC specifies the first dimension of C as declared - in the calling (sub) program. LDC must be at least - max( 1, n ). - Unchanged on exit. - - - Level 3 Blas routine. - - -- Written on 8-February-1989. - Jack Dongarra, Argonne National Laboratory. - Iain Duff, AERE Harwell. - Jeremy Du Croz, Numerical Algorithms Group Ltd. - Sven Hammarling, Numerical Algorithms Group Ltd. - - -- Modified 8-Nov-93 to set C(J,J) to DBLE( C(J,J) ) when BETA = 1. - Ed Anderson, Cray Research Inc. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - - /* Function Body */ - if (lsame_(trans, "N")) { - nrowa = *n; - } else { - nrowa = *k; - } - upper = lsame_(uplo, "U"); - - info = 0; - if ((! upper && ! lsame_(uplo, "L"))) { - info = 1; - } else if ((! lsame_(trans, "N") && ! lsame_(trans, - "C"))) { - info = 2; - } else if (*n < 0) { - info = 3; - } else if (*k < 0) { - info = 4; - } else if (*lda < max(1,nrowa)) { - info = 7; - } else if (*ldb < max(1,nrowa)) { - info = 9; - } else if (*ldc < max(1,*n)) { - info = 12; - } - if (info != 0) { - xerbla_("ZHER2K", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0 || (((alpha->r == 0. && alpha->i == 0.) || *k == 0) && *beta - == 1.)) { - return 0; - } - -/* And when alpha.eq.zero. 
*/ - - if ((alpha->r == 0. && alpha->i == 0.)) { - if (upper) { - if (*beta == 0.) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - c__[i__3].r = 0., c__[i__3].i = 0.; -/* L10: */ - } -/* L20: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - z__1.r = *beta * c__[i__4].r, z__1.i = *beta * c__[ - i__4].i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L30: */ - } - i__2 = j + j * c_dim1; - i__3 = j + j * c_dim1; - d__1 = *beta * c__[i__3].r; - c__[i__2].r = d__1, c__[i__2].i = 0.; -/* L40: */ - } - } - } else { - if (*beta == 0.) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - c__[i__3].r = 0., c__[i__3].i = 0.; -/* L50: */ - } -/* L60: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j + j * c_dim1; - i__3 = j + j * c_dim1; - d__1 = *beta * c__[i__3].r; - c__[i__2].r = d__1, c__[i__2].i = 0.; - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - z__1.r = *beta * c__[i__4].r, z__1.i = *beta * c__[ - i__4].i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L70: */ - } -/* L80: */ - } - } - } - return 0; - } - -/* Start the operations. */ - - if (lsame_(trans, "N")) { - -/* - Form C := alpha*A*conjg( B' ) + conjg( alpha )*B*conjg( A' ) + - C. -*/ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (*beta == 0.) { - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - c__[i__3].r = 0., c__[i__3].i = 0.; -/* L90: */ - } - } else if (*beta != 1.) 
{ - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - z__1.r = *beta * c__[i__4].r, z__1.i = *beta * c__[ - i__4].i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L100: */ - } - i__2 = j + j * c_dim1; - i__3 = j + j * c_dim1; - d__1 = *beta * c__[i__3].r; - c__[i__2].r = d__1, c__[i__2].i = 0.; - } else { - i__2 = j + j * c_dim1; - i__3 = j + j * c_dim1; - d__1 = c__[i__3].r; - c__[i__2].r = d__1, c__[i__2].i = 0.; - } - i__2 = *k; - for (l = 1; l <= i__2; ++l) { - i__3 = j + l * a_dim1; - i__4 = j + l * b_dim1; - if (a[i__3].r != 0. || a[i__3].i != 0. || (b[i__4].r != - 0. || b[i__4].i != 0.)) { - d_cnjg(&z__2, &b[j + l * b_dim1]); - z__1.r = alpha->r * z__2.r - alpha->i * z__2.i, - z__1.i = alpha->r * z__2.i + alpha->i * - z__2.r; - temp1.r = z__1.r, temp1.i = z__1.i; - i__3 = j + l * a_dim1; - z__2.r = alpha->r * a[i__3].r - alpha->i * a[i__3].i, - z__2.i = alpha->r * a[i__3].i + alpha->i * a[ - i__3].r; - d_cnjg(&z__1, &z__2); - temp2.r = z__1.r, temp2.i = z__1.i; - i__3 = j - 1; - for (i__ = 1; i__ <= i__3; ++i__) { - i__4 = i__ + j * c_dim1; - i__5 = i__ + j * c_dim1; - i__6 = i__ + l * a_dim1; - z__3.r = a[i__6].r * temp1.r - a[i__6].i * - temp1.i, z__3.i = a[i__6].r * temp1.i + a[ - i__6].i * temp1.r; - z__2.r = c__[i__5].r + z__3.r, z__2.i = c__[i__5] - .i + z__3.i; - i__7 = i__ + l * b_dim1; - z__4.r = b[i__7].r * temp2.r - b[i__7].i * - temp2.i, z__4.i = b[i__7].r * temp2.i + b[ - i__7].i * temp2.r; - z__1.r = z__2.r + z__4.r, z__1.i = z__2.i + - z__4.i; - c__[i__4].r = z__1.r, c__[i__4].i = z__1.i; -/* L110: */ - } - i__3 = j + j * c_dim1; - i__4 = j + j * c_dim1; - i__5 = j + l * a_dim1; - z__2.r = a[i__5].r * temp1.r - a[i__5].i * temp1.i, - z__2.i = a[i__5].r * temp1.i + a[i__5].i * - temp1.r; - i__6 = j + l * b_dim1; - z__3.r = b[i__6].r * temp2.r - b[i__6].i * temp2.i, - z__3.i = b[i__6].r * temp2.i + b[i__6].i * - temp2.r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - d__1 = 
c__[i__4].r + z__1.r; - c__[i__3].r = d__1, c__[i__3].i = 0.; - } -/* L120: */ - } -/* L130: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (*beta == 0.) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - c__[i__3].r = 0., c__[i__3].i = 0.; -/* L140: */ - } - } else if (*beta != 1.) { - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - z__1.r = *beta * c__[i__4].r, z__1.i = *beta * c__[ - i__4].i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L150: */ - } - i__2 = j + j * c_dim1; - i__3 = j + j * c_dim1; - d__1 = *beta * c__[i__3].r; - c__[i__2].r = d__1, c__[i__2].i = 0.; - } else { - i__2 = j + j * c_dim1; - i__3 = j + j * c_dim1; - d__1 = c__[i__3].r; - c__[i__2].r = d__1, c__[i__2].i = 0.; - } - i__2 = *k; - for (l = 1; l <= i__2; ++l) { - i__3 = j + l * a_dim1; - i__4 = j + l * b_dim1; - if (a[i__3].r != 0. || a[i__3].i != 0. || (b[i__4].r != - 0. || b[i__4].i != 0.)) { - d_cnjg(&z__2, &b[j + l * b_dim1]); - z__1.r = alpha->r * z__2.r - alpha->i * z__2.i, - z__1.i = alpha->r * z__2.i + alpha->i * - z__2.r; - temp1.r = z__1.r, temp1.i = z__1.i; - i__3 = j + l * a_dim1; - z__2.r = alpha->r * a[i__3].r - alpha->i * a[i__3].i, - z__2.i = alpha->r * a[i__3].i + alpha->i * a[ - i__3].r; - d_cnjg(&z__1, &z__2); - temp2.r = z__1.r, temp2.i = z__1.i; - i__3 = *n; - for (i__ = j + 1; i__ <= i__3; ++i__) { - i__4 = i__ + j * c_dim1; - i__5 = i__ + j * c_dim1; - i__6 = i__ + l * a_dim1; - z__3.r = a[i__6].r * temp1.r - a[i__6].i * - temp1.i, z__3.i = a[i__6].r * temp1.i + a[ - i__6].i * temp1.r; - z__2.r = c__[i__5].r + z__3.r, z__2.i = c__[i__5] - .i + z__3.i; - i__7 = i__ + l * b_dim1; - z__4.r = b[i__7].r * temp2.r - b[i__7].i * - temp2.i, z__4.i = b[i__7].r * temp2.i + b[ - i__7].i * temp2.r; - z__1.r = z__2.r + z__4.r, z__1.i = z__2.i + - z__4.i; - c__[i__4].r = z__1.r, c__[i__4].i = z__1.i; -/* L160: */ - } - i__3 = j + j * c_dim1; - i__4 = j + j * 
c_dim1; - i__5 = j + l * a_dim1; - z__2.r = a[i__5].r * temp1.r - a[i__5].i * temp1.i, - z__2.i = a[i__5].r * temp1.i + a[i__5].i * - temp1.r; - i__6 = j + l * b_dim1; - z__3.r = b[i__6].r * temp2.r - b[i__6].i * temp2.i, - z__3.i = b[i__6].r * temp2.i + b[i__6].i * - temp2.r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - d__1 = c__[i__4].r + z__1.r; - c__[i__3].r = d__1, c__[i__3].i = 0.; - } -/* L170: */ - } -/* L180: */ - } - } - } else { - -/* - Form C := alpha*conjg( A' )*B + conjg( alpha )*conjg( B' )*A + - C. -*/ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - temp1.r = 0., temp1.i = 0.; - temp2.r = 0., temp2.i = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - d_cnjg(&z__3, &a[l + i__ * a_dim1]); - i__4 = l + j * b_dim1; - z__2.r = z__3.r * b[i__4].r - z__3.i * b[i__4].i, - z__2.i = z__3.r * b[i__4].i + z__3.i * b[i__4] - .r; - z__1.r = temp1.r + z__2.r, z__1.i = temp1.i + z__2.i; - temp1.r = z__1.r, temp1.i = z__1.i; - d_cnjg(&z__3, &b[l + i__ * b_dim1]); - i__4 = l + j * a_dim1; - z__2.r = z__3.r * a[i__4].r - z__3.i * a[i__4].i, - z__2.i = z__3.r * a[i__4].i + z__3.i * a[i__4] - .r; - z__1.r = temp2.r + z__2.r, z__1.i = temp2.i + z__2.i; - temp2.r = z__1.r, temp2.i = z__1.i; -/* L190: */ - } - if (i__ == j) { - if (*beta == 0.) 
{ - i__3 = j + j * c_dim1; - z__2.r = alpha->r * temp1.r - alpha->i * temp1.i, - z__2.i = alpha->r * temp1.i + alpha->i * - temp1.r; - d_cnjg(&z__4, alpha); - z__3.r = z__4.r * temp2.r - z__4.i * temp2.i, - z__3.i = z__4.r * temp2.i + z__4.i * - temp2.r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - d__1 = z__1.r; - c__[i__3].r = d__1, c__[i__3].i = 0.; - } else { - i__3 = j + j * c_dim1; - i__4 = j + j * c_dim1; - z__2.r = alpha->r * temp1.r - alpha->i * temp1.i, - z__2.i = alpha->r * temp1.i + alpha->i * - temp1.r; - d_cnjg(&z__4, alpha); - z__3.r = z__4.r * temp2.r - z__4.i * temp2.i, - z__3.i = z__4.r * temp2.i + z__4.i * - temp2.r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - d__1 = *beta * c__[i__4].r + z__1.r; - c__[i__3].r = d__1, c__[i__3].i = 0.; - } - } else { - if (*beta == 0.) { - i__3 = i__ + j * c_dim1; - z__2.r = alpha->r * temp1.r - alpha->i * temp1.i, - z__2.i = alpha->r * temp1.i + alpha->i * - temp1.r; - d_cnjg(&z__4, alpha); - z__3.r = z__4.r * temp2.r - z__4.i * temp2.i, - z__3.i = z__4.r * temp2.i + z__4.i * - temp2.r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } else { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - z__3.r = *beta * c__[i__4].r, z__3.i = *beta * - c__[i__4].i; - z__4.r = alpha->r * temp1.r - alpha->i * temp1.i, - z__4.i = alpha->r * temp1.i + alpha->i * - temp1.r; - z__2.r = z__3.r + z__4.r, z__2.i = z__3.i + - z__4.i; - d_cnjg(&z__6, alpha); - z__5.r = z__6.r * temp2.r - z__6.i * temp2.i, - z__5.i = z__6.r * temp2.i + z__6.i * - temp2.r; - z__1.r = z__2.r + z__5.r, z__1.i = z__2.i + - z__5.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } - } -/* L200: */ - } -/* L210: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - temp1.r = 0., temp1.i = 0.; - temp2.r = 0., temp2.i = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - d_cnjg(&z__3, &a[l + i__ * a_dim1]); - i__4 
= l + j * b_dim1; - z__2.r = z__3.r * b[i__4].r - z__3.i * b[i__4].i, - z__2.i = z__3.r * b[i__4].i + z__3.i * b[i__4] - .r; - z__1.r = temp1.r + z__2.r, z__1.i = temp1.i + z__2.i; - temp1.r = z__1.r, temp1.i = z__1.i; - d_cnjg(&z__3, &b[l + i__ * b_dim1]); - i__4 = l + j * a_dim1; - z__2.r = z__3.r * a[i__4].r - z__3.i * a[i__4].i, - z__2.i = z__3.r * a[i__4].i + z__3.i * a[i__4] - .r; - z__1.r = temp2.r + z__2.r, z__1.i = temp2.i + z__2.i; - temp2.r = z__1.r, temp2.i = z__1.i; -/* L220: */ - } - if (i__ == j) { - if (*beta == 0.) { - i__3 = j + j * c_dim1; - z__2.r = alpha->r * temp1.r - alpha->i * temp1.i, - z__2.i = alpha->r * temp1.i + alpha->i * - temp1.r; - d_cnjg(&z__4, alpha); - z__3.r = z__4.r * temp2.r - z__4.i * temp2.i, - z__3.i = z__4.r * temp2.i + z__4.i * - temp2.r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - d__1 = z__1.r; - c__[i__3].r = d__1, c__[i__3].i = 0.; - } else { - i__3 = j + j * c_dim1; - i__4 = j + j * c_dim1; - z__2.r = alpha->r * temp1.r - alpha->i * temp1.i, - z__2.i = alpha->r * temp1.i + alpha->i * - temp1.r; - d_cnjg(&z__4, alpha); - z__3.r = z__4.r * temp2.r - z__4.i * temp2.i, - z__3.i = z__4.r * temp2.i + z__4.i * - temp2.r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - d__1 = *beta * c__[i__4].r + z__1.r; - c__[i__3].r = d__1, c__[i__3].i = 0.; - } - } else { - if (*beta == 0.) 
{ - i__3 = i__ + j * c_dim1; - z__2.r = alpha->r * temp1.r - alpha->i * temp1.i, - z__2.i = alpha->r * temp1.i + alpha->i * - temp1.r; - d_cnjg(&z__4, alpha); - z__3.r = z__4.r * temp2.r - z__4.i * temp2.i, - z__3.i = z__4.r * temp2.i + z__4.i * - temp2.r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } else { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - z__3.r = *beta * c__[i__4].r, z__3.i = *beta * - c__[i__4].i; - z__4.r = alpha->r * temp1.r - alpha->i * temp1.i, - z__4.i = alpha->r * temp1.i + alpha->i * - temp1.r; - z__2.r = z__3.r + z__4.r, z__2.i = z__3.i + - z__4.i; - d_cnjg(&z__6, alpha); - z__5.r = z__6.r * temp2.r - z__6.i * temp2.i, - z__5.i = z__6.r * temp2.i + z__6.i * - temp2.r; - z__1.r = z__2.r + z__5.r, z__1.i = z__2.i + - z__5.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } - } -/* L230: */ - } -/* L240: */ - } - } - } - - return 0; - -/* End of ZHER2K. */ - -} /* zher2k_ */ - -/* Subroutine */ int zherk_(char *uplo, char *trans, integer *n, integer *k, - doublereal *alpha, doublecomplex *a, integer *lda, doublereal *beta, - doublecomplex *c__, integer *ldc) -{ - /* System generated locals */ - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2, i__3, i__4, i__5, - i__6; - doublereal d__1; - doublecomplex z__1, z__2, z__3; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, j, l, info; - static doublecomplex temp; - extern logical lsame_(char *, char *); - static integer nrowa; - static doublereal rtemp; - static logical upper; - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - Purpose - ======= - - ZHERK performs one of the hermitian rank k operations - - C := alpha*A*conjg( A' ) + beta*C, - - or - - C := alpha*conjg( A' )*A + beta*C, - - where alpha and beta are real scalars, C is an n by n hermitian - matrix and A is an n by k matrix in the first case and a k by n - matrix in 
the second case. - - Parameters - ========== - - UPLO - CHARACTER*1. - On entry, UPLO specifies whether the upper or lower - triangular part of the array C is to be referenced as - follows: - - UPLO = 'U' or 'u' Only the upper triangular part of C - is to be referenced. - - UPLO = 'L' or 'l' Only the lower triangular part of C - is to be referenced. - - Unchanged on exit. - - TRANS - CHARACTER*1. - On entry, TRANS specifies the operation to be performed as - follows: - - TRANS = 'N' or 'n' C := alpha*A*conjg( A' ) + beta*C. - - TRANS = 'C' or 'c' C := alpha*conjg( A' )*A + beta*C. - - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the order of the matrix C. N must be - at least zero. - Unchanged on exit. - - K - INTEGER. - On entry with TRANS = 'N' or 'n', K specifies the number - of columns of the matrix A, and on entry with - TRANS = 'C' or 'c', K specifies the number of rows of the - matrix A. K must be at least zero. - Unchanged on exit. - - ALPHA - DOUBLE PRECISION . - On entry, ALPHA specifies the scalar alpha. - Unchanged on exit. - - A - COMPLEX*16 array of DIMENSION ( LDA, ka ), where ka is - k when TRANS = 'N' or 'n', and is n otherwise. - Before entry with TRANS = 'N' or 'n', the leading n by k - part of the array A must contain the matrix A, otherwise - the leading k by n part of the array A must contain the - matrix A. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. When TRANS = 'N' or 'n' - then LDA must be at least max( 1, n ), otherwise LDA must - be at least max( 1, k ). - Unchanged on exit. - - BETA - DOUBLE PRECISION. - On entry, BETA specifies the scalar beta. - Unchanged on exit. - - C - COMPLEX*16 array of DIMENSION ( LDC, n ). 
- Before entry with UPLO = 'U' or 'u', the leading n by n - upper triangular part of the array C must contain the upper - triangular part of the hermitian matrix and the strictly - lower triangular part of C is not referenced. On exit, the - upper triangular part of the array C is overwritten by the - upper triangular part of the updated matrix. - Before entry with UPLO = 'L' or 'l', the leading n by n - lower triangular part of the array C must contain the lower - triangular part of the hermitian matrix and the strictly - upper triangular part of C is not referenced. On exit, the - lower triangular part of the array C is overwritten by the - lower triangular part of the updated matrix. - Note that the imaginary parts of the diagonal elements need - not be set, they are assumed to be zero, and on exit they - are set to zero. - - LDC - INTEGER. - On entry, LDC specifies the first dimension of C as declared - in the calling (sub) program. LDC must be at least - max( 1, n ). - Unchanged on exit. - - - Level 3 Blas routine. - - -- Written on 8-February-1989. - Jack Dongarra, Argonne National Laboratory. - Iain Duff, AERE Harwell. - Jeremy Du Croz, Numerical Algorithms Group Ltd. - Sven Hammarling, Numerical Algorithms Group Ltd. - - -- Modified 8-Nov-93 to set C(J,J) to DBLE( C(J,J) ) when BETA = 1. - Ed Anderson, Cray Research Inc. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - - /* Function Body */ - if (lsame_(trans, "N")) { - nrowa = *n; - } else { - nrowa = *k; - } - upper = lsame_(uplo, "U"); - - info = 0; - if ((! upper && ! lsame_(uplo, "L"))) { - info = 1; - } else if ((! lsame_(trans, "N") && ! 
lsame_(trans, - "C"))) { - info = 2; - } else if (*n < 0) { - info = 3; - } else if (*k < 0) { - info = 4; - } else if (*lda < max(1,nrowa)) { - info = 7; - } else if (*ldc < max(1,*n)) { - info = 10; - } - if (info != 0) { - xerbla_("ZHERK ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0 || ((*alpha == 0. || *k == 0) && *beta == 1.)) { - return 0; - } - -/* And when alpha.eq.zero. */ - - if (*alpha == 0.) { - if (upper) { - if (*beta == 0.) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - c__[i__3].r = 0., c__[i__3].i = 0.; -/* L10: */ - } -/* L20: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - z__1.r = *beta * c__[i__4].r, z__1.i = *beta * c__[ - i__4].i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L30: */ - } - i__2 = j + j * c_dim1; - i__3 = j + j * c_dim1; - d__1 = *beta * c__[i__3].r; - c__[i__2].r = d__1, c__[i__2].i = 0.; -/* L40: */ - } - } - } else { - if (*beta == 0.) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - c__[i__3].r = 0., c__[i__3].i = 0.; -/* L50: */ - } -/* L60: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j + j * c_dim1; - i__3 = j + j * c_dim1; - d__1 = *beta * c__[i__3].r; - c__[i__2].r = d__1, c__[i__2].i = 0.; - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - z__1.r = *beta * c__[i__4].r, z__1.i = *beta * c__[ - i__4].i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L70: */ - } -/* L80: */ - } - } - } - return 0; - } - -/* Start the operations. */ - - if (lsame_(trans, "N")) { - -/* Form C := alpha*A*conjg( A' ) + beta*C. */ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (*beta == 0.) 
{ - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - c__[i__3].r = 0., c__[i__3].i = 0.; -/* L90: */ - } - } else if (*beta != 1.) { - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - z__1.r = *beta * c__[i__4].r, z__1.i = *beta * c__[ - i__4].i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L100: */ - } - i__2 = j + j * c_dim1; - i__3 = j + j * c_dim1; - d__1 = *beta * c__[i__3].r; - c__[i__2].r = d__1, c__[i__2].i = 0.; - } else { - i__2 = j + j * c_dim1; - i__3 = j + j * c_dim1; - d__1 = c__[i__3].r; - c__[i__2].r = d__1, c__[i__2].i = 0.; - } - i__2 = *k; - for (l = 1; l <= i__2; ++l) { - i__3 = j + l * a_dim1; - if (a[i__3].r != 0. || a[i__3].i != 0.) { - d_cnjg(&z__2, &a[j + l * a_dim1]); - z__1.r = *alpha * z__2.r, z__1.i = *alpha * z__2.i; - temp.r = z__1.r, temp.i = z__1.i; - i__3 = j - 1; - for (i__ = 1; i__ <= i__3; ++i__) { - i__4 = i__ + j * c_dim1; - i__5 = i__ + j * c_dim1; - i__6 = i__ + l * a_dim1; - z__2.r = temp.r * a[i__6].r - temp.i * a[i__6].i, - z__2.i = temp.r * a[i__6].i + temp.i * a[ - i__6].r; - z__1.r = c__[i__5].r + z__2.r, z__1.i = c__[i__5] - .i + z__2.i; - c__[i__4].r = z__1.r, c__[i__4].i = z__1.i; -/* L110: */ - } - i__3 = j + j * c_dim1; - i__4 = j + j * c_dim1; - i__5 = i__ + l * a_dim1; - z__1.r = temp.r * a[i__5].r - temp.i * a[i__5].i, - z__1.i = temp.r * a[i__5].i + temp.i * a[i__5] - .r; - d__1 = c__[i__4].r + z__1.r; - c__[i__3].r = d__1, c__[i__3].i = 0.; - } -/* L120: */ - } -/* L130: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (*beta == 0.) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - c__[i__3].r = 0., c__[i__3].i = 0.; -/* L140: */ - } - } else if (*beta != 1.) 
{ - i__2 = j + j * c_dim1; - i__3 = j + j * c_dim1; - d__1 = *beta * c__[i__3].r; - c__[i__2].r = d__1, c__[i__2].i = 0.; - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - z__1.r = *beta * c__[i__4].r, z__1.i = *beta * c__[ - i__4].i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L150: */ - } - } else { - i__2 = j + j * c_dim1; - i__3 = j + j * c_dim1; - d__1 = c__[i__3].r; - c__[i__2].r = d__1, c__[i__2].i = 0.; - } - i__2 = *k; - for (l = 1; l <= i__2; ++l) { - i__3 = j + l * a_dim1; - if (a[i__3].r != 0. || a[i__3].i != 0.) { - d_cnjg(&z__2, &a[j + l * a_dim1]); - z__1.r = *alpha * z__2.r, z__1.i = *alpha * z__2.i; - temp.r = z__1.r, temp.i = z__1.i; - i__3 = j + j * c_dim1; - i__4 = j + j * c_dim1; - i__5 = j + l * a_dim1; - z__1.r = temp.r * a[i__5].r - temp.i * a[i__5].i, - z__1.i = temp.r * a[i__5].i + temp.i * a[i__5] - .r; - d__1 = c__[i__4].r + z__1.r; - c__[i__3].r = d__1, c__[i__3].i = 0.; - i__3 = *n; - for (i__ = j + 1; i__ <= i__3; ++i__) { - i__4 = i__ + j * c_dim1; - i__5 = i__ + j * c_dim1; - i__6 = i__ + l * a_dim1; - z__2.r = temp.r * a[i__6].r - temp.i * a[i__6].i, - z__2.i = temp.r * a[i__6].i + temp.i * a[ - i__6].r; - z__1.r = c__[i__5].r + z__2.r, z__1.i = c__[i__5] - .i + z__2.i; - c__[i__4].r = z__1.r, c__[i__4].i = z__1.i; -/* L160: */ - } - } -/* L170: */ - } -/* L180: */ - } - } - } else { - -/* Form C := alpha*conjg( A' )*A + beta*C. */ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - temp.r = 0., temp.i = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - d_cnjg(&z__3, &a[l + i__ * a_dim1]); - i__4 = l + j * a_dim1; - z__2.r = z__3.r * a[i__4].r - z__3.i * a[i__4].i, - z__2.i = z__3.r * a[i__4].i + z__3.i * a[i__4] - .r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L190: */ - } - if (*beta == 0.) 
{ - i__3 = i__ + j * c_dim1; - z__1.r = *alpha * temp.r, z__1.i = *alpha * temp.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } else { - i__3 = i__ + j * c_dim1; - z__2.r = *alpha * temp.r, z__2.i = *alpha * temp.i; - i__4 = i__ + j * c_dim1; - z__3.r = *beta * c__[i__4].r, z__3.i = *beta * c__[ - i__4].i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } -/* L200: */ - } - rtemp = 0.; - i__2 = *k; - for (l = 1; l <= i__2; ++l) { - d_cnjg(&z__3, &a[l + j * a_dim1]); - i__3 = l + j * a_dim1; - z__2.r = z__3.r * a[i__3].r - z__3.i * a[i__3].i, z__2.i = - z__3.r * a[i__3].i + z__3.i * a[i__3].r; - z__1.r = rtemp + z__2.r, z__1.i = z__2.i; - rtemp = z__1.r; -/* L210: */ - } - if (*beta == 0.) { - i__2 = j + j * c_dim1; - d__1 = *alpha * rtemp; - c__[i__2].r = d__1, c__[i__2].i = 0.; - } else { - i__2 = j + j * c_dim1; - i__3 = j + j * c_dim1; - d__1 = *alpha * rtemp + *beta * c__[i__3].r; - c__[i__2].r = d__1, c__[i__2].i = 0.; - } -/* L220: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - rtemp = 0.; - i__2 = *k; - for (l = 1; l <= i__2; ++l) { - d_cnjg(&z__3, &a[l + j * a_dim1]); - i__3 = l + j * a_dim1; - z__2.r = z__3.r * a[i__3].r - z__3.i * a[i__3].i, z__2.i = - z__3.r * a[i__3].i + z__3.i * a[i__3].r; - z__1.r = rtemp + z__2.r, z__1.i = z__2.i; - rtemp = z__1.r; -/* L230: */ - } - if (*beta == 0.) 
{ - i__2 = j + j * c_dim1; - d__1 = *alpha * rtemp; - c__[i__2].r = d__1, c__[i__2].i = 0.; - } else { - i__2 = j + j * c_dim1; - i__3 = j + j * c_dim1; - d__1 = *alpha * rtemp + *beta * c__[i__3].r; - c__[i__2].r = d__1, c__[i__2].i = 0.; - } - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - temp.r = 0., temp.i = 0.; - i__3 = *k; - for (l = 1; l <= i__3; ++l) { - d_cnjg(&z__3, &a[l + i__ * a_dim1]); - i__4 = l + j * a_dim1; - z__2.r = z__3.r * a[i__4].r - z__3.i * a[i__4].i, - z__2.i = z__3.r * a[i__4].i + z__3.i * a[i__4] - .r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L240: */ - } - if (*beta == 0.) { - i__3 = i__ + j * c_dim1; - z__1.r = *alpha * temp.r, z__1.i = *alpha * temp.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } else { - i__3 = i__ + j * c_dim1; - z__2.r = *alpha * temp.r, z__2.i = *alpha * temp.i; - i__4 = i__ + j * c_dim1; - z__3.r = *beta * c__[i__4].r, z__3.i = *beta * c__[ - i__4].i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; - } -/* L250: */ - } -/* L260: */ - } - } - } - - return 0; - -/* End of ZHERK . */ - -} /* zherk_ */ - -/* Subroutine */ int zscal_(integer *n, doublecomplex *za, doublecomplex *zx, - integer *incx) -{ - /* System generated locals */ - integer i__1, i__2, i__3; - doublecomplex z__1; - - /* Local variables */ - static integer i__, ix; - - -/* - scales a vector by a constant. - jack dongarra, 3/11/78. - modified 3/93 to return if incx .le. 0. 
- modified 12/3/93, array(1) declarations changed to array(*) -*/ - - - /* Parameter adjustments */ - --zx; - - /* Function Body */ - if (*n <= 0 || *incx <= 0) { - return 0; - } - if (*incx == 1) { - goto L20; - } - -/* code for increment not equal to 1 */ - - ix = 1; - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = ix; - i__3 = ix; - z__1.r = za->r * zx[i__3].r - za->i * zx[i__3].i, z__1.i = za->r * zx[ - i__3].i + za->i * zx[i__3].r; - zx[i__2].r = z__1.r, zx[i__2].i = z__1.i; - ix += *incx; -/* L10: */ - } - return 0; - -/* code for increment equal to 1 */ - -L20: - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__; - i__3 = i__; - z__1.r = za->r * zx[i__3].r - za->i * zx[i__3].i, z__1.i = za->r * zx[ - i__3].i + za->i * zx[i__3].r; - zx[i__2].r = z__1.r, zx[i__2].i = z__1.i; -/* L30: */ - } - return 0; -} /* zscal_ */ - -/* Subroutine */ int zswap_(integer *n, doublecomplex *zx, integer *incx, - doublecomplex *zy, integer *incy) -{ - /* System generated locals */ - integer i__1, i__2, i__3; - - /* Local variables */ - static integer i__, ix, iy; - static doublecomplex ztemp; - - -/* - interchanges two vectors. - jack dongarra, 3/11/78. 
- modified 12/3/93, array(1) declarations changed to array(*) -*/ - - - /* Parameter adjustments */ - --zy; - --zx; - - /* Function Body */ - if (*n <= 0) { - return 0; - } - if ((*incx == 1 && *incy == 1)) { - goto L20; - } - -/* - code for unequal increments or equal increments not equal - to 1 -*/ - - ix = 1; - iy = 1; - if (*incx < 0) { - ix = (-(*n) + 1) * *incx + 1; - } - if (*incy < 0) { - iy = (-(*n) + 1) * *incy + 1; - } - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = ix; - ztemp.r = zx[i__2].r, ztemp.i = zx[i__2].i; - i__2 = ix; - i__3 = iy; - zx[i__2].r = zy[i__3].r, zx[i__2].i = zy[i__3].i; - i__2 = iy; - zy[i__2].r = ztemp.r, zy[i__2].i = ztemp.i; - ix += *incx; - iy += *incy; -/* L10: */ - } - return 0; - -/* code for both increments equal to 1 */ -L20: - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__; - ztemp.r = zx[i__2].r, ztemp.i = zx[i__2].i; - i__2 = i__; - i__3 = i__; - zx[i__2].r = zy[i__3].r, zx[i__2].i = zy[i__3].i; - i__2 = i__; - zy[i__2].r = ztemp.r, zy[i__2].i = ztemp.i; -/* L30: */ - } - return 0; -} /* zswap_ */ - -/* Subroutine */ int ztrmm_(char *side, char *uplo, char *transa, char *diag, - integer *m, integer *n, doublecomplex *alpha, doublecomplex *a, - integer *lda, doublecomplex *b, integer *ldb) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, i__1, i__2, i__3, i__4, i__5, - i__6; - doublecomplex z__1, z__2, z__3; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, j, k, info; - static doublecomplex temp; - static logical lside; - extern logical lsame_(char *, char *); - static integer nrowa; - static logical upper; - extern /* Subroutine */ int xerbla_(char *, integer *); - static logical noconj, nounit; - - -/* - Purpose - ======= - - ZTRMM performs one of the matrix-matrix operations - - B := alpha*op( A )*B, or B := alpha*B*op( A ) - - where alpha is a scalar, B is an m by n matrix, A is a 
unit, or - non-unit, upper or lower triangular matrix and op( A ) is one of - - op( A ) = A or op( A ) = A' or op( A ) = conjg( A' ). - - Parameters - ========== - - SIDE - CHARACTER*1. - On entry, SIDE specifies whether op( A ) multiplies B from - the left or right as follows: - - SIDE = 'L' or 'l' B := alpha*op( A )*B. - - SIDE = 'R' or 'r' B := alpha*B*op( A ). - - Unchanged on exit. - - UPLO - CHARACTER*1. - On entry, UPLO specifies whether the matrix A is an upper or - lower triangular matrix as follows: - - UPLO = 'U' or 'u' A is an upper triangular matrix. - - UPLO = 'L' or 'l' A is a lower triangular matrix. - - Unchanged on exit. - - TRANSA - CHARACTER*1. - On entry, TRANSA specifies the form of op( A ) to be used in - the matrix multiplication as follows: - - TRANSA = 'N' or 'n' op( A ) = A. - - TRANSA = 'T' or 't' op( A ) = A'. - - TRANSA = 'C' or 'c' op( A ) = conjg( A' ). - - Unchanged on exit. - - DIAG - CHARACTER*1. - On entry, DIAG specifies whether or not A is unit triangular - as follows: - - DIAG = 'U' or 'u' A is assumed to be unit triangular. - - DIAG = 'N' or 'n' A is not assumed to be unit - triangular. - - Unchanged on exit. - - M - INTEGER. - On entry, M specifies the number of rows of B. M must be at - least zero. - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the number of columns of B. N must be - at least zero. - Unchanged on exit. - - ALPHA - COMPLEX*16 . - On entry, ALPHA specifies the scalar alpha. When alpha is - zero then A is not referenced and B need not be set before - entry. - Unchanged on exit. - - A - COMPLEX*16 array of DIMENSION ( LDA, k ), where k is m - when SIDE = 'L' or 'l' and is n when SIDE = 'R' or 'r'. - Before entry with UPLO = 'U' or 'u', the leading k by k - upper triangular part of the array A must contain the upper - triangular matrix and the strictly lower triangular part of - A is not referenced. 
- Before entry with UPLO = 'L' or 'l', the leading k by k - lower triangular part of the array A must contain the lower - triangular matrix and the strictly upper triangular part of - A is not referenced. - Note that when DIAG = 'U' or 'u', the diagonal elements of - A are not referenced either, but are assumed to be unity. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. When SIDE = 'L' or 'l' then - LDA must be at least max( 1, m ), when SIDE = 'R' or 'r' - then LDA must be at least max( 1, n ). - Unchanged on exit. - - B - COMPLEX*16 array of DIMENSION ( LDB, n ). - Before entry, the leading m by n part of the array B must - contain the matrix B, and on exit is overwritten by the - transformed matrix. - - LDB - INTEGER. - On entry, LDB specifies the first dimension of B as declared - in the calling (sub) program. LDB must be at least - max( 1, m ). - Unchanged on exit. - - - Level 3 Blas routine. - - -- Written on 8-February-1989. - Jack Dongarra, Argonne National Laboratory. - Iain Duff, AERE Harwell. - Jeremy Du Croz, Numerical Algorithms Group Ltd. - Sven Hammarling, Numerical Algorithms Group Ltd. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - - /* Function Body */ - lside = lsame_(side, "L"); - if (lside) { - nrowa = *m; - } else { - nrowa = *n; - } - noconj = lsame_(transa, "T"); - nounit = lsame_(diag, "N"); - upper = lsame_(uplo, "U"); - - info = 0; - if ((! lside && ! lsame_(side, "R"))) { - info = 1; - } else if ((! upper && ! lsame_(uplo, "L"))) { - info = 2; - } else if (((! lsame_(transa, "N") && ! lsame_( - transa, "T")) && ! lsame_(transa, "C"))) { - info = 3; - } else if ((! lsame_(diag, "U") && ! 
lsame_(diag, - "N"))) { - info = 4; - } else if (*m < 0) { - info = 5; - } else if (*n < 0) { - info = 6; - } else if (*lda < max(1,nrowa)) { - info = 9; - } else if (*ldb < max(1,*m)) { - info = 11; - } - if (info != 0) { - xerbla_("ZTRMM ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0) { - return 0; - } - -/* And when alpha.eq.zero. */ - - if ((alpha->r == 0. && alpha->i == 0.)) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - b[i__3].r = 0., b[i__3].i = 0.; -/* L10: */ - } -/* L20: */ - } - return 0; - } - -/* Start the operations. */ - - if (lside) { - if (lsame_(transa, "N")) { - -/* Form B := alpha*A*B. */ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (k = 1; k <= i__2; ++k) { - i__3 = k + j * b_dim1; - if (b[i__3].r != 0. || b[i__3].i != 0.) { - i__3 = k + j * b_dim1; - z__1.r = alpha->r * b[i__3].r - alpha->i * b[i__3] - .i, z__1.i = alpha->r * b[i__3].i + - alpha->i * b[i__3].r; - temp.r = z__1.r, temp.i = z__1.i; - i__3 = k - 1; - for (i__ = 1; i__ <= i__3; ++i__) { - i__4 = i__ + j * b_dim1; - i__5 = i__ + j * b_dim1; - i__6 = i__ + k * a_dim1; - z__2.r = temp.r * a[i__6].r - temp.i * a[i__6] - .i, z__2.i = temp.r * a[i__6].i + - temp.i * a[i__6].r; - z__1.r = b[i__5].r + z__2.r, z__1.i = b[i__5] - .i + z__2.i; - b[i__4].r = z__1.r, b[i__4].i = z__1.i; -/* L30: */ - } - if (nounit) { - i__3 = k + k * a_dim1; - z__1.r = temp.r * a[i__3].r - temp.i * a[i__3] - .i, z__1.i = temp.r * a[i__3].i + - temp.i * a[i__3].r; - temp.r = z__1.r, temp.i = z__1.i; - } - i__3 = k + j * b_dim1; - b[i__3].r = temp.r, b[i__3].i = temp.i; - } -/* L40: */ - } -/* L50: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - for (k = *m; k >= 1; --k) { - i__2 = k + j * b_dim1; - if (b[i__2].r != 0. || b[i__2].i != 0.) 
{ - i__2 = k + j * b_dim1; - z__1.r = alpha->r * b[i__2].r - alpha->i * b[i__2] - .i, z__1.i = alpha->r * b[i__2].i + - alpha->i * b[i__2].r; - temp.r = z__1.r, temp.i = z__1.i; - i__2 = k + j * b_dim1; - b[i__2].r = temp.r, b[i__2].i = temp.i; - if (nounit) { - i__2 = k + j * b_dim1; - i__3 = k + j * b_dim1; - i__4 = k + k * a_dim1; - z__1.r = b[i__3].r * a[i__4].r - b[i__3].i * - a[i__4].i, z__1.i = b[i__3].r * a[ - i__4].i + b[i__3].i * a[i__4].r; - b[i__2].r = z__1.r, b[i__2].i = z__1.i; - } - i__2 = *m; - for (i__ = k + 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * b_dim1; - i__5 = i__ + k * a_dim1; - z__2.r = temp.r * a[i__5].r - temp.i * a[i__5] - .i, z__2.i = temp.r * a[i__5].i + - temp.i * a[i__5].r; - z__1.r = b[i__4].r + z__2.r, z__1.i = b[i__4] - .i + z__2.i; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L60: */ - } - } -/* L70: */ - } -/* L80: */ - } - } - } else { - -/* Form B := alpha*A'*B or B := alpha*conjg( A' )*B. */ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - for (i__ = *m; i__ >= 1; --i__) { - i__2 = i__ + j * b_dim1; - temp.r = b[i__2].r, temp.i = b[i__2].i; - if (noconj) { - if (nounit) { - i__2 = i__ + i__ * a_dim1; - z__1.r = temp.r * a[i__2].r - temp.i * a[i__2] - .i, z__1.i = temp.r * a[i__2].i + - temp.i * a[i__2].r; - temp.r = z__1.r, temp.i = z__1.i; - } - i__2 = i__ - 1; - for (k = 1; k <= i__2; ++k) { - i__3 = k + i__ * a_dim1; - i__4 = k + j * b_dim1; - z__2.r = a[i__3].r * b[i__4].r - a[i__3].i * - b[i__4].i, z__2.i = a[i__3].r * b[ - i__4].i + a[i__3].i * b[i__4].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L90: */ - } - } else { - if (nounit) { - d_cnjg(&z__2, &a[i__ + i__ * a_dim1]); - z__1.r = temp.r * z__2.r - temp.i * z__2.i, - z__1.i = temp.r * z__2.i + temp.i * - z__2.r; - temp.r = z__1.r, temp.i = z__1.i; - } - i__2 = i__ - 1; - for (k = 1; k <= i__2; ++k) { - d_cnjg(&z__3, &a[k + i__ * a_dim1]); - i__3 = k + j * b_dim1; - 
z__2.r = z__3.r * b[i__3].r - z__3.i * b[i__3] - .i, z__2.i = z__3.r * b[i__3].i + - z__3.i * b[i__3].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L100: */ - } - } - i__2 = i__ + j * b_dim1; - z__1.r = alpha->r * temp.r - alpha->i * temp.i, - z__1.i = alpha->r * temp.i + alpha->i * - temp.r; - b[i__2].r = z__1.r, b[i__2].i = z__1.i; -/* L110: */ - } -/* L120: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - temp.r = b[i__3].r, temp.i = b[i__3].i; - if (noconj) { - if (nounit) { - i__3 = i__ + i__ * a_dim1; - z__1.r = temp.r * a[i__3].r - temp.i * a[i__3] - .i, z__1.i = temp.r * a[i__3].i + - temp.i * a[i__3].r; - temp.r = z__1.r, temp.i = z__1.i; - } - i__3 = *m; - for (k = i__ + 1; k <= i__3; ++k) { - i__4 = k + i__ * a_dim1; - i__5 = k + j * b_dim1; - z__2.r = a[i__4].r * b[i__5].r - a[i__4].i * - b[i__5].i, z__2.i = a[i__4].r * b[ - i__5].i + a[i__4].i * b[i__5].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L130: */ - } - } else { - if (nounit) { - d_cnjg(&z__2, &a[i__ + i__ * a_dim1]); - z__1.r = temp.r * z__2.r - temp.i * z__2.i, - z__1.i = temp.r * z__2.i + temp.i * - z__2.r; - temp.r = z__1.r, temp.i = z__1.i; - } - i__3 = *m; - for (k = i__ + 1; k <= i__3; ++k) { - d_cnjg(&z__3, &a[k + i__ * a_dim1]); - i__4 = k + j * b_dim1; - z__2.r = z__3.r * b[i__4].r - z__3.i * b[i__4] - .i, z__2.i = z__3.r * b[i__4].i + - z__3.i * b[i__4].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L140: */ - } - } - i__3 = i__ + j * b_dim1; - z__1.r = alpha->r * temp.r - alpha->i * temp.i, - z__1.i = alpha->r * temp.i + alpha->i * - temp.r; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L150: */ - } -/* L160: */ - } - } - } - } else { - if (lsame_(transa, "N")) { - -/* Form B := alpha*B*A. 
*/ - - if (upper) { - for (j = *n; j >= 1; --j) { - temp.r = alpha->r, temp.i = alpha->i; - if (nounit) { - i__1 = j + j * a_dim1; - z__1.r = temp.r * a[i__1].r - temp.i * a[i__1].i, - z__1.i = temp.r * a[i__1].i + temp.i * a[i__1] - .r; - temp.r = z__1.r, temp.i = z__1.i; - } - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__ + j * b_dim1; - i__3 = i__ + j * b_dim1; - z__1.r = temp.r * b[i__3].r - temp.i * b[i__3].i, - z__1.i = temp.r * b[i__3].i + temp.i * b[i__3] - .r; - b[i__2].r = z__1.r, b[i__2].i = z__1.i; -/* L170: */ - } - i__1 = j - 1; - for (k = 1; k <= i__1; ++k) { - i__2 = k + j * a_dim1; - if (a[i__2].r != 0. || a[i__2].i != 0.) { - i__2 = k + j * a_dim1; - z__1.r = alpha->r * a[i__2].r - alpha->i * a[i__2] - .i, z__1.i = alpha->r * a[i__2].i + - alpha->i * a[i__2].r; - temp.r = z__1.r, temp.i = z__1.i; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * b_dim1; - i__5 = i__ + k * b_dim1; - z__2.r = temp.r * b[i__5].r - temp.i * b[i__5] - .i, z__2.i = temp.r * b[i__5].i + - temp.i * b[i__5].r; - z__1.r = b[i__4].r + z__2.r, z__1.i = b[i__4] - .i + z__2.i; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L180: */ - } - } -/* L190: */ - } -/* L200: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - temp.r = alpha->r, temp.i = alpha->i; - if (nounit) { - i__2 = j + j * a_dim1; - z__1.r = temp.r * a[i__2].r - temp.i * a[i__2].i, - z__1.i = temp.r * a[i__2].i + temp.i * a[i__2] - .r; - temp.r = z__1.r, temp.i = z__1.i; - } - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * b_dim1; - z__1.r = temp.r * b[i__4].r - temp.i * b[i__4].i, - z__1.i = temp.r * b[i__4].i + temp.i * b[i__4] - .r; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L210: */ - } - i__2 = *n; - for (k = j + 1; k <= i__2; ++k) { - i__3 = k + j * a_dim1; - if (a[i__3].r != 0. || a[i__3].i != 0.) 
{ - i__3 = k + j * a_dim1; - z__1.r = alpha->r * a[i__3].r - alpha->i * a[i__3] - .i, z__1.i = alpha->r * a[i__3].i + - alpha->i * a[i__3].r; - temp.r = z__1.r, temp.i = z__1.i; - i__3 = *m; - for (i__ = 1; i__ <= i__3; ++i__) { - i__4 = i__ + j * b_dim1; - i__5 = i__ + j * b_dim1; - i__6 = i__ + k * b_dim1; - z__2.r = temp.r * b[i__6].r - temp.i * b[i__6] - .i, z__2.i = temp.r * b[i__6].i + - temp.i * b[i__6].r; - z__1.r = b[i__5].r + z__2.r, z__1.i = b[i__5] - .i + z__2.i; - b[i__4].r = z__1.r, b[i__4].i = z__1.i; -/* L220: */ - } - } -/* L230: */ - } -/* L240: */ - } - } - } else { - -/* Form B := alpha*B*A' or B := alpha*B*conjg( A' ). */ - - if (upper) { - i__1 = *n; - for (k = 1; k <= i__1; ++k) { - i__2 = k - 1; - for (j = 1; j <= i__2; ++j) { - i__3 = j + k * a_dim1; - if (a[i__3].r != 0. || a[i__3].i != 0.) { - if (noconj) { - i__3 = j + k * a_dim1; - z__1.r = alpha->r * a[i__3].r - alpha->i * a[ - i__3].i, z__1.i = alpha->r * a[i__3] - .i + alpha->i * a[i__3].r; - temp.r = z__1.r, temp.i = z__1.i; - } else { - d_cnjg(&z__2, &a[j + k * a_dim1]); - z__1.r = alpha->r * z__2.r - alpha->i * - z__2.i, z__1.i = alpha->r * z__2.i + - alpha->i * z__2.r; - temp.r = z__1.r, temp.i = z__1.i; - } - i__3 = *m; - for (i__ = 1; i__ <= i__3; ++i__) { - i__4 = i__ + j * b_dim1; - i__5 = i__ + j * b_dim1; - i__6 = i__ + k * b_dim1; - z__2.r = temp.r * b[i__6].r - temp.i * b[i__6] - .i, z__2.i = temp.r * b[i__6].i + - temp.i * b[i__6].r; - z__1.r = b[i__5].r + z__2.r, z__1.i = b[i__5] - .i + z__2.i; - b[i__4].r = z__1.r, b[i__4].i = z__1.i; -/* L250: */ - } - } -/* L260: */ - } - temp.r = alpha->r, temp.i = alpha->i; - if (nounit) { - if (noconj) { - i__2 = k + k * a_dim1; - z__1.r = temp.r * a[i__2].r - temp.i * a[i__2].i, - z__1.i = temp.r * a[i__2].i + temp.i * a[ - i__2].r; - temp.r = z__1.r, temp.i = z__1.i; - } else { - d_cnjg(&z__2, &a[k + k * a_dim1]); - z__1.r = temp.r * z__2.r - temp.i * z__2.i, - z__1.i = temp.r * z__2.i + temp.i * - z__2.r; - temp.r = z__1.r, 
temp.i = z__1.i; - } - } - if (temp.r != 1. || temp.i != 0.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + k * b_dim1; - i__4 = i__ + k * b_dim1; - z__1.r = temp.r * b[i__4].r - temp.i * b[i__4].i, - z__1.i = temp.r * b[i__4].i + temp.i * b[ - i__4].r; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L270: */ - } - } -/* L280: */ - } - } else { - for (k = *n; k >= 1; --k) { - i__1 = *n; - for (j = k + 1; j <= i__1; ++j) { - i__2 = j + k * a_dim1; - if (a[i__2].r != 0. || a[i__2].i != 0.) { - if (noconj) { - i__2 = j + k * a_dim1; - z__1.r = alpha->r * a[i__2].r - alpha->i * a[ - i__2].i, z__1.i = alpha->r * a[i__2] - .i + alpha->i * a[i__2].r; - temp.r = z__1.r, temp.i = z__1.i; - } else { - d_cnjg(&z__2, &a[j + k * a_dim1]); - z__1.r = alpha->r * z__2.r - alpha->i * - z__2.i, z__1.i = alpha->r * z__2.i + - alpha->i * z__2.r; - temp.r = z__1.r, temp.i = z__1.i; - } - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * b_dim1; - i__5 = i__ + k * b_dim1; - z__2.r = temp.r * b[i__5].r - temp.i * b[i__5] - .i, z__2.i = temp.r * b[i__5].i + - temp.i * b[i__5].r; - z__1.r = b[i__4].r + z__2.r, z__1.i = b[i__4] - .i + z__2.i; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L290: */ - } - } -/* L300: */ - } - temp.r = alpha->r, temp.i = alpha->i; - if (nounit) { - if (noconj) { - i__1 = k + k * a_dim1; - z__1.r = temp.r * a[i__1].r - temp.i * a[i__1].i, - z__1.i = temp.r * a[i__1].i + temp.i * a[ - i__1].r; - temp.r = z__1.r, temp.i = z__1.i; - } else { - d_cnjg(&z__2, &a[k + k * a_dim1]); - z__1.r = temp.r * z__2.r - temp.i * z__2.i, - z__1.i = temp.r * z__2.i + temp.i * - z__2.r; - temp.r = z__1.r, temp.i = z__1.i; - } - } - if (temp.r != 1. || temp.i != 0.) 
{ - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__ + k * b_dim1; - i__3 = i__ + k * b_dim1; - z__1.r = temp.r * b[i__3].r - temp.i * b[i__3].i, - z__1.i = temp.r * b[i__3].i + temp.i * b[ - i__3].r; - b[i__2].r = z__1.r, b[i__2].i = z__1.i; -/* L310: */ - } - } -/* L320: */ - } - } - } - } - - return 0; - -/* End of ZTRMM . */ - -} /* ztrmm_ */ - -/* Subroutine */ int ztrmv_(char *uplo, char *trans, char *diag, integer *n, - doublecomplex *a, integer *lda, doublecomplex *x, integer *incx) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4, i__5; - doublecomplex z__1, z__2, z__3; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, j, ix, jx, kx, info; - static doublecomplex temp; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int xerbla_(char *, integer *); - static logical noconj, nounit; - - -/* - Purpose - ======= - - ZTRMV performs one of the matrix-vector operations - - x := A*x, or x := A'*x, or x := conjg( A' )*x, - - where x is an n element vector and A is an n by n unit, or non-unit, - upper or lower triangular matrix. - - Parameters - ========== - - UPLO - CHARACTER*1. - On entry, UPLO specifies whether the matrix is an upper or - lower triangular matrix as follows: - - UPLO = 'U' or 'u' A is an upper triangular matrix. - - UPLO = 'L' or 'l' A is a lower triangular matrix. - - Unchanged on exit. - - TRANS - CHARACTER*1. - On entry, TRANS specifies the operation to be performed as - follows: - - TRANS = 'N' or 'n' x := A*x. - - TRANS = 'T' or 't' x := A'*x. - - TRANS = 'C' or 'c' x := conjg( A' )*x. - - Unchanged on exit. - - DIAG - CHARACTER*1. - On entry, DIAG specifies whether or not A is unit - triangular as follows: - - DIAG = 'U' or 'u' A is assumed to be unit triangular. - - DIAG = 'N' or 'n' A is not assumed to be unit - triangular. - - Unchanged on exit. - - N - INTEGER. 
- On entry, N specifies the order of the matrix A. - N must be at least zero. - Unchanged on exit. - - A - COMPLEX*16 array of DIMENSION ( LDA, n ). - Before entry with UPLO = 'U' or 'u', the leading n by n - upper triangular part of the array A must contain the upper - triangular matrix and the strictly lower triangular part of - A is not referenced. - Before entry with UPLO = 'L' or 'l', the leading n by n - lower triangular part of the array A must contain the lower - triangular matrix and the strictly upper triangular part of - A is not referenced. - Note that when DIAG = 'U' or 'u', the diagonal elements of - A are not referenced either, but are assumed to be unity. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. LDA must be at least - max( 1, n ). - Unchanged on exit. - - X - COMPLEX*16 array of dimension at least - ( 1 + ( n - 1 )*abs( INCX ) ). - Before entry, the incremented array X must contain the n - element vector x. On exit, X is overwritten with the - transformed vector x. - - INCX - INTEGER. - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. - Unchanged on exit. - - - Level 2 Blas routine. - - -- Written on 22-October-1986. - Jack Dongarra, Argonne National Lab. - Jeremy Du Croz, Nag Central Office. - Sven Hammarling, Nag Central Office. - Richard Hanson, Sandia National Labs. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --x; - - /* Function Body */ - info = 0; - if ((! lsame_(uplo, "U") && ! lsame_(uplo, "L"))) { - info = 1; - } else if (((! lsame_(trans, "N") && ! lsame_(trans, - "T")) && ! lsame_(trans, "C"))) { - info = 2; - } else if ((! lsame_(diag, "U") && ! 
lsame_(diag, - "N"))) { - info = 3; - } else if (*n < 0) { - info = 4; - } else if (*lda < max(1,*n)) { - info = 6; - } else if (*incx == 0) { - info = 8; - } - if (info != 0) { - xerbla_("ZTRMV ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0) { - return 0; - } - - noconj = lsame_(trans, "T"); - nounit = lsame_(diag, "N"); - -/* - Set up the start point in X if the increment is not unity. This - will be ( N - 1 )*INCX too small for descending loops. -*/ - - if (*incx <= 0) { - kx = 1 - (*n - 1) * *incx; - } else if (*incx != 1) { - kx = 1; - } - -/* - Start the operations. In this version the elements of A are - accessed sequentially with one pass through A. -*/ - - if (lsame_(trans, "N")) { - -/* Form x := A*x. */ - - if (lsame_(uplo, "U")) { - if (*incx == 1) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - if (x[i__2].r != 0. || x[i__2].i != 0.) { - i__2 = j; - temp.r = x[i__2].r, temp.i = x[i__2].i; - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__; - i__4 = i__; - i__5 = i__ + j * a_dim1; - z__2.r = temp.r * a[i__5].r - temp.i * a[i__5].i, - z__2.i = temp.r * a[i__5].i + temp.i * a[ - i__5].r; - z__1.r = x[i__4].r + z__2.r, z__1.i = x[i__4].i + - z__2.i; - x[i__3].r = z__1.r, x[i__3].i = z__1.i; -/* L10: */ - } - if (nounit) { - i__2 = j; - i__3 = j; - i__4 = j + j * a_dim1; - z__1.r = x[i__3].r * a[i__4].r - x[i__3].i * a[ - i__4].i, z__1.i = x[i__3].r * a[i__4].i + - x[i__3].i * a[i__4].r; - x[i__2].r = z__1.r, x[i__2].i = z__1.i; - } - } -/* L20: */ - } - } else { - jx = kx; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = jx; - if (x[i__2].r != 0. || x[i__2].i != 0.) 
{ - i__2 = jx; - temp.r = x[i__2].r, temp.i = x[i__2].i; - ix = kx; - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = ix; - i__4 = ix; - i__5 = i__ + j * a_dim1; - z__2.r = temp.r * a[i__5].r - temp.i * a[i__5].i, - z__2.i = temp.r * a[i__5].i + temp.i * a[ - i__5].r; - z__1.r = x[i__4].r + z__2.r, z__1.i = x[i__4].i + - z__2.i; - x[i__3].r = z__1.r, x[i__3].i = z__1.i; - ix += *incx; -/* L30: */ - } - if (nounit) { - i__2 = jx; - i__3 = jx; - i__4 = j + j * a_dim1; - z__1.r = x[i__3].r * a[i__4].r - x[i__3].i * a[ - i__4].i, z__1.i = x[i__3].r * a[i__4].i + - x[i__3].i * a[i__4].r; - x[i__2].r = z__1.r, x[i__2].i = z__1.i; - } - } - jx += *incx; -/* L40: */ - } - } - } else { - if (*incx == 1) { - for (j = *n; j >= 1; --j) { - i__1 = j; - if (x[i__1].r != 0. || x[i__1].i != 0.) { - i__1 = j; - temp.r = x[i__1].r, temp.i = x[i__1].i; - i__1 = j + 1; - for (i__ = *n; i__ >= i__1; --i__) { - i__2 = i__; - i__3 = i__; - i__4 = i__ + j * a_dim1; - z__2.r = temp.r * a[i__4].r - temp.i * a[i__4].i, - z__2.i = temp.r * a[i__4].i + temp.i * a[ - i__4].r; - z__1.r = x[i__3].r + z__2.r, z__1.i = x[i__3].i + - z__2.i; - x[i__2].r = z__1.r, x[i__2].i = z__1.i; -/* L50: */ - } - if (nounit) { - i__1 = j; - i__2 = j; - i__3 = j + j * a_dim1; - z__1.r = x[i__2].r * a[i__3].r - x[i__2].i * a[ - i__3].i, z__1.i = x[i__2].r * a[i__3].i + - x[i__2].i * a[i__3].r; - x[i__1].r = z__1.r, x[i__1].i = z__1.i; - } - } -/* L60: */ - } - } else { - kx += (*n - 1) * *incx; - jx = kx; - for (j = *n; j >= 1; --j) { - i__1 = jx; - if (x[i__1].r != 0. || x[i__1].i != 0.) 
{ - i__1 = jx; - temp.r = x[i__1].r, temp.i = x[i__1].i; - ix = kx; - i__1 = j + 1; - for (i__ = *n; i__ >= i__1; --i__) { - i__2 = ix; - i__3 = ix; - i__4 = i__ + j * a_dim1; - z__2.r = temp.r * a[i__4].r - temp.i * a[i__4].i, - z__2.i = temp.r * a[i__4].i + temp.i * a[ - i__4].r; - z__1.r = x[i__3].r + z__2.r, z__1.i = x[i__3].i + - z__2.i; - x[i__2].r = z__1.r, x[i__2].i = z__1.i; - ix -= *incx; -/* L70: */ - } - if (nounit) { - i__1 = jx; - i__2 = jx; - i__3 = j + j * a_dim1; - z__1.r = x[i__2].r * a[i__3].r - x[i__2].i * a[ - i__3].i, z__1.i = x[i__2].r * a[i__3].i + - x[i__2].i * a[i__3].r; - x[i__1].r = z__1.r, x[i__1].i = z__1.i; - } - } - jx -= *incx; -/* L80: */ - } - } - } - } else { - -/* Form x := A'*x or x := conjg( A' )*x. */ - - if (lsame_(uplo, "U")) { - if (*incx == 1) { - for (j = *n; j >= 1; --j) { - i__1 = j; - temp.r = x[i__1].r, temp.i = x[i__1].i; - if (noconj) { - if (nounit) { - i__1 = j + j * a_dim1; - z__1.r = temp.r * a[i__1].r - temp.i * a[i__1].i, - z__1.i = temp.r * a[i__1].i + temp.i * a[ - i__1].r; - temp.r = z__1.r, temp.i = z__1.i; - } - for (i__ = j - 1; i__ >= 1; --i__) { - i__1 = i__ + j * a_dim1; - i__2 = i__; - z__2.r = a[i__1].r * x[i__2].r - a[i__1].i * x[ - i__2].i, z__2.i = a[i__1].r * x[i__2].i + - a[i__1].i * x[i__2].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L90: */ - } - } else { - if (nounit) { - d_cnjg(&z__2, &a[j + j * a_dim1]); - z__1.r = temp.r * z__2.r - temp.i * z__2.i, - z__1.i = temp.r * z__2.i + temp.i * - z__2.r; - temp.r = z__1.r, temp.i = z__1.i; - } - for (i__ = j - 1; i__ >= 1; --i__) { - d_cnjg(&z__3, &a[i__ + j * a_dim1]); - i__1 = i__; - z__2.r = z__3.r * x[i__1].r - z__3.i * x[i__1].i, - z__2.i = z__3.r * x[i__1].i + z__3.i * x[ - i__1].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L100: */ - } - } - i__1 = j; - x[i__1].r = temp.r, x[i__1].i = temp.i; -/* L110: */ - } - } else { - jx = kx + 
(*n - 1) * *incx; - for (j = *n; j >= 1; --j) { - i__1 = jx; - temp.r = x[i__1].r, temp.i = x[i__1].i; - ix = jx; - if (noconj) { - if (nounit) { - i__1 = j + j * a_dim1; - z__1.r = temp.r * a[i__1].r - temp.i * a[i__1].i, - z__1.i = temp.r * a[i__1].i + temp.i * a[ - i__1].r; - temp.r = z__1.r, temp.i = z__1.i; - } - for (i__ = j - 1; i__ >= 1; --i__) { - ix -= *incx; - i__1 = i__ + j * a_dim1; - i__2 = ix; - z__2.r = a[i__1].r * x[i__2].r - a[i__1].i * x[ - i__2].i, z__2.i = a[i__1].r * x[i__2].i + - a[i__1].i * x[i__2].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L120: */ - } - } else { - if (nounit) { - d_cnjg(&z__2, &a[j + j * a_dim1]); - z__1.r = temp.r * z__2.r - temp.i * z__2.i, - z__1.i = temp.r * z__2.i + temp.i * - z__2.r; - temp.r = z__1.r, temp.i = z__1.i; - } - for (i__ = j - 1; i__ >= 1; --i__) { - ix -= *incx; - d_cnjg(&z__3, &a[i__ + j * a_dim1]); - i__1 = ix; - z__2.r = z__3.r * x[i__1].r - z__3.i * x[i__1].i, - z__2.i = z__3.r * x[i__1].i + z__3.i * x[ - i__1].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L130: */ - } - } - i__1 = jx; - x[i__1].r = temp.r, x[i__1].i = temp.i; - jx -= *incx; -/* L140: */ - } - } - } else { - if (*incx == 1) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - temp.r = x[i__2].r, temp.i = x[i__2].i; - if (noconj) { - if (nounit) { - i__2 = j + j * a_dim1; - z__1.r = temp.r * a[i__2].r - temp.i * a[i__2].i, - z__1.i = temp.r * a[i__2].i + temp.i * a[ - i__2].r; - temp.r = z__1.r, temp.i = z__1.i; - } - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__; - z__2.r = a[i__3].r * x[i__4].r - a[i__3].i * x[ - i__4].i, z__2.i = a[i__3].r * x[i__4].i + - a[i__3].i * x[i__4].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L150: */ - } - } else { - if (nounit) { - d_cnjg(&z__2, &a[j + j * a_dim1]); - z__1.r = temp.r * 
z__2.r - temp.i * z__2.i, - z__1.i = temp.r * z__2.i + temp.i * - z__2.r; - temp.r = z__1.r, temp.i = z__1.i; - } - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - d_cnjg(&z__3, &a[i__ + j * a_dim1]); - i__3 = i__; - z__2.r = z__3.r * x[i__3].r - z__3.i * x[i__3].i, - z__2.i = z__3.r * x[i__3].i + z__3.i * x[ - i__3].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L160: */ - } - } - i__2 = j; - x[i__2].r = temp.r, x[i__2].i = temp.i; -/* L170: */ - } - } else { - jx = kx; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = jx; - temp.r = x[i__2].r, temp.i = x[i__2].i; - ix = jx; - if (noconj) { - if (nounit) { - i__2 = j + j * a_dim1; - z__1.r = temp.r * a[i__2].r - temp.i * a[i__2].i, - z__1.i = temp.r * a[i__2].i + temp.i * a[ - i__2].r; - temp.r = z__1.r, temp.i = z__1.i; - } - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - ix += *incx; - i__3 = i__ + j * a_dim1; - i__4 = ix; - z__2.r = a[i__3].r * x[i__4].r - a[i__3].i * x[ - i__4].i, z__2.i = a[i__3].r * x[i__4].i + - a[i__3].i * x[i__4].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L180: */ - } - } else { - if (nounit) { - d_cnjg(&z__2, &a[j + j * a_dim1]); - z__1.r = temp.r * z__2.r - temp.i * z__2.i, - z__1.i = temp.r * z__2.i + temp.i * - z__2.r; - temp.r = z__1.r, temp.i = z__1.i; - } - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - ix += *incx; - d_cnjg(&z__3, &a[i__ + j * a_dim1]); - i__3 = ix; - z__2.r = z__3.r * x[i__3].r - z__3.i * x[i__3].i, - z__2.i = z__3.r * x[i__3].i + z__3.i * x[ - i__3].r; - z__1.r = temp.r + z__2.r, z__1.i = temp.i + - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L190: */ - } - } - i__2 = jx; - x[i__2].r = temp.r, x[i__2].i = temp.i; - jx += *incx; -/* L200: */ - } - } - } - } - - return 0; - -/* End of ZTRMV . 
*/ - -} /* ztrmv_ */ - -/* Subroutine */ int ztrsm_(char *side, char *uplo, char *transa, char *diag, - integer *m, integer *n, doublecomplex *alpha, doublecomplex *a, - integer *lda, doublecomplex *b, integer *ldb) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, i__1, i__2, i__3, i__4, i__5, - i__6, i__7; - doublecomplex z__1, z__2, z__3; - - /* Builtin functions */ - void z_div(doublecomplex *, doublecomplex *, doublecomplex *), d_cnjg( - doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, j, k, info; - static doublecomplex temp; - static logical lside; - extern logical lsame_(char *, char *); - static integer nrowa; - static logical upper; - extern /* Subroutine */ int xerbla_(char *, integer *); - static logical noconj, nounit; - - -/* - Purpose - ======= - - ZTRSM solves one of the matrix equations - - op( A )*X = alpha*B, or X*op( A ) = alpha*B, - - where alpha is a scalar, X and B are m by n matrices, A is a unit, or - non-unit, upper or lower triangular matrix and op( A ) is one of - - op( A ) = A or op( A ) = A' or op( A ) = conjg( A' ). - - The matrix X is overwritten on B. - - Parameters - ========== - - SIDE - CHARACTER*1. - On entry, SIDE specifies whether op( A ) appears on the left - or right of X as follows: - - SIDE = 'L' or 'l' op( A )*X = alpha*B. - - SIDE = 'R' or 'r' X*op( A ) = alpha*B. - - Unchanged on exit. - - UPLO - CHARACTER*1. - On entry, UPLO specifies whether the matrix A is an upper or - lower triangular matrix as follows: - - UPLO = 'U' or 'u' A is an upper triangular matrix. - - UPLO = 'L' or 'l' A is a lower triangular matrix. - - Unchanged on exit. - - TRANSA - CHARACTER*1. - On entry, TRANSA specifies the form of op( A ) to be used in - the matrix multiplication as follows: - - TRANSA = 'N' or 'n' op( A ) = A. - - TRANSA = 'T' or 't' op( A ) = A'. - - TRANSA = 'C' or 'c' op( A ) = conjg( A' ). - - Unchanged on exit. - - DIAG - CHARACTER*1. 
- On entry, DIAG specifies whether or not A is unit triangular - as follows: - - DIAG = 'U' or 'u' A is assumed to be unit triangular. - - DIAG = 'N' or 'n' A is not assumed to be unit - triangular. - - Unchanged on exit. - - M - INTEGER. - On entry, M specifies the number of rows of B. M must be at - least zero. - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the number of columns of B. N must be - at least zero. - Unchanged on exit. - - ALPHA - COMPLEX*16 . - On entry, ALPHA specifies the scalar alpha. When alpha is - zero then A is not referenced and B need not be set before - entry. - Unchanged on exit. - - A - COMPLEX*16 array of DIMENSION ( LDA, k ), where k is m - when SIDE = 'L' or 'l' and is n when SIDE = 'R' or 'r'. - Before entry with UPLO = 'U' or 'u', the leading k by k - upper triangular part of the array A must contain the upper - triangular matrix and the strictly lower triangular part of - A is not referenced. - Before entry with UPLO = 'L' or 'l', the leading k by k - lower triangular part of the array A must contain the lower - triangular matrix and the strictly upper triangular part of - A is not referenced. - Note that when DIAG = 'U' or 'u', the diagonal elements of - A are not referenced either, but are assumed to be unity. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. When SIDE = 'L' or 'l' then - LDA must be at least max( 1, m ), when SIDE = 'R' or 'r' - then LDA must be at least max( 1, n ). - Unchanged on exit. - - B - COMPLEX*16 array of DIMENSION ( LDB, n ). - Before entry, the leading m by n part of the array B must - contain the right-hand side matrix B, and on exit is - overwritten by the solution matrix X. - - LDB - INTEGER. - On entry, LDB specifies the first dimension of B as declared - in the calling (sub) program. LDB must be at least - max( 1, m ). - Unchanged on exit. - - - Level 3 Blas routine. 
- - -- Written on 8-February-1989. - Jack Dongarra, Argonne National Laboratory. - Iain Duff, AERE Harwell. - Jeremy Du Croz, Numerical Algorithms Group Ltd. - Sven Hammarling, Numerical Algorithms Group Ltd. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - - /* Function Body */ - lside = lsame_(side, "L"); - if (lside) { - nrowa = *m; - } else { - nrowa = *n; - } - noconj = lsame_(transa, "T"); - nounit = lsame_(diag, "N"); - upper = lsame_(uplo, "U"); - - info = 0; - if ((! lside && ! lsame_(side, "R"))) { - info = 1; - } else if ((! upper && ! lsame_(uplo, "L"))) { - info = 2; - } else if (((! lsame_(transa, "N") && ! lsame_( - transa, "T")) && ! lsame_(transa, "C"))) { - info = 3; - } else if ((! lsame_(diag, "U") && ! lsame_(diag, - "N"))) { - info = 4; - } else if (*m < 0) { - info = 5; - } else if (*n < 0) { - info = 6; - } else if (*lda < max(1,nrowa)) { - info = 9; - } else if (*ldb < max(1,*m)) { - info = 11; - } - if (info != 0) { - xerbla_("ZTRSM ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0) { - return 0; - } - -/* And when alpha.eq.zero. */ - - if ((alpha->r == 0. && alpha->i == 0.)) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - b[i__3].r = 0., b[i__3].i = 0.; -/* L10: */ - } -/* L20: */ - } - return 0; - } - -/* Start the operations. */ - - if (lside) { - if (lsame_(transa, "N")) { - -/* Form B := alpha*inv( A )*B. */ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (alpha->r != 1. || alpha->i != 0.) 
{ - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * b_dim1; - z__1.r = alpha->r * b[i__4].r - alpha->i * b[i__4] - .i, z__1.i = alpha->r * b[i__4].i + - alpha->i * b[i__4].r; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L30: */ - } - } - for (k = *m; k >= 1; --k) { - i__2 = k + j * b_dim1; - if (b[i__2].r != 0. || b[i__2].i != 0.) { - if (nounit) { - i__2 = k + j * b_dim1; - z_div(&z__1, &b[k + j * b_dim1], &a[k + k * - a_dim1]); - b[i__2].r = z__1.r, b[i__2].i = z__1.i; - } - i__2 = k - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * b_dim1; - i__5 = k + j * b_dim1; - i__6 = i__ + k * a_dim1; - z__2.r = b[i__5].r * a[i__6].r - b[i__5].i * - a[i__6].i, z__2.i = b[i__5].r * a[ - i__6].i + b[i__5].i * a[i__6].r; - z__1.r = b[i__4].r - z__2.r, z__1.i = b[i__4] - .i - z__2.i; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L40: */ - } - } -/* L50: */ - } -/* L60: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (alpha->r != 1. || alpha->i != 0.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * b_dim1; - z__1.r = alpha->r * b[i__4].r - alpha->i * b[i__4] - .i, z__1.i = alpha->r * b[i__4].i + - alpha->i * b[i__4].r; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L70: */ - } - } - i__2 = *m; - for (k = 1; k <= i__2; ++k) { - i__3 = k + j * b_dim1; - if (b[i__3].r != 0. || b[i__3].i != 0.) 
{ - if (nounit) { - i__3 = k + j * b_dim1; - z_div(&z__1, &b[k + j * b_dim1], &a[k + k * - a_dim1]); - b[i__3].r = z__1.r, b[i__3].i = z__1.i; - } - i__3 = *m; - for (i__ = k + 1; i__ <= i__3; ++i__) { - i__4 = i__ + j * b_dim1; - i__5 = i__ + j * b_dim1; - i__6 = k + j * b_dim1; - i__7 = i__ + k * a_dim1; - z__2.r = b[i__6].r * a[i__7].r - b[i__6].i * - a[i__7].i, z__2.i = b[i__6].r * a[ - i__7].i + b[i__6].i * a[i__7].r; - z__1.r = b[i__5].r - z__2.r, z__1.i = b[i__5] - .i - z__2.i; - b[i__4].r = z__1.r, b[i__4].i = z__1.i; -/* L80: */ - } - } -/* L90: */ - } -/* L100: */ - } - } - } else { - -/* - Form B := alpha*inv( A' )*B - or B := alpha*inv( conjg( A' ) )*B. -*/ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - z__1.r = alpha->r * b[i__3].r - alpha->i * b[i__3].i, - z__1.i = alpha->r * b[i__3].i + alpha->i * b[ - i__3].r; - temp.r = z__1.r, temp.i = z__1.i; - if (noconj) { - i__3 = i__ - 1; - for (k = 1; k <= i__3; ++k) { - i__4 = k + i__ * a_dim1; - i__5 = k + j * b_dim1; - z__2.r = a[i__4].r * b[i__5].r - a[i__4].i * - b[i__5].i, z__2.i = a[i__4].r * b[ - i__5].i + a[i__4].i * b[i__5].r; - z__1.r = temp.r - z__2.r, z__1.i = temp.i - - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L110: */ - } - if (nounit) { - z_div(&z__1, &temp, &a[i__ + i__ * a_dim1]); - temp.r = z__1.r, temp.i = z__1.i; - } - } else { - i__3 = i__ - 1; - for (k = 1; k <= i__3; ++k) { - d_cnjg(&z__3, &a[k + i__ * a_dim1]); - i__4 = k + j * b_dim1; - z__2.r = z__3.r * b[i__4].r - z__3.i * b[i__4] - .i, z__2.i = z__3.r * b[i__4].i + - z__3.i * b[i__4].r; - z__1.r = temp.r - z__2.r, z__1.i = temp.i - - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L120: */ - } - if (nounit) { - d_cnjg(&z__2, &a[i__ + i__ * a_dim1]); - z_div(&z__1, &temp, &z__2); - temp.r = z__1.r, temp.i = z__1.i; - } - } - i__3 = i__ + j * b_dim1; - b[i__3].r = temp.r, b[i__3].i = temp.i; -/* L130: */ - } -/* L140: */ - } - } 
else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - for (i__ = *m; i__ >= 1; --i__) { - i__2 = i__ + j * b_dim1; - z__1.r = alpha->r * b[i__2].r - alpha->i * b[i__2].i, - z__1.i = alpha->r * b[i__2].i + alpha->i * b[ - i__2].r; - temp.r = z__1.r, temp.i = z__1.i; - if (noconj) { - i__2 = *m; - for (k = i__ + 1; k <= i__2; ++k) { - i__3 = k + i__ * a_dim1; - i__4 = k + j * b_dim1; - z__2.r = a[i__3].r * b[i__4].r - a[i__3].i * - b[i__4].i, z__2.i = a[i__3].r * b[ - i__4].i + a[i__3].i * b[i__4].r; - z__1.r = temp.r - z__2.r, z__1.i = temp.i - - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L150: */ - } - if (nounit) { - z_div(&z__1, &temp, &a[i__ + i__ * a_dim1]); - temp.r = z__1.r, temp.i = z__1.i; - } - } else { - i__2 = *m; - for (k = i__ + 1; k <= i__2; ++k) { - d_cnjg(&z__3, &a[k + i__ * a_dim1]); - i__3 = k + j * b_dim1; - z__2.r = z__3.r * b[i__3].r - z__3.i * b[i__3] - .i, z__2.i = z__3.r * b[i__3].i + - z__3.i * b[i__3].r; - z__1.r = temp.r - z__2.r, z__1.i = temp.i - - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L160: */ - } - if (nounit) { - d_cnjg(&z__2, &a[i__ + i__ * a_dim1]); - z_div(&z__1, &temp, &z__2); - temp.r = z__1.r, temp.i = z__1.i; - } - } - i__2 = i__ + j * b_dim1; - b[i__2].r = temp.r, b[i__2].i = temp.i; -/* L170: */ - } -/* L180: */ - } - } - } - } else { - if (lsame_(transa, "N")) { - -/* Form B := alpha*B*inv( A ). */ - - if (upper) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (alpha->r != 1. || alpha->i != 0.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * b_dim1; - z__1.r = alpha->r * b[i__4].r - alpha->i * b[i__4] - .i, z__1.i = alpha->r * b[i__4].i + - alpha->i * b[i__4].r; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L190: */ - } - } - i__2 = j - 1; - for (k = 1; k <= i__2; ++k) { - i__3 = k + j * a_dim1; - if (a[i__3].r != 0. || a[i__3].i != 0.) 
{ - i__3 = *m; - for (i__ = 1; i__ <= i__3; ++i__) { - i__4 = i__ + j * b_dim1; - i__5 = i__ + j * b_dim1; - i__6 = k + j * a_dim1; - i__7 = i__ + k * b_dim1; - z__2.r = a[i__6].r * b[i__7].r - a[i__6].i * - b[i__7].i, z__2.i = a[i__6].r * b[ - i__7].i + a[i__6].i * b[i__7].r; - z__1.r = b[i__5].r - z__2.r, z__1.i = b[i__5] - .i - z__2.i; - b[i__4].r = z__1.r, b[i__4].i = z__1.i; -/* L200: */ - } - } -/* L210: */ - } - if (nounit) { - z_div(&z__1, &c_b359, &a[j + j * a_dim1]); - temp.r = z__1.r, temp.i = z__1.i; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * b_dim1; - z__1.r = temp.r * b[i__4].r - temp.i * b[i__4].i, - z__1.i = temp.r * b[i__4].i + temp.i * b[ - i__4].r; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L220: */ - } - } -/* L230: */ - } - } else { - for (j = *n; j >= 1; --j) { - if (alpha->r != 1. || alpha->i != 0.) { - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__ + j * b_dim1; - i__3 = i__ + j * b_dim1; - z__1.r = alpha->r * b[i__3].r - alpha->i * b[i__3] - .i, z__1.i = alpha->r * b[i__3].i + - alpha->i * b[i__3].r; - b[i__2].r = z__1.r, b[i__2].i = z__1.i; -/* L240: */ - } - } - i__1 = *n; - for (k = j + 1; k <= i__1; ++k) { - i__2 = k + j * a_dim1; - if (a[i__2].r != 0. || a[i__2].i != 0.) 
{ - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * b_dim1; - i__5 = k + j * a_dim1; - i__6 = i__ + k * b_dim1; - z__2.r = a[i__5].r * b[i__6].r - a[i__5].i * - b[i__6].i, z__2.i = a[i__5].r * b[ - i__6].i + a[i__5].i * b[i__6].r; - z__1.r = b[i__4].r - z__2.r, z__1.i = b[i__4] - .i - z__2.i; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L250: */ - } - } -/* L260: */ - } - if (nounit) { - z_div(&z__1, &c_b359, &a[j + j * a_dim1]); - temp.r = z__1.r, temp.i = z__1.i; - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__ + j * b_dim1; - i__3 = i__ + j * b_dim1; - z__1.r = temp.r * b[i__3].r - temp.i * b[i__3].i, - z__1.i = temp.r * b[i__3].i + temp.i * b[ - i__3].r; - b[i__2].r = z__1.r, b[i__2].i = z__1.i; -/* L270: */ - } - } -/* L280: */ - } - } - } else { - -/* - Form B := alpha*B*inv( A' ) - or B := alpha*B*inv( conjg( A' ) ). -*/ - - if (upper) { - for (k = *n; k >= 1; --k) { - if (nounit) { - if (noconj) { - z_div(&z__1, &c_b359, &a[k + k * a_dim1]); - temp.r = z__1.r, temp.i = z__1.i; - } else { - d_cnjg(&z__2, &a[k + k * a_dim1]); - z_div(&z__1, &c_b359, &z__2); - temp.r = z__1.r, temp.i = z__1.i; - } - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__ + k * b_dim1; - i__3 = i__ + k * b_dim1; - z__1.r = temp.r * b[i__3].r - temp.i * b[i__3].i, - z__1.i = temp.r * b[i__3].i + temp.i * b[ - i__3].r; - b[i__2].r = z__1.r, b[i__2].i = z__1.i; -/* L290: */ - } - } - i__1 = k - 1; - for (j = 1; j <= i__1; ++j) { - i__2 = j + k * a_dim1; - if (a[i__2].r != 0. || a[i__2].i != 0.) 
{ - if (noconj) { - i__2 = j + k * a_dim1; - temp.r = a[i__2].r, temp.i = a[i__2].i; - } else { - d_cnjg(&z__1, &a[j + k * a_dim1]); - temp.r = z__1.r, temp.i = z__1.i; - } - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * b_dim1; - i__5 = i__ + k * b_dim1; - z__2.r = temp.r * b[i__5].r - temp.i * b[i__5] - .i, z__2.i = temp.r * b[i__5].i + - temp.i * b[i__5].r; - z__1.r = b[i__4].r - z__2.r, z__1.i = b[i__4] - .i - z__2.i; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L300: */ - } - } -/* L310: */ - } - if (alpha->r != 1. || alpha->i != 0.) { - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__ + k * b_dim1; - i__3 = i__ + k * b_dim1; - z__1.r = alpha->r * b[i__3].r - alpha->i * b[i__3] - .i, z__1.i = alpha->r * b[i__3].i + - alpha->i * b[i__3].r; - b[i__2].r = z__1.r, b[i__2].i = z__1.i; -/* L320: */ - } - } -/* L330: */ - } - } else { - i__1 = *n; - for (k = 1; k <= i__1; ++k) { - if (nounit) { - if (noconj) { - z_div(&z__1, &c_b359, &a[k + k * a_dim1]); - temp.r = z__1.r, temp.i = z__1.i; - } else { - d_cnjg(&z__2, &a[k + k * a_dim1]); - z_div(&z__1, &c_b359, &z__2); - temp.r = z__1.r, temp.i = z__1.i; - } - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + k * b_dim1; - i__4 = i__ + k * b_dim1; - z__1.r = temp.r * b[i__4].r - temp.i * b[i__4].i, - z__1.i = temp.r * b[i__4].i + temp.i * b[ - i__4].r; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L340: */ - } - } - i__2 = *n; - for (j = k + 1; j <= i__2; ++j) { - i__3 = j + k * a_dim1; - if (a[i__3].r != 0. || a[i__3].i != 0.) 
{ - if (noconj) { - i__3 = j + k * a_dim1; - temp.r = a[i__3].r, temp.i = a[i__3].i; - } else { - d_cnjg(&z__1, &a[j + k * a_dim1]); - temp.r = z__1.r, temp.i = z__1.i; - } - i__3 = *m; - for (i__ = 1; i__ <= i__3; ++i__) { - i__4 = i__ + j * b_dim1; - i__5 = i__ + j * b_dim1; - i__6 = i__ + k * b_dim1; - z__2.r = temp.r * b[i__6].r - temp.i * b[i__6] - .i, z__2.i = temp.r * b[i__6].i + - temp.i * b[i__6].r; - z__1.r = b[i__5].r - z__2.r, z__1.i = b[i__5] - .i - z__2.i; - b[i__4].r = z__1.r, b[i__4].i = z__1.i; -/* L350: */ - } - } -/* L360: */ - } - if (alpha->r != 1. || alpha->i != 0.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + k * b_dim1; - i__4 = i__ + k * b_dim1; - z__1.r = alpha->r * b[i__4].r - alpha->i * b[i__4] - .i, z__1.i = alpha->r * b[i__4].i + - alpha->i * b[i__4].r; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L370: */ - } - } -/* L380: */ - } - } - } - } - - return 0; - -/* End of ZTRSM . */ - -} /* ztrsm_ */ - -/* Subroutine */ int ztrsv_(char *uplo, char *trans, char *diag, integer *n, - doublecomplex *a, integer *lda, doublecomplex *x, integer *incx) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4, i__5; - doublecomplex z__1, z__2, z__3; - - /* Builtin functions */ - void z_div(doublecomplex *, doublecomplex *, doublecomplex *), d_cnjg( - doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, j, ix, jx, kx, info; - static doublecomplex temp; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int xerbla_(char *, integer *); - static logical noconj, nounit; - - -/* - Purpose - ======= - - ZTRSV solves one of the systems of equations - - A*x = b, or A'*x = b, or conjg( A' )*x = b, - - where b and x are n element vectors and A is an n by n unit, or - non-unit, upper or lower triangular matrix. - - No test for singularity or near-singularity is included in this - routine. Such tests must be performed before calling this routine. 
- - Parameters - ========== - - UPLO - CHARACTER*1. - On entry, UPLO specifies whether the matrix is an upper or - lower triangular matrix as follows: - - UPLO = 'U' or 'u' A is an upper triangular matrix. - - UPLO = 'L' or 'l' A is a lower triangular matrix. - - Unchanged on exit. - - TRANS - CHARACTER*1. - On entry, TRANS specifies the equations to be solved as - follows: - - TRANS = 'N' or 'n' A*x = b. - - TRANS = 'T' or 't' A'*x = b. - - TRANS = 'C' or 'c' conjg( A' )*x = b. - - Unchanged on exit. - - DIAG - CHARACTER*1. - On entry, DIAG specifies whether or not A is unit - triangular as follows: - - DIAG = 'U' or 'u' A is assumed to be unit triangular. - - DIAG = 'N' or 'n' A is not assumed to be unit - triangular. - - Unchanged on exit. - - N - INTEGER. - On entry, N specifies the order of the matrix A. - N must be at least zero. - Unchanged on exit. - - A - COMPLEX*16 array of DIMENSION ( LDA, n ). - Before entry with UPLO = 'U' or 'u', the leading n by n - upper triangular part of the array A must contain the upper - triangular matrix and the strictly lower triangular part of - A is not referenced. - Before entry with UPLO = 'L' or 'l', the leading n by n - lower triangular part of the array A must contain the lower - triangular matrix and the strictly upper triangular part of - A is not referenced. - Note that when DIAG = 'U' or 'u', the diagonal elements of - A are not referenced either, but are assumed to be unity. - Unchanged on exit. - - LDA - INTEGER. - On entry, LDA specifies the first dimension of A as declared - in the calling (sub) program. LDA must be at least - max( 1, n ). - Unchanged on exit. - - X - COMPLEX*16 array of dimension at least - ( 1 + ( n - 1 )*abs( INCX ) ). - Before entry, the incremented array X must contain the n - element right-hand side vector b. On exit, X is overwritten - with the solution vector x. - - INCX - INTEGER. - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. 
- Unchanged on exit. - - - Level 2 Blas routine. - - -- Written on 22-October-1986. - Jack Dongarra, Argonne National Lab. - Jeremy Du Croz, Nag Central Office. - Sven Hammarling, Nag Central Office. - Richard Hanson, Sandia National Labs. - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --x; - - /* Function Body */ - info = 0; - if ((! lsame_(uplo, "U") && ! lsame_(uplo, "L"))) { - info = 1; - } else if (((! lsame_(trans, "N") && ! lsame_(trans, - "T")) && ! lsame_(trans, "C"))) { - info = 2; - } else if ((! lsame_(diag, "U") && ! lsame_(diag, - "N"))) { - info = 3; - } else if (*n < 0) { - info = 4; - } else if (*lda < max(1,*n)) { - info = 6; - } else if (*incx == 0) { - info = 8; - } - if (info != 0) { - xerbla_("ZTRSV ", &info); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0) { - return 0; - } - - noconj = lsame_(trans, "T"); - nounit = lsame_(diag, "N"); - -/* - Set up the start point in X if the increment is not unity. This - will be ( N - 1 )*INCX too small for descending loops. -*/ - - if (*incx <= 0) { - kx = 1 - (*n - 1) * *incx; - } else if (*incx != 1) { - kx = 1; - } - -/* - Start the operations. In this version the elements of A are - accessed sequentially with one pass through A. -*/ - - if (lsame_(trans, "N")) { - -/* Form x := inv( A )*x. */ - - if (lsame_(uplo, "U")) { - if (*incx == 1) { - for (j = *n; j >= 1; --j) { - i__1 = j; - if (x[i__1].r != 0. || x[i__1].i != 0.) 
{ - if (nounit) { - i__1 = j; - z_div(&z__1, &x[j], &a[j + j * a_dim1]); - x[i__1].r = z__1.r, x[i__1].i = z__1.i; - } - i__1 = j; - temp.r = x[i__1].r, temp.i = x[i__1].i; - for (i__ = j - 1; i__ >= 1; --i__) { - i__1 = i__; - i__2 = i__; - i__3 = i__ + j * a_dim1; - z__2.r = temp.r * a[i__3].r - temp.i * a[i__3].i, - z__2.i = temp.r * a[i__3].i + temp.i * a[ - i__3].r; - z__1.r = x[i__2].r - z__2.r, z__1.i = x[i__2].i - - z__2.i; - x[i__1].r = z__1.r, x[i__1].i = z__1.i; -/* L10: */ - } - } -/* L20: */ - } - } else { - jx = kx + (*n - 1) * *incx; - for (j = *n; j >= 1; --j) { - i__1 = jx; - if (x[i__1].r != 0. || x[i__1].i != 0.) { - if (nounit) { - i__1 = jx; - z_div(&z__1, &x[jx], &a[j + j * a_dim1]); - x[i__1].r = z__1.r, x[i__1].i = z__1.i; - } - i__1 = jx; - temp.r = x[i__1].r, temp.i = x[i__1].i; - ix = jx; - for (i__ = j - 1; i__ >= 1; --i__) { - ix -= *incx; - i__1 = ix; - i__2 = ix; - i__3 = i__ + j * a_dim1; - z__2.r = temp.r * a[i__3].r - temp.i * a[i__3].i, - z__2.i = temp.r * a[i__3].i + temp.i * a[ - i__3].r; - z__1.r = x[i__2].r - z__2.r, z__1.i = x[i__2].i - - z__2.i; - x[i__1].r = z__1.r, x[i__1].i = z__1.i; -/* L30: */ - } - } - jx -= *incx; -/* L40: */ - } - } - } else { - if (*incx == 1) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - if (x[i__2].r != 0. || x[i__2].i != 0.) { - if (nounit) { - i__2 = j; - z_div(&z__1, &x[j], &a[j + j * a_dim1]); - x[i__2].r = z__1.r, x[i__2].i = z__1.i; - } - i__2 = j; - temp.r = x[i__2].r, temp.i = x[i__2].i; - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - i__3 = i__; - i__4 = i__; - i__5 = i__ + j * a_dim1; - z__2.r = temp.r * a[i__5].r - temp.i * a[i__5].i, - z__2.i = temp.r * a[i__5].i + temp.i * a[ - i__5].r; - z__1.r = x[i__4].r - z__2.r, z__1.i = x[i__4].i - - z__2.i; - x[i__3].r = z__1.r, x[i__3].i = z__1.i; -/* L50: */ - } - } -/* L60: */ - } - } else { - jx = kx; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = jx; - if (x[i__2].r != 0. || x[i__2].i != 0.) 
{ - if (nounit) { - i__2 = jx; - z_div(&z__1, &x[jx], &a[j + j * a_dim1]); - x[i__2].r = z__1.r, x[i__2].i = z__1.i; - } - i__2 = jx; - temp.r = x[i__2].r, temp.i = x[i__2].i; - ix = jx; - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - ix += *incx; - i__3 = ix; - i__4 = ix; - i__5 = i__ + j * a_dim1; - z__2.r = temp.r * a[i__5].r - temp.i * a[i__5].i, - z__2.i = temp.r * a[i__5].i + temp.i * a[ - i__5].r; - z__1.r = x[i__4].r - z__2.r, z__1.i = x[i__4].i - - z__2.i; - x[i__3].r = z__1.r, x[i__3].i = z__1.i; -/* L70: */ - } - } - jx += *incx; -/* L80: */ - } - } - } - } else { - -/* Form x := inv( A' )*x or x := inv( conjg( A' ) )*x. */ - - if (lsame_(uplo, "U")) { - if (*incx == 1) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - temp.r = x[i__2].r, temp.i = x[i__2].i; - if (noconj) { - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__; - z__2.r = a[i__3].r * x[i__4].r - a[i__3].i * x[ - i__4].i, z__2.i = a[i__3].r * x[i__4].i + - a[i__3].i * x[i__4].r; - z__1.r = temp.r - z__2.r, z__1.i = temp.i - - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L90: */ - } - if (nounit) { - z_div(&z__1, &temp, &a[j + j * a_dim1]); - temp.r = z__1.r, temp.i = z__1.i; - } - } else { - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - d_cnjg(&z__3, &a[i__ + j * a_dim1]); - i__3 = i__; - z__2.r = z__3.r * x[i__3].r - z__3.i * x[i__3].i, - z__2.i = z__3.r * x[i__3].i + z__3.i * x[ - i__3].r; - z__1.r = temp.r - z__2.r, z__1.i = temp.i - - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L100: */ - } - if (nounit) { - d_cnjg(&z__2, &a[j + j * a_dim1]); - z_div(&z__1, &temp, &z__2); - temp.r = z__1.r, temp.i = z__1.i; - } - } - i__2 = j; - x[i__2].r = temp.r, x[i__2].i = temp.i; -/* L110: */ - } - } else { - jx = kx; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - ix = kx; - i__2 = jx; - temp.r = x[i__2].r, temp.i = x[i__2].i; - if (noconj) { - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; 
- i__4 = ix; - z__2.r = a[i__3].r * x[i__4].r - a[i__3].i * x[ - i__4].i, z__2.i = a[i__3].r * x[i__4].i + - a[i__3].i * x[i__4].r; - z__1.r = temp.r - z__2.r, z__1.i = temp.i - - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; - ix += *incx; -/* L120: */ - } - if (nounit) { - z_div(&z__1, &temp, &a[j + j * a_dim1]); - temp.r = z__1.r, temp.i = z__1.i; - } - } else { - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - d_cnjg(&z__3, &a[i__ + j * a_dim1]); - i__3 = ix; - z__2.r = z__3.r * x[i__3].r - z__3.i * x[i__3].i, - z__2.i = z__3.r * x[i__3].i + z__3.i * x[ - i__3].r; - z__1.r = temp.r - z__2.r, z__1.i = temp.i - - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; - ix += *incx; -/* L130: */ - } - if (nounit) { - d_cnjg(&z__2, &a[j + j * a_dim1]); - z_div(&z__1, &temp, &z__2); - temp.r = z__1.r, temp.i = z__1.i; - } - } - i__2 = jx; - x[i__2].r = temp.r, x[i__2].i = temp.i; - jx += *incx; -/* L140: */ - } - } - } else { - if (*incx == 1) { - for (j = *n; j >= 1; --j) { - i__1 = j; - temp.r = x[i__1].r, temp.i = x[i__1].i; - if (noconj) { - i__1 = j + 1; - for (i__ = *n; i__ >= i__1; --i__) { - i__2 = i__ + j * a_dim1; - i__3 = i__; - z__2.r = a[i__2].r * x[i__3].r - a[i__2].i * x[ - i__3].i, z__2.i = a[i__2].r * x[i__3].i + - a[i__2].i * x[i__3].r; - z__1.r = temp.r - z__2.r, z__1.i = temp.i - - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L150: */ - } - if (nounit) { - z_div(&z__1, &temp, &a[j + j * a_dim1]); - temp.r = z__1.r, temp.i = z__1.i; - } - } else { - i__1 = j + 1; - for (i__ = *n; i__ >= i__1; --i__) { - d_cnjg(&z__3, &a[i__ + j * a_dim1]); - i__2 = i__; - z__2.r = z__3.r * x[i__2].r - z__3.i * x[i__2].i, - z__2.i = z__3.r * x[i__2].i + z__3.i * x[ - i__2].r; - z__1.r = temp.r - z__2.r, z__1.i = temp.i - - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; -/* L160: */ - } - if (nounit) { - d_cnjg(&z__2, &a[j + j * a_dim1]); - z_div(&z__1, &temp, &z__2); - temp.r = z__1.r, temp.i = z__1.i; - } - } - i__1 = j; - x[i__1].r = temp.r, x[i__1].i = temp.i; -/* 
L170: */ - } - } else { - kx += (*n - 1) * *incx; - jx = kx; - for (j = *n; j >= 1; --j) { - ix = kx; - i__1 = jx; - temp.r = x[i__1].r, temp.i = x[i__1].i; - if (noconj) { - i__1 = j + 1; - for (i__ = *n; i__ >= i__1; --i__) { - i__2 = i__ + j * a_dim1; - i__3 = ix; - z__2.r = a[i__2].r * x[i__3].r - a[i__2].i * x[ - i__3].i, z__2.i = a[i__2].r * x[i__3].i + - a[i__2].i * x[i__3].r; - z__1.r = temp.r - z__2.r, z__1.i = temp.i - - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; - ix -= *incx; -/* L180: */ - } - if (nounit) { - z_div(&z__1, &temp, &a[j + j * a_dim1]); - temp.r = z__1.r, temp.i = z__1.i; - } - } else { - i__1 = j + 1; - for (i__ = *n; i__ >= i__1; --i__) { - d_cnjg(&z__3, &a[i__ + j * a_dim1]); - i__2 = ix; - z__2.r = z__3.r * x[i__2].r - z__3.i * x[i__2].i, - z__2.i = z__3.r * x[i__2].i + z__3.i * x[ - i__2].r; - z__1.r = temp.r - z__2.r, z__1.i = temp.i - - z__2.i; - temp.r = z__1.r, temp.i = z__1.i; - ix -= *incx; -/* L190: */ - } - if (nounit) { - d_cnjg(&z__2, &a[j + j * a_dim1]); - z_div(&z__1, &temp, &z__2); - temp.r = z__1.r, temp.i = z__1.i; - } - } - i__1 = jx; - x[i__1].r = temp.r, x[i__1].i = temp.i; - jx -= *incx; -/* L200: */ - } - } - } - } - - return 0; - -/* End of ZTRSV . */ - -} /* ztrsv_ */ - diff --git a/pythonPackages/numpy/numpy/linalg/dlamch.c b/pythonPackages/numpy/numpy/linalg/dlamch.c deleted file mode 100755 index bf1dfdb059..0000000000 --- a/pythonPackages/numpy/numpy/linalg/dlamch.c +++ /dev/null @@ -1,951 +0,0 @@ -#include -#include "f2c.h" - -/* If config.h is available, we only need dlamc3 */ -#ifndef HAVE_CONFIG -doublereal dlamch_(char *cmach) -{ -/* -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLAMCH determines double precision machine parameters. 
- - Arguments - ========= - - CMACH (input) CHARACTER*1 - Specifies the value to be returned by DLAMCH: - = 'E' or 'e', DLAMCH := eps - = 'S' or 's , DLAMCH := sfmin - = 'B' or 'b', DLAMCH := base - = 'P' or 'p', DLAMCH := eps*base - = 'N' or 'n', DLAMCH := t - = 'R' or 'r', DLAMCH := rnd - = 'M' or 'm', DLAMCH := emin - = 'U' or 'u', DLAMCH := rmin - = 'L' or 'l', DLAMCH := emax - = 'O' or 'o', DLAMCH := rmax - - where - - eps = relative machine precision - sfmin = safe minimum, such that 1/sfmin does not overflow - base = base of the machine - prec = eps*base - t = number of (base) digits in the mantissa - rnd = 1.0 when rounding occurs in addition, 0.0 otherwise - emin = minimum exponent before (gradual) underflow - rmin = underflow threshold - base**(emin-1) - emax = largest exponent before overflow - rmax = overflow threshold - (base**emax)*(1-eps) - - ===================================================================== -*/ -/* >>Start of File<< - Initialized data */ - static logical first = TRUE_; - /* System generated locals */ - integer i__1; - doublereal ret_val; - /* Builtin functions */ - double pow_di(doublereal *, integer *); - /* Local variables */ - static doublereal base; - static integer beta; - static doublereal emin, prec, emax; - static integer imin, imax; - static logical lrnd; - static doublereal rmin, rmax, t, rmach; - extern logical lsame_(char *, char *); - static doublereal small, sfmin; - extern /* Subroutine */ int dlamc2_(integer *, integer *, logical *, - doublereal *, integer *, doublereal *, integer *, doublereal *); - static integer it; - static doublereal rnd, eps; - - - - if (first) { - first = FALSE_; - dlamc2_(&beta, &it, &lrnd, &eps, &imin, &rmin, &imax, &rmax); - base = (doublereal) beta; - t = (doublereal) it; - if (lrnd) { - rnd = 1.; - i__1 = 1 - it; - eps = pow_di(&base, &i__1) / 2; - } else { - rnd = 0.; - i__1 = 1 - it; - eps = pow_di(&base, &i__1); - } - prec = eps * base; - emin = (doublereal) imin; - emax = 
(doublereal) imax; - sfmin = rmin; - small = 1. / rmax; - if (small >= sfmin) { - -/* Use SMALL plus a bit, to avoid the possibility of rou -nding - causing overflow when computing 1/sfmin. */ - - sfmin = small * (eps + 1.); - } - } - - if (lsame_(cmach, "E")) { - rmach = eps; - } else if (lsame_(cmach, "S")) { - rmach = sfmin; - } else if (lsame_(cmach, "B")) { - rmach = base; - } else if (lsame_(cmach, "P")) { - rmach = prec; - } else if (lsame_(cmach, "N")) { - rmach = t; - } else if (lsame_(cmach, "R")) { - rmach = rnd; - } else if (lsame_(cmach, "M")) { - rmach = emin; - } else if (lsame_(cmach, "U")) { - rmach = rmin; - } else if (lsame_(cmach, "L")) { - rmach = emax; - } else if (lsame_(cmach, "O")) { - rmach = rmax; - } - - ret_val = rmach; - return ret_val; - -/* End of DLAMCH */ - -} /* dlamch_ */ - - -/* Subroutine */ int dlamc1_(integer *beta, integer *t, logical *rnd, logical - *ieee1) -{ -/* -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLAMC1 determines the machine parameters given by BETA, T, RND, and - IEEE1. - - Arguments - ========= - - BETA (output) INTEGER - The base of the machine. - - T (output) INTEGER - The number of ( BETA ) digits in the mantissa. - - RND (output) LOGICAL - Specifies whether proper rounding ( RND = .TRUE. ) or - chopping ( RND = .FALSE. ) occurs in addition. This may not - - be a reliable guide to the way in which the machine performs - - its arithmetic. - - IEEE1 (output) LOGICAL - Specifies whether rounding appears to be done in the IEEE - 'round to nearest' style. - - Further Details - =============== - - The routine is based on the routine ENVRON by Malcolm and - incorporates suggestions by Gentleman and Marovich. See - - Malcolm M. A. (1972) Algorithms to reveal properties of - floating-point arithmetic. Comms. of the ACM, 15, 949-951. 
- - Gentleman W. M. and Marovich S. B. (1974) More on algorithms - that reveal properties of floating point arithmetic units. - Comms. of the ACM, 17, 276-277. - - ===================================================================== -*/ - /* Initialized data */ - static logical first = TRUE_; - /* System generated locals */ - doublereal d__1, d__2; - /* Local variables */ - static logical lrnd; - static doublereal a, b, c, f; - static integer lbeta; - static doublereal savec; - extern doublereal dlamc3_(doublereal *, doublereal *); - static logical lieee1; - static doublereal t1, t2; - static integer lt; - static doublereal one, qtr; - - - - if (first) { - first = FALSE_; - one = 1.; - -/* LBETA, LIEEE1, LT and LRND are the local values of BE -TA, - IEEE1, T and RND. - - Throughout this routine we use the function DLAMC3 to ens -ure - that relevant values are stored and not held in registers, - or - are not affected by optimizers. - - Compute a = 2.0**m with the smallest positive integer m s -uch - that - - fl( a + 1.0 ) = a. */ - - a = 1.; - c = 1.; - -/* + WHILE( C.EQ.ONE )LOOP */ -L10: - if (c == one) { - a *= 2; - c = dlamc3_(&a, &one); - d__1 = -a; - c = dlamc3_(&c, &d__1); - goto L10; - } -/* + END WHILE - - Now compute b = 2.0**m with the smallest positive integer -m - such that - - fl( a + b ) .gt. a. */ - - b = 1.; - c = dlamc3_(&a, &b); - -/* + WHILE( C.EQ.A )LOOP */ -L20: - if (c == a) { - b *= 2; - c = dlamc3_(&a, &b); - goto L20; - } -/* + END WHILE - - Now compute the base. a and c are neighbouring floating po -int - numbers in the interval ( beta**t, beta**( t + 1 ) ) and - so - their difference is beta. Adding 0.25 to c is to ensure that - it - is truncated to beta and not ( beta - 1 ). */ - - qtr = one / 4; - savec = c; - d__1 = -a; - c = dlamc3_(&c, &d__1); - lbeta = (integer) (c + qtr); - -/* Now determine whether rounding or chopping occurs, by addin -g a - bit less than beta/2 and a bit more than beta/2 to - a. 
*/ - - b = (doublereal) lbeta; - d__1 = b / 2; - d__2 = -b / 100; - f = dlamc3_(&d__1, &d__2); - c = dlamc3_(&f, &a); - if (c == a) { - lrnd = TRUE_; - } else { - lrnd = FALSE_; - } - d__1 = b / 2; - d__2 = b / 100; - f = dlamc3_(&d__1, &d__2); - c = dlamc3_(&f, &a); - if (lrnd && c == a) { - lrnd = FALSE_; - } - -/* Try and decide whether rounding is done in the IEEE 'round - to - nearest' style. B/2 is half a unit in the last place of the -two - numbers A and SAVEC. Furthermore, A is even, i.e. has last -bit - zero, and SAVEC is odd. Thus adding B/2 to A should not cha -nge - A, but adding B/2 to SAVEC should change SAVEC. */ - - d__1 = b / 2; - t1 = dlamc3_(&d__1, &a); - d__1 = b / 2; - t2 = dlamc3_(&d__1, &savec); - lieee1 = t1 == a && t2 > savec && lrnd; - -/* Now find the mantissa, t. It should be the integer part - of - log to the base beta of a, however it is safer to determine - t - by powering. So we find t as the smallest positive integer -for - which - - fl( beta**t + 1.0 ) = 1.0. */ - - lt = 0; - a = 1.; - c = 1.; - -/* + WHILE( C.EQ.ONE )LOOP */ -L30: - if (c == one) { - ++lt; - a *= lbeta; - c = dlamc3_(&a, &one); - d__1 = -a; - c = dlamc3_(&c, &d__1); - goto L30; - } -/* + END WHILE */ - - } - - *beta = lbeta; - *t = lt; - *rnd = lrnd; - *ieee1 = lieee1; - return 0; - -/* End of DLAMC1 */ - -} /* dlamc1_ */ - - -/* Subroutine */ int dlamc2_(integer *beta, integer *t, logical *rnd, - doublereal *eps, integer *emin, doublereal *rmin, integer *emax, - doublereal *rmax) -{ -/* -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLAMC2 determines the machine parameters specified in its argument - list. - - Arguments - ========= - - BETA (output) INTEGER - The base of the machine. - - T (output) INTEGER - The number of ( BETA ) digits in the mantissa. 
- - RND (output) LOGICAL - Specifies whether proper rounding ( RND = .TRUE. ) or - chopping ( RND = .FALSE. ) occurs in addition. This may not - - be a reliable guide to the way in which the machine performs - - its arithmetic. - - EPS (output) DOUBLE PRECISION - The smallest positive number such that - - fl( 1.0 - EPS ) .LT. 1.0, - - where fl denotes the computed value. - - EMIN (output) INTEGER - The minimum exponent before (gradual) underflow occurs. - - RMIN (output) DOUBLE PRECISION - The smallest normalized number for the machine, given by - BASE**( EMIN - 1 ), where BASE is the floating point value - - of BETA. - - EMAX (output) INTEGER - The maximum exponent before overflow occurs. - - RMAX (output) DOUBLE PRECISION - The largest positive number for the machine, given by - BASE**EMAX * ( 1 - EPS ), where BASE is the floating point - - value of BETA. - - Further Details - =============== - - The computation of EPS is based on a routine PARANOIA by - W. Kahan of the University of California at Berkeley. 
- - ===================================================================== -*/ - - /* Initialized data */ - static logical first = TRUE_; - static logical iwarn = FALSE_; - /* System generated locals */ - integer i__1; - doublereal d__1, d__2, d__3, d__4, d__5; - /* Builtin functions */ - double pow_di(doublereal *, integer *); - /* Local variables */ - static logical ieee; - static doublereal half; - static logical lrnd; - static doublereal leps, zero, a, b, c; - static integer i, lbeta; - static doublereal rbase; - static integer lemin, lemax, gnmin; - static doublereal small; - static integer gpmin; - static doublereal third, lrmin, lrmax, sixth; - extern /* Subroutine */ int dlamc1_(integer *, integer *, logical *, - logical *); - extern doublereal dlamc3_(doublereal *, doublereal *); - static logical lieee1; - extern /* Subroutine */ int dlamc4_(integer *, doublereal *, integer *), - dlamc5_(integer *, integer *, integer *, logical *, integer *, - doublereal *); - static integer lt, ngnmin, ngpmin; - static doublereal one, two; - - - - if (first) { - first = FALSE_; - zero = 0.; - one = 1.; - two = 2.; - -/* LBETA, LT, LRND, LEPS, LEMIN and LRMIN are the local values - of - BETA, T, RND, EPS, EMIN and RMIN. - - Throughout this routine we use the function DLAMC3 to ens -ure - that relevant values are stored and not held in registers, - or - are not affected by optimizers. - - DLAMC1 returns the parameters LBETA, LT, LRND and LIEEE1. -*/ - - dlamc1_(&lbeta, <, &lrnd, &lieee1); - -/* Start to find EPS. */ - - b = (doublereal) lbeta; - i__1 = -lt; - a = pow_di(&b, &i__1); - leps = a; - -/* Try some tricks to see whether or not this is the correct E -PS. 
*/ - - b = two / 3; - half = one / 2; - d__1 = -half; - sixth = dlamc3_(&b, &d__1); - third = dlamc3_(&sixth, &sixth); - d__1 = -half; - b = dlamc3_(&third, &d__1); - b = dlamc3_(&b, &sixth); - b = abs(b); - if (b < leps) { - b = leps; - } - - leps = 1.; - -/* + WHILE( ( LEPS.GT.B ).AND.( B.GT.ZERO ) )LOOP */ -L10: - if (leps > b && b > zero) { - leps = b; - d__1 = half * leps; -/* Computing 5th power */ - d__3 = two, d__4 = d__3, d__3 *= d__3; -/* Computing 2nd power */ - d__5 = leps; - d__2 = d__4 * (d__3 * d__3) * (d__5 * d__5); - c = dlamc3_(&d__1, &d__2); - d__1 = -c; - c = dlamc3_(&half, &d__1); - b = dlamc3_(&half, &c); - d__1 = -b; - c = dlamc3_(&half, &d__1); - b = dlamc3_(&half, &c); - goto L10; - } -/* + END WHILE */ - - if (a < leps) { - leps = a; - } - -/* Computation of EPS complete. - - Now find EMIN. Let A = + or - 1, and + or - (1 + BASE**(-3 -)). - Keep dividing A by BETA until (gradual) underflow occurs. T -his - is detected when we cannot recover the previous A. */ - - rbase = one / lbeta; - small = one; - for (i = 1; i <= 3; ++i) { - d__1 = small * rbase; - small = dlamc3_(&d__1, &zero); -/* L20: */ - } - a = dlamc3_(&one, &small); - dlamc4_(&ngpmin, &one, &lbeta); - d__1 = -one; - dlamc4_(&ngnmin, &d__1, &lbeta); - dlamc4_(&gpmin, &a, &lbeta); - d__1 = -a; - dlamc4_(&gnmin, &d__1, &lbeta); - ieee = FALSE_; - - if (ngpmin == ngnmin && gpmin == gnmin) { - if (ngpmin == gpmin) { - lemin = ngpmin; -/* ( Non twos-complement machines, no gradual under -flow; - e.g., VAX ) */ - } else if (gpmin - ngpmin == 3) { - lemin = ngpmin - 1 + lt; - ieee = TRUE_; -/* ( Non twos-complement machines, with gradual und -erflow; - e.g., IEEE standard followers ) */ - } else { - lemin = min(ngpmin,gpmin); -/* ( A guess; no known machine ) */ - iwarn = TRUE_; - } - - } else if (ngpmin == gpmin && ngnmin == gnmin) { - if ((i__1 = ngpmin - ngnmin, abs(i__1)) == 1) { - lemin = max(ngpmin,ngnmin); -/* ( Twos-complement machines, no gradual underflow -; - e.g., CYBER 205 
) */ - } else { - lemin = min(ngpmin,ngnmin); -/* ( A guess; no known machine ) */ - iwarn = TRUE_; - } - - } else if ((i__1 = ngpmin - ngnmin, abs(i__1)) == 1 && gpmin == gnmin) - { - if (gpmin - min(ngpmin,ngnmin) == 3) { - lemin = max(ngpmin,ngnmin) - 1 + lt; -/* ( Twos-complement machines with gradual underflo -w; - no known machine ) */ - } else { - lemin = min(ngpmin,ngnmin); -/* ( A guess; no known machine ) */ - iwarn = TRUE_; - } - - } else { -/* Computing MIN */ - i__1 = min(ngpmin,ngnmin), i__1 = min(i__1,gpmin); - lemin = min(i__1,gnmin); -/* ( A guess; no known machine ) */ - iwarn = TRUE_; - } -/* ** - Comment out this if block if EMIN is ok */ - if (iwarn) { - first = TRUE_; - printf("\n\n WARNING. The value EMIN may be incorrect:- "); - printf("EMIN = %8i\n",lemin); - printf("If, after inspection, the value EMIN looks acceptable"); - printf("please comment out \n the IF block as marked within the"); - printf("code of routine DLAMC2, \n otherwise supply EMIN"); - printf("explicitly.\n"); - } -/* ** - - Assume IEEE arithmetic if we found denormalised numbers abo -ve, - or if arithmetic seems to round in the IEEE style, determi -ned - in routine DLAMC1. A true IEEE machine should have both thi -ngs - true; however, faulty machines may have one or the other. */ - - ieee = ieee || lieee1; - -/* Compute RMIN by successive division by BETA. We could comp -ute - RMIN as BASE**( EMIN - 1 ), but some machines underflow dur -ing - this computation. */ - - lrmin = 1.; - i__1 = 1 - lemin; - for (i = 1; i <= 1-lemin; ++i) { - d__1 = lrmin * rbase; - lrmin = dlamc3_(&d__1, &zero); -/* L30: */ - } - -/* Finally, call DLAMC5 to compute EMAX and RMAX. 
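The RMIN loop just above can be isolated: rather than forming BASE**(EMIN-1) in one shot, which can underflow mid-computation on some machines, DLAMC2 multiplies 1.0 by 1/BETA one step at a time. A hedged sketch (the function name and the `DBL_MIN` comparison are ours, not LAPACK's):

```c
#include <assert.h>
#include <float.h>

/* Illustrative version of DLAMC2's RMIN loop: scale 1.0 down by
 * 1/beta, (1 - emin) times, storing through a volatile each step
 * so no intermediate lives only in a wider register (the job
 * DLAMC3 does in the original). */
static double smallest_normalized(int beta, int emin)
{
    volatile double rmin = 1.0;
    double rbase = 1.0 / (double) beta;
    int i;
    for (i = 1; i <= 1 - emin; ++i)
        rmin = rmin * rbase;    /* each halving is exact for beta = 2 */
    return rmin;
}
```

For IEEE doubles (BETA = 2, EMIN = -1021) this yields 2^-1022, i.e. `DBL_MIN`.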
-*/
-
-        dlamc5_(&lbeta, &lt, &lemin, &ieee, &lemax, &lrmax);
-    }
-
-    *beta = lbeta;
-    *t = lt;
-    *rnd = lrnd;
-    *eps = leps;
-    *emin = lemin;
-    *rmin = lrmin;
-    *emax = lemax;
-    *rmax = lrmax;
-
-    return 0;
-
-/* End of DLAMC2 */
-
-} /* dlamc2_ */
-#endif
-
-
-doublereal dlamc3_(doublereal *a, doublereal *b)
-{
-/*  -- LAPACK auxiliary routine (version 3.0) --
-       Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,
-       Courant Institute, Argonne National Lab, and Rice University
-       October 31, 1992
-
-
-    Purpose
-    =======
-
-    DLAMC3 is intended to force A and B to be stored prior to doing
-    the addition of A and B, for use in situations where optimizers
-    might hold one of these in a register.
-
-    Arguments
-    =========
-
-    A, B    (input) DOUBLE PRECISION
-            The values A and B.
-
-    =====================================================================
-*/
-/* >>Start of File<<
-   System generated locals */
-    volatile doublereal ret_val;
-
-
-
-    ret_val = *a + *b;
-
-    return ret_val;
-
-/* End of DLAMC3 */
-
-} /* dlamc3_ */
-
-
-#ifndef HAVE_CONFIG
-/* Subroutine */ int dlamc4_(integer *emin, doublereal *start, integer *base)
-{
-/*  -- LAPACK auxiliary routine (version 2.0) --
-       Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,
-       Courant Institute, Argonne National Lab, and Rice University
-       October 31, 1992
-
-
-    Purpose
-    =======
-
-    DLAMC4 is a service routine for DLAMC2.
-
-    Arguments
-    =========
-
-    EMIN    (output) INTEGER
-            The minimum exponent before (gradual) underflow, computed by
-            setting A = START and dividing by BASE until the previous A
-            can not be recovered.
-
-    START   (input) DOUBLE PRECISION
-            The starting point for determining EMIN.
-
-    BASE    (input) INTEGER
-            The base of the machine.
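DLAMC4's strategy, stripped of the f2c scaffolding, is: divide A by BASE until multiplying back no longer reproduces it, decrementing EMIN once per lossless step. A simplified sketch under IEEE assumptions (our names; the real routine additionally checks repeated additions via D1/D2 to defeat rounding quirks):

```c
#include <assert.h>

/* Simplified DLAMC4.  With start = 1.0 and base = 2 on IEEE
 * doubles, halving stays exact deep into the subnormals, so this
 * bottoms out at 2^-1074 and returns 1 - 1074 = -1073.  DLAMC2
 * then detects gradual underflow (gpmin - ngpmin == 3) and
 * recovers the normalized EMIN as ngpmin - 1 + t, i.e.
 * -1073 - 1 + 53 = -1021. */
static int find_emin(double start, int base)
{
    volatile double a = start, b, c;
    int emin = 1;
    for (;;) {
        b = a / (double) base;
        c = b * (double) base;
        if (c != a)
            break;          /* previous a can no longer be recovered */
        a = b;
        --emin;
    }
    return emin;
}
```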
- - ===================================================================== -*/ - /* System generated locals */ - integer i__1; - doublereal d__1; - /* Local variables */ - static doublereal zero, a; - static integer i; - static doublereal rbase, b1, b2, c1, c2, d1, d2; - extern doublereal dlamc3_(doublereal *, doublereal *); - static doublereal one; - - - - a = *start; - one = 1.; - rbase = one / *base; - zero = 0.; - *emin = 1; - d__1 = a * rbase; - b1 = dlamc3_(&d__1, &zero); - c1 = a; - c2 = a; - d1 = a; - d2 = a; -/* + WHILE( ( C1.EQ.A ).AND.( C2.EQ.A ).AND. - $ ( D1.EQ.A ).AND.( D2.EQ.A ) )LOOP */ -L10: - if (c1 == a && c2 == a && d1 == a && d2 == a) { - --(*emin); - a = b1; - d__1 = a / *base; - b1 = dlamc3_(&d__1, &zero); - d__1 = b1 * *base; - c1 = dlamc3_(&d__1, &zero); - d1 = zero; - i__1 = *base; - for (i = 1; i <= *base; ++i) { - d1 += b1; -/* L20: */ - } - d__1 = a * rbase; - b2 = dlamc3_(&d__1, &zero); - d__1 = b2 / rbase; - c2 = dlamc3_(&d__1, &zero); - d2 = zero; - i__1 = *base; - for (i = 1; i <= *base; ++i) { - d2 += b2; -/* L30: */ - } - goto L10; - } -/* + END WHILE */ - - return 0; - -/* End of DLAMC4 */ - -} /* dlamc4_ */ - - -/* Subroutine */ int dlamc5_(integer *beta, integer *p, integer *emin, - logical *ieee, integer *emax, doublereal *rmax) -{ -/* -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLAMC5 attempts to compute RMAX, the largest machine floating-point - number, without overflow. It assumes that EMAX + abs(EMIN) sum - approximately to a power of 2. It will fail on machines where this - assumption does not hold, for example, the Cyber 205 (EMIN = -28625, - - EMAX = 28718). It will also fail if the value supplied for EMIN is - too large (i.e. too close to zero), probably with overflow. 
- - Arguments - ========= - - BETA (input) INTEGER - The base of floating-point arithmetic. - - P (input) INTEGER - The number of base BETA digits in the mantissa of a - floating-point value. - - EMIN (input) INTEGER - The minimum exponent before (gradual) underflow. - - IEEE (input) LOGICAL - A logical flag specifying whether or not the arithmetic - system is thought to comply with the IEEE standard. - - EMAX (output) INTEGER - The largest exponent before overflow - - RMAX (output) DOUBLE PRECISION - The largest machine floating-point number. - - ===================================================================== - - - - First compute LEXP and UEXP, two powers of 2 that bound - abs(EMIN). We then assume that EMAX + abs(EMIN) will sum - approximately to the bound that is closest to abs(EMIN). - (EMAX is the exponent of the required number RMAX). */ - /* Table of constant values */ - static doublereal c_b5 = 0.; - - /* System generated locals */ - integer i__1; - doublereal d__1; - /* Local variables */ - static integer lexp; - static doublereal oldy; - static integer uexp, i; - static doublereal y, z; - static integer nbits; - extern doublereal dlamc3_(doublereal *, doublereal *); - static doublereal recbas; - static integer exbits, expsum, try__; - - - - lexp = 1; - exbits = 1; -L10: - try__ = lexp << 1; - if (try__ <= -(*emin)) { - lexp = try__; - ++exbits; - goto L10; - } - if (lexp == -(*emin)) { - uexp = lexp; - } else { - uexp = try__; - ++exbits; - } - -/* Now -LEXP is less than or equal to EMIN, and -UEXP is greater - than or equal to EMIN. EXBITS is the number of bits needed to - store the exponent. */ - - if (uexp + *emin > -lexp - *emin) { - expsum = lexp << 1; - } else { - expsum = uexp << 1; - } - -/* EXPSUM is the exponent range, approximately equal to - EMAX - EMIN + 1 . */ - - *emax = expsum + *emin - 1; - nbits = exbits + 1 + *p; - -/* NBITS is the total number of bits needed to store a - floating-point number. 
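The LEXP/UEXP/EXPSUM/NBITS bookkeeping described above condenses into a short function. This is a sketch of DLAMC5's control flow under the assumption BETA = 2 (our names; the original guards the odd-NBITS adjustment with an explicit beta == 2 test):

```c
#include <assert.h>

/* Condensed DLAMC5 exponent logic for a binary machine:
 * p = significand digits, emin = minimum exponent,
 * ieee = nonzero if one exponent is reserved for Inf/NaN. */
static int find_emax(int p, int emin, int ieee)
{
    int lexp = 1, exbits = 1, try_, uexp, expsum, emax, nbits;

    /* LEXP = largest power of two <= -EMIN; UEXP = the next one up. */
    while ((try_ = lexp << 1) <= -emin) {
        lexp = try_;
        ++exbits;
    }
    if (lexp == -emin) {
        uexp = lexp;
    } else {
        uexp = try_;
        ++exbits;
    }

    /* Assume EMAX + |EMIN| sums to the power-of-two bound closest
     * to |EMIN|. */
    expsum = (uexp + emin > -lexp - emin) ? 2 * lexp : 2 * uexp;
    emax = expsum + emin - 1;
    nbits = exbits + 1 + p;     /* sign + exponent + significand bits */

    if (nbits % 2 == 1)
        --emax;                 /* implicit-bit representation assumed */
    if (ieee)
        --emax;                 /* one exponent reserved for Inf/NaN */
    return emax;
}
```

For IEEE binary64 (p = 53, EMIN = -1021) this gives 1024, and for binary32 (p = 24, EMIN = -125) it gives 128, matching the standard formats.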
*/ - - if (nbits % 2 == 1 && *beta == 2) { - -/* Either there are an odd number of bits used to store a - floating-point number, which is unlikely, or some bits are - - not used in the representation of numbers, which is possible -, - (e.g. Cray machines) or the mantissa has an implicit bit, - (e.g. IEEE machines, Dec Vax machines), which is perhaps the - - most likely. We have to assume the last alternative. - If this is true, then we need to reduce EMAX by one because - - there must be some way of representing zero in an implicit-b -it - system. On machines like Cray, we are reducing EMAX by one - - unnecessarily. */ - - --(*emax); - } - - if (*ieee) { - -/* Assume we are on an IEEE machine which reserves one exponent - - for infinity and NaN. */ - - --(*emax); - } - -/* Now create RMAX, the largest machine number, which should - be equal to (1.0 - BETA**(-P)) * BETA**EMAX . - - First compute 1.0 - BETA**(-P), being careful that the - result is less than 1.0 . */ - - recbas = 1. / *beta; - z = *beta - 1.; - y = 0.; - i__1 = *p; - for (i = 1; i <= *p; ++i) { - z *= recbas; - if (y < 1.) { - oldy = y; - } - y = dlamc3_(&y, &z); -/* L20: */ - } - if (y >= 1.) { - y = oldy; - } - -/* Now multiply by BETA**EMAX to get RMAX. */ - - i__1 = *emax; - for (i = 1; i <= *emax; ++i) { - d__1 = y * *beta; - y = dlamc3_(&d__1, &c_b5); -/* L30: */ - } - - *rmax = y; - return 0; - -/* End of DLAMC5 */ - -} /* dlamc5_ */ -#endif diff --git a/pythonPackages/numpy/numpy/linalg/dlapack_lite.c b/pythonPackages/numpy/numpy/linalg/dlapack_lite.c deleted file mode 100755 index d2c1d8129b..0000000000 --- a/pythonPackages/numpy/numpy/linalg/dlapack_lite.c +++ /dev/null @@ -1,36008 +0,0 @@ -#define MAXITERLOOPS 100 - -/* -NOTE: This is generated code. Look in Misc/lapack_lite for information on - remaking this file. 
-*/ -#include "f2c.h" - -#ifdef HAVE_CONFIG -#include "config.h" -#else -extern doublereal dlamch_(char *); -#define EPSILON dlamch_("Epsilon") -#define SAFEMINIMUM dlamch_("Safe minimum") -#define PRECISION dlamch_("Precision") -#define BASE dlamch_("Base") -#endif - -extern doublereal dlapy2_(doublereal *x, doublereal *y); - - - -/* Table of constant values */ - -static integer c__9 = 9; -static integer c__0 = 0; -static doublereal c_b15 = 1.; -static integer c__1 = 1; -static doublereal c_b29 = 0.; -static doublereal c_b94 = -.125; -static doublereal c_b151 = -1.; -static integer c_n1 = -1; -static integer c__3 = 3; -static integer c__2 = 2; -static integer c__8 = 8; -static integer c__4 = 4; -static integer c__65 = 65; -static integer c__6 = 6; -static integer c__15 = 15; -static logical c_false = FALSE_; -static integer c__10 = 10; -static integer c__11 = 11; -static doublereal c_b2804 = 2.; -static logical c_true = TRUE_; -static real c_b3825 = 0.f; -static real c_b3826 = 1.f; - -/* Subroutine */ int dbdsdc_(char *uplo, char *compq, integer *n, doublereal * - d__, doublereal *e, doublereal *u, integer *ldu, doublereal *vt, - integer *ldvt, doublereal *q, integer *iq, doublereal *work, integer * - iwork, integer *info) -{ - /* System generated locals */ - integer u_dim1, u_offset, vt_dim1, vt_offset, i__1, i__2; - doublereal d__1; - - /* Builtin functions */ - double d_sign(doublereal *, doublereal *), log(doublereal); - - /* Local variables */ - static integer i__, j, k; - static doublereal p, r__; - static integer z__, ic, ii, kk; - static doublereal cs; - static integer is, iu; - static doublereal sn; - static integer nm1; - static doublereal eps; - static integer ivt, difl, difr, ierr, perm, mlvl, sqre; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int dlasr_(char *, char *, char *, integer *, - integer *, doublereal *, doublereal *, doublereal *, integer *), dcopy_(integer *, doublereal *, integer * - , doublereal *, integer *), 
dswap_(integer *, doublereal *, - integer *, doublereal *, integer *); - static integer poles, iuplo, nsize, start; - extern /* Subroutine */ int dlasd0_(integer *, integer *, doublereal *, - doublereal *, doublereal *, integer *, doublereal *, integer *, - integer *, integer *, doublereal *, integer *); - - extern /* Subroutine */ int dlasda_(integer *, integer *, integer *, - integer *, doublereal *, doublereal *, doublereal *, integer *, - doublereal *, integer *, doublereal *, doublereal *, doublereal *, - doublereal *, integer *, integer *, integer *, integer *, - doublereal *, doublereal *, doublereal *, doublereal *, integer *, - integer *), dlascl_(char *, integer *, integer *, doublereal *, - doublereal *, integer *, integer *, doublereal *, integer *, - integer *), dlasdq_(char *, integer *, integer *, integer - *, integer *, integer *, doublereal *, doublereal *, doublereal *, - integer *, doublereal *, integer *, doublereal *, integer *, - doublereal *, integer *), dlaset_(char *, integer *, - integer *, doublereal *, doublereal *, doublereal *, integer *), dlartg_(doublereal *, doublereal *, doublereal *, - doublereal *, doublereal *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int xerbla_(char *, integer *); - static integer givcol; - extern doublereal dlanst_(char *, integer *, doublereal *, doublereal *); - static integer icompq; - static doublereal orgnrm; - static integer givnum, givptr, qstart, smlsiz, wstart, smlszp; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - December 1, 1999 - - - Purpose - ======= - - DBDSDC computes the singular value decomposition (SVD) of a real - N-by-N (upper or lower) bidiagonal matrix B: B = U * S * VT, - using a divide and conquer method, where S is a diagonal matrix - with non-negative diagonal elements (the singular values of B), and - U and VT are orthogonal matrices of left and right singular vectors, - respectively. DBDSDC can be used to compute all singular values, - and optionally, singular vectors or singular vectors in compact form. - - This code makes very mild assumptions about floating point - arithmetic. It will work on machines with a guard digit in - add/subtract, or on those binary machines without guard digits - which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. - It could conceivably fail on hexadecimal or decimal machines - without guard digits, but we know of none. See DLASD3 for details. - - The code currently call DLASDQ if singular values only are desired. - However, it can be slightly modified to compute singular values - using the divide and conquer method. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - = 'U': B is upper bidiagonal. - = 'L': B is lower bidiagonal. - - COMPQ (input) CHARACTER*1 - Specifies whether singular vectors are to be computed - as follows: - = 'N': Compute singular values only; - = 'P': Compute singular values and compute singular - vectors in compact form; - = 'I': Compute singular values and singular vectors. - - N (input) INTEGER - The order of the matrix B. N >= 0. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the n diagonal elements of the bidiagonal matrix B. - On exit, if INFO=0, the singular values of B. - - E (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the elements of E contain the offdiagonal - elements of the bidiagonal matrix whose SVD is desired. 
- On exit, E has been destroyed. - - U (output) DOUBLE PRECISION array, dimension (LDU,N) - If COMPQ = 'I', then: - On exit, if INFO = 0, U contains the left singular vectors - of the bidiagonal matrix. - For other values of COMPQ, U is not referenced. - - LDU (input) INTEGER - The leading dimension of the array U. LDU >= 1. - If singular vectors are desired, then LDU >= max( 1, N ). - - VT (output) DOUBLE PRECISION array, dimension (LDVT,N) - If COMPQ = 'I', then: - On exit, if INFO = 0, VT' contains the right singular - vectors of the bidiagonal matrix. - For other values of COMPQ, VT is not referenced. - - LDVT (input) INTEGER - The leading dimension of the array VT. LDVT >= 1. - If singular vectors are desired, then LDVT >= max( 1, N ). - - Q (output) DOUBLE PRECISION array, dimension (LDQ) - If COMPQ = 'P', then: - On exit, if INFO = 0, Q and IQ contain the left - and right singular vectors in a compact form, - requiring O(N log N) space instead of 2*N**2. - In particular, Q contains all the DOUBLE PRECISION data in - LDQ >= N*(11 + 2*SMLSIZ + 8*INT(LOG_2(N/(SMLSIZ+1)))) - words of memory, where SMLSIZ is returned by ILAENV and - is equal to the maximum size of the subproblems at the - bottom of the computation tree (usually about 25). - For other values of COMPQ, Q is not referenced. - - IQ (output) INTEGER array, dimension (LDIQ) - If COMPQ = 'P', then: - On exit, if INFO = 0, Q and IQ contain the left - and right singular vectors in a compact form, - requiring O(N log N) space instead of 2*N**2. - In particular, IQ contains all INTEGER data in - LDIQ >= N*(3 + 3*INT(LOG_2(N/(SMLSIZ+1)))) - words of memory, where SMLSIZ is returned by ILAENV and - is equal to the maximum size of the subproblems at the - bottom of the computation tree (usually about 25). - For other values of COMPQ, IQ is not referenced. - - WORK (workspace) DOUBLE PRECISION array, dimension (LWORK) - If COMPQ = 'N' then LWORK >= (4 * N). - If COMPQ = 'P' then LWORK >= (6 * N). 
- If COMPQ = 'I' then LWORK >= (3 * N**2 + 4 * N). - - IWORK (workspace) INTEGER array, dimension (8*N) - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: The algorithm failed to compute an singular value. - The update process of divide and conquer failed. - - Further Details - =============== - - Based on contributions by - Ming Gu and Huan Ren, Computer Science Division, University of - California at Berkeley, USA - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --d__; - --e; - u_dim1 = *ldu; - u_offset = 1 + u_dim1 * 1; - u -= u_offset; - vt_dim1 = *ldvt; - vt_offset = 1 + vt_dim1 * 1; - vt -= vt_offset; - --q; - --iq; - --work; - --iwork; - - /* Function Body */ - *info = 0; - - iuplo = 0; - if (lsame_(uplo, "U")) { - iuplo = 1; - } - if (lsame_(uplo, "L")) { - iuplo = 2; - } - if (lsame_(compq, "N")) { - icompq = 0; - } else if (lsame_(compq, "P")) { - icompq = 1; - } else if (lsame_(compq, "I")) { - icompq = 2; - } else { - icompq = -1; - } - if (iuplo == 0) { - *info = -1; - } else if (icompq < 0) { - *info = -2; - } else if (*n < 0) { - *info = -3; - } else if (*ldu < 1 || (icompq == 2 && *ldu < *n)) { - *info = -7; - } else if (*ldvt < 1 || (icompq == 2 && *ldvt < *n)) { - *info = -9; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DBDSDC", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - smlsiz = ilaenv_(&c__9, "DBDSDC", " ", &c__0, &c__0, &c__0, &c__0, ( - ftnlen)6, (ftnlen)1); - if (*n == 1) { - if (icompq == 1) { - q[1] = d_sign(&c_b15, &d__[1]); - q[smlsiz * *n + 1] = 1.; - } else if (icompq == 2) { - u[u_dim1 + 1] = d_sign(&c_b15, &d__[1]); - vt[vt_dim1 + 1] = 1.; - } - d__[1] = abs(d__[1]); - return 0; - } - nm1 = *n - 1; - -/* - If matrix lower bidiagonal, rotate to be upper bidiagonal - by applying Givens rotations on the left -*/ - - 
wstart = 1; - qstart = 3; - if (icompq == 1) { - dcopy_(n, &d__[1], &c__1, &q[1], &c__1); - i__1 = *n - 1; - dcopy_(&i__1, &e[1], &c__1, &q[*n + 1], &c__1); - } - if (iuplo == 2) { - qstart = 5; - wstart = ((*n) << (1)) - 1; - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - dlartg_(&d__[i__], &e[i__], &cs, &sn, &r__); - d__[i__] = r__; - e[i__] = sn * d__[i__ + 1]; - d__[i__ + 1] = cs * d__[i__ + 1]; - if (icompq == 1) { - q[i__ + ((*n) << (1))] = cs; - q[i__ + *n * 3] = sn; - } else if (icompq == 2) { - work[i__] = cs; - work[nm1 + i__] = -sn; - } -/* L10: */ - } - } - -/* If ICOMPQ = 0, use DLASDQ to compute the singular values. */ - - if (icompq == 0) { - dlasdq_("U", &c__0, n, &c__0, &c__0, &c__0, &d__[1], &e[1], &vt[ - vt_offset], ldvt, &u[u_offset], ldu, &u[u_offset], ldu, &work[ - wstart], info); - goto L40; - } - -/* - If N is smaller than the minimum divide size SMLSIZ, then solve - the problem with another solver. -*/ - - if (*n <= smlsiz) { - if (icompq == 2) { - dlaset_("A", n, n, &c_b29, &c_b15, &u[u_offset], ldu); - dlaset_("A", n, n, &c_b29, &c_b15, &vt[vt_offset], ldvt); - dlasdq_("U", &c__0, n, n, n, &c__0, &d__[1], &e[1], &vt[vt_offset] - , ldvt, &u[u_offset], ldu, &u[u_offset], ldu, &work[ - wstart], info); - } else if (icompq == 1) { - iu = 1; - ivt = iu + *n; - dlaset_("A", n, n, &c_b29, &c_b15, &q[iu + (qstart - 1) * *n], n); - dlaset_("A", n, n, &c_b29, &c_b15, &q[ivt + (qstart - 1) * *n], n); - dlasdq_("U", &c__0, n, n, n, &c__0, &d__[1], &e[1], &q[ivt + ( - qstart - 1) * *n], n, &q[iu + (qstart - 1) * *n], n, &q[ - iu + (qstart - 1) * *n], n, &work[wstart], info); - } - goto L40; - } - - if (icompq == 2) { - dlaset_("A", n, n, &c_b29, &c_b15, &u[u_offset], ldu); - dlaset_("A", n, n, &c_b29, &c_b15, &vt[vt_offset], ldvt); - } - -/* Scale. */ - - orgnrm = dlanst_("M", n, &d__[1], &e[1]); - if (orgnrm == 0.) 
{ - return 0; - } - dlascl_("G", &c__0, &c__0, &orgnrm, &c_b15, n, &c__1, &d__[1], n, &ierr); - dlascl_("G", &c__0, &c__0, &orgnrm, &c_b15, &nm1, &c__1, &e[1], &nm1, & - ierr); - - eps = EPSILON; - - mlvl = (integer) (log((doublereal) (*n) / (doublereal) (smlsiz + 1)) / - log(2.)) + 1; - smlszp = smlsiz + 1; - - if (icompq == 1) { - iu = 1; - ivt = smlsiz + 1; - difl = ivt + smlszp; - difr = difl + mlvl; - z__ = difr + ((mlvl) << (1)); - ic = z__ + mlvl; - is = ic + 1; - poles = is + 1; - givnum = poles + ((mlvl) << (1)); - - k = 1; - givptr = 2; - perm = 3; - givcol = perm + mlvl; - } - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - if ((d__1 = d__[i__], abs(d__1)) < eps) { - d__[i__] = d_sign(&eps, &d__[i__]); - } -/* L20: */ - } - - start = 1; - sqre = 0; - - i__1 = nm1; - for (i__ = 1; i__ <= i__1; ++i__) { - if ((d__1 = e[i__], abs(d__1)) < eps || i__ == nm1) { - -/* - Subproblem found. First determine its size and then - apply divide and conquer on it. -*/ - - if (i__ < nm1) { - -/* A subproblem with E(I) small for I < NM1. */ - - nsize = i__ - start + 1; - } else if ((d__1 = e[i__], abs(d__1)) >= eps) { - -/* A subproblem with E(NM1) not too small but I = NM1. */ - - nsize = *n - start + 1; - } else { - -/* - A subproblem with E(NM1) small. This implies an - 1-by-1 subproblem at D(N). Solve this 1-by-1 problem - first. 
-*/ - - nsize = i__ - start + 1; - if (icompq == 2) { - u[*n + *n * u_dim1] = d_sign(&c_b15, &d__[*n]); - vt[*n + *n * vt_dim1] = 1.; - } else if (icompq == 1) { - q[*n + (qstart - 1) * *n] = d_sign(&c_b15, &d__[*n]); - q[*n + (smlsiz + qstart - 1) * *n] = 1.; - } - d__[*n] = (d__1 = d__[*n], abs(d__1)); - } - if (icompq == 2) { - dlasd0_(&nsize, &sqre, &d__[start], &e[start], &u[start + - start * u_dim1], ldu, &vt[start + start * vt_dim1], - ldvt, &smlsiz, &iwork[1], &work[wstart], info); - } else { - dlasda_(&icompq, &smlsiz, &nsize, &sqre, &d__[start], &e[ - start], &q[start + (iu + qstart - 2) * *n], n, &q[ - start + (ivt + qstart - 2) * *n], &iq[start + k * *n], - &q[start + (difl + qstart - 2) * *n], &q[start + ( - difr + qstart - 2) * *n], &q[start + (z__ + qstart - - 2) * *n], &q[start + (poles + qstart - 2) * *n], &iq[ - start + givptr * *n], &iq[start + givcol * *n], n, & - iq[start + perm * *n], &q[start + (givnum + qstart - - 2) * *n], &q[start + (ic + qstart - 2) * *n], &q[ - start + (is + qstart - 2) * *n], &work[wstart], & - iwork[1], info); - if (*info != 0) { - return 0; - } - } - start = i__ + 1; - } -/* L30: */ - } - -/* Unscale */ - - dlascl_("G", &c__0, &c__0, &c_b15, &orgnrm, n, &c__1, &d__[1], n, &ierr); -L40: - -/* Use Selection Sort to minimize swaps of singular vectors */ - - i__1 = *n; - for (ii = 2; ii <= i__1; ++ii) { - i__ = ii - 1; - kk = i__; - p = d__[i__]; - i__2 = *n; - for (j = ii; j <= i__2; ++j) { - if (d__[j] > p) { - kk = j; - p = d__[j]; - } -/* L50: */ - } - if (kk != i__) { - d__[kk] = d__[i__]; - d__[i__] = p; - if (icompq == 1) { - iq[i__] = kk; - } else if (icompq == 2) { - dswap_(n, &u[i__ * u_dim1 + 1], &c__1, &u[kk * u_dim1 + 1], & - c__1); - dswap_(n, &vt[i__ + vt_dim1], ldvt, &vt[kk + vt_dim1], ldvt); - } - } else if (icompq == 1) { - iq[i__] = i__; - } -/* L60: */ - } - -/* If ICOMPQ = 1, use IQ(N,1) as the indicator for UPLO */ - - if (icompq == 1) { - if (iuplo == 1) { - iq[*n] = 1; - } else { - iq[*n] = 0; - } 
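The selection sort in the final pass above is chosen deliberately: each exchange of singular values drags two O(N) dswap_ calls on the vector columns along with it, and selection sort performs the fewest exchanges of the simple sorts. Stripped of the f2c indexing, the pass looks like this (illustrative, operating on a plain array):

```c
#include <assert.h>

/* Selection sort into decreasing order, mirroring DBDSDC's final
 * pass.  At most n-1 exchanges ever happen, which matters because
 * each one also costs two O(n) column swaps (dswap_) on U and VT. */
static void sort_singular_values(int n, double *d)
{
    int i, j, kk;
    double p;
    for (i = 0; i < n - 1; ++i) {
        kk = i;
        p = d[i];
        for (j = i + 1; j < n; ++j) {
            if (d[j] > p) {
                kk = j;
                p = d[j];
            }
        }
        if (kk != i) {
            d[kk] = d[i];
            d[i] = p;
            /* the real routine swaps column i and kk of U and
             * row i and kk of VT here */
        }
    }
}
```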
- } - -/* - If B is lower bidiagonal, update U by those Givens rotations - which rotated B to be upper bidiagonal -*/ - - if ((iuplo == 2 && icompq == 2)) { - dlasr_("L", "V", "B", n, n, &work[1], &work[*n], &u[u_offset], ldu); - } - - return 0; - -/* End of DBDSDC */ - -} /* dbdsdc_ */ - -/* Subroutine */ int dbdsqr_(char *uplo, integer *n, integer *ncvt, integer * - nru, integer *ncc, doublereal *d__, doublereal *e, doublereal *vt, - integer *ldvt, doublereal *u, integer *ldu, doublereal *c__, integer * - ldc, doublereal *work, integer *info) -{ - /* System generated locals */ - integer c_dim1, c_offset, u_dim1, u_offset, vt_dim1, vt_offset, i__1, - i__2; - doublereal d__1, d__2, d__3, d__4; - - /* Builtin functions */ - double pow_dd(doublereal *, doublereal *), sqrt(doublereal), d_sign( - doublereal *, doublereal *); - - /* Local variables */ - static doublereal f, g, h__; - static integer i__, j, m; - static doublereal r__, cs; - static integer ll; - static doublereal sn, mu; - static integer nm1, nm12, nm13, lll; - static doublereal eps, sll, tol, abse; - static integer idir; - static doublereal abss; - static integer oldm; - static doublereal cosl; - static integer isub, iter; - static doublereal unfl, sinl, cosr, smin, smax, sinr; - extern /* Subroutine */ int drot_(integer *, doublereal *, integer *, - doublereal *, integer *, doublereal *, doublereal *), dlas2_( - doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *), dscal_(integer *, doublereal *, doublereal *, - integer *); - extern logical lsame_(char *, char *); - static doublereal oldcs; - extern /* Subroutine */ int dlasr_(char *, char *, char *, integer *, - integer *, doublereal *, doublereal *, doublereal *, integer *); - static integer oldll; - static doublereal shift, sigmn, oldsn; - extern /* Subroutine */ int dswap_(integer *, doublereal *, integer *, - doublereal *, integer *); - static integer maxit; - static doublereal sminl, sigmx; - static logical lower; - extern /* 
Subroutine */ int dlasq1_(integer *, doublereal *, doublereal *, - doublereal *, integer *), dlasv2_(doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *); - - extern /* Subroutine */ int dlartg_(doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *), xerbla_(char *, - integer *); - static doublereal sminoa, thresh; - static logical rotate; - static doublereal sminlo, tolmul; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1999 - - - Purpose - ======= - - DBDSQR computes the singular value decomposition (SVD) of a real - N-by-N (upper or lower) bidiagonal matrix B: B = Q * S * P' (P' - denotes the transpose of P), where S is a diagonal matrix with - non-negative diagonal elements (the singular values of B), and Q - and P are orthogonal matrices. - - The routine computes S, and optionally computes U * Q, P' * VT, - or Q' * C, for given real input matrices U, VT, and C. - - See "Computing Small Singular Values of Bidiagonal Matrices With - Guaranteed High Relative Accuracy," by J. Demmel and W. Kahan, - LAPACK Working Note #3 (or SIAM J. Sci. Statist. Comput. vol. 11, - no. 5, pp. 873-912, Sept 1990) and - "Accurate singular values and differential qd algorithms," by - B. Parlett and V. Fernando, Technical Report CPAM-554, Mathematics - Department, University of California at Berkeley, July 1992 - for a detailed description of the algorithm. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - = 'U': B is upper bidiagonal; - = 'L': B is lower bidiagonal. - - N (input) INTEGER - The order of the matrix B. N >= 0. - - NCVT (input) INTEGER - The number of columns of the matrix VT. NCVT >= 0. - - NRU (input) INTEGER - The number of rows of the matrix U. NRU >= 0. - - NCC (input) INTEGER - The number of columns of the matrix C. NCC >= 0. 
- - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the n diagonal elements of the bidiagonal matrix B. - On exit, if INFO=0, the singular values of B in decreasing - order. - - E (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the elements of E contain the - offdiagonal elements of the bidiagonal matrix whose SVD - is desired. On normal exit (INFO = 0), E is destroyed. - If the algorithm does not converge (INFO > 0), D and E - will contain the diagonal and superdiagonal elements of a - bidiagonal matrix orthogonally equivalent to the one given - as input. E(N) is used for workspace. - - VT (input/output) DOUBLE PRECISION array, dimension (LDVT, NCVT) - On entry, an N-by-NCVT matrix VT. - On exit, VT is overwritten by P' * VT. - VT is not referenced if NCVT = 0. - - LDVT (input) INTEGER - The leading dimension of the array VT. - LDVT >= max(1,N) if NCVT > 0; LDVT >= 1 if NCVT = 0. - - U (input/output) DOUBLE PRECISION array, dimension (LDU, N) - On entry, an NRU-by-N matrix U. - On exit, U is overwritten by U * Q. - U is not referenced if NRU = 0. - - LDU (input) INTEGER - The leading dimension of the array U. LDU >= max(1,NRU). - - C (input/output) DOUBLE PRECISION array, dimension (LDC, NCC) - On entry, an N-by-NCC matrix C. - On exit, C is overwritten by Q' * C. - C is not referenced if NCC = 0. - - LDC (input) INTEGER - The leading dimension of the array C. - LDC >= max(1,N) if NCC > 0; LDC >=1 if NCC = 0. - - WORK (workspace) DOUBLE PRECISION array, dimension (4*N) - - INFO (output) INTEGER - = 0: successful exit - < 0: If INFO = -i, the i-th argument had an illegal value - > 0: the algorithm did not converge; D and E contain the - elements of a bidiagonal matrix which is orthogonally - similar to the input matrix B; if INFO = i, i - elements of E have not converged to zero. 
- - Internal Parameters - =================== - - TOLMUL DOUBLE PRECISION, default = max(10,min(100,EPS**(-1/8))) - TOLMUL controls the convergence criterion of the QR loop. - If it is positive, TOLMUL*EPS is the desired relative - precision in the computed singular values. - If it is negative, abs(TOLMUL*EPS*sigma_max) is the - desired absolute accuracy in the computed singular - values (corresponds to relative accuracy - abs(TOLMUL*EPS) in the largest singular value. - abs(TOLMUL) should be between 1 and 1/EPS, and preferably - between 10 (for fast convergence) and .1/EPS - (for there to be some accuracy in the results). - Default is to lose at either one eighth or 2 of the - available decimal digits in each computed singular value - (whichever is smaller). - - MAXITR INTEGER, default = 6 - MAXITR controls the maximum number of passes of the - algorithm through its inner loop. The algorithms stops - (and so fails to converge) if the number of passes - through the inner loop exceeds MAXITR*N**2. - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --d__; - --e; - vt_dim1 = *ldvt; - vt_offset = 1 + vt_dim1 * 1; - vt -= vt_offset; - u_dim1 = *ldu; - u_offset = 1 + u_dim1 * 1; - u -= u_offset; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - lower = lsame_(uplo, "L"); - if ((! lsame_(uplo, "U") && ! 
lower)) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*ncvt < 0) { - *info = -3; - } else if (*nru < 0) { - *info = -4; - } else if (*ncc < 0) { - *info = -5; - } else if ((*ncvt == 0 && *ldvt < 1) || (*ncvt > 0 && *ldvt < max(1,*n))) - { - *info = -9; - } else if (*ldu < max(1,*nru)) { - *info = -11; - } else if ((*ncc == 0 && *ldc < 1) || (*ncc > 0 && *ldc < max(1,*n))) { - *info = -13; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DBDSQR", &i__1); - return 0; - } - if (*n == 0) { - return 0; - } - if (*n == 1) { - goto L160; - } - -/* ROTATE is true if any singular vectors desired, false otherwise */ - - rotate = *ncvt > 0 || *nru > 0 || *ncc > 0; - -/* If no singular vectors desired, use qd algorithm */ - - if (! rotate) { - dlasq1_(n, &d__[1], &e[1], &work[1], info); - return 0; - } - - nm1 = *n - 1; - nm12 = nm1 + nm1; - nm13 = nm12 + nm1; - idir = 0; - -/* Get machine constants */ - - eps = EPSILON; - unfl = SAFEMINIMUM; - -/* - If matrix lower bidiagonal, rotate to be upper bidiagonal - by applying Givens rotations on the left -*/ - - if (lower) { - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - dlartg_(&d__[i__], &e[i__], &cs, &sn, &r__); - d__[i__] = r__; - e[i__] = sn * d__[i__ + 1]; - d__[i__ + 1] = cs * d__[i__ + 1]; - work[i__] = cs; - work[nm1 + i__] = sn; -/* L10: */ - } - -/* Update singular vectors if desired */ - - if (*nru > 0) { - dlasr_("R", "V", "F", nru, n, &work[1], &work[*n], &u[u_offset], - ldu); - } - if (*ncc > 0) { - dlasr_("L", "V", "F", n, ncc, &work[1], &work[*n], &c__[c_offset], - ldc); - } - } - -/* - Compute singular values to relative accuracy TOL - (By setting TOL to be negative, algorithm will compute - singular values to absolute accuracy ABS(TOL)*norm(input matrix)) - - Computing MAX - Computing MIN -*/ - d__3 = 100., d__4 = pow_dd(&eps, &c_b94); - d__1 = 10., d__2 = min(d__3,d__4); - tolmul = max(d__1,d__2); - tol = tolmul * eps; - -/* Compute approximate maximum, minimum singular values 
*/ - - smax = 0.; - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { -/* Computing MAX */ - d__2 = smax, d__3 = (d__1 = d__[i__], abs(d__1)); - smax = max(d__2,d__3); -/* L20: */ - } - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { -/* Computing MAX */ - d__2 = smax, d__3 = (d__1 = e[i__], abs(d__1)); - smax = max(d__2,d__3); -/* L30: */ - } - sminl = 0.; - if (tol >= 0.) { - -/* Relative accuracy desired */ - - sminoa = abs(d__[1]); - if (sminoa == 0.) { - goto L50; - } - mu = sminoa; - i__1 = *n; - for (i__ = 2; i__ <= i__1; ++i__) { - mu = (d__2 = d__[i__], abs(d__2)) * (mu / (mu + (d__1 = e[i__ - 1] - , abs(d__1)))); - sminoa = min(sminoa,mu); - if (sminoa == 0.) { - goto L50; - } -/* L40: */ - } -L50: - sminoa /= sqrt((doublereal) (*n)); -/* Computing MAX */ - d__1 = tol * sminoa, d__2 = *n * 6 * *n * unfl; - thresh = max(d__1,d__2); - } else { - -/* - Absolute accuracy desired - - Computing MAX -*/ - d__1 = abs(tol) * smax, d__2 = *n * 6 * *n * unfl; - thresh = max(d__1,d__2); - } - -/* - Prepare for main iteration loop for the singular values - (MAXIT is the maximum number of passes through the inner - loop permitted before nonconvergence signalled.) -*/ - - maxit = *n * 6 * *n; - iter = 0; - oldll = -1; - oldm = -1; - -/* M points to last element of unconverged part of matrix */ - - m = *n; - -/* Begin main iteration loop */ - -L60: - -/* Check for convergence or exceeding iteration count */ - - if (m <= 1) { - goto L160; - } - if (iter > maxit) { - goto L200; - } - -/* Find diagonal block of matrix to work on */ - - if ((tol < 0. && (d__1 = d__[m], abs(d__1)) <= thresh)) { - d__[m] = 0.; - } - smax = (d__1 = d__[m], abs(d__1)); - smin = smax; - i__1 = m - 1; - for (lll = 1; lll <= i__1; ++lll) { - ll = m - lll; - abss = (d__1 = d__[ll], abs(d__1)); - abse = (d__1 = e[ll], abs(d__1)); - if ((tol < 0. 
&& abss <= thresh)) { - d__[ll] = 0.; - } - if (abse <= thresh) { - goto L80; - } - smin = min(smin,abss); -/* Computing MAX */ - d__1 = max(smax,abss); - smax = max(d__1,abse); -/* L70: */ - } - ll = 0; - goto L90; -L80: - e[ll] = 0.; - -/* Matrix splits since E(LL) = 0 */ - - if (ll == m - 1) { - -/* Convergence of bottom singular value, return to top of loop */ - - --m; - goto L60; - } -L90: - ++ll; - -/* E(LL) through E(M-1) are nonzero, E(LL-1) is zero */ - - if (ll == m - 1) { - -/* 2 by 2 block, handle separately */ - - dlasv2_(&d__[m - 1], &e[m - 1], &d__[m], &sigmn, &sigmx, &sinr, &cosr, - &sinl, &cosl); - d__[m - 1] = sigmx; - e[m - 1] = 0.; - d__[m] = sigmn; - -/* Compute singular vectors, if desired */ - - if (*ncvt > 0) { - drot_(ncvt, &vt[m - 1 + vt_dim1], ldvt, &vt[m + vt_dim1], ldvt, & - cosr, &sinr); - } - if (*nru > 0) { - drot_(nru, &u[(m - 1) * u_dim1 + 1], &c__1, &u[m * u_dim1 + 1], & - c__1, &cosl, &sinl); - } - if (*ncc > 0) { - drot_(ncc, &c__[m - 1 + c_dim1], ldc, &c__[m + c_dim1], ldc, & - cosl, &sinl); - } - m += -2; - goto L60; - } - -/* - If working on new submatrix, choose shift direction - (from larger end diagonal element towards smaller) -*/ - - if (ll > oldm || m < oldll) { - if ((d__1 = d__[ll], abs(d__1)) >= (d__2 = d__[m], abs(d__2))) { - -/* Chase bulge from top (big end) to bottom (small end) */ - - idir = 1; - } else { - -/* Chase bulge from bottom (big end) to top (small end) */ - - idir = 2; - } - } - -/* Apply convergence tests */ - - if (idir == 1) { - -/* - Run convergence test in forward direction - First apply standard test to bottom of matrix -*/ - - if ((d__2 = e[m - 1], abs(d__2)) <= abs(tol) * (d__1 = d__[m], abs( - d__1)) || (tol < 0. && (d__3 = e[m - 1], abs(d__3)) <= thresh) - ) { - e[m - 1] = 0.; - goto L60; - } - - if (tol >= 0.) 
{ - -/* - If relative accuracy desired, - apply convergence criterion forward -*/ - - mu = (d__1 = d__[ll], abs(d__1)); - sminl = mu; - i__1 = m - 1; - for (lll = ll; lll <= i__1; ++lll) { - if ((d__1 = e[lll], abs(d__1)) <= tol * mu) { - e[lll] = 0.; - goto L60; - } - sminlo = sminl; - mu = (d__2 = d__[lll + 1], abs(d__2)) * (mu / (mu + (d__1 = e[ - lll], abs(d__1)))); - sminl = min(sminl,mu); -/* L100: */ - } - } - - } else { - -/* - Run convergence test in backward direction - First apply standard test to top of matrix -*/ - - if ((d__2 = e[ll], abs(d__2)) <= abs(tol) * (d__1 = d__[ll], abs(d__1) - ) || (tol < 0. && (d__3 = e[ll], abs(d__3)) <= thresh)) { - e[ll] = 0.; - goto L60; - } - - if (tol >= 0.) { - -/* - If relative accuracy desired, - apply convergence criterion backward -*/ - - mu = (d__1 = d__[m], abs(d__1)); - sminl = mu; - i__1 = ll; - for (lll = m - 1; lll >= i__1; --lll) { - if ((d__1 = e[lll], abs(d__1)) <= tol * mu) { - e[lll] = 0.; - goto L60; - } - sminlo = sminl; - mu = (d__2 = d__[lll], abs(d__2)) * (mu / (mu + (d__1 = e[lll] - , abs(d__1)))); - sminl = min(sminl,mu); -/* L110: */ - } - } - } - oldll = ll; - oldm = m; - -/* - Compute shift. First, test if shifting would ruin relative - accuracy, and if so set the shift to zero. - - Computing MAX -*/ - d__1 = eps, d__2 = tol * .01; - if ((tol >= 0. && *n * tol * (sminl / smax) <= max(d__1,d__2))) { - -/* Use a zero shift to avoid loss of relative accuracy */ - - shift = 0.; - } else { - -/* Compute the shift from 2-by-2 block at end of matrix */ - - if (idir == 1) { - sll = (d__1 = d__[ll], abs(d__1)); - dlas2_(&d__[m - 1], &e[m - 1], &d__[m], &shift, &r__); - } else { - sll = (d__1 = d__[m], abs(d__1)); - dlas2_(&d__[ll], &e[ll], &d__[ll + 1], &shift, &r__); - } - -/* Test if shift negligible, and if so set to zero */ - - if (sll > 0.) 
{ -/* Computing 2nd power */ - d__1 = shift / sll; - if (d__1 * d__1 < eps) { - shift = 0.; - } - } - } - -/* Increment iteration count */ - - iter = iter + m - ll; - -/* If SHIFT = 0, do simplified QR iteration */ - - if (shift == 0.) { - if (idir == 1) { - -/* - Chase bulge from top to bottom - Save cosines and sines for later singular vector updates -*/ - - cs = 1.; - oldcs = 1.; - i__1 = m - 1; - for (i__ = ll; i__ <= i__1; ++i__) { - d__1 = d__[i__] * cs; - dlartg_(&d__1, &e[i__], &cs, &sn, &r__); - if (i__ > ll) { - e[i__ - 1] = oldsn * r__; - } - d__1 = oldcs * r__; - d__2 = d__[i__ + 1] * sn; - dlartg_(&d__1, &d__2, &oldcs, &oldsn, &d__[i__]); - work[i__ - ll + 1] = cs; - work[i__ - ll + 1 + nm1] = sn; - work[i__ - ll + 1 + nm12] = oldcs; - work[i__ - ll + 1 + nm13] = oldsn; -/* L120: */ - } - h__ = d__[m] * cs; - d__[m] = h__ * oldcs; - e[m - 1] = h__ * oldsn; - -/* Update singular vectors */ - - if (*ncvt > 0) { - i__1 = m - ll + 1; - dlasr_("L", "V", "F", &i__1, ncvt, &work[1], &work[*n], &vt[ - ll + vt_dim1], ldvt); - } - if (*nru > 0) { - i__1 = m - ll + 1; - dlasr_("R", "V", "F", nru, &i__1, &work[nm12 + 1], &work[nm13 - + 1], &u[ll * u_dim1 + 1], ldu); - } - if (*ncc > 0) { - i__1 = m - ll + 1; - dlasr_("L", "V", "F", &i__1, ncc, &work[nm12 + 1], &work[nm13 - + 1], &c__[ll + c_dim1], ldc); - } - -/* Test convergence */ - - if ((d__1 = e[m - 1], abs(d__1)) <= thresh) { - e[m - 1] = 0.; - } - - } else { - -/* - Chase bulge from bottom to top - Save cosines and sines for later singular vector updates -*/ - - cs = 1.; - oldcs = 1.; - i__1 = ll + 1; - for (i__ = m; i__ >= i__1; --i__) { - d__1 = d__[i__] * cs; - dlartg_(&d__1, &e[i__ - 1], &cs, &sn, &r__); - if (i__ < m) { - e[i__] = oldsn * r__; - } - d__1 = oldcs * r__; - d__2 = d__[i__ - 1] * sn; - dlartg_(&d__1, &d__2, &oldcs, &oldsn, &d__[i__]); - work[i__ - ll] = cs; - work[i__ - ll + nm1] = -sn; - work[i__ - ll + nm12] = oldcs; - work[i__ - ll + nm13] = -oldsn; -/* L130: */ - } - h__ = d__[ll] * 
cs; - d__[ll] = h__ * oldcs; - e[ll] = h__ * oldsn; - -/* Update singular vectors */ - - if (*ncvt > 0) { - i__1 = m - ll + 1; - dlasr_("L", "V", "B", &i__1, ncvt, &work[nm12 + 1], &work[ - nm13 + 1], &vt[ll + vt_dim1], ldvt); - } - if (*nru > 0) { - i__1 = m - ll + 1; - dlasr_("R", "V", "B", nru, &i__1, &work[1], &work[*n], &u[ll * - u_dim1 + 1], ldu); - } - if (*ncc > 0) { - i__1 = m - ll + 1; - dlasr_("L", "V", "B", &i__1, ncc, &work[1], &work[*n], &c__[ - ll + c_dim1], ldc); - } - -/* Test convergence */ - - if ((d__1 = e[ll], abs(d__1)) <= thresh) { - e[ll] = 0.; - } - } - } else { - -/* Use nonzero shift */ - - if (idir == 1) { - -/* - Chase bulge from top to bottom - Save cosines and sines for later singular vector updates -*/ - - f = ((d__1 = d__[ll], abs(d__1)) - shift) * (d_sign(&c_b15, &d__[ - ll]) + shift / d__[ll]); - g = e[ll]; - i__1 = m - 1; - for (i__ = ll; i__ <= i__1; ++i__) { - dlartg_(&f, &g, &cosr, &sinr, &r__); - if (i__ > ll) { - e[i__ - 1] = r__; - } - f = cosr * d__[i__] + sinr * e[i__]; - e[i__] = cosr * e[i__] - sinr * d__[i__]; - g = sinr * d__[i__ + 1]; - d__[i__ + 1] = cosr * d__[i__ + 1]; - dlartg_(&f, &g, &cosl, &sinl, &r__); - d__[i__] = r__; - f = cosl * e[i__] + sinl * d__[i__ + 1]; - d__[i__ + 1] = cosl * d__[i__ + 1] - sinl * e[i__]; - if (i__ < m - 1) { - g = sinl * e[i__ + 1]; - e[i__ + 1] = cosl * e[i__ + 1]; - } - work[i__ - ll + 1] = cosr; - work[i__ - ll + 1 + nm1] = sinr; - work[i__ - ll + 1 + nm12] = cosl; - work[i__ - ll + 1 + nm13] = sinl; -/* L140: */ - } - e[m - 1] = f; - -/* Update singular vectors */ - - if (*ncvt > 0) { - i__1 = m - ll + 1; - dlasr_("L", "V", "F", &i__1, ncvt, &work[1], &work[*n], &vt[ - ll + vt_dim1], ldvt); - } - if (*nru > 0) { - i__1 = m - ll + 1; - dlasr_("R", "V", "F", nru, &i__1, &work[nm12 + 1], &work[nm13 - + 1], &u[ll * u_dim1 + 1], ldu); - } - if (*ncc > 0) { - i__1 = m - ll + 1; - dlasr_("L", "V", "F", &i__1, ncc, &work[nm12 + 1], &work[nm13 - + 1], &c__[ll + c_dim1], ldc); - } - -/* 
Test convergence */ - - if ((d__1 = e[m - 1], abs(d__1)) <= thresh) { - e[m - 1] = 0.; - } - - } else { - -/* - Chase bulge from bottom to top - Save cosines and sines for later singular vector updates -*/ - - f = ((d__1 = d__[m], abs(d__1)) - shift) * (d_sign(&c_b15, &d__[m] - ) + shift / d__[m]); - g = e[m - 1]; - i__1 = ll + 1; - for (i__ = m; i__ >= i__1; --i__) { - dlartg_(&f, &g, &cosr, &sinr, &r__); - if (i__ < m) { - e[i__] = r__; - } - f = cosr * d__[i__] + sinr * e[i__ - 1]; - e[i__ - 1] = cosr * e[i__ - 1] - sinr * d__[i__]; - g = sinr * d__[i__ - 1]; - d__[i__ - 1] = cosr * d__[i__ - 1]; - dlartg_(&f, &g, &cosl, &sinl, &r__); - d__[i__] = r__; - f = cosl * e[i__ - 1] + sinl * d__[i__ - 1]; - d__[i__ - 1] = cosl * d__[i__ - 1] - sinl * e[i__ - 1]; - if (i__ > ll + 1) { - g = sinl * e[i__ - 2]; - e[i__ - 2] = cosl * e[i__ - 2]; - } - work[i__ - ll] = cosr; - work[i__ - ll + nm1] = -sinr; - work[i__ - ll + nm12] = cosl; - work[i__ - ll + nm13] = -sinl; -/* L150: */ - } - e[ll] = f; - -/* Test convergence */ - - if ((d__1 = e[ll], abs(d__1)) <= thresh) { - e[ll] = 0.; - } - -/* Update singular vectors if desired */ - - if (*ncvt > 0) { - i__1 = m - ll + 1; - dlasr_("L", "V", "B", &i__1, ncvt, &work[nm12 + 1], &work[ - nm13 + 1], &vt[ll + vt_dim1], ldvt); - } - if (*nru > 0) { - i__1 = m - ll + 1; - dlasr_("R", "V", "B", nru, &i__1, &work[1], &work[*n], &u[ll * - u_dim1 + 1], ldu); - } - if (*ncc > 0) { - i__1 = m - ll + 1; - dlasr_("L", "V", "B", &i__1, ncc, &work[1], &work[*n], &c__[ - ll + c_dim1], ldc); - } - } - } - -/* QR iteration finished, go back and check convergence */ - - goto L60; - -/* All singular values converged, so make them positive */ - -L160: - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - if (d__[i__] < 0.) 
{ - d__[i__] = -d__[i__]; - -/* Change sign of singular vectors, if desired */ - - if (*ncvt > 0) { - dscal_(ncvt, &c_b151, &vt[i__ + vt_dim1], ldvt); - } - } -/* L170: */ - } - -/* - Sort the singular values into decreasing order (insertion sort on - singular values, but only one transposition per singular vector) -*/ - - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Scan for smallest D(I) */ - - isub = 1; - smin = d__[1]; - i__2 = *n + 1 - i__; - for (j = 2; j <= i__2; ++j) { - if (d__[j] <= smin) { - isub = j; - smin = d__[j]; - } -/* L180: */ - } - if (isub != *n + 1 - i__) { - -/* Swap singular values and vectors */ - - d__[isub] = d__[*n + 1 - i__]; - d__[*n + 1 - i__] = smin; - if (*ncvt > 0) { - dswap_(ncvt, &vt[isub + vt_dim1], ldvt, &vt[*n + 1 - i__ + - vt_dim1], ldvt); - } - if (*nru > 0) { - dswap_(nru, &u[isub * u_dim1 + 1], &c__1, &u[(*n + 1 - i__) * - u_dim1 + 1], &c__1); - } - if (*ncc > 0) { - dswap_(ncc, &c__[isub + c_dim1], ldc, &c__[*n + 1 - i__ + - c_dim1], ldc); - } - } -/* L190: */ - } - goto L220; - -/* Maximum number of iterations exceeded, failure to converge */ - -L200: - *info = 0; - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - if (e[i__] != 0.) 
{ - ++(*info); - } -/* L210: */ - } -L220: - return 0; - -/* End of DBDSQR */ - -} /* dbdsqr_ */ - -/* Subroutine */ int dgebak_(char *job, char *side, integer *n, integer *ilo, - integer *ihi, doublereal *scale, integer *m, doublereal *v, integer * - ldv, integer *info) -{ - /* System generated locals */ - integer v_dim1, v_offset, i__1; - - /* Local variables */ - static integer i__, k; - static doublereal s; - static integer ii; - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *); - extern logical lsame_(char *, char *); - extern /* Subroutine */ int dswap_(integer *, doublereal *, integer *, - doublereal *, integer *); - static logical leftv; - extern /* Subroutine */ int xerbla_(char *, integer *); - static logical rightv; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - DGEBAK forms the right or left eigenvectors of a real general matrix - by backward transformation on the computed eigenvectors of the - balanced matrix output by DGEBAL. - - Arguments - ========= - - JOB (input) CHARACTER*1 - Specifies the type of backward transformation required: - = 'N', do nothing, return immediately; - = 'P', do backward transformation for permutation only; - = 'S', do backward transformation for scaling only; - = 'B', do backward transformations for both permutation and - scaling. - JOB must be the same as the argument JOB supplied to DGEBAL. - - SIDE (input) CHARACTER*1 - = 'R': V contains right eigenvectors; - = 'L': V contains left eigenvectors. - - N (input) INTEGER - The number of rows of the matrix V. N >= 0. - - ILO (input) INTEGER - IHI (input) INTEGER - The integers ILO and IHI determined by DGEBAL. - 1 <= ILO <= IHI <= N, if N > 0; ILO=1 and IHI=0, if N=0. 
- - SCALE (input) DOUBLE PRECISION array, dimension (N) - Details of the permutation and scaling factors, as returned - by DGEBAL. - - M (input) INTEGER - The number of columns of the matrix V. M >= 0. - - V (input/output) DOUBLE PRECISION array, dimension (LDV,M) - On entry, the matrix of right or left eigenvectors to be - transformed, as returned by DHSEIN or DTREVC. - On exit, V is overwritten by the transformed eigenvectors. - - LDV (input) INTEGER - The leading dimension of the array V. LDV >= max(1,N). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value. - - ===================================================================== - - - Decode and Test the input parameters -*/ - - /* Parameter adjustments */ - --scale; - v_dim1 = *ldv; - v_offset = 1 + v_dim1 * 1; - v -= v_offset; - - /* Function Body */ - rightv = lsame_(side, "R"); - leftv = lsame_(side, "L"); - - *info = 0; - if ((((! lsame_(job, "N") && ! lsame_(job, "P")) && ! lsame_(job, "S")) - && ! lsame_(job, "B"))) { - *info = -1; - } else if ((! rightv && ! leftv)) { - *info = -2; - } else if (*n < 0) { - *info = -3; - } else if (*ilo < 1 || *ilo > max(1,*n)) { - *info = -4; - } else if (*ihi < min(*ilo,*n) || *ihi > *n) { - *info = -5; - } else if (*m < 0) { - *info = -7; - } else if (*ldv < max(1,*n)) { - *info = -9; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DGEBAK", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - if (*m == 0) { - return 0; - } - if (lsame_(job, "N")) { - return 0; - } - - if (*ilo == *ihi) { - goto L30; - } - -/* Backward balance */ - - if (lsame_(job, "S") || lsame_(job, "B")) { - - if (rightv) { - i__1 = *ihi; - for (i__ = *ilo; i__ <= i__1; ++i__) { - s = scale[i__]; - dscal_(m, &s, &v[i__ + v_dim1], ldv); -/* L10: */ - } - } - - if (leftv) { - i__1 = *ihi; - for (i__ = *ilo; i__ <= i__1; ++i__) { - s = 1. 
/ scale[i__]; - dscal_(m, &s, &v[i__ + v_dim1], ldv); -/* L20: */ - } - } - - } - -/* - Backward permutation - - For I = ILO-1 step -1 until 1, - IHI+1 step 1 until N do -- -*/ - -L30: - if (lsame_(job, "P") || lsame_(job, "B")) { - if (rightv) { - i__1 = *n; - for (ii = 1; ii <= i__1; ++ii) { - i__ = ii; - if ((i__ >= *ilo && i__ <= *ihi)) { - goto L40; - } - if (i__ < *ilo) { - i__ = *ilo - ii; - } - k = (integer) scale[i__]; - if (k == i__) { - goto L40; - } - dswap_(m, &v[i__ + v_dim1], ldv, &v[k + v_dim1], ldv); -L40: - ; - } - } - - if (leftv) { - i__1 = *n; - for (ii = 1; ii <= i__1; ++ii) { - i__ = ii; - if ((i__ >= *ilo && i__ <= *ihi)) { - goto L50; - } - if (i__ < *ilo) { - i__ = *ilo - ii; - } - k = (integer) scale[i__]; - if (k == i__) { - goto L50; - } - dswap_(m, &v[i__ + v_dim1], ldv, &v[k + v_dim1], ldv); -L50: - ; - } - } - } - - return 0; - -/* End of DGEBAK */ - -} /* dgebak_ */ - -/* Subroutine */ int dgebal_(char *job, integer *n, doublereal *a, integer * - lda, integer *ilo, integer *ihi, doublereal *scale, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2; - doublereal d__1, d__2; - - /* Local variables */ - static doublereal c__, f, g; - static integer i__, j, k, l, m; - static doublereal r__, s, ca, ra; - static integer ica, ira, iexc; - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *); - extern logical lsame_(char *, char *); - extern /* Subroutine */ int dswap_(integer *, doublereal *, integer *, - doublereal *, integer *); - static doublereal sfmin1, sfmin2, sfmax1, sfmax2; - - extern integer idamax_(integer *, doublereal *, integer *); - extern /* Subroutine */ int xerbla_(char *, integer *); - static logical noconv; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DGEBAL balances a general real matrix A. This involves, first, - permuting A by a similarity transformation to isolate eigenvalues - in the first 1 to ILO-1 and last IHI+1 to N elements on the - diagonal; and second, applying a diagonal similarity transformation - to rows and columns ILO to IHI to make the rows and columns as - close in norm as possible. Both steps are optional. - - Balancing may reduce the 1-norm of the matrix, and improve the - accuracy of the computed eigenvalues and/or eigenvectors. - - Arguments - ========= - - JOB (input) CHARACTER*1 - Specifies the operations to be performed on A: - = 'N': none: simply set ILO = 1, IHI = N, SCALE(I) = 1.0 - for i = 1,...,N; - = 'P': permute only; - = 'S': scale only; - = 'B': both permute and scale. - - N (input) INTEGER - The order of the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the input matrix A. - On exit, A is overwritten by the balanced matrix. - If JOB = 'N', A is not referenced. - See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - ILO (output) INTEGER - IHI (output) INTEGER - ILO and IHI are set to integers such that on exit - A(i,j) = 0 if i > j and j = 1,...,ILO-1 or I = IHI+1,...,N. - If JOB = 'N' or 'S', ILO = 1 and IHI = N. - - SCALE (output) DOUBLE PRECISION array, dimension (N) - Details of the permutations and scaling factors applied to - A. If P(j) is the index of the row and column interchanged - with row and column j and D(j) is the scaling factor - applied to row and column j, then - SCALE(j) = P(j) for j = 1,...,ILO-1 - = D(j) for j = ILO,...,IHI - = P(j) for j = IHI+1,...,N. - The order in which the interchanges are made is N to IHI+1, - then 1 to ILO-1. - - INFO (output) INTEGER - = 0: successful exit. 
- < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - The permutations consist of row and column interchanges which put - the matrix in the form - - ( T1 X Y ) - P A P = ( 0 B Z ) - ( 0 0 T2 ) - - where T1 and T2 are upper triangular matrices whose eigenvalues lie - along the diagonal. The column indices ILO and IHI mark the starting - and ending columns of the submatrix B. Balancing consists of applying - a diagonal similarity transformation inv(D) * B * D to make the - 1-norms of each row of B and its corresponding column nearly equal. - The output matrix is - - ( T1 X*D Y ) - ( 0 inv(D)*B*D inv(D)*Z ). - ( 0 0 T2 ) - - Information about the permutations P and the diagonal matrix D is - returned in the vector SCALE. - - This subroutine is based on the EISPACK routine BALANC. - - Modified by Tzu-Yi Chen, Computer Science Division, University of - California at Berkeley, USA - - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --scale; - - /* Function Body */ - *info = 0; - if ((((! lsame_(job, "N") && ! lsame_(job, "P")) && ! lsame_(job, "S")) - && ! lsame_(job, "B"))) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*n)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DGEBAL", &i__1); - return 0; - } - - k = 1; - l = *n; - - if (*n == 0) { - goto L210; - } - - if (lsame_(job, "N")) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - scale[i__] = 1.; -/* L10: */ - } - goto L210; - } - - if (lsame_(job, "S")) { - goto L120; - } - -/* Permutation to isolate eigenvalues if possible */ - - goto L50; - -/* Row and column exchange. 
*/ - -L20: - scale[m] = (doublereal) j; - if (j == m) { - goto L30; - } - - dswap_(&l, &a[j * a_dim1 + 1], &c__1, &a[m * a_dim1 + 1], &c__1); - i__1 = *n - k + 1; - dswap_(&i__1, &a[j + k * a_dim1], lda, &a[m + k * a_dim1], lda); - -L30: - switch (iexc) { - case 1: goto L40; - case 2: goto L80; - } - -/* Search for rows isolating an eigenvalue and push them down. */ - -L40: - if (l == 1) { - goto L210; - } - --l; - -L50: - for (j = l; j >= 1; --j) { - - i__1 = l; - for (i__ = 1; i__ <= i__1; ++i__) { - if (i__ == j) { - goto L60; - } - if (a[j + i__ * a_dim1] != 0.) { - goto L70; - } -L60: - ; - } - - m = l; - iexc = 1; - goto L20; -L70: - ; - } - - goto L90; - -/* Search for columns isolating an eigenvalue and push them left. */ - -L80: - ++k; - -L90: - i__1 = l; - for (j = k; j <= i__1; ++j) { - - i__2 = l; - for (i__ = k; i__ <= i__2; ++i__) { - if (i__ == j) { - goto L100; - } - if (a[i__ + j * a_dim1] != 0.) { - goto L110; - } -L100: - ; - } - - m = k; - iexc = 2; - goto L20; -L110: - ; - } - -L120: - i__1 = l; - for (i__ = k; i__ <= i__1; ++i__) { - scale[i__] = 1.; -/* L130: */ - } - - if (lsame_(job, "P")) { - goto L210; - } - -/* - Balance the submatrix in rows K to L. - - Iterative loop for norm reduction -*/ - - sfmin1 = SAFEMINIMUM / PRECISION; - sfmax1 = 1. / sfmin1; - sfmin2 = sfmin1 * 8.; - sfmax2 = 1. / sfmin2; -L140: - noconv = FALSE_; - - i__1 = l; - for (i__ = k; i__ <= i__1; ++i__) { - c__ = 0.; - r__ = 0.; - - i__2 = l; - for (j = k; j <= i__2; ++j) { - if (j == i__) { - goto L150; - } - c__ += (d__1 = a[j + i__ * a_dim1], abs(d__1)); - r__ += (d__1 = a[i__ + j * a_dim1], abs(d__1)); -L150: - ; - } - ica = idamax_(&l, &a[i__ * a_dim1 + 1], &c__1); - ca = (d__1 = a[ica + i__ * a_dim1], abs(d__1)); - i__2 = *n - k + 1; - ira = idamax_(&i__2, &a[i__ + k * a_dim1], lda); - ra = (d__1 = a[i__ + (ira + k - 1) * a_dim1], abs(d__1)); - -/* Guard against zero C or R due to underflow. */ - - if (c__ == 0. || r__ == 0.) 
{ - goto L200; - } - g = r__ / 8.; - f = 1.; - s = c__ + r__; -L160: -/* Computing MAX */ - d__1 = max(f,c__); -/* Computing MIN */ - d__2 = min(r__,g); - if (c__ >= g || max(d__1,ca) >= sfmax2 || min(d__2,ra) <= sfmin2) { - goto L170; - } - f *= 8.; - c__ *= 8.; - ca *= 8.; - r__ /= 8.; - g /= 8.; - ra /= 8.; - goto L160; - -L170: - g = c__ / 8.; -L180: -/* Computing MIN */ - d__1 = min(f,c__), d__1 = min(d__1,g); - if (g < r__ || max(r__,ra) >= sfmax2 || min(d__1,ca) <= sfmin2) { - goto L190; - } - f /= 8.; - c__ /= 8.; - g /= 8.; - ca /= 8.; - r__ *= 8.; - ra *= 8.; - goto L180; - -/* Now balance. */ - -L190: - if (c__ + r__ >= s * .95) { - goto L200; - } - if ((f < 1. && scale[i__] < 1.)) { - if (f * scale[i__] <= sfmin1) { - goto L200; - } - } - if ((f > 1. && scale[i__] > 1.)) { - if (scale[i__] >= sfmax1 / f) { - goto L200; - } - } - g = 1. / f; - scale[i__] *= f; - noconv = TRUE_; - - i__2 = *n - k + 1; - dscal_(&i__2, &g, &a[i__ + k * a_dim1], lda); - dscal_(&l, &f, &a[i__ * a_dim1 + 1], &c__1); - -L200: - ; - } - - if (noconv) { - goto L140; - } - -L210: - *ilo = k; - *ihi = l; - - return 0; - -/* End of DGEBAL */ - -} /* dgebal_ */ - -/* Subroutine */ int dgebd2_(integer *m, integer *n, doublereal *a, integer * - lda, doublereal *d__, doublereal *e, doublereal *tauq, doublereal * - taup, doublereal *work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - - /* Local variables */ - static integer i__; - extern /* Subroutine */ int dlarf_(char *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *, - doublereal *), dlarfg_(integer *, doublereal *, - doublereal *, integer *, doublereal *), xerbla_(char *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - DGEBD2 reduces a real general m by n matrix A to upper or lower - bidiagonal form B by an orthogonal transformation: Q' * A * P = B. - - If m >= n, B is upper bidiagonal; if m < n, B is lower bidiagonal. - - Arguments - ========= - - M (input) INTEGER - The number of rows in the matrix A. M >= 0. - - N (input) INTEGER - The number of columns in the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the m by n general matrix to be reduced. - On exit, - if m >= n, the diagonal and the first superdiagonal are - overwritten with the upper bidiagonal matrix B; the - elements below the diagonal, with the array TAUQ, represent - the orthogonal matrix Q as a product of elementary - reflectors, and the elements above the first superdiagonal, - with the array TAUP, represent the orthogonal matrix P as - a product of elementary reflectors; - if m < n, the diagonal and the first subdiagonal are - overwritten with the lower bidiagonal matrix B; the - elements below the first subdiagonal, with the array TAUQ, - represent the orthogonal matrix Q as a product of - elementary reflectors, and the elements above the diagonal, - with the array TAUP, represent the orthogonal matrix P as - a product of elementary reflectors. - See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - D (output) DOUBLE PRECISION array, dimension (min(M,N)) - The diagonal elements of the bidiagonal matrix B: - D(i) = A(i,i). - - E (output) DOUBLE PRECISION array, dimension (min(M,N)-1) - The off-diagonal elements of the bidiagonal matrix B: - if m >= n, E(i) = A(i,i+1) for i = 1,2,...,n-1; - if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1. 
- - TAUQ (output) DOUBLE PRECISION array dimension (min(M,N)) - The scalar factors of the elementary reflectors which - represent the orthogonal matrix Q. See Further Details. - - TAUP (output) DOUBLE PRECISION array, dimension (min(M,N)) - The scalar factors of the elementary reflectors which - represent the orthogonal matrix P. See Further Details. - - WORK (workspace) DOUBLE PRECISION array, dimension (max(M,N)) - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - The matrices Q and P are represented as products of elementary - reflectors: - - If m >= n, - - Q = H(1) H(2) . . . H(n) and P = G(1) G(2) . . . G(n-1) - - Each H(i) and G(i) has the form: - - H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' - - where tauq and taup are real scalars, and v and u are real vectors; - v(1:i-1) = 0, v(i) = 1, and v(i+1:m) is stored on exit in A(i+1:m,i); - u(1:i) = 0, u(i+1) = 1, and u(i+2:n) is stored on exit in A(i,i+2:n); - tauq is stored in TAUQ(i) and taup in TAUP(i). - - If m < n, - - Q = H(1) H(2) . . . H(m-1) and P = G(1) G(2) . . . G(m) - - Each H(i) and G(i) has the form: - - H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' - - where tauq and taup are real scalars, and v and u are real vectors; - v(1:i) = 0, v(i+1) = 1, and v(i+2:m) is stored on exit in A(i+2:m,i); - u(1:i-1) = 0, u(i) = 1, and u(i+1:n) is stored on exit in A(i,i+1:n); - tauq is stored in TAUQ(i) and taup in TAUP(i). 
- - The contents of A on exit are illustrated by the following examples: - - m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): - - ( d e u1 u1 u1 ) ( d u1 u1 u1 u1 u1 ) - ( v1 d e u2 u2 ) ( e d u2 u2 u2 u2 ) - ( v1 v2 d e u3 ) ( v1 e d u3 u3 u3 ) - ( v1 v2 v3 d e ) ( v1 v2 e d u4 u4 ) - ( v1 v2 v3 v4 d ) ( v1 v2 v3 e d u5 ) - ( v1 v2 v3 v4 v5 ) - - where d and e denote diagonal and off-diagonal elements of B, vi - denotes an element of the vector defining H(i), and ui an element of - the vector defining G(i). - - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --d__; - --e; - --tauq; - --taup; - --work; - - /* Function Body */ - *info = 0; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } - if (*info < 0) { - i__1 = -(*info); - xerbla_("DGEBD2", &i__1); - return 0; - } - - if (*m >= *n) { - -/* Reduce to upper bidiagonal form */ - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Generate elementary reflector H(i) to annihilate A(i+1:m,i) */ - - i__2 = *m - i__ + 1; -/* Computing MIN */ - i__3 = i__ + 1; - dlarfg_(&i__2, &a[i__ + i__ * a_dim1], &a[min(i__3,*m) + i__ * - a_dim1], &c__1, &tauq[i__]); - d__[i__] = a[i__ + i__ * a_dim1]; - a[i__ + i__ * a_dim1] = 1.; - -/* Apply H(i) to A(i:m,i+1:n) from the left */ - - i__2 = *m - i__ + 1; - i__3 = *n - i__; - dlarf_("Left", &i__2, &i__3, &a[i__ + i__ * a_dim1], &c__1, &tauq[ - i__], &a[i__ + (i__ + 1) * a_dim1], lda, &work[1]); - a[i__ + i__ * a_dim1] = d__[i__]; - - if (i__ < *n) { - -/* - Generate elementary reflector G(i) to annihilate - A(i,i+2:n) -*/ - - i__2 = *n - i__; -/* Computing MIN */ - i__3 = i__ + 2; - dlarfg_(&i__2, &a[i__ + (i__ + 1) * a_dim1], &a[i__ + min( - i__3,*n) * a_dim1], lda, &taup[i__]); - e[i__] = a[i__ + (i__ + 1) * a_dim1]; - a[i__ + (i__ + 1) * a_dim1] = 1.; - 
-/* Apply G(i) to A(i+1:m,i+1:n) from the right */ - - i__2 = *m - i__; - i__3 = *n - i__; - dlarf_("Right", &i__2, &i__3, &a[i__ + (i__ + 1) * a_dim1], - lda, &taup[i__], &a[i__ + 1 + (i__ + 1) * a_dim1], - lda, &work[1]); - a[i__ + (i__ + 1) * a_dim1] = e[i__]; - } else { - taup[i__] = 0.; - } -/* L10: */ - } - } else { - -/* Reduce to lower bidiagonal form */ - - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Generate elementary reflector G(i) to annihilate A(i,i+1:n) */ - - i__2 = *n - i__ + 1; -/* Computing MIN */ - i__3 = i__ + 1; - dlarfg_(&i__2, &a[i__ + i__ * a_dim1], &a[i__ + min(i__3,*n) * - a_dim1], lda, &taup[i__]); - d__[i__] = a[i__ + i__ * a_dim1]; - a[i__ + i__ * a_dim1] = 1.; - -/* Apply G(i) to A(i+1:m,i:n) from the right */ - - i__2 = *m - i__; - i__3 = *n - i__ + 1; -/* Computing MIN */ - i__4 = i__ + 1; - dlarf_("Right", &i__2, &i__3, &a[i__ + i__ * a_dim1], lda, &taup[ - i__], &a[min(i__4,*m) + i__ * a_dim1], lda, &work[1]); - a[i__ + i__ * a_dim1] = d__[i__]; - - if (i__ < *m) { - -/* - Generate elementary reflector H(i) to annihilate - A(i+2:m,i) -*/ - - i__2 = *m - i__; -/* Computing MIN */ - i__3 = i__ + 2; - dlarfg_(&i__2, &a[i__ + 1 + i__ * a_dim1], &a[min(i__3,*m) + - i__ * a_dim1], &c__1, &tauq[i__]); - e[i__] = a[i__ + 1 + i__ * a_dim1]; - a[i__ + 1 + i__ * a_dim1] = 1.; - -/* Apply H(i) to A(i+1:m,i+1:n) from the left */ - - i__2 = *m - i__; - i__3 = *n - i__; - dlarf_("Left", &i__2, &i__3, &a[i__ + 1 + i__ * a_dim1], & - c__1, &tauq[i__], &a[i__ + 1 + (i__ + 1) * a_dim1], - lda, &work[1]); - a[i__ + 1 + i__ * a_dim1] = e[i__]; - } else { - tauq[i__] = 0.; - } -/* L20: */ - } - } - return 0; - -/* End of DGEBD2 */ - -} /* dgebd2_ */ - -/* Subroutine */ int dgebrd_(integer *m, integer *n, doublereal *a, integer * - lda, doublereal *d__, doublereal *e, doublereal *tauq, doublereal * - taup, doublereal *work, integer *lwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - 
- /* Local variables */ - static integer i__, j, nb, nx; - static doublereal ws; - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *); - static integer nbmin, iinfo, minmn; - extern /* Subroutine */ int dgebd2_(integer *, integer *, doublereal *, - integer *, doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *, integer *), dlabrd_(integer *, integer *, integer * - , doublereal *, integer *, doublereal *, doublereal *, doublereal - *, doublereal *, doublereal *, integer *, doublereal *, integer *) - , xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static integer ldwrkx, ldwrky, lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DGEBRD reduces a general real M-by-N matrix A to upper or lower - bidiagonal form B by an orthogonal transformation: Q**T * A * P = B. - - If m >= n, B is upper bidiagonal; if m < n, B is lower bidiagonal. - - Arguments - ========= - - M (input) INTEGER - The number of rows in the matrix A. M >= 0. - - N (input) INTEGER - The number of columns in the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the M-by-N general matrix to be reduced. 
- On exit, - if m >= n, the diagonal and the first superdiagonal are - overwritten with the upper bidiagonal matrix B; the - elements below the diagonal, with the array TAUQ, represent - the orthogonal matrix Q as a product of elementary - reflectors, and the elements above the first superdiagonal, - with the array TAUP, represent the orthogonal matrix P as - a product of elementary reflectors; - if m < n, the diagonal and the first subdiagonal are - overwritten with the lower bidiagonal matrix B; the - elements below the first subdiagonal, with the array TAUQ, - represent the orthogonal matrix Q as a product of - elementary reflectors, and the elements above the diagonal, - with the array TAUP, represent the orthogonal matrix P as - a product of elementary reflectors. - See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - D (output) DOUBLE PRECISION array, dimension (min(M,N)) - The diagonal elements of the bidiagonal matrix B: - D(i) = A(i,i). - - E (output) DOUBLE PRECISION array, dimension (min(M,N)-1) - The off-diagonal elements of the bidiagonal matrix B: - if m >= n, E(i) = A(i,i+1) for i = 1,2,...,n-1; - if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1. - - TAUQ (output) DOUBLE PRECISION array dimension (min(M,N)) - The scalar factors of the elementary reflectors which - represent the orthogonal matrix Q. See Further Details. - - TAUP (output) DOUBLE PRECISION array, dimension (min(M,N)) - The scalar factors of the elementary reflectors which - represent the orthogonal matrix P. See Further Details. - - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The length of the array WORK. LWORK >= max(1,M,N). - For optimum performance LWORK >= (M+N)*NB, where NB - is the optimal blocksize. 
- - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - The matrices Q and P are represented as products of elementary - reflectors: - - If m >= n, - - Q = H(1) H(2) . . . H(n) and P = G(1) G(2) . . . G(n-1) - - Each H(i) and G(i) has the form: - - H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' - - where tauq and taup are real scalars, and v and u are real vectors; - v(1:i-1) = 0, v(i) = 1, and v(i+1:m) is stored on exit in A(i+1:m,i); - u(1:i) = 0, u(i+1) = 1, and u(i+2:n) is stored on exit in A(i,i+2:n); - tauq is stored in TAUQ(i) and taup in TAUP(i). - - If m < n, - - Q = H(1) H(2) . . . H(m-1) and P = G(1) G(2) . . . G(m) - - Each H(i) and G(i) has the form: - - H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' - - where tauq and taup are real scalars, and v and u are real vectors; - v(1:i) = 0, v(i+1) = 1, and v(i+2:m) is stored on exit in A(i+2:m,i); - u(1:i-1) = 0, u(i) = 1, and u(i+1:n) is stored on exit in A(i,i+1:n); - tauq is stored in TAUQ(i) and taup in TAUP(i). - - The contents of A on exit are illustrated by the following examples: - - m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): - - ( d e u1 u1 u1 ) ( d u1 u1 u1 u1 u1 ) - ( v1 d e u2 u2 ) ( e d u2 u2 u2 u2 ) - ( v1 v2 d e u3 ) ( v1 e d u3 u3 u3 ) - ( v1 v2 v3 d e ) ( v1 v2 e d u4 u4 ) - ( v1 v2 v3 v4 d ) ( v1 v2 v3 e d u5 ) - ( v1 v2 v3 v4 v5 ) - - where d and e denote diagonal and off-diagonal elements of B, vi - denotes an element of the vector defining H(i), and ui an element of - the vector defining G(i). 
- - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --d__; - --e; - --tauq; - --taup; - --work; - - /* Function Body */ - *info = 0; -/* Computing MAX */ - i__1 = 1, i__2 = ilaenv_(&c__1, "DGEBRD", " ", m, n, &c_n1, &c_n1, ( - ftnlen)6, (ftnlen)1); - nb = max(i__1,i__2); - lwkopt = (*m + *n) * nb; - work[1] = (doublereal) lwkopt; - lquery = *lwork == -1; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } else /* if(complicated condition) */ { -/* Computing MAX */ - i__1 = max(1,*m); - if ((*lwork < max(i__1,*n) && ! lquery)) { - *info = -10; - } - } - if (*info < 0) { - i__1 = -(*info); - xerbla_("DGEBRD", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - minmn = min(*m,*n); - if (minmn == 0) { - work[1] = 1.; - return 0; - } - - ws = (doublereal) max(*m,*n); - ldwrkx = *m; - ldwrky = *n; - - if ((nb > 1 && nb < minmn)) { - -/* - Set the crossover point NX. - - Computing MAX -*/ - i__1 = nb, i__2 = ilaenv_(&c__3, "DGEBRD", " ", m, n, &c_n1, &c_n1, ( - ftnlen)6, (ftnlen)1); - nx = max(i__1,i__2); - -/* Determine when to switch from blocked to unblocked code. */ - - if (nx < minmn) { - ws = (doublereal) ((*m + *n) * nb); - if ((doublereal) (*lwork) < ws) { - -/* - Not enough work space for the optimal NB, consider using - a smaller block size. -*/ - - nbmin = ilaenv_(&c__2, "DGEBRD", " ", m, n, &c_n1, &c_n1, ( - ftnlen)6, (ftnlen)1); - if (*lwork >= (*m + *n) * nbmin) { - nb = *lwork / (*m + *n); - } else { - nb = 1; - nx = minmn; - } - } - } - } else { - nx = minmn; - } - - i__1 = minmn - nx; - i__2 = nb; - for (i__ = 1; i__2 < 0 ? 
i__ >= i__1 : i__ <= i__1; i__ += i__2) { - -/* - Reduce rows and columns i:i+nb-1 to bidiagonal form and return - the matrices X and Y which are needed to update the unreduced - part of the matrix -*/ - - i__3 = *m - i__ + 1; - i__4 = *n - i__ + 1; - dlabrd_(&i__3, &i__4, &nb, &a[i__ + i__ * a_dim1], lda, &d__[i__], &e[ - i__], &tauq[i__], &taup[i__], &work[1], &ldwrkx, &work[ldwrkx - * nb + 1], &ldwrky); - -/* - Update the trailing submatrix A(i+nb:m,i+nb:n), using an update - of the form A := A - V*Y' - X*U' -*/ - - i__3 = *m - i__ - nb + 1; - i__4 = *n - i__ - nb + 1; - dgemm_("No transpose", "Transpose", &i__3, &i__4, &nb, &c_b151, &a[ - i__ + nb + i__ * a_dim1], lda, &work[ldwrkx * nb + nb + 1], & - ldwrky, &c_b15, &a[i__ + nb + (i__ + nb) * a_dim1], lda); - i__3 = *m - i__ - nb + 1; - i__4 = *n - i__ - nb + 1; - dgemm_("No transpose", "No transpose", &i__3, &i__4, &nb, &c_b151, & - work[nb + 1], &ldwrkx, &a[i__ + (i__ + nb) * a_dim1], lda, & - c_b15, &a[i__ + nb + (i__ + nb) * a_dim1], lda); - -/* Copy diagonal and off-diagonal elements of B back into A */ - - if (*m >= *n) { - i__3 = i__ + nb - 1; - for (j = i__; j <= i__3; ++j) { - a[j + j * a_dim1] = d__[j]; - a[j + (j + 1) * a_dim1] = e[j]; -/* L10: */ - } - } else { - i__3 = i__ + nb - 1; - for (j = i__; j <= i__3; ++j) { - a[j + j * a_dim1] = d__[j]; - a[j + 1 + j * a_dim1] = e[j]; -/* L20: */ - } - } -/* L30: */ - } - -/* Use unblocked code to reduce the remainder of the matrix */ - - i__2 = *m - i__ + 1; - i__1 = *n - i__ + 1; - dgebd2_(&i__2, &i__1, &a[i__ + i__ * a_dim1], lda, &d__[i__], &e[i__], & - tauq[i__], &taup[i__], &work[1], &iinfo); - work[1] = ws; - return 0; - -/* End of DGEBRD */ - -} /* dgebrd_ */ - -/* Subroutine */ int dgeev_(char *jobvl, char *jobvr, integer *n, doublereal * - a, integer *lda, doublereal *wr, doublereal *wi, doublereal *vl, - integer *ldvl, doublereal *vr, integer *ldvr, doublereal *work, - integer *lwork, integer *info) -{ - /* System generated locals */ - integer 
a_dim1, a_offset, vl_dim1, vl_offset, vr_dim1, vr_offset, i__1, - i__2, i__3, i__4; - doublereal d__1, d__2; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static integer i__, k; - static doublereal r__, cs, sn; - static integer ihi; - static doublereal scl; - static integer ilo; - static doublereal dum[1], eps; - static integer ibal; - static char side[1]; - static integer maxb; - static doublereal anrm; - static integer ierr, itau; - extern /* Subroutine */ int drot_(integer *, doublereal *, integer *, - doublereal *, integer *, doublereal *, doublereal *); - static integer iwrk, nout; - extern doublereal dnrm2_(integer *, doublereal *, integer *); - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *); - extern logical lsame_(char *, char *); - extern doublereal dlapy2_(doublereal *, doublereal *); - extern /* Subroutine */ int dlabad_(doublereal *, doublereal *), dgebak_( - char *, char *, integer *, integer *, integer *, doublereal *, - integer *, doublereal *, integer *, integer *), - dgebal_(char *, integer *, doublereal *, integer *, integer *, - integer *, doublereal *, integer *); - static logical scalea; - - static doublereal cscale; - extern doublereal dlange_(char *, integer *, integer *, doublereal *, - integer *, doublereal *); - extern /* Subroutine */ int dgehrd_(integer *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *, - integer *), dlascl_(char *, integer *, integer *, doublereal *, - doublereal *, integer *, integer *, doublereal *, integer *, - integer *); - extern integer idamax_(integer *, doublereal *, integer *); - extern /* Subroutine */ int dlacpy_(char *, integer *, integer *, - doublereal *, integer *, doublereal *, integer *), - dlartg_(doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *), xerbla_(char *, integer *); - static logical select[1]; - extern integer ilaenv_(integer *, char *, char *, integer *, 
integer *, - integer *, integer *, ftnlen, ftnlen); - static doublereal bignum; - extern /* Subroutine */ int dorghr_(integer *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *, - integer *), dhseqr_(char *, char *, integer *, integer *, integer - *, doublereal *, integer *, doublereal *, doublereal *, - doublereal *, integer *, doublereal *, integer *, integer *), dtrevc_(char *, char *, logical *, integer *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - integer *, integer *, integer *, doublereal *, integer *); - static integer minwrk, maxwrk; - static logical wantvl; - static doublereal smlnum; - static integer hswork; - static logical lquery, wantvr; - - -/* - -- LAPACK driver routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - December 8, 1999 - - - Purpose - ======= - - DGEEV computes for an N-by-N real nonsymmetric matrix A, the - eigenvalues and, optionally, the left and/or right eigenvectors. - - The right eigenvector v(j) of A satisfies - A * v(j) = lambda(j) * v(j) - where lambda(j) is its eigenvalue. - The left eigenvector u(j) of A satisfies - u(j)**H * A = lambda(j) * u(j)**H - where u(j)**H denotes the conjugate transpose of u(j). - - The computed eigenvectors are normalized to have Euclidean norm - equal to 1 and largest component real. - - Arguments - ========= - - JOBVL (input) CHARACTER*1 - = 'N': left eigenvectors of A are not computed; - = 'V': left eigenvectors of A are computed. - - JOBVR (input) CHARACTER*1 - = 'N': right eigenvectors of A are not computed; - = 'V': right eigenvectors of A are computed. - - N (input) INTEGER - The order of the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the N-by-N matrix A. - On exit, A has been overwritten. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). 
- - WR (output) DOUBLE PRECISION array, dimension (N) - WI (output) DOUBLE PRECISION array, dimension (N) - WR and WI contain the real and imaginary parts, - respectively, of the computed eigenvalues. Complex - conjugate pairs of eigenvalues appear consecutively - with the eigenvalue having the positive imaginary part - first. - - VL (output) DOUBLE PRECISION array, dimension (LDVL,N) - If JOBVL = 'V', the left eigenvectors u(j) are stored one - after another in the columns of VL, in the same order - as their eigenvalues. - If JOBVL = 'N', VL is not referenced. - If the j-th eigenvalue is real, then u(j) = VL(:,j), - the j-th column of VL. - If the j-th and (j+1)-st eigenvalues form a complex - conjugate pair, then u(j) = VL(:,j) + i*VL(:,j+1) and - u(j+1) = VL(:,j) - i*VL(:,j+1). - - LDVL (input) INTEGER - The leading dimension of the array VL. LDVL >= 1; if - JOBVL = 'V', LDVL >= N. - - VR (output) DOUBLE PRECISION array, dimension (LDVR,N) - If JOBVR = 'V', the right eigenvectors v(j) are stored one - after another in the columns of VR, in the same order - as their eigenvalues. - If JOBVR = 'N', VR is not referenced. - If the j-th eigenvalue is real, then v(j) = VR(:,j), - the j-th column of VR. - If the j-th and (j+1)-st eigenvalues form a complex - conjugate pair, then v(j) = VR(:,j) + i*VR(:,j+1) and - v(j+1) = VR(:,j) - i*VR(:,j+1). - - LDVR (input) INTEGER - The leading dimension of the array VR. LDVR >= 1; if - JOBVR = 'V', LDVR >= N. - - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= max(1,3*N), and - if JOBVL = 'V' or JOBVR = 'V', LWORK >= 4*N. For good - performance, LWORK must generally be larger. 
- - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: if INFO = i, the QR algorithm failed to compute all the - eigenvalues, and no eigenvectors have been computed; - elements i+1:N of WR and WI contain eigenvalues which - have converged. - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --wr; - --wi; - vl_dim1 = *ldvl; - vl_offset = 1 + vl_dim1 * 1; - vl -= vl_offset; - vr_dim1 = *ldvr; - vr_offset = 1 + vr_dim1 * 1; - vr -= vr_offset; - --work; - - /* Function Body */ - *info = 0; - lquery = *lwork == -1; - wantvl = lsame_(jobvl, "V"); - wantvr = lsame_(jobvr, "V"); - if ((! wantvl && ! lsame_(jobvl, "N"))) { - *info = -1; - } else if ((! wantvr && ! lsame_(jobvr, "N"))) { - *info = -2; - } else if (*n < 0) { - *info = -3; - } else if (*lda < max(1,*n)) { - *info = -5; - } else if (*ldvl < 1 || (wantvl && *ldvl < *n)) { - *info = -9; - } else if (*ldvr < 1 || (wantvr && *ldvr < *n)) { - *info = -11; - } - -/* - Compute workspace - (Note: Comments in the code beginning "Workspace:" describe the - minimal amount of workspace needed at that point in the code, - as well as the preferred amount for good performance. - NB refers to the optimal block size for the immediately - following subroutine, as returned by ILAENV. - HSWORK refers to the workspace preferred by DHSEQR, as - calculated below. HSWORK is computed assuming ILO=1 and IHI=N, - the worst case.) 
-*/ - - minwrk = 1; - if ((*info == 0 && (*lwork >= 1 || lquery))) { - maxwrk = ((*n) << (1)) + *n * ilaenv_(&c__1, "DGEHRD", " ", n, &c__1, - n, &c__0, (ftnlen)6, (ftnlen)1); - if ((! wantvl && ! wantvr)) { -/* Computing MAX */ - i__1 = 1, i__2 = *n * 3; - minwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = ilaenv_(&c__8, "DHSEQR", "EN", n, &c__1, n, &c_n1, (ftnlen) - 6, (ftnlen)2); - maxb = max(i__1,2); -/* - Computing MIN - Computing MAX -*/ - i__3 = 2, i__4 = ilaenv_(&c__4, "DHSEQR", "EN", n, &c__1, n, & - c_n1, (ftnlen)6, (ftnlen)2); - i__1 = min(maxb,*n), i__2 = max(i__3,i__4); - k = min(i__1,i__2); -/* Computing MAX */ - i__1 = k * (k + 2), i__2 = (*n) << (1); - hswork = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *n + 1, i__1 = max(i__1,i__2), i__2 = *n + - hswork; - maxwrk = max(i__1,i__2); - } else { -/* Computing MAX */ - i__1 = 1, i__2 = (*n) << (2); - minwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + (*n - 1) * ilaenv_(&c__1, - "DORGHR", " ", n, &c__1, n, &c_n1, (ftnlen)6, (ftnlen)1); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = ilaenv_(&c__8, "DHSEQR", "SV", n, &c__1, n, &c_n1, (ftnlen) - 6, (ftnlen)2); - maxb = max(i__1,2); -/* - Computing MIN - Computing MAX -*/ - i__3 = 2, i__4 = ilaenv_(&c__4, "DHSEQR", "SV", n, &c__1, n, & - c_n1, (ftnlen)6, (ftnlen)2); - i__1 = min(maxb,*n), i__2 = max(i__3,i__4); - k = min(i__1,i__2); -/* Computing MAX */ - i__1 = k * (k + 2), i__2 = (*n) << (1); - hswork = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *n + 1, i__1 = max(i__1,i__2), i__2 = *n + - hswork; - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = (*n) << (2); - maxwrk = max(i__1,i__2); - } - work[1] = (doublereal) maxwrk; - } - if ((*lwork < minwrk && ! 
lquery)) { - *info = -13; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DGEEV ", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - -/* Get machine constants */ - - eps = PRECISION; - smlnum = SAFEMINIMUM; - bignum = 1. / smlnum; - dlabad_(&smlnum, &bignum); - smlnum = sqrt(smlnum) / eps; - bignum = 1. / smlnum; - -/* Scale A if max element outside range [SMLNUM,BIGNUM] */ - - anrm = dlange_("M", n, n, &a[a_offset], lda, dum); - scalea = FALSE_; - if ((anrm > 0. && anrm < smlnum)) { - scalea = TRUE_; - cscale = smlnum; - } else if (anrm > bignum) { - scalea = TRUE_; - cscale = bignum; - } - if (scalea) { - dlascl_("G", &c__0, &c__0, &anrm, &cscale, n, n, &a[a_offset], lda, & - ierr); - } - -/* - Balance the matrix - (Workspace: need N) -*/ - - ibal = 1; - dgebal_("B", n, &a[a_offset], lda, &ilo, &ihi, &work[ibal], &ierr); - -/* - Reduce to upper Hessenberg form - (Workspace: need 3*N, prefer 2*N+N*NB) -*/ - - itau = ibal + *n; - iwrk = itau + *n; - i__1 = *lwork - iwrk + 1; - dgehrd_(n, &ilo, &ihi, &a[a_offset], lda, &work[itau], &work[iwrk], &i__1, - &ierr); - - if (wantvl) { - -/* - Want left eigenvectors - Copy Householder vectors to VL -*/ - - *(unsigned char *)side = 'L'; - dlacpy_("L", n, n, &a[a_offset], lda, &vl[vl_offset], ldvl) - ; - -/* - Generate orthogonal matrix in VL - (Workspace: need 3*N-1, prefer 2*N+(N-1)*NB) -*/ - - i__1 = *lwork - iwrk + 1; - dorghr_(n, &ilo, &ihi, &vl[vl_offset], ldvl, &work[itau], &work[iwrk], - &i__1, &ierr); - -/* - Perform QR iteration, accumulating Schur vectors in VL - (Workspace: need N+1, prefer N+HSWORK (see comments) ) -*/ - - iwrk = itau; - i__1 = *lwork - iwrk + 1; - dhseqr_("S", "V", n, &ilo, &ihi, &a[a_offset], lda, &wr[1], &wi[1], & - vl[vl_offset], ldvl, &work[iwrk], &i__1, info); - - if (wantvr) { - -/* - Want left and right eigenvectors - Copy Schur vectors to VR -*/ - - *(unsigned char *)side = 'B'; - dlacpy_("F", n, n, 
&vl[vl_offset], ldvl, &vr[vr_offset], ldvr); - } - - } else if (wantvr) { - -/* - Want right eigenvectors - Copy Householder vectors to VR -*/ - - *(unsigned char *)side = 'R'; - dlacpy_("L", n, n, &a[a_offset], lda, &vr[vr_offset], ldvr) - ; - -/* - Generate orthogonal matrix in VR - (Workspace: need 3*N-1, prefer 2*N+(N-1)*NB) -*/ - - i__1 = *lwork - iwrk + 1; - dorghr_(n, &ilo, &ihi, &vr[vr_offset], ldvr, &work[itau], &work[iwrk], - &i__1, &ierr); - -/* - Perform QR iteration, accumulating Schur vectors in VR - (Workspace: need N+1, prefer N+HSWORK (see comments) ) -*/ - - iwrk = itau; - i__1 = *lwork - iwrk + 1; - dhseqr_("S", "V", n, &ilo, &ihi, &a[a_offset], lda, &wr[1], &wi[1], & - vr[vr_offset], ldvr, &work[iwrk], &i__1, info); - - } else { - -/* - Compute eigenvalues only - (Workspace: need N+1, prefer N+HSWORK (see comments) ) -*/ - - iwrk = itau; - i__1 = *lwork - iwrk + 1; - dhseqr_("E", "N", n, &ilo, &ihi, &a[a_offset], lda, &wr[1], &wi[1], & - vr[vr_offset], ldvr, &work[iwrk], &i__1, info); - } - -/* If INFO > 0 from DHSEQR, then quit */ - - if (*info > 0) { - goto L50; - } - - if (wantvl || wantvr) { - -/* - Compute left and/or right eigenvectors - (Workspace: need 4*N) -*/ - - dtrevc_(side, "B", select, n, &a[a_offset], lda, &vl[vl_offset], ldvl, - &vr[vr_offset], ldvr, n, &nout, &work[iwrk], &ierr); - } - - if (wantvl) { - -/* - Undo balancing of left eigenvectors - (Workspace: need N) -*/ - - dgebak_("B", "L", n, &ilo, &ihi, &work[ibal], n, &vl[vl_offset], ldvl, - &ierr); - -/* Normalize left eigenvectors and make largest component real */ - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - if (wi[i__] == 0.) { - scl = 1. / dnrm2_(n, &vl[i__ * vl_dim1 + 1], &c__1); - dscal_(n, &scl, &vl[i__ * vl_dim1 + 1], &c__1); - } else if (wi[i__] > 0.) { - d__1 = dnrm2_(n, &vl[i__ * vl_dim1 + 1], &c__1); - d__2 = dnrm2_(n, &vl[(i__ + 1) * vl_dim1 + 1], &c__1); - scl = 1. 
/ dlapy2_(&d__1, &d__2); - dscal_(n, &scl, &vl[i__ * vl_dim1 + 1], &c__1); - dscal_(n, &scl, &vl[(i__ + 1) * vl_dim1 + 1], &c__1); - i__2 = *n; - for (k = 1; k <= i__2; ++k) { -/* Computing 2nd power */ - d__1 = vl[k + i__ * vl_dim1]; -/* Computing 2nd power */ - d__2 = vl[k + (i__ + 1) * vl_dim1]; - work[iwrk + k - 1] = d__1 * d__1 + d__2 * d__2; -/* L10: */ - } - k = idamax_(n, &work[iwrk], &c__1); - dlartg_(&vl[k + i__ * vl_dim1], &vl[k + (i__ + 1) * vl_dim1], - &cs, &sn, &r__); - drot_(n, &vl[i__ * vl_dim1 + 1], &c__1, &vl[(i__ + 1) * - vl_dim1 + 1], &c__1, &cs, &sn); - vl[k + (i__ + 1) * vl_dim1] = 0.; - } -/* L20: */ - } - } - - if (wantvr) { - -/* - Undo balancing of right eigenvectors - (Workspace: need N) -*/ - - dgebak_("B", "R", n, &ilo, &ihi, &work[ibal], n, &vr[vr_offset], ldvr, - &ierr); - -/* Normalize right eigenvectors and make largest component real */ - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - if (wi[i__] == 0.) { - scl = 1. / dnrm2_(n, &vr[i__ * vr_dim1 + 1], &c__1); - dscal_(n, &scl, &vr[i__ * vr_dim1 + 1], &c__1); - } else if (wi[i__] > 0.) { - d__1 = dnrm2_(n, &vr[i__ * vr_dim1 + 1], &c__1); - d__2 = dnrm2_(n, &vr[(i__ + 1) * vr_dim1 + 1], &c__1); - scl = 1. 
/ dlapy2_(&d__1, &d__2); - dscal_(n, &scl, &vr[i__ * vr_dim1 + 1], &c__1); - dscal_(n, &scl, &vr[(i__ + 1) * vr_dim1 + 1], &c__1); - i__2 = *n; - for (k = 1; k <= i__2; ++k) { -/* Computing 2nd power */ - d__1 = vr[k + i__ * vr_dim1]; -/* Computing 2nd power */ - d__2 = vr[k + (i__ + 1) * vr_dim1]; - work[iwrk + k - 1] = d__1 * d__1 + d__2 * d__2; -/* L30: */ - } - k = idamax_(n, &work[iwrk], &c__1); - dlartg_(&vr[k + i__ * vr_dim1], &vr[k + (i__ + 1) * vr_dim1], - &cs, &sn, &r__); - drot_(n, &vr[i__ * vr_dim1 + 1], &c__1, &vr[(i__ + 1) * - vr_dim1 + 1], &c__1, &cs, &sn); - vr[k + (i__ + 1) * vr_dim1] = 0.; - } -/* L40: */ - } - } - -/* Undo scaling if necessary */ - -L50: - if (scalea) { - i__1 = *n - *info; -/* Computing MAX */ - i__3 = *n - *info; - i__2 = max(i__3,1); - dlascl_("G", &c__0, &c__0, &cscale, &anrm, &i__1, &c__1, &wr[*info + - 1], &i__2, &ierr); - i__1 = *n - *info; -/* Computing MAX */ - i__3 = *n - *info; - i__2 = max(i__3,1); - dlascl_("G", &c__0, &c__0, &cscale, &anrm, &i__1, &c__1, &wi[*info + - 1], &i__2, &ierr); - if (*info > 0) { - i__1 = ilo - 1; - dlascl_("G", &c__0, &c__0, &cscale, &anrm, &i__1, &c__1, &wr[1], - n, &ierr); - i__1 = ilo - 1; - dlascl_("G", &c__0, &c__0, &cscale, &anrm, &i__1, &c__1, &wi[1], - n, &ierr); - } - } - - work[1] = (doublereal) maxwrk; - return 0; - -/* End of DGEEV */ - -} /* dgeev_ */ - -/* Subroutine */ int dgehd2_(integer *n, integer *ilo, integer *ihi, - doublereal *a, integer *lda, doublereal *tau, doublereal *work, - integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__; - static doublereal aii; - extern /* Subroutine */ int dlarf_(char *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *, - doublereal *), dlarfg_(integer *, doublereal *, - doublereal *, integer *, doublereal *), xerbla_(char *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DGEHD2 reduces a real general matrix A to upper Hessenberg form H by - an orthogonal similarity transformation: Q' * A * Q = H . - - Arguments - ========= - - N (input) INTEGER - The order of the matrix A. N >= 0. - - ILO (input) INTEGER - IHI (input) INTEGER - It is assumed that A is already upper triangular in rows - and columns 1:ILO-1 and IHI+1:N. ILO and IHI are normally - set by a previous call to DGEBAL; otherwise they should be - set to 1 and N respectively. See Further Details. - 1 <= ILO <= IHI <= max(1,N). - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the n by n general matrix to be reduced. - On exit, the upper triangle and the first subdiagonal of A - are overwritten with the upper Hessenberg matrix H, and the - elements below the first subdiagonal, with the array TAU, - represent the orthogonal matrix Q as a product of elementary - reflectors. See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - TAU (output) DOUBLE PRECISION array, dimension (N-1) - The scalar factors of the elementary reflectors (see Further - Details). - - WORK (workspace) DOUBLE PRECISION array, dimension (N) - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - The matrix Q is represented as a product of (ihi-ilo) elementary - reflectors - - Q = H(ilo) H(ilo+1) . . . H(ihi-1). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a real scalar, and v is a real vector with - v(1:i) = 0, v(i+1) = 1 and v(ihi+1:n) = 0; v(i+2:ihi) is stored on - exit in A(i+2:ihi,i), and tau in TAU(i). 
- - The contents of A are illustrated by the following example, with - n = 7, ilo = 2 and ihi = 6: - - on entry, on exit, - - ( a a a a a a a ) ( a a h h h h a ) - ( a a a a a a ) ( a h h h h a ) - ( a a a a a a ) ( h h h h h h ) - ( a a a a a a ) ( v2 h h h h h ) - ( a a a a a a ) ( v2 v3 h h h h ) - ( a a a a a a ) ( v2 v3 v4 h h h ) - ( a ) ( a ) - - where a denotes an element of the original matrix A, h denotes a - modified element of the upper Hessenberg matrix H, and vi denotes an - element of the vector defining H(i). - - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - if (*n < 0) { - *info = -1; - } else if (*ilo < 1 || *ilo > max(1,*n)) { - *info = -2; - } else if (*ihi < min(*ilo,*n) || *ihi > *n) { - *info = -3; - } else if (*lda < max(1,*n)) { - *info = -5; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DGEHD2", &i__1); - return 0; - } - - i__1 = *ihi - 1; - for (i__ = *ilo; i__ <= i__1; ++i__) { - -/* Compute elementary reflector H(i) to annihilate A(i+2:ihi,i) */ - - i__2 = *ihi - i__; -/* Computing MIN */ - i__3 = i__ + 2; - dlarfg_(&i__2, &a[i__ + 1 + i__ * a_dim1], &a[min(i__3,*n) + i__ * - a_dim1], &c__1, &tau[i__]); - aii = a[i__ + 1 + i__ * a_dim1]; - a[i__ + 1 + i__ * a_dim1] = 1.; - -/* Apply H(i) to A(1:ihi,i+1:ihi) from the right */ - - i__2 = *ihi - i__; - dlarf_("Right", ihi, &i__2, &a[i__ + 1 + i__ * a_dim1], &c__1, &tau[ - i__], &a[(i__ + 1) * a_dim1 + 1], lda, &work[1]); - -/* Apply H(i) to A(i+1:ihi,i+1:n) from the left */ - - i__2 = *ihi - i__; - i__3 = *n - i__; - dlarf_("Left", &i__2, &i__3, &a[i__ + 1 + i__ * a_dim1], &c__1, &tau[ - i__], &a[i__ + 1 + (i__ + 1) * a_dim1], lda, &work[1]); - - a[i__ + 1 + i__ * a_dim1] = aii; -/* L10: */ - } - - return 0; - -/* End of DGEHD2 */ - -} /* dgehd2_ */ - -/* Subroutine */ int 
dgehrd_(integer *n, integer *ilo, integer *ihi, - doublereal *a, integer *lda, doublereal *tau, doublereal *work, - integer *lwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - - /* Local variables */ - static integer i__; - static doublereal t[4160] /* was [65][64] */; - static integer ib; - static doublereal ei; - static integer nb, nh, nx, iws; - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *); - static integer nbmin, iinfo; - extern /* Subroutine */ int dgehd2_(integer *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *), - dlarfb_(char *, char *, char *, char *, integer *, integer *, - integer *, doublereal *, integer *, doublereal *, integer *, - doublereal *, integer *, doublereal *, integer *), dlahrd_(integer *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *, - doublereal *, integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static integer ldwork, lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DGEHRD reduces a real general matrix A to upper Hessenberg form H by - an orthogonal similarity transformation: Q' * A * Q = H . - - Arguments - ========= - - N (input) INTEGER - The order of the matrix A. N >= 0. - - ILO (input) INTEGER - IHI (input) INTEGER - It is assumed that A is already upper triangular in rows - and columns 1:ILO-1 and IHI+1:N. ILO and IHI are normally - set by a previous call to DGEBAL; otherwise they should be - set to 1 and N respectively. See Further Details. 
- 1 <= ILO <= IHI <= N, if N > 0; ILO=1 and IHI=0, if N=0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the N-by-N general matrix to be reduced. - On exit, the upper triangle and the first subdiagonal of A - are overwritten with the upper Hessenberg matrix H, and the - elements below the first subdiagonal, with the array TAU, - represent the orthogonal matrix Q as a product of elementary - reflectors. See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - TAU (output) DOUBLE PRECISION array, dimension (N-1) - The scalar factors of the elementary reflectors (see Further - Details). Elements 1:ILO-1 and IHI:N-1 of TAU are set to - zero. - - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The length of the array WORK. LWORK >= max(1,N). - For optimum performance LWORK >= N*NB, where NB is the - optimal blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - The matrix Q is represented as a product of (ihi-ilo) elementary - reflectors - - Q = H(ilo) H(ilo+1) . . . H(ihi-1). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a real scalar, and v is a real vector with - v(1:i) = 0, v(i+1) = 1 and v(ihi+1:n) = 0; v(i+2:ihi) is stored on - exit in A(i+2:ihi,i), and tau in TAU(i). 
- - The contents of A are illustrated by the following example, with - n = 7, ilo = 2 and ihi = 6: - - on entry, on exit, - - ( a a a a a a a ) ( a a h h h h a ) - ( a a a a a a ) ( a h h h h a ) - ( a a a a a a ) ( h h h h h h ) - ( a a a a a a ) ( v2 h h h h h ) - ( a a a a a a ) ( v2 v3 h h h h ) - ( a a a a a a ) ( v2 v3 v4 h h h ) - ( a ) ( a ) - - where a denotes an element of the original matrix A, h denotes a - modified element of the upper Hessenberg matrix H, and vi denotes an - element of the vector defining H(i). - - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; -/* Computing MIN */ - i__1 = 64, i__2 = ilaenv_(&c__1, "DGEHRD", " ", n, ilo, ihi, &c_n1, ( - ftnlen)6, (ftnlen)1); - nb = min(i__1,i__2); - lwkopt = *n * nb; - work[1] = (doublereal) lwkopt; - lquery = *lwork == -1; - if (*n < 0) { - *info = -1; - } else if (*ilo < 1 || *ilo > max(1,*n)) { - *info = -2; - } else if (*ihi < min(*ilo,*n) || *ihi > *n) { - *info = -3; - } else if (*lda < max(1,*n)) { - *info = -5; - } else if ((*lwork < max(1,*n) && ! lquery)) { - *info = -8; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DGEHRD", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Set elements 1:ILO-1 and IHI:N-1 of TAU to zero */ - - i__1 = *ilo - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - tau[i__] = 0.; -/* L10: */ - } - i__1 = *n - 1; - for (i__ = max(1,*ihi); i__ <= i__1; ++i__) { - tau[i__] = 0.; -/* L20: */ - } - -/* Quick return if possible */ - - nh = *ihi - *ilo + 1; - if (nh <= 1) { - work[1] = 1.; - return 0; - } - -/* - Determine the block size. 
- - Computing MIN -*/ - i__1 = 64, i__2 = ilaenv_(&c__1, "DGEHRD", " ", n, ilo, ihi, &c_n1, ( - ftnlen)6, (ftnlen)1); - nb = min(i__1,i__2); - nbmin = 2; - iws = 1; - if ((nb > 1 && nb < nh)) { - -/* - Determine when to cross over from blocked to unblocked code - (last block is always handled by unblocked code). - - Computing MAX -*/ - i__1 = nb, i__2 = ilaenv_(&c__3, "DGEHRD", " ", n, ilo, ihi, &c_n1, ( - ftnlen)6, (ftnlen)1); - nx = max(i__1,i__2); - if (nx < nh) { - -/* Determine if workspace is large enough for blocked code. */ - - iws = *n * nb; - if (*lwork < iws) { - -/* - Not enough workspace to use optimal NB: determine the - minimum value of NB, and reduce NB or force use of - unblocked code. - - Computing MAX -*/ - i__1 = 2, i__2 = ilaenv_(&c__2, "DGEHRD", " ", n, ilo, ihi, & - c_n1, (ftnlen)6, (ftnlen)1); - nbmin = max(i__1,i__2); - if (*lwork >= *n * nbmin) { - nb = *lwork / *n; - } else { - nb = 1; - } - } - } - } - ldwork = *n; - - if (nb < nbmin || nb >= nh) { - -/* Use unblocked code below */ - - i__ = *ilo; - - } else { - -/* Use blocked code */ - - i__1 = *ihi - 1 - nx; - i__2 = nb; - for (i__ = *ilo; i__2 < 0 ? i__ >= i__1 : i__ <= i__1; i__ += i__2) { -/* Computing MIN */ - i__3 = nb, i__4 = *ihi - i__; - ib = min(i__3,i__4); - -/* - Reduce columns i:i+ib-1 to Hessenberg form, returning the - matrices V and T of the block reflector H = I - V*T*V' - which performs the reduction, and also the matrix Y = A*V*T -*/ - - dlahrd_(ihi, &i__, &ib, &a[i__ * a_dim1 + 1], lda, &tau[i__], t, & - c__65, &work[1], &ldwork); - -/* - Apply the block reflector H to A(1:ihi,i+ib:ihi) from the - right, computing A := A - Y * V'. V(i+ib,ib-1) must be set - to 1. 
-*/ - - ei = a[i__ + ib + (i__ + ib - 1) * a_dim1]; - a[i__ + ib + (i__ + ib - 1) * a_dim1] = 1.; - i__3 = *ihi - i__ - ib + 1; - dgemm_("No transpose", "Transpose", ihi, &i__3, &ib, &c_b151, & - work[1], &ldwork, &a[i__ + ib + i__ * a_dim1], lda, & - c_b15, &a[(i__ + ib) * a_dim1 + 1], lda); - a[i__ + ib + (i__ + ib - 1) * a_dim1] = ei; - -/* - Apply the block reflector H to A(i+1:ihi,i+ib:n) from the - left -*/ - - i__3 = *ihi - i__; - i__4 = *n - i__ - ib + 1; - dlarfb_("Left", "Transpose", "Forward", "Columnwise", &i__3, & - i__4, &ib, &a[i__ + 1 + i__ * a_dim1], lda, t, &c__65, &a[ - i__ + 1 + (i__ + ib) * a_dim1], lda, &work[1], &ldwork); -/* L30: */ - } - } - -/* Use unblocked code to reduce the rest of the matrix */ - - dgehd2_(n, &i__, ihi, &a[a_offset], lda, &tau[1], &work[1], &iinfo); - work[1] = (doublereal) iws; - - return 0; - -/* End of DGEHRD */ - -} /* dgehrd_ */ - -/* Subroutine */ int dgelq2_(integer *m, integer *n, doublereal *a, integer * - lda, doublereal *tau, doublereal *work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__, k; - static doublereal aii; - extern /* Subroutine */ int dlarf_(char *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *, - doublereal *), dlarfg_(integer *, doublereal *, - doublereal *, integer *, doublereal *), xerbla_(char *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - DGELQ2 computes an LQ factorization of a real m by n matrix A: - A = L * Q. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the m by n matrix A. 
- On exit, the elements on and below the diagonal of the array - contain the m by min(m,n) lower trapezoidal matrix L (L is - lower triangular if m <= n); the elements above the diagonal, - with the array TAU, represent the orthogonal matrix Q as a - product of elementary reflectors (see Further Details). - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - TAU (output) DOUBLE PRECISION array, dimension (min(M,N)) - The scalar factors of the elementary reflectors (see Further - Details). - - WORK (workspace) DOUBLE PRECISION array, dimension (M) - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - Further Details - =============== - - The matrix Q is represented as a product of elementary reflectors - - Q = H(k) . . . H(2) H(1), where k = min(m,n). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a real scalar, and v is a real vector with - v(1:i-1) = 0 and v(i) = 1; v(i+1:n) is stored on exit in A(i,i+1:n), - and tau in TAU(i). 
- - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DGELQ2", &i__1); - return 0; - } - - k = min(*m,*n); - - i__1 = k; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Generate elementary reflector H(i) to annihilate A(i,i+1:n) */ - - i__2 = *n - i__ + 1; -/* Computing MIN */ - i__3 = i__ + 1; - dlarfg_(&i__2, &a[i__ + i__ * a_dim1], &a[i__ + min(i__3,*n) * a_dim1] - , lda, &tau[i__]); - if (i__ < *m) { - -/* Apply H(i) to A(i+1:m,i:n) from the right */ - - aii = a[i__ + i__ * a_dim1]; - a[i__ + i__ * a_dim1] = 1.; - i__2 = *m - i__; - i__3 = *n - i__ + 1; - dlarf_("Right", &i__2, &i__3, &a[i__ + i__ * a_dim1], lda, &tau[ - i__], &a[i__ + 1 + i__ * a_dim1], lda, &work[1]); - a[i__ + i__ * a_dim1] = aii; - } -/* L10: */ - } - return 0; - -/* End of DGELQ2 */ - -} /* dgelq2_ */ - -/* Subroutine */ int dgelqf_(integer *m, integer *n, doublereal *a, integer * - lda, doublereal *tau, doublereal *work, integer *lwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - - /* Local variables */ - static integer i__, k, ib, nb, nx, iws, nbmin, iinfo; - extern /* Subroutine */ int dgelq2_(integer *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *), dlarfb_(char *, - char *, char *, char *, integer *, integer *, integer *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - integer *, doublereal *, integer *), dlarft_(char *, char *, integer *, integer *, doublereal - *, integer *, doublereal *, doublereal *, integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer 
*, - integer *, integer *, ftnlen, ftnlen); - static integer ldwork, lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DGELQF computes an LQ factorization of a real M-by-N matrix A: - A = L * Q. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the M-by-N matrix A. - On exit, the elements on and below the diagonal of the array - contain the m-by-min(m,n) lower trapezoidal matrix L (L is - lower triangular if m <= n); the elements above the diagonal, - with the array TAU, represent the orthogonal matrix Q as a - product of elementary reflectors (see Further Details). - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - TAU (output) DOUBLE PRECISION array, dimension (min(M,N)) - The scalar factors of the elementary reflectors (see Further - Details). - - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= max(1,M). - For optimum performance LWORK >= M*NB, where NB is the - optimal blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - Further Details - =============== - - The matrix Q is represented as a product of elementary reflectors - - Q = H(k) . . . H(2) H(1), where k = min(m,n). 
- - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a real scalar, and v is a real vector with - v(1:i-1) = 0 and v(i) = 1; v(i+1:n) is stored on exit in A(i,i+1:n), - and tau in TAU(i). - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - nb = ilaenv_(&c__1, "DGELQF", " ", m, n, &c_n1, &c_n1, (ftnlen)6, (ftnlen) - 1); - lwkopt = *m * nb; - work[1] = (doublereal) lwkopt; - lquery = *lwork == -1; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } else if ((*lwork < max(1,*m) && ! lquery)) { - *info = -7; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DGELQF", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - k = min(*m,*n); - if (k == 0) { - work[1] = 1.; - return 0; - } - - nbmin = 2; - nx = 0; - iws = *m; - if ((nb > 1 && nb < k)) { - -/* - Determine when to cross over from blocked to unblocked code. - - Computing MAX -*/ - i__1 = 0, i__2 = ilaenv_(&c__3, "DGELQF", " ", m, n, &c_n1, &c_n1, ( - ftnlen)6, (ftnlen)1); - nx = max(i__1,i__2); - if (nx < k) { - -/* Determine if workspace is large enough for blocked code. */ - - ldwork = *m; - iws = ldwork * nb; - if (*lwork < iws) { - -/* - Not enough workspace to use optimal NB: reduce NB and - determine the minimum value of NB. -*/ - - nb = *lwork / ldwork; -/* Computing MAX */ - i__1 = 2, i__2 = ilaenv_(&c__2, "DGELQF", " ", m, n, &c_n1, & - c_n1, (ftnlen)6, (ftnlen)1); - nbmin = max(i__1,i__2); - } - } - } - - if (((nb >= nbmin && nb < k) && nx < k)) { - -/* Use blocked code initially */ - - i__1 = k - nx; - i__2 = nb; - for (i__ = 1; i__2 < 0 ? 
i__ >= i__1 : i__ <= i__1; i__ += i__2) { -/* Computing MIN */ - i__3 = k - i__ + 1; - ib = min(i__3,nb); - -/* - Compute the LQ factorization of the current block - A(i:i+ib-1,i:n) -*/ - - i__3 = *n - i__ + 1; - dgelq2_(&ib, &i__3, &a[i__ + i__ * a_dim1], lda, &tau[i__], &work[ - 1], &iinfo); - if (i__ + ib <= *m) { - -/* - Form the triangular factor of the block reflector - H = H(i) H(i+1) . . . H(i+ib-1) -*/ - - i__3 = *n - i__ + 1; - dlarft_("Forward", "Rowwise", &i__3, &ib, &a[i__ + i__ * - a_dim1], lda, &tau[i__], &work[1], &ldwork); - -/* Apply H to A(i+ib:m,i:n) from the right */ - - i__3 = *m - i__ - ib + 1; - i__4 = *n - i__ + 1; - dlarfb_("Right", "No transpose", "Forward", "Rowwise", &i__3, - &i__4, &ib, &a[i__ + i__ * a_dim1], lda, &work[1], & - ldwork, &a[i__ + ib + i__ * a_dim1], lda, &work[ib + - 1], &ldwork); - } -/* L10: */ - } - } else { - i__ = 1; - } - -/* Use unblocked code to factor the last or only block. */ - - if (i__ <= k) { - i__2 = *m - i__ + 1; - i__1 = *n - i__ + 1; - dgelq2_(&i__2, &i__1, &a[i__ + i__ * a_dim1], lda, &tau[i__], &work[1] - , &iinfo); - } - - work[1] = (doublereal) iws; - return 0; - -/* End of DGELQF */ - -} /* dgelqf_ */ - -/* Subroutine */ int dgelsd_(integer *m, integer *n, integer *nrhs, - doublereal *a, integer *lda, doublereal *b, integer *ldb, doublereal * - s, doublereal *rcond, integer *rank, doublereal *work, integer *lwork, - integer *iwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, i__1, i__2, i__3, i__4; - - /* Builtin functions */ - double log(doublereal); - - /* Local variables */ - static integer ie, il, mm; - static doublereal eps, anrm, bnrm; - static integer itau, nlvl, iascl, ibscl; - static doublereal sfmin; - static integer minmn, maxmn, itaup, itauq, mnthr, nwork; - extern /* Subroutine */ int dlabad_(doublereal *, doublereal *), dgebrd_( - integer *, integer *, doublereal *, integer *, doublereal *, - doublereal *, doublereal *, doublereal 
*, doublereal *, integer *, - integer *); - extern doublereal dlamch_(char *), dlange_(char *, integer *, - integer *, doublereal *, integer *, doublereal *); - extern /* Subroutine */ int dgelqf_(integer *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *, integer *), - dlalsd_(char *, integer *, integer *, integer *, doublereal *, - doublereal *, doublereal *, integer *, doublereal *, integer *, - doublereal *, integer *, integer *), dlascl_(char *, - integer *, integer *, doublereal *, doublereal *, integer *, - integer *, doublereal *, integer *, integer *), dgeqrf_( - integer *, integer *, doublereal *, integer *, doublereal *, - doublereal *, integer *, integer *), dlacpy_(char *, integer *, - integer *, doublereal *, integer *, doublereal *, integer *), dlaset_(char *, integer *, integer *, doublereal *, - doublereal *, doublereal *, integer *), xerbla_(char *, - integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static doublereal bignum; - extern /* Subroutine */ int dormbr_(char *, char *, char *, integer *, - integer *, integer *, doublereal *, integer *, doublereal *, - doublereal *, integer *, doublereal *, integer *, integer *); - static integer wlalsd; - extern /* Subroutine */ int dormlq_(char *, char *, integer *, integer *, - integer *, doublereal *, integer *, doublereal *, doublereal *, - integer *, doublereal *, integer *, integer *); - static integer ldwork; - extern /* Subroutine */ int dormqr_(char *, char *, integer *, integer *, - integer *, doublereal *, integer *, doublereal *, doublereal *, - integer *, doublereal *, integer *, integer *); - static integer minwrk, maxwrk; - static doublereal smlnum; - static logical lquery; - static integer smlsiz; - - -/* - -- LAPACK driver routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1999 - - - Purpose - ======= - - DGELSD computes the minimum-norm solution to a real linear least - squares problem: - minimize 2-norm(| b - A*x |) - using the singular value decomposition (SVD) of A. A is an M-by-N - matrix which may be rank-deficient. - - Several right hand side vectors b and solution vectors x can be - handled in a single call; they are stored as the columns of the - M-by-NRHS right hand side matrix B and the N-by-NRHS solution - matrix X. - - The problem is solved in three steps: - (1) Reduce the coefficient matrix A to bidiagonal form with - Householder transformations, reducing the original problem - into a "bidiagonal least squares problem" (BLS) - (2) Solve the BLS using a divide and conquer approach. - (3) Apply back all the Householder transformations to solve - the original least squares problem. - - The effective rank of A is determined by treating as zero those - singular values which are less than RCOND times the largest singular - value. - - The divide and conquer algorithm makes very mild assumptions about - floating point arithmetic. It will work on machines with a guard - digit in add/subtract, or on those binary machines without guard - digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or - Cray-2. It could conceivably fail on hexadecimal or decimal machines - without guard digits, but we know of none. - - Arguments - ========= - - M (input) INTEGER - The number of rows of A. M >= 0. - - N (input) INTEGER - The number of columns of A. N >= 0. - - NRHS (input) INTEGER - The number of right hand sides, i.e., the number of columns - of the matrices B and X. NRHS >= 0. - - A (input) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the M-by-N matrix A. - On exit, A has been destroyed. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). 
- - B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS) - On entry, the M-by-NRHS right hand side matrix B. - On exit, B is overwritten by the N-by-NRHS solution - matrix X. If m >= n and RANK = n, the residual - sum-of-squares for the solution in the i-th column is given - by the sum of squares of elements n+1:m in that column. - - LDB (input) INTEGER - The leading dimension of the array B. LDB >= max(1,max(M,N)). - - S (output) DOUBLE PRECISION array, dimension (min(M,N)) - The singular values of A in decreasing order. - The condition number of A in the 2-norm = S(1)/S(min(m,n)). - - RCOND (input) DOUBLE PRECISION - RCOND is used to determine the effective rank of A. - Singular values S(i) <= RCOND*S(1) are treated as zero. - If RCOND < 0, machine precision is used instead. - - RANK (output) INTEGER - The effective rank of A, i.e., the number of singular values - which are greater than RCOND*S(1). - - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK must be at least 1. - The exact minimum amount of workspace needed depends on M, - N and NRHS. As long as LWORK is at least - 12*N + 2*N*SMLSIZ + 8*N*NLVL + N*NRHS + (SMLSIZ+1)**2, - if M is greater than or equal to N or - 12*M + 2*M*SMLSIZ + 8*M*NLVL + M*NRHS + (SMLSIZ+1)**2, - if M is less than N, the code will execute correctly. - SMLSIZ is returned by ILAENV and is equal to the maximum - size of the subproblems at the bottom of the computation - tree (usually about 25), and - NLVL = MAX( 0, INT( LOG_2( MIN( M,N )/(SMLSIZ+1) ) ) + 1 ) - For good performance, LWORK should generally be larger. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. 
- - IWORK (workspace) INTEGER array, dimension (LIWORK) - LIWORK >= 3 * MINMN * NLVL + 11 * MINMN, - where MINMN = MIN( M,N ). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: the algorithm for computing the SVD failed to converge; - if INFO = i, i off-diagonal elements of an intermediate - bidiagonal form did not converge to zero. - - Further Details - =============== - - Based on contributions by - Ming Gu and Ren-Cang Li, Computer Science Division, University of - California at Berkeley, USA - Osni Marques, LBNL/NERSC, USA - - ===================================================================== - - - Test the input arguments. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - --s; - --work; - --iwork; - - /* Function Body */ - *info = 0; - minmn = min(*m,*n); - maxmn = max(*m,*n); - mnthr = ilaenv_(&c__6, "DGELSD", " ", m, n, nrhs, &c_n1, (ftnlen)6, ( - ftnlen)1); - lquery = *lwork == -1; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*nrhs < 0) { - *info = -3; - } else if (*lda < max(1,*m)) { - *info = -5; - } else if (*ldb < max(1,maxmn)) { - *info = -7; - } - - smlsiz = ilaenv_(&c__9, "DGELSD", " ", &c__0, &c__0, &c__0, &c__0, ( - ftnlen)6, (ftnlen)1); - -/* - Compute workspace. - (Note: Comments in the code beginning "Workspace:" describe the - minimal amount of workspace needed at that point in the code, - as well as the preferred amount for good performance. - NB refers to the optimal block size for the immediately - following subroutine, as returned by ILAENV.) 
-*/ - - minwrk = 1; - minmn = max(1,minmn); -/* Computing MAX */ - i__1 = (integer) (log((doublereal) minmn / (doublereal) (smlsiz + 1)) / - log(2.)) + 1; - nlvl = max(i__1,0); - - if (*info == 0) { - maxwrk = 0; - mm = *m; - if ((*m >= *n && *m >= mnthr)) { - -/* Path 1a - overdetermined, with many more rows than columns. */ - - mm = *n; -/* Computing MAX */ - i__1 = maxwrk, i__2 = *n + *n * ilaenv_(&c__1, "DGEQRF", " ", m, - n, &c_n1, &c_n1, (ftnlen)6, (ftnlen)1); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *n + *nrhs * ilaenv_(&c__1, "DORMQR", "LT", - m, nrhs, n, &c_n1, (ftnlen)6, (ftnlen)2); - maxwrk = max(i__1,i__2); - } - if (*m >= *n) { - -/* - Path 1 - overdetermined or exactly determined. - - Computing MAX -*/ - i__1 = maxwrk, i__2 = *n * 3 + (mm + *n) * ilaenv_(&c__1, "DGEBRD" - , " ", &mm, n, &c_n1, &c_n1, (ftnlen)6, (ftnlen)1); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *n * 3 + *nrhs * ilaenv_(&c__1, "DORMBR", - "QLT", &mm, nrhs, n, &c_n1, (ftnlen)6, (ftnlen)3); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *n * 3 + (*n - 1) * ilaenv_(&c__1, "DORMBR", - "PLN", n, nrhs, n, &c_n1, (ftnlen)6, (ftnlen)3); - maxwrk = max(i__1,i__2); -/* Computing 2nd power */ - i__1 = smlsiz + 1; - wlalsd = *n * 9 + ((*n) << (1)) * smlsiz + ((*n) << (3)) * nlvl + - *n * *nrhs + i__1 * i__1; -/* Computing MAX */ - i__1 = maxwrk, i__2 = *n * 3 + wlalsd; - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = *n * 3 + mm, i__2 = *n * 3 + *nrhs, i__1 = max(i__1,i__2), - i__2 = *n * 3 + wlalsd; - minwrk = max(i__1,i__2); - } - if (*n > *m) { -/* Computing 2nd power */ - i__1 = smlsiz + 1; - wlalsd = *m * 9 + ((*m) << (1)) * smlsiz + ((*m) << (3)) * nlvl + - *m * *nrhs + i__1 * i__1; - if (*n >= mnthr) { - -/* - Path 2a - underdetermined, with many more columns - than rows. 
-*/ - - maxwrk = *m + *m * ilaenv_(&c__1, "DGELQF", " ", m, n, &c_n1, - &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m * *m + ((*m) << (2)) + ((*m) << (1)) - * ilaenv_(&c__1, "DGEBRD", " ", m, m, &c_n1, &c_n1, ( - ftnlen)6, (ftnlen)1); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m * *m + ((*m) << (2)) + *nrhs * - ilaenv_(&c__1, "DORMBR", "QLT", m, nrhs, m, &c_n1, ( - ftnlen)6, (ftnlen)3); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m * *m + ((*m) << (2)) + (*m - 1) * - ilaenv_(&c__1, "DORMBR", "PLN", m, nrhs, m, &c_n1, ( - ftnlen)6, (ftnlen)3); - maxwrk = max(i__1,i__2); - if (*nrhs > 1) { -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m * *m + *m + *m * *nrhs; - maxwrk = max(i__1,i__2); - } else { -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m * *m + ((*m) << (1)); - maxwrk = max(i__1,i__2); - } -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m + *nrhs * ilaenv_(&c__1, "DORMLQ", - "LT", n, nrhs, m, &c_n1, (ftnlen)6, (ftnlen)2); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m * *m + ((*m) << (2)) + wlalsd; - maxwrk = max(i__1,i__2); - } else { - -/* Path 2 - remaining underdetermined cases. */ - - maxwrk = *m * 3 + (*n + *m) * ilaenv_(&c__1, "DGEBRD", " ", m, - n, &c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m * 3 + *nrhs * ilaenv_(&c__1, "DORMBR" - , "QLT", m, nrhs, n, &c_n1, (ftnlen)6, (ftnlen)3); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m * 3 + *m * ilaenv_(&c__1, "DORMBR", - "PLN", n, nrhs, m, &c_n1, (ftnlen)6, (ftnlen)3); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m * 3 + wlalsd; - maxwrk = max(i__1,i__2); - } -/* Computing MAX */ - i__1 = *m * 3 + *nrhs, i__2 = *m * 3 + *m, i__1 = max(i__1,i__2), - i__2 = *m * 3 + wlalsd; - minwrk = max(i__1,i__2); - } - minwrk = min(minwrk,maxwrk); - work[1] = (doublereal) maxwrk; - if ((*lwork < minwrk && ! 
lquery)) { - *info = -12; - } - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("DGELSD", &i__1); - return 0; - } else if (lquery) { - goto L10; - } - -/* Quick return if possible. */ - - if (*m == 0 || *n == 0) { - *rank = 0; - return 0; - } - -/* Get machine parameters. */ - - eps = PRECISION; - sfmin = SAFEMINIMUM; - smlnum = sfmin / eps; - bignum = 1. / smlnum; - dlabad_(&smlnum, &bignum); - -/* Scale A if max entry outside range [SMLNUM,BIGNUM]. */ - - anrm = dlange_("M", m, n, &a[a_offset], lda, &work[1]); - iascl = 0; - if ((anrm > 0. && anrm < smlnum)) { - -/* Scale matrix norm up to SMLNUM. */ - - dlascl_("G", &c__0, &c__0, &anrm, &smlnum, m, n, &a[a_offset], lda, - info); - iascl = 1; - } else if (anrm > bignum) { - -/* Scale matrix norm down to BIGNUM. */ - - dlascl_("G", &c__0, &c__0, &anrm, &bignum, m, n, &a[a_offset], lda, - info); - iascl = 2; - } else if (anrm == 0.) { - -/* Matrix all zero. Return zero solution. */ - - i__1 = max(*m,*n); - dlaset_("F", &i__1, nrhs, &c_b29, &c_b29, &b[b_offset], ldb); - dlaset_("F", &minmn, &c__1, &c_b29, &c_b29, &s[1], &c__1); - *rank = 0; - goto L10; - } - -/* Scale B if max entry outside range [SMLNUM,BIGNUM]. */ - - bnrm = dlange_("M", m, nrhs, &b[b_offset], ldb, &work[1]); - ibscl = 0; - if ((bnrm > 0. && bnrm < smlnum)) { - -/* Scale matrix norm up to SMLNUM. */ - - dlascl_("G", &c__0, &c__0, &bnrm, &smlnum, m, nrhs, &b[b_offset], ldb, - info); - ibscl = 1; - } else if (bnrm > bignum) { - -/* Scale matrix norm down to BIGNUM. */ - - dlascl_("G", &c__0, &c__0, &bnrm, &bignum, m, nrhs, &b[b_offset], ldb, - info); - ibscl = 2; - } - -/* If M < N make sure certain entries of B are zero. */ - - if (*m < *n) { - i__1 = *n - *m; - dlaset_("F", &i__1, nrhs, &c_b29, &c_b29, &b[*m + 1 + b_dim1], ldb); - } - -/* Overdetermined case. */ - - if (*m >= *n) { - -/* Path 1 - overdetermined or exactly determined. */ - - mm = *m; - if (*m >= mnthr) { - -/* Path 1a - overdetermined, with many more rows than columns. 
*/ - - mm = *n; - itau = 1; - nwork = itau + *n; - -/* - Compute A=Q*R. - (Workspace: need 2*N, prefer N+N*NB) -*/ - - i__1 = *lwork - nwork + 1; - dgeqrf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], &i__1, - info); - -/* - Multiply B by transpose(Q). - (Workspace: need N+NRHS, prefer N+NRHS*NB) -*/ - - i__1 = *lwork - nwork + 1; - dormqr_("L", "T", m, nrhs, n, &a[a_offset], lda, &work[itau], &b[ - b_offset], ldb, &work[nwork], &i__1, info); - -/* Zero out below R. */ - - if (*n > 1) { - i__1 = *n - 1; - i__2 = *n - 1; - dlaset_("L", &i__1, &i__2, &c_b29, &c_b29, &a[a_dim1 + 2], - lda); - } - } - - ie = 1; - itauq = ie + *n; - itaup = itauq + *n; - nwork = itaup + *n; - -/* - Bidiagonalize R in A. - (Workspace: need 3*N+MM, prefer 3*N+(MM+N)*NB) -*/ - - i__1 = *lwork - nwork + 1; - dgebrd_(&mm, n, &a[a_offset], lda, &s[1], &work[ie], &work[itauq], & - work[itaup], &work[nwork], &i__1, info); - -/* - Multiply B by transpose of left bidiagonalizing vectors of R. - (Workspace: need 3*N+NRHS, prefer 3*N+NRHS*NB) -*/ - - i__1 = *lwork - nwork + 1; - dormbr_("Q", "L", "T", &mm, nrhs, n, &a[a_offset], lda, &work[itauq], - &b[b_offset], ldb, &work[nwork], &i__1, info); - -/* Solve the bidiagonal least squares problem. */ - - dlalsd_("U", &smlsiz, n, nrhs, &s[1], &work[ie], &b[b_offset], ldb, - rcond, rank, &work[nwork], &iwork[1], info); - if (*info != 0) { - goto L10; - } - -/* Multiply B by right bidiagonalizing vectors of R. */ - - i__1 = *lwork - nwork + 1; - dormbr_("P", "L", "N", n, nrhs, n, &a[a_offset], lda, &work[itaup], & - b[b_offset], ldb, &work[nwork], &i__1, info); - - } else /* if(complicated condition) */ { -/* Computing MAX */ - i__1 = *m, i__2 = ((*m) << (1)) - 4, i__1 = max(i__1,i__2), i__1 = - max(i__1,*nrhs), i__2 = *n - *m * 3; - if ((*n >= mnthr && *lwork >= ((*m) << (2)) + *m * *m + max(i__1,i__2) - )) { - -/* - Path 2a - underdetermined, with many more columns than rows - and sufficient workspace for an efficient algorithm. 
-*/ - - ldwork = *m; -/* - Computing MAX - Computing MAX -*/ - i__3 = *m, i__4 = ((*m) << (1)) - 4, i__3 = max(i__3,i__4), i__3 = - max(i__3,*nrhs), i__4 = *n - *m * 3; - i__1 = ((*m) << (2)) + *m * *lda + max(i__3,i__4), i__2 = *m * * - lda + *m + *m * *nrhs; - if (*lwork >= max(i__1,i__2)) { - ldwork = *lda; - } - itau = 1; - nwork = *m + 1; - -/* - Compute A=L*Q. - (Workspace: need 2*M, prefer M+M*NB) -*/ - - i__1 = *lwork - nwork + 1; - dgelqf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], &i__1, - info); - il = nwork; - -/* Copy L to WORK(IL), zeroing out above its diagonal. */ - - dlacpy_("L", m, m, &a[a_offset], lda, &work[il], &ldwork); - i__1 = *m - 1; - i__2 = *m - 1; - dlaset_("U", &i__1, &i__2, &c_b29, &c_b29, &work[il + ldwork], & - ldwork); - ie = il + ldwork * *m; - itauq = ie + *m; - itaup = itauq + *m; - nwork = itaup + *m; - -/* - Bidiagonalize L in WORK(IL). - (Workspace: need M*M+5*M, prefer M*M+4*M+2*M*NB) -*/ - - i__1 = *lwork - nwork + 1; - dgebrd_(m, m, &work[il], &ldwork, &s[1], &work[ie], &work[itauq], - &work[itaup], &work[nwork], &i__1, info); - -/* - Multiply B by transpose of left bidiagonalizing vectors of L. - (Workspace: need M*M+4*M+NRHS, prefer M*M+4*M+NRHS*NB) -*/ - - i__1 = *lwork - nwork + 1; - dormbr_("Q", "L", "T", m, nrhs, m, &work[il], &ldwork, &work[ - itauq], &b[b_offset], ldb, &work[nwork], &i__1, info); - -/* Solve the bidiagonal least squares problem. */ - - dlalsd_("U", &smlsiz, m, nrhs, &s[1], &work[ie], &b[b_offset], - ldb, rcond, rank, &work[nwork], &iwork[1], info); - if (*info != 0) { - goto L10; - } - -/* Multiply B by right bidiagonalizing vectors of L. */ - - i__1 = *lwork - nwork + 1; - dormbr_("P", "L", "N", m, nrhs, m, &work[il], &ldwork, &work[ - itaup], &b[b_offset], ldb, &work[nwork], &i__1, info); - -/* Zero out below first M rows of B. */ - - i__1 = *n - *m; - dlaset_("F", &i__1, nrhs, &c_b29, &c_b29, &b[*m + 1 + b_dim1], - ldb); - nwork = itau + *m; - -/* - Multiply transpose(Q) by B. 
- (Workspace: need M+NRHS, prefer M+NRHS*NB) -*/ - - i__1 = *lwork - nwork + 1; - dormlq_("L", "T", n, nrhs, m, &a[a_offset], lda, &work[itau], &b[ - b_offset], ldb, &work[nwork], &i__1, info); - - } else { - -/* Path 2 - remaining underdetermined cases. */ - - ie = 1; - itauq = ie + *m; - itaup = itauq + *m; - nwork = itaup + *m; - -/* - Bidiagonalize A. - (Workspace: need 3*M+N, prefer 3*M+(M+N)*NB) -*/ - - i__1 = *lwork - nwork + 1; - dgebrd_(m, n, &a[a_offset], lda, &s[1], &work[ie], &work[itauq], & - work[itaup], &work[nwork], &i__1, info); - -/* - Multiply B by transpose of left bidiagonalizing vectors. - (Workspace: need 3*M+NRHS, prefer 3*M+NRHS*NB) -*/ - - i__1 = *lwork - nwork + 1; - dormbr_("Q", "L", "T", m, nrhs, n, &a[a_offset], lda, &work[itauq] - , &b[b_offset], ldb, &work[nwork], &i__1, info); - -/* Solve the bidiagonal least squares problem. */ - - dlalsd_("L", &smlsiz, m, nrhs, &s[1], &work[ie], &b[b_offset], - ldb, rcond, rank, &work[nwork], &iwork[1], info); - if (*info != 0) { - goto L10; - } - -/* Multiply B by right bidiagonalizing vectors of A. */ - - i__1 = *lwork - nwork + 1; - dormbr_("P", "L", "N", n, nrhs, m, &a[a_offset], lda, &work[itaup] - , &b[b_offset], ldb, &work[nwork], &i__1, info); - - } - } - -/* Undo scaling. 
*/ - - if (iascl == 1) { - dlascl_("G", &c__0, &c__0, &anrm, &smlnum, n, nrhs, &b[b_offset], ldb, - info); - dlascl_("G", &c__0, &c__0, &smlnum, &anrm, &minmn, &c__1, &s[1], & - minmn, info); - } else if (iascl == 2) { - dlascl_("G", &c__0, &c__0, &anrm, &bignum, n, nrhs, &b[b_offset], ldb, - info); - dlascl_("G", &c__0, &c__0, &bignum, &anrm, &minmn, &c__1, &s[1], & - minmn, info); - } - if (ibscl == 1) { - dlascl_("G", &c__0, &c__0, &smlnum, &bnrm, n, nrhs, &b[b_offset], ldb, - info); - } else if (ibscl == 2) { - dlascl_("G", &c__0, &c__0, &bignum, &bnrm, n, nrhs, &b[b_offset], ldb, - info); - } - -L10: - work[1] = (doublereal) maxwrk; - return 0; - -/* End of DGELSD */ - -} /* dgelsd_ */ - -/* Subroutine */ int dgeqr2_(integer *m, integer *n, doublereal *a, integer * - lda, doublereal *tau, doublereal *work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__, k; - static doublereal aii; - extern /* Subroutine */ int dlarf_(char *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *, - doublereal *), dlarfg_(integer *, doublereal *, - doublereal *, integer *, doublereal *), xerbla_(char *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - DGEQR2 computes a QR factorization of a real m by n matrix A: - A = Q * R. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the m by n matrix A. 
- On exit, the elements on and above the diagonal of the array - contain the min(m,n) by n upper trapezoidal matrix R (R is - upper triangular if m >= n); the elements below the diagonal, - with the array TAU, represent the orthogonal matrix Q as a - product of elementary reflectors (see Further Details). - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - TAU (output) DOUBLE PRECISION array, dimension (min(M,N)) - The scalar factors of the elementary reflectors (see Further - Details). - - WORK (workspace) DOUBLE PRECISION array, dimension (N) - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - Further Details - =============== - - The matrix Q is represented as a product of elementary reflectors - - Q = H(1) H(2) . . . H(k), where k = min(m,n). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a real scalar, and v is a real vector with - v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(i+1:m,i), - and tau in TAU(i). 
- - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DGEQR2", &i__1); - return 0; - } - - k = min(*m,*n); - - i__1 = k; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Generate elementary reflector H(i) to annihilate A(i+1:m,i) */ - - i__2 = *m - i__ + 1; -/* Computing MIN */ - i__3 = i__ + 1; - dlarfg_(&i__2, &a[i__ + i__ * a_dim1], &a[min(i__3,*m) + i__ * a_dim1] - , &c__1, &tau[i__]); - if (i__ < *n) { - -/* Apply H(i) to A(i:m,i+1:n) from the left */ - - aii = a[i__ + i__ * a_dim1]; - a[i__ + i__ * a_dim1] = 1.; - i__2 = *m - i__ + 1; - i__3 = *n - i__; - dlarf_("Left", &i__2, &i__3, &a[i__ + i__ * a_dim1], &c__1, &tau[ - i__], &a[i__ + (i__ + 1) * a_dim1], lda, &work[1]); - a[i__ + i__ * a_dim1] = aii; - } -/* L10: */ - } - return 0; - -/* End of DGEQR2 */ - -} /* dgeqr2_ */ - -/* Subroutine */ int dgeqrf_(integer *m, integer *n, doublereal *a, integer * - lda, doublereal *tau, doublereal *work, integer *lwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - - /* Local variables */ - static integer i__, k, ib, nb, nx, iws, nbmin, iinfo; - extern /* Subroutine */ int dgeqr2_(integer *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *), dlarfb_(char *, - char *, char *, char *, integer *, integer *, integer *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - integer *, doublereal *, integer *), dlarft_(char *, char *, integer *, integer *, doublereal - *, integer *, doublereal *, doublereal *, integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, 
integer *, - integer *, integer *, ftnlen, ftnlen); - static integer ldwork, lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DGEQRF computes a QR factorization of a real M-by-N matrix A: - A = Q * R. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the M-by-N matrix A. - On exit, the elements on and above the diagonal of the array - contain the min(M,N)-by-N upper trapezoidal matrix R (R is - upper triangular if m >= n); the elements below the diagonal, - with the array TAU, represent the orthogonal matrix Q as a - product of min(m,n) elementary reflectors (see Further - Details). - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - TAU (output) DOUBLE PRECISION array, dimension (min(M,N)) - The scalar factors of the elementary reflectors (see Further - Details). - - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= max(1,N). - For optimum performance LWORK >= N*NB, where NB is - the optimal blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - Further Details - =============== - - The matrix Q is represented as a product of elementary reflectors - - Q = H(1) H(2) . . . H(k), where k = min(m,n). 
- - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a real scalar, and v is a real vector with - v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(i+1:m,i), - and tau in TAU(i). - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - nb = ilaenv_(&c__1, "DGEQRF", " ", m, n, &c_n1, &c_n1, (ftnlen)6, (ftnlen) - 1); - lwkopt = *n * nb; - work[1] = (doublereal) lwkopt; - lquery = *lwork == -1; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } else if ((*lwork < max(1,*n) && ! lquery)) { - *info = -7; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DGEQRF", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - k = min(*m,*n); - if (k == 0) { - work[1] = 1.; - return 0; - } - - nbmin = 2; - nx = 0; - iws = *n; - if ((nb > 1 && nb < k)) { - -/* - Determine when to cross over from blocked to unblocked code. - - Computing MAX -*/ - i__1 = 0, i__2 = ilaenv_(&c__3, "DGEQRF", " ", m, n, &c_n1, &c_n1, ( - ftnlen)6, (ftnlen)1); - nx = max(i__1,i__2); - if (nx < k) { - -/* Determine if workspace is large enough for blocked code. */ - - ldwork = *n; - iws = ldwork * nb; - if (*lwork < iws) { - -/* - Not enough workspace to use optimal NB: reduce NB and - determine the minimum value of NB. -*/ - - nb = *lwork / ldwork; -/* Computing MAX */ - i__1 = 2, i__2 = ilaenv_(&c__2, "DGEQRF", " ", m, n, &c_n1, & - c_n1, (ftnlen)6, (ftnlen)1); - nbmin = max(i__1,i__2); - } - } - } - - if (((nb >= nbmin && nb < k) && nx < k)) { - -/* Use blocked code initially */ - - i__1 = k - nx; - i__2 = nb; - for (i__ = 1; i__2 < 0 ? 
i__ >= i__1 : i__ <= i__1; i__ += i__2) { -/* Computing MIN */ - i__3 = k - i__ + 1; - ib = min(i__3,nb); - -/* - Compute the QR factorization of the current block - A(i:m,i:i+ib-1) -*/ - - i__3 = *m - i__ + 1; - dgeqr2_(&i__3, &ib, &a[i__ + i__ * a_dim1], lda, &tau[i__], &work[ - 1], &iinfo); - if (i__ + ib <= *n) { - -/* - Form the triangular factor of the block reflector - H = H(i) H(i+1) . . . H(i+ib-1) -*/ - - i__3 = *m - i__ + 1; - dlarft_("Forward", "Columnwise", &i__3, &ib, &a[i__ + i__ * - a_dim1], lda, &tau[i__], &work[1], &ldwork); - -/* Apply H' to A(i:m,i+ib:n) from the left */ - - i__3 = *m - i__ + 1; - i__4 = *n - i__ - ib + 1; - dlarfb_("Left", "Transpose", "Forward", "Columnwise", &i__3, & - i__4, &ib, &a[i__ + i__ * a_dim1], lda, &work[1], & - ldwork, &a[i__ + (i__ + ib) * a_dim1], lda, &work[ib - + 1], &ldwork); - } -/* L10: */ - } - } else { - i__ = 1; - } - -/* Use unblocked code to factor the last or only block. */ - - if (i__ <= k) { - i__2 = *m - i__ + 1; - i__1 = *n - i__ + 1; - dgeqr2_(&i__2, &i__1, &a[i__ + i__ * a_dim1], lda, &tau[i__], &work[1] - , &iinfo); - } - - work[1] = (doublereal) iws; - return 0; - -/* End of DGEQRF */ - -} /* dgeqrf_ */ - -/* Subroutine */ int dgesdd_(char *jobz, integer *m, integer *n, doublereal * - a, integer *lda, doublereal *s, doublereal *u, integer *ldu, - doublereal *vt, integer *ldvt, doublereal *work, integer *lwork, - integer *iwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, u_dim1, u_offset, vt_dim1, vt_offset, i__1, - i__2, i__3; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static integer i__, ie, il, ir, iu, blk; - static doublereal dum[1], eps; - static integer ivt, iscl; - static doublereal anrm; - static integer idum[1], ierr, itau; - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer 
*); - extern logical lsame_(char *, char *); - static integer chunk, minmn, wrkbl, itaup, itauq, mnthr; - static logical wntqa; - static integer nwork; - static logical wntqn, wntqo, wntqs; - extern /* Subroutine */ int dbdsdc_(char *, char *, integer *, doublereal - *, doublereal *, doublereal *, integer *, doublereal *, integer *, - doublereal *, integer *, doublereal *, integer *, integer *), dgebrd_(integer *, integer *, doublereal *, - integer *, doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *, integer *, integer *); - extern doublereal dlamch_(char *), dlange_(char *, integer *, - integer *, doublereal *, integer *, doublereal *); - static integer bdspac; - extern /* Subroutine */ int dgelqf_(integer *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *, integer *), - dlascl_(char *, integer *, integer *, doublereal *, doublereal *, - integer *, integer *, doublereal *, integer *, integer *), - dgeqrf_(integer *, integer *, doublereal *, integer *, - doublereal *, doublereal *, integer *, integer *), dlacpy_(char *, - integer *, integer *, doublereal *, integer *, doublereal *, - integer *), dlaset_(char *, integer *, integer *, - doublereal *, doublereal *, doublereal *, integer *), - xerbla_(char *, integer *), dorgbr_(char *, integer *, - integer *, integer *, doublereal *, integer *, doublereal *, - doublereal *, integer *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static doublereal bignum; - extern /* Subroutine */ int dormbr_(char *, char *, char *, integer *, - integer *, integer *, doublereal *, integer *, doublereal *, - doublereal *, integer *, doublereal *, integer *, integer *), dorglq_(integer *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *, - integer *), dorgqr_(integer *, integer *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *, integer *); 
- static integer ldwrkl, ldwrkr, minwrk, ldwrku, maxwrk, ldwkvt; - static doublereal smlnum; - static logical wntqas, lquery; - - -/* - -- LAPACK driver routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1999 - - - Purpose - ======= - - DGESDD computes the singular value decomposition (SVD) of a real - M-by-N matrix A, optionally computing the left and right singular - vectors. If singular vectors are desired, it uses a - divide-and-conquer algorithm. - - The SVD is written - - A = U * SIGMA * transpose(V) - - where SIGMA is an M-by-N matrix which is zero except for its - min(m,n) diagonal elements, U is an M-by-M orthogonal matrix, and - V is an N-by-N orthogonal matrix. The diagonal elements of SIGMA - are the singular values of A; they are real and non-negative, and - are returned in descending order. The first min(m,n) columns of - U and V are the left and right singular vectors of A. - - Note that the routine returns VT = V**T, not V. - - The divide and conquer algorithm makes very mild assumptions about - floating point arithmetic. It will work on machines with a guard - digit in add/subtract, or on those binary machines without guard - digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or - Cray-2. It could conceivably fail on hexadecimal or decimal machines - without guard digits, but we know of none. 
- - Arguments - ========= - - JOBZ (input) CHARACTER*1 - Specifies options for computing all or part of the matrix U: - = 'A': all M columns of U and all N rows of V**T are - returned in the arrays U and VT; - = 'S': the first min(M,N) columns of U and the first - min(M,N) rows of V**T are returned in the arrays U - and VT; - = 'O': If M >= N, the first N columns of U are overwritten - on the array A and all rows of V**T are returned in - the array VT; - otherwise, all columns of U are returned in the - array U and the first M rows of V**T are overwritten - in the array VT; - = 'N': no columns of U or rows of V**T are computed. - - M (input) INTEGER - The number of rows of the input matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the input matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the M-by-N matrix A. - On exit, - if JOBZ = 'O', A is overwritten with the first N columns - of U (the left singular vectors, stored - columnwise) if M >= N; - A is overwritten with the first M rows - of V**T (the right singular vectors, stored - rowwise) otherwise. - if JOBZ .ne. 'O', the contents of A are destroyed. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - S (output) DOUBLE PRECISION array, dimension (min(M,N)) - The singular values of A, sorted so that S(i) >= S(i+1). - - U (output) DOUBLE PRECISION array, dimension (LDU,UCOL) - UCOL = M if JOBZ = 'A' or JOBZ = 'O' and M < N; - UCOL = min(M,N) if JOBZ = 'S'. - If JOBZ = 'A' or JOBZ = 'O' and M < N, U contains the M-by-M - orthogonal matrix U; - if JOBZ = 'S', U contains the first min(M,N) columns of U - (the left singular vectors, stored columnwise); - if JOBZ = 'O' and M >= N, or JOBZ = 'N', U is not referenced. - - LDU (input) INTEGER - The leading dimension of the array U. LDU >= 1; if - JOBZ = 'S' or 'A' or JOBZ = 'O' and M < N, LDU >= M. 
- - VT (output) DOUBLE PRECISION array, dimension (LDVT,N) - If JOBZ = 'A' or JOBZ = 'O' and M >= N, VT contains the - N-by-N orthogonal matrix V**T; - if JOBZ = 'S', VT contains the first min(M,N) rows of - V**T (the right singular vectors, stored rowwise); - if JOBZ = 'O' and M < N, or JOBZ = 'N', VT is not referenced. - - LDVT (input) INTEGER - The leading dimension of the array VT. LDVT >= 1; if - JOBZ = 'A' or JOBZ = 'O' and M >= N, LDVT >= N; - if JOBZ = 'S', LDVT >= min(M,N). - - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK; - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= 1. - If JOBZ = 'N', - LWORK >= 3*min(M,N) + max(max(M,N),6*min(M,N)). - If JOBZ = 'O', - LWORK >= 3*min(M,N)*min(M,N) + - max(max(M,N),5*min(M,N)*min(M,N)+4*min(M,N)). - If JOBZ = 'S' or 'A' - LWORK >= 3*min(M,N)*min(M,N) + - max(max(M,N),4*min(M,N)*min(M,N)+4*min(M,N)). - For good performance, LWORK should generally be larger. - If LWORK < 0 but other input arguments are legal, WORK(1) - returns the optimal LWORK. - - IWORK (workspace) INTEGER array, dimension (8*min(M,N)) - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: DBDSDC did not converge, updating process failed. - - Further Details - =============== - - Based on contributions by - Ming Gu and Huan Ren, Computer Science Division, University of - California at Berkeley, USA - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --s; - u_dim1 = *ldu; - u_offset = 1 + u_dim1 * 1; - u -= u_offset; - vt_dim1 = *ldvt; - vt_offset = 1 + vt_dim1 * 1; - vt -= vt_offset; - --work; - --iwork; - - /* Function Body */ - *info = 0; - minmn = min(*m,*n); - mnthr = (integer) (minmn * 11. 
/ 6.); - wntqa = lsame_(jobz, "A"); - wntqs = lsame_(jobz, "S"); - wntqas = wntqa || wntqs; - wntqo = lsame_(jobz, "O"); - wntqn = lsame_(jobz, "N"); - minwrk = 1; - maxwrk = 1; - lquery = *lwork == -1; - - if (! (wntqa || wntqs || wntqo || wntqn)) { - *info = -1; - } else if (*m < 0) { - *info = -2; - } else if (*n < 0) { - *info = -3; - } else if (*lda < max(1,*m)) { - *info = -5; - } else if (*ldu < 1 || (wntqas && *ldu < *m) || ((wntqo && *m < *n) && * - ldu < *m)) { - *info = -8; - } else if (*ldvt < 1 || (wntqa && *ldvt < *n) || (wntqs && *ldvt < minmn) - || ((wntqo && *m >= *n) && *ldvt < *n)) { - *info = -10; - } - -/* - Compute workspace - (Note: Comments in the code beginning "Workspace:" describe the - minimal amount of workspace needed at that point in the code, - as well as the preferred amount for good performance. - NB refers to the optimal block size for the immediately - following subroutine, as returned by ILAENV.) -*/ - - if (((*info == 0 && *m > 0) && *n > 0)) { - if (*m >= *n) { - -/* Compute space needed for DBDSDC */ - - if (wntqn) { - bdspac = *n * 7; - } else { - bdspac = *n * 3 * *n + ((*n) << (2)); - } - if (*m >= mnthr) { - if (wntqn) { - -/* Path 1 (M much larger than N, JOBZ='N') */ - - wrkbl = *n + *n * ilaenv_(&c__1, "DGEQRF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + ((*n) << (1)) * ilaenv_(& - c__1, "DGEBRD", " ", n, n, &c_n1, &c_n1, (ftnlen) - 6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = bdspac + *n; - maxwrk = max(i__1,i__2); - minwrk = bdspac + *n; - } else if (wntqo) { - -/* Path 2 (M much larger than N, JOBZ='O') */ - - wrkbl = *n + *n * ilaenv_(&c__1, "DGEQRF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n + *n * ilaenv_(&c__1, "DORGQR", - " ", m, n, n, &c_n1, (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + ((*n) << (1)) 
* ilaenv_(& - c__1, "DGEBRD", " ", n, n, &c_n1, &c_n1, (ftnlen) - 6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + *n * ilaenv_(&c__1, "DORMBR" - , "QLN", n, n, n, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + *n * ilaenv_(&c__1, "DORMBR" - , "PRT", n, n, n, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = bdspac + *n * 3; - wrkbl = max(i__1,i__2); - maxwrk = wrkbl + ((*n) << (1)) * *n; - minwrk = bdspac + ((*n) << (1)) * *n + *n * 3; - } else if (wntqs) { - -/* Path 3 (M much larger than N, JOBZ='S') */ - - wrkbl = *n + *n * ilaenv_(&c__1, "DGEQRF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n + *n * ilaenv_(&c__1, "DORGQR", - " ", m, n, n, &c_n1, (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + ((*n) << (1)) * ilaenv_(& - c__1, "DGEBRD", " ", n, n, &c_n1, &c_n1, (ftnlen) - 6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + *n * ilaenv_(&c__1, "DORMBR" - , "QLN", n, n, n, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + *n * ilaenv_(&c__1, "DORMBR" - , "PRT", n, n, n, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = bdspac + *n * 3; - wrkbl = max(i__1,i__2); - maxwrk = wrkbl + *n * *n; - minwrk = bdspac + *n * *n + *n * 3; - } else if (wntqa) { - -/* Path 4 (M much larger than N, JOBZ='A') */ - - wrkbl = *n + *n * ilaenv_(&c__1, "DGEQRF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n + *m * ilaenv_(&c__1, "DORGQR", - " ", m, m, n, &c_n1, (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + ((*n) << (1)) * ilaenv_(& - c__1, "DGEBRD", " ", n, n, &c_n1, 
&c_n1, (ftnlen) - 6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + *n * ilaenv_(&c__1, "DORMBR" - , "QLN", n, n, n, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + *n * ilaenv_(&c__1, "DORMBR" - , "PRT", n, n, n, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = bdspac + *n * 3; - wrkbl = max(i__1,i__2); - maxwrk = wrkbl + *n * *n; - minwrk = bdspac + *n * *n + *n * 3; - } - } else { - -/* Path 5 (M at least N, but not much larger) */ - - wrkbl = *n * 3 + (*m + *n) * ilaenv_(&c__1, "DGEBRD", " ", m, - n, &c_n1, &c_n1, (ftnlen)6, (ftnlen)1); - if (wntqn) { -/* Computing MAX */ - i__1 = wrkbl, i__2 = bdspac + *n * 3; - maxwrk = max(i__1,i__2); - minwrk = *n * 3 + max(*m,bdspac); - } else if (wntqo) { -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + *n * ilaenv_(&c__1, "DORMBR" - , "QLN", m, n, n, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + *n * ilaenv_(&c__1, "DORMBR" - , "PRT", n, n, n, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = bdspac + *n * 3; - wrkbl = max(i__1,i__2); - maxwrk = wrkbl + *m * *n; -/* Computing MAX */ - i__1 = *m, i__2 = *n * *n + bdspac; - minwrk = *n * 3 + max(i__1,i__2); - } else if (wntqs) { -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + *n * ilaenv_(&c__1, "DORMBR" - , "QLN", m, n, n, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + *n * ilaenv_(&c__1, "DORMBR" - , "PRT", n, n, n, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = bdspac + *n * 3; - maxwrk = max(i__1,i__2); - minwrk = *n * 3 + max(*m,bdspac); - } else if (wntqa) { -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + *m * ilaenv_(&c__1, "DORMBR" - , "QLN", m, m, n, &c_n1, (ftnlen)6, 
(ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n * 3 + *n * ilaenv_(&c__1, "DORMBR" - , "PRT", n, n, n, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = bdspac + *n * 3; - maxwrk = max(i__1,i__2); - minwrk = *n * 3 + max(*m,bdspac); - } - } - } else { - -/* Compute space needed for DBDSDC */ - - if (wntqn) { - bdspac = *m * 7; - } else { - bdspac = *m * 3 * *m + ((*m) << (2)); - } - if (*n >= mnthr) { - if (wntqn) { - -/* Path 1t (N much larger than M, JOBZ='N') */ - - wrkbl = *m + *m * ilaenv_(&c__1, "DGELQF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + ((*m) << (1)) * ilaenv_(& - c__1, "DGEBRD", " ", m, m, &c_n1, &c_n1, (ftnlen) - 6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = bdspac + *m; - maxwrk = max(i__1,i__2); - minwrk = bdspac + *m; - } else if (wntqo) { - -/* Path 2t (N much larger than M, JOBZ='O') */ - - wrkbl = *m + *m * ilaenv_(&c__1, "DGELQF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m + *m * ilaenv_(&c__1, "DORGLQ", - " ", m, n, m, &c_n1, (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + ((*m) << (1)) * ilaenv_(& - c__1, "DGEBRD", " ", m, m, &c_n1, &c_n1, (ftnlen) - 6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + *m * ilaenv_(&c__1, "DORMBR" - , "QLN", m, m, m, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + *m * ilaenv_(&c__1, "DORMBR" - , "PRT", m, m, m, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = bdspac + *m * 3; - wrkbl = max(i__1,i__2); - maxwrk = wrkbl + ((*m) << (1)) * *m; - minwrk = bdspac + ((*m) << (1)) * *m + *m * 3; - } else if (wntqs) { - -/* Path 3t (N much larger than M, JOBZ='S') */ - - 
wrkbl = *m + *m * ilaenv_(&c__1, "DGELQF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m + *m * ilaenv_(&c__1, "DORGLQ", - " ", m, n, m, &c_n1, (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + ((*m) << (1)) * ilaenv_(& - c__1, "DGEBRD", " ", m, m, &c_n1, &c_n1, (ftnlen) - 6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + *m * ilaenv_(&c__1, "DORMBR" - , "QLN", m, m, m, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + *m * ilaenv_(&c__1, "DORMBR" - , "PRT", m, m, m, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = bdspac + *m * 3; - wrkbl = max(i__1,i__2); - maxwrk = wrkbl + *m * *m; - minwrk = bdspac + *m * *m + *m * 3; - } else if (wntqa) { - -/* Path 4t (N much larger than M, JOBZ='A') */ - - wrkbl = *m + *m * ilaenv_(&c__1, "DGELQF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m + *n * ilaenv_(&c__1, "DORGLQ", - " ", n, n, m, &c_n1, (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + ((*m) << (1)) * ilaenv_(& - c__1, "DGEBRD", " ", m, m, &c_n1, &c_n1, (ftnlen) - 6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + *m * ilaenv_(&c__1, "DORMBR" - , "QLN", m, m, m, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + *m * ilaenv_(&c__1, "DORMBR" - , "PRT", m, m, m, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = bdspac + *m * 3; - wrkbl = max(i__1,i__2); - maxwrk = wrkbl + *m * *m; - minwrk = bdspac + *m * *m + *m * 3; - } - } else { - -/* Path 5t (N greater than M, but not much larger) */ - - wrkbl = *m * 3 + (*m + *n) * ilaenv_(&c__1, "DGEBRD", " ", m, - n, 
&c_n1, &c_n1, (ftnlen)6, (ftnlen)1); - if (wntqn) { -/* Computing MAX */ - i__1 = wrkbl, i__2 = bdspac + *m * 3; - maxwrk = max(i__1,i__2); - minwrk = *m * 3 + max(*n,bdspac); - } else if (wntqo) { -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + *m * ilaenv_(&c__1, "DORMBR" - , "QLN", m, m, n, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + *m * ilaenv_(&c__1, "DORMBR" - , "PRT", m, n, m, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = bdspac + *m * 3; - wrkbl = max(i__1,i__2); - maxwrk = wrkbl + *m * *n; -/* Computing MAX */ - i__1 = *n, i__2 = *m * *m + bdspac; - minwrk = *m * 3 + max(i__1,i__2); - } else if (wntqs) { -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + *m * ilaenv_(&c__1, "DORMBR" - , "QLN", m, m, n, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + *m * ilaenv_(&c__1, "DORMBR" - , "PRT", m, n, m, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = bdspac + *m * 3; - maxwrk = max(i__1,i__2); - minwrk = *m * 3 + max(*n,bdspac); - } else if (wntqa) { -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + *m * ilaenv_(&c__1, "DORMBR" - , "QLN", m, m, n, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m * 3 + *m * ilaenv_(&c__1, "DORMBR" - , "PRT", n, n, m, &c_n1, (ftnlen)6, (ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = bdspac + *m * 3; - maxwrk = max(i__1,i__2); - minwrk = *m * 3 + max(*n,bdspac); - } - } - } - work[1] = (doublereal) maxwrk; - } - - if ((*lwork < minwrk && ! 
lquery)) { - *info = -12; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DGESDD", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0) { - if (*lwork >= 1) { - work[1] = 1.; - } - return 0; - } - -/* Get machine constants */ - - eps = PRECISION; - smlnum = sqrt(SAFEMINIMUM) / eps; - bignum = 1. / smlnum; - -/* Scale A if max element outside range [SMLNUM,BIGNUM] */ - - anrm = dlange_("M", m, n, &a[a_offset], lda, dum); - iscl = 0; - if ((anrm > 0. && anrm < smlnum)) { - iscl = 1; - dlascl_("G", &c__0, &c__0, &anrm, &smlnum, m, n, &a[a_offset], lda, & - ierr); - } else if (anrm > bignum) { - iscl = 1; - dlascl_("G", &c__0, &c__0, &anrm, &bignum, m, n, &a[a_offset], lda, & - ierr); - } - - if (*m >= *n) { - -/* - A has at least as many rows as columns. If A has sufficiently - more rows than columns, first reduce using the QR - decomposition (if sufficient workspace available) -*/ - - if (*m >= mnthr) { - - if (wntqn) { - -/* - Path 1 (M much larger than N, JOBZ='N') - No singular vectors to be computed -*/ - - itau = 1; - nwork = itau + *n; - -/* - Compute A=Q*R - (Workspace: need 2*N, prefer N+N*NB) -*/ - - i__1 = *lwork - nwork + 1; - dgeqrf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__1, &ierr); - -/* Zero out below R */ - - i__1 = *n - 1; - i__2 = *n - 1; - dlaset_("L", &i__1, &i__2, &c_b29, &c_b29, &a[a_dim1 + 2], - lda); - ie = 1; - itauq = ie + *n; - itaup = itauq + *n; - nwork = itaup + *n; - -/* - Bidiagonalize R in A - (Workspace: need 4*N, prefer 3*N+2*N*NB) -*/ - - i__1 = *lwork - nwork + 1; - dgebrd_(n, n, &a[a_offset], lda, &s[1], &work[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__1, &ierr); - nwork = ie + *n; - -/* - Perform bidiagonal SVD, computing singular values only - (Workspace: need N+BDSPAC) -*/ - - dbdsdc_("U", "N", n, &s[1], &work[ie], dum, &c__1, dum, &c__1, - dum, idum, &work[nwork], &iwork[1], info); - - } else if (wntqo) { - -/* - Path 2 (M 
much larger than N, JOBZ = 'O') - N left singular vectors to be overwritten on A and - N right singular vectors to be computed in VT -*/ - - ir = 1; - -/* WORK(IR) is LDWRKR by N */ - - if (*lwork >= *lda * *n + *n * *n + *n * 3 + bdspac) { - ldwrkr = *lda; - } else { - ldwrkr = (*lwork - *n * *n - *n * 3 - bdspac) / *n; - } - itau = ir + ldwrkr * *n; - nwork = itau + *n; - -/* - Compute A=Q*R - (Workspace: need N*N+2*N, prefer N*N+N+N*NB) -*/ - - i__1 = *lwork - nwork + 1; - dgeqrf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__1, &ierr); - -/* Copy R to WORK(IR), zeroing out below it */ - - dlacpy_("U", n, n, &a[a_offset], lda, &work[ir], &ldwrkr); - i__1 = *n - 1; - i__2 = *n - 1; - dlaset_("L", &i__1, &i__2, &c_b29, &c_b29, &work[ir + 1], & - ldwrkr); - -/* - Generate Q in A - (Workspace: need N*N+2*N, prefer N*N+N+N*NB) -*/ - - i__1 = *lwork - nwork + 1; - dorgqr_(m, n, n, &a[a_offset], lda, &work[itau], &work[nwork], - &i__1, &ierr); - ie = itau; - itauq = ie + *n; - itaup = itauq + *n; - nwork = itaup + *n; - -/* - Bidiagonalize R in WORK(IR) - (Workspace: need N*N+4*N, prefer N*N+3*N+2*N*NB) -*/ - - i__1 = *lwork - nwork + 1; - dgebrd_(n, n, &work[ir], &ldwrkr, &s[1], &work[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__1, &ierr); - -/* WORK(IU) is N by N */ - - iu = nwork; - nwork = iu + *n * *n; - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in WORK(IU) and computing right - singular vectors of bidiagonal matrix in VT - (Workspace: need N+N*N+BDSPAC) -*/ - - dbdsdc_("U", "I", n, &s[1], &work[ie], &work[iu], n, &vt[ - vt_offset], ldvt, dum, idum, &work[nwork], &iwork[1], - info); - -/* - Overwrite WORK(IU) by left singular vectors of R - and VT by right singular vectors of R - (Workspace: need 2*N*N+3*N, prefer 2*N*N+2*N+N*NB) -*/ - - i__1 = *lwork - nwork + 1; - dormbr_("Q", "L", "N", n, n, n, &work[ir], &ldwrkr, &work[ - itauq], &work[iu], n, &work[nwork], &i__1, &ierr); -
i__1 = *lwork - nwork + 1; - dormbr_("P", "R", "T", n, n, n, &work[ir], &ldwrkr, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__1, & - ierr); - -/* - Multiply Q in A by left singular vectors of R in - WORK(IU), storing result in WORK(IR) and copying to A - (Workspace: need 2*N*N, prefer N*N+M*N) -*/ - - i__1 = *m; - i__2 = ldwrkr; - for (i__ = 1; i__2 < 0 ? i__ >= i__1 : i__ <= i__1; i__ += - i__2) { -/* Computing MIN */ - i__3 = *m - i__ + 1; - chunk = min(i__3,ldwrkr); - dgemm_("N", "N", &chunk, n, n, &c_b15, &a[i__ + a_dim1], - lda, &work[iu], n, &c_b29, &work[ir], &ldwrkr); - dlacpy_("F", &chunk, n, &work[ir], &ldwrkr, &a[i__ + - a_dim1], lda); -/* L10: */ - } - - } else if (wntqs) { - -/* - Path 3 (M much larger than N, JOBZ='S') - N left singular vectors to be computed in U and - N right singular vectors to be computed in VT -*/ - - ir = 1; - -/* WORK(IR) is N by N */ - - ldwrkr = *n; - itau = ir + ldwrkr * *n; - nwork = itau + *n; - -/* - Compute A=Q*R - (Workspace: need N*N+2*N, prefer N*N+N+N*NB) -*/ - - i__2 = *lwork - nwork + 1; - dgeqrf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__2, &ierr); - -/* Copy R to WORK(IR), zeroing out below it */ - - dlacpy_("U", n, n, &a[a_offset], lda, &work[ir], &ldwrkr); - i__2 = *n - 1; - i__1 = *n - 1; - dlaset_("L", &i__2, &i__1, &c_b29, &c_b29, &work[ir + 1], & - ldwrkr); - -/* - Generate Q in A - (Workspace: need N*N+2*N, prefer N*N+N+N*NB) -*/ - - i__2 = *lwork - nwork + 1; - dorgqr_(m, n, n, &a[a_offset], lda, &work[itau], &work[nwork], - &i__2, &ierr); - ie = itau; - itauq = ie + *n; - itaup = itauq + *n; - nwork = itaup + *n; - -/* - Bidiagonalize R in WORK(IR) - (Workspace: need N*N+4*N, prefer N*N+3*N+2*N*NB) -*/ - - i__2 = *lwork - nwork + 1; - dgebrd_(n, n, &work[ir], &ldwrkr, &s[1], &work[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__2, &ierr); - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in U and computing right singular - vectors of
bidiagonal matrix in VT - (Workspace: need N+BDSPAC) -*/ - - dbdsdc_("U", "I", n, &s[1], &work[ie], &u[u_offset], ldu, &vt[ - vt_offset], ldvt, dum, idum, &work[nwork], &iwork[1], - info); - -/* - Overwrite U by left singular vectors of R and VT - by right singular vectors of R - (Workspace: need N*N+3*N, prefer N*N+2*N+N*NB) -*/ - - i__2 = *lwork - nwork + 1; - dormbr_("Q", "L", "N", n, n, n, &work[ir], &ldwrkr, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__2, &ierr); - - i__2 = *lwork - nwork + 1; - dormbr_("P", "R", "T", n, n, n, &work[ir], &ldwrkr, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__2, & - ierr); - -/* - Multiply Q in A by left singular vectors of R in - WORK(IR), storing result in U - (Workspace: need N*N) -*/ - - dlacpy_("F", n, n, &u[u_offset], ldu, &work[ir], &ldwrkr); - dgemm_("N", "N", m, n, n, &c_b15, &a[a_offset], lda, &work[ir] - , &ldwrkr, &c_b29, &u[u_offset], ldu); - - } else if (wntqa) { - -/* - Path 4 (M much larger than N, JOBZ='A') - M left singular vectors to be computed in U and - N right singular vectors to be computed in VT -*/ - - iu = 1; - -/* WORK(IU) is N by N */ - - ldwrku = *n; - itau = iu + ldwrku * *n; - nwork = itau + *n; - -/* - Compute A=Q*R, copying result to U - (Workspace: need N*N+2*N, prefer N*N+N+N*NB) -*/ - - i__2 = *lwork - nwork + 1; - dgeqrf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__2, &ierr); - dlacpy_("L", m, n, &a[a_offset], lda, &u[u_offset], ldu); - -/* - Generate Q in U - (Workspace: need N*N+2*N, prefer N*N+N+N*NB) -*/ - i__2 = *lwork - nwork + 1; - dorgqr_(m, m, n, &u[u_offset], ldu, &work[itau], &work[nwork], - &i__2, &ierr); - -/* Produce R in A, zeroing out other entries */ - - i__2 = *n - 1; - i__1 = *n - 1; - dlaset_("L", &i__2, &i__1, &c_b29, &c_b29, &a[a_dim1 + 2], - lda); - ie = itau; - itauq = ie + *n; - itaup = itauq + *n; - nwork = itaup + *n; - -/* - Bidiagonalize R in A - (Workspace: need N*N+4*N, prefer N*N+3*N+2*N*NB) -*/ - - i__2 = *lwork - nwork + 
1; - dgebrd_(n, n, &a[a_offset], lda, &s[1], &work[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__2, &ierr); - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in WORK(IU) and computing right - singular vectors of bidiagonal matrix in VT - (Workspace: need N+N*N+BDSPAC) -*/ - - dbdsdc_("U", "I", n, &s[1], &work[ie], &work[iu], n, &vt[ - vt_offset], ldvt, dum, idum, &work[nwork], &iwork[1], - info); - -/* - Overwrite WORK(IU) by left singular vectors of R and VT - by right singular vectors of R - (Workspace: need N*N+3*N, prefer N*N+2*N+N*NB) -*/ - - i__2 = *lwork - nwork + 1; - dormbr_("Q", "L", "N", n, n, n, &a[a_offset], lda, &work[ - itauq], &work[iu], &ldwrku, &work[nwork], &i__2, & - ierr); - i__2 = *lwork - nwork + 1; - dormbr_("P", "R", "T", n, n, n, &a[a_offset], lda, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__2, & - ierr); - -/* - Multiply Q in U by left singular vectors of R in - WORK(IU), storing result in A - (Workspace: need N*N) -*/ - - dgemm_("N", "N", m, n, n, &c_b15, &u[u_offset], ldu, &work[iu] - , &ldwrku, &c_b29, &a[a_offset], lda); - -/* Copy left singular vectors of A from A to U */ - - dlacpy_("F", m, n, &a[a_offset], lda, &u[u_offset], ldu); - - } - - } else { - -/* - M .LT. 
MNTHR - - Path 5 (M at least N, but not much larger) - Reduce to bidiagonal form without QR decomposition -*/ - - ie = 1; - itauq = ie + *n; - itaup = itauq + *n; - nwork = itaup + *n; - -/* - Bidiagonalize A - (Workspace: need 3*N+M, prefer 3*N+(M+N)*NB) -*/ - - i__2 = *lwork - nwork + 1; - dgebrd_(m, n, &a[a_offset], lda, &s[1], &work[ie], &work[itauq], & - work[itaup], &work[nwork], &i__2, &ierr); - if (wntqn) { - -/* - Perform bidiagonal SVD, only computing singular values - (Workspace: need N+BDSPAC) -*/ - - dbdsdc_("U", "N", n, &s[1], &work[ie], dum, &c__1, dum, &c__1, - dum, idum, &work[nwork], &iwork[1], info); - } else if (wntqo) { - iu = nwork; - if (*lwork >= *m * *n + *n * 3 + bdspac) { - -/* WORK( IU ) is M by N */ - - ldwrku = *m; - nwork = iu + ldwrku * *n; - dlaset_("F", m, n, &c_b29, &c_b29, &work[iu], &ldwrku); - } else { - -/* WORK( IU ) is N by N */ - - ldwrku = *n; - nwork = iu + ldwrku * *n; - -/* WORK(IR) is LDWRKR by N */ - - ir = nwork; - ldwrkr = (*lwork - *n * *n - *n * 3) / *n; - } - nwork = iu + ldwrku * *n; - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in WORK(IU) and computing right - singular vectors of bidiagonal matrix in VT - (Workspace: need N+N*N+BDSPAC) -*/ - - dbdsdc_("U", "I", n, &s[1], &work[ie], &work[iu], &ldwrku, & - vt[vt_offset], ldvt, dum, idum, &work[nwork], &iwork[ - 1], info); - -/* - Overwrite VT by right singular vectors of A - (Workspace: need N*N+2*N, prefer N*N+N+N*NB) -*/ - - i__2 = *lwork - nwork + 1; - dormbr_("P", "R", "T", n, n, n, &a[a_offset], lda, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__2, & - ierr); - - if (*lwork >= *m * *n + *n * 3 + bdspac) { - -/* - Overwrite WORK(IU) by left singular vectors of A - (Workspace: need N*N+2*N, prefer N*N+N+N*NB) -*/ - - i__2 = *lwork - nwork + 1; - dormbr_("Q", "L", "N", m, n, n, &a[a_offset], lda, &work[ - itauq], &work[iu], &ldwrku, &work[nwork], &i__2, & - ierr); - -/* Copy left singular vectors of A from 
WORK(IU) to A */ - - dlacpy_("F", m, n, &work[iu], &ldwrku, &a[a_offset], lda); - } else { - -/* - Generate Q in A - (Workspace: need N*N+2*N, prefer N*N+N+N*NB) -*/ - - i__2 = *lwork - nwork + 1; - dorgbr_("Q", m, n, n, &a[a_offset], lda, &work[itauq], & - work[nwork], &i__2, &ierr); - -/* - Multiply Q in A by left singular vectors of - bidiagonal matrix in WORK(IU), storing result in - WORK(IR) and copying to A - (Workspace: need 2*N*N, prefer N*N+M*N) -*/ - - i__2 = *m; - i__1 = ldwrkr; - for (i__ = 1; i__1 < 0 ? i__ >= i__2 : i__ <= i__2; i__ += - i__1) { -/* Computing MIN */ - i__3 = *m - i__ + 1; - chunk = min(i__3,ldwrkr); - dgemm_("N", "N", &chunk, n, n, &c_b15, &a[i__ + - a_dim1], lda, &work[iu], &ldwrku, &c_b29, & - work[ir], &ldwrkr); - dlacpy_("F", &chunk, n, &work[ir], &ldwrkr, &a[i__ + - a_dim1], lda); -/* L20: */ - } - } - - } else if (wntqs) { - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in U and computing right singular - vectors of bidiagonal matrix in VT - (Workspace: need N+BDSPAC) -*/ - - dlaset_("F", m, n, &c_b29, &c_b29, &u[u_offset], ldu); - dbdsdc_("U", "I", n, &s[1], &work[ie], &u[u_offset], ldu, &vt[ - vt_offset], ldvt, dum, idum, &work[nwork], &iwork[1], - info); - -/* - Overwrite U by left singular vectors of A and VT - by right singular vectors of A - (Workspace: need 3*N, prefer 2*N+N*NB) -*/ - - i__1 = *lwork - nwork + 1; - dormbr_("Q", "L", "N", m, n, n, &a[a_offset], lda, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__1, &ierr); - i__1 = *lwork - nwork + 1; - dormbr_("P", "R", "T", n, n, n, &a[a_offset], lda, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__1, & - ierr); - } else if (wntqa) { - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in U and computing right singular - vectors of bidiagonal matrix in VT - (Workspace: need N+BDSPAC) -*/ - - dlaset_("F", m, m, &c_b29, &c_b29, &u[u_offset], ldu); - dbdsdc_("U", "I", n, &s[1], 
&work[ie], &u[u_offset], ldu, &vt[ - vt_offset], ldvt, dum, idum, &work[nwork], &iwork[1], - info); - -/* Set the right corner of U to identity matrix */ - - i__1 = *m - *n; - i__2 = *m - *n; - dlaset_("F", &i__1, &i__2, &c_b29, &c_b15, &u[*n + 1 + (*n + - 1) * u_dim1], ldu); - -/* - Overwrite U by left singular vectors of A and VT - by right singular vectors of A - (Workspace: need N*N+2*N+M, prefer N*N+2*N+M*NB) -*/ - - i__1 = *lwork - nwork + 1; - dormbr_("Q", "L", "N", m, m, n, &a[a_offset], lda, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__1, &ierr); - i__1 = *lwork - nwork + 1; - dormbr_("P", "R", "T", n, n, m, &a[a_offset], lda, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__1, & - ierr); - } - - } - - } else { - -/* - A has more columns than rows. If A has sufficiently more - columns than rows, first reduce using the LQ decomposition (if - sufficient workspace available) -*/ - - if (*n >= mnthr) { - - if (wntqn) { - -/* - Path 1t (N much larger than M, JOBZ='N') - No singular vectors to be computed -*/ - - itau = 1; - nwork = itau + *m; - -/* - Compute A=L*Q - (Workspace: need 2*M, prefer M+M*NB) -*/ - - i__1 = *lwork - nwork + 1; - dgelqf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__1, &ierr); - -/* Zero out above L */ - - i__1 = *m - 1; - i__2 = *m - 1; - dlaset_("U", &i__1, &i__2, &c_b29, &c_b29, &a[((a_dim1) << (1) - ) + 1], lda); - ie = 1; - itauq = ie + *m; - itaup = itauq + *m; - nwork = itaup + *m; - -/* - Bidiagonalize L in A - (Workspace: need 4*M, prefer 3*M+2*M*NB) -*/ - - i__1 = *lwork - nwork + 1; - dgebrd_(m, m, &a[a_offset], lda, &s[1], &work[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__1, &ierr); - nwork = ie + *m; - -/* - Perform bidiagonal SVD, computing singular values only - (Workspace: need M+BDSPAC) -*/ - - dbdsdc_("U", "N", m, &s[1], &work[ie], dum, &c__1, dum, &c__1, - dum, idum, &work[nwork], &iwork[1], info); - - } else if (wntqo) { - -/* - Path 2t (N much larger than M, JOBZ='O') - M 
right singular vectors to be overwritten on A and - M left singular vectors to be computed in U -*/ - - ivt = 1; - -/* IVT is M by M */ - - il = ivt + *m * *m; - if (*lwork >= *m * *n + *m * *m + *m * 3 + bdspac) { - -/* WORK(IL) is M by N */ - - ldwrkl = *m; - chunk = *n; - } else { - ldwrkl = *m; - chunk = (*lwork - *m * *m) / *m; - } - itau = il + ldwrkl * *m; - nwork = itau + *m; - -/* - Compute A=L*Q - (Workspace: need M*M+2*M, prefer M*M+M+M*NB) -*/ - - i__1 = *lwork - nwork + 1; - dgelqf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__1, &ierr); - -/* Copy L to WORK(IL), zeroing out above it */ - - dlacpy_("L", m, m, &a[a_offset], lda, &work[il], &ldwrkl); - i__1 = *m - 1; - i__2 = *m - 1; - dlaset_("U", &i__1, &i__2, &c_b29, &c_b29, &work[il + ldwrkl], - &ldwrkl); - -/* - Generate Q in A - (Workspace: need M*M+2*M, prefer M*M+M+M*NB) -*/ - - i__1 = *lwork - nwork + 1; - dorglq_(m, n, m, &a[a_offset], lda, &work[itau], &work[nwork], - &i__1, &ierr); - ie = itau; - itauq = ie + *m; - itaup = itauq + *m; - nwork = itaup + *m; - -/* - Bidiagonalize L in WORK(IL) - (Workspace: need M*M+4*M, prefer M*M+3*M+2*M*NB) -*/ - - i__1 = *lwork - nwork + 1; - dgebrd_(m, m, &work[il], &ldwrkl, &s[1], &work[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__1, &ierr); - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in U, and computing right singular - vectors of bidiagonal matrix in WORK(IVT) - (Workspace: need M+M*M+BDSPAC) -*/ - - dbdsdc_("U", "I", m, &s[1], &work[ie], &u[u_offset], ldu, & - work[ivt], m, dum, idum, &work[nwork], &iwork[1], - info); - -/* - Overwrite U by left singular vectors of L and WORK(IVT) - by right singular vectors of L - (Workspace: need 2*M*M+3*M, prefer 2*M*M+2*M+M*NB) -*/ - - i__1 = *lwork - nwork + 1; - dormbr_("Q", "L", "N", m, m, m, &work[il], &ldwrkl, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__1, &ierr); - i__1 = *lwork - nwork + 1; - dormbr_("P", "R", "T", m, m, m,
&work[il], &ldwrkl, &work[ - itaup], &work[ivt], m, &work[nwork], &i__1, &ierr); - -/* - Multiply right singular vectors of L in WORK(IVT) by Q - in A, storing result in WORK(IL) and copying to A - (Workspace: need 2*M*M, prefer M*M+M*N) -*/ - - i__1 = *n; - i__2 = chunk; - for (i__ = 1; i__2 < 0 ? i__ >= i__1 : i__ <= i__1; i__ += - i__2) { -/* Computing MIN */ - i__3 = *n - i__ + 1; - blk = min(i__3,chunk); - dgemm_("N", "N", m, &blk, m, &c_b15, &work[ivt], m, &a[ - i__ * a_dim1 + 1], lda, &c_b29, &work[il], & - ldwrkl); - dlacpy_("F", m, &blk, &work[il], &ldwrkl, &a[i__ * a_dim1 - + 1], lda); -/* L30: */ - } - - } else if (wntqs) { - -/* - Path 3t (N much larger than M, JOBZ='S') - M right singular vectors to be computed in VT and - M left singular vectors to be computed in U -*/ - - il = 1; - -/* WORK(IL) is M by M */ - - ldwrkl = *m; - itau = il + ldwrkl * *m; - nwork = itau + *m; - -/* - Compute A=L*Q - (Workspace: need M*M+2*M, prefer M*M+M+M*NB) -*/ - - i__2 = *lwork - nwork + 1; - dgelqf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__2, &ierr); - -/* Copy L to WORK(IL), zeroing out above it */ - - dlacpy_("L", m, m, &a[a_offset], lda, &work[il], &ldwrkl); - i__2 = *m - 1; - i__1 = *m - 1; - dlaset_("U", &i__2, &i__1, &c_b29, &c_b29, &work[il + ldwrkl], - &ldwrkl); - -/* - Generate Q in A - (Workspace: need M*M+2*M, prefer M*M+M+M*NB) -*/ - - i__2 = *lwork - nwork + 1; - dorglq_(m, n, m, &a[a_offset], lda, &work[itau], &work[nwork], - &i__2, &ierr); - ie = itau; - itauq = ie + *m; - itaup = itauq + *m; - nwork = itaup + *m; - -/* - Bidiagonalize L in WORK(IL) - (Workspace: need M*M+4*M, prefer M*M+3*M+2*M*NB) -*/ - - i__2 = *lwork - nwork + 1; - dgebrd_(m, m, &work[il], &ldwrkl, &s[1], &work[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__2, &ierr); - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in U and computing right singular - vectors of bidiagonal matrix in VT - (Workspace:
need M+BDSPAC) -*/ - - dbdsdc_("U", "I", m, &s[1], &work[ie], &u[u_offset], ldu, &vt[ - vt_offset], ldvt, dum, idum, &work[nwork], &iwork[1], - info); - -/* - Overwrite U by left singular vectors of L and VT - by right singular vectors of L - (Workspace: need M*M+3*M, prefer M*M+2*M+M*NB) -*/ - - i__2 = *lwork - nwork + 1; - dormbr_("Q", "L", "N", m, m, m, &work[il], &ldwrkl, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__2, &ierr); - i__2 = *lwork - nwork + 1; - dormbr_("P", "R", "T", m, m, m, &work[il], &ldwrkl, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__2, & - ierr); - -/* - Multiply right singular vectors of L in WORK(IL) by - Q in A, storing result in VT - (Workspace: need M*M) -*/ - - dlacpy_("F", m, m, &vt[vt_offset], ldvt, &work[il], &ldwrkl); - dgemm_("N", "N", m, n, m, &c_b15, &work[il], &ldwrkl, &a[ - a_offset], lda, &c_b29, &vt[vt_offset], ldvt); - - } else if (wntqa) { - -/* - Path 4t (N much larger than M, JOBZ='A') - N right singular vectors to be computed in VT and - M left singular vectors to be computed in U -*/ - - ivt = 1; - -/* WORK(IVT) is M by M */ - - ldwkvt = *m; - itau = ivt + ldwkvt * *m; - nwork = itau + *m; - -/* - Compute A=L*Q, copying result to VT - (Workspace: need M*M+2*M, prefer M*M+M+M*NB) -*/ - - i__2 = *lwork - nwork + 1; - dgelqf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__2, &ierr); - dlacpy_("U", m, n, &a[a_offset], lda, &vt[vt_offset], ldvt); - -/* - Generate Q in VT - (Workspace: need M*M+2*M, prefer M*M+M+M*NB) -*/ - - i__2 = *lwork - nwork + 1; - dorglq_(n, n, m, &vt[vt_offset], ldvt, &work[itau], &work[ - nwork], &i__2, &ierr); - -/* Produce L in A, zeroing out other entries */ - - i__2 = *m - 1; - i__1 = *m - 1; - dlaset_("U", &i__2, &i__1, &c_b29, &c_b29, &a[((a_dim1) << (1) - ) + 1], lda); - ie = itau; - itauq = ie + *m; - itaup = itauq + *m; - nwork = itaup + *m; - -/* - Bidiagonalize L in A - (Workspace: need M*M+4*M, prefer M*M+3*M+2*M*NB) -*/ - - i__2 = *lwork - nwork + 1; - 
dgebrd_(m, m, &a[a_offset], lda, &s[1], &work[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__2, &ierr); - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in U and computing right singular - vectors of bidiagonal matrix in WORK(IVT) - (Workspace: need M+M*M+BDSPAC) -*/ - - dbdsdc_("U", "I", m, &s[1], &work[ie], &u[u_offset], ldu, & - work[ivt], &ldwkvt, dum, idum, &work[nwork], &iwork[1] - , info); - -/* - Overwrite U by left singular vectors of L and WORK(IVT) - by right singular vectors of L - (Workspace: need M*M+3*M, prefer M*M+2*M+M*NB) -*/ - - i__2 = *lwork - nwork + 1; - dormbr_("Q", "L", "N", m, m, m, &a[a_offset], lda, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__2, &ierr); - i__2 = *lwork - nwork + 1; - dormbr_("P", "R", "T", m, m, m, &a[a_offset], lda, &work[ - itaup], &work[ivt], &ldwkvt, &work[nwork], &i__2, & - ierr); - -/* - Multiply right singular vectors of L in WORK(IVT) by - Q in VT, storing result in A - (Workspace: need M*M) -*/ - - dgemm_("N", "N", m, n, m, &c_b15, &work[ivt], &ldwkvt, &vt[ - vt_offset], ldvt, &c_b29, &a[a_offset], lda); - -/* Copy right singular vectors of A from A to VT */ - - dlacpy_("F", m, n, &a[a_offset], lda, &vt[vt_offset], ldvt); - - } - - } else { - -/* - N .LT. 
MNTHR - - Path 5t (N greater than M, but not much larger) - Reduce to bidiagonal form without LQ decomposition -*/ - - ie = 1; - itauq = ie + *m; - itaup = itauq + *m; - nwork = itaup + *m; - -/* - Bidiagonalize A - (Workspace: need 3*M+N, prefer 3*M+(M+N)*NB) -*/ - - i__2 = *lwork - nwork + 1; - dgebrd_(m, n, &a[a_offset], lda, &s[1], &work[ie], &work[itauq], & - work[itaup], &work[nwork], &i__2, &ierr); - if (wntqn) { - -/* - Perform bidiagonal SVD, only computing singular values - (Workspace: need M+BDSPAC) -*/ - - dbdsdc_("L", "N", m, &s[1], &work[ie], dum, &c__1, dum, &c__1, - dum, idum, &work[nwork], &iwork[1], info); - } else if (wntqo) { - ldwkvt = *m; - ivt = nwork; - if (*lwork >= *m * *n + *m * 3 + bdspac) { - -/* WORK( IVT ) is M by N */ - - dlaset_("F", m, n, &c_b29, &c_b29, &work[ivt], &ldwkvt); - nwork = ivt + ldwkvt * *n; - } else { - -/* WORK( IVT ) is M by M */ - - nwork = ivt + ldwkvt * *m; - il = nwork; - -/* WORK(IL) is M by CHUNK */ - - chunk = (*lwork - *m * *m - *m * 3) / *m; - } - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in U and computing right singular - vectors of bidiagonal matrix in WORK(IVT) - (Workspace: need M*M+BDSPAC) -*/ - - dbdsdc_("L", "I", m, &s[1], &work[ie], &u[u_offset], ldu, & - work[ivt], &ldwkvt, dum, idum, &work[nwork], &iwork[1] - , info); - -/* - Overwrite U by left singular vectors of A - (Workspace: need M*M+2*M, prefer M*M+M+M*NB) -*/ - - i__2 = *lwork - nwork + 1; - dormbr_("Q", "L", "N", m, m, n, &a[a_offset], lda, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__2, &ierr); - - if (*lwork >= *m * *n + *m * 3 + bdspac) { - -/* - Overwrite WORK(IVT) by right singular vectors of A - (Workspace: need M*M+2*M, prefer M*M+M+M*NB) -*/ - - i__2 = *lwork - nwork + 1; - dormbr_("P", "R", "T", m, n, m, &a[a_offset], lda, &work[ - itaup], &work[ivt], &ldwkvt, &work[nwork], &i__2, - &ierr); - -/* Copy right singular vectors of A from WORK(IVT) to A */ - - dlacpy_("F", m, n,
&work[ivt], &ldwkvt, &a[a_offset], lda); - } else { - -/* - Generate P**T in A - (Workspace: need M*M+2*M, prefer M*M+M+M*NB) -*/ - - i__2 = *lwork - nwork + 1; - dorgbr_("P", m, n, m, &a[a_offset], lda, &work[itaup], & - work[nwork], &i__2, &ierr); - -/* - Multiply Q in A by right singular vectors of - bidiagonal matrix in WORK(IVT), storing result in - WORK(IL) and copying to A - (Workspace: need 2*M*M, prefer M*M+M*N) -*/ - - i__2 = *n; - i__1 = chunk; - for (i__ = 1; i__1 < 0 ? i__ >= i__2 : i__ <= i__2; i__ += - i__1) { -/* Computing MIN */ - i__3 = *n - i__ + 1; - blk = min(i__3,chunk); - dgemm_("N", "N", m, &blk, m, &c_b15, &work[ivt], & - ldwkvt, &a[i__ * a_dim1 + 1], lda, &c_b29, & - work[il], m); - dlacpy_("F", m, &blk, &work[il], m, &a[i__ * a_dim1 + - 1], lda); -/* L40: */ - } - } - } else if (wntqs) { - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in U and computing right singular - vectors of bidiagonal matrix in VT - (Workspace: need M+BDSPAC) -*/ - - dlaset_("F", m, n, &c_b29, &c_b29, &vt[vt_offset], ldvt); - dbdsdc_("L", "I", m, &s[1], &work[ie], &u[u_offset], ldu, &vt[ - vt_offset], ldvt, dum, idum, &work[nwork], &iwork[1], - info); - -/* - Overwrite U by left singular vectors of A and VT - by right singular vectors of A - (Workspace: need 3*M, prefer 2*M+M*NB) -*/ - - i__1 = *lwork - nwork + 1; - dormbr_("Q", "L", "N", m, m, n, &a[a_offset], lda, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__1, &ierr); - i__1 = *lwork - nwork + 1; - dormbr_("P", "R", "T", m, n, m, &a[a_offset], lda, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__1, & - ierr); - } else if (wntqa) { - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in U and computing right singular - vectors of bidiagonal matrix in VT - (Workspace: need M+BDSPAC) -*/ - - dlaset_("F", n, n, &c_b29, &c_b29, &vt[vt_offset], ldvt); - dbdsdc_("L", "I", m, &s[1], &work[ie], &u[u_offset], ldu, &vt[ - vt_offset], 
ldvt, dum, idum, &work[nwork], &iwork[1], - info); - -/* Set the right corner of VT to identity matrix */ - - i__1 = *n - *m; - i__2 = *n - *m; - dlaset_("F", &i__1, &i__2, &c_b29, &c_b15, &vt[*m + 1 + (*m + - 1) * vt_dim1], ldvt); - -/* - Overwrite U by left singular vectors of A and VT - by right singular vectors of A - (Workspace: need 2*M+N, prefer 2*M+N*NB) -*/ - - i__1 = *lwork - nwork + 1; - dormbr_("Q", "L", "N", m, m, n, &a[a_offset], lda, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__1, &ierr); - i__1 = *lwork - nwork + 1; - dormbr_("P", "R", "T", n, n, m, &a[a_offset], lda, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__1, & - ierr); - } - - } - - } - -/* Undo scaling if necessary */ - - if (iscl == 1) { - if (anrm > bignum) { - dlascl_("G", &c__0, &c__0, &bignum, &anrm, &minmn, &c__1, &s[1], & - minmn, &ierr); - } - if (anrm < smlnum) { - dlascl_("G", &c__0, &c__0, &smlnum, &anrm, &minmn, &c__1, &s[1], & - minmn, &ierr); - } - } - -/* Return optimal workspace in WORK(1) */ - - work[1] = (doublereal) maxwrk; - - return 0; - -/* End of DGESDD */ - -} /* dgesdd_ */ - -/* Subroutine */ int dgesv_(integer *n, integer *nrhs, doublereal *a, integer - *lda, integer *ipiv, doublereal *b, integer *ldb, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, i__1; - - /* Local variables */ - extern /* Subroutine */ int dgetrf_(integer *, integer *, doublereal *, - integer *, integer *, integer *), xerbla_(char *, integer *), dgetrs_(char *, integer *, integer *, doublereal *, - integer *, integer *, doublereal *, integer *, integer *); - - -/* - -- LAPACK driver routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - March 31, 1993 - - - Purpose - ======= - - DGESV computes the solution to a real system of linear equations - A * X = B, - where A is an N-by-N matrix and X and B are N-by-NRHS matrices. 
- - The LU decomposition with partial pivoting and row interchanges is - used to factor A as - A = P * L * U, - where P is a permutation matrix, L is unit lower triangular, and U is - upper triangular. The factored form of A is then used to solve the - system of equations A * X = B. - - Arguments - ========= - - N (input) INTEGER - The number of linear equations, i.e., the order of the - matrix A. N >= 0. - - NRHS (input) INTEGER - The number of right hand sides, i.e., the number of columns - of the matrix B. NRHS >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the N-by-N coefficient matrix A. - On exit, the factors L and U from the factorization - A = P*L*U; the unit diagonal elements of L are not stored. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - IPIV (output) INTEGER array, dimension (N) - The pivot indices that define the permutation matrix P; - row i of the matrix was interchanged with row IPIV(i). - - B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS) - On entry, the N-by-NRHS matrix of right hand side matrix B. - On exit, if INFO = 0, the N-by-NRHS solution matrix X. - - LDB (input) INTEGER - The leading dimension of the array B. LDB >= max(1,N). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: if INFO = i, U(i,i) is exactly zero. The factorization - has been completed, but the factor U is exactly - singular, so the solution could not be computed. - - ===================================================================== - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --ipiv; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - - /* Function Body */ - *info = 0; - if (*n < 0) { - *info = -1; - } else if (*nrhs < 0) { - *info = -2; - } else if (*lda < max(1,*n)) { - *info = -4; - } else if (*ldb < max(1,*n)) { - *info = -7; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DGESV ", &i__1); - return 0; - } - -/* Compute the LU factorization of A. */ - - dgetrf_(n, n, &a[a_offset], lda, &ipiv[1], info); - if (*info == 0) { - -/* Solve the system A*X = B, overwriting B with X. */ - - dgetrs_("No transpose", n, nrhs, &a[a_offset], lda, &ipiv[1], &b[ - b_offset], ldb, info); - } - return 0; - -/* End of DGESV */ - -} /* dgesv_ */ - -/* Subroutine */ int dgetf2_(integer *m, integer *n, doublereal *a, integer * - lda, integer *ipiv, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - doublereal d__1; - - /* Local variables */ - static integer j, jp; - extern /* Subroutine */ int dger_(integer *, integer *, doublereal *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - integer *), dscal_(integer *, doublereal *, doublereal *, integer - *), dswap_(integer *, doublereal *, integer *, doublereal *, - integer *); - extern integer idamax_(integer *, doublereal *, integer *); - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1992 - - - Purpose - ======= - - DGETF2 computes an LU factorization of a general m-by-n matrix A - using partial pivoting with row interchanges. 
- - The factorization has the form - A = P * L * U - where P is a permutation matrix, L is lower triangular with unit - diagonal elements (lower trapezoidal if m > n), and U is upper - triangular (upper trapezoidal if m < n). - - This is the right-looking Level 2 BLAS version of the algorithm. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the m by n matrix to be factored. - On exit, the factors L and U from the factorization - A = P*L*U; the unit diagonal elements of L are not stored. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - IPIV (output) INTEGER array, dimension (min(M,N)) - The pivot indices; for 1 <= i <= min(M,N), row i of the - matrix was interchanged with row IPIV(i). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -k, the k-th argument had an illegal value - > 0: if INFO = k, U(k,k) is exactly zero. The factorization - has been completed, but the factor U is exactly - singular, and division by zero will occur if it is used - to solve a system of equations. - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --ipiv; - - /* Function Body */ - *info = 0; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DGETF2", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0) { - return 0; - } - - i__1 = min(*m,*n); - for (j = 1; j <= i__1; ++j) { - -/* Find pivot and test for singularity. */ - - i__2 = *m - j + 1; - jp = j - 1 + idamax_(&i__2, &a[j + j * a_dim1], &c__1); - ipiv[j] = jp; - if (a[jp + j * a_dim1] != 0.) 
{ - -/* Apply the interchange to columns 1:N. */ - - if (jp != j) { - dswap_(n, &a[j + a_dim1], lda, &a[jp + a_dim1], lda); - } - -/* Compute elements J+1:M of J-th column. */ - - if (j < *m) { - i__2 = *m - j; - d__1 = 1. / a[j + j * a_dim1]; - dscal_(&i__2, &d__1, &a[j + 1 + j * a_dim1], &c__1); - } - - } else if (*info == 0) { - - *info = j; - } - - if (j < min(*m,*n)) { - -/* Update trailing submatrix. */ - - i__2 = *m - j; - i__3 = *n - j; - dger_(&i__2, &i__3, &c_b151, &a[j + 1 + j * a_dim1], &c__1, &a[j - + (j + 1) * a_dim1], lda, &a[j + 1 + (j + 1) * a_dim1], - lda); - } -/* L10: */ - } - return 0; - -/* End of DGETF2 */ - -} /* dgetf2_ */ - -/* Subroutine */ int dgetrf_(integer *m, integer *n, doublereal *a, integer * - lda, integer *ipiv, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4, i__5; - - /* Local variables */ - static integer i__, j, jb, nb; - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *); - static integer iinfo; - extern /* Subroutine */ int dtrsm_(char *, char *, char *, char *, - integer *, integer *, doublereal *, doublereal *, integer *, - doublereal *, integer *), dgetf2_( - integer *, integer *, doublereal *, integer *, integer *, integer - *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int dlaswp_(integer *, doublereal *, integer *, - integer *, integer *, integer *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - March 31, 1993 - - - Purpose - ======= - - DGETRF computes an LU factorization of a general M-by-N matrix A - using partial pivoting with row interchanges. 
- - The factorization has the form - A = P * L * U - where P is a permutation matrix, L is lower triangular with unit - diagonal elements (lower trapezoidal if m > n), and U is upper - triangular (upper trapezoidal if m < n). - - This is the right-looking Level 3 BLAS version of the algorithm. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the M-by-N matrix to be factored. - On exit, the factors L and U from the factorization - A = P*L*U; the unit diagonal elements of L are not stored. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - IPIV (output) INTEGER array, dimension (min(M,N)) - The pivot indices; for 1 <= i <= min(M,N), row i of the - matrix was interchanged with row IPIV(i). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: if INFO = i, U(i,i) is exactly zero. The factorization - has been completed, but the factor U is exactly - singular, and division by zero will occur if it is used - to solve a system of equations. - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --ipiv; - - /* Function Body */ - *info = 0; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DGETRF", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0) { - return 0; - } - -/* Determine the block size for this environment. */ - - nb = ilaenv_(&c__1, "DGETRF", " ", m, n, &c_n1, &c_n1, (ftnlen)6, (ftnlen) - 1); - if (nb <= 1 || nb >= min(*m,*n)) { - -/* Use unblocked code. 
*/ - - dgetf2_(m, n, &a[a_offset], lda, &ipiv[1], info); - } else { - -/* Use blocked code. */ - - i__1 = min(*m,*n); - i__2 = nb; - for (j = 1; i__2 < 0 ? j >= i__1 : j <= i__1; j += i__2) { -/* Computing MIN */ - i__3 = min(*m,*n) - j + 1; - jb = min(i__3,nb); - -/* - Factor diagonal and subdiagonal blocks and test for exact - singularity. -*/ - - i__3 = *m - j + 1; - dgetf2_(&i__3, &jb, &a[j + j * a_dim1], lda, &ipiv[j], &iinfo); - -/* Adjust INFO and the pivot indices. */ - - if ((*info == 0 && iinfo > 0)) { - *info = iinfo + j - 1; - } -/* Computing MIN */ - i__4 = *m, i__5 = j + jb - 1; - i__3 = min(i__4,i__5); - for (i__ = j; i__ <= i__3; ++i__) { - ipiv[i__] = j - 1 + ipiv[i__]; -/* L10: */ - } - -/* Apply interchanges to columns 1:J-1. */ - - i__3 = j - 1; - i__4 = j + jb - 1; - dlaswp_(&i__3, &a[a_offset], lda, &j, &i__4, &ipiv[1], &c__1); - - if (j + jb <= *n) { - -/* Apply interchanges to columns J+JB:N. */ - - i__3 = *n - j - jb + 1; - i__4 = j + jb - 1; - dlaswp_(&i__3, &a[(j + jb) * a_dim1 + 1], lda, &j, &i__4, & - ipiv[1], &c__1); - -/* Compute block row of U. */ - - i__3 = *n - j - jb + 1; - dtrsm_("Left", "Lower", "No transpose", "Unit", &jb, &i__3, & - c_b15, &a[j + j * a_dim1], lda, &a[j + (j + jb) * - a_dim1], lda); - if (j + jb <= *m) { - -/* Update trailing submatrix. 
*/ - - i__3 = *m - j - jb + 1; - i__4 = *n - j - jb + 1; - dgemm_("No transpose", "No transpose", &i__3, &i__4, &jb, - &c_b151, &a[j + jb + j * a_dim1], lda, &a[j + (j - + jb) * a_dim1], lda, &c_b15, &a[j + jb + (j + jb) - * a_dim1], lda); - } - } -/* L20: */ - } - } - return 0; - -/* End of DGETRF */ - -} /* dgetrf_ */ - -/* Subroutine */ int dgetrs_(char *trans, integer *n, integer *nrhs, - doublereal *a, integer *lda, integer *ipiv, doublereal *b, integer * - ldb, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, i__1; - - /* Local variables */ - extern logical lsame_(char *, char *); - extern /* Subroutine */ int dtrsm_(char *, char *, char *, char *, - integer *, integer *, doublereal *, doublereal *, integer *, - doublereal *, integer *), xerbla_( - char *, integer *), dlaswp_(integer *, doublereal *, - integer *, integer *, integer *, integer *, integer *); - static logical notran; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - March 31, 1993 - - - Purpose - ======= - - DGETRS solves a system of linear equations - A * X = B or A' * X = B - with a general N-by-N matrix A using the LU factorization computed - by DGETRF. - - Arguments - ========= - - TRANS (input) CHARACTER*1 - Specifies the form of the system of equations: - = 'N': A * X = B (No transpose) - = 'T': A'* X = B (Transpose) - = 'C': A'* X = B (Conjugate transpose = Transpose) - - N (input) INTEGER - The order of the matrix A. N >= 0. - - NRHS (input) INTEGER - The number of right hand sides, i.e., the number of columns - of the matrix B. NRHS >= 0. - - A (input) DOUBLE PRECISION array, dimension (LDA,N) - The factors L and U from the factorization A = P*L*U - as computed by DGETRF. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). 
- - IPIV (input) INTEGER array, dimension (N) - The pivot indices from DGETRF; for 1<=i<=N, row i of the - matrix was interchanged with row IPIV(i). - - B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS) - On entry, the right hand side matrix B. - On exit, the solution matrix X. - - LDB (input) INTEGER - The leading dimension of the array B. LDB >= max(1,N). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --ipiv; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - - /* Function Body */ - *info = 0; - notran = lsame_(trans, "N"); - if (((! notran && ! lsame_(trans, "T")) && ! lsame_( - trans, "C"))) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*nrhs < 0) { - *info = -3; - } else if (*lda < max(1,*n)) { - *info = -5; - } else if (*ldb < max(1,*n)) { - *info = -8; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DGETRS", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0 || *nrhs == 0) { - return 0; - } - - if (notran) { - -/* - Solve A * X = B. - - Apply row interchanges to the right hand sides. -*/ - - dlaswp_(nrhs, &b[b_offset], ldb, &c__1, n, &ipiv[1], &c__1); - -/* Solve L*X = B, overwriting B with X. */ - - dtrsm_("Left", "Lower", "No transpose", "Unit", n, nrhs, &c_b15, &a[ - a_offset], lda, &b[b_offset], ldb); - -/* Solve U*X = B, overwriting B with X. */ - - dtrsm_("Left", "Upper", "No transpose", "Non-unit", n, nrhs, &c_b15, & - a[a_offset], lda, &b[b_offset], ldb); - } else { - -/* - Solve A' * X = B. - - Solve U'*X = B, overwriting B with X. -*/ - - dtrsm_("Left", "Upper", "Transpose", "Non-unit", n, nrhs, &c_b15, &a[ - a_offset], lda, &b[b_offset], ldb); - -/* Solve L'*X = B, overwriting B with X. 
*/ - - dtrsm_("Left", "Lower", "Transpose", "Unit", n, nrhs, &c_b15, &a[ - a_offset], lda, &b[b_offset], ldb); - -/* Apply row interchanges to the solution vectors. */ - - dlaswp_(nrhs, &b[b_offset], ldb, &c__1, n, &ipiv[1], &c_n1); - } - - return 0; - -/* End of DGETRS */ - -} /* dgetrs_ */ - -/* Subroutine */ int dhseqr_(char *job, char *compz, integer *n, integer *ilo, - integer *ihi, doublereal *h__, integer *ldh, doublereal *wr, - doublereal *wi, doublereal *z__, integer *ldz, doublereal *work, - integer *lwork, integer *info) -{ - /* System generated locals */ - address a__1[2]; - integer h_dim1, h_offset, z_dim1, z_offset, i__1, i__2, i__3[2], i__4, - i__5; - doublereal d__1, d__2; - char ch__1[2]; - - /* Builtin functions */ - /* Subroutine */ int s_cat(char *, char **, integer *, integer *, ftnlen); - - /* Local variables */ - static integer i__, j, k, l; - static doublereal s[225] /* was [15][15] */, v[16]; - static integer i1, i2, ii, nh, nr, ns, nv; - static doublereal vv[16]; - static integer itn; - static doublereal tau; - static integer its; - static doublereal ulp, tst1; - static integer maxb; - static doublereal absw; - static integer ierr; - static doublereal unfl, temp, ovfl; - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *); - extern logical lsame_(char *, char *); - extern /* Subroutine */ int dgemv_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, doublereal *, integer *, - doublereal *, doublereal *, integer *); - static integer itemp; - extern /* Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *); - static logical initz, wantt, wantz; - extern doublereal dlapy2_(doublereal *, doublereal *); - extern /* Subroutine */ int dlabad_(doublereal *, doublereal *); - - extern /* Subroutine */ int dlarfg_(integer *, doublereal *, doublereal *, - integer *, doublereal *); - extern integer idamax_(integer *, doublereal *, integer *); - extern doublereal 
dlanhs_(char *, integer *, doublereal *, integer *, - doublereal *); - extern /* Subroutine */ int dlahqr_(logical *, logical *, integer *, - integer *, integer *, doublereal *, integer *, doublereal *, - doublereal *, integer *, integer *, doublereal *, integer *, - integer *), dlacpy_(char *, integer *, integer *, doublereal *, - integer *, doublereal *, integer *), dlaset_(char *, - integer *, integer *, doublereal *, doublereal *, doublereal *, - integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int xerbla_(char *, integer *), dlarfx_( - char *, integer *, integer *, doublereal *, doublereal *, - doublereal *, integer *, doublereal *); - static doublereal smlnum; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DHSEQR computes the eigenvalues of a real upper Hessenberg matrix H - and, optionally, the matrices T and Z from the Schur decomposition - H = Z T Z**T, where T is an upper quasi-triangular matrix (the Schur - form), and Z is the orthogonal matrix of Schur vectors. - - Optionally Z may be postmultiplied into an input orthogonal matrix Q, - so that this routine can give the Schur factorization of a matrix A - which has been reduced to the Hessenberg form H by the orthogonal - matrix Q: A = Q*H*Q**T = (QZ)*T*(QZ)**T. - - Arguments - ========= - - JOB (input) CHARACTER*1 - = 'E': compute eigenvalues only; - = 'S': compute eigenvalues and the Schur form T. - - COMPZ (input) CHARACTER*1 - = 'N': no Schur vectors are computed; - = 'I': Z is initialized to the unit matrix and the matrix Z - of Schur vectors of H is returned; - = 'V': Z must contain an orthogonal matrix Q on entry, and - the product Q*Z is returned. - - N (input) INTEGER - The order of the matrix H. N >= 0. 
- - ILO (input) INTEGER - IHI (input) INTEGER - It is assumed that H is already upper triangular in rows - and columns 1:ILO-1 and IHI+1:N. ILO and IHI are normally - set by a previous call to DGEBAL, and then passed to DGEHRD - when the matrix output by DGEBAL is reduced to Hessenberg - form. Otherwise ILO and IHI should be set to 1 and N - respectively. - 1 <= ILO <= IHI <= N, if N > 0; ILO=1 and IHI=0, if N=0. - - H (input/output) DOUBLE PRECISION array, dimension (LDH,N) - On entry, the upper Hessenberg matrix H. - On exit, if JOB = 'S', H contains the upper quasi-triangular - matrix T from the Schur decomposition (the Schur form); - 2-by-2 diagonal blocks (corresponding to complex conjugate - pairs of eigenvalues) are returned in standard form, with - H(i,i) = H(i+1,i+1) and H(i+1,i)*H(i,i+1) < 0. If JOB = 'E', - the contents of H are unspecified on exit. - - LDH (input) INTEGER - The leading dimension of the array H. LDH >= max(1,N). - - WR (output) DOUBLE PRECISION array, dimension (N) - WI (output) DOUBLE PRECISION array, dimension (N) - The real and imaginary parts, respectively, of the computed - eigenvalues. If two eigenvalues are computed as a complex - conjugate pair, they are stored in consecutive elements of - WR and WI, say the i-th and (i+1)th, with WI(i) > 0 and - WI(i+1) < 0. If JOB = 'S', the eigenvalues are stored in the - same order as on the diagonal of the Schur form returned in - H, with WR(i) = H(i,i) and, if H(i:i+1,i:i+1) is a 2-by-2 - diagonal block, WI(i) = sqrt(H(i+1,i)*H(i,i+1)) and - WI(i+1) = -WI(i). - - Z (input/output) DOUBLE PRECISION array, dimension (LDZ,N) - If COMPZ = 'N': Z is not referenced. - If COMPZ = 'I': on entry, Z need not be set, and on exit, Z - contains the orthogonal matrix Z of the Schur vectors of H. - If COMPZ = 'V': on entry Z must contain an N-by-N matrix Q, - which is assumed to be equal to the unit matrix except for - the submatrix Z(ILO:IHI,ILO:IHI); on exit Z contains Q*Z. 
- Normally Q is the orthogonal matrix generated by DORGHR after - the call to DGEHRD which formed the Hessenberg matrix H. - - LDZ (input) INTEGER - The leading dimension of the array Z. - LDZ >= max(1,N) if COMPZ = 'I' or 'V'; LDZ >= 1 otherwise. - - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= max(1,N). - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: if INFO = i, DHSEQR failed to compute all of the - eigenvalues in a total of 30*(IHI-ILO+1) iterations; - elements 1:ilo-1 and i+1:n of WR and WI contain those - eigenvalues which have been successfully computed. - - ===================================================================== - - - Decode and test the input parameters -*/ - - /* Parameter adjustments */ - h_dim1 = *ldh; - h_offset = 1 + h_dim1 * 1; - h__ -= h_offset; - --wr; - --wi; - z_dim1 = *ldz; - z_offset = 1 + z_dim1 * 1; - z__ -= z_offset; - --work; - - /* Function Body */ - wantt = lsame_(job, "S"); - initz = lsame_(compz, "I"); - wantz = initz || lsame_(compz, "V"); - - *info = 0; - work[1] = (doublereal) max(1,*n); - lquery = *lwork == -1; - if ((! lsame_(job, "E") && ! wantt)) { - *info = -1; - } else if ((! lsame_(compz, "N") && ! wantz)) { - *info = -2; - } else if (*n < 0) { - *info = -3; - } else if (*ilo < 1 || *ilo > max(1,*n)) { - *info = -4; - } else if (*ihi < min(*ilo,*n) || *ihi > *n) { - *info = -5; - } else if (*ldh < max(1,*n)) { - *info = -7; - } else if (*ldz < 1 || (wantz && *ldz < max(1,*n))) { - *info = -11; - } else if ((*lwork < max(1,*n) && ! 
lquery)) { - *info = -13; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DHSEQR", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Initialize Z, if necessary */ - - if (initz) { - dlaset_("Full", n, n, &c_b29, &c_b15, &z__[z_offset], ldz); - } - -/* Store the eigenvalues isolated by DGEBAL. */ - - i__1 = *ilo - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - wr[i__] = h__[i__ + i__ * h_dim1]; - wi[i__] = 0.; -/* L10: */ - } - i__1 = *n; - for (i__ = *ihi + 1; i__ <= i__1; ++i__) { - wr[i__] = h__[i__ + i__ * h_dim1]; - wi[i__] = 0.; -/* L20: */ - } - -/* Quick return if possible. */ - - if (*n == 0) { - return 0; - } - if (*ilo == *ihi) { - wr[*ilo] = h__[*ilo + *ilo * h_dim1]; - wi[*ilo] = 0.; - return 0; - } - -/* - Set rows and columns ILO to IHI to zero below the first - subdiagonal. -*/ - - i__1 = *ihi - 2; - for (j = *ilo; j <= i__1; ++j) { - i__2 = *n; - for (i__ = j + 2; i__ <= i__2; ++i__) { - h__[i__ + j * h_dim1] = 0.; -/* L30: */ - } -/* L40: */ - } - nh = *ihi - *ilo + 1; - -/* - Determine the order of the multi-shift QR algorithm to be used. - - Writing concatenation -*/ - i__3[0] = 1, a__1[0] = job; - i__3[1] = 1, a__1[1] = compz; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - ns = ilaenv_(&c__4, "DHSEQR", ch__1, n, ilo, ihi, &c_n1, (ftnlen)6, ( - ftnlen)2); -/* Writing concatenation */ - i__3[0] = 1, a__1[0] = job; - i__3[1] = 1, a__1[1] = compz; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - maxb = ilaenv_(&c__8, "DHSEQR", ch__1, n, ilo, ihi, &c_n1, (ftnlen)6, ( - ftnlen)2); - if (ns <= 2 || ns > nh || maxb >= nh) { - -/* Use the standard double-shift algorithm */ - - dlahqr_(&wantt, &wantz, n, ilo, ihi, &h__[h_offset], ldh, &wr[1], &wi[ - 1], ilo, ihi, &z__[z_offset], ldz, info); - return 0; - } - maxb = max(3,maxb); -/* Computing MIN */ - i__1 = min(ns,maxb); - ns = min(i__1,15); - -/* - Now 2 < NS <= MAXB < NH. - - Set machine-dependent constants for the stopping criterion. 
- If norm(H) <= sqrt(OVFL), overflow should not occur. -*/ - - unfl = SAFEMINIMUM; - ovfl = 1. / unfl; - dlabad_(&unfl, &ovfl); - ulp = PRECISION; - smlnum = unfl * (nh / ulp); - -/* - I1 and I2 are the indices of the first row and last column of H - to which transformations must be applied. If eigenvalues only are - being computed, I1 and I2 are set inside the main loop. -*/ - - if (wantt) { - i1 = 1; - i2 = *n; - } - -/* ITN is the total number of multiple-shift QR iterations allowed. */ - - itn = nh * 30; - -/* - The main loop begins here. I is the loop index and decreases from - IHI to ILO in steps of at most MAXB. Each iteration of the loop - works with the active submatrix in rows and columns L to I. - Eigenvalues I+1 to IHI have already converged. Either L = ILO or - H(L,L-1) is negligible so that the matrix splits. -*/ - - i__ = *ihi; -L50: - l = *ilo; - if (i__ < *ilo) { - goto L170; - } - -/* - Perform multiple-shift QR iterations on rows and columns ILO to I - until a submatrix of order at most MAXB splits off at the bottom - because a subdiagonal element has become negligible. -*/ - - i__1 = itn; - for (its = 0; its <= i__1; ++its) { - -/* Look for a single small subdiagonal element. */ - - i__2 = l + 1; - for (k = i__; k >= i__2; --k) { - tst1 = (d__1 = h__[k - 1 + (k - 1) * h_dim1], abs(d__1)) + (d__2 = - h__[k + k * h_dim1], abs(d__2)); - if (tst1 == 0.) { - i__4 = i__ - l + 1; - tst1 = dlanhs_("1", &i__4, &h__[l + l * h_dim1], ldh, &work[1] - ); - } -/* Computing MAX */ - d__2 = ulp * tst1; - if ((d__1 = h__[k + (k - 1) * h_dim1], abs(d__1)) <= max(d__2, - smlnum)) { - goto L70; - } -/* L60: */ - } -L70: - l = k; - if (l > *ilo) { - -/* H(L,L-1) is negligible. */ - - h__[l + (l - 1) * h_dim1] = 0.; - } - -/* Exit from loop if a submatrix of order <= MAXB has split off. */ - - if (l >= i__ - maxb + 1) { - goto L160; - } - -/* - Now the active submatrix is in rows and columns L to I. 
If - eigenvalues only are being computed, only the active submatrix - need be transformed. -*/ - - if (! wantt) { - i1 = l; - i2 = i__; - } - - if (its == 20 || its == 30) { - -/* Exceptional shifts. */ - - i__2 = i__; - for (ii = i__ - ns + 1; ii <= i__2; ++ii) { - wr[ii] = ((d__1 = h__[ii + (ii - 1) * h_dim1], abs(d__1)) + ( - d__2 = h__[ii + ii * h_dim1], abs(d__2))) * 1.5; - wi[ii] = 0.; -/* L80: */ - } - } else { - -/* Use eigenvalues of trailing submatrix of order NS as shifts. */ - - dlacpy_("Full", &ns, &ns, &h__[i__ - ns + 1 + (i__ - ns + 1) * - h_dim1], ldh, s, &c__15); - dlahqr_(&c_false, &c_false, &ns, &c__1, &ns, s, &c__15, &wr[i__ - - ns + 1], &wi[i__ - ns + 1], &c__1, &ns, &z__[z_offset], - ldz, &ierr); - if (ierr > 0) { - -/* - If DLAHQR failed to compute all NS eigenvalues, use the - unconverged diagonal elements as the remaining shifts. -*/ - - i__2 = ierr; - for (ii = 1; ii <= i__2; ++ii) { - wr[i__ - ns + ii] = s[ii + ii * 15 - 16]; - wi[i__ - ns + ii] = 0.; -/* L90: */ - } - } - } - -/* - Form the first column of (G-w(1)) (G-w(2)) . . . (G-w(ns)) - where G is the Hessenberg submatrix H(L:I,L:I) and w is - the vector of shifts (stored in WR and WI). The result is - stored in the local array V. -*/ - - v[0] = 1.; - i__2 = ns + 1; - for (ii = 2; ii <= i__2; ++ii) { - v[ii - 1] = 0.; -/* L100: */ - } - nv = 1; - i__2 = i__; - for (j = i__ - ns + 1; j <= i__2; ++j) { - if (wi[j] >= 0.) { - if (wi[j] == 0.) { - -/* real shift */ - - i__4 = nv + 1; - dcopy_(&i__4, v, &c__1, vv, &c__1); - i__4 = nv + 1; - d__1 = -wr[j]; - dgemv_("No transpose", &i__4, &nv, &c_b15, &h__[l + l * - h_dim1], ldh, vv, &c__1, &d__1, v, &c__1); - ++nv; - } else if (wi[j] > 0.) 
{ - -/* complex conjugate pair of shifts */ - - i__4 = nv + 1; - dcopy_(&i__4, v, &c__1, vv, &c__1); - i__4 = nv + 1; - d__1 = wr[j] * -2.; - dgemv_("No transpose", &i__4, &nv, &c_b15, &h__[l + l * - h_dim1], ldh, v, &c__1, &d__1, vv, &c__1); - i__4 = nv + 1; - itemp = idamax_(&i__4, vv, &c__1); -/* Computing MAX */ - d__2 = (d__1 = vv[itemp - 1], abs(d__1)); - temp = 1. / max(d__2,smlnum); - i__4 = nv + 1; - dscal_(&i__4, &temp, vv, &c__1); - absw = dlapy2_(&wr[j], &wi[j]); - temp = temp * absw * absw; - i__4 = nv + 2; - i__5 = nv + 1; - dgemv_("No transpose", &i__4, &i__5, &c_b15, &h__[l + l * - h_dim1], ldh, vv, &c__1, &temp, v, &c__1); - nv += 2; - } - -/* - Scale V(1:NV) so that max(abs(V(i))) = 1. If V is zero, - reset it to the unit vector. -*/ - - itemp = idamax_(&nv, v, &c__1); - temp = (d__1 = v[itemp - 1], abs(d__1)); - if (temp == 0.) { - v[0] = 1.; - i__4 = nv; - for (ii = 2; ii <= i__4; ++ii) { - v[ii - 1] = 0.; -/* L110: */ - } - } else { - temp = max(temp,smlnum); - d__1 = 1. / temp; - dscal_(&nv, &d__1, v, &c__1); - } - } -/* L120: */ - } - -/* Multiple-shift QR step */ - - i__2 = i__ - 1; - for (k = l; k <= i__2; ++k) { - -/* - The first iteration of this loop determines a reflection G - from the vector V and applies it from left and right to H, - thus creating a nonzero bulge below the subdiagonal. - - Each subsequent iteration determines a reflection G to - restore the Hessenberg form in the (K-1)th column, and thus - chases the bulge one step toward the bottom of the active - submatrix. NR is the order of G. 
- - Computing MIN -*/ - i__4 = ns + 1, i__5 = i__ - k + 1; - nr = min(i__4,i__5); - if (k > l) { - dcopy_(&nr, &h__[k + (k - 1) * h_dim1], &c__1, v, &c__1); - } - dlarfg_(&nr, v, &v[1], &c__1, &tau); - if (k > l) { - h__[k + (k - 1) * h_dim1] = v[0]; - i__4 = i__; - for (ii = k + 1; ii <= i__4; ++ii) { - h__[ii + (k - 1) * h_dim1] = 0.; -/* L130: */ - } - } - v[0] = 1.; - -/* - Apply G from the left to transform the rows of the matrix in - columns K to I2. -*/ - - i__4 = i2 - k + 1; - dlarfx_("Left", &nr, &i__4, v, &tau, &h__[k + k * h_dim1], ldh, & - work[1]); - -/* - Apply G from the right to transform the columns of the - matrix in rows I1 to min(K+NR,I). - - Computing MIN -*/ - i__5 = k + nr; - i__4 = min(i__5,i__) - i1 + 1; - dlarfx_("Right", &i__4, &nr, v, &tau, &h__[i1 + k * h_dim1], ldh, - &work[1]); - - if (wantz) { - -/* Accumulate transformations in the matrix Z */ - - dlarfx_("Right", &nh, &nr, v, &tau, &z__[*ilo + k * z_dim1], - ldz, &work[1]); - } -/* L140: */ - } - -/* L150: */ - } - -/* Failure to converge in remaining number of iterations */ - - *info = i__; - return 0; - -L160: - -/* - A submatrix of order <= MAXB in rows and columns L to I has split - off. Use the double-shift QR algorithm to handle it. -*/ - - dlahqr_(&wantt, &wantz, n, &l, &i__, &h__[h_offset], ldh, &wr[1], &wi[1], - ilo, ihi, &z__[z_offset], ldz, info); - if (*info > 0) { - return 0; - } - -/* - Decrement number of remaining iterations, and return to start of - the main loop with a new value of I. -*/ - - itn -= its; - i__ = l - 1; - goto L50; - -L170: - work[1] = (doublereal) max(1,*n); - return 0; - -/* End of DHSEQR */ - -} /* dhseqr_ */ - -/* Subroutine */ int dlabad_(doublereal *small, doublereal *large) -{ - /* Builtin functions */ - double d_lg10(doublereal *), sqrt(doublereal); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLABAD takes as input the values computed by DLAMCH for underflow and - overflow, and returns the square root of each of these values if the - log of LARGE is sufficiently large. This subroutine is intended to - identify machines with a large exponent range, such as the Crays, and - redefine the underflow and overflow limits to be the square roots of - the values computed by DLAMCH. This subroutine is needed because - DLAMCH does not compensate for poor arithmetic in the upper half of - the exponent range, as is found on a Cray. - - Arguments - ========= - - SMALL (input/output) DOUBLE PRECISION - On entry, the underflow threshold as computed by DLAMCH. - On exit, if LOG10(LARGE) is sufficiently large, the square - root of SMALL, otherwise unchanged. - - LARGE (input/output) DOUBLE PRECISION - On entry, the overflow threshold as computed by DLAMCH. - On exit, if LOG10(LARGE) is sufficiently large, the square - root of LARGE, otherwise unchanged. - - ===================================================================== - - - If it looks like we're on a Cray, take the square root of - SMALL and LARGE to avoid overflow and underflow problems. 
-*/ - - if (d_lg10(large) > 2e3) { - *small = sqrt(*small); - *large = sqrt(*large); - } - - return 0; - -/* End of DLABAD */ - -} /* dlabad_ */ - -/* Subroutine */ int dlabrd_(integer *m, integer *n, integer *nb, doublereal * - a, integer *lda, doublereal *d__, doublereal *e, doublereal *tauq, - doublereal *taup, doublereal *x, integer *ldx, doublereal *y, integer - *ldy) -{ - /* System generated locals */ - integer a_dim1, a_offset, x_dim1, x_offset, y_dim1, y_offset, i__1, i__2, - i__3; - - /* Local variables */ - static integer i__; - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *), dgemv_(char *, integer *, integer *, doublereal *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - doublereal *, integer *), dlarfg_(integer *, doublereal *, - doublereal *, integer *, doublereal *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - DLABRD reduces the first NB rows and columns of a real general - m by n matrix A to upper or lower bidiagonal form by an orthogonal - transformation Q' * A * P, and returns the matrices X and Y which - are needed to apply the transformation to the unreduced part of A. - - If m >= n, A is reduced to upper bidiagonal form; if m < n, to lower - bidiagonal form. - - This is an auxiliary routine called by DGEBRD - - Arguments - ========= - - M (input) INTEGER - The number of rows in the matrix A. - - N (input) INTEGER - The number of columns in the matrix A. - - NB (input) INTEGER - The number of leading rows and columns of A to be reduced. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the m by n general matrix to be reduced. - On exit, the first NB rows and columns of the matrix are - overwritten; the rest of the array is unchanged. 
- If m >= n, elements on and below the diagonal in the first NB - columns, with the array TAUQ, represent the orthogonal - matrix Q as a product of elementary reflectors; and - elements above the diagonal in the first NB rows, with the - array TAUP, represent the orthogonal matrix P as a product - of elementary reflectors. - If m < n, elements below the diagonal in the first NB - columns, with the array TAUQ, represent the orthogonal - matrix Q as a product of elementary reflectors, and - elements on and above the diagonal in the first NB rows, - with the array TAUP, represent the orthogonal matrix P as - a product of elementary reflectors. - See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - D (output) DOUBLE PRECISION array, dimension (NB) - The diagonal elements of the first NB rows and columns of - the reduced matrix. D(i) = A(i,i). - - E (output) DOUBLE PRECISION array, dimension (NB) - The off-diagonal elements of the first NB rows and columns of - the reduced matrix. - - TAUQ (output) DOUBLE PRECISION array dimension (NB) - The scalar factors of the elementary reflectors which - represent the orthogonal matrix Q. See Further Details. - - TAUP (output) DOUBLE PRECISION array, dimension (NB) - The scalar factors of the elementary reflectors which - represent the orthogonal matrix P. See Further Details. - - X (output) DOUBLE PRECISION array, dimension (LDX,NB) - The m-by-nb matrix X required to update the unreduced part - of A. - - LDX (input) INTEGER - The leading dimension of the array X. LDX >= M. - - Y (output) DOUBLE PRECISION array, dimension (LDY,NB) - The n-by-nb matrix Y required to update the unreduced part - of A. - - LDY (output) INTEGER - The leading dimension of the array Y. LDY >= N. - - Further Details - =============== - - The matrices Q and P are represented as products of elementary - reflectors: - - Q = H(1) H(2) . . . H(nb) and P = G(1) G(2) . . . 
G(nb) - - Each H(i) and G(i) has the form: - - H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' - - where tauq and taup are real scalars, and v and u are real vectors. - - If m >= n, v(1:i-1) = 0, v(i) = 1, and v(i:m) is stored on exit in - A(i:m,i); u(1:i) = 0, u(i+1) = 1, and u(i+1:n) is stored on exit in - A(i,i+1:n); tauq is stored in TAUQ(i) and taup in TAUP(i). - - If m < n, v(1:i) = 0, v(i+1) = 1, and v(i+1:m) is stored on exit in - A(i+2:m,i); u(1:i-1) = 0, u(i) = 1, and u(i:n) is stored on exit in - A(i,i+1:n); tauq is stored in TAUQ(i) and taup in TAUP(i). - - The elements of the vectors v and u together form the m-by-nb matrix - V and the nb-by-n matrix U' which are needed, with X and Y, to apply - the transformation to the unreduced part of the matrix, using a block - update of the form: A := A - V*Y' - X*U'. - - The contents of A on exit are illustrated by the following examples - with nb = 2: - - m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): - - ( 1 1 u1 u1 u1 ) ( 1 u1 u1 u1 u1 u1 ) - ( v1 1 1 u2 u2 ) ( 1 1 u2 u2 u2 u2 ) - ( v1 v2 a a a ) ( v1 1 a a a a ) - ( v1 v2 a a a ) ( v1 v2 a a a a ) - ( v1 v2 a a a ) ( v1 v2 a a a a ) - ( v1 v2 a a a ) - - where a denotes an element of the original matrix which is unchanged, - vi denotes an element of the vector defining H(i), and ui an element - of the vector defining G(i). 
- - ===================================================================== - - - Quick return if possible -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --d__; - --e; - --tauq; - --taup; - x_dim1 = *ldx; - x_offset = 1 + x_dim1 * 1; - x -= x_offset; - y_dim1 = *ldy; - y_offset = 1 + y_dim1 * 1; - y -= y_offset; - - /* Function Body */ - if (*m <= 0 || *n <= 0) { - return 0; - } - - if (*m >= *n) { - -/* Reduce to upper bidiagonal form */ - - i__1 = *nb; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Update A(i:m,i) */ - - i__2 = *m - i__ + 1; - i__3 = i__ - 1; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &a[i__ + a_dim1], - lda, &y[i__ + y_dim1], ldy, &c_b15, &a[i__ + i__ * a_dim1] - , &c__1); - i__2 = *m - i__ + 1; - i__3 = i__ - 1; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &x[i__ + x_dim1], - ldx, &a[i__ * a_dim1 + 1], &c__1, &c_b15, &a[i__ + i__ * - a_dim1], &c__1); - -/* Generate reflection Q(i) to annihilate A(i+1:m,i) */ - - i__2 = *m - i__ + 1; -/* Computing MIN */ - i__3 = i__ + 1; - dlarfg_(&i__2, &a[i__ + i__ * a_dim1], &a[min(i__3,*m) + i__ * - a_dim1], &c__1, &tauq[i__]); - d__[i__] = a[i__ + i__ * a_dim1]; - if (i__ < *n) { - a[i__ + i__ * a_dim1] = 1.; - -/* Compute Y(i+1:n,i) */ - - i__2 = *m - i__ + 1; - i__3 = *n - i__; - dgemv_("Transpose", &i__2, &i__3, &c_b15, &a[i__ + (i__ + 1) * - a_dim1], lda, &a[i__ + i__ * a_dim1], &c__1, &c_b29, - &y[i__ + 1 + i__ * y_dim1], &c__1); - i__2 = *m - i__ + 1; - i__3 = i__ - 1; - dgemv_("Transpose", &i__2, &i__3, &c_b15, &a[i__ + a_dim1], - lda, &a[i__ + i__ * a_dim1], &c__1, &c_b29, &y[i__ * - y_dim1 + 1], &c__1); - i__2 = *n - i__; - i__3 = i__ - 1; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &y[i__ + 1 + - y_dim1], ldy, &y[i__ * y_dim1 + 1], &c__1, &c_b15, &y[ - i__ + 1 + i__ * y_dim1], &c__1); - i__2 = *m - i__ + 1; - i__3 = i__ - 1; - dgemv_("Transpose", &i__2, &i__3, &c_b15, &x[i__ + x_dim1], - ldx, &a[i__ + i__ * a_dim1], &c__1, &c_b29, 
&y[i__ * - y_dim1 + 1], &c__1); - i__2 = i__ - 1; - i__3 = *n - i__; - dgemv_("Transpose", &i__2, &i__3, &c_b151, &a[(i__ + 1) * - a_dim1 + 1], lda, &y[i__ * y_dim1 + 1], &c__1, &c_b15, - &y[i__ + 1 + i__ * y_dim1], &c__1); - i__2 = *n - i__; - dscal_(&i__2, &tauq[i__], &y[i__ + 1 + i__ * y_dim1], &c__1); - -/* Update A(i,i+1:n) */ - - i__2 = *n - i__; - dgemv_("No transpose", &i__2, &i__, &c_b151, &y[i__ + 1 + - y_dim1], ldy, &a[i__ + a_dim1], lda, &c_b15, &a[i__ + - (i__ + 1) * a_dim1], lda); - i__2 = i__ - 1; - i__3 = *n - i__; - dgemv_("Transpose", &i__2, &i__3, &c_b151, &a[(i__ + 1) * - a_dim1 + 1], lda, &x[i__ + x_dim1], ldx, &c_b15, &a[ - i__ + (i__ + 1) * a_dim1], lda); - -/* Generate reflection P(i) to annihilate A(i,i+2:n) */ - - i__2 = *n - i__; -/* Computing MIN */ - i__3 = i__ + 2; - dlarfg_(&i__2, &a[i__ + (i__ + 1) * a_dim1], &a[i__ + min( - i__3,*n) * a_dim1], lda, &taup[i__]); - e[i__] = a[i__ + (i__ + 1) * a_dim1]; - a[i__ + (i__ + 1) * a_dim1] = 1.; - -/* Compute X(i+1:m,i) */ - - i__2 = *m - i__; - i__3 = *n - i__; - dgemv_("No transpose", &i__2, &i__3, &c_b15, &a[i__ + 1 + ( - i__ + 1) * a_dim1], lda, &a[i__ + (i__ + 1) * a_dim1], - lda, &c_b29, &x[i__ + 1 + i__ * x_dim1], &c__1); - i__2 = *n - i__; - dgemv_("Transpose", &i__2, &i__, &c_b15, &y[i__ + 1 + y_dim1], - ldy, &a[i__ + (i__ + 1) * a_dim1], lda, &c_b29, &x[ - i__ * x_dim1 + 1], &c__1); - i__2 = *m - i__; - dgemv_("No transpose", &i__2, &i__, &c_b151, &a[i__ + 1 + - a_dim1], lda, &x[i__ * x_dim1 + 1], &c__1, &c_b15, &x[ - i__ + 1 + i__ * x_dim1], &c__1); - i__2 = i__ - 1; - i__3 = *n - i__; - dgemv_("No transpose", &i__2, &i__3, &c_b15, &a[(i__ + 1) * - a_dim1 + 1], lda, &a[i__ + (i__ + 1) * a_dim1], lda, & - c_b29, &x[i__ * x_dim1 + 1], &c__1); - i__2 = *m - i__; - i__3 = i__ - 1; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &x[i__ + 1 + - x_dim1], ldx, &x[i__ * x_dim1 + 1], &c__1, &c_b15, &x[ - i__ + 1 + i__ * x_dim1], &c__1); - i__2 = *m - i__; - dscal_(&i__2, &taup[i__], &x[i__ + 
1 + i__ * x_dim1], &c__1); - } -/* L10: */ - } - } else { - -/* Reduce to lower bidiagonal form */ - - i__1 = *nb; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Update A(i,i:n) */ - - i__2 = *n - i__ + 1; - i__3 = i__ - 1; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &y[i__ + y_dim1], - ldy, &a[i__ + a_dim1], lda, &c_b15, &a[i__ + i__ * a_dim1] - , lda); - i__2 = i__ - 1; - i__3 = *n - i__ + 1; - dgemv_("Transpose", &i__2, &i__3, &c_b151, &a[i__ * a_dim1 + 1], - lda, &x[i__ + x_dim1], ldx, &c_b15, &a[i__ + i__ * a_dim1] - , lda); - -/* Generate reflection P(i) to annihilate A(i,i+1:n) */ - - i__2 = *n - i__ + 1; -/* Computing MIN */ - i__3 = i__ + 1; - dlarfg_(&i__2, &a[i__ + i__ * a_dim1], &a[i__ + min(i__3,*n) * - a_dim1], lda, &taup[i__]); - d__[i__] = a[i__ + i__ * a_dim1]; - if (i__ < *m) { - a[i__ + i__ * a_dim1] = 1.; - -/* Compute X(i+1:m,i) */ - - i__2 = *m - i__; - i__3 = *n - i__ + 1; - dgemv_("No transpose", &i__2, &i__3, &c_b15, &a[i__ + 1 + i__ - * a_dim1], lda, &a[i__ + i__ * a_dim1], lda, &c_b29, & - x[i__ + 1 + i__ * x_dim1], &c__1); - i__2 = *n - i__ + 1; - i__3 = i__ - 1; - dgemv_("Transpose", &i__2, &i__3, &c_b15, &y[i__ + y_dim1], - ldy, &a[i__ + i__ * a_dim1], lda, &c_b29, &x[i__ * - x_dim1 + 1], &c__1); - i__2 = *m - i__; - i__3 = i__ - 1; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &a[i__ + 1 + - a_dim1], lda, &x[i__ * x_dim1 + 1], &c__1, &c_b15, &x[ - i__ + 1 + i__ * x_dim1], &c__1); - i__2 = i__ - 1; - i__3 = *n - i__ + 1; - dgemv_("No transpose", &i__2, &i__3, &c_b15, &a[i__ * a_dim1 - + 1], lda, &a[i__ + i__ * a_dim1], lda, &c_b29, &x[ - i__ * x_dim1 + 1], &c__1); - i__2 = *m - i__; - i__3 = i__ - 1; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &x[i__ + 1 + - x_dim1], ldx, &x[i__ * x_dim1 + 1], &c__1, &c_b15, &x[ - i__ + 1 + i__ * x_dim1], &c__1); - i__2 = *m - i__; - dscal_(&i__2, &taup[i__], &x[i__ + 1 + i__ * x_dim1], &c__1); - -/* Update A(i+1:m,i) */ - - i__2 = *m - i__; - i__3 = i__ - 1; - dgemv_("No transpose", &i__2, 
&i__3, &c_b151, &a[i__ + 1 + - a_dim1], lda, &y[i__ + y_dim1], ldy, &c_b15, &a[i__ + - 1 + i__ * a_dim1], &c__1); - i__2 = *m - i__; - dgemv_("No transpose", &i__2, &i__, &c_b151, &x[i__ + 1 + - x_dim1], ldx, &a[i__ * a_dim1 + 1], &c__1, &c_b15, &a[ - i__ + 1 + i__ * a_dim1], &c__1); - -/* Generate reflection Q(i) to annihilate A(i+2:m,i) */ - - i__2 = *m - i__; -/* Computing MIN */ - i__3 = i__ + 2; - dlarfg_(&i__2, &a[i__ + 1 + i__ * a_dim1], &a[min(i__3,*m) + - i__ * a_dim1], &c__1, &tauq[i__]); - e[i__] = a[i__ + 1 + i__ * a_dim1]; - a[i__ + 1 + i__ * a_dim1] = 1.; - -/* Compute Y(i+1:n,i) */ - - i__2 = *m - i__; - i__3 = *n - i__; - dgemv_("Transpose", &i__2, &i__3, &c_b15, &a[i__ + 1 + (i__ + - 1) * a_dim1], lda, &a[i__ + 1 + i__ * a_dim1], &c__1, - &c_b29, &y[i__ + 1 + i__ * y_dim1], &c__1); - i__2 = *m - i__; - i__3 = i__ - 1; - dgemv_("Transpose", &i__2, &i__3, &c_b15, &a[i__ + 1 + a_dim1] - , lda, &a[i__ + 1 + i__ * a_dim1], &c__1, &c_b29, &y[ - i__ * y_dim1 + 1], &c__1); - i__2 = *n - i__; - i__3 = i__ - 1; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &y[i__ + 1 + - y_dim1], ldy, &y[i__ * y_dim1 + 1], &c__1, &c_b15, &y[ - i__ + 1 + i__ * y_dim1], &c__1); - i__2 = *m - i__; - dgemv_("Transpose", &i__2, &i__, &c_b15, &x[i__ + 1 + x_dim1], - ldx, &a[i__ + 1 + i__ * a_dim1], &c__1, &c_b29, &y[ - i__ * y_dim1 + 1], &c__1); - i__2 = *n - i__; - dgemv_("Transpose", &i__, &i__2, &c_b151, &a[(i__ + 1) * - a_dim1 + 1], lda, &y[i__ * y_dim1 + 1], &c__1, &c_b15, - &y[i__ + 1 + i__ * y_dim1], &c__1); - i__2 = *n - i__; - dscal_(&i__2, &tauq[i__], &y[i__ + 1 + i__ * y_dim1], &c__1); - } -/* L20: */ - } - } - return 0; - -/* End of DLABRD */ - -} /* dlabrd_ */ - -/* Subroutine */ int dlacpy_(char *uplo, integer *m, integer *n, doublereal * - a, integer *lda, doublereal *b, integer *ldb) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, i__1, i__2; - - /* Local variables */ - static integer i__, j; - extern logical lsame_(char *, char 
*); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - DLACPY copies all or part of a two-dimensional matrix A to another - matrix B. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - Specifies the part of the matrix A to be copied to B. - = 'U': Upper triangular part - = 'L': Lower triangular part - Otherwise: All of the matrix A - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input) DOUBLE PRECISION array, dimension (LDA,N) - The m by n matrix A. If UPLO = 'U', only the upper triangle - or trapezoid is accessed; if UPLO = 'L', only the lower - triangle or trapezoid is accessed. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - B (output) DOUBLE PRECISION array, dimension (LDB,N) - On exit, B = A in the locations specified by UPLO. - - LDB (input) INTEGER - The leading dimension of the array B. LDB >= max(1,M). 
- - ===================================================================== -*/ - - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - - /* Function Body */ - if (lsame_(uplo, "U")) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = min(j,*m); - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] = a[i__ + j * a_dim1]; -/* L10: */ - } -/* L20: */ - } - } else if (lsame_(uplo, "L")) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = j; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] = a[i__ + j * a_dim1]; -/* L30: */ - } -/* L40: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - b[i__ + j * b_dim1] = a[i__ + j * a_dim1]; -/* L50: */ - } -/* L60: */ - } - } - return 0; - -/* End of DLACPY */ - -} /* dlacpy_ */ - -/* Subroutine */ int dladiv_(doublereal *a, doublereal *b, doublereal *c__, - doublereal *d__, doublereal *p, doublereal *q) -{ - static doublereal e, f; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLADIV performs complex division in real arithmetic - - a + i*b - p + i*q = --------- - c + i*d - - The algorithm is due to Robert L. Smith and can be found - in D. Knuth, The art of Computer Programming, Vol.2, p.195 - - Arguments - ========= - - A (input) DOUBLE PRECISION - B (input) DOUBLE PRECISION - C (input) DOUBLE PRECISION - D (input) DOUBLE PRECISION - The scalars a, b, c, and d in the above expression. - - P (output) DOUBLE PRECISION - Q (output) DOUBLE PRECISION - The scalars p and q in the above expression. 
- - ===================================================================== -*/ - - - if (abs(*d__) < abs(*c__)) { - e = *d__ / *c__; - f = *c__ + *d__ * e; - *p = (*a + *b * e) / f; - *q = (*b - *a * e) / f; - } else { - e = *c__ / *d__; - f = *d__ + *c__ * e; - *p = (*b + *a * e) / f; - *q = (-(*a) + *b * e) / f; - } - - return 0; - -/* End of DLADIV */ - -} /* dladiv_ */ - -/* Subroutine */ int dlae2_(doublereal *a, doublereal *b, doublereal *c__, - doublereal *rt1, doublereal *rt2) -{ - /* System generated locals */ - doublereal d__1; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal ab, df, tb, sm, rt, adf, acmn, acmx; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLAE2 computes the eigenvalues of a 2-by-2 symmetric matrix - [ A B ] - [ B C ]. - On return, RT1 is the eigenvalue of larger absolute value, and RT2 - is the eigenvalue of smaller absolute value. - - Arguments - ========= - - A (input) DOUBLE PRECISION - The (1,1) element of the 2-by-2 matrix. - - B (input) DOUBLE PRECISION - The (1,2) and (2,1) elements of the 2-by-2 matrix. - - C (input) DOUBLE PRECISION - The (2,2) element of the 2-by-2 matrix. - - RT1 (output) DOUBLE PRECISION - The eigenvalue of larger absolute value. - - RT2 (output) DOUBLE PRECISION - The eigenvalue of smaller absolute value. - - Further Details - =============== - - RT1 is accurate to a few ulps barring over/underflow. - - RT2 may be inaccurate if there is massive cancellation in the - determinant A*C-B*B; higher precision or correctly rounded or - correctly truncated arithmetic would be needed to compute RT2 - accurately in all cases. - - Overflow is possible only if RT1 is within a factor of 5 of overflow. - Underflow is harmless if the input data is 0 or exceeds - underflow_threshold / macheps. 
- - ===================================================================== - - - Compute the eigenvalues -*/ - - sm = *a + *c__; - df = *a - *c__; - adf = abs(df); - tb = *b + *b; - ab = abs(tb); - if (abs(*a) > abs(*c__)) { - acmx = *a; - acmn = *c__; - } else { - acmx = *c__; - acmn = *a; - } - if (adf > ab) { -/* Computing 2nd power */ - d__1 = ab / adf; - rt = adf * sqrt(d__1 * d__1 + 1.); - } else if (adf < ab) { -/* Computing 2nd power */ - d__1 = adf / ab; - rt = ab * sqrt(d__1 * d__1 + 1.); - } else { - -/* Includes case AB=ADF=0 */ - - rt = ab * sqrt(2.); - } - if (sm < 0.) { - *rt1 = (sm - rt) * .5; - -/* - Order of execution important. - To get fully accurate smaller eigenvalue, - next line needs to be executed in higher precision. -*/ - - *rt2 = acmx / *rt1 * acmn - *b / *rt1 * *b; - } else if (sm > 0.) { - *rt1 = (sm + rt) * .5; - -/* - Order of execution important. - To get fully accurate smaller eigenvalue, - next line needs to be executed in higher precision. -*/ - - *rt2 = acmx / *rt1 * acmn - *b / *rt1 * *b; - } else { - -/* Includes case RT1 = RT2 = 0 */ - - *rt1 = rt * .5; - *rt2 = rt * -.5; - } - return 0; - -/* End of DLAE2 */ - -} /* dlae2_ */ - -/* Subroutine */ int dlaed0_(integer *icompq, integer *qsiz, integer *n, - doublereal *d__, doublereal *e, doublereal *q, integer *ldq, - doublereal *qstore, integer *ldqs, doublereal *work, integer *iwork, - integer *info) -{ - /* System generated locals */ - integer q_dim1, q_offset, qstore_dim1, qstore_offset, i__1, i__2; - doublereal d__1; - - /* Builtin functions */ - double log(doublereal); - integer pow_ii(integer *, integer *); - - /* Local variables */ - static integer i__, j, k, iq, lgn, msd2, smm1, spm1, spm2; - static doublereal temp; - static integer curr; - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *); - static integer iperm; - extern /* 
Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *); - static integer indxq, iwrem; - extern /* Subroutine */ int dlaed1_(integer *, doublereal *, doublereal *, - integer *, integer *, doublereal *, integer *, doublereal *, - integer *, integer *); - static integer iqptr; - extern /* Subroutine */ int dlaed7_(integer *, integer *, integer *, - integer *, integer *, integer *, doublereal *, doublereal *, - integer *, integer *, doublereal *, integer *, doublereal *, - integer *, integer *, integer *, integer *, integer *, doublereal - *, doublereal *, integer *, integer *); - static integer tlvls; - extern /* Subroutine */ int dlacpy_(char *, integer *, integer *, - doublereal *, integer *, doublereal *, integer *); - static integer igivcl; - extern /* Subroutine */ int xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static integer igivnm, submat, curprb, subpbs, igivpt; - extern /* Subroutine */ int dsteqr_(char *, integer *, doublereal *, - doublereal *, doublereal *, integer *, doublereal *, integer *); - static integer curlvl, matsiz, iprmpt, smlsiz; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DLAED0 computes all eigenvalues and corresponding eigenvectors of a - symmetric tridiagonal matrix using the divide and conquer method. - - Arguments - ========= - - ICOMPQ (input) INTEGER - = 0: Compute eigenvalues only. - = 1: Compute eigenvectors of original dense symmetric matrix - also. On entry, Q contains the orthogonal matrix used - to reduce the original matrix to tridiagonal form. - = 2: Compute eigenvalues and eigenvectors of tridiagonal - matrix. - - QSIZ (input) INTEGER - The dimension of the orthogonal matrix used to reduce - the full matrix to tridiagonal form. 
QSIZ >= N if ICOMPQ = 1. - - N (input) INTEGER - The dimension of the symmetric tridiagonal matrix. N >= 0. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the main diagonal of the tridiagonal matrix. - On exit, its eigenvalues. - - E (input) DOUBLE PRECISION array, dimension (N-1) - The off-diagonal elements of the tridiagonal matrix. - On exit, E has been destroyed. - - Q (input/output) DOUBLE PRECISION array, dimension (LDQ, N) - On entry, Q must contain an N-by-N orthogonal matrix. - If ICOMPQ = 0 Q is not referenced. - If ICOMPQ = 1 On entry, Q is a subset of the columns of the - orthogonal matrix used to reduce the full - matrix to tridiagonal form corresponding to - the subset of the full matrix which is being - decomposed at this time. - If ICOMPQ = 2 On entry, Q will be the identity matrix. - On exit, Q contains the eigenvectors of the - tridiagonal matrix. - - LDQ (input) INTEGER - The leading dimension of the array Q. If eigenvectors are - desired, then LDQ >= max(1,N). In any case, LDQ >= 1. - - QSTORE (workspace) DOUBLE PRECISION array, dimension (LDQS, N) - Referenced only when ICOMPQ = 1. Used to store parts of - the eigenvector matrix when the updating matrix multiplies - take place. - - LDQS (input) INTEGER - The leading dimension of the array QSTORE. If ICOMPQ = 1, - then LDQS >= max(1,N). In any case, LDQS >= 1. - - WORK (workspace) DOUBLE PRECISION array, - If ICOMPQ = 0 or 1, the dimension of WORK must be at least - 1 + 3*N + 2*N*lg N + 2*N**2 - ( lg( N ) = smallest integer k - such that 2^k >= N ) - If ICOMPQ = 2, the dimension of WORK must be at least - 4*N + N**2. - - IWORK (workspace) INTEGER array, - If ICOMPQ = 0 or 1, the dimension of IWORK must be at least - 6 + 6*N + 5*N*lg N. - ( lg( N ) = smallest integer k - such that 2^k >= N ) - If ICOMPQ = 2, the dimension of IWORK must be at least - 3 + 5*N. - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. 
- > 0: The algorithm failed to compute an eigenvalue while - working on the submatrix lying in rows and columns - INFO/(N+1) through mod(INFO,N+1). - - Further Details - =============== - - Based on contributions by - Jeff Rutter, Computer Science Division, University of California - at Berkeley, USA - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --d__; - --e; - q_dim1 = *ldq; - q_offset = 1 + q_dim1 * 1; - q -= q_offset; - qstore_dim1 = *ldqs; - qstore_offset = 1 + qstore_dim1 * 1; - qstore -= qstore_offset; - --work; - --iwork; - - /* Function Body */ - *info = 0; - - if (*icompq < 0 || *icompq > 2) { - *info = -1; - } else if ((*icompq == 1 && *qsiz < max(0,*n))) { - *info = -2; - } else if (*n < 0) { - *info = -3; - } else if (*ldq < max(1,*n)) { - *info = -7; - } else if (*ldqs < max(1,*n)) { - *info = -9; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLAED0", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - - smlsiz = ilaenv_(&c__9, "DLAED0", " ", &c__0, &c__0, &c__0, &c__0, ( - ftnlen)6, (ftnlen)1); - -/* - Determine the size and placement of the submatrices, and save in - the leading elements of IWORK. -*/ - - iwork[1] = *n; - subpbs = 1; - tlvls = 0; -L10: - if (iwork[subpbs] > smlsiz) { - for (j = subpbs; j >= 1; --j) { - iwork[j * 2] = (iwork[j] + 1) / 2; - iwork[((j) << (1)) - 1] = iwork[j] / 2; -/* L20: */ - } - ++tlvls; - subpbs <<= 1; - goto L10; - } - i__1 = subpbs; - for (j = 2; j <= i__1; ++j) { - iwork[j] += iwork[j - 1]; -/* L30: */ - } - -/* - Divide the matrix into SUBPBS submatrices of size at most SMLSIZ+1 - using rank-1 modifications (cuts). 
-*/ - - spm1 = subpbs - 1; - i__1 = spm1; - for (i__ = 1; i__ <= i__1; ++i__) { - submat = iwork[i__] + 1; - smm1 = submat - 1; - d__[smm1] -= (d__1 = e[smm1], abs(d__1)); - d__[submat] -= (d__1 = e[smm1], abs(d__1)); -/* L40: */ - } - - indxq = ((*n) << (2)) + 3; - if (*icompq != 2) { - -/* - Set up workspaces for eigenvalues only/accumulate new vectors - routine -*/ - - temp = log((doublereal) (*n)) / log(2.); - lgn = (integer) temp; - if (pow_ii(&c__2, &lgn) < *n) { - ++lgn; - } - if (pow_ii(&c__2, &lgn) < *n) { - ++lgn; - } - iprmpt = indxq + *n + 1; - iperm = iprmpt + *n * lgn; - iqptr = iperm + *n * lgn; - igivpt = iqptr + *n + 2; - igivcl = igivpt + *n * lgn; - - igivnm = 1; - iq = igivnm + ((*n) << (1)) * lgn; -/* Computing 2nd power */ - i__1 = *n; - iwrem = iq + i__1 * i__1 + 1; - -/* Initialize pointers */ - - i__1 = subpbs; - for (i__ = 0; i__ <= i__1; ++i__) { - iwork[iprmpt + i__] = 1; - iwork[igivpt + i__] = 1; -/* L50: */ - } - iwork[iqptr] = 1; - } - -/* - Solve each submatrix eigenproblem at the bottom of the divide and - conquer tree. 
-*/ - - curr = 0; - i__1 = spm1; - for (i__ = 0; i__ <= i__1; ++i__) { - if (i__ == 0) { - submat = 1; - matsiz = iwork[1]; - } else { - submat = iwork[i__] + 1; - matsiz = iwork[i__ + 1] - iwork[i__]; - } - if (*icompq == 2) { - dsteqr_("I", &matsiz, &d__[submat], &e[submat], &q[submat + - submat * q_dim1], ldq, &work[1], info); - if (*info != 0) { - goto L130; - } - } else { - dsteqr_("I", &matsiz, &d__[submat], &e[submat], &work[iq - 1 + - iwork[iqptr + curr]], &matsiz, &work[1], info); - if (*info != 0) { - goto L130; - } - if (*icompq == 1) { - dgemm_("N", "N", qsiz, &matsiz, &matsiz, &c_b15, &q[submat * - q_dim1 + 1], ldq, &work[iq - 1 + iwork[iqptr + curr]], - &matsiz, &c_b29, &qstore[submat * qstore_dim1 + 1], - ldqs); - } -/* Computing 2nd power */ - i__2 = matsiz; - iwork[iqptr + curr + 1] = iwork[iqptr + curr] + i__2 * i__2; - ++curr; - } - k = 1; - i__2 = iwork[i__ + 1]; - for (j = submat; j <= i__2; ++j) { - iwork[indxq + j] = k; - ++k; -/* L60: */ - } -/* L70: */ - } - -/* - Successively merge eigensystems of adjacent submatrices - into eigensystem for the corresponding larger matrix. - - while ( SUBPBS > 1 ) -*/ - - curlvl = 1; -L80: - if (subpbs > 1) { - spm2 = subpbs - 2; - i__1 = spm2; - for (i__ = 0; i__ <= i__1; i__ += 2) { - if (i__ == 0) { - submat = 1; - matsiz = iwork[2]; - msd2 = iwork[1]; - curprb = 0; - } else { - submat = iwork[i__] + 1; - matsiz = iwork[i__ + 2] - iwork[i__]; - msd2 = matsiz / 2; - ++curprb; - } - -/* - Merge lower order eigensystems (of size MSD2 and MATSIZ - MSD2) - into an eigensystem of size MATSIZ. - DLAED1 is used only for the full eigensystem of a tridiagonal - matrix. - DLAED7 handles the cases in which eigenvalues only or eigenvalues - and eigenvectors of a full symmetric matrix (which was reduced to - tridiagonal form) are desired. 
-*/ - - if (*icompq == 2) { - dlaed1_(&matsiz, &d__[submat], &q[submat + submat * q_dim1], - ldq, &iwork[indxq + submat], &e[submat + msd2 - 1], & - msd2, &work[1], &iwork[subpbs + 1], info); - } else { - dlaed7_(icompq, &matsiz, qsiz, &tlvls, &curlvl, &curprb, &d__[ - submat], &qstore[submat * qstore_dim1 + 1], ldqs, & - iwork[indxq + submat], &e[submat + msd2 - 1], &msd2, & - work[iq], &iwork[iqptr], &iwork[iprmpt], &iwork[iperm] - , &iwork[igivpt], &iwork[igivcl], &work[igivnm], & - work[iwrem], &iwork[subpbs + 1], info); - } - if (*info != 0) { - goto L130; - } - iwork[i__ / 2 + 1] = iwork[i__ + 2]; -/* L90: */ - } - subpbs /= 2; - ++curlvl; - goto L80; - } - -/* - end while - - Re-merge the eigenvalues/vectors which were deflated at the final - merge step. -*/ - - if (*icompq == 1) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - j = iwork[indxq + i__]; - work[i__] = d__[j]; - dcopy_(qsiz, &qstore[j * qstore_dim1 + 1], &c__1, &q[i__ * q_dim1 - + 1], &c__1); -/* L100: */ - } - dcopy_(n, &work[1], &c__1, &d__[1], &c__1); - } else if (*icompq == 2) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - j = iwork[indxq + i__]; - work[i__] = d__[j]; - dcopy_(n, &q[j * q_dim1 + 1], &c__1, &work[*n * i__ + 1], &c__1); -/* L110: */ - } - dcopy_(n, &work[1], &c__1, &d__[1], &c__1); - dlacpy_("A", n, n, &work[*n + 1], n, &q[q_offset], ldq); - } else { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - j = iwork[indxq + i__]; - work[i__] = d__[j]; -/* L120: */ - } - dcopy_(n, &work[1], &c__1, &d__[1], &c__1); - } - goto L140; - -L130: - *info = submat * (*n + 1) + submat + matsiz - 1; - -L140: - return 0; - -/* End of DLAED0 */ - -} /* dlaed0_ */ - -/* Subroutine */ int dlaed1_(integer *n, doublereal *d__, doublereal *q, - integer *ldq, integer *indxq, doublereal *rho, integer *cutpnt, - doublereal *work, integer *iwork, integer *info) -{ - /* System generated locals */ - integer q_dim1, q_offset, i__1, i__2; - - /* Local variables */ - static integer i__, k, n1, 
n2, is, iw, iz, iq2, zpp1, indx, indxc; - extern /* Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *); - static integer indxp; - extern /* Subroutine */ int dlaed2_(integer *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, doublereal *, - doublereal *, doublereal *, doublereal *, doublereal *, integer *, - integer *, integer *, integer *, integer *), dlaed3_(integer *, - integer *, integer *, doublereal *, doublereal *, integer *, - doublereal *, doublereal *, doublereal *, integer *, integer *, - doublereal *, doublereal *, integer *); - static integer idlmda; - extern /* Subroutine */ int dlamrg_(integer *, integer *, doublereal *, - integer *, integer *, integer *), xerbla_(char *, integer *); - static integer coltyp; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DLAED1 computes the updated eigensystem of a diagonal - matrix after modification by a rank-one symmetric matrix. This - routine is used only for the eigenproblem which requires all - eigenvalues and eigenvectors of a tridiagonal matrix. DLAED7 handles - the case in which eigenvalues only or eigenvalues and eigenvectors - of a full symmetric matrix (which was reduced to tridiagonal form) - are desired. - - T = Q(in) ( D(in) + RHO * Z*Z' ) Q'(in) = Q(out) * D(out) * Q'(out) - - where Z = Q'u, u is a vector of length N with ones in the - CUTPNT and CUTPNT + 1 th elements and zeros elsewhere. - - The eigenvectors of the original matrix are stored in Q, and the - eigenvalues are in D. The algorithm consists of three stages: - - The first stage consists of deflating the size of the problem - when there are multiple eigenvalues or if there is a zero in - the Z vector. For each such occurence the dimension of the - secular equation problem is reduced by one. 
This stage is - performed by the routine DLAED2. - - The second stage consists of calculating the updated - eigenvalues. This is done by finding the roots of the secular - equation via the routine DLAED4 (as called by DLAED3). - This routine also calculates the eigenvectors of the current - problem. - - The final stage consists of computing the updated eigenvectors - directly using the updated eigenvalues. The eigenvectors for - the current problem are multiplied with the eigenvectors from - the overall problem. - - Arguments - ========= - - N (input) INTEGER - The dimension of the symmetric tridiagonal matrix. N >= 0. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the eigenvalues of the rank-1-perturbed matrix. - On exit, the eigenvalues of the repaired matrix. - - Q (input/output) DOUBLE PRECISION array, dimension (LDQ,N) - On entry, the eigenvectors of the rank-1-perturbed matrix. - On exit, the eigenvectors of the repaired tridiagonal matrix. - - LDQ (input) INTEGER - The leading dimension of the array Q. LDQ >= max(1,N). - - INDXQ (input/output) INTEGER array, dimension (N) - On entry, the permutation which separately sorts the two - subproblems in D into ascending order. - On exit, the permutation which will reintegrate the - subproblems back into sorted order, - i.e. D( INDXQ( I = 1, N ) ) will be in ascending order. - - RHO (input) DOUBLE PRECISION - The subdiagonal entry used to create the rank-1 modification. - - CUTPNT (input) INTEGER - The location of the last eigenvalue in the leading sub-matrix. - min(1,N) <= CUTPNT <= N/2. - - WORK (workspace) DOUBLE PRECISION array, dimension (4*N + N**2) - - IWORK (workspace) INTEGER array, dimension (4*N) - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. 
- > 0: if INFO = 1, an eigenvalue did not converge - - Further Details - =============== - - Based on contributions by - Jeff Rutter, Computer Science Division, University of California - at Berkeley, USA - Modified by Francoise Tisseur, University of Tennessee. - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --d__; - q_dim1 = *ldq; - q_offset = 1 + q_dim1 * 1; - q -= q_offset; - --indxq; - --work; - --iwork; - - /* Function Body */ - *info = 0; - - if (*n < 0) { - *info = -1; - } else if (*ldq < max(1,*n)) { - *info = -4; - } else /* if(complicated condition) */ { -/* Computing MIN */ - i__1 = 1, i__2 = *n / 2; - if (min(i__1,i__2) > *cutpnt || *n / 2 < *cutpnt) { - *info = -7; - } - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLAED1", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - -/* - The following values are integer pointers which indicate - the portion of the workspace - used by a particular array in DLAED2 and DLAED3. -*/ - - iz = 1; - idlmda = iz + *n; - iw = idlmda + *n; - iq2 = iw + *n; - - indx = 1; - indxc = indx + *n; - coltyp = indxc + *n; - indxp = coltyp + *n; - - -/* - Form the z-vector which consists of the last row of Q_1 and the - first row of Q_2. -*/ - - dcopy_(cutpnt, &q[*cutpnt + q_dim1], ldq, &work[iz], &c__1); - zpp1 = *cutpnt + 1; - i__1 = *n - *cutpnt; - dcopy_(&i__1, &q[zpp1 + zpp1 * q_dim1], ldq, &work[iz + *cutpnt], &c__1); - -/* Deflate eigenvalues. */ - - dlaed2_(&k, n, cutpnt, &d__[1], &q[q_offset], ldq, &indxq[1], rho, &work[ - iz], &work[idlmda], &work[iw], &work[iq2], &iwork[indx], &iwork[ - indxc], &iwork[indxp], &iwork[coltyp], info); - - if (*info != 0) { - goto L20; - } - -/* Solve Secular Equation. 
*/ - - if (k != 0) { - is = (iwork[coltyp] + iwork[coltyp + 1]) * *cutpnt + (iwork[coltyp + - 1] + iwork[coltyp + 2]) * (*n - *cutpnt) + iq2; - dlaed3_(&k, n, cutpnt, &d__[1], &q[q_offset], ldq, rho, &work[idlmda], - &work[iq2], &iwork[indxc], &iwork[coltyp], &work[iw], &work[ - is], info); - if (*info != 0) { - goto L20; - } - -/* Prepare the INDXQ sorting permutation. */ - - n1 = k; - n2 = *n - k; - dlamrg_(&n1, &n2, &d__[1], &c__1, &c_n1, &indxq[1]); - } else { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - indxq[i__] = i__; -/* L10: */ - } - } - -L20: - return 0; - -/* End of DLAED1 */ - -} /* dlaed1_ */ - -/* Subroutine */ int dlaed2_(integer *k, integer *n, integer *n1, doublereal * - d__, doublereal *q, integer *ldq, integer *indxq, doublereal *rho, - doublereal *z__, doublereal *dlamda, doublereal *w, doublereal *q2, - integer *indx, integer *indxc, integer *indxp, integer *coltyp, - integer *info) -{ - /* System generated locals */ - integer q_dim1, q_offset, i__1, i__2; - doublereal d__1, d__2, d__3, d__4; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal c__; - static integer i__, j; - static doublereal s, t; - static integer k2, n2, ct, nj, pj, js, iq1, iq2, n1p1; - static doublereal eps, tau, tol; - static integer psm[4], imax, jmax; - extern /* Subroutine */ int drot_(integer *, doublereal *, integer *, - doublereal *, integer *, doublereal *, doublereal *); - static integer ctot[4]; - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *), dcopy_(integer *, doublereal *, integer *, doublereal - *, integer *); - - extern integer idamax_(integer *, doublereal *, integer *); - extern /* Subroutine */ int dlamrg_(integer *, integer *, doublereal *, - integer *, integer *, integer *), dlacpy_(char *, integer *, - integer *, doublereal *, integer *, doublereal *, integer *), xerbla_(char *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. 
of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1999 - - - Purpose - ======= - - DLAED2 merges the two sets of eigenvalues together into a single - sorted set. Then it tries to deflate the size of the problem. - There are two ways in which deflation can occur: when two or more - eigenvalues are close together or if there is a tiny entry in the - Z vector. For each such occurrence the order of the related secular - equation problem is reduced by one. - - Arguments - ========= - - K (output) INTEGER - The number of non-deflated eigenvalues, and the order of the - related secular equation. 0 <= K <=N. - - N (input) INTEGER - The dimension of the symmetric tridiagonal matrix. N >= 0. - - N1 (input) INTEGER - The location of the last eigenvalue in the leading sub-matrix. - min(1,N) <= N1 <= N/2. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, D contains the eigenvalues of the two submatrices to - be combined. - On exit, D contains the trailing (N-K) updated eigenvalues - (those which were deflated) sorted into increasing order. - - Q (input/output) DOUBLE PRECISION array, dimension (LDQ, N) - On entry, Q contains the eigenvectors of two submatrices in - the two square blocks with corners at (1,1), (N1,N1) - and (N1+1, N1+1), (N,N). - On exit, Q contains the trailing (N-K) updated eigenvectors - (those which were deflated) in its last N-K columns. - - LDQ (input) INTEGER - The leading dimension of the array Q. LDQ >= max(1,N). - - INDXQ (input/output) INTEGER array, dimension (N) - The permutation which separately sorts the two sub-problems - in D into ascending order. Note that elements in the second - half of this permutation must first have N1 added to their - values. Destroyed on exit. 
- - RHO (input/output) DOUBLE PRECISION - On entry, the off-diagonal element associated with the rank-1 - cut which originally split the two submatrices which are now - being recombined. - On exit, RHO has been modified to the value required by - DLAED3. - - Z (input) DOUBLE PRECISION array, dimension (N) - On entry, Z contains the updating vector (the last - row of the first sub-eigenvector matrix and the first row of - the second sub-eigenvector matrix). - On exit, the contents of Z have been destroyed by the updating - process. - - DLAMDA (output) DOUBLE PRECISION array, dimension (N) - A copy of the first K eigenvalues which will be used by - DLAED3 to form the secular equation. - - W (output) DOUBLE PRECISION array, dimension (N) - The first k values of the final deflation-altered z-vector - which will be passed to DLAED3. - - Q2 (output) DOUBLE PRECISION array, dimension (N1**2+(N-N1)**2) - A copy of the first K eigenvectors which will be used by - DLAED3 in a matrix multiply (DGEMM) to solve for the new - eigenvectors. - - INDX (workspace) INTEGER array, dimension (N) - The permutation used to sort the contents of DLAMDA into - ascending order. - - INDXC (output) INTEGER array, dimension (N) - The permutation used to arrange the columns of the deflated - Q matrix into three groups: the first group contains non-zero - elements only at and above N1, the second contains - non-zero elements only below N1, and the third is dense. - - INDXP (workspace) INTEGER array, dimension (N) - The permutation used to place deflated values of D at the end - of the array. INDXP(1:K) points to the nondeflated D-values - and INDXP(K+1:N) points to the deflated eigenvalues. - - COLTYP (workspace/output) INTEGER array, dimension (N) - During execution, a label which will indicate which of the - following types a column in the Q2 matrix is: - 1 : non-zero in the upper half only; - 2 : dense; - 3 : non-zero in the lower half only; - 4 : deflated. 
- On exit, COLTYP(i) is the number of columns of type i, - for i=1 to 4 only. - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - Based on contributions by - Jeff Rutter, Computer Science Division, University of California - at Berkeley, USA - Modified by Francoise Tisseur, University of Tennessee. - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --d__; - q_dim1 = *ldq; - q_offset = 1 + q_dim1 * 1; - q -= q_offset; - --indxq; - --z__; - --dlamda; - --w; - --q2; - --indx; - --indxc; - --indxp; - --coltyp; - - /* Function Body */ - *info = 0; - - if (*n < 0) { - *info = -2; - } else if (*ldq < max(1,*n)) { - *info = -6; - } else /* if(complicated condition) */ { -/* Computing MIN */ - i__1 = 1, i__2 = *n / 2; - if (min(i__1,i__2) > *n1 || *n / 2 < *n1) { - *info = -3; - } - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLAED2", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - - n2 = *n - *n1; - n1p1 = *n1 + 1; - - if (*rho < 0.) { - dscal_(&n2, &c_b151, &z__[n1p1], &c__1); - } - -/* - Normalize z so that norm(z) = 1. Since z is the concatenation of - two normalized vectors, norm2(z) = sqrt(2). -*/ - - t = 1. 
/ sqrt(2.); - dscal_(n, &t, &z__[1], &c__1); - -/* RHO = ABS( norm(z)**2 * RHO ) */ - - *rho = (d__1 = *rho * 2., abs(d__1)); - -/* Sort the eigenvalues into increasing order */ - - i__1 = *n; - for (i__ = n1p1; i__ <= i__1; ++i__) { - indxq[i__] += *n1; -/* L10: */ - } - -/* re-integrate the deflated parts from the last pass */ - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - dlamda[i__] = d__[indxq[i__]]; -/* L20: */ - } - dlamrg_(n1, &n2, &dlamda[1], &c__1, &c__1, &indxc[1]); - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - indx[i__] = indxq[indxc[i__]]; -/* L30: */ - } - -/* Calculate the allowable deflation tolerance */ - - imax = idamax_(n, &z__[1], &c__1); - jmax = idamax_(n, &d__[1], &c__1); - eps = EPSILON; -/* Computing MAX */ - d__3 = (d__1 = d__[jmax], abs(d__1)), d__4 = (d__2 = z__[imax], abs(d__2)) - ; - tol = eps * 8. * max(d__3,d__4); - -/* - If the rank-1 modifier is small enough, no more needs to be done - except to reorganize Q so that its columns correspond with the - elements in D. -*/ - - if (*rho * (d__1 = z__[imax], abs(d__1)) <= tol) { - *k = 0; - iq2 = 1; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__ = indx[j]; - dcopy_(n, &q[i__ * q_dim1 + 1], &c__1, &q2[iq2], &c__1); - dlamda[j] = d__[i__]; - iq2 += *n; -/* L40: */ - } - dlacpy_("A", n, n, &q2[1], n, &q[q_offset], ldq); - dcopy_(n, &dlamda[1], &c__1, &d__[1], &c__1); - goto L190; - } - -/* - If there are multiple eigenvalues then the problem deflates. Here - the number of equal eigenvalues are found. As each equal - eigenvalue is found, an elementary reflector is computed to rotate - the corresponding eigensubspace so that the corresponding - components of Z are zero in this new basis. 
-*/ - - i__1 = *n1; - for (i__ = 1; i__ <= i__1; ++i__) { - coltyp[i__] = 1; -/* L50: */ - } - i__1 = *n; - for (i__ = n1p1; i__ <= i__1; ++i__) { - coltyp[i__] = 3; -/* L60: */ - } - - - *k = 0; - k2 = *n + 1; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - nj = indx[j]; - if (*rho * (d__1 = z__[nj], abs(d__1)) <= tol) { - -/* Deflate due to small z component. */ - - --k2; - coltyp[nj] = 4; - indxp[k2] = nj; - if (j == *n) { - goto L100; - } - } else { - pj = nj; - goto L80; - } -/* L70: */ - } -L80: - ++j; - nj = indx[j]; - if (j > *n) { - goto L100; - } - if (*rho * (d__1 = z__[nj], abs(d__1)) <= tol) { - -/* Deflate due to small z component. */ - - --k2; - coltyp[nj] = 4; - indxp[k2] = nj; - } else { - -/* Check if eigenvalues are close enough to allow deflation. */ - - s = z__[pj]; - c__ = z__[nj]; - -/* - Find sqrt(a**2+b**2) without overflow or - destructive underflow. -*/ - - tau = dlapy2_(&c__, &s); - t = d__[nj] - d__[pj]; - c__ /= tau; - s = -s / tau; - if ((d__1 = t * c__ * s, abs(d__1)) <= tol) { - -/* Deflation is possible. */ - - z__[nj] = tau; - z__[pj] = 0.; - if (coltyp[nj] != coltyp[pj]) { - coltyp[nj] = 2; - } - coltyp[pj] = 4; - drot_(n, &q[pj * q_dim1 + 1], &c__1, &q[nj * q_dim1 + 1], &c__1, & - c__, &s); -/* Computing 2nd power */ - d__1 = c__; -/* Computing 2nd power */ - d__2 = s; - t = d__[pj] * (d__1 * d__1) + d__[nj] * (d__2 * d__2); -/* Computing 2nd power */ - d__1 = s; -/* Computing 2nd power */ - d__2 = c__; - d__[nj] = d__[pj] * (d__1 * d__1) + d__[nj] * (d__2 * d__2); - d__[pj] = t; - --k2; - i__ = 1; -L90: - if (k2 + i__ <= *n) { - if (d__[pj] < d__[indxp[k2 + i__]]) { - indxp[k2 + i__ - 1] = indxp[k2 + i__]; - indxp[k2 + i__] = pj; - ++i__; - goto L90; - } else { - indxp[k2 + i__ - 1] = pj; - } - } else { - indxp[k2 + i__ - 1] = pj; - } - pj = nj; - } else { - ++(*k); - dlamda[*k] = d__[pj]; - w[*k] = z__[pj]; - indxp[*k] = pj; - pj = nj; - } - } - goto L80; -L100: - -/* Record the last eigenvalue. 
*/ - - ++(*k); - dlamda[*k] = d__[pj]; - w[*k] = z__[pj]; - indxp[*k] = pj; - -/* - Count up the total number of the various types of columns, then - form a permutation which positions the four column types into - four uniform groups (although one or more of these groups may be - empty). -*/ - - for (j = 1; j <= 4; ++j) { - ctot[j - 1] = 0; -/* L110: */ - } - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - ct = coltyp[j]; - ++ctot[ct - 1]; -/* L120: */ - } - -/* PSM(*) = Position in SubMatrix (of types 1 through 4) */ - - psm[0] = 1; - psm[1] = ctot[0] + 1; - psm[2] = psm[1] + ctot[1]; - psm[3] = psm[2] + ctot[2]; - *k = *n - ctot[3]; - -/* - Fill out the INDXC array so that the permutation which it induces - will place all type-1 columns first, all type-2 columns next, - then all type-3's, and finally all type-4's. -*/ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - js = indxp[j]; - ct = coltyp[js]; - indx[psm[ct - 1]] = js; - indxc[psm[ct - 1]] = j; - ++psm[ct - 1]; -/* L130: */ - } - -/* - Sort the eigenvalues and corresponding eigenvectors into DLAMDA - and Q2 respectively. The eigenvalues/vectors which were not - deflated go into the first K slots of DLAMDA and Q2 respectively, - while those which were deflated go into the last N - K slots. 
-*/ - - i__ = 1; - iq1 = 1; - iq2 = (ctot[0] + ctot[1]) * *n1 + 1; - i__1 = ctot[0]; - for (j = 1; j <= i__1; ++j) { - js = indx[i__]; - dcopy_(n1, &q[js * q_dim1 + 1], &c__1, &q2[iq1], &c__1); - z__[i__] = d__[js]; - ++i__; - iq1 += *n1; -/* L140: */ - } - - i__1 = ctot[1]; - for (j = 1; j <= i__1; ++j) { - js = indx[i__]; - dcopy_(n1, &q[js * q_dim1 + 1], &c__1, &q2[iq1], &c__1); - dcopy_(&n2, &q[*n1 + 1 + js * q_dim1], &c__1, &q2[iq2], &c__1); - z__[i__] = d__[js]; - ++i__; - iq1 += *n1; - iq2 += n2; -/* L150: */ - } - - i__1 = ctot[2]; - for (j = 1; j <= i__1; ++j) { - js = indx[i__]; - dcopy_(&n2, &q[*n1 + 1 + js * q_dim1], &c__1, &q2[iq2], &c__1); - z__[i__] = d__[js]; - ++i__; - iq2 += n2; -/* L160: */ - } - - iq1 = iq2; - i__1 = ctot[3]; - for (j = 1; j <= i__1; ++j) { - js = indx[i__]; - dcopy_(n, &q[js * q_dim1 + 1], &c__1, &q2[iq2], &c__1); - iq2 += *n; - z__[i__] = d__[js]; - ++i__; -/* L170: */ - } - -/* - The deflated eigenvalues and their corresponding vectors go back - into the last N - K slots of D and Q respectively. -*/ - - dlacpy_("A", n, &ctot[3], &q2[iq1], n, &q[(*k + 1) * q_dim1 + 1], ldq); - i__1 = *n - *k; - dcopy_(&i__1, &z__[*k + 1], &c__1, &d__[*k + 1], &c__1); - -/* Copy CTOT into COLTYP for referencing in DLAED3. 
*/ - - for (j = 1; j <= 4; ++j) { - coltyp[j] = ctot[j - 1]; -/* L180: */ - } - -L190: - return 0; - -/* End of DLAED2 */ - -} /* dlaed2_ */ - -/* Subroutine */ int dlaed3_(integer *k, integer *n, integer *n1, doublereal * - d__, doublereal *q, integer *ldq, doublereal *rho, doublereal *dlamda, - doublereal *q2, integer *indx, integer *ctot, doublereal *w, - doublereal *s, integer *info) -{ - /* System generated locals */ - integer q_dim1, q_offset, i__1, i__2; - doublereal d__1; - - /* Builtin functions */ - double sqrt(doublereal), d_sign(doublereal *, doublereal *); - - /* Local variables */ - static integer i__, j, n2, n12, ii, n23, iq2; - static doublereal temp; - extern doublereal dnrm2_(integer *, doublereal *, integer *); - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *), - dcopy_(integer *, doublereal *, integer *, doublereal *, integer - *), dlaed4_(integer *, integer *, doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *, integer *); - extern doublereal dlamc3_(doublereal *, doublereal *); - extern /* Subroutine */ int dlacpy_(char *, integer *, integer *, - doublereal *, integer *, doublereal *, integer *), - dlaset_(char *, integer *, integer *, doublereal *, doublereal *, - doublereal *, integer *), xerbla_(char *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Oak Ridge National Lab, Argonne National Lab, - Courant Institute, NAG Ltd., and Rice University - June 30, 1999 - - - Purpose - ======= - - DLAED3 finds the roots of the secular equation, as defined by the - values in D, W, and RHO, between 1 and K. It makes the - appropriate calls to DLAED4 and then updates the eigenvectors by - multiplying the matrix of eigenvectors of the pair of eigensystems - being combined by the matrix of eigenvectors of the K-by-K system - which is solved here. 
- - This code makes very mild assumptions about floating point - arithmetic. It will work on machines with a guard digit in - add/subtract, or on those binary machines without guard digits - which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. - It could conceivably fail on hexadecimal or decimal machines - without guard digits, but we know of none. - - Arguments - ========= - - K (input) INTEGER - The number of terms in the rational function to be solved by - DLAED4. K >= 0. - - N (input) INTEGER - The number of rows and columns in the Q matrix. - N >= K (deflation may result in N>K). - - N1 (input) INTEGER - The location of the last eigenvalue in the leading submatrix. - min(1,N) <= N1 <= N/2. - - D (output) DOUBLE PRECISION array, dimension (N) - D(I) contains the updated eigenvalues for - 1 <= I <= K. - - Q (output) DOUBLE PRECISION array, dimension (LDQ,N) - Initially the first K columns are used as workspace. - On output the columns 1 to K contain - the updated eigenvectors. - - LDQ (input) INTEGER - The leading dimension of the array Q. LDQ >= max(1,N). - - RHO (input) DOUBLE PRECISION - The value of the parameter in the rank one update equation. - RHO >= 0 required. - - DLAMDA (input/output) DOUBLE PRECISION array, dimension (K) - The first K elements of this array contain the old roots - of the deflated updating problem. These are the poles - of the secular equation. May be changed on output by - having lowest order bit set to zero on Cray X-MP, Cray Y-MP, - Cray-2, or Cray C-90, as described above. - - Q2 (input) DOUBLE PRECISION array, dimension (LDQ2, N) - The first K columns of this matrix contain the non-deflated - eigenvectors for the split problem. - - INDX (input) INTEGER array, dimension (N) - The permutation used to arrange the columns of the deflated - Q matrix into three groups (see DLAED2). - The rows of the eigenvectors found by DLAED4 must be likewise - permuted before the matrix multiply can take place. 
- - CTOT (input) INTEGER array, dimension (4) - A count of the total number of the various types of columns - in Q, as described in INDX. The fourth column type is any - column which has been deflated. - - W (input/output) DOUBLE PRECISION array, dimension (K) - The first K elements of this array contain the components - of the deflation-adjusted updating vector. Destroyed on - output. - - S (workspace) DOUBLE PRECISION array, dimension (N1 + 1)*K - Will contain the eigenvectors of the repaired matrix which - will be multiplied by the previously accumulated eigenvectors - to update the system. - - LDS (input) INTEGER - The leading dimension of S. LDS >= max(1,K). - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: if INFO = 1, an eigenvalue did not converge - - Further Details - =============== - - Based on contributions by - Jeff Rutter, Computer Science Division, University of California - at Berkeley, USA - Modified by Francoise Tisseur, University of Tennessee. - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --d__; - q_dim1 = *ldq; - q_offset = 1 + q_dim1 * 1; - q -= q_offset; - --dlamda; - --q2; - --indx; - --ctot; - --w; - --s; - - /* Function Body */ - *info = 0; - - if (*k < 0) { - *info = -1; - } else if (*n < *k) { - *info = -2; - } else if (*ldq < max(1,*n)) { - *info = -6; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLAED3", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*k == 0) { - return 0; - } - -/* - Modify values DLAMDA(i) to make sure all DLAMDA(i)-DLAMDA(j) can - be computed with high relative accuracy (barring over/underflow). - This is a problem on machines without a guard digit in - add/subtract (Cray XMP, Cray YMP, Cray C 90 and Cray 2). 
- The following code replaces DLAMDA(I) by 2*DLAMDA(I)-DLAMDA(I), - which on any of these machines zeros out the bottommost - bit of DLAMDA(I) if it is 1; this makes the subsequent - subtractions DLAMDA(I)-DLAMDA(J) unproblematic when cancellation - occurs. On binary machines with a guard digit (almost all - machines) it does not change DLAMDA(I) at all. On hexadecimal - and decimal machines with a guard digit, it slightly - changes the bottommost bits of DLAMDA(I). It does not account - for hexadecimal or decimal machines without guard digits - (we know of none). We use a subroutine call to compute - 2*DLAMBDA(I) to prevent optimizing compilers from eliminating - this code. -*/ - - i__1 = *k; - for (i__ = 1; i__ <= i__1; ++i__) { - dlamda[i__] = dlamc3_(&dlamda[i__], &dlamda[i__]) - dlamda[i__]; -/* L10: */ - } - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - dlaed4_(k, &j, &dlamda[1], &w[1], &q[j * q_dim1 + 1], rho, &d__[j], - info); - -/* If the zero finder fails, the computation is terminated. */ - - if (*info != 0) { - goto L120; - } -/* L20: */ - } - - if (*k == 1) { - goto L110; - } - if (*k == 2) { - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - w[1] = q[j * q_dim1 + 1]; - w[2] = q[j * q_dim1 + 2]; - ii = indx[1]; - q[j * q_dim1 + 1] = w[ii]; - ii = indx[2]; - q[j * q_dim1 + 2] = w[ii]; -/* L30: */ - } - goto L110; - } - -/* Compute updated W. 
*/ - - dcopy_(k, &w[1], &c__1, &s[1], &c__1); - -/* Initialize W(I) = Q(I,I) */ - - i__1 = *ldq + 1; - dcopy_(k, &q[q_offset], &i__1, &w[1], &c__1); - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - w[i__] *= q[i__ + j * q_dim1] / (dlamda[i__] - dlamda[j]); -/* L40: */ - } - i__2 = *k; - for (i__ = j + 1; i__ <= i__2; ++i__) { - w[i__] *= q[i__ + j * q_dim1] / (dlamda[i__] - dlamda[j]); -/* L50: */ - } -/* L60: */ - } - i__1 = *k; - for (i__ = 1; i__ <= i__1; ++i__) { - d__1 = sqrt(-w[i__]); - w[i__] = d_sign(&d__1, &s[i__]); -/* L70: */ - } - -/* Compute eigenvectors of the modified rank-1 modification. */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *k; - for (i__ = 1; i__ <= i__2; ++i__) { - s[i__] = w[i__] / q[i__ + j * q_dim1]; -/* L80: */ - } - temp = dnrm2_(k, &s[1], &c__1); - i__2 = *k; - for (i__ = 1; i__ <= i__2; ++i__) { - ii = indx[i__]; - q[i__ + j * q_dim1] = s[ii] / temp; -/* L90: */ - } -/* L100: */ - } - -/* Compute the updated eigenvectors. 
*/ - -L110: - - n2 = *n - *n1; - n12 = ctot[1] + ctot[2]; - n23 = ctot[2] + ctot[3]; - - dlacpy_("A", &n23, k, &q[ctot[1] + 1 + q_dim1], ldq, &s[1], &n23); - iq2 = *n1 * n12 + 1; - if (n23 != 0) { - dgemm_("N", "N", &n2, k, &n23, &c_b15, &q2[iq2], &n2, &s[1], &n23, & - c_b29, &q[*n1 + 1 + q_dim1], ldq); - } else { - dlaset_("A", &n2, k, &c_b29, &c_b29, &q[*n1 + 1 + q_dim1], ldq); - } - - dlacpy_("A", &n12, k, &q[q_offset], ldq, &s[1], &n12); - if (n12 != 0) { - dgemm_("N", "N", n1, k, &n12, &c_b15, &q2[1], n1, &s[1], &n12, &c_b29, - &q[q_offset], ldq); - } else { - dlaset_("A", n1, k, &c_b29, &c_b29, &q[q_dim1 + 1], ldq); - } - - -L120: - return 0; - -/* End of DLAED3 */ - -} /* dlaed3_ */ - -/* Subroutine */ int dlaed4_(integer *n, integer *i__, doublereal *d__, - doublereal *z__, doublereal *delta, doublereal *rho, doublereal *dlam, - integer *info) -{ - /* System generated locals */ - integer i__1; - doublereal d__1; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal a, b, c__; - static integer j; - static doublereal w; - static integer ii; - static doublereal dw, zz[3]; - static integer ip1; - static doublereal del, eta, phi, eps, tau, psi; - static integer iim1, iip1; - static doublereal dphi, dpsi; - static integer iter; - static doublereal temp, prew, temp1, dltlb, dltub, midpt; - static integer niter; - static logical swtch; - extern /* Subroutine */ int dlaed5_(integer *, doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *), dlaed6_(integer *, - logical *, doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *, integer *); - static logical swtch3; - - static logical orgati; - static doublereal erretm, rhoinv; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. 
of Tennessee, Oak Ridge National Lab, Argonne National Lab, - Courant Institute, NAG Ltd., and Rice University - December 23, 1999 - - - Purpose - ======= - - This subroutine computes the I-th updated eigenvalue of a symmetric - rank-one modification to a diagonal matrix whose elements are - given in the array d, and that - - D(i) < D(j) for i < j - - and that RHO > 0. This is arranged by the calling routine, and is - no loss in generality. The rank-one modified system is thus - - diag( D ) + RHO * Z * Z_transpose. - - where we assume the Euclidean norm of Z is 1. - - The method consists of approximating the rational functions in the - secular equation by simpler interpolating rational functions. - - Arguments - ========= - - N (input) INTEGER - The length of all arrays. - - I (input) INTEGER - The index of the eigenvalue to be computed. 1 <= I <= N. - - D (input) DOUBLE PRECISION array, dimension (N) - The original eigenvalues. It is assumed that they are in - order, D(I) < D(J) for I < J. - - Z (input) DOUBLE PRECISION array, dimension (N) - The components of the updating vector. - - DELTA (output) DOUBLE PRECISION array, dimension (N) - If N .ne. 1, DELTA contains (D(j) - lambda_I) in its j-th - component. If N = 1, then DELTA(1) = 1. The vector DELTA - contains the information necessary to construct the - eigenvectors. - - RHO (input) DOUBLE PRECISION - The scalar in the symmetric updating formula. - - DLAM (output) DOUBLE PRECISION - The computed lambda_I, the I-th updated eigenvalue. - - INFO (output) INTEGER - = 0: successful exit - > 0: if INFO = 1, the updating process failed. - - Internal Parameters - =================== - - Logical variable ORGATI (origin-at-i?) is used for distinguishing - whether D(i) or D(i+1) is treated as the origin. - - ORGATI = .true. origin at i - ORGATI = .false. origin at i+1 - - Logical variable SWTCH3 (switch-for-3-poles?) is for noting - if we are working with THREE poles! 
- - MAXIT is the maximum number of iterations allowed for each - eigenvalue. - - Further Details - =============== - - Based on contributions by - Ren-Cang Li, Computer Science Division, University of California - at Berkeley, USA - - ===================================================================== - - - Since this routine is called in an inner loop, we do no argument - checking. - - Quick return for N=1 and 2. -*/ - - /* Parameter adjustments */ - --delta; - --z__; - --d__; - - /* Function Body */ - *info = 0; - if (*n == 1) { - -/* Presumably, I=1 upon entry */ - - *dlam = d__[1] + *rho * z__[1] * z__[1]; - delta[1] = 1.; - return 0; - } - if (*n == 2) { - dlaed5_(i__, &d__[1], &z__[1], &delta[1], rho, dlam); - return 0; - } - -/* Compute machine epsilon */ - - eps = EPSILON; - rhoinv = 1. / *rho; - -/* The case I = N */ - - if (*i__ == *n) { - -/* Initialize some basic variables */ - - ii = *n - 1; - niter = 1; - -/* Calculate initial guess */ - - midpt = *rho / 2.; - -/* - If ||Z||_2 is not one, then TEMP should be set to - RHO * ||Z||_2^2 / TWO -*/ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - delta[j] = d__[j] - d__[*i__] - midpt; -/* L10: */ - } - - psi = 0.; - i__1 = *n - 2; - for (j = 1; j <= i__1; ++j) { - psi += z__[j] * z__[j] / delta[j]; -/* L20: */ - } - - c__ = rhoinv + psi; - w = c__ + z__[ii] * z__[ii] / delta[ii] + z__[*n] * z__[*n] / delta[* - n]; - - if (w <= 0.) { - temp = z__[*n - 1] * z__[*n - 1] / (d__[*n] - d__[*n - 1] + *rho) - + z__[*n] * z__[*n] / *rho; - if (c__ <= temp) { - tau = *rho; - } else { - del = d__[*n] - d__[*n - 1]; - a = -c__ * del + z__[*n - 1] * z__[*n - 1] + z__[*n] * z__[*n] - ; - b = z__[*n] * z__[*n] * del; - if (a < 0.) { - tau = b * 2. / (sqrt(a * a + b * 4. * c__) - a); - } else { - tau = (a + sqrt(a * a + b * 4. 
* c__)) / (c__ * 2.); - } - } - -/* - It can be proved that - D(N)+RHO/2 <= LAMBDA(N) < D(N)+TAU <= D(N)+RHO -*/ - - dltlb = midpt; - dltub = *rho; - } else { - del = d__[*n] - d__[*n - 1]; - a = -c__ * del + z__[*n - 1] * z__[*n - 1] + z__[*n] * z__[*n]; - b = z__[*n] * z__[*n] * del; - if (a < 0.) { - tau = b * 2. / (sqrt(a * a + b * 4. * c__) - a); - } else { - tau = (a + sqrt(a * a + b * 4. * c__)) / (c__ * 2.); - } - -/* - It can be proved that - D(N) < D(N)+TAU < LAMBDA(N) < D(N)+RHO/2 -*/ - - dltlb = 0.; - dltub = midpt; - } - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - delta[j] = d__[j] - d__[*i__] - tau; -/* L30: */ - } - -/* Evaluate PSI and the derivative DPSI */ - - dpsi = 0.; - psi = 0.; - erretm = 0.; - i__1 = ii; - for (j = 1; j <= i__1; ++j) { - temp = z__[j] / delta[j]; - psi += z__[j] * temp; - dpsi += temp * temp; - erretm += psi; -/* L40: */ - } - erretm = abs(erretm); - -/* Evaluate PHI and the derivative DPHI */ - - temp = z__[*n] / delta[*n]; - phi = z__[*n] * temp; - dphi = temp * temp; - erretm = (-phi - psi) * 8. + erretm - phi + rhoinv + abs(tau) * (dpsi - + dphi); - - w = rhoinv + phi + psi; - -/* Test for convergence */ - - if (abs(w) <= eps * erretm) { - *dlam = d__[*i__] + tau; - goto L250; - } - - if (w <= 0.) { - dltlb = max(dltlb,tau); - } else { - dltub = min(dltub,tau); - } - -/* Calculate the new step */ - - ++niter; - c__ = w - delta[*n - 1] * dpsi - delta[*n] * dphi; - a = (delta[*n - 1] + delta[*n]) * w - delta[*n - 1] * delta[*n] * ( - dpsi + dphi); - b = delta[*n - 1] * delta[*n] * w; - if (c__ < 0.) { - c__ = abs(c__); - } - if (c__ == 0.) { -/* - ETA = B/A - ETA = RHO - TAU -*/ - eta = dltub - tau; - } else if (a >= 0.) { - eta = (a + sqrt((d__1 = a * a - b * 4. * c__, abs(d__1)))) / (c__ - * 2.); - } else { - eta = b * 2. / (a - sqrt((d__1 = a * a - b * 4. * c__, abs(d__1))) - ); - } - -/* - Note, eta should be positive if w is negative, and - eta should be negative otherwise. 
However, - if for some reason caused by roundoff, eta*w > 0, - we simply use one Newton step instead. This way - will guarantee eta*w < 0. -*/ - - if (w * eta > 0.) { - eta = -w / (dpsi + dphi); - } - temp = tau + eta; - if (temp > dltub || temp < dltlb) { - if (w < 0.) { - eta = (dltub - tau) / 2.; - } else { - eta = (dltlb - tau) / 2.; - } - } - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - delta[j] -= eta; -/* L50: */ - } - - tau += eta; - -/* Evaluate PSI and the derivative DPSI */ - - dpsi = 0.; - psi = 0.; - erretm = 0.; - i__1 = ii; - for (j = 1; j <= i__1; ++j) { - temp = z__[j] / delta[j]; - psi += z__[j] * temp; - dpsi += temp * temp; - erretm += psi; -/* L60: */ - } - erretm = abs(erretm); - -/* Evaluate PHI and the derivative DPHI */ - - temp = z__[*n] / delta[*n]; - phi = z__[*n] * temp; - dphi = temp * temp; - erretm = (-phi - psi) * 8. + erretm - phi + rhoinv + abs(tau) * (dpsi - + dphi); - - w = rhoinv + phi + psi; - -/* Main loop to update the values of the array DELTA */ - - iter = niter + 1; - - for (niter = iter; niter <= MAXITERLOOPS; ++niter) { - -/* Test for convergence */ - - if (abs(w) <= eps * erretm) { - *dlam = d__[*i__] + tau; - goto L250; - } - - if (w <= 0.) { - dltlb = max(dltlb,tau); - } else { - dltub = min(dltub,tau); - } - -/* Calculate the new step */ - - c__ = w - delta[*n - 1] * dpsi - delta[*n] * dphi; - a = (delta[*n - 1] + delta[*n]) * w - delta[*n - 1] * delta[*n] * - (dpsi + dphi); - b = delta[*n - 1] * delta[*n] * w; - if (a >= 0.) { - eta = (a + sqrt((d__1 = a * a - b * 4. * c__, abs(d__1)))) / ( - c__ * 2.); - } else { - eta = b * 2. / (a - sqrt((d__1 = a * a - b * 4. * c__, abs( - d__1)))); - } - -/* - Note, eta should be positive if w is negative, and - eta should be negative otherwise. However, - if for some reason caused by roundoff, eta*w > 0, - we simply use one Newton step instead. This way - will guarantee eta*w < 0. -*/ - - if (w * eta > 0.) 
{ - eta = -w / (dpsi + dphi); - } - temp = tau + eta; - if (temp > dltub || temp < dltlb) { - if (w < 0.) { - eta = (dltub - tau) / 2.; - } else { - eta = (dltlb - tau) / 2.; - } - } - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - delta[j] -= eta; -/* L70: */ - } - - tau += eta; - -/* Evaluate PSI and the derivative DPSI */ - - dpsi = 0.; - psi = 0.; - erretm = 0.; - i__1 = ii; - for (j = 1; j <= i__1; ++j) { - temp = z__[j] / delta[j]; - psi += z__[j] * temp; - dpsi += temp * temp; - erretm += psi; -/* L80: */ - } - erretm = abs(erretm); - -/* Evaluate PHI and the derivative DPHI */ - - temp = z__[*n] / delta[*n]; - phi = z__[*n] * temp; - dphi = temp * temp; - erretm = (-phi - psi) * 8. + erretm - phi + rhoinv + abs(tau) * ( - dpsi + dphi); - - w = rhoinv + phi + psi; -/* L90: */ - } - -/* Return with INFO = 1, NITER = MAXIT and not converged */ - - *info = 1; - *dlam = d__[*i__] + tau; - goto L250; - -/* End for the case I = N */ - - } else { - -/* The case for I < N */ - - niter = 1; - ip1 = *i__ + 1; - -/* Calculate initial guess */ - - del = d__[ip1] - d__[*i__]; - midpt = del / 2.; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - delta[j] = d__[j] - d__[*i__] - midpt; -/* L100: */ - } - - psi = 0.; - i__1 = *i__ - 1; - for (j = 1; j <= i__1; ++j) { - psi += z__[j] * z__[j] / delta[j]; -/* L110: */ - } - - phi = 0.; - i__1 = *i__ + 2; - for (j = *n; j >= i__1; --j) { - phi += z__[j] * z__[j] / delta[j]; -/* L120: */ - } - c__ = rhoinv + psi + phi; - w = c__ + z__[*i__] * z__[*i__] / delta[*i__] + z__[ip1] * z__[ip1] / - delta[ip1]; - - if (w > 0.) { - -/* - d(i)< the ith eigenvalue < (d(i)+d(i+1))/2 - - We choose d(i) as origin. -*/ - - orgati = TRUE_; - a = c__ * del + z__[*i__] * z__[*i__] + z__[ip1] * z__[ip1]; - b = z__[*i__] * z__[*i__] * del; - if (a > 0.) { - tau = b * 2. / (a + sqrt((d__1 = a * a - b * 4. * c__, abs( - d__1)))); - } else { - tau = (a - sqrt((d__1 = a * a - b * 4. 
* c__, abs(d__1)))) / ( - c__ * 2.); - } - dltlb = 0.; - dltub = midpt; - } else { - -/* - (d(i)+d(i+1))/2 <= the ith eigenvalue < d(i+1) - - We choose d(i+1) as origin. -*/ - - orgati = FALSE_; - a = c__ * del - z__[*i__] * z__[*i__] - z__[ip1] * z__[ip1]; - b = z__[ip1] * z__[ip1] * del; - if (a < 0.) { - tau = b * 2. / (a - sqrt((d__1 = a * a + b * 4. * c__, abs( - d__1)))); - } else { - tau = -(a + sqrt((d__1 = a * a + b * 4. * c__, abs(d__1)))) / - (c__ * 2.); - } - dltlb = -midpt; - dltub = 0.; - } - - if (orgati) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - delta[j] = d__[j] - d__[*i__] - tau; -/* L130: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - delta[j] = d__[j] - d__[ip1] - tau; -/* L140: */ - } - } - if (orgati) { - ii = *i__; - } else { - ii = *i__ + 1; - } - iim1 = ii - 1; - iip1 = ii + 1; - -/* Evaluate PSI and the derivative DPSI */ - - dpsi = 0.; - psi = 0.; - erretm = 0.; - i__1 = iim1; - for (j = 1; j <= i__1; ++j) { - temp = z__[j] / delta[j]; - psi += z__[j] * temp; - dpsi += temp * temp; - erretm += psi; -/* L150: */ - } - erretm = abs(erretm); - -/* Evaluate PHI and the derivative DPHI */ - - dphi = 0.; - phi = 0.; - i__1 = iip1; - for (j = *n; j >= i__1; --j) { - temp = z__[j] / delta[j]; - phi += z__[j] * temp; - dphi += temp * temp; - erretm += phi; -/* L160: */ - } - - w = rhoinv + phi + psi; - -/* - W is the value of the secular function with - its ii-th element removed. -*/ - - swtch3 = FALSE_; - if (orgati) { - if (w < 0.) { - swtch3 = TRUE_; - } - } else { - if (w > 0.) { - swtch3 = TRUE_; - } - } - if (ii == 1 || ii == *n) { - swtch3 = FALSE_; - } - - temp = z__[ii] / delta[ii]; - dw = dpsi + dphi + temp * temp; - temp = z__[ii] * temp; - w += temp; - erretm = (phi - psi) * 8. + erretm + rhoinv * 2. + abs(temp) * 3. 
+ - abs(tau) * dw; - -/* Test for convergence */ - - if (abs(w) <= eps * erretm) { - if (orgati) { - *dlam = d__[*i__] + tau; - } else { - *dlam = d__[ip1] + tau; - } - goto L250; - } - - if (w <= 0.) { - dltlb = max(dltlb,tau); - } else { - dltub = min(dltub,tau); - } - -/* Calculate the new step */ - - ++niter; - if (! swtch3) { - if (orgati) { -/* Computing 2nd power */ - d__1 = z__[*i__] / delta[*i__]; - c__ = w - delta[ip1] * dw - (d__[*i__] - d__[ip1]) * (d__1 * - d__1); - } else { -/* Computing 2nd power */ - d__1 = z__[ip1] / delta[ip1]; - c__ = w - delta[*i__] * dw - (d__[ip1] - d__[*i__]) * (d__1 * - d__1); - } - a = (delta[*i__] + delta[ip1]) * w - delta[*i__] * delta[ip1] * - dw; - b = delta[*i__] * delta[ip1] * w; - if (c__ == 0.) { - if (a == 0.) { - if (orgati) { - a = z__[*i__] * z__[*i__] + delta[ip1] * delta[ip1] * - (dpsi + dphi); - } else { - a = z__[ip1] * z__[ip1] + delta[*i__] * delta[*i__] * - (dpsi + dphi); - } - } - eta = b / a; - } else if (a <= 0.) { - eta = (a - sqrt((d__1 = a * a - b * 4. * c__, abs(d__1)))) / ( - c__ * 2.); - } else { - eta = b * 2. / (a + sqrt((d__1 = a * a - b * 4. * c__, abs( - d__1)))); - } - } else { - -/* Interpolation using THREE most relevant poles */ - - temp = rhoinv + psi + phi; - if (orgati) { - temp1 = z__[iim1] / delta[iim1]; - temp1 *= temp1; - c__ = temp - delta[iip1] * (dpsi + dphi) - (d__[iim1] - d__[ - iip1]) * temp1; - zz[0] = z__[iim1] * z__[iim1]; - zz[2] = delta[iip1] * delta[iip1] * (dpsi - temp1 + dphi); - } else { - temp1 = z__[iip1] / delta[iip1]; - temp1 *= temp1; - c__ = temp - delta[iim1] * (dpsi + dphi) - (d__[iip1] - d__[ - iim1]) * temp1; - zz[0] = delta[iim1] * delta[iim1] * (dpsi + (dphi - temp1)); - zz[2] = z__[iip1] * z__[iip1]; - } - zz[1] = z__[ii] * z__[ii]; - dlaed6_(&niter, &orgati, &c__, &delta[iim1], zz, &w, &eta, info); - if (*info != 0) { - goto L250; - } - } - -/* - Note, eta should be positive if w is negative, and - eta should be negative otherwise. 
However, - if for some reason caused by roundoff, eta*w > 0, - we simply use one Newton step instead. This way - will guarantee eta*w < 0. -*/ - - if (w * eta >= 0.) { - eta = -w / dw; - } - temp = tau + eta; - if (temp > dltub || temp < dltlb) { - if (w < 0.) { - eta = (dltub - tau) / 2.; - } else { - eta = (dltlb - tau) / 2.; - } - } - - prew = w; - -/* L170: */ - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - delta[j] -= eta; -/* L180: */ - } - -/* Evaluate PSI and the derivative DPSI */ - - dpsi = 0.; - psi = 0.; - erretm = 0.; - i__1 = iim1; - for (j = 1; j <= i__1; ++j) { - temp = z__[j] / delta[j]; - psi += z__[j] * temp; - dpsi += temp * temp; - erretm += psi; -/* L190: */ - } - erretm = abs(erretm); - -/* Evaluate PHI and the derivative DPHI */ - - dphi = 0.; - phi = 0.; - i__1 = iip1; - for (j = *n; j >= i__1; --j) { - temp = z__[j] / delta[j]; - phi += z__[j] * temp; - dphi += temp * temp; - erretm += phi; -/* L200: */ - } - - temp = z__[ii] / delta[ii]; - dw = dpsi + dphi + temp * temp; - temp = z__[ii] * temp; - w = rhoinv + phi + psi + temp; - erretm = (phi - psi) * 8. + erretm + rhoinv * 2. + abs(temp) * 3. + ( - d__1 = tau + eta, abs(d__1)) * dw; - - swtch = FALSE_; - if (orgati) { - if (-w > abs(prew) / 10.) { - swtch = TRUE_; - } - } else { - if (w > abs(prew) / 10.) { - swtch = TRUE_; - } - } - - tau += eta; - -/* Main loop to update the values of the array DELTA */ - - iter = niter + 1; - - for (niter = iter; niter <= MAXITERLOOPS; ++niter) { - -/* Test for convergence */ - - if (abs(w) <= eps * erretm) { - if (orgati) { - *dlam = d__[*i__] + tau; - } else { - *dlam = d__[ip1] + tau; - } - goto L250; - } - - if (w <= 0.) { - dltlb = max(dltlb,tau); - } else { - dltub = min(dltub,tau); - } - -/* Calculate the new step */ - - if (! swtch3) { - if (! 
swtch) { - if (orgati) { -/* Computing 2nd power */ - d__1 = z__[*i__] / delta[*i__]; - c__ = w - delta[ip1] * dw - (d__[*i__] - d__[ip1]) * ( - d__1 * d__1); - } else { -/* Computing 2nd power */ - d__1 = z__[ip1] / delta[ip1]; - c__ = w - delta[*i__] * dw - (d__[ip1] - d__[*i__]) * - (d__1 * d__1); - } - } else { - temp = z__[ii] / delta[ii]; - if (orgati) { - dpsi += temp * temp; - } else { - dphi += temp * temp; - } - c__ = w - delta[*i__] * dpsi - delta[ip1] * dphi; - } - a = (delta[*i__] + delta[ip1]) * w - delta[*i__] * delta[ip1] - * dw; - b = delta[*i__] * delta[ip1] * w; - if (c__ == 0.) { - if (a == 0.) { - if (! swtch) { - if (orgati) { - a = z__[*i__] * z__[*i__] + delta[ip1] * - delta[ip1] * (dpsi + dphi); - } else { - a = z__[ip1] * z__[ip1] + delta[*i__] * delta[ - *i__] * (dpsi + dphi); - } - } else { - a = delta[*i__] * delta[*i__] * dpsi + delta[ip1] - * delta[ip1] * dphi; - } - } - eta = b / a; - } else if (a <= 0.) { - eta = (a - sqrt((d__1 = a * a - b * 4. * c__, abs(d__1)))) - / (c__ * 2.); - } else { - eta = b * 2. / (a + sqrt((d__1 = a * a - b * 4. 
* c__, - abs(d__1)))); - } - } else { - -/* Interpolation using THREE most relevant poles */ - - temp = rhoinv + psi + phi; - if (swtch) { - c__ = temp - delta[iim1] * dpsi - delta[iip1] * dphi; - zz[0] = delta[iim1] * delta[iim1] * dpsi; - zz[2] = delta[iip1] * delta[iip1] * dphi; - } else { - if (orgati) { - temp1 = z__[iim1] / delta[iim1]; - temp1 *= temp1; - c__ = temp - delta[iip1] * (dpsi + dphi) - (d__[iim1] - - d__[iip1]) * temp1; - zz[0] = z__[iim1] * z__[iim1]; - zz[2] = delta[iip1] * delta[iip1] * (dpsi - temp1 + - dphi); - } else { - temp1 = z__[iip1] / delta[iip1]; - temp1 *= temp1; - c__ = temp - delta[iim1] * (dpsi + dphi) - (d__[iip1] - - d__[iim1]) * temp1; - zz[0] = delta[iim1] * delta[iim1] * (dpsi + (dphi - - temp1)); - zz[2] = z__[iip1] * z__[iip1]; - } - } - dlaed6_(&niter, &orgati, &c__, &delta[iim1], zz, &w, &eta, - info); - if (*info != 0) { - goto L250; - } - } - -/* - Note, eta should be positive if w is negative, and - eta should be negative otherwise. However, - if for some reason caused by roundoff, eta*w > 0, - we simply use one Newton step instead. This way - will guarantee eta*w < 0. -*/ - - if (w * eta >= 0.) { - eta = -w / dw; - } - temp = tau + eta; - if (temp > dltub || temp < dltlb) { - if (w < 0.) 
{ - eta = (dltub - tau) / 2.; - } else { - eta = (dltlb - tau) / 2.; - } - } - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - delta[j] -= eta; -/* L210: */ - } - - tau += eta; - prew = w; - -/* Evaluate PSI and the derivative DPSI */ - - dpsi = 0.; - psi = 0.; - erretm = 0.; - i__1 = iim1; - for (j = 1; j <= i__1; ++j) { - temp = z__[j] / delta[j]; - psi += z__[j] * temp; - dpsi += temp * temp; - erretm += psi; -/* L220: */ - } - erretm = abs(erretm); - -/* Evaluate PHI and the derivative DPHI */ - - dphi = 0.; - phi = 0.; - i__1 = iip1; - for (j = *n; j >= i__1; --j) { - temp = z__[j] / delta[j]; - phi += z__[j] * temp; - dphi += temp * temp; - erretm += phi; -/* L230: */ - } - - temp = z__[ii] / delta[ii]; - dw = dpsi + dphi + temp * temp; - temp = z__[ii] * temp; - w = rhoinv + phi + psi + temp; - erretm = (phi - psi) * 8. + erretm + rhoinv * 2. + abs(temp) * 3. - + abs(tau) * dw; - if ((w * prew > 0. && abs(w) > abs(prew) / 10.)) { - swtch = ! swtch; - } - -/* L240: */ - } - -/* Return with INFO = 1, NITER = MAXIT and not converged */ - - *info = 1; - if (orgati) { - *dlam = d__[*i__] + tau; - } else { - *dlam = d__[ip1] + tau; - } - - } - -L250: - - return 0; - -/* End of DLAED4 */ - -} /* dlaed4_ */ - -/* Subroutine */ int dlaed5_(integer *i__, doublereal *d__, doublereal *z__, - doublereal *delta, doublereal *rho, doublereal *dlam) -{ - /* System generated locals */ - doublereal d__1; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal b, c__, w, del, tau, temp; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Oak Ridge National Lab, Argonne National Lab, - Courant Institute, NAG Ltd., and Rice University - September 30, 1994 - - - Purpose - ======= - - This subroutine computes the I-th eigenvalue of a symmetric rank-one - modification of a 2-by-2 diagonal matrix - - diag( D ) + RHO * Z * transpose(Z) . 
- - The diagonal elements in the array D are assumed to satisfy - - D(i) < D(j) for i < j . - - We also assume RHO > 0 and that the Euclidean norm of the vector - Z is one. - - Arguments - ========= - - I (input) INTEGER - The index of the eigenvalue to be computed. I = 1 or I = 2. - - D (input) DOUBLE PRECISION array, dimension (2) - The original eigenvalues. We assume D(1) < D(2). - - Z (input) DOUBLE PRECISION array, dimension (2) - The components of the updating vector. - - DELTA (output) DOUBLE PRECISION array, dimension (2) - The vector DELTA contains the information necessary - to construct the eigenvectors. - - RHO (input) DOUBLE PRECISION - The scalar in the symmetric updating formula. - - DLAM (output) DOUBLE PRECISION - The computed lambda_I, the I-th updated eigenvalue. - - Further Details - =============== - - Based on contributions by - Ren-Cang Li, Computer Science Division, University of California - at Berkeley, USA - - ===================================================================== -*/ - - - /* Parameter adjustments */ - --delta; - --z__; - --d__; - - /* Function Body */ - del = d__[2] - d__[1]; - if (*i__ == 1) { - w = *rho * 2. * (z__[2] * z__[2] - z__[1] * z__[1]) / del + 1.; - if (w > 0.) { - b = del + *rho * (z__[1] * z__[1] + z__[2] * z__[2]); - c__ = *rho * z__[1] * z__[1] * del; - -/* B > ZERO, always */ - - tau = c__ * 2. / (b + sqrt((d__1 = b * b - c__ * 4., abs(d__1)))); - *dlam = d__[1] + tau; - delta[1] = -z__[1] / tau; - delta[2] = z__[2] / (del - tau); - } else { - b = -del + *rho * (z__[1] * z__[1] + z__[2] * z__[2]); - c__ = *rho * z__[2] * z__[2] * del; - if (b > 0.) { - tau = c__ * -2. 
/ (b + sqrt(b * b + c__ * 4.)); - } else { - tau = (b - sqrt(b * b + c__ * 4.)) / 2.; - } - *dlam = d__[2] + tau; - delta[1] = -z__[1] / (del + tau); - delta[2] = -z__[2] / tau; - } - temp = sqrt(delta[1] * delta[1] + delta[2] * delta[2]); - delta[1] /= temp; - delta[2] /= temp; - } else { - -/* Now I=2 */ - - b = -del + *rho * (z__[1] * z__[1] + z__[2] * z__[2]); - c__ = *rho * z__[2] * z__[2] * del; - if (b > 0.) { - tau = (b + sqrt(b * b + c__ * 4.)) / 2.; - } else { - tau = c__ * 2. / (-b + sqrt(b * b + c__ * 4.)); - } - *dlam = d__[2] + tau; - delta[1] = -z__[1] / (del + tau); - delta[2] = -z__[2] / tau; - temp = sqrt(delta[1] * delta[1] + delta[2] * delta[2]); - delta[1] /= temp; - delta[2] /= temp; - } - return 0; - -/* End OF DLAED5 */ - -} /* dlaed5_ */ - -/* Subroutine */ int dlaed6_(integer *kniter, logical *orgati, doublereal * - rho, doublereal *d__, doublereal *z__, doublereal *finit, doublereal * - tau, integer *info) -{ - /* Initialized data */ - - static logical first = TRUE_; - - /* System generated locals */ - integer i__1; - doublereal d__1, d__2, d__3, d__4; - - /* Builtin functions */ - double sqrt(doublereal), log(doublereal), pow_di(doublereal *, integer *); - - /* Local variables */ - static doublereal a, b, c__, f; - static integer i__; - static doublereal fc, df, ddf, eta, eps, base; - static integer iter; - static doublereal temp, temp1, temp2, temp3, temp4; - static logical scale; - static integer niter; - static doublereal small1, small2, sminv1, sminv2; - - static doublereal dscale[3], sclfac, zscale[3], erretm, sclinv; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Oak Ridge National Lab, Argonne National Lab, - Courant Institute, NAG Ltd., and Rice University - June 30, 1999 - - - Purpose - ======= - - DLAED6 computes the positive or negative root (closest to the origin) - of - z(1) z(2) z(3) - f(x) = rho + --------- + ---------- + --------- - d(1)-x d(2)-x d(3)-x - - It is assumed that - - if ORGATI = .true. 
the root is between d(2) and d(3); - otherwise it is between d(1) and d(2) - - This routine will be called by DLAED4 when necessary. In most cases, - the root sought is the smallest in magnitude, though it might not be - in some extremely rare situations. - - Arguments - ========= - - KNITER (input) INTEGER - Refer to DLAED4 for its significance. - - ORGATI (input) LOGICAL - If ORGATI is true, the needed root is between d(2) and - d(3); otherwise it is between d(1) and d(2). See - DLAED4 for further details. - - RHO (input) DOUBLE PRECISION - Refer to the equation f(x) above. - - D (input) DOUBLE PRECISION array, dimension (3) - D satisfies d(1) < d(2) < d(3). - - Z (input) DOUBLE PRECISION array, dimension (3) - Each of the elements in z must be positive. - - FINIT (input) DOUBLE PRECISION - The value of f at 0. It is more accurate than the one - evaluated inside this routine (if someone wants to do - so). - - TAU (output) DOUBLE PRECISION - The root of the equation f(x). - - INFO (output) INTEGER - = 0: successful exit - > 0: if INFO = 1, failure to converge - - Further Details - =============== - - Based on contributions by - Ren-Cang Li, Computer Science Division, University of California - at Berkeley, USA - - ===================================================================== -*/ - - /* Parameter adjustments */ - --z__; - --d__; - - /* Function Body */ - - *info = 0; - - niter = 1; - *tau = 0.; - if (*kniter == 2) { - if (*orgati) { - temp = (d__[3] - d__[2]) / 2.; - c__ = *rho + z__[1] / (d__[1] - d__[2] - temp); - a = c__ * (d__[2] + d__[3]) + z__[2] + z__[3]; - b = c__ * d__[2] * d__[3] + z__[2] * d__[3] + z__[3] * d__[2]; - } else { - temp = (d__[1] - d__[2]) / 2.; - c__ = *rho + z__[3] / (d__[3] - d__[2] - temp); - a = c__ * (d__[1] + d__[2]) + z__[1] + z__[2]; - b = c__ * d__[1] * d__[2] + z__[1] * d__[2] + z__[2] * d__[1]; - } -/* Computing MAX */ - d__1 = abs(a), d__2 = abs(b), d__1 = max(d__1,d__2), d__2 = abs(c__); - temp = max(d__1,d__2); - a /= 
temp; - b /= temp; - c__ /= temp; - if (c__ == 0.) { - *tau = b / a; - } else if (a <= 0.) { - *tau = (a - sqrt((d__1 = a * a - b * 4. * c__, abs(d__1)))) / ( - c__ * 2.); - } else { - *tau = b * 2. / (a + sqrt((d__1 = a * a - b * 4. * c__, abs(d__1)) - )); - } - temp = *rho + z__[1] / (d__[1] - *tau) + z__[2] / (d__[2] - *tau) + - z__[3] / (d__[3] - *tau); - if (abs(*finit) <= abs(temp)) { - *tau = 0.; - } - } - -/* - On first call to routine, get machine parameters for - possible scaling to avoid overflow -*/ - - if (first) { - eps = EPSILON; - base = BASE; - i__1 = (integer) (log(SAFEMINIMUM) / log(base) / 3.); - small1 = pow_di(&base, &i__1); - sminv1 = 1. / small1; - small2 = small1 * small1; - sminv2 = sminv1 * sminv1; - first = FALSE_; - } - -/* - Determine if scaling of inputs necessary to avoid overflow - when computing 1/TEMP**3 -*/ - - if (*orgati) { -/* Computing MIN */ - d__3 = (d__1 = d__[2] - *tau, abs(d__1)), d__4 = (d__2 = d__[3] - * - tau, abs(d__2)); - temp = min(d__3,d__4); - } else { -/* Computing MIN */ - d__3 = (d__1 = d__[1] - *tau, abs(d__1)), d__4 = (d__2 = d__[2] - * - tau, abs(d__2)); - temp = min(d__3,d__4); - } - scale = FALSE_; - if (temp <= small1) { - scale = TRUE_; - if (temp <= small2) { - -/* Scale up by power of radix nearest 1/SAFMIN**(2/3) */ - - sclfac = sminv2; - sclinv = small2; - } else { - -/* Scale up by power of radix nearest 1/SAFMIN**(1/3) */ - - sclfac = sminv1; - sclinv = small1; - } - -/* Scaling up safe because D, Z, TAU scaled elsewhere to be O(1) */ - - for (i__ = 1; i__ <= 3; ++i__) { - dscale[i__ - 1] = d__[i__] * sclfac; - zscale[i__ - 1] = z__[i__] * sclfac; -/* L10: */ - } - *tau *= sclfac; - } else { - -/* Copy D and Z to DSCALE and ZSCALE */ - - for (i__ = 1; i__ <= 3; ++i__) { - dscale[i__ - 1] = d__[i__]; - zscale[i__ - 1] = z__[i__]; -/* L20: */ - } - } - - fc = 0.; - df = 0.; - ddf = 0.; - for (i__ = 1; i__ <= 3; ++i__) { - temp = 1. 
/ (dscale[i__ - 1] - *tau); - temp1 = zscale[i__ - 1] * temp; - temp2 = temp1 * temp; - temp3 = temp2 * temp; - fc += temp1 / dscale[i__ - 1]; - df += temp2; - ddf += temp3; -/* L30: */ - } - f = *finit + *tau * fc; - - if (abs(f) <= 0.) { - goto L60; - } - -/* - Iteration begins - - It is not hard to see that - - 1) Iterations will go up monotonically - if FINIT < 0; - - 2) Iterations will go down monotonically - if FINIT > 0. -*/ - - iter = niter + 1; - - for (niter = iter; niter <= MAXITERLOOPS; ++niter) { - - if (*orgati) { - temp1 = dscale[1] - *tau; - temp2 = dscale[2] - *tau; - } else { - temp1 = dscale[0] - *tau; - temp2 = dscale[1] - *tau; - } - a = (temp1 + temp2) * f - temp1 * temp2 * df; - b = temp1 * temp2 * f; - c__ = f - (temp1 + temp2) * df + temp1 * temp2 * ddf; -/* Computing MAX */ - d__1 = abs(a), d__2 = abs(b), d__1 = max(d__1,d__2), d__2 = abs(c__); - temp = max(d__1,d__2); - a /= temp; - b /= temp; - c__ /= temp; - if (c__ == 0.) { - eta = b / a; - } else if (a <= 0.) { - eta = (a - sqrt((d__1 = a * a - b * 4. * c__, abs(d__1)))) / (c__ - * 2.); - } else { - eta = b * 2. / (a + sqrt((d__1 = a * a - b * 4. * c__, abs(d__1))) - ); - } - if (f * eta >= 0.) { - eta = -f / df; - } - - temp = eta + *tau; - if (*orgati) { - if ((eta > 0. && temp >= dscale[2])) { - eta = (dscale[2] - *tau) / 2.; - } - if ((eta < 0. && temp <= dscale[1])) { - eta = (dscale[1] - *tau) / 2.; - } - } else { - if ((eta > 0. && temp >= dscale[1])) { - eta = (dscale[1] - *tau) / 2.; - } - if ((eta < 0. && temp <= dscale[0])) { - eta = (dscale[0] - *tau) / 2.; - } - } - *tau += eta; - - fc = 0.; - erretm = 0.; - df = 0.; - ddf = 0.; - for (i__ = 1; i__ <= 3; ++i__) { - temp = 1. 
/ (dscale[i__ - 1] - *tau); - temp1 = zscale[i__ - 1] * temp; - temp2 = temp1 * temp; - temp3 = temp2 * temp; - temp4 = temp1 / dscale[i__ - 1]; - fc += temp4; - erretm += abs(temp4); - df += temp2; - ddf += temp3; -/* L40: */ - } - f = *finit + *tau * fc; - erretm = (abs(*finit) + abs(*tau) * erretm) * 8. + abs(*tau) * df; - if (abs(f) <= eps * erretm) { - goto L60; - } -/* L50: */ - } - *info = 1; -L60: - -/* Undo scaling */ - - if (scale) { - *tau *= sclinv; - } - return 0; - -/* End of DLAED6 */ - -} /* dlaed6_ */ - -/* Subroutine */ int dlaed7_(integer *icompq, integer *n, integer *qsiz, - integer *tlvls, integer *curlvl, integer *curpbm, doublereal *d__, - doublereal *q, integer *ldq, integer *indxq, doublereal *rho, integer - *cutpnt, doublereal *qstore, integer *qptr, integer *prmptr, integer * - perm, integer *givptr, integer *givcol, doublereal *givnum, - doublereal *work, integer *iwork, integer *info) -{ - /* System generated locals */ - integer q_dim1, q_offset, i__1, i__2; - - /* Builtin functions */ - integer pow_ii(integer *, integer *); - - /* Local variables */ - static integer i__, k, n1, n2, is, iw, iz, iq2, ptr, ldq2, indx, curr; - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *); - static integer indxc, indxp; - extern /* Subroutine */ int dlaed8_(integer *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, doublereal *, - integer *, doublereal *, integer *, integer *, integer *, - doublereal *, integer *, integer *, integer *), dlaed9_(integer *, - integer *, integer *, integer *, doublereal *, doublereal *, - integer *, doublereal *, doublereal *, doublereal *, doublereal *, - integer *, integer *), dlaeda_(integer *, integer *, integer *, - integer *, integer *, integer *, integer *, integer *, doublereal - *, 
doublereal *, integer *, doublereal *, doublereal *, integer *) - ; - static integer idlmda; - extern /* Subroutine */ int dlamrg_(integer *, integer *, doublereal *, - integer *, integer *, integer *), xerbla_(char *, integer *); - static integer coltyp; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - DLAED7 computes the updated eigensystem of a diagonal - matrix after modification by a rank-one symmetric matrix. This - routine is used only for the eigenproblem which requires all - eigenvalues and optionally eigenvectors of a dense symmetric matrix - that has been reduced to tridiagonal form. DLAED1 handles - the case in which all eigenvalues and eigenvectors of a symmetric - tridiagonal matrix are desired. - - T = Q(in) ( D(in) + RHO * Z*Z' ) Q'(in) = Q(out) * D(out) * Q'(out) - - where Z = Q'u, u is a vector of length N with ones in the - CUTPNT and CUTPNT + 1 th elements and zeros elsewhere. - - The eigenvectors of the original matrix are stored in Q, and the - eigenvalues are in D. The algorithm consists of three stages: - - The first stage consists of deflating the size of the problem - when there are multiple eigenvalues or if there is a zero in - the Z vector. For each such occurence the dimension of the - secular equation problem is reduced by one. This stage is - performed by the routine DLAED8. - - The second stage consists of calculating the updated - eigenvalues. This is done by finding the roots of the secular - equation via the routine DLAED4 (as called by DLAED9). - This routine also calculates the eigenvectors of the current - problem. - - The final stage consists of computing the updated eigenvectors - directly using the updated eigenvalues. The eigenvectors for - the current problem are multiplied with the eigenvectors from - the overall problem. 
- - Arguments - ========= - - ICOMPQ (input) INTEGER - = 0: Compute eigenvalues only. - = 1: Compute eigenvectors of original dense symmetric matrix - also. On entry, Q contains the orthogonal matrix used - to reduce the original matrix to tridiagonal form. - - N (input) INTEGER - The dimension of the symmetric tridiagonal matrix. N >= 0. - - QSIZ (input) INTEGER - The dimension of the orthogonal matrix used to reduce - the full matrix to tridiagonal form. QSIZ >= N if ICOMPQ = 1. - - TLVLS (input) INTEGER - The total number of merging levels in the overall divide and - conquer tree. - - CURLVL (input) INTEGER - The current level in the overall merge routine, - 0 <= CURLVL <= TLVLS. - - CURPBM (input) INTEGER - The current problem in the current level in the overall - merge routine (counting from upper left to lower right). - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the eigenvalues of the rank-1-perturbed matrix. - On exit, the eigenvalues of the repaired matrix. - - Q (input/output) DOUBLE PRECISION array, dimension (LDQ, N) - On entry, the eigenvectors of the rank-1-perturbed matrix. - On exit, the eigenvectors of the repaired tridiagonal matrix. - - LDQ (input) INTEGER - The leading dimension of the array Q. LDQ >= max(1,N). - - INDXQ (output) INTEGER array, dimension (N) - The permutation which will reintegrate the subproblem just - solved back into sorted order, i.e., D( INDXQ( I = 1, N ) ) - will be in ascending order. - - RHO (input) DOUBLE PRECISION - The subdiagonal element used to create the rank-1 - modification. - - CUTPNT (input) INTEGER - Contains the location of the last eigenvalue in the leading - sub-matrix. min(1,N) <= CUTPNT <= N. - - QSTORE (input/output) DOUBLE PRECISION array, dimension (N**2+1) - Stores eigenvectors of submatrices encountered during - divide and conquer, packed together. QPTR points to - beginning of the submatrices. 
- - QPTR (input/output) INTEGER array, dimension (N+2) - List of indices pointing to beginning of submatrices stored - in QSTORE. The submatrices are numbered starting at the - bottom left of the divide and conquer tree, from left to - right and bottom to top. - - PRMPTR (input) INTEGER array, dimension (N lg N) - Contains a list of pointers which indicate where in PERM a - level's permutation is stored. PRMPTR(i+1) - PRMPTR(i) - indicates the size of the permutation and also the size of - the full, non-deflated problem. - - PERM (input) INTEGER array, dimension (N lg N) - Contains the permutations (from deflation and sorting) to be - applied to each eigenblock. - - GIVPTR (input) INTEGER array, dimension (N lg N) - Contains a list of pointers which indicate where in GIVCOL a - level's Givens rotations are stored. GIVPTR(i+1) - GIVPTR(i) - indicates the number of Givens rotations. - - GIVCOL (input) INTEGER array, dimension (2, N lg N) - Each pair of numbers indicates a pair of columns to take place - in a Givens rotation. - - GIVNUM (input) DOUBLE PRECISION array, dimension (2, N lg N) - Each number indicates the S value to be used in the - corresponding Givens rotation. - - WORK (workspace) DOUBLE PRECISION array, dimension (3*N+QSIZ*N) - - IWORK (workspace) INTEGER array, dimension (4*N) - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: if INFO = 1, an eigenvalue did not converge - - Further Details - =============== - - Based on contributions by - Jeff Rutter, Computer Science Division, University of California - at Berkeley, USA - - ===================================================================== - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - --d__; - q_dim1 = *ldq; - q_offset = 1 + q_dim1 * 1; - q -= q_offset; - --indxq; - --qstore; - --qptr; - --prmptr; - --perm; - --givptr; - givcol -= 3; - givnum -= 3; - --work; - --iwork; - - /* Function Body */ - *info = 0; - - if (*icompq < 0 || *icompq > 1) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if ((*icompq == 1 && *qsiz < *n)) { - *info = -4; - } else if (*ldq < max(1,*n)) { - *info = -9; - } else if (min(1,*n) > *cutpnt || *n < *cutpnt) { - *info = -12; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLAED7", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - -/* - The following values are for bookkeeping purposes only. They are - integer pointers which indicate the portion of the workspace - used by a particular array in DLAED8 and DLAED9. -*/ - - if (*icompq == 1) { - ldq2 = *qsiz; - } else { - ldq2 = *n; - } - - iz = 1; - idlmda = iz + *n; - iw = idlmda + *n; - iq2 = iw + *n; - is = iq2 + *n * ldq2; - - indx = 1; - indxc = indx + *n; - coltyp = indxc + *n; - indxp = coltyp + *n; - -/* - Form the z-vector which consists of the last row of Q_1 and the - first row of Q_2. -*/ - - ptr = pow_ii(&c__2, tlvls) + 1; - i__1 = *curlvl - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = *tlvls - i__; - ptr += pow_ii(&c__2, &i__2); -/* L10: */ - } - curr = ptr + *curpbm; - dlaeda_(n, tlvls, curlvl, curpbm, &prmptr[1], &perm[1], &givptr[1], & - givcol[3], &givnum[3], &qstore[1], &qptr[1], &work[iz], &work[iz - + *n], info); - -/* - When solving the final problem, we no longer need the stored data, - so we will overwrite the data from this level onto the previously - used storage space. -*/ - - if (*curlvl == *tlvls) { - qptr[curr] = 1; - prmptr[curr] = 1; - givptr[curr] = 1; - } - -/* Sort and Deflate eigenvalues. 
*/ - - dlaed8_(icompq, &k, n, qsiz, &d__[1], &q[q_offset], ldq, &indxq[1], rho, - cutpnt, &work[iz], &work[idlmda], &work[iq2], &ldq2, &work[iw], & - perm[prmptr[curr]], &givptr[curr + 1], &givcol[((givptr[curr]) << - (1)) + 1], &givnum[((givptr[curr]) << (1)) + 1], &iwork[indxp], & - iwork[indx], info); - prmptr[curr + 1] = prmptr[curr] + *n; - givptr[curr + 1] += givptr[curr]; - -/* Solve Secular Equation. */ - - if (k != 0) { - dlaed9_(&k, &c__1, &k, n, &d__[1], &work[is], &k, rho, &work[idlmda], - &work[iw], &qstore[qptr[curr]], &k, info); - if (*info != 0) { - goto L30; - } - if (*icompq == 1) { - dgemm_("N", "N", qsiz, &k, &k, &c_b15, &work[iq2], &ldq2, &qstore[ - qptr[curr]], &k, &c_b29, &q[q_offset], ldq); - } -/* Computing 2nd power */ - i__1 = k; - qptr[curr + 1] = qptr[curr] + i__1 * i__1; - -/* Prepare the INDXQ sorting permutation. */ - - n1 = k; - n2 = *n - k; - dlamrg_(&n1, &n2, &d__[1], &c__1, &c_n1, &indxq[1]); - } else { - qptr[curr + 1] = qptr[curr]; - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - indxq[i__] = i__; -/* L20: */ - } - } - -L30: - return 0; - -/* End of DLAED7 */ - -} /* dlaed7_ */ - -/* Subroutine */ int dlaed8_(integer *icompq, integer *k, integer *n, integer - *qsiz, doublereal *d__, doublereal *q, integer *ldq, integer *indxq, - doublereal *rho, integer *cutpnt, doublereal *z__, doublereal *dlamda, - doublereal *q2, integer *ldq2, doublereal *w, integer *perm, integer - *givptr, integer *givcol, doublereal *givnum, integer *indxp, integer - *indx, integer *info) -{ - /* System generated locals */ - integer q_dim1, q_offset, q2_dim1, q2_offset, i__1; - doublereal d__1; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal c__; - static integer i__, j; - static doublereal s, t; - static integer k2, n1, n2, jp, n1p1; - static doublereal eps, tau, tol; - static integer jlam, imax, jmax; - extern /* Subroutine */ int drot_(integer *, doublereal *, integer *, - doublereal *, integer 
*, doublereal *, doublereal *), dscal_( - integer *, doublereal *, doublereal *, integer *), dcopy_(integer - *, doublereal *, integer *, doublereal *, integer *); - - extern integer idamax_(integer *, doublereal *, integer *); - extern /* Subroutine */ int dlamrg_(integer *, integer *, doublereal *, - integer *, integer *, integer *), dlacpy_(char *, integer *, - integer *, doublereal *, integer *, doublereal *, integer *), xerbla_(char *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Oak Ridge National Lab, Argonne National Lab, - Courant Institute, NAG Ltd., and Rice University - September 30, 1994 - - - Purpose - ======= - - DLAED8 merges the two sets of eigenvalues together into a single - sorted set. Then it tries to deflate the size of the problem. - There are two ways in which deflation can occur: when two or more - eigenvalues are close together or if there is a tiny element in the - Z vector. For each such occurrence the order of the related secular - equation problem is reduced by one. - - Arguments - ========= - - ICOMPQ (input) INTEGER - = 0: Compute eigenvalues only. - = 1: Compute eigenvectors of original dense symmetric matrix - also. On entry, Q contains the orthogonal matrix used - to reduce the original matrix to tridiagonal form. - - K (output) INTEGER - The number of non-deflated eigenvalues, and the order of the - related secular equation. - - N (input) INTEGER - The dimension of the symmetric tridiagonal matrix. N >= 0. - - QSIZ (input) INTEGER - The dimension of the orthogonal matrix used to reduce - the full matrix to tridiagonal form. QSIZ >= N if ICOMPQ = 1. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the eigenvalues of the two submatrices to be - combined. On exit, the trailing (N-K) updated eigenvalues - (those which were deflated) sorted into increasing order. - - Q (input/output) DOUBLE PRECISION array, dimension (LDQ,N) - If ICOMPQ = 0, Q is not referenced. 
Otherwise, - on entry, Q contains the eigenvectors of the partially solved - system which has been previously updated in matrix - multiplies with other partially solved eigensystems. - On exit, Q contains the trailing (N-K) updated eigenvectors - (those which were deflated) in its last N-K columns. - - LDQ (input) INTEGER - The leading dimension of the array Q. LDQ >= max(1,N). - - INDXQ (input) INTEGER array, dimension (N) - The permutation which separately sorts the two sub-problems - in D into ascending order. Note that elements in the second - half of this permutation must first have CUTPNT added to - their values in order to be accurate. - - RHO (input/output) DOUBLE PRECISION - On entry, the off-diagonal element associated with the rank-1 - cut which originally split the two submatrices which are now - being recombined. - On exit, RHO has been modified to the value required by - DLAED3. - - CUTPNT (input) INTEGER - The location of the last eigenvalue in the leading - sub-matrix. min(1,N) <= CUTPNT <= N. - - Z (input) DOUBLE PRECISION array, dimension (N) - On entry, Z contains the updating vector (the last row of - the first sub-eigenvector matrix and the first row of the - second sub-eigenvector matrix). - On exit, the contents of Z are destroyed by the updating - process. - - DLAMDA (output) DOUBLE PRECISION array, dimension (N) - A copy of the first K eigenvalues which will be used by - DLAED3 to form the secular equation. - - Q2 (output) DOUBLE PRECISION array, dimension (LDQ2,N) - If ICOMPQ = 0, Q2 is not referenced. Otherwise, - a copy of the first K eigenvectors which will be used by - DLAED7 in a matrix multiply (DGEMM) to update the new - eigenvectors. - - LDQ2 (input) INTEGER - The leading dimension of the array Q2. LDQ2 >= max(1,N). - - W (output) DOUBLE PRECISION array, dimension (N) - The first k values of the final deflation-altered z-vector and - will be passed to DLAED3. 
- - PERM (output) INTEGER array, dimension (N) - The permutations (from deflation and sorting) to be applied - to each eigenblock. - - GIVPTR (output) INTEGER - The number of Givens rotations which took place in this - subproblem. - - GIVCOL (output) INTEGER array, dimension (2, N) - Each pair of numbers indicates a pair of columns to take place - in a Givens rotation. - - GIVNUM (output) DOUBLE PRECISION array, dimension (2, N) - Each number indicates the S value to be used in the - corresponding Givens rotation. - - INDXP (workspace) INTEGER array, dimension (N) - The permutation used to place deflated values of D at the end - of the array. INDXP(1:K) points to the nondeflated D-values - and INDXP(K+1:N) points to the deflated eigenvalues. - - INDX (workspace) INTEGER array, dimension (N) - The permutation used to sort the contents of D into ascending - order. - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - Based on contributions by - Jeff Rutter, Computer Science Division, University of California - at Berkeley, USA - - ===================================================================== - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - --d__; - q_dim1 = *ldq; - q_offset = 1 + q_dim1 * 1; - q -= q_offset; - --indxq; - --z__; - --dlamda; - q2_dim1 = *ldq2; - q2_offset = 1 + q2_dim1 * 1; - q2 -= q2_offset; - --w; - --perm; - givcol -= 3; - givnum -= 3; - --indxp; - --indx; - - /* Function Body */ - *info = 0; - - if (*icompq < 0 || *icompq > 1) { - *info = -1; - } else if (*n < 0) { - *info = -3; - } else if ((*icompq == 1 && *qsiz < *n)) { - *info = -4; - } else if (*ldq < max(1,*n)) { - *info = -7; - } else if (*cutpnt < min(1,*n) || *cutpnt > *n) { - *info = -10; - } else if (*ldq2 < max(1,*n)) { - *info = -14; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLAED8", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - - n1 = *cutpnt; - n2 = *n - n1; - n1p1 = n1 + 1; - - if (*rho < 0.) { - dscal_(&n2, &c_b151, &z__[n1p1], &c__1); - } - -/* Normalize z so that norm(z) = 1 */ - - t = 1. / sqrt(2.); - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - indx[j] = j; -/* L10: */ - } - dscal_(n, &t, &z__[1], &c__1); - *rho = (d__1 = *rho * 2., abs(d__1)); - -/* Sort the eigenvalues into increasing order */ - - i__1 = *n; - for (i__ = *cutpnt + 1; i__ <= i__1; ++i__) { - indxq[i__] += *cutpnt; -/* L20: */ - } - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - dlamda[i__] = d__[indxq[i__]]; - w[i__] = z__[indxq[i__]]; -/* L30: */ - } - i__ = 1; - j = *cutpnt + 1; - dlamrg_(&n1, &n2, &dlamda[1], &c__1, &c__1, &indx[1]); - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - d__[i__] = dlamda[indx[i__]]; - z__[i__] = w[indx[i__]]; -/* L40: */ - } - -/* Calculate the allowable deflation tolerence */ - - imax = idamax_(n, &z__[1], &c__1); - jmax = idamax_(n, &d__[1], &c__1); - eps = EPSILON; - tol = eps * 8. * (d__1 = d__[jmax], abs(d__1)); - -/* - If the rank-1 modifier is small enough, no more needs to be done - except to reorganize Q so that its columns correspond with the - elements in D. 
-*/ - - if (*rho * (d__1 = z__[imax], abs(d__1)) <= tol) { - *k = 0; - if (*icompq == 0) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - perm[j] = indxq[indx[j]]; -/* L50: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - perm[j] = indxq[indx[j]]; - dcopy_(qsiz, &q[perm[j] * q_dim1 + 1], &c__1, &q2[j * q2_dim1 - + 1], &c__1); -/* L60: */ - } - dlacpy_("A", qsiz, n, &q2[q2_dim1 + 1], ldq2, &q[q_dim1 + 1], ldq); - } - return 0; - } - -/* - If there are multiple eigenvalues then the problem deflates. Here - the number of equal eigenvalues are found. As each equal - eigenvalue is found, an elementary reflector is computed to rotate - the corresponding eigensubspace so that the corresponding - components of Z are zero in this new basis. -*/ - - *k = 0; - *givptr = 0; - k2 = *n + 1; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (*rho * (d__1 = z__[j], abs(d__1)) <= tol) { - -/* Deflate due to small z component. */ - - --k2; - indxp[k2] = j; - if (j == *n) { - goto L110; - } - } else { - jlam = j; - goto L80; - } -/* L70: */ - } -L80: - ++j; - if (j > *n) { - goto L100; - } - if (*rho * (d__1 = z__[j], abs(d__1)) <= tol) { - -/* Deflate due to small z component. */ - - --k2; - indxp[k2] = j; - } else { - -/* Check if eigenvalues are close enough to allow deflation. */ - - s = z__[jlam]; - c__ = z__[j]; - -/* - Find sqrt(a**2+b**2) without overflow or - destructive underflow. -*/ - - tau = dlapy2_(&c__, &s); - t = d__[j] - d__[jlam]; - c__ /= tau; - s = -s / tau; - if ((d__1 = t * c__ * s, abs(d__1)) <= tol) { - -/* Deflation is possible. 
*/ - - z__[j] = tau; - z__[jlam] = 0.; - -/* Record the appropriate Givens rotation */ - - ++(*givptr); - givcol[((*givptr) << (1)) + 1] = indxq[indx[jlam]]; - givcol[((*givptr) << (1)) + 2] = indxq[indx[j]]; - givnum[((*givptr) << (1)) + 1] = c__; - givnum[((*givptr) << (1)) + 2] = s; - if (*icompq == 1) { - drot_(qsiz, &q[indxq[indx[jlam]] * q_dim1 + 1], &c__1, &q[ - indxq[indx[j]] * q_dim1 + 1], &c__1, &c__, &s); - } - t = d__[jlam] * c__ * c__ + d__[j] * s * s; - d__[j] = d__[jlam] * s * s + d__[j] * c__ * c__; - d__[jlam] = t; - --k2; - i__ = 1; -L90: - if (k2 + i__ <= *n) { - if (d__[jlam] < d__[indxp[k2 + i__]]) { - indxp[k2 + i__ - 1] = indxp[k2 + i__]; - indxp[k2 + i__] = jlam; - ++i__; - goto L90; - } else { - indxp[k2 + i__ - 1] = jlam; - } - } else { - indxp[k2 + i__ - 1] = jlam; - } - jlam = j; - } else { - ++(*k); - w[*k] = z__[jlam]; - dlamda[*k] = d__[jlam]; - indxp[*k] = jlam; - jlam = j; - } - } - goto L80; -L100: - -/* Record the last eigenvalue. */ - - ++(*k); - w[*k] = z__[jlam]; - dlamda[*k] = d__[jlam]; - indxp[*k] = jlam; - -L110: - -/* - Sort the eigenvalues and corresponding eigenvectors into DLAMDA - and Q2 respectively. The eigenvalues/vectors which were not - deflated go into the first K slots of DLAMDA and Q2 respectively, - while those which were deflated go into the last N - K slots. -*/ - - if (*icompq == 0) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - jp = indxp[j]; - dlamda[j] = d__[jp]; - perm[j] = indxq[indx[jp]]; -/* L120: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - jp = indxp[j]; - dlamda[j] = d__[jp]; - perm[j] = indxq[indx[jp]]; - dcopy_(qsiz, &q[perm[j] * q_dim1 + 1], &c__1, &q2[j * q2_dim1 + 1] - , &c__1); -/* L130: */ - } - } - -/* - The deflated eigenvalues and their corresponding vectors go back - into the last N - K slots of D and Q respectively. 
-*/ - - if (*k < *n) { - if (*icompq == 0) { - i__1 = *n - *k; - dcopy_(&i__1, &dlamda[*k + 1], &c__1, &d__[*k + 1], &c__1); - } else { - i__1 = *n - *k; - dcopy_(&i__1, &dlamda[*k + 1], &c__1, &d__[*k + 1], &c__1); - i__1 = *n - *k; - dlacpy_("A", qsiz, &i__1, &q2[(*k + 1) * q2_dim1 + 1], ldq2, &q[(* - k + 1) * q_dim1 + 1], ldq); - } - } - - return 0; - -/* End of DLAED8 */ - -} /* dlaed8_ */ - -/* Subroutine */ int dlaed9_(integer *k, integer *kstart, integer *kstop, - integer *n, doublereal *d__, doublereal *q, integer *ldq, doublereal * - rho, doublereal *dlamda, doublereal *w, doublereal *s, integer *lds, - integer *info) -{ - /* System generated locals */ - integer q_dim1, q_offset, s_dim1, s_offset, i__1, i__2; - doublereal d__1; - - /* Builtin functions */ - double sqrt(doublereal), d_sign(doublereal *, doublereal *); - - /* Local variables */ - static integer i__, j; - static doublereal temp; - extern doublereal dnrm2_(integer *, doublereal *, integer *); - extern /* Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *), dlaed4_(integer *, integer *, - doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *, integer *); - extern doublereal dlamc3_(doublereal *, doublereal *); - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Oak Ridge National Lab, Argonne National Lab, - Courant Institute, NAG Ltd., and Rice University - September 30, 1994 - - - Purpose - ======= - - DLAED9 finds the roots of the secular equation, as defined by the - values in D, Z, and RHO, between KSTART and KSTOP. It makes the - appropriate calls to DLAED4 and then stores the new matrix of - eigenvectors for use in calculating the next level of Z vectors. - - Arguments - ========= - - K (input) INTEGER - The number of terms in the rational function to be solved by - DLAED4. K >= 0. 
-
-    KSTART (input) INTEGER
-    KSTOP  (input) INTEGER
-           The updated eigenvalues Lambda(I), KSTART <= I <= KSTOP
-           are to be computed.  1 <= KSTART <= KSTOP <= K.
-
-    N      (input) INTEGER
-           The number of rows and columns in the Q matrix.
-           N >= K (deflation may result in N > K).
-
-    D      (output) DOUBLE PRECISION array, dimension (N)
-           D(I) contains the updated eigenvalues
-           for KSTART <= I <= KSTOP.
-
-    Q      (workspace) DOUBLE PRECISION array, dimension (LDQ,N)
-
-    LDQ    (input) INTEGER
-           The leading dimension of the array Q.  LDQ >= max( 1, N ).
-
-    RHO    (input) DOUBLE PRECISION
-           The value of the parameter in the rank one update equation.
-           RHO >= 0 required.
-
-    DLAMDA (input) DOUBLE PRECISION array, dimension (K)
-           The first K elements of this array contain the old roots
-           of the deflated updating problem.  These are the poles
-           of the secular equation.
-
-    W      (input) DOUBLE PRECISION array, dimension (K)
-           The first K elements of this array contain the components
-           of the deflation-adjusted updating vector.
-
-    S      (output) DOUBLE PRECISION array, dimension (LDS, K)
-           Will contain the eigenvectors of the repaired matrix which
-           will be stored for subsequent Z vector calculation and
-           multiplied by the previously accumulated eigenvectors
-           to update the system.
-
-    LDS    (input) INTEGER
-           The leading dimension of S.  LDS >= max( 1, K ).
-
-    INFO   (output) INTEGER
-           = 0:  successful exit.
-           < 0:  if INFO = -i, the i-th argument had an illegal value.
-           > 0:  if INFO = 1, an eigenvalue did not converge
-
-    Further Details
-    ===============
-
-    Based on contributions by
-       Jeff Rutter, Computer Science Division, University of California
-       at Berkeley, USA
-
-    =====================================================================
-
-
-    Test the input parameters.
-*/ - - /* Parameter adjustments */ - --d__; - q_dim1 = *ldq; - q_offset = 1 + q_dim1 * 1; - q -= q_offset; - --dlamda; - --w; - s_dim1 = *lds; - s_offset = 1 + s_dim1 * 1; - s -= s_offset; - - /* Function Body */ - *info = 0; - - if (*k < 0) { - *info = -1; - } else if (*kstart < 1 || *kstart > max(1,*k)) { - *info = -2; - } else if (max(1,*kstop) < *kstart || *kstop > max(1,*k)) { - *info = -3; - } else if (*n < *k) { - *info = -4; - } else if (*ldq < max(1,*k)) { - *info = -7; - } else if (*lds < max(1,*k)) { - *info = -12; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLAED9", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*k == 0) { - return 0; - } - -/* - Modify values DLAMDA(i) to make sure all DLAMDA(i)-DLAMDA(j) can - be computed with high relative accuracy (barring over/underflow). - This is a problem on machines without a guard digit in - add/subtract (Cray XMP, Cray YMP, Cray C 90 and Cray 2). - The following code replaces DLAMDA(I) by 2*DLAMDA(I)-DLAMDA(I), - which on any of these machines zeros out the bottommost - bit of DLAMDA(I) if it is 1; this makes the subsequent - subtractions DLAMDA(I)-DLAMDA(J) unproblematic when cancellation - occurs. On binary machines with a guard digit (almost all - machines) it does not change DLAMDA(I) at all. On hexadecimal - and decimal machines with a guard digit, it slightly - changes the bottommost bits of DLAMDA(I). It does not account - for hexadecimal or decimal machines without guard digits - (we know of none). We use a subroutine call to compute - 2*DLAMBDA(I) to prevent optimizing compilers from eliminating - this code. -*/ - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - dlamda[i__] = dlamc3_(&dlamda[i__], &dlamda[i__]) - dlamda[i__]; -/* L10: */ - } - - i__1 = *kstop; - for (j = *kstart; j <= i__1; ++j) { - dlaed4_(k, &j, &dlamda[1], &w[1], &q[j * q_dim1 + 1], rho, &d__[j], - info); - -/* If the zero finder fails, the computation is terminated. 
*/ - - if (*info != 0) { - goto L120; - } -/* L20: */ - } - - if (*k == 1 || *k == 2) { - i__1 = *k; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = *k; - for (j = 1; j <= i__2; ++j) { - s[j + i__ * s_dim1] = q[j + i__ * q_dim1]; -/* L30: */ - } -/* L40: */ - } - goto L120; - } - -/* Compute updated W. */ - - dcopy_(k, &w[1], &c__1, &s[s_offset], &c__1); - -/* Initialize W(I) = Q(I,I) */ - - i__1 = *ldq + 1; - dcopy_(k, &q[q_offset], &i__1, &w[1], &c__1); - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - w[i__] *= q[i__ + j * q_dim1] / (dlamda[i__] - dlamda[j]); -/* L50: */ - } - i__2 = *k; - for (i__ = j + 1; i__ <= i__2; ++i__) { - w[i__] *= q[i__ + j * q_dim1] / (dlamda[i__] - dlamda[j]); -/* L60: */ - } -/* L70: */ - } - i__1 = *k; - for (i__ = 1; i__ <= i__1; ++i__) { - d__1 = sqrt(-w[i__]); - w[i__] = d_sign(&d__1, &s[i__ + s_dim1]); -/* L80: */ - } - -/* Compute eigenvectors of the modified rank-1 modification. */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *k; - for (i__ = 1; i__ <= i__2; ++i__) { - q[i__ + j * q_dim1] = w[i__] / q[i__ + j * q_dim1]; -/* L90: */ - } - temp = dnrm2_(k, &q[j * q_dim1 + 1], &c__1); - i__2 = *k; - for (i__ = 1; i__ <= i__2; ++i__) { - s[i__ + j * s_dim1] = q[i__ + j * q_dim1] / temp; -/* L100: */ - } -/* L110: */ - } - -L120: - return 0; - -/* End of DLAED9 */ - -} /* dlaed9_ */ - -/* Subroutine */ int dlaeda_(integer *n, integer *tlvls, integer *curlvl, - integer *curpbm, integer *prmptr, integer *perm, integer *givptr, - integer *givcol, doublereal *givnum, doublereal *q, integer *qptr, - doublereal *z__, doublereal *ztemp, integer *info) -{ - /* System generated locals */ - integer i__1, i__2, i__3; - - /* Builtin functions */ - integer pow_ii(integer *, integer *); - double sqrt(doublereal); - - /* Local variables */ - static integer i__, k, mid, ptr; - extern /* Subroutine */ int drot_(integer *, doublereal *, integer *, - doublereal *, integer *, doublereal *, 
doublereal *);
-    static integer curr, bsiz1, bsiz2, psiz1, psiz2, zptr1;
-    extern /* Subroutine */ int dgemv_(char *, integer *, integer *,
-	    doublereal *, doublereal *, integer *, doublereal *, integer *,
-	    doublereal *, doublereal *, integer *), dcopy_(integer *,
-	    doublereal *, integer *, doublereal *, integer *), xerbla_(char *,
-	    integer *);
-
-
-/*
-    -- LAPACK routine (version 3.0) --
-       Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,
-       Courant Institute, Argonne National Lab, and Rice University
-       September 30, 1994
-
-
-    Purpose
-    =======
-
-    DLAEDA computes the Z vector corresponding to the merge step in the
-    CURLVLth step of the merge process with TLVLS steps for the CURPBMth
-    problem.
-
-    Arguments
-    =========
-
-    N      (input) INTEGER
-           The dimension of the symmetric tridiagonal matrix.  N >= 0.
-
-    TLVLS  (input) INTEGER
-           The total number of merging levels in the overall divide and
-           conquer tree.
-
-    CURLVL (input) INTEGER
-           The current level in the overall merge routine,
-           0 <= curlvl <= tlvls.
-
-    CURPBM (input) INTEGER
-           The current problem in the current level in the overall
-           merge routine (counting from upper left to lower right).
-
-    PRMPTR (input) INTEGER array, dimension (N lg N)
-           Contains a list of pointers which indicate where in PERM a
-           level's permutation is stored.  PRMPTR(i+1) - PRMPTR(i)
-           indicates the size of the permutation and incidentally the
-           size of the full, non-deflated problem.
-
-    PERM   (input) INTEGER array, dimension (N lg N)
-           Contains the permutations (from deflation and sorting) to be
-           applied to each eigenblock.
-
-    GIVPTR (input) INTEGER array, dimension (N lg N)
-           Contains a list of pointers which indicate where in GIVCOL a
-           level's Givens rotations are stored.  GIVPTR(i+1) - GIVPTR(i)
-           indicates the number of Givens rotations.
-
-    GIVCOL (input) INTEGER array, dimension (2, N lg N)
-           Each pair of numbers indicates a pair of columns involved in
-           a Givens rotation.
- - GIVNUM (input) DOUBLE PRECISION array, dimension (2, N lg N) - Each number indicates the S value to be used in the - corresponding Givens rotation. - - Q (input) DOUBLE PRECISION array, dimension (N**2) - Contains the square eigenblocks from previous levels, the - starting positions for blocks are given by QPTR. - - QPTR (input) INTEGER array, dimension (N+2) - Contains a list of pointers which indicate where in Q an - eigenblock is stored. SQRT( QPTR(i+1) - QPTR(i) ) indicates - the size of the block. - - Z (output) DOUBLE PRECISION array, dimension (N) - On output this vector contains the updating vector (the last - row of the first sub-eigenvector matrix and the first row of - the second sub-eigenvector matrix). - - ZTEMP (workspace) DOUBLE PRECISION array, dimension (N) - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - Based on contributions by - Jeff Rutter, Computer Science Division, University of California - at Berkeley, USA - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --ztemp; - --z__; - --qptr; - --q; - givnum -= 3; - givcol -= 3; - --givptr; - --perm; - --prmptr; - - /* Function Body */ - *info = 0; - - if (*n < 0) { - *info = -1; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLAEDA", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - -/* Determine location of first number in second half. */ - - mid = *n / 2 + 1; - -/* Gather last/first rows of appropriate eigenblocks into center of Z */ - - ptr = 1; - -/* - Determine location of lowest level subproblem in the full storage - scheme -*/ - - i__1 = *curlvl - 1; - curr = ptr + *curpbm * pow_ii(&c__2, curlvl) + pow_ii(&c__2, &i__1) - 1; - -/* - Determine size of these matrices. 
We add HALF to the value of - the SQRT in case the machine underestimates one of these square - roots. -*/ - - bsiz1 = (integer) (sqrt((doublereal) (qptr[curr + 1] - qptr[curr])) + .5); - bsiz2 = (integer) (sqrt((doublereal) (qptr[curr + 2] - qptr[curr + 1])) + - .5); - i__1 = mid - bsiz1 - 1; - for (k = 1; k <= i__1; ++k) { - z__[k] = 0.; -/* L10: */ - } - dcopy_(&bsiz1, &q[qptr[curr] + bsiz1 - 1], &bsiz1, &z__[mid - bsiz1], & - c__1); - dcopy_(&bsiz2, &q[qptr[curr + 1]], &bsiz2, &z__[mid], &c__1); - i__1 = *n; - for (k = mid + bsiz2; k <= i__1; ++k) { - z__[k] = 0.; -/* L20: */ - } - -/* - Loop thru remaining levels 1 -> CURLVL applying the Givens - rotations and permutation and then multiplying the center matrices - against the current Z. -*/ - - ptr = pow_ii(&c__2, tlvls) + 1; - i__1 = *curlvl - 1; - for (k = 1; k <= i__1; ++k) { - i__2 = *curlvl - k; - i__3 = *curlvl - k - 1; - curr = ptr + *curpbm * pow_ii(&c__2, &i__2) + pow_ii(&c__2, &i__3) - - 1; - psiz1 = prmptr[curr + 1] - prmptr[curr]; - psiz2 = prmptr[curr + 2] - prmptr[curr + 1]; - zptr1 = mid - psiz1; - -/* Apply Givens at CURR and CURR+1 */ - - i__2 = givptr[curr + 1] - 1; - for (i__ = givptr[curr]; i__ <= i__2; ++i__) { - drot_(&c__1, &z__[zptr1 + givcol[((i__) << (1)) + 1] - 1], &c__1, - &z__[zptr1 + givcol[((i__) << (1)) + 2] - 1], &c__1, & - givnum[((i__) << (1)) + 1], &givnum[((i__) << (1)) + 2]); -/* L30: */ - } - i__2 = givptr[curr + 2] - 1; - for (i__ = givptr[curr + 1]; i__ <= i__2; ++i__) { - drot_(&c__1, &z__[mid - 1 + givcol[((i__) << (1)) + 1]], &c__1, & - z__[mid - 1 + givcol[((i__) << (1)) + 2]], &c__1, &givnum[ - ((i__) << (1)) + 1], &givnum[((i__) << (1)) + 2]); -/* L40: */ - } - psiz1 = prmptr[curr + 1] - prmptr[curr]; - psiz2 = prmptr[curr + 2] - prmptr[curr + 1]; - i__2 = psiz1 - 1; - for (i__ = 0; i__ <= i__2; ++i__) { - ztemp[i__ + 1] = z__[zptr1 + perm[prmptr[curr] + i__] - 1]; -/* L50: */ - } - i__2 = psiz2 - 1; - for (i__ = 0; i__ <= i__2; ++i__) { - ztemp[psiz1 + i__ + 1] = 
z__[mid + perm[prmptr[curr + 1] + i__] - - 1]; -/* L60: */ - } - -/* - Multiply Blocks at CURR and CURR+1 - - Determine size of these matrices. We add HALF to the value of - the SQRT in case the machine underestimates one of these - square roots. -*/ - - bsiz1 = (integer) (sqrt((doublereal) (qptr[curr + 1] - qptr[curr])) + - .5); - bsiz2 = (integer) (sqrt((doublereal) (qptr[curr + 2] - qptr[curr + 1]) - ) + .5); - if (bsiz1 > 0) { - dgemv_("T", &bsiz1, &bsiz1, &c_b15, &q[qptr[curr]], &bsiz1, & - ztemp[1], &c__1, &c_b29, &z__[zptr1], &c__1); - } - i__2 = psiz1 - bsiz1; - dcopy_(&i__2, &ztemp[bsiz1 + 1], &c__1, &z__[zptr1 + bsiz1], &c__1); - if (bsiz2 > 0) { - dgemv_("T", &bsiz2, &bsiz2, &c_b15, &q[qptr[curr + 1]], &bsiz2, & - ztemp[psiz1 + 1], &c__1, &c_b29, &z__[mid], &c__1); - } - i__2 = psiz2 - bsiz2; - dcopy_(&i__2, &ztemp[psiz1 + bsiz2 + 1], &c__1, &z__[mid + bsiz2], & - c__1); - - i__2 = *tlvls - k; - ptr += pow_ii(&c__2, &i__2); -/* L70: */ - } - - return 0; - -/* End of DLAEDA */ - -} /* dlaeda_ */ - -/* Subroutine */ int dlaev2_(doublereal *a, doublereal *b, doublereal *c__, - doublereal *rt1, doublereal *rt2, doublereal *cs1, doublereal *sn1) -{ - /* System generated locals */ - doublereal d__1; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal ab, df, cs, ct, tb, sm, tn, rt, adf, acs; - static integer sgn1, sgn2; - static doublereal acmn, acmx; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLAEV2 computes the eigendecomposition of a 2-by-2 symmetric matrix - [ A B ] - [ B C ]. 
- On return, RT1 is the eigenvalue of larger absolute value, RT2 is the - eigenvalue of smaller absolute value, and (CS1,SN1) is the unit right - eigenvector for RT1, giving the decomposition - - [ CS1 SN1 ] [ A B ] [ CS1 -SN1 ] = [ RT1 0 ] - [-SN1 CS1 ] [ B C ] [ SN1 CS1 ] [ 0 RT2 ]. - - Arguments - ========= - - A (input) DOUBLE PRECISION - The (1,1) element of the 2-by-2 matrix. - - B (input) DOUBLE PRECISION - The (1,2) element and the conjugate of the (2,1) element of - the 2-by-2 matrix. - - C (input) DOUBLE PRECISION - The (2,2) element of the 2-by-2 matrix. - - RT1 (output) DOUBLE PRECISION - The eigenvalue of larger absolute value. - - RT2 (output) DOUBLE PRECISION - The eigenvalue of smaller absolute value. - - CS1 (output) DOUBLE PRECISION - SN1 (output) DOUBLE PRECISION - The vector (CS1, SN1) is a unit right eigenvector for RT1. - - Further Details - =============== - - RT1 is accurate to a few ulps barring over/underflow. - - RT2 may be inaccurate if there is massive cancellation in the - determinant A*C-B*B; higher precision or correctly rounded or - correctly truncated arithmetic would be needed to compute RT2 - accurately in all cases. - - CS1 and SN1 are accurate to a few ulps barring over/underflow. - - Overflow is possible only if RT1 is within a factor of 5 of overflow. - Underflow is harmless if the input data is 0 or exceeds - underflow_threshold / macheps. - - ===================================================================== - - - Compute the eigenvalues -*/ - - sm = *a + *c__; - df = *a - *c__; - adf = abs(df); - tb = *b + *b; - ab = abs(tb); - if (abs(*a) > abs(*c__)) { - acmx = *a; - acmn = *c__; - } else { - acmx = *c__; - acmn = *a; - } - if (adf > ab) { -/* Computing 2nd power */ - d__1 = ab / adf; - rt = adf * sqrt(d__1 * d__1 + 1.); - } else if (adf < ab) { -/* Computing 2nd power */ - d__1 = adf / ab; - rt = ab * sqrt(d__1 * d__1 + 1.); - } else { - -/* Includes case AB=ADF=0 */ - - rt = ab * sqrt(2.); - } - if (sm < 0.) 
{ - *rt1 = (sm - rt) * .5; - sgn1 = -1; - -/* - Order of execution important. - To get fully accurate smaller eigenvalue, - next line needs to be executed in higher precision. -*/ - - *rt2 = acmx / *rt1 * acmn - *b / *rt1 * *b; - } else if (sm > 0.) { - *rt1 = (sm + rt) * .5; - sgn1 = 1; - -/* - Order of execution important. - To get fully accurate smaller eigenvalue, - next line needs to be executed in higher precision. -*/ - - *rt2 = acmx / *rt1 * acmn - *b / *rt1 * *b; - } else { - -/* Includes case RT1 = RT2 = 0 */ - - *rt1 = rt * .5; - *rt2 = rt * -.5; - sgn1 = 1; - } - -/* Compute the eigenvector */ - - if (df >= 0.) { - cs = df + rt; - sgn2 = 1; - } else { - cs = df - rt; - sgn2 = -1; - } - acs = abs(cs); - if (acs > ab) { - ct = -tb / cs; - *sn1 = 1. / sqrt(ct * ct + 1.); - *cs1 = ct * *sn1; - } else { - if (ab == 0.) { - *cs1 = 1.; - *sn1 = 0.; - } else { - tn = -cs / tb; - *cs1 = 1. / sqrt(tn * tn + 1.); - *sn1 = tn * *cs1; - } - } - if (sgn1 == sgn2) { - tn = *cs1; - *cs1 = -(*sn1); - *sn1 = tn; - } - return 0; - -/* End of DLAEV2 */ - -} /* dlaev2_ */ - -/* Subroutine */ int dlahqr_(logical *wantt, logical *wantz, integer *n, - integer *ilo, integer *ihi, doublereal *h__, integer *ldh, doublereal - *wr, doublereal *wi, integer *iloz, integer *ihiz, doublereal *z__, - integer *ldz, integer *info) -{ - /* System generated locals */ - integer h_dim1, h_offset, z_dim1, z_offset, i__1, i__2, i__3, i__4; - doublereal d__1, d__2; - - /* Builtin functions */ - double sqrt(doublereal), d_sign(doublereal *, doublereal *); - - /* Local variables */ - static integer i__, j, k, l, m; - static doublereal s, v[3]; - static integer i1, i2; - static doublereal t1, t2, t3, v1, v2, v3, h00, h10, h11, h12, h21, h22, - h33, h44; - static integer nh; - static doublereal cs; - static integer nr; - static doublereal sn; - static integer nz; - static doublereal ave, h33s, h44s; - static integer itn, its; - static doublereal ulp, sum, tst1, h43h34, disc, unfl, ovfl; - extern /* 
Subroutine */ int drot_(integer *, doublereal *, integer *, - doublereal *, integer *, doublereal *, doublereal *); - static doublereal work[1]; - extern /* Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *), dlanv2_(doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *, doublereal *), dlabad_( - doublereal *, doublereal *); - - extern /* Subroutine */ int dlarfg_(integer *, doublereal *, doublereal *, - integer *, doublereal *); - extern doublereal dlanhs_(char *, integer *, doublereal *, integer *, - doublereal *); - static doublereal smlnum; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DLAHQR is an auxiliary routine called by DHSEQR to update the - eigenvalues and Schur decomposition already computed by DHSEQR, by - dealing with the Hessenberg submatrix in rows and columns ILO to IHI. - - Arguments - ========= - - WANTT (input) LOGICAL - = .TRUE. : the full Schur form T is required; - = .FALSE.: only eigenvalues are required. - - WANTZ (input) LOGICAL - = .TRUE. : the matrix of Schur vectors Z is required; - = .FALSE.: Schur vectors are not required. - - N (input) INTEGER - The order of the matrix H. N >= 0. - - ILO (input) INTEGER - IHI (input) INTEGER - It is assumed that H is already upper quasi-triangular in - rows and columns IHI+1:N, and that H(ILO,ILO-1) = 0 (unless - ILO = 1). DLAHQR works primarily with the Hessenberg - submatrix in rows and columns ILO to IHI, but applies - transformations to all of H if WANTT is .TRUE.. - 1 <= ILO <= max(1,IHI); IHI <= N. - - H (input/output) DOUBLE PRECISION array, dimension (LDH,N) - On entry, the upper Hessenberg matrix H. 
- On exit, if WANTT is .TRUE., H is upper quasi-triangular in - rows and columns ILO:IHI, with any 2-by-2 diagonal blocks in - standard form. If WANTT is .FALSE., the contents of H are - unspecified on exit. - - LDH (input) INTEGER - The leading dimension of the array H. LDH >= max(1,N). - - WR (output) DOUBLE PRECISION array, dimension (N) - WI (output) DOUBLE PRECISION array, dimension (N) - The real and imaginary parts, respectively, of the computed - eigenvalues ILO to IHI are stored in the corresponding - elements of WR and WI. If two eigenvalues are computed as a - complex conjugate pair, they are stored in consecutive - elements of WR and WI, say the i-th and (i+1)th, with - WI(i) > 0 and WI(i+1) < 0. If WANTT is .TRUE., the - eigenvalues are stored in the same order as on the diagonal - of the Schur form returned in H, with WR(i) = H(i,i), and, if - H(i:i+1,i:i+1) is a 2-by-2 diagonal block, - WI(i) = sqrt(H(i+1,i)*H(i,i+1)) and WI(i+1) = -WI(i). - - ILOZ (input) INTEGER - IHIZ (input) INTEGER - Specify the rows of Z to which transformations must be - applied if WANTZ is .TRUE.. - 1 <= ILOZ <= ILO; IHI <= IHIZ <= N. - - Z (input/output) DOUBLE PRECISION array, dimension (LDZ,N) - If WANTZ is .TRUE., on entry Z must contain the current - matrix Z of transformations accumulated by DHSEQR, and on - exit Z has been updated; transformations are applied only to - the submatrix Z(ILOZ:IHIZ,ILO:IHI). - If WANTZ is .FALSE., Z is not referenced. - - LDZ (input) INTEGER - The leading dimension of the array Z. LDZ >= max(1,N). - - INFO (output) INTEGER - = 0: successful exit - > 0: DLAHQR failed to compute all the eigenvalues ILO to IHI - in a total of 30*(IHI-ILO+1) iterations; if INFO = i, - elements i+1:ihi of WR and WI contain those eigenvalues - which have been successfully computed. 
- - Further Details - =============== - - 2-96 Based on modifications by - David Day, Sandia National Laboratory, USA - - ===================================================================== -*/ - - - /* Parameter adjustments */ - h_dim1 = *ldh; - h_offset = 1 + h_dim1 * 1; - h__ -= h_offset; - --wr; - --wi; - z_dim1 = *ldz; - z_offset = 1 + z_dim1 * 1; - z__ -= z_offset; - - /* Function Body */ - *info = 0; - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - if (*ilo == *ihi) { - wr[*ilo] = h__[*ilo + *ilo * h_dim1]; - wi[*ilo] = 0.; - return 0; - } - - nh = *ihi - *ilo + 1; - nz = *ihiz - *iloz + 1; - -/* - Set machine-dependent constants for the stopping criterion. - If norm(H) <= sqrt(OVFL), overflow should not occur. -*/ - - unfl = SAFEMINIMUM; - ovfl = 1. / unfl; - dlabad_(&unfl, &ovfl); - ulp = PRECISION; - smlnum = unfl * (nh / ulp); - -/* - I1 and I2 are the indices of the first row and last column of H - to which transformations must be applied. If eigenvalues only are - being computed, I1 and I2 are set inside the main loop. -*/ - - if (*wantt) { - i1 = 1; - i2 = *n; - } - -/* ITN is the total number of QR iterations allowed. */ - - itn = nh * 30; - -/* - The main loop begins here. I is the loop index and decreases from - IHI to ILO in steps of 1 or 2. Each iteration of the loop works - with the active submatrix in rows and columns L to I. - Eigenvalues I+1 to IHI have already converged. Either L = ILO or - H(L,L-1) is negligible so that the matrix splits. -*/ - - i__ = *ihi; -L10: - l = *ilo; - if (i__ < *ilo) { - goto L150; - } - -/* - Perform QR iterations on rows and columns ILO to I until a - submatrix of order 1 or 2 splits off at the bottom because a - subdiagonal element has become negligible. -*/ - - i__1 = itn; - for (its = 0; its <= i__1; ++its) { - -/* Look for a single small subdiagonal element. 
*/ - - i__2 = l + 1; - for (k = i__; k >= i__2; --k) { - tst1 = (d__1 = h__[k - 1 + (k - 1) * h_dim1], abs(d__1)) + (d__2 = - h__[k + k * h_dim1], abs(d__2)); - if (tst1 == 0.) { - i__3 = i__ - l + 1; - tst1 = dlanhs_("1", &i__3, &h__[l + l * h_dim1], ldh, work); - } -/* Computing MAX */ - d__2 = ulp * tst1; - if ((d__1 = h__[k + (k - 1) * h_dim1], abs(d__1)) <= max(d__2, - smlnum)) { - goto L30; - } -/* L20: */ - } -L30: - l = k; - if (l > *ilo) { - -/* H(L,L-1) is negligible */ - - h__[l + (l - 1) * h_dim1] = 0.; - } - -/* Exit from loop if a submatrix of order 1 or 2 has split off. */ - - if (l >= i__ - 1) { - goto L140; - } - -/* - Now the active submatrix is in rows and columns L to I. If - eigenvalues only are being computed, only the active submatrix - need be transformed. -*/ - - if (! (*wantt)) { - i1 = l; - i2 = i__; - } - - if (its == 10 || its == 20) { - -/* Exceptional shift. */ - - s = (d__1 = h__[i__ + (i__ - 1) * h_dim1], abs(d__1)) + (d__2 = - h__[i__ - 1 + (i__ - 2) * h_dim1], abs(d__2)); - h44 = s * .75 + h__[i__ + i__ * h_dim1]; - h33 = h44; - h43h34 = s * -.4375 * s; - } else { - -/* - Prepare to use Francis' double shift - (i.e. 2nd degree generalized Rayleigh quotient) -*/ - - h44 = h__[i__ + i__ * h_dim1]; - h33 = h__[i__ - 1 + (i__ - 1) * h_dim1]; - h43h34 = h__[i__ + (i__ - 1) * h_dim1] * h__[i__ - 1 + i__ * - h_dim1]; - s = h__[i__ - 1 + (i__ - 2) * h_dim1] * h__[i__ - 1 + (i__ - 2) * - h_dim1]; - disc = (h33 - h44) * .5; - disc = disc * disc + h43h34; - if (disc > 0.) { - -/* Real roots: use Wilkinson's shift twice */ - - disc = sqrt(disc); - ave = (h33 + h44) * .5; - if (abs(h33) - abs(h44) > 0.) { - h33 = h33 * h44 - h43h34; - h44 = h33 / (d_sign(&disc, &ave) + ave); - } else { - h44 = d_sign(&disc, &ave) + ave; - } - h33 = h44; - h43h34 = 0.; - } - } - -/* Look for two consecutive small subdiagonal elements. 
*/ - - i__2 = l; - for (m = i__ - 2; m >= i__2; --m) { -/* - Determine the effect of starting the double-shift QR - iteration at row M, and see if this would make H(M,M-1) - negligible. -*/ - - h11 = h__[m + m * h_dim1]; - h22 = h__[m + 1 + (m + 1) * h_dim1]; - h21 = h__[m + 1 + m * h_dim1]; - h12 = h__[m + (m + 1) * h_dim1]; - h44s = h44 - h11; - h33s = h33 - h11; - v1 = (h33s * h44s - h43h34) / h21 + h12; - v2 = h22 - h11 - h33s - h44s; - v3 = h__[m + 2 + (m + 1) * h_dim1]; - s = abs(v1) + abs(v2) + abs(v3); - v1 /= s; - v2 /= s; - v3 /= s; - v[0] = v1; - v[1] = v2; - v[2] = v3; - if (m == l) { - goto L50; - } - h00 = h__[m - 1 + (m - 1) * h_dim1]; - h10 = h__[m + (m - 1) * h_dim1]; - tst1 = abs(v1) * (abs(h00) + abs(h11) + abs(h22)); - if (abs(h10) * (abs(v2) + abs(v3)) <= ulp * tst1) { - goto L50; - } -/* L40: */ - } -L50: - -/* Double-shift QR step */ - - i__2 = i__ - 1; - for (k = m; k <= i__2; ++k) { - -/* - The first iteration of this loop determines a reflection G - from the vector V and applies it from left and right to H, - thus creating a nonzero bulge below the subdiagonal. - - Each subsequent iteration determines a reflection G to - restore the Hessenberg form in the (K-1)th column, and thus - chases the bulge one step toward the bottom of the active - submatrix. NR is the order of G. - - Computing MIN -*/ - i__3 = 3, i__4 = i__ - k + 1; - nr = min(i__3,i__4); - if (k > m) { - dcopy_(&nr, &h__[k + (k - 1) * h_dim1], &c__1, v, &c__1); - } - dlarfg_(&nr, v, &v[1], &c__1, &t1); - if (k > m) { - h__[k + (k - 1) * h_dim1] = v[0]; - h__[k + 1 + (k - 1) * h_dim1] = 0.; - if (k < i__ - 1) { - h__[k + 2 + (k - 1) * h_dim1] = 0.; - } - } else if (m > l) { - h__[k + (k - 1) * h_dim1] = -h__[k + (k - 1) * h_dim1]; - } - v2 = v[1]; - t2 = t1 * v2; - if (nr == 3) { - v3 = v[2]; - t3 = t1 * v3; - -/* - Apply G from the left to transform the rows of the matrix - in columns K to I2. 
-*/ - - i__3 = i2; - for (j = k; j <= i__3; ++j) { - sum = h__[k + j * h_dim1] + v2 * h__[k + 1 + j * h_dim1] - + v3 * h__[k + 2 + j * h_dim1]; - h__[k + j * h_dim1] -= sum * t1; - h__[k + 1 + j * h_dim1] -= sum * t2; - h__[k + 2 + j * h_dim1] -= sum * t3; -/* L60: */ - } - -/* - Apply G from the right to transform the columns of the - matrix in rows I1 to min(K+3,I). - - Computing MIN -*/ - i__4 = k + 3; - i__3 = min(i__4,i__); - for (j = i1; j <= i__3; ++j) { - sum = h__[j + k * h_dim1] + v2 * h__[j + (k + 1) * h_dim1] - + v3 * h__[j + (k + 2) * h_dim1]; - h__[j + k * h_dim1] -= sum * t1; - h__[j + (k + 1) * h_dim1] -= sum * t2; - h__[j + (k + 2) * h_dim1] -= sum * t3; -/* L70: */ - } - - if (*wantz) { - -/* Accumulate transformations in the matrix Z */ - - i__3 = *ihiz; - for (j = *iloz; j <= i__3; ++j) { - sum = z__[j + k * z_dim1] + v2 * z__[j + (k + 1) * - z_dim1] + v3 * z__[j + (k + 2) * z_dim1]; - z__[j + k * z_dim1] -= sum * t1; - z__[j + (k + 1) * z_dim1] -= sum * t2; - z__[j + (k + 2) * z_dim1] -= sum * t3; -/* L80: */ - } - } - } else if (nr == 2) { - -/* - Apply G from the left to transform the rows of the matrix - in columns K to I2. -*/ - - i__3 = i2; - for (j = k; j <= i__3; ++j) { - sum = h__[k + j * h_dim1] + v2 * h__[k + 1 + j * h_dim1]; - h__[k + j * h_dim1] -= sum * t1; - h__[k + 1 + j * h_dim1] -= sum * t2; -/* L90: */ - } - -/* - Apply G from the right to transform the columns of the - matrix in rows I1 to min(K+3,I). 
-*/ - - i__3 = i__; - for (j = i1; j <= i__3; ++j) { - sum = h__[j + k * h_dim1] + v2 * h__[j + (k + 1) * h_dim1] - ; - h__[j + k * h_dim1] -= sum * t1; - h__[j + (k + 1) * h_dim1] -= sum * t2; -/* L100: */ - } - - if (*wantz) { - -/* Accumulate transformations in the matrix Z */ - - i__3 = *ihiz; - for (j = *iloz; j <= i__3; ++j) { - sum = z__[j + k * z_dim1] + v2 * z__[j + (k + 1) * - z_dim1]; - z__[j + k * z_dim1] -= sum * t1; - z__[j + (k + 1) * z_dim1] -= sum * t2; -/* L110: */ - } - } - } -/* L120: */ - } - -/* L130: */ - } - -/* Failure to converge in remaining number of iterations */ - - *info = i__; - return 0; - -L140: - - if (l == i__) { - -/* H(I,I-1) is negligible: one eigenvalue has converged. */ - - wr[i__] = h__[i__ + i__ * h_dim1]; - wi[i__] = 0.; - } else if (l == i__ - 1) { - -/* - H(I-1,I-2) is negligible: a pair of eigenvalues have converged. - - Transform the 2-by-2 submatrix to standard Schur form, - and compute and store the eigenvalues. -*/ - - dlanv2_(&h__[i__ - 1 + (i__ - 1) * h_dim1], &h__[i__ - 1 + i__ * - h_dim1], &h__[i__ + (i__ - 1) * h_dim1], &h__[i__ + i__ * - h_dim1], &wr[i__ - 1], &wi[i__ - 1], &wr[i__], &wi[i__], &cs, - &sn); - - if (*wantt) { - -/* Apply the transformation to the rest of H. */ - - if (i2 > i__) { - i__1 = i2 - i__; - drot_(&i__1, &h__[i__ - 1 + (i__ + 1) * h_dim1], ldh, &h__[ - i__ + (i__ + 1) * h_dim1], ldh, &cs, &sn); - } - i__1 = i__ - i1 - 1; - drot_(&i__1, &h__[i1 + (i__ - 1) * h_dim1], &c__1, &h__[i1 + i__ * - h_dim1], &c__1, &cs, &sn); - } - if (*wantz) { - -/* Apply the transformation to Z. */ - - drot_(&nz, &z__[*iloz + (i__ - 1) * z_dim1], &c__1, &z__[*iloz + - i__ * z_dim1], &c__1, &cs, &sn); - } - } - -/* - Decrement number of remaining iterations, and return to start of - the main loop with new value of I. 
-*/ - - itn -= its; - i__ = l - 1; - goto L10; - -L150: - return 0; - -/* End of DLAHQR */ - -} /* dlahqr_ */ - -/* Subroutine */ int dlahrd_(integer *n, integer *k, integer *nb, doublereal * - a, integer *lda, doublereal *tau, doublereal *t, integer *ldt, - doublereal *y, integer *ldy) -{ - /* System generated locals */ - integer a_dim1, a_offset, t_dim1, t_offset, y_dim1, y_offset, i__1, i__2, - i__3; - doublereal d__1; - - /* Local variables */ - static integer i__; - static doublereal ei; - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *), dgemv_(char *, integer *, integer *, doublereal *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - doublereal *, integer *), dcopy_(integer *, doublereal *, - integer *, doublereal *, integer *), daxpy_(integer *, doublereal - *, doublereal *, integer *, doublereal *, integer *), dtrmv_(char - *, char *, char *, integer *, doublereal *, integer *, doublereal - *, integer *), dlarfg_(integer *, - doublereal *, doublereal *, integer *, doublereal *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DLAHRD reduces the first NB columns of a real general n-by-(n-k+1) - matrix A so that elements below the k-th subdiagonal are zero. The - reduction is performed by an orthogonal similarity transformation - Q' * A * Q. The routine returns the matrices V and T which determine - Q as a block reflector I - V*T*V', and also the matrix Y = A * V * T. - - This is an auxiliary routine called by DGEHRD. - - Arguments - ========= - - N (input) INTEGER - The order of the matrix A. - - K (input) INTEGER - The offset for the reduction. Elements below the k-th - subdiagonal in the first NB columns are reduced to zero. - - NB (input) INTEGER - The number of columns to be reduced. 
- - A (input/output) DOUBLE PRECISION array, dimension (LDA,N-K+1) - On entry, the n-by-(n-k+1) general matrix A. - On exit, the elements on and above the k-th subdiagonal in - the first NB columns are overwritten with the corresponding - elements of the reduced matrix; the elements below the k-th - subdiagonal, with the array TAU, represent the matrix Q as a - product of elementary reflectors. The other columns of A are - unchanged. See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - TAU (output) DOUBLE PRECISION array, dimension (NB) - The scalar factors of the elementary reflectors. See Further - Details. - - T (output) DOUBLE PRECISION array, dimension (LDT,NB) - The upper triangular matrix T. - - LDT (input) INTEGER - The leading dimension of the array T. LDT >= NB. - - Y (output) DOUBLE PRECISION array, dimension (LDY,NB) - The n-by-nb matrix Y. - - LDY (input) INTEGER - The leading dimension of the array Y. LDY >= N. - - Further Details - =============== - - The matrix Q is represented as a product of nb elementary reflectors - - Q = H(1) H(2) . . . H(nb). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a real scalar, and v is a real vector with - v(1:i+k-1) = 0, v(i+k) = 1; v(i+k+1:n) is stored on exit in - A(i+k+1:n,i), and tau in TAU(i). - - The elements of the vectors v together form the (n-k+1)-by-nb matrix - V which is needed, with T and Y, to apply the transformation to the - unreduced part of the matrix, using an update of the form: - A := (I - V*T*V') * (A - Y*V'). - - The contents of A on exit are illustrated by the following example - with n = 7, k = 3 and nb = 2: - - ( a h a a a ) - ( a h a a a ) - ( a h a a a ) - ( h h a a a ) - ( v1 h a a a ) - ( v1 v2 a a a ) - ( v1 v2 a a a ) - - where a denotes an element of the original matrix A, h denotes a - modified element of the upper Hessenberg matrix H, and vi denotes an - element of the vector defining H(i). 
- - ===================================================================== - - - Quick return if possible -*/ - - /* Parameter adjustments */ - --tau; - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - t_dim1 = *ldt; - t_offset = 1 + t_dim1 * 1; - t -= t_offset; - y_dim1 = *ldy; - y_offset = 1 + y_dim1 * 1; - y -= y_offset; - - /* Function Body */ - if (*n <= 1) { - return 0; - } - - i__1 = *nb; - for (i__ = 1; i__ <= i__1; ++i__) { - if (i__ > 1) { - -/* - Update A(1:n,i) - - Compute i-th column of A - Y * V' -*/ - - i__2 = i__ - 1; - dgemv_("No transpose", n, &i__2, &c_b151, &y[y_offset], ldy, &a[* - k + i__ - 1 + a_dim1], lda, &c_b15, &a[i__ * a_dim1 + 1], - &c__1); - -/* - Apply I - V * T' * V' to this column (call it b) from the - left, using the last column of T as workspace - - Let V = ( V1 ) and b = ( b1 ) (first I-1 rows) - ( V2 ) ( b2 ) - - where V1 is unit lower triangular - - w := V1' * b1 -*/ - - i__2 = i__ - 1; - dcopy_(&i__2, &a[*k + 1 + i__ * a_dim1], &c__1, &t[*nb * t_dim1 + - 1], &c__1); - i__2 = i__ - 1; - dtrmv_("Lower", "Transpose", "Unit", &i__2, &a[*k + 1 + a_dim1], - lda, &t[*nb * t_dim1 + 1], &c__1); - -/* w := w + V2'*b2 */ - - i__2 = *n - *k - i__ + 1; - i__3 = i__ - 1; - dgemv_("Transpose", &i__2, &i__3, &c_b15, &a[*k + i__ + a_dim1], - lda, &a[*k + i__ + i__ * a_dim1], &c__1, &c_b15, &t[*nb * - t_dim1 + 1], &c__1); - -/* w := T'*w */ - - i__2 = i__ - 1; - dtrmv_("Upper", "Transpose", "Non-unit", &i__2, &t[t_offset], ldt, - &t[*nb * t_dim1 + 1], &c__1); - -/* b2 := b2 - V2*w */ - - i__2 = *n - *k - i__ + 1; - i__3 = i__ - 1; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &a[*k + i__ + - a_dim1], lda, &t[*nb * t_dim1 + 1], &c__1, &c_b15, &a[*k - + i__ + i__ * a_dim1], &c__1); - -/* b1 := b1 - V1*w */ - - i__2 = i__ - 1; - dtrmv_("Lower", "No transpose", "Unit", &i__2, &a[*k + 1 + a_dim1] - , lda, &t[*nb * t_dim1 + 1], &c__1); - i__2 = i__ - 1; - daxpy_(&i__2, &c_b151, &t[*nb * t_dim1 + 1], &c__1, &a[*k + 1 + - i__ * a_dim1], 
&c__1); - - a[*k + i__ - 1 + (i__ - 1) * a_dim1] = ei; - } - -/* - Generate the elementary reflector H(i) to annihilate - A(k+i+1:n,i) -*/ - - i__2 = *n - *k - i__ + 1; -/* Computing MIN */ - i__3 = *k + i__ + 1; - dlarfg_(&i__2, &a[*k + i__ + i__ * a_dim1], &a[min(i__3,*n) + i__ * - a_dim1], &c__1, &tau[i__]); - ei = a[*k + i__ + i__ * a_dim1]; - a[*k + i__ + i__ * a_dim1] = 1.; - -/* Compute Y(1:n,i) */ - - i__2 = *n - *k - i__ + 1; - dgemv_("No transpose", n, &i__2, &c_b15, &a[(i__ + 1) * a_dim1 + 1], - lda, &a[*k + i__ + i__ * a_dim1], &c__1, &c_b29, &y[i__ * - y_dim1 + 1], &c__1); - i__2 = *n - *k - i__ + 1; - i__3 = i__ - 1; - dgemv_("Transpose", &i__2, &i__3, &c_b15, &a[*k + i__ + a_dim1], lda, - &a[*k + i__ + i__ * a_dim1], &c__1, &c_b29, &t[i__ * t_dim1 + - 1], &c__1); - i__2 = i__ - 1; - dgemv_("No transpose", n, &i__2, &c_b151, &y[y_offset], ldy, &t[i__ * - t_dim1 + 1], &c__1, &c_b15, &y[i__ * y_dim1 + 1], &c__1); - dscal_(n, &tau[i__], &y[i__ * y_dim1 + 1], &c__1); - -/* Compute T(1:i,i) */ - - i__2 = i__ - 1; - d__1 = -tau[i__]; - dscal_(&i__2, &d__1, &t[i__ * t_dim1 + 1], &c__1); - i__2 = i__ - 1; - dtrmv_("Upper", "No transpose", "Non-unit", &i__2, &t[t_offset], ldt, - &t[i__ * t_dim1 + 1], &c__1) - ; - t[i__ + i__ * t_dim1] = tau[i__]; - -/* L10: */ - } - a[*k + *nb + *nb * a_dim1] = ei; - - return 0; - -/* End of DLAHRD */ - -} /* dlahrd_ */ - -/* Subroutine */ int dlaln2_(logical *ltrans, integer *na, integer *nw, - doublereal *smin, doublereal *ca, doublereal *a, integer *lda, - doublereal *d1, doublereal *d2, doublereal *b, integer *ldb, - doublereal *wr, doublereal *wi, doublereal *x, integer *ldx, - doublereal *scale, doublereal *xnorm, integer *info) -{ - /* Initialized data */ - - static logical zswap[4] = { FALSE_,FALSE_,TRUE_,TRUE_ }; - static logical rswap[4] = { FALSE_,TRUE_,FALSE_,TRUE_ }; - static integer ipivot[16] /* was [4][4] */ = { 1,2,3,4,2,1,4,3,3,4,1,2, - 4,3,2,1 }; - - /* System generated locals */ - integer a_dim1, a_offset, 
b_dim1, b_offset, x_dim1, x_offset; - doublereal d__1, d__2, d__3, d__4, d__5, d__6; - static doublereal equiv_0[4], equiv_1[4]; - - /* Local variables */ - static integer j; -#define ci (equiv_0) -#define cr (equiv_1) - static doublereal bi1, bi2, br1, br2, xi1, xi2, xr1, xr2, ci21, ci22, - cr21, cr22, li21, csi, ui11, lr21, ui12, ui22; -#define civ (equiv_0) - static doublereal csr, ur11, ur12, ur22; -#define crv (equiv_1) - static doublereal bbnd, cmax, ui11r, ui12s, temp, ur11r, ur12s, u22abs; - static integer icmax; - static doublereal bnorm, cnorm, smini; - - extern /* Subroutine */ int dladiv_(doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *, doublereal *); - static doublereal bignum, smlnum; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLALN2 solves a system of the form (ca A - w D ) X = s B - or (ca A' - w D) X = s B with possible scaling ("s") and - perturbation of A. (A' means A-transpose.) - - A is an NA x NA real matrix, ca is a real scalar, D is an NA x NA - real diagonal matrix, w is a real or complex value, and X and B are - NA x 1 matrices -- real if w is real, complex if w is complex. NA - may be 1 or 2. - - If w is complex, X and B are represented as NA x 2 matrices, - the first column of each being the real part and the second - being the imaginary part. - - "s" is a scaling factor (.LE. 1), computed by DLALN2, which is - so chosen that X can be computed without overflow. X is further - scaled if necessary to assure that norm(ca A - w D)*norm(X) is less - than overflow. - - If both singular values of (ca A - w D) are less than SMIN, - SMIN*identity will be used instead of (ca A - w D). If only one - singular value is less than SMIN, one element of (ca A - w D) will be - perturbed enough to make the smallest singular value roughly SMIN. 
- If both singular values are at least SMIN, (ca A - w D) will not be - perturbed. In any case, the perturbation will be at most some small - multiple of max( SMIN, ulp*norm(ca A - w D) ). The singular values - are computed by infinity-norm approximations, and thus will only be - correct to a factor of 2 or so. - - Note: all input quantities are assumed to be smaller than overflow - by a reasonable factor. (See BIGNUM.) - - Arguments - ========== - - LTRANS (input) LOGICAL - =.TRUE.: A-transpose will be used. - =.FALSE.: A will be used (not transposed.) - - NA (input) INTEGER - The size of the matrix A. It may (only) be 1 or 2. - - NW (input) INTEGER - 1 if "w" is real, 2 if "w" is complex. It may only be 1 - or 2. - - SMIN (input) DOUBLE PRECISION - The desired lower bound on the singular values of A. This - should be a safe distance away from underflow or overflow, - say, between (underflow/machine precision) and (machine - precision * overflow ). (See BIGNUM and ULP.) - - CA (input) DOUBLE PRECISION - The coefficient c, which A is multiplied by. - - A (input) DOUBLE PRECISION array, dimension (LDA,NA) - The NA x NA matrix A. - - LDA (input) INTEGER - The leading dimension of A. It must be at least NA. - - D1 (input) DOUBLE PRECISION - The 1,1 element in the diagonal matrix D. - - D2 (input) DOUBLE PRECISION - The 2,2 element in the diagonal matrix D. Not used if NW=1. - - B (input) DOUBLE PRECISION array, dimension (LDB,NW) - The NA x NW matrix B (right-hand side). If NW=2 ("w" is - complex), column 1 contains the real part of B and column 2 - contains the imaginary part. - - LDB (input) INTEGER - The leading dimension of B. It must be at least NA. - - WR (input) DOUBLE PRECISION - The real part of the scalar "w". - - WI (input) DOUBLE PRECISION - The imaginary part of the scalar "w". Not used if NW=1. - - X (output) DOUBLE PRECISION array, dimension (LDX,NW) - The NA x NW matrix X (unknowns), as computed by DLALN2. 
- If NW=2 ("w" is complex), on exit, column 1 will contain - the real part of X and column 2 will contain the imaginary - part. - - LDX (input) INTEGER - The leading dimension of X. It must be at least NA. - - SCALE (output) DOUBLE PRECISION - The scale factor that B must be multiplied by to insure - that overflow does not occur when computing X. Thus, - (ca A - w D) X will be SCALE*B, not B (ignoring - perturbations of A.) It will be at most 1. - - XNORM (output) DOUBLE PRECISION - The infinity-norm of X, when X is regarded as an NA x NW - real matrix. - - INFO (output) INTEGER - An error flag. It will be set to zero if no error occurs, - a negative number if an argument is in error, or a positive - number if ca A - w D had to be perturbed. - The possible values are: - = 0: No error occurred, and (ca A - w D) did not have to be - perturbed. - = 1: (ca A - w D) had to be perturbed to make its smallest - (or only) singular value greater than SMIN. - NOTE: In the interests of speed, this routine does not - check the inputs for errors. - - ===================================================================== -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - x_dim1 = *ldx; - x_offset = 1 + x_dim1 * 1; - x -= x_offset; - - /* Function Body */ - -/* Compute BIGNUM */ - - smlnum = 2. * SAFEMINIMUM; - bignum = 1. / smlnum; - smini = max(*smin,smlnum); - -/* Don't check for input errors */ - - *info = 0; - -/* Standard Initializations */ - - *scale = 1.; - - if (*na == 1) { - -/* 1 x 1 (i.e., scalar) system C X = B */ - - if (*nw == 1) { - -/* - Real 1x1 system. - - C = ca A - w D -*/ - - csr = *ca * a[a_dim1 + 1] - *wr * *d1; - cnorm = abs(csr); - -/* If | C | < SMINI, use C = SMINI */ - - if (cnorm < smini) { - csr = smini; - cnorm = smini; - *info = 1; - } - -/* Check scaling for X = B / C */ - - bnorm = (d__1 = b[b_dim1 + 1], abs(d__1)); - if ((cnorm < 1. 
&& bnorm > 1.)) { - if (bnorm > bignum * cnorm) { - *scale = 1. / bnorm; - } - } - -/* Compute X */ - - x[x_dim1 + 1] = b[b_dim1 + 1] * *scale / csr; - *xnorm = (d__1 = x[x_dim1 + 1], abs(d__1)); - } else { - -/* - Complex 1x1 system (w is complex) - - C = ca A - w D -*/ - - csr = *ca * a[a_dim1 + 1] - *wr * *d1; - csi = -(*wi) * *d1; - cnorm = abs(csr) + abs(csi); - -/* If | C | < SMINI, use C = SMINI */ - - if (cnorm < smini) { - csr = smini; - csi = 0.; - cnorm = smini; - *info = 1; - } - -/* Check scaling for X = B / C */ - - bnorm = (d__1 = b[b_dim1 + 1], abs(d__1)) + (d__2 = b[((b_dim1) << - (1)) + 1], abs(d__2)); - if ((cnorm < 1. && bnorm > 1.)) { - if (bnorm > bignum * cnorm) { - *scale = 1. / bnorm; - } - } - -/* Compute X */ - - d__1 = *scale * b[b_dim1 + 1]; - d__2 = *scale * b[((b_dim1) << (1)) + 1]; - dladiv_(&d__1, &d__2, &csr, &csi, &x[x_dim1 + 1], &x[((x_dim1) << - (1)) + 1]); - *xnorm = (d__1 = x[x_dim1 + 1], abs(d__1)) + (d__2 = x[((x_dim1) - << (1)) + 1], abs(d__2)); - } - - } else { - -/* - 2x2 System - - Compute the real part of C = ca A - w D (or ca A' - w D ) -*/ - - cr[0] = *ca * a[a_dim1 + 1] - *wr * *d1; - cr[3] = *ca * a[((a_dim1) << (1)) + 2] - *wr * *d2; - if (*ltrans) { - cr[2] = *ca * a[a_dim1 + 2]; - cr[1] = *ca * a[((a_dim1) << (1)) + 1]; - } else { - cr[1] = *ca * a[a_dim1 + 2]; - cr[2] = *ca * a[((a_dim1) << (1)) + 1]; - } - - if (*nw == 1) { - -/* - Real 2x2 system (w is real) - - Find the largest element in C -*/ - - cmax = 0.; - icmax = 0; - - for (j = 1; j <= 4; ++j) { - if ((d__1 = crv[j - 1], abs(d__1)) > cmax) { - cmax = (d__1 = crv[j - 1], abs(d__1)); - icmax = j; - } -/* L10: */ - } - -/* If norm(C) < SMINI, use SMINI*identity. */ - - if (cmax < smini) { -/* Computing MAX */ - d__3 = (d__1 = b[b_dim1 + 1], abs(d__1)), d__4 = (d__2 = b[ - b_dim1 + 2], abs(d__2)); - bnorm = max(d__3,d__4); - if ((smini < 1. && bnorm > 1.)) { - if (bnorm > bignum * smini) { - *scale = 1. 
/ bnorm; - } - } - temp = *scale / smini; - x[x_dim1 + 1] = temp * b[b_dim1 + 1]; - x[x_dim1 + 2] = temp * b[b_dim1 + 2]; - *xnorm = temp * bnorm; - *info = 1; - return 0; - } - -/* Gaussian elimination with complete pivoting. */ - - ur11 = crv[icmax - 1]; - cr21 = crv[ipivot[((icmax) << (2)) - 3] - 1]; - ur12 = crv[ipivot[((icmax) << (2)) - 2] - 1]; - cr22 = crv[ipivot[((icmax) << (2)) - 1] - 1]; - ur11r = 1. / ur11; - lr21 = ur11r * cr21; - ur22 = cr22 - ur12 * lr21; - -/* If smaller pivot < SMINI, use SMINI */ - - if (abs(ur22) < smini) { - ur22 = smini; - *info = 1; - } - if (rswap[icmax - 1]) { - br1 = b[b_dim1 + 2]; - br2 = b[b_dim1 + 1]; - } else { - br1 = b[b_dim1 + 1]; - br2 = b[b_dim1 + 2]; - } - br2 -= lr21 * br1; -/* Computing MAX */ - d__2 = (d__1 = br1 * (ur22 * ur11r), abs(d__1)), d__3 = abs(br2); - bbnd = max(d__2,d__3); - if ((bbnd > 1. && abs(ur22) < 1.)) { - if (bbnd >= bignum * abs(ur22)) { - *scale = 1. / bbnd; - } - } - - xr2 = br2 * *scale / ur22; - xr1 = *scale * br1 * ur11r - xr2 * (ur11r * ur12); - if (zswap[icmax - 1]) { - x[x_dim1 + 1] = xr2; - x[x_dim1 + 2] = xr1; - } else { - x[x_dim1 + 1] = xr1; - x[x_dim1 + 2] = xr2; - } -/* Computing MAX */ - d__1 = abs(xr1), d__2 = abs(xr2); - *xnorm = max(d__1,d__2); - -/* Further scaling if norm(A) norm(X) > overflow */ - - if ((*xnorm > 1. 
&& cmax > 1.)) { - if (*xnorm > bignum / cmax) { - temp = cmax / bignum; - x[x_dim1 + 1] = temp * x[x_dim1 + 1]; - x[x_dim1 + 2] = temp * x[x_dim1 + 2]; - *xnorm = temp * *xnorm; - *scale = temp * *scale; - } - } - } else { - -/* - Complex 2x2 system (w is complex) - - Find the largest element in C -*/ - - ci[0] = -(*wi) * *d1; - ci[1] = 0.; - ci[2] = 0.; - ci[3] = -(*wi) * *d2; - cmax = 0.; - icmax = 0; - - for (j = 1; j <= 4; ++j) { - if ((d__1 = crv[j - 1], abs(d__1)) + (d__2 = civ[j - 1], abs( - d__2)) > cmax) { - cmax = (d__1 = crv[j - 1], abs(d__1)) + (d__2 = civ[j - 1] - , abs(d__2)); - icmax = j; - } -/* L20: */ - } - -/* If norm(C) < SMINI, use SMINI*identity. */ - - if (cmax < smini) { -/* Computing MAX */ - d__5 = (d__1 = b[b_dim1 + 1], abs(d__1)) + (d__2 = b[((b_dim1) - << (1)) + 1], abs(d__2)), d__6 = (d__3 = b[b_dim1 + - 2], abs(d__3)) + (d__4 = b[((b_dim1) << (1)) + 2], - abs(d__4)); - bnorm = max(d__5,d__6); - if ((smini < 1. && bnorm > 1.)) { - if (bnorm > bignum * smini) { - *scale = 1. / bnorm; - } - } - temp = *scale / smini; - x[x_dim1 + 1] = temp * b[b_dim1 + 1]; - x[x_dim1 + 2] = temp * b[b_dim1 + 2]; - x[((x_dim1) << (1)) + 1] = temp * b[((b_dim1) << (1)) + 1]; - x[((x_dim1) << (1)) + 2] = temp * b[((b_dim1) << (1)) + 2]; - *xnorm = temp * bnorm; - *info = 1; - return 0; - } - -/* Gaussian elimination with complete pivoting. */ - - ur11 = crv[icmax - 1]; - ui11 = civ[icmax - 1]; - cr21 = crv[ipivot[((icmax) << (2)) - 3] - 1]; - ci21 = civ[ipivot[((icmax) << (2)) - 3] - 1]; - ur12 = crv[ipivot[((icmax) << (2)) - 2] - 1]; - ui12 = civ[ipivot[((icmax) << (2)) - 2] - 1]; - cr22 = crv[ipivot[((icmax) << (2)) - 1] - 1]; - ci22 = civ[ipivot[((icmax) << (2)) - 1] - 1]; - if (icmax == 1 || icmax == 4) { - -/* Code when off-diagonals of pivoted C are real */ - - if (abs(ur11) > abs(ui11)) { - temp = ui11 / ur11; -/* Computing 2nd power */ - d__1 = temp; - ur11r = 1. 
/ (ur11 * (d__1 * d__1 + 1.)); - ui11r = -temp * ur11r; - } else { - temp = ur11 / ui11; -/* Computing 2nd power */ - d__1 = temp; - ui11r = -1. / (ui11 * (d__1 * d__1 + 1.)); - ur11r = -temp * ui11r; - } - lr21 = cr21 * ur11r; - li21 = cr21 * ui11r; - ur12s = ur12 * ur11r; - ui12s = ur12 * ui11r; - ur22 = cr22 - ur12 * lr21; - ui22 = ci22 - ur12 * li21; - } else { - -/* Code when diagonals of pivoted C are real */ - - ur11r = 1. / ur11; - ui11r = 0.; - lr21 = cr21 * ur11r; - li21 = ci21 * ur11r; - ur12s = ur12 * ur11r; - ui12s = ui12 * ur11r; - ur22 = cr22 - ur12 * lr21 + ui12 * li21; - ui22 = -ur12 * li21 - ui12 * lr21; - } - u22abs = abs(ur22) + abs(ui22); - -/* If smaller pivot < SMINI, use SMINI */ - - if (u22abs < smini) { - ur22 = smini; - ui22 = 0.; - *info = 1; - } - if (rswap[icmax - 1]) { - br2 = b[b_dim1 + 1]; - br1 = b[b_dim1 + 2]; - bi2 = b[((b_dim1) << (1)) + 1]; - bi1 = b[((b_dim1) << (1)) + 2]; - } else { - br1 = b[b_dim1 + 1]; - br2 = b[b_dim1 + 2]; - bi1 = b[((b_dim1) << (1)) + 1]; - bi2 = b[((b_dim1) << (1)) + 2]; - } - br2 = br2 - lr21 * br1 + li21 * bi1; - bi2 = bi2 - li21 * br1 - lr21 * bi1; -/* Computing MAX */ - d__1 = (abs(br1) + abs(bi1)) * (u22abs * (abs(ur11r) + abs(ui11r)) - ), d__2 = abs(br2) + abs(bi2); - bbnd = max(d__1,d__2); - if ((bbnd > 1. && u22abs < 1.)) { - if (bbnd >= bignum * u22abs) { - *scale = 1. 
/ bbnd; - br1 = *scale * br1; - bi1 = *scale * bi1; - br2 = *scale * br2; - bi2 = *scale * bi2; - } - } - - dladiv_(&br2, &bi2, &ur22, &ui22, &xr2, &xi2); - xr1 = ur11r * br1 - ui11r * bi1 - ur12s * xr2 + ui12s * xi2; - xi1 = ui11r * br1 + ur11r * bi1 - ui12s * xr2 - ur12s * xi2; - if (zswap[icmax - 1]) { - x[x_dim1 + 1] = xr2; - x[x_dim1 + 2] = xr1; - x[((x_dim1) << (1)) + 1] = xi2; - x[((x_dim1) << (1)) + 2] = xi1; - } else { - x[x_dim1 + 1] = xr1; - x[x_dim1 + 2] = xr2; - x[((x_dim1) << (1)) + 1] = xi1; - x[((x_dim1) << (1)) + 2] = xi2; - } -/* Computing MAX */ - d__1 = abs(xr1) + abs(xi1), d__2 = abs(xr2) + abs(xi2); - *xnorm = max(d__1,d__2); - -/* Further scaling if norm(A) norm(X) > overflow */ - - if ((*xnorm > 1. && cmax > 1.)) { - if (*xnorm > bignum / cmax) { - temp = cmax / bignum; - x[x_dim1 + 1] = temp * x[x_dim1 + 1]; - x[x_dim1 + 2] = temp * x[x_dim1 + 2]; - x[((x_dim1) << (1)) + 1] = temp * x[((x_dim1) << (1)) + 1] - ; - x[((x_dim1) << (1)) + 2] = temp * x[((x_dim1) << (1)) + 2] - ; - *xnorm = temp * *xnorm; - *scale = temp * *scale; - } - } - } - } - - return 0; - -/* End of DLALN2 */ - -} /* dlaln2_ */ - -#undef crv -#undef civ -#undef cr -#undef ci - - -/* Subroutine */ int dlals0_(integer *icompq, integer *nl, integer *nr, - integer *sqre, integer *nrhs, doublereal *b, integer *ldb, doublereal - *bx, integer *ldbx, integer *perm, integer *givptr, integer *givcol, - integer *ldgcol, doublereal *givnum, integer *ldgnum, doublereal * - poles, doublereal *difl, doublereal *difr, doublereal *z__, integer * - k, doublereal *c__, doublereal *s, doublereal *work, integer *info) -{ - /* System generated locals */ - integer givcol_dim1, givcol_offset, b_dim1, b_offset, bx_dim1, bx_offset, - difr_dim1, difr_offset, givnum_dim1, givnum_offset, poles_dim1, - poles_offset, i__1, i__2; - doublereal d__1; - - /* Local variables */ - static integer i__, j, m, n; - static doublereal dj; - static integer nlp1; - static doublereal temp; - extern /* Subroutine */ 
int drot_(integer *, doublereal *, integer *, - doublereal *, integer *, doublereal *, doublereal *); - extern doublereal dnrm2_(integer *, doublereal *, integer *); - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *); - static doublereal diflj, difrj, dsigj; - extern /* Subroutine */ int dgemv_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, doublereal *, integer *, - doublereal *, doublereal *, integer *), dcopy_(integer *, - doublereal *, integer *, doublereal *, integer *); - extern doublereal dlamc3_(doublereal *, doublereal *); - extern /* Subroutine */ int dlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, doublereal *, - integer *, integer *), dlacpy_(char *, integer *, integer - *, doublereal *, integer *, doublereal *, integer *), - xerbla_(char *, integer *); - static doublereal dsigjp; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - December 1, 1999 - - - Purpose - ======= - - DLALS0 applies back the multiplying factors of either the left or the - right singular vector matrix of a diagonal matrix appended by a row - to the right hand side matrix B in solving the least squares problem - using the divide-and-conquer SVD approach. - - For the left singular vector matrix, three types of orthogonal - matrices are involved: - - (1L) Givens rotations: the number of such rotations is GIVPTR; the - pairs of columns/rows they were applied to are stored in GIVCOL; - and the C- and S-values of these rotations are stored in GIVNUM. - - (2L) Permutation. The (NL+1)-st row of B is to be moved to the first - row, and for J=2:N, PERM(J)-th row of B is to be moved to the - J-th row. - - (3L) The left singular vector matrix of the remaining matrix. 
- - For the right singular vector matrix, four types of orthogonal - matrices are involved: - - (1R) The right singular vector matrix of the remaining matrix. - - (2R) If SQRE = 1, one extra Givens rotation to generate the right - null space. - - (3R) The inverse transformation of (2L). - - (4R) The inverse transformation of (1L). - - Arguments - ========= - - ICOMPQ (input) INTEGER - Specifies whether singular vectors are to be computed in - factored form: - = 0: Left singular vector matrix. - = 1: Right singular vector matrix. - - NL (input) INTEGER - The row dimension of the upper block. NL >= 1. - - NR (input) INTEGER - The row dimension of the lower block. NR >= 1. - - SQRE (input) INTEGER - = 0: the lower block is an NR-by-NR square matrix. - = 1: the lower block is an NR-by-(NR+1) rectangular matrix. - - The bidiagonal matrix has row dimension N = NL + NR + 1, - and column dimension M = N + SQRE. - - NRHS (input) INTEGER - The number of columns of B and BX. NRHS must be at least 1. - - B (input/output) DOUBLE PRECISION array, dimension ( LDB, NRHS ) - On input, B contains the right hand sides of the least - squares problem in rows 1 through M. On output, B contains - the solution X in rows 1 through N. - - LDB (input) INTEGER - The leading dimension of B. LDB must be at least - max(1,MAX( M, N ) ). - - BX (workspace) DOUBLE PRECISION array, dimension ( LDBX, NRHS ) - - LDBX (input) INTEGER - The leading dimension of BX. - - PERM (input) INTEGER array, dimension ( N ) - The permutations (from deflation and sorting) applied - to the two blocks. - - GIVPTR (input) INTEGER - The number of Givens rotations which took place in this - subproblem. - - GIVCOL (input) INTEGER array, dimension ( LDGCOL, 2 ) - Each pair of numbers indicates a pair of rows/columns - involved in a Givens rotation. - - LDGCOL (input) INTEGER - The leading dimension of GIVCOL, must be at least N. 
- - GIVNUM (input) DOUBLE PRECISION array, dimension ( LDGNUM, 2 ) - Each number indicates the C or S value used in the - corresponding Givens rotation. - - LDGNUM (input) INTEGER - The leading dimension of arrays DIFR, POLES and - GIVNUM, must be at least K. - - POLES (input) DOUBLE PRECISION array, dimension ( LDGNUM, 2 ) - On entry, POLES(1:K, 1) contains the new singular - values obtained from solving the secular equation, and - POLES(1:K, 2) is an array containing the poles in the secular - equation. - - DIFL (input) DOUBLE PRECISION array, dimension ( K ). - On entry, DIFL(I) is the distance between I-th updated - (undeflated) singular value and the I-th (undeflated) old - singular value. - - DIFR (input) DOUBLE PRECISION array, dimension ( LDGNUM, 2 ). - On entry, DIFR(I, 1) contains the distances between I-th - updated (undeflated) singular value and the I+1-th - (undeflated) old singular value. And DIFR(I, 2) is the - normalizing factor for the I-th right singular vector. - - Z (input) DOUBLE PRECISION array, dimension ( K ) - Contains the components of the deflation-adjusted updating row - vector. - - K (input) INTEGER - Contains the dimension of the non-deflated matrix. - This is the order of the related secular equation. 1 <= K <= N. - - C (input) DOUBLE PRECISION - C contains garbage if SQRE = 0 and the C-value of a Givens - rotation related to the right null space if SQRE = 1. - - S (input) DOUBLE PRECISION - S contains garbage if SQRE = 0 and the S-value of a Givens - rotation related to the right null space if SQRE = 1. - - WORK (workspace) DOUBLE PRECISION array, dimension ( K ) - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value.
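Step (1L) described above replays the stored plane rotations with drot_. What one such Givens rotation does to a pair of rows can be sketched as follows (apply_givens is a hypothetical unit-stride illustration of the drot_ update; the BLAS routine also supports arbitrary strides):

```c
#include <assert.h>

/* Illustrative sketch of the drot_ update used in step (1L): apply the
   plane rotation [ c s ; -s c ] to two length-n vectors in place, so that
   x_new = c*x + s*y and y_new = c*y - s*x. */
static void apply_givens(int n, double *x, double *y, double c, double s)
{
    int i;
    for (i = 0; i < n; ++i) {
        double t = c * x[i] + s * y[i];  /* save before x[i] is overwritten */
        y[i] = c * y[i] - s * x[i];
        x[i] = t;
    }
}
```

With c*c + s*s = 1 this is an orthogonal transformation, so replaying the stored (c, s) pairs in order applies the rotation part of the left singular vector matrix exactly.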
- - Further Details - =============== - - Based on contributions by - Ming Gu and Ren-Cang Li, Computer Science Division, University of - California at Berkeley, USA - Osni Marques, LBNL/NERSC, USA - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - bx_dim1 = *ldbx; - bx_offset = 1 + bx_dim1 * 1; - bx -= bx_offset; - --perm; - givcol_dim1 = *ldgcol; - givcol_offset = 1 + givcol_dim1 * 1; - givcol -= givcol_offset; - difr_dim1 = *ldgnum; - difr_offset = 1 + difr_dim1 * 1; - difr -= difr_offset; - poles_dim1 = *ldgnum; - poles_offset = 1 + poles_dim1 * 1; - poles -= poles_offset; - givnum_dim1 = *ldgnum; - givnum_offset = 1 + givnum_dim1 * 1; - givnum -= givnum_offset; - --difl; - --z__; - --work; - - /* Function Body */ - *info = 0; - - if (*icompq < 0 || *icompq > 1) { - *info = -1; - } else if (*nl < 1) { - *info = -2; - } else if (*nr < 1) { - *info = -3; - } else if (*sqre < 0 || *sqre > 1) { - *info = -4; - } - - n = *nl + *nr + 1; - - if (*nrhs < 1) { - *info = -5; - } else if (*ldb < n) { - *info = -7; - } else if (*ldbx < n) { - *info = -9; - } else if (*givptr < 0) { - *info = -11; - } else if (*ldgcol < n) { - *info = -13; - } else if (*ldgnum < n) { - *info = -15; - } else if (*k < 1) { - *info = -20; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLALS0", &i__1); - return 0; - } - - m = n + *sqre; - nlp1 = *nl + 1; - - if (*icompq == 0) { - -/* - Apply back orthogonal transformations from the left. - - Step (1L): apply back the Givens rotations performed. -*/ - - i__1 = *givptr; - for (i__ = 1; i__ <= i__1; ++i__) { - drot_(nrhs, &b[givcol[i__ + ((givcol_dim1) << (1))] + b_dim1], - ldb, &b[givcol[i__ + givcol_dim1] + b_dim1], ldb, &givnum[ - i__ + ((givnum_dim1) << (1))], &givnum[i__ + givnum_dim1]) - ; -/* L10: */ - } - -/* Step (2L): permute rows of B. 
*/ - - dcopy_(nrhs, &b[nlp1 + b_dim1], ldb, &bx[bx_dim1 + 1], ldbx); - i__1 = n; - for (i__ = 2; i__ <= i__1; ++i__) { - dcopy_(nrhs, &b[perm[i__] + b_dim1], ldb, &bx[i__ + bx_dim1], - ldbx); -/* L20: */ - } - -/* - Step (3L): apply the inverse of the left singular vector - matrix to BX. -*/ - - if (*k == 1) { - dcopy_(nrhs, &bx[bx_offset], ldbx, &b[b_offset], ldb); - if (z__[1] < 0.) { - dscal_(nrhs, &c_b151, &b[b_offset], ldb); - } - } else { - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - diflj = difl[j]; - dj = poles[j + poles_dim1]; - dsigj = -poles[j + ((poles_dim1) << (1))]; - if (j < *k) { - difrj = -difr[j + difr_dim1]; - dsigjp = -poles[j + 1 + ((poles_dim1) << (1))]; - } - if (z__[j] == 0. || poles[j + ((poles_dim1) << (1))] == 0.) { - work[j] = 0.; - } else { - work[j] = -poles[j + ((poles_dim1) << (1))] * z__[j] / - diflj / (poles[j + ((poles_dim1) << (1))] + dj); - } - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - if (z__[i__] == 0. || poles[i__ + ((poles_dim1) << (1))] - == 0.) { - work[i__] = 0.; - } else { - work[i__] = poles[i__ + ((poles_dim1) << (1))] * z__[ - i__] / (dlamc3_(&poles[i__ + ((poles_dim1) << - (1))], &dsigj) - diflj) / (poles[i__ + (( - poles_dim1) << (1))] + dj); - } -/* L30: */ - } - i__2 = *k; - for (i__ = j + 1; i__ <= i__2; ++i__) { - if (z__[i__] == 0. || poles[i__ + ((poles_dim1) << (1))] - == 0.) { - work[i__] = 0.; - } else { - work[i__] = poles[i__ + ((poles_dim1) << (1))] * z__[ - i__] / (dlamc3_(&poles[i__ + ((poles_dim1) << - (1))], &dsigjp) + difrj) / (poles[i__ + (( - poles_dim1) << (1))] + dj); - } -/* L40: */ - } - work[1] = -1.; - temp = dnrm2_(k, &work[1], &c__1); - dgemv_("T", k, nrhs, &c_b15, &bx[bx_offset], ldbx, &work[1], & - c__1, &c_b29, &b[j + b_dim1], ldb); - dlascl_("G", &c__0, &c__0, &temp, &c_b15, &c__1, nrhs, &b[j + - b_dim1], ldb, info); -/* L50: */ - } - } - -/* Move the deflated rows of BX to B also. 
*/ - - if (*k < max(m,n)) { - i__1 = n - *k; - dlacpy_("A", &i__1, nrhs, &bx[*k + 1 + bx_dim1], ldbx, &b[*k + 1 - + b_dim1], ldb); - } - } else { - -/* - Apply back the right orthogonal transformations. - - Step (1R): apply back the new right singular vector matrix - to B. -*/ - - if (*k == 1) { - dcopy_(nrhs, &b[b_offset], ldb, &bx[bx_offset], ldbx); - } else { - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - dsigj = poles[j + ((poles_dim1) << (1))]; - if (z__[j] == 0.) { - work[j] = 0.; - } else { - work[j] = -z__[j] / difl[j] / (dsigj + poles[j + - poles_dim1]) / difr[j + ((difr_dim1) << (1))]; - } - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - if (z__[j] == 0.) { - work[i__] = 0.; - } else { - d__1 = -poles[i__ + 1 + ((poles_dim1) << (1))]; - work[i__] = z__[j] / (dlamc3_(&dsigj, &d__1) - difr[ - i__ + difr_dim1]) / (dsigj + poles[i__ + - poles_dim1]) / difr[i__ + ((difr_dim1) << (1)) - ]; - } -/* L60: */ - } - i__2 = *k; - for (i__ = j + 1; i__ <= i__2; ++i__) { - if (z__[j] == 0.) { - work[i__] = 0.; - } else { - d__1 = -poles[i__ + ((poles_dim1) << (1))]; - work[i__] = z__[j] / (dlamc3_(&dsigj, &d__1) - difl[ - i__]) / (dsigj + poles[i__ + poles_dim1]) / - difr[i__ + ((difr_dim1) << (1))]; - } -/* L70: */ - } - dgemv_("T", k, nrhs, &c_b15, &b[b_offset], ldb, &work[1], & - c__1, &c_b29, &bx[j + bx_dim1], ldbx); -/* L80: */ - } - } - -/* - Step (2R): if SQRE = 1, apply back the rotation that is - related to the right null space of the subproblem. -*/ - - if (*sqre == 1) { - dcopy_(nrhs, &b[m + b_dim1], ldb, &bx[m + bx_dim1], ldbx); - drot_(nrhs, &bx[bx_dim1 + 1], ldbx, &bx[m + bx_dim1], ldbx, c__, - s); - } - if (*k < max(m,n)) { - i__1 = n - *k; - dlacpy_("A", &i__1, nrhs, &b[*k + 1 + b_dim1], ldb, &bx[*k + 1 + - bx_dim1], ldbx); - } - -/* Step (3R): permute rows of B. 
*/ - - dcopy_(nrhs, &bx[bx_dim1 + 1], ldbx, &b[nlp1 + b_dim1], ldb); - if (*sqre == 1) { - dcopy_(nrhs, &bx[m + bx_dim1], ldbx, &b[m + b_dim1], ldb); - } - i__1 = n; - for (i__ = 2; i__ <= i__1; ++i__) { - dcopy_(nrhs, &bx[i__ + bx_dim1], ldbx, &b[perm[i__] + b_dim1], - ldb); -/* L90: */ - } - -/* Step (4R): apply back the Givens rotations performed. */ - - for (i__ = *givptr; i__ >= 1; --i__) { - d__1 = -givnum[i__ + givnum_dim1]; - drot_(nrhs, &b[givcol[i__ + ((givcol_dim1) << (1))] + b_dim1], - ldb, &b[givcol[i__ + givcol_dim1] + b_dim1], ldb, &givnum[ - i__ + ((givnum_dim1) << (1))], &d__1); -/* L100: */ - } - } - - return 0; - -/* End of DLALS0 */ - -} /* dlals0_ */ - -/* Subroutine */ int dlalsa_(integer *icompq, integer *smlsiz, integer *n, - integer *nrhs, doublereal *b, integer *ldb, doublereal *bx, integer * - ldbx, doublereal *u, integer *ldu, doublereal *vt, integer *k, - doublereal *difl, doublereal *difr, doublereal *z__, doublereal * - poles, integer *givptr, integer *givcol, integer *ldgcol, integer * - perm, doublereal *givnum, doublereal *c__, doublereal *s, doublereal * - work, integer *iwork, integer *info) -{ - /* System generated locals */ - integer givcol_dim1, givcol_offset, perm_dim1, perm_offset, b_dim1, - b_offset, bx_dim1, bx_offset, difl_dim1, difl_offset, difr_dim1, - difr_offset, givnum_dim1, givnum_offset, poles_dim1, poles_offset, - u_dim1, u_offset, vt_dim1, vt_offset, z_dim1, z_offset, i__1, - i__2; - - /* Builtin functions */ - integer pow_ii(integer *, integer *); - - /* Local variables */ - static integer i__, j, i1, ic, lf, nd, ll, nl, nr, im1, nlf, nrf, lvl, - ndb1, nlp1, lvl2, nrp1, nlvl, sqre; - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *); - static integer inode, ndiml, ndimr; - extern /* Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *), 
dlals0_(integer *, integer *, integer *, - integer *, integer *, doublereal *, integer *, doublereal *, - integer *, integer *, integer *, integer *, integer *, doublereal - *, integer *, doublereal *, doublereal *, doublereal *, - doublereal *, integer *, doublereal *, doublereal *, doublereal *, - integer *), dlasdt_(integer *, integer *, integer *, integer *, - integer *, integer *, integer *), xerbla_(char *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DLALSA is an intermediate step in solving the least squares problem - by computing the SVD of the coefficient matrix in compact form (The - singular vectors are computed as products of simple orthogonal - matrices.). - - If ICOMPQ = 0, DLALSA applies the inverse of the left singular vector - matrix of an upper bidiagonal matrix to the right hand side; and if - ICOMPQ = 1, DLALSA applies the right singular vector matrix to the - right hand side. The singular vector matrices were generated in - compact form by DLALSA. - - Arguments - ========= - - - ICOMPQ (input) INTEGER - Specifies whether the left or the right singular vector - matrix is involved. - = 0: Left singular vector matrix - = 1: Right singular vector matrix - - SMLSIZ (input) INTEGER - The maximum size of the subproblems at the bottom of the - computation tree. - - N (input) INTEGER - The row and column dimensions of the upper bidiagonal matrix. - - NRHS (input) INTEGER - The number of columns of B and BX. NRHS must be at least 1. - - B (input) DOUBLE PRECISION array, dimension ( LDB, NRHS ) - On input, B contains the right hand sides of the least - squares problem in rows 1 through M. On output, B contains - the solution X in rows 1 through N. - - LDB (input) INTEGER - The leading dimension of B in the calling subprogram. - LDB must be at least max(1,MAX( M, N ) ). 
- - BX (output) DOUBLE PRECISION array, dimension ( LDBX, NRHS ) - On exit, the result of applying the left or right singular - vector matrix to B. - - LDBX (input) INTEGER - The leading dimension of BX. - - U (input) DOUBLE PRECISION array, dimension ( LDU, SMLSIZ ). - On entry, U contains the left singular vector matrices of all - subproblems at the bottom level. - - LDU (input) INTEGER, LDU = > N. - The leading dimension of arrays U, VT, DIFL, DIFR, - POLES, GIVNUM, and Z. - - VT (input) DOUBLE PRECISION array, dimension ( LDU, SMLSIZ+1 ). - On entry, VT' contains the right singular vector matrices of - all subproblems at the bottom level. - - K (input) INTEGER array, dimension ( N ). - - DIFL (input) DOUBLE PRECISION array, dimension ( LDU, NLVL ). - where NLVL = INT(log_2 (N/(SMLSIZ+1))) + 1. - - DIFR (input) DOUBLE PRECISION array, dimension ( LDU, 2 * NLVL ). - On entry, DIFL(*, I) and DIFR(*, 2 * I -1) record - distances between singular values on the I-th level and - singular values on the (I -1)-th level, and DIFR(*, 2 * I) - record the normalizing factors of the right singular vectors - matrices of subproblems on I-th level. - - Z (input) DOUBLE PRECISION array, dimension ( LDU, NLVL ). - On entry, Z(1, I) contains the components of the deflation- - adjusted updating row vector for subproblems on the I-th - level. - - POLES (input) DOUBLE PRECISION array, dimension ( LDU, 2 * NLVL ). - On entry, POLES(*, 2 * I -1: 2 * I) contains the new and old - singular values involved in the secular equations on the I-th - level. - - GIVPTR (input) INTEGER array, dimension ( N ). - On entry, GIVPTR( I ) records the number of Givens - rotations performed on the I-th problem on the computation - tree. - - GIVCOL (input) INTEGER array, dimension ( LDGCOL, 2 * NLVL ). - On entry, for each I, GIVCOL(*, 2 * I - 1: 2 * I) records the - locations of Givens rotations performed on the I-th level on - the computation tree. - - LDGCOL (input) INTEGER, LDGCOL = > N. 
- The leading dimension of arrays GIVCOL and PERM. - - PERM (input) INTEGER array, dimension ( LDGCOL, NLVL ). - On entry, PERM(*, I) records permutations done on the I-th - level of the computation tree. - - GIVNUM (input) DOUBLE PRECISION array, dimension ( LDU, 2 * NLVL ). - On entry, GIVNUM(*, 2 *I -1 : 2 * I) records the C- and S- - values of Givens rotations performed on the I-th level on the - computation tree. - - C (input) DOUBLE PRECISION array, dimension ( N ). - On entry, if the I-th subproblem is not square, - C( I ) contains the C-value of a Givens rotation related to - the right null space of the I-th subproblem. - - S (input) DOUBLE PRECISION array, dimension ( N ). - On entry, if the I-th subproblem is not square, - S( I ) contains the S-value of a Givens rotation related to - the right null space of the I-th subproblem. - - WORK (workspace) DOUBLE PRECISION array. - The dimension must be at least N. - - IWORK (workspace) INTEGER array. - The dimension must be at least 3 * N - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - Based on contributions by - Ming Gu and Ren-Cang Li, Computer Science Division, University of - California at Berkeley, USA - Osni Marques, LBNL/NERSC, USA - - ===================================================================== - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - bx_dim1 = *ldbx; - bx_offset = 1 + bx_dim1 * 1; - bx -= bx_offset; - givnum_dim1 = *ldu; - givnum_offset = 1 + givnum_dim1 * 1; - givnum -= givnum_offset; - poles_dim1 = *ldu; - poles_offset = 1 + poles_dim1 * 1; - poles -= poles_offset; - z_dim1 = *ldu; - z_offset = 1 + z_dim1 * 1; - z__ -= z_offset; - difr_dim1 = *ldu; - difr_offset = 1 + difr_dim1 * 1; - difr -= difr_offset; - difl_dim1 = *ldu; - difl_offset = 1 + difl_dim1 * 1; - difl -= difl_offset; - vt_dim1 = *ldu; - vt_offset = 1 + vt_dim1 * 1; - vt -= vt_offset; - u_dim1 = *ldu; - u_offset = 1 + u_dim1 * 1; - u -= u_offset; - --k; - --givptr; - perm_dim1 = *ldgcol; - perm_offset = 1 + perm_dim1 * 1; - perm -= perm_offset; - givcol_dim1 = *ldgcol; - givcol_offset = 1 + givcol_dim1 * 1; - givcol -= givcol_offset; - --c__; - --s; - --work; - --iwork; - - /* Function Body */ - *info = 0; - - if (*icompq < 0 || *icompq > 1) { - *info = -1; - } else if (*smlsiz < 3) { - *info = -2; - } else if (*n < *smlsiz) { - *info = -3; - } else if (*nrhs < 1) { - *info = -4; - } else if (*ldb < *n) { - *info = -6; - } else if (*ldbx < *n) { - *info = -8; - } else if (*ldu < *n) { - *info = -10; - } else if (*ldgcol < *n) { - *info = -19; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLALSA", &i__1); - return 0; - } - -/* Book-keeping and setting up the computation tree. */ - - inode = 1; - ndiml = inode + *n; - ndimr = ndiml + *n; - - dlasdt_(n, &nlvl, &nd, &iwork[inode], &iwork[ndiml], &iwork[ndimr], - smlsiz); - -/* - The following code applies back the left singular vector factors. - For applying back the right singular vector factors, go to 50. -*/ - - if (*icompq == 1) { - goto L50; - } - -/* - The nodes on the bottom level of the tree were solved - by DLASDQ. The corresponding left and right singular vector - matrices are in explicit form. First apply back the left - singular vector matrices. 
-*/ - - ndb1 = (nd + 1) / 2; - i__1 = nd; - for (i__ = ndb1; i__ <= i__1; ++i__) { - -/* - IC : center row of each node - NL : number of rows of left subproblem - NR : number of rows of right subproblem - NLF: starting row of the left subproblem - NRF: starting row of the right subproblem -*/ - - i1 = i__ - 1; - ic = iwork[inode + i1]; - nl = iwork[ndiml + i1]; - nr = iwork[ndimr + i1]; - nlf = ic - nl; - nrf = ic + 1; - dgemm_("T", "N", &nl, nrhs, &nl, &c_b15, &u[nlf + u_dim1], ldu, &b[ - nlf + b_dim1], ldb, &c_b29, &bx[nlf + bx_dim1], ldbx); - dgemm_("T", "N", &nr, nrhs, &nr, &c_b15, &u[nrf + u_dim1], ldu, &b[ - nrf + b_dim1], ldb, &c_b29, &bx[nrf + bx_dim1], ldbx); -/* L10: */ - } - -/* - Next copy the rows of B that correspond to unchanged rows - in the bidiagonal matrix to BX. -*/ - - i__1 = nd; - for (i__ = 1; i__ <= i__1; ++i__) { - ic = iwork[inode + i__ - 1]; - dcopy_(nrhs, &b[ic + b_dim1], ldb, &bx[ic + bx_dim1], ldbx); -/* L20: */ - } - -/* - Finally go through the left singular vector matrices of all - the other subproblems bottom-up on the tree. 
-*/ - - j = pow_ii(&c__2, &nlvl); - sqre = 0; - - for (lvl = nlvl; lvl >= 1; --lvl) { - lvl2 = ((lvl) << (1)) - 1; - -/* - find the first node LF and last node LL on - the current level LVL -*/ - - if (lvl == 1) { - lf = 1; - ll = 1; - } else { - i__1 = lvl - 1; - lf = pow_ii(&c__2, &i__1); - ll = ((lf) << (1)) - 1; - } - i__1 = ll; - for (i__ = lf; i__ <= i__1; ++i__) { - im1 = i__ - 1; - ic = iwork[inode + im1]; - nl = iwork[ndiml + im1]; - nr = iwork[ndimr + im1]; - nlf = ic - nl; - nrf = ic + 1; - --j; - dlals0_(icompq, &nl, &nr, &sqre, nrhs, &bx[nlf + bx_dim1], ldbx, & - b[nlf + b_dim1], ldb, &perm[nlf + lvl * perm_dim1], & - givptr[j], &givcol[nlf + lvl2 * givcol_dim1], ldgcol, & - givnum[nlf + lvl2 * givnum_dim1], ldu, &poles[nlf + lvl2 * - poles_dim1], &difl[nlf + lvl * difl_dim1], &difr[nlf + - lvl2 * difr_dim1], &z__[nlf + lvl * z_dim1], &k[j], &c__[ - j], &s[j], &work[1], info); -/* L30: */ - } -/* L40: */ - } - goto L90; - -/* ICOMPQ = 1: applying back the right singular vector factors. */ - -L50: - -/* - First now go through the right singular vector matrices of all - the tree nodes top-down. -*/ - - j = 0; - i__1 = nlvl; - for (lvl = 1; lvl <= i__1; ++lvl) { - lvl2 = ((lvl) << (1)) - 1; - -/* - Find the first node LF and last node LL on - the current level LVL. 
-*/ - - if (lvl == 1) { - lf = 1; - ll = 1; - } else { - i__2 = lvl - 1; - lf = pow_ii(&c__2, &i__2); - ll = ((lf) << (1)) - 1; - } - i__2 = lf; - for (i__ = ll; i__ >= i__2; --i__) { - im1 = i__ - 1; - ic = iwork[inode + im1]; - nl = iwork[ndiml + im1]; - nr = iwork[ndimr + im1]; - nlf = ic - nl; - nrf = ic + 1; - if (i__ == ll) { - sqre = 0; - } else { - sqre = 1; - } - ++j; - dlals0_(icompq, &nl, &nr, &sqre, nrhs, &b[nlf + b_dim1], ldb, &bx[ - nlf + bx_dim1], ldbx, &perm[nlf + lvl * perm_dim1], & - givptr[j], &givcol[nlf + lvl2 * givcol_dim1], ldgcol, & - givnum[nlf + lvl2 * givnum_dim1], ldu, &poles[nlf + lvl2 * - poles_dim1], &difl[nlf + lvl * difl_dim1], &difr[nlf + - lvl2 * difr_dim1], &z__[nlf + lvl * z_dim1], &k[j], &c__[ - j], &s[j], &work[1], info); -/* L60: */ - } -/* L70: */ - } - -/* - The nodes on the bottom level of the tree were solved - by DLASDQ. The corresponding right singular vector - matrices are in explicit form. Apply them back. -*/ - - ndb1 = (nd + 1) / 2; - i__1 = nd; - for (i__ = ndb1; i__ <= i__1; ++i__) { - i1 = i__ - 1; - ic = iwork[inode + i1]; - nl = iwork[ndiml + i1]; - nr = iwork[ndimr + i1]; - nlp1 = nl + 1; - if (i__ == nd) { - nrp1 = nr; - } else { - nrp1 = nr + 1; - } - nlf = ic - nl; - nrf = ic + 1; - dgemm_("T", "N", &nlp1, nrhs, &nlp1, &c_b15, &vt[nlf + vt_dim1], ldu, - &b[nlf + b_dim1], ldb, &c_b29, &bx[nlf + bx_dim1], ldbx); - dgemm_("T", "N", &nrp1, nrhs, &nrp1, &c_b15, &vt[nrf + vt_dim1], ldu, - &b[nrf + b_dim1], ldb, &c_b29, &bx[nrf + bx_dim1], ldbx); -/* L80: */ - } - -L90: - - return 0; - -/* End of DLALSA */ - -} /* dlalsa_ */ - -/* Subroutine */ int dlalsd_(char *uplo, integer *smlsiz, integer *n, integer - *nrhs, doublereal *d__, doublereal *e, doublereal *b, integer *ldb, - doublereal *rcond, integer *rank, doublereal *work, integer *iwork, - integer *info) -{ - /* System generated locals */ - integer b_dim1, b_offset, i__1, i__2; - doublereal d__1; - - /* Builtin functions */ - double log(doublereal), 
d_sign(doublereal *, doublereal *); - - /* Local variables */ - static integer c__, i__, j, k; - static doublereal r__; - static integer s, u, z__; - static doublereal cs; - static integer bx; - static doublereal sn; - static integer st, vt, nm1, st1; - static doublereal eps; - static integer iwk; - static doublereal tol; - static integer difl, difr, perm, nsub; - extern /* Subroutine */ int drot_(integer *, doublereal *, integer *, - doublereal *, integer *, doublereal *, doublereal *); - static integer nlvl, sqre, bxst; - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *), - dcopy_(integer *, doublereal *, integer *, doublereal *, integer - *); - static integer poles, sizei, nsize, nwork, icmpq1, icmpq2; - - extern /* Subroutine */ int dlasda_(integer *, integer *, integer *, - integer *, doublereal *, doublereal *, doublereal *, integer *, - doublereal *, integer *, doublereal *, doublereal *, doublereal *, - doublereal *, integer *, integer *, integer *, integer *, - doublereal *, doublereal *, doublereal *, doublereal *, integer *, - integer *), dlalsa_(integer *, integer *, integer *, integer *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - integer *, doublereal *, integer *, doublereal *, doublereal *, - doublereal *, doublereal *, integer *, integer *, integer *, - integer *, doublereal *, doublereal *, doublereal *, doublereal *, - integer *, integer *), dlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, doublereal *, - integer *, integer *); - extern integer idamax_(integer *, doublereal *, integer *); - extern /* Subroutine */ int dlasdq_(char *, integer *, integer *, integer - *, integer *, integer *, doublereal *, doublereal *, doublereal *, - integer *, doublereal *, integer *, doublereal *, integer *, - doublereal *, integer *), dlacpy_(char *, integer 
*, - integer *, doublereal *, integer *, doublereal *, integer *), dlartg_(doublereal *, doublereal *, doublereal *, - doublereal *, doublereal *), dlaset_(char *, integer *, integer *, - doublereal *, doublereal *, doublereal *, integer *), - xerbla_(char *, integer *); - static integer givcol; - extern doublereal dlanst_(char *, integer *, doublereal *, doublereal *); - extern /* Subroutine */ int dlasrt_(char *, integer *, doublereal *, - integer *); - static doublereal orgnrm; - static integer givnum, givptr, smlszp; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1999 - - - Purpose - ======= - - DLALSD uses the singular value decomposition of A to solve the least - squares problem of finding X to minimize the Euclidean norm of each - column of A*X-B, where A is N-by-N upper bidiagonal, and X and B - are N-by-NRHS. The solution X overwrites B. - - The singular values of A smaller than RCOND times the largest - singular value are treated as zero in solving the least squares - problem; in this case a minimum norm solution is returned. - The actual singular values are returned in D in ascending order. - - This code makes very mild assumptions about floating point - arithmetic. It will work on machines with a guard digit in - add/subtract, or on those binary machines without guard digits - which subtract like the Cray XMP, Cray YMP, Cray C 90, or Cray 2. - It could conceivably fail on hexadecimal or decimal machines - without guard digits, but we know of none. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - = 'U': D and E define an upper bidiagonal matrix. - = 'L': D and E define a lower bidiagonal matrix. - - SMLSIZ (input) INTEGER - The maximum size of the subproblems at the bottom of the - computation tree. - - N (input) INTEGER - The dimension of the bidiagonal matrix. N >= 0. 
- - NRHS (input) INTEGER - The number of columns of B. NRHS must be at least 1. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry D contains the main diagonal of the bidiagonal - matrix. On exit, if INFO = 0, D contains its singular values. - - E (input) DOUBLE PRECISION array, dimension (N-1) - Contains the super-diagonal entries of the bidiagonal matrix. - On exit, E has been destroyed. - - B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS) - On input, B contains the right hand sides of the least - squares problem. On output, B contains the solution X. - - LDB (input) INTEGER - The leading dimension of B in the calling subprogram. - LDB must be at least max(1,N). - - RCOND (input) DOUBLE PRECISION - The singular values of A less than or equal to RCOND times - the largest singular value are treated as zero in solving - the least squares problem. If RCOND is negative, - machine precision is used instead. - For example, if diag(S)*X=B were the least squares problem, - where diag(S) is a diagonal matrix of singular values, the - solution would be X(i) = B(i) / S(i) if S(i) is greater than - RCOND*max(S), and X(i) = 0 if S(i) is less than or equal to - RCOND*max(S). - - RANK (output) INTEGER - The number of singular values of A greater than RCOND times - the largest singular value. - - WORK (workspace) DOUBLE PRECISION array, dimension at least - (9*N + 2*N*SMLSIZ + 8*N*NLVL + N*NRHS + (SMLSIZ+1)**2), - where NLVL = max(0, INT(log_2 (N/(SMLSIZ+1))) + 1). - - IWORK (workspace) INTEGER array, dimension at least - (3*N*NLVL + 11*N) - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: The algorithm failed to compute a singular value while - working on the submatrix lying in rows and columns - INFO/(N+1) through MOD(INFO,N+1). 
- - Further Details - =============== - - Based on contributions by - Ming Gu and Ren-Cang Li, Computer Science Division, University of - California at Berkeley, USA - Osni Marques, LBNL/NERSC, USA - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --d__; - --e; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - --work; - --iwork; - - /* Function Body */ - *info = 0; - - if (*n < 0) { - *info = -3; - } else if (*nrhs < 1) { - *info = -4; - } else if (*ldb < 1 || *ldb < *n) { - *info = -8; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLALSD", &i__1); - return 0; - } - - eps = EPSILON; - -/* Set up the tolerance. */ - - if (*rcond <= 0. || *rcond >= 1.) { - *rcond = eps; - } - - *rank = 0; - -/* Quick return if possible. */ - - if (*n == 0) { - return 0; - } else if (*n == 1) { - if (d__[1] == 0.) { - dlaset_("A", &c__1, nrhs, &c_b29, &c_b29, &b[b_offset], ldb); - } else { - *rank = 1; - dlascl_("G", &c__0, &c__0, &d__[1], &c_b15, &c__1, nrhs, &b[ - b_offset], ldb, info); - d__[1] = abs(d__[1]); - } - return 0; - } - -/* Rotate the matrix if it is lower bidiagonal. */ - - if (*(unsigned char *)uplo == 'L') { - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - dlartg_(&d__[i__], &e[i__], &cs, &sn, &r__); - d__[i__] = r__; - e[i__] = sn * d__[i__ + 1]; - d__[i__ + 1] = cs * d__[i__ + 1]; - if (*nrhs == 1) { - drot_(&c__1, &b[i__ + b_dim1], &c__1, &b[i__ + 1 + b_dim1], & - c__1, &cs, &sn); - } else { - work[((i__) << (1)) - 1] = cs; - work[i__ * 2] = sn; - } -/* L10: */ - } - if (*nrhs > 1) { - i__1 = *nrhs; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = *n - 1; - for (j = 1; j <= i__2; ++j) { - cs = work[((j) << (1)) - 1]; - sn = work[j * 2]; - drot_(&c__1, &b[j + i__ * b_dim1], &c__1, &b[j + 1 + i__ * - b_dim1], &c__1, &cs, &sn); -/* L20: */ - } -/* L30: */ - } - } - } - -/* Scale. 
*/ - - nm1 = *n - 1; - orgnrm = dlanst_("M", n, &d__[1], &e[1]); - if (orgnrm == 0.) { - dlaset_("A", n, nrhs, &c_b29, &c_b29, &b[b_offset], ldb); - return 0; - } - - dlascl_("G", &c__0, &c__0, &orgnrm, &c_b15, n, &c__1, &d__[1], n, info); - dlascl_("G", &c__0, &c__0, &orgnrm, &c_b15, &nm1, &c__1, &e[1], &nm1, - info); - -/* - If N is smaller than the minimum divide size SMLSIZ, then solve - the problem with another solver. -*/ - - if (*n <= *smlsiz) { - nwork = *n * *n + 1; - dlaset_("A", n, n, &c_b29, &c_b15, &work[1], n); - dlasdq_("U", &c__0, n, n, &c__0, nrhs, &d__[1], &e[1], &work[1], n, & - work[1], n, &b[b_offset], ldb, &work[nwork], info); - if (*info != 0) { - return 0; - } - tol = *rcond * (d__1 = d__[idamax_(n, &d__[1], &c__1)], abs(d__1)); - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - if (d__[i__] <= tol) { - dlaset_("A", &c__1, nrhs, &c_b29, &c_b29, &b[i__ + b_dim1], - ldb); - } else { - dlascl_("G", &c__0, &c__0, &d__[i__], &c_b15, &c__1, nrhs, &b[ - i__ + b_dim1], ldb, info); - ++(*rank); - } -/* L40: */ - } - dgemm_("T", "N", n, nrhs, n, &c_b15, &work[1], n, &b[b_offset], ldb, & - c_b29, &work[nwork], n); - dlacpy_("A", n, nrhs, &work[nwork], n, &b[b_offset], ldb); - -/* Unscale. */ - - dlascl_("G", &c__0, &c__0, &c_b15, &orgnrm, n, &c__1, &d__[1], n, - info); - dlasrt_("D", n, &d__[1], info); - dlascl_("G", &c__0, &c__0, &orgnrm, &c_b15, n, nrhs, &b[b_offset], - ldb, info); - - return 0; - } - -/* Book-keeping and setting up some constants. 
*/ - - nlvl = (integer) (log((doublereal) (*n) / (doublereal) (*smlsiz + 1)) / - log(2.)) + 1; - - smlszp = *smlsiz + 1; - - u = 1; - vt = *smlsiz * *n + 1; - difl = vt + smlszp * *n; - difr = difl + nlvl * *n; - z__ = difr + ((nlvl * *n) << (1)); - c__ = z__ + nlvl * *n; - s = c__ + *n; - poles = s + *n; - givnum = poles + ((nlvl) << (1)) * *n; - bx = givnum + ((nlvl) << (1)) * *n; - nwork = bx + *n * *nrhs; - - sizei = *n + 1; - k = sizei + *n; - givptr = k + *n; - perm = givptr + *n; - givcol = perm + nlvl * *n; - iwk = givcol + ((nlvl * *n) << (1)); - - st = 1; - sqre = 0; - icmpq1 = 1; - icmpq2 = 0; - nsub = 0; - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - if ((d__1 = d__[i__], abs(d__1)) < eps) { - d__[i__] = d_sign(&eps, &d__[i__]); - } -/* L50: */ - } - - i__1 = nm1; - for (i__ = 1; i__ <= i__1; ++i__) { - if ((d__1 = e[i__], abs(d__1)) < eps || i__ == nm1) { - ++nsub; - iwork[nsub] = st; - -/* - Subproblem found. First determine its size and then - apply divide and conquer on it. -*/ - - if (i__ < nm1) { - -/* A subproblem with E(I) small for I < NM1. */ - - nsize = i__ - st + 1; - iwork[sizei + nsub - 1] = nsize; - } else if ((d__1 = e[i__], abs(d__1)) >= eps) { - -/* A subproblem with E(NM1) not too small but I = NM1. */ - - nsize = *n - st + 1; - iwork[sizei + nsub - 1] = nsize; - } else { - -/* - A subproblem with E(NM1) small. This implies an - 1-by-1 subproblem at D(N), which is not solved - explicitly. -*/ - - nsize = i__ - st + 1; - iwork[sizei + nsub - 1] = nsize; - ++nsub; - iwork[nsub] = *n; - iwork[sizei + nsub - 1] = 1; - dcopy_(nrhs, &b[*n + b_dim1], ldb, &work[bx + nm1], n); - } - st1 = st - 1; - if (nsize == 1) { - -/* - This is a 1-by-1 subproblem and is not solved - explicitly. -*/ - - dcopy_(nrhs, &b[st + b_dim1], ldb, &work[bx + st1], n); - } else if (nsize <= *smlsiz) { - -/* This is a small subproblem and is solved by DLASDQ. 
*/ - - dlaset_("A", &nsize, &nsize, &c_b29, &c_b15, &work[vt + st1], - n); - dlasdq_("U", &c__0, &nsize, &nsize, &c__0, nrhs, &d__[st], &e[ - st], &work[vt + st1], n, &work[nwork], n, &b[st + - b_dim1], ldb, &work[nwork], info); - if (*info != 0) { - return 0; - } - dlacpy_("A", &nsize, nrhs, &b[st + b_dim1], ldb, &work[bx + - st1], n); - } else { - -/* A large problem. Solve it using divide and conquer. */ - - dlasda_(&icmpq1, smlsiz, &nsize, &sqre, &d__[st], &e[st], & - work[u + st1], n, &work[vt + st1], &iwork[k + st1], & - work[difl + st1], &work[difr + st1], &work[z__ + st1], - &work[poles + st1], &iwork[givptr + st1], &iwork[ - givcol + st1], n, &iwork[perm + st1], &work[givnum + - st1], &work[c__ + st1], &work[s + st1], &work[nwork], - &iwork[iwk], info); - if (*info != 0) { - return 0; - } - bxst = bx + st1; - dlalsa_(&icmpq2, smlsiz, &nsize, nrhs, &b[st + b_dim1], ldb, & - work[bxst], n, &work[u + st1], n, &work[vt + st1], & - iwork[k + st1], &work[difl + st1], &work[difr + st1], - &work[z__ + st1], &work[poles + st1], &iwork[givptr + - st1], &iwork[givcol + st1], n, &iwork[perm + st1], & - work[givnum + st1], &work[c__ + st1], &work[s + st1], - &work[nwork], &iwork[iwk], info); - if (*info != 0) { - return 0; - } - } - st = i__ + 1; - } -/* L60: */ - } - -/* Apply the singular values and treat the tiny ones as zero. */ - - tol = *rcond * (d__1 = d__[idamax_(n, &d__[1], &c__1)], abs(d__1)); - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* - Some of the elements in D can be negative because 1-by-1 - subproblems were not solved explicitly. -*/ - - if ((d__1 = d__[i__], abs(d__1)) <= tol) { - dlaset_("A", &c__1, nrhs, &c_b29, &c_b29, &work[bx + i__ - 1], n); - } else { - ++(*rank); - dlascl_("G", &c__0, &c__0, &d__[i__], &c_b15, &c__1, nrhs, &work[ - bx + i__ - 1], n, info); - } - d__[i__] = (d__1 = d__[i__], abs(d__1)); -/* L70: */ - } - -/* Now apply back the right singular vectors. 
*/ - - icmpq2 = 1; - i__1 = nsub; - for (i__ = 1; i__ <= i__1; ++i__) { - st = iwork[i__]; - st1 = st - 1; - nsize = iwork[sizei + i__ - 1]; - bxst = bx + st1; - if (nsize == 1) { - dcopy_(nrhs, &work[bxst], n, &b[st + b_dim1], ldb); - } else if (nsize <= *smlsiz) { - dgemm_("T", "N", &nsize, nrhs, &nsize, &c_b15, &work[vt + st1], n, - &work[bxst], n, &c_b29, &b[st + b_dim1], ldb); - } else { - dlalsa_(&icmpq2, smlsiz, &nsize, nrhs, &work[bxst], n, &b[st + - b_dim1], ldb, &work[u + st1], n, &work[vt + st1], &iwork[ - k + st1], &work[difl + st1], &work[difr + st1], &work[z__ - + st1], &work[poles + st1], &iwork[givptr + st1], &iwork[ - givcol + st1], n, &iwork[perm + st1], &work[givnum + st1], - &work[c__ + st1], &work[s + st1], &work[nwork], &iwork[ - iwk], info); - if (*info != 0) { - return 0; - } - } -/* L80: */ - } - -/* Unscale and sort the singular values. */ - - dlascl_("G", &c__0, &c__0, &c_b15, &orgnrm, n, &c__1, &d__[1], n, info); - dlasrt_("D", n, &d__[1], info); - dlascl_("G", &c__0, &c__0, &orgnrm, &c_b15, n, nrhs, &b[b_offset], ldb, - info); - - return 0; - -/* End of DLALSD */ - -} /* dlalsd_ */ - -/* Subroutine */ int dlamrg_(integer *n1, integer *n2, doublereal *a, integer - *dtrd1, integer *dtrd2, integer *index) -{ - /* System generated locals */ - integer i__1; - - /* Local variables */ - static integer i__, ind1, ind2, n1sv, n2sv; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - DLAMRG will create a permutation list which will merge the elements - of A (which is composed of two independently sorted sets) into a - single set which is sorted in ascending order. - - Arguments - ========= - - N1 (input) INTEGER - N2 (input) INTEGER - These arguments contain the respective lengths of the two - sorted lists to be merged. 
- - A (input) DOUBLE PRECISION array, dimension (N1+N2) - The first N1 elements of A contain a list of numbers which - are sorted in either ascending or descending order. Likewise - for the final N2 elements. - - DTRD1 (input) INTEGER - DTRD2 (input) INTEGER - These are the strides to be taken through the array A. - Allowable strides are 1 and -1. They indicate whether a - subset of A is sorted in ascending (DTRDx = 1) or descending - (DTRDx = -1) order. - - INDEX (output) INTEGER array, dimension (N1+N2) - On exit this array will contain a permutation such that - if B( I ) = A( INDEX( I ) ) for I=1,N1+N2, then B will be - sorted in ascending order. - - ===================================================================== -*/ - - - /* Parameter adjustments */ - --index; - --a; - - /* Function Body */ - n1sv = *n1; - n2sv = *n2; - if (*dtrd1 > 0) { - ind1 = 1; - } else { - ind1 = *n1; - } - if (*dtrd2 > 0) { - ind2 = *n1 + 1; - } else { - ind2 = *n1 + *n2; - } - i__ = 1; -/* while ( (N1SV > 0) & (N2SV > 0) ) */ -L10: - if ((n1sv > 0 && n2sv > 0)) { - if (a[ind1] <= a[ind2]) { - index[i__] = ind1; - ++i__; - ind1 += *dtrd1; - --n1sv; - } else { - index[i__] = ind2; - ++i__; - ind2 += *dtrd2; - --n2sv; - } - goto L10; - } -/* end while */ - if (n1sv == 0) { - i__1 = n2sv; - for (n1sv = 1; n1sv <= i__1; ++n1sv) { - index[i__] = ind2; - ++i__; - ind2 += *dtrd2; -/* L20: */ - } - } else { -/* N2SV .EQ. 
0 */ - i__1 = n1sv; - for (n2sv = 1; n2sv <= i__1; ++n2sv) { - index[i__] = ind1; - ++i__; - ind1 += *dtrd1; -/* L30: */ - } - } - - return 0; - -/* End of DLAMRG */ - -} /* dlamrg_ */ - -doublereal dlange_(char *norm, integer *m, integer *n, doublereal *a, integer - *lda, doublereal *work) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2; - doublereal ret_val, d__1, d__2, d__3; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static integer i__, j; - static doublereal sum, scale; - extern logical lsame_(char *, char *); - static doublereal value; - extern /* Subroutine */ int dlassq_(integer *, doublereal *, integer *, - doublereal *, doublereal *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLANGE returns the value of the one norm, or the Frobenius norm, or - the infinity norm, or the element of largest absolute value of a - real matrix A. - - Description - =========== - - DLANGE returns the value - - DLANGE = ( max(abs(A(i,j))), NORM = 'M' or 'm' - ( - ( norm1(A), NORM = '1', 'O' or 'o' - ( - ( normI(A), NORM = 'I' or 'i' - ( - ( normF(A), NORM = 'F', 'f', 'E' or 'e' - - where norm1 denotes the one norm of a matrix (maximum column sum), - normI denotes the infinity norm of a matrix (maximum row sum) and - normF denotes the Frobenius norm of a matrix (square root of sum of - squares). Note that max(abs(A(i,j))) is not a matrix norm. - - Arguments - ========= - - NORM (input) CHARACTER*1 - Specifies the value to be returned in DLANGE as described - above. - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. When M = 0, - DLANGE is set to zero. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. When N = 0, - DLANGE is set to zero. 
- - A (input) DOUBLE PRECISION array, dimension (LDA,N) - The m by n matrix A. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(M,1). - - WORK (workspace) DOUBLE PRECISION array, dimension (LWORK), - where LWORK >= M when NORM = 'I'; otherwise, WORK is not - referenced. - - ===================================================================== -*/ - - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --work; - - /* Function Body */ - if (min(*m,*n) == 0) { - value = 0.; - } else if (lsame_(norm, "M")) { - -/* Find max(abs(A(i,j))). */ - - value = 0.; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { -/* Computing MAX */ - d__2 = value, d__3 = (d__1 = a[i__ + j * a_dim1], abs(d__1)); - value = max(d__2,d__3); -/* L10: */ - } -/* L20: */ - } - } else if (lsame_(norm, "O") || *(unsigned char *) - norm == '1') { - -/* Find norm1(A). */ - - value = 0.; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = 0.; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - sum += (d__1 = a[i__ + j * a_dim1], abs(d__1)); -/* L30: */ - } - value = max(value,sum); -/* L40: */ - } - } else if (lsame_(norm, "I")) { - -/* Find normI(A). */ - - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - work[i__] = 0.; -/* L50: */ - } - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - work[i__] += (d__1 = a[i__ + j * a_dim1], abs(d__1)); -/* L60: */ - } -/* L70: */ - } - value = 0.; - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { -/* Computing MAX */ - d__1 = value, d__2 = work[i__]; - value = max(d__1,d__2); -/* L80: */ - } - } else if (lsame_(norm, "F") || lsame_(norm, "E")) { - -/* Find normF(A). 
*/ - - scale = 0.; - sum = 1.; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - dlassq_(m, &a[j * a_dim1 + 1], &c__1, &scale, &sum); -/* L90: */ - } - value = scale * sqrt(sum); - } - - ret_val = value; - return ret_val; - -/* End of DLANGE */ - -} /* dlange_ */ - -doublereal dlanhs_(char *norm, integer *n, doublereal *a, integer *lda, - doublereal *work) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - doublereal ret_val, d__1, d__2, d__3; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static integer i__, j; - static doublereal sum, scale; - extern logical lsame_(char *, char *); - static doublereal value; - extern /* Subroutine */ int dlassq_(integer *, doublereal *, integer *, - doublereal *, doublereal *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLANHS returns the value of the one norm, or the Frobenius norm, or - the infinity norm, or the element of largest absolute value of a - Hessenberg matrix A. - - Description - =========== - - DLANHS returns the value - - DLANHS = ( max(abs(A(i,j))), NORM = 'M' or 'm' - ( - ( norm1(A), NORM = '1', 'O' or 'o' - ( - ( normI(A), NORM = 'I' or 'i' - ( - ( normF(A), NORM = 'F', 'f', 'E' or 'e' - - where norm1 denotes the one norm of a matrix (maximum column sum), - normI denotes the infinity norm of a matrix (maximum row sum) and - normF denotes the Frobenius norm of a matrix (square root of sum of - squares). Note that max(abs(A(i,j))) is not a matrix norm. - - Arguments - ========= - - NORM (input) CHARACTER*1 - Specifies the value to be returned in DLANHS as described - above. - - N (input) INTEGER - The order of the matrix A. N >= 0. When N = 0, DLANHS is - set to zero. 
- - A (input) DOUBLE PRECISION array, dimension (LDA,N) - The n by n upper Hessenberg matrix A; the part of A below the - first sub-diagonal is not referenced. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(N,1). - - WORK (workspace) DOUBLE PRECISION array, dimension (LWORK), - where LWORK >= N when NORM = 'I'; otherwise, WORK is not - referenced. - - ===================================================================== -*/ - - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --work; - - /* Function Body */ - if (*n == 0) { - value = 0.; - } else if (lsame_(norm, "M")) { - -/* Find max(abs(A(i,j))). */ - - value = 0.; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MIN */ - i__3 = *n, i__4 = j + 1; - i__2 = min(i__3,i__4); - for (i__ = 1; i__ <= i__2; ++i__) { -/* Computing MAX */ - d__2 = value, d__3 = (d__1 = a[i__ + j * a_dim1], abs(d__1)); - value = max(d__2,d__3); -/* L10: */ - } -/* L20: */ - } - } else if (lsame_(norm, "O") || *(unsigned char *) - norm == '1') { - -/* Find norm1(A). */ - - value = 0.; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = 0.; -/* Computing MIN */ - i__3 = *n, i__4 = j + 1; - i__2 = min(i__3,i__4); - for (i__ = 1; i__ <= i__2; ++i__) { - sum += (d__1 = a[i__ + j * a_dim1], abs(d__1)); -/* L30: */ - } - value = max(value,sum); -/* L40: */ - } - } else if (lsame_(norm, "I")) { - -/* Find normI(A). 
*/ - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - work[i__] = 0.; -/* L50: */ - } - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MIN */ - i__3 = *n, i__4 = j + 1; - i__2 = min(i__3,i__4); - for (i__ = 1; i__ <= i__2; ++i__) { - work[i__] += (d__1 = a[i__ + j * a_dim1], abs(d__1)); -/* L60: */ - } -/* L70: */ - } - value = 0.; - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { -/* Computing MAX */ - d__1 = value, d__2 = work[i__]; - value = max(d__1,d__2); -/* L80: */ - } - } else if (lsame_(norm, "F") || lsame_(norm, "E")) { - -/* Find normF(A). */ - - scale = 0.; - sum = 1.; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MIN */ - i__3 = *n, i__4 = j + 1; - i__2 = min(i__3,i__4); - dlassq_(&i__2, &a[j * a_dim1 + 1], &c__1, &scale, &sum); -/* L90: */ - } - value = scale * sqrt(sum); - } - - ret_val = value; - return ret_val; - -/* End of DLANHS */ - -} /* dlanhs_ */ - -doublereal dlanst_(char *norm, integer *n, doublereal *d__, doublereal *e) -{ - /* System generated locals */ - integer i__1; - doublereal ret_val, d__1, d__2, d__3, d__4, d__5; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static integer i__; - static doublereal sum, scale; - extern logical lsame_(char *, char *); - static doublereal anorm; - extern /* Subroutine */ int dlassq_(integer *, doublereal *, integer *, - doublereal *, doublereal *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - DLANST returns the value of the one norm, or the Frobenius norm, or - the infinity norm, or the element of largest absolute value of a - real symmetric tridiagonal matrix A. 
- - Description - =========== - - DLANST returns the value - - DLANST = ( max(abs(A(i,j))), NORM = 'M' or 'm' - ( - ( norm1(A), NORM = '1', 'O' or 'o' - ( - ( normI(A), NORM = 'I' or 'i' - ( - ( normF(A), NORM = 'F', 'f', 'E' or 'e' - - where norm1 denotes the one norm of a matrix (maximum column sum), - normI denotes the infinity norm of a matrix (maximum row sum) and - normF denotes the Frobenius norm of a matrix (square root of sum of - squares). Note that max(abs(A(i,j))) is not a matrix norm. - - Arguments - ========= - - NORM (input) CHARACTER*1 - Specifies the value to be returned in DLANST as described - above. - - N (input) INTEGER - The order of the matrix A. N >= 0. When N = 0, DLANST is - set to zero. - - D (input) DOUBLE PRECISION array, dimension (N) - The diagonal elements of A. - - E (input) DOUBLE PRECISION array, dimension (N-1) - The (n-1) sub-diagonal or super-diagonal elements of A. - - ===================================================================== -*/ - - - /* Parameter adjustments */ - --e; - --d__; - - /* Function Body */ - if (*n <= 0) { - anorm = 0.; - } else if (lsame_(norm, "M")) { - -/* Find max(abs(A(i,j))). */ - - anorm = (d__1 = d__[*n], abs(d__1)); - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { -/* Computing MAX */ - d__2 = anorm, d__3 = (d__1 = d__[i__], abs(d__1)); - anorm = max(d__2,d__3); -/* Computing MAX */ - d__2 = anorm, d__3 = (d__1 = e[i__], abs(d__1)); - anorm = max(d__2,d__3); -/* L10: */ - } - } else if (lsame_(norm, "O") || *(unsigned char *) - norm == '1' || lsame_(norm, "I")) { - -/* Find norm1(A). 
*/ - - if (*n == 1) { - anorm = abs(d__[1]); - } else { -/* Computing MAX */ - d__3 = abs(d__[1]) + abs(e[1]), d__4 = (d__1 = e[*n - 1], abs( - d__1)) + (d__2 = d__[*n], abs(d__2)); - anorm = max(d__3,d__4); - i__1 = *n - 1; - for (i__ = 2; i__ <= i__1; ++i__) { -/* Computing MAX */ - d__4 = anorm, d__5 = (d__1 = d__[i__], abs(d__1)) + (d__2 = e[ - i__], abs(d__2)) + (d__3 = e[i__ - 1], abs(d__3)); - anorm = max(d__4,d__5); -/* L20: */ - } - } - } else if (lsame_(norm, "F") || lsame_(norm, "E")) { - -/* Find normF(A). */ - - scale = 0.; - sum = 1.; - if (*n > 1) { - i__1 = *n - 1; - dlassq_(&i__1, &e[1], &c__1, &scale, &sum); - sum *= 2; - } - dlassq_(n, &d__[1], &c__1, &scale, &sum); - anorm = scale * sqrt(sum); - } - - ret_val = anorm; - return ret_val; - -/* End of DLANST */ - -} /* dlanst_ */ - -doublereal dlansy_(char *norm, char *uplo, integer *n, doublereal *a, integer - *lda, doublereal *work) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2; - doublereal ret_val, d__1, d__2, d__3; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static integer i__, j; - static doublereal sum, absa, scale; - extern logical lsame_(char *, char *); - static doublereal value; - extern /* Subroutine */ int dlassq_(integer *, doublereal *, integer *, - doublereal *, doublereal *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLANSY returns the value of the one norm, or the Frobenius norm, or - the infinity norm, or the element of largest absolute value of a - real symmetric matrix A. 
- - Description - =========== - - DLANSY returns the value - - DLANSY = ( max(abs(A(i,j))), NORM = 'M' or 'm' - ( - ( norm1(A), NORM = '1', 'O' or 'o' - ( - ( normI(A), NORM = 'I' or 'i' - ( - ( normF(A), NORM = 'F', 'f', 'E' or 'e' - - where norm1 denotes the one norm of a matrix (maximum column sum), - normI denotes the infinity norm of a matrix (maximum row sum) and - normF denotes the Frobenius norm of a matrix (square root of sum of - squares). Note that max(abs(A(i,j))) is not a matrix norm. - - Arguments - ========= - - NORM (input) CHARACTER*1 - Specifies the value to be returned in DLANSY as described - above. - - UPLO (input) CHARACTER*1 - Specifies whether the upper or lower triangular part of the - symmetric matrix A is to be referenced. - = 'U': Upper triangular part of A is referenced - = 'L': Lower triangular part of A is referenced - - N (input) INTEGER - The order of the matrix A. N >= 0. When N = 0, DLANSY is - set to zero. - - A (input) DOUBLE PRECISION array, dimension (LDA,N) - The symmetric matrix A. If UPLO = 'U', the leading n by n - upper triangular part of A contains the upper triangular part - of the matrix A, and the strictly lower triangular part of A - is not referenced. If UPLO = 'L', the leading n by n lower - triangular part of A contains the lower triangular part of - the matrix A, and the strictly upper triangular part of A is - not referenced. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(N,1). - - WORK (workspace) DOUBLE PRECISION array, dimension (LWORK), - where LWORK >= N when NORM = 'I' or '1' or 'O'; otherwise, - WORK is not referenced. - - ===================================================================== -*/ - - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --work; - - /* Function Body */ - if (*n == 0) { - value = 0.; - } else if (lsame_(norm, "M")) { - -/* Find max(abs(A(i,j))). 
*/ - - value = 0.; - if (lsame_(uplo, "U")) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j; - for (i__ = 1; i__ <= i__2; ++i__) { -/* Computing MAX */ - d__2 = value, d__3 = (d__1 = a[i__ + j * a_dim1], abs( - d__1)); - value = max(d__2,d__3); -/* L10: */ - } -/* L20: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = j; i__ <= i__2; ++i__) { -/* Computing MAX */ - d__2 = value, d__3 = (d__1 = a[i__ + j * a_dim1], abs( - d__1)); - value = max(d__2,d__3); -/* L30: */ - } -/* L40: */ - } - } - } else if (lsame_(norm, "I") || lsame_(norm, "O") || *(unsigned char *)norm == '1') { - -/* Find normI(A) ( = norm1(A), since A is symmetric). */ - - value = 0.; - if (lsame_(uplo, "U")) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = 0.; - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - absa = (d__1 = a[i__ + j * a_dim1], abs(d__1)); - sum += absa; - work[i__] += absa; -/* L50: */ - } - work[j] = sum + (d__1 = a[j + j * a_dim1], abs(d__1)); -/* L60: */ - } - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { -/* Computing MAX */ - d__1 = value, d__2 = work[i__]; - value = max(d__1,d__2); -/* L70: */ - } - } else { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - work[i__] = 0.; -/* L80: */ - } - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = work[j] + (d__1 = a[j + j * a_dim1], abs(d__1)); - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - absa = (d__1 = a[i__ + j * a_dim1], abs(d__1)); - sum += absa; - work[i__] += absa; -/* L90: */ - } - value = max(value,sum); -/* L100: */ - } - } - } else if (lsame_(norm, "F") || lsame_(norm, "E")) { - -/* Find normF(A). 
*/ - - scale = 0.; - sum = 1.; - if (lsame_(uplo, "U")) { - i__1 = *n; - for (j = 2; j <= i__1; ++j) { - i__2 = j - 1; - dlassq_(&i__2, &a[j * a_dim1 + 1], &c__1, &scale, &sum); -/* L110: */ - } - } else { - i__1 = *n - 1; - for (j = 1; j <= i__1; ++j) { - i__2 = *n - j; - dlassq_(&i__2, &a[j + 1 + j * a_dim1], &c__1, &scale, &sum); -/* L120: */ - } - } - sum *= 2; - i__1 = *lda + 1; - dlassq_(n, &a[a_offset], &i__1, &scale, &sum); - value = scale * sqrt(sum); - } - - ret_val = value; - return ret_val; - -/* End of DLANSY */ - -} /* dlansy_ */ - -/* Subroutine */ int dlanv2_(doublereal *a, doublereal *b, doublereal *c__, - doublereal *d__, doublereal *rt1r, doublereal *rt1i, doublereal *rt2r, - doublereal *rt2i, doublereal *cs, doublereal *sn) -{ - /* System generated locals */ - doublereal d__1, d__2; - - /* Builtin functions */ - double d_sign(doublereal *, doublereal *), sqrt(doublereal); - - /* Local variables */ - static doublereal p, z__, aa, bb, cc, dd, cs1, sn1, sab, sac, eps, tau, - temp, scale, bcmax, bcmis, sigma; - - - -/* - -- LAPACK driver routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DLANV2 computes the Schur factorization of a real 2-by-2 nonsymmetric - matrix in standard form: - - [ A B ] = [ CS -SN ] [ AA BB ] [ CS SN ] - [ C D ] [ SN CS ] [ CC DD ] [-SN CS ] - - where either - 1) CC = 0 so that AA and DD are real eigenvalues of the matrix, or - 2) AA = DD and BB*CC < 0, so that AA + or - sqrt(BB*CC) are complex - conjugate eigenvalues. - - Arguments - ========= - - A (input/output) DOUBLE PRECISION - B (input/output) DOUBLE PRECISION - C (input/output) DOUBLE PRECISION - D (input/output) DOUBLE PRECISION - On entry, the elements of the input matrix. - On exit, they are overwritten by the elements of the - standardised Schur form. 
- - RT1R (output) DOUBLE PRECISION - RT1I (output) DOUBLE PRECISION - RT2R (output) DOUBLE PRECISION - RT2I (output) DOUBLE PRECISION - The real and imaginary parts of the eigenvalues. If the - eigenvalues are a complex conjugate pair, RT1I > 0. - - CS (output) DOUBLE PRECISION - SN (output) DOUBLE PRECISION - Parameters of the rotation matrix. - - Further Details - =============== - - Modified by V. Sima, Research Institute for Informatics, Bucharest, - Romania, to reduce the risk of cancellation errors, - when computing real eigenvalues, and to ensure, if possible, that - abs(RT1R) >= abs(RT2R). - - ===================================================================== -*/ - - - eps = PRECISION; - if (*c__ == 0.) { - *cs = 1.; - *sn = 0.; - goto L10; - - } else if (*b == 0.) { - -/* Swap rows and columns */ - - *cs = 0.; - *sn = 1.; - temp = *d__; - *d__ = *a; - *a = temp; - *b = -(*c__); - *c__ = 0.; - goto L10; - } else if ((*a - *d__ == 0. && d_sign(&c_b15, b) != d_sign(&c_b15, c__))) - { - *cs = 1.; - *sn = 0.; - goto L10; - } else { - - temp = *a - *d__; - p = temp * .5; -/* Computing MAX */ - d__1 = abs(*b), d__2 = abs(*c__); - bcmax = max(d__1,d__2); -/* Computing MIN */ - d__1 = abs(*b), d__2 = abs(*c__); - bcmis = min(d__1,d__2) * d_sign(&c_b15, b) * d_sign(&c_b15, c__); -/* Computing MAX */ - d__1 = abs(p); - scale = max(d__1,bcmax); - z__ = p / scale * p + bcmax / scale * bcmis; - -/* - If Z is of the order of the machine accuracy, postpone the - decision on the nature of eigenvalues -*/ - - if (z__ >= eps * 4.) { - -/* Real eigenvalues. Compute A and D. */ - - d__1 = sqrt(scale) * sqrt(z__); - z__ = p + d_sign(&d__1, &p); - *a = *d__ + z__; - *d__ -= bcmax / z__ * bcmis; - -/* Compute B and the rotation matrix */ - - tau = dlapy2_(c__, &z__); - *cs = z__ / tau; - *sn = *c__ / tau; - *b -= *c__; - *c__ = 0.; - } else { - -/* - Complex eigenvalues, or real (almost) equal eigenvalues. - Make diagonal elements equal. 
-*/ - - sigma = *b + *c__; - tau = dlapy2_(&sigma, &temp); - *cs = sqrt((abs(sigma) / tau + 1.) * .5); - *sn = -(p / (tau * *cs)) * d_sign(&c_b15, &sigma); - -/* - Compute [ AA BB ] = [ A B ] [ CS -SN ] - [ CC DD ] [ C D ] [ SN CS ] -*/ - - aa = *a * *cs + *b * *sn; - bb = -(*a) * *sn + *b * *cs; - cc = *c__ * *cs + *d__ * *sn; - dd = -(*c__) * *sn + *d__ * *cs; - -/* - Compute [ A B ] = [ CS SN ] [ AA BB ] - [ C D ] [-SN CS ] [ CC DD ] -*/ - - *a = aa * *cs + cc * *sn; - *b = bb * *cs + dd * *sn; - *c__ = -aa * *sn + cc * *cs; - *d__ = -bb * *sn + dd * *cs; - - temp = (*a + *d__) * .5; - *a = temp; - *d__ = temp; - - if (*c__ != 0.) { - if (*b != 0.) { - if (d_sign(&c_b15, b) == d_sign(&c_b15, c__)) { - -/* Real eigenvalues: reduce to upper triangular form */ - - sab = sqrt((abs(*b))); - sac = sqrt((abs(*c__))); - d__1 = sab * sac; - p = d_sign(&d__1, c__); - tau = 1. / sqrt((d__1 = *b + *c__, abs(d__1))); - *a = temp + p; - *d__ = temp - p; - *b -= *c__; - *c__ = 0.; - cs1 = sab * tau; - sn1 = sac * tau; - temp = *cs * cs1 - *sn * sn1; - *sn = *cs * sn1 + *sn * cs1; - *cs = temp; - } - } else { - *b = -(*c__); - *c__ = 0.; - temp = *cs; - *cs = -(*sn); - *sn = temp; - } - } - } - - } - -L10: - -/* Store eigenvalues in (RT1R,RT1I) and (RT2R,RT2I). */ - - *rt1r = *a; - *rt2r = *d__; - if (*c__ == 0.) { - *rt1i = 0.; - *rt2i = 0.; - } else { - *rt1i = sqrt((abs(*b))) * sqrt((abs(*c__))); - *rt2i = -(*rt1i); - } - return 0; - -/* End of DLANV2 */ - -} /* dlanv2_ */ - -doublereal dlapy2_(doublereal *x, doublereal *y) -{ - /* System generated locals */ - doublereal ret_val, d__1; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal w, z__, xabs, yabs; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLAPY2 returns sqrt(x**2+y**2), taking care not to cause unnecessary - overflow. - - Arguments - ========= - - X (input) DOUBLE PRECISION - Y (input) DOUBLE PRECISION - X and Y specify the values x and y. - - ===================================================================== -*/ - - - xabs = abs(*x); - yabs = abs(*y); - w = max(xabs,yabs); - z__ = min(xabs,yabs); - if (z__ == 0.) { - ret_val = w; - } else { -/* Computing 2nd power */ - d__1 = z__ / w; - ret_val = w * sqrt(d__1 * d__1 + 1.); - } - return ret_val; - -/* End of DLAPY2 */ - -} /* dlapy2_ */ - -doublereal dlapy3_(doublereal *x, doublereal *y, doublereal *z__) -{ - /* System generated locals */ - doublereal ret_val, d__1, d__2, d__3; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal w, xabs, yabs, zabs; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLAPY3 returns sqrt(x**2+y**2+z**2), taking care not to cause - unnecessary overflow. - - Arguments - ========= - - X (input) DOUBLE PRECISION - Y (input) DOUBLE PRECISION - Z (input) DOUBLE PRECISION - X, Y and Z specify the values x, y and z. - - ===================================================================== -*/ - - - xabs = abs(*x); - yabs = abs(*y); - zabs = abs(*z__); -/* Computing MAX */ - d__1 = max(xabs,yabs); - w = max(d__1,zabs); - if (w == 0.) 
{ - ret_val = 0.; - } else { -/* Computing 2nd power */ - d__1 = xabs / w; -/* Computing 2nd power */ - d__2 = yabs / w; -/* Computing 2nd power */ - d__3 = zabs / w; - ret_val = w * sqrt(d__1 * d__1 + d__2 * d__2 + d__3 * d__3); - } - return ret_val; - -/* End of DLAPY3 */ - -} /* dlapy3_ */ - -/* Subroutine */ int dlarf_(char *side, integer *m, integer *n, doublereal *v, - integer *incv, doublereal *tau, doublereal *c__, integer *ldc, - doublereal *work) -{ - /* System generated locals */ - integer c_dim1, c_offset; - doublereal d__1; - - /* Local variables */ - extern /* Subroutine */ int dger_(integer *, integer *, doublereal *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - integer *); - extern logical lsame_(char *, char *); - extern /* Subroutine */ int dgemv_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, doublereal *, integer *, - doublereal *, doublereal *, integer *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - DLARF applies a real elementary reflector H to a real m by n matrix - C, from either the left or the right. H is represented in the form - - H = I - tau * v * v' - - where tau is a real scalar and v is a real vector. - - If tau = 0, then H is taken to be the unit matrix. - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'L': form H * C - = 'R': form C * H - - M (input) INTEGER - The number of rows of the matrix C. - - N (input) INTEGER - The number of columns of the matrix C. - - V (input) DOUBLE PRECISION array, dimension - (1 + (M-1)*abs(INCV)) if SIDE = 'L' - or (1 + (N-1)*abs(INCV)) if SIDE = 'R' - The vector v in the representation of H. V is not used if - TAU = 0. - - INCV (input) INTEGER - The increment between elements of v. INCV <> 0. 
- - TAU (input) DOUBLE PRECISION - The value tau in the representation of H. - - C (input/output) DOUBLE PRECISION array, dimension (LDC,N) - On entry, the m by n matrix C. - On exit, C is overwritten by the matrix H * C if SIDE = 'L', - or C * H if SIDE = 'R'. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >= max(1,M). - - WORK (workspace) DOUBLE PRECISION array, dimension - (N) if SIDE = 'L' - or (M) if SIDE = 'R' - - ===================================================================== -*/ - - - /* Parameter adjustments */ - --v; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - if (lsame_(side, "L")) { - -/* Form H * C */ - - if (*tau != 0.) { - -/* w := C' * v */ - - dgemv_("Transpose", m, n, &c_b15, &c__[c_offset], ldc, &v[1], - incv, &c_b29, &work[1], &c__1); - -/* C := C - v * w' */ - - d__1 = -(*tau); - dger_(m, n, &d__1, &v[1], incv, &work[1], &c__1, &c__[c_offset], - ldc); - } - } else { - -/* Form C * H */ - - if (*tau != 0.) 
{ - -/* w := C * v */ - - dgemv_("No transpose", m, n, &c_b15, &c__[c_offset], ldc, &v[1], - incv, &c_b29, &work[1], &c__1); - -/* C := C - w * v' */ - - d__1 = -(*tau); - dger_(m, n, &d__1, &work[1], &c__1, &v[1], incv, &c__[c_offset], - ldc); - } - } - return 0; - -/* End of DLARF */ - -} /* dlarf_ */ - -/* Subroutine */ int dlarfb_(char *side, char *trans, char *direct, char * - storev, integer *m, integer *n, integer *k, doublereal *v, integer * - ldv, doublereal *t, integer *ldt, doublereal *c__, integer *ldc, - doublereal *work, integer *ldwork) -{ - /* System generated locals */ - integer c_dim1, c_offset, t_dim1, t_offset, v_dim1, v_offset, work_dim1, - work_offset, i__1, i__2; - - /* Local variables */ - static integer i__, j; - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *); - extern logical lsame_(char *, char *); - extern /* Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *), dtrmm_(char *, char *, char *, char *, - integer *, integer *, doublereal *, doublereal *, integer *, - doublereal *, integer *); - static char transt[1]; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - DLARFB applies a real block reflector H or its transpose H' to a - real m by n matrix C, from either the left or the right. - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'L': apply H or H' from the Left - = 'R': apply H or H' from the Right - - TRANS (input) CHARACTER*1 - = 'N': apply H (No transpose) - = 'T': apply H' (Transpose) - - DIRECT (input) CHARACTER*1 - Indicates how H is formed from a product of elementary - reflectors - = 'F': H = H(1) H(2) . . . H(k) (Forward) - = 'B': H = H(k) . . . 
H(2) H(1) (Backward) - - STOREV (input) CHARACTER*1 - Indicates how the vectors which define the elementary - reflectors are stored: - = 'C': Columnwise - = 'R': Rowwise - - M (input) INTEGER - The number of rows of the matrix C. - - N (input) INTEGER - The number of columns of the matrix C. - - K (input) INTEGER - The order of the matrix T (= the number of elementary - reflectors whose product defines the block reflector). - - V (input) DOUBLE PRECISION array, dimension - (LDV,K) if STOREV = 'C' - (LDV,M) if STOREV = 'R' and SIDE = 'L' - (LDV,N) if STOREV = 'R' and SIDE = 'R' - The matrix V. See further details. - - LDV (input) INTEGER - The leading dimension of the array V. - If STOREV = 'C' and SIDE = 'L', LDV >= max(1,M); - if STOREV = 'C' and SIDE = 'R', LDV >= max(1,N); - if STOREV = 'R', LDV >= K. - - T (input) DOUBLE PRECISION array, dimension (LDT,K) - The triangular k by k matrix T in the representation of the - block reflector. - - LDT (input) INTEGER - The leading dimension of the array T. LDT >= K. - - C (input/output) DOUBLE PRECISION array, dimension (LDC,N) - On entry, the m by n matrix C. - On exit, C is overwritten by H*C or H'*C or C*H or C*H'. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >= max(1,M). - - WORK (workspace) DOUBLE PRECISION array, dimension (LDWORK,K) - - LDWORK (input) INTEGER - The leading dimension of the array WORK. - If SIDE = 'L', LDWORK >= max(1,N); - if SIDE = 'R', LDWORK >= max(1,M). 
- - ===================================================================== - - - Quick return if possible -*/ - - /* Parameter adjustments */ - v_dim1 = *ldv; - v_offset = 1 + v_dim1 * 1; - v -= v_offset; - t_dim1 = *ldt; - t_offset = 1 + t_dim1 * 1; - t -= t_offset; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - work_dim1 = *ldwork; - work_offset = 1 + work_dim1 * 1; - work -= work_offset; - - /* Function Body */ - if (*m <= 0 || *n <= 0) { - return 0; - } - - if (lsame_(trans, "N")) { - *(unsigned char *)transt = 'T'; - } else { - *(unsigned char *)transt = 'N'; - } - - if (lsame_(storev, "C")) { - - if (lsame_(direct, "F")) { - -/* - Let V = ( V1 ) (first K rows) - ( V2 ) - where V1 is unit lower triangular. -*/ - - if (lsame_(side, "L")) { - -/* - Form H * C or H' * C where C = ( C1 ) - ( C2 ) - - W := C' * V = (C1'*V1 + C2'*V2) (stored in WORK) - - W := C1' -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - dcopy_(n, &c__[j + c_dim1], ldc, &work[j * work_dim1 + 1], - &c__1); -/* L10: */ - } - -/* W := W * V1 */ - - dtrmm_("Right", "Lower", "No transpose", "Unit", n, k, &c_b15, - &v[v_offset], ldv, &work[work_offset], ldwork); - if (*m > *k) { - -/* W := W + C2'*V2 */ - - i__1 = *m - *k; - dgemm_("Transpose", "No transpose", n, k, &i__1, &c_b15, & - c__[*k + 1 + c_dim1], ldc, &v[*k + 1 + v_dim1], - ldv, &c_b15, &work[work_offset], ldwork); - } - -/* W := W * T' or W * T */ - - dtrmm_("Right", "Upper", transt, "Non-unit", n, k, &c_b15, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - V * W' */ - - if (*m > *k) { - -/* C2 := C2 - V2 * W' */ - - i__1 = *m - *k; - dgemm_("No transpose", "Transpose", &i__1, n, k, &c_b151, - &v[*k + 1 + v_dim1], ldv, &work[work_offset], - ldwork, &c_b15, &c__[*k + 1 + c_dim1], ldc); - } - -/* W := W * V1' */ - - dtrmm_("Right", "Lower", "Transpose", "Unit", n, k, &c_b15, & - v[v_offset], ldv, &work[work_offset], ldwork); - -/* C1 := C1 - W' */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 
= *n; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[j + i__ * c_dim1] -= work[i__ + j * work_dim1]; -/* L20: */ - } -/* L30: */ - } - - } else if (lsame_(side, "R")) { - -/* - Form C * H or C * H' where C = ( C1 C2 ) - - W := C * V = (C1*V1 + C2*V2) (stored in WORK) - - W := C1 -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - dcopy_(m, &c__[j * c_dim1 + 1], &c__1, &work[j * - work_dim1 + 1], &c__1); -/* L40: */ - } - -/* W := W * V1 */ - - dtrmm_("Right", "Lower", "No transpose", "Unit", m, k, &c_b15, - &v[v_offset], ldv, &work[work_offset], ldwork); - if (*n > *k) { - -/* W := W + C2 * V2 */ - - i__1 = *n - *k; - dgemm_("No transpose", "No transpose", m, k, &i__1, & - c_b15, &c__[(*k + 1) * c_dim1 + 1], ldc, &v[*k + - 1 + v_dim1], ldv, &c_b15, &work[work_offset], - ldwork); - } - -/* W := W * T or W * T' */ - - dtrmm_("Right", "Upper", trans, "Non-unit", m, k, &c_b15, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - W * V' */ - - if (*n > *k) { - -/* C2 := C2 - W * V2' */ - - i__1 = *n - *k; - dgemm_("No transpose", "Transpose", m, &i__1, k, &c_b151, - &work[work_offset], ldwork, &v[*k + 1 + v_dim1], - ldv, &c_b15, &c__[(*k + 1) * c_dim1 + 1], ldc); - } - -/* W := W * V1' */ - - dtrmm_("Right", "Lower", "Transpose", "Unit", m, k, &c_b15, & - v[v_offset], ldv, &work[work_offset], ldwork); - -/* C1 := C1 - W */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] -= work[i__ + j * work_dim1]; -/* L50: */ - } -/* L60: */ - } - } - - } else { - -/* - Let V = ( V1 ) - ( V2 ) (last K rows) - where V2 is unit upper triangular. 
-*/ - - if (lsame_(side, "L")) { - -/* - Form H * C or H' * C where C = ( C1 ) - ( C2 ) - - W := C' * V = (C1'*V1 + C2'*V2) (stored in WORK) - - W := C2' -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - dcopy_(n, &c__[*m - *k + j + c_dim1], ldc, &work[j * - work_dim1 + 1], &c__1); -/* L70: */ - } - -/* W := W * V2 */ - - dtrmm_("Right", "Upper", "No transpose", "Unit", n, k, &c_b15, - &v[*m - *k + 1 + v_dim1], ldv, &work[work_offset], - ldwork); - if (*m > *k) { - -/* W := W + C1'*V1 */ - - i__1 = *m - *k; - dgemm_("Transpose", "No transpose", n, k, &i__1, &c_b15, & - c__[c_offset], ldc, &v[v_offset], ldv, &c_b15, & - work[work_offset], ldwork); - } - -/* W := W * T' or W * T */ - - dtrmm_("Right", "Lower", transt, "Non-unit", n, k, &c_b15, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - V * W' */ - - if (*m > *k) { - -/* C1 := C1 - V1 * W' */ - - i__1 = *m - *k; - dgemm_("No transpose", "Transpose", &i__1, n, k, &c_b151, - &v[v_offset], ldv, &work[work_offset], ldwork, & - c_b15, &c__[c_offset], ldc) - ; - } - -/* W := W * V2' */ - - dtrmm_("Right", "Upper", "Transpose", "Unit", n, k, &c_b15, & - v[*m - *k + 1 + v_dim1], ldv, &work[work_offset], - ldwork); - -/* C2 := C2 - W' */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[*m - *k + j + i__ * c_dim1] -= work[i__ + j * - work_dim1]; -/* L80: */ - } -/* L90: */ - } - - } else if (lsame_(side, "R")) { - -/* - Form C * H or C * H' where C = ( C1 C2 ) - - W := C * V = (C1*V1 + C2*V2) (stored in WORK) - - W := C2 -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - dcopy_(m, &c__[(*n - *k + j) * c_dim1 + 1], &c__1, &work[ - j * work_dim1 + 1], &c__1); -/* L100: */ - } - -/* W := W * V2 */ - - dtrmm_("Right", "Upper", "No transpose", "Unit", m, k, &c_b15, - &v[*n - *k + 1 + v_dim1], ldv, &work[work_offset], - ldwork); - if (*n > *k) { - -/* W := W + C1 * V1 */ - - i__1 = *n - *k; - dgemm_("No transpose", "No transpose", m, k, &i__1, & - 
c_b15, &c__[c_offset], ldc, &v[v_offset], ldv, & - c_b15, &work[work_offset], ldwork); - } - -/* W := W * T or W * T' */ - - dtrmm_("Right", "Lower", trans, "Non-unit", m, k, &c_b15, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - W * V' */ - - if (*n > *k) { - -/* C1 := C1 - W * V1' */ - - i__1 = *n - *k; - dgemm_("No transpose", "Transpose", m, &i__1, k, &c_b151, - &work[work_offset], ldwork, &v[v_offset], ldv, & - c_b15, &c__[c_offset], ldc) - ; - } - -/* W := W * V2' */ - - dtrmm_("Right", "Upper", "Transpose", "Unit", m, k, &c_b15, & - v[*n - *k + 1 + v_dim1], ldv, &work[work_offset], - ldwork); - -/* C2 := C2 - W */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + (*n - *k + j) * c_dim1] -= work[i__ + j * - work_dim1]; -/* L110: */ - } -/* L120: */ - } - } - } - - } else if (lsame_(storev, "R")) { - - if (lsame_(direct, "F")) { - -/* - Let V = ( V1 V2 ) (V1: first K columns) - where V1 is unit upper triangular. 
-*/ - - if (lsame_(side, "L")) { - -/* - Form H * C or H' * C where C = ( C1 ) - ( C2 ) - - W := C' * V' = (C1'*V1' + C2'*V2') (stored in WORK) - - W := C1' -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - dcopy_(n, &c__[j + c_dim1], ldc, &work[j * work_dim1 + 1], - &c__1); -/* L130: */ - } - -/* W := W * V1' */ - - dtrmm_("Right", "Upper", "Transpose", "Unit", n, k, &c_b15, & - v[v_offset], ldv, &work[work_offset], ldwork); - if (*m > *k) { - -/* W := W + C2'*V2' */ - - i__1 = *m - *k; - dgemm_("Transpose", "Transpose", n, k, &i__1, &c_b15, & - c__[*k + 1 + c_dim1], ldc, &v[(*k + 1) * v_dim1 + - 1], ldv, &c_b15, &work[work_offset], ldwork); - } - -/* W := W * T' or W * T */ - - dtrmm_("Right", "Upper", transt, "Non-unit", n, k, &c_b15, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - V' * W' */ - - if (*m > *k) { - -/* C2 := C2 - V2' * W' */ - - i__1 = *m - *k; - dgemm_("Transpose", "Transpose", &i__1, n, k, &c_b151, &v[ - (*k + 1) * v_dim1 + 1], ldv, &work[work_offset], - ldwork, &c_b15, &c__[*k + 1 + c_dim1], ldc); - } - -/* W := W * V1 */ - - dtrmm_("Right", "Upper", "No transpose", "Unit", n, k, &c_b15, - &v[v_offset], ldv, &work[work_offset], ldwork); - -/* C1 := C1 - W' */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[j + i__ * c_dim1] -= work[i__ + j * work_dim1]; -/* L140: */ - } -/* L150: */ - } - - } else if (lsame_(side, "R")) { - -/* - Form C * H or C * H' where C = ( C1 C2 ) - - W := C * V' = (C1*V1' + C2*V2') (stored in WORK) - - W := C1 -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - dcopy_(m, &c__[j * c_dim1 + 1], &c__1, &work[j * - work_dim1 + 1], &c__1); -/* L160: */ - } - -/* W := W * V1' */ - - dtrmm_("Right", "Upper", "Transpose", "Unit", m, k, &c_b15, & - v[v_offset], ldv, &work[work_offset], ldwork); - if (*n > *k) { - -/* W := W + C2 * V2' */ - - i__1 = *n - *k; - dgemm_("No transpose", "Transpose", m, k, &i__1, &c_b15, & - c__[(*k + 1) * c_dim1 + 1], ldc, 
&v[(*k + 1) * - v_dim1 + 1], ldv, &c_b15, &work[work_offset], - ldwork); - } - -/* W := W * T or W * T' */ - - dtrmm_("Right", "Upper", trans, "Non-unit", m, k, &c_b15, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - W * V */ - - if (*n > *k) { - -/* C2 := C2 - W * V2 */ - - i__1 = *n - *k; - dgemm_("No transpose", "No transpose", m, &i__1, k, & - c_b151, &work[work_offset], ldwork, &v[(*k + 1) * - v_dim1 + 1], ldv, &c_b15, &c__[(*k + 1) * c_dim1 - + 1], ldc); - } - -/* W := W * V1 */ - - dtrmm_("Right", "Upper", "No transpose", "Unit", m, k, &c_b15, - &v[v_offset], ldv, &work[work_offset], ldwork); - -/* C1 := C1 - W */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + j * c_dim1] -= work[i__ + j * work_dim1]; -/* L170: */ - } -/* L180: */ - } - - } - - } else { - -/* - Let V = ( V1 V2 ) (V2: last K columns) - where V2 is unit lower triangular. -*/ - - if (lsame_(side, "L")) { - -/* - Form H * C or H' * C where C = ( C1 ) - ( C2 ) - - W := C' * V' = (C1'*V1' + C2'*V2') (stored in WORK) - - W := C2' -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - dcopy_(n, &c__[*m - *k + j + c_dim1], ldc, &work[j * - work_dim1 + 1], &c__1); -/* L190: */ - } - -/* W := W * V2' */ - - dtrmm_("Right", "Lower", "Transpose", "Unit", n, k, &c_b15, & - v[(*m - *k + 1) * v_dim1 + 1], ldv, &work[work_offset] - , ldwork); - if (*m > *k) { - -/* W := W + C1'*V1' */ - - i__1 = *m - *k; - dgemm_("Transpose", "Transpose", n, k, &i__1, &c_b15, & - c__[c_offset], ldc, &v[v_offset], ldv, &c_b15, & - work[work_offset], ldwork); - } - -/* W := W * T' or W * T */ - - dtrmm_("Right", "Lower", transt, "Non-unit", n, k, &c_b15, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - V' * W' */ - - if (*m > *k) { - -/* C1 := C1 - V1' * W' */ - - i__1 = *m - *k; - dgemm_("Transpose", "Transpose", &i__1, n, k, &c_b151, &v[ - v_offset], ldv, &work[work_offset], ldwork, & - c_b15, &c__[c_offset], ldc); - } - -/* W := 
W * V2 */ - - dtrmm_("Right", "Lower", "No transpose", "Unit", n, k, &c_b15, - &v[(*m - *k + 1) * v_dim1 + 1], ldv, &work[ - work_offset], ldwork); - -/* C2 := C2 - W' */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[*m - *k + j + i__ * c_dim1] -= work[i__ + j * - work_dim1]; -/* L200: */ - } -/* L210: */ - } - - } else if (lsame_(side, "R")) { - -/* - Form C * H or C * H' where C = ( C1 C2 ) - - W := C * V' = (C1*V1' + C2*V2') (stored in WORK) - - W := C2 -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - dcopy_(m, &c__[(*n - *k + j) * c_dim1 + 1], &c__1, &work[ - j * work_dim1 + 1], &c__1); -/* L220: */ - } - -/* W := W * V2' */ - - dtrmm_("Right", "Lower", "Transpose", "Unit", m, k, &c_b15, & - v[(*n - *k + 1) * v_dim1 + 1], ldv, &work[work_offset] - , ldwork); - if (*n > *k) { - -/* W := W + C1 * V1' */ - - i__1 = *n - *k; - dgemm_("No transpose", "Transpose", m, k, &i__1, &c_b15, & - c__[c_offset], ldc, &v[v_offset], ldv, &c_b15, & - work[work_offset], ldwork); - } - -/* W := W * T or W * T' */ - - dtrmm_("Right", "Lower", trans, "Non-unit", m, k, &c_b15, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - W * V */ - - if (*n > *k) { - -/* C1 := C1 - W * V1 */ - - i__1 = *n - *k; - dgemm_("No transpose", "No transpose", m, &i__1, k, & - c_b151, &work[work_offset], ldwork, &v[v_offset], - ldv, &c_b15, &c__[c_offset], ldc); - } - -/* W := W * V2 */ - - dtrmm_("Right", "Lower", "No transpose", "Unit", m, k, &c_b15, - &v[(*n - *k + 1) * v_dim1 + 1], ldv, &work[ - work_offset], ldwork); - -/* C1 := C1 - W */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - c__[i__ + (*n - *k + j) * c_dim1] -= work[i__ + j * - work_dim1]; -/* L230: */ - } -/* L240: */ - } - - } - - } - } - - return 0; - -/* End of DLARFB */ - -} /* dlarfb_ */ - -/* Subroutine */ int dlarfg_(integer *n, doublereal *alpha, doublereal *x, - integer *incx, doublereal *tau) -{ - /* 
System generated locals */ - integer i__1; - doublereal d__1; - - /* Builtin functions */ - double d_sign(doublereal *, doublereal *); - - /* Local variables */ - static integer j, knt; - static doublereal beta; - extern doublereal dnrm2_(integer *, doublereal *, integer *); - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *); - static doublereal xnorm; - - static doublereal safmin, rsafmn; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - DLARFG generates a real elementary reflector H of order n, such - that - - H * ( alpha ) = ( beta ), H' * H = I. - ( x ) ( 0 ) - - where alpha and beta are scalars, and x is an (n-1)-element real - vector. H is represented in the form - - H = I - tau * ( 1 ) * ( 1 v' ) , - ( v ) - - where tau is a real scalar and v is a real (n-1)-element - vector. - - If the elements of x are all zero, then tau = 0 and H is taken to be - the unit matrix. - - Otherwise 1 <= tau <= 2. - - Arguments - ========= - - N (input) INTEGER - The order of the elementary reflector. - - ALPHA (input/output) DOUBLE PRECISION - On entry, the value alpha. - On exit, it is overwritten with the value beta. - - X (input/output) DOUBLE PRECISION array, dimension - (1+(N-2)*abs(INCX)) - On entry, the vector x. - On exit, it is overwritten with the vector v. - - INCX (input) INTEGER - The increment between elements of X. INCX > 0. - - TAU (output) DOUBLE PRECISION - The value tau. - - ===================================================================== -*/ - - - /* Parameter adjustments */ - --x; - - /* Function Body */ - if (*n <= 1) { - *tau = 0.; - return 0; - } - - i__1 = *n - 1; - xnorm = dnrm2_(&i__1, &x[1], incx); - - if (xnorm == 0.) 
{ - -/* H = I */ - - *tau = 0.; - } else { - -/* general case */ - - d__1 = dlapy2_(alpha, &xnorm); - beta = -d_sign(&d__1, alpha); - safmin = SAFEMINIMUM / EPSILON; - if (abs(beta) < safmin) { - -/* XNORM, BETA may be inaccurate; scale X and recompute them */ - - rsafmn = 1. / safmin; - knt = 0; -L10: - ++knt; - i__1 = *n - 1; - dscal_(&i__1, &rsafmn, &x[1], incx); - beta *= rsafmn; - *alpha *= rsafmn; - if (abs(beta) < safmin) { - goto L10; - } - -/* New BETA is at most 1, at least SAFMIN */ - - i__1 = *n - 1; - xnorm = dnrm2_(&i__1, &x[1], incx); - d__1 = dlapy2_(alpha, &xnorm); - beta = -d_sign(&d__1, alpha); - *tau = (beta - *alpha) / beta; - i__1 = *n - 1; - d__1 = 1. / (*alpha - beta); - dscal_(&i__1, &d__1, &x[1], incx); - -/* If ALPHA is subnormal, it may lose relative accuracy */ - - *alpha = beta; - i__1 = knt; - for (j = 1; j <= i__1; ++j) { - *alpha *= safmin; -/* L20: */ - } - } else { - *tau = (beta - *alpha) / beta; - i__1 = *n - 1; - d__1 = 1. / (*alpha - beta); - dscal_(&i__1, &d__1, &x[1], incx); - *alpha = beta; - } - } - - return 0; - -/* End of DLARFG */ - -} /* dlarfg_ */ - -/* Subroutine */ int dlarft_(char *direct, char *storev, integer *n, integer * - k, doublereal *v, integer *ldv, doublereal *tau, doublereal *t, - integer *ldt) -{ - /* System generated locals */ - integer t_dim1, t_offset, v_dim1, v_offset, i__1, i__2, i__3; - doublereal d__1; - - /* Local variables */ - static integer i__, j; - static doublereal vii; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int dgemv_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, doublereal *, integer *, - doublereal *, doublereal *, integer *), dtrmv_(char *, - char *, char *, integer *, doublereal *, integer *, doublereal *, - integer *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - DLARFT forms the triangular factor T of a real block reflector H - of order n, which is defined as a product of k elementary reflectors. - - If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; - - If DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. - - If STOREV = 'C', the vector which defines the elementary reflector - H(i) is stored in the i-th column of the array V, and - - H = I - V * T * V' - - If STOREV = 'R', the vector which defines the elementary reflector - H(i) is stored in the i-th row of the array V, and - - H = I - V' * T * V - - Arguments - ========= - - DIRECT (input) CHARACTER*1 - Specifies the order in which the elementary reflectors are - multiplied to form the block reflector: - = 'F': H = H(1) H(2) . . . H(k) (Forward) - = 'B': H = H(k) . . . H(2) H(1) (Backward) - - STOREV (input) CHARACTER*1 - Specifies how the vectors which define the elementary - reflectors are stored (see also Further Details): - = 'C': columnwise - = 'R': rowwise - - N (input) INTEGER - The order of the block reflector H. N >= 0. - - K (input) INTEGER - The order of the triangular factor T (= the number of - elementary reflectors). K >= 1. - - V (input/output) DOUBLE PRECISION array, dimension - (LDV,K) if STOREV = 'C' - (LDV,N) if STOREV = 'R' - The matrix V. See further details. - - LDV (input) INTEGER - The leading dimension of the array V. - If STOREV = 'C', LDV >= max(1,N); if STOREV = 'R', LDV >= K. - - TAU (input) DOUBLE PRECISION array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i). - - T (output) DOUBLE PRECISION array, dimension (LDT,K) - The k by k triangular factor T of the block reflector. - If DIRECT = 'F', T is upper triangular; if DIRECT = 'B', T is - lower triangular. The rest of the array is not used. 
- - LDT (input) INTEGER - The leading dimension of the array T. LDT >= K. - - Further Details - =============== - - The shape of the matrix V and the storage of the vectors which define - the H(i) is best illustrated by the following example with n = 5 and - k = 3. The elements equal to 1 are not stored; the corresponding - array elements are modified but restored on exit. The rest of the - array is not used. - - DIRECT = 'F' and STOREV = 'C': DIRECT = 'F' and STOREV = 'R': - - V = ( 1 ) V = ( 1 v1 v1 v1 v1 ) - ( v1 1 ) ( 1 v2 v2 v2 ) - ( v1 v2 1 ) ( 1 v3 v3 ) - ( v1 v2 v3 ) - ( v1 v2 v3 ) - - DIRECT = 'B' and STOREV = 'C': DIRECT = 'B' and STOREV = 'R': - - V = ( v1 v2 v3 ) V = ( v1 v1 1 ) - ( v1 v2 v3 ) ( v2 v2 v2 1 ) - ( 1 v2 v3 ) ( v3 v3 v3 v3 1 ) - ( 1 v3 ) - ( 1 ) - - ===================================================================== - - - Quick return if possible -*/ - - /* Parameter adjustments */ - v_dim1 = *ldv; - v_offset = 1 + v_dim1 * 1; - v -= v_offset; - --tau; - t_dim1 = *ldt; - t_offset = 1 + t_dim1 * 1; - t -= t_offset; - - /* Function Body */ - if (*n == 0) { - return 0; - } - - if (lsame_(direct, "F")) { - i__1 = *k; - for (i__ = 1; i__ <= i__1; ++i__) { - if (tau[i__] == 0.) 
{ - -/* H(i) = I */ - - i__2 = i__; - for (j = 1; j <= i__2; ++j) { - t[j + i__ * t_dim1] = 0.; -/* L10: */ - } - } else { - -/* general case */ - - vii = v[i__ + i__ * v_dim1]; - v[i__ + i__ * v_dim1] = 1.; - if (lsame_(storev, "C")) { - -/* T(1:i-1,i) := - tau(i) * V(i:n,1:i-1)' * V(i:n,i) */ - - i__2 = *n - i__ + 1; - i__3 = i__ - 1; - d__1 = -tau[i__]; - dgemv_("Transpose", &i__2, &i__3, &d__1, &v[i__ + v_dim1], - ldv, &v[i__ + i__ * v_dim1], &c__1, &c_b29, &t[ - i__ * t_dim1 + 1], &c__1); - } else { - -/* T(1:i-1,i) := - tau(i) * V(1:i-1,i:n) * V(i,i:n)' */ - - i__2 = i__ - 1; - i__3 = *n - i__ + 1; - d__1 = -tau[i__]; - dgemv_("No transpose", &i__2, &i__3, &d__1, &v[i__ * - v_dim1 + 1], ldv, &v[i__ + i__ * v_dim1], ldv, & - c_b29, &t[i__ * t_dim1 + 1], &c__1); - } - v[i__ + i__ * v_dim1] = vii; - -/* T(1:i-1,i) := T(1:i-1,1:i-1) * T(1:i-1,i) */ - - i__2 = i__ - 1; - dtrmv_("Upper", "No transpose", "Non-unit", &i__2, &t[ - t_offset], ldt, &t[i__ * t_dim1 + 1], &c__1); - t[i__ + i__ * t_dim1] = tau[i__]; - } -/* L20: */ - } - } else { - for (i__ = *k; i__ >= 1; --i__) { - if (tau[i__] == 0.) 
{ - -/* H(i) = I */ - - i__1 = *k; - for (j = i__; j <= i__1; ++j) { - t[j + i__ * t_dim1] = 0.; -/* L30: */ - } - } else { - -/* general case */ - - if (i__ < *k) { - if (lsame_(storev, "C")) { - vii = v[*n - *k + i__ + i__ * v_dim1]; - v[*n - *k + i__ + i__ * v_dim1] = 1.; - -/* - T(i+1:k,i) := - - tau(i) * V(1:n-k+i,i+1:k)' * V(1:n-k+i,i) -*/ - - i__1 = *n - *k + i__; - i__2 = *k - i__; - d__1 = -tau[i__]; - dgemv_("Transpose", &i__1, &i__2, &d__1, &v[(i__ + 1) - * v_dim1 + 1], ldv, &v[i__ * v_dim1 + 1], & - c__1, &c_b29, &t[i__ + 1 + i__ * t_dim1], & - c__1); - v[*n - *k + i__ + i__ * v_dim1] = vii; - } else { - vii = v[i__ + (*n - *k + i__) * v_dim1]; - v[i__ + (*n - *k + i__) * v_dim1] = 1.; - -/* - T(i+1:k,i) := - - tau(i) * V(i+1:k,1:n-k+i) * V(i,1:n-k+i)' -*/ - - i__1 = *k - i__; - i__2 = *n - *k + i__; - d__1 = -tau[i__]; - dgemv_("No transpose", &i__1, &i__2, &d__1, &v[i__ + - 1 + v_dim1], ldv, &v[i__ + v_dim1], ldv, & - c_b29, &t[i__ + 1 + i__ * t_dim1], &c__1); - v[i__ + (*n - *k + i__) * v_dim1] = vii; - } - -/* T(i+1:k,i) := T(i+1:k,i+1:k) * T(i+1:k,i) */ - - i__1 = *k - i__; - dtrmv_("Lower", "No transpose", "Non-unit", &i__1, &t[i__ - + 1 + (i__ + 1) * t_dim1], ldt, &t[i__ + 1 + i__ * - t_dim1], &c__1) - ; - } - t[i__ + i__ * t_dim1] = tau[i__]; - } -/* L40: */ - } - } - return 0; - -/* End of DLARFT */ - -} /* dlarft_ */ - -/* Subroutine */ int dlarfx_(char *side, integer *m, integer *n, doublereal * - v, doublereal *tau, doublereal *c__, integer *ldc, doublereal *work) -{ - /* System generated locals */ - integer c_dim1, c_offset, i__1; - doublereal d__1; - - /* Local variables */ - static integer j; - static doublereal t1, t2, t3, t4, t5, t6, t7, t8, t9, v1, v2, v3, v4, v5, - v6, v7, v8, v9, t10, v10, sum; - extern /* Subroutine */ int dger_(integer *, integer *, doublereal *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - integer *); - extern logical lsame_(char *, char *); - extern /* Subroutine */ int dgemv_(char *, 
integer *, integer *, - doublereal *, doublereal *, integer *, doublereal *, integer *, - doublereal *, doublereal *, integer *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - DLARFX applies a real elementary reflector H to a real m by n - matrix C, from either the left or the right. H is represented in the - form - - H = I - tau * v * v' - - where tau is a real scalar and v is a real vector. - - If tau = 0, then H is taken to be the unit matrix. - - This version uses inline code if H has order < 11. - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'L': form H * C - = 'R': form C * H - - M (input) INTEGER - The number of rows of the matrix C. - - N (input) INTEGER - The number of columns of the matrix C. - - V (input) DOUBLE PRECISION array, dimension (M) if SIDE = 'L' - or (N) if SIDE = 'R' - The vector v in the representation of H. - - TAU (input) DOUBLE PRECISION - The value tau in the representation of H. - - C (input/output) DOUBLE PRECISION array, dimension (LDC,N) - On entry, the m by n matrix C. - On exit, C is overwritten by the matrix H * C if SIDE = 'L', - or C * H if SIDE = 'R'. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >= max(1,M). - - WORK (workspace) DOUBLE PRECISION array, dimension - (N) if SIDE = 'L' - or (M) if SIDE = 'R' - WORK is not referenced if H has order < 11. - - ===================================================================== -*/ - - - /* Parameter adjustments */ - --v; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - if (*tau == 0.) { - return 0; - } - if (lsame_(side, "L")) { - -/* Form H * C, where H has order m. 
*/ - - switch (*m) { - case 1: goto L10; - case 2: goto L30; - case 3: goto L50; - case 4: goto L70; - case 5: goto L90; - case 6: goto L110; - case 7: goto L130; - case 8: goto L150; - case 9: goto L170; - case 10: goto L190; - } - -/* - Code for general M - - w := C'*v -*/ - - dgemv_("Transpose", m, n, &c_b15, &c__[c_offset], ldc, &v[1], &c__1, & - c_b29, &work[1], &c__1); - -/* C := C - tau * v * w' */ - - d__1 = -(*tau); - dger_(m, n, &d__1, &v[1], &c__1, &work[1], &c__1, &c__[c_offset], ldc) - ; - goto L410; -L10: - -/* Special code for 1 x 1 Householder */ - - t1 = 1. - *tau * v[1] * v[1]; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - c__[j * c_dim1 + 1] = t1 * c__[j * c_dim1 + 1]; -/* L20: */ - } - goto L410; -L30: - -/* Special code for 2 x 2 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j * c_dim1 + 1] + v2 * c__[j * c_dim1 + 2]; - c__[j * c_dim1 + 1] -= sum * t1; - c__[j * c_dim1 + 2] -= sum * t2; -/* L40: */ - } - goto L410; -L50: - -/* Special code for 3 x 3 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j * c_dim1 + 1] + v2 * c__[j * c_dim1 + 2] + v3 * - c__[j * c_dim1 + 3]; - c__[j * c_dim1 + 1] -= sum * t1; - c__[j * c_dim1 + 2] -= sum * t2; - c__[j * c_dim1 + 3] -= sum * t3; -/* L60: */ - } - goto L410; -L70: - -/* Special code for 4 x 4 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - v4 = v[4]; - t4 = *tau * v4; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j * c_dim1 + 1] + v2 * c__[j * c_dim1 + 2] + v3 * - c__[j * c_dim1 + 3] + v4 * c__[j * c_dim1 + 4]; - c__[j * c_dim1 + 1] -= sum * t1; - c__[j * c_dim1 + 2] -= sum * t2; - c__[j * c_dim1 + 3] -= sum * t3; - c__[j * c_dim1 + 4] -= sum * t4; -/* L80: */ - } - goto L410; -L90: - -/* Special code for 5 x 
5 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - v4 = v[4]; - t4 = *tau * v4; - v5 = v[5]; - t5 = *tau * v5; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j * c_dim1 + 1] + v2 * c__[j * c_dim1 + 2] + v3 * - c__[j * c_dim1 + 3] + v4 * c__[j * c_dim1 + 4] + v5 * c__[ - j * c_dim1 + 5]; - c__[j * c_dim1 + 1] -= sum * t1; - c__[j * c_dim1 + 2] -= sum * t2; - c__[j * c_dim1 + 3] -= sum * t3; - c__[j * c_dim1 + 4] -= sum * t4; - c__[j * c_dim1 + 5] -= sum * t5; -/* L100: */ - } - goto L410; -L110: - -/* Special code for 6 x 6 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - v4 = v[4]; - t4 = *tau * v4; - v5 = v[5]; - t5 = *tau * v5; - v6 = v[6]; - t6 = *tau * v6; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j * c_dim1 + 1] + v2 * c__[j * c_dim1 + 2] + v3 * - c__[j * c_dim1 + 3] + v4 * c__[j * c_dim1 + 4] + v5 * c__[ - j * c_dim1 + 5] + v6 * c__[j * c_dim1 + 6]; - c__[j * c_dim1 + 1] -= sum * t1; - c__[j * c_dim1 + 2] -= sum * t2; - c__[j * c_dim1 + 3] -= sum * t3; - c__[j * c_dim1 + 4] -= sum * t4; - c__[j * c_dim1 + 5] -= sum * t5; - c__[j * c_dim1 + 6] -= sum * t6; -/* L120: */ - } - goto L410; -L130: - -/* Special code for 7 x 7 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - v4 = v[4]; - t4 = *tau * v4; - v5 = v[5]; - t5 = *tau * v5; - v6 = v[6]; - t6 = *tau * v6; - v7 = v[7]; - t7 = *tau * v7; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j * c_dim1 + 1] + v2 * c__[j * c_dim1 + 2] + v3 * - c__[j * c_dim1 + 3] + v4 * c__[j * c_dim1 + 4] + v5 * c__[ - j * c_dim1 + 5] + v6 * c__[j * c_dim1 + 6] + v7 * c__[j * - c_dim1 + 7]; - c__[j * c_dim1 + 1] -= sum * t1; - c__[j * c_dim1 + 2] -= sum * t2; - c__[j * c_dim1 + 3] -= sum * t3; - c__[j * c_dim1 + 4] -= sum * t4; - c__[j * c_dim1 + 5] -= sum * t5; - c__[j * c_dim1 + 6] -= sum * t6; - 
c__[j * c_dim1 + 7] -= sum * t7; -/* L140: */ - } - goto L410; -L150: - -/* Special code for 8 x 8 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - v4 = v[4]; - t4 = *tau * v4; - v5 = v[5]; - t5 = *tau * v5; - v6 = v[6]; - t6 = *tau * v6; - v7 = v[7]; - t7 = *tau * v7; - v8 = v[8]; - t8 = *tau * v8; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j * c_dim1 + 1] + v2 * c__[j * c_dim1 + 2] + v3 * - c__[j * c_dim1 + 3] + v4 * c__[j * c_dim1 + 4] + v5 * c__[ - j * c_dim1 + 5] + v6 * c__[j * c_dim1 + 6] + v7 * c__[j * - c_dim1 + 7] + v8 * c__[j * c_dim1 + 8]; - c__[j * c_dim1 + 1] -= sum * t1; - c__[j * c_dim1 + 2] -= sum * t2; - c__[j * c_dim1 + 3] -= sum * t3; - c__[j * c_dim1 + 4] -= sum * t4; - c__[j * c_dim1 + 5] -= sum * t5; - c__[j * c_dim1 + 6] -= sum * t6; - c__[j * c_dim1 + 7] -= sum * t7; - c__[j * c_dim1 + 8] -= sum * t8; -/* L160: */ - } - goto L410; -L170: - -/* Special code for 9 x 9 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - v4 = v[4]; - t4 = *tau * v4; - v5 = v[5]; - t5 = *tau * v5; - v6 = v[6]; - t6 = *tau * v6; - v7 = v[7]; - t7 = *tau * v7; - v8 = v[8]; - t8 = *tau * v8; - v9 = v[9]; - t9 = *tau * v9; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j * c_dim1 + 1] + v2 * c__[j * c_dim1 + 2] + v3 * - c__[j * c_dim1 + 3] + v4 * c__[j * c_dim1 + 4] + v5 * c__[ - j * c_dim1 + 5] + v6 * c__[j * c_dim1 + 6] + v7 * c__[j * - c_dim1 + 7] + v8 * c__[j * c_dim1 + 8] + v9 * c__[j * - c_dim1 + 9]; - c__[j * c_dim1 + 1] -= sum * t1; - c__[j * c_dim1 + 2] -= sum * t2; - c__[j * c_dim1 + 3] -= sum * t3; - c__[j * c_dim1 + 4] -= sum * t4; - c__[j * c_dim1 + 5] -= sum * t5; - c__[j * c_dim1 + 6] -= sum * t6; - c__[j * c_dim1 + 7] -= sum * t7; - c__[j * c_dim1 + 8] -= sum * t8; - c__[j * c_dim1 + 9] -= sum * t9; -/* L180: */ - } - goto L410; -L190: - -/* Special code for 10 x 10 Householder */ - - v1 = v[1]; - 
t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - v4 = v[4]; - t4 = *tau * v4; - v5 = v[5]; - t5 = *tau * v5; - v6 = v[6]; - t6 = *tau * v6; - v7 = v[7]; - t7 = *tau * v7; - v8 = v[8]; - t8 = *tau * v8; - v9 = v[9]; - t9 = *tau * v9; - v10 = v[10]; - t10 = *tau * v10; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j * c_dim1 + 1] + v2 * c__[j * c_dim1 + 2] + v3 * - c__[j * c_dim1 + 3] + v4 * c__[j * c_dim1 + 4] + v5 * c__[ - j * c_dim1 + 5] + v6 * c__[j * c_dim1 + 6] + v7 * c__[j * - c_dim1 + 7] + v8 * c__[j * c_dim1 + 8] + v9 * c__[j * - c_dim1 + 9] + v10 * c__[j * c_dim1 + 10]; - c__[j * c_dim1 + 1] -= sum * t1; - c__[j * c_dim1 + 2] -= sum * t2; - c__[j * c_dim1 + 3] -= sum * t3; - c__[j * c_dim1 + 4] -= sum * t4; - c__[j * c_dim1 + 5] -= sum * t5; - c__[j * c_dim1 + 6] -= sum * t6; - c__[j * c_dim1 + 7] -= sum * t7; - c__[j * c_dim1 + 8] -= sum * t8; - c__[j * c_dim1 + 9] -= sum * t9; - c__[j * c_dim1 + 10] -= sum * t10; -/* L200: */ - } - goto L410; - } else { - -/* Form C * H, where H has order n. */ - - switch (*n) { - case 1: goto L210; - case 2: goto L230; - case 3: goto L250; - case 4: goto L270; - case 5: goto L290; - case 6: goto L310; - case 7: goto L330; - case 8: goto L350; - case 9: goto L370; - case 10: goto L390; - } - -/* - Code for general N - - w := C * v -*/ - - dgemv_("No transpose", m, n, &c_b15, &c__[c_offset], ldc, &v[1], & - c__1, &c_b29, &work[1], &c__1); - -/* C := C - tau * w * v' */ - - d__1 = -(*tau); - dger_(m, n, &d__1, &work[1], &c__1, &v[1], &c__1, &c__[c_offset], ldc) - ; - goto L410; -L210: - -/* Special code for 1 x 1 Householder */ - - t1 = 1. 
- *tau * v[1] * v[1]; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - c__[j + c_dim1] = t1 * c__[j + c_dim1]; -/* L220: */ - } - goto L410; -L230: - -/* Special code for 2 x 2 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j + c_dim1] + v2 * c__[j + ((c_dim1) << (1))]; - c__[j + c_dim1] -= sum * t1; - c__[j + ((c_dim1) << (1))] -= sum * t2; -/* L240: */ - } - goto L410; -L250: - -/* Special code for 3 x 3 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j + c_dim1] + v2 * c__[j + ((c_dim1) << (1))] + v3 - * c__[j + c_dim1 * 3]; - c__[j + c_dim1] -= sum * t1; - c__[j + ((c_dim1) << (1))] -= sum * t2; - c__[j + c_dim1 * 3] -= sum * t3; -/* L260: */ - } - goto L410; -L270: - -/* Special code for 4 x 4 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - v4 = v[4]; - t4 = *tau * v4; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j + c_dim1] + v2 * c__[j + ((c_dim1) << (1))] + v3 - * c__[j + c_dim1 * 3] + v4 * c__[j + ((c_dim1) << (2))]; - c__[j + c_dim1] -= sum * t1; - c__[j + ((c_dim1) << (1))] -= sum * t2; - c__[j + c_dim1 * 3] -= sum * t3; - c__[j + ((c_dim1) << (2))] -= sum * t4; -/* L280: */ - } - goto L410; -L290: - -/* Special code for 5 x 5 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - v4 = v[4]; - t4 = *tau * v4; - v5 = v[5]; - t5 = *tau * v5; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j + c_dim1] + v2 * c__[j + ((c_dim1) << (1))] + v3 - * c__[j + c_dim1 * 3] + v4 * c__[j + ((c_dim1) << (2))] + - v5 * c__[j + c_dim1 * 5]; - c__[j + c_dim1] -= sum * t1; - c__[j + ((c_dim1) << (1))] -= sum * t2; - c__[j + c_dim1 * 3] -= sum * t3; - c__[j + ((c_dim1) << (2))] -= sum * t4; - c__[j + c_dim1 * 5] 
-= sum * t5; -/* L300: */ - } - goto L410; -L310: - -/* Special code for 6 x 6 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - v4 = v[4]; - t4 = *tau * v4; - v5 = v[5]; - t5 = *tau * v5; - v6 = v[6]; - t6 = *tau * v6; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j + c_dim1] + v2 * c__[j + ((c_dim1) << (1))] + v3 - * c__[j + c_dim1 * 3] + v4 * c__[j + ((c_dim1) << (2))] + - v5 * c__[j + c_dim1 * 5] + v6 * c__[j + c_dim1 * 6]; - c__[j + c_dim1] -= sum * t1; - c__[j + ((c_dim1) << (1))] -= sum * t2; - c__[j + c_dim1 * 3] -= sum * t3; - c__[j + ((c_dim1) << (2))] -= sum * t4; - c__[j + c_dim1 * 5] -= sum * t5; - c__[j + c_dim1 * 6] -= sum * t6; -/* L320: */ - } - goto L410; -L330: - -/* Special code for 7 x 7 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - v4 = v[4]; - t4 = *tau * v4; - v5 = v[5]; - t5 = *tau * v5; - v6 = v[6]; - t6 = *tau * v6; - v7 = v[7]; - t7 = *tau * v7; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j + c_dim1] + v2 * c__[j + ((c_dim1) << (1))] + v3 - * c__[j + c_dim1 * 3] + v4 * c__[j + ((c_dim1) << (2))] + - v5 * c__[j + c_dim1 * 5] + v6 * c__[j + c_dim1 * 6] + v7 * - c__[j + c_dim1 * 7]; - c__[j + c_dim1] -= sum * t1; - c__[j + ((c_dim1) << (1))] -= sum * t2; - c__[j + c_dim1 * 3] -= sum * t3; - c__[j + ((c_dim1) << (2))] -= sum * t4; - c__[j + c_dim1 * 5] -= sum * t5; - c__[j + c_dim1 * 6] -= sum * t6; - c__[j + c_dim1 * 7] -= sum * t7; -/* L340: */ - } - goto L410; -L350: - -/* Special code for 8 x 8 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - v4 = v[4]; - t4 = *tau * v4; - v5 = v[5]; - t5 = *tau * v5; - v6 = v[6]; - t6 = *tau * v6; - v7 = v[7]; - t7 = *tau * v7; - v8 = v[8]; - t8 = *tau * v8; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j + c_dim1] + v2 * c__[j + ((c_dim1) << (1))] + v3 - * c__[j + 
c_dim1 * 3] + v4 * c__[j + ((c_dim1) << (2))] + - v5 * c__[j + c_dim1 * 5] + v6 * c__[j + c_dim1 * 6] + v7 * - c__[j + c_dim1 * 7] + v8 * c__[j + ((c_dim1) << (3))]; - c__[j + c_dim1] -= sum * t1; - c__[j + ((c_dim1) << (1))] -= sum * t2; - c__[j + c_dim1 * 3] -= sum * t3; - c__[j + ((c_dim1) << (2))] -= sum * t4; - c__[j + c_dim1 * 5] -= sum * t5; - c__[j + c_dim1 * 6] -= sum * t6; - c__[j + c_dim1 * 7] -= sum * t7; - c__[j + ((c_dim1) << (3))] -= sum * t8; -/* L360: */ - } - goto L410; -L370: - -/* Special code for 9 x 9 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - v4 = v[4]; - t4 = *tau * v4; - v5 = v[5]; - t5 = *tau * v5; - v6 = v[6]; - t6 = *tau * v6; - v7 = v[7]; - t7 = *tau * v7; - v8 = v[8]; - t8 = *tau * v8; - v9 = v[9]; - t9 = *tau * v9; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j + c_dim1] + v2 * c__[j + ((c_dim1) << (1))] + v3 - * c__[j + c_dim1 * 3] + v4 * c__[j + ((c_dim1) << (2))] + - v5 * c__[j + c_dim1 * 5] + v6 * c__[j + c_dim1 * 6] + v7 * - c__[j + c_dim1 * 7] + v8 * c__[j + ((c_dim1) << (3))] + - v9 * c__[j + c_dim1 * 9]; - c__[j + c_dim1] -= sum * t1; - c__[j + ((c_dim1) << (1))] -= sum * t2; - c__[j + c_dim1 * 3] -= sum * t3; - c__[j + ((c_dim1) << (2))] -= sum * t4; - c__[j + c_dim1 * 5] -= sum * t5; - c__[j + c_dim1 * 6] -= sum * t6; - c__[j + c_dim1 * 7] -= sum * t7; - c__[j + ((c_dim1) << (3))] -= sum * t8; - c__[j + c_dim1 * 9] -= sum * t9; -/* L380: */ - } - goto L410; -L390: - -/* Special code for 10 x 10 Householder */ - - v1 = v[1]; - t1 = *tau * v1; - v2 = v[2]; - t2 = *tau * v2; - v3 = v[3]; - t3 = *tau * v3; - v4 = v[4]; - t4 = *tau * v4; - v5 = v[5]; - t5 = *tau * v5; - v6 = v[6]; - t6 = *tau * v6; - v7 = v[7]; - t7 = *tau * v7; - v8 = v[8]; - t8 = *tau * v8; - v9 = v[9]; - t9 = *tau * v9; - v10 = v[10]; - t10 = *tau * v10; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - sum = v1 * c__[j + c_dim1] + v2 * c__[j + ((c_dim1) << (1))] + v3 - * c__[j 
+ c_dim1 * 3] + v4 * c__[j + ((c_dim1) << (2))] + - v5 * c__[j + c_dim1 * 5] + v6 * c__[j + c_dim1 * 6] + v7 * - c__[j + c_dim1 * 7] + v8 * c__[j + ((c_dim1) << (3))] + - v9 * c__[j + c_dim1 * 9] + v10 * c__[j + c_dim1 * 10]; - c__[j + c_dim1] -= sum * t1; - c__[j + ((c_dim1) << (1))] -= sum * t2; - c__[j + c_dim1 * 3] -= sum * t3; - c__[j + ((c_dim1) << (2))] -= sum * t4; - c__[j + c_dim1 * 5] -= sum * t5; - c__[j + c_dim1 * 6] -= sum * t6; - c__[j + c_dim1 * 7] -= sum * t7; - c__[j + ((c_dim1) << (3))] -= sum * t8; - c__[j + c_dim1 * 9] -= sum * t9; - c__[j + c_dim1 * 10] -= sum * t10; -/* L400: */ - } - goto L410; - } -L410: - return 0; - -/* End of DLARFX */ - -} /* dlarfx_ */ - -/* Subroutine */ int dlartg_(doublereal *f, doublereal *g, doublereal *cs, - doublereal *sn, doublereal *r__) -{ - /* Initialized data */ - - static logical first = TRUE_; - - /* System generated locals */ - integer i__1; - doublereal d__1, d__2; - - /* Builtin functions */ - double log(doublereal), pow_di(doublereal *, integer *), sqrt(doublereal); - - /* Local variables */ - static integer i__; - static doublereal f1, g1, eps, scale; - static integer count; - static doublereal safmn2, safmx2; - - static doublereal safmin; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - DLARTG generate a plane rotation so that - - [ CS SN ] . [ F ] = [ R ] where CS**2 + SN**2 = 1. - [ -SN CS ] [ G ] [ 0 ] - - This is a slower, more accurate version of the BLAS1 routine DROTG, - with the following other differences: - F and G are unchanged on return. - If G=0, then CS=1 and SN=0. - If F=0 and (G .ne. 0), then CS=0 and SN=1 without doing any - floating point operations (saves work in DBDSQR when - there are zeros on the diagonal). - - If F exceeds G in magnitude, CS will be positive. 
- - Arguments - ========= - - F (input) DOUBLE PRECISION - The first component of vector to be rotated. - - G (input) DOUBLE PRECISION - The second component of vector to be rotated. - - CS (output) DOUBLE PRECISION - The cosine of the rotation. - - SN (output) DOUBLE PRECISION - The sine of the rotation. - - R (output) DOUBLE PRECISION - The nonzero component of the rotated vector. - - ===================================================================== -*/ - - - if (first) { - first = FALSE_; - safmin = SAFEMINIMUM; - eps = EPSILON; - d__1 = BASE; - i__1 = (integer) (log(safmin / eps) / log(BASE) / - 2.); - safmn2 = pow_di(&d__1, &i__1); - safmx2 = 1. / safmn2; - } - if (*g == 0.) { - *cs = 1.; - *sn = 0.; - *r__ = *f; - } else if (*f == 0.) { - *cs = 0.; - *sn = 1.; - *r__ = *g; - } else { - f1 = *f; - g1 = *g; -/* Computing MAX */ - d__1 = abs(f1), d__2 = abs(g1); - scale = max(d__1,d__2); - if (scale >= safmx2) { - count = 0; -L10: - ++count; - f1 *= safmn2; - g1 *= safmn2; -/* Computing MAX */ - d__1 = abs(f1), d__2 = abs(g1); - scale = max(d__1,d__2); - if (scale >= safmx2) { - goto L10; - } -/* Computing 2nd power */ - d__1 = f1; -/* Computing 2nd power */ - d__2 = g1; - *r__ = sqrt(d__1 * d__1 + d__2 * d__2); - *cs = f1 / *r__; - *sn = g1 / *r__; - i__1 = count; - for (i__ = 1; i__ <= i__1; ++i__) { - *r__ *= safmx2; -/* L20: */ - } - } else if (scale <= safmn2) { - count = 0; -L30: - ++count; - f1 *= safmx2; - g1 *= safmx2; -/* Computing MAX */ - d__1 = abs(f1), d__2 = abs(g1); - scale = max(d__1,d__2); - if (scale <= safmn2) { - goto L30; - } -/* Computing 2nd power */ - d__1 = f1; -/* Computing 2nd power */ - d__2 = g1; - *r__ = sqrt(d__1 * d__1 + d__2 * d__2); - *cs = f1 / *r__; - *sn = g1 / *r__; - i__1 = count; - for (i__ = 1; i__ <= i__1; ++i__) { - *r__ *= safmn2; -/* L40: */ - } - } else { -/* Computing 2nd power */ - d__1 = f1; -/* Computing 2nd power */ - d__2 = g1; - *r__ = sqrt(d__1 * d__1 + d__2 * d__2); - *cs = f1 / *r__; - *sn = g1 / *r__; 
- } - if ((abs(*f) > abs(*g) && *cs < 0.)) { - *cs = -(*cs); - *sn = -(*sn); - *r__ = -(*r__); - } - } - return 0; - -/* End of DLARTG */ - -} /* dlartg_ */ - -/* Subroutine */ int dlas2_(doublereal *f, doublereal *g, doublereal *h__, - doublereal *ssmin, doublereal *ssmax) -{ - /* System generated locals */ - doublereal d__1, d__2; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal c__, fa, ga, ha, as, at, au, fhmn, fhmx; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - DLAS2 computes the singular values of the 2-by-2 matrix - [ F G ] - [ 0 H ]. - On return, SSMIN is the smaller singular value and SSMAX is the - larger singular value. - - Arguments - ========= - - F (input) DOUBLE PRECISION - The (1,1) element of the 2-by-2 matrix. - - G (input) DOUBLE PRECISION - The (1,2) element of the 2-by-2 matrix. - - H (input) DOUBLE PRECISION - The (2,2) element of the 2-by-2 matrix. - - SSMIN (output) DOUBLE PRECISION - The smaller singular value. - - SSMAX (output) DOUBLE PRECISION - The larger singular value. - - Further Details - =============== - - Barring over/underflow, all output quantities are correct to within - a few units in the last place (ulps), even in the absence of a guard - digit in addition/subtraction. - - In IEEE arithmetic, the code works correctly if one matrix element is - infinite. - - Overflow will not occur unless the largest singular value itself - overflows, or is within a few ulps of overflow. (On machines with - partial overflow, like the Cray, overflow may occur if the largest - singular value is within a factor of 2 of overflow.) - - Underflow is harmless if underflow is gradual. Otherwise, results - may correspond to a matrix modified by perturbations of size near - the underflow threshold. 
- - ==================================================================== -*/ - - - fa = abs(*f); - ga = abs(*g); - ha = abs(*h__); - fhmn = min(fa,ha); - fhmx = max(fa,ha); - if (fhmn == 0.) { - *ssmin = 0.; - if (fhmx == 0.) { - *ssmax = ga; - } else { -/* Computing 2nd power */ - d__1 = min(fhmx,ga) / max(fhmx,ga); - *ssmax = max(fhmx,ga) * sqrt(d__1 * d__1 + 1.); - } - } else { - if (ga < fhmx) { - as = fhmn / fhmx + 1.; - at = (fhmx - fhmn) / fhmx; -/* Computing 2nd power */ - d__1 = ga / fhmx; - au = d__1 * d__1; - c__ = 2. / (sqrt(as * as + au) + sqrt(at * at + au)); - *ssmin = fhmn * c__; - *ssmax = fhmx / c__; - } else { - au = fhmx / ga; - if (au == 0.) { - -/* - Avoid possible harmful underflow if exponent range - asymmetric (true SSMIN may not underflow even if - AU underflows) -*/ - - *ssmin = fhmn * fhmx / ga; - *ssmax = ga; - } else { - as = fhmn / fhmx + 1.; - at = (fhmx - fhmn) / fhmx; -/* Computing 2nd power */ - d__1 = as * au; -/* Computing 2nd power */ - d__2 = at * au; - c__ = 1. / (sqrt(d__1 * d__1 + 1.) + sqrt(d__2 * d__2 + 1.)); - *ssmin = fhmn * c__ * au; - *ssmin += *ssmin; - *ssmax = ga / (c__ + c__); - } - } - } - return 0; - -/* End of DLAS2 */ - -} /* dlas2_ */ - -/* Subroutine */ int dlascl_(char *type__, integer *kl, integer *ku, - doublereal *cfrom, doublereal *cto, integer *m, integer *n, - doublereal *a, integer *lda, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4, i__5; - - /* Local variables */ - static integer i__, j, k1, k2, k3, k4; - static doublereal mul, cto1; - static logical done; - static doublereal ctoc; - extern logical lsame_(char *, char *); - static integer itype; - static doublereal cfrom1; - - static doublereal cfromc; - extern /* Subroutine */ int xerbla_(char *, integer *); - static doublereal bignum, smlnum; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - DLASCL multiplies the M by N real matrix A by the real scalar - CTO/CFROM. This is done without over/underflow as long as the final - result CTO*A(I,J)/CFROM does not over/underflow. TYPE specifies that - A may be full, upper triangular, lower triangular, upper Hessenberg, - or banded. - - Arguments - ========= - - TYPE (input) CHARACTER*1 - TYPE indices the storage type of the input matrix. - = 'G': A is a full matrix. - = 'L': A is a lower triangular matrix. - = 'U': A is an upper triangular matrix. - = 'H': A is an upper Hessenberg matrix. - = 'B': A is a symmetric band matrix with lower bandwidth KL - and upper bandwidth KU and with the only the lower - half stored. - = 'Q': A is a symmetric band matrix with lower bandwidth KL - and upper bandwidth KU and with the only the upper - half stored. - = 'Z': A is a band matrix with lower bandwidth KL and upper - bandwidth KU. - - KL (input) INTEGER - The lower bandwidth of A. Referenced only if TYPE = 'B', - 'Q' or 'Z'. - - KU (input) INTEGER - The upper bandwidth of A. Referenced only if TYPE = 'B', - 'Q' or 'Z'. - - CFROM (input) DOUBLE PRECISION - CTO (input) DOUBLE PRECISION - The matrix A is multiplied by CTO/CFROM. A(I,J) is computed - without over/underflow if the final result CTO*A(I,J)/CFROM - can be represented without over/underflow. CFROM must be - nonzero. - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,M) - The matrix to be multiplied by CTO/CFROM. See TYPE for the - storage type. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - INFO (output) INTEGER - 0 - successful exit - <0 - if INFO = -i, the i-th argument had an illegal value. 
- - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - - /* Function Body */ - *info = 0; - - if (lsame_(type__, "G")) { - itype = 0; - } else if (lsame_(type__, "L")) { - itype = 1; - } else if (lsame_(type__, "U")) { - itype = 2; - } else if (lsame_(type__, "H")) { - itype = 3; - } else if (lsame_(type__, "B")) { - itype = 4; - } else if (lsame_(type__, "Q")) { - itype = 5; - } else if (lsame_(type__, "Z")) { - itype = 6; - } else { - itype = -1; - } - - if (itype == -1) { - *info = -1; - } else if (*cfrom == 0.) { - *info = -4; - } else if (*m < 0) { - *info = -6; - } else if (*n < 0 || (itype == 4 && *n != *m) || (itype == 5 && *n != *m)) - { - *info = -7; - } else if ((itype <= 3 && *lda < max(1,*m))) { - *info = -9; - } else if (itype >= 4) { -/* Computing MAX */ - i__1 = *m - 1; - if (*kl < 0 || *kl > max(i__1,0)) { - *info = -2; - } else /* if(complicated condition) */ { -/* Computing MAX */ - i__1 = *n - 1; - if (*ku < 0 || *ku > max(i__1,0) || ((itype == 4 || itype == 5) && - *kl != *ku)) { - *info = -3; - } else if ((itype == 4 && *lda < *kl + 1) || (itype == 5 && *lda < - *ku + 1) || (itype == 6 && *lda < ((*kl) << (1)) + *ku + - 1)) { - *info = -9; - } - } - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLASCL", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0 || *m == 0) { - return 0; - } - -/* Get machine parameters */ - - smlnum = SAFEMINIMUM; - bignum = 1. 
/ smlnum; - - cfromc = *cfrom; - ctoc = *cto; - -L10: - cfrom1 = cfromc * smlnum; - cto1 = ctoc / bignum; - if ((abs(cfrom1) > abs(ctoc) && ctoc != 0.)) { - mul = smlnum; - done = FALSE_; - cfromc = cfrom1; - } else if (abs(cto1) > abs(cfromc)) { - mul = bignum; - done = FALSE_; - ctoc = cto1; - } else { - mul = ctoc / cfromc; - done = TRUE_; - } - - if (itype == 0) { - -/* Full matrix */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] *= mul; -/* L20: */ - } -/* L30: */ - } - - } else if (itype == 1) { - -/* Lower triangular matrix */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = j; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] *= mul; -/* L40: */ - } -/* L50: */ - } - - } else if (itype == 2) { - -/* Upper triangular matrix */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = min(j,*m); - for (i__ = 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] *= mul; -/* L60: */ - } -/* L70: */ - } - - } else if (itype == 3) { - -/* Upper Hessenberg matrix */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MIN */ - i__3 = j + 1; - i__2 = min(i__3,*m); - for (i__ = 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] *= mul; -/* L80: */ - } -/* L90: */ - } - - } else if (itype == 4) { - -/* Lower half of a symmetric band matrix */ - - k3 = *kl + 1; - k4 = *n + 1; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MIN */ - i__3 = k3, i__4 = k4 - j; - i__2 = min(i__3,i__4); - for (i__ = 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] *= mul; -/* L100: */ - } -/* L110: */ - } - - } else if (itype == 5) { - -/* Upper half of a symmetric band matrix */ - - k1 = *ku + 2; - k3 = *ku + 1; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MAX */ - i__2 = k1 - j; - i__3 = k3; - for (i__ = max(i__2,1); i__ <= i__3; ++i__) { - a[i__ + j * a_dim1] *= mul; -/* L120: */ - } -/* L130: */ - } - - } else if (itype == 6) { - -/* Band matrix */ - - k1 = *kl + *ku + 2; - k2 
= *kl + 1; - k3 = ((*kl) << (1)) + *ku + 1; - k4 = *kl + *ku + 1 + *m; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MAX */ - i__3 = k1 - j; -/* Computing MIN */ - i__4 = k3, i__5 = k4 - j; - i__2 = min(i__4,i__5); - for (i__ = max(i__3,k2); i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] *= mul; -/* L140: */ - } -/* L150: */ - } - - } - - if (! done) { - goto L10; - } - - return 0; - -/* End of DLASCL */ - -} /* dlascl_ */ - -/* Subroutine */ int dlasd0_(integer *n, integer *sqre, doublereal *d__, - doublereal *e, doublereal *u, integer *ldu, doublereal *vt, integer * - ldvt, integer *smlsiz, integer *iwork, doublereal *work, integer * - info) -{ - /* System generated locals */ - integer u_dim1, u_offset, vt_dim1, vt_offset, i__1, i__2; - - /* Builtin functions */ - integer pow_ii(integer *, integer *); - - /* Local variables */ - static integer i__, j, m, i1, ic, lf, nd, ll, nl, nr, im1, ncc, nlf, nrf, - iwk, lvl, ndb1, nlp1, nrp1; - static doublereal beta; - static integer idxq, nlvl; - static doublereal alpha; - static integer inode, ndiml, idxqc, ndimr, itemp, sqrei; - extern /* Subroutine */ int dlasd1_(integer *, integer *, integer *, - doublereal *, doublereal *, doublereal *, doublereal *, integer *, - doublereal *, integer *, integer *, integer *, doublereal *, - integer *), dlasdq_(char *, integer *, integer *, integer *, - integer *, integer *, doublereal *, doublereal *, doublereal *, - integer *, doublereal *, integer *, doublereal *, integer *, - doublereal *, integer *), dlasdt_(integer *, integer *, - integer *, integer *, integer *, integer *, integer *), xerbla_( - char *, integer *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - Using a divide and conquer approach, DLASD0 computes the singular - value decomposition (SVD) of a real upper bidiagonal N-by-M - matrix B with diagonal D and offdiagonal E, where M = N + SQRE. - The algorithm computes orthogonal matrices U and VT such that - B = U * S * VT. The singular values S are overwritten on D. - - A related subroutine, DLASDA, computes only the singular values, - and optionally, the singular vectors in compact form. - - Arguments - ========= - - N (input) INTEGER - On entry, the row dimension of the upper bidiagonal matrix. - This is also the dimension of the main diagonal array D. - - SQRE (input) INTEGER - Specifies the column dimension of the bidiagonal matrix. - = 0: The bidiagonal matrix has column dimension M = N; - = 1: The bidiagonal matrix has column dimension M = N+1; - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry D contains the main diagonal of the bidiagonal - matrix. - On exit D, if INFO = 0, contains its singular values. - - E (input) DOUBLE PRECISION array, dimension (M-1) - Contains the subdiagonal entries of the bidiagonal matrix. - On exit, E has been destroyed. - - U (output) DOUBLE PRECISION array, dimension at least (LDQ, N) - On exit, U contains the left singular vectors. - - LDU (input) INTEGER - On entry, leading dimension of U. - - VT (output) DOUBLE PRECISION array, dimension at least (LDVT, M) - On exit, VT' contains the right singular vectors. - - LDVT (input) INTEGER - On entry, leading dimension of VT. - - SMLSIZ (input) INTEGER - On entry, maximum size of the subproblems at the - bottom of the computation tree. - - IWORK INTEGER work array. - Dimension must be at least (8 * N) - - WORK DOUBLE PRECISION work array. - Dimension must be at least (3 * M**2 + 2 * M) - - INFO (output) INTEGER - = 0: successful exit. 
- < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: if INFO = 1, an singular value did not converge - - Further Details - =============== - - Based on contributions by - Ming Gu and Huan Ren, Computer Science Division, University of - California at Berkeley, USA - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --d__; - --e; - u_dim1 = *ldu; - u_offset = 1 + u_dim1 * 1; - u -= u_offset; - vt_dim1 = *ldvt; - vt_offset = 1 + vt_dim1 * 1; - vt -= vt_offset; - --iwork; - --work; - - /* Function Body */ - *info = 0; - - if (*n < 0) { - *info = -1; - } else if (*sqre < 0 || *sqre > 1) { - *info = -2; - } - - m = *n + *sqre; - - if (*ldu < *n) { - *info = -6; - } else if (*ldvt < m) { - *info = -8; - } else if (*smlsiz < 3) { - *info = -9; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLASD0", &i__1); - return 0; - } - -/* If the input matrix is too small, call DLASDQ to find the SVD. */ - - if (*n <= *smlsiz) { - dlasdq_("U", sqre, n, &m, n, &c__0, &d__[1], &e[1], &vt[vt_offset], - ldvt, &u[u_offset], ldu, &u[u_offset], ldu, &work[1], info); - return 0; - } - -/* Set up the computation tree. */ - - inode = 1; - ndiml = inode + *n; - ndimr = ndiml + *n; - idxq = ndimr + *n; - iwk = idxq + *n; - dlasdt_(n, &nlvl, &nd, &iwork[inode], &iwork[ndiml], &iwork[ndimr], - smlsiz); - -/* - For the nodes on bottom level of the tree, solve - their subproblems by DLASDQ. 
-*/ - - ndb1 = (nd + 1) / 2; - ncc = 0; - i__1 = nd; - for (i__ = ndb1; i__ <= i__1; ++i__) { - -/* - IC : center row of each node - NL : number of rows of left subproblem - NR : number of rows of right subproblem - NLF: starting row of the left subproblem - NRF: starting row of the right subproblem -*/ - - i1 = i__ - 1; - ic = iwork[inode + i1]; - nl = iwork[ndiml + i1]; - nlp1 = nl + 1; - nr = iwork[ndimr + i1]; - nrp1 = nr + 1; - nlf = ic - nl; - nrf = ic + 1; - sqrei = 1; - dlasdq_("U", &sqrei, &nl, &nlp1, &nl, &ncc, &d__[nlf], &e[nlf], &vt[ - nlf + nlf * vt_dim1], ldvt, &u[nlf + nlf * u_dim1], ldu, &u[ - nlf + nlf * u_dim1], ldu, &work[1], info); - if (*info != 0) { - return 0; - } - itemp = idxq + nlf - 2; - i__2 = nl; - for (j = 1; j <= i__2; ++j) { - iwork[itemp + j] = j; -/* L10: */ - } - if (i__ == nd) { - sqrei = *sqre; - } else { - sqrei = 1; - } - nrp1 = nr + sqrei; - dlasdq_("U", &sqrei, &nr, &nrp1, &nr, &ncc, &d__[nrf], &e[nrf], &vt[ - nrf + nrf * vt_dim1], ldvt, &u[nrf + nrf * u_dim1], ldu, &u[ - nrf + nrf * u_dim1], ldu, &work[1], info); - if (*info != 0) { - return 0; - } - itemp = idxq + ic; - i__2 = nr; - for (j = 1; j <= i__2; ++j) { - iwork[itemp + j - 1] = j; -/* L20: */ - } -/* L30: */ - } - -/* Now conquer each subproblem bottom-up. */ - - for (lvl = nlvl; lvl >= 1; --lvl) { - -/* - Find the first node LF and last node LL on the - current level LVL. 
-*/ - - if (lvl == 1) { - lf = 1; - ll = 1; - } else { - i__1 = lvl - 1; - lf = pow_ii(&c__2, &i__1); - ll = ((lf) << (1)) - 1; - } - i__1 = ll; - for (i__ = lf; i__ <= i__1; ++i__) { - im1 = i__ - 1; - ic = iwork[inode + im1]; - nl = iwork[ndiml + im1]; - nr = iwork[ndimr + im1]; - nlf = ic - nl; - if ((*sqre == 0 && i__ == ll)) { - sqrei = *sqre; - } else { - sqrei = 1; - } - idxqc = idxq + nlf - 1; - alpha = d__[ic]; - beta = e[ic]; - dlasd1_(&nl, &nr, &sqrei, &d__[nlf], &alpha, &beta, &u[nlf + nlf * - u_dim1], ldu, &vt[nlf + nlf * vt_dim1], ldvt, &iwork[ - idxqc], &iwork[iwk], &work[1], info); - if (*info != 0) { - return 0; - } -/* L40: */ - } -/* L50: */ - } - - return 0; - -/* End of DLASD0 */ - -} /* dlasd0_ */ - -/* Subroutine */ int dlasd1_(integer *nl, integer *nr, integer *sqre, - doublereal *d__, doublereal *alpha, doublereal *beta, doublereal *u, - integer *ldu, doublereal *vt, integer *ldvt, integer *idxq, integer * - iwork, doublereal *work, integer *info) -{ - /* System generated locals */ - integer u_dim1, u_offset, vt_dim1, vt_offset, i__1; - doublereal d__1, d__2; - - /* Local variables */ - static integer i__, k, m, n, n1, n2, iq, iz, iu2, ldq, idx, ldu2, ivt2, - idxc, idxp, ldvt2; - extern /* Subroutine */ int dlasd2_(integer *, integer *, integer *, - integer *, doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - doublereal *, integer *, doublereal *, integer *, integer *, - integer *, integer *, integer *, integer *, integer *), dlasd3_( - integer *, integer *, integer *, integer *, doublereal *, - doublereal *, integer *, doublereal *, doublereal *, integer *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - integer *, integer *, integer *, doublereal *, integer *), - dlascl_(char *, integer *, integer *, doublereal *, doublereal *, - integer *, integer *, doublereal *, integer *, integer *), - dlamrg_(integer *, integer *, doublereal *, integer *, 
integer *, - integer *); - static integer isigma; - extern /* Subroutine */ int xerbla_(char *, integer *); - static doublereal orgnrm; - static integer coltyp; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DLASD1 computes the SVD of an upper bidiagonal N-by-M matrix B, - where N = NL + NR + 1 and M = N + SQRE. DLASD1 is called from DLASD0. - - A related subroutine DLASD7 handles the case in which the singular - values (and the singular vectors in factored form) are desired. - - DLASD1 computes the SVD as follows: - - ( D1(in) 0 0 0 ) - B = U(in) * ( Z1' a Z2' b ) * VT(in) - ( 0 0 D2(in) 0 ) - - = U(out) * ( D(out) 0) * VT(out) - - where Z' = (Z1' a Z2' b) = u' VT', and u is a vector of dimension M - with ALPHA and BETA in the NL+1 and NL+2 th entries and zeros - elsewhere; and the entry b is empty if SQRE = 0. - - The left singular vectors of the original matrix are stored in U, and - the transpose of the right singular vectors are stored in VT, and the - singular values are in D. The algorithm consists of three stages: - - The first stage consists of deflating the size of the problem - when there are multiple singular values or when there are zeros in - the Z vector. For each such occurence the dimension of the - secular equation problem is reduced by one. This stage is - performed by the routine DLASD2. - - The second stage consists of calculating the updated - singular values. This is done by finding the square roots of the - roots of the secular equation via the routine DLASD4 (as called - by DLASD3). This routine also calculates the singular vectors of - the current problem. - - The final stage consists of computing the updated singular vectors - directly using the updated singular values. 
The singular vectors - for the current problem are multiplied with the singular vectors - from the overall problem. - - Arguments - ========= - - NL (input) INTEGER - The row dimension of the upper block. NL >= 1. - - NR (input) INTEGER - The row dimension of the lower block. NR >= 1. - - SQRE (input) INTEGER - = 0: the lower block is an NR-by-NR square matrix. - = 1: the lower block is an NR-by-(NR+1) rectangular matrix. - - The bidiagonal matrix has row dimension N = NL + NR + 1, - and column dimension M = N + SQRE. - - D (input/output) DOUBLE PRECISION array, - dimension (N = NL+NR+1). - On entry D(1:NL,1:NL) contains the singular values of the - upper block; and D(NL+2:N) contains the singular values of - the lower block. On exit D(1:N) contains the singular values - of the modified matrix. - - ALPHA (input) DOUBLE PRECISION - Contains the diagonal element associated with the added row. - - BETA (input) DOUBLE PRECISION - Contains the off-diagonal element associated with the added - row. - - U (input/output) DOUBLE PRECISION array, dimension(LDU,N) - On entry U(1:NL, 1:NL) contains the left singular vectors of - the upper block; U(NL+2:N, NL+2:N) contains the left singular - vectors of the lower block. On exit U contains the left - singular vectors of the bidiagonal matrix. - - LDU (input) INTEGER - The leading dimension of the array U. LDU >= max( 1, N ). - - VT (input/output) DOUBLE PRECISION array, dimension(LDVT,M) - where M = N + SQRE. - On entry VT(1:NL+1, 1:NL+1)' contains the right singular - vectors of the upper block; VT(NL+2:M, NL+2:M)' contains - the right singular vectors of the lower block. On exit - VT' contains the right singular vectors of the - bidiagonal matrix. - - LDVT (input) INTEGER - The leading dimension of the array VT. LDVT >= max( 1, M ). - - IDXQ (output) INTEGER array, dimension(N) - This contains the permutation which will reintegrate the - subproblem just solved back into sorted order, i.e. 
- D( IDXQ( I = 1, N ) ) will be in ascending order. - - IWORK (workspace) INTEGER array, dimension( 4 * N ) - - WORK (workspace) DOUBLE PRECISION array, dimension( 3*M**2 + 2*M ) - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: if INFO = 1, an singular value did not converge - - Further Details - =============== - - Based on contributions by - Ming Gu and Huan Ren, Computer Science Division, University of - California at Berkeley, USA - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --d__; - u_dim1 = *ldu; - u_offset = 1 + u_dim1 * 1; - u -= u_offset; - vt_dim1 = *ldvt; - vt_offset = 1 + vt_dim1 * 1; - vt -= vt_offset; - --idxq; - --iwork; - --work; - - /* Function Body */ - *info = 0; - - if (*nl < 1) { - *info = -1; - } else if (*nr < 1) { - *info = -2; - } else if (*sqre < 0 || *sqre > 1) { - *info = -3; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLASD1", &i__1); - return 0; - } - - n = *nl + *nr + 1; - m = n + *sqre; - -/* - The following values are for bookkeeping purposes only. They are - integer pointers which indicate the portion of the workspace - used by a particular array in DLASD2 and DLASD3. -*/ - - ldu2 = n; - ldvt2 = m; - - iz = 1; - isigma = iz + m; - iu2 = isigma + n; - ivt2 = iu2 + ldu2 * n; - iq = ivt2 + ldvt2 * m; - - idx = 1; - idxc = idx + n; - coltyp = idxc + n; - idxp = coltyp + n; - -/* - Scale. - - Computing MAX -*/ - d__1 = abs(*alpha), d__2 = abs(*beta); - orgnrm = max(d__1,d__2); - d__[*nl + 1] = 0.; - i__1 = n; - for (i__ = 1; i__ <= i__1; ++i__) { - if ((d__1 = d__[i__], abs(d__1)) > orgnrm) { - orgnrm = (d__1 = d__[i__], abs(d__1)); - } -/* L10: */ - } - dlascl_("G", &c__0, &c__0, &orgnrm, &c_b15, &n, &c__1, &d__[1], &n, info); - *alpha /= orgnrm; - *beta /= orgnrm; - -/* Deflate singular values. 
*/ - - dlasd2_(nl, nr, sqre, &k, &d__[1], &work[iz], alpha, beta, &u[u_offset], - ldu, &vt[vt_offset], ldvt, &work[isigma], &work[iu2], &ldu2, & - work[ivt2], &ldvt2, &iwork[idxp], &iwork[idx], &iwork[idxc], & - idxq[1], &iwork[coltyp], info); - -/* Solve Secular Equation and update singular vectors. */ - - ldq = k; - dlasd3_(nl, nr, sqre, &k, &d__[1], &work[iq], &ldq, &work[isigma], &u[ - u_offset], ldu, &work[iu2], &ldu2, &vt[vt_offset], ldvt, &work[ - ivt2], &ldvt2, &iwork[idxc], &iwork[coltyp], &work[iz], info); - if (*info != 0) { - return 0; - } - -/* Unscale. */ - - dlascl_("G", &c__0, &c__0, &c_b15, &orgnrm, &n, &c__1, &d__[1], &n, info); - -/* Prepare the IDXQ sorting permutation. */ - - n1 = k; - n2 = n - k; - dlamrg_(&n1, &n2, &d__[1], &c__1, &c_n1, &idxq[1]); - - return 0; - -/* End of DLASD1 */ - -} /* dlasd1_ */ - -/* Subroutine */ int dlasd2_(integer *nl, integer *nr, integer *sqre, integer - *k, doublereal *d__, doublereal *z__, doublereal *alpha, doublereal * - beta, doublereal *u, integer *ldu, doublereal *vt, integer *ldvt, - doublereal *dsigma, doublereal *u2, integer *ldu2, doublereal *vt2, - integer *ldvt2, integer *idxp, integer *idx, integer *idxc, integer * - idxq, integer *coltyp, integer *info) -{ - /* System generated locals */ - integer u_dim1, u_offset, u2_dim1, u2_offset, vt_dim1, vt_offset, - vt2_dim1, vt2_offset, i__1; - doublereal d__1, d__2; - - /* Local variables */ - static doublereal c__; - static integer i__, j, m, n; - static doublereal s; - static integer k2; - static doublereal z1; - static integer ct, jp; - static doublereal eps, tau, tol; - static integer psm[4], nlp1, nlp2, idxi, idxj; - extern /* Subroutine */ int drot_(integer *, doublereal *, integer *, - doublereal *, integer *, doublereal *, doublereal *); - static integer ctot[4], idxjp; - extern /* Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *); - static integer jprev; - - extern /* Subroutine */ int dlamrg_(integer *, 
integer *, doublereal *, - integer *, integer *, integer *), dlacpy_(char *, integer *, - integer *, doublereal *, integer *, doublereal *, integer *), dlaset_(char *, integer *, integer *, doublereal *, - doublereal *, doublereal *, integer *), xerbla_(char *, - integer *); - static doublereal hlftol; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Oak Ridge National Lab, Argonne National Lab, - Courant Institute, NAG Ltd., and Rice University - October 31, 1999 - - - Purpose - ======= - - DLASD2 merges the two sets of singular values together into a single - sorted set. Then it tries to deflate the size of the problem. - There are two ways in which deflation can occur: when two or more - singular values are close together or if there is a tiny entry in the - Z vector. For each such occurrence the order of the related secular - equation problem is reduced by one. - - DLASD2 is called from DLASD1. - - Arguments - ========= - - NL (input) INTEGER - The row dimension of the upper block. NL >= 1. - - NR (input) INTEGER - The row dimension of the lower block. NR >= 1. - - SQRE (input) INTEGER - = 0: the lower block is an NR-by-NR square matrix. - = 1: the lower block is an NR-by-(NR+1) rectangular matrix. - - The bidiagonal matrix has N = NL + NR + 1 rows and - M = N + SQRE >= N columns. - - K (output) INTEGER - Contains the dimension of the non-deflated matrix, - This is the order of the related secular equation. 1 <= K <=N. - - D (input/output) DOUBLE PRECISION array, dimension(N) - On entry D contains the singular values of the two submatrices - to be combined. On exit D contains the trailing (N-K) updated - singular values (those which were deflated) sorted into - increasing order. - - ALPHA (input) DOUBLE PRECISION - Contains the diagonal element associated with the added row. - - BETA (input) DOUBLE PRECISION - Contains the off-diagonal element associated with the added - row. 
- - U (input/output) DOUBLE PRECISION array, dimension(LDU,N) - On entry U contains the left singular vectors of two - submatrices in the two square blocks with corners at (1,1), - (NL, NL), and (NL+2, NL+2), (N,N). - On exit U contains the trailing (N-K) updated left singular - vectors (those which were deflated) in its last N-K columns. - - LDU (input) INTEGER - The leading dimension of the array U. LDU >= N. - - Z (output) DOUBLE PRECISION array, dimension(N) - On exit Z contains the updating row vector in the secular - equation. - - DSIGMA (output) DOUBLE PRECISION array, dimension (N) - Contains a copy of the diagonal elements (K-1 singular values - and one zero) in the secular equation. - - U2 (output) DOUBLE PRECISION array, dimension(LDU2,N) - Contains a copy of the first K-1 left singular vectors which - will be used by DLASD3 in a matrix multiply (DGEMM) to solve - for the new left singular vectors. U2 is arranged into four - blocks. The first block contains a column with 1 at NL+1 and - zero everywhere else; the second block contains non-zero - entries only at and above NL; the third contains non-zero - entries only below NL+1; and the fourth is dense. - - LDU2 (input) INTEGER - The leading dimension of the array U2. LDU2 >= N. - - VT (input/output) DOUBLE PRECISION array, dimension(LDVT,M) - On entry VT' contains the right singular vectors of two - submatrices in the two square blocks with corners at (1,1), - (NL+1, NL+1), and (NL+2, NL+2), (M,M). - On exit VT' contains the trailing (N-K) updated right singular - vectors (those which were deflated) in its last N-K columns. - In case SQRE =1, the last row of VT spans the right null - space. - - LDVT (input) INTEGER - The leading dimension of the array VT. LDVT >= M. - - VT2 (output) DOUBLE PRECISION array, dimension(LDVT2,N) - VT2' contains a copy of the first K right singular vectors - which will be used by DLASD3 in a matrix multiply (DGEMM) to - solve for the new right singular vectors. 
VT2 is arranged into - three blocks. The first block contains a row that corresponds - to the special 0 diagonal element in SIGMA; the second block - contains non-zeros only at and before NL +1; the third block - contains non-zeros only at and after NL +2. - - LDVT2 (input) INTEGER - The leading dimension of the array VT2. LDVT2 >= M. - - IDXP (workspace) INTEGER array, dimension(N) - This will contain the permutation used to place deflated - values of D at the end of the array. On output IDXP(2:K) - points to the nondeflated D-values and IDXP(K+1:N) - points to the deflated singular values. - - IDX (workspace) INTEGER array, dimension(N) - This will contain the permutation used to sort the contents of - D into ascending order. - - IDXC (output) INTEGER array, dimension(N) - This will contain the permutation used to arrange the columns - of the deflated U matrix into three groups: the first group - contains non-zero entries only at and above NL, the second - contains non-zero entries only below NL+2, and the third is - dense. - - COLTYP (workspace/output) INTEGER array, dimension(N) - As workspace, this will contain a label which will indicate - which of the following types a column in the U2 matrix or a - row in the VT2 matrix is: - 1 : non-zero in the upper half only - 2 : non-zero in the lower half only - 3 : dense - 4 : deflated - - On exit, it is an array of dimension 4, with COLTYP(I) being - the dimension of the I-th type columns. - - IDXQ (input) INTEGER array, dimension(N) - This contains the permutation which separately sorts the two - sub-problems in D into ascending order. Note that entries in - the first half of this permutation must first be moved one - position backward; and entries in the second half - must first have NL+1 added to their values. - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. 
- - Further Details - =============== - - Based on contributions by - Ming Gu and Huan Ren, Computer Science Division, University of - California at Berkeley, USA - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --d__; - --z__; - u_dim1 = *ldu; - u_offset = 1 + u_dim1 * 1; - u -= u_offset; - vt_dim1 = *ldvt; - vt_offset = 1 + vt_dim1 * 1; - vt -= vt_offset; - --dsigma; - u2_dim1 = *ldu2; - u2_offset = 1 + u2_dim1 * 1; - u2 -= u2_offset; - vt2_dim1 = *ldvt2; - vt2_offset = 1 + vt2_dim1 * 1; - vt2 -= vt2_offset; - --idxp; - --idx; - --idxc; - --idxq; - --coltyp; - - /* Function Body */ - *info = 0; - - if (*nl < 1) { - *info = -1; - } else if (*nr < 1) { - *info = -2; - } else if ((*sqre != 1 && *sqre != 0)) { - *info = -3; - } - - n = *nl + *nr + 1; - m = n + *sqre; - - if (*ldu < n) { - *info = -10; - } else if (*ldvt < m) { - *info = -12; - } else if (*ldu2 < n) { - *info = -15; - } else if (*ldvt2 < m) { - *info = -17; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLASD2", &i__1); - return 0; - } - - nlp1 = *nl + 1; - nlp2 = *nl + 2; - -/* - Generate the first part of the vector Z; and move the singular - values in the first part of D one position backward. -*/ - - z1 = *alpha * vt[nlp1 + nlp1 * vt_dim1]; - z__[1] = z1; - for (i__ = *nl; i__ >= 1; --i__) { - z__[i__ + 1] = *alpha * vt[i__ + nlp1 * vt_dim1]; - d__[i__ + 1] = d__[i__]; - idxq[i__ + 1] = idxq[i__] + 1; -/* L10: */ - } - -/* Generate the second part of the vector Z. */ - - i__1 = m; - for (i__ = nlp2; i__ <= i__1; ++i__) { - z__[i__] = *beta * vt[i__ + nlp2 * vt_dim1]; -/* L20: */ - } - -/* Initialize some reference arrays. 
*/ - - i__1 = nlp1; - for (i__ = 2; i__ <= i__1; ++i__) { - coltyp[i__] = 1; -/* L30: */ - } - i__1 = n; - for (i__ = nlp2; i__ <= i__1; ++i__) { - coltyp[i__] = 2; -/* L40: */ - } - -/* Sort the singular values into increasing order */ - - i__1 = n; - for (i__ = nlp2; i__ <= i__1; ++i__) { - idxq[i__] += nlp1; -/* L50: */ - } - -/* - DSIGMA, IDXC, IDXC, and the first column of U2 - are used as storage space. -*/ - - i__1 = n; - for (i__ = 2; i__ <= i__1; ++i__) { - dsigma[i__] = d__[idxq[i__]]; - u2[i__ + u2_dim1] = z__[idxq[i__]]; - idxc[i__] = coltyp[idxq[i__]]; -/* L60: */ - } - - dlamrg_(nl, nr, &dsigma[2], &c__1, &c__1, &idx[2]); - - i__1 = n; - for (i__ = 2; i__ <= i__1; ++i__) { - idxi = idx[i__] + 1; - d__[i__] = dsigma[idxi]; - z__[i__] = u2[idxi + u2_dim1]; - coltyp[i__] = idxc[idxi]; -/* L70: */ - } - -/* Calculate the allowable deflation tolerance */ - - eps = EPSILON; -/* Computing MAX */ - d__1 = abs(*alpha), d__2 = abs(*beta); - tol = max(d__1,d__2); -/* Computing MAX */ - d__2 = (d__1 = d__[n], abs(d__1)); - tol = eps * 8. * max(d__2,tol); - -/* - There are 2 kinds of deflation -- first a value in the z-vector - is small, second two (or more) singular values are very close - together (their difference is small). - - If the value in the z-vector is small, we simply permute the - array so that the corresponding singular value is moved to the - end. - - If two values in the D-vector are close, we perform a two-sided - rotation designed to make one of the corresponding z-vector - entries zero, and then permute the array so that the deflated - singular value is moved to the end. - - If there are multiple singular values then the problem deflates. - Here the number of equal singular values are found. As each equal - singular value is found, an elementary reflector is computed to - rotate the corresponding singular subspace so that the - corresponding components of Z are zero in this new basis. 
-*/ - - *k = 1; - k2 = n + 1; - i__1 = n; - for (j = 2; j <= i__1; ++j) { - if ((d__1 = z__[j], abs(d__1)) <= tol) { - -/* Deflate due to small z component. */ - - --k2; - idxp[k2] = j; - coltyp[j] = 4; - if (j == n) { - goto L120; - } - } else { - jprev = j; - goto L90; - } -/* L80: */ - } -L90: - j = jprev; -L100: - ++j; - if (j > n) { - goto L110; - } - if ((d__1 = z__[j], abs(d__1)) <= tol) { - -/* Deflate due to small z component. */ - - --k2; - idxp[k2] = j; - coltyp[j] = 4; - } else { - -/* Check if singular values are close enough to allow deflation. */ - - if ((d__1 = d__[j] - d__[jprev], abs(d__1)) <= tol) { - -/* Deflation is possible. */ - - s = z__[jprev]; - c__ = z__[j]; - -/* - Find sqrt(a**2+b**2) without overflow or - destructive underflow. -*/ - - tau = dlapy2_(&c__, &s); - c__ /= tau; - s = -s / tau; - z__[j] = tau; - z__[jprev] = 0.; - -/* - Apply back the Givens rotation to the left and right - singular vector matrices. -*/ - - idxjp = idxq[idx[jprev] + 1]; - idxj = idxq[idx[j] + 1]; - if (idxjp <= nlp1) { - --idxjp; - } - if (idxj <= nlp1) { - --idxj; - } - drot_(&n, &u[idxjp * u_dim1 + 1], &c__1, &u[idxj * u_dim1 + 1], & - c__1, &c__, &s); - drot_(&m, &vt[idxjp + vt_dim1], ldvt, &vt[idxj + vt_dim1], ldvt, & - c__, &s); - if (coltyp[j] != coltyp[jprev]) { - coltyp[j] = 3; - } - coltyp[jprev] = 4; - --k2; - idxp[k2] = jprev; - jprev = j; - } else { - ++(*k); - u2[*k + u2_dim1] = z__[jprev]; - dsigma[*k] = d__[jprev]; - idxp[*k] = jprev; - jprev = j; - } - } - goto L100; -L110: - -/* Record the last singular value. */ - - ++(*k); - u2[*k + u2_dim1] = z__[jprev]; - dsigma[*k] = d__[jprev]; - idxp[*k] = jprev; - -L120: - -/* - Count up the total number of the various types of columns, then - form a permutation which positions the four column types into - four groups of uniform structure (although one or more of these - groups may be empty). 
-*/ - - for (j = 1; j <= 4; ++j) { - ctot[j - 1] = 0; -/* L130: */ - } - i__1 = n; - for (j = 2; j <= i__1; ++j) { - ct = coltyp[j]; - ++ctot[ct - 1]; -/* L140: */ - } - -/* PSM(*) = Position in SubMatrix (of types 1 through 4) */ - - psm[0] = 2; - psm[1] = ctot[0] + 2; - psm[2] = psm[1] + ctot[1]; - psm[3] = psm[2] + ctot[2]; - -/* - Fill out the IDXC array so that the permutation which it induces - will place all type-1 columns first, all type-2 columns next, - then all type-3's, and finally all type-4's, starting from the - second column. This applies similarly to the rows of VT. -*/ - - i__1 = n; - for (j = 2; j <= i__1; ++j) { - jp = idxp[j]; - ct = coltyp[jp]; - idxc[psm[ct - 1]] = j; - ++psm[ct - 1]; -/* L150: */ - } - -/* - Sort the singular values and corresponding singular vectors into - DSIGMA, U2, and VT2 respectively. The singular values/vectors - which were not deflated go into the first K slots of DSIGMA, U2, - and VT2 respectively, while those which were deflated go into the - last N - K slots, except that the first column/row will be treated - separately. -*/ - - i__1 = n; - for (j = 2; j <= i__1; ++j) { - jp = idxp[j]; - dsigma[j] = d__[jp]; - idxj = idxq[idx[idxp[idxc[j]]] + 1]; - if (idxj <= nlp1) { - --idxj; - } - dcopy_(&n, &u[idxj * u_dim1 + 1], &c__1, &u2[j * u2_dim1 + 1], &c__1); - dcopy_(&m, &vt[idxj + vt_dim1], ldvt, &vt2[j + vt2_dim1], ldvt2); -/* L160: */ - } - -/* Determine DSIGMA(1), DSIGMA(2) and Z(1) */ - - dsigma[1] = 0.; - hlftol = tol / 2.; - if (abs(dsigma[2]) <= hlftol) { - dsigma[2] = hlftol; - } - if (m > n) { - z__[1] = dlapy2_(&z1, &z__[m]); - if (z__[1] <= tol) { - c__ = 1.; - s = 0.; - z__[1] = tol; - } else { - c__ = z1 / z__[1]; - s = z__[m] / z__[1]; - } - } else { - if (abs(z1) <= tol) { - z__[1] = tol; - } else { - z__[1] = z1; - } - } - -/* Move the rest of the updating row to Z. 
*/ - - i__1 = *k - 1; - dcopy_(&i__1, &u2[u2_dim1 + 2], &c__1, &z__[2], &c__1); - -/* - Determine the first column of U2, the first row of VT2 and the - last row of VT. -*/ - - dlaset_("A", &n, &c__1, &c_b29, &c_b29, &u2[u2_offset], ldu2); - u2[nlp1 + u2_dim1] = 1.; - if (m > n) { - i__1 = nlp1; - for (i__ = 1; i__ <= i__1; ++i__) { - vt[m + i__ * vt_dim1] = -s * vt[nlp1 + i__ * vt_dim1]; - vt2[i__ * vt2_dim1 + 1] = c__ * vt[nlp1 + i__ * vt_dim1]; -/* L170: */ - } - i__1 = m; - for (i__ = nlp2; i__ <= i__1; ++i__) { - vt2[i__ * vt2_dim1 + 1] = s * vt[m + i__ * vt_dim1]; - vt[m + i__ * vt_dim1] = c__ * vt[m + i__ * vt_dim1]; -/* L180: */ - } - } else { - dcopy_(&m, &vt[nlp1 + vt_dim1], ldvt, &vt2[vt2_dim1 + 1], ldvt2); - } - if (m > n) { - dcopy_(&m, &vt[m + vt_dim1], ldvt, &vt2[m + vt2_dim1], ldvt2); - } - -/* - The deflated singular values and their corresponding vectors go - into the back of D, U, and V respectively. -*/ - - if (n > *k) { - i__1 = n - *k; - dcopy_(&i__1, &dsigma[*k + 1], &c__1, &d__[*k + 1], &c__1); - i__1 = n - *k; - dlacpy_("A", &n, &i__1, &u2[(*k + 1) * u2_dim1 + 1], ldu2, &u[(*k + 1) - * u_dim1 + 1], ldu); - i__1 = n - *k; - dlacpy_("A", &i__1, &m, &vt2[*k + 1 + vt2_dim1], ldvt2, &vt[*k + 1 + - vt_dim1], ldvt); - } - -/* Copy CTOT into COLTYP for referencing in DLASD3. 
*/ - - for (j = 1; j <= 4; ++j) { - coltyp[j] = ctot[j - 1]; -/* L190: */ - } - - return 0; - -/* End of DLASD2 */ - -} /* dlasd2_ */ - -/* Subroutine */ int dlasd3_(integer *nl, integer *nr, integer *sqre, integer - *k, doublereal *d__, doublereal *q, integer *ldq, doublereal *dsigma, - doublereal *u, integer *ldu, doublereal *u2, integer *ldu2, - doublereal *vt, integer *ldvt, doublereal *vt2, integer *ldvt2, - integer *idxc, integer *ctot, doublereal *z__, integer *info) -{ - /* System generated locals */ - integer q_dim1, q_offset, u_dim1, u_offset, u2_dim1, u2_offset, vt_dim1, - vt_offset, vt2_dim1, vt2_offset, i__1, i__2; - doublereal d__1, d__2; - - /* Builtin functions */ - double sqrt(doublereal), d_sign(doublereal *, doublereal *); - - /* Local variables */ - static integer i__, j, m, n, jc; - static doublereal rho; - static integer nlp1, nlp2, nrp1; - static doublereal temp; - extern doublereal dnrm2_(integer *, doublereal *, integer *); - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *); - static integer ctemp; - extern /* Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *); - static integer ktemp; - extern doublereal dlamc3_(doublereal *, doublereal *); - extern /* Subroutine */ int dlasd4_(integer *, integer *, doublereal *, - doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *, integer *), dlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, doublereal *, - integer *, integer *), dlacpy_(char *, integer *, integer - *, doublereal *, integer *, doublereal *, integer *), - xerbla_(char *, integer *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. 
of Tennessee, Oak Ridge National Lab, Argonne National Lab, - Courant Institute, NAG Ltd., and Rice University - October 31, 1999 - - - Purpose - ======= - - DLASD3 finds all the square roots of the roots of the secular - equation, as defined by the values in D and Z. It makes the - appropriate calls to DLASD4 and then updates the singular - vectors by matrix multiplication. - - This code makes very mild assumptions about floating point - arithmetic. It will work on machines with a guard digit in - add/subtract, or on those binary machines without guard digits - which subtract like the Cray XMP, Cray YMP, Cray C 90, or Cray 2. - It could conceivably fail on hexadecimal or decimal machines - without guard digits, but we know of none. - - DLASD3 is called from DLASD1. - - Arguments - ========= - - NL (input) INTEGER - The row dimension of the upper block. NL >= 1. - - NR (input) INTEGER - The row dimension of the lower block. NR >= 1. - - SQRE (input) INTEGER - = 0: the lower block is an NR-by-NR square matrix. - = 1: the lower block is an NR-by-(NR+1) rectangular matrix. - - The bidiagonal matrix has N = NL + NR + 1 rows and - M = N + SQRE >= N columns. - - K (input) INTEGER - The size of the secular equation, 1 <= K <= N. - - D (output) DOUBLE PRECISION array, dimension(K) - On exit the square roots of the roots of the secular equation, - in ascending order. - - Q (workspace) DOUBLE PRECISION array, - dimension at least (LDQ,K). - - LDQ (input) INTEGER - The leading dimension of the array Q. LDQ >= K. - - DSIGMA (input) DOUBLE PRECISION array, dimension(K) - The first K elements of this array contain the old roots - of the deflated updating problem. These are the poles - of the secular equation. - - U (input) DOUBLE PRECISION array, dimension (LDU, N) - The last N - K columns of this matrix contain the deflated - left singular vectors. - - LDU (input) INTEGER - The leading dimension of the array U. LDU >= N. 
- - U2 (input) DOUBLE PRECISION array, dimension (LDU2, N) - The first K columns of this matrix contain the non-deflated - left singular vectors for the split problem. - - LDU2 (input) INTEGER - The leading dimension of the array U2. LDU2 >= N. - - VT (input) DOUBLE PRECISION array, dimension (LDVT, M) - The last M - K columns of VT' contain the deflated - right singular vectors. - - LDVT (input) INTEGER - The leading dimension of the array VT. LDVT >= N. - - VT2 (input) DOUBLE PRECISION array, dimension (LDVT2, N) - The first K columns of VT2' contain the non-deflated - right singular vectors for the split problem. - - LDVT2 (input) INTEGER - The leading dimension of the array VT2. LDVT2 >= N. - - IDXC (input) INTEGER array, dimension ( N ) - The permutation used to arrange the columns of U (and rows of - VT) into three groups: the first group contains non-zero - entries only at and above (or before) NL +1; the second - contains non-zero entries only at and below (or after) NL+2; - and the third is dense. The first column of U and the row of - VT are treated separately, however. - - The rows of the singular vectors found by DLASD4 - must be likewise permuted before the matrix multiplies can - take place. - - CTOT (input) INTEGER array, dimension ( 4 ) - A count of the total number of the various types of columns - in U (or rows in VT), as described in IDXC. The fourth column - type is any column which has been deflated. - - Z (input) DOUBLE PRECISION array, dimension (K) - The first K elements of this array contain the components - of the deflation-adjusted updating row vector. - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. 
- > 0: if INFO = 1, a singular value did not converge - - Further Details - =============== - - Based on contributions by - Ming Gu and Huan Ren, Computer Science Division, University of - California at Berkeley, USA - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --d__; - q_dim1 = *ldq; - q_offset = 1 + q_dim1 * 1; - q -= q_offset; - --dsigma; - u_dim1 = *ldu; - u_offset = 1 + u_dim1 * 1; - u -= u_offset; - u2_dim1 = *ldu2; - u2_offset = 1 + u2_dim1 * 1; - u2 -= u2_offset; - vt_dim1 = *ldvt; - vt_offset = 1 + vt_dim1 * 1; - vt -= vt_offset; - vt2_dim1 = *ldvt2; - vt2_offset = 1 + vt2_dim1 * 1; - vt2 -= vt2_offset; - --idxc; - --ctot; - --z__; - - /* Function Body */ - *info = 0; - - if (*nl < 1) { - *info = -1; - } else if (*nr < 1) { - *info = -2; - } else if ((*sqre != 1 && *sqre != 0)) { - *info = -3; - } - - n = *nl + *nr + 1; - m = n + *sqre; - nlp1 = *nl + 1; - nlp2 = *nl + 2; - - if (*k < 1 || *k > n) { - *info = -4; - } else if (*ldq < *k) { - *info = -7; - } else if (*ldu < n) { - *info = -10; - } else if (*ldu2 < n) { - *info = -12; - } else if (*ldvt < m) { - *info = -14; - } else if (*ldvt2 < m) { - *info = -16; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLASD3", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*k == 1) { - d__[1] = abs(z__[1]); - dcopy_(&m, &vt2[vt2_dim1 + 1], ldvt2, &vt[vt_dim1 + 1], ldvt); - if (z__[1] > 0.) { - dcopy_(&n, &u2[u2_dim1 + 1], &c__1, &u[u_dim1 + 1], &c__1); - } else { - i__1 = n; - for (i__ = 1; i__ <= i__1; ++i__) { - u[i__ + u_dim1] = -u2[i__ + u2_dim1]; -/* L10: */ - } - } - return 0; - } - -/* - Modify values DSIGMA(i) to make sure all DSIGMA(i)-DSIGMA(j) can - be computed with high relative accuracy (barring over/underflow). - This is a problem on machines without a guard digit in - add/subtract (Cray XMP, Cray YMP, Cray C 90 and Cray 2). 
- The following code replaces DSIGMA(I) by 2*DSIGMA(I)-DSIGMA(I), - which on any of these machines zeros out the bottommost - bit of DSIGMA(I) if it is 1; this makes the subsequent - subtractions DSIGMA(I)-DSIGMA(J) unproblematic when cancellation - occurs. On binary machines with a guard digit (almost all - machines) it does not change DSIGMA(I) at all. On hexadecimal - and decimal machines with a guard digit, it slightly - changes the bottommost bits of DSIGMA(I). It does not account - for hexadecimal or decimal machines without guard digits - (we know of none). We use a subroutine call to compute - 2*DLAMBDA(I) to prevent optimizing compilers from eliminating - this code. -*/ - - i__1 = *k; - for (i__ = 1; i__ <= i__1; ++i__) { - dsigma[i__] = dlamc3_(&dsigma[i__], &dsigma[i__]) - dsigma[i__]; -/* L20: */ - } - -/* Keep a copy of Z. */ - - dcopy_(k, &z__[1], &c__1, &q[q_offset], &c__1); - -/* Normalize Z. */ - - rho = dnrm2_(k, &z__[1], &c__1); - dlascl_("G", &c__0, &c__0, &rho, &c_b15, k, &c__1, &z__[1], k, info); - rho *= rho; - -/* Find the new singular values. */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - dlasd4_(k, &j, &dsigma[1], &z__[1], &u[j * u_dim1 + 1], &rho, &d__[j], - &vt[j * vt_dim1 + 1], info); - -/* If the zero finder fails, the computation is terminated. */ - - if (*info != 0) { - return 0; - } -/* L30: */ - } - -/* Compute updated Z. 
*/ - - i__1 = *k; - for (i__ = 1; i__ <= i__1; ++i__) { - z__[i__] = u[i__ + *k * u_dim1] * vt[i__ + *k * vt_dim1]; - i__2 = i__ - 1; - for (j = 1; j <= i__2; ++j) { - z__[i__] *= u[i__ + j * u_dim1] * vt[i__ + j * vt_dim1] / (dsigma[ - i__] - dsigma[j]) / (dsigma[i__] + dsigma[j]); -/* L40: */ - } - i__2 = *k - 1; - for (j = i__; j <= i__2; ++j) { - z__[i__] *= u[i__ + j * u_dim1] * vt[i__ + j * vt_dim1] / (dsigma[ - i__] - dsigma[j + 1]) / (dsigma[i__] + dsigma[j + 1]); -/* L50: */ - } - d__2 = sqrt((d__1 = z__[i__], abs(d__1))); - z__[i__] = d_sign(&d__2, &q[i__ + q_dim1]); -/* L60: */ - } - -/* - Compute left singular vectors of the modified diagonal matrix, - and store related information for the right singular vectors. -*/ - - i__1 = *k; - for (i__ = 1; i__ <= i__1; ++i__) { - vt[i__ * vt_dim1 + 1] = z__[1] / u[i__ * u_dim1 + 1] / vt[i__ * - vt_dim1 + 1]; - u[i__ * u_dim1 + 1] = -1.; - i__2 = *k; - for (j = 2; j <= i__2; ++j) { - vt[j + i__ * vt_dim1] = z__[j] / u[j + i__ * u_dim1] / vt[j + i__ - * vt_dim1]; - u[j + i__ * u_dim1] = dsigma[j] * vt[j + i__ * vt_dim1]; -/* L70: */ - } - temp = dnrm2_(k, &u[i__ * u_dim1 + 1], &c__1); - q[i__ * q_dim1 + 1] = u[i__ * u_dim1 + 1] / temp; - i__2 = *k; - for (j = 2; j <= i__2; ++j) { - jc = idxc[j]; - q[j + i__ * q_dim1] = u[jc + i__ * u_dim1] / temp; -/* L80: */ - } -/* L90: */ - } - -/* Update the left singular vector matrix. 
*/ - - if (*k == 2) { - dgemm_("N", "N", &n, k, k, &c_b15, &u2[u2_offset], ldu2, &q[q_offset], - ldq, &c_b29, &u[u_offset], ldu); - goto L100; - } - if (ctot[1] > 0) { - dgemm_("N", "N", nl, k, &ctot[1], &c_b15, &u2[((u2_dim1) << (1)) + 1], - ldu2, &q[q_dim1 + 2], ldq, &c_b29, &u[u_dim1 + 1], ldu); - if (ctot[3] > 0) { - ktemp = ctot[1] + 2 + ctot[2]; - dgemm_("N", "N", nl, k, &ctot[3], &c_b15, &u2[ktemp * u2_dim1 + 1] - , ldu2, &q[ktemp + q_dim1], ldq, &c_b15, &u[u_dim1 + 1], - ldu); - } - } else if (ctot[3] > 0) { - ktemp = ctot[1] + 2 + ctot[2]; - dgemm_("N", "N", nl, k, &ctot[3], &c_b15, &u2[ktemp * u2_dim1 + 1], - ldu2, &q[ktemp + q_dim1], ldq, &c_b29, &u[u_dim1 + 1], ldu); - } else { - dlacpy_("F", nl, k, &u2[u2_offset], ldu2, &u[u_offset], ldu); - } - dcopy_(k, &q[q_dim1 + 1], ldq, &u[nlp1 + u_dim1], ldu); - ktemp = ctot[1] + 2; - ctemp = ctot[2] + ctot[3]; - dgemm_("N", "N", nr, k, &ctemp, &c_b15, &u2[nlp2 + ktemp * u2_dim1], ldu2, - &q[ktemp + q_dim1], ldq, &c_b29, &u[nlp2 + u_dim1], ldu); - -/* Generate the right singular vectors. */ - -L100: - i__1 = *k; - for (i__ = 1; i__ <= i__1; ++i__) { - temp = dnrm2_(k, &vt[i__ * vt_dim1 + 1], &c__1); - q[i__ + q_dim1] = vt[i__ * vt_dim1 + 1] / temp; - i__2 = *k; - for (j = 2; j <= i__2; ++j) { - jc = idxc[j]; - q[i__ + j * q_dim1] = vt[jc + i__ * vt_dim1] / temp; -/* L110: */ - } -/* L120: */ - } - -/* Update the right singular vector matrix. 
*/ - - if (*k == 2) { - dgemm_("N", "N", k, &m, k, &c_b15, &q[q_offset], ldq, &vt2[vt2_offset] - , ldvt2, &c_b29, &vt[vt_offset], ldvt); - return 0; - } - ktemp = ctot[1] + 1; - dgemm_("N", "N", k, &nlp1, &ktemp, &c_b15, &q[q_dim1 + 1], ldq, &vt2[ - vt2_dim1 + 1], ldvt2, &c_b29, &vt[vt_dim1 + 1], ldvt); - ktemp = ctot[1] + 2 + ctot[2]; - if (ktemp <= *ldvt2) { - dgemm_("N", "N", k, &nlp1, &ctot[3], &c_b15, &q[ktemp * q_dim1 + 1], - ldq, &vt2[ktemp + vt2_dim1], ldvt2, &c_b15, &vt[vt_dim1 + 1], - ldvt); - } - - ktemp = ctot[1] + 1; - nrp1 = *nr + *sqre; - if (ktemp > 1) { - i__1 = *k; - for (i__ = 1; i__ <= i__1; ++i__) { - q[i__ + ktemp * q_dim1] = q[i__ + q_dim1]; -/* L130: */ - } - i__1 = m; - for (i__ = nlp2; i__ <= i__1; ++i__) { - vt2[ktemp + i__ * vt2_dim1] = vt2[i__ * vt2_dim1 + 1]; -/* L140: */ - } - } - ctemp = ctot[2] + 1 + ctot[3]; - dgemm_("N", "N", k, &nrp1, &ctemp, &c_b15, &q[ktemp * q_dim1 + 1], ldq, & - vt2[ktemp + nlp2 * vt2_dim1], ldvt2, &c_b29, &vt[nlp2 * vt_dim1 + - 1], ldvt); - - return 0; - -/* End of DLASD3 */ - -} /* dlasd3_ */ - - -/* Subroutine */ int dlasd4_(integer *n, integer *i__, doublereal *d__, - doublereal *z__, doublereal *delta, doublereal *rho, doublereal * - sigma, doublereal *work, integer *info) -{ - /* System generated locals */ - integer i__1; - doublereal d__1; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal a, b, c__; - static integer j; - static doublereal w, dd[3]; - static integer ii; - static doublereal dw, zz[3]; - static integer ip1; - static doublereal eta, phi, eps, tau, psi; - static integer iim1, iip1; - static doublereal dphi, dpsi; - static integer iter; - static doublereal temp, prew, sg2lb, sg2ub, temp1, temp2, dtiim, delsq, - dtiip; - static integer niter; - static doublereal dtisq; - static logical swtch; - static doublereal dtnsq; - extern /* Subroutine */ int dlaed6_(integer *, logical *, doublereal *, - doublereal *, doublereal *, doublereal *, 
doublereal *, integer *) - , dlasd5_(integer *, doublereal *, doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *); - static doublereal delsq2, dtnsq1; - static logical swtch3; - - static logical orgati; - static doublereal erretm, dtipsq, rhoinv; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Oak Ridge National Lab, Argonne National Lab, - Courant Institute, NAG Ltd., and Rice University - October 31, 1999 - - - Purpose - ======= - - This subroutine computes the square root of the I-th updated - eigenvalue of a positive symmetric rank-one modification to - a positive diagonal matrix whose entries are given as the squares - of the corresponding entries in the array d, and that - - 0 <= D(i) < D(j) for i < j - - and that RHO > 0. This is arranged by the calling routine, and is - no loss in generality. The rank-one modified system is thus - - diag( D ) * diag( D ) + RHO * Z * Z_transpose. - - where we assume the Euclidean norm of Z is 1. - - The method consists of approximating the rational functions in the - secular equation by simpler interpolating rational functions. - - Arguments - ========= - - N (input) INTEGER - The length of all arrays. - - I (input) INTEGER - The index of the eigenvalue to be computed. 1 <= I <= N. - - D (input) DOUBLE PRECISION array, dimension ( N ) - The original eigenvalues. It is assumed that they are in - order, 0 <= D(I) < D(J) for I < J. - - Z (input) DOUBLE PRECISION array, dimension ( N ) - The components of the updating vector. - - DELTA (output) DOUBLE PRECISION array, dimension ( N ) - If N .ne. 1, DELTA contains (D(j) - sigma_I) in its j-th - component. If N = 1, then DELTA(1) = 1. The vector DELTA - contains the information necessary to construct the - (singular) eigenvectors. - - RHO (input) DOUBLE PRECISION - The scalar in the symmetric updating formula. - - SIGMA (output) DOUBLE PRECISION - The computed lambda_I, the I-th updated eigenvalue. 
- - WORK (workspace) DOUBLE PRECISION array, dimension ( N ) - If N .ne. 1, WORK contains (D(j) + sigma_I) in its j-th - component. If N = 1, then WORK( 1 ) = 1. - - INFO (output) INTEGER - = 0: successful exit - > 0: if INFO = 1, the updating process failed. - - Internal Parameters - =================== - - Logical variable ORGATI (origin-at-i?) is used for distinguishing - whether D(i) or D(i+1) is treated as the origin. - - ORGATI = .true. origin at i - ORGATI = .false. origin at i+1 - - Logical variable SWTCH3 (switch-for-3-poles?) is for noting - if we are working with THREE poles! - - MAXIT is the maximum number of iterations allowed for each - eigenvalue. - - Further Details - =============== - - Based on contributions by - Ren-Cang Li, Computer Science Division, University of California - at Berkeley, USA - - ===================================================================== - - - Since this routine is called in an inner loop, we do no argument - checking. - - Quick return for N=1 and 2. -*/ - - /* Parameter adjustments */ - --work; - --delta; - --z__; - --d__; - - /* Function Body */ - *info = 0; - if (*n == 1) { - -/* Presumably, I=1 upon entry */ - - *sigma = sqrt(d__[1] * d__[1] + *rho * z__[1] * z__[1]); - delta[1] = 1.; - work[1] = 1.; - return 0; - } - if (*n == 2) { - dlasd5_(i__, &d__[1], &z__[1], &delta[1], rho, sigma, &work[1]); - return 0; - } - -/* Compute machine epsilon */ - - eps = EPSILON; - rhoinv = 1. 
/ *rho; - -/* The case I = N */ - - if (*i__ == *n) { - -/* Initialize some basic variables */ - - ii = *n - 1; - niter = 1; - -/* Calculate initial guess */ - - temp = *rho / 2.; - -/* - If ||Z||_2 is not one, then TEMP should be set to - RHO * ||Z||_2^2 / TWO -*/ - - temp1 = temp / (d__[*n] + sqrt(d__[*n] * d__[*n] + temp)); - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - work[j] = d__[j] + d__[*n] + temp1; - delta[j] = d__[j] - d__[*n] - temp1; -/* L10: */ - } - - psi = 0.; - i__1 = *n - 2; - for (j = 1; j <= i__1; ++j) { - psi += z__[j] * z__[j] / (delta[j] * work[j]); -/* L20: */ - } - - c__ = rhoinv + psi; - w = c__ + z__[ii] * z__[ii] / (delta[ii] * work[ii]) + z__[*n] * z__[* - n] / (delta[*n] * work[*n]); - - if (w <= 0.) { - temp1 = sqrt(d__[*n] * d__[*n] + *rho); - temp = z__[*n - 1] * z__[*n - 1] / ((d__[*n - 1] + temp1) * (d__[* - n] - d__[*n - 1] + *rho / (d__[*n] + temp1))) + z__[*n] * - z__[*n] / *rho; - -/* - The following TAU is to approximate - SIGMA_n^2 - D( N )*D( N ) -*/ - - if (c__ <= temp) { - tau = *rho; - } else { - delsq = (d__[*n] - d__[*n - 1]) * (d__[*n] + d__[*n - 1]); - a = -c__ * delsq + z__[*n - 1] * z__[*n - 1] + z__[*n] * z__[* - n]; - b = z__[*n] * z__[*n] * delsq; - if (a < 0.) { - tau = b * 2. / (sqrt(a * a + b * 4. * c__) - a); - } else { - tau = (a + sqrt(a * a + b * 4. * c__)) / (c__ * 2.); - } - } - -/* - It can be proved that - D(N)^2+RHO/2 <= SIGMA_n^2 < D(N)^2+TAU <= D(N)^2+RHO -*/ - - } else { - delsq = (d__[*n] - d__[*n - 1]) * (d__[*n] + d__[*n - 1]); - a = -c__ * delsq + z__[*n - 1] * z__[*n - 1] + z__[*n] * z__[*n]; - b = z__[*n] * z__[*n] * delsq; - -/* - The following TAU is to approximate - SIGMA_n^2 - D( N )*D( N ) -*/ - - if (a < 0.) { - tau = b * 2. / (sqrt(a * a + b * 4. * c__) - a); - } else { - tau = (a + sqrt(a * a + b * 4. 
* c__)) / (c__ * 2.); - } - -/* - It can be proved that - D(N)^2 < D(N)^2+TAU < SIGMA(N)^2 < D(N)^2+RHO/2 -*/ - - } - -/* The following ETA is to approximate SIGMA_n - D( N ) */ - - eta = tau / (d__[*n] + sqrt(d__[*n] * d__[*n] + tau)); - - *sigma = d__[*n] + eta; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - delta[j] = d__[j] - d__[*i__] - eta; - work[j] = d__[j] + d__[*i__] + eta; -/* L30: */ - } - -/* Evaluate PSI and the derivative DPSI */ - - dpsi = 0.; - psi = 0.; - erretm = 0.; - i__1 = ii; - for (j = 1; j <= i__1; ++j) { - temp = z__[j] / (delta[j] * work[j]); - psi += z__[j] * temp; - dpsi += temp * temp; - erretm += psi; -/* L40: */ - } - erretm = abs(erretm); - -/* Evaluate PHI and the derivative DPHI */ - - temp = z__[*n] / (delta[*n] * work[*n]); - phi = z__[*n] * temp; - dphi = temp * temp; - erretm = (-phi - psi) * 8. + erretm - phi + rhoinv + abs(tau) * (dpsi - + dphi); - - w = rhoinv + phi + psi; - -/* Test for convergence */ - - if (abs(w) <= eps * erretm) { - goto L240; - } - -/* Calculate the new step */ - - ++niter; - dtnsq1 = work[*n - 1] * delta[*n - 1]; - dtnsq = work[*n] * delta[*n]; - c__ = w - dtnsq1 * dpsi - dtnsq * dphi; - a = (dtnsq + dtnsq1) * w - dtnsq * dtnsq1 * (dpsi + dphi); - b = dtnsq * dtnsq1 * w; - if (c__ < 0.) { - c__ = abs(c__); - } - if (c__ == 0.) { - eta = *rho - *sigma * *sigma; - } else if (a >= 0.) { - eta = (a + sqrt((d__1 = a * a - b * 4. * c__, abs(d__1)))) / (c__ - * 2.); - } else { - eta = b * 2. / (a - sqrt((d__1 = a * a - b * 4. * c__, abs(d__1))) - ); - } - -/* - Note, eta should be positive if w is negative, and - eta should be negative otherwise. However, - if for some reason caused by roundoff, eta*w > 0, - we simply use one Newton step instead. This way - will guarantee eta*w < 0. -*/ - - if (w * eta > 0.) 
{ - eta = -w / (dpsi + dphi); - } - temp = eta - dtnsq; - if (temp > *rho) { - eta = *rho + dtnsq; - } - - tau += eta; - eta /= *sigma + sqrt(eta + *sigma * *sigma); - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - delta[j] -= eta; - work[j] += eta; -/* L50: */ - } - - *sigma += eta; - -/* Evaluate PSI and the derivative DPSI */ - - dpsi = 0.; - psi = 0.; - erretm = 0.; - i__1 = ii; - for (j = 1; j <= i__1; ++j) { - temp = z__[j] / (work[j] * delta[j]); - psi += z__[j] * temp; - dpsi += temp * temp; - erretm += psi; -/* L60: */ - } - erretm = abs(erretm); - -/* Evaluate PHI and the derivative DPHI */ - - temp = z__[*n] / (work[*n] * delta[*n]); - phi = z__[*n] * temp; - dphi = temp * temp; - erretm = (-phi - psi) * 8. + erretm - phi + rhoinv + abs(tau) * (dpsi - + dphi); - - w = rhoinv + phi + psi; - -/* Main loop to update the values of the array DELTA */ - - iter = niter + 1; - - for (niter = iter; niter <= MAXITERLOOPS; ++niter) { - -/* Test for convergence */ - - if (abs(w) <= eps * erretm) { - goto L240; - } - -/* Calculate the new step */ - - dtnsq1 = work[*n - 1] * delta[*n - 1]; - dtnsq = work[*n] * delta[*n]; - c__ = w - dtnsq1 * dpsi - dtnsq * dphi; - a = (dtnsq + dtnsq1) * w - dtnsq1 * dtnsq * (dpsi + dphi); - b = dtnsq1 * dtnsq * w; - if (a >= 0.) { - eta = (a + sqrt((d__1 = a * a - b * 4. * c__, abs(d__1)))) / ( - c__ * 2.); - } else { - eta = b * 2. / (a - sqrt((d__1 = a * a - b * 4. * c__, abs( - d__1)))); - } - -/* - Note, eta should be positive if w is negative, and - eta should be negative otherwise. However, - if for some reason caused by roundoff, eta*w > 0, - we simply use one Newton step instead. This way - will guarantee eta*w < 0. -*/ - - if (w * eta > 0.) { - eta = -w / (dpsi + dphi); - } - temp = eta - dtnsq; - if (temp <= 0.) 
{ - eta /= 2.; - } - - tau += eta; - eta /= *sigma + sqrt(eta + *sigma * *sigma); - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - delta[j] -= eta; - work[j] += eta; -/* L70: */ - } - - *sigma += eta; - -/* Evaluate PSI and the derivative DPSI */ - - dpsi = 0.; - psi = 0.; - erretm = 0.; - i__1 = ii; - for (j = 1; j <= i__1; ++j) { - temp = z__[j] / (work[j] * delta[j]); - psi += z__[j] * temp; - dpsi += temp * temp; - erretm += psi; -/* L80: */ - } - erretm = abs(erretm); - -/* Evaluate PHI and the derivative DPHI */ - - temp = z__[*n] / (work[*n] * delta[*n]); - phi = z__[*n] * temp; - dphi = temp * temp; - erretm = (-phi - psi) * 8. + erretm - phi + rhoinv + abs(tau) * ( - dpsi + dphi); - - w = rhoinv + phi + psi; -/* L90: */ - } - -/* Return with INFO = 1, NITER = MAXIT and not converged */ - - *info = 1; - goto L240; - -/* End for the case I = N */ - - } else { - -/* The case for I < N */ - - niter = 1; - ip1 = *i__ + 1; - -/* Calculate initial guess */ - - delsq = (d__[ip1] - d__[*i__]) * (d__[ip1] + d__[*i__]); - delsq2 = delsq / 2.; - temp = delsq2 / (d__[*i__] + sqrt(d__[*i__] * d__[*i__] + delsq2)); - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - work[j] = d__[j] + d__[*i__] + temp; - delta[j] = d__[j] - d__[*i__] - temp; -/* L100: */ - } - - psi = 0.; - i__1 = *i__ - 1; - for (j = 1; j <= i__1; ++j) { - psi += z__[j] * z__[j] / (work[j] * delta[j]); -/* L110: */ - } - - phi = 0.; - i__1 = *i__ + 2; - for (j = *n; j >= i__1; --j) { - phi += z__[j] * z__[j] / (work[j] * delta[j]); -/* L120: */ - } - c__ = rhoinv + psi + phi; - w = c__ + z__[*i__] * z__[*i__] / (work[*i__] * delta[*i__]) + z__[ - ip1] * z__[ip1] / (work[ip1] * delta[ip1]); - - if (w > 0.) { - -/* - d(i)^2 < the ith sigma^2 < (d(i)^2+d(i+1)^2)/2 - - We choose d(i) as origin. -*/ - - orgati = TRUE_; - sg2lb = 0.; - sg2ub = delsq2; - a = c__ * delsq + z__[*i__] * z__[*i__] + z__[ip1] * z__[ip1]; - b = z__[*i__] * z__[*i__] * delsq; - if (a > 0.) { - tau = b * 2. / (a + sqrt((d__1 = a * a - b * 4. 
* c__, abs( - d__1)))); - } else { - tau = (a - sqrt((d__1 = a * a - b * 4. * c__, abs(d__1)))) / ( - c__ * 2.); - } - -/* - TAU now is an estimation of SIGMA^2 - D( I )^2. The - following, however, is the corresponding estimation of - SIGMA - D( I ). -*/ - - eta = tau / (d__[*i__] + sqrt(d__[*i__] * d__[*i__] + tau)); - } else { - -/* - (d(i)^2+d(i+1)^2)/2 <= the ith sigma^2 < d(i+1)^2/2 - - We choose d(i+1) as origin. -*/ - - orgati = FALSE_; - sg2lb = -delsq2; - sg2ub = 0.; - a = c__ * delsq - z__[*i__] * z__[*i__] - z__[ip1] * z__[ip1]; - b = z__[ip1] * z__[ip1] * delsq; - if (a < 0.) { - tau = b * 2. / (a - sqrt((d__1 = a * a + b * 4. * c__, abs( - d__1)))); - } else { - tau = -(a + sqrt((d__1 = a * a + b * 4. * c__, abs(d__1)))) / - (c__ * 2.); - } - -/* - TAU now is an estimation of SIGMA^2 - D( IP1 )^2. The - following, however, is the corresponding estimation of - SIGMA - D( IP1 ). -*/ - - eta = tau / (d__[ip1] + sqrt((d__1 = d__[ip1] * d__[ip1] + tau, - abs(d__1)))); - } - - if (orgati) { - ii = *i__; - *sigma = d__[*i__] + eta; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - work[j] = d__[j] + d__[*i__] + eta; - delta[j] = d__[j] - d__[*i__] - eta; -/* L130: */ - } - } else { - ii = *i__ + 1; - *sigma = d__[ip1] + eta; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - work[j] = d__[j] + d__[ip1] + eta; - delta[j] = d__[j] - d__[ip1] - eta; -/* L140: */ - } - } - iim1 = ii - 1; - iip1 = ii + 1; - -/* Evaluate PSI and the derivative DPSI */ - - dpsi = 0.; - psi = 0.; - erretm = 0.; - i__1 = iim1; - for (j = 1; j <= i__1; ++j) { - temp = z__[j] / (work[j] * delta[j]); - psi += z__[j] * temp; - dpsi += temp * temp; - erretm += psi; -/* L150: */ - } - erretm = abs(erretm); - -/* Evaluate PHI and the derivative DPHI */ - - dphi = 0.; - phi = 0.; - i__1 = iip1; - for (j = *n; j >= i__1; --j) { - temp = z__[j] / (work[j] * delta[j]); - phi += z__[j] * temp; - dphi += temp * temp; - erretm += phi; -/* L160: */ - } - - w = rhoinv + phi + psi; - -/* - W is the value of 
the secular function with - its ii-th element removed. -*/ - - swtch3 = FALSE_; - if (orgati) { - if (w < 0.) { - swtch3 = TRUE_; - } - } else { - if (w > 0.) { - swtch3 = TRUE_; - } - } - if (ii == 1 || ii == *n) { - swtch3 = FALSE_; - } - - temp = z__[ii] / (work[ii] * delta[ii]); - dw = dpsi + dphi + temp * temp; - temp = z__[ii] * temp; - w += temp; - erretm = (phi - psi) * 8. + erretm + rhoinv * 2. + abs(temp) * 3. + - abs(tau) * dw; - -/* Test for convergence */ - - if (abs(w) <= eps * erretm) { - goto L240; - } - - if (w <= 0.) { - sg2lb = max(sg2lb,tau); - } else { - sg2ub = min(sg2ub,tau); - } - -/* Calculate the new step */ - - ++niter; - if (! swtch3) { - dtipsq = work[ip1] * delta[ip1]; - dtisq = work[*i__] * delta[*i__]; - if (orgati) { -/* Computing 2nd power */ - d__1 = z__[*i__] / dtisq; - c__ = w - dtipsq * dw + delsq * (d__1 * d__1); - } else { -/* Computing 2nd power */ - d__1 = z__[ip1] / dtipsq; - c__ = w - dtisq * dw - delsq * (d__1 * d__1); - } - a = (dtipsq + dtisq) * w - dtipsq * dtisq * dw; - b = dtipsq * dtisq * w; - if (c__ == 0.) { - if (a == 0.) { - if (orgati) { - a = z__[*i__] * z__[*i__] + dtipsq * dtipsq * (dpsi + - dphi); - } else { - a = z__[ip1] * z__[ip1] + dtisq * dtisq * (dpsi + - dphi); - } - } - eta = b / a; - } else if (a <= 0.) { - eta = (a - sqrt((d__1 = a * a - b * 4. * c__, abs(d__1)))) / ( - c__ * 2.); - } else { - eta = b * 2. / (a + sqrt((d__1 = a * a - b * 4. 
* c__, abs( - d__1)))); - } - } else { - -/* Interpolation using THREE most relevant poles */ - - dtiim = work[iim1] * delta[iim1]; - dtiip = work[iip1] * delta[iip1]; - temp = rhoinv + psi + phi; - if (orgati) { - temp1 = z__[iim1] / dtiim; - temp1 *= temp1; - c__ = temp - dtiip * (dpsi + dphi) - (d__[iim1] - d__[iip1]) * - (d__[iim1] + d__[iip1]) * temp1; - zz[0] = z__[iim1] * z__[iim1]; - if (dpsi < temp1) { - zz[2] = dtiip * dtiip * dphi; - } else { - zz[2] = dtiip * dtiip * (dpsi - temp1 + dphi); - } - } else { - temp1 = z__[iip1] / dtiip; - temp1 *= temp1; - c__ = temp - dtiim * (dpsi + dphi) - (d__[iip1] - d__[iim1]) * - (d__[iim1] + d__[iip1]) * temp1; - if (dphi < temp1) { - zz[0] = dtiim * dtiim * dpsi; - } else { - zz[0] = dtiim * dtiim * (dpsi + (dphi - temp1)); - } - zz[2] = z__[iip1] * z__[iip1]; - } - zz[1] = z__[ii] * z__[ii]; - dd[0] = dtiim; - dd[1] = delta[ii] * work[ii]; - dd[2] = dtiip; - dlaed6_(&niter, &orgati, &c__, dd, zz, &w, &eta, info); - if (*info != 0) { - goto L240; - } - } - -/* - Note, eta should be positive if w is negative, and - eta should be negative otherwise. However, - if for some reason caused by roundoff, eta*w > 0, - we simply use one Newton step instead. This way - will guarantee eta*w < 0. -*/ - - if (w * eta >= 0.) { - eta = -w / dw; - } - if (orgati) { - temp1 = work[*i__] * delta[*i__]; - temp = eta - temp1; - } else { - temp1 = work[ip1] * delta[ip1]; - temp = eta - temp1; - } - if (temp > sg2ub || temp < sg2lb) { - if (w < 0.) 
{ - eta = (sg2ub - tau) / 2.; - } else { - eta = (sg2lb - tau) / 2.; - } - } - - tau += eta; - eta /= *sigma + sqrt(*sigma * *sigma + eta); - - prew = w; - - *sigma += eta; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - work[j] += eta; - delta[j] -= eta; -/* L170: */ - } - -/* Evaluate PSI and the derivative DPSI */ - - dpsi = 0.; - psi = 0.; - erretm = 0.; - i__1 = iim1; - for (j = 1; j <= i__1; ++j) { - temp = z__[j] / (work[j] * delta[j]); - psi += z__[j] * temp; - dpsi += temp * temp; - erretm += psi; -/* L180: */ - } - erretm = abs(erretm); - -/* Evaluate PHI and the derivative DPHI */ - - dphi = 0.; - phi = 0.; - i__1 = iip1; - for (j = *n; j >= i__1; --j) { - temp = z__[j] / (work[j] * delta[j]); - phi += z__[j] * temp; - dphi += temp * temp; - erretm += phi; -/* L190: */ - } - - temp = z__[ii] / (work[ii] * delta[ii]); - dw = dpsi + dphi + temp * temp; - temp = z__[ii] * temp; - w = rhoinv + phi + psi + temp; - erretm = (phi - psi) * 8. + erretm + rhoinv * 2. + abs(temp) * 3. + - abs(tau) * dw; - - if (w <= 0.) { - sg2lb = max(sg2lb,tau); - } else { - sg2ub = min(sg2ub,tau); - } - - swtch = FALSE_; - if (orgati) { - if (-w > abs(prew) / 10.) { - swtch = TRUE_; - } - } else { - if (w > abs(prew) / 10.) { - swtch = TRUE_; - } - } - -/* Main loop to update the values of the array DELTA and WORK */ - - iter = niter + 1; - - for (niter = iter; niter <= MAXITERLOOPS; ++niter) { - -/* Test for convergence */ - - if (abs(w) <= eps * erretm) { - goto L240; - } - -/* Calculate the new step */ - - if (! swtch3) { - dtipsq = work[ip1] * delta[ip1]; - dtisq = work[*i__] * delta[*i__]; - if (! 
swtch) { - if (orgati) { -/* Computing 2nd power */ - d__1 = z__[*i__] / dtisq; - c__ = w - dtipsq * dw + delsq * (d__1 * d__1); - } else { -/* Computing 2nd power */ - d__1 = z__[ip1] / dtipsq; - c__ = w - dtisq * dw - delsq * (d__1 * d__1); - } - } else { - temp = z__[ii] / (work[ii] * delta[ii]); - if (orgati) { - dpsi += temp * temp; - } else { - dphi += temp * temp; - } - c__ = w - dtisq * dpsi - dtipsq * dphi; - } - a = (dtipsq + dtisq) * w - dtipsq * dtisq * dw; - b = dtipsq * dtisq * w; - if (c__ == 0.) { - if (a == 0.) { - if (! swtch) { - if (orgati) { - a = z__[*i__] * z__[*i__] + dtipsq * dtipsq * - (dpsi + dphi); - } else { - a = z__[ip1] * z__[ip1] + dtisq * dtisq * ( - dpsi + dphi); - } - } else { - a = dtisq * dtisq * dpsi + dtipsq * dtipsq * dphi; - } - } - eta = b / a; - } else if (a <= 0.) { - eta = (a - sqrt((d__1 = a * a - b * 4. * c__, abs(d__1)))) - / (c__ * 2.); - } else { - eta = b * 2. / (a + sqrt((d__1 = a * a - b * 4. * c__, - abs(d__1)))); - } - } else { - -/* Interpolation using THREE most relevant poles */ - - dtiim = work[iim1] * delta[iim1]; - dtiip = work[iip1] * delta[iip1]; - temp = rhoinv + psi + phi; - if (swtch) { - c__ = temp - dtiim * dpsi - dtiip * dphi; - zz[0] = dtiim * dtiim * dpsi; - zz[2] = dtiip * dtiip * dphi; - } else { - if (orgati) { - temp1 = z__[iim1] / dtiim; - temp1 *= temp1; - temp2 = (d__[iim1] - d__[iip1]) * (d__[iim1] + d__[ - iip1]) * temp1; - c__ = temp - dtiip * (dpsi + dphi) - temp2; - zz[0] = z__[iim1] * z__[iim1]; - if (dpsi < temp1) { - zz[2] = dtiip * dtiip * dphi; - } else { - zz[2] = dtiip * dtiip * (dpsi - temp1 + dphi); - } - } else { - temp1 = z__[iip1] / dtiip; - temp1 *= temp1; - temp2 = (d__[iip1] - d__[iim1]) * (d__[iim1] + d__[ - iip1]) * temp1; - c__ = temp - dtiim * (dpsi + dphi) - temp2; - if (dphi < temp1) { - zz[0] = dtiim * dtiim * dpsi; - } else { - zz[0] = dtiim * dtiim * (dpsi + (dphi - temp1)); - } - zz[2] = z__[iip1] * z__[iip1]; - } - } - dd[0] = dtiim; - dd[1] = delta[ii] * 
work[ii]; - dd[2] = dtiip; - dlaed6_(&niter, &orgati, &c__, dd, zz, &w, &eta, info); - if (*info != 0) { - goto L240; - } - } - -/* - Note, eta should be positive if w is negative, and - eta should be negative otherwise. However, - if for some reason caused by roundoff, eta*w > 0, - we simply use one Newton step instead. This way - will guarantee eta*w < 0. -*/ - - if (w * eta >= 0.) { - eta = -w / dw; - } - if (orgati) { - temp1 = work[*i__] * delta[*i__]; - temp = eta - temp1; - } else { - temp1 = work[ip1] * delta[ip1]; - temp = eta - temp1; - } - if (temp > sg2ub || temp < sg2lb) { - if (w < 0.) { - eta = (sg2ub - tau) / 2.; - } else { - eta = (sg2lb - tau) / 2.; - } - } - - tau += eta; - eta /= *sigma + sqrt(*sigma * *sigma + eta); - - *sigma += eta; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - work[j] += eta; - delta[j] -= eta; -/* L200: */ - } - - prew = w; - -/* Evaluate PSI and the derivative DPSI */ - - dpsi = 0.; - psi = 0.; - erretm = 0.; - i__1 = iim1; - for (j = 1; j <= i__1; ++j) { - temp = z__[j] / (work[j] * delta[j]); - psi += z__[j] * temp; - dpsi += temp * temp; - erretm += psi; -/* L210: */ - } - erretm = abs(erretm); - -/* Evaluate PHI and the derivative DPHI */ - - dphi = 0.; - phi = 0.; - i__1 = iip1; - for (j = *n; j >= i__1; --j) { - temp = z__[j] / (work[j] * delta[j]); - phi += z__[j] * temp; - dphi += temp * temp; - erretm += phi; -/* L220: */ - } - - temp = z__[ii] / (work[ii] * delta[ii]); - dw = dpsi + dphi + temp * temp; - temp = z__[ii] * temp; - w = rhoinv + phi + psi + temp; - erretm = (phi - psi) * 8. + erretm + rhoinv * 2. + abs(temp) * 3. - + abs(tau) * dw; - if ((w * prew > 0. && abs(w) > abs(prew) / 10.)) { - swtch = ! swtch; - } - - if (w <= 0.) 
{ - sg2lb = max(sg2lb,tau); - } else { - sg2ub = min(sg2ub,tau); - } - -/* L230: */ - } - -/* Return with INFO = 1, NITER = MAXIT and not converged */ - - *info = 1; - - } - -L240: - return 0; - -/* End of DLASD4 */ - -} /* dlasd4_ */ - -/* Subroutine */ int dlasd5_(integer *i__, doublereal *d__, doublereal *z__, - doublereal *delta, doublereal *rho, doublereal *dsigma, doublereal * - work) -{ - /* System generated locals */ - doublereal d__1; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal b, c__, w, del, tau, delsq; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Oak Ridge National Lab, Argonne National Lab, - Courant Institute, NAG Ltd., and Rice University - June 30, 1999 - - - Purpose - ======= - - This subroutine computes the square root of the I-th eigenvalue - of a positive symmetric rank-one modification of a 2-by-2 diagonal - matrix - - diag( D ) * diag( D ) + RHO * Z * transpose(Z) . - - The diagonal entries in the array D are assumed to satisfy - - 0 <= D(i) < D(j) for i < j . - - We also assume RHO > 0 and that the Euclidean norm of the vector - Z is one. - - Arguments - ========= - - I (input) INTEGER - The index of the eigenvalue to be computed. I = 1 or I = 2. - - D (input) DOUBLE PRECISION array, dimension ( 2 ) - The original eigenvalues. We assume 0 <= D(1) < D(2). - - Z (input) DOUBLE PRECISION array, dimension ( 2 ) - The components of the updating vector. - - DELTA (output) DOUBLE PRECISION array, dimension ( 2 ) - Contains (D(j) - lambda_I) in its j-th component. - The vector DELTA contains the information necessary - to construct the eigenvectors. - - RHO (input) DOUBLE PRECISION - The scalar in the symmetric updating formula. - - DSIGMA (output) DOUBLE PRECISION - The computed lambda_I, the I-th updated eigenvalue. - - WORK (workspace) DOUBLE PRECISION array, dimension ( 2 ) - WORK contains (D(j) + sigma_I) in its j-th component. 
- - Further Details - =============== - - Based on contributions by - Ren-Cang Li, Computer Science Division, University of California - at Berkeley, USA - - ===================================================================== -*/ - - - /* Parameter adjustments */ - --work; - --delta; - --z__; - --d__; - - /* Function Body */ - del = d__[2] - d__[1]; - delsq = del * (d__[2] + d__[1]); - if (*i__ == 1) { - w = *rho * 4. * (z__[2] * z__[2] / (d__[1] + d__[2] * 3.) - z__[1] * - z__[1] / (d__[1] * 3. + d__[2])) / del + 1.; - if (w > 0.) { - b = delsq + *rho * (z__[1] * z__[1] + z__[2] * z__[2]); - c__ = *rho * z__[1] * z__[1] * delsq; - -/* - B > ZERO, always - - The following TAU is DSIGMA * DSIGMA - D( 1 ) * D( 1 ) -*/ - - tau = c__ * 2. / (b + sqrt((d__1 = b * b - c__ * 4., abs(d__1)))); - -/* The following TAU is DSIGMA - D( 1 ) */ - - tau /= d__[1] + sqrt(d__[1] * d__[1] + tau); - *dsigma = d__[1] + tau; - delta[1] = -tau; - delta[2] = del - tau; - work[1] = d__[1] * 2. + tau; - work[2] = d__[1] + tau + d__[2]; -/* - DELTA( 1 ) = -Z( 1 ) / TAU - DELTA( 2 ) = Z( 2 ) / ( DEL-TAU ) -*/ - } else { - b = -delsq + *rho * (z__[1] * z__[1] + z__[2] * z__[2]); - c__ = *rho * z__[2] * z__[2] * delsq; - -/* The following TAU is DSIGMA * DSIGMA - D( 2 ) * D( 2 ) */ - - if (b > 0.) { - tau = c__ * -2. / (b + sqrt(b * b + c__ * 4.)); - } else { - tau = (b - sqrt(b * b + c__ * 4.)) / 2.; - } - -/* The following TAU is DSIGMA - D( 2 ) */ - - tau /= d__[2] + sqrt((d__1 = d__[2] * d__[2] + tau, abs(d__1))); - *dsigma = d__[2] + tau; - delta[1] = -(del + tau); - delta[2] = -tau; - work[1] = d__[1] + tau + d__[2]; - work[2] = d__[2] * 2. 
+ tau; -/* - DELTA( 1 ) = -Z( 1 ) / ( DEL+TAU ) - DELTA( 2 ) = -Z( 2 ) / TAU -*/ - } -/* - TEMP = SQRT( DELTA( 1 )*DELTA( 1 )+DELTA( 2 )*DELTA( 2 ) ) - DELTA( 1 ) = DELTA( 1 ) / TEMP - DELTA( 2 ) = DELTA( 2 ) / TEMP -*/ - } else { - -/* Now I=2 */ - - b = -delsq + *rho * (z__[1] * z__[1] + z__[2] * z__[2]); - c__ = *rho * z__[2] * z__[2] * delsq; - -/* The following TAU is DSIGMA * DSIGMA - D( 2 ) * D( 2 ) */ - - if (b > 0.) { - tau = (b + sqrt(b * b + c__ * 4.)) / 2.; - } else { - tau = c__ * 2. / (-b + sqrt(b * b + c__ * 4.)); - } - -/* The following TAU is DSIGMA - D( 2 ) */ - - tau /= d__[2] + sqrt(d__[2] * d__[2] + tau); - *dsigma = d__[2] + tau; - delta[1] = -(del + tau); - delta[2] = -tau; - work[1] = d__[1] + tau + d__[2]; - work[2] = d__[2] * 2. + tau; -/* - DELTA( 1 ) = -Z( 1 ) / ( DEL+TAU ) - DELTA( 2 ) = -Z( 2 ) / TAU - TEMP = SQRT( DELTA( 1 )*DELTA( 1 )+DELTA( 2 )*DELTA( 2 ) ) - DELTA( 1 ) = DELTA( 1 ) / TEMP - DELTA( 2 ) = DELTA( 2 ) / TEMP -*/ - } - return 0; - -/* End of DLASD5 */ - -} /* dlasd5_ */ - -/* Subroutine */ int dlasd6_(integer *icompq, integer *nl, integer *nr, - integer *sqre, doublereal *d__, doublereal *vf, doublereal *vl, - doublereal *alpha, doublereal *beta, integer *idxq, integer *perm, - integer *givptr, integer *givcol, integer *ldgcol, doublereal *givnum, - integer *ldgnum, doublereal *poles, doublereal *difl, doublereal * - difr, doublereal *z__, integer *k, doublereal *c__, doublereal *s, - doublereal *work, integer *iwork, integer *info) -{ - /* System generated locals */ - integer givcol_dim1, givcol_offset, givnum_dim1, givnum_offset, - poles_dim1, poles_offset, i__1; - doublereal d__1, d__2; - - /* Local variables */ - static integer i__, m, n, n1, n2, iw, idx, idxc, idxp, ivfw, ivlw; - extern /* Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *), dlasd7_(integer *, integer *, integer *, - integer *, integer *, doublereal *, doublereal *, doublereal *, - doublereal *, doublereal *, 
doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *, integer *, integer *, - integer *, integer *, integer *, integer *, integer *, doublereal - *, integer *, doublereal *, doublereal *, integer *), dlasd8_( - integer *, integer *, doublereal *, doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *, integer *, doublereal *, - doublereal *, integer *), dlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, doublereal *, - integer *, integer *), dlamrg_(integer *, integer *, - doublereal *, integer *, integer *, integer *); - static integer isigma; - extern /* Subroutine */ int xerbla_(char *, integer *); - static doublereal orgnrm; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DLASD6 computes the SVD of an updated upper bidiagonal matrix B - obtained by merging two smaller ones by appending a row. This - routine is used only for the problem which requires all singular - values and optionally singular vector matrices in factored form. - B is an N-by-M matrix with N = NL + NR + 1 and M = N + SQRE. - A related subroutine, DLASD1, handles the case in which all singular - values and singular vectors of the bidiagonal matrix are desired. - - DLASD6 computes the SVD as follows: - - ( D1(in) 0 0 0 ) - B = U(in) * ( Z1' a Z2' b ) * VT(in) - ( 0 0 D2(in) 0 ) - - = U(out) * ( D(out) 0) * VT(out) - - where Z' = (Z1' a Z2' b) = u' VT', and u is a vector of dimension M - with ALPHA and BETA in the NL+1 and NL+2 th entries and zeros - elsewhere; and the entry b is empty if SQRE = 0. - - The singular values of B can be computed using D1, D2, the first - components of all the right singular vectors of the lower block, and - the last components of all the right singular vectors of the upper - block. 
These components are stored and updated in VF and VL, - respectively, in DLASD6. Hence U and VT are not explicitly - referenced. - - The singular values are stored in D. The algorithm consists of two - stages: - - The first stage consists of deflating the size of the problem - when there are multiple singular values or if there is a zero - in the Z vector. For each such occurrence the dimension of the - secular equation problem is reduced by one. This stage is - performed by the routine DLASD7. - - The second stage consists of calculating the updated - singular values. This is done by finding the roots of the - secular equation via the routine DLASD4 (as called by DLASD8). - This routine also updates VF and VL and computes the distances - between the updated singular values and the old singular - values. - - DLASD6 is called from DLASDA. - - Arguments - ========= - - ICOMPQ (input) INTEGER - Specifies whether singular vectors are to be computed in - factored form: - = 0: Compute singular values only. - = 1: Compute singular vectors in factored form as well. - - NL (input) INTEGER - The row dimension of the upper block. NL >= 1. - - NR (input) INTEGER - The row dimension of the lower block. NR >= 1. - - SQRE (input) INTEGER - = 0: the lower block is an NR-by-NR square matrix. - = 1: the lower block is an NR-by-(NR+1) rectangular matrix. - - The bidiagonal matrix has row dimension N = NL + NR + 1, - and column dimension M = N + SQRE. - - D (input/output) DOUBLE PRECISION array, dimension ( NL+NR+1 ). - On entry D(1:NL) contains the singular values of the - upper block, and D(NL+2:N) contains the singular values - of the lower block. On exit D(1:N) contains the singular - values of the modified matrix. - - VF (input/output) DOUBLE PRECISION array, dimension ( M ) - On entry, VF(1:NL+1) contains the first components of all - right singular vectors of the upper block; and VF(NL+2:M) - contains the first components of all right singular vectors - of the lower block.
On exit, VF contains the first components - of all right singular vectors of the bidiagonal matrix. - - VL (input/output) DOUBLE PRECISION array, dimension ( M ) - On entry, VL(1:NL+1) contains the last components of all - right singular vectors of the upper block; and VL(NL+2:M) - contains the last components of all right singular vectors of - the lower block. On exit, VL contains the last components of - all right singular vectors of the bidiagonal matrix. - - ALPHA (input) DOUBLE PRECISION - Contains the diagonal element associated with the added row. - - BETA (input) DOUBLE PRECISION - Contains the off-diagonal element associated with the added - row. - - IDXQ (output) INTEGER array, dimension ( N ) - This contains the permutation which will reintegrate the - subproblem just solved back into sorted order, i.e. - D( IDXQ( I = 1, N ) ) will be in ascending order. - - PERM (output) INTEGER array, dimension ( N ) - The permutations (from deflation and sorting) to be applied - to each block. Not referenced if ICOMPQ = 0. - - GIVPTR (output) INTEGER - The number of Givens rotations which took place in this - subproblem. Not referenced if ICOMPQ = 0. - - GIVCOL (output) INTEGER array, dimension ( LDGCOL, 2 ) - Each pair of numbers indicates a pair of columns to take place - in a Givens rotation. Not referenced if ICOMPQ = 0. - - LDGCOL (input) INTEGER - leading dimension of GIVCOL, must be at least N. - - GIVNUM (output) DOUBLE PRECISION array, dimension ( LDGNUM, 2 ) - Each number indicates the C or S value to be used in the - corresponding Givens rotation. Not referenced if ICOMPQ = 0. - - LDGNUM (input) INTEGER - The leading dimension of GIVNUM and POLES, must be at least N. - - POLES (output) DOUBLE PRECISION array, dimension ( LDGNUM, 2 ) - On exit, POLES(1,*) is an array containing the new singular - values obtained from solving the secular equation, and - POLES(2,*) is an array containing the poles in the secular - equation. Not referenced if ICOMPQ = 0. 
- - DIFL (output) DOUBLE PRECISION array, dimension ( N ) - On exit, DIFL(I) is the distance between I-th updated - (undeflated) singular value and the I-th (undeflated) old - singular value. - - DIFR (output) DOUBLE PRECISION array, - dimension ( LDGNUM, 2 ) if ICOMPQ = 1 and - dimension ( N ) if ICOMPQ = 0. - On exit, DIFR(I, 1) is the distance between I-th updated - (undeflated) singular value and the I+1-th (undeflated) old - singular value. - - If ICOMPQ = 1, DIFR(1:K,2) is an array containing the - normalizing factors for the right singular vector matrix. - - See DLASD8 for details on DIFL and DIFR. - - Z (output) DOUBLE PRECISION array, dimension ( M ) - The first elements of this array contain the components - of the deflation-adjusted updating row vector. - - K (output) INTEGER - Contains the dimension of the non-deflated matrix. - This is the order of the related secular equation. 1 <= K <= N. - - C (output) DOUBLE PRECISION - C contains garbage if SQRE = 0 and the C-value of a Givens - rotation related to the right null space if SQRE = 1. - - S (output) DOUBLE PRECISION - S contains garbage if SQRE = 0 and the S-value of a Givens - rotation related to the right null space if SQRE = 1. - - WORK (workspace) DOUBLE PRECISION array, dimension ( 4 * M ) - - IWORK (workspace) INTEGER array, dimension ( 3 * N ) - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: if INFO = 1, a singular value did not converge - - Further Details - =============== - - Based on contributions by - Ming Gu and Huan Ren, Computer Science Division, University of - California at Berkeley, USA - - ===================================================================== - - - Test the input parameters.
-*/ - - /* Parameter adjustments */ - --d__; - --vf; - --vl; - --idxq; - --perm; - givcol_dim1 = *ldgcol; - givcol_offset = 1 + givcol_dim1 * 1; - givcol -= givcol_offset; - poles_dim1 = *ldgnum; - poles_offset = 1 + poles_dim1 * 1; - poles -= poles_offset; - givnum_dim1 = *ldgnum; - givnum_offset = 1 + givnum_dim1 * 1; - givnum -= givnum_offset; - --difl; - --difr; - --z__; - --work; - --iwork; - - /* Function Body */ - *info = 0; - n = *nl + *nr + 1; - m = n + *sqre; - - if (*icompq < 0 || *icompq > 1) { - *info = -1; - } else if (*nl < 1) { - *info = -2; - } else if (*nr < 1) { - *info = -3; - } else if (*sqre < 0 || *sqre > 1) { - *info = -4; - } else if (*ldgcol < n) { - *info = -14; - } else if (*ldgnum < n) { - *info = -16; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLASD6", &i__1); - return 0; - } - -/* - The following values are for bookkeeping purposes only. They are - integer pointers which indicate the portion of the workspace - used by a particular array in DLASD7 and DLASD8. -*/ - - isigma = 1; - iw = isigma + n; - ivfw = iw + m; - ivlw = ivfw + m; - - idx = 1; - idxc = idx + n; - idxp = idxc + n; - -/* - Scale. - - Computing MAX -*/ - d__1 = abs(*alpha), d__2 = abs(*beta); - orgnrm = max(d__1,d__2); - d__[*nl + 1] = 0.; - i__1 = n; - for (i__ = 1; i__ <= i__1; ++i__) { - if ((d__1 = d__[i__], abs(d__1)) > orgnrm) { - orgnrm = (d__1 = d__[i__], abs(d__1)); - } -/* L10: */ - } - dlascl_("G", &c__0, &c__0, &orgnrm, &c_b15, &n, &c__1, &d__[1], &n, info); - *alpha /= orgnrm; - *beta /= orgnrm; - -/* Sort and Deflate singular values. */ - - dlasd7_(icompq, nl, nr, sqre, k, &d__[1], &z__[1], &work[iw], &vf[1], & - work[ivfw], &vl[1], &work[ivlw], alpha, beta, &work[isigma], & - iwork[idx], &iwork[idxp], &idxq[1], &perm[1], givptr, &givcol[ - givcol_offset], ldgcol, &givnum[givnum_offset], ldgnum, c__, s, - info); - -/* Solve Secular Equation, compute DIFL, DIFR, and update VF, VL. 
*/ - - dlasd8_(icompq, k, &d__[1], &z__[1], &vf[1], &vl[1], &difl[1], &difr[1], - ldgnum, &work[isigma], &work[iw], info); - -/* Save the poles if ICOMPQ = 1. */ - - if (*icompq == 1) { - dcopy_(k, &d__[1], &c__1, &poles[poles_dim1 + 1], &c__1); - dcopy_(k, &work[isigma], &c__1, &poles[((poles_dim1) << (1)) + 1], & - c__1); - } - -/* Unscale. */ - - dlascl_("G", &c__0, &c__0, &c_b15, &orgnrm, &n, &c__1, &d__[1], &n, info); - -/* Prepare the IDXQ sorting permutation. */ - - n1 = *k; - n2 = n - *k; - dlamrg_(&n1, &n2, &d__[1], &c__1, &c_n1, &idxq[1]); - - return 0; - -/* End of DLASD6 */ - -} /* dlasd6_ */ - -/* Subroutine */ int dlasd7_(integer *icompq, integer *nl, integer *nr, - integer *sqre, integer *k, doublereal *d__, doublereal *z__, - doublereal *zw, doublereal *vf, doublereal *vfw, doublereal *vl, - doublereal *vlw, doublereal *alpha, doublereal *beta, doublereal * - dsigma, integer *idx, integer *idxp, integer *idxq, integer *perm, - integer *givptr, integer *givcol, integer *ldgcol, doublereal *givnum, - integer *ldgnum, doublereal *c__, doublereal *s, integer *info) -{ - /* System generated locals */ - integer givcol_dim1, givcol_offset, givnum_dim1, givnum_offset, i__1; - doublereal d__1, d__2; - - /* Local variables */ - static integer i__, j, m, n, k2; - static doublereal z1; - static integer jp; - static doublereal eps, tau, tol; - static integer nlp1, nlp2, idxi, idxj; - extern /* Subroutine */ int drot_(integer *, doublereal *, integer *, - doublereal *, integer *, doublereal *, doublereal *); - static integer idxjp; - extern /* Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *); - static integer jprev; - - extern /* Subroutine */ int dlamrg_(integer *, integer *, doublereal *, - integer *, integer *, integer *), xerbla_(char *, integer *); - static doublereal hlftol; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. 
of Tennessee, Oak Ridge National Lab, Argonne National Lab, - Courant Institute, NAG Ltd., and Rice University - June 30, 1999 - - - Purpose - ======= - - DLASD7 merges the two sets of singular values together into a single - sorted set. Then it tries to deflate the size of the problem. There - are two ways in which deflation can occur: when two or more singular - values are close together or if there is a tiny entry in the Z - vector. For each such occurrence the order of the related - secular equation problem is reduced by one. - - DLASD7 is called from DLASD6. - - Arguments - ========= - - ICOMPQ (input) INTEGER - Specifies whether singular vectors are to be computed - in compact form, as follows: - = 0: Compute singular values only. - = 1: Compute singular vectors of upper - bidiagonal matrix in compact form. - - NL (input) INTEGER - The row dimension of the upper block. NL >= 1. - - NR (input) INTEGER - The row dimension of the lower block. NR >= 1. - - SQRE (input) INTEGER - = 0: the lower block is an NR-by-NR square matrix. - = 1: the lower block is an NR-by-(NR+1) rectangular matrix. - - The bidiagonal matrix has - N = NL + NR + 1 rows and - M = N + SQRE >= N columns. - - K (output) INTEGER - Contains the dimension of the non-deflated matrix, this is - the order of the related secular equation. 1 <= K <=N. - - D (input/output) DOUBLE PRECISION array, dimension ( N ) - On entry D contains the singular values of the two submatrices - to be combined. On exit D contains the trailing (N-K) updated - singular values (those which were deflated) sorted into - increasing order. - - Z (output) DOUBLE PRECISION array, dimension ( M ) - On exit Z contains the updating row vector in the secular - equation. - - ZW (workspace) DOUBLE PRECISION array, dimension ( M ) - Workspace for Z. 
- - VF (input/output) DOUBLE PRECISION array, dimension ( M ) - On entry, VF(1:NL+1) contains the first components of all - right singular vectors of the upper block; and VF(NL+2:M) - contains the first components of all right singular vectors - of the lower block. On exit, VF contains the first components - of all right singular vectors of the bidiagonal matrix. - - VFW (workspace) DOUBLE PRECISION array, dimension ( M ) - Workspace for VF. - - VL (input/output) DOUBLE PRECISION array, dimension ( M ) - On entry, VL(1:NL+1) contains the last components of all - right singular vectors of the upper block; and VL(NL+2:M) - contains the last components of all right singular vectors - of the lower block. On exit, VL contains the last components - of all right singular vectors of the bidiagonal matrix. - - VLW (workspace) DOUBLE PRECISION array, dimension ( M ) - Workspace for VL. - - ALPHA (input) DOUBLE PRECISION - Contains the diagonal element associated with the added row. - - BETA (input) DOUBLE PRECISION - Contains the off-diagonal element associated with the added - row. - - DSIGMA (output) DOUBLE PRECISION array, dimension ( N ) - Contains a copy of the diagonal elements (K-1 singular values - and one zero) in the secular equation. - - IDX (workspace) INTEGER array, dimension ( N ) - This will contain the permutation used to sort the contents of - D into ascending order. - - IDXP (workspace) INTEGER array, dimension ( N ) - This will contain the permutation used to place deflated - values of D at the end of the array. On output IDXP(2:K) - points to the nondeflated D-values and IDXP(K+1:N) - points to the deflated singular values. - - IDXQ (input) INTEGER array, dimension ( N ) - This contains the permutation which separately sorts the two - sub-problems in D into ascending order. Note that entries in - the first half of this permutation must first be moved one - position backward; and entries in the second half - must first have NL+1 added to their values. 
- - PERM (output) INTEGER array, dimension ( N ) - The permutations (from deflation and sorting) to be applied - to each singular block. Not referenced if ICOMPQ = 0. - - GIVPTR (output) INTEGER - The number of Givens rotations which took place in this - subproblem. Not referenced if ICOMPQ = 0. - - GIVCOL (output) INTEGER array, dimension ( LDGCOL, 2 ) - Each pair of numbers indicates a pair of columns to take place - in a Givens rotation. Not referenced if ICOMPQ = 0. - - LDGCOL (input) INTEGER - The leading dimension of GIVCOL, must be at least N. - - GIVNUM (output) DOUBLE PRECISION array, dimension ( LDGNUM, 2 ) - Each number indicates the C or S value to be used in the - corresponding Givens rotation. Not referenced if ICOMPQ = 0. - - LDGNUM (input) INTEGER - The leading dimension of GIVNUM, must be at least N. - - C (output) DOUBLE PRECISION - C contains garbage if SQRE =0 and the C-value of a Givens - rotation related to the right null space if SQRE = 1. - - S (output) DOUBLE PRECISION - S contains garbage if SQRE =0 and the S-value of a Givens - rotation related to the right null space if SQRE = 1. - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - Based on contributions by - Ming Gu and Huan Ren, Computer Science Division, University of - California at Berkeley, USA - - ===================================================================== - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - --d__; - --z__; - --zw; - --vf; - --vfw; - --vl; - --vlw; - --dsigma; - --idx; - --idxp; - --idxq; - --perm; - givcol_dim1 = *ldgcol; - givcol_offset = 1 + givcol_dim1 * 1; - givcol -= givcol_offset; - givnum_dim1 = *ldgnum; - givnum_offset = 1 + givnum_dim1 * 1; - givnum -= givnum_offset; - - /* Function Body */ - *info = 0; - n = *nl + *nr + 1; - m = n + *sqre; - - if (*icompq < 0 || *icompq > 1) { - *info = -1; - } else if (*nl < 1) { - *info = -2; - } else if (*nr < 1) { - *info = -3; - } else if (*sqre < 0 || *sqre > 1) { - *info = -4; - } else if (*ldgcol < n) { - *info = -22; - } else if (*ldgnum < n) { - *info = -24; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLASD7", &i__1); - return 0; - } - - nlp1 = *nl + 1; - nlp2 = *nl + 2; - if (*icompq == 1) { - *givptr = 0; - } - -/* - Generate the first part of the vector Z and move the singular - values in the first part of D one position backward. -*/ - - z1 = *alpha * vl[nlp1]; - vl[nlp1] = 0.; - tau = vf[nlp1]; - for (i__ = *nl; i__ >= 1; --i__) { - z__[i__ + 1] = *alpha * vl[i__]; - vl[i__] = 0.; - vf[i__ + 1] = vf[i__]; - d__[i__ + 1] = d__[i__]; - idxq[i__ + 1] = idxq[i__] + 1; -/* L10: */ - } - vf[1] = tau; - -/* Generate the second part of the vector Z. */ - - i__1 = m; - for (i__ = nlp2; i__ <= i__1; ++i__) { - z__[i__] = *beta * vf[i__]; - vf[i__] = 0.; -/* L20: */ - } - -/* Sort the singular values into increasing order */ - - i__1 = n; - for (i__ = nlp2; i__ <= i__1; ++i__) { - idxq[i__] += nlp1; -/* L30: */ - } - -/* DSIGMA, IDXC, IDXC, and ZW are used as storage space. 
*/ - - i__1 = n; - for (i__ = 2; i__ <= i__1; ++i__) { - dsigma[i__] = d__[idxq[i__]]; - zw[i__] = z__[idxq[i__]]; - vfw[i__] = vf[idxq[i__]]; - vlw[i__] = vl[idxq[i__]]; -/* L40: */ - } - - dlamrg_(nl, nr, &dsigma[2], &c__1, &c__1, &idx[2]); - - i__1 = n; - for (i__ = 2; i__ <= i__1; ++i__) { - idxi = idx[i__] + 1; - d__[i__] = dsigma[idxi]; - z__[i__] = zw[idxi]; - vf[i__] = vfw[idxi]; - vl[i__] = vlw[idxi]; -/* L50: */ - } - -/* Calculate the allowable deflation tolerance */ - - eps = EPSILON; -/* Computing MAX */ - d__1 = abs(*alpha), d__2 = abs(*beta); - tol = max(d__1,d__2); -/* Computing MAX */ - d__2 = (d__1 = d__[n], abs(d__1)); - tol = eps * 64. * max(d__2,tol); - -/* - There are 2 kinds of deflation -- first a value in the z-vector - is small, second two (or more) singular values are very close - together (their difference is small). - - If the value in the z-vector is small, we simply permute the - array so that the corresponding singular value is moved to the - end. - - If two values in the D-vector are close, we perform a two-sided - rotation designed to make one of the corresponding z-vector - entries zero, and then permute the array so that the deflated - singular value is moved to the end. - - If there are multiple singular values then the problem deflates. - Here the number of equal singular values is found. As each equal - singular value is found, an elementary reflector is computed to - rotate the corresponding singular subspace so that the - corresponding components of Z are zero in this new basis. -*/ - - *k = 1; - k2 = n + 1; - i__1 = n; - for (j = 2; j <= i__1; ++j) { - if ((d__1 = z__[j], abs(d__1)) <= tol) { - -/* Deflate due to small z component. */ - - --k2; - idxp[k2] = j; - if (j == n) { - goto L100; - } - } else { - jprev = j; - goto L70; - } -/* L60: */ - } -L70: - j = jprev; -L80: - ++j; - if (j > n) { - goto L90; - } - if ((d__1 = z__[j], abs(d__1)) <= tol) { - -/* Deflate due to small z component. 
*/ - - --k2; - idxp[k2] = j; - } else { - -/* Check if singular values are close enough to allow deflation. */ - - if ((d__1 = d__[j] - d__[jprev], abs(d__1)) <= tol) { - -/* Deflation is possible. */ - - *s = z__[jprev]; - *c__ = z__[j]; - -/* - Find sqrt(a**2+b**2) without overflow or - destructive underflow. -*/ - - tau = dlapy2_(c__, s); - z__[j] = tau; - z__[jprev] = 0.; - *c__ /= tau; - *s = -(*s) / tau; - -/* Record the appropriate Givens rotation */ - - if (*icompq == 1) { - ++(*givptr); - idxjp = idxq[idx[jprev] + 1]; - idxj = idxq[idx[j] + 1]; - if (idxjp <= nlp1) { - --idxjp; - } - if (idxj <= nlp1) { - --idxj; - } - givcol[*givptr + ((givcol_dim1) << (1))] = idxjp; - givcol[*givptr + givcol_dim1] = idxj; - givnum[*givptr + ((givnum_dim1) << (1))] = *c__; - givnum[*givptr + givnum_dim1] = *s; - } - drot_(&c__1, &vf[jprev], &c__1, &vf[j], &c__1, c__, s); - drot_(&c__1, &vl[jprev], &c__1, &vl[j], &c__1, c__, s); - --k2; - idxp[k2] = jprev; - jprev = j; - } else { - ++(*k); - zw[*k] = z__[jprev]; - dsigma[*k] = d__[jprev]; - idxp[*k] = jprev; - jprev = j; - } - } - goto L80; -L90: - -/* Record the last singular value. */ - - ++(*k); - zw[*k] = z__[jprev]; - dsigma[*k] = d__[jprev]; - idxp[*k] = jprev; - -L100: - -/* - Sort the singular values into DSIGMA. The singular values which - were not deflated go into the first K slots of DSIGMA, except - that DSIGMA(1) is treated separately. -*/ - - i__1 = n; - for (j = 2; j <= i__1; ++j) { - jp = idxp[j]; - dsigma[j] = d__[jp]; - vfw[j] = vf[jp]; - vlw[j] = vl[jp]; -/* L110: */ - } - if (*icompq == 1) { - i__1 = n; - for (j = 2; j <= i__1; ++j) { - jp = idxp[j]; - perm[j] = idxq[idx[jp] + 1]; - if (perm[j] <= nlp1) { - --perm[j]; - } -/* L120: */ - } - } - -/* - The deflated singular values go back into the last N - K slots of - D. -*/ - - i__1 = n - *k; - dcopy_(&i__1, &dsigma[*k + 1], &c__1, &d__[*k + 1], &c__1); - -/* - Determine DSIGMA(1), DSIGMA(2), Z(1), VF(1), VL(1), VF(M), and - VL(M). 
-*/ - - dsigma[1] = 0.; - hlftol = tol / 2.; - if (abs(dsigma[2]) <= hlftol) { - dsigma[2] = hlftol; - } - if (m > n) { - z__[1] = dlapy2_(&z1, &z__[m]); - if (z__[1] <= tol) { - *c__ = 1.; - *s = 0.; - z__[1] = tol; - } else { - *c__ = z1 / z__[1]; - *s = -z__[m] / z__[1]; - } - drot_(&c__1, &vf[m], &c__1, &vf[1], &c__1, c__, s); - drot_(&c__1, &vl[m], &c__1, &vl[1], &c__1, c__, s); - } else { - if (abs(z1) <= tol) { - z__[1] = tol; - } else { - z__[1] = z1; - } - } - -/* Restore Z, VF, and VL. */ - - i__1 = *k - 1; - dcopy_(&i__1, &zw[2], &c__1, &z__[2], &c__1); - i__1 = n - 1; - dcopy_(&i__1, &vfw[2], &c__1, &vf[2], &c__1); - i__1 = n - 1; - dcopy_(&i__1, &vlw[2], &c__1, &vl[2], &c__1); - - return 0; - -/* End of DLASD7 */ - -} /* dlasd7_ */ - -/* Subroutine */ int dlasd8_(integer *icompq, integer *k, doublereal *d__, - doublereal *z__, doublereal *vf, doublereal *vl, doublereal *difl, - doublereal *difr, integer *lddifr, doublereal *dsigma, doublereal * - work, integer *info) -{ - /* System generated locals */ - integer difr_dim1, difr_offset, i__1, i__2; - doublereal d__1, d__2; - - /* Builtin functions */ - double sqrt(doublereal), d_sign(doublereal *, doublereal *); - - /* Local variables */ - static integer i__, j; - static doublereal dj, rho; - static integer iwk1, iwk2, iwk3; - extern doublereal ddot_(integer *, doublereal *, integer *, doublereal *, - integer *); - static doublereal temp; - extern doublereal dnrm2_(integer *, doublereal *, integer *); - static integer iwk2i, iwk3i; - static doublereal diflj, difrj, dsigj; - extern /* Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *); - extern doublereal dlamc3_(doublereal *, doublereal *); - extern /* Subroutine */ int dlasd4_(integer *, integer *, doublereal *, - doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *, integer *), dlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, doublereal *, - integer *, 
integer *), dlaset_(char *, integer *, integer - *, doublereal *, doublereal *, doublereal *, integer *), - xerbla_(char *, integer *); - static doublereal dsigjp; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Oak Ridge National Lab, Argonne National Lab, - Courant Institute, NAG Ltd., and Rice University - June 30, 1999 - - - Purpose - ======= - - DLASD8 finds the square roots of the roots of the secular equation, - as defined by the values in DSIGMA and Z. It makes the appropriate - calls to DLASD4, and stores, for each element in D, the distance - to its two nearest poles (elements in DSIGMA). It also updates - the arrays VF and VL, the first and last components of all the - right singular vectors of the original bidiagonal matrix. - - DLASD8 is called from DLASD6. - - Arguments - ========= - - ICOMPQ (input) INTEGER - Specifies whether singular vectors are to be computed in - factored form in the calling routine: - = 0: Compute singular values only. - = 1: Compute singular vectors in factored form as well. - - K (input) INTEGER - The number of terms in the rational function to be solved - by DLASD4. K >= 1. - - D (output) DOUBLE PRECISION array, dimension ( K ) - On output, D contains the updated singular values. - - Z (input) DOUBLE PRECISION array, dimension ( K ) - The first K elements of this array contain the components - of the deflation-adjusted updating row vector. - - VF (input/output) DOUBLE PRECISION array, dimension ( K ) - On entry, VF contains information passed through DBEDE8. - On exit, VF contains the first K components of the first - components of all right singular vectors of the bidiagonal - matrix. - - VL (input/output) DOUBLE PRECISION array, dimension ( K ) - On entry, VL contains information passed through DBEDE8. - On exit, VL contains the first K components of the last - components of all right singular vectors of the bidiagonal - matrix. 
- - DIFL (output) DOUBLE PRECISION array, dimension ( K ) - On exit, DIFL(I) = D(I) - DSIGMA(I). - - DIFR (output) DOUBLE PRECISION array, - dimension ( LDDIFR, 2 ) if ICOMPQ = 1 and - dimension ( K ) if ICOMPQ = 0. - On exit, DIFR(I,1) = D(I) - DSIGMA(I+1), DIFR(K,1) is not - defined and will not be referenced. - - If ICOMPQ = 1, DIFR(1:K,2) is an array containing the - normalizing factors for the right singular vector matrix. - - LDDIFR (input) INTEGER - The leading dimension of DIFR, must be at least K. - - DSIGMA (input) DOUBLE PRECISION array, dimension ( K ) - The first K elements of this array contain the old roots - of the deflated updating problem. These are the poles - of the secular equation. - - WORK (workspace) DOUBLE PRECISION array, dimension at least 3 * K - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: if INFO = 1, a singular value did not converge - - Further Details - =============== - - Based on contributions by - Ming Gu and Huan Ren, Computer Science Division, University of - California at Berkeley, USA - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --d__; - --z__; - --vf; - --vl; - --difl; - difr_dim1 = *lddifr; - difr_offset = 1 + difr_dim1 * 1; - difr -= difr_offset; - --dsigma; - --work; - - /* Function Body */ - *info = 0; - - if (*icompq < 0 || *icompq > 1) { - *info = -1; - } else if (*k < 1) { - *info = -2; - } else if (*lddifr < *k) { - *info = -9; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLASD8", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*k == 1) { - d__[1] = abs(z__[1]); - difl[1] = d__[1]; - if (*icompq == 1) { - difl[2] = 1.; - difr[((difr_dim1) << (1)) + 1] = 1.; - } - return 0; - } - -/* - Modify values DSIGMA(i) to make sure all DSIGMA(i)-DSIGMA(j) can - be computed with high relative accuracy (barring over/underflow). 
- This is a problem on machines without a guard digit in - add/subtract (Cray XMP, Cray YMP, Cray C 90 and Cray 2). - The following code replaces DSIGMA(I) by 2*DSIGMA(I)-DSIGMA(I), - which on any of these machines zeros out the bottommost - bit of DSIGMA(I) if it is 1; this makes the subsequent - subtractions DSIGMA(I)-DSIGMA(J) unproblematic when cancellation - occurs. On binary machines with a guard digit (almost all - machines) it does not change DSIGMA(I) at all. On hexadecimal - and decimal machines with a guard digit, it slightly - changes the bottommost bits of DSIGMA(I). It does not account - for hexadecimal or decimal machines without guard digits - (we know of none). We use a subroutine call to compute - 2*DLAMBDA(I) to prevent optimizing compilers from eliminating - this code. -*/ - - i__1 = *k; - for (i__ = 1; i__ <= i__1; ++i__) { - dsigma[i__] = dlamc3_(&dsigma[i__], &dsigma[i__]) - dsigma[i__]; -/* L10: */ - } - -/* Book keeping. */ - - iwk1 = 1; - iwk2 = iwk1 + *k; - iwk3 = iwk2 + *k; - iwk2i = iwk2 - 1; - iwk3i = iwk3 - 1; - -/* Normalize Z. */ - - rho = dnrm2_(k, &z__[1], &c__1); - dlascl_("G", &c__0, &c__0, &rho, &c_b15, k, &c__1, &z__[1], k, info); - rho *= rho; - -/* Initialize WORK(IWK3). */ - - dlaset_("A", k, &c__1, &c_b15, &c_b15, &work[iwk3], k); - -/* - Compute the updated singular values, the arrays DIFL, DIFR, - and the updated Z. -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - dlasd4_(k, &j, &dsigma[1], &z__[1], &work[iwk1], &rho, &d__[j], &work[ - iwk2], info); - -/* If the root finder fails, the computation is terminated. 
*/ - - if (*info != 0) { - return 0; - } - work[iwk3i + j] = work[iwk3i + j] * work[j] * work[iwk2i + j]; - difl[j] = -work[j]; - difr[j + difr_dim1] = -work[j + 1]; - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - work[iwk3i + i__] = work[iwk3i + i__] * work[i__] * work[iwk2i + - i__] / (dsigma[i__] - dsigma[j]) / (dsigma[i__] + dsigma[ - j]); -/* L20: */ - } - i__2 = *k; - for (i__ = j + 1; i__ <= i__2; ++i__) { - work[iwk3i + i__] = work[iwk3i + i__] * work[i__] * work[iwk2i + - i__] / (dsigma[i__] - dsigma[j]) / (dsigma[i__] + dsigma[ - j]); -/* L30: */ - } -/* L40: */ - } - -/* Compute updated Z. */ - - i__1 = *k; - for (i__ = 1; i__ <= i__1; ++i__) { - d__2 = sqrt((d__1 = work[iwk3i + i__], abs(d__1))); - z__[i__] = d_sign(&d__2, &z__[i__]); -/* L50: */ - } - -/* Update VF and VL. */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - diflj = difl[j]; - dj = d__[j]; - dsigj = -dsigma[j]; - if (j < *k) { - difrj = -difr[j + difr_dim1]; - dsigjp = -dsigma[j + 1]; - } - work[j] = -z__[j] / diflj / (dsigma[j] + dj); - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - work[i__] = z__[i__] / (dlamc3_(&dsigma[i__], &dsigj) - diflj) / ( - dsigma[i__] + dj); -/* L60: */ - } - i__2 = *k; - for (i__ = j + 1; i__ <= i__2; ++i__) { - work[i__] = z__[i__] / (dlamc3_(&dsigma[i__], &dsigjp) + difrj) / - (dsigma[i__] + dj); -/* L70: */ - } - temp = dnrm2_(k, &work[1], &c__1); - work[iwk2i + j] = ddot_(k, &work[1], &c__1, &vf[1], &c__1) / temp; - work[iwk3i + j] = ddot_(k, &work[1], &c__1, &vl[1], &c__1) / temp; - if (*icompq == 1) { - difr[j + ((difr_dim1) << (1))] = temp; - } -/* L80: */ - } - - dcopy_(k, &work[iwk2], &c__1, &vf[1], &c__1); - dcopy_(k, &work[iwk3], &c__1, &vl[1], &c__1); - - return 0; - -/* End of DLASD8 */ - -} /* dlasd8_ */ - -/* Subroutine */ int dlasda_(integer *icompq, integer *smlsiz, integer *n, - integer *sqre, doublereal *d__, doublereal *e, doublereal *u, integer - *ldu, doublereal *vt, integer *k, doublereal *difl, doublereal *difr, - 
doublereal *z__, doublereal *poles, integer *givptr, integer *givcol, - integer *ldgcol, integer *perm, doublereal *givnum, doublereal *c__, - doublereal *s, doublereal *work, integer *iwork, integer *info) -{ - /* System generated locals */ - integer givcol_dim1, givcol_offset, perm_dim1, perm_offset, difl_dim1, - difl_offset, difr_dim1, difr_offset, givnum_dim1, givnum_offset, - poles_dim1, poles_offset, u_dim1, u_offset, vt_dim1, vt_offset, - z_dim1, z_offset, i__1, i__2; - - /* Builtin functions */ - integer pow_ii(integer *, integer *); - - /* Local variables */ - static integer i__, j, m, i1, ic, lf, nd, ll, nl, vf, nr, vl, im1, ncc, - nlf, nrf, vfi, iwk, vli, lvl, nru, ndb1, nlp1, lvl2, nrp1; - static doublereal beta; - static integer idxq, nlvl; - static doublereal alpha; - static integer inode, ndiml, ndimr, idxqi, itemp; - extern /* Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *); - static integer sqrei; - extern /* Subroutine */ int dlasd6_(integer *, integer *, integer *, - integer *, doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *, integer *, integer *, integer *, integer *, - integer *, doublereal *, integer *, doublereal *, doublereal *, - doublereal *, doublereal *, integer *, doublereal *, doublereal *, - doublereal *, integer *, integer *); - static integer nwork1, nwork2; - extern /* Subroutine */ int dlasdq_(char *, integer *, integer *, integer - *, integer *, integer *, doublereal *, doublereal *, doublereal *, - integer *, doublereal *, integer *, doublereal *, integer *, - doublereal *, integer *), dlasdt_(integer *, integer *, - integer *, integer *, integer *, integer *, integer *), dlaset_( - char *, integer *, integer *, doublereal *, doublereal *, - doublereal *, integer *), xerbla_(char *, integer *); - static integer smlszp; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1999 - - - Purpose - ======= - - Using a divide and conquer approach, DLASDA computes the singular - value decomposition (SVD) of a real upper bidiagonal N-by-M matrix - B with diagonal D and offdiagonal E, where M = N + SQRE. The - algorithm computes the singular values in the SVD B = U * S * VT. - The orthogonal matrices U and VT are optionally computed in - compact form. - - A related subroutine, DLASD0, computes the singular values and - the singular vectors in explicit form. - - Arguments - ========= - - ICOMPQ (input) INTEGER - Specifies whether singular vectors are to be computed - in compact form, as follows - = 0: Compute singular values only. - = 1: Compute singular vectors of upper bidiagonal - matrix in compact form. - - SMLSIZ (input) INTEGER - The maximum size of the subproblems at the bottom of the - computation tree. - - N (input) INTEGER - The row dimension of the upper bidiagonal matrix. This is - also the dimension of the main diagonal array D. - - SQRE (input) INTEGER - Specifies the column dimension of the bidiagonal matrix. - = 0: The bidiagonal matrix has column dimension M = N; - = 1: The bidiagonal matrix has column dimension M = N + 1. - - D (input/output) DOUBLE PRECISION array, dimension ( N ) - On entry D contains the main diagonal of the bidiagonal - matrix. On exit D, if INFO = 0, contains its singular values. - - E (input) DOUBLE PRECISION array, dimension ( M-1 ) - Contains the subdiagonal entries of the bidiagonal matrix. - On exit, E has been destroyed. - - U (output) DOUBLE PRECISION array, - dimension ( LDU, SMLSIZ ) if ICOMPQ = 1, and not referenced - if ICOMPQ = 0. If ICOMPQ = 1, on exit, U contains the left - singular vector matrices of all subproblems at the bottom - level. - - LDU (input) INTEGER, LDU = > N. - The leading dimension of arrays U, VT, DIFL, DIFR, POLES, - GIVNUM, and Z. 
- - VT (output) DOUBLE PRECISION array, - dimension ( LDU, SMLSIZ+1 ) if ICOMPQ = 1, and not referenced - if ICOMPQ = 0. If ICOMPQ = 1, on exit, VT' contains the right - singular vector matrices of all subproblems at the bottom - level. - - K (output) INTEGER array, - dimension ( N ) if ICOMPQ = 1 and dimension 1 if ICOMPQ = 0. - If ICOMPQ = 1, on exit, K(I) is the dimension of the I-th - secular equation on the computation tree. - - DIFL (output) DOUBLE PRECISION array, dimension ( LDU, NLVL ), - where NLVL = floor(log_2 (N/SMLSIZ)). - - DIFR (output) DOUBLE PRECISION array, - dimension ( LDU, 2 * NLVL ) if ICOMPQ = 1 and - dimension ( N ) if ICOMPQ = 0. - If ICOMPQ = 1, on exit, DIFL(1:N, I) and DIFR(1:N, 2 * I - 1) - record distances between singular values on the I-th - level and singular values on the (I -1)-th level, and - DIFR(1:N, 2 * I ) contains the normalizing factors for - the right singular vector matrix. See DLASD8 for details. - - Z (output) DOUBLE PRECISION array, - dimension ( LDU, NLVL ) if ICOMPQ = 1 and - dimension ( N ) if ICOMPQ = 0. - The first K elements of Z(1, I) contain the components of - the deflation-adjusted updating row vector for subproblems - on the I-th level. - - POLES (output) DOUBLE PRECISION array, - dimension ( LDU, 2 * NLVL ) if ICOMPQ = 1, and not referenced - if ICOMPQ = 0. If ICOMPQ = 1, on exit, POLES(1, 2*I - 1) and - POLES(1, 2*I) contain the new and old singular values - involved in the secular equations on the I-th level. - - GIVPTR (output) INTEGER array, - dimension ( N ) if ICOMPQ = 1, and not referenced if - ICOMPQ = 0. If ICOMPQ = 1, on exit, GIVPTR( I ) records - the number of Givens rotations performed on the I-th - problem on the computation tree. - - GIVCOL (output) INTEGER array, - dimension ( LDGCOL, 2 * NLVL ) if ICOMPQ = 1, and not - referenced if ICOMPQ = 0. 
If ICOMPQ = 1, on exit, for each I, - GIVCOL(1, 2 *I - 1) and GIVCOL(1, 2 *I) record the locations - of Givens rotations performed on the I-th level on the - computation tree. - - LDGCOL (input) INTEGER, LDGCOL = > N. - The leading dimension of arrays GIVCOL and PERM. - - PERM (output) INTEGER array, - dimension ( LDGCOL, NLVL ) if ICOMPQ = 1, and not referenced - if ICOMPQ = 0. If ICOMPQ = 1, on exit, PERM(1, I) records - permutations done on the I-th level of the computation tree. - - GIVNUM (output) DOUBLE PRECISION array, - dimension ( LDU, 2 * NLVL ) if ICOMPQ = 1, and not - referenced if ICOMPQ = 0. If ICOMPQ = 1, on exit, for each I, - GIVNUM(1, 2 *I - 1) and GIVNUM(1, 2 *I) record the C- and S- - values of Givens rotations performed on the I-th level on - the computation tree. - - C (output) DOUBLE PRECISION array, - dimension ( N ) if ICOMPQ = 1, and dimension 1 if ICOMPQ = 0. - If ICOMPQ = 1 and the I-th subproblem is not square, on exit, - C( I ) contains the C-value of a Givens rotation related to - the right null space of the I-th subproblem. - - S (output) DOUBLE PRECISION array, dimension ( N ) if - ICOMPQ = 1, and dimension 1 if ICOMPQ = 0. If ICOMPQ = 1 - and the I-th subproblem is not square, on exit, S( I ) - contains the S-value of a Givens rotation related to - the right null space of the I-th subproblem. - - WORK (workspace) DOUBLE PRECISION array, dimension - (6 * N + (SMLSIZ + 1)*(SMLSIZ + 1)). - - IWORK (workspace) INTEGER array. - Dimension must be at least (7 * N). - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: if INFO = 1, a singular value did not converge - - Further Details - =============== - - Based on contributions by - Ming Gu and Huan Ren, Computer Science Division, University of - California at Berkeley, USA - - ===================================================================== - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - --d__; - --e; - givnum_dim1 = *ldu; - givnum_offset = 1 + givnum_dim1 * 1; - givnum -= givnum_offset; - poles_dim1 = *ldu; - poles_offset = 1 + poles_dim1 * 1; - poles -= poles_offset; - z_dim1 = *ldu; - z_offset = 1 + z_dim1 * 1; - z__ -= z_offset; - difr_dim1 = *ldu; - difr_offset = 1 + difr_dim1 * 1; - difr -= difr_offset; - difl_dim1 = *ldu; - difl_offset = 1 + difl_dim1 * 1; - difl -= difl_offset; - vt_dim1 = *ldu; - vt_offset = 1 + vt_dim1 * 1; - vt -= vt_offset; - u_dim1 = *ldu; - u_offset = 1 + u_dim1 * 1; - u -= u_offset; - --k; - --givptr; - perm_dim1 = *ldgcol; - perm_offset = 1 + perm_dim1 * 1; - perm -= perm_offset; - givcol_dim1 = *ldgcol; - givcol_offset = 1 + givcol_dim1 * 1; - givcol -= givcol_offset; - --c__; - --s; - --work; - --iwork; - - /* Function Body */ - *info = 0; - - if (*icompq < 0 || *icompq > 1) { - *info = -1; - } else if (*smlsiz < 3) { - *info = -2; - } else if (*n < 0) { - *info = -3; - } else if (*sqre < 0 || *sqre > 1) { - *info = -4; - } else if (*ldu < *n + *sqre) { - *info = -8; - } else if (*ldgcol < *n) { - *info = -17; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLASDA", &i__1); - return 0; - } - - m = *n + *sqre; - -/* If the input matrix is too small, call DLASDQ to find the SVD. */ - - if (*n <= *smlsiz) { - if (*icompq == 0) { - dlasdq_("U", sqre, n, &c__0, &c__0, &c__0, &d__[1], &e[1], &vt[ - vt_offset], ldu, &u[u_offset], ldu, &u[u_offset], ldu, & - work[1], info); - } else { - dlasdq_("U", sqre, n, &m, n, &c__0, &d__[1], &e[1], &vt[vt_offset] - , ldu, &u[u_offset], ldu, &u[u_offset], ldu, &work[1], - info); - } - return 0; - } - -/* Book-keeping and set up the computation tree. 
*/ - - inode = 1; - ndiml = inode + *n; - ndimr = ndiml + *n; - idxq = ndimr + *n; - iwk = idxq + *n; - - ncc = 0; - nru = 0; - - smlszp = *smlsiz + 1; - vf = 1; - vl = vf + m; - nwork1 = vl + m; - nwork2 = nwork1 + smlszp * smlszp; - - dlasdt_(n, &nlvl, &nd, &iwork[inode], &iwork[ndiml], &iwork[ndimr], - smlsiz); - -/* - for the nodes on bottom level of the tree, solve - their subproblems by DLASDQ. -*/ - - ndb1 = (nd + 1) / 2; - i__1 = nd; - for (i__ = ndb1; i__ <= i__1; ++i__) { - -/* - IC : center row of each node - NL : number of rows of left subproblem - NR : number of rows of right subproblem - NLF: starting row of the left subproblem - NRF: starting row of the right subproblem -*/ - - i1 = i__ - 1; - ic = iwork[inode + i1]; - nl = iwork[ndiml + i1]; - nlp1 = nl + 1; - nr = iwork[ndimr + i1]; - nlf = ic - nl; - nrf = ic + 1; - idxqi = idxq + nlf - 2; - vfi = vf + nlf - 1; - vli = vl + nlf - 1; - sqrei = 1; - if (*icompq == 0) { - dlaset_("A", &nlp1, &nlp1, &c_b29, &c_b15, &work[nwork1], &smlszp); - dlasdq_("U", &sqrei, &nl, &nlp1, &nru, &ncc, &d__[nlf], &e[nlf], & - work[nwork1], &smlszp, &work[nwork2], &nl, &work[nwork2], - &nl, &work[nwork2], info); - itemp = nwork1 + nl * smlszp; - dcopy_(&nlp1, &work[nwork1], &c__1, &work[vfi], &c__1); - dcopy_(&nlp1, &work[itemp], &c__1, &work[vli], &c__1); - } else { - dlaset_("A", &nl, &nl, &c_b29, &c_b15, &u[nlf + u_dim1], ldu); - dlaset_("A", &nlp1, &nlp1, &c_b29, &c_b15, &vt[nlf + vt_dim1], - ldu); - dlasdq_("U", &sqrei, &nl, &nlp1, &nl, &ncc, &d__[nlf], &e[nlf], & - vt[nlf + vt_dim1], ldu, &u[nlf + u_dim1], ldu, &u[nlf + - u_dim1], ldu, &work[nwork1], info); - dcopy_(&nlp1, &vt[nlf + vt_dim1], &c__1, &work[vfi], &c__1); - dcopy_(&nlp1, &vt[nlf + nlp1 * vt_dim1], &c__1, &work[vli], &c__1) - ; - } - if (*info != 0) { - return 0; - } - i__2 = nl; - for (j = 1; j <= i__2; ++j) { - iwork[idxqi + j] = j; -/* L10: */ - } - if ((i__ == nd && *sqre == 0)) { - sqrei = 0; - } else { - sqrei = 1; - } - idxqi += nlp1; - vfi += 
nlp1; - vli += nlp1; - nrp1 = nr + sqrei; - if (*icompq == 0) { - dlaset_("A", &nrp1, &nrp1, &c_b29, &c_b15, &work[nwork1], &smlszp); - dlasdq_("U", &sqrei, &nr, &nrp1, &nru, &ncc, &d__[nrf], &e[nrf], & - work[nwork1], &smlszp, &work[nwork2], &nr, &work[nwork2], - &nr, &work[nwork2], info); - itemp = nwork1 + (nrp1 - 1) * smlszp; - dcopy_(&nrp1, &work[nwork1], &c__1, &work[vfi], &c__1); - dcopy_(&nrp1, &work[itemp], &c__1, &work[vli], &c__1); - } else { - dlaset_("A", &nr, &nr, &c_b29, &c_b15, &u[nrf + u_dim1], ldu); - dlaset_("A", &nrp1, &nrp1, &c_b29, &c_b15, &vt[nrf + vt_dim1], - ldu); - dlasdq_("U", &sqrei, &nr, &nrp1, &nr, &ncc, &d__[nrf], &e[nrf], & - vt[nrf + vt_dim1], ldu, &u[nrf + u_dim1], ldu, &u[nrf + - u_dim1], ldu, &work[nwork1], info); - dcopy_(&nrp1, &vt[nrf + vt_dim1], &c__1, &work[vfi], &c__1); - dcopy_(&nrp1, &vt[nrf + nrp1 * vt_dim1], &c__1, &work[vli], &c__1) - ; - } - if (*info != 0) { - return 0; - } - i__2 = nr; - for (j = 1; j <= i__2; ++j) { - iwork[idxqi + j] = j; -/* L20: */ - } -/* L30: */ - } - -/* Now conquer each subproblem bottom-up. */ - - j = pow_ii(&c__2, &nlvl); - for (lvl = nlvl; lvl >= 1; --lvl) { - lvl2 = ((lvl) << (1)) - 1; - -/* - Find the first node LF and last node LL on - the current level LVL. 
-*/ - - if (lvl == 1) { - lf = 1; - ll = 1; - } else { - i__1 = lvl - 1; - lf = pow_ii(&c__2, &i__1); - ll = ((lf) << (1)) - 1; - } - i__1 = ll; - for (i__ = lf; i__ <= i__1; ++i__) { - im1 = i__ - 1; - ic = iwork[inode + im1]; - nl = iwork[ndiml + im1]; - nr = iwork[ndimr + im1]; - nlf = ic - nl; - nrf = ic + 1; - if (i__ == ll) { - sqrei = *sqre; - } else { - sqrei = 1; - } - vfi = vf + nlf - 1; - vli = vl + nlf - 1; - idxqi = idxq + nlf - 1; - alpha = d__[ic]; - beta = e[ic]; - if (*icompq == 0) { - dlasd6_(icompq, &nl, &nr, &sqrei, &d__[nlf], &work[vfi], & - work[vli], &alpha, &beta, &iwork[idxqi], &perm[ - perm_offset], &givptr[1], &givcol[givcol_offset], - ldgcol, &givnum[givnum_offset], ldu, &poles[ - poles_offset], &difl[difl_offset], &difr[difr_offset], - &z__[z_offset], &k[1], &c__[1], &s[1], &work[nwork1], - &iwork[iwk], info); - } else { - --j; - dlasd6_(icompq, &nl, &nr, &sqrei, &d__[nlf], &work[vfi], & - work[vli], &alpha, &beta, &iwork[idxqi], &perm[nlf + - lvl * perm_dim1], &givptr[j], &givcol[nlf + lvl2 * - givcol_dim1], ldgcol, &givnum[nlf + lvl2 * - givnum_dim1], ldu, &poles[nlf + lvl2 * poles_dim1], & - difl[nlf + lvl * difl_dim1], &difr[nlf + lvl2 * - difr_dim1], &z__[nlf + lvl * z_dim1], &k[j], &c__[j], - &s[j], &work[nwork1], &iwork[iwk], info); - } - if (*info != 0) { - return 0; - } -/* L40: */ - } -/* L50: */ - } - - return 0; - -/* End of DLASDA */ - -} /* dlasda_ */ - -/* Subroutine */ int dlasdq_(char *uplo, integer *sqre, integer *n, integer * - ncvt, integer *nru, integer *ncc, doublereal *d__, doublereal *e, - doublereal *vt, integer *ldvt, doublereal *u, integer *ldu, - doublereal *c__, integer *ldc, doublereal *work, integer *info) -{ - /* System generated locals */ - integer c_dim1, c_offset, u_dim1, u_offset, vt_dim1, vt_offset, i__1, - i__2; - - /* Local variables */ - static integer i__, j; - static doublereal r__, cs, sn; - static integer np1, isub; - static doublereal smin; - static integer sqre1; - extern logical lsame_(char 
*, char *); - extern /* Subroutine */ int dlasr_(char *, char *, char *, integer *, - integer *, doublereal *, doublereal *, doublereal *, integer *), dswap_(integer *, doublereal *, integer * - , doublereal *, integer *); - static integer iuplo; - extern /* Subroutine */ int dlartg_(doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *), xerbla_(char *, - integer *), dbdsqr_(char *, integer *, integer *, integer - *, integer *, doublereal *, doublereal *, doublereal *, integer *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - integer *); - static logical rotate; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1999 - - - Purpose - ======= - - DLASDQ computes the singular value decomposition (SVD) of a real - (upper or lower) bidiagonal matrix with diagonal D and offdiagonal - E, accumulating the transformations if desired. Letting B denote - the input bidiagonal matrix, the algorithm computes orthogonal - matrices Q and P such that B = Q * S * P' (P' denotes the transpose - of P). The singular values S are overwritten on D. - - The input matrix U is changed to U * Q if desired. - The input matrix VT is changed to P' * VT if desired. - The input matrix C is changed to Q' * C if desired. - - See "Computing Small Singular Values of Bidiagonal Matrices With - Guaranteed High Relative Accuracy," by J. Demmel and W. Kahan, - LAPACK Working Note #3, for a detailed description of the algorithm. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - On entry, UPLO specifies whether the input bidiagonal matrix - is upper or lower bidiagonal, and whether it is square or - not. - UPLO = 'U' or 'u' B is upper bidiagonal. - UPLO = 'L' or 'l' B is lower bidiagonal. - - SQRE (input) INTEGER - = 0: then the input matrix is N-by-N. 
- = 1: then the input matrix is N-by-(N+1) if UPLO = 'U' and - (N+1)-by-N if UPLO = 'L'. - - The bidiagonal matrix has - N = NL + NR + 1 rows and - M = N + SQRE >= N columns. - - N (input) INTEGER - On entry, N specifies the number of rows and columns - in the matrix. N must be at least 0. - - NCVT (input) INTEGER - On entry, NCVT specifies the number of columns of - the matrix VT. NCVT must be at least 0. - - NRU (input) INTEGER - On entry, NRU specifies the number of rows of - the matrix U. NRU must be at least 0. - - NCC (input) INTEGER - On entry, NCC specifies the number of columns of - the matrix C. NCC must be at least 0. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, D contains the diagonal entries of the - bidiagonal matrix whose SVD is desired. On normal exit, - D contains the singular values in ascending order. - - E (input/output) DOUBLE PRECISION array. - dimension is (N-1) if SQRE = 0 and N if SQRE = 1. - On entry, the entries of E contain the offdiagonal entries - of the bidiagonal matrix whose SVD is desired. On normal - exit, E will contain 0. If the algorithm does not converge, - D and E will contain the diagonal and superdiagonal entries - of a bidiagonal matrix orthogonally equivalent to the one - given as input. - - VT (input/output) DOUBLE PRECISION array, dimension (LDVT, NCVT) - On entry, contains a matrix which on exit has been - premultiplied by P', dimension N-by-NCVT if SQRE = 0 - and (N+1)-by-NCVT if SQRE = 1 (not referenced if NCVT=0). - - LDVT (input) INTEGER - On entry, LDVT specifies the leading dimension of VT as - declared in the calling (sub) program. LDVT must be at - least 1. If NCVT is nonzero, LDVT must also be at least N. - - U (input/output) DOUBLE PRECISION array, dimension (LDU, N) - On entry, contains a matrix which on exit has been - postmultiplied by Q, dimension NRU-by-N if SQRE = 0 - and NRU-by-(N+1) if SQRE = 1 (not referenced if NRU=0). 
- - LDU (input) INTEGER - On entry, LDU specifies the leading dimension of U as - declared in the calling (sub) program. LDU must be at - least max( 1, NRU ) . - - C (input/output) DOUBLE PRECISION array, dimension (LDC, NCC) - On entry, contains an N-by-NCC matrix which on exit - has been premultiplied by Q' dimension N-by-NCC if SQRE = 0 - and (N+1)-by-NCC if SQRE = 1 (not referenced if NCC=0). - - LDC (input) INTEGER - On entry, LDC specifies the leading dimension of C as - declared in the calling (sub) program. LDC must be at - least 1. If NCC is nonzero, LDC must also be at least N. - - WORK (workspace) DOUBLE PRECISION array, dimension (4*N) - Workspace. Only referenced if one of NCVT, NRU, or NCC is - nonzero, and if N is at least 2. - - INFO (output) INTEGER - On exit, a value of 0 indicates a successful exit. - If INFO < 0, argument number -INFO is illegal. - If INFO > 0, the algorithm did not converge, and INFO - specifies how many superdiagonals did not converge. - - Further Details - =============== - - Based on contributions by - Ming Gu and Huan Ren, Computer Science Division, University of - California at Berkeley, USA - - ===================================================================== - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - --d__; - --e; - vt_dim1 = *ldvt; - vt_offset = 1 + vt_dim1 * 1; - vt -= vt_offset; - u_dim1 = *ldu; - u_offset = 1 + u_dim1 * 1; - u -= u_offset; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - iuplo = 0; - if (lsame_(uplo, "U")) { - iuplo = 1; - } - if (lsame_(uplo, "L")) { - iuplo = 2; - } - if (iuplo == 0) { - *info = -1; - } else if (*sqre < 0 || *sqre > 1) { - *info = -2; - } else if (*n < 0) { - *info = -3; - } else if (*ncvt < 0) { - *info = -4; - } else if (*nru < 0) { - *info = -5; - } else if (*ncc < 0) { - *info = -6; - } else if ((*ncvt == 0 && *ldvt < 1) || (*ncvt > 0 && *ldvt < max(1,*n))) - { - *info = -10; - } else if (*ldu < max(1,*nru)) { - *info = -12; - } else if ((*ncc == 0 && *ldc < 1) || (*ncc > 0 && *ldc < max(1,*n))) { - *info = -14; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLASDQ", &i__1); - return 0; - } - if (*n == 0) { - return 0; - } - -/* ROTATE is true if any singular vectors desired, false otherwise */ - - rotate = *ncvt > 0 || *nru > 0 || *ncc > 0; - np1 = *n + 1; - sqre1 = *sqre; - -/* - If matrix non-square upper bidiagonal, rotate to be lower - bidiagonal. The rotations are on the right. -*/ - - if ((iuplo == 1 && sqre1 == 1)) { - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - dlartg_(&d__[i__], &e[i__], &cs, &sn, &r__); - d__[i__] = r__; - e[i__] = sn * d__[i__ + 1]; - d__[i__ + 1] = cs * d__[i__ + 1]; - if (rotate) { - work[i__] = cs; - work[*n + i__] = sn; - } -/* L10: */ - } - dlartg_(&d__[*n], &e[*n], &cs, &sn, &r__); - d__[*n] = r__; - e[*n] = 0.; - if (rotate) { - work[*n] = cs; - work[*n + *n] = sn; - } - iuplo = 2; - sqre1 = 0; - -/* Update singular vectors if desired. */ - - if (*ncvt > 0) { - dlasr_("L", "V", "F", &np1, ncvt, &work[1], &work[np1], &vt[ - vt_offset], ldvt); - } - } - -/* - If matrix lower bidiagonal, rotate to be upper bidiagonal - by applying Givens rotations on the left. 
-*/ - - if (iuplo == 2) { - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - dlartg_(&d__[i__], &e[i__], &cs, &sn, &r__); - d__[i__] = r__; - e[i__] = sn * d__[i__ + 1]; - d__[i__ + 1] = cs * d__[i__ + 1]; - if (rotate) { - work[i__] = cs; - work[*n + i__] = sn; - } -/* L20: */ - } - -/* - If matrix (N+1)-by-N lower bidiagonal, one additional - rotation is needed. -*/ - - if (sqre1 == 1) { - dlartg_(&d__[*n], &e[*n], &cs, &sn, &r__); - d__[*n] = r__; - if (rotate) { - work[*n] = cs; - work[*n + *n] = sn; - } - } - -/* Update singular vectors if desired. */ - - if (*nru > 0) { - if (sqre1 == 0) { - dlasr_("R", "V", "F", nru, n, &work[1], &work[np1], &u[ - u_offset], ldu); - } else { - dlasr_("R", "V", "F", nru, &np1, &work[1], &work[np1], &u[ - u_offset], ldu); - } - } - if (*ncc > 0) { - if (sqre1 == 0) { - dlasr_("L", "V", "F", n, ncc, &work[1], &work[np1], &c__[ - c_offset], ldc); - } else { - dlasr_("L", "V", "F", &np1, ncc, &work[1], &work[np1], &c__[ - c_offset], ldc); - } - } - } - -/* - Call DBDSQR to compute the SVD of the reduced real - N-by-N upper bidiagonal matrix. -*/ - - dbdsqr_("U", n, ncvt, nru, ncc, &d__[1], &e[1], &vt[vt_offset], ldvt, &u[ - u_offset], ldu, &c__[c_offset], ldc, &work[1], info); - -/* - Sort the singular values into ascending order (insertion sort on - singular values, but only one transposition per singular vector) -*/ - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Scan for smallest D(I). */ - - isub = i__; - smin = d__[i__]; - i__2 = *n; - for (j = i__ + 1; j <= i__2; ++j) { - if (d__[j] < smin) { - isub = j; - smin = d__[j]; - } -/* L30: */ - } - if (isub != i__) { - -/* Swap singular values and vectors. 
*/ - - d__[isub] = d__[i__]; - d__[i__] = smin; - if (*ncvt > 0) { - dswap_(ncvt, &vt[isub + vt_dim1], ldvt, &vt[i__ + vt_dim1], - ldvt); - } - if (*nru > 0) { - dswap_(nru, &u[isub * u_dim1 + 1], &c__1, &u[i__ * u_dim1 + 1] - , &c__1); - } - if (*ncc > 0) { - dswap_(ncc, &c__[isub + c_dim1], ldc, &c__[i__ + c_dim1], ldc) - ; - } - } -/* L40: */ - } - - return 0; - -/* End of DLASDQ */ - -} /* dlasdq_ */ - -/* Subroutine */ int dlasdt_(integer *n, integer *lvl, integer *nd, integer * - inode, integer *ndiml, integer *ndimr, integer *msub) -{ - /* System generated locals */ - integer i__1, i__2; - - /* Builtin functions */ - double log(doublereal); - - /* Local variables */ - static integer i__, il, ir, maxn; - static doublereal temp; - static integer nlvl, llst, ncrnt; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DLASDT creates a tree of subproblems for bidiagonal divide and - conquer. - - Arguments - ========= - - N (input) INTEGER - On entry, the number of diagonal elements of the - bidiagonal matrix. - - LVL (output) INTEGER - On exit, the number of levels on the computation tree. - - ND (output) INTEGER - On exit, the number of nodes on the tree. - - INODE (output) INTEGER array, dimension ( N ) - On exit, centers of subproblems. - - NDIML (output) INTEGER array, dimension ( N ) - On exit, row dimensions of left children. - - NDIMR (output) INTEGER array, dimension ( N ) - On exit, row dimensions of right children. - - MSUB (input) INTEGER. - On entry, the maximum row dimension each subproblem at the - bottom of the tree can be of. 
- - Further Details - =============== - - Based on contributions by - Ming Gu and Huan Ren, Computer Science Division, University of - California at Berkeley, USA - - ===================================================================== - - - Find the number of levels on the tree. -*/ - - /* Parameter adjustments */ - --ndimr; - --ndiml; - --inode; - - /* Function Body */ - maxn = max(1,*n); - temp = log((doublereal) maxn / (doublereal) (*msub + 1)) / log(2.); - *lvl = (integer) temp + 1; - - i__ = *n / 2; - inode[1] = i__ + 1; - ndiml[1] = i__; - ndimr[1] = *n - i__ - 1; - il = 0; - ir = 1; - llst = 1; - i__1 = *lvl - 1; - for (nlvl = 1; nlvl <= i__1; ++nlvl) { - -/* - Constructing the tree at (NLVL+1)-st level. The number of - nodes created on this level is LLST * 2. -*/ - - i__2 = llst - 1; - for (i__ = 0; i__ <= i__2; ++i__) { - il += 2; - ir += 2; - ncrnt = llst + i__; - ndiml[il] = ndiml[ncrnt] / 2; - ndimr[il] = ndiml[ncrnt] - ndiml[il] - 1; - inode[il] = inode[ncrnt] - ndimr[il] - 1; - ndiml[ir] = ndimr[ncrnt] / 2; - ndimr[ir] = ndimr[ncrnt] - ndiml[ir] - 1; - inode[ir] = inode[ncrnt] + ndiml[ir] + 1; -/* L10: */ - } - llst <<= 1; -/* L20: */ - } - *nd = ((llst) << (1)) - 1; - - return 0; - -/* End of DLASDT */ - -} /* dlasdt_ */ - -/* Subroutine */ int dlaset_(char *uplo, integer *m, integer *n, doublereal * - alpha, doublereal *beta, doublereal *a, integer *lda) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__, j; - extern logical lsame_(char *, char *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLASET initializes an m-by-n matrix A to BETA on the diagonal and - ALPHA on the offdiagonals. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - Specifies the part of the matrix A to be set. 
- = 'U': Upper triangular part is set; the strictly lower - triangular part of A is not changed. - = 'L': Lower triangular part is set; the strictly upper - triangular part of A is not changed. - Otherwise: All of the matrix A is set. - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - ALPHA (input) DOUBLE PRECISION - The constant to which the offdiagonal elements are to be set. - - BETA (input) DOUBLE PRECISION - The constant to which the diagonal elements are to be set. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On exit, the leading m-by-n submatrix of A is set as follows: - - if UPLO = 'U', A(i,j) = ALPHA, 1<=i<=j-1, 1<=j<=n, - if UPLO = 'L', A(i,j) = ALPHA, j+1<=i<=m, 1<=j<=n, - otherwise, A(i,j) = ALPHA, 1<=i<=m, 1<=j<=n, i.ne.j, - - and, for all UPLO, A(i,i) = BETA, 1<=i<=min(m,n). - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - ===================================================================== -*/ - - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - - /* Function Body */ - if (lsame_(uplo, "U")) { - -/* - Set the strictly upper triangular or trapezoidal part of the - array to ALPHA. -*/ - - i__1 = *n; - for (j = 2; j <= i__1; ++j) { -/* Computing MIN */ - i__3 = j - 1; - i__2 = min(i__3,*m); - for (i__ = 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] = *alpha; -/* L10: */ - } -/* L20: */ - } - - } else if (lsame_(uplo, "L")) { - -/* - Set the strictly lower triangular or trapezoidal part of the - array to ALPHA. -*/ - - i__1 = min(*m,*n); - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = j + 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] = *alpha; -/* L30: */ - } -/* L40: */ - } - - } else { - -/* Set the leading m-by-n submatrix to ALPHA. 
*/ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] = *alpha; -/* L50: */ - } -/* L60: */ - } - } - -/* Set the first min(M,N) diagonal elements to BETA. */ - - i__1 = min(*m,*n); - for (i__ = 1; i__ <= i__1; ++i__) { - a[i__ + i__ * a_dim1] = *beta; -/* L70: */ - } - - return 0; - -/* End of DLASET */ - -} /* dlaset_ */ - -/* Subroutine */ int dlasq1_(integer *n, doublereal *d__, doublereal *e, - doublereal *work, integer *info) -{ - /* System generated locals */ - integer i__1, i__2; - doublereal d__1, d__2, d__3; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static integer i__; - static doublereal eps; - extern /* Subroutine */ int dlas2_(doublereal *, doublereal *, doublereal - *, doublereal *, doublereal *); - static doublereal scale; - static integer iinfo; - static doublereal sigmn; - extern /* Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *); - static doublereal sigmx; - extern /* Subroutine */ int dlasq2_(integer *, doublereal *, integer *); - - extern /* Subroutine */ int dlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, doublereal *, - integer *, integer *); - static doublereal safmin; - extern /* Subroutine */ int xerbla_(char *, integer *), dlasrt_( - char *, integer *, doublereal *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1999 - - - Purpose - ======= - - DLASQ1 computes the singular values of a real N-by-N bidiagonal - matrix with diagonal D and off-diagonal E. The singular values - are computed to high relative accuracy, in the absence of - denormalization, underflow and overflow. The algorithm was first - presented in - - "Accurate singular values and differential qd algorithms" by K. V. - Fernando and B. N. 
Parlett, Numer. Math., Vol-67, No. 2, pp. 191-230, - 1994, - - and the present implementation is described in "An implementation of - the dqds Algorithm (Positive Case)", LAPACK Working Note. - - Arguments - ========= - - N (input) INTEGER - The number of rows and columns in the matrix. N >= 0. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, D contains the diagonal elements of the - bidiagonal matrix whose SVD is desired. On normal exit, - D contains the singular values in decreasing order. - - E (input/output) DOUBLE PRECISION array, dimension (N) - On entry, elements E(1:N-1) contain the off-diagonal elements - of the bidiagonal matrix whose SVD is desired. - On exit, E is overwritten. - - WORK (workspace) DOUBLE PRECISION array, dimension (4*N) - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: the algorithm failed - = 1, a split was marked by a positive value in E - = 2, current block of Z not diagonalized after 30*N - iterations (in inner while loop) - = 3, termination criterion of outer while loop not met - (program created more than N unreduced blocks) - - ===================================================================== -*/ - - - /* Parameter adjustments */ - --work; - --e; - --d__; - - /* Function Body */ - *info = 0; - if (*n < 0) { - *info = -2; - i__1 = -(*info); - xerbla_("DLASQ1", &i__1); - return 0; - } else if (*n == 0) { - return 0; - } else if (*n == 1) { - d__[1] = abs(d__[1]); - return 0; - } else if (*n == 2) { - dlas2_(&d__[1], &e[1], &d__[2], &sigmn, &sigmx); - d__[1] = sigmx; - d__[2] = sigmn; - return 0; - } - -/* Estimate the largest singular value. 
*/ - - sigmx = 0.; - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - d__[i__] = (d__1 = d__[i__], abs(d__1)); -/* Computing MAX */ - d__2 = sigmx, d__3 = (d__1 = e[i__], abs(d__1)); - sigmx = max(d__2,d__3); -/* L10: */ - } - d__[*n] = (d__1 = d__[*n], abs(d__1)); - -/* Early return if SIGMX is zero (matrix is already diagonal). */ - - if (sigmx == 0.) { - dlasrt_("D", n, &d__[1], &iinfo); - return 0; - } - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { -/* Computing MAX */ - d__1 = sigmx, d__2 = d__[i__]; - sigmx = max(d__1,d__2); -/* L20: */ - } - -/* - Copy D and E into WORK (in the Z format) and scale (squaring the - input data makes scaling by a power of the radix pointless). -*/ - - eps = PRECISION; - safmin = SAFEMINIMUM; - scale = sqrt(eps / safmin); - dcopy_(n, &d__[1], &c__1, &work[1], &c__2); - i__1 = *n - 1; - dcopy_(&i__1, &e[1], &c__1, &work[2], &c__2); - i__1 = ((*n) << (1)) - 1; - i__2 = ((*n) << (1)) - 1; - dlascl_("G", &c__0, &c__0, &sigmx, &scale, &i__1, &c__1, &work[1], &i__2, - &iinfo); - -/* Compute the q's and e's. 
*/ - - i__1 = ((*n) << (1)) - 1; - for (i__ = 1; i__ <= i__1; ++i__) { -/* Computing 2nd power */ - d__1 = work[i__]; - work[i__] = d__1 * d__1; -/* L30: */ - } - work[*n * 2] = 0.; - - dlasq2_(n, &work[1], info); - - if (*info == 0) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - d__[i__] = sqrt(work[i__]); -/* L40: */ - } - dlascl_("G", &c__0, &c__0, &scale, &sigmx, n, &c__1, &d__[1], n, & - iinfo); - } - - return 0; - -/* End of DLASQ1 */ - -} /* dlasq1_ */ - -/* Subroutine */ int dlasq2_(integer *n, doublereal *z__, integer *info) -{ - /* System generated locals */ - integer i__1, i__2, i__3; - doublereal d__1, d__2; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal d__, e; - static integer k; - static doublereal s, t; - static integer i0, i4, n0, pp; - static doublereal eps, tol; - static integer ipn4; - static doublereal tol2; - static logical ieee; - static integer nbig; - static doublereal dmin__, emin, emax; - static integer ndiv, iter; - static doublereal qmin, temp, qmax, zmax; - static integer splt, nfail; - static doublereal desig, trace, sigma; - static integer iinfo; - extern /* Subroutine */ int dlasq3_(integer *, integer *, doublereal *, - integer *, doublereal *, doublereal *, doublereal *, doublereal *, - integer *, integer *, integer *, logical *); - - static integer iwhila, iwhilb; - static doublereal oldemn, safmin; - extern /* Subroutine */ int xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int dlasrt_(char *, integer *, doublereal *, - integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1999 - - - Purpose - ======= - - DLASQ2 computes all the eigenvalues of the symmetric positive - definite tridiagonal matrix associated with the qd array Z to high - relative accuracy, in the - absence of denormalization, underflow and overflow. - - To see the relation of Z to the tridiagonal matrix, let L be a - unit lower bidiagonal matrix with subdiagonals Z(2,4,6,...) and - let U be an upper bidiagonal matrix with 1's above and diagonal - Z(1,3,5,...). The tridiagonal is L*U or, if you prefer, the - symmetric tridiagonal to which it is similar. - - Note : DLASQ2 defines a logical variable, IEEE, which is true - on machines which follow the IEEE-754 floating-point standard in their - handling of infinities and NaNs, and false otherwise. This variable - is passed to DLASQ3. - - Arguments - ========= - - N (input) INTEGER - The number of rows and columns in the matrix. N >= 0. - - Z (workspace) DOUBLE PRECISION array, dimension ( 4*N ) - On entry Z holds the qd array. On exit, entries 1 to N hold - the eigenvalues in decreasing order, Z( 2*N+1 ) holds the - trace, and Z( 2*N+2 ) holds the sum of the eigenvalues. If - N > 2, then Z( 2*N+3 ) holds the iteration count, Z( 2*N+4 ) - holds NDIVS/NIN^2, and Z( 2*N+5 ) holds the percentage of - shifts that failed. 
- - INFO (output) INTEGER - = 0: successful exit - < 0: if the i-th argument is a scalar and had an illegal - value, then INFO = -i, if the i-th argument is an - array and the j-entry had an illegal value, then - INFO = -(i*100+j) - > 0: the algorithm failed - = 1, a split was marked by a positive value in E - = 2, current block of Z not diagonalized after 30*N - iterations (in inner while loop) - = 3, termination criterion of outer while loop not met - (program created more than N unreduced blocks) - - Further Details - =============== - Local Variables: I0:N0 defines a current unreduced segment of Z. - The shifts are accumulated in SIGMA. Iteration count is in ITER. - Ping-pong is controlled by PP (alternates between 0 and 1). - - ===================================================================== - - - Test the input arguments. - (in case DLASQ2 is not called by DLASQ1) -*/ - - /* Parameter adjustments */ - --z__; - - /* Function Body */ - *info = 0; - eps = PRECISION; - safmin = SAFEMINIMUM; - tol = eps * 100.; -/* Computing 2nd power */ - d__1 = tol; - tol2 = d__1 * d__1; - - if (*n < 0) { - *info = -1; - xerbla_("DLASQ2", &c__1); - return 0; - } else if (*n == 0) { - return 0; - } else if (*n == 1) { - -/* 1-by-1 case. */ - - if (z__[1] < 0.) { - *info = -201; - xerbla_("DLASQ2", &c__2); - } - return 0; - } else if (*n == 2) { - -/* 2-by-2 case. */ - - if (z__[2] < 0. || z__[3] < 0.) { - *info = -2; - xerbla_("DLASQ2", &c__2); - return 0; - } else if (z__[3] > z__[1]) { - d__ = z__[3]; - z__[3] = z__[1]; - z__[1] = d__; - } - z__[5] = z__[1] + z__[2] + z__[3]; - if (z__[2] > z__[3] * tol2) { - t = (z__[1] - z__[3] + z__[2]) * .5; - s = z__[3] * (z__[2] / t); - if (s <= t) { - s = z__[3] * (z__[2] / (t * (sqrt(s / t + 1.) 
+ 1.))); - } else { - s = z__[3] * (z__[2] / (t + sqrt(t) * sqrt(t + s))); - } - t = z__[1] + (s + z__[2]); - z__[3] *= z__[1] / t; - z__[1] = t; - } - z__[2] = z__[3]; - z__[6] = z__[2] + z__[1]; - return 0; - } - -/* Check for negative data and compute sums of q's and e's. */ - - z__[*n * 2] = 0.; - emin = z__[2]; - qmax = 0.; - zmax = 0.; - d__ = 0.; - e = 0.; - - i__1 = (*n - 1) << (1); - for (k = 1; k <= i__1; k += 2) { - if (z__[k] < 0.) { - *info = -(k + 200); - xerbla_("DLASQ2", &c__2); - return 0; - } else if (z__[k + 1] < 0.) { - *info = -(k + 201); - xerbla_("DLASQ2", &c__2); - return 0; - } - d__ += z__[k]; - e += z__[k + 1]; -/* Computing MAX */ - d__1 = qmax, d__2 = z__[k]; - qmax = max(d__1,d__2); -/* Computing MIN */ - d__1 = emin, d__2 = z__[k + 1]; - emin = min(d__1,d__2); -/* Computing MAX */ - d__1 = max(qmax,zmax), d__2 = z__[k + 1]; - zmax = max(d__1,d__2); -/* L10: */ - } - if (z__[((*n) << (1)) - 1] < 0.) { - *info = -(((*n) << (1)) + 199); - xerbla_("DLASQ2", &c__2); - return 0; - } - d__ += z__[((*n) << (1)) - 1]; -/* Computing MAX */ - d__1 = qmax, d__2 = z__[((*n) << (1)) - 1]; - qmax = max(d__1,d__2); - zmax = max(qmax,zmax); - -/* Check for diagonality. */ - - if (e == 0.) { - i__1 = *n; - for (k = 2; k <= i__1; ++k) { - z__[k] = z__[((k) << (1)) - 1]; -/* L20: */ - } - dlasrt_("D", n, &z__[1], &iinfo); - z__[((*n) << (1)) - 1] = d__; - return 0; - } - - trace = d__ + e; - -/* Check for zero data. */ - - if (trace == 0.) { - z__[((*n) << (1)) - 1] = 0.; - return 0; - } - -/* Check whether the machine is IEEE conformable. */ - - ieee = (ilaenv_(&c__10, "DLASQ2", "N", &c__1, &c__2, &c__3, &c__4, ( - ftnlen)6, (ftnlen)1) == 1 && ilaenv_(&c__11, "DLASQ2", "N", &c__1, - &c__2, &c__3, &c__4, (ftnlen)6, (ftnlen)1) == 1); - -/* Rearrange data for locality: Z=(q1,qq1,e1,ee1,q2,qq2,e2,ee2,...). 
*/ - - for (k = (*n) << (1); k >= 2; k += -2) { - z__[k * 2] = 0.; - z__[((k) << (1)) - 1] = z__[k]; - z__[((k) << (1)) - 2] = 0.; - z__[((k) << (1)) - 3] = z__[k - 1]; -/* L30: */ - } - - i0 = 1; - n0 = *n; - -/* Reverse the qd-array, if warranted. */ - - if (z__[((i0) << (2)) - 3] * 1.5 < z__[((n0) << (2)) - 3]) { - ipn4 = (i0 + n0) << (2); - i__1 = (i0 + n0 - 1) << (1); - for (i4 = (i0) << (2); i4 <= i__1; i4 += 4) { - temp = z__[i4 - 3]; - z__[i4 - 3] = z__[ipn4 - i4 - 3]; - z__[ipn4 - i4 - 3] = temp; - temp = z__[i4 - 1]; - z__[i4 - 1] = z__[ipn4 - i4 - 5]; - z__[ipn4 - i4 - 5] = temp; -/* L40: */ - } - } - -/* Initial split checking via dqd and Li's test. */ - - pp = 0; - - for (k = 1; k <= 2; ++k) { - - d__ = z__[((n0) << (2)) + pp - 3]; - i__1 = ((i0) << (2)) + pp; - for (i4 = ((n0 - 1) << (2)) + pp; i4 >= i__1; i4 += -4) { - if (z__[i4 - 1] <= tol2 * d__) { - z__[i4 - 1] = -0.; - d__ = z__[i4 - 3]; - } else { - d__ = z__[i4 - 3] * (d__ / (d__ + z__[i4 - 1])); - } -/* L50: */ - } - -/* dqd maps Z to ZZ plus Li's test. */ - - emin = z__[((i0) << (2)) + pp + 1]; - d__ = z__[((i0) << (2)) + pp - 3]; - i__1 = ((n0 - 1) << (2)) + pp; - for (i4 = ((i0) << (2)) + pp; i4 <= i__1; i4 += 4) { - z__[i4 - ((pp) << (1)) - 2] = d__ + z__[i4 - 1]; - if (z__[i4 - 1] <= tol2 * d__) { - z__[i4 - 1] = -0.; - z__[i4 - ((pp) << (1)) - 2] = d__; - z__[i4 - ((pp) << (1))] = 0.; - d__ = z__[i4 + 1]; - } else if ((safmin * z__[i4 + 1] < z__[i4 - ((pp) << (1)) - 2] && - safmin * z__[i4 - ((pp) << (1)) - 2] < z__[i4 + 1])) { - temp = z__[i4 + 1] / z__[i4 - ((pp) << (1)) - 2]; - z__[i4 - ((pp) << (1))] = z__[i4 - 1] * temp; - d__ *= temp; - } else { - z__[i4 - ((pp) << (1))] = z__[i4 + 1] * (z__[i4 - 1] / z__[i4 - - ((pp) << (1)) - 2]); - d__ = z__[i4 + 1] * (d__ / z__[i4 - ((pp) << (1)) - 2]); - } -/* Computing MIN */ - d__1 = emin, d__2 = z__[i4 - ((pp) << (1))]; - emin = min(d__1,d__2); -/* L60: */ - } - z__[((n0) << (2)) - pp - 2] = d__; - -/* Now find qmax. 
*/ - - qmax = z__[((i0) << (2)) - pp - 2]; - i__1 = ((n0) << (2)) - pp - 2; - for (i4 = ((i0) << (2)) - pp + 2; i4 <= i__1; i4 += 4) { -/* Computing MAX */ - d__1 = qmax, d__2 = z__[i4]; - qmax = max(d__1,d__2); -/* L70: */ - } - -/* Prepare for the next iteration on K. */ - - pp = 1 - pp; -/* L80: */ - } - - iter = 2; - nfail = 0; - ndiv = (n0 - i0) << (1); - - i__1 = *n + 1; - for (iwhila = 1; iwhila <= i__1; ++iwhila) { - if (n0 < 1) { - goto L150; - } - -/* - While array unfinished do - - E(N0) holds the value of SIGMA when submatrix in I0:N0 - splits from the rest of the array, but is negated. -*/ - - desig = 0.; - if (n0 == *n) { - sigma = 0.; - } else { - sigma = -z__[((n0) << (2)) - 1]; - } - if (sigma < 0.) { - *info = 1; - return 0; - } - -/* - Find last unreduced submatrix's top index I0, find QMAX and - EMIN. Find Gershgorin-type bound if Q's much greater than E's. -*/ - - emax = 0.; - if (n0 > i0) { - emin = (d__1 = z__[((n0) << (2)) - 5], abs(d__1)); - } else { - emin = 0.; - } - qmin = z__[((n0) << (2)) - 3]; - qmax = qmin; - for (i4 = (n0) << (2); i4 >= 8; i4 += -4) { - if (z__[i4 - 5] <= 0.) { - goto L100; - } - if (qmin >= emax * 4.) { -/* Computing MIN */ - d__1 = qmin, d__2 = z__[i4 - 3]; - qmin = min(d__1,d__2); -/* Computing MAX */ - d__1 = emax, d__2 = z__[i4 - 5]; - emax = max(d__1,d__2); - } -/* Computing MAX */ - d__1 = qmax, d__2 = z__[i4 - 7] + z__[i4 - 5]; - qmax = max(d__1,d__2); -/* Computing MIN */ - d__1 = emin, d__2 = z__[i4 - 5]; - emin = min(d__1,d__2); -/* L90: */ - } - i4 = 4; - -L100: - i0 = i4 / 4; - -/* Store EMIN for passing to DLASQ3. */ - - z__[((n0) << (2)) - 1] = emin; - -/* - Put -(initial shift) into DMIN. - - Computing MAX -*/ - d__1 = 0., d__2 = qmin - sqrt(qmin) * 2. * sqrt(emax); - dmin__ = -max(d__1,d__2); - -/* Now I0:N0 is unreduced. PP = 0 for ping, PP = 1 for pong. 
*/ - - pp = 0; - - nbig = (n0 - i0 + 1) * 30; - i__2 = nbig; - for (iwhilb = 1; iwhilb <= i__2; ++iwhilb) { - if (i0 > n0) { - goto L130; - } - -/* While submatrix unfinished take a good dqds step. */ - - dlasq3_(&i0, &n0, &z__[1], &pp, &dmin__, &sigma, &desig, &qmax, & - nfail, &iter, &ndiv, &ieee); - - pp = 1 - pp; - -/* When EMIN is very small check for splits. */ - - if ((pp == 0 && n0 - i0 >= 3)) { - if (z__[n0 * 4] <= tol2 * qmax || z__[((n0) << (2)) - 1] <= - tol2 * sigma) { - splt = i0 - 1; - qmax = z__[((i0) << (2)) - 3]; - emin = z__[((i0) << (2)) - 1]; - oldemn = z__[i0 * 4]; - i__3 = (n0 - 3) << (2); - for (i4 = (i0) << (2); i4 <= i__3; i4 += 4) { - if (z__[i4] <= tol2 * z__[i4 - 3] || z__[i4 - 1] <= - tol2 * sigma) { - z__[i4 - 1] = -sigma; - splt = i4 / 4; - qmax = 0.; - emin = z__[i4 + 3]; - oldemn = z__[i4 + 4]; - } else { -/* Computing MAX */ - d__1 = qmax, d__2 = z__[i4 + 1]; - qmax = max(d__1,d__2); -/* Computing MIN */ - d__1 = emin, d__2 = z__[i4 - 1]; - emin = min(d__1,d__2); -/* Computing MIN */ - d__1 = oldemn, d__2 = z__[i4]; - oldemn = min(d__1,d__2); - } -/* L110: */ - } - z__[((n0) << (2)) - 1] = emin; - z__[n0 * 4] = oldemn; - i0 = splt + 1; - } - } - -/* L120: */ - } - - *info = 2; - return 0; - -/* end IWHILB */ - -L130: - -/* L140: */ - ; - } - - *info = 3; - return 0; - -/* end IWHILA */ - -L150: - -/* Move q's to the front. */ - - i__1 = *n; - for (k = 2; k <= i__1; ++k) { - z__[k] = z__[((k) << (2)) - 3]; -/* L160: */ - } - -/* Sort and compute sum of eigenvalues. */ - - dlasrt_("D", n, &z__[1], &iinfo); - - e = 0.; - for (k = *n; k >= 1; --k) { - e += z__[k]; -/* L170: */ - } - -/* Store trace, sum(eigenvalues) and information on performance. */ - - z__[((*n) << (1)) + 1] = trace; - z__[((*n) << (1)) + 2] = e; - z__[((*n) << (1)) + 3] = (doublereal) iter; -/* Computing 2nd power */ - i__1 = *n; - z__[((*n) << (1)) + 4] = (doublereal) ndiv / (doublereal) (i__1 * i__1); - z__[((*n) << (1)) + 5] = nfail * 100. 
/ (doublereal) iter; - return 0; - -/* End of DLASQ2 */ - -} /* dlasq2_ */ - -/* Subroutine */ int dlasq3_(integer *i0, integer *n0, doublereal *z__, - integer *pp, doublereal *dmin__, doublereal *sigma, doublereal *desig, - doublereal *qmax, integer *nfail, integer *iter, integer *ndiv, - logical *ieee) -{ - /* Initialized data */ - - static integer ttype = 0; - static doublereal dmin1 = 0.; - static doublereal dmin2 = 0.; - static doublereal dn = 0.; - static doublereal dn1 = 0.; - static doublereal dn2 = 0.; - static doublereal tau = 0.; - - /* System generated locals */ - integer i__1; - doublereal d__1, d__2; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal s, t; - static integer j4, nn; - static doublereal eps, tol; - static integer n0in, ipn4; - static doublereal tol2, temp; - extern /* Subroutine */ int dlasq4_(integer *, integer *, doublereal *, - integer *, integer *, doublereal *, doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *, doublereal *, integer *) - , dlasq5_(integer *, integer *, doublereal *, integer *, - doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *, logical *), dlasq6_( - integer *, integer *, doublereal *, integer *, doublereal *, - doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *); - - static doublereal safmin; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - May 17, 2000 - - - Purpose - ======= - - DLASQ3 checks for deflation, computes a shift (TAU) and calls dqds. - In case of failure it changes shifts, and tries again until output - is positive. - - Arguments - ========= - - I0 (input) INTEGER - First index. - - N0 (input) INTEGER - Last index. - - Z (input) DOUBLE PRECISION array, dimension ( 4*N ) - Z holds the qd array. 
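           The qd data is stored interleaved so the ping and pong copies
           alternate: for PP = 0, q(k) lives in Z(4k-3) and e(k) in Z(4k-1),
           and a sweep writes its output into Z(4k-2) and Z(4k); for PP = 1
           the indices shift by one. An illustrative sketch of the 1-based
           index mapping (helper names are ours, not LAPACK's):

```c
// 1-based Fortran indices of q(k) and e(k) in the interleaved qd
// array Z, as read by the dqds ping-pong sweeps (pp = 0 or 1).
static int q_index(int k, int pp) { return 4 * k - 3 + pp; }
static int e_index(int k, int pp) { return 4 * k - 1 + pp; }
```

           Each sweep reads parity PP and writes parity 1-PP, so flipping PP
           after a step swaps the roles of the two copies with no data motion.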
- - PP (input) INTEGER - PP=0 for ping, PP=1 for pong. - - DMIN (output) DOUBLE PRECISION - Minimum value of d. - - SIGMA (output) DOUBLE PRECISION - Sum of shifts used in current segment. - - DESIG (input/output) DOUBLE PRECISION - Lower order part of SIGMA - - QMAX (input) DOUBLE PRECISION - Maximum value of q. - - NFAIL (output) INTEGER - Number of times shift was too big. - - ITER (output) INTEGER - Number of iterations. - - NDIV (output) INTEGER - Number of divisions. - - TTYPE (output) INTEGER - Shift type. - - IEEE (input) LOGICAL - Flag for IEEE or non IEEE arithmetic (passed to DLASQ5). - - ===================================================================== -*/ - - /* Parameter adjustments */ - --z__; - - /* Function Body */ - - n0in = *n0; - eps = PRECISION; - safmin = SAFEMINIMUM; - tol = eps * 100.; -/* Computing 2nd power */ - d__1 = tol; - tol2 = d__1 * d__1; - -/* Check for deflation. */ - -L10: - - if (*n0 < *i0) { - return 0; - } - if (*n0 == *i0) { - goto L20; - } - nn = ((*n0) << (2)) + *pp; - if (*n0 == *i0 + 1) { - goto L40; - } - -/* Check whether E(N0-1) is negligible, 1 eigenvalue. */ - - if ((z__[nn - 5] > tol2 * (*sigma + z__[nn - 3]) && z__[nn - ((*pp) << (1) - ) - 4] > tol2 * z__[nn - 7])) { - goto L30; - } - -L20: - - z__[((*n0) << (2)) - 3] = z__[((*n0) << (2)) + *pp - 3] + *sigma; - --(*n0); - goto L10; - -/* Check whether E(N0-2) is negligible, 2 eigenvalues. */ - -L30: - - if ((z__[nn - 9] > tol2 * *sigma && z__[nn - ((*pp) << (1)) - 8] > tol2 * - z__[nn - 11])) { - goto L50; - } - -L40: - - if (z__[nn - 3] > z__[nn - 7]) { - s = z__[nn - 3]; - z__[nn - 3] = z__[nn - 7]; - z__[nn - 7] = s; - } - if (z__[nn - 5] > z__[nn - 3] * tol2) { - t = (z__[nn - 7] - z__[nn - 3] + z__[nn - 5]) * .5; - s = z__[nn - 3] * (z__[nn - 5] / t); - if (s <= t) { - s = z__[nn - 3] * (z__[nn - 5] / (t * (sqrt(s / t + 1.) 
+ 1.))); - } else { - s = z__[nn - 3] * (z__[nn - 5] / (t + sqrt(t) * sqrt(t + s))); - } - t = z__[nn - 7] + (s + z__[nn - 5]); - z__[nn - 3] *= z__[nn - 7] / t; - z__[nn - 7] = t; - } - z__[((*n0) << (2)) - 7] = z__[nn - 7] + *sigma; - z__[((*n0) << (2)) - 3] = z__[nn - 3] + *sigma; - *n0 += -2; - goto L10; - -L50: - -/* Reverse the qd-array, if warranted. */ - - if (*dmin__ <= 0. || *n0 < n0in) { - if (z__[((*i0) << (2)) + *pp - 3] * 1.5 < z__[((*n0) << (2)) + *pp - - 3]) { - ipn4 = (*i0 + *n0) << (2); - i__1 = (*i0 + *n0 - 1) << (1); - for (j4 = (*i0) << (2); j4 <= i__1; j4 += 4) { - temp = z__[j4 - 3]; - z__[j4 - 3] = z__[ipn4 - j4 - 3]; - z__[ipn4 - j4 - 3] = temp; - temp = z__[j4 - 2]; - z__[j4 - 2] = z__[ipn4 - j4 - 2]; - z__[ipn4 - j4 - 2] = temp; - temp = z__[j4 - 1]; - z__[j4 - 1] = z__[ipn4 - j4 - 5]; - z__[ipn4 - j4 - 5] = temp; - temp = z__[j4]; - z__[j4] = z__[ipn4 - j4 - 4]; - z__[ipn4 - j4 - 4] = temp; -/* L60: */ - } - if (*n0 - *i0 <= 4) { - z__[((*n0) << (2)) + *pp - 1] = z__[((*i0) << (2)) + *pp - 1]; - z__[((*n0) << (2)) - *pp] = z__[((*i0) << (2)) - *pp]; - } -/* Computing MIN */ - d__1 = dmin2, d__2 = z__[((*n0) << (2)) + *pp - 1]; - dmin2 = min(d__1,d__2); -/* Computing MIN */ - d__1 = z__[((*n0) << (2)) + *pp - 1], d__2 = z__[((*i0) << (2)) + - *pp - 1], d__1 = min(d__1,d__2), d__2 = z__[((*i0) << (2)) - + *pp + 3]; - z__[((*n0) << (2)) + *pp - 1] = min(d__1,d__2); -/* Computing MIN */ - d__1 = z__[((*n0) << (2)) - *pp], d__2 = z__[((*i0) << (2)) - *pp] - , d__1 = min(d__1,d__2), d__2 = z__[((*i0) << (2)) - *pp - + 4]; - z__[((*n0) << (2)) - *pp] = min(d__1,d__2); -/* Computing MAX */ - d__1 = *qmax, d__2 = z__[((*i0) << (2)) + *pp - 3], d__1 = max( - d__1,d__2), d__2 = z__[((*i0) << (2)) + *pp + 1]; - *qmax = max(d__1,d__2); - *dmin__ = -0.; - } - } - -/* - L70: - - Computing MIN -*/ - d__1 = z__[((*n0) << (2)) + *pp - 1], d__2 = z__[((*n0) << (2)) + *pp - 9] - , d__1 = min(d__1,d__2), d__2 = dmin2 + z__[((*n0) << (2)) - *pp]; - if 
(*dmin__ < 0. || safmin * *qmax < min(d__1,d__2)) { - -/* Choose a shift. */ - - dlasq4_(i0, n0, &z__[1], pp, &n0in, dmin__, &dmin1, &dmin2, &dn, &dn1, - &dn2, &tau, &ttype); - -/* Call dqds until DMIN > 0. */ - -L80: - - dlasq5_(i0, n0, &z__[1], pp, &tau, dmin__, &dmin1, &dmin2, &dn, &dn1, - &dn2, ieee); - - *ndiv += *n0 - *i0 + 2; - ++(*iter); - -/* Check status. */ - - if ((*dmin__ >= 0. && dmin1 > 0.)) { - -/* Success. */ - - goto L100; - - } else if ((((*dmin__ < 0. && dmin1 > 0.) && z__[((*n0 - 1) << (2)) - - *pp] < tol * (*sigma + dn1)) && abs(dn) < tol * *sigma)) { - -/* Convergence hidden by negative DN. */ - - z__[((*n0 - 1) << (2)) - *pp + 2] = 0.; - *dmin__ = 0.; - goto L100; - } else if (*dmin__ < 0.) { - -/* TAU too big. Select new TAU and try again. */ - - ++(*nfail); - if (ttype < -22) { - -/* Failed twice. Play it safe. */ - - tau = 0.; - } else if (dmin1 > 0.) { - -/* Late failure. Gives excellent shift. */ - - tau = (tau + *dmin__) * (1. - eps * 2.); - ttype += -11; - } else { - -/* Early failure. Divide by 4. */ - - tau *= .25; - ttype += -12; - } - goto L80; - } else if (*dmin__ != *dmin__) { - -/* NaN. */ - - tau = 0.; - goto L80; - } else { - -/* Possible underflow. Play it safe. */ - - goto L90; - } - } - -/* Risk of underflow. 
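   When the test above fails, the shifted recurrence might underflow, so control
   falls through to DLASQ6: the unshifted dqd transform, which performs no
   subtraction and therefore keeps every intermediate quantity nonnegative. One
   dqd step maps positive arrays (q, e) to (qhat, ehat) with the trace, i.e. the
   sum of all entries, preserved. A self-contained sketch (hypothetical helper;
   the real DLASQ6 works in place with ping-pong indexing and underflow guards):

```c
// One unshifted dqd step on positive arrays q[0..n-1], e[0..n-2];
// the outputs qh, eh have the same total sum (trace) as the inputs.
static void dqd_step(int n, const double q[], const double e[],
                     double qh[], double eh[])
{
    double d = q[0];
    for (int k = 0; k < n - 1; ++k) {
        qh[k] = d + e[k];            // no subtraction anywhere:
        double t = q[k + 1] / qh[k]; // all intermediates stay >= 0
        eh[k] = e[k] * t;
        d = d * t;
    }
    qh[n - 1] = d;
}
```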
*/ - -L90: - dlasq6_(i0, n0, &z__[1], pp, dmin__, &dmin1, &dmin2, &dn, &dn1, &dn2); - *ndiv += *n0 - *i0 + 2; - ++(*iter); - tau = 0.; - -L100: - if (tau < *sigma) { - *desig += tau; - t = *sigma + *desig; - *desig -= t - *sigma; - } else { - t = *sigma + tau; - *desig = *sigma - (t - tau) + *desig; - } - *sigma = t; - - return 0; - -/* End of DLASQ3 */ - -} /* dlasq3_ */ - -/* Subroutine */ int dlasq4_(integer *i0, integer *n0, doublereal *z__, - integer *pp, integer *n0in, doublereal *dmin__, doublereal *dmin1, - doublereal *dmin2, doublereal *dn, doublereal *dn1, doublereal *dn2, - doublereal *tau, integer *ttype) -{ - /* Initialized data */ - - static doublereal g = 0.; - - /* System generated locals */ - integer i__1; - doublereal d__1, d__2; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal s, a2, b1, b2; - static integer i4, nn, np; - static doublereal gam, gap1, gap2; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1999 - - - Purpose - ======= - - DLASQ4 computes an approximation TAU to the smallest eigenvalue - using values of d from the previous transform. - - I0 (input) INTEGER - First index. - - N0 (input) INTEGER - Last index. - - Z (input) DOUBLE PRECISION array, dimension ( 4*N ) - Z holds the qd array. - - PP (input) INTEGER - PP=0 for ping, PP=1 for pong. - - NOIN (input) INTEGER - The value of N0 at start of EIGTEST. - - DMIN (input) DOUBLE PRECISION - Minimum value of d. - - DMIN1 (input) DOUBLE PRECISION - Minimum value of d, excluding D( N0 ). - - DMIN2 (input) DOUBLE PRECISION - Minimum value of d, excluding D( N0 ) and D( N0-1 ). - - DN (input) DOUBLE PRECISION - d(N) - - DN1 (input) DOUBLE PRECISION - d(N-1) - - DN2 (input) DOUBLE PRECISION - d(N-2) - - TAU (output) DOUBLE PRECISION - This is the shift. - - TTYPE (output) INTEGER - Shift type. 
- - Further Details - =============== - CNST1 = 9/16 - - ===================================================================== -*/ - - /* Parameter adjustments */ - --z__; - - /* Function Body */ - -/* - A negative DMIN forces the shift to take that absolute value - TTYPE records the type of shift. -*/ - - if (*dmin__ <= 0.) { - *tau = -(*dmin__); - *ttype = -1; - return 0; - } - - nn = ((*n0) << (2)) + *pp; - if (*n0in == *n0) { - -/* No eigenvalues deflated. */ - - if (*dmin__ == *dn || *dmin__ == *dn1) { - - b1 = sqrt(z__[nn - 3]) * sqrt(z__[nn - 5]); - b2 = sqrt(z__[nn - 7]) * sqrt(z__[nn - 9]); - a2 = z__[nn - 7] + z__[nn - 5]; - -/* Cases 2 and 3. */ - - if ((*dmin__ == *dn && *dmin1 == *dn1)) { - gap2 = *dmin2 - a2 - *dmin2 * .25; - if ((gap2 > 0. && gap2 > b2)) { - gap1 = a2 - *dn - b2 / gap2 * b2; - } else { - gap1 = a2 - *dn - (b1 + b2); - } - if ((gap1 > 0. && gap1 > b1)) { -/* Computing MAX */ - d__1 = *dn - b1 / gap1 * b1, d__2 = *dmin__ * .5; - s = max(d__1,d__2); - *ttype = -2; - } else { - s = 0.; - if (*dn > b1) { - s = *dn - b1; - } - if (a2 > b1 + b2) { -/* Computing MIN */ - d__1 = s, d__2 = a2 - (b1 + b2); - s = min(d__1,d__2); - } -/* Computing MAX */ - d__1 = s, d__2 = *dmin__ * .333; - s = max(d__1,d__2); - *ttype = -3; - } - } else { - -/* Case 4. */ - - *ttype = -4; - s = *dmin__ * .25; - if (*dmin__ == *dn) { - gam = *dn; - a2 = 0.; - if (z__[nn - 5] > z__[nn - 7]) { - return 0; - } - b2 = z__[nn - 5] / z__[nn - 7]; - np = nn - 9; - } else { - np = nn - ((*pp) << (1)); - b2 = z__[np - 2]; - gam = *dn1; - if (z__[np - 4] > z__[np - 2]) { - return 0; - } - a2 = z__[np - 4] / z__[np - 2]; - if (z__[nn - 9] > z__[nn - 11]) { - return 0; - } - b2 = z__[nn - 9] / z__[nn - 11]; - np = nn - 13; - } - -/* Approximate contribution to norm squared from I < NN-1. */ - - a2 += b2; - i__1 = ((*i0) << (2)) - 1 + *pp; - for (i4 = np; i4 >= i__1; i4 += -4) { - if (b2 == 0.) 
{ - goto L20; - } - b1 = b2; - if (z__[i4] > z__[i4 - 2]) { - return 0; - } - b2 *= z__[i4] / z__[i4 - 2]; - a2 += b2; - if (max(b2,b1) * 100. < a2 || .563 < a2) { - goto L20; - } -/* L10: */ - } -L20: - a2 *= 1.05; - -/* Rayleigh quotient residual bound. */ - - if (a2 < .563) { - s = gam * (1. - sqrt(a2)) / (a2 + 1.); - } - } - } else if (*dmin__ == *dn2) { - -/* Case 5. */ - - *ttype = -5; - s = *dmin__ * .25; - -/* Compute contribution to norm squared from I > NN-2. */ - - np = nn - ((*pp) << (1)); - b1 = z__[np - 2]; - b2 = z__[np - 6]; - gam = *dn2; - if (z__[np - 8] > b2 || z__[np - 4] > b1) { - return 0; - } - a2 = z__[np - 8] / b2 * (z__[np - 4] / b1 + 1.); - -/* Approximate contribution to norm squared from I < NN-2. */ - - if (*n0 - *i0 > 2) { - b2 = z__[nn - 13] / z__[nn - 15]; - a2 += b2; - i__1 = ((*i0) << (2)) - 1 + *pp; - for (i4 = nn - 17; i4 >= i__1; i4 += -4) { - if (b2 == 0.) { - goto L40; - } - b1 = b2; - if (z__[i4] > z__[i4 - 2]) { - return 0; - } - b2 *= z__[i4] / z__[i4 - 2]; - a2 += b2; - if (max(b2,b1) * 100. < a2 || .563 < a2) { - goto L40; - } -/* L30: */ - } -L40: - a2 *= 1.05; - } - - if (a2 < .563) { - s = gam * (1. - sqrt(a2)) / (a2 + 1.); - } - } else { - -/* Case 6, no information to guide us. */ - - if (*ttype == -6) { - g += (1. - g) * .333; - } else if (*ttype == -18) { - g = .083250000000000005; - } else { - g = .25; - } - s = g * *dmin__; - *ttype = -6; - } - - } else if (*n0in == *n0 + 1) { - -/* One eigenvalue just deflated. Use DMIN1, DN1 for DMIN and DN. */ - - if ((*dmin1 == *dn1 && *dmin2 == *dn2)) { - -/* Cases 7 and 8. */ - - *ttype = -7; - s = *dmin1 * .333; - if (z__[nn - 5] > z__[nn - 7]) { - return 0; - } - b1 = z__[nn - 5] / z__[nn - 7]; - b2 = b1; - if (b2 == 0.) { - goto L60; - } - i__1 = ((*i0) << (2)) - 1 + *pp; - for (i4 = ((*n0) << (2)) - 9 + *pp; i4 >= i__1; i4 += -4) { - a2 = b1; - if (z__[i4] > z__[i4 - 2]) { - return 0; - } - b1 *= z__[i4] / z__[i4 - 2]; - b2 += b1; - if (max(b1,a2) * 100. 
< b2) { - goto L60; - } -/* L50: */ - } -L60: - b2 = sqrt(b2 * 1.05); -/* Computing 2nd power */ - d__1 = b2; - a2 = *dmin1 / (d__1 * d__1 + 1.); - gap2 = *dmin2 * .5 - a2; - if ((gap2 > 0. && gap2 > b2 * a2)) { -/* Computing MAX */ - d__1 = s, d__2 = a2 * (1. - a2 * 1.01 * (b2 / gap2) * b2); - s = max(d__1,d__2); - } else { -/* Computing MAX */ - d__1 = s, d__2 = a2 * (1. - b2 * 1.01); - s = max(d__1,d__2); - *ttype = -8; - } - } else { - -/* Case 9. */ - - s = *dmin1 * .25; - if (*dmin1 == *dn1) { - s = *dmin1 * .5; - } - *ttype = -9; - } - - } else if (*n0in == *n0 + 2) { - -/* - Two eigenvalues deflated. Use DMIN2, DN2 for DMIN and DN. - - Cases 10 and 11. -*/ - - if ((*dmin2 == *dn2 && z__[nn - 5] * 2. < z__[nn - 7])) { - *ttype = -10; - s = *dmin2 * .333; - if (z__[nn - 5] > z__[nn - 7]) { - return 0; - } - b1 = z__[nn - 5] / z__[nn - 7]; - b2 = b1; - if (b2 == 0.) { - goto L80; - } - i__1 = ((*i0) << (2)) - 1 + *pp; - for (i4 = ((*n0) << (2)) - 9 + *pp; i4 >= i__1; i4 += -4) { - if (z__[i4] > z__[i4 - 2]) { - return 0; - } - b1 *= z__[i4] / z__[i4 - 2]; - b2 += b1; - if (b1 * 100. < b2) { - goto L80; - } -/* L70: */ - } -L80: - b2 = sqrt(b2 * 1.05); -/* Computing 2nd power */ - d__1 = b2; - a2 = *dmin2 / (d__1 * d__1 + 1.); - gap2 = z__[nn - 7] + z__[nn - 9] - sqrt(z__[nn - 11]) * sqrt(z__[ - nn - 9]) - a2; - if ((gap2 > 0. && gap2 > b2 * a2)) { -/* Computing MAX */ - d__1 = s, d__2 = a2 * (1. - a2 * 1.01 * (b2 / gap2) * b2); - s = max(d__1,d__2); - } else { -/* Computing MAX */ - d__1 = s, d__2 = a2 * (1. - b2 * 1.01); - s = max(d__1,d__2); - } - } else { - s = *dmin2 * .25; - *ttype = -11; - } - } else if (*n0in > *n0 + 2) { - -/* Case 12, more than two eigenvalues deflated. No information. 
*/ - - s = 0.; - *ttype = -12; - } - - *tau = s; - return 0; - -/* End of DLASQ4 */ - -} /* dlasq4_ */ - -/* Subroutine */ int dlasq5_(integer *i0, integer *n0, doublereal *z__, - integer *pp, doublereal *tau, doublereal *dmin__, doublereal *dmin1, - doublereal *dmin2, doublereal *dn, doublereal *dnm1, doublereal *dnm2, - logical *ieee) -{ - /* System generated locals */ - integer i__1; - doublereal d__1, d__2; - - /* Local variables */ - static doublereal d__; - static integer j4, j4p2; - static doublereal emin, temp; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - May 17, 2000 - - - Purpose - ======= - - DLASQ5 computes one dqds transform in ping-pong form, one - version for IEEE machines another for non IEEE machines. - - Arguments - ========= - - I0 (input) INTEGER - First index. - - N0 (input) INTEGER - Last index. - - Z (input) DOUBLE PRECISION array, dimension ( 4*N ) - Z holds the qd array. EMIN is stored in Z(4*N0) to avoid - an extra argument. - - PP (input) INTEGER - PP=0 for ping, PP=1 for pong. - - TAU (input) DOUBLE PRECISION - This is the shift. - - DMIN (output) DOUBLE PRECISION - Minimum value of d. - - DMIN1 (output) DOUBLE PRECISION - Minimum value of d, excluding D( N0 ). - - DMIN2 (output) DOUBLE PRECISION - Minimum value of d, excluding D( N0 ) and D( N0-1 ). - - DN (output) DOUBLE PRECISION - d(N0), the last value of d. - - DNM1 (output) DOUBLE PRECISION - d(N0-1). - - DNM2 (output) DOUBLE PRECISION - d(N0-2). - - IEEE (input) LOGICAL - Flag for IEEE or non IEEE arithmetic. 
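   The transform itself is the shifted dqds recurrence: starting from
   d = q(1) - TAU, each step forms qhat(k) = d + e(k), ehat(k) = e(k)*t and
   d = d*t - TAU with t = q(k+1)/qhat(k). Every eigenvalue is reduced by
   exactly TAU, so the trace of the qd data drops by N*TAU. A minimal sketch
   (hypothetical helper; DLASQ5 works in place on the interleaved Z array and
   also tracks EMIN):

```c
// One shifted dqds step; returns the running minimum of d, whose
// sign tells the caller whether the shift tau was too large.
static double dqds_step(int n, double tau, const double q[],
                        const double e[], double qh[], double eh[])
{
    double d = q[0] - tau;
    double dmin = d;
    for (int k = 0; k < n - 1; ++k) {
        qh[k] = d + e[k];
        double t = q[k + 1] / qh[k];
        eh[k] = e[k] * t;
        d = d * t - tau;
        if (d < dmin) dmin = d;
    }
    qh[n - 1] = d;
    return dmin;
}
```

   The non-IEEE branch of DLASQ5 bails out as soon as d goes negative; the
   IEEE branch lets the arithmetic run to completion and has the caller
   inspect DMIN afterwards.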
- - ===================================================================== -*/ - - - /* Parameter adjustments */ - --z__; - - /* Function Body */ - if (*n0 - *i0 - 1 <= 0) { - return 0; - } - - j4 = ((*i0) << (2)) + *pp - 3; - emin = z__[j4 + 4]; - d__ = z__[j4] - *tau; - *dmin__ = d__; - *dmin1 = -z__[j4]; - - if (*ieee) { - -/* Code for IEEE arithmetic. */ - - if (*pp == 0) { - i__1 = (*n0 - 3) << (2); - for (j4 = (*i0) << (2); j4 <= i__1; j4 += 4) { - z__[j4 - 2] = d__ + z__[j4 - 1]; - temp = z__[j4 + 1] / z__[j4 - 2]; - d__ = d__ * temp - *tau; - *dmin__ = min(*dmin__,d__); - z__[j4] = z__[j4 - 1] * temp; -/* Computing MIN */ - d__1 = z__[j4]; - emin = min(d__1,emin); -/* L10: */ - } - } else { - i__1 = (*n0 - 3) << (2); - for (j4 = (*i0) << (2); j4 <= i__1; j4 += 4) { - z__[j4 - 3] = d__ + z__[j4]; - temp = z__[j4 + 2] / z__[j4 - 3]; - d__ = d__ * temp - *tau; - *dmin__ = min(*dmin__,d__); - z__[j4 - 1] = z__[j4] * temp; -/* Computing MIN */ - d__1 = z__[j4 - 1]; - emin = min(d__1,emin); -/* L20: */ - } - } - -/* Unroll last two steps. */ - - *dnm2 = d__; - *dmin2 = *dmin__; - j4 = ((*n0 - 2) << (2)) - *pp; - j4p2 = j4 + ((*pp) << (1)) - 1; - z__[j4 - 2] = *dnm2 + z__[j4p2]; - z__[j4] = z__[j4p2 + 2] * (z__[j4p2] / z__[j4 - 2]); - *dnm1 = z__[j4p2 + 2] * (*dnm2 / z__[j4 - 2]) - *tau; - *dmin__ = min(*dmin__,*dnm1); - - *dmin1 = *dmin__; - j4 += 4; - j4p2 = j4 + ((*pp) << (1)) - 1; - z__[j4 - 2] = *dnm1 + z__[j4p2]; - z__[j4] = z__[j4p2 + 2] * (z__[j4p2] / z__[j4 - 2]); - *dn = z__[j4p2 + 2] * (*dnm1 / z__[j4 - 2]) - *tau; - *dmin__ = min(*dmin__,*dn); - - } else { - -/* Code for non IEEE arithmetic. */ - - if (*pp == 0) { - i__1 = (*n0 - 3) << (2); - for (j4 = (*i0) << (2); j4 <= i__1; j4 += 4) { - z__[j4 - 2] = d__ + z__[j4 - 1]; - if (d__ < 0.) 
{ - return 0; - } else { - z__[j4] = z__[j4 + 1] * (z__[j4 - 1] / z__[j4 - 2]); - d__ = z__[j4 + 1] * (d__ / z__[j4 - 2]) - *tau; - } - *dmin__ = min(*dmin__,d__); -/* Computing MIN */ - d__1 = emin, d__2 = z__[j4]; - emin = min(d__1,d__2); -/* L30: */ - } - } else { - i__1 = (*n0 - 3) << (2); - for (j4 = (*i0) << (2); j4 <= i__1; j4 += 4) { - z__[j4 - 3] = d__ + z__[j4]; - if (d__ < 0.) { - return 0; - } else { - z__[j4 - 1] = z__[j4 + 2] * (z__[j4] / z__[j4 - 3]); - d__ = z__[j4 + 2] * (d__ / z__[j4 - 3]) - *tau; - } - *dmin__ = min(*dmin__,d__); -/* Computing MIN */ - d__1 = emin, d__2 = z__[j4 - 1]; - emin = min(d__1,d__2); -/* L40: */ - } - } - -/* Unroll last two steps. */ - - *dnm2 = d__; - *dmin2 = *dmin__; - j4 = ((*n0 - 2) << (2)) - *pp; - j4p2 = j4 + ((*pp) << (1)) - 1; - z__[j4 - 2] = *dnm2 + z__[j4p2]; - if (*dnm2 < 0.) { - return 0; - } else { - z__[j4] = z__[j4p2 + 2] * (z__[j4p2] / z__[j4 - 2]); - *dnm1 = z__[j4p2 + 2] * (*dnm2 / z__[j4 - 2]) - *tau; - } - *dmin__ = min(*dmin__,*dnm1); - - *dmin1 = *dmin__; - j4 += 4; - j4p2 = j4 + ((*pp) << (1)) - 1; - z__[j4 - 2] = *dnm1 + z__[j4p2]; - if (*dnm1 < 0.) { - return 0; - } else { - z__[j4] = z__[j4p2 + 2] * (z__[j4p2] / z__[j4 - 2]); - *dn = z__[j4p2 + 2] * (*dnm1 / z__[j4 - 2]) - *tau; - } - *dmin__ = min(*dmin__,*dn); - - } - - z__[j4 + 2] = *dn; - z__[((*n0) << (2)) - *pp] = emin; - return 0; - -/* End of DLASQ5 */ - -} /* dlasq5_ */ - -/* Subroutine */ int dlasq6_(integer *i0, integer *n0, doublereal *z__, - integer *pp, doublereal *dmin__, doublereal *dmin1, doublereal *dmin2, - doublereal *dn, doublereal *dnm1, doublereal *dnm2) -{ - /* System generated locals */ - integer i__1; - doublereal d__1, d__2; - - /* Local variables */ - static doublereal d__; - static integer j4, j4p2; - static doublereal emin, temp; - - static doublereal safmin; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1999 - - - Purpose - ======= - - DLASQ6 computes one dqd (shift equal to zero) transform in - ping-pong form, with protection against underflow and overflow. - - Arguments - ========= - - I0 (input) INTEGER - First index. - - N0 (input) INTEGER - Last index. - - Z (input) DOUBLE PRECISION array, dimension ( 4*N ) - Z holds the qd array. EMIN is stored in Z(4*N0) to avoid - an extra argument. - - PP (input) INTEGER - PP=0 for ping, PP=1 for pong. - - DMIN (output) DOUBLE PRECISION - Minimum value of d. - - DMIN1 (output) DOUBLE PRECISION - Minimum value of d, excluding D( N0 ). - - DMIN2 (output) DOUBLE PRECISION - Minimum value of d, excluding D( N0 ) and D( N0-1 ). - - DN (output) DOUBLE PRECISION - d(N0), the last value of d. - - DNM1 (output) DOUBLE PRECISION - d(N0-1). - - DNM2 (output) DOUBLE PRECISION - d(N0-2). - - ===================================================================== -*/ - - - /* Parameter adjustments */ - --z__; - - /* Function Body */ - if (*n0 - *i0 - 1 <= 0) { - return 0; - } - - safmin = SAFEMINIMUM; - j4 = ((*i0) << (2)) + *pp - 3; - emin = z__[j4 + 4]; - d__ = z__[j4]; - *dmin__ = d__; - - if (*pp == 0) { - i__1 = (*n0 - 3) << (2); - for (j4 = (*i0) << (2); j4 <= i__1; j4 += 4) { - z__[j4 - 2] = d__ + z__[j4 - 1]; - if (z__[j4 - 2] == 0.) 
{ - z__[j4] = 0.; - d__ = z__[j4 + 1]; - *dmin__ = d__; - emin = 0.; - } else if ((safmin * z__[j4 + 1] < z__[j4 - 2] && safmin * z__[j4 - - 2] < z__[j4 + 1])) { - temp = z__[j4 + 1] / z__[j4 - 2]; - z__[j4] = z__[j4 - 1] * temp; - d__ *= temp; - } else { - z__[j4] = z__[j4 + 1] * (z__[j4 - 1] / z__[j4 - 2]); - d__ = z__[j4 + 1] * (d__ / z__[j4 - 2]); - } - *dmin__ = min(*dmin__,d__); -/* Computing MIN */ - d__1 = emin, d__2 = z__[j4]; - emin = min(d__1,d__2); -/* L10: */ - } - } else { - i__1 = (*n0 - 3) << (2); - for (j4 = (*i0) << (2); j4 <= i__1; j4 += 4) { - z__[j4 - 3] = d__ + z__[j4]; - if (z__[j4 - 3] == 0.) { - z__[j4 - 1] = 0.; - d__ = z__[j4 + 2]; - *dmin__ = d__; - emin = 0.; - } else if ((safmin * z__[j4 + 2] < z__[j4 - 3] && safmin * z__[j4 - - 3] < z__[j4 + 2])) { - temp = z__[j4 + 2] / z__[j4 - 3]; - z__[j4 - 1] = z__[j4] * temp; - d__ *= temp; - } else { - z__[j4 - 1] = z__[j4 + 2] * (z__[j4] / z__[j4 - 3]); - d__ = z__[j4 + 2] * (d__ / z__[j4 - 3]); - } - *dmin__ = min(*dmin__,d__); -/* Computing MIN */ - d__1 = emin, d__2 = z__[j4 - 1]; - emin = min(d__1,d__2); -/* L20: */ - } - } - -/* Unroll last two steps. */ - - *dnm2 = d__; - *dmin2 = *dmin__; - j4 = ((*n0 - 2) << (2)) - *pp; - j4p2 = j4 + ((*pp) << (1)) - 1; - z__[j4 - 2] = *dnm2 + z__[j4p2]; - if (z__[j4 - 2] == 0.) { - z__[j4] = 0.; - *dnm1 = z__[j4p2 + 2]; - *dmin__ = *dnm1; - emin = 0.; - } else if ((safmin * z__[j4p2 + 2] < z__[j4 - 2] && safmin * z__[j4 - 2] < - z__[j4p2 + 2])) { - temp = z__[j4p2 + 2] / z__[j4 - 2]; - z__[j4] = z__[j4p2] * temp; - *dnm1 = *dnm2 * temp; - } else { - z__[j4] = z__[j4p2 + 2] * (z__[j4p2] / z__[j4 - 2]); - *dnm1 = z__[j4p2 + 2] * (*dnm2 / z__[j4 - 2]); - } - *dmin__ = min(*dmin__,*dnm1); - - *dmin1 = *dmin__; - j4 += 4; - j4p2 = j4 + ((*pp) << (1)) - 1; - z__[j4 - 2] = *dnm1 + z__[j4p2]; - if (z__[j4 - 2] == 0.) 
{ - z__[j4] = 0.; - *dn = z__[j4p2 + 2]; - *dmin__ = *dn; - emin = 0.; - } else if ((safmin * z__[j4p2 + 2] < z__[j4 - 2] && safmin * z__[j4 - 2] < - z__[j4p2 + 2])) { - temp = z__[j4p2 + 2] / z__[j4 - 2]; - z__[j4] = z__[j4p2] * temp; - *dn = *dnm1 * temp; - } else { - z__[j4] = z__[j4p2 + 2] * (z__[j4p2] / z__[j4 - 2]); - *dn = z__[j4p2 + 2] * (*dnm1 / z__[j4 - 2]); - } - *dmin__ = min(*dmin__,*dn); - - z__[j4 + 2] = *dn; - z__[((*n0) << (2)) - *pp] = emin; - return 0; - -/* End of DLASQ6 */ - -} /* dlasq6_ */ - -/* Subroutine */ int dlasr_(char *side, char *pivot, char *direct, integer *m, - integer *n, doublereal *c__, doublereal *s, doublereal *a, integer * - lda) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2; - - /* Local variables */ - static integer i__, j, info; - static doublereal temp; - extern logical lsame_(char *, char *); - static doublereal ctemp, stemp; - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLASR performs the transformation - - A := P*A, when SIDE = 'L' or 'l' ( Left-hand side ) - - A := A*P', when SIDE = 'R' or 'r' ( Right-hand side ) - - where A is an m by n real matrix and P is an orthogonal matrix, - consisting of a sequence of plane rotations determined by the - parameters PIVOT and DIRECT as follows ( z = m when SIDE = 'L' or 'l' - and z = n when SIDE = 'R' or 'r' ): - - When DIRECT = 'F' or 'f' ( Forward sequence ) then - - P = P( z - 1 )*...*P( 2 )*P( 1 ), - - and when DIRECT = 'B' or 'b' ( Backward sequence ) then - - P = P( 1 )*P( 2 )*...*P( z - 1 ), - - where P( k ) is a plane rotation matrix for the following planes: - - when PIVOT = 'V' or 'v' ( Variable pivot ), - the plane ( k, k + 1 ) - - when PIVOT = 'T' or 't' ( Top pivot ), - the plane ( 1, k + 1 ) - - when PIVOT = 'B' or 'b' ( Bottom pivot ), - the plane ( k, z ) - - c( k ) and s( k ) must contain the cosine and sine that define the - matrix P( k ). The two by two plane rotation part of the matrix - P( k ), R( k ), is assumed to be of the form - - R( k ) = ( c( k ) s( k ) ). - ( -s( k ) c( k ) ) - - This version vectorises across rows of the array A when SIDE = 'L'. - - Arguments - ========= - - SIDE (input) CHARACTER*1 - Specifies whether the plane rotation matrix P is applied to - A on the left or the right. - = 'L': Left, compute A := P*A - = 'R': Right, compute A:= A*P' - - DIRECT (input) CHARACTER*1 - Specifies whether P is a forward or backward sequence of - plane rotations. - = 'F': Forward, P = P( z - 1 )*...*P( 2 )*P( 1 ) - = 'B': Backward, P = P( 1 )*P( 2 )*...*P( z - 1 ) - - PIVOT (input) CHARACTER*1 - Specifies the plane for which P(k) is a plane rotation - matrix. 
- = 'V': Variable pivot, the plane (k,k+1) - = 'T': Top pivot, the plane (1,k+1) - = 'B': Bottom pivot, the plane (k,z) - - M (input) INTEGER - The number of rows of the matrix A. If m <= 1, an immediate - return is effected. - - N (input) INTEGER - The number of columns of the matrix A. If n <= 1, an - immediate return is effected. - - C, S (input) DOUBLE PRECISION arrays, dimension - (M-1) if SIDE = 'L' - (N-1) if SIDE = 'R' - c(k) and s(k) contain the cosine and sine that define the - matrix P(k). The two by two plane rotation part of the - matrix P(k), R(k), is assumed to be of the form - R( k ) = ( c( k ) s( k ) ). - ( -s( k ) c( k ) ) - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - The m by n matrix A. On exit, A is overwritten by P*A if - SIDE = 'R' or by A*P' if SIDE = 'L'. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - --c__; - --s; - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - - /* Function Body */ - info = 0; - if (! (lsame_(side, "L") || lsame_(side, "R"))) { - info = 1; - } else if (! (lsame_(pivot, "V") || lsame_(pivot, - "T") || lsame_(pivot, "B"))) { - info = 2; - } else if (! (lsame_(direct, "F") || lsame_(direct, - "B"))) { - info = 3; - } else if (*m < 0) { - info = 4; - } else if (*n < 0) { - info = 5; - } else if (*lda < max(1,*m)) { - info = 9; - } - if (info != 0) { - xerbla_("DLASR ", &info); - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0) { - return 0; - } - if (lsame_(side, "L")) { - -/* Form P * A */ - - if (lsame_(pivot, "V")) { - if (lsame_(direct, "F")) { - i__1 = *m - 1; - for (j = 1; j <= i__1; ++j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.) 
{ - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - temp = a[j + 1 + i__ * a_dim1]; - a[j + 1 + i__ * a_dim1] = ctemp * temp - stemp * - a[j + i__ * a_dim1]; - a[j + i__ * a_dim1] = stemp * temp + ctemp * a[j - + i__ * a_dim1]; -/* L10: */ - } - } -/* L20: */ - } - } else if (lsame_(direct, "B")) { - for (j = *m - 1; j >= 1; --j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - temp = a[j + 1 + i__ * a_dim1]; - a[j + 1 + i__ * a_dim1] = ctemp * temp - stemp * - a[j + i__ * a_dim1]; - a[j + i__ * a_dim1] = stemp * temp + ctemp * a[j - + i__ * a_dim1]; -/* L30: */ - } - } -/* L40: */ - } - } - } else if (lsame_(pivot, "T")) { - if (lsame_(direct, "F")) { - i__1 = *m; - for (j = 2; j <= i__1; ++j) { - ctemp = c__[j - 1]; - stemp = s[j - 1]; - if (ctemp != 1. || stemp != 0.) { - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - temp = a[j + i__ * a_dim1]; - a[j + i__ * a_dim1] = ctemp * temp - stemp * a[ - i__ * a_dim1 + 1]; - a[i__ * a_dim1 + 1] = stemp * temp + ctemp * a[ - i__ * a_dim1 + 1]; -/* L50: */ - } - } -/* L60: */ - } - } else if (lsame_(direct, "B")) { - for (j = *m; j >= 2; --j) { - ctemp = c__[j - 1]; - stemp = s[j - 1]; - if (ctemp != 1. || stemp != 0.) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - temp = a[j + i__ * a_dim1]; - a[j + i__ * a_dim1] = ctemp * temp - stemp * a[ - i__ * a_dim1 + 1]; - a[i__ * a_dim1 + 1] = stemp * temp + ctemp * a[ - i__ * a_dim1 + 1]; -/* L70: */ - } - } -/* L80: */ - } - } - } else if (lsame_(pivot, "B")) { - if (lsame_(direct, "F")) { - i__1 = *m - 1; - for (j = 1; j <= i__1; ++j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.) 
{ - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - temp = a[j + i__ * a_dim1]; - a[j + i__ * a_dim1] = stemp * a[*m + i__ * a_dim1] - + ctemp * temp; - a[*m + i__ * a_dim1] = ctemp * a[*m + i__ * - a_dim1] - stemp * temp; -/* L90: */ - } - } -/* L100: */ - } - } else if (lsame_(direct, "B")) { - for (j = *m - 1; j >= 1; --j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - temp = a[j + i__ * a_dim1]; - a[j + i__ * a_dim1] = stemp * a[*m + i__ * a_dim1] - + ctemp * temp; - a[*m + i__ * a_dim1] = ctemp * a[*m + i__ * - a_dim1] - stemp * temp; -/* L110: */ - } - } -/* L120: */ - } - } - } - } else if (lsame_(side, "R")) { - -/* Form A * P' */ - - if (lsame_(pivot, "V")) { - if (lsame_(direct, "F")) { - i__1 = *n - 1; - for (j = 1; j <= i__1; ++j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - temp = a[i__ + (j + 1) * a_dim1]; - a[i__ + (j + 1) * a_dim1] = ctemp * temp - stemp * - a[i__ + j * a_dim1]; - a[i__ + j * a_dim1] = stemp * temp + ctemp * a[ - i__ + j * a_dim1]; -/* L130: */ - } - } -/* L140: */ - } - } else if (lsame_(direct, "B")) { - for (j = *n - 1; j >= 1; --j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.) { - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - temp = a[i__ + (j + 1) * a_dim1]; - a[i__ + (j + 1) * a_dim1] = ctemp * temp - stemp * - a[i__ + j * a_dim1]; - a[i__ + j * a_dim1] = stemp * temp + ctemp * a[ - i__ + j * a_dim1]; -/* L150: */ - } - } -/* L160: */ - } - } - } else if (lsame_(pivot, "T")) { - if (lsame_(direct, "F")) { - i__1 = *n; - for (j = 2; j <= i__1; ++j) { - ctemp = c__[j - 1]; - stemp = s[j - 1]; - if (ctemp != 1. || stemp != 0.) 
{ - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - temp = a[i__ + j * a_dim1]; - a[i__ + j * a_dim1] = ctemp * temp - stemp * a[ - i__ + a_dim1]; - a[i__ + a_dim1] = stemp * temp + ctemp * a[i__ + - a_dim1]; -/* L170: */ - } - } -/* L180: */ - } - } else if (lsame_(direct, "B")) { - for (j = *n; j >= 2; --j) { - ctemp = c__[j - 1]; - stemp = s[j - 1]; - if (ctemp != 1. || stemp != 0.) { - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - temp = a[i__ + j * a_dim1]; - a[i__ + j * a_dim1] = ctemp * temp - stemp * a[ - i__ + a_dim1]; - a[i__ + a_dim1] = stemp * temp + ctemp * a[i__ + - a_dim1]; -/* L190: */ - } - } -/* L200: */ - } - } - } else if (lsame_(pivot, "B")) { - if (lsame_(direct, "F")) { - i__1 = *n - 1; - for (j = 1; j <= i__1; ++j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - temp = a[i__ + j * a_dim1]; - a[i__ + j * a_dim1] = stemp * a[i__ + *n * a_dim1] - + ctemp * temp; - a[i__ + *n * a_dim1] = ctemp * a[i__ + *n * - a_dim1] - stemp * temp; -/* L210: */ - } - } -/* L220: */ - } - } else if (lsame_(direct, "B")) { - for (j = *n - 1; j >= 1; --j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.) 
{ - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - temp = a[i__ + j * a_dim1]; - a[i__ + j * a_dim1] = stemp * a[i__ + *n * a_dim1] - + ctemp * temp; - a[i__ + *n * a_dim1] = ctemp * a[i__ + *n * - a_dim1] - stemp * temp; -/* L230: */ - } - } -/* L240: */ - } - } - } - } - - return 0; - -/* End of DLASR */ - -} /* dlasr_ */ - -/* Subroutine */ int dlasrt_(char *id, integer *n, doublereal *d__, integer * - info) -{ - /* System generated locals */ - integer i__1, i__2; - - /* Local variables */ - static integer i__, j; - static doublereal d1, d2, d3; - static integer dir; - static doublereal tmp; - static integer endd; - extern logical lsame_(char *, char *); - static integer stack[64] /* was [2][32] */; - static doublereal dmnmx; - static integer start; - extern /* Subroutine */ int xerbla_(char *, integer *); - static integer stkpnt; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - Sort the numbers in D in increasing order (if ID = 'I') or - in decreasing order (if ID = 'D' ). - - Use Quick Sort, reverting to Insertion sort on arrays of - size <= 20. Dimension of STACK limits N to about 2**32. - - Arguments - ========= - - ID (input) CHARACTER*1 - = 'I': sort D in increasing order; - = 'D': sort D in decreasing order. - - N (input) INTEGER - The length of the array D. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the array to be sorted. - On exit, D has been sorted into increasing order - (D(1) <= ... <= D(N) ) or into decreasing order - (D(1) >= ... >= D(N) ), depending on ID. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - --d__; - - /* Function Body */ - *info = 0; - dir = -1; - if (lsame_(id, "D")) { - dir = 0; - } else if (lsame_(id, "I")) { - dir = 1; - } - if (dir == -1) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DLASRT", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n <= 1) { - return 0; - } - - stkpnt = 1; - stack[0] = 1; - stack[1] = *n; -L10: - start = stack[((stkpnt) << (1)) - 2]; - endd = stack[((stkpnt) << (1)) - 1]; - --stkpnt; - if ((endd - start <= 20 && endd - start > 0)) { - -/* Do Insertion sort on D( START:ENDD ) */ - - if (dir == 0) { - -/* Sort into decreasing order */ - - i__1 = endd; - for (i__ = start + 1; i__ <= i__1; ++i__) { - i__2 = start + 1; - for (j = i__; j >= i__2; --j) { - if (d__[j] > d__[j - 1]) { - dmnmx = d__[j]; - d__[j] = d__[j - 1]; - d__[j - 1] = dmnmx; - } else { - goto L30; - } -/* L20: */ - } -L30: - ; - } - - } else { - -/* Sort into increasing order */ - - i__1 = endd; - for (i__ = start + 1; i__ <= i__1; ++i__) { - i__2 = start + 1; - for (j = i__; j >= i__2; --j) { - if (d__[j] < d__[j - 1]) { - dmnmx = d__[j]; - d__[j] = d__[j - 1]; - d__[j - 1] = dmnmx; - } else { - goto L50; - } -/* L40: */ - } -L50: - ; - } - - } - - } else if (endd - start > 20) { - -/* - Partition D( START:ENDD ) and stack parts, largest one first - - Choose partition entry as median of 3 -*/ - - d1 = d__[start]; - d2 = d__[endd]; - i__ = (start + endd) / 2; - d3 = d__[i__]; - if (d1 < d2) { - if (d3 < d1) { - dmnmx = d1; - } else if (d3 < d2) { - dmnmx = d3; - } else { - dmnmx = d2; - } - } else { - if (d3 < d2) { - dmnmx = d2; - } else if (d3 < d1) { - dmnmx = d3; - } else { - dmnmx = d1; - } - } - - if (dir == 0) { - -/* Sort into decreasing order */ - - i__ = start - 1; - j = endd + 1; -L60: -L70: - --j; - if (d__[j] < dmnmx) { - goto L70; - } -L80: - ++i__; - if (d__[i__] > dmnmx) { - goto L80; - } - if (i__ < j) { - tmp = d__[i__]; - 
d__[i__] = d__[j]; - d__[j] = tmp; - goto L60; - } - if (j - start > endd - j - 1) { - ++stkpnt; - stack[((stkpnt) << (1)) - 2] = start; - stack[((stkpnt) << (1)) - 1] = j; - ++stkpnt; - stack[((stkpnt) << (1)) - 2] = j + 1; - stack[((stkpnt) << (1)) - 1] = endd; - } else { - ++stkpnt; - stack[((stkpnt) << (1)) - 2] = j + 1; - stack[((stkpnt) << (1)) - 1] = endd; - ++stkpnt; - stack[((stkpnt) << (1)) - 2] = start; - stack[((stkpnt) << (1)) - 1] = j; - } - } else { - -/* Sort into increasing order */ - - i__ = start - 1; - j = endd + 1; -L90: -L100: - --j; - if (d__[j] > dmnmx) { - goto L100; - } -L110: - ++i__; - if (d__[i__] < dmnmx) { - goto L110; - } - if (i__ < j) { - tmp = d__[i__]; - d__[i__] = d__[j]; - d__[j] = tmp; - goto L90; - } - if (j - start > endd - j - 1) { - ++stkpnt; - stack[((stkpnt) << (1)) - 2] = start; - stack[((stkpnt) << (1)) - 1] = j; - ++stkpnt; - stack[((stkpnt) << (1)) - 2] = j + 1; - stack[((stkpnt) << (1)) - 1] = endd; - } else { - ++stkpnt; - stack[((stkpnt) << (1)) - 2] = j + 1; - stack[((stkpnt) << (1)) - 1] = endd; - ++stkpnt; - stack[((stkpnt) << (1)) - 2] = start; - stack[((stkpnt) << (1)) - 1] = j; - } - } - } - if (stkpnt > 0) { - goto L10; - } - return 0; - -/* End of DLASRT */ - -} /* dlasrt_ */ - -/* Subroutine */ int dlassq_(integer *n, doublereal *x, integer *incx, - doublereal *scale, doublereal *sumsq) -{ - /* System generated locals */ - integer i__1, i__2; - doublereal d__1; - - /* Local variables */ - static integer ix; - static doublereal absxi; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DLASSQ returns the values scl and smsq such that - - ( scl**2 )*smsq = x( 1 )**2 +...+ x( n )**2 + ( scale**2 )*sumsq, - - where x( i ) = X( 1 + ( i - 1 )*INCX ). 
The value of sumsq is - assumed to be non-negative and scl returns the value - - scl = max( scale, abs( x( i ) ) ). - - scale and sumsq must be supplied in SCALE and SUMSQ and - scl and smsq are overwritten on SCALE and SUMSQ respectively. - - The routine makes only one pass through the vector x. - - Arguments - ========= - - N (input) INTEGER - The number of elements to be used from the vector X. - - X (input) DOUBLE PRECISION array, dimension (N) - The vector for which a scaled sum of squares is computed. - x( i ) = X( 1 + ( i - 1 )*INCX ), 1 <= i <= n. - - INCX (input) INTEGER - The increment between successive values of the vector X. - INCX > 0. - - SCALE (input/output) DOUBLE PRECISION - On entry, the value scale in the equation above. - On exit, SCALE is overwritten with scl , the scaling factor - for the sum of squares. - - SUMSQ (input/output) DOUBLE PRECISION - On entry, the value sumsq in the equation above. - On exit, SUMSQ is overwritten with smsq , the basic sum of - squares from which scl has been factored out. - - ===================================================================== -*/ - - - /* Parameter adjustments */ - --x; - - /* Function Body */ - if (*n > 0) { - i__1 = (*n - 1) * *incx + 1; - i__2 = *incx; - for (ix = 1; i__2 < 0 ? ix >= i__1 : ix <= i__1; ix += i__2) { - if (x[ix] != 0.) 
{ - absxi = (d__1 = x[ix], abs(d__1)); - if (*scale < absxi) { -/* Computing 2nd power */ - d__1 = *scale / absxi; - *sumsq = *sumsq * (d__1 * d__1) + 1; - *scale = absxi; - } else { -/* Computing 2nd power */ - d__1 = absxi / *scale; - *sumsq += d__1 * d__1; - } - } -/* L10: */ - } - } - return 0; - -/* End of DLASSQ */ - -} /* dlassq_ */ - -/* Subroutine */ int dlasv2_(doublereal *f, doublereal *g, doublereal *h__, - doublereal *ssmin, doublereal *ssmax, doublereal *snr, doublereal * - csr, doublereal *snl, doublereal *csl) -{ - /* System generated locals */ - doublereal d__1; - - /* Builtin functions */ - double sqrt(doublereal), d_sign(doublereal *, doublereal *); - - /* Local variables */ - static doublereal a, d__, l, m, r__, s, t, fa, ga, ha, ft, gt, ht, mm, tt, - clt, crt, slt, srt; - static integer pmax; - static doublereal temp; - static logical swap; - static doublereal tsign; - - static logical gasmal; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLASV2 computes the singular value decomposition of a 2-by-2 - triangular matrix - [ F G ] - [ 0 H ]. - On return, abs(SSMAX) is the larger singular value, abs(SSMIN) is the - smaller singular value, and (CSL,SNL) and (CSR,SNR) are the left and - right singular vectors for abs(SSMAX), giving the decomposition - - [ CSL SNL ] [ F G ] [ CSR -SNR ] = [ SSMAX 0 ] - [-SNL CSL ] [ 0 H ] [ SNR CSR ] [ 0 SSMIN ]. - - Arguments - ========= - - F (input) DOUBLE PRECISION - The (1,1) element of the 2-by-2 matrix. - - G (input) DOUBLE PRECISION - The (1,2) element of the 2-by-2 matrix. - - H (input) DOUBLE PRECISION - The (2,2) element of the 2-by-2 matrix. - - SSMIN (output) DOUBLE PRECISION - abs(SSMIN) is the smaller singular value. - - SSMAX (output) DOUBLE PRECISION - abs(SSMAX) is the larger singular value. 
- - SNL (output) DOUBLE PRECISION - CSL (output) DOUBLE PRECISION - The vector (CSL, SNL) is a unit left singular vector for the - singular value abs(SSMAX). - - SNR (output) DOUBLE PRECISION - CSR (output) DOUBLE PRECISION - The vector (CSR, SNR) is a unit right singular vector for the - singular value abs(SSMAX). - - Further Details - =============== - - Any input parameter may be aliased with any output parameter. - - Barring over/underflow and assuming a guard digit in subtraction, all - output quantities are correct to within a few units in the last - place (ulps). - - In IEEE arithmetic, the code works correctly if one matrix element is - infinite. - - Overflow will not occur unless the largest singular value itself - overflows or is within a few ulps of overflow. (On machines with - partial overflow, like the Cray, overflow may occur if the largest - singular value is within a factor of 2 of overflow.) - - Underflow is harmless if underflow is gradual. Otherwise, results - may correspond to a matrix modified by perturbations of size near - the underflow threshold. - - ===================================================================== -*/ - - - ft = *f; - fa = abs(ft); - ht = *h__; - ha = abs(*h__); - -/* - PMAX points to the maximum absolute element of matrix - PMAX = 1 if F largest in absolute values - PMAX = 2 if G largest in absolute values - PMAX = 3 if H largest in absolute values -*/ - - pmax = 1; - swap = ha > fa; - if (swap) { - pmax = 3; - temp = ft; - ft = ht; - ht = temp; - temp = fa; - fa = ha; - ha = temp; - -/* Now FA .ge. HA */ - - } - gt = *g; - ga = abs(gt); - if (ga == 0.) { - -/* Diagonal matrix */ - - *ssmin = ha; - *ssmax = fa; - clt = 1.; - crt = 1.; - slt = 0.; - srt = 0.; - } else { - gasmal = TRUE_; - if (ga > fa) { - pmax = 2; - if (fa / ga < EPSILON) { - -/* Case of very large GA */ - - gasmal = FALSE_; - *ssmax = ga; - if (ha > 1.) 
{ - *ssmin = fa / (ga / ha); - } else { - *ssmin = fa / ga * ha; - } - clt = 1.; - slt = ht / gt; - srt = 1.; - crt = ft / gt; - } - } - if (gasmal) { - -/* Normal case */ - - d__ = fa - ha; - if (d__ == fa) { - -/* Copes with infinite F or H */ - - l = 1.; - } else { - l = d__ / fa; - } - -/* Note that 0 .le. L .le. 1 */ - - m = gt / ft; - -/* Note that abs(M) .le. 1/macheps */ - - t = 2. - l; - -/* Note that T .ge. 1 */ - - mm = m * m; - tt = t * t; - s = sqrt(tt + mm); - -/* Note that 1 .le. S .le. 1 + 1/macheps */ - - if (l == 0.) { - r__ = abs(m); - } else { - r__ = sqrt(l * l + mm); - } - -/* Note that 0 .le. R .le. 1 + 1/macheps */ - - a = (s + r__) * .5; - -/* Note that 1 .le. A .le. 1 + abs(M) */ - - *ssmin = ha / a; - *ssmax = fa * a; - if (mm == 0.) { - -/* Note that M is very tiny */ - - if (l == 0.) { - t = d_sign(&c_b2804, &ft) * d_sign(&c_b15, &gt); - } else { - t = gt / d_sign(&d__, &ft) + m / t; - } - } else { - t = (m / (s + t) + m / (r__ + l)) * (a + 1.); - } - l = sqrt(t * t + 4.); - crt = 2. 
/ l; - srt = t / l; - clt = (crt + srt * m) / a; - slt = ht / ft * srt / a; - } - } - if (swap) { - *csl = srt; - *snl = crt; - *csr = slt; - *snr = clt; - } else { - *csl = clt; - *snl = slt; - *csr = crt; - *snr = srt; - } - -/* Correct signs of SSMAX and SSMIN */ - - if (pmax == 1) { - tsign = d_sign(&c_b15, csr) * d_sign(&c_b15, csl) * d_sign(&c_b15, f); - } - if (pmax == 2) { - tsign = d_sign(&c_b15, snr) * d_sign(&c_b15, csl) * d_sign(&c_b15, g); - } - if (pmax == 3) { - tsign = d_sign(&c_b15, snr) * d_sign(&c_b15, snl) * d_sign(&c_b15, - h__); - } - *ssmax = d_sign(ssmax, &tsign); - d__1 = tsign * d_sign(&c_b15, f) * d_sign(&c_b15, h__); - *ssmin = d_sign(ssmin, &d__1); - return 0; - -/* End of DLASV2 */ - -} /* dlasv2_ */ - -/* Subroutine */ int dlaswp_(integer *n, doublereal *a, integer *lda, integer - *k1, integer *k2, integer *ipiv, integer *incx) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - - /* Local variables */ - static integer i__, j, k, i1, i2, n32, ip, ix, ix0, inc; - static doublereal temp; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DLASWP performs a series of row interchanges on the matrix A. - One row interchange is initiated for each of rows K1 through K2 of A. - - Arguments - ========= - - N (input) INTEGER - The number of columns of the matrix A. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the matrix of column dimension N to which the row - interchanges will be applied. - On exit, the permuted matrix. - - LDA (input) INTEGER - The leading dimension of the array A. - - K1 (input) INTEGER - The first element of IPIV for which a row interchange will - be done. - - K2 (input) INTEGER - The last element of IPIV for which a row interchange will - be done. 
- - IPIV (input) INTEGER array, dimension (M*abs(INCX)) - The vector of pivot indices. Only the elements in positions - K1 through K2 of IPIV are accessed. - IPIV(K) = L implies rows K and L are to be interchanged. - - INCX (input) INTEGER - The increment between successive values of IPIV. If INCX - is negative, the pivots are applied in reverse order. - - Further Details - =============== - - Modified by - R. C. Whaley, Computer Science Dept., Univ. of Tenn., Knoxville, USA - - ===================================================================== - - - Interchange row I with row IPIV(I) for each of rows K1 through K2. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --ipiv; - - /* Function Body */ - if (*incx > 0) { - ix0 = *k1; - i1 = *k1; - i2 = *k2; - inc = 1; - } else if (*incx < 0) { - ix0 = (1 - *k2) * *incx + 1; - i1 = *k2; - i2 = *k1; - inc = -1; - } else { - return 0; - } - - n32 = (*n / 32) << (5); - if (n32 != 0) { - i__1 = n32; - for (j = 1; j <= i__1; j += 32) { - ix = ix0; - i__2 = i2; - i__3 = inc; - for (i__ = i1; i__3 < 0 ? i__ >= i__2 : i__ <= i__2; i__ += i__3) - { - ip = ipiv[ix]; - if (ip != i__) { - i__4 = j + 31; - for (k = j; k <= i__4; ++k) { - temp = a[i__ + k * a_dim1]; - a[i__ + k * a_dim1] = a[ip + k * a_dim1]; - a[ip + k * a_dim1] = temp; -/* L10: */ - } - } - ix += *incx; -/* L20: */ - } -/* L30: */ - } - } - if (n32 != *n) { - ++n32; - ix = ix0; - i__1 = i2; - i__3 = inc; - for (i__ = i1; i__3 < 0 ? 
i__ >= i__1 : i__ <= i__1; i__ += i__3) { - ip = ipiv[ix]; - if (ip != i__) { - i__2 = *n; - for (k = n32; k <= i__2; ++k) { - temp = a[i__ + k * a_dim1]; - a[i__ + k * a_dim1] = a[ip + k * a_dim1]; - a[ip + k * a_dim1] = temp; -/* L40: */ - } - } - ix += *incx; -/* L50: */ - } - } - - return 0; - -/* End of DLASWP */ - -} /* dlaswp_ */ - -/* Subroutine */ int dlatrd_(char *uplo, integer *n, integer *nb, doublereal * - a, integer *lda, doublereal *e, doublereal *tau, doublereal *w, - integer *ldw) -{ - /* System generated locals */ - integer a_dim1, a_offset, w_dim1, w_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__, iw; - extern doublereal ddot_(integer *, doublereal *, integer *, doublereal *, - integer *); - static doublereal alpha; - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *); - extern logical lsame_(char *, char *); - extern /* Subroutine */ int dgemv_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, doublereal *, integer *, - doublereal *, doublereal *, integer *), daxpy_(integer *, - doublereal *, doublereal *, integer *, doublereal *, integer *), - dsymv_(char *, integer *, doublereal *, doublereal *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *), dlarfg_(integer *, doublereal *, doublereal *, integer *, - doublereal *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DLATRD reduces NB rows and columns of a real symmetric matrix A to - symmetric tridiagonal form by an orthogonal similarity - transformation Q' * A * Q, and returns the matrices V and W which are - needed to apply the transformation to the unreduced part of A. 
- - If UPLO = 'U', DLATRD reduces the last NB rows and columns of a - matrix, of which the upper triangle is supplied; - if UPLO = 'L', DLATRD reduces the first NB rows and columns of a - matrix, of which the lower triangle is supplied. - - This is an auxiliary routine called by DSYTRD. - - Arguments - ========= - - UPLO (input) CHARACTER - Specifies whether the upper or lower triangular part of the - symmetric matrix A is stored: - = 'U': Upper triangular - = 'L': Lower triangular - - N (input) INTEGER - The order of the matrix A. - - NB (input) INTEGER - The number of rows and columns to be reduced. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the symmetric matrix A. If UPLO = 'U', the leading - n-by-n upper triangular part of A contains the upper - triangular part of the matrix A, and the strictly lower - triangular part of A is not referenced. If UPLO = 'L', the - leading n-by-n lower triangular part of A contains the lower - triangular part of the matrix A, and the strictly upper - triangular part of A is not referenced. - On exit: - if UPLO = 'U', the last NB columns have been reduced to - tridiagonal form, with the diagonal elements overwriting - the diagonal elements of A; the elements above the diagonal - with the array TAU, represent the orthogonal matrix Q as a - product of elementary reflectors; - if UPLO = 'L', the first NB columns have been reduced to - tridiagonal form, with the diagonal elements overwriting - the diagonal elements of A; the elements below the diagonal - with the array TAU, represent the orthogonal matrix Q as a - product of elementary reflectors. - See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). 
- - E (output) DOUBLE PRECISION array, dimension (N-1) - If UPLO = 'U', E(n-nb:n-1) contains the superdiagonal - elements of the last NB columns of the reduced matrix; - if UPLO = 'L', E(1:nb) contains the subdiagonal elements of - the first NB columns of the reduced matrix. - - TAU (output) DOUBLE PRECISION array, dimension (N-1) - The scalar factors of the elementary reflectors, stored in - TAU(n-nb:n-1) if UPLO = 'U', and in TAU(1:nb) if UPLO = 'L'. - See Further Details. - - W (output) DOUBLE PRECISION array, dimension (LDW,NB) - The n-by-nb matrix W required to update the unreduced part - of A. - - LDW (input) INTEGER - The leading dimension of the array W. LDW >= max(1,N). - - Further Details - =============== - - If UPLO = 'U', the matrix Q is represented as a product of elementary - reflectors - - Q = H(n) H(n-1) . . . H(n-nb+1). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a real scalar, and v is a real vector with - v(i:n) = 0 and v(i-1) = 1; v(1:i-1) is stored on exit in A(1:i-1,i), - and tau in TAU(i-1). - - If UPLO = 'L', the matrix Q is represented as a product of elementary - reflectors - - Q = H(1) H(2) . . . H(nb). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a real scalar, and v is a real vector with - v(1:i) = 0 and v(i+1) = 1; v(i+1:n) is stored on exit in A(i+1:n,i), - and tau in TAU(i). - - The elements of the vectors v together form the n-by-nb matrix V - which is needed, with W, to apply the transformation to the unreduced - part of the matrix, using a symmetric rank-2k update of the form: - A := A - V*W' - W*V'. 
- - The contents of A on exit are illustrated by the following examples - with n = 5 and nb = 2: - - if UPLO = 'U': if UPLO = 'L': - - ( a a a v4 v5 ) ( d ) - ( a a v4 v5 ) ( 1 d ) - ( a 1 v5 ) ( v1 1 a ) - ( d 1 ) ( v1 v2 a a ) - ( d ) ( v1 v2 a a a ) - - where d denotes a diagonal element of the reduced matrix, a denotes - an element of the original matrix that is unchanged, and vi denotes - an element of the vector defining H(i). - - ===================================================================== - - - Quick return if possible -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --e; - --tau; - w_dim1 = *ldw; - w_offset = 1 + w_dim1 * 1; - w -= w_offset; - - /* Function Body */ - if (*n <= 0) { - return 0; - } - - if (lsame_(uplo, "U")) { - -/* Reduce last NB columns of upper triangle */ - - i__1 = *n - *nb + 1; - for (i__ = *n; i__ >= i__1; --i__) { - iw = i__ - *n + *nb; - if (i__ < *n) { - -/* Update A(1:i,i) */ - - i__2 = *n - i__; - dgemv_("No transpose", &i__, &i__2, &c_b151, &a[(i__ + 1) * - a_dim1 + 1], lda, &w[i__ + (iw + 1) * w_dim1], ldw, & - c_b15, &a[i__ * a_dim1 + 1], &c__1); - i__2 = *n - i__; - dgemv_("No transpose", &i__, &i__2, &c_b151, &w[(iw + 1) * - w_dim1 + 1], ldw, &a[i__ + (i__ + 1) * a_dim1], lda, & - c_b15, &a[i__ * a_dim1 + 1], &c__1); - } - if (i__ > 1) { - -/* - Generate elementary reflector H(i) to annihilate - A(1:i-2,i) -*/ - - i__2 = i__ - 1; - dlarfg_(&i__2, &a[i__ - 1 + i__ * a_dim1], &a[i__ * a_dim1 + - 1], &c__1, &tau[i__ - 1]); - e[i__ - 1] = a[i__ - 1 + i__ * a_dim1]; - a[i__ - 1 + i__ * a_dim1] = 1.; - -/* Compute W(1:i-1,i) */ - - i__2 = i__ - 1; - dsymv_("Upper", &i__2, &c_b15, &a[a_offset], lda, &a[i__ * - a_dim1 + 1], &c__1, &c_b29, &w[iw * w_dim1 + 1], & - c__1); - if (i__ < *n) { - i__2 = i__ - 1; - i__3 = *n - i__; - dgemv_("Transpose", &i__2, &i__3, &c_b15, &w[(iw + 1) * - w_dim1 + 1], ldw, &a[i__ * a_dim1 + 1], &c__1, & - c_b29, &w[i__ + 1 + iw * w_dim1], 
&c__1); - i__2 = i__ - 1; - i__3 = *n - i__; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &a[(i__ + 1) - * a_dim1 + 1], lda, &w[i__ + 1 + iw * w_dim1], & - c__1, &c_b15, &w[iw * w_dim1 + 1], &c__1); - i__2 = i__ - 1; - i__3 = *n - i__; - dgemv_("Transpose", &i__2, &i__3, &c_b15, &a[(i__ + 1) * - a_dim1 + 1], lda, &a[i__ * a_dim1 + 1], &c__1, & - c_b29, &w[i__ + 1 + iw * w_dim1], &c__1); - i__2 = i__ - 1; - i__3 = *n - i__; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &w[(iw + 1) - * w_dim1 + 1], ldw, &w[i__ + 1 + iw * w_dim1], & - c__1, &c_b15, &w[iw * w_dim1 + 1], &c__1); - } - i__2 = i__ - 1; - dscal_(&i__2, &tau[i__ - 1], &w[iw * w_dim1 + 1], &c__1); - i__2 = i__ - 1; - alpha = tau[i__ - 1] * -.5 * ddot_(&i__2, &w[iw * w_dim1 + 1], - &c__1, &a[i__ * a_dim1 + 1], &c__1); - i__2 = i__ - 1; - daxpy_(&i__2, &alpha, &a[i__ * a_dim1 + 1], &c__1, &w[iw * - w_dim1 + 1], &c__1); - } - -/* L10: */ - } - } else { - -/* Reduce first NB columns of lower triangle */ - - i__1 = *nb; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Update A(i:n,i) */ - - i__2 = *n - i__ + 1; - i__3 = i__ - 1; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &a[i__ + a_dim1], - lda, &w[i__ + w_dim1], ldw, &c_b15, &a[i__ + i__ * a_dim1] - , &c__1); - i__2 = *n - i__ + 1; - i__3 = i__ - 1; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &w[i__ + w_dim1], - ldw, &a[i__ + a_dim1], lda, &c_b15, &a[i__ + i__ * a_dim1] - , &c__1); - if (i__ < *n) { - -/* - Generate elementary reflector H(i) to annihilate - A(i+2:n,i) -*/ - - i__2 = *n - i__; -/* Computing MIN */ - i__3 = i__ + 2; - dlarfg_(&i__2, &a[i__ + 1 + i__ * a_dim1], &a[min(i__3,*n) + - i__ * a_dim1], &c__1, &tau[i__]); - e[i__] = a[i__ + 1 + i__ * a_dim1]; - a[i__ + 1 + i__ * a_dim1] = 1.; - -/* Compute W(i+1:n,i) */ - - i__2 = *n - i__; - dsymv_("Lower", &i__2, &c_b15, &a[i__ + 1 + (i__ + 1) * - a_dim1], lda, &a[i__ + 1 + i__ * a_dim1], &c__1, & - c_b29, &w[i__ + 1 + i__ * w_dim1], &c__1); - i__2 = *n - i__; - i__3 = i__ - 1; - 
dgemv_("Transpose", &i__2, &i__3, &c_b15, &w[i__ + 1 + w_dim1] - , ldw, &a[i__ + 1 + i__ * a_dim1], &c__1, &c_b29, &w[ - i__ * w_dim1 + 1], &c__1); - i__2 = *n - i__; - i__3 = i__ - 1; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &a[i__ + 1 + - a_dim1], lda, &w[i__ * w_dim1 + 1], &c__1, &c_b15, &w[ - i__ + 1 + i__ * w_dim1], &c__1); - i__2 = *n - i__; - i__3 = i__ - 1; - dgemv_("Transpose", &i__2, &i__3, &c_b15, &a[i__ + 1 + a_dim1] - , lda, &a[i__ + 1 + i__ * a_dim1], &c__1, &c_b29, &w[ - i__ * w_dim1 + 1], &c__1); - i__2 = *n - i__; - i__3 = i__ - 1; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &w[i__ + 1 + - w_dim1], ldw, &w[i__ * w_dim1 + 1], &c__1, &c_b15, &w[ - i__ + 1 + i__ * w_dim1], &c__1); - i__2 = *n - i__; - dscal_(&i__2, &tau[i__], &w[i__ + 1 + i__ * w_dim1], &c__1); - i__2 = *n - i__; - alpha = tau[i__] * -.5 * ddot_(&i__2, &w[i__ + 1 + i__ * - w_dim1], &c__1, &a[i__ + 1 + i__ * a_dim1], &c__1); - i__2 = *n - i__; - daxpy_(&i__2, &alpha, &a[i__ + 1 + i__ * a_dim1], &c__1, &w[ - i__ + 1 + i__ * w_dim1], &c__1); - } - -/* L20: */ - } - } - - return 0; - -/* End of DLATRD */ - -} /* dlatrd_ */ - -/* Subroutine */ int dorg2r_(integer *m, integer *n, integer *k, doublereal * - a, integer *lda, doublereal *tau, doublereal *work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2; - doublereal d__1; - - /* Local variables */ - static integer i__, j, l; - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *), dlarf_(char *, integer *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *, doublereal *), xerbla_(char *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - DORG2R generates an m by n real matrix Q with orthonormal columns, - which is defined as the first n columns of a product of k elementary - reflectors of order m - - Q = H(1) H(2) . . . H(k) - - as returned by DGEQRF. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix Q. M >= 0. - - N (input) INTEGER - The number of columns of the matrix Q. M >= N >= 0. - - K (input) INTEGER - The number of elementary reflectors whose product defines the - matrix Q. N >= K >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the i-th column must contain the vector which - defines the elementary reflector H(i), for i = 1,2,...,k, as - returned by DGEQRF in the first k columns of its array - argument A. - On exit, the m-by-n matrix Q. - - LDA (input) INTEGER - The first dimension of the array A. LDA >= max(1,M). - - TAU (input) DOUBLE PRECISION array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by DGEQRF. 
- - WORK (workspace) DOUBLE PRECISION array, dimension (N) - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument has an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - if (*m < 0) { - *info = -1; - } else if (*n < 0 || *n > *m) { - *info = -2; - } else if (*k < 0 || *k > *n) { - *info = -3; - } else if (*lda < max(1,*m)) { - *info = -5; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DORG2R", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n <= 0) { - return 0; - } - -/* Initialise columns k+1:n to columns of the unit matrix */ - - i__1 = *n; - for (j = *k + 1; j <= i__1; ++j) { - i__2 = *m; - for (l = 1; l <= i__2; ++l) { - a[l + j * a_dim1] = 0.; -/* L10: */ - } - a[j + j * a_dim1] = 1.; -/* L20: */ - } - - for (i__ = *k; i__ >= 1; --i__) { - -/* Apply H(i) to A(i:m,i:n) from the left */ - - if (i__ < *n) { - a[i__ + i__ * a_dim1] = 1.; - i__1 = *m - i__ + 1; - i__2 = *n - i__; - dlarf_("Left", &i__1, &i__2, &a[i__ + i__ * a_dim1], &c__1, &tau[ - i__], &a[i__ + (i__ + 1) * a_dim1], lda, &work[1]); - } - if (i__ < *m) { - i__1 = *m - i__; - d__1 = -tau[i__]; - dscal_(&i__1, &d__1, &a[i__ + 1 + i__ * a_dim1], &c__1); - } - a[i__ + i__ * a_dim1] = 1. 
- tau[i__]; - -/* Set A(1:i-1,i) to zero */ - - i__1 = i__ - 1; - for (l = 1; l <= i__1; ++l) { - a[l + i__ * a_dim1] = 0.; -/* L30: */ - } -/* L40: */ - } - return 0; - -/* End of DORG2R */ - -} /* dorg2r_ */ - -/* Subroutine */ int dorgbr_(char *vect, integer *m, integer *n, integer *k, - doublereal *a, integer *lda, doublereal *tau, doublereal *work, - integer *lwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__, j, nb, mn; - extern logical lsame_(char *, char *); - static integer iinfo; - static logical wantq; - extern /* Subroutine */ int xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int dorglq_(integer *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *, - integer *), dorgqr_(integer *, integer *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *, integer *); - static integer lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DORGBR generates one of the real orthogonal matrices Q or P**T - determined by DGEBRD when reducing a real matrix A to bidiagonal - form: A = Q * B * P**T. Q and P**T are defined as products of - elementary reflectors H(i) or G(i) respectively. - - If VECT = 'Q', A is assumed to have been an M-by-K matrix, and Q - is of order M: - if m >= k, Q = H(1) H(2) . . . H(k) and DORGBR returns the first n - columns of Q, where m >= n >= k; - if m < k, Q = H(1) H(2) . . . H(m-1) and DORGBR returns Q as an - M-by-M matrix. - - If VECT = 'P', A is assumed to have been a K-by-N matrix, and P**T - is of order N: - if k < n, P**T = G(k) . . . 
G(2) G(1) and DORGBR returns the first m - rows of P**T, where n >= m >= k; - if k >= n, P**T = G(n-1) . . . G(2) G(1) and DORGBR returns P**T as - an N-by-N matrix. - - Arguments - ========= - - VECT (input) CHARACTER*1 - Specifies whether the matrix Q or the matrix P**T is - required, as defined in the transformation applied by DGEBRD: - = 'Q': generate Q; - = 'P': generate P**T. - - M (input) INTEGER - The number of rows of the matrix Q or P**T to be returned. - M >= 0. - - N (input) INTEGER - The number of columns of the matrix Q or P**T to be returned. - N >= 0. - If VECT = 'Q', M >= N >= min(M,K); - if VECT = 'P', N >= M >= min(N,K). - - K (input) INTEGER - If VECT = 'Q', the number of columns in the original M-by-K - matrix reduced by DGEBRD. - If VECT = 'P', the number of rows in the original K-by-N - matrix reduced by DGEBRD. - K >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the vectors which define the elementary reflectors, - as returned by DGEBRD. - On exit, the M-by-N matrix Q or P**T. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - TAU (input) DOUBLE PRECISION array, dimension - (min(M,K)) if VECT = 'Q' - (min(N,K)) if VECT = 'P' - TAU(i) must contain the scalar factor of the elementary - reflector H(i) or G(i), which determines Q or P**T, as - returned by DGEBRD in its array argument TAUQ or TAUP. - - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= max(1,min(M,N)). - For optimum performance LWORK >= min(M,N)*NB, where NB - is the optimal blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. 
- - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - wantq = lsame_(vect, "Q"); - mn = min(*m,*n); - lquery = *lwork == -1; - if ((! wantq && ! lsame_(vect, "P"))) { - *info = -1; - } else if (*m < 0) { - *info = -2; - } else if (*n < 0 || (wantq && (*n > *m || *n < min(*m,*k))) || (! wantq - && (*m > *n || *m < min(*n,*k)))) { - *info = -3; - } else if (*k < 0) { - *info = -4; - } else if (*lda < max(1,*m)) { - *info = -6; - } else if ((*lwork < max(1,mn) && ! lquery)) { - *info = -9; - } - - if (*info == 0) { - if (wantq) { - nb = ilaenv_(&c__1, "DORGQR", " ", m, n, k, &c_n1, (ftnlen)6, ( - ftnlen)1); - } else { - nb = ilaenv_(&c__1, "DORGLQ", " ", m, n, k, &c_n1, (ftnlen)6, ( - ftnlen)1); - } - lwkopt = max(1,mn) * nb; - work[1] = (doublereal) lwkopt; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("DORGBR", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0) { - work[1] = 1.; - return 0; - } - - if (wantq) { - -/* - Form Q, determined by a call to DGEBRD to reduce an m-by-k - matrix -*/ - - if (*m >= *k) { - -/* If m >= k, assume m >= n >= k */ - - dorgqr_(m, n, k, &a[a_offset], lda, &tau[1], &work[1], lwork, & - iinfo); - - } else { - -/* - If m < k, assume m = n - - Shift the vectors which define the elementary reflectors one - column to the right, and set the first row and column of Q - to those of the unit matrix -*/ - - for (j = *m; j >= 2; --j) { - a[j * a_dim1 + 1] = 0.; - i__1 = *m; - for (i__ = j + 1; i__ <= i__1; ++i__) { - a[i__ + j * a_dim1] = a[i__ + (j - 1) * a_dim1]; -/* L10: */ - } -/* L20: */ - } - a[a_dim1 + 1] = 1.; - i__1 = *m; - for (i__ = 2; i__ <= 
i__1; ++i__) { - a[i__ + a_dim1] = 0.; -/* L30: */ - } - if (*m > 1) { - -/* Form Q(2:m,2:m) */ - - i__1 = *m - 1; - i__2 = *m - 1; - i__3 = *m - 1; - dorgqr_(&i__1, &i__2, &i__3, &a[((a_dim1) << (1)) + 2], lda, & - tau[1], &work[1], lwork, &iinfo); - } - } - } else { - -/* - Form P', determined by a call to DGEBRD to reduce a k-by-n - matrix -*/ - - if (*k < *n) { - -/* If k < n, assume k <= m <= n */ - - dorglq_(m, n, k, &a[a_offset], lda, &tau[1], &work[1], lwork, & - iinfo); - - } else { - -/* - If k >= n, assume m = n - - Shift the vectors which define the elementary reflectors one - row downward, and set the first row and column of P' to - those of the unit matrix -*/ - - a[a_dim1 + 1] = 1.; - i__1 = *n; - for (i__ = 2; i__ <= i__1; ++i__) { - a[i__ + a_dim1] = 0.; -/* L40: */ - } - i__1 = *n; - for (j = 2; j <= i__1; ++j) { - for (i__ = j - 1; i__ >= 2; --i__) { - a[i__ + j * a_dim1] = a[i__ - 1 + j * a_dim1]; -/* L50: */ - } - a[j * a_dim1 + 1] = 0.; -/* L60: */ - } - if (*n > 1) { - -/* Form P'(2:n,2:n) */ - - i__1 = *n - 1; - i__2 = *n - 1; - i__3 = *n - 1; - dorglq_(&i__1, &i__2, &i__3, &a[((a_dim1) << (1)) + 2], lda, & - tau[1], &work[1], lwork, &iinfo); - } - } - } - work[1] = (doublereal) lwkopt; - return 0; - -/* End of DORGBR */ - -} /* dorgbr_ */ - -/* Subroutine */ int dorghr_(integer *n, integer *ilo, integer *ihi, - doublereal *a, integer *lda, doublereal *tau, doublereal *work, - integer *lwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2; - - /* Local variables */ - static integer i__, j, nb, nh, iinfo; - extern /* Subroutine */ int xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int dorgqr_(integer *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *, - integer *); - static integer lwkopt; - static logical lquery; - - -/* - -- LAPACK routine 
(version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DORGHR generates a real orthogonal matrix Q which is defined as the - product of IHI-ILO elementary reflectors of order N, as returned by - DGEHRD: - - Q = H(ilo) H(ilo+1) . . . H(ihi-1). - - Arguments - ========= - - N (input) INTEGER - The order of the matrix Q. N >= 0. - - ILO (input) INTEGER - IHI (input) INTEGER - ILO and IHI must have the same values as in the previous call - of DGEHRD. Q is equal to the unit matrix except in the - submatrix Q(ilo+1:ihi,ilo+1:ihi). - 1 <= ILO <= IHI <= N, if N > 0; ILO=1 and IHI=0, if N=0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the vectors which define the elementary reflectors, - as returned by DGEHRD. - On exit, the N-by-N orthogonal matrix Q. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - TAU (input) DOUBLE PRECISION array, dimension (N-1) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by DGEHRD. - - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= IHI-ILO. - For optimum performance LWORK >= (IHI-ILO)*NB, where NB is - the optimal blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. 
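The description above says Q is equal to the unit matrix except in the submatrix Q(ilo+1:ihi, ilo+1:ihi), which is exactly the structure the zeroing loops in the routine below establish. A small 0-based sketch of that embedding (`embed_identity` is illustrative only, not a LAPACK routine; the 0-based window [ilo, ihi) plays the role of the 1-based Q(ilo+1:ihi, ilo+1:ihi)):

```c
#include <assert.h>

// Set every entry of the column-major n-by-n matrix q outside the
// active window rows/cols [ilo, ihi) to the identity, leaving the
// window itself (where the reflectors act) untouched.
static void embed_identity(double *q, int n, int ilo, int ihi)
{
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i) {
            int inside = i >= ilo && i < ihi && j >= ilo && j < ihi;
            if (!inside)
                q[i + j * n] = (i == j) ? 1.0 : 0.0;
        }
}
```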
- - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - nh = *ihi - *ilo; - lquery = *lwork == -1; - if (*n < 0) { - *info = -1; - } else if (*ilo < 1 || *ilo > max(1,*n)) { - *info = -2; - } else if (*ihi < min(*ilo,*n) || *ihi > *n) { - *info = -3; - } else if (*lda < max(1,*n)) { - *info = -5; - } else if ((*lwork < max(1,nh) && ! lquery)) { - *info = -8; - } - - if (*info == 0) { - nb = ilaenv_(&c__1, "DORGQR", " ", &nh, &nh, &nh, &c_n1, (ftnlen)6, ( - ftnlen)1); - lwkopt = max(1,nh) * nb; - work[1] = (doublereal) lwkopt; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("DORGHR", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - work[1] = 1.; - return 0; - } - -/* - Shift the vectors which define the elementary reflectors one - column to the right, and set the first ilo and the last n-ihi - rows and columns to those of the unit matrix -*/ - - i__1 = *ilo + 1; - for (j = *ihi; j >= i__1; --j) { - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] = 0.; -/* L10: */ - } - i__2 = *ihi; - for (i__ = j + 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] = a[i__ + (j - 1) * a_dim1]; -/* L20: */ - } - i__2 = *n; - for (i__ = *ihi + 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] = 0.; -/* L30: */ - } -/* L40: */ - } - i__1 = *ilo; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] = 0.; -/* L50: */ - } - a[j + j * a_dim1] = 1.; -/* L60: */ - } - i__1 = *n; - for (j = *ihi + 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] = 0.; -/* L70: */ - } - a[j + j * a_dim1] = 1.; -/* 
L80: */ - } - - if (nh > 0) { - -/* Generate Q(ilo+1:ihi,ilo+1:ihi) */ - - dorgqr_(&nh, &nh, &nh, &a[*ilo + 1 + (*ilo + 1) * a_dim1], lda, &tau[* - ilo], &work[1], lwork, &iinfo); - } - work[1] = (doublereal) lwkopt; - return 0; - -/* End of DORGHR */ - -} /* dorghr_ */ - -/* Subroutine */ int dorgl2_(integer *m, integer *n, integer *k, doublereal * - a, integer *lda, doublereal *tau, doublereal *work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2; - doublereal d__1; - - /* Local variables */ - static integer i__, j, l; - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *), dlarf_(char *, integer *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *, doublereal *), xerbla_(char *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DORGL2 generates an m by n real matrix Q with orthonormal rows, - which is defined as the first m rows of a product of k elementary - reflectors of order n - - Q = H(k) . . . H(2) H(1) - - as returned by DGELQF. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix Q. M >= 0. - - N (input) INTEGER - The number of columns of the matrix Q. N >= M. - - K (input) INTEGER - The number of elementary reflectors whose product defines the - matrix Q. M >= K >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the i-th row must contain the vector which defines - the elementary reflector H(i), for i = 1,2,...,k, as returned - by DGELQF in the first k rows of its array argument A. - On exit, the m-by-n matrix Q. - - LDA (input) INTEGER - The first dimension of the array A. LDA >= max(1,M). 
- - TAU (input) DOUBLE PRECISION array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by DGELQF. - - WORK (workspace) DOUBLE PRECISION array, dimension (M) - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument has an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - if (*m < 0) { - *info = -1; - } else if (*n < *m) { - *info = -2; - } else if (*k < 0 || *k > *m) { - *info = -3; - } else if (*lda < max(1,*m)) { - *info = -5; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DORGL2", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*m <= 0) { - return 0; - } - - if (*k < *m) { - -/* Initialise rows k+1:m to rows of the unit matrix */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (l = *k + 1; l <= i__2; ++l) { - a[l + j * a_dim1] = 0.; -/* L10: */ - } - if ((j > *k && j <= *m)) { - a[j + j * a_dim1] = 1.; - } -/* L20: */ - } - } - - for (i__ = *k; i__ >= 1; --i__) { - -/* Apply H(i) to A(i:m,i:n) from the right */ - - if (i__ < *n) { - if (i__ < *m) { - a[i__ + i__ * a_dim1] = 1.; - i__1 = *m - i__; - i__2 = *n - i__ + 1; - dlarf_("Right", &i__1, &i__2, &a[i__ + i__ * a_dim1], lda, & - tau[i__], &a[i__ + 1 + i__ * a_dim1], lda, &work[1]); - } - i__1 = *n - i__; - d__1 = -tau[i__]; - dscal_(&i__1, &d__1, &a[i__ + (i__ + 1) * a_dim1], lda); - } - a[i__ + i__ * a_dim1] = 1. 
- tau[i__]; - -/* Set A(i,1:i-1) to zero */ - - i__1 = i__ - 1; - for (l = 1; l <= i__1; ++l) { - a[i__ + l * a_dim1] = 0.; -/* L30: */ - } -/* L40: */ - } - return 0; - -/* End of DORGL2 */ - -} /* dorgl2_ */ - -/* Subroutine */ int dorglq_(integer *m, integer *n, integer *k, doublereal * - a, integer *lda, doublereal *tau, doublereal *work, integer *lwork, - integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__, j, l, ib, nb, ki, kk, nx, iws, nbmin, iinfo; - extern /* Subroutine */ int dorgl2_(integer *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *), - dlarfb_(char *, char *, char *, char *, integer *, integer *, - integer *, doublereal *, integer *, doublereal *, integer *, - doublereal *, integer *, doublereal *, integer *), dlarft_(char *, char *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static integer ldwork, lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DORGLQ generates an M-by-N real matrix Q with orthonormal rows, - which is defined as the first M rows of a product of K elementary - reflectors of order N - - Q = H(k) . . . H(2) H(1) - - as returned by DGELQF. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix Q. M >= 0. - - N (input) INTEGER - The number of columns of the matrix Q. N >= M. - - K (input) INTEGER - The number of elementary reflectors whose product defines the - matrix Q. M >= K >= 0. 
- - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the i-th row must contain the vector which defines - the elementary reflector H(i), for i = 1,2,...,k, as returned - by DGELQF in the first k rows of its array argument A. - On exit, the M-by-N matrix Q. - - LDA (input) INTEGER - The first dimension of the array A. LDA >= max(1,M). - - TAU (input) DOUBLE PRECISION array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by DGELQF. - - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= max(1,M). - For optimum performance LWORK >= M*NB, where NB is - the optimal blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument has an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - nb = ilaenv_(&c__1, "DORGLQ", " ", m, n, k, &c_n1, (ftnlen)6, (ftnlen)1); - lwkopt = max(1,*m) * nb; - work[1] = (doublereal) lwkopt; - lquery = *lwork == -1; - if (*m < 0) { - *info = -1; - } else if (*n < *m) { - *info = -2; - } else if (*k < 0 || *k > *m) { - *info = -3; - } else if (*lda < max(1,*m)) { - *info = -5; - } else if ((*lwork < max(1,*m) && ! 
lquery)) { - *info = -8; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DORGLQ", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*m <= 0) { - work[1] = 1.; - return 0; - } - - nbmin = 2; - nx = 0; - iws = *m; - if ((nb > 1 && nb < *k)) { - -/* - Determine when to cross over from blocked to unblocked code. - - Computing MAX -*/ - i__1 = 0, i__2 = ilaenv_(&c__3, "DORGLQ", " ", m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)1); - nx = max(i__1,i__2); - if (nx < *k) { - -/* Determine if workspace is large enough for blocked code. */ - - ldwork = *m; - iws = ldwork * nb; - if (*lwork < iws) { - -/* - Not enough workspace to use optimal NB: reduce NB and - determine the minimum value of NB. -*/ - - nb = *lwork / ldwork; -/* Computing MAX */ - i__1 = 2, i__2 = ilaenv_(&c__2, "DORGLQ", " ", m, n, k, &c_n1, - (ftnlen)6, (ftnlen)1); - nbmin = max(i__1,i__2); - } - } - } - - if (((nb >= nbmin && nb < *k) && nx < *k)) { - -/* - Use blocked code after the last block. - The first kk rows are handled by the block method. -*/ - - ki = (*k - nx - 1) / nb * nb; -/* Computing MIN */ - i__1 = *k, i__2 = ki + nb; - kk = min(i__1,i__2); - -/* Set A(kk+1:m,1:kk) to zero. */ - - i__1 = kk; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = kk + 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] = 0.; -/* L10: */ - } -/* L20: */ - } - } else { - kk = 0; - } - -/* Use unblocked code for the last or only block. */ - - if (kk < *m) { - i__1 = *m - kk; - i__2 = *n - kk; - i__3 = *k - kk; - dorgl2_(&i__1, &i__2, &i__3, &a[kk + 1 + (kk + 1) * a_dim1], lda, & - tau[kk + 1], &work[1], &iinfo); - } - - if (kk > 0) { - -/* Use blocked code */ - - i__1 = -nb; - for (i__ = ki + 1; i__1 < 0 ? i__ >= 1 : i__ <= 1; i__ += i__1) { -/* Computing MIN */ - i__2 = nb, i__3 = *k - i__ + 1; - ib = min(i__2,i__3); - if (i__ + ib <= *m) { - -/* - Form the triangular factor of the block reflector - H = H(i) H(i+1) . . . 
H(i+ib-1) -*/ - - i__2 = *n - i__ + 1; - dlarft_("Forward", "Rowwise", &i__2, &ib, &a[i__ + i__ * - a_dim1], lda, &tau[i__], &work[1], &ldwork); - -/* Apply H' to A(i+ib:m,i:n) from the right */ - - i__2 = *m - i__ - ib + 1; - i__3 = *n - i__ + 1; - dlarfb_("Right", "Transpose", "Forward", "Rowwise", &i__2, & - i__3, &ib, &a[i__ + i__ * a_dim1], lda, &work[1], & - ldwork, &a[i__ + ib + i__ * a_dim1], lda, &work[ib + - 1], &ldwork); - } - -/* Apply H' to columns i:n of current block */ - - i__2 = *n - i__ + 1; - dorgl2_(&ib, &i__2, &ib, &a[i__ + i__ * a_dim1], lda, &tau[i__], & - work[1], &iinfo); - -/* Set columns 1:i-1 of current block to zero */ - - i__2 = i__ - 1; - for (j = 1; j <= i__2; ++j) { - i__3 = i__ + ib - 1; - for (l = i__; l <= i__3; ++l) { - a[l + j * a_dim1] = 0.; -/* L30: */ - } -/* L40: */ - } -/* L50: */ - } - } - - work[1] = (doublereal) iws; - return 0; - -/* End of DORGLQ */ - -} /* dorglq_ */ - -/* Subroutine */ int dorgqr_(integer *m, integer *n, integer *k, doublereal * - a, integer *lda, doublereal *tau, doublereal *work, integer *lwork, - integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__, j, l, ib, nb, ki, kk, nx, iws, nbmin, iinfo; - extern /* Subroutine */ int dorg2r_(integer *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *), - dlarfb_(char *, char *, char *, char *, integer *, integer *, - integer *, doublereal *, integer *, doublereal *, integer *, - doublereal *, integer *, doublereal *, integer *), dlarft_(char *, char *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static integer ldwork, lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DORGQR generates an M-by-N real matrix Q with orthonormal columns, - which is defined as the first N columns of a product of K elementary - reflectors of order M - - Q = H(1) H(2) . . . H(k) - - as returned by DGEQRF. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix Q. M >= 0. - - N (input) INTEGER - The number of columns of the matrix Q. M >= N >= 0. - - K (input) INTEGER - The number of elementary reflectors whose product defines the - matrix Q. N >= K >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the i-th column must contain the vector which - defines the elementary reflector H(i), for i = 1,2,...,k, as - returned by DGEQRF in the first k columns of its array - argument A. - On exit, the M-by-N matrix Q. - - LDA (input) INTEGER - The first dimension of the array A. LDA >= max(1,M). - - TAU (input) DOUBLE PRECISION array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by DGEQRF. - - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= max(1,N). - For optimum performance LWORK >= N*NB, where NB is the - optimal blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. 
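When the blocked branch of DORGQR runs (that is, when nbmin <= nb < k and nx < k), it splits the k reflectors so that the first kk are processed in blocks of nb and the tail falls through to the unblocked DORG2R. The partition arithmetic from the routine, recomputed as a standalone helper (the helper itself is not part of LAPACK):

```c
#include <assert.h>

// ki is the start offset of the last full block; the first kk
// reflectors go through the blocked code, the remaining k - kk
// through the unblocked DORG2R-style loop.
static void dorgqr_partition(int k, int nb, int nx, int *ki, int *kk)
{
    *ki = (k - nx - 1) / nb * nb;   // mirrors ki = (*k - nx - 1) / nb * nb
    int cand = *ki + nb;
    *kk = cand < k ? cand : k;      // mirrors kk = min(*k, ki + nb)
}
```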
- - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument has an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - nb = ilaenv_(&c__1, "DORGQR", " ", m, n, k, &c_n1, (ftnlen)6, (ftnlen)1); - lwkopt = max(1,*n) * nb; - work[1] = (doublereal) lwkopt; - lquery = *lwork == -1; - if (*m < 0) { - *info = -1; - } else if (*n < 0 || *n > *m) { - *info = -2; - } else if (*k < 0 || *k > *n) { - *info = -3; - } else if (*lda < max(1,*m)) { - *info = -5; - } else if ((*lwork < max(1,*n) && ! lquery)) { - *info = -8; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DORGQR", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*n <= 0) { - work[1] = 1.; - return 0; - } - - nbmin = 2; - nx = 0; - iws = *n; - if ((nb > 1 && nb < *k)) { - -/* - Determine when to cross over from blocked to unblocked code. - - Computing MAX -*/ - i__1 = 0, i__2 = ilaenv_(&c__3, "DORGQR", " ", m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)1); - nx = max(i__1,i__2); - if (nx < *k) { - -/* Determine if workspace is large enough for blocked code. */ - - ldwork = *n; - iws = ldwork * nb; - if (*lwork < iws) { - -/* - Not enough workspace to use optimal NB: reduce NB and - determine the minimum value of NB. -*/ - - nb = *lwork / ldwork; -/* Computing MAX */ - i__1 = 2, i__2 = ilaenv_(&c__2, "DORGQR", " ", m, n, k, &c_n1, - (ftnlen)6, (ftnlen)1); - nbmin = max(i__1,i__2); - } - } - } - - if (((nb >= nbmin && nb < *k) && nx < *k)) { - -/* - Use blocked code after the last block. - The first kk columns are handled by the block method. -*/ - - ki = (*k - nx - 1) / nb * nb; -/* Computing MIN */ - i__1 = *k, i__2 = ki + nb; - kk = min(i__1,i__2); - -/* Set A(1:kk,kk+1:n) to zero. 
*/ - - i__1 = *n; - for (j = kk + 1; j <= i__1; ++j) { - i__2 = kk; - for (i__ = 1; i__ <= i__2; ++i__) { - a[i__ + j * a_dim1] = 0.; -/* L10: */ - } -/* L20: */ - } - } else { - kk = 0; - } - -/* Use unblocked code for the last or only block. */ - - if (kk < *n) { - i__1 = *m - kk; - i__2 = *n - kk; - i__3 = *k - kk; - dorg2r_(&i__1, &i__2, &i__3, &a[kk + 1 + (kk + 1) * a_dim1], lda, & - tau[kk + 1], &work[1], &iinfo); - } - - if (kk > 0) { - -/* Use blocked code */ - - i__1 = -nb; - for (i__ = ki + 1; i__1 < 0 ? i__ >= 1 : i__ <= 1; i__ += i__1) { -/* Computing MIN */ - i__2 = nb, i__3 = *k - i__ + 1; - ib = min(i__2,i__3); - if (i__ + ib <= *n) { - -/* - Form the triangular factor of the block reflector - H = H(i) H(i+1) . . . H(i+ib-1) -*/ - - i__2 = *m - i__ + 1; - dlarft_("Forward", "Columnwise", &i__2, &ib, &a[i__ + i__ * - a_dim1], lda, &tau[i__], &work[1], &ldwork); - -/* Apply H to A(i:m,i+ib:n) from the left */ - - i__2 = *m - i__ + 1; - i__3 = *n - i__ - ib + 1; - dlarfb_("Left", "No transpose", "Forward", "Columnwise", & - i__2, &i__3, &ib, &a[i__ + i__ * a_dim1], lda, &work[ - 1], &ldwork, &a[i__ + (i__ + ib) * a_dim1], lda, & - work[ib + 1], &ldwork); - } - -/* Apply H to rows i:m of current block */ - - i__2 = *m - i__ + 1; - dorg2r_(&i__2, &ib, &ib, &a[i__ + i__ * a_dim1], lda, &tau[i__], & - work[1], &iinfo); - -/* Set rows 1:i-1 of current block to zero */ - - i__2 = i__ + ib - 1; - for (j = i__; j <= i__2; ++j) { - i__3 = i__ - 1; - for (l = 1; l <= i__3; ++l) { - a[l + j * a_dim1] = 0.; -/* L30: */ - } -/* L40: */ - } -/* L50: */ - } - } - - work[1] = (doublereal) iws; - return 0; - -/* End of DORGQR */ - -} /* dorgqr_ */ - -/* Subroutine */ int dorm2l_(char *side, char *trans, integer *m, integer *n, - integer *k, doublereal *a, integer *lda, doublereal *tau, doublereal * - c__, integer *ldc, doublereal *work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2; - - /* Local variables */ - 
static integer i__, i1, i2, i3, mi, ni, nq; - static doublereal aii; - static logical left; - extern /* Subroutine */ int dlarf_(char *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *, - doublereal *); - extern logical lsame_(char *, char *); - extern /* Subroutine */ int xerbla_(char *, integer *); - static logical notran; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - DORM2L overwrites the general real m by n matrix C with - - Q * C if SIDE = 'L' and TRANS = 'N', or - - Q'* C if SIDE = 'L' and TRANS = 'T', or - - C * Q if SIDE = 'R' and TRANS = 'N', or - - C * Q' if SIDE = 'R' and TRANS = 'T', - - where Q is a real orthogonal matrix defined as the product of k - elementary reflectors - - Q = H(k) . . . H(2) H(1) - - as returned by DGEQLF. Q is of order m if SIDE = 'L' and of order n - if SIDE = 'R'. - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'L': apply Q or Q' from the Left - = 'R': apply Q or Q' from the Right - - TRANS (input) CHARACTER*1 - = 'N': apply Q (No transpose) - = 'T': apply Q' (Transpose) - - M (input) INTEGER - The number of rows of the matrix C. M >= 0. - - N (input) INTEGER - The number of columns of the matrix C. N >= 0. - - K (input) INTEGER - The number of elementary reflectors whose product defines - the matrix Q. - If SIDE = 'L', M >= K >= 0; - if SIDE = 'R', N >= K >= 0. - - A (input) DOUBLE PRECISION array, dimension (LDA,K) - The i-th column must contain the vector which defines the - elementary reflector H(i), for i = 1,2,...,k, as returned by - DGEQLF in the last k columns of its array argument A. - A is modified by the routine but restored on exit. - - LDA (input) INTEGER - The leading dimension of the array A. - If SIDE = 'L', LDA >= max(1,M); - if SIDE = 'R', LDA >= max(1,N). 
- - TAU (input) DOUBLE PRECISION array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by DGEQLF. - - C (input/output) DOUBLE PRECISION array, dimension (LDC,N) - On entry, the m by n matrix C. - On exit, C is overwritten by Q*C or Q'*C or C*Q' or C*Q. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >= max(1,M). - - WORK (workspace) DOUBLE PRECISION array, dimension - (N) if SIDE = 'L', - (M) if SIDE = 'R' - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - left = lsame_(side, "L"); - notran = lsame_(trans, "N"); - -/* NQ is the order of Q */ - - if (left) { - nq = *m; - } else { - nq = *n; - } - if ((! left && ! lsame_(side, "R"))) { - *info = -1; - } else if ((! notran && ! lsame_(trans, "T"))) { - *info = -2; - } else if (*m < 0) { - *info = -3; - } else if (*n < 0) { - *info = -4; - } else if (*k < 0 || *k > nq) { - *info = -5; - } else if (*lda < max(1,nq)) { - *info = -7; - } else if (*ldc < max(1,*m)) { - *info = -10; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DORM2L", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0 || *k == 0) { - return 0; - } - - if ((left && notran) || (! left && ! notran)) { - i1 = 1; - i2 = *k; - i3 = 1; - } else { - i1 = *k; - i2 = 1; - i3 = -1; - } - - if (left) { - ni = *n; - } else { - mi = *m; - } - - i__1 = i2; - i__2 = i3; - for (i__ = i1; i__2 < 0 ? 
i__ >= i__1 : i__ <= i__1; i__ += i__2) { - if (left) { - -/* H(i) is applied to C(1:m-k+i,1:n) */ - - mi = *m - *k + i__; - } else { - -/* H(i) is applied to C(1:m,1:n-k+i) */ - - ni = *n - *k + i__; - } - -/* Apply H(i) */ - - aii = a[nq - *k + i__ + i__ * a_dim1]; - a[nq - *k + i__ + i__ * a_dim1] = 1.; - dlarf_(side, &mi, &ni, &a[i__ * a_dim1 + 1], &c__1, &tau[i__], &c__[ - c_offset], ldc, &work[1]); - a[nq - *k + i__ + i__ * a_dim1] = aii; -/* L10: */ - } - return 0; - -/* End of DORM2L */ - -} /* dorm2l_ */ - -/* Subroutine */ int dorm2r_(char *side, char *trans, integer *m, integer *n, - integer *k, doublereal *a, integer *lda, doublereal *tau, doublereal * - c__, integer *ldc, doublereal *work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2; - - /* Local variables */ - static integer i__, i1, i2, i3, ic, jc, mi, ni, nq; - static doublereal aii; - static logical left; - extern /* Subroutine */ int dlarf_(char *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *, - doublereal *); - extern logical lsame_(char *, char *); - extern /* Subroutine */ int xerbla_(char *, integer *); - static logical notran; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - DORM2R overwrites the general real m by n matrix C with - - Q * C if SIDE = 'L' and TRANS = 'N', or - - Q'* C if SIDE = 'L' and TRANS = 'T', or - - C * Q if SIDE = 'R' and TRANS = 'N', or - - C * Q' if SIDE = 'R' and TRANS = 'T', - - where Q is a real orthogonal matrix defined as the product of k - elementary reflectors - - Q = H(1) H(2) . . . H(k) - - as returned by DGEQRF. Q is of order m if SIDE = 'L' and of order n - if SIDE = 'R'. 
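The "apply Q without ever forming it" idea behind DORM2R (and DORM2L above) reduces to applying one elementary reflector H = I - tau*v*v^T at a time. A tiny self-contained sketch on a length-3 vector; the names are illustrative and v, tau are chosen by hand rather than produced by DGEQRF:

```c
#include <assert.h>
#include <math.h>

#define NDIM 3

// c := (I - tau * v * v^T) * c.  With tau = 2 / (v^T v) this H is
// orthogonal and its own inverse, which the assertions below rely on.
static void apply_reflector(const double v[NDIM], double tau,
                            double c[NDIM])
{
    double dot = 0.0;
    for (int i = 0; i < NDIM; ++i)
        dot += v[i] * c[i];          // v^T c
    for (int i = 0; i < NDIM; ++i)
        c[i] -= tau * v[i] * dot;    // rank-1 update
}
```

DLARF, called in the loop below, performs the analogous rank-1 update C := C - tau*v*(v^T C) on a whole matrix at once.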
- - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'L': apply Q or Q' from the Left - = 'R': apply Q or Q' from the Right - - TRANS (input) CHARACTER*1 - = 'N': apply Q (No transpose) - = 'T': apply Q' (Transpose) - - M (input) INTEGER - The number of rows of the matrix C. M >= 0. - - N (input) INTEGER - The number of columns of the matrix C. N >= 0. - - K (input) INTEGER - The number of elementary reflectors whose product defines - the matrix Q. - If SIDE = 'L', M >= K >= 0; - if SIDE = 'R', N >= K >= 0. - - A (input) DOUBLE PRECISION array, dimension (LDA,K) - The i-th column must contain the vector which defines the - elementary reflector H(i), for i = 1,2,...,k, as returned by - DGEQRF in the first k columns of its array argument A. - A is modified by the routine but restored on exit. - - LDA (input) INTEGER - The leading dimension of the array A. - If SIDE = 'L', LDA >= max(1,M); - if SIDE = 'R', LDA >= max(1,N). - - TAU (input) DOUBLE PRECISION array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by DGEQRF. - - C (input/output) DOUBLE PRECISION array, dimension (LDC,N) - On entry, the m by n matrix C. - On exit, C is overwritten by Q*C or Q'*C or C*Q' or C*Q. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >= max(1,M). 
- - WORK (workspace) DOUBLE PRECISION array, dimension - (N) if SIDE = 'L', - (M) if SIDE = 'R' - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - left = lsame_(side, "L"); - notran = lsame_(trans, "N"); - -/* NQ is the order of Q */ - - if (left) { - nq = *m; - } else { - nq = *n; - } - if ((! left && ! lsame_(side, "R"))) { - *info = -1; - } else if ((! notran && ! lsame_(trans, "T"))) { - *info = -2; - } else if (*m < 0) { - *info = -3; - } else if (*n < 0) { - *info = -4; - } else if (*k < 0 || *k > nq) { - *info = -5; - } else if (*lda < max(1,nq)) { - *info = -7; - } else if (*ldc < max(1,*m)) { - *info = -10; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DORM2R", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0 || *k == 0) { - return 0; - } - - if ((left && ! notran) || (! left && notran)) { - i1 = 1; - i2 = *k; - i3 = 1; - } else { - i1 = *k; - i2 = 1; - i3 = -1; - } - - if (left) { - ni = *n; - jc = 1; - } else { - mi = *m; - ic = 1; - } - - i__1 = i2; - i__2 = i3; - for (i__ = i1; i__2 < 0 ? 
i__ >= i__1 : i__ <= i__1; i__ += i__2) { - if (left) { - -/* H(i) is applied to C(i:m,1:n) */ - - mi = *m - i__ + 1; - ic = i__; - } else { - -/* H(i) is applied to C(1:m,i:n) */ - - ni = *n - i__ + 1; - jc = i__; - } - -/* Apply H(i) */ - - aii = a[i__ + i__ * a_dim1]; - a[i__ + i__ * a_dim1] = 1.; - dlarf_(side, &mi, &ni, &a[i__ + i__ * a_dim1], &c__1, &tau[i__], &c__[ - ic + jc * c_dim1], ldc, &work[1]); - a[i__ + i__ * a_dim1] = aii; -/* L10: */ - } - return 0; - -/* End of DORM2R */ - -} /* dorm2r_ */ - -/* Subroutine */ int dormbr_(char *vect, char *side, char *trans, integer *m, - integer *n, integer *k, doublereal *a, integer *lda, doublereal *tau, - doublereal *c__, integer *ldc, doublereal *work, integer *lwork, - integer *info) -{ - /* System generated locals */ - address a__1[2]; - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2, i__3[2]; - char ch__1[2]; - - /* Builtin functions */ - /* Subroutine */ int s_cat(char *, char **, integer *, integer *, ftnlen); - - /* Local variables */ - static integer i1, i2, nb, mi, ni, nq, nw; - static logical left; - extern logical lsame_(char *, char *); - static integer iinfo; - extern /* Subroutine */ int xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int dormlq_(char *, char *, integer *, integer *, - integer *, doublereal *, integer *, doublereal *, doublereal *, - integer *, doublereal *, integer *, integer *); - static logical notran; - extern /* Subroutine */ int dormqr_(char *, char *, integer *, integer *, - integer *, doublereal *, integer *, doublereal *, doublereal *, - integer *, doublereal *, integer *, integer *); - static logical applyq; - static char transt[1]; - static integer lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd.,
       Courant Institute, Argonne National Lab, and Rice University
       June 30, 1999


    Purpose
    =======

    If VECT = 'Q', DORMBR overwrites the general real M-by-N matrix C
    with
                    SIDE = 'L'     SIDE = 'R'
    TRANS = 'N':      Q * C          C * Q
    TRANS = 'T':      Q**T * C       C * Q**T

    If VECT = 'P', DORMBR overwrites the general real M-by-N matrix C
    with
                    SIDE = 'L'     SIDE = 'R'
    TRANS = 'N':      P * C          C * P
    TRANS = 'T':      P**T * C       C * P**T

    Here Q and P**T are the orthogonal matrices determined by DGEBRD when
    reducing a real matrix A to bidiagonal form: A = Q * B * P**T. Q and
    P**T are defined as products of elementary reflectors H(i) and G(i)
    respectively.

    Let nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Thus nq is the
    order of the orthogonal matrix Q or P**T that is applied.

    If VECT = 'Q', A is assumed to have been an NQ-by-K matrix:
    if nq >= k, Q = H(1) H(2) . . . H(k);
    if nq < k, Q = H(1) H(2) . . . H(nq-1).

    If VECT = 'P', A is assumed to have been a K-by-NQ matrix:
    if k < nq, P = G(1) G(2) . . . G(k);
    if k >= nq, P = G(1) G(2) . . . G(nq-1).

    Arguments
    =========

    VECT    (input) CHARACTER*1
            = 'Q': apply Q or Q**T;
            = 'P': apply P or P**T.

    SIDE    (input) CHARACTER*1
            = 'L': apply Q, Q**T, P or P**T from the Left;
            = 'R': apply Q, Q**T, P or P**T from the Right.

    TRANS   (input) CHARACTER*1
            = 'N': No transpose, apply Q or P;
            = 'T': Transpose, apply Q**T or P**T.

    M       (input) INTEGER
            The number of rows of the matrix C. M >= 0.

    N       (input) INTEGER
            The number of columns of the matrix C. N >= 0.

    K       (input) INTEGER
            If VECT = 'Q', the number of columns in the original
            matrix reduced by DGEBRD.
            If VECT = 'P', the number of rows in the original
            matrix reduced by DGEBRD.
            K >= 0.

    A       (input) DOUBLE PRECISION array, dimension
                                  (LDA,min(nq,K)) if VECT = 'Q'
                                  (LDA,nq)        if VECT = 'P'
            The vectors which define the elementary reflectors H(i) and
            G(i), whose products determine the matrices Q and P, as
            returned by DGEBRD.

    LDA     (input) INTEGER
            The leading dimension of the array A.
            If VECT = 'Q', LDA >= max(1,nq);
            if VECT = 'P', LDA >= max(1,min(nq,K)).

    TAU     (input) DOUBLE PRECISION array, dimension (min(nq,K))
            TAU(i) must contain the scalar factor of the elementary
            reflector H(i) or G(i) which determines Q or P, as returned
            by DGEBRD in the array argument TAUQ or TAUP.

    C       (input/output) DOUBLE PRECISION array, dimension (LDC,N)
            On entry, the M-by-N matrix C.
            On exit, C is overwritten by Q*C or Q**T*C or C*Q**T or C*Q
            or P*C or P**T*C or C*P or C*P**T.

    LDC     (input) INTEGER
            The leading dimension of the array C. LDC >= max(1,M).

    WORK    (workspace/output) DOUBLE PRECISION array, dimension (LWORK)
            On exit, if INFO = 0, WORK(1) returns the optimal LWORK.

    LWORK   (input) INTEGER
            The dimension of the array WORK.
            If SIDE = 'L', LWORK >= max(1,N);
            if SIDE = 'R', LWORK >= max(1,M).
            For optimum performance LWORK >= N*NB if SIDE = 'L', and
            LWORK >= M*NB if SIDE = 'R', where NB is the optimal
            blocksize.

            If LWORK = -1, then a workspace query is assumed; the routine
            only calculates the optimal size of the WORK array, returns
            this value as the first entry of the WORK array, and no error
            message related to LWORK is issued by XERBLA.
- - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - applyq = lsame_(vect, "Q"); - left = lsame_(side, "L"); - notran = lsame_(trans, "N"); - lquery = *lwork == -1; - -/* NQ is the order of Q or P and NW is the minimum dimension of WORK */ - - if (left) { - nq = *m; - nw = *n; - } else { - nq = *n; - nw = *m; - } - if ((! applyq && ! lsame_(vect, "P"))) { - *info = -1; - } else if ((! left && ! lsame_(side, "R"))) { - *info = -2; - } else if ((! notran && ! lsame_(trans, "T"))) { - *info = -3; - } else if (*m < 0) { - *info = -4; - } else if (*n < 0) { - *info = -5; - } else if (*k < 0) { - *info = -6; - } else /* if(complicated condition) */ { -/* Computing MAX */ - i__1 = 1, i__2 = min(nq,*k); - if ((applyq && *lda < max(1,nq)) || (! applyq && *lda < max(i__1,i__2) - )) { - *info = -8; - } else if (*ldc < max(1,*m)) { - *info = -11; - } else if ((*lwork < max(1,nw) && ! 
lquery)) { - *info = -13; - } - } - - if (*info == 0) { - if (applyq) { - if (left) { -/* Writing concatenation */ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = *m - 1; - i__2 = *m - 1; - nb = ilaenv_(&c__1, "DORMQR", ch__1, &i__1, n, &i__2, &c_n1, ( - ftnlen)6, (ftnlen)2); - } else { -/* Writing concatenation */ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = *n - 1; - i__2 = *n - 1; - nb = ilaenv_(&c__1, "DORMQR", ch__1, m, &i__1, &i__2, &c_n1, ( - ftnlen)6, (ftnlen)2); - } - } else { - if (left) { -/* Writing concatenation */ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = *m - 1; - i__2 = *m - 1; - nb = ilaenv_(&c__1, "DORMLQ", ch__1, &i__1, n, &i__2, &c_n1, ( - ftnlen)6, (ftnlen)2); - } else { -/* Writing concatenation */ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = *n - 1; - i__2 = *n - 1; - nb = ilaenv_(&c__1, "DORMLQ", ch__1, m, &i__1, &i__2, &c_n1, ( - ftnlen)6, (ftnlen)2); - } - } - lwkopt = max(1,nw) * nb; - work[1] = (doublereal) lwkopt; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("DORMBR", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - work[1] = 1.; - if (*m == 0 || *n == 0) { - return 0; - } - - if (applyq) { - -/* Apply Q */ - - if (nq >= *k) { - -/* Q was determined by a call to DGEBRD with nq >= k */ - - dormqr_(side, trans, m, n, k, &a[a_offset], lda, &tau[1], &c__[ - c_offset], ldc, &work[1], lwork, &iinfo); - } else if (nq > 1) { - -/* Q was determined by a call to DGEBRD with nq < k */ - - if (left) { - mi = *m - 1; - ni = *n; - i1 = 2; - i2 = 1; - } else { - mi = *m; - ni = *n - 1; - i1 = 1; - i2 = 2; - } - i__1 = nq - 1; - dormqr_(side, trans, &mi, &ni, &i__1, &a[a_dim1 + 2], lda, &tau[1] - , &c__[i1 + 
i2 * c_dim1], ldc, &work[1], lwork, &iinfo); - } - } else { - -/* Apply P */ - - if (notran) { - *(unsigned char *)transt = 'T'; - } else { - *(unsigned char *)transt = 'N'; - } - if (nq > *k) { - -/* P was determined by a call to DGEBRD with nq > k */ - - dormlq_(side, transt, m, n, k, &a[a_offset], lda, &tau[1], &c__[ - c_offset], ldc, &work[1], lwork, &iinfo); - } else if (nq > 1) { - -/* P was determined by a call to DGEBRD with nq <= k */ - - if (left) { - mi = *m - 1; - ni = *n; - i1 = 2; - i2 = 1; - } else { - mi = *m; - ni = *n - 1; - i1 = 1; - i2 = 2; - } - i__1 = nq - 1; - dormlq_(side, transt, &mi, &ni, &i__1, &a[((a_dim1) << (1)) + 1], - lda, &tau[1], &c__[i1 + i2 * c_dim1], ldc, &work[1], - lwork, &iinfo); - } - } - work[1] = (doublereal) lwkopt; - return 0; - -/* End of DORMBR */ - -} /* dormbr_ */ - -/* Subroutine */ int dorml2_(char *side, char *trans, integer *m, integer *n, - integer *k, doublereal *a, integer *lda, doublereal *tau, doublereal * - c__, integer *ldc, doublereal *work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2; - - /* Local variables */ - static integer i__, i1, i2, i3, ic, jc, mi, ni, nq; - static doublereal aii; - static logical left; - extern /* Subroutine */ int dlarf_(char *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *, - doublereal *); - extern logical lsame_(char *, char *); - extern /* Subroutine */ int xerbla_(char *, integer *); - static logical notran; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd.,
       Courant Institute, Argonne National Lab, and Rice University
       February 29, 1992


    Purpose
    =======

    DORML2 overwrites the general real m by n matrix C with

          Q * C  if SIDE = 'L' and TRANS = 'N', or

          Q'* C  if SIDE = 'L' and TRANS = 'T', or

          C * Q  if SIDE = 'R' and TRANS = 'N', or

          C * Q' if SIDE = 'R' and TRANS = 'T',

    where Q is a real orthogonal matrix defined as the product of k
    elementary reflectors

          Q = H(k) . . . H(2) H(1)

    as returned by DGELQF. Q is of order m if SIDE = 'L' and of order n
    if SIDE = 'R'.

    Arguments
    =========

    SIDE    (input) CHARACTER*1
            = 'L': apply Q or Q' from the Left
            = 'R': apply Q or Q' from the Right

    TRANS   (input) CHARACTER*1
            = 'N': apply Q  (No transpose)
            = 'T': apply Q' (Transpose)

    M       (input) INTEGER
            The number of rows of the matrix C. M >= 0.

    N       (input) INTEGER
            The number of columns of the matrix C. N >= 0.

    K       (input) INTEGER
            The number of elementary reflectors whose product defines
            the matrix Q.
            If SIDE = 'L', M >= K >= 0;
            if SIDE = 'R', N >= K >= 0.

    A       (input) DOUBLE PRECISION array, dimension
                                 (LDA,M) if SIDE = 'L',
                                 (LDA,N) if SIDE = 'R'
            The i-th row must contain the vector which defines the
            elementary reflector H(i), for i = 1,2,...,k, as returned by
            DGELQF in the first k rows of its array argument A.
            A is modified by the routine but restored on exit.

    LDA     (input) INTEGER
            The leading dimension of the array A. LDA >= max(1,K).

    TAU     (input) DOUBLE PRECISION array, dimension (K)
            TAU(i) must contain the scalar factor of the elementary
            reflector H(i), as returned by DGELQF.

    C       (input/output) DOUBLE PRECISION array, dimension (LDC,N)
            On entry, the m by n matrix C.
            On exit, C is overwritten by Q*C or Q'*C or C*Q' or C*Q.

    LDC     (input) INTEGER
            The leading dimension of the array C. LDC >= max(1,M).

    WORK    (workspace) DOUBLE PRECISION array, dimension
                                     (N) if SIDE = 'L',
                                     (M) if SIDE = 'R'

    INFO    (output) INTEGER
            = 0: successful exit
            < 0: if INFO = -i, the i-th argument had an illegal value

    =====================================================================


       Test the input arguments
*/

    /* Parameter adjustments */
    a_dim1 = *lda;
    a_offset = 1 + a_dim1 * 1;
    a -= a_offset;
    --tau;
    c_dim1 = *ldc;
    c_offset = 1 + c_dim1 * 1;
    c__ -= c_offset;
    --work;

    /* Function Body */
    *info = 0;
    left = lsame_(side, "L");
    notran = lsame_(trans, "N");

/*     NQ is the order of Q */

    if (left) {
        nq = *m;
    } else {
        nq = *n;
    }
    if ((! left && ! lsame_(side, "R"))) {
        *info = -1;
    } else if ((! notran && ! lsame_(trans, "T"))) {
        *info = -2;
    } else if (*m < 0) {
        *info = -3;
    } else if (*n < 0) {
        *info = -4;
    } else if (*k < 0 || *k > nq) {
        *info = -5;
    } else if (*lda < max(1,*k)) {
        *info = -7;
    } else if (*ldc < max(1,*m)) {
        *info = -10;
    }
    if (*info != 0) {
        i__1 = -(*info);
        xerbla_("DORML2", &i__1);
        return 0;
    }

/*     Quick return if possible */

    if (*m == 0 || *n == 0 || *k == 0) {
        return 0;
    }

    if ((left && notran) || (! left && ! notran)) {
        i1 = 1;
        i2 = *k;
        i3 = 1;
    } else {
        i1 = *k;
        i2 = 1;
        i3 = -1;
    }

    if (left) {
        ni = *n;
        jc = 1;
    } else {
        mi = *m;
        ic = 1;
    }

    i__1 = i2;
    i__2 = i3;
    for (i__ = i1; i__2 < 0 ?
i__ >= i__1 : i__ <= i__1; i__ += i__2) { - if (left) { - -/* H(i) is applied to C(i:m,1:n) */ - - mi = *m - i__ + 1; - ic = i__; - } else { - -/* H(i) is applied to C(1:m,i:n) */ - - ni = *n - i__ + 1; - jc = i__; - } - -/* Apply H(i) */ - - aii = a[i__ + i__ * a_dim1]; - a[i__ + i__ * a_dim1] = 1.; - dlarf_(side, &mi, &ni, &a[i__ + i__ * a_dim1], lda, &tau[i__], &c__[ - ic + jc * c_dim1], ldc, &work[1]); - a[i__ + i__ * a_dim1] = aii; -/* L10: */ - } - return 0; - -/* End of DORML2 */ - -} /* dorml2_ */ - -/* Subroutine */ int dormlq_(char *side, char *trans, integer *m, integer *n, - integer *k, doublereal *a, integer *lda, doublereal *tau, doublereal * - c__, integer *ldc, doublereal *work, integer *lwork, integer *info) -{ - /* System generated locals */ - address a__1[2]; - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2, i__3[2], i__4, - i__5; - char ch__1[2]; - - /* Builtin functions */ - /* Subroutine */ int s_cat(char *, char **, integer *, integer *, ftnlen); - - /* Local variables */ - static integer i__; - static doublereal t[4160] /* was [65][64] */; - static integer i1, i2, i3, ib, ic, jc, nb, mi, ni, nq, nw, iws; - static logical left; - extern logical lsame_(char *, char *); - static integer nbmin, iinfo; - extern /* Subroutine */ int dorml2_(char *, char *, integer *, integer *, - integer *, doublereal *, integer *, doublereal *, doublereal *, - integer *, doublereal *, integer *), dlarfb_(char - *, char *, char *, char *, integer *, integer *, integer *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - integer *, doublereal *, integer *), dlarft_(char *, char *, integer *, integer *, doublereal - *, integer *, doublereal *, doublereal *, integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static logical notran; - static integer ldwork; - static char transt[1]; - static integer lwkopt; - static logical lquery; - - -/* 
    -- LAPACK routine (version 3.0) --
       Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,
       Courant Institute, Argonne National Lab, and Rice University
       June 30, 1999


    Purpose
    =======

    DORMLQ overwrites the general real M-by-N matrix C with

                    SIDE = 'L'     SIDE = 'R'
    TRANS = 'N':      Q * C          C * Q
    TRANS = 'T':      Q**T * C       C * Q**T

    where Q is a real orthogonal matrix defined as the product of k
    elementary reflectors

          Q = H(k) . . . H(2) H(1)

    as returned by DGELQF. Q is of order M if SIDE = 'L' and of order N
    if SIDE = 'R'.

    Arguments
    =========

    SIDE    (input) CHARACTER*1
            = 'L': apply Q or Q**T from the Left;
            = 'R': apply Q or Q**T from the Right.

    TRANS   (input) CHARACTER*1
            = 'N': No transpose, apply Q;
            = 'T': Transpose, apply Q**T.

    M       (input) INTEGER
            The number of rows of the matrix C. M >= 0.

    N       (input) INTEGER
            The number of columns of the matrix C. N >= 0.

    K       (input) INTEGER
            The number of elementary reflectors whose product defines
            the matrix Q.
            If SIDE = 'L', M >= K >= 0;
            if SIDE = 'R', N >= K >= 0.

    A       (input) DOUBLE PRECISION array, dimension
                                 (LDA,M) if SIDE = 'L',
                                 (LDA,N) if SIDE = 'R'
            The i-th row must contain the vector which defines the
            elementary reflector H(i), for i = 1,2,...,k, as returned by
            DGELQF in the first k rows of its array argument A.
            A is modified by the routine but restored on exit.

    LDA     (input) INTEGER
            The leading dimension of the array A. LDA >= max(1,K).

    TAU     (input) DOUBLE PRECISION array, dimension (K)
            TAU(i) must contain the scalar factor of the elementary
            reflector H(i), as returned by DGELQF.

    C       (input/output) DOUBLE PRECISION array, dimension (LDC,N)
            On entry, the M-by-N matrix C.
            On exit, C is overwritten by Q*C or Q**T*C or C*Q**T or C*Q.

    LDC     (input) INTEGER
            The leading dimension of the array C. LDC >= max(1,M).
- - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. - If SIDE = 'L', LWORK >= max(1,N); - if SIDE = 'R', LWORK >= max(1,M). - For optimum performance LWORK >= N*NB if SIDE = 'L', and - LWORK >= M*NB if SIDE = 'R', where NB is the optimal - blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - left = lsame_(side, "L"); - notran = lsame_(trans, "N"); - lquery = *lwork == -1; - -/* NQ is the order of Q and NW is the minimum dimension of WORK */ - - if (left) { - nq = *m; - nw = *n; - } else { - nq = *n; - nw = *m; - } - if ((! left && ! lsame_(side, "R"))) { - *info = -1; - } else if ((! notran && ! lsame_(trans, "T"))) { - *info = -2; - } else if (*m < 0) { - *info = -3; - } else if (*n < 0) { - *info = -4; - } else if (*k < 0 || *k > nq) { - *info = -5; - } else if (*lda < max(1,*k)) { - *info = -7; - } else if (*ldc < max(1,*m)) { - *info = -10; - } else if ((*lwork < max(1,nw) && ! lquery)) { - *info = -12; - } - - if (*info == 0) { - -/* - Determine the block size. NB may be at most NBMAX, where NBMAX - is used to define the local array T. 
- - Computing MIN - Writing concatenation -*/ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = 64, i__2 = ilaenv_(&c__1, "DORMLQ", ch__1, m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)2); - nb = min(i__1,i__2); - lwkopt = max(1,nw) * nb; - work[1] = (doublereal) lwkopt; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("DORMLQ", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0 || *k == 0) { - work[1] = 1.; - return 0; - } - - nbmin = 2; - ldwork = nw; - if ((nb > 1 && nb < *k)) { - iws = nw * nb; - if (*lwork < iws) { - nb = *lwork / ldwork; -/* - Computing MAX - Writing concatenation -*/ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = 2, i__2 = ilaenv_(&c__2, "DORMLQ", ch__1, m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)2); - nbmin = max(i__1,i__2); - } - } else { - iws = nw; - } - - if (nb < nbmin || nb >= *k) { - -/* Use unblocked code */ - - dorml2_(side, trans, m, n, k, &a[a_offset], lda, &tau[1], &c__[ - c_offset], ldc, &work[1], &iinfo); - } else { - -/* Use blocked code */ - - if ((left && notran) || (! left && ! notran)) { - i1 = 1; - i2 = *k; - i3 = nb; - } else { - i1 = (*k - 1) / nb * nb + 1; - i2 = 1; - i3 = -nb; - } - - if (left) { - ni = *n; - jc = 1; - } else { - mi = *m; - ic = 1; - } - - if (notran) { - *(unsigned char *)transt = 'T'; - } else { - *(unsigned char *)transt = 'N'; - } - - i__1 = i2; - i__2 = i3; - for (i__ = i1; i__2 < 0 ? i__ >= i__1 : i__ <= i__1; i__ += i__2) { -/* Computing MIN */ - i__4 = nb, i__5 = *k - i__ + 1; - ib = min(i__4,i__5); - -/* - Form the triangular factor of the block reflector - H = H(i) H(i+1) . . . 
H(i+ib-1) -*/ - - i__4 = nq - i__ + 1; - dlarft_("Forward", "Rowwise", &i__4, &ib, &a[i__ + i__ * a_dim1], - lda, &tau[i__], t, &c__65); - if (left) { - -/* H or H' is applied to C(i:m,1:n) */ - - mi = *m - i__ + 1; - ic = i__; - } else { - -/* H or H' is applied to C(1:m,i:n) */ - - ni = *n - i__ + 1; - jc = i__; - } - -/* Apply H or H' */ - - dlarfb_(side, transt, "Forward", "Rowwise", &mi, &ni, &ib, &a[i__ - + i__ * a_dim1], lda, t, &c__65, &c__[ic + jc * c_dim1], - ldc, &work[1], &ldwork); -/* L10: */ - } - } - work[1] = (doublereal) lwkopt; - return 0; - -/* End of DORMLQ */ - -} /* dormlq_ */ - -/* Subroutine */ int dormql_(char *side, char *trans, integer *m, integer *n, - integer *k, doublereal *a, integer *lda, doublereal *tau, doublereal * - c__, integer *ldc, doublereal *work, integer *lwork, integer *info) -{ - /* System generated locals */ - address a__1[2]; - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2, i__3[2], i__4, - i__5; - char ch__1[2]; - - /* Builtin functions */ - /* Subroutine */ int s_cat(char *, char **, integer *, integer *, ftnlen); - - /* Local variables */ - static integer i__; - static doublereal t[4160] /* was [65][64] */; - static integer i1, i2, i3, ib, nb, mi, ni, nq, nw, iws; - static logical left; - extern logical lsame_(char *, char *); - static integer nbmin, iinfo; - extern /* Subroutine */ int dorm2l_(char *, char *, integer *, integer *, - integer *, doublereal *, integer *, doublereal *, doublereal *, - integer *, doublereal *, integer *), dlarfb_(char - *, char *, char *, char *, integer *, integer *, integer *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - integer *, doublereal *, integer *), dlarft_(char *, char *, integer *, integer *, doublereal - *, integer *, doublereal *, doublereal *, integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static logical notran; - static integer 
ldwork, lwkopt;
    static logical lquery;


/*
    -- LAPACK routine (version 3.0) --
       Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,
       Courant Institute, Argonne National Lab, and Rice University
       June 30, 1999


    Purpose
    =======

    DORMQL overwrites the general real M-by-N matrix C with

                    SIDE = 'L'     SIDE = 'R'
    TRANS = 'N':      Q * C          C * Q
    TRANS = 'T':      Q**T * C       C * Q**T

    where Q is a real orthogonal matrix defined as the product of k
    elementary reflectors

          Q = H(k) . . . H(2) H(1)

    as returned by DGEQLF. Q is of order M if SIDE = 'L' and of order N
    if SIDE = 'R'.

    Arguments
    =========

    SIDE    (input) CHARACTER*1
            = 'L': apply Q or Q**T from the Left;
            = 'R': apply Q or Q**T from the Right.

    TRANS   (input) CHARACTER*1
            = 'N': No transpose, apply Q;
            = 'T': Transpose, apply Q**T.

    M       (input) INTEGER
            The number of rows of the matrix C. M >= 0.

    N       (input) INTEGER
            The number of columns of the matrix C. N >= 0.

    K       (input) INTEGER
            The number of elementary reflectors whose product defines
            the matrix Q.
            If SIDE = 'L', M >= K >= 0;
            if SIDE = 'R', N >= K >= 0.

    A       (input) DOUBLE PRECISION array, dimension (LDA,K)
            The i-th column must contain the vector which defines the
            elementary reflector H(i), for i = 1,2,...,k, as returned by
            DGEQLF in the last k columns of its array argument A.
            A is modified by the routine but restored on exit.

    LDA     (input) INTEGER
            The leading dimension of the array A.
            If SIDE = 'L', LDA >= max(1,M);
            if SIDE = 'R', LDA >= max(1,N).

    TAU     (input) DOUBLE PRECISION array, dimension (K)
            TAU(i) must contain the scalar factor of the elementary
            reflector H(i), as returned by DGEQLF.

    C       (input/output) DOUBLE PRECISION array, dimension (LDC,N)
            On entry, the M-by-N matrix C.
            On exit, C is overwritten by Q*C or Q**T*C or C*Q**T or C*Q.

    LDC     (input) INTEGER
            The leading dimension of the array C. LDC >= max(1,M).
- - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. - If SIDE = 'L', LWORK >= max(1,N); - if SIDE = 'R', LWORK >= max(1,M). - For optimum performance LWORK >= N*NB if SIDE = 'L', and - LWORK >= M*NB if SIDE = 'R', where NB is the optimal - blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - left = lsame_(side, "L"); - notran = lsame_(trans, "N"); - lquery = *lwork == -1; - -/* NQ is the order of Q and NW is the minimum dimension of WORK */ - - if (left) { - nq = *m; - nw = *n; - } else { - nq = *n; - nw = *m; - } - if ((! left && ! lsame_(side, "R"))) { - *info = -1; - } else if ((! notran && ! lsame_(trans, "T"))) { - *info = -2; - } else if (*m < 0) { - *info = -3; - } else if (*n < 0) { - *info = -4; - } else if (*k < 0 || *k > nq) { - *info = -5; - } else if (*lda < max(1,nq)) { - *info = -7; - } else if (*ldc < max(1,*m)) { - *info = -10; - } else if ((*lwork < max(1,nw) && ! lquery)) { - *info = -12; - } - - if (*info == 0) { - -/* - Determine the block size. NB may be at most NBMAX, where NBMAX - is used to define the local array T. 
- - Computing MIN - Writing concatenation -*/ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = 64, i__2 = ilaenv_(&c__1, "DORMQL", ch__1, m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)2); - nb = min(i__1,i__2); - lwkopt = max(1,nw) * nb; - work[1] = (doublereal) lwkopt; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("DORMQL", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0 || *k == 0) { - work[1] = 1.; - return 0; - } - - nbmin = 2; - ldwork = nw; - if ((nb > 1 && nb < *k)) { - iws = nw * nb; - if (*lwork < iws) { - nb = *lwork / ldwork; -/* - Computing MAX - Writing concatenation -*/ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = 2, i__2 = ilaenv_(&c__2, "DORMQL", ch__1, m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)2); - nbmin = max(i__1,i__2); - } - } else { - iws = nw; - } - - if (nb < nbmin || nb >= *k) { - -/* Use unblocked code */ - - dorm2l_(side, trans, m, n, k, &a[a_offset], lda, &tau[1], &c__[ - c_offset], ldc, &work[1], &iinfo); - } else { - -/* Use blocked code */ - - if ((left && notran) || (! left && ! notran)) { - i1 = 1; - i2 = *k; - i3 = nb; - } else { - i1 = (*k - 1) / nb * nb + 1; - i2 = 1; - i3 = -nb; - } - - if (left) { - ni = *n; - } else { - mi = *m; - } - - i__1 = i2; - i__2 = i3; - for (i__ = i1; i__2 < 0 ? i__ >= i__1 : i__ <= i__1; i__ += i__2) { -/* Computing MIN */ - i__4 = nb, i__5 = *k - i__ + 1; - ib = min(i__4,i__5); - -/* - Form the triangular factor of the block reflector - H = H(i+ib-1) . . . 
H(i+1) H(i) -*/ - - i__4 = nq - *k + i__ + ib - 1; - dlarft_("Backward", "Columnwise", &i__4, &ib, &a[i__ * a_dim1 + 1] - , lda, &tau[i__], t, &c__65); - if (left) { - -/* H or H' is applied to C(1:m-k+i+ib-1,1:n) */ - - mi = *m - *k + i__ + ib - 1; - } else { - -/* H or H' is applied to C(1:m,1:n-k+i+ib-1) */ - - ni = *n - *k + i__ + ib - 1; - } - -/* Apply H or H' */ - - dlarfb_(side, trans, "Backward", "Columnwise", &mi, &ni, &ib, &a[ - i__ * a_dim1 + 1], lda, t, &c__65, &c__[c_offset], ldc, & - work[1], &ldwork); -/* L10: */ - } - } - work[1] = (doublereal) lwkopt; - return 0; - -/* End of DORMQL */ - -} /* dormql_ */ - -/* Subroutine */ int dormqr_(char *side, char *trans, integer *m, integer *n, - integer *k, doublereal *a, integer *lda, doublereal *tau, doublereal * - c__, integer *ldc, doublereal *work, integer *lwork, integer *info) -{ - /* System generated locals */ - address a__1[2]; - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2, i__3[2], i__4, - i__5; - char ch__1[2]; - - /* Builtin functions */ - /* Subroutine */ int s_cat(char *, char **, integer *, integer *, ftnlen); - - /* Local variables */ - static integer i__; - static doublereal t[4160] /* was [65][64] */; - static integer i1, i2, i3, ib, ic, jc, nb, mi, ni, nq, nw, iws; - static logical left; - extern logical lsame_(char *, char *); - static integer nbmin, iinfo; - extern /* Subroutine */ int dorm2r_(char *, char *, integer *, integer *, - integer *, doublereal *, integer *, doublereal *, doublereal *, - integer *, doublereal *, integer *), dlarfb_(char - *, char *, char *, char *, integer *, integer *, integer *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - integer *, doublereal *, integer *), dlarft_(char *, char *, integer *, integer *, doublereal - *, integer *, doublereal *, doublereal *, integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static 
logical notran;
    static integer ldwork, lwkopt;
    static logical lquery;


/*
    -- LAPACK routine (version 3.0) --
       Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,
       Courant Institute, Argonne National Lab, and Rice University
       June 30, 1999


    Purpose
    =======

    DORMQR overwrites the general real M-by-N matrix C with

                    SIDE = 'L'     SIDE = 'R'
    TRANS = 'N':      Q * C          C * Q
    TRANS = 'T':      Q**T * C       C * Q**T

    where Q is a real orthogonal matrix defined as the product of k
    elementary reflectors

          Q = H(1) H(2) . . . H(k)

    as returned by DGEQRF. Q is of order M if SIDE = 'L' and of order N
    if SIDE = 'R'.

    Arguments
    =========

    SIDE    (input) CHARACTER*1
            = 'L': apply Q or Q**T from the Left;
            = 'R': apply Q or Q**T from the Right.

    TRANS   (input) CHARACTER*1
            = 'N': No transpose, apply Q;
            = 'T': Transpose, apply Q**T.

    M       (input) INTEGER
            The number of rows of the matrix C. M >= 0.

    N       (input) INTEGER
            The number of columns of the matrix C. N >= 0.

    K       (input) INTEGER
            The number of elementary reflectors whose product defines
            the matrix Q.
            If SIDE = 'L', M >= K >= 0;
            if SIDE = 'R', N >= K >= 0.

    A       (input) DOUBLE PRECISION array, dimension (LDA,K)
            The i-th column must contain the vector which defines the
            elementary reflector H(i), for i = 1,2,...,k, as returned by
            DGEQRF in the first k columns of its array argument A.
            A is modified by the routine but restored on exit.

    LDA     (input) INTEGER
            The leading dimension of the array A.
            If SIDE = 'L', LDA >= max(1,M);
            if SIDE = 'R', LDA >= max(1,N).

    TAU     (input) DOUBLE PRECISION array, dimension (K)
            TAU(i) must contain the scalar factor of the elementary
            reflector H(i), as returned by DGEQRF.

    C       (input/output) DOUBLE PRECISION array, dimension (LDC,N)
            On entry, the M-by-N matrix C.
            On exit, C is overwritten by Q*C or Q**T*C or C*Q**T or C*Q.

    LDC     (input) INTEGER
            The leading dimension of the array C. LDC >= max(1,M).
- - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. - If SIDE = 'L', LWORK >= max(1,N); - if SIDE = 'R', LWORK >= max(1,M). - For optimum performance LWORK >= N*NB if SIDE = 'L', and - LWORK >= M*NB if SIDE = 'R', where NB is the optimal - blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - left = lsame_(side, "L"); - notran = lsame_(trans, "N"); - lquery = *lwork == -1; - -/* NQ is the order of Q and NW is the minimum dimension of WORK */ - - if (left) { - nq = *m; - nw = *n; - } else { - nq = *n; - nw = *m; - } - if ((! left && ! lsame_(side, "R"))) { - *info = -1; - } else if ((! notran && ! lsame_(trans, "T"))) { - *info = -2; - } else if (*m < 0) { - *info = -3; - } else if (*n < 0) { - *info = -4; - } else if (*k < 0 || *k > nq) { - *info = -5; - } else if (*lda < max(1,nq)) { - *info = -7; - } else if (*ldc < max(1,*m)) { - *info = -10; - } else if ((*lwork < max(1,nw) && ! lquery)) { - *info = -12; - } - - if (*info == 0) { - -/* - Determine the block size. NB may be at most NBMAX, where NBMAX - is used to define the local array T. 
- - Computing MIN - Writing concatenation -*/ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = 64, i__2 = ilaenv_(&c__1, "DORMQR", ch__1, m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)2); - nb = min(i__1,i__2); - lwkopt = max(1,nw) * nb; - work[1] = (doublereal) lwkopt; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("DORMQR", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0 || *k == 0) { - work[1] = 1.; - return 0; - } - - nbmin = 2; - ldwork = nw; - if ((nb > 1 && nb < *k)) { - iws = nw * nb; - if (*lwork < iws) { - nb = *lwork / ldwork; -/* - Computing MAX - Writing concatenation -*/ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = 2, i__2 = ilaenv_(&c__2, "DORMQR", ch__1, m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)2); - nbmin = max(i__1,i__2); - } - } else { - iws = nw; - } - - if (nb < nbmin || nb >= *k) { - -/* Use unblocked code */ - - dorm2r_(side, trans, m, n, k, &a[a_offset], lda, &tau[1], &c__[ - c_offset], ldc, &work[1], &iinfo); - } else { - -/* Use blocked code */ - - if ((left && ! notran) || (! left && notran)) { - i1 = 1; - i2 = *k; - i3 = nb; - } else { - i1 = (*k - 1) / nb * nb + 1; - i2 = 1; - i3 = -nb; - } - - if (left) { - ni = *n; - jc = 1; - } else { - mi = *m; - ic = 1; - } - - i__1 = i2; - i__2 = i3; - for (i__ = i1; i__2 < 0 ? i__ >= i__1 : i__ <= i__1; i__ += i__2) { -/* Computing MIN */ - i__4 = nb, i__5 = *k - i__ + 1; - ib = min(i__4,i__5); - -/* - Form the triangular factor of the block reflector - H = H(i) H(i+1) . . . 
H(i+ib-1) -*/ - - i__4 = nq - i__ + 1; - dlarft_("Forward", "Columnwise", &i__4, &ib, &a[i__ + i__ * - a_dim1], lda, &tau[i__], t, &c__65) - ; - if (left) { - -/* H or H' is applied to C(i:m,1:n) */ - - mi = *m - i__ + 1; - ic = i__; - } else { - -/* H or H' is applied to C(1:m,i:n) */ - - ni = *n - i__ + 1; - jc = i__; - } - -/* Apply H or H' */ - - dlarfb_(side, trans, "Forward", "Columnwise", &mi, &ni, &ib, &a[ - i__ + i__ * a_dim1], lda, t, &c__65, &c__[ic + jc * - c_dim1], ldc, &work[1], &ldwork); -/* L10: */ - } - } - work[1] = (doublereal) lwkopt; - return 0; - -/* End of DORMQR */ - -} /* dormqr_ */ - -/* Subroutine */ int dormtr_(char *side, char *uplo, char *trans, integer *m, - integer *n, doublereal *a, integer *lda, doublereal *tau, doublereal * - c__, integer *ldc, doublereal *work, integer *lwork, integer *info) -{ - /* System generated locals */ - address a__1[2]; - integer a_dim1, a_offset, c_dim1, c_offset, i__1[2], i__2, i__3; - char ch__1[2]; - - /* Builtin functions */ - /* Subroutine */ int s_cat(char *, char **, integer *, integer *, ftnlen); - - /* Local variables */ - static integer i1, i2, nb, mi, ni, nq, nw; - static logical left; - extern logical lsame_(char *, char *); - static integer iinfo; - static logical upper; - extern /* Subroutine */ int xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int dormql_(char *, char *, integer *, integer *, - integer *, doublereal *, integer *, doublereal *, doublereal *, - integer *, doublereal *, integer *, integer *), - dormqr_(char *, char *, integer *, integer *, integer *, - doublereal *, integer *, doublereal *, doublereal *, integer *, - doublereal *, integer *, integer *); - static integer lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DORMTR overwrites the general real M-by-N matrix C with - - SIDE = 'L' SIDE = 'R' - TRANS = 'N': Q * C C * Q - TRANS = 'T': Q**T * C C * Q**T - - where Q is a real orthogonal matrix of order nq, with nq = m if - SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of - nq-1 elementary reflectors, as returned by DSYTRD: - - if UPLO = 'U', Q = H(nq-1) . . . H(2) H(1); - - if UPLO = 'L', Q = H(1) H(2) . . . H(nq-1). - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'L': apply Q or Q**T from the Left; - = 'R': apply Q or Q**T from the Right. - - UPLO (input) CHARACTER*1 - = 'U': Upper triangle of A contains elementary reflectors - from DSYTRD; - = 'L': Lower triangle of A contains elementary reflectors - from DSYTRD. - - TRANS (input) CHARACTER*1 - = 'N': No transpose, apply Q; - = 'T': Transpose, apply Q**T. - - M (input) INTEGER - The number of rows of the matrix C. M >= 0. - - N (input) INTEGER - The number of columns of the matrix C. N >= 0. - - A (input) DOUBLE PRECISION array, dimension - (LDA,M) if SIDE = 'L' - (LDA,N) if SIDE = 'R' - The vectors which define the elementary reflectors, as - returned by DSYTRD. - - LDA (input) INTEGER - The leading dimension of the array A. - LDA >= max(1,M) if SIDE = 'L'; LDA >= max(1,N) if SIDE = 'R'. - - TAU (input) DOUBLE PRECISION array, dimension - (M-1) if SIDE = 'L' - (N-1) if SIDE = 'R' - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by DSYTRD. - - C (input/output) DOUBLE PRECISION array, dimension (LDC,N) - On entry, the M-by-N matrix C. - On exit, C is overwritten by Q*C or Q**T*C or C*Q**T or C*Q. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >= max(1,M). - - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. 
- - LWORK (input) INTEGER - The dimension of the array WORK. - If SIDE = 'L', LWORK >= max(1,N); - if SIDE = 'R', LWORK >= max(1,M). - For optimum performance LWORK >= N*NB if SIDE = 'L', and - LWORK >= M*NB if SIDE = 'R', where NB is the optimal - blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - left = lsame_(side, "L"); - upper = lsame_(uplo, "U"); - lquery = *lwork == -1; - -/* NQ is the order of Q and NW is the minimum dimension of WORK */ - - if (left) { - nq = *m; - nw = *n; - } else { - nq = *n; - nw = *m; - } - if ((! left && ! lsame_(side, "R"))) { - *info = -1; - } else if ((! upper && ! lsame_(uplo, "L"))) { - *info = -2; - } else if ((! lsame_(trans, "N") && ! lsame_(trans, - "T"))) { - *info = -3; - } else if (*m < 0) { - *info = -4; - } else if (*n < 0) { - *info = -5; - } else if (*lda < max(1,nq)) { - *info = -7; - } else if (*ldc < max(1,*m)) { - *info = -10; - } else if ((*lwork < max(1,nw) && ! 
lquery)) { - *info = -12; - } - - if (*info == 0) { - if (upper) { - if (left) { -/* Writing concatenation */ - i__1[0] = 1, a__1[0] = side; - i__1[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__1, &c__2, (ftnlen)2); - i__2 = *m - 1; - i__3 = *m - 1; - nb = ilaenv_(&c__1, "DORMQL", ch__1, &i__2, n, &i__3, &c_n1, ( - ftnlen)6, (ftnlen)2); - } else { -/* Writing concatenation */ - i__1[0] = 1, a__1[0] = side; - i__1[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__1, &c__2, (ftnlen)2); - i__2 = *n - 1; - i__3 = *n - 1; - nb = ilaenv_(&c__1, "DORMQL", ch__1, m, &i__2, &i__3, &c_n1, ( - ftnlen)6, (ftnlen)2); - } - } else { - if (left) { -/* Writing concatenation */ - i__1[0] = 1, a__1[0] = side; - i__1[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__1, &c__2, (ftnlen)2); - i__2 = *m - 1; - i__3 = *m - 1; - nb = ilaenv_(&c__1, "DORMQR", ch__1, &i__2, n, &i__3, &c_n1, ( - ftnlen)6, (ftnlen)2); - } else { -/* Writing concatenation */ - i__1[0] = 1, a__1[0] = side; - i__1[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__1, &c__2, (ftnlen)2); - i__2 = *n - 1; - i__3 = *n - 1; - nb = ilaenv_(&c__1, "DORMQR", ch__1, m, &i__2, &i__3, &c_n1, ( - ftnlen)6, (ftnlen)2); - } - } - lwkopt = max(1,nw) * nb; - work[1] = (doublereal) lwkopt; - } - - if (*info != 0) { - i__2 = -(*info); - xerbla_("DORMTR", &i__2); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0 || nq == 1) { - work[1] = 1.; - return 0; - } - - if (left) { - mi = *m - 1; - ni = *n; - } else { - mi = *m; - ni = *n - 1; - } - - if (upper) { - -/* Q was determined by a call to DSYTRD with UPLO = 'U' */ - - i__2 = nq - 1; - dormql_(side, trans, &mi, &ni, &i__2, &a[((a_dim1) << (1)) + 1], lda, - &tau[1], &c__[c_offset], ldc, &work[1], lwork, &iinfo); - } else { - -/* Q was determined by a call to DSYTRD with UPLO = 'L' */ - - if (left) { - i1 = 2; - i2 = 1; - } else { - i1 = 1; - i2 = 2; - } - i__2 = nq - 1; - dormqr_(side, trans, &mi, &ni, &i__2, &a[a_dim1 + 
2], lda, &tau[1], & - c__[i1 + i2 * c_dim1], ldc, &work[1], lwork, &iinfo); - } - work[1] = (doublereal) lwkopt; - return 0; - -/* End of DORMTR */ - -} /* dormtr_ */ - -/* Subroutine */ int dpotf2_(char *uplo, integer *n, doublereal *a, integer * - lda, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - doublereal d__1; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static integer j; - static doublereal ajj; - extern doublereal ddot_(integer *, doublereal *, integer *, doublereal *, - integer *); - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *); - extern logical lsame_(char *, char *); - extern /* Subroutine */ int dgemv_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, doublereal *, integer *, - doublereal *, doublereal *, integer *); - static logical upper; - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - DPOTF2 computes the Cholesky factorization of a real symmetric - positive definite matrix A. - - The factorization has the form - A = U' * U , if UPLO = 'U', or - A = L * L', if UPLO = 'L', - where U is an upper triangular matrix and L is lower triangular. - - This is the unblocked version of the algorithm, calling Level 2 BLAS. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - Specifies whether the upper or lower triangular part of the - symmetric matrix A is stored. - = 'U': Upper triangular - = 'L': Lower triangular - - N (input) INTEGER - The order of the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the symmetric matrix A. 
If UPLO = 'U', the leading - n by n upper triangular part of A contains the upper - triangular part of the matrix A, and the strictly lower - triangular part of A is not referenced. If UPLO = 'L', the - leading n by n lower triangular part of A contains the lower - triangular part of the matrix A, and the strictly upper - triangular part of A is not referenced. - - On exit, if INFO = 0, the factor U or L from the Cholesky - factorization A = U'*U or A = L*L'. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -k, the k-th argument had an illegal value - > 0: if INFO = k, the leading minor of order k is not - positive definite, and the factorization could not be - completed. - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - - /* Function Body */ - *info = 0; - upper = lsame_(uplo, "U"); - if ((! upper && ! lsame_(uplo, "L"))) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*n)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DPOTF2", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - - if (upper) { - -/* Compute the Cholesky factorization A = U'*U. */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - -/* Compute U(J,J) and test for non-positive-definiteness. */ - - i__2 = j - 1; - ajj = a[j + j * a_dim1] - ddot_(&i__2, &a[j * a_dim1 + 1], &c__1, - &a[j * a_dim1 + 1], &c__1); - if (ajj <= 0.) { - a[j + j * a_dim1] = ajj; - goto L30; - } - ajj = sqrt(ajj); - a[j + j * a_dim1] = ajj; - -/* Compute elements J+1:N of row J. 
*/ - - if (j < *n) { - i__2 = j - 1; - i__3 = *n - j; - dgemv_("Transpose", &i__2, &i__3, &c_b151, &a[(j + 1) * - a_dim1 + 1], lda, &a[j * a_dim1 + 1], &c__1, &c_b15, & - a[j + (j + 1) * a_dim1], lda); - i__2 = *n - j; - d__1 = 1. / ajj; - dscal_(&i__2, &d__1, &a[j + (j + 1) * a_dim1], lda); - } -/* L10: */ - } - } else { - -/* Compute the Cholesky factorization A = L*L'. */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - -/* Compute L(J,J) and test for non-positive-definiteness. */ - - i__2 = j - 1; - ajj = a[j + j * a_dim1] - ddot_(&i__2, &a[j + a_dim1], lda, &a[j - + a_dim1], lda); - if (ajj <= 0.) { - a[j + j * a_dim1] = ajj; - goto L30; - } - ajj = sqrt(ajj); - a[j + j * a_dim1] = ajj; - -/* Compute elements J+1:N of column J. */ - - if (j < *n) { - i__2 = *n - j; - i__3 = j - 1; - dgemv_("No transpose", &i__2, &i__3, &c_b151, &a[j + 1 + - a_dim1], lda, &a[j + a_dim1], lda, &c_b15, &a[j + 1 + - j * a_dim1], &c__1); - i__2 = *n - j; - d__1 = 1. / ajj; - dscal_(&i__2, &d__1, &a[j + 1 + j * a_dim1], &c__1); - } -/* L20: */ - } - } - goto L40; - -L30: - *info = j; - -L40: - return 0; - -/* End of DPOTF2 */ - -} /* dpotf2_ */ - -/* Subroutine */ int dpotrf_(char *uplo, integer *n, doublereal *a, integer * - lda, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - - /* Local variables */ - static integer j, jb, nb; - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *); - extern logical lsame_(char *, char *); - extern /* Subroutine */ int dtrsm_(char *, char *, char *, char *, - integer *, integer *, doublereal *, doublereal *, integer *, - doublereal *, integer *); - static logical upper; - extern /* Subroutine */ int dsyrk_(char *, char *, integer *, integer *, - doublereal *, doublereal *, integer *, doublereal *, doublereal *, - integer *), dpotf2_(char *, integer *, - 
doublereal *, integer *, integer *), xerbla_(char *, - integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - March 31, 1993 - - - Purpose - ======= - - DPOTRF computes the Cholesky factorization of a real symmetric - positive definite matrix A. - - The factorization has the form - A = U**T * U, if UPLO = 'U', or - A = L * L**T, if UPLO = 'L', - where U is an upper triangular matrix and L is lower triangular. - - This is the block version of the algorithm, calling Level 3 BLAS. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - = 'U': Upper triangle of A is stored; - = 'L': Lower triangle of A is stored. - - N (input) INTEGER - The order of the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the symmetric matrix A. If UPLO = 'U', the leading - N-by-N upper triangular part of A contains the upper - triangular part of the matrix A, and the strictly lower - triangular part of A is not referenced. If UPLO = 'L', the - leading N-by-N lower triangular part of A contains the lower - triangular part of the matrix A, and the strictly upper - triangular part of A is not referenced. - - On exit, if INFO = 0, the factor U or L from the Cholesky - factorization A = U**T*U or A = L*L**T. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: if INFO = i, the leading minor of order i is not - positive definite, and the factorization could not be - completed. - - ===================================================================== - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - - /* Function Body */ - *info = 0; - upper = lsame_(uplo, "U"); - if ((! upper && ! lsame_(uplo, "L"))) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*n)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DPOTRF", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - -/* Determine the block size for this environment. */ - - nb = ilaenv_(&c__1, "DPOTRF", uplo, n, &c_n1, &c_n1, &c_n1, (ftnlen)6, ( - ftnlen)1); - if (nb <= 1 || nb >= *n) { - -/* Use unblocked code. */ - - dpotf2_(uplo, n, &a[a_offset], lda, info); - } else { - -/* Use blocked code. */ - - if (upper) { - -/* Compute the Cholesky factorization A = U'*U. */ - - i__1 = *n; - i__2 = nb; - for (j = 1; i__2 < 0 ? j >= i__1 : j <= i__1; j += i__2) { - -/* - Update and factorize the current diagonal block and test - for non-positive-definiteness. - - Computing MIN -*/ - i__3 = nb, i__4 = *n - j + 1; - jb = min(i__3,i__4); - i__3 = j - 1; - dsyrk_("Upper", "Transpose", &jb, &i__3, &c_b151, &a[j * - a_dim1 + 1], lda, &c_b15, &a[j + j * a_dim1], lda); - dpotf2_("Upper", &jb, &a[j + j * a_dim1], lda, info); - if (*info != 0) { - goto L30; - } - if (j + jb <= *n) { - -/* Compute the current block row. */ - - i__3 = *n - j - jb + 1; - i__4 = j - 1; - dgemm_("Transpose", "No transpose", &jb, &i__3, &i__4, & - c_b151, &a[j * a_dim1 + 1], lda, &a[(j + jb) * - a_dim1 + 1], lda, &c_b15, &a[j + (j + jb) * - a_dim1], lda); - i__3 = *n - j - jb + 1; - dtrsm_("Left", "Upper", "Transpose", "Non-unit", &jb, & - i__3, &c_b15, &a[j + j * a_dim1], lda, &a[j + (j - + jb) * a_dim1], lda); - } -/* L10: */ - } - - } else { - -/* Compute the Cholesky factorization A = L*L'. */ - - i__2 = *n; - i__1 = nb; - for (j = 1; i__1 < 0 ? 
j >= i__2 : j <= i__2; j += i__1) { - -/* - Update and factorize the current diagonal block and test - for non-positive-definiteness. - - Computing MIN -*/ - i__3 = nb, i__4 = *n - j + 1; - jb = min(i__3,i__4); - i__3 = j - 1; - dsyrk_("Lower", "No transpose", &jb, &i__3, &c_b151, &a[j + - a_dim1], lda, &c_b15, &a[j + j * a_dim1], lda); - dpotf2_("Lower", &jb, &a[j + j * a_dim1], lda, info); - if (*info != 0) { - goto L30; - } - if (j + jb <= *n) { - -/* Compute the current block column. */ - - i__3 = *n - j - jb + 1; - i__4 = j - 1; - dgemm_("No transpose", "Transpose", &i__3, &jb, &i__4, & - c_b151, &a[j + jb + a_dim1], lda, &a[j + a_dim1], - lda, &c_b15, &a[j + jb + j * a_dim1], lda); - i__3 = *n - j - jb + 1; - dtrsm_("Right", "Lower", "Transpose", "Non-unit", &i__3, & - jb, &c_b15, &a[j + j * a_dim1], lda, &a[j + jb + - j * a_dim1], lda); - } -/* L20: */ - } - } - } - goto L40; - -L30: - *info = *info + j - 1; - -L40: - return 0; - -/* End of DPOTRF */ - -} /* dpotrf_ */ - -/* Subroutine */ int dstedc_(char *compz, integer *n, doublereal *d__, - doublereal *e, doublereal *z__, integer *ldz, doublereal *work, - integer *lwork, integer *iwork, integer *liwork, integer *info) -{ - /* System generated locals */ - integer z_dim1, z_offset, i__1, i__2; - doublereal d__1, d__2; - - /* Builtin functions */ - double log(doublereal); - integer pow_ii(integer *, integer *); - double sqrt(doublereal); - - /* Local variables */ - static integer i__, j, k, m; - static doublereal p; - static integer ii, end, lgn; - static doublereal eps, tiny; - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *); - extern logical lsame_(char *, char *); - extern /* Subroutine */ int dswap_(integer *, doublereal *, integer *, - doublereal *, integer *); - static integer lwmin; - extern /* Subroutine */ int dlaed0_(integer *, integer *, integer *, - 
doublereal *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, integer *, integer *); - static integer start; - - extern /* Subroutine */ int dlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, doublereal *, - integer *, integer *), dlacpy_(char *, integer *, integer - *, doublereal *, integer *, doublereal *, integer *), - dlaset_(char *, integer *, integer *, doublereal *, doublereal *, - doublereal *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int xerbla_(char *, integer *); - extern doublereal dlanst_(char *, integer *, doublereal *, doublereal *); - extern /* Subroutine */ int dsterf_(integer *, doublereal *, doublereal *, - integer *), dlasrt_(char *, integer *, doublereal *, integer *); - static integer liwmin, icompz; - extern /* Subroutine */ int dsteqr_(char *, integer *, doublereal *, - doublereal *, doublereal *, integer *, doublereal *, integer *); - static doublereal orgnrm; - static logical lquery; - static integer smlsiz, dtrtrw, storez; - - -/* - -- LAPACK driver routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DSTEDC computes all eigenvalues and, optionally, eigenvectors of a - symmetric tridiagonal matrix using the divide and conquer method. - The eigenvectors of a full or band real symmetric matrix can also be - found if DSYTRD or DSPTRD or DSBTRD has been used to reduce this - matrix to tridiagonal form. - - This code makes very mild assumptions about floating point - arithmetic. It will work on machines with a guard digit in - add/subtract, or on those binary machines without guard digits - which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. 
- It could conceivably fail on hexadecimal or decimal machines - without guard digits, but we know of none. See DLAED3 for details. - - Arguments - ========= - - COMPZ (input) CHARACTER*1 - = 'N': Compute eigenvalues only. - = 'I': Compute eigenvectors of tridiagonal matrix also. - = 'V': Compute eigenvectors of original dense symmetric - matrix also. On entry, Z contains the orthogonal - matrix used to reduce the original matrix to - tridiagonal form. - - N (input) INTEGER - The dimension of the symmetric tridiagonal matrix. N >= 0. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the diagonal elements of the tridiagonal matrix. - On exit, if INFO = 0, the eigenvalues in ascending order. - - E (input/output) DOUBLE PRECISION array, dimension (N-1) - On entry, the subdiagonal elements of the tridiagonal matrix. - On exit, E has been destroyed. - - Z (input/output) DOUBLE PRECISION array, dimension (LDZ,N) - On entry, if COMPZ = 'V', then Z contains the orthogonal - matrix used in the reduction to tridiagonal form. - On exit, if INFO = 0, then if COMPZ = 'V', Z contains the - orthonormal eigenvectors of the original symmetric matrix, - and if COMPZ = 'I', Z contains the orthonormal eigenvectors - of the symmetric tridiagonal matrix. - If COMPZ = 'N', then Z is not referenced. - - LDZ (input) INTEGER - The leading dimension of the array Z. LDZ >= 1. - If eigenvectors are desired, then LDZ >= max(1,N). - - WORK (workspace/output) DOUBLE PRECISION array, - dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. - If COMPZ = 'N' or N <= 1 then LWORK must be at least 1. - If COMPZ = 'V' and N > 1 then LWORK must be at least - ( 1 + 3*N + 2*N*lg N + 3*N**2 ), - where lg( N ) = smallest integer k such - that 2**k >= N. - If COMPZ = 'I' and N > 1 then LWORK must be at least - ( 1 + 4*N + N**2 ). 
- - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - IWORK (workspace/output) INTEGER array, dimension (LIWORK) - On exit, if INFO = 0, IWORK(1) returns the optimal LIWORK. - - LIWORK (input) INTEGER - The dimension of the array IWORK. - If COMPZ = 'N' or N <= 1 then LIWORK must be at least 1. - If COMPZ = 'V' and N > 1 then LIWORK must be at least - ( 6 + 6*N + 5*N*lg N ). - If COMPZ = 'I' and N > 1 then LIWORK must be at least - ( 3 + 5*N ). - - If LIWORK = -1, then a workspace query is assumed; the - routine only calculates the optimal size of the IWORK array, - returns this value as the first entry of the IWORK array, and - no error message related to LIWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: The algorithm failed to compute an eigenvalue while - working on the submatrix lying in rows and columns - INFO/(N+1) through mod(INFO,N+1). - - Further Details - =============== - - Based on contributions by - Jeff Rutter, Computer Science Division, University of California - at Berkeley, USA - Modified by Francoise Tisseur, University of Tennessee. - - ===================================================================== - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - --d__; - --e; - z_dim1 = *ldz; - z_offset = 1 + z_dim1 * 1; - z__ -= z_offset; - --work; - --iwork; - - /* Function Body */ - *info = 0; - lquery = *lwork == -1 || *liwork == -1; - - if (lsame_(compz, "N")) { - icompz = 0; - } else if (lsame_(compz, "V")) { - icompz = 1; - } else if (lsame_(compz, "I")) { - icompz = 2; - } else { - icompz = -1; - } - if (*n <= 1 || icompz <= 0) { - liwmin = 1; - lwmin = 1; - } else { - lgn = (integer) (log((doublereal) (*n)) / log(2.)); - if (pow_ii(&c__2, &lgn) < *n) { - ++lgn; - } - if (pow_ii(&c__2, &lgn) < *n) { - ++lgn; - } - if (icompz == 1) { -/* Computing 2nd power */ - i__1 = *n; - lwmin = *n * 3 + 1 + ((*n) << (1)) * lgn + i__1 * i__1 * 3; - liwmin = *n * 6 + 6 + *n * 5 * lgn; - } else if (icompz == 2) { -/* Computing 2nd power */ - i__1 = *n; - lwmin = ((*n) << (2)) + 1 + i__1 * i__1; - liwmin = *n * 5 + 3; - } - } - if (icompz < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*ldz < 1 || (icompz > 0 && *ldz < max(1,*n))) { - *info = -6; - } else if ((*lwork < lwmin && ! lquery)) { - *info = -8; - } else if ((*liwork < liwmin && ! lquery)) { - *info = -10; - } - - if (*info == 0) { - work[1] = (doublereal) lwmin; - iwork[1] = liwmin; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("DSTEDC", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - if (*n == 1) { - if (icompz != 0) { - z__[z_dim1 + 1] = 1.; - } - return 0; - } - - smlsiz = ilaenv_(&c__9, "DSTEDC", " ", &c__0, &c__0, &c__0, &c__0, ( - ftnlen)6, (ftnlen)1); - -/* - If the following conditional clause is removed, then the routine - will use the Divide and Conquer routine to compute only the - eigenvalues, which requires (3N + 3N**2) real workspace and - (2 + 5N + 2N lg(N)) integer workspace. 
- Since on many architectures DSTERF is much faster than any other - algorithm for finding eigenvalues only, it is used here - as the default. - - If COMPZ = 'N', use DSTERF to compute the eigenvalues. -*/ - - if (icompz == 0) { - dsterf_(n, &d__[1], &e[1], info); - return 0; - } - -/* - If N is smaller than the minimum divide size (SMLSIZ+1), then - solve the problem with another solver. -*/ - - if (*n <= smlsiz) { - if (icompz == 0) { - dsterf_(n, &d__[1], &e[1], info); - return 0; - } else if (icompz == 2) { - dsteqr_("I", n, &d__[1], &e[1], &z__[z_offset], ldz, &work[1], - info); - return 0; - } else { - dsteqr_("V", n, &d__[1], &e[1], &z__[z_offset], ldz, &work[1], - info); - return 0; - } - } - -/* - If COMPZ = 'V', the Z matrix must be stored elsewhere for later - use. -*/ - - if (icompz == 1) { - storez = *n * *n + 1; - } else { - storez = 1; - } - - if (icompz == 2) { - dlaset_("Full", n, n, &c_b29, &c_b15, &z__[z_offset], ldz); - } - -/* Scale. */ - - orgnrm = dlanst_("M", n, &d__[1], &e[1]); - if (orgnrm == 0.) { - return 0; - } - - eps = EPSILON; - - start = 1; - -/* while ( START <= N ) */ - -L10: - if (start <= *n) { - -/* - Let END be the position of the next subdiagonal entry such that - E( END ) <= TINY or END = N if no such subdiagonal exists. The - matrix identified by the elements between START and END - constitutes an independent sub-problem. -*/ - - end = start; -L20: - if (end < *n) { - tiny = eps * sqrt((d__1 = d__[end], abs(d__1))) * sqrt((d__2 = - d__[end + 1], abs(d__2))); - if ((d__1 = e[end], abs(d__1)) > tiny) { - ++end; - goto L20; - } - } - -/* (Sub) Problem determined. Compute its size and solve it. */ - - m = end - start + 1; - if (m == 1) { - start = end + 1; - goto L10; - } - if (m > smlsiz) { - *info = smlsiz; - -/* Scale. 
*/ - - orgnrm = dlanst_("M", &m, &d__[start], &e[start]); - dlascl_("G", &c__0, &c__0, &orgnrm, &c_b15, &m, &c__1, &d__[start] - , &m, info); - i__1 = m - 1; - i__2 = m - 1; - dlascl_("G", &c__0, &c__0, &orgnrm, &c_b15, &i__1, &c__1, &e[ - start], &i__2, info); - - if (icompz == 1) { - dtrtrw = 1; - } else { - dtrtrw = start; - } - dlaed0_(&icompz, n, &m, &d__[start], &e[start], &z__[dtrtrw + - start * z_dim1], ldz, &work[1], n, &work[storez], &iwork[ - 1], info); - if (*info != 0) { - *info = (*info / (m + 1) + start - 1) * (*n + 1) + *info % (m - + 1) + start - 1; - return 0; - } - -/* Scale back. */ - - dlascl_("G", &c__0, &c__0, &c_b15, &orgnrm, &m, &c__1, &d__[start] - , &m, info); - - } else { - if (icompz == 1) { - -/* - Since QR won't update a Z matrix which is larger than the - length of D, we must solve the sub-problem in a workspace and - then multiply back into Z. -*/ - - dsteqr_("I", &m, &d__[start], &e[start], &work[1], &m, &work[ - m * m + 1], info); - dlacpy_("A", n, &m, &z__[start * z_dim1 + 1], ldz, &work[ - storez], n); - dgemm_("N", "N", n, &m, &m, &c_b15, &work[storez], ldz, &work[ - 1], &m, &c_b29, &z__[start * z_dim1 + 1], ldz); - } else if (icompz == 2) { - dsteqr_("I", &m, &d__[start], &e[start], &z__[start + start * - z_dim1], ldz, &work[1], info); - } else { - dsterf_(&m, &d__[start], &e[start], info); - } - if (*info != 0) { - *info = start * (*n + 1) + end; - return 0; - } - } - - start = end + 1; - goto L10; - } - -/* - endwhile - - If the problem split any number of times, then the eigenvalues - will not be properly ordered. Here we permute the eigenvalues - (and the associated eigenvectors) into ascending order. 
-*/ - - if (m != *n) { - if (icompz == 0) { - -/* Use Quick Sort */ - - dlasrt_("I", n, &d__[1], info); - - } else { - -/* Use Selection Sort to minimize swaps of eigenvectors */ - - i__1 = *n; - for (ii = 2; ii <= i__1; ++ii) { - i__ = ii - 1; - k = i__; - p = d__[i__]; - i__2 = *n; - for (j = ii; j <= i__2; ++j) { - if (d__[j] < p) { - k = j; - p = d__[j]; - } -/* L30: */ - } - if (k != i__) { - d__[k] = d__[i__]; - d__[i__] = p; - dswap_(n, &z__[i__ * z_dim1 + 1], &c__1, &z__[k * z_dim1 - + 1], &c__1); - } -/* L40: */ - } - } - } - - work[1] = (doublereal) lwmin; - iwork[1] = liwmin; - - return 0; - -/* End of DSTEDC */ - -} /* dstedc_ */ - -/* Subroutine */ int dsteqr_(char *compz, integer *n, doublereal *d__, - doublereal *e, doublereal *z__, integer *ldz, doublereal *work, - integer *info) -{ - /* System generated locals */ - integer z_dim1, z_offset, i__1, i__2; - doublereal d__1, d__2; - - /* Builtin functions */ - double sqrt(doublereal), d_sign(doublereal *, doublereal *); - - /* Local variables */ - static doublereal b, c__, f, g; - static integer i__, j, k, l, m; - static doublereal p, r__, s; - static integer l1, ii, mm, lm1, mm1, nm1; - static doublereal rt1, rt2, eps; - static integer lsv; - static doublereal tst, eps2; - static integer lend, jtot; - extern /* Subroutine */ int dlae2_(doublereal *, doublereal *, doublereal - *, doublereal *, doublereal *); - extern logical lsame_(char *, char *); - extern /* Subroutine */ int dlasr_(char *, char *, char *, integer *, - integer *, doublereal *, doublereal *, doublereal *, integer *); - static doublereal anorm; - extern /* Subroutine */ int dswap_(integer *, doublereal *, integer *, - doublereal *, integer *), dlaev2_(doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *); - static integer lendm1, lendp1; - - static integer iscale; - extern /* Subroutine */ int dlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, 
doublereal *, - integer *, integer *), dlaset_(char *, integer *, integer - *, doublereal *, doublereal *, doublereal *, integer *); - static doublereal safmin; - extern /* Subroutine */ int dlartg_(doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *); - static doublereal safmax; - extern /* Subroutine */ int xerbla_(char *, integer *); - extern doublereal dlanst_(char *, integer *, doublereal *, doublereal *); - extern /* Subroutine */ int dlasrt_(char *, integer *, doublereal *, - integer *); - static integer lendsv; - static doublereal ssfmin; - static integer nmaxit, icompz; - static doublereal ssfmax; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - DSTEQR computes all eigenvalues and, optionally, eigenvectors of a - symmetric tridiagonal matrix using the implicit QL or QR method. - The eigenvectors of a full or band symmetric matrix can also be found - if DSYTRD or DSPTRD or DSBTRD has been used to reduce this matrix to - tridiagonal form. - - Arguments - ========= - - COMPZ (input) CHARACTER*1 - = 'N': Compute eigenvalues only. - = 'V': Compute eigenvalues and eigenvectors of the original - symmetric matrix. On entry, Z must contain the - orthogonal matrix used to reduce the original matrix - to tridiagonal form. - = 'I': Compute eigenvalues and eigenvectors of the - tridiagonal matrix. Z is initialized to the identity - matrix. - - N (input) INTEGER - The order of the matrix. N >= 0. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the diagonal elements of the tridiagonal matrix. - On exit, if INFO = 0, the eigenvalues in ascending order. - - E (input/output) DOUBLE PRECISION array, dimension (N-1) - On entry, the (n-1) subdiagonal elements of the tridiagonal - matrix. - On exit, E has been destroyed. 
- - Z (input/output) DOUBLE PRECISION array, dimension (LDZ, N) - On entry, if COMPZ = 'V', then Z contains the orthogonal - matrix used in the reduction to tridiagonal form. - On exit, if INFO = 0, then if COMPZ = 'V', Z contains the - orthonormal eigenvectors of the original symmetric matrix, - and if COMPZ = 'I', Z contains the orthonormal eigenvectors - of the symmetric tridiagonal matrix. - If COMPZ = 'N', then Z is not referenced. - - LDZ (input) INTEGER - The leading dimension of the array Z. LDZ >= 1, and if - eigenvectors are desired, then LDZ >= max(1,N). - - WORK (workspace) DOUBLE PRECISION array, dimension (max(1,2*N-2)) - If COMPZ = 'N', then WORK is not referenced. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: the algorithm has failed to find all the eigenvalues in - a total of 30*N iterations; if INFO = i, then i - elements of E have not converged to zero; on exit, D - and E contain the elements of a symmetric tridiagonal - matrix which is orthogonally similar to the original - matrix. - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --d__; - --e; - z_dim1 = *ldz; - z_offset = 1 + z_dim1 * 1; - z__ -= z_offset; - --work; - - /* Function Body */ - *info = 0; - - if (lsame_(compz, "N")) { - icompz = 0; - } else if (lsame_(compz, "V")) { - icompz = 1; - } else if (lsame_(compz, "I")) { - icompz = 2; - } else { - icompz = -1; - } - if (icompz < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*ldz < 1 || (icompz > 0 && *ldz < max(1,*n))) { - *info = -6; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DSTEQR", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - - if (*n == 1) { - if (icompz == 2) { - z__[z_dim1 + 1] = 1.; - } - return 0; - } - -/* Determine the unit roundoff and over/underflow thresholds. 
*/ - - eps = EPSILON; -/* Computing 2nd power */ - d__1 = eps; - eps2 = d__1 * d__1; - safmin = SAFEMINIMUM; - safmax = 1. / safmin; - ssfmax = sqrt(safmax) / 3.; - ssfmin = sqrt(safmin) / eps2; - -/* - Compute the eigenvalues and eigenvectors of the tridiagonal - matrix. -*/ - - if (icompz == 2) { - dlaset_("Full", n, n, &c_b29, &c_b15, &z__[z_offset], ldz); - } - - nmaxit = *n * 30; - jtot = 0; - -/* - Determine where the matrix splits and choose QL or QR iteration - for each block, according to whether top or bottom diagonal - element is smaller. -*/ - - l1 = 1; - nm1 = *n - 1; - -L10: - if (l1 > *n) { - goto L160; - } - if (l1 > 1) { - e[l1 - 1] = 0.; - } - if (l1 <= nm1) { - i__1 = nm1; - for (m = l1; m <= i__1; ++m) { - tst = (d__1 = e[m], abs(d__1)); - if (tst == 0.) { - goto L30; - } - if (tst <= sqrt((d__1 = d__[m], abs(d__1))) * sqrt((d__2 = d__[m - + 1], abs(d__2))) * eps) { - e[m] = 0.; - goto L30; - } -/* L20: */ - } - } - m = *n; - -L30: - l = l1; - lsv = l; - lend = m; - lendsv = lend; - l1 = m + 1; - if (lend == l) { - goto L10; - } - -/* Scale submatrix in rows and columns L to LEND */ - - i__1 = lend - l + 1; - anorm = dlanst_("I", &i__1, &d__[l], &e[l]); - iscale = 0; - if (anorm == 0.) { - goto L10; - } - if (anorm > ssfmax) { - iscale = 1; - i__1 = lend - l + 1; - dlascl_("G", &c__0, &c__0, &anorm, &ssfmax, &i__1, &c__1, &d__[l], n, - info); - i__1 = lend - l; - dlascl_("G", &c__0, &c__0, &anorm, &ssfmax, &i__1, &c__1, &e[l], n, - info); - } else if (anorm < ssfmin) { - iscale = 2; - i__1 = lend - l + 1; - dlascl_("G", &c__0, &c__0, &anorm, &ssfmin, &i__1, &c__1, &d__[l], n, - info); - i__1 = lend - l; - dlascl_("G", &c__0, &c__0, &anorm, &ssfmin, &i__1, &c__1, &e[l], n, - info); - } - -/* Choose between QL and QR iteration */ - - if ((d__1 = d__[lend], abs(d__1)) < (d__2 = d__[l], abs(d__2))) { - lend = lsv; - l = lendsv; - } - - if (lend > l) { - -/* - QL Iteration - - Look for small subdiagonal element. 
-*/ - -L40: - if (l != lend) { - lendm1 = lend - 1; - i__1 = lendm1; - for (m = l; m <= i__1; ++m) { -/* Computing 2nd power */ - d__2 = (d__1 = e[m], abs(d__1)); - tst = d__2 * d__2; - if (tst <= eps2 * (d__1 = d__[m], abs(d__1)) * (d__2 = d__[m - + 1], abs(d__2)) + safmin) { - goto L60; - } -/* L50: */ - } - } - - m = lend; - -L60: - if (m < lend) { - e[m] = 0.; - } - p = d__[l]; - if (m == l) { - goto L80; - } - -/* - If remaining matrix is 2-by-2, use DLAE2 or SLAEV2 - to compute its eigensystem. -*/ - - if (m == l + 1) { - if (icompz > 0) { - dlaev2_(&d__[l], &e[l], &d__[l + 1], &rt1, &rt2, &c__, &s); - work[l] = c__; - work[*n - 1 + l] = s; - dlasr_("R", "V", "B", n, &c__2, &work[l], &work[*n - 1 + l], & - z__[l * z_dim1 + 1], ldz); - } else { - dlae2_(&d__[l], &e[l], &d__[l + 1], &rt1, &rt2); - } - d__[l] = rt1; - d__[l + 1] = rt2; - e[l] = 0.; - l += 2; - if (l <= lend) { - goto L40; - } - goto L140; - } - - if (jtot == nmaxit) { - goto L140; - } - ++jtot; - -/* Form shift. */ - - g = (d__[l + 1] - p) / (e[l] * 2.); - r__ = dlapy2_(&g, &c_b15); - g = d__[m] - p + e[l] / (g + d_sign(&r__, &g)); - - s = 1.; - c__ = 1.; - p = 0.; - -/* Inner loop */ - - mm1 = m - 1; - i__1 = l; - for (i__ = mm1; i__ >= i__1; --i__) { - f = s * e[i__]; - b = c__ * e[i__]; - dlartg_(&g, &f, &c__, &s, &r__); - if (i__ != m - 1) { - e[i__ + 1] = r__; - } - g = d__[i__ + 1] - p; - r__ = (d__[i__] - g) * s + c__ * 2. * b; - p = s * r__; - d__[i__ + 1] = g + p; - g = c__ * r__ - b; - -/* If eigenvectors are desired, then save rotations. */ - - if (icompz > 0) { - work[i__] = c__; - work[*n - 1 + i__] = -s; - } - -/* L70: */ - } - -/* If eigenvectors are desired, then apply saved rotations. */ - - if (icompz > 0) { - mm = m - l + 1; - dlasr_("R", "V", "B", n, &mm, &work[l], &work[*n - 1 + l], &z__[l - * z_dim1 + 1], ldz); - } - - d__[l] -= p; - e[l] = g; - goto L40; - -/* Eigenvalue found. 
*/ - -L80: - d__[l] = p; - - ++l; - if (l <= lend) { - goto L40; - } - goto L140; - - } else { - -/* - QR Iteration - - Look for small superdiagonal element. -*/ - -L90: - if (l != lend) { - lendp1 = lend + 1; - i__1 = lendp1; - for (m = l; m >= i__1; --m) { -/* Computing 2nd power */ - d__2 = (d__1 = e[m - 1], abs(d__1)); - tst = d__2 * d__2; - if (tst <= eps2 * (d__1 = d__[m], abs(d__1)) * (d__2 = d__[m - - 1], abs(d__2)) + safmin) { - goto L110; - } -/* L100: */ - } - } - - m = lend; - -L110: - if (m > lend) { - e[m - 1] = 0.; - } - p = d__[l]; - if (m == l) { - goto L130; - } - -/* - If remaining matrix is 2-by-2, use DLAE2 or SLAEV2 - to compute its eigensystem. -*/ - - if (m == l - 1) { - if (icompz > 0) { - dlaev2_(&d__[l - 1], &e[l - 1], &d__[l], &rt1, &rt2, &c__, &s) - ; - work[m] = c__; - work[*n - 1 + m] = s; - dlasr_("R", "V", "F", n, &c__2, &work[m], &work[*n - 1 + m], & - z__[(l - 1) * z_dim1 + 1], ldz); - } else { - dlae2_(&d__[l - 1], &e[l - 1], &d__[l], &rt1, &rt2); - } - d__[l - 1] = rt1; - d__[l] = rt2; - e[l - 1] = 0.; - l += -2; - if (l >= lend) { - goto L90; - } - goto L140; - } - - if (jtot == nmaxit) { - goto L140; - } - ++jtot; - -/* Form shift. */ - - g = (d__[l - 1] - p) / (e[l - 1] * 2.); - r__ = dlapy2_(&g, &c_b15); - g = d__[m] - p + e[l - 1] / (g + d_sign(&r__, &g)); - - s = 1.; - c__ = 1.; - p = 0.; - -/* Inner loop */ - - lm1 = l - 1; - i__1 = lm1; - for (i__ = m; i__ <= i__1; ++i__) { - f = s * e[i__]; - b = c__ * e[i__]; - dlartg_(&g, &f, &c__, &s, &r__); - if (i__ != m) { - e[i__ - 1] = r__; - } - g = d__[i__] - p; - r__ = (d__[i__ + 1] - g) * s + c__ * 2. * b; - p = s * r__; - d__[i__] = g + p; - g = c__ * r__ - b; - -/* If eigenvectors are desired, then save rotations. */ - - if (icompz > 0) { - work[i__] = c__; - work[*n - 1 + i__] = s; - } - -/* L120: */ - } - -/* If eigenvectors are desired, then apply saved rotations. 
*/ - - if (icompz > 0) { - mm = l - m + 1; - dlasr_("R", "V", "F", n, &mm, &work[m], &work[*n - 1 + m], &z__[m - * z_dim1 + 1], ldz); - } - - d__[l] -= p; - e[lm1] = g; - goto L90; - -/* Eigenvalue found. */ - -L130: - d__[l] = p; - - --l; - if (l >= lend) { - goto L90; - } - goto L140; - - } - -/* Undo scaling if necessary */ - -L140: - if (iscale == 1) { - i__1 = lendsv - lsv + 1; - dlascl_("G", &c__0, &c__0, &ssfmax, &anorm, &i__1, &c__1, &d__[lsv], - n, info); - i__1 = lendsv - lsv; - dlascl_("G", &c__0, &c__0, &ssfmax, &anorm, &i__1, &c__1, &e[lsv], n, - info); - } else if (iscale == 2) { - i__1 = lendsv - lsv + 1; - dlascl_("G", &c__0, &c__0, &ssfmin, &anorm, &i__1, &c__1, &d__[lsv], - n, info); - i__1 = lendsv - lsv; - dlascl_("G", &c__0, &c__0, &ssfmin, &anorm, &i__1, &c__1, &e[lsv], n, - info); - } - -/* - Check for no convergence to an eigenvalue after a total - of N*MAXIT iterations. -*/ - - if (jtot < nmaxit) { - goto L10; - } - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - if (e[i__] != 0.) { - ++(*info); - } -/* L150: */ - } - goto L190; - -/* Order eigenvalues and eigenvectors. 
*/ - -L160: - if (icompz == 0) { - -/* Use Quick Sort */ - - dlasrt_("I", n, &d__[1], info); - - } else { - -/* Use Selection Sort to minimize swaps of eigenvectors */ - - i__1 = *n; - for (ii = 2; ii <= i__1; ++ii) { - i__ = ii - 1; - k = i__; - p = d__[i__]; - i__2 = *n; - for (j = ii; j <= i__2; ++j) { - if (d__[j] < p) { - k = j; - p = d__[j]; - } -/* L170: */ - } - if (k != i__) { - d__[k] = d__[i__]; - d__[i__] = p; - dswap_(n, &z__[i__ * z_dim1 + 1], &c__1, &z__[k * z_dim1 + 1], - &c__1); - } -/* L180: */ - } - } - -L190: - return 0; - -/* End of DSTEQR */ - -} /* dsteqr_ */ - -/* Subroutine */ int dsterf_(integer *n, doublereal *d__, doublereal *e, - integer *info) -{ - /* System generated locals */ - integer i__1; - doublereal d__1, d__2, d__3; - - /* Builtin functions */ - double sqrt(doublereal), d_sign(doublereal *, doublereal *); - - /* Local variables */ - static doublereal c__; - static integer i__, l, m; - static doublereal p, r__, s; - static integer l1; - static doublereal bb, rt1, rt2, eps, rte; - static integer lsv; - static doublereal eps2, oldc; - static integer lend, jtot; - extern /* Subroutine */ int dlae2_(doublereal *, doublereal *, doublereal - *, doublereal *, doublereal *); - static doublereal gamma, alpha, sigma, anorm; - - static integer iscale; - extern /* Subroutine */ int dlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, doublereal *, - integer *, integer *); - static doublereal oldgam, safmin; - extern /* Subroutine */ int xerbla_(char *, integer *); - static doublereal safmax; - extern doublereal dlanst_(char *, integer *, doublereal *, doublereal *); - extern /* Subroutine */ int dlasrt_(char *, integer *, doublereal *, - integer *); - static integer lendsv; - static doublereal ssfmin; - static integer nmaxit; - static doublereal ssfmax; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DSTERF computes all eigenvalues of a symmetric tridiagonal matrix - using the Pal-Walker-Kahan variant of the QL or QR algorithm. - - Arguments - ========= - - N (input) INTEGER - The order of the matrix. N >= 0. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the n diagonal elements of the tridiagonal matrix. - On exit, if INFO = 0, the eigenvalues in ascending order. - - E (input/output) DOUBLE PRECISION array, dimension (N-1) - On entry, the (n-1) subdiagonal elements of the tridiagonal - matrix. - On exit, E has been destroyed. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: the algorithm failed to find all of the eigenvalues in - a total of 30*N iterations; if INFO = i, then i - elements of E have not converged to zero. - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --e; - --d__; - - /* Function Body */ - *info = 0; - -/* Quick return if possible */ - - if (*n < 0) { - *info = -1; - i__1 = -(*info); - xerbla_("DSTERF", &i__1); - return 0; - } - if (*n <= 1) { - return 0; - } - -/* Determine the unit roundoff for this environment. */ - - eps = EPSILON; -/* Computing 2nd power */ - d__1 = eps; - eps2 = d__1 * d__1; - safmin = SAFEMINIMUM; - safmax = 1. / safmin; - ssfmax = sqrt(safmax) / 3.; - ssfmin = sqrt(safmin) / eps2; - -/* Compute the eigenvalues of the tridiagonal matrix. */ - - nmaxit = *n * 30; - sigma = 0.; - jtot = 0; - -/* - Determine where the matrix splits and choose QL or QR iteration - for each block, according to whether top or bottom diagonal - element is smaller. 
-*/ - - l1 = 1; - -L10: - if (l1 > *n) { - goto L170; - } - if (l1 > 1) { - e[l1 - 1] = 0.; - } - i__1 = *n - 1; - for (m = l1; m <= i__1; ++m) { - if ((d__3 = e[m], abs(d__3)) <= sqrt((d__1 = d__[m], abs(d__1))) * - sqrt((d__2 = d__[m + 1], abs(d__2))) * eps) { - e[m] = 0.; - goto L30; - } -/* L20: */ - } - m = *n; - -L30: - l = l1; - lsv = l; - lend = m; - lendsv = lend; - l1 = m + 1; - if (lend == l) { - goto L10; - } - -/* Scale submatrix in rows and columns L to LEND */ - - i__1 = lend - l + 1; - anorm = dlanst_("I", &i__1, &d__[l], &e[l]); - iscale = 0; - if (anorm > ssfmax) { - iscale = 1; - i__1 = lend - l + 1; - dlascl_("G", &c__0, &c__0, &anorm, &ssfmax, &i__1, &c__1, &d__[l], n, - info); - i__1 = lend - l; - dlascl_("G", &c__0, &c__0, &anorm, &ssfmax, &i__1, &c__1, &e[l], n, - info); - } else if (anorm < ssfmin) { - iscale = 2; - i__1 = lend - l + 1; - dlascl_("G", &c__0, &c__0, &anorm, &ssfmin, &i__1, &c__1, &d__[l], n, - info); - i__1 = lend - l; - dlascl_("G", &c__0, &c__0, &anorm, &ssfmin, &i__1, &c__1, &e[l], n, - info); - } - - i__1 = lend - 1; - for (i__ = l; i__ <= i__1; ++i__) { -/* Computing 2nd power */ - d__1 = e[i__]; - e[i__] = d__1 * d__1; -/* L40: */ - } - -/* Choose between QL and QR iteration */ - - if ((d__1 = d__[lend], abs(d__1)) < (d__2 = d__[l], abs(d__2))) { - lend = lsv; - l = lendsv; - } - - if (lend >= l) { - -/* - QL Iteration - - Look for small subdiagonal element. -*/ - -L50: - if (l != lend) { - i__1 = lend - 1; - for (m = l; m <= i__1; ++m) { - if ((d__2 = e[m], abs(d__2)) <= eps2 * (d__1 = d__[m] * d__[m - + 1], abs(d__1))) { - goto L70; - } -/* L60: */ - } - } - m = lend; - -L70: - if (m < lend) { - e[m] = 0.; - } - p = d__[l]; - if (m == l) { - goto L90; - } - -/* - If remaining matrix is 2 by 2, use DLAE2 to compute its - eigenvalues. 
-*/ - - if (m == l + 1) { - rte = sqrt(e[l]); - dlae2_(&d__[l], &rte, &d__[l + 1], &rt1, &rt2); - d__[l] = rt1; - d__[l + 1] = rt2; - e[l] = 0.; - l += 2; - if (l <= lend) { - goto L50; - } - goto L150; - } - - if (jtot == nmaxit) { - goto L150; - } - ++jtot; - -/* Form shift. */ - - rte = sqrt(e[l]); - sigma = (d__[l + 1] - p) / (rte * 2.); - r__ = dlapy2_(&sigma, &c_b15); - sigma = p - rte / (sigma + d_sign(&r__, &sigma)); - - c__ = 1.; - s = 0.; - gamma = d__[m] - sigma; - p = gamma * gamma; - -/* Inner loop */ - - i__1 = l; - for (i__ = m - 1; i__ >= i__1; --i__) { - bb = e[i__]; - r__ = p + bb; - if (i__ != m - 1) { - e[i__ + 1] = s * r__; - } - oldc = c__; - c__ = p / r__; - s = bb / r__; - oldgam = gamma; - alpha = d__[i__]; - gamma = c__ * (alpha - sigma) - s * oldgam; - d__[i__ + 1] = oldgam + (alpha - gamma); - if (c__ != 0.) { - p = gamma * gamma / c__; - } else { - p = oldc * bb; - } -/* L80: */ - } - - e[l] = s * p; - d__[l] = sigma + gamma; - goto L50; - -/* Eigenvalue found. */ - -L90: - d__[l] = p; - - ++l; - if (l <= lend) { - goto L50; - } - goto L150; - - } else { - -/* - QR Iteration - - Look for small superdiagonal element. -*/ - -L100: - i__1 = lend + 1; - for (m = l; m >= i__1; --m) { - if ((d__2 = e[m - 1], abs(d__2)) <= eps2 * (d__1 = d__[m] * d__[m - - 1], abs(d__1))) { - goto L120; - } -/* L110: */ - } - m = lend; - -L120: - if (m > lend) { - e[m - 1] = 0.; - } - p = d__[l]; - if (m == l) { - goto L140; - } - -/* - If remaining matrix is 2 by 2, use DLAE2 to compute its - eigenvalues. -*/ - - if (m == l - 1) { - rte = sqrt(e[l - 1]); - dlae2_(&d__[l], &rte, &d__[l - 1], &rt1, &rt2); - d__[l] = rt1; - d__[l - 1] = rt2; - e[l - 1] = 0.; - l += -2; - if (l >= lend) { - goto L100; - } - goto L150; - } - - if (jtot == nmaxit) { - goto L150; - } - ++jtot; - -/* Form shift. 
*/ - - rte = sqrt(e[l - 1]); - sigma = (d__[l - 1] - p) / (rte * 2.); - r__ = dlapy2_(&sigma, &c_b15); - sigma = p - rte / (sigma + d_sign(&r__, &sigma)); - - c__ = 1.; - s = 0.; - gamma = d__[m] - sigma; - p = gamma * gamma; - -/* Inner loop */ - - i__1 = l - 1; - for (i__ = m; i__ <= i__1; ++i__) { - bb = e[i__]; - r__ = p + bb; - if (i__ != m) { - e[i__ - 1] = s * r__; - } - oldc = c__; - c__ = p / r__; - s = bb / r__; - oldgam = gamma; - alpha = d__[i__ + 1]; - gamma = c__ * (alpha - sigma) - s * oldgam; - d__[i__] = oldgam + (alpha - gamma); - if (c__ != 0.) { - p = gamma * gamma / c__; - } else { - p = oldc * bb; - } -/* L130: */ - } - - e[l - 1] = s * p; - d__[l] = sigma + gamma; - goto L100; - -/* Eigenvalue found. */ - -L140: - d__[l] = p; - - --l; - if (l >= lend) { - goto L100; - } - goto L150; - - } - -/* Undo scaling if necessary */ - -L150: - if (iscale == 1) { - i__1 = lendsv - lsv + 1; - dlascl_("G", &c__0, &c__0, &ssfmax, &anorm, &i__1, &c__1, &d__[lsv], - n, info); - } - if (iscale == 2) { - i__1 = lendsv - lsv + 1; - dlascl_("G", &c__0, &c__0, &ssfmin, &anorm, &i__1, &c__1, &d__[lsv], - n, info); - } - -/* - Check for no convergence to an eigenvalue after a total - of N*MAXIT iterations. -*/ - - if (jtot < nmaxit) { - goto L10; - } - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - if (e[i__] != 0.) { - ++(*info); - } -/* L160: */ - } - goto L180; - -/* Sort eigenvalues in increasing order. 
*/ - -L170: - dlasrt_("I", n, &d__[1], info); - -L180: - return 0; - -/* End of DSTERF */ - -} /* dsterf_ */ - -/* Subroutine */ int dsyevd_(char *jobz, char *uplo, integer *n, doublereal * - a, integer *lda, doublereal *w, doublereal *work, integer *lwork, - integer *iwork, integer *liwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - doublereal d__1; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal eps; - static integer inde; - static doublereal anrm, rmin, rmax; - static integer lopt; - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *); - static doublereal sigma; - extern logical lsame_(char *, char *); - static integer iinfo, lwmin, liopt; - static logical lower, wantz; - static integer indwk2, llwrk2; - - static integer iscale; - extern /* Subroutine */ int dlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, doublereal *, - integer *, integer *), dstedc_(char *, integer *, - doublereal *, doublereal *, doublereal *, integer *, doublereal *, - integer *, integer *, integer *, integer *), dlacpy_( - char *, integer *, integer *, doublereal *, integer *, doublereal - *, integer *); - static doublereal safmin; - extern /* Subroutine */ int xerbla_(char *, integer *); - static doublereal bignum; - static integer indtau; - extern /* Subroutine */ int dsterf_(integer *, doublereal *, doublereal *, - integer *); - extern doublereal dlansy_(char *, char *, integer *, doublereal *, - integer *, doublereal *); - static integer indwrk, liwmin; - extern /* Subroutine */ int dormtr_(char *, char *, char *, integer *, - integer *, doublereal *, integer *, doublereal *, doublereal *, - integer *, doublereal *, integer *, integer *), dsytrd_(char *, integer *, doublereal *, integer *, - doublereal *, doublereal *, doublereal *, doublereal *, integer *, - integer *); - static integer llwork; - static 
doublereal smlnum; - static logical lquery; - - -/* - -- LAPACK driver routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DSYEVD computes all eigenvalues and, optionally, eigenvectors of a - real symmetric matrix A. If eigenvectors are desired, it uses a - divide and conquer algorithm. - - The divide and conquer algorithm makes very mild assumptions about - floating point arithmetic. It will work on machines with a guard - digit in add/subtract, or on those binary machines without guard - digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or - Cray-2. It could conceivably fail on hexadecimal or decimal machines - without guard digits, but we know of none. - - Because of large use of BLAS of level 3, DSYEVD needs N**2 more - workspace than DSYEVX. - - Arguments - ========= - - JOBZ (input) CHARACTER*1 - = 'N': Compute eigenvalues only; - = 'V': Compute eigenvalues and eigenvectors. - - UPLO (input) CHARACTER*1 - = 'U': Upper triangle of A is stored; - = 'L': Lower triangle of A is stored. - - N (input) INTEGER - The order of the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA, N) - On entry, the symmetric matrix A. If UPLO = 'U', the - leading N-by-N upper triangular part of A contains the - upper triangular part of the matrix A. If UPLO = 'L', - the leading N-by-N lower triangular part of A contains - the lower triangular part of the matrix A. - On exit, if JOBZ = 'V', then if INFO = 0, A contains the - orthonormal eigenvectors of the matrix A. - If JOBZ = 'N', then on exit the lower triangle (if UPLO='L') - or the upper triangle (if UPLO='U') of A, including the - diagonal, is destroyed. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - W (output) DOUBLE PRECISION array, dimension (N) - If INFO = 0, the eigenvalues in ascending order. 
- - WORK (workspace/output) DOUBLE PRECISION array, - dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. - If N <= 1, LWORK must be at least 1. - If JOBZ = 'N' and N > 1, LWORK must be at least 2*N+1. - If JOBZ = 'V' and N > 1, LWORK must be at least - 1 + 6*N + 2*N**2. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - IWORK (workspace/output) INTEGER array, dimension (LIWORK) - On exit, if INFO = 0, IWORK(1) returns the optimal LIWORK. - - LIWORK (input) INTEGER - The dimension of the array IWORK. - If N <= 1, LIWORK must be at least 1. - If JOBZ = 'N' and N > 1, LIWORK must be at least 1. - If JOBZ = 'V' and N > 1, LIWORK must be at least 3 + 5*N. - - If LIWORK = -1, then a workspace query is assumed; the - routine only calculates the optimal size of the IWORK array, - returns this value as the first entry of the IWORK array, and - no error message related to LIWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: if INFO = i, the algorithm failed to converge; i - off-diagonal elements of an intermediate tridiagonal - form did not converge to zero. - - Further Details - =============== - - Based on contributions by - Jeff Rutter, Computer Science Division, University of California - at Berkeley, USA - Modified by Francoise Tisseur, University of Tennessee. - - ===================================================================== - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --w; - --work; - --iwork; - - /* Function Body */ - wantz = lsame_(jobz, "V"); - lower = lsame_(uplo, "L"); - lquery = *lwork == -1 || *liwork == -1; - - *info = 0; - if (*n <= 1) { - liwmin = 1; - lwmin = 1; - lopt = lwmin; - liopt = liwmin; - } else { - if (wantz) { - liwmin = *n * 5 + 3; -/* Computing 2nd power */ - i__1 = *n; - lwmin = *n * 6 + 1 + ((i__1 * i__1) << (1)); - } else { - liwmin = 1; - lwmin = ((*n) << (1)) + 1; - } - lopt = lwmin; - liopt = liwmin; - } - if (! (wantz || lsame_(jobz, "N"))) { - *info = -1; - } else if (! (lower || lsame_(uplo, "U"))) { - *info = -2; - } else if (*n < 0) { - *info = -3; - } else if (*lda < max(1,*n)) { - *info = -5; - } else if ((*lwork < lwmin && ! lquery)) { - *info = -8; - } else if ((*liwork < liwmin && ! lquery)) { - *info = -10; - } - - if (*info == 0) { - work[1] = (doublereal) lopt; - iwork[1] = liopt; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("DSYEVD", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - - if (*n == 1) { - w[1] = a[a_dim1 + 1]; - if (wantz) { - a[a_dim1 + 1] = 1.; - } - return 0; - } - -/* Get machine constants. */ - - safmin = SAFEMINIMUM; - eps = PRECISION; - smlnum = safmin / eps; - bignum = 1. / smlnum; - rmin = sqrt(smlnum); - rmax = sqrt(bignum); - -/* Scale matrix to allowable range, if necessary. */ - - anrm = dlansy_("M", uplo, n, &a[a_offset], lda, &work[1]); - iscale = 0; - if ((anrm > 0. && anrm < rmin)) { - iscale = 1; - sigma = rmin / anrm; - } else if (anrm > rmax) { - iscale = 1; - sigma = rmax / anrm; - } - if (iscale == 1) { - dlascl_(uplo, &c__0, &c__0, &c_b15, &sigma, n, n, &a[a_offset], lda, - info); - } - -/* Call DSYTRD to reduce symmetric matrix to tridiagonal form. 
*/ - - inde = 1; - indtau = inde + *n; - indwrk = indtau + *n; - llwork = *lwork - indwrk + 1; - indwk2 = indwrk + *n * *n; - llwrk2 = *lwork - indwk2 + 1; - - dsytrd_(uplo, n, &a[a_offset], lda, &w[1], &work[inde], &work[indtau], & - work[indwrk], &llwork, &iinfo); - lopt = (integer) (((*n) << (1)) + work[indwrk]); - -/* - For eigenvalues only, call DSTERF. For eigenvectors, first call - DSTEDC to generate the eigenvector matrix, WORK(INDWRK), of the - tridiagonal matrix, then call DORMTR to multiply it by the - Householder transformations stored in A. -*/ - - if (! wantz) { - dsterf_(n, &w[1], &work[inde], info); - } else { - dstedc_("I", n, &w[1], &work[inde], &work[indwrk], n, &work[indwk2], & - llwrk2, &iwork[1], liwork, info); - dormtr_("L", uplo, "N", n, n, &a[a_offset], lda, &work[indtau], &work[ - indwrk], n, &work[indwk2], &llwrk2, &iinfo); - dlacpy_("A", n, n, &work[indwrk], n, &a[a_offset], lda); -/* - Computing MAX - Computing 2nd power -*/ - i__3 = *n; - i__1 = lopt, i__2 = *n * 6 + 1 + ((i__3 * i__3) << (1)); - lopt = max(i__1,i__2); - } - -/* If matrix was scaled, then rescale eigenvalues appropriately. */ - - if (iscale == 1) { - d__1 = 1. 
/ sigma; - dscal_(n, &d__1, &w[1], &c__1); - } - - work[1] = (doublereal) lopt; - iwork[1] = liopt; - - return 0; - -/* End of DSYEVD */ - -} /* dsyevd_ */ - -/* Subroutine */ int dsytd2_(char *uplo, integer *n, doublereal *a, integer * - lda, doublereal *d__, doublereal *e, doublereal *tau, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__; - extern doublereal ddot_(integer *, doublereal *, integer *, doublereal *, - integer *); - static doublereal taui; - extern /* Subroutine */ int dsyr2_(char *, integer *, doublereal *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - integer *); - static doublereal alpha; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int daxpy_(integer *, doublereal *, doublereal *, - integer *, doublereal *, integer *); - static logical upper; - extern /* Subroutine */ int dsymv_(char *, integer *, doublereal *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - doublereal *, integer *), dlarfg_(integer *, doublereal *, - doublereal *, integer *, doublereal *), xerbla_(char *, integer * - ); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - DSYTD2 reduces a real symmetric matrix A to symmetric tridiagonal - form T by an orthogonal similarity transformation: Q' * A * Q = T. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - Specifies whether the upper or lower triangular part of the - symmetric matrix A is stored: - = 'U': Upper triangular - = 'L': Lower triangular - - N (input) INTEGER - The order of the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the symmetric matrix A. 
If UPLO = 'U', the leading - n-by-n upper triangular part of A contains the upper - triangular part of the matrix A, and the strictly lower - triangular part of A is not referenced. If UPLO = 'L', the - leading n-by-n lower triangular part of A contains the lower - triangular part of the matrix A, and the strictly upper - triangular part of A is not referenced. - On exit, if UPLO = 'U', the diagonal and first superdiagonal - of A are overwritten by the corresponding elements of the - tridiagonal matrix T, and the elements above the first - superdiagonal, with the array TAU, represent the orthogonal - matrix Q as a product of elementary reflectors; if UPLO - = 'L', the diagonal and first subdiagonal of A are over- - written by the corresponding elements of the tridiagonal - matrix T, and the elements below the first subdiagonal, with - the array TAU, represent the orthogonal matrix Q as a product - of elementary reflectors. See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - D (output) DOUBLE PRECISION array, dimension (N) - The diagonal elements of the tridiagonal matrix T: - D(i) = A(i,i). - - E (output) DOUBLE PRECISION array, dimension (N-1) - The off-diagonal elements of the tridiagonal matrix T: - E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. - - TAU (output) DOUBLE PRECISION array, dimension (N-1) - The scalar factors of the elementary reflectors (see Further - Details). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - If UPLO = 'U', the matrix Q is represented as a product of elementary - reflectors - - Q = H(n-1) . . . H(2) H(1). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a real scalar, and v is a real vector with - v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in - A(1:i-1,i+1), and tau in TAU(i). 
- - If UPLO = 'L', the matrix Q is represented as a product of elementary - reflectors - - Q = H(1) H(2) . . . H(n-1). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a real scalar, and v is a real vector with - v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), - and tau in TAU(i). - - The contents of A on exit are illustrated by the following examples - with n = 5: - - if UPLO = 'U': if UPLO = 'L': - - ( d e v2 v3 v4 ) ( d ) - ( d e v3 v4 ) ( e d ) - ( d e v4 ) ( v1 e d ) - ( d e ) ( v1 v2 e d ) - ( d ) ( v1 v2 v3 e d ) - - where d and e denote diagonal and off-diagonal elements of T, and vi - denotes an element of the vector defining H(i). - - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --d__; - --e; - --tau; - - /* Function Body */ - *info = 0; - upper = lsame_(uplo, "U"); - if ((! upper && ! lsame_(uplo, "L"))) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*n)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DSYTD2", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n <= 0) { - return 0; - } - - if (upper) { - -/* Reduce the upper triangle of A */ - - for (i__ = *n - 1; i__ >= 1; --i__) { - -/* - Generate elementary reflector H(i) = I - tau * v * v' - to annihilate A(1:i-1,i+1) -*/ - - dlarfg_(&i__, &a[i__ + (i__ + 1) * a_dim1], &a[(i__ + 1) * a_dim1 - + 1], &c__1, &taui); - e[i__] = a[i__ + (i__ + 1) * a_dim1]; - - if (taui != 0.) 
{ - -/* Apply H(i) from both sides to A(1:i,1:i) */ - - a[i__ + (i__ + 1) * a_dim1] = 1.; - -/* Compute x := tau * A * v storing x in TAU(1:i) */ - - dsymv_(uplo, &i__, &taui, &a[a_offset], lda, &a[(i__ + 1) * - a_dim1 + 1], &c__1, &c_b29, &tau[1], &c__1) - ; - -/* Compute w := x - 1/2 * tau * (x'*v) * v */ - - alpha = taui * -.5 * ddot_(&i__, &tau[1], &c__1, &a[(i__ + 1) - * a_dim1 + 1], &c__1); - daxpy_(&i__, &alpha, &a[(i__ + 1) * a_dim1 + 1], &c__1, &tau[ - 1], &c__1); - -/* - Apply the transformation as a rank-2 update: - A := A - v * w' - w * v' -*/ - - dsyr2_(uplo, &i__, &c_b151, &a[(i__ + 1) * a_dim1 + 1], &c__1, - &tau[1], &c__1, &a[a_offset], lda); - - a[i__ + (i__ + 1) * a_dim1] = e[i__]; - } - d__[i__ + 1] = a[i__ + 1 + (i__ + 1) * a_dim1]; - tau[i__] = taui; -/* L10: */ - } - d__[1] = a[a_dim1 + 1]; - } else { - -/* Reduce the lower triangle of A */ - - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* - Generate elementary reflector H(i) = I - tau * v * v' - to annihilate A(i+2:n,i) -*/ - - i__2 = *n - i__; -/* Computing MIN */ - i__3 = i__ + 2; - dlarfg_(&i__2, &a[i__ + 1 + i__ * a_dim1], &a[min(i__3,*n) + i__ * - a_dim1], &c__1, &taui); - e[i__] = a[i__ + 1 + i__ * a_dim1]; - - if (taui != 0.) 
{ - -/* Apply H(i) from both sides to A(i+1:n,i+1:n) */ - - a[i__ + 1 + i__ * a_dim1] = 1.; - -/* Compute x := tau * A * v storing y in TAU(i:n-1) */ - - i__2 = *n - i__; - dsymv_(uplo, &i__2, &taui, &a[i__ + 1 + (i__ + 1) * a_dim1], - lda, &a[i__ + 1 + i__ * a_dim1], &c__1, &c_b29, &tau[ - i__], &c__1); - -/* Compute w := x - 1/2 * tau * (x'*v) * v */ - - i__2 = *n - i__; - alpha = taui * -.5 * ddot_(&i__2, &tau[i__], &c__1, &a[i__ + - 1 + i__ * a_dim1], &c__1); - i__2 = *n - i__; - daxpy_(&i__2, &alpha, &a[i__ + 1 + i__ * a_dim1], &c__1, &tau[ - i__], &c__1); - -/* - Apply the transformation as a rank-2 update: - A := A - v * w' - w * v' -*/ - - i__2 = *n - i__; - dsyr2_(uplo, &i__2, &c_b151, &a[i__ + 1 + i__ * a_dim1], & - c__1, &tau[i__], &c__1, &a[i__ + 1 + (i__ + 1) * - a_dim1], lda); - - a[i__ + 1 + i__ * a_dim1] = e[i__]; - } - d__[i__] = a[i__ + i__ * a_dim1]; - tau[i__] = taui; -/* L20: */ - } - d__[*n] = a[*n + *n * a_dim1]; - } - - return 0; - -/* End of DSYTD2 */ - -} /* dsytd2_ */ - -/* Subroutine */ int dsytrd_(char *uplo, integer *n, doublereal *a, integer * - lda, doublereal *d__, doublereal *e, doublereal *tau, doublereal * - work, integer *lwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__, j, nb, kk, nx, iws; - extern logical lsame_(char *, char *); - static integer nbmin, iinfo; - static logical upper; - extern /* Subroutine */ int dsytd2_(char *, integer *, doublereal *, - integer *, doublereal *, doublereal *, doublereal *, integer *), dsyr2k_(char *, char *, integer *, integer *, doublereal - *, doublereal *, integer *, doublereal *, integer *, doublereal *, - doublereal *, integer *), dlatrd_(char *, - integer *, integer *, doublereal *, integer *, doublereal *, - doublereal *, doublereal *, integer *), xerbla_(char *, - integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, 
ftnlen); - static integer ldwork, lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DSYTRD reduces a real symmetric matrix A to real symmetric - tridiagonal form T by an orthogonal similarity transformation: - Q**T * A * Q = T. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - = 'U': Upper triangle of A is stored; - = 'L': Lower triangle of A is stored. - - N (input) INTEGER - The order of the matrix A. N >= 0. - - A (input/output) DOUBLE PRECISION array, dimension (LDA,N) - On entry, the symmetric matrix A. If UPLO = 'U', the leading - N-by-N upper triangular part of A contains the upper - triangular part of the matrix A, and the strictly lower - triangular part of A is not referenced. If UPLO = 'L', the - leading N-by-N lower triangular part of A contains the lower - triangular part of the matrix A, and the strictly upper - triangular part of A is not referenced. - On exit, if UPLO = 'U', the diagonal and first superdiagonal - of A are overwritten by the corresponding elements of the - tridiagonal matrix T, and the elements above the first - superdiagonal, with the array TAU, represent the orthogonal - matrix Q as a product of elementary reflectors; if UPLO - = 'L', the diagonal and first subdiagonal of A are over- - written by the corresponding elements of the tridiagonal - matrix T, and the elements below the first subdiagonal, with - the array TAU, represent the orthogonal matrix Q as a product - of elementary reflectors. See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - D (output) DOUBLE PRECISION array, dimension (N) - The diagonal elements of the tridiagonal matrix T: - D(i) = A(i,i). 
- - E (output) DOUBLE PRECISION array, dimension (N-1) - The off-diagonal elements of the tridiagonal matrix T: - E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. - - TAU (output) DOUBLE PRECISION array, dimension (N-1) - The scalar factors of the elementary reflectors (see Further - Details). - - WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= 1. - For optimum performance LWORK >= N*NB, where NB is the - optimal blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - Further Details - =============== - - If UPLO = 'U', the matrix Q is represented as a product of elementary - reflectors - - Q = H(n-1) . . . H(2) H(1). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a real scalar, and v is a real vector with - v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in - A(1:i-1,i+1), and tau in TAU(i). - - If UPLO = 'L', the matrix Q is represented as a product of elementary - reflectors - - Q = H(1) H(2) . . . H(n-1). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a real scalar, and v is a real vector with - v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), - and tau in TAU(i). - - The contents of A on exit are illustrated by the following examples - with n = 5: - - if UPLO = 'U': if UPLO = 'L': - - ( d e v2 v3 v4 ) ( d ) - ( d e v3 v4 ) ( e d ) - ( d e v4 ) ( v1 e d ) - ( d e ) ( v1 v2 e d ) - ( d ) ( v1 v2 v3 e d ) - - where d and e denote diagonal and off-diagonal elements of T, and vi - denotes an element of the vector defining H(i). 
- - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --d__; - --e; - --tau; - --work; - - /* Function Body */ - *info = 0; - upper = lsame_(uplo, "U"); - lquery = *lwork == -1; - if ((! upper && ! lsame_(uplo, "L"))) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*n)) { - *info = -4; - } else if ((*lwork < 1 && ! lquery)) { - *info = -9; - } - - if (*info == 0) { - -/* Determine the block size. */ - - nb = ilaenv_(&c__1, "DSYTRD", uplo, n, &c_n1, &c_n1, &c_n1, (ftnlen)6, - (ftnlen)1); - lwkopt = *n * nb; - work[1] = (doublereal) lwkopt; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("DSYTRD", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - work[1] = 1.; - return 0; - } - - nx = *n; - iws = 1; - if ((nb > 1 && nb < *n)) { - -/* - Determine when to cross over from blocked to unblocked code - (last block is always handled by unblocked code). - - Computing MAX -*/ - i__1 = nb, i__2 = ilaenv_(&c__3, "DSYTRD", uplo, n, &c_n1, &c_n1, & - c_n1, (ftnlen)6, (ftnlen)1); - nx = max(i__1,i__2); - if (nx < *n) { - -/* Determine if workspace is large enough for blocked code. */ - - ldwork = *n; - iws = ldwork * nb; - if (*lwork < iws) { - -/* - Not enough workspace to use optimal NB: determine the - minimum value of NB, and reduce NB or force use of - unblocked code by setting NX = N. - - Computing MAX -*/ - i__1 = *lwork / ldwork; - nb = max(i__1,1); - nbmin = ilaenv_(&c__2, "DSYTRD", uplo, n, &c_n1, &c_n1, &c_n1, - (ftnlen)6, (ftnlen)1); - if (nb < nbmin) { - nx = *n; - } - } - } else { - nx = *n; - } - } else { - nb = 1; - } - - if (upper) { - -/* - Reduce the upper triangle of A. - Columns 1:kk are handled by the unblocked method. 
-*/ - - kk = *n - (*n - nx + nb - 1) / nb * nb; - i__1 = kk + 1; - i__2 = -nb; - for (i__ = *n - nb + 1; i__2 < 0 ? i__ >= i__1 : i__ <= i__1; i__ += - i__2) { - -/* - Reduce columns i:i+nb-1 to tridiagonal form and form the - matrix W which is needed to update the unreduced part of - the matrix -*/ - - i__3 = i__ + nb - 1; - dlatrd_(uplo, &i__3, &nb, &a[a_offset], lda, &e[1], &tau[1], & - work[1], &ldwork); - -/* - Update the unreduced submatrix A(1:i-1,1:i-1), using an - update of the form: A := A - V*W' - W*V' -*/ - - i__3 = i__ - 1; - dsyr2k_(uplo, "No transpose", &i__3, &nb, &c_b151, &a[i__ * - a_dim1 + 1], lda, &work[1], &ldwork, &c_b15, &a[a_offset], - lda); - -/* - Copy superdiagonal elements back into A, and diagonal - elements into D -*/ - - i__3 = i__ + nb - 1; - for (j = i__; j <= i__3; ++j) { - a[j - 1 + j * a_dim1] = e[j - 1]; - d__[j] = a[j + j * a_dim1]; -/* L10: */ - } -/* L20: */ - } - -/* Use unblocked code to reduce the last or only block */ - - dsytd2_(uplo, &kk, &a[a_offset], lda, &d__[1], &e[1], &tau[1], &iinfo); - } else { - -/* Reduce the lower triangle of A */ - - i__2 = *n - nx; - i__1 = nb; - for (i__ = 1; i__1 < 0 ? 
i__ >= i__2 : i__ <= i__2; i__ += i__1) { - -/* - Reduce columns i:i+nb-1 to tridiagonal form and form the - matrix W which is needed to update the unreduced part of - the matrix -*/ - - i__3 = *n - i__ + 1; - dlatrd_(uplo, &i__3, &nb, &a[i__ + i__ * a_dim1], lda, &e[i__], & - tau[i__], &work[1], &ldwork); - -/* - Update the unreduced submatrix A(i+ib:n,i+ib:n), using - an update of the form: A := A - V*W' - W*V' -*/ - - i__3 = *n - i__ - nb + 1; - dsyr2k_(uplo, "No transpose", &i__3, &nb, &c_b151, &a[i__ + nb + - i__ * a_dim1], lda, &work[nb + 1], &ldwork, &c_b15, &a[ - i__ + nb + (i__ + nb) * a_dim1], lda); - -/* - Copy subdiagonal elements back into A, and diagonal - elements into D -*/ - - i__3 = i__ + nb - 1; - for (j = i__; j <= i__3; ++j) { - a[j + 1 + j * a_dim1] = e[j]; - d__[j] = a[j + j * a_dim1]; -/* L30: */ - } -/* L40: */ - } - -/* Use unblocked code to reduce the last or only block */ - - i__1 = *n - i__ + 1; - dsytd2_(uplo, &i__1, &a[i__ + i__ * a_dim1], lda, &d__[i__], &e[i__], - &tau[i__], &iinfo); - } - - work[1] = (doublereal) lwkopt; - return 0; - -/* End of DSYTRD */ - -} /* dsytrd_ */ - -/* Subroutine */ int dtrevc_(char *side, char *howmny, logical *select, - integer *n, doublereal *t, integer *ldt, doublereal *vl, integer * - ldvl, doublereal *vr, integer *ldvr, integer *mm, integer *m, - doublereal *work, integer *info) -{ - /* System generated locals */ - integer t_dim1, t_offset, vl_dim1, vl_offset, vr_dim1, vr_offset, i__1, - i__2, i__3; - doublereal d__1, d__2, d__3, d__4; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static integer i__, j, k; - static doublereal x[4] /* was [2][2] */; - static integer j1, j2, n2, ii, ki, ip, is; - static doublereal wi, wr, rec, ulp, beta, emax; - static logical pair; - extern doublereal ddot_(integer *, doublereal *, integer *, doublereal *, - integer *); - static logical allv; - static integer ierr; - static doublereal unfl, ovfl, smin; - static logical over; - 
static doublereal vmax; - static integer jnxt; - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *); - static doublereal scale; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int dgemv_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, doublereal *, integer *, - doublereal *, doublereal *, integer *); - static doublereal remax; - extern /* Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *); - static logical leftv, bothv; - extern /* Subroutine */ int daxpy_(integer *, doublereal *, doublereal *, - integer *, doublereal *, integer *); - static doublereal vcrit; - static logical somev; - static doublereal xnorm; - extern /* Subroutine */ int dlaln2_(logical *, integer *, integer *, - doublereal *, doublereal *, doublereal *, integer *, doublereal *, - doublereal *, doublereal *, integer *, doublereal *, doublereal * - , doublereal *, integer *, doublereal *, doublereal *, integer *), - dlabad_(doublereal *, doublereal *); - - extern integer idamax_(integer *, doublereal *, integer *); - extern /* Subroutine */ int xerbla_(char *, integer *); - static doublereal bignum; - static logical rightv; - static doublereal smlnum; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - DTREVC computes some or all of the right and/or left eigenvectors of - a real upper quasi-triangular matrix T. - - The right eigenvector x and the left eigenvector y of T corresponding - to an eigenvalue w are defined by: - - T*x = w*x, y'*T = w*y' - - where y' denotes the conjugate transpose of the vector y. - - If all eigenvectors are requested, the routine may either return the - matrices X and/or Y of right or left eigenvectors of T, or the - products Q*X and/or Q*Y, where Q is an input orthogonal - matrix. 
If T was obtained from the real-Schur factorization of an - original matrix A = Q*T*Q', then Q*X and Q*Y are the matrices of - right or left eigenvectors of A. - - T must be in Schur canonical form (as returned by DHSEQR), that is, - block upper triangular with 1-by-1 and 2-by-2 diagonal blocks; each - 2-by-2 diagonal block has its diagonal elements equal and its - off-diagonal elements of opposite sign. Corresponding to each 2-by-2 - diagonal block is a complex conjugate pair of eigenvalues and - eigenvectors; only one eigenvector of the pair is computed, namely - the one corresponding to the eigenvalue with positive imaginary part. - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'R': compute right eigenvectors only; - = 'L': compute left eigenvectors only; - = 'B': compute both right and left eigenvectors. - - HOWMNY (input) CHARACTER*1 - = 'A': compute all right and/or left eigenvectors; - = 'B': compute all right and/or left eigenvectors, - and backtransform them using the input matrices - supplied in VR and/or VL; - = 'S': compute selected right and/or left eigenvectors, - specified by the logical array SELECT. - - SELECT (input/output) LOGICAL array, dimension (N) - If HOWMNY = 'S', SELECT specifies the eigenvectors to be - computed. - If HOWMNY = 'A' or 'B', SELECT is not referenced. - To select the real eigenvector corresponding to a real - eigenvalue w(j), SELECT(j) must be set to .TRUE.. To select - the complex eigenvector corresponding to a complex conjugate - pair w(j) and w(j+1), either SELECT(j) or SELECT(j+1) must be - set to .TRUE.; then on exit SELECT(j) is .TRUE. and - SELECT(j+1) is .FALSE.. - - N (input) INTEGER - The order of the matrix T. N >= 0. - - T (input) DOUBLE PRECISION array, dimension (LDT,N) - The upper quasi-triangular matrix T in Schur canonical form. - - LDT (input) INTEGER - The leading dimension of the array T. LDT >= max(1,N). 
- - VL (input/output) DOUBLE PRECISION array, dimension (LDVL,MM) - On entry, if SIDE = 'L' or 'B' and HOWMNY = 'B', VL must - contain an N-by-N matrix Q (usually the orthogonal matrix Q - of Schur vectors returned by DHSEQR). - On exit, if SIDE = 'L' or 'B', VL contains: - if HOWMNY = 'A', the matrix Y of left eigenvectors of T; - VL has the same quasi-lower triangular form - as T'. If T(i,i) is a real eigenvalue, then - the i-th column VL(i) of VL is its - corresponding eigenvector. If T(i:i+1,i:i+1) - is a 2-by-2 block whose eigenvalues are - complex-conjugate eigenvalues of T, then - VL(i)+sqrt(-1)*VL(i+1) is the complex - eigenvector corresponding to the eigenvalue - with positive real part. - if HOWMNY = 'B', the matrix Q*Y; - if HOWMNY = 'S', the left eigenvectors of T specified by - SELECT, stored consecutively in the columns - of VL, in the same order as their - eigenvalues. - A complex eigenvector corresponding to a complex eigenvalue - is stored in two consecutive columns, the first holding the - real part, and the second the imaginary part. - If SIDE = 'R', VL is not referenced. - - LDVL (input) INTEGER - The leading dimension of the array VL. LDVL >= max(1,N) if - SIDE = 'L' or 'B'; LDVL >= 1 otherwise. - - VR (input/output) DOUBLE PRECISION array, dimension (LDVR,MM) - On entry, if SIDE = 'R' or 'B' and HOWMNY = 'B', VR must - contain an N-by-N matrix Q (usually the orthogonal matrix Q - of Schur vectors returned by DHSEQR). - On exit, if SIDE = 'R' or 'B', VR contains: - if HOWMNY = 'A', the matrix X of right eigenvectors of T; - VR has the same quasi-upper triangular form - as T. If T(i,i) is a real eigenvalue, then - the i-th column VR(i) of VR is its - corresponding eigenvector. If T(i:i+1,i:i+1) - is a 2-by-2 block whose eigenvalues are - complex-conjugate eigenvalues of T, then - VR(i)+sqrt(-1)*VR(i+1) is the complex - eigenvector corresponding to the eigenvalue - with positive real part. 
- - if HOWMNY = 'B', the matrix Q*X; - if HOWMNY = 'S', the right eigenvectors of T specified by - SELECT, stored consecutively in the columns - of VR, in the same order as their - eigenvalues. - A complex eigenvector corresponding to a complex eigenvalue - is stored in two consecutive columns, the first holding the - real part and the second the imaginary part. - If SIDE = 'L', VR is not referenced. - - LDVR (input) INTEGER - The leading dimension of the array VR. LDVR >= max(1,N) if - SIDE = 'R' or 'B'; LDVR >= 1 otherwise. - - MM (input) INTEGER - The number of columns in the arrays VL and/or VR. MM >= M. - - M (output) INTEGER - The number of columns in the arrays VL and/or VR actually - used to store the eigenvectors. - If HOWMNY = 'A' or 'B', M is set to N. - Each selected real eigenvector occupies one column and each - selected complex eigenvector occupies two columns. - - WORK (workspace) DOUBLE PRECISION array, dimension (3*N) - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - Further Details - =============== - - The algorithm used in this program is basically backward (forward) - substitution, with scaling to make the code robust against - possible overflow. - - Each eigenvector is normalized so that the element of largest - magnitude has magnitude 1; here the magnitude of a complex number - (x,y) is taken to be |x| + |y|. 
- - ===================================================================== - - - Decode and test the input parameters -*/ - - /* Parameter adjustments */ - --select; - t_dim1 = *ldt; - t_offset = 1 + t_dim1 * 1; - t -= t_offset; - vl_dim1 = *ldvl; - vl_offset = 1 + vl_dim1 * 1; - vl -= vl_offset; - vr_dim1 = *ldvr; - vr_offset = 1 + vr_dim1 * 1; - vr -= vr_offset; - --work; - - /* Function Body */ - bothv = lsame_(side, "B"); - rightv = lsame_(side, "R") || bothv; - leftv = lsame_(side, "L") || bothv; - - allv = lsame_(howmny, "A"); - over = lsame_(howmny, "B"); - somev = lsame_(howmny, "S"); - - *info = 0; - if ((! rightv && ! leftv)) { - *info = -1; - } else if (((! allv && ! over) && ! somev)) { - *info = -2; - } else if (*n < 0) { - *info = -4; - } else if (*ldt < max(1,*n)) { - *info = -6; - } else if (*ldvl < 1 || (leftv && *ldvl < *n)) { - *info = -8; - } else if (*ldvr < 1 || (rightv && *ldvr < *n)) { - *info = -10; - } else { - -/* - Set M to the number of columns required to store the selected - eigenvectors, standardize the array SELECT if necessary, and - test MM. -*/ - - if (somev) { - *m = 0; - pair = FALSE_; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (pair) { - pair = FALSE_; - select[j] = FALSE_; - } else { - if (j < *n) { - if (t[j + 1 + j * t_dim1] == 0.) { - if (select[j]) { - ++(*m); - } - } else { - pair = TRUE_; - if (select[j] || select[j + 1]) { - select[j] = TRUE_; - *m += 2; - } - } - } else { - if (select[*n]) { - ++(*m); - } - } - } -/* L10: */ - } - } else { - *m = *n; - } - - if (*mm < *m) { - *info = -11; - } - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("DTREVC", &i__1); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0) { - return 0; - } - -/* Set the constants to control overflow. */ - - unfl = SAFEMINIMUM; - ovfl = 1. / unfl; - dlabad_(&unfl, &ovfl); - ulp = PRECISION; - smlnum = unfl * (*n / ulp); - bignum = (1. 
- ulp) / smlnum; - -/* - Compute 1-norm of each column of strictly upper triangular - part of T to control overflow in triangular solver. -*/ - - work[1] = 0.; - i__1 = *n; - for (j = 2; j <= i__1; ++j) { - work[j] = 0.; - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - work[j] += (d__1 = t[i__ + j * t_dim1], abs(d__1)); -/* L20: */ - } -/* L30: */ - } - -/* - Index IP is used to specify the real or complex eigenvalue: - IP = 0, real eigenvalue, - 1, first of conjugate complex pair: (wr,wi) - -1, second of conjugate complex pair: (wr,wi) -*/ - - n2 = (*n) << (1); - - if (rightv) { - -/* Compute right eigenvectors. */ - - ip = 0; - is = *m; - for (ki = *n; ki >= 1; --ki) { - - if (ip == 1) { - goto L130; - } - if (ki == 1) { - goto L40; - } - if (t[ki + (ki - 1) * t_dim1] == 0.) { - goto L40; - } - ip = -1; - -L40: - if (somev) { - if (ip == 0) { - if (! select[ki]) { - goto L130; - } - } else { - if (! select[ki - 1]) { - goto L130; - } - } - } - -/* Compute the KI-th eigenvalue (WR,WI). */ - - wr = t[ki + ki * t_dim1]; - wi = 0.; - if (ip != 0) { - wi = sqrt((d__1 = t[ki + (ki - 1) * t_dim1], abs(d__1))) * - sqrt((d__2 = t[ki - 1 + ki * t_dim1], abs(d__2))); - } -/* Computing MAX */ - d__1 = ulp * (abs(wr) + abs(wi)); - smin = max(d__1,smlnum); - - if (ip == 0) { - -/* Real right eigenvector */ - - work[ki + *n] = 1.; - -/* Form right-hand side */ - - i__1 = ki - 1; - for (k = 1; k <= i__1; ++k) { - work[k + *n] = -t[k + ki * t_dim1]; -/* L50: */ - } - -/* - Solve the upper quasi-triangular system: - (T(1:KI-1,1:KI-1) - WR)*X = SCALE*WORK. -*/ - - jnxt = ki - 1; - for (j = ki - 1; j >= 1; --j) { - if (j > jnxt) { - goto L60; - } - j1 = j; - j2 = j; - jnxt = j - 1; - if (j > 1) { - if (t[j + (j - 1) * t_dim1] != 0.) 
{ - j1 = j - 1; - jnxt = j - 2; - } - } - - if (j1 == j2) { - -/* 1-by-1 diagonal block */ - - dlaln2_(&c_false, &c__1, &c__1, &smin, &c_b15, &t[j + - j * t_dim1], ldt, &c_b15, &c_b15, &work[j + * - n], n, &wr, &c_b29, x, &c__2, &scale, &xnorm, - &ierr); - -/* - Scale X(1,1) to avoid overflow when updating - the right-hand side. -*/ - - if (xnorm > 1.) { - if (work[j] > bignum / xnorm) { - x[0] /= xnorm; - scale /= xnorm; - } - } - -/* Scale if necessary */ - - if (scale != 1.) { - dscal_(&ki, &scale, &work[*n + 1], &c__1); - } - work[j + *n] = x[0]; - -/* Update right-hand side */ - - i__1 = j - 1; - d__1 = -x[0]; - daxpy_(&i__1, &d__1, &t[j * t_dim1 + 1], &c__1, &work[ - *n + 1], &c__1); - - } else { - -/* 2-by-2 diagonal block */ - - dlaln2_(&c_false, &c__2, &c__1, &smin, &c_b15, &t[j - - 1 + (j - 1) * t_dim1], ldt, &c_b15, &c_b15, & - work[j - 1 + *n], n, &wr, &c_b29, x, &c__2, & - scale, &xnorm, &ierr); - -/* - Scale X(1,1) and X(2,1) to avoid overflow when - updating the right-hand side. -*/ - - if (xnorm > 1.) { -/* Computing MAX */ - d__1 = work[j - 1], d__2 = work[j]; - beta = max(d__1,d__2); - if (beta > bignum / xnorm) { - x[0] /= xnorm; - x[1] /= xnorm; - scale /= xnorm; - } - } - -/* Scale if necessary */ - - if (scale != 1.) { - dscal_(&ki, &scale, &work[*n + 1], &c__1); - } - work[j - 1 + *n] = x[0]; - work[j + *n] = x[1]; - -/* Update right-hand side */ - - i__1 = j - 2; - d__1 = -x[0]; - daxpy_(&i__1, &d__1, &t[(j - 1) * t_dim1 + 1], &c__1, - &work[*n + 1], &c__1); - i__1 = j - 2; - d__1 = -x[1]; - daxpy_(&i__1, &d__1, &t[j * t_dim1 + 1], &c__1, &work[ - *n + 1], &c__1); - } -L60: - ; - } - -/* Copy the vector x or Q*x to VR and normalize. */ - - if (! over) { - dcopy_(&ki, &work[*n + 1], &c__1, &vr[is * vr_dim1 + 1], & - c__1); - - ii = idamax_(&ki, &vr[is * vr_dim1 + 1], &c__1); - remax = 1. 
/ (d__1 = vr[ii + is * vr_dim1], abs(d__1)); - dscal_(&ki, &remax, &vr[is * vr_dim1 + 1], &c__1); - - i__1 = *n; - for (k = ki + 1; k <= i__1; ++k) { - vr[k + is * vr_dim1] = 0.; -/* L70: */ - } - } else { - if (ki > 1) { - i__1 = ki - 1; - dgemv_("N", n, &i__1, &c_b15, &vr[vr_offset], ldvr, & - work[*n + 1], &c__1, &work[ki + *n], &vr[ki * - vr_dim1 + 1], &c__1); - } - - ii = idamax_(n, &vr[ki * vr_dim1 + 1], &c__1); - remax = 1. / (d__1 = vr[ii + ki * vr_dim1], abs(d__1)); - dscal_(n, &remax, &vr[ki * vr_dim1 + 1], &c__1); - } - - } else { - -/* - Complex right eigenvector. - - Initial solve - [ (T(KI-1,KI-1) T(KI-1,KI) ) - (WR + I* WI)]*X = 0. - [ (T(KI,KI-1) T(KI,KI) ) ] -*/ - - if ((d__1 = t[ki - 1 + ki * t_dim1], abs(d__1)) >= (d__2 = t[ - ki + (ki - 1) * t_dim1], abs(d__2))) { - work[ki - 1 + *n] = 1.; - work[ki + n2] = wi / t[ki - 1 + ki * t_dim1]; - } else { - work[ki - 1 + *n] = -wi / t[ki + (ki - 1) * t_dim1]; - work[ki + n2] = 1.; - } - work[ki + *n] = 0.; - work[ki - 1 + n2] = 0.; - -/* Form right-hand side */ - - i__1 = ki - 2; - for (k = 1; k <= i__1; ++k) { - work[k + *n] = -work[ki - 1 + *n] * t[k + (ki - 1) * - t_dim1]; - work[k + n2] = -work[ki + n2] * t[k + ki * t_dim1]; -/* L80: */ - } - -/* - Solve upper quasi-triangular system: - (T(1:KI-2,1:KI-2) - (WR+i*WI))*X = SCALE*(WORK+i*WORK2) -*/ - - jnxt = ki - 2; - for (j = ki - 2; j >= 1; --j) { - if (j > jnxt) { - goto L90; - } - j1 = j; - j2 = j; - jnxt = j - 1; - if (j > 1) { - if (t[j + (j - 1) * t_dim1] != 0.) { - j1 = j - 1; - jnxt = j - 2; - } - } - - if (j1 == j2) { - -/* 1-by-1 diagonal block */ - - dlaln2_(&c_false, &c__1, &c__2, &smin, &c_b15, &t[j + - j * t_dim1], ldt, &c_b15, &c_b15, &work[j + * - n], n, &wr, &wi, x, &c__2, &scale, &xnorm, & - ierr); - -/* - Scale X(1,1) and X(1,2) to avoid overflow when - updating the right-hand side. -*/ - - if (xnorm > 1.) 
{ - if (work[j] > bignum / xnorm) { - x[0] /= xnorm; - x[2] /= xnorm; - scale /= xnorm; - } - } - -/* Scale if necessary */ - - if (scale != 1.) { - dscal_(&ki, &scale, &work[*n + 1], &c__1); - dscal_(&ki, &scale, &work[n2 + 1], &c__1); - } - work[j + *n] = x[0]; - work[j + n2] = x[2]; - -/* Update the right-hand side */ - - i__1 = j - 1; - d__1 = -x[0]; - daxpy_(&i__1, &d__1, &t[j * t_dim1 + 1], &c__1, &work[ - *n + 1], &c__1); - i__1 = j - 1; - d__1 = -x[2]; - daxpy_(&i__1, &d__1, &t[j * t_dim1 + 1], &c__1, &work[ - n2 + 1], &c__1); - - } else { - -/* 2-by-2 diagonal block */ - - dlaln2_(&c_false, &c__2, &c__2, &smin, &c_b15, &t[j - - 1 + (j - 1) * t_dim1], ldt, &c_b15, &c_b15, & - work[j - 1 + *n], n, &wr, &wi, x, &c__2, & - scale, &xnorm, &ierr); - -/* - Scale X to avoid overflow when updating - the right-hand side. -*/ - - if (xnorm > 1.) { -/* Computing MAX */ - d__1 = work[j - 1], d__2 = work[j]; - beta = max(d__1,d__2); - if (beta > bignum / xnorm) { - rec = 1. / xnorm; - x[0] *= rec; - x[2] *= rec; - x[1] *= rec; - x[3] *= rec; - scale *= rec; - } - } - -/* Scale if necessary */ - - if (scale != 1.) { - dscal_(&ki, &scale, &work[*n + 1], &c__1); - dscal_(&ki, &scale, &work[n2 + 1], &c__1); - } - work[j - 1 + *n] = x[0]; - work[j + *n] = x[1]; - work[j - 1 + n2] = x[2]; - work[j + n2] = x[3]; - -/* Update the right-hand side */ - - i__1 = j - 2; - d__1 = -x[0]; - daxpy_(&i__1, &d__1, &t[(j - 1) * t_dim1 + 1], &c__1, - &work[*n + 1], &c__1); - i__1 = j - 2; - d__1 = -x[1]; - daxpy_(&i__1, &d__1, &t[j * t_dim1 + 1], &c__1, &work[ - *n + 1], &c__1); - i__1 = j - 2; - d__1 = -x[2]; - daxpy_(&i__1, &d__1, &t[(j - 1) * t_dim1 + 1], &c__1, - &work[n2 + 1], &c__1); - i__1 = j - 2; - d__1 = -x[3]; - daxpy_(&i__1, &d__1, &t[j * t_dim1 + 1], &c__1, &work[ - n2 + 1], &c__1); - } -L90: - ; - } - -/* Copy the vector x or Q*x to VR and normalize. */ - - if (! 
over) { - dcopy_(&ki, &work[*n + 1], &c__1, &vr[(is - 1) * vr_dim1 - + 1], &c__1); - dcopy_(&ki, &work[n2 + 1], &c__1, &vr[is * vr_dim1 + 1], & - c__1); - - emax = 0.; - i__1 = ki; - for (k = 1; k <= i__1; ++k) { -/* Computing MAX */ - d__3 = emax, d__4 = (d__1 = vr[k + (is - 1) * vr_dim1] - , abs(d__1)) + (d__2 = vr[k + is * vr_dim1], - abs(d__2)); - emax = max(d__3,d__4); -/* L100: */ - } - - remax = 1. / emax; - dscal_(&ki, &remax, &vr[(is - 1) * vr_dim1 + 1], &c__1); - dscal_(&ki, &remax, &vr[is * vr_dim1 + 1], &c__1); - - i__1 = *n; - for (k = ki + 1; k <= i__1; ++k) { - vr[k + (is - 1) * vr_dim1] = 0.; - vr[k + is * vr_dim1] = 0.; -/* L110: */ - } - - } else { - - if (ki > 2) { - i__1 = ki - 2; - dgemv_("N", n, &i__1, &c_b15, &vr[vr_offset], ldvr, & - work[*n + 1], &c__1, &work[ki - 1 + *n], &vr[( - ki - 1) * vr_dim1 + 1], &c__1); - i__1 = ki - 2; - dgemv_("N", n, &i__1, &c_b15, &vr[vr_offset], ldvr, & - work[n2 + 1], &c__1, &work[ki + n2], &vr[ki * - vr_dim1 + 1], &c__1); - } else { - dscal_(n, &work[ki - 1 + *n], &vr[(ki - 1) * vr_dim1 - + 1], &c__1); - dscal_(n, &work[ki + n2], &vr[ki * vr_dim1 + 1], & - c__1); - } - - emax = 0.; - i__1 = *n; - for (k = 1; k <= i__1; ++k) { -/* Computing MAX */ - d__3 = emax, d__4 = (d__1 = vr[k + (ki - 1) * vr_dim1] - , abs(d__1)) + (d__2 = vr[k + ki * vr_dim1], - abs(d__2)); - emax = max(d__3,d__4); -/* L120: */ - } - remax = 1. / emax; - dscal_(n, &remax, &vr[(ki - 1) * vr_dim1 + 1], &c__1); - dscal_(n, &remax, &vr[ki * vr_dim1 + 1], &c__1); - } - } - - --is; - if (ip != 0) { - --is; - } -L130: - if (ip == 1) { - ip = 0; - } - if (ip == -1) { - ip = 1; - } -/* L140: */ - } - } - - if (leftv) { - -/* Compute left eigenvectors. */ - - ip = 0; - is = 1; - i__1 = *n; - for (ki = 1; ki <= i__1; ++ki) { - - if (ip == -1) { - goto L250; - } - if (ki == *n) { - goto L150; - } - if (t[ki + 1 + ki * t_dim1] == 0.) { - goto L150; - } - ip = 1; - -L150: - if (somev) { - if (! 
select[ki]) { - goto L250; - } - } - -/* Compute the KI-th eigenvalue (WR,WI). */ - - wr = t[ki + ki * t_dim1]; - wi = 0.; - if (ip != 0) { - wi = sqrt((d__1 = t[ki + (ki + 1) * t_dim1], abs(d__1))) * - sqrt((d__2 = t[ki + 1 + ki * t_dim1], abs(d__2))); - } -/* Computing MAX */ - d__1 = ulp * (abs(wr) + abs(wi)); - smin = max(d__1,smlnum); - - if (ip == 0) { - -/* Real left eigenvector. */ - - work[ki + *n] = 1.; - -/* Form right-hand side */ - - i__2 = *n; - for (k = ki + 1; k <= i__2; ++k) { - work[k + *n] = -t[ki + k * t_dim1]; -/* L160: */ - } - -/* - Solve the quasi-triangular system: - (T(KI+1:N,KI+1:N) - WR)'*X = SCALE*WORK -*/ - - vmax = 1.; - vcrit = bignum; - - jnxt = ki + 1; - i__2 = *n; - for (j = ki + 1; j <= i__2; ++j) { - if (j < jnxt) { - goto L170; - } - j1 = j; - j2 = j; - jnxt = j + 1; - if (j < *n) { - if (t[j + 1 + j * t_dim1] != 0.) { - j2 = j + 1; - jnxt = j + 2; - } - } - - if (j1 == j2) { - -/* - 1-by-1 diagonal block - - Scale if necessary to avoid overflow when forming - the right-hand side. -*/ - - if (work[j] > vcrit) { - rec = 1. / vmax; - i__3 = *n - ki + 1; - dscal_(&i__3, &rec, &work[ki + *n], &c__1); - vmax = 1.; - vcrit = bignum; - } - - i__3 = j - ki - 1; - work[j + *n] -= ddot_(&i__3, &t[ki + 1 + j * t_dim1], - &c__1, &work[ki + 1 + *n], &c__1); - -/* Solve (T(J,J)-WR)'*X = WORK */ - - dlaln2_(&c_false, &c__1, &c__1, &smin, &c_b15, &t[j + - j * t_dim1], ldt, &c_b15, &c_b15, &work[j + * - n], n, &wr, &c_b29, x, &c__2, &scale, &xnorm, - &ierr); - -/* Scale if necessary */ - - if (scale != 1.) { - i__3 = *n - ki + 1; - dscal_(&i__3, &scale, &work[ki + *n], &c__1); - } - work[j + *n] = x[0]; -/* Computing MAX */ - d__2 = (d__1 = work[j + *n], abs(d__1)); - vmax = max(d__2,vmax); - vcrit = bignum / vmax; - - } else { - -/* - 2-by-2 diagonal block - - Scale if necessary to avoid overflow when forming - the right-hand side. 
- - Computing MAX -*/ - d__1 = work[j], d__2 = work[j + 1]; - beta = max(d__1,d__2); - if (beta > vcrit) { - rec = 1. / vmax; - i__3 = *n - ki + 1; - dscal_(&i__3, &rec, &work[ki + *n], &c__1); - vmax = 1.; - vcrit = bignum; - } - - i__3 = j - ki - 1; - work[j + *n] -= ddot_(&i__3, &t[ki + 1 + j * t_dim1], - &c__1, &work[ki + 1 + *n], &c__1); - - i__3 = j - ki - 1; - work[j + 1 + *n] -= ddot_(&i__3, &t[ki + 1 + (j + 1) * - t_dim1], &c__1, &work[ki + 1 + *n], &c__1); - -/* - Solve - [T(J,J)-WR T(J,J+1) ]'* X = SCALE*( WORK1 ) - [T(J+1,J) T(J+1,J+1)-WR] ( WORK2 ) -*/ - - dlaln2_(&c_true, &c__2, &c__1, &smin, &c_b15, &t[j + - j * t_dim1], ldt, &c_b15, &c_b15, &work[j + * - n], n, &wr, &c_b29, x, &c__2, &scale, &xnorm, - &ierr); - -/* Scale if necessary */ - - if (scale != 1.) { - i__3 = *n - ki + 1; - dscal_(&i__3, &scale, &work[ki + *n], &c__1); - } - work[j + *n] = x[0]; - work[j + 1 + *n] = x[1]; - -/* Computing MAX */ - d__3 = (d__1 = work[j + *n], abs(d__1)), d__4 = (d__2 - = work[j + 1 + *n], abs(d__2)), d__3 = max( - d__3,d__4); - vmax = max(d__3,vmax); - vcrit = bignum / vmax; - - } -L170: - ; - } - -/* Copy the vector x or Q*x to VL and normalize. */ - - if (! over) { - i__2 = *n - ki + 1; - dcopy_(&i__2, &work[ki + *n], &c__1, &vl[ki + is * - vl_dim1], &c__1); - - i__2 = *n - ki + 1; - ii = idamax_(&i__2, &vl[ki + is * vl_dim1], &c__1) + ki - - 1; - remax = 1. / (d__1 = vl[ii + is * vl_dim1], abs(d__1)); - i__2 = *n - ki + 1; - dscal_(&i__2, &remax, &vl[ki + is * vl_dim1], &c__1); - - i__2 = ki - 1; - for (k = 1; k <= i__2; ++k) { - vl[k + is * vl_dim1] = 0.; -/* L180: */ - } - - } else { - - if (ki < *n) { - i__2 = *n - ki; - dgemv_("N", n, &i__2, &c_b15, &vl[(ki + 1) * vl_dim1 - + 1], ldvl, &work[ki + 1 + *n], &c__1, &work[ - ki + *n], &vl[ki * vl_dim1 + 1], &c__1); - } - - ii = idamax_(n, &vl[ki * vl_dim1 + 1], &c__1); - remax = 1. 
/ (d__1 = vl[ii + ki * vl_dim1], abs(d__1)); - dscal_(n, &remax, &vl[ki * vl_dim1 + 1], &c__1); - - } - - } else { - -/* - Complex left eigenvector. - - Initial solve: - ((T(KI,KI) T(KI,KI+1) )' - (WR - I* WI))*X = 0. - ((T(KI+1,KI) T(KI+1,KI+1)) ) -*/ - - if ((d__1 = t[ki + (ki + 1) * t_dim1], abs(d__1)) >= (d__2 = - t[ki + 1 + ki * t_dim1], abs(d__2))) { - work[ki + *n] = wi / t[ki + (ki + 1) * t_dim1]; - work[ki + 1 + n2] = 1.; - } else { - work[ki + *n] = 1.; - work[ki + 1 + n2] = -wi / t[ki + 1 + ki * t_dim1]; - } - work[ki + 1 + *n] = 0.; - work[ki + n2] = 0.; - -/* Form right-hand side */ - - i__2 = *n; - for (k = ki + 2; k <= i__2; ++k) { - work[k + *n] = -work[ki + *n] * t[ki + k * t_dim1]; - work[k + n2] = -work[ki + 1 + n2] * t[ki + 1 + k * t_dim1] - ; -/* L190: */ - } - -/* - Solve complex quasi-triangular system: - ( T(KI+2,N:KI+2,N) - (WR-i*WI) )*X = WORK1+i*WORK2 -*/ - - vmax = 1.; - vcrit = bignum; - - jnxt = ki + 2; - i__2 = *n; - for (j = ki + 2; j <= i__2; ++j) { - if (j < jnxt) { - goto L200; - } - j1 = j; - j2 = j; - jnxt = j + 1; - if (j < *n) { - if (t[j + 1 + j * t_dim1] != 0.) { - j2 = j + 1; - jnxt = j + 2; - } - } - - if (j1 == j2) { - -/* - 1-by-1 diagonal block - - Scale if necessary to avoid overflow when - forming the right-hand side elements. -*/ - - if (work[j] > vcrit) { - rec = 1. 
/ vmax; - i__3 = *n - ki + 1; - dscal_(&i__3, &rec, &work[ki + *n], &c__1); - i__3 = *n - ki + 1; - dscal_(&i__3, &rec, &work[ki + n2], &c__1); - vmax = 1.; - vcrit = bignum; - } - - i__3 = j - ki - 2; - work[j + *n] -= ddot_(&i__3, &t[ki + 2 + j * t_dim1], - &c__1, &work[ki + 2 + *n], &c__1); - i__3 = j - ki - 2; - work[j + n2] -= ddot_(&i__3, &t[ki + 2 + j * t_dim1], - &c__1, &work[ki + 2 + n2], &c__1); - -/* Solve (T(J,J)-(WR-i*WI))*(X11+i*X12)= WK+I*WK2 */ - - d__1 = -wi; - dlaln2_(&c_false, &c__1, &c__2, &smin, &c_b15, &t[j + - j * t_dim1], ldt, &c_b15, &c_b15, &work[j + * - n], n, &wr, &d__1, x, &c__2, &scale, &xnorm, & - ierr); - -/* Scale if necessary */ - - if (scale != 1.) { - i__3 = *n - ki + 1; - dscal_(&i__3, &scale, &work[ki + *n], &c__1); - i__3 = *n - ki + 1; - dscal_(&i__3, &scale, &work[ki + n2], &c__1); - } - work[j + *n] = x[0]; - work[j + n2] = x[2]; -/* Computing MAX */ - d__3 = (d__1 = work[j + *n], abs(d__1)), d__4 = (d__2 - = work[j + n2], abs(d__2)), d__3 = max(d__3, - d__4); - vmax = max(d__3,vmax); - vcrit = bignum / vmax; - - } else { - -/* - 2-by-2 diagonal block - - Scale if necessary to avoid overflow when forming - the right-hand side elements. - - Computing MAX -*/ - d__1 = work[j], d__2 = work[j + 1]; - beta = max(d__1,d__2); - if (beta > vcrit) { - rec = 1. 
/ vmax; - i__3 = *n - ki + 1; - dscal_(&i__3, &rec, &work[ki + *n], &c__1); - i__3 = *n - ki + 1; - dscal_(&i__3, &rec, &work[ki + n2], &c__1); - vmax = 1.; - vcrit = bignum; - } - - i__3 = j - ki - 2; - work[j + *n] -= ddot_(&i__3, &t[ki + 2 + j * t_dim1], - &c__1, &work[ki + 2 + *n], &c__1); - - i__3 = j - ki - 2; - work[j + n2] -= ddot_(&i__3, &t[ki + 2 + j * t_dim1], - &c__1, &work[ki + 2 + n2], &c__1); - - i__3 = j - ki - 2; - work[j + 1 + *n] -= ddot_(&i__3, &t[ki + 2 + (j + 1) * - t_dim1], &c__1, &work[ki + 2 + *n], &c__1); - - i__3 = j - ki - 2; - work[j + 1 + n2] -= ddot_(&i__3, &t[ki + 2 + (j + 1) * - t_dim1], &c__1, &work[ki + 2 + n2], &c__1); - -/* - Solve 2-by-2 complex linear equation - ([T(j,j) T(j,j+1) ]'-(wr-i*wi)*I)*X = SCALE*B - ([T(j+1,j) T(j+1,j+1)] ) -*/ - - d__1 = -wi; - dlaln2_(&c_true, &c__2, &c__2, &smin, &c_b15, &t[j + - j * t_dim1], ldt, &c_b15, &c_b15, &work[j + * - n], n, &wr, &d__1, x, &c__2, &scale, &xnorm, & - ierr); - -/* Scale if necessary */ - - if (scale != 1.) { - i__3 = *n - ki + 1; - dscal_(&i__3, &scale, &work[ki + *n], &c__1); - i__3 = *n - ki + 1; - dscal_(&i__3, &scale, &work[ki + n2], &c__1); - } - work[j + *n] = x[0]; - work[j + n2] = x[2]; - work[j + 1 + *n] = x[1]; - work[j + 1 + n2] = x[3]; -/* Computing MAX */ - d__1 = abs(x[0]), d__2 = abs(x[2]), d__1 = max(d__1, - d__2), d__2 = abs(x[1]), d__1 = max(d__1,d__2) - , d__2 = abs(x[3]), d__1 = max(d__1,d__2); - vmax = max(d__1,vmax); - vcrit = bignum / vmax; - - } -L200: - ; - } - -/* - Copy the vector x or Q*x to VL and normalize. - - L210: -*/ - if (! 
over) { - i__2 = *n - ki + 1; - dcopy_(&i__2, &work[ki + *n], &c__1, &vl[ki + is * - vl_dim1], &c__1); - i__2 = *n - ki + 1; - dcopy_(&i__2, &work[ki + n2], &c__1, &vl[ki + (is + 1) * - vl_dim1], &c__1); - - emax = 0.; - i__2 = *n; - for (k = ki; k <= i__2; ++k) { -/* Computing MAX */ - d__3 = emax, d__4 = (d__1 = vl[k + is * vl_dim1], abs( - d__1)) + (d__2 = vl[k + (is + 1) * vl_dim1], - abs(d__2)); - emax = max(d__3,d__4); -/* L220: */ - } - remax = 1. / emax; - i__2 = *n - ki + 1; - dscal_(&i__2, &remax, &vl[ki + is * vl_dim1], &c__1); - i__2 = *n - ki + 1; - dscal_(&i__2, &remax, &vl[ki + (is + 1) * vl_dim1], &c__1) - ; - - i__2 = ki - 1; - for (k = 1; k <= i__2; ++k) { - vl[k + is * vl_dim1] = 0.; - vl[k + (is + 1) * vl_dim1] = 0.; -/* L230: */ - } - } else { - if (ki < *n - 1) { - i__2 = *n - ki - 1; - dgemv_("N", n, &i__2, &c_b15, &vl[(ki + 2) * vl_dim1 - + 1], ldvl, &work[ki + 2 + *n], &c__1, &work[ - ki + *n], &vl[ki * vl_dim1 + 1], &c__1); - i__2 = *n - ki - 1; - dgemv_("N", n, &i__2, &c_b15, &vl[(ki + 2) * vl_dim1 - + 1], ldvl, &work[ki + 2 + n2], &c__1, &work[ - ki + 1 + n2], &vl[(ki + 1) * vl_dim1 + 1], & - c__1); - } else { - dscal_(n, &work[ki + *n], &vl[ki * vl_dim1 + 1], & - c__1); - dscal_(n, &work[ki + 1 + n2], &vl[(ki + 1) * vl_dim1 - + 1], &c__1); - } - - emax = 0.; - i__2 = *n; - for (k = 1; k <= i__2; ++k) { -/* Computing MAX */ - d__3 = emax, d__4 = (d__1 = vl[k + ki * vl_dim1], abs( - d__1)) + (d__2 = vl[k + (ki + 1) * vl_dim1], - abs(d__2)); - emax = max(d__3,d__4); -/* L240: */ - } - remax = 1. 
/ emax; - dscal_(n, &remax, &vl[ki * vl_dim1 + 1], &c__1); - dscal_(n, &remax, &vl[(ki + 1) * vl_dim1 + 1], &c__1); - - } - - } - - ++is; - if (ip != 0) { - ++is; - } -L250: - if (ip == -1) { - ip = 0; - } - if (ip == 1) { - ip = -1; - } - -/* L260: */ - } - - } - - return 0; - -/* End of DTREVC */ - -} /* dtrevc_ */ - -integer ieeeck_(integer *ispec, real *zero, real *one) -{ - /* System generated locals */ - integer ret_val; - - /* Local variables */ - static real nan1, nan2, nan3, nan4, nan5, nan6, neginf, posinf, negzro, - newzro; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1998 - - - Purpose - ======= - - IEEECK is called from the ILAENV to verify that Infinity and - possibly NaN arithmetic is safe (i.e. will not trap). - - Arguments - ========= - - ISPEC (input) INTEGER - Specifies whether to test just for inifinity arithmetic - or whether to test for infinity and NaN arithmetic. - = 0: Verify infinity arithmetic only. - = 1: Verify infinity and NaN arithmetic. - - ZERO (input) REAL - Must contain the value 0.0 - This is passed to prevent the compiler from optimizing - away this code. - - ONE (input) REAL - Must contain the value 1.0 - This is passed to prevent the compiler from optimizing - away this code. 
- - RETURN VALUE: INTEGER - = 0: Arithmetic failed to produce the correct answers - = 1: Arithmetic produced the correct answers -*/ - - ret_val = 1; - - posinf = *one / *zero; - if (posinf <= *one) { - ret_val = 0; - return ret_val; - } - - neginf = -(*one) / *zero; - if (neginf >= *zero) { - ret_val = 0; - return ret_val; - } - - negzro = *one / (neginf + *one); - if (negzro != *zero) { - ret_val = 0; - return ret_val; - } - - neginf = *one / negzro; - if (neginf >= *zero) { - ret_val = 0; - return ret_val; - } - - newzro = negzro + *zero; - if (newzro != *zero) { - ret_val = 0; - return ret_val; - } - - posinf = *one / newzro; - if (posinf <= *one) { - ret_val = 0; - return ret_val; - } - - neginf *= posinf; - if (neginf >= *zero) { - ret_val = 0; - return ret_val; - } - - posinf *= posinf; - if (posinf <= *one) { - ret_val = 0; - return ret_val; - } - - -/* Return if we were only asked to check infinity arithmetic */ - - if (*ispec == 0) { - return ret_val; - } - - nan1 = posinf + neginf; - - nan2 = posinf / neginf; - - nan3 = posinf / posinf; - - nan4 = posinf * *zero; - - nan5 = neginf * negzro; - - nan6 = nan5 * 0.f; - - if (nan1 == nan1) { - ret_val = 0; - return ret_val; - } - - if (nan2 == nan2) { - ret_val = 0; - return ret_val; - } - - if (nan3 == nan3) { - ret_val = 0; - return ret_val; - } - - if (nan4 == nan4) { - ret_val = 0; - return ret_val; - } - - if (nan5 == nan5) { - ret_val = 0; - return ret_val; - } - - if (nan6 == nan6) { - ret_val = 0; - return ret_val; - } - - return ret_val; -} /* ieeeck_ */ - -integer ilaenv_(integer *ispec, char *name__, char *opts, integer *n1, - integer *n2, integer *n3, integer *n4, ftnlen name_len, ftnlen - opts_len) -{ - /* System generated locals */ - integer ret_val; - - /* Builtin functions */ - /* Subroutine */ int s_copy(char *, char *, ftnlen, ftnlen); - integer s_cmp(char *, char *, ftnlen, ftnlen); - - /* Local variables */ - static integer i__; - static char c1[1], c2[2], c3[3], c4[2]; - static integer 
ic, nb, iz, nx; - static logical cname, sname; - static integer nbmin; - extern integer ieeeck_(integer *, real *, real *); - static char subnam[6]; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ILAENV is called from the LAPACK routines to choose problem-dependent - parameters for the local environment. See ISPEC for a description of - the parameters. - - This version provides a set of parameters which should give good, - but not optimal, performance on many of the currently available - computers. Users are encouraged to modify this subroutine to set - the tuning parameters for their particular machine using the option - and problem size information in the arguments. - - This routine will not function correctly if it is converted to all - lower case. Converting it to all upper case is allowed. - - Arguments - ========= - - ISPEC (input) INTEGER - Specifies the parameter to be returned as the value of - ILAENV. - = 1: the optimal blocksize; if this value is 1, an unblocked - algorithm will give the best performance. - = 2: the minimum block size for which the block routine - should be used; if the usable block size is less than - this value, an unblocked routine should be used. - = 3: the crossover point (in a block routine, for N less - than this value, an unblocked routine should be used) - = 4: the number of shifts, used in the nonsymmetric - eigenvalue routines - = 5: the minimum column dimension for blocking to be used; - rectangular blocks must have dimension at least k by m, - where k is given by ILAENV(2,...) and m by ILAENV(5,...) - = 6: the crossover point for the SVD (when reducing an m by n - matrix to bidiagonal form, if max(m,n)/min(m,n) exceeds - this value, a QR factorization is used first to reduce - the matrix to a triangular form.) 
- = 7: the number of processors - = 8: the crossover point for the multishift QR and QZ methods - for nonsymmetric eigenvalue problems. - = 9: maximum size of the subproblems at the bottom of the - computation tree in the divide-and-conquer algorithm - (used by xGELSD and xGESDD) - =10: ieee NaN arithmetic can be trusted not to trap - =11: infinity arithmetic can be trusted not to trap - - NAME (input) CHARACTER*(*) - The name of the calling subroutine, in either upper case or - lower case. - - OPTS (input) CHARACTER*(*) - The character options to the subroutine NAME, concatenated - into a single character string. For example, UPLO = 'U', - TRANS = 'T', and DIAG = 'N' for a triangular routine would - be specified as OPTS = 'UTN'. - - N1 (input) INTEGER - N2 (input) INTEGER - N3 (input) INTEGER - N4 (input) INTEGER - Problem dimensions for the subroutine NAME; these may not all - be required. - - (ILAENV) (output) INTEGER - >= 0: the value of the parameter specified by ISPEC - < 0: if ILAENV = -k, the k-th argument had an illegal value. - - Further Details - =============== - - The following conventions have been used when calling ILAENV from the - LAPACK routines: - 1) OPTS is a concatenation of all of the character options to - subroutine NAME, in the same order that they appear in the - argument list for NAME, even if they are not used in determining - the value of the parameter specified by ISPEC. - 2) The problem dimensions N1, N2, N3, N4 are specified in the order - that they appear in the argument list for NAME. N1 is used - first, N2 second, and so on, and unused problem dimensions are - passed a value of -1. - 3) The parameter value returned by ILAENV is checked for validity in - the calling subroutine. 
For example, ILAENV is used to retrieve - the optimal blocksize for STRTRI as follows: - - NB = ILAENV( 1, 'STRTRI', UPLO // DIAG, N, -1, -1, -1 ) - IF( NB.LE.1 ) NB = MAX( 1, N ) - - ===================================================================== -*/ - - - switch (*ispec) { - case 1: goto L100; - case 2: goto L100; - case 3: goto L100; - case 4: goto L400; - case 5: goto L500; - case 6: goto L600; - case 7: goto L700; - case 8: goto L800; - case 9: goto L900; - case 10: goto L1000; - case 11: goto L1100; - } - -/* Invalid value for ISPEC */ - - ret_val = -1; - return ret_val; - -L100: - -/* Convert NAME to upper case if the first character is lower case. */ - - ret_val = 1; - s_copy(subnam, name__, (ftnlen)6, name_len); - ic = *(unsigned char *)subnam; - iz = 'Z'; - if (iz == 90 || iz == 122) { - -/* ASCII character set */ - - if ((ic >= 97 && ic <= 122)) { - *(unsigned char *)subnam = (char) (ic - 32); - for (i__ = 2; i__ <= 6; ++i__) { - ic = *(unsigned char *)&subnam[i__ - 1]; - if ((ic >= 97 && ic <= 122)) { - *(unsigned char *)&subnam[i__ - 1] = (char) (ic - 32); - } -/* L10: */ - } - } - - } else if (iz == 233 || iz == 169) { - -/* EBCDIC character set */ - - if ((ic >= 129 && ic <= 137) || (ic >= 145 && ic <= 153) || (ic >= - 162 && ic <= 169)) { - *(unsigned char *)subnam = (char) (ic + 64); - for (i__ = 2; i__ <= 6; ++i__) { - ic = *(unsigned char *)&subnam[i__ - 1]; - if ((ic >= 129 && ic <= 137) || (ic >= 145 && ic <= 153) || ( - ic >= 162 && ic <= 169)) { - *(unsigned char *)&subnam[i__ - 1] = (char) (ic + 64); - } -/* L20: */ - } - } - - } else if (iz == 218 || iz == 250) { - -/* Prime machines: ASCII+128 */ - - if ((ic >= 225 && ic <= 250)) { - *(unsigned char *)subnam = (char) (ic - 32); - for (i__ = 2; i__ <= 6; ++i__) { - ic = *(unsigned char *)&subnam[i__ - 1]; - if ((ic >= 225 && ic <= 250)) { - *(unsigned char *)&subnam[i__ - 1] = (char) (ic - 32); - } -/* L30: */ - } - } - } - - *(unsigned char *)c1 = *(unsigned char *)subnam; - sname = 
*(unsigned char *)c1 == 'S' || *(unsigned char *)c1 == 'D'; - cname = *(unsigned char *)c1 == 'C' || *(unsigned char *)c1 == 'Z'; - if (! (cname || sname)) { - return ret_val; - } - s_copy(c2, subnam + 1, (ftnlen)2, (ftnlen)2); - s_copy(c3, subnam + 3, (ftnlen)3, (ftnlen)3); - s_copy(c4, c3 + 1, (ftnlen)2, (ftnlen)2); - - switch (*ispec) { - case 1: goto L110; - case 2: goto L200; - case 3: goto L300; - } - -L110: - -/* - ISPEC = 1: block size - - In these examples, separate code is provided for setting NB for - real and complex. We assume that NB will take the same value in - single or double precision. -*/ - - nb = 1; - - if (s_cmp(c2, "GE", (ftnlen)2, (ftnlen)2) == 0) { - if (s_cmp(c3, "TRF", (ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - nb = 64; - } else { - nb = 64; - } - } else if (s_cmp(c3, "QRF", (ftnlen)3, (ftnlen)3) == 0 || s_cmp(c3, - "RQF", (ftnlen)3, (ftnlen)3) == 0 || s_cmp(c3, "LQF", (ftnlen) - 3, (ftnlen)3) == 0 || s_cmp(c3, "QLF", (ftnlen)3, (ftnlen)3) - == 0) { - if (sname) { - nb = 32; - } else { - nb = 32; - } - } else if (s_cmp(c3, "HRD", (ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - nb = 32; - } else { - nb = 32; - } - } else if (s_cmp(c3, "BRD", (ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - nb = 32; - } else { - nb = 32; - } - } else if (s_cmp(c3, "TRI", (ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - nb = 64; - } else { - nb = 64; - } - } - } else if (s_cmp(c2, "PO", (ftnlen)2, (ftnlen)2) == 0) { - if (s_cmp(c3, "TRF", (ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - nb = 64; - } else { - nb = 64; - } - } - } else if (s_cmp(c2, "SY", (ftnlen)2, (ftnlen)2) == 0) { - if (s_cmp(c3, "TRF", (ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - nb = 64; - } else { - nb = 64; - } - } else if ((sname && s_cmp(c3, "TRD", (ftnlen)3, (ftnlen)3) == 0)) { - nb = 32; - } else if ((sname && s_cmp(c3, "GST", (ftnlen)3, (ftnlen)3) == 0)) { - nb = 64; - } - } else if ((cname && s_cmp(c2, "HE", (ftnlen)2, (ftnlen)2) == 0)) { - if (s_cmp(c3, "TRF", (ftnlen)3, 
(ftnlen)3) == 0) { - nb = 64; - } else if (s_cmp(c3, "TRD", (ftnlen)3, (ftnlen)3) == 0) { - nb = 32; - } else if (s_cmp(c3, "GST", (ftnlen)3, (ftnlen)3) == 0) { - nb = 64; - } - } else if ((sname && s_cmp(c2, "OR", (ftnlen)2, (ftnlen)2) == 0)) { - if (*(unsigned char *)c3 == 'G') { - if (s_cmp(c4, "QR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "RQ", - (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "LQ", (ftnlen)2, ( - ftnlen)2) == 0 || s_cmp(c4, "QL", (ftnlen)2, (ftnlen)2) == - 0 || s_cmp(c4, "HR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp( - c4, "TR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "BR", ( - ftnlen)2, (ftnlen)2) == 0) { - nb = 32; - } - } else if (*(unsigned char *)c3 == 'M') { - if (s_cmp(c4, "QR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "RQ", - (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "LQ", (ftnlen)2, ( - ftnlen)2) == 0 || s_cmp(c4, "QL", (ftnlen)2, (ftnlen)2) == - 0 || s_cmp(c4, "HR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp( - c4, "TR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "BR", ( - ftnlen)2, (ftnlen)2) == 0) { - nb = 32; - } - } - } else if ((cname && s_cmp(c2, "UN", (ftnlen)2, (ftnlen)2) == 0)) { - if (*(unsigned char *)c3 == 'G') { - if (s_cmp(c4, "QR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "RQ", - (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "LQ", (ftnlen)2, ( - ftnlen)2) == 0 || s_cmp(c4, "QL", (ftnlen)2, (ftnlen)2) == - 0 || s_cmp(c4, "HR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp( - c4, "TR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "BR", ( - ftnlen)2, (ftnlen)2) == 0) { - nb = 32; - } - } else if (*(unsigned char *)c3 == 'M') { - if (s_cmp(c4, "QR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "RQ", - (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "LQ", (ftnlen)2, ( - ftnlen)2) == 0 || s_cmp(c4, "QL", (ftnlen)2, (ftnlen)2) == - 0 || s_cmp(c4, "HR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp( - c4, "TR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "BR", ( - ftnlen)2, (ftnlen)2) == 0) { - nb = 32; - } - } - } else if (s_cmp(c2, "GB", (ftnlen)2, (ftnlen)2) == 0) { - if (s_cmp(c3, "TRF", 
(ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - if (*n4 <= 64) { - nb = 1; - } else { - nb = 32; - } - } else { - if (*n4 <= 64) { - nb = 1; - } else { - nb = 32; - } - } - } - } else if (s_cmp(c2, "PB", (ftnlen)2, (ftnlen)2) == 0) { - if (s_cmp(c3, "TRF", (ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - if (*n2 <= 64) { - nb = 1; - } else { - nb = 32; - } - } else { - if (*n2 <= 64) { - nb = 1; - } else { - nb = 32; - } - } - } - } else if (s_cmp(c2, "TR", (ftnlen)2, (ftnlen)2) == 0) { - if (s_cmp(c3, "TRI", (ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - nb = 64; - } else { - nb = 64; - } - } - } else if (s_cmp(c2, "LA", (ftnlen)2, (ftnlen)2) == 0) { - if (s_cmp(c3, "UUM", (ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - nb = 64; - } else { - nb = 64; - } - } - } else if ((sname && s_cmp(c2, "ST", (ftnlen)2, (ftnlen)2) == 0)) { - if (s_cmp(c3, "EBZ", (ftnlen)3, (ftnlen)3) == 0) { - nb = 1; - } - } - ret_val = nb; - return ret_val; - -L200: - -/* ISPEC = 2: minimum block size */ - - nbmin = 2; - if (s_cmp(c2, "GE", (ftnlen)2, (ftnlen)2) == 0) { - if (s_cmp(c3, "QRF", (ftnlen)3, (ftnlen)3) == 0 || s_cmp(c3, "RQF", ( - ftnlen)3, (ftnlen)3) == 0 || s_cmp(c3, "LQF", (ftnlen)3, ( - ftnlen)3) == 0 || s_cmp(c3, "QLF", (ftnlen)3, (ftnlen)3) == 0) - { - if (sname) { - nbmin = 2; - } else { - nbmin = 2; - } - } else if (s_cmp(c3, "HRD", (ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - nbmin = 2; - } else { - nbmin = 2; - } - } else if (s_cmp(c3, "BRD", (ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - nbmin = 2; - } else { - nbmin = 2; - } - } else if (s_cmp(c3, "TRI", (ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - nbmin = 2; - } else { - nbmin = 2; - } - } - } else if (s_cmp(c2, "SY", (ftnlen)2, (ftnlen)2) == 0) { - if (s_cmp(c3, "TRF", (ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - nbmin = 8; - } else { - nbmin = 8; - } - } else if ((sname && s_cmp(c3, "TRD", (ftnlen)3, (ftnlen)3) == 0)) { - nbmin = 2; - } - } else if ((cname && s_cmp(c2, "HE", (ftnlen)2, (ftnlen)2) == 0)) { - if 
(s_cmp(c3, "TRD", (ftnlen)3, (ftnlen)3) == 0) { - nbmin = 2; - } - } else if ((sname && s_cmp(c2, "OR", (ftnlen)2, (ftnlen)2) == 0)) { - if (*(unsigned char *)c3 == 'G') { - if (s_cmp(c4, "QR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "RQ", - (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "LQ", (ftnlen)2, ( - ftnlen)2) == 0 || s_cmp(c4, "QL", (ftnlen)2, (ftnlen)2) == - 0 || s_cmp(c4, "HR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp( - c4, "TR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "BR", ( - ftnlen)2, (ftnlen)2) == 0) { - nbmin = 2; - } - } else if (*(unsigned char *)c3 == 'M') { - if (s_cmp(c4, "QR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "RQ", - (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "LQ", (ftnlen)2, ( - ftnlen)2) == 0 || s_cmp(c4, "QL", (ftnlen)2, (ftnlen)2) == - 0 || s_cmp(c4, "HR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp( - c4, "TR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "BR", ( - ftnlen)2, (ftnlen)2) == 0) { - nbmin = 2; - } - } - } else if ((cname && s_cmp(c2, "UN", (ftnlen)2, (ftnlen)2) == 0)) { - if (*(unsigned char *)c3 == 'G') { - if (s_cmp(c4, "QR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "RQ", - (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "LQ", (ftnlen)2, ( - ftnlen)2) == 0 || s_cmp(c4, "QL", (ftnlen)2, (ftnlen)2) == - 0 || s_cmp(c4, "HR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp( - c4, "TR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "BR", ( - ftnlen)2, (ftnlen)2) == 0) { - nbmin = 2; - } - } else if (*(unsigned char *)c3 == 'M') { - if (s_cmp(c4, "QR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "RQ", - (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "LQ", (ftnlen)2, ( - ftnlen)2) == 0 || s_cmp(c4, "QL", (ftnlen)2, (ftnlen)2) == - 0 || s_cmp(c4, "HR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp( - c4, "TR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "BR", ( - ftnlen)2, (ftnlen)2) == 0) { - nbmin = 2; - } - } - } - ret_val = nbmin; - return ret_val; - -L300: - -/* ISPEC = 3: crossover point */ - - nx = 0; - if (s_cmp(c2, "GE", (ftnlen)2, (ftnlen)2) == 0) { - if (s_cmp(c3, "QRF", (ftnlen)3, 
(ftnlen)3) == 0 || s_cmp(c3, "RQF", ( - ftnlen)3, (ftnlen)3) == 0 || s_cmp(c3, "LQF", (ftnlen)3, ( - ftnlen)3) == 0 || s_cmp(c3, "QLF", (ftnlen)3, (ftnlen)3) == 0) - { - if (sname) { - nx = 128; - } else { - nx = 128; - } - } else if (s_cmp(c3, "HRD", (ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - nx = 128; - } else { - nx = 128; - } - } else if (s_cmp(c3, "BRD", (ftnlen)3, (ftnlen)3) == 0) { - if (sname) { - nx = 128; - } else { - nx = 128; - } - } - } else if (s_cmp(c2, "SY", (ftnlen)2, (ftnlen)2) == 0) { - if ((sname && s_cmp(c3, "TRD", (ftnlen)3, (ftnlen)3) == 0)) { - nx = 32; - } - } else if ((cname && s_cmp(c2, "HE", (ftnlen)2, (ftnlen)2) == 0)) { - if (s_cmp(c3, "TRD", (ftnlen)3, (ftnlen)3) == 0) { - nx = 32; - } - } else if ((sname && s_cmp(c2, "OR", (ftnlen)2, (ftnlen)2) == 0)) { - if (*(unsigned char *)c3 == 'G') { - if (s_cmp(c4, "QR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "RQ", - (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "LQ", (ftnlen)2, ( - ftnlen)2) == 0 || s_cmp(c4, "QL", (ftnlen)2, (ftnlen)2) == - 0 || s_cmp(c4, "HR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp( - c4, "TR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "BR", ( - ftnlen)2, (ftnlen)2) == 0) { - nx = 128; - } - } - } else if ((cname && s_cmp(c2, "UN", (ftnlen)2, (ftnlen)2) == 0)) { - if (*(unsigned char *)c3 == 'G') { - if (s_cmp(c4, "QR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "RQ", - (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "LQ", (ftnlen)2, ( - ftnlen)2) == 0 || s_cmp(c4, "QL", (ftnlen)2, (ftnlen)2) == - 0 || s_cmp(c4, "HR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp( - c4, "TR", (ftnlen)2, (ftnlen)2) == 0 || s_cmp(c4, "BR", ( - ftnlen)2, (ftnlen)2) == 0) { - nx = 128; - } - } - } - ret_val = nx; - return ret_val; - -L400: - -/* ISPEC = 4: number of shifts (used by xHSEQR) */ - - ret_val = 6; - return ret_val; - -L500: - -/* ISPEC = 5: minimum column dimension (not used) */ - - ret_val = 2; - return ret_val; - -L600: - -/* ISPEC = 6: crossover point for SVD (used by xGELSS and xGESVD) */ - - ret_val = 
(integer) ((real) min(*n1,*n2) * 1.6f); - return ret_val; - -L700: - -/* ISPEC = 7: number of processors (not used) */ - - ret_val = 1; - return ret_val; - -L800: - -/* ISPEC = 8: crossover point for multishift (used by xHSEQR) */ - - ret_val = 50; - return ret_val; - -L900: - -/* - ISPEC = 9: maximum size of the subproblems at the bottom of the - computation tree in the divide-and-conquer algorithm - (used by xGELSD and xGESDD) -*/ - - ret_val = 25; - return ret_val; - -L1000: - -/* - ISPEC = 10: ieee NaN arithmetic can be trusted not to trap - - ILAENV = 0 -*/ - ret_val = 1; - if (ret_val == 1) { - ret_val = ieeeck_(&c__0, &c_b3825, &c_b3826); - } - return ret_val; - -L1100: - -/* - ISPEC = 11: infinity arithmetic can be trusted not to trap - - ILAENV = 0 -*/ - ret_val = 1; - if (ret_val == 1) { - ret_val = ieeeck_(&c__1, &c_b3825, &c_b3826); - } - return ret_val; - -/* End of ILAENV */ - -} /* ilaenv_ */ - diff --git a/pythonPackages/numpy/numpy/linalg/f2c.h b/pythonPackages/numpy/numpy/linalg/f2c.h deleted file mode 100755 index e27d7ae577..0000000000 --- a/pythonPackages/numpy/numpy/linalg/f2c.h +++ /dev/null @@ -1,217 +0,0 @@ -/* f2c.h -- Standard Fortran to C header file */ - -/** barf [ba:rf] 2. "He suggested using FORTRAN, and everybody barfed." 
- - - From The Shogakukan DICTIONARY OF NEW ENGLISH (Second edition) */ - -#ifndef F2C_INCLUDE -#define F2C_INCLUDE - -typedef int integer; -typedef char *address; -typedef short int shortint; -typedef float real; -typedef double doublereal; -typedef struct { real r, i; } complex; -typedef struct { doublereal r, i; } doublecomplex; -typedef int logical; -typedef short int shortlogical; -typedef char logical1; -typedef char integer1; - -#define TRUE_ (1) -#define FALSE_ (0) - -/* Extern is for use with -E */ -#ifndef Extern -#define Extern extern -#endif - -/* I/O stuff */ - -#ifdef f2c_i2 -/* for -i2 */ -typedef short flag; -typedef short ftnlen; -typedef short ftnint; -#else -typedef int flag; -typedef int ftnlen; -typedef int ftnint; -#endif - -/*external read, write*/ -typedef struct -{ flag cierr; - ftnint ciunit; - flag ciend; - char *cifmt; - ftnint cirec; -} cilist; - -/*internal read, write*/ -typedef struct -{ flag icierr; - char *iciunit; - flag iciend; - char *icifmt; - ftnint icirlen; - ftnint icirnum; -} icilist; - -/*open*/ -typedef struct -{ flag oerr; - ftnint ounit; - char *ofnm; - ftnlen ofnmlen; - char *osta; - char *oacc; - char *ofm; - ftnint orl; - char *oblnk; -} olist; - -/*close*/ -typedef struct -{ flag cerr; - ftnint cunit; - char *csta; -} cllist; - -/*rewind, backspace, endfile*/ -typedef struct -{ flag aerr; - ftnint aunit; -} alist; - -/* inquire */ -typedef struct -{ flag inerr; - ftnint inunit; - char *infile; - ftnlen infilen; - ftnint *inex; /*parameters in standard's order*/ - ftnint *inopen; - ftnint *innum; - ftnint *innamed; - char *inname; - ftnlen innamlen; - char *inacc; - ftnlen inacclen; - char *inseq; - ftnlen inseqlen; - char *indir; - ftnlen indirlen; - char *infmt; - ftnlen infmtlen; - char *inform; - ftnint informlen; - char *inunf; - ftnlen inunflen; - ftnint *inrecl; - ftnint *innrec; - char *inblank; - ftnlen inblanklen; -} inlist; - -#define VOID void - -union Multitype { /* for multiple entry points */ - 
shortint h; - integer i; - real r; - doublereal d; - complex c; - doublecomplex z; - }; - -typedef union Multitype Multitype; - -typedef long Long; /* No longer used; formerly in Namelist */ - -struct Vardesc { /* for Namelist */ - char *name; - char *addr; - ftnlen *dims; - int type; - }; -typedef struct Vardesc Vardesc; - -struct Namelist { - char *name; - Vardesc **vars; - int nvars; - }; -typedef struct Namelist Namelist; - -#ifndef abs -#define abs(x) ((x) >= 0 ? (x) : -(x)) -#endif -#define dabs(x) (doublereal)abs(x) -#ifndef min -#define min(a,b) ((a) <= (b) ? (a) : (b)) -#endif -#ifndef max -#define max(a,b) ((a) >= (b) ? (a) : (b)) -#endif -#define dmin(a,b) (doublereal)min(a,b) -#define dmax(a,b) (doublereal)max(a,b) - -/* procedure parameter types for -A and -C++ */ - -#define F2C_proc_par_types 1 -#ifdef __cplusplus -typedef int /* Unknown procedure type */ (*U_fp)(...); -typedef shortint (*J_fp)(...); -typedef integer (*I_fp)(...); -typedef real (*R_fp)(...); -typedef doublereal (*D_fp)(...), (*E_fp)(...); -typedef /* Complex */ VOID (*C_fp)(...); -typedef /* Double Complex */ VOID (*Z_fp)(...); -typedef logical (*L_fp)(...); -typedef shortlogical (*K_fp)(...); -typedef /* Character */ VOID (*H_fp)(...); -typedef /* Subroutine */ int (*S_fp)(...); -#else -typedef int /* Unknown procedure type */ (*U_fp)(void); -typedef shortint (*J_fp)(void); -typedef integer (*I_fp)(void); -typedef real (*R_fp)(void); -typedef doublereal (*D_fp)(void), (*E_fp)(void); -typedef /* Complex */ VOID (*C_fp)(void); -typedef /* Double Complex */ VOID (*Z_fp)(void); -typedef logical (*L_fp)(void); -typedef shortlogical (*K_fp)(void); -typedef /* Character */ VOID (*H_fp)(void); -typedef /* Subroutine */ int (*S_fp)(void); -#endif -/* E_fp is for real functions when -R is not specified */ -typedef VOID C_f; /* complex function */ -typedef VOID H_f; /* character function */ -typedef VOID Z_f; /* double complex function */ -typedef doublereal E_f; /* real function with -R not 
specified */ - -/* undef any lower-case symbols that your C compiler predefines, e.g.: */ - -#ifndef Skip_f2c_Undefs -#undef cray -#undef gcos -#undef mc68010 -#undef mc68020 -#undef mips -#undef pdp11 -#undef sgi -#undef sparc -#undef sun -#undef sun2 -#undef sun3 -#undef sun4 -#undef u370 -#undef u3b -#undef u3b2 -#undef u3b5 -#undef unix -#undef vax -#endif -#endif diff --git a/pythonPackages/numpy/numpy/linalg/f2c_lite.c b/pythonPackages/numpy/numpy/linalg/f2c_lite.c deleted file mode 100755 index 6402271c94..0000000000 --- a/pythonPackages/numpy/numpy/linalg/f2c_lite.c +++ /dev/null @@ -1,492 +0,0 @@ -#include <stdio.h> -#include <string.h> -#include <math.h> -#include <stdlib.h> -#include "f2c.h" - - -extern void s_wsfe(cilist *f) {;} -extern void e_wsfe(void) {;} -extern void do_fio(integer *c, char *s, ftnlen l) {;} - -/* You'll want this if you redo the *_lite.c files with the -C option - * to f2c for checking array subscripts. (It's not suggested you do that - * for production use, of course.) */ -extern int -s_rnge(char *var, int index, char *routine, int lineno) -{ - fprintf(stderr, "array index out-of-bounds for %s[%d] in routine %s:%d\n", - var, index, routine, lineno); - fflush(stderr); - abort(); -} - - -#ifdef KR_headers -extern double sqrt(); -double f__cabs(real, imag) double real, imag; -#else -#undef abs - -double f__cabs(double real, double imag) -#endif -{ -double temp; - -if(real < 0) - real = -real; -if(imag < 0) - imag = -imag; -if(imag > real){ - temp = real; - real = imag; - imag = temp; -} -if((imag+real) == real) - return((double)real); - -temp = imag/real; -temp = real*sqrt(1.0 + temp*temp); /*overflow!!*/ -return(temp); -} - - - VOID -#ifdef KR_headers -d_cnjg(r, z) doublecomplex *r, *z; -#else -d_cnjg(doublecomplex *r, doublecomplex *z) -#endif -{ -r->r = z->r; -r->i = - z->i; -} - - -#ifdef KR_headers -double d_imag(z) doublecomplex *z; -#else -double d_imag(doublecomplex *z) -#endif -{ -return(z->i); -} - - -#define log10e 0.43429448190325182765 - -#ifdef KR_headers
-double log(); -double d_lg10(x) doublereal *x; -#else -#undef abs - -double d_lg10(doublereal *x) -#endif -{ -return( log10e * log(*x) ); -} - - -#ifdef KR_headers -double d_sign(a,b) doublereal *a, *b; -#else -double d_sign(doublereal *a, doublereal *b) -#endif -{ -double x; -x = (*a >= 0 ? *a : - *a); -return( *b >= 0 ? x : -x); -} - - -#ifdef KR_headers -double floor(); -integer i_dnnt(x) doublereal *x; -#else -#undef abs - -integer i_dnnt(doublereal *x) -#endif -{ -return( (*x)>=0 ? - floor(*x + .5) : -floor(.5 - *x) ); -} - - -#ifdef KR_headers -double pow(); -double pow_dd(ap, bp) doublereal *ap, *bp; -#else -#undef abs - -double pow_dd(doublereal *ap, doublereal *bp) -#endif -{ -return(pow(*ap, *bp) ); -} - - -#ifdef KR_headers -double pow_di(ap, bp) doublereal *ap; integer *bp; -#else -double pow_di(doublereal *ap, integer *bp) -#endif -{ -double pow, x; -integer n; -unsigned long u; - -pow = 1; -x = *ap; -n = *bp; - -if(n != 0) - { - if(n < 0) - { - n = -n; - x = 1/x; - } - for(u = n; ; ) - { - if(u & 01) - pow *= x; - if(u >>= 1) - x *= x; - else - break; - } - } -return(pow); -} -/* Unless compiled with -DNO_OVERWRITE, this variant of s_cat allows the - * target of a concatenation to appear on its right-hand side (contrary - * to the Fortran 77 Standard, but in accordance with Fortran 90). 
- */ -#define NO_OVERWRITE - - -#ifndef NO_OVERWRITE - -#undef abs -#ifdef KR_headers - extern char *F77_aloc(); - extern void free(); - extern void exit_(); -#else - - extern char *F77_aloc(ftnlen, char*); -#endif - -#endif /* NO_OVERWRITE */ - - VOID -#ifdef KR_headers -s_cat(lp, rpp, rnp, np, ll) char *lp, *rpp[]; ftnlen rnp[], *np, ll; -#else -s_cat(char *lp, char *rpp[], ftnlen rnp[], ftnlen *np, ftnlen ll) -#endif -{ - ftnlen i, nc; - char *rp; - ftnlen n = *np; -#ifndef NO_OVERWRITE - ftnlen L, m; - char *lp0, *lp1; - - lp0 = 0; - lp1 = lp; - L = ll; - i = 0; - while(i < n) { - rp = rpp[i]; - m = rnp[i++]; - if (rp >= lp1 || rp + m <= lp) { - if ((L -= m) <= 0) { - n = i; - break; - } - lp1 += m; - continue; - } - lp0 = lp; - lp = lp1 = F77_aloc(L = ll, "s_cat"); - break; - } - lp1 = lp; -#endif /* NO_OVERWRITE */ - for(i = 0 ; i < n ; ++i) { - nc = ll; - if(rnp[i] < nc) - nc = rnp[i]; - ll -= nc; - rp = rpp[i]; - while(--nc >= 0) - *lp++ = *rp++; - } - while(--ll >= 0) - *lp++ = ' '; -#ifndef NO_OVERWRITE - if (lp0) { - memmove(lp0, lp1, L); - free(lp1); - } -#endif - } - - -/* compare two strings */ - -#ifdef KR_headers -integer s_cmp(a0, b0, la, lb) char *a0, *b0; ftnlen la, lb; -#else -integer s_cmp(char *a0, char *b0, ftnlen la, ftnlen lb) -#endif -{ -register unsigned char *a, *aend, *b, *bend; -a = (unsigned char *)a0; -b = (unsigned char *)b0; -aend = a + la; -bend = b + lb; - -if(la <= lb) - { - while(a < aend) - if(*a != *b) - return( *a - *b ); - else - { ++a; ++b; } - - while(b < bend) - if(*b != ' ') - return( ' ' - *b ); - else ++b; - } - -else - { - while(b < bend) - if(*a == *b) - { ++a; ++b; } - else - return( *a - *b ); - while(a < aend) - if(*a != ' ') - return(*a - ' '); - else ++a; - } -return(0); -} -/* Unless compiled with -DNO_OVERWRITE, this variant of s_copy allows the - * target of an assignment to appear on its right-hand side (contrary - * to the Fortran 77 Standard, but in accordance with Fortran 90), - * as in a(2:5) = a(4:7) . 
- */ - - - -/* assign strings: a = b */ - -#ifdef KR_headers -VOID s_copy(a, b, la, lb) register char *a, *b; ftnlen la, lb; -#else -void s_copy(register char *a, register char *b, ftnlen la, ftnlen lb) -#endif -{ - register char *aend, *bend; - - aend = a + la; - - if(la <= lb) -#ifndef NO_OVERWRITE - if (a <= b || a >= b + la) -#endif - while(a < aend) - *a++ = *b++; -#ifndef NO_OVERWRITE - else - for(b += la; a < aend; ) - *--aend = *--b; -#endif - - else { - bend = b + lb; -#ifndef NO_OVERWRITE - if (a <= b || a >= bend) -#endif - while(b < bend) - *a++ = *b++; -#ifndef NO_OVERWRITE - else { - a += lb; - while(b < bend) - *--a = *--bend; - a += lb; - } -#endif - while(a < aend) - *a++ = ' '; - } - } - - -#ifdef KR_headers -double f__cabs(); -double z_abs(z) doublecomplex *z; -#else -double f__cabs(double, double); -double z_abs(doublecomplex *z) -#endif -{ -return( f__cabs( z->r, z->i ) ); -} - - -#ifdef KR_headers -extern void sig_die(); -VOID z_div(c, a, b) doublecomplex *a, *b, *c; -#else -extern void sig_die(char*, int); -void z_div(doublecomplex *c, doublecomplex *a, doublecomplex *b) -#endif -{ -double ratio, den; -double abr, abi; - -if( (abr = b->r) < 0.) - abr = - abr; -if( (abi = b->i) < 0.) - abi = - abi; -if( abr <= abi ) - { - /*Let IEEE Infinities handle this ;( */ - /*if(abi == 0) - sig_die("complex division by zero", 1);*/ - ratio = b->r / b->i ; - den = b->i * (1 + ratio*ratio); - c->r = (a->r*ratio + a->i) / den; - c->i = (a->i*ratio - a->r) / den; - } - -else - { - ratio = b->i / b->r ; - den = b->r * (1 + ratio*ratio); - c->r = (a->r + a->i*ratio) / den; - c->i = (a->i - a->r*ratio) / den; - } - -} - - -#ifdef KR_headers -double sqrt(), f__cabs(); -VOID z_sqrt(r, z) doublecomplex *r, *z; -#else -#undef abs - -extern double f__cabs(double, double); -void z_sqrt(doublecomplex *r, doublecomplex *z) -#endif -{ -double mag; - -if( (mag = f__cabs(z->r, z->i)) == 0.)
- r->r = r->i = 0.; -else if(z->r > 0) - { - r->r = sqrt(0.5 * (mag + z->r) ); - r->i = z->i / r->r / 2; - } -else - { - r->i = sqrt(0.5 * (mag - z->r) ); - if(z->i < 0) - r->i = - r->i; - r->r = z->i / r->i / 2; - } -} -#ifdef __cplusplus -extern "C" { -#endif - -#ifdef KR_headers -integer pow_ii(ap, bp) integer *ap, *bp; -#else -integer pow_ii(integer *ap, integer *bp) -#endif -{ - integer pow, x, n; - unsigned long u; - - x = *ap; - n = *bp; - - if (n <= 0) { - if (n == 0 || x == 1) - return 1; - if (x != -1) - return x == 0 ? 1/x : 0; - n = -n; - } - u = n; - for(pow = 1; ; ) - { - if(u & 01) - pow *= x; - if(u >>= 1) - x *= x; - else - break; - } - return(pow); - } -#ifdef __cplusplus -} -#endif - -#ifdef KR_headers -extern void f_exit(); -VOID s_stop(s, n) char *s; ftnlen n; -#else -#undef abs -#undef min -#undef max -#ifdef __cplusplus -extern "C" { -#endif -#ifdef __cplusplus -extern "C" { -#endif -void f_exit(void); - -int s_stop(char *s, ftnlen n) -#endif -{ -int i; - -if(n > 0) - { - fprintf(stderr, "STOP "); - for(i = 0; iflags & CONTIGUOUS)) { - PyErr_Format(LapackError, - "Parameter %s is not contiguous in lapack_lite.%s", - obname, funname); - return 0; - } else if (!(((PyArrayObject *)ob)->descr->type_num == t)) { - PyErr_Format(LapackError, - "Parameter %s is not of type %s in lapack_lite.%s", - obname, tname, funname); - return 0; - } else if (((PyArrayObject *)ob)->descr->byteorder != '=' && - ((PyArrayObject *)ob)->descr->byteorder != '|') { - PyErr_Format(LapackError, - "Parameter %s has non-native byte order in lapack_lite.%s", - obname, funname); - return 0; - } else { - return 1; - } -} - -#define CHDATA(p) ((char *) (((PyArrayObject *)p)->data)) -#define SHDATA(p) ((short int *) (((PyArrayObject *)p)->data)) -#define DDATA(p) ((double *) (((PyArrayObject *)p)->data)) -#define FDATA(p) ((float *) (((PyArrayObject *)p)->data)) -#define CDATA(p) ((f2c_complex *) (((PyArrayObject *)p)->data)) -#define ZDATA(p) ((f2c_doublecomplex *) 
(((PyArrayObject *)p)->data)) -#define IDATA(p) ((int *) (((PyArrayObject *)p)->data)) - -static PyObject * -lapack_lite_dgeev(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - char jobvl; - char jobvr; - int n; - PyObject *a; - int lda; - PyObject *wr; - PyObject *wi; - PyObject *vl; - int ldvl; - PyObject *vr; - int ldvr; - PyObject *work; - int lwork; - int info; - TRY(PyArg_ParseTuple(args,"cciOiOOOiOiOii", - &jobvl,&jobvr,&n,&a,&lda,&wr,&wi,&vl,&ldvl, - &vr,&ldvr,&work,&lwork,&info)); - - TRY(check_object(a,PyArray_DOUBLE,"a","PyArray_DOUBLE","dgeev")); - TRY(check_object(wr,PyArray_DOUBLE,"wr","PyArray_DOUBLE","dgeev")); - TRY(check_object(wi,PyArray_DOUBLE,"wi","PyArray_DOUBLE","dgeev")); - TRY(check_object(vl,PyArray_DOUBLE,"vl","PyArray_DOUBLE","dgeev")); - TRY(check_object(vr,PyArray_DOUBLE,"vr","PyArray_DOUBLE","dgeev")); - TRY(check_object(work,PyArray_DOUBLE,"work","PyArray_DOUBLE","dgeev")); - - lapack_lite_status__ = \ - FNAME(dgeev)(&jobvl,&jobvr,&n,DDATA(a),&lda,DDATA(wr),DDATA(wi), - DDATA(vl),&ldvl,DDATA(vr),&ldvr,DDATA(work),&lwork, - &info); - - return Py_BuildValue("{s:i,s:c,s:c,s:i,s:i,s:i,s:i,s:i,s:i}","dgeev_", - lapack_lite_status__,"jobvl",jobvl,"jobvr",jobvr, - "n",n,"lda",lda,"ldvl",ldvl,"ldvr",ldvr, - "lwork",lwork,"info",info); -} - -static PyObject * -lapack_lite_dsyevd(PyObject *NPY_UNUSED(self), PyObject *args) -{ - /* Arguments */ - /* ========= */ - - char jobz; - /* JOBZ (input) CHARACTER*1 */ - /* = 'N': Compute eigenvalues only; */ - /* = 'V': Compute eigenvalues and eigenvectors. */ - - char uplo; - /* UPLO (input) CHARACTER*1 */ - /* = 'U': Upper triangle of A is stored; */ - /* = 'L': Lower triangle of A is stored. */ - - int n; - /* N (input) INTEGER */ - /* The order of the matrix A. N >= 0. */ - - PyObject *a; - /* A (input/output) DOUBLE PRECISION array, dimension (LDA, N) */ - /* On entry, the symmetric matrix A. 
If UPLO = 'U', the */ - /* leading N-by-N upper triangular part of A contains the */ - /* upper triangular part of the matrix A. If UPLO = 'L', */ - /* the leading N-by-N lower triangular part of A contains */ - /* the lower triangular part of the matrix A. */ - /* On exit, if JOBZ = 'V', then if INFO = 0, A contains the */ - /* orthonormal eigenvectors of the matrix A. */ - /* If JOBZ = 'N', then on exit the lower triangle (if UPLO='L') */ - /* or the upper triangle (if UPLO='U') of A, including the */ - /* diagonal, is destroyed. */ - - int lda; - /* LDA (input) INTEGER */ - /* The leading dimension of the array A. LDA >= max(1,N). */ - - PyObject *w; - /* W (output) DOUBLE PRECISION array, dimension (N) */ - /* If INFO = 0, the eigenvalues in ascending order. */ - - PyObject *work; - /* WORK (workspace/output) DOUBLE PRECISION array, dimension (LWORK) */ - /* On exit, if INFO = 0, WORK(1) returns the optimal LWORK. */ - - int lwork; - /* LWORK (input) INTEGER */ - /* The length of the array WORK. LWORK >= max(1,3*N-1). */ - /* For optimal efficiency, LWORK >= (NB+2)*N, */ - /* where NB is the blocksize for DSYTRD returned by ILAENV. */ - - PyObject *iwork; - int liwork; - - int info; - /* INFO (output) INTEGER */ - /* = 0: successful exit */ - /* < 0: if INFO = -i, the i-th argument had an illegal value */ - /* > 0: if INFO = i, the algorithm failed to converge; i */ - /* off-diagonal elements of an intermediate tridiagonal */ - /* form did not converge to zero. 
*/ - - int lapack_lite_status__; - - TRY(PyArg_ParseTuple(args,"cciOiOOiOii", - &jobz,&uplo,&n,&a,&lda,&w,&work,&lwork, - &iwork,&liwork,&info)); - - TRY(check_object(a,PyArray_DOUBLE,"a","PyArray_DOUBLE","dsyevd")); - TRY(check_object(w,PyArray_DOUBLE,"w","PyArray_DOUBLE","dsyevd")); - TRY(check_object(work,PyArray_DOUBLE,"work","PyArray_DOUBLE","dsyevd")); - TRY(check_object(iwork,PyArray_INT,"iwork","PyArray_INT","dsyevd")); - - lapack_lite_status__ = \ - FNAME(dsyevd)(&jobz,&uplo,&n,DDATA(a),&lda,DDATA(w),DDATA(work), - &lwork,IDATA(iwork),&liwork,&info); - - return Py_BuildValue("{s:i,s:c,s:c,s:i,s:i,s:i,s:i,s:i}","dsyevd_", - lapack_lite_status__,"jobz",jobz,"uplo",uplo, - "n",n,"lda",lda,"lwork",lwork,"liwork",liwork,"info",info); -} - -static PyObject * -lapack_lite_zheevd(PyObject *NPY_UNUSED(self), PyObject *args) -{ - /* Arguments */ - /* ========= */ - - char jobz; - /* JOBZ (input) CHARACTER*1 */ - /* = 'N': Compute eigenvalues only; */ - /* = 'V': Compute eigenvalues and eigenvectors. */ - - char uplo; - /* UPLO (input) CHARACTER*1 */ - /* = 'U': Upper triangle of A is stored; */ - /* = 'L': Lower triangle of A is stored. */ - - int n; - /* N (input) INTEGER */ - /* The order of the matrix A. N >= 0. */ - - PyObject *a; - /* A (input/output) COMPLEX*16 array, dimension (LDA, N) */ - /* On entry, the Hermitian matrix A. If UPLO = 'U', the */ - /* leading N-by-N upper triangular part of A contains the */ - /* upper triangular part of the matrix A. If UPLO = 'L', */ - /* the leading N-by-N lower triangular part of A contains */ - /* the lower triangular part of the matrix A. */ - /* On exit, if JOBZ = 'V', then if INFO = 0, A contains the */ - /* orthonormal eigenvectors of the matrix A. */ - /* If JOBZ = 'N', then on exit the lower triangle (if UPLO='L') */ - /* or the upper triangle (if UPLO='U') of A, including the */ - /* diagonal, is destroyed. */ - - int lda; - /* LDA (input) INTEGER */ - /* The leading dimension of the array A. LDA >= max(1,N). 
*/ - - PyObject *w; - /* W (output) DOUBLE PRECISION array, dimension (N) */ - /* If INFO = 0, the eigenvalues in ascending order. */ - - PyObject *work; - /* WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) */ - /* On exit, if INFO = 0, WORK(1) returns the optimal LWORK. */ - - int lwork; - /* LWORK (input) INTEGER */ - /* The length of the array WORK. LWORK >= max(1,3*N-1). */ - /* For optimal efficiency, LWORK >= (NB+2)*N, */ - /* where NB is the blocksize for DSYTRD returned by ILAENV. */ - - PyObject *rwork; - /* RWORK (workspace) DOUBLE PRECISION array, dimension (max(1, 3*N-2)) */ - int lrwork; - - PyObject *iwork; - int liwork; - - int info; - /* INFO (output) INTEGER */ - /* = 0: successful exit */ - /* < 0: if INFO = -i, the i-th argument had an illegal value */ - /* > 0: if INFO = i, the algorithm failed to converge; i */ - /* off-diagonal elements of an intermediate tridiagonal */ - /* form did not converge to zero. */ - - int lapack_lite_status__; - - TRY(PyArg_ParseTuple(args,"cciOiOOiOiOii", - &jobz,&uplo,&n,&a,&lda,&w,&work,&lwork,&rwork, - &lrwork,&iwork,&liwork,&info)); - - TRY(check_object(a,PyArray_CDOUBLE,"a","PyArray_CDOUBLE","zheevd")); - TRY(check_object(w,PyArray_DOUBLE,"w","PyArray_DOUBLE","zheevd")); - TRY(check_object(work,PyArray_CDOUBLE,"work","PyArray_CDOUBLE","zheevd")); - TRY(check_object(rwork,PyArray_DOUBLE,"rwork","PyArray_DOUBLE","zheevd")); - TRY(check_object(iwork,PyArray_INT,"iwork","PyArray_INT","zheevd")); - - lapack_lite_status__ = \ - FNAME(zheevd)(&jobz,&uplo,&n,ZDATA(a),&lda,DDATA(w),ZDATA(work), - &lwork,DDATA(rwork),&lrwork,IDATA(iwork),&liwork,&info); - - return Py_BuildValue("{s:i,s:c,s:c,s:i,s:i,s:i,s:i,s:i,s:i}","zheevd_", - lapack_lite_status__,"jobz",jobz,"uplo",uplo,"n",n, - "lda",lda,"lwork",lwork,"lrwork",lrwork, - "liwork",liwork,"info",info); -} - -static PyObject * -lapack_lite_dgelsd(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - int m; - int n; - int nrhs; - PyObject
*a; - int lda; - PyObject *b; - int ldb; - PyObject *s; - double rcond; - int rank; - PyObject *work; - PyObject *iwork; - int lwork; - int info; - TRY(PyArg_ParseTuple(args,"iiiOiOiOdiOiOi", - &m,&n,&nrhs,&a,&lda,&b,&ldb,&s,&rcond, - &rank,&work,&lwork,&iwork,&info)); - - TRY(check_object(a,PyArray_DOUBLE,"a","PyArray_DOUBLE","dgelsd")); - TRY(check_object(b,PyArray_DOUBLE,"b","PyArray_DOUBLE","dgelsd")); - TRY(check_object(s,PyArray_DOUBLE,"s","PyArray_DOUBLE","dgelsd")); - TRY(check_object(work,PyArray_DOUBLE,"work","PyArray_DOUBLE","dgelsd")); - TRY(check_object(iwork,PyArray_INT,"iwork","PyArray_INT","dgelsd")); - - lapack_lite_status__ = \ - FNAME(dgelsd)(&m,&n,&nrhs,DDATA(a),&lda,DDATA(b),&ldb, - DDATA(s),&rcond,&rank,DDATA(work),&lwork, - IDATA(iwork),&info); - - return Py_BuildValue("{s:i,s:i,s:i,s:i,s:i,s:i,s:d,s:i,s:i,s:i}","dgelsd_", - lapack_lite_status__,"m",m,"n",n,"nrhs",nrhs, - "lda",lda,"ldb",ldb,"rcond",rcond,"rank",rank, - "lwork",lwork,"info",info); -} - -static PyObject * -lapack_lite_dgesv(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - int n; - int nrhs; - PyObject *a; - int lda; - PyObject *ipiv; - PyObject *b; - int ldb; - int info; - TRY(PyArg_ParseTuple(args,"iiOiOOii",&n,&nrhs,&a,&lda,&ipiv,&b,&ldb,&info)); - - TRY(check_object(a,PyArray_DOUBLE,"a","PyArray_DOUBLE","dgesv")); - TRY(check_object(ipiv,PyArray_INT,"ipiv","PyArray_INT","dgesv")); - TRY(check_object(b,PyArray_DOUBLE,"b","PyArray_DOUBLE","dgesv")); - - lapack_lite_status__ = \ - FNAME(dgesv)(&n,&nrhs,DDATA(a),&lda,IDATA(ipiv),DDATA(b),&ldb,&info); - - return Py_BuildValue("{s:i,s:i,s:i,s:i,s:i,s:i}","dgesv_", - lapack_lite_status__,"n",n,"nrhs",nrhs,"lda",lda, - "ldb",ldb,"info",info); -} - -static PyObject * -lapack_lite_dgesdd(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - char jobz; - int m; - int n; - PyObject *a; - int lda; - PyObject *s; - PyObject *u; - int ldu; - PyObject *vt; - int ldvt; - PyObject *work; - 
int lwork; - PyObject *iwork; - int info; - TRY(PyArg_ParseTuple(args,"ciiOiOOiOiOiOi", - &jobz,&m,&n,&a,&lda,&s,&u,&ldu,&vt,&ldvt, - &work,&lwork,&iwork,&info)); - - TRY(check_object(a,PyArray_DOUBLE,"a","PyArray_DOUBLE","dgesdd")); - TRY(check_object(s,PyArray_DOUBLE,"s","PyArray_DOUBLE","dgesdd")); - TRY(check_object(u,PyArray_DOUBLE,"u","PyArray_DOUBLE","dgesdd")); - TRY(check_object(vt,PyArray_DOUBLE,"vt","PyArray_DOUBLE","dgesdd")); - TRY(check_object(work,PyArray_DOUBLE,"work","PyArray_DOUBLE","dgesdd")); - TRY(check_object(iwork,PyArray_INT,"iwork","PyArray_INT","dgesdd")); - - lapack_lite_status__ = \ - FNAME(dgesdd)(&jobz,&m,&n,DDATA(a),&lda,DDATA(s),DDATA(u),&ldu, - DDATA(vt),&ldvt,DDATA(work),&lwork,IDATA(iwork), - &info); - - if (info == 0 && lwork == -1) { - /* We need to check the result because - sometimes the "optimal" value is actually - too small. - Change it to the maximum of the minimum and the optimal. - */ - long work0 = (long) *DDATA(work); - int mn = MIN(m,n); - int mx = MAX(m,n); - - switch(jobz){ - case 'N': - work0 = MAX(work0,3*mn + MAX(mx,6*mn)+500); - break; - case 'O': - work0 = MAX(work0,3*mn*mn + \ - MAX(mx,5*mn*mn+4*mn+500)); - break; - case 'S': - case 'A': - work0 = MAX(work0,3*mn*mn + \ - MAX(mx,4*mn*(mn+1))+500); - break; - } - *DDATA(work) = (double) work0; - } - return Py_BuildValue("{s:i,s:c,s:i,s:i,s:i,s:i,s:i,s:i,s:i}","dgesdd_", - lapack_lite_status__,"jobz",jobz,"m",m,"n",n, - "lda",lda,"ldu",ldu,"ldvt",ldvt,"lwork",lwork, - "info",info); -} - -static PyObject * -lapack_lite_dgetrf(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - int m; - int n; - PyObject *a; - int lda; - PyObject *ipiv; - int info; - TRY(PyArg_ParseTuple(args,"iiOiOi",&m,&n,&a,&lda,&ipiv,&info)); - - TRY(check_object(a,PyArray_DOUBLE,"a","PyArray_DOUBLE","dgetrf")); - TRY(check_object(ipiv,PyArray_INT,"ipiv","PyArray_INT","dgetrf")); - - lapack_lite_status__ = \ - FNAME(dgetrf)(&m,&n,DDATA(a),&lda,IDATA(ipiv),&info); - - 
return Py_BuildValue("{s:i,s:i,s:i,s:i,s:i}","dgetrf_",lapack_lite_status__, - "m",m,"n",n,"lda",lda,"info",info); -} - -static PyObject * -lapack_lite_dpotrf(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - int n; - PyObject *a; - int lda; - char uplo; - int info; - - TRY(PyArg_ParseTuple(args,"ciOii",&uplo,&n,&a,&lda,&info)); - TRY(check_object(a,PyArray_DOUBLE,"a","PyArray_DOUBLE","dpotrf")); - - lapack_lite_status__ = \ - FNAME(dpotrf)(&uplo,&n,DDATA(a),&lda,&info); - - return Py_BuildValue("{s:i,s:i,s:i,s:i}","dpotrf_",lapack_lite_status__, - "n",n,"lda",lda,"info",info); -} - -static PyObject * -lapack_lite_dgeqrf(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - int m, n, lwork; - PyObject *a, *tau, *work; - int lda; - int info; - - TRY(PyArg_ParseTuple(args,"iiOiOOii",&m,&n,&a,&lda,&tau,&work,&lwork,&info)); - - /* check objects and convert to right storage order */ - TRY(check_object(a,PyArray_DOUBLE,"a","PyArray_DOUBLE","dgeqrf")); - TRY(check_object(tau,PyArray_DOUBLE,"tau","PyArray_DOUBLE","dgeqrf")); - TRY(check_object(work,PyArray_DOUBLE,"work","PyArray_DOUBLE","dgeqrf")); - - lapack_lite_status__ = \ - FNAME(dgeqrf)(&m, &n, DDATA(a), &lda, DDATA(tau), - DDATA(work), &lwork, &info); - - return Py_BuildValue("{s:i,s:i,s:i,s:i,s:i,s:i}","dgeqrf_", - lapack_lite_status__,"m",m,"n",n,"lda",lda, - "lwork",lwork,"info",info); -} - - -static PyObject * -lapack_lite_dorgqr(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - int m, n, k, lwork; - PyObject *a, *tau, *work; - int lda; - int info; - - TRY(PyArg_ParseTuple(args,"iiiOiOOii", &m, &n, &k, &a, &lda, &tau, &work, &lwork, &info)); - TRY(check_object(a,PyArray_DOUBLE,"a","PyArray_DOUBLE","dorgqr")); - TRY(check_object(tau,PyArray_DOUBLE,"tau","PyArray_DOUBLE","dorgqr")); - TRY(check_object(work,PyArray_DOUBLE,"work","PyArray_DOUBLE","dorgqr")); - lapack_lite_status__ = \ - FNAME(dorgqr)(&m, &n, &k, DDATA(a), &lda, DDATA(tau), 
DDATA(work), &lwork, &info); - - return Py_BuildValue("{s:i,s:i}","dorgqr_",lapack_lite_status__, - "info",info); -} - - -static PyObject * -lapack_lite_zgeev(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - char jobvl; - char jobvr; - int n; - PyObject *a; - int lda; - PyObject *w; - PyObject *vl; - int ldvl; - PyObject *vr; - int ldvr; - PyObject *work; - int lwork; - PyObject *rwork; - int info; - TRY(PyArg_ParseTuple(args,"cciOiOOiOiOiOi", - &jobvl,&jobvr,&n,&a,&lda,&w,&vl,&ldvl, - &vr,&ldvr,&work,&lwork,&rwork,&info)); - - TRY(check_object(a,PyArray_CDOUBLE,"a","PyArray_CDOUBLE","zgeev")); - TRY(check_object(w,PyArray_CDOUBLE,"w","PyArray_CDOUBLE","zgeev")); - TRY(check_object(vl,PyArray_CDOUBLE,"vl","PyArray_CDOUBLE","zgeev")); - TRY(check_object(vr,PyArray_CDOUBLE,"vr","PyArray_CDOUBLE","zgeev")); - TRY(check_object(work,PyArray_CDOUBLE,"work","PyArray_CDOUBLE","zgeev")); - TRY(check_object(rwork,PyArray_DOUBLE,"rwork","PyArray_DOUBLE","zgeev")); - - lapack_lite_status__ = \ - FNAME(zgeev)(&jobvl,&jobvr,&n,ZDATA(a),&lda,ZDATA(w),ZDATA(vl), - &ldvl,ZDATA(vr),&ldvr,ZDATA(work),&lwork, - DDATA(rwork),&info); - - return Py_BuildValue("{s:i,s:c,s:c,s:i,s:i,s:i,s:i,s:i,s:i}","zgeev_", - lapack_lite_status__,"jobvl",jobvl,"jobvr",jobvr, - "n",n,"lda",lda,"ldvl",ldvl,"ldvr",ldvr, - "lwork",lwork,"info",info); -} - -static PyObject * -lapack_lite_zgelsd(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - int m; - int n; - int nrhs; - PyObject *a; - int lda; - PyObject *b; - int ldb; - PyObject *s; - double rcond; - int rank; - PyObject *work; - int lwork; - PyObject *rwork; - PyObject *iwork; - int info; - TRY(PyArg_ParseTuple(args,"iiiOiOiOdiOiOOi", - &m,&n,&nrhs,&a,&lda,&b,&ldb,&s,&rcond, - &rank,&work,&lwork,&rwork,&iwork,&info)); - - TRY(check_object(a,PyArray_CDOUBLE,"a","PyArray_CDOUBLE","zgelsd")); - TRY(check_object(b,PyArray_CDOUBLE,"b","PyArray_CDOUBLE","zgelsd")); - 
TRY(check_object(s,PyArray_DOUBLE,"s","PyArray_DOUBLE","zgelsd")); - TRY(check_object(work,PyArray_CDOUBLE,"work","PyArray_CDOUBLE","zgelsd")); - TRY(check_object(rwork,PyArray_DOUBLE,"rwork","PyArray_DOUBLE","zgelsd")); - TRY(check_object(iwork,PyArray_INT,"iwork","PyArray_INT","zgelsd")); - - lapack_lite_status__ = \ - FNAME(zgelsd)(&m,&n,&nrhs,ZDATA(a),&lda,ZDATA(b),&ldb,DDATA(s),&rcond, - &rank,ZDATA(work),&lwork,DDATA(rwork),IDATA(iwork),&info); - - return Py_BuildValue("{s:i,s:i,s:i,s:i,s:i,s:i,s:i,s:i,s:i}","zgelsd_", - lapack_lite_status__,"m",m,"n",n,"nrhs",nrhs,"lda",lda, - "ldb",ldb,"rank",rank,"lwork",lwork,"info",info); -} - -static PyObject * -lapack_lite_zgesv(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - int n; - int nrhs; - PyObject *a; - int lda; - PyObject *ipiv; - PyObject *b; - int ldb; - int info; - TRY(PyArg_ParseTuple(args,"iiOiOOii",&n,&nrhs,&a,&lda,&ipiv,&b,&ldb,&info)); - - TRY(check_object(a,PyArray_CDOUBLE,"a","PyArray_CDOUBLE","zgesv")); - TRY(check_object(ipiv,PyArray_INT,"ipiv","PyArray_INT","zgesv")); - TRY(check_object(b,PyArray_CDOUBLE,"b","PyArray_CDOUBLE","zgesv")); - - lapack_lite_status__ = \ - FNAME(zgesv)(&n,&nrhs,ZDATA(a),&lda,IDATA(ipiv),ZDATA(b),&ldb,&info); - - return Py_BuildValue("{s:i,s:i,s:i,s:i,s:i,s:i}","zgesv_", - lapack_lite_status__,"n",n,"nrhs",nrhs,"lda",lda, - "ldb",ldb,"info",info); -} - -static PyObject * -lapack_lite_zgesdd(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - char jobz; - int m; - int n; - PyObject *a; - int lda; - PyObject *s; - PyObject *u; - int ldu; - PyObject *vt; - int ldvt; - PyObject *work; - int lwork; - PyObject *rwork; - PyObject *iwork; - int info; - TRY(PyArg_ParseTuple(args,"ciiOiOOiOiOiOOi", - &jobz,&m,&n,&a,&lda,&s,&u,&ldu, - &vt,&ldvt,&work,&lwork,&rwork,&iwork,&info)); - - TRY(check_object(a,PyArray_CDOUBLE,"a","PyArray_CDOUBLE","zgesdd")); - TRY(check_object(s,PyArray_DOUBLE,"s","PyArray_DOUBLE","zgesdd")); - 
TRY(check_object(u,PyArray_CDOUBLE,"u","PyArray_CDOUBLE","zgesdd")); - TRY(check_object(vt,PyArray_CDOUBLE,"vt","PyArray_CDOUBLE","zgesdd")); - TRY(check_object(work,PyArray_CDOUBLE,"work","PyArray_CDOUBLE","zgesdd")); - TRY(check_object(rwork,PyArray_DOUBLE,"rwork","PyArray_DOUBLE","zgesdd")); - TRY(check_object(iwork,PyArray_INT,"iwork","PyArray_INT","zgesdd")); - - lapack_lite_status__ = \ - FNAME(zgesdd)(&jobz,&m,&n,ZDATA(a),&lda,DDATA(s),ZDATA(u),&ldu, - ZDATA(vt),&ldvt,ZDATA(work),&lwork,DDATA(rwork), - IDATA(iwork),&info); - - return Py_BuildValue("{s:i,s:c,s:i,s:i,s:i,s:i,s:i,s:i,s:i}","zgesdd_", - lapack_lite_status__,"jobz",jobz,"m",m,"n",n, - "lda",lda,"ldu",ldu,"ldvt",ldvt,"lwork",lwork, - "info",info); -} - -static PyObject * -lapack_lite_zgetrf(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - int m; - int n; - PyObject *a; - int lda; - PyObject *ipiv; - int info; - TRY(PyArg_ParseTuple(args,"iiOiOi",&m,&n,&a,&lda,&ipiv,&info)); - - TRY(check_object(a,PyArray_CDOUBLE,"a","PyArray_CDOUBLE","zgetrf")); - TRY(check_object(ipiv,PyArray_INT,"ipiv","PyArray_INT","zgetrf")); - - lapack_lite_status__ = \ - FNAME(zgetrf)(&m,&n,ZDATA(a),&lda,IDATA(ipiv),&info); - - return Py_BuildValue("{s:i,s:i,s:i,s:i,s:i}","zgetrf_", - lapack_lite_status__,"m",m,"n",n,"lda",lda,"info",info); -} - -static PyObject * -lapack_lite_zpotrf(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - int n; - PyObject *a; - int lda; - char uplo; - int info; - - TRY(PyArg_ParseTuple(args,"ciOii",&uplo,&n,&a,&lda,&info)); - TRY(check_object(a,PyArray_CDOUBLE,"a","PyArray_CDOUBLE","zpotrf")); - lapack_lite_status__ = \ - FNAME(zpotrf)(&uplo,&n,ZDATA(a),&lda,&info); - - return Py_BuildValue("{s:i,s:i,s:i,s:i}","zpotrf_", - lapack_lite_status__,"n",n,"lda",lda,"info",info); -} - -static PyObject * -lapack_lite_zgeqrf(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - int m, n, lwork; - PyObject *a, *tau, *work; - int 
lda; - int info; - - TRY(PyArg_ParseTuple(args,"iiOiOOii",&m,&n,&a,&lda,&tau,&work,&lwork,&info)); - -/* check objects and convert to right storage order */ - TRY(check_object(a,PyArray_CDOUBLE,"a","PyArray_CDOUBLE","zgeqrf")); - TRY(check_object(tau,PyArray_CDOUBLE,"tau","PyArray_CDOUBLE","zgeqrf")); - TRY(check_object(work,PyArray_CDOUBLE,"work","PyArray_CDOUBLE","zgeqrf")); - - lapack_lite_status__ = \ - FNAME(zgeqrf)(&m, &n, ZDATA(a), &lda, ZDATA(tau), ZDATA(work), &lwork, &info); - - return Py_BuildValue("{s:i,s:i,s:i,s:i,s:i,s:i}","zgeqrf_",lapack_lite_status__,"m",m,"n",n,"lda",lda,"lwork",lwork,"info",info); -} - - -static PyObject * -lapack_lite_zungqr(PyObject *NPY_UNUSED(self), PyObject *args) -{ - int lapack_lite_status__; - int m, n, k, lwork; - PyObject *a, *tau, *work; - int lda; - int info; - - TRY(PyArg_ParseTuple(args,"iiiOiOOii", &m, &n, &k, &a, &lda, &tau, &work, &lwork, &info)); - TRY(check_object(a,PyArray_CDOUBLE,"a","PyArray_CDOUBLE","zungqr")); - TRY(check_object(tau,PyArray_CDOUBLE,"tau","PyArray_CDOUBLE","zungqr")); - TRY(check_object(work,PyArray_CDOUBLE,"work","PyArray_CDOUBLE","zungqr")); - - - lapack_lite_status__ = \ - FNAME(zungqr)(&m, &n, &k, ZDATA(a), &lda, ZDATA(tau), ZDATA(work), - &lwork, &info); - - return Py_BuildValue("{s:i,s:i}","zungqr_",lapack_lite_status__, - "info",info); -} - - - -#define STR(x) #x -#define lameth(name) {STR(name), lapack_lite_##name, METH_VARARGS, NULL} -static struct PyMethodDef lapack_lite_module_methods[] = { - lameth(zheevd), - lameth(dsyevd), - lameth(dgeev), - lameth(dgelsd), - lameth(dgesv), - lameth(dgesdd), - lameth(dgetrf), - lameth(dpotrf), - lameth(dgeqrf), - lameth(dorgqr), - lameth(zgeev), - lameth(zgelsd), - lameth(zgesv), - lameth(zgesdd), - lameth(zgetrf), - lameth(zpotrf), - lameth(zgeqrf), - lameth(zungqr), - { NULL,NULL,0, NULL} -}; - -static char lapack_lite_module_documentation[] = ""; - - -#if PY_MAJOR_VERSION >= 3 -static struct PyModuleDef moduledef = { - 
PyModuleDef_HEAD_INIT, - "lapack_lite", - NULL, - -1, - lapack_lite_module_methods, - NULL, - NULL, - NULL, - NULL -}; -#endif - -/* Initialization function for the module */ -#if PY_MAJOR_VERSION >= 3 -#define RETVAL m -PyObject *PyInit_lapack_lite(void) -#else -#define RETVAL -PyMODINIT_FUNC -initlapack_lite(void) -#endif -{ - PyObject *m,*d; -#if PY_MAJOR_VERSION >= 3 - m = PyModule_Create(&moduledef); -#else - m = Py_InitModule4("lapack_lite", lapack_lite_module_methods, - lapack_lite_module_documentation, - (PyObject*)NULL,PYTHON_API_VERSION); -#endif - if (m == NULL) { - return RETVAL; - } - import_array(); - d = PyModule_GetDict(m); - LapackError = PyErr_NewException("lapack_lite.LapackError", NULL, NULL); - PyDict_SetItemString(d, "LapackError", LapackError); - - return RETVAL; -} diff --git a/pythonPackages/numpy/numpy/linalg/linalg.py b/pythonPackages/numpy/numpy/linalg/linalg.py deleted file mode 100755 index eb8c8379ad..0000000000 --- a/pythonPackages/numpy/numpy/linalg/linalg.py +++ /dev/null @@ -1,1969 +0,0 @@ -"""Lite version of scipy.linalg. - -Notes ------ -This module is a lite version of the linalg.py module in SciPy which -contains high-level Python interface to the LAPACK library. The lite -version only accesses the following LAPACK functions: dgesv, zgesv, -dgeev, zgeev, dgesdd, zgesdd, dgelsd, zgelsd, dsyevd, zheevd, dgetrf, -zgetrf, dpotrf, zpotrf, dgeqrf, zgeqrf, zungqr, dorgqr. 
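The routines listed in this module docstring map onto the public `numpy.linalg` functions defined below; for instance, `dgeqrf`/`dorgqr` (and the complex `zgeqrf`/`zungqr` wrapped in the C code above) back `qr`. A quick check of the factorization they produce, using only the public NumPy API:

```python
import numpy as np

rng = np.random.RandomState(0)
a = rng.randn(5, 3) + 1j * rng.randn(5, 3)

# qr() factors with zgeqrf, then materializes Q with zungqr
q, r = np.linalg.qr(a)
assert np.allclose(q.conj().T @ q, np.eye(3))  # Q has orthonormal columns
assert np.allclose(np.triu(r), r)              # R is upper triangular
assert np.allclose(q @ r, a)                   # Q R reconstructs a
```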
-""" - -__all__ = ['matrix_power', 'solve', 'tensorsolve', 'tensorinv', 'inv', - 'cholesky', 'eigvals', 'eigvalsh', 'pinv', 'slogdet', 'det', - 'svd', 'eig', 'eigh','lstsq', 'norm', 'qr', 'cond', 'matrix_rank', - 'LinAlgError'] - -from numpy.core import array, asarray, zeros, empty, transpose, \ - intc, single, double, csingle, cdouble, inexact, complexfloating, \ - newaxis, ravel, all, Inf, dot, add, multiply, identity, sqrt, \ - maximum, flatnonzero, diagonal, arange, fastCopyAndTranspose, sum, \ - isfinite, size, finfo, absolute, log, exp -from numpy.lib import triu -from numpy.linalg import lapack_lite -from numpy.matrixlib.defmatrix import matrix_power -from numpy.compat import asbytes - -# For Python2/3 compatibility -_N = asbytes('N') -_V = asbytes('V') -_A = asbytes('A') -_S = asbytes('S') -_L = asbytes('L') - -fortran_int = intc - -# Error object -class LinAlgError(Exception): - """ - Generic Python-exception-derived object raised by linalg functions. - - General purpose exception class, derived from Python's exception.Exception - class, programmatically raised in linalg functions when a Linear - Algebra-related condition would prevent further correct execution of the - function. 
- - Parameters - ---------- - None - - Examples - -------- - >>> from numpy import linalg as LA - >>> LA.inv(np.zeros((2,2))) - Traceback (most recent call last): - File "", line 1, in - File "...linalg.py", line 350, - in inv return wrap(solve(a, identity(a.shape[0], dtype=a.dtype))) - File "...linalg.py", line 249, - in solve - raise LinAlgError, 'Singular matrix' - numpy.linalg.linalg.LinAlgError: Singular matrix - - """ - pass - -def _makearray(a): - new = asarray(a) - wrap = getattr(a, "__array_prepare__", new.__array_wrap__) - return new, wrap - -def isComplexType(t): - return issubclass(t, complexfloating) - -_real_types_map = {single : single, - double : double, - csingle : single, - cdouble : double} - -_complex_types_map = {single : csingle, - double : cdouble, - csingle : csingle, - cdouble : cdouble} - -def _realType(t, default=double): - return _real_types_map.get(t, default) - -def _complexType(t, default=cdouble): - return _complex_types_map.get(t, default) - -def _linalgRealType(t): - """Cast the type t to either double or cdouble.""" - return double - -_complex_types_map = {single : csingle, - double : cdouble, - csingle : csingle, - cdouble : cdouble} - -def _commonType(*arrays): - # in lite version, use higher precision (always double or cdouble) - result_type = single - is_complex = False - for a in arrays: - if issubclass(a.dtype.type, inexact): - if isComplexType(a.dtype.type): - is_complex = True - rt = _realType(a.dtype.type, default=None) - if rt is None: - # unsupported inexact scalar - raise TypeError("array type %s is unsupported in linalg" % - (a.dtype.name,)) - else: - rt = double - if rt is double: - result_type = double - if is_complex: - t = cdouble - result_type = _complex_types_map[result_type] - else: - t = double - return t, result_type - -# _fastCopyAndTranpose assumes the input is 2D (as all the calls in here are). 
- -_fastCT = fastCopyAndTranspose - -def _to_native_byte_order(*arrays): - ret = [] - for arr in arrays: - if arr.dtype.byteorder not in ('=', '|'): - ret.append(asarray(arr, dtype=arr.dtype.newbyteorder('='))) - else: - ret.append(arr) - if len(ret) == 1: - return ret[0] - else: - return ret - -def _fastCopyAndTranspose(type, *arrays): - cast_arrays = () - for a in arrays: - if a.dtype.type is type: - cast_arrays = cast_arrays + (_fastCT(a),) - else: - cast_arrays = cast_arrays + (_fastCT(a.astype(type)),) - if len(cast_arrays) == 1: - return cast_arrays[0] - else: - return cast_arrays - -def _assertRank2(*arrays): - for a in arrays: - if len(a.shape) != 2: - raise LinAlgError, '%d-dimensional array given. Array must be \ - two-dimensional' % len(a.shape) - -def _assertSquareness(*arrays): - for a in arrays: - if max(a.shape) != min(a.shape): - raise LinAlgError, 'Array must be square' - -def _assertFinite(*arrays): - for a in arrays: - if not (isfinite(a).all()): - raise LinAlgError, "Array must not contain infs or NaNs" - -def _assertNonEmpty(*arrays): - for a in arrays: - if size(a) == 0: - raise LinAlgError("Arrays cannot be empty") - - -# Linear equations - -def tensorsolve(a, b, axes=None): - """ - Solve the tensor equation ``a x = b`` for x. - - It is assumed that all indices of `x` are summed over in the product, - together with the rightmost indices of `a`, as is done in, for example, - ``tensordot(a, x, axes=len(b.shape))``. - - Parameters - ---------- - a : array_like - Coefficient tensor, of shape ``b.shape + Q``. `Q`, a tuple, equals - the shape of that sub-tensor of `a` consisting of the appropriate - number of its rightmost indices, and must be such that - ``prod(Q) == prod(b.shape)`` (in which sense `a` is said to be - 'square'). - b : array_like - Right-hand tensor, which can be of any shape. - axes : tuple of ints, optional - Axes in `a` to reorder to the right, before inversion. - If None (default), no reordering is done. 
- - Returns - ------- - x : ndarray, shape Q - - Raises - ------ - LinAlgError - If `a` is singular or not 'square' (in the above sense). - - See Also - -------- - tensordot, tensorinv - - Examples - -------- - >>> a = np.eye(2*3*4) - >>> a.shape = (2*3, 4, 2, 3, 4) - >>> b = np.random.randn(2*3, 4) - >>> x = np.linalg.tensorsolve(a, b) - >>> x.shape - (2, 3, 4) - >>> np.allclose(np.tensordot(a, x, axes=3), b) - True - - """ - a,wrap = _makearray(a) - b = asarray(b) - an = a.ndim - - if axes is not None: - allaxes = range(0, an) - for k in axes: - allaxes.remove(k) - allaxes.insert(an, k) - a = a.transpose(allaxes) - - oldshape = a.shape[-(an-b.ndim):] - prod = 1 - for k in oldshape: - prod *= k - - a = a.reshape(-1, prod) - b = b.ravel() - res = wrap(solve(a, b)) - res.shape = oldshape - return res - -def solve(a, b): - """ - Solve a linear matrix equation, or system of linear scalar equations. - - Computes the "exact" solution, `x`, of the well-determined, i.e., full - rank, linear matrix equation `ax = b`. - - Parameters - ---------- - a : array_like, shape (M, M) - Coefficient matrix. - b : array_like, shape (M,) or (M, N) - Ordinate or "dependent variable" values. - - Returns - ------- - x : ndarray, shape (M,) or (M, N) depending on b - Solution to the system a x = b - - Raises - ------ - LinAlgError - If `a` is singular or not square. - - Notes - ----- - `solve` is a wrapper for the LAPACK routines `dgesv`_ and - `zgesv`_, the former being used if `a` is real-valued, the latter if - it is complex-valued. The solution to the system of linear equations - is computed using an LU decomposition [1]_ with partial pivoting and - row interchanges. - - .. _dgesv: http://www.netlib.org/lapack/double/dgesv.f - - .. 
_zgesv: http://www.netlib.org/lapack/complex16/zgesv.f - - `a` must be square and of full-rank, i.e., all rows (or, equivalently, - columns) must be linearly independent; if either is not true, use - `lstsq` for the least-squares best "solution" of the - system/equation. - - References - ---------- - .. [1] G. Strang, *Linear Algebra and Its Applications*, 2nd Ed., Orlando, - FL, Academic Press, Inc., 1980, pg. 22. - - Examples - -------- - Solve the system of equations ``3 * x0 + x1 = 9`` and ``x0 + 2 * x1 = 8``: - - >>> a = np.array([[3,1], [1,2]]) - >>> b = np.array([9,8]) - >>> x = np.linalg.solve(a, b) - >>> x - array([ 2., 3.]) - - Check that the solution is correct: - - >>> (np.dot(a, x) == b).all() - True - - """ - a, _ = _makearray(a) - b, wrap = _makearray(b) - one_eq = len(b.shape) == 1 - if one_eq: - b = b[:, newaxis] - _assertRank2(a, b) - _assertSquareness(a) - n_eq = a.shape[0] - n_rhs = b.shape[1] - if n_eq != b.shape[0]: - raise LinAlgError, 'Incompatible dimensions' - t, result_t = _commonType(a, b) -# lapack_routine = _findLapackRoutine('gesv', t) - if isComplexType(t): - lapack_routine = lapack_lite.zgesv - else: - lapack_routine = lapack_lite.dgesv - a, b = _fastCopyAndTranspose(t, a, b) - a, b = _to_native_byte_order(a, b) - pivots = zeros(n_eq, fortran_int) - results = lapack_routine(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0) - if results['info'] > 0: - raise LinAlgError, 'Singular matrix' - if one_eq: - return wrap(b.ravel().astype(result_t)) - else: - return wrap(b.transpose().astype(result_t)) - - -def tensorinv(a, ind=2): - """ - Compute the 'inverse' of an N-dimensional array. - - The result is an inverse for `a` relative to the tensordot operation - ``tensordot(a, b, ind)``, i. e., up to floating-point accuracy, - ``tensordot(tensorinv(a), a, ind)`` is the "identity" tensor for the - tensordot operation. - - Parameters - ---------- - a : array_like - Tensor to 'invert'. Its shape must be 'square', i. 
e., - ``prod(a.shape[:ind]) == prod(a.shape[ind:])``. - ind : int, optional - Number of first indices that are involved in the inverse sum. - Must be a positive integer, default is 2. - - Returns - ------- - b : ndarray - `a`'s tensordot inverse, shape ``a.shape[:ind] + a.shape[ind:]``. - - Raises - ------ - LinAlgError - If `a` is singular or not 'square' (in the above sense). - - See Also - -------- - tensordot, tensorsolve - - Examples - -------- - >>> a = np.eye(4*6) - >>> a.shape = (4, 6, 8, 3) - >>> ainv = np.linalg.tensorinv(a, ind=2) - >>> ainv.shape - (8, 3, 4, 6) - >>> b = np.random.randn(4, 6) - >>> np.allclose(np.tensordot(ainv, b), np.linalg.tensorsolve(a, b)) - True - - >>> a = np.eye(4*6) - >>> a.shape = (24, 8, 3) - >>> ainv = np.linalg.tensorinv(a, ind=1) - >>> ainv.shape - (8, 3, 24) - >>> b = np.random.randn(24) - >>> np.allclose(np.tensordot(ainv, b, 1), np.linalg.tensorsolve(a, b)) - True - - """ - a = asarray(a) - oldshape = a.shape - prod = 1 - if ind > 0: - invshape = oldshape[ind:] + oldshape[:ind] - for k in oldshape[ind:]: - prod *= k - else: - raise ValueError, "Invalid ind argument." - a = a.reshape(prod, -1) - ia = inv(a) - return ia.reshape(*invshape) - - -# Matrix inversion - -def inv(a): - """ - Compute the (multiplicative) inverse of a matrix. - - Given a square matrix `a`, return the matrix `ainv` satisfying - ``dot(a, ainv) = dot(ainv, a) = eye(a.shape[0])``. - - Parameters - ---------- - a : array_like, shape (M, M) - Matrix to be inverted. - - Returns - ------- - ainv : ndarray or matrix, shape (M, M) - (Multiplicative) inverse of the matrix `a`. - - Raises - ------ - LinAlgError - If `a` is singular or not square. 
- - Examples - -------- - >>> from numpy import linalg as LA - >>> a = np.array([[1., 2.], [3., 4.]]) - >>> ainv = LA.inv(a) - >>> np.allclose(np.dot(a, ainv), np.eye(2)) - True - >>> np.allclose(np.dot(ainv, a), np.eye(2)) - True - - If a is a matrix object, then the return value is a matrix as well: - - >>> ainv = LA.inv(np.matrix(a)) - >>> ainv - matrix([[-2. , 1. ], - [ 1.5, -0.5]]) - - """ - a, wrap = _makearray(a) - return wrap(solve(a, identity(a.shape[0], dtype=a.dtype))) - - -# Cholesky decomposition - -def cholesky(a): - """ - Cholesky decomposition. - - Return the Cholesky decomposition, `L * L.H`, of the square matrix `a`, - where `L` is lower-triangular and .H is the conjugate transpose operator - (which is the ordinary transpose if `a` is real-valued). `a` must be - Hermitian (symmetric if real-valued) and positive-definite. Only `L` is - actually returned. - - Parameters - ---------- - a : array_like, shape (M, M) - Hermitian (symmetric if all elements are real), positive-definite - input matrix. - - Returns - ------- - L : ndarray, or matrix object if `a` is, shape (M, M) - Lower-triangular Cholesky factor of a. - - Raises - ------ - LinAlgError - If the decomposition fails, for example, if `a` is not - positive-definite. - - Notes - ----- - The Cholesky decomposition is often used as a fast way of solving - - .. math:: A \\mathbf{x} = \\mathbf{b} - - (when `A` is both Hermitian/symmetric and positive-definite). - - First, we solve for :math:`\\mathbf{y}` in - - .. math:: L \\mathbf{y} = \\mathbf{b}, - - and then for :math:`\\mathbf{x}` in - - .. math:: L.H \\mathbf{x} = \\mathbf{y}. 
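The two triangular solves described above can be checked numerically; here plain `solve` stands in for a dedicated triangular solver (LAPACK's `trtrs`, which this lite module does not wrap):

```python
import numpy as np

A = np.array([[4.0, 2.0], [2.0, 3.0]])   # symmetric positive-definite
b = np.array([1.0, 2.0])

L = np.linalg.cholesky(A)
y = np.linalg.solve(L, b)            # forward step:  L y = b
x = np.linalg.solve(L.conj().T, y)   # backward step: L.H x = y
assert np.allclose(A @ x, b)
```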
- - Examples - -------- - >>> A = np.array([[1,-2j],[2j,5]]) - >>> A - array([[ 1.+0.j, 0.-2.j], - [ 0.+2.j, 5.+0.j]]) - >>> L = np.linalg.cholesky(A) - >>> L - array([[ 1.+0.j, 0.+0.j], - [ 0.+2.j, 1.+0.j]]) - >>> np.dot(L, L.T.conj()) # verify that L * L.H = A - array([[ 1.+0.j, 0.-2.j], - [ 0.+2.j, 5.+0.j]]) - >>> A = [[1,-2j],[2j,5]] # what happens if A is only array_like? - >>> np.linalg.cholesky(A) # an ndarray object is returned - array([[ 1.+0.j, 0.+0.j], - [ 0.+2.j, 1.+0.j]]) - >>> # But a matrix object is returned if A is a matrix object - >>> LA.cholesky(np.matrix(A)) - matrix([[ 1.+0.j, 0.+0.j], - [ 0.+2.j, 1.+0.j]]) - - """ - a, wrap = _makearray(a) - _assertRank2(a) - _assertSquareness(a) - t, result_t = _commonType(a) - a = _fastCopyAndTranspose(t, a) - a = _to_native_byte_order(a) - m = a.shape[0] - n = a.shape[1] - if isComplexType(t): - lapack_routine = lapack_lite.zpotrf - else: - lapack_routine = lapack_lite.dpotrf - results = lapack_routine(_L, n, a, m, 0) - if results['info'] > 0: - raise LinAlgError, 'Matrix is not positive definite - \ - Cholesky decomposition cannot be computed' - s = triu(a, k=0).transpose() - if (s.dtype != result_t): - s = s.astype(result_t) - return wrap(s) - -# QR decompostion - -def qr(a, mode='full'): - """ - Compute the qr factorization of a matrix. - - Factor the matrix `a` as *qr*, where `q` is orthonormal and `r` is - upper-triangular. - - Parameters - ---------- - a : array_like - Matrix to be factored, of shape (M, N). - mode : {'full', 'r', 'economic'}, optional - Specifies the values to be returned. 'full' is the default. - Economic mode is slightly faster then 'r' mode if only `r` is needed. - - Returns - ------- - q : ndarray of float or complex, optional - The orthonormal matrix, of shape (M, K). Only returned if - ``mode='full'``. - r : ndarray of float or complex, optional - The upper-triangular matrix, of shape (K, N) with K = min(M, N). - Only returned when ``mode='full'`` or ``mode='r'``. 
- a2 : ndarray of float or complex, optional - Array of shape (M, N), only returned when ``mode='economic``'. - The diagonal and the upper triangle of `a2` contains `r`, while - the rest of the matrix is undefined. - - Raises - ------ - LinAlgError - If factoring fails. - - Notes - ----- - This is an interface to the LAPACK routines dgeqrf, zgeqrf, - dorgqr, and zungqr. - - For more information on the qr factorization, see for example: - http://en.wikipedia.org/wiki/QR_factorization - - Subclasses of `ndarray` are preserved, so if `a` is of type `matrix`, - all the return values will be matrices too. - - Examples - -------- - >>> a = np.random.randn(9, 6) - >>> q, r = np.linalg.qr(a) - >>> np.allclose(a, np.dot(q, r)) # a does equal qr - True - >>> r2 = np.linalg.qr(a, mode='r') - >>> r3 = np.linalg.qr(a, mode='economic') - >>> np.allclose(r, r2) # mode='r' returns the same r as mode='full' - True - >>> # But only triu parts are guaranteed equal when mode='economic' - >>> np.allclose(r, np.triu(r3[:6,:6], k=0)) - True - - Example illustrating a common use of `qr`: solving of least squares - problems - - What are the least-squares-best `m` and `y0` in ``y = y0 + mx`` for - the following data: {(0,1), (1,0), (1,2), (2,1)}. (Graph the points - and you'll see that it should be y0 = 0, m = 1.) The answer is provided - by solving the over-determined matrix equation ``Ax = b``, where:: - - A = array([[0, 1], [1, 1], [1, 1], [2, 1]]) - x = array([[y0], [m]]) - b = array([[1], [0], [2], [1]]) - - If A = qr such that q is orthonormal (which is always possible via - Gram-Schmidt), then ``x = inv(r) * (q.T) * b``. (In numpy practice, - however, we simply use `lstsq`.) 
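As that note says, in practice the whole least-squares problem is a single `lstsq` call (the `rcond=None` spelling is modern NumPy, not part of this version's API):

```python
import numpy as np

A = np.array([[0, 1], [1, 1], [1, 1], [2, 1]], dtype=float)
b = np.array([1.0, 0.0, 2.0, 1.0])
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, [0.0, 1.0], atol=1e-8)   # same answer as the qr route
```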
- - >>> A = np.array([[0, 1], [1, 1], [1, 1], [2, 1]]) - >>> A - array([[0, 1], - [1, 1], - [1, 1], - [2, 1]]) - >>> b = np.array([1, 0, 2, 1]) - >>> q, r = LA.qr(A) - >>> p = np.dot(q.T, b) - >>> np.dot(LA.inv(r), p) - array([ 1.1e-16, 1.0e+00]) - - """ - a, wrap = _makearray(a) - _assertRank2(a) - m, n = a.shape - t, result_t = _commonType(a) - a = _fastCopyAndTranspose(t, a) - a = _to_native_byte_order(a) - mn = min(m, n) - tau = zeros((mn,), t) - if isComplexType(t): - lapack_routine = lapack_lite.zgeqrf - routine_name = 'zgeqrf' - else: - lapack_routine = lapack_lite.dgeqrf - routine_name = 'dgeqrf' - - # calculate optimal size of work data 'work' - lwork = 1 - work = zeros((lwork,), t) - results = lapack_routine(m, n, a, m, tau, work, -1, 0) - if results['info'] != 0: - raise LinAlgError, '%s returns %d' % (routine_name, results['info']) - - # do qr decomposition - lwork = int(abs(work[0])) - work = zeros((lwork,), t) - results = lapack_routine(m, n, a, m, tau, work, lwork, 0) - - if results['info'] != 0: - raise LinAlgError, '%s returns %d' % (routine_name, results['info']) - - # economic mode. Isn't actually economic. 
- if mode[0] == 'e': - if t != result_t : - a = a.astype(result_t) - return a.T - - # generate r - r = _fastCopyAndTranspose(result_t, a[:,:mn]) - for i in range(mn): - r[i,:i].fill(0.0) - - # 'r'-mode, that is, calculate only r - if mode[0] == 'r': - return r - - # from here on: build orthonormal matrix q from a - - if isComplexType(t): - lapack_routine = lapack_lite.zungqr - routine_name = 'zungqr' - else: - lapack_routine = lapack_lite.dorgqr - routine_name = 'dorgqr' - - # determine optimal lwork - lwork = 1 - work = zeros((lwork,), t) - results = lapack_routine(m, mn, mn, a, m, tau, work, -1, 0) - if results['info'] != 0: - raise LinAlgError, '%s returns %d' % (routine_name, results['info']) - - # compute q - lwork = int(abs(work[0])) - work = zeros((lwork,), t) - results = lapack_routine(m, mn, mn, a, m, tau, work, lwork, 0) - if results['info'] != 0: - raise LinAlgError, '%s returns %d' % (routine_name, results['info']) - - q = _fastCopyAndTranspose(result_t, a[:mn,:]) - - return wrap(q), wrap(r) - - -# Eigenvalues - - -def eigvals(a): - """ - Compute the eigenvalues of a general matrix. - - Main difference between `eigvals` and `eig`: the eigenvectors aren't - returned. - - Parameters - ---------- - a : array_like, shape (M, M) - A complex- or real-valued matrix whose eigenvalues will be computed. - - Returns - ------- - w : ndarray, shape (M,) - The eigenvalues, each repeated according to its multiplicity. - They are not necessarily ordered, nor are they necessarily - real for real matrices. - - Raises - ------ - LinAlgError - If the eigenvalue computation does not converge. - - See Also - -------- - eig : eigenvalues and right eigenvectors of general arrays - eigvalsh : eigenvalues of symmetric or Hermitian arrays. - eigh : eigenvalues and eigenvectors of symmetric/Hermitian arrays. 
- - Notes - ----- - This is a simple interface to the LAPACK routines dgeev and zgeev - that sets those routines' flags to return only the eigenvalues of - general real and complex arrays, respectively. - - Examples - -------- - Illustration, using the fact that the eigenvalues of a diagonal matrix - are its diagonal elements, that multiplying a matrix on the left - by an orthogonal matrix, `Q`, and on the right by `Q.T` (the transpose - of `Q`), preserves the eigenvalues of the "middle" matrix. In other words, - if `Q` is orthogonal, then ``Q * A * Q.T`` has the same eigenvalues as - ``A``: - - >>> from numpy import linalg as LA - >>> x = np.random.random() - >>> Q = np.array([[np.cos(x), -np.sin(x)], [np.sin(x), np.cos(x)]]) - >>> LA.norm(Q[0, :]), LA.norm(Q[1, :]), np.dot(Q[0, :],Q[1, :]) - (1.0, 1.0, 0.0) - - Now multiply a diagonal matrix by Q on one side and by Q.T on the other: - - >>> D = np.diag((-1,1)) - >>> LA.eigvals(D) - array([-1., 1.]) - >>> A = np.dot(Q, D) - >>> A = np.dot(A, Q.T) - >>> LA.eigvals(A) - array([ 1., -1.]) - - """ - a, wrap = _makearray(a) - _assertRank2(a) - _assertSquareness(a) - _assertFinite(a) - t, result_t = _commonType(a) - real_t = _linalgRealType(t) - a = _fastCopyAndTranspose(t, a) - a = _to_native_byte_order(a) - n = a.shape[0] - dummy = zeros((1,), t) - if isComplexType(t): - lapack_routine = lapack_lite.zgeev - w = zeros((n,), t) - rwork = zeros((n,), real_t) - lwork = 1 - work = zeros((lwork,), t) - results = lapack_routine(_N, _N, n, a, n, w, - dummy, 1, dummy, 1, work, -1, rwork, 0) - lwork = int(abs(work[0])) - work = zeros((lwork,), t) - results = lapack_routine(_N, _N, n, a, n, w, - dummy, 1, dummy, 1, work, lwork, rwork, 0) - else: - lapack_routine = lapack_lite.dgeev - wr = zeros((n,), t) - wi = zeros((n,), t) - lwork = 1 - work = zeros((lwork,), t) - results = lapack_routine(_N, _N, n, a, n, wr, wi, - dummy, 1, dummy, 1, work, -1, 0) - lwork = int(work[0]) - work = zeros((lwork,), t) - results = 
lapack_routine(_N, _N, n, a, n, wr, wi, - dummy, 1, dummy, 1, work, lwork, 0) - if all(wi == 0.): - w = wr - result_t = _realType(result_t) - else: - w = wr+1j*wi - result_t = _complexType(result_t) - if results['info'] > 0: - raise LinAlgError, 'Eigenvalues did not converge' - return w.astype(result_t) - - -def eigvalsh(a, UPLO='L'): - """ - Compute the eigenvalues of a Hermitian or real symmetric matrix. - - Main difference from eigh: the eigenvectors are not computed. - - Parameters - ---------- - a : array_like, shape (M, M) - A complex- or real-valued matrix whose eigenvalues are to be - computed. - UPLO : {'L', 'U'}, optional - Specifies whether the calculation is done with the lower triangular - part of `a` ('L', default) or the upper triangular part ('U'). - - Returns - ------- - w : ndarray, shape (M,) - The eigenvalues, not necessarily ordered, each repeated according to - its multiplicity. - - Raises - ------ - LinAlgError - If the eigenvalue computation does not converge. - - See Also - -------- - eigh : eigenvalues and eigenvectors of symmetric/Hermitian arrays. - eigvals : eigenvalues of general real or complex arrays. - eig : eigenvalues and right eigenvectors of general real or complex - arrays. - - Notes - ----- - This is a simple interface to the LAPACK routines dsyevd and zheevd - that sets those routines' flags to return only the eigenvalues of - real symmetric and complex Hermitian arrays, respectively. 
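`eigvals` above returns a real result only when every `wi` is zero; a single nonreal conjugate pair flips the whole array to complex. Both branches are easy to exercise through the public API:

```python
import numpy as np

sym = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric: all wi == 0
rot = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation: eigenvalues +/- i

assert np.linalg.eigvals(sym).dtype == np.float64   # real branch
w = np.linalg.eigvals(rot)
assert w.dtype == np.complex128                     # complex branch
assert np.allclose(sorted(w, key=lambda z: z.imag), [-1j, 1j])
```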
- - Examples - -------- - >>> from numpy import linalg as LA - >>> a = np.array([[1, -2j], [2j, 5]]) - >>> LA.eigvalsh(a) - array([ 0.17157288+0.j, 5.82842712+0.j]) - - """ - UPLO = asbytes(UPLO) - a, wrap = _makearray(a) - _assertRank2(a) - _assertSquareness(a) - t, result_t = _commonType(a) - real_t = _linalgRealType(t) - a = _fastCopyAndTranspose(t, a) - a = _to_native_byte_order(a) - n = a.shape[0] - liwork = 5*n+3 - iwork = zeros((liwork,), fortran_int) - if isComplexType(t): - lapack_routine = lapack_lite.zheevd - w = zeros((n,), real_t) - lwork = 1 - work = zeros((lwork,), t) - lrwork = 1 - rwork = zeros((lrwork,), real_t) - results = lapack_routine(_N, UPLO, n, a, n, w, work, -1, - rwork, -1, iwork, liwork, 0) - lwork = int(abs(work[0])) - work = zeros((lwork,), t) - lrwork = int(rwork[0]) - rwork = zeros((lrwork,), real_t) - results = lapack_routine(_N, UPLO, n, a, n, w, work, lwork, - rwork, lrwork, iwork, liwork, 0) - else: - lapack_routine = lapack_lite.dsyevd - w = zeros((n,), t) - lwork = 1 - work = zeros((lwork,), t) - results = lapack_routine(_N, UPLO, n, a, n, w, work, -1, - iwork, liwork, 0) - lwork = int(work[0]) - work = zeros((lwork,), t) - results = lapack_routine(_N, UPLO, n, a, n, w, work, lwork, - iwork, liwork, 0) - if results['info'] > 0: - raise LinAlgError, 'Eigenvalues did not converge' - return w.astype(result_t) - -def _convertarray(a): - t, result_t = _commonType(a) - a = _fastCT(a.astype(t)) - return a, t, result_t - - -# Eigenvectors - - -def eig(a): - """ - Compute the eigenvalues and right eigenvectors of a square array. - - Parameters - ---------- - a : array_like, shape (M, M) - A square array of real or complex elements. - - Returns - ------- - w : ndarray, shape (M,) - The eigenvalues, each repeated according to its multiplicity. - The eigenvalues are not necessarily ordered, nor are they - necessarily real for real arrays (though for real arrays - complex-valued eigenvalues should occur in conjugate pairs). 
- - v : ndarray, shape (M, M) - The normalized (unit "length") eigenvectors, such that the - column ``v[:,i]`` is the eigenvector corresponding to the - eigenvalue ``w[i]``. - - Raises - ------ - LinAlgError - If the eigenvalue computation does not converge. - - See Also - -------- - eigvalsh : eigenvalues of a symmetric or Hermitian (conjugate symmetric) - array. - - eigvals : eigenvalues of a non-symmetric array. - - Notes - ----- - This is a simple interface to the LAPACK routines dgeev and zgeev - which compute the eigenvalues and eigenvectors of, respectively, - general real- and complex-valued square arrays. - - The number `w` is an eigenvalue of `a` if there exists a vector - `v` such that ``dot(a,v) = w * v``. Thus, the arrays `a`, `w`, and - `v` satisfy the equations ``dot(a[i,:], v[i]) = w[i] * v[:,i]`` - for :math:`i \\in \\{0,...,M-1\\}`. - - The array `v` of eigenvectors may not be of maximum rank, that is, some - of the columns may be linearly dependent, although round-off error may - obscure that fact. If the eigenvalues are all different, then theoretically - the eigenvectors are linearly independent. Likewise, the (complex-valued) - matrix of eigenvectors `v` is unitary if the matrix `a` is normal, i.e., - if ``dot(a, a.H) = dot(a.H, a)``, where `a.H` denotes the conjugate - transpose of `a`. - - Finally, it is emphasized that `v` consists of the *right* (as in - right-hand side) eigenvectors of `a`. A vector `y` satisfying - ``dot(y.T, a) = z * y.T`` for some number `z` is called a *left* - eigenvector of `a`, and, in general, the left and right eigenvectors - of a matrix are not necessarily the (perhaps conjugate) transposes - of each other. - - References - ---------- - G. Strang, *Linear Algebra and Its Applications*, 2nd Ed., Orlando, FL, - Academic Press, Inc., 1980, Various pp. - - Examples - -------- - >>> from numpy import linalg as LA - - (Almost) trivial example with real e-values and e-vectors. 
- - >>> w, v = LA.eig(np.diag((1, 2, 3))) - >>> w; v - array([ 1., 2., 3.]) - array([[ 1., 0., 0.], - [ 0., 1., 0.], - [ 0., 0., 1.]]) - - Real matrix possessing complex e-values and e-vectors; note that the - e-values are complex conjugates of each other. - - >>> w, v = LA.eig(np.array([[1, -1], [1, 1]])) - >>> w; v - array([ 1. + 1.j, 1. - 1.j]) - array([[ 0.70710678+0.j , 0.70710678+0.j ], - [ 0.00000000-0.70710678j, 0.00000000+0.70710678j]]) - - Complex-valued matrix with real e-values (but complex-valued e-vectors); - note that a.conj().T = a, i.e., a is Hermitian. - - >>> a = np.array([[1, 1j], [-1j, 1]]) - >>> w, v = LA.eig(a) - >>> w; v - array([ 2.00000000e+00+0.j, 5.98651912e-36+0.j]) # i.e., {2, 0} - array([[ 0.00000000+0.70710678j, 0.70710678+0.j ], - [ 0.70710678+0.j , 0.00000000+0.70710678j]]) - - Be careful about round-off error! - - >>> a = np.array([[1 + 1e-9, 0], [0, 1 - 1e-9]]) - >>> # Theor. e-values are 1 +/- 1e-9 - >>> w, v = LA.eig(a) - >>> w; v - array([ 1., 1.]) - array([[ 1., 0.], - [ 0., 1.]]) - - """ - a, wrap = _makearray(a) - _assertRank2(a) - _assertSquareness(a) - _assertFinite(a) - a, t, result_t = _convertarray(a) # convert to double or cdouble type - a = _to_native_byte_order(a) - real_t = _linalgRealType(t) - n = a.shape[0] - dummy = zeros((1,), t) - if isComplexType(t): - # Complex routines take different arguments - lapack_routine = lapack_lite.zgeev - w = zeros((n,), t) - v = zeros((n, n), t) - lwork = 1 - work = zeros((lwork,), t) - rwork = zeros((2*n,), real_t) - results = lapack_routine(_N, _V, n, a, n, w, - dummy, 1, v, n, work, -1, rwork, 0) - lwork = int(abs(work[0])) - work = zeros((lwork,), t) - results = lapack_routine(_N, _V, n, a, n, w, - dummy, 1, v, n, work, lwork, rwork, 0) - else: - lapack_routine = lapack_lite.dgeev - wr = zeros((n,), t) - wi = zeros((n,), t) - vr = zeros((n, n), t) - lwork = 1 - work = zeros((lwork,), t) - results = lapack_routine(_N, _V, n, a, n, wr, wi, - dummy, 1, vr, n, work, -1, 0) - 
lwork = int(work[0]) - work = zeros((lwork,), t) - results = lapack_routine(_N, _V, n, a, n, wr, wi, - dummy, 1, vr, n, work, lwork, 0) - if all(wi == 0.0): - w = wr - v = vr - result_t = _realType(result_t) - else: - w = wr+1j*wi - v = array(vr, w.dtype) - ind = flatnonzero(wi != 0.0) # indices of complex e-vals - for i in range(len(ind)//2): - v[ind[2*i]] = vr[ind[2*i]] + 1j*vr[ind[2*i+1]] - v[ind[2*i+1]] = vr[ind[2*i]] - 1j*vr[ind[2*i+1]] - result_t = _complexType(result_t) - - if results['info'] > 0: - raise LinAlgError, 'Eigenvalues did not converge' - vt = v.transpose().astype(result_t) - return w.astype(result_t), wrap(vt) - - -def eigh(a, UPLO='L'): - """ - Return the eigenvalues and eigenvectors of a Hermitian or symmetric matrix. - - Returns two objects, a 1-D array containing the eigenvalues of `a`, and - a 2-D square array or matrix (depending on the input type) of the - corresponding eigenvectors (in columns). - - Parameters - ---------- - a : array_like, shape (M, M) - A complex Hermitian or real symmetric matrix. - UPLO : {'L', 'U'}, optional - Specifies whether the calculation is done with the lower triangular - part of `a` ('L', default) or the upper triangular part ('U'). - - Returns - ------- - w : ndarray, shape (M,) - The eigenvalues, not necessarily ordered. - v : ndarray, or matrix object if `a` is, shape (M, M) - The column ``v[:, i]`` is the normalized eigenvector corresponding - to the eigenvalue ``w[i]``. - - Raises - ------ - LinAlgError - If the eigenvalue computation does not converge. - - See Also - -------- - eigvalsh : eigenvalues of symmetric or Hermitian arrays. - eig : eigenvalues and right eigenvectors for non-symmetric arrays. - eigvals : eigenvalues of non-symmetric arrays. - - Notes - ----- - This is a simple interface to the LAPACK routines dsyevd and zheevd, - which compute the eigenvalues and eigenvectors of real symmetric and - complex Hermitian arrays, respectively. 
- - The eigenvalues of real symmetric or complex Hermitian matrices are - always real. [1]_ The array `v` of (column) eigenvectors is unitary - and `a`, `w`, and `v` satisfy the equations - ``dot(a, v[:, i]) = w[i] * v[:, i]``. - - References - ---------- - .. [1] G. Strang, *Linear Algebra and Its Applications*, 2nd Ed., Orlando, - FL, Academic Press, Inc., 1980, pg. 222. - - Examples - -------- - >>> from numpy import linalg as LA - >>> a = np.array([[1, -2j], [2j, 5]]) - >>> a - array([[ 1.+0.j, 0.-2.j], - [ 0.+2.j, 5.+0.j]]) - >>> w, v = LA.eigh(a) - >>> w; v - array([ 0.17157288, 5.82842712]) - array([[-0.92387953+0.j , -0.38268343+0.j ], - [ 0.00000000+0.38268343j, 0.00000000-0.92387953j]]) - - >>> np.dot(a, v[:, 0]) - w[0] * v[:, 0] # verify 1st e-val/vec pair - array([2.77555756e-17 + 0.j, 0. + 1.38777878e-16j]) - >>> np.dot(a, v[:, 1]) - w[1] * v[:, 1] # verify 2nd e-val/vec pair - array([ 0.+0.j, 0.+0.j]) - - >>> A = np.matrix(a) # what happens if input is a matrix object - >>> A - matrix([[ 1.+0.j, 0.-2.j], - [ 0.+2.j, 5.+0.j]]) - >>> w, v = LA.eigh(A) - >>> w; v - array([ 0.17157288, 5.82842712]) - matrix([[-0.92387953+0.j , -0.38268343+0.j ], - [ 0.00000000+0.38268343j, 0.00000000-0.92387953j]]) - - """ - UPLO = asbytes(UPLO) - a, wrap = _makearray(a) - _assertRank2(a) - _assertSquareness(a) - t, result_t = _commonType(a) - real_t = _linalgRealType(t) - a = _fastCopyAndTranspose(t, a) - a = _to_native_byte_order(a) - n = a.shape[0] - liwork = 5*n+3 - iwork = zeros((liwork,), fortran_int) - if isComplexType(t): - lapack_routine = lapack_lite.zheevd - w = zeros((n,), real_t) - lwork = 1 - work = zeros((lwork,), t) - lrwork = 1 - rwork = zeros((lrwork,), real_t) - results = lapack_routine(_V, UPLO, n, a, n, w, work, -1, - rwork, -1, iwork, liwork, 0) - lwork = int(abs(work[0])) - work = zeros((lwork,), t) - lrwork = int(rwork[0]) - rwork = zeros((lrwork,), real_t) - results = lapack_routine(_V, UPLO, n, a, n, w, work, lwork, - rwork, lrwork, iwork, 
liwork, 0) - else: - lapack_routine = lapack_lite.dsyevd - w = zeros((n,), t) - lwork = 1 - work = zeros((lwork,), t) - results = lapack_routine(_V, UPLO, n, a, n, w, work, -1, - iwork, liwork, 0) - lwork = int(work[0]) - work = zeros((lwork,), t) - results = lapack_routine(_V, UPLO, n, a, n, w, work, lwork, - iwork, liwork, 0) - if results['info'] > 0: - raise LinAlgError, 'Eigenvalues did not converge' - at = a.transpose().astype(result_t) - return w.astype(_realType(result_t)), wrap(at) - - -# Singular value decomposition - -def svd(a, full_matrices=1, compute_uv=1): - """ - Singular Value Decomposition. - - Factors the matrix `a` as ``u * np.diag(s) * v``, where `u` and `v` - are unitary and `s` is a 1-d array of `a`'s singular values. - - Parameters - ---------- - a : array_like - A real or complex matrix of shape (`M`, `N`) . - full_matrices : bool, optional - If True (default), `u` and `v` have the shapes (`M`, `M`) and - (`N`, `N`), respectively. Otherwise, the shapes are (`M`, `K`) - and (`K`, `N`), respectively, where `K` = min(`M`, `N`). - compute_uv : bool, optional - Whether or not to compute `u` and `v` in addition to `s`. True - by default. - - Returns - ------- - u : ndarray - Unitary matrix. The shape of `u` is (`M`, `M`) or (`M`, `K`) - depending on value of ``full_matrices``. - s : ndarray - The singular values, sorted so that ``s[i] >= s[i+1]``. `s` is - a 1-d array of length min(`M`, `N`). - v : ndarray - Unitary matrix of shape (`N`, `N`) or (`K`, `N`), depending on - ``full_matrices``. - - Raises - ------ - LinAlgError - If SVD computation does not converge. - - Notes - ----- - The SVD is commonly written as ``a = U S V.H``. The `v` returned - by this function is ``V.H`` and ``u = U``. - - If ``U`` is a unitary matrix, it means that it - satisfies ``U.H = inv(U)``. - - The rows of `v` are the eigenvectors of ``a.H a``. The columns - of `u` are the eigenvectors of ``a a.H``. 
For row ``i`` in - `v` and column ``i`` in `u`, the corresponding eigenvalue is - ``s[i]**2``. - - If `a` is a `matrix` object (as opposed to an `ndarray`), then so - are all the return values. - - Examples - -------- - >>> a = np.random.randn(9, 6) + 1j*np.random.randn(9, 6) - - Reconstruction based on full SVD: - - >>> U, s, V = np.linalg.svd(a, full_matrices=True) - >>> U.shape, V.shape, s.shape - ((9, 6), (6, 6), (6,)) - >>> S = np.zeros((9, 6), dtype=complex) - >>> S[:6, :6] = np.diag(s) - >>> np.allclose(a, np.dot(U, np.dot(S, V))) - True - - Reconstruction based on reduced SVD: - - >>> U, s, V = np.linalg.svd(a, full_matrices=False) - >>> U.shape, V.shape, s.shape - ((9, 6), (6, 6), (6,)) - >>> S = np.diag(s) - >>> np.allclose(a, np.dot(U, np.dot(S, V))) - True - - """ - a, wrap = _makearray(a) - _assertRank2(a) - _assertNonEmpty(a) - m, n = a.shape - t, result_t = _commonType(a) - real_t = _linalgRealType(t) - a = _fastCopyAndTranspose(t, a) - a = _to_native_byte_order(a) - s = zeros((min(n, m),), real_t) - if compute_uv: - if full_matrices: - nu = m - nvt = n - option = _A - else: - nu = min(n, m) - nvt = min(n, m) - option = _S - u = zeros((nu, m), t) - vt = zeros((n, nvt), t) - else: - option = _N - nu = 1 - nvt = 1 - u = empty((1, 1), t) - vt = empty((1, 1), t) - - iwork = zeros((8*min(m, n),), fortran_int) - if isComplexType(t): - lapack_routine = lapack_lite.zgesdd - rwork = zeros((5*min(m, n)*min(m, n) + 5*min(m, n),), real_t) - lwork = 1 - work = zeros((lwork,), t) - results = lapack_routine(option, m, n, a, m, s, u, m, vt, nvt, - work, -1, rwork, iwork, 0) - lwork = int(abs(work[0])) - work = zeros((lwork,), t) - results = lapack_routine(option, m, n, a, m, s, u, m, vt, nvt, - work, lwork, rwork, iwork, 0) - else: - lapack_routine = lapack_lite.dgesdd - lwork = 1 - work = zeros((lwork,), t) - results = lapack_routine(option, m, n, a, m, s, u, m, vt, nvt, - work, -1, iwork, 0) - lwork = int(work[0]) - work = zeros((lwork,), t) - results = 
lapack_routine(option, m, n, a, m, s, u, m, vt, nvt, - work, lwork, iwork, 0) - if results['info'] > 0: - raise LinAlgError, 'SVD did not converge' - s = s.astype(_realType(result_t)) - if compute_uv: - u = u.transpose().astype(result_t) - vt = vt.transpose().astype(result_t) - return wrap(u), s, wrap(vt) - else: - return s - -def cond(x, p=None): - """ - Compute the condition number of a matrix. - - This function is capable of returning the condition number using - one of seven different norms, depending on the value of `p` (see - Parameters below). - - Parameters - ---------- - x : array_like, shape (M, N) - The matrix whose condition number is sought. - p : {None, 1, -1, 2, -2, inf, -inf, 'fro'}, optional - Order of the norm: - - ===== ============================ - p norm for matrices - ===== ============================ - None 2-norm, computed directly using the ``SVD`` - 'fro' Frobenius norm - inf max(sum(abs(x), axis=1)) - -inf min(sum(abs(x), axis=1)) - 1 max(sum(abs(x), axis=0)) - -1 min(sum(abs(x), axis=0)) - 2 2-norm (largest sing. value) - -2 smallest singular value - ===== ============================ - - inf means the numpy.inf object, and the Frobenius norm is - the root-of-sum-of-squares norm. - - Returns - ------- - c : {float, inf} - The condition number of the matrix. May be infinite. - - See Also - -------- - numpy.linalg.linalg.norm - - Notes - ----- - The condition number of `x` is defined as the norm of `x` times the - norm of the inverse of `x` [1]_; the norm can be the usual L2-norm - (root-of-sum-of-squares) or one of a number of other matrix norms. - - References - ---------- - .. [1] G. Strang, *Linear Algebra and Its Applications*, Orlando, FL, - Academic Press, Inc., 1980, pg. 285. 
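The table above reduces to a single identity for every order except the default: `cond(x, p) == norm(x, p) * norm(inv(x), p)`, with the default `p=None` taken directly from the singular values. A hedged sketch against a current NumPy (not part of the removed file), using the docstring's own matrix:

```python
import numpy as np

a = np.array([[1.0, 0.0, -1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 1.0]])

# Default p=None: ratio of largest to smallest singular value.
s = np.linalg.svd(a, compute_uv=False)
assert np.isclose(np.linalg.cond(a), s[0] / s[-1])

# For the other orders, cond(a, p) == norm(a, p) * norm(inv(a), p).
for p in (1, -1, 2, -2, np.inf, -np.inf, 'fro'):
    assert np.isclose(np.linalg.cond(a, p),
                      np.linalg.norm(a, p) * np.linalg.norm(np.linalg.inv(a), p))
```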
- - Examples - -------- - >>> from numpy import linalg as LA - >>> a = np.array([[1, 0, -1], [0, 1, 0], [1, 0, 1]]) - >>> a - array([[ 1, 0, -1], - [ 0, 1, 0], - [ 1, 0, 1]]) - >>> LA.cond(a) - 1.4142135623730951 - >>> LA.cond(a, 'fro') - 3.1622776601683795 - >>> LA.cond(a, np.inf) - 2.0 - >>> LA.cond(a, -np.inf) - 1.0 - >>> LA.cond(a, 1) - 2.0 - >>> LA.cond(a, -1) - 1.0 - >>> LA.cond(a, 2) - 1.4142135623730951 - >>> LA.cond(a, -2) - 0.70710678118654746 - >>> min(LA.svd(a, compute_uv=0))*min(LA.svd(LA.inv(a), compute_uv=0)) - 0.70710678118654746 - - """ - x = asarray(x) # in case we have a matrix - if p is None: - s = svd(x,compute_uv=False) - return s[0]/s[-1] - else: - return norm(x,p)*norm(inv(x),p) - - -def matrix_rank(M, tol=None): - """ - Return matrix rank of array using SVD method - - Rank of the array is the number of SVD singular values of the - array that are greater than `tol`. - - Parameters - ---------- - M : array_like - array of <=2 dimensions - tol : {None, float} - threshold below which SVD values are considered zero. If `tol` is - None, and ``S`` is an array with singular values for `M`, and - ``eps`` is the epsilon value for datatype of ``S``, then `tol` is - set to ``S.max() * eps``. - - Notes - ----- - Golub and van Loan [1]_ define "numerical rank deficiency" as using - tol=eps*S[0] (where S[0] is the maximum singular value and thus the - 2-norm of the matrix). This is one definition of rank deficiency, - and the one we use here. When floating point roundoff is the main - concern, then "numerical rank deficiency" is a reasonable choice. In - some cases you may prefer other definitions. The most useful measure - of the tolerance depends on the operations you intend to use on your - matrix. For example, if your data come from uncertain measurements - with uncertainties greater than floating point epsilon, choosing a - tolerance near that uncertainty may be preferable. 
The tolerance - may be absolute if the uncertainties are absolute rather than - relative. - - References - ---------- - .. [1] G. H. Golub and C. F. Van Loan, *Matrix Computations*. - Baltimore: Johns Hopkins University Press, 1996. - - Examples - -------- - >>> matrix_rank(np.eye(4)) # Full rank matrix - 4 - >>> I=np.eye(4); I[-1,-1] = 0. # rank deficient matrix - >>> matrix_rank(I) - 3 - >>> matrix_rank(np.ones((4,))) # 1 dimension - rank 1 unless all 0 - 1 - >>> matrix_rank(np.zeros((4,))) - 0 - - """ - M = asarray(M) - if M.ndim > 2: - raise TypeError('array should have 2 or fewer dimensions') - if M.ndim < 2: - return int(not all(M==0)) - S = svd(M, compute_uv=False) - if tol is None: - tol = S.max() * finfo(S.dtype).eps - return sum(S > tol) - - -# Generalized inverse - -def pinv(a, rcond=1e-15 ): - """ - Compute the (Moore-Penrose) pseudo-inverse of a matrix. - - Calculate the generalized inverse of a matrix using its - singular-value decomposition (SVD) and including all - *large* singular values. - - Parameters - ---------- - a : array_like, shape (M, N) - Matrix to be pseudo-inverted. - rcond : float - Cutoff for small singular values. - Singular values smaller (in modulus) than - `rcond` * largest_singular_value (again, in modulus) - are set to zero. - - Returns - ------- - B : ndarray, shape (N, M) - The pseudo-inverse of `a`. If `a` is a `matrix` instance, then so - is `B`. - - Raises - ------ - LinAlgError - If the SVD computation does not converge. - - Notes - ----- - The pseudo-inverse of a matrix A, denoted :math:`A^+`, is - defined as: "the matrix that 'solves' [the least-squares problem] - :math:`Ax = b`," i.e., if :math:`\\bar{x}` is said solution, then - :math:`A^+` is that matrix such that :math:`\\bar{x} = A^+b`. 
- - It can be shown that if :math:`Q_1 \\Sigma Q_2^T = A` is the singular - value decomposition of A, then - :math:`A^+ = Q_2 \\Sigma^+ Q_1^T`, where :math:`Q_{1,2}` are - orthogonal matrices, :math:`\\Sigma` is a diagonal matrix consisting - of A's so-called singular values, (followed, typically, by - zeros), and then :math:`\\Sigma^+` is simply the diagonal matrix - consisting of the reciprocals of A's singular values - (again, followed by zeros). [1]_ - - References - ---------- - .. [1] G. Strang, *Linear Algebra and Its Applications*, 2nd Ed., Orlando, - FL, Academic Press, Inc., 1980, pp. 139-142. - - Examples - -------- - The following example checks that ``a * a+ * a == a`` and - ``a+ * a * a+ == a+``: - - >>> a = np.random.randn(9, 6) - >>> B = np.linalg.pinv(a) - >>> np.allclose(a, np.dot(a, np.dot(B, a))) - True - >>> np.allclose(B, np.dot(B, np.dot(a, B))) - True - - """ - a, wrap = _makearray(a) - _assertNonEmpty(a) - a = a.conjugate() - u, s, vt = svd(a, 0) - m = u.shape[0] - n = vt.shape[1] - cutoff = rcond*maximum.reduce(s) - for i in range(min(n, m)): - if s[i] > cutoff: - s[i] = 1./s[i] - else: - s[i] = 0.; - res = dot(transpose(vt), multiply(s[:, newaxis],transpose(u))) - return wrap(res) - -# Determinant - -def slogdet(a): - """ - Compute the sign and (natural) logarithm of the determinant of an array. - - If an array has a very small or very large determinant, then a call to - `det` may overflow or underflow. This routine is more robust against such - issues, because it computes the logarithm of the determinant rather than - the determinant itself. - - Parameters - ---------- - a : array_like, shape (M, M) - Input array. - - Returns - ------- - sign : float or complex - A number representing the sign of the determinant. For a real matrix, - this is 1, 0, or -1. For a complex matrix, this is a complex number - with absolute value 1 (i.e., it is on the unit circle), or else 0.
- logdet : float - The natural log of the absolute value of the determinant. - - If the determinant is zero, then `sign` will be 0 and `logdet` will be - -Inf. In all cases, the determinant is equal to `sign * np.exp(logdet)`. - - Notes - ----- - The determinant is computed via LU factorization using the LAPACK - routine z/dgetrf. - - .. versionadded:: 2.0.0. - - Examples - -------- - The determinant of a 2-D array [[a, b], [c, d]] is ad - bc: - - >>> a = np.array([[1, 2], [3, 4]]) - >>> (sign, logdet) = np.linalg.slogdet(a) - >>> (sign, logdet) - (-1, 0.69314718055994529) - >>> sign * np.exp(logdet) - -2.0 - - This routine succeeds where ordinary `det` does not: - - >>> np.linalg.det(np.eye(500) * 0.1) - 0.0 - >>> np.linalg.slogdet(np.eye(500) * 0.1) - (1, -1151.2925464970228) - - See Also - -------- - det - - """ - a = asarray(a) - _assertRank2(a) - _assertSquareness(a) - t, result_t = _commonType(a) - a = _fastCopyAndTranspose(t, a) - a = _to_native_byte_order(a) - n = a.shape[0] - if isComplexType(t): - lapack_routine = lapack_lite.zgetrf - else: - lapack_routine = lapack_lite.dgetrf - pivots = zeros((n,), fortran_int) - results = lapack_routine(n, n, a, n, pivots, 0) - info = results['info'] - if (info < 0): - raise TypeError, "Illegal input to Fortran routine" - elif (info > 0): - return (t(0.0), _realType(t)(-Inf)) - sign = 1. - 2. * (add.reduce(pivots != arange(1, n + 1)) % 2) - d = diagonal(a) - absd = absolute(d) - sign *= multiply.reduce(d / absd) - log(absd, absd) - logdet = add.reduce(absd, axis=-1) - return sign, logdet - -def det(a): - """ - Compute the determinant of an array. - - Parameters - ---------- - a : array_like, shape (M, M) - Input array. - - Returns - ------- - det : ndarray - Determinant of `a`. - - Notes - ----- - The determinant is computed via LU factorization using the LAPACK - routine z/dgetrf. 
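The robustness claim in the notes above is easy to demonstrate directly. A sketch with a current NumPy (not part of the removed file), mirroring the 500×500 example from the slogdet docstring:

```python
import numpy as np

# det underflows to zero here, while slogdet stays finite and exact.
m = np.eye(500) * 0.1
sign, logdet = np.linalg.slogdet(m)

assert np.isclose(np.linalg.det(m), 0.0)      # 0.1**500 underflows
assert sign == 1.0
assert np.isclose(logdet, 500 * np.log(0.1))  # about -1151.29
```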
- - Examples - -------- - The determinant of a 2-D array [[a, b], [c, d]] is ad - bc: - - >>> a = np.array([[1, 2], [3, 4]]) - >>> np.linalg.det(a) - -2.0 - - See Also - -------- - slogdet : Another way of representing the determinant, more suitable - for large matrices where underflow/overflow may occur. - - """ - sign, logdet = slogdet(a) - return sign * exp(logdet) - -# Linear Least Squares - -def lstsq(a, b, rcond=-1): - """ - Return the least-squares solution to a linear matrix equation. - - Solves the equation `a x = b` by computing a vector `x` that minimizes - the norm `|| b - a x ||`. The equation may be under-, well-, or over- - determined (i.e., the number of linearly independent rows of `a` can be - less than, equal to, or greater than its number of linearly independent - columns). If `a` is square and of full rank, then `x` (but for round-off - error) is the "exact" solution of the equation. - - Parameters - ---------- - a : array_like, shape (M, N) - "Coefficient" matrix. - b : array_like, shape (M,) or (M, K) - Ordinate or "dependent variable" values. If `b` is two-dimensional, - the least-squares solution is calculated for each of the `K` columns - of `b`. - rcond : float, optional - Cut-off ratio for small singular values of `a`. - Singular values are set to zero if they are smaller than `rcond` - times the largest singular value of `a`. - - Returns - ------- - x : ndarray, shape (N,) or (N, K) - Least-squares solution. The shape of `x` depends on the shape of - `b`. - residues : ndarray, shape (), (1,), or (K,) - Sums of residues; squared Euclidean norm for each column in - ``b - a*x``. - If the rank of `a` is < N or > M, this is an empty array. - If `b` is 1-dimensional, this is a (1,) shape array. - Otherwise the shape is (K,). - rank : int - Rank of matrix `a`. - s : ndarray, shape (min(M,N),) - Singular values of `a`. - - Raises - ------ - LinAlgError - If computation does not converge.
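The line-fit example in this docstring uses the Python 2 `print` idiom and the old `rcond=-1` default. With a current NumPy the same fit reads as follows (illustrative sketch, not part of the removed file; `rcond=None` selects the modern cutoff and silences the deprecation warning):

```python
import numpy as np

# Fit y = m*x + c to four noisy points (same data as the docstring).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([-1.0, 0.2, 0.9, 2.1])
A = np.vstack([x, np.ones_like(x)]).T

coef, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
m, c = coef
assert rank == 2
assert np.isclose(m, 1.0) and np.isclose(c, -0.95)
```

Because `rank == N` and `M > N` here, `residuals` holds the single squared-residual sum for the fit.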
- - Notes - ----- - If `b` is a matrix, then all array results are returned as matrices. - - Examples - -------- - Fit a line, ``y = mx + c``, through some noisy data-points: - - >>> x = np.array([0, 1, 2, 3]) - >>> y = np.array([-1, 0.2, 0.9, 2.1]) - - By examining the coefficients, we see that the line should have a - gradient of roughly 1 and cut the y-axis at, more or less, -1. - - We can rewrite the line equation as ``y = Ap``, where ``A = [[x 1]]`` - and ``p = [[m], [c]]``. Now use `lstsq` to solve for `p`: - - >>> A = np.vstack([x, np.ones(len(x))]).T - >>> A - array([[ 0., 1.], - [ 1., 1.], - [ 2., 1.], - [ 3., 1.]]) - - >>> m, c = np.linalg.lstsq(A, y)[0] - >>> print m, c - 1.0 -0.95 - - Plot the data along with the fitted line: - - >>> import matplotlib.pyplot as plt - >>> plt.plot(x, y, 'o', label='Original data', markersize=10) - >>> plt.plot(x, m*x + c, 'r', label='Fitted line') - >>> plt.legend() - >>> plt.show() - - """ - import math - a, _ = _makearray(a) - b, wrap = _makearray(b) - is_1d = len(b.shape) == 1 - if is_1d: - b = b[:, newaxis] - _assertRank2(a, b) - m = a.shape[0] - n = a.shape[1] - n_rhs = b.shape[1] - ldb = max(n, m) - if m != b.shape[0]: - raise LinAlgError, 'Incompatible dimensions' - t, result_t = _commonType(a, b) - real_t = _linalgRealType(t) - bstar = zeros((ldb, n_rhs), t) - bstar[:b.shape[0],:n_rhs] = b.copy() - a, bstar = _fastCopyAndTranspose(t, a, bstar) - a, bstar = _to_native_byte_order(a, bstar) - s = zeros((min(m, n),), real_t) - nlvl = max( 0, int( math.log( float(min(m, n))/2. 
) ) + 1 ) - iwork = zeros((3*min(m, n)*nlvl+11*min(m, n),), fortran_int) - if isComplexType(t): - lapack_routine = lapack_lite.zgelsd - lwork = 1 - rwork = zeros((lwork,), real_t) - work = zeros((lwork,), t) - results = lapack_routine(m, n, n_rhs, a, m, bstar, ldb, s, rcond, - 0, work, -1, rwork, iwork, 0) - lwork = int(abs(work[0])) - rwork = zeros((lwork,), real_t) - a_real = zeros((m, n), real_t) - bstar_real = zeros((ldb, n_rhs,), real_t) - results = lapack_lite.dgelsd(m, n, n_rhs, a_real, m, - bstar_real, ldb, s, rcond, - 0, rwork, -1, iwork, 0) - lrwork = int(rwork[0]) - work = zeros((lwork,), t) - rwork = zeros((lrwork,), real_t) - results = lapack_routine(m, n, n_rhs, a, m, bstar, ldb, s, rcond, - 0, work, lwork, rwork, iwork, 0) - else: - lapack_routine = lapack_lite.dgelsd - lwork = 1 - work = zeros((lwork,), t) - results = lapack_routine(m, n, n_rhs, a, m, bstar, ldb, s, rcond, - 0, work, -1, iwork, 0) - lwork = int(work[0]) - work = zeros((lwork,), t) - results = lapack_routine(m, n, n_rhs, a, m, bstar, ldb, s, rcond, - 0, work, lwork, iwork, 0) - if results['info'] > 0: - raise LinAlgError, 'SVD did not converge in Linear Least Squares' - resids = array([], t) - if is_1d: - x = array(ravel(bstar)[:n], dtype=result_t, copy=True) - if results['rank'] == n and m > n: - resids = array([sum((ravel(bstar)[n:])**2)], dtype=result_t) - else: - x = array(transpose(bstar)[:n,:], dtype=result_t, copy=True) - if results['rank'] == n and m > n: - resids = sum((transpose(bstar)[n:,:])**2, axis=0).astype(result_t) - st = s[:min(n, m)].copy().astype(_realType(result_t)) - return wrap(x), wrap(resids), results['rank'], st - -def norm(x, ord=None): - """ - Matrix or vector norm. - - This function is able to return one of seven different matrix norms, - or one of an infinite number of vector norms (described below), depending - on the value of the ``ord`` parameter. - - Parameters - ---------- - x : array_like, shape (M,) or (M, N) - Input array. 
- ord : {non-zero int, inf, -inf, 'fro'}, optional - Order of the norm (see table under ``Notes``). inf means numpy's - `inf` object. - - Returns - ------- - n : float - Norm of the matrix or vector. - - Notes - ----- - For values of ``ord <= 0``, the result is, strictly speaking, not a - mathematical 'norm', but it may still be useful for various numerical - purposes. - - The following norms can be calculated: - - ===== ============================ ========================== - ord norm for matrices norm for vectors - ===== ============================ ========================== - None Frobenius norm 2-norm - 'fro' Frobenius norm -- - inf max(sum(abs(x), axis=1)) max(abs(x)) - -inf min(sum(abs(x), axis=1)) min(abs(x)) - 0 -- sum(x != 0) - 1 max(sum(abs(x), axis=0)) as below - -1 min(sum(abs(x), axis=0)) as below - 2 2-norm (largest sing. value) as below - -2 smallest singular value as below - other -- sum(abs(x)**ord)**(1./ord) - ===== ============================ ========================== - - The Frobenius norm is given by [1]_: - - :math:`||A||_F = [\\sum_{i,j} abs(a_{i,j})^2]^{1/2}` - - References - ---------- - .. [1] G. H. Golub and C. F. Van Loan, *Matrix Computations*, - Baltimore, MD, Johns Hopkins University Press, 1985, pg. 
15 - - Examples - -------- - >>> from numpy import linalg as LA - >>> a = np.arange(9) - 4 - >>> a - array([-4, -3, -2, -1, 0, 1, 2, 3, 4]) - >>> b = a.reshape((3, 3)) - >>> b - array([[-4, -3, -2], - [-1, 0, 1], - [ 2, 3, 4]]) - - >>> LA.norm(a) - 7.745966692414834 - >>> LA.norm(b) - 7.745966692414834 - >>> LA.norm(b, 'fro') - 7.745966692414834 - >>> LA.norm(a, np.inf) - 4 - >>> LA.norm(b, np.inf) - 9 - >>> LA.norm(a, -np.inf) - 0 - >>> LA.norm(b, -np.inf) - 2 - - >>> LA.norm(a, 1) - 20 - >>> LA.norm(b, 1) - 7 - >>> LA.norm(a, -1) - -4.6566128774142013e-010 - >>> LA.norm(b, -1) - 6 - >>> LA.norm(a, 2) - 7.745966692414834 - >>> LA.norm(b, 2) - 7.3484692283495345 - - >>> LA.norm(a, -2) - nan - >>> LA.norm(b, -2) - 1.8570331885190563e-016 - >>> LA.norm(a, 3) - 5.8480354764257312 - >>> LA.norm(a, -3) - nan - - """ - x = asarray(x) - if ord is None: # check the default case first and handle it immediately - return sqrt(add.reduce((x.conj() * x).ravel().real)) - - nd = x.ndim - if nd == 1: - if ord == Inf: - return abs(x).max() - elif ord == -Inf: - return abs(x).min() - elif ord == 0: - return (x != 0).sum() # Zero norm - elif ord == 1: - return abs(x).sum() # special case for speedup - elif ord == 2: - return sqrt(((x.conj()*x).real).sum()) # special case for speedup - else: - try: - ord + 1 - except TypeError: - raise ValueError, "Invalid norm order for vectors." - return ((abs(x)**ord).sum())**(1.0/ord) - elif nd == 2: - if ord == 2: - return svd(x, compute_uv=0).max() - elif ord == -2: - return svd(x, compute_uv=0).min() - elif ord == 1: - return abs(x).sum(axis=0).max() - elif ord == Inf: - return abs(x).sum(axis=1).max() - elif ord == -1: - return abs(x).sum(axis=0).min() - elif ord == -Inf: - return abs(x).sum(axis=1).min() - elif ord in ['fro','f']: - return sqrt(add.reduce((x.conj() * x).real.ravel())) - else: - raise ValueError, "Invalid norm order for matrices." - else: - raise ValueError, "Improper number of dimensions to norm." 
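The vector/matrix norm table in the `norm` docstring above can be spot-checked against a current NumPy. A sketch (not part of the removed module), reusing the docstring's arrays:

```python
import numpy as np

a = np.arange(9) - 4        # vector [-4, ..., 4]
b = a.reshape(3, 3)

assert np.isclose(np.linalg.norm(a), np.sqrt(np.sum(a * a)))  # 2-norm
assert np.linalg.norm(a, np.inf) == 4                         # max(|x|)
assert np.linalg.norm(a, 1) == 20                             # sum(|x|)
assert np.linalg.norm(b, 1) == 7          # max column sum
assert np.linalg.norm(b, np.inf) == 9     # max row sum
# Matrix 2-norm is the largest singular value.
assert np.isclose(np.linalg.norm(b, 2),
                  np.linalg.svd(b, compute_uv=False)[0])
# Frobenius norm equals the 2-norm of the flattened array.
assert np.isclose(np.linalg.norm(b, 'fro'), np.linalg.norm(a))
```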
diff --git a/pythonPackages/numpy/numpy/linalg/python_xerbla.c b/pythonPackages/numpy/numpy/linalg/python_xerbla.c deleted file mode 100755 index 4e5a68413b..0000000000 --- a/pythonPackages/numpy/numpy/linalg/python_xerbla.c +++ /dev/null @@ -1,37 +0,0 @@ -#include "Python.h" -#include "f2c.h" - -/* - From the original manpage: - -------------------------- - XERBLA is an error handler for the LAPACK routines. - It is called by an LAPACK routine if an input parameter has an invalid value. - A message is printed and execution stops. - - Instead of printing a message and stopping the execution, a - ValueError is raised with the message. - - Parameters: - ----------- - srname: Subroutine name to use in error message, maximum six characters. - Spaces at the end are skipped. - info: Number of the invalid parameter. -*/ - -int xerbla_(char *srname, integer *info) -{ - const char* format = "On entry to %.*s" \ - " parameter number %d had an illegal value"; - char buf[57 + 6 + 4]; /* 57 for strlen(format), - 6 for name, 4 for param. num. 
*/ - - int len = 0; /* length of subroutine name*/ - while( len<6 && srname[len]!='\0' ) - len++; - while( len && srname[len-1]==' ' ) - len--; - - snprintf(buf, sizeof(buf), format, len, srname, *info); - PyErr_SetString(PyExc_ValueError, buf); - return 0; -} diff --git a/pythonPackages/numpy/numpy/linalg/setup.py b/pythonPackages/numpy/numpy/linalg/setup.py deleted file mode 100755 index 1fb7a3acd0..0000000000 --- a/pythonPackages/numpy/numpy/linalg/setup.py +++ /dev/null @@ -1,37 +0,0 @@ - -import sys - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - from numpy.distutils.system_info import get_info - config = Configuration('linalg',parent_package,top_path) - - config.add_data_dir('tests') - - # Configure lapack_lite - lapack_info = get_info('lapack_opt',0) # and {} - def get_lapack_lite_sources(ext, build_dir): - if not lapack_info: - print("### Warning: Using unoptimized lapack ###") - return ext.depends[:-1] - else: - if sys.platform=='win32': - print("### Warning: python_xerbla.c is disabled ###") - return ext.depends[:1] - return ext.depends[:2] - - config.add_extension('lapack_lite', - sources = [get_lapack_lite_sources], - depends= ['lapack_litemodule.c', - 'python_xerbla.c', - 'zlapack_lite.c', 'dlapack_lite.c', - 'blas_lite.c', 'dlamch.c', - 'f2c_lite.c','f2c.h'], - extra_info = lapack_info - ) - - return config - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/linalg/setupscons.py b/pythonPackages/numpy/numpy/linalg/setupscons.py deleted file mode 100755 index fd05ce9aff..0000000000 --- a/pythonPackages/numpy/numpy/linalg/setupscons.py +++ /dev/null @@ -1,19 +0,0 @@ - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - from numpy.distutils.system_info import get_info - config = Configuration('linalg',parent_package,top_path) - - 
config.add_data_dir('tests') - - config.add_sconscript('SConstruct', - source_files = ['lapack_litemodule.c', - 'zlapack_lite.c', 'dlapack_lite.c', - 'blas_lite.c', 'dlamch.c', - 'f2c_lite.c','f2c.h']) - - return config - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/linalg/tests/test_build.py b/pythonPackages/numpy/numpy/linalg/tests/test_build.py deleted file mode 100755 index 6b6c24d617..0000000000 --- a/pythonPackages/numpy/numpy/linalg/tests/test_build.py +++ /dev/null @@ -1,50 +0,0 @@ -from subprocess import call, PIPE, Popen -import sys -import re - -import numpy as np -from numpy.linalg import lapack_lite -from numpy.testing import TestCase, dec - -from numpy.compat import asbytes_nested - -class FindDependenciesLdd: - def __init__(self): - self.cmd = ['ldd'] - - try: - st = call(self.cmd, stdout=PIPE, stderr=PIPE) - except OSError: - raise RuntimeError("command %s cannot be run" % self.cmd) - - def get_dependencies(self, file): - p = Popen(self.cmd + [file], stdout=PIPE, stderr=PIPE) - stdout, stderr = p.communicate() - if not (p.returncode == 0): - raise RuntimeError("Failed to check dependencies for %s" % libfile) - - return stdout - - def grep_dependencies(self, file, deps): - stdout = self.get_dependencies(file) - - rdeps = dict([(dep, re.compile(dep)) for dep in deps]) - founds = [] - for l in stdout.splitlines(): - for k, v in rdeps.items(): - if v.search(l): - founds.append(k) - - return founds - -class TestF77Mismatch(TestCase): - @dec.skipif(not(sys.platform[:5] == 'linux'), - "Skipping fortran compiler mismatch on non Linux platform") - def test_lapack(self): - f = FindDependenciesLdd() - deps = f.grep_dependencies(lapack_lite.__file__, - asbytes_nested(['libg2c', 'libgfortran'])) - self.assertFalse(len(deps) > 1, -"""Both g77 and gfortran runtimes linked in lapack_lite ! This is likely to -cause random crashes and wrong results. 
See numpy INSTALL.txt for more -information.""") diff --git a/pythonPackages/numpy/numpy/linalg/tests/test_linalg.py b/pythonPackages/numpy/numpy/linalg/tests/test_linalg.py deleted file mode 100755 index 3a9584dd6d..0000000000 --- a/pythonPackages/numpy/numpy/linalg/tests/test_linalg.py +++ /dev/null @@ -1,358 +0,0 @@ -""" Test functions for linalg module -""" - -import numpy as np -from numpy.testing import * -from numpy import array, single, double, csingle, cdouble, dot, identity -from numpy import multiply, atleast_2d, inf, asarray, matrix -from numpy import linalg -from numpy.linalg import matrix_power, norm, matrix_rank - -def ifthen(a, b): - return not a or b - -old_assert_almost_equal = assert_almost_equal -def imply(a, b): - return not a or b - -def assert_almost_equal(a, b, **kw): - if asarray(a).dtype.type in (single, csingle): - decimal = 6 - else: - decimal = 12 - old_assert_almost_equal(a, b, decimal=decimal, **kw) - -class LinalgTestCase: - def test_single(self): - a = array([[1.,2.], [3.,4.]], dtype=single) - b = array([2., 1.], dtype=single) - self.do(a, b) - - def test_double(self): - a = array([[1.,2.], [3.,4.]], dtype=double) - b = array([2., 1.], dtype=double) - self.do(a, b) - - def test_csingle(self): - a = array([[1.+2j,2+3j], [3+4j,4+5j]], dtype=csingle) - b = array([2.+1j, 1.+2j], dtype=csingle) - self.do(a, b) - - def test_cdouble(self): - a = array([[1.+2j,2+3j], [3+4j,4+5j]], dtype=cdouble) - b = array([2.+1j, 1.+2j], dtype=cdouble) - self.do(a, b) - - def test_empty(self): - a = atleast_2d(array([], dtype = double)) - b = atleast_2d(array([], dtype = double)) - try: - self.do(a, b) - raise AssertionError("%s should fail with empty matrices", self.__name__[5:]) - except linalg.LinAlgError, e: - pass - - def test_nonarray(self): - a = [[1,2], [3,4]] - b = [2, 1] - self.do(a,b) - - def test_matrix_b_only(self): - """Check that matrix type is preserved.""" - a = array([[1.,2.], [3.,4.]]) - b = matrix([2., 1.]).T - self.do(a, b) - - def 
test_matrix_a_and_b(self): - """Check that matrix type is preserved.""" - a = matrix([[1.,2.], [3.,4.]]) - b = matrix([2., 1.]).T - self.do(a, b) - - -class TestSolve(LinalgTestCase, TestCase): - def do(self, a, b): - x = linalg.solve(a, b) - assert_almost_equal(b, dot(a, x)) - assert imply(isinstance(b, matrix), isinstance(x, matrix)) - -class TestInv(LinalgTestCase, TestCase): - def do(self, a, b): - a_inv = linalg.inv(a) - assert_almost_equal(dot(a, a_inv), identity(asarray(a).shape[0])) - assert imply(isinstance(a, matrix), isinstance(a_inv, matrix)) - -class TestEigvals(LinalgTestCase, TestCase): - def do(self, a, b): - ev = linalg.eigvals(a) - evalues, evectors = linalg.eig(a) - assert_almost_equal(ev, evalues) - -class TestEig(LinalgTestCase, TestCase): - def do(self, a, b): - evalues, evectors = linalg.eig(a) - assert_almost_equal(dot(a, evectors), multiply(evectors, evalues)) - assert imply(isinstance(a, matrix), isinstance(evectors, matrix)) - -class TestSVD(LinalgTestCase, TestCase): - def do(self, a, b): - u, s, vt = linalg.svd(a, 0) - assert_almost_equal(a, dot(multiply(u, s), vt)) - assert imply(isinstance(a, matrix), isinstance(u, matrix)) - assert imply(isinstance(a, matrix), isinstance(vt, matrix)) - -class TestCondSVD(LinalgTestCase, TestCase): - def do(self, a, b): - c = asarray(a) # a might be a matrix - s = linalg.svd(c, compute_uv=False) - old_assert_almost_equal(s[0]/s[-1], linalg.cond(a), decimal=5) - -class TestCond2(LinalgTestCase, TestCase): - def do(self, a, b): - c = asarray(a) # a might be a matrix - s = linalg.svd(c, compute_uv=False) - old_assert_almost_equal(s[0]/s[-1], linalg.cond(a,2), decimal=5) - -class TestCondInf(TestCase): - def test(self): - A = array([[1.,0,0],[0,-2.,0],[0,0,3.]]) - assert_almost_equal(linalg.cond(A,inf),3.) 
- -class TestPinv(LinalgTestCase, TestCase): - def do(self, a, b): - a_ginv = linalg.pinv(a) - assert_almost_equal(dot(a, a_ginv), identity(asarray(a).shape[0])) - assert imply(isinstance(a, matrix), isinstance(a_ginv, matrix)) - -class TestDet(LinalgTestCase, TestCase): - def do(self, a, b): - d = linalg.det(a) - (s, ld) = linalg.slogdet(a) - if asarray(a).dtype.type in (single, double): - ad = asarray(a).astype(double) - else: - ad = asarray(a).astype(cdouble) - ev = linalg.eigvals(ad) - assert_almost_equal(d, multiply.reduce(ev)) - assert_almost_equal(s * np.exp(ld), multiply.reduce(ev)) - if s != 0: - assert_almost_equal(np.abs(s), 1) - else: - assert_equal(ld, -inf) - - def test_zero(self): - assert_equal(linalg.det([[0.0]]), 0.0) - assert_equal(type(linalg.det([[0.0]])), double) - assert_equal(linalg.det([[0.0j]]), 0.0) - assert_equal(type(linalg.det([[0.0j]])), cdouble) - - assert_equal(linalg.slogdet([[0.0]]), (0.0, -inf)) - assert_equal(type(linalg.slogdet([[0.0]])[0]), double) - assert_equal(type(linalg.slogdet([[0.0]])[1]), double) - assert_equal(linalg.slogdet([[0.0j]]), (0.0j, -inf)) - assert_equal(type(linalg.slogdet([[0.0j]])[0]), cdouble) - assert_equal(type(linalg.slogdet([[0.0j]])[1]), double) - -class TestLstsq(LinalgTestCase, TestCase): - def do(self, a, b): - u, s, vt = linalg.svd(a, 0) - x, residuals, rank, sv = linalg.lstsq(a, b) - assert_almost_equal(b, dot(a, x)) - assert_equal(rank, asarray(a).shape[0]) - assert_almost_equal(sv, sv.__array_wrap__(s)) - assert imply(isinstance(b, matrix), isinstance(x, matrix)) - assert imply(isinstance(b, matrix), isinstance(residuals, matrix)) - -class TestMatrixPower(TestCase): - R90 = array([[0,1],[-1,0]]) - Arb22 = array([[4,-7],[-2,10]]) - noninv = array([[1,0],[0,0]]) - arbfloat = array([[0.1,3.2],[1.2,0.7]]) - - large = identity(10) - t = large[1,:].copy() - large[1,:] = large[0,:] - large[0,:] = t - - def test_large_power(self): - assert_equal(matrix_power(self.R90,2L**100+2**10+2**5+1),self.R90) - 
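The exponent cases exercised by `TestMatrixPower` (positive, zero, and negative powers of the 90-degree rotation) can be spot-checked with a current NumPy; this sketch is not part of the removed test file:

```python
import numpy as np
from numpy.linalg import matrix_power

R90 = np.array([[0, 1], [-1, 0]])   # rotation by 90 degrees, order 4

assert np.array_equal(matrix_power(R90, 4), np.eye(2))      # full turn
assert np.array_equal(matrix_power(R90, 0), np.eye(2))      # identity
assert np.allclose(matrix_power(R90, -1) @ R90, np.eye(2))  # inverse
```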
- def test_large_power_trailing_zero(self): - assert_equal(matrix_power(self.R90,2L**100+2**10+2**5),identity(2)) - - def testip_zero(self): - def tz(M): - mz = matrix_power(M,0) - assert_equal(mz, identity(M.shape[0])) - assert_equal(mz.dtype, M.dtype) - for M in [self.Arb22, self.arbfloat, self.large]: - yield tz, M - - def testip_one(self): - def tz(M): - mz = matrix_power(M,1) - assert_equal(mz, M) - assert_equal(mz.dtype, M.dtype) - for M in [self.Arb22, self.arbfloat, self.large]: - yield tz, M - - def testip_two(self): - def tz(M): - mz = matrix_power(M,2) - assert_equal(mz, dot(M,M)) - assert_equal(mz.dtype, M.dtype) - for M in [self.Arb22, self.arbfloat, self.large]: - yield tz, M - - def testip_invert(self): - def tz(M): - mz = matrix_power(M,-1) - assert_almost_equal(identity(M.shape[0]), dot(mz,M)) - for M in [self.R90, self.Arb22, self.arbfloat, self.large]: - yield tz, M - - def test_invert_noninvertible(self): - import numpy.linalg - self.assertRaises(numpy.linalg.linalg.LinAlgError, - lambda: matrix_power(self.noninv,-1)) - -class TestBoolPower(TestCase): - def test_square(self): - A = array([[True,False],[True,True]]) - assert_equal(matrix_power(A,2),A) - - -class HermitianTestCase(object): - def test_single(self): - a = array([[1.,2.], [2.,1.]], dtype=single) - self.do(a) - - def test_double(self): - a = array([[1.,2.], [2.,1.]], dtype=double) - self.do(a) - - def test_csingle(self): - a = array([[1.,2+3j], [2-3j,1]], dtype=csingle) - self.do(a) - - def test_cdouble(self): - a = array([[1.,2+3j], [2-3j,1]], dtype=cdouble) - self.do(a) - - def test_empty(self): - a = atleast_2d(array([], dtype = double)) - assert_raises(linalg.LinAlgError, self.do, a) - - def test_nonarray(self): - a = [[1,2], [2,1]] - self.do(a) - - def test_matrix_b_only(self): - """Check that matrix type is preserved.""" - a = array([[1.,2.], [2.,1.]]) - self.do(a) - - def test_matrix_a_and_b(self): - """Check that matrix type is preserved.""" - a = matrix([[1.,2.], [2.,1.]]) - 
self.do(a) - -class TestEigvalsh(HermitianTestCase, TestCase): - def do(self, a): - # note that eigenvalue arrays must be sorted since - # their order isn't guaranteed. - ev = linalg.eigvalsh(a) - evalues, evectors = linalg.eig(a) - ev.sort() - evalues.sort() - assert_almost_equal(ev, evalues) - -class TestEigh(HermitianTestCase, TestCase): - def do(self, a): - # note that eigenvalue arrays must be sorted since - # their order isn't guaranteed. - ev, evc = linalg.eigh(a) - evalues, evectors = linalg.eig(a) - ev.sort() - evalues.sort() - assert_almost_equal(ev, evalues) - -class _TestNorm(TestCase): - dt = None - dec = None - def test_empty(self): - assert_equal(norm([]), 0.0) - assert_equal(norm(array([], dtype=self.dt)), 0.0) - assert_equal(norm(atleast_2d(array([], dtype=self.dt))), 0.0) - - def test_vector(self): - a = [1.0,2.0,3.0,4.0] - b = [-1.0,-2.0,-3.0,-4.0] - c = [-1.0, 2.0,-3.0, 4.0] - - def _test(v): - np.testing.assert_almost_equal(norm(v), 30**0.5, decimal=self.dec) - np.testing.assert_almost_equal(norm(v,inf), 4.0, decimal=self.dec) - np.testing.assert_almost_equal(norm(v,-inf), 1.0, decimal=self.dec) - np.testing.assert_almost_equal(norm(v,1), 10.0, decimal=self.dec) - np.testing.assert_almost_equal(norm(v,-1), 12.0/25, - decimal=self.dec) - np.testing.assert_almost_equal(norm(v,2), 30**0.5, - decimal=self.dec) - np.testing.assert_almost_equal(norm(v,-2), ((205./144)**-0.5), - decimal=self.dec) - np.testing.assert_almost_equal(norm(v,0), 4, decimal=self.dec) - - for v in (a, b, c,): - _test(v) - - for v in (array(a, dtype=self.dt), array(b, dtype=self.dt), - array(c, dtype=self.dt)): - _test(v) - - def test_matrix(self): - A = matrix([[1.,3.],[5.,7.]], dtype=self.dt) - A = matrix([[1.,3.],[5.,7.]], dtype=self.dt) - assert_almost_equal(norm(A), 84**0.5) - assert_almost_equal(norm(A,'fro'), 84**0.5) - assert_almost_equal(norm(A,inf), 12.0) - assert_almost_equal(norm(A,-inf), 4.0) - assert_almost_equal(norm(A,1), 10.0) - assert_almost_equal(norm(A,-1), 
6.0) - assert_almost_equal(norm(A,2), 9.1231056256176615) - assert_almost_equal(norm(A,-2), 0.87689437438234041) - - self.assertRaises(ValueError, norm, A, 'nofro') - self.assertRaises(ValueError, norm, A, -3) - self.assertRaises(ValueError, norm, A, 0) - -class TestNormDouble(_TestNorm): - dt = np.double - dec= 12 - -class TestNormSingle(_TestNorm): - dt = np.float32 - dec = 6 - - -def test_matrix_rank(): - # Full rank matrix - yield assert_equal, 4, matrix_rank(np.eye(4)) - # rank deficient matrix - I=np.eye(4); I[-1,-1] = 0. - yield assert_equal, matrix_rank(I), 3 - # All zeros - zero rank - yield assert_equal, matrix_rank(np.zeros((4,4))), 0 - # 1 dimension - rank 1 unless all 0 - yield assert_equal, matrix_rank([1, 0, 0, 0]), 1 - yield assert_equal, matrix_rank(np.zeros((4,))), 0 - # accepts array-like - yield assert_equal, matrix_rank([1]), 1 - # greater than 2 dimensions raises error - yield assert_raises, TypeError, matrix_rank, np.zeros((2,2,2)) - # works on scalar - yield assert_equal, matrix_rank(1), 1 - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/linalg/tests/test_regression.py b/pythonPackages/numpy/numpy/linalg/tests/test_regression.py deleted file mode 100755 index b3188f99c2..0000000000 --- a/pythonPackages/numpy/numpy/linalg/tests/test_regression.py +++ /dev/null @@ -1,71 +0,0 @@ -""" Test functions for linalg module -""" - -from numpy.testing import * -import numpy as np -from numpy import linalg, arange, float64, array, dot, transpose - -rlevel = 1 - -class TestRegression(TestCase): - def test_eig_build(self, level = rlevel): - """Ticket #652""" - rva = array([1.03221168e+02 +0.j, - -1.91843603e+01 +0.j, - -6.04004526e-01+15.84422474j, - -6.04004526e-01-15.84422474j, - -1.13692929e+01 +0.j, - -6.57612485e-01+10.41755503j, - -6.57612485e-01-10.41755503j, - 1.82126812e+01 +0.j, - 1.06011014e+01 +0.j , - 7.80732773e+00 +0.j , - -7.65390898e-01 +0.j, - 1.51971555e-15 +0.j , - -1.51308713e-15 +0.j]) - a = 
arange(13*13, dtype = float64) - a.shape = (13,13) - a = a%17 - va, ve = linalg.eig(a) - va.sort() - rva.sort() - assert_array_almost_equal(va, rva) - - def test_eigh_build(self, level = rlevel): - """Ticket 662.""" - rvals = [68.60568999, 89.57756725, 106.67185574] - - cov = array([[ 77.70273908, 3.51489954, 15.64602427], - [3.51489954, 88.97013878, -1.07431931], - [15.64602427, -1.07431931, 98.18223512]]) - - vals, vecs = linalg.eigh(cov) - assert_array_almost_equal(vals, rvals) - - def test_svd_build(self, level = rlevel): - """Ticket 627.""" - a = array([[ 0., 1.], [ 1., 1.], [ 2., 1.], [ 3., 1.]]) - m, n = a.shape - u, s, vh = linalg.svd(a) - - b = dot(transpose(u[:, n:]), a) - - assert_array_almost_equal(b, np.zeros((2, 2))) - - def test_norm_vector_badarg(self): - """Regression for #786: Froebenius norm for vectors raises - TypeError.""" - self.assertRaises(ValueError, linalg.norm, array([1., 2., 3.]), 'fro') - - def test_lapack_endian(self): - # For bug #1482 - a = array([[5.7998084, -2.1825367 ], - [-2.1825367, 9.85910595]], dtype='>f8') - b = array(a, dtype='= 0. - - ILO (input) INTEGER - IHI (input) INTEGER - The integers ILO and IHI determined by ZGEBAL. - 1 <= ILO <= IHI <= N, if N > 0; ILO=1 and IHI=0, if N=0. - - SCALE (input) DOUBLE PRECISION array, dimension (N) - Details of the permutation and scaling factors, as returned - by ZGEBAL. - - M (input) INTEGER - The number of columns of the matrix V. M >= 0. - - V (input/output) COMPLEX*16 array, dimension (LDV,M) - On entry, the matrix of right or left eigenvectors to be - transformed, as returned by ZHSEIN or ZTREVC. - On exit, V is overwritten by the transformed eigenvectors. - - LDV (input) INTEGER - The leading dimension of the array V. LDV >= max(1,N). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value. 
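The scaling half of this back-transformation has a short numpy statement. A hedged sketch, assuming JOB = 'S', real scale factors, and 0-based indexing over the balanced range (`backward_scale` is an illustrative helper, not the LAPACK routine): if B = inv(D) A D and B w = lambda w, then A (D w) = lambda (D w), so right eigenvectors are recovered by multiplying row i by SCALE(i), and left eigenvectors by dividing, exactly the two zdscal_ loops.

```python
import numpy as np

def backward_scale(V, scale, side='R'):
    """Undo diagonal balancing on eigenvector columns (illustrative helper).

    Right eigenvectors of inv(D) @ A @ D become eigenvectors of A via
    V <- D @ V; left eigenvectors use inv(D) instead, mirroring ZGEBAK."""
    scale = np.asarray(scale, dtype=float)
    D = np.diag(scale if side == 'R' else 1.0 / scale)
    return D @ np.asarray(V)

# Check the identity on a small balanced problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
scale = np.array([1.0, 2.0, 0.5, 4.0])          # a diagonal balancing D
B = np.diag(1.0 / scale) @ A @ np.diag(scale)   # balanced matrix inv(D) A D
w, Vb = np.linalg.eig(B)                        # eigenvectors of B
V = backward_scale(Vb, scale)                   # back-transform, JOB = 'S'
assert np.allclose(A @ V, V * w)                # columns are eigenvectors of A
```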
- - ===================================================================== - - - Decode and Test the input parameters -*/ - - /* Parameter adjustments */ - --scale; - v_dim1 = *ldv; - v_offset = 1 + v_dim1 * 1; - v -= v_offset; - - /* Function Body */ - rightv = lsame_(side, "R"); - leftv = lsame_(side, "L"); - - *info = 0; - if ((((! lsame_(job, "N") && ! lsame_(job, "P")) && ! lsame_(job, "S")) - && ! lsame_(job, "B"))) { - *info = -1; - } else if ((! rightv && ! leftv)) { - *info = -2; - } else if (*n < 0) { - *info = -3; - } else if (*ilo < 1 || *ilo > max(1,*n)) { - *info = -4; - } else if (*ihi < min(*ilo,*n) || *ihi > *n) { - *info = -5; - } else if (*m < 0) { - *info = -7; - } else if (*ldv < max(1,*n)) { - *info = -9; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZGEBAK", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - if (*m == 0) { - return 0; - } - if (lsame_(job, "N")) { - return 0; - } - - if (*ilo == *ihi) { - goto L30; - } - -/* Backward balance */ - - if (lsame_(job, "S") || lsame_(job, "B")) { - - if (rightv) { - i__1 = *ihi; - for (i__ = *ilo; i__ <= i__1; ++i__) { - s = scale[i__]; - zdscal_(m, &s, &v[i__ + v_dim1], ldv); -/* L10: */ - } - } - - if (leftv) { - i__1 = *ihi; - for (i__ = *ilo; i__ <= i__1; ++i__) { - s = 1. 
/ scale[i__]; - zdscal_(m, &s, &v[i__ + v_dim1], ldv); -/* L20: */ - } - } - - } - -/* - Backward permutation - - For I = ILO-1 step -1 until 1, - IHI+1 step 1 until N do -- -*/ - -L30: - if (lsame_(job, "P") || lsame_(job, "B")) { - if (rightv) { - i__1 = *n; - for (ii = 1; ii <= i__1; ++ii) { - i__ = ii; - if ((i__ >= *ilo && i__ <= *ihi)) { - goto L40; - } - if (i__ < *ilo) { - i__ = *ilo - ii; - } - k = (integer) scale[i__]; - if (k == i__) { - goto L40; - } - zswap_(m, &v[i__ + v_dim1], ldv, &v[k + v_dim1], ldv); -L40: - ; - } - } - - if (leftv) { - i__1 = *n; - for (ii = 1; ii <= i__1; ++ii) { - i__ = ii; - if ((i__ >= *ilo && i__ <= *ihi)) { - goto L50; - } - if (i__ < *ilo) { - i__ = *ilo - ii; - } - k = (integer) scale[i__]; - if (k == i__) { - goto L50; - } - zswap_(m, &v[i__ + v_dim1], ldv, &v[k + v_dim1], ldv); -L50: - ; - } - } - } - - return 0; - -/* End of ZGEBAK */ - -} /* zgebak_ */ - -/* Subroutine */ int zgebal_(char *job, integer *n, doublecomplex *a, integer - *lda, integer *ilo, integer *ihi, doublereal *scale, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - doublereal d__1, d__2; - - /* Builtin functions */ - double d_imag(doublecomplex *), z_abs(doublecomplex *); - - /* Local variables */ - static doublereal c__, f, g; - static integer i__, j, k, l, m; - static doublereal r__, s, ca, ra; - static integer ica, ira, iexc; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int zswap_(integer *, doublecomplex *, integer *, - doublecomplex *, integer *); - static doublereal sfmin1, sfmin2, sfmax1, sfmax2; - - extern /* Subroutine */ int xerbla_(char *, integer *), zdscal_( - integer *, doublereal *, doublecomplex *, integer *); - extern integer izamax_(integer *, doublecomplex *, integer *); - static logical noconv; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZGEBAL balances a general complex matrix A. This involves, first, - permuting A by a similarity transformation to isolate eigenvalues - in the first 1 to ILO-1 and last IHI+1 to N elements on the - diagonal; and second, applying a diagonal similarity transformation - to rows and columns ILO to IHI to make the rows and columns as - close in norm as possible. Both steps are optional. - - Balancing may reduce the 1-norm of the matrix, and improve the - accuracy of the computed eigenvalues and/or eigenvectors. - - Arguments - ========= - - JOB (input) CHARACTER*1 - Specifies the operations to be performed on A: - = 'N': none: simply set ILO = 1, IHI = N, SCALE(I) = 1.0 - for i = 1,...,N; - = 'P': permute only; - = 'S': scale only; - = 'B': both permute and scale. - - N (input) INTEGER - The order of the matrix A. N >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the input matrix A. - On exit, A is overwritten by the balanced matrix. - If JOB = 'N', A is not referenced. - See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - ILO (output) INTEGER - IHI (output) INTEGER - ILO and IHI are set to integers such that on exit - A(i,j) = 0 if i > j and j = 1,...,ILO-1 or I = IHI+1,...,N. - If JOB = 'N' or 'S', ILO = 1 and IHI = N. - - SCALE (output) DOUBLE PRECISION array, dimension (N) - Details of the permutations and scaling factors applied to - A. If P(j) is the index of the row and column interchanged - with row and column j and D(j) is the scaling factor - applied to row and column j, then - SCALE(j) = P(j) for j = 1,...,ILO-1 - = D(j) for j = ILO,...,IHI - = P(j) for j = IHI+1,...,N. - The order in which the interchanges are made is N to IHI+1, - then 1 to ILO-1. - - INFO (output) INTEGER - = 0: successful exit. 
- < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - The permutations consist of row and column interchanges which put - the matrix in the form - - ( T1 X Y ) - P A P = ( 0 B Z ) - ( 0 0 T2 ) - - where T1 and T2 are upper triangular matrices whose eigenvalues lie - along the diagonal. The column indices ILO and IHI mark the starting - and ending columns of the submatrix B. Balancing consists of applying - a diagonal similarity transformation inv(D) * B * D to make the - 1-norms of each row of B and its corresponding column nearly equal. - The output matrix is - - ( T1 X*D Y ) - ( 0 inv(D)*B*D inv(D)*Z ). - ( 0 0 T2 ) - - Information about the permutations P and the diagonal matrix D is - returned in the vector SCALE. - - This subroutine is based on the EISPACK routine CBAL. - - Modified by Tzu-Yi Chen, Computer Science Division, University of - California at Berkeley, USA - - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --scale; - - /* Function Body */ - *info = 0; - if ((((! lsame_(job, "N") && ! lsame_(job, "P")) && ! lsame_(job, "S")) - && ! lsame_(job, "B"))) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*n)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZGEBAL", &i__1); - return 0; - } - - k = 1; - l = *n; - - if (*n == 0) { - goto L210; - } - - if (lsame_(job, "N")) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - scale[i__] = 1.; -/* L10: */ - } - goto L210; - } - - if (lsame_(job, "S")) { - goto L120; - } - -/* Permutation to isolate eigenvalues if possible */ - - goto L50; - -/* Row and column exchange. 
*/ - -L20: - scale[m] = (doublereal) j; - if (j == m) { - goto L30; - } - - zswap_(&l, &a[j * a_dim1 + 1], &c__1, &a[m * a_dim1 + 1], &c__1); - i__1 = *n - k + 1; - zswap_(&i__1, &a[j + k * a_dim1], lda, &a[m + k * a_dim1], lda); - -L30: - switch (iexc) { - case 1: goto L40; - case 2: goto L80; - } - -/* Search for rows isolating an eigenvalue and push them down. */ - -L40: - if (l == 1) { - goto L210; - } - --l; - -L50: - for (j = l; j >= 1; --j) { - - i__1 = l; - for (i__ = 1; i__ <= i__1; ++i__) { - if (i__ == j) { - goto L60; - } - i__2 = j + i__ * a_dim1; - if (a[i__2].r != 0. || d_imag(&a[j + i__ * a_dim1]) != 0.) { - goto L70; - } -L60: - ; - } - - m = l; - iexc = 1; - goto L20; -L70: - ; - } - - goto L90; - -/* Search for columns isolating an eigenvalue and push them left. */ - -L80: - ++k; - -L90: - i__1 = l; - for (j = k; j <= i__1; ++j) { - - i__2 = l; - for (i__ = k; i__ <= i__2; ++i__) { - if (i__ == j) { - goto L100; - } - i__3 = i__ + j * a_dim1; - if (a[i__3].r != 0. || d_imag(&a[i__ + j * a_dim1]) != 0.) { - goto L110; - } -L100: - ; - } - - m = k; - iexc = 2; - goto L20; -L110: - ; - } - -L120: - i__1 = l; - for (i__ = k; i__ <= i__1; ++i__) { - scale[i__] = 1.; -/* L130: */ - } - - if (lsame_(job, "P")) { - goto L210; - } - -/* - Balance the submatrix in rows K to L. - - Iterative loop for norm reduction -*/ - - sfmin1 = SAFEMINIMUM / PRECISION; - sfmax1 = 1. / sfmin1; - sfmin2 = sfmin1 * 8.; - sfmax2 = 1. 
/ sfmin2; -L140: - noconv = FALSE_; - - i__1 = l; - for (i__ = k; i__ <= i__1; ++i__) { - c__ = 0.; - r__ = 0.; - - i__2 = l; - for (j = k; j <= i__2; ++j) { - if (j == i__) { - goto L150; - } - i__3 = j + i__ * a_dim1; - c__ += (d__1 = a[i__3].r, abs(d__1)) + (d__2 = d_imag(&a[j + i__ * - a_dim1]), abs(d__2)); - i__3 = i__ + j * a_dim1; - r__ += (d__1 = a[i__3].r, abs(d__1)) + (d__2 = d_imag(&a[i__ + j * - a_dim1]), abs(d__2)); -L150: - ; - } - ica = izamax_(&l, &a[i__ * a_dim1 + 1], &c__1); - ca = z_abs(&a[ica + i__ * a_dim1]); - i__2 = *n - k + 1; - ira = izamax_(&i__2, &a[i__ + k * a_dim1], lda); - ra = z_abs(&a[i__ + (ira + k - 1) * a_dim1]); - -/* Guard against zero C or R due to underflow. */ - - if (c__ == 0. || r__ == 0.) { - goto L200; - } - g = r__ / 8.; - f = 1.; - s = c__ + r__; -L160: -/* Computing MAX */ - d__1 = max(f,c__); -/* Computing MIN */ - d__2 = min(r__,g); - if (c__ >= g || max(d__1,ca) >= sfmax2 || min(d__2,ra) <= sfmin2) { - goto L170; - } - f *= 8.; - c__ *= 8.; - ca *= 8.; - r__ /= 8.; - g /= 8.; - ra /= 8.; - goto L160; - -L170: - g = c__ / 8.; -L180: -/* Computing MIN */ - d__1 = min(f,c__), d__1 = min(d__1,g); - if (g < r__ || max(r__,ra) >= sfmax2 || min(d__1,ca) <= sfmin2) { - goto L190; - } - f /= 8.; - c__ /= 8.; - g /= 8.; - ca /= 8.; - r__ *= 8.; - ra *= 8.; - goto L180; - -/* Now balance. */ - -L190: - if (c__ + r__ >= s * .95) { - goto L200; - } - if ((f < 1. && scale[i__] < 1.)) { - if (f * scale[i__] <= sfmin1) { - goto L200; - } - } - if ((f > 1. && scale[i__] > 1.)) { - if (scale[i__] >= sfmax1 / f) { - goto L200; - } - } - g = 1. 
/ f; - scale[i__] *= f; - noconv = TRUE_; - - i__2 = *n - k + 1; - zdscal_(&i__2, &g, &a[i__ + k * a_dim1], lda); - zdscal_(&l, &f, &a[i__ * a_dim1 + 1], &c__1); - -L200: - ; - } - - if (noconv) { - goto L140; - } - -L210: - *ilo = k; - *ihi = l; - - return 0; - -/* End of ZGEBAL */ - -} /* zgebal_ */ - -/* Subroutine */ int zgebd2_(integer *m, integer *n, doublecomplex *a, - integer *lda, doublereal *d__, doublereal *e, doublecomplex *tauq, - doublecomplex *taup, doublecomplex *work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - doublecomplex z__1; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__; - static doublecomplex alpha; - extern /* Subroutine */ int zlarf_(char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *), xerbla_(char *, integer *), zlarfg_(integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *), zlacgv_(integer *, doublecomplex *, - integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZGEBD2 reduces a complex general m by n matrix A to upper or lower - real bidiagonal form B by a unitary transformation: Q' * A * P = B. - - If m >= n, B is upper bidiagonal; if m < n, B is lower bidiagonal. - - Arguments - ========= - - M (input) INTEGER - The number of rows in the matrix A. M >= 0. - - N (input) INTEGER - The number of columns in the matrix A. N >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the m by n general matrix to be reduced. 
- On exit, - if m >= n, the diagonal and the first superdiagonal are - overwritten with the upper bidiagonal matrix B; the - elements below the diagonal, with the array TAUQ, represent - the unitary matrix Q as a product of elementary - reflectors, and the elements above the first superdiagonal, - with the array TAUP, represent the unitary matrix P as - a product of elementary reflectors; - if m < n, the diagonal and the first subdiagonal are - overwritten with the lower bidiagonal matrix B; the - elements below the first subdiagonal, with the array TAUQ, - represent the unitary matrix Q as a product of - elementary reflectors, and the elements above the diagonal, - with the array TAUP, represent the unitary matrix P as - a product of elementary reflectors. - See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - D (output) DOUBLE PRECISION array, dimension (min(M,N)) - The diagonal elements of the bidiagonal matrix B: - D(i) = A(i,i). - - E (output) DOUBLE PRECISION array, dimension (min(M,N)-1) - The off-diagonal elements of the bidiagonal matrix B: - if m >= n, E(i) = A(i,i+1) for i = 1,2,...,n-1; - if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1. - - TAUQ (output) COMPLEX*16 array dimension (min(M,N)) - The scalar factors of the elementary reflectors which - represent the unitary matrix Q. See Further Details. - - TAUP (output) COMPLEX*16 array, dimension (min(M,N)) - The scalar factors of the elementary reflectors which - represent the unitary matrix P. See Further Details. - - WORK (workspace) COMPLEX*16 array, dimension (max(M,N)) - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - The matrices Q and P are represented as products of elementary - reflectors: - - If m >= n, - - Q = H(1) H(2) . . . H(n) and P = G(1) G(2) . . . 
G(n-1) - - Each H(i) and G(i) has the form: - - H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' - - where tauq and taup are complex scalars, and v and u are complex - vectors; v(1:i-1) = 0, v(i) = 1, and v(i+1:m) is stored on exit in - A(i+1:m,i); u(1:i) = 0, u(i+1) = 1, and u(i+2:n) is stored on exit in - A(i,i+2:n); tauq is stored in TAUQ(i) and taup in TAUP(i). - - If m < n, - - Q = H(1) H(2) . . . H(m-1) and P = G(1) G(2) . . . G(m) - - Each H(i) and G(i) has the form: - - H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' - - where tauq and taup are complex scalars, v and u are complex vectors; - v(1:i) = 0, v(i+1) = 1, and v(i+2:m) is stored on exit in A(i+2:m,i); - u(1:i-1) = 0, u(i) = 1, and u(i+1:n) is stored on exit in A(i,i+1:n); - tauq is stored in TAUQ(i) and taup in TAUP(i). - - The contents of A on exit are illustrated by the following examples: - - m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): - - ( d e u1 u1 u1 ) ( d u1 u1 u1 u1 u1 ) - ( v1 d e u2 u2 ) ( e d u2 u2 u2 u2 ) - ( v1 v2 d e u3 ) ( v1 e d u3 u3 u3 ) - ( v1 v2 v3 d e ) ( v1 v2 e d u4 u4 ) - ( v1 v2 v3 v4 d ) ( v1 v2 v3 e d u5 ) - ( v1 v2 v3 v4 v5 ) - - where d and e denote diagonal and off-diagonal elements of B, vi - denotes an element of the vector defining H(i), and ui an element of - the vector defining G(i). 
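The reflector algebra above translates directly into numpy. Below is a hedged real-valued sketch of the m >= n unblocked reduction (`householder` and `bidiagonalize` are illustrative helpers under that assumption, not LAPACK's implementation, which works in complex arithmetic and stores the reflectors in A itself):

```python
import numpy as np

def householder(x):
    """Return (v, tau) with (I - tau*outer(v, v)) @ x = alpha*e1 (sketch)."""
    x = np.asarray(x, dtype=float)
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return x.copy(), 0.0
    alpha = -np.copysign(norm, x[0]) if x[0] != 0 else -norm
    v = x.copy()
    v[0] -= alpha                   # sign choice avoids cancellation
    return v, 2.0 / (v @ v)

def bidiagonalize(A):
    """Unblocked Golub-Kahan reduction, real A with m >= n: Q.T @ A @ P = B."""
    B = np.array(A, dtype=float)
    m, n = B.shape
    Q, P = np.eye(m), np.eye(n)
    for i in range(n):
        # H(i): annihilate B[i+1:, i] from the left (the TAUQ reflectors).
        v, tau = householder(B[i:, i])
        H = np.eye(m - i) - tau * np.outer(v, v)
        B[i:, :] = H @ B[i:, :]
        Q[:, i:] = Q[:, i:] @ H
        if i < n - 1:
            # G(i): annihilate B[i, i+2:] from the right (the TAUP reflectors).
            v, tau = householder(B[i, i + 1:])
            G = np.eye(n - i - 1) - tau * np.outer(v, v)
            B[:, i + 1:] = B[:, i + 1:] @ G
            P[:, i + 1:] = P[:, i + 1:] @ G
    return Q, B, P

A = np.arange(24, dtype=float).reshape(6, 4) % 7   # any m >= n test matrix
Q, B, P = bidiagonalize(A)
assert np.allclose(Q.T @ A @ P, B)                          # factorization holds
assert np.allclose(np.tril(B, -1), 0)                       # zero below diagonal
assert np.allclose(np.triu(B, 2), 0)                        # zero above 1st superdiagonal
```

Because Q and P are orthogonal, B has the same singular values as A, which is what makes this reduction the first stage of the SVD.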
- - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --d__; - --e; - --tauq; - --taup; - --work; - - /* Function Body */ - *info = 0; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } - if (*info < 0) { - i__1 = -(*info); - xerbla_("ZGEBD2", &i__1); - return 0; - } - - if (*m >= *n) { - -/* Reduce to upper bidiagonal form */ - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Generate elementary reflector H(i) to annihilate A(i+1:m,i) */ - - i__2 = i__ + i__ * a_dim1; - alpha.r = a[i__2].r, alpha.i = a[i__2].i; - i__2 = *m - i__ + 1; -/* Computing MIN */ - i__3 = i__ + 1; - zlarfg_(&i__2, &alpha, &a[min(i__3,*m) + i__ * a_dim1], &c__1, & - tauq[i__]); - i__2 = i__; - d__[i__2] = alpha.r; - i__2 = i__ + i__ * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - -/* Apply H(i)' to A(i:m,i+1:n) from the left */ - - i__2 = *m - i__ + 1; - i__3 = *n - i__; - d_cnjg(&z__1, &tauq[i__]); - zlarf_("Left", &i__2, &i__3, &a[i__ + i__ * a_dim1], &c__1, &z__1, - &a[i__ + (i__ + 1) * a_dim1], lda, &work[1]); - i__2 = i__ + i__ * a_dim1; - i__3 = i__; - a[i__2].r = d__[i__3], a[i__2].i = 0.; - - if (i__ < *n) { - -/* - Generate elementary reflector G(i) to annihilate - A(i,i+2:n) -*/ - - i__2 = *n - i__; - zlacgv_(&i__2, &a[i__ + (i__ + 1) * a_dim1], lda); - i__2 = i__ + (i__ + 1) * a_dim1; - alpha.r = a[i__2].r, alpha.i = a[i__2].i; - i__2 = *n - i__; -/* Computing MIN */ - i__3 = i__ + 2; - zlarfg_(&i__2, &alpha, &a[i__ + min(i__3,*n) * a_dim1], lda, & - taup[i__]); - i__2 = i__; - e[i__2] = alpha.r; - i__2 = i__ + (i__ + 1) * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - -/* Apply G(i) to A(i+1:m,i+1:n) from the right */ - - i__2 = *m - i__; - i__3 = *n - i__; - zlarf_("Right", &i__2, &i__3, &a[i__ + (i__ + 1) * a_dim1], - lda, &taup[i__], &a[i__ + 1 + (i__ + 1) 
* a_dim1], - lda, &work[1]); - i__2 = *n - i__; - zlacgv_(&i__2, &a[i__ + (i__ + 1) * a_dim1], lda); - i__2 = i__ + (i__ + 1) * a_dim1; - i__3 = i__; - a[i__2].r = e[i__3], a[i__2].i = 0.; - } else { - i__2 = i__; - taup[i__2].r = 0., taup[i__2].i = 0.; - } -/* L10: */ - } - } else { - -/* Reduce to lower bidiagonal form */ - - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Generate elementary reflector G(i) to annihilate A(i,i+1:n) */ - - i__2 = *n - i__ + 1; - zlacgv_(&i__2, &a[i__ + i__ * a_dim1], lda); - i__2 = i__ + i__ * a_dim1; - alpha.r = a[i__2].r, alpha.i = a[i__2].i; - i__2 = *n - i__ + 1; -/* Computing MIN */ - i__3 = i__ + 1; - zlarfg_(&i__2, &alpha, &a[i__ + min(i__3,*n) * a_dim1], lda, & - taup[i__]); - i__2 = i__; - d__[i__2] = alpha.r; - i__2 = i__ + i__ * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - -/* Apply G(i) to A(i+1:m,i:n) from the right */ - - i__2 = *m - i__; - i__3 = *n - i__ + 1; -/* Computing MIN */ - i__4 = i__ + 1; - zlarf_("Right", &i__2, &i__3, &a[i__ + i__ * a_dim1], lda, &taup[ - i__], &a[min(i__4,*m) + i__ * a_dim1], lda, &work[1]); - i__2 = *n - i__ + 1; - zlacgv_(&i__2, &a[i__ + i__ * a_dim1], lda); - i__2 = i__ + i__ * a_dim1; - i__3 = i__; - a[i__2].r = d__[i__3], a[i__2].i = 0.; - - if (i__ < *m) { - -/* - Generate elementary reflector H(i) to annihilate - A(i+2:m,i) -*/ - - i__2 = i__ + 1 + i__ * a_dim1; - alpha.r = a[i__2].r, alpha.i = a[i__2].i; - i__2 = *m - i__; -/* Computing MIN */ - i__3 = i__ + 2; - zlarfg_(&i__2, &alpha, &a[min(i__3,*m) + i__ * a_dim1], &c__1, - &tauq[i__]); - i__2 = i__; - e[i__2] = alpha.r; - i__2 = i__ + 1 + i__ * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - -/* Apply H(i)' to A(i+1:m,i+1:n) from the left */ - - i__2 = *m - i__; - i__3 = *n - i__; - d_cnjg(&z__1, &tauq[i__]); - zlarf_("Left", &i__2, &i__3, &a[i__ + 1 + i__ * a_dim1], & - c__1, &z__1, &a[i__ + 1 + (i__ + 1) * a_dim1], lda, & - work[1]); - i__2 = i__ + 1 + i__ * a_dim1; - i__3 = i__; - a[i__2].r = e[i__3], a[i__2].i = 0.; - 
} else { - i__2 = i__; - tauq[i__2].r = 0., tauq[i__2].i = 0.; - } -/* L20: */ - } - } - return 0; - -/* End of ZGEBD2 */ - -} /* zgebd2_ */ - -/* Subroutine */ int zgebrd_(integer *m, integer *n, doublecomplex *a, - integer *lda, doublereal *d__, doublereal *e, doublecomplex *tauq, - doublecomplex *taup, doublecomplex *work, integer *lwork, integer * - info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4, i__5; - doublereal d__1; - doublecomplex z__1; - - /* Local variables */ - static integer i__, j, nb, nx; - static doublereal ws; - static integer nbmin, iinfo, minmn; - extern /* Subroutine */ int zgemm_(char *, char *, integer *, integer *, - integer *, doublecomplex *, doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *), zgebd2_(integer *, integer *, - doublecomplex *, integer *, doublereal *, doublereal *, - doublecomplex *, doublecomplex *, doublecomplex *, integer *), - xerbla_(char *, integer *), zlabrd_(integer *, integer *, - integer *, doublecomplex *, integer *, doublereal *, doublereal *, - doublecomplex *, doublecomplex *, doublecomplex *, integer *, - doublecomplex *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static integer ldwrkx, ldwrky, lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZGEBRD reduces a general complex M-by-N matrix A to upper or lower - bidiagonal form B by a unitary transformation: Q**H * A * P = B. - - If m >= n, B is upper bidiagonal; if m < n, B is lower bidiagonal. - - Arguments - ========= - - M (input) INTEGER - The number of rows in the matrix A. M >= 0. - - N (input) INTEGER - The number of columns in the matrix A. N >= 0. 
- - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the M-by-N general matrix to be reduced. - On exit, - if m >= n, the diagonal and the first superdiagonal are - overwritten with the upper bidiagonal matrix B; the - elements below the diagonal, with the array TAUQ, represent - the unitary matrix Q as a product of elementary - reflectors, and the elements above the first superdiagonal, - with the array TAUP, represent the unitary matrix P as - a product of elementary reflectors; - if m < n, the diagonal and the first subdiagonal are - overwritten with the lower bidiagonal matrix B; the - elements below the first subdiagonal, with the array TAUQ, - represent the unitary matrix Q as a product of - elementary reflectors, and the elements above the diagonal, - with the array TAUP, represent the unitary matrix P as - a product of elementary reflectors. - See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - D (output) DOUBLE PRECISION array, dimension (min(M,N)) - The diagonal elements of the bidiagonal matrix B: - D(i) = A(i,i). - - E (output) DOUBLE PRECISION array, dimension (min(M,N)-1) - The off-diagonal elements of the bidiagonal matrix B: - if m >= n, E(i) = A(i,i+1) for i = 1,2,...,n-1; - if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1. - - TAUQ (output) COMPLEX*16 array dimension (min(M,N)) - The scalar factors of the elementary reflectors which - represent the unitary matrix Q. See Further Details. - - TAUP (output) COMPLEX*16 array, dimension (min(M,N)) - The scalar factors of the elementary reflectors which - represent the unitary matrix P. See Further Details. - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The length of the array WORK. LWORK >= max(1,M,N). - For optimum performance LWORK >= (M+N)*NB, where NB - is the optimal blocksize. 
- - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - The matrices Q and P are represented as products of elementary - reflectors: - - If m >= n, - - Q = H(1) H(2) . . . H(n) and P = G(1) G(2) . . . G(n-1) - - Each H(i) and G(i) has the form: - - H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' - - where tauq and taup are complex scalars, and v and u are complex - vectors; v(1:i-1) = 0, v(i) = 1, and v(i+1:m) is stored on exit in - A(i+1:m,i); u(1:i) = 0, u(i+1) = 1, and u(i+2:n) is stored on exit in - A(i,i+2:n); tauq is stored in TAUQ(i) and taup in TAUP(i). - - If m < n, - - Q = H(1) H(2) . . . H(m-1) and P = G(1) G(2) . . . G(m) - - Each H(i) and G(i) has the form: - - H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' - - where tauq and taup are complex scalars, and v and u are complex - vectors; v(1:i) = 0, v(i+1) = 1, and v(i+2:m) is stored on exit in - A(i+2:m,i); u(1:i-1) = 0, u(i) = 1, and u(i+1:n) is stored on exit in - A(i,i+1:n); tauq is stored in TAUQ(i) and taup in TAUP(i). - - The contents of A on exit are illustrated by the following examples: - - m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): - - ( d e u1 u1 u1 ) ( d u1 u1 u1 u1 u1 ) - ( v1 d e u2 u2 ) ( e d u2 u2 u2 u2 ) - ( v1 v2 d e u3 ) ( v1 e d u3 u3 u3 ) - ( v1 v2 v3 d e ) ( v1 v2 e d u4 u4 ) - ( v1 v2 v3 v4 d ) ( v1 v2 v3 e d u5 ) - ( v1 v2 v3 v4 v5 ) - - where d and e denote diagonal and off-diagonal elements of B, vi - denotes an element of the vector defining H(i), and ui an element of - the vector defining G(i). 
- - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --d__; - --e; - --tauq; - --taup; - --work; - - /* Function Body */ - *info = 0; -/* Computing MAX */ - i__1 = 1, i__2 = ilaenv_(&c__1, "ZGEBRD", " ", m, n, &c_n1, &c_n1, ( - ftnlen)6, (ftnlen)1); - nb = max(i__1,i__2); - lwkopt = (*m + *n) * nb; - d__1 = (doublereal) lwkopt; - work[1].r = d__1, work[1].i = 0.; - lquery = *lwork == -1; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } else /* if(complicated condition) */ { -/* Computing MAX */ - i__1 = max(1,*m); - if ((*lwork < max(i__1,*n) && ! lquery)) { - *info = -10; - } - } - if (*info < 0) { - i__1 = -(*info); - xerbla_("ZGEBRD", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - minmn = min(*m,*n); - if (minmn == 0) { - work[1].r = 1., work[1].i = 0.; - return 0; - } - - ws = (doublereal) max(*m,*n); - ldwrkx = *m; - ldwrky = *n; - - if ((nb > 1 && nb < minmn)) { - -/* - Set the crossover point NX. - - Computing MAX -*/ - i__1 = nb, i__2 = ilaenv_(&c__3, "ZGEBRD", " ", m, n, &c_n1, &c_n1, ( - ftnlen)6, (ftnlen)1); - nx = max(i__1,i__2); - -/* Determine when to switch from blocked to unblocked code. */ - - if (nx < minmn) { - ws = (doublereal) ((*m + *n) * nb); - if ((doublereal) (*lwork) < ws) { - -/* - Not enough work space for the optimal NB, consider using - a smaller block size. -*/ - - nbmin = ilaenv_(&c__2, "ZGEBRD", " ", m, n, &c_n1, &c_n1, ( - ftnlen)6, (ftnlen)1); - if (*lwork >= (*m + *n) * nbmin) { - nb = *lwork / (*m + *n); - } else { - nb = 1; - nx = minmn; - } - } - } - } else { - nx = minmn; - } - - i__1 = minmn - nx; - i__2 = nb; - for (i__ = 1; i__2 < 0 ? 
i__ >= i__1 : i__ <= i__1; i__ += i__2) { - -/* - Reduce rows and columns i:i+ib-1 to bidiagonal form and return - the matrices X and Y which are needed to update the unreduced - part of the matrix -*/ - - i__3 = *m - i__ + 1; - i__4 = *n - i__ + 1; - zlabrd_(&i__3, &i__4, &nb, &a[i__ + i__ * a_dim1], lda, &d__[i__], &e[ - i__], &tauq[i__], &taup[i__], &work[1], &ldwrkx, &work[ldwrkx - * nb + 1], &ldwrky); - -/* - Update the trailing submatrix A(i+ib:m,i+ib:n), using - an update of the form A := A - V*Y' - X*U' -*/ - - i__3 = *m - i__ - nb + 1; - i__4 = *n - i__ - nb + 1; - z__1.r = -1., z__1.i = -0.; - zgemm_("No transpose", "Conjugate transpose", &i__3, &i__4, &nb, & - z__1, &a[i__ + nb + i__ * a_dim1], lda, &work[ldwrkx * nb + - nb + 1], &ldwrky, &c_b60, &a[i__ + nb + (i__ + nb) * a_dim1], - lda); - i__3 = *m - i__ - nb + 1; - i__4 = *n - i__ - nb + 1; - z__1.r = -1., z__1.i = -0.; - zgemm_("No transpose", "No transpose", &i__3, &i__4, &nb, &z__1, & - work[nb + 1], &ldwrkx, &a[i__ + (i__ + nb) * a_dim1], lda, & - c_b60, &a[i__ + nb + (i__ + nb) * a_dim1], lda); - -/* Copy diagonal and off-diagonal elements of B back into A */ - - if (*m >= *n) { - i__3 = i__ + nb - 1; - for (j = i__; j <= i__3; ++j) { - i__4 = j + j * a_dim1; - i__5 = j; - a[i__4].r = d__[i__5], a[i__4].i = 0.; - i__4 = j + (j + 1) * a_dim1; - i__5 = j; - a[i__4].r = e[i__5], a[i__4].i = 0.; -/* L10: */ - } - } else { - i__3 = i__ + nb - 1; - for (j = i__; j <= i__3; ++j) { - i__4 = j + j * a_dim1; - i__5 = j; - a[i__4].r = d__[i__5], a[i__4].i = 0.; - i__4 = j + 1 + j * a_dim1; - i__5 = j; - a[i__4].r = e[i__5], a[i__4].i = 0.; -/* L20: */ - } - } -/* L30: */ - } - -/* Use unblocked code to reduce the remainder of the matrix */ - - i__2 = *m - i__ + 1; - i__1 = *n - i__ + 1; - zgebd2_(&i__2, &i__1, &a[i__ + i__ * a_dim1], lda, &d__[i__], &e[i__], & - tauq[i__], &taup[i__], &work[1], &iinfo); - work[1].r = ws, work[1].i = 0.; - return 0; - -/* End of ZGEBRD */ - -} /* zgebrd_ */ - -/* Subroutine 
*/ int zgeev_(char *jobvl, char *jobvr, integer *n, - doublecomplex *a, integer *lda, doublecomplex *w, doublecomplex *vl, - integer *ldvl, doublecomplex *vr, integer *ldvr, doublecomplex *work, - integer *lwork, doublereal *rwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, vl_dim1, vl_offset, vr_dim1, vr_offset, i__1, - i__2, i__3, i__4; - doublereal d__1, d__2; - doublecomplex z__1, z__2; - - /* Builtin functions */ - double sqrt(doublereal), d_imag(doublecomplex *); - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, k, ihi; - static doublereal scl; - static integer ilo; - static doublereal dum[1], eps; - static doublecomplex tmp; - static integer ibal; - static char side[1]; - static integer maxb; - static doublereal anrm; - static integer ierr, itau, iwrk, nout; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int zscal_(integer *, doublecomplex *, - doublecomplex *, integer *), dlabad_(doublereal *, doublereal *); - extern doublereal dznrm2_(integer *, doublecomplex *, integer *); - static logical scalea; - - static doublereal cscale; - extern /* Subroutine */ int zgebak_(char *, char *, integer *, integer *, - integer *, doublereal *, integer *, doublecomplex *, integer *, - integer *), zgebal_(char *, integer *, - doublecomplex *, integer *, integer *, integer *, doublereal *, - integer *); - extern integer idamax_(integer *, doublereal *, integer *); - extern /* Subroutine */ int xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static logical select[1]; - extern /* Subroutine */ int zdscal_(integer *, doublereal *, - doublecomplex *, integer *); - static doublereal bignum; - extern doublereal zlange_(char *, integer *, integer *, doublecomplex *, - integer *, doublereal *); - extern /* Subroutine */ int zgehrd_(integer *, integer *, integer *, - doublecomplex *, 
integer *, doublecomplex *, doublecomplex *, - integer *, integer *), zlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, doublecomplex *, - integer *, integer *), zlacpy_(char *, integer *, - integer *, doublecomplex *, integer *, doublecomplex *, integer *); - static integer minwrk, maxwrk; - static logical wantvl; - static doublereal smlnum; - static integer hswork, irwork; - extern /* Subroutine */ int zhseqr_(char *, char *, integer *, integer *, - integer *, doublecomplex *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *, integer *), ztrevc_(char *, char *, logical *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *, integer *, integer *, doublecomplex *, - doublereal *, integer *); - static logical lquery, wantvr; - extern /* Subroutine */ int zunghr_(integer *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, integer *); - - -/* - -- LAPACK driver routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZGEEV computes for an N-by-N complex nonsymmetric matrix A, the - eigenvalues and, optionally, the left and/or right eigenvectors. - - The right eigenvector v(j) of A satisfies - A * v(j) = lambda(j) * v(j) - where lambda(j) is its eigenvalue. - The left eigenvector u(j) of A satisfies - u(j)**H * A = lambda(j) * u(j)**H - where u(j)**H denotes the conjugate transpose of u(j). - - The computed eigenvectors are normalized to have Euclidean norm - equal to 1 and largest component real. - - Arguments - ========= - - JOBVL (input) CHARACTER*1 - = 'N': left eigenvectors of A are not computed; - = 'V': left eigenvectors of A are computed. 
- - JOBVR (input) CHARACTER*1 - = 'N': right eigenvectors of A are not computed; - = 'V': right eigenvectors of A are computed. - - N (input) INTEGER - The order of the matrix A. N >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the N-by-N matrix A. - On exit, A has been overwritten. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - W (output) COMPLEX*16 array, dimension (N) - W contains the computed eigenvalues. - - VL (output) COMPLEX*16 array, dimension (LDVL,N) - If JOBVL = 'V', the left eigenvectors u(j) are stored one - after another in the columns of VL, in the same order - as their eigenvalues. - If JOBVL = 'N', VL is not referenced. - u(j) = VL(:,j), the j-th column of VL. - - LDVL (input) INTEGER - The leading dimension of the array VL. LDVL >= 1; if - JOBVL = 'V', LDVL >= N. - - VR (output) COMPLEX*16 array, dimension (LDVR,N) - If JOBVR = 'V', the right eigenvectors v(j) are stored one - after another in the columns of VR, in the same order - as their eigenvalues. - If JOBVR = 'N', VR is not referenced. - v(j) = VR(:,j), the j-th column of VR. - - LDVR (input) INTEGER - The leading dimension of the array VR. LDVR >= 1; if - JOBVR = 'V', LDVR >= N. - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= max(1,2*N). - For good performance, LWORK must generally be larger. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - RWORK (workspace) DOUBLE PRECISION array, dimension (2*N) - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value. 
- - > 0: if INFO = i, the QR algorithm failed to compute all the - eigenvalues, and no eigenvectors have been computed; - elements i+1:N of W contain eigenvalues which have - converged. - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --w; - vl_dim1 = *ldvl; - vl_offset = 1 + vl_dim1 * 1; - vl -= vl_offset; - vr_dim1 = *ldvr; - vr_offset = 1 + vr_dim1 * 1; - vr -= vr_offset; - --work; - --rwork; - - /* Function Body */ - *info = 0; - lquery = *lwork == -1; - wantvl = lsame_(jobvl, "V"); - wantvr = lsame_(jobvr, "V"); - if ((! wantvl && ! lsame_(jobvl, "N"))) { - *info = -1; - } else if ((! wantvr && ! lsame_(jobvr, "N"))) { - *info = -2; - } else if (*n < 0) { - *info = -3; - } else if (*lda < max(1,*n)) { - *info = -5; - } else if (*ldvl < 1 || (wantvl && *ldvl < *n)) { - *info = -8; - } else if (*ldvr < 1 || (wantvr && *ldvr < *n)) { - *info = -10; - } - -/* - Compute workspace - (Note: Comments in the code beginning "Workspace:" describe the - minimal amount of workspace needed at that point in the code, - as well as the preferred amount for good performance. - CWorkspace refers to complex workspace, and RWorkspace to real - workspace. NB refers to the optimal block size for the - immediately following subroutine, as returned by ILAENV. - HSWORK refers to the workspace preferred by ZHSEQR, as - calculated below. HSWORK is computed assuming ILO=1 and IHI=N, - the worst case.) -*/ - - minwrk = 1; - if ((*info == 0 && (*lwork >= 1 || lquery))) { - maxwrk = *n + *n * ilaenv_(&c__1, "ZGEHRD", " ", n, &c__1, n, &c__0, ( - ftnlen)6, (ftnlen)1); - if ((! wantvl && !
wantvr)) { -/* Computing MAX */ - i__1 = 1, i__2 = (*n) << (1); - minwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = ilaenv_(&c__8, "ZHSEQR", "EN", n, &c__1, n, &c_n1, (ftnlen) - 6, (ftnlen)2); - maxb = max(i__1,2); -/* - Computing MIN - Computing MAX -*/ - i__3 = 2, i__4 = ilaenv_(&c__4, "ZHSEQR", "EN", n, &c__1, n, & - c_n1, (ftnlen)6, (ftnlen)2); - i__1 = min(maxb,*n), i__2 = max(i__3,i__4); - k = min(i__1,i__2); -/* Computing MAX */ - i__1 = k * (k + 2), i__2 = (*n) << (1); - hswork = max(i__1,i__2); - maxwrk = max(maxwrk,hswork); - } else { -/* Computing MAX */ - i__1 = 1, i__2 = (*n) << (1); - minwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *n + (*n - 1) * ilaenv_(&c__1, "ZUNGHR", - " ", n, &c__1, n, &c_n1, (ftnlen)6, (ftnlen)1); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = ilaenv_(&c__8, "ZHSEQR", "SV", n, &c__1, n, &c_n1, (ftnlen) - 6, (ftnlen)2); - maxb = max(i__1,2); -/* - Computing MIN - Computing MAX -*/ - i__3 = 2, i__4 = ilaenv_(&c__4, "ZHSEQR", "SV", n, &c__1, n, & - c_n1, (ftnlen)6, (ftnlen)2); - i__1 = min(maxb,*n), i__2 = max(i__3,i__4); - k = min(i__1,i__2); -/* Computing MAX */ - i__1 = k * (k + 2), i__2 = (*n) << (1); - hswork = max(i__1,i__2); -/* Computing MAX */ - i__1 = max(maxwrk,hswork), i__2 = (*n) << (1); - maxwrk = max(i__1,i__2); - } - work[1].r = (doublereal) maxwrk, work[1].i = 0.; - } - if ((*lwork < minwrk && ! lquery)) { - *info = -12; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZGEEV ", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - -/* Get machine constants */ - - eps = PRECISION; - smlnum = SAFEMINIMUM; - bignum = 1. / smlnum; - dlabad_(&smlnum, &bignum); - smlnum = sqrt(smlnum) / eps; - bignum = 1. / smlnum; - -/* Scale A if max element outside range [SMLNUM,BIGNUM] */ - - anrm = zlange_("M", n, n, &a[a_offset], lda, dum); - scalea = FALSE_; - if ((anrm > 0. 
&& anrm < smlnum)) { - scalea = TRUE_; - cscale = smlnum; - } else if (anrm > bignum) { - scalea = TRUE_; - cscale = bignum; - } - if (scalea) { - zlascl_("G", &c__0, &c__0, &anrm, &cscale, n, n, &a[a_offset], lda, & - ierr); - } - -/* - Balance the matrix - (CWorkspace: none) - (RWorkspace: need N) -*/ - - ibal = 1; - zgebal_("B", n, &a[a_offset], lda, &ilo, &ihi, &rwork[ibal], &ierr); - -/* - Reduce to upper Hessenberg form - (CWorkspace: need 2*N, prefer N+N*NB) - (RWorkspace: none) -*/ - - itau = 1; - iwrk = itau + *n; - i__1 = *lwork - iwrk + 1; - zgehrd_(n, &ilo, &ihi, &a[a_offset], lda, &work[itau], &work[iwrk], &i__1, - &ierr); - - if (wantvl) { - -/* - Want left eigenvectors - Copy Householder vectors to VL -*/ - - *(unsigned char *)side = 'L'; - zlacpy_("L", n, n, &a[a_offset], lda, &vl[vl_offset], ldvl) - ; - -/* - Generate unitary matrix in VL - (CWorkspace: need 2*N-1, prefer N+(N-1)*NB) - (RWorkspace: none) -*/ - - i__1 = *lwork - iwrk + 1; - zunghr_(n, &ilo, &ihi, &vl[vl_offset], ldvl, &work[itau], &work[iwrk], - &i__1, &ierr); - -/* - Perform QR iteration, accumulating Schur vectors in VL - (CWorkspace: need 1, prefer HSWORK (see comments) ) - (RWorkspace: none) -*/ - - iwrk = itau; - i__1 = *lwork - iwrk + 1; - zhseqr_("S", "V", n, &ilo, &ihi, &a[a_offset], lda, &w[1], &vl[ - vl_offset], ldvl, &work[iwrk], &i__1, info); - - if (wantvr) { - -/* - Want left and right eigenvectors - Copy Schur vectors to VR -*/ - - *(unsigned char *)side = 'B'; - zlacpy_("F", n, n, &vl[vl_offset], ldvl, &vr[vr_offset], ldvr); - } - - } else if (wantvr) { - -/* - Want right eigenvectors - Copy Householder vectors to VR -*/ - - *(unsigned char *)side = 'R'; - zlacpy_("L", n, n, &a[a_offset], lda, &vr[vr_offset], ldvr) - ; - -/* - Generate unitary matrix in VR - (CWorkspace: need 2*N-1, prefer N+(N-1)*NB) - (RWorkspace: none) -*/ - - i__1 = *lwork - iwrk + 1; - zunghr_(n, &ilo, &ihi, &vr[vr_offset], ldvr, &work[itau], &work[iwrk], - &i__1, &ierr); - -/* - Perform QR 
iteration, accumulating Schur vectors in VR - (CWorkspace: need 1, prefer HSWORK (see comments) ) - (RWorkspace: none) -*/ - - iwrk = itau; - i__1 = *lwork - iwrk + 1; - zhseqr_("S", "V", n, &ilo, &ihi, &a[a_offset], lda, &w[1], &vr[ - vr_offset], ldvr, &work[iwrk], &i__1, info); - - } else { - -/* - Compute eigenvalues only - (CWorkspace: need 1, prefer HSWORK (see comments) ) - (RWorkspace: none) -*/ - - iwrk = itau; - i__1 = *lwork - iwrk + 1; - zhseqr_("E", "N", n, &ilo, &ihi, &a[a_offset], lda, &w[1], &vr[ - vr_offset], ldvr, &work[iwrk], &i__1, info); - } - -/* If INFO > 0 from ZHSEQR, then quit */ - - if (*info > 0) { - goto L50; - } - - if (wantvl || wantvr) { - -/* - Compute left and/or right eigenvectors - (CWorkspace: need 2*N) - (RWorkspace: need 2*N) -*/ - - irwork = ibal + *n; - ztrevc_(side, "B", select, n, &a[a_offset], lda, &vl[vl_offset], ldvl, - &vr[vr_offset], ldvr, n, &nout, &work[iwrk], &rwork[irwork], - &ierr); - } - - if (wantvl) { - -/* - Undo balancing of left eigenvectors - (CWorkspace: none) - (RWorkspace: need N) -*/ - - zgebak_("B", "L", n, &ilo, &ihi, &rwork[ibal], n, &vl[vl_offset], - ldvl, &ierr); - -/* Normalize left eigenvectors and make largest component real */ - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - scl = 1. 
/ dznrm2_(n, &vl[i__ * vl_dim1 + 1], &c__1); - zdscal_(n, &scl, &vl[i__ * vl_dim1 + 1], &c__1); - i__2 = *n; - for (k = 1; k <= i__2; ++k) { - i__3 = k + i__ * vl_dim1; -/* Computing 2nd power */ - d__1 = vl[i__3].r; -/* Computing 2nd power */ - d__2 = d_imag(&vl[k + i__ * vl_dim1]); - rwork[irwork + k - 1] = d__1 * d__1 + d__2 * d__2; -/* L10: */ - } - k = idamax_(n, &rwork[irwork], &c__1); - d_cnjg(&z__2, &vl[k + i__ * vl_dim1]); - d__1 = sqrt(rwork[irwork + k - 1]); - z__1.r = z__2.r / d__1, z__1.i = z__2.i / d__1; - tmp.r = z__1.r, tmp.i = z__1.i; - zscal_(n, &tmp, &vl[i__ * vl_dim1 + 1], &c__1); - i__2 = k + i__ * vl_dim1; - i__3 = k + i__ * vl_dim1; - d__1 = vl[i__3].r; - z__1.r = d__1, z__1.i = 0.; - vl[i__2].r = z__1.r, vl[i__2].i = z__1.i; -/* L20: */ - } - } - - if (wantvr) { - -/* - Undo balancing of right eigenvectors - (CWorkspace: none) - (RWorkspace: need N) -*/ - - zgebak_("B", "R", n, &ilo, &ihi, &rwork[ibal], n, &vr[vr_offset], - ldvr, &ierr); - -/* Normalize right eigenvectors and make largest component real */ - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - scl = 1. 
/ dznrm2_(n, &vr[i__ * vr_dim1 + 1], &c__1); - zdscal_(n, &scl, &vr[i__ * vr_dim1 + 1], &c__1); - i__2 = *n; - for (k = 1; k <= i__2; ++k) { - i__3 = k + i__ * vr_dim1; -/* Computing 2nd power */ - d__1 = vr[i__3].r; -/* Computing 2nd power */ - d__2 = d_imag(&vr[k + i__ * vr_dim1]); - rwork[irwork + k - 1] = d__1 * d__1 + d__2 * d__2; -/* L30: */ - } - k = idamax_(n, &rwork[irwork], &c__1); - d_cnjg(&z__2, &vr[k + i__ * vr_dim1]); - d__1 = sqrt(rwork[irwork + k - 1]); - z__1.r = z__2.r / d__1, z__1.i = z__2.i / d__1; - tmp.r = z__1.r, tmp.i = z__1.i; - zscal_(n, &tmp, &vr[i__ * vr_dim1 + 1], &c__1); - i__2 = k + i__ * vr_dim1; - i__3 = k + i__ * vr_dim1; - d__1 = vr[i__3].r; - z__1.r = d__1, z__1.i = 0.; - vr[i__2].r = z__1.r, vr[i__2].i = z__1.i; -/* L40: */ - } - } - -/* Undo scaling if necessary */ - -L50: - if (scalea) { - i__1 = *n - *info; -/* Computing MAX */ - i__3 = *n - *info; - i__2 = max(i__3,1); - zlascl_("G", &c__0, &c__0, &cscale, &anrm, &i__1, &c__1, &w[*info + 1] - , &i__2, &ierr); - if (*info > 0) { - i__1 = ilo - 1; - zlascl_("G", &c__0, &c__0, &cscale, &anrm, &i__1, &c__1, &w[1], n, - &ierr); - } - } - - work[1].r = (doublereal) maxwrk, work[1].i = 0.; - return 0; - -/* End of ZGEEV */ - -} /* zgeev_ */ - -/* Subroutine */ int zgehd2_(integer *n, integer *ilo, integer *ihi, - doublecomplex *a, integer *lda, doublecomplex *tau, doublecomplex * - work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - doublecomplex z__1; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__; - static doublecomplex alpha; - extern /* Subroutine */ int zlarf_(char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *), xerbla_(char *, integer *), zlarfg_(integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. 
of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZGEHD2 reduces a complex general matrix A to upper Hessenberg form H - by a unitary similarity transformation: Q' * A * Q = H . - - Arguments - ========= - - N (input) INTEGER - The order of the matrix A. N >= 0. - - ILO (input) INTEGER - IHI (input) INTEGER - It is assumed that A is already upper triangular in rows - and columns 1:ILO-1 and IHI+1:N. ILO and IHI are normally - set by a previous call to ZGEBAL; otherwise they should be - set to 1 and N respectively. See Further Details. - 1 <= ILO <= IHI <= max(1,N). - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the n by n general matrix to be reduced. - On exit, the upper triangle and the first subdiagonal of A - are overwritten with the upper Hessenberg matrix H, and the - elements below the first subdiagonal, with the array TAU, - represent the unitary matrix Q as a product of elementary - reflectors. See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - TAU (output) COMPLEX*16 array, dimension (N-1) - The scalar factors of the elementary reflectors (see Further - Details). - - WORK (workspace) COMPLEX*16 array, dimension (N) - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - The matrix Q is represented as a product of (ihi-ilo) elementary - reflectors - - Q = H(ilo) H(ilo+1) . . . H(ihi-1). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a complex scalar, and v is a complex vector with - v(1:i) = 0, v(i+1) = 1 and v(ihi+1:n) = 0; v(i+2:ihi) is stored on - exit in A(i+2:ihi,i), and tau in TAU(i). 
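The Q' * A * Q = H reduction built from these reflectors can be sketched in NumPy with explicit dense reflector matrices. `householder` and `hessenberg_reduce` are illustrative names, the reflector normalization differs from ZLARFG's, and the sketch takes ILO = 1 and IHI = N (no prior balancing):

```python
import numpy as np

def householder(x):
    """Dense unitary (and Hermitian) reflector with H @ x = beta * e1."""
    H = np.eye(len(x), dtype=complex)
    nrm = np.linalg.norm(x)
    if nrm == 0.0:
        return H
    phase = x[0] / abs(x[0]) if x[0] != 0 else 1.0
    u = x.astype(complex).copy()
    u[0] += phase * nrm       # u = x - beta*e1 with beta = -phase*||x||
    return H - 2.0 * np.outer(u, u.conj()) / np.vdot(u, u)

def hessenberg_reduce(A):
    """Return Q, H with Q^H @ A @ Q = H upper Hessenberg (ILO=1, IHI=N)."""
    H = A.astype(complex).copy()
    n = H.shape[0]
    Q = np.eye(n, dtype=complex)
    for i in range(n - 2):
        # Reflector annihilating H[i+2:, i]; it is a similarity, so it is
        # applied from the left and from the right.
        R = np.eye(n, dtype=complex)
        R[i + 1:, i + 1:] = householder(H[i + 1:, i])
        H = R @ H @ R         # R is Hermitian and unitary: R^H = R
        Q = Q @ R
    return Q, H

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
Q, Hs = hessenberg_reduce(A)
assert np.allclose(Q.conj().T @ A @ Q, Hs)          # Q^H * A * Q = H
assert np.allclose(np.tril(Hs, -2), 0, atol=1e-10)  # zero below the subdiagonal
assert np.allclose(Q @ Q.conj().T, np.eye(6))       # Q is unitary
```

ZGEHD2 does the same work in place, storing each reflector vector below the subdiagonal it just created rather than forming Q.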
- - The contents of A are illustrated by the following example, with - n = 7, ilo = 2 and ihi = 6: - - on entry, on exit, - - ( a a a a a a a ) ( a a h h h h a ) - ( a a a a a a ) ( a h h h h a ) - ( a a a a a a ) ( h h h h h h ) - ( a a a a a a ) ( v2 h h h h h ) - ( a a a a a a ) ( v2 v3 h h h h ) - ( a a a a a a ) ( v2 v3 v4 h h h ) - ( a ) ( a ) - - where a denotes an element of the original matrix A, h denotes a - modified element of the upper Hessenberg matrix H, and vi denotes an - element of the vector defining H(i). - - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - if (*n < 0) { - *info = -1; - } else if (*ilo < 1 || *ilo > max(1,*n)) { - *info = -2; - } else if (*ihi < min(*ilo,*n) || *ihi > *n) { - *info = -3; - } else if (*lda < max(1,*n)) { - *info = -5; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZGEHD2", &i__1); - return 0; - } - - i__1 = *ihi - 1; - for (i__ = *ilo; i__ <= i__1; ++i__) { - -/* Compute elementary reflector H(i) to annihilate A(i+2:ihi,i) */ - - i__2 = i__ + 1 + i__ * a_dim1; - alpha.r = a[i__2].r, alpha.i = a[i__2].i; - i__2 = *ihi - i__; -/* Computing MIN */ - i__3 = i__ + 2; - zlarfg_(&i__2, &alpha, &a[min(i__3,*n) + i__ * a_dim1], &c__1, &tau[ - i__]); - i__2 = i__ + 1 + i__ * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - -/* Apply H(i) to A(1:ihi,i+1:ihi) from the right */ - - i__2 = *ihi - i__; - zlarf_("Right", ihi, &i__2, &a[i__ + 1 + i__ * a_dim1], &c__1, &tau[ - i__], &a[(i__ + 1) * a_dim1 + 1], lda, &work[1]); - -/* Apply H(i)' to A(i+1:ihi,i+1:n) from the left */ - - i__2 = *ihi - i__; - i__3 = *n - i__; - d_cnjg(&z__1, &tau[i__]); - zlarf_("Left", &i__2, &i__3, &a[i__ + 1 + i__ * a_dim1], &c__1, &z__1, - &a[i__ + 1 + (i__ + 1) * a_dim1], lda, &work[1]); - - i__2 = i__ + 1 + i__ * a_dim1; - a[i__2].r = 
alpha.r, a[i__2].i = alpha.i; -/* L10: */ - } - - return 0; - -/* End of ZGEHD2 */ - -} /* zgehd2_ */ - -/* Subroutine */ int zgehrd_(integer *n, integer *ilo, integer *ihi, - doublecomplex *a, integer *lda, doublecomplex *tau, doublecomplex * - work, integer *lwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - doublecomplex z__1; - - /* Local variables */ - static integer i__; - static doublecomplex t[4160] /* was [65][64] */; - static integer ib; - static doublecomplex ei; - static integer nb, nh, nx, iws, nbmin, iinfo; - extern /* Subroutine */ int zgemm_(char *, char *, integer *, integer *, - integer *, doublecomplex *, doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *), zgehd2_(integer *, integer *, integer - *, doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int zlarfb_(char *, char *, char *, char *, - integer *, integer *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *), - zlahrd_(integer *, integer *, integer *, doublecomplex *, integer - *, doublecomplex *, doublecomplex *, integer *, doublecomplex *, - integer *); - static integer ldwork, lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZGEHRD reduces a complex general matrix A to upper Hessenberg form H - by a unitary similarity transformation: Q' * A * Q = H . - - Arguments - ========= - - N (input) INTEGER - The order of the matrix A. N >= 0. 
- - ILO (input) INTEGER - IHI (input) INTEGER - It is assumed that A is already upper triangular in rows - and columns 1:ILO-1 and IHI+1:N. ILO and IHI are normally - set by a previous call to ZGEBAL; otherwise they should be - set to 1 and N respectively. See Further Details. - 1 <= ILO <= IHI <= N, if N > 0; ILO=1 and IHI=0, if N=0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the N-by-N general matrix to be reduced. - On exit, the upper triangle and the first subdiagonal of A - are overwritten with the upper Hessenberg matrix H, and the - elements below the first subdiagonal, with the array TAU, - represent the unitary matrix Q as a product of elementary - reflectors. See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - TAU (output) COMPLEX*16 array, dimension (N-1) - The scalar factors of the elementary reflectors (see Further - Details). Elements 1:ILO-1 and IHI:N-1 of TAU are set to - zero. - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The length of the array WORK. LWORK >= max(1,N). - For optimum performance LWORK >= N*NB, where NB is the - optimal blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - The matrix Q is represented as a product of (ihi-ilo) elementary - reflectors - - Q = H(ilo) H(ilo+1) . . . H(ihi-1). 
- - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a complex scalar, and v is a complex vector with - v(1:i) = 0, v(i+1) = 1 and v(ihi+1:n) = 0; v(i+2:ihi) is stored on - exit in A(i+2:ihi,i), and tau in TAU(i). - - The contents of A are illustrated by the following example, with - n = 7, ilo = 2 and ihi = 6: - - on entry, on exit, - - ( a a a a a a a ) ( a a h h h h a ) - ( a a a a a a ) ( a h h h h a ) - ( a a a a a a ) ( h h h h h h ) - ( a a a a a a ) ( v2 h h h h h ) - ( a a a a a a ) ( v2 v3 h h h h ) - ( a a a a a a ) ( v2 v3 v4 h h h ) - ( a ) ( a ) - - where a denotes an element of the original matrix A, h denotes a - modified element of the upper Hessenberg matrix H, and vi denotes an - element of the vector defining H(i). - - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; -/* Computing MIN */ - i__1 = 64, i__2 = ilaenv_(&c__1, "ZGEHRD", " ", n, ilo, ihi, &c_n1, ( - ftnlen)6, (ftnlen)1); - nb = min(i__1,i__2); - lwkopt = *n * nb; - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - lquery = *lwork == -1; - if (*n < 0) { - *info = -1; - } else if (*ilo < 1 || *ilo > max(1,*n)) { - *info = -2; - } else if (*ihi < min(*ilo,*n) || *ihi > *n) { - *info = -3; - } else if (*lda < max(1,*n)) { - *info = -5; - } else if ((*lwork < max(1,*n) && ! 
lquery)) { - *info = -8; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZGEHRD", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Set elements 1:ILO-1 and IHI:N-1 of TAU to zero */ - - i__1 = *ilo - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__; - tau[i__2].r = 0., tau[i__2].i = 0.; -/* L10: */ - } - i__1 = *n - 1; - for (i__ = max(1,*ihi); i__ <= i__1; ++i__) { - i__2 = i__; - tau[i__2].r = 0., tau[i__2].i = 0.; -/* L20: */ - } - -/* Quick return if possible */ - - nh = *ihi - *ilo + 1; - if (nh <= 1) { - work[1].r = 1., work[1].i = 0.; - return 0; - } - - nbmin = 2; - iws = 1; - if ((nb > 1 && nb < nh)) { - -/* - Determine when to cross over from blocked to unblocked code - (last block is always handled by unblocked code). - - Computing MAX -*/ - i__1 = nb, i__2 = ilaenv_(&c__3, "ZGEHRD", " ", n, ilo, ihi, &c_n1, ( - ftnlen)6, (ftnlen)1); - nx = max(i__1,i__2); - if (nx < nh) { - -/* Determine if workspace is large enough for blocked code. */ - - iws = *n * nb; - if (*lwork < iws) { - -/* - Not enough workspace to use optimal NB: determine the - minimum value of NB, and reduce NB or force use of - unblocked code. - - Computing MAX -*/ - i__1 = 2, i__2 = ilaenv_(&c__2, "ZGEHRD", " ", n, ilo, ihi, & - c_n1, (ftnlen)6, (ftnlen)1); - nbmin = max(i__1,i__2); - if (*lwork >= *n * nbmin) { - nb = *lwork / *n; - } else { - nb = 1; - } - } - } - } - ldwork = *n; - - if (nb < nbmin || nb >= nh) { - -/* Use unblocked code below */ - - i__ = *ilo; - - } else { - -/* Use blocked code */ - - i__1 = *ihi - 1 - nx; - i__2 = nb; - for (i__ = *ilo; i__2 < 0 ? 
i__ >= i__1 : i__ <= i__1; i__ += i__2) { -/* Computing MIN */ - i__3 = nb, i__4 = *ihi - i__; - ib = min(i__3,i__4); - -/* - Reduce columns i:i+ib-1 to Hessenberg form, returning the - matrices V and T of the block reflector H = I - V*T*V' - which performs the reduction, and also the matrix Y = A*V*T -*/ - - zlahrd_(ihi, &i__, &ib, &a[i__ * a_dim1 + 1], lda, &tau[i__], t, & - c__65, &work[1], &ldwork); - -/* - Apply the block reflector H to A(1:ihi,i+ib:ihi) from the - right, computing A := A - Y * V'. V(i+ib,ib-1) must be set - to 1. -*/ - - i__3 = i__ + ib + (i__ + ib - 1) * a_dim1; - ei.r = a[i__3].r, ei.i = a[i__3].i; - i__3 = i__ + ib + (i__ + ib - 1) * a_dim1; - a[i__3].r = 1., a[i__3].i = 0.; - i__3 = *ihi - i__ - ib + 1; - z__1.r = -1., z__1.i = -0.; - zgemm_("No transpose", "Conjugate transpose", ihi, &i__3, &ib, & - z__1, &work[1], &ldwork, &a[i__ + ib + i__ * a_dim1], lda, - &c_b60, &a[(i__ + ib) * a_dim1 + 1], lda); - i__3 = i__ + ib + (i__ + ib - 1) * a_dim1; - a[i__3].r = ei.r, a[i__3].i = ei.i; - -/* - Apply the block reflector H to A(i+1:ihi,i+ib:n) from the - left -*/ - - i__3 = *ihi - i__; - i__4 = *n - i__ - ib + 1; - zlarfb_("Left", "Conjugate transpose", "Forward", "Columnwise", & - i__3, &i__4, &ib, &a[i__ + 1 + i__ * a_dim1], lda, t, & - c__65, &a[i__ + 1 + (i__ + ib) * a_dim1], lda, &work[1], & - ldwork); -/* L30: */ - } - } - -/* Use unblocked code to reduce the rest of the matrix */ - - zgehd2_(n, &i__, ihi, &a[a_offset], lda, &tau[1], &work[1], &iinfo); - work[1].r = (doublereal) iws, work[1].i = 0.; - - return 0; - -/* End of ZGEHRD */ - -} /* zgehrd_ */ - -/* Subroutine */ int zgelq2_(integer *m, integer *n, doublecomplex *a, - integer *lda, doublecomplex *tau, doublecomplex *work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__, k; - static doublecomplex alpha; - extern /* Subroutine */ int zlarf_(char *, integer *, integer *, - 
doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *), xerbla_(char *, integer *), zlarfg_(integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *), zlacgv_(integer *, doublecomplex *, - integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZGELQ2 computes an LQ factorization of a complex m by n matrix A: - A = L * Q. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the m by n matrix A. - On exit, the elements on and below the diagonal of the array - contain the m by min(m,n) lower trapezoidal matrix L (L is - lower triangular if m <= n); the elements above the diagonal, - with the array TAU, represent the unitary matrix Q as a - product of elementary reflectors (see Further Details). - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - TAU (output) COMPLEX*16 array, dimension (min(M,N)) - The scalar factors of the elementary reflectors (see Further - Details). - - WORK (workspace) COMPLEX*16 array, dimension (M) - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - Further Details - =============== - - The matrix Q is represented as a product of elementary reflectors - - Q = H(k)' . . . H(2)' H(1)', where k = min(m,n). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a complex scalar, and v is a complex vector with - v(1:i-1) = 0 and v(i) = 1; conjg(v(i+1:n)) is stored on exit in - A(i,i+1:n), and tau in TAU(i). 
- - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZGELQ2", &i__1); - return 0; - } - - k = min(*m,*n); - - i__1 = k; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Generate elementary reflector H(i) to annihilate A(i,i+1:n) */ - - i__2 = *n - i__ + 1; - zlacgv_(&i__2, &a[i__ + i__ * a_dim1], lda); - i__2 = i__ + i__ * a_dim1; - alpha.r = a[i__2].r, alpha.i = a[i__2].i; - i__2 = *n - i__ + 1; -/* Computing MIN */ - i__3 = i__ + 1; - zlarfg_(&i__2, &alpha, &a[i__ + min(i__3,*n) * a_dim1], lda, &tau[i__] - ); - if (i__ < *m) { - -/* Apply H(i) to A(i+1:m,i:n) from the right */ - - i__2 = i__ + i__ * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - i__2 = *m - i__; - i__3 = *n - i__ + 1; - zlarf_("Right", &i__2, &i__3, &a[i__ + i__ * a_dim1], lda, &tau[ - i__], &a[i__ + 1 + i__ * a_dim1], lda, &work[1]); - } - i__2 = i__ + i__ * a_dim1; - a[i__2].r = alpha.r, a[i__2].i = alpha.i; - i__2 = *n - i__ + 1; - zlacgv_(&i__2, &a[i__ + i__ * a_dim1], lda); -/* L10: */ - } - return 0; - -/* End of ZGELQ2 */ - -} /* zgelq2_ */ - -/* Subroutine */ int zgelqf_(integer *m, integer *n, doublecomplex *a, - integer *lda, doublecomplex *tau, doublecomplex *work, integer *lwork, - integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - - /* Local variables */ - static integer i__, k, ib, nb, nx, iws, nbmin, iinfo; - extern /* Subroutine */ int zgelq2_(integer *, integer *, doublecomplex *, - integer *, doublecomplex *, doublecomplex *, integer *), xerbla_( - char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, 
ftnlen, ftnlen); - extern /* Subroutine */ int zlarfb_(char *, char *, char *, char *, - integer *, integer *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *); - static integer ldwork; - extern /* Subroutine */ int zlarft_(char *, char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *); - static integer lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZGELQF computes an LQ factorization of a complex M-by-N matrix A: - A = L * Q. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the M-by-N matrix A. - On exit, the elements on and below the diagonal of the array - contain the m-by-min(m,n) lower trapezoidal matrix L (L is - lower triangular if m <= n); the elements above the diagonal, - with the array TAU, represent the unitary matrix Q as a - product of elementary reflectors (see Further Details). - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - TAU (output) COMPLEX*16 array, dimension (min(M,N)) - The scalar factors of the elementary reflectors (see Further - Details). - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= max(1,M). - For optimum performance LWORK >= M*NB, where NB is the - optimal blocksize. 
- - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - Further Details - =============== - - The matrix Q is represented as a product of elementary reflectors - - Q = H(k)' . . . H(2)' H(1)', where k = min(m,n). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a complex scalar, and v is a complex vector with - v(1:i-1) = 0 and v(i) = 1; conjg(v(i+1:n)) is stored on exit in - A(i,i+1:n), and tau in TAU(i). - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - nb = ilaenv_(&c__1, "ZGELQF", " ", m, n, &c_n1, &c_n1, (ftnlen)6, (ftnlen) - 1); - lwkopt = *m * nb; - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - lquery = *lwork == -1; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } else if ((*lwork < max(1,*m) && ! lquery)) { - *info = -7; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZGELQF", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - k = min(*m,*n); - if (k == 0) { - work[1].r = 1., work[1].i = 0.; - return 0; - } - - nbmin = 2; - nx = 0; - iws = *m; - if ((nb > 1 && nb < k)) { - -/* - Determine when to cross over from blocked to unblocked code. - - Computing MAX -*/ - i__1 = 0, i__2 = ilaenv_(&c__3, "ZGELQF", " ", m, n, &c_n1, &c_n1, ( - ftnlen)6, (ftnlen)1); - nx = max(i__1,i__2); - if (nx < k) { - -/* Determine if workspace is large enough for blocked code. 
*/ - - ldwork = *m; - iws = ldwork * nb; - if (*lwork < iws) { - -/* - Not enough workspace to use optimal NB: reduce NB and - determine the minimum value of NB. -*/ - - nb = *lwork / ldwork; -/* Computing MAX */ - i__1 = 2, i__2 = ilaenv_(&c__2, "ZGELQF", " ", m, n, &c_n1, & - c_n1, (ftnlen)6, (ftnlen)1); - nbmin = max(i__1,i__2); - } - } - } - - if (((nb >= nbmin && nb < k) && nx < k)) { - -/* Use blocked code initially */ - - i__1 = k - nx; - i__2 = nb; - for (i__ = 1; i__2 < 0 ? i__ >= i__1 : i__ <= i__1; i__ += i__2) { -/* Computing MIN */ - i__3 = k - i__ + 1; - ib = min(i__3,nb); - -/* - Compute the LQ factorization of the current block - A(i:i+ib-1,i:n) -*/ - - i__3 = *n - i__ + 1; - zgelq2_(&ib, &i__3, &a[i__ + i__ * a_dim1], lda, &tau[i__], &work[ - 1], &iinfo); - if (i__ + ib <= *m) { - -/* - Form the triangular factor of the block reflector - H = H(i) H(i+1) . . . H(i+ib-1) -*/ - - i__3 = *n - i__ + 1; - zlarft_("Forward", "Rowwise", &i__3, &ib, &a[i__ + i__ * - a_dim1], lda, &tau[i__], &work[1], &ldwork); - -/* Apply H to A(i+ib:m,i:n) from the right */ - - i__3 = *m - i__ - ib + 1; - i__4 = *n - i__ + 1; - zlarfb_("Right", "No transpose", "Forward", "Rowwise", &i__3, - &i__4, &ib, &a[i__ + i__ * a_dim1], lda, &work[1], & - ldwork, &a[i__ + ib + i__ * a_dim1], lda, &work[ib + - 1], &ldwork); - } -/* L10: */ - } - } else { - i__ = 1; - } - -/* Use unblocked code to factor the last or only block. 
*/ - - if (i__ <= k) { - i__2 = *m - i__ + 1; - i__1 = *n - i__ + 1; - zgelq2_(&i__2, &i__1, &a[i__ + i__ * a_dim1], lda, &tau[i__], &work[1] - , &iinfo); - } - - work[1].r = (doublereal) iws, work[1].i = 0.; - return 0; - -/* End of ZGELQF */ - -} /* zgelqf_ */ - -/* Subroutine */ int zgelsd_(integer *m, integer *n, integer *nrhs, - doublecomplex *a, integer *lda, doublecomplex *b, integer *ldb, - doublereal *s, doublereal *rcond, integer *rank, doublecomplex *work, - integer *lwork, doublereal *rwork, integer *iwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, i__1, i__2, i__3, i__4; - doublereal d__1; - doublecomplex z__1; - - /* Local variables */ - static integer ie, il, mm; - static doublereal eps, anrm, bnrm; - static integer itau, iascl, ibscl; - static doublereal sfmin; - static integer minmn, maxmn, itaup, itauq, mnthr, nwork; - extern /* Subroutine */ int dlabad_(doublereal *, doublereal *); - - extern /* Subroutine */ int dlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, doublereal *, - integer *, integer *), dlaset_(char *, integer *, integer - *, doublereal *, doublereal *, doublereal *, integer *), - xerbla_(char *, integer *), zgebrd_(integer *, integer *, - doublecomplex *, integer *, doublereal *, doublereal *, - doublecomplex *, doublecomplex *, doublecomplex *, integer *, - integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern doublereal zlange_(char *, integer *, integer *, doublecomplex *, - integer *, doublereal *); - static doublereal bignum; - extern /* Subroutine */ int zgelqf_(integer *, integer *, doublecomplex *, - integer *, doublecomplex *, doublecomplex *, integer *, integer * - ), zlalsd_(char *, integer *, integer *, integer *, doublereal *, - doublereal *, doublecomplex *, integer *, doublereal *, integer *, - doublecomplex *, doublereal *, integer *, integer 
*), - zlascl_(char *, integer *, integer *, doublereal *, doublereal *, - integer *, integer *, doublecomplex *, integer *, integer *), zgeqrf_(integer *, integer *, doublecomplex *, integer *, - doublecomplex *, doublecomplex *, integer *, integer *); - static integer ldwork; - extern /* Subroutine */ int zlacpy_(char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *), - zlaset_(char *, integer *, integer *, doublecomplex *, - doublecomplex *, doublecomplex *, integer *); - static integer minwrk, maxwrk; - static doublereal smlnum; - extern /* Subroutine */ int zunmbr_(char *, char *, char *, integer *, - integer *, integer *, doublecomplex *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *, integer * - ); - static logical lquery; - static integer nrwork, smlsiz; - extern /* Subroutine */ int zunmlq_(char *, char *, integer *, integer *, - integer *, doublecomplex *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *, integer *), zunmqr_(char *, char *, integer *, integer *, - integer *, doublecomplex *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *, integer *); - - -/* - -- LAPACK driver routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1999 - - - Purpose - ======= - - ZGELSD computes the minimum-norm solution to a complex linear least - squares problem: - minimize 2-norm(| b - A*x |) - using the singular value decomposition (SVD) of A. A is an M-by-N - matrix which may be rank-deficient. - - Several right hand side vectors b and solution vectors x can be - handled in a single call; they are stored as the columns of the - M-by-NRHS right hand side matrix B and the N-by-NRHS solution - matrix X. 
- - The problem is solved in three steps: - (1) Reduce the coefficient matrix A to bidiagonal form with - Householder transformations, reducing the original problem - into a "bidiagonal least squares problem" (BLS) - (2) Solve the BLS using a divide and conquer approach. - (3) Apply back all the Householder transformations to solve - the original least squares problem. - - The effective rank of A is determined by treating as zero those - singular values which are less than RCOND times the largest singular - value. - - The divide and conquer algorithm makes very mild assumptions about - floating point arithmetic. It will work on machines with a guard - digit in add/subtract, or on those binary machines without guard - digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or - Cray-2. It could conceivably fail on hexadecimal or decimal machines - without guard digits, but we know of none. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - NRHS (input) INTEGER - The number of right hand sides, i.e., the number of columns - of the matrices B and X. NRHS >= 0. - - A (input) COMPLEX*16 array, dimension (LDA,N) - On entry, the M-by-N matrix A. - On exit, A has been destroyed. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - B (input/output) COMPLEX*16 array, dimension (LDB,NRHS) - On entry, the M-by-NRHS right hand side matrix B. - On exit, B is overwritten by the N-by-NRHS solution matrix X. - If m >= n and RANK = n, the residual sum-of-squares for - the solution in the i-th column is given by the sum of - squares of elements n+1:m in that column. - - LDB (input) INTEGER - The leading dimension of the array B. LDB >= max(1,M,N). - - S (output) DOUBLE PRECISION array, dimension (min(M,N)) - The singular values of A in decreasing order. - The condition number of A in the 2-norm = S(1)/S(min(m,n)). 
- - RCOND (input) DOUBLE PRECISION - RCOND is used to determine the effective rank of A. - Singular values S(i) <= RCOND*S(1) are treated as zero. - If RCOND < 0, machine precision is used instead. - - RANK (output) INTEGER - The effective rank of A, i.e., the number of singular values - which are greater than RCOND*S(1). - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK must be at least 1. - The exact minimum amount of workspace needed depends on M, - N and NRHS. As long as LWORK is at least - 2 * N + N * NRHS - if M is greater than or equal to N or - 2 * M + M * NRHS - if M is less than N, the code will execute correctly. - For good performance, LWORK should generally be larger. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - RWORK (workspace) DOUBLE PRECISION array, dimension at least - 10*N + 2*N*SMLSIZ + 8*N*NLVL + 3*SMLSIZ*NRHS + - (SMLSIZ+1)**2 - if M is greater than or equal to N or - 10*M + 2*M*SMLSIZ + 8*M*NLVL + 3*SMLSIZ*NRHS + - (SMLSIZ+1)**2 - if M is less than N, the code will execute correctly. - SMLSIZ is returned by ILAENV and is equal to the maximum - size of the subproblems at the bottom of the computation - tree (usually about 25), and - NLVL = MAX( 0, INT( LOG_2( MIN( M,N )/(SMLSIZ+1) ) ) + 1 ) - - IWORK (workspace) INTEGER array, dimension (LIWORK) - LIWORK >= 3 * MINMN * NLVL + 11 * MINMN, - where MINMN = MIN( M,N ). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: the algorithm for computing the SVD failed to converge; - if INFO = i, i off-diagonal elements of an intermediate - bidiagonal form did not converge to zero. 
- - Further Details - =============== - - Based on contributions by - Ming Gu and Ren-Cang Li, Computer Science Division, University of - California at Berkeley, USA - Osni Marques, LBNL/NERSC, USA - - ===================================================================== - - - Test the input arguments. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - --s; - --work; - --rwork; - --iwork; - - /* Function Body */ - *info = 0; - minmn = min(*m,*n); - maxmn = max(*m,*n); - mnthr = ilaenv_(&c__6, "ZGELSD", " ", m, n, nrhs, &c_n1, (ftnlen)6, ( - ftnlen)1); - lquery = *lwork == -1; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*nrhs < 0) { - *info = -3; - } else if (*lda < max(1,*m)) { - *info = -5; - } else if (*ldb < max(1,maxmn)) { - *info = -7; - } - - smlsiz = ilaenv_(&c__9, "ZGELSD", " ", &c__0, &c__0, &c__0, &c__0, ( - ftnlen)6, (ftnlen)1); - -/* - Compute workspace. - (Note: Comments in the code beginning "Workspace:" describe the - minimal amount of workspace needed at that point in the code, - as well as the preferred amount for good performance. - NB refers to the optimal block size for the immediately - following subroutine, as returned by ILAENV.) -*/ - - minwrk = 1; - if (*info == 0) { - maxwrk = 0; - mm = *m; - if ((*m >= *n && *m >= mnthr)) { - -/* Path 1a - overdetermined, with many more rows than columns. */ - - mm = *n; -/* Computing MAX */ - i__1 = maxwrk, i__2 = *n * ilaenv_(&c__1, "ZGEQRF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *nrhs * ilaenv_(&c__1, "ZUNMQR", "LC", m, - nrhs, n, &c_n1, (ftnlen)6, (ftnlen)2); - maxwrk = max(i__1,i__2); - } - if (*m >= *n) { - -/* - Path 1 - overdetermined or exactly determined. 
- - Computing MAX -*/ - i__1 = maxwrk, i__2 = ((*n) << (1)) + (mm + *n) * ilaenv_(&c__1, - "ZGEBRD", " ", &mm, n, &c_n1, &c_n1, (ftnlen)6, (ftnlen)1) - ; - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + *nrhs * ilaenv_(&c__1, - "ZUNMBR", "QLC", &mm, nrhs, n, &c_n1, (ftnlen)6, (ftnlen) - 3); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + (*n - 1) * ilaenv_(&c__1, - "ZUNMBR", "PLN", n, nrhs, n, &c_n1, (ftnlen)6, (ftnlen)3); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + *n * *nrhs; - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = ((*n) << (1)) + mm, i__2 = ((*n) << (1)) + *n * *nrhs; - minwrk = max(i__1,i__2); - } - if (*n > *m) { - if (*n >= mnthr) { - -/* - Path 2a - underdetermined, with many more columns - than rows. -*/ - - maxwrk = *m + *m * ilaenv_(&c__1, "ZGELQF", " ", m, n, &c_n1, - &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m * *m + ((*m) << (2)) + ((*m) << (1)) - * ilaenv_(&c__1, "ZGEBRD", " ", m, m, &c_n1, &c_n1, ( - ftnlen)6, (ftnlen)1); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m * *m + ((*m) << (2)) + *nrhs * - ilaenv_(&c__1, "ZUNMBR", "QLC", m, nrhs, m, &c_n1, ( - ftnlen)6, (ftnlen)3); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m * *m + ((*m) << (2)) + (*m - 1) * - ilaenv_(&c__1, "ZUNMLQ", "LC", n, nrhs, m, &c_n1, ( - ftnlen)6, (ftnlen)2); - maxwrk = max(i__1,i__2); - if (*nrhs > 1) { -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m * *m + *m + *m * *nrhs; - maxwrk = max(i__1,i__2); - } else { -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m * *m + ((*m) << (1)); - maxwrk = max(i__1,i__2); - } -/* Computing MAX */ - i__1 = maxwrk, i__2 = *m * *m + ((*m) << (2)) + *m * *nrhs; - maxwrk = max(i__1,i__2); - } else { - -/* Path 2 - underdetermined. 
*/ - - maxwrk = ((*m) << (1)) + (*n + *m) * ilaenv_(&c__1, "ZGEBRD", - " ", m, n, &c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) + *nrhs * ilaenv_(&c__1, - "ZUNMBR", "QLC", m, nrhs, m, &c_n1, (ftnlen)6, ( - ftnlen)3); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNMBR", "PLN", n, nrhs, m, &c_n1, (ftnlen)6, ( - ftnlen)3); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) + *m * *nrhs; - maxwrk = max(i__1,i__2); - } -/* Computing MAX */ - i__1 = ((*m) << (1)) + *n, i__2 = ((*m) << (1)) + *m * *nrhs; - minwrk = max(i__1,i__2); - } - minwrk = min(minwrk,maxwrk); - d__1 = (doublereal) maxwrk; - z__1.r = d__1, z__1.i = 0.; - work[1].r = z__1.r, work[1].i = z__1.i; - if ((*lwork < minwrk && ! lquery)) { - *info = -12; - } - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZGELSD", &i__1); - return 0; - } else if (lquery) { - goto L10; - } - -/* Quick return if possible. */ - - if (*m == 0 || *n == 0) { - *rank = 0; - return 0; - } - -/* Get machine parameters. */ - - eps = PRECISION; - sfmin = SAFEMINIMUM; - smlnum = sfmin / eps; - bignum = 1. / smlnum; - dlabad_(&smlnum, &bignum); - -/* Scale A if max entry outside range [SMLNUM,BIGNUM]. */ - - anrm = zlange_("M", m, n, &a[a_offset], lda, &rwork[1]); - iascl = 0; - if ((anrm > 0. && anrm < smlnum)) { - -/* Scale matrix norm up to SMLNUM */ - - zlascl_("G", &c__0, &c__0, &anrm, &smlnum, m, n, &a[a_offset], lda, - info); - iascl = 1; - } else if (anrm > bignum) { - -/* Scale matrix norm down to BIGNUM. */ - - zlascl_("G", &c__0, &c__0, &anrm, &bignum, m, n, &a[a_offset], lda, - info); - iascl = 2; - } else if (anrm == 0.) { - -/* Matrix all zero. Return zero solution. 
*/ - - i__1 = max(*m,*n); - zlaset_("F", &i__1, nrhs, &c_b59, &c_b59, &b[b_offset], ldb); - dlaset_("F", &minmn, &c__1, &c_b324, &c_b324, &s[1], &c__1) - ; - *rank = 0; - goto L10; - } - -/* Scale B if max entry outside range [SMLNUM,BIGNUM]. */ - - bnrm = zlange_("M", m, nrhs, &b[b_offset], ldb, &rwork[1]); - ibscl = 0; - if ((bnrm > 0. && bnrm < smlnum)) { - -/* Scale matrix norm up to SMLNUM. */ - - zlascl_("G", &c__0, &c__0, &bnrm, &smlnum, m, nrhs, &b[b_offset], ldb, - info); - ibscl = 1; - } else if (bnrm > bignum) { - -/* Scale matrix norm down to BIGNUM. */ - - zlascl_("G", &c__0, &c__0, &bnrm, &bignum, m, nrhs, &b[b_offset], ldb, - info); - ibscl = 2; - } - -/* If M < N make sure B(M+1:N,:) = 0 */ - - if (*m < *n) { - i__1 = *n - *m; - zlaset_("F", &i__1, nrhs, &c_b59, &c_b59, &b[*m + 1 + b_dim1], ldb); - } - -/* Overdetermined case. */ - - if (*m >= *n) { - -/* Path 1 - overdetermined or exactly determined. */ - - mm = *m; - if (*m >= mnthr) { - -/* Path 1a - overdetermined, with many more rows than columns */ - - mm = *n; - itau = 1; - nwork = itau + *n; - -/* - Compute A=Q*R. - (RWorkspace: need N) - (CWorkspace: need N, prefer N*NB) -*/ - - i__1 = *lwork - nwork + 1; - zgeqrf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], &i__1, - info); - -/* - Multiply B by transpose(Q). - (RWorkspace: need N) - (CWorkspace: need NRHS, prefer NRHS*NB) -*/ - - i__1 = *lwork - nwork + 1; - zunmqr_("L", "C", m, nrhs, n, &a[a_offset], lda, &work[itau], &b[ - b_offset], ldb, &work[nwork], &i__1, info); - -/* Zero out below R. */ - - if (*n > 1) { - i__1 = *n - 1; - i__2 = *n - 1; - zlaset_("L", &i__1, &i__2, &c_b59, &c_b59, &a[a_dim1 + 2], - lda); - } - } - - itauq = 1; - itaup = itauq + *n; - nwork = itaup + *n; - ie = 1; - nrwork = ie + *n; - -/* - Bidiagonalize R in A. 
- (RWorkspace: need N) - (CWorkspace: need 2*N+MM, prefer 2*N+(MM+N)*NB) -*/ - - i__1 = *lwork - nwork + 1; - zgebrd_(&mm, n, &a[a_offset], lda, &s[1], &rwork[ie], &work[itauq], & - work[itaup], &work[nwork], &i__1, info); - -/* - Multiply B by transpose of left bidiagonalizing vectors of R. - (CWorkspace: need 2*N+NRHS, prefer 2*N+NRHS*NB) -*/ - - i__1 = *lwork - nwork + 1; - zunmbr_("Q", "L", "C", &mm, nrhs, n, &a[a_offset], lda, &work[itauq], - &b[b_offset], ldb, &work[nwork], &i__1, info); - -/* Solve the bidiagonal least squares problem. */ - - zlalsd_("U", &smlsiz, n, nrhs, &s[1], &rwork[ie], &b[b_offset], ldb, - rcond, rank, &work[nwork], &rwork[nrwork], &iwork[1], info); - if (*info != 0) { - goto L10; - } - -/* Multiply B by right bidiagonalizing vectors of R. */ - - i__1 = *lwork - nwork + 1; - zunmbr_("P", "L", "N", n, nrhs, n, &a[a_offset], lda, &work[itaup], & - b[b_offset], ldb, &work[nwork], &i__1, info); - - } else /* if(complicated condition) */ { -/* Computing MAX */ - i__1 = *m, i__2 = ((*m) << (1)) - 4, i__1 = max(i__1,i__2), i__1 = - max(i__1,*nrhs), i__2 = *n - *m * 3; - if ((*n >= mnthr && *lwork >= ((*m) << (2)) + *m * *m + max(i__1,i__2) - )) { - -/* - Path 2a - underdetermined, with many more columns than rows - and sufficient workspace for an efficient algorithm. -*/ - - ldwork = *m; -/* - Computing MAX - Computing MAX -*/ - i__3 = *m, i__4 = ((*m) << (1)) - 4, i__3 = max(i__3,i__4), i__3 = - max(i__3,*nrhs), i__4 = *n - *m * 3; - i__1 = ((*m) << (2)) + *m * *lda + max(i__3,i__4), i__2 = *m * * - lda + *m + *m * *nrhs; - if (*lwork >= max(i__1,i__2)) { - ldwork = *lda; - } - itau = 1; - nwork = *m + 1; - -/* - Compute A=L*Q. - (CWorkspace: need 2*M, prefer M+M*NB) -*/ - - i__1 = *lwork - nwork + 1; - zgelqf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], &i__1, - info); - il = nwork; - -/* Copy L to WORK(IL), zeroing out above its diagonal. 
*/ - - zlacpy_("L", m, m, &a[a_offset], lda, &work[il], &ldwork); - i__1 = *m - 1; - i__2 = *m - 1; - zlaset_("U", &i__1, &i__2, &c_b59, &c_b59, &work[il + ldwork], & - ldwork); - itauq = il + ldwork * *m; - itaup = itauq + *m; - nwork = itaup + *m; - ie = 1; - nrwork = ie + *m; - -/* - Bidiagonalize L in WORK(IL). - (RWorkspace: need M) - (CWorkspace: need M*M+4*M, prefer M*M+4*M+2*M*NB) -*/ - - i__1 = *lwork - nwork + 1; - zgebrd_(m, m, &work[il], &ldwork, &s[1], &rwork[ie], &work[itauq], - &work[itaup], &work[nwork], &i__1, info); - -/* - Multiply B by transpose of left bidiagonalizing vectors of L. - (CWorkspace: need M*M+4*M+NRHS, prefer M*M+4*M+NRHS*NB) -*/ - - i__1 = *lwork - nwork + 1; - zunmbr_("Q", "L", "C", m, nrhs, m, &work[il], &ldwork, &work[ - itauq], &b[b_offset], ldb, &work[nwork], &i__1, info); - -/* Solve the bidiagonal least squares problem. */ - - zlalsd_("U", &smlsiz, m, nrhs, &s[1], &rwork[ie], &b[b_offset], - ldb, rcond, rank, &work[nwork], &rwork[nrwork], &iwork[1], - info); - if (*info != 0) { - goto L10; - } - -/* Multiply B by right bidiagonalizing vectors of L. */ - - i__1 = *lwork - nwork + 1; - zunmbr_("P", "L", "N", m, nrhs, m, &work[il], &ldwork, &work[ - itaup], &b[b_offset], ldb, &work[nwork], &i__1, info); - -/* Zero out below first M rows of B. */ - - i__1 = *n - *m; - zlaset_("F", &i__1, nrhs, &c_b59, &c_b59, &b[*m + 1 + b_dim1], - ldb); - nwork = itau + *m; - -/* - Multiply transpose(Q) by B. - (CWorkspace: need NRHS, prefer NRHS*NB) -*/ - - i__1 = *lwork - nwork + 1; - zunmlq_("L", "C", n, nrhs, m, &a[a_offset], lda, &work[itau], &b[ - b_offset], ldb, &work[nwork], &i__1, info); - - } else { - -/* Path 2 - remaining underdetermined cases. */ - - itauq = 1; - itaup = itauq + *m; - nwork = itaup + *m; - ie = 1; - nrwork = ie + *m; - -/* - Bidiagonalize A. 
- (RWorkspace: need M) - (CWorkspace: need 2*M+N, prefer 2*M+(M+N)*NB) -*/ - - i__1 = *lwork - nwork + 1; - zgebrd_(m, n, &a[a_offset], lda, &s[1], &rwork[ie], &work[itauq], - &work[itaup], &work[nwork], &i__1, info); - -/* - Multiply B by transpose of left bidiagonalizing vectors. - (CWorkspace: need 2*M+NRHS, prefer 2*M+NRHS*NB) -*/ - - i__1 = *lwork - nwork + 1; - zunmbr_("Q", "L", "C", m, nrhs, n, &a[a_offset], lda, &work[itauq] - , &b[b_offset], ldb, &work[nwork], &i__1, info); - -/* Solve the bidiagonal least squares problem. */ - - zlalsd_("L", &smlsiz, m, nrhs, &s[1], &rwork[ie], &b[b_offset], - ldb, rcond, rank, &work[nwork], &rwork[nrwork], &iwork[1], - info); - if (*info != 0) { - goto L10; - } - -/* Multiply B by right bidiagonalizing vectors of A. */ - - i__1 = *lwork - nwork + 1; - zunmbr_("P", "L", "N", n, nrhs, m, &a[a_offset], lda, &work[itaup] - , &b[b_offset], ldb, &work[nwork], &i__1, info); - - } - } - -/* Undo scaling. */ - - if (iascl == 1) { - zlascl_("G", &c__0, &c__0, &anrm, &smlnum, n, nrhs, &b[b_offset], ldb, - info); - dlascl_("G", &c__0, &c__0, &smlnum, &anrm, &minmn, &c__1, &s[1], & - minmn, info); - } else if (iascl == 2) { - zlascl_("G", &c__0, &c__0, &anrm, &bignum, n, nrhs, &b[b_offset], ldb, - info); - dlascl_("G", &c__0, &c__0, &bignum, &anrm, &minmn, &c__1, &s[1], & - minmn, info); - } - if (ibscl == 1) { - zlascl_("G", &c__0, &c__0, &smlnum, &bnrm, n, nrhs, &b[b_offset], ldb, - info); - } else if (ibscl == 2) { - zlascl_("G", &c__0, &c__0, &bignum, &bnrm, n, nrhs, &b[b_offset], ldb, - info); - } - -L10: - d__1 = (doublereal) maxwrk; - z__1.r = d__1, z__1.i = 0.; - work[1].r = z__1.r, work[1].i = z__1.i; - return 0; - -/* End of ZGELSD */ - -} /* zgelsd_ */ - -/* Subroutine */ int zgeqr2_(integer *m, integer *n, doublecomplex *a, - integer *lda, doublecomplex *tau, doublecomplex *work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - doublecomplex z__1; - - /* Builtin functions 
*/ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, k; - static doublecomplex alpha; - extern /* Subroutine */ int zlarf_(char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *), xerbla_(char *, integer *), zlarfg_(integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZGEQR2 computes a QR factorization of a complex m by n matrix A: - A = Q * R. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the m by n matrix A. - On exit, the elements on and above the diagonal of the array - contain the min(m,n) by n upper trapezoidal matrix R (R is - upper triangular if m >= n); the elements below the diagonal, - with the array TAU, represent the unitary matrix Q as a - product of elementary reflectors (see Further Details). - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - TAU (output) COMPLEX*16 array, dimension (min(M,N)) - The scalar factors of the elementary reflectors (see Further - Details). - - WORK (workspace) COMPLEX*16 array, dimension (N) - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - Further Details - =============== - - The matrix Q is represented as a product of elementary reflectors - - Q = H(1) H(2) . . . H(k), where k = min(m,n). 
- - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a complex scalar, and v is a complex vector with - v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(i+1:m,i), - and tau in TAU(i). - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZGEQR2", &i__1); - return 0; - } - - k = min(*m,*n); - - i__1 = k; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Generate elementary reflector H(i) to annihilate A(i+1:m,i) */ - - i__2 = *m - i__ + 1; -/* Computing MIN */ - i__3 = i__ + 1; - zlarfg_(&i__2, &a[i__ + i__ * a_dim1], &a[min(i__3,*m) + i__ * a_dim1] - , &c__1, &tau[i__]); - if (i__ < *n) { - -/* Apply H(i)' to A(i:m,i+1:n) from the left */ - - i__2 = i__ + i__ * a_dim1; - alpha.r = a[i__2].r, alpha.i = a[i__2].i; - i__2 = i__ + i__ * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - i__2 = *m - i__ + 1; - i__3 = *n - i__; - d_cnjg(&z__1, &tau[i__]); - zlarf_("Left", &i__2, &i__3, &a[i__ + i__ * a_dim1], &c__1, &z__1, - &a[i__ + (i__ + 1) * a_dim1], lda, &work[1]); - i__2 = i__ + i__ * a_dim1; - a[i__2].r = alpha.r, a[i__2].i = alpha.i; - } -/* L10: */ - } - return 0; - -/* End of ZGEQR2 */ - -} /* zgeqr2_ */ - -/* Subroutine */ int zgeqrf_(integer *m, integer *n, doublecomplex *a, - integer *lda, doublecomplex *tau, doublecomplex *work, integer *lwork, - integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - - /* Local variables */ - static integer i__, k, ib, nb, nx, iws, nbmin, iinfo; - extern /* Subroutine */ int zgeqr2_(integer *, integer *, doublecomplex *, - integer *, doublecomplex *, doublecomplex *, integer *), xerbla_( - 
char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int zlarfb_(char *, char *, char *, char *, - integer *, integer *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *); - static integer ldwork; - extern /* Subroutine */ int zlarft_(char *, char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *); - static integer lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZGEQRF computes a QR factorization of a complex M-by-N matrix A: - A = Q * R. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the M-by-N matrix A. - On exit, the elements on and above the diagonal of the array - contain the min(M,N)-by-N upper trapezoidal matrix R (R is - upper triangular if m >= n); the elements below the diagonal, - with the array TAU, represent the unitary matrix Q as a - product of min(m,n) elementary reflectors (see Further - Details). - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - TAU (output) COMPLEX*16 array, dimension (min(M,N)) - The scalar factors of the elementary reflectors (see Further - Details). - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= max(1,N). - For optimum performance LWORK >= N*NB, where NB is - the optimal blocksize. 
- - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - Further Details - =============== - - The matrix Q is represented as a product of elementary reflectors - - Q = H(1) H(2) . . . H(k), where k = min(m,n). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a complex scalar, and v is a complex vector with - v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(i+1:m,i), - and tau in TAU(i). - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - nb = ilaenv_(&c__1, "ZGEQRF", " ", m, n, &c_n1, &c_n1, (ftnlen)6, (ftnlen) - 1); - lwkopt = *n * nb; - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - lquery = *lwork == -1; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } else if ((*lwork < max(1,*n) && ! lquery)) { - *info = -7; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZGEQRF", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - k = min(*m,*n); - if (k == 0) { - work[1].r = 1., work[1].i = 0.; - return 0; - } - - nbmin = 2; - nx = 0; - iws = *n; - if ((nb > 1 && nb < k)) { - -/* - Determine when to cross over from blocked to unblocked code. - - Computing MAX -*/ - i__1 = 0, i__2 = ilaenv_(&c__3, "ZGEQRF", " ", m, n, &c_n1, &c_n1, ( - ftnlen)6, (ftnlen)1); - nx = max(i__1,i__2); - if (nx < k) { - -/* Determine if workspace is large enough for blocked code. 
*/ - - ldwork = *n; - iws = ldwork * nb; - if (*lwork < iws) { - -/* - Not enough workspace to use optimal NB: reduce NB and - determine the minimum value of NB. -*/ - - nb = *lwork / ldwork; -/* Computing MAX */ - i__1 = 2, i__2 = ilaenv_(&c__2, "ZGEQRF", " ", m, n, &c_n1, & - c_n1, (ftnlen)6, (ftnlen)1); - nbmin = max(i__1,i__2); - } - } - } - - if (((nb >= nbmin && nb < k) && nx < k)) { - -/* Use blocked code initially */ - - i__1 = k - nx; - i__2 = nb; - for (i__ = 1; i__2 < 0 ? i__ >= i__1 : i__ <= i__1; i__ += i__2) { -/* Computing MIN */ - i__3 = k - i__ + 1; - ib = min(i__3,nb); - -/* - Compute the QR factorization of the current block - A(i:m,i:i+ib-1) -*/ - - i__3 = *m - i__ + 1; - zgeqr2_(&i__3, &ib, &a[i__ + i__ * a_dim1], lda, &tau[i__], &work[ - 1], &iinfo); - if (i__ + ib <= *n) { - -/* - Form the triangular factor of the block reflector - H = H(i) H(i+1) . . . H(i+ib-1) -*/ - - i__3 = *m - i__ + 1; - zlarft_("Forward", "Columnwise", &i__3, &ib, &a[i__ + i__ * - a_dim1], lda, &tau[i__], &work[1], &ldwork); - -/* Apply H' to A(i:m,i+ib:n) from the left */ - - i__3 = *m - i__ + 1; - i__4 = *n - i__ - ib + 1; - zlarfb_("Left", "Conjugate transpose", "Forward", "Columnwise" - , &i__3, &i__4, &ib, &a[i__ + i__ * a_dim1], lda, & - work[1], &ldwork, &a[i__ + (i__ + ib) * a_dim1], lda, - &work[ib + 1], &ldwork); - } -/* L10: */ - } - } else { - i__ = 1; - } - -/* Use unblocked code to factor the last or only block. 
*/ - - if (i__ <= k) { - i__2 = *m - i__ + 1; - i__1 = *n - i__ + 1; - zgeqr2_(&i__2, &i__1, &a[i__ + i__ * a_dim1], lda, &tau[i__], &work[1] - , &iinfo); - } - - work[1].r = (doublereal) iws, work[1].i = 0.; - return 0; - -/* End of ZGEQRF */ - -} /* zgeqrf_ */ - -/* Subroutine */ int zgesdd_(char *jobz, integer *m, integer *n, - doublecomplex *a, integer *lda, doublereal *s, doublecomplex *u, - integer *ldu, doublecomplex *vt, integer *ldvt, doublecomplex *work, - integer *lwork, doublereal *rwork, integer *iwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, u_dim1, u_offset, vt_dim1, vt_offset, i__1, - i__2, i__3; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static integer i__, ie, il, ir, iu, blk; - static doublereal dum[1], eps; - static integer iru, ivt, iscl; - static doublereal anrm; - static integer idum[1], ierr, itau, irvt; - extern logical lsame_(char *, char *); - static integer chunk, minmn; - extern /* Subroutine */ int zgemm_(char *, char *, integer *, integer *, - integer *, doublecomplex *, doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *); - static integer wrkbl, itaup, itauq; - static logical wntqa; - static integer nwork; - static logical wntqn, wntqo, wntqs; - extern /* Subroutine */ int zlacp2_(char *, integer *, integer *, - doublereal *, integer *, doublecomplex *, integer *); - static integer mnthr1, mnthr2; - extern /* Subroutine */ int dbdsdc_(char *, char *, integer *, doublereal - *, doublereal *, doublereal *, integer *, doublereal *, integer *, - doublereal *, integer *, doublereal *, integer *, integer *); - - extern /* Subroutine */ int dlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, doublereal *, - integer *, integer *), xerbla_(char *, integer *), - zgebrd_(integer *, integer *, doublecomplex *, integer *, - doublereal *, doublereal *, doublecomplex *, 
doublecomplex *, - doublecomplex *, integer *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static doublereal bignum; - extern doublereal zlange_(char *, integer *, integer *, doublecomplex *, - integer *, doublereal *); - extern /* Subroutine */ int zgelqf_(integer *, integer *, doublecomplex *, - integer *, doublecomplex *, doublecomplex *, integer *, integer * - ), zlacrm_(integer *, integer *, doublecomplex *, integer *, - doublereal *, integer *, doublecomplex *, integer *, doublereal *) - , zlarcm_(integer *, integer *, doublereal *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *, - doublereal *), zlascl_(char *, integer *, integer *, doublereal *, - doublereal *, integer *, integer *, doublecomplex *, integer *, - integer *), zgeqrf_(integer *, integer *, doublecomplex *, - integer *, doublecomplex *, doublecomplex *, integer *, integer * - ); - static integer ldwrkl; - extern /* Subroutine */ int zlacpy_(char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *), - zlaset_(char *, integer *, integer *, doublecomplex *, - doublecomplex *, doublecomplex *, integer *); - static integer ldwrkr, minwrk, ldwrku, maxwrk; - extern /* Subroutine */ int zungbr_(char *, integer *, integer *, integer - *, doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, integer *); - static integer ldwkvt; - static doublereal smlnum; - static logical wntqas; - extern /* Subroutine */ int zunmbr_(char *, char *, char *, integer *, - integer *, integer *, doublecomplex *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *, integer * - ), zunglq_(integer *, integer *, integer * - , doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, integer *); - static logical lquery; - static integer nrwork; - extern /* Subroutine */ int zungqr_(integer *, integer *, integer *, - 
doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, integer *); - - -/* - -- LAPACK driver routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1999 - - - Purpose - ======= - - ZGESDD computes the singular value decomposition (SVD) of a complex - M-by-N matrix A, optionally computing the left and/or right singular - vectors, by using a divide-and-conquer method. The SVD is written - - A = U * SIGMA * conjugate-transpose(V) - - where SIGMA is an M-by-N matrix which is zero except for its - min(m,n) diagonal elements, U is an M-by-M unitary matrix, and - V is an N-by-N unitary matrix. The diagonal elements of SIGMA - are the singular values of A; they are real and non-negative, and - are returned in descending order. The first min(m,n) columns of - U and V are the left and right singular vectors of A. - - Note that the routine returns VT = V**H, not V. - - The divide-and-conquer algorithm makes very mild assumptions about - floating point arithmetic. It will work on machines with a guard - digit in add/subtract, or on those binary machines without guard - digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or - Cray-2. It could conceivably fail on hexadecimal or decimal machines - without guard digits, but we know of none. 
- - Arguments - ========= - - JOBZ (input) CHARACTER*1 - Specifies options for computing all or part of the matrix U: - = 'A': all M columns of U and all N rows of V**H are - returned in the arrays U and VT; - = 'S': the first min(M,N) columns of U and the first - min(M,N) rows of V**H are returned in the arrays U - and VT; - = 'O': If M >= N, the first N columns of U are overwritten - on the array A and all rows of V**H are returned in - the array VT; - otherwise, all columns of U are returned in the - array U and the first M rows of V**H are overwritten - in the array VT; - = 'N': no columns of U or rows of V**H are computed. - - M (input) INTEGER - The number of rows of the input matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the input matrix A. N >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the M-by-N matrix A. - On exit, - if JOBZ = 'O', A is overwritten with the first N columns - of U (the left singular vectors, stored - columnwise) if M >= N; - A is overwritten with the first M rows - of V**H (the right singular vectors, stored - rowwise) otherwise. - if JOBZ .ne. 'O', the contents of A are destroyed. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - S (output) DOUBLE PRECISION array, dimension (min(M,N)) - The singular values of A, sorted so that S(i) >= S(i+1). - - U (output) COMPLEX*16 array, dimension (LDU,UCOL) - UCOL = M if JOBZ = 'A' or JOBZ = 'O' and M < N; - UCOL = min(M,N) if JOBZ = 'S'. - If JOBZ = 'A' or JOBZ = 'O' and M < N, U contains the M-by-M - unitary matrix U; - if JOBZ = 'S', U contains the first min(M,N) columns of U - (the left singular vectors, stored columnwise); - if JOBZ = 'O' and M >= N, or JOBZ = 'N', U is not referenced. - - LDU (input) INTEGER - The leading dimension of the array U. LDU >= 1; if - JOBZ = 'S' or 'A' or JOBZ = 'O' and M < N, LDU >= M. 
- - VT (output) COMPLEX*16 array, dimension (LDVT,N) - If JOBZ = 'A' or JOBZ = 'O' and M >= N, VT contains the - N-by-N unitary matrix V**H; - if JOBZ = 'S', VT contains the first min(M,N) rows of - V**H (the right singular vectors, stored rowwise); - if JOBZ = 'O' and M < N, or JOBZ = 'N', VT is not referenced. - - LDVT (input) INTEGER - The leading dimension of the array VT. LDVT >= 1; if - JOBZ = 'A' or JOBZ = 'O' and M >= N, LDVT >= N; - if JOBZ = 'S', LDVT >= min(M,N). - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= 1. - if JOBZ = 'N', LWORK >= 2*min(M,N)+max(M,N). - if JOBZ = 'O', - LWORK >= 2*min(M,N)*min(M,N)+2*min(M,N)+max(M,N). - if JOBZ = 'S' or 'A', - LWORK >= min(M,N)*min(M,N)+2*min(M,N)+max(M,N). - For good performance, LWORK should generally be larger. - If LWORK < 0 but other input arguments are legal, WORK(1) - returns the optimal LWORK. - - RWORK (workspace) DOUBLE PRECISION array, dimension (LRWORK) - If JOBZ = 'N', LRWORK >= 7*min(M,N). - Otherwise, LRWORK >= 5*min(M,N)*min(M,N) + 5*min(M,N) - - IWORK (workspace) INTEGER array, dimension (8*min(M,N)) - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: The updating process of DBDSDC did not converge. 
- - Further Details - =============== - - Based on contributions by - Ming Gu and Huan Ren, Computer Science Division, University of - California at Berkeley, USA - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --s; - u_dim1 = *ldu; - u_offset = 1 + u_dim1 * 1; - u -= u_offset; - vt_dim1 = *ldvt; - vt_offset = 1 + vt_dim1 * 1; - vt -= vt_offset; - --work; - --rwork; - --iwork; - - /* Function Body */ - *info = 0; - minmn = min(*m,*n); - mnthr1 = (integer) (minmn * 17. / 9.); - mnthr2 = (integer) (minmn * 5. / 3.); - wntqa = lsame_(jobz, "A"); - wntqs = lsame_(jobz, "S"); - wntqas = wntqa || wntqs; - wntqo = lsame_(jobz, "O"); - wntqn = lsame_(jobz, "N"); - minwrk = 1; - maxwrk = 1; - lquery = *lwork == -1; - - if (! (wntqa || wntqs || wntqo || wntqn)) { - *info = -1; - } else if (*m < 0) { - *info = -2; - } else if (*n < 0) { - *info = -3; - } else if (*lda < max(1,*m)) { - *info = -5; - } else if (*ldu < 1 || (wntqas && *ldu < *m) || ((wntqo && *m < *n) && * - ldu < *m)) { - *info = -8; - } else if (*ldvt < 1 || (wntqa && *ldvt < *n) || (wntqs && *ldvt < minmn) - || ((wntqo && *m >= *n) && *ldvt < *n)) { - *info = -10; - } - -/* - Compute workspace - (Note: Comments in the code beginning "Workspace:" describe the - minimal amount of workspace needed at that point in the code, - as well as the preferred amount for good performance. - CWorkspace refers to complex workspace, and RWorkspace to - real workspace. NB refers to the optimal block size for the - immediately following subroutine, as returned by ILAENV.) 
-*/ - - if (((*info == 0 && *m > 0) && *n > 0)) { - if (*m >= *n) { - -/* - There is no complex work space needed for bidiagonal SVD - The real work space needed for bidiagonal SVD is BDSPAC, - BDSPAC = 3*N*N + 4*N -*/ - - if (*m >= mnthr1) { - if (wntqn) { - -/* Path 1 (M much larger than N, JOBZ='N') */ - - wrkbl = *n + *n * ilaenv_(&c__1, "ZGEQRF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*n) << (1)) + ((*n) << (1)) * - ilaenv_(&c__1, "ZGEBRD", " ", n, n, &c_n1, &c_n1, - (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); - maxwrk = wrkbl; - minwrk = *n * 3; - } else if (wntqo) { - -/* Path 2 (M much larger than N, JOBZ='O') */ - - wrkbl = *n + *n * ilaenv_(&c__1, "ZGEQRF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n + *n * ilaenv_(&c__1, "ZUNGQR", - " ", m, n, n, &c_n1, (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*n) << (1)) + ((*n) << (1)) * - ilaenv_(&c__1, "ZGEBRD", " ", n, n, &c_n1, &c_n1, - (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNMBR", "QLN", n, n, n, &c_n1, (ftnlen)6, ( - ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNMBR", "PRC", n, n, n, &c_n1, (ftnlen)6, ( - ftnlen)3); - wrkbl = max(i__1,i__2); - maxwrk = *m * *n + *n * *n + wrkbl; - minwrk = ((*n) << (1)) * *n + *n * 3; - } else if (wntqs) { - -/* Path 3 (M much larger than N, JOBZ='S') */ - - wrkbl = *n + *n * ilaenv_(&c__1, "ZGEQRF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n + *n * ilaenv_(&c__1, "ZUNGQR", - " ", m, n, n, &c_n1, (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*n) << (1)) + ((*n) << (1)) * - ilaenv_(&c__1, "ZGEBRD", " ", n, n, &c_n1, &c_n1, - (ftnlen)6, 
(ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNMBR", "QLN", n, n, n, &c_n1, (ftnlen)6, ( - ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNMBR", "PRC", n, n, n, &c_n1, (ftnlen)6, ( - ftnlen)3); - wrkbl = max(i__1,i__2); - maxwrk = *n * *n + wrkbl; - minwrk = *n * *n + *n * 3; - } else if (wntqa) { - -/* Path 4 (M much larger than N, JOBZ='A') */ - - wrkbl = *n + *n * ilaenv_(&c__1, "ZGEQRF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *n + *m * ilaenv_(&c__1, "ZUNGQR", - " ", m, m, n, &c_n1, (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*n) << (1)) + ((*n) << (1)) * - ilaenv_(&c__1, "ZGEBRD", " ", n, n, &c_n1, &c_n1, - (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNMBR", "QLN", n, n, n, &c_n1, (ftnlen)6, ( - ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNMBR", "PRC", n, n, n, &c_n1, (ftnlen)6, ( - ftnlen)3); - wrkbl = max(i__1,i__2); - maxwrk = *n * *n + wrkbl; - minwrk = *n * *n + ((*n) << (1)) + *m; - } - } else if (*m >= mnthr2) { - -/* Path 5 (M much larger than N, but not as much as MNTHR1) */ - - maxwrk = ((*n) << (1)) + (*m + *n) * ilaenv_(&c__1, "ZGEBRD", - " ", m, n, &c_n1, &c_n1, (ftnlen)6, (ftnlen)1); - minwrk = ((*n) << (1)) + *m; - if (wntqo) { -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNGBR", "P", n, n, n, &c_n1, (ftnlen)6, (ftnlen) - 1); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNGBR", "Q", m, n, n, &c_n1, (ftnlen)6, (ftnlen) - 1); - maxwrk = max(i__1,i__2); - maxwrk += *m * *n; - minwrk += *n * *n; - } else if (wntqs) { -/* 
Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNGBR", "P", n, n, n, &c_n1, (ftnlen)6, (ftnlen) - 1); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNGBR", "Q", m, n, n, &c_n1, (ftnlen)6, (ftnlen) - 1); - maxwrk = max(i__1,i__2); - } else if (wntqa) { -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNGBR", "P", n, n, n, &c_n1, (ftnlen)6, (ftnlen) - 1); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + *m * ilaenv_(&c__1, - "ZUNGBR", "Q", m, m, n, &c_n1, (ftnlen)6, (ftnlen) - 1); - maxwrk = max(i__1,i__2); - } - } else { - -/* Path 6 (M at least N, but not much larger) */ - - maxwrk = ((*n) << (1)) + (*m + *n) * ilaenv_(&c__1, "ZGEBRD", - " ", m, n, &c_n1, &c_n1, (ftnlen)6, (ftnlen)1); - minwrk = ((*n) << (1)) + *m; - if (wntqo) { -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNMBR", "PRC", n, n, n, &c_n1, (ftnlen)6, ( - ftnlen)3); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNMBR", "QLN", m, n, n, &c_n1, (ftnlen)6, ( - ftnlen)3); - maxwrk = max(i__1,i__2); - maxwrk += *m * *n; - minwrk += *n * *n; - } else if (wntqs) { -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNMBR", "PRC", n, n, n, &c_n1, (ftnlen)6, ( - ftnlen)3); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNMBR", "QLN", m, n, n, &c_n1, (ftnlen)6, ( - ftnlen)3); - maxwrk = max(i__1,i__2); - } else if (wntqa) { -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + *n * ilaenv_(&c__1, - "ZUNGBR", "PRC", n, n, n, &c_n1, (ftnlen)6, ( - ftnlen)3); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*n) << (1)) + *m * ilaenv_(&c__1, - "ZUNGBR", "QLN", m, m, n, &c_n1, (ftnlen)6, ( - ftnlen)3); - maxwrk = 
max(i__1,i__2); - } - } - } else { - -/* - There is no complex work space needed for bidiagonal SVD - The real work space needed for bidiagonal SVD is BDSPAC, - BDSPAC = 3*M*M + 4*M -*/ - - if (*n >= mnthr1) { - if (wntqn) { - -/* Path 1t (N much larger than M, JOBZ='N') */ - - maxwrk = *m + *m * ilaenv_(&c__1, "ZGELQF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) + ((*m) << (1)) * - ilaenv_(&c__1, "ZGEBRD", " ", m, m, &c_n1, &c_n1, - (ftnlen)6, (ftnlen)1); - maxwrk = max(i__1,i__2); - minwrk = *m * 3; - } else if (wntqo) { - -/* Path 2t (N much larger than M, JOBZ='O') */ - - wrkbl = *m + *m * ilaenv_(&c__1, "ZGELQF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m + *m * ilaenv_(&c__1, "ZUNGLQ", - " ", m, n, m, &c_n1, (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*m) << (1)) + ((*m) << (1)) * - ilaenv_(&c__1, "ZGEBRD", " ", m, m, &c_n1, &c_n1, - (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNMBR", "PRC", m, m, m, &c_n1, (ftnlen)6, ( - ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNMBR", "QLN", m, m, m, &c_n1, (ftnlen)6, ( - ftnlen)3); - wrkbl = max(i__1,i__2); - maxwrk = *m * *n + *m * *m + wrkbl; - minwrk = ((*m) << (1)) * *m + *m * 3; - } else if (wntqs) { - -/* Path 3t (N much larger than M, JOBZ='S') */ - - wrkbl = *m + *m * ilaenv_(&c__1, "ZGELQF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m + *m * ilaenv_(&c__1, "ZUNGLQ", - " ", m, n, m, &c_n1, (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*m) << (1)) + ((*m) << (1)) * - ilaenv_(&c__1, "ZGEBRD", " ", m, m, &c_n1, &c_n1, - (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing 
MAX */ - i__1 = wrkbl, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNMBR", "PRC", m, m, m, &c_n1, (ftnlen)6, ( - ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNMBR", "QLN", m, m, m, &c_n1, (ftnlen)6, ( - ftnlen)3); - wrkbl = max(i__1,i__2); - maxwrk = *m * *m + wrkbl; - minwrk = *m * *m + *m * 3; - } else if (wntqa) { - -/* Path 4t (N much larger than M, JOBZ='A') */ - - wrkbl = *m + *m * ilaenv_(&c__1, "ZGELQF", " ", m, n, & - c_n1, &c_n1, (ftnlen)6, (ftnlen)1); -/* Computing MAX */ - i__1 = wrkbl, i__2 = *m + *n * ilaenv_(&c__1, "ZUNGLQ", - " ", n, n, m, &c_n1, (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*m) << (1)) + ((*m) << (1)) * - ilaenv_(&c__1, "ZGEBRD", " ", m, m, &c_n1, &c_n1, - (ftnlen)6, (ftnlen)1); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNMBR", "PRC", m, m, m, &c_n1, (ftnlen)6, ( - ftnlen)3); - wrkbl = max(i__1,i__2); -/* Computing MAX */ - i__1 = wrkbl, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNMBR", "QLN", m, m, m, &c_n1, (ftnlen)6, ( - ftnlen)3); - wrkbl = max(i__1,i__2); - maxwrk = *m * *m + wrkbl; - minwrk = *m * *m + ((*m) << (1)) + *n; - } - } else if (*n >= mnthr2) { - -/* Path 5t (N much larger than M, but not as much as MNTHR1) */ - - maxwrk = ((*m) << (1)) + (*m + *n) * ilaenv_(&c__1, "ZGEBRD", - " ", m, n, &c_n1, &c_n1, (ftnlen)6, (ftnlen)1); - minwrk = ((*m) << (1)) + *n; - if (wntqo) { -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNGBR", "P", m, n, m, &c_n1, (ftnlen)6, (ftnlen) - 1); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNGBR", "Q", m, m, n, &c_n1, (ftnlen)6, (ftnlen) - 1); - maxwrk = max(i__1,i__2); - maxwrk += *m * *n; - minwrk += *m * *m; - } else if (wntqs) { -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) 
+ *m * ilaenv_(&c__1, - "ZUNGBR", "P", m, n, m, &c_n1, (ftnlen)6, (ftnlen) - 1); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNGBR", "Q", m, m, n, &c_n1, (ftnlen)6, (ftnlen) - 1); - maxwrk = max(i__1,i__2); - } else if (wntqa) { -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) + *n * ilaenv_(&c__1, - "ZUNGBR", "P", n, n, m, &c_n1, (ftnlen)6, (ftnlen) - 1); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNGBR", "Q", m, m, n, &c_n1, (ftnlen)6, (ftnlen) - 1); - maxwrk = max(i__1,i__2); - } - } else { - -/* Path 6t (N greater than M, but not much larger) */ - - maxwrk = ((*m) << (1)) + (*m + *n) * ilaenv_(&c__1, "ZGEBRD", - " ", m, n, &c_n1, &c_n1, (ftnlen)6, (ftnlen)1); - minwrk = ((*m) << (1)) + *n; - if (wntqo) { -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNMBR", "PRC", m, n, m, &c_n1, (ftnlen)6, ( - ftnlen)3); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNMBR", "QLN", m, m, n, &c_n1, (ftnlen)6, ( - ftnlen)3); - maxwrk = max(i__1,i__2); - maxwrk += *m * *n; - minwrk += *m * *m; - } else if (wntqs) { -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNGBR", "PRC", m, n, m, &c_n1, (ftnlen)6, ( - ftnlen)3); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNGBR", "QLN", m, m, n, &c_n1, (ftnlen)6, ( - ftnlen)3); - maxwrk = max(i__1,i__2); - } else if (wntqa) { -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) + *n * ilaenv_(&c__1, - "ZUNGBR", "PRC", n, n, m, &c_n1, (ftnlen)6, ( - ftnlen)3); - maxwrk = max(i__1,i__2); -/* Computing MAX */ - i__1 = maxwrk, i__2 = ((*m) << (1)) + *m * ilaenv_(&c__1, - "ZUNGBR", "QLN", m, m, n, &c_n1, (ftnlen)6, ( - ftnlen)3); - maxwrk = max(i__1,i__2); - } - } - } - maxwrk = 
max(maxwrk,minwrk); - work[1].r = (doublereal) maxwrk, work[1].i = 0.; - } - - if ((*lwork < minwrk && ! lquery)) { - *info = -13; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZGESDD", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0) { - if (*lwork >= 1) { - work[1].r = 1., work[1].i = 0.; - } - return 0; - } - -/* Get machine constants */ - - eps = PRECISION; - smlnum = sqrt(SAFEMINIMUM) / eps; - bignum = 1. / smlnum; - -/* Scale A if max element outside range [SMLNUM,BIGNUM] */ - - anrm = zlange_("M", m, n, &a[a_offset], lda, dum); - iscl = 0; - if ((anrm > 0. && anrm < smlnum)) { - iscl = 1; - zlascl_("G", &c__0, &c__0, &anrm, &smlnum, m, n, &a[a_offset], lda, & - ierr); - } else if (anrm > bignum) { - iscl = 1; - zlascl_("G", &c__0, &c__0, &anrm, &bignum, m, n, &a[a_offset], lda, & - ierr); - } - - if (*m >= *n) { - -/* - A has at least as many rows as columns. If A has sufficiently - more rows than columns, first reduce using the QR - decomposition (if sufficient workspace available) -*/ - - if (*m >= mnthr1) { - - if (wntqn) { - -/* - Path 1 (M much larger than N, JOBZ='N') - No singular vectors to be computed -*/ - - itau = 1; - nwork = itau + *n; - -/* - Compute A=Q*R - (CWorkspace: need 2*N, prefer N+N*NB) - (RWorkspace: need 0) -*/ - - i__1 = *lwork - nwork + 1; - zgeqrf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__1, &ierr); - -/* Zero out below R */ - - i__1 = *n - 1; - i__2 = *n - 1; - zlaset_("L", &i__1, &i__2, &c_b59, &c_b59, &a[a_dim1 + 2], - lda); - ie = 1; - itauq = 1; - itaup = itauq + *n; - nwork = itaup + *n; - -/* - Bidiagonalize R in A - (CWorkspace: need 3*N, prefer 2*N+2*N*NB) - (RWorkspace: need N) -*/ - - i__1 = *lwork - nwork + 1; - zgebrd_(n, n, &a[a_offset], lda, &s[1], &rwork[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__1, &ierr); - nrwork = ie + *n; - -/* - Perform bidiagonal SVD, compute singular values only - (CWorkspace: 0) - 
(RWorkspace: need BDSPAC) -*/ - - dbdsdc_("U", "N", n, &s[1], &rwork[ie], dum, &c__1, dum, & - c__1, dum, idum, &rwork[nrwork], &iwork[1], info); - - } else if (wntqo) { - -/* - Path 2 (M much larger than N, JOBZ='O') - N left singular vectors to be overwritten on A and - N right singular vectors to be computed in VT -*/ - - iu = 1; - -/* WORK(IU) is N by N */ - - ldwrku = *n; - ir = iu + ldwrku * *n; - if (*lwork >= *m * *n + *n * *n + *n * 3) { - -/* WORK(IR) is M by N */ - - ldwrkr = *m; - } else { - ldwrkr = (*lwork - *n * *n - *n * 3) / *n; - } - itau = ir + ldwrkr * *n; - nwork = itau + *n; - -/* - Compute A=Q*R - (CWorkspace: need N*N+2*N, prefer M*N+N+N*NB) - (RWorkspace: 0) -*/ - - i__1 = *lwork - nwork + 1; - zgeqrf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__1, &ierr); - -/* Copy R to WORK( IR ), zeroing out below it */ - - zlacpy_("U", n, n, &a[a_offset], lda, &work[ir], &ldwrkr); - i__1 = *n - 1; - i__2 = *n - 1; - zlaset_("L", &i__1, &i__2, &c_b59, &c_b59, &work[ir + 1], & - ldwrkr); - -/* - Generate Q in A - (CWorkspace: need 2*N, prefer N+N*NB) - (RWorkspace: 0) -*/ - - i__1 = *lwork - nwork + 1; - zungqr_(m, n, n, &a[a_offset], lda, &work[itau], &work[nwork], - &i__1, &ierr); - ie = 1; - itauq = itau; - itaup = itauq + *n; - nwork = itaup + *n; - -/* - Bidiagonalize R in WORK(IR) - (CWorkspace: need N*N+3*N, prefer M*N+2*N+2*N*NB) - (RWorkspace: need N) -*/ - - i__1 = *lwork - nwork + 1; - zgebrd_(n, n, &work[ir], &ldwrkr, &s[1], &rwork[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__1, &ierr); - -/* - Perform bidiagonal SVD, computing left singular vectors - of R in WORK(IRU) and computing right singular vectors - of R in WORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - iru = ie + *n; - irvt = iru + *n * *n; - nrwork = irvt + *n * *n; - dbdsdc_("U", "I", n, &s[1], &rwork[ie], &rwork[iru], n, & - rwork[irvt], n, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Copy real matrix RWORK(IRU) to complex 
matrix WORK(IU) - Overwrite WORK(IU) by the left singular vectors of R - (CWorkspace: need 2*N*N+3*N, prefer M*N+N*N+2*N+N*NB) - (RWorkspace: 0) -*/ - - zlacp2_("F", n, n, &rwork[iru], n, &work[iu], &ldwrku); - i__1 = *lwork - nwork + 1; - zunmbr_("Q", "L", "N", n, n, n, &work[ir], &ldwrkr, &work[ - itauq], &work[iu], &ldwrku, &work[nwork], &i__1, & - ierr); - -/* - Copy real matrix RWORK(IRVT) to complex matrix VT - Overwrite VT by the right singular vectors of R - (CWorkspace: need N*N+3*N, prefer M*N+2*N+N*NB) - (RWorkspace: 0) -*/ - - zlacp2_("F", n, n, &rwork[irvt], n, &vt[vt_offset], ldvt); - i__1 = *lwork - nwork + 1; - zunmbr_("P", "R", "C", n, n, n, &work[ir], &ldwrkr, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__1, & - ierr); - -/* - Multiply Q in A by left singular vectors of R in - WORK(IU), storing result in WORK(IR) and copying to A - (CWorkspace: need 2*N*N, prefer N*N+M*N) - (RWorkspace: 0) -*/ - - i__1 = *m; - i__2 = ldwrkr; - for (i__ = 1; i__2 < 0 ? i__ >= i__1 : i__ <= i__1; i__ += - i__2) { -/* Computing MIN */ - i__3 = *m - i__ + 1; - chunk = min(i__3,ldwrkr); - zgemm_("N", "N", &chunk, n, n, &c_b60, &a[i__ + a_dim1], - lda, &work[iu], &ldwrku, &c_b59, &work[ir], & - ldwrkr); - zlacpy_("F", &chunk, n, &work[ir], &ldwrkr, &a[i__ + - a_dim1], lda); -/* L10: */ - } - - } else if (wntqs) { - -/* - Path 3 (M much larger than N, JOBZ='S') - N left singular vectors to be computed in U and - N right singular vectors to be computed in VT -*/ - - ir = 1; - -/* WORK(IR) is N by N */ - - ldwrkr = *n; - itau = ir + ldwrkr * *n; - nwork = itau + *n; - -/* - Compute A=Q*R - (CWorkspace: need N*N+2*N, prefer N*N+N+N*NB) - (RWorkspace: 0) -*/ - - i__2 = *lwork - nwork + 1; - zgeqrf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__2, &ierr); - -/* Copy R to WORK(IR), zeroing out below it */ - - zlacpy_("U", n, n, &a[a_offset], lda, &work[ir], &ldwrkr); - i__2 = *n - 1; - i__1 = *n - 1; - zlaset_("L", &i__2, &i__1, &c_b59, &c_b59, &work[ir 
+ 1], & - ldwrkr); - -/* - Generate Q in A - (CWorkspace: need 2*N, prefer N+N*NB) - (RWorkspace: 0) -*/ - - i__2 = *lwork - nwork + 1; - zungqr_(m, n, n, &a[a_offset], lda, &work[itau], &work[nwork], - &i__2, &ierr); - ie = 1; - itauq = itau; - itaup = itauq + *n; - nwork = itaup + *n; - -/* - Bidiagonalize R in WORK(IR) - (CWorkspace: need N*N+3*N, prefer N*N+2*N+2*N*NB) - (RWorkspace: need N) -*/ - - i__2 = *lwork - nwork + 1; - zgebrd_(n, n, &work[ir], &ldwrkr, &s[1], &rwork[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__2, &ierr); - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - iru = ie + *n; - irvt = iru + *n * *n; - nrwork = irvt + *n * *n; - dbdsdc_("U", "I", n, &s[1], &rwork[ie], &rwork[iru], n, & - rwork[irvt], n, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Copy real matrix RWORK(IRU) to complex matrix U - Overwrite U by left singular vectors of R - (CWorkspace: need N*N+3*N, prefer N*N+2*N+N*NB) - (RWorkspace: 0) -*/ - - zlacp2_("F", n, n, &rwork[iru], n, &u[u_offset], ldu); - i__2 = *lwork - nwork + 1; - zunmbr_("Q", "L", "N", n, n, n, &work[ir], &ldwrkr, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__2, &ierr); - -/* - Copy real matrix RWORK(IRVT) to complex matrix VT - Overwrite VT by right singular vectors of R - (CWorkspace: need N*N+3*N, prefer N*N+2*N+N*NB) - (RWorkspace: 0) -*/ - - zlacp2_("F", n, n, &rwork[irvt], n, &vt[vt_offset], ldvt); - i__2 = *lwork - nwork + 1; - zunmbr_("P", "R", "C", n, n, n, &work[ir], &ldwrkr, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__2, & - ierr); - -/* - Multiply Q in A by left singular vectors of R in - WORK(IR), storing result in U - (CWorkspace: need N*N) - (RWorkspace: 0) -*/ - - zlacpy_("F", n, n, &u[u_offset], ldu, &work[ir], &ldwrkr); - zgemm_("N", "N", m, n, n, &c_b60, &a[a_offset], 
lda, &work[ir] - , &ldwrkr, &c_b59, &u[u_offset], ldu); - - } else if (wntqa) { - -/* - Path 4 (M much larger than N, JOBZ='A') - M left singular vectors to be computed in U and - N right singular vectors to be computed in VT -*/ - - iu = 1; - -/* WORK(IU) is N by N */ - - ldwrku = *n; - itau = iu + ldwrku * *n; - nwork = itau + *n; - -/* - Compute A=Q*R, copying result to U - (CWorkspace: need 2*N, prefer N+N*NB) - (RWorkspace: 0) -*/ - - i__2 = *lwork - nwork + 1; - zgeqrf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__2, &ierr); - zlacpy_("L", m, n, &a[a_offset], lda, &u[u_offset], ldu); - -/* - Generate Q in U - (CWorkspace: need N+M, prefer N+M*NB) - (RWorkspace: 0) -*/ - - i__2 = *lwork - nwork + 1; - zungqr_(m, m, n, &u[u_offset], ldu, &work[itau], &work[nwork], - &i__2, &ierr); - -/* Produce R in A, zeroing out below it */ - - i__2 = *n - 1; - i__1 = *n - 1; - zlaset_("L", &i__2, &i__1, &c_b59, &c_b59, &a[a_dim1 + 2], - lda); - ie = 1; - itauq = itau; - itaup = itauq + *n; - nwork = itaup + *n; - -/* - Bidiagonalize R in A - (CWorkspace: need 3*N, prefer 2*N+2*N*NB) - (RWorkspace: need N) -*/ - - i__2 = *lwork - nwork + 1; - zgebrd_(n, n, &a[a_offset], lda, &s[1], &rwork[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__2, &ierr); - iru = ie + *n; - irvt = iru + *n * *n; - nrwork = irvt + *n * *n; - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - dbdsdc_("U", "I", n, &s[1], &rwork[ie], &rwork[iru], n, & - rwork[irvt], n, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Copy real matrix RWORK(IRU) to complex matrix WORK(IU) - Overwrite WORK(IU) by left singular vectors of R - (CWorkspace: need N*N+3*N, prefer N*N+2*N+N*NB) - (RWorkspace: 0) -*/ - - zlacp2_("F", n, n, &rwork[iru], n, &work[iu], &ldwrku); - i__2 = *lwork - nwork + 1; - zunmbr_("Q", "L", 
"N", n, n, n, &a[a_offset], lda, &work[ - itauq], &work[iu], &ldwrku, &work[nwork], &i__2, & - ierr); - -/* - Copy real matrix RWORK(IRVT) to complex matrix VT - Overwrite VT by right singular vectors of R - (CWorkspace: need 3*N, prefer 2*N+N*NB) - (RWorkspace: 0) -*/ - - zlacp2_("F", n, n, &rwork[irvt], n, &vt[vt_offset], ldvt); - i__2 = *lwork - nwork + 1; - zunmbr_("P", "R", "C", n, n, n, &a[a_offset], lda, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__2, & - ierr); - -/* - Multiply Q in U by left singular vectors of R in - WORK(IU), storing result in A - (CWorkspace: need N*N) - (RWorkspace: 0) -*/ - - zgemm_("N", "N", m, n, n, &c_b60, &u[u_offset], ldu, &work[iu] - , &ldwrku, &c_b59, &a[a_offset], lda); - -/* Copy left singular vectors of A from A to U */ - - zlacpy_("F", m, n, &a[a_offset], lda, &u[u_offset], ldu); - - } - - } else if (*m >= mnthr2) { - -/* - MNTHR2 <= M < MNTHR1 - - Path 5 (M much larger than N, but not as much as MNTHR1) - Reduce to bidiagonal form without QR decomposition, use - ZUNGBR and matrix multiplication to compute singular vectors -*/ - - ie = 1; - nrwork = ie + *n; - itauq = 1; - itaup = itauq + *n; - nwork = itaup + *n; - -/* - Bidiagonalize A - (CWorkspace: need 2*N+M, prefer 2*N+(M+N)*NB) - (RWorkspace: need N) -*/ - - i__2 = *lwork - nwork + 1; - zgebrd_(m, n, &a[a_offset], lda, &s[1], &rwork[ie], &work[itauq], - &work[itaup], &work[nwork], &i__2, &ierr); - if (wntqn) { - -/* - Compute singular values only - (Cworkspace: 0) - (Rworkspace: need BDSPAC) -*/ - - dbdsdc_("U", "N", n, &s[1], &rwork[ie], dum, &c__1, dum, & - c__1, dum, idum, &rwork[nrwork], &iwork[1], info); - } else if (wntqo) { - iu = nwork; - iru = nrwork; - irvt = iru + *n * *n; - nrwork = irvt + *n * *n; - -/* - Copy A to VT, generate P**H - (Cworkspace: need 2*N, prefer N+N*NB) - (Rworkspace: 0) -*/ - - zlacpy_("U", n, n, &a[a_offset], lda, &vt[vt_offset], ldvt); - i__2 = *lwork - nwork + 1; - zungbr_("P", n, n, n, &vt[vt_offset], ldvt, 
&work[itaup], & - work[nwork], &i__2, &ierr); - -/* - Generate Q in A - (CWorkspace: need 2*N, prefer N+N*NB) - (RWorkspace: 0) -*/ - - i__2 = *lwork - nwork + 1; - zungbr_("Q", m, n, n, &a[a_offset], lda, &work[itauq], &work[ - nwork], &i__2, &ierr); - - if (*lwork >= *m * *n + *n * 3) { - -/* WORK( IU ) is M by N */ - - ldwrku = *m; - } else { - -/* WORK(IU) is LDWRKU by N */ - - ldwrku = (*lwork - *n * 3) / *n; - } - nwork = iu + ldwrku * *n; - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - dbdsdc_("U", "I", n, &s[1], &rwork[ie], &rwork[iru], n, & - rwork[irvt], n, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Multiply real matrix RWORK(IRVT) by P**H in VT, - storing the result in WORK(IU), copying to VT - (Cworkspace: need 0) - (Rworkspace: need 3*N*N) -*/ - - zlarcm_(n, n, &rwork[irvt], n, &vt[vt_offset], ldvt, &work[iu] - , &ldwrku, &rwork[nrwork]); - zlacpy_("F", n, n, &work[iu], &ldwrku, &vt[vt_offset], ldvt); - -/* - Multiply Q in A by real matrix RWORK(IRU), storing the - result in WORK(IU), copying to A - (CWorkspace: need N*N, prefer M*N) - (Rworkspace: need 3*N*N, prefer N*N+2*M*N) -*/ - - nrwork = irvt; - i__2 = *m; - i__1 = ldwrku; - for (i__ = 1; i__1 < 0 ? 
i__ >= i__2 : i__ <= i__2; i__ += - i__1) { -/* Computing MIN */ - i__3 = *m - i__ + 1; - chunk = min(i__3,ldwrku); - zlacrm_(&chunk, n, &a[i__ + a_dim1], lda, &rwork[iru], n, - &work[iu], &ldwrku, &rwork[nrwork]); - zlacpy_("F", &chunk, n, &work[iu], &ldwrku, &a[i__ + - a_dim1], lda); -/* L20: */ - } - - } else if (wntqs) { - -/* - Copy A to VT, generate P**H - (Cworkspace: need 2*N, prefer N+N*NB) - (Rworkspace: 0) -*/ - - zlacpy_("U", n, n, &a[a_offset], lda, &vt[vt_offset], ldvt); - i__1 = *lwork - nwork + 1; - zungbr_("P", n, n, n, &vt[vt_offset], ldvt, &work[itaup], & - work[nwork], &i__1, &ierr); - -/* - Copy A to U, generate Q - (Cworkspace: need 2*N, prefer N+N*NB) - (Rworkspace: 0) -*/ - - zlacpy_("L", m, n, &a[a_offset], lda, &u[u_offset], ldu); - i__1 = *lwork - nwork + 1; - zungbr_("Q", m, n, n, &u[u_offset], ldu, &work[itauq], &work[ - nwork], &i__1, &ierr); - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - iru = nrwork; - irvt = iru + *n * *n; - nrwork = irvt + *n * *n; - dbdsdc_("U", "I", n, &s[1], &rwork[ie], &rwork[iru], n, & - rwork[irvt], n, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Multiply real matrix RWORK(IRVT) by P**H in VT, - storing the result in A, copying to VT - (Cworkspace: need 0) - (Rworkspace: need 3*N*N) -*/ - - zlarcm_(n, n, &rwork[irvt], n, &vt[vt_offset], ldvt, &a[ - a_offset], lda, &rwork[nrwork]); - zlacpy_("F", n, n, &a[a_offset], lda, &vt[vt_offset], ldvt); - -/* - Multiply Q in U by real matrix RWORK(IRU), storing the - result in A, copying to U - (CWorkspace: need 0) - (Rworkspace: need N*N+2*M*N) -*/ - - nrwork = irvt; - zlacrm_(m, n, &u[u_offset], ldu, &rwork[iru], n, &a[a_offset], - lda, &rwork[nrwork]); - zlacpy_("F", m, n, &a[a_offset], lda, &u[u_offset], ldu); - } else { - -/* - Copy A to VT, generate P**H - (Cworkspace: 
need 2*N, prefer N+N*NB) - (Rworkspace: 0) -*/ - - zlacpy_("U", n, n, &a[a_offset], lda, &vt[vt_offset], ldvt); - i__1 = *lwork - nwork + 1; - zungbr_("P", n, n, n, &vt[vt_offset], ldvt, &work[itaup], & - work[nwork], &i__1, &ierr); - -/* - Copy A to U, generate Q - (Cworkspace: need 2*N, prefer N+N*NB) - (Rworkspace: 0) -*/ - - zlacpy_("L", m, n, &a[a_offset], lda, &u[u_offset], ldu); - i__1 = *lwork - nwork + 1; - zungbr_("Q", m, m, n, &u[u_offset], ldu, &work[itauq], &work[ - nwork], &i__1, &ierr); - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - iru = nrwork; - irvt = iru + *n * *n; - nrwork = irvt + *n * *n; - dbdsdc_("U", "I", n, &s[1], &rwork[ie], &rwork[iru], n, & - rwork[irvt], n, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Multiply real matrix RWORK(IRVT) by P**H in VT, - storing the result in A, copying to VT - (Cworkspace: need 0) - (Rworkspace: need 3*N*N) -*/ - - zlarcm_(n, n, &rwork[irvt], n, &vt[vt_offset], ldvt, &a[ - a_offset], lda, &rwork[nrwork]); - zlacpy_("F", n, n, &a[a_offset], lda, &vt[vt_offset], ldvt); - -/* - Multiply Q in U by real matrix RWORK(IRU), storing the - result in A, copying to U - (CWorkspace: 0) - (Rworkspace: need 3*N*N) -*/ - - nrwork = irvt; - zlacrm_(m, n, &u[u_offset], ldu, &rwork[iru], n, &a[a_offset], - lda, &rwork[nrwork]); - zlacpy_("F", m, n, &a[a_offset], lda, &u[u_offset], ldu); - } - - } else { - -/* - M .LT. 
MNTHR2 - - Path 6 (M at least N, but not much larger) - Reduce to bidiagonal form without QR decomposition - Use ZUNMBR to compute singular vectors -*/ - - ie = 1; - nrwork = ie + *n; - itauq = 1; - itaup = itauq + *n; - nwork = itaup + *n; - -/* - Bidiagonalize A - (CWorkspace: need 2*N+M, prefer 2*N+(M+N)*NB) - (RWorkspace: need N) -*/ - - i__1 = *lwork - nwork + 1; - zgebrd_(m, n, &a[a_offset], lda, &s[1], &rwork[ie], &work[itauq], - &work[itaup], &work[nwork], &i__1, &ierr); - if (wntqn) { - -/* - Compute singular values only - (Cworkspace: 0) - (Rworkspace: need BDSPAC) -*/ - - dbdsdc_("U", "N", n, &s[1], &rwork[ie], dum, &c__1, dum, & - c__1, dum, idum, &rwork[nrwork], &iwork[1], info); - } else if (wntqo) { - iu = nwork; - iru = nrwork; - irvt = iru + *n * *n; - nrwork = irvt + *n * *n; - if (*lwork >= *m * *n + *n * 3) { - -/* WORK( IU ) is M by N */ - - ldwrku = *m; - } else { - -/* WORK( IU ) is LDWRKU by N */ - - ldwrku = (*lwork - *n * 3) / *n; - } - nwork = iu + ldwrku * *n; - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - dbdsdc_("U", "I", n, &s[1], &rwork[ie], &rwork[iru], n, & - rwork[irvt], n, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Copy real matrix RWORK(IRVT) to complex matrix VT - Overwrite VT by right singular vectors of A - (Cworkspace: need 2*N, prefer N+N*NB) - (Rworkspace: need 0) -*/ - - zlacp2_("F", n, n, &rwork[irvt], n, &vt[vt_offset], ldvt); - i__1 = *lwork - nwork + 1; - zunmbr_("P", "R", "C", n, n, n, &a[a_offset], lda, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__1, & - ierr); - - if (*lwork >= *m * *n + *n * 3) { - -/* - Copy real matrix RWORK(IRU) to complex matrix WORK(IU) - Overwrite WORK(IU) by left singular vectors of A, copying - to A - (Cworkspace: need M*N+2*N, prefer M*N+N+N*NB) - (Rworkspace: need 0) -*/ - - 
zlaset_("F", m, n, &c_b59, &c_b59, &work[iu], &ldwrku); - zlacp2_("F", n, n, &rwork[iru], n, &work[iu], &ldwrku); - i__1 = *lwork - nwork + 1; - zunmbr_("Q", "L", "N", m, n, n, &a[a_offset], lda, &work[ - itauq], &work[iu], &ldwrku, &work[nwork], &i__1, & - ierr); - zlacpy_("F", m, n, &work[iu], &ldwrku, &a[a_offset], lda); - } else { - -/* - Generate Q in A - (Cworkspace: need 2*N, prefer N+N*NB) - (Rworkspace: need 0) -*/ - - i__1 = *lwork - nwork + 1; - zungbr_("Q", m, n, n, &a[a_offset], lda, &work[itauq], & - work[nwork], &i__1, &ierr); - -/* - Multiply Q in A by real matrix RWORK(IRU), storing the - result in WORK(IU), copying to A - (CWorkspace: need N*N, prefer M*N) - (Rworkspace: need 3*N*N, prefer N*N+2*M*N) -*/ - - nrwork = irvt; - i__1 = *m; - i__2 = ldwrku; - for (i__ = 1; i__2 < 0 ? i__ >= i__1 : i__ <= i__1; i__ += - i__2) { -/* Computing MIN */ - i__3 = *m - i__ + 1; - chunk = min(i__3,ldwrku); - zlacrm_(&chunk, n, &a[i__ + a_dim1], lda, &rwork[iru], - n, &work[iu], &ldwrku, &rwork[nrwork]); - zlacpy_("F", &chunk, n, &work[iu], &ldwrku, &a[i__ + - a_dim1], lda); -/* L30: */ - } - } - - } else if (wntqs) { - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - iru = nrwork; - irvt = iru + *n * *n; - nrwork = irvt + *n * *n; - dbdsdc_("U", "I", n, &s[1], &rwork[ie], &rwork[iru], n, & - rwork[irvt], n, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Copy real matrix RWORK(IRU) to complex matrix U - Overwrite U by left singular vectors of A - (CWorkspace: need 3*N, prefer 2*N+N*NB) - (RWorkspace: 0) -*/ - - zlaset_("F", m, n, &c_b59, &c_b59, &u[u_offset], ldu); - zlacp2_("F", n, n, &rwork[iru], n, &u[u_offset], ldu); - i__2 = *lwork - nwork + 1; - zunmbr_("Q", "L", "N", m, n, n, &a[a_offset], lda, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__2, &ierr); - -/* - 
Copy real matrix RWORK(IRVT) to complex matrix VT - Overwrite VT by right singular vectors of A - (CWorkspace: need 3*N, prefer 2*N+N*NB) - (RWorkspace: 0) -*/ - - zlacp2_("F", n, n, &rwork[irvt], n, &vt[vt_offset], ldvt); - i__2 = *lwork - nwork + 1; - zunmbr_("P", "R", "C", n, n, n, &a[a_offset], lda, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__2, & - ierr); - } else { - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - iru = nrwork; - irvt = iru + *n * *n; - nrwork = irvt + *n * *n; - dbdsdc_("U", "I", n, &s[1], &rwork[ie], &rwork[iru], n, & - rwork[irvt], n, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* Set the right corner of U to identity matrix */ - - zlaset_("F", m, m, &c_b59, &c_b59, &u[u_offset], ldu); - i__2 = *m - *n; - i__1 = *m - *n; - zlaset_("F", &i__2, &i__1, &c_b59, &c_b60, &u[*n + 1 + (*n + - 1) * u_dim1], ldu); - -/* - Copy real matrix RWORK(IRU) to complex matrix U - Overwrite U by left singular vectors of A - (CWorkspace: need 2*N+M, prefer 2*N+M*NB) - (RWorkspace: 0) -*/ - - zlacp2_("F", n, n, &rwork[iru], n, &u[u_offset], ldu); - i__2 = *lwork - nwork + 1; - zunmbr_("Q", "L", "N", m, m, n, &a[a_offset], lda, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__2, &ierr); - -/* - Copy real matrix RWORK(IRVT) to complex matrix VT - Overwrite VT by right singular vectors of A - (CWorkspace: need 3*N, prefer 2*N+N*NB) - (RWorkspace: 0) -*/ - - zlacp2_("F", n, n, &rwork[irvt], n, &vt[vt_offset], ldvt); - i__2 = *lwork - nwork + 1; - zunmbr_("P", "R", "C", n, n, n, &a[a_offset], lda, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__2, & - ierr); - } - - } - - } else { - -/* - A has more columns than rows. 
If A has sufficiently more - columns than rows, first reduce using the LQ decomposition - (if sufficient workspace available) -*/ - - if (*n >= mnthr1) { - - if (wntqn) { - -/* - Path 1t (N much larger than M, JOBZ='N') - No singular vectors to be computed -*/ - - itau = 1; - nwork = itau + *m; - -/* - Compute A=L*Q - (CWorkspace: need 2*M, prefer M+M*NB) - (RWorkspace: 0) -*/ - - i__2 = *lwork - nwork + 1; - zgelqf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__2, &ierr); - -/* Zero out above L */ - - i__2 = *m - 1; - i__1 = *m - 1; - zlaset_("U", &i__2, &i__1, &c_b59, &c_b59, &a[((a_dim1) << (1) - ) + 1], lda); - ie = 1; - itauq = 1; - itaup = itauq + *m; - nwork = itaup + *m; - -/* - Bidiagonalize L in A - (CWorkspace: need 3*M, prefer 2*M+2*M*NB) - (RWorkspace: need M) -*/ - - i__2 = *lwork - nwork + 1; - zgebrd_(m, m, &a[a_offset], lda, &s[1], &rwork[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__2, &ierr); - nrwork = ie + *m; - -/* - Perform bidiagonal SVD, compute singular values only - (CWorkspace: 0) - (RWorkspace: need BDSPAC) -*/ - - dbdsdc_("U", "N", m, &s[1], &rwork[ie], dum, &c__1, dum, & - c__1, dum, idum, &rwork[nrwork], &iwork[1], info); - - } else if (wntqo) { - -/* - Path 2t (N much larger than M, JOBZ='O') - M right singular vectors to be overwritten on A and - M left singular vectors to be computed in U -*/ - - ivt = 1; - ldwkvt = *m; - -/* WORK(IVT) is M by M */ - - il = ivt + ldwkvt * *m; - if (*lwork >= *m * *n + *m * *m + *m * 3) { - -/* WORK(IL) is M by N */ - - ldwrkl = *m; - chunk = *n; - } else { - -/* WORK(IL) is M by CHUNK */ - - ldwrkl = *m; - chunk = (*lwork - *m * *m - *m * 3) / *m; - } - itau = il + ldwrkl * chunk; - nwork = itau + *m; - -/* - Compute A=L*Q - (CWorkspace: need 2*M, prefer M+M*NB) - (RWorkspace: 0) -*/ - - i__2 = *lwork - nwork + 1; - zgelqf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__2, &ierr); - -/* Copy L to WORK(IL), zeroing out above it */ - - zlacpy_("L", m, m, &a[a_offset], 
lda, &work[il], &ldwrkl); - i__2 = *m - 1; - i__1 = *m - 1; - zlaset_("U", &i__2, &i__1, &c_b59, &c_b59, &work[il + ldwrkl], - &ldwrkl); - -/* - Generate Q in A - (CWorkspace: need M*M+2*M, prefer M*M+M+M*NB) - (RWorkspace: 0) -*/ - - i__2 = *lwork - nwork + 1; - zunglq_(m, n, m, &a[a_offset], lda, &work[itau], &work[nwork], - &i__2, &ierr); - ie = 1; - itauq = itau; - itaup = itauq + *m; - nwork = itaup + *m; - -/* - Bidiagonalize L in WORK(IL) - (CWorkspace: need M*M+3*M, prefer M*M+2*M+2*M*NB) - (RWorkspace: need M) -*/ - - i__2 = *lwork - nwork + 1; - zgebrd_(m, m, &work[il], &ldwrkl, &s[1], &rwork[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__2, &ierr); - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - iru = ie + *m; - irvt = iru + *m * *m; - nrwork = irvt + *m * *m; - dbdsdc_("U", "I", m, &s[1], &rwork[ie], &rwork[iru], m, & - rwork[irvt], m, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Copy real matrix RWORK(IRU) to complex matrix WORK(IU) - Overwrite WORK(IU) by the left singular vectors of L - (CWorkspace: need N*N+3*N, prefer M*N+2*N+N*NB) - (RWorkspace: 0) -*/ - - zlacp2_("F", m, m, &rwork[iru], m, &u[u_offset], ldu); - i__2 = *lwork - nwork + 1; - zunmbr_("Q", "L", "N", m, m, m, &work[il], &ldwrkl, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__2, &ierr); - -/* - Copy real matrix RWORK(IRVT) to complex matrix WORK(IVT) - Overwrite WORK(IVT) by the right singular vectors of L - (CWorkspace: need N*N+3*N, prefer M*N+2*N+N*NB) - (RWorkspace: 0) -*/ - - zlacp2_("F", m, m, &rwork[irvt], m, &work[ivt], &ldwkvt); - i__2 = *lwork - nwork + 1; - zunmbr_("P", "R", "C", m, m, m, &work[il], &ldwrkl, &work[ - itaup], &work[ivt], &ldwkvt, &work[nwork], &i__2, & - ierr); - -/* - Multiply right singular vectors of L in WORK(IL) by Q - in A, storing result in 
WORK(IL) and copying to A - (CWorkspace: need 2*M*M, prefer M*M+M*N) - (RWorkspace: 0) -*/ - - i__2 = *n; - i__1 = chunk; - for (i__ = 1; i__1 < 0 ? i__ >= i__2 : i__ <= i__2; i__ += - i__1) { -/* Computing MIN */ - i__3 = *n - i__ + 1; - blk = min(i__3,chunk); - zgemm_("N", "N", m, &blk, m, &c_b60, &work[ivt], m, &a[ - i__ * a_dim1 + 1], lda, &c_b59, &work[il], & - ldwrkl); - zlacpy_("F", m, &blk, &work[il], &ldwrkl, &a[i__ * a_dim1 - + 1], lda); -/* L40: */ - } - - } else if (wntqs) { - -/* - Path 3t (N much larger than M, JOBZ='S') - M right singular vectors to be computed in VT and - M left singular vectors to be computed in U -*/ - - il = 1; - -/* WORK(IL) is M by M */ - - ldwrkl = *m; - itau = il + ldwrkl * *m; - nwork = itau + *m; - -/* - Compute A=L*Q - (CWorkspace: need 2*M, prefer M+M*NB) - (RWorkspace: 0) -*/ - - i__1 = *lwork - nwork + 1; - zgelqf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__1, &ierr); - -/* Copy L to WORK(IL), zeroing out above it */ - - zlacpy_("L", m, m, &a[a_offset], lda, &work[il], &ldwrkl); - i__1 = *m - 1; - i__2 = *m - 1; - zlaset_("U", &i__1, &i__2, &c_b59, &c_b59, &work[il + ldwrkl], - &ldwrkl); - -/* - Generate Q in A - (CWorkspace: need M*M+2*M, prefer M*M+M+M*NB) - (RWorkspace: 0) -*/ - - i__1 = *lwork - nwork + 1; - zunglq_(m, n, m, &a[a_offset], lda, &work[itau], &work[nwork], - &i__1, &ierr); - ie = 1; - itauq = itau; - itaup = itauq + *m; - nwork = itaup + *m; - -/* - Bidiagonalize L in WORK(IL) - (CWorkspace: need M*M+3*M, prefer M*M+2*M+2*M*NB) - (RWorkspace: need M) -*/ - - i__1 = *lwork - nwork + 1; - zgebrd_(m, m, &work[il], &ldwrkl, &s[1], &rwork[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__1, &ierr); - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - iru = ie + *m; - irvt = iru + *m * *m; - nrwork = irvt + 
*m * *m; - dbdsdc_("U", "I", m, &s[1], &rwork[ie], &rwork[iru], m, & - rwork[irvt], m, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Copy real matrix RWORK(IRU) to complex matrix U - Overwrite U by left singular vectors of L - (CWorkspace: need M*M+3*M, prefer M*M+2*M+M*NB) - (RWorkspace: 0) -*/ - - zlacp2_("F", m, m, &rwork[iru], m, &u[u_offset], ldu); - i__1 = *lwork - nwork + 1; - zunmbr_("Q", "L", "N", m, m, m, &work[il], &ldwrkl, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__1, &ierr); - -/* - Copy real matrix RWORK(IRVT) to complex matrix VT - Overwrite VT by right singular vectors of L - (CWorkspace: need M*M+3*M, prefer M*M+2*M+M*NB) - (RWorkspace: 0) -*/ - - zlacp2_("F", m, m, &rwork[irvt], m, &vt[vt_offset], ldvt); - i__1 = *lwork - nwork + 1; - zunmbr_("P", "R", "C", m, m, m, &work[il], &ldwrkl, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__1, & - ierr); - -/* - Copy VT to WORK(IL), multiply right singular vectors of L - in WORK(IL) by Q in A, storing result in VT - (CWorkspace: need M*M) - (RWorkspace: 0) -*/ - - zlacpy_("F", m, m, &vt[vt_offset], ldvt, &work[il], &ldwrkl); - zgemm_("N", "N", m, n, m, &c_b60, &work[il], &ldwrkl, &a[ - a_offset], lda, &c_b59, &vt[vt_offset], ldvt); - - } else if (wntqa) { - -/* - Path 4t (N much larger than M, JOBZ='A') - N right singular vectors to be computed in VT and - M left singular vectors to be computed in U -*/ - - ivt = 1; - -/* WORK(IVT) is M by M */ - - ldwkvt = *m; - itau = ivt + ldwkvt * *m; - nwork = itau + *m; - -/* - Compute A=L*Q, copying result to VT - (CWorkspace: need 2*M, prefer M+M*NB) - (RWorkspace: 0) -*/ - - i__1 = *lwork - nwork + 1; - zgelqf_(m, n, &a[a_offset], lda, &work[itau], &work[nwork], & - i__1, &ierr); - zlacpy_("U", m, n, &a[a_offset], lda, &vt[vt_offset], ldvt); - -/* - Generate Q in VT - (CWorkspace: need M+N, prefer M+N*NB) - (RWorkspace: 0) -*/ - - i__1 = *lwork - nwork + 1; - zunglq_(n, n, m, &vt[vt_offset], ldvt, &work[itau], &work[ - nwork], &i__1, 
&ierr); - -/* Produce L in A, zeroing out above it */ - - i__1 = *m - 1; - i__2 = *m - 1; - zlaset_("U", &i__1, &i__2, &c_b59, &c_b59, &a[((a_dim1) << (1) - ) + 1], lda); - ie = 1; - itauq = itau; - itaup = itauq + *m; - nwork = itaup + *m; - -/* - Bidiagonalize L in A - (CWorkspace: need M*M+3*M, prefer M*M+2*M+2*M*NB) - (RWorkspace: need M) -*/ - - i__1 = *lwork - nwork + 1; - zgebrd_(m, m, &a[a_offset], lda, &s[1], &rwork[ie], &work[ - itauq], &work[itaup], &work[nwork], &i__1, &ierr); - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - iru = ie + *m; - irvt = iru + *m * *m; - nrwork = irvt + *m * *m; - dbdsdc_("U", "I", m, &s[1], &rwork[ie], &rwork[iru], m, & - rwork[irvt], m, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Copy real matrix RWORK(IRU) to complex matrix U - Overwrite U by left singular vectors of L - (CWorkspace: need 3*M, prefer 2*M+M*NB) - (RWorkspace: 0) -*/ - - zlacp2_("F", m, m, &rwork[iru], m, &u[u_offset], ldu); - i__1 = *lwork - nwork + 1; - zunmbr_("Q", "L", "N", m, m, m, &a[a_offset], lda, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__1, &ierr); - -/* - Copy real matrix RWORK(IRVT) to complex matrix WORK(IVT) - Overwrite WORK(IVT) by right singular vectors of L - (CWorkspace: need M*M+3*M, prefer M*M+2*M+M*NB) - (RWorkspace: 0) -*/ - - zlacp2_("F", m, m, &rwork[irvt], m, &work[ivt], &ldwkvt); - i__1 = *lwork - nwork + 1; - zunmbr_("P", "R", "C", m, m, m, &a[a_offset], lda, &work[ - itaup], &work[ivt], &ldwkvt, &work[nwork], &i__1, & - ierr); - -/* - Multiply right singular vectors of L in WORK(IVT) by - Q in VT, storing result in A - (CWorkspace: need M*M) - (RWorkspace: 0) -*/ - - zgemm_("N", "N", m, n, m, &c_b60, &work[ivt], &ldwkvt, &vt[ - vt_offset], ldvt, &c_b59, &a[a_offset], lda); - -/* Copy right singular vectors of A from A to VT */ 
- - zlacpy_("F", m, n, &a[a_offset], lda, &vt[vt_offset], ldvt); - - } - - } else if (*n >= mnthr2) { - -/* - MNTHR2 <= N < MNTHR1 - - Path 5t (N much larger than M, but not as much as MNTHR1) - Reduce to bidiagonal form without QR decomposition, use - ZUNGBR and matrix multiplication to compute singular vectors -*/ - - - ie = 1; - nrwork = ie + *m; - itauq = 1; - itaup = itauq + *m; - nwork = itaup + *m; - -/* - Bidiagonalize A - (CWorkspace: need 2*M+N, prefer 2*M+(M+N)*NB) - (RWorkspace: M) -*/ - - i__1 = *lwork - nwork + 1; - zgebrd_(m, n, &a[a_offset], lda, &s[1], &rwork[ie], &work[itauq], - &work[itaup], &work[nwork], &i__1, &ierr); - - if (wntqn) { - -/* - Compute singular values only - (Cworkspace: 0) - (Rworkspace: need BDSPAC) -*/ - - dbdsdc_("L", "N", m, &s[1], &rwork[ie], dum, &c__1, dum, & - c__1, dum, idum, &rwork[nrwork], &iwork[1], info); - } else if (wntqo) { - irvt = nrwork; - iru = irvt + *m * *m; - nrwork = iru + *m * *m; - ivt = nwork; - -/* - Copy A to U, generate Q - (Cworkspace: need 2*M, prefer M+M*NB) - (Rworkspace: 0) -*/ - - zlacpy_("L", m, m, &a[a_offset], lda, &u[u_offset], ldu); - i__1 = *lwork - nwork + 1; - zungbr_("Q", m, m, n, &u[u_offset], ldu, &work[itauq], &work[ - nwork], &i__1, &ierr); - -/* - Generate P**H in A - (Cworkspace: need 2*M, prefer M+M*NB) - (Rworkspace: 0) -*/ - - i__1 = *lwork - nwork + 1; - zungbr_("P", m, n, m, &a[a_offset], lda, &work[itaup], &work[ - nwork], &i__1, &ierr); - - ldwkvt = *m; - if (*lwork >= *m * *n + *m * 3) { - -/* WORK( IVT ) is M by N */ - - nwork = ivt + ldwkvt * *n; - chunk = *n; - } else { - -/* WORK( IVT ) is M by CHUNK */ - - chunk = (*lwork - *m * 3) / *m; - nwork = ivt + ldwkvt * chunk; - } - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - dbdsdc_("L", "I", m, &s[1], &rwork[ie], &rwork[iru], m, & - 
rwork[irvt], m, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Multiply Q in U by real matrix RWORK(IRU) - storing the result in WORK(IVT), copying to U - (Cworkspace: need 0) - (Rworkspace: need 2*M*M) -*/ - - zlacrm_(m, m, &u[u_offset], ldu, &rwork[iru], m, &work[ivt], & - ldwkvt, &rwork[nrwork]); - zlacpy_("F", m, m, &work[ivt], &ldwkvt, &u[u_offset], ldu); - -/* - Multiply RWORK(IRVT) by P**H in A, storing the - result in WORK(IVT), copying to A - (CWorkspace: need M*M, prefer M*N) - (Rworkspace: need 2*M*M, prefer 2*M*N) -*/ - - nrwork = iru; - i__1 = *n; - i__2 = chunk; - for (i__ = 1; i__2 < 0 ? i__ >= i__1 : i__ <= i__1; i__ += - i__2) { -/* Computing MIN */ - i__3 = *n - i__ + 1; - blk = min(i__3,chunk); - zlarcm_(m, &blk, &rwork[irvt], m, &a[i__ * a_dim1 + 1], - lda, &work[ivt], &ldwkvt, &rwork[nrwork]); - zlacpy_("F", m, &blk, &work[ivt], &ldwkvt, &a[i__ * - a_dim1 + 1], lda); -/* L50: */ - } - } else if (wntqs) { - -/* - Copy A to U, generate Q - (Cworkspace: need 2*M, prefer M+M*NB) - (Rworkspace: 0) -*/ - - zlacpy_("L", m, m, &a[a_offset], lda, &u[u_offset], ldu); - i__2 = *lwork - nwork + 1; - zungbr_("Q", m, m, n, &u[u_offset], ldu, &work[itauq], &work[ - nwork], &i__2, &ierr); - -/* - Copy A to VT, generate P**H - (Cworkspace: need 2*M, prefer M+M*NB) - (Rworkspace: 0) -*/ - - zlacpy_("U", m, n, &a[a_offset], lda, &vt[vt_offset], ldvt); - i__2 = *lwork - nwork + 1; - zungbr_("P", m, n, m, &vt[vt_offset], ldvt, &work[itaup], & - work[nwork], &i__2, &ierr); - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - irvt = nrwork; - iru = irvt + *m * *m; - nrwork = iru + *m * *m; - dbdsdc_("L", "I", m, &s[1], &rwork[ie], &rwork[iru], m, & - rwork[irvt], m, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Multiply Q in U by real matrix RWORK(IRU), storing the - 
result in A, copying to U - (CWorkspace: need 0) - (Rworkspace: need 3*M*M) -*/ - - zlacrm_(m, m, &u[u_offset], ldu, &rwork[iru], m, &a[a_offset], - lda, &rwork[nrwork]); - zlacpy_("F", m, m, &a[a_offset], lda, &u[u_offset], ldu); - -/* - Multiply real matrix RWORK(IRVT) by P**H in VT, - storing the result in A, copying to VT - (Cworkspace: need 0) - (Rworkspace: need M*M+2*M*N) -*/ - - nrwork = iru; - zlarcm_(m, n, &rwork[irvt], m, &vt[vt_offset], ldvt, &a[ - a_offset], lda, &rwork[nrwork]); - zlacpy_("F", m, n, &a[a_offset], lda, &vt[vt_offset], ldvt); - } else { - -/* - Copy A to U, generate Q - (Cworkspace: need 2*M, prefer M+M*NB) - (Rworkspace: 0) -*/ - - zlacpy_("L", m, m, &a[a_offset], lda, &u[u_offset], ldu); - i__2 = *lwork - nwork + 1; - zungbr_("Q", m, m, n, &u[u_offset], ldu, &work[itauq], &work[ - nwork], &i__2, &ierr); - -/* - Copy A to VT, generate P**H - (Cworkspace: need 2*M, prefer M+M*NB) - (Rworkspace: 0) -*/ - - zlacpy_("U", m, n, &a[a_offset], lda, &vt[vt_offset], ldvt); - i__2 = *lwork - nwork + 1; - zungbr_("P", n, n, m, &vt[vt_offset], ldvt, &work[itaup], & - work[nwork], &i__2, &ierr); - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - irvt = nrwork; - iru = irvt + *m * *m; - nrwork = iru + *m * *m; - dbdsdc_("L", "I", m, &s[1], &rwork[ie], &rwork[iru], m, & - rwork[irvt], m, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Multiply Q in U by real matrix RWORK(IRU), storing the - result in A, copying to U - (CWorkspace: need 0) - (Rworkspace: need 3*M*M) -*/ - - zlacrm_(m, m, &u[u_offset], ldu, &rwork[iru], m, &a[a_offset], - lda, &rwork[nrwork]); - zlacpy_("F", m, m, &a[a_offset], lda, &u[u_offset], ldu); - -/* - Multiply real matrix RWORK(IRVT) by P**H in VT, - storing the result in A, copying to VT - (Cworkspace: need 0) - (Rworkspace: need 
M*M+2*M*N) -*/ - - zlarcm_(m, n, &rwork[irvt], m, &vt[vt_offset], ldvt, &a[ - a_offset], lda, &rwork[nrwork]); - zlacpy_("F", m, n, &a[a_offset], lda, &vt[vt_offset], ldvt); - } - - } else { - -/* - N .LT. MNTHR2 - - Path 6t (N greater than M, but not much larger) - Reduce to bidiagonal form without LQ decomposition - Use ZUNMBR to compute singular vectors -*/ - - ie = 1; - nrwork = ie + *m; - itauq = 1; - itaup = itauq + *m; - nwork = itaup + *m; - -/* - Bidiagonalize A - (CWorkspace: need 2*M+N, prefer 2*M+(M+N)*NB) - (RWorkspace: M) -*/ - - i__2 = *lwork - nwork + 1; - zgebrd_(m, n, &a[a_offset], lda, &s[1], &rwork[ie], &work[itauq], - &work[itaup], &work[nwork], &i__2, &ierr); - if (wntqn) { - -/* - Compute singular values only - (Cworkspace: 0) - (Rworkspace: need BDSPAC) -*/ - - dbdsdc_("L", "N", m, &s[1], &rwork[ie], dum, &c__1, dum, & - c__1, dum, idum, &rwork[nrwork], &iwork[1], info); - } else if (wntqo) { - ldwkvt = *m; - ivt = nwork; - if (*lwork >= *m * *n + *m * 3) { - -/* WORK( IVT ) is M by N */ - - zlaset_("F", m, n, &c_b59, &c_b59, &work[ivt], &ldwkvt); - nwork = ivt + ldwkvt * *n; - } else { - -/* WORK( IVT ) is M by CHUNK */ - - chunk = (*lwork - *m * 3) / *m; - nwork = ivt + ldwkvt * chunk; - } - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - irvt = nrwork; - iru = irvt + *m * *m; - nrwork = iru + *m * *m; - dbdsdc_("L", "I", m, &s[1], &rwork[ie], &rwork[iru], m, & - rwork[irvt], m, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Copy real matrix RWORK(IRU) to complex matrix U - Overwrite U by left singular vectors of A - (Cworkspace: need 2*M, prefer M+M*NB) - (Rworkspace: need 0) -*/ - - zlacp2_("F", m, m, &rwork[iru], m, &u[u_offset], ldu); - i__2 = *lwork - nwork + 1; - zunmbr_("Q", "L", "N", m, m, n, &a[a_offset], lda, &work[ - itauq], &u[u_offset], 
ldu, &work[nwork], &i__2, &ierr); - - if (*lwork >= *m * *n + *m * 3) { - -/* - Copy real matrix RWORK(IRVT) to complex matrix WORK(IVT) - Overwrite WORK(IVT) by right singular vectors of A, - copying to A - (Cworkspace: need M*N+2*M, prefer M*N+M+M*NB) - (Rworkspace: need 0) -*/ - - zlacp2_("F", m, m, &rwork[irvt], m, &work[ivt], &ldwkvt); - i__2 = *lwork - nwork + 1; - zunmbr_("P", "R", "C", m, n, m, &a[a_offset], lda, &work[ - itaup], &work[ivt], &ldwkvt, &work[nwork], &i__2, - &ierr); - zlacpy_("F", m, n, &work[ivt], &ldwkvt, &a[a_offset], lda); - } else { - -/* - Generate P**H in A - (Cworkspace: need 2*M, prefer M+M*NB) - (Rworkspace: need 0) -*/ - - i__2 = *lwork - nwork + 1; - zungbr_("P", m, n, m, &a[a_offset], lda, &work[itaup], & - work[nwork], &i__2, &ierr); - -/* - Multiply Q in A by real matrix RWORK(IRU), storing the - result in WORK(IU), copying to A - (CWorkspace: need M*M, prefer M*N) - (Rworkspace: need 3*M*M, prefer M*M+2*M*N) -*/ - - nrwork = iru; - i__2 = *n; - i__1 = chunk; - for (i__ = 1; i__1 < 0 ? 
i__ >= i__2 : i__ <= i__2; i__ += - i__1) { -/* Computing MIN */ - i__3 = *n - i__ + 1; - blk = min(i__3,chunk); - zlarcm_(m, &blk, &rwork[irvt], m, &a[i__ * a_dim1 + 1] - , lda, &work[ivt], &ldwkvt, &rwork[nrwork]); - zlacpy_("F", m, &blk, &work[ivt], &ldwkvt, &a[i__ * - a_dim1 + 1], lda); -/* L60: */ - } - } - } else if (wntqs) { - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - irvt = nrwork; - iru = irvt + *m * *m; - nrwork = iru + *m * *m; - dbdsdc_("L", "I", m, &s[1], &rwork[ie], &rwork[iru], m, & - rwork[irvt], m, dum, idum, &rwork[nrwork], &iwork[1], - info); - -/* - Copy real matrix RWORK(IRU) to complex matrix U - Overwrite U by left singular vectors of A - (CWorkspace: need 3*M, prefer 2*M+M*NB) - (RWorkspace: M*M) -*/ - - zlacp2_("F", m, m, &rwork[iru], m, &u[u_offset], ldu); - i__1 = *lwork - nwork + 1; - zunmbr_("Q", "L", "N", m, m, n, &a[a_offset], lda, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__1, &ierr); - -/* - Copy real matrix RWORK(IRVT) to complex matrix VT - Overwrite VT by right singular vectors of A - (CWorkspace: need 3*M, prefer 2*M+M*NB) - (RWorkspace: M*M) -*/ - - zlaset_("F", m, n, &c_b59, &c_b59, &vt[vt_offset], ldvt); - zlacp2_("F", m, m, &rwork[irvt], m, &vt[vt_offset], ldvt); - i__1 = *lwork - nwork + 1; - zunmbr_("P", "R", "C", m, n, m, &a[a_offset], lda, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__1, & - ierr); - } else { - -/* - Perform bidiagonal SVD, computing left singular vectors - of bidiagonal matrix in RWORK(IRU) and computing right - singular vectors of bidiagonal matrix in RWORK(IRVT) - (CWorkspace: need 0) - (RWorkspace: need BDSPAC) -*/ - - irvt = nrwork; - iru = irvt + *m * *m; - nrwork = iru + *m * *m; - - dbdsdc_("L", "I", m, &s[1], &rwork[ie], &rwork[iru], m, & - rwork[irvt], m, dum, idum, &rwork[nrwork], 
&iwork[1], - info); - -/* - Copy real matrix RWORK(IRU) to complex matrix U - Overwrite U by left singular vectors of A - (CWorkspace: need 3*M, prefer 2*M+M*NB) - (RWorkspace: M*M) -*/ - - zlacp2_("F", m, m, &rwork[iru], m, &u[u_offset], ldu); - i__1 = *lwork - nwork + 1; - zunmbr_("Q", "L", "N", m, m, n, &a[a_offset], lda, &work[ - itauq], &u[u_offset], ldu, &work[nwork], &i__1, &ierr); - -/* Set the right corner of VT to identity matrix */ - - i__1 = *n - *m; - i__2 = *n - *m; - zlaset_("F", &i__1, &i__2, &c_b59, &c_b60, &vt[*m + 1 + (*m + - 1) * vt_dim1], ldvt); - -/* - Copy real matrix RWORK(IRVT) to complex matrix VT - Overwrite VT by right singular vectors of A - (CWorkspace: need 2*M+N, prefer 2*M+N*NB) - (RWorkspace: M*M) -*/ - - zlaset_("F", n, n, &c_b59, &c_b59, &vt[vt_offset], ldvt); - zlacp2_("F", m, m, &rwork[irvt], m, &vt[vt_offset], ldvt); - i__1 = *lwork - nwork + 1; - zunmbr_("P", "R", "C", n, n, m, &a[a_offset], lda, &work[ - itaup], &vt[vt_offset], ldvt, &work[nwork], &i__1, & - ierr); - } - - } - - } - -/* Undo scaling if necessary */ - - if (iscl == 1) { - if (anrm > bignum) { - dlascl_("G", &c__0, &c__0, &bignum, &anrm, &minmn, &c__1, &s[1], & - minmn, &ierr); - } - if (anrm < smlnum) { - dlascl_("G", &c__0, &c__0, &smlnum, &anrm, &minmn, &c__1, &s[1], & - minmn, &ierr); - } - } - -/* Return optimal workspace in WORK(1) */ - - work[1].r = (doublereal) maxwrk, work[1].i = 0.; - - return 0; - -/* End of ZGESDD */ - -} /* zgesdd_ */ - -/* Subroutine */ int zgesv_(integer *n, integer *nrhs, doublecomplex *a, - integer *lda, integer *ipiv, doublecomplex *b, integer *ldb, integer * - info) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, i__1; - - /* Local variables */ - extern /* Subroutine */ int xerbla_(char *, integer *), zgetrf_( - integer *, integer *, doublecomplex *, integer *, integer *, - integer *), zgetrs_(char *, integer *, integer *, doublecomplex *, - integer *, integer *, doublecomplex *, integer *, 
integer *); - - -/* - -- LAPACK driver routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - March 31, 1993 - - - Purpose - ======= - - ZGESV computes the solution to a complex system of linear equations - A * X = B, - where A is an N-by-N matrix and X and B are N-by-NRHS matrices. - - The LU decomposition with partial pivoting and row interchanges is - used to factor A as - A = P * L * U, - where P is a permutation matrix, L is unit lower triangular, and U is - upper triangular. The factored form of A is then used to solve the - system of equations A * X = B. - - Arguments - ========= - - N (input) INTEGER - The number of linear equations, i.e., the order of the - matrix A. N >= 0. - - NRHS (input) INTEGER - The number of right hand sides, i.e., the number of columns - of the matrix B. NRHS >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the N-by-N coefficient matrix A. - On exit, the factors L and U from the factorization - A = P*L*U; the unit diagonal elements of L are not stored. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - IPIV (output) INTEGER array, dimension (N) - The pivot indices that define the permutation matrix P; - row i of the matrix was interchanged with row IPIV(i). - - B (input/output) COMPLEX*16 array, dimension (LDB,NRHS) - On entry, the N-by-NRHS matrix of right hand side matrix B. - On exit, if INFO = 0, the N-by-NRHS solution matrix X. - - LDB (input) INTEGER - The leading dimension of the array B. LDB >= max(1,N). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: if INFO = i, U(i,i) is exactly zero. The factorization - has been completed, but the factor U is exactly - singular, so the solution could not be computed. 
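The factor-then-solve sequence ZGESV describes above (ZGETRF followed by ZGETRS) can be sketched in real double arithmetic. This is an illustrative sketch only, not part of the LAPACK source: the actual routines below operate on complex data, use blocked Level 3 BLAS updates, and 1-based Fortran pivot indices, whereas this analogue is unblocked, real-valued, and 0-based. Column-major storage is kept to mirror the Fortran layout.

```c
#include <math.h>

/* Illustrative sketch (not LAPACK code): factor a small n-by-n real
   matrix in place as P*L*U with partial pivoting, then solve A*x = b,
   overwriting b with x.  Column-major storage, 0-based pivots.
   Returns 0 on success, or j+1 if U(j,j) is exactly zero. */
int lu_solve_sketch(int n, double *a, int lda, int *ipiv, double *b)
{
    /* Factorization: for each column, pick the largest pivot, swap
       rows, scale the subdiagonal, update the trailing submatrix. */
    for (int j = 0; j < n; ++j) {
        int p = j;
        for (int i = j + 1; i < n; ++i)
            if (fabs(a[i + j * lda]) > fabs(a[p + j * lda]))
                p = i;
        ipiv[j] = p;
        if (a[p + j * lda] == 0.0)
            return j + 1;                     /* U is exactly singular */
        if (p != j)
            for (int k = 0; k < n; ++k) {
                double t = a[j + k * lda];
                a[j + k * lda] = a[p + k * lda];
                a[p + k * lda] = t;
            }
        for (int i = j + 1; i < n; ++i) {
            a[i + j * lda] /= a[j + j * lda];
            for (int k = j + 1; k < n; ++k)
                a[i + k * lda] -= a[i + j * lda] * a[j + k * lda];
        }
    }
    /* Solve: apply the row interchanges to b, then the unit lower
       triangular solve L*y = b, then the upper triangular U*x = y. */
    for (int j = 0; j < n; ++j) {
        double t = b[j]; b[j] = b[ipiv[j]]; b[ipiv[j]] = t;
    }
    for (int i = 1; i < n; ++i)
        for (int k = 0; k < i; ++k)
            b[i] -= a[i + k * lda] * b[k];
    for (int i = n - 1; i >= 0; --i) {
        for (int k = i + 1; k < n; ++k)
            b[i] -= a[i + k * lda] * b[k];
        b[i] /= a[i + i * lda];
    }
    return 0;
}
```

As in ZGESV, the factored form overwrites A and the solution overwrites the right-hand side; a nonzero return mirrors the INFO > 0 convention.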
- - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --ipiv; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - - /* Function Body */ - *info = 0; - if (*n < 0) { - *info = -1; - } else if (*nrhs < 0) { - *info = -2; - } else if (*lda < max(1,*n)) { - *info = -4; - } else if (*ldb < max(1,*n)) { - *info = -7; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZGESV ", &i__1); - return 0; - } - -/* Compute the LU factorization of A. */ - - zgetrf_(n, n, &a[a_offset], lda, &ipiv[1], info); - if (*info == 0) { - -/* Solve the system A*X = B, overwriting B with X. */ - - zgetrs_("No transpose", n, nrhs, &a[a_offset], lda, &ipiv[1], &b[ - b_offset], ldb, info); - } - return 0; - -/* End of ZGESV */ - -} /* zgesv_ */ - -/* Subroutine */ int zgetf2_(integer *m, integer *n, doublecomplex *a, - integer *lda, integer *ipiv, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - doublecomplex z__1; - - /* Builtin functions */ - void z_div(doublecomplex *, doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer j, jp; - extern /* Subroutine */ int zscal_(integer *, doublecomplex *, - doublecomplex *, integer *), zgeru_(integer *, integer *, - doublecomplex *, doublecomplex *, integer *, doublecomplex *, - integer *, doublecomplex *, integer *), zswap_(integer *, - doublecomplex *, integer *, doublecomplex *, integer *), xerbla_( - char *, integer *); - extern integer izamax_(integer *, doublecomplex *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZGETF2 computes an LU factorization of a general m-by-n matrix A - using partial pivoting with row interchanges. 
- - The factorization has the form - A = P * L * U - where P is a permutation matrix, L is lower triangular with unit - diagonal elements (lower trapezoidal if m > n), and U is upper - triangular (upper trapezoidal if m < n). - - This is the right-looking Level 2 BLAS version of the algorithm. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the m by n matrix to be factored. - On exit, the factors L and U from the factorization - A = P*L*U; the unit diagonal elements of L are not stored. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - IPIV (output) INTEGER array, dimension (min(M,N)) - The pivot indices; for 1 <= i <= min(M,N), row i of the - matrix was interchanged with row IPIV(i). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -k, the k-th argument had an illegal value - > 0: if INFO = k, U(k,k) is exactly zero. The factorization - has been completed, but the factor U is exactly - singular, and division by zero will occur if it is used - to solve a system of equations. - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --ipiv; - - /* Function Body */ - *info = 0; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZGETF2", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0) { - return 0; - } - - i__1 = min(*m,*n); - for (j = 1; j <= i__1; ++j) { - -/* Find pivot and test for singularity. 
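The pivot step carried out by the izamax_ and zswap_ calls in the loop below can be sketched in real arithmetic. Note that for complex data izamax_ maximizes |Re| + |Im| rather than the true modulus; this real-valued analogue (illustrative only, not LAPACK code) simply maximizes |a(i,j)| on or below the diagonal of column j, then exchanges the two rows across all n columns as zswap_ does.

```c
#include <math.h>

/* Illustrative sketch (not LAPACK code): 0-based index of the entry
   of largest magnitude on or below the diagonal of column j. */
int pivot_index(int m, int j, const double *a, int lda)
{
    int p = j;
    for (int i = j + 1; i < m; ++i)
        if (fabs(a[i + j * lda]) > fabs(a[p + j * lda]))
            p = i;
    return p;
}

/* Exchange rows r1 and r2 across all n columns (column-major). */
void swap_rows(int n, double *a, int lda, int r1, int r2)
{
    for (int k = 0; k < n; ++k) {
        double t = a[r1 + k * lda];
        a[r1 + k * lda] = a[r2 + k * lda];
        a[r2 + k * lda] = t;
    }
}
```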
*/ - - i__2 = *m - j + 1; - jp = j - 1 + izamax_(&i__2, &a[j + j * a_dim1], &c__1); - ipiv[j] = jp; - i__2 = jp + j * a_dim1; - if (a[i__2].r != 0. || a[i__2].i != 0.) { - -/* Apply the interchange to columns 1:N. */ - - if (jp != j) { - zswap_(n, &a[j + a_dim1], lda, &a[jp + a_dim1], lda); - } - -/* Compute elements J+1:M of J-th column. */ - - if (j < *m) { - i__2 = *m - j; - z_div(&z__1, &c_b60, &a[j + j * a_dim1]); - zscal_(&i__2, &z__1, &a[j + 1 + j * a_dim1], &c__1); - } - - } else if (*info == 0) { - - *info = j; - } - - if (j < min(*m,*n)) { - -/* Update trailing submatrix. */ - - i__2 = *m - j; - i__3 = *n - j; - z__1.r = -1., z__1.i = -0.; - zgeru_(&i__2, &i__3, &z__1, &a[j + 1 + j * a_dim1], &c__1, &a[j + - (j + 1) * a_dim1], lda, &a[j + 1 + (j + 1) * a_dim1], lda) - ; - } -/* L10: */ - } - return 0; - -/* End of ZGETF2 */ - -} /* zgetf2_ */ - -/* Subroutine */ int zgetrf_(integer *m, integer *n, doublecomplex *a, - integer *lda, integer *ipiv, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4, i__5; - doublecomplex z__1; - - /* Local variables */ - static integer i__, j, jb, nb, iinfo; - extern /* Subroutine */ int zgemm_(char *, char *, integer *, integer *, - integer *, doublecomplex *, doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *), ztrsm_(char *, char *, char *, char *, - integer *, integer *, doublecomplex *, doublecomplex *, integer * - , doublecomplex *, integer *), - zgetf2_(integer *, integer *, doublecomplex *, integer *, integer - *, integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int zlaswp_(integer *, doublecomplex *, integer *, - integer *, integer *, integer *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZGETRF computes an LU factorization of a general M-by-N matrix A - using partial pivoting with row interchanges. - - The factorization has the form - A = P * L * U - where P is a permutation matrix, L is lower triangular with unit - diagonal elements (lower trapezoidal if m > n), and U is upper - triangular (upper trapezoidal if m < n). - - This is the right-looking Level 3 BLAS version of the algorithm. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the M-by-N matrix to be factored. - On exit, the factors L and U from the factorization - A = P*L*U; the unit diagonal elements of L are not stored. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - IPIV (output) INTEGER array, dimension (min(M,N)) - The pivot indices; for 1 <= i <= min(M,N), row i of the - matrix was interchanged with row IPIV(i). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: if INFO = i, U(i,i) is exactly zero. The factorization - has been completed, but the factor U is exactly - singular, and division by zero will occur if it is used - to solve a system of equations. - - ===================================================================== - - - Test the input parameters. 
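After each panel is factored, ZGETRF applies the accumulated row interchanges to the columns outside the panel via zlaswp_. The pattern can be sketched as follows (illustrative only, not LAPACK code; zlaswp_ itself takes 1-based pivot indices and increment arguments, while this real-valued analogue uses 0-based indices):

```c
/* Illustrative sketch (not LAPACK code) of the zlaswp_ pattern:
   apply the row interchanges recorded in ipiv[k1..k2] to all n
   columns of a (column-major), in forward order. */
void apply_interchanges(int n, double *a, int lda,
                        int k1, int k2, const int *ipiv)
{
    for (int i = k1; i <= k2; ++i) {
        int p = ipiv[i];
        if (p != i)
            for (int k = 0; k < n; ++k) {
                double t = a[i + k * lda];
                a[i + k * lda] = a[p + k * lda];
                a[p + k * lda] = t;
            }
    }
}
```

Applying the interchanges in the same forward order in which they were generated is what makes the recorded ipiv sequence equivalent to the permutation matrix P.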
-*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --ipiv; - - /* Function Body */ - *info = 0; - if (*m < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*m)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZGETRF", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0) { - return 0; - } - -/* Determine the block size for this environment. */ - - nb = ilaenv_(&c__1, "ZGETRF", " ", m, n, &c_n1, &c_n1, (ftnlen)6, (ftnlen) - 1); - if (nb <= 1 || nb >= min(*m,*n)) { - -/* Use unblocked code. */ - - zgetf2_(m, n, &a[a_offset], lda, &ipiv[1], info); - } else { - -/* Use blocked code. */ - - i__1 = min(*m,*n); - i__2 = nb; - for (j = 1; i__2 < 0 ? j >= i__1 : j <= i__1; j += i__2) { -/* Computing MIN */ - i__3 = min(*m,*n) - j + 1; - jb = min(i__3,nb); - -/* - Factor diagonal and subdiagonal blocks and test for exact - singularity. -*/ - - i__3 = *m - j + 1; - zgetf2_(&i__3, &jb, &a[j + j * a_dim1], lda, &ipiv[j], &iinfo); - -/* Adjust INFO and the pivot indices. */ - - if ((*info == 0 && iinfo > 0)) { - *info = iinfo + j - 1; - } -/* Computing MIN */ - i__4 = *m, i__5 = j + jb - 1; - i__3 = min(i__4,i__5); - for (i__ = j; i__ <= i__3; ++i__) { - ipiv[i__] = j - 1 + ipiv[i__]; -/* L10: */ - } - -/* Apply interchanges to columns 1:J-1. */ - - i__3 = j - 1; - i__4 = j + jb - 1; - zlaswp_(&i__3, &a[a_offset], lda, &j, &i__4, &ipiv[1], &c__1); - - if (j + jb <= *n) { - -/* Apply interchanges to columns J+JB:N. */ - - i__3 = *n - j - jb + 1; - i__4 = j + jb - 1; - zlaswp_(&i__3, &a[(j + jb) * a_dim1 + 1], lda, &j, &i__4, & - ipiv[1], &c__1); - -/* Compute block row of U. */ - - i__3 = *n - j - jb + 1; - ztrsm_("Left", "Lower", "No transpose", "Unit", &jb, &i__3, & - c_b60, &a[j + j * a_dim1], lda, &a[j + (j + jb) * - a_dim1], lda); - if (j + jb <= *m) { - -/* Update trailing submatrix. 
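The trailing-submatrix update performed by the zgemm_ call below computes A22 <- A22 - A21 * A12, with alpha = -1 and beta = 1. A real-valued analogue (illustrative only, not LAPACK code, and without zgemm_'s blocking or transpose options) looks like this:

```c
/* Illustrative sketch (not LAPACK code) of the rank-k trailing
   update A22 <- A22 - A21 * A12, all blocks column-major:
   A21 is m-by-k, A12 is k-by-n, A22 is m-by-n. */
void rank_k_update(int m, int n, int k,
                   const double *a21, int lda21,
                   const double *a12, int lda12,
                   double *a22, int lda22)
{
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < m; ++i) {
            double s = 0.0;
            for (int l = 0; l < k; ++l)
                s += a21[i + l * lda21] * a12[l + j * lda12];
            a22[i + j * lda22] -= s;
        }
}
```

Concentrating the arithmetic in this matrix-matrix product is what makes the blocked (Level 3 BLAS) variant faster than the unblocked zgetf2_.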
*/ - - i__3 = *m - j - jb + 1; - i__4 = *n - j - jb + 1; - z__1.r = -1., z__1.i = -0.; - zgemm_("No transpose", "No transpose", &i__3, &i__4, &jb, - &z__1, &a[j + jb + j * a_dim1], lda, &a[j + (j + - jb) * a_dim1], lda, &c_b60, &a[j + jb + (j + jb) * - a_dim1], lda); - } - } -/* L20: */ - } - } - return 0; - -/* End of ZGETRF */ - -} /* zgetrf_ */ - -/* Subroutine */ int zgetrs_(char *trans, integer *n, integer *nrhs, - doublecomplex *a, integer *lda, integer *ipiv, doublecomplex *b, - integer *ldb, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, i__1; - - /* Local variables */ - extern logical lsame_(char *, char *); - extern /* Subroutine */ int ztrsm_(char *, char *, char *, char *, - integer *, integer *, doublecomplex *, doublecomplex *, integer *, - doublecomplex *, integer *), - xerbla_(char *, integer *); - static logical notran; - extern /* Subroutine */ int zlaswp_(integer *, doublecomplex *, integer *, - integer *, integer *, integer *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZGETRS solves a system of linear equations - A * X = B, A**T * X = B, or A**H * X = B - with a general N-by-N matrix A using the LU factorization computed - by ZGETRF. - - Arguments - ========= - - TRANS (input) CHARACTER*1 - Specifies the form of the system of equations: - = 'N': A * X = B (No transpose) - = 'T': A**T * X = B (Transpose) - = 'C': A**H * X = B (Conjugate transpose) - - N (input) INTEGER - The order of the matrix A. N >= 0. - - NRHS (input) INTEGER - The number of right hand sides, i.e., the number of columns - of the matrix B. NRHS >= 0. - - A (input) COMPLEX*16 array, dimension (LDA,N) - The factors L and U from the factorization A = P*L*U - as computed by ZGETRF. - - LDA (input) INTEGER - The leading dimension of the array A. 
LDA >= max(1,N). - - IPIV (input) INTEGER array, dimension (N) - The pivot indices from ZGETRF; for 1<=i<=N, row i of the - matrix was interchanged with row IPIV(i). - - B (input/output) COMPLEX*16 array, dimension (LDB,NRHS) - On entry, the right hand side matrix B. - On exit, the solution matrix X. - - LDB (input) INTEGER - The leading dimension of the array B. LDB >= max(1,N). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --ipiv; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - - /* Function Body */ - *info = 0; - notran = lsame_(trans, "N"); - if (((! notran && ! lsame_(trans, "T")) && ! lsame_( - trans, "C"))) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*nrhs < 0) { - *info = -3; - } else if (*lda < max(1,*n)) { - *info = -5; - } else if (*ldb < max(1,*n)) { - *info = -8; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZGETRS", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0 || *nrhs == 0) { - return 0; - } - - if (notran) { - -/* - Solve A * X = B. - - Apply row interchanges to the right hand sides. -*/ - - zlaswp_(nrhs, &b[b_offset], ldb, &c__1, n, &ipiv[1], &c__1); - -/* Solve L*X = B, overwriting B with X. */ - - ztrsm_("Left", "Lower", "No transpose", "Unit", n, nrhs, &c_b60, &a[ - a_offset], lda, &b[b_offset], ldb); - -/* Solve U*X = B, overwriting B with X. */ - - ztrsm_("Left", "Upper", "No transpose", "Non-unit", n, nrhs, &c_b60, & - a[a_offset], lda, &b[b_offset], ldb); - } else { - -/* - Solve A**T * X = B or A**H * X = B. - - Solve U'*X = B, overwriting B with X. -*/ - - ztrsm_("Left", "Upper", trans, "Non-unit", n, nrhs, &c_b60, &a[ - a_offset], lda, &b[b_offset], ldb); - -/* Solve L'*X = B, overwriting B with X. 
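The transposed unit-lower solve performed by the ztrsm_ call below never forms L**T explicitly; it reads the stored columns of L while substituting from the last unknown backward. A real-valued sketch (illustrative only, not LAPACK code, and ignoring the conjugation needed for trans = 'C'):

```c
/* Illustrative sketch (not LAPACK code): solve L**T * x = b where L
   is unit lower triangular, stored column-major in l; b is
   overwritten with x.  The transpose is handled implicitly by
   reading column i of L below the diagonal. */
void unit_lower_trans_solve(int n, const double *l, int lda, double *b)
{
    for (int i = n - 1; i >= 0; --i)
        for (int k = i + 1; k < n; ++k)
            b[i] -= l[k + i * lda] * b[k];
}
```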
*/ - - ztrsm_("Left", "Lower", trans, "Unit", n, nrhs, &c_b60, &a[a_offset], - lda, &b[b_offset], ldb); - -/* Apply row interchanges to the solution vectors. */ - - zlaswp_(nrhs, &b[b_offset], ldb, &c__1, n, &ipiv[1], &c_n1); - } - - return 0; - -/* End of ZGETRS */ - -} /* zgetrs_ */ - -/* Subroutine */ int zheevd_(char *jobz, char *uplo, integer *n, - doublecomplex *a, integer *lda, doublereal *w, doublecomplex *work, - integer *lwork, doublereal *rwork, integer *lrwork, integer *iwork, - integer *liwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - doublereal d__1, d__2; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal eps; - static integer inde; - static doublereal anrm; - static integer imax; - static doublereal rmin, rmax; - static integer lopt; - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *); - static doublereal sigma; - extern logical lsame_(char *, char *); - static integer iinfo, lwmin, liopt; - static logical lower; - static integer llrwk, lropt; - static logical wantz; - static integer indwk2, llwrk2; - - static integer iscale; - static doublereal safmin; - extern /* Subroutine */ int xerbla_(char *, integer *); - static doublereal bignum; - extern doublereal zlanhe_(char *, char *, integer *, doublecomplex *, - integer *, doublereal *); - static integer indtau; - extern /* Subroutine */ int dsterf_(integer *, doublereal *, doublereal *, - integer *), zlascl_(char *, integer *, integer *, doublereal *, - doublereal *, integer *, integer *, doublecomplex *, integer *, - integer *), zstedc_(char *, integer *, doublereal *, - doublereal *, doublecomplex *, integer *, doublecomplex *, - integer *, doublereal *, integer *, integer *, integer *, integer - *); - static integer indrwk, indwrk, liwmin; - extern /* Subroutine */ int zhetrd_(char *, integer *, doublecomplex *, - integer *, doublereal *, doublereal 
*, doublecomplex *, - doublecomplex *, integer *, integer *), zlacpy_(char *, - integer *, integer *, doublecomplex *, integer *, doublecomplex *, - integer *); - static integer lrwmin, llwork; - static doublereal smlnum; - static logical lquery; - extern /* Subroutine */ int zunmtr_(char *, char *, char *, integer *, - integer *, doublecomplex *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *, integer *); - - -/* - -- LAPACK driver routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZHEEVD computes all eigenvalues and, optionally, eigenvectors of a - complex Hermitian matrix A. If eigenvectors are desired, it uses a - divide and conquer algorithm. - - The divide and conquer algorithm makes very mild assumptions about - floating point arithmetic. It will work on machines with a guard - digit in add/subtract, or on those binary machines without guard - digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or - Cray-2. It could conceivably fail on hexadecimal or decimal machines - without guard digits, but we know of none. - - Arguments - ========= - - JOBZ (input) CHARACTER*1 - = 'N': Compute eigenvalues only; - = 'V': Compute eigenvalues and eigenvectors. - - UPLO (input) CHARACTER*1 - = 'U': Upper triangle of A is stored; - = 'L': Lower triangle of A is stored. - - N (input) INTEGER - The order of the matrix A. N >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA, N) - On entry, the Hermitian matrix A. If UPLO = 'U', the - leading N-by-N upper triangular part of A contains the - upper triangular part of the matrix A. If UPLO = 'L', - the leading N-by-N lower triangular part of A contains - the lower triangular part of the matrix A. - On exit, if JOBZ = 'V', then if INFO = 0, A contains the - orthonormal eigenvectors of the matrix A. 
- If JOBZ = 'N', then on exit the lower triangle (if UPLO='L') - or the upper triangle (if UPLO='U') of A, including the - diagonal, is destroyed. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - W (output) DOUBLE PRECISION array, dimension (N) - If INFO = 0, the eigenvalues in ascending order. - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The length of the array WORK. - If N <= 1, LWORK must be at least 1. - If JOBZ = 'N' and N > 1, LWORK must be at least N + 1. - If JOBZ = 'V' and N > 1, LWORK must be at least 2*N + N**2. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - RWORK (workspace/output) DOUBLE PRECISION array, - dimension (LRWORK) - On exit, if INFO = 0, RWORK(1) returns the optimal LRWORK. - - LRWORK (input) INTEGER - The dimension of the array RWORK. - If N <= 1, LRWORK must be at least 1. - If JOBZ = 'N' and N > 1, LRWORK must be at least N. - If JOBZ = 'V' and N > 1, LRWORK must be at least - 1 + 5*N + 2*N**2. - - If LRWORK = -1, then a workspace query is assumed; the - routine only calculates the optimal size of the RWORK array, - returns this value as the first entry of the RWORK array, and - no error message related to LRWORK is issued by XERBLA. - - IWORK (workspace/output) INTEGER array, dimension (LIWORK) - On exit, if INFO = 0, IWORK(1) returns the optimal LIWORK. - - LIWORK (input) INTEGER - The dimension of the array IWORK. - If N <= 1, LIWORK must be at least 1. - If JOBZ = 'N' and N > 1, LIWORK must be at least 1. - If JOBZ = 'V' and N > 1, LIWORK must be at least 3 + 5*N. 
- - If LIWORK = -1, then a workspace query is assumed; the - routine only calculates the optimal size of the IWORK array, - returns this value as the first entry of the IWORK array, and - no error message related to LIWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: if INFO = i, the algorithm failed to converge; i - off-diagonal elements of an intermediate tridiagonal - form did not converge to zero. - - Further Details - =============== - - Based on contributions by - Jeff Rutter, Computer Science Division, University of California - at Berkeley, USA - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --w; - --work; - --rwork; - --iwork; - - /* Function Body */ - wantz = lsame_(jobz, "V"); - lower = lsame_(uplo, "L"); - lquery = *lwork == -1 || *lrwork == -1 || *liwork == -1; - - *info = 0; - if (*n <= 1) { - lwmin = 1; - lrwmin = 1; - liwmin = 1; - lopt = lwmin; - lropt = lrwmin; - liopt = liwmin; - } else { - if (wantz) { - lwmin = ((*n) << (1)) + *n * *n; -/* Computing 2nd power */ - i__1 = *n; - lrwmin = *n * 5 + 1 + ((i__1 * i__1) << (1)); - liwmin = *n * 5 + 3; - } else { - lwmin = *n + 1; - lrwmin = *n; - liwmin = 1; - } - lopt = lwmin; - lropt = lrwmin; - liopt = liwmin; - } - if (! (wantz || lsame_(jobz, "N"))) { - *info = -1; - } else if (! (lower || lsame_(uplo, "U"))) { - *info = -2; - } else if (*n < 0) { - *info = -3; - } else if (*lda < max(1,*n)) { - *info = -5; - } else if ((*lwork < lwmin && ! lquery)) { - *info = -8; - } else if ((*lrwork < lrwmin && ! lquery)) { - *info = -10; - } else if ((*liwork < liwmin && ! 
lquery)) { - *info = -12; - } - - if (*info == 0) { - work[1].r = (doublereal) lopt, work[1].i = 0.; - rwork[1] = (doublereal) lropt; - iwork[1] = liopt; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZHEEVD", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - - if (*n == 1) { - i__1 = a_dim1 + 1; - w[1] = a[i__1].r; - if (wantz) { - i__1 = a_dim1 + 1; - a[i__1].r = 1., a[i__1].i = 0.; - } - return 0; - } - -/* Get machine constants. */ - - safmin = SAFEMINIMUM; - eps = PRECISION; - smlnum = safmin / eps; - bignum = 1. / smlnum; - rmin = sqrt(smlnum); - rmax = sqrt(bignum); - -/* Scale matrix to allowable range, if necessary. */ - - anrm = zlanhe_("M", uplo, n, &a[a_offset], lda, &rwork[1]); - iscale = 0; - if ((anrm > 0. && anrm < rmin)) { - iscale = 1; - sigma = rmin / anrm; - } else if (anrm > rmax) { - iscale = 1; - sigma = rmax / anrm; - } - if (iscale == 1) { - zlascl_(uplo, &c__0, &c__0, &c_b1015, &sigma, n, n, &a[a_offset], lda, - info); - } - -/* Call ZHETRD to reduce Hermitian matrix to tridiagonal form. */ - - inde = 1; - indtau = 1; - indwrk = indtau + *n; - indrwk = inde + *n; - indwk2 = indwrk + *n * *n; - llwork = *lwork - indwrk + 1; - llwrk2 = *lwork - indwk2 + 1; - llrwk = *lrwork - indrwk + 1; - zhetrd_(uplo, n, &a[a_offset], lda, &w[1], &rwork[inde], &work[indtau], & - work[indwrk], &llwork, &iinfo); -/* Computing MAX */ - i__1 = indwrk; - d__1 = (doublereal) lopt, d__2 = (doublereal) (*n) + work[i__1].r; - lopt = (integer) max(d__1,d__2); - -/* - For eigenvalues only, call DSTERF. For eigenvectors, first call - ZSTEDC to generate the eigenvector matrix, WORK(INDWRK), of the - tridiagonal matrix, then call ZUNMTR to multiply it to the - Householder transformations represented as Householder vectors in - A. -*/ - - if (! 
wantz) { - dsterf_(n, &w[1], &rwork[inde], info); - } else { - zstedc_("I", n, &w[1], &rwork[inde], &work[indwrk], n, &work[indwk2], - &llwrk2, &rwork[indrwk], &llrwk, &iwork[1], liwork, info); - zunmtr_("L", uplo, "N", n, n, &a[a_offset], lda, &work[indtau], &work[ - indwrk], n, &work[indwk2], &llwrk2, &iinfo); - zlacpy_("A", n, n, &work[indwrk], n, &a[a_offset], lda); -/* - Computing MAX - Computing 2nd power -*/ - i__3 = *n; - i__4 = indwk2; - i__1 = lopt, i__2 = *n + i__3 * i__3 + (integer) work[i__4].r; - lopt = max(i__1,i__2); - } - -/* If matrix was scaled, then rescale eigenvalues appropriately. */ - - if (iscale == 1) { - if (*info == 0) { - imax = *n; - } else { - imax = *info - 1; - } - d__1 = 1. / sigma; - dscal_(&imax, &d__1, &w[1], &c__1); - } - - work[1].r = (doublereal) lopt, work[1].i = 0.; - rwork[1] = (doublereal) lropt; - iwork[1] = liopt; - - return 0; - -/* End of ZHEEVD */ - -} /* zheevd_ */ - -/* Subroutine */ int zhetd2_(char *uplo, integer *n, doublecomplex *a, - integer *lda, doublereal *d__, doublereal *e, doublecomplex *tau, - integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - doublereal d__1; - doublecomplex z__1, z__2, z__3, z__4; - - /* Local variables */ - static integer i__; - static doublecomplex taui; - extern /* Subroutine */ int zher2_(char *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *); - static doublecomplex alpha; - extern logical lsame_(char *, char *); - extern /* Double Complex */ VOID zdotc_(doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *); - extern /* Subroutine */ int zhemv_(char *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *, - doublecomplex *, doublecomplex *, integer *); - static logical upper; - extern /* Subroutine */ int zaxpy_(integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *), 
xerbla_( - char *, integer *), zlarfg_(integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1999 - - - Purpose - ======= - - ZHETD2 reduces a complex Hermitian matrix A to real symmetric - tridiagonal form T by a unitary similarity transformation: - Q' * A * Q = T. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - Specifies whether the upper or lower triangular part of the - Hermitian matrix A is stored: - = 'U': Upper triangular - = 'L': Lower triangular - - N (input) INTEGER - The order of the matrix A. N >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the Hermitian matrix A. If UPLO = 'U', the leading - n-by-n upper triangular part of A contains the upper - triangular part of the matrix A, and the strictly lower - triangular part of A is not referenced. If UPLO = 'L', the - leading n-by-n lower triangular part of A contains the lower - triangular part of the matrix A, and the strictly upper - triangular part of A is not referenced. - On exit, if UPLO = 'U', the diagonal and first superdiagonal - of A are overwritten by the corresponding elements of the - tridiagonal matrix T, and the elements above the first - superdiagonal, with the array TAU, represent the unitary - matrix Q as a product of elementary reflectors; if UPLO - = 'L', the diagonal and first subdiagonal of A are over- - written by the corresponding elements of the tridiagonal - matrix T, and the elements below the first subdiagonal, with - the array TAU, represent the unitary matrix Q as a product - of elementary reflectors. See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - D (output) DOUBLE PRECISION array, dimension (N) - The diagonal elements of the tridiagonal matrix T: - D(i) = A(i,i). 
- - E (output) DOUBLE PRECISION array, dimension (N-1) - The off-diagonal elements of the tridiagonal matrix T: - E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. - - TAU (output) COMPLEX*16 array, dimension (N-1) - The scalar factors of the elementary reflectors (see Further - Details). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - If UPLO = 'U', the matrix Q is represented as a product of elementary - reflectors - - Q = H(n-1) . . . H(2) H(1). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a complex scalar, and v is a complex vector with - v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in - A(1:i-1,i+1), and tau in TAU(i). - - If UPLO = 'L', the matrix Q is represented as a product of elementary - reflectors - - Q = H(1) H(2) . . . H(n-1). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a complex scalar, and v is a complex vector with - v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), - and tau in TAU(i). - - The contents of A on exit are illustrated by the following examples - with n = 5: - - if UPLO = 'U': if UPLO = 'L': - - ( d e v2 v3 v4 ) ( d ) - ( d e v3 v4 ) ( e d ) - ( d e v4 ) ( v1 e d ) - ( d e ) ( v1 v2 e d ) - ( d ) ( v1 v2 v3 e d ) - - where d and e denote diagonal and off-diagonal elements of T, and vi - denotes an element of the vector defining H(i). - - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --d__; - --e; - --tau; - - /* Function Body */ - *info = 0; - upper = lsame_(uplo, "U"); - if ((! upper && ! 
lsame_(uplo, "L"))) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*n)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZHETD2", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n <= 0) { - return 0; - } - - if (upper) { - -/* Reduce the upper triangle of A */ - - i__1 = *n + *n * a_dim1; - i__2 = *n + *n * a_dim1; - d__1 = a[i__2].r; - a[i__1].r = d__1, a[i__1].i = 0.; - for (i__ = *n - 1; i__ >= 1; --i__) { - -/* - Generate elementary reflector H(i) = I - tau * v * v' - to annihilate A(1:i-1,i+1) -*/ - - i__1 = i__ + (i__ + 1) * a_dim1; - alpha.r = a[i__1].r, alpha.i = a[i__1].i; - zlarfg_(&i__, &alpha, &a[(i__ + 1) * a_dim1 + 1], &c__1, &taui); - i__1 = i__; - e[i__1] = alpha.r; - - if (taui.r != 0. || taui.i != 0.) { - -/* Apply H(i) from both sides to A(1:i,1:i) */ - - i__1 = i__ + (i__ + 1) * a_dim1; - a[i__1].r = 1., a[i__1].i = 0.; - -/* Compute x := tau * A * v storing x in TAU(1:i) */ - - zhemv_(uplo, &i__, &taui, &a[a_offset], lda, &a[(i__ + 1) * - a_dim1 + 1], &c__1, &c_b59, &tau[1], &c__1) - ; - -/* Compute w := x - 1/2 * tau * (x'*v) * v */ - - z__3.r = -.5, z__3.i = -0.; - z__2.r = z__3.r * taui.r - z__3.i * taui.i, z__2.i = z__3.r * - taui.i + z__3.i * taui.r; - zdotc_(&z__4, &i__, &tau[1], &c__1, &a[(i__ + 1) * a_dim1 + 1] - , &c__1); - z__1.r = z__2.r * z__4.r - z__2.i * z__4.i, z__1.i = z__2.r * - z__4.i + z__2.i * z__4.r; - alpha.r = z__1.r, alpha.i = z__1.i; - zaxpy_(&i__, &alpha, &a[(i__ + 1) * a_dim1 + 1], &c__1, &tau[ - 1], &c__1); - -/* - Apply the transformation as a rank-2 update: - A := A - v * w' - w * v' -*/ - - z__1.r = -1., z__1.i = -0.; - zher2_(uplo, &i__, &z__1, &a[(i__ + 1) * a_dim1 + 1], &c__1, & - tau[1], &c__1, &a[a_offset], lda); - - } else { - i__1 = i__ + i__ * a_dim1; - i__2 = i__ + i__ * a_dim1; - d__1 = a[i__2].r; - a[i__1].r = d__1, a[i__1].i = 0.; - } - i__1 = i__ + (i__ + 1) * a_dim1; - i__2 = i__; - a[i__1].r = e[i__2], a[i__1].i = 0.; - i__1 = 
i__ + 1; - i__2 = i__ + 1 + (i__ + 1) * a_dim1; - d__[i__1] = a[i__2].r; - i__1 = i__; - tau[i__1].r = taui.r, tau[i__1].i = taui.i; -/* L10: */ - } - i__1 = a_dim1 + 1; - d__[1] = a[i__1].r; - } else { - -/* Reduce the lower triangle of A */ - - i__1 = a_dim1 + 1; - i__2 = a_dim1 + 1; - d__1 = a[i__2].r; - a[i__1].r = d__1, a[i__1].i = 0.; - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* - Generate elementary reflector H(i) = I - tau * v * v' - to annihilate A(i+2:n,i) -*/ - - i__2 = i__ + 1 + i__ * a_dim1; - alpha.r = a[i__2].r, alpha.i = a[i__2].i; - i__2 = *n - i__; -/* Computing MIN */ - i__3 = i__ + 2; - zlarfg_(&i__2, &alpha, &a[min(i__3,*n) + i__ * a_dim1], &c__1, & - taui); - i__2 = i__; - e[i__2] = alpha.r; - - if (taui.r != 0. || taui.i != 0.) { - -/* Apply H(i) from both sides to A(i+1:n,i+1:n) */ - - i__2 = i__ + 1 + i__ * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - -/* Compute x := tau * A * v storing y in TAU(i:n-1) */ - - i__2 = *n - i__; - zhemv_(uplo, &i__2, &taui, &a[i__ + 1 + (i__ + 1) * a_dim1], - lda, &a[i__ + 1 + i__ * a_dim1], &c__1, &c_b59, &tau[ - i__], &c__1); - -/* Compute w := x - 1/2 * tau * (x'*v) * v */ - - z__3.r = -.5, z__3.i = -0.; - z__2.r = z__3.r * taui.r - z__3.i * taui.i, z__2.i = z__3.r * - taui.i + z__3.i * taui.r; - i__2 = *n - i__; - zdotc_(&z__4, &i__2, &tau[i__], &c__1, &a[i__ + 1 + i__ * - a_dim1], &c__1); - z__1.r = z__2.r * z__4.r - z__2.i * z__4.i, z__1.i = z__2.r * - z__4.i + z__2.i * z__4.r; - alpha.r = z__1.r, alpha.i = z__1.i; - i__2 = *n - i__; - zaxpy_(&i__2, &alpha, &a[i__ + 1 + i__ * a_dim1], &c__1, &tau[ - i__], &c__1); - -/* - Apply the transformation as a rank-2 update: - A := A - v * w' - w * v' -*/ - - i__2 = *n - i__; - z__1.r = -1., z__1.i = -0.; - zher2_(uplo, &i__2, &z__1, &a[i__ + 1 + i__ * a_dim1], &c__1, - &tau[i__], &c__1, &a[i__ + 1 + (i__ + 1) * a_dim1], - lda); - - } else { - i__2 = i__ + 1 + (i__ + 1) * a_dim1; - i__3 = i__ + 1 + (i__ + 1) * a_dim1; - d__1 = a[i__3].r; - 
a[i__2].r = d__1, a[i__2].i = 0.; - } - i__2 = i__ + 1 + i__ * a_dim1; - i__3 = i__; - a[i__2].r = e[i__3], a[i__2].i = 0.; - i__2 = i__; - i__3 = i__ + i__ * a_dim1; - d__[i__2] = a[i__3].r; - i__2 = i__; - tau[i__2].r = taui.r, tau[i__2].i = taui.i; -/* L20: */ - } - i__1 = *n; - i__2 = *n + *n * a_dim1; - d__[i__1] = a[i__2].r; - } - - return 0; - -/* End of ZHETD2 */ - -} /* zhetd2_ */ - -/* Subroutine */ int zhetrd_(char *uplo, integer *n, doublecomplex *a, - integer *lda, doublereal *d__, doublereal *e, doublecomplex *tau, - doublecomplex *work, integer *lwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4, i__5; - doublecomplex z__1; - - /* Local variables */ - static integer i__, j, nb, kk, nx, iws; - extern logical lsame_(char *, char *); - static integer nbmin, iinfo; - static logical upper; - extern /* Subroutine */ int zhetd2_(char *, integer *, doublecomplex *, - integer *, doublereal *, doublereal *, doublecomplex *, integer *), zher2k_(char *, char *, integer *, integer *, - doublecomplex *, doublecomplex *, integer *, doublecomplex *, - integer *, doublereal *, doublecomplex *, integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int zlatrd_(char *, integer *, integer *, - doublecomplex *, integer *, doublereal *, doublecomplex *, - doublecomplex *, integer *); - static integer ldwork, lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZHETRD reduces a complex Hermitian matrix A to real symmetric - tridiagonal form T by a unitary similarity transformation: - Q**H * A * Q = T. 
- - Arguments - ========= - - UPLO (input) CHARACTER*1 - = 'U': Upper triangle of A is stored; - = 'L': Lower triangle of A is stored. - - N (input) INTEGER - The order of the matrix A. N >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the Hermitian matrix A. If UPLO = 'U', the leading - N-by-N upper triangular part of A contains the upper - triangular part of the matrix A, and the strictly lower - triangular part of A is not referenced. If UPLO = 'L', the - leading N-by-N lower triangular part of A contains the lower - triangular part of the matrix A, and the strictly upper - triangular part of A is not referenced. - On exit, if UPLO = 'U', the diagonal and first superdiagonal - of A are overwritten by the corresponding elements of the - tridiagonal matrix T, and the elements above the first - superdiagonal, with the array TAU, represent the unitary - matrix Q as a product of elementary reflectors; if UPLO - = 'L', the diagonal and first subdiagonal of A are over- - written by the corresponding elements of the tridiagonal - matrix T, and the elements below the first subdiagonal, with - the array TAU, represent the unitary matrix Q as a product - of elementary reflectors. See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - D (output) DOUBLE PRECISION array, dimension (N) - The diagonal elements of the tridiagonal matrix T: - D(i) = A(i,i). - - E (output) DOUBLE PRECISION array, dimension (N-1) - The off-diagonal elements of the tridiagonal matrix T: - E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. - - TAU (output) COMPLEX*16 array, dimension (N-1) - The scalar factors of the elementary reflectors (see Further - Details). - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= 1. 
- For optimum performance LWORK >= N*NB, where NB is the - optimal blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - Further Details - =============== - - If UPLO = 'U', the matrix Q is represented as a product of elementary - reflectors - - Q = H(n-1) . . . H(2) H(1). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a complex scalar, and v is a complex vector with - v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in - A(1:i-1,i+1), and tau in TAU(i). - - If UPLO = 'L', the matrix Q is represented as a product of elementary - reflectors - - Q = H(1) H(2) . . . H(n-1). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a complex scalar, and v is a complex vector with - v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in A(i+2:n,i), - and tau in TAU(i). - - The contents of A on exit are illustrated by the following examples - with n = 5: - - if UPLO = 'U': if UPLO = 'L': - - ( d e v2 v3 v4 ) ( d ) - ( d e v3 v4 ) ( e d ) - ( d e v4 ) ( v1 e d ) - ( d e ) ( v1 v2 e d ) - ( d ) ( v1 v2 v3 e d ) - - where d and e denote diagonal and off-diagonal elements of T, and vi - denotes an element of the vector defining H(i). - - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --d__; - --e; - --tau; - --work; - - /* Function Body */ - *info = 0; - upper = lsame_(uplo, "U"); - lquery = *lwork == -1; - if ((! upper && ! lsame_(uplo, "L"))) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*n)) { - *info = -4; - } else if ((*lwork < 1 && ! 
lquery)) { - *info = -9; - } - - if (*info == 0) { - -/* Determine the block size. */ - - nb = ilaenv_(&c__1, "ZHETRD", uplo, n, &c_n1, &c_n1, &c_n1, (ftnlen)6, - (ftnlen)1); - lwkopt = *n * nb; - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZHETRD", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - work[1].r = 1., work[1].i = 0.; - return 0; - } - - nx = *n; - iws = 1; - if ((nb > 1 && nb < *n)) { - -/* - Determine when to cross over from blocked to unblocked code - (last block is always handled by unblocked code). - - Computing MAX -*/ - i__1 = nb, i__2 = ilaenv_(&c__3, "ZHETRD", uplo, n, &c_n1, &c_n1, & - c_n1, (ftnlen)6, (ftnlen)1); - nx = max(i__1,i__2); - if (nx < *n) { - -/* Determine if workspace is large enough for blocked code. */ - - ldwork = *n; - iws = ldwork * nb; - if (*lwork < iws) { - -/* - Not enough workspace to use optimal NB: determine the - minimum value of NB, and reduce NB or force use of - unblocked code by setting NX = N. - - Computing MAX -*/ - i__1 = *lwork / ldwork; - nb = max(i__1,1); - nbmin = ilaenv_(&c__2, "ZHETRD", uplo, n, &c_n1, &c_n1, &c_n1, - (ftnlen)6, (ftnlen)1); - if (nb < nbmin) { - nx = *n; - } - } - } else { - nx = *n; - } - } else { - nb = 1; - } - - if (upper) { - -/* - Reduce the upper triangle of A. - Columns 1:kk are handled by the unblocked method. -*/ - - kk = *n - (*n - nx + nb - 1) / nb * nb; - i__1 = kk + 1; - i__2 = -nb; - for (i__ = *n - nb + 1; i__2 < 0 ? 
i__ >= i__1 : i__ <= i__1; i__ += - i__2) { - -/* - Reduce columns i:i+nb-1 to tridiagonal form and form the - matrix W which is needed to update the unreduced part of - the matrix -*/ - - i__3 = i__ + nb - 1; - zlatrd_(uplo, &i__3, &nb, &a[a_offset], lda, &e[1], &tau[1], & - work[1], &ldwork); - -/* - Update the unreduced submatrix A(1:i-1,1:i-1), using an - update of the form: A := A - V*W' - W*V' -*/ - - i__3 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zher2k_(uplo, "No transpose", &i__3, &nb, &z__1, &a[i__ * a_dim1 - + 1], lda, &work[1], &ldwork, &c_b1015, &a[a_offset], lda); - -/* - Copy superdiagonal elements back into A, and diagonal - elements into D -*/ - - i__3 = i__ + nb - 1; - for (j = i__; j <= i__3; ++j) { - i__4 = j - 1 + j * a_dim1; - i__5 = j - 1; - a[i__4].r = e[i__5], a[i__4].i = 0.; - i__4 = j; - i__5 = j + j * a_dim1; - d__[i__4] = a[i__5].r; -/* L10: */ - } -/* L20: */ - } - -/* Use unblocked code to reduce the last or only block */ - - zhetd2_(uplo, &kk, &a[a_offset], lda, &d__[1], &e[1], &tau[1], &iinfo); - } else { - -/* Reduce the lower triangle of A */ - - i__2 = *n - nx; - i__1 = nb; - for (i__ = 1; i__1 < 0 ? 
i__ >= i__2 : i__ <= i__2; i__ += i__1) { - -/* - Reduce columns i:i+nb-1 to tridiagonal form and form the - matrix W which is needed to update the unreduced part of - the matrix -*/ - - i__3 = *n - i__ + 1; - zlatrd_(uplo, &i__3, &nb, &a[i__ + i__ * a_dim1], lda, &e[i__], & - tau[i__], &work[1], &ldwork); - -/* - Update the unreduced submatrix A(i+nb:n,i+nb:n), using - an update of the form: A := A - V*W' - W*V' -*/ - - i__3 = *n - i__ - nb + 1; - z__1.r = -1., z__1.i = -0.; - zher2k_(uplo, "No transpose", &i__3, &nb, &z__1, &a[i__ + nb + - i__ * a_dim1], lda, &work[nb + 1], &ldwork, &c_b1015, &a[ - i__ + nb + (i__ + nb) * a_dim1], lda); - -/* - Copy subdiagonal elements back into A, and diagonal - elements into D -*/ - - i__3 = i__ + nb - 1; - for (j = i__; j <= i__3; ++j) { - i__4 = j + 1 + j * a_dim1; - i__5 = j; - a[i__4].r = e[i__5], a[i__4].i = 0.; - i__4 = j; - i__5 = j + j * a_dim1; - d__[i__4] = a[i__5].r; -/* L30: */ - } -/* L40: */ - } - -/* Use unblocked code to reduce the last or only block */ - - i__1 = *n - i__ + 1; - zhetd2_(uplo, &i__1, &a[i__ + i__ * a_dim1], lda, &d__[i__], &e[i__], - &tau[i__], &iinfo); - } - - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - return 0; - -/* End of ZHETRD */ - -} /* zhetrd_ */ - -/* Subroutine */ int zhseqr_(char *job, char *compz, integer *n, integer *ilo, - integer *ihi, doublecomplex *h__, integer *ldh, doublecomplex *w, - doublecomplex *z__, integer *ldz, doublecomplex *work, integer *lwork, - integer *info) -{ - /* System generated locals */ - address a__1[2]; - integer h_dim1, h_offset, z_dim1, z_offset, i__1, i__2, i__3, i__4[2], - i__5, i__6; - doublereal d__1, d__2, d__3, d__4; - doublecomplex z__1; - char ch__1[2]; - - /* Builtin functions */ - double d_imag(doublecomplex *); - void d_cnjg(doublecomplex *, doublecomplex *); - /* Subroutine */ int s_cat(char *, char **, integer *, integer *, ftnlen); - - /* Local variables */ - static integer i__, j, k, l; - static doublecomplex s[225] /* was 
[15][15] */, v[16]; - static integer i1, i2, ii, nh, nr, ns, nv; - static doublecomplex vv[16]; - static integer itn; - static doublecomplex tau; - static integer its; - static doublereal ulp, tst1; - static integer maxb, ierr; - static doublereal unfl; - static doublecomplex temp; - static doublereal ovfl; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int zscal_(integer *, doublecomplex *, - doublecomplex *, integer *); - static integer itemp; - static doublereal rtemp; - extern /* Subroutine */ int zgemv_(char *, integer *, integer *, - doublecomplex *, doublecomplex *, integer *, doublecomplex *, - integer *, doublecomplex *, doublecomplex *, integer *); - static logical initz, wantt, wantz; - static doublereal rwork[1]; - extern /* Subroutine */ int zcopy_(integer *, doublecomplex *, integer *, - doublecomplex *, integer *); - extern doublereal dlapy2_(doublereal *, doublereal *); - extern /* Subroutine */ int dlabad_(doublereal *, doublereal *); - - extern /* Subroutine */ int xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int zdscal_(integer *, doublereal *, - doublecomplex *, integer *), zlarfg_(integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *); - extern integer izamax_(integer *, doublecomplex *, integer *); - extern doublereal zlanhs_(char *, integer *, doublecomplex *, integer *, - doublereal *); - extern /* Subroutine */ int zlahqr_(logical *, logical *, integer *, - integer *, integer *, doublecomplex *, integer *, doublecomplex *, - integer *, integer *, doublecomplex *, integer *, integer *), - zlacpy_(char *, integer *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *), zlaset_(char *, integer *, - integer *, doublecomplex *, doublecomplex *, doublecomplex *, - integer *), zlarfx_(char *, integer *, integer *, - doublecomplex *, doublecomplex *, doublecomplex *, 
integer *, - doublecomplex *); - static doublereal smlnum; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZHSEQR computes the eigenvalues of a complex upper Hessenberg - matrix H, and, optionally, the matrices T and Z from the Schur - decomposition H = Z T Z**H, where T is an upper triangular matrix - (the Schur form), and Z is the unitary matrix of Schur vectors. - - Optionally Z may be postmultiplied into an input unitary matrix Q, - so that this routine can give the Schur factorization of a matrix A - which has been reduced to the Hessenberg form H by the unitary - matrix Q: A = Q*H*Q**H = (QZ)*T*(QZ)**H. - - Arguments - ========= - - JOB (input) CHARACTER*1 - = 'E': compute eigenvalues only; - = 'S': compute eigenvalues and the Schur form T. - - COMPZ (input) CHARACTER*1 - = 'N': no Schur vectors are computed; - = 'I': Z is initialized to the unit matrix and the matrix Z - of Schur vectors of H is returned; - = 'V': Z must contain an unitary matrix Q on entry, and - the product Q*Z is returned. - - N (input) INTEGER - The order of the matrix H. N >= 0. - - ILO (input) INTEGER - IHI (input) INTEGER - It is assumed that H is already upper triangular in rows - and columns 1:ILO-1 and IHI+1:N. ILO and IHI are normally - set by a previous call to ZGEBAL, and then passed to CGEHRD - when the matrix output by ZGEBAL is reduced to Hessenberg - form. Otherwise ILO and IHI should be set to 1 and N - respectively. - 1 <= ILO <= IHI <= N, if N > 0; ILO=1 and IHI=0, if N=0. - - H (input/output) COMPLEX*16 array, dimension (LDH,N) - On entry, the upper Hessenberg matrix H. - On exit, if JOB = 'S', H contains the upper triangular matrix - T from the Schur decomposition (the Schur form). If - JOB = 'E', the contents of H are unspecified on exit. 
- - LDH (input) INTEGER - The leading dimension of the array H. LDH >= max(1,N). - - W (output) COMPLEX*16 array, dimension (N) - The computed eigenvalues. If JOB = 'S', the eigenvalues are - stored in the same order as on the diagonal of the Schur form - returned in H, with W(i) = H(i,i). - - Z (input/output) COMPLEX*16 array, dimension (LDZ,N) - If COMPZ = 'N': Z is not referenced. - If COMPZ = 'I': on entry, Z need not be set, and on exit, Z - contains the unitary matrix Z of the Schur vectors of H. - If COMPZ = 'V': on entry Z must contain an N-by-N matrix Q, - which is assumed to be equal to the unit matrix except for - the submatrix Z(ILO:IHI,ILO:IHI); on exit Z contains Q*Z. - Normally Q is the unitary matrix generated by ZUNGHR after - the call to ZGEHRD which formed the Hessenberg matrix H. - - LDZ (input) INTEGER - The leading dimension of the array Z. - LDZ >= max(1,N) if COMPZ = 'I' or 'V'; LDZ >= 1 otherwise. - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= max(1,N). - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: if INFO = i, ZHSEQR failed to compute all the - eigenvalues in a total of 30*(IHI-ILO+1) iterations; - elements 1:ilo-1 and i+1:n of W contain those - eigenvalues which have been successfully computed. 
- - ===================================================================== - - - Decode and test the input parameters -*/ - - /* Parameter adjustments */ - h_dim1 = *ldh; - h_offset = 1 + h_dim1 * 1; - h__ -= h_offset; - --w; - z_dim1 = *ldz; - z_offset = 1 + z_dim1 * 1; - z__ -= z_offset; - --work; - - /* Function Body */ - wantt = lsame_(job, "S"); - initz = lsame_(compz, "I"); - wantz = initz || lsame_(compz, "V"); - - *info = 0; - i__1 = max(1,*n); - work[1].r = (doublereal) i__1, work[1].i = 0.; - lquery = *lwork == -1; - if ((! lsame_(job, "E") && ! wantt)) { - *info = -1; - } else if ((! lsame_(compz, "N") && ! wantz)) { - *info = -2; - } else if (*n < 0) { - *info = -3; - } else if (*ilo < 1 || *ilo > max(1,*n)) { - *info = -4; - } else if (*ihi < min(*ilo,*n) || *ihi > *n) { - *info = -5; - } else if (*ldh < max(1,*n)) { - *info = -7; - } else if (*ldz < 1 || (wantz && *ldz < max(1,*n))) { - *info = -10; - } else if ((*lwork < max(1,*n) && ! lquery)) { - *info = -12; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZHSEQR", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Initialize Z, if necessary */ - - if (initz) { - zlaset_("Full", n, n, &c_b59, &c_b60, &z__[z_offset], ldz); - } - -/* Store the eigenvalues isolated by ZGEBAL. */ - - i__1 = *ilo - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__; - i__3 = i__ + i__ * h_dim1; - w[i__2].r = h__[i__3].r, w[i__2].i = h__[i__3].i; -/* L10: */ - } - i__1 = *n; - for (i__ = *ihi + 1; i__ <= i__1; ++i__) { - i__2 = i__; - i__3 = i__ + i__ * h_dim1; - w[i__2].r = h__[i__3].r, w[i__2].i = h__[i__3].i; -/* L20: */ - } - -/* Quick return if possible. */ - - if (*n == 0) { - return 0; - } - if (*ilo == *ihi) { - i__1 = *ilo; - i__2 = *ilo + *ilo * h_dim1; - w[i__1].r = h__[i__2].r, w[i__1].i = h__[i__2].i; - return 0; - } - -/* - Set rows and columns ILO to IHI to zero below the first - subdiagonal. 
-*/ - - i__1 = *ihi - 2; - for (j = *ilo; j <= i__1; ++j) { - i__2 = *n; - for (i__ = j + 2; i__ <= i__2; ++i__) { - i__3 = i__ + j * h_dim1; - h__[i__3].r = 0., h__[i__3].i = 0.; -/* L30: */ - } -/* L40: */ - } - nh = *ihi - *ilo + 1; - -/* - I1 and I2 are the indices of the first row and last column of H - to which transformations must be applied. If eigenvalues only are - being computed, I1 and I2 are re-set inside the main loop. -*/ - - if (wantt) { - i1 = 1; - i2 = *n; - } else { - i1 = *ilo; - i2 = *ihi; - } - -/* Ensure that the subdiagonal elements are real. */ - - i__1 = *ihi; - for (i__ = *ilo + 1; i__ <= i__1; ++i__) { - i__2 = i__ + (i__ - 1) * h_dim1; - temp.r = h__[i__2].r, temp.i = h__[i__2].i; - if (d_imag(&temp) != 0.) { - d__1 = temp.r; - d__2 = d_imag(&temp); - rtemp = dlapy2_(&d__1, &d__2); - i__2 = i__ + (i__ - 1) * h_dim1; - h__[i__2].r = rtemp, h__[i__2].i = 0.; - z__1.r = temp.r / rtemp, z__1.i = temp.i / rtemp; - temp.r = z__1.r, temp.i = z__1.i; - if (i2 > i__) { - i__2 = i2 - i__; - d_cnjg(&z__1, &temp); - zscal_(&i__2, &z__1, &h__[i__ + (i__ + 1) * h_dim1], ldh); - } - i__2 = i__ - i1; - zscal_(&i__2, &temp, &h__[i1 + i__ * h_dim1], &c__1); - if (i__ < *ihi) { - i__2 = i__ + 1 + i__ * h_dim1; - i__3 = i__ + 1 + i__ * h_dim1; - z__1.r = temp.r * h__[i__3].r - temp.i * h__[i__3].i, z__1.i = - temp.r * h__[i__3].i + temp.i * h__[i__3].r; - h__[i__2].r = z__1.r, h__[i__2].i = z__1.i; - } - if (wantz) { - zscal_(&nh, &temp, &z__[*ilo + i__ * z_dim1], &c__1); - } - } -/* L50: */ - } - -/* - Determine the order of the multi-shift QR algorithm to be used. 
- - Writing concatenation -*/ - i__4[0] = 1, a__1[0] = job; - i__4[1] = 1, a__1[1] = compz; - s_cat(ch__1, a__1, i__4, &c__2, (ftnlen)2); - ns = ilaenv_(&c__4, "ZHSEQR", ch__1, n, ilo, ihi, &c_n1, (ftnlen)6, ( - ftnlen)2); -/* Writing concatenation */ - i__4[0] = 1, a__1[0] = job; - i__4[1] = 1, a__1[1] = compz; - s_cat(ch__1, a__1, i__4, &c__2, (ftnlen)2); - maxb = ilaenv_(&c__8, "ZHSEQR", ch__1, n, ilo, ihi, &c_n1, (ftnlen)6, ( - ftnlen)2); - if (ns <= 1 || ns > nh || maxb >= nh) { - -/* Use the standard double-shift algorithm */ - - zlahqr_(&wantt, &wantz, n, ilo, ihi, &h__[h_offset], ldh, &w[1], ilo, - ihi, &z__[z_offset], ldz, info); - return 0; - } - maxb = max(2,maxb); -/* Computing MIN */ - i__1 = min(ns,maxb); - ns = min(i__1,15); - -/* - Now 1 < NS <= MAXB < NH. - - Set machine-dependent constants for the stopping criterion. - If norm(H) <= sqrt(OVFL), overflow should not occur. -*/ - - unfl = SAFEMINIMUM; - ovfl = 1. / unfl; - dlabad_(&unfl, &ovfl); - ulp = PRECISION; - smlnum = unfl * (nh / ulp); - -/* ITN is the total number of multiple-shift QR iterations allowed. */ - - itn = nh * 30; - -/* - The main loop begins here. I is the loop index and decreases from - IHI to ILO in steps of at most MAXB. Each iteration of the loop - works with the active submatrix in rows and columns L to I. - Eigenvalues I+1 to IHI have already converged. Either L = ILO, or - H(L,L-1) is negligible so that the matrix splits. -*/ - - i__ = *ihi; -L60: - if (i__ < *ilo) { - goto L180; - } - -/* - Perform multiple-shift QR iterations on rows and columns ILO to I - until a submatrix of order at most MAXB splits off at the bottom - because a subdiagonal element has become negligible. -*/ - - l = *ilo; - i__1 = itn; - for (its = 0; its <= i__1; ++its) { - -/* Look for a single small subdiagonal element. 
*/ - - i__2 = l + 1; - for (k = i__; k >= i__2; --k) { - i__3 = k - 1 + (k - 1) * h_dim1; - i__5 = k + k * h_dim1; - tst1 = (d__1 = h__[i__3].r, abs(d__1)) + (d__2 = d_imag(&h__[k - - 1 + (k - 1) * h_dim1]), abs(d__2)) + ((d__3 = h__[i__5].r, - abs(d__3)) + (d__4 = d_imag(&h__[k + k * h_dim1]), abs( - d__4))); - if (tst1 == 0.) { - i__3 = i__ - l + 1; - tst1 = zlanhs_("1", &i__3, &h__[l + l * h_dim1], ldh, rwork); - } - i__3 = k + (k - 1) * h_dim1; -/* Computing MAX */ - d__2 = ulp * tst1; - if ((d__1 = h__[i__3].r, abs(d__1)) <= max(d__2,smlnum)) { - goto L80; - } -/* L70: */ - } -L80: - l = k; - if (l > *ilo) { - -/* H(L,L-1) is negligible. */ - - i__2 = l + (l - 1) * h_dim1; - h__[i__2].r = 0., h__[i__2].i = 0.; - } - -/* Exit from loop if a submatrix of order <= MAXB has split off. */ - - if (l >= i__ - maxb + 1) { - goto L170; - } - -/* - Now the active submatrix is in rows and columns L to I. If - eigenvalues only are being computed, only the active submatrix - need be transformed. -*/ - - if (! wantt) { - i1 = l; - i2 = i__; - } - - if (its == 20 || its == 30) { - -/* Exceptional shifts. */ - - i__2 = i__; - for (ii = i__ - ns + 1; ii <= i__2; ++ii) { - i__3 = ii; - i__5 = ii + (ii - 1) * h_dim1; - i__6 = ii + ii * h_dim1; - d__3 = ((d__1 = h__[i__5].r, abs(d__1)) + (d__2 = h__[i__6].r, - abs(d__2))) * 1.5; - w[i__3].r = d__3, w[i__3].i = 0.; -/* L90: */ - } - } else { - -/* Use eigenvalues of trailing submatrix of order NS as shifts. */ - - zlacpy_("Full", &ns, &ns, &h__[i__ - ns + 1 + (i__ - ns + 1) * - h_dim1], ldh, s, &c__15); - zlahqr_(&c_false, &c_false, &ns, &c__1, &ns, s, &c__15, &w[i__ - - ns + 1], &c__1, &ns, &z__[z_offset], ldz, &ierr); - if (ierr > 0) { - -/* - If ZLAHQR failed to compute all NS eigenvalues, use the - unconverged diagonal elements as the remaining shifts. 
-*/ - - i__2 = ierr; - for (ii = 1; ii <= i__2; ++ii) { - i__3 = i__ - ns + ii; - i__5 = ii + ii * 15 - 16; - w[i__3].r = s[i__5].r, w[i__3].i = s[i__5].i; -/* L100: */ - } - } - } - -/* - Form the first column of (G-w(1)) (G-w(2)) . . . (G-w(ns)) - where G is the Hessenberg submatrix H(L:I,L:I) and w is - the vector of shifts (stored in W). The result is - stored in the local array V. -*/ - - v[0].r = 1., v[0].i = 0.; - i__2 = ns + 1; - for (ii = 2; ii <= i__2; ++ii) { - i__3 = ii - 1; - v[i__3].r = 0., v[i__3].i = 0.; -/* L110: */ - } - nv = 1; - i__2 = i__; - for (j = i__ - ns + 1; j <= i__2; ++j) { - i__3 = nv + 1; - zcopy_(&i__3, v, &c__1, vv, &c__1); - i__3 = nv + 1; - i__5 = j; - z__1.r = -w[i__5].r, z__1.i = -w[i__5].i; - zgemv_("No transpose", &i__3, &nv, &c_b60, &h__[l + l * h_dim1], - ldh, vv, &c__1, &z__1, v, &c__1); - ++nv; - -/* - Scale V(1:NV) so that max(abs(V(i))) = 1. If V is zero, - reset it to the unit vector. -*/ - - itemp = izamax_(&nv, v, &c__1); - i__3 = itemp - 1; - rtemp = (d__1 = v[i__3].r, abs(d__1)) + (d__2 = d_imag(&v[itemp - - 1]), abs(d__2)); - if (rtemp == 0.) { - v[0].r = 1., v[0].i = 0.; - i__3 = nv; - for (ii = 2; ii <= i__3; ++ii) { - i__5 = ii - 1; - v[i__5].r = 0., v[i__5].i = 0.; -/* L120: */ - } - } else { - rtemp = max(rtemp,smlnum); - d__1 = 1. / rtemp; - zdscal_(&nv, &d__1, v, &c__1); - } -/* L130: */ - } - -/* Multiple-shift QR step */ - - i__2 = i__ - 1; - for (k = l; k <= i__2; ++k) { - -/* - The first iteration of this loop determines a reflection G - from the vector V and applies it from left and right to H, - thus creating a nonzero bulge below the subdiagonal. - - Each subsequent iteration determines a reflection G to - restore the Hessenberg form in the (K-1)th column, and thus - chases the bulge one step toward the bottom of the active - submatrix. NR is the order of G. 
- - Computing MIN -*/ - i__3 = ns + 1, i__5 = i__ - k + 1; - nr = min(i__3,i__5); - if (k > l) { - zcopy_(&nr, &h__[k + (k - 1) * h_dim1], &c__1, v, &c__1); - } - zlarfg_(&nr, v, &v[1], &c__1, &tau); - if (k > l) { - i__3 = k + (k - 1) * h_dim1; - h__[i__3].r = v[0].r, h__[i__3].i = v[0].i; - i__3 = i__; - for (ii = k + 1; ii <= i__3; ++ii) { - i__5 = ii + (k - 1) * h_dim1; - h__[i__5].r = 0., h__[i__5].i = 0.; -/* L140: */ - } - } - v[0].r = 1., v[0].i = 0.; - -/* - Apply G' from the left to transform the rows of the matrix - in columns K to I2. -*/ - - i__3 = i2 - k + 1; - d_cnjg(&z__1, &tau); - zlarfx_("Left", &nr, &i__3, v, &z__1, &h__[k + k * h_dim1], ldh, & - work[1]); - -/* - Apply G from the right to transform the columns of the - matrix in rows I1 to min(K+NR,I). - - Computing MIN -*/ - i__5 = k + nr; - i__3 = min(i__5,i__) - i1 + 1; - zlarfx_("Right", &i__3, &nr, v, &tau, &h__[i1 + k * h_dim1], ldh, - &work[1]); - - if (wantz) { - -/* Accumulate transformations in the matrix Z */ - - zlarfx_("Right", &nh, &nr, v, &tau, &z__[*ilo + k * z_dim1], - ldz, &work[1]); - } -/* L150: */ - } - -/* Ensure that H(I,I-1) is real. */ - - i__2 = i__ + (i__ - 1) * h_dim1; - temp.r = h__[i__2].r, temp.i = h__[i__2].i; - if (d_imag(&temp) != 0.) { - d__1 = temp.r; - d__2 = d_imag(&temp); - rtemp = dlapy2_(&d__1, &d__2); - i__2 = i__ + (i__ - 1) * h_dim1; - h__[i__2].r = rtemp, h__[i__2].i = 0.; - z__1.r = temp.r / rtemp, z__1.i = temp.i / rtemp; - temp.r = z__1.r, temp.i = z__1.i; - if (i2 > i__) { - i__2 = i2 - i__; - d_cnjg(&z__1, &temp); - zscal_(&i__2, &z__1, &h__[i__ + (i__ + 1) * h_dim1], ldh); - } - i__2 = i__ - i1; - zscal_(&i__2, &temp, &h__[i1 + i__ * h_dim1], &c__1); - if (wantz) { - zscal_(&nh, &temp, &z__[*ilo + i__ * z_dim1], &c__1); - } - } - -/* L160: */ - } - -/* Failure to converge in remaining number of iterations */ - - *info = i__; - return 0; - -L170: - -/* - A submatrix of order <= MAXB in rows and columns L to I has split - off. 
Use the double-shift QR algorithm to handle it. -*/ - - zlahqr_(&wantt, &wantz, n, &l, &i__, &h__[h_offset], ldh, &w[1], ilo, ihi, - &z__[z_offset], ldz, info); - if (*info > 0) { - return 0; - } - -/* - Decrement number of remaining iterations, and return to start of - the main loop with a new value of I. -*/ - - itn -= its; - i__ = l - 1; - goto L60; - -L180: - i__1 = max(1,*n); - work[1].r = (doublereal) i__1, work[1].i = 0.; - return 0; - -/* End of ZHSEQR */ - -} /* zhseqr_ */ - -/* Subroutine */ int zlabrd_(integer *m, integer *n, integer *nb, - doublecomplex *a, integer *lda, doublereal *d__, doublereal *e, - doublecomplex *tauq, doublecomplex *taup, doublecomplex *x, integer * - ldx, doublecomplex *y, integer *ldy) -{ - /* System generated locals */ - integer a_dim1, a_offset, x_dim1, x_offset, y_dim1, y_offset, i__1, i__2, - i__3; - doublecomplex z__1; - - /* Local variables */ - static integer i__; - static doublecomplex alpha; - extern /* Subroutine */ int zscal_(integer *, doublecomplex *, - doublecomplex *, integer *), zgemv_(char *, integer *, integer *, - doublecomplex *, doublecomplex *, integer *, doublecomplex *, - integer *, doublecomplex *, doublecomplex *, integer *), - zlarfg_(integer *, doublecomplex *, doublecomplex *, integer *, - doublecomplex *), zlacgv_(integer *, doublecomplex *, integer *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZLABRD reduces the first NB rows and columns of a complex general - m by n matrix A to upper or lower real bidiagonal form by a unitary - transformation Q' * A * P, and returns the matrices X and Y which - are needed to apply the transformation to the unreduced part of A. - - If m >= n, A is reduced to upper bidiagonal form; if m < n, to lower - bidiagonal form. 
- - This is an auxiliary routine called by ZGEBRD - - Arguments - ========= - - M (input) INTEGER - The number of rows in the matrix A. - - N (input) INTEGER - The number of columns in the matrix A. - - NB (input) INTEGER - The number of leading rows and columns of A to be reduced. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the m by n general matrix to be reduced. - On exit, the first NB rows and columns of the matrix are - overwritten; the rest of the array is unchanged. - If m >= n, elements on and below the diagonal in the first NB - columns, with the array TAUQ, represent the unitary - matrix Q as a product of elementary reflectors; and - elements above the diagonal in the first NB rows, with the - array TAUP, represent the unitary matrix P as a product - of elementary reflectors. - If m < n, elements below the diagonal in the first NB - columns, with the array TAUQ, represent the unitary - matrix Q as a product of elementary reflectors, and - elements on and above the diagonal in the first NB rows, - with the array TAUP, represent the unitary matrix P as - a product of elementary reflectors. - See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - D (output) DOUBLE PRECISION array, dimension (NB) - The diagonal elements of the first NB rows and columns of - the reduced matrix. D(i) = A(i,i). - - E (output) DOUBLE PRECISION array, dimension (NB) - The off-diagonal elements of the first NB rows and columns of - the reduced matrix. - - TAUQ (output) COMPLEX*16 array dimension (NB) - The scalar factors of the elementary reflectors which - represent the unitary matrix Q. See Further Details. - - TAUP (output) COMPLEX*16 array, dimension (NB) - The scalar factors of the elementary reflectors which - represent the unitary matrix P. See Further Details. - - X (output) COMPLEX*16 array, dimension (LDX,NB) - The m-by-nb matrix X required to update the unreduced part - of A. 
- - LDX (input) INTEGER - The leading dimension of the array X. LDX >= max(1,M). - - Y (output) COMPLEX*16 array, dimension (LDY,NB) - The n-by-nb matrix Y required to update the unreduced part - of A. - - LDY (input) INTEGER - The leading dimension of the array Y. LDY >= max(1,N). - - Further Details - =============== - - The matrices Q and P are represented as products of elementary - reflectors: - - Q = H(1) H(2) . . . H(nb) and P = G(1) G(2) . . . G(nb) - - Each H(i) and G(i) has the form: - - H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' - - where tauq and taup are complex scalars, and v and u are complex - vectors. - - If m >= n, v(1:i-1) = 0, v(i) = 1, and v(i:m) is stored on exit in - A(i:m,i); u(1:i) = 0, u(i+1) = 1, and u(i+1:n) is stored on exit in - A(i,i+1:n); tauq is stored in TAUQ(i) and taup in TAUP(i). - - If m < n, v(1:i) = 0, v(i+1) = 1, and v(i+1:m) is stored on exit in - A(i+2:m,i); u(1:i-1) = 0, u(i) = 1, and u(i:n) is stored on exit in - A(i,i+1:n); tauq is stored in TAUQ(i) and taup in TAUP(i). - - The elements of the vectors v and u together form the m-by-nb matrix - V and the nb-by-n matrix U' which are needed, with X and Y, to apply - the transformation to the unreduced part of the matrix, using a block - update of the form: A := A - V*Y' - X*U'. - - The contents of A on exit are illustrated by the following examples - with nb = 2: - - m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): - - ( 1 1 u1 u1 u1 ) ( 1 u1 u1 u1 u1 u1 ) - ( v1 1 1 u2 u2 ) ( 1 1 u2 u2 u2 u2 ) - ( v1 v2 a a a ) ( v1 1 a a a a ) - ( v1 v2 a a a ) ( v1 v2 a a a a ) - ( v1 v2 a a a ) ( v1 v2 a a a a ) - ( v1 v2 a a a ) - - where a denotes an element of the original matrix which is unchanged, - vi denotes an element of the vector defining H(i), and ui an element - of the vector defining G(i). 
- - ===================================================================== - - - Quick return if possible -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --d__; - --e; - --tauq; - --taup; - x_dim1 = *ldx; - x_offset = 1 + x_dim1 * 1; - x -= x_offset; - y_dim1 = *ldy; - y_offset = 1 + y_dim1 * 1; - y -= y_offset; - - /* Function Body */ - if (*m <= 0 || *n <= 0) { - return 0; - } - - if (*m >= *n) { - -/* Reduce to upper bidiagonal form */ - - i__1 = *nb; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Update A(i:m,i) */ - - i__2 = i__ - 1; - zlacgv_(&i__2, &y[i__ + y_dim1], ldy); - i__2 = *m - i__ + 1; - i__3 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &a[i__ + a_dim1], lda, - &y[i__ + y_dim1], ldy, &c_b60, &a[i__ + i__ * a_dim1], & - c__1); - i__2 = i__ - 1; - zlacgv_(&i__2, &y[i__ + y_dim1], ldy); - i__2 = *m - i__ + 1; - i__3 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &x[i__ + x_dim1], ldx, - &a[i__ * a_dim1 + 1], &c__1, &c_b60, &a[i__ + i__ * - a_dim1], &c__1); - -/* Generate reflection Q(i) to annihilate A(i+1:m,i) */ - - i__2 = i__ + i__ * a_dim1; - alpha.r = a[i__2].r, alpha.i = a[i__2].i; - i__2 = *m - i__ + 1; -/* Computing MIN */ - i__3 = i__ + 1; - zlarfg_(&i__2, &alpha, &a[min(i__3,*m) + i__ * a_dim1], &c__1, & - tauq[i__]); - i__2 = i__; - d__[i__2] = alpha.r; - if (i__ < *n) { - i__2 = i__ + i__ * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - -/* Compute Y(i+1:n,i) */ - - i__2 = *m - i__ + 1; - i__3 = *n - i__; - zgemv_("Conjugate transpose", &i__2, &i__3, &c_b60, &a[i__ + ( - i__ + 1) * a_dim1], lda, &a[i__ + i__ * a_dim1], & - c__1, &c_b59, &y[i__ + 1 + i__ * y_dim1], &c__1); - i__2 = *m - i__ + 1; - i__3 = i__ - 1; - zgemv_("Conjugate transpose", &i__2, &i__3, &c_b60, &a[i__ + - a_dim1], lda, &a[i__ + i__ * a_dim1], &c__1, &c_b59, & - y[i__ * y_dim1 + 1], &c__1); - i__2 = *n - i__; - i__3 = i__ - 1; - z__1.r = -1., 
z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &y[i__ + 1 + - y_dim1], ldy, &y[i__ * y_dim1 + 1], &c__1, &c_b60, &y[ - i__ + 1 + i__ * y_dim1], &c__1); - i__2 = *m - i__ + 1; - i__3 = i__ - 1; - zgemv_("Conjugate transpose", &i__2, &i__3, &c_b60, &x[i__ + - x_dim1], ldx, &a[i__ + i__ * a_dim1], &c__1, &c_b59, & - y[i__ * y_dim1 + 1], &c__1); - i__2 = i__ - 1; - i__3 = *n - i__; - z__1.r = -1., z__1.i = -0.; - zgemv_("Conjugate transpose", &i__2, &i__3, &z__1, &a[(i__ + - 1) * a_dim1 + 1], lda, &y[i__ * y_dim1 + 1], &c__1, & - c_b60, &y[i__ + 1 + i__ * y_dim1], &c__1); - i__2 = *n - i__; - zscal_(&i__2, &tauq[i__], &y[i__ + 1 + i__ * y_dim1], &c__1); - -/* Update A(i,i+1:n) */ - - i__2 = *n - i__; - zlacgv_(&i__2, &a[i__ + (i__ + 1) * a_dim1], lda); - zlacgv_(&i__, &a[i__ + a_dim1], lda); - i__2 = *n - i__; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__, &z__1, &y[i__ + 1 + - y_dim1], ldy, &a[i__ + a_dim1], lda, &c_b60, &a[i__ + - (i__ + 1) * a_dim1], lda); - zlacgv_(&i__, &a[i__ + a_dim1], lda); - i__2 = i__ - 1; - zlacgv_(&i__2, &x[i__ + x_dim1], ldx); - i__2 = i__ - 1; - i__3 = *n - i__; - z__1.r = -1., z__1.i = -0.; - zgemv_("Conjugate transpose", &i__2, &i__3, &z__1, &a[(i__ + - 1) * a_dim1 + 1], lda, &x[i__ + x_dim1], ldx, &c_b60, - &a[i__ + (i__ + 1) * a_dim1], lda); - i__2 = i__ - 1; - zlacgv_(&i__2, &x[i__ + x_dim1], ldx); - -/* Generate reflection P(i) to annihilate A(i,i+2:n) */ - - i__2 = i__ + (i__ + 1) * a_dim1; - alpha.r = a[i__2].r, alpha.i = a[i__2].i; - i__2 = *n - i__; -/* Computing MIN */ - i__3 = i__ + 2; - zlarfg_(&i__2, &alpha, &a[i__ + min(i__3,*n) * a_dim1], lda, & - taup[i__]); - i__2 = i__; - e[i__2] = alpha.r; - i__2 = i__ + (i__ + 1) * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - -/* Compute X(i+1:m,i) */ - - i__2 = *m - i__; - i__3 = *n - i__; - zgemv_("No transpose", &i__2, &i__3, &c_b60, &a[i__ + 1 + ( - i__ + 1) * a_dim1], lda, &a[i__ + (i__ + 1) * a_dim1], - lda, &c_b59, &x[i__ + 1 + i__ * x_dim1], 
&c__1); - i__2 = *n - i__; - zgemv_("Conjugate transpose", &i__2, &i__, &c_b60, &y[i__ + 1 - + y_dim1], ldy, &a[i__ + (i__ + 1) * a_dim1], lda, & - c_b59, &x[i__ * x_dim1 + 1], &c__1); - i__2 = *m - i__; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__, &z__1, &a[i__ + 1 + - a_dim1], lda, &x[i__ * x_dim1 + 1], &c__1, &c_b60, &x[ - i__ + 1 + i__ * x_dim1], &c__1); - i__2 = i__ - 1; - i__3 = *n - i__; - zgemv_("No transpose", &i__2, &i__3, &c_b60, &a[(i__ + 1) * - a_dim1 + 1], lda, &a[i__ + (i__ + 1) * a_dim1], lda, & - c_b59, &x[i__ * x_dim1 + 1], &c__1); - i__2 = *m - i__; - i__3 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &x[i__ + 1 + - x_dim1], ldx, &x[i__ * x_dim1 + 1], &c__1, &c_b60, &x[ - i__ + 1 + i__ * x_dim1], &c__1); - i__2 = *m - i__; - zscal_(&i__2, &taup[i__], &x[i__ + 1 + i__ * x_dim1], &c__1); - i__2 = *n - i__; - zlacgv_(&i__2, &a[i__ + (i__ + 1) * a_dim1], lda); - } -/* L10: */ - } - } else { - -/* Reduce to lower bidiagonal form */ - - i__1 = *nb; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Update A(i,i:n) */ - - i__2 = *n - i__ + 1; - zlacgv_(&i__2, &a[i__ + i__ * a_dim1], lda); - i__2 = i__ - 1; - zlacgv_(&i__2, &a[i__ + a_dim1], lda); - i__2 = *n - i__ + 1; - i__3 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &y[i__ + y_dim1], ldy, - &a[i__ + a_dim1], lda, &c_b60, &a[i__ + i__ * a_dim1], - lda); - i__2 = i__ - 1; - zlacgv_(&i__2, &a[i__ + a_dim1], lda); - i__2 = i__ - 1; - zlacgv_(&i__2, &x[i__ + x_dim1], ldx); - i__2 = i__ - 1; - i__3 = *n - i__ + 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("Conjugate transpose", &i__2, &i__3, &z__1, &a[i__ * - a_dim1 + 1], lda, &x[i__ + x_dim1], ldx, &c_b60, &a[i__ + - i__ * a_dim1], lda); - i__2 = i__ - 1; - zlacgv_(&i__2, &x[i__ + x_dim1], ldx); - -/* Generate reflection P(i) to annihilate A(i,i+1:n) */ - - i__2 = i__ + i__ * a_dim1; - alpha.r = a[i__2].r, alpha.i = a[i__2].i; - i__2 = *n - i__ + 1; -/* 
Computing MIN */ - i__3 = i__ + 1; - zlarfg_(&i__2, &alpha, &a[i__ + min(i__3,*n) * a_dim1], lda, & - taup[i__]); - i__2 = i__; - d__[i__2] = alpha.r; - if (i__ < *m) { - i__2 = i__ + i__ * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - -/* Compute X(i+1:m,i) */ - - i__2 = *m - i__; - i__3 = *n - i__ + 1; - zgemv_("No transpose", &i__2, &i__3, &c_b60, &a[i__ + 1 + i__ - * a_dim1], lda, &a[i__ + i__ * a_dim1], lda, &c_b59, & - x[i__ + 1 + i__ * x_dim1], &c__1); - i__2 = *n - i__ + 1; - i__3 = i__ - 1; - zgemv_("Conjugate transpose", &i__2, &i__3, &c_b60, &y[i__ + - y_dim1], ldy, &a[i__ + i__ * a_dim1], lda, &c_b59, &x[ - i__ * x_dim1 + 1], &c__1); - i__2 = *m - i__; - i__3 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &a[i__ + 1 + - a_dim1], lda, &x[i__ * x_dim1 + 1], &c__1, &c_b60, &x[ - i__ + 1 + i__ * x_dim1], &c__1); - i__2 = i__ - 1; - i__3 = *n - i__ + 1; - zgemv_("No transpose", &i__2, &i__3, &c_b60, &a[i__ * a_dim1 - + 1], lda, &a[i__ + i__ * a_dim1], lda, &c_b59, &x[ - i__ * x_dim1 + 1], &c__1); - i__2 = *m - i__; - i__3 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &x[i__ + 1 + - x_dim1], ldx, &x[i__ * x_dim1 + 1], &c__1, &c_b60, &x[ - i__ + 1 + i__ * x_dim1], &c__1); - i__2 = *m - i__; - zscal_(&i__2, &taup[i__], &x[i__ + 1 + i__ * x_dim1], &c__1); - i__2 = *n - i__ + 1; - zlacgv_(&i__2, &a[i__ + i__ * a_dim1], lda); - -/* Update A(i+1:m,i) */ - - i__2 = i__ - 1; - zlacgv_(&i__2, &y[i__ + y_dim1], ldy); - i__2 = *m - i__; - i__3 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &a[i__ + 1 + - a_dim1], lda, &y[i__ + y_dim1], ldy, &c_b60, &a[i__ + - 1 + i__ * a_dim1], &c__1); - i__2 = i__ - 1; - zlacgv_(&i__2, &y[i__ + y_dim1], ldy); - i__2 = *m - i__; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__, &z__1, &x[i__ + 1 + - x_dim1], ldx, &a[i__ * a_dim1 + 1], &c__1, &c_b60, &a[ - i__ + 1 + i__ * a_dim1], &c__1); - -/* Generate 
reflection Q(i) to annihilate A(i+2:m,i) */ - - i__2 = i__ + 1 + i__ * a_dim1; - alpha.r = a[i__2].r, alpha.i = a[i__2].i; - i__2 = *m - i__; -/* Computing MIN */ - i__3 = i__ + 2; - zlarfg_(&i__2, &alpha, &a[min(i__3,*m) + i__ * a_dim1], &c__1, - &tauq[i__]); - i__2 = i__; - e[i__2] = alpha.r; - i__2 = i__ + 1 + i__ * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - -/* Compute Y(i+1:n,i) */ - - i__2 = *m - i__; - i__3 = *n - i__; - zgemv_("Conjugate transpose", &i__2, &i__3, &c_b60, &a[i__ + - 1 + (i__ + 1) * a_dim1], lda, &a[i__ + 1 + i__ * - a_dim1], &c__1, &c_b59, &y[i__ + 1 + i__ * y_dim1], & - c__1); - i__2 = *m - i__; - i__3 = i__ - 1; - zgemv_("Conjugate transpose", &i__2, &i__3, &c_b60, &a[i__ + - 1 + a_dim1], lda, &a[i__ + 1 + i__ * a_dim1], &c__1, & - c_b59, &y[i__ * y_dim1 + 1], &c__1); - i__2 = *n - i__; - i__3 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &y[i__ + 1 + - y_dim1], ldy, &y[i__ * y_dim1 + 1], &c__1, &c_b60, &y[ - i__ + 1 + i__ * y_dim1], &c__1); - i__2 = *m - i__; - zgemv_("Conjugate transpose", &i__2, &i__, &c_b60, &x[i__ + 1 - + x_dim1], ldx, &a[i__ + 1 + i__ * a_dim1], &c__1, & - c_b59, &y[i__ * y_dim1 + 1], &c__1); - i__2 = *n - i__; - z__1.r = -1., z__1.i = -0.; - zgemv_("Conjugate transpose", &i__, &i__2, &z__1, &a[(i__ + 1) - * a_dim1 + 1], lda, &y[i__ * y_dim1 + 1], &c__1, & - c_b60, &y[i__ + 1 + i__ * y_dim1], &c__1); - i__2 = *n - i__; - zscal_(&i__2, &tauq[i__], &y[i__ + 1 + i__ * y_dim1], &c__1); - } else { - i__2 = *n - i__ + 1; - zlacgv_(&i__2, &a[i__ + i__ * a_dim1], lda); - } -/* L20: */ - } - } - return 0; - -/* End of ZLABRD */ - -} /* zlabrd_ */ - -/* Subroutine */ int zlacgv_(integer *n, doublecomplex *x, integer *incx) -{ - /* System generated locals */ - integer i__1, i__2; - doublecomplex z__1; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, ioff; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - 
Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - ZLACGV conjugates a complex vector of length N. - - Arguments - ========= - - N (input) INTEGER - The length of the vector X. N >= 0. - - X (input/output) COMPLEX*16 array, dimension - (1+(N-1)*abs(INCX)) - On entry, the vector of length N to be conjugated. - On exit, X is overwritten with conjg(X). - - INCX (input) INTEGER - The spacing between successive elements of X. - - ===================================================================== -*/ - - - /* Parameter adjustments */ - --x; - - /* Function Body */ - if (*incx == 1) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__; - d_cnjg(&z__1, &x[i__]); - x[i__2].r = z__1.r, x[i__2].i = z__1.i; -/* L10: */ - } - } else { - ioff = 1; - if (*incx < 0) { - ioff = 1 - (*n - 1) * *incx; - } - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = ioff; - d_cnjg(&z__1, &x[ioff]); - x[i__2].r = z__1.r, x[i__2].i = z__1.i; - ioff += *incx; -/* L20: */ - } - } - return 0; - -/* End of ZLACGV */ - -} /* zlacgv_ */ - -/* Subroutine */ int zlacp2_(char *uplo, integer *m, integer *n, doublereal * - a, integer *lda, doublecomplex *b, integer *ldb) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, i__1, i__2, i__3, i__4; - - /* Local variables */ - static integer i__, j; - extern logical lsame_(char *, char *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZLACP2 copies all or part of a real two-dimensional matrix A to a - complex matrix B. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - Specifies the part of the matrix A to be copied to B. 
- = 'U': Upper triangular part - = 'L': Lower triangular part - Otherwise: All of the matrix A - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input) DOUBLE PRECISION array, dimension (LDA,N) - The m by n matrix A. If UPLO = 'U', only the upper trapezium - is accessed; if UPLO = 'L', only the lower trapezium is - accessed. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - B (output) COMPLEX*16 array, dimension (LDB,N) - On exit, B = A in the locations specified by UPLO. - - LDB (input) INTEGER - The leading dimension of the array B. LDB >= max(1,M). - - ===================================================================== -*/ - - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - - /* Function Body */ - if (lsame_(uplo, "U")) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = min(j,*m); - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * a_dim1; - b[i__3].r = a[i__4], b[i__3].i = 0.; -/* L10: */ - } -/* L20: */ - } - - } else if (lsame_(uplo, "L")) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = j; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * a_dim1; - b[i__3].r = a[i__4], b[i__3].i = 0.; -/* L30: */ - } -/* L40: */ - } - - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * a_dim1; - b[i__3].r = a[i__4], b[i__3].i = 0.; -/* L50: */ - } -/* L60: */ - } - } - - return 0; - -/* End of ZLACP2 */ - -} /* zlacp2_ */ - -/* Subroutine */ int zlacpy_(char *uplo, integer *m, integer *n, - doublecomplex *a, integer *lda, doublecomplex *b, integer *ldb) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, i__1, i__2, i__3, i__4; - - 
/* Local variables */ - static integer i__, j; - extern logical lsame_(char *, char *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - ZLACPY copies all or part of a two-dimensional matrix A to another - matrix B. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - Specifies the part of the matrix A to be copied to B. - = 'U': Upper triangular part - = 'L': Lower triangular part - Otherwise: All of the matrix A - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input) COMPLEX*16 array, dimension (LDA,N) - The m by n matrix A. If UPLO = 'U', only the upper trapezium - is accessed; if UPLO = 'L', only the lower trapezium is - accessed. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - B (output) COMPLEX*16 array, dimension (LDB,N) - On exit, B = A in the locations specified by UPLO. - - LDB (input) INTEGER - The leading dimension of the array B. LDB >= max(1,M). 
- - ===================================================================== -*/ - - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - - /* Function Body */ - if (lsame_(uplo, "U")) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = min(j,*m); - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * a_dim1; - b[i__3].r = a[i__4].r, b[i__3].i = a[i__4].i; -/* L10: */ - } -/* L20: */ - } - - } else if (lsame_(uplo, "L")) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = j; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * a_dim1; - b[i__3].r = a[i__4].r, b[i__3].i = a[i__4].i; -/* L30: */ - } -/* L40: */ - } - - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - i__4 = i__ + j * a_dim1; - b[i__3].r = a[i__4].r, b[i__3].i = a[i__4].i; -/* L50: */ - } -/* L60: */ - } - } - - return 0; - -/* End of ZLACPY */ - -} /* zlacpy_ */ - -/* Subroutine */ int zlacrm_(integer *m, integer *n, doublecomplex *a, - integer *lda, doublereal *b, integer *ldb, doublecomplex *c__, - integer *ldc, doublereal *rwork) -{ - /* System generated locals */ - integer b_dim1, b_offset, a_dim1, a_offset, c_dim1, c_offset, i__1, i__2, - i__3, i__4, i__5; - doublereal d__1; - doublecomplex z__1; - - /* Builtin functions */ - double d_imag(doublecomplex *); - - /* Local variables */ - static integer i__, j, l; - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZLACRM performs a very simple matrix-matrix multiplication: - C := A * B, - where A is M by N and complex; B is N by N and real; - C is M by N and complex. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix A and of the matrix C. - M >= 0. - - N (input) INTEGER - The number of columns and rows of the matrix B and - the number of columns of the matrix C. - N >= 0. - - A (input) COMPLEX*16 array, dimension (LDA, N) - A contains the M by N matrix A. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >=max(1,M). - - B (input) DOUBLE PRECISION array, dimension (LDB, N) - B contains the N by N matrix B. - - LDB (input) INTEGER - The leading dimension of the array B. LDB >=max(1,N). - - C (input) COMPLEX*16 array, dimension (LDC, N) - C contains the M by N matrix C. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >=max(1,N). - - RWORK (workspace) DOUBLE PRECISION array, dimension (2*M*N) - - ===================================================================== - - - Quick return if possible. 
-*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --rwork; - - /* Function Body */ - if (*m == 0 || *n == 0) { - return 0; - } - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - rwork[(j - 1) * *m + i__] = a[i__3].r; -/* L10: */ - } -/* L20: */ - } - - l = *m * *n + 1; - dgemm_("N", "N", m, n, n, &c_b1015, &rwork[1], m, &b[b_offset], ldb, & - c_b324, &rwork[l], m); - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = l + (j - 1) * *m + i__ - 1; - c__[i__3].r = rwork[i__4], c__[i__3].i = 0.; -/* L30: */ - } -/* L40: */ - } - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - rwork[(j - 1) * *m + i__] = d_imag(&a[i__ + j * a_dim1]); -/* L50: */ - } -/* L60: */ - } - dgemm_("N", "N", m, n, n, &c_b1015, &rwork[1], m, &b[b_offset], ldb, & - c_b324, &rwork[l], m); - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - d__1 = c__[i__4].r; - i__5 = l + (j - 1) * *m + i__ - 1; - z__1.r = d__1, z__1.i = rwork[i__5]; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L70: */ - } -/* L80: */ - } - - return 0; - -/* End of ZLACRM */ - -} /* zlacrm_ */ - -/* Double Complex */ VOID zladiv_(doublecomplex * ret_val, doublecomplex *x, - doublecomplex *y) -{ - /* System generated locals */ - doublereal d__1, d__2, d__3, d__4; - doublecomplex z__1; - - /* Builtin functions */ - double d_imag(doublecomplex *); - - /* Local variables */ - static doublereal zi, zr; - extern /* Subroutine */ int dladiv_(doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *, doublereal *); - - -/* - -- LAPACK auxiliary 
routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - ZLADIV := X / Y, where X and Y are complex. The computation of X / Y - will not overflow on an intermediary step unless the results - overflows. - - Arguments - ========= - - X (input) COMPLEX*16 - Y (input) COMPLEX*16 - The complex scalars X and Y. - - ===================================================================== -*/ - - - d__1 = x->r; - d__2 = d_imag(x); - d__3 = y->r; - d__4 = d_imag(y); - dladiv_(&d__1, &d__2, &d__3, &d__4, &zr, &zi); - z__1.r = zr, z__1.i = zi; - ret_val->r = z__1.r, ret_val->i = z__1.i; - - return ; - -/* End of ZLADIV */ - -} /* zladiv_ */ - -/* Subroutine */ int zlaed0_(integer *qsiz, integer *n, doublereal *d__, - doublereal *e, doublecomplex *q, integer *ldq, doublecomplex *qstore, - integer *ldqs, doublereal *rwork, integer *iwork, integer *info) -{ - /* System generated locals */ - integer q_dim1, q_offset, qstore_dim1, qstore_offset, i__1, i__2; - doublereal d__1; - - /* Builtin functions */ - double log(doublereal); - integer pow_ii(integer *, integer *); - - /* Local variables */ - static integer i__, j, k, ll, iq, lgn, msd2, smm1, spm1, spm2; - static doublereal temp; - static integer curr, iperm; - extern /* Subroutine */ int dcopy_(integer *, doublereal *, integer *, - doublereal *, integer *); - static integer indxq, iwrem, iqptr, tlvls; - extern /* Subroutine */ int zcopy_(integer *, doublecomplex *, integer *, - doublecomplex *, integer *), zlaed7_(integer *, integer *, - integer *, integer *, integer *, integer *, doublereal *, - doublecomplex *, integer *, doublereal *, integer *, doublereal *, - integer *, integer *, integer *, integer *, integer *, - doublereal *, doublecomplex *, doublereal *, integer *, integer *) - ; - static integer igivcl; - extern /* Subroutine */ int xerbla_(char *, integer *); - extern integer 
ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int zlacrm_(integer *, integer *, doublecomplex *, - integer *, doublereal *, integer *, doublecomplex *, integer *, - doublereal *); - static integer igivnm, submat, curprb, subpbs, igivpt; - extern /* Subroutine */ int dsteqr_(char *, integer *, doublereal *, - doublereal *, doublereal *, integer *, doublereal *, integer *); - static integer curlvl, matsiz, iprmpt, smlsiz; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - Using the divide and conquer method, ZLAED0 computes all eigenvalues - of a symmetric tridiagonal matrix which is one diagonal block of - those from reducing a dense or band Hermitian matrix and - corresponding eigenvectors of the dense or band matrix. - - Arguments - ========= - - QSIZ (input) INTEGER - The dimension of the unitary matrix used to reduce - the full matrix to tridiagonal form. QSIZ >= N if ICOMPQ = 1. - - N (input) INTEGER - The dimension of the symmetric tridiagonal matrix. N >= 0. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the diagonal elements of the tridiagonal matrix. - On exit, the eigenvalues in ascending order. - - E (input/output) DOUBLE PRECISION array, dimension (N-1) - On entry, the off-diagonal elements of the tridiagonal matrix. - On exit, E has been destroyed. - - Q (input/output) COMPLEX*16 array, dimension (LDQ,N) - On entry, Q must contain an QSIZ x N matrix whose columns - unitarily orthonormal. It is a part of the unitary matrix - that reduces the full dense Hermitian matrix to a - (reducible) symmetric tridiagonal matrix. - - LDQ (input) INTEGER - The leading dimension of the array Q. LDQ >= max(1,N). 
- - IWORK (workspace) INTEGER array, - the dimension of IWORK must be at least - 6 + 6*N + 5*N*lg N - ( lg( N ) = smallest integer k - such that 2^k >= N ) - - RWORK (workspace) DOUBLE PRECISION array, - dimension (1 + 3*N + 2*N*lg N + 3*N**2) - ( lg( N ) = smallest integer k - such that 2^k >= N ) - - QSTORE (workspace) COMPLEX*16 array, dimension (LDQS, N) - Used to store parts of - the eigenvector matrix when the updating matrix multiplies - take place. - - LDQS (input) INTEGER - The leading dimension of the array QSTORE. - LDQS >= max(1,N). - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: The algorithm failed to compute an eigenvalue while - working on the submatrix lying in rows and columns - INFO/(N+1) through mod(INFO,N+1). - - ===================================================================== - - Warning: N could be as big as QSIZ! - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - --d__; - --e; - q_dim1 = *ldq; - q_offset = 1 + q_dim1 * 1; - q -= q_offset; - qstore_dim1 = *ldqs; - qstore_offset = 1 + qstore_dim1 * 1; - qstore -= qstore_offset; - --rwork; - --iwork; - - /* Function Body */ - *info = 0; - -/* - IF( ICOMPQ .LT. 0 .OR. ICOMPQ .GT. 2 ) THEN - INFO = -1 - ELSE IF( ( ICOMPQ .EQ. 1 ) .AND. ( QSIZ .LT. MAX( 0, N ) ) ) - $ THEN -*/ - if (*qsiz < max(0,*n)) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*ldq < max(1,*n)) { - *info = -6; - } else if (*ldqs < max(1,*n)) { - *info = -8; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZLAED0", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - - smlsiz = ilaenv_(&c__9, "ZLAED0", " ", &c__0, &c__0, &c__0, &c__0, ( - ftnlen)6, (ftnlen)1); - -/* - Determine the size and placement of the submatrices, and save in - the leading elements of IWORK. 
-*/ - - iwork[1] = *n; - subpbs = 1; - tlvls = 0; -L10: - if (iwork[subpbs] > smlsiz) { - for (j = subpbs; j >= 1; --j) { - iwork[j * 2] = (iwork[j] + 1) / 2; - iwork[((j) << (1)) - 1] = iwork[j] / 2; -/* L20: */ - } - ++tlvls; - subpbs <<= 1; - goto L10; - } - i__1 = subpbs; - for (j = 2; j <= i__1; ++j) { - iwork[j] += iwork[j - 1]; -/* L30: */ - } - -/* - Divide the matrix into SUBPBS submatrices of size at most SMLSIZ+1 - using rank-1 modifications (cuts). -*/ - - spm1 = subpbs - 1; - i__1 = spm1; - for (i__ = 1; i__ <= i__1; ++i__) { - submat = iwork[i__] + 1; - smm1 = submat - 1; - d__[smm1] -= (d__1 = e[smm1], abs(d__1)); - d__[submat] -= (d__1 = e[smm1], abs(d__1)); -/* L40: */ - } - - indxq = ((*n) << (2)) + 3; - -/* - Set up workspaces for eigenvalues only/accumulate new vectors - routine -*/ - - temp = log((doublereal) (*n)) / log(2.); - lgn = (integer) temp; - if (pow_ii(&c__2, &lgn) < *n) { - ++lgn; - } - if (pow_ii(&c__2, &lgn) < *n) { - ++lgn; - } - iprmpt = indxq + *n + 1; - iperm = iprmpt + *n * lgn; - iqptr = iperm + *n * lgn; - igivpt = iqptr + *n + 2; - igivcl = igivpt + *n * lgn; - - igivnm = 1; - iq = igivnm + ((*n) << (1)) * lgn; -/* Computing 2nd power */ - i__1 = *n; - iwrem = iq + i__1 * i__1 + 1; -/* Initialize pointers */ - i__1 = subpbs; - for (i__ = 0; i__ <= i__1; ++i__) { - iwork[iprmpt + i__] = 1; - iwork[igivpt + i__] = 1; -/* L50: */ - } - iwork[iqptr] = 1; - -/* - Solve each submatrix eigenproblem at the bottom of the divide and - conquer tree. 
-*/ - - curr = 0; - i__1 = spm1; - for (i__ = 0; i__ <= i__1; ++i__) { - if (i__ == 0) { - submat = 1; - matsiz = iwork[1]; - } else { - submat = iwork[i__] + 1; - matsiz = iwork[i__ + 1] - iwork[i__]; - } - ll = iq - 1 + iwork[iqptr + curr]; - dsteqr_("I", &matsiz, &d__[submat], &e[submat], &rwork[ll], &matsiz, & - rwork[1], info); - zlacrm_(qsiz, &matsiz, &q[submat * q_dim1 + 1], ldq, &rwork[ll], & - matsiz, &qstore[submat * qstore_dim1 + 1], ldqs, &rwork[iwrem] - ); -/* Computing 2nd power */ - i__2 = matsiz; - iwork[iqptr + curr + 1] = iwork[iqptr + curr] + i__2 * i__2; - ++curr; - if (*info > 0) { - *info = submat * (*n + 1) + submat + matsiz - 1; - return 0; - } - k = 1; - i__2 = iwork[i__ + 1]; - for (j = submat; j <= i__2; ++j) { - iwork[indxq + j] = k; - ++k; -/* L60: */ - } -/* L70: */ - } - -/* - Successively merge eigensystems of adjacent submatrices - into eigensystem for the corresponding larger matrix. - - while ( SUBPBS > 1 ) -*/ - - curlvl = 1; -L80: - if (subpbs > 1) { - spm2 = subpbs - 2; - i__1 = spm2; - for (i__ = 0; i__ <= i__1; i__ += 2) { - if (i__ == 0) { - submat = 1; - matsiz = iwork[2]; - msd2 = iwork[1]; - curprb = 0; - } else { - submat = iwork[i__] + 1; - matsiz = iwork[i__ + 2] - iwork[i__]; - msd2 = matsiz / 2; - ++curprb; - } - -/* - Merge lower order eigensystems (of size MSD2 and MATSIZ - MSD2) - into an eigensystem of size MATSIZ. ZLAED7 handles the case - when the eigenvectors of a full or band Hermitian matrix (which - was reduced to tridiagonal form) are desired. - - I am free to use Q as a valuable working space until Loop 150. 
-*/ - - zlaed7_(&matsiz, &msd2, qsiz, &tlvls, &curlvl, &curprb, &d__[ - submat], &qstore[submat * qstore_dim1 + 1], ldqs, &e[ - submat + msd2 - 1], &iwork[indxq + submat], &rwork[iq], & - iwork[iqptr], &iwork[iprmpt], &iwork[iperm], &iwork[ - igivpt], &iwork[igivcl], &rwork[igivnm], &q[submat * - q_dim1 + 1], &rwork[iwrem], &iwork[subpbs + 1], info); - if (*info > 0) { - *info = submat * (*n + 1) + submat + matsiz - 1; - return 0; - } - iwork[i__ / 2 + 1] = iwork[i__ + 2]; -/* L90: */ - } - subpbs /= 2; - ++curlvl; - goto L80; - } - -/* - end while - - Re-merge the eigenvalues/vectors which were deflated at the final - merge step. -*/ - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - j = iwork[indxq + i__]; - rwork[i__] = d__[j]; - zcopy_(qsiz, &qstore[j * qstore_dim1 + 1], &c__1, &q[i__ * q_dim1 + 1] - , &c__1); -/* L100: */ - } - dcopy_(n, &rwork[1], &c__1, &d__[1], &c__1); - - return 0; - -/* End of ZLAED0 */ - -} /* zlaed0_ */ - -/* Subroutine */ int zlaed7_(integer *n, integer *cutpnt, integer *qsiz, - integer *tlvls, integer *curlvl, integer *curpbm, doublereal *d__, - doublecomplex *q, integer *ldq, doublereal *rho, integer *indxq, - doublereal *qstore, integer *qptr, integer *prmptr, integer *perm, - integer *givptr, integer *givcol, doublereal *givnum, doublecomplex * - work, doublereal *rwork, integer *iwork, integer *info) -{ - /* System generated locals */ - integer q_dim1, q_offset, i__1, i__2; - - /* Builtin functions */ - integer pow_ii(integer *, integer *); - - /* Local variables */ - static integer i__, k, n1, n2, iq, iw, iz, ptr, ind1, ind2, indx, curr, - indxc, indxp; - extern /* Subroutine */ int dlaed9_(integer *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - doublereal *, doublereal *, doublereal *, integer *, integer *), - zlaed8_(integer *, integer *, integer *, doublecomplex *, integer - *, doublereal *, doublereal *, integer *, doublereal *, - doublereal *, doublecomplex *, integer *, 
doublereal *, integer *, - integer *, integer *, integer *, integer *, integer *, - doublereal *, integer *), dlaeda_(integer *, integer *, integer *, - integer *, integer *, integer *, integer *, integer *, - doublereal *, doublereal *, integer *, doublereal *, doublereal *, - integer *); - static integer idlmda; - extern /* Subroutine */ int dlamrg_(integer *, integer *, doublereal *, - integer *, integer *, integer *), xerbla_(char *, integer *), zlacrm_(integer *, integer *, doublecomplex *, integer *, - doublereal *, integer *, doublecomplex *, integer *, doublereal * - ); - static integer coltyp; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZLAED7 computes the updated eigensystem of a diagonal - matrix after modification by a rank-one symmetric matrix. This - routine is used only for the eigenproblem which requires all - eigenvalues and optionally eigenvectors of a dense or banded - Hermitian matrix that has been reduced to tridiagonal form. - - T = Q(in) ( D(in) + RHO * Z*Z' ) Q'(in) = Q(out) * D(out) * Q'(out) - - where Z = Q'u, u is a vector of length N with ones in the - CUTPNT and CUTPNT + 1 th elements and zeros elsewhere. - - The eigenvectors of the original matrix are stored in Q, and the - eigenvalues are in D. The algorithm consists of three stages: - - The first stage consists of deflating the size of the problem - when there are multiple eigenvalues or if there is a zero in - the Z vector. For each such occurrence the dimension of the - secular equation problem is reduced by one. This stage is - performed by the routine DLAED2. - - The second stage consists of calculating the updated - eigenvalues. This is done by finding the roots of the secular - equation via the routine DLAED4 (as called by SLAED3). - This routine also calculates the eigenvectors of the current - problem. 
- - The final stage consists of computing the updated eigenvectors - directly using the updated eigenvalues. The eigenvectors for - the current problem are multiplied with the eigenvectors from - the overall problem. - - Arguments - ========= - - N (input) INTEGER - The dimension of the symmetric tridiagonal matrix. N >= 0. - - CUTPNT (input) INTEGER - Contains the location of the last eigenvalue in the leading - sub-matrix. min(1,N) <= CUTPNT <= N. - - QSIZ (input) INTEGER - The dimension of the unitary matrix used to reduce - the full matrix to tridiagonal form. QSIZ >= N. - - TLVLS (input) INTEGER - The total number of merging levels in the overall divide and - conquer tree. - - CURLVL (input) INTEGER - The current level in the overall merge routine, - 0 <= curlvl <= tlvls. - - CURPBM (input) INTEGER - The current problem in the current level in the overall - merge routine (counting from upper left to lower right). - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the eigenvalues of the rank-1-perturbed matrix. - On exit, the eigenvalues of the repaired matrix. - - Q (input/output) COMPLEX*16 array, dimension (LDQ,N) - On entry, the eigenvectors of the rank-1-perturbed matrix. - On exit, the eigenvectors of the repaired tridiagonal matrix. - - LDQ (input) INTEGER - The leading dimension of the array Q. LDQ >= max(1,N). - - RHO (input) DOUBLE PRECISION - Contains the subdiagonal element used to create the rank-1 - modification. - - INDXQ (output) INTEGER array, dimension (N) - This contains the permutation which will reintegrate the - subproblem just solved back into sorted order, - ie. D( INDXQ( I = 1, N ) ) will be in ascending order. 
- - IWORK (workspace) INTEGER array, dimension (4*N) - - RWORK (workspace) DOUBLE PRECISION array, - dimension (3*N+2*QSIZ*N) - - WORK (workspace) COMPLEX*16 array, dimension (QSIZ*N) - - QSTORE (input/output) DOUBLE PRECISION array, dimension (N**2+1) - Stores eigenvectors of submatrices encountered during - divide and conquer, packed together. QPTR points to - beginning of the submatrices. - - QPTR (input/output) INTEGER array, dimension (N+2) - List of indices pointing to beginning of submatrices stored - in QSTORE. The submatrices are numbered starting at the - bottom left of the divide and conquer tree, from left to - right and bottom to top. - - PRMPTR (input) INTEGER array, dimension (N lg N) - Contains a list of pointers which indicate where in PERM a - level's permutation is stored. PRMPTR(i+1) - PRMPTR(i) - indicates the size of the permutation and also the size of - the full, non-deflated problem. - - PERM (input) INTEGER array, dimension (N lg N) - Contains the permutations (from deflation and sorting) to be - applied to each eigenblock. - - GIVPTR (input) INTEGER array, dimension (N lg N) - Contains a list of pointers which indicate where in GIVCOL a - level's Givens rotations are stored. GIVPTR(i+1) - GIVPTR(i) - indicates the number of Givens rotations. - - GIVCOL (input) INTEGER array, dimension (2, N lg N) - Each pair of numbers indicates a pair of columns to take place - in a Givens rotation. - - GIVNUM (input) DOUBLE PRECISION array, dimension (2, N lg N) - Each number indicates the S value to be used in the - corresponding Givens rotation. - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: if INFO = 1, an eigenvalue did not converge - - ===================================================================== - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - --d__; - q_dim1 = *ldq; - q_offset = 1 + q_dim1 * 1; - q -= q_offset; - --indxq; - --qstore; - --qptr; - --prmptr; - --perm; - --givptr; - givcol -= 3; - givnum -= 3; - --work; - --rwork; - --iwork; - - /* Function Body */ - *info = 0; - -/* - IF( ICOMPQ.LT.0 .OR. ICOMPQ.GT.1 ) THEN - INFO = -1 - ELSE IF( N.LT.0 ) THEN -*/ - if (*n < 0) { - *info = -1; - } else if (min(1,*n) > *cutpnt || *n < *cutpnt) { - *info = -2; - } else if (*qsiz < *n) { - *info = -3; - } else if (*ldq < max(1,*n)) { - *info = -9; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZLAED7", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - -/* - The following values are for bookkeeping purposes only. They are - integer pointers which indicate the portion of the workspace - used by a particular array in DLAED2 and SLAED3. -*/ - - iz = 1; - idlmda = iz + *n; - iw = idlmda + *n; - iq = iw + *n; - - indx = 1; - indxc = indx + *n; - coltyp = indxc + *n; - indxp = coltyp + *n; - -/* - Form the z-vector which consists of the last row of Q_1 and the - first row of Q_2. -*/ - - ptr = pow_ii(&c__2, tlvls) + 1; - i__1 = *curlvl - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = *tlvls - i__; - ptr += pow_ii(&c__2, &i__2); -/* L10: */ - } - curr = ptr + *curpbm; - dlaeda_(n, tlvls, curlvl, curpbm, &prmptr[1], &perm[1], &givptr[1], & - givcol[3], &givnum[3], &qstore[1], &qptr[1], &rwork[iz], &rwork[ - iz + *n], info); - -/* - When solving the final problem, we no longer need the stored data, - so we will overwrite the data from this level onto the previously - used storage space. -*/ - - if (*curlvl == *tlvls) { - qptr[curr] = 1; - prmptr[curr] = 1; - givptr[curr] = 1; - } - -/* Sort and Deflate eigenvalues. 
*/ - - zlaed8_(&k, n, qsiz, &q[q_offset], ldq, &d__[1], rho, cutpnt, &rwork[iz], - &rwork[idlmda], &work[1], qsiz, &rwork[iw], &iwork[indxp], &iwork[ - indx], &indxq[1], &perm[prmptr[curr]], &givptr[curr + 1], &givcol[ - ((givptr[curr]) << (1)) + 1], &givnum[((givptr[curr]) << (1)) + 1] - , info); - prmptr[curr + 1] = prmptr[curr] + *n; - givptr[curr + 1] += givptr[curr]; - -/* Solve Secular Equation. */ - - if (k != 0) { - dlaed9_(&k, &c__1, &k, n, &d__[1], &rwork[iq], &k, rho, &rwork[idlmda] - , &rwork[iw], &qstore[qptr[curr]], &k, info); - zlacrm_(qsiz, &k, &work[1], qsiz, &qstore[qptr[curr]], &k, &q[ - q_offset], ldq, &rwork[iq]); -/* Computing 2nd power */ - i__1 = k; - qptr[curr + 1] = qptr[curr] + i__1 * i__1; - if (*info != 0) { - return 0; - } - -/* Prepare the INDXQ sorting permutation. */ - - n1 = k; - n2 = *n - k; - ind1 = 1; - ind2 = *n; - dlamrg_(&n1, &n2, &d__[1], &c__1, &c_n1, &indxq[1]); - } else { - qptr[curr + 1] = qptr[curr]; - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - indxq[i__] = i__; -/* L20: */ - } - } - - return 0; - -/* End of ZLAED7 */ - -} /* zlaed7_ */ - -/* Subroutine */ int zlaed8_(integer *k, integer *n, integer *qsiz, - doublecomplex *q, integer *ldq, doublereal *d__, doublereal *rho, - integer *cutpnt, doublereal *z__, doublereal *dlamda, doublecomplex * - q2, integer *ldq2, doublereal *w, integer *indxp, integer *indx, - integer *indxq, integer *perm, integer *givptr, integer *givcol, - doublereal *givnum, integer *info) -{ - /* System generated locals */ - integer q_dim1, q_offset, q2_dim1, q2_offset, i__1; - doublereal d__1; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static doublereal c__; - static integer i__, j; - static doublereal s, t; - static integer k2, n1, n2, jp, n1p1; - static doublereal eps, tau, tol; - static integer jlam, imax, jmax; - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *), dcopy_(integer *, doublereal *, integer *, 
doublereal - *, integer *), zdrot_(integer *, doublecomplex *, integer *, - doublecomplex *, integer *, doublereal *, doublereal *), zcopy_( - integer *, doublecomplex *, integer *, doublecomplex *, integer *) - ; - - extern integer idamax_(integer *, doublereal *, integer *); - extern /* Subroutine */ int dlamrg_(integer *, integer *, doublereal *, - integer *, integer *, integer *), xerbla_(char *, integer *), zlacpy_(char *, integer *, integer *, doublecomplex *, - integer *, doublecomplex *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Oak Ridge National Lab, Argonne National Lab, - Courant Institute, NAG Ltd., and Rice University - September 30, 1994 - - - Purpose - ======= - - ZLAED8 merges the two sets of eigenvalues together into a single - sorted set. Then it tries to deflate the size of the problem. - There are two ways in which deflation can occur: when two or more - eigenvalues are close together or if there is a tiny element in the - Z vector. For each such occurrence the order of the related secular - equation problem is reduced by one. - - Arguments - ========= - - K (output) INTEGER - Contains the number of non-deflated eigenvalues. - This is the order of the related secular equation. - - N (input) INTEGER - The dimension of the symmetric tridiagonal matrix. N >= 0. - - QSIZ (input) INTEGER - The dimension of the unitary matrix used to reduce - the dense or band matrix to tridiagonal form. - QSIZ >= N if ICOMPQ = 1. - - Q (input/output) COMPLEX*16 array, dimension (LDQ,N) - On entry, Q contains the eigenvectors of the partially solved - system which has been previously updated in matrix - multiplies with other partially solved eigensystems. - On exit, Q contains the trailing (N-K) updated eigenvectors - (those which were deflated) in its last N-K columns. - - LDQ (input) INTEGER - The leading dimension of the array Q. LDQ >= max( 1, N ). 
- - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, D contains the eigenvalues of the two submatrices to - be combined. On exit, D contains the trailing (N-K) updated - eigenvalues (those which were deflated) sorted into increasing - order. - - RHO (input/output) DOUBLE PRECISION - Contains the off diagonal element associated with the rank-1 - cut which originally split the two submatrices which are now - being recombined. RHO is modified during the computation to - the value required by DLAED3. - - CUTPNT (input) INTEGER - Contains the location of the last eigenvalue in the leading - sub-matrix. MIN(1,N) <= CUTPNT <= N. - - Z (input) DOUBLE PRECISION array, dimension (N) - On input this vector contains the updating vector (the last - row of the first sub-eigenvector matrix and the first row of - the second sub-eigenvector matrix). The contents of Z are - destroyed during the updating process. - - DLAMDA (output) DOUBLE PRECISION array, dimension (N) - Contains a copy of the first K eigenvalues which will be used - by DLAED3 to form the secular equation. - - Q2 (output) COMPLEX*16 array, dimension (LDQ2,N) - If ICOMPQ = 0, Q2 is not referenced. Otherwise, - Contains a copy of the first K eigenvectors which will be used - by DLAED7 in a matrix multiply (DGEMM) to update the new - eigenvectors. - - LDQ2 (input) INTEGER - The leading dimension of the array Q2. LDQ2 >= max( 1, N ). - - W (output) DOUBLE PRECISION array, dimension (N) - This will hold the first k values of the final - deflation-altered z-vector and will be passed to DLAED3. - - INDXP (workspace) INTEGER array, dimension (N) - This will contain the permutation used to place deflated - values of D at the end of the array. On output INDXP(1:K) - points to the nondeflated D-values and INDXP(K+1:N) - points to the deflated eigenvalues. - - INDX (workspace) INTEGER array, dimension (N) - This will contain the permutation used to sort the contents of - D into ascending order. 
- - INDXQ (input) INTEGER array, dimension (N) - This contains the permutation which separately sorts the two - sub-problems in D into ascending order. Note that elements in - the second half of this permutation must first have CUTPNT - added to their values in order to be accurate. - - PERM (output) INTEGER array, dimension (N) - Contains the permutations (from deflation and sorting) to be - applied to each eigenblock. - - GIVPTR (output) INTEGER - Contains the number of Givens rotations which took place in - this subproblem. - - GIVCOL (output) INTEGER array, dimension (2, N) - Each pair of numbers indicates a pair of columns to take place - in a Givens rotation. - - GIVNUM (output) DOUBLE PRECISION array, dimension (2, N) - Each number indicates the S value to be used in the - corresponding Givens rotation. - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - q_dim1 = *ldq; - q_offset = 1 + q_dim1 * 1; - q -= q_offset; - --d__; - --z__; - --dlamda; - q2_dim1 = *ldq2; - q2_offset = 1 + q2_dim1 * 1; - q2 -= q2_offset; - --w; - --indxp; - --indx; - --indxq; - --perm; - givcol -= 3; - givnum -= 3; - - /* Function Body */ - *info = 0; - - if (*n < 0) { - *info = -2; - } else if (*qsiz < *n) { - *info = -3; - } else if (*ldq < max(1,*n)) { - *info = -5; - } else if (*cutpnt < min(1,*n) || *cutpnt > *n) { - *info = -8; - } else if (*ldq2 < max(1,*n)) { - *info = -12; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZLAED8", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - - n1 = *cutpnt; - n2 = *n - n1; - n1p1 = n1 + 1; - - if (*rho < 0.) { - dscal_(&n2, &c_b1294, &z__[n1p1], &c__1); - } - -/* Normalize z so that norm(z) = 1 */ - - t = 1. 
/ sqrt(2.); - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - indx[j] = j; -/* L10: */ - } - dscal_(n, &t, &z__[1], &c__1); - *rho = (d__1 = *rho * 2., abs(d__1)); - -/* Sort the eigenvalues into increasing order */ - - i__1 = *n; - for (i__ = *cutpnt + 1; i__ <= i__1; ++i__) { - indxq[i__] += *cutpnt; -/* L20: */ - } - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - dlamda[i__] = d__[indxq[i__]]; - w[i__] = z__[indxq[i__]]; -/* L30: */ - } - i__ = 1; - j = *cutpnt + 1; - dlamrg_(&n1, &n2, &dlamda[1], &c__1, &c__1, &indx[1]); - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - d__[i__] = dlamda[indx[i__]]; - z__[i__] = w[indx[i__]]; -/* L40: */ - } - -/* Calculate the allowable deflation tolerance */ - - imax = idamax_(n, &z__[1], &c__1); - jmax = idamax_(n, &d__[1], &c__1); - eps = EPSILON; - tol = eps * 8. * (d__1 = d__[jmax], abs(d__1)); - -/* - If the rank-1 modifier is small enough, no more needs to be done - -- except to reorganize Q so that its columns correspond with the - elements in D. -*/ - - if (*rho * (d__1 = z__[imax], abs(d__1)) <= tol) { - *k = 0; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - perm[j] = indxq[indx[j]]; - zcopy_(qsiz, &q[perm[j] * q_dim1 + 1], &c__1, &q2[j * q2_dim1 + 1] - , &c__1); -/* L50: */ - } - zlacpy_("A", qsiz, n, &q2[q2_dim1 + 1], ldq2, &q[q_dim1 + 1], ldq); - return 0; - } - -/* - If there are multiple eigenvalues then the problem deflates. Here - the number of equal eigenvalues are found. As each equal - eigenvalue is found, an elementary reflector is computed to rotate - the corresponding eigensubspace so that the corresponding - components of Z are zero in this new basis. -*/ - - *k = 0; - *givptr = 0; - k2 = *n + 1; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (*rho * (d__1 = z__[j], abs(d__1)) <= tol) { - -/* Deflate due to small z component. 
*/ - - --k2; - indxp[k2] = j; - if (j == *n) { - goto L100; - } - } else { - jlam = j; - goto L70; - } -/* L60: */ - } -L70: - ++j; - if (j > *n) { - goto L90; - } - if (*rho * (d__1 = z__[j], abs(d__1)) <= tol) { - -/* Deflate due to small z component. */ - - --k2; - indxp[k2] = j; - } else { - -/* Check if eigenvalues are close enough to allow deflation. */ - - s = z__[jlam]; - c__ = z__[j]; - -/* - Find sqrt(a**2+b**2) without overflow or - destructive underflow. -*/ - - tau = dlapy2_(&c__, &s); - t = d__[j] - d__[jlam]; - c__ /= tau; - s = -s / tau; - if ((d__1 = t * c__ * s, abs(d__1)) <= tol) { - -/* Deflation is possible. */ - - z__[j] = tau; - z__[jlam] = 0.; - -/* Record the appropriate Givens rotation */ - - ++(*givptr); - givcol[((*givptr) << (1)) + 1] = indxq[indx[jlam]]; - givcol[((*givptr) << (1)) + 2] = indxq[indx[j]]; - givnum[((*givptr) << (1)) + 1] = c__; - givnum[((*givptr) << (1)) + 2] = s; - zdrot_(qsiz, &q[indxq[indx[jlam]] * q_dim1 + 1], &c__1, &q[indxq[ - indx[j]] * q_dim1 + 1], &c__1, &c__, &s); - t = d__[jlam] * c__ * c__ + d__[j] * s * s; - d__[j] = d__[jlam] * s * s + d__[j] * c__ * c__; - d__[jlam] = t; - --k2; - i__ = 1; -L80: - if (k2 + i__ <= *n) { - if (d__[jlam] < d__[indxp[k2 + i__]]) { - indxp[k2 + i__ - 1] = indxp[k2 + i__]; - indxp[k2 + i__] = jlam; - ++i__; - goto L80; - } else { - indxp[k2 + i__ - 1] = jlam; - } - } else { - indxp[k2 + i__ - 1] = jlam; - } - jlam = j; - } else { - ++(*k); - w[*k] = z__[jlam]; - dlamda[*k] = d__[jlam]; - indxp[*k] = jlam; - jlam = j; - } - } - goto L70; -L90: - -/* Record the last eigenvalue. */ - - ++(*k); - w[*k] = z__[jlam]; - dlamda[*k] = d__[jlam]; - indxp[*k] = jlam; - -L100: - -/* - Sort the eigenvalues and corresponding eigenvectors into DLAMDA - and Q2 respectively. The eigenvalues/vectors which were not - deflated go into the first K slots of DLAMDA and Q2 respectively, - while those which were deflated go into the last N - K slots. 
-*/ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - jp = indxp[j]; - dlamda[j] = d__[jp]; - perm[j] = indxq[indx[jp]]; - zcopy_(qsiz, &q[perm[j] * q_dim1 + 1], &c__1, &q2[j * q2_dim1 + 1], & - c__1); -/* L110: */ - } - -/* - The deflated eigenvalues and their corresponding vectors go back - into the last N - K slots of D and Q respectively. -*/ - - if (*k < *n) { - i__1 = *n - *k; - dcopy_(&i__1, &dlamda[*k + 1], &c__1, &d__[*k + 1], &c__1); - i__1 = *n - *k; - zlacpy_("A", qsiz, &i__1, &q2[(*k + 1) * q2_dim1 + 1], ldq2, &q[(*k + - 1) * q_dim1 + 1], ldq); - } - - return 0; - -/* End of ZLAED8 */ - -} /* zlaed8_ */ - -/* Subroutine */ int zlahqr_(logical *wantt, logical *wantz, integer *n, - integer *ilo, integer *ihi, doublecomplex *h__, integer *ldh, - doublecomplex *w, integer *iloz, integer *ihiz, doublecomplex *z__, - integer *ldz, integer *info) -{ - /* System generated locals */ - integer h_dim1, h_offset, z_dim1, z_offset, i__1, i__2, i__3, i__4, i__5; - doublereal d__1, d__2, d__3, d__4, d__5, d__6; - doublecomplex z__1, z__2, z__3, z__4; - - /* Builtin functions */ - double d_imag(doublecomplex *); - void z_sqrt(doublecomplex *, doublecomplex *), d_cnjg(doublecomplex *, - doublecomplex *); - double z_abs(doublecomplex *); - - /* Local variables */ - static integer i__, j, k, l, m; - static doublereal s; - static doublecomplex t, u, v[2], x, y; - static integer i1, i2; - static doublecomplex t1; - static doublereal t2; - static doublecomplex v2; - static doublereal h10; - static doublecomplex h11; - static doublereal h21; - static doublecomplex h22; - static integer nh, nz; - static doublecomplex h11s; - static integer itn, its; - static doublereal ulp; - static doublecomplex sum; - static doublereal tst1; - static doublecomplex temp; - extern /* Subroutine */ int zscal_(integer *, doublecomplex *, - doublecomplex *, integer *); - static doublereal rtemp, rwork[1]; - extern /* Subroutine */ int zcopy_(integer *, doublecomplex *, integer *, - doublecomplex 
*, integer *); - - extern /* Subroutine */ int zlarfg_(integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *); - extern /* Double Complex */ VOID zladiv_(doublecomplex *, doublecomplex *, - doublecomplex *); - extern doublereal zlanhs_(char *, integer *, doublecomplex *, integer *, - doublereal *); - static doublereal smlnum; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZLAHQR is an auxiliary routine called by ZHSEQR to update the - eigenvalues and Schur decomposition already computed by ZHSEQR, by - dealing with the Hessenberg submatrix in rows and columns ILO to IHI. - - Arguments - ========= - - WANTT (input) LOGICAL - = .TRUE. : the full Schur form T is required; - = .FALSE.: only eigenvalues are required. - - WANTZ (input) LOGICAL - = .TRUE. : the matrix of Schur vectors Z is required; - = .FALSE.: Schur vectors are not required. - - N (input) INTEGER - The order of the matrix H. N >= 0. - - ILO (input) INTEGER - IHI (input) INTEGER - It is assumed that H is already upper triangular in rows and - columns IHI+1:N, and that H(ILO,ILO-1) = 0 (unless ILO = 1). - ZLAHQR works primarily with the Hessenberg submatrix in rows - and columns ILO to IHI, but applies transformations to all of - H if WANTT is .TRUE.. - 1 <= ILO <= max(1,IHI); IHI <= N. - - H (input/output) COMPLEX*16 array, dimension (LDH,N) - On entry, the upper Hessenberg matrix H. - On exit, if WANTT is .TRUE., H is upper triangular in rows - and columns ILO:IHI, with any 2-by-2 diagonal blocks in - standard form. If WANTT is .FALSE., the contents of H are - unspecified on exit. - - LDH (input) INTEGER - The leading dimension of the array H. LDH >= max(1,N). - - W (output) COMPLEX*16 array, dimension (N) - The computed eigenvalues ILO to IHI are stored in the - corresponding elements of W. 
If WANTT is .TRUE., the - eigenvalues are stored in the same order as on the diagonal - of the Schur form returned in H, with W(i) = H(i,i). - - ILOZ (input) INTEGER - IHIZ (input) INTEGER - Specify the rows of Z to which transformations must be - applied if WANTZ is .TRUE.. - 1 <= ILOZ <= ILO; IHI <= IHIZ <= N. - - Z (input/output) COMPLEX*16 array, dimension (LDZ,N) - If WANTZ is .TRUE., on entry Z must contain the current - matrix Z of transformations accumulated by ZHSEQR, and on - exit Z has been updated; transformations are applied only to - the submatrix Z(ILOZ:IHIZ,ILO:IHI). - If WANTZ is .FALSE., Z is not referenced. - - LDZ (input) INTEGER - The leading dimension of the array Z. LDZ >= max(1,N). - - INFO (output) INTEGER - = 0: successful exit - > 0: if INFO = i, ZLAHQR failed to compute all the - eigenvalues ILO to IHI in a total of 30*(IHI-ILO+1) - iterations; elements i+1:ihi of W contain those - eigenvalues which have been successfully computed. - - ===================================================================== -*/ - - - /* Parameter adjustments */ - h_dim1 = *ldh; - h_offset = 1 + h_dim1 * 1; - h__ -= h_offset; - --w; - z_dim1 = *ldz; - z_offset = 1 + z_dim1 * 1; - z__ -= z_offset; - - /* Function Body */ - *info = 0; - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - if (*ilo == *ihi) { - i__1 = *ilo; - i__2 = *ilo + *ilo * h_dim1; - w[i__1].r = h__[i__2].r, w[i__1].i = h__[i__2].i; - return 0; - } - - nh = *ihi - *ilo + 1; - nz = *ihiz - *iloz + 1; - -/* - Set machine-dependent constants for the stopping criterion. - If norm(H) <= sqrt(OVFL), overflow should not occur. -*/ - - ulp = PRECISION; - smlnum = SAFEMINIMUM / ulp; - -/* - I1 and I2 are the indices of the first row and last column of H - to which transformations must be applied. If eigenvalues only are - being computed, I1 and I2 are set inside the main loop. -*/ - - if (*wantt) { - i1 = 1; - i2 = *n; - } - -/* ITN is the total number of QR iterations allowed. 
*/ - - itn = nh * 30; - -/* - The main loop begins here. I is the loop index and decreases from - IHI to ILO in steps of 1. Each iteration of the loop works - with the active submatrix in rows and columns L to I. - Eigenvalues I+1 to IHI have already converged. Either L = ILO, or - H(L,L-1) is negligible so that the matrix splits. -*/ - - i__ = *ihi; -L10: - if (i__ < *ilo) { - goto L130; - } - -/* - Perform QR iterations on rows and columns ILO to I until a - submatrix of order 1 splits off at the bottom because a - subdiagonal element has become negligible. -*/ - - l = *ilo; - i__1 = itn; - for (its = 0; its <= i__1; ++its) { - -/* Look for a single small subdiagonal element. */ - - i__2 = l + 1; - for (k = i__; k >= i__2; --k) { - i__3 = k - 1 + (k - 1) * h_dim1; - i__4 = k + k * h_dim1; - tst1 = (d__1 = h__[i__3].r, abs(d__1)) + (d__2 = d_imag(&h__[k - - 1 + (k - 1) * h_dim1]), abs(d__2)) + ((d__3 = h__[i__4].r, - abs(d__3)) + (d__4 = d_imag(&h__[k + k * h_dim1]), abs( - d__4))); - if (tst1 == 0.) { - i__3 = i__ - l + 1; - tst1 = zlanhs_("1", &i__3, &h__[l + l * h_dim1], ldh, rwork); - } - i__3 = k + (k - 1) * h_dim1; -/* Computing MAX */ - d__2 = ulp * tst1; - if ((d__1 = h__[i__3].r, abs(d__1)) <= max(d__2,smlnum)) { - goto L30; - } -/* L20: */ - } -L30: - l = k; - if (l > *ilo) { - -/* H(L,L-1) is negligible */ - - i__2 = l + (l - 1) * h_dim1; - h__[i__2].r = 0., h__[i__2].i = 0.; - } - -/* Exit from loop if a submatrix of order 1 has split off. */ - - if (l >= i__) { - goto L120; - } - -/* - Now the active submatrix is in rows and columns L to I. If - eigenvalues only are being computed, only the active submatrix - need be transformed. -*/ - - if (! (*wantt)) { - i1 = l; - i2 = i__; - } - - if (its == 10 || its == 20) { - -/* Exceptional shift. 
*/ - - i__2 = i__ + (i__ - 1) * h_dim1; - s = (d__1 = h__[i__2].r, abs(d__1)) * .75; - i__2 = i__ + i__ * h_dim1; - z__1.r = s + h__[i__2].r, z__1.i = h__[i__2].i; - t.r = z__1.r, t.i = z__1.i; - } else { - -/* Wilkinson's shift. */ - - i__2 = i__ + i__ * h_dim1; - t.r = h__[i__2].r, t.i = h__[i__2].i; - i__2 = i__ - 1 + i__ * h_dim1; - i__3 = i__ + (i__ - 1) * h_dim1; - d__1 = h__[i__3].r; - z__1.r = d__1 * h__[i__2].r, z__1.i = d__1 * h__[i__2].i; - u.r = z__1.r, u.i = z__1.i; - if (u.r != 0. || u.i != 0.) { - i__2 = i__ - 1 + (i__ - 1) * h_dim1; - z__2.r = h__[i__2].r - t.r, z__2.i = h__[i__2].i - t.i; - z__1.r = z__2.r * .5, z__1.i = z__2.i * .5; - x.r = z__1.r, x.i = z__1.i; - z__3.r = x.r * x.r - x.i * x.i, z__3.i = x.r * x.i + x.i * - x.r; - z__2.r = z__3.r + u.r, z__2.i = z__3.i + u.i; - z_sqrt(&z__1, &z__2); - y.r = z__1.r, y.i = z__1.i; - if (x.r * y.r + d_imag(&x) * d_imag(&y) < 0.) { - z__1.r = -y.r, z__1.i = -y.i; - y.r = z__1.r, y.i = z__1.i; - } - z__3.r = x.r + y.r, z__3.i = x.i + y.i; - zladiv_(&z__2, &u, &z__3); - z__1.r = t.r - z__2.r, z__1.i = t.i - z__2.i; - t.r = z__1.r, t.i = z__1.i; - } - } - -/* Look for two consecutive small subdiagonal elements. */ - - i__2 = l + 1; - for (m = i__ - 1; m >= i__2; --m) { - -/* - Determine the effect of starting the single-shift QR - iteration at row M, and see if this would make H(M,M-1) - negligible. 
-*/ - - i__3 = m + m * h_dim1; - h11.r = h__[i__3].r, h11.i = h__[i__3].i; - i__3 = m + 1 + (m + 1) * h_dim1; - h22.r = h__[i__3].r, h22.i = h__[i__3].i; - z__1.r = h11.r - t.r, z__1.i = h11.i - t.i; - h11s.r = z__1.r, h11s.i = z__1.i; - i__3 = m + 1 + m * h_dim1; - h21 = h__[i__3].r; - s = (d__1 = h11s.r, abs(d__1)) + (d__2 = d_imag(&h11s), abs(d__2)) - + abs(h21); - z__1.r = h11s.r / s, z__1.i = h11s.i / s; - h11s.r = z__1.r, h11s.i = z__1.i; - h21 /= s; - v[0].r = h11s.r, v[0].i = h11s.i; - v[1].r = h21, v[1].i = 0.; - i__3 = m + (m - 1) * h_dim1; - h10 = h__[i__3].r; - tst1 = ((d__1 = h11s.r, abs(d__1)) + (d__2 = d_imag(&h11s), abs( - d__2))) * ((d__3 = h11.r, abs(d__3)) + (d__4 = d_imag(& - h11), abs(d__4)) + ((d__5 = h22.r, abs(d__5)) + (d__6 = - d_imag(&h22), abs(d__6)))); - if ((d__1 = h10 * h21, abs(d__1)) <= ulp * tst1) { - goto L50; - } -/* L40: */ - } - i__2 = l + l * h_dim1; - h11.r = h__[i__2].r, h11.i = h__[i__2].i; - i__2 = l + 1 + (l + 1) * h_dim1; - h22.r = h__[i__2].r, h22.i = h__[i__2].i; - z__1.r = h11.r - t.r, z__1.i = h11.i - t.i; - h11s.r = z__1.r, h11s.i = z__1.i; - i__2 = l + 1 + l * h_dim1; - h21 = h__[i__2].r; - s = (d__1 = h11s.r, abs(d__1)) + (d__2 = d_imag(&h11s), abs(d__2)) + - abs(h21); - z__1.r = h11s.r / s, z__1.i = h11s.i / s; - h11s.r = z__1.r, h11s.i = z__1.i; - h21 /= s; - v[0].r = h11s.r, v[0].i = h11s.i; - v[1].r = h21, v[1].i = 0.; -L50: - -/* Single-shift QR step */ - - i__2 = i__ - 1; - for (k = m; k <= i__2; ++k) { - -/* - The first iteration of this loop determines a reflection G - from the vector V and applies it from left and right to H, - thus creating a nonzero bulge below the subdiagonal. - - Each subsequent iteration determines a reflection G to - restore the Hessenberg form in the (K-1)th column, and thus - chases the bulge one step toward the bottom of the active - submatrix. - - V(2) is always real before the call to ZLARFG, and hence - after the call T2 ( = T1*V(2) ) is also real. 
-*/ - - if (k > m) { - zcopy_(&c__2, &h__[k + (k - 1) * h_dim1], &c__1, v, &c__1); - } - zlarfg_(&c__2, v, &v[1], &c__1, &t1); - if (k > m) { - i__3 = k + (k - 1) * h_dim1; - h__[i__3].r = v[0].r, h__[i__3].i = v[0].i; - i__3 = k + 1 + (k - 1) * h_dim1; - h__[i__3].r = 0., h__[i__3].i = 0.; - } - v2.r = v[1].r, v2.i = v[1].i; - z__1.r = t1.r * v2.r - t1.i * v2.i, z__1.i = t1.r * v2.i + t1.i * - v2.r; - t2 = z__1.r; - -/* - Apply G from the left to transform the rows of the matrix - in columns K to I2. -*/ - - i__3 = i2; - for (j = k; j <= i__3; ++j) { - d_cnjg(&z__3, &t1); - i__4 = k + j * h_dim1; - z__2.r = z__3.r * h__[i__4].r - z__3.i * h__[i__4].i, z__2.i = - z__3.r * h__[i__4].i + z__3.i * h__[i__4].r; - i__5 = k + 1 + j * h_dim1; - z__4.r = t2 * h__[i__5].r, z__4.i = t2 * h__[i__5].i; - z__1.r = z__2.r + z__4.r, z__1.i = z__2.i + z__4.i; - sum.r = z__1.r, sum.i = z__1.i; - i__4 = k + j * h_dim1; - i__5 = k + j * h_dim1; - z__1.r = h__[i__5].r - sum.r, z__1.i = h__[i__5].i - sum.i; - h__[i__4].r = z__1.r, h__[i__4].i = z__1.i; - i__4 = k + 1 + j * h_dim1; - i__5 = k + 1 + j * h_dim1; - z__2.r = sum.r * v2.r - sum.i * v2.i, z__2.i = sum.r * v2.i + - sum.i * v2.r; - z__1.r = h__[i__5].r - z__2.r, z__1.i = h__[i__5].i - z__2.i; - h__[i__4].r = z__1.r, h__[i__4].i = z__1.i; -/* L60: */ - } - -/* - Apply G from the right to transform the columns of the - matrix in rows I1 to min(K+2,I). 
- - Computing MIN -*/ - i__4 = k + 2; - i__3 = min(i__4,i__); - for (j = i1; j <= i__3; ++j) { - i__4 = j + k * h_dim1; - z__2.r = t1.r * h__[i__4].r - t1.i * h__[i__4].i, z__2.i = - t1.r * h__[i__4].i + t1.i * h__[i__4].r; - i__5 = j + (k + 1) * h_dim1; - z__3.r = t2 * h__[i__5].r, z__3.i = t2 * h__[i__5].i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - sum.r = z__1.r, sum.i = z__1.i; - i__4 = j + k * h_dim1; - i__5 = j + k * h_dim1; - z__1.r = h__[i__5].r - sum.r, z__1.i = h__[i__5].i - sum.i; - h__[i__4].r = z__1.r, h__[i__4].i = z__1.i; - i__4 = j + (k + 1) * h_dim1; - i__5 = j + (k + 1) * h_dim1; - d_cnjg(&z__3, &v2); - z__2.r = sum.r * z__3.r - sum.i * z__3.i, z__2.i = sum.r * - z__3.i + sum.i * z__3.r; - z__1.r = h__[i__5].r - z__2.r, z__1.i = h__[i__5].i - z__2.i; - h__[i__4].r = z__1.r, h__[i__4].i = z__1.i; -/* L70: */ - } - - if (*wantz) { - -/* Accumulate transformations in the matrix Z */ - - i__3 = *ihiz; - for (j = *iloz; j <= i__3; ++j) { - i__4 = j + k * z_dim1; - z__2.r = t1.r * z__[i__4].r - t1.i * z__[i__4].i, z__2.i = - t1.r * z__[i__4].i + t1.i * z__[i__4].r; - i__5 = j + (k + 1) * z_dim1; - z__3.r = t2 * z__[i__5].r, z__3.i = t2 * z__[i__5].i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - sum.r = z__1.r, sum.i = z__1.i; - i__4 = j + k * z_dim1; - i__5 = j + k * z_dim1; - z__1.r = z__[i__5].r - sum.r, z__1.i = z__[i__5].i - - sum.i; - z__[i__4].r = z__1.r, z__[i__4].i = z__1.i; - i__4 = j + (k + 1) * z_dim1; - i__5 = j + (k + 1) * z_dim1; - d_cnjg(&z__3, &v2); - z__2.r = sum.r * z__3.r - sum.i * z__3.i, z__2.i = sum.r * - z__3.i + sum.i * z__3.r; - z__1.r = z__[i__5].r - z__2.r, z__1.i = z__[i__5].i - - z__2.i; - z__[i__4].r = z__1.r, z__[i__4].i = z__1.i; -/* L80: */ - } - } - - if ((k == m && m > l)) { - -/* - If the QR step was started at row M > L because two - consecutive small subdiagonals were found, then extra - scaling must be performed to ensure that H(M,M-1) remains - real. -*/ - - z__1.r = 1. - t1.r, z__1.i = 0. 
- t1.i; - temp.r = z__1.r, temp.i = z__1.i; - d__1 = z_abs(&temp); - z__1.r = temp.r / d__1, z__1.i = temp.i / d__1; - temp.r = z__1.r, temp.i = z__1.i; - i__3 = m + 1 + m * h_dim1; - i__4 = m + 1 + m * h_dim1; - d_cnjg(&z__2, &temp); - z__1.r = h__[i__4].r * z__2.r - h__[i__4].i * z__2.i, z__1.i = - h__[i__4].r * z__2.i + h__[i__4].i * z__2.r; - h__[i__3].r = z__1.r, h__[i__3].i = z__1.i; - if (m + 2 <= i__) { - i__3 = m + 2 + (m + 1) * h_dim1; - i__4 = m + 2 + (m + 1) * h_dim1; - z__1.r = h__[i__4].r * temp.r - h__[i__4].i * temp.i, - z__1.i = h__[i__4].r * temp.i + h__[i__4].i * - temp.r; - h__[i__3].r = z__1.r, h__[i__3].i = z__1.i; - } - i__3 = i__; - for (j = m; j <= i__3; ++j) { - if (j != m + 1) { - if (i2 > j) { - i__4 = i2 - j; - zscal_(&i__4, &temp, &h__[j + (j + 1) * h_dim1], - ldh); - } - i__4 = j - i1; - d_cnjg(&z__1, &temp); - zscal_(&i__4, &z__1, &h__[i1 + j * h_dim1], &c__1); - if (*wantz) { - d_cnjg(&z__1, &temp); - zscal_(&nz, &z__1, &z__[*iloz + j * z_dim1], & - c__1); - } - } -/* L90: */ - } - } -/* L100: */ - } - -/* Ensure that H(I,I-1) is real. */ - - i__2 = i__ + (i__ - 1) * h_dim1; - temp.r = h__[i__2].r, temp.i = h__[i__2].i; - if (d_imag(&temp) != 0.) { - rtemp = z_abs(&temp); - i__2 = i__ + (i__ - 1) * h_dim1; - h__[i__2].r = rtemp, h__[i__2].i = 0.; - z__1.r = temp.r / rtemp, z__1.i = temp.i / rtemp; - temp.r = z__1.r, temp.i = z__1.i; - if (i2 > i__) { - i__2 = i2 - i__; - d_cnjg(&z__1, &temp); - zscal_(&i__2, &z__1, &h__[i__ + (i__ + 1) * h_dim1], ldh); - } - i__2 = i__ - i1; - zscal_(&i__2, &temp, &h__[i1 + i__ * h_dim1], &c__1); - if (*wantz) { - zscal_(&nz, &temp, &z__[*iloz + i__ * z_dim1], &c__1); - } - } - -/* L110: */ - } - -/* Failure to converge in remaining number of iterations */ - - *info = i__; - return 0; - -L120: - -/* H(I,I-1) is negligible: one eigenvalue has converged. 
*/ - - i__1 = i__; - i__2 = i__ + i__ * h_dim1; - w[i__1].r = h__[i__2].r, w[i__1].i = h__[i__2].i; - -/* - Decrement number of remaining iterations, and return to start of - the main loop with new value of I. -*/ - - itn -= its; - i__ = l - 1; - goto L10; - -L130: - return 0; - -/* End of ZLAHQR */ - -} /* zlahqr_ */ - -/* Subroutine */ int zlahrd_(integer *n, integer *k, integer *nb, - doublecomplex *a, integer *lda, doublecomplex *tau, doublecomplex *t, - integer *ldt, doublecomplex *y, integer *ldy) -{ - /* System generated locals */ - integer a_dim1, a_offset, t_dim1, t_offset, y_dim1, y_offset, i__1, i__2, - i__3; - doublecomplex z__1; - - /* Local variables */ - static integer i__; - static doublecomplex ei; - extern /* Subroutine */ int zscal_(integer *, doublecomplex *, - doublecomplex *, integer *), zgemv_(char *, integer *, integer *, - doublecomplex *, doublecomplex *, integer *, doublecomplex *, - integer *, doublecomplex *, doublecomplex *, integer *), - zcopy_(integer *, doublecomplex *, integer *, doublecomplex *, - integer *), zaxpy_(integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *, integer *), ztrmv_(char *, char *, - char *, integer *, doublecomplex *, integer *, doublecomplex *, - integer *), zlarfg_(integer *, - doublecomplex *, doublecomplex *, integer *, doublecomplex *), - zlacgv_(integer *, doublecomplex *, integer *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZLAHRD reduces the first NB columns of a complex general n-by-(n-k+1) - matrix A so that elements below the k-th subdiagonal are zero. The - reduction is performed by a unitary similarity transformation - Q' * A * Q. The routine returns the matrices V and T which determine - Q as a block reflector I - V*T*V', and also the matrix Y = A * V * T. 
- - This is an auxiliary routine called by ZGEHRD. - - Arguments - ========= - - N (input) INTEGER - The order of the matrix A. - - K (input) INTEGER - The offset for the reduction. Elements below the k-th - subdiagonal in the first NB columns are reduced to zero. - - NB (input) INTEGER - The number of columns to be reduced. - - A (input/output) COMPLEX*16 array, dimension (LDA,N-K+1) - On entry, the n-by-(n-k+1) general matrix A. - On exit, the elements on and above the k-th subdiagonal in - the first NB columns are overwritten with the corresponding - elements of the reduced matrix; the elements below the k-th - subdiagonal, with the array TAU, represent the matrix Q as a - product of elementary reflectors. The other columns of A are - unchanged. See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - TAU (output) COMPLEX*16 array, dimension (NB) - The scalar factors of the elementary reflectors. See Further - Details. - - T (output) COMPLEX*16 array, dimension (LDT,NB) - The upper triangular matrix T. - - LDT (input) INTEGER - The leading dimension of the array T. LDT >= NB. - - Y (output) COMPLEX*16 array, dimension (LDY,NB) - The n-by-nb matrix Y. - - LDY (input) INTEGER - The leading dimension of the array Y. LDY >= max(1,N). - - Further Details - =============== - - The matrix Q is represented as a product of nb elementary reflectors - - Q = H(1) H(2) . . . H(nb). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a complex scalar, and v is a complex vector with - v(1:i+k-1) = 0, v(i+k) = 1; v(i+k+1:n) is stored on exit in - A(i+k+1:n,i), and tau in TAU(i). - - The elements of the vectors v together form the (n-k+1)-by-nb matrix - V which is needed, with T and Y, to apply the transformation to the - unreduced part of the matrix, using an update of the form: - A := (I - V*T*V') * (A - Y*V'). 
- - The contents of A on exit are illustrated by the following example - with n = 7, k = 3 and nb = 2: - - ( a h a a a ) - ( a h a a a ) - ( a h a a a ) - ( h h a a a ) - ( v1 h a a a ) - ( v1 v2 a a a ) - ( v1 v2 a a a ) - - where a denotes an element of the original matrix A, h denotes a - modified element of the upper Hessenberg matrix H, and vi denotes an - element of the vector defining H(i). - - ===================================================================== - - - Quick return if possible -*/ - - /* Parameter adjustments */ - --tau; - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - t_dim1 = *ldt; - t_offset = 1 + t_dim1 * 1; - t -= t_offset; - y_dim1 = *ldy; - y_offset = 1 + y_dim1 * 1; - y -= y_offset; - - /* Function Body */ - if (*n <= 1) { - return 0; - } - - i__1 = *nb; - for (i__ = 1; i__ <= i__1; ++i__) { - if (i__ > 1) { - -/* - Update A(1:n,i) - - Compute i-th column of A - Y * V' -*/ - - i__2 = i__ - 1; - zlacgv_(&i__2, &a[*k + i__ - 1 + a_dim1], lda); - i__2 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", n, &i__2, &z__1, &y[y_offset], ldy, &a[*k - + i__ - 1 + a_dim1], lda, &c_b60, &a[i__ * a_dim1 + 1], & - c__1); - i__2 = i__ - 1; - zlacgv_(&i__2, &a[*k + i__ - 1 + a_dim1], lda); - -/* - Apply I - V * T' * V' to this column (call it b) from the - left, using the last column of T as workspace - - Let V = ( V1 ) and b = ( b1 ) (first I-1 rows) - ( V2 ) ( b2 ) - - where V1 is unit lower triangular - - w := V1' * b1 -*/ - - i__2 = i__ - 1; - zcopy_(&i__2, &a[*k + 1 + i__ * a_dim1], &c__1, &t[*nb * t_dim1 + - 1], &c__1); - i__2 = i__ - 1; - ztrmv_("Lower", "Conjugate transpose", "Unit", &i__2, &a[*k + 1 + - a_dim1], lda, &t[*nb * t_dim1 + 1], &c__1); - -/* w := w + V2'*b2 */ - - i__2 = *n - *k - i__ + 1; - i__3 = i__ - 1; - zgemv_("Conjugate transpose", &i__2, &i__3, &c_b60, &a[*k + i__ + - a_dim1], lda, &a[*k + i__ + i__ * a_dim1], &c__1, &c_b60, - &t[*nb * t_dim1 + 1], &c__1); - -/* w := T'*w */ - - i__2 = i__ - 
1; - ztrmv_("Upper", "Conjugate transpose", "Non-unit", &i__2, &t[ - t_offset], ldt, &t[*nb * t_dim1 + 1], &c__1); - -/* b2 := b2 - V2*w */ - - i__2 = *n - *k - i__ + 1; - i__3 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &a[*k + i__ + a_dim1], - lda, &t[*nb * t_dim1 + 1], &c__1, &c_b60, &a[*k + i__ + - i__ * a_dim1], &c__1); - -/* b1 := b1 - V1*w */ - - i__2 = i__ - 1; - ztrmv_("Lower", "No transpose", "Unit", &i__2, &a[*k + 1 + a_dim1] - , lda, &t[*nb * t_dim1 + 1], &c__1); - i__2 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zaxpy_(&i__2, &z__1, &t[*nb * t_dim1 + 1], &c__1, &a[*k + 1 + i__ - * a_dim1], &c__1); - - i__2 = *k + i__ - 1 + (i__ - 1) * a_dim1; - a[i__2].r = ei.r, a[i__2].i = ei.i; - } - -/* - Generate the elementary reflector H(i) to annihilate - A(k+i+1:n,i) -*/ - - i__2 = *k + i__ + i__ * a_dim1; - ei.r = a[i__2].r, ei.i = a[i__2].i; - i__2 = *n - *k - i__ + 1; -/* Computing MIN */ - i__3 = *k + i__ + 1; - zlarfg_(&i__2, &ei, &a[min(i__3,*n) + i__ * a_dim1], &c__1, &tau[i__]) - ; - i__2 = *k + i__ + i__ * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - -/* Compute Y(1:n,i) */ - - i__2 = *n - *k - i__ + 1; - zgemv_("No transpose", n, &i__2, &c_b60, &a[(i__ + 1) * a_dim1 + 1], - lda, &a[*k + i__ + i__ * a_dim1], &c__1, &c_b59, &y[i__ * - y_dim1 + 1], &c__1); - i__2 = *n - *k - i__ + 1; - i__3 = i__ - 1; - zgemv_("Conjugate transpose", &i__2, &i__3, &c_b60, &a[*k + i__ + - a_dim1], lda, &a[*k + i__ + i__ * a_dim1], &c__1, &c_b59, &t[ - i__ * t_dim1 + 1], &c__1); - i__2 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", n, &i__2, &z__1, &y[y_offset], ldy, &t[i__ * - t_dim1 + 1], &c__1, &c_b60, &y[i__ * y_dim1 + 1], &c__1); - zscal_(n, &tau[i__], &y[i__ * y_dim1 + 1], &c__1); - -/* Compute T(1:i,i) */ - - i__2 = i__ - 1; - i__3 = i__; - z__1.r = -tau[i__3].r, z__1.i = -tau[i__3].i; - zscal_(&i__2, &z__1, &t[i__ * t_dim1 + 1], &c__1); - i__2 = i__ - 1; - ztrmv_("Upper", "No transpose", "Non-unit", &i__2, 
&t[t_offset], ldt, - &t[i__ * t_dim1 + 1], &c__1) - ; - i__2 = i__ + i__ * t_dim1; - i__3 = i__; - t[i__2].r = tau[i__3].r, t[i__2].i = tau[i__3].i; - -/* L10: */ - } - i__1 = *k + *nb + *nb * a_dim1; - a[i__1].r = ei.r, a[i__1].i = ei.i; - - return 0; - -/* End of ZLAHRD */ - -} /* zlahrd_ */ - -/* Subroutine */ int zlals0_(integer *icompq, integer *nl, integer *nr, - integer *sqre, integer *nrhs, doublecomplex *b, integer *ldb, - doublecomplex *bx, integer *ldbx, integer *perm, integer *givptr, - integer *givcol, integer *ldgcol, doublereal *givnum, integer *ldgnum, - doublereal *poles, doublereal *difl, doublereal *difr, doublereal * - z__, integer *k, doublereal *c__, doublereal *s, doublereal *rwork, - integer *info) -{ - /* System generated locals */ - integer givcol_dim1, givcol_offset, difr_dim1, difr_offset, givnum_dim1, - givnum_offset, poles_dim1, poles_offset, b_dim1, b_offset, - bx_dim1, bx_offset, i__1, i__2, i__3, i__4, i__5; - doublereal d__1; - doublecomplex z__1; - - /* Builtin functions */ - double d_imag(doublecomplex *); - - /* Local variables */ - static integer i__, j, m, n; - static doublereal dj; - static integer nlp1, jcol; - static doublereal temp; - static integer jrow; - extern doublereal dnrm2_(integer *, doublereal *, integer *); - static doublereal diflj, difrj, dsigj; - extern /* Subroutine */ int dgemv_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, doublereal *, integer *, - doublereal *, doublereal *, integer *), zdrot_(integer *, - doublecomplex *, integer *, doublecomplex *, integer *, - doublereal *, doublereal *); - extern doublereal dlamc3_(doublereal *, doublereal *); - extern /* Subroutine */ int zcopy_(integer *, doublecomplex *, integer *, - doublecomplex *, integer *), xerbla_(char *, integer *); - static doublereal dsigjp; - extern /* Subroutine */ int zdscal_(integer *, doublereal *, - doublecomplex *, integer *), zlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, 
integer *, doublecomplex * - , integer *, integer *), zlacpy_(char *, integer *, - integer *, doublecomplex *, integer *, doublecomplex *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - December 1, 1999 - - - Purpose - ======= - - ZLALS0 applies back the multiplying factors of either the left or the - right singular vector matrix of a diagonal matrix appended by a row - to the right hand side matrix B in solving the least squares problem - using the divide-and-conquer SVD approach. - - For the left singular vector matrix, three types of orthogonal - matrices are involved: - - (1L) Givens rotations: the number of such rotations is GIVPTR; the - pairs of columns/rows they were applied to are stored in GIVCOL; - and the C- and S-values of these rotations are stored in GIVNUM. - - (2L) Permutation. The (NL+1)-st row of B is to be moved to the first - row, and for J=2:N, PERM(J)-th row of B is to be moved to the - J-th row. - - (3L) The left singular vector matrix of the remaining matrix. - - For the right singular vector matrix, four types of orthogonal - matrices are involved: - - (1R) The right singular vector matrix of the remaining matrix. - - (2R) If SQRE = 1, one extra Givens rotation to generate the right - null space. - - (3R) The inverse transformation of (2L). - - (4R) The inverse transformation of (1L). - - Arguments - ========= - - ICOMPQ (input) INTEGER - Specifies whether singular vectors are to be computed in - factored form: - = 0: Left singular vector matrix. - = 1: Right singular vector matrix. - - NL (input) INTEGER - The row dimension of the upper block. NL >= 1. - - NR (input) INTEGER - The row dimension of the lower block. NR >= 1. - - SQRE (input) INTEGER - = 0: the lower block is an NR-by-NR square matrix. - = 1: the lower block is an NR-by-(NR+1) rectangular matrix. 
- - The bidiagonal matrix has row dimension N = NL + NR + 1, - and column dimension M = N + SQRE. - - NRHS (input) INTEGER - The number of columns of B and BX. NRHS must be at least 1. - - B (input/output) COMPLEX*16 array, dimension ( LDB, NRHS ) - On input, B contains the right hand sides of the least - squares problem in rows 1 through M. On output, B contains - the solution X in rows 1 through N. - - LDB (input) INTEGER - The leading dimension of B. LDB must be at least - max(1,MAX( M, N ) ). - - BX (workspace) COMPLEX*16 array, dimension ( LDBX, NRHS ) - - LDBX (input) INTEGER - The leading dimension of BX. - - PERM (input) INTEGER array, dimension ( N ) - The permutations (from deflation and sorting) applied - to the two blocks. - - GIVPTR (input) INTEGER - The number of Givens rotations which took place in this - subproblem. - - GIVCOL (input) INTEGER array, dimension ( LDGCOL, 2 ) - Each pair of numbers indicates a pair of rows/columns - involved in a Givens rotation. - - LDGCOL (input) INTEGER - The leading dimension of GIVCOL, must be at least N. - - GIVNUM (input) DOUBLE PRECISION array, dimension ( LDGNUM, 2 ) - Each number indicates the C or S value used in the - corresponding Givens rotation. - - LDGNUM (input) INTEGER - The leading dimension of arrays DIFR, POLES and - GIVNUM, must be at least K. - - POLES (input) DOUBLE PRECISION array, dimension ( LDGNUM, 2 ) - On entry, POLES(1:K, 1) contains the new singular - values obtained from solving the secular equation, and - POLES(1:K, 2) is an array containing the poles in the secular - equation. - - DIFL (input) DOUBLE PRECISION array, dimension ( K ). - On entry, DIFL(I) is the distance between I-th updated - (undeflated) singular value and the I-th (undeflated) old - singular value. - - DIFR (input) DOUBLE PRECISION array, dimension ( LDGNUM, 2 ). - On entry, DIFR(I, 1) contains the distances between I-th - updated (undeflated) singular value and the I+1-th - (undeflated) old singular value. 
And DIFR(I, 2) is the - normalizing factor for the I-th right singular vector. - - Z (input) DOUBLE PRECISION array, dimension ( K ) - Contains the components of the deflation-adjusted updating row - vector. - - K (input) INTEGER - Contains the dimension of the non-deflated matrix. - This is the order of the related secular equation. 1 <= K <= N. - - C (input) DOUBLE PRECISION - C contains garbage if SQRE = 0 and the C-value of a Givens - rotation related to the right null space if SQRE = 1. - - S (input) DOUBLE PRECISION - S contains garbage if SQRE = 0 and the S-value of a Givens - rotation related to the right null space if SQRE = 1. - - RWORK (workspace) DOUBLE PRECISION array, dimension - ( K*(1+NRHS) + 2*NRHS ) - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - - Further Details - =============== - - Based on contributions by - Ming Gu and Ren-Cang Li, Computer Science Division, University of - California at Berkeley, USA - Osni Marques, LBNL/NERSC, USA - - ===================================================================== - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - bx_dim1 = *ldbx; - bx_offset = 1 + bx_dim1 * 1; - bx -= bx_offset; - --perm; - givcol_dim1 = *ldgcol; - givcol_offset = 1 + givcol_dim1 * 1; - givcol -= givcol_offset; - difr_dim1 = *ldgnum; - difr_offset = 1 + difr_dim1 * 1; - difr -= difr_offset; - poles_dim1 = *ldgnum; - poles_offset = 1 + poles_dim1 * 1; - poles -= poles_offset; - givnum_dim1 = *ldgnum; - givnum_offset = 1 + givnum_dim1 * 1; - givnum -= givnum_offset; - --difl; - --z__; - --rwork; - - /* Function Body */ - *info = 0; - - if (*icompq < 0 || *icompq > 1) { - *info = -1; - } else if (*nl < 1) { - *info = -2; - } else if (*nr < 1) { - *info = -3; - } else if (*sqre < 0 || *sqre > 1) { - *info = -4; - } - - n = *nl + *nr + 1; - - if (*nrhs < 1) { - *info = -5; - } else if (*ldb < n) { - *info = -7; - } else if (*ldbx < n) { - *info = -9; - } else if (*givptr < 0) { - *info = -11; - } else if (*ldgcol < n) { - *info = -13; - } else if (*ldgnum < n) { - *info = -15; - } else if (*k < 1) { - *info = -20; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZLALS0", &i__1); - return 0; - } - - m = n + *sqre; - nlp1 = *nl + 1; - - if (*icompq == 0) { - -/* - Apply back orthogonal transformations from the left. - - Step (1L): apply back the Givens rotations performed. -*/ - - i__1 = *givptr; - for (i__ = 1; i__ <= i__1; ++i__) { - zdrot_(nrhs, &b[givcol[i__ + ((givcol_dim1) << (1))] + b_dim1], - ldb, &b[givcol[i__ + givcol_dim1] + b_dim1], ldb, &givnum[ - i__ + ((givnum_dim1) << (1))], &givnum[i__ + givnum_dim1]) - ; -/* L10: */ - } - -/* Step (2L): permute rows of B. */ - - zcopy_(nrhs, &b[nlp1 + b_dim1], ldb, &bx[bx_dim1 + 1], ldbx); - i__1 = n; - for (i__ = 2; i__ <= i__1; ++i__) { - zcopy_(nrhs, &b[perm[i__] + b_dim1], ldb, &bx[i__ + bx_dim1], - ldbx); -/* L20: */ - } - -/* - Step (3L): apply the inverse of the left singular vector - matrix to BX. 
-*/ - - if (*k == 1) { - zcopy_(nrhs, &bx[bx_offset], ldbx, &b[b_offset], ldb); - if (z__[1] < 0.) { - zdscal_(nrhs, &c_b1294, &b[b_offset], ldb); - } - } else { - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - diflj = difl[j]; - dj = poles[j + poles_dim1]; - dsigj = -poles[j + ((poles_dim1) << (1))]; - if (j < *k) { - difrj = -difr[j + difr_dim1]; - dsigjp = -poles[j + 1 + ((poles_dim1) << (1))]; - } - if (z__[j] == 0. || poles[j + ((poles_dim1) << (1))] == 0.) { - rwork[j] = 0.; - } else { - rwork[j] = -poles[j + ((poles_dim1) << (1))] * z__[j] / - diflj / (poles[j + ((poles_dim1) << (1))] + dj); - } - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - if (z__[i__] == 0. || poles[i__ + ((poles_dim1) << (1))] - == 0.) { - rwork[i__] = 0.; - } else { - rwork[i__] = poles[i__ + ((poles_dim1) << (1))] * z__[ - i__] / (dlamc3_(&poles[i__ + ((poles_dim1) << - (1))], &dsigj) - diflj) / (poles[i__ + (( - poles_dim1) << (1))] + dj); - } -/* L30: */ - } - i__2 = *k; - for (i__ = j + 1; i__ <= i__2; ++i__) { - if (z__[i__] == 0. || poles[i__ + ((poles_dim1) << (1))] - == 0.) { - rwork[i__] = 0.; - } else { - rwork[i__] = poles[i__ + ((poles_dim1) << (1))] * z__[ - i__] / (dlamc3_(&poles[i__ + ((poles_dim1) << - (1))], &dsigjp) + difrj) / (poles[i__ + (( - poles_dim1) << (1))] + dj); - } -/* L40: */ - } - rwork[1] = -1.; - temp = dnrm2_(k, &rwork[1], &c__1); - -/* - Since B and BX are complex, the following call to DGEMV - is performed in two steps (real and imaginary parts). 
- - CALL DGEMV( 'T', K, NRHS, ONE, BX, LDBX, WORK, 1, ZERO, - $ B( J, 1 ), LDB ) -*/ - - i__ = *k + ((*nrhs) << (1)); - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = *k; - for (jrow = 1; jrow <= i__3; ++jrow) { - ++i__; - i__4 = jrow + jcol * bx_dim1; - rwork[i__] = bx[i__4].r; -/* L50: */ - } -/* L60: */ - } - dgemv_("T", k, nrhs, &c_b1015, &rwork[*k + 1 + ((*nrhs) << (1) - )], k, &rwork[1], &c__1, &c_b324, &rwork[*k + 1], & - c__1); - i__ = *k + ((*nrhs) << (1)); - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = *k; - for (jrow = 1; jrow <= i__3; ++jrow) { - ++i__; - rwork[i__] = d_imag(&bx[jrow + jcol * bx_dim1]); -/* L70: */ - } -/* L80: */ - } - dgemv_("T", k, nrhs, &c_b1015, &rwork[*k + 1 + ((*nrhs) << (1) - )], k, &rwork[1], &c__1, &c_b324, &rwork[*k + 1 + * - nrhs], &c__1); - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = j + jcol * b_dim1; - i__4 = jcol + *k; - i__5 = jcol + *k + *nrhs; - z__1.r = rwork[i__4], z__1.i = rwork[i__5]; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L90: */ - } - zlascl_("G", &c__0, &c__0, &temp, &c_b1015, &c__1, nrhs, &b[j - + b_dim1], ldb, info); -/* L100: */ - } - } - -/* Move the deflated rows of BX to B also. */ - - if (*k < max(m,n)) { - i__1 = n - *k; - zlacpy_("A", &i__1, nrhs, &bx[*k + 1 + bx_dim1], ldbx, &b[*k + 1 - + b_dim1], ldb); - } - } else { - -/* - Apply back the right orthogonal transformations. - - Step (1R): apply back the new right singular vector matrix - to B. -*/ - - if (*k == 1) { - zcopy_(nrhs, &b[b_offset], ldb, &bx[bx_offset], ldbx); - } else { - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - dsigj = poles[j + ((poles_dim1) << (1))]; - if (z__[j] == 0.) { - rwork[j] = 0.; - } else { - rwork[j] = -z__[j] / difl[j] / (dsigj + poles[j + - poles_dim1]) / difr[j + ((difr_dim1) << (1))]; - } - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - if (z__[j] == 0.) 
{ - rwork[i__] = 0.; - } else { - d__1 = -poles[i__ + 1 + ((poles_dim1) << (1))]; - rwork[i__] = z__[j] / (dlamc3_(&dsigj, &d__1) - difr[ - i__ + difr_dim1]) / (dsigj + poles[i__ + - poles_dim1]) / difr[i__ + ((difr_dim1) << (1)) - ]; - } -/* L110: */ - } - i__2 = *k; - for (i__ = j + 1; i__ <= i__2; ++i__) { - if (z__[j] == 0.) { - rwork[i__] = 0.; - } else { - d__1 = -poles[i__ + ((poles_dim1) << (1))]; - rwork[i__] = z__[j] / (dlamc3_(&dsigj, &d__1) - difl[ - i__]) / (dsigj + poles[i__ + poles_dim1]) / - difr[i__ + ((difr_dim1) << (1))]; - } -/* L120: */ - } - -/* - Since B and BX are complex, the following call to DGEMV - is performed in two steps (real and imaginary parts). - - CALL DGEMV( 'T', K, NRHS, ONE, B, LDB, WORK, 1, ZERO, - $ BX( J, 1 ), LDBX ) -*/ - - i__ = *k + ((*nrhs) << (1)); - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = *k; - for (jrow = 1; jrow <= i__3; ++jrow) { - ++i__; - i__4 = jrow + jcol * b_dim1; - rwork[i__] = b[i__4].r; -/* L130: */ - } -/* L140: */ - } - dgemv_("T", k, nrhs, &c_b1015, &rwork[*k + 1 + ((*nrhs) << (1) - )], k, &rwork[1], &c__1, &c_b324, &rwork[*k + 1], & - c__1); - i__ = *k + ((*nrhs) << (1)); - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = *k; - for (jrow = 1; jrow <= i__3; ++jrow) { - ++i__; - rwork[i__] = d_imag(&b[jrow + jcol * b_dim1]); -/* L150: */ - } -/* L160: */ - } - dgemv_("T", k, nrhs, &c_b1015, &rwork[*k + 1 + ((*nrhs) << (1) - )], k, &rwork[1], &c__1, &c_b324, &rwork[*k + 1 + * - nrhs], &c__1); - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = j + jcol * bx_dim1; - i__4 = jcol + *k; - i__5 = jcol + *k + *nrhs; - z__1.r = rwork[i__4], z__1.i = rwork[i__5]; - bx[i__3].r = z__1.r, bx[i__3].i = z__1.i; -/* L170: */ - } -/* L180: */ - } - } - -/* - Step (2R): if SQRE = 1, apply back the rotation that is - related to the right null space of the subproblem. 
-*/ - - if (*sqre == 1) { - zcopy_(nrhs, &b[m + b_dim1], ldb, &bx[m + bx_dim1], ldbx); - zdrot_(nrhs, &bx[bx_dim1 + 1], ldbx, &bx[m + bx_dim1], ldbx, c__, - s); - } - if (*k < max(m,n)) { - i__1 = n - *k; - zlacpy_("A", &i__1, nrhs, &b[*k + 1 + b_dim1], ldb, &bx[*k + 1 + - bx_dim1], ldbx); - } - -/* Step (3R): permute rows of B. */ - - zcopy_(nrhs, &bx[bx_dim1 + 1], ldbx, &b[nlp1 + b_dim1], ldb); - if (*sqre == 1) { - zcopy_(nrhs, &bx[m + bx_dim1], ldbx, &b[m + b_dim1], ldb); - } - i__1 = n; - for (i__ = 2; i__ <= i__1; ++i__) { - zcopy_(nrhs, &bx[i__ + bx_dim1], ldbx, &b[perm[i__] + b_dim1], - ldb); -/* L190: */ - } - -/* Step (4R): apply back the Givens rotations performed. */ - - for (i__ = *givptr; i__ >= 1; --i__) { - d__1 = -givnum[i__ + givnum_dim1]; - zdrot_(nrhs, &b[givcol[i__ + ((givcol_dim1) << (1))] + b_dim1], - ldb, &b[givcol[i__ + givcol_dim1] + b_dim1], ldb, &givnum[ - i__ + ((givnum_dim1) << (1))], &d__1); -/* L200: */ - } - } - - return 0; - -/* End of ZLALS0 */ - -} /* zlals0_ */ - -/* Subroutine */ int zlalsa_(integer *icompq, integer *smlsiz, integer *n, - integer *nrhs, doublecomplex *b, integer *ldb, doublecomplex *bx, - integer *ldbx, doublereal *u, integer *ldu, doublereal *vt, integer * - k, doublereal *difl, doublereal *difr, doublereal *z__, doublereal * - poles, integer *givptr, integer *givcol, integer *ldgcol, integer * - perm, doublereal *givnum, doublereal *c__, doublereal *s, doublereal * - rwork, integer *iwork, integer *info) -{ - /* System generated locals */ - integer givcol_dim1, givcol_offset, perm_dim1, perm_offset, difl_dim1, - difl_offset, difr_dim1, difr_offset, givnum_dim1, givnum_offset, - poles_dim1, poles_offset, u_dim1, u_offset, vt_dim1, vt_offset, - z_dim1, z_offset, b_dim1, b_offset, bx_dim1, bx_offset, i__1, - i__2, i__3, i__4, i__5, i__6; - doublecomplex z__1; - - /* Builtin functions */ - double d_imag(doublecomplex *); - integer pow_ii(integer *, integer *); - - /* Local variables */ - static integer i__, j, 
i1, ic, lf, nd, ll, nl, nr, im1, nlf, nrf, lvl, - ndb1, nlp1, lvl2, nrp1, jcol, nlvl, sqre, jrow, jimag; - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *); - static integer jreal, inode, ndiml, ndimr; - extern /* Subroutine */ int zcopy_(integer *, doublecomplex *, integer *, - doublecomplex *, integer *), zlals0_(integer *, integer *, - integer *, integer *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *, integer *, integer *, integer *, - integer *, doublereal *, integer *, doublereal *, doublereal *, - doublereal *, doublereal *, integer *, doublereal *, doublereal *, - doublereal *, integer *), dlasdt_(integer *, integer *, integer * - , integer *, integer *, integer *, integer *), xerbla_(char *, - integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZLALSA is an intermediate step in solving the least squares problem - by computing the SVD of the coefficient matrix in compact form (the - singular vectors are computed as products of simple orthogonal - matrices). - - If ICOMPQ = 0, ZLALSA applies the inverse of the left singular vector - matrix of an upper bidiagonal matrix to the right hand side; and if - ICOMPQ = 1, ZLALSA applies the right singular vector matrix to the - right hand side. The singular vector matrices were generated in - compact form by ZLALSA. - - Arguments - ========= - - ICOMPQ (input) INTEGER - Specifies whether the left or the right singular vector - matrix is involved. - = 0: Left singular vector matrix - = 1: Right singular vector matrix - - SMLSIZ (input) INTEGER - The maximum size of the subproblems at the bottom of the - computation tree. 
-
-    N      (input) INTEGER
-           The row and column dimensions of the upper bidiagonal matrix.
-
-    NRHS   (input) INTEGER
-           The number of columns of B and BX. NRHS must be at least 1.
-
-    B      (input) COMPLEX*16 array, dimension ( LDB, NRHS )
-           On input, B contains the right hand sides of the least
-           squares problem in rows 1 through M. On output, B contains
-           the solution X in rows 1 through N.
-
-    LDB    (input) INTEGER
-           The leading dimension of B in the calling subprogram.
-           LDB must be at least max(1,MAX( M, N ) ).
-
-    BX     (output) COMPLEX*16 array, dimension ( LDBX, NRHS )
-           On exit, the result of applying the left or right singular
-           vector matrix to B.
-
-    LDBX   (input) INTEGER
-           The leading dimension of BX.
-
-    U      (input) DOUBLE PRECISION array, dimension ( LDU, SMLSIZ ).
-           On entry, U contains the left singular vector matrices of all
-           subproblems at the bottom level.
-
-    LDU    (input) INTEGER, LDU >= N.
-           The leading dimension of arrays U, VT, DIFL, DIFR,
-           POLES, GIVNUM, and Z.
-
-    VT     (input) DOUBLE PRECISION array, dimension ( LDU, SMLSIZ+1 ).
-           On entry, VT' contains the right singular vector matrices of
-           all subproblems at the bottom level.
-
-    K      (input) INTEGER array, dimension ( N ).
-
-    DIFL   (input) DOUBLE PRECISION array, dimension ( LDU, NLVL ).
-           where NLVL = INT(log_2 (N/(SMLSIZ+1))) + 1.
-
-    DIFR   (input) DOUBLE PRECISION array, dimension ( LDU, 2 * NLVL ).
-           On entry, DIFL(*, I) and DIFR(*, 2 * I -1) record
-           distances between singular values on the I-th level and
-           singular values on the (I -1)-th level, and DIFR(*, 2 * I)
-           record the normalizing factors of the right singular vectors
-           matrices of subproblems on I-th level.
-
-    Z      (input) DOUBLE PRECISION array, dimension ( LDU, NLVL ).
-           On entry, Z(1, I) contains the components of the deflation-
-           adjusted updating row vector for subproblems on the I-th
-           level.
-
-    POLES  (input) DOUBLE PRECISION array, dimension ( LDU, 2 * NLVL ). 
-           On entry, POLES(*, 2 * I -1: 2 * I) contains the new and old
-           singular values involved in the secular equations on the I-th
-           level.
-
-    GIVPTR (input) INTEGER array, dimension ( N ).
-           On entry, GIVPTR( I ) records the number of Givens
-           rotations performed on the I-th problem on the computation
-           tree.
-
-    GIVCOL (input) INTEGER array, dimension ( LDGCOL, 2 * NLVL ).
-           On entry, for each I, GIVCOL(*, 2 * I - 1: 2 * I) records the
-           locations of Givens rotations performed on the I-th level on
-           the computation tree.
-
-    LDGCOL (input) INTEGER, LDGCOL >= N.
-           The leading dimension of arrays GIVCOL and PERM.
-
-    PERM   (input) INTEGER array, dimension ( LDGCOL, NLVL ).
-           On entry, PERM(*, I) records permutations done on the I-th
-           level of the computation tree.
-
-    GIVNUM (input) DOUBLE PRECISION array, dimension ( LDU, 2 * NLVL ).
-           On entry, GIVNUM(*, 2 *I -1 : 2 * I) records the C- and S-
-           values of Givens rotations performed on the I-th level on the
-           computation tree.
-
-    C      (input) DOUBLE PRECISION array, dimension ( N ).
-           On entry, if the I-th subproblem is not square,
-           C( I ) contains the C-value of a Givens rotation related to
-           the right null space of the I-th subproblem.
-
-    S      (input) DOUBLE PRECISION array, dimension ( N ).
-           On entry, if the I-th subproblem is not square,
-           S( I ) contains the S-value of a Givens rotation related to
-           the right null space of the I-th subproblem.
-
-    RWORK  (workspace) DOUBLE PRECISION array, dimension at least
-           max ( N, (SMLSIZ+1)*NRHS*3 ).
-
-    IWORK  (workspace) INTEGER array.
-           The dimension must be at least 3 * N.
-
-    INFO   (output) INTEGER
-           = 0:  successful exit.
-           < 0:  if INFO = -i, the i-th argument had an illegal value. 
- - Further Details - =============== - - Based on contributions by - Ming Gu and Ren-Cang Li, Computer Science Division, University of - California at Berkeley, USA - Osni Marques, LBNL/NERSC, USA - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - bx_dim1 = *ldbx; - bx_offset = 1 + bx_dim1 * 1; - bx -= bx_offset; - givnum_dim1 = *ldu; - givnum_offset = 1 + givnum_dim1 * 1; - givnum -= givnum_offset; - poles_dim1 = *ldu; - poles_offset = 1 + poles_dim1 * 1; - poles -= poles_offset; - z_dim1 = *ldu; - z_offset = 1 + z_dim1 * 1; - z__ -= z_offset; - difr_dim1 = *ldu; - difr_offset = 1 + difr_dim1 * 1; - difr -= difr_offset; - difl_dim1 = *ldu; - difl_offset = 1 + difl_dim1 * 1; - difl -= difl_offset; - vt_dim1 = *ldu; - vt_offset = 1 + vt_dim1 * 1; - vt -= vt_offset; - u_dim1 = *ldu; - u_offset = 1 + u_dim1 * 1; - u -= u_offset; - --k; - --givptr; - perm_dim1 = *ldgcol; - perm_offset = 1 + perm_dim1 * 1; - perm -= perm_offset; - givcol_dim1 = *ldgcol; - givcol_offset = 1 + givcol_dim1 * 1; - givcol -= givcol_offset; - --c__; - --s; - --rwork; - --iwork; - - /* Function Body */ - *info = 0; - - if (*icompq < 0 || *icompq > 1) { - *info = -1; - } else if (*smlsiz < 3) { - *info = -2; - } else if (*n < *smlsiz) { - *info = -3; - } else if (*nrhs < 1) { - *info = -4; - } else if (*ldb < *n) { - *info = -6; - } else if (*ldbx < *n) { - *info = -8; - } else if (*ldu < *n) { - *info = -10; - } else if (*ldgcol < *n) { - *info = -19; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZLALSA", &i__1); - return 0; - } - -/* Book-keeping and setting up the computation tree. */ - - inode = 1; - ndiml = inode + *n; - ndimr = ndiml + *n; - - dlasdt_(n, &nlvl, &nd, &iwork[inode], &iwork[ndiml], &iwork[ndimr], - smlsiz); - -/* - The following code applies back the left singular vector factors. 
- For applying back the right singular vector factors, go to 170. -*/ - - if (*icompq == 1) { - goto L170; - } - -/* - The nodes on the bottom level of the tree were solved - by DLASDQ. The corresponding left and right singular vector - matrices are in explicit form. First apply back the left - singular vector matrices. -*/ - - ndb1 = (nd + 1) / 2; - i__1 = nd; - for (i__ = ndb1; i__ <= i__1; ++i__) { - -/* - IC : center row of each node - NL : number of rows of left subproblem - NR : number of rows of right subproblem - NLF: starting row of the left subproblem - NRF: starting row of the right subproblem -*/ - - i1 = i__ - 1; - ic = iwork[inode + i1]; - nl = iwork[ndiml + i1]; - nr = iwork[ndimr + i1]; - nlf = ic - nl; - nrf = ic + 1; - -/* - Since B and BX are complex, the following call to DGEMM - is performed in two steps (real and imaginary parts). - - CALL DGEMM( 'T', 'N', NL, NRHS, NL, ONE, U( NLF, 1 ), LDU, - $ B( NLF, 1 ), LDB, ZERO, BX( NLF, 1 ), LDBX ) -*/ - - j = (nl * *nrhs) << (1); - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = nlf + nl - 1; - for (jrow = nlf; jrow <= i__3; ++jrow) { - ++j; - i__4 = jrow + jcol * b_dim1; - rwork[j] = b[i__4].r; -/* L10: */ - } -/* L20: */ - } - dgemm_("T", "N", &nl, nrhs, &nl, &c_b1015, &u[nlf + u_dim1], ldu, & - rwork[((nl * *nrhs) << (1)) + 1], &nl, &c_b324, &rwork[1], & - nl); - j = (nl * *nrhs) << (1); - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = nlf + nl - 1; - for (jrow = nlf; jrow <= i__3; ++jrow) { - ++j; - rwork[j] = d_imag(&b[jrow + jcol * b_dim1]); -/* L30: */ - } -/* L40: */ - } - dgemm_("T", "N", &nl, nrhs, &nl, &c_b1015, &u[nlf + u_dim1], ldu, & - rwork[((nl * *nrhs) << (1)) + 1], &nl, &c_b324, &rwork[nl * * - nrhs + 1], &nl); - jreal = 0; - jimag = nl * *nrhs; - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = nlf + nl - 1; - for (jrow = nlf; jrow <= i__3; ++jrow) { - ++jreal; - ++jimag; - i__4 = jrow + jcol * bx_dim1; - i__5 = jreal; - i__6 = jimag; 
- z__1.r = rwork[i__5], z__1.i = rwork[i__6]; - bx[i__4].r = z__1.r, bx[i__4].i = z__1.i; -/* L50: */ - } -/* L60: */ - } - -/* - Since B and BX are complex, the following call to DGEMM - is performed in two steps (real and imaginary parts). - - CALL DGEMM( 'T', 'N', NR, NRHS, NR, ONE, U( NRF, 1 ), LDU, - $ B( NRF, 1 ), LDB, ZERO, BX( NRF, 1 ), LDBX ) -*/ - - j = (nr * *nrhs) << (1); - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = nrf + nr - 1; - for (jrow = nrf; jrow <= i__3; ++jrow) { - ++j; - i__4 = jrow + jcol * b_dim1; - rwork[j] = b[i__4].r; -/* L70: */ - } -/* L80: */ - } - dgemm_("T", "N", &nr, nrhs, &nr, &c_b1015, &u[nrf + u_dim1], ldu, & - rwork[((nr * *nrhs) << (1)) + 1], &nr, &c_b324, &rwork[1], & - nr); - j = (nr * *nrhs) << (1); - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = nrf + nr - 1; - for (jrow = nrf; jrow <= i__3; ++jrow) { - ++j; - rwork[j] = d_imag(&b[jrow + jcol * b_dim1]); -/* L90: */ - } -/* L100: */ - } - dgemm_("T", "N", &nr, nrhs, &nr, &c_b1015, &u[nrf + u_dim1], ldu, & - rwork[((nr * *nrhs) << (1)) + 1], &nr, &c_b324, &rwork[nr * * - nrhs + 1], &nr); - jreal = 0; - jimag = nr * *nrhs; - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = nrf + nr - 1; - for (jrow = nrf; jrow <= i__3; ++jrow) { - ++jreal; - ++jimag; - i__4 = jrow + jcol * bx_dim1; - i__5 = jreal; - i__6 = jimag; - z__1.r = rwork[i__5], z__1.i = rwork[i__6]; - bx[i__4].r = z__1.r, bx[i__4].i = z__1.i; -/* L110: */ - } -/* L120: */ - } - -/* L130: */ - } - -/* - Next copy the rows of B that correspond to unchanged rows - in the bidiagonal matrix to BX. -*/ - - i__1 = nd; - for (i__ = 1; i__ <= i__1; ++i__) { - ic = iwork[inode + i__ - 1]; - zcopy_(nrhs, &b[ic + b_dim1], ldb, &bx[ic + bx_dim1], ldbx); -/* L140: */ - } - -/* - Finally go through the left singular vector matrices of all - the other subproblems bottom-up on the tree. 
-*/ - - j = pow_ii(&c__2, &nlvl); - sqre = 0; - - for (lvl = nlvl; lvl >= 1; --lvl) { - lvl2 = ((lvl) << (1)) - 1; - -/* - find the first node LF and last node LL on - the current level LVL -*/ - - if (lvl == 1) { - lf = 1; - ll = 1; - } else { - i__1 = lvl - 1; - lf = pow_ii(&c__2, &i__1); - ll = ((lf) << (1)) - 1; - } - i__1 = ll; - for (i__ = lf; i__ <= i__1; ++i__) { - im1 = i__ - 1; - ic = iwork[inode + im1]; - nl = iwork[ndiml + im1]; - nr = iwork[ndimr + im1]; - nlf = ic - nl; - nrf = ic + 1; - --j; - zlals0_(icompq, &nl, &nr, &sqre, nrhs, &bx[nlf + bx_dim1], ldbx, & - b[nlf + b_dim1], ldb, &perm[nlf + lvl * perm_dim1], & - givptr[j], &givcol[nlf + lvl2 * givcol_dim1], ldgcol, & - givnum[nlf + lvl2 * givnum_dim1], ldu, &poles[nlf + lvl2 * - poles_dim1], &difl[nlf + lvl * difl_dim1], &difr[nlf + - lvl2 * difr_dim1], &z__[nlf + lvl * z_dim1], &k[j], &c__[ - j], &s[j], &rwork[1], info); -/* L150: */ - } -/* L160: */ - } - goto L330; - -/* ICOMPQ = 1: applying back the right singular vector factors. */ - -L170: - -/* - First now go through the right singular vector matrices of all - the tree nodes top-down. -*/ - - j = 0; - i__1 = nlvl; - for (lvl = 1; lvl <= i__1; ++lvl) { - lvl2 = ((lvl) << (1)) - 1; - -/* - Find the first node LF and last node LL on - the current level LVL. 
-*/ - - if (lvl == 1) { - lf = 1; - ll = 1; - } else { - i__2 = lvl - 1; - lf = pow_ii(&c__2, &i__2); - ll = ((lf) << (1)) - 1; - } - i__2 = lf; - for (i__ = ll; i__ >= i__2; --i__) { - im1 = i__ - 1; - ic = iwork[inode + im1]; - nl = iwork[ndiml + im1]; - nr = iwork[ndimr + im1]; - nlf = ic - nl; - nrf = ic + 1; - if (i__ == ll) { - sqre = 0; - } else { - sqre = 1; - } - ++j; - zlals0_(icompq, &nl, &nr, &sqre, nrhs, &b[nlf + b_dim1], ldb, &bx[ - nlf + bx_dim1], ldbx, &perm[nlf + lvl * perm_dim1], & - givptr[j], &givcol[nlf + lvl2 * givcol_dim1], ldgcol, & - givnum[nlf + lvl2 * givnum_dim1], ldu, &poles[nlf + lvl2 * - poles_dim1], &difl[nlf + lvl * difl_dim1], &difr[nlf + - lvl2 * difr_dim1], &z__[nlf + lvl * z_dim1], &k[j], &c__[ - j], &s[j], &rwork[1], info); -/* L180: */ - } -/* L190: */ - } - -/* - The nodes on the bottom level of the tree were solved - by DLASDQ. The corresponding right singular vector - matrices are in explicit form. Apply them back. -*/ - - ndb1 = (nd + 1) / 2; - i__1 = nd; - for (i__ = ndb1; i__ <= i__1; ++i__) { - i1 = i__ - 1; - ic = iwork[inode + i1]; - nl = iwork[ndiml + i1]; - nr = iwork[ndimr + i1]; - nlp1 = nl + 1; - if (i__ == nd) { - nrp1 = nr; - } else { - nrp1 = nr + 1; - } - nlf = ic - nl; - nrf = ic + 1; - -/* - Since B and BX are complex, the following call to DGEMM is - performed in two steps (real and imaginary parts). 
- - CALL DGEMM( 'T', 'N', NLP1, NRHS, NLP1, ONE, VT( NLF, 1 ), LDU, - $ B( NLF, 1 ), LDB, ZERO, BX( NLF, 1 ), LDBX ) -*/ - - j = (nlp1 * *nrhs) << (1); - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = nlf + nlp1 - 1; - for (jrow = nlf; jrow <= i__3; ++jrow) { - ++j; - i__4 = jrow + jcol * b_dim1; - rwork[j] = b[i__4].r; -/* L200: */ - } -/* L210: */ - } - dgemm_("T", "N", &nlp1, nrhs, &nlp1, &c_b1015, &vt[nlf + vt_dim1], - ldu, &rwork[((nlp1 * *nrhs) << (1)) + 1], &nlp1, &c_b324, & - rwork[1], &nlp1); - j = (nlp1 * *nrhs) << (1); - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = nlf + nlp1 - 1; - for (jrow = nlf; jrow <= i__3; ++jrow) { - ++j; - rwork[j] = d_imag(&b[jrow + jcol * b_dim1]); -/* L220: */ - } -/* L230: */ - } - dgemm_("T", "N", &nlp1, nrhs, &nlp1, &c_b1015, &vt[nlf + vt_dim1], - ldu, &rwork[((nlp1 * *nrhs) << (1)) + 1], &nlp1, &c_b324, & - rwork[nlp1 * *nrhs + 1], &nlp1); - jreal = 0; - jimag = nlp1 * *nrhs; - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = nlf + nlp1 - 1; - for (jrow = nlf; jrow <= i__3; ++jrow) { - ++jreal; - ++jimag; - i__4 = jrow + jcol * bx_dim1; - i__5 = jreal; - i__6 = jimag; - z__1.r = rwork[i__5], z__1.i = rwork[i__6]; - bx[i__4].r = z__1.r, bx[i__4].i = z__1.i; -/* L240: */ - } -/* L250: */ - } - -/* - Since B and BX are complex, the following call to DGEMM is - performed in two steps (real and imaginary parts). 
- - CALL DGEMM( 'T', 'N', NRP1, NRHS, NRP1, ONE, VT( NRF, 1 ), LDU, - $ B( NRF, 1 ), LDB, ZERO, BX( NRF, 1 ), LDBX ) -*/ - - j = (nrp1 * *nrhs) << (1); - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = nrf + nrp1 - 1; - for (jrow = nrf; jrow <= i__3; ++jrow) { - ++j; - i__4 = jrow + jcol * b_dim1; - rwork[j] = b[i__4].r; -/* L260: */ - } -/* L270: */ - } - dgemm_("T", "N", &nrp1, nrhs, &nrp1, &c_b1015, &vt[nrf + vt_dim1], - ldu, &rwork[((nrp1 * *nrhs) << (1)) + 1], &nrp1, &c_b324, & - rwork[1], &nrp1); - j = (nrp1 * *nrhs) << (1); - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = nrf + nrp1 - 1; - for (jrow = nrf; jrow <= i__3; ++jrow) { - ++j; - rwork[j] = d_imag(&b[jrow + jcol * b_dim1]); -/* L280: */ - } -/* L290: */ - } - dgemm_("T", "N", &nrp1, nrhs, &nrp1, &c_b1015, &vt[nrf + vt_dim1], - ldu, &rwork[((nrp1 * *nrhs) << (1)) + 1], &nrp1, &c_b324, & - rwork[nrp1 * *nrhs + 1], &nrp1); - jreal = 0; - jimag = nrp1 * *nrhs; - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = nrf + nrp1 - 1; - for (jrow = nrf; jrow <= i__3; ++jrow) { - ++jreal; - ++jimag; - i__4 = jrow + jcol * bx_dim1; - i__5 = jreal; - i__6 = jimag; - z__1.r = rwork[i__5], z__1.i = rwork[i__6]; - bx[i__4].r = z__1.r, bx[i__4].i = z__1.i; -/* L300: */ - } -/* L310: */ - } - -/* L320: */ - } - -L330: - - return 0; - -/* End of ZLALSA */ - -} /* zlalsa_ */ - -/* Subroutine */ int zlalsd_(char *uplo, integer *smlsiz, integer *n, integer - *nrhs, doublereal *d__, doublereal *e, doublecomplex *b, integer *ldb, - doublereal *rcond, integer *rank, doublecomplex *work, doublereal * - rwork, integer *iwork, integer *info) -{ - /* System generated locals */ - integer b_dim1, b_offset, i__1, i__2, i__3, i__4, i__5, i__6; - doublereal d__1; - doublecomplex z__1; - - /* Builtin functions */ - double d_imag(doublecomplex *), log(doublereal), d_sign(doublereal *, - doublereal *); - - /* Local variables */ - static integer c__, i__, j, k; - static doublereal r__; - 
static integer s, u, z__; - static doublereal cs; - static integer bx; - static doublereal sn; - static integer st, vt, nm1, st1; - static doublereal eps; - static integer iwk; - static doublereal tol; - static integer difl, difr, jcol, irwb, perm, nsub, nlvl, sqre, bxst, jrow, - irwu, jimag; - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *); - static integer jreal, irwib, poles, sizei, irwrb, nsize; - extern /* Subroutine */ int zdrot_(integer *, doublecomplex *, integer *, - doublecomplex *, integer *, doublereal *, doublereal *), zcopy_( - integer *, doublecomplex *, integer *, doublecomplex *, integer *) - ; - static integer irwvt, icmpq1, icmpq2; - - extern /* Subroutine */ int dlasda_(integer *, integer *, integer *, - integer *, doublereal *, doublereal *, doublereal *, integer *, - doublereal *, integer *, doublereal *, doublereal *, doublereal *, - doublereal *, integer *, integer *, integer *, integer *, - doublereal *, doublereal *, doublereal *, doublereal *, integer *, - integer *), dlascl_(char *, integer *, integer *, doublereal *, - doublereal *, integer *, integer *, doublereal *, integer *, - integer *); - extern integer idamax_(integer *, doublereal *, integer *); - extern /* Subroutine */ int dlasdq_(char *, integer *, integer *, integer - *, integer *, integer *, doublereal *, doublereal *, doublereal *, - integer *, doublereal *, integer *, doublereal *, integer *, - doublereal *, integer *), dlaset_(char *, integer *, - integer *, doublereal *, doublereal *, doublereal *, integer *), dlartg_(doublereal *, doublereal *, doublereal *, - doublereal *, doublereal *), xerbla_(char *, integer *); - static integer givcol; - extern doublereal dlanst_(char *, integer *, doublereal *, doublereal *); - extern /* Subroutine */ int zlalsa_(integer *, integer *, integer *, - integer *, doublecomplex *, integer *, 
doublecomplex *, integer *, - doublereal *, integer *, doublereal *, integer *, doublereal *, - doublereal *, doublereal *, doublereal *, integer *, integer *, - integer *, integer *, doublereal *, doublereal *, doublereal *, - doublereal *, integer *, integer *), zlascl_(char *, integer *, - integer *, doublereal *, doublereal *, integer *, integer *, - doublecomplex *, integer *, integer *), dlasrt_(char *, - integer *, doublereal *, integer *), zlacpy_(char *, - integer *, integer *, doublecomplex *, integer *, doublecomplex *, - integer *), zlaset_(char *, integer *, integer *, - doublecomplex *, doublecomplex *, doublecomplex *, integer *); - static doublereal orgnrm; - static integer givnum, givptr, nrwork, irwwrk, smlszp; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1999 - - - Purpose - ======= - - ZLALSD uses the singular value decomposition of A to solve the least - squares problem of finding X to minimize the Euclidean norm of each - column of A*X-B, where A is N-by-N upper bidiagonal, and X and B - are N-by-NRHS. The solution X overwrites B. - - The singular values of A smaller than RCOND times the largest - singular value are treated as zero in solving the least squares - problem; in this case a minimum norm solution is returned. - The actual singular values are returned in D in ascending order. - - This code makes very mild assumptions about floating point - arithmetic. It will work on machines with a guard digit in - add/subtract, or on those binary machines without guard digits - which subtract like the Cray XMP, Cray YMP, Cray C 90, or Cray 2. - It could conceivably fail on hexadecimal or decimal machines - without guard digits, but we know of none. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - = 'U': D and E define an upper bidiagonal matrix. - = 'L': D and E define a lower bidiagonal matrix. 
- - SMLSIZ (input) INTEGER - The maximum size of the subproblems at the bottom of the - computation tree. - - N (input) INTEGER - The dimension of the bidiagonal matrix. N >= 0. - - NRHS (input) INTEGER - The number of columns of B. NRHS must be at least 1. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry D contains the main diagonal of the bidiagonal - matrix. On exit, if INFO = 0, D contains its singular values. - - E (input) DOUBLE PRECISION array, dimension (N-1) - Contains the super-diagonal entries of the bidiagonal matrix. - On exit, E has been destroyed. - - B (input/output) COMPLEX*16 array, dimension (LDB,NRHS) - On input, B contains the right hand sides of the least - squares problem. On output, B contains the solution X. - - LDB (input) INTEGER - The leading dimension of B in the calling subprogram. - LDB must be at least max(1,N). - - RCOND (input) DOUBLE PRECISION - The singular values of A less than or equal to RCOND times - the largest singular value are treated as zero in solving - the least squares problem. If RCOND is negative, - machine precision is used instead. - For example, if diag(S)*X=B were the least squares problem, - where diag(S) is a diagonal matrix of singular values, the - solution would be X(i) = B(i) / S(i) if S(i) is greater than - RCOND*max(S), and X(i) = 0 if S(i) is less than or equal to - RCOND*max(S). - - RANK (output) INTEGER - The number of singular values of A greater than RCOND times - the largest singular value. - - WORK (workspace) COMPLEX*16 array, dimension at least - (N * NRHS). - - RWORK (workspace) DOUBLE PRECISION array, dimension at least - (9*N + 2*N*SMLSIZ + 8*N*NLVL + 3*SMLSIZ*NRHS + (SMLSIZ+1)**2), - where - NLVL = MAX( 0, INT( LOG_2( MIN( M,N )/(SMLSIZ+1) ) ) + 1 ) - - IWORK (workspace) INTEGER array, dimension at least - (3*N*NLVL + 11*N). - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. 
-           > 0:  The algorithm failed to compute a singular value while
-                 working on the submatrix lying in rows and columns
-                 INFO/(N+1) through MOD(INFO,N+1).
-
-    Further Details
-    ===============
-
-    Based on contributions by
-       Ming Gu and Ren-Cang Li, Computer Science Division, University of
-         California at Berkeley, USA
-       Osni Marques, LBNL/NERSC, USA
-
-    =====================================================================
-
-
-       Test the input parameters.
-*/
-
-    /* Parameter adjustments */
-    --d__;
-    --e;
-    b_dim1 = *ldb;
-    b_offset = 1 + b_dim1 * 1;
-    b -= b_offset;
-    --work;
-    --rwork;
-    --iwork;
-
-    /* Function Body */
-    *info = 0;
-
-    if (*n < 0) {
-	*info = -3;
-    } else if (*nrhs < 1) {
-	*info = -4;
-    } else if (*ldb < 1 || *ldb < *n) {
-	*info = -8;
-    }
-    if (*info != 0) {
-	i__1 = -(*info);
-	xerbla_("ZLALSD", &i__1);
-	return 0;
-    }
-
-    eps = EPSILON;
-
-/* Set up the tolerance. */
-
-    if (*rcond <= 0. || *rcond >= 1.) {
-	*rcond = eps;
-    }
-
-    *rank = 0;
-
-/* Quick return if possible. */
-
-    if (*n == 0) {
-	return 0;
-    } else if (*n == 1) {
-	if (d__[1] == 0.) {
-	    zlaset_("A", &c__1, nrhs, &c_b59, &c_b59, &b[b_offset], ldb);
-	} else {
-	    *rank = 1;
-	    zlascl_("G", &c__0, &c__0, &d__[1], &c_b1015, &c__1, nrhs, &b[
-		    b_offset], ldb, info);
-	    d__[1] = abs(d__[1]);
-	}
-	return 0;
-    }
-
-/* Rotate the matrix if it is lower bidiagonal. 
*/ - - if (*(unsigned char *)uplo == 'L') { - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - dlartg_(&d__[i__], &e[i__], &cs, &sn, &r__); - d__[i__] = r__; - e[i__] = sn * d__[i__ + 1]; - d__[i__ + 1] = cs * d__[i__ + 1]; - if (*nrhs == 1) { - zdrot_(&c__1, &b[i__ + b_dim1], &c__1, &b[i__ + 1 + b_dim1], & - c__1, &cs, &sn); - } else { - rwork[((i__) << (1)) - 1] = cs; - rwork[i__ * 2] = sn; - } -/* L10: */ - } - if (*nrhs > 1) { - i__1 = *nrhs; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = *n - 1; - for (j = 1; j <= i__2; ++j) { - cs = rwork[((j) << (1)) - 1]; - sn = rwork[j * 2]; - zdrot_(&c__1, &b[j + i__ * b_dim1], &c__1, &b[j + 1 + i__ - * b_dim1], &c__1, &cs, &sn); -/* L20: */ - } -/* L30: */ - } - } - } - -/* Scale. */ - - nm1 = *n - 1; - orgnrm = dlanst_("M", n, &d__[1], &e[1]); - if (orgnrm == 0.) { - zlaset_("A", n, nrhs, &c_b59, &c_b59, &b[b_offset], ldb); - return 0; - } - - dlascl_("G", &c__0, &c__0, &orgnrm, &c_b1015, n, &c__1, &d__[1], n, info); - dlascl_("G", &c__0, &c__0, &orgnrm, &c_b1015, &nm1, &c__1, &e[1], &nm1, - info); - -/* - If N is smaller than the minimum divide size SMLSIZ, then solve - the problem with another solver. -*/ - - if (*n <= *smlsiz) { - irwu = 1; - irwvt = irwu + *n * *n; - irwwrk = irwvt + *n * *n; - irwrb = irwwrk; - irwib = irwrb + *n * *nrhs; - irwb = irwib + *n * *nrhs; - dlaset_("A", n, n, &c_b324, &c_b1015, &rwork[irwu], n); - dlaset_("A", n, n, &c_b324, &c_b1015, &rwork[irwvt], n); - dlasdq_("U", &c__0, n, n, n, &c__0, &d__[1], &e[1], &rwork[irwvt], n, - &rwork[irwu], n, &rwork[irwwrk], &c__1, &rwork[irwwrk], info); - if (*info != 0) { - return 0; - } - -/* - In the real version, B is passed to DLASDQ and multiplied - internally by Q'. Here B is complex and that product is - computed below in two steps (real and imaginary parts). 
-*/ - - j = irwb - 1; - i__1 = *nrhs; - for (jcol = 1; jcol <= i__1; ++jcol) { - i__2 = *n; - for (jrow = 1; jrow <= i__2; ++jrow) { - ++j; - i__3 = jrow + jcol * b_dim1; - rwork[j] = b[i__3].r; -/* L40: */ - } -/* L50: */ - } - dgemm_("T", "N", n, nrhs, n, &c_b1015, &rwork[irwu], n, &rwork[irwb], - n, &c_b324, &rwork[irwrb], n); - j = irwb - 1; - i__1 = *nrhs; - for (jcol = 1; jcol <= i__1; ++jcol) { - i__2 = *n; - for (jrow = 1; jrow <= i__2; ++jrow) { - ++j; - rwork[j] = d_imag(&b[jrow + jcol * b_dim1]); -/* L60: */ - } -/* L70: */ - } - dgemm_("T", "N", n, nrhs, n, &c_b1015, &rwork[irwu], n, &rwork[irwb], - n, &c_b324, &rwork[irwib], n); - jreal = irwrb - 1; - jimag = irwib - 1; - i__1 = *nrhs; - for (jcol = 1; jcol <= i__1; ++jcol) { - i__2 = *n; - for (jrow = 1; jrow <= i__2; ++jrow) { - ++jreal; - ++jimag; - i__3 = jrow + jcol * b_dim1; - i__4 = jreal; - i__5 = jimag; - z__1.r = rwork[i__4], z__1.i = rwork[i__5]; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L80: */ - } -/* L90: */ - } - - tol = *rcond * (d__1 = d__[idamax_(n, &d__[1], &c__1)], abs(d__1)); - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - if (d__[i__] <= tol) { - zlaset_("A", &c__1, nrhs, &c_b59, &c_b59, &b[i__ + b_dim1], - ldb); - } else { - zlascl_("G", &c__0, &c__0, &d__[i__], &c_b1015, &c__1, nrhs, & - b[i__ + b_dim1], ldb, info); - ++(*rank); - } -/* L100: */ - } - -/* - Since B is complex, the following call to DGEMM is performed - in two steps (real and imaginary parts). That is for V * B - (in the real version of the code V' is stored in WORK). 
- - CALL DGEMM( 'T', 'N', N, NRHS, N, ONE, WORK, N, B, LDB, ZERO, - $ WORK( NWORK ), N ) -*/ - - j = irwb - 1; - i__1 = *nrhs; - for (jcol = 1; jcol <= i__1; ++jcol) { - i__2 = *n; - for (jrow = 1; jrow <= i__2; ++jrow) { - ++j; - i__3 = jrow + jcol * b_dim1; - rwork[j] = b[i__3].r; -/* L110: */ - } -/* L120: */ - } - dgemm_("T", "N", n, nrhs, n, &c_b1015, &rwork[irwvt], n, &rwork[irwb], - n, &c_b324, &rwork[irwrb], n); - j = irwb - 1; - i__1 = *nrhs; - for (jcol = 1; jcol <= i__1; ++jcol) { - i__2 = *n; - for (jrow = 1; jrow <= i__2; ++jrow) { - ++j; - rwork[j] = d_imag(&b[jrow + jcol * b_dim1]); -/* L130: */ - } -/* L140: */ - } - dgemm_("T", "N", n, nrhs, n, &c_b1015, &rwork[irwvt], n, &rwork[irwb], - n, &c_b324, &rwork[irwib], n); - jreal = irwrb - 1; - jimag = irwib - 1; - i__1 = *nrhs; - for (jcol = 1; jcol <= i__1; ++jcol) { - i__2 = *n; - for (jrow = 1; jrow <= i__2; ++jrow) { - ++jreal; - ++jimag; - i__3 = jrow + jcol * b_dim1; - i__4 = jreal; - i__5 = jimag; - z__1.r = rwork[i__4], z__1.i = rwork[i__5]; - b[i__3].r = z__1.r, b[i__3].i = z__1.i; -/* L150: */ - } -/* L160: */ - } - -/* Unscale. */ - - dlascl_("G", &c__0, &c__0, &c_b1015, &orgnrm, n, &c__1, &d__[1], n, - info); - dlasrt_("D", n, &d__[1], info); - zlascl_("G", &c__0, &c__0, &orgnrm, &c_b1015, n, nrhs, &b[b_offset], - ldb, info); - - return 0; - } - -/* Book-keeping and setting up some constants. 
*/ - - nlvl = (integer) (log((doublereal) (*n) / (doublereal) (*smlsiz + 1)) / - log(2.)) + 1; - - smlszp = *smlsiz + 1; - - u = 1; - vt = *smlsiz * *n + 1; - difl = vt + smlszp * *n; - difr = difl + nlvl * *n; - z__ = difr + ((nlvl * *n) << (1)); - c__ = z__ + nlvl * *n; - s = c__ + *n; - poles = s + *n; - givnum = poles + ((nlvl) << (1)) * *n; - nrwork = givnum + ((nlvl) << (1)) * *n; - bx = 1; - - irwrb = nrwork; - irwib = irwrb + *smlsiz * *nrhs; - irwb = irwib + *smlsiz * *nrhs; - - sizei = *n + 1; - k = sizei + *n; - givptr = k + *n; - perm = givptr + *n; - givcol = perm + nlvl * *n; - iwk = givcol + ((nlvl * *n) << (1)); - - st = 1; - sqre = 0; - icmpq1 = 1; - icmpq2 = 0; - nsub = 0; - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - if ((d__1 = d__[i__], abs(d__1)) < eps) { - d__[i__] = d_sign(&eps, &d__[i__]); - } -/* L170: */ - } - - i__1 = nm1; - for (i__ = 1; i__ <= i__1; ++i__) { - if ((d__1 = e[i__], abs(d__1)) < eps || i__ == nm1) { - ++nsub; - iwork[nsub] = st; - -/* - Subproblem found. First determine its size and then - apply divide and conquer on it. -*/ - - if (i__ < nm1) { - -/* A subproblem with E(I) small for I < NM1. */ - - nsize = i__ - st + 1; - iwork[sizei + nsub - 1] = nsize; - } else if ((d__1 = e[i__], abs(d__1)) >= eps) { - -/* A subproblem with E(NM1) not too small but I = NM1. */ - - nsize = *n - st + 1; - iwork[sizei + nsub - 1] = nsize; - } else { - -/* - A subproblem with E(NM1) small. This implies an - 1-by-1 subproblem at D(N), which is not solved - explicitly. -*/ - - nsize = i__ - st + 1; - iwork[sizei + nsub - 1] = nsize; - ++nsub; - iwork[nsub] = *n; - iwork[sizei + nsub - 1] = 1; - zcopy_(nrhs, &b[*n + b_dim1], ldb, &work[bx + nm1], n); - } - st1 = st - 1; - if (nsize == 1) { - -/* - This is a 1-by-1 subproblem and is not solved - explicitly. -*/ - - zcopy_(nrhs, &b[st + b_dim1], ldb, &work[bx + st1], n); - } else if (nsize <= *smlsiz) { - -/* This is a small subproblem and is solved by DLASDQ. 
*/ - - dlaset_("A", &nsize, &nsize, &c_b324, &c_b1015, &rwork[vt + - st1], n); - dlaset_("A", &nsize, &nsize, &c_b324, &c_b1015, &rwork[u + - st1], n); - dlasdq_("U", &c__0, &nsize, &nsize, &nsize, &c__0, &d__[st], & - e[st], &rwork[vt + st1], n, &rwork[u + st1], n, & - rwork[nrwork], &c__1, &rwork[nrwork], info) - ; - if (*info != 0) { - return 0; - } - -/* - In the real version, B is passed to DLASDQ and multiplied - internally by Q'. Here B is complex and that product is - computed below in two steps (real and imaginary parts). -*/ - - j = irwb - 1; - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = st + nsize - 1; - for (jrow = st; jrow <= i__3; ++jrow) { - ++j; - i__4 = jrow + jcol * b_dim1; - rwork[j] = b[i__4].r; -/* L180: */ - } -/* L190: */ - } - dgemm_("T", "N", &nsize, nrhs, &nsize, &c_b1015, &rwork[u + - st1], n, &rwork[irwb], &nsize, &c_b324, &rwork[irwrb], - &nsize); - j = irwb - 1; - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = st + nsize - 1; - for (jrow = st; jrow <= i__3; ++jrow) { - ++j; - rwork[j] = d_imag(&b[jrow + jcol * b_dim1]); -/* L200: */ - } -/* L210: */ - } - dgemm_("T", "N", &nsize, nrhs, &nsize, &c_b1015, &rwork[u + - st1], n, &rwork[irwb], &nsize, &c_b324, &rwork[irwib], - &nsize); - jreal = irwrb - 1; - jimag = irwib - 1; - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = st + nsize - 1; - for (jrow = st; jrow <= i__3; ++jrow) { - ++jreal; - ++jimag; - i__4 = jrow + jcol * b_dim1; - i__5 = jreal; - i__6 = jimag; - z__1.r = rwork[i__5], z__1.i = rwork[i__6]; - b[i__4].r = z__1.r, b[i__4].i = z__1.i; -/* L220: */ - } -/* L230: */ - } - - zlacpy_("A", &nsize, nrhs, &b[st + b_dim1], ldb, &work[bx + - st1], n); - } else { - -/* A large problem. Solve it using divide and conquer. 
*/ - - dlasda_(&icmpq1, smlsiz, &nsize, &sqre, &d__[st], &e[st], & - rwork[u + st1], n, &rwork[vt + st1], &iwork[k + st1], - &rwork[difl + st1], &rwork[difr + st1], &rwork[z__ + - st1], &rwork[poles + st1], &iwork[givptr + st1], & - iwork[givcol + st1], n, &iwork[perm + st1], &rwork[ - givnum + st1], &rwork[c__ + st1], &rwork[s + st1], & - rwork[nrwork], &iwork[iwk], info); - if (*info != 0) { - return 0; - } - bxst = bx + st1; - zlalsa_(&icmpq2, smlsiz, &nsize, nrhs, &b[st + b_dim1], ldb, & - work[bxst], n, &rwork[u + st1], n, &rwork[vt + st1], & - iwork[k + st1], &rwork[difl + st1], &rwork[difr + st1] - , &rwork[z__ + st1], &rwork[poles + st1], &iwork[ - givptr + st1], &iwork[givcol + st1], n, &iwork[perm + - st1], &rwork[givnum + st1], &rwork[c__ + st1], &rwork[ - s + st1], &rwork[nrwork], &iwork[iwk], info); - if (*info != 0) { - return 0; - } - } - st = i__ + 1; - } -/* L240: */ - } - -/* Apply the singular values and treat the tiny ones as zero. */ - - tol = *rcond * (d__1 = d__[idamax_(n, &d__[1], &c__1)], abs(d__1)); - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* - Some of the elements in D can be negative because 1-by-1 - subproblems were not solved explicitly. -*/ - - if ((d__1 = d__[i__], abs(d__1)) <= tol) { - zlaset_("A", &c__1, nrhs, &c_b59, &c_b59, &work[bx + i__ - 1], n); - } else { - ++(*rank); - zlascl_("G", &c__0, &c__0, &d__[i__], &c_b1015, &c__1, nrhs, & - work[bx + i__ - 1], n, info); - } - d__[i__] = (d__1 = d__[i__], abs(d__1)); -/* L250: */ - } - -/* Now apply back the right singular vectors. */ - - icmpq2 = 1; - i__1 = nsub; - for (i__ = 1; i__ <= i__1; ++i__) { - st = iwork[i__]; - st1 = st - 1; - nsize = iwork[sizei + i__ - 1]; - bxst = bx + st1; - if (nsize == 1) { - zcopy_(nrhs, &work[bxst], n, &b[st + b_dim1], ldb); - } else if (nsize <= *smlsiz) { - -/* - Since B and BX are complex, the following call to DGEMM - is performed in two steps (real and imaginary parts). 
- - CALL DGEMM( 'T', 'N', NSIZE, NRHS, NSIZE, ONE, - $ RWORK( VT+ST1 ), N, RWORK( BXST ), N, ZERO, - $ B( ST, 1 ), LDB ) -*/ - - j = bxst - *n - 1; - jreal = irwb - 1; - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - j += *n; - i__3 = nsize; - for (jrow = 1; jrow <= i__3; ++jrow) { - ++jreal; - i__4 = j + jrow; - rwork[jreal] = work[i__4].r; -/* L260: */ - } -/* L270: */ - } - dgemm_("T", "N", &nsize, nrhs, &nsize, &c_b1015, &rwork[vt + st1], - n, &rwork[irwb], &nsize, &c_b324, &rwork[irwrb], &nsize); - j = bxst - *n - 1; - jimag = irwb - 1; - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - j += *n; - i__3 = nsize; - for (jrow = 1; jrow <= i__3; ++jrow) { - ++jimag; - rwork[jimag] = d_imag(&work[j + jrow]); -/* L280: */ - } -/* L290: */ - } - dgemm_("T", "N", &nsize, nrhs, &nsize, &c_b1015, &rwork[vt + st1], - n, &rwork[irwb], &nsize, &c_b324, &rwork[irwib], &nsize); - jreal = irwrb - 1; - jimag = irwib - 1; - i__2 = *nrhs; - for (jcol = 1; jcol <= i__2; ++jcol) { - i__3 = st + nsize - 1; - for (jrow = st; jrow <= i__3; ++jrow) { - ++jreal; - ++jimag; - i__4 = jrow + jcol * b_dim1; - i__5 = jreal; - i__6 = jimag; - z__1.r = rwork[i__5], z__1.i = rwork[i__6]; - b[i__4].r = z__1.r, b[i__4].i = z__1.i; -/* L300: */ - } -/* L310: */ - } - } else { - zlalsa_(&icmpq2, smlsiz, &nsize, nrhs, &work[bxst], n, &b[st + - b_dim1], ldb, &rwork[u + st1], n, &rwork[vt + st1], & - iwork[k + st1], &rwork[difl + st1], &rwork[difr + st1], & - rwork[z__ + st1], &rwork[poles + st1], &iwork[givptr + - st1], &iwork[givcol + st1], n, &iwork[perm + st1], &rwork[ - givnum + st1], &rwork[c__ + st1], &rwork[s + st1], &rwork[ - nrwork], &iwork[iwk], info); - if (*info != 0) { - return 0; - } - } -/* L320: */ - } - -/* Unscale and sort the singular values. 
*/ - - dlascl_("G", &c__0, &c__0, &c_b1015, &orgnrm, n, &c__1, &d__[1], n, info); - dlasrt_("D", n, &d__[1], info); - zlascl_("G", &c__0, &c__0, &orgnrm, &c_b1015, n, nrhs, &b[b_offset], ldb, - info); - - return 0; - -/* End of ZLALSD */ - -} /* zlalsd_ */ - -doublereal zlange_(char *norm, integer *m, integer *n, doublecomplex *a, - integer *lda, doublereal *work) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2; - doublereal ret_val, d__1, d__2; - - /* Builtin functions */ - double z_abs(doublecomplex *), sqrt(doublereal); - - /* Local variables */ - static integer i__, j; - static doublereal sum, scale; - extern logical lsame_(char *, char *); - static doublereal value; - extern /* Subroutine */ int zlassq_(integer *, doublecomplex *, integer *, - doublereal *, doublereal *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - ZLANGE returns the value of the one norm, or the Frobenius norm, or - the infinity norm, or the element of largest absolute value of a - complex matrix A. - - Description - =========== - - ZLANGE returns the value - - ZLANGE = ( max(abs(A(i,j))), NORM = 'M' or 'm' - ( - ( norm1(A), NORM = '1', 'O' or 'o' - ( - ( normI(A), NORM = 'I' or 'i' - ( - ( normF(A), NORM = 'F', 'f', 'E' or 'e' - - where norm1 denotes the one norm of a matrix (maximum column sum), - normI denotes the infinity norm of a matrix (maximum row sum) and - normF denotes the Frobenius norm of a matrix (square root of sum of - squares). Note that max(abs(A(i,j))) is not a matrix norm. - - Arguments - ========= - - NORM (input) CHARACTER*1 - Specifies the value to be returned in ZLANGE as described - above. - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. When M = 0, - ZLANGE is set to zero. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. 
When N = 0, - ZLANGE is set to zero. - - A (input) COMPLEX*16 array, dimension (LDA,N) - The m by n matrix A. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(M,1). - - WORK (workspace) DOUBLE PRECISION array, dimension (LWORK), - where LWORK >= M when NORM = 'I'; otherwise, WORK is not - referenced. - - ===================================================================== -*/ - - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --work; - - /* Function Body */ - if (min(*m,*n) == 0) { - value = 0.; - } else if (lsame_(norm, "M")) { - -/* Find max(abs(A(i,j))). */ - - value = 0.; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { -/* Computing MAX */ - d__1 = value, d__2 = z_abs(&a[i__ + j * a_dim1]); - value = max(d__1,d__2); -/* L10: */ - } -/* L20: */ - } - } else if (lsame_(norm, "O") || *(unsigned char *) - norm == '1') { - -/* Find norm1(A). */ - - value = 0.; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = 0.; - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - sum += z_abs(&a[i__ + j * a_dim1]); -/* L30: */ - } - value = max(value,sum); -/* L40: */ - } - } else if (lsame_(norm, "I")) { - -/* Find normI(A). */ - - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - work[i__] = 0.; -/* L50: */ - } - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - work[i__] += z_abs(&a[i__ + j * a_dim1]); -/* L60: */ - } -/* L70: */ - } - value = 0.; - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { -/* Computing MAX */ - d__1 = value, d__2 = work[i__]; - value = max(d__1,d__2); -/* L80: */ - } - } else if (lsame_(norm, "F") || lsame_(norm, "E")) { - -/* Find normF(A). 
*/ - - scale = 0.; - sum = 1.; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - zlassq_(m, &a[j * a_dim1 + 1], &c__1, &scale, &sum); -/* L90: */ - } - value = scale * sqrt(sum); - } - - ret_val = value; - return ret_val; - -/* End of ZLANGE */ - -} /* zlange_ */ - -doublereal zlanhe_(char *norm, char *uplo, integer *n, doublecomplex *a, - integer *lda, doublereal *work) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2; - doublereal ret_val, d__1, d__2, d__3; - - /* Builtin functions */ - double z_abs(doublecomplex *), sqrt(doublereal); - - /* Local variables */ - static integer i__, j; - static doublereal sum, absa, scale; - extern logical lsame_(char *, char *); - static doublereal value; - extern /* Subroutine */ int zlassq_(integer *, doublecomplex *, integer *, - doublereal *, doublereal *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - ZLANHE returns the value of the one norm, or the Frobenius norm, or - the infinity norm, or the element of largest absolute value of a - complex hermitian matrix A. - - Description - =========== - - ZLANHE returns the value - - ZLANHE = ( max(abs(A(i,j))), NORM = 'M' or 'm' - ( - ( norm1(A), NORM = '1', 'O' or 'o' - ( - ( normI(A), NORM = 'I' or 'i' - ( - ( normF(A), NORM = 'F', 'f', 'E' or 'e' - - where norm1 denotes the one norm of a matrix (maximum column sum), - normI denotes the infinity norm of a matrix (maximum row sum) and - normF denotes the Frobenius norm of a matrix (square root of sum of - squares). Note that max(abs(A(i,j))) is not a matrix norm. - - Arguments - ========= - - NORM (input) CHARACTER*1 - Specifies the value to be returned in ZLANHE as described - above. - - UPLO (input) CHARACTER*1 - Specifies whether the upper or lower triangular part of the - hermitian matrix A is to be referenced. 
- = 'U': Upper triangular part of A is referenced - = 'L': Lower triangular part of A is referenced - - N (input) INTEGER - The order of the matrix A. N >= 0. When N = 0, ZLANHE is - set to zero. - - A (input) COMPLEX*16 array, dimension (LDA,N) - The hermitian matrix A. If UPLO = 'U', the leading n by n - upper triangular part of A contains the upper triangular part - of the matrix A, and the strictly lower triangular part of A - is not referenced. If UPLO = 'L', the leading n by n lower - triangular part of A contains the lower triangular part of - the matrix A, and the strictly upper triangular part of A is - not referenced. Note that the imaginary parts of the diagonal - elements need not be set and are assumed to be zero. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(N,1). - - WORK (workspace) DOUBLE PRECISION array, dimension (LWORK), - where LWORK >= N when NORM = 'I' or '1' or 'O'; otherwise, - WORK is not referenced. - - ===================================================================== -*/ - - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --work; - - /* Function Body */ - if (*n == 0) { - value = 0.; - } else if (lsame_(norm, "M")) { - -/* Find max(abs(A(i,j))). 
*/ - - value = 0.; - if (lsame_(uplo, "U")) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { -/* Computing MAX */ - d__1 = value, d__2 = z_abs(&a[i__ + j * a_dim1]); - value = max(d__1,d__2); -/* L10: */ - } -/* Computing MAX */ - i__2 = j + j * a_dim1; - d__2 = value, d__3 = (d__1 = a[i__2].r, abs(d__1)); - value = max(d__2,d__3); -/* L20: */ - } - } else { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MAX */ - i__2 = j + j * a_dim1; - d__2 = value, d__3 = (d__1 = a[i__2].r, abs(d__1)); - value = max(d__2,d__3); - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { -/* Computing MAX */ - d__1 = value, d__2 = z_abs(&a[i__ + j * a_dim1]); - value = max(d__1,d__2); -/* L30: */ - } -/* L40: */ - } - } - } else if (lsame_(norm, "I") || lsame_(norm, "O") || *(unsigned char *)norm == '1') { - -/* Find normI(A) ( = norm1(A), since A is hermitian). */ - - value = 0.; - if (lsame_(uplo, "U")) { - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = 0.; - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - absa = z_abs(&a[i__ + j * a_dim1]); - sum += absa; - work[i__] += absa; -/* L50: */ - } - i__2 = j + j * a_dim1; - work[j] = sum + (d__1 = a[i__2].r, abs(d__1)); -/* L60: */ - } - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { -/* Computing MAX */ - d__1 = value, d__2 = work[i__]; - value = max(d__1,d__2); -/* L70: */ - } - } else { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - work[i__] = 0.; -/* L80: */ - } - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j + j * a_dim1; - sum = work[j] + (d__1 = a[i__2].r, abs(d__1)); - i__2 = *n; - for (i__ = j + 1; i__ <= i__2; ++i__) { - absa = z_abs(&a[i__ + j * a_dim1]); - sum += absa; - work[i__] += absa; -/* L90: */ - } - value = max(value,sum); -/* L100: */ - } - } - } else if (lsame_(norm, "F") || lsame_(norm, "E")) { - -/* Find normF(A). 
*/ - - scale = 0.; - sum = 1.; - if (lsame_(uplo, "U")) { - i__1 = *n; - for (j = 2; j <= i__1; ++j) { - i__2 = j - 1; - zlassq_(&i__2, &a[j * a_dim1 + 1], &c__1, &scale, &sum); -/* L110: */ - } - } else { - i__1 = *n - 1; - for (j = 1; j <= i__1; ++j) { - i__2 = *n - j; - zlassq_(&i__2, &a[j + 1 + j * a_dim1], &c__1, &scale, &sum); -/* L120: */ - } - } - sum *= 2; - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__ + i__ * a_dim1; - if (a[i__2].r != 0.) { - i__2 = i__ + i__ * a_dim1; - absa = (d__1 = a[i__2].r, abs(d__1)); - if (scale < absa) { -/* Computing 2nd power */ - d__1 = scale / absa; - sum = sum * (d__1 * d__1) + 1.; - scale = absa; - } else { -/* Computing 2nd power */ - d__1 = absa / scale; - sum += d__1 * d__1; - } - } -/* L130: */ - } - value = scale * sqrt(sum); - } - - ret_val = value; - return ret_val; - -/* End of ZLANHE */ - -} /* zlanhe_ */ - -doublereal zlanhs_(char *norm, integer *n, doublecomplex *a, integer *lda, - doublereal *work) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - doublereal ret_val, d__1, d__2; - - /* Builtin functions */ - double z_abs(doublecomplex *), sqrt(doublereal); - - /* Local variables */ - static integer i__, j; - static doublereal sum, scale; - extern logical lsame_(char *, char *); - static doublereal value; - extern /* Subroutine */ int zlassq_(integer *, doublecomplex *, integer *, - doublereal *, doublereal *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - ZLANHS returns the value of the one norm, or the Frobenius norm, or - the infinity norm, or the element of largest absolute value of a - Hessenberg matrix A. 
- - Description - =========== - - ZLANHS returns the value - - ZLANHS = ( max(abs(A(i,j))), NORM = 'M' or 'm' - ( - ( norm1(A), NORM = '1', 'O' or 'o' - ( - ( normI(A), NORM = 'I' or 'i' - ( - ( normF(A), NORM = 'F', 'f', 'E' or 'e' - - where norm1 denotes the one norm of a matrix (maximum column sum), - normI denotes the infinity norm of a matrix (maximum row sum) and - normF denotes the Frobenius norm of a matrix (square root of sum of - squares). Note that max(abs(A(i,j))) is not a matrix norm. - - Arguments - ========= - - NORM (input) CHARACTER*1 - Specifies the value to be returned in ZLANHS as described - above. - - N (input) INTEGER - The order of the matrix A. N >= 0. When N = 0, ZLANHS is - set to zero. - - A (input) COMPLEX*16 array, dimension (LDA,N) - The n by n upper Hessenberg matrix A; the part of A below the - first sub-diagonal is not referenced. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(N,1). - - WORK (workspace) DOUBLE PRECISION array, dimension (LWORK), - where LWORK >= N when NORM = 'I'; otherwise, WORK is not - referenced. - - ===================================================================== -*/ - - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --work; - - /* Function Body */ - if (*n == 0) { - value = 0.; - } else if (lsame_(norm, "M")) { - -/* Find max(abs(A(i,j))). */ - - value = 0.; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MIN */ - i__3 = *n, i__4 = j + 1; - i__2 = min(i__3,i__4); - for (i__ = 1; i__ <= i__2; ++i__) { -/* Computing MAX */ - d__1 = value, d__2 = z_abs(&a[i__ + j * a_dim1]); - value = max(d__1,d__2); -/* L10: */ - } -/* L20: */ - } - } else if (lsame_(norm, "O") || *(unsigned char *) - norm == '1') { - -/* Find norm1(A). 
*/ - - value = 0.; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - sum = 0.; -/* Computing MIN */ - i__3 = *n, i__4 = j + 1; - i__2 = min(i__3,i__4); - for (i__ = 1; i__ <= i__2; ++i__) { - sum += z_abs(&a[i__ + j * a_dim1]); -/* L30: */ - } - value = max(value,sum); -/* L40: */ - } - } else if (lsame_(norm, "I")) { - -/* Find normI(A). */ - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - work[i__] = 0.; -/* L50: */ - } - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MIN */ - i__3 = *n, i__4 = j + 1; - i__2 = min(i__3,i__4); - for (i__ = 1; i__ <= i__2; ++i__) { - work[i__] += z_abs(&a[i__ + j * a_dim1]); -/* L60: */ - } -/* L70: */ - } - value = 0.; - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { -/* Computing MAX */ - d__1 = value, d__2 = work[i__]; - value = max(d__1,d__2); -/* L80: */ - } - } else if (lsame_(norm, "F") || lsame_(norm, "E")) { - -/* Find normF(A). */ - - scale = 0.; - sum = 1.; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MIN */ - i__3 = *n, i__4 = j + 1; - i__2 = min(i__3,i__4); - zlassq_(&i__2, &a[j * a_dim1 + 1], &c__1, &scale, &sum); -/* L90: */ - } - value = scale * sqrt(sum); - } - - ret_val = value; - return ret_val; - -/* End of ZLANHS */ - -} /* zlanhs_ */ - -/* Subroutine */ int zlarcm_(integer *m, integer *n, doublereal *a, integer * - lda, doublecomplex *b, integer *ldb, doublecomplex *c__, integer *ldc, - doublereal *rwork) -{ - /* System generated locals */ - integer a_dim1, a_offset, b_dim1, b_offset, c_dim1, c_offset, i__1, i__2, - i__3, i__4, i__5; - doublereal d__1; - doublecomplex z__1; - - /* Builtin functions */ - double d_imag(doublecomplex *); - - /* Local variables */ - static integer i__, j, l; - extern /* Subroutine */ int dgemm_(char *, char *, integer *, integer *, - integer *, doublereal *, doublereal *, integer *, doublereal *, - integer *, doublereal *, doublereal *, integer *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. 
 of California Berkeley, NAG Ltd., -       Courant Institute, Argonne National Lab, and Rice University -       June 30, 1999 - - -    Purpose -    ======= - -    ZLARCM performs a very simple matrix-matrix multiplication: -              C := A * B, -    where A is M by M and real; B is M by N and complex; -    C is M by N and complex. - -    Arguments -    ========= - -    M       (input) INTEGER -            The number of rows of the matrix A and of the matrix C. -            M >= 0. - -    N       (input) INTEGER -            The number of columns and rows of the matrix B and -            the number of columns of the matrix C. -            N >= 0. - -    A       (input) DOUBLE PRECISION array, dimension (LDA, M) -            A contains the M by M matrix A. - -    LDA     (input) INTEGER -            The leading dimension of the array A. LDA >=max(1,M). - -    B       (input) COMPLEX*16 array, dimension (LDB, N) -            B contains the M by N matrix B. - -    LDB     (input) INTEGER -            The leading dimension of the array B. LDB >=max(1,M). - -    C       (output) COMPLEX*16 array, dimension (LDC, N) -            C contains the M by N matrix C. - -    LDC     (input) INTEGER -            The leading dimension of the array C. LDC >=max(1,M). - -    RWORK   (workspace) DOUBLE PRECISION array, dimension (2*M*N) - -    ===================================================================== - - -       Quick return if possible. 
-*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - b_dim1 = *ldb; - b_offset = 1 + b_dim1 * 1; - b -= b_offset; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --rwork; - - /* Function Body */ - if (*m == 0 || *n == 0) { - return 0; - } - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * b_dim1; - rwork[(j - 1) * *m + i__] = b[i__3].r; -/* L10: */ - } -/* L20: */ - } - - l = *m * *n + 1; - dgemm_("N", "N", m, n, m, &c_b1015, &a[a_offset], lda, &rwork[1], m, & - c_b324, &rwork[l], m); - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = l + (j - 1) * *m + i__ - 1; - c__[i__3].r = rwork[i__4], c__[i__3].i = 0.; -/* L30: */ - } -/* L40: */ - } - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - rwork[(j - 1) * *m + i__] = d_imag(&b[i__ + j * b_dim1]); -/* L50: */ - } -/* L60: */ - } - dgemm_("N", "N", m, n, m, &c_b1015, &a[a_offset], lda, &rwork[1], m, & - c_b324, &rwork[l], m); - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - d__1 = c__[i__4].r; - i__5 = l + (j - 1) * *m + i__ - 1; - z__1.r = d__1, z__1.i = rwork[i__5]; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L70: */ - } -/* L80: */ - } - - return 0; - -/* End of ZLARCM */ - -} /* zlarcm_ */ - -/* Subroutine */ int zlarf_(char *side, integer *m, integer *n, doublecomplex - *v, integer *incv, doublecomplex *tau, doublecomplex *c__, integer * - ldc, doublecomplex *work) -{ - /* System generated locals */ - integer c_dim1, c_offset; - doublecomplex z__1; - - /* Local variables */ - extern logical lsame_(char *, char *); - extern /* Subroutine */ int zgerc_(integer *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *, - 
doublecomplex *, integer *), zgemv_(char *, integer *, integer *, -	    doublecomplex *, doublecomplex *, integer *, doublecomplex *, -	    integer *, doublecomplex *, doublecomplex *, integer *); - - -/* -    -- LAPACK auxiliary routine (version 3.0) -- -       Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., -       Courant Institute, Argonne National Lab, and Rice University -       September 30, 1994 - - -    Purpose -    ======= - -    ZLARF applies a complex elementary reflector H to a complex M-by-N -    matrix C, from either the left or the right. H is represented in the -    form - -          H = I - tau * v * v' - -    where tau is a complex scalar and v is a complex vector. - -    If tau = 0, then H is taken to be the unit matrix. - -    To apply H' (the conjugate transpose of H), supply conjg(tau) instead of -    tau. - -    Arguments -    ========= - -    SIDE    (input) CHARACTER*1 -            = 'L': form  H * C -            = 'R': form  C * H - -    M       (input) INTEGER -            The number of rows of the matrix C. - -    N       (input) INTEGER -            The number of columns of the matrix C. - -    V       (input) COMPLEX*16 array, dimension -                       (1 + (M-1)*abs(INCV)) if SIDE = 'L' -                    or (1 + (N-1)*abs(INCV)) if SIDE = 'R' -            The vector v in the representation of H. V is not used if -            TAU = 0. - -    INCV    (input) INTEGER -            The increment between elements of v. INCV <> 0. - -    TAU     (input) COMPLEX*16 -            The value tau in the representation of H. - -    C       (input/output) COMPLEX*16 array, dimension (LDC,N) -            On entry, the M-by-N matrix C. -            On exit, C is overwritten by the matrix H * C if SIDE = 'L', -            or C * H if SIDE = 'R'. - -    LDC     (input) INTEGER -            The leading dimension of the array C. LDC >= max(1,M). - -    WORK    (workspace) COMPLEX*16 array, dimension -                           (N) if SIDE = 'L' -                        or (M) if SIDE = 'R' - -    ===================================================================== -*/ - - -    /* Parameter adjustments */ -    --v; -    c_dim1 = *ldc; -    c_offset = 1 + c_dim1 * 1; -    c__ -= c_offset; -    --work; - -    /* Function Body */ -    if (lsame_(side, "L")) { - -/*        Form  H * C */ - -	if (tau->r != 0. || tau->i != 0.) 
{ - -/* w := C' * v */ - - zgemv_("Conjugate transpose", m, n, &c_b60, &c__[c_offset], ldc, & - v[1], incv, &c_b59, &work[1], &c__1); - -/* C := C - v * w' */ - - z__1.r = -tau->r, z__1.i = -tau->i; - zgerc_(m, n, &z__1, &v[1], incv, &work[1], &c__1, &c__[c_offset], - ldc); - } - } else { - -/* Form C * H */ - - if (tau->r != 0. || tau->i != 0.) { - -/* w := C * v */ - - zgemv_("No transpose", m, n, &c_b60, &c__[c_offset], ldc, &v[1], - incv, &c_b59, &work[1], &c__1); - -/* C := C - w * v' */ - - z__1.r = -tau->r, z__1.i = -tau->i; - zgerc_(m, n, &z__1, &work[1], &c__1, &v[1], incv, &c__[c_offset], - ldc); - } - } - return 0; - -/* End of ZLARF */ - -} /* zlarf_ */ - -/* Subroutine */ int zlarfb_(char *side, char *trans, char *direct, char * - storev, integer *m, integer *n, integer *k, doublecomplex *v, integer - *ldv, doublecomplex *t, integer *ldt, doublecomplex *c__, integer * - ldc, doublecomplex *work, integer *ldwork) -{ - /* System generated locals */ - integer c_dim1, c_offset, t_dim1, t_offset, v_dim1, v_offset, work_dim1, - work_offset, i__1, i__2, i__3, i__4, i__5; - doublecomplex z__1, z__2; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, j; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int zgemm_(char *, char *, integer *, integer *, - integer *, doublecomplex *, doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *), zcopy_(integer *, doublecomplex *, - integer *, doublecomplex *, integer *), ztrmm_(char *, char *, - char *, char *, integer *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *), zlacgv_(integer *, doublecomplex *, - integer *); - static char transt[1]; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZLARFB applies a complex block reflector H or its transpose H' to a - complex M-by-N matrix C, from either the left or the right. - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'L': apply H or H' from the Left - = 'R': apply H or H' from the Right - - TRANS (input) CHARACTER*1 - = 'N': apply H (No transpose) - = 'C': apply H' (Conjugate transpose) - - DIRECT (input) CHARACTER*1 - Indicates how H is formed from a product of elementary - reflectors - = 'F': H = H(1) H(2) . . . H(k) (Forward) - = 'B': H = H(k) . . . H(2) H(1) (Backward) - - STOREV (input) CHARACTER*1 - Indicates how the vectors which define the elementary - reflectors are stored: - = 'C': Columnwise - = 'R': Rowwise - - M (input) INTEGER - The number of rows of the matrix C. - - N (input) INTEGER - The number of columns of the matrix C. - - K (input) INTEGER - The order of the matrix T (= the number of elementary - reflectors whose product defines the block reflector). - - V (input) COMPLEX*16 array, dimension - (LDV,K) if STOREV = 'C' - (LDV,M) if STOREV = 'R' and SIDE = 'L' - (LDV,N) if STOREV = 'R' and SIDE = 'R' - The matrix V. See further details. - - LDV (input) INTEGER - The leading dimension of the array V. - If STOREV = 'C' and SIDE = 'L', LDV >= max(1,M); - if STOREV = 'C' and SIDE = 'R', LDV >= max(1,N); - if STOREV = 'R', LDV >= K. - - T (input) COMPLEX*16 array, dimension (LDT,K) - The triangular K-by-K matrix T in the representation of the - block reflector. - - LDT (input) INTEGER - The leading dimension of the array T. LDT >= K. - - C (input/output) COMPLEX*16 array, dimension (LDC,N) - On entry, the M-by-N matrix C. - On exit, C is overwritten by H*C or H'*C or C*H or C*H'. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >= max(1,M). 
- - WORK (workspace) COMPLEX*16 array, dimension (LDWORK,K) - - LDWORK (input) INTEGER - The leading dimension of the array WORK. - If SIDE = 'L', LDWORK >= max(1,N); - if SIDE = 'R', LDWORK >= max(1,M). - - ===================================================================== - - - Quick return if possible -*/ - - /* Parameter adjustments */ - v_dim1 = *ldv; - v_offset = 1 + v_dim1 * 1; - v -= v_offset; - t_dim1 = *ldt; - t_offset = 1 + t_dim1 * 1; - t -= t_offset; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - work_dim1 = *ldwork; - work_offset = 1 + work_dim1 * 1; - work -= work_offset; - - /* Function Body */ - if (*m <= 0 || *n <= 0) { - return 0; - } - - if (lsame_(trans, "N")) { - *(unsigned char *)transt = 'C'; - } else { - *(unsigned char *)transt = 'N'; - } - - if (lsame_(storev, "C")) { - - if (lsame_(direct, "F")) { - -/* - Let V = ( V1 ) (first K rows) - ( V2 ) - where V1 is unit lower triangular. -*/ - - if (lsame_(side, "L")) { - -/* - Form H * C or H' * C where C = ( C1 ) - ( C2 ) - - W := C' * V = (C1'*V1 + C2'*V2) (stored in WORK) - - W := C1' -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - zcopy_(n, &c__[j + c_dim1], ldc, &work[j * work_dim1 + 1], - &c__1); - zlacgv_(n, &work[j * work_dim1 + 1], &c__1); -/* L10: */ - } - -/* W := W * V1 */ - - ztrmm_("Right", "Lower", "No transpose", "Unit", n, k, &c_b60, - &v[v_offset], ldv, &work[work_offset], ldwork); - if (*m > *k) { - -/* W := W + C2'*V2 */ - - i__1 = *m - *k; - zgemm_("Conjugate transpose", "No transpose", n, k, &i__1, - &c_b60, &c__[*k + 1 + c_dim1], ldc, &v[*k + 1 + - v_dim1], ldv, &c_b60, &work[work_offset], ldwork); - } - -/* W := W * T' or W * T */ - - ztrmm_("Right", "Upper", transt, "Non-unit", n, k, &c_b60, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - V * W' */ - - if (*m > *k) { - -/* C2 := C2 - V2 * W' */ - - i__1 = *m - *k; - z__1.r = -1., z__1.i = -0.; - zgemm_("No transpose", "Conjugate transpose", &i__1, n, k, - &z__1, &v[*k + 1 + 
v_dim1], ldv, &work[ - work_offset], ldwork, &c_b60, &c__[*k + 1 + - c_dim1], ldc); - } - -/* W := W * V1' */ - - ztrmm_("Right", "Lower", "Conjugate transpose", "Unit", n, k, - &c_b60, &v[v_offset], ldv, &work[work_offset], ldwork); - -/* C1 := C1 - W' */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = j + i__ * c_dim1; - i__4 = j + i__ * c_dim1; - d_cnjg(&z__2, &work[i__ + j * work_dim1]); - z__1.r = c__[i__4].r - z__2.r, z__1.i = c__[i__4].i - - z__2.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L20: */ - } -/* L30: */ - } - - } else if (lsame_(side, "R")) { - -/* - Form C * H or C * H' where C = ( C1 C2 ) - - W := C * V = (C1*V1 + C2*V2) (stored in WORK) - - W := C1 -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - zcopy_(m, &c__[j * c_dim1 + 1], &c__1, &work[j * - work_dim1 + 1], &c__1); -/* L40: */ - } - -/* W := W * V1 */ - - ztrmm_("Right", "Lower", "No transpose", "Unit", m, k, &c_b60, - &v[v_offset], ldv, &work[work_offset], ldwork); - if (*n > *k) { - -/* W := W + C2 * V2 */ - - i__1 = *n - *k; - zgemm_("No transpose", "No transpose", m, k, &i__1, & - c_b60, &c__[(*k + 1) * c_dim1 + 1], ldc, &v[*k + - 1 + v_dim1], ldv, &c_b60, &work[work_offset], - ldwork); - } - -/* W := W * T or W * T' */ - - ztrmm_("Right", "Upper", trans, "Non-unit", m, k, &c_b60, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - W * V' */ - - if (*n > *k) { - -/* C2 := C2 - W * V2' */ - - i__1 = *n - *k; - z__1.r = -1., z__1.i = -0.; - zgemm_("No transpose", "Conjugate transpose", m, &i__1, k, - &z__1, &work[work_offset], ldwork, &v[*k + 1 + - v_dim1], ldv, &c_b60, &c__[(*k + 1) * c_dim1 + 1], - ldc); - } - -/* W := W * V1' */ - - ztrmm_("Right", "Lower", "Conjugate transpose", "Unit", m, k, - &c_b60, &v[v_offset], ldv, &work[work_offset], ldwork); - -/* C1 := C1 - W */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 
= i__ + j * c_dim1; - i__5 = i__ + j * work_dim1; - z__1.r = c__[i__4].r - work[i__5].r, z__1.i = c__[ - i__4].i - work[i__5].i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L50: */ - } -/* L60: */ - } - } - - } else { - -/* - Let V = ( V1 ) - ( V2 ) (last K rows) - where V2 is unit upper triangular. -*/ - - if (lsame_(side, "L")) { - -/* - Form H * C or H' * C where C = ( C1 ) - ( C2 ) - - W := C' * V = (C1'*V1 + C2'*V2) (stored in WORK) - - W := C2' -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - zcopy_(n, &c__[*m - *k + j + c_dim1], ldc, &work[j * - work_dim1 + 1], &c__1); - zlacgv_(n, &work[j * work_dim1 + 1], &c__1); -/* L70: */ - } - -/* W := W * V2 */ - - ztrmm_("Right", "Upper", "No transpose", "Unit", n, k, &c_b60, - &v[*m - *k + 1 + v_dim1], ldv, &work[work_offset], - ldwork); - if (*m > *k) { - -/* W := W + C1'*V1 */ - - i__1 = *m - *k; - zgemm_("Conjugate transpose", "No transpose", n, k, &i__1, - &c_b60, &c__[c_offset], ldc, &v[v_offset], ldv, & - c_b60, &work[work_offset], ldwork); - } - -/* W := W * T' or W * T */ - - ztrmm_("Right", "Lower", transt, "Non-unit", n, k, &c_b60, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - V * W' */ - - if (*m > *k) { - -/* C1 := C1 - V1 * W' */ - - i__1 = *m - *k; - z__1.r = -1., z__1.i = -0.; - zgemm_("No transpose", "Conjugate transpose", &i__1, n, k, - &z__1, &v[v_offset], ldv, &work[work_offset], - ldwork, &c_b60, &c__[c_offset], ldc); - } - -/* W := W * V2' */ - - ztrmm_("Right", "Upper", "Conjugate transpose", "Unit", n, k, - &c_b60, &v[*m - *k + 1 + v_dim1], ldv, &work[ - work_offset], ldwork); - -/* C2 := C2 - W' */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = *m - *k + j + i__ * c_dim1; - i__4 = *m - *k + j + i__ * c_dim1; - d_cnjg(&z__2, &work[i__ + j * work_dim1]); - z__1.r = c__[i__4].r - z__2.r, z__1.i = c__[i__4].i - - z__2.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L80: */ - } -/* L90: */ - } - - } else 
if (lsame_(side, "R")) { - -/* - Form C * H or C * H' where C = ( C1 C2 ) - - W := C * V = (C1*V1 + C2*V2) (stored in WORK) - - W := C2 -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - zcopy_(m, &c__[(*n - *k + j) * c_dim1 + 1], &c__1, &work[ - j * work_dim1 + 1], &c__1); -/* L100: */ - } - -/* W := W * V2 */ - - ztrmm_("Right", "Upper", "No transpose", "Unit", m, k, &c_b60, - &v[*n - *k + 1 + v_dim1], ldv, &work[work_offset], - ldwork); - if (*n > *k) { - -/* W := W + C1 * V1 */ - - i__1 = *n - *k; - zgemm_("No transpose", "No transpose", m, k, &i__1, & - c_b60, &c__[c_offset], ldc, &v[v_offset], ldv, & - c_b60, &work[work_offset], ldwork); - } - -/* W := W * T or W * T' */ - - ztrmm_("Right", "Lower", trans, "Non-unit", m, k, &c_b60, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - W * V' */ - - if (*n > *k) { - -/* C1 := C1 - W * V1' */ - - i__1 = *n - *k; - z__1.r = -1., z__1.i = -0.; - zgemm_("No transpose", "Conjugate transpose", m, &i__1, k, - &z__1, &work[work_offset], ldwork, &v[v_offset], - ldv, &c_b60, &c__[c_offset], ldc); - } - -/* W := W * V2' */ - - ztrmm_("Right", "Upper", "Conjugate transpose", "Unit", m, k, - &c_b60, &v[*n - *k + 1 + v_dim1], ldv, &work[ - work_offset], ldwork); - -/* C2 := C2 - W */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + (*n - *k + j) * c_dim1; - i__4 = i__ + (*n - *k + j) * c_dim1; - i__5 = i__ + j * work_dim1; - z__1.r = c__[i__4].r - work[i__5].r, z__1.i = c__[ - i__4].i - work[i__5].i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L110: */ - } -/* L120: */ - } - } - } - - } else if (lsame_(storev, "R")) { - - if (lsame_(direct, "F")) { - -/* - Let V = ( V1 V2 ) (V1: first K columns) - where V1 is unit upper triangular. 
-*/ - - if (lsame_(side, "L")) { - -/* - Form H * C or H' * C where C = ( C1 ) - ( C2 ) - - W := C' * V' = (C1'*V1' + C2'*V2') (stored in WORK) - - W := C1' -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - zcopy_(n, &c__[j + c_dim1], ldc, &work[j * work_dim1 + 1], - &c__1); - zlacgv_(n, &work[j * work_dim1 + 1], &c__1); -/* L130: */ - } - -/* W := W * V1' */ - - ztrmm_("Right", "Upper", "Conjugate transpose", "Unit", n, k, - &c_b60, &v[v_offset], ldv, &work[work_offset], ldwork); - if (*m > *k) { - -/* W := W + C2'*V2' */ - - i__1 = *m - *k; - zgemm_("Conjugate transpose", "Conjugate transpose", n, k, - &i__1, &c_b60, &c__[*k + 1 + c_dim1], ldc, &v[(* - k + 1) * v_dim1 + 1], ldv, &c_b60, &work[ - work_offset], ldwork); - } - -/* W := W * T' or W * T */ - - ztrmm_("Right", "Upper", transt, "Non-unit", n, k, &c_b60, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - V' * W' */ - - if (*m > *k) { - -/* C2 := C2 - V2' * W' */ - - i__1 = *m - *k; - z__1.r = -1., z__1.i = -0.; - zgemm_("Conjugate transpose", "Conjugate transpose", & - i__1, n, k, &z__1, &v[(*k + 1) * v_dim1 + 1], ldv, - &work[work_offset], ldwork, &c_b60, &c__[*k + 1 - + c_dim1], ldc); - } - -/* W := W * V1 */ - - ztrmm_("Right", "Upper", "No transpose", "Unit", n, k, &c_b60, - &v[v_offset], ldv, &work[work_offset], ldwork); - -/* C1 := C1 - W' */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = j + i__ * c_dim1; - i__4 = j + i__ * c_dim1; - d_cnjg(&z__2, &work[i__ + j * work_dim1]); - z__1.r = c__[i__4].r - z__2.r, z__1.i = c__[i__4].i - - z__2.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L140: */ - } -/* L150: */ - } - - } else if (lsame_(side, "R")) { - -/* - Form C * H or C * H' where C = ( C1 C2 ) - - W := C * V' = (C1*V1' + C2*V2') (stored in WORK) - - W := C1 -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - zcopy_(m, &c__[j * c_dim1 + 1], &c__1, &work[j * - work_dim1 + 1], &c__1); -/* L160: */ - } - -/* W := W 
* V1' */ - - ztrmm_("Right", "Upper", "Conjugate transpose", "Unit", m, k, - &c_b60, &v[v_offset], ldv, &work[work_offset], ldwork); - if (*n > *k) { - -/* W := W + C2 * V2' */ - - i__1 = *n - *k; - zgemm_("No transpose", "Conjugate transpose", m, k, &i__1, - &c_b60, &c__[(*k + 1) * c_dim1 + 1], ldc, &v[(*k - + 1) * v_dim1 + 1], ldv, &c_b60, &work[ - work_offset], ldwork); - } - -/* W := W * T or W * T' */ - - ztrmm_("Right", "Upper", trans, "Non-unit", m, k, &c_b60, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - W * V */ - - if (*n > *k) { - -/* C2 := C2 - W * V2 */ - - i__1 = *n - *k; - z__1.r = -1., z__1.i = -0.; - zgemm_("No transpose", "No transpose", m, &i__1, k, &z__1, - &work[work_offset], ldwork, &v[(*k + 1) * v_dim1 - + 1], ldv, &c_b60, &c__[(*k + 1) * c_dim1 + 1], - ldc); - } - -/* W := W * V1 */ - - ztrmm_("Right", "Upper", "No transpose", "Unit", m, k, &c_b60, - &v[v_offset], ldv, &work[work_offset], ldwork); - -/* C1 := C1 - W */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * c_dim1; - i__4 = i__ + j * c_dim1; - i__5 = i__ + j * work_dim1; - z__1.r = c__[i__4].r - work[i__5].r, z__1.i = c__[ - i__4].i - work[i__5].i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L170: */ - } -/* L180: */ - } - - } - - } else { - -/* - Let V = ( V1 V2 ) (V2: last K columns) - where V2 is unit lower triangular. 
-*/ - - if (lsame_(side, "L")) { - -/* - Form H * C or H' * C where C = ( C1 ) - ( C2 ) - - W := C' * V' = (C1'*V1' + C2'*V2') (stored in WORK) - - W := C2' -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - zcopy_(n, &c__[*m - *k + j + c_dim1], ldc, &work[j * - work_dim1 + 1], &c__1); - zlacgv_(n, &work[j * work_dim1 + 1], &c__1); -/* L190: */ - } - -/* W := W * V2' */ - - ztrmm_("Right", "Lower", "Conjugate transpose", "Unit", n, k, - &c_b60, &v[(*m - *k + 1) * v_dim1 + 1], ldv, &work[ - work_offset], ldwork); - if (*m > *k) { - -/* W := W + C1'*V1' */ - - i__1 = *m - *k; - zgemm_("Conjugate transpose", "Conjugate transpose", n, k, - &i__1, &c_b60, &c__[c_offset], ldc, &v[v_offset], - ldv, &c_b60, &work[work_offset], ldwork); - } - -/* W := W * T' or W * T */ - - ztrmm_("Right", "Lower", transt, "Non-unit", n, k, &c_b60, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - V' * W' */ - - if (*m > *k) { - -/* C1 := C1 - V1' * W' */ - - i__1 = *m - *k; - z__1.r = -1., z__1.i = -0.; - zgemm_("Conjugate transpose", "Conjugate transpose", & - i__1, n, k, &z__1, &v[v_offset], ldv, &work[ - work_offset], ldwork, &c_b60, &c__[c_offset], ldc); - } - -/* W := W * V2 */ - - ztrmm_("Right", "Lower", "No transpose", "Unit", n, k, &c_b60, - &v[(*m - *k + 1) * v_dim1 + 1], ldv, &work[ - work_offset], ldwork); - -/* C2 := C2 - W' */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = *m - *k + j + i__ * c_dim1; - i__4 = *m - *k + j + i__ * c_dim1; - d_cnjg(&z__2, &work[i__ + j * work_dim1]); - z__1.r = c__[i__4].r - z__2.r, z__1.i = c__[i__4].i - - z__2.i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L200: */ - } -/* L210: */ - } - - } else if (lsame_(side, "R")) { - -/* - Form C * H or C * H' where C = ( C1 C2 ) - - W := C * V' = (C1*V1' + C2*V2') (stored in WORK) - - W := C2 -*/ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - zcopy_(m, &c__[(*n - *k + j) * c_dim1 + 1], &c__1, &work[ - j * work_dim1 + 
1], &c__1); -/* L220: */ - } - -/* W := W * V2' */ - - ztrmm_("Right", "Lower", "Conjugate transpose", "Unit", m, k, - &c_b60, &v[(*n - *k + 1) * v_dim1 + 1], ldv, &work[ - work_offset], ldwork); - if (*n > *k) { - -/* W := W + C1 * V1' */ - - i__1 = *n - *k; - zgemm_("No transpose", "Conjugate transpose", m, k, &i__1, - &c_b60, &c__[c_offset], ldc, &v[v_offset], ldv, & - c_b60, &work[work_offset], ldwork); - } - -/* W := W * T or W * T' */ - - ztrmm_("Right", "Lower", trans, "Non-unit", m, k, &c_b60, &t[ - t_offset], ldt, &work[work_offset], ldwork); - -/* C := C - W * V */ - - if (*n > *k) { - -/* C1 := C1 - W * V1 */ - - i__1 = *n - *k; - z__1.r = -1., z__1.i = -0.; - zgemm_("No transpose", "No transpose", m, &i__1, k, &z__1, - &work[work_offset], ldwork, &v[v_offset], ldv, & - c_b60, &c__[c_offset], ldc); - } - -/* W := W * V2 */ - - ztrmm_("Right", "Lower", "No transpose", "Unit", m, k, &c_b60, - &v[(*n - *k + 1) * v_dim1 + 1], ldv, &work[ - work_offset], ldwork); - -/* C1 := C1 - W */ - - i__1 = *k; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + (*n - *k + j) * c_dim1; - i__4 = i__ + (*n - *k + j) * c_dim1; - i__5 = i__ + j * work_dim1; - z__1.r = c__[i__4].r - work[i__5].r, z__1.i = c__[ - i__4].i - work[i__5].i; - c__[i__3].r = z__1.r, c__[i__3].i = z__1.i; -/* L230: */ - } -/* L240: */ - } - - } - - } - } - - return 0; - -/* End of ZLARFB */ - -} /* zlarfb_ */ - -/* Subroutine */ int zlarfg_(integer *n, doublecomplex *alpha, doublecomplex * - x, integer *incx, doublecomplex *tau) -{ - /* System generated locals */ - integer i__1; - doublereal d__1, d__2; - doublecomplex z__1, z__2; - - /* Builtin functions */ - double d_imag(doublecomplex *), d_sign(doublereal *, doublereal *); - - /* Local variables */ - static integer j, knt; - static doublereal beta, alphi, alphr; - extern /* Subroutine */ int zscal_(integer *, doublecomplex *, - doublecomplex *, integer *); - static doublereal xnorm; - extern 
doublereal dlapy3_(doublereal *, doublereal *, doublereal *), - dznrm2_(integer *, doublecomplex *, integer *), dlamch_(char *); - static doublereal safmin; - extern /* Subroutine */ int zdscal_(integer *, doublereal *, - doublecomplex *, integer *); - static doublereal rsafmn; - extern /* Double Complex */ VOID zladiv_(doublecomplex *, doublecomplex *, - doublecomplex *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZLARFG generates a complex elementary reflector H of order n, such - that - - H' * ( alpha ) = ( beta ), H' * H = I. - ( x ) ( 0 ) - - where alpha and beta are scalars, with beta real, and x is an - (n-1)-element complex vector. H is represented in the form - - H = I - tau * ( 1 ) * ( 1 v' ) , - ( v ) - - where tau is a complex scalar and v is a complex (n-1)-element - vector. Note that H is not hermitian. - - If the elements of x are all zero and alpha is real, then tau = 0 - and H is taken to be the unit matrix. - - Otherwise 1 <= real(tau) <= 2 and abs(tau-1) <= 1 . - - Arguments - ========= - - N (input) INTEGER - The order of the elementary reflector. - - ALPHA (input/output) COMPLEX*16 - On entry, the value alpha. - On exit, it is overwritten with the value beta. - - X (input/output) COMPLEX*16 array, dimension - (1+(N-2)*abs(INCX)) - On entry, the vector x. - On exit, it is overwritten with the vector v. - - INCX (input) INTEGER - The increment between elements of X. INCX > 0. - - TAU (output) COMPLEX*16 - The value tau. - - ===================================================================== -*/ - - - /* Parameter adjustments */ - --x; - - /* Function Body */ - if (*n <= 0) { - tau->r = 0., tau->i = 0.; - return 0; - } - - i__1 = *n - 1; - xnorm = dznrm2_(&i__1, &x[1], incx); - alphr = alpha->r; - alphi = d_imag(alpha); - - if ((xnorm == 0. 
&& alphi == 0.)) { - -/* H = I */ - - tau->r = 0., tau->i = 0.; - } else { - -/* general case */ - - d__1 = dlapy3_(&alphr, &alphi, &xnorm); - beta = -d_sign(&d__1, &alphr); - safmin = SAFEMINIMUM / EPSILON; - rsafmn = 1. / safmin; - - if (abs(beta) < safmin) { - -/* XNORM, BETA may be inaccurate; scale X and recompute them */ - - knt = 0; -L10: - ++knt; - i__1 = *n - 1; - zdscal_(&i__1, &rsafmn, &x[1], incx); - beta *= rsafmn; - alphi *= rsafmn; - alphr *= rsafmn; - if (abs(beta) < safmin) { - goto L10; - } - -/* New BETA is at most 1, at least SAFMIN */ - - i__1 = *n - 1; - xnorm = dznrm2_(&i__1, &x[1], incx); - z__1.r = alphr, z__1.i = alphi; - alpha->r = z__1.r, alpha->i = z__1.i; - d__1 = dlapy3_(&alphr, &alphi, &xnorm); - beta = -d_sign(&d__1, &alphr); - d__1 = (beta - alphr) / beta; - d__2 = -alphi / beta; - z__1.r = d__1, z__1.i = d__2; - tau->r = z__1.r, tau->i = z__1.i; - z__2.r = alpha->r - beta, z__2.i = alpha->i; - zladiv_(&z__1, &c_b60, &z__2); - alpha->r = z__1.r, alpha->i = z__1.i; - i__1 = *n - 1; - zscal_(&i__1, alpha, &x[1], incx); - -/* If ALPHA is subnormal, it may lose relative accuracy */ - - alpha->r = beta, alpha->i = 0.; - i__1 = knt; - for (j = 1; j <= i__1; ++j) { - z__1.r = safmin * alpha->r, z__1.i = safmin * alpha->i; - alpha->r = z__1.r, alpha->i = z__1.i; -/* L20: */ - } - } else { - d__1 = (beta - alphr) / beta; - d__2 = -alphi / beta; - z__1.r = d__1, z__1.i = d__2; - tau->r = z__1.r, tau->i = z__1.i; - z__2.r = alpha->r - beta, z__2.i = alpha->i; - zladiv_(&z__1, &c_b60, &z__2); - alpha->r = z__1.r, alpha->i = z__1.i; - i__1 = *n - 1; - zscal_(&i__1, alpha, &x[1], incx); - alpha->r = beta, alpha->i = 0.; - } - } - - return 0; - -/* End of ZLARFG */ - -} /* zlarfg_ */ - -/* Subroutine */ int zlarft_(char *direct, char *storev, integer *n, integer * - k, doublecomplex *v, integer *ldv, doublecomplex *tau, doublecomplex * - t, integer *ldt) -{ - /* System generated locals */ - integer t_dim1, t_offset, v_dim1, v_offset, i__1, i__2, 
i__3, i__4; - doublecomplex z__1; - - /* Local variables */ - static integer i__, j; - static doublecomplex vii; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int zgemv_(char *, integer *, integer *, - doublecomplex *, doublecomplex *, integer *, doublecomplex *, - integer *, doublecomplex *, doublecomplex *, integer *), - ztrmv_(char *, char *, char *, integer *, doublecomplex *, - integer *, doublecomplex *, integer *), - zlacgv_(integer *, doublecomplex *, integer *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZLARFT forms the triangular factor T of a complex block reflector H - of order n, which is defined as a product of k elementary reflectors. - - If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; - - If DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. - - If STOREV = 'C', the vector which defines the elementary reflector - H(i) is stored in the i-th column of the array V, and - - H = I - V * T * V' - - If STOREV = 'R', the vector which defines the elementary reflector - H(i) is stored in the i-th row of the array V, and - - H = I - V' * T * V - - Arguments - ========= - - DIRECT (input) CHARACTER*1 - Specifies the order in which the elementary reflectors are - multiplied to form the block reflector: - = 'F': H = H(1) H(2) . . . H(k) (Forward) - = 'B': H = H(k) . . . H(2) H(1) (Backward) - - STOREV (input) CHARACTER*1 - Specifies how the vectors which define the elementary - reflectors are stored (see also Further Details): - = 'C': columnwise - = 'R': rowwise - - N (input) INTEGER - The order of the block reflector H. N >= 0. - - K (input) INTEGER - The order of the triangular factor T (= the number of - elementary reflectors). K >= 1. 
- - V (input/output) COMPLEX*16 array, dimension - (LDV,K) if STOREV = 'C' - (LDV,N) if STOREV = 'R' - The matrix V. See further details. - - LDV (input) INTEGER - The leading dimension of the array V. - If STOREV = 'C', LDV >= max(1,N); if STOREV = 'R', LDV >= K. - - TAU (input) COMPLEX*16 array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i). - - T (output) COMPLEX*16 array, dimension (LDT,K) - The k by k triangular factor T of the block reflector. - If DIRECT = 'F', T is upper triangular; if DIRECT = 'B', T is - lower triangular. The rest of the array is not used. - - LDT (input) INTEGER - The leading dimension of the array T. LDT >= K. - - Further Details - =============== - - The shape of the matrix V and the storage of the vectors which define - the H(i) is best illustrated by the following example with n = 5 and - k = 3. The elements equal to 1 are not stored; the corresponding - array elements are modified but restored on exit. The rest of the - array is not used. - - DIRECT = 'F' and STOREV = 'C': DIRECT = 'F' and STOREV = 'R': - - V = ( 1 ) V = ( 1 v1 v1 v1 v1 ) - ( v1 1 ) ( 1 v2 v2 v2 ) - ( v1 v2 1 ) ( 1 v3 v3 ) - ( v1 v2 v3 ) - ( v1 v2 v3 ) - - DIRECT = 'B' and STOREV = 'C': DIRECT = 'B' and STOREV = 'R': - - V = ( v1 v2 v3 ) V = ( v1 v1 1 ) - ( v1 v2 v3 ) ( v2 v2 v2 1 ) - ( 1 v2 v3 ) ( v3 v3 v3 v3 1 ) - ( 1 v3 ) - ( 1 ) - - ===================================================================== - - - Quick return if possible -*/ - - /* Parameter adjustments */ - v_dim1 = *ldv; - v_offset = 1 + v_dim1 * 1; - v -= v_offset; - --tau; - t_dim1 = *ldt; - t_offset = 1 + t_dim1 * 1; - t -= t_offset; - - /* Function Body */ - if (*n == 0) { - return 0; - } - - if (lsame_(direct, "F")) { - i__1 = *k; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__; - if ((tau[i__2].r == 0. 
&& tau[i__2].i == 0.)) { - -/* H(i) = I */ - - i__2 = i__; - for (j = 1; j <= i__2; ++j) { - i__3 = j + i__ * t_dim1; - t[i__3].r = 0., t[i__3].i = 0.; -/* L10: */ - } - } else { - -/* general case */ - - i__2 = i__ + i__ * v_dim1; - vii.r = v[i__2].r, vii.i = v[i__2].i; - i__2 = i__ + i__ * v_dim1; - v[i__2].r = 1., v[i__2].i = 0.; - if (lsame_(storev, "C")) { - -/* T(1:i-1,i) := - tau(i) * V(i:n,1:i-1)' * V(i:n,i) */ - - i__2 = *n - i__ + 1; - i__3 = i__ - 1; - i__4 = i__; - z__1.r = -tau[i__4].r, z__1.i = -tau[i__4].i; - zgemv_("Conjugate transpose", &i__2, &i__3, &z__1, &v[i__ - + v_dim1], ldv, &v[i__ + i__ * v_dim1], &c__1, & - c_b59, &t[i__ * t_dim1 + 1], &c__1); - } else { - -/* T(1:i-1,i) := - tau(i) * V(1:i-1,i:n) * V(i,i:n)' */ - - if (i__ < *n) { - i__2 = *n - i__; - zlacgv_(&i__2, &v[i__ + (i__ + 1) * v_dim1], ldv); - } - i__2 = i__ - 1; - i__3 = *n - i__ + 1; - i__4 = i__; - z__1.r = -tau[i__4].r, z__1.i = -tau[i__4].i; - zgemv_("No transpose", &i__2, &i__3, &z__1, &v[i__ * - v_dim1 + 1], ldv, &v[i__ + i__ * v_dim1], ldv, & - c_b59, &t[i__ * t_dim1 + 1], &c__1); - if (i__ < *n) { - i__2 = *n - i__; - zlacgv_(&i__2, &v[i__ + (i__ + 1) * v_dim1], ldv); - } - } - i__2 = i__ + i__ * v_dim1; - v[i__2].r = vii.r, v[i__2].i = vii.i; - -/* T(1:i-1,i) := T(1:i-1,1:i-1) * T(1:i-1,i) */ - - i__2 = i__ - 1; - ztrmv_("Upper", "No transpose", "Non-unit", &i__2, &t[ - t_offset], ldt, &t[i__ * t_dim1 + 1], &c__1); - i__2 = i__ + i__ * t_dim1; - i__3 = i__; - t[i__2].r = tau[i__3].r, t[i__2].i = tau[i__3].i; - } -/* L20: */ - } - } else { - for (i__ = *k; i__ >= 1; --i__) { - i__1 = i__; - if ((tau[i__1].r == 0. 
&& tau[i__1].i == 0.)) { - -/* H(i) = I */ - - i__1 = *k; - for (j = i__; j <= i__1; ++j) { - i__2 = j + i__ * t_dim1; - t[i__2].r = 0., t[i__2].i = 0.; -/* L30: */ - } - } else { - -/* general case */ - - if (i__ < *k) { - if (lsame_(storev, "C")) { - i__1 = *n - *k + i__ + i__ * v_dim1; - vii.r = v[i__1].r, vii.i = v[i__1].i; - i__1 = *n - *k + i__ + i__ * v_dim1; - v[i__1].r = 1., v[i__1].i = 0.; - -/* - T(i+1:k,i) := - - tau(i) * V(1:n-k+i,i+1:k)' * V(1:n-k+i,i) -*/ - - i__1 = *n - *k + i__; - i__2 = *k - i__; - i__3 = i__; - z__1.r = -tau[i__3].r, z__1.i = -tau[i__3].i; - zgemv_("Conjugate transpose", &i__1, &i__2, &z__1, &v[ - (i__ + 1) * v_dim1 + 1], ldv, &v[i__ * v_dim1 - + 1], &c__1, &c_b59, &t[i__ + 1 + i__ * - t_dim1], &c__1); - i__1 = *n - *k + i__ + i__ * v_dim1; - v[i__1].r = vii.r, v[i__1].i = vii.i; - } else { - i__1 = i__ + (*n - *k + i__) * v_dim1; - vii.r = v[i__1].r, vii.i = v[i__1].i; - i__1 = i__ + (*n - *k + i__) * v_dim1; - v[i__1].r = 1., v[i__1].i = 0.; - -/* - T(i+1:k,i) := - - tau(i) * V(i+1:k,1:n-k+i) * V(i,1:n-k+i)' -*/ - - i__1 = *n - *k + i__ - 1; - zlacgv_(&i__1, &v[i__ + v_dim1], ldv); - i__1 = *k - i__; - i__2 = *n - *k + i__; - i__3 = i__; - z__1.r = -tau[i__3].r, z__1.i = -tau[i__3].i; - zgemv_("No transpose", &i__1, &i__2, &z__1, &v[i__ + - 1 + v_dim1], ldv, &v[i__ + v_dim1], ldv, & - c_b59, &t[i__ + 1 + i__ * t_dim1], &c__1); - i__1 = *n - *k + i__ - 1; - zlacgv_(&i__1, &v[i__ + v_dim1], ldv); - i__1 = i__ + (*n - *k + i__) * v_dim1; - v[i__1].r = vii.r, v[i__1].i = vii.i; - } - -/* T(i+1:k,i) := T(i+1:k,i+1:k) * T(i+1:k,i) */ - - i__1 = *k - i__; - ztrmv_("Lower", "No transpose", "Non-unit", &i__1, &t[i__ - + 1 + (i__ + 1) * t_dim1], ldt, &t[i__ + 1 + i__ * - t_dim1], &c__1) - ; - } - i__1 = i__ + i__ * t_dim1; - i__2 = i__; - t[i__1].r = tau[i__2].r, t[i__1].i = tau[i__2].i; - } -/* L40: */ - } - } - return 0; - -/* End of ZLARFT */ - -} /* zlarft_ */ - -/* Subroutine */ int zlarfx_(char *side, integer *m, integer *n, - 
doublecomplex *v, doublecomplex *tau, doublecomplex *c__, integer * - ldc, doublecomplex *work) -{ - /* System generated locals */ - integer c_dim1, c_offset, i__1, i__2, i__3, i__4, i__5, i__6, i__7, i__8, - i__9, i__10, i__11; - doublecomplex z__1, z__2, z__3, z__4, z__5, z__6, z__7, z__8, z__9, z__10, - z__11, z__12, z__13, z__14, z__15, z__16, z__17, z__18, z__19; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer j; - static doublecomplex t1, t2, t3, t4, t5, t6, t7, t8, t9, v1, v2, v3, v4, - v5, v6, v7, v8, v9, t10, v10, sum; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int zgerc_(integer *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *), zgemv_(char *, integer *, integer *, - doublecomplex *, doublecomplex *, integer *, doublecomplex *, - integer *, doublecomplex *, doublecomplex *, integer *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZLARFX applies a complex elementary reflector H to a complex m by n - matrix C, from either the left or the right. H is represented in the - form - - H = I - tau * v * v' - - where tau is a complex scalar and v is a complex vector. - - If tau = 0, then H is taken to be the unit matrix - - This version uses inline code if H has order < 11. - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'L': form H * C - = 'R': form C * H - - M (input) INTEGER - The number of rows of the matrix C. - - N (input) INTEGER - The number of columns of the matrix C. - - V (input) COMPLEX*16 array, dimension (M) if SIDE = 'L' - or (N) if SIDE = 'R' - The vector v in the representation of H. - - TAU (input) COMPLEX*16 - The value tau in the representation of H. 
- - C (input/output) COMPLEX*16 array, dimension (LDC,N) - On entry, the m by n matrix C. - On exit, C is overwritten by the matrix H * C if SIDE = 'L', - or C * H if SIDE = 'R'. - - LDC (input) INTEGER - The leading dimension of the array C. LDA >= max(1,M). - - WORK (workspace) COMPLEX*16 array, dimension (N) if SIDE = 'L' - or (M) if SIDE = 'R' - WORK is not referenced if H has order < 11. - - ===================================================================== -*/ - - - /* Parameter adjustments */ - --v; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - if ((tau->r == 0. && tau->i == 0.)) { - return 0; - } - if (lsame_(side, "L")) { - -/* Form H * C, where H has order m. */ - - switch (*m) { - case 1: goto L10; - case 2: goto L30; - case 3: goto L50; - case 4: goto L70; - case 5: goto L90; - case 6: goto L110; - case 7: goto L130; - case 8: goto L150; - case 9: goto L170; - case 10: goto L190; - } - -/* - Code for general M - - w := C'*v -*/ - - zgemv_("Conjugate transpose", m, n, &c_b60, &c__[c_offset], ldc, &v[1] - , &c__1, &c_b59, &work[1], &c__1); - -/* C := C - tau * v * w' */ - - z__1.r = -tau->r, z__1.i = -tau->i; - zgerc_(m, n, &z__1, &v[1], &c__1, &work[1], &c__1, &c__[c_offset], - ldc); - goto L410; -L10: - -/* Special code for 1 x 1 Householder */ - - z__3.r = tau->r * v[1].r - tau->i * v[1].i, z__3.i = tau->r * v[1].i - + tau->i * v[1].r; - d_cnjg(&z__4, &v[1]); - z__2.r = z__3.r * z__4.r - z__3.i * z__4.i, z__2.i = z__3.r * z__4.i - + z__3.i * z__4.r; - z__1.r = 1. - z__2.r, z__1.i = 0. 
- z__2.i; - t1.r = z__1.r, t1.i = z__1.i; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j * c_dim1 + 1; - i__3 = j * c_dim1 + 1; - z__1.r = t1.r * c__[i__3].r - t1.i * c__[i__3].i, z__1.i = t1.r * - c__[i__3].i + t1.i * c__[i__3].r; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L20: */ - } - goto L410; -L30: - -/* Special code for 2 x 2 Householder */ - - d_cnjg(&z__1, &v[1]); - v1.r = z__1.r, v1.i = z__1.i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - d_cnjg(&z__1, &v[2]); - v2.r = z__1.r, v2.i = z__1.i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j * c_dim1 + 1; - z__2.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__2.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j * c_dim1 + 2; - z__3.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__3.i = v2.r * - c__[i__3].i + v2.i * c__[i__3].r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j * c_dim1 + 1; - i__3 = j * c_dim1 + 1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 2; - i__3 = j * c_dim1 + 2; - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L40: */ - } - goto L410; -L50: - -/* Special code for 3 x 3 Householder */ - - d_cnjg(&z__1, &v[1]); - v1.r = z__1.r, v1.i = z__1.i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - d_cnjg(&z__1, &v[2]); - v2.r = z__1.r, v2.i = z__1.i; - d_cnjg(&z__2, 
&v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - d_cnjg(&z__1, &v[3]); - v3.r = z__1.r, v3.i = z__1.i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t3.r = z__1.r, t3.i = z__1.i; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j * c_dim1 + 1; - z__3.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__3.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j * c_dim1 + 2; - z__4.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__4.i = v2.r * - c__[i__3].i + v2.i * c__[i__3].r; - z__2.r = z__3.r + z__4.r, z__2.i = z__3.i + z__4.i; - i__4 = j * c_dim1 + 3; - z__5.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__5.i = v3.r * - c__[i__4].i + v3.i * c__[i__4].r; - z__1.r = z__2.r + z__5.r, z__1.i = z__2.i + z__5.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j * c_dim1 + 1; - i__3 = j * c_dim1 + 1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 2; - i__3 = j * c_dim1 + 2; - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 3; - i__3 = j * c_dim1 + 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L60: */ - } - goto L410; -L70: - -/* Special code for 4 x 4 Householder */ - - d_cnjg(&z__1, &v[1]); - v1.r = z__1.r, v1.i = z__1.i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - d_cnjg(&z__1, &v[2]); - v2.r = z__1.r, v2.i = z__1.i; - d_cnjg(&z__2, &v2); - z__1.r = 
tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - d_cnjg(&z__1, &v[3]); - v3.r = z__1.r, v3.i = z__1.i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t3.r = z__1.r, t3.i = z__1.i; - d_cnjg(&z__1, &v[4]); - v4.r = z__1.r, v4.i = z__1.i; - d_cnjg(&z__2, &v4); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t4.r = z__1.r, t4.i = z__1.i; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j * c_dim1 + 1; - z__4.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__4.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j * c_dim1 + 2; - z__5.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__5.i = v2.r * - c__[i__3].i + v2.i * c__[i__3].r; - z__3.r = z__4.r + z__5.r, z__3.i = z__4.i + z__5.i; - i__4 = j * c_dim1 + 3; - z__6.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__6.i = v3.r * - c__[i__4].i + v3.i * c__[i__4].r; - z__2.r = z__3.r + z__6.r, z__2.i = z__3.i + z__6.i; - i__5 = j * c_dim1 + 4; - z__7.r = v4.r * c__[i__5].r - v4.i * c__[i__5].i, z__7.i = v4.r * - c__[i__5].i + v4.i * c__[i__5].r; - z__1.r = z__2.r + z__7.r, z__1.i = z__2.i + z__7.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j * c_dim1 + 1; - i__3 = j * c_dim1 + 1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 2; - i__3 = j * c_dim1 + 2; - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 3; - i__3 = j * c_dim1 + 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - 
i__2 = j * c_dim1 + 4; - i__3 = j * c_dim1 + 4; - z__2.r = sum.r * t4.r - sum.i * t4.i, z__2.i = sum.r * t4.i + - sum.i * t4.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L80: */ - } - goto L410; -L90: - -/* Special code for 5 x 5 Householder */ - - d_cnjg(&z__1, &v[1]); - v1.r = z__1.r, v1.i = z__1.i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - d_cnjg(&z__1, &v[2]); - v2.r = z__1.r, v2.i = z__1.i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - d_cnjg(&z__1, &v[3]); - v3.r = z__1.r, v3.i = z__1.i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t3.r = z__1.r, t3.i = z__1.i; - d_cnjg(&z__1, &v[4]); - v4.r = z__1.r, v4.i = z__1.i; - d_cnjg(&z__2, &v4); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t4.r = z__1.r, t4.i = z__1.i; - d_cnjg(&z__1, &v[5]); - v5.r = z__1.r, v5.i = z__1.i; - d_cnjg(&z__2, &v5); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t5.r = z__1.r, t5.i = z__1.i; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j * c_dim1 + 1; - z__5.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__5.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j * c_dim1 + 2; - z__6.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__6.i = v2.r * - c__[i__3].i + v2.i * c__[i__3].r; - z__4.r = z__5.r + z__6.r, z__4.i = z__5.i + z__6.i; - i__4 = j * c_dim1 + 3; - z__7.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__7.i = v3.r * - c__[i__4].i + v3.i * c__[i__4].r; - z__3.r = z__4.r + z__7.r, z__3.i = z__4.i + z__7.i; - i__5 = j * c_dim1 + 4; - z__8.r = v4.r * c__[i__5].r - v4.i * c__[i__5].i, z__8.i = v4.r * - c__[i__5].i + v4.i * c__[i__5].r; - 
z__2.r = z__3.r + z__8.r, z__2.i = z__3.i + z__8.i; - i__6 = j * c_dim1 + 5; - z__9.r = v5.r * c__[i__6].r - v5.i * c__[i__6].i, z__9.i = v5.r * - c__[i__6].i + v5.i * c__[i__6].r; - z__1.r = z__2.r + z__9.r, z__1.i = z__2.i + z__9.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j * c_dim1 + 1; - i__3 = j * c_dim1 + 1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 2; - i__3 = j * c_dim1 + 2; - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 3; - i__3 = j * c_dim1 + 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 4; - i__3 = j * c_dim1 + 4; - z__2.r = sum.r * t4.r - sum.i * t4.i, z__2.i = sum.r * t4.i + - sum.i * t4.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 5; - i__3 = j * c_dim1 + 5; - z__2.r = sum.r * t5.r - sum.i * t5.i, z__2.i = sum.r * t5.i + - sum.i * t5.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L100: */ - } - goto L410; -L110: - -/* Special code for 6 x 6 Householder */ - - d_cnjg(&z__1, &v[1]); - v1.r = z__1.r, v1.i = z__1.i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - d_cnjg(&z__1, &v[2]); - v2.r = z__1.r, v2.i = z__1.i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - d_cnjg(&z__1, &v[3]); - v3.r = z__1.r, 
v3.i = z__1.i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t3.r = z__1.r, t3.i = z__1.i; - d_cnjg(&z__1, &v[4]); - v4.r = z__1.r, v4.i = z__1.i; - d_cnjg(&z__2, &v4); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t4.r = z__1.r, t4.i = z__1.i; - d_cnjg(&z__1, &v[5]); - v5.r = z__1.r, v5.i = z__1.i; - d_cnjg(&z__2, &v5); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t5.r = z__1.r, t5.i = z__1.i; - d_cnjg(&z__1, &v[6]); - v6.r = z__1.r, v6.i = z__1.i; - d_cnjg(&z__2, &v6); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t6.r = z__1.r, t6.i = z__1.i; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j * c_dim1 + 1; - z__6.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__6.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j * c_dim1 + 2; - z__7.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__7.i = v2.r * - c__[i__3].i + v2.i * c__[i__3].r; - z__5.r = z__6.r + z__7.r, z__5.i = z__6.i + z__7.i; - i__4 = j * c_dim1 + 3; - z__8.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__8.i = v3.r * - c__[i__4].i + v3.i * c__[i__4].r; - z__4.r = z__5.r + z__8.r, z__4.i = z__5.i + z__8.i; - i__5 = j * c_dim1 + 4; - z__9.r = v4.r * c__[i__5].r - v4.i * c__[i__5].i, z__9.i = v4.r * - c__[i__5].i + v4.i * c__[i__5].r; - z__3.r = z__4.r + z__9.r, z__3.i = z__4.i + z__9.i; - i__6 = j * c_dim1 + 5; - z__10.r = v5.r * c__[i__6].r - v5.i * c__[i__6].i, z__10.i = v5.r - * c__[i__6].i + v5.i * c__[i__6].r; - z__2.r = z__3.r + z__10.r, z__2.i = z__3.i + z__10.i; - i__7 = j * c_dim1 + 6; - z__11.r = v6.r * c__[i__7].r - v6.i * c__[i__7].i, z__11.i = v6.r - * c__[i__7].i + v6.i * c__[i__7].r; - z__1.r = z__2.r + z__11.r, z__1.i = z__2.i + z__11.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j * c_dim1 + 1; - i__3 = j * c_dim1 + 1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = 
sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 2; - i__3 = j * c_dim1 + 2; - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 3; - i__3 = j * c_dim1 + 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 4; - i__3 = j * c_dim1 + 4; - z__2.r = sum.r * t4.r - sum.i * t4.i, z__2.i = sum.r * t4.i + - sum.i * t4.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 5; - i__3 = j * c_dim1 + 5; - z__2.r = sum.r * t5.r - sum.i * t5.i, z__2.i = sum.r * t5.i + - sum.i * t5.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 6; - i__3 = j * c_dim1 + 6; - z__2.r = sum.r * t6.r - sum.i * t6.i, z__2.i = sum.r * t6.i + - sum.i * t6.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L120: */ - } - goto L410; -L130: - -/* Special code for 7 x 7 Householder */ - - d_cnjg(&z__1, &v[1]); - v1.r = z__1.r, v1.i = z__1.i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - d_cnjg(&z__1, &v[2]); - v2.r = z__1.r, v2.i = z__1.i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - d_cnjg(&z__1, &v[3]); - v3.r = z__1.r, v3.i = z__1.i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - 
t3.r = z__1.r, t3.i = z__1.i; - d_cnjg(&z__1, &v[4]); - v4.r = z__1.r, v4.i = z__1.i; - d_cnjg(&z__2, &v4); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t4.r = z__1.r, t4.i = z__1.i; - d_cnjg(&z__1, &v[5]); - v5.r = z__1.r, v5.i = z__1.i; - d_cnjg(&z__2, &v5); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t5.r = z__1.r, t5.i = z__1.i; - d_cnjg(&z__1, &v[6]); - v6.r = z__1.r, v6.i = z__1.i; - d_cnjg(&z__2, &v6); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t6.r = z__1.r, t6.i = z__1.i; - d_cnjg(&z__1, &v[7]); - v7.r = z__1.r, v7.i = z__1.i; - d_cnjg(&z__2, &v7); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t7.r = z__1.r, t7.i = z__1.i; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j * c_dim1 + 1; - z__7.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__7.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j * c_dim1 + 2; - z__8.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__8.i = v2.r * - c__[i__3].i + v2.i * c__[i__3].r; - z__6.r = z__7.r + z__8.r, z__6.i = z__7.i + z__8.i; - i__4 = j * c_dim1 + 3; - z__9.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__9.i = v3.r * - c__[i__4].i + v3.i * c__[i__4].r; - z__5.r = z__6.r + z__9.r, z__5.i = z__6.i + z__9.i; - i__5 = j * c_dim1 + 4; - z__10.r = v4.r * c__[i__5].r - v4.i * c__[i__5].i, z__10.i = v4.r - * c__[i__5].i + v4.i * c__[i__5].r; - z__4.r = z__5.r + z__10.r, z__4.i = z__5.i + z__10.i; - i__6 = j * c_dim1 + 5; - z__11.r = v5.r * c__[i__6].r - v5.i * c__[i__6].i, z__11.i = v5.r - * c__[i__6].i + v5.i * c__[i__6].r; - z__3.r = z__4.r + z__11.r, z__3.i = z__4.i + z__11.i; - i__7 = j * c_dim1 + 6; - z__12.r = v6.r * c__[i__7].r - v6.i * c__[i__7].i, z__12.i = v6.r - * c__[i__7].i + v6.i * c__[i__7].r; - z__2.r = z__3.r + z__12.r, z__2.i = z__3.i + z__12.i; - i__8 = j * c_dim1 + 7; - z__13.r = v7.r * c__[i__8].r - 
v7.i * c__[i__8].i, z__13.i = v7.r - * c__[i__8].i + v7.i * c__[i__8].r; - z__1.r = z__2.r + z__13.r, z__1.i = z__2.i + z__13.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j * c_dim1 + 1; - i__3 = j * c_dim1 + 1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 2; - i__3 = j * c_dim1 + 2; - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 3; - i__3 = j * c_dim1 + 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 4; - i__3 = j * c_dim1 + 4; - z__2.r = sum.r * t4.r - sum.i * t4.i, z__2.i = sum.r * t4.i + - sum.i * t4.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 5; - i__3 = j * c_dim1 + 5; - z__2.r = sum.r * t5.r - sum.i * t5.i, z__2.i = sum.r * t5.i + - sum.i * t5.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 6; - i__3 = j * c_dim1 + 6; - z__2.r = sum.r * t6.r - sum.i * t6.i, z__2.i = sum.r * t6.i + - sum.i * t6.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 7; - i__3 = j * c_dim1 + 7; - z__2.r = sum.r * t7.r - sum.i * t7.i, z__2.i = sum.r * t7.i + - sum.i * t7.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L140: */ - } - goto L410; -L150: - -/* Special code for 8 x 8 Householder */ - - d_cnjg(&z__1, &v[1]); - v1.r = z__1.r, v1.i = z__1.i; - d_cnjg(&z__2, 
&v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - d_cnjg(&z__1, &v[2]); - v2.r = z__1.r, v2.i = z__1.i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - d_cnjg(&z__1, &v[3]); - v3.r = z__1.r, v3.i = z__1.i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t3.r = z__1.r, t3.i = z__1.i; - d_cnjg(&z__1, &v[4]); - v4.r = z__1.r, v4.i = z__1.i; - d_cnjg(&z__2, &v4); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t4.r = z__1.r, t4.i = z__1.i; - d_cnjg(&z__1, &v[5]); - v5.r = z__1.r, v5.i = z__1.i; - d_cnjg(&z__2, &v5); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t5.r = z__1.r, t5.i = z__1.i; - d_cnjg(&z__1, &v[6]); - v6.r = z__1.r, v6.i = z__1.i; - d_cnjg(&z__2, &v6); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t6.r = z__1.r, t6.i = z__1.i; - d_cnjg(&z__1, &v[7]); - v7.r = z__1.r, v7.i = z__1.i; - d_cnjg(&z__2, &v7); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t7.r = z__1.r, t7.i = z__1.i; - d_cnjg(&z__1, &v[8]); - v8.r = z__1.r, v8.i = z__1.i; - d_cnjg(&z__2, &v8); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t8.r = z__1.r, t8.i = z__1.i; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j * c_dim1 + 1; - z__8.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__8.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j * c_dim1 + 2; - z__9.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__9.i = v2.r * - c__[i__3].i + v2.i * c__[i__3].r; - z__7.r = z__8.r + z__9.r, z__7.i = z__8.i + z__9.i; - i__4 = j * c_dim1 + 3; - z__10.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__10.i = v3.r - * 
c__[i__4].i + v3.i * c__[i__4].r; - z__6.r = z__7.r + z__10.r, z__6.i = z__7.i + z__10.i; - i__5 = j * c_dim1 + 4; - z__11.r = v4.r * c__[i__5].r - v4.i * c__[i__5].i, z__11.i = v4.r - * c__[i__5].i + v4.i * c__[i__5].r; - z__5.r = z__6.r + z__11.r, z__5.i = z__6.i + z__11.i; - i__6 = j * c_dim1 + 5; - z__12.r = v5.r * c__[i__6].r - v5.i * c__[i__6].i, z__12.i = v5.r - * c__[i__6].i + v5.i * c__[i__6].r; - z__4.r = z__5.r + z__12.r, z__4.i = z__5.i + z__12.i; - i__7 = j * c_dim1 + 6; - z__13.r = v6.r * c__[i__7].r - v6.i * c__[i__7].i, z__13.i = v6.r - * c__[i__7].i + v6.i * c__[i__7].r; - z__3.r = z__4.r + z__13.r, z__3.i = z__4.i + z__13.i; - i__8 = j * c_dim1 + 7; - z__14.r = v7.r * c__[i__8].r - v7.i * c__[i__8].i, z__14.i = v7.r - * c__[i__8].i + v7.i * c__[i__8].r; - z__2.r = z__3.r + z__14.r, z__2.i = z__3.i + z__14.i; - i__9 = j * c_dim1 + 8; - z__15.r = v8.r * c__[i__9].r - v8.i * c__[i__9].i, z__15.i = v8.r - * c__[i__9].i + v8.i * c__[i__9].r; - z__1.r = z__2.r + z__15.r, z__1.i = z__2.i + z__15.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j * c_dim1 + 1; - i__3 = j * c_dim1 + 1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 2; - i__3 = j * c_dim1 + 2; - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 3; - i__3 = j * c_dim1 + 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 4; - i__3 = j * c_dim1 + 4; - z__2.r = sum.r * t4.r - sum.i * t4.i, z__2.i = sum.r * t4.i + - sum.i * t4.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, 
c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 5; - i__3 = j * c_dim1 + 5; - z__2.r = sum.r * t5.r - sum.i * t5.i, z__2.i = sum.r * t5.i + - sum.i * t5.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 6; - i__3 = j * c_dim1 + 6; - z__2.r = sum.r * t6.r - sum.i * t6.i, z__2.i = sum.r * t6.i + - sum.i * t6.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 7; - i__3 = j * c_dim1 + 7; - z__2.r = sum.r * t7.r - sum.i * t7.i, z__2.i = sum.r * t7.i + - sum.i * t7.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 8; - i__3 = j * c_dim1 + 8; - z__2.r = sum.r * t8.r - sum.i * t8.i, z__2.i = sum.r * t8.i + - sum.i * t8.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L160: */ - } - goto L410; -L170: - -/* Special code for 9 x 9 Householder */ - - d_cnjg(&z__1, &v[1]); - v1.r = z__1.r, v1.i = z__1.i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - d_cnjg(&z__1, &v[2]); - v2.r = z__1.r, v2.i = z__1.i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - d_cnjg(&z__1, &v[3]); - v3.r = z__1.r, v3.i = z__1.i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t3.r = z__1.r, t3.i = z__1.i; - d_cnjg(&z__1, &v[4]); - v4.r = z__1.r, v4.i = z__1.i; - d_cnjg(&z__2, &v4); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t4.r = z__1.r, t4.i = z__1.i; - d_cnjg(&z__1, &v[5]); - v5.r = z__1.r, v5.i = z__1.i; - d_cnjg(&z__2, &v5); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, 
z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t5.r = z__1.r, t5.i = z__1.i; - d_cnjg(&z__1, &v[6]); - v6.r = z__1.r, v6.i = z__1.i; - d_cnjg(&z__2, &v6); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t6.r = z__1.r, t6.i = z__1.i; - d_cnjg(&z__1, &v[7]); - v7.r = z__1.r, v7.i = z__1.i; - d_cnjg(&z__2, &v7); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t7.r = z__1.r, t7.i = z__1.i; - d_cnjg(&z__1, &v[8]); - v8.r = z__1.r, v8.i = z__1.i; - d_cnjg(&z__2, &v8); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t8.r = z__1.r, t8.i = z__1.i; - d_cnjg(&z__1, &v[9]); - v9.r = z__1.r, v9.i = z__1.i; - d_cnjg(&z__2, &v9); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t9.r = z__1.r, t9.i = z__1.i; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j * c_dim1 + 1; - z__9.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__9.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j * c_dim1 + 2; - z__10.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__10.i = v2.r - * c__[i__3].i + v2.i * c__[i__3].r; - z__8.r = z__9.r + z__10.r, z__8.i = z__9.i + z__10.i; - i__4 = j * c_dim1 + 3; - z__11.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__11.i = v3.r - * c__[i__4].i + v3.i * c__[i__4].r; - z__7.r = z__8.r + z__11.r, z__7.i = z__8.i + z__11.i; - i__5 = j * c_dim1 + 4; - z__12.r = v4.r * c__[i__5].r - v4.i * c__[i__5].i, z__12.i = v4.r - * c__[i__5].i + v4.i * c__[i__5].r; - z__6.r = z__7.r + z__12.r, z__6.i = z__7.i + z__12.i; - i__6 = j * c_dim1 + 5; - z__13.r = v5.r * c__[i__6].r - v5.i * c__[i__6].i, z__13.i = v5.r - * c__[i__6].i + v5.i * c__[i__6].r; - z__5.r = z__6.r + z__13.r, z__5.i = z__6.i + z__13.i; - i__7 = j * c_dim1 + 6; - z__14.r = v6.r * c__[i__7].r - v6.i * c__[i__7].i, z__14.i = v6.r - * c__[i__7].i + v6.i * c__[i__7].r; - z__4.r = z__5.r + z__14.r, z__4.i = z__5.i + z__14.i; - 
i__8 = j * c_dim1 + 7; - z__15.r = v7.r * c__[i__8].r - v7.i * c__[i__8].i, z__15.i = v7.r - * c__[i__8].i + v7.i * c__[i__8].r; - z__3.r = z__4.r + z__15.r, z__3.i = z__4.i + z__15.i; - i__9 = j * c_dim1 + 8; - z__16.r = v8.r * c__[i__9].r - v8.i * c__[i__9].i, z__16.i = v8.r - * c__[i__9].i + v8.i * c__[i__9].r; - z__2.r = z__3.r + z__16.r, z__2.i = z__3.i + z__16.i; - i__10 = j * c_dim1 + 9; - z__17.r = v9.r * c__[i__10].r - v9.i * c__[i__10].i, z__17.i = - v9.r * c__[i__10].i + v9.i * c__[i__10].r; - z__1.r = z__2.r + z__17.r, z__1.i = z__2.i + z__17.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j * c_dim1 + 1; - i__3 = j * c_dim1 + 1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 2; - i__3 = j * c_dim1 + 2; - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 3; - i__3 = j * c_dim1 + 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 4; - i__3 = j * c_dim1 + 4; - z__2.r = sum.r * t4.r - sum.i * t4.i, z__2.i = sum.r * t4.i + - sum.i * t4.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 5; - i__3 = j * c_dim1 + 5; - z__2.r = sum.r * t5.r - sum.i * t5.i, z__2.i = sum.r * t5.i + - sum.i * t5.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 6; - i__3 = j * c_dim1 + 6; - z__2.r = sum.r * t6.r - sum.i * t6.i, z__2.i = sum.r * t6.i + - sum.i * t6.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - 
c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 7; - i__3 = j * c_dim1 + 7; - z__2.r = sum.r * t7.r - sum.i * t7.i, z__2.i = sum.r * t7.i + - sum.i * t7.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 8; - i__3 = j * c_dim1 + 8; - z__2.r = sum.r * t8.r - sum.i * t8.i, z__2.i = sum.r * t8.i + - sum.i * t8.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 9; - i__3 = j * c_dim1 + 9; - z__2.r = sum.r * t9.r - sum.i * t9.i, z__2.i = sum.r * t9.i + - sum.i * t9.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L180: */ - } - goto L410; -L190: - -/* Special code for 10 x 10 Householder */ - - d_cnjg(&z__1, &v[1]); - v1.r = z__1.r, v1.i = z__1.i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - d_cnjg(&z__1, &v[2]); - v2.r = z__1.r, v2.i = z__1.i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - d_cnjg(&z__1, &v[3]); - v3.r = z__1.r, v3.i = z__1.i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t3.r = z__1.r, t3.i = z__1.i; - d_cnjg(&z__1, &v[4]); - v4.r = z__1.r, v4.i = z__1.i; - d_cnjg(&z__2, &v4); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t4.r = z__1.r, t4.i = z__1.i; - d_cnjg(&z__1, &v[5]); - v5.r = z__1.r, v5.i = z__1.i; - d_cnjg(&z__2, &v5); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t5.r = z__1.r, t5.i = z__1.i; - d_cnjg(&z__1, &v[6]); - v6.r = z__1.r, v6.i = z__1.i; - d_cnjg(&z__2, &v6); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r 
* z__2.i - + tau->i * z__2.r; - t6.r = z__1.r, t6.i = z__1.i; - d_cnjg(&z__1, &v[7]); - v7.r = z__1.r, v7.i = z__1.i; - d_cnjg(&z__2, &v7); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t7.r = z__1.r, t7.i = z__1.i; - d_cnjg(&z__1, &v[8]); - v8.r = z__1.r, v8.i = z__1.i; - d_cnjg(&z__2, &v8); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t8.r = z__1.r, t8.i = z__1.i; - d_cnjg(&z__1, &v[9]); - v9.r = z__1.r, v9.i = z__1.i; - d_cnjg(&z__2, &v9); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t9.r = z__1.r, t9.i = z__1.i; - d_cnjg(&z__1, &v[10]); - v10.r = z__1.r, v10.i = z__1.i; - d_cnjg(&z__2, &v10); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t10.r = z__1.r, t10.i = z__1.i; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j * c_dim1 + 1; - z__10.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__10.i = v1.r - * c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j * c_dim1 + 2; - z__11.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__11.i = v2.r - * c__[i__3].i + v2.i * c__[i__3].r; - z__9.r = z__10.r + z__11.r, z__9.i = z__10.i + z__11.i; - i__4 = j * c_dim1 + 3; - z__12.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__12.i = v3.r - * c__[i__4].i + v3.i * c__[i__4].r; - z__8.r = z__9.r + z__12.r, z__8.i = z__9.i + z__12.i; - i__5 = j * c_dim1 + 4; - z__13.r = v4.r * c__[i__5].r - v4.i * c__[i__5].i, z__13.i = v4.r - * c__[i__5].i + v4.i * c__[i__5].r; - z__7.r = z__8.r + z__13.r, z__7.i = z__8.i + z__13.i; - i__6 = j * c_dim1 + 5; - z__14.r = v5.r * c__[i__6].r - v5.i * c__[i__6].i, z__14.i = v5.r - * c__[i__6].i + v5.i * c__[i__6].r; - z__6.r = z__7.r + z__14.r, z__6.i = z__7.i + z__14.i; - i__7 = j * c_dim1 + 6; - z__15.r = v6.r * c__[i__7].r - v6.i * c__[i__7].i, z__15.i = v6.r - * c__[i__7].i + v6.i * c__[i__7].r; - z__5.r = z__6.r + z__15.r, z__5.i = z__6.i + z__15.i; - i__8 = j 
* c_dim1 + 7; - z__16.r = v7.r * c__[i__8].r - v7.i * c__[i__8].i, z__16.i = v7.r - * c__[i__8].i + v7.i * c__[i__8].r; - z__4.r = z__5.r + z__16.r, z__4.i = z__5.i + z__16.i; - i__9 = j * c_dim1 + 8; - z__17.r = v8.r * c__[i__9].r - v8.i * c__[i__9].i, z__17.i = v8.r - * c__[i__9].i + v8.i * c__[i__9].r; - z__3.r = z__4.r + z__17.r, z__3.i = z__4.i + z__17.i; - i__10 = j * c_dim1 + 9; - z__18.r = v9.r * c__[i__10].r - v9.i * c__[i__10].i, z__18.i = - v9.r * c__[i__10].i + v9.i * c__[i__10].r; - z__2.r = z__3.r + z__18.r, z__2.i = z__3.i + z__18.i; - i__11 = j * c_dim1 + 10; - z__19.r = v10.r * c__[i__11].r - v10.i * c__[i__11].i, z__19.i = - v10.r * c__[i__11].i + v10.i * c__[i__11].r; - z__1.r = z__2.r + z__19.r, z__1.i = z__2.i + z__19.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j * c_dim1 + 1; - i__3 = j * c_dim1 + 1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 2; - i__3 = j * c_dim1 + 2; - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 3; - i__3 = j * c_dim1 + 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 4; - i__3 = j * c_dim1 + 4; - z__2.r = sum.r * t4.r - sum.i * t4.i, z__2.i = sum.r * t4.i + - sum.i * t4.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 5; - i__3 = j * c_dim1 + 5; - z__2.r = sum.r * t5.r - sum.i * t5.i, z__2.i = sum.r * t5.i + - sum.i * t5.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * 
c_dim1 + 6; - i__3 = j * c_dim1 + 6; - z__2.r = sum.r * t6.r - sum.i * t6.i, z__2.i = sum.r * t6.i + - sum.i * t6.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 7; - i__3 = j * c_dim1 + 7; - z__2.r = sum.r * t7.r - sum.i * t7.i, z__2.i = sum.r * t7.i + - sum.i * t7.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 8; - i__3 = j * c_dim1 + 8; - z__2.r = sum.r * t8.r - sum.i * t8.i, z__2.i = sum.r * t8.i + - sum.i * t8.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 9; - i__3 = j * c_dim1 + 9; - z__2.r = sum.r * t9.r - sum.i * t9.i, z__2.i = sum.r * t9.i + - sum.i * t9.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j * c_dim1 + 10; - i__3 = j * c_dim1 + 10; - z__2.r = sum.r * t10.r - sum.i * t10.i, z__2.i = sum.r * t10.i + - sum.i * t10.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L200: */ - } - goto L410; - } else { - -/* Form C * H, where H has order n. 
*/ - - switch (*n) { - case 1: goto L210; - case 2: goto L230; - case 3: goto L250; - case 4: goto L270; - case 5: goto L290; - case 6: goto L310; - case 7: goto L330; - case 8: goto L350; - case 9: goto L370; - case 10: goto L390; - } - -/* - Code for general N - - w := C * v -*/ - - zgemv_("No transpose", m, n, &c_b60, &c__[c_offset], ldc, &v[1], & - c__1, &c_b59, &work[1], &c__1); - -/* C := C - tau * w * v' */ - - z__1.r = -tau->r, z__1.i = -tau->i; - zgerc_(m, n, &z__1, &work[1], &c__1, &v[1], &c__1, &c__[c_offset], - ldc); - goto L410; -L210: - -/* Special code for 1 x 1 Householder */ - - z__3.r = tau->r * v[1].r - tau->i * v[1].i, z__3.i = tau->r * v[1].i - + tau->i * v[1].r; - d_cnjg(&z__4, &v[1]); - z__2.r = z__3.r * z__4.r - z__3.i * z__4.i, z__2.i = z__3.r * z__4.i - + z__3.i * z__4.r; - z__1.r = 1. - z__2.r, z__1.i = 0. - z__2.i; - t1.r = z__1.r, t1.i = z__1.i; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - i__2 = j + c_dim1; - i__3 = j + c_dim1; - z__1.r = t1.r * c__[i__3].r - t1.i * c__[i__3].i, z__1.i = t1.r * - c__[i__3].i + t1.i * c__[i__3].r; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L220: */ - } - goto L410; -L230: - -/* Special code for 2 x 2 Householder */ - - v1.r = v[1].r, v1.i = v[1].i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - v2.r = v[2].r, v2.i = v[2].i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - i__2 = j + c_dim1; - z__2.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__2.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j + ((c_dim1) << (1)); - z__3.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__3.i = v2.r * - c__[i__3].i + v2.i * c__[i__3].r; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + z__3.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j + c_dim1; - i__3 = j + c_dim1; 
- z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (1)); - i__3 = j + ((c_dim1) << (1)); - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L240: */ - } - goto L410; -L250: - -/* Special code for 3 x 3 Householder */ - - v1.r = v[1].r, v1.i = v[1].i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - v2.r = v[2].r, v2.i = v[2].i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - v3.r = v[3].r, v3.i = v[3].i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t3.r = z__1.r, t3.i = z__1.i; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - i__2 = j + c_dim1; - z__3.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__3.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j + ((c_dim1) << (1)); - z__4.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__4.i = v2.r * - c__[i__3].i + v2.i * c__[i__3].r; - z__2.r = z__3.r + z__4.r, z__2.i = z__3.i + z__4.i; - i__4 = j + c_dim1 * 3; - z__5.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__5.i = v3.r * - c__[i__4].i + v3.i * c__[i__4].r; - z__1.r = z__2.r + z__5.r, z__1.i = z__2.i + z__5.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j + c_dim1; - i__3 = j + c_dim1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (1)); - i__3 = j + ((c_dim1) << (1)); - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - 
sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 3; - i__3 = j + c_dim1 * 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L260: */ - } - goto L410; -L270: - -/* Special code for 4 x 4 Householder */ - - v1.r = v[1].r, v1.i = v[1].i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - v2.r = v[2].r, v2.i = v[2].i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - v3.r = v[3].r, v3.i = v[3].i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t3.r = z__1.r, t3.i = z__1.i; - v4.r = v[4].r, v4.i = v[4].i; - d_cnjg(&z__2, &v4); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t4.r = z__1.r, t4.i = z__1.i; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - i__2 = j + c_dim1; - z__4.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__4.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j + ((c_dim1) << (1)); - z__5.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__5.i = v2.r * - c__[i__3].i + v2.i * c__[i__3].r; - z__3.r = z__4.r + z__5.r, z__3.i = z__4.i + z__5.i; - i__4 = j + c_dim1 * 3; - z__6.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__6.i = v3.r * - c__[i__4].i + v3.i * c__[i__4].r; - z__2.r = z__3.r + z__6.r, z__2.i = z__3.i + z__6.i; - i__5 = j + ((c_dim1) << (2)); - z__7.r = v4.r * c__[i__5].r - v4.i * c__[i__5].i, z__7.i = v4.r * - c__[i__5].i + v4.i * c__[i__5].r; - z__1.r = z__2.r + z__7.r, z__1.i = z__2.i + z__7.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j + c_dim1; - i__3 = j + c_dim1; - z__2.r = sum.r * t1.r - sum.i 
* t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (1)); - i__3 = j + ((c_dim1) << (1)); - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 3; - i__3 = j + c_dim1 * 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (2)); - i__3 = j + ((c_dim1) << (2)); - z__2.r = sum.r * t4.r - sum.i * t4.i, z__2.i = sum.r * t4.i + - sum.i * t4.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L280: */ - } - goto L410; -L290: - -/* Special code for 5 x 5 Householder */ - - v1.r = v[1].r, v1.i = v[1].i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - v2.r = v[2].r, v2.i = v[2].i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - v3.r = v[3].r, v3.i = v[3].i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t3.r = z__1.r, t3.i = z__1.i; - v4.r = v[4].r, v4.i = v[4].i; - d_cnjg(&z__2, &v4); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t4.r = z__1.r, t4.i = z__1.i; - v5.r = v[5].r, v5.i = v[5].i; - d_cnjg(&z__2, &v5); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t5.r = z__1.r, t5.i = z__1.i; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - i__2 = j + c_dim1; - z__5.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, 
z__5.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j + ((c_dim1) << (1)); - z__6.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__6.i = v2.r * - c__[i__3].i + v2.i * c__[i__3].r; - z__4.r = z__5.r + z__6.r, z__4.i = z__5.i + z__6.i; - i__4 = j + c_dim1 * 3; - z__7.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__7.i = v3.r * - c__[i__4].i + v3.i * c__[i__4].r; - z__3.r = z__4.r + z__7.r, z__3.i = z__4.i + z__7.i; - i__5 = j + ((c_dim1) << (2)); - z__8.r = v4.r * c__[i__5].r - v4.i * c__[i__5].i, z__8.i = v4.r * - c__[i__5].i + v4.i * c__[i__5].r; - z__2.r = z__3.r + z__8.r, z__2.i = z__3.i + z__8.i; - i__6 = j + c_dim1 * 5; - z__9.r = v5.r * c__[i__6].r - v5.i * c__[i__6].i, z__9.i = v5.r * - c__[i__6].i + v5.i * c__[i__6].r; - z__1.r = z__2.r + z__9.r, z__1.i = z__2.i + z__9.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j + c_dim1; - i__3 = j + c_dim1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (1)); - i__3 = j + ((c_dim1) << (1)); - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 3; - i__3 = j + c_dim1 * 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (2)); - i__3 = j + ((c_dim1) << (2)); - z__2.r = sum.r * t4.r - sum.i * t4.i, z__2.i = sum.r * t4.i + - sum.i * t4.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 5; - i__3 = j + c_dim1 * 5; - z__2.r = sum.r * t5.r - sum.i * t5.i, z__2.i = sum.r * t5.i + - sum.i * t5.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - 
z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L300: */ - } - goto L410; -L310: - -/* Special code for 6 x 6 Householder */ - - v1.r = v[1].r, v1.i = v[1].i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - v2.r = v[2].r, v2.i = v[2].i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - v3.r = v[3].r, v3.i = v[3].i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t3.r = z__1.r, t3.i = z__1.i; - v4.r = v[4].r, v4.i = v[4].i; - d_cnjg(&z__2, &v4); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t4.r = z__1.r, t4.i = z__1.i; - v5.r = v[5].r, v5.i = v[5].i; - d_cnjg(&z__2, &v5); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t5.r = z__1.r, t5.i = z__1.i; - v6.r = v[6].r, v6.i = v[6].i; - d_cnjg(&z__2, &v6); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t6.r = z__1.r, t6.i = z__1.i; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - i__2 = j + c_dim1; - z__6.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__6.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j + ((c_dim1) << (1)); - z__7.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__7.i = v2.r * - c__[i__3].i + v2.i * c__[i__3].r; - z__5.r = z__6.r + z__7.r, z__5.i = z__6.i + z__7.i; - i__4 = j + c_dim1 * 3; - z__8.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__8.i = v3.r * - c__[i__4].i + v3.i * c__[i__4].r; - z__4.r = z__5.r + z__8.r, z__4.i = z__5.i + z__8.i; - i__5 = j + ((c_dim1) << (2)); - z__9.r = v4.r * c__[i__5].r - v4.i * c__[i__5].i, z__9.i = v4.r * - c__[i__5].i + v4.i * c__[i__5].r; - z__3.r = z__4.r + z__9.r, z__3.i = z__4.i + z__9.i; - i__6 = j + c_dim1 * 5; - z__10.r = v5.r * c__[i__6].r - 
v5.i * c__[i__6].i, z__10.i = v5.r - * c__[i__6].i + v5.i * c__[i__6].r; - z__2.r = z__3.r + z__10.r, z__2.i = z__3.i + z__10.i; - i__7 = j + c_dim1 * 6; - z__11.r = v6.r * c__[i__7].r - v6.i * c__[i__7].i, z__11.i = v6.r - * c__[i__7].i + v6.i * c__[i__7].r; - z__1.r = z__2.r + z__11.r, z__1.i = z__2.i + z__11.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j + c_dim1; - i__3 = j + c_dim1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (1)); - i__3 = j + ((c_dim1) << (1)); - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 3; - i__3 = j + c_dim1 * 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (2)); - i__3 = j + ((c_dim1) << (2)); - z__2.r = sum.r * t4.r - sum.i * t4.i, z__2.i = sum.r * t4.i + - sum.i * t4.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 5; - i__3 = j + c_dim1 * 5; - z__2.r = sum.r * t5.r - sum.i * t5.i, z__2.i = sum.r * t5.i + - sum.i * t5.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 6; - i__3 = j + c_dim1 * 6; - z__2.r = sum.r * t6.r - sum.i * t6.i, z__2.i = sum.r * t6.i + - sum.i * t6.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L320: */ - } - goto L410; -L330: - -/* Special code for 7 x 7 Householder */ - - v1.r = v[1].r, v1.i = v[1].i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i 
= tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - v2.r = v[2].r, v2.i = v[2].i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - v3.r = v[3].r, v3.i = v[3].i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t3.r = z__1.r, t3.i = z__1.i; - v4.r = v[4].r, v4.i = v[4].i; - d_cnjg(&z__2, &v4); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t4.r = z__1.r, t4.i = z__1.i; - v5.r = v[5].r, v5.i = v[5].i; - d_cnjg(&z__2, &v5); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t5.r = z__1.r, t5.i = z__1.i; - v6.r = v[6].r, v6.i = v[6].i; - d_cnjg(&z__2, &v6); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t6.r = z__1.r, t6.i = z__1.i; - v7.r = v[7].r, v7.i = v[7].i; - d_cnjg(&z__2, &v7); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t7.r = z__1.r, t7.i = z__1.i; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - i__2 = j + c_dim1; - z__7.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__7.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j + ((c_dim1) << (1)); - z__8.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__8.i = v2.r * - c__[i__3].i + v2.i * c__[i__3].r; - z__6.r = z__7.r + z__8.r, z__6.i = z__7.i + z__8.i; - i__4 = j + c_dim1 * 3; - z__9.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__9.i = v3.r * - c__[i__4].i + v3.i * c__[i__4].r; - z__5.r = z__6.r + z__9.r, z__5.i = z__6.i + z__9.i; - i__5 = j + ((c_dim1) << (2)); - z__10.r = v4.r * c__[i__5].r - v4.i * c__[i__5].i, z__10.i = v4.r - * c__[i__5].i + v4.i * c__[i__5].r; - z__4.r = z__5.r + z__10.r, z__4.i = z__5.i + z__10.i; - i__6 = j + c_dim1 * 5; - z__11.r = v5.r * c__[i__6].r - v5.i * c__[i__6].i, z__11.i = v5.r - * c__[i__6].i + v5.i * 
c__[i__6].r; - z__3.r = z__4.r + z__11.r, z__3.i = z__4.i + z__11.i; - i__7 = j + c_dim1 * 6; - z__12.r = v6.r * c__[i__7].r - v6.i * c__[i__7].i, z__12.i = v6.r - * c__[i__7].i + v6.i * c__[i__7].r; - z__2.r = z__3.r + z__12.r, z__2.i = z__3.i + z__12.i; - i__8 = j + c_dim1 * 7; - z__13.r = v7.r * c__[i__8].r - v7.i * c__[i__8].i, z__13.i = v7.r - * c__[i__8].i + v7.i * c__[i__8].r; - z__1.r = z__2.r + z__13.r, z__1.i = z__2.i + z__13.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j + c_dim1; - i__3 = j + c_dim1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (1)); - i__3 = j + ((c_dim1) << (1)); - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 3; - i__3 = j + c_dim1 * 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (2)); - i__3 = j + ((c_dim1) << (2)); - z__2.r = sum.r * t4.r - sum.i * t4.i, z__2.i = sum.r * t4.i + - sum.i * t4.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 5; - i__3 = j + c_dim1 * 5; - z__2.r = sum.r * t5.r - sum.i * t5.i, z__2.i = sum.r * t5.i + - sum.i * t5.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 6; - i__3 = j + c_dim1 * 6; - z__2.r = sum.r * t6.r - sum.i * t6.i, z__2.i = sum.r * t6.i + - sum.i * t6.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 7; - i__3 = j + c_dim1 * 7; - z__2.r = 
sum.r * t7.r - sum.i * t7.i, z__2.i = sum.r * t7.i + - sum.i * t7.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L340: */ - } - goto L410; -L350: - -/* Special code for 8 x 8 Householder */ - - v1.r = v[1].r, v1.i = v[1].i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - v2.r = v[2].r, v2.i = v[2].i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - v3.r = v[3].r, v3.i = v[3].i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t3.r = z__1.r, t3.i = z__1.i; - v4.r = v[4].r, v4.i = v[4].i; - d_cnjg(&z__2, &v4); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t4.r = z__1.r, t4.i = z__1.i; - v5.r = v[5].r, v5.i = v[5].i; - d_cnjg(&z__2, &v5); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t5.r = z__1.r, t5.i = z__1.i; - v6.r = v[6].r, v6.i = v[6].i; - d_cnjg(&z__2, &v6); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t6.r = z__1.r, t6.i = z__1.i; - v7.r = v[7].r, v7.i = v[7].i; - d_cnjg(&z__2, &v7); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t7.r = z__1.r, t7.i = z__1.i; - v8.r = v[8].r, v8.i = v[8].i; - d_cnjg(&z__2, &v8); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t8.r = z__1.r, t8.i = z__1.i; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - i__2 = j + c_dim1; - z__8.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__8.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j + ((c_dim1) << (1)); - z__9.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__9.i = v2.r * - c__[i__3].i + v2.i * c__[i__3].r; - 
z__7.r = z__8.r + z__9.r, z__7.i = z__8.i + z__9.i; - i__4 = j + c_dim1 * 3; - z__10.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__10.i = v3.r - * c__[i__4].i + v3.i * c__[i__4].r; - z__6.r = z__7.r + z__10.r, z__6.i = z__7.i + z__10.i; - i__5 = j + ((c_dim1) << (2)); - z__11.r = v4.r * c__[i__5].r - v4.i * c__[i__5].i, z__11.i = v4.r - * c__[i__5].i + v4.i * c__[i__5].r; - z__5.r = z__6.r + z__11.r, z__5.i = z__6.i + z__11.i; - i__6 = j + c_dim1 * 5; - z__12.r = v5.r * c__[i__6].r - v5.i * c__[i__6].i, z__12.i = v5.r - * c__[i__6].i + v5.i * c__[i__6].r; - z__4.r = z__5.r + z__12.r, z__4.i = z__5.i + z__12.i; - i__7 = j + c_dim1 * 6; - z__13.r = v6.r * c__[i__7].r - v6.i * c__[i__7].i, z__13.i = v6.r - * c__[i__7].i + v6.i * c__[i__7].r; - z__3.r = z__4.r + z__13.r, z__3.i = z__4.i + z__13.i; - i__8 = j + c_dim1 * 7; - z__14.r = v7.r * c__[i__8].r - v7.i * c__[i__8].i, z__14.i = v7.r - * c__[i__8].i + v7.i * c__[i__8].r; - z__2.r = z__3.r + z__14.r, z__2.i = z__3.i + z__14.i; - i__9 = j + ((c_dim1) << (3)); - z__15.r = v8.r * c__[i__9].r - v8.i * c__[i__9].i, z__15.i = v8.r - * c__[i__9].i + v8.i * c__[i__9].r; - z__1.r = z__2.r + z__15.r, z__1.i = z__2.i + z__15.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j + c_dim1; - i__3 = j + c_dim1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (1)); - i__3 = j + ((c_dim1) << (1)); - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 3; - i__3 = j + c_dim1 * 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (2)); - i__3 = j + 
((c_dim1) << (2)); - z__2.r = sum.r * t4.r - sum.i * t4.i, z__2.i = sum.r * t4.i + - sum.i * t4.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 5; - i__3 = j + c_dim1 * 5; - z__2.r = sum.r * t5.r - sum.i * t5.i, z__2.i = sum.r * t5.i + - sum.i * t5.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 6; - i__3 = j + c_dim1 * 6; - z__2.r = sum.r * t6.r - sum.i * t6.i, z__2.i = sum.r * t6.i + - sum.i * t6.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 7; - i__3 = j + c_dim1 * 7; - z__2.r = sum.r * t7.r - sum.i * t7.i, z__2.i = sum.r * t7.i + - sum.i * t7.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (3)); - i__3 = j + ((c_dim1) << (3)); - z__2.r = sum.r * t8.r - sum.i * t8.i, z__2.i = sum.r * t8.i + - sum.i * t8.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L360: */ - } - goto L410; -L370: - -/* Special code for 9 x 9 Householder */ - - v1.r = v[1].r, v1.i = v[1].i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - v2.r = v[2].r, v2.i = v[2].i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - v3.r = v[3].r, v3.i = v[3].i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t3.r = z__1.r, t3.i = z__1.i; - v4.r = v[4].r, v4.i = v[4].i; - d_cnjg(&z__2, &v4); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t4.r = z__1.r, t4.i = z__1.i; - v5.r = v[5].r, v5.i 
= v[5].i; - d_cnjg(&z__2, &v5); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t5.r = z__1.r, t5.i = z__1.i; - v6.r = v[6].r, v6.i = v[6].i; - d_cnjg(&z__2, &v6); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t6.r = z__1.r, t6.i = z__1.i; - v7.r = v[7].r, v7.i = v[7].i; - d_cnjg(&z__2, &v7); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t7.r = z__1.r, t7.i = z__1.i; - v8.r = v[8].r, v8.i = v[8].i; - d_cnjg(&z__2, &v8); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t8.r = z__1.r, t8.i = z__1.i; - v9.r = v[9].r, v9.i = v[9].i; - d_cnjg(&z__2, &v9); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t9.r = z__1.r, t9.i = z__1.i; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - i__2 = j + c_dim1; - z__9.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__9.i = v1.r * - c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j + ((c_dim1) << (1)); - z__10.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__10.i = v2.r - * c__[i__3].i + v2.i * c__[i__3].r; - z__8.r = z__9.r + z__10.r, z__8.i = z__9.i + z__10.i; - i__4 = j + c_dim1 * 3; - z__11.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__11.i = v3.r - * c__[i__4].i + v3.i * c__[i__4].r; - z__7.r = z__8.r + z__11.r, z__7.i = z__8.i + z__11.i; - i__5 = j + ((c_dim1) << (2)); - z__12.r = v4.r * c__[i__5].r - v4.i * c__[i__5].i, z__12.i = v4.r - * c__[i__5].i + v4.i * c__[i__5].r; - z__6.r = z__7.r + z__12.r, z__6.i = z__7.i + z__12.i; - i__6 = j + c_dim1 * 5; - z__13.r = v5.r * c__[i__6].r - v5.i * c__[i__6].i, z__13.i = v5.r - * c__[i__6].i + v5.i * c__[i__6].r; - z__5.r = z__6.r + z__13.r, z__5.i = z__6.i + z__13.i; - i__7 = j + c_dim1 * 6; - z__14.r = v6.r * c__[i__7].r - v6.i * c__[i__7].i, z__14.i = v6.r - * c__[i__7].i + v6.i * c__[i__7].r; - z__4.r = z__5.r + z__14.r, z__4.i = z__5.i + z__14.i; - i__8 = j + 
c_dim1 * 7; - z__15.r = v7.r * c__[i__8].r - v7.i * c__[i__8].i, z__15.i = v7.r - * c__[i__8].i + v7.i * c__[i__8].r; - z__3.r = z__4.r + z__15.r, z__3.i = z__4.i + z__15.i; - i__9 = j + ((c_dim1) << (3)); - z__16.r = v8.r * c__[i__9].r - v8.i * c__[i__9].i, z__16.i = v8.r - * c__[i__9].i + v8.i * c__[i__9].r; - z__2.r = z__3.r + z__16.r, z__2.i = z__3.i + z__16.i; - i__10 = j + c_dim1 * 9; - z__17.r = v9.r * c__[i__10].r - v9.i * c__[i__10].i, z__17.i = - v9.r * c__[i__10].i + v9.i * c__[i__10].r; - z__1.r = z__2.r + z__17.r, z__1.i = z__2.i + z__17.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j + c_dim1; - i__3 = j + c_dim1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (1)); - i__3 = j + ((c_dim1) << (1)); - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 3; - i__3 = j + c_dim1 * 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (2)); - i__3 = j + ((c_dim1) << (2)); - z__2.r = sum.r * t4.r - sum.i * t4.i, z__2.i = sum.r * t4.i + - sum.i * t4.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 5; - i__3 = j + c_dim1 * 5; - z__2.r = sum.r * t5.r - sum.i * t5.i, z__2.i = sum.r * t5.i + - sum.i * t5.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 6; - i__3 = j + c_dim1 * 6; - z__2.r = sum.r * t6.r - sum.i * t6.i, z__2.i = sum.r * t6.i + - sum.i * t6.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - 
z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 7; - i__3 = j + c_dim1 * 7; - z__2.r = sum.r * t7.r - sum.i * t7.i, z__2.i = sum.r * t7.i + - sum.i * t7.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (3)); - i__3 = j + ((c_dim1) << (3)); - z__2.r = sum.r * t8.r - sum.i * t8.i, z__2.i = sum.r * t8.i + - sum.i * t8.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 9; - i__3 = j + c_dim1 * 9; - z__2.r = sum.r * t9.r - sum.i * t9.i, z__2.i = sum.r * t9.i + - sum.i * t9.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L380: */ - } - goto L410; -L390: - -/* Special code for 10 x 10 Householder */ - - v1.r = v[1].r, v1.i = v[1].i; - d_cnjg(&z__2, &v1); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t1.r = z__1.r, t1.i = z__1.i; - v2.r = v[2].r, v2.i = v[2].i; - d_cnjg(&z__2, &v2); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t2.r = z__1.r, t2.i = z__1.i; - v3.r = v[3].r, v3.i = v[3].i; - d_cnjg(&z__2, &v3); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t3.r = z__1.r, t3.i = z__1.i; - v4.r = v[4].r, v4.i = v[4].i; - d_cnjg(&z__2, &v4); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t4.r = z__1.r, t4.i = z__1.i; - v5.r = v[5].r, v5.i = v[5].i; - d_cnjg(&z__2, &v5); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t5.r = z__1.r, t5.i = z__1.i; - v6.r = v[6].r, v6.i = v[6].i; - d_cnjg(&z__2, &v6); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t6.r = z__1.r, t6.i = z__1.i; - v7.r = v[7].r, v7.i = v[7].i; - d_cnjg(&z__2, &v7); - 
z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t7.r = z__1.r, t7.i = z__1.i; - v8.r = v[8].r, v8.i = v[8].i; - d_cnjg(&z__2, &v8); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t8.r = z__1.r, t8.i = z__1.i; - v9.r = v[9].r, v9.i = v[9].i; - d_cnjg(&z__2, &v9); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t9.r = z__1.r, t9.i = z__1.i; - v10.r = v[10].r, v10.i = v[10].i; - d_cnjg(&z__2, &v10); - z__1.r = tau->r * z__2.r - tau->i * z__2.i, z__1.i = tau->r * z__2.i - + tau->i * z__2.r; - t10.r = z__1.r, t10.i = z__1.i; - i__1 = *m; - for (j = 1; j <= i__1; ++j) { - i__2 = j + c_dim1; - z__10.r = v1.r * c__[i__2].r - v1.i * c__[i__2].i, z__10.i = v1.r - * c__[i__2].i + v1.i * c__[i__2].r; - i__3 = j + ((c_dim1) << (1)); - z__11.r = v2.r * c__[i__3].r - v2.i * c__[i__3].i, z__11.i = v2.r - * c__[i__3].i + v2.i * c__[i__3].r; - z__9.r = z__10.r + z__11.r, z__9.i = z__10.i + z__11.i; - i__4 = j + c_dim1 * 3; - z__12.r = v3.r * c__[i__4].r - v3.i * c__[i__4].i, z__12.i = v3.r - * c__[i__4].i + v3.i * c__[i__4].r; - z__8.r = z__9.r + z__12.r, z__8.i = z__9.i + z__12.i; - i__5 = j + ((c_dim1) << (2)); - z__13.r = v4.r * c__[i__5].r - v4.i * c__[i__5].i, z__13.i = v4.r - * c__[i__5].i + v4.i * c__[i__5].r; - z__7.r = z__8.r + z__13.r, z__7.i = z__8.i + z__13.i; - i__6 = j + c_dim1 * 5; - z__14.r = v5.r * c__[i__6].r - v5.i * c__[i__6].i, z__14.i = v5.r - * c__[i__6].i + v5.i * c__[i__6].r; - z__6.r = z__7.r + z__14.r, z__6.i = z__7.i + z__14.i; - i__7 = j + c_dim1 * 6; - z__15.r = v6.r * c__[i__7].r - v6.i * c__[i__7].i, z__15.i = v6.r - * c__[i__7].i + v6.i * c__[i__7].r; - z__5.r = z__6.r + z__15.r, z__5.i = z__6.i + z__15.i; - i__8 = j + c_dim1 * 7; - z__16.r = v7.r * c__[i__8].r - v7.i * c__[i__8].i, z__16.i = v7.r - * c__[i__8].i + v7.i * c__[i__8].r; - z__4.r = z__5.r + z__16.r, z__4.i = z__5.i + z__16.i; - i__9 = j + ((c_dim1) << 
(3)); - z__17.r = v8.r * c__[i__9].r - v8.i * c__[i__9].i, z__17.i = v8.r - * c__[i__9].i + v8.i * c__[i__9].r; - z__3.r = z__4.r + z__17.r, z__3.i = z__4.i + z__17.i; - i__10 = j + c_dim1 * 9; - z__18.r = v9.r * c__[i__10].r - v9.i * c__[i__10].i, z__18.i = - v9.r * c__[i__10].i + v9.i * c__[i__10].r; - z__2.r = z__3.r + z__18.r, z__2.i = z__3.i + z__18.i; - i__11 = j + c_dim1 * 10; - z__19.r = v10.r * c__[i__11].r - v10.i * c__[i__11].i, z__19.i = - v10.r * c__[i__11].i + v10.i * c__[i__11].r; - z__1.r = z__2.r + z__19.r, z__1.i = z__2.i + z__19.i; - sum.r = z__1.r, sum.i = z__1.i; - i__2 = j + c_dim1; - i__3 = j + c_dim1; - z__2.r = sum.r * t1.r - sum.i * t1.i, z__2.i = sum.r * t1.i + - sum.i * t1.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (1)); - i__3 = j + ((c_dim1) << (1)); - z__2.r = sum.r * t2.r - sum.i * t2.i, z__2.i = sum.r * t2.i + - sum.i * t2.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 3; - i__3 = j + c_dim1 * 3; - z__2.r = sum.r * t3.r - sum.i * t3.i, z__2.i = sum.r * t3.i + - sum.i * t3.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (2)); - i__3 = j + ((c_dim1) << (2)); - z__2.r = sum.r * t4.r - sum.i * t4.i, z__2.i = sum.r * t4.i + - sum.i * t4.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 5; - i__3 = j + c_dim1 * 5; - z__2.r = sum.r * t5.r - sum.i * t5.i, z__2.i = sum.r * t5.i + - sum.i * t5.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 6; - i__3 = j + c_dim1 * 6; - z__2.r = sum.r * t6.r - sum.i * t6.i, z__2.i = sum.r * t6.i + - sum.i * t6.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - 
z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 7; - i__3 = j + c_dim1 * 7; - z__2.r = sum.r * t7.r - sum.i * t7.i, z__2.i = sum.r * t7.i + - sum.i * t7.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + ((c_dim1) << (3)); - i__3 = j + ((c_dim1) << (3)); - z__2.r = sum.r * t8.r - sum.i * t8.i, z__2.i = sum.r * t8.i + - sum.i * t8.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 9; - i__3 = j + c_dim1 * 9; - z__2.r = sum.r * t9.r - sum.i * t9.i, z__2.i = sum.r * t9.i + - sum.i * t9.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; - i__2 = j + c_dim1 * 10; - i__3 = j + c_dim1 * 10; - z__2.r = sum.r * t10.r - sum.i * t10.i, z__2.i = sum.r * t10.i + - sum.i * t10.r; - z__1.r = c__[i__3].r - z__2.r, z__1.i = c__[i__3].i - z__2.i; - c__[i__2].r = z__1.r, c__[i__2].i = z__1.i; -/* L400: */ - } - goto L410; - } -L410: - return 0; - -/* End of ZLARFX */ - -} /* zlarfx_ */ - -/* Subroutine */ int zlascl_(char *type__, integer *kl, integer *ku, - doublereal *cfrom, doublereal *cto, integer *m, integer *n, - doublecomplex *a, integer *lda, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4, i__5; - doublecomplex z__1; - - /* Local variables */ - static integer i__, j, k1, k2, k3, k4; - static doublereal mul, cto1; - static logical done; - static doublereal ctoc; - extern logical lsame_(char *, char *); - static integer itype; - static doublereal cfrom1; - - static doublereal cfromc; - extern /* Subroutine */ int xerbla_(char *, integer *); - static doublereal bignum, smlnum; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - February 29, 1992 - - - Purpose - ======= - - ZLASCL multiplies the M by N complex matrix A by the real scalar - CTO/CFROM. This is done without over/underflow as long as the final - result CTO*A(I,J)/CFROM does not over/underflow. TYPE specifies that - A may be full, upper triangular, lower triangular, upper Hessenberg, - or banded. - - Arguments - ========= - - TYPE (input) CHARACTER*1 - TYPE indicates the storage type of the input matrix. - = 'G': A is a full matrix. - = 'L': A is a lower triangular matrix. - = 'U': A is an upper triangular matrix. - = 'H': A is an upper Hessenberg matrix. - = 'B': A is a symmetric band matrix with lower bandwidth KL - and upper bandwidth KU and with only the lower - half stored. - = 'Q': A is a symmetric band matrix with lower bandwidth KL - and upper bandwidth KU and with only the upper - half stored. - = 'Z': A is a band matrix with lower bandwidth KL and upper - bandwidth KU. - - KL (input) INTEGER - The lower bandwidth of A. Referenced only if TYPE = 'B', - 'Q' or 'Z'. - - KU (input) INTEGER - The upper bandwidth of A. Referenced only if TYPE = 'B', - 'Q' or 'Z'. - - CFROM (input) DOUBLE PRECISION - CTO (input) DOUBLE PRECISION - The matrix A is multiplied by CTO/CFROM. A(I,J) is computed - without over/underflow if the final result CTO*A(I,J)/CFROM - can be represented without over/underflow. CFROM must be - nonzero. - - M (input) INTEGER - The number of rows of the matrix A. M >= 0. - - N (input) INTEGER - The number of columns of the matrix A. N >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - The matrix to be multiplied by CTO/CFROM. See TYPE for the - storage type. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - INFO (output) INTEGER - 0 - successful exit - <0 - if INFO = -i, the i-th argument had an illegal value.
- - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - - /* Function Body */ - *info = 0; - - if (lsame_(type__, "G")) { - itype = 0; - } else if (lsame_(type__, "L")) { - itype = 1; - } else if (lsame_(type__, "U")) { - itype = 2; - } else if (lsame_(type__, "H")) { - itype = 3; - } else if (lsame_(type__, "B")) { - itype = 4; - } else if (lsame_(type__, "Q")) { - itype = 5; - } else if (lsame_(type__, "Z")) { - itype = 6; - } else { - itype = -1; - } - - if (itype == -1) { - *info = -1; - } else if (*cfrom == 0.) { - *info = -4; - } else if (*m < 0) { - *info = -6; - } else if (*n < 0 || (itype == 4 && *n != *m) || (itype == 5 && *n != *m)) - { - *info = -7; - } else if ((itype <= 3 && *lda < max(1,*m))) { - *info = -9; - } else if (itype >= 4) { -/* Computing MAX */ - i__1 = *m - 1; - if (*kl < 0 || *kl > max(i__1,0)) { - *info = -2; - } else /* if(complicated condition) */ { -/* Computing MAX */ - i__1 = *n - 1; - if (*ku < 0 || *ku > max(i__1,0) || ((itype == 4 || itype == 5) && - *kl != *ku)) { - *info = -3; - } else if ((itype == 4 && *lda < *kl + 1) || (itype == 5 && *lda < - *ku + 1) || (itype == 6 && *lda < ((*kl) << (1)) + *ku + - 1)) { - *info = -9; - } - } - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZLASCL", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0 || *m == 0) { - return 0; - } - -/* Get machine parameters */ - - smlnum = SAFEMINIMUM; - bignum = 1. 
/ smlnum; - - cfromc = *cfrom; - ctoc = *cto; - -L10: - cfrom1 = cfromc * smlnum; - cto1 = ctoc / bignum; - if ((abs(cfrom1) > abs(ctoc) && ctoc != 0.)) { - mul = smlnum; - done = FALSE_; - cfromc = cfrom1; - } else if (abs(cto1) > abs(cfromc)) { - mul = bignum; - done = FALSE_; - ctoc = cto1; - } else { - mul = ctoc / cfromc; - done = TRUE_; - } - - if (itype == 0) { - -/* Full matrix */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__ + j * a_dim1; - z__1.r = mul * a[i__4].r, z__1.i = mul * a[i__4].i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L20: */ - } -/* L30: */ - } - - } else if (itype == 1) { - -/* Lower triangular matrix */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = j; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__ + j * a_dim1; - z__1.r = mul * a[i__4].r, z__1.i = mul * a[i__4].i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L40: */ - } -/* L50: */ - } - - } else if (itype == 2) { - -/* Upper triangular matrix */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = min(j,*m); - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__ + j * a_dim1; - z__1.r = mul * a[i__4].r, z__1.i = mul * a[i__4].i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L60: */ - } -/* L70: */ - } - - } else if (itype == 3) { - -/* Upper Hessenberg matrix */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MIN */ - i__3 = j + 1; - i__2 = min(i__3,*m); - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__ + j * a_dim1; - z__1.r = mul * a[i__4].r, z__1.i = mul * a[i__4].i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L80: */ - } -/* L90: */ - } - - } else if (itype == 4) { - -/* Lower half of a symmetric band matrix */ - - k3 = *kl + 1; - k4 = *n + 1; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MIN */ - i__3 = k3, i__4 = k4 - j; - i__2 = min(i__3,i__4); - for (i__ = 1; i__ 
<= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__ + j * a_dim1; - z__1.r = mul * a[i__4].r, z__1.i = mul * a[i__4].i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L100: */ - } -/* L110: */ - } - - } else if (itype == 5) { - -/* Upper half of a symmetric band matrix */ - - k1 = *ku + 2; - k3 = *ku + 1; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MAX */ - i__2 = k1 - j; - i__3 = k3; - for (i__ = max(i__2,1); i__ <= i__3; ++i__) { - i__2 = i__ + j * a_dim1; - i__4 = i__ + j * a_dim1; - z__1.r = mul * a[i__4].r, z__1.i = mul * a[i__4].i; - a[i__2].r = z__1.r, a[i__2].i = z__1.i; -/* L120: */ - } -/* L130: */ - } - - } else if (itype == 6) { - -/* Band matrix */ - - k1 = *kl + *ku + 2; - k2 = *kl + 1; - k3 = ((*kl) << (1)) + *ku + 1; - k4 = *kl + *ku + 1 + *m; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MAX */ - i__3 = k1 - j; -/* Computing MIN */ - i__4 = k3, i__5 = k4 - j; - i__2 = min(i__4,i__5); - for (i__ = max(i__3,k2); i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__ + j * a_dim1; - z__1.r = mul * a[i__4].r, z__1.i = mul * a[i__4].i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L140: */ - } -/* L150: */ - } - - } - - if (! done) { - goto L10; - } - - return 0; - -/* End of ZLASCL */ - -} /* zlascl_ */ - -/* Subroutine */ int zlaset_(char *uplo, integer *m, integer *n, - doublecomplex *alpha, doublecomplex *beta, doublecomplex *a, integer * - lda) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__, j; - extern logical lsame_(char *, char *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - ZLASET initializes a 2-D array A to BETA on the diagonal and - ALPHA on the offdiagonals. 
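-
-    For example (an illustration consistent with the description above,
-    not a quote from the LAPACK reference): a call with m = 2, n = 3,
-    and UPLO equal to neither 'U' nor 'L' returns
-
-        A = ( beta   alpha  alpha )
-            ( alpha  beta   alpha )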
- - Arguments - ========= - - UPLO (input) CHARACTER*1 - Specifies the part of the matrix A to be set. - = 'U': Upper triangular part is set. The lower triangle - is unchanged. - = 'L': Lower triangular part is set. The upper triangle - is unchanged. - Otherwise: All of the matrix A is set. - - M (input) INTEGER - On entry, M specifies the number of rows of A. - - N (input) INTEGER - On entry, N specifies the number of columns of A. - - ALPHA (input) COMPLEX*16 - All the offdiagonal array elements are set to ALPHA. - - BETA (input) COMPLEX*16 - All the diagonal array elements are set to BETA. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the m by n matrix A. - On exit, A(i,j) = ALPHA, 1 <= i <= m, 1 <= j <= n, i.ne.j; - A(i,i) = BETA , 1 <= i <= min(m,n) - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - ===================================================================== -*/ - - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - - /* Function Body */ - if (lsame_(uplo, "U")) { - -/* - Set the diagonal to BETA and the strictly upper triangular - part of the array to ALPHA. -*/ - - i__1 = *n; - for (j = 2; j <= i__1; ++j) { -/* Computing MIN */ - i__3 = j - 1; - i__2 = min(i__3,*m); - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - a[i__3].r = alpha->r, a[i__3].i = alpha->i; -/* L10: */ - } -/* L20: */ - } - i__1 = min(*n,*m); - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__ + i__ * a_dim1; - a[i__2].r = beta->r, a[i__2].i = beta->i; -/* L30: */ - } - - } else if (lsame_(uplo, "L")) { - -/* - Set the diagonal to BETA and the strictly lower triangular - part of the array to ALPHA. 
-*/ - - i__1 = min(*m,*n); - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = j + 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - a[i__3].r = alpha->r, a[i__3].i = alpha->i; -/* L40: */ - } -/* L50: */ - } - i__1 = min(*n,*m); - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__ + i__ * a_dim1; - a[i__2].r = beta->r, a[i__2].i = beta->i; -/* L60: */ - } - - } else { - -/* - Set the array to BETA on the diagonal and ALPHA on the - offdiagonal. -*/ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - a[i__3].r = alpha->r, a[i__3].i = alpha->i; -/* L70: */ - } -/* L80: */ - } - i__1 = min(*m,*n); - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__ + i__ * a_dim1; - a[i__2].r = beta->r, a[i__2].i = beta->i; -/* L90: */ - } - } - - return 0; - -/* End of ZLASET */ - -} /* zlaset_ */ - -/* Subroutine */ int zlasr_(char *side, char *pivot, char *direct, integer *m, - integer *n, doublereal *c__, doublereal *s, doublecomplex *a, - integer *lda) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - doublecomplex z__1, z__2, z__3; - - /* Local variables */ - static integer i__, j, info; - static doublecomplex temp; - extern logical lsame_(char *, char *); - static doublereal ctemp, stemp; - extern /* Subroutine */ int xerbla_(char *, integer *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose - ======= - - ZLASR performs the transformation - - A := P*A, when SIDE = 'L' or 'l' ( Left-hand side ) - - A := A*P', when SIDE = 'R' or 'r' ( Right-hand side ) - - where A is an m by n complex matrix and P is an orthogonal matrix, - consisting of a sequence of plane rotations determined by the - parameters PIVOT and DIRECT as follows ( z = m when SIDE = 'L' or 'l' - and z = n when SIDE = 'R' or 'r' ): - - When DIRECT = 'F' or 'f' ( Forward sequence ) then - - P = P( z - 1 )*...*P( 2 )*P( 1 ), - - and when DIRECT = 'B' or 'b' ( Backward sequence ) then - - P = P( 1 )*P( 2 )*...*P( z - 1 ), - - where P( k ) is a plane rotation matrix for the following planes: - - when PIVOT = 'V' or 'v' ( Variable pivot ), - the plane ( k, k + 1 ) - - when PIVOT = 'T' or 't' ( Top pivot ), - the plane ( 1, k + 1 ) - - when PIVOT = 'B' or 'b' ( Bottom pivot ), - the plane ( k, z ) - - c( k ) and s( k ) must contain the cosine and sine that define the - matrix P( k ). The two by two plane rotation part of the matrix - P( k ), R( k ), is assumed to be of the form - - R( k ) = ( c( k ) s( k ) ). - ( -s( k ) c( k ) ) - - Arguments - ========= - - SIDE (input) CHARACTER*1 - Specifies whether the plane rotation matrix P is applied to - A on the left or the right. - = 'L': Left, compute A := P*A - = 'R': Right, compute A:= A*P' - - DIRECT (input) CHARACTER*1 - Specifies whether P is a forward or backward sequence of - plane rotations. - = 'F': Forward, P = P( z - 1 )*...*P( 2 )*P( 1 ) - = 'B': Backward, P = P( 1 )*P( 2 )*...*P( z - 1 ) - - PIVOT (input) CHARACTER*1 - Specifies the plane for which P(k) is a plane rotation - matrix. - = 'V': Variable pivot, the plane (k,k+1) - = 'T': Top pivot, the plane (1,k+1) - = 'B': Bottom pivot, the plane (k,z) - - M (input) INTEGER - The number of rows of the matrix A. If m <= 1, an immediate - return is effected. 
- - N (input) INTEGER - The number of columns of the matrix A. If n <= 1, an - immediate return is effected. - - C, S (input) DOUBLE PRECISION arrays, dimension - (M-1) if SIDE = 'L' - (N-1) if SIDE = 'R' - c(k) and s(k) contain the cosine and sine that define the - matrix P(k). The two by two plane rotation part of the - matrix P(k), R(k), is assumed to be of the form - R( k ) = ( c( k ) s( k ) ). - ( -s( k ) c( k ) ) - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - The m by n matrix A. On exit, A is overwritten by P*A if - SIDE = 'L' or by A*P' if SIDE = 'R'. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,M). - - ===================================================================== - - - Test the input parameters -*/ - - /* Parameter adjustments */ - --c__; - --s; - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - - /* Function Body */ - info = 0; - if (! (lsame_(side, "L") || lsame_(side, "R"))) { - info = 1; - } else if (! (lsame_(pivot, "V") || lsame_(pivot, - "T") || lsame_(pivot, "B"))) { - info = 2; - } else if (! (lsame_(direct, "F") || lsame_(direct, - "B"))) { - info = 3; - } else if (*m < 0) { - info = 4; - } else if (*n < 0) { - info = 5; - } else if (*lda < max(1,*m)) { - info = 9; - } - if (info != 0) { - xerbla_("ZLASR ", &info); - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0) { - return 0; - } - if (lsame_(side, "L")) { - -/* Form P * A */ - - if (lsame_(pivot, "V")) { - if (lsame_(direct, "F")) { - i__1 = *m - 1; - for (j = 1; j <= i__1; ++j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.)
{ - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = j + 1 + i__ * a_dim1; - temp.r = a[i__3].r, temp.i = a[i__3].i; - i__3 = j + 1 + i__ * a_dim1; - z__2.r = ctemp * temp.r, z__2.i = ctemp * temp.i; - i__4 = j + i__ * a_dim1; - z__3.r = stemp * a[i__4].r, z__3.i = stemp * a[ - i__4].i; - z__1.r = z__2.r - z__3.r, z__1.i = z__2.i - - z__3.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; - i__3 = j + i__ * a_dim1; - z__2.r = stemp * temp.r, z__2.i = stemp * temp.i; - i__4 = j + i__ * a_dim1; - z__3.r = ctemp * a[i__4].r, z__3.i = ctemp * a[ - i__4].i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L10: */ - } - } -/* L20: */ - } - } else if (lsame_(direct, "B")) { - for (j = *m - 1; j >= 1; --j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = j + 1 + i__ * a_dim1; - temp.r = a[i__2].r, temp.i = a[i__2].i; - i__2 = j + 1 + i__ * a_dim1; - z__2.r = ctemp * temp.r, z__2.i = ctemp * temp.i; - i__3 = j + i__ * a_dim1; - z__3.r = stemp * a[i__3].r, z__3.i = stemp * a[ - i__3].i; - z__1.r = z__2.r - z__3.r, z__1.i = z__2.i - - z__3.i; - a[i__2].r = z__1.r, a[i__2].i = z__1.i; - i__2 = j + i__ * a_dim1; - z__2.r = stemp * temp.r, z__2.i = stemp * temp.i; - i__3 = j + i__ * a_dim1; - z__3.r = ctemp * a[i__3].r, z__3.i = ctemp * a[ - i__3].i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - a[i__2].r = z__1.r, a[i__2].i = z__1.i; -/* L30: */ - } - } -/* L40: */ - } - } - } else if (lsame_(pivot, "T")) { - if (lsame_(direct, "F")) { - i__1 = *m; - for (j = 2; j <= i__1; ++j) { - ctemp = c__[j - 1]; - stemp = s[j - 1]; - if (ctemp != 1. || stemp != 0.) 
{ - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = j + i__ * a_dim1; - temp.r = a[i__3].r, temp.i = a[i__3].i; - i__3 = j + i__ * a_dim1; - z__2.r = ctemp * temp.r, z__2.i = ctemp * temp.i; - i__4 = i__ * a_dim1 + 1; - z__3.r = stemp * a[i__4].r, z__3.i = stemp * a[ - i__4].i; - z__1.r = z__2.r - z__3.r, z__1.i = z__2.i - - z__3.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; - i__3 = i__ * a_dim1 + 1; - z__2.r = stemp * temp.r, z__2.i = stemp * temp.i; - i__4 = i__ * a_dim1 + 1; - z__3.r = ctemp * a[i__4].r, z__3.i = ctemp * a[ - i__4].i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L50: */ - } - } -/* L60: */ - } - } else if (lsame_(direct, "B")) { - for (j = *m; j >= 2; --j) { - ctemp = c__[j - 1]; - stemp = s[j - 1]; - if (ctemp != 1. || stemp != 0.) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = j + i__ * a_dim1; - temp.r = a[i__2].r, temp.i = a[i__2].i; - i__2 = j + i__ * a_dim1; - z__2.r = ctemp * temp.r, z__2.i = ctemp * temp.i; - i__3 = i__ * a_dim1 + 1; - z__3.r = stemp * a[i__3].r, z__3.i = stemp * a[ - i__3].i; - z__1.r = z__2.r - z__3.r, z__1.i = z__2.i - - z__3.i; - a[i__2].r = z__1.r, a[i__2].i = z__1.i; - i__2 = i__ * a_dim1 + 1; - z__2.r = stemp * temp.r, z__2.i = stemp * temp.i; - i__3 = i__ * a_dim1 + 1; - z__3.r = ctemp * a[i__3].r, z__3.i = ctemp * a[ - i__3].i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - a[i__2].r = z__1.r, a[i__2].i = z__1.i; -/* L70: */ - } - } -/* L80: */ - } - } - } else if (lsame_(pivot, "B")) { - if (lsame_(direct, "F")) { - i__1 = *m - 1; - for (j = 1; j <= i__1; ++j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.) 
{ - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = j + i__ * a_dim1; - temp.r = a[i__3].r, temp.i = a[i__3].i; - i__3 = j + i__ * a_dim1; - i__4 = *m + i__ * a_dim1; - z__2.r = stemp * a[i__4].r, z__2.i = stemp * a[ - i__4].i; - z__3.r = ctemp * temp.r, z__3.i = ctemp * temp.i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; - i__3 = *m + i__ * a_dim1; - i__4 = *m + i__ * a_dim1; - z__2.r = ctemp * a[i__4].r, z__2.i = ctemp * a[ - i__4].i; - z__3.r = stemp * temp.r, z__3.i = stemp * temp.i; - z__1.r = z__2.r - z__3.r, z__1.i = z__2.i - - z__3.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L90: */ - } - } -/* L100: */ - } - } else if (lsame_(direct, "B")) { - for (j = *m - 1; j >= 1; --j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.) { - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = j + i__ * a_dim1; - temp.r = a[i__2].r, temp.i = a[i__2].i; - i__2 = j + i__ * a_dim1; - i__3 = *m + i__ * a_dim1; - z__2.r = stemp * a[i__3].r, z__2.i = stemp * a[ - i__3].i; - z__3.r = ctemp * temp.r, z__3.i = ctemp * temp.i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - a[i__2].r = z__1.r, a[i__2].i = z__1.i; - i__2 = *m + i__ * a_dim1; - i__3 = *m + i__ * a_dim1; - z__2.r = ctemp * a[i__3].r, z__2.i = ctemp * a[ - i__3].i; - z__3.r = stemp * temp.r, z__3.i = stemp * temp.i; - z__1.r = z__2.r - z__3.r, z__1.i = z__2.i - - z__3.i; - a[i__2].r = z__1.r, a[i__2].i = z__1.i; -/* L110: */ - } - } -/* L120: */ - } - } - } - } else if (lsame_(side, "R")) { - -/* Form A * P' */ - - if (lsame_(pivot, "V")) { - if (lsame_(direct, "F")) { - i__1 = *n - 1; - for (j = 1; j <= i__1; ++j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.) 
{ - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + (j + 1) * a_dim1; - temp.r = a[i__3].r, temp.i = a[i__3].i; - i__3 = i__ + (j + 1) * a_dim1; - z__2.r = ctemp * temp.r, z__2.i = ctemp * temp.i; - i__4 = i__ + j * a_dim1; - z__3.r = stemp * a[i__4].r, z__3.i = stemp * a[ - i__4].i; - z__1.r = z__2.r - z__3.r, z__1.i = z__2.i - - z__3.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; - i__3 = i__ + j * a_dim1; - z__2.r = stemp * temp.r, z__2.i = stemp * temp.i; - i__4 = i__ + j * a_dim1; - z__3.r = ctemp * a[i__4].r, z__3.i = ctemp * a[ - i__4].i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L130: */ - } - } -/* L140: */ - } - } else if (lsame_(direct, "B")) { - for (j = *n - 1; j >= 1; --j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.) { - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__ + (j + 1) * a_dim1; - temp.r = a[i__2].r, temp.i = a[i__2].i; - i__2 = i__ + (j + 1) * a_dim1; - z__2.r = ctemp * temp.r, z__2.i = ctemp * temp.i; - i__3 = i__ + j * a_dim1; - z__3.r = stemp * a[i__3].r, z__3.i = stemp * a[ - i__3].i; - z__1.r = z__2.r - z__3.r, z__1.i = z__2.i - - z__3.i; - a[i__2].r = z__1.r, a[i__2].i = z__1.i; - i__2 = i__ + j * a_dim1; - z__2.r = stemp * temp.r, z__2.i = stemp * temp.i; - i__3 = i__ + j * a_dim1; - z__3.r = ctemp * a[i__3].r, z__3.i = ctemp * a[ - i__3].i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - a[i__2].r = z__1.r, a[i__2].i = z__1.i; -/* L150: */ - } - } -/* L160: */ - } - } - } else if (lsame_(pivot, "T")) { - if (lsame_(direct, "F")) { - i__1 = *n; - for (j = 2; j <= i__1; ++j) { - ctemp = c__[j - 1]; - stemp = s[j - 1]; - if (ctemp != 1. || stemp != 0.) 
{ - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - temp.r = a[i__3].r, temp.i = a[i__3].i; - i__3 = i__ + j * a_dim1; - z__2.r = ctemp * temp.r, z__2.i = ctemp * temp.i; - i__4 = i__ + a_dim1; - z__3.r = stemp * a[i__4].r, z__3.i = stemp * a[ - i__4].i; - z__1.r = z__2.r - z__3.r, z__1.i = z__2.i - - z__3.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; - i__3 = i__ + a_dim1; - z__2.r = stemp * temp.r, z__2.i = stemp * temp.i; - i__4 = i__ + a_dim1; - z__3.r = ctemp * a[i__4].r, z__3.i = ctemp * a[ - i__4].i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L170: */ - } - } -/* L180: */ - } - } else if (lsame_(direct, "B")) { - for (j = *n; j >= 2; --j) { - ctemp = c__[j - 1]; - stemp = s[j - 1]; - if (ctemp != 1. || stemp != 0.) { - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__ + j * a_dim1; - temp.r = a[i__2].r, temp.i = a[i__2].i; - i__2 = i__ + j * a_dim1; - z__2.r = ctemp * temp.r, z__2.i = ctemp * temp.i; - i__3 = i__ + a_dim1; - z__3.r = stemp * a[i__3].r, z__3.i = stemp * a[ - i__3].i; - z__1.r = z__2.r - z__3.r, z__1.i = z__2.i - - z__3.i; - a[i__2].r = z__1.r, a[i__2].i = z__1.i; - i__2 = i__ + a_dim1; - z__2.r = stemp * temp.r, z__2.i = stemp * temp.i; - i__3 = i__ + a_dim1; - z__3.r = ctemp * a[i__3].r, z__3.i = ctemp * a[ - i__3].i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - a[i__2].r = z__1.r, a[i__2].i = z__1.i; -/* L190: */ - } - } -/* L200: */ - } - } - } else if (lsame_(pivot, "B")) { - if (lsame_(direct, "F")) { - i__1 = *n - 1; - for (j = 1; j <= i__1; ++j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.) 
{ - i__2 = *m; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - temp.r = a[i__3].r, temp.i = a[i__3].i; - i__3 = i__ + j * a_dim1; - i__4 = i__ + *n * a_dim1; - z__2.r = stemp * a[i__4].r, z__2.i = stemp * a[ - i__4].i; - z__3.r = ctemp * temp.r, z__3.i = ctemp * temp.i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; - i__3 = i__ + *n * a_dim1; - i__4 = i__ + *n * a_dim1; - z__2.r = ctemp * a[i__4].r, z__2.i = ctemp * a[ - i__4].i; - z__3.r = stemp * temp.r, z__3.i = stemp * temp.i; - z__1.r = z__2.r - z__3.r, z__1.i = z__2.i - - z__3.i; - a[i__3].r = z__1.r, a[i__3].i = z__1.i; -/* L210: */ - } - } -/* L220: */ - } - } else if (lsame_(direct, "B")) { - for (j = *n - 1; j >= 1; --j) { - ctemp = c__[j]; - stemp = s[j]; - if (ctemp != 1. || stemp != 0.) { - i__1 = *m; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__ + j * a_dim1; - temp.r = a[i__2].r, temp.i = a[i__2].i; - i__2 = i__ + j * a_dim1; - i__3 = i__ + *n * a_dim1; - z__2.r = stemp * a[i__3].r, z__2.i = stemp * a[ - i__3].i; - z__3.r = ctemp * temp.r, z__3.i = ctemp * temp.i; - z__1.r = z__2.r + z__3.r, z__1.i = z__2.i + - z__3.i; - a[i__2].r = z__1.r, a[i__2].i = z__1.i; - i__2 = i__ + *n * a_dim1; - i__3 = i__ + *n * a_dim1; - z__2.r = ctemp * a[i__3].r, z__2.i = ctemp * a[ - i__3].i; - z__3.r = stemp * temp.r, z__3.i = stemp * temp.i; - z__1.r = z__2.r - z__3.r, z__1.i = z__2.i - - z__3.i; - a[i__2].r = z__1.r, a[i__2].i = z__1.i; -/* L230: */ - } - } -/* L240: */ - } - } - } - } - - return 0; - -/* End of ZLASR */ - -} /* zlasr_ */ - -/* Subroutine */ int zlassq_(integer *n, doublecomplex *x, integer *incx, - doublereal *scale, doublereal *sumsq) -{ - /* System generated locals */ - integer i__1, i__2, i__3; - doublereal d__1; - - /* Builtin functions */ - double d_imag(doublecomplex *); - - /* Local variables */ - static integer ix; - static doublereal temp1; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. 
of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZLASSQ returns the values scl and ssq such that - - ( scl**2 )*ssq = x( 1 )**2 +...+ x( n )**2 + ( scale**2 )*sumsq, - - where x( i ) = abs( X( 1 + ( i - 1 )*INCX ) ). The value of sumsq is - assumed to be at least unity and the value of ssq will then satisfy - - 1.0 .le. ssq .le. ( sumsq + 2*n ). - - scale is assumed to be non-negative and scl returns the value - - scl = max( scale, abs( real( x( i ) ) ), abs( aimag( x( i ) ) ) ), - i - - scale and sumsq must be supplied in SCALE and SUMSQ respectively. - SCALE and SUMSQ are overwritten by scl and ssq respectively. - - The routine makes only one pass through the vector X. - - Arguments - ========= - - N (input) INTEGER - The number of elements to be used from the vector X. - - X (input) COMPLEX*16 array, dimension (N) - The vector x as described above. - x( i ) = X( 1 + ( i - 1 )*INCX ), 1 <= i <= n. - - INCX (input) INTEGER - The increment between successive values of the vector X. - INCX > 0. - - SCALE (input/output) DOUBLE PRECISION - On entry, the value scale in the equation above. - On exit, SCALE is overwritten with the value scl . - - SUMSQ (input/output) DOUBLE PRECISION - On entry, the value sumsq in the equation above. - On exit, SUMSQ is overwritten with the value ssq . - - ===================================================================== -*/ - - - /* Parameter adjustments */ - --x; - - /* Function Body */ - if (*n > 0) { - i__1 = (*n - 1) * *incx + 1; - i__2 = *incx; - for (ix = 1; i__2 < 0 ? ix >= i__1 : ix <= i__1; ix += i__2) { - i__3 = ix; - if (x[i__3].r != 0.) 
{ - i__3 = ix; - temp1 = (d__1 = x[i__3].r, abs(d__1)); - if (*scale < temp1) { -/* Computing 2nd power */ - d__1 = *scale / temp1; - *sumsq = *sumsq * (d__1 * d__1) + 1; - *scale = temp1; - } else { -/* Computing 2nd power */ - d__1 = temp1 / *scale; - *sumsq += d__1 * d__1; - } - } - if (d_imag(&x[ix]) != 0.) { - temp1 = (d__1 = d_imag(&x[ix]), abs(d__1)); - if (*scale < temp1) { -/* Computing 2nd power */ - d__1 = *scale / temp1; - *sumsq = *sumsq * (d__1 * d__1) + 1; - *scale = temp1; - } else { -/* Computing 2nd power */ - d__1 = temp1 / *scale; - *sumsq += d__1 * d__1; - } - } -/* L10: */ - } - } - - return 0; - -/* End of ZLASSQ */ - -} /* zlassq_ */ - -/* Subroutine */ int zlaswp_(integer *n, doublecomplex *a, integer *lda, - integer *k1, integer *k2, integer *ipiv, integer *incx) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4, i__5, i__6; - - /* Local variables */ - static integer i__, j, k, i1, i2, n32, ip, ix, ix0, inc; - static doublecomplex temp; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZLASWP performs a series of row interchanges on the matrix A. - One row interchange is initiated for each of rows K1 through K2 of A. - - Arguments - ========= - - N (input) INTEGER - The number of columns of the matrix A. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the matrix of column dimension N to which the row - interchanges will be applied. - On exit, the permuted matrix. - - LDA (input) INTEGER - The leading dimension of the array A. - - K1 (input) INTEGER - The first element of IPIV for which a row interchange will - be done. - - K2 (input) INTEGER - The last element of IPIV for which a row interchange will - be done. - - IPIV (input) INTEGER array, dimension (M*abs(INCX)) - The vector of pivot indices. 
Only the elements in positions - K1 through K2 of IPIV are accessed. - IPIV(K) = L implies rows K and L are to be interchanged. - - INCX (input) INTEGER - The increment between successive values of IPIV. If INCX - is negative, the pivots are applied in reverse order. - - Further Details - =============== - - Modified by - R. C. Whaley, Computer Science Dept., Univ. of Tenn., Knoxville, USA - - ===================================================================== - - - Interchange row I with row IPIV(I) for each of rows K1 through K2. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --ipiv; - - /* Function Body */ - if (*incx > 0) { - ix0 = *k1; - i1 = *k1; - i2 = *k2; - inc = 1; - } else if (*incx < 0) { - ix0 = (1 - *k2) * *incx + 1; - i1 = *k2; - i2 = *k1; - inc = -1; - } else { - return 0; - } - - n32 = (*n / 32) << (5); - if (n32 != 0) { - i__1 = n32; - for (j = 1; j <= i__1; j += 32) { - ix = ix0; - i__2 = i2; - i__3 = inc; - for (i__ = i1; i__3 < 0 ? i__ >= i__2 : i__ <= i__2; i__ += i__3) - { - ip = ipiv[ix]; - if (ip != i__) { - i__4 = j + 31; - for (k = j; k <= i__4; ++k) { - i__5 = i__ + k * a_dim1; - temp.r = a[i__5].r, temp.i = a[i__5].i; - i__5 = i__ + k * a_dim1; - i__6 = ip + k * a_dim1; - a[i__5].r = a[i__6].r, a[i__5].i = a[i__6].i; - i__5 = ip + k * a_dim1; - a[i__5].r = temp.r, a[i__5].i = temp.i; -/* L10: */ - } - } - ix += *incx; -/* L20: */ - } -/* L30: */ - } - } - if (n32 != *n) { - ++n32; - ix = ix0; - i__1 = i2; - i__3 = inc; - for (i__ = i1; i__3 < 0 ?
i__ >= i__1 : i__ <= i__1; i__ += i__3) { - ip = ipiv[ix]; - if (ip != i__) { - i__2 = *n; - for (k = n32; k <= i__2; ++k) { - i__4 = i__ + k * a_dim1; - temp.r = a[i__4].r, temp.i = a[i__4].i; - i__4 = i__ + k * a_dim1; - i__5 = ip + k * a_dim1; - a[i__4].r = a[i__5].r, a[i__4].i = a[i__5].i; - i__4 = ip + k * a_dim1; - a[i__4].r = temp.r, a[i__4].i = temp.i; -/* L40: */ - } - } - ix += *incx; -/* L50: */ - } - } - - return 0; - -/* End of ZLASWP */ - -} /* zlaswp_ */ - -/* Subroutine */ int zlatrd_(char *uplo, integer *n, integer *nb, - doublecomplex *a, integer *lda, doublereal *e, doublecomplex *tau, - doublecomplex *w, integer *ldw) -{ - /* System generated locals */ - integer a_dim1, a_offset, w_dim1, w_offset, i__1, i__2, i__3; - doublereal d__1; - doublecomplex z__1, z__2, z__3, z__4; - - /* Local variables */ - static integer i__, iw; - static doublecomplex alpha; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int zscal_(integer *, doublecomplex *, - doublecomplex *, integer *); - extern /* Double Complex */ VOID zdotc_(doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *); - extern /* Subroutine */ int zgemv_(char *, integer *, integer *, - doublecomplex *, doublecomplex *, integer *, doublecomplex *, - integer *, doublecomplex *, doublecomplex *, integer *), - zhemv_(char *, integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *, integer *, doublecomplex *, - doublecomplex *, integer *), zaxpy_(integer *, - doublecomplex *, doublecomplex *, integer *, doublecomplex *, - integer *), zlarfg_(integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *), zlacgv_(integer *, doublecomplex *, - integer *); - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZLATRD reduces NB rows and columns of a complex Hermitian matrix A to - Hermitian tridiagonal form by a unitary similarity - transformation Q' * A * Q, and returns the matrices V and W which are - needed to apply the transformation to the unreduced part of A. - - If UPLO = 'U', ZLATRD reduces the last NB rows and columns of a - matrix, of which the upper triangle is supplied; - if UPLO = 'L', ZLATRD reduces the first NB rows and columns of a - matrix, of which the lower triangle is supplied. - - This is an auxiliary routine called by ZHETRD. - - Arguments - ========= - - UPLO (input) CHARACTER - Specifies whether the upper or lower triangular part of the - Hermitian matrix A is stored: - = 'U': Upper triangular - = 'L': Lower triangular - - N (input) INTEGER - The order of the matrix A. - - NB (input) INTEGER - The number of rows and columns to be reduced. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the Hermitian matrix A. If UPLO = 'U', the leading - n-by-n upper triangular part of A contains the upper - triangular part of the matrix A, and the strictly lower - triangular part of A is not referenced. If UPLO = 'L', the - leading n-by-n lower triangular part of A contains the lower - triangular part of the matrix A, and the strictly upper - triangular part of A is not referenced. 
- On exit: - if UPLO = 'U', the last NB columns have been reduced to - tridiagonal form, with the diagonal elements overwriting - the diagonal elements of A; the elements above the diagonal - with the array TAU, represent the unitary matrix Q as a - product of elementary reflectors; - if UPLO = 'L', the first NB columns have been reduced to - tridiagonal form, with the diagonal elements overwriting - the diagonal elements of A; the elements below the diagonal - with the array TAU, represent the unitary matrix Q as a - product of elementary reflectors. - See Further Details. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - E (output) DOUBLE PRECISION array, dimension (N-1) - If UPLO = 'U', E(n-nb:n-1) contains the superdiagonal - elements of the last NB columns of the reduced matrix; - if UPLO = 'L', E(1:nb) contains the subdiagonal elements of - the first NB columns of the reduced matrix. - - TAU (output) COMPLEX*16 array, dimension (N-1) - The scalar factors of the elementary reflectors, stored in - TAU(n-nb:n-1) if UPLO = 'U', and in TAU(1:nb) if UPLO = 'L'. - See Further Details. - - W (output) COMPLEX*16 array, dimension (LDW,NB) - The n-by-nb matrix W required to update the unreduced part - of A. - - LDW (input) INTEGER - The leading dimension of the array W. LDW >= max(1,N). - - Further Details - =============== - - If UPLO = 'U', the matrix Q is represented as a product of elementary - reflectors - - Q = H(n) H(n-1) . . . H(n-nb+1). - - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a complex scalar, and v is a complex vector with - v(i:n) = 0 and v(i-1) = 1; v(1:i-1) is stored on exit in A(1:i-1,i), - and tau in TAU(i-1). - - If UPLO = 'L', the matrix Q is represented as a product of elementary - reflectors - - Q = H(1) H(2) . . . H(nb). 
- - Each H(i) has the form - - H(i) = I - tau * v * v' - - where tau is a complex scalar, and v is a complex vector with - v(1:i) = 0 and v(i+1) = 1; v(i+1:n) is stored on exit in A(i+1:n,i), - and tau in TAU(i). - - The elements of the vectors v together form the n-by-nb matrix V - which is needed, with W, to apply the transformation to the unreduced - part of the matrix, using a Hermitian rank-2k update of the form: - A := A - V*W' - W*V'. - - The contents of A on exit are illustrated by the following examples - with n = 5 and nb = 2: - - if UPLO = 'U': if UPLO = 'L': - - ( a a a v4 v5 ) ( d ) - ( a a v4 v5 ) ( 1 d ) - ( a 1 v5 ) ( v1 1 a ) - ( d 1 ) ( v1 v2 a a ) - ( d ) ( v1 v2 a a a ) - - where d denotes a diagonal element of the reduced matrix, a denotes - an element of the original matrix that is unchanged, and vi denotes - an element of the vector defining H(i). - - ===================================================================== - - - Quick return if possible -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --e; - --tau; - w_dim1 = *ldw; - w_offset = 1 + w_dim1 * 1; - w -= w_offset; - - /* Function Body */ - if (*n <= 0) { - return 0; - } - - if (lsame_(uplo, "U")) { - -/* Reduce last NB columns of upper triangle */ - - i__1 = *n - *nb + 1; - for (i__ = *n; i__ >= i__1; --i__) { - iw = i__ - *n + *nb; - if (i__ < *n) { - -/* Update A(1:i,i) */ - - i__2 = i__ + i__ * a_dim1; - i__3 = i__ + i__ * a_dim1; - d__1 = a[i__3].r; - a[i__2].r = d__1, a[i__2].i = 0.; - i__2 = *n - i__; - zlacgv_(&i__2, &w[i__ + (iw + 1) * w_dim1], ldw); - i__2 = *n - i__; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__, &i__2, &z__1, &a[(i__ + 1) * - a_dim1 + 1], lda, &w[i__ + (iw + 1) * w_dim1], ldw, & - c_b60, &a[i__ * a_dim1 + 1], &c__1); - i__2 = *n - i__; - zlacgv_(&i__2, &w[i__ + (iw + 1) * w_dim1], ldw); - i__2 = *n - i__; - zlacgv_(&i__2, &a[i__ + (i__ + 1) * a_dim1], lda); - i__2 = *n - i__; - z__1.r = 
-1., z__1.i = -0.; - zgemv_("No transpose", &i__, &i__2, &z__1, &w[(iw + 1) * - w_dim1 + 1], ldw, &a[i__ + (i__ + 1) * a_dim1], lda, & - c_b60, &a[i__ * a_dim1 + 1], &c__1); - i__2 = *n - i__; - zlacgv_(&i__2, &a[i__ + (i__ + 1) * a_dim1], lda); - i__2 = i__ + i__ * a_dim1; - i__3 = i__ + i__ * a_dim1; - d__1 = a[i__3].r; - a[i__2].r = d__1, a[i__2].i = 0.; - } - if (i__ > 1) { - -/* - Generate elementary reflector H(i) to annihilate - A(1:i-2,i) -*/ - - i__2 = i__ - 1 + i__ * a_dim1; - alpha.r = a[i__2].r, alpha.i = a[i__2].i; - i__2 = i__ - 1; - zlarfg_(&i__2, &alpha, &a[i__ * a_dim1 + 1], &c__1, &tau[i__ - - 1]); - i__2 = i__ - 1; - e[i__2] = alpha.r; - i__2 = i__ - 1 + i__ * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - -/* Compute W(1:i-1,i) */ - - i__2 = i__ - 1; - zhemv_("Upper", &i__2, &c_b60, &a[a_offset], lda, &a[i__ * - a_dim1 + 1], &c__1, &c_b59, &w[iw * w_dim1 + 1], & - c__1); - if (i__ < *n) { - i__2 = i__ - 1; - i__3 = *n - i__; - zgemv_("Conjugate transpose", &i__2, &i__3, &c_b60, &w[( - iw + 1) * w_dim1 + 1], ldw, &a[i__ * a_dim1 + 1], - &c__1, &c_b59, &w[i__ + 1 + iw * w_dim1], &c__1); - i__2 = i__ - 1; - i__3 = *n - i__; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &a[(i__ + 1) * - a_dim1 + 1], lda, &w[i__ + 1 + iw * w_dim1], & - c__1, &c_b60, &w[iw * w_dim1 + 1], &c__1); - i__2 = i__ - 1; - i__3 = *n - i__; - zgemv_("Conjugate transpose", &i__2, &i__3, &c_b60, &a[( - i__ + 1) * a_dim1 + 1], lda, &a[i__ * a_dim1 + 1], - &c__1, &c_b59, &w[i__ + 1 + iw * w_dim1], &c__1); - i__2 = i__ - 1; - i__3 = *n - i__; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &w[(iw + 1) * - w_dim1 + 1], ldw, &w[i__ + 1 + iw * w_dim1], & - c__1, &c_b60, &w[iw * w_dim1 + 1], &c__1); - } - i__2 = i__ - 1; - zscal_(&i__2, &tau[i__ - 1], &w[iw * w_dim1 + 1], &c__1); - z__3.r = -.5, z__3.i = -0.; - i__2 = i__ - 1; - z__2.r = z__3.r * tau[i__2].r - z__3.i * tau[i__2].i, z__2.i = - z__3.r * tau[i__2].i + z__3.i * 
tau[i__2].r; - i__3 = i__ - 1; - zdotc_(&z__4, &i__3, &w[iw * w_dim1 + 1], &c__1, &a[i__ * - a_dim1 + 1], &c__1); - z__1.r = z__2.r * z__4.r - z__2.i * z__4.i, z__1.i = z__2.r * - z__4.i + z__2.i * z__4.r; - alpha.r = z__1.r, alpha.i = z__1.i; - i__2 = i__ - 1; - zaxpy_(&i__2, &alpha, &a[i__ * a_dim1 + 1], &c__1, &w[iw * - w_dim1 + 1], &c__1); - } - -/* L10: */ - } - } else { - -/* Reduce first NB columns of lower triangle */ - - i__1 = *nb; - for (i__ = 1; i__ <= i__1; ++i__) { - -/* Update A(i:n,i) */ - - i__2 = i__ + i__ * a_dim1; - i__3 = i__ + i__ * a_dim1; - d__1 = a[i__3].r; - a[i__2].r = d__1, a[i__2].i = 0.; - i__2 = i__ - 1; - zlacgv_(&i__2, &w[i__ + w_dim1], ldw); - i__2 = *n - i__ + 1; - i__3 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &a[i__ + a_dim1], lda, - &w[i__ + w_dim1], ldw, &c_b60, &a[i__ + i__ * a_dim1], & - c__1); - i__2 = i__ - 1; - zlacgv_(&i__2, &w[i__ + w_dim1], ldw); - i__2 = i__ - 1; - zlacgv_(&i__2, &a[i__ + a_dim1], lda); - i__2 = *n - i__ + 1; - i__3 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &w[i__ + w_dim1], ldw, - &a[i__ + a_dim1], lda, &c_b60, &a[i__ + i__ * a_dim1], & - c__1); - i__2 = i__ - 1; - zlacgv_(&i__2, &a[i__ + a_dim1], lda); - i__2 = i__ + i__ * a_dim1; - i__3 = i__ + i__ * a_dim1; - d__1 = a[i__3].r; - a[i__2].r = d__1, a[i__2].i = 0.; - if (i__ < *n) { - -/* - Generate elementary reflector H(i) to annihilate - A(i+2:n,i) -*/ - - i__2 = i__ + 1 + i__ * a_dim1; - alpha.r = a[i__2].r, alpha.i = a[i__2].i; - i__2 = *n - i__; -/* Computing MIN */ - i__3 = i__ + 2; - zlarfg_(&i__2, &alpha, &a[min(i__3,*n) + i__ * a_dim1], &c__1, - &tau[i__]); - i__2 = i__; - e[i__2] = alpha.r; - i__2 = i__ + 1 + i__ * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - -/* Compute W(i+1:n,i) */ - - i__2 = *n - i__; - zhemv_("Lower", &i__2, &c_b60, &a[i__ + 1 + (i__ + 1) * - a_dim1], lda, &a[i__ + 1 + i__ * a_dim1], &c__1, & - c_b59, &w[i__ + 1 + i__ * w_dim1], 
&c__1); - i__2 = *n - i__; - i__3 = i__ - 1; - zgemv_("Conjugate transpose", &i__2, &i__3, &c_b60, &w[i__ + - 1 + w_dim1], ldw, &a[i__ + 1 + i__ * a_dim1], &c__1, & - c_b59, &w[i__ * w_dim1 + 1], &c__1); - i__2 = *n - i__; - i__3 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &a[i__ + 1 + - a_dim1], lda, &w[i__ * w_dim1 + 1], &c__1, &c_b60, &w[ - i__ + 1 + i__ * w_dim1], &c__1); - i__2 = *n - i__; - i__3 = i__ - 1; - zgemv_("Conjugate transpose", &i__2, &i__3, &c_b60, &a[i__ + - 1 + a_dim1], lda, &a[i__ + 1 + i__ * a_dim1], &c__1, & - c_b59, &w[i__ * w_dim1 + 1], &c__1); - i__2 = *n - i__; - i__3 = i__ - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &w[i__ + 1 + - w_dim1], ldw, &w[i__ * w_dim1 + 1], &c__1, &c_b60, &w[ - i__ + 1 + i__ * w_dim1], &c__1); - i__2 = *n - i__; - zscal_(&i__2, &tau[i__], &w[i__ + 1 + i__ * w_dim1], &c__1); - z__3.r = -.5, z__3.i = -0.; - i__2 = i__; - z__2.r = z__3.r * tau[i__2].r - z__3.i * tau[i__2].i, z__2.i = - z__3.r * tau[i__2].i + z__3.i * tau[i__2].r; - i__3 = *n - i__; - zdotc_(&z__4, &i__3, &w[i__ + 1 + i__ * w_dim1], &c__1, &a[ - i__ + 1 + i__ * a_dim1], &c__1); - z__1.r = z__2.r * z__4.r - z__2.i * z__4.i, z__1.i = z__2.r * - z__4.i + z__2.i * z__4.r; - alpha.r = z__1.r, alpha.i = z__1.i; - i__2 = *n - i__; - zaxpy_(&i__2, &alpha, &a[i__ + 1 + i__ * a_dim1], &c__1, &w[ - i__ + 1 + i__ * w_dim1], &c__1); - } - -/* L20: */ - } - } - - return 0; - -/* End of ZLATRD */ - -} /* zlatrd_ */ - -/* Subroutine */ int zlatrs_(char *uplo, char *trans, char *diag, char * - normin, integer *n, doublecomplex *a, integer *lda, doublecomplex *x, - doublereal *scale, doublereal *cnorm, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4, i__5; - doublereal d__1, d__2, d__3, d__4; - doublecomplex z__1, z__2, z__3, z__4; - - /* Builtin functions */ - double d_imag(doublecomplex *); - void d_cnjg(doublecomplex *, doublecomplex 
*); - - /* Local variables */ - static integer i__, j; - static doublereal xj, rec, tjj; - static integer jinc; - static doublereal xbnd; - static integer imax; - static doublereal tmax; - static doublecomplex tjjs; - static doublereal xmax, grow; - extern /* Subroutine */ int dscal_(integer *, doublereal *, doublereal *, - integer *); - extern logical lsame_(char *, char *); - static doublereal tscal; - static doublecomplex uscal; - static integer jlast; - static doublecomplex csumj; - extern /* Double Complex */ VOID zdotc_(doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *); - static logical upper; - extern /* Double Complex */ VOID zdotu_(doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *); - extern /* Subroutine */ int zaxpy_(integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *), ztrsv_( - char *, char *, char *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *), dlabad_( - doublereal *, doublereal *); - - extern integer idamax_(integer *, doublereal *, integer *); - extern /* Subroutine */ int xerbla_(char *, integer *), zdscal_( - integer *, doublereal *, doublecomplex *, integer *); - static doublereal bignum; - extern integer izamax_(integer *, doublecomplex *, integer *); - extern /* Double Complex */ VOID zladiv_(doublecomplex *, doublecomplex *, - doublecomplex *); - static logical notran; - static integer jfirst; - extern doublereal dzasum_(integer *, doublecomplex *, integer *); - static doublereal smlnum; - static logical nounit; - - -/* - -- LAPACK auxiliary routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1992 - - - Purpose - ======= - - ZLATRS solves one of the triangular systems - - A * x = s*b, A**T * x = s*b, or A**H * x = s*b, - - with scaling to prevent overflow. 
Here A is an upper or lower - triangular matrix, A**T denotes the transpose of A, A**H denotes the - conjugate transpose of A, x and b are n-element vectors, and s is a - scaling factor, usually less than or equal to 1, chosen so that the - components of x will be less than the overflow threshold. If the - unscaled problem will not cause overflow, the Level 2 BLAS routine - ZTRSV is called. If the matrix A is singular (A(j,j) = 0 for some j), - then s is set to 0 and a non-trivial solution to A*x = 0 is returned. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - Specifies whether the matrix A is upper or lower triangular. - = 'U': Upper triangular - = 'L': Lower triangular - - TRANS (input) CHARACTER*1 - Specifies the operation applied to A. - = 'N': Solve A * x = s*b (No transpose) - = 'T': Solve A**T * x = s*b (Transpose) - = 'C': Solve A**H * x = s*b (Conjugate transpose) - - DIAG (input) CHARACTER*1 - Specifies whether or not the matrix A is unit triangular. - = 'N': Non-unit triangular - = 'U': Unit triangular - - NORMIN (input) CHARACTER*1 - Specifies whether CNORM has been set or not. - = 'Y': CNORM contains the column norms on entry - = 'N': CNORM is not set on entry. On exit, the norms will - be computed and stored in CNORM. - - N (input) INTEGER - The order of the matrix A. N >= 0. - - A (input) COMPLEX*16 array, dimension (LDA,N) - The triangular matrix A. If UPLO = 'U', the leading n by n - upper triangular part of the array A contains the upper - triangular matrix, and the strictly lower triangular part of - A is not referenced. If UPLO = 'L', the leading n by n lower - triangular part of the array A contains the lower triangular - matrix, and the strictly upper triangular part of A is not - referenced. If DIAG = 'U', the diagonal elements of A are - also not referenced and are assumed to be 1. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max (1,N). 
- - X (input/output) COMPLEX*16 array, dimension (N) - On entry, the right hand side b of the triangular system. - On exit, X is overwritten by the solution vector x. - - SCALE (output) DOUBLE PRECISION - The scaling factor s for the triangular system - A * x = s*b, A**T * x = s*b, or A**H * x = s*b. - If SCALE = 0, the matrix A is singular or badly scaled, and - the vector x is an exact or approximate solution to A*x = 0. - - CNORM (input or output) DOUBLE PRECISION array, dimension (N) - - If NORMIN = 'Y', CNORM is an input argument and CNORM(j) - contains the norm of the off-diagonal part of the j-th column - of A. If TRANS = 'N', CNORM(j) must be greater than or equal - to the infinity-norm, and if TRANS = 'T' or 'C', CNORM(j) - must be greater than or equal to the 1-norm. - - If NORMIN = 'N', CNORM is an output argument and CNORM(j) - returns the 1-norm of the off-diagonal part of the j-th column - of A. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -k, the k-th argument had an illegal value - - Further Details - =============== - - A rough bound on x is computed; if that is less than overflow, ZTRSV - is called, otherwise, specific code is used which checks for possible - overflow or divide-by-zero at every operation. - - A columnwise scheme is used for solving A*x = b. The basic algorithm - if A is lower triangular is - - x[1:n] := b[1:n] - for j = 1, ..., n - x(j) := x(j) / A(j,j) - x[j+1:n] := x[j+1:n] - x(j) * A[j+1:n,j] - end - - Define bounds on the components of x after j iterations of the loop: - M(j) = bound on x[1:j] - G(j) = bound on x[j+1:n] - Initially, let M(0) = 0 and G(0) = max{x(i), i=1,...,n}. - - Then for iteration j+1 we have - M(j+1) <= G(j) / | A(j+1,j+1) | - G(j+1) <= G(j) + M(j+1) * | A[j+2:n,j+1] | - <= G(j) ( 1 + CNORM(j+1) / | A(j+1,j+1) | ) - - where CNORM(j+1) is greater than or equal to the infinity-norm of - column j+1 of A, not counting the diagonal. 
Hence - - G(j) <= G(0) product ( 1 + CNORM(i) / | A(i,i) | ) - 1<=i<=j - and - - |x(j)| <= ( G(0) / |A(j,j)| ) product ( 1 + CNORM(i) / |A(i,i)| ) - 1<=i< j - - Since |x(j)| <= M(j), we use the Level 2 BLAS routine ZTRSV if the - reciprocal of the largest M(j), j=1,..,n, is larger than - max(underflow, 1/overflow). - - The bound on x(j) is also used to determine when a step in the - columnwise method can be performed without fear of overflow. If - the computed bound is greater than a large constant, x is scaled to - prevent overflow, but if the bound overflows, x is set to 0, x(j) to - 1, and scale to 0, and a non-trivial solution to A*x = 0 is found. - - Similarly, a row-wise scheme is used to solve A**T *x = b or - A**H *x = b. The basic algorithm for A upper triangular is - - for j = 1, ..., n - x(j) := ( b(j) - A[1:j-1,j]' * x[1:j-1] ) / A(j,j) - end - - We simultaneously compute two bounds - G(j) = bound on ( b(i) - A[1:i-1,i]' * x[1:i-1] ), 1<=i<=j - M(j) = bound on x(i), 1<=i<=j - - The initial values are G(0) = 0, M(0) = max{b(i), i=1,..,n}, and we - add the constraint G(j) >= G(j-1) and M(j) >= M(j-1) for j >= 1. - Then the bound on x(j) is - - M(j) <= M(j-1) * ( 1 + CNORM(j) ) / | A(j,j) | - - <= M(0) * product ( ( 1 + CNORM(i) ) / |A(i,i)| ) - 1<=i<=j - - and we can safely call ZTRSV if 1/M(n) and 1/G(n) are both greater - than max(underflow, 1/overflow). - - ===================================================================== -*/ - - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --x; - --cnorm; - - /* Function Body */ - *info = 0; - upper = lsame_(uplo, "U"); - notran = lsame_(trans, "N"); - nounit = lsame_(diag, "N"); - -/* Test the input parameters. */ - - if ((! upper && ! lsame_(uplo, "L"))) { - *info = -1; - } else if (((! notran && ! lsame_(trans, "T")) && ! - lsame_(trans, "C"))) { - *info = -2; - } else if ((! nounit && ! lsame_(diag, "U"))) { - *info = -3; - } else if ((! 
lsame_(normin, "Y") && ! lsame_( - normin, "N"))) { - *info = -4; - } else if (*n < 0) { - *info = -5; - } else if (*lda < max(1,*n)) { - *info = -7; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZLATRS", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - -/* Determine machine dependent parameters to control overflow. */ - - smlnum = SAFEMINIMUM; - bignum = 1. / smlnum; - dlabad_(&smlnum, &bignum); - smlnum /= PRECISION; - bignum = 1. / smlnum; - *scale = 1.; - - if (lsame_(normin, "N")) { - -/* Compute the 1-norm of each column, not including the diagonal. */ - - if (upper) { - -/* A is upper triangular. */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = j - 1; - cnorm[j] = dzasum_(&i__2, &a[j * a_dim1 + 1], &c__1); -/* L10: */ - } - } else { - -/* A is lower triangular. */ - - i__1 = *n - 1; - for (j = 1; j <= i__1; ++j) { - i__2 = *n - j; - cnorm[j] = dzasum_(&i__2, &a[j + 1 + j * a_dim1], &c__1); -/* L20: */ - } - cnorm[*n] = 0.; - } - } - -/* - Scale the column norms by TSCAL if the maximum element in CNORM is - greater than BIGNUM/2. -*/ - - imax = idamax_(n, &cnorm[1], &c__1); - tmax = cnorm[imax]; - if (tmax <= bignum * .5) { - tscal = 1.; - } else { - tscal = .5 / (smlnum * tmax); - dscal_(n, &tscal, &cnorm[1], &c__1); - } - -/* - Compute a bound on the computed solution vector to see if the - Level 2 BLAS routine ZTRSV can be used. -*/ - - xmax = 0.; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { -/* Computing MAX */ - i__2 = j; - d__3 = xmax, d__4 = (d__1 = x[i__2].r / 2., abs(d__1)) + (d__2 = - d_imag(&x[j]) / 2., abs(d__2)); - xmax = max(d__3,d__4); -/* L30: */ - } - xbnd = xmax; - - if (notran) { - -/* Compute the growth in A * x = b. */ - - if (upper) { - jfirst = *n; - jlast = 1; - jinc = -1; - } else { - jfirst = 1; - jlast = *n; - jinc = 1; - } - - if (tscal != 1.) { - grow = 0.; - goto L60; - } - - if (nounit) { - -/* - A is non-unit triangular. - - Compute GROW = 1/G(j) and XBND = 1/M(j). 
- Initially, G(0) = max{x(i), i=1,...,n}. -*/ - - grow = .5 / max(xbnd,smlnum); - xbnd = grow; - i__1 = jlast; - i__2 = jinc; - for (j = jfirst; i__2 < 0 ? j >= i__1 : j <= i__1; j += i__2) { - -/* Exit the loop if the growth factor is too small. */ - - if (grow <= smlnum) { - goto L60; - } - - i__3 = j + j * a_dim1; - tjjs.r = a[i__3].r, tjjs.i = a[i__3].i; - tjj = (d__1 = tjjs.r, abs(d__1)) + (d__2 = d_imag(&tjjs), abs( - d__2)); - - if (tjj >= smlnum) { - -/* - M(j) = G(j-1) / abs(A(j,j)) - - Computing MIN -*/ - d__1 = xbnd, d__2 = min(1.,tjj) * grow; - xbnd = min(d__1,d__2); - } else { - -/* M(j) could overflow, set XBND to 0. */ - - xbnd = 0.; - } - - if (tjj + cnorm[j] >= smlnum) { - -/* G(j) = G(j-1)*( 1 + CNORM(j) / abs(A(j,j)) ) */ - - grow *= tjj / (tjj + cnorm[j]); - } else { - -/* G(j) could overflow, set GROW to 0. */ - - grow = 0.; - } -/* L40: */ - } - grow = xbnd; - } else { - -/* - A is unit triangular. - - Compute GROW = 1/G(j), where G(0) = max{x(i), i=1,...,n}. - - Computing MIN -*/ - d__1 = 1., d__2 = .5 / max(xbnd,smlnum); - grow = min(d__1,d__2); - i__2 = jlast; - i__1 = jinc; - for (j = jfirst; i__1 < 0 ? j >= i__2 : j <= i__2; j += i__1) { - -/* Exit the loop if the growth factor is too small. */ - - if (grow <= smlnum) { - goto L60; - } - -/* G(j) = G(j-1)*( 1 + CNORM(j) ) */ - - grow *= 1. / (cnorm[j] + 1.); -/* L50: */ - } - } -L60: - - ; - } else { - -/* Compute the growth in A**T * x = b or A**H * x = b. */ - - if (upper) { - jfirst = 1; - jlast = *n; - jinc = 1; - } else { - jfirst = *n; - jlast = 1; - jinc = -1; - } - - if (tscal != 1.) { - grow = 0.; - goto L90; - } - - if (nounit) { - -/* - A is non-unit triangular. - - Compute GROW = 1/G(j) and XBND = 1/M(j). - Initially, M(0) = max{x(i), i=1,...,n}. -*/ - - grow = .5 / max(xbnd,smlnum); - xbnd = grow; - i__1 = jlast; - i__2 = jinc; - for (j = jfirst; i__2 < 0 ? j >= i__1 : j <= i__1; j += i__2) { - -/* Exit the loop if the growth factor is too small. 
*/ - - if (grow <= smlnum) { - goto L90; - } - -/* G(j) = max( G(j-1), M(j-1)*( 1 + CNORM(j) ) ) */ - - xj = cnorm[j] + 1.; -/* Computing MIN */ - d__1 = grow, d__2 = xbnd / xj; - grow = min(d__1,d__2); - - i__3 = j + j * a_dim1; - tjjs.r = a[i__3].r, tjjs.i = a[i__3].i; - tjj = (d__1 = tjjs.r, abs(d__1)) + (d__2 = d_imag(&tjjs), abs( - d__2)); - - if (tjj >= smlnum) { - -/* M(j) = M(j-1)*( 1 + CNORM(j) ) / abs(A(j,j)) */ - - if (xj > tjj) { - xbnd *= tjj / xj; - } - } else { - -/* M(j) could overflow, set XBND to 0. */ - - xbnd = 0.; - } -/* L70: */ - } - grow = min(grow,xbnd); - } else { - -/* - A is unit triangular. - - Compute GROW = 1/G(j), where G(0) = max{x(i), i=1,...,n}. - - Computing MIN -*/ - d__1 = 1., d__2 = .5 / max(xbnd,smlnum); - grow = min(d__1,d__2); - i__2 = jlast; - i__1 = jinc; - for (j = jfirst; i__1 < 0 ? j >= i__2 : j <= i__2; j += i__1) { - -/* Exit the loop if the growth factor is too small. */ - - if (grow <= smlnum) { - goto L90; - } - -/* G(j) = ( 1 + CNORM(j) )*G(j-1) */ - - xj = cnorm[j] + 1.; - grow /= xj; -/* L80: */ - } - } -L90: - ; - } - - if (grow * tscal > smlnum) { - -/* - Use the Level 2 BLAS solve if the reciprocal of the bound on - elements of X is not too small. -*/ - - ztrsv_(uplo, trans, diag, n, &a[a_offset], lda, &x[1], &c__1); - } else { - -/* Use a Level 1 BLAS solve, scaling intermediate results. */ - - if (xmax > bignum * .5) { - -/* - Scale X so that its components are less than or equal to - BIGNUM in absolute value. -*/ - - *scale = bignum * .5 / xmax; - zdscal_(n, scale, &x[1], &c__1); - xmax = bignum; - } else { - xmax *= 2.; - } - - if (notran) { - -/* Solve A * x = b */ - - i__1 = jlast; - i__2 = jinc; - for (j = jfirst; i__2 < 0 ? j >= i__1 : j <= i__1; j += i__2) { - -/* Compute x(j) = b(j) / A(j,j), scaling x if necessary. 
*/ - - i__3 = j; - xj = (d__1 = x[i__3].r, abs(d__1)) + (d__2 = d_imag(&x[j]), - abs(d__2)); - if (nounit) { - i__3 = j + j * a_dim1; - z__1.r = tscal * a[i__3].r, z__1.i = tscal * a[i__3].i; - tjjs.r = z__1.r, tjjs.i = z__1.i; - } else { - tjjs.r = tscal, tjjs.i = 0.; - if (tscal == 1.) { - goto L110; - } - } - tjj = (d__1 = tjjs.r, abs(d__1)) + (d__2 = d_imag(&tjjs), abs( - d__2)); - if (tjj > smlnum) { - -/* abs(A(j,j)) > SMLNUM: */ - - if (tjj < 1.) { - if (xj > tjj * bignum) { - -/* Scale x by 1/b(j). */ - - rec = 1. / xj; - zdscal_(n, &rec, &x[1], &c__1); - *scale *= rec; - xmax *= rec; - } - } - i__3 = j; - zladiv_(&z__1, &x[j], &tjjs); - x[i__3].r = z__1.r, x[i__3].i = z__1.i; - i__3 = j; - xj = (d__1 = x[i__3].r, abs(d__1)) + (d__2 = d_imag(&x[j]) - , abs(d__2)); - } else if (tjj > 0.) { - -/* 0 < abs(A(j,j)) <= SMLNUM: */ - - if (xj > tjj * bignum) { - -/* - Scale x by (1/abs(x(j)))*abs(A(j,j))*BIGNUM - to avoid overflow when dividing by A(j,j). -*/ - - rec = tjj * bignum / xj; - if (cnorm[j] > 1.) { - -/* - Scale by 1/CNORM(j) to avoid overflow when - multiplying x(j) times column j. -*/ - - rec /= cnorm[j]; - } - zdscal_(n, &rec, &x[1], &c__1); - *scale *= rec; - xmax *= rec; - } - i__3 = j; - zladiv_(&z__1, &x[j], &tjjs); - x[i__3].r = z__1.r, x[i__3].i = z__1.i; - i__3 = j; - xj = (d__1 = x[i__3].r, abs(d__1)) + (d__2 = d_imag(&x[j]) - , abs(d__2)); - } else { - -/* - A(j,j) = 0: Set x(1:n) = 0, x(j) = 1, and - scale = 0, and compute a solution to A*x = 0. -*/ - - i__3 = *n; - for (i__ = 1; i__ <= i__3; ++i__) { - i__4 = i__; - x[i__4].r = 0., x[i__4].i = 0.; -/* L100: */ - } - i__3 = j; - x[i__3].r = 1., x[i__3].i = 0.; - xj = 1.; - *scale = 0.; - xmax = 0.; - } -L110: - -/* - Scale x if necessary to avoid overflow when adding a - multiple of column j of A. -*/ - - if (xj > 1.) { - rec = 1. / xj; - if (cnorm[j] > (bignum - xmax) * rec) { - -/* Scale x by 1/(2*abs(x(j))). 
*/ - - rec *= .5; - zdscal_(n, &rec, &x[1], &c__1); - *scale *= rec; - } - } else if (xj * cnorm[j] > bignum - xmax) { - -/* Scale x by 1/2. */ - - zdscal_(n, &c_b2210, &x[1], &c__1); - *scale *= .5; - } - - if (upper) { - if (j > 1) { - -/* - Compute the update - x(1:j-1) := x(1:j-1) - x(j) * A(1:j-1,j) -*/ - - i__3 = j - 1; - i__4 = j; - z__2.r = -x[i__4].r, z__2.i = -x[i__4].i; - z__1.r = tscal * z__2.r, z__1.i = tscal * z__2.i; - zaxpy_(&i__3, &z__1, &a[j * a_dim1 + 1], &c__1, &x[1], - &c__1); - i__3 = j - 1; - i__ = izamax_(&i__3, &x[1], &c__1); - i__3 = i__; - xmax = (d__1 = x[i__3].r, abs(d__1)) + (d__2 = d_imag( - &x[i__]), abs(d__2)); - } - } else { - if (j < *n) { - -/* - Compute the update - x(j+1:n) := x(j+1:n) - x(j) * A(j+1:n,j) -*/ - - i__3 = *n - j; - i__4 = j; - z__2.r = -x[i__4].r, z__2.i = -x[i__4].i; - z__1.r = tscal * z__2.r, z__1.i = tscal * z__2.i; - zaxpy_(&i__3, &z__1, &a[j + 1 + j * a_dim1], &c__1, & - x[j + 1], &c__1); - i__3 = *n - j; - i__ = j + izamax_(&i__3, &x[j + 1], &c__1); - i__3 = i__; - xmax = (d__1 = x[i__3].r, abs(d__1)) + (d__2 = d_imag( - &x[i__]), abs(d__2)); - } - } -/* L120: */ - } - - } else if (lsame_(trans, "T")) { - -/* Solve A**T * x = b */ - - i__2 = jlast; - i__1 = jinc; - for (j = jfirst; i__1 < 0 ? j >= i__2 : j <= i__2; j += i__1) { - -/* - Compute x(j) = b(j) - sum A(k,j)*x(k). - k<>j -*/ - - i__3 = j; - xj = (d__1 = x[i__3].r, abs(d__1)) + (d__2 = d_imag(&x[j]), - abs(d__2)); - uscal.r = tscal, uscal.i = 0.; - rec = 1. / max(xmax,1.); - if (cnorm[j] > (bignum - xj) * rec) { - -/* If x(j) could overflow, scale x by 1/(2*XMAX). */ - - rec *= .5; - if (nounit) { - i__3 = j + j * a_dim1; - z__1.r = tscal * a[i__3].r, z__1.i = tscal * a[i__3] - .i; - tjjs.r = z__1.r, tjjs.i = z__1.i; - } else { - tjjs.r = tscal, tjjs.i = 0.; - } - tjj = (d__1 = tjjs.r, abs(d__1)) + (d__2 = d_imag(&tjjs), - abs(d__2)); - if (tjj > 1.) { - -/* - Divide by A(j,j) when scaling x if A(j,j) > 1. 
- - Computing MIN -*/ - d__1 = 1., d__2 = rec * tjj; - rec = min(d__1,d__2); - zladiv_(&z__1, &uscal, &tjjs); - uscal.r = z__1.r, uscal.i = z__1.i; - } - if (rec < 1.) { - zdscal_(n, &rec, &x[1], &c__1); - *scale *= rec; - xmax *= rec; - } - } - - csumj.r = 0., csumj.i = 0.; - if ((uscal.r == 1. && uscal.i == 0.)) { - -/* - If the scaling needed for A in the dot product is 1, - call ZDOTU to perform the dot product. -*/ - - if (upper) { - i__3 = j - 1; - zdotu_(&z__1, &i__3, &a[j * a_dim1 + 1], &c__1, &x[1], - &c__1); - csumj.r = z__1.r, csumj.i = z__1.i; - } else if (j < *n) { - i__3 = *n - j; - zdotu_(&z__1, &i__3, &a[j + 1 + j * a_dim1], &c__1, & - x[j + 1], &c__1); - csumj.r = z__1.r, csumj.i = z__1.i; - } - } else { - -/* Otherwise, use in-line code for the dot product. */ - - if (upper) { - i__3 = j - 1; - for (i__ = 1; i__ <= i__3; ++i__) { - i__4 = i__ + j * a_dim1; - z__3.r = a[i__4].r * uscal.r - a[i__4].i * - uscal.i, z__3.i = a[i__4].r * uscal.i + a[ - i__4].i * uscal.r; - i__5 = i__; - z__2.r = z__3.r * x[i__5].r - z__3.i * x[i__5].i, - z__2.i = z__3.r * x[i__5].i + z__3.i * x[ - i__5].r; - z__1.r = csumj.r + z__2.r, z__1.i = csumj.i + - z__2.i; - csumj.r = z__1.r, csumj.i = z__1.i; -/* L130: */ - } - } else if (j < *n) { - i__3 = *n; - for (i__ = j + 1; i__ <= i__3; ++i__) { - i__4 = i__ + j * a_dim1; - z__3.r = a[i__4].r * uscal.r - a[i__4].i * - uscal.i, z__3.i = a[i__4].r * uscal.i + a[ - i__4].i * uscal.r; - i__5 = i__; - z__2.r = z__3.r * x[i__5].r - z__3.i * x[i__5].i, - z__2.i = z__3.r * x[i__5].i + z__3.i * x[ - i__5].r; - z__1.r = csumj.r + z__2.r, z__1.i = csumj.i + - z__2.i; - csumj.r = z__1.r, csumj.i = z__1.i; -/* L140: */ - } - } - } - - z__1.r = tscal, z__1.i = 0.; - if ((uscal.r == z__1.r && uscal.i == z__1.i)) { - -/* - Compute x(j) := ( x(j) - CSUMJ ) / A(j,j) if 1/A(j,j) - was not used to scale the dotproduct. 
-*/ - - i__3 = j; - i__4 = j; - z__1.r = x[i__4].r - csumj.r, z__1.i = x[i__4].i - - csumj.i; - x[i__3].r = z__1.r, x[i__3].i = z__1.i; - i__3 = j; - xj = (d__1 = x[i__3].r, abs(d__1)) + (d__2 = d_imag(&x[j]) - , abs(d__2)); - if (nounit) { - i__3 = j + j * a_dim1; - z__1.r = tscal * a[i__3].r, z__1.i = tscal * a[i__3] - .i; - tjjs.r = z__1.r, tjjs.i = z__1.i; - } else { - tjjs.r = tscal, tjjs.i = 0.; - if (tscal == 1.) { - goto L160; - } - } - -/* Compute x(j) = x(j) / A(j,j), scaling if necessary. */ - - tjj = (d__1 = tjjs.r, abs(d__1)) + (d__2 = d_imag(&tjjs), - abs(d__2)); - if (tjj > smlnum) { - -/* abs(A(j,j)) > SMLNUM: */ - - if (tjj < 1.) { - if (xj > tjj * bignum) { - -/* Scale X by 1/abs(x(j)). */ - - rec = 1. / xj; - zdscal_(n, &rec, &x[1], &c__1); - *scale *= rec; - xmax *= rec; - } - } - i__3 = j; - zladiv_(&z__1, &x[j], &tjjs); - x[i__3].r = z__1.r, x[i__3].i = z__1.i; - } else if (tjj > 0.) { - -/* 0 < abs(A(j,j)) <= SMLNUM: */ - - if (xj > tjj * bignum) { - -/* Scale x by (1/abs(x(j)))*abs(A(j,j))*BIGNUM. */ - - rec = tjj * bignum / xj; - zdscal_(n, &rec, &x[1], &c__1); - *scale *= rec; - xmax *= rec; - } - i__3 = j; - zladiv_(&z__1, &x[j], &tjjs); - x[i__3].r = z__1.r, x[i__3].i = z__1.i; - } else { - -/* - A(j,j) = 0: Set x(1:n) = 0, x(j) = 1, and - scale = 0 and compute a solution to A**T *x = 0. -*/ - - i__3 = *n; - for (i__ = 1; i__ <= i__3; ++i__) { - i__4 = i__; - x[i__4].r = 0., x[i__4].i = 0.; -/* L150: */ - } - i__3 = j; - x[i__3].r = 1., x[i__3].i = 0.; - *scale = 0.; - xmax = 0.; - } -L160: - ; - } else { - -/* - Compute x(j) := x(j) / A(j,j) - CSUMJ if the dot - product has already been divided by 1/A(j,j). 
-*/ - - i__3 = j; - zladiv_(&z__2, &x[j], &tjjs); - z__1.r = z__2.r - csumj.r, z__1.i = z__2.i - csumj.i; - x[i__3].r = z__1.r, x[i__3].i = z__1.i; - } -/* Computing MAX */ - i__3 = j; - d__3 = xmax, d__4 = (d__1 = x[i__3].r, abs(d__1)) + (d__2 = - d_imag(&x[j]), abs(d__2)); - xmax = max(d__3,d__4); -/* L170: */ - } - - } else { - -/* Solve A**H * x = b */ - - i__1 = jlast; - i__2 = jinc; - for (j = jfirst; i__2 < 0 ? j >= i__1 : j <= i__1; j += i__2) { - -/* - Compute x(j) = b(j) - sum A(k,j)*x(k). - k<>j -*/ - - i__3 = j; - xj = (d__1 = x[i__3].r, abs(d__1)) + (d__2 = d_imag(&x[j]), - abs(d__2)); - uscal.r = tscal, uscal.i = 0.; - rec = 1. / max(xmax,1.); - if (cnorm[j] > (bignum - xj) * rec) { - -/* If x(j) could overflow, scale x by 1/(2*XMAX). */ - - rec *= .5; - if (nounit) { - d_cnjg(&z__2, &a[j + j * a_dim1]); - z__1.r = tscal * z__2.r, z__1.i = tscal * z__2.i; - tjjs.r = z__1.r, tjjs.i = z__1.i; - } else { - tjjs.r = tscal, tjjs.i = 0.; - } - tjj = (d__1 = tjjs.r, abs(d__1)) + (d__2 = d_imag(&tjjs), - abs(d__2)); - if (tjj > 1.) { - -/* - Divide by A(j,j) when scaling x if A(j,j) > 1. - - Computing MIN -*/ - d__1 = 1., d__2 = rec * tjj; - rec = min(d__1,d__2); - zladiv_(&z__1, &uscal, &tjjs); - uscal.r = z__1.r, uscal.i = z__1.i; - } - if (rec < 1.) { - zdscal_(n, &rec, &x[1], &c__1); - *scale *= rec; - xmax *= rec; - } - } - - csumj.r = 0., csumj.i = 0.; - if ((uscal.r == 1. && uscal.i == 0.)) { - -/* - If the scaling needed for A in the dot product is 1, - call ZDOTC to perform the dot product. -*/ - - if (upper) { - i__3 = j - 1; - zdotc_(&z__1, &i__3, &a[j * a_dim1 + 1], &c__1, &x[1], - &c__1); - csumj.r = z__1.r, csumj.i = z__1.i; - } else if (j < *n) { - i__3 = *n - j; - zdotc_(&z__1, &i__3, &a[j + 1 + j * a_dim1], &c__1, & - x[j + 1], &c__1); - csumj.r = z__1.r, csumj.i = z__1.i; - } - } else { - -/* Otherwise, use in-line code for the dot product. 
*/ - - if (upper) { - i__3 = j - 1; - for (i__ = 1; i__ <= i__3; ++i__) { - d_cnjg(&z__4, &a[i__ + j * a_dim1]); - z__3.r = z__4.r * uscal.r - z__4.i * uscal.i, - z__3.i = z__4.r * uscal.i + z__4.i * - uscal.r; - i__4 = i__; - z__2.r = z__3.r * x[i__4].r - z__3.i * x[i__4].i, - z__2.i = z__3.r * x[i__4].i + z__3.i * x[ - i__4].r; - z__1.r = csumj.r + z__2.r, z__1.i = csumj.i + - z__2.i; - csumj.r = z__1.r, csumj.i = z__1.i; -/* L180: */ - } - } else if (j < *n) { - i__3 = *n; - for (i__ = j + 1; i__ <= i__3; ++i__) { - d_cnjg(&z__4, &a[i__ + j * a_dim1]); - z__3.r = z__4.r * uscal.r - z__4.i * uscal.i, - z__3.i = z__4.r * uscal.i + z__4.i * - uscal.r; - i__4 = i__; - z__2.r = z__3.r * x[i__4].r - z__3.i * x[i__4].i, - z__2.i = z__3.r * x[i__4].i + z__3.i * x[ - i__4].r; - z__1.r = csumj.r + z__2.r, z__1.i = csumj.i + - z__2.i; - csumj.r = z__1.r, csumj.i = z__1.i; -/* L190: */ - } - } - } - - z__1.r = tscal, z__1.i = 0.; - if ((uscal.r == z__1.r && uscal.i == z__1.i)) { - -/* - Compute x(j) := ( x(j) - CSUMJ ) / A(j,j) if 1/A(j,j) - was not used to scale the dotproduct. -*/ - - i__3 = j; - i__4 = j; - z__1.r = x[i__4].r - csumj.r, z__1.i = x[i__4].i - - csumj.i; - x[i__3].r = z__1.r, x[i__3].i = z__1.i; - i__3 = j; - xj = (d__1 = x[i__3].r, abs(d__1)) + (d__2 = d_imag(&x[j]) - , abs(d__2)); - if (nounit) { - d_cnjg(&z__2, &a[j + j * a_dim1]); - z__1.r = tscal * z__2.r, z__1.i = tscal * z__2.i; - tjjs.r = z__1.r, tjjs.i = z__1.i; - } else { - tjjs.r = tscal, tjjs.i = 0.; - if (tscal == 1.) { - goto L210; - } - } - -/* Compute x(j) = x(j) / A(j,j), scaling if necessary. */ - - tjj = (d__1 = tjjs.r, abs(d__1)) + (d__2 = d_imag(&tjjs), - abs(d__2)); - if (tjj > smlnum) { - -/* abs(A(j,j)) > SMLNUM: */ - - if (tjj < 1.) { - if (xj > tjj * bignum) { - -/* Scale X by 1/abs(x(j)). */ - - rec = 1. 
/ xj; - zdscal_(n, &rec, &x[1], &c__1); - *scale *= rec; - xmax *= rec; - } - } - i__3 = j; - zladiv_(&z__1, &x[j], &tjjs); - x[i__3].r = z__1.r, x[i__3].i = z__1.i; - } else if (tjj > 0.) { - -/* 0 < abs(A(j,j)) <= SMLNUM: */ - - if (xj > tjj * bignum) { - -/* Scale x by (1/abs(x(j)))*abs(A(j,j))*BIGNUM. */ - - rec = tjj * bignum / xj; - zdscal_(n, &rec, &x[1], &c__1); - *scale *= rec; - xmax *= rec; - } - i__3 = j; - zladiv_(&z__1, &x[j], &tjjs); - x[i__3].r = z__1.r, x[i__3].i = z__1.i; - } else { - -/* - A(j,j) = 0: Set x(1:n) = 0, x(j) = 1, and - scale = 0 and compute a solution to A**H *x = 0. -*/ - - i__3 = *n; - for (i__ = 1; i__ <= i__3; ++i__) { - i__4 = i__; - x[i__4].r = 0., x[i__4].i = 0.; -/* L200: */ - } - i__3 = j; - x[i__3].r = 1., x[i__3].i = 0.; - *scale = 0.; - xmax = 0.; - } -L210: - ; - } else { - -/* - Compute x(j) := x(j) / A(j,j) - CSUMJ if the dot - product has already been divided by 1/A(j,j). -*/ - - i__3 = j; - zladiv_(&z__2, &x[j], &tjjs); - z__1.r = z__2.r - csumj.r, z__1.i = z__2.i - csumj.i; - x[i__3].r = z__1.r, x[i__3].i = z__1.i; - } -/* Computing MAX */ - i__3 = j; - d__3 = xmax, d__4 = (d__1 = x[i__3].r, abs(d__1)) + (d__2 = - d_imag(&x[j]), abs(d__2)); - xmax = max(d__3,d__4); -/* L220: */ - } - } - *scale /= tscal; - } - -/* Scale the column norms by 1/TSCAL for return. */ - - if (tscal != 1.) { - d__1 = 1. 
/ tscal; - dscal_(n, &d__1, &cnorm[1], &c__1); - } - - return 0; - -/* End of ZLATRS */ - -} /* zlatrs_ */ - -/* Subroutine */ int zpotf2_(char *uplo, integer *n, doublecomplex *a, - integer *lda, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - doublereal d__1; - doublecomplex z__1, z__2; - - /* Builtin functions */ - double sqrt(doublereal); - - /* Local variables */ - static integer j; - static doublereal ajj; - extern logical lsame_(char *, char *); - extern /* Double Complex */ VOID zdotc_(doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *); - extern /* Subroutine */ int zgemv_(char *, integer *, integer *, - doublecomplex *, doublecomplex *, integer *, doublecomplex *, - integer *, doublecomplex *, doublecomplex *, integer *); - static logical upper; - extern /* Subroutine */ int xerbla_(char *, integer *), zdscal_( - integer *, doublereal *, doublecomplex *, integer *), zlacgv_( - integer *, doublecomplex *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZPOTF2 computes the Cholesky factorization of a complex Hermitian - positive definite matrix A. - - The factorization has the form - A = U' * U , if UPLO = 'U', or - A = L * L', if UPLO = 'L', - where U is an upper triangular matrix and L is lower triangular. - - This is the unblocked version of the algorithm, calling Level 2 BLAS. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - Specifies whether the upper or lower triangular part of the - Hermitian matrix A is stored. - = 'U': Upper triangular - = 'L': Lower triangular - - N (input) INTEGER - The order of the matrix A. N >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the Hermitian matrix A. 
If UPLO = 'U', the leading - n by n upper triangular part of A contains the upper - triangular part of the matrix A, and the strictly lower - triangular part of A is not referenced. If UPLO = 'L', the - leading n by n lower triangular part of A contains the lower - triangular part of the matrix A, and the strictly upper - triangular part of A is not referenced. - - On exit, if INFO = 0, the factor U or L from the Cholesky - factorization A = U'*U or A = L*L'. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -k, the k-th argument had an illegal value - > 0: if INFO = k, the leading minor of order k is not - positive definite, and the factorization could not be - completed. - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - - /* Function Body */ - *info = 0; - upper = lsame_(uplo, "U"); - if ((! upper && ! lsame_(uplo, "L"))) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*n)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZPOTF2", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - - if (upper) { - -/* Compute the Cholesky factorization A = U'*U. */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - -/* Compute U(J,J) and test for non-positive-definiteness. */ - - i__2 = j + j * a_dim1; - d__1 = a[i__2].r; - i__3 = j - 1; - zdotc_(&z__2, &i__3, &a[j * a_dim1 + 1], &c__1, &a[j * a_dim1 + 1] - , &c__1); - z__1.r = d__1 - z__2.r, z__1.i = -z__2.i; - ajj = z__1.r; - if (ajj <= 0.) { - i__2 = j + j * a_dim1; - a[i__2].r = ajj, a[i__2].i = 0.; - goto L30; - } - ajj = sqrt(ajj); - i__2 = j + j * a_dim1; - a[i__2].r = ajj, a[i__2].i = 0.; - -/* Compute elements J+1:N of row J. 
*/ - - if (j < *n) { - i__2 = j - 1; - zlacgv_(&i__2, &a[j * a_dim1 + 1], &c__1); - i__2 = j - 1; - i__3 = *n - j; - z__1.r = -1., z__1.i = -0.; - zgemv_("Transpose", &i__2, &i__3, &z__1, &a[(j + 1) * a_dim1 - + 1], lda, &a[j * a_dim1 + 1], &c__1, &c_b60, &a[j + ( - j + 1) * a_dim1], lda); - i__2 = j - 1; - zlacgv_(&i__2, &a[j * a_dim1 + 1], &c__1); - i__2 = *n - j; - d__1 = 1. / ajj; - zdscal_(&i__2, &d__1, &a[j + (j + 1) * a_dim1], lda); - } -/* L10: */ - } - } else { - -/* Compute the Cholesky factorization A = L*L'. */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - -/* Compute L(J,J) and test for non-positive-definiteness. */ - - i__2 = j + j * a_dim1; - d__1 = a[i__2].r; - i__3 = j - 1; - zdotc_(&z__2, &i__3, &a[j + a_dim1], lda, &a[j + a_dim1], lda); - z__1.r = d__1 - z__2.r, z__1.i = -z__2.i; - ajj = z__1.r; - if (ajj <= 0.) { - i__2 = j + j * a_dim1; - a[i__2].r = ajj, a[i__2].i = 0.; - goto L30; - } - ajj = sqrt(ajj); - i__2 = j + j * a_dim1; - a[i__2].r = ajj, a[i__2].i = 0.; - -/* Compute elements J+1:N of column J. */ - - if (j < *n) { - i__2 = j - 1; - zlacgv_(&i__2, &a[j + a_dim1], lda); - i__2 = *n - j; - i__3 = j - 1; - z__1.r = -1., z__1.i = -0.; - zgemv_("No transpose", &i__2, &i__3, &z__1, &a[j + 1 + a_dim1] - , lda, &a[j + a_dim1], lda, &c_b60, &a[j + 1 + j * - a_dim1], &c__1); - i__2 = j - 1; - zlacgv_(&i__2, &a[j + a_dim1], lda); - i__2 = *n - j; - d__1 = 1. 
/ ajj; - zdscal_(&i__2, &d__1, &a[j + 1 + j * a_dim1], &c__1); - } -/* L20: */ - } - } - goto L40; - -L30: - *info = j; - -L40: - return 0; - -/* End of ZPOTF2 */ - -} /* zpotf2_ */ - -/* Subroutine */ int zpotrf_(char *uplo, integer *n, doublecomplex *a, - integer *lda, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - doublecomplex z__1; - - /* Local variables */ - static integer j, jb, nb; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int zgemm_(char *, char *, integer *, integer *, - integer *, doublecomplex *, doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *), zherk_(char *, char *, integer *, - integer *, doublereal *, doublecomplex *, integer *, doublereal *, - doublecomplex *, integer *); - static logical upper; - extern /* Subroutine */ int ztrsm_(char *, char *, char *, char *, - integer *, integer *, doublecomplex *, doublecomplex *, integer *, - doublecomplex *, integer *), - zpotf2_(char *, integer *, doublecomplex *, integer *, integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZPOTRF computes the Cholesky factorization of a complex Hermitian - positive definite matrix A. - - The factorization has the form - A = U**H * U, if UPLO = 'U', or - A = L * L**H, if UPLO = 'L', - where U is an upper triangular matrix and L is lower triangular. - - This is the block version of the algorithm, calling Level 3 BLAS. - - Arguments - ========= - - UPLO (input) CHARACTER*1 - = 'U': Upper triangle of A is stored; - = 'L': Lower triangle of A is stored. - - N (input) INTEGER - The order of the matrix A. N >= 0. 
- - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the Hermitian matrix A. If UPLO = 'U', the leading - N-by-N upper triangular part of A contains the upper - triangular part of the matrix A, and the strictly lower - triangular part of A is not referenced. If UPLO = 'L', the - leading N-by-N lower triangular part of A contains the lower - triangular part of the matrix A, and the strictly upper - triangular part of A is not referenced. - - On exit, if INFO = 0, the factor U or L from the Cholesky - factorization A = U**H*U or A = L*L**H. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: if INFO = i, the leading minor of order i is not - positive definite, and the factorization could not be - completed. - - ===================================================================== - - - Test the input parameters. -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - - /* Function Body */ - *info = 0; - upper = lsame_(uplo, "U"); - if ((! upper && ! lsame_(uplo, "L"))) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*lda < max(1,*n)) { - *info = -4; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZPOTRF", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - -/* Determine the block size for this environment. */ - - nb = ilaenv_(&c__1, "ZPOTRF", uplo, n, &c_n1, &c_n1, &c_n1, (ftnlen)6, ( - ftnlen)1); - if (nb <= 1 || nb >= *n) { - -/* Use unblocked code. */ - - zpotf2_(uplo, n, &a[a_offset], lda, info); - } else { - -/* Use blocked code. */ - - if (upper) { - -/* Compute the Cholesky factorization A = U'*U. */ - - i__1 = *n; - i__2 = nb; - for (j = 1; i__2 < 0 ? j >= i__1 : j <= i__1; j += i__2) { - -/* - Update and factorize the current diagonal block and test - for non-positive-definiteness. 
- - Computing MIN -*/ - i__3 = nb, i__4 = *n - j + 1; - jb = min(i__3,i__4); - i__3 = j - 1; - zherk_("Upper", "Conjugate transpose", &jb, &i__3, &c_b1294, & - a[j * a_dim1 + 1], lda, &c_b1015, &a[j + j * a_dim1], - lda); - zpotf2_("Upper", &jb, &a[j + j * a_dim1], lda, info); - if (*info != 0) { - goto L30; - } - if (j + jb <= *n) { - -/* Compute the current block row. */ - - i__3 = *n - j - jb + 1; - i__4 = j - 1; - z__1.r = -1., z__1.i = -0.; - zgemm_("Conjugate transpose", "No transpose", &jb, &i__3, - &i__4, &z__1, &a[j * a_dim1 + 1], lda, &a[(j + jb) - * a_dim1 + 1], lda, &c_b60, &a[j + (j + jb) * - a_dim1], lda); - i__3 = *n - j - jb + 1; - ztrsm_("Left", "Upper", "Conjugate transpose", "Non-unit", - &jb, &i__3, &c_b60, &a[j + j * a_dim1], lda, &a[ - j + (j + jb) * a_dim1], lda); - } -/* L10: */ - } - - } else { - -/* Compute the Cholesky factorization A = L*L'. */ - - i__2 = *n; - i__1 = nb; - for (j = 1; i__1 < 0 ? j >= i__2 : j <= i__2; j += i__1) { - -/* - Update and factorize the current diagonal block and test - for non-positive-definiteness. - - Computing MIN -*/ - i__3 = nb, i__4 = *n - j + 1; - jb = min(i__3,i__4); - i__3 = j - 1; - zherk_("Lower", "No transpose", &jb, &i__3, &c_b1294, &a[j + - a_dim1], lda, &c_b1015, &a[j + j * a_dim1], lda); - zpotf2_("Lower", &jb, &a[j + j * a_dim1], lda, info); - if (*info != 0) { - goto L30; - } - if (j + jb <= *n) { - -/* Compute the current block column. 
*/ - - i__3 = *n - j - jb + 1; - i__4 = j - 1; - z__1.r = -1., z__1.i = -0.; - zgemm_("No transpose", "Conjugate transpose", &i__3, &jb, - &i__4, &z__1, &a[j + jb + a_dim1], lda, &a[j + - a_dim1], lda, &c_b60, &a[j + jb + j * a_dim1], - lda); - i__3 = *n - j - jb + 1; - ztrsm_("Right", "Lower", "Conjugate transpose", "Non-unit" - , &i__3, &jb, &c_b60, &a[j + j * a_dim1], lda, &a[ - j + jb + j * a_dim1], lda); - } -/* L20: */ - } - } - } - goto L40; - -L30: - *info = *info + j - 1; - -L40: - return 0; - -/* End of ZPOTRF */ - -} /* zpotrf_ */ - -/* Subroutine */ int zstedc_(char *compz, integer *n, doublereal *d__, - doublereal *e, doublecomplex *z__, integer *ldz, doublecomplex *work, - integer *lwork, doublereal *rwork, integer *lrwork, integer *iwork, - integer *liwork, integer *info) -{ - /* System generated locals */ - integer z_dim1, z_offset, i__1, i__2, i__3, i__4; - doublereal d__1, d__2; - - /* Builtin functions */ - double log(doublereal); - integer pow_ii(integer *, integer *); - double sqrt(doublereal); - - /* Local variables */ - static integer i__, j, k, m; - static doublereal p; - static integer ii, ll, end, lgn; - static doublereal eps, tiny; - extern logical lsame_(char *, char *); - static integer lwmin, start; - extern /* Subroutine */ int zswap_(integer *, doublecomplex *, integer *, - doublecomplex *, integer *), zlaed0_(integer *, integer *, - doublereal *, doublereal *, doublecomplex *, integer *, - doublecomplex *, integer *, doublereal *, integer *, integer *); - - extern /* Subroutine */ int dlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, doublereal *, - integer *, integer *), dstedc_(char *, integer *, - doublereal *, doublereal *, doublereal *, integer *, doublereal *, - integer *, integer *, integer *, integer *), dlaset_( - char *, integer *, integer *, doublereal *, doublereal *, - doublereal *, integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer 
*, integer *, - integer *, integer *, ftnlen, ftnlen); - extern doublereal dlanst_(char *, integer *, doublereal *, doublereal *); - extern /* Subroutine */ int dsterf_(integer *, doublereal *, doublereal *, - integer *), zlacrm_(integer *, integer *, doublecomplex *, - integer *, doublereal *, integer *, doublecomplex *, integer *, - doublereal *); - static integer liwmin, icompz; - extern /* Subroutine */ int dsteqr_(char *, integer *, doublereal *, - doublereal *, doublereal *, integer *, doublereal *, integer *), zlacpy_(char *, integer *, integer *, doublecomplex *, - integer *, doublecomplex *, integer *); - static doublereal orgnrm; - static integer lrwmin; - static logical lquery; - static integer smlsiz; - extern /* Subroutine */ int zsteqr_(char *, integer *, doublereal *, - doublereal *, doublecomplex *, integer *, doublereal *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZSTEDC computes all eigenvalues and, optionally, eigenvectors of a - symmetric tridiagonal matrix using the divide and conquer method. - The eigenvectors of a full or band complex Hermitian matrix can also - be found if ZHETRD or ZHPTRD or ZHBTRD has been used to reduce this - matrix to tridiagonal form. - - This code makes very mild assumptions about floating point - arithmetic. It will work on machines with a guard digit in - add/subtract, or on those binary machines without guard digits - which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. - It could conceivably fail on hexadecimal or decimal machines - without guard digits, but we know of none. See DLAED3 for details. - - Arguments - ========= - - COMPZ (input) CHARACTER*1 - = 'N': Compute eigenvalues only. - = 'I': Compute eigenvectors of tridiagonal matrix also. - = 'V': Compute eigenvectors of original Hermitian matrix - also. 
On entry, Z contains the unitary matrix used - to reduce the original matrix to tridiagonal form. - - N (input) INTEGER - The dimension of the symmetric tridiagonal matrix. N >= 0. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the diagonal elements of the tridiagonal matrix. - On exit, if INFO = 0, the eigenvalues in ascending order. - - E (input/output) DOUBLE PRECISION array, dimension (N-1) - On entry, the subdiagonal elements of the tridiagonal matrix. - On exit, E has been destroyed. - - Z (input/output) COMPLEX*16 array, dimension (LDZ,N) - On entry, if COMPZ = 'V', then Z contains the unitary - matrix used in the reduction to tridiagonal form. - On exit, if INFO = 0, then if COMPZ = 'V', Z contains the - orthonormal eigenvectors of the original Hermitian matrix, - and if COMPZ = 'I', Z contains the orthonormal eigenvectors - of the symmetric tridiagonal matrix. - If COMPZ = 'N', then Z is not referenced. - - LDZ (input) INTEGER - The leading dimension of the array Z. LDZ >= 1. - If eigenvectors are desired, then LDZ >= max(1,N). - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. - If COMPZ = 'N' or 'I', or N <= 1, LWORK must be at least 1. - If COMPZ = 'V' and N > 1, LWORK must be at least N*N. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - RWORK (workspace/output) DOUBLE PRECISION array, - dimension (LRWORK) - On exit, if INFO = 0, RWORK(1) returns the optimal LRWORK. - - LRWORK (input) INTEGER - The dimension of the array RWORK. - If COMPZ = 'N' or N <= 1, LRWORK must be at least 1. 
- If COMPZ = 'V' and N > 1, LRWORK must be at least - 1 + 3*N + 2*N*lg N + 3*N**2 , - where lg( N ) = smallest integer k such - that 2**k >= N. - If COMPZ = 'I' and N > 1, LRWORK must be at least - 1 + 4*N + 2*N**2 . - - If LRWORK = -1, then a workspace query is assumed; the - routine only calculates the optimal size of the RWORK array, - returns this value as the first entry of the RWORK array, and - no error message related to LRWORK is issued by XERBLA. - - IWORK (workspace/output) INTEGER array, dimension (LIWORK) - On exit, if INFO = 0, IWORK(1) returns the optimal LIWORK. - - LIWORK (input) INTEGER - The dimension of the array IWORK. - If COMPZ = 'N' or N <= 1, LIWORK must be at least 1. - If COMPZ = 'V' or N > 1, LIWORK must be at least - 6 + 6*N + 5*N*lg N. - If COMPZ = 'I' or N > 1, LIWORK must be at least - 3 + 5*N . - - If LIWORK = -1, then a workspace query is assumed; the - routine only calculates the optimal size of the IWORK array, - returns this value as the first entry of the IWORK array, and - no error message related to LIWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit. - < 0: if INFO = -i, the i-th argument had an illegal value. - > 0: The algorithm failed to compute an eigenvalue while - working on the submatrix lying in rows and columns - INFO/(N+1) through mod(INFO,N+1). - - Further Details - =============== - - Based on contributions by - Jeff Rutter, Computer Science Division, University of California - at Berkeley, USA - - ===================================================================== - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - --d__; - --e; - z_dim1 = *ldz; - z_offset = 1 + z_dim1 * 1; - z__ -= z_offset; - --work; - --rwork; - --iwork; - - /* Function Body */ - *info = 0; - lquery = *lwork == -1 || *lrwork == -1 || *liwork == -1; - - if (lsame_(compz, "N")) { - icompz = 0; - } else if (lsame_(compz, "V")) { - icompz = 1; - } else if (lsame_(compz, "I")) { - icompz = 2; - } else { - icompz = -1; - } - if (*n <= 1 || icompz <= 0) { - lwmin = 1; - liwmin = 1; - lrwmin = 1; - } else { - lgn = (integer) (log((doublereal) (*n)) / log(2.)); - if (pow_ii(&c__2, &lgn) < *n) { - ++lgn; - } - if (pow_ii(&c__2, &lgn) < *n) { - ++lgn; - } - if (icompz == 1) { - lwmin = *n * *n; -/* Computing 2nd power */ - i__1 = *n; - lrwmin = *n * 3 + 1 + ((*n) << (1)) * lgn + i__1 * i__1 * 3; - liwmin = *n * 6 + 6 + *n * 5 * lgn; - } else if (icompz == 2) { - lwmin = 1; -/* Computing 2nd power */ - i__1 = *n; - lrwmin = ((*n) << (2)) + 1 + ((i__1 * i__1) << (1)); - liwmin = *n * 5 + 3; - } - } - if (icompz < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*ldz < 1 || (icompz > 0 && *ldz < max(1,*n))) { - *info = -6; - } else if ((*lwork < lwmin && ! lquery)) { - *info = -8; - } else if ((*lrwork < lrwmin && ! lquery)) { - *info = -10; - } else if ((*liwork < liwmin && ! 
lquery)) { - *info = -12; - } - - if (*info == 0) { - work[1].r = (doublereal) lwmin, work[1].i = 0.; - rwork[1] = (doublereal) lrwmin; - iwork[1] = liwmin; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZSTEDC", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - if (*n == 1) { - if (icompz != 0) { - i__1 = z_dim1 + 1; - z__[i__1].r = 1., z__[i__1].i = 0.; - } - return 0; - } - - smlsiz = ilaenv_(&c__9, "ZSTEDC", " ", &c__0, &c__0, &c__0, &c__0, ( - ftnlen)6, (ftnlen)1); - -/* - If the following conditional clause is removed, then the routine - will use the Divide and Conquer routine to compute only the - eigenvalues, which requires (3N + 3N**2) real workspace and - (2 + 5N + 2N lg(N)) integer workspace. - Since on many architectures DSTERF is much faster than any other - algorithm for finding eigenvalues only, it is used here - as the default. - - If COMPZ = 'N', use DSTERF to compute the eigenvalues. -*/ - - if (icompz == 0) { - dsterf_(n, &d__[1], &e[1], info); - return 0; - } - -/* - If N is smaller than the minimum divide size (SMLSIZ+1), then - solve the problem with another solver. -*/ - - if (*n <= smlsiz) { - if (icompz == 0) { - dsterf_(n, &d__[1], &e[1], info); - return 0; - } else if (icompz == 2) { - zsteqr_("I", n, &d__[1], &e[1], &z__[z_offset], ldz, &rwork[1], - info); - return 0; - } else { - zsteqr_("V", n, &d__[1], &e[1], &z__[z_offset], ldz, &rwork[1], - info); - return 0; - } - } - -/* If COMPZ = 'I', we simply call DSTEDC instead. 
*/ - - if (icompz == 2) { - dlaset_("Full", n, n, &c_b324, &c_b1015, &rwork[1], n); - ll = *n * *n + 1; - i__1 = *lrwork - ll + 1; - dstedc_("I", n, &d__[1], &e[1], &rwork[1], n, &rwork[ll], &i__1, & - iwork[1], liwork, info); - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * z_dim1; - i__4 = (j - 1) * *n + i__; - z__[i__3].r = rwork[i__4], z__[i__3].i = 0.; -/* L10: */ - } -/* L20: */ - } - return 0; - } - -/* - From now on, only option left to be handled is COMPZ = 'V', - i.e. ICOMPZ = 1. - - Scale. -*/ - - orgnrm = dlanst_("M", n, &d__[1], &e[1]); - if (orgnrm == 0.) { - return 0; - } - - eps = EPSILON; - - start = 1; - -/* while ( START <= N ) */ - -L30: - if (start <= *n) { - -/* - Let END be the position of the next subdiagonal entry such that - E( END ) <= TINY or END = N if no such subdiagonal exists. The - matrix identified by the elements between START and END - constitutes an independent sub-problem. -*/ - - end = start; -L40: - if (end < *n) { - tiny = eps * sqrt((d__1 = d__[end], abs(d__1))) * sqrt((d__2 = - d__[end + 1], abs(d__2))); - if ((d__1 = e[end], abs(d__1)) > tiny) { - ++end; - goto L40; - } - } - -/* (Sub) Problem determined. Compute its size and solve it. */ - - m = end - start + 1; - if (m > smlsiz) { - *info = smlsiz; - -/* Scale. */ - - orgnrm = dlanst_("M", &m, &d__[start], &e[start]); - dlascl_("G", &c__0, &c__0, &orgnrm, &c_b1015, &m, &c__1, &d__[ - start], &m, info); - i__1 = m - 1; - i__2 = m - 1; - dlascl_("G", &c__0, &c__0, &orgnrm, &c_b1015, &i__1, &c__1, &e[ - start], &i__2, info); - - zlaed0_(n, &m, &d__[start], &e[start], &z__[start * z_dim1 + 1], - ldz, &work[1], n, &rwork[1], &iwork[1], info); - if (*info > 0) { - *info = (*info / (m + 1) + start - 1) * (*n + 1) + *info % (m - + 1) + start - 1; - return 0; - } - -/* Scale back. 
*/ - - dlascl_("G", &c__0, &c__0, &c_b1015, &orgnrm, &m, &c__1, &d__[ - start], &m, info); - - } else { - dsteqr_("I", &m, &d__[start], &e[start], &rwork[1], &m, &rwork[m * - m + 1], info); - zlacrm_(n, &m, &z__[start * z_dim1 + 1], ldz, &rwork[1], &m, & - work[1], n, &rwork[m * m + 1]); - zlacpy_("A", n, &m, &work[1], n, &z__[start * z_dim1 + 1], ldz); - if (*info > 0) { - *info = start * (*n + 1) + end; - return 0; - } - } - - start = end + 1; - goto L30; - } - -/* - endwhile - - If the problem split any number of times, then the eigenvalues - will not be properly ordered. Here we permute the eigenvalues - (and the associated eigenvectors) into ascending order. -*/ - - if (m != *n) { - -/* Use Selection Sort to minimize swaps of eigenvectors */ - - i__1 = *n; - for (ii = 2; ii <= i__1; ++ii) { - i__ = ii - 1; - k = i__; - p = d__[i__]; - i__2 = *n; - for (j = ii; j <= i__2; ++j) { - if (d__[j] < p) { - k = j; - p = d__[j]; - } -/* L50: */ - } - if (k != i__) { - d__[k] = d__[i__]; - d__[i__] = p; - zswap_(n, &z__[i__ * z_dim1 + 1], &c__1, &z__[k * z_dim1 + 1], - &c__1); - } -/* L60: */ - } - } - - work[1].r = (doublereal) lwmin, work[1].i = 0.; - rwork[1] = (doublereal) lrwmin; - iwork[1] = liwmin; - - return 0; - -/* End of ZSTEDC */ - -} /* zstedc_ */ - -/* Subroutine */ int zsteqr_(char *compz, integer *n, doublereal *d__, - doublereal *e, doublecomplex *z__, integer *ldz, doublereal *work, - integer *info) -{ - /* System generated locals */ - integer z_dim1, z_offset, i__1, i__2; - doublereal d__1, d__2; - - /* Builtin functions */ - double sqrt(doublereal), d_sign(doublereal *, doublereal *); - - /* Local variables */ - static doublereal b, c__, f, g; - static integer i__, j, k, l, m; - static doublereal p, r__, s; - static integer l1, ii, mm, lm1, mm1, nm1; - static doublereal rt1, rt2, eps; - static integer lsv; - static doublereal tst, eps2; - static integer lend, jtot; - extern /* Subroutine */ int dlae2_(doublereal *, doublereal *, doublereal - *, 
doublereal *, doublereal *); - extern logical lsame_(char *, char *); - static doublereal anorm; - extern /* Subroutine */ int zlasr_(char *, char *, char *, integer *, - integer *, doublereal *, doublereal *, doublecomplex *, integer *), zswap_(integer *, doublecomplex *, - integer *, doublecomplex *, integer *), dlaev2_(doublereal *, - doublereal *, doublereal *, doublereal *, doublereal *, - doublereal *, doublereal *); - static integer lendm1, lendp1; - - static integer iscale; - extern /* Subroutine */ int dlascl_(char *, integer *, integer *, - doublereal *, doublereal *, integer *, integer *, doublereal *, - integer *, integer *); - static doublereal safmin; - extern /* Subroutine */ int dlartg_(doublereal *, doublereal *, - doublereal *, doublereal *, doublereal *); - static doublereal safmax; - extern /* Subroutine */ int xerbla_(char *, integer *); - extern doublereal dlanst_(char *, integer *, doublereal *, doublereal *); - extern /* Subroutine */ int dlasrt_(char *, integer *, doublereal *, - integer *); - static integer lendsv; - static doublereal ssfmin; - static integer nmaxit, icompz; - static doublereal ssfmax; - extern /* Subroutine */ int zlaset_(char *, integer *, integer *, - doublecomplex *, doublecomplex *, doublecomplex *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZSTEQR computes all eigenvalues and, optionally, eigenvectors of a - symmetric tridiagonal matrix using the implicit QL or QR method. - The eigenvectors of a full or band complex Hermitian matrix can also - be found if ZHETRD or ZHPTRD or ZHBTRD has been used to reduce this - matrix to tridiagonal form. - - Arguments - ========= - - COMPZ (input) CHARACTER*1 - = 'N': Compute eigenvalues only. - = 'V': Compute eigenvalues and eigenvectors of the original - Hermitian matrix. 
On entry, Z must contain the - unitary matrix used to reduce the original matrix - to tridiagonal form. - = 'I': Compute eigenvalues and eigenvectors of the - tridiagonal matrix. Z is initialized to the identity - matrix. - - N (input) INTEGER - The order of the matrix. N >= 0. - - D (input/output) DOUBLE PRECISION array, dimension (N) - On entry, the diagonal elements of the tridiagonal matrix. - On exit, if INFO = 0, the eigenvalues in ascending order. - - E (input/output) DOUBLE PRECISION array, dimension (N-1) - On entry, the (n-1) subdiagonal elements of the tridiagonal - matrix. - On exit, E has been destroyed. - - Z (input/output) COMPLEX*16 array, dimension (LDZ, N) - On entry, if COMPZ = 'V', then Z contains the unitary - matrix used in the reduction to tridiagonal form. - On exit, if INFO = 0, then if COMPZ = 'V', Z contains the - orthonormal eigenvectors of the original Hermitian matrix, - and if COMPZ = 'I', Z contains the orthonormal eigenvectors - of the symmetric tridiagonal matrix. - If COMPZ = 'N', then Z is not referenced. - - LDZ (input) INTEGER - The leading dimension of the array Z. LDZ >= 1, and if - eigenvectors are desired, then LDZ >= max(1,N). - - WORK (workspace) DOUBLE PRECISION array, dimension (max(1,2*N-2)) - If COMPZ = 'N', then WORK is not referenced. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: the algorithm has failed to find all the eigenvalues in - a total of 30*N iterations; if INFO = i, then i - elements of E have not converged to zero; on exit, D - and E contain the elements of a symmetric tridiagonal - matrix which is unitarily similar to the original - matrix. - - ===================================================================== - - - Test the input parameters. 
-*/ - - /* Parameter adjustments */ - --d__; - --e; - z_dim1 = *ldz; - z_offset = 1 + z_dim1 * 1; - z__ -= z_offset; - --work; - - /* Function Body */ - *info = 0; - - if (lsame_(compz, "N")) { - icompz = 0; - } else if (lsame_(compz, "V")) { - icompz = 1; - } else if (lsame_(compz, "I")) { - icompz = 2; - } else { - icompz = -1; - } - if (icompz < 0) { - *info = -1; - } else if (*n < 0) { - *info = -2; - } else if (*ldz < 1 || (icompz > 0 && *ldz < max(1,*n))) { - *info = -6; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZSTEQR", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - return 0; - } - - if (*n == 1) { - if (icompz == 2) { - i__1 = z_dim1 + 1; - z__[i__1].r = 1., z__[i__1].i = 0.; - } - return 0; - } - -/* Determine the unit roundoff and over/underflow thresholds. */ - - eps = EPSILON; -/* Computing 2nd power */ - d__1 = eps; - eps2 = d__1 * d__1; - safmin = SAFEMINIMUM; - safmax = 1. / safmin; - ssfmax = sqrt(safmax) / 3.; - ssfmin = sqrt(safmin) / eps2; - -/* - Compute the eigenvalues and eigenvectors of the tridiagonal - matrix. -*/ - - if (icompz == 2) { - zlaset_("Full", n, n, &c_b59, &c_b60, &z__[z_offset], ldz); - } - - nmaxit = *n * 30; - jtot = 0; - -/* - Determine where the matrix splits and choose QL or QR iteration - for each block, according to whether top or bottom diagonal - element is smaller. -*/ - - l1 = 1; - nm1 = *n - 1; - -L10: - if (l1 > *n) { - goto L160; - } - if (l1 > 1) { - e[l1 - 1] = 0.; - } - if (l1 <= nm1) { - i__1 = nm1; - for (m = l1; m <= i__1; ++m) { - tst = (d__1 = e[m], abs(d__1)); - if (tst == 0.) 
{ - goto L30; - } - if (tst <= sqrt((d__1 = d__[m], abs(d__1))) * sqrt((d__2 = d__[m - + 1], abs(d__2))) * eps) { - e[m] = 0.; - goto L30; - } -/* L20: */ - } - } - m = *n; - -L30: - l = l1; - lsv = l; - lend = m; - lendsv = lend; - l1 = m + 1; - if (lend == l) { - goto L10; - } - -/* Scale submatrix in rows and columns L to LEND */ - - i__1 = lend - l + 1; - anorm = dlanst_("I", &i__1, &d__[l], &e[l]); - iscale = 0; - if (anorm == 0.) { - goto L10; - } - if (anorm > ssfmax) { - iscale = 1; - i__1 = lend - l + 1; - dlascl_("G", &c__0, &c__0, &anorm, &ssfmax, &i__1, &c__1, &d__[l], n, - info); - i__1 = lend - l; - dlascl_("G", &c__0, &c__0, &anorm, &ssfmax, &i__1, &c__1, &e[l], n, - info); - } else if (anorm < ssfmin) { - iscale = 2; - i__1 = lend - l + 1; - dlascl_("G", &c__0, &c__0, &anorm, &ssfmin, &i__1, &c__1, &d__[l], n, - info); - i__1 = lend - l; - dlascl_("G", &c__0, &c__0, &anorm, &ssfmin, &i__1, &c__1, &e[l], n, - info); - } - -/* Choose between QL and QR iteration */ - - if ((d__1 = d__[lend], abs(d__1)) < (d__2 = d__[l], abs(d__2))) { - lend = lsv; - l = lendsv; - } - - if (lend > l) { - -/* - QL Iteration - - Look for small subdiagonal element. -*/ - -L40: - if (l != lend) { - lendm1 = lend - 1; - i__1 = lendm1; - for (m = l; m <= i__1; ++m) { -/* Computing 2nd power */ - d__2 = (d__1 = e[m], abs(d__1)); - tst = d__2 * d__2; - if (tst <= eps2 * (d__1 = d__[m], abs(d__1)) * (d__2 = d__[m - + 1], abs(d__2)) + safmin) { - goto L60; - } -/* L50: */ - } - } - - m = lend; - -L60: - if (m < lend) { - e[m] = 0.; - } - p = d__[l]; - if (m == l) { - goto L80; - } - -/* - If remaining matrix is 2-by-2, use DLAE2 or SLAEV2 - to compute its eigensystem. 
-*/ - - if (m == l + 1) { - if (icompz > 0) { - dlaev2_(&d__[l], &e[l], &d__[l + 1], &rt1, &rt2, &c__, &s); - work[l] = c__; - work[*n - 1 + l] = s; - zlasr_("R", "V", "B", n, &c__2, &work[l], &work[*n - 1 + l], & - z__[l * z_dim1 + 1], ldz); - } else { - dlae2_(&d__[l], &e[l], &d__[l + 1], &rt1, &rt2); - } - d__[l] = rt1; - d__[l + 1] = rt2; - e[l] = 0.; - l += 2; - if (l <= lend) { - goto L40; - } - goto L140; - } - - if (jtot == nmaxit) { - goto L140; - } - ++jtot; - -/* Form shift. */ - - g = (d__[l + 1] - p) / (e[l] * 2.); - r__ = dlapy2_(&g, &c_b1015); - g = d__[m] - p + e[l] / (g + d_sign(&r__, &g)); - - s = 1.; - c__ = 1.; - p = 0.; - -/* Inner loop */ - - mm1 = m - 1; - i__1 = l; - for (i__ = mm1; i__ >= i__1; --i__) { - f = s * e[i__]; - b = c__ * e[i__]; - dlartg_(&g, &f, &c__, &s, &r__); - if (i__ != m - 1) { - e[i__ + 1] = r__; - } - g = d__[i__ + 1] - p; - r__ = (d__[i__] - g) * s + c__ * 2. * b; - p = s * r__; - d__[i__ + 1] = g + p; - g = c__ * r__ - b; - -/* If eigenvectors are desired, then save rotations. */ - - if (icompz > 0) { - work[i__] = c__; - work[*n - 1 + i__] = -s; - } - -/* L70: */ - } - -/* If eigenvectors are desired, then apply saved rotations. */ - - if (icompz > 0) { - mm = m - l + 1; - zlasr_("R", "V", "B", n, &mm, &work[l], &work[*n - 1 + l], &z__[l - * z_dim1 + 1], ldz); - } - - d__[l] -= p; - e[l] = g; - goto L40; - -/* Eigenvalue found. */ - -L80: - d__[l] = p; - - ++l; - if (l <= lend) { - goto L40; - } - goto L140; - - } else { - -/* - QR Iteration - - Look for small superdiagonal element. 
-*/ - -L90: - if (l != lend) { - lendp1 = lend + 1; - i__1 = lendp1; - for (m = l; m >= i__1; --m) { -/* Computing 2nd power */ - d__2 = (d__1 = e[m - 1], abs(d__1)); - tst = d__2 * d__2; - if (tst <= eps2 * (d__1 = d__[m], abs(d__1)) * (d__2 = d__[m - - 1], abs(d__2)) + safmin) { - goto L110; - } -/* L100: */ - } - } - - m = lend; - -L110: - if (m > lend) { - e[m - 1] = 0.; - } - p = d__[l]; - if (m == l) { - goto L130; - } - -/* - If remaining matrix is 2-by-2, use DLAE2 or SLAEV2 - to compute its eigensystem. -*/ - - if (m == l - 1) { - if (icompz > 0) { - dlaev2_(&d__[l - 1], &e[l - 1], &d__[l], &rt1, &rt2, &c__, &s) - ; - work[m] = c__; - work[*n - 1 + m] = s; - zlasr_("R", "V", "F", n, &c__2, &work[m], &work[*n - 1 + m], & - z__[(l - 1) * z_dim1 + 1], ldz); - } else { - dlae2_(&d__[l - 1], &e[l - 1], &d__[l], &rt1, &rt2); - } - d__[l - 1] = rt1; - d__[l] = rt2; - e[l - 1] = 0.; - l += -2; - if (l >= lend) { - goto L90; - } - goto L140; - } - - if (jtot == nmaxit) { - goto L140; - } - ++jtot; - -/* Form shift. */ - - g = (d__[l - 1] - p) / (e[l - 1] * 2.); - r__ = dlapy2_(&g, &c_b1015); - g = d__[m] - p + e[l - 1] / (g + d_sign(&r__, &g)); - - s = 1.; - c__ = 1.; - p = 0.; - -/* Inner loop */ - - lm1 = l - 1; - i__1 = lm1; - for (i__ = m; i__ <= i__1; ++i__) { - f = s * e[i__]; - b = c__ * e[i__]; - dlartg_(&g, &f, &c__, &s, &r__); - if (i__ != m) { - e[i__ - 1] = r__; - } - g = d__[i__] - p; - r__ = (d__[i__ + 1] - g) * s + c__ * 2. * b; - p = s * r__; - d__[i__] = g + p; - g = c__ * r__ - b; - -/* If eigenvectors are desired, then save rotations. */ - - if (icompz > 0) { - work[i__] = c__; - work[*n - 1 + i__] = s; - } - -/* L120: */ - } - -/* If eigenvectors are desired, then apply saved rotations. */ - - if (icompz > 0) { - mm = l - m + 1; - zlasr_("R", "V", "F", n, &mm, &work[m], &work[*n - 1 + m], &z__[m - * z_dim1 + 1], ldz); - } - - d__[l] -= p; - e[lm1] = g; - goto L90; - -/* Eigenvalue found. 
*/ - -L130: - d__[l] = p; - - --l; - if (l >= lend) { - goto L90; - } - goto L140; - - } - -/* Undo scaling if necessary */ - -L140: - if (iscale == 1) { - i__1 = lendsv - lsv + 1; - dlascl_("G", &c__0, &c__0, &ssfmax, &anorm, &i__1, &c__1, &d__[lsv], - n, info); - i__1 = lendsv - lsv; - dlascl_("G", &c__0, &c__0, &ssfmax, &anorm, &i__1, &c__1, &e[lsv], n, - info); - } else if (iscale == 2) { - i__1 = lendsv - lsv + 1; - dlascl_("G", &c__0, &c__0, &ssfmin, &anorm, &i__1, &c__1, &d__[lsv], - n, info); - i__1 = lendsv - lsv; - dlascl_("G", &c__0, &c__0, &ssfmin, &anorm, &i__1, &c__1, &e[lsv], n, - info); - } - -/* - Check for no convergence to an eigenvalue after a total - of N*MAXIT iterations. -*/ - - if (jtot == nmaxit) { - i__1 = *n - 1; - for (i__ = 1; i__ <= i__1; ++i__) { - if (e[i__] != 0.) { - ++(*info); - } -/* L150: */ - } - return 0; - } - goto L10; - -/* Order eigenvalues and eigenvectors. */ - -L160: - if (icompz == 0) { - -/* Use Quick Sort */ - - dlasrt_("I", n, &d__[1], info); - - } else { - -/* Use Selection Sort to minimize swaps of eigenvectors */ - - i__1 = *n; - for (ii = 2; ii <= i__1; ++ii) { - i__ = ii - 1; - k = i__; - p = d__[i__]; - i__2 = *n; - for (j = ii; j <= i__2; ++j) { - if (d__[j] < p) { - k = j; - p = d__[j]; - } -/* L170: */ - } - if (k != i__) { - d__[k] = d__[i__]; - d__[i__] = p; - zswap_(n, &z__[i__ * z_dim1 + 1], &c__1, &z__[k * z_dim1 + 1], - &c__1); - } -/* L180: */ - } - } - return 0; - -/* End of ZSTEQR */ - -} /* zsteqr_ */ - -/* Subroutine */ int ztrevc_(char *side, char *howmny, logical *select, - integer *n, doublecomplex *t, integer *ldt, doublecomplex *vl, - integer *ldvl, doublecomplex *vr, integer *ldvr, integer *mm, integer - *m, doublecomplex *work, doublereal *rwork, integer *info) -{ - /* System generated locals */ - integer t_dim1, t_offset, vl_dim1, vl_offset, vr_dim1, vr_offset, i__1, - i__2, i__3, i__4, i__5; - doublereal d__1, d__2, d__3; - doublecomplex z__1, z__2; - - /* Builtin functions */ - double 
d_imag(doublecomplex *); - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, j, k, ii, ki, is; - static doublereal ulp; - static logical allv; - static doublereal unfl, ovfl, smin; - static logical over; - static doublereal scale; - extern logical lsame_(char *, char *); - static doublereal remax; - static logical leftv, bothv; - extern /* Subroutine */ int zgemv_(char *, integer *, integer *, - doublecomplex *, doublecomplex *, integer *, doublecomplex *, - integer *, doublecomplex *, doublecomplex *, integer *); - static logical somev; - extern /* Subroutine */ int zcopy_(integer *, doublecomplex *, integer *, - doublecomplex *, integer *), dlabad_(doublereal *, doublereal *); - - extern /* Subroutine */ int xerbla_(char *, integer *), zdscal_( - integer *, doublereal *, doublecomplex *, integer *); - extern integer izamax_(integer *, doublecomplex *, integer *); - static logical rightv; - extern doublereal dzasum_(integer *, doublecomplex *, integer *); - static doublereal smlnum; - extern /* Subroutine */ int zlatrs_(char *, char *, char *, char *, - integer *, doublecomplex *, integer *, doublecomplex *, - doublereal *, doublereal *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZTREVC computes some or all of the right and/or left eigenvectors of - a complex upper triangular matrix T. - - The right eigenvector x and the left eigenvector y of T corresponding - to an eigenvalue w are defined by: - - T*x = w*x, y'*T = w*y' - - where y' denotes the conjugate transpose of the vector y. - - If all eigenvectors are requested, the routine may either return the - matrices X and/or Y of right or left eigenvectors of T, or the - products Q*X and/or Q*Y, where Q is an input unitary - matrix. 
If T was obtained from the Schur factorization of an - original matrix A = Q*T*Q', then Q*X and Q*Y are the matrices of - right or left eigenvectors of A. - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'R': compute right eigenvectors only; - = 'L': compute left eigenvectors only; - = 'B': compute both right and left eigenvectors. - - HOWMNY (input) CHARACTER*1 - = 'A': compute all right and/or left eigenvectors; - = 'B': compute all right and/or left eigenvectors, - and backtransform them using the input matrices - supplied in VR and/or VL; - = 'S': compute selected right and/or left eigenvectors, - specified by the logical array SELECT. - - SELECT (input) LOGICAL array, dimension (N) - If HOWMNY = 'S', SELECT specifies the eigenvectors to be - computed. - If HOWMNY = 'A' or 'B', SELECT is not referenced. - To select the eigenvector corresponding to the j-th - eigenvalue, SELECT(j) must be set to .TRUE.. - - N (input) INTEGER - The order of the matrix T. N >= 0. - - T (input/output) COMPLEX*16 array, dimension (LDT,N) - The upper triangular matrix T. T is modified, but restored - on exit. - - LDT (input) INTEGER - The leading dimension of the array T. LDT >= max(1,N). - - VL (input/output) COMPLEX*16 array, dimension (LDVL,MM) - On entry, if SIDE = 'L' or 'B' and HOWMNY = 'B', VL must - contain an N-by-N matrix Q (usually the unitary matrix Q of - Schur vectors returned by ZHSEQR). - On exit, if SIDE = 'L' or 'B', VL contains: - if HOWMNY = 'A', the matrix Y of left eigenvectors of T; - VL is lower triangular. The i-th column - VL(i) of VL is the eigenvector corresponding - to T(i,i). - if HOWMNY = 'B', the matrix Q*Y; - if HOWMNY = 'S', the left eigenvectors of T specified by - SELECT, stored consecutively in the columns - of VL, in the same order as their - eigenvalues. - If SIDE = 'R', VL is not referenced. - - LDVL (input) INTEGER - The leading dimension of the array VL. LDVL >= max(1,N) if - SIDE = 'L' or 'B'; LDVL >= 1 otherwise. 
-
- VR (input/output) COMPLEX*16 array, dimension (LDVR,MM)
- On entry, if SIDE = 'R' or 'B' and HOWMNY = 'B', VR must
- contain an N-by-N matrix Q (usually the unitary matrix Q of
- Schur vectors returned by ZHSEQR).
- On exit, if SIDE = 'R' or 'B', VR contains:
- if HOWMNY = 'A', the matrix X of right eigenvectors of T;
- VR is upper triangular. The i-th column
- VR(i) of VR is the eigenvector corresponding
- to T(i,i).
- if HOWMNY = 'B', the matrix Q*X;
- if HOWMNY = 'S', the right eigenvectors of T specified by
- SELECT, stored consecutively in the columns
- of VR, in the same order as their
- eigenvalues.
- If SIDE = 'L', VR is not referenced.
-
- LDVR (input) INTEGER
- The leading dimension of the array VR. LDVR >= max(1,N) if
- SIDE = 'R' or 'B'; LDVR >= 1 otherwise.
-
- MM (input) INTEGER
- The number of columns in the arrays VL and/or VR. MM >= M.
-
- M (output) INTEGER
- The number of columns in the arrays VL and/or VR actually
- used to store the eigenvectors. If HOWMNY = 'A' or 'B', M
- is set to N. Each selected eigenvector occupies one
- column.
-
- WORK (workspace) COMPLEX*16 array, dimension (2*N)
-
- RWORK (workspace) DOUBLE PRECISION array, dimension (N)
-
- INFO (output) INTEGER
- = 0: successful exit
- < 0: if INFO = -i, the i-th argument had an illegal value
-
- Further Details
- ===============
-
- The algorithm used in this program is basically backward (forward)
- substitution, with scaling to make the code robust against
- possible overflow.
-
- Each eigenvector is normalized so that the element of largest
- magnitude has magnitude 1; here the magnitude of a complex number
- (x,y) is taken to be |x| + |y|. 
- - ===================================================================== - - - Decode and test the input parameters -*/ - - /* Parameter adjustments */ - --select; - t_dim1 = *ldt; - t_offset = 1 + t_dim1 * 1; - t -= t_offset; - vl_dim1 = *ldvl; - vl_offset = 1 + vl_dim1 * 1; - vl -= vl_offset; - vr_dim1 = *ldvr; - vr_offset = 1 + vr_dim1 * 1; - vr -= vr_offset; - --work; - --rwork; - - /* Function Body */ - bothv = lsame_(side, "B"); - rightv = lsame_(side, "R") || bothv; - leftv = lsame_(side, "L") || bothv; - - allv = lsame_(howmny, "A"); - over = lsame_(howmny, "B"); - somev = lsame_(howmny, "S"); - -/* - Set M to the number of columns required to store the selected - eigenvectors. -*/ - - if (somev) { - *m = 0; - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - if (select[j]) { - ++(*m); - } -/* L10: */ - } - } else { - *m = *n; - } - - *info = 0; - if ((! rightv && ! leftv)) { - *info = -1; - } else if (((! allv && ! over) && ! somev)) { - *info = -2; - } else if (*n < 0) { - *info = -4; - } else if (*ldt < max(1,*n)) { - *info = -6; - } else if (*ldvl < 1 || (leftv && *ldvl < *n)) { - *info = -8; - } else if (*ldvr < 1 || (rightv && *ldvr < *n)) { - *info = -10; - } else if (*mm < *m) { - *info = -11; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZTREVC", &i__1); - return 0; - } - -/* Quick return if possible. */ - - if (*n == 0) { - return 0; - } - -/* Set the constants to control overflow. */ - - unfl = SAFEMINIMUM; - ovfl = 1. / unfl; - dlabad_(&unfl, &ovfl); - ulp = PRECISION; - smlnum = unfl * (*n / ulp); - -/* Store the diagonal elements of T in working array WORK. */ - - i__1 = *n; - for (i__ = 1; i__ <= i__1; ++i__) { - i__2 = i__ + *n; - i__3 = i__ + i__ * t_dim1; - work[i__2].r = t[i__3].r, work[i__2].i = t[i__3].i; -/* L20: */ - } - -/* - Compute 1-norm of each column of strictly upper triangular - part of T to control overflow in triangular solver. 
-*/ - - rwork[1] = 0.; - i__1 = *n; - for (j = 2; j <= i__1; ++j) { - i__2 = j - 1; - rwork[j] = dzasum_(&i__2, &t[j * t_dim1 + 1], &c__1); -/* L30: */ - } - - if (rightv) { - -/* Compute right eigenvectors. */ - - is = *m; - for (ki = *n; ki >= 1; --ki) { - - if (somev) { - if (! select[ki]) { - goto L80; - } - } -/* Computing MAX */ - i__1 = ki + ki * t_dim1; - d__3 = ulp * ((d__1 = t[i__1].r, abs(d__1)) + (d__2 = d_imag(&t[ - ki + ki * t_dim1]), abs(d__2))); - smin = max(d__3,smlnum); - - work[1].r = 1., work[1].i = 0.; - -/* Form right-hand side. */ - - i__1 = ki - 1; - for (k = 1; k <= i__1; ++k) { - i__2 = k; - i__3 = k + ki * t_dim1; - z__1.r = -t[i__3].r, z__1.i = -t[i__3].i; - work[i__2].r = z__1.r, work[i__2].i = z__1.i; -/* L40: */ - } - -/* - Solve the triangular system: - (T(1:KI-1,1:KI-1) - T(KI,KI))*X = SCALE*WORK. -*/ - - i__1 = ki - 1; - for (k = 1; k <= i__1; ++k) { - i__2 = k + k * t_dim1; - i__3 = k + k * t_dim1; - i__4 = ki + ki * t_dim1; - z__1.r = t[i__3].r - t[i__4].r, z__1.i = t[i__3].i - t[i__4] - .i; - t[i__2].r = z__1.r, t[i__2].i = z__1.i; - i__2 = k + k * t_dim1; - if ((d__1 = t[i__2].r, abs(d__1)) + (d__2 = d_imag(&t[k + k * - t_dim1]), abs(d__2)) < smin) { - i__3 = k + k * t_dim1; - t[i__3].r = smin, t[i__3].i = 0.; - } -/* L50: */ - } - - if (ki > 1) { - i__1 = ki - 1; - zlatrs_("Upper", "No transpose", "Non-unit", "Y", &i__1, &t[ - t_offset], ldt, &work[1], &scale, &rwork[1], info); - i__1 = ki; - work[i__1].r = scale, work[i__1].i = 0.; - } - -/* Copy the vector x or Q*x to VR and normalize. */ - - if (! over) { - zcopy_(&ki, &work[1], &c__1, &vr[is * vr_dim1 + 1], &c__1); - - ii = izamax_(&ki, &vr[is * vr_dim1 + 1], &c__1); - i__1 = ii + is * vr_dim1; - remax = 1. 
/ ((d__1 = vr[i__1].r, abs(d__1)) + (d__2 = d_imag( - &vr[ii + is * vr_dim1]), abs(d__2))); - zdscal_(&ki, &remax, &vr[is * vr_dim1 + 1], &c__1); - - i__1 = *n; - for (k = ki + 1; k <= i__1; ++k) { - i__2 = k + is * vr_dim1; - vr[i__2].r = 0., vr[i__2].i = 0.; -/* L60: */ - } - } else { - if (ki > 1) { - i__1 = ki - 1; - z__1.r = scale, z__1.i = 0.; - zgemv_("N", n, &i__1, &c_b60, &vr[vr_offset], ldvr, &work[ - 1], &c__1, &z__1, &vr[ki * vr_dim1 + 1], &c__1); - } - - ii = izamax_(n, &vr[ki * vr_dim1 + 1], &c__1); - i__1 = ii + ki * vr_dim1; - remax = 1. / ((d__1 = vr[i__1].r, abs(d__1)) + (d__2 = d_imag( - &vr[ii + ki * vr_dim1]), abs(d__2))); - zdscal_(n, &remax, &vr[ki * vr_dim1 + 1], &c__1); - } - -/* Set back the original diagonal elements of T. */ - - i__1 = ki - 1; - for (k = 1; k <= i__1; ++k) { - i__2 = k + k * t_dim1; - i__3 = k + *n; - t[i__2].r = work[i__3].r, t[i__2].i = work[i__3].i; -/* L70: */ - } - - --is; -L80: - ; - } - } - - if (leftv) { - -/* Compute left eigenvectors. */ - - is = 1; - i__1 = *n; - for (ki = 1; ki <= i__1; ++ki) { - - if (somev) { - if (! select[ki]) { - goto L130; - } - } -/* Computing MAX */ - i__2 = ki + ki * t_dim1; - d__3 = ulp * ((d__1 = t[i__2].r, abs(d__1)) + (d__2 = d_imag(&t[ - ki + ki * t_dim1]), abs(d__2))); - smin = max(d__3,smlnum); - - i__2 = *n; - work[i__2].r = 1., work[i__2].i = 0.; - -/* Form right-hand side. */ - - i__2 = *n; - for (k = ki + 1; k <= i__2; ++k) { - i__3 = k; - d_cnjg(&z__2, &t[ki + k * t_dim1]); - z__1.r = -z__2.r, z__1.i = -z__2.i; - work[i__3].r = z__1.r, work[i__3].i = z__1.i; -/* L90: */ - } - -/* - Solve the triangular system: - (T(KI+1:N,KI+1:N) - T(KI,KI))'*X = SCALE*WORK. 
-*/ - - i__2 = *n; - for (k = ki + 1; k <= i__2; ++k) { - i__3 = k + k * t_dim1; - i__4 = k + k * t_dim1; - i__5 = ki + ki * t_dim1; - z__1.r = t[i__4].r - t[i__5].r, z__1.i = t[i__4].i - t[i__5] - .i; - t[i__3].r = z__1.r, t[i__3].i = z__1.i; - i__3 = k + k * t_dim1; - if ((d__1 = t[i__3].r, abs(d__1)) + (d__2 = d_imag(&t[k + k * - t_dim1]), abs(d__2)) < smin) { - i__4 = k + k * t_dim1; - t[i__4].r = smin, t[i__4].i = 0.; - } -/* L100: */ - } - - if (ki < *n) { - i__2 = *n - ki; - zlatrs_("Upper", "Conjugate transpose", "Non-unit", "Y", & - i__2, &t[ki + 1 + (ki + 1) * t_dim1], ldt, &work[ki + - 1], &scale, &rwork[1], info); - i__2 = ki; - work[i__2].r = scale, work[i__2].i = 0.; - } - -/* Copy the vector x or Q*x to VL and normalize. */ - - if (! over) { - i__2 = *n - ki + 1; - zcopy_(&i__2, &work[ki], &c__1, &vl[ki + is * vl_dim1], &c__1) - ; - - i__2 = *n - ki + 1; - ii = izamax_(&i__2, &vl[ki + is * vl_dim1], &c__1) + ki - 1; - i__2 = ii + is * vl_dim1; - remax = 1. / ((d__1 = vl[i__2].r, abs(d__1)) + (d__2 = d_imag( - &vl[ii + is * vl_dim1]), abs(d__2))); - i__2 = *n - ki + 1; - zdscal_(&i__2, &remax, &vl[ki + is * vl_dim1], &c__1); - - i__2 = ki - 1; - for (k = 1; k <= i__2; ++k) { - i__3 = k + is * vl_dim1; - vl[i__3].r = 0., vl[i__3].i = 0.; -/* L110: */ - } - } else { - if (ki < *n) { - i__2 = *n - ki; - z__1.r = scale, z__1.i = 0.; - zgemv_("N", n, &i__2, &c_b60, &vl[(ki + 1) * vl_dim1 + 1], - ldvl, &work[ki + 1], &c__1, &z__1, &vl[ki * - vl_dim1 + 1], &c__1); - } - - ii = izamax_(n, &vl[ki * vl_dim1 + 1], &c__1); - i__2 = ii + ki * vl_dim1; - remax = 1. / ((d__1 = vl[i__2].r, abs(d__1)) + (d__2 = d_imag( - &vl[ii + ki * vl_dim1]), abs(d__2))); - zdscal_(n, &remax, &vl[ki * vl_dim1 + 1], &c__1); - } - -/* Set back the original diagonal elements of T. 
*/ - - i__2 = *n; - for (k = ki + 1; k <= i__2; ++k) { - i__3 = k + k * t_dim1; - i__4 = k + *n; - t[i__3].r = work[i__4].r, t[i__3].i = work[i__4].i; -/* L120: */ - } - - ++is; -L130: - ; - } - } - - return 0; - -/* End of ZTREVC */ - -} /* ztrevc_ */ - -/* Subroutine */ int zung2r_(integer *m, integer *n, integer *k, - doublecomplex *a, integer *lda, doublecomplex *tau, doublecomplex * - work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - doublecomplex z__1; - - /* Local variables */ - static integer i__, j, l; - extern /* Subroutine */ int zscal_(integer *, doublecomplex *, - doublecomplex *, integer *), zlarf_(char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *), xerbla_(char *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZUNG2R generates an m by n complex matrix Q with orthonormal columns, - which is defined as the first n columns of a product of k elementary - reflectors of order m - - Q = H(1) H(2) . . . H(k) - - as returned by ZGEQRF. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix Q. M >= 0. - - N (input) INTEGER - The number of columns of the matrix Q. M >= N >= 0. - - K (input) INTEGER - The number of elementary reflectors whose product defines the - matrix Q. N >= K >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the i-th column must contain the vector which - defines the elementary reflector H(i), for i = 1,2,...,k, as - returned by ZGEQRF in the first k columns of its array - argument A. - On exit, the m by n matrix Q. - - LDA (input) INTEGER - The first dimension of the array A. LDA >= max(1,M). 
- - TAU (input) COMPLEX*16 array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by ZGEQRF. - - WORK (workspace) COMPLEX*16 array, dimension (N) - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument has an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - if (*m < 0) { - *info = -1; - } else if (*n < 0 || *n > *m) { - *info = -2; - } else if (*k < 0 || *k > *n) { - *info = -3; - } else if (*lda < max(1,*m)) { - *info = -5; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZUNG2R", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*n <= 0) { - return 0; - } - -/* Initialise columns k+1:n to columns of the unit matrix */ - - i__1 = *n; - for (j = *k + 1; j <= i__1; ++j) { - i__2 = *m; - for (l = 1; l <= i__2; ++l) { - i__3 = l + j * a_dim1; - a[i__3].r = 0., a[i__3].i = 0.; -/* L10: */ - } - i__2 = j + j * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; -/* L20: */ - } - - for (i__ = *k; i__ >= 1; --i__) { - -/* Apply H(i) to A(i:m,i:n) from the left */ - - if (i__ < *n) { - i__1 = i__ + i__ * a_dim1; - a[i__1].r = 1., a[i__1].i = 0.; - i__1 = *m - i__ + 1; - i__2 = *n - i__; - zlarf_("Left", &i__1, &i__2, &a[i__ + i__ * a_dim1], &c__1, &tau[ - i__], &a[i__ + (i__ + 1) * a_dim1], lda, &work[1]); - } - if (i__ < *m) { - i__1 = *m - i__; - i__2 = i__; - z__1.r = -tau[i__2].r, z__1.i = -tau[i__2].i; - zscal_(&i__1, &z__1, &a[i__ + 1 + i__ * a_dim1], &c__1); - } - i__1 = i__ + i__ * a_dim1; - i__2 = i__; - z__1.r = 1. - tau[i__2].r, z__1.i = 0. 
- tau[i__2].i; - a[i__1].r = z__1.r, a[i__1].i = z__1.i; - -/* Set A(1:i-1,i) to zero */ - - i__1 = i__ - 1; - for (l = 1; l <= i__1; ++l) { - i__2 = l + i__ * a_dim1; - a[i__2].r = 0., a[i__2].i = 0.; -/* L30: */ - } -/* L40: */ - } - return 0; - -/* End of ZUNG2R */ - -} /* zung2r_ */ - -/* Subroutine */ int zungbr_(char *vect, integer *m, integer *n, integer *k, - doublecomplex *a, integer *lda, doublecomplex *tau, doublecomplex * - work, integer *lwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - - /* Local variables */ - static integer i__, j, nb, mn; - extern logical lsame_(char *, char *); - static integer iinfo; - static logical wantq; - extern /* Subroutine */ int xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static integer lwkopt; - static logical lquery; - extern /* Subroutine */ int zunglq_(integer *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, integer *), zungqr_(integer *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZUNGBR generates one of the complex unitary matrices Q or P**H - determined by ZGEBRD when reducing a complex matrix A to bidiagonal - form: A = Q * B * P**H. Q and P**H are defined as products of - elementary reflectors H(i) or G(i) respectively. - - If VECT = 'Q', A is assumed to have been an M-by-K matrix, and Q - is of order M: - if m >= k, Q = H(1) H(2) . . . H(k) and ZUNGBR returns the first n - columns of Q, where m >= n >= k; - if m < k, Q = H(1) H(2) . . . H(m-1) and ZUNGBR returns Q as an - M-by-M matrix. 
- - If VECT = 'P', A is assumed to have been a K-by-N matrix, and P**H - is of order N: - if k < n, P**H = G(k) . . . G(2) G(1) and ZUNGBR returns the first m - rows of P**H, where n >= m >= k; - if k >= n, P**H = G(n-1) . . . G(2) G(1) and ZUNGBR returns P**H as - an N-by-N matrix. - - Arguments - ========= - - VECT (input) CHARACTER*1 - Specifies whether the matrix Q or the matrix P**H is - required, as defined in the transformation applied by ZGEBRD: - = 'Q': generate Q; - = 'P': generate P**H. - - M (input) INTEGER - The number of rows of the matrix Q or P**H to be returned. - M >= 0. - - N (input) INTEGER - The number of columns of the matrix Q or P**H to be returned. - N >= 0. - If VECT = 'Q', M >= N >= min(M,K); - if VECT = 'P', N >= M >= min(N,K). - - K (input) INTEGER - If VECT = 'Q', the number of columns in the original M-by-K - matrix reduced by ZGEBRD. - If VECT = 'P', the number of rows in the original K-by-N - matrix reduced by ZGEBRD. - K >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the vectors which define the elementary reflectors, - as returned by ZGEBRD. - On exit, the M-by-N matrix Q or P**H. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= M. - - TAU (input) COMPLEX*16 array, dimension - (min(M,K)) if VECT = 'Q' - (min(N,K)) if VECT = 'P' - TAU(i) must contain the scalar factor of the elementary - reflector H(i) or G(i), which determines Q or P**H, as - returned by ZGEBRD in its array argument TAUQ or TAUP. - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= max(1,min(M,N)). - For optimum performance LWORK >= min(M,N)*NB, where NB - is the optimal blocksize. 
- - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - wantq = lsame_(vect, "Q"); - mn = min(*m,*n); - lquery = *lwork == -1; - if ((! wantq && ! lsame_(vect, "P"))) { - *info = -1; - } else if (*m < 0) { - *info = -2; - } else if (*n < 0 || (wantq && (*n > *m || *n < min(*m,*k))) || (! wantq - && (*m > *n || *m < min(*n,*k)))) { - *info = -3; - } else if (*k < 0) { - *info = -4; - } else if (*lda < max(1,*m)) { - *info = -6; - } else if ((*lwork < max(1,mn) && ! 
lquery)) { - *info = -9; - } - - if (*info == 0) { - if (wantq) { - nb = ilaenv_(&c__1, "ZUNGQR", " ", m, n, k, &c_n1, (ftnlen)6, ( - ftnlen)1); - } else { - nb = ilaenv_(&c__1, "ZUNGLQ", " ", m, n, k, &c_n1, (ftnlen)6, ( - ftnlen)1); - } - lwkopt = max(1,mn) * nb; - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZUNGBR", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0) { - work[1].r = 1., work[1].i = 0.; - return 0; - } - - if (wantq) { - -/* - Form Q, determined by a call to ZGEBRD to reduce an m-by-k - matrix -*/ - - if (*m >= *k) { - -/* If m >= k, assume m >= n >= k */ - - zungqr_(m, n, k, &a[a_offset], lda, &tau[1], &work[1], lwork, & - iinfo); - - } else { - -/* - If m < k, assume m = n - - Shift the vectors which define the elementary reflectors one - column to the right, and set the first row and column of Q - to those of the unit matrix -*/ - - for (j = *m; j >= 2; --j) { - i__1 = j * a_dim1 + 1; - a[i__1].r = 0., a[i__1].i = 0.; - i__1 = *m; - for (i__ = j + 1; i__ <= i__1; ++i__) { - i__2 = i__ + j * a_dim1; - i__3 = i__ + (j - 1) * a_dim1; - a[i__2].r = a[i__3].r, a[i__2].i = a[i__3].i; -/* L10: */ - } -/* L20: */ - } - i__1 = a_dim1 + 1; - a[i__1].r = 1., a[i__1].i = 0.; - i__1 = *m; - for (i__ = 2; i__ <= i__1; ++i__) { - i__2 = i__ + a_dim1; - a[i__2].r = 0., a[i__2].i = 0.; -/* L30: */ - } - if (*m > 1) { - -/* Form Q(2:m,2:m) */ - - i__1 = *m - 1; - i__2 = *m - 1; - i__3 = *m - 1; - zungqr_(&i__1, &i__2, &i__3, &a[((a_dim1) << (1)) + 2], lda, & - tau[1], &work[1], lwork, &iinfo); - } - } - } else { - -/* - Form P', determined by a call to ZGEBRD to reduce a k-by-n - matrix -*/ - - if (*k < *n) { - -/* If k < n, assume k <= m <= n */ - - zunglq_(m, n, k, &a[a_offset], lda, &tau[1], &work[1], lwork, & - iinfo); - - } else { - -/* - If k >= n, assume m = n - - Shift the vectors which define the elementary reflectors one - 
row downward, and set the first row and column of P' to - those of the unit matrix -*/ - - i__1 = a_dim1 + 1; - a[i__1].r = 1., a[i__1].i = 0.; - i__1 = *n; - for (i__ = 2; i__ <= i__1; ++i__) { - i__2 = i__ + a_dim1; - a[i__2].r = 0., a[i__2].i = 0.; -/* L40: */ - } - i__1 = *n; - for (j = 2; j <= i__1; ++j) { - for (i__ = j - 1; i__ >= 2; --i__) { - i__2 = i__ + j * a_dim1; - i__3 = i__ - 1 + j * a_dim1; - a[i__2].r = a[i__3].r, a[i__2].i = a[i__3].i; -/* L50: */ - } - i__2 = j * a_dim1 + 1; - a[i__2].r = 0., a[i__2].i = 0.; -/* L60: */ - } - if (*n > 1) { - -/* Form P'(2:n,2:n) */ - - i__1 = *n - 1; - i__2 = *n - 1; - i__3 = *n - 1; - zunglq_(&i__1, &i__2, &i__3, &a[((a_dim1) << (1)) + 2], lda, & - tau[1], &work[1], lwork, &iinfo); - } - } - } - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - return 0; - -/* End of ZUNGBR */ - -} /* zungbr_ */ - -/* Subroutine */ int zunghr_(integer *n, integer *ilo, integer *ihi, - doublecomplex *a, integer *lda, doublecomplex *tau, doublecomplex * - work, integer *lwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - - /* Local variables */ - static integer i__, j, nb, nh, iinfo; - extern /* Subroutine */ int xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static integer lwkopt; - static logical lquery; - extern /* Subroutine */ int zungqr_(integer *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZUNGHR generates a complex unitary matrix Q which is defined as the - product of IHI-ILO elementary reflectors of order N, as returned by - ZGEHRD: - - Q = H(ilo) H(ilo+1) . . . H(ihi-1). 
- - Arguments - ========= - - N (input) INTEGER - The order of the matrix Q. N >= 0. - - ILO (input) INTEGER - IHI (input) INTEGER - ILO and IHI must have the same values as in the previous call - of ZGEHRD. Q is equal to the unit matrix except in the - submatrix Q(ilo+1:ihi,ilo+1:ihi). - 1 <= ILO <= IHI <= N, if N > 0; ILO=1 and IHI=0, if N=0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the vectors which define the elementary reflectors, - as returned by ZGEHRD. - On exit, the N-by-N unitary matrix Q. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,N). - - TAU (input) COMPLEX*16 array, dimension (N-1) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by ZGEHRD. - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= IHI-ILO. - For optimum performance LWORK >= (IHI-ILO)*NB, where NB is - the optimal blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - nh = *ihi - *ilo; - lquery = *lwork == -1; - if (*n < 0) { - *info = -1; - } else if (*ilo < 1 || *ilo > max(1,*n)) { - *info = -2; - } else if (*ihi < min(*ilo,*n) || *ihi > *n) { - *info = -3; - } else if (*lda < max(1,*n)) { - *info = -5; - } else if ((*lwork < max(1,nh) && ! 
lquery)) { - *info = -8; - } - - if (*info == 0) { - nb = ilaenv_(&c__1, "ZUNGQR", " ", &nh, &nh, &nh, &c_n1, (ftnlen)6, ( - ftnlen)1); - lwkopt = max(1,nh) * nb; - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZUNGHR", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*n == 0) { - work[1].r = 1., work[1].i = 0.; - return 0; - } - -/* - Shift the vectors which define the elementary reflectors one - column to the right, and set the first ilo and the last n-ihi - rows and columns to those of the unit matrix -*/ - - i__1 = *ilo + 1; - for (j = *ihi; j >= i__1; --j) { - i__2 = j - 1; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - a[i__3].r = 0., a[i__3].i = 0.; -/* L10: */ - } - i__2 = *ihi; - for (i__ = j + 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - i__4 = i__ + (j - 1) * a_dim1; - a[i__3].r = a[i__4].r, a[i__3].i = a[i__4].i; -/* L20: */ - } - i__2 = *n; - for (i__ = *ihi + 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - a[i__3].r = 0., a[i__3].i = 0.; -/* L30: */ - } -/* L40: */ - } - i__1 = *ilo; - for (j = 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - a[i__3].r = 0., a[i__3].i = 0.; -/* L50: */ - } - i__2 = j + j * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; -/* L60: */ - } - i__1 = *n; - for (j = *ihi + 1; j <= i__1; ++j) { - i__2 = *n; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - a[i__3].r = 0., a[i__3].i = 0.; -/* L70: */ - } - i__2 = j + j * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; -/* L80: */ - } - - if (nh > 0) { - -/* Generate Q(ilo+1:ihi,ilo+1:ihi) */ - - zungqr_(&nh, &nh, &nh, &a[*ilo + 1 + (*ilo + 1) * a_dim1], lda, &tau[* - ilo], &work[1], lwork, &iinfo); - } - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - return 0; - -/* End of ZUNGHR */ - -} /* zunghr_ */ - -/* Subroutine */ int zungl2_(integer *m, integer *n, integer *k, - 
doublecomplex *a, integer *lda, doublecomplex *tau, doublecomplex * - work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3; - doublecomplex z__1, z__2; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, j, l; - extern /* Subroutine */ int zscal_(integer *, doublecomplex *, - doublecomplex *, integer *), zlarf_(char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *), xerbla_(char *, integer *), zlacgv_(integer *, doublecomplex *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZUNGL2 generates an m-by-n complex matrix Q with orthonormal rows, - which is defined as the first m rows of a product of k elementary - reflectors of order n - - Q = H(k)' . . . H(2)' H(1)' - - as returned by ZGELQF. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix Q. M >= 0. - - N (input) INTEGER - The number of columns of the matrix Q. N >= M. - - K (input) INTEGER - The number of elementary reflectors whose product defines the - matrix Q. M >= K >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the i-th row must contain the vector which defines - the elementary reflector H(i), for i = 1,2,...,k, as returned - by ZGELQF in the first k rows of its array argument A. - On exit, the m by n matrix Q. - - LDA (input) INTEGER - The first dimension of the array A. LDA >= max(1,M). - - TAU (input) COMPLEX*16 array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by ZGELQF. 
- - WORK (workspace) COMPLEX*16 array, dimension (M) - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument has an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - if (*m < 0) { - *info = -1; - } else if (*n < *m) { - *info = -2; - } else if (*k < 0 || *k > *m) { - *info = -3; - } else if (*lda < max(1,*m)) { - *info = -5; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZUNGL2", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*m <= 0) { - return 0; - } - - if (*k < *m) { - -/* Initialise rows k+1:m to rows of the unit matrix */ - - i__1 = *n; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (l = *k + 1; l <= i__2; ++l) { - i__3 = l + j * a_dim1; - a[i__3].r = 0., a[i__3].i = 0.; -/* L10: */ - } - if ((j > *k && j <= *m)) { - i__2 = j + j * a_dim1; - a[i__2].r = 1., a[i__2].i = 0.; - } -/* L20: */ - } - } - - for (i__ = *k; i__ >= 1; --i__) { - -/* Apply H(i)' to A(i:m,i:n) from the right */ - - if (i__ < *n) { - i__1 = *n - i__; - zlacgv_(&i__1, &a[i__ + (i__ + 1) * a_dim1], lda); - if (i__ < *m) { - i__1 = i__ + i__ * a_dim1; - a[i__1].r = 1., a[i__1].i = 0.; - i__1 = *m - i__; - i__2 = *n - i__ + 1; - d_cnjg(&z__1, &tau[i__]); - zlarf_("Right", &i__1, &i__2, &a[i__ + i__ * a_dim1], lda, & - z__1, &a[i__ + 1 + i__ * a_dim1], lda, &work[1]); - } - i__1 = *n - i__; - i__2 = i__; - z__1.r = -tau[i__2].r, z__1.i = -tau[i__2].i; - zscal_(&i__1, &z__1, &a[i__ + (i__ + 1) * a_dim1], lda); - i__1 = *n - i__; - zlacgv_(&i__1, &a[i__ + (i__ + 1) * a_dim1], lda); - } - i__1 = i__ + i__ * a_dim1; - d_cnjg(&z__2, &tau[i__]); - z__1.r = 1. - z__2.r, z__1.i = 0. 
- z__2.i; - a[i__1].r = z__1.r, a[i__1].i = z__1.i; - -/* Set A(i,1:i-1) to zero */ - - i__1 = i__ - 1; - for (l = 1; l <= i__1; ++l) { - i__2 = i__ + l * a_dim1; - a[i__2].r = 0., a[i__2].i = 0.; -/* L30: */ - } -/* L40: */ - } - return 0; - -/* End of ZUNGL2 */ - -} /* zungl2_ */ - -/* Subroutine */ int zunglq_(integer *m, integer *n, integer *k, - doublecomplex *a, integer *lda, doublecomplex *tau, doublecomplex * - work, integer *lwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - - /* Local variables */ - static integer i__, j, l, ib, nb, ki, kk, nx, iws, nbmin, iinfo; - extern /* Subroutine */ int zungl2_(integer *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int zlarfb_(char *, char *, char *, char *, - integer *, integer *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *); - static integer ldwork; - extern /* Subroutine */ int zlarft_(char *, char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *); - static logical lquery; - static integer lwkopt; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZUNGLQ generates an M-by-N complex matrix Q with orthonormal rows, - which is defined as the first M rows of a product of K elementary - reflectors of order N - - Q = H(k)' . . . H(2)' H(1)' - - as returned by ZGELQF. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix Q. M >= 0. - - N (input) INTEGER - The number of columns of the matrix Q. N >= M. 
- - K (input) INTEGER - The number of elementary reflectors whose product defines the - matrix Q. M >= K >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the i-th row must contain the vector which defines - the elementary reflector H(i), for i = 1,2,...,k, as returned - by ZGELQF in the first k rows of its array argument A. - On exit, the M-by-N matrix Q. - - LDA (input) INTEGER - The first dimension of the array A. LDA >= max(1,M). - - TAU (input) COMPLEX*16 array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by ZGELQF. - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= max(1,M). - For optimum performance LWORK >= M*NB, where NB is - the optimal blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit; - < 0: if INFO = -i, the i-th argument has an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - nb = ilaenv_(&c__1, "ZUNGLQ", " ", m, n, k, &c_n1, (ftnlen)6, (ftnlen)1); - lwkopt = max(1,*m) * nb; - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - lquery = *lwork == -1; - if (*m < 0) { - *info = -1; - } else if (*n < *m) { - *info = -2; - } else if (*k < 0 || *k > *m) { - *info = -3; - } else if (*lda < max(1,*m)) { - *info = -5; - } else if ((*lwork < max(1,*m) && ! 
lquery)) { - *info = -8; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZUNGLQ", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*m <= 0) { - work[1].r = 1., work[1].i = 0.; - return 0; - } - - nbmin = 2; - nx = 0; - iws = *m; - if ((nb > 1 && nb < *k)) { - -/* - Determine when to cross over from blocked to unblocked code. - - Computing MAX -*/ - i__1 = 0, i__2 = ilaenv_(&c__3, "ZUNGLQ", " ", m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)1); - nx = max(i__1,i__2); - if (nx < *k) { - -/* Determine if workspace is large enough for blocked code. */ - - ldwork = *m; - iws = ldwork * nb; - if (*lwork < iws) { - -/* - Not enough workspace to use optimal NB: reduce NB and - determine the minimum value of NB. -*/ - - nb = *lwork / ldwork; -/* Computing MAX */ - i__1 = 2, i__2 = ilaenv_(&c__2, "ZUNGLQ", " ", m, n, k, &c_n1, - (ftnlen)6, (ftnlen)1); - nbmin = max(i__1,i__2); - } - } - } - - if (((nb >= nbmin && nb < *k) && nx < *k)) { - -/* - Use blocked code after the last block. - The first kk rows are handled by the block method. -*/ - - ki = (*k - nx - 1) / nb * nb; -/* Computing MIN */ - i__1 = *k, i__2 = ki + nb; - kk = min(i__1,i__2); - -/* Set A(kk+1:m,1:kk) to zero. */ - - i__1 = kk; - for (j = 1; j <= i__1; ++j) { - i__2 = *m; - for (i__ = kk + 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - a[i__3].r = 0., a[i__3].i = 0.; -/* L10: */ - } -/* L20: */ - } - } else { - kk = 0; - } - -/* Use unblocked code for the last or only block. */ - - if (kk < *m) { - i__1 = *m - kk; - i__2 = *n - kk; - i__3 = *k - kk; - zungl2_(&i__1, &i__2, &i__3, &a[kk + 1 + (kk + 1) * a_dim1], lda, & - tau[kk + 1], &work[1], &iinfo); - } - - if (kk > 0) { - -/* Use blocked code */ - - i__1 = -nb; - for (i__ = ki + 1; i__1 < 0 ? 
i__ >= 1 : i__ <= 1; i__ += i__1) { -/* Computing MIN */ - i__2 = nb, i__3 = *k - i__ + 1; - ib = min(i__2,i__3); - if (i__ + ib <= *m) { - -/* - Form the triangular factor of the block reflector - H = H(i) H(i+1) . . . H(i+ib-1) -*/ - - i__2 = *n - i__ + 1; - zlarft_("Forward", "Rowwise", &i__2, &ib, &a[i__ + i__ * - a_dim1], lda, &tau[i__], &work[1], &ldwork); - -/* Apply H' to A(i+ib:m,i:n) from the right */ - - i__2 = *m - i__ - ib + 1; - i__3 = *n - i__ + 1; - zlarfb_("Right", "Conjugate transpose", "Forward", "Rowwise", - &i__2, &i__3, &ib, &a[i__ + i__ * a_dim1], lda, &work[ - 1], &ldwork, &a[i__ + ib + i__ * a_dim1], lda, &work[ - ib + 1], &ldwork); - } - -/* Apply H' to columns i:n of current block */ - - i__2 = *n - i__ + 1; - zungl2_(&ib, &i__2, &ib, &a[i__ + i__ * a_dim1], lda, &tau[i__], & - work[1], &iinfo); - -/* Set columns 1:i-1 of current block to zero */ - - i__2 = i__ - 1; - for (j = 1; j <= i__2; ++j) { - i__3 = i__ + ib - 1; - for (l = i__; l <= i__3; ++l) { - i__4 = l + j * a_dim1; - a[i__4].r = 0., a[i__4].i = 0.; -/* L30: */ - } -/* L40: */ - } -/* L50: */ - } - } - - work[1].r = (doublereal) iws, work[1].i = 0.; - return 0; - -/* End of ZUNGLQ */ - -} /* zunglq_ */ - -/* Subroutine */ int zungqr_(integer *m, integer *n, integer *k, - doublecomplex *a, integer *lda, doublecomplex *tau, doublecomplex * - work, integer *lwork, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, i__1, i__2, i__3, i__4; - - /* Local variables */ - static integer i__, j, l, ib, nb, ki, kk, nx, iws, nbmin, iinfo; - extern /* Subroutine */ int zung2r_(integer *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int zlarfb_(char *, char *, char *, char *, - integer *, integer *, integer *, doublecomplex *, integer *, - 
doublecomplex *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *); - static integer ldwork; - extern /* Subroutine */ int zlarft_(char *, char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *); - static integer lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZUNGQR generates an M-by-N complex matrix Q with orthonormal columns, - which is defined as the first N columns of a product of K elementary - reflectors of order M - - Q = H(1) H(2) . . . H(k) - - as returned by ZGEQRF. - - Arguments - ========= - - M (input) INTEGER - The number of rows of the matrix Q. M >= 0. - - N (input) INTEGER - The number of columns of the matrix Q. M >= N >= 0. - - K (input) INTEGER - The number of elementary reflectors whose product defines the - matrix Q. N >= K >= 0. - - A (input/output) COMPLEX*16 array, dimension (LDA,N) - On entry, the i-th column must contain the vector which - defines the elementary reflector H(i), for i = 1,2,...,k, as - returned by ZGEQRF in the first k columns of its array - argument A. - On exit, the M-by-N matrix Q. - - LDA (input) INTEGER - The first dimension of the array A. LDA >= max(1,M). - - TAU (input) COMPLEX*16 array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by ZGEQRF. - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. LWORK >= max(1,N). - For optimum performance LWORK >= N*NB, where NB is the - optimal blocksize. 
- - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument has an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - --work; - - /* Function Body */ - *info = 0; - nb = ilaenv_(&c__1, "ZUNGQR", " ", m, n, k, &c_n1, (ftnlen)6, (ftnlen)1); - lwkopt = max(1,*n) * nb; - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - lquery = *lwork == -1; - if (*m < 0) { - *info = -1; - } else if (*n < 0 || *n > *m) { - *info = -2; - } else if (*k < 0 || *k > *n) { - *info = -3; - } else if (*lda < max(1,*m)) { - *info = -5; - } else if ((*lwork < max(1,*n) && ! lquery)) { - *info = -8; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZUNGQR", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*n <= 0) { - work[1].r = 1., work[1].i = 0.; - return 0; - } - - nbmin = 2; - nx = 0; - iws = *n; - if ((nb > 1 && nb < *k)) { - -/* - Determine when to cross over from blocked to unblocked code. - - Computing MAX -*/ - i__1 = 0, i__2 = ilaenv_(&c__3, "ZUNGQR", " ", m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)1); - nx = max(i__1,i__2); - if (nx < *k) { - -/* Determine if workspace is large enough for blocked code. */ - - ldwork = *n; - iws = ldwork * nb; - if (*lwork < iws) { - -/* - Not enough workspace to use optimal NB: reduce NB and - determine the minimum value of NB. 
-*/ - - nb = *lwork / ldwork; -/* Computing MAX */ - i__1 = 2, i__2 = ilaenv_(&c__2, "ZUNGQR", " ", m, n, k, &c_n1, - (ftnlen)6, (ftnlen)1); - nbmin = max(i__1,i__2); - } - } - } - - if (((nb >= nbmin && nb < *k) && nx < *k)) { - -/* - Use blocked code after the last block. - The first kk columns are handled by the block method. -*/ - - ki = (*k - nx - 1) / nb * nb; -/* Computing MIN */ - i__1 = *k, i__2 = ki + nb; - kk = min(i__1,i__2); - -/* Set A(1:kk,kk+1:n) to zero. */ - - i__1 = *n; - for (j = kk + 1; j <= i__1; ++j) { - i__2 = kk; - for (i__ = 1; i__ <= i__2; ++i__) { - i__3 = i__ + j * a_dim1; - a[i__3].r = 0., a[i__3].i = 0.; -/* L10: */ - } -/* L20: */ - } - } else { - kk = 0; - } - -/* Use unblocked code for the last or only block. */ - - if (kk < *n) { - i__1 = *m - kk; - i__2 = *n - kk; - i__3 = *k - kk; - zung2r_(&i__1, &i__2, &i__3, &a[kk + 1 + (kk + 1) * a_dim1], lda, & - tau[kk + 1], &work[1], &iinfo); - } - - if (kk > 0) { - -/* Use blocked code */ - - i__1 = -nb; - for (i__ = ki + 1; i__1 < 0 ? i__ >= 1 : i__ <= 1; i__ += i__1) { -/* Computing MIN */ - i__2 = nb, i__3 = *k - i__ + 1; - ib = min(i__2,i__3); - if (i__ + ib <= *n) { - -/* - Form the triangular factor of the block reflector - H = H(i) H(i+1) . . . 
H(i+ib-1) -*/ - - i__2 = *m - i__ + 1; - zlarft_("Forward", "Columnwise", &i__2, &ib, &a[i__ + i__ * - a_dim1], lda, &tau[i__], &work[1], &ldwork); - -/* Apply H to A(i:m,i+ib:n) from the left */ - - i__2 = *m - i__ + 1; - i__3 = *n - i__ - ib + 1; - zlarfb_("Left", "No transpose", "Forward", "Columnwise", & - i__2, &i__3, &ib, &a[i__ + i__ * a_dim1], lda, &work[ - 1], &ldwork, &a[i__ + (i__ + ib) * a_dim1], lda, & - work[ib + 1], &ldwork); - } - -/* Apply H to rows i:m of current block */ - - i__2 = *m - i__ + 1; - zung2r_(&i__2, &ib, &ib, &a[i__ + i__ * a_dim1], lda, &tau[i__], & - work[1], &iinfo); - -/* Set rows 1:i-1 of current block to zero */ - - i__2 = i__ + ib - 1; - for (j = i__; j <= i__2; ++j) { - i__3 = i__ - 1; - for (l = 1; l <= i__3; ++l) { - i__4 = l + j * a_dim1; - a[i__4].r = 0., a[i__4].i = 0.; -/* L30: */ - } -/* L40: */ - } -/* L50: */ - } - } - - work[1].r = (doublereal) iws, work[1].i = 0.; - return 0; - -/* End of ZUNGQR */ - -} /* zungqr_ */ - -/* Subroutine */ int zunm2l_(char *side, char *trans, integer *m, integer *n, - integer *k, doublecomplex *a, integer *lda, doublecomplex *tau, - doublecomplex *c__, integer *ldc, doublecomplex *work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2, i__3; - doublecomplex z__1; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, i1, i2, i3, mi, ni, nq; - static doublecomplex aii; - static logical left; - static doublecomplex taui; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int zlarf_(char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *), xerbla_(char *, integer *); - static logical notran; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZUNM2L overwrites the general complex m-by-n matrix C with - - Q * C if SIDE = 'L' and TRANS = 'N', or - - Q'* C if SIDE = 'L' and TRANS = 'C', or - - C * Q if SIDE = 'R' and TRANS = 'N', or - - C * Q' if SIDE = 'R' and TRANS = 'C', - - where Q is a complex unitary matrix defined as the product of k - elementary reflectors - - Q = H(k) . . . H(2) H(1) - - as returned by ZGEQLF. Q is of order m if SIDE = 'L' and of order n - if SIDE = 'R'. - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'L': apply Q or Q' from the Left - = 'R': apply Q or Q' from the Right - - TRANS (input) CHARACTER*1 - = 'N': apply Q (No transpose) - = 'C': apply Q' (Conjugate transpose) - - M (input) INTEGER - The number of rows of the matrix C. M >= 0. - - N (input) INTEGER - The number of columns of the matrix C. N >= 0. - - K (input) INTEGER - The number of elementary reflectors whose product defines - the matrix Q. - If SIDE = 'L', M >= K >= 0; - if SIDE = 'R', N >= K >= 0. - - A (input) COMPLEX*16 array, dimension (LDA,K) - The i-th column must contain the vector which defines the - elementary reflector H(i), for i = 1,2,...,k, as returned by - ZGEQLF in the last k columns of its array argument A. - A is modified by the routine but restored on exit. - - LDA (input) INTEGER - The leading dimension of the array A. - If SIDE = 'L', LDA >= max(1,M); - if SIDE = 'R', LDA >= max(1,N). - - TAU (input) COMPLEX*16 array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by ZGEQLF. - - C (input/output) COMPLEX*16 array, dimension (LDC,N) - On entry, the m-by-n matrix C. - On exit, C is overwritten by Q*C or Q'*C or C*Q' or C*Q. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >= max(1,M). 
- - WORK (workspace) COMPLEX*16 array, dimension - (N) if SIDE = 'L', - (M) if SIDE = 'R' - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - left = lsame_(side, "L"); - notran = lsame_(trans, "N"); - -/* NQ is the order of Q */ - - if (left) { - nq = *m; - } else { - nq = *n; - } - if ((! left && ! lsame_(side, "R"))) { - *info = -1; - } else if ((! notran && ! lsame_(trans, "C"))) { - *info = -2; - } else if (*m < 0) { - *info = -3; - } else if (*n < 0) { - *info = -4; - } else if (*k < 0 || *k > nq) { - *info = -5; - } else if (*lda < max(1,nq)) { - *info = -7; - } else if (*ldc < max(1,*m)) { - *info = -10; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZUNM2L", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0 || *k == 0) { - return 0; - } - - if ((left && notran) || (! left && ! notran)) { - i1 = 1; - i2 = *k; - i3 = 1; - } else { - i1 = *k; - i2 = 1; - i3 = -1; - } - - if (left) { - ni = *n; - } else { - mi = *m; - } - - i__1 = i2; - i__2 = i3; - for (i__ = i1; i__2 < 0 ? 
i__ >= i__1 : i__ <= i__1; i__ += i__2) { - if (left) { - -/* H(i) or H(i)' is applied to C(1:m-k+i,1:n) */ - - mi = *m - *k + i__; - } else { - -/* H(i) or H(i)' is applied to C(1:m,1:n-k+i) */ - - ni = *n - *k + i__; - } - -/* Apply H(i) or H(i)' */ - - if (notran) { - i__3 = i__; - taui.r = tau[i__3].r, taui.i = tau[i__3].i; - } else { - d_cnjg(&z__1, &tau[i__]); - taui.r = z__1.r, taui.i = z__1.i; - } - i__3 = nq - *k + i__ + i__ * a_dim1; - aii.r = a[i__3].r, aii.i = a[i__3].i; - i__3 = nq - *k + i__ + i__ * a_dim1; - a[i__3].r = 1., a[i__3].i = 0.; - zlarf_(side, &mi, &ni, &a[i__ * a_dim1 + 1], &c__1, &taui, &c__[ - c_offset], ldc, &work[1]); - i__3 = nq - *k + i__ + i__ * a_dim1; - a[i__3].r = aii.r, a[i__3].i = aii.i; -/* L10: */ - } - return 0; - -/* End of ZUNM2L */ - -} /* zunm2l_ */ - -/* Subroutine */ int zunm2r_(char *side, char *trans, integer *m, integer *n, - integer *k, doublecomplex *a, integer *lda, doublecomplex *tau, - doublecomplex *c__, integer *ldc, doublecomplex *work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2, i__3; - doublecomplex z__1; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, i1, i2, i3, ic, jc, mi, ni, nq; - static doublecomplex aii; - static logical left; - static doublecomplex taui; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int zlarf_(char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *), xerbla_(char *, integer *); - static logical notran; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZUNM2R overwrites the general complex m-by-n matrix C with - - Q * C if SIDE = 'L' and TRANS = 'N', or - - Q'* C if SIDE = 'L' and TRANS = 'C', or - - C * Q if SIDE = 'R' and TRANS = 'N', or - - C * Q' if SIDE = 'R' and TRANS = 'C', - - where Q is a complex unitary matrix defined as the product of k - elementary reflectors - - Q = H(1) H(2) . . . H(k) - - as returned by ZGEQRF. Q is of order m if SIDE = 'L' and of order n - if SIDE = 'R'. - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'L': apply Q or Q' from the Left - = 'R': apply Q or Q' from the Right - - TRANS (input) CHARACTER*1 - = 'N': apply Q (No transpose) - = 'C': apply Q' (Conjugate transpose) - - M (input) INTEGER - The number of rows of the matrix C. M >= 0. - - N (input) INTEGER - The number of columns of the matrix C. N >= 0. - - K (input) INTEGER - The number of elementary reflectors whose product defines - the matrix Q. - If SIDE = 'L', M >= K >= 0; - if SIDE = 'R', N >= K >= 0. - - A (input) COMPLEX*16 array, dimension (LDA,K) - The i-th column must contain the vector which defines the - elementary reflector H(i), for i = 1,2,...,k, as returned by - ZGEQRF in the first k columns of its array argument A. - A is modified by the routine but restored on exit. - - LDA (input) INTEGER - The leading dimension of the array A. - If SIDE = 'L', LDA >= max(1,M); - if SIDE = 'R', LDA >= max(1,N). - - TAU (input) COMPLEX*16 array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by ZGEQRF. - - C (input/output) COMPLEX*16 array, dimension (LDC,N) - On entry, the m-by-n matrix C. - On exit, C is overwritten by Q*C or Q'*C or C*Q' or C*Q. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >= max(1,M). 
- - WORK (workspace) COMPLEX*16 array, dimension - (N) if SIDE = 'L', - (M) if SIDE = 'R' - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - left = lsame_(side, "L"); - notran = lsame_(trans, "N"); - -/* NQ is the order of Q */ - - if (left) { - nq = *m; - } else { - nq = *n; - } - if ((! left && ! lsame_(side, "R"))) { - *info = -1; - } else if ((! notran && ! lsame_(trans, "C"))) { - *info = -2; - } else if (*m < 0) { - *info = -3; - } else if (*n < 0) { - *info = -4; - } else if (*k < 0 || *k > nq) { - *info = -5; - } else if (*lda < max(1,nq)) { - *info = -7; - } else if (*ldc < max(1,*m)) { - *info = -10; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZUNM2R", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0 || *k == 0) { - return 0; - } - - if ((left && ! notran) || (! left && notran)) { - i1 = 1; - i2 = *k; - i3 = 1; - } else { - i1 = *k; - i2 = 1; - i3 = -1; - } - - if (left) { - ni = *n; - jc = 1; - } else { - mi = *m; - ic = 1; - } - - i__1 = i2; - i__2 = i3; - for (i__ = i1; i__2 < 0 ? 
i__ >= i__1 : i__ <= i__1; i__ += i__2) { - if (left) { - -/* H(i) or H(i)' is applied to C(i:m,1:n) */ - - mi = *m - i__ + 1; - ic = i__; - } else { - -/* H(i) or H(i)' is applied to C(1:m,i:n) */ - - ni = *n - i__ + 1; - jc = i__; - } - -/* Apply H(i) or H(i)' */ - - if (notran) { - i__3 = i__; - taui.r = tau[i__3].r, taui.i = tau[i__3].i; - } else { - d_cnjg(&z__1, &tau[i__]); - taui.r = z__1.r, taui.i = z__1.i; - } - i__3 = i__ + i__ * a_dim1; - aii.r = a[i__3].r, aii.i = a[i__3].i; - i__3 = i__ + i__ * a_dim1; - a[i__3].r = 1., a[i__3].i = 0.; - zlarf_(side, &mi, &ni, &a[i__ + i__ * a_dim1], &c__1, &taui, &c__[ic - + jc * c_dim1], ldc, &work[1]); - i__3 = i__ + i__ * a_dim1; - a[i__3].r = aii.r, a[i__3].i = aii.i; -/* L10: */ - } - return 0; - -/* End of ZUNM2R */ - -} /* zunm2r_ */ - -/* Subroutine */ int zunmbr_(char *vect, char *side, char *trans, integer *m, - integer *n, integer *k, doublecomplex *a, integer *lda, doublecomplex - *tau, doublecomplex *c__, integer *ldc, doublecomplex *work, integer * - lwork, integer *info) -{ - /* System generated locals */ - address a__1[2]; - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2, i__3[2]; - char ch__1[2]; - - /* Builtin functions */ - /* Subroutine */ int s_cat(char *, char **, integer *, integer *, ftnlen); - - /* Local variables */ - static integer i1, i2, nb, mi, ni, nq, nw; - static logical left; - extern logical lsame_(char *, char *); - static integer iinfo; - extern /* Subroutine */ int xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static logical notran, applyq; - static char transt[1]; - static integer lwkopt; - static logical lquery; - extern /* Subroutine */ int zunmlq_(char *, char *, integer *, integer *, - integer *, doublecomplex *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *, integer *), zunmqr_(char *, char *, integer *, integer *, - integer *, 
doublecomplex *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - If VECT = 'Q', ZUNMBR overwrites the general complex M-by-N matrix C - with - SIDE = 'L' SIDE = 'R' - TRANS = 'N': Q * C C * Q - TRANS = 'C': Q**H * C C * Q**H - - If VECT = 'P', ZUNMBR overwrites the general complex M-by-N matrix C - with - SIDE = 'L' SIDE = 'R' - TRANS = 'N': P * C C * P - TRANS = 'C': P**H * C C * P**H - - Here Q and P**H are the unitary matrices determined by ZGEBRD when - reducing a complex matrix A to bidiagonal form: A = Q * B * P**H. Q - and P**H are defined as products of elementary reflectors H(i) and - G(i) respectively. - - Let nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Thus nq is the - order of the unitary matrix Q or P**H that is applied. - - If VECT = 'Q', A is assumed to have been an NQ-by-K matrix: - if nq >= k, Q = H(1) H(2) . . . H(k); - if nq < k, Q = H(1) H(2) . . . H(nq-1). - - If VECT = 'P', A is assumed to have been a K-by-NQ matrix: - if k < nq, P = G(1) G(2) . . . G(k); - if k >= nq, P = G(1) G(2) . . . G(nq-1). - - Arguments - ========= - - VECT (input) CHARACTER*1 - = 'Q': apply Q or Q**H; - = 'P': apply P or P**H. - - SIDE (input) CHARACTER*1 - = 'L': apply Q, Q**H, P or P**H from the Left; - = 'R': apply Q, Q**H, P or P**H from the Right. - - TRANS (input) CHARACTER*1 - = 'N': No transpose, apply Q or P; - = 'C': Conjugate transpose, apply Q**H or P**H. - - M (input) INTEGER - The number of rows of the matrix C. M >= 0. - - N (input) INTEGER - The number of columns of the matrix C. N >= 0. - - K (input) INTEGER - If VECT = 'Q', the number of columns in the original - matrix reduced by ZGEBRD. - If VECT = 'P', the number of rows in the original - matrix reduced by ZGEBRD. - K >= 0. 
- - A (input) COMPLEX*16 array, dimension - (LDA,min(nq,K)) if VECT = 'Q' - (LDA,nq) if VECT = 'P' - The vectors which define the elementary reflectors H(i) and - G(i), whose products determine the matrices Q and P, as - returned by ZGEBRD. - - LDA (input) INTEGER - The leading dimension of the array A. - If VECT = 'Q', LDA >= max(1,nq); - if VECT = 'P', LDA >= max(1,min(nq,K)). - - TAU (input) COMPLEX*16 array, dimension (min(nq,K)) - TAU(i) must contain the scalar factor of the elementary - reflector H(i) or G(i) which determines Q or P, as returned - by ZGEBRD in the array argument TAUQ or TAUP. - - C (input/output) COMPLEX*16 array, dimension (LDC,N) - On entry, the M-by-N matrix C. - On exit, C is overwritten by Q*C or Q**H*C or C*Q**H or C*Q - or P*C or P**H*C or C*P or C*P**H. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >= max(1,M). - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. - If SIDE = 'L', LWORK >= max(1,N); - if SIDE = 'R', LWORK >= max(1,M). - For optimum performance LWORK >= N*NB if SIDE = 'L', and - LWORK >= M*NB if SIDE = 'R', where NB is the optimal - blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. 
- - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - applyq = lsame_(vect, "Q"); - left = lsame_(side, "L"); - notran = lsame_(trans, "N"); - lquery = *lwork == -1; - -/* NQ is the order of Q or P and NW is the minimum dimension of WORK */ - - if (left) { - nq = *m; - nw = *n; - } else { - nq = *n; - nw = *m; - } - if ((! applyq && ! lsame_(vect, "P"))) { - *info = -1; - } else if ((! left && ! lsame_(side, "R"))) { - *info = -2; - } else if ((! notran && ! lsame_(trans, "C"))) { - *info = -3; - } else if (*m < 0) { - *info = -4; - } else if (*n < 0) { - *info = -5; - } else if (*k < 0) { - *info = -6; - } else /* if(complicated condition) */ { -/* Computing MAX */ - i__1 = 1, i__2 = min(nq,*k); - if ((applyq && *lda < max(1,nq)) || (! applyq && *lda < max(i__1,i__2) - )) { - *info = -8; - } else if (*ldc < max(1,*m)) { - *info = -11; - } else if ((*lwork < max(1,nw) && ! 
lquery)) { - *info = -13; - } - } - - if (*info == 0) { - if (applyq) { - if (left) { -/* Writing concatenation */ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = *m - 1; - i__2 = *m - 1; - nb = ilaenv_(&c__1, "ZUNMQR", ch__1, &i__1, n, &i__2, &c_n1, ( - ftnlen)6, (ftnlen)2); - } else { -/* Writing concatenation */ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = *n - 1; - i__2 = *n - 1; - nb = ilaenv_(&c__1, "ZUNMQR", ch__1, m, &i__1, &i__2, &c_n1, ( - ftnlen)6, (ftnlen)2); - } - } else { - if (left) { -/* Writing concatenation */ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = *m - 1; - i__2 = *m - 1; - nb = ilaenv_(&c__1, "ZUNMLQ", ch__1, &i__1, n, &i__2, &c_n1, ( - ftnlen)6, (ftnlen)2); - } else { -/* Writing concatenation */ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = *n - 1; - i__2 = *n - 1; - nb = ilaenv_(&c__1, "ZUNMLQ", ch__1, m, &i__1, &i__2, &c_n1, ( - ftnlen)6, (ftnlen)2); - } - } - lwkopt = max(1,nw) * nb; - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZUNMBR", &i__1); - return 0; - } else if (lquery) { - } - -/* Quick return if possible */ - - work[1].r = 1., work[1].i = 0.; - if (*m == 0 || *n == 0) { - return 0; - } - - if (applyq) { - -/* Apply Q */ - - if (nq >= *k) { - -/* Q was determined by a call to ZGEBRD with nq >= k */ - - zunmqr_(side, trans, m, n, k, &a[a_offset], lda, &tau[1], &c__[ - c_offset], ldc, &work[1], lwork, &iinfo); - } else if (nq > 1) { - -/* Q was determined by a call to ZGEBRD with nq < k */ - - if (left) { - mi = *m - 1; - ni = *n; - i1 = 2; - i2 = 1; - } else { - mi = *m; - ni = *n - 1; - i1 = 1; - i2 = 2; - } - i__1 = nq - 1; - zunmqr_(side, trans, &mi, &ni, &i__1, &a[a_dim1 + 2], 
lda, &tau[1] - , &c__[i1 + i2 * c_dim1], ldc, &work[1], lwork, &iinfo); - } - } else { - -/* Apply P */ - - if (notran) { - *(unsigned char *)transt = 'C'; - } else { - *(unsigned char *)transt = 'N'; - } - if (nq > *k) { - -/* P was determined by a call to ZGEBRD with nq > k */ - - zunmlq_(side, transt, m, n, k, &a[a_offset], lda, &tau[1], &c__[ - c_offset], ldc, &work[1], lwork, &iinfo); - } else if (nq > 1) { - -/* P was determined by a call to ZGEBRD with nq <= k */ - - if (left) { - mi = *m - 1; - ni = *n; - i1 = 2; - i2 = 1; - } else { - mi = *m; - ni = *n - 1; - i1 = 1; - i2 = 2; - } - i__1 = nq - 1; - zunmlq_(side, transt, &mi, &ni, &i__1, &a[((a_dim1) << (1)) + 1], - lda, &tau[1], &c__[i1 + i2 * c_dim1], ldc, &work[1], - lwork, &iinfo); - } - } - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - return 0; - -/* End of ZUNMBR */ - -} /* zunmbr_ */ - -/* Subroutine */ int zunml2_(char *side, char *trans, integer *m, integer *n, - integer *k, doublecomplex *a, integer *lda, doublecomplex *tau, - doublecomplex *c__, integer *ldc, doublecomplex *work, integer *info) -{ - /* System generated locals */ - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2, i__3; - doublecomplex z__1; - - /* Builtin functions */ - void d_cnjg(doublecomplex *, doublecomplex *); - - /* Local variables */ - static integer i__, i1, i2, i3, ic, jc, mi, ni, nq; - static doublecomplex aii; - static logical left; - static doublecomplex taui; - extern logical lsame_(char *, char *); - extern /* Subroutine */ int zlarf_(char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *, doublecomplex *), xerbla_(char *, integer *), zlacgv_(integer *, doublecomplex *, integer *); - static logical notran; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 - - - Purpose - ======= - - ZUNML2 overwrites the general complex m-by-n matrix C with - - Q * C if SIDE = 'L' and TRANS = 'N', or - - Q'* C if SIDE = 'L' and TRANS = 'C', or - - C * Q if SIDE = 'R' and TRANS = 'N', or - - C * Q' if SIDE = 'R' and TRANS = 'C', - - where Q is a complex unitary matrix defined as the product of k - elementary reflectors - - Q = H(k)' . . . H(2)' H(1)' - - as returned by ZGELQF. Q is of order m if SIDE = 'L' and of order n - if SIDE = 'R'. - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'L': apply Q or Q' from the Left - = 'R': apply Q or Q' from the Right - - TRANS (input) CHARACTER*1 - = 'N': apply Q (No transpose) - = 'C': apply Q' (Conjugate transpose) - - M (input) INTEGER - The number of rows of the matrix C. M >= 0. - - N (input) INTEGER - The number of columns of the matrix C. N >= 0. - - K (input) INTEGER - The number of elementary reflectors whose product defines - the matrix Q. - If SIDE = 'L', M >= K >= 0; - if SIDE = 'R', N >= K >= 0. - - A (input) COMPLEX*16 array, dimension - (LDA,M) if SIDE = 'L', - (LDA,N) if SIDE = 'R' - The i-th row must contain the vector which defines the - elementary reflector H(i), for i = 1,2,...,k, as returned by - ZGELQF in the first k rows of its array argument A. - A is modified by the routine but restored on exit. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,K). - - TAU (input) COMPLEX*16 array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by ZGELQF. - - C (input/output) COMPLEX*16 array, dimension (LDC,N) - On entry, the m-by-n matrix C. - On exit, C is overwritten by Q*C or Q'*C or C*Q' or C*Q. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >= max(1,M). 
- - WORK (workspace) COMPLEX*16 array, dimension - (N) if SIDE = 'L', - (M) if SIDE = 'R' - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - left = lsame_(side, "L"); - notran = lsame_(trans, "N"); - -/* NQ is the order of Q */ - - if (left) { - nq = *m; - } else { - nq = *n; - } - if ((! left && ! lsame_(side, "R"))) { - *info = -1; - } else if ((! notran && ! lsame_(trans, "C"))) { - *info = -2; - } else if (*m < 0) { - *info = -3; - } else if (*n < 0) { - *info = -4; - } else if (*k < 0 || *k > nq) { - *info = -5; - } else if (*lda < max(1,*k)) { - *info = -7; - } else if (*ldc < max(1,*m)) { - *info = -10; - } - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZUNML2", &i__1); - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0 || *k == 0) { - return 0; - } - - if ((left && notran) || (! left && ! notran)) { - i1 = 1; - i2 = *k; - i3 = 1; - } else { - i1 = *k; - i2 = 1; - i3 = -1; - } - - if (left) { - ni = *n; - jc = 1; - } else { - mi = *m; - ic = 1; - } - - i__1 = i2; - i__2 = i3; - for (i__ = i1; i__2 < 0 ? 
i__ >= i__1 : i__ <= i__1; i__ += i__2) { - if (left) { - -/* H(i) or H(i)' is applied to C(i:m,1:n) */ - - mi = *m - i__ + 1; - ic = i__; - } else { - -/* H(i) or H(i)' is applied to C(1:m,i:n) */ - - ni = *n - i__ + 1; - jc = i__; - } - -/* Apply H(i) or H(i)' */ - - if (notran) { - d_cnjg(&z__1, &tau[i__]); - taui.r = z__1.r, taui.i = z__1.i; - } else { - i__3 = i__; - taui.r = tau[i__3].r, taui.i = tau[i__3].i; - } - if (i__ < nq) { - i__3 = nq - i__; - zlacgv_(&i__3, &a[i__ + (i__ + 1) * a_dim1], lda); - } - i__3 = i__ + i__ * a_dim1; - aii.r = a[i__3].r, aii.i = a[i__3].i; - i__3 = i__ + i__ * a_dim1; - a[i__3].r = 1., a[i__3].i = 0.; - zlarf_(side, &mi, &ni, &a[i__ + i__ * a_dim1], lda, &taui, &c__[ic + - jc * c_dim1], ldc, &work[1]); - i__3 = i__ + i__ * a_dim1; - a[i__3].r = aii.r, a[i__3].i = aii.i; - if (i__ < nq) { - i__3 = nq - i__; - zlacgv_(&i__3, &a[i__ + (i__ + 1) * a_dim1], lda); - } -/* L10: */ - } - return 0; - -/* End of ZUNML2 */ - -} /* zunml2_ */ - -/* Subroutine */ int zunmlq_(char *side, char *trans, integer *m, integer *n, - integer *k, doublecomplex *a, integer *lda, doublecomplex *tau, - doublecomplex *c__, integer *ldc, doublecomplex *work, integer *lwork, - integer *info) -{ - /* System generated locals */ - address a__1[2]; - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2, i__3[2], i__4, - i__5; - char ch__1[2]; - - /* Builtin functions */ - /* Subroutine */ int s_cat(char *, char **, integer *, integer *, ftnlen); - - /* Local variables */ - static integer i__; - static doublecomplex t[4160] /* was [65][64] */; - static integer i1, i2, i3, ib, ic, jc, nb, mi, ni, nq, nw, iws; - static logical left; - extern logical lsame_(char *, char *); - static integer nbmin, iinfo; - extern /* Subroutine */ int zunml2_(char *, char *, integer *, integer *, - integer *, doublecomplex *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, 
char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int zlarfb_(char *, char *, char *, char *, - integer *, integer *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *); - static logical notran; - static integer ldwork; - extern /* Subroutine */ int zlarft_(char *, char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *); - static char transt[1]; - static integer lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZUNMLQ overwrites the general complex M-by-N matrix C with - - SIDE = 'L' SIDE = 'R' - TRANS = 'N': Q * C C * Q - TRANS = 'C': Q**H * C C * Q**H - - where Q is a complex unitary matrix defined as the product of k - elementary reflectors - - Q = H(k)' . . . H(2)' H(1)' - - as returned by ZGELQF. Q is of order M if SIDE = 'L' and of order N - if SIDE = 'R'. - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'L': apply Q or Q**H from the Left; - = 'R': apply Q or Q**H from the Right. - - TRANS (input) CHARACTER*1 - = 'N': No transpose, apply Q; - = 'C': Conjugate transpose, apply Q**H. - - M (input) INTEGER - The number of rows of the matrix C. M >= 0. - - N (input) INTEGER - The number of columns of the matrix C. N >= 0. - - K (input) INTEGER - The number of elementary reflectors whose product defines - the matrix Q. - If SIDE = 'L', M >= K >= 0; - if SIDE = 'R', N >= K >= 0. - - A (input) COMPLEX*16 array, dimension - (LDA,M) if SIDE = 'L', - (LDA,N) if SIDE = 'R' - The i-th row must contain the vector which defines the - elementary reflector H(i), for i = 1,2,...,k, as returned by - ZGELQF in the first k rows of its array argument A. 
- A is modified by the routine but restored on exit. - - LDA (input) INTEGER - The leading dimension of the array A. LDA >= max(1,K). - - TAU (input) COMPLEX*16 array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by ZGELQF. - - C (input/output) COMPLEX*16 array, dimension (LDC,N) - On entry, the M-by-N matrix C. - On exit, C is overwritten by Q*C or Q**H*C or C*Q**H or C*Q. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >= max(1,M). - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. - If SIDE = 'L', LWORK >= max(1,N); - if SIDE = 'R', LWORK >= max(1,M). - For optimum performance LWORK >= N*NB if SIDE 'L', and - LWORK >= M*NB if SIDE = 'R', where NB is the optimal - blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - left = lsame_(side, "L"); - notran = lsame_(trans, "N"); - lquery = *lwork == -1; - -/* NQ is the order of Q and NW is the minimum dimension of WORK */ - - if (left) { - nq = *m; - nw = *n; - } else { - nq = *n; - nw = *m; - } - if ((! left && ! lsame_(side, "R"))) { - *info = -1; - } else if ((! notran && ! 
lsame_(trans, "C"))) { - *info = -2; - } else if (*m < 0) { - *info = -3; - } else if (*n < 0) { - *info = -4; - } else if (*k < 0 || *k > nq) { - *info = -5; - } else if (*lda < max(1,*k)) { - *info = -7; - } else if (*ldc < max(1,*m)) { - *info = -10; - } else if ((*lwork < max(1,nw) && ! lquery)) { - *info = -12; - } - - if (*info == 0) { - -/* - Determine the block size. NB may be at most NBMAX, where NBMAX - is used to define the local array T. - - Computing MIN - Writing concatenation -*/ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = 64, i__2 = ilaenv_(&c__1, "ZUNMLQ", ch__1, m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)2); - nb = min(i__1,i__2); - lwkopt = max(1,nw) * nb; - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZUNMLQ", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0 || *k == 0) { - work[1].r = 1., work[1].i = 0.; - return 0; - } - - nbmin = 2; - ldwork = nw; - if ((nb > 1 && nb < *k)) { - iws = nw * nb; - if (*lwork < iws) { - nb = *lwork / ldwork; -/* - Computing MAX - Writing concatenation -*/ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = 2, i__2 = ilaenv_(&c__2, "ZUNMLQ", ch__1, m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)2); - nbmin = max(i__1,i__2); - } - } else { - iws = nw; - } - - if (nb < nbmin || nb >= *k) { - -/* Use unblocked code */ - - zunml2_(side, trans, m, n, k, &a[a_offset], lda, &tau[1], &c__[ - c_offset], ldc, &work[1], &iinfo); - } else { - -/* Use blocked code */ - - if ((left && notran) || (! left && ! 
notran)) { - i1 = 1; - i2 = *k; - i3 = nb; - } else { - i1 = (*k - 1) / nb * nb + 1; - i2 = 1; - i3 = -nb; - } - - if (left) { - ni = *n; - jc = 1; - } else { - mi = *m; - ic = 1; - } - - if (notran) { - *(unsigned char *)transt = 'C'; - } else { - *(unsigned char *)transt = 'N'; - } - - i__1 = i2; - i__2 = i3; - for (i__ = i1; i__2 < 0 ? i__ >= i__1 : i__ <= i__1; i__ += i__2) { -/* Computing MIN */ - i__4 = nb, i__5 = *k - i__ + 1; - ib = min(i__4,i__5); - -/* - Form the triangular factor of the block reflector - H = H(i) H(i+1) . . . H(i+ib-1) -*/ - - i__4 = nq - i__ + 1; - zlarft_("Forward", "Rowwise", &i__4, &ib, &a[i__ + i__ * a_dim1], - lda, &tau[i__], t, &c__65); - if (left) { - -/* H or H' is applied to C(i:m,1:n) */ - - mi = *m - i__ + 1; - ic = i__; - } else { - -/* H or H' is applied to C(1:m,i:n) */ - - ni = *n - i__ + 1; - jc = i__; - } - -/* Apply H or H' */ - - zlarfb_(side, transt, "Forward", "Rowwise", &mi, &ni, &ib, &a[i__ - + i__ * a_dim1], lda, t, &c__65, &c__[ic + jc * c_dim1], - ldc, &work[1], &ldwork); -/* L10: */ - } - } - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - return 0; - -/* End of ZUNMLQ */ - -} /* zunmlq_ */ - -/* Subroutine */ int zunmql_(char *side, char *trans, integer *m, integer *n, - integer *k, doublecomplex *a, integer *lda, doublecomplex *tau, - doublecomplex *c__, integer *ldc, doublecomplex *work, integer *lwork, - integer *info) -{ - /* System generated locals */ - address a__1[2]; - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2, i__3[2], i__4, - i__5; - char ch__1[2]; - - /* Builtin functions */ - /* Subroutine */ int s_cat(char *, char **, integer *, integer *, ftnlen); - - /* Local variables */ - static integer i__; - static doublecomplex t[4160] /* was [65][64] */; - static integer i1, i2, i3, ib, nb, mi, ni, nq, nw, iws; - static logical left; - extern logical lsame_(char *, char *); - static integer nbmin, iinfo; - extern /* Subroutine */ int zunm2l_(char *, char *, integer *, integer *, - integer 
*, doublecomplex *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int zlarfb_(char *, char *, char *, char *, - integer *, integer *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *); - static logical notran; - static integer ldwork; - extern /* Subroutine */ int zlarft_(char *, char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *); - static integer lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZUNMQL overwrites the general complex M-by-N matrix C with - - SIDE = 'L' SIDE = 'R' - TRANS = 'N': Q * C C * Q - TRANS = 'C': Q**H * C C * Q**H - - where Q is a complex unitary matrix defined as the product of k - elementary reflectors - - Q = H(k) . . . H(2) H(1) - - as returned by ZGEQLF. Q is of order M if SIDE = 'L' and of order N - if SIDE = 'R'. - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'L': apply Q or Q**H from the Left; - = 'R': apply Q or Q**H from the Right. - - TRANS (input) CHARACTER*1 - = 'N': No transpose, apply Q; - = 'C': Conjugate transpose, apply Q**H. - - M (input) INTEGER - The number of rows of the matrix C. M >= 0. - - N (input) INTEGER - The number of columns of the matrix C. N >= 0. - - K (input) INTEGER - The number of elementary reflectors whose product defines - the matrix Q. - If SIDE = 'L', M >= K >= 0; - if SIDE = 'R', N >= K >= 0. 
- - A (input) COMPLEX*16 array, dimension (LDA,K) - The i-th column must contain the vector which defines the - elementary reflector H(i), for i = 1,2,...,k, as returned by - ZGEQLF in the last k columns of its array argument A. - A is modified by the routine but restored on exit. - - LDA (input) INTEGER - The leading dimension of the array A. - If SIDE = 'L', LDA >= max(1,M); - if SIDE = 'R', LDA >= max(1,N). - - TAU (input) COMPLEX*16 array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by ZGEQLF. - - C (input/output) COMPLEX*16 array, dimension (LDC,N) - On entry, the M-by-N matrix C. - On exit, C is overwritten by Q*C or Q**H*C or C*Q**H or C*Q. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >= max(1,M). - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. - If SIDE = 'L', LWORK >= max(1,N); - if SIDE = 'R', LWORK >= max(1,M). - For optimum performance LWORK >= N*NB if SIDE = 'L', and - LWORK >= M*NB if SIDE = 'R', where NB is the optimal - blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. 
- - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - left = lsame_(side, "L"); - notran = lsame_(trans, "N"); - lquery = *lwork == -1; - -/* NQ is the order of Q and NW is the minimum dimension of WORK */ - - if (left) { - nq = *m; - nw = *n; - } else { - nq = *n; - nw = *m; - } - if ((! left && ! lsame_(side, "R"))) { - *info = -1; - } else if ((! notran && ! lsame_(trans, "C"))) { - *info = -2; - } else if (*m < 0) { - *info = -3; - } else if (*n < 0) { - *info = -4; - } else if (*k < 0 || *k > nq) { - *info = -5; - } else if (*lda < max(1,nq)) { - *info = -7; - } else if (*ldc < max(1,*m)) { - *info = -10; - } else if ((*lwork < max(1,nw) && ! lquery)) { - *info = -12; - } - - if (*info == 0) { - -/* - Determine the block size. NB may be at most NBMAX, where NBMAX - is used to define the local array T. 
- - Computing MIN - Writing concatenation -*/ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = 64, i__2 = ilaenv_(&c__1, "ZUNMQL", ch__1, m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)2); - nb = min(i__1,i__2); - lwkopt = max(1,nw) * nb; - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZUNMQL", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0 || *k == 0) { - work[1].r = 1., work[1].i = 0.; - return 0; - } - - nbmin = 2; - ldwork = nw; - if ((nb > 1 && nb < *k)) { - iws = nw * nb; - if (*lwork < iws) { - nb = *lwork / ldwork; -/* - Computing MAX - Writing concatenation -*/ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = 2, i__2 = ilaenv_(&c__2, "ZUNMQL", ch__1, m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)2); - nbmin = max(i__1,i__2); - } - } else { - iws = nw; - } - - if (nb < nbmin || nb >= *k) { - -/* Use unblocked code */ - - zunm2l_(side, trans, m, n, k, &a[a_offset], lda, &tau[1], &c__[ - c_offset], ldc, &work[1], &iinfo); - } else { - -/* Use blocked code */ - - if ((left && notran) || (! left && ! notran)) { - i1 = 1; - i2 = *k; - i3 = nb; - } else { - i1 = (*k - 1) / nb * nb + 1; - i2 = 1; - i3 = -nb; - } - - if (left) { - ni = *n; - } else { - mi = *m; - } - - i__1 = i2; - i__2 = i3; - for (i__ = i1; i__2 < 0 ? i__ >= i__1 : i__ <= i__1; i__ += i__2) { -/* Computing MIN */ - i__4 = nb, i__5 = *k - i__ + 1; - ib = min(i__4,i__5); - -/* - Form the triangular factor of the block reflector - H = H(i+ib-1) . . . 
H(i+1) H(i) -*/ - - i__4 = nq - *k + i__ + ib - 1; - zlarft_("Backward", "Columnwise", &i__4, &ib, &a[i__ * a_dim1 + 1] - , lda, &tau[i__], t, &c__65); - if (left) { - -/* H or H' is applied to C(1:m-k+i+ib-1,1:n) */ - - mi = *m - *k + i__ + ib - 1; - } else { - -/* H or H' is applied to C(1:m,1:n-k+i+ib-1) */ - - ni = *n - *k + i__ + ib - 1; - } - -/* Apply H or H' */ - - zlarfb_(side, trans, "Backward", "Columnwise", &mi, &ni, &ib, &a[ - i__ * a_dim1 + 1], lda, t, &c__65, &c__[c_offset], ldc, & - work[1], &ldwork); -/* L10: */ - } - } - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - return 0; - -/* End of ZUNMQL */ - -} /* zunmql_ */ - -/* Subroutine */ int zunmqr_(char *side, char *trans, integer *m, integer *n, - integer *k, doublecomplex *a, integer *lda, doublecomplex *tau, - doublecomplex *c__, integer *ldc, doublecomplex *work, integer *lwork, - integer *info) -{ - /* System generated locals */ - address a__1[2]; - integer a_dim1, a_offset, c_dim1, c_offset, i__1, i__2, i__3[2], i__4, - i__5; - char ch__1[2]; - - /* Builtin functions */ - /* Subroutine */ int s_cat(char *, char **, integer *, integer *, ftnlen); - - /* Local variables */ - static integer i__; - static doublecomplex t[4160] /* was [65][64] */; - static integer i1, i2, i3, ib, ic, jc, nb, mi, ni, nq, nw, iws; - static logical left; - extern logical lsame_(char *, char *); - static integer nbmin, iinfo; - extern /* Subroutine */ int zunm2r_(char *, char *, integer *, integer *, - integer *, doublecomplex *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *), xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - extern /* Subroutine */ int zlarfb_(char *, char *, char *, char *, - integer *, integer *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *, doublecomplex *, integer *, - doublecomplex *, integer *); - static logical notran; - static 
integer ldwork; - extern /* Subroutine */ int zlarft_(char *, char *, integer *, integer *, - doublecomplex *, integer *, doublecomplex *, doublecomplex *, - integer *); - static integer lwkopt; - static logical lquery; - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZUNMQR overwrites the general complex M-by-N matrix C with - - SIDE = 'L' SIDE = 'R' - TRANS = 'N': Q * C C * Q - TRANS = 'C': Q**H * C C * Q**H - - where Q is a complex unitary matrix defined as the product of k - elementary reflectors - - Q = H(1) H(2) . . . H(k) - - as returned by ZGEQRF. Q is of order M if SIDE = 'L' and of order N - if SIDE = 'R'. - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'L': apply Q or Q**H from the Left; - = 'R': apply Q or Q**H from the Right. - - TRANS (input) CHARACTER*1 - = 'N': No transpose, apply Q; - = 'C': Conjugate transpose, apply Q**H. - - M (input) INTEGER - The number of rows of the matrix C. M >= 0. - - N (input) INTEGER - The number of columns of the matrix C. N >= 0. - - K (input) INTEGER - The number of elementary reflectors whose product defines - the matrix Q. - If SIDE = 'L', M >= K >= 0; - if SIDE = 'R', N >= K >= 0. - - A (input) COMPLEX*16 array, dimension (LDA,K) - The i-th column must contain the vector which defines the - elementary reflector H(i), for i = 1,2,...,k, as returned by - ZGEQRF in the first k columns of its array argument A. - A is modified by the routine but restored on exit. - - LDA (input) INTEGER - The leading dimension of the array A. - If SIDE = 'L', LDA >= max(1,M); - if SIDE = 'R', LDA >= max(1,N). - - TAU (input) COMPLEX*16 array, dimension (K) - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by ZGEQRF. - - C (input/output) COMPLEX*16 array, dimension (LDC,N) - On entry, the M-by-N matrix C. 
- On exit, C is overwritten by Q*C or Q**H*C or C*Q**H or C*Q. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >= max(1,M). - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. - - LWORK (input) INTEGER - The dimension of the array WORK. - If SIDE = 'L', LWORK >= max(1,N); - if SIDE = 'R', LWORK >= max(1,M). - For optimum performance LWORK >= N*NB if SIDE = 'L', and - LWORK >= M*NB if SIDE = 'R', where NB is the optimal - blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - left = lsame_(side, "L"); - notran = lsame_(trans, "N"); - lquery = *lwork == -1; - -/* NQ is the order of Q and NW is the minimum dimension of WORK */ - - if (left) { - nq = *m; - nw = *n; - } else { - nq = *n; - nw = *m; - } - if ((! left && ! lsame_(side, "R"))) { - *info = -1; - } else if ((! notran && ! lsame_(trans, "C"))) { - *info = -2; - } else if (*m < 0) { - *info = -3; - } else if (*n < 0) { - *info = -4; - } else if (*k < 0 || *k > nq) { - *info = -5; - } else if (*lda < max(1,nq)) { - *info = -7; - } else if (*ldc < max(1,*m)) { - *info = -10; - } else if ((*lwork < max(1,nw) && ! lquery)) { - *info = -12; - } - - if (*info == 0) { - -/* - Determine the block size. NB may be at most NBMAX, where NBMAX - is used to define the local array T. 
- - Computing MIN - Writing concatenation -*/ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = 64, i__2 = ilaenv_(&c__1, "ZUNMQR", ch__1, m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)2); - nb = min(i__1,i__2); - lwkopt = max(1,nw) * nb; - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - } - - if (*info != 0) { - i__1 = -(*info); - xerbla_("ZUNMQR", &i__1); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0 || *k == 0) { - work[1].r = 1., work[1].i = 0.; - return 0; - } - - nbmin = 2; - ldwork = nw; - if ((nb > 1 && nb < *k)) { - iws = nw * nb; - if (*lwork < iws) { - nb = *lwork / ldwork; -/* - Computing MAX - Writing concatenation -*/ - i__3[0] = 1, a__1[0] = side; - i__3[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__3, &c__2, (ftnlen)2); - i__1 = 2, i__2 = ilaenv_(&c__2, "ZUNMQR", ch__1, m, n, k, &c_n1, ( - ftnlen)6, (ftnlen)2); - nbmin = max(i__1,i__2); - } - } else { - iws = nw; - } - - if (nb < nbmin || nb >= *k) { - -/* Use unblocked code */ - - zunm2r_(side, trans, m, n, k, &a[a_offset], lda, &tau[1], &c__[ - c_offset], ldc, &work[1], &iinfo); - } else { - -/* Use blocked code */ - - if ((left && ! notran) || (! left && notran)) { - i1 = 1; - i2 = *k; - i3 = nb; - } else { - i1 = (*k - 1) / nb * nb + 1; - i2 = 1; - i3 = -nb; - } - - if (left) { - ni = *n; - jc = 1; - } else { - mi = *m; - ic = 1; - } - - i__1 = i2; - i__2 = i3; - for (i__ = i1; i__2 < 0 ? i__ >= i__1 : i__ <= i__1; i__ += i__2) { -/* Computing MIN */ - i__4 = nb, i__5 = *k - i__ + 1; - ib = min(i__4,i__5); - -/* - Form the triangular factor of the block reflector - H = H(i) H(i+1) . . . 
H(i+ib-1) -*/ - - i__4 = nq - i__ + 1; - zlarft_("Forward", "Columnwise", &i__4, &ib, &a[i__ + i__ * - a_dim1], lda, &tau[i__], t, &c__65) - ; - if (left) { - -/* H or H' is applied to C(i:m,1:n) */ - - mi = *m - i__ + 1; - ic = i__; - } else { - -/* H or H' is applied to C(1:m,i:n) */ - - ni = *n - i__ + 1; - jc = i__; - } - -/* Apply H or H' */ - - zlarfb_(side, trans, "Forward", "Columnwise", &mi, &ni, &ib, &a[ - i__ + i__ * a_dim1], lda, t, &c__65, &c__[ic + jc * - c_dim1], ldc, &work[1], &ldwork); -/* L10: */ - } - } - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - return 0; - -/* End of ZUNMQR */ - -} /* zunmqr_ */ - -/* Subroutine */ int zunmtr_(char *side, char *uplo, char *trans, integer *m, - integer *n, doublecomplex *a, integer *lda, doublecomplex *tau, - doublecomplex *c__, integer *ldc, doublecomplex *work, integer *lwork, - integer *info) -{ - /* System generated locals */ - address a__1[2]; - integer a_dim1, a_offset, c_dim1, c_offset, i__1[2], i__2, i__3; - char ch__1[2]; - - /* Builtin functions */ - /* Subroutine */ int s_cat(char *, char **, integer *, integer *, ftnlen); - - /* Local variables */ - static integer i1, i2, nb, mi, ni, nq, nw; - static logical left; - extern logical lsame_(char *, char *); - static integer iinfo; - static logical upper; - extern /* Subroutine */ int xerbla_(char *, integer *); - extern integer ilaenv_(integer *, char *, char *, integer *, integer *, - integer *, integer *, ftnlen, ftnlen); - static integer lwkopt; - static logical lquery; - extern /* Subroutine */ int zunmql_(char *, char *, integer *, integer *, - integer *, doublecomplex *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *, integer *), zunmqr_(char *, char *, integer *, integer *, - integer *, doublecomplex *, integer *, doublecomplex *, - doublecomplex *, integer *, doublecomplex *, integer *, integer *); - - -/* - -- LAPACK routine (version 3.0) -- - Univ. of Tennessee, Univ. 
of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - June 30, 1999 - - - Purpose - ======= - - ZUNMTR overwrites the general complex M-by-N matrix C with - - SIDE = 'L' SIDE = 'R' - TRANS = 'N': Q * C C * Q - TRANS = 'C': Q**H * C C * Q**H - - where Q is a complex unitary matrix of order nq, with nq = m if - SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of - nq-1 elementary reflectors, as returned by ZHETRD: - - if UPLO = 'U', Q = H(nq-1) . . . H(2) H(1); - - if UPLO = 'L', Q = H(1) H(2) . . . H(nq-1). - - Arguments - ========= - - SIDE (input) CHARACTER*1 - = 'L': apply Q or Q**H from the Left; - = 'R': apply Q or Q**H from the Right. - - UPLO (input) CHARACTER*1 - = 'U': Upper triangle of A contains elementary reflectors - from ZHETRD; - = 'L': Lower triangle of A contains elementary reflectors - from ZHETRD. - - TRANS (input) CHARACTER*1 - = 'N': No transpose, apply Q; - = 'C': Conjugate transpose, apply Q**H. - - M (input) INTEGER - The number of rows of the matrix C. M >= 0. - - N (input) INTEGER - The number of columns of the matrix C. N >= 0. - - A (input) COMPLEX*16 array, dimension - (LDA,M) if SIDE = 'L' - (LDA,N) if SIDE = 'R' - The vectors which define the elementary reflectors, as - returned by ZHETRD. - - LDA (input) INTEGER - The leading dimension of the array A. - LDA >= max(1,M) if SIDE = 'L'; LDA >= max(1,N) if SIDE = 'R'. - - TAU (input) COMPLEX*16 array, dimension - (M-1) if SIDE = 'L' - (N-1) if SIDE = 'R' - TAU(i) must contain the scalar factor of the elementary - reflector H(i), as returned by ZHETRD. - - C (input/output) COMPLEX*16 array, dimension (LDC,N) - On entry, the M-by-N matrix C. - On exit, C is overwritten by Q*C or Q**H*C or C*Q**H or C*Q. - - LDC (input) INTEGER - The leading dimension of the array C. LDC >= max(1,M). - - WORK (workspace/output) COMPLEX*16 array, dimension (LWORK) - On exit, if INFO = 0, WORK(1) returns the optimal LWORK. 
- - LWORK (input) INTEGER - The dimension of the array WORK. - If SIDE = 'L', LWORK >= max(1,N); - if SIDE = 'R', LWORK >= max(1,M). - For optimum performance LWORK >= N*NB if SIDE = 'L', and - LWORK >=M*NB if SIDE = 'R', where NB is the optimal - blocksize. - - If LWORK = -1, then a workspace query is assumed; the routine - only calculates the optimal size of the WORK array, returns - this value as the first entry of the WORK array, and no error - message related to LWORK is issued by XERBLA. - - INFO (output) INTEGER - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - - ===================================================================== - - - Test the input arguments -*/ - - /* Parameter adjustments */ - a_dim1 = *lda; - a_offset = 1 + a_dim1 * 1; - a -= a_offset; - --tau; - c_dim1 = *ldc; - c_offset = 1 + c_dim1 * 1; - c__ -= c_offset; - --work; - - /* Function Body */ - *info = 0; - left = lsame_(side, "L"); - upper = lsame_(uplo, "U"); - lquery = *lwork == -1; - -/* NQ is the order of Q and NW is the minimum dimension of WORK */ - - if (left) { - nq = *m; - nw = *n; - } else { - nq = *n; - nw = *m; - } - if ((! left && ! lsame_(side, "R"))) { - *info = -1; - } else if ((! upper && ! lsame_(uplo, "L"))) { - *info = -2; - } else if ((! lsame_(trans, "N") && ! lsame_(trans, - "C"))) { - *info = -3; - } else if (*m < 0) { - *info = -4; - } else if (*n < 0) { - *info = -5; - } else if (*lda < max(1,nq)) { - *info = -7; - } else if (*ldc < max(1,*m)) { - *info = -10; - } else if ((*lwork < max(1,nw) && ! 
lquery)) { - *info = -12; - } - - if (*info == 0) { - if (upper) { - if (left) { -/* Writing concatenation */ - i__1[0] = 1, a__1[0] = side; - i__1[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__1, &c__2, (ftnlen)2); - i__2 = *m - 1; - i__3 = *m - 1; - nb = ilaenv_(&c__1, "ZUNMQL", ch__1, &i__2, n, &i__3, &c_n1, ( - ftnlen)6, (ftnlen)2); - } else { -/* Writing concatenation */ - i__1[0] = 1, a__1[0] = side; - i__1[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__1, &c__2, (ftnlen)2); - i__2 = *n - 1; - i__3 = *n - 1; - nb = ilaenv_(&c__1, "ZUNMQL", ch__1, m, &i__2, &i__3, &c_n1, ( - ftnlen)6, (ftnlen)2); - } - } else { - if (left) { -/* Writing concatenation */ - i__1[0] = 1, a__1[0] = side; - i__1[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__1, &c__2, (ftnlen)2); - i__2 = *m - 1; - i__3 = *m - 1; - nb = ilaenv_(&c__1, "ZUNMQR", ch__1, &i__2, n, &i__3, &c_n1, ( - ftnlen)6, (ftnlen)2); - } else { -/* Writing concatenation */ - i__1[0] = 1, a__1[0] = side; - i__1[1] = 1, a__1[1] = trans; - s_cat(ch__1, a__1, i__1, &c__2, (ftnlen)2); - i__2 = *n - 1; - i__3 = *n - 1; - nb = ilaenv_(&c__1, "ZUNMQR", ch__1, m, &i__2, &i__3, &c_n1, ( - ftnlen)6, (ftnlen)2); - } - } - lwkopt = max(1,nw) * nb; - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - } - - if (*info != 0) { - i__2 = -(*info); - xerbla_("ZUNMTR", &i__2); - return 0; - } else if (lquery) { - return 0; - } - -/* Quick return if possible */ - - if (*m == 0 || *n == 0 || nq == 1) { - work[1].r = 1., work[1].i = 0.; - return 0; - } - - if (left) { - mi = *m - 1; - ni = *n; - } else { - mi = *m; - ni = *n - 1; - } - - if (upper) { - -/* Q was determined by a call to ZHETRD with UPLO = 'U' */ - - i__2 = nq - 1; - zunmql_(side, trans, &mi, &ni, &i__2, &a[((a_dim1) << (1)) + 1], lda, - &tau[1], &c__[c_offset], ldc, &work[1], lwork, &iinfo); - } else { - -/* Q was determined by a call to ZHETRD with UPLO = 'L' */ - - if (left) { - i1 = 2; - i2 = 1; - } else { - i1 = 1; - i2 = 2; - } - i__2 = nq - 1; - zunmqr_(side, 
trans, &mi, &ni, &i__2, &a[a_dim1 + 2], lda, &tau[1], & - c__[i1 + i2 * c_dim1], ldc, &work[1], lwork, &iinfo); - } - work[1].r = (doublereal) lwkopt, work[1].i = 0.; - return 0; - -/* End of ZUNMTR */ - -} /* zunmtr_ */ - diff --git a/pythonPackages/numpy/numpy/ma/__init__.py b/pythonPackages/numpy/numpy/ma/__init__.py deleted file mode 100755 index 17caa9e025..0000000000 --- a/pythonPackages/numpy/numpy/ma/__init__.py +++ /dev/null @@ -1,56 +0,0 @@ -""" -============= -Masked Arrays -============= - -Arrays sometimes contain invalid or missing data. When doing operations -on such arrays, we wish to suppress invalid values, which is the purpose masked -arrays fulfill (an example of typical use is given below). - -For example, examine the following array: - ->>> x = np.array([2, 1, 3, np.nan, 5, 2, 3, np.nan]) - -When we try to calculate the mean of the data, the result is undetermined: - ->>> np.mean(x) -nan - -The mean is calculated using roughly ``np.sum(x)/len(x)``, but since -any number added to ``NaN`` [1]_ produces ``NaN``, this doesn't work. Enter -masked arrays: - ->>> m = np.ma.masked_array(x, np.isnan(x)) ->>> m -masked_array(data = [2.0 1.0 3.0 -- 5.0 2.0 3.0 --], - mask = [False False False True False False False True], - fill_value=1e+20) - -Here, we construct a masked array that suppresses all ``NaN`` values. We -may now proceed to calculate the mean of the other values: - ->>> np.mean(m) -2.6666666666666665 - -.. [1] Not-a-Number, a floating point value that is the result of an - invalid operation.
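The NaN-masking idiom in the docstring above still works unchanged with current NumPy; a minimal self-contained sketch (`masked_invalid` is the shorthand for the same construction):

```python
import numpy as np

x = np.array([2, 1, 3, np.nan, 5, 2, 3, np.nan])
m = np.ma.masked_array(x, mask=np.isnan(x))  # mask out the NaN entries

# The mean is taken over the six unmasked values only: 16/6
assert abs(m.mean() - 16.0 / 6.0) < 1e-12

# np.ma.masked_invalid builds the same masked array in one call
assert abs(np.ma.masked_invalid(x).mean() - 16.0 / 6.0) < 1e-12
```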
- -""" -__author__ = "Pierre GF Gerard-Marchant ($Author: jarrod.millman $)" -__version__ = '1.0' -__revision__ = "$Revision: 3473 $" -__date__ = '$Date: 2007-10-29 17:18:13 +0200 (Mon, 29 Oct 2007) $' - -import core -from core import * - -import extras -from extras import * - -__all__ = ['core', 'extras'] -__all__ += core.__all__ -__all__ += extras.__all__ - -from numpy.testing import Tester -test = Tester().test -bench = Tester().bench diff --git a/pythonPackages/numpy/numpy/ma/bench.py b/pythonPackages/numpy/numpy/ma/bench.py deleted file mode 100755 index 2cc8f6a808..0000000000 --- a/pythonPackages/numpy/numpy/ma/bench.py +++ /dev/null @@ -1,165 +0,0 @@ -#! python -# encoding: utf-8 - -import timeit -#import IPython.ipapi -#ip = IPython.ipapi.get() -#from IPython import ipmagic -import numpy -#from numpy import ma -#from numpy.ma import filled -#from numpy.ma.testutils import assert_equal - - -#####--------------------------------------------------------------------------- -#---- --- Global variables --- -#####--------------------------------------------------------------------------- - -# Small arrays .................................. -xs = numpy.random.uniform(-1,1,6).reshape(2,3) -ys = numpy.random.uniform(-1,1,6).reshape(2,3) -zs = xs + 1j * ys -m1 = [[True, False, False], [False, False, True]] -m2 = [[True, False, True], [False, False, True]] -nmxs = numpy.ma.array(xs, mask=m1) -nmys = numpy.ma.array(ys, mask=m2) -nmzs = numpy.ma.array(zs, mask=m1) -# Big arrays .................................... 
-xl = numpy.random.uniform(-1,1,100*100).reshape(100,100) -yl = numpy.random.uniform(-1,1,100*100).reshape(100,100) -zl = xl + 1j * yl -maskx = xl > 0.8 -masky = yl < -0.8 -nmxl = numpy.ma.array(xl, mask=maskx) -nmyl = numpy.ma.array(yl, mask=masky) -nmzl = numpy.ma.array(zl, mask=maskx) - -#####--------------------------------------------------------------------------- -#---- --- Functions --- -#####--------------------------------------------------------------------------- - -def timer(s, v='', nloop=500, nrep=3): - units = ["s", "ms", "µs", "ns"] - scaling = [1, 1e3, 1e6, 1e9] - print "%s : %-50s : " % (v,s), - varnames = ["%ss,nm%ss,%sl,nm%sl" % tuple(x*4) for x in 'xyz'] - setup = 'from __main__ import numpy, ma, %s' % ','.join(varnames) - Timer = timeit.Timer(stmt=s, setup=setup) - best = min(Timer.repeat(nrep, nloop)) / nloop - if best > 0.0: - order = min(-int(numpy.floor(numpy.log10(best)) // 3), 3) - else: - order = 3 - print "%d loops, best of %d: %.*g %s per loop" % (nloop, nrep, - 3, - best * scaling[order], - units[order]) -# ip.magic('timeit -n%i %s' % (nloop,s)) - - - -def compare_functions_1v(func, nloop=500, - xs=xs, nmxs=nmxs, xl=xl, nmxl=nmxl): - funcname = func.__name__ - print "-"*50 - print "%s on small arrays" % funcname - module, data = "numpy.ma","nmxs" - timer("%(module)s.%(funcname)s(%(data)s)" % locals(), v="%11s" % module, nloop=nloop) - # - print "%s on large arrays" % funcname - module, data = "numpy.ma","nmxl" - timer("%(module)s.%(funcname)s(%(data)s)" % locals(), v="%11s" % module, nloop=nloop) - return - -def compare_methods(methodname, args, vars='x', nloop=500, test=True, - xs=xs, nmxs=nmxs, xl=xl, nmxl=nmxl): - print "-"*50 - print "%s on small arrays" % methodname - data, ver = "nm%ss" % vars, 'numpy.ma' - timer("%(data)s.%(methodname)s(%(args)s)" % locals(), v=ver, nloop=nloop) - # - print "%s on large arrays" % methodname - data, ver = "nm%sl" % vars, 'numpy.ma' - timer("%(data)s.%(methodname)s(%(args)s)" % locals(), v=ver, 
nloop=nloop) - return - -def compare_functions_2v(func, nloop=500, test=True, - xs=xs, nmxs=nmxs, - ys=ys, nmys=nmys, - xl=xl, nmxl=nmxl, - yl=yl, nmyl=nmyl): - funcname = func.__name__ - print "-"*50 - print "%s on small arrays" % funcname - module, data = "numpy.ma","nmxs,nmys" - timer("%(module)s.%(funcname)s(%(data)s)" % locals(), v="%11s" % module, nloop=nloop) - # - print "%s on large arrays" % funcname - module, data = "numpy.ma","nmxl,nmyl" - timer("%(module)s.%(funcname)s(%(data)s)" % locals(), v="%11s" % module, nloop=nloop) - return - - -############################################################################### - - -################################################################################ -if __name__ == '__main__': -# # Small arrays .................................. -# xs = numpy.random.uniform(-1,1,6).reshape(2,3) -# ys = numpy.random.uniform(-1,1,6).reshape(2,3) -# zs = xs + 1j * ys -# m1 = [[True, False, False], [False, False, True]] -# m2 = [[True, False, True], [False, False, True]] -# nmxs = numpy.ma.array(xs, mask=m1) -# nmys = numpy.ma.array(ys, mask=m2) -# nmzs = numpy.ma.array(zs, mask=m1) -# mmxs = maskedarray.array(xs, mask=m1) -# mmys = maskedarray.array(ys, mask=m2) -# mmzs = maskedarray.array(zs, mask=m1) -# # Big arrays .................................... -# xl = numpy.random.uniform(-1,1,100*100).reshape(100,100) -# yl = numpy.random.uniform(-1,1,100*100).reshape(100,100) -# zl = xl + 1j * yl -# maskx = xl > 0.8 -# masky = yl < -0.8 -# nmxl = numpy.ma.array(xl, mask=maskx) -# nmyl = numpy.ma.array(yl, mask=masky) -# nmzl = numpy.ma.array(zl, mask=maskx) -# mmxl = maskedarray.array(xl, mask=maskx, shrink=True) -# mmyl = maskedarray.array(yl, mask=masky, shrink=True) -# mmzl = maskedarray.array(zl, mask=maskx, shrink=True) -# - compare_functions_1v(numpy.sin) - compare_functions_1v(numpy.log) - compare_functions_1v(numpy.sqrt) - #.................................................................... 
- compare_functions_2v(numpy.multiply) - compare_functions_2v(numpy.divide) - compare_functions_2v(numpy.power) - #.................................................................... - compare_methods('ravel','', nloop=1000) - compare_methods('conjugate','','z', nloop=1000) - compare_methods('transpose','', nloop=1000) - compare_methods('compressed','', nloop=1000) - compare_methods('__getitem__','0', nloop=1000) - compare_methods('__getitem__','(0,0)', nloop=1000) - compare_methods('__getitem__','[0,-1]', nloop=1000) - compare_methods('__setitem__','0, 17', nloop=1000, test=False) - compare_methods('__setitem__','(0,0), 17', nloop=1000, test=False) - #.................................................................... - print "-"*50 - print "__setitem__ on small arrays" - timer('nmxs.__setitem__((-1,0),numpy.ma.masked)', 'numpy.ma ',nloop=10000) - - print "-"*50 - print "__setitem__ on large arrays" - timer('nmxl.__setitem__((-1,0),numpy.ma.masked)', 'numpy.ma ',nloop=10000) - - #.................................................................... - print "-"*50 - print "where on small arrays" - timer('numpy.ma.where(nmxs>2,nmxs,nmys)', 'numpy.ma ',nloop=1000) - print "-"*50 - print "where on large arrays" - timer('numpy.ma.where(nmxl>2,nmxl,nmyl)', 'numpy.ma ',nloop=100) diff --git a/pythonPackages/numpy/numpy/ma/core.py b/pythonPackages/numpy/numpy/ma/core.py deleted file mode 100755 index de74856385..0000000000 --- a/pythonPackages/numpy/numpy/ma/core.py +++ /dev/null @@ -1,7195 +0,0 @@ -""" -numpy.ma : a package to handle missing or invalid values. - -This package was initially written for numarray by Paul F. Dubois -at Lawrence Livermore National Laboratory. -In 2006, the package was completely rewritten by Pierre Gerard-Marchant -(University of Georgia) to make the MaskedArray class a subclass of ndarray, -and to improve support of structured arrays. - - -Copyright 1999, 2000, 2001 Regents of the University of California. 
-Released for unlimited redistribution. - -* Adapted for numpy_core 2005 by Travis Oliphant and (mainly) Paul Dubois. -* Subclassing of the base `ndarray` 2006 by Pierre Gerard-Marchant - (pgmdevlist_AT_gmail_DOT_com) -* Improvements suggested by Reggie Dugard (reggie_AT_merfinllc_DOT_com) - -.. moduleauthor:: Pierre Gerard-Marchant - -""" -# pylint: disable-msg=E1002 - -__author__ = "Pierre GF Gerard-Marchant" -__docformat__ = "restructuredtext en" - -__all__ = ['MAError', 'MaskError', 'MaskType', 'MaskedArray', - 'bool_', - 'abs', 'absolute', 'add', 'all', 'allclose', 'allequal', 'alltrue', - 'amax', 'amin', 'anom', 'anomalies', 'any', 'arange', - 'arccos', 'arccosh', 'arcsin', 'arcsinh', 'arctan', 'arctan2', - 'arctanh', 'argmax', 'argmin', 'argsort', 'around', - 'array', 'asarray', 'asanyarray', - 'bitwise_and', 'bitwise_or', 'bitwise_xor', - 'ceil', 'choose', 'clip', 'common_fill_value', 'compress', - 'compressed', 'concatenate', 'conjugate', 'copy', 'cos', 'cosh', - 'count', 'cumprod', 'cumsum', - 'default_fill_value', 'diag', 'diagonal', 'diff', 'divide', 'dump', - 'dumps', - 'empty', 'empty_like', 'equal', 'exp', 'expand_dims', - 'fabs', 'flatten_mask', 'fmod', 'filled', 'floor', 'floor_divide', - 'fix_invalid', 'flatten_structured_array', 'frombuffer', 'fromflex', - 'fromfunction', - 'getdata', 'getmask', 'getmaskarray', 'greater', 'greater_equal', - 'harden_mask', 'hypot', - 'identity', 'ids', 'indices', 'inner', 'innerproduct', - 'isMA', 'isMaskedArray', 'is_mask', 'is_masked', 'isarray', - 'left_shift', 'less', 'less_equal', 'load', 'loads', 'log', 'log2', - 'log10', 'logical_and', 'logical_not', 'logical_or', 'logical_xor', - 'make_mask', 'make_mask_descr', 'make_mask_none', 'mask_or', - 'masked', 'masked_array', 'masked_equal', 'masked_greater', - 'masked_greater_equal', 'masked_inside', 'masked_invalid', - 'masked_less', 'masked_less_equal', 'masked_not_equal', - 'masked_object', 'masked_outside', 'masked_print_option', - 'masked_singleton', 
'masked_values', 'masked_where', 'max', 'maximum', - 'maximum_fill_value', 'mean', 'min', 'minimum', 'minimum_fill_value', - 'mod', 'multiply', 'mvoid', - 'negative', 'nomask', 'nonzero', 'not_equal', - 'ones', 'outer', 'outerproduct', - 'power', 'prod', 'product', 'ptp', 'put', 'putmask', - 'rank', 'ravel', 'remainder', 'repeat', 'reshape', 'resize', - 'right_shift', 'round_', 'round', - 'set_fill_value', 'shape', 'sin', 'sinh', 'size', 'sometrue', - 'sort', 'soften_mask', 'sqrt', 'squeeze', 'std', 'subtract', 'sum', - 'swapaxes', - 'take', 'tan', 'tanh', 'trace', 'transpose', 'true_divide', - 'var', 'where', - 'zeros'] - -import cPickle - -import numpy as np -from numpy import ndarray, amax, amin, iscomplexobj, bool_ -from numpy import array as narray - -import numpy.core.umath as umath -import numpy.core.numerictypes as ntypes -from numpy.compat import getargspec, formatargspec -from numpy import expand_dims as n_expand_dims -import warnings - -import sys -if sys.version_info[0] >= 3: - from functools import reduce - -MaskType = np.bool_ -nomask = MaskType(0) - -def doc_note(initialdoc, note): - """ - Adds a Notes section to an existing docstring. 
- """ - if initialdoc is None: - return - if note is None: - return initialdoc - newdoc = """ - %s - - Notes - ----- - %s - """ - return newdoc % (initialdoc, note) - -def get_object_signature(obj): - """ - Get the signature from obj - """ - try: - sig = formatargspec(*getargspec(obj)) - except TypeError, errmsg: - sig = '' -# msg = "Unable to retrieve the signature of %s '%s'\n"\ -# "(Initial error message: %s)" -# warnings.warn(msg % (type(obj), -# getattr(obj, '__name__', '???'), -# errmsg)) - return sig - - -#####-------------------------------------------------------------------------- -#---- --- Exceptions --- -#####-------------------------------------------------------------------------- -class MAError(Exception): - """Class for masked array related errors.""" - pass -class MaskError(MAError): - "Class for mask related errors." - pass - - -#####-------------------------------------------------------------------------- -#---- --- Filling options --- -#####-------------------------------------------------------------------------- -# b: boolean - c: complex - f: floats - i: integer - O: object - S: string -default_filler = {'b': True, - 'c' : 1.e20 + 0.0j, - 'f' : 1.e20, - 'i' : 999999, - 'O' : '?', - 'S' : 'N/A', - 'u' : 999999, - 'V' : '???', - 'U' : 'N/A', - } -max_filler = ntypes._minvals -max_filler.update([(k, -np.inf) for k in [np.float32, np.float64]]) -min_filler = ntypes._maxvals -min_filler.update([(k, +np.inf) for k in [np.float32, np.float64]]) -if 'float128' in ntypes.typeDict: - max_filler.update([(np.float128, -np.inf)]) - min_filler.update([(np.float128, +np.inf)]) - - -def default_fill_value(obj): - """ - Return the default fill value for the argument object. - - The default filling value depends on the datatype of the input - array or the type of the input scalar: - - ======== ======== - datatype default - ======== ======== - bool True - int 999999 - float 1.e20 - complex 1.e20+0j - object '?' 
- string 'N/A' - ======== ======== - - - Parameters - ---------- - obj : ndarray, dtype or scalar - The array data-type or scalar for which the default fill value - is returned. - - Returns - ------- - fill_value : scalar - The default fill value. - - Examples - -------- - >>> np.ma.default_fill_value(1) - 999999 - >>> np.ma.default_fill_value(np.array([1.1, 2., np.pi])) - 1e+20 - >>> np.ma.default_fill_value(np.dtype(complex)) - (1e+20+0j) - - """ - if hasattr(obj, 'dtype'): - defval = _check_fill_value(None, obj.dtype) - elif isinstance(obj, np.dtype): - if obj.subdtype: - defval = default_filler.get(obj.subdtype[0].kind, '?') - else: - defval = default_filler.get(obj.kind, '?') - elif isinstance(obj, float): - defval = default_filler['f'] - elif isinstance(obj, int) or isinstance(obj, long): - defval = default_filler['i'] - elif isinstance(obj, str): - defval = default_filler['S'] - elif isinstance(obj, unicode): - defval = default_filler['U'] - elif isinstance(obj, complex): - defval = default_filler['c'] - else: - defval = default_filler['O'] - return defval - - -def _recursive_extremum_fill_value(ndtype, extremum): - names = ndtype.names - if names: - deflist = [] - for name in names: - fval = _recursive_extremum_fill_value(ndtype[name], extremum) - deflist.append(fval) - return tuple(deflist) - return extremum[ndtype] - - -def minimum_fill_value(obj): - """ - Return the maximum value that can be represented by the dtype of an object. - - This function is useful for calculating a fill value suitable for - taking the minimum of an array with a given dtype. - - Parameters - ---------- - obj : ndarray or dtype - An object that can be queried for its numeric type. - - Returns - ------- - val : scalar - The maximum representable value. - - Raises - ------ - TypeError - If `obj` isn't a suitable numeric type. - - See Also - -------- - maximum_fill_value : The inverse function. - set_fill_value : Set the filling value of a masked array.
- MaskedArray.fill_value : Return current fill value. - - Examples - -------- - >>> import numpy.ma as ma - >>> a = np.int8() - >>> ma.minimum_fill_value(a) - 127 - >>> a = np.int32() - >>> ma.minimum_fill_value(a) - 2147483647 - - An array of numeric data can also be passed. - - >>> a = np.array([1, 2, 3], dtype=np.int8) - >>> ma.minimum_fill_value(a) - 127 - >>> a = np.array([1, 2, 3], dtype=np.float32) - >>> ma.minimum_fill_value(a) - inf - - """ - errmsg = "Unsuitable type for calculating minimum." - if hasattr(obj, 'dtype'): - return _recursive_extremum_fill_value(obj.dtype, min_filler) - elif isinstance(obj, float): - return min_filler[ntypes.typeDict['float_']] - elif isinstance(obj, int): - return min_filler[ntypes.typeDict['int_']] - elif isinstance(obj, long): - return min_filler[ntypes.typeDict['uint']] - elif isinstance(obj, np.dtype): - return min_filler[obj] - else: - raise TypeError(errmsg) - - -def maximum_fill_value(obj): - """ - Return the minimum value that can be represented by the dtype of an object. - - This function is useful for calculating a fill value suitable for - taking the maximum of an array with a given dtype. - - Parameters - ---------- - obj : {ndarray, dtype} - An object that can be queried for its numeric type. - - Returns - ------- - val : scalar - The minimum representable value. - - Raises - ------ - TypeError - If `obj` isn't a suitable numeric type. - - See Also - -------- - minimum_fill_value : The inverse function. - set_fill_value : Set the filling value of a masked array. - MaskedArray.fill_value : Return current fill value. - - Examples - -------- - >>> import numpy.ma as ma - >>> a = np.int8() - >>> ma.maximum_fill_value(a) - -128 - >>> a = np.int32() - >>> ma.maximum_fill_value(a) - -2147483648 - - An array of numeric data can also be passed.
- - >>> a = np.array([1, 2, 3], dtype=np.int8) - >>> ma.maximum_fill_value(a) - -128 - >>> a = np.array([1, 2, 3], dtype=np.float32) - >>> ma.maximum_fill_value(a) - -inf - - """ - errmsg = "Unsuitable type for calculating maximum." - if hasattr(obj, 'dtype'): - return _recursive_extremum_fill_value(obj.dtype, max_filler) - elif isinstance(obj, float): - return max_filler[ntypes.typeDict['float_']] - elif isinstance(obj, int): - return max_filler[ntypes.typeDict['int_']] - elif isinstance(obj, long): - return max_filler[ntypes.typeDict['uint']] - elif isinstance(obj, np.dtype): - return max_filler[obj] - else: - raise TypeError(errmsg) - - -def _recursive_set_default_fill_value(dtypedescr): - deflist = [] - for currentdescr in dtypedescr: - currenttype = currentdescr[1] - if isinstance(currenttype, list): - deflist.append(tuple(_recursive_set_default_fill_value(currenttype))) - else: - deflist.append(default_fill_value(np.dtype(currenttype))) - return tuple(deflist) - -def _recursive_set_fill_value(fillvalue, dtypedescr): - fillvalue = np.resize(fillvalue, len(dtypedescr)) - output_value = [] - for (fval, descr) in zip(fillvalue, dtypedescr): - cdtype = descr[1] - if isinstance(cdtype, list): - output_value.append(tuple(_recursive_set_fill_value(fval, cdtype))) - else: - output_value.append(np.array(fval, dtype=cdtype).item()) - return tuple(output_value) - - -def _check_fill_value(fill_value, ndtype): - """ - Private function validating the given `fill_value` for the given dtype. - - If fill_value is None, it is set to the default corresponding to the dtype - if this latter is standard (no fields). If the datatype is flexible (named - fields), fill_value is set to a tuple whose elements are the default fill - values corresponding to each field. - - If fill_value is not None, its value is forced to the given dtype. 
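The fill-value machinery described above behaves the same way in current NumPy; a quick sketch of the defaults from the `default_filler` table and the extremum fillers (values checked against the modern `numpy.ma` API):

```python
import numpy as np
import numpy.ma as ma

# Defaults from the default_filler table
assert ma.default_fill_value(1) == 999999      # int -> 999999
assert ma.default_fill_value(1.0) == 1e20      # float -> 1e+20

# Extremum fillers: neutral values for min/max reductions
assert ma.minimum_fill_value(np.int8()) == 127    # largest int8
assert ma.maximum_fill_value(np.int8()) == -128   # smallest int8
assert np.isinf(ma.minimum_fill_value(np.float32()))
```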
- - """ - ndtype = np.dtype(ndtype) - fields = ndtype.fields - if fill_value is None: - if fields: - descr = ndtype.descr - fill_value = np.array(_recursive_set_default_fill_value(descr), - dtype=ndtype,) - else: - fill_value = default_fill_value(ndtype) - elif fields: - fdtype = [(_[0], _[1]) for _ in ndtype.descr] - if isinstance(fill_value, (ndarray, np.void)): - try: - fill_value = np.array(fill_value, copy=False, dtype=fdtype) - except ValueError: - err_msg = "Unable to transform %s to dtype %s" - raise ValueError(err_msg % (fill_value, fdtype)) - else: - descr = ndtype.descr - fill_value = np.asarray(fill_value, dtype=object) - fill_value = np.array(_recursive_set_fill_value(fill_value, descr), - dtype=ndtype) - else: - if isinstance(fill_value, basestring) and (ndtype.char not in 'SV'): - fill_value = default_fill_value(ndtype) - else: - # In case we want to convert 1e+20 to int... - try: - fill_value = np.array(fill_value, copy=False, dtype=ndtype)#.item() - except OverflowError: - fill_value = default_fill_value(ndtype) - return np.array(fill_value) - - -def set_fill_value(a, fill_value): - """ - Set the filling value of a, if a is a masked array. - - This function changes the fill value of the masked array `a` in place. - If `a` is not a masked array, the function returns silently, without - doing anything. - - Parameters - ---------- - a : array_like - Input array. - fill_value : dtype - Filling value. A consistency test is performed to make sure - the value is compatible with the dtype of `a`. - - Returns - ------- - None - Nothing returned by this function. - - See Also - -------- - maximum_fill_value : Return the default fill value for a dtype. - MaskedArray.fill_value : Return current fill value. - MaskedArray.set_fill_value : Equivalent method. 
- - Examples - -------- - >>> import numpy.ma as ma - >>> a = np.arange(5) - >>> a - array([0, 1, 2, 3, 4]) - >>> a = ma.masked_where(a < 3, a) - >>> a - masked_array(data = [-- -- -- 3 4], - mask = [ True True True False False], - fill_value=999999) - >>> ma.set_fill_value(a, -999) - >>> a - masked_array(data = [-- -- -- 3 4], - mask = [ True True True False False], - fill_value=-999) - - Nothing happens if `a` is not a masked array. - - >>> a = range(5) - >>> a - [0, 1, 2, 3, 4] - >>> ma.set_fill_value(a, 100) - >>> a - [0, 1, 2, 3, 4] - >>> a = np.arange(5) - >>> a - array([0, 1, 2, 3, 4]) - >>> ma.set_fill_value(a, 100) - >>> a - array([0, 1, 2, 3, 4]) - - """ - if isinstance(a, MaskedArray): - a.set_fill_value(fill_value) - return - -def get_fill_value(a): - """ - Return the filling value of a, if any. Otherwise, returns the - default filling value for that type. - - """ - if isinstance(a, MaskedArray): - result = a.fill_value - else: - result = default_fill_value(a) - return result - -def common_fill_value(a, b): - """ - Return the common filling value of two masked arrays, if any. - - If ``a.fill_value == b.fill_value``, return the fill value, - otherwise return None. - - Parameters - ---------- - a, b : MaskedArray - The masked arrays for which to compare fill values. - - Returns - ------- - fill_value : scalar or None - The common fill value, or None. - - Examples - -------- - >>> x = np.ma.array([0, 1.], fill_value=3) - >>> y = np.ma.array([0, 1.], fill_value=3) - >>> np.ma.common_fill_value(x, y) - 3.0 - - """ - t1 = get_fill_value(a) - t2 = get_fill_value(b) - if t1 == t2: - return t1 - return None - - -#####-------------------------------------------------------------------------- -def filled(a, fill_value=None): - """ - Return input as an array with masked data replaced by a fill value. - - If `a` is not a `MaskedArray`, `a` itself is returned. - If `a` is a `MaskedArray` and `fill_value` is None, `fill_value` is set to - ``a.fill_value``. 
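A short sketch of `set_fill_value` and `filled` working together, including the documented silent no-op on plain ndarrays (current NumPy API):

```python
import numpy as np
import numpy.ma as ma

a = ma.masked_where(np.arange(5) < 3, np.arange(5))  # mask 0, 1, 2
ma.set_fill_value(a, -999)                           # changes fill value in place
assert a.fill_value == -999

# filled() replaces masked entries with the fill value
assert list(a.filled()) == [-999, -999, -999, 3, 4]

# On a plain ndarray, set_fill_value returns silently without doing anything
b = np.arange(3)
ma.set_fill_value(b, 100)
assert list(b) == [0, 1, 2]
```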
- - Parameters - ---------- - a : MaskedArray or array_like - An input object. - fill_value : scalar, optional - Filling value. Default is None. - - Returns - ------- - a : ndarray - The filled array. - - See Also - -------- - compressed - - Examples - -------- - >>> x = np.ma.array(np.arange(9).reshape(3, 3), mask=[[1, 0, 0], - ... [1, 0, 0], - ... [0, 0, 0]]) - >>> x.filled() - array([[999999, 1, 2], - [999999, 4, 5], - [ 6, 7, 8]]) - - """ - if hasattr(a, 'filled'): - return a.filled(fill_value) - elif isinstance(a, ndarray): - # Should we check for contiguity ? and a.flags['CONTIGUOUS']: - return a - elif isinstance(a, dict): - return np.array(a, 'O') - else: - return np.array(a) - -#####-------------------------------------------------------------------------- -def get_masked_subclass(*arrays): - """ - Return the youngest subclass of MaskedArray from a list of (masked) arrays. - In case of siblings, the first listed takes over. - - """ - if len(arrays) == 1: - arr = arrays[0] - if isinstance(arr, MaskedArray): - rcls = type(arr) - else: - rcls = MaskedArray - else: - arrcls = [type(a) for a in arrays] - rcls = arrcls[0] - if not issubclass(rcls, MaskedArray): - rcls = MaskedArray - for cls in arrcls[1:]: - if issubclass(cls, rcls): - rcls = cls - # Don't return MaskedConstant as result: revert to MaskedArray - if rcls.__name__ == 'MaskedConstant': - return MaskedArray - return rcls - -#####-------------------------------------------------------------------------- -def getdata(a, subok=True): - """ - Return the data of a masked array as an ndarray. - - Return the data of `a` (if any) as an ndarray if `a` is a ``MaskedArray``, - else return `a` as a ndarray or subclass (depending on `subok`) if not. - - Parameters - ---------- - a : array_like - Input ``MaskedArray``, alternatively a ndarray or a subclass thereof. - subok : bool - Whether to force the output to be a `pure` ndarray (False) or to - return a subclass of ndarray if appropriate (True, default). 
- - See Also - -------- - getmask : Return the mask of a masked array, or nomask. - getmaskarray : Return the mask of a masked array, or full array of False. - - Examples - -------- - - >>> import numpy.ma as ma - >>> a = ma.masked_equal([[1,2],[3,4]], 2) - >>> a - masked_array(data = - [[1 --] - [3 4]], - mask = - [[False True] - [False False]], - fill_value=999999) - >>> ma.getdata(a) - array([[1, 2], - [3, 4]]) - - Equivalently use the ``MaskedArray`` `data` attribute. - - >>> a.data - array([[1, 2], - [3, 4]]) - - """ - try: - data = a._data - except AttributeError: - data = np.array(a, copy=False, subok=subok) - if not subok: - return data.view(ndarray) - return data -get_data = getdata - - -def fix_invalid(a, mask=nomask, copy=True, fill_value=None): - """ - Return input with invalid data masked and replaced by a fill value. - - Invalid data means values of `nan`, `inf`, etc. - - Parameters - ---------- - a : array_like - Input array, a (subclass of) ndarray. - copy : bool, optional - Whether to use a copy of `a` (True) or to fix `a` in place (False). - Default is True. - fill_value : scalar, optional - Value used for fixing invalid data. Default is None, in which case - the ``a.fill_value`` is used. - - Returns - ------- - b : MaskedArray - The input array with invalid entries fixed. - - Notes - ----- - A copy is performed by default. 
- - Examples - -------- - >>> x = np.ma.array([1., -1, np.nan, np.inf], mask=[1] + [0]*3) - >>> x - masked_array(data = [-- -1.0 nan inf], - mask = [ True False False False], - fill_value = 1e+20) - >>> np.ma.fix_invalid(x) - masked_array(data = [-- -1.0 -- --], - mask = [ True False True True], - fill_value = 1e+20) - - >>> fixed = np.ma.fix_invalid(x) - >>> fixed.data - array([ 1.00000000e+00, -1.00000000e+00, 1.00000000e+20, - 1.00000000e+20]) - >>> x.data - array([ 1., -1., NaN, Inf]) - - """ - a = masked_array(a, copy=copy, mask=mask, subok=True) - #invalid = (numpy.isnan(a._data) | numpy.isinf(a._data)) - invalid = np.logical_not(np.isfinite(a._data)) - if not invalid.any(): - return a - a._mask |= invalid - if fill_value is None: - fill_value = a.fill_value - a._data[invalid] = fill_value - return a - - - -#####-------------------------------------------------------------------------- -#---- --- Ufuncs --- -#####-------------------------------------------------------------------------- -ufunc_domain = {} -ufunc_fills = {} - -class _DomainCheckInterval: - """ - Define a valid interval, so that : - - ``domain_check_interval(a,b)(x) == True`` where - ``x < a`` or ``x > b``. - - """ - def __init__(self, a, b): - "domain_check_interval(a,b)(x) = true where x < a or x > b" - if (a > b): - (a, b) = (b, a) - self.a = a - self.b = b - - def __call__ (self, x): - "Execute the call behavior." - return umath.logical_or(umath.greater (x, self.b), - umath.less(x, self.a)) - - - -class _DomainTan: - """Define a valid interval for the `tan` function, so that: - - ``domain_tan(eps) = True`` where ``abs(cos(x)) < eps`` - - """ - def __init__(self, eps): - "domain_tan(eps) = true where abs(cos(x)) < eps" - self.eps = eps - - def __call__ (self, x): - "Executes the call behavior."
- return umath.less(umath.absolute(umath.cos(x)), self.eps) - - - -class _DomainSafeDivide: - """Define a domain for safe division.""" - def __init__ (self, tolerance=None): - self.tolerance = tolerance - - def __call__ (self, a, b): - # Delay the selection of the tolerance to here in order to reduce numpy - # import times. The calculation of these parameters is a substantial - # component of numpy's import time. - if self.tolerance is None: - self.tolerance = np.finfo(float).tiny - return umath.absolute(a) * self.tolerance >= umath.absolute(b) - - - -class _DomainGreater: - """DomainGreater(v)(x) is True where x <= v.""" - def __init__(self, critical_value): - "DomainGreater(v)(x) = true where x <= v" - self.critical_value = critical_value - - def __call__ (self, x): - "Executes the call behavior." - return umath.less_equal(x, self.critical_value) - - - -class _DomainGreaterEqual: - """DomainGreaterEqual(v)(x) is True where x < v.""" - def __init__(self, critical_value): - "DomainGreaterEqual(v)(x) = true where x < v" - self.critical_value = critical_value - - def __call__ (self, x): - "Executes the call behavior." - return umath.less(x, self.critical_value) - -#.............................................................................. -class _MaskedUnaryOperation: - """ - Defines masked version of unary operations, where invalid values are - pre-masked. - - Parameters - ---------- - mufunc : callable - The function for which to define a masked version. Made available - as ``_MaskedUnaryOperation.f``. - fill : scalar, optional - Filling value, default is 0. - domain : class instance - Domain for the function. Should be one of the ``_Domain*`` - classes. Default is None. - - """ - def __init__ (self, mufunc, fill=0, domain=None): - """ _MaskedUnaryOperation(aufunc, fill=0, domain=None) - aufunc(fill) must be defined - self(x) returns aufunc(x) - with masked values where domain(x) is true or getmask(x) is true. 
-        """
-        self.f = mufunc
-        self.fill = fill
-        self.domain = domain
-        self.__doc__ = getattr(mufunc, "__doc__", str(mufunc))
-        self.__name__ = getattr(mufunc, "__name__", str(mufunc))
-        ufunc_domain[mufunc] = domain
-        ufunc_fills[mufunc] = fill
-    #
-    def __call__ (self, a, *args, **kwargs):
-        "Execute the call behavior."
-        d = getdata(a)
-        # Case 1.1. : Domained function
-        if self.domain is not None:
-            # Save the error status
-            err_status_ini = np.geterr()
-            try:
-                np.seterr(divide='ignore', invalid='ignore')
-                result = self.f(d, *args, **kwargs)
-            finally:
-                np.seterr(**err_status_ini)
-            # Make a mask
-            m = ~umath.isfinite(result)
-            m |= self.domain(d)
-            m |= getmask(a)
-        # Case 1.2. : Function without a domain
-        else:
-            # Get the result and the mask
-            result = self.f(d, *args, **kwargs)
-            m = getmask(a)
-        # Case 2.1. : The result is a scalar
-        if not result.ndim:
-            if m:
-                return masked
-            return result
-        # Case 2.2. : The result is an array
-        # We need to fill the invalid data back w/ the input.
-        # Now, that's plain silly: in C, we would just skip the element and
-        # keep the original, but we have to do it this way in Python.
-        if m is not nomask:
-            # In case result has a lower dtype than the inputs (as in equal)
-            try:
-                np.putmask(result, m, d)
-            except TypeError:
-                pass
-        # Transform the result to a (subclass of) MaskedArray
-        if isinstance(a, MaskedArray):
-            subtype = type(a)
-        else:
-            subtype = MaskedArray
-        result = result.view(subtype)
-        result._mask = m
-        result._update_from(a)
-        return result
-    #
-    def __str__ (self):
-        return "Masked version of %s. [Invalid values are masked]" % str(self.f)
-
-
-
-class _MaskedBinaryOperation:
-    """
-    Define a masked version of binary operations, where invalid
-    values are pre-masked.
-
-    Parameters
-    ----------
-    mbfunc : function
-        The function for which to define a masked version. Made available
-        as ``_MaskedBinaryOperation.f``.
- fillx : scalar, optional - Filling value for the first argument, default is 0. - filly : scalar, optional - Filling value for the second argument, default is 0. - - """ - def __init__ (self, mbfunc, fillx=0, filly=0): - """abfunc(fillx, filly) must be defined. - abfunc(x, filly) = x for all x to enable reduce. - """ - self.f = mbfunc - self.fillx = fillx - self.filly = filly - self.__doc__ = getattr(mbfunc, "__doc__", str(mbfunc)) - self.__name__ = getattr(mbfunc, "__name__", str(mbfunc)) - ufunc_domain[mbfunc] = None - ufunc_fills[mbfunc] = (fillx, filly) - - def __call__ (self, a, b, *args, **kwargs): - "Execute the call behavior." - # Get the data, as ndarray - (da, db) = (getdata(a, subok=False), getdata(b, subok=False)) - # Get the mask - (ma, mb) = (getmask(a), getmask(b)) - if ma is nomask: - if mb is nomask: - m = nomask - else: - m = umath.logical_or(getmaskarray(a), mb) - elif mb is nomask: - m = umath.logical_or(ma, getmaskarray(b)) - else: - m = umath.logical_or(ma, mb) - # Get the result - err_status_ini = np.geterr() - try: - np.seterr(divide='ignore', invalid='ignore') - result = self.f(da, db, *args, **kwargs) - finally: - np.seterr(**err_status_ini) - # Case 1. : scalar - if not result.ndim: - if m: - return masked - return result - # Case 2. 
: array
-        # Revert result to da where masked
-        if m.any():
-            np.putmask(result, m, 0)
-            result += m * da
-        # Transforms to a (subclass of) MaskedArray
-        result = result.view(get_masked_subclass(a, b))
-        result._mask = m
-        # Update the optional info from the inputs
-        if isinstance(b, MaskedArray):
-            if isinstance(a, MaskedArray):
-                result._update_from(a)
-            else:
-                result._update_from(b)
-        elif isinstance(a, MaskedArray):
-            result._update_from(a)
-        return result
-
-
-    def reduce(self, target, axis=0, dtype=None):
-        """Reduce `target` along the given `axis`."""
-        if isinstance(target, MaskedArray):
-            tclass = type(target)
-        else:
-            tclass = MaskedArray
-        m = getmask(target)
-        t = filled(target, self.filly)
-        if t.shape == ():
-            t = t.reshape(1)
-            if m is not nomask:
-                m = make_mask(m, copy=1)
-                m.shape = (1,)
-        if m is nomask:
-            return self.f.reduce(t, axis).view(tclass)
-        t = t.view(tclass)
-        t._mask = m
-        tr = self.f.reduce(getdata(t), axis, dtype=dtype or t.dtype)
-        mr = umath.logical_and.reduce(m, axis)
-        tr = tr.view(tclass)
-        if mr.ndim > 0:
-            tr._mask = mr
-            return tr
-        elif mr:
-            return masked
-        return tr
-
-    def outer (self, a, b):
-        """Return the function applied to the outer product of a and b.
-
-        """
-        ma = getmask(a)
-        mb = getmask(b)
-        if ma is nomask and mb is nomask:
-            m = nomask
-        else:
-            ma = getmaskarray(a)
-            mb = getmaskarray(b)
-            m = umath.logical_or.outer(ma, mb)
-        if (not m.ndim) and m:
-            return masked
-        (da, db) = (getdata(a), getdata(b))
-        d = self.f.outer(da, db)
-        if m is not nomask:
-            np.putmask(d, m, da)
-        if d.shape:
-            d = d.view(get_masked_subclass(a, b))
-            d._mask = m
-        return d
-
-    def accumulate (self, target, axis=0):
-        """Accumulate `target` along `axis` after filling with the `filly`
-        fill value.
- - """ - if isinstance(target, MaskedArray): - tclass = type(target) - else: - tclass = MaskedArray - t = filled(target, self.filly) - return self.f.accumulate(t, axis).view(tclass) - - def __str__ (self): - return "Masked version of " + str(self.f) - - - -class _DomainedBinaryOperation: - """ - Define binary operations that have a domain, like divide. - - They have no reduce, outer or accumulate. - - Parameters - ---------- - mbfunc : function - The function for which to define a masked version. Made available - as ``_DomainedBinaryOperation.f``. - domain : class instance - Default domain for the function. Should be one of the ``_Domain*`` - classes. - fillx : scalar, optional - Filling value for the first argument, default is 0. - filly : scalar, optional - Filling value for the second argument, default is 0. - - """ - def __init__ (self, dbfunc, domain, fillx=0, filly=0): - """abfunc(fillx, filly) must be defined. - abfunc(x, filly) = x for all x to enable reduce. - """ - self.f = dbfunc - self.domain = domain - self.fillx = fillx - self.filly = filly - self.__doc__ = getattr(dbfunc, "__doc__", str(dbfunc)) - self.__name__ = getattr(dbfunc, "__name__", str(dbfunc)) - ufunc_domain[dbfunc] = domain - ufunc_fills[dbfunc] = (fillx, filly) - - def __call__(self, a, b, *args, **kwargs): - "Execute the call behavior." 
- # Get the data and the mask - (da, db) = (getdata(a, subok=False), getdata(b, subok=False)) - (ma, mb) = (getmask(a), getmask(b)) - # Get the result - err_status_ini = np.geterr() - try: - np.seterr(divide='ignore', invalid='ignore') - result = self.f(da, db, *args, **kwargs) - finally: - np.seterr(**err_status_ini) - # Get the mask as a combination of ma, mb and invalid - m = ~umath.isfinite(result) - m |= ma - m |= mb - # Apply the domain - domain = ufunc_domain.get(self.f, None) - if domain is not None: - m |= filled(domain(da, db), True) - # Take care of the scalar case first - if (not m.ndim): - if m: - return masked - else: - return result - # When the mask is True, put back da - np.putmask(result, m, 0) - result += m * da - result = result.view(get_masked_subclass(a, b)) - result._mask = m - if isinstance(b, MaskedArray): - if isinstance(a, MaskedArray): - result._update_from(a) - else: - result._update_from(b) - elif isinstance(a, MaskedArray): - result._update_from(a) - return result - - def __str__ (self): - return "Masked version of " + str(self.f) - -#.............................................................................. 
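The `_MaskedUnaryOperation`, `_MaskedBinaryOperation` and `_DomainedBinaryOperation` wrappers above are what back the public `numpy.ma` ufuncs defined next. A minimal sketch of the behavior they implement, runnable outside this module (it assumes only that `numpy` is installed; exact reprs vary between numpy versions):

```python
import numpy as np
import numpy.ma as ma

# Domained unary ufunc: log is only defined for x > 0, so the wrapper
# masks out-of-domain entries instead of raising or returning nan.
x = ma.array([1.0, -1.0, np.e])
r = ma.log(x)
print(ma.getmaskarray(r))   # only the -1.0 entry comes back masked

# Binary ufunc: the result mask is the logical OR of the operand masks.
a = ma.array([1, 2, 3], mask=[False, True, False])
b = ma.array([10, 20, 30], mask=[False, False, True])
print(ma.add(a, b))         # positions 1 and 2 are masked in the sum
```

The domained binary variants (`divide`, `remainder`, ...) follow the same pattern, additionally masking wherever the domain object flags the operands (e.g. a near-zero divisor for `_DomainSafeDivide`).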
-# Unary ufuncs -exp = _MaskedUnaryOperation(umath.exp) -conjugate = _MaskedUnaryOperation(umath.conjugate) -sin = _MaskedUnaryOperation(umath.sin) -cos = _MaskedUnaryOperation(umath.cos) -tan = _MaskedUnaryOperation(umath.tan) -arctan = _MaskedUnaryOperation(umath.arctan) -arcsinh = _MaskedUnaryOperation(umath.arcsinh) -sinh = _MaskedUnaryOperation(umath.sinh) -cosh = _MaskedUnaryOperation(umath.cosh) -tanh = _MaskedUnaryOperation(umath.tanh) -abs = absolute = _MaskedUnaryOperation(umath.absolute) -fabs = _MaskedUnaryOperation(umath.fabs) -negative = _MaskedUnaryOperation(umath.negative) -floor = _MaskedUnaryOperation(umath.floor) -ceil = _MaskedUnaryOperation(umath.ceil) -around = _MaskedUnaryOperation(np.round_) -logical_not = _MaskedUnaryOperation(umath.logical_not) -# Domained unary ufuncs ....................................................... -sqrt = _MaskedUnaryOperation(umath.sqrt, 0.0, - _DomainGreaterEqual(0.0)) -log = _MaskedUnaryOperation(umath.log, 1.0, - _DomainGreater(0.0)) -log2 = _MaskedUnaryOperation(umath.log2, 1.0, - _DomainGreater(0.0)) -log10 = _MaskedUnaryOperation(umath.log10, 1.0, - _DomainGreater(0.0)) -tan = _MaskedUnaryOperation(umath.tan, 0.0, - _DomainTan(1e-35)) -arcsin = _MaskedUnaryOperation(umath.arcsin, 0.0, - _DomainCheckInterval(-1.0, 1.0)) -arccos = _MaskedUnaryOperation(umath.arccos, 0.0, - _DomainCheckInterval(-1.0, 1.0)) -arccosh = _MaskedUnaryOperation(umath.arccosh, 1.0, - _DomainGreaterEqual(1.0)) -arctanh = _MaskedUnaryOperation(umath.arctanh, 0.0, - _DomainCheckInterval(-1.0 + 1e-15, 1.0 - 1e-15)) -# Binary ufuncs ............................................................... 
-add = _MaskedBinaryOperation(umath.add) -subtract = _MaskedBinaryOperation(umath.subtract) -multiply = _MaskedBinaryOperation(umath.multiply, 1, 1) -arctan2 = _MaskedBinaryOperation(umath.arctan2, 0.0, 1.0) -equal = _MaskedBinaryOperation(umath.equal) -equal.reduce = None -not_equal = _MaskedBinaryOperation(umath.not_equal) -not_equal.reduce = None -less_equal = _MaskedBinaryOperation(umath.less_equal) -less_equal.reduce = None -greater_equal = _MaskedBinaryOperation(umath.greater_equal) -greater_equal.reduce = None -less = _MaskedBinaryOperation(umath.less) -less.reduce = None -greater = _MaskedBinaryOperation(umath.greater) -greater.reduce = None -logical_and = _MaskedBinaryOperation(umath.logical_and) -alltrue = _MaskedBinaryOperation(umath.logical_and, 1, 1).reduce -logical_or = _MaskedBinaryOperation(umath.logical_or) -sometrue = logical_or.reduce -logical_xor = _MaskedBinaryOperation(umath.logical_xor) -bitwise_and = _MaskedBinaryOperation(umath.bitwise_and) -bitwise_or = _MaskedBinaryOperation(umath.bitwise_or) -bitwise_xor = _MaskedBinaryOperation(umath.bitwise_xor) -hypot = _MaskedBinaryOperation(umath.hypot) -# Domained binary ufuncs ...................................................... 
-divide = _DomainedBinaryOperation(umath.divide, _DomainSafeDivide(), 0, 1)
-true_divide = _DomainedBinaryOperation(umath.true_divide,
-                                       _DomainSafeDivide(), 0, 1)
-floor_divide = _DomainedBinaryOperation(umath.floor_divide,
-                                        _DomainSafeDivide(), 0, 1)
-remainder = _DomainedBinaryOperation(umath.remainder,
-                                     _DomainSafeDivide(), 0, 1)
-fmod = _DomainedBinaryOperation(umath.fmod, _DomainSafeDivide(), 0, 1)
-mod = _DomainedBinaryOperation(umath.mod, _DomainSafeDivide(), 0, 1)
-
-
-#####--------------------------------------------------------------------------
-#---- --- Mask creation functions ---
-#####--------------------------------------------------------------------------
-
-def _recursive_make_descr(datatype, newtype=bool_):
-    "Private function allowing recursion in make_descr."
-    # Do we have some name fields ?
-    if datatype.names:
-        descr = []
-        for name in datatype.names:
-            field = datatype.fields[name]
-            if len(field) == 3:
-                # Prepend the title to the name
-                name = (field[-1], name)
-            descr.append((name, _recursive_make_descr(field[0], newtype)))
-        return descr
-    # Is this some kind of composite a la (np.float, 2)?
-    elif datatype.subdtype:
-        mdescr = list(datatype.subdtype)
-        mdescr[0] = newtype
-        return tuple(mdescr)
-    else:
-        return newtype
-
-def make_mask_descr(ndtype):
-    """
-    Construct a dtype description list from a given dtype.
-
-    Returns a new dtype object, with the type of all fields in `ndtype`
-    converted to a boolean type. Field names are not altered.
-
-    Parameters
-    ----------
-    ndtype : dtype
-        The dtype to convert.
-
-    Returns
-    -------
-    result : dtype
-        A dtype that looks like `ndtype`, the type of all fields is boolean.
-
-    Examples
-    --------
-    >>> import numpy.ma as ma
-    >>> dtype = np.dtype({'names':['foo', 'bar'],
-                          'formats':[np.float32, np.int]})
-    >>> dtype
-    dtype([('foo', '<f4'), ('bar', '<i4')])
-    >>> ma.make_mask_descr(dtype)
-    dtype([('foo', '|b1'), ('bar', '|b1')])
-    >>> ma.make_mask_descr(np.float32)
-    <type 'numpy.bool_'>
-
-    """
-    # Make sure we do have a dtype
-    if not isinstance(ndtype, np.dtype):
-        ndtype = np.dtype(ndtype)
-    return np.dtype(_recursive_make_descr(ndtype, np.bool))
-
-def getmask(a):
-    """
-    Return the mask of a masked array, or nomask.
-
-    Return the mask of `a` as an ndarray if `a` is a `MaskedArray` and the
-    mask is not `nomask`, else return `nomask`. To guarantee a full array
-    of booleans of the same shape as `a`, use `getmaskarray`.
-
-    Parameters
-    ----------
-    a : array_like
-        Input `MaskedArray` for which the mask is required.
-
-    See Also
-    --------
-    getdata : Return the data of a masked array as an ndarray.
-    getmaskarray : Return the mask of a masked array, or full array of False.
-
-    Examples
-    --------
-    >>> import numpy.ma as ma
-    >>> a = ma.masked_equal([[1,2],[3,4]], 2)
-    >>> a
-    masked_array(data =
-     [[1 --]
-     [3 4]],
-          mask =
-     [[False  True]
-     [False False]],
-          fill_value=999999)
-    >>> ma.getmask(a)
-    array([[False,  True],
-           [False, False]], dtype=bool)
-
-    Equivalently use the `MaskedArray` `mask` attribute.
-
-    >>> a.mask
-    array([[False,  True],
-           [False, False]], dtype=bool)
-
-    Result when mask == `nomask`
-
-    >>> b = ma.masked_array([[1,2],[3,4]])
-    >>> b
-    masked_array(data =
-     [[1 2]
-     [3 4]],
-          mask =
-     False,
-          fill_value=999999)
-    >>> ma.nomask
-    False
-    >>> ma.getmask(b) == ma.nomask
-    True
-    >>> b.mask == ma.nomask
-    True
-
-    """
-    return getattr(a, '_mask', nomask)
-get_mask = getmask
-
-def getmaskarray(arr):
-    """
-    Return the mask of a masked array, or full boolean array of False.
-
-    Return the mask of `arr` as an ndarray if `arr` is a `MaskedArray` and
-    the mask is not `nomask`, else return a full boolean array of False of
-    the same shape as `arr`.
-
-    Parameters
-    ----------
-    arr : array_like
-        Input `MaskedArray` for which the mask is required.
-
-    See Also
-    --------
-    getmask : Return the mask of a masked array, or nomask.
-    getdata : Return the data of a masked array as an ndarray.
-
-    Examples
-    --------
-    >>> import numpy.ma as ma
-    >>> a = ma.masked_equal([[1,2],[3,4]], 2)
-    >>> a
-    masked_array(data =
-     [[1 --]
-     [3 4]],
-          mask =
-     [[False  True]
-     [False False]],
-          fill_value=999999)
-    >>> ma.getmaskarray(a)
-    array([[False,  True],
-           [False, False]], dtype=bool)
-
-    Result when mask == ``nomask``
-
-    >>> b = ma.masked_array([[1,2],[3,4]])
-    >>> b
-    masked_array(data =
-     [[1 2]
-     [3 4]],
-          mask =
-     False,
-          fill_value=999999)
-    >>> ma.getmaskarray(b)
-    array([[False, False],
-           [False, False]], dtype=bool)
-
-    """
-    mask = getmask(arr)
-    if mask is nomask:
-        mask = make_mask_none(np.shape(arr), getdata(arr).dtype)
-    return mask
-
-def is_mask(m):
-    """
-    Return True if m is a valid, standard mask.
-
-    This function does not check the contents of the input, only that the
-    type is MaskType. In particular, this function returns False if the
-    mask has a flexible dtype.
-
-    Parameters
-    ----------
-    m : array_like
-        Array to test.
-
-    Returns
-    -------
-    result : bool
-        True if `m.dtype.type` is MaskType, False otherwise.
-
-    See Also
-    --------
-    isMaskedArray : Test whether input is an instance of MaskedArray.
-
-    Examples
-    --------
-    >>> import numpy.ma as ma
-    >>> m = ma.masked_equal([0, 1, 0, 2, 3], 0)
-    >>> m
-    masked_array(data = [-- 1 -- 2 3],
-                 mask = [ True False  True False False],
-           fill_value=999999)
-    >>> ma.is_mask(m)
-    False
-    >>> ma.is_mask(m.mask)
-    True
-
-    Input must be an ndarray (or have similar attributes)
-    for it to be considered a valid mask.
-
-    >>> m = [False, True, False]
-    >>> ma.is_mask(m)
-    False
-    >>> m = np.array([False, True, False])
-    >>> m
-    array([False,  True, False], dtype=bool)
-    >>> ma.is_mask(m)
-    True
-
-    Arrays with flexible dtypes don't return True.
-
-    >>> dtype = np.dtype({'names':['monty', 'pithon'],
-                          'formats':[np.bool, np.bool]})
-    >>> dtype
-    dtype([('monty', '|b1'), ('pithon', '|b1')])
-    >>> m = np.array([(True, False), (False, True), (True, False)],
-                     dtype=dtype)
-    >>> m
-    array([(True, False), (False, True), (True, False)],
-          dtype=[('monty', '|b1'), ('pithon', '|b1')])
-    >>> ma.is_mask(m)
-    False
-
-    """
-    try:
-        return m.dtype.type is MaskType
-    except AttributeError:
-        return False
-
-def make_mask(m, copy=False, shrink=True, flag=None, dtype=MaskType):
-    """
-    Create a boolean mask from an array.
-
-    Return `m` as a boolean mask, creating a copy if necessary or requested.
-    The function can accept any sequence that is convertible to integers,
-    or ``nomask``. The contents are not required to be 0s and 1s: values
-    of 0 are interpreted as False, everything else as True.
-
-    Parameters
-    ----------
-    m : array_like
-        Potential mask.
-    copy : bool, optional
-        Whether to return a copy of `m` (True) or `m` itself (False).
-    shrink : bool, optional
-        Whether to shrink `m` to ``nomask`` if all its values are False.
-    flag : bool, optional
-        Deprecated equivalent of `shrink`.
-    dtype : dtype, optional
-        Data-type of the output mask. By default, the output mask has
-        a dtype of MaskType (bool). If the dtype is flexible, each field
-        has a boolean dtype.
-
-    Returns
-    -------
-    result : ndarray
-        A boolean mask derived from `m`.
-
-    Examples
-    --------
-    >>> import numpy.ma as ma
-    >>> m = [True, False, True, True]
-    >>> ma.make_mask(m)
-    array([ True, False,  True,  True], dtype=bool)
-    >>> m = [1, 0, 1, 1]
-    >>> ma.make_mask(m)
-    array([ True, False,  True,  True], dtype=bool)
-    >>> m = [1, 0, 2, -3]
-    >>> ma.make_mask(m)
-    array([ True, False,  True,  True], dtype=bool)
-
-    Effect of the `shrink` parameter.
-
-    >>> m = np.zeros(4)
-    >>> m
-    array([ 0.,  0.,  0.,  0.])
-    >>> ma.make_mask(m)
-    False
-    >>> ma.make_mask(m, shrink=False)
-    array([False, False, False, False], dtype=bool)
-
-    Using a flexible `dtype`.
-
-    >>> m = [1, 0, 1, 1]
-    >>> n = [0, 1, 0, 0]
-    >>> arr = []
-    >>> for man, mouse in zip(m, n):
-    ...     arr.append((man, mouse))
-    >>> arr
-    [(1, 0), (0, 1), (1, 0), (1, 0)]
-    >>> dtype = np.dtype({'names':['man', 'mouse'],
-                          'formats':[np.int, np.int]})
-    >>> arr = np.array(arr, dtype=dtype)
-    >>> arr
-    array([(1, 0), (0, 1), (1, 0), (1, 0)],
-          dtype=[('man', '<i4'), ('mouse', '<i4')])
-    >>> ma.make_mask(arr, dtype=dtype)
-    array([(True, False), (False, True), (True, False), (True, False)],
-          dtype=[('man', '|b1'), ('mouse', '|b1')])
-
-    """
-    if flag is not None:
-        warnings.warn("The flag 'flag' is now called 'shrink'!",
-                      DeprecationWarning)
-        shrink = flag
-    if m is nomask:
-        return nomask
-    elif isinstance(m, ndarray):
-        # We won't return after this point to make sure we can shrink the mask
-        # Fill the mask in case there are missing data
-        m = filled(m, True)
-        # Make sure the input dtype is valid
-        dtype = make_mask_descr(dtype)
-        if m.dtype == dtype:
-            if copy:
-                result = m.copy()
-            else:
-                result = m
-        else:
-            result = np.array(m, dtype=dtype, copy=copy)
-    else:
-        result = np.array(filled(m, True), dtype=MaskType)
-    # Shrink to nomask if possible ("bas les masques!")
-    if shrink and (not result.dtype.names) and (not result.any()):
-        return nomask
-    else:
-        return result
-
-
-def make_mask_none(newshape, dtype=None):
-    """
-    Return a boolean mask of the given shape, filled with False.
-
-    This function returns a boolean ndarray with all entries False, that can
-    be used in common mask manipulations. If a complex dtype is specified, the
-    type of each field is converted to a boolean type.
-
-    Parameters
-    ----------
-    newshape : tuple
-        A tuple indicating the shape of the mask.
-    dtype : {None, dtype}, optional
-        If None, use a MaskType instance.
Otherwise, use a new datatype with
-        the same fields as `dtype`, converted to boolean types.
-
-    Returns
-    -------
-    result : ndarray
-        An ndarray of appropriate shape and dtype, filled with False.
-
-    See Also
-    --------
-    make_mask : Create a boolean mask from an array.
-    make_mask_descr : Construct a dtype description list from a given dtype.
-
-    Examples
-    --------
-    >>> import numpy.ma as ma
-    >>> ma.make_mask_none((3,))
-    array([False, False, False], dtype=bool)
-
-    Defining a more complex dtype.
-
-    >>> dtype = np.dtype({'names':['foo', 'bar'],
-                          'formats':[np.float32, np.int]})
-    >>> dtype
-    dtype([('foo', '<f4'), ('bar', '<i4')])
-    >>> ma.make_mask_none((3,), dtype=dtype)
-    array([(False, False), (False, False), (False, False)],
-          dtype=[('foo', '|b1'), ('bar', '|b1')])
-
-    """
-    if dtype is None:
-        result = np.zeros(newshape, dtype=MaskType)
-    else:
-        result = np.zeros(newshape, dtype=make_mask_descr(dtype))
-    return result
-
-def mask_or (m1, m2, copy=False, shrink=True):
-    """
-    Combine two masks with the ``logical_or`` operator.
-
-    The result may be a view on `m1` or `m2` if the other is `nomask`
-    (i.e. False).
-
-    Parameters
-    ----------
-    m1, m2 : array_like
-        Input masks.
-    copy : bool, optional
-        If copy is False and one of the inputs is `nomask`, return a view
-        of the other input mask. Defaults to False.
-    shrink : bool, optional
-        Whether to shrink the output to `nomask` if all its values are
-        False. Defaults to True.
-
-    Returns
-    -------
-    mask : output mask
-        The result masks values that are masked in either `m1` or `m2`.
-
-    Raises
-    ------
-    ValueError
-        If `m1` and `m2` have different flexible dtypes.
- - Examples - -------- - >>> m1 = np.ma.make_mask([0, 1, 1, 0]) - >>> m2 = np.ma.make_mask([1, 0, 0, 0]) - >>> np.ma.mask_or(m1, m2) - array([ True, True, True, False], dtype=bool) - - """ - def _recursive_mask_or(m1, m2, newmask): - names = m1.dtype.names - for name in names: - current1 = m1[name] - if current1.dtype.names: - _recursive_mask_or(current1, m2[name], newmask[name]) - else: - umath.logical_or(current1, m2[name], newmask[name]) - return - # - if (m1 is nomask) or (m1 is False): - dtype = getattr(m2, 'dtype', MaskType) - return make_mask(m2, copy=copy, shrink=shrink, dtype=dtype) - if (m2 is nomask) or (m2 is False): - dtype = getattr(m1, 'dtype', MaskType) - return make_mask(m1, copy=copy, shrink=shrink, dtype=dtype) - if m1 is m2 and is_mask(m1): - return m1 - (dtype1, dtype2) = (getattr(m1, 'dtype', None), getattr(m2, 'dtype', None)) - if (dtype1 != dtype2): - raise ValueError("Incompatible dtypes '%s'<>'%s'" % (dtype1, dtype2)) - if dtype1.names: - newmask = np.empty_like(m1) - _recursive_mask_or(m1, m2, newmask) - return newmask - return make_mask(umath.logical_or(m1, m2), copy=copy, shrink=shrink) - - -def flatten_mask(mask): - """ - Returns a completely flattened version of the mask, where nested fields - are collapsed. - - Parameters - ---------- - mask : array_like - Input array, which will be interpreted as booleans. - - Returns - ------- - flattened_mask : ndarray of bools - The flattened input. 
- - Examples - -------- - >>> mask = np.array([0, 0, 1], dtype=np.bool) - >>> flatten_mask(mask) - array([False, False, True], dtype=bool) - - >>> mask = np.array([(0, 0), (0, 1)], dtype=[('a', bool), ('b', bool)]) - >>> flatten_mask(mask) - array([False, False, False, True], dtype=bool) - - >>> mdtype = [('a', bool), ('b', [('ba', bool), ('bb', bool)])] - >>> mask = np.array([(0, (0, 0)), (0, (0, 1))], dtype=mdtype) - >>> flatten_mask(mask) - array([False, False, False, False, False, True], dtype=bool) - - """ - # - def _flatmask(mask): - "Flatten the mask and returns a (maybe nested) sequence of booleans." - mnames = mask.dtype.names - if mnames: - return [flatten_mask(mask[name]) for name in mnames] - else: - return mask - # - def _flatsequence(sequence): - "Generates a flattened version of the sequence." - try: - for element in sequence: - if hasattr(element, '__iter__'): - for f in _flatsequence(element): - yield f - else: - yield element - except TypeError: - yield sequence - # - mask = np.asarray(mask) - flattened = _flatsequence(_flatmask(mask)) - return np.array([_ for _ in flattened], dtype=bool) - - -def _check_mask_axis(mask, axis): - "Check whether there are masked values along the given axis" - if mask is not nomask: - return mask.all(axis=axis) - return nomask - - -#####-------------------------------------------------------------------------- -#--- --- Masking functions --- -#####-------------------------------------------------------------------------- - -def masked_where(condition, a, copy=True): - """ - Mask an array where a condition is met. - - Return `a` as an array masked where `condition` is True. - Any masked values of `a` or `condition` are also masked in the output. - - Parameters - ---------- - condition : array_like - Masking condition. When `condition` tests floating point values for - equality, consider using ``masked_values`` instead. - a : array_like - Array to mask. 
- copy : bool - If True (default) make a copy of `a` in the result. If False modify - `a` in place and return a view. - - Returns - ------- - result : MaskedArray - The result of masking `a` where `condition` is True. - - See Also - -------- - masked_values : Mask using floating point equality. - masked_equal : Mask where equal to a given value. - masked_not_equal : Mask where `not` equal to a given value. - masked_less_equal : Mask where less than or equal to a given value. - masked_greater_equal : Mask where greater than or equal to a given value. - masked_less : Mask where less than a given value. - masked_greater : Mask where greater than a given value. - masked_inside : Mask inside a given interval. - masked_outside : Mask outside a given interval. - masked_invalid : Mask invalid values (NaNs or infs). - - Examples - -------- - >>> import numpy.ma as ma - >>> a = np.arange(4) - >>> a - array([0, 1, 2, 3]) - >>> ma.masked_where(a <= 2, a) - masked_array(data = [-- -- -- 3], - mask = [ True True True False], - fill_value=999999) - - Mask array `b` conditional on `a`. - - >>> b = ['a', 'b', 'c', 'd'] - >>> ma.masked_where(a == 2, b) - masked_array(data = [a b -- d], - mask = [False False True False], - fill_value=N/A) - - Effect of the `copy` argument. - - >>> c = ma.masked_where(a <= 2, a) - >>> c - masked_array(data = [-- -- -- 3], - mask = [ True True True False], - fill_value=999999) - >>> c[0] = 99 - >>> c - masked_array(data = [99 -- -- 3], - mask = [False True True False], - fill_value=999999) - >>> a - array([0, 1, 2, 3]) - >>> c = ma.masked_where(a <= 2, a, copy=False) - >>> c[0] = 99 - >>> c - masked_array(data = [99 -- -- 3], - mask = [False True True False], - fill_value=999999) - >>> a - array([99, 1, 2, 3]) - - When `condition` or `a` contain masked values. 
-
-    >>> a = np.arange(4)
-    >>> a = ma.masked_where(a == 2, a)
-    >>> a
-    masked_array(data = [0 1 -- 3],
-                 mask = [False False  True False],
-           fill_value=999999)
-    >>> b = np.arange(4)
-    >>> b = ma.masked_where(b == 0, b)
-    >>> b
-    masked_array(data = [-- 1 2 3],
-                 mask = [ True False False False],
-           fill_value=999999)
-    >>> ma.masked_where(a == 3, b)
-    masked_array(data = [-- 1 -- --],
-                 mask = [ True False  True  True],
-           fill_value=999999)
-
-    """
-    # Make sure that condition is a valid standard-type mask.
-    cond = make_mask(condition)
-    a = np.array(a, copy=copy, subok=True)
-
-    (cshape, ashape) = (cond.shape, a.shape)
-    if cshape and cshape != ashape:
-        raise IndexError("Inconsistent shape between the condition and the"
-                         " input (got %s and %s)" % (cshape, ashape))
-    if hasattr(a, '_mask'):
-        cond = mask_or(cond, a._mask)
-        cls = type(a)
-    else:
-        cls = MaskedArray
-    result = a.view(cls)
-    result._mask = cond
-    return result
-
-
-def masked_greater(x, value, copy=True):
-    """
-    Mask an array where greater than a given value.
-
-    This function is a shortcut to ``masked_where``, with
-    `condition` = (x > value).
-
-    See Also
-    --------
-    masked_where : Mask where a condition is met.
-
-    Examples
-    --------
-    >>> import numpy.ma as ma
-    >>> a = np.arange(4)
-    >>> a
-    array([0, 1, 2, 3])
-    >>> ma.masked_greater(a, 2)
-    masked_array(data = [0 1 2 --],
-                 mask = [False False False  True],
-           fill_value=999999)
-
-    """
-    return masked_where(greater(x, value), x, copy=copy)
-
-
-def masked_greater_equal(x, value, copy=True):
-    """
-    Mask an array where greater than or equal to a given value.
-
-    This function is a shortcut to ``masked_where``, with
-    `condition` = (x >= value).
-
-    See Also
-    --------
-    masked_where : Mask where a condition is met.
- - Examples - -------- - >>> import numpy.ma as ma - >>> a = np.arange(4) - >>> a - array([0, 1, 2, 3]) - >>> ma.masked_greater_equal(a, 2) - masked_array(data = [0 1 -- --], - mask = [False False True True], - fill_value=999999) - - """ - return masked_where(greater_equal(x, value), x, copy=copy) - - -def masked_less(x, value, copy=True): - """ - Mask an array where less than a given value. - - This function is a shortcut to ``masked_where``, with - `condition` = (x < value). - - See Also - -------- - masked_where : Mask where a condition is met. - - Examples - -------- - >>> import numpy.ma as ma - >>> a = np.arange(4) - >>> a - array([0, 1, 2, 3]) - >>> ma.masked_less(a, 2) - masked_array(data = [-- -- 2 3], - mask = [ True True False False], - fill_value=999999) - - """ - return masked_where(less(x, value), x, copy=copy) - - -def masked_less_equal(x, value, copy=True): - """ - Mask an array where less than or equal to a given value. - - This function is a shortcut to ``masked_where``, with - `condition` = (x <= value). - - See Also - -------- - masked_where : Mask where a condition is met. - - Examples - -------- - >>> import numpy.ma as ma - >>> a = np.arange(4) - >>> a - array([0, 1, 2, 3]) - >>> ma.masked_less_equal(a, 2) - masked_array(data = [-- -- -- 3], - mask = [ True True True False], - fill_value=999999) - - """ - return masked_where(less_equal(x, value), x, copy=copy) - - -def masked_not_equal(x, value, copy=True): - """ - Mask an array where `not` equal to a given value. - - This function is a shortcut to ``masked_where``, with - `condition` = (x != value). - - See Also - -------- - masked_where : Mask where a condition is met. 
- - Examples - -------- - >>> import numpy.ma as ma - >>> a = np.arange(4) - >>> a - array([0, 1, 2, 3]) - >>> ma.masked_not_equal(a, 2) - masked_array(data = [-- -- 2 --], - mask = [ True True False True], - fill_value=999999) - - """ - return masked_where(not_equal(x, value), x, copy=copy) - - -def masked_equal(x, value, copy=True): - """ - Mask an array where equal to a given value. - - This function is a shortcut to ``masked_where``, with - `condition` = (x == value). For floating point arrays, - consider using ``masked_values(x, value)``. - - See Also - -------- - masked_where : Mask where a condition is met. - masked_values : Mask using floating point equality. - - Examples - -------- - >>> import numpy.ma as ma - >>> a = np.arange(4) - >>> a - array([0, 1, 2, 3]) - >>> ma.masked_equal(a, 2) - masked_array(data = [0 1 -- 3], - mask = [False False True False], - fill_value=999999) - - """ - # An alternative implementation relies on filling first: probably not needed. - # d = filled(x, 0) - # c = umath.equal(d, value) - # m = mask_or(c, getmask(x)) - # return array(d, mask=m, copy=copy) - output = masked_where(equal(x, value), x, copy=copy) - output.fill_value = value - return output - - -def masked_inside(x, v1, v2, copy=True): - """ - Mask an array inside a given interval. - - Shortcut to ``masked_where``, where `condition` is True for `x` inside - the interval [v1,v2] (v1 <= x <= v2). The boundaries `v1` and `v2` - can be given in either order. - - See Also - -------- - masked_where : Mask where a condition is met. - - Notes - ----- - The array `x` is prefilled with its filling value. - - Examples - -------- - >>> import numpy.ma as ma - >>> x = [0.31, 1.2, 0.01, 0.2, -0.4, -1.1] - >>> ma.masked_inside(x, -0.3, 0.3) - masked_array(data = [0.31 1.2 -- -- -0.4 -1.1], - mask = [False False True True False False], - fill_value=1e+20) - - The order of `v1` and `v2` doesn't matter. 
- - >>> ma.masked_inside(x, 0.3, -0.3) - masked_array(data = [0.31 1.2 -- -- -0.4 -1.1], - mask = [False False True True False False], - fill_value=1e+20) - - """ - if v2 < v1: - (v1, v2) = (v2, v1) - xf = filled(x) - condition = (xf >= v1) & (xf <= v2) - return masked_where(condition, x, copy=copy) - - -def masked_outside(x, v1, v2, copy=True): - """ - Mask an array outside a given interval. - - Shortcut to ``masked_where``, where `condition` is True for `x` outside - the interval [v1,v2] (x < v1)|(x > v2). - The boundaries `v1` and `v2` can be given in either order. - - See Also - -------- - masked_where : Mask where a condition is met. - - Notes - ----- - The array `x` is prefilled with its filling value. - - Examples - -------- - >>> import numpy.ma as ma - >>> x = [0.31, 1.2, 0.01, 0.2, -0.4, -1.1] - >>> ma.masked_outside(x, -0.3, 0.3) - masked_array(data = [-- -- 0.01 0.2 -- --], - mask = [ True True False False True True], - fill_value=1e+20) - - The order of `v1` and `v2` doesn't matter. - - >>> ma.masked_outside(x, 0.3, -0.3) - masked_array(data = [-- -- 0.01 0.2 -- --], - mask = [ True True False False True True], - fill_value=1e+20) - - """ - if v2 < v1: - (v1, v2) = (v2, v1) - xf = filled(x) - condition = (xf < v1) | (xf > v2) - return masked_where(condition, x, copy=copy) - - -def masked_object(x, value, copy=True, shrink=True): - """ - Mask the array `x` where the data are exactly equal to value. - - This function is similar to `masked_values`, but only suitable - for object arrays: for floating point, use `masked_values` instead. - - Parameters - ---------- - x : array_like - Array to mask - value : object - Comparison value - copy : {True, False}, optional - Whether to return a copy of `x`. - shrink : {True, False}, optional - Whether to collapse a mask full of False to nomask - - Returns - ------- - result : MaskedArray - The result of masking `x` where equal to `value`. - - See Also - -------- - masked_where : Mask where a condition is met. 
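Because both helpers sort their boundaries first, `masked_inside` and `masked_outside` are exact complements on the same data regardless of argument order. A quick illustrative check using the public `numpy.ma` API:

```python
import numpy.ma as ma

x = [0.31, 1.2, 0.01, 0.2, -0.4, -1.1]
inside = ma.masked_inside(x, -0.3, 0.3)
swapped = ma.masked_inside(x, 0.3, -0.3)   # boundary order is irrelevant
outside = ma.masked_outside(x, -0.3, 0.3)

assert inside.mask.tolist() == swapped.mask.tolist()
# every element is masked by exactly one of the two helpers
assert all(i != o for i, o in zip(inside.mask, outside.mask))
```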
- masked_equal : Mask where equal to a given value (integers). - masked_values : Mask using floating point equality. - - Examples - -------- - >>> import numpy.ma as ma - >>> food = np.array(['green_eggs', 'ham'], dtype=object) - >>> # don't eat spoiled food - >>> eat = ma.masked_object(food, 'green_eggs') - >>> print eat - [-- ham] - >>> # plain ol` ham is boring - >>> fresh_food = np.array(['cheese', 'ham', 'pineapple'], dtype=object) - >>> eat = ma.masked_object(fresh_food, 'green_eggs') - >>> print eat - [cheese ham pineapple] - - Note that `mask` is set to ``nomask`` if possible. - - >>> eat - masked_array(data = [cheese ham pineapple], - mask = False, - fill_value=?) - - """ - if isMaskedArray(x): - condition = umath.equal(x._data, value) - mask = x._mask - else: - condition = umath.equal(np.asarray(x), value) - mask = nomask - mask = mask_or(mask, make_mask(condition, shrink=shrink)) - return masked_array(x, mask=mask, copy=copy, fill_value=value) - - -def masked_values(x, value, rtol=1e-5, atol=1e-8, copy=True, shrink=True): - """ - Mask using floating point equality. - - Return a MaskedArray, masked where the data in array `x` are approximately - equal to `value`, i.e. where the following condition is True - - (abs(x - value) <= atol+rtol*abs(value)) - - The fill_value is set to `value` and the mask is set to ``nomask`` if - possible. For integers, consider using ``masked_equal``. - - Parameters - ---------- - x : array_like - Array to mask. - value : float - Masking value. - rtol : float, optional - Tolerance parameter. - atol : float, optional - Tolerance parameter (1e-8). - copy : bool, optional - Whether to return a copy of `x`. - shrink : bool, optional - Whether to collapse a mask full of False to ``nomask``. - - Returns - ------- - result : MaskedArray - The result of masking `x` where approximately equal to `value`. - - See Also - -------- - masked_where : Mask where a condition is met. - masked_equal : Mask where equal to a given value (integers). 
- - Examples - -------- - >>> import numpy.ma as ma - >>> x = np.array([1, 1.1, 2, 1.1, 3]) - >>> ma.masked_values(x, 1.1) - masked_array(data = [1.0 -- 2.0 -- 3.0], - mask = [False True False True False], - fill_value=1.1) - - Note that `mask` is set to ``nomask`` if possible. - - >>> ma.masked_values(x, 1.5) - masked_array(data = [ 1. 1.1 2. 1.1 3. ], - mask = False, - fill_value=1.5) - - For integers, the fill value will be different in general to the - result of ``masked_equal``. - - >>> x = np.arange(5) - >>> x - array([0, 1, 2, 3, 4]) - >>> ma.masked_values(x, 2) - masked_array(data = [0 1 -- 3 4], - mask = [False False True False False], - fill_value=2) - >>> ma.masked_equal(x, 2) - masked_array(data = [0 1 -- 3 4], - mask = [False False True False False], - fill_value=999999) - - """ - mabs = umath.absolute - xnew = filled(x, value) - if issubclass(xnew.dtype.type, np.floating): - condition = umath.less_equal(mabs(xnew - value), atol + rtol * mabs(value)) - mask = getattr(x, '_mask', nomask) - else: - condition = umath.equal(xnew, value) - mask = nomask - mask = mask_or(mask, make_mask(condition, shrink=shrink)) - return masked_array(xnew, mask=mask, copy=copy, fill_value=value) - - -def masked_invalid(a, copy=True): - """ - Mask an array where invalid values occur (NaNs or infs). - - This function is a shortcut to ``masked_where``, with - `condition` = ~(np.isfinite(a)). Any pre-existing mask is conserved. - Only applies to arrays with a dtype where NaNs or infs make sense - (i.e. floating point types), but accepts any array_like object. - - See Also - -------- - masked_where : Mask where a condition is met. 
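The tolerance condition ``abs(x - value) <= atol + rtol*abs(value)`` means `masked_values` catches near-misses that exact equality would skip, and tightening the tolerances can leave nothing masked. An illustrative sketch:

```python
import numpy as np
import numpy.ma as ma

x = np.array([1.0, 1.1 + 1e-9, 2.0])
# default-ish tolerances: abs(x - value) <= atol + rtol*abs(value)
loose = ma.masked_values(x, 1.1, rtol=1e-5, atol=1e-8)
assert loose.mask.tolist() == [False, True, False]

# with much tighter tolerances nothing matches, so no element is masked
tight = ma.masked_values(x, 1.1, rtol=1e-12, atol=1e-12)
assert not np.any(tight.mask)
```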
- - Examples - -------- - >>> import numpy.ma as ma - >>> a = np.arange(5, dtype=np.float) - >>> a[2] = np.NaN - >>> a[3] = np.PINF - >>> a - array([ 0., 1., NaN, Inf, 4.]) - >>> ma.masked_invalid(a) - masked_array(data = [0.0 1.0 -- -- 4.0], - mask = [False False True True False], - fill_value=1e+20) - - """ - a = np.array(a, copy=copy, subok=True) - mask = getattr(a, '_mask', None) - if mask is not None: - condition = ~(np.isfinite(getdata(a))) - if mask is not nomask: - condition |= mask - cls = type(a) - else: - condition = ~(np.isfinite(a)) - cls = MaskedArray - result = a.view(cls) - result._mask = condition - return result - - -#####-------------------------------------------------------------------------- -#---- --- Printing options --- -#####-------------------------------------------------------------------------- - -class _MaskedPrintOption: - """ - Handle the string used to represent missing data in a masked array. - - """ - def __init__ (self, display): - "Create the masked_print_option object." - self._display = display - self._enabled = True - - def display(self): - "Display the string to print for masked values." - return self._display - - def set_display (self, s): - "Set the string to print for masked values." - self._display = s - - def enabled(self): - "Is the use of the display value enabled?" - return self._enabled - - def enable(self, shrink=1): - "Set the enabling shrink to `shrink`." - self._enabled = shrink - - def __str__ (self): - return str(self._display) - - __repr__ = __str__ - -#if you single index into a masked location you get this object. -masked_print_option = _MaskedPrintOption('--') - - -def _recursive_printoption(result, mask, printopt): - """ - Puts printoptions in result where mask is True. 
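A practical payoff of `masked_invalid` is that reductions then skip the non-finite entries instead of propagating NaN. A small illustrative sketch (modern spellings `np.nan`/`np.inf` used in place of the `np.NaN`/`np.PINF` aliases in the docstring above):

```python
import numpy as np
import numpy.ma as ma

a = np.arange(5, dtype=float)
a[2] = np.nan
a[3] = np.inf
m = ma.masked_invalid(a)
assert m.mask.tolist() == [False, False, True, True, False]
# reductions ignore the masked (non-finite) entries
assert m.sum() == 5.0   # 0 + 1 + 4
```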
- Private function allowing for recursion - """ - names = result.dtype.names - for name in names: - (curdata, curmask) = (result[name], mask[name]) - if curdata.dtype.names: - _recursive_printoption(curdata, curmask, printopt) - else: - np.putmask(curdata, curmask, printopt) - return - -_print_templates = dict(long="""\ -masked_%(name)s(data = - %(data)s, - %(nlen)s mask = - %(mask)s, - %(nlen)s fill_value = %(fill)s) -""", - short="""\ -masked_%(name)s(data = %(data)s, - %(nlen)s mask = %(mask)s, -%(nlen)s fill_value = %(fill)s) -""", - long_flx="""\ -masked_%(name)s(data = - %(data)s, - %(nlen)s mask = - %(mask)s, -%(nlen)s fill_value = %(fill)s, - %(nlen)s dtype = %(dtype)s) -""", - short_flx="""\ -masked_%(name)s(data = %(data)s, -%(nlen)s mask = %(mask)s, -%(nlen)s fill_value = %(fill)s, -%(nlen)s dtype = %(dtype)s) -""") - -#####-------------------------------------------------------------------------- -#---- --- MaskedArray class --- -#####-------------------------------------------------------------------------- - -def _recursive_filled(a, mask, fill_value): - """ - Recursively fill `a` with `fill_value`. - Private function - """ - names = a.dtype.names - for name in names: - current = a[name] - if current.dtype.names: - _recursive_filled(current, mask[name], fill_value[name]) - else: - np.putmask(current, mask[name], fill_value[name]) - - - -def flatten_structured_array(a): - """ - Flatten a structured array. - - The data type of the output is chosen such that it can represent all of the - (nested) fields. - - Parameters - ---------- - a : structured array - - Returns - ------- - output : masked array or ndarray - A flattened masked array if the input is a masked array, otherwise a - standard ndarray. 
- - Examples - -------- - >>> ndtype = [('a', int), ('b', float)] - >>> a = np.array([(1, 1), (2, 2)], dtype=ndtype) - >>> flatten_structured_array(a) - array([[1., 1.], - [2., 2.]]) - - """ - # - def flatten_sequence(iterable): - """Flattens a compound of nested iterables.""" - for elm in iter(iterable): - if hasattr(elm, '__iter__'): - for f in flatten_sequence(elm): - yield f - else: - yield elm - # - a = np.asanyarray(a) - inishape = a.shape - a = a.ravel() - if isinstance(a, MaskedArray): - out = np.array([tuple(flatten_sequence(d.item())) for d in a._data]) - out = out.view(MaskedArray) - out._mask = np.array([tuple(flatten_sequence(d.item())) - for d in getmaskarray(a)]) - else: - out = np.array([tuple(flatten_sequence(d.item())) for d in a]) - if len(inishape) > 1: - newshape = list(out.shape) - newshape[0] = inishape - out.shape = tuple(flatten_sequence(newshape)) - return out - - - -class _arraymethod(object): - """ - Define a wrapper for basic array methods. - - Upon call, returns a masked array, where the new ``_data`` array is - the output of the corresponding method called on the original - ``_data``. - - If `onmask` is True, the new mask is the output of the method called - on the initial mask. Otherwise, the new mask is just a reference - to the initial mask. - - Attributes - ---------- - _onmask : bool - Holds the `onmask` parameter. - obj : object - The object calling `_arraymethod`. - - Parameters - ---------- - funcname : str - Name of the function to apply on data. - onmask : bool - Whether the mask must be processed also (True) or left - alone (False). Default is True. Make available as `_onmask` - attribute. - - """ - def __init__(self, funcname, onmask=True): - self.__name__ = funcname - self._onmask = onmask - self.obj = None - self.__doc__ = self.getdoc() - # - def getdoc(self): - "Return the doc of the function (from the doc of the method)." 
- methdoc = getattr(ndarray, self.__name__, None) or \ - getattr(np, self.__name__, None) - if methdoc is not None: - return methdoc.__doc__ - # - def __get__(self, obj, objtype=None): - self.obj = obj - return self - # - def __call__(self, *args, **params): - methodname = self.__name__ - instance = self.obj - # Fallback : if the instance has not been initialized, use the first arg - if instance is None: - args = list(args) - instance = args.pop(0) - data = instance._data - mask = instance._mask - cls = type(instance) - result = getattr(data, methodname)(*args, **params).view(cls) - result._update_from(instance) - if result.ndim: - if not self._onmask: - result.__setmask__(mask) - elif mask is not nomask: - result.__setmask__(getattr(mask, methodname)(*args, **params)) - else: - if mask.ndim and (not mask.dtype.names and mask.all()): - return masked - return result - - - -class MaskedIterator(object): - """ - Flat iterator object to iterate over masked arrays. - - A `MaskedIterator` iterator is returned by ``x.flat`` for any masked array - `x`. It allows iterating over the array as if it were a 1-D array, - either in a for-loop or by calling its `next` method. - - Iteration is done in C-contiguous style, with the last index varying the - fastest. The iterator can also be indexed using basic slicing or - advanced indexing. - - See Also - -------- - MaskedArray.flat : Return a flat iterator over an array. - MaskedArray.flatten : Returns a flattened copy of an array. - - Notes - ----- - `MaskedIterator` is not exported by the `ma` module. Instead of - instantiating a `MaskedIterator` directly, use `MaskedArray.flat`. - - Examples - -------- - >>> x = np.ma.array(arange(6).reshape(2, 3)) - >>> fl = x.flat - >>> type(fl) - - >>> for item in fl: - ... print item - ... 
-    0
-    1
-    2
-    3
-    4
-    5
-
-    Extracting more than a single element by indexing the `MaskedIterator`
-    returns a masked array:
-
-    >>> fl[2:4]
-    masked_array(data = [2 3],
-                 mask = False,
-           fill_value = 999999)
-
-    """
-    def __init__(self, ma):
-        self.ma = ma
-        self.dataiter = ma._data.flat
-        #
-        if ma._mask is nomask:
-            self.maskiter = None
-        else:
-            self.maskiter = ma._mask.flat
-
-    def __iter__(self):
-        return self
-
-    def __getitem__(self, indx):
-        result = self.dataiter.__getitem__(indx).view(type(self.ma))
-        if self.maskiter is not None:
-            _mask = self.maskiter.__getitem__(indx)
-            _mask.shape = result.shape
-            result._mask = _mask
-        return result
-
-    ### This won't work if ravel makes a copy
-    def __setitem__(self, index, value):
-        self.dataiter[index] = getdata(value)
-        if self.maskiter is not None:
-            self.maskiter[index] = getmaskarray(value)
-
-    def next(self):
-        """
-        Return the next value, or raise StopIteration.
-
-        Examples
-        --------
-        >>> x = np.ma.array([3, 2], mask=[0, 1])
-        >>> fl = x.flat
-        >>> fl.next()
-        3
-        >>> fl.next()
-        masked_array(data = --,
-                     mask = True,
-               fill_value = 1e+20)
-        >>> fl.next()
-        Traceback (most recent call last):
-          File "<stdin>", line 1, in <module>
-          File "/home/ralf/python/numpy/numpy/ma/core.py", line 2243, in next
-            d = self.dataiter.next()
-        StopIteration
-
-        """
-        d = self.dataiter.next()
-        if self.maskiter is not None and self.maskiter.next():
-            d = masked
-        return d
-
-
-
-
-class MaskedArray(ndarray):
-    """
-    An array class with possibly masked values.
-
-    Masked values of True exclude the corresponding element from any
-    computation.
-
-    Construction::
-
-        x = MaskedArray(data, mask=nomask, dtype=None, copy=True,
-                        fill_value=None, keep_mask=True, hard_mask=False,
-                        shrink=True)
-
-    Parameters
-    ----------
-    data : array_like
-        Input data.
-    mask : sequence, optional
-        Mask. Must be convertible to an array of booleans with the same
-        shape as `data`. True indicates a masked (i.e. invalid) datum.
- dtype : dtype, optional - Data type of the output. - If `dtype` is None, the type of the data argument (``data.dtype``) - is used. If `dtype` is not None and different from ``data.dtype``, - a copy is performed. - copy : bool, optional - Whether to copy the input data (True), or to use a reference instead. - Default is False. - subok : bool, optional - Whether to return a subclass of `MaskedArray` if possible (True) or a - plain `MaskedArray`. Default is True. - ndmin : int, optional - Minimum number of dimensions. Default is 0. - fill_value : scalar, optional - Value used to fill in the masked values when necessary. - If None, a default based on the data-type is used. - keep_mask : bool, optional - Whether to combine `mask` with the mask of the input data, if any - (True), or to use only `mask` for the output (False). Default is True. - hard_mask : bool, optional - Whether to use a hard mask or not. With a hard mask, masked values - cannot be unmasked. Default is False. - shrink : bool, optional - Whether to force compression of an empty mask. Default is True. - - """ - - __array_priority__ = 15 - _defaultmask = nomask - _defaulthardmask = False - _baseclass = ndarray - - def __new__(cls, data=None, mask=nomask, dtype=None, copy=False, - subok=True, ndmin=0, fill_value=None, - keep_mask=True, hard_mask=None, flag=None, shrink=True, - **options): - """ - Create a new masked array from scratch. - - Notes - ----- - A masked array can also be created by taking a .view(MaskedArray). - - """ - if flag is not None: - warnings.warn("The flag 'flag' is now called 'shrink'!", - DeprecationWarning) - shrink = flag - # Process data............ - _data = np.array(data, dtype=dtype, copy=copy, subok=True, ndmin=ndmin) - _baseclass = getattr(data, '_baseclass', type(_data)) - # Check that we're not erasing the mask.......... - if isinstance(data, MaskedArray) and (data.shape != _data.shape): - copy = True - # Careful, cls might not always be MaskedArray... 
- if not isinstance(data, cls) or not subok: - _data = ndarray.view(_data, cls) - else: - _data = ndarray.view(_data, type(data)) - # Backwards compatibility w/ numpy.core.ma ....... - if hasattr(data, '_mask') and not isinstance(data, ndarray): - _data._mask = data._mask - _sharedmask = True - # Process mask ............................... - # Number of named fields (or zero if none) - names_ = _data.dtype.names or () - # Type of the mask - if names_: - mdtype = make_mask_descr(_data.dtype) - else: - mdtype = MaskType - # Case 1. : no mask in input ............ - if mask is nomask: - # Erase the current mask ? - if not keep_mask: - # With a reduced version - if shrink: - _data._mask = nomask - # With full version - else: - _data._mask = np.zeros(_data.shape, dtype=mdtype) - # Check whether we missed something - elif isinstance(data, (tuple, list)): - try: - # If data is a sequence of masked array - mask = np.array([getmaskarray(m) for m in data], - dtype=mdtype) - except ValueError: - # If data is nested - mask = nomask - # Force shrinking of the mask if needed (and possible) - if (mdtype == MaskType) and mask.any(): - _data._mask = mask - _data._sharedmask = False - else: - if copy: - _data._mask = _data._mask.copy() - _data._sharedmask = False - # Reset the shape of the original mask - if getmask(data) is not nomask: - data._mask.shape = data.shape - else: - _data._sharedmask = True - # Case 2. : With a mask in input ........ 
- else: - # Read the mask with the current mdtype - try: - mask = np.array(mask, copy=copy, dtype=mdtype) - # Or assume it's a sequence of bool/int - except TypeError: - mask = np.array([tuple([m] * len(mdtype)) for m in mask], - dtype=mdtype) - # Make sure the mask and the data have the same shape - if mask.shape != _data.shape: - (nd, nm) = (_data.size, mask.size) - if nm == 1: - mask = np.resize(mask, _data.shape) - elif nm == nd: - mask = np.reshape(mask, _data.shape) - else: - msg = "Mask and data not compatible: data size is %i, " + \ - "mask size is %i." - raise MaskError, msg % (nd, nm) - copy = True - # Set the mask to the new value - if _data._mask is nomask: - _data._mask = mask - _data._sharedmask = not copy - else: - if not keep_mask: - _data._mask = mask - _data._sharedmask = not copy - else: - if names_: - def _recursive_or(a, b): - "do a|=b on each field of a, recursively" - for name in a.dtype.names: - (af, bf) = (a[name], b[name]) - if af.dtype.names: - _recursive_or(af, bf) - else: - af |= bf - return - _recursive_or(_data._mask, mask) - else: - _data._mask = np.logical_or(mask, _data._mask) - _data._sharedmask = False - # Update fill_value....... - if fill_value is None: - fill_value = getattr(data, '_fill_value', None) - # But don't run the check unless we have something to check.... - if fill_value is not None: - _data._fill_value = _check_fill_value(fill_value, _data.dtype) - # Process extra options .. - if hard_mask is None: - _data._hardmask = getattr(data, '_hardmask', False) - else: - _data._hardmask = hard_mask - _data._baseclass = _baseclass - return _data - # - def _update_from(self, obj): - """Copies some attributes of obj to self. 
- """ - if obj is not None and isinstance(obj, ndarray): - _baseclass = type(obj) - else: - _baseclass = ndarray - # We need to copy the _basedict to avoid backward propagation - _optinfo = {} - _optinfo.update(getattr(obj, '_optinfo', {})) - _optinfo.update(getattr(obj, '_basedict', {})) - if not isinstance(obj, MaskedArray): - _optinfo.update(getattr(obj, '__dict__', {})) - _dict = dict(_fill_value=getattr(obj, '_fill_value', None), - _hardmask=getattr(obj, '_hardmask', False), - _sharedmask=getattr(obj, '_sharedmask', False), - _isfield=getattr(obj, '_isfield', False), - _baseclass=getattr(obj, '_baseclass', _baseclass), - _optinfo=_optinfo, - _basedict=_optinfo) - self.__dict__.update(_dict) - self.__dict__.update(_optinfo) - return - - - def __array_finalize__(self, obj): - """Finalizes the masked array. - """ - # Get main attributes ......... - self._update_from(obj) - if isinstance(obj, ndarray): - odtype = obj.dtype - if odtype.names: - _mask = getattr(obj, '_mask', make_mask_none(obj.shape, odtype)) - else: - _mask = getattr(obj, '_mask', nomask) - else: - _mask = nomask - self._mask = _mask - # Finalize the mask ........... - if self._mask is not nomask: - try: - self._mask.shape = self.shape - except ValueError: - self._mask = nomask - except (TypeError, AttributeError): - # When _mask.shape is not writable (because it's a void) - pass - # Finalize the fill_value for structured arrays - if self.dtype.names: - if self._fill_value is None: - self._fill_value = _check_fill_value(None, self.dtype) - return - - - def __array_wrap__(self, obj, context=None): - """ - Special hook for ufuncs. - Wraps the numpy array and sets the mask according to context. - """ - result = obj.view(type(self)) - result._update_from(self) - #.......... - if context is not None: - result._mask = result._mask.copy() - (func, args, _) = context - m = reduce(mask_or, [getmaskarray(arg) for arg in args]) - # Get the domain mask................ 
- domain = ufunc_domain.get(func, None) - if domain is not None: - # Take the domain, and make sure it's a ndarray - if len(args) > 2: - d = filled(reduce(domain, args), True) - else: - d = filled(domain(*args), True) - # Fill the result where the domain is wrong - try: - # Binary domain: take the last value - fill_value = ufunc_fills[func][-1] - except TypeError: - # Unary domain: just use this one - fill_value = ufunc_fills[func] - except KeyError: - # Domain not recognized, use fill_value instead - fill_value = self.fill_value - result = result.copy() - np.putmask(result, d, fill_value) - # Update the mask - if m is nomask: - if d is not nomask: - m = d - else: - # Don't modify inplace, we risk back-propagation - m = (m | d) - # Make sure the mask has the proper size - if result.shape == () and m: - return masked - else: - result._mask = m - result._sharedmask = False - #.... - return result - - - def view(self, dtype=None, type=None): - if dtype is None: - if type is None: - output = ndarray.view(self) - else: - output = ndarray.view(self, type) - elif type is None: - try: - if issubclass(dtype, ndarray): - output = ndarray.view(self, dtype) - dtype = None - else: - output = ndarray.view(self, dtype) - except TypeError: - output = ndarray.view(self, dtype) - else: - output = ndarray.view(self, dtype, type) - # Should we update the mask ? - if (getattr(output, '_mask', nomask) is not nomask): - if dtype is None: - dtype = output.dtype - mdtype = make_mask_descr(dtype) - output._mask = self._mask.view(mdtype, ndarray) - # Try to reset the shape of the mask (if we don't have a void) - try: - output._mask.shape = output.shape - except (AttributeError, TypeError): - pass - # Make sure to reset the _fill_value if needed - if getattr(output, '_fill_value', None) is not None: - output._fill_value = None - return output - view.__doc__ = ndarray.view.__doc__ - - - def astype(self, newtype): - """ - Returns a copy of the MaskedArray cast to given newtype. 
- - Returns - ------- - output : MaskedArray - A copy of self cast to input newtype. - The returned record shape matches self.shape. - - Examples - -------- - >>> x = np.ma.array([[1,2,3.1],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) - >>> print x - [[1.0 -- 3.1] - [-- 5.0 --] - [7.0 -- 9.0]] - >>> print x.astype(int32) - [[1 -- 3] - [-- 5 --] - [7 -- 9]] - - """ - newtype = np.dtype(newtype) - output = self._data.astype(newtype).view(type(self)) - output._update_from(self) - names = output.dtype.names - if names is None: - output._mask = self._mask.astype(bool) - else: - if self._mask is nomask: - output._mask = nomask - else: - output._mask = self._mask.astype([(n, bool) for n in names]) - # Don't check _fill_value if it's None, that'll speed things up - if self._fill_value is not None: - output._fill_value = _check_fill_value(self._fill_value, newtype) - return output - - - def __getitem__(self, indx): - """x.__getitem__(y) <==> x[y] - - Return the item described by i, as a masked array. - - """ - # This test is useful, but we should keep things light... -# if getmask(indx) is not nomask: -# msg = "Masked arrays must be filled before they can be used as indices!" -# raise IndexError, msg - _data = ndarray.view(self, ndarray) - dout = ndarray.__getitem__(_data, indx) - # We could directly use ndarray.__getitem__ on self... - # But then we would have to modify __array_finalize__ to prevent the - # mask of being reshaped if it hasn't been set up properly yet... - # So it's easier to stick to the current version - _mask = self._mask - if not getattr(dout, 'ndim', False): - # A record ................ - if isinstance(dout, np.void): - mask = _mask[indx] -# If we can make mvoid a subclass of np.void, that'd be what we'd need -# return mvoid(dout, mask=mask) - if flatten_mask(mask).any(): - dout = mvoid(dout, mask=mask) - else: - return dout - # Just a scalar............ - elif _mask is not nomask and _mask[indx]: - return masked - else: - # Force dout to MA ........ 
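The scalar-versus-array branching in `__getitem__` above is visible from user code: indexing an unmasked element returns an ordinary scalar, indexing a masked element returns the `masked` singleton, and slices come back as masked arrays that carry the mask along. An illustrative sketch:

```python
import numpy.ma as ma

x = ma.array([1, 2, 3], mask=[0, 1, 0])
assert x[0] == 1           # unmasked element: ordinary scalar
assert x[1] is ma.masked   # masked element: the `masked` constant
sub = x[1:]                # slicing preserves the mask
assert sub.mask.tolist() == [True, False]
```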
- dout = dout.view(type(self)) - # Inherit attributes from self - dout._update_from(self) - # Check the fill_value .... - if isinstance(indx, basestring): - if self._fill_value is not None: - dout._fill_value = self._fill_value[indx] - dout._isfield = True - # Update the mask if needed - if _mask is not nomask: - dout._mask = _mask[indx] - dout._sharedmask = True -# Note: Don't try to check for m.any(), that'll take too long... - return dout - - def __setitem__(self, indx, value): - """x.__setitem__(i, y) <==> x[i]=y - - Set item described by index. If value is masked, masks those - locations. - - """ - if self is masked: - raise MaskError, 'Cannot alter the masked element.' - # This test is useful, but we should keep things light... -# if getmask(indx) is not nomask: -# msg = "Masked arrays must be filled before they can be used as indices!" -# raise IndexError, msg - _data = ndarray.view(self, ndarray.__getattribute__(self, '_baseclass')) - _mask = ndarray.__getattribute__(self, '_mask') - if isinstance(indx, basestring): - ndarray.__setitem__(_data, indx, value) - if _mask is nomask: - self._mask = _mask = make_mask_none(self.shape, self.dtype) - _mask[indx] = getmask(value) - return - #........................................ - _dtype = ndarray.__getattribute__(_data, 'dtype') - nbfields = len(_dtype.names or ()) - #........................................ - if value is masked: - # The mask wasn't set: create a full version... - if _mask is nomask: - _mask = self._mask = make_mask_none(self.shape, _dtype) - # Now, set the mask to its value. - if nbfields: - _mask[indx] = tuple([True] * nbfields) - else: - _mask[indx] = True - if not self._isfield: - self._sharedmask = False - return - #........................................ 
- # Get the _data part of the new value - dval = value - # Get the _mask part of the new value - mval = getattr(value, '_mask', nomask) - if nbfields and mval is nomask: - mval = tuple([False] * nbfields) - if _mask is nomask: - # Set the data, then the mask - ndarray.__setitem__(_data, indx, dval) - if mval is not nomask: - _mask = self._mask = make_mask_none(self.shape, _dtype) - ndarray.__setitem__(_mask, indx, mval) - elif not self._hardmask: - # Unshare the mask if necessary to avoid propagation - if not self._isfield: - self.unshare_mask() - _mask = ndarray.__getattribute__(self, '_mask') - # Set the data, then the mask - ndarray.__setitem__(_data, indx, dval) - ndarray.__setitem__(_mask, indx, mval) - elif hasattr(indx, 'dtype') and (indx.dtype == MaskType): - indx = indx * umath.logical_not(_mask) - ndarray.__setitem__(_data, indx, dval) - else: - if nbfields: - err_msg = "Flexible 'hard' masks are not yet supported..." - raise NotImplementedError(err_msg) - mindx = mask_or(_mask[indx], mval, copy=True) - dindx = self._data[indx] - if dindx.size > 1: - dindx[~mindx] = dval - elif mindx is nomask: - dindx = dval - ndarray.__setitem__(_data, indx, dindx) - _mask[indx] = mindx - return - - - def __getslice__(self, i, j): - """x.__getslice__(i, j) <==> x[i:j] - - Return the slice described by (i, j). The use of negative - indices is not supported. - - """ - return self.__getitem__(slice(i, j)) - - def __setslice__(self, i, j, value): - """x.__setslice__(i, j, value) <==> x[i:j]=value - - Set the slice (i,j) of a to value. If value is masked, mask - those locations. - - """ - self.__setitem__(slice(i, j), value) - - - def __setmask__(self, mask, copy=False): - """Set the mask. - - """ - idtype = ndarray.__getattribute__(self, 'dtype') - current_mask = ndarray.__getattribute__(self, '_mask') - if mask is masked: - mask = True - # Make sure the mask is set - if (current_mask is nomask): - # Just don't do anything is there's nothing to do... 
- if mask is nomask: - return - current_mask = self._mask = make_mask_none(self.shape, idtype) - # No named fields......... - if idtype.names is None: - # Hardmask: don't unmask the data - if self._hardmask: - current_mask |= mask - # Softmask: set everything to False - else: - current_mask.flat = mask - # Named fields w/ ............ - else: - mdtype = current_mask.dtype - mask = np.array(mask, copy=False) - # Mask is a singleton - if not mask.ndim: - # It's a boolean : make a record - if mask.dtype.kind == 'b': - mask = np.array(tuple([mask.item()]*len(mdtype)), - dtype=mdtype) - # It's a record: make sure the dtype is correct - else: - mask = mask.astype(mdtype) - # Mask is a sequence - else: - # Make sure the new mask is a ndarray with the proper dtype - try: - mask = np.array(mask, copy=copy, dtype=mdtype) - # Or assume it's a sequence of bool/int - except TypeError: - mask = np.array([tuple([m] * len(mdtype)) for m in mask], - dtype=mdtype) - # Hardmask: don't unmask the data - if self._hardmask: - for n in idtype.names: - current_mask[n] |= mask[n] - # Softmask: set everything to False - else: - current_mask.flat = mask - # Reshape if needed - if current_mask.shape: - current_mask.shape = self.shape - return - _set_mask = __setmask__ - #.... - def _get_mask(self): - """Return the current mask. - - """ - # We could try to force a reshape, but that wouldn't work in some cases. -# return self._mask.reshape(self.shape) - return self._mask - mask = property(fget=_get_mask, fset=__setmask__, doc="Mask") - - - def _get_recordmask(self): - """ - Return the mask of the records. - A record is masked when all the fields are masked. - - """ - _mask = ndarray.__getattribute__(self, '_mask').view(ndarray) - if _mask.dtype.names is None: - return _mask - return np.all(flatten_structured_array(_mask), axis= -1) - - - def _set_recordmask(self): - """Return the mask of the records. - A record is masked when all the fields are masked. 
- - """ - raise NotImplementedError("Coming soon: setting the mask per records!") - recordmask = property(fget=_get_recordmask) - - #............................................ - def harden_mask(self): - """ - Force the mask to hard. - - Whether the mask of a masked array is hard or soft is determined by - its `hardmask` property. `harden_mask` sets `hardmask` to True. - - See Also - -------- - hardmask - - """ - self._hardmask = True - return self - - def soften_mask(self): - """ - Force the mask to soft. - - Whether the mask of a masked array is hard or soft is determined by - its `hardmask` property. `soften_mask` sets `hardmask` to False. - - See Also - -------- - hardmask - - """ - self._hardmask = False - return self - - hardmask = property(fget=lambda self: self._hardmask, - doc="Hardness of the mask") - - - def unshare_mask(self): - """ - Copy the mask and set the sharedmask flag to False. - - Whether the mask is shared between masked arrays can be seen from - the `sharedmask` property. `unshare_mask` ensures the mask is not shared. - A copy of the mask is only made if it was shared. - - See Also - -------- - sharedmask - - """ - if self._sharedmask: - self._mask = self._mask.copy() - self._sharedmask = False - return self - - sharedmask = property(fget=lambda self: self._sharedmask, - doc="Share status of the mask (read-only).") - - def shrink_mask(self): - """ - Reduce a mask to nomask when possible. - - Parameters - ---------- - None - - Returns - ------- - None - - Examples - -------- - >>> x = np.ma.array([[1,2 ], [3, 4]], mask=[0]*4) - >>> x.mask - array([[False, False], - [False, False]], dtype=bool) - >>> x.shrink_mask() - >>> x.mask - False - - """ - m = self._mask - if m.ndim and not m.any(): - self._mask = nomask - return self - - #............................................ 
- - baseclass = property(fget=lambda self:self._baseclass, - doc="Class of the underlying data (read-only).") - - def _get_data(self): - """Return the current data, as a view of the original - underlying data. - - """ - return ndarray.view(self, self._baseclass) - _data = property(fget=_get_data) - data = property(fget=_get_data) - - def raw_data(self): - """ - Return the data part of the masked array. - - DEPRECATED: You should really use ``.data`` instead. - - Examples - -------- - >>> x = np.ma.array([1, 2, 3], mask=[False, True, False]) - >>> x - masked_array(data = [1 -- 3], - mask = [False True False], - fill_value = 999999) - >>> x.data - array([1, 2, 3]) - - """ - warnings.warn('Use .data instead.', DeprecationWarning) - return self._data - - - def _get_flat(self): - "Return a flat iterator." - return MaskedIterator(self) - # - def _set_flat (self, value): - "Set a flattened version of self to value." - y = self.ravel() - y[:] = value - # - flat = property(fget=_get_flat, fset=_set_flat, - doc="Flat version of the array.") - - - def get_fill_value(self): - """ - Return the filling value of the masked array. - - Returns - ------- - fill_value : scalar - The filling value. - - Examples - -------- - >>> for dt in [np.int32, np.int64, np.float64, np.complex128]: - ... np.ma.array([0, 1], dtype=dt).get_fill_value() - ... - 999999 - 999999 - 1e+20 - (1e+20+0j) - - >>> x = np.ma.array([0, 1.], fill_value=-np.inf) - >>> x.get_fill_value() - -inf - - """ - if self._fill_value is None: - self._fill_value = _check_fill_value(None, self.dtype) - return self._fill_value[()] - - def set_fill_value(self, value=None): - """ - Set the filling value of the masked array. - - Parameters - ---------- - value : scalar, optional - The new filling value. Default is None, in which case a default - based on the data type is used. - - See Also - -------- - ma.set_fill_value : Equivalent function. 
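The `fill_value` property defined by the deleted `get_fill_value`/`set_fill_value` pair can be sketched like this (assuming the current `numpy.ma` defaults, which match the docstrings above):

```python
import numpy as np

x = np.ma.array([0, 1], dtype=np.int32)
default_fill = x.fill_value      # 999999, the default for integer dtypes
print(default_fill)

x.fill_value = -1                # equivalent to x.set_fill_value(-1)
print(x.fill_value)              # -1
```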
- - Examples - -------- - >>> x = np.ma.array([0, 1.], fill_value=-np.inf) - >>> x.fill_value - -inf - >>> x.set_fill_value(np.pi) - >>> x.fill_value - 3.1415926535897931 - - Reset to default: - - >>> x.set_fill_value() - >>> x.fill_value - 1e+20 - - """ - target = _check_fill_value(value, self.dtype) - _fill_value = self._fill_value - if _fill_value is None: - # Create the attribute if it was undefined - self._fill_value = target - else: - # Don't overwrite the attribute, just fill it (for propagation) - _fill_value[()] = target - - fill_value = property(fget=get_fill_value, fset=set_fill_value, - doc="Filling value.") - - - def filled(self, fill_value=None): - """ - Return a copy of self, with masked values filled with a given value. - - Parameters - ---------- - fill_value : scalar, optional - The value to use for invalid entries (None by default). - If None, the `fill_value` attribute of the array is used instead. - - Notes - ----- - The result is **not** a MaskedArray! - - Examples - -------- - >>> x = np.ma.array([1,2,3,4,5], mask=[0,0,1,0,1], fill_value=-999) - >>> x.filled() - array([1, 2, -999, 4, -999]) - >>> type(x.filled()) - - - Subclassing is preserved. 
This means that if the data part of the masked - array is a matrix, `filled` returns a matrix: - - >>> x = np.ma.array(np.matrix([[1, 2], [3, 4]]), mask=[[0, 1], [1, 0]]) - >>> x.filled() - matrix([[ 1, 999999], - [999999, 4]]) - - """ - m = self._mask - if m is nomask: - return self._data - # - if fill_value is None: - fill_value = self.fill_value - else: - fill_value = _check_fill_value(fill_value, self.dtype) - # - if self is masked_singleton: - return np.asanyarray(fill_value) - # - if m.dtype.names: - result = self._data.copy() - _recursive_filled(result, self._mask, fill_value) - elif not m.any(): - return self._data - else: - result = self._data.copy() - try: - np.putmask(result, m, fill_value) - except (TypeError, AttributeError): - fill_value = narray(fill_value, dtype=object) - d = result.astype(object) - result = np.choose(m, (d, fill_value)) - except IndexError: - #ok, if scalar - if self._data.shape: - raise - elif m: - result = np.array(fill_value, dtype=self.dtype) - else: - result = self._data - return result - - def compressed(self): - """ - Return all the non-masked data as a 1-D array. - - Returns - ------- - data : ndarray - A new `ndarray` holding the non-masked data is returned. - - Notes - ----- - The result is **not** a MaskedArray! - - Examples - -------- - >>> x = np.ma.array(np.arange(5), mask=[0]*2 + [1]*3) - >>> x.compressed() - array([0, 1]) - >>> type(x.compressed()) - - - """ - data = ndarray.ravel(self._data) - if self._mask is not nomask: - data = data.compress(np.logical_not(ndarray.ravel(self._mask))) - return data - - - def compress(self, condition, axis=None, out=None): - """ - Return `a` where condition is ``True``. - - If condition is a `MaskedArray`, missing values are considered - as ``False``. - - Parameters - ---------- - condition : var - Boolean 1-d array selecting which entries to return. If len(condition) - is less than the size of a along the axis, then output is truncated - to length of condition array. 
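The key contrast documented above, that `filled` and `compressed` return plain ndarrays while `compress` keeps a mask, can be demonstrated with a short sketch (illustrative only):

```python
import numpy as np

x = np.ma.array([1, 2, 3, 4, 5], mask=[0, 0, 1, 0, 1])

filled = x.filled(-999)          # plain ndarray: masked slots replaced
print(filled.tolist())           # [1, 2, -999, 4, -999]
print(type(filled).__name__)     # ndarray, *not* MaskedArray

comp = x.compressed()            # only the unmasked data, as a 1-D ndarray
print(comp.tolist())             # [1, 2, 4]
```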
- axis : {None, int}, optional - Axis along which the operation must be performed. - out : {None, ndarray}, optional - Alternative output array in which to place the result. It must have - the same shape as the expected output but the type will be cast if - necessary. - - Returns - ------- - result : MaskedArray - A :class:`MaskedArray` object. - - Notes - ----- - Please note the difference with :meth:`compressed` ! - The output of :meth:`compress` has a mask, the output of - :meth:`compressed` does not. - - Examples - -------- - >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) - >>> print x - [[1 -- 3] - [-- 5 --] - [7 -- 9]] - >>> x.compress([1, 0, 1]) - masked_array(data = [1 3], - mask = [False False], - fill_value=999999) - - >>> x.compress([1, 0, 1], axis=1) - masked_array(data = - [[1 3] - [-- --] - [7 9]], - mask = - [[False False] - [ True True] - [False False]], - fill_value=999999) - - """ - # Get the basic components - (_data, _mask) = (self._data, self._mask) - # Force the condition to a regular ndarray (forget the missing values...) - condition = np.array(condition, copy=False, subok=False) - # - _new = _data.compress(condition, axis=axis, out=out).view(type(self)) - _new._update_from(self) - if _mask is not nomask: - _new._mask = _mask.compress(condition, axis=axis) - return _new - - #............................................ - def __str__(self): - """String representation. 
- - """ - if masked_print_option.enabled(): - f = masked_print_option - if self is masked: - return str(f) - m = self._mask - if m is nomask: - res = self._data - else: - if m.shape == (): - if m.dtype.names: - m = m.view((bool, len(m.dtype))) - if m.any(): - r = np.array(self._data.tolist(), dtype=object) - np.putmask(r, m, f) - return str(tuple(r)) - else: - return str(self._data) - elif m: - return str(f) - else: - return str(self._data) - # convert to object array to make filled work - names = self.dtype.names - if names is None: - res = self._data.astype("|O8") - res[m] = f - else: - rdtype = _recursive_make_descr(self.dtype, "|O8") - res = self._data.astype(rdtype) - _recursive_printoption(res, m, f) - else: - res = self.filled(self.fill_value) - return str(res) - - def __repr__(self): - """Literal string representation. - - """ - n = len(self.shape) - name = repr(self._data).split('(')[0] - parameters = dict(name=name, nlen=" " * len(name), - data=str(self), mask=str(self._mask), - fill=str(self.fill_value), dtype=str(self.dtype)) - if self.dtype.names: - if n <= 1: - return _print_templates['short_flx'] % parameters - return _print_templates['long_flx'] % parameters - elif n <= 1: - return _print_templates['short'] % parameters - return _print_templates['long'] % parameters - - - def __eq__(self, other): - "Check whether other equals self elementwise" - if self is masked: - return masked - omask = getattr(other, '_mask', nomask) - if omask is nomask: - check = ndarray.__eq__(self.filled(0), other) - try: - check = check.view(type(self)) - check._mask = self._mask - except AttributeError: - # Dang, we have a bool instead of an array: return the bool - return check - else: - odata = filled(other, 0) - check = ndarray.__eq__(self.filled(0), odata).view(type(self)) - if self._mask is nomask: - check._mask = omask - else: - mask = mask_or(self._mask, omask) - if mask.dtype.names: - if mask.size > 1: - axis = 1 - else: - axis = None - try: - mask = 
mask.view((bool_, len(self.dtype))).all(axis) - except ValueError: - mask = np.all([[f[n].all() for n in mask.dtype.names] - for f in mask], axis=axis) - check._mask = mask - return check - # - def __ne__(self, other): - "Check whether other doesn't equal self elementwise" - if self is masked: - return masked - omask = getattr(other, '_mask', nomask) - if omask is nomask: - check = ndarray.__ne__(self.filled(0), other) - try: - check = check.view(type(self)) - check._mask = self._mask - except AttributeError: - # In case check is a boolean (or a numpy.bool) - return check - else: - odata = filled(other, 0) - check = ndarray.__ne__(self.filled(0), odata).view(type(self)) - if self._mask is nomask: - check._mask = omask - else: - mask = mask_or(self._mask, omask) - if mask.dtype.names: - if mask.size > 1: - axis = 1 - else: - axis = None - try: - mask = mask.view((bool_, len(self.dtype))).all(axis) - except ValueError: - mask = np.all([[f[n].all() for n in mask.dtype.names] - for f in mask], axis=axis) - check._mask = mask - return check - # - def __add__(self, other): - "Add other to self, and return a new masked array." - return add(self, other) - # - def __radd__(self, other): - "Add other to self, and return a new masked array." - return add(self, other) - # - def __sub__(self, other): - "Subtract other to self, and return a new masked array." - return subtract(self, other) - # - def __rsub__(self, other): - "Subtract other to self, and return a new masked array." - return subtract(other, self) - # - def __mul__(self, other): - "Multiply other by self, and return a new masked array." - return multiply(self, other) - # - def __rmul__(self, other): - "Multiply other by self, and return a new masked array." - return multiply(self, other) - # - def __div__(self, other): - "Divide other into self, and return a new masked array." - return divide(self, other) - # - def __truediv__(self, other): - "Divide other into self, and return a new masked array." 
- return true_divide(self, other) - # - def __rtruediv__(self, other): - "Divide other into self, and return a new masked array." - return true_divide(other, self) - # - def __floordiv__(self, other): - "Divide other into self, and return a new masked array." - return floor_divide(self, other) - # - def __rfloordiv__(self, other): - "Divide other into self, and return a new masked array." - return floor_divide(other, self) - # - def __pow__(self, other): - "Raise self to the power other, masking the potential NaNs/Infs" - return power(self, other) - # - def __rpow__(self, other): - "Raise self to the power other, masking the potential NaNs/Infs" - return power(other, self) - #............................................ - def __iadd__(self, other): - "Add other to self in-place." - m = getmask(other) - if self._mask is nomask: - if m is not nomask and m.any(): - self._mask = make_mask_none(self.shape, self.dtype) - self._mask += m - else: - if m is not nomask: - self._mask += m - ndarray.__iadd__(self._data, np.where(self._mask, 0, getdata(other))) - return self - #.... - def __isub__(self, other): - "Subtract other from self in-place." - m = getmask(other) - if self._mask is nomask: - if m is not nomask and m.any(): - self._mask = make_mask_none(self.shape, self.dtype) - self._mask += m - elif m is not nomask: - self._mask += m - ndarray.__isub__(self._data, np.where(self._mask, 0, getdata(other))) - return self - #.... - def __imul__(self, other): - "Multiply self by other in-place." - m = getmask(other) - if self._mask is nomask: - if m is not nomask and m.any(): - self._mask = make_mask_none(self.shape, self.dtype) - self._mask += m - elif m is not nomask: - self._mask += m - ndarray.__imul__(self._data, np.where(self._mask, 1, getdata(other))) - return self - #.... - def __idiv__(self, other): - "Divide self by other in-place." 
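The in-place operators deleted here (`__iadd__` and friends) OR the operand masks together and only update the data where the result is unmasked; a minimal sketch against current `numpy.ma`:

```python
import numpy as np

x = np.ma.array([1.0, 2.0, 3.0], mask=[0, 1, 0])
y = np.ma.array([10.0, 10.0, 10.0], mask=[0, 0, 1])
x += y
print(x.mask.tolist())   # [False, True, True]: the masks are OR-ed
print(x[0])              # 11.0: only fully unmasked slots are updated
```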
- other_data = getdata(other) - dom_mask = _DomainSafeDivide().__call__(self._data, other_data) - other_mask = getmask(other) - new_mask = mask_or(other_mask, dom_mask) - # The following 3 lines control the domain filling - if dom_mask.any(): - (_, fval) = ufunc_fills[np.divide] - other_data = np.where(dom_mask, fval, other_data) -# self._mask = mask_or(self._mask, new_mask) - self._mask |= new_mask - ndarray.__idiv__(self._data, np.where(self._mask, 1, other_data)) - return self - #.... - def __ifloordiv__(self, other): - "Floor divide self by other in-place." - other_data = getdata(other) - dom_mask = _DomainSafeDivide().__call__(self._data, other_data) - other_mask = getmask(other) - new_mask = mask_or(other_mask, dom_mask) - # The following 3 lines control the domain filling - if dom_mask.any(): - (_, fval) = ufunc_fills[np.floor_divide] - other_data = np.where(dom_mask, fval, other_data) -# self._mask = mask_or(self._mask, new_mask) - self._mask |= new_mask - ndarray.__ifloordiv__(self._data, np.where(self._mask, 1, other_data)) - return self - #.... - def __itruediv__(self, other): - "True divide self by other in-place." - other_data = getdata(other) - dom_mask = _DomainSafeDivide().__call__(self._data, other_data) - other_mask = getmask(other) - new_mask = mask_or(other_mask, dom_mask) - # The following 3 lines control the domain filling - if dom_mask.any(): - (_, fval) = ufunc_fills[np.true_divide] - other_data = np.where(dom_mask, fval, other_data) -# self._mask = mask_or(self._mask, new_mask) - self._mask |= new_mask - ndarray.__itruediv__(self._data, np.where(self._mask, 1, other_data)) - return self - #... - def __ipow__(self, other): - "Raise self to the power other, in place." 
- other_data = getdata(other) - other_mask = getmask(other) - err_status = np.geterr() - try: - np.seterr(divide='ignore', invalid='ignore') - ndarray.__ipow__(self._data, np.where(self._mask, 1, other_data)) - finally: - np.seterr(**err_status) - invalid = np.logical_not(np.isfinite(self._data)) - if invalid.any(): - if self._mask is not nomask: - self._mask |= invalid - else: - self._mask = invalid - np.putmask(self._data, invalid, self.fill_value) - new_mask = mask_or(other_mask, invalid) - self._mask = mask_or(self._mask, new_mask) - return self - #............................................ - def __float__(self): - "Convert to float." - if self.size > 1: - raise TypeError("Only length-1 arrays can be converted "\ - "to Python scalars") - elif self._mask: - warnings.warn("Warning: converting a masked element to nan.") - return np.nan - return float(self.item()) - - def __int__(self): - "Convert to int." - if self.size > 1: - raise TypeError("Only length-1 arrays can be converted "\ - "to Python scalars") - elif self._mask: - raise MaskError, 'Cannot convert masked element to a Python int.' - return int(self.item()) - - - def get_imag(self): - """ - Return the imaginary part of the masked array. - - The returned array is a view on the imaginary part of the `MaskedArray` - whose `get_imag` method is called. - - Parameters - ---------- - None - - Returns - ------- - result : MaskedArray - The imaginary part of the masked array. - - See Also - -------- - get_real, real, imag - - Examples - -------- - >>> x = np.ma.array([1+1.j, -2j, 3.45+1.6j], mask=[False, True, False]) - >>> x.get_imag() - masked_array(data = [1.0 -- 1.6], - mask = [False True False], - fill_value = 1e+20) - - """ - result = self._data.imag.view(type(self)) - result.__setmask__(self._mask) - return result - imag = property(fget=get_imag, doc="Imaginary part.") - - def get_real(self): - """ - Return the real part of the masked array. 
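The `get_imag`/`get_real` accessors removed above return views that carry the original mask, as a quick sketch shows (not part of the patch):

```python
import numpy as np

z = np.ma.array([1+1j, -2j, 3.45+1.6j], mask=[False, True, False])
print(z.imag.filled(0).tolist())   # [1.0, 0.0, 1.6] -- middle slot masked
print(z.real.filled(0).tolist())   # [1.0, 0.0, 3.45]
```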
- - The returned array is a view on the real part of the `MaskedArray` - whose `get_real` method is called. - - Parameters - ---------- - None - - Returns - ------- - result : MaskedArray - The real part of the masked array. - - See Also - -------- - get_imag, real, imag - - Examples - -------- - >>> x = np.ma.array([1+1.j, -2j, 3.45+1.6j], mask=[False, True, False]) - >>> x.get_real() - masked_array(data = [1.0 -- 3.45], - mask = [False True False], - fill_value = 1e+20) - - """ - result = self._data.real.view(type(self)) - result.__setmask__(self._mask) - return result - real = property(fget=get_real, doc="Real part") - - - #............................................ - def count(self, axis=None): - """ - Count the non-masked elements of the array along the given axis. - - Parameters - ---------- - axis : int, optional - Axis along which to count the non-masked elements. If `axis` is - `None`, all non-masked elements are counted. - - Returns - ------- - result : int or ndarray - If `axis` is `None`, an integer count is returned. When `axis` is - not `None`, an array with shape determined by the lengths of the - remaining axes, is returned. - - See Also - -------- - count_masked : Count masked elements in array or along a given axis. - - Examples - -------- - >>> import numpy.ma as ma - >>> a = ma.arange(6).reshape((2, 3)) - >>> a[1, :] = ma.masked - >>> a - masked_array(data = - [[0 1 2] - [-- -- --]], - mask = - [[False False False] - [ True True True]], - fill_value = 999999) - >>> a.count() - 3 - - When the `axis` keyword is specified an array of appropriate size is - returned. 
- - >>> a.count(axis=0) - array([1, 1, 1]) - >>> a.count(axis=1) - array([3, 0]) - - """ - m = self._mask - s = self.shape - ls = len(s) - if m is nomask: - if ls == 0: - return 1 - if ls == 1: - return s[0] - if axis is None: - return self.size - else: - n = s[axis] - t = list(s) - del t[axis] - return np.ones(t) * n - n1 = np.size(m, axis) - n2 = m.astype(int).sum(axis) - if axis is None: - return (n1 - n2) - else: - return narray(n1 - n2) - #............................................ - flatten = _arraymethod('flatten') - # - def ravel(self): - """ - Returns a 1D version of self, as a view. - - Returns - ------- - MaskedArray - Output view is of shape ``(self.size,)`` (or - ``(np.ma.product(self.shape),)``). - - Examples - -------- - >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) - >>> print x - [[1 -- 3] - [-- 5 --] - [7 -- 9]] - >>> print x.ravel() - [1 -- 3 -- 5 -- 7 -- 9] - - """ - r = ndarray.ravel(self._data).view(type(self)) - r._update_from(self) - if self._mask is not nomask: - r._mask = ndarray.ravel(self._mask).reshape(r.shape) - else: - r._mask = nomask - return r - # - repeat = _arraymethod('repeat') - # - def reshape (self, *s, **kwargs): - """ - Give a new shape to the array without changing its data. - - Returns a masked array containing the same data, but with a new shape. - The result is a view on the original array; if this is not possible, a - ValueError is raised. - - Parameters - ---------- - shape : int or tuple of ints - The new shape should be compatible with the original shape. If an - integer is supplied, then the result will be a 1-D array of that - length. - order : {'C', 'F'}, optional - Determines whether the array data should be viewed as in C - (row-major) or FORTRAN (column-major) order. - - Returns - ------- - reshaped_array : array - A new view on the array. - - See Also - -------- - reshape : Equivalent function in the masked array module. - numpy.ndarray.reshape : Equivalent method on ndarray object. 
- numpy.reshape : Equivalent function in the NumPy module. - - Notes - ----- - The reshaping operation cannot guarantee that a copy will not be made, - to modify the shape in place, use ``a.shape = s`` - - Examples - -------- - >>> x = np.ma.array([[1,2],[3,4]], mask=[1,0,0,1]) - >>> print x - [[-- 2] - [3 --]] - >>> x = x.reshape((4,1)) - >>> print x - [[--] - [2] - [3] - [--]] - - """ - kwargs.update(order=kwargs.get('order', 'C')) - result = self._data.reshape(*s, **kwargs).view(type(self)) - result._update_from(self) - mask = self._mask - if mask is not nomask: - result._mask = mask.reshape(*s, **kwargs) - return result - # - def resize(self, newshape, refcheck=True, order=False): - """ - .. warning:: - - This method does nothing, except raise a ValueError exception. A - masked array does not own its data and therefore cannot safely be - resized in place. Use the `numpy.ma.resize` function instead. - - This method is difficult to implement safely and may be deprecated in - future releases of NumPy. - - """ - # Note : the 'order' keyword looks broken, let's just drop it -# try: -# ndarray.resize(self, newshape, refcheck=refcheck) -# if self.mask is not nomask: -# self._mask.resize(newshape, refcheck=refcheck) -# except ValueError: -# raise ValueError("Cannot resize an array that has been referenced " -# "or is referencing another array in this way.\n" -# "Use the numpy.ma.resize function.") -# return None - errmsg = "A masked array does not own its data "\ - "and therefore cannot be resized.\n" \ - "Use the numpy.ma.resize function instead." - raise ValueError(errmsg) - # - def put(self, indices, values, mode='raise'): - """ - Set storage-indexed locations to corresponding values. - - Sets self._data.flat[n] = values[n] for each n in indices. - If `values` is shorter than `indices` then it will repeat. - If `values` has some masked values, the initial mask is updated - in consequence, else the corresponding values are unmasked. 
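The `reshape` method documented above returns a view in which the mask is reshaped alongside the data; an illustrative sketch:

```python
import numpy as np

x = np.ma.array([[1, 2], [3, 4]], mask=[[1, 0], [0, 1]])
y = x.reshape(4, 1)                   # view: mask reshaped with the data
print(y.mask.ravel().tolist())        # [True, False, False, True]
print(y.filled(0).ravel().tolist())   # [0, 2, 3, 0]
```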
- - Parameters - ---------- - indices : 1-D array_like - Target indices, interpreted as integers. - values : array_like - Values to place in self._data copy at target indices. - mode : {'raise', 'wrap', 'clip'}, optional - Specifies how out-of-bounds indices will behave. - 'raise' : raise an error. - 'wrap' : wrap around. - 'clip' : clip to the range. - - Notes - ----- - `values` can be a scalar or length 1 array. - - Examples - -------- - >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) - >>> print x - [[1 -- 3] - [-- 5 --] - [7 -- 9]] - >>> x.put([0,4,8],[10,20,30]) - >>> print x - [[10 -- 3] - [-- 20 --] - [7 -- 30]] - - >>> x.put(4,999) - >>> print x - [[10 -- 3] - [-- 999 --] - [7 -- 30]] - - """ - m = self._mask - # Hard mask: Get rid of the values/indices that fall on masked data - if self._hardmask and self._mask is not nomask: - mask = self._mask[indices] - indices = narray(indices, copy=False) - values = narray(values, copy=False, subok=True) - values.resize(indices.shape) - indices = indices[~mask] - values = values[~mask] - #.... - self._data.put(indices, values, mode=mode) - #.... - if m is nomask: - m = getmask(values) - else: - m = m.copy() - if getmask(values) is nomask: - m.put(indices, False, mode=mode) - else: - m.put(indices, values._mask, mode=mode) - m = make_mask(m, copy=False, shrink=True) - self._mask = m - #............................................ - def ids (self): - """ - Return the addresses of the data and mask areas. - - Parameters - ---------- - None - - Examples - -------- - >>> x = np.ma.array([1, 2, 3], mask=[0, 1, 1]) - >>> x.ids() - (166670640, 166659832) - - If the array has no mask, the address of `nomask` is returned. 
This address - is typically not close to the data in memory: - - >>> x = np.ma.array([1, 2, 3]) - >>> x.ids() - (166691080, 3083169284L) - - """ - if self._mask is nomask: - return (self.ctypes.data, id(nomask)) - return (self.ctypes.data, self._mask.ctypes.data) - - def iscontiguous(self): - """ - Return a boolean indicating whether the data is contiguous. - - Parameters - ---------- - None - - Examples - -------- - >>> x = np.ma.array([1, 2, 3]) - >>> x.iscontiguous() - True - - `iscontiguous` returns one of the flags of the masked array: - - >>> x.flags - C_CONTIGUOUS : True - F_CONTIGUOUS : True - OWNDATA : False - WRITEABLE : True - ALIGNED : True - UPDATEIFCOPY : False - - """ - return self.flags['CONTIGUOUS'] - - #............................................ - def all(self, axis=None, out=None): - """ - Check if all of the elements of `a` are true. - - Performs a :func:`logical_and` over the given axis and returns the result. - Masked values are considered as True during computation. - For convenience, the output array is masked where ALL the values along the - current axis are masked: if the output would have been a scalar and that - all the values are masked, then the output is `masked`. - - Parameters - ---------- - axis : {None, integer} - Axis to perform the operation over. - If None, perform over flattened array. - out : {None, array}, optional - Array into which the result can be placed. Its type is preserved - and it must be of the right shape to hold the output. 
- - See Also - -------- - all : equivalent function - - Examples - -------- - >>> np.ma.array([1,2,3]).all() - True - >>> a = np.ma.array([1,2,3], mask=True) - >>> (a.all() is np.ma.masked) - True - - """ - mask = _check_mask_axis(self._mask, axis) - if out is None: - d = self.filled(True).all(axis=axis).view(type(self)) - if d.ndim: - d.__setmask__(mask) - elif mask: - return masked - return d - self.filled(True).all(axis=axis, out=out) - if isinstance(out, MaskedArray): - if out.ndim or mask: - out.__setmask__(mask) - return out - - - def any(self, axis=None, out=None): - """ - Check if any of the elements of `a` are true. - - Performs a logical_or over the given axis and returns the result. - Masked values are considered as False during computation. - - Parameters - ---------- - axis : {None, integer} - Axis to perform the operation over. - If None, perform over flattened array and return a scalar. - out : {None, array}, optional - Array into which the result can be placed. Its type is preserved - and it must be of the right shape to hold the output. - - See Also - -------- - any : equivalent function - - """ - mask = _check_mask_axis(self._mask, axis) - if out is None: - d = self.filled(False).any(axis=axis).view(type(self)) - if d.ndim: - d.__setmask__(mask) - elif mask: - d = masked - return d - self.filled(False).any(axis=axis, out=out) - if isinstance(out, MaskedArray): - if out.ndim or mask: - out.__setmask__(mask) - return out - - - def nonzero(self): - """ - Return the indices of unmasked elements that are not zero. - - Returns a tuple of arrays, one for each dimension, containing the - indices of the non-zero elements in that dimension. The corresponding - non-zero values can be obtained with:: - - a[a.nonzero()] - - To group the indices by element, rather than dimension, use - instead:: - - np.transpose(a.nonzero()) - - The result of this is always a 2d array, with a row for each non-zero - element. 
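The masking conventions of `all`/`any` described above (masked values count as True for `all`, False for `any`; a fully masked reduction yields `masked`) can be checked directly:

```python
import numpy as np

a = np.ma.array([0, 2, 3], mask=[0, 1, 0])
print(bool(a.all()))                # False: the unmasked 0 is falsy
print(bool(a.any()))                # True

b = np.ma.array([1, 2, 3], mask=True)
print(b.all() is np.ma.masked)      # True: fully masked -> masked scalar
```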
- - Parameters - ---------- - None - - Returns - ------- - tuple_of_arrays : tuple - Indices of elements that are non-zero. - - See Also - -------- - numpy.nonzero : - Function operating on ndarrays. - flatnonzero : - Return indices that are non-zero in the flattened version of the input - array. - ndarray.nonzero : - Equivalent ndarray method. - - Examples - -------- - >>> import numpy.ma as ma - >>> x = ma.array(np.eye(3)) - >>> x - masked_array(data = - [[ 1. 0. 0.] - [ 0. 1. 0.] - [ 0. 0. 1.]], - mask = - False, - fill_value=1e+20) - >>> x.nonzero() - (array([0, 1, 2]), array([0, 1, 2])) - - Masked elements are ignored. - - >>> x[1, 1] = ma.masked - >>> x - masked_array(data = - [[1.0 0.0 0.0] - [0.0 -- 0.0] - [0.0 0.0 1.0]], - mask = - [[False False False] - [False True False] - [False False False]], - fill_value=1e+20) - >>> x.nonzero() - (array([0, 2]), array([0, 2])) - - Indices can also be grouped by element. - - >>> np.transpose(x.nonzero()) - array([[0, 0], - [2, 2]]) - - A common use for ``nonzero`` is to find the indices of an array, where - a condition is True. Given an array `a`, the condition `a` > 3 is a - boolean array and since False is interpreted as 0, ma.nonzero(a > 3) - yields the indices of the `a` where the condition is true. - - >>> a = ma.array([[1,2,3],[4,5,6],[7,8,9]]) - >>> a > 3 - masked_array(data = - [[False False False] - [ True True True] - [ True True True]], - mask = - False, - fill_value=999999) - >>> ma.nonzero(a > 3) - (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) - - The ``nonzero`` method of the condition array can also be called. - - >>> (a > 3).nonzero() - (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) - - """ - return narray(self.filled(0), copy=False).nonzero() - - - def trace(self, offset=0, axis1=0, axis2=1, dtype=None, out=None): - """ - (this docstring should be overwritten) - """ - #!!!: implement out + test! 
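The `nonzero` method removed in this hunk treats masked elements as zero (it calls `nonzero` on `filled(0)`), which the docstring's `np.eye` example illustrates; a runnable condensation:

```python
import numpy as np

x = np.ma.array(np.eye(3))
x[1, 1] = np.ma.masked           # masked entries count as zero
rows, cols = x.nonzero()
print(rows.tolist(), cols.tolist())   # [0, 2] [0, 2]
```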
- m = self._mask - if m is nomask: - result = super(MaskedArray, self).trace(offset=offset, axis1=axis1, - axis2=axis2, out=out) - return result.astype(dtype) - else: - D = self.diagonal(offset=offset, axis1=axis1, axis2=axis2) - return D.astype(dtype).filled(0).sum(axis=None, out=out) - trace.__doc__ = ndarray.trace.__doc__ - - def sum(self, axis=None, dtype=None, out=None): - """ - Return the sum of the array elements over the given axis. - Masked elements are set to 0 internally. - - Parameters - ---------- - axis : {None, -1, int}, optional - Axis along which the sum is computed. The default - (`axis` = None) is to compute over the flattened array. - dtype : {None, dtype}, optional - Determines the type of the returned array and of the accumulator - where the elements are summed. If dtype has the value None and - the type of a is an integer type of precision less than the default - platform integer, then the default platform integer precision is - used. Otherwise, the dtype is the same as that of a. - out : {None, ndarray}, optional - Alternative output array in which to place the result. It must - have the same shape and buffer length as the expected output - but the type will be cast if necessary. - - Returns - ------- - sum_along_axis : MaskedArray or scalar - An array with the same shape as self, with the specified - axis removed. If self is a 0-d array, or if `axis` is None, a scalar - is returned. If an output array is specified, a reference to - `out` is returned. 
- - Examples - -------- - >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) - >>> print x - [[1 -- 3] - [-- 5 --] - [7 -- 9]] - >>> print x.sum() - 25 - >>> print x.sum(axis=1) - [4 5 16] - >>> print x.sum(axis=0) - [8 5 12] - >>> print type(x.sum(axis=0, dtype=np.int64)[0]) - - - """ - _mask = ndarray.__getattribute__(self, '_mask') - newmask = _check_mask_axis(_mask, axis) - # No explicit output - if out is None: - result = self.filled(0).sum(axis, dtype=dtype) - rndim = getattr(result, 'ndim', 0) - if rndim: - result = result.view(type(self)) - result.__setmask__(newmask) - elif newmask: - result = masked - return result - # Explicit output - result = self.filled(0).sum(axis, dtype=dtype, out=out) - if isinstance(out, MaskedArray): - outmask = getattr(out, '_mask', nomask) - if (outmask is nomask): - outmask = out._mask = make_mask_none(out.shape) - outmask.flat = newmask - return out - - - def cumsum(self, axis=None, dtype=None, out=None): - """ - Return the cumulative sum of the elements along the given axis. - The cumulative sum is calculated over the flattened array by - default, otherwise over the specified axis. - - Masked values are set to 0 internally during the computation. - However, their position is saved, and the result will be masked at - the same locations. - - Parameters - ---------- - axis : {None, -1, int}, optional - Axis along which the sum is computed. The default (`axis` = None) is to - compute over the flattened array. `axis` may be negative, in which case - it counts from the last to the first axis. - dtype : {None, dtype}, optional - Type of the returned array and of the accumulator in which the - elements are summed. If `dtype` is not specified, it defaults - to the dtype of `a`, unless `a` has an integer dtype with a - precision less than that of the default platform integer. In - that case, the default platform integer is used. - out : ndarray, optional - Alternative output array in which to place the result. 
It must - have the same shape and buffer length as the expected output - but the type will be cast if necessary. - - Returns - ------- - cumsum : ndarray. - A new array holding the result is returned unless ``out`` is - specified, in which case a reference to ``out`` is returned. - - Notes - ----- - The mask is lost if `out` is not a valid :class:`MaskedArray` ! - - Arithmetic is modular when using integer types, and no error is - raised on overflow. - - Examples - -------- - >>> marr = np.ma.array(np.arange(10), mask=[0,0,0,1,1,1,0,0,0,0]) - >>> print marr.cumsum() - [0 1 3 -- -- -- 9 16 24 33] - - """ - result = self.filled(0).cumsum(axis=axis, dtype=dtype, out=out) - if out is not None: - if isinstance(out, MaskedArray): - out.__setmask__(self.mask) - return out - result = result.view(type(self)) - result.__setmask__(self._mask) - return result - - - def prod(self, axis=None, dtype=None, out=None): - """ - Return the product of the array elements over the given axis. - Masked elements are set to 1 internally for computation. - - Parameters - ---------- - axis : {None, int}, optional - Axis over which the product is taken. If None is used, then the - product is over all the array elements. - dtype : {None, dtype}, optional - Determines the type of the returned array and of the accumulator - where the elements are multiplied. If ``dtype`` has the value ``None`` - and the type of a is an integer type of precision less than the default - platform integer, then the default platform integer precision is - used. Otherwise, the dtype is the same as that of a. - out : {None, array}, optional - Alternative output array in which to place the result. It must have - the same shape as the expected output but the type will be cast if - necessary. - - Returns - ------- - product_along_axis : {array, scalar}, see dtype parameter above. - Returns an array whose shape is the same as a with the specified - axis removed. Returns a 0d array when a is 1d or axis=None. 
- Returns a reference to the specified output array if specified. - - See Also - -------- - prod : equivalent function - - Notes - ----- - Arithmetic is modular when using integer types, and no error is raised - on overflow. - - Examples - -------- - >>> np.prod([1.,2.]) - 2.0 - >>> np.prod([1.,2.], dtype=np.int32) - 2 - >>> np.prod([[1.,2.],[3.,4.]]) - 24.0 - >>> np.prod([[1.,2.],[3.,4.]], axis=1) - array([ 2., 12.]) - - """ - _mask = ndarray.__getattribute__(self, '_mask') - newmask = _check_mask_axis(_mask, axis) - # No explicit output - if out is None: - result = self.filled(1).prod(axis, dtype=dtype) - rndim = getattr(result, 'ndim', 0) - if rndim: - result = result.view(type(self)) - result.__setmask__(newmask) - elif newmask: - result = masked - return result - # Explicit output - result = self.filled(1).prod(axis, dtype=dtype, out=out) - if isinstance(out, MaskedArray): - outmask = getattr(out, '_mask', nomask) - if (outmask is nomask): - outmask = out._mask = make_mask_none(out.shape) - outmask.flat = newmask - return out - - product = prod - - def cumprod(self, axis=None, dtype=None, out=None): - """ - Return the cumulative product of the elements along the given axis. - The cumulative product is taken over the flattened array by - default, otherwise over the specified axis. - - Masked values are set to 1 internally during the computation. - However, their position is saved, and the result will be masked at - the same locations. - - Parameters - ---------- - axis : {None, -1, int}, optional - Axis along which the product is computed. The default - (`axis` = None) is to compute over the flattened array. - dtype : {None, dtype}, optional - Determines the type of the returned array and of the accumulator - where the elements are multiplied. If ``dtype`` has the value ``None`` - and the type of ``a`` is an integer type of precision less than the - default platform integer, then the default platform integer precision - is used. 
Otherwise, the dtype is the same as that of ``a``. - out : ndarray, optional - Alternative output array in which to place the result. It must - have the same shape and buffer length as the expected output - but the type will be cast if necessary. - - Returns - ------- - cumprod : ndarray - A new array holding the result is returned unless out is specified, - in which case a reference to out is returned. - - Notes - ----- - The mask is lost if `out` is not a valid MaskedArray ! - - Arithmetic is modular when using integer types, and no error is - raised on overflow. - - """ - result = self.filled(1).cumprod(axis=axis, dtype=dtype, out=out) - if out is not None: - if isinstance(out, MaskedArray): - out.__setmask__(self._mask) - return out - result = result.view(type(self)) - result.__setmask__(self._mask) - return result - - - def mean(self, axis=None, dtype=None, out=None): - """ - Returns the average of the array elements. - - Masked entries are ignored. - The average is taken over the flattened array by default, otherwise over - the specified axis. Refer to `numpy.mean` for the full documentation. - - Parameters - ---------- - a : array_like - Array containing numbers whose mean is desired. If `a` is not an - array, a conversion is attempted. - axis : int, optional - Axis along which the means are computed. The default is to compute - the mean of the flattened array. - dtype : dtype, optional - Type to use in computing the mean. For integer inputs, the default - is float64; for floating point inputs, it is the same as the input - dtype. - out : ndarray, optional - Alternative output array in which to place the result. It must have - the same shape as the expected output but the type will be cast if - necessary. - - Returns - ------- - mean : ndarray, see dtype parameter above - If `out=None`, returns a new array containing the mean values, - otherwise a reference to the output array is returned. - - See Also - -------- - numpy.ma.mean : Equivalent function.
- numpy.mean : Equivalent function on non-masked arrays. - numpy.ma.average: Weighted average. - - Examples - -------- - >>> a = np.ma.array([1,2,3], mask=[False, False, True]) - >>> a - masked_array(data = [1 2 --], - mask = [False False True], - fill_value = 999999) - >>> a.mean() - 1.5 - - """ - if self._mask is nomask: - result = super(MaskedArray, self).mean(axis=axis, dtype=dtype) - else: - dsum = self.sum(axis=axis, dtype=dtype) - cnt = self.count(axis=axis) - if cnt.shape == () and (cnt == 0): - result = masked - else: - result = dsum * 1. / cnt - if out is not None: - out.flat = result - if isinstance(out, MaskedArray): - outmask = getattr(out, '_mask', nomask) - if (outmask is nomask): - outmask = out._mask = make_mask_none(out.shape) - outmask.flat = getattr(result, '_mask', nomask) - return out - return result - - def anom(self, axis=None, dtype=None): - """ - Compute the anomalies (deviations from the arithmetic mean) - along the given axis. - - Returns an array of anomalies, with the same shape as the input and - where the arithmetic mean is computed along the given axis. - - Parameters - ---------- - axis : int, optional - Axis over which the anomalies are taken. - The default is to use the mean of the flattened array as reference. - dtype : dtype, optional - Type to use in computing the variance. For arrays of integer type - the default is float32; for arrays of float types it is the same as - the array type. - - See Also - -------- - mean : Compute the mean of the array. - - Examples - -------- - >>> a = np.ma.array([1,2,3]) - >>> a.anom() - masked_array(data = [-1. 0. 
1.], - mask = False, - fill_value = 1e+20) - - """ - m = self.mean(axis, dtype) - if not axis: - return (self - m) - else: - return (self - expand_dims(m, axis)) - - def var(self, axis=None, dtype=None, out=None, ddof=0): - "" - # Easy case: nomask, business as usual - if self._mask is nomask: - return self._data.var(axis=axis, dtype=dtype, out=out, ddof=ddof) - # Some data are masked, yay! - cnt = self.count(axis=axis) - ddof - danom = self.anom(axis=axis, dtype=dtype) - if iscomplexobj(self): - danom = umath.absolute(danom) ** 2 - else: - danom *= danom - dvar = divide(danom.sum(axis), cnt).view(type(self)) - # Apply the mask if it's not a scalar - if dvar.ndim: - dvar._mask = mask_or(self._mask.all(axis), (cnt <= 0)) - dvar._update_from(self) - elif getattr(dvar, '_mask', False): - # Make sure that masked is returned when the scalar is masked. - dvar = masked - if out is not None: - if isinstance(out, MaskedArray): - out.__setmask__(True) - elif out.dtype.kind in 'biu': - errmsg = "Masked data information would be lost in one or "\ - "more location." - raise MaskError(errmsg) - else: - out.flat = np.nan - return out - # In case we have an explicit output - if out is not None: - # Set the data - out.flat = dvar - # Set the mask if needed - if isinstance(out, MaskedArray): - out.__setmask__(dvar.mask) - return out - return dvar - var.__doc__ = np.var.__doc__ - - - def std(self, axis=None, dtype=None, out=None, ddof=0): - "" - dvar = self.var(axis=axis, dtype=dtype, out=out, ddof=ddof) - if dvar is not masked: - dvar = sqrt(dvar) - if out is not None: - out **= 0.5 - return out - return dvar - std.__doc__ = np.std.__doc__ - - #............................................ - def round(self, decimals=0, out=None): - """ - Return each element of the array rounded to the given number of decimals. - - Refer to `numpy.around` for full documentation.
- - See Also - -------- - numpy.around : equivalent function - - """ - result = self._data.round(decimals=decimals, out=out).view(type(self)) - result._mask = self._mask - result._update_from(self) - # No explicit output: we're done - if out is None: - return result - if isinstance(out, MaskedArray): - out.__setmask__(self._mask) - return out - round.__doc__ = ndarray.round.__doc__ - - #............................................ - def argsort(self, axis=None, kind='quicksort', order=None, fill_value=None): - """ - Return an ndarray of indices that sort the array along the - specified axis. Masked values are filled beforehand to - `fill_value`. - - Parameters - ---------- - axis : int, optional - Axis along which to sort. The default is -1 (last axis). - If None, the flattened array is used. - fill_value : var, optional - Value used to fill the array before sorting. - The default is the `fill_value` attribute of the input array. - kind : {'quicksort', 'mergesort', 'heapsort'}, optional - Sorting algorithm. - order : list, optional - When `a` is an array with fields defined, this argument specifies - which fields to compare first, second, etc. Not all fields need be - specified. - - Returns - ------- - index_array : ndarray, int - Array of indices that sort `a` along the specified axis. - In other words, ``a[index_array]`` yields a sorted `a`. - - See Also - -------- - sort : Describes sorting algorithms used. - lexsort : Indirect stable sort with multiple keys. - ndarray.sort : Inplace sort. - - Notes - ----- - See `sort` for notes on the different sorting algorithms. 
- - Examples - -------- - >>> a = np.ma.array([3,2,1], mask=[False, False, True]) - >>> a - masked_array(data = [3 2 --], - mask = [False False True], - fill_value = 999999) - >>> a.argsort() - array([1, 0, 2]) - - """ - if fill_value is None: - fill_value = default_fill_value(self) - d = self.filled(fill_value).view(ndarray) - return d.argsort(axis=axis, kind=kind, order=order) - - - def argmin(self, axis=None, fill_value=None, out=None): - """ - Return array of indices to the minimum values along the given axis. - - Parameters - ---------- - axis : {None, integer} - If None, the index is into the flattened array, otherwise along - the specified axis - fill_value : {var}, optional - Value used to fill in the masked values. If None, the output of - minimum_fill_value(self._data) is used instead. - out : {None, array}, optional - Array into which the result can be placed. Its type is preserved - and it must be of the right shape to hold the output. - - Returns - ------- - {ndarray, scalar} - For multi-dimensional input, returns a new ndarray of indices to the - minimum values along the given axis. Otherwise, returns a scalar - index of the minimum value along the given axis. - - Examples - -------- - >>> x = np.ma.array(np.arange(4), mask=[1,1,0,0]) - >>> x.shape = (2,2) - >>> print x - [[-- --] - [2 3]] - >>> print x.argmin(axis=0, fill_value=-1) - [0 0] - >>> print x.argmin(axis=0, fill_value=9) - [1 1] - - """ - if fill_value is None: - fill_value = minimum_fill_value(self) - d = self.filled(fill_value).view(ndarray) - return d.argmin(axis, out=out) - - - def argmax(self, axis=None, fill_value=None, out=None): - """ - Returns array of indices of the maximum values along the given axis. - Masked values are treated as if they had the value fill_value. - - Parameters - ---------- - axis : {None, integer} - If None, the index is into the flattened array, otherwise along - the specified axis - fill_value : {var}, optional - Value used to fill in the masked values.
If None, the output of - maximum_fill_value(self._data) is used instead. - out : {None, array}, optional - Array into which the result can be placed. Its type is preserved - and it must be of the right shape to hold the output. - - Returns - ------- - index_array : {integer_array} - - Examples - -------- - >>> a = np.arange(6).reshape(2,3) - >>> a.argmax() - 5 - >>> a.argmax(0) - array([1, 1, 1]) - >>> a.argmax(1) - array([2, 2]) - - """ - if fill_value is None: - fill_value = maximum_fill_value(self._data) - d = self.filled(fill_value).view(ndarray) - return d.argmax(axis, out=out) - - - def sort(self, axis= -1, kind='quicksort', order=None, - endwith=True, fill_value=None): - """ - Sort the array, in-place - - Parameters - ---------- - a : array_like - Array to be sorted. - axis : int, optional - Axis along which to sort. If None, the array is flattened before - sorting. The default is -1, which sorts along the last axis. - kind : {'quicksort', 'mergesort', 'heapsort'}, optional - Sorting algorithm. Default is 'quicksort'. - order : list, optional - When `a` is a structured array, this argument specifies which fields - to compare first, second, and so on. This list does not need to - include all of the fields. - endwith : {True, False}, optional - Whether missing values (if any) should be forced in the upper indices - (at the end of the array) (True) or lower indices (at the beginning). - fill_value : {var}, optional - Value used internally for the masked values. - If ``fill_value`` is not None, it supersedes ``endwith``. - - Returns - ------- - sorted_array : ndarray - Array of the same type and shape as `a`. - - See Also - -------- - ndarray.sort : Method to sort an array in-place. - argsort : Indirect sort. - lexsort : Indirect stable sort on multiple keys. - searchsorted : Find elements in a sorted array. - - Notes - ----- - See ``sort`` for notes on the different sorting algorithms. 
- - Examples - -------- - >>> a = ma.array([1, 2, 5, 4, 3],mask=[0, 1, 0, 1, 0]) - >>> # Default - >>> a.sort() - >>> print a - [1 3 5 -- --] - - >>> a = ma.array([1, 2, 5, 4, 3],mask=[0, 1, 0, 1, 0]) - >>> # Put missing values in the front - >>> a.sort(endwith=False) - >>> print a - [-- -- 1 3 5] - - >>> a = ma.array([1, 2, 5, 4, 3],mask=[0, 1, 0, 1, 0]) - >>> # fill_value takes over endwith - >>> a.sort(endwith=False, fill_value=3) - >>> print a - [1 -- -- 3 5] - - """ - if self._mask is nomask: - ndarray.sort(self, axis=axis, kind=kind, order=order) - else: - if self is masked: - return self - if fill_value is None: - if endwith: - filler = minimum_fill_value(self) - else: - filler = maximum_fill_value(self) - else: - filler = fill_value - idx = np.indices(self.shape) - idx[axis] = self.filled(filler).argsort(axis=axis, kind=kind, - order=order) - idx_l = idx.tolist() - tmp_mask = self._mask[idx_l].flat - tmp_data = self._data[idx_l].flat - self._data.flat = tmp_data - self._mask.flat = tmp_mask - return - - #............................................ - def min(self, axis=None, out=None, fill_value=None): - """ - Return the minimum along a given axis. - - Parameters - ---------- - axis : {None, int}, optional - Axis along which to operate. By default, ``axis`` is None and the - flattened input is used. - out : array_like, optional - Alternative output array in which to place the result. Must be of - the same shape and buffer length as the expected output. - fill_value : {var}, optional - Value used to fill in the masked values. - If None, use the output of `minimum_fill_value`. - - Returns - ------- - amin : array_like - New array holding the result. - If ``out`` was specified, ``out`` is returned. - - See Also - -------- - minimum_fill_value - Returns the minimum filling value for a given datatype. 
- - """ - _mask = ndarray.__getattribute__(self, '_mask') - newmask = _check_mask_axis(_mask, axis) - if fill_value is None: - fill_value = minimum_fill_value(self) - # No explicit output - if out is None: - result = self.filled(fill_value).min(axis=axis, out=out).view(type(self)) - if result.ndim: - # Set the mask - result.__setmask__(newmask) - # Get rid of Infs - if newmask.ndim: - np.putmask(result, newmask, result.fill_value) - elif newmask: - result = masked - return result - # Explicit output - result = self.filled(fill_value).min(axis=axis, out=out) - if isinstance(out, MaskedArray): - outmask = getattr(out, '_mask', nomask) - if (outmask is nomask): - outmask = out._mask = make_mask_none(out.shape) - outmask.flat = newmask - else: - if out.dtype.kind in 'biu': - errmsg = "Masked data information would be lost in one or more"\ - " location." - raise MaskError(errmsg) - np.putmask(out, newmask, np.nan) - return out - - def mini(self, axis=None): - """ - Return the array minimum along the specified axis. - - Parameters - ---------- - axis : int, optional - The axis along which to find the minima. Default is None, in which case - the minimum value in the whole array is returned. - - Returns - ------- - min : scalar or MaskedArray - If `axis` is None, the result is a scalar. Otherwise, if `axis` is - given and the array is at least 2-D, the result is a masked array with - dimension one smaller than the array on which `mini` is called. - - Examples - -------- - >>> x = np.ma.array(np.arange(6), mask=[0 ,1, 0, 0, 0 ,1]).reshape(3, 2) - >>> print x - [[0 --] - [2 3] - [4 --]] - >>> x.mini() - 0 - >>> x.mini(axis=0) - masked_array(data = [0 3], - mask = [False False], - fill_value = 999999) - >>> print x.mini(axis=1) - [0 2 4] - - """ - if axis is None: - return minimum(self) - else: - return minimum.reduce(self, axis) - - #........................ - def max(self, axis=None, out=None, fill_value=None): - """ - Return the maximum along a given axis. 
- - Parameters - ---------- - axis : {None, int}, optional - Axis along which to operate. By default, ``axis`` is None and the - flattened input is used. - out : array_like, optional - Alternative output array in which to place the result. Must - be of the same shape and buffer length as the expected output. - fill_value : {var}, optional - Value used to fill in the masked values. - If None, use the output of maximum_fill_value(). - - Returns - ------- - amax : array_like - New array holding the result. - If ``out`` was specified, ``out`` is returned. - - See Also - -------- - maximum_fill_value - Returns the maximum filling value for a given datatype. - - """ - _mask = ndarray.__getattribute__(self, '_mask') - newmask = _check_mask_axis(_mask, axis) - if fill_value is None: - fill_value = maximum_fill_value(self) - # No explicit output - if out is None: - result = self.filled(fill_value).max(axis=axis, out=out).view(type(self)) - if result.ndim: - # Set the mask - result.__setmask__(newmask) - # Get rid of Infs - if newmask.ndim: - np.putmask(result, newmask, result.fill_value) - elif newmask: - result = masked - return result - # Explicit output - result = self.filled(fill_value).max(axis=axis, out=out) - if isinstance(out, MaskedArray): - outmask = getattr(out, '_mask', nomask) - if (outmask is nomask): - outmask = out._mask = make_mask_none(out.shape) - outmask.flat = newmask - else: - if out.dtype.kind in 'biu': - errmsg = "Masked data information would be lost in one or more"\ - " location." - raise MaskError(errmsg) - np.putmask(out, newmask, np.nan) - return out - - def ptp(self, axis=None, out=None, fill_value=None): - """ - Return (maximum - minimum) along the given dimension - (i.e. peak-to-peak value). - - Parameters - ---------- - axis : {None, int}, optional - Axis along which to find the peaks. If None (default) the - flattened array is used. - out : {None, array_like}, optional - Alternative output array in which to place the result.
It must - have the same shape and buffer length as the expected output - but the type will be cast if necessary. - fill_value : {var}, optional - Value used to fill in the masked values. - - Returns - ------- - ptp : ndarray. - A new array holding the result, unless ``out`` was - specified, in which case a reference to ``out`` is returned. - - """ - if out is None: - result = self.max(axis=axis, fill_value=fill_value) - result -= self.min(axis=axis, fill_value=fill_value) - return result - out.flat = self.max(axis=axis, out=out, fill_value=fill_value) - out -= self.min(axis=axis, fill_value=fill_value) - return out - - def take(self, indices, axis=None, out=None, mode='raise'): - """ - """ - (_data, _mask) = (self._data, self._mask) - cls = type(self) - # Make sure the indices are not masked - maskindices = getattr(indices, '_mask', nomask) - if maskindices is not nomask: - indices = indices.filled(0) - # Get the data - if out is None: - out = _data.take(indices, axis=axis, mode=mode).view(cls) - else: - np.take(_data, indices, axis=axis, mode=mode, out=out) - # Get the mask - if isinstance(out, MaskedArray): - if _mask is nomask: - outmask = maskindices - else: - outmask = _mask.take(indices, axis=axis, mode=mode) - outmask |= maskindices - out.__setmask__(outmask) - return out - - - # Array methods --------------------------------------- - copy = _arraymethod('copy') - diagonal = _arraymethod('diagonal') - transpose = _arraymethod('transpose') - T = property(fget=lambda self:self.transpose()) - swapaxes = _arraymethod('swapaxes') - clip = _arraymethod('clip', onmask=False) - copy = _arraymethod('copy') - squeeze = _arraymethod('squeeze') - #-------------------------------------------- - def tolist(self, fill_value=None): - """ - Return the data portion of the masked array as a hierarchical Python list. - - Data items are converted to the nearest compatible Python type. - Masked values are converted to `fill_value`. 
If `fill_value` is None, - the corresponding entries in the output list will be ``None``. - - Parameters - ---------- - fill_value : scalar, optional - The value to use for invalid entries. Default is None. - - Returns - ------- - result : list - The Python list representation of the masked array. - - Examples - -------- - >>> x = np.ma.array([[1,2,3], [4,5,6], [7,8,9]], mask=[0] + [1,0]*4) - >>> x.tolist() - [[1, None, 3], [None, 5, None], [7, None, 9]] - >>> x.tolist(-999) - [[1, -999, 3], [-999, 5, -999], [7, -999, 9]] - - """ - _mask = self._mask - # No mask ? Just return .data.tolist ? - if _mask is nomask: - return self._data.tolist() - # Explicit fill_value: fill the array and get the list - if fill_value is not None: - return self.filled(fill_value).tolist() - # Structured array ............. - names = self.dtype.names - if names: - result = self._data.astype([(_, object) for _ in names]) - for n in names: - result[n][_mask[n]] = None - return result.tolist() - # Standard arrays ............... - if _mask is nomask: - return [None] - # Set temps to save time when dealing w/ marrays... - inishape = self.shape - result = np.array(self._data.ravel(), dtype=object) - result[_mask.ravel()] = None - result.shape = inishape - return result.tolist() -# if fill_value is not None: -# return self.filled(fill_value).tolist() -# result = self.filled().tolist() -# # Set temps to save time when dealing w/ mrecarrays... 
-# _mask = self._mask -# if _mask is nomask: -# return result -# nbdims = self.ndim -# dtypesize = len(self.dtype) -# if nbdims == 0: -# return tuple([None] * dtypesize) -# elif nbdims == 1: -# maskedidx = _mask.nonzero()[0].tolist() -# if dtypesize: -# nodata = tuple([None] * dtypesize) -# else: -# nodata = None -# [operator.setitem(result, i, nodata) for i in maskedidx] -# else: -# for idx in zip(*[i.tolist() for i in _mask.nonzero()]): -# tmp = result -# for i in idx[:-1]: -# tmp = tmp[i] -# tmp[idx[-1]] = None -# return result - #........................ - def tostring(self, fill_value=None, order='C'): - """ - Return the array data as a string containing the raw bytes in the array. - - The array is filled with a fill value before the string conversion. - - Parameters - ---------- - fill_value : scalar, optional - Value used to fill in the masked values. Default is None, in which - case `MaskedArray.fill_value` is used. - order : {'C','F','A'}, optional - Order of the data item in the copy. Default is 'C'. - - - 'C' -- C order (row major). - - 'F' -- Fortran order (column major). - - 'A' -- Any, current order of array. - - None -- Same as 'A'. - - See Also - -------- - ndarray.tostring - tolist, tofile - - Notes - ----- - As for `ndarray.tostring`, information about the shape, dtype, etc., - but also about `fill_value`, will be lost. - - Examples - -------- - >>> x = np.ma.array(np.array([[1, 2], [3, 4]]), mask=[[0, 1], [1, 0]]) - >>> x.tostring() - '\\x01\\x00\\x00\\x00?B\\x0f\\x00?B\\x0f\\x00\\x04\\x00\\x00\\x00' - - """ - return self.filled(fill_value).tostring(order=order) - #........................ - def tofile(self, fid, sep="", format="%s"): - """ - Save a masked array to a file in binary format. - - .. warning:: - This function is not implemented yet. - - Raises - ------ - NotImplementedError - When `tofile` is called.
- - """ - raise NotImplementedError("Not implemented yet, sorry...") - - def toflex(self): - """ - Transforms a masked array into a flexible-type array. - - The flexible type array that is returned will have two fields: - - * the ``_data`` field stores the ``_data`` part of the array. - * the ``_mask`` field stores the ``_mask`` part of the array. - - Parameters - ---------- - None - - Returns - ------- - record : ndarray - A new flexible-type `ndarray` with two fields: the first element - containing a value, the second element containing the corresponding - mask boolean. The returned record shape matches self.shape. - - Notes - ----- - A side-effect of transforming a masked array into a flexible `ndarray` is - that meta information (``fill_value``, ...) will be lost. - - Examples - -------- - >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) - >>> print x - [[1 -- 3] - [-- 5 --] - [7 -- 9]] - >>> print x.toflex() - [[(1, False) (2, True) (3, False)] - [(4, True) (5, False) (6, True)] - [(7, False) (8, True) (9, False)]] - - """ - # Get the basic dtype .... - ddtype = self.dtype - # Make sure we have a mask - _mask = self._mask - if _mask is None: - _mask = make_mask_none(self.shape, ddtype) - # And get its dtype - mdtype = self._mask.dtype - # - record = np.ndarray(shape=self.shape, - dtype=[('_data', ddtype), ('_mask', mdtype)]) - record['_data'] = self._data - record['_mask'] = self._mask - return record - torecords = toflex - #-------------------------------------------- - # Pickling - def __getstate__(self): - """Return the internal state of the masked array, for pickling - purposes. 
- - """ - cf = 'CF'[self.flags.fnc] - state = (1, - self.shape, - self.dtype, - self.flags.fnc, - self._data.tostring(cf), - #self._data.tolist(), - getmaskarray(self).tostring(cf), - #getmaskarray(self).tolist(), - self._fill_value, - ) - return state - # - def __setstate__(self, state): - """Restore the internal state of the masked array, for - pickling purposes. ``state`` is typically the output of the - ``__getstate__`` output, and is a 5-tuple: - - - class name - - a tuple giving the shape of the data - - a typecode for the data - - a binary string for the data - - a binary string for the mask. - - """ - (_, shp, typ, isf, raw, msk, flv) = state - ndarray.__setstate__(self, (shp, typ, isf, raw)) - self._mask.__setstate__((shp, make_mask_descr(typ), isf, msk)) - self.fill_value = flv - # - def __reduce__(self): - """Return a 3-tuple for pickling a MaskedArray. - - """ - return (_mareconstruct, - (self.__class__, self._baseclass, (0,), 'b',), - self.__getstate__()) - # - def __deepcopy__(self, memo=None): - from copy import deepcopy - copied = MaskedArray.__new__(type(self), self, copy=True) - if memo is None: - memo = {} - memo[id(self)] = copied - for (k, v) in self.__dict__.iteritems(): - copied.__dict__[k] = deepcopy(v, memo) - return copied - - -def _mareconstruct(subtype, baseclass, baseshape, basetype,): - """Internal function that builds a new MaskedArray from the - information stored in a pickle. - - """ - _data = ndarray.__new__(baseclass, baseshape, basetype) - _mask = ndarray.__new__(ndarray, baseshape, make_mask_descr(basetype)) - return subtype.__new__(subtype, _data, mask=_mask, dtype=basetype,) - - - - - - -class mvoid(MaskedArray): - """ - Fake a 'void' object to use for masked array with structured dtypes. 
- """ - # - def __new__(self, data, mask=nomask, dtype=None, fill_value=None): - dtype = dtype or data.dtype - _data = ndarray((), dtype=dtype) - _data[()] = data - _data = _data.view(self) - if mask is not nomask: - if isinstance(mask, np.void): - _data._mask = mask - else: - try: - # Mask is already a 0D array - _data._mask = np.void(mask) - except TypeError: - # Transform the mask to a void - mdtype = make_mask_descr(dtype) - _data._mask = np.array(mask, dtype=mdtype)[()] - if fill_value is not None: - _data.fill_value = fill_value - return _data - - def _get_data(self): - # Make sure that the _data part is a np.void - return self.view(ndarray)[()] - _data = property(fget=_get_data) - - def __getitem__(self, indx): - "Get the index..." - _mask = self._mask.astype(np.void) - if _mask is not nomask and _mask[indx]: - return masked - return self._data[indx] - - def __setitem__(self, indx, value): - self._data[indx] = value - self._mask[indx] |= getattr(value, "_mask", False) - - def __str__(self): - m = self._mask - if (m is nomask): - return self._data.__str__() - m = tuple(m) - if (not any(m)): - return self._data.__str__() - r = self._data.tolist() - p = masked_print_option - if not p.enabled(): - p = 'N/A' - else: - p = str(p) - r = [(str(_), p)[int(_m)] for (_, _m) in zip(r, m)] - return "(%s)" % ", ".join(r) - - def __repr__(self): - m = self._mask - if (m is nomask): - return self._data.__repr__() - m = tuple(m) - if not any(m): - return self._data.__repr__() - p = masked_print_option - if not p.enabled(): - return self.filled(self.fill_value).__repr__() - p = str(p) - r = [(str(_), p)[int(_m)] for (_, _m) in zip(self._data.tolist(), m)] - return "(%s)" % ", ".join(r) - - def __iter__(self): - "Defines an iterator for mvoid" - (_data, _mask) = (self._data, self._mask) - if _mask is nomask: - for d in _data: - yield d - else: - for (d, m) in zip(_data, _mask): - if m: - yield masked - else: - yield d - - def filled(self, fill_value=None): - """ - Return a 
copy with masked fields filled with a given value. - - Parameters - ---------- - fill_value : scalar, optional - The value to use for invalid entries (None by default). - If None, the `fill_value` attribute is used instead. - - Returns - ------- - filled_void: - A `np.void` object - - See Also - -------- - MaskedArray.filled - - """ - return asarray(self).filled(fill_value)[()] - - def tolist(self): - """ - Transforms the mvoid object into a tuple. - - Masked fields are replaced by None. - - Returns - ------- - returned_tuple - Tuple of fields - """ - _mask = self._mask - if _mask is nomask: - return self._data.tolist() - result = [] - for (d, m) in zip(self._data, self._mask): - if m: - result.append(None) - else: - # .item() makes sure we return a standard Python object - result.append(d.item()) - return tuple(result) - - - -#####-------------------------------------------------------------------------- -#---- --- Shortcuts --- -#####--------------------------------------------------------------------------- -def isMaskedArray(x): - """ - Test whether input is an instance of MaskedArray. - - This function returns True if `x` is an instance of MaskedArray - and returns False otherwise. Any object is accepted as input. - - Parameters - ---------- - x : object - Object to test. - - Returns - ------- - result : bool - True if `x` is a MaskedArray. - - See Also - -------- - isMA : Alias to isMaskedArray. - isarray : Alias to isMaskedArray. 
- - Examples - -------- - >>> import numpy.ma as ma - >>> a = np.eye(3, 3) - >>> a - array([[ 1., 0., 0.], - [ 0., 1., 0.], - [ 0., 0., 1.]]) - >>> m = ma.masked_values(a, 0) - >>> m - masked_array(data = - [[1.0 -- --] - [-- 1.0 --] - [-- -- 1.0]], - mask = - [[False True True] - [ True False True] - [ True True False]], - fill_value=0.0) - >>> ma.isMaskedArray(a) - False - >>> ma.isMaskedArray(m) - True - >>> ma.isMaskedArray([0, 1, 2]) - False - - """ - return isinstance(x, MaskedArray) -isarray = isMaskedArray -isMA = isMaskedArray #backward compatibility - -# We define the masked singleton as a float for higher precedence... -# Note that it can be tricky sometimes w/ type comparison - -class MaskedConstant(MaskedArray): - # - _data = data = np.array(0.) - _mask = mask = np.array(True) - _baseclass = ndarray - # - def __new__(self): - return self._data.view(self) - # - def __array_finalize__(self, obj): - return - # - def __array_wrap__(self, obj): - return self - # - def __str__(self): - return str(masked_print_option._display) - # - def __repr__(self): - return 'masked' - # - def flatten(self): - return masked_array([self._data], dtype=float, mask=[True]) - -masked = masked_singleton = MaskedConstant() - - - -masked_array = MaskedArray - -def array(data, dtype=None, copy=False, order=False, - mask=nomask, fill_value=None, - keep_mask=True, hard_mask=False, shrink=True, subok=True, ndmin=0, - ): - """array(data, dtype=None, copy=False, order=False, mask=nomask, - fill_value=None, keep_mask=True, hard_mask=False, shrink=True, - subok=True, ndmin=0) - - Acts as shortcut to MaskedArray, with options in a different order - for convenience. And backwards compatibility... 
- - """ - #!!!: we should try to put 'order' somwehere - return MaskedArray(data, mask=mask, dtype=dtype, copy=copy, subok=subok, - keep_mask=keep_mask, hard_mask=hard_mask, - fill_value=fill_value, ndmin=ndmin, shrink=shrink) -array.__doc__ = masked_array.__doc__ - -def is_masked(x): - """ - Determine whether input has masked values. - - Accepts any object as input, but always returns False unless the - input is a MaskedArray containing masked values. - - Parameters - ---------- - x : array_like - Array to check for masked values. - - Returns - ------- - result : bool - True if `x` is a MaskedArray with masked values, False otherwise. - - Examples - -------- - >>> import numpy.ma as ma - >>> x = ma.masked_equal([0, 1, 0, 2, 3], 0) - >>> x - masked_array(data = [-- 1 -- 2 3], - mask = [ True False True False False], - fill_value=999999) - >>> ma.is_masked(x) - True - >>> x = ma.masked_equal([0, 1, 0, 2, 3], 42) - >>> x - masked_array(data = [0 1 0 2 3], - mask = False, - fill_value=999999) - >>> ma.is_masked(x) - False - - Always returns False if `x` isn't a MaskedArray. - - >>> x = [False, True, False] - >>> ma.is_masked(x) - False - >>> x = 'a string' - >>> ma.is_masked(x) - False - - """ - m = getmask(x) - if m is nomask: - return False - elif m.any(): - return True - return False - - -#####--------------------------------------------------------------------------- -#---- --- Extrema functions --- -#####--------------------------------------------------------------------------- -class _extrema_operation(object): - """ - Generic class for maximum/minimum functions. - - .. note:: - This is the base class for `_maximum_operation` and - `_minimum_operation`. - - """ - def __call__(self, a, b=None): - "Executes the call behavior." - if b is None: - return self.reduce(a) - return where(self.compare(a, b), a, b) - #......... - def reduce(self, target, axis=None): - "Reduce target along the given axis." 
- target = narray(target, copy=False, subok=True) - m = getmask(target) - if axis is not None: - kargs = { 'axis' : axis } - else: - kargs = {} - target = target.ravel() - if not (m is nomask): - m = m.ravel() - if m is nomask: - t = self.ufunc.reduce(target, **kargs) - else: - target = target.filled(self.fill_value_func(target)).view(type(target)) - t = self.ufunc.reduce(target, **kargs) - m = umath.logical_and.reduce(m, **kargs) - if hasattr(t, '_mask'): - t._mask = m - elif m: - t = masked - return t - #......... - def outer (self, a, b): - "Return the function applied to the outer product of a and b." - ma = getmask(a) - mb = getmask(b) - if ma is nomask and mb is nomask: - m = nomask - else: - ma = getmaskarray(a) - mb = getmaskarray(b) - m = logical_or.outer(ma, mb) - result = self.ufunc.outer(filled(a), filled(b)) - if not isinstance(result, MaskedArray): - result = result.view(MaskedArray) - result._mask = m - return result - -#............................ -class _minimum_operation(_extrema_operation): - "Object to calculate minima" - def __init__ (self): - """minimum(a, b) or minimum(a) -In one argument case, returns the scalar minimum. - """ - self.ufunc = umath.minimum - self.afunc = amin - self.compare = less - self.fill_value_func = minimum_fill_value - -#............................ -class _maximum_operation(_extrema_operation): - "Object to calculate maxima" - def __init__ (self): - """maximum(a, b) or maximum(a) - In one argument case returns the scalar maximum. - """ - self.ufunc = umath.maximum - self.afunc = amax - self.compare = greater - self.fill_value_func = maximum_fill_value - -#.......................................................... 
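The `_minimum_operation`/`_maximum_operation` classes deleted above back the module-level `ma.minimum` and `ma.maximum` callables. A minimal usage sketch, assuming a current `numpy` installation (the single-argument reduce form shown in the old docstrings was later deprecated in favor of calling `.reduce` explicitly):

```python
import numpy.ma as ma

a = ma.array([1, 2, 3], mask=[0, 1, 0])
b = ma.array([4, 0, 2], mask=[0, 0, 1])

# Element-wise extrema: the result is masked wherever either input is masked.
m = ma.minimum(a, b)
print(m[0])             # 1
print(m.mask.tolist())  # [False, True, True]

# Reduction considers only the unmasked entries.
print(ma.minimum.reduce(a))  # 1
```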
-def min(obj, axis=None, out=None, fill_value=None): - try: - return obj.min(axis=axis, fill_value=fill_value, out=out) - except (AttributeError, TypeError): - # If obj doesn't have a max method, - # ...or if the method doesn't accept a fill_value argument - return asanyarray(obj).min(axis=axis, fill_value=fill_value, out=out) -min.__doc__ = MaskedArray.min.__doc__ - -def max(obj, axis=None, out=None, fill_value=None): - try: - return obj.max(axis=axis, fill_value=fill_value, out=out) - except (AttributeError, TypeError): - # If obj doesn't have a max method, - # ...or if the method doesn't accept a fill_value argument - return asanyarray(obj).max(axis=axis, fill_value=fill_value, out=out) -max.__doc__ = MaskedArray.max.__doc__ - -def ptp(obj, axis=None, out=None, fill_value=None): - """a.ptp(axis=None) = a.max(axis)-a.min(axis)""" - try: - return obj.ptp(axis, out=out, fill_value=fill_value) - except (AttributeError, TypeError): - # If obj doesn't have a max method, - # ...or if the method doesn't accept a fill_value argument - return asanyarray(obj).ptp(axis=axis, fill_value=fill_value, out=out) -ptp.__doc__ = MaskedArray.ptp.__doc__ - - -#####--------------------------------------------------------------------------- -#---- --- Definition of functions from the corresponding methods --- -#####--------------------------------------------------------------------------- -class _frommethod: - """ - Define functions from existing MaskedArray methods. - - Parameters - ---------- - methodname : str - Name of the method to transform. - - """ - def __init__(self, methodname): - self.__name__ = methodname - self.__doc__ = self.getdoc() - # - def getdoc(self): - "Return the doc of the function (from the doc of the method)." 
- meth = getattr(MaskedArray, self.__name__, None) or\ - getattr(np, self.__name__, None) - signature = self.__name__ + get_object_signature(meth) - if meth is not None: - doc = """ %s\n%s""" % (signature, getattr(meth, '__doc__', None)) - return doc - # - def __call__(self, a, *args, **params): - # Get the method from the array (if possible) - method_name = self.__name__ - method = getattr(a, method_name, None) - if method is not None: - return method(*args, **params) - # Still here ? Then a is not a MaskedArray - method = getattr(MaskedArray, method_name, None) - if method is not None: - return method(MaskedArray(a), *args, **params) - # Still here ? OK, let's call the corresponding np function - method = getattr(np, method_name) - return method(a, *args, **params) - -all = _frommethod('all') -anomalies = anom = _frommethod('anom') -any = _frommethod('any') -compress = _frommethod('compress') -cumprod = _frommethod('cumprod') -cumsum = _frommethod('cumsum') -copy = _frommethod('copy') -diagonal = _frommethod('diagonal') -harden_mask = _frommethod('harden_mask') -ids = _frommethod('ids') -maximum = _maximum_operation() -mean = _frommethod('mean') -minimum = _minimum_operation () -nonzero = _frommethod('nonzero') -prod = _frommethod('prod') -product = _frommethod('prod') -ravel = _frommethod('ravel') -repeat = _frommethod('repeat') -shrink_mask = _frommethod('shrink_mask') -soften_mask = _frommethod('soften_mask') -std = _frommethod('std') -sum = _frommethod('sum') -swapaxes = _frommethod('swapaxes') -#take = _frommethod('take') -trace = _frommethod('trace') -var = _frommethod('var') - -def take(a, indices, axis=None, out=None, mode='raise'): - """ - """ - a = masked_array(a) - return a.take(indices, axis=axis, out=out, mode=mode) - - -#.............................................................................. -def power(a, b, third=None): - """ - Returns element-wise base array raised to power from second array. 
- - This is the masked array version of `numpy.power`. For details see - `numpy.power`. - - See Also - -------- - numpy.power - - Notes - ----- - The *out* argument to `numpy.power` is not supported, `third` has to be - None. - - """ - if third is not None: - raise MaskError, "3-argument power not supported." - # Get the masks - ma = getmask(a) - mb = getmask(b) - m = mask_or(ma, mb) - # Get the rawdata - fa = getdata(a) - fb = getdata(b) - # Get the type of the result (so that we preserve subclasses) - if isinstance(a, MaskedArray): - basetype = type(a) - else: - basetype = MaskedArray - # Get the result and view it as a (subclass of) MaskedArray - err_status = np.geterr() - try: - np.seterr(divide='ignore', invalid='ignore') - result = np.where(m, fa, umath.power(fa, fb)).view(basetype) - finally: - np.seterr(**err_status) - result._update_from(a) - # Find where we're in trouble w/ NaNs and Infs - invalid = np.logical_not(np.isfinite(result.view(ndarray))) - # Add the initial mask - if m is not nomask: - if not (result.ndim): - return masked - m |= invalid - result._mask = m - # Fix the invalid parts - if invalid.any(): - if not result.ndim: - return masked - elif result._mask is nomask: - result._mask = invalid - result._data[invalid] = result.fill_value - return result - -# if fb.dtype.char in typecodes["Integer"]: -# return masked_array(umath.power(fa, fb), m) -# m = mask_or(m, (fa < 0) & (fb != fb.astype(int))) -# if m is nomask: -# return masked_array(umath.power(fa, fb)) -# else: -# fa = fa.copy() -# if m.all(): -# fa.flat = 1 -# else: -# np.putmask(fa,m,1) -# return masked_array(umath.power(fa, fb), m) - -#.............................................................................. -def argsort(a, axis=None, kind='quicksort', order=None, fill_value=None): - "Function version of the eponymous method." 
- if fill_value is None: - fill_value = default_fill_value(a) - d = filled(a, fill_value) - if axis is None: - return d.argsort(kind=kind, order=order) - return d.argsort(axis, kind=kind, order=order) -argsort.__doc__ = MaskedArray.argsort.__doc__ - -def argmin(a, axis=None, fill_value=None): - "Function version of the eponymous method." - if fill_value is None: - fill_value = default_fill_value(a) - d = filled(a, fill_value) - return d.argmin(axis=axis) -argmin.__doc__ = MaskedArray.argmin.__doc__ - -def argmax(a, axis=None, fill_value=None): - "Function version of the eponymous method." - if fill_value is None: - fill_value = default_fill_value(a) - try: - fill_value = -fill_value - except: - pass - d = filled(a, fill_value) - return d.argmax(axis=axis) -argmin.__doc__ = MaskedArray.argmax.__doc__ - -def sort(a, axis= -1, kind='quicksort', order=None, endwith=True, fill_value=None): - "Function version of the eponymous method." - a = narray(a, copy=True, subok=True) - if axis is None: - a = a.flatten() - axis = 0 - if fill_value is None: - if endwith: - filler = minimum_fill_value(a) - else: - filler = maximum_fill_value(a) - else: - filler = fill_value -# return - indx = np.indices(a.shape).tolist() - indx[axis] = filled(a, filler).argsort(axis=axis, kind=kind, order=order) - return a[indx] -sort.__doc__ = MaskedArray.sort.__doc__ - - -def compressed(x): - """ - Return all the non-masked data as a 1-D array. - - This function is equivalent to calling the "compressed" method of a - `MaskedArray`, see `MaskedArray.compressed` for details. - - See Also - -------- - MaskedArray.compressed - Equivalent method. - - """ - if getmask(x) is nomask: - return np.asanyarray(x) - else: - return x.compressed() - -def concatenate(arrays, axis=0): - """ - Concatenate a sequence of arrays along the given axis. 
- - Parameters - ---------- - arrays : sequence of array_like - The arrays must have the same shape, except in the dimension - corresponding to `axis` (the first, by default). - axis : int, optional - The axis along which the arrays will be joined. Default is 0. - - Returns - ------- - result : MaskedArray - The concatenated array with any masked entries preserved. - - See Also - -------- - numpy.concatenate : Equivalent function in the top-level NumPy module. - - Examples - -------- - >>> import numpy.ma as ma - >>> a = ma.arange(3) - >>> a[1] = ma.masked - >>> b = ma.arange(2, 5) - >>> a - masked_array(data = [0 -- 2], - mask = [False True False], - fill_value = 999999) - >>> b - masked_array(data = [2 3 4], - mask = False, - fill_value = 999999) - >>> ma.concatenate([a, b]) - masked_array(data = [0 -- 2 2 3 4], - mask = [False True False False False False], - fill_value = 999999) - - """ - d = np.concatenate([getdata(a) for a in arrays], axis) - rcls = get_masked_subclass(*arrays) - data = d.view(rcls) - # Check whether one of the arrays has a non-empty mask... - for x in arrays: - if getmask(x) is not nomask: - break - else: - return data - # OK, so we have to concatenate the masks - dm = np.concatenate([getmaskarray(a) for a in arrays], axis) - # If we decide to keep a '_shrinkmask' option, we want to check that ... - # ... all of them are True, and then check for dm.any() -# shrink = numpy.logical_or.reduce([getattr(a,'_shrinkmask',True) for a in arrays]) -# if shrink and not dm.any(): - if not dm.dtype.fields and not dm.any(): - data._mask = nomask - else: - data._mask = dm.reshape(d.shape) - return data - -def count(a, axis=None): - if isinstance(a, MaskedArray): - return a.count(axis) - return masked_array(a, copy=False).count(axis) -count.__doc__ = MaskedArray.count.__doc__ - - -def diag(v, k=0): - """ - Extract a diagonal or construct a diagonal array. 
- - This function is the equivalent of `numpy.diag` that takes masked - values into account, see `numpy.diag` for details. - - See Also - -------- - numpy.diag : Equivalent function for ndarrays. - - """ - output = np.diag(v, k).view(MaskedArray) - if getmask(v) is not nomask: - output._mask = np.diag(v._mask, k) - return output - - -def expand_dims(x, axis): - """ - Expand the shape of an array. - - Expands the shape of the array by including a new axis before the one - specified by the `axis` parameter. This function behaves the same as - `numpy.expand_dims` but preserves masked elements. - - See Also - -------- - numpy.expand_dims : Equivalent function in top-level NumPy module. - - Examples - -------- - >>> import numpy.ma as ma - >>> x = ma.array([1, 2, 4]) - >>> x[1] = ma.masked - >>> x - masked_array(data = [1 -- 4], - mask = [False True False], - fill_value = 999999) - >>> np.expand_dims(x, axis=0) - array([[1, 2, 4]]) - >>> ma.expand_dims(x, axis=0) - masked_array(data = - [[1 -- 4]], - mask = - [[False True False]], - fill_value = 999999) - - The same result can be achieved using slicing syntax with `np.newaxis`. - - >>> x[np.newaxis, :] - masked_array(data = - [[1 -- 4]], - mask = - [[False True False]], - fill_value = 999999) - - """ - result = n_expand_dims(x, axis) - if isinstance(x, MaskedArray): - new_shape = result.shape - result = x.view() - result.shape = new_shape - if result._mask is not nomask: - result._mask.shape = new_shape - return result - -#...................................... -def left_shift (a, n): - """ - Shift the bits of an integer to the left. - - This is the masked array version of `numpy.left_shift`, for details - see that function. 
- - See Also - -------- - numpy.left_shift - - """ - m = getmask(a) - if m is nomask: - d = umath.left_shift(filled(a), n) - return masked_array(d) - else: - d = umath.left_shift(filled(a, 0), n) - return masked_array(d, mask=m) - -def right_shift (a, n): - """ - Shift the bits of an integer to the right. - - This is the masked array version of `numpy.right_shift`, for details - see that function. - - See Also - -------- - numpy.right_shift - - """ - m = getmask(a) - if m is nomask: - d = umath.right_shift(filled(a), n) - return masked_array(d) - else: - d = umath.right_shift(filled(a, 0), n) - return masked_array(d, mask=m) - -#...................................... -def put(a, indices, values, mode='raise'): - """ - Set storage-indexed locations to corresponding values. - - This function is equivalent to `MaskedArray.put`, see that method - for details. - - See Also - -------- - MaskedArray.put - - """ - # We can't use 'frommethod', the order of arguments is different - try: - return a.put(indices, values, mode=mode) - except AttributeError: - return narray(a, copy=False).put(indices, values, mode=mode) - -def putmask(a, mask, values): #, mode='raise'): - """ - Changes elements of an array based on conditional and input values. - - This is the masked array version of `numpy.putmask`, for details see - `numpy.putmask`. - - See Also - -------- - numpy.putmask - - Notes - ----- - Using a masked array as `values` will **not** transform a `ndarray` into - a `MaskedArray`. 
- - """ - # We can't use 'frommethod', the order of arguments is different - if not isinstance(a, MaskedArray): - a = a.view(MaskedArray) - (valdata, valmask) = (getdata(values), getmask(values)) - if getmask(a) is nomask: - if valmask is not nomask: - a._sharedmask = True - a._mask = make_mask_none(a.shape, a.dtype) - np.putmask(a._mask, mask, valmask) - elif a._hardmask: - if valmask is not nomask: - m = a._mask.copy() - np.putmask(m, mask, valmask) - a.mask |= m - else: - if valmask is nomask: - valmask = getmaskarray(values) - np.putmask(a._mask, mask, valmask) - np.putmask(a._data, mask, valdata) - return - -def transpose(a, axes=None): - """ - Permute the dimensions of an array. - - This function is exactly equivalent to `numpy.transpose`. - - See Also - -------- - numpy.transpose : Equivalent function in top-level NumPy module. - - Examples - -------- - >>> import numpy.ma as ma - >>> x = ma.arange(4).reshape((2,2)) - >>> x[1, 1] = ma.masked - >>>> x - masked_array(data = - [[0 1] - [2 --]], - mask = - [[False False] - [False True]], - fill_value = 999999) - >>> ma.transpose(x) - masked_array(data = - [[0 2] - [1 --]], - mask = - [[False False] - [False True]], - fill_value = 999999) - - """ - #We can't use 'frommethod', as 'transpose' doesn't take keywords - try: - return a.transpose(axes) - except AttributeError: - return narray(a, copy=False).transpose(axes).view(MaskedArray) - -def reshape(a, new_shape, order='C'): - """ - Returns an array containing the same data with a new shape. - - Refer to `MaskedArray.reshape` for full documentation. - - See Also - -------- - MaskedArray.reshape : equivalent function - - """ - #We can't use 'frommethod', it whine about some parameters. Dmmit. - try: - return a.reshape(new_shape, order=order) - except AttributeError: - _tmp = narray(a, copy=False).reshape(new_shape, order=order) - return _tmp.view(MaskedArray) - -def resize(x, new_shape): - """ - Return a new masked array with the specified size and shape. 
- - This is the masked equivalent of the `numpy.resize` function. The new - array is filled with repeated copies of `x` (in the order that the - data are stored in memory). If `x` is masked, the new array will be - masked, and the new mask will be a repetition of the old one. - - See Also - -------- - numpy.resize : Equivalent function in the top level NumPy module. - - Examples - -------- - >>> import numpy.ma as ma - >>> a = ma.array([[1, 2] ,[3, 4]]) - >>> a[0, 1] = ma.masked - >>> a - masked_array(data = - [[1 --] - [3 4]], - mask = - [[False True] - [False False]], - fill_value = 999999) - >>> np.resize(a, (3, 3)) - array([[1, 2, 3], - [4, 1, 2], - [3, 4, 1]]) - >>> ma.resize(a, (3, 3)) - masked_array(data = - [[1 -- 3] - [4 1 --] - [3 4 1]], - mask = - [[False True False] - [False False True] - [False False False]], - fill_value = 999999) - - A MaskedArray is always returned, regardless of the input type. - - >>> a = np.array([[1, 2] ,[3, 4]]) - >>> ma.resize(a, (3, 3)) - masked_array(data = - [[1 2 3] - [4 1 2] - [3 4 1]], - mask = - False, - fill_value = 999999) - - """ - # We can't use _frommethods here, as N.resize is notoriously whiny. - m = getmask(x) - if m is not nomask: - m = np.resize(m, new_shape) - result = np.resize(x, new_shape).view(get_masked_subclass(x)) - if result.ndim: - result._mask = m - return result - - -#................................................ -def rank(obj): - "maskedarray version of the numpy function." - return np.rank(getdata(obj)) -rank.__doc__ = np.rank.__doc__ -# -def shape(obj): - "maskedarray version of the numpy function." - return np.shape(getdata(obj)) -shape.__doc__ = np.shape.__doc__ -# -def size(obj, axis=None): - "maskedarray version of the numpy function." - return np.size(getdata(obj), axis) -size.__doc__ = np.size.__doc__ -#................................................ 
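Among the helpers deleted above, `resize` is one of the few that must handle the data and the mask in tandem. A short sketch of that behavior with current `numpy.ma` (note that the `rank` wrapper nearby tracks `np.rank`, which has since been removed from NumPy):

```python
import numpy.ma as ma

a = ma.array([[1, 2], [3, 4]], mask=[[0, 1], [0, 0]])
r = ma.resize(a, (3, 3))

# Data and mask are both tiled in C order to fill the new shape.
print(r.shape)             # (3, 3)
print(bool(r.mask[0, 1]))  # True: the original masked entry
print(bool(r.mask[1, 2]))  # True: its first repetition
```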
- -#####-------------------------------------------------------------------------- -#---- --- Extra functions --- -#####-------------------------------------------------------------------------- -def where (condition, x=None, y=None): - """ - Return a masked array with elements from x or y, depending on condition. - - Returns a masked array, shaped like condition, where the elements - are from `x` when `condition` is True, and from `y` otherwise. - If neither `x` nor `y` are given, the function returns a tuple of - indices where `condition` is True (the result of - ``condition.nonzero()``). - - Parameters - ---------- - condition : array_like, bool - The condition to meet. For each True element, yield the corresponding - element from `x`, otherwise from `y`. - x, y : array_like, optional - Values from which to choose. `x` and `y` need to have the same shape - as condition, or be broadcast-able to that shape. - - Returns - ------- - out : MaskedArray or tuple of ndarrays - The resulting masked array if `x` and `y` were given, otherwise - the result of ``condition.nonzero()``. - - See Also - -------- - numpy.where : Equivalent function in the top-level NumPy module. - - Examples - -------- - >>> x = np.ma.array(np.arange(9.).reshape(3, 3), mask=[[0, 1, 0], - ... [1, 0, 1], - ... [0, 1, 0]]) - >>> print x - [[0.0 -- 2.0] - [-- 4.0 --] - [6.0 -- 8.0]] - >>> np.ma.where(x > 5) # return the indices where x > 5 - (array([2, 2]), array([0, 2])) - - >>> print np.ma.where(x > 5, x, -3.1416) - [[-3.1416 -- -3.1416] - [-- -3.1416 --] - [6.0 -- 8.0]] - - """ - if x is None and y is None: - return filled(condition, 0).nonzero() - elif x is None or y is None: - raise ValueError, "Either both or neither x and y should be given." - # Get the condition ............... - fc = filled(condition, 0).astype(MaskType) - notfc = np.logical_not(fc) - # Get the data ...................................... 
- xv = getdata(x) - yv = getdata(y) - if x is masked: - ndtype = yv.dtype - elif y is masked: - ndtype = xv.dtype - else: - ndtype = np.find_common_type([xv.dtype, yv.dtype], []) - # Construct an empty array and fill it - d = np.empty(fc.shape, dtype=ndtype).view(MaskedArray) - _data = d._data - np.putmask(_data, fc, xv.astype(ndtype)) - np.putmask(_data, notfc, yv.astype(ndtype)) - # Create an empty mask and fill it - _mask = d._mask = np.zeros(fc.shape, dtype=MaskType) - np.putmask(_mask, fc, getmask(x)) - np.putmask(_mask, notfc, getmask(y)) - _mask |= getmaskarray(condition) - if not _mask.any(): - d._mask = nomask - return d - -def choose (indices, choices, out=None, mode='raise'): - """ - Use an index array to construct a new array from a set of choices. - - Given an array of integers and a set of n choice arrays, this method - will create a new array that merges each of the choice arrays. Where a - value in `a` is i, the new array will have the value that choices[i] - contains in the same place. - - Parameters - ---------- - a : ndarray of ints - This array must contain integers in ``[0, n-1]``, where n is the - number of choices. - choices : sequence of arrays - Choice arrays. The index array and all of the choices should be - broadcastable to the same shape. - out : array, optional - If provided, the result will be inserted into this array. It should - be of the appropriate shape and `dtype`. - mode : {'raise', 'wrap', 'clip'}, optional - Specifies how out-of-bounds indices will behave. - - * 'raise' : raise an error - * 'wrap' : wrap around - * 'clip' : clip to the range - - Returns - ------- - merged_array : array - - See Also - -------- - choose : equivalent function - - Examples - -------- - >>> choice = np.array([[1,1,1], [2,2,2], [3,3,3]]) - >>> a = np.array([2, 1, 0]) - >>> np.ma.choose(a, choice) - masked_array(data = [3 2 1], - mask = False, - fill_value=999999) - - """ - def fmask (x): - "Returns the filled array, or True if masked." 
- if x is masked: - return True - return filled(x) - def nmask (x): - "Returns the mask, True if ``masked``, False if ``nomask``." - if x is masked: - return True - return getmask(x) - # Get the indices...... - c = filled(indices, 0) - # Get the masks........ - masks = [nmask(x) for x in choices] - data = [fmask(x) for x in choices] - # Construct the mask - outputmask = np.choose(c, masks, mode=mode) - outputmask = make_mask(mask_or(outputmask, getmask(indices)), - copy=0, shrink=True) - # Get the choices...... - d = np.choose(c, data, mode=mode, out=out).view(MaskedArray) - if out is not None: - if isinstance(out, MaskedArray): - out.__setmask__(outputmask) - return out - d.__setmask__(outputmask) - return d - - -def round_(a, decimals=0, out=None): - """ - Return a copy of a, rounded to 'decimals' places. - - When 'decimals' is negative, it specifies the number of positions - to the left of the decimal point. The real and imaginary parts of - complex numbers are rounded separately. Nothing is done if the - array is not of float type and 'decimals' is greater than or equal - to 0. - - Parameters - ---------- - decimals : int - Number of decimals to round to. May be negative. - out : array_like - Existing array to use for output. - If not given, returns a default copy of a. - - Notes - ----- - If out is given and does not have a mask attribute, the mask of a - is lost! - - """ - if out is None: - return np.round_(a, decimals, out) - else: - np.round_(getdata(a), decimals, out) - if hasattr(out, '_mask'): - out._mask = getmask(a) - return out -round = round_ - -def inner(a, b): - """ - Returns the inner product of a and b for arrays of floating point types. - - Like the generic NumPy equivalent the product sum is over the last dimension - of a and b. - - Notes - ----- - The first argument is not conjugated. 
- - """ - fa = filled(a, 0) - fb = filled(b, 0) - if len(fa.shape) == 0: - fa.shape = (1,) - if len(fb.shape) == 0: - fb.shape = (1,) - return np.inner(fa, fb).view(MaskedArray) -inner.__doc__ = doc_note(np.inner.__doc__, - "Masked values are replaced by 0.") -innerproduct = inner - -def outer(a, b): - "maskedarray version of the numpy function." - fa = filled(a, 0).ravel() - fb = filled(b, 0).ravel() - d = np.outer(fa, fb) - ma = getmask(a) - mb = getmask(b) - if ma is nomask and mb is nomask: - return masked_array(d) - ma = getmaskarray(a) - mb = getmaskarray(b) - m = make_mask(1 - np.outer(1 - ma, 1 - mb), copy=0) - return masked_array(d, mask=m) -outer.__doc__ = doc_note(np.outer.__doc__, - "Masked values are replaced by 0.") -outerproduct = outer - -def allequal (a, b, fill_value=True): - """ - Return True if all entries of a and b are equal, using - fill_value as a truth value where either or both are masked. - - Parameters - ---------- - a, b : array_like - Input arrays to compare. - fill_value : bool, optional - Whether masked values in a or b are considered equal (True) or not - (False). - - Returns - ------- - y : bool - Returns True if the two arrays are equal within the given - tolerance, False otherwise. If either array contains NaN, - then False is returned. 
- - See Also - -------- - all, any - numpy.ma.allclose - - Examples - -------- - >>> a = ma.array([1e10, 1e-7, 42.0], mask=[0, 0, 1]) - >>> a - masked_array(data = [10000000000.0 1e-07 --], - mask = [False False True], - fill_value=1e+20) - - >>> b = array([1e10, 1e-7, -42.0]) - >>> b - array([ 1.00000000e+10, 1.00000000e-07, -4.20000000e+01]) - >>> ma.allequal(a, b, fill_value=False) - False - >>> ma.allequal(a, b) - True - - """ - m = mask_or(getmask(a), getmask(b)) - if m is nomask: - x = getdata(a) - y = getdata(b) - d = umath.equal(x, y) - return d.all() - elif fill_value: - x = getdata(a) - y = getdata(b) - d = umath.equal(x, y) - dm = array(d, mask=m, copy=False) - return dm.filled(True).all(None) - else: - return False - -def allclose (a, b, masked_equal=True, rtol=1e-5, atol=1e-8, fill_value=None): - """ - Returns True if two arrays are element-wise equal within a tolerance. - - This function is equivalent to `allclose` except that masked values - are treated as equal (default) or unequal, depending on the `masked_equal` - argument. - - Parameters - ---------- - a, b : array_like - Input arrays to compare. - masked_equal : bool, optional - Whether masked values in `a` and `b` are considered equal (True) or not - (False). They are considered equal by default. - rtol : float, optional - Relative tolerance. The relative difference is equal to ``rtol * b``. - Default is 1e-5. - atol : float, optional - Absolute tolerance. The absolute difference is equal to `atol`. - Default is 1e-8. - fill_value : bool, optional - *Deprecated* - Whether masked values in `a` or `b` are considered equal - (True) or not (False). - - Returns - ------- - y : bool - Returns True if the two arrays are equal within the given - tolerance, False otherwise. If either array contains NaN, then - False is returned. - - See Also - -------- - all, any - numpy.allclose : the non-masked `allclose`. 
- - Notes - ----- - If the following equation is element-wise True, then `allclose` returns - True:: - - absolute(`a` - `b`) <= (`atol` + `rtol` * absolute(`b`)) - - Return True if all elements of `a` and `b` are equal subject to - given tolerances. - - Examples - -------- - >>> a = ma.array([1e10, 1e-7, 42.0], mask=[0, 0, 1]) - >>> a - masked_array(data = [10000000000.0 1e-07 --], - mask = [False False True], - fill_value = 1e+20) - >>> b = ma.array([1e10, 1e-8, -42.0], mask=[0, 0, 1]) - >>> ma.allclose(a, b) - False - - >>> a = ma.array([1e10, 1e-8, 42.0], mask=[0, 0, 1]) - >>> b = ma.array([1.00001e10, 1e-9, -42.0], mask=[0, 0, 1]) - >>> ma.allclose(a, b) - True - >>> ma.allclose(a, b, masked_equal=False) - False - - Masked values are not compared directly. - - >>> a = ma.array([1e10, 1e-8, 42.0], mask=[0, 0, 1]) - >>> b = ma.array([1.00001e10, 1e-9, 42.0], mask=[0, 0, 1]) - >>> ma.allclose(a, b) - True - >>> ma.allclose(a, b, masked_equal=False) - False - - """ - if fill_value is not None: - warnings.warn("The use of fill_value is deprecated."\ - " Please use masked_equal instead.") - masked_equal = fill_value - # - x = masked_array(a, copy=False) - y = masked_array(b, copy=False) - m = mask_or(getmask(x), getmask(y)) - xinf = np.isinf(masked_array(x, copy=False, mask=m)).filled(False) - # If we have some infs, they should fall at the same place. - if not np.all(xinf == filled(np.isinf(y), False)): - return False - # No infs at all - if not np.any(xinf): - d = filled(umath.less_equal(umath.absolute(x - y), - atol + rtol * umath.absolute(y)), - masked_equal) - return np.all(d) - if not np.all(filled(x[xinf] == y[xinf], masked_equal)): - return False - x = x[~xinf] - y = y[~xinf] - d = filled(umath.less_equal(umath.absolute(x - y), - atol + rtol * umath.absolute(y)), - masked_equal) - return np.all(d) - -#.............................................................................. 
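The `masked_equal` switch is the main behavioral difference between the `allclose` deleted above and plain `numpy.allclose`. A quick sketch, assuming a current `numpy.ma` (the values mirror the docstring examples):

```python
import numpy.ma as ma

a = ma.array([1e10, 1e-8, 42.0], mask=[0, 0, 1])
b = ma.array([1.00001e10, 1e-9, -1.0], mask=[0, 0, 1])

# Masked pairs count as equal by default...
print(ma.allclose(a, b))                      # True
# ...but can be forced to compare as unequal.
print(ma.allclose(a, b, masked_equal=False))  # False
```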
-def asarray(a, dtype=None, order=None):
-    """
-    Convert the input to a masked array of the given data-type.
-
-    No copy is performed if the input is already an `ndarray`. If `a` is
-    a subclass of `MaskedArray`, a base class `MaskedArray` is returned.
-
-    Parameters
-    ----------
-    a : array_like
-        Input data, in any form that can be converted to a masked array. This
-        includes lists, lists of tuples, tuples, tuples of tuples, tuples
-        of lists, ndarrays and masked arrays.
-    dtype : dtype, optional
-        By default, the data-type is inferred from the input data.
-    order : {'C', 'F'}, optional
-        Whether to use row-major ('C') or column-major ('FORTRAN') memory
-        representation. Default is 'C'.
-
-    Returns
-    -------
-    out : MaskedArray
-        Masked array interpretation of `a`.
-
-    See Also
-    --------
-    asanyarray : Similar to `asarray`, but conserves subclasses.
-
-    Examples
-    --------
-    >>> x = np.arange(10.).reshape(2, 5)
-    >>> x
-    array([[ 0.,  1.,  2.,  3.,  4.],
-           [ 5.,  6.,  7.,  8.,  9.]])
-    >>> np.ma.asarray(x)
-    masked_array(data =
-     [[ 0.  1.  2.  3.  4.]
-     [ 5.  6.  7.  8.  9.]],
-                 mask =
-     False,
-           fill_value = 1e+20)
-    >>> type(np.ma.asarray(x))
-    <class 'numpy.ma.core.MaskedArray'>
-
-    """
-    return masked_array(a, dtype=dtype, copy=False, keep_mask=True, subok=False)
-
-def asanyarray(a, dtype=None):
-    """
-    Convert the input to a masked array, conserving subclasses.
-
-    If `a` is a subclass of `MaskedArray`, its class is conserved.
-    No copy is performed if the input is already an `ndarray`.
-
-    Parameters
-    ----------
-    a : array_like
-        Input data, in any form that can be converted to an array.
-    dtype : dtype, optional
-        By default, the data-type is inferred from the input data.
-    order : {'C', 'F'}, optional
-        Whether to use row-major ('C') or column-major ('FORTRAN') memory
-        representation. Default is 'C'.
-
-    Returns
-    -------
-    out : MaskedArray
-        MaskedArray interpretation of `a`.
-
-    See Also
-    --------
-    asarray : Similar to `asanyarray`, but does not conserve subclass.
-
-    Examples
-    --------
-    >>> x = np.arange(10.).reshape(2, 5)
-    >>> x
-    array([[ 0.,  1.,  2.,  3.,  4.],
-           [ 5.,  6.,  7.,  8.,  9.]])
-    >>> np.ma.asanyarray(x)
-    masked_array(data =
-     [[ 0.  1.  2.  3.  4.]
-     [ 5.  6.  7.  8.  9.]],
-                 mask =
-     False,
-           fill_value = 1e+20)
-    >>> type(np.ma.asanyarray(x))
-    <class 'numpy.ma.core.MaskedArray'>
-
-    """
-    return masked_array(a, dtype=dtype, copy=False, keep_mask=True, subok=True)
-
-
-#####--------------------------------------------------------------------------
-#---- --- Pickling ---
-#####--------------------------------------------------------------------------
-def dump(a, F):
-    """
-    Pickle a masked array to a file.
-
-    This is a wrapper around ``cPickle.dump``.
-
-    Parameters
-    ----------
-    a : MaskedArray
-        The array to be pickled.
-    F : str or file-like object
-        The file to pickle `a` to. If a string, the full path to the file.
-
-    """
-    if not hasattr(F, 'readline'):
-        F = open(F, 'w')
-    return cPickle.dump(a, F)
-
-def dumps(a):
-    """
-    Return a string corresponding to the pickling of a masked array.
-
-    This is a wrapper around ``cPickle.dumps``.
-
-    Parameters
-    ----------
-    a : MaskedArray
-        The array for which the string representation of the pickle is
-        returned.
-
-    """
-    return cPickle.dumps(a)
-
-def load(F):
-    """
-    Wrapper around ``cPickle.load`` which accepts either a file-like object
-    or a filename.
-
-    Parameters
-    ----------
-    F : str or file
-        The file or file name to load.
-
-    See Also
-    --------
-    dump : Pickle an array
-
-    Notes
-    -----
-    This is different from `numpy.load`, which does not use cPickle but loads
-    the NumPy binary .npy format.
-
-    """
-    if not hasattr(F, 'readline'):
-        F = open(F, 'r')
-    return cPickle.load(F)
-
-def loads(strg):
-    """
-    Load a pickle from the current string.
-
-    The result of ``cPickle.loads(strg)`` is returned.
-
-    Parameters
-    ----------
-    strg : str
-        The string to load.
-
-    See Also
-    --------
-    dumps : Return a string corresponding to the pickling of a masked array.
- - """ - return cPickle.loads(strg) - -################################################################################ -def fromfile(file, dtype=float, count= -1, sep=''): - raise NotImplementedError("Not yet implemented. Sorry") - - -def fromflex(fxarray): - """ - Build a masked array from a suitable flexible-type array. - - The input array has to have a data-type with ``_data`` and ``_mask`` - fields. This type of array is output by `MaskedArray.toflex`. - - Parameters - ---------- - fxarray : ndarray - The structured input array, containing ``_data`` and ``_mask`` - fields. If present, other fields are discarded. - - Returns - ------- - result : MaskedArray - The constructed masked array. - - See Also - -------- - MaskedArray.toflex : Build a flexible-type array from a masked array. - - Examples - -------- - >>> x = np.ma.array(np.arange(9).reshape(3, 3), mask=[0] + [1, 0] * 4) - >>> rec = x.toflex() - >>> rec - array([[(0, False), (1, True), (2, False)], - [(3, True), (4, False), (5, True)], - [(6, False), (7, True), (8, False)]], - dtype=[('_data', '>> x2 = np.ma.fromflex(rec) - >>> x2 - masked_array(data = - [[0 -- 2] - [-- 4 --] - [6 -- 8]], - mask = - [[False True False] - [ True False True] - [False True False]], - fill_value = 999999) - - Extra fields can be present in the structured array but are discarded: - - >>> dt = [('_data', '>> rec2 = np.zeros((2, 2), dtype=dt) - >>> rec2 - array([[(0, False, 0.0), (0, False, 0.0)], - [(0, False, 0.0), (0, False, 0.0)]], - dtype=[('_data', '>> y = np.ma.fromflex(rec2) - >>> y - masked_array(data = - [[0 0] - [0 0]], - mask = - [[False False] - [False False]], - fill_value = 999999) - - """ - return masked_array(fxarray['_data'], mask=fxarray['_mask']) - - - -class _convert2ma: - """ - Convert functions from numpy to numpy.ma. - - Parameters - ---------- - _methodname : string - Name of the method to transform. 
- - """ - __doc__ = None - # - def __init__(self, funcname, params=None): - self._func = getattr(np, funcname) - self.__doc__ = self.getdoc() - self._extras = params or {} - # - def getdoc(self): - "Return the doc of the function (from the doc of the method)." - doc = getattr(self._func, '__doc__', None) - sig = get_object_signature(self._func) - if doc: - # Add the signature of the function at the beginning of the doc - if sig: - sig = "%s%s\n" % (self._func.__name__, sig) - doc = sig + doc - return doc - # - def __call__(self, a, *args, **params): - # Find the common parameters to the call and the definition - _extras = self._extras - common_params = set(params).intersection(_extras) - # Drop the common parameters from the call - for p in common_params: - _extras[p] = params.pop(p) - # Get the result - result = self._func.__call__(a, *args, **params).view(MaskedArray) - if "fill_value" in common_params: - result.fill_value = _extras.get("fill_value", None) - if "hardmask" in common_params: - result._hardmask = bool(_extras.get("hard_mask", False)) - return result - -arange = _convert2ma('arange', params=dict(fill_value=None, hardmask=False)) -clip = np.clip -diff = np.diff -empty = _convert2ma('empty', params=dict(fill_value=None, hardmask=False)) -empty_like = _convert2ma('empty_like') -frombuffer = _convert2ma('frombuffer') -fromfunction = _convert2ma('fromfunction') -identity = _convert2ma('identity', params=dict(fill_value=None, hardmask=False)) -indices = np.indices -ones = _convert2ma('ones', params=dict(fill_value=None, hardmask=False)) -ones_like = np.ones_like -squeeze = np.squeeze -zeros = _convert2ma('zeros', params=dict(fill_value=None, hardmask=False)) -zeros_like = np.zeros_like - -############################################################################### diff --git a/pythonPackages/numpy/numpy/ma/extras.py b/pythonPackages/numpy/numpy/ma/extras.py deleted file mode 100755 index a84e81cb97..0000000000 --- 
a/pythonPackages/numpy/numpy/ma/extras.py +++ /dev/null @@ -1,1925 +0,0 @@ -""" -Masked arrays add-ons. - -A collection of utilities for `numpy.ma`. - -:author: Pierre Gerard-Marchant -:contact: pierregm_at_uga_dot_edu -:version: $Id: extras.py 3473 2007-10-29 15:18:13Z jarrod.millman $ - -""" -__author__ = "Pierre GF Gerard-Marchant ($Author: jarrod.millman $)" -__version__ = '1.0' -__revision__ = "$Revision: 3473 $" -__date__ = '$Date: 2007-10-29 17:18:13 +0200 (Mon, 29 Oct 2007) $' - -__all__ = ['apply_along_axis', 'apply_over_axes', 'atleast_1d', 'atleast_2d', - 'atleast_3d', 'average', - 'clump_masked', 'clump_unmasked', 'column_stack', 'compress_cols', - 'compress_rowcols', 'compress_rows', 'count_masked', 'corrcoef', - 'cov', - 'diagflat', 'dot', 'dstack', - 'ediff1d', - 'flatnotmasked_contiguous', 'flatnotmasked_edges', - 'hsplit', 'hstack', - 'in1d', 'intersect1d', 'intersect1d_nu', - 'mask_cols', 'mask_rowcols', 'mask_rows', 'masked_all', - 'masked_all_like', 'median', 'mr_', - 'notmasked_contiguous', 'notmasked_edges', - 'polyfit', - 'row_stack', - 'setdiff1d', 'setmember1d', 'setxor1d', - 'unique', 'unique1d', 'union1d', - 'vander', 'vstack', - ] - -import itertools -import warnings - -import core as ma -from core import MaskedArray, MAError, add, array, asarray, concatenate, count, \ - filled, getmask, getmaskarray, make_mask_descr, masked, masked_array, \ - mask_or, nomask, ones, sort, zeros -#from core import * - -import numpy as np -from numpy import ndarray, array as nxarray -import numpy.core.umath as umath -from numpy.lib.index_tricks import AxisConcatenator -from numpy.linalg import lstsq - -from numpy.lib.utils import deprecate - -#............................................................................... 
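Before the function definitions, a short sketch of two of the utilities this deleted module exports (`count_masked` and `masked_all`), run through the public `numpy.ma` namespace rather than the removed file itself; illustrative only:

```python
# count_masked tallies masked entries; masked_all builds a fully masked array.
import numpy as np
import numpy.ma as ma

a = ma.array(np.arange(9).reshape(3, 3))
a[1, 0] = ma.masked
a[1, 2] = ma.masked

print(ma.count_masked(a))          # 2: total masked elements
print(ma.count_masked(a, axis=1))  # [0 2 0]: masked elements per row

m = ma.masked_all((2, 2))
print(m.mask.all())                # True: every element starts masked
```

`count_masked` is just `getmaskarray(arr).sum(axis)`, so with `axis=None` it returns a scalar and with an integer axis it returns one count per slice.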
-def issequence(seq): - """Is seq a sequence (ndarray, list or tuple)?""" - if isinstance(seq, (ndarray, tuple, list)): - return True - return False - -def count_masked(arr, axis=None): - """ - Count the number of masked elements along the given axis. - - Parameters - ---------- - arr : array_like - An array with (possibly) masked elements. - axis : int, optional - Axis along which to count. If None (default), a flattened - version of the array is used. - - Returns - ------- - count : int, ndarray - The total number of masked elements (axis=None) or the number - of masked elements along each slice of the given axis. - - See Also - -------- - MaskedArray.count : Count non-masked elements. - - Examples - -------- - >>> import numpy.ma as ma - >>> a = np.arange(9).reshape((3,3)) - >>> a = ma.array(a) - >>> a[1, 0] = ma.masked - >>> a[1, 2] = ma.masked - >>> a[2, 1] = ma.masked - >>> a - masked_array(data = - [[0 1 2] - [-- 4 --] - [6 -- 8]], - mask = - [[False False False] - [ True False True] - [False True False]], - fill_value=999999) - >>> ma.count_masked(a) - 3 - - When the `axis` keyword is used an array is returned. - - >>> ma.count_masked(a, axis=0) - array([1, 1, 1]) - >>> ma.count_masked(a, axis=1) - array([0, 2, 1]) - - """ - m = getmaskarray(arr) - return m.sum(axis) - -def masked_all(shape, dtype=float): - """ - Empty masked array with all elements masked. - - Return an empty masked array of the given shape and dtype, where all the - data are masked. - - Parameters - ---------- - shape : tuple - Shape of the required MaskedArray. - dtype : dtype, optional - Data type of the output. - - Returns - ------- - a : MaskedArray - A masked array with all data masked. - - See Also - -------- - masked_all_like : Empty masked array modelled on an existing array. 
- - Examples - -------- - >>> import numpy.ma as ma - >>> ma.masked_all((3, 3)) - masked_array(data = - [[-- -- --] - [-- -- --] - [-- -- --]], - mask = - [[ True True True] - [ True True True] - [ True True True]], - fill_value=1e+20) - - The `dtype` parameter defines the underlying data type. - - >>> a = ma.masked_all((3, 3)) - >>> a.dtype - dtype('float64') - >>> a = ma.masked_all((3, 3), dtype=np.int32) - >>> a.dtype - dtype('int32') - - """ - a = masked_array(np.empty(shape, dtype), - mask=np.ones(shape, make_mask_descr(dtype))) - return a - -def masked_all_like(arr): - """ - Empty masked array with the properties of an existing array. - - Return an empty masked array of the same shape and dtype as - the array `arr`, where all the data are masked. - - Parameters - ---------- - arr : ndarray - An array describing the shape and dtype of the required MaskedArray. - - Returns - ------- - a : MaskedArray - A masked array with all data masked. - - Raises - ------ - AttributeError - If `arr` doesn't have a shape attribute (i.e. not an ndarray) - - See Also - -------- - masked_all : Empty masked array with all elements masked. - - Examples - -------- - >>> import numpy.ma as ma - >>> arr = np.zeros((2, 3), dtype=np.float32) - >>> arr - array([[ 0., 0., 0.], - [ 0., 0., 0.]], dtype=float32) - >>> ma.masked_all_like(arr) - masked_array(data = - [[-- -- --] - [-- -- --]], - mask = - [[ True True True] - [ True True True]], - fill_value=1e+20) - - The dtype of the masked array matches the dtype of `arr`. 
- - >>> arr.dtype - dtype('float32') - >>> ma.masked_all_like(arr).dtype - dtype('float32') - - """ - a = np.empty_like(arr).view(MaskedArray) - a._mask = np.ones(a.shape, dtype=make_mask_descr(a.dtype)) - return a - - -#####-------------------------------------------------------------------------- -#---- --- Standard functions --- -#####-------------------------------------------------------------------------- -class _fromnxfunction: - """ - Defines a wrapper to adapt NumPy functions to masked arrays. - - - An instance of `_fromnxfunction` can be called with the same parameters - as the wrapped NumPy function. The docstring of `newfunc` is adapted from - the wrapped function as well, see `getdoc`. - - Parameters - ---------- - funcname : str - The name of the function to be adapted. The function should be - in the NumPy namespace (i.e. ``np.funcname``). - - """ - - def __init__(self, funcname): - self.__name__ = funcname - self.__doc__ = self.getdoc() - - def getdoc(self): - """ - Retrieve the docstring and signature from the function. - - The ``__doc__`` attribute of the function is used as the docstring for - the new masked array version of the function. A note on application - of the function to the mask is appended. - - .. warning:: - If the function docstring already contained a Notes section, the - new docstring will have two Notes sections instead of appending a note - to the existing section. - - Parameters - ---------- - None - - """ - npfunc = getattr(np, self.__name__, None) - doc = getattr(npfunc, '__doc__', None) - if doc: - sig = self.__name__ + ma.get_object_signature(npfunc) - locdoc = "Notes\n-----\nThe function is applied to both the _data"\ - " and the _mask, if any." 
- return '\n'.join((sig, doc, locdoc)) - return - - - def __call__(self, *args, **params): - func = getattr(np, self.__name__) - if len(args) == 1: - x = args[0] - if isinstance(x, ndarray): - _d = func(np.asarray(x), **params) - _m = func(getmaskarray(x), **params) - return masked_array(_d, mask=_m) - elif isinstance(x, tuple) or isinstance(x, list): - _d = func(tuple([np.asarray(a) for a in x]), **params) - _m = func(tuple([getmaskarray(a) for a in x]), **params) - return masked_array(_d, mask=_m) - else: - arrays = [] - args = list(args) - while len(args) > 0 and issequence(args[0]): - arrays.append(args.pop(0)) - res = [] - for x in arrays: - _d = func(np.asarray(x), *args, **params) - _m = func(getmaskarray(x), *args, **params) - res.append(masked_array(_d, mask=_m)) - return res - -#atleast_1d = _fromnxfunction('atleast_1d') -#atleast_2d = _fromnxfunction('atleast_2d') -#atleast_3d = _fromnxfunction('atleast_3d') -atleast_1d = np.atleast_1d -atleast_2d = np.atleast_2d -atleast_3d = np.atleast_3d - -vstack = row_stack = _fromnxfunction('vstack') -hstack = _fromnxfunction('hstack') -column_stack = _fromnxfunction('column_stack') -dstack = _fromnxfunction('dstack') - -hsplit = _fromnxfunction('hsplit') - -diagflat = _fromnxfunction('diagflat') - - -#####-------------------------------------------------------------------------- -#---- -#####-------------------------------------------------------------------------- -def flatten_inplace(seq): - """Flatten a sequence in place.""" - k = 0 - while (k != len(seq)): - while hasattr(seq[k], '__iter__'): - seq[k:(k + 1)] = seq[k] - k += 1 - return seq - - -def apply_along_axis(func1d, axis, arr, *args, **kwargs): - """ - (This docstring should be overwritten) - """ - arr = array(arr, copy=False, subok=True) - nd = arr.ndim - if axis < 0: - axis += nd - if (axis >= nd): - raise ValueError("axis must be less than arr.ndim; axis=%d, rank=%d." 
- % (axis, nd)) - ind = [0] * (nd - 1) - i = np.zeros(nd, 'O') - indlist = range(nd) - indlist.remove(axis) - i[axis] = slice(None, None) - outshape = np.asarray(arr.shape).take(indlist) - i.put(indlist, ind) - j = i.copy() - res = func1d(arr[tuple(i.tolist())], *args, **kwargs) - # if res is a number, then we have a smaller output array - asscalar = np.isscalar(res) - if not asscalar: - try: - len(res) - except TypeError: - asscalar = True - # Note: we shouldn't set the dtype of the output from the first result... - #...so we force the type to object, and build a list of dtypes - #...we'll just take the largest, to avoid some downcasting - dtypes = [] - if asscalar: - dtypes.append(np.asarray(res).dtype) - outarr = zeros(outshape, object) - outarr[tuple(ind)] = res - Ntot = np.product(outshape) - k = 1 - while k < Ntot: - # increment the index - ind[-1] += 1 - n = -1 - while (ind[n] >= outshape[n]) and (n > (1 - nd)): - ind[n - 1] += 1 - ind[n] = 0 - n -= 1 - i.put(indlist, ind) - res = func1d(arr[tuple(i.tolist())], *args, **kwargs) - outarr[tuple(ind)] = res - dtypes.append(asarray(res).dtype) - k += 1 - else: - res = array(res, copy=False, subok=True) - j = i.copy() - j[axis] = ([slice(None, None)] * res.ndim) - j.put(indlist, ind) - Ntot = np.product(outshape) - holdshape = outshape - outshape = list(arr.shape) - outshape[axis] = res.shape - dtypes.append(asarray(res).dtype) - outshape = flatten_inplace(outshape) - outarr = zeros(outshape, object) - outarr[tuple(flatten_inplace(j.tolist()))] = res - k = 1 - while k < Ntot: - # increment the index - ind[-1] += 1 - n = -1 - while (ind[n] >= holdshape[n]) and (n > (1 - nd)): - ind[n - 1] += 1 - ind[n] = 0 - n -= 1 - i.put(indlist, ind) - j.put(indlist, ind) - res = func1d(arr[tuple(i.tolist())], *args, **kwargs) - outarr[tuple(flatten_inplace(j.tolist()))] = res - dtypes.append(asarray(res).dtype) - k += 1 - max_dtypes = np.dtype(np.asarray(dtypes).max()) - if not hasattr(arr, '_mask'): - result = 
np.asarray(outarr, dtype=max_dtypes) - else: - result = asarray(outarr, dtype=max_dtypes) - result.fill_value = ma.default_fill_value(result) - return result -apply_along_axis.__doc__ = np.apply_along_axis.__doc__ - - -def apply_over_axes(func, a, axes): - """ - (This docstring will be overwritten) - """ - val = np.asarray(a) - msk = getmaskarray(a) - N = a.ndim - if array(axes).ndim == 0: - axes = (axes,) - for axis in axes: - if axis < 0: axis = N + axis - args = (val, axis) - res = ma.array(func(*(val, axis)), mask=func(*(msk, axis))) - if res.ndim == val.ndim: - (val, msk) = (res._data, res._mask) - else: - res = ma.expand_dims(res, axis) - if res.ndim == val.ndim: - (val, msk) = (res._data, res._mask) - else: - raise ValueError("Function is not returning"\ - " an array of correct shape") - return val -apply_over_axes.__doc__ = np.apply_over_axes.__doc__ - - -def average(a, axis=None, weights=None, returned=False): - """ - Return the weighted average of array over the given axis. - - Parameters - ---------- - a : array_like - Data to be averaged. - Masked entries are not taken into account in the computation. - axis : int, optional - Axis along which the variance is computed. The default is to compute - the variance of the flattened array. - weights : array_like, optional - The importance that each element has in the computation of the average. - The weights array can either be 1-D (in which case its length must be - the size of `a` along the given axis) or of the same shape as `a`. - If ``weights=None``, then all data in `a` are assumed to have a - weight equal to one. - returned : bool, optional - Flag indicating whether a tuple ``(result, sum of weights)`` - should be returned as output (True), or just the result (False). - Default is False. - - Returns - ------- - average, [sum_of_weights] : (tuple of) scalar or MaskedArray - The average along the specified axis. 
When returned is `True`, - return a tuple with the average as the first element and the sum - of the weights as the second element. The return type is `np.float64` - if `a` is of integer type, otherwise it is of the same type as `a`. - If returned, `sum_of_weights` is of the same type as `average`. - - Examples - -------- - >>> a = np.ma.array([1., 2., 3., 4.], mask=[False, False, True, True]) - >>> np.ma.average(a, weights=[3, 1, 0, 0]) - 1.25 - - >>> x = np.ma.arange(6.).reshape(3, 2) - >>> print x - [[ 0. 1.] - [ 2. 3.] - [ 4. 5.]] - >>> avg, sumweights = np.ma.average(x, axis=0, weights=[1, 2, 3], - ... returned=True) - >>> print avg - [2.66666666667 3.66666666667] - - """ - a = asarray(a) - mask = a.mask - ash = a.shape - if ash == (): - ash = (1,) - if axis is None: - if mask is nomask: - if weights is None: - n = a.sum(axis=None) - d = float(a.size) - else: - w = filled(weights, 0.0).ravel() - n = umath.add.reduce(a._data.ravel() * w) - d = umath.add.reduce(w) - del w - else: - if weights is None: - n = a.filled(0).sum(axis=None) - d = float(umath.add.reduce((~mask).ravel())) - else: - w = array(filled(weights, 0.0), float, mask=mask).ravel() - n = add.reduce(a.ravel() * w) - d = add.reduce(w) - del w - else: - if mask is nomask: - if weights is None: - d = ash[axis] * 1.0 - n = add.reduce(a._data, axis, dtype=float) - else: - w = filled(weights, 0.0) - wsh = w.shape - if wsh == (): - wsh = (1,) - if wsh == ash: - w = np.array(w, float, copy=0) - n = add.reduce(a * w, axis) - d = add.reduce(w, axis) - del w - elif wsh == (ash[axis],): - ni = ash[axis] - r = [None] * len(ash) - r[axis] = slice(None, None, 1) - w = eval ("w[" + repr(tuple(r)) + "] * ones(ash, float)") - n = add.reduce(a * w, axis, dtype=float) - d = add.reduce(w, axis, dtype=float) - del w, r - else: - raise ValueError, 'average: weights wrong shape.' 
- else: - if weights is None: - n = add.reduce(a, axis, dtype=float) - d = umath.add.reduce((-mask), axis=axis, dtype=float) - else: - w = filled(weights, 0.0) - wsh = w.shape - if wsh == (): - wsh = (1,) - if wsh == ash: - w = array(w, dtype=float, mask=mask, copy=0) - n = add.reduce(a * w, axis, dtype=float) - d = add.reduce(w, axis, dtype=float) - elif wsh == (ash[axis],): - ni = ash[axis] - r = [None] * len(ash) - r[axis] = slice(None, None, 1) - w = eval ("w[" + repr(tuple(r)) + \ - "] * masked_array(ones(ash, float), mask)") - n = add.reduce(a * w, axis, dtype=float) - d = add.reduce(w, axis, dtype=float) - else: - raise ValueError, 'average: weights wrong shape.' - del w - if n is masked or d is masked: - return masked - result = n / d - del n - - if isinstance(result, MaskedArray): - if ((axis is None) or (axis == 0 and a.ndim == 1)) and \ - (result.mask is nomask): - result = result._data - if returned: - if not isinstance(d, MaskedArray): - d = masked_array(d) - if isinstance(d, ndarray) and (not d.shape == result.shape): - d = ones(result.shape, dtype=float) * d - if returned: - return result, d - else: - return result - - - -def median(a, axis=None, out=None, overwrite_input=False): - """ - Compute the median along the specified axis. - - Returns the median of the array elements. - - Parameters - ---------- - a : array_like - Input array or object that can be converted to an array. - axis : int, optional - Axis along which the medians are computed. The default (None) is - to compute the median along a flattened version of the array. - out : ndarray, optional - Alternative output array in which to place the result. It must - have the same shape and buffer length as the expected output - but the type will be cast if necessary. - overwrite_input : bool, optional - If True, then allow use of memory of input array (a) for - calculations. The input array will be modified by the call to - median. 
This will save memory when you do not need to preserve - the contents of the input array. Treat the input as undefined, - but it will probably be fully or partially sorted. Default is - False. Note that, if `overwrite_input` is True, and the input - is not already an `ndarray`, an error will be raised. - - Returns - ------- - median : ndarray - A new array holding the result is returned unless out is - specified, in which case a reference to out is returned. - Return data-type is `float64` for integers and floats smaller than - `float64`, or the input data-type, otherwise. - - See Also - -------- - mean - - Notes - ----- - Given a vector ``V`` with ``N`` non masked values, the median of ``V`` - is the middle value of a sorted copy of ``V`` (``Vs``) - i.e. - ``Vs[(N-1)/2]``, when ``N`` is odd, or ``{Vs[N/2 - 1] + Vs[N/2]}/2`` - when ``N`` is even. - - Examples - -------- - >>> x = np.ma.array(np.arange(8), mask=[0]*4 + [1]*4) - >>> np.ma.extras.median(x) - 1.5 - - >>> x = np.ma.array(np.arange(10).reshape(2, 5), mask=[0]*6 + [1]*4) - >>> np.ma.extras.median(x) - 2.5 - >>> np.ma.extras.median(x, axis=-1, overwrite_input=True) - masked_array(data = [ 2. 5.], - mask = False, - fill_value = 1e+20) - - """ - def _median1D(data): - counts = filled(count(data), 0) - (idx, rmd) = divmod(counts, 2) - if rmd: - choice = slice(idx, idx + 1) - else: - choice = slice(idx - 1, idx + 1) - return data[choice].mean(0) - # - if overwrite_input: - if axis is None: - asorted = a.ravel() - asorted.sort() - else: - a.sort(axis=axis) - asorted = a - else: - asorted = sort(a, axis=axis) - if axis is None: - result = _median1D(asorted) - else: - result = apply_along_axis(_median1D, axis, asorted) - if out is not None: - out = result - return result - - - - -#.............................................................................. -def compress_rowcols(x, axis=None): - """ - Suppress the rows and/or columns of a 2-D array that contain - masked values. 
- - The suppression behavior is selected with the `axis` parameter. - - - If axis is None, both rows and columns are suppressed. - - If axis is 0, only rows are suppressed. - - If axis is 1 or -1, only columns are suppressed. - - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. Default is None. - - Returns - ------- - compressed_array : ndarray - The compressed array. - - Examples - -------- - >>> x = np.ma.array(np.arange(9).reshape(3, 3), mask=[[1, 0, 0], - ... [1, 0, 0], - ... [0, 0, 0]]) - >>> x - masked_array(data = - [[-- 1 2] - [-- 4 5] - [6 7 8]], - mask = - [[ True False False] - [ True False False] - [False False False]], - fill_value = 999999) - - >>> np.ma.extras.compress_rowcols(x) - array([[7, 8]]) - >>> np.ma.extras.compress_rowcols(x, 0) - array([[6, 7, 8]]) - >>> np.ma.extras.compress_rowcols(x, 1) - array([[1, 2], - [4, 5], - [7, 8]]) - - """ - x = asarray(x) - if x.ndim != 2: - raise NotImplementedError, "compress2d works for 2D arrays only." - m = getmask(x) - # Nothing is masked: return x - if m is nomask or not m.any(): - return x._data - # All is masked: return empty - if m.all(): - return nxarray([]) - # Builds a list of rows/columns indices - (idxr, idxc) = (range(len(x)), range(x.shape[1])) - masked = m.nonzero() - if not axis: - for i in np.unique(masked[0]): - idxr.remove(i) - if axis in [None, 1, -1]: - for j in np.unique(masked[1]): - idxc.remove(j) - return x._data[idxr][:, idxc] - -def compress_rows(a): - """ - Suppress whole rows of a 2-D array that contain masked values. - - This is equivalent to ``np.ma.extras.compress_rowcols(a, 0)``, see - `extras.compress_rowcols` for details. - - See Also - -------- - extras.compress_rowcols - - """ - return compress_rowcols(a, 0) - -def compress_cols(a): - """ - Suppress whole columns of a 2-D array that contain masked values. - - This is equivalent to ``np.ma.extras.compress_rowcols(a, 1)``, see - `extras.compress_rowcols` for details. 
- - See Also - -------- - extras.compress_rowcols - - """ - return compress_rowcols(a, 1) - -def mask_rowcols(a, axis=None): - """ - Mask rows and/or columns of a 2D array that contain masked values. - - Mask whole rows and/or columns of a 2D array that contain - masked values. The masking behavior is selected using the - `axis` parameter. - - - If `axis` is None, rows *and* columns are masked. - - If `axis` is 0, only rows are masked. - - If `axis` is 1 or -1, only columns are masked. - - Parameters - ---------- - a : array_like, MaskedArray - The array to mask. If not a MaskedArray instance (or if no array - elements are masked). The result is a MaskedArray with `mask` set - to `nomask` (False). Must be a 2D array. - axis : int, optional - Axis along which to perform the operation. If None, applies to a - flattened version of the array. - - Returns - ------- - a : MaskedArray - A modified version of the input array, masked depending on the value - of the `axis` parameter. - - Raises - ------ - NotImplementedError - If input array `a` is not 2D. - - See Also - -------- - mask_rows : Mask rows of a 2D array that contain masked values. - mask_cols : Mask cols of a 2D array that contain masked values. - masked_where : Mask where a condition is met. - - Notes - ----- - The input array's mask is modified by this function. - - Examples - -------- - >>> import numpy.ma as ma - >>> a = np.zeros((3, 3), dtype=np.int) - >>> a[1, 1] = 1 - >>> a - array([[0, 0, 0], - [0, 1, 0], - [0, 0, 0]]) - >>> a = ma.masked_equal(a, 1) - >>> a - masked_array(data = - [[0 0 0] - [0 -- 0] - [0 0 0]], - mask = - [[False False False] - [False True False] - [False False False]], - fill_value=999999) - >>> ma.mask_rowcols(a) - masked_array(data = - [[0 -- 0] - [-- -- --] - [0 -- 0]], - mask = - [[False True False] - [ True True True] - [False True False]], - fill_value=999999) - - """ - a = asarray(a) - if a.ndim != 2: - raise NotImplementedError, "compress2d works for 2D arrays only." 
- m = getmask(a) - # Nothing is masked: return a - if m is nomask or not m.any(): - return a - maskedval = m.nonzero() - a._mask = a._mask.copy() - if not axis: - a[np.unique(maskedval[0])] = masked - if axis in [None, 1, -1]: - a[:, np.unique(maskedval[1])] = masked - return a - -def mask_rows(a, axis=None): - """ - Mask rows of a 2D array that contain masked values. - - This function is a shortcut to ``mask_rowcols`` with `axis` equal to 0. - - See Also - -------- - mask_rowcols : Mask rows and/or columns of a 2D array. - masked_where : Mask where a condition is met. - - Examples - -------- - >>> import numpy.ma as ma - >>> a = np.zeros((3, 3), dtype=np.int) - >>> a[1, 1] = 1 - >>> a - array([[0, 0, 0], - [0, 1, 0], - [0, 0, 0]]) - >>> a = ma.masked_equal(a, 1) - >>> a - masked_array(data = - [[0 0 0] - [0 -- 0] - [0 0 0]], - mask = - [[False False False] - [False True False] - [False False False]], - fill_value=999999) - >>> ma.mask_rows(a) - masked_array(data = - [[0 0 0] - [-- -- --] - [0 0 0]], - mask = - [[False False False] - [ True True True] - [False False False]], - fill_value=999999) - - """ - return mask_rowcols(a, 0) - -def mask_cols(a, axis=None): - """ - Mask columns of a 2D array that contain masked values. - - This function is a shortcut to ``mask_rowcols`` with `axis` equal to 1. - - See Also - -------- - mask_rowcols : Mask rows and/or columns of a 2D array. - masked_where : Mask where a condition is met. 
- - Examples - -------- - >>> import numpy.ma as ma - >>> a = np.zeros((3, 3), dtype=np.int) - >>> a[1, 1] = 1 - >>> a - array([[0, 0, 0], - [0, 1, 0], - [0, 0, 0]]) - >>> a = ma.masked_equal(a, 1) - >>> a - masked_array(data = - [[0 0 0] - [0 -- 0] - [0 0 0]], - mask = - [[False False False] - [False True False] - [False False False]], - fill_value=999999) - >>> ma.mask_cols(a) - masked_array(data = - [[0 -- 0] - [0 -- 0] - [0 -- 0]], - mask = - [[False True False] - [False True False] - [False True False]], - fill_value=999999) - - """ - return mask_rowcols(a, 1) - - -def dot(a, b, strict=False): - """ - Return the dot product of two arrays. - - .. note:: - Works only with 2-D arrays at the moment. - - This function is the equivalent of `numpy.dot` that takes masked values - into account, see `numpy.dot` for details. - - Parameters - ---------- - a, b : ndarray - Inputs arrays. - strict : bool, optional - Whether masked data are propagated (True) or set to 0 (False) for the - computation. Default is False. - Propagating the mask means that if a masked value appears in a row or - column, the whole row or column is considered masked. - - See Also - -------- - numpy.dot : Equivalent function for ndarrays. - - Examples - -------- - >>> a = ma.array([[1, 2, 3], [4, 5, 6]], mask=[[1, 0, 0], [0, 0, 0]]) - >>> b = ma.array([[1, 2], [3, 4], [5, 6]], mask=[[1, 0], [0, 0], [0, 0]]) - >>> np.ma.dot(a, b) - masked_array(data = - [[21 26] - [45 64]], - mask = - [[False False] - [False False]], - fill_value = 999999) - >>> np.ma.dot(a, b, strict=True) - masked_array(data = - [[-- --] - [-- 64]], - mask = - [[ True True] - [ True False]], - fill_value = 999999) - - """ - #!!!: Works only with 2D arrays. 
There should be a way to get it to run with higher dimension - if strict and (a.ndim == 2) and (b.ndim == 2): - a = mask_rows(a) - b = mask_cols(b) - # - d = np.dot(filled(a, 0), filled(b, 0)) - # - am = (~getmaskarray(a)) - bm = (~getmaskarray(b)) - m = ~np.dot(am, bm) - return masked_array(d, mask=m) - -#####-------------------------------------------------------------------------- -#---- --- arraysetops --- -#####-------------------------------------------------------------------------- - -def ediff1d(arr, to_end=None, to_begin=None): - """ - Compute the differences between consecutive elements of an array. - - This function is the equivalent of `numpy.ediff1d` that takes masked - values into account, see `numpy.ediff1d` for details. - - See Also - -------- - numpy.ediff1d : Equivalent function for ndarrays. - - """ - arr = ma.asanyarray(arr).flat - ed = arr[1:] - arr[:-1] - arrays = [ed] - # - if to_begin is not None: - arrays.insert(0, to_begin) - if to_end is not None: - arrays.append(to_end) - # - if len(arrays) != 1: - # We'll save ourselves a copy of a potentially large array in the common - # case where neither to_begin or to_end was given. - ed = hstack(arrays) - # - return ed - - -def unique(ar1, return_index=False, return_inverse=False): - """ - Finds the unique elements of an array. - - Masked values are considered the same element (masked). The output array - is always a masked array. See `numpy.unique` for more details. - - See Also - -------- - numpy.unique : Equivalent function for ndarrays. - - """ - output = np.unique(ar1, - return_index=return_index, - return_inverse=return_inverse) - if isinstance(output, tuple): - output = list(output) - output[0] = output[0].view(MaskedArray) - output = tuple(output) - else: - output = output.view(MaskedArray) - return output - - -def intersect1d(ar1, ar2, assume_unique=False): - """ - Returns the unique elements common to both arrays. - - Masked values are considered equal one to the other. 
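[Editor's note: a short illustration, not part of the deleted file, of the `ediff1d`/`unique` mask semantics removed in this hunk, using only the public `numpy.ma` API:]

```python
import numpy as np
import numpy.ma as ma

# ma.ediff1d: a difference involving a masked operand is itself masked.
x = ma.array([1, 2, 4, 7], mask=[0, 0, 1, 0])
d = ma.ediff1d(x)   # differences 2-1, 4-2, 7-4; the middle two are masked

# ma.unique: masked entries are all treated as the same (masked) element,
# and the result is always a MaskedArray.
u = ma.unique(ma.array([1, 1, 2, 2, 3], mask=[0, 0, 0, 1, 1]))
```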
- The output is always a masked array. - - See `numpy.intersect1d` for more details. - - See Also - -------- - numpy.intersect1d : Equivalent function for ndarrays. - - Examples - -------- - >>> x = array([1, 3, 3, 3], mask=[0, 0, 0, 1]) - >>> y = array([3, 1, 1, 1], mask=[0, 0, 0, 1]) - >>> intersect1d(x, y) - masked_array(data = [1 3 --], - mask = [False False True], - fill_value = 999999) - - """ - if assume_unique: - aux = ma.concatenate((ar1, ar2)) - else: - # Might be faster than unique1d( intersect1d( ar1, ar2 ) )? - aux = ma.concatenate((unique(ar1), unique(ar2))) - aux.sort() - return aux[aux[1:] == aux[:-1]] - - -def setxor1d(ar1, ar2, assume_unique=False): - """ - Set exclusive-or of 1-D arrays with unique elements. - - The output is always a masked array. See `numpy.setxor1d` for more details. - - See Also - -------- - numpy.setxor1d : Equivalent function for ndarrays. - - """ - if not assume_unique: - ar1 = unique(ar1) - ar2 = unique(ar2) - - aux = ma.concatenate((ar1, ar2)) - if aux.size == 0: - return aux - aux.sort() - auxf = aux.filled() -# flag = ediff1d( aux, to_end = 1, to_begin = 1 ) == 0 - flag = ma.concatenate(([True], (auxf[1:] != auxf[:-1]), [True])) -# flag2 = ediff1d( flag ) == 0 - flag2 = (flag[1:] == flag[:-1]) - return aux[flag2] - -def in1d(ar1, ar2, assume_unique=False): - """ - Test whether each element of an array is also present in a second - array. - - The output is always a masked array. See `numpy.in1d` for more details. - - See Also - -------- - numpy.in1d : Equivalent function for ndarrays. - - Notes - ----- - .. versionadded:: 1.4.0 - - """ - if not assume_unique: - ar1, rev_idx = unique(ar1, return_inverse=True) - ar2 = unique(ar2) - - ar = ma.concatenate((ar1, ar2)) - # We need this to be a stable sort, so always use 'mergesort' - # here. The values from the first array should always come before - # the values from the second array. 
- order = ar.argsort(kind='mergesort') - sar = ar[order] - equal_adj = (sar[1:] == sar[:-1]) - flag = ma.concatenate((equal_adj, [False])) - indx = order.argsort(kind='mergesort')[:len(ar1)] - - if assume_unique: - return flag[indx] - else: - return flag[indx][rev_idx] - - -def union1d(ar1, ar2): - """ - Union of two arrays. - - The output is always a masked array. See `numpy.union1d` for more details. - - See also - -------- - numpy.union1d : Equivalent function for ndarrays. - - """ - return unique(ma.concatenate((ar1, ar2))) - - -def setdiff1d(ar1, ar2, assume_unique=False): - """ - Set difference of 1D arrays with unique elements. - - The output is always a masked array. See `numpy.setdiff1d` for more - details. - - See Also - -------- - numpy.setdiff1d : Equivalent function for ndarrays. - - Examples - -------- - >>> x = np.ma.array([1, 2, 3, 4], mask=[0, 1, 0, 1]) - >>> np.ma.extras.setdiff1d(x, [1, 2]) - masked_array(data = [3 --], - mask = [False True], - fill_value = 999999) - - """ - if not assume_unique: - ar1 = unique(ar1) - ar2 = unique(ar2) - aux = in1d(ar1, ar2, assume_unique=True) - if aux.size == 0: - return aux - else: - return ma.asarray(ar1)[aux == 0] - -@deprecate -def unique1d(ar1, return_index=False, return_inverse=False): - """ This function is deprecated. Use ma.unique() instead. """ - output = np.unique1d(ar1, - return_index=return_index, - return_inverse=return_inverse) - if isinstance(output, tuple): - output = list(output) - output[0] = output[0].view(MaskedArray) - output = tuple(output) - else: - output = output.view(MaskedArray) - return output - -@deprecate -def intersect1d_nu(ar1, ar2): - """ This function is deprecated. Use ma.intersect1d() instead.""" - # Might be faster than unique1d( intersect1d( ar1, ar2 ) )? - aux = ma.concatenate((unique1d(ar1), unique1d(ar2))) - aux.sort() - return aux[aux[1:] == aux[:-1]] - -@deprecate -def setmember1d(ar1, ar2): - """ This function is deprecated. 
Use ma.in1d() instead.""" - ar1 = ma.asanyarray(ar1) - ar2 = ma.asanyarray(ar2) - ar = ma.concatenate((ar1, ar2)) - b1 = ma.zeros(ar1.shape, dtype=np.int8) - b2 = ma.ones(ar2.shape, dtype=np.int8) - tt = ma.concatenate((b1, b2)) - - # We need this to be a stable sort, so always use 'mergesort' here. The - # values from the first array should always come before the values from the - # second array. - perm = ar.argsort(kind='mergesort') - aux = ar[perm] - aux2 = tt[perm] -# flag = ediff1d( aux, 1 ) == 0 - flag = ma.concatenate((aux[1:] == aux[:-1], [False])) - ii = ma.where(flag * aux2)[0] - aux = perm[ii + 1] - perm[ii + 1] = perm[ii] - perm[ii] = aux - # - indx = perm.argsort(kind='mergesort')[:len(ar1)] - # - return flag[indx] - - -#####-------------------------------------------------------------------------- -#---- --- Covariance --- -#####-------------------------------------------------------------------------- - - - - -def _covhelper(x, y=None, rowvar=True, allow_masked=True): - """ - Private function for the computation of covariance and correlation - coefficients. 
- - """ - x = ma.array(x, ndmin=2, copy=True, dtype=float) - xmask = ma.getmaskarray(x) - # Quick exit if we can't process masked data - if not allow_masked and xmask.any(): - raise ValueError("Cannot process masked data...") - # - if x.shape[0] == 1: - rowvar = True - # Make sure that rowvar is either 0 or 1 - rowvar = int(bool(rowvar)) - axis = 1 - rowvar - if rowvar: - tup = (slice(None), None) - else: - tup = (None, slice(None)) - # - if y is None: - xnotmask = np.logical_not(xmask).astype(int) - else: - y = array(y, copy=False, ndmin=2, dtype=float) - ymask = ma.getmaskarray(y) - if not allow_masked and ymask.any(): - raise ValueError("Cannot process masked data...") - if xmask.any() or ymask.any(): - if y.shape == x.shape: - # Define some common mask - common_mask = np.logical_or(xmask, ymask) - if common_mask is not nomask: - x.unshare_mask() - y.unshare_mask() - xmask = x._mask = y._mask = ymask = common_mask - x = ma.concatenate((x, y), axis) - xnotmask = np.logical_not(np.concatenate((xmask, ymask), axis)).astype(int) - x -= x.mean(axis=rowvar)[tup] - return (x, xnotmask, rowvar) - - -def cov(x, y=None, rowvar=True, bias=False, allow_masked=True, ddof=None): - """ - Estimate the covariance matrix. - - Except for the handling of missing data this function does the same as - `numpy.cov`. For more details and examples, see `numpy.cov`. - - By default, masked values are recognized as such. If `x` and `y` have the - same shape, a common mask is allocated: if ``x[i,j]`` is masked, then - ``y[i,j]`` will also be masked. - Setting `allow_masked` to False will raise an exception if values are - missing in either of the input arrays. - - Parameters - ---------- - x : array_like - A 1-D or 2-D array containing multiple variables and observations. - Each row of `x` represents a variable, and each column a single - observation of all those variables. Also see `rowvar` below. - y : array_like, optional - An additional set of variables and observations. 
`y` has the same - form as `x`. - rowvar : bool, optional - If `rowvar` is True (default), then each row represents a - variable, with observations in the columns. Otherwise, the relationship - is transposed: each column represents a variable, while the rows - contain observations. - bias : bool, optional - Default normalization (False) is by ``(N-1)``, where ``N`` is the - number of observations given (unbiased estimate). If `bias` is True, - then normalization is by ``N``. This keyword can be overridden by - the keyword ``ddof`` in numpy versions >= 1.5. - allow_masked : bool, optional - If True, masked values are propagated pair-wise: if a value is masked - in `x`, the corresponding value is masked in `y`. - If False, raises a `ValueError` exception when some values are missing. - ddof : {None, int}, optional - .. versionadded:: 1.5 - If not ``None`` normalization is by ``(N - ddof)``, where ``N`` is - the number of observations; this overrides the value implied by - ``bias``. The default value is ``None``. - - - Raises - ------ - ValueError: - Raised if some values are missing and `allow_masked` is False. - - See Also - -------- - numpy.cov - - """ - # Check inputs - if ddof is not None and ddof != int(ddof): - raise ValueError("ddof must be an integer") - # Set up ddof - if ddof is None: - if bias: - ddof = 0 - else: - ddof = 1 - - (x, xnotmask, rowvar) = _covhelper(x, y, rowvar, allow_masked) - if not rowvar: - fact = np.dot(xnotmask.T, xnotmask) * 1. - ddof - result = (dot(x.T, x.conj(), strict=False) / fact).squeeze() - else: - fact = np.dot(xnotmask, xnotmask.T) * 1. - ddof - result = (dot(x, x.T.conj(), strict=False) / fact).squeeze() - return result - - -def corrcoef(x, y=None, rowvar=True, bias=False, allow_masked=True, ddof=None): - """ - Return correlation coefficients of the input array. - - Except for the handling of missing data this function does the same as - `numpy.corrcoef`. For more details and examples, see `numpy.corrcoef`. 
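[Editor's note: a minimal sketch, not from the original file, of the pairwise mask propagation that `cov` documents above, via the public `numpy.ma` API:]

```python
import numpy as np
import numpy.ma as ma

# x[0] is masked, so observation 0 is dropped from *both* variables:
# the covariance is computed from the three complete pairs only.
x = ma.array([1.0, 2.0, 3.0, 4.0], mask=[1, 0, 0, 0])
y = ma.array([1.0, 2.0, 3.0, 4.0])
c = ma.cov(x, y)   # 2x2 matrix; every entry is the ddof=1 (co)variance of [2, 3, 4]
```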
- - Parameters - ---------- - x : array_like - A 1-D or 2-D array containing multiple variables and observations. - Each row of `x` represents a variable, and each column a single - observation of all those variables. Also see `rowvar` below. - y : array_like, optional - An additional set of variables and observations. `y` has the same - shape as `x`. - rowvar : bool, optional - If `rowvar` is True (default), then each row represents a - variable, with observations in the columns. Otherwise, the relationship - is transposed: each column represents a variable, while the rows - contain observations. - bias : bool, optional - Default normalization (False) is by ``(N-1)``, where ``N`` is the - number of observations given (unbiased estimate). If `bias` is 1, - then normalization is by ``N``. This keyword can be overridden by - the keyword ``ddof`` in numpy versions >= 1.5. - allow_masked : bool, optional - If True, masked values are propagated pair-wise: if a value is masked - in `x`, the corresponding value is masked in `y`. - If False, raises an exception. - ddof : {None, int}, optional - .. versionadded:: 1.5 - If not ``None`` normalization is by ``(N - ddof)``, where ``N`` is - the number of observations; this overrides the value implied by - ``bias``. The default value is ``None``. - - See Also - -------- - numpy.corrcoef : Equivalent function in top-level NumPy module. - cov : Estimate the covariance matrix. - - """ - # Check inputs - if ddof is not None and ddof != int(ddof): - raise ValueError("ddof must be an integer") - # Set up ddof - if ddof is None: - if bias: - ddof = 0 - else: - ddof = 1 - - # Get the data - (x, xnotmask, rowvar) = _covhelper(x, y, rowvar, allow_masked) - # Compute the covariance matrix - if not rowvar: - fact = np.dot(xnotmask.T, xnotmask) * 1. - ddof - c = (dot(x.T, x.conj(), strict=False) / fact).squeeze() - else: - fact = np.dot(xnotmask, xnotmask.T) * 1. 
- ddof - c = (dot(x, x.T.conj(), strict=False) / fact).squeeze() - # Check whether we have a scalar - try: - diag = ma.diagonal(c) - except ValueError: - return 1 - # - if xnotmask.all(): - _denom = ma.sqrt(ma.multiply.outer(diag, diag)) - else: - _denom = diagflat(diag) - n = x.shape[1 - rowvar] - if rowvar: - for i in range(n - 1): - for j in range(i + 1, n): - _x = mask_cols(vstack((x[i], x[j]))).var(axis=1, - ddof=1 - bias) - _denom[i, j] = _denom[j, i] = ma.sqrt(ma.multiply.reduce(_x)) - else: - for i in range(n - 1): - for j in range(i + 1, n): - _x = mask_cols(vstack((x[:, i], x[:, j]))).var(axis=1, - ddof=1 - bias) - _denom[i, j] = _denom[j, i] = ma.sqrt(ma.multiply.reduce(_x)) - return c / _denom - -#####-------------------------------------------------------------------------- -#---- --- Concatenation helpers --- -#####-------------------------------------------------------------------------- - -class MAxisConcatenator(AxisConcatenator): - """ - Translate slice objects to concatenation along an axis. - - For documentation on usage, see `mr_class`. - - See Also - -------- - mr_class - - """ - - def __init__(self, axis=0): - AxisConcatenator.__init__(self, axis, matrix=False) - - def __getitem__(self, key): - if isinstance(key, str): - raise MAError, "Unavailable for masked array." 
- if type(key) is not tuple: - key = (key,) - objs = [] - scalars = [] - final_dtypedescr = None - for k in range(len(key)): - scalar = False - if type(key[k]) is slice: - step = key[k].step - start = key[k].start - stop = key[k].stop - if start is None: - start = 0 - if step is None: - step = 1 - if type(step) is type(1j): - size = int(abs(step)) - newobj = np.linspace(start, stop, num=size) - else: - newobj = np.arange(start, stop, step) - elif type(key[k]) is str: - if (key[k] in 'rc'): - self.matrix = True - self.col = (key[k] == 'c') - continue - try: - self.axis = int(key[k]) - continue - except (ValueError, TypeError): - raise ValueError, "Unknown special directive" - elif type(key[k]) in np.ScalarType: - newobj = asarray([key[k]]) - scalars.append(k) - scalar = True - else: - newobj = key[k] - objs.append(newobj) - if isinstance(newobj, ndarray) and not scalar: - if final_dtypedescr is None: - final_dtypedescr = newobj.dtype - elif newobj.dtype > final_dtypedescr: - final_dtypedescr = newobj.dtype - if final_dtypedescr is not None: - for k in scalars: - objs[k] = objs[k].astype(final_dtypedescr) - res = concatenate(tuple(objs), axis=self.axis) - return self._retval(res) - -class mr_class(MAxisConcatenator): - """ - Translate slice objects to concatenation along the first axis. - - This is the masked array version of `lib.index_tricks.RClass`. - - See Also - -------- - lib.index_tricks.RClass - - Examples - -------- - >>> np.ma.mr_[np.ma.array([1,2,3]), 0, 0, np.ma.array([4,5,6])] - array([1, 2, 3, 0, 0, 4, 5, 6]) - - """ - def __init__(self): - MAxisConcatenator.__init__(self, 0) - -mr_ = mr_class() - -#####-------------------------------------------------------------------------- -#---- Find unmasked data --- -#####-------------------------------------------------------------------------- - -def flatnotmasked_edges(a): - """ - Find the indices of the first and last unmasked values. - - Expects a 1-D `MaskedArray`, returns None if all values are masked. 
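[Editor's note: an illustration, not part of the deleted file, of the `mr_` concatenator defined above; it is the masked-array counterpart of `np.r_`:]

```python
import numpy as np
import numpy.ma as ma

# mr_ concatenates along the first axis while preserving each
# operand's mask; scalars are inserted unmasked.
a = ma.array([1, 2, 3], mask=[0, 1, 0])
b = ma.array([4, 5], mask=[1, 0])
out = ma.mr_[a, 0, b]
```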
-
-    Parameters
-    ----------
-    a : array_like
-        Input 1-D `MaskedArray`.
-
-    Returns
-    -------
-    edges : ndarray or None
-        The indices of the first and last non-masked values in the array.
-        Returns None if all values are masked.
-
-    See Also
-    --------
-    flatnotmasked_contiguous, notmasked_contiguous, notmasked_edges
-
-    Notes
-    -----
-    Only accepts 1-D arrays.
-
-    Examples
-    --------
-    >>> a = np.arange(10)
-    >>> m = (a < 3) | (a > 8) | (a == 5)
-
-    >>> ma = np.ma.array(a, mask=m)
-    >>> np.array(ma[~ma.mask])
-    array([3, 4, 6, 7, 8])
-
-    >>> flatnotmasked_edges(ma)
-    array([3, 8])
-
-    >>> ma = np.ma.array(a, mask=np.ones_like(a))
-    >>> print flatnotmasked_edges(ma)
-    None
-
-    """
-    m = getmask(a)
-    if m is nomask or not np.any(m):
-        return [0, -1]
-    unmasked = np.flatnonzero(~m)
-    if len(unmasked) > 0:
-        return unmasked[[0, -1]]
-    else:
-        return None
-
-
-def notmasked_edges(a, axis=None):
-    """
-    Find the indices of the first and last unmasked values along an axis.
-
-    If all values are masked, return None. Otherwise, return a list
-    of two tuples, corresponding to the indices of the first and last
-    unmasked values respectively.
-
-    Parameters
-    ----------
-    a : array_like
-        The input array.
-    axis : int, optional
-        Axis along which to perform the operation.
-        If None (default), applies to a flattened version of the array.
-
-    Returns
-    -------
-    edges : ndarray or list
-        An array of start and end indices if there are any masked data in
-        the array. If there are no masked data in the array, `edges` is a
-        list of the first and last index.
-
-    See Also
-    --------
-    flatnotmasked_contiguous, flatnotmasked_edges, notmasked_contiguous
-
-    Examples
-    --------
-    >>> a = np.arange(9).reshape((3, 3))
-    >>> m = np.zeros_like(a)
-    >>> m[1:, 1:] = 1
-
-    >>> ma = np.ma.array(a, mask=m)
-    >>> np.array(ma[~ma.mask])
-    array([0, 1, 2, 3, 6])
-
-    >>> np.ma.extras.notmasked_edges(ma)
-    array([0, 6])
-
-    """
-    a = asarray(a)
-    if axis is None or a.ndim == 1:
-        return flatnotmasked_edges(a)
-    m = getmaskarray(a)
-    idx = array(np.indices(a.shape), mask=np.asarray([m] * a.ndim))
-    return [tuple([idx[i].min(axis).compressed() for i in range(a.ndim)]),
-            tuple([idx[i].max(axis).compressed() for i in range(a.ndim)]), ]
-
-
-def flatnotmasked_contiguous(a):
-    """
-    Find contiguous unmasked data in a masked array.
-
-    Parameters
-    ----------
-    a : ndarray
-        The input array.
-
-    Returns
-    -------
-    slice_list : list
-        A sorted sequence of slices (start index, end index).
-
-    See Also
-    --------
-    flatnotmasked_edges, notmasked_contiguous, notmasked_edges
-
-    Notes
-    -----
-    Only accepts 2-D arrays at most.
-
-    Examples
-    --------
-    >>> a = np.arange(10)
-    >>> mask = (a < 3) | (a > 8) | (a == 5)
-    >>> ma = np.ma.array(a, mask=mask)
-    >>> np.array(ma[~ma.mask])
-    array([3, 4, 6, 7, 8])
-
-    >>> np.ma.extras.flatnotmasked_contiguous(ma)
-    [slice(3, 4, None), slice(6, 8, None)]
-    >>> ma = np.ma.array(a, mask=np.ones_like(a))
-    >>> print np.ma.extras.flatnotmasked_contiguous(ma)
-    None
-
-    """
-    m = getmask(a)
-    if m is nomask:
-        return (a.size, [0, -1])
-    unmasked = np.flatnonzero(~m)
-    if len(unmasked) == 0:
-        return None
-    result = []
-    for (k, group) in itertools.groupby(enumerate(unmasked), lambda (i, x): i - x):
-        tmp = np.array([g[1] for g in group], int)
-#        result.append((tmp.size, tuple(tmp[[0,-1]])))
-        result.append(slice(tmp[0], tmp[-1]))
-    result.sort()
-    return result
-
-def notmasked_contiguous(a, axis=None):
-    """
-    Find contiguous unmasked data in a masked array along the given axis.
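[Editor's note: a sketch, not from the original file, contrasting the two flat helpers above; the exact stop bound of the returned slices differs between numpy versions, so only version-stable facts are asserted:]

```python
import numpy as np
import numpy.ma as ma

# flatnotmasked_edges reports only the outermost unmasked indices;
# flatnotmasked_contiguous reports every unbroken unmasked run as a slice.
a = ma.array(np.arange(10), mask=(np.arange(10) % 4 == 0))  # 0, 4, 8 masked
edges = ma.flatnotmasked_edges(a)   # first and last unmasked index
runs = ma.flatnotmasked_contiguous(a)  # three runs: 1-3, 5-7, 9
```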
- - Parameters - ---------- - a : array_like - The input array. - axis : int, optional - Axis along which to perform the operation. - If None (default), applies to a flattened version of the array. - - Returns - ------- - endpoints : list - A list of slices (start and end indexes) of unmasked indexes - in the array. - - See Also - -------- - flatnotmasked_edges, flatnotmasked_contiguous, notmasked_edges - - Notes - ----- - Only accepts 2-D arrays at most. - - Examples - -------- - >>> a = np.arange(9).reshape((3, 3)) - >>> mask = np.zeros_like(a) - >>> mask[1:, 1:] = 1 - - >>> ma = np.ma.array(a, mask=mask) - >>> np.array(ma[~ma.mask]) - array([0, 1, 2, 3, 6]) - - >>> np.ma.extras.notmasked_contiguous(ma) - [slice(0, 3, None), slice(6, 6, None)] - - """ - a = asarray(a) - nd = a.ndim - if nd > 2: - raise NotImplementedError, "Currently limited to atmost 2D array." - if axis is None or nd == 1: - return flatnotmasked_contiguous(a) - # - result = [] - # - other = (axis + 1) % 2 - idx = [0, 0] - idx[axis] = slice(None, None) - # - for i in range(a.shape[other]): - idx[other] = i - result.append(flatnotmasked_contiguous(a[idx])) - return result - - -def _ezclump(mask): - """ - Finds the clumps (groups of data with the same values) for a 1D bool array. - - Returns a series of slices. - """ - #def clump_masked(a): - if mask.ndim > 1: - mask = mask.ravel() - idx = (mask[1:] - mask[:-1]).nonzero() - idx = idx[0] + 1 - slices = [slice(left, right) - for (left, right) in zip(itertools.chain([0], idx), - itertools.chain(idx, [len(mask)]),)] - return slices - - -def clump_unmasked(a): - """ - Return list of slices corresponding to the unmasked clumps of a 1-D array. - - Parameters - ---------- - a : ndarray - A one-dimensional masked array. - - Returns - ------- - slices : list of slice - The list of slices, one for each continuous region of unmasked - elements in `a`. - - Notes - ----- - .. 
versionadded:: 1.4.0 - - Examples - -------- - >>> a = np.ma.masked_array(np.arange(10)) - >>> a[[0, 1, 2, 6, 8, 9]] = np.ma.masked - >>> np.ma.extras.clump_unmasked(a) - [slice(3, 6, None), slice(7, 8, None)] - - """ - mask = getattr(a, '_mask', nomask) - if mask is nomask: - return [slice(0, a.size)] - slices = _ezclump(mask) - if a[0] is masked: - result = slices[1::2] - else: - result = slices[::2] - return result - - -def clump_masked(a): - """ - Returns a list of slices corresponding to the masked clumps of a 1-D array. - - Parameters - ---------- - a : ndarray - A one-dimensional masked array. - - Returns - ------- - slices : list of slice - The list of slices, one for each continuous region of masked elements - in `a`. - - Notes - ----- - .. versionadded:: 1.4.0 - - Examples - -------- - >>> a = np.ma.masked_array(np.arange(10)) - >>> a[[0, 1, 2, 6, 8, 9]] = np.ma.masked - >>> np.ma.extras.clump_masked(a) - [slice(0, 3, None), slice(6, 7, None), slice(8, None, None)] - - """ - mask = ma.getmask(a) - if mask is nomask: - return [] - slices = _ezclump(mask) - if len(slices): - if a[0] is masked: - slices = slices[::2] - else: - slices = slices[1::2] - return slices - - - -#####-------------------------------------------------------------------------- -#---- Polynomial fit --- -#####-------------------------------------------------------------------------- - -def vander(x, n=None): - """ - Masked values in the input array result in rows of zeros. - """ - _vander = np.vander(x, n) - m = getmask(x) - if m is not nomask: - _vander[m] = 0 - return _vander -vander.__doc__ = ma.doc_note(np.vander.__doc__, vander.__doc__) - - -def polyfit(x, y, deg, rcond=None, full=False): - """ - Any masked values in x is propagated in y, and vice-versa. 
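[Editor's note: the one-line docstring above is terse; a sketch, not part of the deleted file, of the mask propagation it describes — masking `x[i]` removes the pair `(x[i], y[i])` from the fit:]

```python
import numpy as np
import numpy.ma as ma

# Four points on the line y = 2x + 1, plus one garbage point that is
# excluded from the fit because its x-value is masked.
x = ma.array([0.0, 1.0, 2.0, 3.0, 4.0], mask=[0, 0, 1, 0, 0])
y = np.array([1.0, 3.0, 999.0, 7.0, 9.0])
coeffs = ma.polyfit(x, y, 1)   # ≈ [2.0, 1.0]
```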
- """ - order = int(deg) + 1 - x = asarray(x) - mx = getmask(x) - y = asarray(y) - if y.ndim == 1: - m = mask_or(mx, getmask(y)) - elif y.ndim == 2: - y = mask_rows(y) - my = getmask(y) - if my is not nomask: - m = mask_or(mx, my[:, 0]) - else: - m = mx - else: - raise TypeError, "Expected a 1D or 2D array for y!" - if m is not nomask: - x[m] = y[m] = masked - # Set rcond - if rcond is None : - rcond = len(x) * np.finfo(x.dtype).eps - # Scale x to improve condition number - scale = abs(x).max() - if scale != 0 : - x = x / scale - # solve least squares equation for powers of x - v = vander(x, order) - c, resids, rank, s = lstsq(v, y.filled(0), rcond) - # warn on rank reduction, which indicates an ill conditioned matrix - if rank != order and not full: - warnings.warn("Polyfit may be poorly conditioned", np.RankWarning) - # scale returned coefficients - if scale != 0 : - if c.ndim == 1 : - c /= np.vander([scale], order)[0] - else : - c /= np.vander([scale], order).T - if full : - return c, resids, rank, s, rcond - else : - return c -polyfit.__doc__ = ma.doc_note(np.polyfit.__doc__, polyfit.__doc__) - -################################################################################ diff --git a/pythonPackages/numpy/numpy/ma/mrecords.py b/pythonPackages/numpy/numpy/ma/mrecords.py deleted file mode 100755 index 79fa6e15b8..0000000000 --- a/pythonPackages/numpy/numpy/ma/mrecords.py +++ /dev/null @@ -1,720 +0,0 @@ -""":mod:`numpy.ma..mrecords` - -Defines the equivalent of :class:`numpy.recarrays` for masked arrays, -where fields can be accessed as attributes. -Note that :class:`numpy.ma.MaskedArray` already supports structured datatypes -and the masking of individual fields. - -:author: Pierre Gerard-Marchant -""" -#!!!: * We should make sure that no field is called '_mask','mask','_fieldmask', -#!!!: or whatever restricted keywords. -#!!!: An idea would be to no bother in the first place, and then rename the -#!!!: invalid fields with a trailing underscore... 
-#!!!: Maybe we could just overload the parser function ? - - -__author__ = "Pierre GF Gerard-Marchant" - -import sys - -import numpy as np -from numpy import bool_, dtype, \ - ndarray, recarray, array as narray -import numpy.core.numerictypes as ntypes -from numpy.core.records import fromarrays as recfromarrays, \ - fromrecords as recfromrecords - -_byteorderconv = np.core.records._byteorderconv -_typestr = ntypes._typestr - -import numpy.ma as ma -from numpy.ma import MAError, MaskedArray, masked, nomask, masked_array, \ - getdata, getmaskarray, filled - -_check_fill_value = ma.core._check_fill_value - -import warnings - -__all__ = ['MaskedRecords', 'mrecarray', - 'fromarrays', 'fromrecords', 'fromtextfile', 'addfield', - ] - -reserved_fields = ['_data', '_mask', '_fieldmask', 'dtype'] - -def _getformats(data): - "Returns the formats of each array of arraylist as a comma-separated string." - if hasattr(data, 'dtype'): - return ",".join([desc[1] for desc in data.dtype.descr]) - - formats = '' - for obj in data: - obj = np.asarray(obj) - formats += _typestr[obj.dtype.type] - if issubclass(obj.dtype.type, ntypes.flexible): - formats += `obj.itemsize` - formats += ',' - return formats[:-1] - -def _checknames(descr, names=None): - """Checks that the field names of the descriptor ``descr`` are not some -reserved keywords. If this is the case, a default 'f%i' is substituted. -If the argument `names` is not None, updates the field names to valid names. 
- """ - ndescr = len(descr) - default_names = ['f%i' % i for i in range(ndescr)] - if names is None: - new_names = default_names - else: - if isinstance(names, (tuple, list)): - new_names = names - elif isinstance(names, str): - new_names = names.split(',') - else: - raise NameError("illegal input names %s" % `names`) - nnames = len(new_names) - if nnames < ndescr: - new_names += default_names[nnames:] - ndescr = [] - for (n, d, t) in zip(new_names, default_names, descr.descr): - if n in reserved_fields: - if t[0] in reserved_fields: - ndescr.append((d, t[1])) - else: - ndescr.append(t) - else: - ndescr.append((n, t[1])) - return np.dtype(ndescr) - - -def _get_fieldmask(self): - mdescr = [(n, '|b1') for n in self.dtype.names] - fdmask = np.empty(self.shape, dtype=mdescr) - fdmask.flat = tuple([False] * len(mdescr)) - return fdmask - - -class MaskedRecords(MaskedArray, object): - """ - -*IVariables*: - _data : {recarray} - Underlying data, as a record array. - _mask : {boolean array} - Mask of the records. A record is masked when all its fields are masked. - _fieldmask : {boolean recarray} - Record array of booleans, setting the mask of each individual field of each record. - _fill_value : {record} - Filling values for each field. - """ - #............................................ 
- def __new__(cls, shape, dtype=None, buf=None, offset=0, strides=None, - formats=None, names=None, titles=None, - byteorder=None, aligned=False, - mask=nomask, hard_mask=False, fill_value=None, keep_mask=True, - copy=False, - **options): - # - self = recarray.__new__(cls, shape, dtype=dtype, buf=buf, offset=offset, - strides=strides, formats=formats, names=names, - titles=titles, byteorder=byteorder, - aligned=aligned,) - # - mdtype = ma.make_mask_descr(self.dtype) - if mask is nomask or not np.size(mask): - if not keep_mask: - self._mask = tuple([False] * len(mdtype)) - else: - mask = np.array(mask, copy=copy) - if mask.shape != self.shape: - (nd, nm) = (self.size, mask.size) - if nm == 1: - mask = np.resize(mask, self.shape) - elif nm == nd: - mask = np.reshape(mask, self.shape) - else: - msg = "Mask and data not compatible: data size is %i, " + \ - "mask size is %i." - raise MAError(msg % (nd, nm)) - copy = True - if not keep_mask: - self.__setmask__(mask) - self._sharedmask = True - else: - if mask.dtype == mdtype: - _mask = mask - else: - _mask = np.array([tuple([m] * len(mdtype)) for m in mask], - dtype=mdtype) - self._mask = _mask - return self - #...................................................... - def __array_finalize__(self, obj): - # Make sure we have a _fieldmask by default .. - _mask = getattr(obj, '_mask', None) - if _mask is None: - objmask = getattr(obj, '_mask', nomask) - _dtype = ndarray.__getattribute__(self, 'dtype') - if objmask is nomask: - _mask = ma.make_mask_none(self.shape, dtype=_dtype) - else: - mdescr = ma.make_mask_descr(_dtype) - _mask = narray([tuple([m] * len(mdescr)) for m in objmask], - dtype=mdescr).view(recarray) - # Update some of the attributes - _dict = self.__dict__ - _dict.update(_mask=_mask) - self._update_from(obj) - if _dict['_baseclass'] == ndarray: - _dict['_baseclass'] = recarray - return - - - def _getdata(self): - "Returns the data as a recarray." 
- return ndarray.view(self, recarray) - _data = property(fget=_getdata) - - def _getfieldmask(self): - "Alias to mask" - return self._mask - _fieldmask = property(fget=_getfieldmask) - - def __len__(self): - "Returns the length" - # We have more than one record - if self.ndim: - return len(self._data) - # We have only one record: return the nb of fields - return len(self.dtype) - - def __getattribute__(self, attr): - try: - return object.__getattribute__(self, attr) - except AttributeError: # attr must be a fieldname - pass - fielddict = ndarray.__getattribute__(self, 'dtype').fields - try: - res = fielddict[attr][:2] - except (TypeError, KeyError): - raise AttributeError, "record array has no attribute %s" % attr - # So far, so good... - _localdict = ndarray.__getattribute__(self, '__dict__') - _data = ndarray.view(self, _localdict['_baseclass']) - obj = _data.getfield(*res) - if obj.dtype.fields: - raise NotImplementedError("MaskedRecords is currently limited to"\ - "simple records...") - # Get some special attributes - # Reset the object's mask - hasmasked = False - _mask = _localdict.get('_mask', None) - if _mask is not None: - try: - _mask = _mask[attr] - except IndexError: - # Couldn't find a mask: use the default (nomask) - pass - hasmasked = _mask.view((np.bool, (len(_mask.dtype) or 1))).any() - if (obj.shape or hasmasked): - obj = obj.view(MaskedArray) - obj._baseclass = ndarray - obj._isfield = True - obj._mask = _mask - # Reset the field values - _fill_value = _localdict.get('_fill_value', None) - if _fill_value is not None: - try: - obj._fill_value = _fill_value[attr] - except ValueError: - obj._fill_value = None - else: - obj = obj.item() - return obj - - - def __setattr__(self, attr, val): - "Sets the attribute attr to the value val." - # Should we call __setmask__ first ? 
- if attr in ['mask', 'fieldmask']: - self.__setmask__(val) - return - # Create a shortcut (so that we don't have to call getattr all the time) - _localdict = object.__getattribute__(self, '__dict__') - # Check whether we're creating a new field - newattr = attr not in _localdict - try: - # Is attr a generic attribute ? - ret = object.__setattr__(self, attr, val) - except: - # Not a generic attribute: exit if it's not a valid field - fielddict = ndarray.__getattribute__(self, 'dtype').fields or {} - optinfo = ndarray.__getattribute__(self, '_optinfo') or {} - if not (attr in fielddict or attr in optinfo): - exctype, value = sys.exc_info()[:2] - raise exctype, value - else: - # Get the list of names ...... - fielddict = ndarray.__getattribute__(self, 'dtype').fields or {} - # Check the attribute - if attr not in fielddict: - return ret - if newattr: # We just added this one - try: # or this setattr worked on an internal - # attribute. - object.__delattr__(self, attr) - except: - return ret - # Let's try to set the field - try: - res = fielddict[attr][:2] - except (TypeError, KeyError): - raise AttributeError, "record array has no attribute %s" % attr - # - if val is masked: - _fill_value = _localdict['_fill_value'] - if _fill_value is not None: - dval = _localdict['_fill_value'][attr] - else: - dval = val - mval = True - else: - dval = filled(val) - mval = getmaskarray(val) - obj = ndarray.__getattribute__(self, '_data').setfield(dval, *res) - _localdict['_mask'].__setitem__(attr, mval) - return obj - - - def __getitem__(self, indx): - """Returns all the fields sharing the same fieldname base. -The fieldname base is either `_data` or `_mask`.""" - _localdict = self.__dict__ - _mask = ndarray.__getattribute__(self, '_mask') - _data = ndarray.view(self, _localdict['_baseclass']) - # We want a field ........ 
- if isinstance(indx, basestring): - #!!!: Make sure _sharedmask is True to propagate back to _fieldmask - #!!!: Don't use _set_mask, there are some copies being made... - #!!!: ...that break propagation - #!!!: Don't force the mask to nomask, that wrecks easy masking - obj = _data[indx].view(MaskedArray) - obj._mask = _mask[indx] - obj._sharedmask = True - fval = _localdict['_fill_value'] - if fval is not None: - obj._fill_value = fval[indx] - # Force to masked if the mask is True - if not obj.ndim and obj._mask: - return masked - return obj - # We want some elements .. - # First, the data ........ - obj = np.array(_data[indx], copy=False).view(mrecarray) - obj._mask = np.array(_mask[indx], copy=False).view(recarray) - return obj - #.... - def __setitem__(self, indx, value): - "Sets the given record to value." - MaskedArray.__setitem__(self, indx, value) - if isinstance(indx, basestring): - self._mask[indx] = ma.getmaskarray(value) - - - def __str__(self): - "Calculates the string representation." - if self.size > 1: - mstr = ["(%s)" % ",".join([str(i) for i in s]) - for s in zip(*[getattr(self, f) for f in self.dtype.names])] - return "[%s]" % ", ".join(mstr) - else: - mstr = ["%s" % ",".join([str(i) for i in s]) - for s in zip([getattr(self, f) for f in self.dtype.names])] - return "(%s)" % ", ".join(mstr) - # - def __repr__(self): - "Calculates the repr representation." - _names = self.dtype.names - fmt = "%%%is : %%s" % (max([len(n) for n in _names]) + 4,) - reprstr = [fmt % (f, getattr(self, f)) for f in self.dtype.names] - reprstr.insert(0, 'masked_records(') - reprstr.extend([fmt % (' fill_value', self.fill_value), - ' )']) - return str("\n".join(reprstr)) -# #...................................................... - def view(self, dtype=None, type=None): - """Returns a view of the mrecarray.""" - # OK, basic copy-paste from MaskedArray.view... 
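The hand-rolled `__str__`/`__repr__` above substitute the masked-print marker for masked fields. Modern `numpy.ma` renders a masked record the same way, which a short check makes concrete:

```python
import numpy.ma as ma

rec = ma.array([(1, 2.0)], mask=[(True, False)],
               dtype=[('a', int), ('b', float)])
# Masked fields print as the masked_print_option marker ('--'),
# which is what the hand-rolled __str__/__repr__ above assemble.
text = str(rec[0])
```

With field `a` masked, `text` reads along the lines of `(--, 2.0)`.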
- if dtype is None: - if type is None: - output = ndarray.view(self) - else: - output = ndarray.view(self, type) - # Here again... - elif type is None: - try: - if issubclass(dtype, ndarray): - output = ndarray.view(self, dtype) - dtype = None - else: - output = ndarray.view(self, dtype) - # OK, there's the change - except TypeError: - dtype = np.dtype(dtype) - # we need to revert to MaskedArray, but keeping the possibility - # ...of subclasses (eg, TimeSeriesRecords), so we'll force a type - # ...set to the first parent - if dtype.fields is None: - basetype = self.__class__.__bases__[0] - output = self.__array__().view(dtype, basetype) - output._update_from(self) - else: - output = ndarray.view(self, dtype) - output._fill_value = None - else: - output = ndarray.view(self, dtype, type) - # Update the mask, just like in MaskedArray.view - if (getattr(output, '_mask', nomask) is not nomask): - mdtype = ma.make_mask_descr(output.dtype) - output._mask = self._mask.view(mdtype, ndarray) - output._mask.shape = output.shape - return output - - def harden_mask(self): - "Forces the mask to hard" - self._hardmask = True - def soften_mask(self): - "Forces the mask to soft" - self._hardmask = False - - def copy(self): - """Returns a copy of the masked record.""" - _localdict = self.__dict__ - copied = self._data.copy().view(type(self)) - copied._mask = self._mask.copy() - return copied - - def tolist(self, fill_value=None): - """Copy the data portion of the array to a hierarchical python - list and returns that list. - - Data items are converted to the nearest compatible Python - type. Masked values are converted to fill_value. If - fill_value is None, the corresponding entries in the output - list will be ``None``. 
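`harden_mask`/`soften_mask` above just flip `_hardmask`, but the flag changes assignment semantics: a hard mask refuses to be unmasked by writes. A small demonstration with a plain masked array:

```python
import numpy.ma as ma

x = ma.array([1, 2, 3], mask=[0, 1, 0])
x.harden_mask()   # a hard mask refuses to be unmasked by assignment
x[1] = 99         # silently ignored for the masked entry
hard = x.mask.tolist()
x.soften_mask()
x[1] = 99         # now the assignment both writes and unmasks
soft = x.mask.tolist()
```

After hardening, the masked entry survives the write (`hard` keeps `True` at index 1); after softening, the same write clears it.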
- - """ - if fill_value is not None: - return self.filled(fill_value).tolist() - result = narray(self.filled().tolist(), dtype=object) - mask = narray(self._mask.tolist()) - result[mask] = None - return result.tolist() - #-------------------------------------------- - # Pickling - def __getstate__(self): - """Return the internal state of the masked array, for pickling purposes. - - """ - state = (1, - self.shape, - self.dtype, - self.flags.fnc, - self._data.tostring(), - self._mask.tostring(), - self._fill_value, - ) - return state - # - def __setstate__(self, state): - """Restore the internal state of the masked array, for pickling purposes. - ``state`` is typically the output of the ``__getstate__`` output, and is a - 5-tuple: - - - class name - - a tuple giving the shape of the data - - a typecode for the data - - a binary string for the data - - a binary string for the mask. - - """ - (ver, shp, typ, isf, raw, msk, flv) = state - ndarray.__setstate__(self, (shp, typ, isf, raw)) - mdtype = dtype([(k, bool_) for (k, _) in self.dtype.descr]) - self.__dict__['_mask'].__setstate__((shp, mdtype, isf, msk)) - self.fill_value = flv - # - def __reduce__(self): - """Return a 3-tuple for pickling a MaskedArray. - - """ - return (_mrreconstruct, - (self.__class__, self._baseclass, (0,), 'b',), - self.__getstate__()) - -def _mrreconstruct(subtype, baseclass, baseshape, basetype,): - """Internal function that builds a new MaskedArray from the - information stored in a pickle. 
- - """ - _data = ndarray.__new__(baseclass, baseshape, basetype).view(subtype) -# _data._mask = ndarray.__new__(ndarray, baseshape, 'b1') -# return _data - _mask = ndarray.__new__(ndarray, baseshape, 'b1') - return subtype.__new__(subtype, _data, mask=_mask, dtype=basetype,) - - -mrecarray = MaskedRecords - -#####--------------------------------------------------------------------------- -#---- --- Constructors --- -#####--------------------------------------------------------------------------- - -def fromarrays(arraylist, dtype=None, shape=None, formats=None, - names=None, titles=None, aligned=False, byteorder=None, - fill_value=None): - """Creates a mrecarray from a (flat) list of masked arrays. - - Parameters - ---------- - arraylist : sequence - A list of (masked) arrays. Each element of the sequence is first converted - to a masked array if needed. If a 2D array is passed as argument, it is - processed line by line - dtype : {None, dtype}, optional - Data type descriptor. - shape : {None, integer}, optional - Number of records. If None, shape is defined from the shape of the - first array in the list. - formats : {None, sequence}, optional - Sequence of formats for each individual field. If None, the formats will - be autodetected by inspecting the fields and selecting the highest dtype - possible. - names : {None, sequence}, optional - Sequence of the names of each field. - fill_value : {None, sequence}, optional - Sequence of data to be used as filling values. - - Notes - ----- - Lists of tuples should be preferred over lists of lists for faster processing. 
- """ - datalist = [getdata(x) for x in arraylist] - masklist = [np.atleast_1d(getmaskarray(x)) for x in arraylist] - _array = recfromarrays(datalist, - dtype=dtype, shape=shape, formats=formats, - names=names, titles=titles, aligned=aligned, - byteorder=byteorder).view(mrecarray) - _array._mask.flat = zip(*masklist) - if fill_value is not None: - _array.fill_value = fill_value - return _array - - -#.............................................................................. -def fromrecords(reclist, dtype=None, shape=None, formats=None, names=None, - titles=None, aligned=False, byteorder=None, - fill_value=None, mask=nomask): - """Creates a MaskedRecords from a list of records. - - Parameters - ---------- - reclist : sequence - A list of records. Each element of the sequence is first converted - to a masked array if needed. If a 2D array is passed as argument, it is - processed line by line - dtype : {None, dtype}, optional - Data type descriptor. - shape : {None,int}, optional - Number of records. If None, ``shape`` is defined from the shape of the - first array in the list. - formats : {None, sequence}, optional - Sequence of formats for each individual field. If None, the formats will - be autodetected by inspecting the fields and selecting the highest dtype - possible. - names : {None, sequence}, optional - Sequence of the names of each field. - fill_value : {None, sequence}, optional - Sequence of data to be used as filling values. - mask : {nomask, sequence}, optional. - External mask to apply on the data. - - Notes - ----- - Lists of tuples should be preferred over lists of lists for faster processing. - """ - # Grab the initial _fieldmask, if needed: - _mask = getattr(reclist, '_mask', None) - # Get the list of records..... 
- try: - nfields = len(reclist[0]) - except TypeError: - nfields = len(reclist[0].dtype) - if isinstance(reclist, ndarray): - # Make sure we don't have some hidden mask - if isinstance(reclist, MaskedArray): - reclist = reclist.filled().view(ndarray) - # Grab the initial dtype, just in case - if dtype is None: - dtype = reclist.dtype - reclist = reclist.tolist() - mrec = recfromrecords(reclist, dtype=dtype, shape=shape, formats=formats, - names=names, titles=titles, - aligned=aligned, byteorder=byteorder).view(mrecarray) - # Set the fill_value if needed - if fill_value is not None: - mrec.fill_value = fill_value - # Now, let's deal w/ the mask - if mask is not nomask: - mask = np.array(mask, copy=False) - maskrecordlength = len(mask.dtype) - if maskrecordlength: - mrec._mask.flat = mask - elif len(mask.shape) == 2: - mrec._mask.flat = [tuple(m) for m in mask] - else: - mrec.__setmask__(mask) - if _mask is not None: - mrec._mask[:] = _mask - return mrec - -def _guessvartypes(arr): - """Tries to guess the dtypes of the str_ ndarray `arr`, by testing element-wise -conversion. Returns a list of dtypes. -The array is first converted to ndarray. If the array is 2D, the test is performed -on the first line. An exception is raised if the file is 3D or more. - """ - vartypes = [] - arr = np.asarray(arr) - if len(arr.shape) == 2 : - arr = arr[0] - elif len(arr.shape) > 2: - raise ValueError, "The array should be 2D at most!" - # Start the conversion loop ....... - for f in arr: - try: - int(f) - except ValueError: - try: - float(f) - except ValueError: - try: - val = complex(f) - except ValueError: - vartypes.append(arr.dtype) - else: - vartypes.append(np.dtype(complex)) - else: - vartypes.append(np.dtype(float)) - else: - vartypes.append(np.dtype(int)) - return vartypes - -def openfile(fname): - "Opens the file handle of file `fname`" - # A file handle ................... 
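`_guessvartypes` above infers a dtype per column by attempting `int`, then `float`, then `complex` conversion, falling back to the string dtype. A single-token version of the same cascade (the helper name is illustrative):

```python
import numpy as np

def guess_dtype(token):
    """Single-token version of the int -> float -> complex -> string
    cascade used by _guessvartypes (helper name is illustrative)."""
    for cast, dt in ((int, np.dtype(int)),
                     (float, np.dtype(float)),
                     (complex, np.dtype(complex))):
        try:
            cast(token)
        except ValueError:
            continue
        return dt
    return np.dtype('U16')  # nothing numeric parsed: keep it a string

kinds = [guess_dtype(t).kind for t in ('3', '3.5', '3+2j', 'spam')]
```

The order matters: `float('3')` would also succeed, so trying the narrowest conversion first keeps integer columns integral.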
- if hasattr(fname, 'readline'): - return fname - # Try to open the file and guess its type - try: - f = open(fname) - except IOError: - raise IOError, "No such file: '%s'" % fname - if f.readline()[:2] != "\\x": - f.seek(0, 0) - return f - raise NotImplementedError, "Wow, binary file" - - -def fromtextfile(fname, delimitor=None, commentchar='#', missingchar='', - varnames=None, vartypes=None): - """Creates a mrecarray from data stored in the file `filename`. - - Parameters - ---------- - filename : {file name/handle} - Handle of an opened file. - delimitor : {None, string}, optional - Alphanumeric character used to separate columns in the file. - If None, any (group of) white spacestring(s) will be used. - commentchar : {'#', string}, optional - Alphanumeric character used to mark the start of a comment. - missingchar : {'', string}, optional - String indicating missing data, and used to create the masks. - varnames : {None, sequence}, optional - Sequence of the variable names. If None, a list will be created from - the first non empty line of the file. - vartypes : {None, sequence}, optional - Sequence of the variables dtypes. If None, it will be estimated from - the first non-commented line. - - - Ultra simple: the varnames are in the header, one line""" - # Try to open the file ...................... - f = openfile(fname) - # Get the first non-empty line as the varnames - while True: - line = f.readline() - firstline = line[:line.find(commentchar)].strip() - _varnames = firstline.split(delimitor) - if len(_varnames) > 1: - break - if varnames is None: - varnames = _varnames - # Get the data .............................. - _variables = masked_array([line.strip().split(delimitor) for line in f - if line[0] != commentchar and len(line) > 1]) - (_, nfields) = _variables.shape - # Try to guess the dtype .................... 
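`fromtextfile` above splits each non-comment line on the delimiter and builds the mask by comparing cells against `missingchar`. The core of that parse, sketched without file handling (here `'N/A'` stands in for the `missingchar` argument):

```python
import numpy.ma as ma

# fromtextfile in miniature: split each line on whitespace, then mask
# every cell equal to the missing-data marker ('N/A' here stands in
# for the missingchar argument).
lines = ["1 2.5", "N/A 3.0", "3 N/A"]
rows = [ln.split() for ln in lines]
col_a, col_b = zip(*rows)
a = ma.array([int(t) if t != 'N/A' else 0 for t in col_a],
             mask=[t == 'N/A' for t in col_a])
b = ma.array([float(t) if t != 'N/A' else 0.0 for t in col_b],
             mask=[t == 'N/A' for t in col_b])
```

The deleted function then feeds exactly such a list of per-column masked arrays to `fromarrays` to assemble the record array.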
- if vartypes is None: - vartypes = _guessvartypes(_variables[0]) - else: - vartypes = [np.dtype(v) for v in vartypes] - if len(vartypes) != nfields: - msg = "Attempting to %i dtypes for %i fields!" - msg += " Reverting to default." - warnings.warn(msg % (len(vartypes), nfields)) - vartypes = _guessvartypes(_variables[0]) - # Construct the descriptor .................. - mdescr = [(n, f) for (n, f) in zip(varnames, vartypes)] - mfillv = [ma.default_fill_value(f) for f in vartypes] - # Get the data and the mask ................. - # We just need a list of masked_arrays. It's easier to create it like that: - _mask = (_variables.T == missingchar) - _datalist = [masked_array(a, mask=m, dtype=t, fill_value=f) - for (a, m, t, f) in zip(_variables.T, _mask, vartypes, mfillv)] - return fromarrays(_datalist, dtype=mdescr) - -#.................................................................... -def addfield(mrecord, newfield, newfieldname=None): - """Adds a new field to the masked record array, using `newfield` as data -and `newfieldname` as name. If `newfieldname` is None, the new field name is -set to 'fi', where `i` is the number of existing fields. - """ - _data = mrecord._data - _mask = mrecord._mask - if newfieldname is None or newfieldname in reserved_fields: - newfieldname = 'f%i' % len(_data.dtype) - newfield = ma.array(newfield) - # Get the new data ............ - # Create a new empty recarray - newdtype = np.dtype(_data.dtype.descr + [(newfieldname, newfield.dtype)]) - newdata = recarray(_data.shape, newdtype) - # Add the exisintg field - [newdata.setfield(_data.getfield(*f), *f) - for f in _data.dtype.fields.values()] - # Add the new field - newdata.setfield(newfield._data, *newdata.dtype.fields[newfieldname]) - newdata = newdata.view(MaskedRecords) - # Get the new mask ............. 
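`addfield` above cannot grow a structured dtype in place, so it allocates a wider record array, copies the existing fields, and writes the new one (and then repeats the dance for the mask). The data half of that pattern on a plain structured array (field names are illustrative):

```python
import numpy as np

# addfield in miniature: widen the dtype, copy the existing columns,
# then write the new one (field names are illustrative).
old = np.array([(1, 1.0), (2, 2.0)], dtype=[('a', int), ('b', float)])
new_dtype = np.dtype(old.dtype.descr + [('c', float)])
new = np.empty(old.shape, dtype=new_dtype)
for name in old.dtype.names:
    new[name] = old[name]
new['c'] = [10.0, 20.0]
```

The deleted code does the copy via `setfield`/`getfield` offsets instead of field names, but the allocate-then-copy structure is the same.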
- # Create a new empty recarray - newmdtype = np.dtype([(n, bool_) for n in newdtype.names]) - newmask = recarray(_data.shape, newmdtype) - # Add the old masks - [newmask.setfield(_mask.getfield(*f), *f) - for f in _mask.dtype.fields.values()] - # Add the mask of the new field - newmask.setfield(getmaskarray(newfield), - *newmask.dtype.fields[newfieldname]) - newdata._mask = newmask - return newdata diff --git a/pythonPackages/numpy/numpy/ma/setup.py b/pythonPackages/numpy/numpy/ma/setup.py deleted file mode 100755 index 0247466554..0000000000 --- a/pythonPackages/numpy/numpy/ma/setup.py +++ /dev/null @@ -1,18 +0,0 @@ -#!/usr/bin/env python -__author__ = "Pierre GF Gerard-Marchant ($Author: jarrod.millman $)" -__version__ = '1.0' -__revision__ = "$Revision: 3473 $" -__date__ = '$Date: 2007-10-29 17:18:13 +0200 (Mon, 29 Oct 2007) $' - -import os - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('ma',parent_package,top_path) - config.add_data_dir('tests') - return config - -if __name__ == "__main__": - from numpy.distutils.core import setup - config = configuration(top_path='').todict() - setup(**config) diff --git a/pythonPackages/numpy/numpy/ma/setupscons.py b/pythonPackages/numpy/numpy/ma/setupscons.py deleted file mode 100755 index 0247466554..0000000000 --- a/pythonPackages/numpy/numpy/ma/setupscons.py +++ /dev/null @@ -1,18 +0,0 @@ -#!/usr/bin/env python -__author__ = "Pierre GF Gerard-Marchant ($Author: jarrod.millman $)" -__version__ = '1.0' -__revision__ = "$Revision: 3473 $" -__date__ = '$Date: 2007-10-29 17:18:13 +0200 (Mon, 29 Oct 2007) $' - -import os - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('ma',parent_package,top_path) - config.add_data_dir('tests') - return config - -if __name__ == "__main__": - from numpy.distutils.core import setup - config = 
configuration(top_path='').todict() - setup(**config) diff --git a/pythonPackages/numpy/numpy/ma/tests/test_core.py b/pythonPackages/numpy/numpy/ma/tests/test_core.py deleted file mode 100755 index 908d7adc63..0000000000 --- a/pythonPackages/numpy/numpy/ma/tests/test_core.py +++ /dev/null @@ -1,3481 +0,0 @@ -# pylint: disable-msg=W0401,W0511,W0611,W0612,W0614,R0201,E1102 -"""Tests suite for MaskedArray & subclassing. - -:author: Pierre Gerard-Marchant -:contact: pierregm_at_uga_dot_edu -""" -__author__ = "Pierre GF Gerard-Marchant" - -import types -import warnings - -import numpy as np -import numpy.core.fromnumeric as fromnumeric -from numpy import ndarray -from numpy.ma.testutils import * - -import numpy.ma.core -from numpy.ma.core import * - -from numpy.compat import asbytes, asbytes_nested - -pi = np.pi - -import sys -if sys.version_info[0] >= 3: - from functools import reduce - -#.............................................................................. -class TestMaskedArray(TestCase): - "Base test class for MaskedArrays." - - def setUp (self): - "Base data definition." - x = np.array([1., 1., 1., -2., pi / 2.0, 4., 5., -10., 10., 1., 2., 3.]) - y = np.array([5., 0., 3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) - a10 = 10. - m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] - m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 , 0, 1] - xm = masked_array(x, mask=m1) - ym = masked_array(y, mask=m2) - z = np.array([-.5, 0., .5, .8]) - zm = masked_array(z, mask=[0, 1, 0, 0]) - xf = np.where(m1, 1e+20, x) - xm.set_fill_value(1e+20) - self.d = (x, y, a10, m1, m2, xm, ym, z, zm, xf) - - - def test_basicattributes(self): - "Tests some basic array attributes." 
- a = array([1, 3, 2]) - b = array([1, 3, 2], mask=[1, 0, 1]) - assert_equal(a.ndim, 1) - assert_equal(b.ndim, 1) - assert_equal(a.size, 3) - assert_equal(b.size, 3) - assert_equal(a.shape, (3,)) - assert_equal(b.shape, (3,)) - - - def test_basic0d(self): - "Checks masking a scalar" - x = masked_array(0) - assert_equal(str(x), '0') - x = masked_array(0, mask=True) - assert_equal(str(x), str(masked_print_option)) - x = masked_array(0, mask=False) - assert_equal(str(x), '0') - x = array(0, mask=1) - self.assertTrue(x.filled().dtype is x._data.dtype) - - def test_basic1d(self): - "Test of basic array creation and properties in 1 dimension." - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - self.assertTrue(not isMaskedArray(x)) - self.assertTrue(isMaskedArray(xm)) - self.assertTrue((xm - ym).filled(0).any()) - fail_if_equal(xm.mask.astype(int), ym.mask.astype(int)) - s = x.shape - assert_equal(np.shape(xm), s) - assert_equal(xm.shape, s) - assert_equal(xm.dtype, x.dtype) - assert_equal(zm.dtype, z.dtype) - assert_equal(xm.size , reduce(lambda x, y:x * y, s)) - assert_equal(count(xm) , len(m1) - reduce(lambda x, y:x + y, m1)) - assert_array_equal(xm, xf) - assert_array_equal(filled(xm, 1.e20), xf) - assert_array_equal(x, xm) - - - def test_basic2d(self): - "Test of basic array creation and properties in 2 dimensions." - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - for s in [(4, 3), (6, 2)]: - x.shape = s - y.shape = s - xm.shape = s - ym.shape = s - xf.shape = s - # - self.assertTrue(not isMaskedArray(x)) - self.assertTrue(isMaskedArray(xm)) - assert_equal(shape(xm), s) - assert_equal(xm.shape, s) - assert_equal(xm.size , reduce(lambda x, y:x * y, s)) - assert_equal(count(xm) , len(m1) - reduce(lambda x, y:x + y, m1)) - assert_equal(xm, xf) - assert_equal(filled(xm, 1.e20), xf) - assert_equal(x, xm) - - def test_concatenate_basic(self): - "Tests concatenations." 
- (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - # basic concatenation - assert_equal(np.concatenate((x, y)), concatenate((xm, ym))) - assert_equal(np.concatenate((x, y)), concatenate((x, y))) - assert_equal(np.concatenate((x, y)), concatenate((xm, y))) - assert_equal(np.concatenate((x, y, x)), concatenate((x, ym, x))) - - def test_concatenate_alongaxis(self): - "Tests concatenations." - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - # Concatenation along an axis - s = (3, 4) - x.shape = y.shape = xm.shape = ym.shape = s - assert_equal(xm.mask, np.reshape(m1, s)) - assert_equal(ym.mask, np.reshape(m2, s)) - xmym = concatenate((xm, ym), 1) - assert_equal(np.concatenate((x, y), 1), xmym) - assert_equal(np.concatenate((xm.mask, ym.mask), 1), xmym._mask) - # - x = zeros(2) - y = array(ones(2), mask=[False, True]) - z = concatenate((x, y)) - assert_array_equal(z, [0, 0, 1, 1]) - assert_array_equal(z.mask, [False, False, False, True]) - z = concatenate((y, x)) - assert_array_equal(z, [1, 1, 0, 0]) - assert_array_equal(z.mask, [False, True, False, False]) - - def test_concatenate_flexible(self): - "Tests the concatenation on flexible arrays." - data = masked_array(zip(np.random.rand(10), - np.arange(10)), - dtype=[('a', float), ('b', int)]) - # - test = concatenate([data[:5], data[5:]]) - assert_equal_records(test, data) - - def test_creation_ndmin(self): - "Check the use of ndmin" - x = array([1, 2, 3], mask=[1, 0, 0], ndmin=2) - assert_equal(x.shape, (1, 3)) - assert_equal(x._data, [[1, 2, 3]]) - assert_equal(x._mask, [[1, 0, 0]]) - - def test_creation_ndmin_from_maskedarray(self): - "Make sure we're not losing the original mask w/ ndmin" - x = array([1, 2, 3]) - x[-1] = masked - xx = array(x, ndmin=2, dtype=float) - assert_equal(x.shape, x._mask.shape) - assert_equal(xx.shape, xx._mask.shape) - - def test_creation_maskcreation(self): - "Tests how masks are initialized at the creation of Maskedarrays." 
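The concatenation tests above verify that `ma.concatenate` merges data and masks element for element and agrees with `np.concatenate` on the data. The `zeros`/`ones` case from `test_concatenate_alongaxis`, runnable standalone:

```python
import numpy.ma as ma

x = ma.zeros(2)
y = ma.array(ma.ones(2), mask=[False, True])
# concatenate merges data and masks element for element, which is
# what test_concatenate_alongaxis above checks against np.concatenate.
z = ma.concatenate((x, y))
```

Only the last element, masked in `y`, stays masked in `z`.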
- data = arange(24, dtype=float) - data[[3, 6, 15]] = masked - dma_1 = MaskedArray(data) - assert_equal(dma_1.mask, data.mask) - dma_2 = MaskedArray(dma_1) - assert_equal(dma_2.mask, dma_1.mask) - dma_3 = MaskedArray(dma_1, mask=[1, 0, 0, 0] * 6) - fail_if_equal(dma_3.mask, dma_1.mask) - - def test_creation_with_list_of_maskedarrays(self): - "Tests creaating a masked array from alist of masked arrays." - x = array(np.arange(5), mask=[1, 0, 0, 0, 0]) - data = array((x, x[::-1])) - assert_equal(data, [[0, 1, 2, 3, 4], [4, 3, 2, 1, 0]]) - assert_equal(data._mask, [[1, 0, 0, 0, 0], [0, 0, 0, 0, 1]]) - # - x.mask = nomask - data = array((x, x[::-1])) - assert_equal(data, [[0, 1, 2, 3, 4], [4, 3, 2, 1, 0]]) - self.assertTrue(data.mask is nomask) - - def test_asarray(self): - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - xm.fill_value = -9999 - xm._hardmask = True - xmm = asarray(xm) - assert_equal(xmm._data, xm._data) - assert_equal(xmm._mask, xm._mask) - assert_equal(xmm.fill_value, xm.fill_value) - assert_equal(xmm._hardmask, xm._hardmask) - - def test_fix_invalid(self): - "Checks fix_invalid." - err_status_ini = np.geterr() - try: - np.seterr(invalid='ignore') - data = masked_array([np.nan, 0., 1.], mask=[0, 0, 1]) - data_fixed = fix_invalid(data) - assert_equal(data_fixed._data, [data.fill_value, 0., 1.]) - assert_equal(data_fixed._mask, [1., 0., 1.]) - finally: - np.seterr(**err_status_ini) - - def test_maskedelement(self): - "Test of masked element" - x = arange(6) - x[1] = masked - self.assertTrue(str(masked) == '--') - self.assertTrue(x[1] is masked) - assert_equal(filled(x[1], 0), 0) - # don't know why these should raise an exception... 
- #self.assertRaises(Exception, lambda x,y: x+y, masked, masked) - #self.assertRaises(Exception, lambda x,y: x+y, masked, 2) - #self.assertRaises(Exception, lambda x,y: x+y, masked, xx) - #self.assertRaises(Exception, lambda x,y: x+y, xx, masked) - - def test_set_element_as_object(self): - """Tests setting elements with object""" - a = empty(1, dtype=object) - x = (1, 2, 3, 4, 5) - a[0] = x - assert_equal(a[0], x) - self.assertTrue(a[0] is x) - # - import datetime - dt = datetime.datetime.now() - a[0] = dt - self.assertTrue(a[0] is dt) - - - def test_indexing(self): - "Tests conversions and indexing" - x1 = np.array([1, 2, 4, 3]) - x2 = array(x1, mask=[1, 0, 0, 0]) - x3 = array(x1, mask=[0, 1, 0, 1]) - x4 = array(x1) - # test conversion to strings - junk, garbage = str(x2), repr(x2) - assert_equal(np.sort(x1), sort(x2, endwith=False)) - # tests of indexing - assert type(x2[1]) is type(x1[1]) - assert x1[1] == x2[1] - assert x2[0] is masked - assert_equal(x1[2], x2[2]) - assert_equal(x1[2:5], x2[2:5]) - assert_equal(x1[:], x2[:]) - assert_equal(x1[1:], x3[1:]) - x1[2] = 9 - x2[2] = 9 - assert_equal(x1, x2) - x1[1:3] = 99 - x2[1:3] = 99 - assert_equal(x1, x2) - x2[1] = masked - assert_equal(x1, x2) - x2[1:3] = masked - assert_equal(x1, x2) - x2[:] = x1 - x2[1] = masked - assert allequal(getmask(x2), array([0, 1, 0, 0])) - x3[:] = masked_array([1, 2, 3, 4], [0, 1, 1, 0]) - assert allequal(getmask(x3), array([0, 1, 1, 0])) - x4[:] = masked_array([1, 2, 3, 4], [0, 1, 1, 0]) - assert allequal(getmask(x4), array([0, 1, 1, 0])) - assert allequal(x4, array([1, 2, 3, 4])) - x1 = np.arange(5) * 1.0 - x2 = masked_values(x1, 3.0) - assert_equal(x1, x2) - assert allequal(array([0, 0, 0, 1, 0], MaskType), x2.mask) - assert_equal(3.0, x2.fill_value) - x1 = array([1, 'hello', 2, 3], object) - x2 = np.array([1, 'hello', 2, 3], object) - s1 = x1[1] - s2 = x2[1] - assert_equal(type(s2), str) - assert_equal(type(s1), str) - assert_equal(s1, s2) - assert x1[1:1].shape == (0,) - - - def 
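Two invariants `test_indexing` above relies on are that reading a masked scalar yields the `masked` singleton, and that assigning `masked` to an element extends the mask. In isolation:

```python
import numpy.ma as ma

x = ma.array([1, 2, 4, 3], mask=[1, 0, 0, 0])
first_is_masked = (x[0] is ma.masked)   # scalar reads hit the singleton
x[1] = ma.masked                        # assigning masked grows the mask
```

The identity check (`is`, not `==`) is deliberate: `ma.masked` is a module-level singleton, which is why the tests can write `assert x2[0] is masked`.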
test_copy(self): - "Tests of some subtle points of copying and sizing." - n = [0, 0, 1, 0, 0] - m = make_mask(n) - m2 = make_mask(m) - self.assertTrue(m is m2) - m3 = make_mask(m, copy=1) - self.assertTrue(m is not m3) - - warnings.simplefilter('ignore', DeprecationWarning) - x1 = np.arange(5) - y1 = array(x1, mask=m) - #self.assertTrue( y1._data is x1) - assert_equal(y1._data.__array_interface__, x1.__array_interface__) - self.assertTrue(allequal(x1, y1.raw_data())) - #self.assertTrue( y1.mask is m) - assert_equal(y1._mask.__array_interface__, m.__array_interface__) - warnings.simplefilter('default', DeprecationWarning) - - y1a = array(y1) - #self.assertTrue( y1a.raw_data() is y1.raw_data()) - self.assertTrue(y1a._data.__array_interface__ == y1._data.__array_interface__) - self.assertTrue(y1a.mask is y1.mask) - - y2 = array(x1, mask=m) - #self.assertTrue( y2.raw_data() is x1) - self.assertTrue(y2._data.__array_interface__ == x1.__array_interface__) - #self.assertTrue( y2.mask is m) - self.assertTrue(y2._mask.__array_interface__ == m.__array_interface__) - self.assertTrue(y2[2] is masked) - y2[2] = 9 - self.assertTrue(y2[2] is not masked) - #self.assertTrue( y2.mask is not m) - self.assertTrue(y2._mask.__array_interface__ != m.__array_interface__) - self.assertTrue(allequal(y2.mask, 0)) - - y3 = array(x1 * 1.0, mask=m) - self.assertTrue(filled(y3).dtype is (x1 * 1.0).dtype) - - x4 = arange(4) - x4[2] = masked - y4 = resize(x4, (8,)) - assert_equal(concatenate([x4, x4]), y4) - assert_equal(getmask(y4), [0, 0, 1, 0, 0, 0, 1, 0]) - y5 = repeat(x4, (2, 2, 2, 2), axis=0) - assert_equal(y5, [0, 0, 1, 1, 2, 2, 3, 3]) - y6 = repeat(x4, 2, axis=0) - assert_equal(y5, y6) - y7 = x4.repeat((2, 2, 2, 2), axis=0) - assert_equal(y5, y7) - y8 = x4.repeat(2, 0) - assert_equal(y5, y8) - - y9 = x4.copy() - assert_equal(y9._data, x4._data) - assert_equal(y9._mask, x4._mask) - # - x = masked_array([1, 2, 3], mask=[0, 1, 0]) - # Copy is False by default - y = masked_array(x) - 
assert_equal(y._data.ctypes.data, x._data.ctypes.data) - assert_equal(y._mask.ctypes.data, x._mask.ctypes.data) - y = masked_array(x, copy=True) - assert_not_equal(y._data.ctypes.data, x._data.ctypes.data) - assert_not_equal(y._mask.ctypes.data, x._mask.ctypes.data) - - - def test_deepcopy(self): - from copy import deepcopy - a = array([0, 1, 2], mask=[False, True, False]) - copied = deepcopy(a) - assert_equal(copied.mask, a.mask) - assert_not_equal(id(a._mask), id(copied._mask)) - # - copied[1] = 1 - assert_equal(copied.mask, [0, 0, 0]) - assert_equal(a.mask, [0, 1, 0]) - # - copied = deepcopy(a) - assert_equal(copied.mask, a.mask) - copied.mask[1] = False - assert_equal(copied.mask, [0, 0, 0]) - assert_equal(a.mask, [0, 1, 0]) - - - def test_pickling(self): - "Tests pickling" - import cPickle - a = arange(10) - a[::3] = masked - a.fill_value = 999 - a_pickled = cPickle.loads(a.dumps()) - assert_equal(a_pickled._mask, a._mask) - assert_equal(a_pickled._data, a._data) - assert_equal(a_pickled.fill_value, 999) - - def test_pickling_subbaseclass(self): - "Test pickling w/ a subclass of ndarray" - import cPickle - a = array(np.matrix(range(10)), mask=[1, 0, 1, 0, 0] * 2) - a_pickled = cPickle.loads(a.dumps()) - assert_equal(a_pickled._mask, a._mask) - assert_equal(a_pickled, a) - self.assertTrue(isinstance(a_pickled._data, np.matrix)) - - def test_pickling_wstructured(self): - "Tests pickling w/ structured array" - import cPickle - a = array([(1, 1.), (2, 2.)], mask=[(0, 0), (0, 1)], - dtype=[('a', int), ('b', float)]) - a_pickled = cPickle.loads(a.dumps()) - assert_equal(a_pickled._mask, a._mask) - assert_equal(a_pickled, a) - - def test_pickling_keepalignment(self): - "Tests pickling w/ F_CONTIGUOUS arrays" - import cPickle - a = arange(10) - a.shape = (-1, 2) - b = a.T - test = cPickle.loads(cPickle.dumps(b)) - assert_equal(test, b) - -# def test_pickling_oddity(self): -# "Test some pickling oddity" -# import cPickle -# a = array([{'a':1}, {'b':2}, 3], 
dtype=object) -# test = cPickle.loads(cPickle.dumps(a)) -# assert_equal(test, a) - - def test_single_element_subscript(self): - "Tests single element subscripts of Maskedarrays." - a = array([1, 3, 2]) - b = array([1, 3, 2], mask=[1, 0, 1]) - assert_equal(a[0].shape, ()) - assert_equal(b[0].shape, ()) - assert_equal(b[1].shape, ()) - - - def test_topython(self): - "Tests some communication issues with Python." - assert_equal(1, int(array(1))) - assert_equal(1.0, float(array(1))) - assert_equal(1, int(array([[[1]]]))) - assert_equal(1.0, float(array([[1]]))) - self.assertRaises(TypeError, float, array([1, 1])) - # - warnings.simplefilter('ignore', UserWarning) - assert np.isnan(float(array([1], mask=[1]))) - warnings.simplefilter('default', UserWarning) - # - a = array([1, 2, 3], mask=[1, 0, 0]) - self.assertRaises(TypeError, lambda:float(a)) - assert_equal(float(a[-1]), 3.) - self.assertTrue(np.isnan(float(a[0]))) - self.assertRaises(TypeError, int, a) - assert_equal(int(a[-1]), 3) - self.assertRaises(MAError, lambda:int(a[0])) - - - def test_oddfeatures_1(self): - "Test of other odd features" - x = arange(20) - x = x.reshape(4, 5) - x.flat[5] = 12 - assert x[1, 0] == 12 - z = x + 10j * x - assert_equal(z.real, x) - assert_equal(z.imag, 10 * x) - assert_equal((z * conjugate(z)).real, 101 * x * x) - z.imag[...] = 0.0 - # - x = arange(10) - x[3] = masked - assert str(x[3]) == str(masked) - c = x >= 8 - assert count(where(c, masked, masked)) == 0 - assert shape(where(c, masked, masked)) == c.shape - # - z = masked_where(c, x) - assert z.dtype is x.dtype - assert z[3] is masked - assert z[4] is not masked - assert z[7] is not masked - assert z[8] is masked - assert z[9] is masked - assert_equal(x, z) - - - def test_oddfeatures_2(self): - "Tests some more features." 
- x = array([1., 2., 3., 4., 5.]) - c = array([1, 1, 1, 0, 0]) - x[2] = masked - z = where(c, x, -x) - assert_equal(z, [1., 2., 0., -4., -5]) - c[0] = masked - z = where(c, x, -x) - assert_equal(z, [1., 2., 0., -4., -5]) - assert z[0] is masked - assert z[1] is not masked - assert z[2] is masked - - - def test_oddfeatures_3(self): - """Tests some generic features.""" - atest = array([10], mask=True) - btest = array([20]) - idx = atest.mask - atest[idx] = btest[idx] - assert_equal(atest, [20]) - - - def test_filled_w_flexible_dtype(self): - "Test filled w/ flexible dtype" - flexi = array([(1, 1, 1)], - dtype=[('i', int), ('s', '|S8'), ('f', float)]) - flexi[0] = masked - assert_equal(flexi.filled(), - np.array([(default_fill_value(0), - default_fill_value('0'), - default_fill_value(0.),)], dtype=flexi.dtype)) - flexi[0] = masked - assert_equal(flexi.filled(1), - np.array([(1, '1', 1.)], dtype=flexi.dtype)) - - def test_filled_w_mvoid(self): - "Test filled w/ mvoid" - ndtype = [('a', int), ('b', float)] - a = mvoid((1, 2.), mask=[(0, 1)], dtype=ndtype) - # Filled using default - test = a.filled() - assert_equal(tuple(test), (1, default_fill_value(1.))) - # Explicit fill_value - test = a.filled((-1, -1)) - assert_equal(tuple(test), (1, -1)) - # Using predefined filling values - a.fill_value = (-999, -999) - assert_equal(tuple(a.filled()), (1, -999)) - - - def test_filled_w_nested_dtype(self): - "Test filled w/ nested dtype" - ndtype = [('A', int), ('B', [('BA', int), ('BB', int)])] - a = array([(1, (1, 1)), (2, (2, 2))], - mask=[(0, (1, 0)), (0, (0, 1))], dtype=ndtype) - test = a.filled(0) - control = np.array([(1, (0, 1)), (2, (2, 0))], dtype=ndtype) - assert_equal(test, control) - # - test = a['B'].filled(0) - control = np.array([(0, 1), (2, 0)], dtype=a['B'].dtype) - assert_equal(test, control) - - - def test_optinfo_propagation(self): - "Checks that _optinfo dictionary isn't back-propagated" - x = array([1, 2, 3, ], dtype=float) - x._optinfo['info'] = '???' 
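`test_oddfeatures_2` above exercises `where` on masked inputs: the condition selects between `x` and `-x`, and entries masked in `x` remain masked in the result. The first case of that test, runnable standalone:

```python
import numpy.ma as ma

x = ma.array([1., 2., 3., 4., 5.])
c = ma.array([1, 1, 1, 0, 0])
x[2] = ma.masked
# Where the condition is true take x, else -x; masked entries of x
# stay masked in the result (the case test_oddfeatures_2 exercises).
z = ma.where(c, x, -x)
```

Index 2 is selected from `x` but masked there, so it stays masked in `z`; indices 3 and 4 come from `-x`.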
- y = x.copy() - assert_equal(y._optinfo['info'], '???') - y._optinfo['info'] = '!!!' - assert_equal(x._optinfo['info'], '???') - - - def test_fancy_printoptions(self): - "Test printing a masked array w/ fancy dtype." - fancydtype = np.dtype([('x', int), ('y', [('t', int), ('s', float)])]) - test = array([(1, (2, 3.0)), (4, (5, 6.0))], - mask=[(1, (0, 1)), (0, (1, 0))], - dtype=fancydtype) - control = "[(--, (2, --)) (4, (--, 6.0))]" - assert_equal(str(test), control) - - - def test_flatten_structured_array(self): - "Test flatten_structured_array on arrays" - # On ndarray - ndtype = [('a', int), ('b', float)] - a = np.array([(1, 1), (2, 2)], dtype=ndtype) - test = flatten_structured_array(a) - control = np.array([[1., 1.], [2., 2.]], dtype=np.float) - assert_equal(test, control) - assert_equal(test.dtype, control.dtype) - # On masked_array - a = array([(1, 1), (2, 2)], mask=[(0, 1), (1, 0)], dtype=ndtype) - test = flatten_structured_array(a) - control = array([[1., 1.], [2., 2.]], - mask=[[0, 1], [1, 0]], dtype=np.float) - assert_equal(test, control) - assert_equal(test.dtype, control.dtype) - assert_equal(test.mask, control.mask) - # On masked array with nested structure - ndtype = [('a', int), ('b', [('ba', int), ('bb', float)])] - a = array([(1, (1, 1.1)), (2, (2, 2.2))], - mask=[(0, (1, 0)), (1, (0, 1))], dtype=ndtype) - test = flatten_structured_array(a) - control = array([[1., 1., 1.1], [2., 2., 2.2]], - mask=[[0, 1, 0], [1, 0, 1]], dtype=np.float) - assert_equal(test, control) - assert_equal(test.dtype, control.dtype) - assert_equal(test.mask, control.mask) - # Keeping the initial shape - ndtype = [('a', int), ('b', float)] - a = np.array([[(1, 1), ], [(2, 2), ]], dtype=ndtype) - test = flatten_structured_array(a) - control = np.array([[[1., 1.], ], [[2., 2.], ]], dtype=np.float) - assert_equal(test, control) - assert_equal(test.dtype, control.dtype) - - - - def test_void0d(self): - "Test creating a mvoid object" - ndtype = [('a', int), ('b', int)] - a = 
np.array([(1, 2,)], dtype=ndtype)[0] - f = mvoid(a) - assert(isinstance(f, mvoid)) - # - a = masked_array([(1, 2)], mask=[(1, 0)], dtype=ndtype)[0] - assert(isinstance(a, mvoid)) - # - a = masked_array([(1, 2), (1, 2)], mask=[(1, 0), (0, 0)], dtype=ndtype) - f = mvoid(a._data[0], a._mask[0]) - assert(isinstance(f, mvoid)) - - def test_mvoid_getitem(self): - "Test mvoid.__getitem__" - ndtype = [('a', int), ('b', int)] - a = masked_array([(1, 2,), (3, 4)], mask=[(0, 0), (1, 0)], dtype=ndtype) - # w/o mask - f = a[0] - self.assertTrue(isinstance(f, np.void)) - assert_equal((f[0], f['a']), (1, 1)) - assert_equal(f['b'], 2) - # w/ mask - f = a[1] - self.assertTrue(isinstance(f, mvoid)) - self.assertTrue(f[0] is masked) - self.assertTrue(f['a'] is masked) - assert_equal(f[1], 4) - - def test_mvoid_iter(self): - "Test iteration on __getitem__" - ndtype = [('a', int), ('b', int)] - a = masked_array([(1, 2,), (3, 4)], mask=[(0, 0), (1, 0)], dtype=ndtype) - # w/o mask - assert_equal(list(a[0]), [1, 2]) - # w/ mask - assert_equal(list(a[1]), [masked, 4]) - - def test_mvoid_print(self): - "Test printing a mvoid" - mx = array([(1, 1), (2, 2)], dtype=[('a', int), ('b', int)]) - assert_equal(str(mx[0]), "(1, 1)") - mx['b'][0] = masked - ini_display = masked_print_option._display - masked_print_option.set_display("-X-") - try: - assert_equal(str(mx[0]), "(1, -X-)") - assert_equal(repr(mx[0]), "(1, -X-)") - finally: - masked_print_option.set_display(ini_display) - -#------------------------------------------------------------------------------ - -class TestMaskedArrayArithmetic(TestCase): - "Base test class for MaskedArrays." - - def setUp (self): - "Base data definition." - x = np.array([1., 1., 1., -2., pi / 2.0, 4., 5., -10., 10., 1., 2., 3.]) - y = np.array([5., 0., 3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) - a10 = 10. 
- m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] - m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 , 0, 1] - xm = masked_array(x, mask=m1) - ym = masked_array(y, mask=m2) - z = np.array([-.5, 0., .5, .8]) - zm = masked_array(z, mask=[0, 1, 0, 0]) - xf = np.where(m1, 1e+20, x) - xm.set_fill_value(1e+20) - self.d = (x, y, a10, m1, m2, xm, ym, z, zm, xf) - self.err_status = np.geterr() - np.seterr(divide='ignore', invalid='ignore') - - def tearDown(self): - np.seterr(**self.err_status) - - def test_basic_arithmetic (self): - "Test of basic arithmetic." - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - a2d = array([[1, 2], [0, 4]]) - a2dm = masked_array(a2d, [[0, 0], [1, 0]]) - assert_equal(a2d * a2d, a2d * a2dm) - assert_equal(a2d + a2d, a2d + a2dm) - assert_equal(a2d - a2d, a2d - a2dm) - for s in [(12,), (4, 3), (2, 6)]: - x = x.reshape(s) - y = y.reshape(s) - xm = xm.reshape(s) - ym = ym.reshape(s) - xf = xf.reshape(s) - assert_equal(-x, -xm) - assert_equal(x + y, xm + ym) - assert_equal(x - y, xm - ym) - assert_equal(x * y, xm * ym) - assert_equal(x / y, xm / ym) - assert_equal(a10 + y, a10 + ym) - assert_equal(a10 - y, a10 - ym) - assert_equal(a10 * y, a10 * ym) - assert_equal(a10 / y, a10 / ym) - assert_equal(x + a10, xm + a10) - assert_equal(x - a10, xm - a10) - assert_equal(x * a10, xm * a10) - assert_equal(x / a10, xm / a10) - assert_equal(x ** 2, xm ** 2) - assert_equal(abs(x) ** 2.5, abs(xm) ** 2.5) - assert_equal(x ** y, xm ** ym) - assert_equal(np.add(x, y), add(xm, ym)) - assert_equal(np.subtract(x, y), subtract(xm, ym)) - assert_equal(np.multiply(x, y), multiply(xm, ym)) - assert_equal(np.divide(x, y), divide(xm, ym)) - - - def test_divide_on_different_shapes(self): - x = arange(6, dtype=float) - x.shape = (2, 3) - y = arange(3, dtype=float) - # - z = x / y - assert_equal(z, [[-1., 1., 1.], [-1., 4., 2.5]]) - assert_equal(z.mask, [[1, 0, 0], [1, 0, 0]]) - # - z = x / y[None, :] - assert_equal(z, [[-1., 1., 1.], [-1., 4., 2.5]]) - assert_equal(z.mask, [[1, 0, 0], [1, 0, 
0]]) - # - y = arange(2, dtype=float) - z = x / y[:, None] - assert_equal(z, [[-1., -1., -1.], [3., 4., 5.]]) - assert_equal(z.mask, [[1, 1, 1], [0, 0, 0]]) - - - def test_mixed_arithmetic(self): - "Tests mixed arithmetic." - na = np.array([1]) - ma = array([1]) - self.assertTrue(isinstance(na + ma, MaskedArray)) - self.assertTrue(isinstance(ma + na, MaskedArray)) - - - def test_limits_arithmetic(self): - tiny = np.finfo(float).tiny - a = array([tiny, 1. / tiny, 0.]) - assert_equal(getmaskarray(a / 2), [0, 0, 0]) - assert_equal(getmaskarray(2 / a), [1, 0, 1]) - - - def test_masked_singleton_arithmetic(self): - "Tests some scalar arithmetic on MaskedArrays." - # Masked singleton should remain masked no matter what - xm = array(0, mask=1) - self.assertTrue((1 / array(0)).mask) - self.assertTrue((1 + xm).mask) - self.assertTrue((-xm).mask) - self.assertTrue(maximum(xm, xm).mask) - self.assertTrue(minimum(xm, xm).mask) - - - def test_masked_singleton_equality(self): - "Tests (in)equality on masked singleton" - a = array([1, 2, 3], mask=[1, 1, 0]) - assert((a[0] == 0) is masked) - assert((a[0] != 0) is masked) - assert_equal((a[-1] == 0), False) - assert_equal((a[-1] != 0), True) - - - def test_arithmetic_with_masked_singleton(self): - "Checks that there's no collapsing to masked" - x = masked_array([1, 2]) - y = x * masked - assert_equal(y.shape, x.shape) - assert_equal(y._mask, [True, True]) - y = x[0] * masked - assert y is masked - y = x + masked - assert_equal(y.shape, x.shape) - assert_equal(y._mask, [True, True]) - - - def test_arithmetic_with_masked_singleton_on_1d_singleton(self): - "Check that we're not losing the shape of a singleton" - x = masked_array([1, ]) - y = x + masked - assert_equal(y.shape, x.shape) - assert_equal(y.mask, [True, ]) - - - def test_scalar_arithmetic(self): - x = array(0, mask=0) - assert_equal(x.filled().ctypes.data, x.ctypes.data) - # Make sure we don't lose the shape in some circumstances - xm = array((0, 0)) / 0. 
- assert_equal(xm.shape, (2,)) - assert_equal(xm.mask, [1, 1]) - - - def test_basic_ufuncs (self): - "Test various functions such as sin, cos." - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - assert_equal(np.cos(x), cos(xm)) - assert_equal(np.cosh(x), cosh(xm)) - assert_equal(np.sin(x), sin(xm)) - assert_equal(np.sinh(x), sinh(xm)) - assert_equal(np.tan(x), tan(xm)) - assert_equal(np.tanh(x), tanh(xm)) - assert_equal(np.sqrt(abs(x)), sqrt(xm)) - assert_equal(np.log(abs(x)), log(xm)) - assert_equal(np.log10(abs(x)), log10(xm)) - assert_equal(np.exp(x), exp(xm)) - assert_equal(np.arcsin(z), arcsin(zm)) - assert_equal(np.arccos(z), arccos(zm)) - assert_equal(np.arctan(z), arctan(zm)) - assert_equal(np.arctan2(x, y), arctan2(xm, ym)) - assert_equal(np.absolute(x), absolute(xm)) - assert_equal(np.equal(x, y), equal(xm, ym)) - assert_equal(np.not_equal(x, y), not_equal(xm, ym)) - assert_equal(np.less(x, y), less(xm, ym)) - assert_equal(np.greater(x, y), greater(xm, ym)) - assert_equal(np.less_equal(x, y), less_equal(xm, ym)) - assert_equal(np.greater_equal(x, y), greater_equal(xm, ym)) - assert_equal(np.conjugate(x), conjugate(xm)) - - - def test_count_func (self): - "Tests count" - ott = array([0., 1., 2., 3.], mask=[1, 0, 0, 0]) - if sys.version_info[0] >= 3: - self.assertTrue(isinstance(count(ott), np.integer)) - else: - self.assertTrue(isinstance(count(ott), int)) - assert_equal(3, count(ott)) - assert_equal(1, count(1)) - assert_equal(0, count(array(1, mask=[1]))) - ott = ott.reshape((2, 2)) - assert isinstance(count(ott, 0), ndarray) - if sys.version_info[0] >= 3: - assert isinstance(count(ott), np.integer) - else: - assert isinstance(count(ott), types.IntType) - assert_equal(3, count(ott)) - assert getmask(count(ott, 0)) is nomask - assert_equal([1, 2], count(ott, 0)) - - - def test_minmax_func (self): - "Tests minimum and maximum." 
- (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - xr = np.ravel(x) #max doesn't work if shaped - xmr = ravel(xm) - assert_equal(max(xr), maximum(xmr)) #true because of careful selection of data - assert_equal(min(xr), minimum(xmr)) #true because of careful selection of data - # - assert_equal(minimum([1, 2, 3], [4, 0, 9]), [1, 0, 3]) - assert_equal(maximum([1, 2, 3], [4, 0, 9]), [4, 2, 9]) - x = arange(5) - y = arange(5) - 2 - x[3] = masked - y[0] = masked - assert_equal(minimum(x, y), where(less(x, y), x, y)) - assert_equal(maximum(x, y), where(greater(x, y), x, y)) - assert minimum(x) == 0 - assert maximum(x) == 4 - # - x = arange(4).reshape(2, 2) - x[-1, -1] = masked - assert_equal(maximum(x), 2) - - - def test_minimummaximum_func(self): - a = np.ones((2, 2)) - aminimum = minimum(a, a) - self.assertTrue(isinstance(aminimum, MaskedArray)) - assert_equal(aminimum, np.minimum(a, a)) - # - aminimum = minimum.outer(a, a) - self.assertTrue(isinstance(aminimum, MaskedArray)) - assert_equal(aminimum, np.minimum.outer(a, a)) - # - amaximum = maximum(a, a) - self.assertTrue(isinstance(amaximum, MaskedArray)) - assert_equal(amaximum, np.maximum(a, a)) - # - amaximum = maximum.outer(a, a) - self.assertTrue(isinstance(amaximum, MaskedArray)) - assert_equal(amaximum, np.maximum.outer(a, a)) - - - def test_minmax_reduce(self): - "Test np.min/maximum.reduce on array w/ full False mask" - a = array([1, 2, 3], mask=[False, False, False]) - b = np.maximum.reduce(a) - assert_equal(b, 3) - - def test_minmax_funcs_with_output(self): - "Tests the min/max functions with explicit outputs" - mask = np.random.rand(12).round() - xm = array(np.random.uniform(0, 10, 12), mask=mask) - xm.shape = (3, 4) - for funcname in ('min', 'max'): - # Initialize - npfunc = getattr(np, funcname) - mafunc = getattr(numpy.ma.core, funcname) - # Use the np version - nout = np.empty((4,), dtype=int) - try: - result = npfunc(xm, axis=0, out=nout) - except MaskError: - pass - nout = np.empty((4,), 
dtype=float) - result = npfunc(xm, axis=0, out=nout) - self.assertTrue(result is nout) - # Use the ma version - nout.fill(-999) - result = mafunc(xm, axis=0, out=nout) - self.assertTrue(result is nout) - - - def test_minmax_methods(self): - "Additional tests on max/min" - (_, _, _, _, _, xm, _, _, _, _) = self.d - xm.shape = (xm.size,) - assert_equal(xm.max(), 10) - self.assertTrue(xm[0].max() is masked) - self.assertTrue(xm[0].max(0) is masked) - self.assertTrue(xm[0].max(-1) is masked) - assert_equal(xm.min(), -10.) - self.assertTrue(xm[0].min() is masked) - self.assertTrue(xm[0].min(0) is masked) - self.assertTrue(xm[0].min(-1) is masked) - assert_equal(xm.ptp(), 20.) - self.assertTrue(xm[0].ptp() is masked) - self.assertTrue(xm[0].ptp(0) is masked) - self.assertTrue(xm[0].ptp(-1) is masked) - # - x = array([1, 2, 3], mask=True) - self.assertTrue(x.min() is masked) - self.assertTrue(x.max() is masked) - self.assertTrue(x.ptp() is masked) - - - def test_addsumprod (self): - "Tests add, sum, product." 
- (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - assert_equal(np.add.reduce(x), add.reduce(x)) - assert_equal(np.add.accumulate(x), add.accumulate(x)) - assert_equal(4, sum(array(4), axis=0)) - assert_equal(4, sum(array(4), axis=0)) - assert_equal(np.sum(x, axis=0), sum(x, axis=0)) - assert_equal(np.sum(filled(xm, 0), axis=0), sum(xm, axis=0)) - assert_equal(np.sum(x, 0), sum(x, 0)) - assert_equal(np.product(x, axis=0), product(x, axis=0)) - assert_equal(np.product(x, 0), product(x, 0)) - assert_equal(np.product(filled(xm, 1), axis=0), product(xm, axis=0)) - s = (3, 4) - x.shape = y.shape = xm.shape = ym.shape = s - if len(s) > 1: - assert_equal(np.concatenate((x, y), 1), concatenate((xm, ym), 1)) - assert_equal(np.add.reduce(x, 1), add.reduce(x, 1)) - assert_equal(np.sum(x, 1), sum(x, 1)) - assert_equal(np.product(x, 1), product(x, 1)) - - - def test_binops_d2D(self): - "Test binary operations on 2D data" - a = array([[1.], [2.], [3.]], mask=[[False], [True], [True]]) - b = array([[2., 3.], [4., 5.], [6., 7.]]) - # - test = a * b - control = array([[2., 3.], [2., 2.], [3., 3.]], - mask=[[0, 0], [1, 1], [1, 1]]) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - # - test = b * a - control = array([[2., 3.], [4., 5.], [6., 7.]], - mask=[[0, 0], [1, 1], [1, 1]]) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - # - a = array([[1.], [2.], [3.]]) - b = array([[2., 3.], [4., 5.], [6., 7.]], - mask=[[0, 0], [0, 0], [0, 1]]) - test = a * b - control = array([[2, 3], [8, 10], [18, 3]], - mask=[[0, 0], [0, 0], [0, 1]]) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - # - test = b * a - control = array([[2, 3], [8, 10], [18, 7]], - mask=[[0, 0], [0, 0], [0, 1]]) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - - - def 
test_domained_binops_d2D(self): - "Test domained binary operations on 2D data" - a = array([[1.], [2.], [3.]], mask=[[False], [True], [True]]) - b = array([[2., 3.], [4., 5.], [6., 7.]]) - # - test = a / b - control = array([[1. / 2., 1. / 3.], [2., 2.], [3., 3.]], - mask=[[0, 0], [1, 1], [1, 1]]) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - # - test = b / a - control = array([[2. / 1., 3. / 1.], [4., 5.], [6., 7.]], - mask=[[0, 0], [1, 1], [1, 1]]) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - # - a = array([[1.], [2.], [3.]]) - b = array([[2., 3.], [4., 5.], [6., 7.]], - mask=[[0, 0], [0, 0], [0, 1]]) - test = a / b - control = array([[1. / 2, 1. / 3], [2. / 4, 2. / 5], [3. / 6, 3]], - mask=[[0, 0], [0, 0], [0, 1]]) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - # - test = b / a - control = array([[2 / 1., 3 / 1.], [4 / 2., 5 / 2.], [6 / 3., 7]], - mask=[[0, 0], [0, 0], [0, 1]]) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - - - def test_noshrinking(self): - "Check that we don't shrink a mask when not wanted" - # Binary operations - a = masked_array([1, 2, 3], mask=[False, False, False], shrink=False) - b = a + 1 - assert_equal(b.mask, [0, 0, 0]) - # In place binary operation - a += 1 - assert_equal(a.mask, [0, 0, 0]) - # Domained binary operation - b = a / 1. - assert_equal(b.mask, [0, 0, 0]) - # In place binary operation - a /= 1. 
- assert_equal(a.mask, [0, 0, 0]) - - - def test_mod(self): - "Tests mod" - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - assert_equal(mod(x, y), mod(xm, ym)) - test = mod(ym, xm) - assert_equal(test, np.mod(ym, xm)) - assert_equal(test.mask, mask_or(xm.mask, ym.mask)) - test = mod(xm, ym) - assert_equal(test, np.mod(xm, ym)) - assert_equal(test.mask, mask_or(mask_or(xm.mask, ym.mask), (ym == 0))) - - def test_TakeTransposeInnerOuter(self): - "Test of take, transpose, inner, outer products" - x = arange(24) - y = np.arange(24) - x[5:6] = masked - x = x.reshape(2, 3, 4) - y = y.reshape(2, 3, 4) - assert_equal(np.transpose(y, (2, 0, 1)), transpose(x, (2, 0, 1))) - assert_equal(np.take(y, (2, 0, 1), 1), take(x, (2, 0, 1), 1)) - assert_equal(np.inner(filled(x, 0), filled(y, 0)), - inner(x, y)) - assert_equal(np.outer(filled(x, 0), filled(y, 0)), - outer(x, y)) - y = array(['abc', 1, 'def', 2, 3], object) - y[2] = masked - t = take(y, [0, 3, 4]) - assert t[0] == 'abc' - assert t[1] == 2 - assert t[2] == 3 - - - def test_imag_real(self): - "Check complex" - xx = array([1 + 10j, 20 + 2j], mask=[1, 0]) - assert_equal(xx.imag, [10, 2]) - assert_equal(xx.imag.filled(), [1e+20, 2]) - assert_equal(xx.imag.dtype, xx._data.imag.dtype) - assert_equal(xx.real, [1, 20]) - assert_equal(xx.real.filled(), [1e+20, 20]) - assert_equal(xx.real.dtype, xx._data.real.dtype) - - - def test_methods_with_output(self): - xm = array(np.random.uniform(0, 10, 12)).reshape(3, 4) - xm[:, 0] = xm[0] = xm[-1, -1] = masked - # - funclist = ('sum', 'prod', 'var', 'std', 'max', 'min', 'ptp', 'mean',) - # - for funcname in funclist: - npfunc = getattr(np, funcname) - xmmeth = getattr(xm, funcname) - # A ndarray as explicit input - output = np.empty(4, dtype=float) - output.fill(-9999) - result = npfunc(xm, axis=0, out=output) - # ... 
the result should be the given output - self.assertTrue(result is output) - assert_equal(result, xmmeth(axis=0, out=output)) - # - output = empty(4, dtype=int) - result = xmmeth(axis=0, out=output) - self.assertTrue(result is output) - self.assertTrue(output[0] is masked) - - - def test_eq_on_structured(self): - "Test the equality of structured arrays" - ndtype = [('A', int), ('B', int)] - a = array([(1, 1), (2, 2)], mask=[(0, 1), (0, 0)], dtype=ndtype) - test = (a == a) - assert_equal(test, [True, True]) - assert_equal(test.mask, [False, False]) - b = array([(1, 1), (2, 2)], mask=[(1, 0), (0, 0)], dtype=ndtype) - test = (a == b) - assert_equal(test, [False, True]) - assert_equal(test.mask, [True, False]) - b = array([(1, 1), (2, 2)], mask=[(0, 1), (1, 0)], dtype=ndtype) - test = (a == b) - assert_equal(test, [True, False]) - assert_equal(test.mask, [False, False]) - - - def test_ne_on_structured(self): - "Test the equality of structured arrays" - ndtype = [('A', int), ('B', int)] - a = array([(1, 1), (2, 2)], mask=[(0, 1), (0, 0)], dtype=ndtype) - test = (a != a) - assert_equal(test, [False, False]) - assert_equal(test.mask, [False, False]) - b = array([(1, 1), (2, 2)], mask=[(1, 0), (0, 0)], dtype=ndtype) - test = (a != b) - assert_equal(test, [True, False]) - assert_equal(test.mask, [True, False]) - b = array([(1, 1), (2, 2)], mask=[(0, 1), (1, 0)], dtype=ndtype) - test = (a != b) - assert_equal(test, [False, True]) - assert_equal(test.mask, [False, False]) - - - def test_eq_w_None(self): - a = array([1, 2], mask=False) - assert_equal(a == None, False) - assert_equal(a != None, True) - a = masked - assert_equal(a == None, masked) - - def test_eq_w_scalar(self): - a = array(1) - assert_equal(a == 1, True) - assert_equal(a == 0, False) - assert_equal(a != 1, False) - assert_equal(a != 0, True) - - - def test_numpyarithmetics(self): - "Check that the mask is not back-propagated when using numpy functions" - a = masked_array([-1, 0, 1, 2, 3], mask=[0, 0, 0, 0, 1]) - 
control = masked_array([np.nan, np.nan, 0, np.log(2), -1], - mask=[1, 1, 0, 0, 1]) - # - test = log(a) - assert_equal(test, control) - assert_equal(test.mask, control.mask) - assert_equal(a.mask, [0, 0, 0, 0, 1]) - # - test = np.log(a) - assert_equal(test, control) - assert_equal(test.mask, control.mask) - assert_equal(a.mask, [0, 0, 0, 0, 1]) - -#------------------------------------------------------------------------------ - -class TestMaskedArrayAttributes(TestCase): - - def test_keepmask(self): - "Tests the keep mask flag" - x = masked_array([1, 2, 3], mask=[1, 0, 0]) - mx = masked_array(x) - assert_equal(mx.mask, x.mask) - mx = masked_array(x, mask=[0, 1, 0], keep_mask=False) - assert_equal(mx.mask, [0, 1, 0]) - mx = masked_array(x, mask=[0, 1, 0], keep_mask=True) - assert_equal(mx.mask, [1, 1, 0]) - # We default to true - mx = masked_array(x, mask=[0, 1, 0]) - assert_equal(mx.mask, [1, 1, 0]) - - def test_hardmask(self): - "Test hard_mask" - d = arange(5) - n = [0, 0, 0, 1, 1] - m = make_mask(n) - xh = array(d, mask=m, hard_mask=True) - # We need to copy, to avoid updating d in xh ! 
- xs = array(d, mask=m, hard_mask=False, copy=True) - xh[[1, 4]] = [10, 40] - xs[[1, 4]] = [10, 40] - assert_equal(xh._data, [0, 10, 2, 3, 4]) - assert_equal(xs._data, [0, 10, 2, 3, 40]) - #assert_equal(xh.mask.ctypes._data, m.ctypes._data) - assert_equal(xs.mask, [0, 0, 0, 1, 0]) - self.assertTrue(xh._hardmask) - self.assertTrue(not xs._hardmask) - xh[1:4] = [10, 20, 30] - xs[1:4] = [10, 20, 30] - assert_equal(xh._data, [0, 10, 20, 3, 4]) - assert_equal(xs._data, [0, 10, 20, 30, 40]) - #assert_equal(xh.mask.ctypes._data, m.ctypes._data) - assert_equal(xs.mask, nomask) - xh[0] = masked - xs[0] = masked - assert_equal(xh.mask, [1, 0, 0, 1, 1]) - assert_equal(xs.mask, [1, 0, 0, 0, 0]) - xh[:] = 1 - xs[:] = 1 - assert_equal(xh._data, [0, 1, 1, 3, 4]) - assert_equal(xs._data, [1, 1, 1, 1, 1]) - assert_equal(xh.mask, [1, 0, 0, 1, 1]) - assert_equal(xs.mask, nomask) - # Switch to soft mask - xh.soften_mask() - xh[:] = arange(5) - assert_equal(xh._data, [0, 1, 2, 3, 4]) - assert_equal(xh.mask, nomask) - # Switch back to hard mask - xh.harden_mask() - xh[xh < 3] = masked - assert_equal(xh._data, [0, 1, 2, 3, 4]) - assert_equal(xh._mask, [1, 1, 1, 0, 0]) - xh[filled(xh > 1, False)] = 5 - assert_equal(xh._data, [0, 1, 2, 5, 5]) - assert_equal(xh._mask, [1, 1, 1, 0, 0]) - # - xh = array([[1, 2], [3, 4]], mask=[[1, 0], [0, 0]], hard_mask=True) - xh[0] = 0 - assert_equal(xh._data, [[1, 0], [3, 4]]) - assert_equal(xh._mask, [[1, 0], [0, 0]]) - xh[-1, -1] = 5 - assert_equal(xh._data, [[1, 0], [3, 5]]) - assert_equal(xh._mask, [[1, 0], [0, 0]]) - xh[filled(xh < 5, False)] = 2 - assert_equal(xh._data, [[1, 2], [2, 5]]) - assert_equal(xh._mask, [[1, 0], [0, 0]]) - - def test_hardmask_again(self): - "Another test of hardmask" - d = arange(5) - n = [0, 0, 0, 1, 1] - m = make_mask(n) - xh = array(d, mask=m, hard_mask=True) - xh[4:5] = 999 - #assert_equal(xh.mask.ctypes._data, m.ctypes._data) - xh[0:1] = 999 - assert_equal(xh._data, [999, 1, 2, 3, 4]) - - def 
test_hardmask_oncemore_yay(self): - "OK, yet another test of hardmask" - "Make sure that harden_mask/soften_mask/unshare_mask returns self" - a = array([1, 2, 3], mask=[1, 0, 0]) - b = a.harden_mask() - assert_equal(a, b) - b[0] = 0 - assert_equal(a, b) - assert_equal(b, array([1, 2, 3], mask=[1, 0, 0])) - a = b.soften_mask() - a[0] = 0 - assert_equal(a, b) - assert_equal(b, array([0, 2, 3], mask=[0, 0, 0])) - - - def test_smallmask(self): - "Checks the behaviour of _smallmask" - a = arange(10) - a[1] = masked - a[1] = 1 - assert_equal(a._mask, nomask) - a = arange(10) - a._smallmask = False - a[1] = masked - a[1] = 1 - assert_equal(a._mask, zeros(10)) - - - def test_shrink_mask(self): - "Tests .shrink_mask()" - a = array([1, 2, 3], mask=[0, 0, 0]) - b = a.shrink_mask() - assert_equal(a, b) - assert_equal(a.mask, nomask) - - - def test_flat(self): - "Test flat on masked_matrices" - test = masked_array(np.matrix([[1, 2, 3]]), mask=[0, 0, 1]) - test.flat = masked_array([3, 2, 1], mask=[1, 0, 0]) - control = masked_array(np.matrix([[3, 2, 1]]), mask=[1, 0, 0]) - assert_equal(test, control) - # - test = masked_array(np.matrix([[1, 2, 3]]), mask=[0, 0, 1]) - testflat = test.flat - testflat[:] = testflat[[2, 1, 0]] - assert_equal(test, control) - -#------------------------------------------------------------------------------ - -class TestFillingValues(TestCase): - # - def test_check_on_scalar(self): - "Test _check_fill_value" - _check_fill_value = np.ma.core._check_fill_value - # - fval = _check_fill_value(0, int) - assert_equal(fval, 0) - fval = _check_fill_value(None, int) - assert_equal(fval, default_fill_value(0)) - # - fval = _check_fill_value(0, "|S3") - assert_equal(fval, asbytes("0")) - fval = _check_fill_value(None, "|S3") - assert_equal(fval, default_fill_value("|S3")) - # - fval = _check_fill_value(1e+20, int) - assert_equal(fval, default_fill_value(0)) - - - def test_check_on_fields(self): - "Tests _check_fill_value with records" - _check_fill_value = 
np.ma.core._check_fill_value - ndtype = [('a', int), ('b', float), ('c', "|S3")] - # A check on a list should return a single record - fval = _check_fill_value([-999, -12345678.9, "???"], ndtype) - self.assertTrue(isinstance(fval, ndarray)) - assert_equal(fval.item(), [-999, -12345678.9, asbytes("???")]) - # A check on None should output the defaults - fval = _check_fill_value(None, ndtype) - self.assertTrue(isinstance(fval, ndarray)) - assert_equal(fval.item(), [default_fill_value(0), - default_fill_value(0.), - asbytes(default_fill_value("0"))]) - #.....Using a structured type as fill_value should work - fill_val = np.array((-999, -12345678.9, "???"), dtype=ndtype) - fval = _check_fill_value(fill_val, ndtype) - self.assertTrue(isinstance(fval, ndarray)) - assert_equal(fval.item(), [-999, -12345678.9, asbytes("???")]) - #.....Using a flexible type w/ a different type shouldn't matter - fill_val = np.array((-999, -12345678.9, "???"), - dtype=[("A", int), ("B", float), ("C", "|S3")]) - fval = _check_fill_value(fill_val, ndtype) - self.assertTrue(isinstance(fval, ndarray)) - assert_equal(fval.item(), [-999, -12345678.9, asbytes("???")]) - #.....Using an object-array shouldn't matter either - fill_val = np.array((-999, -12345678.9, "???"), dtype=object) - fval = _check_fill_value(fill_val, ndtype) - self.assertTrue(isinstance(fval, ndarray)) - assert_equal(fval.item(), [-999, -12345678.9, asbytes("???")]) - # - fill_val = np.array((-999, -12345678.9, "???")) - fval = _check_fill_value(fill_val, ndtype) - self.assertTrue(isinstance(fval, ndarray)) - assert_equal(fval.item(), [-999, -12345678.9, asbytes("???")]) - #.....One-field-only flexible type should work as well - ndtype = [("a", int)] - fval = _check_fill_value(-999999999, ndtype) - self.assertTrue(isinstance(fval, ndarray)) - assert_equal(fval.item(), (-999999999,)) - - - def test_fillvalue_conversion(self): - "Tests the behavior of fill_value during conversion" - # We had a tailored comment to make sure 
special attributes are properly - # dealt with - a = array(asbytes_nested(['3', '4', '5'])) - a._optinfo.update({'comment':"updated!"}) - # - b = array(a, dtype=int) - assert_equal(b._data, [3, 4, 5]) - assert_equal(b.fill_value, default_fill_value(0)) - # - b = array(a, dtype=float) - assert_equal(b._data, [3, 4, 5]) - assert_equal(b.fill_value, default_fill_value(0.)) - # - b = a.astype(int) - assert_equal(b._data, [3, 4, 5]) - assert_equal(b.fill_value, default_fill_value(0)) - assert_equal(b._optinfo['comment'], "updated!") - # - b = a.astype([('a', '|S3')]) - assert_equal(b['a']._data, a._data) - assert_equal(b['a'].fill_value, a.fill_value) - - - def test_fillvalue(self): - "Yet more fun with the fill_value" - data = masked_array([1, 2, 3], fill_value= -999) - series = data[[0, 2, 1]] - assert_equal(series._fill_value, data._fill_value) - # - mtype = [('f', float), ('s', '|S3')] - x = array([(1, 'a'), (2, 'b'), (pi, 'pi')], dtype=mtype) - x.fill_value = 999 - assert_equal(x.fill_value.item(), [999., asbytes('999')]) - assert_equal(x['f'].fill_value, 999) - assert_equal(x['s'].fill_value, asbytes('999')) - # - x.fill_value = (9, '???') - assert_equal(x.fill_value.item(), (9, asbytes('???'))) - assert_equal(x['f'].fill_value, 9) - assert_equal(x['s'].fill_value, asbytes('???')) - # - x = array([1, 2, 3.1]) - x.fill_value = 999 - assert_equal(np.asarray(x.fill_value).dtype, float) - assert_equal(x.fill_value, 999.) 
- assert_equal(x._fill_value, np.array(999.)) - - - def test_fillvalue_exotic_dtype(self): - "Tests yet more exotic flexible dtypes" - _check_fill_value = np.ma.core._check_fill_value - ndtype = [('i', int), ('s', '|S8'), ('f', float)] - control = np.array((default_fill_value(0), - default_fill_value('0'), - default_fill_value(0.),), - dtype=ndtype) - assert_equal(_check_fill_value(None, ndtype), control) - # The shape shouldn't matter - ndtype = [('f0', float, (2, 2))] - control = np.array((default_fill_value(0.),), - dtype=[('f0', float)]).astype(ndtype) - assert_equal(_check_fill_value(None, ndtype), control) - control = np.array((0,), dtype=[('f0', float)]).astype(ndtype) - assert_equal(_check_fill_value(0, ndtype), control) - # - ndtype = np.dtype("int, (2,3)float, float") - control = np.array((default_fill_value(0), - default_fill_value(0.), - default_fill_value(0.),), - dtype="int, float, float").astype(ndtype) - test = _check_fill_value(None, ndtype) - assert_equal(test, control) - control = np.array((0, 0, 0), dtype="int, float, float").astype(ndtype) - assert_equal(_check_fill_value(0, ndtype), control) - - - def test_extremum_fill_value(self): - "Tests extremum fill values for flexible type." 
- a = array([(1, (2, 3)), (4, (5, 6))], - dtype=[('A', int), ('B', [('BA', int), ('BB', int)])]) - test = a.fill_value - assert_equal(test['A'], default_fill_value(a['A'])) - assert_equal(test['B']['BA'], default_fill_value(a['B']['BA'])) - assert_equal(test['B']['BB'], default_fill_value(a['B']['BB'])) - # - test = minimum_fill_value(a) - assert_equal(test[0], minimum_fill_value(a['A'])) - assert_equal(test[1][0], minimum_fill_value(a['B']['BA'])) - assert_equal(test[1][1], minimum_fill_value(a['B']['BB'])) - assert_equal(test[1], minimum_fill_value(a['B'])) - # - test = maximum_fill_value(a) - assert_equal(test[0], maximum_fill_value(a['A'])) - assert_equal(test[1][0], maximum_fill_value(a['B']['BA'])) - assert_equal(test[1][1], maximum_fill_value(a['B']['BB'])) - assert_equal(test[1], maximum_fill_value(a['B'])) - - def test_fillvalue_individual_fields(self): - "Test setting fill_value on individual fields" - ndtype = [('a', int), ('b', int)] - # Explicit fill_value - a = array(zip([1, 2, 3], [4, 5, 6]), - fill_value=(-999, -999), dtype=ndtype) - f = a._fill_value - aa = a['a'] - aa.set_fill_value(10) - assert_equal(aa._fill_value, np.array(10)) - assert_equal(tuple(a.fill_value), (10, -999)) - a.fill_value['b'] = -10 - assert_equal(tuple(a.fill_value), (10, -10)) - # Implicit fill_value - t = array(zip([1, 2, 3], [4, 5, 6]), dtype=[('a', int), ('b', int)]) - tt = t['a'] - tt.set_fill_value(10) - assert_equal(tt._fill_value, np.array(10)) - assert_equal(tuple(t.fill_value), (10, default_fill_value(0))) - - def test_fillvalue_implicit_structured_array(self): - "Check that fill_value is always defined for structured arrays" - ndtype = ('b', float) - adtype = ('a', float) - a = array([(1.,), (2.,)], mask=[(False,), (False,)], - fill_value=(np.nan,), dtype=np.dtype([adtype])) - b = empty(a.shape, dtype=[adtype, ndtype]) - b['a'] = a['a'] - b['a'].set_fill_value(a['a'].fill_value) - f = b._fill_value[()] - assert(np.isnan(f[0])) - assert_equal(f[-1], 
default_fill_value(1.)) - - def test_fillvalue_as_arguments(self): - "Test adding a fill_value parameter to empty/ones/zeros" - a = empty(3, fill_value=999.) - assert_equal(a.fill_value, 999.) - # - a = ones(3, fill_value=999., dtype=float) - assert_equal(a.fill_value, 999.) - # - a = zeros(3, fill_value=0., dtype=complex) - assert_equal(a.fill_value, 0.) - # - a = identity(3, fill_value=0., dtype=complex) - assert_equal(a.fill_value, 0.) - -#------------------------------------------------------------------------------ - -class TestUfuncs(TestCase): - "Test class for the application of ufuncs on MaskedArrays." - - def setUp(self): - "Base data definition." - self.d = (array([1.0, 0, -1, pi / 2] * 2, mask=[0, 1] + [0] * 6), - array([1.0, 0, -1, pi / 2] * 2, mask=[1, 0] + [0] * 6),) - self.err_status = np.geterr() - np.seterr(divide='ignore', invalid='ignore') - - def tearDown(self): - np.seterr(**self.err_status) - - def test_testUfuncRegression(self): - "Tests new ufuncs on MaskedArrays." - for f in ['sqrt', 'log', 'log10', 'exp', 'conjugate', - 'sin', 'cos', 'tan', - 'arcsin', 'arccos', 'arctan', - 'sinh', 'cosh', 'tanh', - 'arcsinh', - 'arccosh', - 'arctanh', - 'absolute', 'fabs', 'negative', - # 'nonzero', 'around', - 'floor', 'ceil', - # 'sometrue', 'alltrue', - 'logical_not', - 'add', 'subtract', 'multiply', - 'divide', 'true_divide', 'floor_divide', - 'remainder', 'fmod', 'hypot', 'arctan2', - 'equal', 'not_equal', 'less_equal', 'greater_equal', - 'less', 'greater', - 'logical_and', 'logical_or', 'logical_xor', - ]: - try: - uf = getattr(umath, f) - except AttributeError: - uf = getattr(fromnumeric, f) - mf = getattr(numpy.ma.core, f) - args = self.d[:uf.nin] - ur = uf(*args) - mr = mf(*args) - assert_equal(ur.filled(0), mr.filled(0), f) - assert_mask_equal(ur.mask, mr.mask, err_msg=f) - - def test_reduce(self): - "Tests reduce on MaskedArrays." 
- a = self.d[0] - self.assertTrue(not alltrue(a, axis=0)) - self.assertTrue(sometrue(a, axis=0)) - assert_equal(sum(a[:3], axis=0), 0) - assert_equal(product(a, axis=0), 0) - assert_equal(add.reduce(a), pi) - - def test_minmax(self): - "Tests extrema on MaskedArrays." - a = arange(1, 13).reshape(3, 4) - amask = masked_where(a < 5, a) - assert_equal(amask.max(), a.max()) - assert_equal(amask.min(), 5) - assert_equal(amask.max(0), a.max(0)) - assert_equal(amask.min(0), [5, 6, 7, 8]) - self.assertTrue(amask.max(1)[0].mask) - self.assertTrue(amask.min(1)[0].mask) - - def test_ndarray_mask(self): - "Check that the mask of the result is a ndarray (not a MaskedArray...)" - a = masked_array([-1, 0, 1, 2, 3], mask=[0, 0, 0, 0, 1]) - test = np.sqrt(a) - control = masked_array([-1, 0, 1, np.sqrt(2), -1], - mask=[1, 0, 0, 0, 1]) - assert_equal(test, control) - assert_equal(test.mask, control.mask) - self.assertTrue(not isinstance(test.mask, MaskedArray)) - -#------------------------------------------------------------------------------ - -class TestMaskedArrayInPlaceArithmetics(TestCase): - "Test MaskedArray Arithmetics" - - def setUp(self): - x = arange(10) - y = arange(10) - xm = arange(10) - xm[2] = masked - self.intdata = (x, y, xm) - self.floatdata = (x.astype(float), y.astype(float), xm.astype(float)) - - def test_inplace_addition_scalar(self): - """Test of inplace additions""" - (x, y, xm) = self.intdata - xm[2] = masked - x += 1 - assert_equal(x, y + 1) - xm += 1 - assert_equal(xm, y + 1) - # - warnings.simplefilter('ignore', DeprecationWarning) - (x, _, xm) = self.floatdata - id1 = x.raw_data().ctypes._data - x += 1. - assert (id1 == x.raw_data().ctypes._data) - assert_equal(x, y + 1.) 
- warnings.simplefilter('default', DeprecationWarning) - - def test_inplace_addition_array(self): - """Test of inplace additions""" - (x, y, xm) = self.intdata - m = xm.mask - a = arange(10, dtype=float) - a[-1] = masked - x += a - xm += a - assert_equal(x, y + a) - assert_equal(xm, y + a) - assert_equal(xm.mask, mask_or(m, a.mask)) - - def test_inplace_subtraction_scalar(self): - """Test of inplace subtractions""" - (x, y, xm) = self.intdata - x -= 1 - assert_equal(x, y - 1) - xm -= 1 - assert_equal(xm, y - 1) - - def test_inplace_subtraction_array(self): - """Test of inplace subtractions""" - (x, y, xm) = self.floatdata - m = xm.mask - a = arange(10, dtype=float) - a[-1] = masked - x -= a - xm -= a - assert_equal(x, y - a) - assert_equal(xm, y - a) - assert_equal(xm.mask, mask_or(m, a.mask)) - - def test_inplace_multiplication_scalar(self): - """Test of inplace multiplication""" - (x, y, xm) = self.floatdata - x *= 2.0 - assert_equal(x, y * 2) - xm *= 2.0 - assert_equal(xm, y * 2) - - def test_inplace_multiplication_array(self): - """Test of inplace multiplication""" - (x, y, xm) = self.floatdata - m = xm.mask - a = arange(10, dtype=float) - a[-1] = masked - x *= a - xm *= a - assert_equal(x, y * a) - assert_equal(xm, y * a) - assert_equal(xm.mask, mask_or(m, a.mask)) - - def test_inplace_division_scalar_int(self): - """Test of inplace division""" - (x, y, xm) = self.intdata - x = arange(10) * 2 - xm = arange(10) * 2 - xm[2] = masked - x /= 2 - assert_equal(x, y) - xm /= 2 - assert_equal(xm, y) - - def test_inplace_division_scalar_float(self): - """Test of inplace division""" - (x, y, xm) = self.floatdata - x /= 2.0 - assert_equal(x, y / 2.0) - xm /= arange(10) - assert_equal(xm, ones((10,))) - - def test_inplace_division_array_float(self): - """Test of inplace division""" - (x, y, xm) = self.floatdata - m = xm.mask - a = arange(10, dtype=float) - a[-1] = masked - x /= a - xm /= a - assert_equal(x, y / a) - assert_equal(xm, y / a) - assert_equal(xm.mask, 
mask_or(mask_or(m, a.mask), (a == 0))) - - def test_inplace_division_misc(self): - # - x = [1., 1., 1., -2., pi / 2., 4., 5., -10., 10., 1., 2., 3.] - y = [5., 0., 3., 2., -1., -4., 0., -10., 10., 1., 0., 3.] - m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] - m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 , 0, 1] - xm = masked_array(x, mask=m1) - ym = masked_array(y, mask=m2) - # - z = xm / ym - assert_equal(z._mask, [1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1]) - assert_equal(z._data, [1., 1., 1., -1., -pi / 2., 4., 5., 1., 1., 1., 2., 3.]) - #assert_equal(z._data, [0.2,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.]) - # - xm = xm.copy() - xm /= ym - assert_equal(xm._mask, [1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1]) - assert_equal(xm._data, [1., 1., 1., -1., -pi / 2., 4., 5., 1., 1., 1., 2., 3.]) - #assert_equal(xm._data, [1/5.,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.]) - - - def test_datafriendly_add(self): - "Test keeping data w/ (inplace) addition" - x = array([1, 2, 3], mask=[0, 0, 1]) - # Test add w/ scalar - xx = x + 1 - assert_equal(xx.data, [2, 3, 3]) - assert_equal(xx.mask, [0, 0, 1]) - # Test iadd w/ scalar - x += 1 - assert_equal(x.data, [2, 3, 3]) - assert_equal(x.mask, [0, 0, 1]) - # Test add w/ array - x = array([1, 2, 3], mask=[0, 0, 1]) - xx = x + array([1, 2, 3], mask=[1, 0, 0]) - assert_equal(xx.data, [1, 4, 3]) - assert_equal(xx.mask, [1, 0, 1]) - # Test iadd w/ array - x = array([1, 2, 3], mask=[0, 0, 1]) - x += array([1, 2, 3], mask=[1, 0, 0]) - assert_equal(x.data, [1, 4, 3]) - assert_equal(x.mask, [1, 0, 1]) - - - def test_datafriendly_sub(self): - "Test keeping data w/ (inplace) subtraction" - # Test sub w/ scalar - x = array([1, 2, 3], mask=[0, 0, 1]) - xx = x - 1 - assert_equal(xx.data, [0, 1, 3]) - assert_equal(xx.mask, [0, 0, 1]) - # Test isub w/ scalar - x = array([1, 2, 3], mask=[0, 0, 1]) - x -= 1 - assert_equal(x.data, [0, 1, 3]) - assert_equal(x.mask, [0, 0, 1]) - # Test sub w/ array - x = array([1, 2, 3], mask=[0, 0, 1]) - xx = x - array([1, 2, 3], mask=[1, 0, 0]) 
- assert_equal(xx.data, [1, 0, 3]) - assert_equal(xx.mask, [1, 0, 1]) - # Test isub w/ array - x = array([1, 2, 3], mask=[0, 0, 1]) - x -= array([1, 2, 3], mask=[1, 0, 0]) - assert_equal(x.data, [1, 0, 3]) - assert_equal(x.mask, [1, 0, 1]) - - - def test_datafriendly_mul(self): - "Test keeping data w/ (inplace) multiplication" - # Test mul w/ scalar - x = array([1, 2, 3], mask=[0, 0, 1]) - xx = x * 2 - assert_equal(xx.data, [2, 4, 3]) - assert_equal(xx.mask, [0, 0, 1]) - # Test imul w/ scalar - x = array([1, 2, 3], mask=[0, 0, 1]) - x *= 2 - assert_equal(x.data, [2, 4, 3]) - assert_equal(x.mask, [0, 0, 1]) - # Test mul w/ array - x = array([1, 2, 3], mask=[0, 0, 1]) - xx = x * array([10, 20, 30], mask=[1, 0, 0]) - assert_equal(xx.data, [1, 40, 3]) - assert_equal(xx.mask, [1, 0, 1]) - # Test imul w/ array - x = array([1, 2, 3], mask=[0, 0, 1]) - x *= array([10, 20, 30], mask=[1, 0, 0]) - assert_equal(x.data, [1, 40, 3]) - assert_equal(x.mask, [1, 0, 1]) - - - def test_datafriendly_div(self): - "Test keeping data w/ (inplace) division" - # Test div on scalar - x = array([1, 2, 3], mask=[0, 0, 1]) - xx = x / 2. - assert_equal(xx.data, [1 / 2., 2 / 2., 3]) - assert_equal(xx.mask, [0, 0, 1]) - # Test idiv on scalar - x = array([1., 2., 3.], mask=[0, 0, 1]) - x /= 2. - assert_equal(x.data, [1 / 2., 2 / 2., 3]) - assert_equal(x.mask, [0, 0, 1]) - # Test div on array - x = array([1., 2., 3.], mask=[0, 0, 1]) - xx = x / array([10., 20., 30.], mask=[1, 0, 0]) - assert_equal(xx.data, [1., 2. / 20., 3.]) - assert_equal(xx.mask, [1, 0, 1]) - # Test idiv on array - x = array([1., 2., 3.], mask=[0, 0, 1]) - x /= array([10., 20., 30.], mask=[1, 0, 0]) - assert_equal(x.data, [1., 2 / 20., 3.]) - assert_equal(x.mask, [1, 0, 1]) - - - def test_datafriendly_pow(self): - "Test keeping data w/ (inplace) power" - # Test pow on scalar - x = array([1., 2., 3.], mask=[0, 0, 1]) - xx = x ** 2.5 - assert_equal(xx.data, [1., 2. 
** 2.5, 3.]) - assert_equal(xx.mask, [0, 0, 1]) - # Test ipow on scalar - x **= 2.5 - assert_equal(x.data, [1., 2. ** 2.5, 3]) - assert_equal(x.mask, [0, 0, 1]) - - - def test_datafriendly_add_arrays(self): - a = array([[1, 1], [3, 3]]) - b = array([1, 1], mask=[0, 0]) - a += b - assert_equal(a, [[2, 2], [4, 4]]) - if a.mask is not nomask: - assert_equal(a.mask, [[0, 0], [0, 0]]) - # - a = array([[1, 1], [3, 3]]) - b = array([1, 1], mask=[0, 1]) - a += b - assert_equal(a, [[2, 2], [4, 4]]) - assert_equal(a.mask, [[0, 1], [0, 1]]) - - - def test_datafriendly_sub_arrays(self): - a = array([[1, 1], [3, 3]]) - b = array([1, 1], mask=[0, 0]) - a -= b - assert_equal(a, [[0, 0], [2, 2]]) - if a.mask is not nomask: - assert_equal(a.mask, [[0, 0], [0, 0]]) - # - a = array([[1, 1], [3, 3]]) - b = array([1, 1], mask=[0, 1]) - a -= b - assert_equal(a, [[0, 0], [2, 2]]) - assert_equal(a.mask, [[0, 1], [0, 1]]) - - - def test_datafriendly_mul_arrays(self): - a = array([[1, 1], [3, 3]]) - b = array([1, 1], mask=[0, 0]) - a *= b - assert_equal(a, [[1, 1], [3, 3]]) - if a.mask is not nomask: - assert_equal(a.mask, [[0, 0], [0, 0]]) - # - a = array([[1, 1], [3, 3]]) - b = array([1, 1], mask=[0, 1]) - a *= b - assert_equal(a, [[1, 1], [3, 3]]) - assert_equal(a.mask, [[0, 1], [0, 1]]) - -#------------------------------------------------------------------------------ - -class TestMaskedArrayMethods(TestCase): - "Test class for miscellaneous MaskedArrays methods." - def setUp(self): - "Base data definition." 
- x = np.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, - 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, - 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, - 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479, - 7.189, 9.645, 5.395, 4.961, 9.894, 2.893, - 7.357, 9.828, 6.272, 3.758, 6.693, 0.993]) - X = x.reshape(6, 6) - XX = x.reshape(3, 2, 2, 3) - - m = np.array([0, 1, 0, 1, 0, 0, - 1, 0, 1, 1, 0, 1, - 0, 0, 0, 1, 0, 1, - 0, 0, 0, 1, 1, 1, - 1, 0, 0, 1, 0, 0, - 0, 0, 1, 0, 1, 0]) - mx = array(data=x, mask=m) - mX = array(data=X, mask=m.reshape(X.shape)) - mXX = array(data=XX, mask=m.reshape(XX.shape)) - - m2 = np.array([1, 1, 0, 1, 0, 0, - 1, 1, 1, 1, 0, 1, - 0, 0, 1, 1, 0, 1, - 0, 0, 0, 1, 1, 1, - 1, 0, 0, 1, 1, 0, - 0, 0, 1, 0, 1, 1]) - m2x = array(data=x, mask=m2) - m2X = array(data=X, mask=m2.reshape(X.shape)) - m2XX = array(data=XX, mask=m2.reshape(XX.shape)) - self.d = (x, X, XX, m, mx, mX, mXX, m2x, m2X, m2XX) - - def test_generic_methods(self): - "Tests some MaskedArray methods." - a = array([1, 3, 2]) - b = array([1, 3, 2], mask=[1, 0, 1]) - assert_equal(a.any(), a._data.any()) - assert_equal(a.all(), a._data.all()) - assert_equal(a.argmax(), a._data.argmax()) - assert_equal(a.argmin(), a._data.argmin()) - assert_equal(a.choose(0, 1, 2, 3, 4), a._data.choose(0, 1, 2, 3, 4)) - assert_equal(a.compress([1, 0, 1]), a._data.compress([1, 0, 1])) - assert_equal(a.conj(), a._data.conj()) - assert_equal(a.conjugate(), a._data.conjugate()) - # - m = array([[1, 2], [3, 4]]) - assert_equal(m.diagonal(), m._data.diagonal()) - assert_equal(a.sum(), a._data.sum()) - assert_equal(a.take([1, 2]), a._data.take([1, 2])) - assert_equal(m.transpose(), m._data.transpose()) - - - def test_allclose(self): - "Tests allclose on arrays" - a = np.random.rand(10) - b = a + np.random.rand(10) * 1e-8 - self.assertTrue(allclose(a, b)) - # Test allclose w/ infs - a[0] = np.inf - self.assertTrue(not allclose(a, b)) - b[0] = np.inf - self.assertTrue(allclose(a, b)) - # Test all close w/ masked - a = 
masked_array(a) - a[-1] = masked - self.assertTrue(allclose(a, b, masked_equal=True)) - self.assertTrue(not allclose(a, b, masked_equal=False)) - # Test comparison w/ scalar - a *= 1e-8 - a[0] = 0 - self.assertTrue(allclose(a, 0, masked_equal=True)) - - - def test_allany(self): - """Checks the any/all methods/functions.""" - x = np.array([[ 0.13, 0.26, 0.90], - [ 0.28, 0.33, 0.63], - [ 0.31, 0.87, 0.70]]) - m = np.array([[ True, False, False], - [False, False, False], - [True, True, False]], dtype=np.bool_) - mx = masked_array(x, mask=m) - xbig = np.array([[False, False, True], - [False, False, True], - [False, True, True]], dtype=np.bool_) - mxbig = (mx > 0.5) - mxsmall = (mx < 0.5) - # - assert (mxbig.all() == False) - assert (mxbig.any() == True) - assert_equal(mxbig.all(0), [False, False, True]) - assert_equal(mxbig.all(1), [False, False, True]) - assert_equal(mxbig.any(0), [False, False, True]) - assert_equal(mxbig.any(1), [True, True, True]) - # - assert (mxsmall.all() == False) - assert (mxsmall.any() == True) - assert_equal(mxsmall.all(0), [True, True, False]) - assert_equal(mxsmall.all(1), [False, False, False]) - assert_equal(mxsmall.any(0), [True, True, False]) - assert_equal(mxsmall.any(1), [True, True, False]) - - - def test_allany_onmatrices(self): - x = np.array([[ 0.13, 0.26, 0.90], - [ 0.28, 0.33, 0.63], - [ 0.31, 0.87, 0.70]]) - X = np.matrix(x) - m = np.array([[ True, False, False], - [False, False, False], - [True, True, False]], dtype=np.bool_) - mX = masked_array(X, mask=m) - mXbig = (mX > 0.5) - mXsmall = (mX < 0.5) - # - assert (mXbig.all() == False) - assert (mXbig.any() == True) - assert_equal(mXbig.all(0), np.matrix([False, False, True])) - assert_equal(mXbig.all(1), np.matrix([False, False, True]).T) - assert_equal(mXbig.any(0), np.matrix([False, False, True])) - assert_equal(mXbig.any(1), np.matrix([ True, True, True]).T) - # - assert (mXsmall.all() == False) - assert (mXsmall.any() == True) - assert_equal(mXsmall.all(0), 
np.matrix([True, True, False])) - assert_equal(mXsmall.all(1), np.matrix([False, False, False]).T) - assert_equal(mXsmall.any(0), np.matrix([True, True, False])) - assert_equal(mXsmall.any(1), np.matrix([True, True, False]).T) - - - def test_allany_oddities(self): - "Some fun with all and any" - store = empty(1, dtype=bool) - full = array([1, 2, 3], mask=True) - # - self.assertTrue(full.all() is masked) - full.all(out=store) - self.assertTrue(store) - assert_equal(store._mask, True) - self.assertTrue(store is not masked) - # - store = empty(1, dtype=bool) - self.assertTrue(full.any() is masked) - full.any(out=store) - self.assertTrue(not store) - assert_equal(store._mask, True) - self.assertTrue(store is not masked) - - - def test_argmax_argmin(self): - "Tests argmin & argmax on MaskedArrays." - (x, X, XX, m, mx, mX, mXX, m2x, m2X, m2XX) = self.d - # - assert_equal(mx.argmin(), 35) - assert_equal(mX.argmin(), 35) - assert_equal(m2x.argmin(), 4) - assert_equal(m2X.argmin(), 4) - assert_equal(mx.argmax(), 28) - assert_equal(mX.argmax(), 28) - assert_equal(m2x.argmax(), 31) - assert_equal(m2X.argmax(), 31) - # - assert_equal(mX.argmin(0), [2, 2, 2, 5, 0, 5]) - assert_equal(m2X.argmin(0), [2, 2, 4, 5, 0, 4]) - assert_equal(mX.argmax(0), [0, 5, 0, 5, 4, 0]) - assert_equal(m2X.argmax(0), [5, 5, 0, 5, 1, 0]) - # - assert_equal(mX.argmin(1), [4, 1, 0, 0, 5, 5, ]) - assert_equal(m2X.argmin(1), [4, 4, 0, 0, 5, 3]) - assert_equal(mX.argmax(1), [2, 4, 1, 1, 4, 1]) - assert_equal(m2X.argmax(1), [2, 4, 1, 1, 1, 1]) - - - def test_clip(self): - "Tests clip on MaskedArrays." 
- x = np.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, - 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, - 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, - 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479, - 7.189, 9.645, 5.395, 4.961, 9.894, 2.893, - 7.357, 9.828, 6.272, 3.758, 6.693, 0.993]) - m = np.array([0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, - 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, - 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0]) - mx = array(x, mask=m) - clipped = mx.clip(2, 8) - assert_equal(clipped.mask, mx.mask) - assert_equal(clipped._data, x.clip(2, 8)) - assert_equal(clipped._data, mx._data.clip(2, 8)) - - - def test_compress(self): - "test compress" - a = masked_array([1., 2., 3., 4., 5.], fill_value=9999) - condition = (a > 1.5) & (a < 3.5) - assert_equal(a.compress(condition), [2., 3.]) - # - a[[2, 3]] = masked - b = a.compress(condition) - assert_equal(b._data, [2., 3.]) - assert_equal(b._mask, [0, 1]) - assert_equal(b.fill_value, 9999) - assert_equal(b, a[condition]) - # - condition = (a < 4.) - b = a.compress(condition) - assert_equal(b._data, [1., 2., 3.]) - assert_equal(b._mask, [0, 0, 1]) - assert_equal(b.fill_value, 9999) - assert_equal(b, a[condition]) - # - a = masked_array([[10, 20, 30], [40, 50, 60]], mask=[[0, 0, 1], [1, 0, 0]]) - b = a.compress(a.ravel() >= 22) - assert_equal(b._data, [30, 40, 50, 60]) - assert_equal(b._mask, [1, 1, 0, 0]) - # - x = np.array([3, 1, 2]) - b = a.compress(x >= 2, axis=1) - assert_equal(b._data, [[10, 30], [40, 60]]) - assert_equal(b._mask, [[0, 1], [1, 0]]) - - - def test_compressed(self): - "Tests compressed" - a = array([1, 2, 3, 4], mask=[0, 0, 0, 0]) - b = a.compressed() - assert_equal(b, a) - a[0] = masked - b = a.compressed() - assert_equal(b, [2, 3, 4]) - # - a = array(np.matrix([1, 2, 3, 4]), mask=[0, 0, 0, 0]) - b = a.compressed() - assert_equal(b, a) - self.assertTrue(isinstance(b, np.matrix)) - a[0, 0] = masked - b = a.compressed() - assert_equal(b, [[2, 3, 4]]) - - - def test_empty(self): - "Tests empty/like" - datatype = 
[('a', int), ('b', float), ('c', '|S8')] - a = masked_array([(1, 1.1, '1.1'), (2, 2.2, '2.2'), (3, 3.3, '3.3')], - dtype=datatype) - assert_equal(len(a.fill_value.item()), len(datatype)) - # - b = empty_like(a) - assert_equal(b.shape, a.shape) - assert_equal(b.fill_value, a.fill_value) - # - b = empty(len(a), dtype=datatype) - assert_equal(b.shape, a.shape) - assert_equal(b.fill_value, a.fill_value) - - - def test_put(self): - "Tests put." - d = arange(5) - n = [0, 0, 0, 1, 1] - m = make_mask(n) - x = array(d, mask=m) - self.assertTrue(x[3] is masked) - self.assertTrue(x[4] is masked) - x[[1, 4]] = [10, 40] - #self.assertTrue(x.mask is not m) - self.assertTrue(x[3] is masked) - self.assertTrue(x[4] is not masked) - assert_equal(x, [0, 10, 2, -1, 40]) - # - x = masked_array(arange(10), mask=[1, 0, 0, 0, 0] * 2) - i = [0, 2, 4, 6] - x.put(i, [6, 4, 2, 0]) - assert_equal(x, asarray([6, 1, 4, 3, 2, 5, 0, 7, 8, 9, ])) - assert_equal(x.mask, [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]) - x.put(i, masked_array([0, 2, 4, 6], [1, 0, 1, 0])) - assert_array_equal(x, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ]) - assert_equal(x.mask, [1, 0, 0, 0, 1, 1, 0, 0, 0, 0]) - # - x = masked_array(arange(10), mask=[1, 0, 0, 0, 0] * 2) - put(x, i, [6, 4, 2, 0]) - assert_equal(x, asarray([6, 1, 4, 3, 2, 5, 0, 7, 8, 9, ])) - assert_equal(x.mask, [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]) - put(x, i, masked_array([0, 2, 4, 6], [1, 0, 1, 0])) - assert_array_equal(x, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ]) - assert_equal(x.mask, [1, 0, 0, 0, 1, 1, 0, 0, 0, 0]) - - - def test_put_hardmask(self): - "Tests put on hardmask" - d = arange(5) - n = [0, 0, 0, 1, 1] - m = make_mask(n) - xh = array(d + 1, mask=m, hard_mask=True, copy=True) - xh.put([4, 2, 0, 1, 3], [1, 2, 3, 4, 5]) - assert_equal(xh._data, [3, 4, 2, 4, 5]) - - - def test_putmask(self): - x = arange(6) + 1 - mx = array(x, mask=[0, 0, 0, 1, 1, 1]) - mask = [0, 0, 1, 0, 0, 1] - # w/o mask, w/o masked values - xx = x.copy() - putmask(xx, mask, 99) - assert_equal(xx, [1, 2, 99, 4, 
5, 99]) - # w/ mask, w/o masked values - mxx = mx.copy() - putmask(mxx, mask, 99) - assert_equal(mxx._data, [1, 2, 99, 4, 5, 99]) - assert_equal(mxx._mask, [0, 0, 0, 1, 1, 0]) - # w/o mask, w/ masked values - values = array([10, 20, 30, 40, 50, 60], mask=[1, 1, 1, 0, 0, 0]) - xx = x.copy() - putmask(xx, mask, values) - assert_equal(xx._data, [1, 2, 30, 4, 5, 60]) - assert_equal(xx._mask, [0, 0, 1, 0, 0, 0]) - # w/ mask, w/ masked values - mxx = mx.copy() - putmask(mxx, mask, values) - assert_equal(mxx._data, [1, 2, 30, 4, 5, 60]) - assert_equal(mxx._mask, [0, 0, 1, 1, 1, 0]) - # w/ mask, w/ masked values + hardmask - mxx = mx.copy() - mxx.harden_mask() - putmask(mxx, mask, values) - assert_equal(mxx, [1, 2, 30, 4, 5, 60]) - - - def test_ravel(self): - "Tests ravel" - a = array([[1, 2, 3, 4, 5]], mask=[[0, 1, 0, 0, 0]]) - aravel = a.ravel() - assert_equal(a._mask.shape, a.shape) - a = array([0, 0], mask=[1, 1]) - aravel = a.ravel() - assert_equal(a._mask.shape, a.shape) - a = array(np.matrix([1, 2, 3, 4, 5]), mask=[[0, 1, 0, 0, 0]]) - aravel = a.ravel() - assert_equal(a.shape, (1, 5)) - assert_equal(a._mask.shape, a.shape) - # Checks that small_mask is preserved - a = array([1, 2, 3, 4], mask=[0, 0, 0, 0], shrink=False) - assert_equal(a.ravel()._mask, [0, 0, 0, 0]) - # Test that the fill_value is preserved - a.fill_value = -99 - a.shape = (2, 2) - ar = a.ravel() - assert_equal(ar._mask, [0, 0, 0, 0]) - assert_equal(ar._data, [1, 2, 3, 4]) - assert_equal(ar.fill_value, -99) - - - def test_reshape(self): - "Tests reshape" - x = arange(4) - x[0] = masked - y = x.reshape(2, 2) - assert_equal(y.shape, (2, 2,)) - assert_equal(y._mask.shape, (2, 2,)) - assert_equal(x.shape, (4,)) - assert_equal(x._mask.shape, (4,)) - - - def test_sort(self): - "Test sort" - x = array([1, 4, 2, 3], mask=[0, 1, 0, 0], dtype=np.uint8) - # - sortedx = sort(x) - assert_equal(sortedx._data, [1, 2, 3, 4]) - assert_equal(sortedx._mask, [0, 0, 0, 1]) - # - sortedx = sort(x, endwith=False) - 
assert_equal(sortedx._data, [4, 1, 2, 3]) - assert_equal(sortedx._mask, [1, 0, 0, 0]) - # - x.sort() - assert_equal(x._data, [1, 2, 3, 4]) - assert_equal(x._mask, [0, 0, 0, 1]) - # - x = array([1, 4, 2, 3], mask=[0, 1, 0, 0], dtype=np.uint8) - x.sort(endwith=False) - assert_equal(x._data, [4, 1, 2, 3]) - assert_equal(x._mask, [1, 0, 0, 0]) - # - x = [1, 4, 2, 3] - sortedx = sort(x) - self.assertTrue(not isinstance(sortedx, MaskedArray)) - # - x = array([0, 1, -1, -2, 2], mask=nomask, dtype=np.int8) - sortedx = sort(x, endwith=False) - assert_equal(sortedx._data, [-2, -1, 0, 1, 2]) - x = array([0, 1, -1, -2, 2], mask=[0, 1, 0, 0, 1], dtype=np.int8) - sortedx = sort(x, endwith=False) - assert_equal(sortedx._data, [1, 2, -2, -1, 0]) - assert_equal(sortedx._mask, [1, 1, 0, 0, 0]) - - - def test_sort_2d(self): - "Check sort of 2D array." - # 2D array w/o mask - a = masked_array([[8, 4, 1], [2, 0, 9]]) - a.sort(0) - assert_equal(a, [[2, 0, 1], [8, 4, 9]]) - a = masked_array([[8, 4, 1], [2, 0, 9]]) - a.sort(1) - assert_equal(a, [[1, 4, 8], [0, 2, 9]]) - # 2D array w/mask - a = masked_array([[8, 4, 1], [2, 0, 9]], mask=[[1, 0, 0], [0, 0, 1]]) - a.sort(0) - assert_equal(a, [[2, 0, 1], [8, 4, 9]]) - assert_equal(a._mask, [[0, 0, 0], [1, 0, 1]]) - a = masked_array([[8, 4, 1], [2, 0, 9]], mask=[[1, 0, 0], [0, 0, 1]]) - a.sort(1) - assert_equal(a, [[1, 4, 8], [0, 2, 9]]) - assert_equal(a._mask, [[0, 0, 1], [0, 0, 1]]) - # 3D - a = masked_array([[[7, 8, 9], [4, 5, 6], [1, 2, 3]], - [[1, 2, 3], [7, 8, 9], [4, 5, 6]], - [[7, 8, 9], [1, 2, 3], [4, 5, 6]], - [[4, 5, 6], [1, 2, 3], [7, 8, 9]]]) - a[a % 4 == 0] = masked - am = a.copy() - an = a.filled(99) - am.sort(0) - an.sort(0) - assert_equal(am, an) - am = a.copy() - an = a.filled(99) - am.sort(1) - an.sort(1) - assert_equal(am, an) - am = a.copy() - an = a.filled(99) - am.sort(2) - an.sort(2) - assert_equal(am, an) - - - def test_sort_flexible(self): - "Test sort on flexible dtype." 
- a = array([(3, 3), (3, 2), (2, 2), (2, 1), (1, 0), (1, 1), (1, 2)], - mask=[(0, 0), (0, 1), (0, 0), (0, 0), (1, 0), (0, 0), (0, 0)], - dtype=[('A', int), ('B', int)]) - # - test = sort(a) - b = array([(1, 1), (1, 2), (2, 1), (2, 2), (3, 3), (3, 2), (1, 0)], - mask=[(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 1), (1, 0)], - dtype=[('A', int), ('B', int)]) - assert_equal(test, b) - assert_equal(test.mask, b.mask) - # - test = sort(a, endwith=False) - b = array([(1, 0), (1, 1), (1, 2), (2, 1), (2, 2), (3, 2), (3, 3), ], - mask=[(1, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 1), (0, 0), ], - dtype=[('A', int), ('B', int)]) - assert_equal(test, b) - assert_equal(test.mask, b.mask) - - def test_argsort(self): - "Test argsort" - a = array([1, 5, 2, 4, 3], mask=[1, 0, 0, 1, 0]) - assert_equal(np.argsort(a), argsort(a)) - - - def test_squeeze(self): - "Check squeeze" - data = masked_array([[1, 2, 3]]) - assert_equal(data.squeeze(), [1, 2, 3]) - data = masked_array([[1, 2, 3]], mask=[[1, 1, 1]]) - assert_equal(data.squeeze(), [1, 2, 3]) - assert_equal(data.squeeze()._mask, [1, 1, 1]) - data = masked_array([[1]], mask=True) - self.assertTrue(data.squeeze() is masked) - - - def test_swapaxes(self): - "Tests swapaxes on MaskedArrays." 
- x = np.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, - 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, - 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, - 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479, - 7.189, 9.645, 5.395, 4.961, 9.894, 2.893, - 7.357, 9.828, 6.272, 3.758, 6.693, 0.993]) - m = np.array([0, 1, 0, 1, 0, 0, - 1, 0, 1, 1, 0, 1, - 0, 0, 0, 1, 0, 1, - 0, 0, 0, 1, 1, 1, - 1, 0, 0, 1, 0, 0, - 0, 0, 1, 0, 1, 0]) - mX = array(x, mask=m).reshape(6, 6) - mXX = mX.reshape(3, 2, 2, 3) - # - mXswapped = mX.swapaxes(0, 1) - assert_equal(mXswapped[-1], mX[:, -1]) - - mXXswapped = mXX.swapaxes(0, 2) - assert_equal(mXXswapped.shape, (2, 2, 3, 3)) - - - def test_take(self): - "Tests take" - x = masked_array([10, 20, 30, 40], [0, 1, 0, 1]) - assert_equal(x.take([0, 0, 3]), masked_array([10, 10, 40], [0, 0, 1])) - assert_equal(x.take([0, 0, 3]), x[[0, 0, 3]]) - assert_equal(x.take([[0, 1], [0, 1]]), - masked_array([[10, 20], [10, 20]], [[0, 1], [0, 1]])) - # - x = array([[10, 20, 30], [40, 50, 60]], mask=[[0, 0, 1], [1, 0, 0, ]]) - assert_equal(x.take([0, 2], axis=1), - array([[10, 30], [40, 60]], mask=[[0, 1], [1, 0]])) - assert_equal(take(x, [0, 2], axis=1), - array([[10, 30], [40, 60]], mask=[[0, 1], [1, 0]])) - - def test_take_masked_indices(self): - "Test take w/ masked indices" - a = np.array((40, 18, 37, 9, 22)) - indices = np.arange(3)[None, :] + np.arange(5)[:, None] - mindices = array(indices, mask=(indices >= len(a))) - # No mask - test = take(a, mindices, mode='clip') - ctrl = array([[40, 18, 37], - [18, 37, 9], - [37, 9, 22], - [ 9, 22, 22], - [22, 22, 22]]) - assert_equal(test, ctrl) - # Masked indices - test = take(a, mindices) - ctrl = array([[40, 18, 37], - [18, 37, 9], - [37, 9, 22], - [ 9, 22, 40], - [22, 40, 40]]) - ctrl[3, 2] = ctrl[4, 1] = ctrl[4, 2] = masked - assert_equal(test, ctrl) - assert_equal(test.mask, ctrl.mask) - # Masked input + masked indices - a = array((40, 18, 37, 9, 22), mask=(0, 1, 0, 0, 0)) - test = take(a, mindices) - ctrl[0, 1] = ctrl[1, 
0] = masked - assert_equal(test, ctrl) - assert_equal(test.mask, ctrl.mask) - - - def test_tolist(self): - "Tests to list" - # ... on 1D - x = array(np.arange(12)) - x[[1, -2]] = masked - xlist = x.tolist() - self.assertTrue(xlist[1] is None) - self.assertTrue(xlist[-2] is None) - # ... on 2D - x.shape = (3, 4) - xlist = x.tolist() - ctrl = [[0, None, 2, 3], [4, 5, 6, 7], [8, 9, None, 11]] - assert_equal(xlist[0], [0, None, 2, 3]) - assert_equal(xlist[1], [4, 5, 6, 7]) - assert_equal(xlist[2], [8, 9, None, 11]) - assert_equal(xlist, ctrl) - # ... on structured array w/ masked records - x = array(zip([1, 2, 3], - [1.1, 2.2, 3.3], - ['one', 'two', 'thr']), - dtype=[('a', int), ('b', float), ('c', '|S8')]) - x[-1] = masked - assert_equal(x.tolist(), - [(1, 1.1, asbytes('one')), - (2, 2.2, asbytes('two')), - (None, None, None)]) - # ... on structured array w/ masked fields - a = array([(1, 2,), (3, 4)], mask=[(0, 1), (0, 0)], - dtype=[('a', int), ('b', int)]) - test = a.tolist() - assert_equal(test, [[1, None], [3, 4]]) - # ... 
on mvoid - a = a[0] - test = a.tolist() - assert_equal(test, [1, None]) - - def test_tolist_specialcase(self): - "Test mvoid.tolist: make sure we return a standard Python object" - a = array([(0, 1), (2, 3)], dtype=[('a', int), ('b', int)]) - # w/o mask: each entry is a np.void whose elements are standard Python - for entry in a: - for item in entry.tolist(): - assert(not isinstance(item, np.generic)) - # w/ mask: each entry is a ma.void whose elements should be standard Python - a.mask[0] = (0, 1) - for entry in a: - for item in entry.tolist(): - assert(not isinstance(item, np.generic)) - - - def test_toflex(self): - "Test the conversion to records" - data = arange(10) - record = data.toflex() - assert_equal(record['_data'], data._data) - assert_equal(record['_mask'], data._mask) - # - data[[0, 1, 2, -1]] = masked - record = data.toflex() - assert_equal(record['_data'], data._data) - assert_equal(record['_mask'], data._mask) - # - ndtype = [('i', int), ('s', '|S3'), ('f', float)] - data = array([(i, s, f) for (i, s, f) in zip(np.arange(10), - 'ABCDEFGHIJKLM', - np.random.rand(10))], - dtype=ndtype) - data[[0, 1, 2, -1]] = masked - record = data.toflex() - assert_equal(record['_data'], data._data) - assert_equal(record['_mask'], data._mask) - # - ndtype = np.dtype("int, (2,3)float, float") - data = array([(i, f, ff) for (i, f, ff) in zip(np.arange(10), - np.random.rand(10), - np.random.rand(10))], - dtype=ndtype) - data[[0, 1, 2, -1]] = masked - record = data.toflex() - assert_equal_records(record['_data'], data._data) - assert_equal_records(record['_mask'], data._mask) - - - def test_fromflex(self): - "Test the reconstruction of a masked_array from a record" - a = array([1, 2, 3]) - test = fromflex(a.toflex()) - assert_equal(test, a) - assert_equal(test.mask, a.mask) - # - a = array([1, 2, 3], mask=[0, 0, 1]) - test = fromflex(a.toflex()) - assert_equal(test, a) - assert_equal(test.mask, a.mask) - # - a = array([(1, 1.), (2, 2.), (3, 3.)], mask=[(1, 0), (0, 0), 
(0, 1)], - dtype=[('A', int), ('B', float)]) - test = fromflex(a.toflex()) - assert_equal(test, a) - assert_equal(test.data, a.data) - - - def test_arraymethod(self): - "Test a _arraymethod w/ n argument" - marray = masked_array([[1, 2, 3, 4, 5]], mask=[0, 0, 1, 0, 0]) - control = masked_array([[1], [2], [3], [4], [5]], - mask=[0, 0, 1, 0, 0]) - assert_equal(marray.T, control) - assert_equal(marray.transpose(), control) - # - assert_equal(MaskedArray.cumsum(marray.T, 0), control.cumsum(0)) - - -#------------------------------------------------------------------------------ - - -class TestMaskedArrayMathMethods(TestCase): - - def setUp(self): - "Base data definition." - x = np.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, - 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, - 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, - 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479, - 7.189, 9.645, 5.395, 4.961, 9.894, 2.893, - 7.357, 9.828, 6.272, 3.758, 6.693, 0.993]) - X = x.reshape(6, 6) - XX = x.reshape(3, 2, 2, 3) - - m = np.array([0, 1, 0, 1, 0, 0, - 1, 0, 1, 1, 0, 1, - 0, 0, 0, 1, 0, 1, - 0, 0, 0, 1, 1, 1, - 1, 0, 0, 1, 0, 0, - 0, 0, 1, 0, 1, 0]) - mx = array(data=x, mask=m) - mX = array(data=X, mask=m.reshape(X.shape)) - mXX = array(data=XX, mask=m.reshape(XX.shape)) - - m2 = np.array([1, 1, 0, 1, 0, 0, - 1, 1, 1, 1, 0, 1, - 0, 0, 1, 1, 0, 1, - 0, 0, 0, 1, 1, 1, - 1, 0, 0, 1, 1, 0, - 0, 0, 1, 0, 1, 1]) - m2x = array(data=x, mask=m2) - m2X = array(data=X, mask=m2.reshape(X.shape)) - m2XX = array(data=XX, mask=m2.reshape(XX.shape)) - self.d = (x, X, XX, m, mx, mX, mXX, m2x, m2X, m2XX) - - - def test_cumsumprod(self): - "Tests cumsum & cumprod on MaskedArrays." 
- (x, X, XX, m, mx, mX, mXX, m2x, m2X, m2XX) = self.d - mXcp = mX.cumsum(0) - assert_equal(mXcp._data, mX.filled(0).cumsum(0)) - mXcp = mX.cumsum(1) - assert_equal(mXcp._data, mX.filled(0).cumsum(1)) - # - mXcp = mX.cumprod(0) - assert_equal(mXcp._data, mX.filled(1).cumprod(0)) - mXcp = mX.cumprod(1) - assert_equal(mXcp._data, mX.filled(1).cumprod(1)) - - - def test_cumsumprod_with_output(self): - "Tests cumsum/cumprod w/ output" - xm = array(np.random.uniform(0, 10, 12)).reshape(3, 4) - xm[:, 0] = xm[0] = xm[-1, -1] = masked - # - for funcname in ('cumsum', 'cumprod'): - npfunc = getattr(np, funcname) - xmmeth = getattr(xm, funcname) - - # A ndarray as explicit input - output = np.empty((3, 4), dtype=float) - output.fill(-9999) - result = npfunc(xm, axis=0, out=output) - # ... the result should be the given output - self.assertTrue(result is output) - assert_equal(result, xmmeth(axis=0, out=output)) - # - output = empty((3, 4), dtype=int) - result = xmmeth(axis=0, out=output) - self.assertTrue(result is output) - - - def test_ptp(self): - "Tests ptp on MaskedArrays." 
- (x, X, XX, m, mx, mX, mXX, m2x, m2X, m2XX) = self.d - (n, m) = X.shape - assert_equal(mx.ptp(), mx.compressed().ptp()) - rows = np.zeros(n, np.float) - cols = np.zeros(m, np.float) - for k in range(m): - cols[k] = mX[:, k].compressed().ptp() - for k in range(n): - rows[k] = mX[k].compressed().ptp() - assert_equal(mX.ptp(0), cols) - assert_equal(mX.ptp(1), rows) - - - def test_sum_object(self): - "Test sum on object dtype" - a = masked_array([1, 2, 3], mask=[1, 0, 0], dtype=np.object) - assert_equal(a.sum(), 5) - a = masked_array([[1, 2, 3], [4, 5, 6]], dtype=object) - assert_equal(a.sum(axis=0), [5, 7, 9]) - - def test_prod_object(self): - "Test prod on object dtype" - a = masked_array([1, 2, 3], mask=[1, 0, 0], dtype=np.object) - assert_equal(a.prod(), 2 * 3) - a = masked_array([[1, 2, 3], [4, 5, 6]], dtype=object) - assert_equal(a.prod(axis=0), [4, 10, 18]) - - def test_meananom_object(self): - "Test mean/anom on object dtype" - a = masked_array([1, 2, 3], dtype=np.object) - assert_equal(a.mean(), 2) - assert_equal(a.anom(), [-1, 0, 1]) - - - def test_trace(self): - "Tests trace on MaskedArrays." - (x, X, XX, m, mx, mX, mXX, m2x, m2X, m2XX) = self.d - mXdiag = mX.diagonal() - assert_equal(mX.trace(), mX.diagonal().compressed().sum()) - assert_almost_equal(mX.trace(), - X.trace() - sum(mXdiag.mask * X.diagonal(), axis=0)) - - - def test_varstd(self): - "Tests var & std on MaskedArrays." 
- (x, X, XX, m, mx, mX, mXX, m2x, m2X, m2XX) = self.d - assert_almost_equal(mX.var(axis=None), mX.compressed().var()) - assert_almost_equal(mX.std(axis=None), mX.compressed().std()) - assert_almost_equal(mX.std(axis=None, ddof=1), - mX.compressed().std(ddof=1)) - assert_almost_equal(mX.var(axis=None, ddof=1), - mX.compressed().var(ddof=1)) - assert_equal(mXX.var(axis=3).shape, XX.var(axis=3).shape) - assert_equal(mX.var().shape, X.var().shape) - (mXvar0, mXvar1) = (mX.var(axis=0), mX.var(axis=1)) - assert_almost_equal(mX.var(axis=None, ddof=2), mX.compressed().var(ddof=2)) - assert_almost_equal(mX.std(axis=None, ddof=2), mX.compressed().std(ddof=2)) - for k in range(6): - assert_almost_equal(mXvar1[k], mX[k].compressed().var()) - assert_almost_equal(mXvar0[k], mX[:, k].compressed().var()) - assert_almost_equal(np.sqrt(mXvar0[k]), mX[:, k].compressed().std()) - - - def test_varstd_specialcases(self): - "Test a special case for var" - nout = np.empty(1, dtype=float) - mout = empty(1, dtype=float) - # - x = array(arange(10), mask=True) - for methodname in ('var', 'std'): - method = getattr(x, methodname) - self.assertTrue(method() is masked) - self.assertTrue(method(0) is masked) - self.assertTrue(method(-1) is masked) - # Using a masked array as explicit output - _ = method(out=mout) - self.assertTrue(mout is not masked) - assert_equal(mout.mask, True) - # Using a ndarray as explicit output - _ = method(out=nout) - self.assertTrue(np.isnan(nout)) - # - x = array(arange(10), mask=True) - x[-1] = 9 - for methodname in ('var', 'std'): - method = getattr(x, methodname) - self.assertTrue(method(ddof=1) is masked) - self.assertTrue(method(0, ddof=1) is masked) - self.assertTrue(method(-1, ddof=1) is masked) - # Using a masked array as explicit output - _ = method(out=mout, ddof=1) - self.assertTrue(mout is not masked) - assert_equal(mout.mask, True) - # Using a ndarray as explicit output - _ = method(out=nout, ddof=1) - self.assertTrue(np.isnan(nout)) - - - def 
test_varstd_ddof(self): - a = array([[1, 1, 0], [1, 1, 0]], mask=[[0, 0, 1], [0, 0, 1]]) - test = a.std(axis=0, ddof=0) - assert_equal(test.filled(0), [0, 0, 0]) - assert_equal(test.mask, [0, 0, 1]) - test = a.std(axis=0, ddof=1) - assert_equal(test.filled(0), [0, 0, 0]) - assert_equal(test.mask, [0, 0, 1]) - test = a.std(axis=0, ddof=2) - assert_equal(test.filled(0), [0, 0, 0]) - assert_equal(test.mask, [1, 1, 1]) - - - def test_diag(self): - "Test diag" - x = arange(9).reshape((3, 3)) - x[1, 1] = masked - out = np.diag(x) - assert_equal(out, [0, 4, 8]) - out = diag(x) - assert_equal(out, [0, 4, 8]) - assert_equal(out.mask, [0, 1, 0]) - out = diag(out) - control = array([[0, 0, 0], [0, 4, 0], [0, 0, 8]], - mask=[[0, 0, 0], [0, 1, 0], [0, 0, 0]]) - assert_equal(out, control) - - - def test_axis_methods_nomask(self): - "Test the combination nomask & methods w/ axis" - a = array([[1, 2, 3], [4, 5, 6]]) - # - assert_equal(a.sum(0), [5, 7, 9]) - assert_equal(a.sum(-1), [6, 15]) - assert_equal(a.sum(1), [6, 15]) - # - assert_equal(a.prod(0), [4, 10, 18]) - assert_equal(a.prod(-1), [6, 120]) - assert_equal(a.prod(1), [6, 120]) - # - assert_equal(a.min(0), [1, 2, 3]) - assert_equal(a.min(-1), [1, 4]) - assert_equal(a.min(1), [1, 4]) - # - assert_equal(a.max(0), [4, 5, 6]) - assert_equal(a.max(-1), [3, 6]) - assert_equal(a.max(1), [3, 6]) - -#------------------------------------------------------------------------------ - -class TestMaskedArrayMathMethodsComplex(TestCase): - "Test class for miscellaneous MaskedArrays methods." - def setUp(self): - "Base data definition." 
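`test_diag` above relies on `np.ma.diag` propagating the mask of masked diagonal entries; as a standalone sketch of the same behavior:

```python
import numpy as np

x = np.ma.arange(9).reshape(3, 3)
x[1, 1] = np.ma.masked
d = np.ma.diag(x)            # the masked diagonal entry stays masked
print(d)  # [0 -- 8]
```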
- x = np.array([ 8.375j, 7.545j, 8.828j, 8.5j , 1.757j, 5.928, - 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, - 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, - 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479j, - 7.189j, 9.645, 5.395, 4.961, 9.894, 2.893, - 7.357, 9.828, 6.272, 3.758, 6.693, 0.993j]) - X = x.reshape(6, 6) - XX = x.reshape(3, 2, 2, 3) - - m = np.array([0, 1, 0, 1, 0, 0, - 1, 0, 1, 1, 0, 1, - 0, 0, 0, 1, 0, 1, - 0, 0, 0, 1, 1, 1, - 1, 0, 0, 1, 0, 0, - 0, 0, 1, 0, 1, 0]) - mx = array(data=x, mask=m) - mX = array(data=X, mask=m.reshape(X.shape)) - mXX = array(data=XX, mask=m.reshape(XX.shape)) - - m2 = np.array([1, 1, 0, 1, 0, 0, - 1, 1, 1, 1, 0, 1, - 0, 0, 1, 1, 0, 1, - 0, 0, 0, 1, 1, 1, - 1, 0, 0, 1, 1, 0, - 0, 0, 1, 0, 1, 1]) - m2x = array(data=x, mask=m2) - m2X = array(data=X, mask=m2.reshape(X.shape)) - m2XX = array(data=XX, mask=m2.reshape(XX.shape)) - self.d = (x, X, XX, m, mx, mX, mXX, m2x, m2X, m2XX) - - - def test_varstd(self): - "Tests var & std on MaskedArrays." - (x, X, XX, m, mx, mX, mXX, m2x, m2X, m2XX) = self.d - assert_almost_equal(mX.var(axis=None), mX.compressed().var()) - assert_almost_equal(mX.std(axis=None), mX.compressed().std()) - assert_equal(mXX.var(axis=3).shape, XX.var(axis=3).shape) - assert_equal(mX.var().shape, X.var().shape) - (mXvar0, mXvar1) = (mX.var(axis=0), mX.var(axis=1)) - assert_almost_equal(mX.var(axis=None, ddof=2), mX.compressed().var(ddof=2)) - assert_almost_equal(mX.std(axis=None, ddof=2), mX.compressed().std(ddof=2)) - for k in range(6): - assert_almost_equal(mXvar1[k], mX[k].compressed().var()) - assert_almost_equal(mXvar0[k], mX[:, k].compressed().var()) - assert_almost_equal(np.sqrt(mXvar0[k]), mX[:, k].compressed().std()) - - -#------------------------------------------------------------------------------ - -class TestMaskedArrayFunctions(TestCase): - "Test class for miscellaneous functions." 
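The complex-valued variant of `test_varstd` checks the same invariant as the real case: `var`/`std` of a masked array agree with those of its compressed (unmasked-only) data. A minimal check with made-up values:

```python
import numpy as np

z = np.ma.array([1 + 2j, 3j, 2 + 0j], mask=[0, 1, 0])
# var/std reduce over the unmasked entries only, complex or not,
# and return real-valued results
assert np.isclose(z.var(), z.compressed().var())
assert np.isclose(z.std(), z.compressed().std())
```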
- - def setUp(self): - x = np.array([1., 1., 1., -2., pi / 2.0, 4., 5., -10., 10., 1., 2., 3.]) - y = np.array([5., 0., 3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) - a10 = 10. - m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] - m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 , 0, 1] - xm = masked_array(x, mask=m1) - ym = masked_array(y, mask=m2) - z = np.array([-.5, 0., .5, .8]) - zm = masked_array(z, mask=[0, 1, 0, 0]) - xf = np.where(m1, 1e+20, x) - xm.set_fill_value(1e+20) - self.info = (xm, ym) - - def test_masked_where_bool(self): - x = [1, 2] - y = masked_where(False, x) - assert_equal(y, [1, 2]) - assert_equal(y[1], 2) - - def test_masked_equal_wlist(self): - x = [1, 2, 3] - mx = masked_equal(x, 3) - assert_equal(mx, x) - assert_equal(mx._mask, [0, 0, 1]) - mx = masked_not_equal(x, 3) - assert_equal(mx, x) - assert_equal(mx._mask, [1, 1, 0]) - - def test_masked_equal_fill_value(self): - x = [1, 2, 3] - mx = masked_equal(x, 3) - assert_equal(mx._mask, [0, 0, 1]) - assert_equal(mx.fill_value, 3) - - def test_masked_where_condition(self): - "Tests masking functions." 
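`test_masked_equal_fill_value` above captures a small but useful detail: `masked_equal` records the comparison value as the result's `fill_value`. Standalone:

```python
import numpy as np

mx = np.ma.masked_equal([1, 2, 3], 3)
assert mx.mask.tolist() == [False, False, True]
assert mx.fill_value == 3        # the masked value doubles as the fill value
print(mx.filled())  # [1 2 3]
```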
- x = array([1., 2., 3., 4., 5.]) - x[2] = masked - assert_equal(masked_where(greater(x, 2), x), masked_greater(x, 2)) - assert_equal(masked_where(greater_equal(x, 2), x), masked_greater_equal(x, 2)) - assert_equal(masked_where(less(x, 2), x), masked_less(x, 2)) - assert_equal(masked_where(less_equal(x, 2), x), masked_less_equal(x, 2)) - assert_equal(masked_where(not_equal(x, 2), x), masked_not_equal(x, 2)) - assert_equal(masked_where(equal(x, 2), x), masked_equal(x, 2)) - assert_equal(masked_where(not_equal(x, 2), x), masked_not_equal(x, 2)) - assert_equal(masked_where([1, 1, 0, 0, 0], [1, 2, 3, 4, 5]), [99, 99, 3, 4, 5]) - - - def test_masked_where_oddities(self): - """Tests some generic features.""" - atest = ones((10, 10, 10), dtype=float) - btest = zeros(atest.shape, MaskType) - ctest = masked_where(btest, atest) - assert_equal(atest, ctest) - - - def test_masked_where_shape_constraint(self): - a = arange(10) - try: - test = masked_equal(1, a) - except IndexError: - pass - else: - raise AssertionError("Should have failed...") - test = masked_equal(a, 1) - assert_equal(test.mask, [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]) - - - def test_masked_otherfunctions(self): - assert_equal(masked_inside(range(5), 1, 3), [0, 199, 199, 199, 4]) - assert_equal(masked_outside(range(5), 1, 3), [199, 1, 2, 3, 199]) - assert_equal(masked_inside(array(range(5), mask=[1, 0, 0, 0, 0]), 1, 3).mask, [1, 1, 1, 1, 0]) - assert_equal(masked_outside(array(range(5), mask=[0, 1, 0, 0, 0]), 1, 3).mask, [1, 1, 0, 0, 1]) - assert_equal(masked_equal(array(range(5), mask=[1, 0, 0, 0, 0]), 2).mask, [1, 0, 1, 0, 0]) - assert_equal(masked_not_equal(array([2, 2, 1, 2, 1], mask=[1, 0, 0, 0, 0]), 2).mask, [1, 0, 1, 0, 1]) - - - def test_round(self): - a = array([1.23456, 2.34567, 3.45678, 4.56789, 5.67890], - mask=[0, 1, 0, 0, 0]) - assert_equal(a.round(), [1., 2., 3., 5., 6.]) - assert_equal(a.round(1), [1.2, 2.3, 3.5, 4.6, 5.7]) - assert_equal(a.round(3), [1.235, 2.346, 3.457, 4.568, 5.679]) - b = 
empty_like(a) - a.round(out=b) - assert_equal(b, [1., 2., 3., 5., 6.]) - - x = array([1., 2., 3., 4., 5.]) - c = array([1, 1, 1, 0, 0]) - x[2] = masked - z = where(c, x, -x) - assert_equal(z, [1., 2., 0., -4., -5]) - c[0] = masked - z = where(c, x, -x) - assert_equal(z, [1., 2., 0., -4., -5]) - assert z[0] is masked - assert z[1] is not masked - assert z[2] is masked - - - def test_round_with_output(self): - "Testing round with an explicit output" - - xm = array(np.random.uniform(0, 10, 12)).reshape(3, 4) - xm[:, 0] = xm[0] = xm[-1, -1] = masked - - # A ndarray as explicit input - output = np.empty((3, 4), dtype=float) - output.fill(-9999) - result = np.round(xm, decimals=2, out=output) - # ... the result should be the given output - self.assertTrue(result is output) - assert_equal(result, xm.round(decimals=2, out=output)) - # - output = empty((3, 4), dtype=float) - result = xm.round(decimals=2, out=output) - self.assertTrue(result is output) - - - def test_identity(self): - a = identity(5) - self.assertTrue(isinstance(a, MaskedArray)) - assert_equal(a, np.identity(5)) - - - def test_power(self): - x = -1.1 - assert_almost_equal(power(x, 2.), 1.21) - self.assertTrue(power(x, masked) is masked) - x = array([-1.1, -1.1, 1.1, 1.1, 0.]) - b = array([0.5, 2., 0.5, 2., -1.], mask=[0, 0, 0, 0, 1]) - y = power(x, b) - assert_almost_equal(y, [0, 1.21, 1.04880884817, 1.21, 0.]) - assert_equal(y._mask, [1, 0, 0, 0, 1]) - b.mask = nomask - y = power(x, b) - assert_equal(y._mask, [1, 0, 0, 0, 1]) - z = x ** b - assert_equal(z._mask, y._mask) - assert_almost_equal(z, y) - assert_almost_equal(z._data, y._data) - x **= b - assert_equal(x._mask, y._mask) - assert_almost_equal(x, y) - assert_almost_equal(x._data, y._data) - - - def test_where(self): - "Test the where function" - x = np.array([1., 1., 1., -2., pi / 2.0, 4., 5., -10., 10., 1., 2., 3.]) - y = np.array([5., 0., 3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) - a10 = 10. 
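`test_power` above shows that `ma.power` masks results that would be invalid or non-finite over the reals instead of returning nan/inf, e.g. a negative base with a fractional exponent, or zero raised to a negative power. A sketch with values not taken from the fixtures:

```python
import numpy as np

x = np.ma.array([-1.1, 1.1, 0.])
y = np.ma.power(x, [0.5, 0.5, -1.])
# (-1.1)**0.5 has no real value and 0.**-1. diverges, so both come
# back masked rather than as nan/inf
assert y.mask.tolist() == [True, False, True]
```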
- m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] - m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 , 0, 1] - xm = masked_array(x, mask=m1) - ym = masked_array(y, mask=m2) - z = np.array([-.5, 0., .5, .8]) - zm = masked_array(z, mask=[0, 1, 0, 0]) - xf = np.where(m1, 1e+20, x) - xm.set_fill_value(1e+20) - # - d = where(xm > 2, xm, -9) - assert_equal(d, [-9., -9., -9., -9., -9., 4., -9., -9., 10., -9., -9., 3.]) - assert_equal(d._mask, xm._mask) - d = where(xm > 2, -9, ym) - assert_equal(d, [5., 0., 3., 2., -1., -9., -9., -10., -9., 1., 0., -9.]) - assert_equal(d._mask, [1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0]) - d = where(xm > 2, xm, masked) - assert_equal(d, [-9., -9., -9., -9., -9., 4., -9., -9., 10., -9., -9., 3.]) - tmp = xm._mask.copy() - tmp[(xm <= 2).filled(True)] = True - assert_equal(d._mask, tmp) - # - ixm = xm.astype(int) - d = where(ixm > 2, ixm, masked) - assert_equal(d, [-9, -9, -9, -9, -9, 4, -9, -9, 10, -9, -9, 3]) - assert_equal(d.dtype, ixm.dtype) - - - def test_where_with_masked_choice(self): - x = arange(10) - x[3] = masked - c = x >= 8 - # Set False to masked - z = where(c , x, masked) - assert z.dtype is x.dtype - assert z[3] is masked - assert z[4] is masked - assert z[7] is masked - assert z[8] is not masked - assert z[9] is not masked - assert_equal(x, z) - # Set True to masked - z = where(c , masked, x) - assert z.dtype is x.dtype - assert z[3] is masked - assert z[4] is not masked - assert z[7] is not masked - assert z[8] is masked - assert z[9] is masked - - def test_where_with_masked_condition(self): - x = array([1., 2., 3., 4., 5.]) - c = array([1, 1, 1, 0, 0]) - x[2] = masked - z = where(c, x, -x) - assert_equal(z, [1., 2., 0., -4., -5]) - c[0] = masked - z = where(c, x, -x) - assert_equal(z, [1., 2., 0., -4., -5]) - assert z[0] is masked - assert z[1] is not masked - assert z[2] is masked - # - x = arange(1, 6) - x[-1] = masked - y = arange(1, 6) * 10 - y[2] = masked - c = array([1, 1, 1, 0, 0], mask=[1, 0, 0, 0, 0]) - cm = c.filled(1) - z = where(c, x, y) 
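The `where` tests above pass the `masked` constant as one of the choices, which masks every position where that branch is selected:

```python
import numpy as np

x = np.ma.array([1., 2., 3., 4.])
z = np.ma.where(x > 2, x, np.ma.masked)   # entries failing the condition become masked
print(z)  # [-- -- 3.0 4.0]
```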
- zm = where(cm, x, y) - assert_equal(z, zm) - assert getmask(zm) is nomask - assert_equal(zm, [1, 2, 3, 40, 50]) - z = where(c, masked, 1) - assert_equal(z, [99, 99, 99, 1, 1]) - z = where(c, 1, masked) - assert_equal(z, [99, 1, 1, 99, 99]) - - def test_where_type(self): - "Test the type conservation with where" - x = np.arange(4, dtype=np.int32) - y = np.arange(4, dtype=np.float32) * 2.2 - test = where(x > 1.5, y, x).dtype - control = np.find_common_type([np.int32, np.float32], []) - assert_equal(test, control) - - - def test_choose(self): - "Test choose" - choices = [[0, 1, 2, 3], [10, 11, 12, 13], - [20, 21, 22, 23], [30, 31, 32, 33]] - chosen = choose([2, 3, 1, 0], choices) - assert_equal(chosen, array([20, 31, 12, 3])) - chosen = choose([2, 4, 1, 0], choices, mode='clip') - assert_equal(chosen, array([20, 31, 12, 3])) - chosen = choose([2, 4, 1, 0], choices, mode='wrap') - assert_equal(chosen, array([20, 1, 12, 3])) - # Check with some masked indices - indices_ = array([2, 4, 1, 0], mask=[1, 0, 0, 1]) - chosen = choose(indices_, choices, mode='wrap') - assert_equal(chosen, array([99, 1, 12, 99])) - assert_equal(chosen.mask, [1, 0, 0, 1]) - # Check with some masked choices - choices = array(choices, mask=[[0, 0, 0, 1], [1, 1, 0, 1], - [1, 0, 0, 0], [0, 0, 0, 0]]) - indices_ = [2, 3, 1, 0] - chosen = choose(indices_, choices, mode='wrap') - assert_equal(chosen, array([20, 31, 12, 3])) - assert_equal(chosen.mask, [1, 0, 0, 1]) - - - def test_choose_with_out(self): - "Test choose with an explicit out keyword" - choices = [[0, 1, 2, 3], [10, 11, 12, 13], - [20, 21, 22, 23], [30, 31, 32, 33]] - store = empty(4, dtype=int) - chosen = choose([2, 3, 1, 0], choices, out=store) - assert_equal(store, array([20, 31, 12, 3])) - self.assertTrue(store is chosen) - # Check with some masked indices + out - store = empty(4, dtype=int) - indices_ = array([2, 3, 1, 0], mask=[1, 0, 0, 1]) - chosen = choose(indices_, choices, mode='wrap', out=store) - assert_equal(store, array([99, 
31, 12, 99])) - assert_equal(store.mask, [1, 0, 0, 1]) - # Check with some masked choices + out in a ndarray! - choices = array(choices, mask=[[0, 0, 0, 1], [1, 1, 0, 1], - [1, 0, 0, 0], [0, 0, 0, 0]]) - indices_ = [2, 3, 1, 0] - store = empty(4, dtype=int).view(ndarray) - chosen = choose(indices_, choices, mode='wrap', out=store) - assert_equal(store, array([999999, 31, 12, 999999])) - - - def test_reshape(self): - a = arange(10) - a[0] = masked - # Try the default - b = a.reshape((5, 2)) - assert_equal(b.shape, (5, 2)) - self.assertTrue(b.flags['C']) - # Try w/ arguments as list instead of tuple - b = a.reshape(5, 2) - assert_equal(b.shape, (5, 2)) - self.assertTrue(b.flags['C']) - # Try w/ order - b = a.reshape((5, 2), order='F') - assert_equal(b.shape, (5, 2)) - self.assertTrue(b.flags['F']) - # Try w/ order - b = a.reshape(5, 2, order='F') - assert_equal(b.shape, (5, 2)) - self.assertTrue(b.flags['F']) - # - c = np.reshape(a, (2, 5)) - self.assertTrue(isinstance(c, MaskedArray)) - assert_equal(c.shape, (2, 5)) - self.assertTrue(c[0, 0] is masked) - self.assertTrue(c.flags['C']) - - - def test_make_mask_descr(self): - "Test make_mask_descr" - # Flexible - ntype = [('a', np.float), ('b', np.float)] - test = make_mask_descr(ntype) - assert_equal(test, [('a', np.bool), ('b', np.bool)]) - # Standard w/ shape - ntype = (np.float, 2) - test = make_mask_descr(ntype) - assert_equal(test, (np.bool, 2)) - # Standard standard - ntype = np.float - test = make_mask_descr(ntype) - assert_equal(test, np.dtype(np.bool)) - # Nested - ntype = [('a', np.float), ('b', [('ba', np.float), ('bb', np.float)])] - test = make_mask_descr(ntype) - control = np.dtype([('a', 'b1'), ('b', [('ba', 'b1'), ('bb', 'b1')])]) - assert_equal(test, control) - # Named + shape - ntype = [('a', (np.float, 2))] - test = make_mask_descr(ntype) - assert_equal(test, np.dtype([('a', (np.bool, 2))])) - # 2 names - ntype = [(('A', 'a'), float)] - test = make_mask_descr(ntype) - assert_equal(test, 
np.dtype([(('A', 'a'), bool)])) - - - def test_make_mask(self): - "Test make_mask" - # w/ a list as an input - mask = [0, 1] - test = make_mask(mask) - assert_equal(test.dtype, MaskType) - assert_equal(test, [0, 1]) - # w/ a ndarray as an input - mask = np.array([0, 1], dtype=np.bool) - test = make_mask(mask) - assert_equal(test.dtype, MaskType) - assert_equal(test, [0, 1]) - # w/ a flexible-type ndarray as an input - use default - mdtype = [('a', np.bool), ('b', np.bool)] - mask = np.array([(0, 0), (0, 1)], dtype=mdtype) - test = make_mask(mask) - assert_equal(test.dtype, MaskType) - assert_equal(test, [1, 1]) - # w/ a flexible-type ndarray as an input - use input dtype - mdtype = [('a', np.bool), ('b', np.bool)] - mask = np.array([(0, 0), (0, 1)], dtype=mdtype) - test = make_mask(mask, dtype=mask.dtype) - assert_equal(test.dtype, mdtype) - assert_equal(test, mask) - # w/ a flexible-type ndarray as an input - use input dtype - mdtype = [('a', np.float), ('b', np.float)] - bdtype = [('a', np.bool), ('b', np.bool)] - mask = np.array([(0, 0), (0, 1)], dtype=mdtype) - test = make_mask(mask, dtype=mask.dtype) - assert_equal(test.dtype, bdtype) - assert_equal(test, np.array([(0, 0), (0, 1)], dtype=bdtype)) - - - def test_mask_or(self): - # Initialize - mtype = [('a', np.bool), ('b', np.bool)] - mask = np.array([(0, 0), (0, 1), (1, 0), (0, 0)], dtype=mtype) - # Test using nomask as input - test = mask_or(mask, nomask) - assert_equal(test, mask) - test = mask_or(nomask, mask) - assert_equal(test, mask) - # Using False as input - test = mask_or(mask, False) - assert_equal(test, mask) - # Using True as input. 
Won't work, but keep it for the kicks - # test = mask_or(mask, True) - # control = np.array([(1, 1), (1, 1), (1, 1), (1, 1)], dtype=mtype) - # assert_equal(test, control) - # Using another array w/ the same dtype - other = np.array([(0, 1), (0, 1), (0, 1), (0, 1)], dtype=mtype) - test = mask_or(mask, other) - control = np.array([(0, 1), (0, 1), (1, 1), (0, 1)], dtype=mtype) - assert_equal(test, control) - # Using another array w/ a different dtype - othertype = [('A', np.bool), ('B', np.bool)] - other = np.array([(0, 1), (0, 1), (0, 1), (0, 1)], dtype=othertype) - try: - test = mask_or(mask, other) - except ValueError: - pass - # Using nested arrays - dtype = [('a', np.bool), ('b', [('ba', np.bool), ('bb', np.bool)])] - amask = np.array([(0, (1, 0)), (0, (1, 0))], dtype=dtype) - bmask = np.array([(1, (0, 1)), (0, (0, 0))], dtype=dtype) - cntrl = np.array([(1, (1, 1)), (0, (1, 0))], dtype=dtype) - assert_equal(mask_or(amask, bmask), cntrl) - - - def test_flatten_mask(self): - "Tests flatten_mask" - # Standard dtype - mask = np.array([0, 0, 1], dtype=np.bool) - assert_equal(flatten_mask(mask), mask) - # Flexible dtype - mask = np.array([(0, 0), (0, 1)], dtype=[('a', bool), ('b', bool)]) - test = flatten_mask(mask) - control = np.array([0, 0, 0, 1], dtype=bool) - assert_equal(test, control) - - mdtype = [('a', bool), ('b', [('ba', bool), ('bb', bool)])] - data = [(0, (0, 0)), (0, (0, 1))] - mask = np.array(data, dtype=mdtype) - test = flatten_mask(mask) - control = np.array([ 0, 0, 0, 0, 0, 1], dtype=bool) - assert_equal(test, control) - - - def test_on_ndarray(self): - "Test functions on ndarrays" - a = np.array([1, 2, 3, 4]) - m = array(a, mask=False) - test = anom(a) - assert_equal(test, m.anom()) - test = reshape(a, (2, 2)) - assert_equal(test, m.reshape(2, 2)) - -#------------------------------------------------------------------------------ - -class TestMaskedFields(TestCase): - # - def setUp(self): - ilist = [1, 2, 3, 4, 5] - flist = [1.1, 2.2, 3.3, 4.4, 
5.5] - slist = ['one', 'two', 'three', 'four', 'five'] - ddtype = [('a', int), ('b', float), ('c', '|S8')] - mdtype = [('a', bool), ('b', bool), ('c', bool)] - mask = [0, 1, 0, 0, 1] - base = array(zip(ilist, flist, slist), mask=mask, dtype=ddtype) - self.data = dict(base=base, mask=mask, ddtype=ddtype, mdtype=mdtype) - - def test_set_records_masks(self): - base = self.data['base'] - mdtype = self.data['mdtype'] - # Set w/ nomask or masked - base.mask = nomask - assert_equal_records(base._mask, np.zeros(base.shape, dtype=mdtype)) - base.mask = masked - assert_equal_records(base._mask, np.ones(base.shape, dtype=mdtype)) - # Set w/ simple boolean - base.mask = False - assert_equal_records(base._mask, np.zeros(base.shape, dtype=mdtype)) - base.mask = True - assert_equal_records(base._mask, np.ones(base.shape, dtype=mdtype)) - # Set w/ list - base.mask = [0, 0, 0, 1, 1] - assert_equal_records(base._mask, - np.array([(x, x, x) for x in [0, 0, 0, 1, 1]], - dtype=mdtype)) - - def test_set_record_element(self): - "Check setting an element of a record" - base = self.data['base'] - (base_a, base_b, base_c) = (base['a'], base['b'], base['c']) - base[0] = (pi, pi, 'pi') - - assert_equal(base_a.dtype, int) - assert_equal(base_a._data, [3, 2, 3, 4, 5]) - - assert_equal(base_b.dtype, float) - assert_equal(base_b._data, [pi, 2.2, 3.3, 4.4, 5.5]) - - assert_equal(base_c.dtype, '|S8') - assert_equal(base_c._data, - asbytes_nested(['pi', 'two', 'three', 'four', 'five'])) - - def test_set_record_slice(self): - base = self.data['base'] - (base_a, base_b, base_c) = (base['a'], base['b'], base['c']) - base[:3] = (pi, pi, 'pi') - - assert_equal(base_a.dtype, int) - assert_equal(base_a._data, [3, 3, 3, 4, 5]) - - assert_equal(base_b.dtype, float) - assert_equal(base_b._data, [pi, pi, pi, 4.4, 5.5]) - - assert_equal(base_c.dtype, '|S8') - assert_equal(base_c._data, - asbytes_nested(['pi', 'pi', 'pi', 'four', 'five'])) - - def test_mask_element(self): - "Check record access" - base = 
self.data['base'] - (base_a, base_b, base_c) = (base['a'], base['b'], base['c']) - base[0] = masked - # - for n in ('a', 'b', 'c'): - assert_equal(base[n].mask, [1, 1, 0, 0, 1]) - assert_equal(base[n]._data, base._data[n]) - # - def test_getmaskarray(self): - "Test getmaskarray on flexible dtype" - ndtype = [('a', int), ('b', float)] - test = empty(3, dtype=ndtype) - assert_equal(getmaskarray(test), - np.array([(0, 0) , (0, 0), (0, 0)], - dtype=[('a', '|b1'), ('b', '|b1')])) - test[:] = masked - assert_equal(getmaskarray(test), - np.array([(1, 1) , (1, 1), (1, 1)], - dtype=[('a', '|b1'), ('b', '|b1')])) - # - def test_view(self): - "Test view w/ flexible dtype" - iterator = zip(np.arange(10), np.random.rand(10)) - data = np.array(iterator) - a = array(iterator, dtype=[('a', float), ('b', float)]) - a.mask[0] = (1, 0) - controlmask = np.array([1] + 19 * [0], dtype=bool) - # Transform globally to simple dtype - test = a.view(float) - assert_equal(test, data.ravel()) - assert_equal(test.mask, controlmask) - # Transform globally to dty - test = a.view((float, 2)) - assert_equal(test, data) - assert_equal(test.mask, controlmask.reshape(-1, 2)) - # - test = a.view((float, 2), np.matrix) - assert_equal(test, data) - self.assertTrue(isinstance(test, np.matrix)) - # - def test_getitem(self): - ndtype = [('a', float), ('b', float)] - a = array(zip(np.random.rand(10), np.arange(10)), dtype=ndtype) - a.mask = np.array(zip([0, 0, 0, 0, 0, 0, 0, 0, 1, 1], - [1, 0, 0, 0, 0, 0, 0, 0, 1, 0]), - dtype=[('a', bool), ('b', bool)]) - # No mask - self.assertTrue(isinstance(a[1], np.void)) - # One element masked - self.assertTrue(isinstance(a[0], MaskedArray)) - assert_equal_records(a[0]._data, a._data[0]) - assert_equal_records(a[0]._mask, a._mask[0]) - # All element masked - self.assertTrue(isinstance(a[-2], MaskedArray)) - assert_equal_records(a[-2]._data, a._data[-2]) - assert_equal_records(a[-2]._mask, a._mask[-2]) - 
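`test_mask_element` and `test_getitem` above exercise per-field masks on flexible (structured) masked arrays: masking one field of a record masks only that field's column. A small sketch (the field names and values here are illustrative; note that the behavior checked by `test_getitem`, where a fully unmasked record comes back as a plain `np.void`, is specific to this removed version):

```python
import numpy as np

a = np.ma.array([(1, 1.0), (2, 2.0)], dtype=[('a', int), ('b', float)])
a.mask[0] = (True, False)            # mask only field 'a' of the first record
assert a['a'].mask.tolist() == [True, False]
assert a['b'].mask.tolist() == [False, False]
```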
-#------------------------------------------------------------------------------ - -class TestMaskedView(TestCase): - # - def setUp(self): - iterator = zip(np.arange(10), np.random.rand(10)) - data = np.array(iterator) - a = array(iterator, dtype=[('a', float), ('b', float)]) - a.mask[0] = (1, 0) - controlmask = np.array([1] + 19 * [0], dtype=bool) - self.data = (data, a, controlmask) - # - def test_view_to_nothing(self): - (data, a, controlmask) = self.data - test = a.view() - self.assertTrue(isinstance(test, MaskedArray)) - assert_equal(test._data, a._data) - assert_equal(test._mask, a._mask) - - # - def test_view_to_type(self): - (data, a, controlmask) = self.data - test = a.view(np.ndarray) - self.assertTrue(not isinstance(test, MaskedArray)) - assert_equal(test, a._data) - assert_equal_records(test, data.view(a.dtype).squeeze()) - # - def test_view_to_simple_dtype(self): - (data, a, controlmask) = self.data - # View globally - test = a.view(float) - self.assertTrue(isinstance(test, MaskedArray)) - assert_equal(test, data.ravel()) - assert_equal(test.mask, controlmask) - # - def test_view_to_flexible_dtype(self): - (data, a, controlmask) = self.data - # - test = a.view([('A', float), ('B', float)]) - assert_equal(test.mask.dtype.names, ('A', 'B')) - assert_equal(test['A'], a['a']) - assert_equal(test['B'], a['b']) - # - test = a[0].view([('A', float), ('B', float)]) - self.assertTrue(isinstance(test, MaskedArray)) - assert_equal(test.mask.dtype.names, ('A', 'B')) - assert_equal(test['A'], a['a'][0]) - assert_equal(test['B'], a['b'][0]) - # - test = a[-1].view([('A', float), ('B', float)]) - self.assertTrue(not isinstance(test, MaskedArray)) - assert_equal(test.dtype.names, ('A', 'B')) - assert_equal(test['A'], a['a'][-1]) - assert_equal(test['B'], a['b'][-1]) - - # - def test_view_to_subdtype(self): - (data, a, controlmask) = self.data - # View globally - test = a.view((float, 2)) - self.assertTrue(isinstance(test, MaskedArray)) - assert_equal(test, data) - 
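`test_view_to_type` above pins down that viewing a masked array as `np.ndarray` strips the masked-array machinery: the data buffer survives, the mask does not.

```python
import numpy as np

a = np.ma.array([1., 2., 3.], mask=[0, 1, 0])
v = a.view(np.ndarray)       # plain view onto the same buffer: no mask
assert not isinstance(v, np.ma.MaskedArray)
assert v.tolist() == [1., 2., 3.]   # the masked value is still in the data
```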
assert_equal(test.mask, controlmask.reshape(-1, 2)) - # View on 1 masked element - test = a[0].view((float, 2)) - self.assertTrue(isinstance(test, MaskedArray)) - assert_equal(test, data[0]) - assert_equal(test.mask, (1, 0)) - # View on 1 unmasked element - test = a[-1].view((float, 2)) - self.assertTrue(not isinstance(test, MaskedArray)) - assert_equal(test, data[-1]) - # - def test_view_to_dtype_and_type(self): - (data, a, controlmask) = self.data - # - test = a.view((float, 2), np.matrix) - assert_equal(test, data) - self.assertTrue(isinstance(test, np.matrix)) - self.assertTrue(not isinstance(test, MaskedArray)) - -def test_masked_array(): - a = np.ma.array([0, 1, 2, 3], mask=[0, 0, 1, 0]) - assert_equal(np.argwhere(a), [[1], [3]]) - -############################################################################### -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/ma/tests/test_extras.py b/pythonPackages/numpy/numpy/ma/tests/test_extras.py deleted file mode 100755 index 359e074463..0000000000 --- a/pythonPackages/numpy/numpy/ma/tests/test_extras.py +++ /dev/null @@ -1,825 +0,0 @@ -# pylint: disable-msg=W0611, W0612, W0511 -"""Tests suite for MaskedArray. 
-Adapted from the original test_ma by Pierre Gerard-Marchant - -:author: Pierre Gerard-Marchant -:contact: pierregm_at_uga_dot_edu -:version: $Id: test_extras.py 3473 2007-10-29 15:18:13Z jarrod.millman $ -""" -__author__ = "Pierre GF Gerard-Marchant ($Author: jarrod.millman $)" -__version__ = '1.0' -__revision__ = "$Revision: 3473 $" -__date__ = '$Date: 2007-10-29 17:18:13 +0200 (Mon, 29 Oct 2007) $' - -import numpy as np -from numpy.testing import TestCase, run_module_suite -from numpy.ma.testutils import * -from numpy.ma.core import * -from numpy.ma.extras import * - - -class TestGeneric(TestCase): - # - def test_masked_all(self): - "Tests masked_all" - # Standard dtype - test = masked_all((2,), dtype=float) - control = array([1, 1], mask=[1, 1], dtype=float) - assert_equal(test, control) - # Flexible dtype - dt = np.dtype({'names': ['a', 'b'], 'formats': ['f', 'f']}) - test = masked_all((2,), dtype=dt) - control = array([(0, 0), (0, 0)], mask=[(1, 1), (1, 1)], dtype=dt) - assert_equal(test, control) - test = masked_all((2, 2), dtype=dt) - control = array([[(0, 0), (0, 0)], [(0, 0), (0, 0)]], - mask=[[(1, 1), (1, 1)], [(1, 1), (1, 1)]], - dtype=dt) - assert_equal(test, control) - # Nested dtype - dt = np.dtype([('a', 'f'), ('b', [('ba', 'f'), ('bb', 'f')])]) - test = masked_all((2,), dtype=dt) - control = array([(1, (1, 1)), (1, (1, 1))], - mask=[(1, (1, 1)), (1, (1, 1))], dtype=dt) - assert_equal(test, control) - test = masked_all((2,), dtype=dt) - control = array([(1, (1, 1)), (1, (1, 1))], - mask=[(1, (1, 1)), (1, (1, 1))], dtype=dt) - assert_equal(test, control) - test = masked_all((1, 1), dtype=dt) - control = array([[(1, (1, 1))]], mask=[[(1, (1, 1))]], dtype=dt) - assert_equal(test, control) - - - def test_masked_all_like(self): - "Tests masked_all" - # Standard dtype - base = array([1, 2], dtype=float) - test = masked_all_like(base) - control = array([1, 1], mask=[1, 1], dtype=float) - assert_equal(test, control) - # Flexible dtype - dt = 
np.dtype({'names': ['a', 'b'], 'formats': ['f', 'f']}) - base = array([(0, 0), (0, 0)], mask=[(1, 1), (1, 1)], dtype=dt) - test = masked_all_like(base) - control = array([(10, 10), (10, 10)], mask=[(1, 1), (1, 1)], dtype=dt) - assert_equal(test, control) - # Nested dtype - dt = np.dtype([('a', 'f'), ('b', [('ba', 'f'), ('bb', 'f')])]) - control = array([(1, (1, 1)), (1, (1, 1))], - mask=[(1, (1, 1)), (1, (1, 1))], dtype=dt) - test = masked_all_like(control) - assert_equal(test, control) - - def test_clump_masked(self): - "Test clump_masked" - a = masked_array(np.arange(10)) - a[[0, 1, 2, 6, 8, 9]] = masked - # - test = clump_masked(a) - control = [slice(0, 3), slice(6, 7), slice(8, 10)] - assert_equal(test, control) - - def test_clump_unmasked(self): - "Test clump_unmasked" - a = masked_array(np.arange(10)) - a[[0, 1, 2, 6, 8, 9]] = masked - test = clump_unmasked(a) - control = [slice(3, 6), slice(7, 8), ] - assert_equal(test, control) - - - -class TestAverage(TestCase): - "Several tests of average. Why so many ? Good point..." - def test_testAverage1(self): - "Test of average." - ott = array([0., 1., 2., 3.], mask=[True, False, False, False]) - assert_equal(2.0, average(ott, axis=0)) - assert_equal(2.0, average(ott, weights=[1., 1., 2., 1.])) - result, wts = average(ott, weights=[1., 1., 2., 1.], returned=1) - assert_equal(2.0, result) - self.assertTrue(wts == 4.0) - ott[:] = masked - assert_equal(average(ott, axis=0).mask, [True]) - ott = array([0., 1., 2., 3.], mask=[True, False, False, False]) - ott = ott.reshape(2, 2) - ott[:, 1] = masked - assert_equal(average(ott, axis=0), [2.0, 0.0]) - assert_equal(average(ott, axis=1).mask[0], [True]) - assert_equal([2., 0.], average(ott, axis=0)) - result, wts = average(ott, axis=0, returned=1) - assert_equal(wts, [1., 0.]) - - def test_testAverage2(self): - "More tests of average." 
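`test_clump_masked`/`test_clump_unmasked` above check that the clump helpers return slices covering the masked and unmasked runs respectively; between them the two lists tile the whole array. Reusing the suite's own fixture:

```python
import numpy as np

a = np.ma.masked_array(np.arange(10))
a[[0, 1, 2, 6, 8, 9]] = np.ma.masked
masked_runs = np.ma.clump_masked(a)      # slices over the masked stretches
unmasked_runs = np.ma.clump_unmasked(a)  # slices over the unmasked stretches
```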
- w1 = [0, 1, 1, 1, 1, 0] - w2 = [[0, 1, 1, 1, 1, 0], [1, 0, 0, 0, 0, 1]] - x = arange(6, dtype=float_) - assert_equal(average(x, axis=0), 2.5) - assert_equal(average(x, axis=0, weights=w1), 2.5) - y = array([arange(6, dtype=float_), 2.0 * arange(6)]) - assert_equal(average(y, None), np.add.reduce(np.arange(6)) * 3. / 12.) - assert_equal(average(y, axis=0), np.arange(6) * 3. / 2.) - assert_equal(average(y, axis=1), - [average(x, axis=0), average(x, axis=0) * 2.0]) - assert_equal(average(y, None, weights=w2), 20. / 6.) - assert_equal(average(y, axis=0, weights=w2), - [0., 1., 2., 3., 4., 10.]) - assert_equal(average(y, axis=1), - [average(x, axis=0), average(x, axis=0) * 2.0]) - m1 = zeros(6) - m2 = [0, 0, 1, 1, 0, 0] - m3 = [[0, 0, 1, 1, 0, 0], [0, 1, 1, 1, 1, 0]] - m4 = ones(6) - m5 = [0, 1, 1, 1, 1, 1] - assert_equal(average(masked_array(x, m1), axis=0), 2.5) - assert_equal(average(masked_array(x, m2), axis=0), 2.5) - assert_equal(average(masked_array(x, m4), axis=0).mask, [True]) - assert_equal(average(masked_array(x, m5), axis=0), 0.0) - assert_equal(count(average(masked_array(x, m4), axis=0)), 0) - z = masked_array(y, m3) - assert_equal(average(z, None), 20. / 6.) - assert_equal(average(z, axis=0), [0., 1., 99., 99., 4.0, 7.5]) - assert_equal(average(z, axis=1), [2.5, 5.0]) - assert_equal(average(z, axis=0, weights=w2), - [0., 1., 99., 99., 4.0, 10.0]) - - def test_testAverage3(self): - "Yet more tests of average!" 
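The average tests above verify that masked entries drop out of both the numerator and the total weight; with `returned=True` (the old integer `returned=1` spelling also works) the effective weight sum comes back alongside the mean. Using the same values as `test_testAverage1`:

```python
import numpy as np

x = np.ma.array([0., 1., 2., 3.], mask=[True, False, False, False])
assert np.ma.average(x) == 2.0                       # mean of 1, 2, 3
avg, wsum = np.ma.average(x, weights=[1., 1., 2., 1.], returned=True)
assert avg == 2.0 and wsum == 4.0                    # masked entry's weight dropped
```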
- a = arange(6) - b = arange(6) * 3 - r1, w1 = average([[a, b], [b, a]], axis=1, returned=1) - assert_equal(shape(r1) , shape(w1)) - assert_equal(r1.shape , w1.shape) - r2, w2 = average(ones((2, 2, 3)), axis=0, weights=[3, 1], returned=1) - assert_equal(shape(w2) , shape(r2)) - r2, w2 = average(ones((2, 2, 3)), returned=1) - assert_equal(shape(w2) , shape(r2)) - r2, w2 = average(ones((2, 2, 3)), weights=ones((2, 2, 3)), returned=1) - assert_equal(shape(w2), shape(r2)) - a2d = array([[1, 2], [0, 4]], float) - a2dm = masked_array(a2d, [[False, False], [True, False]]) - a2da = average(a2d, axis=0) - assert_equal(a2da, [0.5, 3.0]) - a2dma = average(a2dm, axis=0) - assert_equal(a2dma, [1.0, 3.0]) - a2dma = average(a2dm, axis=None) - assert_equal(a2dma, 7. / 3.) - a2dma = average(a2dm, axis=1) - assert_equal(a2dma, [1.5, 4.0]) - - def test_onintegers_with_mask(self): - "Test average on integers with mask" - a = average(array([1, 2])) - assert_equal(a, 1.5) - a = average(array([1, 2, 3, 4], mask=[False, False, True, True])) - assert_equal(a, 1.5) - - -class TestConcatenator(TestCase): - """ - Tests for mr_, the equivalent of r_ for masked arrays. - """ - - def test_1d(self): - "Tests mr_ on 1D arrays." - assert_array_equal(mr_[1, 2, 3, 4, 5, 6], array([1, 2, 3, 4, 5, 6])) - b = ones(5) - m = [1, 0, 0, 0, 0] - d = masked_array(b, mask=m) - c = mr_[d, 0, 0, d] - self.assertTrue(isinstance(c, MaskedArray) or isinstance(c, core.MaskedArray)) - assert_array_equal(c, [1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1]) - assert_array_equal(c.mask, mr_[m, 0, 0, m]) - - def test_2d(self): - "Tests mr_ on 2D arrays." 
- a_1 = rand(5, 5) - a_2 = rand(5, 5) - m_1 = np.round_(rand(5, 5), 0) - m_2 = np.round_(rand(5, 5), 0) - b_1 = masked_array(a_1, mask=m_1) - b_2 = masked_array(a_2, mask=m_2) - d = mr_['1', b_1, b_2] # append columns - self.assertTrue(d.shape == (5, 10)) - assert_array_equal(d[:, :5], b_1) - assert_array_equal(d[:, 5:], b_2) - assert_array_equal(d.mask, np.r_['1', m_1, m_2]) - d = mr_[b_1, b_2] - self.assertTrue(d.shape == (10, 5)) - assert_array_equal(d[:5, :], b_1) - assert_array_equal(d[5:, :], b_2) - assert_array_equal(d.mask, np.r_[m_1, m_2]) - - - -class TestNotMasked(TestCase): - """ - Tests notmasked_edges and notmasked_contiguous. - """ - - def test_edges(self): - "Tests unmasked_edges" - data = masked_array(np.arange(25).reshape(5, 5), - mask=[[0, 0, 1, 0, 0], - [0, 0, 0, 1, 1], - [1, 1, 0, 0, 0], - [0, 0, 0, 0, 0], - [1, 1, 1, 0, 0]],) - test = notmasked_edges(data, None) - assert_equal(test, [0, 24]) - test = notmasked_edges(data, 0) - assert_equal(test[0], [(0, 0, 1, 0, 0), (0, 1, 2, 3, 4)]) - assert_equal(test[1], [(3, 3, 3, 4, 4), (0, 1, 2, 3, 4)]) - test = notmasked_edges(data, 1) - assert_equal(test[0], [(0, 1, 2, 3, 4), (0, 0, 2, 0, 3)]) - assert_equal(test[1], [(0, 1, 2, 3, 4), (4, 2, 4, 4, 4)]) - # - test = notmasked_edges(data.data, None) - assert_equal(test, [0, -1]) - test = notmasked_edges(data.data, 0) - assert_equal(test[0], [(0, 0, 0, 0, 0), (0, 1, 2, 3, 4)]) - assert_equal(test[1], [(4, 4, 4, 4, 4), (0, 1, 2, 3, 4)]) - test = notmasked_edges(data.data, -1) - assert_equal(test[0], [(0, 1, 2, 3, 4), (0, 0, 0, 0, 0)]) - assert_equal(test[1], [(0, 1, 2, 3, 4), (4, 4, 4, 4, 4)]) - # - data[-2] = masked - test = notmasked_edges(data, 0) - assert_equal(test[0], [(0, 0, 1, 0, 0), (0, 1, 2, 3, 4)]) - assert_equal(test[1], [(1, 1, 2, 4, 4), (0, 1, 2, 3, 4)]) - test = notmasked_edges(data, -1) - assert_equal(test[0], [(0, 1, 2, 4), (0, 0, 2, 3)]) - assert_equal(test[1], [(0, 1, 2, 4), (4, 2, 4, 4)]) - - - def test_contiguous(self): - "Tests 
notmasked_contiguous" - a = masked_array(np.arange(24).reshape(3, 8), - mask=[[0, 0, 0, 0, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 0, 0, 1, 0], ]) - tmp = notmasked_contiguous(a, None) - assert_equal(tmp[-1], slice(23, 23, None)) - assert_equal(tmp[-2], slice(16, 21, None)) - assert_equal(tmp[-3], slice(0, 3, None)) - # - tmp = notmasked_contiguous(a, 0) - self.assertTrue(len(tmp[-1]) == 1) - self.assertTrue(tmp[-2] is None) - assert_equal(tmp[-3], tmp[-1]) - self.assertTrue(len(tmp[0]) == 2) - # - tmp = notmasked_contiguous(a, 1) - assert_equal(tmp[0][-1], slice(0, 3, None)) - self.assertTrue(tmp[1] is None) - assert_equal(tmp[2][-1], slice(7, 7, None)) - assert_equal(tmp[2][-2], slice(0, 5, None)) - - - -class Test2DFunctions(TestCase): - "Tests 2D functions" - def test_compress2d(self): - "Tests compress2d" - x = array(np.arange(9).reshape(3, 3), mask=[[1, 0, 0], [0, 0, 0], [0, 0, 0]]) - assert_equal(compress_rowcols(x), [[4, 5], [7, 8]]) - assert_equal(compress_rowcols(x, 0), [[3, 4, 5], [6, 7, 8]]) - assert_equal(compress_rowcols(x, 1), [[1, 2], [4, 5], [7, 8]]) - x = array(x._data, mask=[[0, 0, 0], [0, 1, 0], [0, 0, 0]]) - assert_equal(compress_rowcols(x), [[0, 2], [6, 8]]) - assert_equal(compress_rowcols(x, 0), [[0, 1, 2], [6, 7, 8]]) - assert_equal(compress_rowcols(x, 1), [[0, 2], [3, 5], [6, 8]]) - x = array(x._data, mask=[[1, 0, 0], [0, 1, 0], [0, 0, 0]]) - assert_equal(compress_rowcols(x), [[8]]) - assert_equal(compress_rowcols(x, 0), [[6, 7, 8]]) - assert_equal(compress_rowcols(x, 1,), [[2], [5], [8]]) - x = array(x._data, mask=[[1, 0, 0], [0, 1, 0], [0, 0, 1]]) - assert_equal(compress_rowcols(x).size, 0) - assert_equal(compress_rowcols(x, 0).size, 0) - assert_equal(compress_rowcols(x, 1).size, 0) - # - def test_mask_rowcols(self): - "Tests mask_rowcols." 
- x = array(np.arange(9).reshape(3, 3), mask=[[1, 0, 0], [0, 0, 0], [0, 0, 0]]) - assert_equal(mask_rowcols(x).mask, [[1, 1, 1], [1, 0, 0], [1, 0, 0]]) - assert_equal(mask_rowcols(x, 0).mask, [[1, 1, 1], [0, 0, 0], [0, 0, 0]]) - assert_equal(mask_rowcols(x, 1).mask, [[1, 0, 0], [1, 0, 0], [1, 0, 0]]) - x = array(x._data, mask=[[0, 0, 0], [0, 1, 0], [0, 0, 0]]) - assert_equal(mask_rowcols(x).mask, [[0, 1, 0], [1, 1, 1], [0, 1, 0]]) - assert_equal(mask_rowcols(x, 0).mask, [[0, 0, 0], [1, 1, 1], [0, 0, 0]]) - assert_equal(mask_rowcols(x, 1).mask, [[0, 1, 0], [0, 1, 0], [0, 1, 0]]) - x = array(x._data, mask=[[1, 0, 0], [0, 1, 0], [0, 0, 0]]) - assert_equal(mask_rowcols(x).mask, [[1, 1, 1], [1, 1, 1], [1, 1, 0]]) - assert_equal(mask_rowcols(x, 0).mask, [[1, 1, 1], [1, 1, 1], [0, 0, 0]]) - assert_equal(mask_rowcols(x, 1,).mask, [[1, 1, 0], [1, 1, 0], [1, 1, 0]]) - x = array(x._data, mask=[[1, 0, 0], [0, 1, 0], [0, 0, 1]]) - self.assertTrue(mask_rowcols(x).all() is masked) - self.assertTrue(mask_rowcols(x, 0).all() is masked) - self.assertTrue(mask_rowcols(x, 1).all() is masked) - self.assertTrue(mask_rowcols(x).mask.all()) - self.assertTrue(mask_rowcols(x, 0).mask.all()) - self.assertTrue(mask_rowcols(x, 1).mask.all()) - # - def test_dot(self): - "Tests dot product" - n = np.arange(1, 7) - # - m = [1, 0, 0, 0, 0, 0] - a = masked_array(n, mask=m).reshape(2, 3) - b = masked_array(n, mask=m).reshape(3, 2) - c = dot(a, b, True) - assert_equal(c.mask, [[1, 1], [1, 0]]) - c = dot(b, a, True) - assert_equal(c.mask, [[1, 1, 1], [1, 0, 0], [1, 0, 0]]) - c = dot(a, b, False) - assert_equal(c, np.dot(a.filled(0), b.filled(0))) - c = dot(b, a, False) - assert_equal(c, np.dot(b.filled(0), a.filled(0))) - # - m = [0, 0, 0, 0, 0, 1] - a = masked_array(n, mask=m).reshape(2, 3) - b = masked_array(n, mask=m).reshape(3, 2) - c = dot(a, b, True) - assert_equal(c.mask, [[0, 1], [1, 1]]) - c = dot(b, a, True) - assert_equal(c.mask, [[0, 0, 1], [0, 0, 1], [1, 1, 1]]) - c = dot(a, b, False) - 
assert_equal(c, np.dot(a.filled(0), b.filled(0))) - assert_equal(c, dot(a, b)) - c = dot(b, a, False) - assert_equal(c, np.dot(b.filled(0), a.filled(0))) - # - m = [0, 0, 0, 0, 0, 0] - a = masked_array(n, mask=m).reshape(2, 3) - b = masked_array(n, mask=m).reshape(3, 2) - c = dot(a, b) - assert_equal(c.mask, nomask) - c = dot(b, a) - assert_equal(c.mask, nomask) - # - a = masked_array(n, mask=[1, 0, 0, 0, 0, 0]).reshape(2, 3) - b = masked_array(n, mask=[0, 0, 0, 0, 0, 0]).reshape(3, 2) - c = dot(a, b, True) - assert_equal(c.mask, [[1, 1], [0, 0]]) - c = dot(a, b, False) - assert_equal(c, np.dot(a.filled(0), b.filled(0))) - c = dot(b, a, True) - assert_equal(c.mask, [[1, 0, 0], [1, 0, 0], [1, 0, 0]]) - c = dot(b, a, False) - assert_equal(c, np.dot(b.filled(0), a.filled(0))) - # - a = masked_array(n, mask=[0, 0, 0, 0, 0, 1]).reshape(2, 3) - b = masked_array(n, mask=[0, 0, 0, 0, 0, 0]).reshape(3, 2) - c = dot(a, b, True) - assert_equal(c.mask, [[0, 0], [1, 1]]) - c = dot(a, b) - assert_equal(c, np.dot(a.filled(0), b.filled(0))) - c = dot(b, a, True) - assert_equal(c.mask, [[0, 0, 1], [0, 0, 1], [0, 0, 1]]) - c = dot(b, a, False) - assert_equal(c, np.dot(b.filled(0), a.filled(0))) - # - a = masked_array(n, mask=[0, 0, 0, 0, 0, 1]).reshape(2, 3) - b = masked_array(n, mask=[0, 0, 1, 0, 0, 0]).reshape(3, 2) - c = dot(a, b, True) - assert_equal(c.mask, [[1, 0], [1, 1]]) - c = dot(a, b, False) - assert_equal(c, np.dot(a.filled(0), b.filled(0))) - c = dot(b, a, True) - assert_equal(c.mask, [[0, 0, 1], [1, 1, 1], [0, 0, 1]]) - c = dot(b, a, False) - assert_equal(c, np.dot(b.filled(0), a.filled(0))) - - - -class TestApplyAlongAxis(TestCase): - # - "Tests 2D functions" - def test_3d(self): - a = arange(12.).reshape(2, 2, 3) - def myfunc(b): - return b[1] - xa = apply_along_axis(myfunc, 2, a) - assert_equal(xa, [[1, 4], [7, 10]]) - - - -class TestApplyOverAxes(TestCase): - "Tests apply_over_axes" - def test_basic(self): - a = arange(24).reshape(2, 3, 4) - test = 
apply_over_axes(np.sum, a, [0, 2]) - ctrl = np.array([[[ 60], [ 92], [124]]]) - assert_equal(test, ctrl) - a[(a % 2).astype(np.bool)] = masked - test = apply_over_axes(np.sum, a, [0, 2]) - ctrl = np.array([[[ 30], [ 44], [60]]]) - assert_equal(test, ctrl) - - -class TestMedian(TestCase): - # - def test_2d(self): - "Tests median w/ 2D" - (n, p) = (101, 30) - x = masked_array(np.linspace(-1., 1., n),) - x[:10] = x[-10:] = masked - z = masked_array(np.empty((n, p), dtype=float)) - z[:, 0] = x[:] - idx = np.arange(len(x)) - for i in range(1, p): - np.random.shuffle(idx) - z[:, i] = x[idx] - assert_equal(median(z[:, 0]), 0) - assert_equal(median(z), 0) - assert_equal(median(z, axis=0), np.zeros(p)) - assert_equal(median(z.T, axis=1), np.zeros(p)) - # - def test_2d_waxis(self): - "Tests median w/ 2D arrays and different axis." - x = masked_array(np.arange(30).reshape(10, 3)) - x[:3] = x[-3:] = masked - assert_equal(median(x), 14.5) - assert_equal(median(x, axis=0), [13.5, 14.5, 15.5]) - assert_equal(median(x, axis=1), [0, 0, 0, 10, 13, 16, 19, 0, 0, 0]) - assert_equal(median(x, axis=1).mask, [1, 1, 1, 0, 0, 0, 0, 1, 1, 1]) - # - def test_3d(self): - "Tests median w/ 3D" - x = np.ma.arange(24).reshape(3, 4, 2) - x[x % 3 == 0] = masked - assert_equal(median(x, 0), [[12, 9], [6, 15], [12, 9], [18, 15]]) - x.shape = (4, 3, 2) - assert_equal(median(x, 0), [[99, 10], [11, 99], [13, 14]]) - x = np.ma.arange(24).reshape(4, 3, 2) - x[x % 5 == 0] = masked - assert_equal(median(x, 0), [[12, 10], [8, 9], [16, 17]]) - - - -class TestCov(TestCase): - - def setUp(self): - self.data = array(np.random.rand(12)) - - def test_1d_wo_missing(self): - "Test cov on 1D variable w/o missing values" - x = self.data - assert_almost_equal(np.cov(x), cov(x)) - assert_almost_equal(np.cov(x, rowvar=False), cov(x, rowvar=False)) - assert_almost_equal(np.cov(x, rowvar=False, bias=True), - cov(x, rowvar=False, bias=True)) - - def test_2d_wo_missing(self): - "Test cov on 1 2D variable w/o missing values" - x = self.data.reshape(3, 
4) - assert_almost_equal(np.cov(x), cov(x)) - assert_almost_equal(np.cov(x, rowvar=False), cov(x, rowvar=False)) - assert_almost_equal(np.cov(x, rowvar=False, bias=True), - cov(x, rowvar=False, bias=True)) - - def test_1d_w_missing(self): - "Test cov 1 1D variable w/missing values" - x = self.data - x[-1] = masked - x -= x.mean() - nx = x.compressed() - assert_almost_equal(np.cov(nx), cov(x)) - assert_almost_equal(np.cov(nx, rowvar=False), cov(x, rowvar=False)) - assert_almost_equal(np.cov(nx, rowvar=False, bias=True), - cov(x, rowvar=False, bias=True)) - # - try: - cov(x, allow_masked=False) - except ValueError: - pass - # - # 2 1D variables w/ missing values - nx = x[1:-1] - assert_almost_equal(np.cov(nx, nx[::-1]), cov(x, x[::-1])) - assert_almost_equal(np.cov(nx, nx[::-1], rowvar=False), - cov(x, x[::-1], rowvar=False)) - assert_almost_equal(np.cov(nx, nx[::-1], rowvar=False, bias=True), - cov(x, x[::-1], rowvar=False, bias=True)) - - def test_2d_w_missing(self): - "Test cov on 2D variable w/ missing value" - x = self.data - x[-1] = masked - x = x.reshape(3, 4) - valid = np.logical_not(getmaskarray(x)).astype(int) - frac = np.dot(valid, valid.T) - xf = (x - x.mean(1)[:, None]).filled(0) - assert_almost_equal(cov(x), np.cov(xf) * (x.shape[1] - 1) / (frac - 1.)) - assert_almost_equal(cov(x, bias=True), - np.cov(xf, bias=True) * x.shape[1] / frac) - frac = np.dot(valid.T, valid) - xf = (x - x.mean(0)).filled(0) - assert_almost_equal(cov(x, rowvar=False), - np.cov(xf, rowvar=False) * (x.shape[0] - 1) / (frac - 1.)) - assert_almost_equal(cov(x, rowvar=False, bias=True), - np.cov(xf, rowvar=False, bias=True) * x.shape[0] / frac) - - - -class TestCorrcoef(TestCase): - - def setUp(self): - self.data = array(np.random.rand(12)) - - def test_ddof(self): - "Test ddof keyword" - x = self.data - assert_almost_equal(np.corrcoef(x, ddof=0), corrcoef(x, ddof=0)) - - - def test_1d_wo_missing(self): - "Test cov on 1D variable w/o missing values" - x = self.data - 
assert_almost_equal(np.corrcoef(x), corrcoef(x)) - assert_almost_equal(np.corrcoef(x, rowvar=False), - corrcoef(x, rowvar=False)) - assert_almost_equal(np.corrcoef(x, rowvar=False, bias=True), - corrcoef(x, rowvar=False, bias=True)) - - def test_2d_wo_missing(self): - "Test corrcoef on 1 2D variable w/o missing values" - x = self.data.reshape(3, 4) - assert_almost_equal(np.corrcoef(x), corrcoef(x)) - assert_almost_equal(np.corrcoef(x, rowvar=False), - corrcoef(x, rowvar=False)) - assert_almost_equal(np.corrcoef(x, rowvar=False, bias=True), - corrcoef(x, rowvar=False, bias=True)) - - def test_1d_w_missing(self): - "Test corrcoef 1 1D variable w/missing values" - x = self.data - x[-1] = masked - x -= x.mean() - nx = x.compressed() - assert_almost_equal(np.corrcoef(nx), corrcoef(x)) - assert_almost_equal(np.corrcoef(nx, rowvar=False), corrcoef(x, rowvar=False)) - assert_almost_equal(np.corrcoef(nx, rowvar=False, bias=True), - corrcoef(x, rowvar=False, bias=True)) - # - try: - corrcoef(x, allow_masked=False) - except ValueError: - pass - # - # 2 1D variables w/ missing values - nx = x[1:-1] - assert_almost_equal(np.corrcoef(nx, nx[::-1]), corrcoef(x, x[::-1])) - assert_almost_equal(np.corrcoef(nx, nx[::-1], rowvar=False), - corrcoef(x, x[::-1], rowvar=False)) - assert_almost_equal(np.corrcoef(nx, nx[::-1], rowvar=False, bias=True), - corrcoef(x, x[::-1], rowvar=False, bias=True)) - - def test_2d_w_missing(self): - "Test corrcoef on 2D variable w/ missing value" - x = self.data - x[-1] = masked - x = x.reshape(3, 4) - - test = corrcoef(x) - control = np.corrcoef(x) - assert_almost_equal(test[:-1, :-1], control[:-1, :-1]) - - - -class TestPolynomial(TestCase): - # - def test_polyfit(self): - "Tests polyfit" - # On ndarrays - x = np.random.rand(10) - y = np.random.rand(20).reshape(-1, 2) - assert_almost_equal(polyfit(x, y, 3), np.polyfit(x, y, 3)) - # ON 1D maskedarrays - x = x.view(MaskedArray) - x[0] = masked - y = y.view(MaskedArray) - y[0, 0] = y[-1, -1] = masked - # 
- (C, R, K, S, D) = polyfit(x, y[:, 0], 3, full=True) - (c, r, k, s, d) = np.polyfit(x[1:], y[1:, 0].compressed(), 3, full=True) - for (a, a_) in zip((C, R, K, S, D), (c, r, k, s, d)): - assert_almost_equal(a, a_) - # - (C, R, K, S, D) = polyfit(x, y[:, -1], 3, full=True) - (c, r, k, s, d) = np.polyfit(x[1:-1], y[1:-1, -1], 3, full=True) - for (a, a_) in zip((C, R, K, S, D), (c, r, k, s, d)): - assert_almost_equal(a, a_) - # - (C, R, K, S, D) = polyfit(x, y, 3, full=True) - (c, r, k, s, d) = np.polyfit(x[1:-1], y[1:-1, :], 3, full=True) - for (a, a_) in zip((C, R, K, S, D), (c, r, k, s, d)): - assert_almost_equal(a, a_) - - - -class TestArraySetOps(TestCase): - # - def test_unique_onlist(self): - "Test unique on list" - data = [1, 1, 1, 2, 2, 3] - test = unique(data, return_index=True, return_inverse=True) - self.assertTrue(isinstance(test[0], MaskedArray)) - assert_equal(test[0], masked_array([1, 2, 3], mask=[0, 0, 0])) - assert_equal(test[1], [0, 3, 5]) - assert_equal(test[2], [0, 0, 0, 1, 1, 2]) - - def test_unique_onmaskedarray(self): - "Test unique on masked data w/use_mask=True" - data = masked_array([1, 1, 1, 2, 2, 3], mask=[0, 0, 1, 0, 1, 0]) - test = unique(data, return_index=True, return_inverse=True) - assert_equal(test[0], masked_array([1, 2, 3, -1], mask=[0, 0, 0, 1])) - assert_equal(test[1], [0, 3, 5, 2]) - assert_equal(test[2], [0, 0, 3, 1, 3, 2]) - # - data.fill_value = 3 - data = masked_array([1, 1, 1, 2, 2, 3], - mask=[0, 0, 1, 0, 1, 0], fill_value=3) - test = unique(data, return_index=True, return_inverse=True) - assert_equal(test[0], masked_array([1, 2, 3, -1], mask=[0, 0, 0, 1])) - assert_equal(test[1], [0, 3, 5, 2]) - assert_equal(test[2], [0, 0, 3, 1, 3, 2]) - - def test_unique_allmasked(self): - "Test all masked" - data = masked_array([1, 1, 1], mask=True) - test = unique(data, return_index=True, return_inverse=True) - assert_equal(test[0], masked_array([1, ], mask=[True])) - assert_equal(test[1], [0]) - assert_equal(test[2], [0, 0, 0]) - # 
- "Test masked" - data = masked - test = unique(data, return_index=True, return_inverse=True) - assert_equal(test[0], masked_array(masked)) - assert_equal(test[1], [0]) - assert_equal(test[2], [0]) - - def test_ediff1d(self): - "Tests mediff1d" - x = masked_array(np.arange(5), mask=[1, 0, 0, 0, 1]) - control = array([1, 1, 1, 4], mask=[1, 0, 0, 1]) - test = ediff1d(x) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - # - def test_ediff1d_tobegin(self): - "Test ediff1d w/ to_begin" - x = masked_array(np.arange(5), mask=[1, 0, 0, 0, 1]) - test = ediff1d(x, to_begin=masked) - control = array([0, 1, 1, 1, 4], mask=[1, 1, 0, 0, 1]) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - # - test = ediff1d(x, to_begin=[1, 2, 3]) - control = array([1, 2, 3, 1, 1, 1, 4], mask=[0, 0, 0, 1, 0, 0, 1]) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - # - def test_ediff1d_toend(self): - "Test ediff1d w/ to_end" - x = masked_array(np.arange(5), mask=[1, 0, 0, 0, 1]) - test = ediff1d(x, to_end=masked) - control = array([1, 1, 1, 4, 0], mask=[1, 0, 0, 1, 1]) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - # - test = ediff1d(x, to_end=[1, 2, 3]) - control = array([1, 1, 1, 4, 1, 2, 3], mask=[1, 0, 0, 1, 0, 0, 0]) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - # - def test_ediff1d_tobegin_toend(self): - "Test ediff1d w/ to_begin and to_end" - x = masked_array(np.arange(5), mask=[1, 0, 0, 0, 1]) - test = ediff1d(x, to_end=masked, to_begin=masked) - control = array([0, 1, 1, 1, 4, 0], mask=[1, 1, 0, 0, 1, 1]) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - # - test = ediff1d(x, to_end=[1, 2, 3], to_begin=masked) - 
control = array([0, 1, 1, 1, 4, 1, 2, 3], mask=[1, 1, 0, 0, 1, 0, 0, 0]) - assert_equal(test, control) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - # - def test_ediff1d_ndarray(self): - "Test ediff1d w/ a ndarray" - x = np.arange(5) - test = ediff1d(x) - control = array([1, 1, 1, 1], mask=[0, 0, 0, 0]) - assert_equal(test, control) - self.assertTrue(isinstance(test, MaskedArray)) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - # - test = ediff1d(x, to_end=masked, to_begin=masked) - control = array([0, 1, 1, 1, 1, 0], mask=[1, 0, 0, 0, 0, 1]) - self.assertTrue(isinstance(test, MaskedArray)) - assert_equal(test.data, control.data) - assert_equal(test.mask, control.mask) - - - def test_intersect1d(self): - "Test intersect1d" - x = array([1, 3, 3, 3], mask=[0, 0, 0, 1]) - y = array([3, 1, 1, 1], mask=[0, 0, 0, 1]) - test = intersect1d(x, y) - control = array([1, 3, -1], mask=[0, 0, 1]) - assert_equal(test, control) - - - def test_setxor1d(self): - "Test setxor1d" - a = array([1, 2, 5, 7, -1], mask=[0, 0, 0, 0, 1]) - b = array([1, 2, 3, 4, 5, -1], mask=[0, 0, 0, 0, 0, 1]) - test = setxor1d(a, b) - assert_equal(test, array([3, 4, 7])) - # - a = array([1, 2, 5, 7, -1], mask=[0, 0, 0, 0, 1]) - b = [1, 2, 3, 4, 5] - test = setxor1d(a, b) - assert_equal(test, array([3, 4, 7, -1], mask=[0, 0, 0, 1])) - # - a = array([1, 2, 3]) - b = array([6, 5, 4]) - test = setxor1d(a, b) - assert(isinstance(test, MaskedArray)) - assert_equal(test, [1, 2, 3, 4, 5, 6]) - # - a = array([1, 8, 2, 3], mask=[0, 1, 0, 0]) - b = array([6, 5, 4, 8], mask=[0, 0, 0, 1]) - test = setxor1d(a, b) - assert(isinstance(test, MaskedArray)) - assert_equal(test, [1, 2, 3, 4, 5, 6]) - # - assert_array_equal([], setxor1d([], [])) - - - def test_in1d(self): - "Test in1d" - a = array([1, 2, 5, 7, -1], mask=[0, 0, 0, 0, 1]) - b = array([1, 2, 3, 4, 5, -1], mask=[0, 0, 0, 0, 0, 1]) - test = in1d(a, b) - assert_equal(test, [True, True, True, 
False, True]) - # - a = array([5, 5, 2, 1, -1], mask=[0, 0, 0, 0, 1]) - b = array([1, 5, -1], mask=[0, 0, 1]) - test = in1d(a, b) - assert_equal(test, [True, True, False, True, True]) - # - assert_array_equal([], in1d([], [])) - - - def test_union1d(self): - "Test union1d" - a = array([1, 2, 5, 7, 5, -1], mask=[0, 0, 0, 0, 0, 1]) - b = array([1, 2, 3, 4, 5, -1], mask=[0, 0, 0, 0, 0, 1]) - test = union1d(a, b) - control = array([1, 2, 3, 4, 5, 7, -1], mask=[0, 0, 0, 0, 0, 0, 1]) - assert_equal(test, control) - # - assert_array_equal([], union1d([], [])) - - - def test_setdiff1d(self): - "Test setdiff1d" - a = array([6, 5, 4, 7, 7, 1, 2, 1], mask=[0, 0, 0, 0, 0, 0, 0, 1]) - b = array([2, 4, 3, 3, 2, 1, 5]) - test = setdiff1d(a, b) - assert_equal(test, array([6, 7, -1], mask=[0, 0, 1])) - # - a = arange(10) - b = arange(8) - assert_equal(setdiff1d(a, b), array([8, 9])) - - - def test_setdiff1d_char_array(self): - "Test setdiff1d_charray" - a = np.array(['a', 'b', 'c']) - b = np.array(['a', 'b', 's']) - assert_array_equal(setdiff1d(a, b), np.array(['c'])) - - -class TestShapeBase(TestCase): - # - def test_atleast1d(self): - pass - - -############################################################################### -#------------------------------------------------------------------------------ -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/ma/tests/test_mrecords.py b/pythonPackages/numpy/numpy/ma/tests/test_mrecords.py deleted file mode 100755 index f068d4e503..0000000000 --- a/pythonPackages/numpy/numpy/ma/tests/test_mrecords.py +++ /dev/null @@ -1,501 +0,0 @@ -# pylint: disable-msg=W0611, W0612, W0511,R0201 -"""Tests suite for mrecords. 
- -:author: Pierre Gerard-Marchant -:contact: pierregm_at_uga_dot_edu -""" -__author__ = "Pierre GF Gerard-Marchant ($Author: jarrod.millman $)" -__revision__ = "$Revision: 3473 $" -__date__ = '$Date: 2007-10-29 17:18:13 +0200 (Mon, 29 Oct 2007) $' - -import sys -import numpy as np -from numpy import recarray -from numpy.core.records import fromrecords as recfromrecords, \ - fromarrays as recfromarrays - -from numpy.compat import asbytes, asbytes_nested - -import numpy.ma.testutils -from numpy.ma.testutils import * - -import numpy.ma as ma -from numpy.ma import masked, nomask - -from numpy.ma.mrecords import MaskedRecords, mrecarray, fromarrays, \ - fromtextfile, fromrecords, addfield - -#.............................................................................. -class TestMRecords(TestCase): - "Base test class for MaskedArrays." - def __init__(self, *args, **kwds): - TestCase.__init__(self, *args, **kwds) - self.setup() - - def setup(self): - "Generic setup" - ilist = [1,2,3,4,5] - flist = [1.1,2.2,3.3,4.4,5.5] - slist = asbytes_nested(['one','two','three','four','five']) - ddtype = [('a',int),('b',float),('c','|S8')] - mask = [0,1,0,0,1] - self.base = ma.array(list(zip(ilist,flist,slist)), - mask=mask, dtype=ddtype) - - def test_byview(self): - "Test creation by view" - base = self.base - mbase = base.view(mrecarray) - assert_equal(mbase.recordmask, base.recordmask) - assert_equal_records(mbase._mask, base._mask) - assert isinstance(mbase._data, recarray) - assert_equal_records(mbase._data, base._data.view(recarray)) - for field in ('a','b','c'): - assert_equal(base[field], mbase[field]) - assert_equal_records(mbase.view(mrecarray), mbase) - - def test_get(self): - "Tests fields retrieval" - base = self.base.copy() - mbase = base.view(mrecarray) - # As fields.......... - for field in ('a','b','c'): - assert_equal(getattr(mbase,field), mbase[field]) - assert_equal(base[field], mbase[field]) - # as elements ....... 
- mbase_first = mbase[0] - assert isinstance(mbase_first, mrecarray) - assert_equal(mbase_first.dtype, mbase.dtype) - assert_equal(mbase_first.tolist(), (1,1.1,asbytes('one'))) - # Used to be mask, now it's recordmask - assert_equal(mbase_first.recordmask, nomask) - assert_equal(mbase_first._mask.item(), (False, False, False)) - assert_equal(mbase_first['a'], mbase['a'][0]) - mbase_last = mbase[-1] - assert isinstance(mbase_last, mrecarray) - assert_equal(mbase_last.dtype, mbase.dtype) - assert_equal(mbase_last.tolist(), (None,None,None)) - # Used to be mask, now it's recordmask - assert_equal(mbase_last.recordmask, True) - assert_equal(mbase_last._mask.item(), (True, True, True)) - assert_equal(mbase_last['a'], mbase['a'][-1]) - assert (mbase_last['a'] is masked) - # as slice .......... - mbase_sl = mbase[:2] - assert isinstance(mbase_sl, mrecarray) - assert_equal(mbase_sl.dtype, mbase.dtype) - # Used to be mask, now it's recordmask - assert_equal(mbase_sl.recordmask, [0,1]) - assert_equal_records(mbase_sl.mask, - np.array([(False,False,False),(True,True,True)], - dtype=mbase._mask.dtype)) - assert_equal_records(mbase_sl, base[:2].view(mrecarray)) - for field in ('a','b','c'): - assert_equal(getattr(mbase_sl,field), base[:2][field]) - - def test_set_fields(self): - "Tests setting fields." - base = self.base.copy() - mbase = base.view(mrecarray) - mbase = mbase.copy() - mbase.fill_value = (999999,1e20,'N/A') - # Change the data, the mask should be conserved - mbase.a._data[:] = 5 - assert_equal(mbase['a']._data, [5,5,5,5,5]) - assert_equal(mbase['a']._mask, [0,1,0,0,1]) - # Change the elements, and the mask will follow - mbase.a = 1 - assert_equal(mbase['a']._data, [1]*5) - assert_equal(ma.getmaskarray(mbase['a']), [0]*5) - # Use to be _mask, now it's recordmask - assert_equal(mbase.recordmask, [False]*5) - assert_equal(mbase._mask.tolist(), - np.array([(0,0,0),(0,1,1),(0,0,0),(0,0,0),(0,1,1)], - dtype=bool)) - # Set a field to mask ........................ 
- mbase.c = masked - # Use to be mask, and now it's still mask ! - assert_equal(mbase.c.mask, [1]*5) - assert_equal(mbase.c.recordmask, [1]*5) - assert_equal(ma.getmaskarray(mbase['c']), [1]*5) - assert_equal(ma.getdata(mbase['c']), [asbytes('N/A')]*5) - assert_equal(mbase._mask.tolist(), - np.array([(0,0,1),(0,1,1),(0,0,1),(0,0,1),(0,1,1)], - dtype=bool)) - # Set fields by slices ....................... - mbase = base.view(mrecarray).copy() - mbase.a[3:] = 5 - assert_equal(mbase.a, [1,2,3,5,5]) - assert_equal(mbase.a._mask, [0,1,0,0,0]) - mbase.b[3:] = masked - assert_equal(mbase.b, base['b']) - assert_equal(mbase.b._mask, [0,1,0,1,1]) - # Set fields globally.......................... - ndtype = [('alpha','|S1'),('num',int)] - data = ma.array([('a',1),('b',2),('c',3)], dtype=ndtype) - rdata = data.view(MaskedRecords) - val = ma.array([10,20,30], mask=[1,0,0]) - # - import warnings - warnings.simplefilter("ignore") - rdata['num'] = val - assert_equal(rdata.num, val) - assert_equal(rdata.num.mask, [1,0,0]) - - def test_set_fields_mask(self): - "Tests setting the mask of a field." - base = self.base.copy() - # This one has already a mask.... - mbase = base.view(mrecarray) - mbase['a'][-2] = masked - assert_equal(mbase.a, [1,2,3,4,5]) - assert_equal(mbase.a._mask, [0,1,0,1,1]) - # This one has not yet - mbase = fromarrays([np.arange(5), np.random.rand(5)], - dtype=[('a',int),('b',float)]) - mbase['a'][-2] = masked - assert_equal(mbase.a, [0,1,2,3,4]) - assert_equal(mbase.a._mask, [0,0,0,1,0]) - # - def test_set_mask(self): - base = self.base.copy() - mbase = base.view(mrecarray) - # Set the mask to True ....................... - mbase.mask = masked - assert_equal(ma.getmaskarray(mbase['b']), [1]*5) - assert_equal(mbase['a']._mask, mbase['b']._mask) - assert_equal(mbase['a']._mask, mbase['c']._mask) - assert_equal(mbase._mask.tolist(), - np.array([(1,1,1)]*5, dtype=bool)) - # Delete the mask ............................ 
- mbase.mask = nomask - assert_equal(ma.getmaskarray(mbase['c']), [0]*5) - assert_equal(mbase._mask.tolist(), - np.array([(0,0,0)]*5, dtype=bool)) - # - def test_set_mask_fromarray(self): - base = self.base.copy() - mbase = base.view(mrecarray) - # Sets the mask w/ an array - mbase.mask = [1,0,0,0,1] - assert_equal(mbase.a.mask, [1,0,0,0,1]) - assert_equal(mbase.b.mask, [1,0,0,0,1]) - assert_equal(mbase.c.mask, [1,0,0,0,1]) - # Yay, once more ! - mbase.mask = [0,0,0,0,1] - assert_equal(mbase.a.mask, [0,0,0,0,1]) - assert_equal(mbase.b.mask, [0,0,0,0,1]) - assert_equal(mbase.c.mask, [0,0,0,0,1]) - # - def test_set_mask_fromfields(self): - mbase = self.base.copy().view(mrecarray) - # - nmask = np.array([(0,1,0),(0,1,0),(1,0,1),(1,0,1),(0,0,0)], - dtype=[('a',bool),('b',bool),('c',bool)]) - mbase.mask = nmask - assert_equal(mbase.a.mask, [0,0,1,1,0]) - assert_equal(mbase.b.mask, [1,1,0,0,0]) - assert_equal(mbase.c.mask, [0,0,1,1,0]) - # Reinitalizes and redo - mbase.mask = False - mbase.fieldmask = nmask - assert_equal(mbase.a.mask, [0,0,1,1,0]) - assert_equal(mbase.b.mask, [1,1,0,0,0]) - assert_equal(mbase.c.mask, [0,0,1,1,0]) - # - def test_set_elements(self): - base = self.base.copy() - # Set an element to mask ..................... - mbase = base.view(mrecarray).copy() - mbase[-2] = masked - assert_equal(mbase._mask.tolist(), - np.array([(0,0,0),(1,1,1),(0,0,0),(1,1,1),(1,1,1)], - dtype=bool)) - # Used to be mask, now it's recordmask! - assert_equal(mbase.recordmask, [0,1,0,1,1]) - # Set slices ................................. 
- mbase = base.view(mrecarray).copy() - mbase[:2] = (5,5,5) - assert_equal(mbase.a._data, [5,5,3,4,5]) - assert_equal(mbase.a._mask, [0,0,0,0,1]) - assert_equal(mbase.b._data, [5.,5.,3.3,4.4,5.5]) - assert_equal(mbase.b._mask, [0,0,0,0,1]) - assert_equal(mbase.c._data, - asbytes_nested(['5','5','three','four','five'])) - assert_equal(mbase.b._mask, [0,0,0,0,1]) - # - mbase = base.view(mrecarray).copy() - mbase[:2] = masked - assert_equal(mbase.a._data, [1,2,3,4,5]) - assert_equal(mbase.a._mask, [1,1,0,0,1]) - assert_equal(mbase.b._data, [1.1,2.2,3.3,4.4,5.5]) - assert_equal(mbase.b._mask, [1,1,0,0,1]) - assert_equal(mbase.c._data, - asbytes_nested(['one','two','three','four','five'])) - assert_equal(mbase.b._mask, [1,1,0,0,1]) - # - def test_setslices_hardmask(self): - "Tests setting slices w/ hardmask." - base = self.base.copy() - mbase = base.view(mrecarray) - mbase.harden_mask() - try: - mbase[-2:] = (5,5,5) - assert_equal(mbase.a._data, [1,2,3,5,5]) - assert_equal(mbase.b._data, [1.1,2.2,3.3,5,5.5]) - assert_equal(mbase.c._data, - asbytes_nested(['one','two','three','5','five'])) - assert_equal(mbase.a._mask, [0,1,0,0,1]) - assert_equal(mbase.b._mask, mbase.a._mask) - assert_equal(mbase.b._mask, mbase.c._mask) - except NotImplementedError: - # OK, not implemented yet... - pass - except AssertionError: - raise - else: - raise Exception("Flexible hard masks should be supported !") - # Not using a tuple should crash - try: - mbase[-2:] = 3 - except (NotImplementedError, TypeError): - pass - else: - raise TypeError("Should have expected a readable buffer object!") - - - def test_hardmask(self): - "Test hardmask" - base = self.base.copy() - mbase = base.view(mrecarray) - mbase.harden_mask() - self.assertTrue(mbase._hardmask) - mbase.mask = nomask - assert_equal_records(mbase._mask, base._mask) - mbase.soften_mask() - self.assertTrue(not mbase._hardmask) - mbase.mask = nomask - # So, the mask of a field is no longer set to nomask... 
- assert_equal_records(mbase._mask, - ma.make_mask_none(base.shape,base.dtype)) - self.assertTrue(ma.make_mask(mbase['b']._mask) is nomask) - assert_equal(mbase['a']._mask,mbase['b']._mask) - # - def test_pickling(self): - "Test pickling" - import cPickle - base = self.base.copy() - mrec = base.view(mrecarray) - _ = cPickle.dumps(mrec) - mrec_ = cPickle.loads(_) - assert_equal(mrec_.dtype, mrec.dtype) - assert_equal_records(mrec_._data, mrec._data) - assert_equal(mrec_._mask, mrec._mask) - assert_equal_records(mrec_._mask, mrec._mask) - # - def test_filled(self): - "Test filling the array" - _a = ma.array([1,2,3],mask=[0,0,1],dtype=int) - _b = ma.array([1.1,2.2,3.3],mask=[0,0,1],dtype=float) - _c = ma.array(['one','two','three'],mask=[0,0,1],dtype='|S8') - ddtype = [('a',int),('b',float),('c','|S8')] - mrec = fromarrays([_a,_b,_c], dtype=ddtype, - fill_value=(99999,99999.,'N/A')) - mrecfilled = mrec.filled() - assert_equal(mrecfilled['a'], np.array((1,2,99999), dtype=int)) - assert_equal(mrecfilled['b'], np.array((1.1,2.2,99999.), dtype=float)) - assert_equal(mrecfilled['c'], np.array(('one','two','N/A'), dtype='|S8')) - # - def test_tolist(self): - "Test tolist." 
- _a = ma.array([1,2,3],mask=[0,0,1],dtype=int) - _b = ma.array([1.1,2.2,3.3],mask=[0,0,1],dtype=float) - _c = ma.array(['one','two','three'],mask=[1,0,0],dtype='|S8') - ddtype = [('a',int),('b',float),('c','|S8')] - mrec = fromarrays([_a,_b,_c], dtype=ddtype, - fill_value=(99999,99999.,'N/A')) - # - assert_equal(mrec.tolist(), - [(1,1.1,None),(2,2.2,asbytes('two')), - (None,None,asbytes('three'))]) - - - # - def test_withnames(self): - "Test the creation w/ format and names" - x = mrecarray(1, formats=float, names='base') - x[0]['base'] = 10 - assert_equal(x['base'][0], 10) - # - def test_exotic_formats(self): - "Test that 'exotic' formats are processed properly" - easy = mrecarray(1, dtype=[('i',int), ('s','|S8'), ('f',float)]) - easy[0] = masked - assert_equal(easy.filled(1).item(), (1,asbytes('1'),1.)) - # - solo = mrecarray(1, dtype=[('f0', '<f8', (2, 2))]) [...] diff --git a/pythonPackages/numpy/numpy/ma/tests/test_old_ma.py b/pythonPackages/numpy/numpy/ma/tests/test_old_ma.py deleted file mode 100755 [...] -if sys.version_info[0] >= 3: - from functools import reduce - -pi = numpy.pi -def eq(v, w, msg=''): - result = allclose(v, w) - if not result: - print """Not eq:%s -%s ----- -%s""" % (msg, str(v), str(w)) - return result - -class TestMa(TestCase): - def setUp (self): - x = numpy.array([1., 1., 1., -2., pi / 2.0, 4., 5., -10., 10., 1., 2., 3.]) - y = numpy.array([5., 0., 3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) - a10 = 10. - m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] - m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 , 0, 1] - xm = array(x, mask=m1) - ym = array(y, mask=m2) - z = numpy.array([-.5, 0., .5, .8]) - zm = array(z, mask=[0, 1, 0, 0]) - xf = numpy.where(m1, 1e+20, x) - s = x.shape - xm.set_fill_value(1e+20) - self.d = (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) - - def test_testBasic1d(self): - "Test of basic array creation and properties in 1 dimension."
- (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d - self.assertFalse(isMaskedArray(x)) - self.assertTrue(isMaskedArray(xm)) - self.assertEqual(shape(xm), s) - self.assertEqual(xm.shape, s) - self.assertEqual(xm.dtype, x.dtype) - self.assertEqual(xm.size , reduce(lambda x, y:x * y, s)) - self.assertEqual(count(xm) , len(m1) - reduce(lambda x, y:x + y, m1)) - self.assertTrue(eq(xm, xf)) - self.assertTrue(eq(filled(xm, 1.e20), xf)) - self.assertTrue(eq(x, xm)) - - def test_testBasic2d(self): - "Test of basic array creation and properties in 2 dimensions." - for s in [(4, 3), (6, 2)]: - (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d - x.shape = s - y.shape = s - xm.shape = s - ym.shape = s - xf.shape = s - - self.assertFalse(isMaskedArray(x)) - self.assertTrue(isMaskedArray(xm)) - self.assertEqual(shape(xm), s) - self.assertEqual(xm.shape, s) - self.assertEqual(xm.size , reduce(lambda x, y:x * y, s)) - self.assertEqual(count(xm) , len(m1) - reduce(lambda x, y:x + y, m1)) - self.assertTrue(eq(xm, xf)) - self.assertTrue(eq(filled(xm, 1.e20), xf)) - self.assertTrue(eq(x, xm)) - self.setUp() - - def test_testArithmetic (self): - "Test of basic arithmetic." 
- (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d - a2d = array([[1, 2], [0, 4]]) - a2dm = masked_array(a2d, [[0, 0], [1, 0]]) - self.assertTrue(eq (a2d * a2d, a2d * a2dm)) - self.assertTrue(eq (a2d + a2d, a2d + a2dm)) - self.assertTrue(eq (a2d - a2d, a2d - a2dm)) - for s in [(12,), (4, 3), (2, 6)]: - x = x.reshape(s) - y = y.reshape(s) - xm = xm.reshape(s) - ym = ym.reshape(s) - xf = xf.reshape(s) - self.assertTrue(eq(-x, -xm)) - self.assertTrue(eq(x + y, xm + ym)) - self.assertTrue(eq(x - y, xm - ym)) - self.assertTrue(eq(x * y, xm * ym)) - olderr = numpy.seterr(divide='ignore', invalid='ignore') - try: - self.assertTrue(eq(x / y, xm / ym)) - finally: - numpy.seterr(**olderr) - self.assertTrue(eq(a10 + y, a10 + ym)) - self.assertTrue(eq(a10 - y, a10 - ym)) - self.assertTrue(eq(a10 * y, a10 * ym)) - olderr = numpy.seterr(divide='ignore', invalid='ignore') - try: - self.assertTrue(eq(a10 / y, a10 / ym)) - finally: - numpy.seterr(**olderr) - self.assertTrue(eq(x + a10, xm + a10)) - self.assertTrue(eq(x - a10, xm - a10)) - self.assertTrue(eq(x * a10, xm * a10)) - self.assertTrue(eq(x / a10, xm / a10)) - self.assertTrue(eq(x ** 2, xm ** 2)) - self.assertTrue(eq(abs(x) ** 2.5, abs(xm) ** 2.5)) - self.assertTrue(eq(x ** y, xm ** ym)) - self.assertTrue(eq(numpy.add(x, y), add(xm, ym))) - self.assertTrue(eq(numpy.subtract(x, y), subtract(xm, ym))) - self.assertTrue(eq(numpy.multiply(x, y), multiply(xm, ym))) - olderr = numpy.seterr(divide='ignore', invalid='ignore') - try: - self.assertTrue(eq(numpy.divide(x, y), divide(xm, ym))) - finally: - numpy.seterr(**olderr) - - - def test_testMixedArithmetic(self): - na = numpy.array([1]) - ma = array([1]) - self.assertTrue(isinstance(na + ma, MaskedArray)) - self.assertTrue(isinstance(ma + na, MaskedArray)) - - def test_testUfuncs1 (self): - "Test various functions such as sin, cos." 
- (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d - self.assertTrue (eq(numpy.cos(x), cos(xm))) - self.assertTrue (eq(numpy.cosh(x), cosh(xm))) - self.assertTrue (eq(numpy.sin(x), sin(xm))) - self.assertTrue (eq(numpy.sinh(x), sinh(xm))) - self.assertTrue (eq(numpy.tan(x), tan(xm))) - self.assertTrue (eq(numpy.tanh(x), tanh(xm))) - olderr = numpy.seterr(divide='ignore', invalid='ignore') - try: - self.assertTrue (eq(numpy.sqrt(abs(x)), sqrt(xm))) - self.assertTrue (eq(numpy.log(abs(x)), log(xm))) - self.assertTrue (eq(numpy.log10(abs(x)), log10(xm))) - finally: - numpy.seterr(**olderr) - self.assertTrue (eq(numpy.exp(x), exp(xm))) - self.assertTrue (eq(numpy.arcsin(z), arcsin(zm))) - self.assertTrue (eq(numpy.arccos(z), arccos(zm))) - self.assertTrue (eq(numpy.arctan(z), arctan(zm))) - self.assertTrue (eq(numpy.arctan2(x, y), arctan2(xm, ym))) - self.assertTrue (eq(numpy.absolute(x), absolute(xm))) - self.assertTrue (eq(numpy.equal(x, y), equal(xm, ym))) - self.assertTrue (eq(numpy.not_equal(x, y), not_equal(xm, ym))) - self.assertTrue (eq(numpy.less(x, y), less(xm, ym))) - self.assertTrue (eq(numpy.greater(x, y), greater(xm, ym))) - self.assertTrue (eq(numpy.less_equal(x, y), less_equal(xm, ym))) - self.assertTrue (eq(numpy.greater_equal(x, y), greater_equal(xm, ym))) - self.assertTrue (eq(numpy.conjugate(x), conjugate(xm))) - self.assertTrue (eq(numpy.concatenate((x, y)), concatenate((xm, ym)))) - self.assertTrue (eq(numpy.concatenate((x, y)), concatenate((x, y)))) - self.assertTrue (eq(numpy.concatenate((x, y)), concatenate((xm, y)))) - self.assertTrue (eq(numpy.concatenate((x, y, x)), concatenate((x, ym, x)))) - - def test_xtestCount (self): - "Test count" - ott = array([0., 1., 2., 3.], mask=[1, 0, 0, 0]) - if sys.version_info[0] >= 3: - self.assertTrue(isinstance(count(ott), numpy.integer)) - else: - self.assertTrue(isinstance(count(ott), types.IntType)) - self.assertEqual(3, count(ott)) - self.assertEqual(1, count(1)) - self.assertTrue (eq(0, array(1, 
mask=[1]))) - ott = ott.reshape((2, 2)) - assert isinstance(count(ott, 0), numpy.ndarray) - if sys.version_info[0] >= 3: - assert isinstance(count(ott), numpy.integer) - else: - assert isinstance(count(ott), types.IntType) - self.assertTrue (eq(3, count(ott))) - assert getmask(count(ott, 0)) is nomask - self.assertTrue (eq([1, 2], count(ott, 0))) - - def test_testMinMax (self): - "Test minimum and maximum." - (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d - xr = numpy.ravel(x) #max doesn't work if shaped - xmr = ravel(xm) - - #true because of careful selection of data - self.assertTrue(eq(max(xr), maximum(xmr))) - - #true because of careful selection of data - self.assertTrue(eq(min(xr), minimum(xmr))) - - def test_testAddSumProd (self): - "Test add, sum, product." - (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d - self.assertTrue (eq(numpy.add.reduce(x), add.reduce(x))) - self.assertTrue (eq(numpy.add.accumulate(x), add.accumulate(x))) - self.assertTrue (eq(4, sum(array(4), axis=0))) - self.assertTrue (eq(4, sum(array(4), axis=0))) - self.assertTrue (eq(numpy.sum(x, axis=0), sum(x, axis=0))) - self.assertTrue (eq(numpy.sum(filled(xm, 0), axis=0), sum(xm, axis=0))) - self.assertTrue (eq(numpy.sum(x, 0), sum(x, 0))) - self.assertTrue (eq(numpy.product(x, axis=0), product(x, axis=0))) - self.assertTrue (eq(numpy.product(x, 0), product(x, 0))) - self.assertTrue (eq(numpy.product(filled(xm, 1), axis=0), - product(xm, axis=0))) - if len(s) > 1: - self.assertTrue (eq(numpy.concatenate((x, y), 1), - concatenate((xm, ym), 1))) - self.assertTrue (eq(numpy.add.reduce(x, 1), add.reduce(x, 1))) - self.assertTrue (eq(numpy.sum(x, 1), sum(x, 1))) - self.assertTrue (eq(numpy.product(x, 1), product(x, 1))) - - - def test_testCI(self): - "Test of conversions and indexing" - x1 = numpy.array([1, 2, 4, 3]) - x2 = array(x1, mask=[1, 0, 0, 0]) - x3 = array(x1, mask=[0, 1, 0, 1]) - x4 = array(x1) - # test conversion to strings - junk, garbage = str(x2), repr(x2) - assert 
eq(numpy.sort(x1), sort(x2, fill_value=0)) - # tests of indexing - assert type(x2[1]) is type(x1[1]) - assert x1[1] == x2[1] - assert x2[0] is masked - assert eq(x1[2], x2[2]) - assert eq(x1[2:5], x2[2:5]) - assert eq(x1[:], x2[:]) - assert eq(x1[1:], x3[1:]) - x1[2] = 9 - x2[2] = 9 - assert eq(x1, x2) - x1[1:3] = 99 - x2[1:3] = 99 - assert eq(x1, x2) - x2[1] = masked - assert eq(x1, x2) - x2[1:3] = masked - assert eq(x1, x2) - x2[:] = x1 - x2[1] = masked - assert allequal(getmask(x2), array([0, 1, 0, 0])) - x3[:] = masked_array([1, 2, 3, 4], [0, 1, 1, 0]) - assert allequal(getmask(x3), array([0, 1, 1, 0])) - x4[:] = masked_array([1, 2, 3, 4], [0, 1, 1, 0]) - assert allequal(getmask(x4), array([0, 1, 1, 0])) - assert allequal(x4, array([1, 2, 3, 4])) - x1 = numpy.arange(5) * 1.0 - x2 = masked_values(x1, 3.0) - assert eq(x1, x2) - assert allequal(array([0, 0, 0, 1, 0], MaskType), x2.mask) - assert eq(3.0, x2.fill_value) - x1 = array([1, 'hello', 2, 3], object) - x2 = numpy.array([1, 'hello', 2, 3], object) - s1 = x1[1] - s2 = x2[1] - self.assertEqual(type(s2), str) - self.assertEqual(type(s1), str) - self.assertEqual(s1, s2) - assert x1[1:1].shape == (0,) - - def test_testCopySize(self): - "Tests of some subtle points of copying and sizing." 
- n = [0, 0, 1, 0, 0] - m = make_mask(n) - m2 = make_mask(m) - self.assertTrue(m is m2) - m3 = make_mask(m, copy=1) - self.assertTrue(m is not m3) - - x1 = numpy.arange(5) - y1 = array(x1, mask=m) - self.assertTrue(y1._data is not x1) - self.assertTrue(allequal(x1, y1._data)) - self.assertTrue(y1.mask is m) - - y1a = array(y1, copy=0) - self.assertTrue(y1a.mask is y1.mask) - - y2 = array(x1, mask=m, copy=0) - self.assertTrue(y2.mask is m) - self.assertTrue(y2[2] is masked) - y2[2] = 9 - self.assertTrue(y2[2] is not masked) - self.assertTrue(y2.mask is not m) - self.assertTrue(allequal(y2.mask, 0)) - - y3 = array(x1 * 1.0, mask=m) - self.assertTrue(filled(y3).dtype is (x1 * 1.0).dtype) - - x4 = arange(4) - x4[2] = masked - y4 = resize(x4, (8,)) - self.assertTrue(eq(concatenate([x4, x4]), y4)) - self.assertTrue(eq(getmask(y4), [0, 0, 1, 0, 0, 0, 1, 0])) - y5 = repeat(x4, (2, 2, 2, 2), axis=0) - self.assertTrue(eq(y5, [0, 0, 1, 1, 2, 2, 3, 3])) - y6 = repeat(x4, 2, axis=0) - self.assertTrue(eq(y5, y6)) - - def test_testPut(self): - "Test of put" - d = arange(5) - n = [0, 0, 0, 1, 1] - m = make_mask(n) - x = array(d, mask=m) - self.assertTrue(x[3] is masked) - self.assertTrue(x[4] is masked) - x[[1, 4]] = [10, 40] - self.assertTrue(x.mask is not m) - self.assertTrue(x[3] is masked) - self.assertTrue(x[4] is not masked) - self.assertTrue(eq(x, [0, 10, 2, -1, 40])) - - x = array(d, mask=m) - x.put([0, 1, 2], [-1, 100, 200]) - self.assertTrue(eq(x, [-1, 100, 200, 0, 0])) - self.assertTrue(x[3] is masked) - self.assertTrue(x[4] is masked) - - def test_testMaPut(self): - (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d - m = [1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1] - i = numpy.nonzero(m)[0] - put(ym, i, zm) - assert all(take(ym, i, axis=0) == zm) - - def test_testOddFeatures(self): - "Test of other odd features" - x = arange(20); x = x.reshape(4, 5) - x.flat[5] = 12 - assert x[1, 0] == 12 - z = x + 10j * x - assert eq(z.real, x) - assert eq(z.imag, 10 * x) - assert eq((z * 
conjugate(z)).real, 101 * x * x) - z.imag[...] = 0.0 - - x = arange(10) - x[3] = masked - assert str(x[3]) == str(masked) - c = x >= 8 - assert count(where(c, masked, masked)) == 0 - assert shape(where(c, masked, masked)) == c.shape - z = where(c , x, masked) - assert z.dtype is x.dtype - assert z[3] is masked - assert z[4] is masked - assert z[7] is masked - assert z[8] is not masked - assert z[9] is not masked - assert eq(x, z) - z = where(c , masked, x) - assert z.dtype is x.dtype - assert z[3] is masked - assert z[4] is not masked - assert z[7] is not masked - assert z[8] is masked - assert z[9] is masked - z = masked_where(c, x) - assert z.dtype is x.dtype - assert z[3] is masked - assert z[4] is not masked - assert z[7] is not masked - assert z[8] is masked - assert z[9] is masked - assert eq(x, z) - x = array([1., 2., 3., 4., 5.]) - c = array([1, 1, 1, 0, 0]) - x[2] = masked - z = where(c, x, -x) - assert eq(z, [1., 2., 0., -4., -5]) - c[0] = masked - z = where(c, x, -x) - assert eq(z, [1., 2., 0., -4., -5]) - assert z[0] is masked - assert z[1] is not masked - assert z[2] is masked - assert eq(masked_where(greater(x, 2), x), masked_greater(x, 2)) - assert eq(masked_where(greater_equal(x, 2), x), - masked_greater_equal(x, 2)) - assert eq(masked_where(less(x, 2), x), masked_less(x, 2)) - assert eq(masked_where(less_equal(x, 2), x), masked_less_equal(x, 2)) - assert eq(masked_where(not_equal(x, 2), x), masked_not_equal(x, 2)) - assert eq(masked_where(equal(x, 2), x), masked_equal(x, 2)) - assert eq(masked_where(not_equal(x, 2), x), masked_not_equal(x, 2)) - assert eq(masked_inside(range(5), 1, 3), [0, 199, 199, 199, 4]) - assert eq(masked_outside(range(5), 1, 3), [199, 1, 2, 3, 199]) - assert eq(masked_inside(array(range(5), mask=[1, 0, 0, 0, 0]), 1, 3).mask, - [1, 1, 1, 1, 0]) - assert eq(masked_outside(array(range(5), mask=[0, 1, 0, 0, 0]), 1, 3).mask, - [1, 1, 0, 0, 1]) - assert eq(masked_equal(array(range(5), mask=[1, 0, 0, 0, 0]), 2).mask, - [1, 0, 1, 0, 
0]) - assert eq(masked_not_equal(array([2, 2, 1, 2, 1], mask=[1, 0, 0, 0, 0]), 2).mask, - [1, 0, 1, 0, 1]) - assert eq(masked_where([1, 1, 0, 0, 0], [1, 2, 3, 4, 5]), [99, 99, 3, 4, 5]) - atest = ones((10, 10, 10), dtype=float32) - btest = zeros(atest.shape, MaskType) - ctest = masked_where(btest, atest) - assert eq(atest, ctest) - z = choose(c, (-x, x)) - assert eq(z, [1., 2., 0., -4., -5]) - assert z[0] is masked - assert z[1] is not masked - assert z[2] is masked - x = arange(6) - x[5] = masked - y = arange(6) * 10 - y[2] = masked - c = array([1, 1, 1, 0, 0, 0], mask=[1, 0, 0, 0, 0, 0]) - cm = c.filled(1) - z = where(c, x, y) - zm = where(cm, x, y) - assert eq(z, zm) - assert getmask(zm) is nomask - assert eq(zm, [0, 1, 2, 30, 40, 50]) - z = where(c, masked, 1) - assert eq(z, [99, 99, 99, 1, 1, 1]) - z = where(c, 1, masked) - assert eq(z, [99, 1, 1, 99, 99, 99]) - - def test_testMinMax(self): - "Test of minumum, maximum." - assert eq(minimum([1, 2, 3], [4, 0, 9]), [1, 0, 3]) - assert eq(maximum([1, 2, 3], [4, 0, 9]), [4, 2, 9]) - x = arange(5) - y = arange(5) - 2 - x[3] = masked - y[0] = masked - assert eq(minimum(x, y), where(less(x, y), x, y)) - assert eq(maximum(x, y), where(greater(x, y), x, y)) - assert minimum(x) == 0 - assert maximum(x) == 4 - - def test_testTakeTransposeInnerOuter(self): - "Test of take, transpose, inner, outer products" - x = arange(24) - y = numpy.arange(24) - x[5:6] = masked - x = x.reshape(2, 3, 4) - y = y.reshape(2, 3, 4) - assert eq(numpy.transpose(y, (2, 0, 1)), transpose(x, (2, 0, 1))) - assert eq(numpy.take(y, (2, 0, 1), 1), take(x, (2, 0, 1), 1)) - assert eq(numpy.inner(filled(x, 0), filled(y, 0)), - inner(x, y)) - assert eq(numpy.outer(filled(x, 0), filled(y, 0)), - outer(x, y)) - y = array(['abc', 1, 'def', 2, 3], object) - y[2] = masked - t = take(y, [0, 3, 4]) - assert t[0] == 'abc' - assert t[1] == 2 - assert t[2] == 3 - - def test_testInplace(self): - """Test of inplace operations and rich comparisons""" - y = arange(10) 
- - x = arange(10) - xm = arange(10) - xm[2] = masked - x += 1 - assert eq(x, y + 1) - xm += 1 - assert eq(x, y + 1) - - x = arange(10) - xm = arange(10) - xm[2] = masked - x -= 1 - assert eq(x, y - 1) - xm -= 1 - assert eq(xm, y - 1) - - x = arange(10) * 1.0 - xm = arange(10) * 1.0 - xm[2] = masked - x *= 2.0 - assert eq(x, y * 2) - xm *= 2.0 - assert eq(xm, y * 2) - - x = arange(10) * 2 - xm = arange(10) - xm[2] = masked - x /= 2 - assert eq(x, y) - xm /= 2 - assert eq(x, y) - - x = arange(10) * 1.0 - xm = arange(10) * 1.0 - xm[2] = masked - x /= 2.0 - assert eq(x, y / 2.0) - xm /= arange(10) - assert eq(xm, ones((10,))) - - x = arange(10).astype(float32) - xm = arange(10) - xm[2] = masked - x += 1. - assert eq(x, y + 1.) - - def test_testPickle(self): - "Test of pickling" - import pickle - x = arange(12) - x[4:10:2] = masked - x = x.reshape(4, 3) - s = pickle.dumps(x) - y = pickle.loads(s) - assert eq(x, y) - - def test_testMasked(self): - "Test of masked element" - xx = arange(6) - xx[1] = masked - self.assertTrue(str(masked) == '--') - self.assertTrue(xx[1] is masked) - self.assertEqual(filled(xx[1], 0), 0) - # don't know why these should raise an exception... - #self.assertRaises(Exception, lambda x,y: x+y, masked, masked) - #self.assertRaises(Exception, lambda x,y: x+y, masked, 2) - #self.assertRaises(Exception, lambda x,y: x+y, masked, xx) - #self.assertRaises(Exception, lambda x,y: x+y, xx, masked) - - def test_testAverage1(self): - "Test of average." 
- ott = array([0., 1., 2., 3.], mask=[1, 0, 0, 0]) - self.assertTrue(eq(2.0, average(ott, axis=0))) - self.assertTrue(eq(2.0, average(ott, weights=[1., 1., 2., 1.]))) - result, wts = average(ott, weights=[1., 1., 2., 1.], returned=1) - self.assertTrue(eq(2.0, result)) - self.assertTrue(wts == 4.0) - ott[:] = masked - self.assertTrue(average(ott, axis=0) is masked) - ott = array([0., 1., 2., 3.], mask=[1, 0, 0, 0]) - ott = ott.reshape(2, 2) - ott[:, 1] = masked - self.assertTrue(eq(average(ott, axis=0), [2.0, 0.0])) - self.assertTrue(average(ott, axis=1)[0] is masked) - self.assertTrue(eq([2., 0.], average(ott, axis=0))) - result, wts = average(ott, axis=0, returned=1) - self.assertTrue(eq(wts, [1., 0.])) - - def test_testAverage2(self): - "More tests of average." - w1 = [0, 1, 1, 1, 1, 0] - w2 = [[0, 1, 1, 1, 1, 0], [1, 0, 0, 0, 0, 1]] - x = arange(6) - self.assertTrue(allclose(average(x, axis=0), 2.5)) - self.assertTrue(allclose(average(x, axis=0, weights=w1), 2.5)) - y = array([arange(6), 2.0 * arange(6)]) - self.assertTrue(allclose(average(y, None), - numpy.add.reduce(numpy.arange(6)) * 3. / 12.)) - self.assertTrue(allclose(average(y, axis=0), numpy.arange(6) * 3. / 2.)) - self.assertTrue(allclose(average(y, axis=1), - [average(x, axis=0), average(x, axis=0) * 2.0])) - self.assertTrue(allclose(average(y, None, weights=w2), 20. 
/ 6.)) - self.assertTrue(allclose(average(y, axis=0, weights=w2), - [0., 1., 2., 3., 4., 10.])) - self.assertTrue(allclose(average(y, axis=1), - [average(x, axis=0), average(x, axis=0) * 2.0])) - m1 = zeros(6) - m2 = [0, 0, 1, 1, 0, 0] - m3 = [[0, 0, 1, 1, 0, 0], [0, 1, 1, 1, 1, 0]] - m4 = ones(6) - m5 = [0, 1, 1, 1, 1, 1] - self.assertTrue(allclose(average(masked_array(x, m1), axis=0), 2.5)) - self.assertTrue(allclose(average(masked_array(x, m2), axis=0), 2.5)) - self.assertTrue(average(masked_array(x, m4), axis=0) is masked) - self.assertEqual(average(masked_array(x, m5), axis=0), 0.0) - self.assertEqual(count(average(masked_array(x, m4), axis=0)), 0) - z = masked_array(y, m3) - self.assertTrue(allclose(average(z, None), 20. / 6.)) - self.assertTrue(allclose(average(z, axis=0), [0., 1., 99., 99., 4.0, 7.5])) - self.assertTrue(allclose(average(z, axis=1), [2.5, 5.0])) - self.assertTrue(allclose(average(z, axis=0, weights=w2), - [0., 1., 99., 99., 4.0, 10.0])) - - a = arange(6) - b = arange(6) * 3 - r1, w1 = average([[a, b], [b, a]], axis=1, returned=1) - self.assertEqual(shape(r1) , shape(w1)) - self.assertEqual(r1.shape , w1.shape) - r2, w2 = average(ones((2, 2, 3)), axis=0, weights=[3, 1], returned=1) - self.assertEqual(shape(w2) , shape(r2)) - r2, w2 = average(ones((2, 2, 3)), returned=1) - self.assertEqual(shape(w2) , shape(r2)) - r2, w2 = average(ones((2, 2, 3)), weights=ones((2, 2, 3)), returned=1) - self.assertTrue(shape(w2) == shape(r2)) - a2d = array([[1, 2], [0, 4]], float) - a2dm = masked_array(a2d, [[0, 0], [1, 0]]) - a2da = average(a2d, axis=0) - self.assertTrue(eq (a2da, [0.5, 3.0])) - a2dma = average(a2dm, axis=0) - self.assertTrue(eq(a2dma, [1.0, 3.0])) - a2dma = average(a2dm, axis=None) - self.assertTrue(eq(a2dma, 7. 
/ 3.)) - a2dma = average(a2dm, axis=1) - self.assertTrue(eq(a2dma, [1.5, 4.0])) - - def test_testToPython(self): - self.assertEqual(1, int(array(1))) - self.assertEqual(1.0, float(array(1))) - self.assertEqual(1, int(array([[[1]]]))) - self.assertEqual(1.0, float(array([[1]]))) - self.assertRaises(TypeError, float, array([1, 1])) - self.assertRaises(ValueError, bool, array([0, 1])) - self.assertRaises(ValueError, bool, array([0, 0], mask=[0, 1])) - - def test_testScalarArithmetic(self): - xm = array(0, mask=1) - #TODO FIXME: Find out what the following raises a warning in r8247 - err_status = numpy.geterr() - try: - numpy.seterr(divide='ignore') - self.assertTrue((1 / array(0)).mask) - finally: - numpy.seterr(**err_status) - self.assertTrue((1 + xm).mask) - self.assertTrue((-xm).mask) - self.assertTrue((-xm).mask) - self.assertTrue(maximum(xm, xm).mask) - self.assertTrue(minimum(xm, xm).mask) - self.assertTrue(xm.filled().dtype is xm._data.dtype) - x = array(0, mask=0) - self.assertTrue(x.filled() == x._data) - self.assertEqual(str(xm), str(masked_print_option)) - - def test_testArrayMethods(self): - a = array([1, 3, 2]) - b = array([1, 3, 2], mask=[1, 0, 1]) - self.assertTrue(eq(a.any(), a._data.any())) - self.assertTrue(eq(a.all(), a._data.all())) - self.assertTrue(eq(a.argmax(), a._data.argmax())) - self.assertTrue(eq(a.argmin(), a._data.argmin())) - self.assertTrue(eq(a.choose(0, 1, 2, 3, 4), a._data.choose(0, 1, 2, 3, 4))) - self.assertTrue(eq(a.compress([1, 0, 1]), a._data.compress([1, 0, 1]))) - self.assertTrue(eq(a.conj(), a._data.conj())) - self.assertTrue(eq(a.conjugate(), a._data.conjugate())) - m = array([[1, 2], [3, 4]]) - self.assertTrue(eq(m.diagonal(), m._data.diagonal())) - self.assertTrue(eq(a.sum(), a._data.sum())) - self.assertTrue(eq(a.take([1, 2]), a._data.take([1, 2]))) - self.assertTrue(eq(m.transpose(), m._data.transpose())) - - def test_testArrayAttributes(self): - a = array([1, 3, 2]) - b = array([1, 3, 2], mask=[1, 0, 1]) - 
self.assertEqual(a.ndim, 1) - - def test_testAPI(self): - self.assertFalse([m for m in dir(numpy.ndarray) - if m not in dir(MaskedArray) and not m.startswith('_')]) - - def test_testSingleElementSubscript(self): - a = array([1, 3, 2]) - b = array([1, 3, 2], mask=[1, 0, 1]) - self.assertEqual(a[0].shape, ()) - self.assertEqual(b[0].shape, ()) - self.assertEqual(b[1].shape, ()) - -class TestUfuncs(TestCase): - def setUp(self): - self.d = (array([1.0, 0, -1, pi / 2] * 2, mask=[0, 1] + [0] * 6), - array([1.0, 0, -1, pi / 2] * 2, mask=[1, 0] + [0] * 6),) - - - def test_testUfuncRegression(self): - f_invalid_ignore = ['sqrt', 'arctanh', 'arcsin', 'arccos', - 'arccosh', 'arctanh', 'log', 'log10', 'divide', - 'true_divide', 'floor_divide', 'remainder', 'fmod'] - for f in ['sqrt', 'log', 'log10', 'exp', 'conjugate', - 'sin', 'cos', 'tan', - 'arcsin', 'arccos', 'arctan', - 'sinh', 'cosh', 'tanh', - 'arcsinh', - 'arccosh', - 'arctanh', - 'absolute', 'fabs', 'negative', - # 'nonzero', 'around', - 'floor', 'ceil', - # 'sometrue', 'alltrue', - 'logical_not', - 'add', 'subtract', 'multiply', - 'divide', 'true_divide', 'floor_divide', - 'remainder', 'fmod', 'hypot', 'arctan2', - 'equal', 'not_equal', 'less_equal', 'greater_equal', - 'less', 'greater', - 'logical_and', 'logical_or', 'logical_xor', - ]: - try: - uf = getattr(umath, f) - except AttributeError: - uf = getattr(fromnumeric, f) - mf = getattr(numpy.ma, f) - args = self.d[:uf.nin] - olderr = numpy.geterr() - try: - if f in f_invalid_ignore: - numpy.seterr(invalid='ignore') - if f in ['arctanh', 'log', 'log10']: - numpy.seterr(divide='ignore') - ur = uf(*args) - mr = mf(*args) - finally: - numpy.seterr(**olderr) - self.assertTrue(eq(ur.filled(0), mr.filled(0), f)) - self.assertTrue(eqmask(ur.mask, mr.mask)) - - def test_reduce(self): - a = self.d[0] - self.assertFalse(alltrue(a, axis=0)) - self.assertTrue(sometrue(a, axis=0)) - self.assertEqual(sum(a[:3], axis=0), 0) - self.assertEqual(product(a, axis=0), 0) - - def 
test_minmax(self): - a = arange(1, 13).reshape(3, 4) - amask = masked_where(a < 5, a) - self.assertEqual(amask.max(), a.max()) - self.assertEqual(amask.min(), 5) - self.assertTrue((amask.max(0) == a.max(0)).all()) - self.assertTrue((amask.min(0) == [5, 6, 7, 8]).all()) - self.assertTrue(amask.max(1)[0].mask) - self.assertTrue(amask.min(1)[0].mask) - - def test_nonzero(self): - for t in "?bhilqpBHILQPfdgFDGO": - x = array([1, 0, 2, 0], mask=[0, 0, 1, 1]) - self.assertTrue(eq(nonzero(x), [0])) - - -class TestArrayMethods(TestCase): - - def setUp(self): - x = numpy.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, - 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, - 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, - 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479, - 7.189, 9.645, 5.395, 4.961, 9.894, 2.893, - 7.357, 9.828, 6.272, 3.758, 6.693, 0.993]) - X = x.reshape(6, 6) - XX = x.reshape(3, 2, 2, 3) - - m = numpy.array([0, 1, 0, 1, 0, 0, - 1, 0, 1, 1, 0, 1, - 0, 0, 0, 1, 0, 1, - 0, 0, 0, 1, 1, 1, - 1, 0, 0, 1, 0, 0, - 0, 0, 1, 0, 1, 0]) - mx = array(data=x, mask=m) - mX = array(data=X, mask=m.reshape(X.shape)) - mXX = array(data=XX, mask=m.reshape(XX.shape)) - - m2 = numpy.array([1, 1, 0, 1, 0, 0, - 1, 1, 1, 1, 0, 1, - 0, 0, 1, 1, 0, 1, - 0, 0, 0, 1, 1, 1, - 1, 0, 0, 1, 1, 0, - 0, 0, 1, 0, 1, 1]) - m2x = array(data=x, mask=m2) - m2X = array(data=X, mask=m2.reshape(X.shape)) - m2XX = array(data=XX, mask=m2.reshape(XX.shape)) - self.d = (x, X, XX, m, mx, mX, mXX) - - #------------------------------------------------------ - def test_trace(self): - (x, X, XX, m, mx, mX, mXX,) = self.d - mXdiag = mX.diagonal() - self.assertEqual(mX.trace(), mX.diagonal().compressed().sum()) - self.assertTrue(eq(mX.trace(), - X.trace() - sum(mXdiag.mask * X.diagonal(), axis=0))) - - def test_clip(self): - (x, X, XX, m, mx, mX, mXX,) = self.d - clipped = mx.clip(2, 8) - self.assertTrue(eq(clipped.mask, mx.mask)) - self.assertTrue(eq(clipped._data, x.clip(2, 8))) - self.assertTrue(eq(clipped._data, 
mx._data.clip(2, 8))) - - def test_ptp(self): - (x, X, XX, m, mx, mX, mXX,) = self.d - (n, m) = X.shape - self.assertEqual(mx.ptp(), mx.compressed().ptp()) - rows = numpy.zeros(n, numpy.float_) - cols = numpy.zeros(m, numpy.float_) - for k in range(m): - cols[k] = mX[:, k].compressed().ptp() - for k in range(n): - rows[k] = mX[k].compressed().ptp() - self.assertTrue(eq(mX.ptp(0), cols)) - self.assertTrue(eq(mX.ptp(1), rows)) - - def test_swapaxes(self): - (x, X, XX, m, mx, mX, mXX,) = self.d - mXswapped = mX.swapaxes(0, 1) - self.assertTrue(eq(mXswapped[-1], mX[:, -1])) - mXXswapped = mXX.swapaxes(0, 2) - self.assertEqual(mXXswapped.shape, (2, 2, 3, 3)) - - - def test_cumprod(self): - (x, X, XX, m, mx, mX, mXX,) = self.d - mXcp = mX.cumprod(0) - self.assertTrue(eq(mXcp._data, mX.filled(1).cumprod(0))) - mXcp = mX.cumprod(1) - self.assertTrue(eq(mXcp._data, mX.filled(1).cumprod(1))) - - def test_cumsum(self): - (x, X, XX, m, mx, mX, mXX,) = self.d - mXcp = mX.cumsum(0) - self.assertTrue(eq(mXcp._data, mX.filled(0).cumsum(0))) - mXcp = mX.cumsum(1) - self.assertTrue(eq(mXcp._data, mX.filled(0).cumsum(1))) - - def test_varstd(self): - (x, X, XX, m, mx, mX, mXX,) = self.d - self.assertTrue(eq(mX.var(axis=None), mX.compressed().var())) - self.assertTrue(eq(mX.std(axis=None), mX.compressed().std())) - self.assertTrue(eq(mXX.var(axis=3).shape, XX.var(axis=3).shape)) - self.assertTrue(eq(mX.var().shape, X.var().shape)) - (mXvar0, mXvar1) = (mX.var(axis=0), mX.var(axis=1)) - for k in range(6): - self.assertTrue(eq(mXvar1[k], mX[k].compressed().var())) - self.assertTrue(eq(mXvar0[k], mX[:, k].compressed().var())) - self.assertTrue(eq(numpy.sqrt(mXvar0[k]), - mX[:, k].compressed().std())) - - -def eqmask(m1, m2): - if m1 is nomask: - return m2 is nomask - if m2 is nomask: - return m1 is nomask - return (m1 == m2).all() - -#def timingTest(): -# for f in [testf, testinplace]: -# for n in [1000,10000,50000]: -# t = testta(n, f) -# t1 = testtb(n, f) -# t2 = testtc(n, f) -# print 
f.test_name -# print """\ -#n = %7d -#numpy time (ms) %6.1f -#MA maskless ratio %6.1f -#MA masked ratio %6.1f -#""" % (n, t*1000.0, t1/t, t2/t) - -#def testta(n, f): -# x=numpy.arange(n) + 1.0 -# tn0 = time.time() -# z = f(x) -# return time.time() - tn0 - -#def testtb(n, f): -# x=arange(n) + 1.0 -# tn0 = time.time() -# z = f(x) -# return time.time() - tn0 - -#def testtc(n, f): -# x=arange(n) + 1.0 -# x[0] = masked -# tn0 = time.time() -# z = f(x) -# return time.time() - tn0 - -#def testf(x): -# for i in range(25): -# y = x **2 + 2.0 * x - 1.0 -# w = x **2 + 1.0 -# z = (y / w) ** 2 -# return z -#testf.test_name = 'Simple arithmetic' - -#def testinplace(x): -# for i in range(25): -# y = x**2 -# y += 2.0*x -# y -= 1.0 -# y /= x -# return y -#testinplace.test_name = 'Inplace operations' - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/ma/tests/test_regression.py b/pythonPackages/numpy/numpy/ma/tests/test_regression.py deleted file mode 100755 index 4fe26367f7..0000000000 --- a/pythonPackages/numpy/numpy/ma/tests/test_regression.py +++ /dev/null @@ -1,39 +0,0 @@ -from numpy.testing import * -import numpy as np - -rlevel = 1 - -class TestRegression(TestCase): - def test_masked_array_create(self,level=rlevel): - """Ticket #17""" - x = np.ma.masked_array([0,1,2,3,0,4,5,6],mask=[0,0,0,1,1,1,0,0]) - assert_array_equal(np.ma.nonzero(x),[[1,2,6,7]]) - - def test_masked_array(self,level=rlevel): - """Ticket #61""" - x = np.ma.array(1,mask=[1]) - - def test_mem_masked_where(self,level=rlevel): - """Ticket #62""" - from numpy.ma import masked_where, MaskType - a = np.zeros((1,1)) - b = np.zeros(a.shape, MaskType) - c = masked_where(b,a) - a-c - - def test_masked_array_multiply(self,level=rlevel): - """Ticket #254""" - a = np.ma.zeros((4,1)) - a[2,0] = np.ma.masked - b = np.zeros((4,2)) - a*b - b*a - - def test_masked_array_repeat(self, level=rlevel): - """Ticket #271""" - np.ma.array([1],mask=False).repeat(10) - - def 
test_masked_array_repr_unicode(self): - """Ticket #1256""" - repr(np.ma.array(u"Unicode")) - diff --git a/pythonPackages/numpy/numpy/ma/tests/test_subclassing.py b/pythonPackages/numpy/numpy/ma/tests/test_subclassing.py deleted file mode 100755 index 146ea30519..0000000000 --- a/pythonPackages/numpy/numpy/ma/tests/test_subclassing.py +++ /dev/null @@ -1,177 +0,0 @@ -# pylint: disable-msg=W0611, W0612, W0511,R0201 -"""Tests suite for MaskedArray & subclassing. - -:author: Pierre Gerard-Marchant -:contact: pierregm_at_uga_dot_edu -:version: $Id: test_subclassing.py 3473 2007-10-29 15:18:13Z jarrod.millman $ -""" -__author__ = "Pierre GF Gerard-Marchant ($Author: jarrod.millman $)" -__version__ = '1.0' -__revision__ = "$Revision: 3473 $" -__date__ = '$Date: 2007-10-29 17:18:13 +0200 (Mon, 29 Oct 2007) $' - -import numpy as np -from numpy.testing import * -from numpy.ma.testutils import * -from numpy.ma.core import * - -class SubArray(np.ndarray): - """Defines a generic np.ndarray subclass, that stores some metadata - in the dictionary `info`.""" - def __new__(cls,arr,info={}): - x = np.asanyarray(arr).view(cls) - x.info = info - return x - def __array_finalize__(self, obj): - self.info = getattr(obj,'info',{}) - return - def __add__(self, other): - result = np.ndarray.__add__(self, other) - result.info.update({'added':result.info.pop('added',0)+1}) - return result - -subarray = SubArray - -class MSubArray(SubArray,MaskedArray): - def __new__(cls, data, info={}, mask=nomask): - subarr = SubArray(data, info) - _data = MaskedArray.__new__(cls, data=subarr, mask=mask) - _data.info = subarr.info - return _data - def __array_finalize__(self,obj): - MaskedArray.__array_finalize__(self,obj) - SubArray.__array_finalize__(self, obj) - return - def _get_series(self): - _view = self.view(MaskedArray) - _view._sharedmask = False - return _view - _series = property(fget=_get_series) - -msubarray = MSubArray - -class MMatrix(MaskedArray, np.matrix,): - def __new__(cls, data, 
mask=nomask): - mat = np.matrix(data) - _data = MaskedArray.__new__(cls, data=mat, mask=mask) - return _data - def __array_finalize__(self,obj): - np.matrix.__array_finalize__(self, obj) - MaskedArray.__array_finalize__(self,obj) - return - def _get_series(self): - _view = self.view(MaskedArray) - _view._sharedmask = False - return _view - _series = property(fget=_get_series) - -mmatrix = MMatrix - -class TestSubclassing(TestCase): - """Test suite for masked subclasses of ndarray.""" - - def setUp(self): - x = np.arange(5) - mx = mmatrix(x, mask=[0, 1, 0, 0, 0]) - self.data = (x, mx) - - def test_data_subclassing(self): - "Tests whether the subclass is kept." - x = np.arange(5) - m = [0,0,1,0,0] - xsub = SubArray(x) - xmsub = masked_array(xsub, mask=m) - self.assertTrue(isinstance(xmsub, MaskedArray)) - assert_equal(xmsub._data, xsub) - self.assertTrue(isinstance(xmsub._data, SubArray)) - - def test_maskedarray_subclassing(self): - "Tests subclassing MaskedArray" - (x, mx) = self.data - self.assertTrue(isinstance(mx._data, np.matrix)) - - def test_masked_unary_operations(self): - "Tests masked_unary_operation" - (x, mx) = self.data - self.assertTrue(isinstance(log(mx), mmatrix)) - assert_equal(log(x), np.log(x)) - - def test_masked_binary_operations(self): - "Tests masked_binary_operation" - (x, mx) = self.data - # Result should be a mmatrix - self.assertTrue(isinstance(add(mx,mx), mmatrix)) - self.assertTrue(isinstance(add(mx,x), mmatrix)) - # Result should work - assert_equal(add(mx,x), mx+x) - self.assertTrue(isinstance(add(mx,mx)._data, np.matrix)) - self.assertTrue(isinstance(add.outer(mx,mx), mmatrix)) - self.assertTrue(isinstance(hypot(mx,mx), mmatrix)) - self.assertTrue(isinstance(hypot(mx,x), mmatrix)) - - def test_domained_masked_binary_operations(self): - "Tests domained_masked_binary_operation" - (x, mx) = self.data - xmx = masked_array(mx.data.__array__(), mask=mx.mask) - self.assertTrue(isinstance(divide(mx,mx), mmatrix)) - 
self.assertTrue(isinstance(divide(mx,x), mmatrix)) - assert_equal(divide(mx, mx), divide(xmx, xmx)) - - def test_attributepropagation(self): - x = array(arange(5), mask=[0]+[1]*4) - my = masked_array(subarray(x)) - ym = msubarray(x) - # - z = (my+1) - self.assertTrue(isinstance(z,MaskedArray)) - self.assertTrue(not isinstance(z, MSubArray)) - self.assertTrue(isinstance(z._data, SubArray)) - assert_equal(z._data.info, {}) - # - z = (ym+1) - self.assertTrue(isinstance(z, MaskedArray)) - self.assertTrue(isinstance(z, MSubArray)) - self.assertTrue(isinstance(z._data, SubArray)) - self.assertTrue(z._data.info['added'] > 0) - # - ym._set_mask([1,0,0,0,1]) - assert_equal(ym._mask, [1,0,0,0,1]) - ym._series._set_mask([0,0,0,0,1]) - assert_equal(ym._mask, [0,0,0,0,1]) - # - xsub = subarray(x, info={'name':'x'}) - mxsub = masked_array(xsub) - self.assertTrue(hasattr(mxsub, 'info')) - assert_equal(mxsub.info, xsub.info) - - def test_subclasspreservation(self): - "Checks that masked_array(...,subok=True) preserves the class." 
- x = np.arange(5) - m = [0,0,1,0,0] - xinfo = [(i,j) for (i,j) in zip(x,m)] - xsub = MSubArray(x, mask=m, info={'xsub':xinfo}) - # - mxsub = masked_array(xsub, subok=False) - self.assertTrue(not isinstance(mxsub, MSubArray)) - self.assertTrue(isinstance(mxsub, MaskedArray)) - assert_equal(mxsub._mask, m) - # - mxsub = asarray(xsub) - self.assertTrue(not isinstance(mxsub, MSubArray)) - self.assertTrue(isinstance(mxsub, MaskedArray)) - assert_equal(mxsub._mask, m) - # - mxsub = masked_array(xsub, subok=True) - self.assertTrue(isinstance(mxsub, MSubArray)) - assert_equal(mxsub.info, xsub.info) - assert_equal(mxsub._mask, xsub._mask) - # - mxsub = asanyarray(xsub) - self.assertTrue(isinstance(mxsub, MSubArray)) - assert_equal(mxsub.info, xsub.info) - assert_equal(mxsub._mask, m) - - -################################################################################ -if __name__ == '__main__': - run_module_suite() diff --git a/pythonPackages/numpy/numpy/ma/testutils.py b/pythonPackages/numpy/numpy/ma/testutils.py deleted file mode 100755 index 1918e97a0c..0000000000 --- a/pythonPackages/numpy/numpy/ma/testutils.py +++ /dev/null @@ -1,237 +0,0 @@ -"""Miscellaneous functions for testing masked arrays and subclasses - -:author: Pierre Gerard-Marchant -:contact: pierregm_at_uga_dot_edu -:version: $Id: testutils.py 3529 2007-11-13 08:01:14Z jarrod.millman $ -""" -__author__ = "Pierre GF Gerard-Marchant ($Author: jarrod.millman $)" -__version__ = "1.0" -__revision__ = "$Revision: 3529 $" -__date__ = "$Date: 2007-11-13 10:01:14 +0200 (Tue, 13 Nov 2007) $" - - -import operator - -import numpy as np -from numpy import ndarray, float_ -import numpy.core.umath as umath -from numpy.testing import * -import numpy.testing.utils as utils - -from core import mask_or, getmask, masked_array, nomask, masked, filled, \ - equal, less - -#------------------------------------------------------------------------------ -def approx (a, b, fill_value=True, rtol=1.e-5, atol=1.e-8): - """Returns 
true if all components of a and b are equal subject to given tolerances. - -If fill_value is True, masked values are considered equal. Otherwise, masked values -are considered unequal. -The relative error rtol should be positive and << 1.0 -The absolute error atol comes into play for those elements of b that are very -small or zero; it says how small a must be also. - """ - m = mask_or(getmask(a), getmask(b)) - d1 = filled(a) - d2 = filled(b) - if d1.dtype.char == "O" or d2.dtype.char == "O": - return np.equal(d1,d2).ravel() - x = filled(masked_array(d1, copy=False, mask=m), fill_value).astype(float_) - y = filled(masked_array(d2, copy=False, mask=m), 1).astype(float_) - d = np.less_equal(umath.absolute(x-y), atol + rtol * umath.absolute(y)) - return d.ravel() - - -def almost(a, b, decimal=6, fill_value=True): - """Returns True if a and b are equal up to decimal places. -If fill_value is True, masked values are considered equal. Otherwise, masked values -are considered unequal. - """ - m = mask_or(getmask(a), getmask(b)) - d1 = filled(a) - d2 = filled(b) - if d1.dtype.char == "O" or d2.dtype.char == "O": - return np.equal(d1,d2).ravel() - x = filled(masked_array(d1, copy=False, mask=m), fill_value).astype(float_) - y = filled(masked_array(d2, copy=False, mask=m), 1).astype(float_) - d = np.around(np.abs(x-y),decimal) <= 10.0**(-decimal) - return d.ravel() - - -#................................................ -def _assert_equal_on_sequences(actual, desired, err_msg=''): - "Asserts the equality of two non-array sequences." - assert_equal(len(actual),len(desired),err_msg) - for k in range(len(desired)): - assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k,err_msg)) - return - -def assert_equal_records(a,b): - """Asserts that two records are equal.
Pretty crude for now.""" - assert_equal(a.dtype, b.dtype) - for f in a.dtype.names: - (af, bf) = (operator.getitem(a,f), operator.getitem(b,f)) - if not (af is masked) and not (bf is masked): - assert_equal(operator.getitem(a,f), operator.getitem(b,f)) - return - - -def assert_equal(actual,desired,err_msg=''): - """Asserts that two items are equal. - """ - # Case #1: dictionary ..... - if isinstance(desired, dict): - if not isinstance(actual, dict): - raise AssertionError(repr(type(actual))) - assert_equal(len(actual),len(desired),err_msg) - for k,i in desired.items(): - if not k in actual: - raise AssertionError("%s not in %s" % (k,actual)) - assert_equal(actual[k], desired[k], 'key=%r\n%s' % (k,err_msg)) - return - # Case #2: lists ..... - if isinstance(desired, (list,tuple)) and isinstance(actual, (list,tuple)): - return _assert_equal_on_sequences(actual, desired, err_msg='') - if not (isinstance(actual, ndarray) or isinstance(desired, ndarray)): - msg = build_err_msg([actual, desired], err_msg,) - if not desired == actual: - raise AssertionError(msg) - return - # Case #4. 
arrays or equivalent - if ((actual is masked) and not (desired is masked)) or \ - ((desired is masked) and not (actual is masked)): - msg = build_err_msg([actual, desired], - err_msg, header='', names=('x', 'y')) - raise ValueError(msg) - actual = np.array(actual, copy=False, subok=True) - desired = np.array(desired, copy=False, subok=True) - (actual_dtype, desired_dtype) = (actual.dtype, desired.dtype) - if actual_dtype.char == "S" and desired_dtype.char == "S": - return _assert_equal_on_sequences(actual.tolist(), - desired.tolist(), - err_msg='') -# elif actual_dtype.char in "OV" and desired_dtype.char in "OV": -# if (actual_dtype != desired_dtype) and actual_dtype: -# msg = build_err_msg([actual_dtype, desired_dtype], -# err_msg, header='', names=('actual', 'desired')) -# raise ValueError(msg) -# return _assert_equal_on_sequences(actual.tolist(), -# desired.tolist(), -# err_msg='') - return assert_array_equal(actual, desired, err_msg) - - -def fail_if_equal(actual,desired,err_msg='',): - """Raises an assertion error if two items are equal. 
- """ - if isinstance(desired, dict): - if not isinstance(actual, dict): - raise AssertionError(repr(type(actual))) - fail_if_equal(len(actual),len(desired),err_msg) - for k,i in desired.items(): - if not k in actual: - raise AssertionError(repr(k)) - fail_if_equal(actual[k], desired[k], 'key=%r\n%s' % (k,err_msg)) - return - if isinstance(desired, (list,tuple)) and isinstance(actual, (list,tuple)): - fail_if_equal(len(actual),len(desired),err_msg) - for k in range(len(desired)): - fail_if_equal(actual[k], desired[k], 'item=%r\n%s' % (k,err_msg)) - return - if isinstance(actual, np.ndarray) or isinstance(desired, np.ndarray): - return fail_if_array_equal(actual, desired, err_msg) - msg = build_err_msg([actual, desired], err_msg) - if not desired != actual: - raise AssertionError(msg) -assert_not_equal = fail_if_equal - - -def assert_almost_equal(actual, desired, decimal=7, err_msg='', verbose=True): - """Asserts that two items are almost equal. - The test is equivalent to abs(desired-actual) < 0.5 * 10**(-decimal) - """ - if isinstance(actual, np.ndarray) or isinstance(desired, np.ndarray): - return assert_array_almost_equal(actual, desired, decimal=decimal, - err_msg=err_msg, verbose=verbose) - msg = build_err_msg([actual, desired], - err_msg=err_msg, verbose=verbose) - if not round(abs(desired - actual),decimal) == 0: - raise AssertionError(msg) - - -assert_close = assert_almost_equal - - -def assert_array_compare(comparison, x, y, err_msg='', verbose=True, header='', - fill_value=True): - """Asserts that a comparison relation between two masked arrays is satisfied - elementwise.""" - # Fill the data first -# xf = filled(x) -# yf = filled(y) - # Allocate a common mask and refill - m = mask_or(getmask(x), getmask(y)) - x = masked_array(x, copy=False, mask=m, keep_mask=False, subok=False) - y = masked_array(y, copy=False, mask=m, keep_mask=False, subok=False) - if ((x is masked) and not (y is masked)) or \ - ((y is masked) and not (x is masked)): - msg = 
build_err_msg([x, y], err_msg=err_msg, verbose=verbose, - header=header, names=('x', 'y')) - raise ValueError(msg) - # OK, now run the basic tests on filled versions - return utils.assert_array_compare(comparison, - x.filled(fill_value), - y.filled(fill_value), - err_msg=err_msg, - verbose=verbose, header=header) - - -def assert_array_equal(x, y, err_msg='', verbose=True): - """Checks the elementwise equality of two masked arrays.""" - assert_array_compare(operator.__eq__, x, y, - err_msg=err_msg, verbose=verbose, - header='Arrays are not equal') - - -def fail_if_array_equal(x, y, err_msg='', verbose=True): - "Raises an assertion error if two masked arrays are not equal (elementwise)." - def compare(x,y): - return (not np.alltrue(approx(x, y))) - assert_array_compare(compare, x, y, err_msg=err_msg, verbose=verbose, - header='Arrays are not equal') - - -def assert_array_approx_equal(x, y, decimal=6, err_msg='', verbose=True): - """Checks the elementwise equality of two masked arrays, up to a given - number of decimals.""" - def compare(x, y): - "Returns the result of the loose comparison between x and y)." - return approx(x,y, rtol=10.**-decimal) - assert_array_compare(compare, x, y, err_msg=err_msg, verbose=verbose, - header='Arrays are not almost equal') - - -def assert_array_almost_equal(x, y, decimal=6, err_msg='', verbose=True): - """Checks the elementwise equality of two masked arrays, up to a given - number of decimals.""" - def compare(x, y): - "Returns the result of the loose comparison between x and y)." - return almost(x,y,decimal) - assert_array_compare(compare, x, y, err_msg=err_msg, verbose=verbose, - header='Arrays are not almost equal') - - -def assert_array_less(x, y, err_msg='', verbose=True): - "Checks that x is smaller than y elementwise." 
- assert_array_compare(operator.__lt__, x, y, - err_msg=err_msg, verbose=verbose, - header='Arrays are not less-ordered') - - -def assert_mask_equal(m1, m2, err_msg=''): - """Asserts the equality of two masks.""" - if m1 is nomask: - assert(m2 is nomask) - if m2 is nomask: - assert(m1 is nomask) - assert_array_equal(m1, m2, err_msg=err_msg) diff --git a/pythonPackages/numpy/numpy/ma/timer_comparison.py b/pythonPackages/numpy/numpy/ma/timer_comparison.py deleted file mode 100755 index 57cccf0b40..0000000000 --- a/pythonPackages/numpy/numpy/ma/timer_comparison.py +++ /dev/null @@ -1,458 +0,0 @@ -import timeit - -import sys -import numpy as np -from numpy import float_ -import numpy.core.fromnumeric as fromnumeric - -from numpy.testing.utils import build_err_msg - -np.seterr(all='ignore') - -pi = np.pi - -if sys.version_info[0] >= 3: - from functools import reduce - -class moduletester: - def __init__(self, module): - self.module = module - self.allequal = module.allequal - self.arange = module.arange - self.array = module.array -# self.average = module.average - self.concatenate = module.concatenate - self.count = module.count - self.equal = module.equal - self.filled = module.filled - self.getmask = module.getmask - self.getmaskarray = module.getmaskarray - self.id = id - self.inner = module.inner - self.make_mask = module.make_mask - self.masked = module.masked - self.masked_array = module.masked_array - self.masked_values = module.masked_values - self.mask_or = module.mask_or - self.nomask = module.nomask - self.ones = module.ones - self.outer = module.outer - self.repeat = module.repeat - self.resize = module.resize - self.sort = module.sort - self.take = module.take - self.transpose = module.transpose - self.zeros = module.zeros - self.MaskType = module.MaskType - try: - self.umath = module.umath - except AttributeError: - self.umath = module.core.umath - self.testnames = [] - - def assert_array_compare(self, comparison, x, y, err_msg='', header='', - 
fill_value=True): - """Asserts that a comparison relation between two masked arrays is satisfied - elementwise.""" - xf = self.filled(x) - yf = self.filled(y) - m = self.mask_or(self.getmask(x), self.getmask(y)) - - x = self.filled(self.masked_array(xf, mask=m), fill_value) - y = self.filled(self.masked_array(yf, mask=m), fill_value) - if (x.dtype.char != "O"): - x = x.astype(float_) - if isinstance(x, np.ndarray) and x.size > 1: - x[np.isnan(x)] = 0 - elif np.isnan(x): - x = 0 - if (y.dtype.char != "O"): - y = y.astype(float_) - if isinstance(y, np.ndarray) and y.size > 1: - y[np.isnan(y)] = 0 - elif np.isnan(y): - y = 0 - try: - cond = (x.shape==() or y.shape==()) or x.shape == y.shape - if not cond: - msg = build_err_msg([x, y], - err_msg - + '\n(shapes %s, %s mismatch)' % (x.shape, - y.shape), - header=header, - names=('x', 'y')) - assert cond, msg - val = comparison(x,y) - if m is not self.nomask and fill_value: - val = self.masked_array(val, mask=m) - if isinstance(val, bool): - cond = val - reduced = [0] - else: - reduced = val.ravel() - cond = reduced.all() - reduced = reduced.tolist() - if not cond: - match = 100-100.0*reduced.count(1)/len(reduced) - msg = build_err_msg([x, y], - err_msg - + '\n(mismatch %s%%)' % (match,), - header=header, - names=('x', 'y')) - assert cond, msg - except ValueError: - msg = build_err_msg([x, y], err_msg, header=header, names=('x', 'y')) - raise ValueError(msg) - - def assert_array_equal(self, x, y, err_msg=''): - """Checks the elementwise equality of two masked arrays.""" - self.assert_array_compare(self.equal, x, y, err_msg=err_msg, - header='Arrays are not equal') - - def test_0(self): - "Tests creation" - x = np.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) - m = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] - xm = self.masked_array(x, mask=m) - xm[0] - - def test_1(self): - "Tests creation" - x = np.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) - y = np.array([5.,0.,3., 2., -1., -4., 0., -10., 10., 
1., 0., 3.]) - a10 = 10. - m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] - m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 ,0, 1] - xm = self.masked_array(x, mask=m1) - ym = self.masked_array(y, mask=m2) - z = np.array([-.5, 0., .5, .8]) - zm = self.masked_array(z, mask=[0,1,0,0]) - xf = np.where(m1, 1.e+20, x) - xm.set_fill_value(1.e+20) - - assert((xm-ym).filled(0).any()) - #fail_if_equal(xm.mask.astype(int_), ym.mask.astype(int_)) - s = x.shape - assert(xm.size == reduce(lambda x,y:x*y, s)) - assert(self.count(xm) == len(m1) - reduce(lambda x,y:x+y, m1)) - - for s in [(4,3), (6,2)]: - x.shape = s - y.shape = s - xm.shape = s - ym.shape = s - xf.shape = s - - assert(self.count(xm) == len(m1) - reduce(lambda x,y:x+y, m1)) - - def test_2(self): - "Tests conversions and indexing" - x1 = np.array([1,2,4,3]) - x2 = self.array(x1, mask=[1,0,0,0]) - x3 = self.array(x1, mask=[0,1,0,1]) - x4 = self.array(x1) - # test conversion to strings - junk, garbage = str(x2), repr(x2) -# assert_equal(np.sort(x1), self.sort(x2, fill_value=0)) - # tests of indexing - assert type(x2[1]) is type(x1[1]) - assert x1[1] == x2[1] -# assert self.allequal(x1[2],x2[2]) -# assert self.allequal(x1[2:5],x2[2:5]) -# assert self.allequal(x1[:],x2[:]) -# assert self.allequal(x1[1:], x3[1:]) - x1[2] = 9 - x2[2] = 9 - self.assert_array_equal(x1,x2) - x1[1:3] = 99 - x2[1:3] = 99 -# assert self.allequal(x1,x2) - x2[1] = self.masked -# assert self.allequal(x1,x2) - x2[1:3] = self.masked -# assert self.allequal(x1,x2) - x2[:] = x1 - x2[1] = self.masked -# assert self.allequal(self.getmask(x2),self.array([0,1,0,0])) - x3[:] = self.masked_array([1,2,3,4],[0,1,1,0]) -# assert self.allequal(self.getmask(x3), self.array([0,1,1,0])) - x4[:] = self.masked_array([1,2,3,4],[0,1,1,0]) -# assert self.allequal(self.getmask(x4), self.array([0,1,1,0])) -# assert self.allequal(x4, self.array([1,2,3,4])) - x1 = np.arange(5)*1.0 - x2 = self.masked_values(x1, 3.0) -# assert self.allequal(x1,x2) -# assert 
self.allequal(self.array([0,0,0,1,0], self.MaskType), x2.mask) - x1 = self.array([1,'hello',2,3],object) - x2 = np.array([1,'hello',2,3],object) - s1 = x1[1] - s2 = x2[1] - assert x1[1:1].shape == (0,) - # Tests copy-size - n = [0,0,1,0,0] - m = self.make_mask(n) - m2 = self.make_mask(m) - assert(m is m2) - m3 = self.make_mask(m, copy=1) - assert(m is not m3) - - - def test_3(self): - "Tests resize/repeat" - x4 = self.arange(4) - x4[2] = self.masked - y4 = self.resize(x4, (8,)) - assert self.allequal(self.concatenate([x4,x4]), y4) - assert self.allequal(self.getmask(y4),[0,0,1,0,0,0,1,0]) - y5 = self.repeat(x4, (2,2,2,2), axis=0) - self.assert_array_equal(y5, [0,0,1,1,2,2,3,3]) - y6 = self.repeat(x4, 2, axis=0) - assert self.allequal(y5, y6) - y7 = x4.repeat((2,2,2,2), axis=0) - assert self.allequal(y5,y7) - y8 = x4.repeat(2,0) - assert self.allequal(y5,y8) - - #---------------------------------- - def test_4(self): - "Test of take, transpose, inner, outer products" - x = self.arange(24) - y = np.arange(24) - x[5:6] = self.masked - x = x.reshape(2,3,4) - y = y.reshape(2,3,4) - assert self.allequal(np.transpose(y,(2,0,1)), self.transpose(x,(2,0,1))) - assert self.allequal(np.take(y, (2,0,1), 1), self.take(x, (2,0,1), 1)) - assert self.allequal(np.inner(self.filled(x,0), self.filled(y,0)), - self.inner(x, y)) - assert self.allequal(np.outer(self.filled(x,0), self.filled(y,0)), - self.outer(x, y)) - y = self.array(['abc', 1, 'def', 2, 3], object) - y[2] = self.masked - t = self.take(y,[0,3,4]) - assert t[0] == 'abc' - assert t[1] == 2 - assert t[2] == 3 - #---------------------------------- - def test_5(self): - "Tests inplace w/ scalar" - - x = self.arange(10) - y = self.arange(10) - xm = self.arange(10) - xm[2] = self.masked - x += 1 - assert self.allequal(x, y+1) - xm += 1 - assert self.allequal(xm, y+1) - - x = self.arange(10) - xm = self.arange(10) - xm[2] = self.masked - x -= 1 - assert self.allequal(x, y-1) - xm -= 1 - assert self.allequal(xm, y-1) - - x = 
self.arange(10)*1.0 - xm = self.arange(10)*1.0 - xm[2] = self.masked - x *= 2.0 - assert self.allequal(x, y*2) - xm *= 2.0 - assert self.allequal(xm, y*2) - - x = self.arange(10)*2 - xm = self.arange(10)*2 - xm[2] = self.masked - x /= 2 - assert self.allequal(x, y) - xm /= 2 - assert self.allequal(xm, y) - - x = self.arange(10)*1.0 - xm = self.arange(10)*1.0 - xm[2] = self.masked - x /= 2.0 - assert self.allequal(x, y/2.0) - xm /= self.arange(10) - self.assert_array_equal(xm, self.ones((10,))) - - x = self.arange(10).astype(float_) - xm = self.arange(10) - xm[2] = self.masked - id1 = self.id(x.raw_data()) - x += 1. - #assert id1 == self.id(x.raw_data()) - assert self.allequal(x, y+1.) - - - def test_6(self): - "Tests inplace w/ array" - - x = self.arange(10, dtype=float_) - y = self.arange(10) - xm = self.arange(10, dtype=float_) - xm[2] = self.masked - m = xm.mask - a = self.arange(10, dtype=float_) - a[-1] = self.masked - x += a - xm += a - assert self.allequal(x,y+a) - assert self.allequal(xm,y+a) - assert self.allequal(xm.mask, self.mask_or(m,a.mask)) - - x = self.arange(10, dtype=float_) - xm = self.arange(10, dtype=float_) - xm[2] = self.masked - m = xm.mask - a = self.arange(10, dtype=float_) - a[-1] = self.masked - x -= a - xm -= a - assert self.allequal(x,y-a) - assert self.allequal(xm,y-a) - assert self.allequal(xm.mask, self.mask_or(m,a.mask)) - - x = self.arange(10, dtype=float_) - xm = self.arange(10, dtype=float_) - xm[2] = self.masked - m = xm.mask - a = self.arange(10, dtype=float_) - a[-1] = self.masked - x *= a - xm *= a - assert self.allequal(x,y*a) - assert self.allequal(xm,y*a) - assert self.allequal(xm.mask, self.mask_or(m,a.mask)) - - x = self.arange(10, dtype=float_) - xm = self.arange(10, dtype=float_) - xm[2] = self.masked - m = xm.mask - a = self.arange(10, dtype=float_) - a[-1] = self.masked - x /= a - xm /= a - - #---------------------------------- - def test_7(self): - "Tests ufunc" - d = (self.array([1.0, 0, -1, pi/2]*2, 
mask=[0,1]+[0]*6), - self.array([1.0, 0, -1, pi/2]*2, mask=[1,0]+[0]*6),) - for f in ['sqrt', 'log', 'log10', 'exp', 'conjugate', -# 'sin', 'cos', 'tan', -# 'arcsin', 'arccos', 'arctan', -# 'sinh', 'cosh', 'tanh', -# 'arcsinh', -# 'arccosh', -# 'arctanh', -# 'absolute', 'fabs', 'negative', -# # 'nonzero', 'around', -# 'floor', 'ceil', -# # 'sometrue', 'alltrue', -# 'logical_not', -# 'add', 'subtract', 'multiply', -# 'divide', 'true_divide', 'floor_divide', -# 'remainder', 'fmod', 'hypot', 'arctan2', -# 'equal', 'not_equal', 'less_equal', 'greater_equal', -# 'less', 'greater', -# 'logical_and', 'logical_or', 'logical_xor', - ]: - #print f - try: - uf = getattr(self.umath, f) - except AttributeError: - uf = getattr(fromnumeric, f) - mf = getattr(self.module, f) - args = d[:uf.nin] - ur = uf(*args) - mr = mf(*args) - self.assert_array_equal(ur.filled(0), mr.filled(0), f) - self.assert_array_equal(ur._mask, mr._mask) - - #---------------------------------- - def test_99(self): - # test average - ott = self.array([0.,1.,2.,3.], mask=[1,0,0,0]) - self.assert_array_equal(2.0, self.average(ott,axis=0)) - self.assert_array_equal(2.0, self.average(ott, weights=[1., 1., 2., 1.])) - result, wts = self.average(ott, weights=[1.,1.,2.,1.], returned=1) - self.assert_array_equal(2.0, result) - assert(wts == 4.0) - ott[:] = self.masked - assert(self.average(ott,axis=0) is self.masked) - ott = self.array([0.,1.,2.,3.], mask=[1,0,0,0]) - ott = ott.reshape(2,2) - ott[:,1] = self.masked - self.assert_array_equal(self.average(ott,axis=0), [2.0, 0.0]) - assert(self.average(ott,axis=1)[0] is self.masked) - self.assert_array_equal([2.,0.], self.average(ott, axis=0)) - result, wts = self.average(ott, axis=0, returned=1) - self.assert_array_equal(wts, [1., 0.]) - w1 = [0,1,1,1,1,0] - w2 = [[0,1,1,1,1,0],[1,0,0,0,0,1]] - x = self.arange(6) - self.assert_array_equal(self.average(x, axis=0), 2.5) - self.assert_array_equal(self.average(x, axis=0, weights=w1), 2.5) - y = 
self.array([self.arange(6), 2.0*self.arange(6)]) - self.assert_array_equal(self.average(y, None), np.add.reduce(np.arange(6))*3./12.) - self.assert_array_equal(self.average(y, axis=0), np.arange(6) * 3./2.) - self.assert_array_equal(self.average(y, axis=1), [self.average(x,axis=0), self.average(x,axis=0) * 2.0]) - self.assert_array_equal(self.average(y, None, weights=w2), 20./6.) - self.assert_array_equal(self.average(y, axis=0, weights=w2), [0.,1.,2.,3.,4.,10.]) - self.assert_array_equal(self.average(y, axis=1), [self.average(x,axis=0), self.average(x,axis=0) * 2.0]) - m1 = self.zeros(6) - m2 = [0,0,1,1,0,0] - m3 = [[0,0,1,1,0,0],[0,1,1,1,1,0]] - m4 = self.ones(6) - m5 = [0, 1, 1, 1, 1, 1] - self.assert_array_equal(self.average(self.masked_array(x, m1),axis=0), 2.5) - self.assert_array_equal(self.average(self.masked_array(x, m2),axis=0), 2.5) - # assert(self.average(masked_array(x, m4),axis=0) is masked) - self.assert_array_equal(self.average(self.masked_array(x, m5),axis=0), 0.0) - self.assert_array_equal(self.count(self.average(self.masked_array(x, m4),axis=0)), 0) - z = self.masked_array(y, m3) - self.assert_array_equal(self.average(z, None), 20./6.) 
- self.assert_array_equal(self.average(z, axis=0), [0.,1.,99.,99.,4.0, 7.5]) - self.assert_array_equal(self.average(z, axis=1), [2.5, 5.0]) - self.assert_array_equal(self.average(z,axis=0, weights=w2), [0.,1., 99., 99., 4.0, 10.0]) - #------------------------ - def test_A(self): - x = self.arange(24) - y = np.arange(24) - x[5:6] = self.masked - x = x.reshape(2,3,4) - - -################################################################################ -if __name__ == '__main__': - - setup_base = "from __main__ import moduletester \n"\ - "import numpy\n" \ - "tester = moduletester(module)\n" -# setup_new = "import numpy.ma.core_ini as module\n"+setup_base - setup_cur = "import numpy.ma.core as module\n"+setup_base -# setup_alt = "import numpy.ma.core_alt as module\n"+setup_base -# setup_tmp = "import numpy.ma.core_tmp as module\n"+setup_base - - (nrepeat, nloop) = (10, 10) - - if 1: - for i in range(1,8): - func = 'tester.test_%i()' % i -# new = timeit.Timer(func, setup_new).repeat(nrepeat, nloop*10) - cur = timeit.Timer(func, setup_cur).repeat(nrepeat, nloop*10) -# alt = timeit.Timer(func, setup_alt).repeat(nrepeat, nloop*10) -# tmp = timeit.Timer(func, setup_tmp).repeat(nrepeat, nloop*10) -# new = np.sort(new) - cur = np.sort(cur) -# alt = np.sort(alt) -# tmp = np.sort(tmp) - print "#%i" % i +50*'.' 
- print eval("moduletester.test_%i.__doc__" % i) -# print "core_ini : %.3f - %.3f" % (new[0], new[1]) - print "core_current : %.3f - %.3f" % (cur[0], cur[1]) -# print "core_alt : %.3f - %.3f" % (alt[0], alt[1]) -# print "core_tmp : %.3f - %.3f" % (tmp[0], tmp[1]) diff --git a/pythonPackages/numpy/numpy/ma/version.py b/pythonPackages/numpy/numpy/ma/version.py deleted file mode 100755 index 7a925f1a85..0000000000 --- a/pythonPackages/numpy/numpy/ma/version.py +++ /dev/null @@ -1,11 +0,0 @@ -"""Version number""" - -version = '1.00' -release = False - -if not release: - import core - import extras - revision = [core.__revision__.split(':')[-1][:-1].strip(), - extras.__revision__.split(':')[-1][:-1].strip(),] - version += '.dev%04i' % max([int(rev) for rev in revision]) diff --git a/pythonPackages/numpy/numpy/matlib.py b/pythonPackages/numpy/numpy/matlib.py deleted file mode 100755 index f55f763c3d..0000000000 --- a/pythonPackages/numpy/numpy/matlib.py +++ /dev/null @@ -1,356 +0,0 @@ -import numpy as np -from numpy.matrixlib.defmatrix import matrix, asmatrix -# need * as we're copying the numpy namespace -from numpy import * - -__version__ = np.__version__ - -__all__ = np.__all__[:] # copy numpy namespace -__all__ += ['rand', 'randn', 'repmat'] - -def empty(shape, dtype=None, order='C'): - """ - Return a new matrix of given shape and type, without initializing entries. - - Parameters - ---------- - shape : int or tuple of int - Shape of the empty matrix. - dtype : data-type, optional - Desired output data-type. - order : {'C', 'F'}, optional - Whether to store multi-dimensional data in C (row-major) or - Fortran (column-major) order in memory. - - See Also - -------- - empty_like, zeros - - Notes - ----- - `empty`, unlike `zeros`, does not set the matrix values to zero, - and may therefore be marginally faster. On the other hand, it requires - the user to manually set all the values in the array, and should be - used with caution. 
- - Examples - -------- - >>> import numpy.matlib - >>> np.matlib.empty((2, 2)) # filled with random data - matrix([[ 6.76425276e-320, 9.79033856e-307], - [ 7.39337286e-309, 3.22135945e-309]]) #random - >>> np.matlib.empty((2, 2), dtype=int) - matrix([[ 6600475, 0], - [ 6586976, 22740995]]) #random - - """ - return ndarray.__new__(matrix, shape, dtype, order=order) - -def ones(shape, dtype=None, order='C'): - """ - Matrix of ones. - - Return a matrix of given shape and type, filled with ones. - - Parameters - ---------- - shape : {sequence of ints, int} - Shape of the matrix - dtype : data-type, optional - The desired data-type for the matrix, default is np.float64. - order : {'C', 'F'}, optional - Whether to store matrix in C- or Fortran-contiguous order, - default is 'C'. - - Returns - ------- - out : matrix - Matrix of ones of given shape, dtype, and order. - - See Also - -------- - ones : Array of ones. - matlib.zeros : Zero matrix. - - Notes - ----- - If `shape` has length one i.e. ``(N,)``, or is a scalar ``N``, - `out` becomes a single row matrix of shape ``(1,N)``. - - Examples - -------- - >>> np.matlib.ones((2,3)) - matrix([[ 1., 1., 1.], - [ 1., 1., 1.]]) - - >>> np.matlib.ones(2) - matrix([[ 1., 1.]]) - - """ - a = ndarray.__new__(matrix, shape, dtype, order=order) - a.fill(1) - return a - -def zeros(shape, dtype=None, order='C'): - """ - Return a matrix of given shape and type, filled with zeros. - - Parameters - ---------- - shape : int or sequence of ints - Shape of the matrix - dtype : data-type, optional - The desired data-type for the matrix, default is float. - order : {'C', 'F'}, optional - Whether to store the result in C- or Fortran-contiguous order, - default is 'C'. - - Returns - ------- - out : matrix - Zero matrix of given shape, dtype, and order. - - See Also - -------- - numpy.zeros : Equivalent array function. - matlib.ones : Return a matrix of ones. - - Notes - ----- - If `shape` has length one i.e. 
``(N,)``, or is a scalar ``N``, - `out` becomes a single row matrix of shape ``(1,N)``. - - Examples - -------- - >>> import numpy.matlib - >>> np.matlib.zeros((2, 3)) - matrix([[ 0., 0., 0.], - [ 0., 0., 0.]]) - - >>> np.matlib.zeros(2) - matrix([[ 0., 0.]]) - - """ - a = ndarray.__new__(matrix, shape, dtype, order=order) - a.fill(0) - return a - -def identity(n,dtype=None): - """ - Returns the square identity matrix of given size. - - Parameters - ---------- - n : int - Size of the returned identity matrix. - dtype : data-type, optional - Data-type of the output. Defaults to ``float``. - - Returns - ------- - out : matrix - `n` x `n` matrix with its main diagonal set to one, - and all other elements zero. - - See Also - -------- - numpy.identity : Equivalent array function. - matlib.eye : More general matrix identity function. - - Examples - -------- - >>> import numpy.matlib - >>> np.matlib.identity(3, dtype=int) - matrix([[1, 0, 0], - [0, 1, 0], - [0, 0, 1]]) - - """ - a = array([1]+n*[0],dtype=dtype) - b = empty((n,n),dtype=dtype) - b.flat = a - return b - -def eye(n,M=None, k=0, dtype=float): - """ - Return a matrix with ones on the diagonal and zeros elsewhere. - - Parameters - ---------- - n : int - Number of rows in the output. - M : int, optional - Number of columns in the output, defaults to `n`. - k : int, optional - Index of the diagonal: 0 refers to the main diagonal, - a positive value refers to an upper diagonal, - and a negative value to a lower diagonal. - dtype : dtype, optional - Data-type of the returned matrix. - - Returns - ------- - I : matrix - A `n` x `M` matrix where all elements are equal to zero, - except for the `k`-th diagonal, whose values are equal to one. - - See Also - -------- - numpy.eye : Equivalent array function. - identity : Square identity matrix. 
- - Examples - -------- - >>> import numpy.matlib - >>> np.matlib.eye(3, k=1, dtype=float) - matrix([[ 0., 1., 0.], - [ 0., 0., 1.], - [ 0., 0., 0.]]) - - """ - return asmatrix(np.eye(n,M,k,dtype)) - -def rand(*args): - """ - Return a matrix of random values with given shape. - - Create a matrix of the given shape and propagate it with - random samples from a uniform distribution over ``[0, 1)``. - - Parameters - ---------- - \\*args : Arguments - Shape of the output. - If given as N integers, each integer specifies the size of one - dimension. - If given as a tuple, this tuple gives the complete shape. - - Returns - ------- - out : ndarray - The matrix of random values with shape given by `\\*args`. - - See Also - -------- - randn, numpy.random.rand - - Examples - -------- - >>> import numpy.matlib - >>> np.matlib.rand(2, 3) - matrix([[ 0.68340382, 0.67926887, 0.83271405], - [ 0.00793551, 0.20468222, 0.95253525]]) #random - >>> np.matlib.rand((2, 3)) - matrix([[ 0.84682055, 0.73626594, 0.11308016], - [ 0.85429008, 0.3294825 , 0.89139555]]) #random - - If the first argument is a tuple, other arguments are ignored: - - >>> np.matlib.rand((2, 3), 4) - matrix([[ 0.46898646, 0.15163588, 0.95188261], - [ 0.59208621, 0.09561818, 0.00583606]]) #random - - """ - if isinstance(args[0], tuple): - args = args[0] - return asmatrix(np.random.rand(*args)) - -def randn(*args): - """ - Return a random matrix with data from the "standard normal" distribution. - - `randn` generates a matrix filled with random floats sampled from a - univariate "normal" (Gaussian) distribution of mean 0 and variance 1. - - Parameters - ---------- - \\*args : Arguments - Shape of the output. - If given as N integers, each integer specifies the size of one - dimension. If given as a tuple, this tuple gives the complete shape. - - Returns - ------- - Z : matrix of floats - A matrix of floating-point samples drawn from the standard normal - distribution. 
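The tuple-handling behaviour documented for `rand` (and `randn` below) comes from a small unpacking idiom at the top of each function. A sketch using a hypothetical helper, `rand_like`, which is not part of the library:

```python
import numpy as np

def rand_like(*args):
    # Mirrors the idiom in np.matlib.rand/randn: a leading tuple is
    # taken as the complete shape, and any further positional
    # arguments are silently ignored.
    if args and isinstance(args[0], tuple):
        args = args[0]
    return np.random.rand(*args)

a = rand_like(2, 3)
b = rand_like((2, 3), 4)   # the trailing 4 is ignored
assert a.shape == b.shape == (2, 3)
```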
- - See Also - -------- - rand, random.randn - - Notes - ----- - For random samples from :math:`N(\\mu, \\sigma^2)`, use: - - ``sigma * np.matlib.randn(...) + mu`` - - Examples - -------- - >>> import numpy.matlib - >>> np.matlib.randn(1) - matrix([[-0.09542833]]) #random - >>> np.matlib.randn(1, 2, 3) - matrix([[ 0.16198284, 0.0194571 , 0.18312985], - [-0.7509172 , 1.61055 , 0.45298599]]) #random - - Two-by-four matrix of samples from :math:`N(3, 6.25)`: - - >>> 2.5 * np.matlib.randn((2, 4)) + 3 - matrix([[ 4.74085004, 8.89381862, 4.09042411, 4.83721922], - [ 7.52373709, 5.07933944, -2.64043543, 0.45610557]]) #random - - """ - if isinstance(args[0], tuple): - args = args[0] - return asmatrix(np.random.randn(*args)) - -def repmat(a, m, n): - """ - Repeat a 0-D to 2-D array or matrix MxN times. - - Parameters - ---------- - a : array_like - The array or matrix to be repeated. - m, n : int - The number of times `a` is repeated along the first and second axes. - - Returns - ------- - out : ndarray - The result of repeating `a`. 
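For a 2-D input, the reshape/repeat dance inside `repmat` (shown below) produces the same result as `np.tile` with an `(m, n)` repetition tuple — a quick equivalence check:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
m, n = 2, 3
# repmat(a, m, n) on a 2-D array tiles m copies along axis 0
# and n copies along axis 1, exactly like np.tile:
tiled = np.tile(a, (m, n))
assert tiled.shape == (2 * m, 3 * n)
assert (tiled[:2, :3] == a).all()
```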
- - Examples - -------- - >>> import numpy.matlib - >>> a0 = np.array(1) - >>> np.matlib.repmat(a0, 2, 3) - array([[1, 1, 1], - [1, 1, 1]]) - - >>> a1 = np.arange(4) - >>> np.matlib.repmat(a1, 2, 2) - array([[0, 1, 2, 3, 0, 1, 2, 3], - [0, 1, 2, 3, 0, 1, 2, 3]]) - - >>> a2 = np.asmatrix(np.arange(6).reshape(2, 3)) - >>> np.matlib.repmat(a2, 2, 3) - matrix([[0, 1, 2, 0, 1, 2, 0, 1, 2], - [3, 4, 5, 3, 4, 5, 3, 4, 5], - [0, 1, 2, 0, 1, 2, 0, 1, 2], - [3, 4, 5, 3, 4, 5, 3, 4, 5]]) - - """ - a = asanyarray(a) - ndim = a.ndim - if ndim == 0: - origrows, origcols = (1,1) - elif ndim == 1: - origrows, origcols = (1, a.shape[0]) - else: - origrows, origcols = a.shape - rows = origrows * m - cols = origcols * n - c = a.reshape(1,a.size).repeat(m, 0).reshape(rows, origcols).repeat(n,0) - return c.reshape(rows, cols) diff --git a/pythonPackages/numpy/numpy/matrixlib/__init__.py b/pythonPackages/numpy/numpy/matrixlib/__init__.py deleted file mode 100755 index 468a8829d5..0000000000 --- a/pythonPackages/numpy/numpy/matrixlib/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -"""Sub-package containing the matrix class and related functions.""" -from defmatrix import * - -__all__ = defmatrix.__all__ - -from numpy.testing import Tester -test = Tester().test -bench = Tester().bench diff --git a/pythonPackages/numpy/numpy/matrixlib/defmatrix.py b/pythonPackages/numpy/numpy/matrixlib/defmatrix.py deleted file mode 100755 index a5aa84f6d0..0000000000 --- a/pythonPackages/numpy/numpy/matrixlib/defmatrix.py +++ /dev/null @@ -1,1074 +0,0 @@ -__all__ = ['matrix', 'bmat', 'mat', 'asmatrix'] - -import sys -import numpy.core.numeric as N -from numpy.core.numeric import concatenate, isscalar, binary_repr, identity, asanyarray -from numpy.core.numerictypes import issubdtype - -# make translation table -_numchars = '0123456789.-+jeEL' - -if sys.version_info[0] >= 3: - class _NumCharTable: - def __getitem__(self, i): - if chr(i) in _numchars: - return chr(i) - else: - return None - _table = 
_NumCharTable() - def _eval(astr): - return eval(astr.translate(_table)) -else: - _table = [None]*256 - for k in range(256): - _table[k] = chr(k) - _table = ''.join(_table) - - _todelete = [] - for k in _table: - if k not in _numchars: - _todelete.append(k) - _todelete = ''.join(_todelete) - del k - - def _eval(astr): - return eval(astr.translate(_table,_todelete)) - -def _convert_from_string(data): - rows = data.split(';') - newdata = [] - count = 0 - for row in rows: - trow = row.split(',') - newrow = [] - for col in trow: - temp = col.split() - newrow.extend(map(_eval,temp)) - if count == 0: - Ncols = len(newrow) - elif len(newrow) != Ncols: - raise ValueError, "Rows not the same size." - count += 1 - newdata.append(newrow) - return newdata - -def asmatrix(data, dtype=None): - """ - Interpret the input as a matrix. - - Unlike `matrix`, `asmatrix` does not make a copy if the input is already - a matrix or an ndarray. Equivalent to ``matrix(data, copy=False)``. - - Parameters - ---------- - data : array_like - Input data. - - Returns - ------- - mat : matrix - `data` interpreted as a matrix. - - Examples - -------- - >>> x = np.array([[1, 2], [3, 4]]) - - >>> m = np.asmatrix(x) - - >>> x[0,0] = 5 - - >>> m - matrix([[5, 2], - [3, 4]]) - - """ - return matrix(data, dtype=dtype, copy=False) - -def matrix_power(M,n): - """ - Raise a square matrix to the (integer) power `n`. - - For positive integers `n`, the power is computed by repeated matrix - squarings and matrix multiplications. If ``n == 0``, the identity matrix - of the same shape as M is returned. If ``n < 0``, the inverse - is computed and then raised to the ``abs(n)``. - - Parameters - ---------- - M : ndarray or matrix object - Matrix to be "powered." Must be square, i.e. ``M.shape == (m, m)``, - with `m` a positive integer. - n : int - The exponent can be any integer or long integer, positive, - negative, or zero. 
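The binary-decomposition loop inside `matrix_power` (shown further below for `n > 3`) is the classic repeated-squaring scheme: walk the bits of `n`, squaring a running factor at each step and folding it into the result when the bit is set, for O(log n) matrix products instead of `n - 1`. A sketch with a hypothetical helper, `mat_pow` (non-negative exponents only, `np.dot` as in the source):

```python
import numpy as np

def mat_pow(M, n):
    # Repeated squaring: Z holds M**(2**bit); multiply it into the
    # result whenever the corresponding bit of n is set.
    result = np.eye(M.shape[0], dtype=M.dtype)
    Z = M
    while n:
        if n & 1:
            result = np.dot(result, Z)
        Z = np.dot(Z, Z)
        n >>= 1
    return result

i = np.array([[0, 1], [-1, 0]])           # matrix equivalent of 1j
assert (mat_pow(i, 3) == np.array([[0, -1], [1, 0]])).all()
```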
- - Returns - ------- - M**n : ndarray or matrix object - The return value is the same shape and type as `M`; - if the exponent is positive or zero then the type of the - elements is the same as those of `M`. If the exponent is - negative the elements are floating-point. - - Raises - ------ - LinAlgError - If the matrix is not numerically invertible. - - See Also - -------- - matrix - Provides an equivalent function as the exponentiation operator - (``**``, not ``^``). - - Examples - -------- - >>> from numpy import linalg as LA - >>> i = np.array([[0, 1], [-1, 0]]) # matrix equiv. of the imaginary unit - >>> LA.matrix_power(i, 3) # should = -i - array([[ 0, -1], - [ 1, 0]]) - >>> LA.matrix_power(np.matrix(i), 3) # matrix arg returns matrix - matrix([[ 0, -1], - [ 1, 0]]) - >>> LA.matrix_power(i, 0) - array([[1, 0], - [0, 1]]) - >>> LA.matrix_power(i, -3) # should = 1/(-i) = i, but w/ f.p. elements - array([[ 0., 1.], - [-1., 0.]]) - - Somewhat more sophisticated example - - >>> q = np.zeros((4, 4)) - >>> q[0:2, 0:2] = -i - >>> q[2:4, 2:4] = i - >>> q # one of the three quarternion units not equal to 1 - array([[ 0., -1., 0., 0.], - [ 1., 0., 0., 0.], - [ 0., 0., 0., 1.], - [ 0., 0., -1., 0.]]) - >>> LA.matrix_power(q, 2) # = -np.eye(4) - array([[-1., 0., 0., 0.], - [ 0., -1., 0., 0.], - [ 0., 0., -1., 0.], - [ 0., 0., 0., -1.]]) - - """ - M = asanyarray(M) - if len(M.shape) != 2 or M.shape[0] != M.shape[1]: - raise ValueError("input must be a square array") - if not issubdtype(type(n),int): - raise TypeError("exponent must be an integer") - - from numpy.linalg import inv - - if n==0: - M = M.copy() - M[:] = identity(M.shape[0]) - return M - elif n<0: - M = inv(M) - n *= -1 - - result = M - if n <= 3: - for _ in range(n-1): - result=N.dot(result,M) - return result - - # binary decomposition to reduce the number of Matrix - # multiplications for n > 3. 
- beta = binary_repr(n) - Z,q,t = M,0,len(beta) - while beta[t-q-1] == '0': - Z = N.dot(Z,Z) - q += 1 - result = Z - for k in range(q+1,t): - Z = N.dot(Z,Z) - if beta[t-k-1] == '1': - result = N.dot(result,Z) - return result - - -class matrix(N.ndarray): - """ - matrix(data, dtype=None, copy=True) - - Returns a matrix from an array-like object, or from a string of data. - A matrix is a specialized 2-D array that retains its 2-D nature - through operations. It has certain special operators, such as ``*`` - (matrix multiplication) and ``**`` (matrix power). - - Parameters - ---------- - data : array_like or string - If `data` is a string, it is interpreted as a matrix with commas - or spaces separating columns, and semicolons separating rows. - dtype : data-type - Data-type of the output matrix. - copy : bool - If `data` is already an `ndarray`, then this flag determines - whether the data is copied (the default), or whether a view is - constructed. - - See Also - -------- - array - - Examples - -------- - >>> a = np.matrix('1 2; 3 4') - >>> print a - [[1 2] - [3 4]] - - >>> np.matrix([[1, 2], [3, 4]]) - matrix([[1, 2], - [3, 4]]) - - """ - __array_priority__ = 10.0 - def __new__(subtype, data, dtype=None, copy=True): - if isinstance(data, matrix): - dtype2 = data.dtype - if (dtype is None): - dtype = dtype2 - if (dtype2 == dtype) and (not copy): - return data - return data.astype(dtype) - - if isinstance(data, N.ndarray): - if dtype is None: - intype = data.dtype - else: - intype = N.dtype(dtype) - new = data.view(subtype) - if intype != data.dtype: - return new.astype(intype) - if copy: return new.copy() - else: return new - - if isinstance(data, str): - data = _convert_from_string(data) - - # now convert data to an array - arr = N.array(data, dtype=dtype, copy=copy) - ndim = arr.ndim - shape = arr.shape - if (ndim > 2): - raise ValueError, "matrix must be 2-dimensional" - elif ndim == 0: - shape = (1,1) - elif ndim == 1: - shape = (1,shape[0]) - - order = False - 
if (ndim == 2) and arr.flags.fortran: - order = True - - if not (order or arr.flags.contiguous): - arr = arr.copy() - - ret = N.ndarray.__new__(subtype, shape, arr.dtype, - buffer=arr, - order=order) - return ret - - def __array_finalize__(self, obj): - self._getitem = False - if (isinstance(obj, matrix) and obj._getitem): return - ndim = self.ndim - if (ndim == 2): - return - if (ndim > 2): - newshape = tuple([x for x in self.shape if x > 1]) - ndim = len(newshape) - if ndim == 2: - self.shape = newshape - return - elif (ndim > 2): - raise ValueError, "shape too large to be a matrix." - else: - newshape = self.shape - if ndim == 0: - self.shape = (1,1) - elif ndim == 1: - self.shape = (1,newshape[0]) - return - - def __getitem__(self, index): - self._getitem = True - - try: - out = N.ndarray.__getitem__(self, index) - finally: - self._getitem = False - - if not isinstance(out, N.ndarray): - return out - - if out.ndim == 0: - return out[()] - if out.ndim == 1: - sh = out.shape[0] - # Determine when we should have a column array - try: - n = len(index) - except: - n = 0 - if n > 1 and isscalar(index[1]): - out.shape = (sh,1) - else: - out.shape = (1,sh) - return out - - def __mul__(self, other): - if isinstance(other,(N.ndarray, list, tuple)) : - # This promotes 1-D vectors to row vectors - return N.dot(self, asmatrix(other)) - if isscalar(other) or not hasattr(other, '__rmul__') : - return N.dot(self, other) - return NotImplemented - - def __rmul__(self, other): - return N.dot(other, self) - - def __imul__(self, other): - self[:] = self * other - return self - - def __pow__(self, other): - return matrix_power(self, other) - - def __ipow__(self, other): - self[:] = self ** other - return self - - def __rpow__(self, other): - return NotImplemented - - def __repr__(self): - s = repr(self.__array__()).replace('array', 'matrix') - # now, 'matrix' has 6 letters, and 'array' 5, so the columns don't - # line up anymore. We need to add a space. 
- l = s.splitlines() - for i in range(1, len(l)): - if l[i]: - l[i] = ' ' + l[i] - return '\n'.join(l) - - def __str__(self): - return str(self.__array__()) - - def _align(self, axis): - """A convenience function for operations that need to preserve axis - orientation. - """ - if axis is None: - return self[0,0] - elif axis==0: - return self - elif axis==1: - return self.transpose() - else: - raise ValueError, "unsupported axis" - - # Necessary because base-class tolist expects dimension - # reduction by x[0] - def tolist(self): - """ - Return the matrix as a (possibly nested) list. - - See `ndarray.tolist` for full documentation. - - See Also - -------- - ndarray.tolist - - Examples - -------- - >>> x = np.matrix(np.arange(12).reshape((3,4))); x - matrix([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]]) - >>> x.tolist() - [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]] - - """ - return self.__array__().tolist() - - # To preserve orientation of result... - def sum(self, axis=None, dtype=None, out=None): - """ - Returns the sum of the matrix elements, along the given axis. - - Refer to `numpy.sum` for full documentation. - - See Also - -------- - numpy.sum - - Notes - ----- - This is the same as `ndarray.sum`, except that where an `ndarray` would - be returned, a `matrix` object is returned instead. - - Examples - -------- - >>> x = np.matrix([[1, 2], [4, 3]]) - >>> x.sum() - 10 - >>> x.sum(axis=1) - matrix([[3], - [7]]) - >>> x.sum(axis=1, dtype='float') - matrix([[ 3.], - [ 7.]]) - >>> out = np.zeros((1, 2), dtype='float') - >>> x.sum(axis=1, dtype='float', out=out) - matrix([[ 3.], - [ 7.]]) - - """ - return N.ndarray.sum(self, axis, dtype, out)._align(axis) - - def mean(self, axis=None, dtype=None, out=None): - """ - Returns the average of the matrix elements along the given axis. - - Refer to `numpy.mean` for full documentation. 
- - See Also - -------- - numpy.mean - - Notes - ----- - Same as `ndarray.mean` except that, where that returns an `ndarray`, - this returns a `matrix` object. - - Examples - -------- - >>> x = np.matrix(np.arange(12).reshape((3, 4))) - >>> x - matrix([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]]) - >>> x.mean() - 5.5 - >>> x.mean(0) - matrix([[ 4., 5., 6., 7.]]) - >>> x.mean(1) - matrix([[ 1.5], - [ 5.5], - [ 9.5]]) - - """ - return N.ndarray.mean(self, axis, dtype, out)._align(axis) - - def std(self, axis=None, dtype=None, out=None, ddof=0): - """ - Return the standard deviation of the array elements along the given axis. - - Refer to `numpy.std` for full documentation. - - See Also - -------- - numpy.std - - Notes - ----- - This is the same as `ndarray.std`, except that where an `ndarray` would - be returned, a `matrix` object is returned instead. - - Examples - -------- - >>> x = np.matrix(np.arange(12).reshape((3, 4))) - >>> x - matrix([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]]) - >>> x.std() - 3.4520525295346629 - >>> x.std(0) - matrix([[ 3.26598632, 3.26598632, 3.26598632, 3.26598632]]) - >>> x.std(1) - matrix([[ 1.11803399], - [ 1.11803399], - [ 1.11803399]]) - - """ - return N.ndarray.std(self, axis, dtype, out, ddof)._align(axis) - - def var(self, axis=None, dtype=None, out=None, ddof=0): - """ - Returns the variance of the matrix elements, along the given axis. - - Refer to `numpy.var` for full documentation. - - See Also - -------- - numpy.var - - Notes - ----- - This is the same as `ndarray.var`, except that where an `ndarray` would - be returned, a `matrix` object is returned instead. 
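The `_align` helper threaded through these reductions is what keeps results 2-D: a `matrix` reduction along an axis stays a row or column matrix, where the plain `ndarray` reduction drops a dimension. A quick demonstration:

```python
import numpy as np

x = np.matrix(np.arange(12).reshape(3, 4))
col = x.sum(axis=1)   # stays a (3, 1) column matrix
row = x.sum(axis=0)   # stays a (1, 4) row matrix
assert col.shape == (3, 1) and row.shape == (1, 4)
# The equivalent ndarray reduction loses the reduced axis instead:
assert np.asarray(x).sum(axis=1).shape == (3,)
```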
- - Examples - -------- - >>> x = np.matrix(np.arange(12).reshape((3, 4))) - >>> x - matrix([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]]) - >>> x.var() - 11.916666666666666 - >>> x.var(0) - matrix([[ 10.66666667, 10.66666667, 10.66666667, 10.66666667]]) - >>> x.var(1) - matrix([[ 1.25], - [ 1.25], - [ 1.25]]) - - """ - return N.ndarray.var(self, axis, dtype, out, ddof)._align(axis) - - def prod(self, axis=None, dtype=None, out=None): - """ - Return the product of the array elements over the given axis. - - Refer to `prod` for full documentation. - - See Also - -------- - prod, ndarray.prod - - Notes - ----- - Same as `ndarray.prod`, except, where that returns an `ndarray`, this - returns a `matrix` object instead. - - Examples - -------- - >>> x = np.matrix(np.arange(12).reshape((3,4))); x - matrix([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]]) - >>> x.prod() - 0 - >>> x.prod(0) - matrix([[ 0, 45, 120, 231]]) - >>> x.prod(1) - matrix([[ 0], - [ 840], - [7920]]) - - """ - return N.ndarray.prod(self, axis, dtype, out)._align(axis) - - def any(self, axis=None, out=None): - """ - Test whether any array element along a given axis evaluates to True. - - Refer to `numpy.any` for full documentation. - - Parameters - ---------- - axis: int, optional - Axis along which logical OR is performed - out: ndarray, optional - Output to existing array instead of creating new one, must have - same shape as expected output - - Returns - ------- - any : bool, ndarray - Returns a single bool if `axis` is ``None``; otherwise, - returns `ndarray` - - """ - return N.ndarray.any(self, axis, out)._align(axis) - - def all(self, axis=None, out=None): - """ - Test whether all matrix elements along a given axis evaluate to True. - - Parameters - ---------- - See `numpy.all` for complete descriptions - - See Also - -------- - numpy.all - - Notes - ----- - This is the same as `ndarray.all`, but it returns a `matrix` object. 
- - Examples - -------- - >>> x = np.matrix(np.arange(12).reshape((3,4))); x - matrix([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]]) - >>> y = x[0]; y - matrix([[0, 1, 2, 3]]) - >>> (x == y) - matrix([[ True, True, True, True], - [False, False, False, False], - [False, False, False, False]], dtype=bool) - >>> (x == y).all() - False - >>> (x == y).all(0) - matrix([[False, False, False, False]], dtype=bool) - >>> (x == y).all(1) - matrix([[ True], - [False], - [False]], dtype=bool) - - """ - return N.ndarray.all(self, axis, out)._align(axis) - - def max(self, axis=None, out=None): - """ - Return the maximum value along an axis. - - Parameters - ---------- - See `amax` for complete descriptions - - See Also - -------- - amax, ndarray.max - - Notes - ----- - This is the same as `ndarray.max`, but returns a `matrix` object - where `ndarray.max` would return an ndarray. - - Examples - -------- - >>> x = np.matrix(np.arange(12).reshape((3,4))); x - matrix([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]]) - >>> x.max() - 11 - >>> x.max(0) - matrix([[ 8, 9, 10, 11]]) - >>> x.max(1) - matrix([[ 3], - [ 7], - [11]]) - - """ - return N.ndarray.max(self, axis, out)._align(axis) - - def argmax(self, axis=None, out=None): - """ - Indices of the maximum values along an axis. - - Parameters - ---------- - See `numpy.argmax` for complete descriptions - - See Also - -------- - numpy.argmax - - Notes - ----- - This is the same as `ndarray.argmax`, but returns a `matrix` object - where `ndarray.argmax` would return an `ndarray`. - - Examples - -------- - >>> x = np.matrix(np.arange(12).reshape((3,4))); x - matrix([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]]) - >>> x.argmax() - 11 - >>> x.argmax(0) - matrix([[2, 2, 2, 2]]) - >>> x.argmax(1) - matrix([[3], - [3], - [3]]) - - """ - return N.ndarray.argmax(self, axis, out)._align(axis) - - def min(self, axis=None, out=None): - """ - Return the minimum value along an axis. 
- - Parameters - ---------- - See `amin` for complete descriptions. - - See Also - -------- - amin, ndarray.min - - Notes - ----- - This is the same as `ndarray.min`, but returns a `matrix` object - where `ndarray.min` would return an ndarray. - - Examples - -------- - >>> x = -np.matrix(np.arange(12).reshape((3,4))); x - matrix([[ 0, -1, -2, -3], - [ -4, -5, -6, -7], - [ -8, -9, -10, -11]]) - >>> x.min() - -11 - >>> x.min(0) - matrix([[ -8, -9, -10, -11]]) - >>> x.min(1) - matrix([[ -3], - [ -7], - [-11]]) - - """ - return N.ndarray.min(self, axis, out)._align(axis) - - def argmin(self, axis=None, out=None): - """ - Return the indices of the minimum values along an axis. - - Parameters - ---------- - See `numpy.argmin` for complete descriptions. - - See Also - -------- - numpy.argmin - - Notes - ----- - This is the same as `ndarray.argmin`, but returns a `matrix` object - where `ndarray.argmin` would return an `ndarray`. - - Examples - -------- - >>> x = -np.matrix(np.arange(12).reshape((3,4))); x - matrix([[ 0, -1, -2, -3], - [ -4, -5, -6, -7], - [ -8, -9, -10, -11]]) - >>> x.argmin() - 11 - >>> x.argmin(0) - matrix([[2, 2, 2, 2]]) - >>> x.argmin(1) - matrix([[3], - [3], - [3]]) - - """ - return N.ndarray.argmin(self, axis, out)._align(axis) - - def ptp(self, axis=None, out=None): - """ - Peak-to-peak (maximum - minimum) value along the given axis. - - Refer to `numpy.ptp` for full documentation. - - See Also - -------- - numpy.ptp - - Notes - ----- - Same as `ndarray.ptp`, except, where that would return an `ndarray` object, - this returns a `matrix` object. - - Examples - -------- - >>> x = np.matrix(np.arange(12).reshape((3,4))); x - matrix([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]]) - >>> x.ptp() - 11 - >>> x.ptp(0) - matrix([[8, 8, 8, 8]]) - >>> x.ptp(1) - matrix([[3], - [3], - [3]]) - - """ - return N.ndarray.ptp(self, axis, out)._align(axis) - - def getI(self): - """ - Returns the (multiplicative) inverse of invertible `self`. 
- - Parameters - ---------- - None - - Returns - ------- - ret : matrix object - If `self` is non-singular, `ret` is such that ``ret * self`` == - ``self * ret`` == ``np.matrix(np.eye(self[0,:].size)`` all return - ``True``. - - Raises - ------ - numpy.linalg.linalg.LinAlgError: Singular matrix - If `self` is singular. - - See Also - -------- - linalg.inv - - Examples - -------- - >>> m = np.matrix('[1, 2; 3, 4]'); m - matrix([[1, 2], - [3, 4]]) - >>> m.getI() - matrix([[-2. , 1. ], - [ 1.5, -0.5]]) - >>> m.getI() * m - matrix([[ 1., 0.], - [ 0., 1.]]) - - """ - M,N = self.shape - if M == N: - from numpy.dual import inv as func - else: - from numpy.dual import pinv as func - return asmatrix(func(self)) - - def getA(self): - """ - Return `self` as an `ndarray` object. - - Equivalent to ``np.asarray(self)``. - - Parameters - ---------- - None - - Returns - ------- - ret : ndarray - `self` as an `ndarray` - - Examples - -------- - >>> x = np.matrix(np.arange(12).reshape((3,4))); x - matrix([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]]) - >>> x.getA() - array([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]]) - - """ - return self.__array__() - - def getA1(self): - """ - Return `self` as a flattened `ndarray`. - - Equivalent to ``np.asarray(x).ravel()`` - - Parameters - ---------- - None - - Returns - ------- - ret : ndarray - `self`, 1-D, as an `ndarray` - - Examples - -------- - >>> x = np.matrix(np.arange(12).reshape((3,4))); x - matrix([[ 0, 1, 2, 3], - [ 4, 5, 6, 7], - [ 8, 9, 10, 11]]) - >>> x.getA1() - array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) - - """ - return self.__array__().ravel() - - def getT(self): - """ - Returns the transpose of the matrix. - - Does *not* conjugate! For the complex conjugate transpose, use `getH`. - - Parameters - ---------- - None - - Returns - ------- - ret : matrix object - The (non-conjugated) transpose of the matrix. 
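The distinction the `getT`/`getH` docstrings draw — transpose versus conjugate transpose — is easy to see on a small complex matrix:

```python
import numpy as np

z = np.matrix([[1 + 2j, 3 - 1j]])
# .T transposes only; .H transposes *and* conjugates.
assert (z.T == np.matrix([[1 + 2j], [3 - 1j]])).all()
assert (z.H == np.matrix([[1 - 2j], [3 + 1j]])).all()
```

For real-valued matrices the two properties coincide, which is why `getH` falls back to a plain transpose when the dtype is not complex.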
- - See Also - -------- - transpose, getH - - Examples - -------- - >>> m = np.matrix('[1, 2; 3, 4]') - >>> m - matrix([[1, 2], - [3, 4]]) - >>> m.getT() - matrix([[1, 3], - [2, 4]]) - - """ - return self.transpose() - - def getH(self): - """ - Returns the (complex) conjugate transpose of `self`. - - Equivalent to ``np.transpose(self)`` if `self` is real-valued. - - Parameters - ---------- - None - - Returns - ------- - ret : matrix object - complex conjugate transpose of `self` - - Examples - -------- - >>> x = np.matrix(np.arange(12).reshape((3,4))) - >>> z = x - 1j*x; z - matrix([[ 0. +0.j, 1. -1.j, 2. -2.j, 3. -3.j], - [ 4. -4.j, 5. -5.j, 6. -6.j, 7. -7.j], - [ 8. -8.j, 9. -9.j, 10.-10.j, 11.-11.j]]) - >>> z.getH() - matrix([[ 0. +0.j, 4. +4.j, 8. +8.j], - [ 1. +1.j, 5. +5.j, 9. +9.j], - [ 2. +2.j, 6. +6.j, 10.+10.j], - [ 3. +3.j, 7. +7.j, 11.+11.j]]) - - """ - if issubclass(self.dtype.type, N.complexfloating): - return self.transpose().conjugate() - else: - return self.transpose() - - T = property(getT, None, doc="transpose") - A = property(getA, None, doc="base array") - A1 = property(getA1, None, doc="1-d base array") - H = property(getH, None, doc="hermitian (conjugate) transpose") - I = property(getI, None, doc="inverse") - -def _from_string(str,gdict,ldict): - rows = str.split(';') - rowtup = [] - for row in rows: - trow = row.split(',') - newrow = [] - for x in trow: - newrow.extend(x.split()) - trow = newrow - coltup = [] - for col in trow: - col = col.strip() - try: - thismat = ldict[col] - except KeyError: - try: - thismat = gdict[col] - except KeyError: - raise KeyError, "%s not found" % (col,) - - coltup.append(thismat) - rowtup.append(concatenate(coltup,axis=-1)) - return concatenate(rowtup,axis=0) - - -def bmat(obj, ldict=None, gdict=None): - """ - Build a matrix object from a string, nested sequence, or array. - - Parameters - ---------- - obj : str or array_like - Input data. 
Names of variables in the current scope may be - referenced, even if `obj` is a string. - - Returns - ------- - out : matrix - Returns a matrix object, which is a specialized 2-D array. - - See Also - -------- - matrix - - Examples - -------- - >>> A = np.mat('1 1; 1 1') - >>> B = np.mat('2 2; 2 2') - >>> C = np.mat('3 4; 5 6') - >>> D = np.mat('7 8; 9 0') - - All the following expressions construct the same block matrix: - - >>> np.bmat([[A, B], [C, D]]) - matrix([[1, 1, 2, 2], - [1, 1, 2, 2], - [3, 4, 7, 8], - [5, 6, 9, 0]]) - >>> np.bmat(np.r_[np.c_[A, B], np.c_[C, D]]) - matrix([[1, 1, 2, 2], - [1, 1, 2, 2], - [3, 4, 7, 8], - [5, 6, 9, 0]]) - >>> np.bmat('A,B; C,D') - matrix([[1, 1, 2, 2], - [1, 1, 2, 2], - [3, 4, 7, 8], - [5, 6, 9, 0]]) - - """ - if isinstance(obj, str): - if gdict is None: - # get previous frame - frame = sys._getframe().f_back - glob_dict = frame.f_globals - loc_dict = frame.f_locals - else: - glob_dict = gdict - loc_dict = ldict - - return matrix(_from_string(obj, glob_dict, loc_dict)) - - if isinstance(obj, (tuple, list)): - # [[A,B],[C,D]] - arr_rows = [] - for row in obj: - if isinstance(row, N.ndarray): # not 2-d - return matrix(concatenate(obj,axis=-1)) - else: - arr_rows.append(concatenate(row,axis=-1)) - return matrix(concatenate(arr_rows,axis=0)) - if isinstance(obj, N.ndarray): - return matrix(obj) - -mat = asmatrix diff --git a/pythonPackages/numpy/numpy/matrixlib/setup.py b/pythonPackages/numpy/numpy/matrixlib/setup.py deleted file mode 100755 index 85b090094e..0000000000 --- a/pythonPackages/numpy/numpy/matrixlib/setup.py +++ /dev/null @@ -1,13 +0,0 @@ -#!/usr/bin/env python -import os - -def configuration(parent_package='', top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('matrixlib', parent_package, top_path) - config.add_data_dir('tests') - return config - -if __name__ == "__main__": - from numpy.distutils.core import setup - config = configuration(top_path='').todict() - 
setup(**config) diff --git a/pythonPackages/numpy/numpy/matrixlib/setupscons.py b/pythonPackages/numpy/numpy/matrixlib/setupscons.py deleted file mode 100755 index 85b090094e..0000000000 --- a/pythonPackages/numpy/numpy/matrixlib/setupscons.py +++ /dev/null @@ -1,13 +0,0 @@ -#!/usr/bin/env python -import os - -def configuration(parent_package='', top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('matrixlib', parent_package, top_path) - config.add_data_dir('tests') - return config - -if __name__ == "__main__": - from numpy.distutils.core import setup - config = configuration(top_path='').todict() - setup(**config) diff --git a/pythonPackages/numpy/numpy/matrixlib/tests/test_defmatrix.py b/pythonPackages/numpy/numpy/matrixlib/tests/test_defmatrix.py deleted file mode 100755 index 65d79df0b1..0000000000 --- a/pythonPackages/numpy/numpy/matrixlib/tests/test_defmatrix.py +++ /dev/null @@ -1,376 +0,0 @@ -from numpy.testing import * -from numpy.core import * -from numpy import matrix, asmatrix, bmat -from numpy.matrixlib.defmatrix import matrix_power -from numpy.matrixlib import mat -import numpy as np - -class TestCtor(TestCase): - def test_basic(self): - A = array([[1,2],[3,4]]) - mA = matrix(A) - assert all(mA.A == A) - - B = bmat("A,A;A,A") - C = bmat([[A,A], [A,A]]) - D = array([[1,2,1,2], - [3,4,3,4], - [1,2,1,2], - [3,4,3,4]]) - assert all(B.A == D) - assert all(C.A == D) - - E = array([[5,6],[7,8]]) - AEresult = matrix([[1,2,5,6],[3,4,7,8]]) - assert all(bmat([A,E]) == AEresult) - - vec = arange(5) - mvec = matrix(vec) - assert mvec.shape == (1,5) - - def test_bmat_nondefault_str(self): - A = array([[1,2],[3,4]]) - B = array([[5,6],[7,8]]) - Aresult = array([[1,2,1,2], - [3,4,3,4], - [1,2,1,2], - [3,4,3,4]]) - Bresult = array([[5,6,5,6], - [7,8,7,8], - [5,6,5,6], - [7,8,7,8]]) - mixresult = array([[1,2,5,6], - [3,4,7,8], - [5,6,1,2], - [7,8,3,4]]) - assert all(bmat("A,A;A,A") == Aresult) - assert 
all(bmat("A,A;A,A",ldict={'A':B}) == Aresult) - assert_raises(TypeError, bmat, "A,A;A,A",gdict={'A':B}) - assert all(bmat("A,A;A,A",ldict={'A':A},gdict={'A':B}) == Aresult) - b2 = bmat("A,B;C,D",ldict={'A':A,'B':B},gdict={'C':B,'D':A}) - assert all(b2 == mixresult) - - -class TestProperties(TestCase): - def test_sum(self): - """Test whether matrix.sum(axis=1) preserves orientation. - Fails in NumPy <= 0.9.6.2127. - """ - M = matrix([[1,2,0,0], - [3,4,0,0], - [1,2,1,2], - [3,4,3,4]]) - sum0 = matrix([8,12,4,6]) - sum1 = matrix([3,7,6,14]).T - sumall = 30 - assert_array_equal(sum0, M.sum(axis=0)) - assert_array_equal(sum1, M.sum(axis=1)) - assert sumall == M.sum() - - - def test_prod(self): - x = matrix([[1,2,3],[4,5,6]]) - assert x.prod() == 720 - assert all(x.prod(0) == matrix([[4,10,18]])) - assert all(x.prod(1) == matrix([[6],[120]])) - - y = matrix([0,1,3]) - assert y.prod() == 0 - - def test_max(self): - x = matrix([[1,2,3],[4,5,6]]) - assert x.max() == 6 - assert all(x.max(0) == matrix([[4,5,6]])) - assert all(x.max(1) == matrix([[3],[6]])) - - def test_min(self): - x = matrix([[1,2,3],[4,5,6]]) - assert x.min() == 1 - assert all(x.min(0) == matrix([[1,2,3]])) - assert all(x.min(1) == matrix([[1],[4]])) - - def test_ptp(self): - x = np.arange(4).reshape((2,2)) - assert x.ptp() == 3 - assert all(x.ptp(0) == array([2, 2])) - assert all(x.ptp(1) == array([1, 1])) - - def test_var(self): - x = np.arange(9).reshape((3,3)) - mx = x.view(np.matrix) - assert_equal(x.var(ddof=0), mx.var(ddof=0)) - assert_equal(x.var(ddof=1), mx.var(ddof=1)) - - def test_basic(self): - import numpy.linalg as linalg - - A = array([[1., 2.], - [3., 4.]]) - mA = matrix(A) - assert allclose(linalg.inv(A), mA.I) - assert all(array(transpose(A) == mA.T)) - assert all(array(transpose(A) == mA.H)) - assert all(A == mA.A) - - B = A + 2j*A - mB = matrix(B) - assert allclose(linalg.inv(B), mB.I) - assert all(array(transpose(B) == mB.T)) - assert all(array(conjugate(transpose(B)) == mB.H)) - - def 
test_pinv(self): - x = matrix(arange(6).reshape(2,3)) - xpinv = matrix([[-0.77777778, 0.27777778], - [-0.11111111, 0.11111111], - [ 0.55555556, -0.05555556]]) - assert_almost_equal(x.I, xpinv) - - def test_comparisons(self): - A = arange(100).reshape(10,10) - mA = matrix(A) - mB = matrix(A) + 0.1 - assert all(mB == A+0.1) - assert all(mB == matrix(A+0.1)) - assert not any(mB == matrix(A-0.1)) - assert all(mA < mB) - assert all(mA <= mB) - assert all(mA <= mA) - assert not any(mA < mA) - - assert not any(mB < mA) - assert all(mB >= mA) - assert all(mB >= mB) - assert not any(mB > mB) - - assert all(mA == mA) - assert not any(mA == mB) - assert all(mB != mA) - - assert not all(abs(mA) > 0) - assert all(abs(mB > 0)) - - def test_asmatrix(self): - A = arange(100).reshape(10,10) - mA = asmatrix(A) - A[0,0] = -10 - assert A[0,0] == mA[0,0] - - def test_noaxis(self): - A = matrix([[1,0],[0,1]]) - assert A.sum() == matrix(2) - assert A.mean() == matrix(0.5) - - def test_repr(self): - A = matrix([[1,0],[0,1]]) - assert repr(A) == "matrix([[1, 0],\n [0, 1]])" - -class TestCasting(TestCase): - def test_basic(self): - A = arange(100).reshape(10,10) - mA = matrix(A) - - mB = mA.copy() - O = ones((10,10), float64) * 0.1 - mB = mB + O - assert mB.dtype.type == float64 - assert all(mA != mB) - assert all(mB == mA+0.1) - - mC = mA.copy() - O = ones((10,10), complex128) - mC = mC * O - assert mC.dtype.type == complex128 - assert all(mA != mB) - - -class TestAlgebra(TestCase): - def test_basic(self): - import numpy.linalg as linalg - - A = array([[1., 2.], - [3., 4.]]) - mA = matrix(A) - - B = identity(2) - for i in xrange(6): - assert allclose((mA ** i).A, B) - B = dot(B, A) - - Ainv = linalg.inv(A) - B = identity(2) - for i in xrange(6): - assert allclose((mA ** -i).A, B) - B = dot(B, Ainv) - - assert allclose((mA * mA).A, dot(A, A)) - assert allclose((mA + mA).A, (A + A)) - assert allclose((3*mA).A, (3*A)) - - mA2 = matrix(A) - mA2 *= 3 - assert allclose(mA2.A, 3*A) - - def 
test_pow(self): - """Test raising a matrix to an integer power works as expected.""" - m = matrix("1. 2.; 3. 4.") - m2 = m.copy() - m2 **= 2 - mi = m.copy() - mi **= -1 - m4 = m2.copy() - m4 **= 2 - assert_array_almost_equal(m2, m**2) - assert_array_almost_equal(m4, np.dot(m2, m2)) - assert_array_almost_equal(np.dot(mi, m), np.eye(2)) - - def test_notimplemented(self): - '''Check that 'not implemented' operations produce a failure.''' - A = matrix([[1., 2.], - [3., 4.]]) - - # __rpow__ - try: - 1.0**A - except TypeError: - pass - else: - self.fail("matrix.__rpow__ doesn't raise a TypeError") - - # __mul__ with something not a list, ndarray, tuple, or scalar - try: - A*object() - except TypeError: - pass - else: - self.fail("matrix.__mul__ with non-numeric object doesn't raise" - "a TypeError") - -class TestMatrixReturn(TestCase): - def test_instance_methods(self): - a = matrix([1.0], dtype='f8') - methodargs = { - 'astype' : ('intc',), - 'clip' : (0.0, 1.0), - 'compress' : ([1],), - 'repeat' : (1,), - 'reshape' : (1,), - 'swapaxes' : (0,0), - 'dot': np.array([1.0]), - } - excluded_methods = [ - 'argmin', 'choose', 'dump', 'dumps', 'fill', 'getfield', - 'getA', 'getA1', 'item', 'nonzero', 'put', 'putmask', 'resize', - 'searchsorted', 'setflags', 'setfield', 'sort', 'take', - 'tofile', 'tolist', 'tostring', 'all', 'any', 'sum', - 'argmax', 'argmin', 'min', 'max', 'mean', 'var', 'ptp', - 'prod', 'std', 'ctypes', 'itemset' - ] - for attrib in dir(a): - if attrib.startswith('_') or attrib in excluded_methods: - continue - f = getattr(a, attrib) - if callable(f): - # reset contents of a - a.astype('f8') - a.fill(1.0) - if attrib in methodargs: - args = methodargs[attrib] - else: - args = () - b = f(*args) - assert type(b) is matrix, "%s" % attrib - assert type(a.real) is matrix - assert type(a.imag) is matrix - c,d = matrix([0.0]).nonzero() - assert type(c) is matrix - assert type(d) is matrix - - -class TestIndexing(TestCase): - def test_basic(self): - x = 
asmatrix(zeros((3,2),float)) - y = zeros((3,1),float) - y[:,0] = [0.8,0.2,0.3] - x[:,1] = y>0.5 - assert_equal(x, [[0,1],[0,0],[0,0]]) - - -class TestNewScalarIndexing(TestCase): - def setUp(self): - self.a = matrix([[1, 2],[3,4]]) - - def test_dimensions(self): - a = self.a - x = a[0] - assert_equal(x.ndim, 2) - - def test_array_from_matrix_list(self): - a = self.a - x = array([a, a]) - assert_equal(x.shape, [2,2,2]) - - def test_array_to_list(self): - a = self.a - assert_equal(a.tolist(),[[1, 2], [3, 4]]) - - def test_fancy_indexing(self): - a = self.a - x = a[1, [0,1,0]] - assert isinstance(x, matrix) - assert_equal(x, matrix([[3, 4, 3]])) - x = a[[1,0]] - assert isinstance(x, matrix) - assert_equal(x, matrix([[3, 4], [1, 2]])) - x = a[[[1],[0]],[[1,0],[0,1]]] - assert isinstance(x, matrix) - assert_equal(x, matrix([[4, 3], [1, 2]])) - - def test_matrix_element(self): - x = matrix([[1,2,3],[4,5,6]]) - assert_equal(x[0][0],matrix([[1,2,3]])) - assert_equal(x[0][0].shape,(1,3)) - assert_equal(x[0].shape,(1,3)) - assert_equal(x[:,0].shape,(2,1)) - - x = matrix(0) - assert_equal(x[0,0],0) - assert_equal(x[0],0) - assert_equal(x[:,0].shape,x.shape) - - def test_scalar_indexing(self): - x = asmatrix(zeros((3,2),float)) - assert_equal(x[0,0],x[0][0]) - - def test_row_column_indexing(self): - x = asmatrix(np.eye(2)) - assert_array_equal(x[0,:],[[1,0]]) - assert_array_equal(x[1,:],[[0,1]]) - assert_array_equal(x[:,0],[[1],[0]]) - assert_array_equal(x[:,1],[[0],[1]]) - - def test_boolean_indexing(self): - A = arange(6) - A.shape = (3,2) - x = asmatrix(A) - assert_array_equal(x[:,array([True,False])],x[:,0]) - assert_array_equal(x[array([True,False,False]),:],x[0,:]) - - def test_list_indexing(self): - A = arange(6) - A.shape = (3,2) - x = asmatrix(A) - assert_array_equal(x[:,[1,0]],x[:,::-1]) - assert_array_equal(x[[2,1,0],:],x[::-1,:]) - -class TestPower(TestCase): - def test_returntype(self): - a = array([[0,1],[0,0]]) - assert type(matrix_power(a, 2)) is ndarray - a =
mat(a) - assert type(matrix_power(a, 2)) is matrix - - def test_list(self): - assert_array_equal(matrix_power([[0, 1], [0, 0]], 2), [[0, 0], [0, 0]]) - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/matrixlib/tests/test_multiarray.py b/pythonPackages/numpy/numpy/matrixlib/tests/test_multiarray.py deleted file mode 100755 index 9f2dce7e47..0000000000 --- a/pythonPackages/numpy/numpy/matrixlib/tests/test_multiarray.py +++ /dev/null @@ -1,16 +0,0 @@ -import numpy as np -from numpy.testing import * - -class TestView(TestCase): - def test_type(self): - x = np.array([1,2,3]) - assert(isinstance(x.view(np.matrix),np.matrix)) - - def test_keywords(self): - x = np.array([(1,2)],dtype=[('a',np.int8),('b',np.int8)]) - # We must be specific about the endianness here: - y = x.view(dtype=' - -#define _libnumarray_MODULE -#include "include/numpy/libnumarray.h" -#include "numpy/npy_3kcompat.h" -#include - -#if (defined(__unix__) || defined(unix)) && !defined(USG) -#include -#endif - -#if defined(__GLIBC__) || defined(__APPLE__) || defined(__MINGW32__) || (defined(__FreeBSD__) && (__FreeBSD_version >= 502114)) -#include -#elif defined(__CYGWIN__) -#include "numpy/fenv/fenv.h" -#include "numpy/fenv/fenv.c" -#endif - -static PyObject *pCfuncClass; -static PyTypeObject CfuncType; -static PyObject *pHandleErrorFunc; - -static int -deferred_libnumarray_init(void) -{ -static int initialized=0; - - if (initialized) return 0; - - pCfuncClass = (PyObject *) &CfuncType; - Py_INCREF(pCfuncClass); - - pHandleErrorFunc = - NA_initModuleGlobal("numpy.numarray.util", "handleError"); - - if (!pHandleErrorFunc) goto _fail; - - - /* _exit: */ - initialized = 1; - return 0; - -_fail: - initialized = 0; - return -1; -} - - - -/**********************************************************************/ -/* Buffer Utility Functions */ -/**********************************************************************/ - -static PyObject * -getBuffer( PyObject *obj) -{ - if 
(!obj) return PyErr_Format(PyExc_RuntimeError, - "NULL object passed to getBuffer()"); - if (((PyObject*)obj)->ob_type->tp_as_buffer == NULL) { - return PyObject_CallMethod(obj, "__buffer__", NULL); - } else { - Py_INCREF(obj); /* Since CallMethod returns a new object when it - succeeds, We'll need to DECREF later to free it. - INCREF ordinary buffers here so we don't have to - remember where the buffer came from at DECREF time. - */ - return obj; - } -} - -/* Either it defines the buffer API, or it is an instance which returns - a buffer when obj.__buffer__() is called */ -static int -isBuffer (PyObject *obj) -{ - PyObject *buf = getBuffer(obj); - int ans = 0; - if (buf) { - ans = buf->ob_type->tp_as_buffer != NULL; - Py_DECREF(buf); - } else { - PyErr_Clear(); - } - return ans; -} - -/**********************************************************************/ - -static int -getWriteBufferDataPtr(PyObject *buffobj, void **buff) -{ -#if defined(NPY_PY3K) - /* FIXME: XXX - needs implementation */ - PyErr_SetString(PyExc_RuntimeError, - "XXX: getWriteBufferDataPtr is not implemented"); - return -1; -#else - int rval = -1; - PyObject *buff2; - if ((buff2 = getBuffer(buffobj))) - { - if (buff2->ob_type->tp_as_buffer->bf_getwritebuffer) - rval = buff2->ob_type->tp_as_buffer->bf_getwritebuffer(buff2, - 0, buff); - Py_DECREF(buff2); - } - return rval; -#endif -} - -/**********************************************************************/ - -static int -isBufferWriteable (PyObject *buffobj) -{ - void *ptr; - int rval = -1; - rval = getWriteBufferDataPtr(buffobj, &ptr); - if (rval == -1) - PyErr_Clear(); /* Since we're just "testing", it's not really an error */ - return rval != -1; -} - -/**********************************************************************/ - -static int -getReadBufferDataPtr(PyObject *buffobj, void **buff) -{ -#if defined(NPY_PY3K) - /* FIXME: XXX - needs implementation */ - PyErr_SetString(PyExc_RuntimeError, - "XXX: getWriteBufferDataPtr is not 
implemented"); - return -1; -#else - int rval = -1; - PyObject *buff2; - if ((buff2 = getBuffer(buffobj))) { - if (buff2->ob_type->tp_as_buffer->bf_getreadbuffer) - rval = buff2->ob_type->tp_as_buffer->bf_getreadbuffer(buff2, - 0, buff); - Py_DECREF(buff2); - } - return rval; -#endif -} - -/**********************************************************************/ - -static int -getBufferSize(PyObject *buffobj) -{ -#if defined(NPY_PY3K) - /* FIXME: XXX - needs implementation */ - PyErr_SetString(PyExc_RuntimeError, - "XXX: getWriteBufferDataPtr is not implemented"); - return -1; -#else - Py_ssize_t size=0; - PyObject *buff2; - if ((buff2 = getBuffer(buffobj))) - { - (void) buff2->ob_type->tp_as_buffer->bf_getsegcount(buff2, &size); - Py_DECREF(buff2); - } - else - size = -1; - return size; -#endif -} - - -static double numarray_zero = 0.0; - -static double raiseDivByZero(void) -{ - return 1.0/numarray_zero; -} - -static double raiseNegDivByZero(void) -{ - return -1.0/numarray_zero; -} - -static double num_log(double x) -{ - if (x == 0.0) - return raiseNegDivByZero(); - else - return log(x); -} - -static double num_log10(double x) -{ - if (x == 0.0) - return raiseNegDivByZero(); - else - return log10(x); -} - -static double num_pow(double x, double y) -{ - int z = (int) y; - if ((x < 0.0) && (y != z)) - return raiseDivByZero(); - else - return pow(x, y); -} - -/* Inverse hyperbolic trig functions from Numeric */ -static double num_acosh(double x) -{ - return log(x + sqrt((x-1.0)*(x+1.0))); -} - -static double num_asinh(double xx) -{ - double x; - int sign; - if (xx < 0.0) { - sign = -1; - x = -xx; - } - else { - sign = 1; - x = xx; - } - return sign*log(x + sqrt(x*x+1.0)); -} - -static double num_atanh(double x) -{ - return 0.5*log((1.0+x)/(1.0-x)); -} - -/* NUM_CROUND (in numcomplex.h) also calls num_round */ -static double num_round(double x) -{ - return (x >= 0) ? 
floor(x+0.5) : ceil(x-0.5); -} - - -/* The following routine is used in the event of a detected integer * - ** divide by zero so that a floating divide by zero is generated. * - ** This is done since numarray uses the floating point exception * - ** sticky bits to detect errors. The last bit is an attempt to * - ** prevent optimization of the divide by zero away, the input value * - ** should always be 0 * - */ - -static int int_dividebyzero_error(long NPY_UNUSED(value), long NPY_UNUSED(unused)) { - double dummy; - dummy = 1./numarray_zero; - if (dummy) /* to prevent optimizer from eliminating expression */ - return 0; - else - return 1; -} - -/* Likewise for Integer overflows */ -#if defined(__GLIBC__) || defined(__APPLE__) || defined(__CYGWIN__) || defined(__MINGW32__) || (defined(__FreeBSD__) && (__FreeBSD_version >= 502114)) -static int int_overflow_error(Float64 value) { /* For x86_64 */ - feraiseexcept(FE_OVERFLOW); - return (int) value; -} -#else -static int int_overflow_error(Float64 value) { - double dummy; - dummy = pow(1.e10, fabs(value/2)); - if (dummy) /* to prevent optimizer from eliminating expression */ - return (int) value; - else - return 1; -} -#endif - -static int umult64_overflow(UInt64 a, UInt64 b) -{ - UInt64 ah, al, bh, bl, w, x, y, z; - - ah = (a >> 32); - al = (a & 0xFFFFFFFFL); - bh = (b >> 32); - bl = (b & 0xFFFFFFFFL); - - /* 128-bit product: z*2**64 + (x+y)*2**32 + w */ - w = al*bl; - x = bh*al; - y = ah*bl; - z = ah*bh; - - /* *c = ((x + y)<<32) + w; */ - return z || (x>>32) || (y>>32) || - (((x & 0xFFFFFFFFL) + (y & 0xFFFFFFFFL) + (w >> 32)) >> 32); -} - -static int smult64_overflow(Int64 a0, Int64 b0) -{ - UInt64 a, b; - UInt64 ah, al, bh, bl, w, x, y, z; - - /* Convert to non-negative quantities */ - if (a0 < 0) { a = -a0; } else { a = a0; } - if (b0 < 0) { b = -b0; } else { b = b0; } - - ah = (a >> 32); - al = (a & 0xFFFFFFFFL); - bh = (b >> 32); - bl = (b & 0xFFFFFFFFL); - - w = al*bl; - x = bh*al; - y = ah*bl; - z = ah*bh; - - 
/* - UInt64 c = ((x + y)<<32) + w; - if ((a0 < 0) ^ (b0 < 0)) - *c = -c; - else - *c = c - */ - - return z || (x>>31) || (y>>31) || - (((x & 0xFFFFFFFFL) + (y & 0xFFFFFFFFL) + (w >> 32)) >> 31); -} - - -static void -NA_Done(void) -{ - return; -} - -static PyArrayObject * -NA_NewAll(int ndim, maybelong *shape, NumarrayType type, - void *buffer, maybelong byteoffset, maybelong bytestride, - int byteorder, int aligned, int writeable) -{ - PyArrayObject *result = NA_NewAllFromBuffer( - ndim, shape, type, Py_None, byteoffset, bytestride, - byteorder, aligned, writeable); - - if (result) { - if (!NA_NumArrayCheck((PyObject *) result)) { - PyErr_Format( PyExc_TypeError, - "NA_NewAll: non-NumArray result"); - result = NULL; - } else { - if (buffer) { - memcpy(result->data, buffer, NA_NBYTES(result)); - } else { - memset(result->data, 0, NA_NBYTES(result)); - } - } - } - return result; -} - -static PyArrayObject * -NA_NewAllStrides(int ndim, maybelong *shape, maybelong *strides, - NumarrayType type, void *buffer, maybelong byteoffset, - int byteorder, int aligned, int writeable) -{ - int i; - PyArrayObject *result = NA_NewAll(ndim, shape, type, buffer, - byteoffset, 0, - byteorder, aligned, writeable); - for(i=0; istrides[i] = strides[i]; - return result; -} - - -static PyArrayObject * -NA_New(void *buffer, NumarrayType type, int ndim, ...) -{ - int i; - maybelong shape[MAXDIM]; - va_list ap; - va_start(ap, ndim); - for(i=0; i out.copyFrom(shadow) */ - Py_DECREF(shadow); - Py_INCREF(Py_None); - rval = Py_None; - return rval; - } -} - -static long NA_getBufferPtrAndSize(PyObject *buffobj, int readonly, void **ptr) -{ - long rval; - if (readonly) - rval = getReadBufferDataPtr(buffobj, ptr); - else - rval = getWriteBufferDataPtr(buffobj, ptr); - return rval; -} - - -static int NA_checkIo(char *name, - int wantIn, int wantOut, int gotIn, int gotOut) -{ - if (wantIn != gotIn) { - PyErr_Format(_Error, - "%s: wrong # of input buffers. Expected %d. 
Got %d.", - name, wantIn, gotIn); - return -1; - } - if (wantOut != gotOut) { - PyErr_Format(_Error, - "%s: wrong # of output buffers. Expected %d. Got %d.", - name, wantOut, gotOut); - return -1; - } - return 0; -} - -static int NA_checkOneCBuffer(char *name, long niter, - void *buffer, long bsize, size_t typesize) -{ - Int64 lniter = niter, ltypesize = typesize; - - if (lniter*ltypesize > bsize) { - PyErr_Format(_Error, - "%s: access out of buffer. niter=%d typesize=%d bsize=%d", - name, (int) niter, (int) typesize, (int) bsize); - return -1; - } - if ((typesize <= sizeof(Float64)) && (((long) buffer) % typesize)) { - PyErr_Format(_Error, - "%s: buffer not aligned on %d byte boundary.", - name, (int) typesize); - return -1; - } - return 0; -} - - -static int NA_checkNCBuffers(char *name, int N, long niter, - void **buffers, long *bsizes, - Int8 *typesizes, Int8 *iters) -{ - int i; - for (i=0; i= 0) { /* Skip dimension == 0. */ - omax = MAX(omax, tmax); - omin = MIN(omin, tmin); - if (align && (ABS(stride[i]) % alignsize)) { - PyErr_Format(_Error, - "%s: stride %d not aligned on %d byte boundary.", - name, (int) stride[i], (int) alignsize); - return -1; - } - if (omax + itemsize > buffersize) { - PyErr_Format(_Error, - "%s: access beyond buffer. offset=%d buffersize=%d", - name, (int) (omax+itemsize-1), (int) buffersize); - return -1; - } - if (omin < 0) { - PyErr_Format(_Error, - "%s: access before buffer. offset=%d buffersize=%d", - name, (int) omin, (int) buffersize); - return -1; - } - } - } - return 0; -} - -/* Function to call standard C Ufuncs - ** - ** The C Ufuncs expect contiguous 1-d data numarray, input and output numarray - ** iterate with standard increments of one data element over all numarray. - ** (There are some exceptions like arrayrangexxx which use one or more of - ** the data numarray as parameter or other sources of information and do not - ** iterate over every buffer). 
- ** - ** Arguments: - ** - ** Number of iterations (simple integer value). - ** Number of input numarray. - ** Number of output numarray. - ** Tuple of tuples, one tuple per input/output array. Each of these - ** tuples consists of a buffer object and a byte offset to start. - ** - ** Returns None - */ - - -static PyObject * -NA_callCUFuncCore(PyObject *self, - long niter, long ninargs, long noutargs, - PyObject **BufferObj, long *offset) -{ - CfuncObject *me = (CfuncObject *) self; - char *buffers[MAXARGS]; - long bsizes[MAXARGS]; - long i, pnargs = ninargs + noutargs; - UFUNC ufuncptr; - - if (pnargs > MAXARGS) - return PyErr_Format(PyExc_RuntimeError, "NA_callCUFuncCore: too many parameters"); - - if (!PyObject_IsInstance(self, (PyObject *) &CfuncType) - || me->descr.type != CFUNC_UFUNC) - return PyErr_Format(PyExc_TypeError, - "NA_callCUFuncCore: problem with cfunc."); - - for (i=0; idescr.name, (int) offset[i], (int) i); - if ((bsizes[i] = NA_getBufferPtrAndSize(BufferObj[i], readonly, - (void *) &buffers[i])) < 0) - return PyErr_Format(_Error, - "%s: Problem with %s buffer[%d].", - me->descr.name, - readonly ? "read" : "write", (int) i); - buffers[i] += offset[i]; - bsizes[i] -= offset[i]; /* "shorten" buffer size by offset. 
*/ - } - - ufuncptr = (UFUNC) me->descr.fptr; - - /* If it's not a self-checking ufunc, check arg count match, - buffer size, and alignment for all buffers */ - if (!me->descr.chkself && - (NA_checkIo(me->descr.name, - me->descr.wantIn, me->descr.wantOut, ninargs, noutargs) || - NA_checkNCBuffers(me->descr.name, pnargs, - niter, (void **) buffers, bsizes, - me->descr.sizes, me->descr.iters))) - return NULL; - - /* Since the parameters are valid, call the C Ufunc */ - if (!(*ufuncptr)(niter, ninargs, noutargs, (void **)buffers, bsizes)) { - Py_INCREF(Py_None); - return Py_None; - } else { - return NULL; - } -} - -static PyObject * -callCUFunc(PyObject *self, PyObject *args) { - PyObject *DataArgs, *ArgTuple; - long pnargs, ninargs, noutargs, niter, i; - CfuncObject *me = (CfuncObject *) self; - PyObject *BufferObj[MAXARGS]; - long offset[MAXARGS]; - - if (!PyArg_ParseTuple(args, "lllO", - &niter, &ninargs, &noutargs, &DataArgs)) - return PyErr_Format(_Error, - "%s: Problem with argument list", me->descr.name); - - /* check consistency of stated inputs/outputs and supplied buffers */ - pnargs = PyObject_Length(DataArgs); - if ((pnargs != (ninargs+noutargs)) || (pnargs > MAXARGS)) - return PyErr_Format(_Error, - "%s: wrong buffer count for function", me->descr.name); - - /* Unpack buffers and offsets, get data pointers */ - for (i=0; idescr.name); - } - return NA_callCUFuncCore(self, niter, ninargs, noutargs, BufferObj, offset); -} - -static PyObject * -callStrideConvCFunc(PyObject *self, PyObject *args) { - PyObject *inbuffObj, *outbuffObj, *shapeObj; - PyObject *inbstridesObj, *outbstridesObj; - CfuncObject *me = (CfuncObject *) self; - int nshape, ninbstrides, noutbstrides; - maybelong shape[MAXDIM], inbstrides[MAXDIM], - outbstrides[MAXDIM], *outbstrides1 = outbstrides; - long inboffset, outboffset, nbytes=0; - - if (!PyArg_ParseTuple(args, "OOlOOlO|l", - &shapeObj, &inbuffObj, &inboffset, &inbstridesObj, - &outbuffObj, &outboffset, &outbstridesObj, - &nbytes)) { 
- return PyErr_Format(_Error, - "%s: Problem with argument list", - me->descr.name); - } - - nshape = NA_maybeLongsFromIntTuple(MAXDIM, shape, shapeObj); - if (nshape < 0) return NULL; - - ninbstrides = NA_maybeLongsFromIntTuple(MAXDIM, inbstrides, inbstridesObj); - if (ninbstrides < 0) return NULL; - - noutbstrides= NA_maybeLongsFromIntTuple(MAXDIM, outbstrides, outbstridesObj); - if (noutbstrides < 0) return NULL; - - if (nshape && (nshape != ninbstrides)) { - return PyErr_Format(_Error, - "%s: Mismatch between input iteration and strides tuples", - me->descr.name); - } - - if (nshape && (nshape != noutbstrides)) { - if (noutbstrides < 1 || - outbstrides[ noutbstrides - 1 ])/* allow 0 for reductions. */ - return PyErr_Format(_Error, - "%s: Mismatch between output " - "iteration and strides tuples", - me->descr.name); - } - - return NA_callStrideConvCFuncCore( - self, nshape, shape, - inbuffObj, inboffset, ninbstrides, inbstrides, - outbuffObj, outboffset, noutbstrides, outbstrides1, nbytes); -} - -static int -_NA_callStridingHelper(PyObject *aux, long dim, - long nnumarray, PyArrayObject *numarray[], char *data[], - CFUNC_STRIDED_FUNC f) -{ - int i, j, status=0; - dim -= 1; - for(i=0; i < numarray[0]->dimensions[dim]; i++) { - for (j=0; j < nnumarray; j++) data[j] += numarray[j]->strides[dim]*i; - if (dim == 0) - status |= f(aux, nnumarray, numarray, data); - else - status |= _NA_callStridingHelper( - aux, dim, nnumarray, numarray, data, f); - for (j=0; j < nnumarray; j++) data[j] -= numarray[j]->strides[dim]*i; - } - return status; -} - - -static PyObject * -callStridingCFunc(PyObject *self, PyObject *args) { - CfuncObject *me = (CfuncObject *) self; - PyObject *aux; - PyArrayObject *numarray[MAXARRAYS]; - char *data[MAXARRAYS]; - CFUNC_STRIDED_FUNC f; - int i; - - int nnumarray = PySequence_Length(args)-1; - if ((nnumarray < 1) || (nnumarray > MAXARRAYS)) - return PyErr_Format(_Error, "%s, too many or too few numarray.", - me->descr.name); - - aux = PySequence_GetItem(args, 0); - if (!aux) - return NULL; - - for(i=0; idescr.name, i); - if
(!NA_NDArrayCheck(otemp)) - return PyErr_Format(PyExc_TypeError, - "%s arg[%d] is not an array.", - me->descr.name, i); - numarray[i] = (PyArrayObject *) otemp; - data[i] = numarray[i]->data; - Py_DECREF(otemp); - if (!NA_updateDataPtr(numarray[i])) - return NULL; - } - - /* Cast function pointer and perform stride operation */ - f = (CFUNC_STRIDED_FUNC) me->descr.fptr; - - if (_NA_callStridingHelper(aux, numarray[0]->nd, - nnumarray, numarray, data, f)) { - return NULL; - } else { - Py_INCREF(Py_None); - return Py_None; - } -} - -/* Convert a standard C numeric value to a Python numeric value. - ** - ** Handles both nonaligned and/or byteswapped C data. - ** - ** Input arguments are: - ** - ** Buffer object that contains the C numeric value. - ** Offset (in bytes) into the buffer that the data is located at. - ** The size of the C numeric data item in bytes. - ** Flag indicating if the C data is byteswapped from the processor's - ** natural representation. - ** - ** Returns a Python numeric value. - */ - -static PyObject * -NumTypeAsPyValue(PyObject *self, PyObject *args) { - PyObject *bufferObj; - long offset, itemsize, byteswap, i, buffersize; - Py_complex temp; /* to hold copies of largest possible type */ - void *buffer; - char *tempptr; - CFUNCasPyValue funcptr; - CfuncObject *me = (CfuncObject *) self; - - if (!PyArg_ParseTuple(args, "Olll", - &bufferObj, &offset, &itemsize, &byteswap)) - return PyErr_Format(_Error, - "NumTypeAsPyValue: Problem with argument list"); - - if ((buffersize = NA_getBufferPtrAndSize(bufferObj, 1, &buffer)) < 0) - return PyErr_Format(_Error, - "NumTypeAsPyValue: Problem with array buffer"); - - if (offset < 0) - return PyErr_Format(_Error, - "NumTypeAsPyValue: invalid negative offset: %d", (int) offset); - - /* Guarantee valid buffer pointer */ - if (offset+itemsize > buffersize) - return PyErr_Format(_Error, - "NumTypeAsPyValue: buffer too small for offset and itemsize."); - - /* Do byteswapping. 
Guarantee double alignment by using temp. */ - tempptr = (char *) &temp; - if (!byteswap) { - for (i=0; idescr.fptr; - - /* Call function to build PyObject. Bad parameters to this function - may render call meaningless, but "temp" guarantees that its safe. */ - return (*funcptr)((void *)(&temp)); -} - -/* Convert a Python numeric value to a standard C numeric value. - ** - ** Handles both nonaligned and/or byteswapped C data. - ** - ** Input arguments are: - ** - ** The Python numeric value to be converted. - ** Buffer object to contain the C numeric value. - ** Offset (in bytes) into the buffer that the data is to be copied to. - ** The size of the C numeric data item in bytes. - ** Flag indicating if the C data is byteswapped from the processor's - ** natural representation. - ** - ** Returns None - */ - -static PyObject * -NumTypeFromPyValue(PyObject *self, PyObject *args) { - PyObject *bufferObj, *valueObj; - long offset, itemsize, byteswap, i, buffersize; - Py_complex temp; /* to hold copies of largest possible type */ - void *buffer; - char *tempptr; - CFUNCfromPyValue funcptr; - CfuncObject *me = (CfuncObject *) self; - - if (!PyArg_ParseTuple(args, "OOlll", - &valueObj, &bufferObj, &offset, &itemsize, &byteswap)) - return PyErr_Format(_Error, - "%s: Problem with argument list", me->descr.name); - - if ((buffersize = NA_getBufferPtrAndSize(bufferObj, 0, &buffer)) < 0) - return PyErr_Format(_Error, - "%s: Problem with array buffer (read only?)", me->descr.name); - - funcptr = (CFUNCfromPyValue) me->descr.fptr; - - /* Convert python object into "temp". Always safe. */ - if (!((*funcptr)(valueObj, (void *)( &temp)))) - return PyErr_Format(_Error, - "%s: Problem converting value", me->descr.name); - - /* Check buffer offset. 
*/ - if (offset < 0) - return PyErr_Format(_Error, - "%s: invalid negative offset: %d", me->descr.name, (int) offset); - - if (offset+itemsize > buffersize) - return PyErr_Format(_Error, - "%s: buffer too small(%d) for offset(%d) and itemsize(%d)", - me->descr.name, (int) buffersize, (int) offset, (int) itemsize); - - /* Copy "temp" to array buffer. */ - tempptr = (char *) &temp; - if (!byteswap) { - for (i=0; idescr.type) { - case CFUNC_UFUNC: - return callCUFunc(self, argsTuple); - break; - case CFUNC_STRIDING: - return callStrideConvCFunc(self, argsTuple); - break; - case CFUNC_NSTRIDING: - return callStridingCFunc(self, argsTuple); - case CFUNC_FROM_PY_VALUE: - return NumTypeFromPyValue(self, argsTuple); - break; - case CFUNC_AS_PY_VALUE: - return NumTypeAsPyValue(self, argsTuple); - break; - default: - return PyErr_Format( _Error, - "cfunc_call: Can't dispatch cfunc '%s' with type: %d.", - me->descr.name, me->descr.type); - } -} - -static PyTypeObject CfuncType; - -static void -cfunc_dealloc(PyObject* self) -{ - PyObject_Del(self); -} - -static PyObject * -cfunc_repr(PyObject *self) -{ - char buf[256]; - CfuncObject *me = (CfuncObject *) self; - sprintf(buf, "", - me->descr.name, (unsigned long ) me->descr.fptr, - me->descr.chkself, me->descr.align, - me->descr.wantIn, me->descr.wantOut); - return PyUString_FromString(buf); -} - -static PyTypeObject CfuncType = { -#if defined(NPY_PY3K) - PyVarObject_HEAD_INIT(0,0) -#else - PyObject_HEAD_INIT(0) - 0, /* ob_size */ -#endif - "Cfunc", - sizeof(CfuncObject), - 0, - cfunc_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - cfunc_repr, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - cfunc_call, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - 0, /* tp_flags */ - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* 
tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ -#if PY_VERSION_HEX >= 0x02060000 - 0, /* tp_version_tag */ -#endif - }; - -/* CfuncObjects are created at the c-level only. They ensure that each - cfunc is called via the correct python-c-wrapper as defined by its - CfuncDescriptor. The wrapper, in turn, does conversions and buffer size - and alignment checking. Allowing these to be created at the python level - would enable them to be created *wrong* at the python level, and thereby - enable python code to *crash* python. - */ -static PyObject* -NA_new_cfunc(CfuncDescriptor *cfd) -{ - CfuncObject* cfunc; - - /* Should be done once at init. - Do now since there is no init. */ - ((PyObject*)&CfuncType)->ob_type = &PyType_Type; - - cfunc = PyObject_New(CfuncObject, &CfuncType); - - if (!cfunc) { - return PyErr_Format(_Error, - "NA_new_cfunc: failed creating '%s'", - cfd->name); - } - - cfunc->descr = *cfd; - - return (PyObject*)cfunc; -} - -static int NA_add_cfunc(PyObject *dict, char *keystr, CfuncDescriptor *descr) -{ - PyObject *c = (PyObject *) NA_new_cfunc(descr); - if (!c) return -1; - return PyDict_SetItemString(dict, keystr, c); -} - -static PyArrayObject* -NA_InputArray(PyObject *a, NumarrayType t, int requires) -{ - PyArray_Descr *descr; - if (t == tAny) descr = NULL; - else descr = PyArray_DescrFromType(t); - return (PyArrayObject *) \ - PyArray_CheckFromAny(a, descr, 0, 0, requires, NULL); -} - -/* satisfies ensures that 'a' meets a set of requirements and matches - the specified type. 
- */ -static int -satisfies(PyArrayObject *a, int requirements, NumarrayType t) -{ - int type_ok = (a->descr->type_num == t) || (t == tAny); - - if (PyArray_ISCARRAY(a)) - return type_ok; - if (PyArray_ISBYTESWAPPED(a) && (requirements & NUM_NOTSWAPPED)) - return 0; - if (!PyArray_ISALIGNED(a) && (requirements & NUM_ALIGNED)) - return 0; - if (!PyArray_ISCONTIGUOUS(a) && (requirements & NUM_CONTIGUOUS)) - return 0; - if (!PyArray_ISWRITABLE(a) && (requirements & NUM_WRITABLE)) - return 0; - if (requirements & NUM_COPY) - return 0; - return type_ok; -} - - -static PyArrayObject * -NA_OutputArray(PyObject *a, NumarrayType t, int requires) -{ - PyArray_Descr *dtype; - PyArrayObject *ret; - - if (!PyArray_Check(a) || !PyArray_ISWRITEABLE(a)) { - PyErr_Format(PyExc_TypeError, - "NA_OutputArray: only writeable arrays work for output."); - return NULL; - } - - if (satisfies((PyArrayObject *)a, requires, t)) { - Py_INCREF(a); - return (PyArrayObject *)a; - } - if (t == tAny) { - dtype = PyArray_DESCR(a); - Py_INCREF(dtype); - } - else { - dtype = PyArray_DescrFromType(t); - } - ret = (PyArrayObject *)PyArray_Empty(PyArray_NDIM(a), PyArray_DIMS(a), - dtype, 0); - ret->flags |= NPY_UPDATEIFCOPY; - ret->base = a; - PyArray_FLAGS(a) &= ~NPY_WRITEABLE; - Py_INCREF(a); - return ret; -} - - -/* NA_IoArray is a combination of NA_InputArray and NA_OutputArray. - - Unlike NA_OutputArray, if a temporary is required it is initialized to a copy - of the input array. - - Unlike NA_InputArray, deallocating any resulting temporary array results in a - copy from the temporary back to the original. - */ -static PyArrayObject * -NA_IoArray(PyObject *a, NumarrayType t, int requires) -{ - PyArrayObject *shadow = NA_InputArray(a, t, requires | NPY_UPDATEIFCOPY ); - - if (!shadow) return NULL; - - /* Guard against non-writable, but otherwise satisfying requires. - In this case, shadow == a. 
- */ - if (!PyArray_ISWRITABLE(shadow)) { - PyErr_Format(PyExc_TypeError, - "NA_IoArray: I/O array must be writable array"); - PyArray_XDECREF_ERR(shadow); - return NULL; - } - - return shadow; -} - -/* NA_OptionalOutputArray works like NA_OutputArray, but handles the case - where the output array 'optional' is omitted entirely at the python level, - resulting in 'optional'==Py_None. When 'optional' is Py_None, the return - value is cloned (but with NumarrayType 't') from 'master', typically an input - array with the same shape as the output array. - */ -static PyArrayObject * -NA_OptionalOutputArray(PyObject *optional, NumarrayType t, int requires, - PyArrayObject *master) -{ - if ((optional == Py_None) || (optional == NULL)) { - PyObject *rval; - PyArray_Descr *descr; - if (t == tAny) descr=NULL; - else descr = PyArray_DescrFromType(t); - rval = PyArray_FromArray( - master, descr, NUM_C_ARRAY | NUM_COPY | NUM_WRITABLE); - return (PyArrayObject *)rval; - } else { - return NA_OutputArray(optional, t, requires); - } -} - -Complex64 NA_get_Complex64(PyArrayObject *a, long offset) -{ - Complex32 v0; - Complex64 v; - - switch(a->descr->type_num) { - case tComplex32: - v0 = NA_GETP(a, Complex32, (NA_PTR(a)+offset)); - v.r = v0.r; - v.i = v0.i; - break; - case tComplex64: - v = NA_GETP(a, Complex64, (NA_PTR(a)+offset)); - break; - default: - v.r = NA_get_Float64(a, offset); - v.i = 0; - break; - } - return v; -} - -void NA_set_Complex64(PyArrayObject *a, long offset, Complex64 v) -{ - Complex32 v0; - - switch(a->descr->type_num) { - case tComplex32: - v0.r = v.r; - v0.i = v.i; - NA_SETP(a, Complex32, (NA_PTR(a)+offset), v0); - break; - case tComplex64: - NA_SETP(a, Complex64, (NA_PTR(a)+offset), v); - break; - default: - NA_set_Float64(a, offset, v.r); - break; - } -} - -Int64 NA_get_Int64(PyArrayObject *a, long offset) -{ - switch(a->descr->type_num) { - case tBool: - return NA_GETP(a, Bool, (NA_PTR(a)+offset)) != 0; - case tInt8: - return NA_GETP(a, Int8, 
(NA_PTR(a)+offset)); - case tUInt8: - return NA_GETP(a, UInt8, (NA_PTR(a)+offset)); - case tInt16: - return NA_GETP(a, Int16, (NA_PTR(a)+offset)); - case tUInt16: - return NA_GETP(a, UInt16, (NA_PTR(a)+offset)); - case tInt32: - return NA_GETP(a, Int32, (NA_PTR(a)+offset)); - case tUInt32: - return NA_GETP(a, UInt32, (NA_PTR(a)+offset)); - case tInt64: - return NA_GETP(a, Int64, (NA_PTR(a)+offset)); - case tUInt64: - return NA_GETP(a, UInt64, (NA_PTR(a)+offset)); - case tFloat32: - return NA_GETP(a, Float32, (NA_PTR(a)+offset)); - case tFloat64: - return NA_GETP(a, Float64, (NA_PTR(a)+offset)); - case tComplex32: - return NA_GETP(a, Float32, (NA_PTR(a)+offset)); - case tComplex64: - return NA_GETP(a, Float64, (NA_PTR(a)+offset)); - default: - PyErr_Format( PyExc_TypeError, - "Unknown type %d in NA_get_Int64", - a->descr->type_num); - PyErr_Print(); - } - return 0; /* suppress warning */ -} - -void NA_set_Int64(PyArrayObject *a, long offset, Int64 v) -{ - Bool b; - - switch(a->descr->type_num) { - case tBool: - b = (v != 0); - NA_SETP(a, Bool, (NA_PTR(a)+offset), b); - break; - case tInt8: NA_SETP(a, Int8, (NA_PTR(a)+offset), v); - break; - case tUInt8: NA_SETP(a, UInt8, (NA_PTR(a)+offset), v); - break; - case tInt16: NA_SETP(a, Int16, (NA_PTR(a)+offset), v); - break; - case tUInt16: NA_SETP(a, UInt16, (NA_PTR(a)+offset), v); - break; - case tInt32: NA_SETP(a, Int32, (NA_PTR(a)+offset), v); - break; - case tUInt32: NA_SETP(a, UInt32, (NA_PTR(a)+offset), v); - break; - case tInt64: NA_SETP(a, Int64, (NA_PTR(a)+offset), v); - break; - case tUInt64: NA_SETP(a, UInt64, (NA_PTR(a)+offset), v); - break; - case tFloat32: - NA_SETP(a, Float32, (NA_PTR(a)+offset), v); - break; - case tFloat64: - NA_SETP(a, Float64, (NA_PTR(a)+offset), v); - break; - case tComplex32: - NA_SETP(a, Float32, (NA_PTR(a)+offset), v); - NA_SETP(a, Float32, (NA_PTR(a)+offset+sizeof(Float32)), 0); - break; - case tComplex64: - NA_SETP(a, Float64, (NA_PTR(a)+offset), v); - NA_SETP(a, Float64, 
(NA_PTR(a)+offset+sizeof(Float64)), 0); - break; - default: - PyErr_Format( PyExc_TypeError, - "Unknown type %d in NA_set_Int64", - a->descr->type_num); - PyErr_Print(); - } -} - -/* NA_get_offset computes the offset specified by the set of indices. - If N > 0, the indices are taken from the outer dimensions of the array. - If N < 0, the indices are taken from the inner dimensions of the array. - If N == 0, the offset is 0. - */ -long NA_get_offset(PyArrayObject *a, int N, ...) -{ - int i; - long offset = 0; - va_list ap; - va_start(ap, N); - if (N > 0) { /* compute offset of "outer" indices. */ - for(i=0; i<N; i++) - offset += va_arg(ap, int) * a->strides[i]; - } else { /* compute offset of "inner" indices. */ - N = -N; - for(i=0; i<N; i++) - offset += va_arg(ap, int) * a->strides[a->nd-N+i]; - } - va_end(ap); - return offset; -} - -Float64 NA_get_Float64(PyArrayObject *a, long offset) -{ - switch(a->descr->type_num) { - case tBool: - return NA_GETP(a, Bool, (NA_PTR(a)+offset)) != 0; - case tInt8: - return NA_GETP(a, Int8, (NA_PTR(a)+offset)); - case tUInt8: - return NA_GETP(a, UInt8, (NA_PTR(a)+offset)); - case tInt16: - return NA_GETP(a, Int16, (NA_PTR(a)+offset)); - case tUInt16: - return NA_GETP(a, UInt16, (NA_PTR(a)+offset)); - case tInt32: - return NA_GETP(a, Int32, (NA_PTR(a)+offset)); - case tUInt32: - return NA_GETP(a, UInt32, (NA_PTR(a)+offset)); - case tInt64: - return NA_GETP(a, Int64, (NA_PTR(a)+offset)); -#if HAS_UINT64 - case tUInt64: - return NA_GETP(a, UInt64, (NA_PTR(a)+offset)); -#endif - case tFloat32: - return NA_GETP(a, Float32, (NA_PTR(a)+offset)); - case tFloat64: - return NA_GETP(a, Float64, (NA_PTR(a)+offset)); - case tComplex32: /* Since real value is first */ - return NA_GETP(a, Float32, (NA_PTR(a)+offset)); - case tComplex64: /* Since real value is first */ - return NA_GETP(a, Float64, (NA_PTR(a)+offset)); - default: - PyErr_Format( PyExc_TypeError, - "Unknown type %d in NA_get_Float64", - a->descr->type_num); - } - return 0; /* suppress warning */ -} - -void NA_set_Float64(PyArrayObject *a, long offset, Float64 v) -{
- Bool b; - - switch(a->descr->type_num) { - case tBool: - b = (v != 0); - NA_SETP(a, Bool, (NA_PTR(a)+offset), b); - break; - case tInt8: NA_SETP(a, Int8, (NA_PTR(a)+offset), v); - break; - case tUInt8: NA_SETP(a, UInt8, (NA_PTR(a)+offset), v); - break; - case tInt16: NA_SETP(a, Int16, (NA_PTR(a)+offset), v); - break; - case tUInt16: NA_SETP(a, UInt16, (NA_PTR(a)+offset), v); - break; - case tInt32: NA_SETP(a, Int32, (NA_PTR(a)+offset), v); - break; - case tUInt32: NA_SETP(a, UInt32, (NA_PTR(a)+offset), v); - break; - case tInt64: NA_SETP(a, Int64, (NA_PTR(a)+offset), v); - break; -#if HAS_UINT64 - case tUInt64: NA_SETP(a, UInt64, (NA_PTR(a)+offset), v); - break; -#endif - case tFloat32: - NA_SETP(a, Float32, (NA_PTR(a)+offset), v); - break; - case tFloat64: - NA_SETP(a, Float64, (NA_PTR(a)+offset), v); - break; - case tComplex32: { - NA_SETP(a, Float32, (NA_PTR(a)+offset), v); - NA_SETP(a, Float32, (NA_PTR(a)+offset+sizeof(Float32)), 0); - break; - } - case tComplex64: { - NA_SETP(a, Float64, (NA_PTR(a)+offset), v); - NA_SETP(a, Float64, (NA_PTR(a)+offset+sizeof(Float64)), 0); - break; - } - default: - PyErr_Format( PyExc_TypeError, - "Unknown type %d in NA_set_Float64", - a->descr->type_num ); - PyErr_Print(); - } -} - - -Float64 NA_get1_Float64(PyArrayObject *a, long i) -{ - long offset = i * a->strides[0]; - return NA_get_Float64(a, offset); -} - -Float64 NA_get2_Float64(PyArrayObject *a, long i, long j) -{ - long offset = i * a->strides[0] - + j * a->strides[1]; - return NA_get_Float64(a, offset); -} - -Float64 NA_get3_Float64(PyArrayObject *a, long i, long j, long k) -{ - long offset = i * a->strides[0] - + j * a->strides[1] - + k * a->strides[2]; - return NA_get_Float64(a, offset); -} - -void NA_set1_Float64(PyArrayObject *a, long i, Float64 v) -{ - long offset = i * a->strides[0]; - NA_set_Float64(a, offset, v); -} - -void NA_set2_Float64(PyArrayObject *a, long i, long j, Float64 v) -{ - long offset = i * a->strides[0] - + j * a->strides[1]; - 
NA_set_Float64(a, offset, v); -} - -void NA_set3_Float64(PyArrayObject *a, long i, long j, long k, Float64 v) -{ - long offset = i * a->strides[0] - + j * a->strides[1] - + k * a->strides[2]; - NA_set_Float64(a, offset, v); -} - -Complex64 NA_get1_Complex64(PyArrayObject *a, long i) -{ - long offset = i * a->strides[0]; - return NA_get_Complex64(a, offset); -} - -Complex64 NA_get2_Complex64(PyArrayObject *a, long i, long j) -{ - long offset = i * a->strides[0] - + j * a->strides[1]; - return NA_get_Complex64(a, offset); -} - -Complex64 NA_get3_Complex64(PyArrayObject *a, long i, long j, long k) -{ - long offset = i * a->strides[0] - + j * a->strides[1] - + k * a->strides[2]; - return NA_get_Complex64(a, offset); -} - -void NA_set1_Complex64(PyArrayObject *a, long i, Complex64 v) -{ - long offset = i * a->strides[0]; - NA_set_Complex64(a, offset, v); -} - -void NA_set2_Complex64(PyArrayObject *a, long i, long j, Complex64 v) -{ - long offset = i * a->strides[0] - + j * a->strides[1]; - NA_set_Complex64(a, offset, v); -} - -void NA_set3_Complex64(PyArrayObject *a, long i, long j, long k, Complex64 v) -{ - long offset = i * a->strides[0] - + j * a->strides[1] - + k * a->strides[2]; - NA_set_Complex64(a, offset, v); -} - -Int64 NA_get1_Int64(PyArrayObject *a, long i) -{ - long offset = i * a->strides[0]; - return NA_get_Int64(a, offset); -} - -Int64 NA_get2_Int64(PyArrayObject *a, long i, long j) -{ - long offset = i * a->strides[0] - + j * a->strides[1]; - return NA_get_Int64(a, offset); -} - -Int64 NA_get3_Int64(PyArrayObject *a, long i, long j, long k) -{ - long offset = i * a->strides[0] - + j * a->strides[1] - + k * a->strides[2]; - return NA_get_Int64(a, offset); -} - -void NA_set1_Int64(PyArrayObject *a, long i, Int64 v) -{ - long offset = i * a->strides[0]; - NA_set_Int64(a, offset, v); -} - -void NA_set2_Int64(PyArrayObject *a, long i, long j, Int64 v) -{ - long offset = i * a->strides[0] - + j * a->strides[1]; - NA_set_Int64(a, offset, v); -} - -void 
NA_set3_Int64(PyArrayObject *a, long i, long j, long k, Int64 v) -{ - long offset = i * a->strides[0] - + j * a->strides[1] - + k * a->strides[2]; - NA_set_Int64(a, offset, v); -} - -/* SET_CMPLX could be made faster by factoring it into 3 separate loops. -*/ -#define NA_SET_CMPLX(a, type, base, cnt, in) \ -{ \ - int i; \ - int stride = a->strides[ a->nd - 1]; \ - NA_SET1D(a, type, base, cnt, in); \ - base = NA_PTR(a) + offset + sizeof(type); \ - for(i=0; i<cnt; i++) { \ - NA_SETP(a, type, base, 0); \ - base += stride; \ - } \ -} - -static int -NA_get1D_Float64(PyArrayObject *a, long offset, int cnt, Float64*out) -{ - char *base = NA_PTR(a) + offset; - - switch(a->descr->type_num) { - case tBool: - NA_GET1D(a, Bool, base, cnt, out); - break; - case tInt8: - NA_GET1D(a, Int8, base, cnt, out); - break; - case tUInt8: - NA_GET1D(a, UInt8, base, cnt, out); - break; - case tInt16: - NA_GET1D(a, Int16, base, cnt, out); - break; - case tUInt16: - NA_GET1D(a, UInt16, base, cnt, out); - break; - case tInt32: - NA_GET1D(a, Int32, base, cnt, out); - break; - case tUInt32: - NA_GET1D(a, UInt32, base, cnt, out); - break; - case tInt64: - NA_GET1D(a, Int64, base, cnt, out); - break; -#if HAS_UINT64 - case tUInt64: - NA_GET1D(a, UInt64, base, cnt, out); - break; -#endif - case tFloat32: - NA_GET1D(a, Float32, base, cnt, out); - break; - case tFloat64: - NA_GET1D(a, Float64, base, cnt, out); - break; - case tComplex32: - NA_GET1D(a, Float32, base, cnt, out); - break; - case tComplex64: - NA_GET1D(a, Float64, base, cnt, out); - break; - default: - PyErr_Format( PyExc_TypeError, - "Unknown type %d in NA_get1D_Float64", - a->descr->type_num); - PyErr_Print(); - return -1; - } - return 0; -} - -static Float64 * -NA_alloc1D_Float64(PyArrayObject *a, long offset, int cnt) -{ - Float64 *result = PyMem_New(Float64, (size_t)cnt); - if (!result) return NULL; - if (NA_get1D_Float64(a, offset, cnt, result) < 0) { - PyMem_Free(result); - return NULL; - } - return result; -} - -static int -NA_set1D_Float64(PyArrayObject *a, long offset, int cnt, Float64*in) -{ - char *base = NA_PTR(a) + offset; - - switch(a->descr->type_num) { - case tBool: - NA_SET1D(a, Bool, base, cnt, in); - break; - case tInt8: -
NA_SET1D(a, Int8, base, cnt, in); - break; - case tUInt8: - NA_SET1D(a, UInt8, base, cnt, in); - break; - case tInt16: - NA_SET1D(a, Int16, base, cnt, in); - break; - case tUInt16: - NA_SET1D(a, UInt16, base, cnt, in); - break; - case tInt32: - NA_SET1D(a, Int32, base, cnt, in); - break; - case tUInt32: - NA_SET1D(a, UInt32, base, cnt, in); - break; - case tInt64: - NA_SET1D(a, Int64, base, cnt, in); - break; -#if HAS_UINT64 - case tUInt64: - NA_SET1D(a, UInt64, base, cnt, in); - break; -#endif - case tFloat32: - NA_SET1D(a, Float32, base, cnt, in); - break; - case tFloat64: - NA_SET1D(a, Float64, base, cnt, in); - break; - case tComplex32: - NA_SET_CMPLX(a, Float32, base, cnt, in); - break; - case tComplex64: - NA_SET_CMPLX(a, Float64, base, cnt, in); - break; - default: - PyErr_Format( PyExc_TypeError, - "Unknown type %d in NA_set1D_Float64", - a->descr->type_num); - PyErr_Print(); - return -1; - } - return 0; -} - -static int -NA_get1D_Int64(PyArrayObject *a, long offset, int cnt, Int64*out) -{ - char *base = NA_PTR(a) + offset; - - switch(a->descr->type_num) { - case tBool: - NA_GET1D(a, Bool, base, cnt, out); - break; - case tInt8: - NA_GET1D(a, Int8, base, cnt, out); - break; - case tUInt8: - NA_GET1D(a, UInt8, base, cnt, out); - break; - case tInt16: - NA_GET1D(a, Int16, base, cnt, out); - break; - case tUInt16: - NA_GET1D(a, UInt16, base, cnt, out); - break; - case tInt32: - NA_GET1D(a, Int32, base, cnt, out); - break; - case tUInt32: - NA_GET1D(a, UInt32, base, cnt, out); - break; - case tInt64: - NA_GET1D(a, Int64, base, cnt, out); - break; - case tUInt64: - NA_GET1D(a, UInt64, base, cnt, out); - break; - case tFloat32: - NA_GET1D(a, Float32, base, cnt, out); - break; - case tFloat64: - NA_GET1D(a, Float64, base, cnt, out); - break; - case tComplex32: - NA_GET1D(a, Float32, base, cnt, out); - break; - case tComplex64: - NA_GET1D(a, Float64, base, cnt, out); - break; - default: - PyErr_Format( PyExc_TypeError, - "Unknown type %d in NA_get1D_Int64", - 
a->descr->type_num); - PyErr_Print(); - return -1; - } - return 0; -} - -static Int64 * -NA_alloc1D_Int64(PyArrayObject *a, long offset, int cnt) -{ - Int64 *result = PyMem_New(Int64, (size_t)cnt); - if (!result) return NULL; - if (NA_get1D_Int64(a, offset, cnt, result) < 0) { - PyMem_Free(result); - return NULL; - } - return result; -} - -static int -NA_set1D_Int64(PyArrayObject *a, long offset, int cnt, Int64*in) -{ - char *base = NA_PTR(a) + offset; - - switch(a->descr->type_num) { - case tBool: - NA_SET1D(a, Bool, base, cnt, in); - break; - case tInt8: - NA_SET1D(a, Int8, base, cnt, in); - break; - case tUInt8: - NA_SET1D(a, UInt8, base, cnt, in); - break; - case tInt16: - NA_SET1D(a, Int16, base, cnt, in); - break; - case tUInt16: - NA_SET1D(a, UInt16, base, cnt, in); - break; - case tInt32: - NA_SET1D(a, Int32, base, cnt, in); - break; - case tUInt32: - NA_SET1D(a, UInt32, base, cnt, in); - break; - case tInt64: - NA_SET1D(a, Int64, base, cnt, in); - break; - case tUInt64: - NA_SET1D(a, UInt64, base, cnt, in); - break; - case tFloat32: - NA_SET1D(a, Float32, base, cnt, in); - break; - case tFloat64: - NA_SET1D(a, Float64, base, cnt, in); - break; - case tComplex32: - NA_SET_CMPLX(a, Float32, base, cnt, in); - break; - case tComplex64: - NA_SET_CMPLX(a, Float64, base, cnt, in); - break; - default: - PyErr_Format( PyExc_TypeError, - "Unknown type %d in NA_set1D_Int64", - a->descr->type_num); - PyErr_Print(); - return -1; - } - return 0; -} - -static int -NA_get1D_Complex64(PyArrayObject *a, long offset, int cnt, Complex64*out) -{ - char *base = NA_PTR(a) + offset; - - switch(a->descr->type_num) { - case tComplex64: - NA_GET1D(a, Complex64, base, cnt, out); - break; - default: - PyErr_Format( PyExc_TypeError, - "Unsupported type %d in NA_get1D_Complex64", - a->descr->type_num); - PyErr_Print(); - return -1; - } - return 0; -} - -static int -NA_set1D_Complex64(PyArrayObject *a, long offset, int cnt, Complex64*in) -{ - char *base = NA_PTR(a) + offset; - - 
switch(a->descr->type_num) { - case tComplex64: - NA_SET1D(a, Complex64, base, cnt, in); - break; - default: - PyErr_Format( PyExc_TypeError, - "Unsupported type %d in NA_set1D_Complex64", - a->descr->type_num); - PyErr_Print(); - return -1; - } - return 0; -} - - -/* NA_ShapeEqual returns 1 if 'a' and 'b' have the same shape, 0 otherwise. -*/ -static int -NA_ShapeEqual(PyArrayObject *a, PyArrayObject *b) -{ - int i; - - if (!NA_NDArrayCheck((PyObject *) a) || - !NA_NDArrayCheck((PyObject*) b)) { - PyErr_Format( - PyExc_TypeError, - "NA_ShapeEqual: non-array as parameter."); - return -1; - } - if (a->nd != b->nd) - return 0; - for(i=0; i<a->nd; i++) - if (a->dimensions[i] != b->dimensions[i]) - return 0; - return 1; -} - -/* NA_ShapeLessThan returns 1 if a.shape[i] < b.shape[i] for all i, else 0. - If they have a different number of dimensions, it compares the innermost - overlapping dimensions of each. - */ -static int -NA_ShapeLessThan(PyArrayObject *a, PyArrayObject *b) -{ - int i; - int mindim, aoff, boff; - if (!NA_NDArrayCheck((PyObject *) a) || - !NA_NDArrayCheck((PyObject *) b)) { - PyErr_Format(PyExc_TypeError, - "NA_ShapeLessThan: non-array as parameter."); - return -1; - } - mindim = MIN(a->nd, b->nd); - aoff = a->nd - mindim; - boff = b->nd - mindim; - for(i=0; i<mindim; i++) - if (a->dimensions[i+aoff] >= b->dimensions[i+boff]) - return 0; - return 1; -} - -static int -NA_ByteOrder(void) -{ - unsigned long byteorder_test; - byteorder_test = 1; - if (*((char *) &byteorder_test)) - return NUM_LITTLE_ENDIAN; - else - return NUM_BIG_ENDIAN; -} - -static Bool -NA_IeeeSpecial32( Float32 *f, Int32 *mask) -{ - return NA_IeeeMask32(*f, *mask); -} - -static Bool -NA_IeeeSpecial64( Float64 *f, Int32 *mask) -{ - return NA_IeeeMask64(*f, *mask); -} - -static PyArrayObject * -NA_updateDataPtr(PyArrayObject *me) -{ - return me; -} - - -#define ELEM(x) (sizeof(x)/sizeof(x[0])) - -typedef struct -{ - char *name; - int typeno; -} NumarrayTypeNameMapping; - -static NumarrayTypeNameMapping
NumarrayTypeNameMap[] = { - {"Any", tAny}, - {"Bool", tBool}, - {"Int8", tInt8}, - {"UInt8", tUInt8}, - {"Int16", tInt16}, - {"UInt16", tUInt16}, - {"Int32", tInt32}, - {"UInt32", tUInt32}, - {"Int64", tInt64}, - {"UInt64", tUInt64}, - {"Float32", tFloat32}, - {"Float64", tFloat64}, - {"Complex32", tComplex32}, - {"Complex64", tComplex64}, - {"Object", tObject}, - {"Long", tLong}, -}; - - -/* Convert NumarrayType 'typeno' into the string of the type's name. */ -static char * -NA_typeNoToName(int typeno) -{ - size_t i; - PyObject *typeObj; - int typeno2; - - for(i=0; ind == 0))) - return dims; - slen = PySequence_Length(a); - if (slen < 0) { - PyErr_Format(_Error, - "getShape: couldn't get sequence length."); - return -1; - } - if (!slen) { - *shape = 0; - return dims+1; - } else if (dims < MAXDIM) { - PyObject *item0 = PySequence_GetItem(a, 0); - if (item0) { - *shape = PySequence_Length(a); - dims = getShape(item0, ++shape, dims+1); - Py_DECREF(item0); - } else { - PyErr_Format(_Error, - "getShape: couldn't get sequence item."); - return -1; - } - } else { - PyErr_Format(_Error, - "getShape: sequence object nested more than MAXDIM deep."); - return -1; - } - return dims; -} - - - -typedef enum { - NOTHING, - NUMBER, - SEQUENCE -} SequenceConstraint; - -static int -setArrayFromSequence(PyArrayObject *a, PyObject *s, int dim, long offset) -{ - SequenceConstraint mustbe = NOTHING; - int i, seqlen=-1, slen = PySequence_Length(s); - - if (dim > a->nd) { - PyErr_Format(PyExc_ValueError, - "setArrayFromSequence: sequence/array dimensions mismatch."); - return -1; - } - - if (slen != a->dimensions[dim]) { - PyErr_Format(PyExc_ValueError, - "setArrayFromSequence: sequence/array shape mismatch."); - return -1; - } - - for(i=0; ind == 0)) && - ((mustbe == NOTHING) || (mustbe == NUMBER))) { - if (NA_setFromPythonScalar(a, offset, o) < 0) - return -2; - mustbe = NUMBER; - } else if (PyBytes_Check(o)) { - PyErr_SetString( PyExc_ValueError, - "setArrayFromSequence: strings can't 
define numeric numarray."); - return -3; - } else if (PySequence_Check(o)) { - - if ((mustbe == NOTHING) || (mustbe == SEQUENCE)) { - if (mustbe == NOTHING) { - mustbe = SEQUENCE; - seqlen = PySequence_Length(o); - } else if (PySequence_Length(o) != seqlen) { - PyErr_SetString( - PyExc_ValueError, - "Nested sequences with different lengths."); - return -5; - } - setArrayFromSequence(a, o, dim+1, offset); - } else { - PyErr_SetString(PyExc_ValueError, - "Nested sequences with different lengths."); - return -4; - } - } else { - PyErr_SetString(PyExc_ValueError, "Invalid sequence."); - return -6; - } - Py_DECREF(o); - offset += a->strides[dim]; - } - return 0; -} - -static PyObject * -NA_setArrayFromSequence(PyArrayObject *a, PyObject *s) -{ - maybelong shape[MAXDIM]; - - if (!PySequence_Check(s)) - return PyErr_Format( PyExc_TypeError, - "NA_setArrayFromSequence: (array, seq) expected."); - - if (getShape(s, shape, 0) < 0) - return NULL; - - if (!NA_updateDataPtr(a)) - return NULL; - - if (setArrayFromSequence(a, s, 0, 0) < 0) - return NULL; - - Py_INCREF(Py_None); - return Py_None; -} - -enum { - BOOL_SCALAR, - INT_SCALAR, - LONG_SCALAR, - FLOAT_SCALAR, - COMPLEX_SCALAR -}; - - -static int -_NA_maxType(PyObject *seq, int limit) -{ - if (limit > MAXDIM) { - PyErr_Format( PyExc_ValueError, - "NA_maxType: sequence nested too deep." 
); - return -1; - } - if (NA_NumArrayCheck(seq)) { - switch(PyArray(seq)->descr->type_num) { - case tBool: - return BOOL_SCALAR; - case tInt8: - case tUInt8: - case tInt16: - case tUInt16: - case tInt32: - case tUInt32: - return INT_SCALAR; - case tInt64: - case tUInt64: - return LONG_SCALAR; - case tFloat32: - case tFloat64: - return FLOAT_SCALAR; - case tComplex32: - case tComplex64: - return COMPLEX_SCALAR; - default: - PyErr_Format(PyExc_TypeError, - "Expecting a python numeric type, got something else."); - return -1; - } - } else if (PySequence_Check(seq) && !PyBytes_Check(seq)) { - long i, maxtype=BOOL_SCALAR, slen; - - slen = PySequence_Length(seq); - if (slen < 0) return -1; - - if (slen == 0) return INT_SCALAR; - - for(i=0; i<slen; i++) { - PyObject *o = PySequence_GetItem(seq, i); - int newmax = _NA_maxType(o, limit+1); - if (newmax < 0) - return -1; - else if (newmax > maxtype) { - maxtype = newmax; - } - Py_DECREF(o); - } - return maxtype; - } else { -#if PY_VERSION_HEX >= 0x02030000 - if (PyBool_Check(seq)) - return BOOL_SCALAR; - else -#endif -#if defined(NPY_PY3K) - if (PyInt_Check(seq)) - return INT_SCALAR; - else if (PyLong_Check(seq)) -#else - if (PyLong_Check(seq)) -#endif - return LONG_SCALAR; - else if (PyFloat_Check(seq)) - return FLOAT_SCALAR; - else if (PyComplex_Check(seq)) - return COMPLEX_SCALAR; - else { - PyErr_Format(PyExc_TypeError, - "Expecting a python numeric type, got something else."); - return -1; - } - } -} - -static int -NA_maxType(PyObject *seq) -{ - int rval; - rval = _NA_maxType(seq, 0); - return rval; -} - -static int -NA_isPythonScalar(PyObject *o) -{ - int rval; - rval = PyInt_Check(o) || - PyLong_Check(o) || - PyFloat_Check(o) || - PyComplex_Check(o) || - (PyBytes_Check(o) && (PyBytes_Size(o) == 1)); - return rval; -} - -#if (NPY_SIZEOF_INTP == 8) -#define PlatBigInt PyInt_FromLong -#define PlatBigUInt PyLong_FromUnsignedLong -#else -#define PlatBigInt PyLong_FromLongLong -#define PlatBigUInt PyLong_FromUnsignedLongLong -#endif - - -static PyObject * -NA_getPythonScalar(PyArrayObject *a, long offset) -{ - int type = a->descr->type_num; - PyObject *rval = NULL; - -
switch(type) { - case tBool: - case tInt8: - case tUInt8: - case tInt16: - case tUInt16: - case tInt32: { - Int64 v = NA_get_Int64(a, offset); - rval = PyInt_FromLong(v); - break; - } - case tUInt32: { - Int64 v = NA_get_Int64(a, offset); - rval = PlatBigUInt(v); - break; - } - case tInt64: { - Int64 v = NA_get_Int64(a, offset); - rval = PlatBigInt( v); - break; - } - case tUInt64: { - Int64 v = NA_get_Int64(a, offset); - rval = PlatBigUInt( v); - break; - } - case tFloat32: - case tFloat64: { - Float64 v = NA_get_Float64(a, offset); - rval = PyFloat_FromDouble( v ); - break; - } - case tComplex32: - case tComplex64: - { - Complex64 v = NA_get_Complex64(a, offset); - rval = PyComplex_FromDoubles(v.r, v.i); - break; - } - default: - rval = PyErr_Format(PyExc_TypeError, - "NA_getPythonScalar: bad type %d\n", - type); - } - return rval; -} - -static int -NA_overflow(PyArrayObject *a, Float64 v) -{ - if ((a->flags & CHECKOVERFLOW) == 0) return 0; - - switch(a->descr->type_num) { - case tBool: - return 0; - case tInt8: - if ((v < -128) || (v > 127)) goto _fail; - return 0; - case tUInt8: - if ((v < 0) || (v > 255)) goto _fail; - return 0; - case tInt16: - if ((v < -32768) || (v > 32767)) goto _fail; - return 0; - case tUInt16: - if ((v < 0) || (v > 65535)) goto _fail; - return 0; - case tInt32: - if ((v < -2147483648.) || - (v > 2147483647.)) goto _fail; - return 0; - case tUInt32: - if ((v < 0) || (v > 4294967295.)) goto _fail; - return 0; - case tInt64: - if ((v < -9223372036854775808.) 
|| - (v > 9223372036854775807.)) goto _fail; - return 0; -#if HAS_UINT64 - case tUInt64: - if ((v < 0) || - (v > 18446744073709551615.)) goto _fail; - return 0; -#endif - case tFloat32: - if ((v < -FLT_MAX) || (v > FLT_MAX)) goto _fail; - return 0; - case tFloat64: - return 0; - case tComplex32: - if ((v < -FLT_MAX) || (v > FLT_MAX)) goto _fail; - return 0; - case tComplex64: - return 0; - default: - PyErr_Format( PyExc_TypeError, - "Unknown type %d in NA_overflow", - a->descr->type_num ); - PyErr_Print(); - return -1; - } -_fail: - PyErr_Format(PyExc_OverflowError, "value out of range for array"); - return -1; -} - -static int -_setFromPythonScalarCore(PyArrayObject *a, long offset, PyObject*value, int entries) -{ - Int64 v; - if (entries >= 100) { - PyErr_Format(PyExc_RuntimeError, - "NA_setFromPythonScalar: __tonumtype__ conversion chain too long"); - return -1; - } else if (PyInt_Check(value)) { - v = PyInt_AsLong(value); - if (NA_overflow(a, v) < 0) - return -1; - NA_set_Int64(a, offset, v); - } else if (PyLong_Check(value)) { - if (a->descr->type_num == tInt64) { - v = (Int64) PyLong_AsLongLong( value ); - } else if (a->descr->type_num == tUInt64) { - v = (UInt64) PyLong_AsUnsignedLongLong( value ); - } else if (a->descr->type_num == tUInt32) { - v = PyLong_AsUnsignedLong(value); - } else { - v = PyLong_AsLongLong(value); - } - if (PyErr_Occurred()) - return -1; - if (NA_overflow(a, v) < 0) - return -1; - NA_set_Int64(a, offset, v); - } else if (PyFloat_Check(value)) { - Float64 v = PyFloat_AsDouble(value); - if (NA_overflow(a, v) < 0) - return -1; - NA_set_Float64(a, offset, v); - } else if (PyComplex_Check(value)) { - Complex64 vc; - vc.r = PyComplex_RealAsDouble(value); - vc.i = PyComplex_ImagAsDouble(value); - if (NA_overflow(a, vc.r) < 0) - return -1; - if (NA_overflow(a, vc.i) < 0) - return -1; - NA_set_Complex64(a, offset, vc); - } else if (PyObject_HasAttrString(value, "__tonumtype__")) { - int rval; - PyObject *type = 
NA_typeNoToTypeObject(a->descr->type_num); - if (!type) return -1; - value = PyObject_CallMethod( - value, "__tonumtype__", "(N)", type); - if (!value) return -1; - rval = _setFromPythonScalarCore(a, offset, value, entries+1); - Py_DECREF(value); - return rval; - } else if (PyBytes_Check(value)) { - long size = PyBytes_Size(value); - if ((size <= 0) || (size > 1)) { - PyErr_Format( PyExc_ValueError, - "NA_setFromPythonScalar: len(string) must be 1."); - return -1; - } - NA_set_Int64(a, offset, *PyBytes_AsString(value)); - } else { - PyErr_Format(PyExc_TypeError, - "NA_setFromPythonScalar: bad value type."); - return -1; - } - return 0; -} - -static int -NA_setFromPythonScalar(PyArrayObject *a, long offset, PyObject *value) -{ - if (a->flags & WRITABLE) - return _setFromPythonScalarCore(a, offset, value, 0); - else { - PyErr_Format( - PyExc_ValueError, "NA_setFromPythonScalar: assignment to readonly array buffer"); - return -1; - } -} - - -static int -NA_NDArrayCheck(PyObject *obj) { - return PyArray_Check(obj); -} - -static int -NA_NumArrayCheck(PyObject *obj) { - return PyArray_Check(obj); -} - -static int -NA_ComplexArrayCheck(PyObject *a) -{ - int rval = NA_NumArrayCheck(a); - if (rval > 0) { - PyArrayObject *arr = (PyArrayObject *) a; - switch(arr->descr->type_num) { - case tComplex64: case tComplex32: - return 1; - default: - return 0; - } - } - return rval; -} - -static unsigned long -NA_elements(PyArrayObject *a) -{ - int i; - unsigned long n = 1; - for(i = 0; i<a->nd; i++) - n *= a->dimensions[i]; - return n; -} - -static int -NA_typeObjectToTypeNo(PyObject *typeObj) -{ - PyArray_Descr *dtype; - int i; - if (PyArray_DescrConverter(typeObj, &dtype) == NPY_FAIL) i=-1; - else i=dtype->type_num; - return i; -} - -static int -NA_copyArray(PyArrayObject *to, const PyArrayObject *from) -{ - return PyArray_CopyInto(to, (PyArrayObject *)from); -} - -static PyArrayObject * -NA_copy(PyArrayObject *from) -{ - return (PyArrayObject *)PyArray_NewCopy(from, 0); -} - - -static
PyObject * -NA_getType( PyObject *type) -{ - PyArray_Descr *typeobj = NULL; - if (!type && PyArray_DescrConverter(type, &typeobj) == NPY_FAIL) { - PyErr_Format(PyExc_ValueError, "NA_getType: unknown type."); - typeobj = NULL; - } - return (PyObject *)typeobj; -} - - -/* Call a standard "stride" function - ** - ** Stride functions always take one input and one output array. - ** They can handle n-dimensional data with arbitrary strides (of - ** either sign) for both the input and output numarray. Typically - ** these functions are used to copy data, byteswap, or align data. - ** - ** - ** It expects the following arguments: - ** - ** Number of iterations for each dimension as a tuple - ** Input Buffer Object - ** Offset in bytes for input buffer - ** Input strides (in bytes) for each dimension as a tuple - ** Output Buffer Object - ** Offset in bytes for output buffer - ** Output strides (in bytes) for each dimension as a tuple - ** An integer (Optional), typically the number of bytes to copy per - * element. - ** - ** Returns None - ** - ** The arguments expected by the standard stride functions that this - ** function calls are: - ** - ** Number of dimensions to iterate over - ** Long int value (from the optional last argument to - ** callStrideConvCFunc) - ** often unused by the C Function - ** An array of long ints. Each is the number of iterations for each - ** dimension. NOTE: the previous argument as well as the stride - ** arguments are reversed in order with respect to how they are - ** used in Python. Fastest changing dimension is the first element - ** in the numarray! - ** A void pointer to the input data buffer. - ** The starting offset for the input data buffer in bytes (long int). - ** An array of long int input strides (in bytes) [reversed as with - ** the iteration array] - ** A void pointer to the output data buffer. - ** The starting offset for the output data buffer in bytes (long int). 
- ** An array of long int output strides (in bytes) [also reversed] - */ - - -static PyObject * -NA_callStrideConvCFuncCore( - PyObject *self, int nshape, maybelong *shape, - PyObject *inbuffObj, long inboffset, - int NPY_UNUSED(ninbstrides), maybelong *inbstrides, - PyObject *outbuffObj, long outboffset, - int NPY_UNUSED(noutbstrides), maybelong *outbstrides, - long nbytes) -{ - CfuncObject *me = (CfuncObject *) self; - CFUNC_STRIDE_CONV_FUNC funcptr; - void *inbuffer, *outbuffer; - long inbsize, outbsize; - maybelong i, lshape[MAXDIM], in_strides[MAXDIM], out_strides[MAXDIM]; - maybelong shape_0, inbstr_0, outbstr_0; - - if (nshape == 0) { /* handle rank-0 numarray. */ - nshape = 1; - shape = &shape_0; - inbstrides = &inbstr_0; - outbstrides = &outbstr_0; - shape[0] = 1; - inbstrides[0] = outbstrides[0] = 0; - } - - for(i=0; i<nshape; i++) { - lshape[i] = shape[nshape-1-i]; - in_strides[i] = inbstrides[nshape-1-i]; - out_strides[i] = outbstrides[nshape-1-i]; - } - - if (!NA_CfuncCheck(self) || me->descr.type != CFUNC_STRIDING) - return PyErr_Format(PyExc_TypeError, - "NA_callStrideConvCFuncCore: problem with cfunc"); - - if ((inbsize = NA_getBufferPtrAndSize(inbuffObj, 1, &inbuffer)) < 0) - return PyErr_Format(_Error, - "%s: Problem with input buffer", me->descr.name); - - if ((outbsize = NA_getBufferPtrAndSize(outbuffObj, 0, &outbuffer)) < 0) - return PyErr_Format(_Error, - "%s: Problem with output buffer (read only?)", - me->descr.name); - - /* Check buffer alignment and bounds */ - if (NA_checkOneStriding(me->descr.name, nshape, lshape, - inboffset, in_strides, inbsize, - (me->descr.sizes[0] == -1) ? - nbytes : me->descr.sizes[0], - me->descr.align) || - NA_checkOneStriding(me->descr.name, nshape, lshape, - outboffset, out_strides, outbsize, - (me->descr.sizes[1] == -1) ?
- nbytes : me->descr.sizes[1], - me->descr.align)) - return NULL; - - /* Cast function pointer and perform stride operation */ - funcptr = (CFUNC_STRIDE_CONV_FUNC) me->descr.fptr; - if ((*funcptr)(nshape-1, nbytes, lshape, - inbuffer, inboffset, in_strides, - outbuffer, outboffset, out_strides) == 0) { - Py_INCREF(Py_None); - return Py_None; - } else { - return NULL; - } -} - -static void -NA_stridesFromShape(int nshape, maybelong *shape, maybelong bytestride, - maybelong *strides) -{ - int i; - if (nshape > 0) { - for(i=0; i<nshape; i++) - strides[i] = bytestride; - for(i=nshape-2; i>=0; i--) - strides[i] = strides[i+1]*shape[i+1]; - } -} - -static int -NA_OperatorCheck(PyObject *NPY_UNUSED(op)) { - return 0; -} - -static int -NA_ConverterCheck(PyObject *NPY_UNUSED(op)) { - return 0; -} - -static int -NA_UfuncCheck(PyObject *NPY_UNUSED(op)) { - return 0; -} - -static int -NA_CfuncCheck(PyObject *op) { - return PyObject_TypeCheck(op, &CfuncType); -} - -static int -NA_getByteOffset(PyArrayObject *NPY_UNUSED(array), int NPY_UNUSED(nindices), - maybelong *NPY_UNUSED(indices), long *NPY_UNUSED(offset)) -{ - return 0; -} - -static int -NA_swapAxes(PyArrayObject *array, int x, int y) -{ - long temp; - - if (((PyObject *) array) == Py_None) return 0; - - if (array->nd < 2) return 0; - - if (x < 0) x += array->nd; - if (y < 0) y += array->nd; - - if ((x < 0) || (x >= array->nd) || - (y < 0) || (y >= array->nd)) { - PyErr_Format(PyExc_ValueError, - "Specified dimension does not exist"); - return -1; - } - - temp = array->dimensions[x]; - array->dimensions[x] = array->dimensions[y]; - array->dimensions[y] = temp; - - temp = array->strides[x]; - array->strides[x] = array->strides[y]; - array->strides[y] = temp; - - PyArray_UpdateFlags(array, NPY_UPDATE_ALL); - - return 0; -} - -static PyObject * -NA_initModuleGlobal(char *modulename, char *globalname) -{ - PyObject *module, *dict, *global = NULL; - module = PyImport_ImportModule(modulename); - if (!module) { - PyErr_Format(PyExc_RuntimeError, - "Can't import '%s' module", -
modulename); - goto _exit; - } - dict = PyModule_GetDict(module); - global = PyDict_GetItemString(dict, globalname); - if (!global) { - PyErr_Format(PyExc_RuntimeError, - "Can't find '%s' global in '%s' module.", - globalname, modulename); - goto _exit; - } - Py_DECREF(module); - Py_INCREF(global); -_exit: - return global; -} - - NumarrayType -NA_NumarrayType(PyObject *seq) -{ - int maxtype = NA_maxType(seq); - int rval; - switch(maxtype) { - case BOOL_SCALAR: - rval = tBool; - goto _exit; - case INT_SCALAR: - case LONG_SCALAR: - rval = tLong; /* tLong corresponds to C long int, - not Python long int */ - goto _exit; - case FLOAT_SCALAR: - rval = tFloat64; - goto _exit; - case COMPLEX_SCALAR: - rval = tComplex64; - goto _exit; - default: - PyErr_Format(PyExc_TypeError, - "expecting Python numeric scalar value; got something else."); - rval = -1; - } -_exit: - return rval; -} - -/* ignores bytestride */ -static PyArrayObject * -NA_NewAllFromBuffer(int ndim, maybelong *shape, NumarrayType type, - PyObject *bufferObject, maybelong byteoffset, - maybelong NPY_UNUSED(bytestride), int byteorder, - int NPY_UNUSED(aligned), int NPY_UNUSED(writeable)) -{ - PyArrayObject *self = NULL; - PyArray_Descr *dtype; - - if (type == tAny) - type = tDefault; - - dtype = PyArray_DescrFromType(type); - if (dtype == NULL) return NULL; - - if (byteorder != NA_ByteOrder()) { - PyArray_Descr *temp; - temp = PyArray_DescrNewByteorder(dtype, PyArray_SWAP); - Py_DECREF(dtype); - if (temp == NULL) return NULL; - dtype = temp; - } - - if (bufferObject == Py_None || bufferObject == NULL) { - self = (PyArrayObject *) \ - PyArray_NewFromDescr(&PyArray_Type, dtype, - ndim, shape, NULL, NULL, - 0, NULL); - } - else { - npy_intp size = 1; - int i; - PyArrayObject *newself; - PyArray_Dims newdims; - for(i=0; iob_type == &PyArray_Type); -} - -static int -NA_NDArrayCheckExact(PyObject *op) { - return (op->ob_type == &PyArray_Type); -} - -static int -NA_OperatorCheckExact(PyObject *NPY_UNUSED(op)) { - 
return 0; -} - -static int -NA_ConverterCheckExact(PyObject *NPY_UNUSED(op)) { - return 0; -} - -static int -NA_UfuncCheckExact(PyObject *NPY_UNUSED(op)) { - return 0; -} - - -static int -NA_CfuncCheckExact(PyObject *op) { - return op->ob_type == &CfuncType; -} - -static char * -NA_getArrayData(PyArrayObject *obj) -{ - if (!NA_NDArrayCheck((PyObject *) obj)) { - PyErr_Format(PyExc_TypeError, - "expected an NDArray"); - } - return obj->data; -} - -/* Byteswap is not a flag of the array --- it is implicit in the data-type */ -static void -NA_updateByteswap(PyArrayObject *NPY_UNUSED(self)) -{ - return; -} - -static PyArray_Descr * -NA_DescrFromType(int type) -{ - if (type == tAny) - type = tDefault; - return PyArray_DescrFromType(type); -} - -static PyObject * -NA_Cast(PyArrayObject *a, int type) -{ - return PyArray_Cast(a, type); -} - - -/* The following function has much platform dependent code since - ** there is no platform-independent way of checking Floating Point - ** status bits - */ - -/* OSF/Alpha (Tru64) ---------------------------------------------*/ -#if defined(__osf__) && defined(__alpha) - -static int -NA_checkFPErrors(void) -{ - unsigned long fpstatus; - int retstatus; - -#include <machine/fpu.h> /* Should migrate to global scope */ - - fpstatus = ieee_get_fp_control(); - /* clear status bits as well as disable exception mode if on */ - ieee_set_fp_control( 0 ); - retstatus = - pyFPE_DIVIDE_BY_ZERO* (int)((IEEE_STATUS_DZE & fpstatus) != 0) - + pyFPE_OVERFLOW * (int)((IEEE_STATUS_OVF & fpstatus) != 0) - + pyFPE_UNDERFLOW * (int)((IEEE_STATUS_UNF & fpstatus) != 0) - + pyFPE_INVALID * (int)((IEEE_STATUS_INV & fpstatus) != 0); - - return retstatus; -} - -/* MS Windows -----------------------------------------------------*/ -#elif defined(_MSC_VER) - -#include <float.h> - -static int -NA_checkFPErrors(void) -{ - int fpstatus = (int) _clear87(); - int retstatus = - pyFPE_DIVIDE_BY_ZERO * ((SW_ZERODIVIDE & fpstatus) != 0) - + pyFPE_OVERFLOW * ((SW_OVERFLOW & fpstatus) != 0) - + 
pyFPE_UNDERFLOW * ((SW_UNDERFLOW & fpstatus) != 0) - + pyFPE_INVALID * ((SW_INVALID & fpstatus) != 0); - - - return retstatus; -} - -/* Solaris --------------------------------------------------------*/ -/* --------ignoring SunOS ieee_flags approach, someone else can - ** deal with that! */ -#elif defined(sun) -#include <ieeefp.h> - -static int -NA_checkFPErrors(void) -{ - int fpstatus; - int retstatus; - - fpstatus = (int) fpgetsticky(); - retstatus = pyFPE_DIVIDE_BY_ZERO * ((FP_X_DZ & fpstatus) != 0) - + pyFPE_OVERFLOW * ((FP_X_OFL & fpstatus) != 0) - + pyFPE_UNDERFLOW * ((FP_X_UFL & fpstatus) != 0) - + pyFPE_INVALID * ((FP_X_INV & fpstatus) != 0); - (void) fpsetsticky(0); - - return retstatus; -} - -#elif defined(__GLIBC__) || defined(__APPLE__) || defined(__CYGWIN__) || defined(__MINGW32__) || (defined(__FreeBSD__) && (__FreeBSD_version >= 502114)) - -static int -NA_checkFPErrors(void) -{ - int fpstatus = (int) fetestexcept( - FE_DIVBYZERO | FE_OVERFLOW | FE_UNDERFLOW | FE_INVALID); - int retstatus = - pyFPE_DIVIDE_BY_ZERO * ((FE_DIVBYZERO & fpstatus) != 0) - + pyFPE_OVERFLOW * ((FE_OVERFLOW & fpstatus) != 0) - + pyFPE_UNDERFLOW * ((FE_UNDERFLOW & fpstatus) != 0) - + pyFPE_INVALID * ((FE_INVALID & fpstatus) != 0); - (void) feclearexcept(FE_DIVBYZERO | FE_OVERFLOW | - FE_UNDERFLOW | FE_INVALID); - return retstatus; -} - -#else - -static int -NA_checkFPErrors(void) -{ - return 0; -} - -#endif - -static void -NA_clearFPErrors() -{ - NA_checkFPErrors(); -} - -/* Not supported yet */ -static int -NA_checkAndReportFPErrors(char *name) -{ - int error = NA_checkFPErrors(); - if (error) { - PyObject *ans; - char msg[128]; - strcpy(msg, " in "); - strncat(msg, name, 100); - ans = PyObject_CallFunction(pHandleErrorFunc, "(is)", error, msg); - if (!ans) return -1; - Py_DECREF(ans); /* Py_None */ - } - return 0; - -} - - -#define WITHIN32(v, f) (((v) >= f##_MIN32) && ((v) <= f##_MAX32)) -#define WITHIN64(v, f) (((v) >= f##_MIN64) && ((v) <= f##_MAX64)) - -static Bool -NA_IeeeMask32( 
Float32 f, Int32 mask) -{ - Int32 category; - UInt32 v = *(UInt32 *) &f; - - if (v & BIT(31)) { - if (WITHIN32(v, NEG_NORMALIZED)) { - category = MSK_NEG_NOR; - } else if (WITHIN32(v, NEG_DENORMALIZED)) { - category = MSK_NEG_DEN; - } else if (WITHIN32(v, NEG_SIGNAL_NAN)) { - category = MSK_NEG_SNAN; - } else if (WITHIN32(v, NEG_QUIET_NAN)) { - category = MSK_NEG_QNAN; - } else if (v == NEG_INFINITY_MIN32) { - category = MSK_NEG_INF; - } else if (v == NEG_ZERO_MIN32) { - category = MSK_NEG_ZERO; - } else if (v == INDETERMINATE_MIN32) { - category = MSK_INDETERM; - } else { - category = MSK_BUG; - } - } else { - if (WITHIN32(v, POS_NORMALIZED)) { - category = MSK_POS_NOR; - } else if (WITHIN32(v, POS_DENORMALIZED)) { - category = MSK_POS_DEN; - } else if (WITHIN32(v, POS_SIGNAL_NAN)) { - category = MSK_POS_SNAN; - } else if (WITHIN32(v, POS_QUIET_NAN)) { - category = MSK_POS_QNAN; - } else if (v == POS_INFINITY_MIN32) { - category = MSK_POS_INF; - } else if (v == POS_ZERO_MIN32) { - category = MSK_POS_ZERO; - } else { - category = MSK_BUG; - } - } - return (category & mask) != 0; -} - -static Bool -NA_IeeeMask64( Float64 f, Int32 mask) -{ - Int32 category; - UInt64 v = *(UInt64 *) &f; - - if (v & BIT(63)) { - if (WITHIN64(v, NEG_NORMALIZED)) { - category = MSK_NEG_NOR; - } else if (WITHIN64(v, NEG_DENORMALIZED)) { - category = MSK_NEG_DEN; - } else if (WITHIN64(v, NEG_SIGNAL_NAN)) { - category = MSK_NEG_SNAN; - } else if (WITHIN64(v, NEG_QUIET_NAN)) { - category = MSK_NEG_QNAN; - } else if (v == NEG_INFINITY_MIN64) { - category = MSK_NEG_INF; - } else if (v == NEG_ZERO_MIN64) { - category = MSK_NEG_ZERO; - } else if (v == INDETERMINATE_MIN64) { - category = MSK_INDETERM; - } else { - category = MSK_BUG; - } - } else { - if (WITHIN64(v, POS_NORMALIZED)) { - category = MSK_POS_NOR; - } else if (WITHIN64(v, POS_DENORMALIZED)) { - category = MSK_POS_DEN; - } else if (WITHIN64(v, POS_SIGNAL_NAN)) { - category = MSK_POS_SNAN; - } else if (WITHIN64(v, POS_QUIET_NAN)) { - 
category = MSK_POS_QNAN; - } else if (v == POS_INFINITY_MIN64) { - category = MSK_POS_INF; - } else if (v == POS_ZERO_MIN64) { - category = MSK_POS_ZERO; - } else { - category = MSK_BUG; - } - } - return (category & mask) != 0; -} - -static PyArrayObject * -NA_FromDimsStridesDescrAndData(int nd, maybelong *d, maybelong *s, PyArray_Descr *descr, char *data) -{ - return (PyArrayObject *)\ - PyArray_NewFromDescr(&PyArray_Type, descr, nd, d, - s, data, 0, NULL); -} - -static PyArrayObject * -NA_FromDimsTypeAndData(int nd, maybelong *d, int type, char *data) -{ - PyArray_Descr *descr = NA_DescrFromType(type); - return NA_FromDimsStridesDescrAndData(nd, d, NULL, descr, data); -} - -static PyArrayObject * -NA_FromDimsStridesTypeAndData(int nd, maybelong *shape, maybelong *strides, - int type, char *data) -{ - PyArray_Descr *descr = NA_DescrFromType(type); - return NA_FromDimsStridesDescrAndData(nd, shape, strides, descr, data); -} - - -typedef struct -{ - NumarrayType type_num; - char suffix[5]; - int itemsize; -} scipy_typestr; - -static scipy_typestr scipy_descriptors[ ] = { - { tAny, "", 0}, - - { tBool, "b1", 1}, - - { tInt8, "i1", 1}, - { tUInt8, "u1", 1}, - - { tInt16, "i2", 2}, - { tUInt16, "u2", 2}, - - { tInt32, "i4", 4}, - { tUInt32, "u4", 4}, - - { tInt64, "i8", 8}, - { tUInt64, "u8", 8}, - - { tFloat32, "f4", 4}, - { tFloat64, "f8", 8}, - - { tComplex32, "c8", 8}, - { tComplex64, "c16", 16} -}; - - -static int -NA_scipy_typestr(NumarrayType t, int byteorder, char *typestr) -{ - size_t i; - if (byteorder) - strcpy(typestr, ">"); - else - strcpy(typestr, "<"); - for(i=0; i<ELEM(scipy_descriptors); i++) { - scipy_typestr *ts = &scipy_descriptors[i]; - if (ts->type_num == t) { - strncat(typestr, ts->suffix, 4); - return 0; - } - } - return -1; -} - -static PyArrayObject * -NA_FromArrayStruct(PyObject *obj) -{ - return (PyArrayObject *)PyArray_FromStructInterface(obj); -} - - -static PyObject *_Error; - -void *libnumarray_API[] = { - (void*) getBuffer, - (void*) isBuffer, - (void*) getWriteBufferDataPtr, - (void*) isBufferWriteable, - (void*) 
getReadBufferDataPtr, - (void*) getBufferSize, - (void*) num_log, - (void*) num_log10, - (void*) num_pow, - (void*) num_acosh, - (void*) num_asinh, - (void*) num_atanh, - (void*) num_round, - (void*) int_dividebyzero_error, - (void*) int_overflow_error, - (void*) umult64_overflow, - (void*) smult64_overflow, - (void*) NA_Done, - (void*) NA_NewAll, - (void*) NA_NewAllStrides, - (void*) NA_New, - (void*) NA_Empty, - (void*) NA_NewArray, - (void*) NA_vNewArray, - (void*) NA_ReturnOutput, - (void*) NA_getBufferPtrAndSize, - (void*) NA_checkIo, - (void*) NA_checkOneCBuffer, - (void*) NA_checkNCBuffers, - (void*) NA_checkOneStriding, - (void*) NA_new_cfunc, - (void*) NA_add_cfunc, - (void*) NA_InputArray, - (void*) NA_OutputArray, - (void*) NA_IoArray, - (void*) NA_OptionalOutputArray, - (void*) NA_get_offset, - (void*) NA_get_Float64, - (void*) NA_set_Float64, - (void*) NA_get_Complex64, - (void*) NA_set_Complex64, - (void*) NA_get_Int64, - (void*) NA_set_Int64, - (void*) NA_get1_Float64, - (void*) NA_get2_Float64, - (void*) NA_get3_Float64, - (void*) NA_set1_Float64, - (void*) NA_set2_Float64, - (void*) NA_set3_Float64, - (void*) NA_get1_Complex64, - (void*) NA_get2_Complex64, - (void*) NA_get3_Complex64, - (void*) NA_set1_Complex64, - (void*) NA_set2_Complex64, - (void*) NA_set3_Complex64, - (void*) NA_get1_Int64, - (void*) NA_get2_Int64, - (void*) NA_get3_Int64, - (void*) NA_set1_Int64, - (void*) NA_set2_Int64, - (void*) NA_set3_Int64, - (void*) NA_get1D_Float64, - (void*) NA_set1D_Float64, - (void*) NA_get1D_Int64, - (void*) NA_set1D_Int64, - (void*) NA_get1D_Complex64, - (void*) NA_set1D_Complex64, - (void*) NA_ShapeEqual, - (void*) NA_ShapeLessThan, - (void*) NA_ByteOrder, - (void*) NA_IeeeSpecial32, - (void*) NA_IeeeSpecial64, - (void*) NA_updateDataPtr, - (void*) NA_typeNoToName, - (void*) NA_nameToTypeNo, - (void*) NA_typeNoToTypeObject, - (void*) NA_intTupleFromMaybeLongs, - (void*) NA_maybeLongsFromIntTuple, - (void*) NA_intTupleProduct, - (void*) 
NA_isIntegerSequence, - (void*) NA_setArrayFromSequence, - (void*) NA_maxType, - (void*) NA_isPythonScalar, - (void*) NA_getPythonScalar, - (void*) NA_setFromPythonScalar, - (void*) NA_NDArrayCheck, - (void*) NA_NumArrayCheck, - (void*) NA_ComplexArrayCheck, - (void*) NA_elements, - (void*) NA_typeObjectToTypeNo, - (void*) NA_copyArray, - (void*) NA_copy, - (void*) NA_getType, - (void*) NA_callCUFuncCore, - (void*) NA_callStrideConvCFuncCore, - (void*) NA_stridesFromShape, - (void*) NA_OperatorCheck, - (void*) NA_ConverterCheck, - (void*) NA_UfuncCheck, - (void*) NA_CfuncCheck, - (void*) NA_getByteOffset, - (void*) NA_swapAxes, - (void*) NA_initModuleGlobal, - (void*) NA_NumarrayType, - (void*) NA_NewAllFromBuffer, - (void*) NA_alloc1D_Float64, - (void*) NA_alloc1D_Int64, - (void*) NA_updateAlignment, - (void*) NA_updateContiguous, - (void*) NA_updateStatus, - (void*) NA_NumArrayCheckExact, - (void*) NA_NDArrayCheckExact, - (void*) NA_OperatorCheckExact, - (void*) NA_ConverterCheckExact, - (void*) NA_UfuncCheckExact, - (void*) NA_CfuncCheckExact, - (void*) NA_getArrayData, - (void*) NA_updateByteswap, - (void*) NA_DescrFromType, - (void*) NA_Cast, - (void*) NA_checkFPErrors, - (void*) NA_clearFPErrors, - (void*) NA_checkAndReportFPErrors, - (void*) NA_IeeeMask32, - (void*) NA_IeeeMask64, - (void*) _NA_callStridingHelper, - (void*) NA_FromDimsStridesDescrAndData, - (void*) NA_FromDimsTypeAndData, - (void*) NA_FromDimsStridesTypeAndData, - (void*) NA_scipy_typestr, - (void*) NA_FromArrayStruct -}; - -#if (!defined(METHOD_TABLE_EXISTS)) -static PyMethodDef _libnumarrayMethods[] = { - {NULL, NULL, 0, NULL} /* Sentinel */ -}; -#endif - -/* boiler plate API init */ -#if defined(NPY_PY3K) - -#define RETVAL m - -static struct PyModuleDef moduledef = { - PyModuleDef_HEAD_INIT, - "_capi", - NULL, - -1, - _libnumarrayMethods, - NULL, - NULL, - NULL, - NULL -}; - -PyObject *PyInit__capi(void) -#else - -#define RETVAL - -PyMODINIT_FUNC init_capi(void) -#endif -{ - PyObject *m; 
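Among the entries exported through `libnumarray_API` above is `NA_scipy_typestr`, which builds an array-interface "typestr" such as `'<f8'` or `'>i4'` by prepending a byte-order character to a suffix from the `scipy_descriptors` table earlier in the file. A pure-Python sketch of that lookup (the function name and `big_endian` flag are illustrative, not part of the C API):

```python
# Mirrors the scipy_descriptors table: numarray type name -> typestr suffix.
_SUFFIXES = {
    'Bool': 'b1',
    'Int8': 'i1', 'UInt8': 'u1',
    'Int16': 'i2', 'UInt16': 'u2',
    'Int32': 'i4', 'UInt32': 'u4',
    'Int64': 'i8', 'UInt64': 'u8',
    'Float32': 'f4', 'Float64': 'f8',
    'Complex32': 'c8', 'Complex64': 'c16',
}

def scipy_typestr(type_name, big_endian):
    # '>' marks big-endian data, '<' little-endian, as in the
    # __array_interface__ protocol.
    return ('>' if big_endian else '<') + _SUFFIXES[type_name]
```

Note that numarray's `Complex32` is a pair of 32-bit floats, hence the 8-byte `'c8'` suffix.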
- PyObject *c_api_object; - - _Error = PyErr_NewException("numpy.numarray._capi.error", NULL, NULL); - - /* Create a CObject containing the API pointer array's address */ -#if defined(NPY_PY3K) - m = PyModule_Create(&moduledef); -#else - m = Py_InitModule("_capi", _libnumarrayMethods); -#endif - -#if defined(NPY_PY3K) - c_api_object = PyCapsule_New((void *)libnumarray_API, NULL, NULL); - if (c_api_object == NULL) { - PyErr_Clear(); - } -#else - c_api_object = PyCObject_FromVoidPtr((void *)libnumarray_API, NULL); -#endif - - if (c_api_object != NULL) { - /* Create a name for this object in the module's namespace */ - PyObject *d = PyModule_GetDict(m); - - PyDict_SetItemString(d, "_C_API", c_api_object); - PyDict_SetItemString(d, "error", _Error); - Py_DECREF(c_api_object); - } - else { - return RETVAL; - } - if (PyModule_AddObject(m, "__version__", PyUString_FromString("0.9")) < 0) { - return RETVAL; - } - if (_import_array() < 0) { - return RETVAL; - } - deferred_libnumarray_init(); - return RETVAL; -} diff --git a/pythonPackages/numpy/numpy/numarray/alter_code1.py b/pythonPackages/numpy/numpy/numarray/alter_code1.py deleted file mode 100755 index ae950e7e01..0000000000 --- a/pythonPackages/numpy/numpy/numarray/alter_code1.py +++ /dev/null @@ -1,265 +0,0 @@ -""" -This module converts code written for numarray to run with numpy - -Makes the following changes: - * Changes import statements - - import numarray.package - --> import numpy.numarray.package as numarray_package - with all numarray.package in code changed to numarray_package - - import numarray --> import numpy.numarray as numarray - import numarray.package as --> import numpy.numarray.package as - - from numarray import --> from numpy.numarray import - from numarray.package import - --> from numpy.numarray.package import - - package can be convolve, image, nd_image, mlab, linear_algebra, ma, - matrix, fft, random_array - - - * Makes search and replace changes to: - - .imaginary --> .imag - - .flat --> 
.ravel() (most of the time) - - .byteswapped() --> .byteswap(False) - - .byteswap() --> .byteswap(True) - - .info() --> numarray.info(self) - - .isaligned() --> .flags.aligned - - .isbyteswapped() --> (not .dtype.isnative) - - .typecode() --> .dtype.char - - .iscontiguous() --> .flags.contiguous - - .is_c_array() --> .flags.carray and .dtype.isnative - - .is_fortran_contiguous() --> .flags.fortran - - .is_f_array() --> .dtype.isnative and .flags.farray - - .itemsize() --> .itemsize - - .nelements() --> .size - - self.new(type) --> numarray.newobj(self, type) - - .repeat(r) --> .repeat(r, axis=0) - - .size() --> .size - - self.type() -- numarray.typefrom(self) - - .typecode() --> .dtype.char - - .stddev() --> .std() - - .togglebyteorder() --> numarray.togglebyteorder(self) - - .getshape() --> .shape - - .setshape(obj) --> .shape=obj - - .getflat() --> .ravel() - - .getreal() --> .real - - .setreal() --> .real = - - .getimag() --> .imag - - .setimag() --> .imag = - - .getimaginary() --> .imag - - .setimaginary() --> .imag - -""" -__all__ = ['convertfile', 'convertall', 'converttree', 'convertsrc'] - -import sys -import os -import re -import glob - -def changeimports(fstr, name, newname): - importstr = 'import %s' % name - importasstr = 'import %s as ' % name - fromstr = 'from %s import ' % name - fromall=0 - - name_ = name - if ('.' 
in name): - name_ = name.replace('.','_') - - fstr = re.sub(r'(import\s+[^,\n\r]+,\s*)(%s)' % name, - "\\1%s as %s" % (newname, name), fstr) - fstr = fstr.replace(importasstr, 'import %s as ' % newname) - fstr = fstr.replace(importstr, 'import %s as %s' % (newname,name_)) - if (name_ != name): - fstr = fstr.replace(name, name_) - - ind = 0 - Nlen = len(fromstr) - Nlen2 = len("from %s import " % newname) - while 1: - found = fstr.find(fromstr,ind) - if (found < 0): - break - ind = found + Nlen - if fstr[ind] == '*': - continue - fstr = "%sfrom %s import %s" % (fstr[:found], newname, fstr[ind:]) - ind += Nlen2 - Nlen - return fstr, fromall - -flatindex_re = re.compile('([.]flat(\s*?[[=]))') - - -def addimport(astr): - # find the first line with import on it - ind = astr.find('import') - start = astr.rfind(os.linesep, 0, ind) - astr = "%s%s%s%s" % (astr[:start], os.linesep, - "import numpy.numarray as numarray", - astr[start:]) - return astr - -def replaceattr(astr): - astr = astr.replace(".imaginary", ".imag") - astr = astr.replace(".byteswapped()",".byteswap(False)") - astr = astr.replace(".byteswap()", ".byteswap(True)") - astr = astr.replace(".isaligned()", ".flags.aligned") - astr = astr.replace(".iscontiguous()",".flags.contiguous") - astr = astr.replace(".is_fortran_contiguous()",".flags.fortran") - astr = astr.replace(".itemsize()",".itemsize") - astr = astr.replace(".size()",".size") - astr = astr.replace(".nelements()",".size") - astr = astr.replace(".typecode()",".dtype.char") - astr = astr.replace(".stddev()",".std()") - astr = astr.replace(".getshape()", ".shape") - astr = astr.replace(".getflat()", ".ravel()") - astr = astr.replace(".getreal", ".real") - astr = astr.replace(".getimag", ".imag") - astr = astr.replace(".getimaginary", ".imag") - - # preserve uses of flat that should be o.k. 
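The `.flat` handling in `replaceattr` has one subtle step: `.flat` is only rewritten to `.ravel()` when it is not being indexed or assigned through, since those uses remain valid on a numpy flat iterator. The placeholder trick can be sketched in isolation (the helper name is illustrative):

```python
import re

# Matches ".flat" only when followed by "[" or "=", i.e. uses that
# stay valid in numpy (indexing or assigning through the flat iterator).
flatindex_re = re.compile(r'([.]flat(\s*?[[=]))')

def replace_flat(src):
    # Shield valid ".flat[" / ".flat =" uses behind a placeholder,
    # rewrite every remaining ".flat" to ".ravel()", then restore.
    tmp = flatindex_re.sub(r'@@@@\2', src)
    tmp = tmp.replace('.flat', '.ravel()')
    return tmp.replace('@@@@', '.flat')
```

For example, `replace_flat("a.flat[0] = b.flat")` keeps the indexed use on the left-hand side but converts the bare `.flat` on the right to `.ravel()`.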
- tmpstr = flatindex_re.sub(r"@@@@\2",astr) - # replace other uses of flat - tmpstr = tmpstr.replace(".flat",".ravel()") - # put back .flat where it was valid - astr = tmpstr.replace("@@@@", ".flat") - return astr - -info_re = re.compile(r'(\S+)\s*[.]\s*info\s*[(]\s*[)]') -new_re = re.compile(r'(\S+)\s*[.]\s*new\s*[(]\s*(\S+)\s*[)]') -toggle_re = re.compile(r'(\S+)\s*[.]\s*togglebyteorder\s*[(]\s*[)]') -type_re = re.compile(r'(\S+)\s*[.]\s*type\s*[(]\s*[)]') - -isbyte_re = re.compile(r'(\S+)\s*[.]\s*isbyteswapped\s*[(]\s*[)]') -iscarr_re = re.compile(r'(\S+)\s*[.]\s*is_c_array\s*[(]\s*[)]') -isfarr_re = re.compile(r'(\S+)\s*[.]\s*is_f_array\s*[(]\s*[)]') -repeat_re = re.compile(r'(\S+)\s*[.]\s*repeat\s*[(]\s*(\S+)\s*[)]') - -setshape_re = re.compile(r'(\S+)\s*[.]\s*setshape\s*[(]\s*(\S+)\s*[)]') -setreal_re = re.compile(r'(\S+)\s*[.]\s*setreal\s*[(]\s*(\S+)\s*[)]') -setimag_re = re.compile(r'(\S+)\s*[.]\s*setimag\s*[(]\s*(\S+)\s*[)]') -setimaginary_re = re.compile(r'(\S+)\s*[.]\s*setimaginary\s*[(]\s*(\S+)\s*[)]') -def replaceother(astr): - # self.info() --> numarray.info(self) - # self.new(type) --> numarray.newobj(self, type) - # self.togglebyteorder() --> numarray.togglebyteorder(self) - # self.type() --> numarray.typefrom(self) - (astr, n1) = info_re.subn('numarray.info(\\1)', astr) - (astr, n2) = new_re.subn('numarray.newobj(\\1, \\2)', astr) - (astr, n3) = toggle_re.subn('numarray.togglebyteorder(\\1)', astr) - (astr, n4) = type_re.subn('numarray.typefrom(\\1)', astr) - if (n1+n2+n3+n4 > 0): - astr = addimport(astr) - - astr = isbyte_re.sub('not \\1.dtype.isnative', astr) - astr = iscarr_re.sub('\\1.dtype.isnative and \\1.flags.carray', astr) - astr = isfarr_re.sub('\\1.dtype.isnative and \\1.flags.farray', astr) - astr = repeat_re.sub('\\1.repeat(\\2, axis=0)', astr) - astr = setshape_re.sub('\\1.shape = \\2', astr) - astr = setreal_re.sub('\\1.real = \\2', astr) - astr = setimag_re.sub('\\1.imag = \\2', astr) - astr = setimaginary_re.sub('\\1.imag = \\2', 
astr) - return astr - -import datetime -def fromstr(filestr): - savestr = filestr[:] - filestr, fromall = changeimports(filestr, 'numarray', 'numpy.numarray') - base = 'numarray' - newbase = 'numpy.numarray' - for sub in ['', 'convolve', 'image', 'nd_image', 'mlab', 'linear_algebra', - 'ma', 'matrix', 'fft', 'random_array']: - if sub != '': - sub = '.'+sub - filestr, fromall = changeimports(filestr, base+sub, newbase+sub) - - filestr = replaceattr(filestr) - filestr = replaceother(filestr) - if savestr != filestr: - name = os.path.split(sys.argv[0])[-1] - today = datetime.date.today().strftime('%b %d, %Y') - filestr = '## Automatically adapted for '\ - 'numpy.numarray %s by %s\n\n%s' % (today, name, filestr) - return filestr, 1 - return filestr, 0 - -def makenewfile(name, filestr): - fid = file(name, 'w') - fid.write(filestr) - fid.close() - -def convertfile(filename, orig=1): - """Convert the filename given from using Numarray to using NumPy - - Copies the file to filename.orig and then over-writes the file - with the updated code - """ - fid = open(filename) - filestr = fid.read() - fid.close() - filestr, changed = fromstr(filestr) - if changed: - if orig: - base, ext = os.path.splitext(filename) - os.rename(filename, base+".orig") - else: - os.remove(filename) - makenewfile(filename, filestr) - -def fromargs(args): - filename = args[1] - convertfile(filename) - -def convertall(direc=os.path.curdir, orig=1): - """Convert all .py files to use numpy.oldnumeric (from Numeric) in the directory given - - For each file, a backup of .py is made as - .py.orig. A new file named .py - is then written with the updated code. 
- """ - files = glob.glob(os.path.join(direc,'*.py')) - for afile in files: - if afile[-8:] == 'setup.py': continue - convertfile(afile, orig) - -header_re = re.compile(r'(numarray/libnumarray.h)') - -def convertsrc(direc=os.path.curdir, ext=None, orig=1): - """Replace Numeric/arrayobject.h with numpy/oldnumeric.h in all files in the - directory with extension give by list ext (if ext is None, then all files are - replaced).""" - if ext is None: - files = glob.glob(os.path.join(direc,'*')) - else: - files = [] - for aext in ext: - files.extend(glob.glob(os.path.join(direc,"*.%s" % aext))) - for afile in files: - fid = open(afile) - fstr = fid.read() - fid.close() - fstr, n = header_re.subn(r'numpy/libnumarray.h',fstr) - if n > 0: - if orig: - base, ext = os.path.splitext(afile) - os.rename(afile, base+".orig") - else: - os.remove(afile) - makenewfile(afile, fstr) - -def _func(arg, dirname, fnames): - convertall(dirname, orig=0) - convertsrc(dirname, ['h','c'], orig=0) - -def converttree(direc=os.path.curdir): - """Convert all .py files in the tree given - - """ - os.path.walk(direc, _func, None) - - -if __name__ == '__main__': - converttree(sys.argv) diff --git a/pythonPackages/numpy/numpy/numarray/alter_code2.py b/pythonPackages/numpy/numpy/numarray/alter_code2.py deleted file mode 100755 index 4bb773850c..0000000000 --- a/pythonPackages/numpy/numpy/numarray/alter_code2.py +++ /dev/null @@ -1,67 +0,0 @@ -""" -This module converts code written for numpy.numarray to work -with numpy - -FIXME: finish this. 
- -""" -#__all__ = ['convertfile', 'convertall', 'converttree'] -__all__ = [] - -import warnings -warnings.warn("numpy.numarray.alter_code2 is not working yet.") -import sys - -import os -import glob - -def makenewfile(name, filestr): - fid = file(name, 'w') - fid.write(filestr) - fid.close() - -def getandcopy(name): - fid = file(name) - filestr = fid.read() - fid.close() - base, ext = os.path.splitext(name) - makenewfile(base+'.orig', filestr) - return filestr - -def convertfile(filename): - """Convert the filename given from using Numeric to using NumPy - - Copies the file to filename.orig and then over-writes the file - with the updated code - """ - filestr = getandcopy(filename) - filestr = fromstr(filestr) - makenewfile(filename, filestr) - -def fromargs(args): - filename = args[1] - convertfile(filename) - -def convertall(direc=os.path.curdir): - """Convert all .py files to use NumPy (from Numeric) in the directory given - - For each file, a backup of .py is made as - .py.orig. A new file named .py - is then written with the updated code. 
- """ - files = glob.glob(os.path.join(direc,'*.py')) - for afile in files: - convertfile(afile) - -def _func(arg, dirname, fnames): - convertall(dirname) - -def converttree(direc=os.path.curdir): - """Convert all .py files in the tree given - - """ - os.path.walk(direc, _func, None) - - -if __name__ == '__main__': - fromargs(sys.argv) diff --git a/pythonPackages/numpy/numpy/numarray/compat.py b/pythonPackages/numpy/numpy/numarray/compat.py deleted file mode 100755 index e0d13a7c28..0000000000 --- a/pythonPackages/numpy/numpy/numarray/compat.py +++ /dev/null @@ -1,4 +0,0 @@ - -__all__ = ['NewAxis', 'ArrayType'] - -from numpy import newaxis as NewAxis, ndarray as ArrayType diff --git a/pythonPackages/numpy/numpy/numarray/convolve.py b/pythonPackages/numpy/numpy/numarray/convolve.py deleted file mode 100755 index 68a4730a19..0000000000 --- a/pythonPackages/numpy/numpy/numarray/convolve.py +++ /dev/null @@ -1,14 +0,0 @@ -try: - from stsci.convolve import * -except ImportError: - try: - from scipy.stsci.convolve import * - except ImportError: - msg = \ -"""The convolve package is not installed. - -It can be downloaded by checking out the latest source from -http://svn.scipy.org/svn/scipy/trunk/Lib/stsci or by downloading and -installing all of SciPy from http://www.scipy.org. 
-""" - raise ImportError(msg) diff --git a/pythonPackages/numpy/numpy/numarray/fft.py b/pythonPackages/numpy/numpy/numarray/fft.py deleted file mode 100755 index c7ac6a27ed..0000000000 --- a/pythonPackages/numpy/numpy/numarray/fft.py +++ /dev/null @@ -1,7 +0,0 @@ - -from numpy.oldnumeric.fft import * -import numpy.oldnumeric.fft as nof - -__all__ = nof.__all__ - -del nof diff --git a/pythonPackages/numpy/numpy/numarray/functions.py b/pythonPackages/numpy/numpy/numarray/functions.py deleted file mode 100755 index 1c2141c98b..0000000000 --- a/pythonPackages/numpy/numpy/numarray/functions.py +++ /dev/null @@ -1,498 +0,0 @@ -# missing Numarray defined names (in from numarray import *) -##__all__ = ['ClassicUnpickler', 'Complex32_fromtype', -## 'Complex64_fromtype', 'ComplexArray', 'Error', -## 'MAX_ALIGN', 'MAX_INT_SIZE', 'MAX_LINE_WIDTH', -## 'NDArray', 'NewArray', 'NumArray', -## 'NumError', 'PRECISION', 'Py2NumType', -## 'PyINT_TYPES', 'PyLevel2Type', 'PyNUMERIC_TYPES', 'PyREAL_TYPES', -## 'SUPPRESS_SMALL', -## 'SuitableBuffer', 'USING_BLAS', -## 'UsesOpPriority', -## 'codegenerator', 'generic', 'libnumarray', 'libnumeric', -## 'make_ufuncs', 'memory', -## 'numarrayall', 'numarraycore', 'numinclude', 'safethread', -## 'typecode', 'typecodes', 'typeconv', 'ufunc', 'ufuncFactory', -## 'ieeemask'] - -__all__ = ['asarray', 'ones', 'zeros', 'array', 'where'] -__all__ += ['vdot', 'dot', 'matrixmultiply', 'ravel', 'indices', - 'arange', 'concatenate', 'all', 'allclose', 'alltrue', 'and_', - 'any', 'argmax', 'argmin', 'argsort', 'around', 'array_equal', - 'array_equiv', 'arrayrange', 'array_str', 'array_repr', - 'array2list', 'average', 'choose', 'CLIP', 'RAISE', 'WRAP', - 'clip', 'compress', 'copy', 'copy_reg', - 'diagonal', 'divide_remainder', 'e', 'explicit_type', 'pi', - 'flush_caches', 'fromfile', 'os', 'sys', 'STRICT', - 'SLOPPY', 'WARN', 'EarlyEOFError', 'SizeMismatchError', - 'SizeMismatchWarning', 'FileSeekWarning', 'fromstring', - 'fromfunction', 'fromlist', 
'getShape', 'getTypeObject', - 'identity', 'info', 'innerproduct', 'inputarray', - 'isBigEndian', 'kroneckerproduct', 'lexsort', 'math', - 'operator', 'outerproduct', 'put', 'putmask', 'rank', - 'repeat', 'reshape', 'resize', 'round', 'searchsorted', - 'shape', 'size', 'sometrue', 'sort', 'swapaxes', 'take', - 'tcode', 'tname', 'tensormultiply', 'trace', 'transpose', - 'types', 'value', 'cumsum', 'cumproduct', 'nonzero', 'newobj', - 'togglebyteorder' - ] - -import copy -import copy_reg -import types -import os -import sys -import math -import operator - -from numpy import dot as matrixmultiply, dot, vdot, ravel, concatenate, all,\ - allclose, any, argsort, array_equal, array_equiv,\ - array_str, array_repr, CLIP, RAISE, WRAP, clip, concatenate, \ - diagonal, e, pi, inner as innerproduct, nonzero, \ - outer as outerproduct, kron as kroneckerproduct, lexsort, putmask, rank, \ - resize, searchsorted, shape, size, sort, swapaxes, trace, transpose -import numpy as np - -from numerictypes import typefrom - -if sys.version_info[0] >= 3: - import copyreg as copy_reg - -isBigEndian = sys.byteorder != 'little' -value = tcode = 'f' -tname = 'Float32' - -# If dtype is not None, then it is used -# If type is not None, then it is used -# If typecode is not None then it is used -# If use_default is True, then the default -# data-type is returned if all are None -def type2dtype(typecode, type, dtype, use_default=True): - if dtype is None: - if type is None: - if use_default or typecode is not None: - dtype = np.dtype(typecode) - else: - dtype = np.dtype(type) - if use_default and dtype is None: - dtype = np.dtype('int') - return dtype - -def fromfunction(shape, dimensions, type=None, typecode=None, dtype=None): - dtype = type2dtype(typecode, type, dtype, 1) - return np.fromfunction(shape, dimensions, dtype=dtype) -def ones(shape, type=None, typecode=None, dtype=None): - dtype = type2dtype(typecode, type, dtype, 1) - return np.ones(shape, dtype) - -def zeros(shape, type=None, 
typecode=None, dtype=None): - dtype = type2dtype(typecode, type, dtype, 1) - return np.zeros(shape, dtype) - -def where(condition, x=None, y=None, out=None): - if x is None and y is None: - arr = np.where(condition) - else: - arr = np.where(condition, x, y) - if out is not None: - out[...] = arr - return out - return arr - -def indices(shape, type=None): - return np.indices(shape, type) - -def arange(a1, a2=None, stride=1, type=None, shape=None, - typecode=None, dtype=None): - dtype = type2dtype(typecode, type, dtype, 0) - return np.arange(a1, a2, stride, dtype) - -arrayrange = arange - -def alltrue(x, axis=0): - return np.alltrue(x, axis) - -def and_(a, b): - """Same as a & b - """ - return a & b - -def divide_remainder(a, b): - a, b = asarray(a), asarray(b) - return (a/b,a%b) - -def around(array, digits=0, output=None): - ret = np.around(array, digits, output) - if output is None: - return ret - return - -def array2list(arr): - return arr.tolist() - - -def choose(selector, population, outarr=None, clipmode=RAISE): - a = np.asarray(selector) - ret = a.choose(population, out=outarr, mode=clipmode) - if outarr is None: - return ret - return - -def compress(condition, a, axis=0): - return np.compress(condition, a, axis) - -# only returns a view -def explicit_type(a): - x = a.view() - return x - -# stub -def flush_caches(): - pass - - -class EarlyEOFError(Exception): - "Raised in fromfile() if EOF unexpectedly occurs." - pass - -class SizeMismatchError(Exception): - "Raised in fromfile() if file size does not match shape." - pass - -class SizeMismatchWarning(Warning): - "Issued in fromfile() if file size does not match shape." 
- pass - -class FileSeekWarning(Warning): - "Issued in fromfile() if there is unused data and seek() fails" - pass - - -STRICT, SLOPPY, WARN = range(3) - -_BLOCKSIZE=1024 - -# taken and adapted directly from numarray -def fromfile(infile, type=None, shape=None, sizing=STRICT, - typecode=None, dtype=None): - if isinstance(infile, (str, unicode)): - infile = open(infile, 'rb') - dtype = type2dtype(typecode, type, dtype, True) - if shape is None: - shape = (-1,) - if not isinstance(shape, tuple): - shape = (shape,) - - if (list(shape).count(-1)>1): - raise ValueError("At most one unspecified dimension in shape") - - if -1 not in shape: - if sizing != STRICT: - raise ValueError("sizing must be STRICT if size complete") - arr = np.empty(shape, dtype) - bytesleft=arr.nbytes - bytesread=0 - while(bytesleft > _BLOCKSIZE): - data = infile.read(_BLOCKSIZE) - if len(data) != _BLOCKSIZE: - raise EarlyEOFError("Unexpected EOF reading data for size complete array") - arr.data[bytesread:bytesread+_BLOCKSIZE]=data - bytesread += _BLOCKSIZE - bytesleft -= _BLOCKSIZE - if bytesleft > 0: - data = infile.read(bytesleft) - if len(data) != bytesleft: - raise EarlyEOFError("Unexpected EOF reading data for size complete array") - arr.data[bytesread:bytesread+bytesleft]=data - return arr - - - ##shape is incompletely specified - ##read until EOF - ##implementation 1: naively use memory blocks - ##problematic because memory allocation can be double what is - ##necessary (!) - - ##the most common case, namely reading in data from an unchanging - ##file whose size may be determined before allocation, should be - ##quick -- only one allocation will be needed. 
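The incremental block-reading logic above exists because numarray had to grow a buffer while reading a file of unknown size. In current NumPy (Python 3 terms), `np.fromfile` does the whole-file read in one call, so the shim's behavior can be sketched much more briefly. `fromfile_modern` is a hypothetical name used only for this illustration, not part of either API:

```python
import os
import tempfile

import numpy as np

# Sketch of the deleted numarray-compat fromfile() in modern terms:
# read the whole file, then infer the single unspecified (-1) dimension
# from the element count, as the shim's shape=(-1,) default implies.
def fromfile_modern(path, dtype='f8', shape=(-1,)):
    arr = np.fromfile(path, dtype=np.dtype(dtype))
    if -1 in shape:
        # product of the known dimensions (1 if shape is just (-1,))
        n_known = int(np.prod([d for d in shape if d != -1]))
        if arr.size % n_known:
            # corresponds to the shim's SizeMismatchError under STRICT sizing
            raise ValueError("file size does not match specified shape")
        shape = tuple(arr.size // n_known if d == -1 else d for d in shape)
    return arr.reshape(shape)
```

Usage: writing six float64 values with `np.arange(6, dtype='f8').tofile(p)` and reading them back with `fromfile_modern(p, 'f8', (-1, 3))` yields a `(2, 3)` array.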
- - recsize = dtype.itemsize * np.product([i for i in shape if i != -1]) - blocksize = max(_BLOCKSIZE/recsize, 1)*recsize - - ##try to estimate file size - try: - curpos=infile.tell() - infile.seek(0,2) - endpos=infile.tell() - infile.seek(curpos) - except (AttributeError, IOError): - initsize=blocksize - else: - initsize=max(1,(endpos-curpos)/recsize)*recsize - - buf = np.newbuffer(initsize) - - bytesread=0 - while 1: - data=infile.read(blocksize) - if len(data) != blocksize: ##eof - break - ##do we have space? - if len(buf) < bytesread+blocksize: - buf=_resizebuf(buf,len(buf)+blocksize) - ## or rather a=resizebuf(a,2*len(a)) ? - assert len(buf) >= bytesread+blocksize - buf[bytesread:bytesread+blocksize]=data - bytesread += blocksize - - if len(data) % recsize != 0: - if sizing == STRICT: - raise SizeMismatchError("Filesize does not match specified shape") - if sizing == WARN: - _warnings.warn("Filesize does not match specified shape", - SizeMismatchWarning) - try: - infile.seek(-(len(data) % recsize),1) - except AttributeError: - _warnings.warn("Could not rewind (no seek support)", - FileSeekWarning) - except IOError: - _warnings.warn("Could not rewind (IOError in seek)", - FileSeekWarning) - datasize = (len(data)/recsize) * recsize - if len(buf) != bytesread+datasize: - buf=_resizebuf(buf,bytesread+datasize) - buf[bytesread:bytesread+datasize]=data[:datasize] - ##deduce shape from len(buf) - shape = list(shape) - uidx = shape.index(-1) - shape[uidx]=len(buf) / recsize - - a = np.ndarray(shape=shape, dtype=type, buffer=buf) - if a.dtype.char == '?': - np.not_equal(a, 0, a) - return a - -def fromstring(datastring, type=None, shape=None, typecode=None, dtype=None): - dtype = type2dtype(typecode, type, dtype, True) - if shape is None: - count = -1 - else: - count = np.product(shape) - res = np.fromstring(datastring, dtype=dtype, count=count) - if shape is not None: - res.shape = shape - return res - - -# check_overflow is ignored -def fromlist(seq, type=None, 
shape=None, check_overflow=0, typecode=None, dtype=None): - dtype = type2dtype(typecode, type, dtype, False) - return np.array(seq, dtype) - -def array(sequence=None, typecode=None, copy=1, savespace=0, - type=None, shape=None, dtype=None): - dtype = type2dtype(typecode, type, dtype, 0) - if sequence is None: - if shape is None: - return None - if dtype is None: - dtype = 'l' - return np.empty(shape, dtype) - if isinstance(sequence, file): - return fromfile(sequence, dtype=dtype, shape=shape) - if isinstance(sequence, str): - return fromstring(sequence, dtype=dtype, shape=shape) - if isinstance(sequence, buffer): - arr = np.frombuffer(sequence, dtype=dtype) - else: - arr = np.array(sequence, dtype, copy=copy) - if shape is not None: - arr.shape = shape - return arr - -def asarray(seq, type=None, typecode=None, dtype=None): - if isinstance(seq, np.ndarray) and type is None and \ - typecode is None and dtype is None: - return seq - return array(seq, type=type, typecode=typecode, copy=0, dtype=dtype) - -inputarray = asarray - - -def getTypeObject(sequence, type): - if type is not None: - return type - try: - return typefrom(np.array(sequence)) - except: - raise TypeError("Can't determine a reasonable type from sequence") - -def getShape(shape, *args): - try: - if shape is () and not args: - return () - if len(args) > 0: - shape = (shape, ) + args - else: - shape = tuple(shape) - dummy = np.array(shape) - if not issubclass(dummy.dtype.type, np.integer): - raise TypeError - if len(dummy) > np.MAXDIMS: - raise TypeError - except: - raise TypeError("Shape must be a sequence of integers") - return shape - - -def identity(n, type=None, typecode=None, dtype=None): - dtype = type2dtype(typecode, type, dtype, True) - return np.identity(n, dtype) - -def info(obj, output=sys.stdout, numpy=0): - if numpy: - bp = lambda x: x - else: - bp = lambda x: int(x) - cls = getattr(obj, '__class__', type(obj)) - if numpy: - nm = getattr(cls, '__name__', cls) - else: - nm = cls - print >> 
output, "class: ", nm - print >> output, "shape: ", obj.shape - strides = obj.strides - print >> output, "strides: ", strides - if not numpy: - print >> output, "byteoffset: 0" - if len(strides) > 0: - bs = obj.strides[0] - else: - bs = obj.itemsize - print >> output, "bytestride: ", bs - print >> output, "itemsize: ", obj.itemsize - print >> output, "aligned: ", bp(obj.flags.aligned) - print >> output, "contiguous: ", bp(obj.flags.contiguous) - if numpy: - print >> output, "fortran: ", obj.flags.fortran - if not numpy: - print >> output, "buffer: ", repr(obj.data) - if not numpy: - extra = " (DEBUG ONLY)" - tic = "'" - else: - extra = "" - tic = "" - print >> output, "data pointer: %s%s" % (hex(obj.ctypes._as_parameter_.value), extra) - print >> output, "byteorder: ", - endian = obj.dtype.byteorder - if endian in ['|','=']: - print >> output, "%s%s%s" % (tic, sys.byteorder, tic) - byteswap = False - elif endian == '>': - print >> output, "%sbig%s" % (tic, tic) - byteswap = sys.byteorder != "big" - else: - print >> output, "%slittle%s" % (tic, tic) - byteswap = sys.byteorder != "little" - print >> output, "byteswap: ", bp(byteswap) - if not numpy: - print >> output, "type: ", typefrom(obj).name - else: - print >> output, "type: %s" % obj.dtype - -#clipmode is ignored if axis is not 0 and array is not 1d -def put(array, indices, values, axis=0, clipmode=RAISE): - if not isinstance(array, np.ndarray): - raise TypeError("put only works on subclass of ndarray") - work = asarray(array) - if axis == 0: - if array.ndim == 1: - work.put(indices, values, clipmode) - else: - work[indices] = values - elif isinstance(axis, (int, long, np.integer)): - work = work.swapaxes(0, axis) - work[indices] = values - work = work.swapaxes(0, axis) - else: - def_axes = range(work.ndim) - for x in axis: - def_axes.remove(x) - axis = list(axis)+def_axes - work = work.transpose(axis) - work[indices] = values - work = work.transpose(axis) - -def repeat(array, repeats, axis=0): - return 
np.repeat(array, repeats, axis) - - -def reshape(array, shape, *args): - if len(args) > 0: - shape = (shape,) + args - return np.reshape(array, shape) - - -import warnings as _warnings -def round(*args, **keys): - _warnings.warn("round() is deprecated. Switch to around()", - DeprecationWarning) - return around(*args, **keys) - -def sometrue(array, axis=0): - return np.sometrue(array, axis) - -#clipmode is ignored if axis is not an integer -def take(array, indices, axis=0, outarr=None, clipmode=RAISE): - array = np.asarray(array) - if isinstance(axis, (int, long, np.integer)): - res = array.take(indices, axis, outarr, clipmode) - if outarr is None: - return res - return - else: - def_axes = range(array.ndim) - for x in axis: - def_axes.remove(x) - axis = list(axis) + def_axes - work = array.transpose(axis) - res = work[indices] - if outarr is None: - return res - outarr[...] = res - return - -def tensormultiply(a1, a2): - a1, a2 = np.asarray(a1), np.asarray(a2) - if (a1.shape[-1] != a2.shape[0]): - raise ValueError("Unmatched dimensions") - shape = a1.shape[:-1] + a2.shape[1:] - return np.reshape(dot(np.reshape(a1, (-1, a1.shape[-1])), - np.reshape(a2, (a2.shape[0],-1))), - shape) - -def cumsum(a1, axis=0, out=None, type=None, dim=0): - return np.asarray(a1).cumsum(axis,dtype=type,out=out) - -def cumproduct(a1, axis=0, out=None, type=None, dim=0): - return np.asarray(a1).cumprod(axis,dtype=type,out=out) - -def argmax(x, axis=-1): - return np.argmax(x, axis) - -def argmin(x, axis=-1): - return np.argmin(x, axis) - -def newobj(self, type): - if type is None: - return np.empty_like(self) - else: - return np.empty(self.shape, type) - -def togglebyteorder(self): - self.dtype=self.dtype.newbyteorder() - -def average(a, axis=0, weights=None, returned=0): - return np.average(a, axis, weights, returned) diff --git a/pythonPackages/numpy/numpy/numarray/image.py b/pythonPackages/numpy/numpy/numarray/image.py deleted file mode 100755 index 3235289050..0000000000 --- 
a/pythonPackages/numpy/numpy/numarray/image.py +++ /dev/null @@ -1,14 +0,0 @@ -try: - from stsci.image import * -except ImportError: - try: - from scipy.stsci.image import * - except ImportError: - msg = \ -"""The image package is not installed - -It can be downloaded by checking out the latest source from -http://svn.scipy.org/svn/scipy/trunk/Lib/stsci or by downloading and -installing all of SciPy from http://www.scipy.org. -""" - raise ImportError(msg) diff --git a/pythonPackages/numpy/numpy/numarray/include/numpy/arraybase.h b/pythonPackages/numpy/numpy/numarray/include/numpy/arraybase.h deleted file mode 100755 index a964979ce1..0000000000 --- a/pythonPackages/numpy/numpy/numarray/include/numpy/arraybase.h +++ /dev/null @@ -1,71 +0,0 @@ -#if !defined(__arraybase_h) -#define __arraybase_h 1 - -#define SZ_BUF 79 -#define MAXDIM NPY_MAXDIMS -#define MAXARGS 18 - -typedef npy_intp maybelong; -typedef npy_bool Bool; -typedef npy_int8 Int8; -typedef npy_uint8 UInt8; -typedef npy_int16 Int16; -typedef npy_uint16 UInt16; -typedef npy_int32 Int32; -typedef npy_uint32 UInt32; -typedef npy_int64 Int64; -typedef npy_uint64 UInt64; -typedef npy_float32 Float32; -typedef npy_float64 Float64; - -typedef enum -{ - tAny=-1, - tBool=PyArray_BOOL, - tInt8=PyArray_INT8, - tUInt8=PyArray_UINT8, - tInt16=PyArray_INT16, - tUInt16=PyArray_UINT16, - tInt32=PyArray_INT32, - tUInt32=PyArray_UINT32, - tInt64=PyArray_INT64, - tUInt64=PyArray_UINT64, - tFloat32=PyArray_FLOAT32, - tFloat64=PyArray_FLOAT64, - tComplex32=PyArray_COMPLEX64, - tComplex64=PyArray_COMPLEX128, - tObject=PyArray_OBJECT, /* placeholder... 
does nothing */ - tMaxType=PyArray_NTYPES, - tDefault = tFloat64, -#if NPY_BITSOF_LONG == 64 - tLong = tInt64, -#else - tLong = tInt32, -#endif -} NumarrayType; - -#define nNumarrayType PyArray_NTYPES - -#define HAS_UINT64 1 - -typedef enum -{ - NUM_LITTLE_ENDIAN=0, - NUM_BIG_ENDIAN = 1 -} NumarrayByteOrder; - -typedef struct { Float32 r, i; } Complex32; -typedef struct { Float64 r, i; } Complex64; - -#define WRITABLE NPY_WRITEABLE -#define CHECKOVERFLOW 0x800 -#define UPDATEDICT 0x1000 -#define FORTRAN_CONTIGUOUS NPY_FORTRAN -#define IS_CARRAY (NPY_CONTIGUOUS | NPY_ALIGNED) - -#define PyArray(m) ((PyArrayObject *)(m)) -#define PyArray_ISFORTRAN_CONTIGUOUS(m) (((PyArray(m))->flags & FORTRAN_CONTIGUOUS) != 0) -#define PyArray_ISWRITABLE PyArray_ISWRITEABLE - - -#endif diff --git a/pythonPackages/numpy/numpy/numarray/include/numpy/cfunc.h b/pythonPackages/numpy/numpy/numarray/include/numpy/cfunc.h deleted file mode 100755 index b581be08f0..0000000000 --- a/pythonPackages/numpy/numpy/numarray/include/numpy/cfunc.h +++ /dev/null @@ -1,78 +0,0 @@ -#if !defined(__cfunc__) -#define __cfunc__ 1 - -typedef PyObject *(*CFUNCasPyValue)(void *); -typedef int (*UFUNC)(long, long, long, void **, long*); -/* typedef void (*CFUNC_2ARG)(long, void *, void *); */ -/* typedef void (*CFUNC_3ARG)(long, void *, void *, void *); */ -typedef int (*CFUNCfromPyValue)(PyObject *, void *); -typedef int (*CFUNC_STRIDE_CONV_FUNC)(long, long, maybelong *, - void *, long, maybelong*, void *, long, maybelong *); - -typedef int (*CFUNC_STRIDED_FUNC)(PyObject *, long, PyArrayObject **, - char **data); - -#define MAXARRAYS 16 - -typedef enum { - CFUNC_UFUNC, - CFUNC_STRIDING, - CFUNC_NSTRIDING, - CFUNC_AS_PY_VALUE, - CFUNC_FROM_PY_VALUE -} eCfuncType; - -typedef struct { - char *name; - void *fptr; /* Pointer to "un-wrapped" c function */ - eCfuncType type; /* UFUNC, STRIDING, AsPyValue, FromPyValue */ - Bool chkself; /* CFUNC does own alignment/bounds checking */ - Bool align; /* CFUNC requires 
aligned buffer pointers */ - Int8 wantIn, wantOut; /* required input/output arg counts. */ - Int8 sizes[MAXARRAYS]; /* array of align/itemsizes. */ - Int8 iters[MAXARRAYS]; /* array of element counts. 0 --> niter. */ -} CfuncDescriptor; - -typedef struct { - PyObject_HEAD - CfuncDescriptor descr; -} CfuncObject; - -#define SELF_CHECKED_CFUNC_DESCR(name, type) \ - static CfuncDescriptor name##_descr = { #name, (void *) name, type, 1 } - -#define CHECK_ALIGN 1 - -#define CFUNC_DESCR(name, type, align, iargs, oargs, s1, s2, s3, i1, i2, i3) \ - static CfuncDescriptor name##_descr = \ - { #name, (void *)name, type, 0, align, iargs, oargs, {s1, s2, s3}, {i1, i2, i3} } - -#define UFUNC_DESCR1(name, s1) \ - CFUNC_DESCR(name, CFUNC_UFUNC, CHECK_ALIGN, 0, 1, s1, 0, 0, 0, 0, 0) - -#define UFUNC_DESCR2(name, s1, s2) \ - CFUNC_DESCR(name, CFUNC_UFUNC, CHECK_ALIGN, 1, 1, s1, s2, 0, 0, 0, 0) - -#define UFUNC_DESCR3(name, s1, s2, s3) \ - CFUNC_DESCR(name, CFUNC_UFUNC, CHECK_ALIGN, 2, 1, s1, s2, s3, 0, 0, 0) - -#define UFUNC_DESCR3sv(name, s1, s2, s3) \ - CFUNC_DESCR(name, CFUNC_UFUNC, CHECK_ALIGN, 2, 1, s1, s2, s3, 1, 0, 0) - -#define UFUNC_DESCR3vs(name, s1, s2, s3) \ - CFUNC_DESCR(name, CFUNC_UFUNC, CHECK_ALIGN, 2, 1, s1, s2, s3, 0, 1, 0) - -#define STRIDING_DESCR2(name, align, s1, s2) \ - CFUNC_DESCR(name, CFUNC_STRIDING, align, 1, 1, s1, s2, 0, 0, 0, 0) - -#define NSTRIDING_DESCR1(name) \ - CFUNC_DESCR(name, CFUNC_NSTRIDING, 0, 0, 1, 0, 0, 0, 0, 0, 0) - -#define NSTRIDING_DESCR2(name) \ - CFUNC_DESCR(name, CFUNC_NSTRIDING, 0, 1, 1, 0, 0, 0, 0, 0, 0) - -#define NSTRIDING_DESCR3(name) \ - CFUNC_DESCR(name, CFUNC_NSTRIDING, 0, 2, 1, 0, 0, 0, 0, 0, 0) - -#endif - diff --git a/pythonPackages/numpy/numpy/numarray/include/numpy/ieeespecial.h b/pythonPackages/numpy/numpy/numarray/include/numpy/ieeespecial.h deleted file mode 100755 index 0f3fff2a92..0000000000 --- a/pythonPackages/numpy/numpy/numarray/include/numpy/ieeespecial.h +++ /dev/null @@ -1,124 +0,0 @@ -/* 32-bit special value 
ranges */ - -#if defined(_MSC_VER) -#define MKINT(x) (x##UL) -#define MKINT64(x) (x##Ui64) -#define BIT(x) (1Ui64 << (x)) -#else -#define MKINT(x) (x##U) -#define MKINT64(x) (x##ULL) -#define BIT(x) (1ULL << (x)) -#endif - - -#define NEG_QUIET_NAN_MIN32 MKINT(0xFFC00001) -#define NEG_QUIET_NAN_MAX32 MKINT(0xFFFFFFFF) - -#define INDETERMINATE_MIN32 MKINT(0xFFC00000) -#define INDETERMINATE_MAX32 MKINT(0xFFC00000) - -#define NEG_SIGNAL_NAN_MIN32 MKINT(0xFF800001) -#define NEG_SIGNAL_NAN_MAX32 MKINT(0xFFBFFFFF) - -#define NEG_INFINITY_MIN32 MKINT(0xFF800000) - -#define NEG_NORMALIZED_MIN32 MKINT(0x80800000) -#define NEG_NORMALIZED_MAX32 MKINT(0xFF7FFFFF) - -#define NEG_DENORMALIZED_MIN32 MKINT(0x80000001) -#define NEG_DENORMALIZED_MAX32 MKINT(0x807FFFFF) - -#define NEG_ZERO_MIN32 MKINT(0x80000000) -#define NEG_ZERO_MAX32 MKINT(0x80000000) - -#define POS_ZERO_MIN32 MKINT(0x00000000) -#define POS_ZERO_MAX32 MKINT(0x00000000) - -#define POS_DENORMALIZED_MIN32 MKINT(0x00000001) -#define POS_DENORMALIZED_MAX32 MKINT(0x007FFFFF) - -#define POS_NORMALIZED_MIN32 MKINT(0x00800000) -#define POS_NORMALIZED_MAX32 MKINT(0x7F7FFFFF) - -#define POS_INFINITY_MIN32 MKINT(0x7F800000) -#define POS_INFINITY_MAX32 MKINT(0x7F800000) - -#define POS_SIGNAL_NAN_MIN32 MKINT(0x7F800001) -#define POS_SIGNAL_NAN_MAX32 MKINT(0x7FBFFFFF) - -#define POS_QUIET_NAN_MIN32 MKINT(0x7FC00000) -#define POS_QUIET_NAN_MAX32 MKINT(0x7FFFFFFF) - -/* 64-bit special value ranges */ - -#define NEG_QUIET_NAN_MIN64 MKINT64(0xFFF8000000000001) -#define NEG_QUIET_NAN_MAX64 MKINT64(0xFFFFFFFFFFFFFFFF) - -#define INDETERMINATE_MIN64 MKINT64(0xFFF8000000000000) -#define INDETERMINATE_MAX64 MKINT64(0xFFF8000000000000) - -#define NEG_SIGNAL_NAN_MIN64 MKINT64(0xFFF7FFFFFFFFFFFF) -#define NEG_SIGNAL_NAN_MAX64 MKINT64(0xFFF0000000000001) - -#define NEG_INFINITY_MIN64 MKINT64(0xFFF0000000000000) - -#define NEG_NORMALIZED_MIN64 MKINT64(0xFFEFFFFFFFFFFFFF) -#define NEG_NORMALIZED_MAX64 MKINT64(0x8010000000000000) - -#define 
NEG_DENORMALIZED_MIN64 MKINT64(0x800FFFFFFFFFFFFF) -#define NEG_DENORMALIZED_MAX64 MKINT64(0x8000000000000001) - -#define NEG_ZERO_MIN64 MKINT64(0x8000000000000000) -#define NEG_ZERO_MAX64 MKINT64(0x8000000000000000) - -#define POS_ZERO_MIN64 MKINT64(0x0000000000000000) -#define POS_ZERO_MAX64 MKINT64(0x0000000000000000) - -#define POS_DENORMALIZED_MIN64 MKINT64(0x0000000000000001) -#define POS_DENORMALIZED_MAX64 MKINT64(0x000FFFFFFFFFFFFF) - -#define POS_NORMALIZED_MIN64 MKINT64(0x0010000000000000) -#define POS_NORMALIZED_MAX64 MKINT64(0x7FEFFFFFFFFFFFFF) - -#define POS_INFINITY_MIN64 MKINT64(0x7FF0000000000000) -#define POS_INFINITY_MAX64 MKINT64(0x7FF0000000000000) - -#define POS_SIGNAL_NAN_MIN64 MKINT64(0x7FF0000000000001) -#define POS_SIGNAL_NAN_MAX64 MKINT64(0x7FF7FFFFFFFFFFFF) - -#define POS_QUIET_NAN_MIN64 MKINT64(0x7FF8000000000000) -#define POS_QUIET_NAN_MAX64 MKINT64(0x7FFFFFFFFFFFFFFF) - -typedef enum -{ - POS_QNAN_BIT, - NEG_QNAN_BIT, - POS_SNAN_BIT, - NEG_SNAN_BIT, - POS_INF_BIT, - NEG_INF_BIT, - POS_DEN_BIT, - NEG_DEN_BIT, - POS_NOR_BIT, - NEG_NOR_BIT, - POS_ZERO_BIT, - NEG_ZERO_BIT, - INDETERM_BIT, - BUG_BIT -} ieee_selects; - -#define MSK_POS_QNAN BIT(POS_QNAN_BIT) -#define MSK_POS_SNAN BIT(POS_SNAN_BIT) -#define MSK_POS_INF BIT(POS_INF_BIT) -#define MSK_POS_DEN BIT(POS_DEN_BIT) -#define MSK_POS_NOR BIT(POS_NOR_BIT) -#define MSK_POS_ZERO BIT(POS_ZERO_BIT) -#define MSK_NEG_QNAN BIT(NEG_QNAN_BIT) -#define MSK_NEG_SNAN BIT(NEG_SNAN_BIT) -#define MSK_NEG_INF BIT(NEG_INF_BIT) -#define MSK_NEG_DEN BIT(NEG_DEN_BIT) -#define MSK_NEG_NOR BIT(NEG_NOR_BIT) -#define MSK_NEG_ZERO BIT(NEG_ZERO_BIT) -#define MSK_INDETERM BIT(INDETERM_BIT) -#define MSK_BUG BIT(BUG_BIT) - diff --git a/pythonPackages/numpy/numpy/numarray/include/numpy/libnumarray.h b/pythonPackages/numpy/numpy/numarray/include/numpy/libnumarray.h deleted file mode 100755 index c69f10d8ee..0000000000 --- a/pythonPackages/numpy/numpy/numarray/include/numpy/libnumarray.h +++ /dev/null @@ -1,630 +0,0 @@ 
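The 32- and 64-bit ranges defined above carve the IEEE-754 bit space into zero, denormalized, normalized, infinity, and NaN regions. For illustration, the 64-bit classification can be reproduced in Python by splitting a double into its sign, exponent, and fraction fields; `ieee_kind` is a hypothetical helper for this sketch, not an API from numarray or NumPy:

```python
import struct

# Sketch: classify a float64 by its raw bit pattern, mirroring the
# 64-bit ranges above (POS_INFINITY_*64, POS_QUIET_NAN_*64, etc.).
def ieee_kind(x):
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    sign = bits >> 63
    exp = (bits >> 52) & 0x7FF
    frac = bits & ((1 << 52) - 1)
    if exp == 0x7FF:
        if frac == 0:
            return 'neg_inf' if sign else 'pos_inf'
        # quiet NaNs have the top fraction bit set (the 0x7FF8.../0xFFF8...
        # ranges, POS_QUIET_NAN_MIN64 and NEG_QUIET_NAN_MIN64 above)
        return 'qnan' if frac >> 51 else 'snan'
    if exp == 0:
        return 'zero' if frac == 0 else 'denormalized'
    return 'normalized'
```

The quiet/signaling split here matches the table: POS_SIGNAL_NAN spans 0x7FF0000000000001 through 0x7FF7FFFFFFFFFFFF (top fraction bit clear), while POS_QUIET_NAN starts at 0x7FF8000000000000.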
-/* Compatibility with numarray. Do not use in new code. - */ - -#ifndef NUMPY_LIBNUMARRAY_H -#define NUMPY_LIBNUMARRAY_H - -#include "numpy/arrayobject.h" -#include "arraybase.h" -#include "nummacro.h" -#include "numcomplex.h" -#include "ieeespecial.h" -#include "cfunc.h" - -#ifdef __cplusplus -extern "C" { -#endif - -/* Header file for libnumarray */ - -#if !defined(_libnumarray_MODULE) - -/* -Extensions constructed from separate compilation units can access the -C-API defined here by defining "libnumarray_UNIQUE_SYMBOL" to a global -name unique to the extension. Doing this circumvents the requirement -to import libnumarray into each compilation unit, but is nevertheless -mildly discouraged as "outside the Python norm" and potentially -leading to problems. Looking around at "existing Python art", most -extension modules are monolithic C files, and likely for good reason. -*/ - -/* C API address pointer */ -#if defined(NO_IMPORT) || defined(NO_IMPORT_ARRAY) -extern void **libnumarray_API; -#else -#if defined(libnumarray_UNIQUE_SYMBOL) -void **libnumarray_API; -#else -static void **libnumarray_API; -#endif -#endif - -#if PY_VERSION_HEX >= 0x03000000 -#define _import_libnumarray() \ - { \ - PyObject *module = PyImport_ImportModule("numpy.numarray._capi"); \ - if (module != NULL) { \ - PyObject *module_dict = PyModule_GetDict(module); \ - PyObject *c_api_object = \ - PyDict_GetItemString(module_dict, "_C_API"); \ - if (c_api_object && PyCapsule_CheckExact(c_api_object)) { \ - libnumarray_API = (void **)PyCapsule_GetPointer(c_api_object, NULL); \ - } else { \ - PyErr_Format(PyExc_ImportError, \ - "Can't get API for module 'numpy.numarray._capi'"); \ - } \ - } \ - } - -#else -#define _import_libnumarray() \ - { \ - PyObject *module = PyImport_ImportModule("numpy.numarray._capi"); \ - if (module != NULL) { \ - PyObject *module_dict = PyModule_GetDict(module); \ - PyObject *c_api_object = \ - PyDict_GetItemString(module_dict, "_C_API"); \ - if (c_api_object && 
PyCObject_Check(c_api_object)) { \ - libnumarray_API = (void **)PyCObject_AsVoidPtr(c_api_object); \ - } else { \ - PyErr_Format(PyExc_ImportError, \ - "Can't get API for module 'numpy.numarray._capi'"); \ - } \ - } \ - } -#endif - -#define import_libnumarray() _import_libnumarray(); if (PyErr_Occurred()) { PyErr_Print(); PyErr_SetString(PyExc_ImportError, "numpy.numarray._capi failed to import.\n"); return; } - -#endif - - -#define libnumarray_FatalApiError (Py_FatalError("Call to API function without first calling import_libnumarray() in " __FILE__), NULL) - - -/* Macros defining components of function prototypes */ - -#ifdef _libnumarray_MODULE - /* This section is used when compiling libnumarray */ - -static PyObject *_Error; - -static PyObject* getBuffer (PyObject*o); - -static int isBuffer (PyObject*o); - -static int getWriteBufferDataPtr (PyObject*o,void**p); - -static int isBufferWriteable (PyObject*o); - -static int getReadBufferDataPtr (PyObject*o,void**p); - -static int getBufferSize (PyObject*o); - -static double num_log (double x); - -static double num_log10 (double x); - -static double num_pow (double x, double y); - -static double num_acosh (double x); - -static double num_asinh (double x); - -static double num_atanh (double x); - -static double num_round (double x); - -static int int_dividebyzero_error (long value, long unused); - -static int int_overflow_error (Float64 value); - -static int umult64_overflow (UInt64 a, UInt64 b); - -static int smult64_overflow (Int64 a0, Int64 b0); - -static void NA_Done (void); - -static PyArrayObject* NA_NewAll (int ndim, maybelong* shape, NumarrayType type, void* buffer, maybelong byteoffset, maybelong bytestride, int byteorder, int aligned, int writeable); - -static PyArrayObject* NA_NewAllStrides (int ndim, maybelong* shape, maybelong* strides, NumarrayType type, void* buffer, maybelong byteoffset, int byteorder, int aligned, int writeable); - -static PyArrayObject* NA_New (void* buffer, NumarrayType type, int 
ndim,...); - -static PyArrayObject* NA_Empty (int ndim, maybelong* shape, NumarrayType type); - -static PyArrayObject* NA_NewArray (void* buffer, NumarrayType type, int ndim, ...); - -static PyArrayObject* NA_vNewArray (void* buffer, NumarrayType type, int ndim, maybelong *shape); - -static PyObject* NA_ReturnOutput (PyObject*,PyArrayObject*); - -static long NA_getBufferPtrAndSize (PyObject*,int,void**); - -static int NA_checkIo (char*,int,int,int,int); - -static int NA_checkOneCBuffer (char*,long,void*,long,size_t); - -static int NA_checkNCBuffers (char*,int,long,void**,long*,Int8*,Int8*); - -static int NA_checkOneStriding (char*,long,maybelong*,long,maybelong*,long,long,int); - -static PyObject* NA_new_cfunc (CfuncDescriptor*); - -static int NA_add_cfunc (PyObject*,char*,CfuncDescriptor*); - -static PyArrayObject* NA_InputArray (PyObject*,NumarrayType,int); - -static PyArrayObject* NA_OutputArray (PyObject*,NumarrayType,int); - -static PyArrayObject* NA_IoArray (PyObject*,NumarrayType,int); - -static PyArrayObject* NA_OptionalOutputArray (PyObject*,NumarrayType,int,PyArrayObject*); - -static long NA_get_offset (PyArrayObject*,int,...); - -static Float64 NA_get_Float64 (PyArrayObject*,long); - -static void NA_set_Float64 (PyArrayObject*,long,Float64); - -static Complex64 NA_get_Complex64 (PyArrayObject*,long); - -static void NA_set_Complex64 (PyArrayObject*,long,Complex64); - -static Int64 NA_get_Int64 (PyArrayObject*,long); - -static void NA_set_Int64 (PyArrayObject*,long,Int64); - -static Float64 NA_get1_Float64 (PyArrayObject*,long); - -static Float64 NA_get2_Float64 (PyArrayObject*,long,long); - -static Float64 NA_get3_Float64 (PyArrayObject*,long,long,long); - -static void NA_set1_Float64 (PyArrayObject*,long,Float64); - -static void NA_set2_Float64 (PyArrayObject*,long,long,Float64); - -static void NA_set3_Float64 (PyArrayObject*,long,long,long,Float64); - -static Complex64 NA_get1_Complex64 (PyArrayObject*,long); - -static Complex64 NA_get2_Complex64 
(PyArrayObject*,long,long); - -static Complex64 NA_get3_Complex64 (PyArrayObject*,long,long,long); - -static void NA_set1_Complex64 (PyArrayObject*,long,Complex64); - -static void NA_set2_Complex64 (PyArrayObject*,long,long,Complex64); - -static void NA_set3_Complex64 (PyArrayObject*,long,long,long,Complex64); - -static Int64 NA_get1_Int64 (PyArrayObject*,long); - -static Int64 NA_get2_Int64 (PyArrayObject*,long,long); - -static Int64 NA_get3_Int64 (PyArrayObject*,long,long,long); - -static void NA_set1_Int64 (PyArrayObject*,long,Int64); - -static void NA_set2_Int64 (PyArrayObject*,long,long,Int64); - -static void NA_set3_Int64 (PyArrayObject*,long,long,long,Int64); - -static int NA_get1D_Float64 (PyArrayObject*,long,int,Float64*); - -static int NA_set1D_Float64 (PyArrayObject*,long,int,Float64*); - -static int NA_get1D_Int64 (PyArrayObject*,long,int,Int64*); - -static int NA_set1D_Int64 (PyArrayObject*,long,int,Int64*); - -static int NA_get1D_Complex64 (PyArrayObject*,long,int,Complex64*); - -static int NA_set1D_Complex64 (PyArrayObject*,long,int,Complex64*); - -static int NA_ShapeEqual (PyArrayObject*,PyArrayObject*); - -static int NA_ShapeLessThan (PyArrayObject*,PyArrayObject*); - -static int NA_ByteOrder (void); - -static Bool NA_IeeeSpecial32 (Float32*,Int32*); - -static Bool NA_IeeeSpecial64 (Float64*,Int32*); - -static PyArrayObject* NA_updateDataPtr (PyArrayObject*); - -static char* NA_typeNoToName (int); - -static int NA_nameToTypeNo (char*); - -static PyObject* NA_typeNoToTypeObject (int); - -static PyObject* NA_intTupleFromMaybeLongs (int,maybelong*); - -static long NA_maybeLongsFromIntTuple (int,maybelong*,PyObject*); - -static int NA_intTupleProduct (PyObject *obj, long *product); - -static long NA_isIntegerSequence (PyObject*); - -static PyObject* NA_setArrayFromSequence (PyArrayObject*,PyObject*); - -static int NA_maxType (PyObject*); - -static int NA_isPythonScalar (PyObject *obj); - -static PyObject* NA_getPythonScalar (PyArrayObject*,long); - 
-static int NA_setFromPythonScalar (PyArrayObject*,long,PyObject*); - -static int NA_NDArrayCheck (PyObject*); - -static int NA_NumArrayCheck (PyObject*); - -static int NA_ComplexArrayCheck (PyObject*); - -static unsigned long NA_elements (PyArrayObject*); - -static int NA_typeObjectToTypeNo (PyObject*); - -static int NA_copyArray (PyArrayObject* to, const PyArrayObject* from); - -static PyArrayObject* NA_copy (PyArrayObject*); - -static PyObject* NA_getType (PyObject *typeobj_or_name); - -static PyObject * NA_callCUFuncCore (PyObject *cfunc, long niter, long ninargs, long noutargs, PyObject **BufferObj, long *offset); - -static PyObject * NA_callStrideConvCFuncCore (PyObject *cfunc, int nshape, maybelong *shape, PyObject *inbuffObj, long inboffset, int nstrides0, maybelong *inbstrides, PyObject *outbuffObj, long outboffset, int nstrides1, maybelong *outbstrides, long nbytes); - -static void NA_stridesFromShape (int nshape, maybelong *shape, maybelong bytestride, maybelong *strides); - -static int NA_OperatorCheck (PyObject *obj); - -static int NA_ConverterCheck (PyObject *obj); - -static int NA_UfuncCheck (PyObject *obj); - -static int NA_CfuncCheck (PyObject *obj); - -static int NA_getByteOffset (PyArrayObject *array, int nindices, maybelong *indices, long *offset); - -static int NA_swapAxes (PyArrayObject *array, int x, int y); - -static PyObject * NA_initModuleGlobal (char *module, char *global); - -static NumarrayType NA_NumarrayType (PyObject *seq); - -static PyArrayObject * NA_NewAllFromBuffer (int ndim, maybelong *shape, NumarrayType type, PyObject *bufferObject, maybelong byteoffset, maybelong bytestride, int byteorder, int aligned, int writeable); - -static Float64 * NA_alloc1D_Float64 (PyArrayObject *a, long offset, int cnt); - -static Int64 * NA_alloc1D_Int64 (PyArrayObject *a, long offset, int cnt); - -static void NA_updateAlignment (PyArrayObject *self); - -static void NA_updateContiguous (PyArrayObject *self); - -static void NA_updateStatus 
(PyArrayObject *self); - -static int NA_NumArrayCheckExact (PyObject *op); - -static int NA_NDArrayCheckExact (PyObject *op); - -static int NA_OperatorCheckExact (PyObject *op); - -static int NA_ConverterCheckExact (PyObject *op); - -static int NA_UfuncCheckExact (PyObject *op); - -static int NA_CfuncCheckExact (PyObject *op); - -static char * NA_getArrayData (PyArrayObject *ap); - -static void NA_updateByteswap (PyArrayObject *ap); - -static PyArray_Descr * NA_DescrFromType (int type); - -static PyObject * NA_Cast (PyArrayObject *a, int type); - -static int NA_checkFPErrors (void); - -static void NA_clearFPErrors (void); - -static int NA_checkAndReportFPErrors (char *name); - -static Bool NA_IeeeMask32 (Float32,Int32); - -static Bool NA_IeeeMask64 (Float64,Int32); - -static int _NA_callStridingHelper (PyObject *aux, long dim, long nnumarray, PyArrayObject *numarray[], char *data[], CFUNC_STRIDED_FUNC f); - -static PyArrayObject * NA_FromDimsStridesDescrAndData (int nd, maybelong *dims, maybelong *strides, PyArray_Descr *descr, char *data); - -static PyArrayObject * NA_FromDimsTypeAndData (int nd, maybelong *dims, int type, char *data); - -static PyArrayObject * NA_FromDimsStridesTypeAndData (int nd, maybelong *dims, maybelong *strides, int type, char *data); - -static int NA_scipy_typestr (NumarrayType t, int byteorder, char *typestr); - -static PyArrayObject * NA_FromArrayStruct (PyObject *a); - - -#else - /* This section is used in modules that use libnumarray */ - -#define getBuffer (libnumarray_API ? (*(PyObject* (*) (PyObject*o) ) libnumarray_API[ 0 ]) : (*(PyObject* (*) (PyObject*o) ) libnumarray_FatalApiError)) - -#define isBuffer (libnumarray_API ? (*(int (*) (PyObject*o) ) libnumarray_API[ 1 ]) : (*(int (*) (PyObject*o) ) libnumarray_FatalApiError)) - -#define getWriteBufferDataPtr (libnumarray_API ? 
(*(int (*) (PyObject*o,void**p) ) libnumarray_API[ 2 ]) : (*(int (*) (PyObject*o,void**p) ) libnumarray_FatalApiError)) - -#define isBufferWriteable (libnumarray_API ? (*(int (*) (PyObject*o) ) libnumarray_API[ 3 ]) : (*(int (*) (PyObject*o) ) libnumarray_FatalApiError)) - -#define getReadBufferDataPtr (libnumarray_API ? (*(int (*) (PyObject*o,void**p) ) libnumarray_API[ 4 ]) : (*(int (*) (PyObject*o,void**p) ) libnumarray_FatalApiError)) - -#define getBufferSize (libnumarray_API ? (*(int (*) (PyObject*o) ) libnumarray_API[ 5 ]) : (*(int (*) (PyObject*o) ) libnumarray_FatalApiError)) - -#define num_log (libnumarray_API ? (*(double (*) (double x) ) libnumarray_API[ 6 ]) : (*(double (*) (double x) ) libnumarray_FatalApiError)) - -#define num_log10 (libnumarray_API ? (*(double (*) (double x) ) libnumarray_API[ 7 ]) : (*(double (*) (double x) ) libnumarray_FatalApiError)) - -#define num_pow (libnumarray_API ? (*(double (*) (double x, double y) ) libnumarray_API[ 8 ]) : (*(double (*) (double x, double y) ) libnumarray_FatalApiError)) - -#define num_acosh (libnumarray_API ? (*(double (*) (double x) ) libnumarray_API[ 9 ]) : (*(double (*) (double x) ) libnumarray_FatalApiError)) - -#define num_asinh (libnumarray_API ? (*(double (*) (double x) ) libnumarray_API[ 10 ]) : (*(double (*) (double x) ) libnumarray_FatalApiError)) - -#define num_atanh (libnumarray_API ? (*(double (*) (double x) ) libnumarray_API[ 11 ]) : (*(double (*) (double x) ) libnumarray_FatalApiError)) - -#define num_round (libnumarray_API ? (*(double (*) (double x) ) libnumarray_API[ 12 ]) : (*(double (*) (double x) ) libnumarray_FatalApiError)) - -#define int_dividebyzero_error (libnumarray_API ? (*(int (*) (long value, long unused) ) libnumarray_API[ 13 ]) : (*(int (*) (long value, long unused) ) libnumarray_FatalApiError)) - -#define int_overflow_error (libnumarray_API ? 
(*(int (*) (Float64 value) ) libnumarray_API[ 14 ]) : (*(int (*) (Float64 value) ) libnumarray_FatalApiError)) - -#define umult64_overflow (libnumarray_API ? (*(int (*) (UInt64 a, UInt64 b) ) libnumarray_API[ 15 ]) : (*(int (*) (UInt64 a, UInt64 b) ) libnumarray_FatalApiError)) - -#define smult64_overflow (libnumarray_API ? (*(int (*) (Int64 a0, Int64 b0) ) libnumarray_API[ 16 ]) : (*(int (*) (Int64 a0, Int64 b0) ) libnumarray_FatalApiError)) - -#define NA_Done (libnumarray_API ? (*(void (*) (void) ) libnumarray_API[ 17 ]) : (*(void (*) (void) ) libnumarray_FatalApiError)) - -#define NA_NewAll (libnumarray_API ? (*(PyArrayObject* (*) (int ndim, maybelong* shape, NumarrayType type, void* buffer, maybelong byteoffset, maybelong bytestride, int byteorder, int aligned, int writeable) ) libnumarray_API[ 18 ]) : (*(PyArrayObject* (*) (int ndim, maybelong* shape, NumarrayType type, void* buffer, maybelong byteoffset, maybelong bytestride, int byteorder, int aligned, int writeable) ) libnumarray_FatalApiError)) - -#define NA_NewAllStrides (libnumarray_API ? (*(PyArrayObject* (*) (int ndim, maybelong* shape, maybelong* strides, NumarrayType type, void* buffer, maybelong byteoffset, int byteorder, int aligned, int writeable) ) libnumarray_API[ 19 ]) : (*(PyArrayObject* (*) (int ndim, maybelong* shape, maybelong* strides, NumarrayType type, void* buffer, maybelong byteoffset, int byteorder, int aligned, int writeable) ) libnumarray_FatalApiError)) - -#define NA_New (libnumarray_API ? (*(PyArrayObject* (*) (void* buffer, NumarrayType type, int ndim,...) ) libnumarray_API[ 20 ]) : (*(PyArrayObject* (*) (void* buffer, NumarrayType type, int ndim,...) ) libnumarray_FatalApiError)) - -#define NA_Empty (libnumarray_API ? (*(PyArrayObject* (*) (int ndim, maybelong* shape, NumarrayType type) ) libnumarray_API[ 21 ]) : (*(PyArrayObject* (*) (int ndim, maybelong* shape, NumarrayType type) ) libnumarray_FatalApiError)) - -#define NA_NewArray (libnumarray_API ? 
(*(PyArrayObject* (*) (void* buffer, NumarrayType type, int ndim, ...) ) libnumarray_API[ 22 ]) : (*(PyArrayObject* (*) (void* buffer, NumarrayType type, int ndim, ...) ) libnumarray_FatalApiError)) - -#define NA_vNewArray (libnumarray_API ? (*(PyArrayObject* (*) (void* buffer, NumarrayType type, int ndim, maybelong *shape) ) libnumarray_API[ 23 ]) : (*(PyArrayObject* (*) (void* buffer, NumarrayType type, int ndim, maybelong *shape) ) libnumarray_FatalApiError)) - -#define NA_ReturnOutput (libnumarray_API ? (*(PyObject* (*) (PyObject*,PyArrayObject*) ) libnumarray_API[ 24 ]) : (*(PyObject* (*) (PyObject*,PyArrayObject*) ) libnumarray_FatalApiError)) - -#define NA_getBufferPtrAndSize (libnumarray_API ? (*(long (*) (PyObject*,int,void**) ) libnumarray_API[ 25 ]) : (*(long (*) (PyObject*,int,void**) ) libnumarray_FatalApiError)) - -#define NA_checkIo (libnumarray_API ? (*(int (*) (char*,int,int,int,int) ) libnumarray_API[ 26 ]) : (*(int (*) (char*,int,int,int,int) ) libnumarray_FatalApiError)) - -#define NA_checkOneCBuffer (libnumarray_API ? (*(int (*) (char*,long,void*,long,size_t) ) libnumarray_API[ 27 ]) : (*(int (*) (char*,long,void*,long,size_t) ) libnumarray_FatalApiError)) - -#define NA_checkNCBuffers (libnumarray_API ? (*(int (*) (char*,int,long,void**,long*,Int8*,Int8*) ) libnumarray_API[ 28 ]) : (*(int (*) (char*,int,long,void**,long*,Int8*,Int8*) ) libnumarray_FatalApiError)) - -#define NA_checkOneStriding (libnumarray_API ? (*(int (*) (char*,long,maybelong*,long,maybelong*,long,long,int) ) libnumarray_API[ 29 ]) : (*(int (*) (char*,long,maybelong*,long,maybelong*,long,long,int) ) libnumarray_FatalApiError)) - -#define NA_new_cfunc (libnumarray_API ? (*(PyObject* (*) (CfuncDescriptor*) ) libnumarray_API[ 30 ]) : (*(PyObject* (*) (CfuncDescriptor*) ) libnumarray_FatalApiError)) - -#define NA_add_cfunc (libnumarray_API ? 
(*(int (*) (PyObject*,char*,CfuncDescriptor*) ) libnumarray_API[ 31 ]) : (*(int (*) (PyObject*,char*,CfuncDescriptor*) ) libnumarray_FatalApiError)) - -#define NA_InputArray (libnumarray_API ? (*(PyArrayObject* (*) (PyObject*,NumarrayType,int) ) libnumarray_API[ 32 ]) : (*(PyArrayObject* (*) (PyObject*,NumarrayType,int) ) libnumarray_FatalApiError)) - -#define NA_OutputArray (libnumarray_API ? (*(PyArrayObject* (*) (PyObject*,NumarrayType,int) ) libnumarray_API[ 33 ]) : (*(PyArrayObject* (*) (PyObject*,NumarrayType,int) ) libnumarray_FatalApiError)) - -#define NA_IoArray (libnumarray_API ? (*(PyArrayObject* (*) (PyObject*,NumarrayType,int) ) libnumarray_API[ 34 ]) : (*(PyArrayObject* (*) (PyObject*,NumarrayType,int) ) libnumarray_FatalApiError)) - -#define NA_OptionalOutputArray (libnumarray_API ? (*(PyArrayObject* (*) (PyObject*,NumarrayType,int,PyArrayObject*) ) libnumarray_API[ 35 ]) : (*(PyArrayObject* (*) (PyObject*,NumarrayType,int,PyArrayObject*) ) libnumarray_FatalApiError)) - -#define NA_get_offset (libnumarray_API ? (*(long (*) (PyArrayObject*,int,...) ) libnumarray_API[ 36 ]) : (*(long (*) (PyArrayObject*,int,...) ) libnumarray_FatalApiError)) - -#define NA_get_Float64 (libnumarray_API ? (*(Float64 (*) (PyArrayObject*,long) ) libnumarray_API[ 37 ]) : (*(Float64 (*) (PyArrayObject*,long) ) libnumarray_FatalApiError)) - -#define NA_set_Float64 (libnumarray_API ? (*(void (*) (PyArrayObject*,long,Float64) ) libnumarray_API[ 38 ]) : (*(void (*) (PyArrayObject*,long,Float64) ) libnumarray_FatalApiError)) - -#define NA_get_Complex64 (libnumarray_API ? (*(Complex64 (*) (PyArrayObject*,long) ) libnumarray_API[ 39 ]) : (*(Complex64 (*) (PyArrayObject*,long) ) libnumarray_FatalApiError)) - -#define NA_set_Complex64 (libnumarray_API ? (*(void (*) (PyArrayObject*,long,Complex64) ) libnumarray_API[ 40 ]) : (*(void (*) (PyArrayObject*,long,Complex64) ) libnumarray_FatalApiError)) - -#define NA_get_Int64 (libnumarray_API ? 
(*(Int64 (*) (PyArrayObject*,long) ) libnumarray_API[ 41 ]) : (*(Int64 (*) (PyArrayObject*,long) ) libnumarray_FatalApiError)) - -#define NA_set_Int64 (libnumarray_API ? (*(void (*) (PyArrayObject*,long,Int64) ) libnumarray_API[ 42 ]) : (*(void (*) (PyArrayObject*,long,Int64) ) libnumarray_FatalApiError)) - -#define NA_get1_Float64 (libnumarray_API ? (*(Float64 (*) (PyArrayObject*,long) ) libnumarray_API[ 43 ]) : (*(Float64 (*) (PyArrayObject*,long) ) libnumarray_FatalApiError)) - -#define NA_get2_Float64 (libnumarray_API ? (*(Float64 (*) (PyArrayObject*,long,long) ) libnumarray_API[ 44 ]) : (*(Float64 (*) (PyArrayObject*,long,long) ) libnumarray_FatalApiError)) - -#define NA_get3_Float64 (libnumarray_API ? (*(Float64 (*) (PyArrayObject*,long,long,long) ) libnumarray_API[ 45 ]) : (*(Float64 (*) (PyArrayObject*,long,long,long) ) libnumarray_FatalApiError)) - -#define NA_set1_Float64 (libnumarray_API ? (*(void (*) (PyArrayObject*,long,Float64) ) libnumarray_API[ 46 ]) : (*(void (*) (PyArrayObject*,long,Float64) ) libnumarray_FatalApiError)) - -#define NA_set2_Float64 (libnumarray_API ? (*(void (*) (PyArrayObject*,long,long,Float64) ) libnumarray_API[ 47 ]) : (*(void (*) (PyArrayObject*,long,long,Float64) ) libnumarray_FatalApiError)) - -#define NA_set3_Float64 (libnumarray_API ? (*(void (*) (PyArrayObject*,long,long,long,Float64) ) libnumarray_API[ 48 ]) : (*(void (*) (PyArrayObject*,long,long,long,Float64) ) libnumarray_FatalApiError)) - -#define NA_get1_Complex64 (libnumarray_API ? (*(Complex64 (*) (PyArrayObject*,long) ) libnumarray_API[ 49 ]) : (*(Complex64 (*) (PyArrayObject*,long) ) libnumarray_FatalApiError)) - -#define NA_get2_Complex64 (libnumarray_API ? (*(Complex64 (*) (PyArrayObject*,long,long) ) libnumarray_API[ 50 ]) : (*(Complex64 (*) (PyArrayObject*,long,long) ) libnumarray_FatalApiError)) - -#define NA_get3_Complex64 (libnumarray_API ? 
(*(Complex64 (*) (PyArrayObject*,long,long,long) ) libnumarray_API[ 51 ]) : (*(Complex64 (*) (PyArrayObject*,long,long,long) ) libnumarray_FatalApiError)) - -#define NA_set1_Complex64 (libnumarray_API ? (*(void (*) (PyArrayObject*,long,Complex64) ) libnumarray_API[ 52 ]) : (*(void (*) (PyArrayObject*,long,Complex64) ) libnumarray_FatalApiError)) - -#define NA_set2_Complex64 (libnumarray_API ? (*(void (*) (PyArrayObject*,long,long,Complex64) ) libnumarray_API[ 53 ]) : (*(void (*) (PyArrayObject*,long,long,Complex64) ) libnumarray_FatalApiError)) - -#define NA_set3_Complex64 (libnumarray_API ? (*(void (*) (PyArrayObject*,long,long,long,Complex64) ) libnumarray_API[ 54 ]) : (*(void (*) (PyArrayObject*,long,long,long,Complex64) ) libnumarray_FatalApiError)) - -#define NA_get1_Int64 (libnumarray_API ? (*(Int64 (*) (PyArrayObject*,long) ) libnumarray_API[ 55 ]) : (*(Int64 (*) (PyArrayObject*,long) ) libnumarray_FatalApiError)) - -#define NA_get2_Int64 (libnumarray_API ? (*(Int64 (*) (PyArrayObject*,long,long) ) libnumarray_API[ 56 ]) : (*(Int64 (*) (PyArrayObject*,long,long) ) libnumarray_FatalApiError)) - -#define NA_get3_Int64 (libnumarray_API ? (*(Int64 (*) (PyArrayObject*,long,long,long) ) libnumarray_API[ 57 ]) : (*(Int64 (*) (PyArrayObject*,long,long,long) ) libnumarray_FatalApiError)) - -#define NA_set1_Int64 (libnumarray_API ? (*(void (*) (PyArrayObject*,long,Int64) ) libnumarray_API[ 58 ]) : (*(void (*) (PyArrayObject*,long,Int64) ) libnumarray_FatalApiError)) - -#define NA_set2_Int64 (libnumarray_API ? (*(void (*) (PyArrayObject*,long,long,Int64) ) libnumarray_API[ 59 ]) : (*(void (*) (PyArrayObject*,long,long,Int64) ) libnumarray_FatalApiError)) - -#define NA_set3_Int64 (libnumarray_API ? (*(void (*) (PyArrayObject*,long,long,long,Int64) ) libnumarray_API[ 60 ]) : (*(void (*) (PyArrayObject*,long,long,long,Int64) ) libnumarray_FatalApiError)) - -#define NA_get1D_Float64 (libnumarray_API ? 
(*(int (*) (PyArrayObject*,long,int,Float64*) ) libnumarray_API[ 61 ]) : (*(int (*) (PyArrayObject*,long,int,Float64*) ) libnumarray_FatalApiError)) - -#define NA_set1D_Float64 (libnumarray_API ? (*(int (*) (PyArrayObject*,long,int,Float64*) ) libnumarray_API[ 62 ]) : (*(int (*) (PyArrayObject*,long,int,Float64*) ) libnumarray_FatalApiError)) - -#define NA_get1D_Int64 (libnumarray_API ? (*(int (*) (PyArrayObject*,long,int,Int64*) ) libnumarray_API[ 63 ]) : (*(int (*) (PyArrayObject*,long,int,Int64*) ) libnumarray_FatalApiError)) - -#define NA_set1D_Int64 (libnumarray_API ? (*(int (*) (PyArrayObject*,long,int,Int64*) ) libnumarray_API[ 64 ]) : (*(int (*) (PyArrayObject*,long,int,Int64*) ) libnumarray_FatalApiError)) - -#define NA_get1D_Complex64 (libnumarray_API ? (*(int (*) (PyArrayObject*,long,int,Complex64*) ) libnumarray_API[ 65 ]) : (*(int (*) (PyArrayObject*,long,int,Complex64*) ) libnumarray_FatalApiError)) - -#define NA_set1D_Complex64 (libnumarray_API ? (*(int (*) (PyArrayObject*,long,int,Complex64*) ) libnumarray_API[ 66 ]) : (*(int (*) (PyArrayObject*,long,int,Complex64*) ) libnumarray_FatalApiError)) - -#define NA_ShapeEqual (libnumarray_API ? (*(int (*) (PyArrayObject*,PyArrayObject*) ) libnumarray_API[ 67 ]) : (*(int (*) (PyArrayObject*,PyArrayObject*) ) libnumarray_FatalApiError)) - -#define NA_ShapeLessThan (libnumarray_API ? (*(int (*) (PyArrayObject*,PyArrayObject*) ) libnumarray_API[ 68 ]) : (*(int (*) (PyArrayObject*,PyArrayObject*) ) libnumarray_FatalApiError)) - -#define NA_ByteOrder (libnumarray_API ? (*(int (*) (void) ) libnumarray_API[ 69 ]) : (*(int (*) (void) ) libnumarray_FatalApiError)) - -#define NA_IeeeSpecial32 (libnumarray_API ? (*(Bool (*) (Float32*,Int32*) ) libnumarray_API[ 70 ]) : (*(Bool (*) (Float32*,Int32*) ) libnumarray_FatalApiError)) - -#define NA_IeeeSpecial64 (libnumarray_API ? 
(*(Bool (*) (Float64*,Int32*) ) libnumarray_API[ 71 ]) : (*(Bool (*) (Float64*,Int32*) ) libnumarray_FatalApiError)) - -#define NA_updateDataPtr (libnumarray_API ? (*(PyArrayObject* (*) (PyArrayObject*) ) libnumarray_API[ 72 ]) : (*(PyArrayObject* (*) (PyArrayObject*) ) libnumarray_FatalApiError)) - -#define NA_typeNoToName (libnumarray_API ? (*(char* (*) (int) ) libnumarray_API[ 73 ]) : (*(char* (*) (int) ) libnumarray_FatalApiError)) - -#define NA_nameToTypeNo (libnumarray_API ? (*(int (*) (char*) ) libnumarray_API[ 74 ]) : (*(int (*) (char*) ) libnumarray_FatalApiError)) - -#define NA_typeNoToTypeObject (libnumarray_API ? (*(PyObject* (*) (int) ) libnumarray_API[ 75 ]) : (*(PyObject* (*) (int) ) libnumarray_FatalApiError)) - -#define NA_intTupleFromMaybeLongs (libnumarray_API ? (*(PyObject* (*) (int,maybelong*) ) libnumarray_API[ 76 ]) : (*(PyObject* (*) (int,maybelong*) ) libnumarray_FatalApiError)) - -#define NA_maybeLongsFromIntTuple (libnumarray_API ? (*(long (*) (int,maybelong*,PyObject*) ) libnumarray_API[ 77 ]) : (*(long (*) (int,maybelong*,PyObject*) ) libnumarray_FatalApiError)) - -#define NA_intTupleProduct (libnumarray_API ? (*(int (*) (PyObject *obj, long *product) ) libnumarray_API[ 78 ]) : (*(int (*) (PyObject *obj, long *product) ) libnumarray_FatalApiError)) - -#define NA_isIntegerSequence (libnumarray_API ? (*(long (*) (PyObject*) ) libnumarray_API[ 79 ]) : (*(long (*) (PyObject*) ) libnumarray_FatalApiError)) - -#define NA_setArrayFromSequence (libnumarray_API ? (*(PyObject* (*) (PyArrayObject*,PyObject*) ) libnumarray_API[ 80 ]) : (*(PyObject* (*) (PyArrayObject*,PyObject*) ) libnumarray_FatalApiError)) - -#define NA_maxType (libnumarray_API ? (*(int (*) (PyObject*) ) libnumarray_API[ 81 ]) : (*(int (*) (PyObject*) ) libnumarray_FatalApiError)) - -#define NA_isPythonScalar (libnumarray_API ? 
(*(int (*) (PyObject *obj) ) libnumarray_API[ 82 ]) : (*(int (*) (PyObject *obj) ) libnumarray_FatalApiError)) - -#define NA_getPythonScalar (libnumarray_API ? (*(PyObject* (*) (PyArrayObject*,long) ) libnumarray_API[ 83 ]) : (*(PyObject* (*) (PyArrayObject*,long) ) libnumarray_FatalApiError)) - -#define NA_setFromPythonScalar (libnumarray_API ? (*(int (*) (PyArrayObject*,long,PyObject*) ) libnumarray_API[ 84 ]) : (*(int (*) (PyArrayObject*,long,PyObject*) ) libnumarray_FatalApiError)) - -#define NA_NDArrayCheck (libnumarray_API ? (*(int (*) (PyObject*) ) libnumarray_API[ 85 ]) : (*(int (*) (PyObject*) ) libnumarray_FatalApiError)) - -#define NA_NumArrayCheck (libnumarray_API ? (*(int (*) (PyObject*) ) libnumarray_API[ 86 ]) : (*(int (*) (PyObject*) ) libnumarray_FatalApiError)) - -#define NA_ComplexArrayCheck (libnumarray_API ? (*(int (*) (PyObject*) ) libnumarray_API[ 87 ]) : (*(int (*) (PyObject*) ) libnumarray_FatalApiError)) - -#define NA_elements (libnumarray_API ? (*(unsigned long (*) (PyArrayObject*) ) libnumarray_API[ 88 ]) : (*(unsigned long (*) (PyArrayObject*) ) libnumarray_FatalApiError)) - -#define NA_typeObjectToTypeNo (libnumarray_API ? (*(int (*) (PyObject*) ) libnumarray_API[ 89 ]) : (*(int (*) (PyObject*) ) libnumarray_FatalApiError)) - -#define NA_copyArray (libnumarray_API ? (*(int (*) (PyArrayObject* to, const PyArrayObject* from) ) libnumarray_API[ 90 ]) : (*(int (*) (PyArrayObject* to, const PyArrayObject* from) ) libnumarray_FatalApiError)) - -#define NA_copy (libnumarray_API ? (*(PyArrayObject* (*) (PyArrayObject*) ) libnumarray_API[ 91 ]) : (*(PyArrayObject* (*) (PyArrayObject*) ) libnumarray_FatalApiError)) - -#define NA_getType (libnumarray_API ? (*(PyObject* (*) (PyObject *typeobj_or_name) ) libnumarray_API[ 92 ]) : (*(PyObject* (*) (PyObject *typeobj_or_name) ) libnumarray_FatalApiError)) - -#define NA_callCUFuncCore (libnumarray_API ? 
(*(PyObject * (*) (PyObject *cfunc, long niter, long ninargs, long noutargs, PyObject **BufferObj, long *offset) ) libnumarray_API[ 93 ]) : (*(PyObject * (*) (PyObject *cfunc, long niter, long ninargs, long noutargs, PyObject **BufferObj, long *offset) ) libnumarray_FatalApiError)) - -#define NA_callStrideConvCFuncCore (libnumarray_API ? (*(PyObject * (*) (PyObject *cfunc, int nshape, maybelong *shape, PyObject *inbuffObj, long inboffset, int nstrides0, maybelong *inbstrides, PyObject *outbuffObj, long outboffset, int nstrides1, maybelong *outbstrides, long nbytes) ) libnumarray_API[ 94 ]) : (*(PyObject * (*) (PyObject *cfunc, int nshape, maybelong *shape, PyObject *inbuffObj, long inboffset, int nstrides0, maybelong *inbstrides, PyObject *outbuffObj, long outboffset, int nstrides1, maybelong *outbstrides, long nbytes) ) libnumarray_FatalApiError)) - -#define NA_stridesFromShape (libnumarray_API ? (*(void (*) (int nshape, maybelong *shape, maybelong bytestride, maybelong *strides) ) libnumarray_API[ 95 ]) : (*(void (*) (int nshape, maybelong *shape, maybelong bytestride, maybelong *strides) ) libnumarray_FatalApiError)) - -#define NA_OperatorCheck (libnumarray_API ? (*(int (*) (PyObject *obj) ) libnumarray_API[ 96 ]) : (*(int (*) (PyObject *obj) ) libnumarray_FatalApiError)) - -#define NA_ConverterCheck (libnumarray_API ? (*(int (*) (PyObject *obj) ) libnumarray_API[ 97 ]) : (*(int (*) (PyObject *obj) ) libnumarray_FatalApiError)) - -#define NA_UfuncCheck (libnumarray_API ? (*(int (*) (PyObject *obj) ) libnumarray_API[ 98 ]) : (*(int (*) (PyObject *obj) ) libnumarray_FatalApiError)) - -#define NA_CfuncCheck (libnumarray_API ? (*(int (*) (PyObject *obj) ) libnumarray_API[ 99 ]) : (*(int (*) (PyObject *obj) ) libnumarray_FatalApiError)) - -#define NA_getByteOffset (libnumarray_API ? 
(*(int (*) (PyArrayObject *array, int nindices, maybelong *indices, long *offset) ) libnumarray_API[ 100 ]) : (*(int (*) (PyArrayObject *array, int nindices, maybelong *indices, long *offset) ) libnumarray_FatalApiError)) - -#define NA_swapAxes (libnumarray_API ? (*(int (*) (PyArrayObject *array, int x, int y) ) libnumarray_API[ 101 ]) : (*(int (*) (PyArrayObject *array, int x, int y) ) libnumarray_FatalApiError)) - -#define NA_initModuleGlobal (libnumarray_API ? (*(PyObject * (*) (char *module, char *global) ) libnumarray_API[ 102 ]) : (*(PyObject * (*) (char *module, char *global) ) libnumarray_FatalApiError)) - -#define NA_NumarrayType (libnumarray_API ? (*(NumarrayType (*) (PyObject *seq) ) libnumarray_API[ 103 ]) : (*(NumarrayType (*) (PyObject *seq) ) libnumarray_FatalApiError)) - -#define NA_NewAllFromBuffer (libnumarray_API ? (*(PyArrayObject * (*) (int ndim, maybelong *shape, NumarrayType type, PyObject *bufferObject, maybelong byteoffset, maybelong bytestride, int byteorder, int aligned, int writeable) ) libnumarray_API[ 104 ]) : (*(PyArrayObject * (*) (int ndim, maybelong *shape, NumarrayType type, PyObject *bufferObject, maybelong byteoffset, maybelong bytestride, int byteorder, int aligned, int writeable) ) libnumarray_FatalApiError)) - -#define NA_alloc1D_Float64 (libnumarray_API ? (*(Float64 * (*) (PyArrayObject *a, long offset, int cnt) ) libnumarray_API[ 105 ]) : (*(Float64 * (*) (PyArrayObject *a, long offset, int cnt) ) libnumarray_FatalApiError)) - -#define NA_alloc1D_Int64 (libnumarray_API ? (*(Int64 * (*) (PyArrayObject *a, long offset, int cnt) ) libnumarray_API[ 106 ]) : (*(Int64 * (*) (PyArrayObject *a, long offset, int cnt) ) libnumarray_FatalApiError)) - -#define NA_updateAlignment (libnumarray_API ? (*(void (*) (PyArrayObject *self) ) libnumarray_API[ 107 ]) : (*(void (*) (PyArrayObject *self) ) libnumarray_FatalApiError)) - -#define NA_updateContiguous (libnumarray_API ? 
(*(void (*) (PyArrayObject *self) ) libnumarray_API[ 108 ]) : (*(void (*) (PyArrayObject *self) ) libnumarray_FatalApiError)) - -#define NA_updateStatus (libnumarray_API ? (*(void (*) (PyArrayObject *self) ) libnumarray_API[ 109 ]) : (*(void (*) (PyArrayObject *self) ) libnumarray_FatalApiError)) - -#define NA_NumArrayCheckExact (libnumarray_API ? (*(int (*) (PyObject *op) ) libnumarray_API[ 110 ]) : (*(int (*) (PyObject *op) ) libnumarray_FatalApiError)) - -#define NA_NDArrayCheckExact (libnumarray_API ? (*(int (*) (PyObject *op) ) libnumarray_API[ 111 ]) : (*(int (*) (PyObject *op) ) libnumarray_FatalApiError)) - -#define NA_OperatorCheckExact (libnumarray_API ? (*(int (*) (PyObject *op) ) libnumarray_API[ 112 ]) : (*(int (*) (PyObject *op) ) libnumarray_FatalApiError)) - -#define NA_ConverterCheckExact (libnumarray_API ? (*(int (*) (PyObject *op) ) libnumarray_API[ 113 ]) : (*(int (*) (PyObject *op) ) libnumarray_FatalApiError)) - -#define NA_UfuncCheckExact (libnumarray_API ? (*(int (*) (PyObject *op) ) libnumarray_API[ 114 ]) : (*(int (*) (PyObject *op) ) libnumarray_FatalApiError)) - -#define NA_CfuncCheckExact (libnumarray_API ? (*(int (*) (PyObject *op) ) libnumarray_API[ 115 ]) : (*(int (*) (PyObject *op) ) libnumarray_FatalApiError)) - -#define NA_getArrayData (libnumarray_API ? (*(char * (*) (PyArrayObject *ap) ) libnumarray_API[ 116 ]) : (*(char * (*) (PyArrayObject *ap) ) libnumarray_FatalApiError)) - -#define NA_updateByteswap (libnumarray_API ? (*(void (*) (PyArrayObject *ap) ) libnumarray_API[ 117 ]) : (*(void (*) (PyArrayObject *ap) ) libnumarray_FatalApiError)) - -#define NA_DescrFromType (libnumarray_API ? (*(PyArray_Descr * (*) (int type) ) libnumarray_API[ 118 ]) : (*(PyArray_Descr * (*) (int type) ) libnumarray_FatalApiError)) - -#define NA_Cast (libnumarray_API ? 
(*(PyObject * (*) (PyArrayObject *a, int type) ) libnumarray_API[ 119 ]) : (*(PyObject * (*) (PyArrayObject *a, int type) ) libnumarray_FatalApiError)) - -#define NA_checkFPErrors (libnumarray_API ? (*(int (*) (void) ) libnumarray_API[ 120 ]) : (*(int (*) (void) ) libnumarray_FatalApiError)) - -#define NA_clearFPErrors (libnumarray_API ? (*(void (*) (void) ) libnumarray_API[ 121 ]) : (*(void (*) (void) ) libnumarray_FatalApiError)) - -#define NA_checkAndReportFPErrors (libnumarray_API ? (*(int (*) (char *name) ) libnumarray_API[ 122 ]) : (*(int (*) (char *name) ) libnumarray_FatalApiError)) - -#define NA_IeeeMask32 (libnumarray_API ? (*(Bool (*) (Float32,Int32) ) libnumarray_API[ 123 ]) : (*(Bool (*) (Float32,Int32) ) libnumarray_FatalApiError)) - -#define NA_IeeeMask64 (libnumarray_API ? (*(Bool (*) (Float64,Int32) ) libnumarray_API[ 124 ]) : (*(Bool (*) (Float64,Int32) ) libnumarray_FatalApiError)) - -#define _NA_callStridingHelper (libnumarray_API ? (*(int (*) (PyObject *aux, long dim, long nnumarray, PyArrayObject *numarray[], char *data[], CFUNC_STRIDED_FUNC f) ) libnumarray_API[ 125 ]) : (*(int (*) (PyObject *aux, long dim, long nnumarray, PyArrayObject *numarray[], char *data[], CFUNC_STRIDED_FUNC f) ) libnumarray_FatalApiError)) - -#define NA_FromDimsStridesDescrAndData (libnumarray_API ? (*(PyArrayObject * (*) (int nd, maybelong *dims, maybelong *strides, PyArray_Descr *descr, char *data) ) libnumarray_API[ 126 ]) : (*(PyArrayObject * (*) (int nd, maybelong *dims, maybelong *strides, PyArray_Descr *descr, char *data) ) libnumarray_FatalApiError)) - -#define NA_FromDimsTypeAndData (libnumarray_API ? (*(PyArrayObject * (*) (int nd, maybelong *dims, int type, char *data) ) libnumarray_API[ 127 ]) : (*(PyArrayObject * (*) (int nd, maybelong *dims, int type, char *data) ) libnumarray_FatalApiError)) - -#define NA_FromDimsStridesTypeAndData (libnumarray_API ? 
(*(PyArrayObject * (*) (int nd, maybelong *dims, maybelong *strides, int type, char *data) ) libnumarray_API[ 128 ]) : (*(PyArrayObject * (*) (int nd, maybelong *dims, maybelong *strides, int type, char *data) ) libnumarray_FatalApiError)) - -#define NA_scipy_typestr (libnumarray_API ? (*(int (*) (NumarrayType t, int byteorder, char *typestr) ) libnumarray_API[ 129 ]) : (*(int (*) (NumarrayType t, int byteorder, char *typestr) ) libnumarray_FatalApiError)) - -#define NA_FromArrayStruct (libnumarray_API ? (*(PyArrayObject * (*) (PyObject *a) ) libnumarray_API[ 130 ]) : (*(PyArrayObject * (*) (PyObject *a) ) libnumarray_FatalApiError)) - -#endif - - /* Total number of C API pointers */ -#define libnumarray_API_pointers 131 - -#ifdef __cplusplus -} -#endif - -#endif /* NUMPY_LIBNUMARRAY_H */ diff --git a/pythonPackages/numpy/numpy/numarray/include/numpy/numcomplex.h b/pythonPackages/numpy/numpy/numarray/include/numpy/numcomplex.h deleted file mode 100755 index 9ed4198c7e..0000000000 --- a/pythonPackages/numpy/numpy/numarray/include/numpy/numcomplex.h +++ /dev/null @@ -1,252 +0,0 @@ -/* See numarray.h for Complex32, Complex64: - -typedef struct { Float32 r, i; } Complex32; -typedef struct { Float64 r, i; } Complex64; - -*/ -typedef struct { Float32 a, theta; } PolarComplex32; -typedef struct { Float64 a, theta; } PolarComplex64; - -#define NUM_SQ(x) ((x)*(x)) - -#define NUM_CABSSQ(p) (NUM_SQ((p).r) + NUM_SQ((p).i)) - -#define NUM_CABS(p) sqrt(NUM_CABSSQ(p)) - -#define NUM_C_TO_P(c, p) (p).a = NUM_CABS(c); \ - (p).theta = atan2((c).i, (c).r); - -#define NUM_P_TO_C(p, c) (c).r = (p).a*cos((p).theta); \ - (c).i = (p).a*sin((p).theta); - -#define NUM_CASS(p, q) (q).r = (p).r, (q).i = (p).i - -#define NUM_CADD(p, q, s) (s).r = (p).r + (q).r, \ - (s).i = (p).i + (q).i - -#define NUM_CSUB(p, q, s) (s).r = (p).r - (q).r, \ - (s).i = (p).i - (q).i - -#define NUM_CMUL(p, q, s) \ - { Float64 rp = (p).r; \ - Float64 rq = (q).r; \ - (s).r = rp*rq - (p).i*(q).i; \ - (s).i = rp*(q).i 
+ rq*(p).i; \ - } - -#define NUM_CDIV(p, q, s) \ - { \ - Float64 rp = (p).r; \ - Float64 ip = (p).i; \ - Float64 rq = (q).r; \ - if ((q).i != 0) { \ - Float64 temp = NUM_CABSSQ(q); \ - (s).r = (rp*rq+(p).i*(q).i)/temp; \ - (s).i = (rq*(p).i-(q).i*rp)/temp; \ - } else { \ - (s).r = rp/rq; \ - (s).i = ip/rq; \ - } \ - } - -#define NUM_CREM(p, q, s) \ - { Complex64 r; \ - NUM_CDIV(p, q, r); \ - r.r = floor(r.r); \ - r.i = 0; \ - NUM_CMUL(r, q, r); \ - NUM_CSUB(p, r, s); \ - } - -#define NUM_CMINUS(p, s) (s).r = -(p).r; (s).i = -(p).i; -#define NUM_CNEG NUM_CMINUS - -#define NUM_CEQ(p, q) (((p).r == (q).r) && ((p).i == (q).i)) -#define NUM_CNE(p, q) (((p).r != (q).r) || ((p).i != (q).i)) -#define NUM_CLT(p, q) ((p).r < (q).r) -#define NUM_CGT(p, q) ((p).r > (q).r) -#define NUM_CLE(p, q) ((p).r <= (q).r) -#define NUM_CGE(p, q) ((p).r >= (q).r) - -/* e**z = e**x * (cos(y)+ i*sin(y)) where z = x + i*y - so e**z = e**x * cos(y) + i * e**x * sin(y) -*/ -#define NUM_CEXP(p, s) \ - { Float64 ex = exp((p).r); \ - (s).r = ex * cos((p).i); \ - (s).i = ex * sin((p).i); \ - } - -/* e**w = z; w = u + i*v; z = r * e**(i*theta); - -e**u * e**(i*v) = r * e**(i*theta); - -log(z) = w; log(z) = log(r) + i*theta; - */ -#define NUM_CLOG(p, s) \ - { PolarComplex64 temp; NUM_C_TO_P(p, temp); \ - (s).r = num_log(temp.a); \ - (s).i = temp.theta; \ - } - -#define NUM_LOG10_E 0.43429448190325182 - -#define NUM_CLOG10(p, s) \ - { NUM_CLOG(p, s); \ - (s).r *= NUM_LOG10_E; \ - (s).i *= NUM_LOG10_E; \ - } - -/* s = p ** q */ -#define NUM_CPOW(p, q, s) { if (NUM_CABSSQ(p) == 0) { \ - if ((q).r == 0 && (q).i == 0) { \ - (s).r = (s).i = 1; \ - } else { \ - (s).r = (s).i = 0; \ - } \ - } else { \ - NUM_CLOG(p, s); \ - NUM_CMUL(s, q, s); \ - NUM_CEXP(s, s); \ - } \ - } - -#define NUM_CSQRT(p, s) { Complex64 temp; temp.r = 0.5; temp.i=0; \ - NUM_CPOW(p, temp, s); \ - } - -#define NUM_CSQR(p, s) { Complex64 temp; temp.r = 2.0; temp.i=0; \ - NUM_CPOW(p, temp, s); \ - } - -#define NUM_CSIN(p, s) { Float64 sp 
= sin((p).r); \ - Float64 cp = cos((p).r); \ - (s).r = cosh((p).i) * sp; \ - (s).i = sinh((p).i) * cp; \ - } - -#define NUM_CCOS(p, s) { Float64 sp = sin((p).r); \ - Float64 cp = cos((p).r); \ - (s).r = cosh((p).i) * cp; \ - (s).i = -sinh((p).i) * sp; \ - } - -#define NUM_CTAN(p, s) { Complex64 ss, cs; \ - NUM_CSIN(p, ss); \ - NUM_CCOS(p, cs); \ - NUM_CDIV(ss, cs, s); \ - } - -#define NUM_CSINH(p, s) { Float64 sp = sin((p).i); \ - Float64 cp = cos((p).i); \ - (s).r = sinh((p).r) * cp; \ - (s).i = cosh((p).r) * sp; \ - } - -#define NUM_CCOSH(p, s) { Float64 sp = sin((p).i); \ - Float64 cp = cos((p).i); \ - (s).r = cosh((p).r) * cp; \ - (s).i = sinh((p).r) * sp; \ - } - -#define NUM_CTANH(p, s) { Complex64 ss, cs; \ - NUM_CSINH(p, ss); \ - NUM_CCOSH(p, cs); \ - NUM_CDIV(ss, cs, s); \ - } - -#define NUM_CRPOW(p, v, s) { Complex64 cr; cr.r = v; cr.i = 0; \ - NUM_CPOW(p,cr,s); \ - } - -#define NUM_CRMUL(p, v, s) (s).r = (p).r * v; (s).i = (p).i * v; - -#define NUM_CIMUL(p, s) { Float64 temp = (s).r; \ - (s).r = -(p).i; (s).i = temp; \ - } - -/* asin(z) = -i * log(i*z + (1 - z**2)**0.5) */ -#define NUM_CASIN(p, s) { Complex64 p1; NUM_CASS(p, p1); \ - NUM_CIMUL(p, p1); \ - NUM_CMUL(p, p, s); \ - NUM_CNEG(s, s); \ - (s).r += 1; \ - NUM_CRPOW(s, 0.5, s); \ - NUM_CADD(p1, s, s); \ - NUM_CLOG(s, s); \ - NUM_CIMUL(s, s); \ - NUM_CNEG(s, s); \ - } - -/* acos(z) = -i * log(z + i*(1 - z**2)**0.5) */ -#define NUM_CACOS(p, s) { Complex64 p1; NUM_CASS(p, p1); \ - NUM_CMUL(p, p, s); \ - NUM_CNEG(s, s); \ - (s).r += 1; \ - NUM_CRPOW(s, 0.5, s); \ - NUM_CIMUL(s, s); \ - NUM_CADD(p1, s, s); \ - NUM_CLOG(s, s); \ - NUM_CIMUL(s, s); \ - NUM_CNEG(s, s); \ - } - -/* atan(z) = i/2 * log( (i+z) / (i - z) ) */ -#define NUM_CATAN(p, s) { Complex64 p1, p2; \ - NUM_CASS(p, p1); NUM_CNEG(p, p2); \ - p1.i += 1; \ - p2.i += 1; \ - NUM_CDIV(p1, p2, s); \ - NUM_CLOG(s, s); \ - NUM_CIMUL(s, s); \ - NUM_CRMUL(s, 0.5, s); \ - } - -/* asinh(z) = log( z + (z**2 + 1)**0.5 ) */ -#define NUM_CASINH(p, s) { 
Complex64 p1; NUM_CASS(p, p1); \ - NUM_CMUL(p, p, s); \ - (s).r += 1; \ - NUM_CRPOW(s, 0.5, s); \ - NUM_CADD(p1, s, s); \ - NUM_CLOG(s, s); \ - } - -/* acosh(z) = log( z + (z**2 - 1)**0.5 ) */ -#define NUM_CACOSH(p, s) { Complex64 p1; NUM_CASS(p, p1); \ - NUM_CMUL(p, p, s); \ - (s).r -= 1; \ - NUM_CRPOW(s, 0.5, s); \ - NUM_CADD(p1, s, s); \ - NUM_CLOG(s, s); \ - } - -/* atanh(z) = 1/2 * log( (1+z)/(1-z) ) */ -#define NUM_CATANH(p, s) { Complex64 p1, p2; \ - NUM_CASS(p, p1); NUM_CNEG(p, p2); \ - p1.r += 1; \ - p2.r += 1; \ - NUM_CDIV(p1, p2, s); \ - NUM_CLOG(s, s); \ - NUM_CRMUL(s, 0.5, s); \ - } - - -#define NUM_CMIN(p, q) (NUM_CLE(p, q) ? p : q) -#define NUM_CMAX(p, q) (NUM_CGE(p, q) ? p : q) - -#define NUM_CNZ(p) (((p).r != 0) || ((p).i != 0)) -#define NUM_CLAND(p, q) (NUM_CNZ(p) & NUM_CNZ(q)) -#define NUM_CLOR(p, q) (NUM_CNZ(p) | NUM_CNZ(q)) -#define NUM_CLXOR(p, q) (NUM_CNZ(p) ^ NUM_CNZ(q)) -#define NUM_CLNOT(p) (!NUM_CNZ(p)) - -#define NUM_CFLOOR(p, s) (s).r = floor((p).r); (s).i = floor((p).i); -#define NUM_CCEIL(p, s) (s).r = ceil((p).r); (s).i = ceil((p).i); - -#define NUM_CFABS(p, s) (s).r = fabs((p).r); (s).i = fabs((p).i); -#define NUM_CROUND(p, s) (s).r = num_round((p).r); (s).i = num_round((p).i); -#define NUM_CHYPOT(p, q, s) { Complex64 t; \ - NUM_CSQR(p, s); NUM_CSQR(q, t); \ - NUM_CADD(s, t, s); \ - NUM_CSQRT(s, s); \ - } diff --git a/pythonPackages/numpy/numpy/numarray/include/numpy/nummacro.h b/pythonPackages/numpy/numpy/numarray/include/numpy/nummacro.h deleted file mode 100755 index e9acd6e31c..0000000000 --- a/pythonPackages/numpy/numpy/numarray/include/numpy/nummacro.h +++ /dev/null @@ -1,447 +0,0 @@ -/* Primarily for compatibility with numarray C-API */ - -#if !defined(_ndarraymacro) -#define _ndarraymacro - -/* The structs defined here are private implementation details of numarray -which are subject to change w/o notice. 
-*/ - -#define PY_BOOL_CHAR "b" -#define PY_INT8_CHAR "b" -#define PY_INT16_CHAR "h" -#define PY_INT32_CHAR "i" -#define PY_FLOAT32_CHAR "f" -#define PY_FLOAT64_CHAR "d" -#define PY_UINT8_CHAR "h" -#define PY_UINT16_CHAR "i" -#define PY_UINT32_CHAR "i" /* Unless longer int available */ -#define PY_COMPLEX64_CHAR "D" -#define PY_COMPLEX128_CHAR "D" - -#define PY_LONG_CHAR "l" -#define PY_LONG_LONG_CHAR "L" - -#define pyFPE_DIVIDE_BY_ZERO 1 -#define pyFPE_OVERFLOW 2 -#define pyFPE_UNDERFLOW 4 -#define pyFPE_INVALID 8 - -#define isNonZERO(x) (x != 0) /* to convert values to boolean 1's or 0's */ - -typedef enum -{ - NUM_CONTIGUOUS=1, - NUM_NOTSWAPPED=0x0200, - NUM_ALIGNED=0x0100, - NUM_WRITABLE=0x0400, - NUM_COPY=0x0020, - - NUM_C_ARRAY = (NUM_CONTIGUOUS | NUM_ALIGNED | NUM_NOTSWAPPED), - NUM_UNCONVERTED = 0 -} NumRequirements; - -#define UNCONVERTED 0 -#define C_ARRAY (NUM_CONTIGUOUS | NUM_NOTSWAPPED | NUM_ALIGNED) - -#define MUST_BE_COMPUTED 2 - -#define NUM_FLOORDIVIDE(a,b,out) (out) = floor((a)/(b)) - -#define NA_Begin() Py_Initialize(); import_libnumarray(); -#define NA_End() NA_Done(); Py_Finalize(); - -#define NA_OFFSETDATA(num) ((void *) PyArray_DATA(num)) - -/* unaligned NA_COPY functions */ -#define NA_COPY1(i, o) (*(o) = *(i)) -#define NA_COPY2(i, o) NA_COPY1(i, o), NA_COPY1(i+1, o+1) -#define NA_COPY4(i, o) NA_COPY2(i, o), NA_COPY2(i+2, o+2) -#define NA_COPY8(i, o) NA_COPY4(i, o), NA_COPY4(i+4, o+4) -#define NA_COPY16(i, o) NA_COPY8(i, o), NA_COPY8(i+8, o+8) - -/* byteswapping macros: these fail if i==o */ -#define NA_SWAP1(i, o) NA_COPY1(i, o) -#define NA_SWAP2(i, o) NA_SWAP1(i, o+1), NA_SWAP1(i+1, o) -#define NA_SWAP4(i, o) NA_SWAP2(i, o+2), NA_SWAP2(i+2, o) -#define NA_SWAP8(i, o) NA_SWAP4(i, o+4), NA_SWAP4(i+4, o) -#define NA_SWAP16(i, o) NA_SWAP8(i, o+8), NA_SWAP8(i+8, o) - -/* complex byteswaps must swap each part (real, imag) independently */ -#define NA_COMPLEX_SWAP8(i, o) NA_SWAP4(i, o), NA_SWAP4(i+4, o+4) -#define NA_COMPLEX_SWAP16(i, o) 
NA_SWAP8(i, o), NA_SWAP8(i+8, o+8) - -/* byteswapping macros: these work even if i == o */ -#define NA_TSWAP1(i, o, t) NA_COPY1(i, t), NA_SWAP1(t, o) -#define NA_TSWAP2(i, o, t) NA_COPY2(i, t), NA_SWAP2(t, o) -#define NA_TSWAP4(i, o, t) NA_COPY4(i, t), NA_SWAP4(t, o) -#define NA_TSWAP8(i, o, t) NA_COPY8(i, t), NA_SWAP8(t, o) - -/* fast copy functions for %N aligned i and o */ -#define NA_ACOPY1(i, o) (((Int8 *)o)[0] = ((Int8 *)i)[0]) -#define NA_ACOPY2(i, o) (((Int16 *)o)[0] = ((Int16 *)i)[0]) -#define NA_ACOPY4(i, o) (((Int32 *)o)[0] = ((Int32 *)i)[0]) -#define NA_ACOPY8(i, o) (((Float64 *)o)[0] = ((Float64 *)i)[0]) -#define NA_ACOPY16(i, o) (((Complex64 *)o)[0] = ((Complex64 *)i)[0]) - -/* from here down, type("ai") is NDInfo* */ - -#define NA_PTR(ai) ((char *) NA_OFFSETDATA((ai))) -#define NA_PTR1(ai, i) (NA_PTR(ai) + \ - (i)*(ai)->strides[0]) -#define NA_PTR2(ai, i, j) (NA_PTR(ai) + \ - (i)*(ai)->strides[0] + \ - (j)*(ai)->strides[1]) -#define NA_PTR3(ai, i, j, k) (NA_PTR(ai) + \ - (i)*(ai)->strides[0] + \ - (j)*(ai)->strides[1] + \ - (k)*(ai)->strides[2]) - -#define NA_SET_TEMP(ai, type, v) (((type *) &__temp__)[0] = v) - -#define NA_SWAPComplex64 NA_COMPLEX_SWAP16 -#define NA_SWAPComplex32 NA_COMPLEX_SWAP8 -#define NA_SWAPFloat64 NA_SWAP8 -#define NA_SWAPFloat32 NA_SWAP4 -#define NA_SWAPInt64 NA_SWAP8 -#define NA_SWAPUInt64 NA_SWAP8 -#define NA_SWAPInt32 NA_SWAP4 -#define NA_SWAPUInt32 NA_SWAP4 -#define NA_SWAPInt16 NA_SWAP2 -#define NA_SWAPUInt16 NA_SWAP2 -#define NA_SWAPInt8 NA_SWAP1 -#define NA_SWAPUInt8 NA_SWAP1 -#define NA_SWAPBool NA_SWAP1 - -#define NA_COPYComplex64 NA_COPY16 -#define NA_COPYComplex32 NA_COPY8 -#define NA_COPYFloat64 NA_COPY8 -#define NA_COPYFloat32 NA_COPY4 -#define NA_COPYInt64 NA_COPY8 -#define NA_COPYUInt64 NA_COPY8 -#define NA_COPYInt32 NA_COPY4 -#define NA_COPYUInt32 NA_COPY4 -#define NA_COPYInt16 NA_COPY2 -#define NA_COPYUInt16 NA_COPY2 -#define NA_COPYInt8 NA_COPY1 -#define NA_COPYUInt8 NA_COPY1 -#define NA_COPYBool NA_COPY1 - 
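The swap tables above reduce every element type to a byte-reversal of the appropriate width, with complex types swapping their real and imaginary halves independently (a 16-byte `Complex64` is two `Float64`s, not one 16-byte integer). A minimal Python sketch of that behavior, using `struct` to produce known big- and little-endian byte patterns (function names here are illustrative, not from the header):

```python
import struct

def swap8(data: bytes) -> bytes:
    # Equivalent of NA_SWAP8: reverse the byte order of one 8-byte element.
    return data[::-1]

def complex_swap16(data: bytes) -> bytes:
    # Equivalent of NA_COMPLEX_SWAP16: byteswap the real and imaginary
    # Float64 halves independently, preserving their order.
    return data[:8][::-1] + data[8:][::-1]

# Byteswapping one Float64 converts big-endian to little-endian.
assert swap8(struct.pack(">d", 1.5)) == struct.pack("<d", 1.5)

# For a complex value, a whole-buffer reversal would exchange the real and
# imaginary parts; swapping each half independently does not.
assert complex_swap16(struct.pack(">dd", 1.5, -2.25)) == struct.pack("<dd", 1.5, -2.25)
```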
-#ifdef __cplusplus -extern "C" { -#endif - -#define _makeGetPb(type) \ -static type _NA_GETPb_##type(char *ptr) \ -{ \ - type temp; \ - NA_SWAP##type(ptr, (char *)&temp); \ - return temp; \ -} - -#define _makeGetPa(type) \ -static type _NA_GETPa_##type(char *ptr) \ -{ \ - type temp; \ - NA_COPY##type(ptr, (char *)&temp); \ - return temp; \ -} - -_makeGetPb(Complex64) -_makeGetPb(Complex32) -_makeGetPb(Float64) -_makeGetPb(Float32) -_makeGetPb(Int64) -_makeGetPb(UInt64) -_makeGetPb(Int32) -_makeGetPb(UInt32) -_makeGetPb(Int16) -_makeGetPb(UInt16) -_makeGetPb(Int8) -_makeGetPb(UInt8) -_makeGetPb(Bool) - -_makeGetPa(Complex64) -_makeGetPa(Complex32) -_makeGetPa(Float64) -_makeGetPa(Float32) -_makeGetPa(Int64) -_makeGetPa(UInt64) -_makeGetPa(Int32) -_makeGetPa(UInt32) -_makeGetPa(Int16) -_makeGetPa(UInt16) -_makeGetPa(Int8) -_makeGetPa(UInt8) -_makeGetPa(Bool) - -#undef _makeGetPb -#undef _makeGetPa - -#define _makeSetPb(type) \ -static void _NA_SETPb_##type(char *ptr, type v) \ -{ \ - NA_SWAP##type(((char *)&v), ptr); \ - return; \ -} - -#define _makeSetPa(type) \ -static void _NA_SETPa_##type(char *ptr, type v) \ -{ \ - NA_COPY##type(((char *)&v), ptr); \ - return; \ -} - -_makeSetPb(Complex64) -_makeSetPb(Complex32) -_makeSetPb(Float64) -_makeSetPb(Float32) -_makeSetPb(Int64) -_makeSetPb(UInt64) -_makeSetPb(Int32) -_makeSetPb(UInt32) -_makeSetPb(Int16) -_makeSetPb(UInt16) -_makeSetPb(Int8) -_makeSetPb(UInt8) -_makeSetPb(Bool) - -_makeSetPa(Complex64) -_makeSetPa(Complex32) -_makeSetPa(Float64) -_makeSetPa(Float32) -_makeSetPa(Int64) -_makeSetPa(UInt64) -_makeSetPa(Int32) -_makeSetPa(UInt32) -_makeSetPa(Int16) -_makeSetPa(UInt16) -_makeSetPa(Int8) -_makeSetPa(UInt8) -_makeSetPa(Bool) - -#undef _makeSetPb -#undef _makeSetPa - -#ifdef __cplusplus - } -#endif - -/* ========================== ptr get/set ================================ */ - -/* byteswapping */ -#define NA_GETPb(ai, type, ptr) _NA_GETPb_##type(ptr) - -/* aligning */ -#define NA_GETPa(ai, type, ptr) 
_NA_GETPa_##type(ptr) - -/* fast (aligned, !byteswapped) */ -#define NA_GETPf(ai, type, ptr) (*((type *) (ptr))) - -#define NA_GETP(ai, type, ptr) \ - (PyArray_ISCARRAY(ai) ? NA_GETPf(ai, type, ptr) \ - : (PyArray_ISBYTESWAPPED(ai) ? \ - NA_GETPb(ai, type, ptr) \ - : NA_GETPa(ai, type, ptr))) - -/* NOTE: NA_SET* macros cannot be used as values. */ - -/* byteswapping */ -#define NA_SETPb(ai, type, ptr, v) _NA_SETPb_##type(ptr, v) - -/* aligning */ -#define NA_SETPa(ai, type, ptr, v) _NA_SETPa_##type(ptr, v) - -/* fast (aligned, !byteswapped) */ -#define NA_SETPf(ai, type, ptr, v) ((*((type *) ptr)) = (v)) - -#define NA_SETP(ai, type, ptr, v) \ - if (PyArray_ISCARRAY(ai)) { \ - NA_SETPf((ai), type, (ptr), (v)); \ - } else if (PyArray_ISBYTESWAPPED(ai)) { \ - NA_SETPb((ai), type, (ptr), (v)); \ - } else \ - NA_SETPa((ai), type, (ptr), (v)) - -/* ========================== 1 index get/set ============================ */ - -/* byteswapping */ -#define NA_GET1b(ai, type, i) NA_GETPb(ai, type, NA_PTR1(ai, i)) -/* aligning */ -#define NA_GET1a(ai, type, i) NA_GETPa(ai, type, NA_PTR1(ai, i)) -/* fast (aligned, !byteswapped) */ -#define NA_GET1f(ai, type, i) NA_GETPf(ai, type, NA_PTR1(ai, i)) -/* testing */ -#define NA_GET1(ai, type, i) NA_GETP(ai, type, NA_PTR1(ai, i)) - -/* byteswapping */ -#define NA_SET1b(ai, type, i, v) NA_SETPb(ai, type, NA_PTR1(ai, i), v) -/* aligning */ -#define NA_SET1a(ai, type, i, v) NA_SETPa(ai, type, NA_PTR1(ai, i), v) -/* fast (aligned, !byteswapped) */ -#define NA_SET1f(ai, type, i, v) NA_SETPf(ai, type, NA_PTR1(ai, i), v) -/* testing */ -#define NA_SET1(ai, type, i, v) NA_SETP(ai, type, NA_PTR1(ai, i), v) - -/* ========================== 2 index get/set ============================= */ - -/* byteswapping */ -#define NA_GET2b(ai, type, i, j) NA_GETPb(ai, type, NA_PTR2(ai, i, j)) -/* aligning */ -#define NA_GET2a(ai, type, i, j) NA_GETPa(ai, type, NA_PTR2(ai, i, j)) -/* fast (aligned, !byteswapped) */ -#define NA_GET2f(ai, type, i, j) 
NA_GETPf(ai, type, NA_PTR2(ai, i, j)) -/* testing */ -#define NA_GET2(ai, type, i, j) NA_GETP(ai, type, NA_PTR2(ai, i, j)) - -/* byteswapping */ -#define NA_SET2b(ai, type, i, j, v) NA_SETPb(ai, type, NA_PTR2(ai, i, j), v) -/* aligning */ -#define NA_SET2a(ai, type, i, j, v) NA_SETPa(ai, type, NA_PTR2(ai, i, j), v) -/* fast (aligned, !byteswapped) */ -#define NA_SET2f(ai, type, i, j, v) NA_SETPf(ai, type, NA_PTR2(ai, i, j), v) - -#define NA_SET2(ai, type, i, j, v) NA_SETP(ai, type, NA_PTR2(ai, i, j), v) - -/* ========================== 3 index get/set ============================= */ - -/* byteswapping */ -#define NA_GET3b(ai, type, i, j, k) NA_GETPb(ai, type, NA_PTR3(ai, i, j, k)) -/* aligning */ -#define NA_GET3a(ai, type, i, j, k) NA_GETPa(ai, type, NA_PTR3(ai, i, j, k)) -/* fast (aligned, !byteswapped) */ -#define NA_GET3f(ai, type, i, j, k) NA_GETPf(ai, type, NA_PTR3(ai, i, j, k)) -/* testing */ -#define NA_GET3(ai, type, i, j, k) NA_GETP(ai, type, NA_PTR3(ai, i, j, k)) - -/* byteswapping */ -#define NA_SET3b(ai, type, i, j, k, v) \ - NA_SETPb(ai, type, NA_PTR3(ai, i, j, k), v) -/* aligning */ -#define NA_SET3a(ai, type, i, j, k, v) \ - NA_SETPa(ai, type, NA_PTR3(ai, i, j, k), v) -/* fast (aligned, !byteswapped) */ -#define NA_SET3f(ai, type, i, j, k, v) \ - NA_SETPf(ai, type, NA_PTR3(ai, i, j, k), v) -#define NA_SET3(ai, type, i, j, k, v) \ - NA_SETP(ai, type, NA_PTR3(ai, i, j, k), v) - -/* ========================== 1D get/set ================================== */ - -#define NA_GET1Db(ai, type, base, cnt, out) \ - { int i, stride = ai->strides[ai->nd-1]; \ - for(i=0; istrides[ai->nd-1]; \ - for(i=0; istrides[ai->nd-1]; \ - for(i=0; istrides[ai->nd-1]; \ - for(i=0; istrides[ai->nd-1]; \ - for(i=0; istrides[ai->nd-1]; \ - for(i=0; i=(y)) ? (x) : (y)) -#endif - -#if !defined(ABS) -#define ABS(x) (((x) >= 0) ? 
(x) : -(x)) -#endif - -#define ELEM(x) (sizeof(x)/sizeof(x[0])) - -#define BOOLEAN_BITWISE_NOT(x) ((x) ^ 1) - -#define NA_NBYTES(a) (a->descr->elsize * NA_elements(a)) - -#if defined(NA_SMP) -#define BEGIN_THREADS Py_BEGIN_ALLOW_THREADS -#define END_THREADS Py_END_ALLOW_THREADS -#else -#define BEGIN_THREADS -#define END_THREADS -#endif - -#if !defined(NA_isnan) - -#define U32(u) (* (Int32 *) &(u) ) -#define U64(u) (* (Int64 *) &(u) ) - -#define NA_isnan32(u) \ - ( (( U32(u) & 0x7f800000) == 0x7f800000) && ((U32(u) & 0x007fffff) != 0)) ? 1:0 - -#if !defined(_MSC_VER) -#define NA_isnan64(u) \ - ( (( U64(u) & 0x7ff0000000000000LL) == 0x7ff0000000000000LL) && ((U64(u) & 0x000fffffffffffffLL) != 0)) ? 1:0 -#else -#define NA_isnan64(u) \ - ( (( U64(u) & 0x7ff0000000000000i64) == 0x7ff0000000000000i64) && ((U64(u) & 0x000fffffffffffffi64) != 0)) ? 1:0 -#endif - -#define NA_isnanC32(u) (NA_isnan32(((Complex32 *)&(u))->r) || NA_isnan32(((Complex32 *)&(u))->i)) -#define NA_isnanC64(u) (NA_isnan64(((Complex64 *)&(u))->r) || NA_isnan64(((Complex64 *)&(u))->i)) - -#endif /* NA_isnan */ - - -#endif /* _ndarraymacro */ diff --git a/pythonPackages/numpy/numpy/numarray/linear_algebra.py b/pythonPackages/numpy/numpy/numarray/linear_algebra.py deleted file mode 100755 index 238dff9522..0000000000 --- a/pythonPackages/numpy/numpy/numarray/linear_algebra.py +++ /dev/null @@ -1,15 +0,0 @@ - -from numpy.oldnumeric.linear_algebra import * - -import numpy.oldnumeric.linear_algebra as nol - -__all__ = list(nol.__all__) -__all__ += ['qr_decomposition'] - -from numpy.linalg import qr as _qr - -def qr_decomposition(a, mode='full'): - res = _qr(a, mode) - if mode == 'full': - return res - return (None, res) diff --git a/pythonPackages/numpy/numpy/numarray/ma.py b/pythonPackages/numpy/numpy/numarray/ma.py deleted file mode 100755 index 5c7a19cf2f..0000000000 --- a/pythonPackages/numpy/numpy/numarray/ma.py +++ /dev/null @@ -1,2 +0,0 @@ - -from numpy.oldnumeric.ma import * diff --git 
a/pythonPackages/numpy/numpy/numarray/matrix.py b/pythonPackages/numpy/numpy/numarray/matrix.py deleted file mode 100755 index 86d79bbe21..0000000000 --- a/pythonPackages/numpy/numpy/numarray/matrix.py +++ /dev/null @@ -1,7 +0,0 @@ - -__all__ = ['Matrix'] - -from numpy import matrix as _matrix - -def Matrix(data, typecode=None, copy=1, savespace=0): - return _matrix(data, typecode, copy=copy) diff --git a/pythonPackages/numpy/numpy/numarray/mlab.py b/pythonPackages/numpy/numpy/numarray/mlab.py deleted file mode 100755 index 05f234d376..0000000000 --- a/pythonPackages/numpy/numpy/numarray/mlab.py +++ /dev/null @@ -1,7 +0,0 @@ - -from numpy.oldnumeric.mlab import * -import numpy.oldnumeric.mlab as nom - -__all__ = nom.__all__ - -del nom diff --git a/pythonPackages/numpy/numpy/numarray/nd_image.py b/pythonPackages/numpy/numpy/numarray/nd_image.py deleted file mode 100755 index dff7fa066f..0000000000 --- a/pythonPackages/numpy/numpy/numarray/nd_image.py +++ /dev/null @@ -1,14 +0,0 @@ -try: - from ndimage import * -except ImportError: - try: - from scipy.ndimage import * - except ImportError: - msg = \ -"""The nd_image package is not installed - -It can be downloaded by checking out the latest source from -http://svn.scipy.org/svn/scipy/trunk/Lib/ndimage or by downloading and -installing all of SciPy from http://www.scipy.org. -""" - raise ImportError(msg) diff --git a/pythonPackages/numpy/numpy/numarray/numerictypes.py b/pythonPackages/numpy/numpy/numarray/numerictypes.py deleted file mode 100755 index 7bc91612ec..0000000000 --- a/pythonPackages/numpy/numpy/numarray/numerictypes.py +++ /dev/null @@ -1,548 +0,0 @@ -"""numerictypes: Define the numeric type objects - -This module is designed so 'from numerictypes import *' is safe. 
-Exported symbols include: - - Dictionary with all registered number types (including aliases): - typeDict - - Numeric type objects: - Bool - Int8 Int16 Int32 Int64 - UInt8 UInt16 UInt32 UInt64 - Float32 Double64 - Complex32 Complex64 - - Numeric type classes: - NumericType - BooleanType - SignedType - UnsignedType - IntegralType - SignedIntegralType - UnsignedIntegralType - FloatingType - ComplexType - -$Id: numerictypes.py,v 1.55 2005/12/01 16:22:03 jaytmiller Exp $ -""" - -__all__ = ['NumericType','HasUInt64','typeDict','IsType', - 'BooleanType', 'SignedType', 'UnsignedType', 'IntegralType', - 'SignedIntegralType', 'UnsignedIntegralType', 'FloatingType', - 'ComplexType', 'AnyType', 'ObjectType', 'Any', 'Object', - 'Bool', 'Int8', 'Int16', 'Int32', 'Int64', 'Float32', - 'Float64', 'UInt8', 'UInt16', 'UInt32', 'UInt64', - 'Complex32', 'Complex64', 'Byte', 'Short', 'Int','Long', - 'Float', 'Complex', 'genericTypeRank', 'pythonTypeRank', - 'pythonTypeMap', 'scalarTypeMap', 'genericCoercions', - 'typecodes', 'genericPromotionExclusions','MaximumType', - 'getType','scalarTypes', 'typefrom'] - -MAX_ALIGN = 8 -MAX_INT_SIZE = 8 - -import numpy -LP64 = numpy.intp(0).itemsize == 8 - -HasUInt64 = 1 -try: - numpy.int64(0) -except: - HasUInt64 = 0 - -#from typeconv import typeConverters as _typeConverters -#import numinclude -#from _numerictype import _numerictype, typeDict - -# Enumeration of numarray type codes -typeDict = {} - -_tAny = 0 -_tBool = 1 -_tInt8 = 2 -_tUInt8 = 3 -_tInt16 = 4 -_tUInt16 = 5 -_tInt32 = 6 -_tUInt32 = 7 -_tInt64 = 8 -_tUInt64 = 9 -_tFloat32 = 10 -_tFloat64 = 11 -_tComplex32 = 12 -_tComplex64 = 13 -_tObject = 14 - -def IsType(rep): - """Determines whether the given object or string, 'rep', represents - a numarray type.""" - return isinstance(rep, NumericType) or rep in typeDict - -def _register(name, type, force=0): - """Register the type object. Raise an exception if it is already registered - unless force is true. 
- """ - if name in typeDict and not force: - raise ValueError("Type %s has already been registered" % name) - typeDict[name] = type - return type - - -class NumericType(object): - """Numeric type class - - Used both as a type identification and the repository of - characteristics and conversion functions. - """ - def __new__(type, name, bytes, default, typeno): - """__new__() implements a 'quasi-singleton pattern because attempts - to create duplicate types return the first created instance of that - particular type parameterization, i.e. the second time you try to - create "Int32", you get the original Int32, not a new one. - """ - if name in typeDict: - self = typeDict[name] - if self.bytes != bytes or self.default != default or \ - self.typeno != typeno: - raise ValueError("Redeclaration of existing NumericType "\ - "with different parameters.") - return self - else: - self = object.__new__(type) - self.name = "no name" - self.bytes = None - self.default = None - self.typeno = -1 - return self - - def __init__(self, name, bytes, default, typeno): - if not isinstance(name, str): - raise TypeError("name must be a string") - self.name = name - self.bytes = bytes - self.default = default - self.typeno = typeno - self._conv = None - _register(self.name, self) - - def __getnewargs__(self): - """support the pickling protocol.""" - return (self.name, self.bytes, self.default, self.typeno) - - def __getstate__(self): - """support pickling protocol... 
no __setstate__ required.""" - False - -class BooleanType(NumericType): - pass - -class SignedType: - """Marker class used for signed type check""" - pass - -class UnsignedType: - """Marker class used for unsigned type check""" - pass - -class IntegralType(NumericType): - pass - -class SignedIntegralType(IntegralType, SignedType): - pass - -class UnsignedIntegralType(IntegralType, UnsignedType): - pass - -class FloatingType(NumericType): - pass - -class ComplexType(NumericType): - pass - -class AnyType(NumericType): - pass - -class ObjectType(NumericType): - pass - -# C-API Type Any - -Any = AnyType("Any", None, None, _tAny) - -Object = ObjectType("Object", None, None, _tObject) - -# Numeric Types: - -Bool = BooleanType("Bool", 1, 0, _tBool) -Int8 = SignedIntegralType( "Int8", 1, 0, _tInt8) -Int16 = SignedIntegralType("Int16", 2, 0, _tInt16) -Int32 = SignedIntegralType("Int32", 4, 0, _tInt32) -Int64 = SignedIntegralType("Int64", 8, 0, _tInt64) - -Float32 = FloatingType("Float32", 4, 0.0, _tFloat32) -Float64 = FloatingType("Float64", 8, 0.0, _tFloat64) - -UInt8 = UnsignedIntegralType( "UInt8", 1, 0, _tUInt8) -UInt16 = UnsignedIntegralType("UInt16", 2, 0, _tUInt16) -UInt32 = UnsignedIntegralType("UInt32", 4, 0, _tUInt32) -UInt64 = UnsignedIntegralType("UInt64", 8, 0, _tUInt64) - -Complex32 = ComplexType("Complex32", 8, complex(0.0), _tComplex32) -Complex64 = ComplexType("Complex64", 16, complex(0.0), _tComplex64) - -Object.dtype = 'O' -Bool.dtype = '?' 
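The "quasi-singleton" pattern described in `NumericType.__new__` above (constructing the same named type twice returns the original instance, and redeclaring it with different parameters is an error) can be sketched in isolation. This is a simplified illustration, not the deleted class; the registration here happens in `__new__` rather than via a separate `_register` helper:

```python
class Interned:
    """Quasi-singleton: instances are interned by name, so constructing
    the same name twice yields the first instance created."""
    _registry = {}

    def __new__(cls, name, nbytes):
        if name in cls._registry:
            inst = cls._registry[name]
            if inst.nbytes != nbytes:
                raise ValueError(
                    "Redeclaration of existing type with different parameters")
            return inst
        inst = object.__new__(cls)
        cls._registry[name] = inst
        return inst

    def __init__(self, name, nbytes):
        # Runs on every construction, including repeats; assignments are
        # idempotent for a matching redeclaration.
        self.name = name
        self.nbytes = nbytes

a = Interned("Int32", 4)
b = Interned("Int32", 4)
assert a is b            # second construction returned the first instance
```

The same idea explains why `typeDict` in the deleted module can serve both as the registry and as the lookup table for type names and aliases.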
-Int8.dtype = 'i1' -Int16.dtype = 'i2' -Int32.dtype = 'i4' -Int64.dtype = 'i8' - -UInt8.dtype = 'u1' -UInt16.dtype = 'u2' -UInt32.dtype = 'u4' -UInt64.dtype = 'u8' - -Float32.dtype = 'f4' -Float64.dtype = 'f8' - -Complex32.dtype = 'c8' -Complex64.dtype = 'c16' - -# Aliases - -Byte = _register("Byte", Int8) -Short = _register("Short", Int16) -Int = _register("Int", Int32) -if LP64: - Long = _register("Long", Int64) - if HasUInt64: - _register("ULong", UInt64) - MaybeLong = _register("MaybeLong", Int64) - __all__.append('MaybeLong') -else: - Long = _register("Long", Int32) - _register("ULong", UInt32) - MaybeLong = _register("MaybeLong", Int32) - __all__.append('MaybeLong') - - -_register("UByte", UInt8) -_register("UShort", UInt16) -_register("UInt", UInt32) -Float = _register("Float", Float64) -Complex = _register("Complex", Complex64) - -# short forms - -_register("b1", Bool) -_register("u1", UInt8) -_register("u2", UInt16) -_register("u4", UInt32) -_register("i1", Int8) -_register("i2", Int16) -_register("i4", Int32) - -_register("i8", Int64) -if HasUInt64: - _register("u8", UInt64) - -_register("f4", Float32) -_register("f8", Float64) -_register("c8", Complex32) -_register("c16", Complex64) - -# NumPy forms - -_register("1", Int8) -_register("B", Bool) -_register("c", Int8) -_register("b", UInt8) -_register("s", Int16) -_register("w", UInt16) -_register("i", Int32) -_register("N", Int64) -_register("u", UInt32) -_register("U", UInt64) - -if LP64: - _register("l", Int64) -else: - _register("l", Int32) - -_register("d", Float64) -_register("f", Float32) -_register("D", Complex64) -_register("F", Complex32) - -# scipy.base forms - -def _scipy_alias(scipy_type, numarray_type): - _register(scipy_type, eval(numarray_type)) - globals()[scipy_type] = globals()[numarray_type] - -_scipy_alias("bool_", "Bool") -_scipy_alias("bool8", "Bool") -_scipy_alias("int8", "Int8") -_scipy_alias("uint8", "UInt8") -_scipy_alias("int16", "Int16") -_scipy_alias("uint16", "UInt16") 
-_scipy_alias("int32", "Int32") -_scipy_alias("uint32", "UInt32") -_scipy_alias("int64", "Int64") -_scipy_alias("uint64", "UInt64") - -_scipy_alias("float64", "Float64") -_scipy_alias("float32", "Float32") -_scipy_alias("complex128", "Complex64") -_scipy_alias("complex64", "Complex32") - -# The rest is used by numeric modules to determine conversions - -# Ranking of types from lowest to highest (sorta) -if not HasUInt64: - genericTypeRank = ['Bool','Int8','UInt8','Int16','UInt16', - 'Int32', 'UInt32', 'Int64', - 'Float32','Float64', 'Complex32', 'Complex64', 'Object'] -else: - genericTypeRank = ['Bool','Int8','UInt8','Int16','UInt16', - 'Int32', 'UInt32', 'Int64', 'UInt64', - 'Float32','Float64', 'Complex32', 'Complex64', 'Object'] - -pythonTypeRank = [ bool, int, long, float, complex ] - -# The next line is not platform independent XXX Needs to be generalized -if not LP64: - pythonTypeMap = { - int:("Int32","int"), - long:("Int64","int"), - float:("Float64","float"), - complex:("Complex64","complex")} - - scalarTypeMap = { - int:"Int32", - long:"Int64", - float:"Float64", - complex:"Complex64"} -else: - pythonTypeMap = { - int:("Int64","int"), - long:("Int64","int"), - float:("Float64","float"), - complex:("Complex64","complex")} - - scalarTypeMap = { - int:"Int64", - long:"Int64", - float:"Float64", - complex:"Complex64"} - -pythonTypeMap.update({bool:("Bool","bool") }) -scalarTypeMap.update({bool:"Bool"}) - -# Generate coercion matrix - -def _initGenericCoercions(): - global genericCoercions - genericCoercions = {} - - # vector with ... 
- for ntype1 in genericTypeRank: - nt1 = typeDict[ntype1] - rank1 = genericTypeRank.index(ntype1) - ntypesize1, inttype1, signedtype1 = nt1.bytes, \ - isinstance(nt1, IntegralType), isinstance(nt1, SignedIntegralType) - for ntype2 in genericTypeRank: - # vector - nt2 = typeDict[ntype2] - ntypesize2, inttype2, signedtype2 = nt2.bytes, \ - isinstance(nt2, IntegralType), isinstance(nt2, SignedIntegralType) - rank2 = genericTypeRank.index(ntype2) - if (signedtype1 != signedtype2) and inttype1 and inttype2: - # mixing of signed and unsigned ints is a special case - # If unsigned same size or larger, final size needs to be bigger - # if possible - if signedtype1: - if ntypesize2 >= ntypesize1: - size = min(2*ntypesize2, MAX_INT_SIZE) - else: - size = ntypesize1 - else: - if ntypesize1 >= ntypesize2: - size = min(2*ntypesize1, MAX_INT_SIZE) - else: - size = ntypesize2 - outtype = "Int"+str(8*size) - else: - if rank1 >= rank2: - outtype = ntype1 - else: - outtype = ntype2 - genericCoercions[(ntype1, ntype2)] = outtype - - for ntype2 in pythonTypeRank: - # scalar - mapto, kind = pythonTypeMap[ntype2] - if ((inttype1 and kind=="int") or (not inttype1 and kind=="float")): - # both are of the same "kind" thus vector type dominates - outtype = ntype1 - else: - rank2 = genericTypeRank.index(mapto) - if rank1 >= rank2: - outtype = ntype1 - else: - outtype = mapto - genericCoercions[(ntype1, ntype2)] = outtype - genericCoercions[(ntype2, ntype1)] = outtype - - # scalar-scalar - for ntype1 in pythonTypeRank: - maptype1 = scalarTypeMap[ntype1] - genericCoercions[(ntype1,)] = maptype1 - for ntype2 in pythonTypeRank: - maptype2 = scalarTypeMap[ntype2] - genericCoercions[(ntype1, ntype2)] = genericCoercions[(maptype1, maptype2)] - - # Special cases more easily dealt with outside of the loop - genericCoercions[("Complex32", "Float64")] = "Complex64" - genericCoercions[("Float64", "Complex32")] = "Complex64" - genericCoercions[("Complex32", "Int64")] = "Complex64" - 
genericCoercions[("Int64", "Complex32")] = "Complex64" - genericCoercions[("Complex32", "UInt64")] = "Complex64" - genericCoercions[("UInt64", "Complex32")] = "Complex64" - - genericCoercions[("Int64","Float32")] = "Float64" - genericCoercions[("Float32", "Int64")] = "Float64" - genericCoercions[("UInt64","Float32")] = "Float64" - genericCoercions[("Float32", "UInt64")] = "Float64" - - genericCoercions[(float, "Bool")] = "Float64" - genericCoercions[("Bool", float)] = "Float64" - - genericCoercions[(float,float,float)] = "Float64" # for scipy.special - genericCoercions[(int,int,float)] = "Float64" # for scipy.special - -_initGenericCoercions() - -# If complex is subclassed, the following may not be necessary -genericPromotionExclusions = { - 'Bool': (), - 'Int8': (), - 'Int16': (), - 'Int32': ('Float32','Complex32'), - 'UInt8': (), - 'UInt16': (), - 'UInt32': ('Float32','Complex32'), - 'Int64' : ('Float32','Complex32'), - 'UInt64' : ('Float32','Complex32'), - 'Float32': (), - 'Float64': ('Complex32',), - 'Complex32':(), - 'Complex64':() -} # e.g., don't allow promotion from Float64 to Complex32 or Int64 to Float32 - -# Numeric typecodes -typecodes = {'Integer': '1silN', - 'UnsignedInteger': 'bBwuU', - 'Float': 'fd', - 'Character': 'c', - 'Complex': 'FD' } - -if HasUInt64: - _MaximumType = { - Bool : UInt64, - - Int8 : Int64, - Int16 : Int64, - Int32 : Int64, - Int64 : Int64, - - UInt8 : UInt64, - UInt16 : UInt64, - UInt32 : UInt64, - UInt8 : UInt64, - - Float32 : Float64, - Float64 : Float64, - - Complex32 : Complex64, - Complex64 : Complex64 - } -else: - _MaximumType = { - Bool : Int64, - - Int8 : Int64, - Int16 : Int64, - Int32 : Int64, - Int64 : Int64, - - UInt8 : Int64, - UInt16 : Int64, - UInt32 : Int64, - UInt8 : Int64, - - Float32 : Float64, - Float64 : Float64, - - Complex32 : Complex64, - Complex64 : Complex64 - } - -def MaximumType(t): - """returns the type of highest precision of the same general kind as 't'""" - return _MaximumType[t] - - -def 
getType(type): - """Return the numeric type object for type - - type may be the name of a type object or the actual object - """ - if isinstance(type, NumericType): - return type - try: - return typeDict[type] - except KeyError: - raise TypeError("Not a numeric type") - -scalarTypes = (bool,int,long,float,complex) - -_scipy_dtypechar = { - Int8 : 'b', - UInt8 : 'B', - Int16 : 'h', - UInt16 : 'H', - Int32 : 'i', - UInt32 : 'I', - Int64 : 'q', - UInt64 : 'Q', - Float32 : 'f', - Float64 : 'd', - Complex32 : 'F', # Note the switchup here: - Complex64 : 'D' # numarray.Complex32 == scipy.complex64, etc. - } - -_scipy_dtypechar_inverse = {} -for key,value in _scipy_dtypechar.items(): - _scipy_dtypechar_inverse[value] = key - -_val = numpy.int_(0).itemsize -if _val == 8: - _scipy_dtypechar_inverse['l'] = Int64 - _scipy_dtypechar_inverse['L'] = UInt64 -elif _val == 4: - _scipy_dtypechar_inverse['l'] = Int32 - _scipy_dtypechar_inverse['L'] = UInt32 - -del _val - -if LP64: - _scipy_dtypechar_inverse['p'] = Int64 - _scipy_dtypechar_inverse['P'] = UInt64 -else: - _scipy_dtypechar_inverse['p'] = Int32 - _scipy_dtypechar_inverse['P'] = UInt32 - -def typefrom(obj): - return _scipy_dtypechar_inverse[obj.dtype.char] diff --git a/pythonPackages/numpy/numpy/numarray/random_array.py b/pythonPackages/numpy/numpy/numarray/random_array.py deleted file mode 100755 index d70e2694a5..0000000000 --- a/pythonPackages/numpy/numpy/numarray/random_array.py +++ /dev/null @@ -1,9 +0,0 @@ - -__all__ = ['ArgumentError', 'F', 'beta', 'binomial', 'chi_square', - 'exponential', 'gamma', 'get_seed', 'multinomial', - 'multivariate_normal', 'negative_binomial', 'noncentral_F', - 'noncentral_chi_square', 'normal', 'permutation', 'poisson', - 'randint', 'random', 'random_integers', 'standard_normal', - 'uniform', 'seed'] - -from numpy.oldnumeric.random_array import * diff --git a/pythonPackages/numpy/numpy/numarray/session.py b/pythonPackages/numpy/numpy/numarray/session.py deleted file mode 100755 index 
0982742abc..0000000000 --- a/pythonPackages/numpy/numpy/numarray/session.py +++ /dev/null @@ -1,346 +0,0 @@ -""" This module contains a "session saver" which saves the state of a -NumPy session to a file. At a later time, a different Python -process can be started and the saved session can be restored using -load(). - -The session saver relies on the Python pickle protocol to save and -restore objects. Objects which are not themselves picklable (e.g. -modules) can sometimes be saved by "proxy", particularly when they -are global constants of some kind. If it's not known that proxying -will work, a warning is issued at save time. If a proxy fails to -reload properly (e.g. because it's not a global constant), a warning -is issued at reload time and that name is bound to a _ProxyFailure -instance which tries to identify what should have been restored. - -First, some unfortunate (probably unnecessary) concessions to doctest -to keep the test run free of warnings. - ->>> del _PROXY_ALLOWED ->>> del __builtins__ - -By default, save() stores every variable in the caller's namespace: - ->>> import numpy as na ->>> a = na.arange(10) ->>> save() - -Alternately, save() can be passed a comma seperated string of variables: - ->>> save("a,na") - -Alternately, save() can be passed a dictionary, typically one you already -have lying around somewhere rather than created inline as shown here: - ->>> save(dictionary={"a":a,"na":na}) - -If both variables and a dictionary are specified, the variables to be -saved are taken from the dictionary. - ->>> save(variables="a,na",dictionary={"a":a,"na":na}) - -Remove names from the session namespace - ->>> del a, na - -By default, load() restores every variable/object in the session file -to the caller's namespace. 
- ->>> load() - -load() can be passed a comma seperated string of variables to be -restored from the session file to the caller's namespace: - ->>> load("a,na") - -load() can also be passed a dictionary to *restore to*: - ->>> d = {} ->>> load(dictionary=d) - -load can be passed both a list variables of variables to restore and a -dictionary to restore to: - ->>> load(variables="a,na", dictionary=d) - ->>> na.all(a == na.arange(10)) -1 ->>> na.__name__ -'numpy' - -NOTE: session saving is faked for modules using module proxy objects. -Saved modules are re-imported at load time but any "state" in the module -which is not restored by a simple import is lost. - -""" - -__all__ = ['load', 'save'] - -import sys -import pickle - -SAVEFILE="session.dat" -VERBOSE = False # global import-time override - -def _foo(): pass - -_PROXY_ALLOWED = (type(sys), # module - type(_foo), # function - type(None)) # None - -def _update_proxy_types(): - """Suppress warnings for known un-picklables with working proxies.""" - pass - -def _unknown(_type): - """returns True iff _type isn't known as OK to proxy""" - return (_type is not None) and (_type not in _PROXY_ALLOWED) - -# caller() from the following article with one extra f_back added. -# from http://www.python.org/search/hypermail/python-1994q1/0506.html -# SUBJECT: import ( how to put a symbol into caller's namespace ) -# SENDER: Steven D. 
Majewski (sdm7g@elvis.med.virginia.edu) -# DATE: Thu, 24 Mar 1994 15:38:53 -0500 - -def _caller(): - """caller() returns the frame object of the function's caller.""" - try: - 1 + '' # make an error happen - except: # and return the caller's caller's frame - return sys.exc_traceback.tb_frame.f_back.f_back.f_back - -def _callers_globals(): - """callers_globals() returns the global dictionary of the caller.""" - frame = _caller() - return frame.f_globals - -def _callers_modules(): - """returns a list containing the names of all the modules in the caller's - global namespace.""" - g = _callers_globals() - mods = [] - for k,v in g.items(): - if type(v) == type(sys): - mods.append(getattr(v,"__name__")) - return mods - -def _errout(*args): - for a in args: - print >>sys.stderr, a, - print >>sys.stderr - -def _verbose(*args): - if VERBOSE: - _errout(*args) - -class _ProxyingFailure: - """Object which is bound to a variable for a proxy pickle which failed to reload""" - def __init__(self, module, name, type=None): - self.module = module - self.name = name - self.type = type - def __repr__(self): - return "ProxyingFailure('%s','%s','%s')" % (self.module, self.name, self.type) - -class _ModuleProxy(object): - """Proxy object which fakes pickling a module""" - def __new__(_type, name, save=False): - if save: - _verbose("proxying module", name) - self = object.__new__(_type) - self.name = name - else: - _verbose("loading module proxy", name) - try: - self = _loadmodule(name) - except ImportError: - _errout("warning: module", name,"import failed.") - return self - - def __getnewargs__(self): - return (self.name,) - - def __getstate__(self): - return False - -def _loadmodule(module): - if module not in sys.modules: - modules = module.split(".") - s = "" - for i in range(len(modules)): - s = ".".join(modules[:i+1]) - exec "import " + s - return sys.modules[module] - -class _ObjectProxy(object): - """Proxy object which fakes pickling an arbitrary object. 
Only global - constants can really be proxied.""" - def __new__(_type, module, name, _type2, save=False): - if save: - if _unknown(_type2): - _errout("warning: proxying object", module + "." + name, - "of type", _type2, "because it wouldn't pickle...", - "it may not reload later.") - else: - _verbose("proxying object", module, name) - self = object.__new__(_type) - self.module, self.name, self.type = module, name, str(_type2) - else: - _verbose("loading object proxy", module, name) - try: - m = _loadmodule(module) - except (ImportError, KeyError): - _errout("warning: loading object proxy", module + "." + name, - "module import failed.") - return _ProxyingFailure(module,name,_type2) - try: - self = getattr(m, name) - except AttributeError: - _errout("warning: object proxy", module + "." + name, - "wouldn't reload from", m) - return _ProxyingFailure(module,name,_type2) - return self - - def __getnewargs__(self): - return (self.module, self.name, self.type) - - def __getstate__(self): - return False - - -class _SaveSession(object): - """Tag object which marks the end of a save session and holds the - saved session variable names as a list of strings in the same - order as the session pickles.""" - def __new__(_type, keys, save=False): - if save: - _verbose("saving session", keys) - else: - _verbose("loading session", keys) - self = object.__new__(_type) - self.keys = keys - return self - - def __getnewargs__(self): - return (self.keys,) - - def __getstate__(self): - return False - -class ObjectNotFound(RuntimeError): - pass - -def _locate(modules, object): - for mname in modules: - m = sys.modules[mname] - if m: - for k,v in m.__dict__.items(): - if v is object: - return m.__name__, k - else: - raise ObjectNotFound(k) - -def save(variables=None, file=SAVEFILE, dictionary=None, verbose=False): - - """saves variables from a numpy session to a file. Variables - which won't pickle are "proxied" if possible. - - 'variables' a string of comma seperated variables: e.g. 
"a,b,c" - Defaults to dictionary.keys(). - - 'file' a filename or file object for the session file. - - 'dictionary' the dictionary in which to look up the variables. - Defaults to the caller's globals() - - 'verbose' print additional debug output when True. - """ - - global VERBOSE - VERBOSE = verbose - - _update_proxy_types() - - if isinstance(file, str): - file = open(file, "wb") - - if dictionary is None: - dictionary = _callers_globals() - - if variables is None: - keys = dictionary.keys() - else: - keys = variables.split(",") - - source_modules = _callers_modules() + sys.modules.keys() - - p = pickle.Pickler(file, protocol=2) - - _verbose("variables:",keys) - for k in keys: - v = dictionary[k] - _verbose("saving", k, type(v)) - try: # Try to write an ordinary pickle - p.dump(v) - _verbose("pickled", k) - except (pickle.PicklingError, TypeError, SystemError): - # Use proxies for stuff that won't pickle - if isinstance(v, type(sys)): # module - proxy = _ModuleProxy(v.__name__, save=True) - else: - try: - module, name = _locate(source_modules, v) - except ObjectNotFound: - _errout("warning: couldn't find object",k, - "in any module... skipping.") - continue - else: - proxy = _ObjectProxy(module, name, type(v), save=True) - p.dump(proxy) - o = _SaveSession(keys, save=True) - p.dump(o) - file.close() - -def load(variables=None, file=SAVEFILE, dictionary=None, verbose=False): - - """load a numpy session from a file and store the specified - 'variables' into 'dictionary'. - - 'variables' a string of comma separated variables: e.g. "a,b,c" - Defaults to dictionary.keys(). - - 'file' a filename or file object for the session file. - - 'dictionary' the dictionary in which to look up the variables. - Defaults to the caller's globals() - - 'verbose' print additional debug output when True.
- """ - - global VERBOSE - VERBOSE = verbose - - if isinstance(file, str): - file = open(file, "rb") - if dictionary is None: - dictionary = _callers_globals() - values = [] - p = pickle.Unpickler(file) - while 1: - o = p.load() - if isinstance(o, _SaveSession): - session = dict(zip(o.keys, values)) - _verbose("updating dictionary with session variables.") - if variables is None: - keys = session.keys() - else: - keys = variables.split(",") - for k in keys: - dictionary[k] = session[k] - return None - else: - _verbose("unpickled object", str(o)) - values.append(o) - -def test(): - import doctest, numpy.numarray.session - return doctest.testmod(numpy.numarray.session) diff --git a/pythonPackages/numpy/numpy/numarray/setup.py b/pythonPackages/numpy/numpy/numarray/setup.py deleted file mode 100755 index 6419902179..0000000000 --- a/pythonPackages/numpy/numpy/numarray/setup.py +++ /dev/null @@ -1,17 +0,0 @@ -from os.path import join - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('numarray',parent_package,top_path) - - config.add_data_files('include/numpy/*') - - config.add_extension('_capi', - sources=['_capi.c'], - ) - - return config - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/numarray/setupscons.py b/pythonPackages/numpy/numpy/numarray/setupscons.py deleted file mode 100755 index 173612ae8b..0000000000 --- a/pythonPackages/numpy/numpy/numarray/setupscons.py +++ /dev/null @@ -1,14 +0,0 @@ -from os.path import join - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('numarray',parent_package,top_path) - - config.add_data_files('include/numpy/') - config.add_sconscript('SConstruct', source_files = ['_capi.c']) - - return config - -if __name__ == '__main__': - from numpy.distutils.core import setup - 
setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/numarray/ufuncs.py b/pythonPackages/numpy/numpy/numarray/ufuncs.py deleted file mode 100755 index 3fb5671ce8..0000000000 --- a/pythonPackages/numpy/numpy/numarray/ufuncs.py +++ /dev/null @@ -1,22 +0,0 @@ - -__all__ = ['abs', 'absolute', 'add', 'arccos', 'arccosh', 'arcsin', 'arcsinh', - 'arctan', 'arctan2', 'arctanh', 'bitwise_and', 'bitwise_not', - 'bitwise_or', 'bitwise_xor', 'ceil', 'cos', 'cosh', 'divide', - 'equal', 'exp', 'fabs', 'floor', 'floor_divide', - 'fmod', 'greater', 'greater_equal', 'hypot', 'isnan', - 'less', 'less_equal', 'log', 'log10', 'logical_and', 'logical_not', - 'logical_or', 'logical_xor', 'lshift', 'maximum', 'minimum', - 'minus', 'multiply', 'negative', 'not_equal', - 'power', 'product', 'remainder', 'rshift', 'sin', 'sinh', 'sqrt', - 'subtract', 'sum', 'tan', 'tanh', 'true_divide', - 'conjugate', 'sign'] - -from numpy import absolute as abs, absolute, add, arccos, arccosh, arcsin, \ - arcsinh, arctan, arctan2, arctanh, bitwise_and, invert as bitwise_not, \ - bitwise_or, bitwise_xor, ceil, cos, cosh, divide, \ - equal, exp, fabs, floor, floor_divide, fmod, greater, greater_equal, \ - hypot, isnan, less, less_equal, log, log10, logical_and, \ - logical_not, logical_or, logical_xor, left_shift as lshift, \ - maximum, minimum, negative as minus, multiply, negative, \ - not_equal, power, product, remainder, right_shift as rshift, sin, \ - sinh, sqrt, subtract, sum, tan, tanh, true_divide, conjugate, sign diff --git a/pythonPackages/numpy/numpy/numarray/util.py b/pythonPackages/numpy/numpy/numarray/util.py deleted file mode 100755 index 9555474a8e..0000000000 --- a/pythonPackages/numpy/numpy/numarray/util.py +++ /dev/null @@ -1,47 +0,0 @@ -import os -import numpy as np - -__all__ = ['MathDomainError', 'UnderflowError', 'NumOverflowError', - 'handleError', 'get_numarray_include_dirs'] - -class MathDomainError(ArithmeticError): - pass - - -class 
UnderflowError(ArithmeticError): - pass - - -class NumOverflowError(OverflowError, ArithmeticError): - pass - - -def handleError(errorStatus, sourcemsg): - """Take error status and use error mode to handle it.""" - modes = np.geterr() - if errorStatus & np.FPE_INVALID: - if modes['invalid'] == "warn": - print "Warning: Encountered invalid numeric result(s)", sourcemsg - if modes['invalid'] == "raise": - raise MathDomainError(sourcemsg) - if errorStatus & np.FPE_DIVIDEBYZERO: - if modes['dividebyzero'] == "warn": - print "Warning: Encountered divide by zero(s)", sourcemsg - if modes['dividebyzero'] == "raise": - raise ZeroDivisionError(sourcemsg) - if errorStatus & np.FPE_OVERFLOW: - if modes['overflow'] == "warn": - print "Warning: Encountered overflow(s)", sourcemsg - if modes['overflow'] == "raise": - raise NumOverflowError(sourcemsg) - if errorStatus & np.FPE_UNDERFLOW: - if modes['underflow'] == "warn": - print "Warning: Encountered underflow(s)", sourcemsg - if modes['underflow'] == "raise": - raise UnderflowError(sourcemsg) - - -def get_numarray_include_dirs(): - base = os.path.dirname(np.__file__) - newdirs = [os.path.join(base, 'numarray', 'include')] - return newdirs diff --git a/pythonPackages/numpy/numpy/oldnumeric/__init__.py b/pythonPackages/numpy/numpy/oldnumeric/__init__.py deleted file mode 100755 index 05712c02c4..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/__init__.py +++ /dev/null @@ -1,45 +0,0 @@ -# Don't add these to the __all__ variable though -from numpy import * - -def _move_axis_to_0(a, axis): - if axis == 0: - return a - n = len(a.shape) - if axis < 0: - axis += n - axes = range(1, axis+1) + [0,] + range(axis+1, n) - return transpose(a, axes) - -# Add these -from compat import * -from functions import * -from precision import * -from ufuncs import * -from misc import * - -import compat -import precision -import functions -import misc -import ufuncs - -import numpy -__version__ = numpy.__version__ -del numpy - -__all__ = 
['__version__'] -__all__ += compat.__all__ -__all__ += precision.__all__ -__all__ += functions.__all__ -__all__ += ufuncs.__all__ -__all__ += misc.__all__ - -del compat -del functions -del precision -del ufuncs -del misc - -from numpy.testing import Tester -test = Tester().test -bench = Tester().bench diff --git a/pythonPackages/numpy/numpy/oldnumeric/alter_code1.py b/pythonPackages/numpy/numpy/oldnumeric/alter_code1.py deleted file mode 100755 index 87538a8559..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/alter_code1.py +++ /dev/null @@ -1,240 +0,0 @@ -""" -This module converts code written for Numeric to run with numpy - -Makes the following changes: - * Changes import statements (warns of use of from Numeric import *) - * Changes import statements (using numerix) ... - * Makes search and replace changes to: - - .typecode() - - .iscontiguous() - - .byteswapped() - - .itemsize() - - .toscalar() - * Converts .flat to .ravel() except for .flat = xxx or .flat[xxx] - * Replace xxx.spacesaver() with True - * Convert xx.savespace(?) to pass + ## xx.savespace(?) - - * Converts uses of 'b' to 'B' in the typecode-position of - functions: - eye, tri (in position 4) - ones, zeros, identity, empty, array, asarray, arange, - fromstring, indices, array_constructor (in position 2) - - and methods: - astype --- only argument - -- converts uses of '1', 's', 'w', and 'u' to - -- 'b', 'h', 'H', and 'I' - - * Converts uses of type(...) 
is - isinstance(..., ) -""" -__all__ = ['convertfile', 'convertall', 'converttree', 'convertsrc'] - -import sys -import os -import re -import glob - - -_func4 = ['eye', 'tri'] -_meth1 = ['astype'] -_func2 = ['ones', 'zeros', 'identity', 'fromstring', 'indices', - 'empty', 'array', 'asarray', 'arange', 'array_constructor'] - -_chars = {'1':'b','s':'h','w':'H','u':'I'} - -func_re = {} -meth_re = {} - -for name in _func2: - _astr = r"""(%s\s*[(][^,]*?[,][^'"]*?['"])b(['"][^)]*?[)])"""%name - func_re[name] = re.compile(_astr, re.DOTALL) - -for name in _func4: - _astr = r"""(%s\s*[(][^,]*?[,][^,]*?[,][^,]*?[,][^'"]*?['"])b(['"][^)]*?[)])"""%name - func_re[name] = re.compile(_astr, re.DOTALL) - -for name in _meth1: - _astr = r"""(.%s\s*[(][^'"]*?['"])b(['"][^)]*?[)])"""%name - func_re[name] = re.compile(_astr, re.DOTALL) - -for char in _chars.keys(): - _astr = r"""(.astype\s*[(][^'"]*?['"])%s(['"][^)]*?[)])"""%char - meth_re[char] = re.compile(_astr, re.DOTALL) - -def fixtypechars(fstr): - for name in _func2 + _func4 + _meth1: - fstr = func_re[name].sub('\\1B\\2',fstr) - for char in _chars.keys(): - fstr = meth_re[char].sub('\\1%s\\2'%_chars[char], fstr) - return fstr - -flatindex_re = re.compile('([.]flat(\s*?[[=]))') - -def changeimports(fstr, name, newname): - importstr = 'import %s' % name - importasstr = 'import %s as ' % name - fromstr = 'from %s import ' % name - fromall=0 - - fstr = re.sub(r'(import\s+[^,\n\r]+,\s*)(%s)' % name, - "\\1%s as %s" % (newname, name), fstr) - fstr = fstr.replace(importasstr, 'import %s as ' % newname) - fstr = fstr.replace(importstr, 'import %s as %s' % (newname,name)) - - ind = 0 - Nlen = len(fromstr) - Nlen2 = len("from %s import " % newname) - while 1: - found = fstr.find(fromstr,ind) - if (found < 0): - break - ind = found + Nlen - if fstr[ind] == '*': - continue - fstr = "%sfrom %s import %s" % (fstr[:found], newname, fstr[ind:]) - ind += Nlen2 - Nlen - return fstr, fromall - -istest_re = {} -_types = ['float', 'int', 'complex', 
'ArrayType', 'FloatType', - 'IntType', 'ComplexType'] -for name in _types: - _astr = r'type\s*[(]([^)]*)[)]\s+(?:is|==)\s+(.*?%s)'%name - istest_re[name] = re.compile(_astr) -def fixistesting(astr): - for name in _types: - astr = istest_re[name].sub('isinstance(\\1, \\2)', astr) - return astr - -def replaceattr(astr): - astr = astr.replace(".typecode()",".dtype.char") - astr = astr.replace(".iscontiguous()",".flags.contiguous") - astr = astr.replace(".byteswapped()",".byteswap()") - astr = astr.replace(".toscalar()", ".item()") - astr = astr.replace(".itemsize()",".itemsize") - # preserve uses of flat that should be o.k. - tmpstr = flatindex_re.sub(r"@@@@\2",astr) - # replace other uses of flat - tmpstr = tmpstr.replace(".flat",".ravel()") - # put back .flat where it was valid - astr = tmpstr.replace("@@@@", ".flat") - return astr - -svspc2 = re.compile(r'([^,(\s]+[.]spacesaver[(][)])') -svspc3 = re.compile(r'(\S+[.]savespace[(].*[)])') -#shpe = re.compile(r'(\S+\s*)[.]shape\s*=[^=]\s*(.+)') -def replaceother(astr): - astr = svspc2.sub('True',astr) - astr = svspc3.sub(r'pass ## \1', astr) - #astr = shpe.sub('\\1=\\1.reshape(\\2)', astr) - return astr - -import datetime -def fromstr(filestr): - savestr = filestr[:] - filestr = fixtypechars(filestr) - filestr = fixistesting(filestr) - filestr, fromall1 = changeimports(filestr, 'Numeric', 'numpy.oldnumeric') - filestr, fromall1 = changeimports(filestr, 'multiarray','numpy.oldnumeric') - filestr, fromall1 = changeimports(filestr, 'umath', 'numpy.oldnumeric') - filestr, fromall1 = changeimports(filestr, 'Precision', 'numpy.oldnumeric.precision') - filestr, fromall1 = changeimports(filestr, 'UserArray', 'numpy.oldnumeric.user_array') - filestr, fromall1 = changeimports(filestr, 'ArrayPrinter', 'numpy.oldnumeric.array_printer') - filestr, fromall2 = changeimports(filestr, 'numerix', 'numpy.oldnumeric') - filestr, fromall3 = changeimports(filestr, 'scipy_base', 'numpy.oldnumeric') - filestr, fromall3 = 
changeimports(filestr, 'Matrix', 'numpy.oldnumeric.matrix') - filestr, fromall3 = changeimports(filestr, 'MLab', 'numpy.oldnumeric.mlab') - filestr, fromall3 = changeimports(filestr, 'LinearAlgebra', 'numpy.oldnumeric.linear_algebra') - filestr, fromall3 = changeimports(filestr, 'RNG', 'numpy.oldnumeric.rng') - filestr, fromall3 = changeimports(filestr, 'RNG.Statistics', 'numpy.oldnumeric.rng_stats') - filestr, fromall3 = changeimports(filestr, 'RandomArray', 'numpy.oldnumeric.random_array') - filestr, fromall3 = changeimports(filestr, 'FFT', 'numpy.oldnumeric.fft') - filestr, fromall3 = changeimports(filestr, 'MA', 'numpy.oldnumeric.ma') - fromall = fromall1 or fromall2 or fromall3 - filestr = replaceattr(filestr) - filestr = replaceother(filestr) - if savestr != filestr: - today = datetime.date.today().strftime('%b %d, %Y') - name = os.path.split(sys.argv[0])[-1] - filestr = '## Automatically adapted for '\ - 'numpy.oldnumeric %s by %s\n\n%s' % (today, name, filestr) - return filestr, 1 - return filestr, 0 - -def makenewfile(name, filestr): - fid = file(name, 'w') - fid.write(filestr) - fid.close() - -def convertfile(filename, orig=1): - """Convert the filename given from using Numeric to using NumPy - - Copies the file to filename.orig and then over-writes the file - with the updated code - """ - fid = open(filename) - filestr = fid.read() - fid.close() - filestr, changed = fromstr(filestr) - if changed: - if orig: - base, ext = os.path.splitext(filename) - os.rename(filename, base+".orig") - else: - os.remove(filename) - makenewfile(filename, filestr) - -def fromargs(args): - filename = args[1] - converttree(filename) - -def convertall(direc=os.path.curdir, orig=1): - """Convert all .py files to use numpy.oldnumeric (from Numeric) in the directory given - - For each changed file, a backup of .py is made as - .py.orig. A new file named .py - is then written with the updated code. 
- """ - files = glob.glob(os.path.join(direc,'*.py')) - for afile in files: - if afile[-8:] == 'setup.py': continue # skip these - convertfile(afile, orig) - -header_re = re.compile(r'(Numeric/arrayobject.h)') - -def convertsrc(direc=os.path.curdir, ext=None, orig=1): - """Replace Numeric/arrayobject.h with numpy/oldnumeric.h in all files in the - directory with extension give by list ext (if ext is None, then all files are - replaced).""" - if ext is None: - files = glob.glob(os.path.join(direc,'*')) - else: - files = [] - for aext in ext: - files.extend(glob.glob(os.path.join(direc,"*.%s" % aext))) - for afile in files: - fid = open(afile) - fstr = fid.read() - fid.close() - fstr, n = header_re.subn(r'numpy/oldnumeric.h',fstr) - if n > 0: - if orig: - base, ext = os.path.splitext(afile) - os.rename(afile, base+".orig") - else: - os.remove(afile) - makenewfile(afile, fstr) - -def _func(arg, dirname, fnames): - convertall(dirname, orig=0) - convertsrc(dirname, ext=['h','c'], orig=0) - -def converttree(direc=os.path.curdir): - """Convert all .py files and source code files in the tree given - """ - os.path.walk(direc, _func, None) - - -if __name__ == '__main__': - fromargs(sys.argv) diff --git a/pythonPackages/numpy/numpy/oldnumeric/alter_code2.py b/pythonPackages/numpy/numpy/oldnumeric/alter_code2.py deleted file mode 100755 index baa6b9d265..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/alter_code2.py +++ /dev/null @@ -1,146 +0,0 @@ -""" -This module converts code written for numpy.oldnumeric to work -with numpy - -FIXME: Flesh this out. 
- -Makes the following changes: - * Converts typecharacters '1swu' to 'bhHI' respectively - when used as typecodes - * Changes import statements - * Change typecode= to dtype= - * Eliminates savespace=xxx keyword arguments - * Removes it when keyword is not given as well - * replaces matrixmultiply with dot - * converts functions that don't give axis= keyword that have changed - * converts functions that don't give typecode= keyword that have changed - * converts use of capitalized type-names - * converts old function names in oldnumeric.linear_algebra, - oldnumeric.random_array, and oldnumeric.fft - -""" -#__all__ = ['convertfile', 'convertall', 'converttree'] -__all__ = [] - -import warnings -warnings.warn("numpy.oldnumeric.alter_code2 is not working yet.") - -import sys -import os -import re -import glob - -# To convert typecharacters we need to -# Not very safe. Disabled for now.. -def replacetypechars(astr): - astr = astr.replace("'s'","'h'") - astr = astr.replace("'b'","'B'") - astr = astr.replace("'1'","'b'") - astr = astr.replace("'w'","'H'") - astr = astr.replace("'u'","'I'") - return astr - -def changeimports(fstr, name, newname): - importstr = 'import %s' % name - importasstr = 'import %s as ' % name - fromstr = 'from %s import ' % name - fromall=0 - - fstr = fstr.replace(importasstr, 'import %s as ' % newname) - fstr = fstr.replace(importstr, 'import %s as %s' % (newname,name)) - - ind = 0 - Nlen = len(fromstr) - Nlen2 = len("from %s import " % newname) - while 1: - found = fstr.find(fromstr,ind) - if (found < 0): - break - ind = found + Nlen - if fstr[ind] == '*': - continue - fstr = "%sfrom %s import %s" % (fstr[:found], newname, fstr[ind:]) - ind += Nlen2 - Nlen - return fstr, fromall - -def replaceattr(astr): - astr = astr.replace("matrixmultiply","dot") - return astr - -def replaceother(astr): - astr = re.sub(r'typecode\s*=', 'dtype=', astr) - astr = astr.replace('ArrayType', 'ndarray') - astr = astr.replace('NewAxis', 'newaxis') - return astr - 
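The keyword rewrites that replaceother() above performs are simple textual substitutions. A minimal standalone sketch (the function name modernize_keywords is invented for illustration; the real converter also rewrites import statements and runs additional passes):

```python
import re

def modernize_keywords(src):
    """Rewrite Numeric-era spellings to their NumPy names, mirroring
    replaceother(): typecode= becomes dtype=, plus the ArrayType and
    NewAxis renames. Operates on source text, not on live objects."""
    # typecode may be separated from '=' by whitespace in old code
    src = re.sub(r'typecode\s*=', 'dtype=', src)
    src = src.replace('ArrayType', 'ndarray')
    src = src.replace('NewAxis', 'newaxis')
    return src

old = "zeros((3, 4), typecode ='f')[NewAxis]"
print(modernize_keywords(old))  # zeros((3, 4), dtype='f')[newaxis]
```

Being purely textual, this can also touch strings and comments, which is one reason the module warns that it "is not working yet".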
-import datetime -def fromstr(filestr): - #filestr = replacetypechars(filestr) - filestr, fromall1 = changeimports(filestr, 'numpy.oldnumeric', 'numpy') - filestr, fromall1 = changeimports(filestr, 'numpy.core.multiarray', 'numpy') - filestr, fromall1 = changeimports(filestr, 'numpy.core.umath', 'numpy') - filestr, fromall3 = changeimports(filestr, 'LinearAlgebra', - 'numpy.linalg.old') - filestr, fromall3 = changeimports(filestr, 'RNG', 'numpy.random.oldrng') - filestr, fromall3 = changeimports(filestr, 'RNG.Statistics', 'numpy.random.oldrngstats') - filestr, fromall3 = changeimports(filestr, 'RandomArray', 'numpy.random.oldrandomarray') - filestr, fromall3 = changeimports(filestr, 'FFT', 'numpy.fft.old') - filestr, fromall3 = changeimports(filestr, 'MA', 'numpy.core.ma') - fromall = fromall1 or fromall3 # no fromall2 is assigned in this converter - filestr = replaceattr(filestr) - filestr = replaceother(filestr) - today = datetime.date.today().strftime('%b %d, %Y') - name = os.path.split(sys.argv[0])[-1] - filestr = '## Automatically adapted for '\ - 'numpy %s by %s\n\n%s' % (today, name, filestr) - return filestr - -def makenewfile(name, filestr): - fid = file(name, 'w') - fid.write(filestr) - fid.close() - -def getandcopy(name): - fid = file(name) - filestr = fid.read() - fid.close() - base, ext = os.path.splitext(name) - makenewfile(base+'.orig', filestr) - return filestr - -def convertfile(filename): - """Convert the filename given from using Numeric to using NumPy - - Copies the file to filename.orig and then over-writes the file - with the updated code - """ - filestr = getandcopy(filename) - filestr = fromstr(filestr) - makenewfile(filename, filestr) - -def fromargs(args): - filename = args[1] - convertfile(filename) - -def convertall(direc=os.path.curdir): - """Convert all .py files to use NumPy (from Numeric) in the directory given - - For each file, a backup of .py is made as - .py.orig. A new file named .py - is then written with the updated code.
- """ - files = glob.glob(os.path.join(direc,'*.py')) - for afile in files: - convertfile(afile) - -def _func(arg, dirname, fnames): - convertall(dirname) - -def converttree(direc=os.path.curdir): - """Convert all .py files in the tree given - - """ - os.path.walk(direc, _func, None) - -if __name__ == '__main__': - fromargs(sys.argv) diff --git a/pythonPackages/numpy/numpy/oldnumeric/array_printer.py b/pythonPackages/numpy/numpy/oldnumeric/array_printer.py deleted file mode 100755 index 95f3f42c77..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/array_printer.py +++ /dev/null @@ -1,16 +0,0 @@ - -__all__ = ['array2string'] - -from numpy import array2string as _array2string - -def array2string(a, max_line_width=None, precision=None, - suppress_small=None, separator=' ', - array_output=0): - if array_output: - prefix="array(" - style=repr - else: - prefix = "" - style=str - return _array2string(a, max_line_width, precision, - suppress_small, separator, prefix, style) diff --git a/pythonPackages/numpy/numpy/oldnumeric/arrayfns.py b/pythonPackages/numpy/numpy/oldnumeric/arrayfns.py deleted file mode 100755 index 230b200a9d..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/arrayfns.py +++ /dev/null @@ -1,97 +0,0 @@ -"""Backward compatible with arrayfns from Numeric -""" - -__all__ = ['array_set', 'construct3', 'digitize', 'error', 'find_mask', - 'histogram', 'index_sort', 'interp', 'nz', 'reverse', 'span', - 'to_corners', 'zmin_zmax'] - -import numpy as np -from numpy import asarray - -class error(Exception): - pass - -def array_set(vals1, indices, vals2): - indices = asarray(indices) - if indices.ndim != 1: - raise ValueError, "index array must be 1-d" - if not isinstance(vals1, np.ndarray): - raise TypeError, "vals1 must be an ndarray" - vals1 = asarray(vals1) - vals2 = asarray(vals2) - if vals1.ndim != vals2.ndim or vals1.ndim < 1: - raise error, "vals1 and vals2 must have same number of dimensions (>=1)" - vals1[indices] = vals2 - -from numpy import 
digitize -from numpy import bincount as histogram - -def index_sort(arr): - return asarray(arr).argsort(kind='heap') - -def interp(y, x, z, typ=None): - """y(z) interpolated by treating y(x) as piecewise function - """ - res = np.interp(z, x, y) - if typ is None or typ == 'd': - return res - if typ == 'f': - return res.astype('f') - - raise error, "incompatible typecode" - -def nz(x): - x = asarray(x,dtype=np.ubyte) - if x.ndim != 1: - raise TypeError, "input must have 1 dimension." - indxs = np.flatnonzero(x != 0) - return indxs[-1].item()+1 - -def reverse(x, n): - x = asarray(x,dtype='d') - if x.ndim != 2: - raise ValueError, "input must be 2-d" - y = np.empty_like(x) - if n == 0: - y[...] = x[::-1,:] - elif n == 1: - y[...] = x[:,::-1] - return y - -def span(lo, hi, num, d2=0): - x = np.linspace(lo, hi, num) - if d2 <= 0: - return x - else: - ret = np.empty((d2,num),x.dtype) - ret[...] = x - return ret - -def zmin_zmax(z, ireg): - z = asarray(z, dtype=float) - ireg = asarray(ireg, dtype=int) - if z.shape != ireg.shape or z.ndim != 2: - raise ValueError, "z and ireg must be the same shape and 2-d" - ix, iy = np.nonzero(ireg) - # Now, add more indices - x1m = ix - 1 - y1m = iy-1 - i1 = x1m>=0 - i2 = y1m>=0 - i3 = i1 & i2 - nix = np.r_[ix, x1m[i1], x1m[i1], ix[i2] ] - niy = np.r_[iy, iy[i1], y1m[i3], y1m[i2]] - # remove any negative indices - zres = z[nix,niy] - return zres.min().item(), zres.max().item() - - -def find_mask(fs, node_edges): - raise NotImplementedError - -def to_corners(arr, nv, nvsum): - raise NotImplementedError - - -def construct3(mask, itype): - raise NotImplementedError diff --git a/pythonPackages/numpy/numpy/oldnumeric/compat.py b/pythonPackages/numpy/numpy/oldnumeric/compat.py deleted file mode 100755 index 607dd0b904..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/compat.py +++ /dev/null @@ -1,117 +0,0 @@ -# Compatibility module containing deprecated names - -__all__ = ['NewAxis', - 'UFuncType', 'UfuncType', 'ArrayType',
'arraytype', - 'LittleEndian', 'arrayrange', 'matrixmultiply', - 'array_constructor', 'pickle_array', - 'DumpArray', 'LoadArray', 'multiarray', - # from cPickle - 'dump', 'dumps', 'load', 'loads', - 'Unpickler', 'Pickler' - ] - -import numpy.core.multiarray as multiarray -import numpy.core.umath as um -from numpy.core.numeric import array -import functions -import sys - -from cPickle import dump, dumps - -mu = multiarray - -#Use this to add a new axis to an array -#compatibility only -NewAxis = None - -#deprecated -UFuncType = type(um.sin) -UfuncType = type(um.sin) -ArrayType = mu.ndarray -arraytype = mu.ndarray - -LittleEndian = (sys.byteorder == 'little') - -from numpy import deprecate - -# backward compatibility -arrayrange = deprecate(functions.arange, 'arrayrange', 'arange') - -# deprecated names -matrixmultiply = deprecate(mu.dot, 'matrixmultiply', 'dot') - -def DumpArray(m, fp): - m.dump(fp) - -def LoadArray(fp): - import cPickle - return cPickle.load(fp) - -def array_constructor(shape, typecode, thestr, Endian=LittleEndian): - if typecode == "O": - x = array(thestr, "O") - else: - x = mu.fromstring(thestr, typecode) - x.shape = shape - if LittleEndian != Endian: - return x.byteswap(True) - else: - return x - -def pickle_array(a): - if a.dtype.hasobject: - return (array_constructor, - a.shape, a.dtype.char, a.tolist(), LittleEndian) - else: - return (array_constructor, - (a.shape, a.dtype.char, a.tostring(), LittleEndian)) - -def loads(astr): - import cPickle - arr = cPickle.loads(astr.replace('Numeric', 'numpy.oldnumeric')) - return arr - -def load(fp): - return loads(fp.read()) - -def _LoadArray(fp): - import typeconv - ln = fp.readline().split() - if ln[0][0] == 'A': ln[0] = ln[0][1:] - typecode = ln[0][0] - endian = ln[0][1] - itemsize = int(ln[0][2:]) - shape = [int(x) for x in ln[1:]] - sz = itemsize - for val in shape: - sz *= val - dstr = fp.read(sz) - m = mu.fromstring(dstr, typeconv.convtypecode(typecode)) - m.shape = shape - - if (LittleEndian and 
endian == 'B') or (not LittleEndian and endian == 'L'): - return m.byteswap(True) - else: - return m - -import pickle, copy -if sys.version_info[0] >= 3: - class Unpickler(pickle.Unpickler): - # XXX: should we implement this? It's not completely straightforward - # to do. - def __init__(self, *a, **kw): - raise NotImplementedError( - "numpy.oldnumeric.Unpickler is not supported on Python 3") -else: - class Unpickler(pickle.Unpickler): - def load_array(self): - self.stack.append(_LoadArray(self)) - - dispatch = copy.copy(pickle.Unpickler.dispatch) - dispatch['A'] = load_array - -class Pickler(pickle.Pickler): - def __init__(self, *args, **kwds): - raise NotImplementedError, "Don't pickle new arrays with this" - def save_array(self, object): - raise NotImplementedError, "Don't pickle new arrays with this" diff --git a/pythonPackages/numpy/numpy/oldnumeric/fft.py b/pythonPackages/numpy/numpy/oldnumeric/fft.py deleted file mode 100755 index 67f30c7509..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/fft.py +++ /dev/null @@ -1,21 +0,0 @@ - -__all__ = ['fft', 'fft2d', 'fftnd', 'hermite_fft', 'inverse_fft', - 'inverse_fft2d', 'inverse_fftnd', - 'inverse_hermite_fft', 'inverse_real_fft', - 'inverse_real_fft2d', 'inverse_real_fftnd', - 'real_fft', 'real_fft2d', 'real_fftnd'] - -from numpy.fft import fft -from numpy.fft import fft2 as fft2d -from numpy.fft import fftn as fftnd -from numpy.fft import hfft as hermite_fft -from numpy.fft import ifft as inverse_fft -from numpy.fft import ifft2 as inverse_fft2d -from numpy.fft import ifftn as inverse_fftnd -from numpy.fft import ihfft as inverse_hermite_fft -from numpy.fft import irfft as inverse_real_fft -from numpy.fft import irfft2 as inverse_real_fft2d -from numpy.fft import irfftn as inverse_real_fftnd -from numpy.fft import rfft as real_fft -from numpy.fft import rfft2 as real_fft2d -from numpy.fft import rfftn as real_fftnd diff --git a/pythonPackages/numpy/numpy/oldnumeric/fix_default_axis.py 
b/pythonPackages/numpy/numpy/oldnumeric/fix_default_axis.py deleted file mode 100755 index 8483de85e5..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/fix_default_axis.py +++ /dev/null @@ -1,291 +0,0 @@ -""" -This module adds the default axis argument to code which did not specify it -for the functions where the default was changed in NumPy. - -The functions changed are - -add -1 ( all second argument) -====== -nansum -nanmax -nanmin -nanargmax -nanargmin -argmax -argmin -compress 3 - - -add 0 -====== -take 3 -repeat 3 -sum # might cause problems with builtin. -product -sometrue -alltrue -cumsum -cumproduct -average -ptp -cumprod -prod -std -mean -""" -__all__ = ['convertfile', 'convertall', 'converttree'] - -import sys -import os -import re -import glob - - -_args3 = ['compress', 'take', 'repeat'] -_funcm1 = ['nansum', 'nanmax', 'nanmin', 'nanargmax', 'nanargmin', - 'argmax', 'argmin', 'compress'] -_func0 = ['take', 'repeat', 'sum', 'product', 'sometrue', 'alltrue', - 'cumsum', 'cumproduct', 'average', 'ptp', 'cumprod', 'prod', - 'std', 'mean'] - -_all = _func0 + _funcm1 -func_re = {} - -for name in _all: - _astr = r"""%s\s*[(]"""%name - func_re[name] = re.compile(_astr) - - -import string -disallowed = '_' + string.uppercase + string.lowercase + string.digits - -def _add_axis(fstr, name, repl): - alter = 0 - if name in _args3: - allowed_comma = 1 - else: - allowed_comma = 0 - newcode = "" - last = 0 - for obj in func_re[name].finditer(fstr): - nochange = 0 - start, end = obj.span() - if fstr[start-1] in disallowed: - continue - if fstr[start-1] == '.' 
\ - and fstr[start-6:start-1] != 'numpy' \ - and fstr[start-2:start-1] != 'N' \ - and fstr[start-9:start-1] != 'numarray' \ - and fstr[start-8:start-1] != 'numerix' \ - and fstr[start-8:start-1] != 'Numeric': - continue - if fstr[start-1] in ['\t',' ']: - k = start-2 - while fstr[k] in ['\t',' ']: - k -= 1 - if fstr[k-2:k+1] == 'def' or \ - fstr[k-4:k+1] == 'class': - continue - k = end - stack = 1 - ncommas = 0 - N = len(fstr) - while stack: - if k>=N: - nochange =1 - break - if fstr[k] == ')': - stack -= 1 - elif fstr[k] == '(': - stack += 1 - elif stack == 1 and fstr[k] == ',': - ncommas += 1 - if ncommas > allowed_comma: - nochange = 1 - break - k += 1 - if nochange: - continue - alter += 1 - newcode = "%s%s,%s)" % (newcode, fstr[last:k-1], repl) - last = k - if not alter: - newcode = fstr - else: - newcode = "%s%s" % (newcode, fstr[last:]) - return newcode, alter - -def _import_change(fstr, names): - # Four possibilities - # 1.) import numpy with subsequent use of numpy. - # change this to import numpy.oldnumeric as numpy - # 2.) import numpy as XXXX with subsequent use of - # XXXX. ==> import numpy.oldnumeric as XXXX - # 3.) from numpy import * - # with subsequent use of one of the names - # 4.) from numpy import ..., , ... (could span multiple - # lines. 
==> remove all names from list and - # add from numpy.oldnumeric import - - num = 0 - # case 1 - importstr = "import numpy" - ind = fstr.find(importstr) - if (ind > 0): - found = 0 - for name in names: - ind2 = fstr.find("numpy.%s" % name, ind) - if (ind2 > 0): - found = 1 - break - if found: - fstr = "%s%s%s" % (fstr[:ind], "import numpy.oldnumeric as numpy", - fstr[ind+len(importstr):]) - num += 1 - - # case 2 - importre = re.compile("""import numpy as ([A-Za-z0-9_]+)""") - modules = importre.findall(fstr) - if len(modules) > 0: - for module in modules: - found = 0 - for name in names: - ind2 = fstr.find("%s.%s" % (module, name)) - if (ind2 > 0): - found = 1 - break - if found: - importstr = "import numpy as %s" % module - ind = fstr.find(importstr) - fstr = "%s%s%s" % (fstr[:ind], - "import numpy.oldnumeric as %s" % module, - fstr[ind+len(importstr):]) - num += 1 - - # case 3 - importstr = "from numpy import *" - ind = fstr.find(importstr) - if (ind > 0): - found = 0 - for name in names: - ind2 = fstr.find(name, ind) - if (ind2 > 0) and fstr[ind2-1] not in disallowed: - found = 1 - break - if found: - fstr = "%s%s%s" % (fstr[:ind], - "from numpy.oldnumeric import *", - fstr[ind+len(importstr):]) - num += 1 - - # case 4 - ind = 0 - importstr = "from numpy import" - N = len(importstr) - while 1: - ind = fstr.find(importstr, ind) - if (ind < 0): - break - ind += N - ptr = ind+1 - stack = 1 - while stack: - if fstr[ptr] == '\\': - stack += 1 - elif fstr[ptr] == '\n': - stack -= 1 - ptr += 1 - substr = fstr[ind:ptr] - found = 0 - substr = substr.replace('\n',' ') - substr = substr.replace('\\','') - importnames = [x.strip() for x in substr.split(',')] - # determine if any of names are in importnames - addnames = [] - for name in names: - if name in importnames: - importnames.remove(name) - addnames.append(name) - if len(addnames) > 0: - fstr = "%s%s\n%s\n%s" % \ - (fstr[:ind], - "from numpy import %s" % \ - ", ".join(importnames), - "from numpy.oldnumeric import %s" 
% \ - ", ".join(addnames), - fstr[ptr:]) - num += 1 - - return fstr, num - -def add_axis(fstr, import_change=False): - total = 0 - if not import_change: - for name in _funcm1: - fstr, num = _add_axis(fstr, name, 'axis=-1') - total += num - for name in _func0: - fstr, num = _add_axis(fstr, name, 'axis=0') - total += num - return fstr, total - else: - fstr, num = _import_change(fstr, _funcm1+_func0) - return fstr, num - - -def makenewfile(name, filestr): - fid = file(name, 'w') - fid.write(filestr) - fid.close() - -def getfile(name): - fid = file(name) - filestr = fid.read() - fid.close() - return filestr - -def copyfile(name, fstr): - base, ext = os.path.splitext(name) - makenewfile(base+'.orig', fstr) - return - -def convertfile(filename, import_change=False): - """Convert the filename given from using Numeric to using NumPy - - Copies the file to filename.orig and then over-writes the file - with the updated code - """ - filestr = getfile(filename) - newstr, total = add_axis(filestr, import_change) - if total > 0: - print "Changing ", filename - copyfile(filename, filestr) - makenewfile(filename, newstr) - sys.stdout.flush() - -def fromargs(args): - filename = args[1] - convertfile(filename) - -def convertall(direc=os.path.curdir, import_change=False): - """Convert all .py files in the directory given - - For each file, a backup of .py is made as - .py.orig. A new file named .py - is then written with the updated code. 
- """ - files = glob.glob(os.path.join(direc,'*.py')) - for afile in files: - convertfile(afile, import_change) - -def _func(arg, dirname, fnames): - convertall(dirname, import_change=arg) - -def converttree(direc=os.path.curdir, import_change=False): - """Convert all .py files in the tree given - - """ - os.path.walk(direc, _func, import_change) - -if __name__ == '__main__': - fromargs(sys.argv) diff --git a/pythonPackages/numpy/numpy/oldnumeric/functions.py b/pythonPackages/numpy/numpy/oldnumeric/functions.py deleted file mode 100755 index 5b2b1a8bfd..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/functions.py +++ /dev/null @@ -1,124 +0,0 @@ -# Functions that should behave the same as Numeric and need changing - -import numpy as np -import numpy.core.multiarray as mu -import numpy.core.numeric as nn -from typeconv import convtypecode, convtypecode2 - -__all__ = ['take', 'repeat', 'sum', 'product', 'sometrue', 'alltrue', - 'cumsum', 'cumproduct', 'compress', 'fromfunction', - 'ones', 'empty', 'identity', 'zeros', 'array', 'asarray', - 'nonzero', 'reshape', 'arange', 'fromstring', 'ravel', 'trace', - 'indices', 'where','sarray','cross_product', 'argmax', 'argmin', - 'average'] - -def take(a, indicies, axis=0): - return np.take(a, indicies, axis) - -def repeat(a, repeats, axis=0): - return np.repeat(a, repeats, axis) - -def sum(x, axis=0): - return np.sum(x, axis) - -def product(x, axis=0): - return np.product(x, axis) - -def sometrue(x, axis=0): - return np.sometrue(x, axis) - -def alltrue(x, axis=0): - return np.alltrue(x, axis) - -def cumsum(x, axis=0): - return np.cumsum(x, axis) - -def cumproduct(x, axis=0): - return np.cumproduct(x, axis) - -def argmax(x, axis=-1): - return np.argmax(x, axis) - -def argmin(x, axis=-1): - return np.argmin(x, axis) - -def compress(condition, m, axis=-1): - return np.compress(condition, m, axis) - -def fromfunction(args, dimensions): - return np.fromfunction(args, dimensions, dtype=int) - -def ones(shape, typecode='l', 
savespace=0, dtype=None): - """ones(shape, dtype=int) returns an array of the given - dimensions which is initialized to all ones. - """ - dtype = convtypecode(typecode,dtype) - a = mu.empty(shape, dtype) - a.fill(1) - return a - -def zeros(shape, typecode='l', savespace=0, dtype=None): - """zeros(shape, dtype=int) returns an array of the given - dimensions which is initialized to all zeros - """ - dtype = convtypecode(typecode,dtype) - return mu.zeros(shape, dtype) - -def identity(n,typecode='l', dtype=None): - """identity(n) returns the identity 2-d array of shape n x n. - """ - dtype = convtypecode(typecode, dtype) - return nn.identity(n, dtype) - -def empty(shape, typecode='l', dtype=None): - dtype = convtypecode(typecode, dtype) - return mu.empty(shape, dtype) - -def array(sequence, typecode=None, copy=1, savespace=0, dtype=None): - dtype = convtypecode2(typecode, dtype) - return mu.array(sequence, dtype, copy=copy) - -def sarray(a, typecode=None, copy=False, dtype=None): - dtype = convtypecode2(typecode, dtype) - return mu.array(a, dtype, copy) - -def asarray(a, typecode=None, dtype=None): - dtype = convtypecode2(typecode, dtype) - return mu.array(a, dtype, copy=0) - -def nonzero(a): - res = np.nonzero(a) - if len(res) == 1: - return res[0] - else: - raise ValueError, "Input argument must be 1d" - -def reshape(a, shape): - return np.reshape(a, shape) - -def arange(start, stop=None, step=1, typecode=None, dtype=None): - dtype = convtypecode2(typecode, dtype) - return mu.arange(start, stop, step, dtype) - -def fromstring(string, typecode='l', count=-1, dtype=None): - dtype = convtypecode(typecode, dtype) - return mu.fromstring(string, dtype, count=count) - -def ravel(m): - return np.ravel(m) - -def trace(a, offset=0, axis1=0, axis2=1): - # pass the caller's arguments through instead of hard-coded defaults - return np.trace(a, offset, axis1, axis2) - -def indices(dimensions, typecode=None, dtype=None): - dtype = convtypecode(typecode, dtype) - return np.indices(dimensions, dtype) - -def where(condition, x, y): - return 
np.where(condition, x, y) - -def cross_product(a, b, axis1=-1, axis2=-1): - return np.cross(a, b, axis1, axis2) - -def average(a, axis=0, weights=None, returned=False): - return np.average(a, axis, weights, returned) diff --git a/pythonPackages/numpy/numpy/oldnumeric/linear_algebra.py b/pythonPackages/numpy/numpy/oldnumeric/linear_algebra.py deleted file mode 100755 index 2e7a264fe1..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/linear_algebra.py +++ /dev/null @@ -1,83 +0,0 @@ -"""Backward compatible with LinearAlgebra from Numeric -""" -# This module is a lite version of the linalg.py module in SciPy which contains -# high-level Python interface to the LAPACK library. The lite version -# only accesses the following LAPACK functions: dgesv, zgesv, dgeev, -# zgeev, dgesdd, zgesdd, dgelsd, zgelsd, dsyevd, zheevd, dgetrf, dpotrf. - - -__all__ = ['LinAlgError', 'solve_linear_equations', - 'inverse', 'cholesky_decomposition', 'eigenvalues', - 'Heigenvalues', 'generalized_inverse', - 'determinant', 'singular_value_decomposition', - 'eigenvectors', 'Heigenvectors', - 'linear_least_squares' - ] - -from numpy.core import transpose -import numpy.linalg as linalg - -# Linear equations - -LinAlgError = linalg.LinAlgError - -def solve_linear_equations(a, b): - return linalg.solve(a,b) - -# Matrix inversion - -def inverse(a): - return linalg.inv(a) - -# Cholesky decomposition - -def cholesky_decomposition(a): - return linalg.cholesky(a) - -# Eigenvalues - -def eigenvalues(a): - return linalg.eigvals(a) - -def Heigenvalues(a, UPLO='L'): - return linalg.eigvalsh(a,UPLO) - -# Eigenvectors - -def eigenvectors(A): - w, v = linalg.eig(A) - return w, transpose(v) - -def Heigenvectors(A): - w, v = linalg.eigh(A) - return w, transpose(v) - -# Generalized inverse - -def generalized_inverse(a, rcond = 1.e-10): - return linalg.pinv(a, rcond) - -# Determinant - -def determinant(a): - return linalg.det(a) - -# Linear Least Squares - -def linear_least_squares(a, b, rcond=1.e-10): - 
"""returns x,resids,rank,s -where x minimizes 2-norm(|b - Ax|) - resids is the sum square residuals - rank is the rank of A - s contains the singular values of A in descending order - -If b is a matrix then x is also a matrix with corresponding columns. -If the rank of A is less than the number of columns of A or greater than -the number of rows, then residuals will be returned as an empty array -otherwise resids = sum((b-dot(A,x))**2). -Singular values less than s[0]*rcond are treated as zero. -""" - return linalg.lstsq(a,b,rcond) - -def singular_value_decomposition(A, full_matrices=0): - return linalg.svd(A, full_matrices) diff --git a/pythonPackages/numpy/numpy/oldnumeric/ma.py b/pythonPackages/numpy/numpy/oldnumeric/ma.py deleted file mode 100755 index 1284c6019f..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/ma.py +++ /dev/null @@ -1,2270 +0,0 @@ -"""MA: a facility for dealing with missing observations -MA is generally used as a numpy.array look-alike. -by Paul F. Dubois. - -Copyright 1999, 2000, 2001 Regents of the University of California. -Released for unlimited redistribution. -Adapted for numpy_core 2005 by Travis Oliphant and -(mainly) Paul Dubois. - -""" -import types, sys - -import numpy.core.umath as umath -import numpy.core.fromnumeric as fromnumeric -from numpy.core.numeric import newaxis, ndarray, inf -from numpy.core.fromnumeric import amax, amin -from numpy.core.numerictypes import bool_, typecodes -import numpy.core.numeric as numeric -import warnings - -if sys.version_info[0] >= 3: - from functools import reduce - -# Ufunc domain lookup for __array_wrap__ -ufunc_domain = {} -# Ufunc fills lookup for __array__ -ufunc_fills = {} - -MaskType = bool_ -nomask = MaskType(0) -divide_tolerance = 1.e-35 - -class MAError (Exception): - def __init__ (self, args=None): - "Create an exception" - - # The .args attribute must be a tuple. 
- if not isinstance(args, tuple): - args = (args,) - self.args = args - def __str__(self): - "Calculate the string representation" - return str(self.args[0]) - __repr__ = __str__ - -class _MaskedPrintOption: - "One instance of this class, masked_print_option, is created." - def __init__ (self, display): - "Create the masked print option object." - self.set_display(display) - self._enabled = 1 - - def display (self): - "Show what prints for masked values." - return self._display - - def set_display (self, s): - "set_display(s) sets what prints for masked values." - self._display = s - - def enabled (self): - "Is the use of the display value enabled?" - return self._enabled - - def enable(self, flag=1): - "Set the enabling flag to flag." - self._enabled = flag - - def __str__ (self): - return str(self._display) - - __repr__ = __str__ - -#if you single index into a masked location you get this object. -masked_print_option = _MaskedPrintOption('--') - -# Use single element arrays or scalars. -default_real_fill_value = 1.e20 -default_complex_fill_value = 1.e20 + 0.0j -default_character_fill_value = '-' -default_integer_fill_value = 999999 -default_object_fill_value = '?' - -def default_fill_value (obj): - "Function to calculate default fill value for an object." 
- if isinstance(obj, types.FloatType): - return default_real_fill_value - elif isinstance(obj, types.IntType) or isinstance(obj, types.LongType): - return default_integer_fill_value - elif isinstance(obj, types.StringType): - return default_character_fill_value - elif isinstance(obj, types.ComplexType): - return default_complex_fill_value - elif isinstance(obj, MaskedArray) or isinstance(obj, ndarray): - x = obj.dtype.char - if x in typecodes['Float']: - return default_real_fill_value - if x in typecodes['Integer']: - return default_integer_fill_value - if x in typecodes['Complex']: - return default_complex_fill_value - if x in typecodes['Character']: - return default_character_fill_value - if x in typecodes['UnsignedInteger']: - return umath.absolute(default_integer_fill_value) - return default_object_fill_value - else: - return default_object_fill_value - -def minimum_fill_value (obj): - "Function to calculate default fill value suitable for taking minima." - if isinstance(obj, types.FloatType): - return numeric.inf - elif isinstance(obj, types.IntType) or isinstance(obj, types.LongType): - return sys.maxint - elif isinstance(obj, MaskedArray) or isinstance(obj, ndarray): - x = obj.dtype.char - if x in typecodes['Float']: - return numeric.inf - if x in typecodes['Integer']: - return sys.maxint - if x in typecodes['UnsignedInteger']: - return sys.maxint - else: - raise TypeError, 'Unsuitable type for calculating minimum.' - -def maximum_fill_value (obj): - "Function to calculate default fill value suitable for taking maxima." 
- if isinstance(obj, types.FloatType): - return -inf - elif isinstance(obj, types.IntType) or isinstance(obj, types.LongType): - return -sys.maxint - elif isinstance(obj, MaskedArray) or isinstance(obj, ndarray): - x = obj.dtype.char - if x in typecodes['Float']: - return -inf - if x in typecodes['Integer']: - return -sys.maxint - if x in typecodes['UnsignedInteger']: - return 0 - else: - raise TypeError, 'Unsuitable type for calculating maximum.' - -def set_fill_value (a, fill_value): - "Set fill value of a if it is a masked array." - if isMaskedArray(a): - a.set_fill_value (fill_value) - -def getmask (a): - """Mask of values in a; could be nomask. - Returns nomask if a is not a masked array. - To get an array for sure use getmaskarray.""" - if isinstance(a, MaskedArray): - return a.raw_mask() - else: - return nomask - -def getmaskarray (a): - """Mask of values in a; an array of zeros if mask is nomask - or not a masked array, and is a byte-sized integer. - Do not try to add up entries, for example. - """ - m = getmask(a) - if m is nomask: - return make_mask_none(shape(a)) - else: - return m - -def is_mask (m): - """Is m a legal mask? Does not check contents, only type. - """ - try: - return m.dtype.type is MaskType - except AttributeError: - return False - -def make_mask (m, copy=0, flag=0): - """make_mask(m, copy=0, flag=0) - return m as a mask, creating a copy if necessary or requested. - Can accept any sequence of integers or nomask. Does not check - that contents must be 0s and 1s. - if flag, return nomask if m contains no true elements. 
- """ - if m is nomask: - return nomask - elif isinstance(m, ndarray): - if m.dtype.type is MaskType: - if copy: - result = numeric.array(m, dtype=MaskType, copy=copy) - else: - result = m - else: - result = m.astype(MaskType) - else: - result = filled(m, True).astype(MaskType) - - if flag and not fromnumeric.sometrue(fromnumeric.ravel(result)): - return nomask - else: - return result - -def make_mask_none (s): - "Return a mask of all zeros of shape s." - result = numeric.zeros(s, dtype=MaskType) - result.shape = s - return result - -def mask_or (m1, m2): - """Logical or of the mask candidates m1 and m2, treating nomask as false. - Result may equal m1 or m2 if the other is nomask. - """ - if m1 is nomask: return make_mask(m2) - if m2 is nomask: return make_mask(m1) - if m1 is m2 and is_mask(m1): return m1 - return make_mask(umath.logical_or(m1, m2)) - -def filled (a, value = None): - """a as a contiguous numeric array with any masked areas replaced by value - if value is None or the special element "masked", get_fill_value(a) - is used instead. - - If a is already a contiguous numeric array, a itself is returned. - - filled(a) can be used to be sure that the result is numeric when - passing an object a to other software ignorant of MA, in particular to - numeric itself. - """ - if isinstance(a, MaskedArray): - return a.filled(value) - elif isinstance(a, ndarray) and a.flags['CONTIGUOUS']: - return a - elif isinstance(a, types.DictType): - return numeric.array(a, 'O') - else: - return numeric.array(a) - -def get_fill_value (a): - """ - The fill value of a, if it has one; otherwise, the default fill value - for that type. 
- """ - if isMaskedArray(a): - result = a.fill_value() - else: - result = default_fill_value(a) - return result - -def common_fill_value (a, b): - "The common fill_value of a and b, if there is one, or None" - t1 = get_fill_value(a) - t2 = get_fill_value(b) - if t1 == t2: return t1 - return None - -# Domain functions return 1 where the argument(s) are not in the domain. -class domain_check_interval: - "domain_check_interval(a,b)(x) = true where x < a or x > b" - def __init__(self, y1, y2): - "domain_check_interval(a,b)(x) = true where x < a or x > b" - self.y1 = y1 - self.y2 = y2 - - def __call__ (self, x): - "Execute the call behavior." - return umath.logical_or(umath.greater (x, self.y2), - umath.less(x, self.y1) - ) - -class domain_tan: - "domain_tan(eps) = true where abs(cos(x)) < eps" - def __init__(self, eps): - "domain_tan(eps) = true where abs(cos(x)) < eps" - self.eps = eps - - def __call__ (self, x): - "Execute the call behavior." - return umath.less(umath.absolute(umath.cos(x)), self.eps) - -class domain_greater: - "domain_greater(v)(x) = true where x <= v" - def __init__(self, critical_value): - "domain_greater(v)(x) = true where x <= v" - self.critical_value = critical_value - - def __call__ (self, x): - "Execute the call behavior." - return umath.less_equal (x, self.critical_value) - -class domain_greater_equal: - "domain_greater_equal(v)(x) = true where x < v" - def __init__(self, critical_value): - "domain_greater_equal(v)(x) = true where x < v" - self.critical_value = critical_value - - def __call__ (self, x): - "Execute the call behavior." - return umath.less (x, self.critical_value) - -class masked_unary_operation: - def __init__ (self, aufunc, fill=0, domain=None): - """ masked_unary_operation(aufunc, fill=0, domain=None) - aufunc(fill) must be defined - self(x) returns aufunc(x) - with masked values where domain(x) is true or getmask(x) is true. 
- """ - self.f = aufunc - self.fill = fill - self.domain = domain - self.__doc__ = getattr(aufunc, "__doc__", str(aufunc)) - self.__name__ = getattr(aufunc, "__name__", str(aufunc)) - ufunc_domain[aufunc] = domain - ufunc_fills[aufunc] = fill, - - def __call__ (self, a, *args, **kwargs): - "Execute the call behavior." -# numeric tries to return scalars rather than arrays when given scalars. - m = getmask(a) - d1 = filled(a, self.fill) - if self.domain is not None: - m = mask_or(m, self.domain(d1)) - result = self.f(d1, *args, **kwargs) - return masked_array(result, m) - - def __str__ (self): - return "Masked version of " + str(self.f) - - -class domain_safe_divide: - def __init__ (self, tolerance=divide_tolerance): - self.tolerance = tolerance - def __call__ (self, a, b): - return umath.absolute(a) * self.tolerance >= umath.absolute(b) - -class domained_binary_operation: - """Binary operations that have a domain, like divide. These are complicated - so they are a separate class. They have no reduce, outer or accumulate. - """ - def __init__ (self, abfunc, domain, fillx=0, filly=0): - """abfunc(fillx, filly) must be defined. - abfunc(x, filly) = x for all x to enable reduce. - """ - self.f = abfunc - self.domain = domain - self.fillx = fillx - self.filly = filly - self.__doc__ = getattr(abfunc, "__doc__", str(abfunc)) - self.__name__ = getattr(abfunc, "__name__", str(abfunc)) - ufunc_domain[abfunc] = domain - ufunc_fills[abfunc] = fillx, filly - - def __call__(self, a, b): - "Execute the call behavior." 
- ma = getmask(a) - mb = getmask(b) - d1 = filled(a, self.fillx) - d2 = filled(b, self.filly) - t = self.domain(d1, d2) - - if fromnumeric.sometrue(t, None): - d2 = where(t, self.filly, d2) - mb = mask_or(mb, t) - m = mask_or(ma, mb) - result = self.f(d1, d2) - return masked_array(result, m) - - def __str__ (self): - return "Masked version of " + str(self.f) - -class masked_binary_operation: - def __init__ (self, abfunc, fillx=0, filly=0): - """abfunc(fillx, filly) must be defined. - abfunc(x, filly) = x for all x to enable reduce. - """ - self.f = abfunc - self.fillx = fillx - self.filly = filly - self.__doc__ = getattr(abfunc, "__doc__", str(abfunc)) - ufunc_domain[abfunc] = None - ufunc_fills[abfunc] = fillx, filly - - def __call__ (self, a, b, *args, **kwargs): - "Execute the call behavior." - m = mask_or(getmask(a), getmask(b)) - d1 = filled(a, self.fillx) - d2 = filled(b, self.filly) - result = self.f(d1, d2, *args, **kwargs) - if isinstance(result, ndarray) \ - and m.ndim != 0 \ - and m.shape != result.shape: - m = mask_or(getmaskarray(a), getmaskarray(b)) - return masked_array(result, m) - - def reduce (self, target, axis=0, dtype=None): - """Reduce target along the given axis with this function.""" - m = getmask(target) - t = filled(target, self.filly) - if t.shape == (): - t = t.reshape(1) - if m is not nomask: - m = make_mask(m, copy=1) - m.shape = (1,) - if m is nomask: - t = self.f.reduce(t, axis) - else: - t = masked_array (t, m) - # XXX: "or t.dtype" below is a workaround for what appears - # XXX: to be a bug in reduce. - t = self.f.reduce(filled(t, self.filly), axis, - dtype=dtype or t.dtype) - m = umath.logical_and.reduce(m, axis) - if isinstance(t, ndarray): - return masked_array(t, m, get_fill_value(target)) - elif m: - return masked - else: - return t - - def outer (self, a, b): - "Return the function applied to the outer product of a and b." 
- ma = getmask(a) - mb = getmask(b) - if ma is nomask and mb is nomask: - m = nomask - else: - ma = getmaskarray(a) - mb = getmaskarray(b) - m = logical_or.outer(ma, mb) - d = self.f.outer(filled(a, self.fillx), filled(b, self.filly)) - return masked_array(d, m) - - def accumulate (self, target, axis=0): - """Accumulate target along axis after filling with y fill value.""" - t = filled(target, self.filly) - return masked_array (self.f.accumulate (t, axis)) - def __str__ (self): - return "Masked version of " + str(self.f) - -sqrt = masked_unary_operation(umath.sqrt, 0.0, domain_greater_equal(0.0)) -log = masked_unary_operation(umath.log, 1.0, domain_greater(0.0)) -log10 = masked_unary_operation(umath.log10, 1.0, domain_greater(0.0)) -exp = masked_unary_operation(umath.exp) -conjugate = masked_unary_operation(umath.conjugate) -sin = masked_unary_operation(umath.sin) -cos = masked_unary_operation(umath.cos) -tan = masked_unary_operation(umath.tan, 0.0, domain_tan(1.e-35)) -arcsin = masked_unary_operation(umath.arcsin, 0.0, domain_check_interval(-1.0, 1.0)) -arccos = masked_unary_operation(umath.arccos, 0.0, domain_check_interval(-1.0, 1.0)) -arctan = masked_unary_operation(umath.arctan) -# Missing from numeric -arcsinh = masked_unary_operation(umath.arcsinh) -arccosh = masked_unary_operation(umath.arccosh, 1.0, domain_greater_equal(1.0)) -arctanh = masked_unary_operation(umath.arctanh, 0.0, domain_check_interval(-1.0+1e-15, 1.0-1e-15)) -sinh = masked_unary_operation(umath.sinh) -cosh = masked_unary_operation(umath.cosh) -tanh = masked_unary_operation(umath.tanh) -absolute = masked_unary_operation(umath.absolute) -fabs = masked_unary_operation(umath.fabs) -negative = masked_unary_operation(umath.negative) - -def nonzero(a): - """returns the indices of the elements of a which are not zero - and not masked - """ - return numeric.asarray(filled(a, 0).nonzero()) - -around = masked_unary_operation(fromnumeric.round_) -floor = masked_unary_operation(umath.floor) -ceil = 
masked_unary_operation(umath.ceil) -logical_not = masked_unary_operation(umath.logical_not) - -add = masked_binary_operation(umath.add) -subtract = masked_binary_operation(umath.subtract) -subtract.reduce = None -multiply = masked_binary_operation(umath.multiply, 1, 1) -divide = domained_binary_operation(umath.divide, domain_safe_divide(), 0, 1) -true_divide = domained_binary_operation(umath.true_divide, domain_safe_divide(), 0, 1) -floor_divide = domained_binary_operation(umath.floor_divide, domain_safe_divide(), 0, 1) -remainder = domained_binary_operation(umath.remainder, domain_safe_divide(), 0, 1) -fmod = domained_binary_operation(umath.fmod, domain_safe_divide(), 0, 1) -hypot = masked_binary_operation(umath.hypot) -arctan2 = masked_binary_operation(umath.arctan2, 0.0, 1.0) -arctan2.reduce = None -equal = masked_binary_operation(umath.equal) -equal.reduce = None -not_equal = masked_binary_operation(umath.not_equal) -not_equal.reduce = None -less_equal = masked_binary_operation(umath.less_equal) -less_equal.reduce = None -greater_equal = masked_binary_operation(umath.greater_equal) -greater_equal.reduce = None -less = masked_binary_operation(umath.less) -less.reduce = None -greater = masked_binary_operation(umath.greater) -greater.reduce = None -logical_and = masked_binary_operation(umath.logical_and) -alltrue = masked_binary_operation(umath.logical_and, 1, 1).reduce -logical_or = masked_binary_operation(umath.logical_or) -sometrue = logical_or.reduce -logical_xor = masked_binary_operation(umath.logical_xor) -bitwise_and = masked_binary_operation(umath.bitwise_and) -bitwise_or = masked_binary_operation(umath.bitwise_or) -bitwise_xor = masked_binary_operation(umath.bitwise_xor) - -def rank (object): - return fromnumeric.rank(filled(object)) - -def shape (object): - return fromnumeric.shape(filled(object)) - -def size (object, axis=None): - return fromnumeric.size(filled(object), axis) - -class MaskedArray (object): - """Arrays with possibly masked values. 
- Masked values of 1 exclude the corresponding element from - any computation. - - Construction: - x = array(data, dtype=None, copy=True, order=False, - mask = nomask, fill_value=None) - - If copy=False, every effort is made not to copy the data: - If data is a MaskedArray, and argument mask=nomask, - then the candidate data is data.data and the - mask used is data.mask. If data is a numeric array, - it is used as the candidate raw data. - If dtype is not None and - is != data.dtype.char then a data copy is required. - Otherwise, the candidate is used. - - If a data copy is required, raw data stored is the result of: - numeric.array(data, dtype=dtype.char, copy=copy) - - If mask is nomask there are no masked values. Otherwise mask must - be convertible to an array of booleans with the same shape as x. - - fill_value is used to fill in masked values when necessary, - such as when printing and in method/function filled(). - The fill_value is not used for computation within this module. - """ - __array_priority__ = 10.1 - def __init__(self, data, dtype=None, copy=True, order=False, - mask=nomask, fill_value=None): - """array(data, dtype=None, copy=True, order=False, mask=nomask, fill_value=None) - If data already a numeric array, its dtype becomes the default value of dtype. 
- """ - if dtype is None: - tc = None - else: - tc = numeric.dtype(dtype) - need_data_copied = copy - if isinstance(data, MaskedArray): - c = data.data - if tc is None: - tc = c.dtype - elif tc != c.dtype: - need_data_copied = True - if mask is nomask: - mask = data.mask - elif mask is not nomask: #attempting to change the mask - need_data_copied = True - - elif isinstance(data, ndarray): - c = data - if tc is None: - tc = c.dtype - elif tc != c.dtype: - need_data_copied = True - else: - need_data_copied = False #because I'll do it now - c = numeric.array(data, dtype=tc, copy=True, order=order) - tc = c.dtype - - if need_data_copied: - if tc == c.dtype: - self._data = numeric.array(c, dtype=tc, copy=True, order=order) - else: - self._data = c.astype(tc) - else: - self._data = c - - if mask is nomask: - self._mask = nomask - self._shared_mask = 0 - else: - self._mask = make_mask (mask) - if self._mask is nomask: - self._shared_mask = 0 - else: - self._shared_mask = (self._mask is mask) - nm = size(self._mask) - nd = size(self._data) - if nm != nd: - if nm == 1: - self._mask = fromnumeric.resize(self._mask, self._data.shape) - self._shared_mask = 0 - elif nd == 1: - self._data = fromnumeric.resize(self._data, self._mask.shape) - self._data.shape = self._mask.shape - else: - raise MAError, "Mask and data not compatible." - elif nm == 1 and shape(self._mask) != shape(self._data): - self.unshare_mask() - self._mask.shape = self._data.shape - - self.set_fill_value(fill_value) - - def __array__ (self, t=None, context=None): - "Special hook for numeric. Converts to numeric if possible." - if self._mask is not nomask: - if fromnumeric.ravel(self._mask).any(): - if context is None: - warnings.warn("Cannot automatically convert masked array to "\ - "numeric because data\n is masked in one or "\ - "more locations."); - return self._data - #raise MAError, \ - # """Cannot automatically convert masked array to numeric because data - # is masked in one or more locations. 
- # """ - else: - func, args, i = context - fills = ufunc_fills.get(func) - if fills is None: - raise MAError, "%s not known to ma" % func - return self.filled(fills[i]) - else: # Mask is all false - # Optimize to avoid future invocations of this section. - self._mask = nomask - self._shared_mask = 0 - if t: - return self._data.astype(t) - else: - return self._data - - def __array_wrap__ (self, array, context=None): - """Special hook for ufuncs. - - Wraps the numpy array and sets the mask according to - context. - """ - if context is None: - return MaskedArray(array, copy=False, mask=nomask) - func, args = context[:2] - domain = ufunc_domain[func] - m = reduce(mask_or, [getmask(a) for a in args]) - if domain is not None: - m = mask_or(m, domain(*[getattr(a, '_data', a) - for a in args])) - if m is not nomask: - try: - shape = array.shape - except AttributeError: - pass - else: - if m.shape != shape: - m = reduce(mask_or, [getmaskarray(a) for a in args]) - - return MaskedArray(array, copy=False, mask=m) - - def _get_shape(self): - "Return the current shape." - return self._data.shape - - def _set_shape (self, newshape): - "Set the array's shape." - self._data.shape = newshape - if self._mask is not nomask: - self._mask = self._mask.copy() - self._mask.shape = newshape - - def _get_flat(self): - """Calculate the flat value. - """ - if self._mask is nomask: - return masked_array(self._data.ravel(), mask=nomask, - fill_value = self.fill_value()) - else: - return masked_array(self._data.ravel(), - mask=self._mask.ravel(), - fill_value = self.fill_value()) - - def _set_flat (self, value): - "x.flat = value" - y = self.ravel() - y[:] = value - - def _get_real(self): - "Get the real part of a complex array." 
- if self._mask is nomask: - return masked_array(self._data.real, mask=nomask, - fill_value = self.fill_value()) - else: - return masked_array(self._data.real, mask=self._mask, - fill_value = self.fill_value()) - - def _set_real (self, value): - "x.real = value" - y = self.real - y[...] = value - - def _get_imaginary(self): - "Get the imaginary part of a complex array." - if self._mask is nomask: - return masked_array(self._data.imag, mask=nomask, - fill_value = self.fill_value()) - else: - return masked_array(self._data.imag, mask=self._mask, - fill_value = self.fill_value()) - - def _set_imaginary (self, value): - "x.imaginary = value" - y = self.imaginary - y[...] = value - - def __str__(self): - """Calculate the str representation, using masked for fill if - it is enabled. Otherwise fill with fill value. - """ - if masked_print_option.enabled(): - f = masked_print_option - # XXX: Without the following special case masked - # XXX: would print as "[--]", not "--". Can we avoid - # XXX: checks for masked by choosing a different value - # XXX: for the masked singleton? 2005-01-05 -- sasha - if self is masked: - return str(f) - m = self._mask - if m is not nomask and m.shape == () and m: - return str(f) - # convert to object array to make filled work - self = self.astype(object) - else: - f = self.fill_value() - res = self.filled(f) - return str(res) - - def __repr__(self): - """Calculate the repr representation, using masked for fill if - it is enabled. Otherwise fill with fill value. 
- """ - with_mask = """\ -array(data = - %(data)s, - mask = - %(mask)s, - fill_value=%(fill)s) -""" - with_mask1 = """\ -array(data = %(data)s, - mask = %(mask)s, - fill_value=%(fill)s) -""" - without_mask = """array( - %(data)s)""" - without_mask1 = """array(%(data)s)""" - - n = len(self.shape) - if self._mask is nomask: - if n <= 1: - return without_mask1 % {'data':str(self.filled())} - return without_mask % {'data':str(self.filled())} - else: - if n <= 1: - return with_mask1 % { - 'data': str(self.filled()), - 'mask': str(self._mask), - 'fill': str(self.fill_value()) - } - return with_mask % { - 'data': str(self.filled()), - 'mask': str(self._mask), - 'fill': str(self.fill_value()) - } - - def __float__(self): - "Convert self to float." - self.unmask() - if self._mask is not nomask: - raise MAError, 'Cannot convert masked element to a Python float.' - return float(self.data.item()) - - def __int__(self): - "Convert self to int." - self.unmask() - if self._mask is not nomask: - raise MAError, 'Cannot convert masked element to a Python int.' - return int(self.data.item()) - - def __getitem__(self, i): - "Get item described by i. Not a copy as in previous versions." - self.unshare_mask() - m = self._mask - dout = self._data[i] - if m is nomask: - try: - if dout.size == 1: - return dout - else: - return masked_array(dout, fill_value=self._fill_value) - except AttributeError: - return dout - mi = m[i] - if mi.size == 1: - if mi: - return masked - else: - return dout - else: - return masked_array(dout, mi, fill_value=self._fill_value) - -# -------- -# setitem and setslice notes -# note that if value is masked, it means to mask those locations. -# setting a value changes the mask to match the value in those locations. 
- - def __setitem__(self, index, value): - "Set item described by index. If value is masked, mask those locations." - d = self._data - if self is masked: - raise MAError, 'Cannot alter masked elements.' - if value is masked: - if self._mask is nomask: - self._mask = make_mask_none(d.shape) - self._shared_mask = False - else: - self.unshare_mask() - self._mask[index] = True - return - m = getmask(value) - value = filled(value).astype(d.dtype) - d[index] = value - if m is nomask: - if self._mask is not nomask: - self.unshare_mask() - self._mask[index] = False - else: - if self._mask is nomask: - self._mask = make_mask_none(d.shape) - self._shared_mask = True - else: - self.unshare_mask() - self._mask[index] = m - - def __nonzero__(self): - """returns true if any element is non-zero or masked - - """ - # XXX: This changes bool conversion logic from MA. - # XXX: In MA bool(a) == len(a) != 0, but in numpy - # XXX: scalars do not have len - m = self._mask - d = self._data - return bool(m is not nomask and m.any() - or d is not nomask and d.any()) - - def __len__ (self): - """Return length of first dimension. 
This is weird but Python's - slicing behavior depends on it.""" - return len(self._data) - - def __and__(self, other): - "Return bitwise_and" - return bitwise_and(self, other) - - def __or__(self, other): - "Return bitwise_or" - return bitwise_or(self, other) - - def __xor__(self, other): - "Return bitwise_xor" - return bitwise_xor(self, other) - - __rand__ = __and__ - __ror__ = __or__ - __rxor__ = __xor__ - - def __abs__(self): - "Return absolute(self)" - return absolute(self) - - def __neg__(self): - "Return negative(self)" - return negative(self) - - def __pos__(self): - "Return array(self)" - return array(self) - - def __add__(self, other): - "Return add(self, other)" - return add(self, other) - - __radd__ = __add__ - - def __mod__ (self, other): - "Return remainder(self, other)" - return remainder(self, other) - - def __rmod__ (self, other): - "Return remainder(other, self)" - return remainder(other, self) - - def __lshift__ (self, n): - return left_shift(self, n) - - def __rshift__ (self, n): - return right_shift(self, n) - - def __sub__(self, other): - "Return subtract(self, other)" - return subtract(self, other) - - def __rsub__(self, other): - "Return subtract(other, self)" - return subtract(other, self) - - def __mul__(self, other): - "Return multiply(self, other)" - return multiply(self, other) - - __rmul__ = __mul__ - - def __div__(self, other): - "Return divide(self, other)" - return divide(self, other) - - def __rdiv__(self, other): - "Return divide(other, self)" - return divide(other, self) - - def __truediv__(self, other): - "Return divide(self, other)" - return true_divide(self, other) - - def __rtruediv__(self, other): - "Return divide(other, self)" - return true_divide(other, self) - - def __floordiv__(self, other): - "Return divide(self, other)" - return floor_divide(self, other) - - def __rfloordiv__(self, other): - "Return divide(other, self)" - return floor_divide(other, self) - - def __pow__(self, other, third=None): - "Return power(self, 
other, third)" - return power(self, other, third) - - def __sqrt__(self): - "Return sqrt(self)" - return sqrt(self) - - def __iadd__(self, other): - "Add other to self in place." - t = self._data.dtype.char - f = filled(other, 0) - t1 = f.dtype.char - if t == t1: - pass - elif t in typecodes['Integer']: - if t1 in typecodes['Integer']: - f = f.astype(t) - else: - raise TypeError, 'Incorrect type for in-place operation.' - elif t in typecodes['Float']: - if t1 in typecodes['Integer']: - f = f.astype(t) - elif t1 in typecodes['Float']: - f = f.astype(t) - else: - raise TypeError, 'Incorrect type for in-place operation.' - elif t in typecodes['Complex']: - if t1 in typecodes['Integer']: - f = f.astype(t) - elif t1 in typecodes['Float']: - f = f.astype(t) - elif t1 in typecodes['Complex']: - f = f.astype(t) - else: - raise TypeError, 'Incorrect type for in-place operation.' - else: - raise TypeError, 'Incorrect type for in-place operation.' - - if self._mask is nomask: - self._data += f - m = getmask(other) - self._mask = m - self._shared_mask = m is not nomask - else: - result = add(self, masked_array(f, mask=getmask(other))) - self._data = result.data - self._mask = result.mask - self._shared_mask = 1 - return self - - def __imul__(self, other): - "Add other to self in place." - t = self._data.dtype.char - f = filled(other, 0) - t1 = f.dtype.char - if t == t1: - pass - elif t in typecodes['Integer']: - if t1 in typecodes['Integer']: - f = f.astype(t) - else: - raise TypeError, 'Incorrect type for in-place operation.' - elif t in typecodes['Float']: - if t1 in typecodes['Integer']: - f = f.astype(t) - elif t1 in typecodes['Float']: - f = f.astype(t) - else: - raise TypeError, 'Incorrect type for in-place operation.' - elif t in typecodes['Complex']: - if t1 in typecodes['Integer']: - f = f.astype(t) - elif t1 in typecodes['Float']: - f = f.astype(t) - elif t1 in typecodes['Complex']: - f = f.astype(t) - else: - raise TypeError, 'Incorrect type for in-place operation.' 
- else: - raise TypeError, 'Incorrect type for in-place operation.' - - if self._mask is nomask: - self._data *= f - m = getmask(other) - self._mask = m - self._shared_mask = m is not nomask - else: - result = multiply(self, masked_array(f, mask=getmask(other))) - self._data = result.data - self._mask = result.mask - self._shared_mask = 1 - return self - - def __isub__(self, other): - "Subtract other from self in place." - t = self._data.dtype.char - f = filled(other, 0) - t1 = f.dtype.char - if t == t1: - pass - elif t in typecodes['Integer']: - if t1 in typecodes['Integer']: - f = f.astype(t) - else: - raise TypeError, 'Incorrect type for in-place operation.' - elif t in typecodes['Float']: - if t1 in typecodes['Integer']: - f = f.astype(t) - elif t1 in typecodes['Float']: - f = f.astype(t) - else: - raise TypeError, 'Incorrect type for in-place operation.' - elif t in typecodes['Complex']: - if t1 in typecodes['Integer']: - f = f.astype(t) - elif t1 in typecodes['Float']: - f = f.astype(t) - elif t1 in typecodes['Complex']: - f = f.astype(t) - else: - raise TypeError, 'Incorrect type for in-place operation.' - else: - raise TypeError, 'Incorrect type for in-place operation.' - - if self._mask is nomask: - self._data -= f - m = getmask(other) - self._mask = m - self._shared_mask = m is not nomask - else: - result = subtract(self, masked_array(f, mask=getmask(other))) - self._data = result.data - self._mask = result.mask - self._shared_mask = 1 - return self - - - - def __idiv__(self, other): - "Divide self by other in place." - t = self._data.dtype.char - f = filled(other, 0) - t1 = f.dtype.char - if t == t1: - pass - elif t in typecodes['Integer']: - if t1 in typecodes['Integer']: - f = f.astype(t) - else: - raise TypeError, 'Incorrect type for in-place operation.' 
- elif t in typecodes['Float']: - if t1 in typecodes['Integer']: - f = f.astype(t) - elif t1 in typecodes['Float']: - f = f.astype(t) - else: - raise TypeError, 'Incorrect type for in-place operation.' - elif t in typecodes['Complex']: - if t1 in typecodes['Integer']: - f = f.astype(t) - elif t1 in typecodes['Float']: - f = f.astype(t) - elif t1 in typecodes['Complex']: - f = f.astype(t) - else: - raise TypeError, 'Incorrect type for in-place operation.' - else: - raise TypeError, 'Incorrect type for in-place operation.' - mo = getmask(other) - result = divide(self, masked_array(f, mask=mo)) - self._data = result.data - dm = result.raw_mask() - if dm is not self._mask: - self._mask = dm - self._shared_mask = 1 - return self - - def __eq__(self, other): - return equal(self,other) - - def __ne__(self, other): - return not_equal(self,other) - - def __lt__(self, other): - return less(self,other) - - def __le__(self, other): - return less_equal(self,other) - - def __gt__(self, other): - return greater(self,other) - - def __ge__(self, other): - return greater_equal(self,other) - - def astype (self, tc): - "return self as array of given type." - d = self._data.astype(tc) - return array(d, mask=self._mask) - - def byte_swapped(self): - """Returns the raw data field, byte_swapped. Included for consistency - with numeric but doesn't make sense in this context. - """ - return self._data.byte_swapped() - - def compressed (self): - "A 1-D array of all the non-masked data." - d = fromnumeric.ravel(self._data) - if self._mask is nomask: - return array(d) - else: - m = 1 - fromnumeric.ravel(self._mask) - c = fromnumeric.compress(m, d) - return array(c, copy=0) - - def count (self, axis = None): - "Count of the non-masked elements in a, or along a certain axis." 
- m = self._mask - s = self._data.shape - ls = len(s) - if m is nomask: - if ls == 0: - return 1 - if ls == 1: - return s[0] - if axis is None: - return reduce(lambda x, y:x*y, s) - else: - n = s[axis] - t = list(s) - del t[axis] - return ones(t) * n - if axis is None: - w = fromnumeric.ravel(m).astype(int) - n1 = size(w) - if n1 == 1: - n2 = w[0] - else: - n2 = umath.add.reduce(w) - return n1 - n2 - else: - n1 = size(m, axis) - n2 = sum(m.astype(int), axis) - return n1 - n2 - - def dot (self, other): - "s.dot(other) = innerproduct(s, other)" - return innerproduct(self, other) - - def fill_value(self): - "Get the current fill value." - return self._fill_value - - def filled (self, fill_value=None): - """A numeric array with masked values filled. If fill_value is None, - use self.fill_value(). - - If mask is nomask, copy data only if not contiguous. - Result is always a contiguous, numeric array. -# Is contiguous really necessary now? - """ - d = self._data - m = self._mask - if m is nomask: - if d.flags['CONTIGUOUS']: - return d - else: - return d.copy() - else: - if fill_value is None: - value = self._fill_value - else: - value = fill_value - - if self is masked: - result = numeric.array(value) - else: - try: - result = numeric.array(d, dtype=d.dtype, copy=1) - result[m] = value - except (TypeError, AttributeError): - #ok, can't put that value in here - value = numeric.array(value, dtype=object) - d = d.astype(object) - result = fromnumeric.choose(m, (d, value)) - return result - - def ids (self): - """Return the ids of the data and mask areas""" - return (id(self._data), id(self._mask)) - - def iscontiguous (self): - "Is the data contiguous?" - return self._data.flags['CONTIGUOUS'] - - def itemsize(self): - "Item size of each data item." - return self._data.itemsize - - - def outer(self, other): - "s.outer(other) = outerproduct(s, other)" - return outerproduct(self, other) - - def put (self, values): - """Set the non-masked entries of self to filled(values). 
- No change to mask - """ - iota = numeric.arange(self.size) - d = self._data - if self._mask is nomask: - ind = iota - else: - ind = fromnumeric.compress(1 - self._mask, iota) - d[ind] = filled(values).astype(d.dtype) - - def putmask (self, values): - """Set the masked entries of self to filled(values). - Mask changed to nomask. - """ - d = self._data - if self._mask is not nomask: - d[self._mask] = filled(values).astype(d.dtype) - self._shared_mask = 0 - self._mask = nomask - - def ravel (self): - """Return a 1-D view of self.""" - if self._mask is nomask: - return masked_array(self._data.ravel()) - else: - return masked_array(self._data.ravel(), self._mask.ravel()) - - def raw_data (self): - """ Obsolete; use data property instead. - The raw data; portions may be meaningless. - May be noncontiguous. Expert use only.""" - return self._data - data = property(fget=raw_data, - doc="The data, but values at masked locations are meaningless.") - - def raw_mask (self): - """ Obsolete; use mask property instead. - May be noncontiguous. Expert use only. - """ - return self._mask - mask = property(fget=raw_mask, - doc="The mask, may be nomask. Values where mask true are meaningless.") - - def reshape (self, *s): - """This array reshaped to shape s""" - d = self._data.reshape(*s) - if self._mask is nomask: - return masked_array(d) - else: - m = self._mask.reshape(*s) - return masked_array(d, m) - - def set_fill_value (self, v=None): - "Set the fill value to v. Omit v to restore default." - if v is None: - v = default_fill_value (self.raw_data()) - self._fill_value = v - - def _get_ndim(self): - return self._data.ndim - ndim = property(_get_ndim, doc=numeric.ndarray.ndim.__doc__) - - def _get_size (self): - return self._data.size - size = property(fget=_get_size, doc="Number of elements in the array.") -## CHECK THIS: signature of numeric.array.size? 
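The methods defined above (`count`, `filled`, `compressed`, `ravel`, `reshape`, and the `size`/`data`/`mask` properties) all carry the mask along with the data. The modern `numpy.ma` package kept these APIs; a minimal sketch against current NumPy (not this deleted module):

```python
import numpy.ma as ma

# Two of four entries are masked; -1 is the declared fill value.
a = ma.array([1, 2, 3, 4], mask=[False, True, False, True], fill_value=-1)

n = a.count()        # number of unmasked elements
f = a.filled()       # plain ndarray, fill_value substituted at masked slots
c = a.compressed()   # 1-D plain ndarray of the unmasked data only
r = a.reshape(2, 2)  # the mask is reshaped together with the data
```

Note that `filled` and `compressed` return plain ndarrays, exactly as the docstrings above promise for the old implementation.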
- - def _get_dtype(self): - return self._data.dtype - dtype = property(fget=_get_dtype, doc="type of the array elements.") - - def item(self, *args): - "Return Python scalar if possible" - if self._mask is not nomask: - m = self._mask.item(*args) - try: - if m[0]: - return masked - except IndexError: - return masked - return self._data.item(*args) - - def itemset(self, *args): - "Set Python scalar into array" - item = args[-1] - args = args[:-1] - self[args] = item - - def tolist(self, fill_value=None): - "Convert to list" - return self.filled(fill_value).tolist() - - def tostring(self, fill_value=None): - "Convert to string" - return self.filled(fill_value).tostring() - - def unmask (self): - "Replace the mask by nomask if possible." - if self._mask is nomask: return - m = make_mask(self._mask, flag=1) - if m is nomask: - self._mask = nomask - self._shared_mask = 0 - - def unshare_mask (self): - "If currently sharing mask, make a copy." - if self._shared_mask: - self._mask = make_mask (self._mask, copy=1, flag=0) - self._shared_mask = 0 - - def _get_ctypes(self): - return self._data.ctypes - - def _get_T(self): - if (self.ndim < 2): - return self - return self.transpose() - - shape = property(_get_shape, _set_shape, - doc = 'tuple giving the shape of the array') - - flat = property(_get_flat, _set_flat, - doc = 'Access array in flat form.') - - real = property(_get_real, _set_real, - doc = 'Access the real part of the array') - - imaginary = property(_get_imaginary, _set_imaginary, - doc = 'Access the imaginary part of the array') - - imag = imaginary - - ctypes = property(_get_ctypes, None, doc="ctypes") - - T = property(_get_T, None, doc="get transpose") - -#end class MaskedArray - -array = MaskedArray - -def isMaskedArray (x): - "Is x a masked array, that is, an instance of MaskedArray?" 
- return isinstance(x, MaskedArray) - -isarray = isMaskedArray -isMA = isMaskedArray #backward compatibility - -def allclose (a, b, fill_value=1, rtol=1.e-5, atol=1.e-8): - """ Returns true if all components of a and b are equal - subject to given tolerances. - If fill_value is 1, masked values considered equal. - If fill_value is 0, masked values considered unequal. - The relative error rtol should be positive and << 1.0 - The absolute error atol comes into play for those elements - of b that are very small or zero; it says how small a must be also. - """ - m = mask_or(getmask(a), getmask(b)) - d1 = filled(a) - d2 = filled(b) - x = filled(array(d1, copy=0, mask=m), fill_value).astype(float) - y = filled(array(d2, copy=0, mask=m), 1).astype(float) - d = umath.less_equal(umath.absolute(x-y), atol + rtol * umath.absolute(y)) - return fromnumeric.alltrue(fromnumeric.ravel(d)) - -def allequal (a, b, fill_value=1): - """ - True if all entries of a and b are equal, using - fill_value as a truth value where either or both are masked. - """ - m = mask_or(getmask(a), getmask(b)) - if m is nomask: - x = filled(a) - y = filled(b) - d = umath.equal(x, y) - return fromnumeric.alltrue(fromnumeric.ravel(d)) - elif fill_value: - x = filled(a) - y = filled(b) - d = umath.equal(x, y) - dm = array(d, mask=m, copy=0) - return fromnumeric.alltrue(fromnumeric.ravel(filled(dm, 1))) - else: - return 0 - -def masked_values (data, value, rtol=1.e-5, atol=1.e-8, copy=1): - """ - masked_values(data, value, rtol=1.e-5, atol=1.e-8) - Create a masked array; mask is nomask if possible. - If copy==0, and otherwise possible, result - may share data values with original array. - Let d = filled(data, value). Returns d - masked where abs(data-value)<= atol + rtol * abs(value) - if d is of a floating point type. 
Otherwise returns - masked_object(d, value, copy) - """ - abs = umath.absolute - d = filled(data, value) - if issubclass(d.dtype.type, numeric.floating): - m = umath.less_equal(abs(d-value), atol+rtol*abs(value)) - m = make_mask(m, flag=1) - return array(d, mask = m, copy=copy, - fill_value=value) - else: - return masked_object(d, value, copy=copy) - -def masked_object (data, value, copy=1): - "Create array masked where exactly data equal to value" - d = filled(data, value) - dm = make_mask(umath.equal(d, value), flag=1) - return array(d, mask=dm, copy=copy, fill_value=value) - -def arange(start, stop=None, step=1, dtype=None): - """Just like range() except it returns a array whose type can be specified - by the keyword argument dtype. - """ - return array(numeric.arange(start, stop, step, dtype)) - -arrayrange = arange - -def fromstring (s, t): - "Construct a masked array from a string. Result will have no mask." - return masked_array(numeric.fromstring(s, t)) - -def left_shift (a, n): - "Left shift n bits" - m = getmask(a) - if m is nomask: - d = umath.left_shift(filled(a), n) - return masked_array(d) - else: - d = umath.left_shift(filled(a, 0), n) - return masked_array(d, m) - -def right_shift (a, n): - "Right shift n bits" - m = getmask(a) - if m is nomask: - d = umath.right_shift(filled(a), n) - return masked_array(d) - else: - d = umath.right_shift(filled(a, 0), n) - return masked_array(d, m) - -def resize (a, new_shape): - """resize(a, new_shape) returns a new array with the specified shape. - The original array's total size can be any size.""" - m = getmask(a) - if m is not nomask: - m = fromnumeric.resize(m, new_shape) - result = array(fromnumeric.resize(filled(a), new_shape), mask=m) - result.set_fill_value(get_fill_value(a)) - return result - -def new_repeat(a, repeats, axis=None): - """repeat elements of a repeats times along axis - repeats is a sequence of length a.shape[axis] - telling how many times to repeat each element. 
- """ - af = filled(a) - if isinstance(repeats, types.IntType): - if axis is None: - num = af.size - else: - num = af.shape[axis] - repeats = tuple([repeats]*num) - - m = getmask(a) - if m is not nomask: - m = fromnumeric.repeat(m, repeats, axis) - d = fromnumeric.repeat(af, repeats, axis) - result = masked_array(d, m) - result.set_fill_value(get_fill_value(a)) - return result - - - -def identity(n): - """identity(n) returns the identity matrix of shape n x n. - """ - return array(numeric.identity(n)) - -def indices (dimensions, dtype=None): - """indices(dimensions,dtype=None) returns an array representing a grid - of indices with row-only, and column-only variation. - """ - return array(numeric.indices(dimensions, dtype)) - -def zeros (shape, dtype=float): - """zeros(n, dtype=float) = - an array of all zeros of the given length or shape.""" - return array(numeric.zeros(shape, dtype)) - -def ones (shape, dtype=float): - """ones(n, dtype=float) = - an array of all ones of the given length or shape.""" - return array(numeric.ones(shape, dtype)) - -def count (a, axis = None): - "Count of the non-masked elements in a, or along a certain axis." - a = masked_array(a) - return a.count(axis) - -def power (a, b, third=None): - "a**b" - if third is not None: - raise MAError, "3-argument power not supported." 
- ma = getmask(a) - mb = getmask(b) - m = mask_or(ma, mb) - fa = filled(a, 1) - fb = filled(b, 1) - if fb.dtype.char in typecodes["Integer"]: - return masked_array(umath.power(fa, fb), m) - md = make_mask(umath.less(fa, 0), flag=1) - m = mask_or(m, md) - if m is nomask: - return masked_array(umath.power(fa, fb)) - else: - fa = numeric.where(m, 1, fa) - return masked_array(umath.power(fa, fb), m) - -def masked_array (a, mask=nomask, fill_value=None): - """masked_array(a, mask=nomask) = - array(a, mask=mask, copy=0, fill_value=fill_value) - """ - return array(a, mask=mask, copy=0, fill_value=fill_value) - -def sum (target, axis=None, dtype=None): - if axis is None: - target = ravel(target) - axis = 0 - return add.reduce(target, axis, dtype) - -def product (target, axis=None, dtype=None): - if axis is None: - target = ravel(target) - axis = 0 - return multiply.reduce(target, axis, dtype) - -def new_average (a, axis=None, weights=None, returned = 0): - """average(a, axis=None, weights=None) - Computes average along indicated axis. - If axis is None, average over the entire array - Inputs can be integer or floating types; result is of type float. - - If weights are given, result is sum(a*weights,axis=0)/(sum(weights,axis=0)*1.0) - weights must have a's shape or be the 1-d with length the size - of a in the given axis. - - If returned, return a tuple: the result and the sum of the weights - or count of values. Results will have the same shape. 
- - masked values in the weights will be set to 0.0 - """ - a = masked_array(a) - mask = a.mask - ash = a.shape - if ash == (): - ash = (1,) - if axis is None: - if mask is nomask: - if weights is None: - n = add.reduce(a.raw_data().ravel()) - d = reduce(lambda x, y: x * y, ash, 1.0) - else: - w = filled(weights, 0.0).ravel() - n = umath.add.reduce(a.raw_data().ravel() * w) - d = umath.add.reduce(w) - del w - else: - if weights is None: - n = add.reduce(a.ravel()) - w = fromnumeric.choose(mask, (1.0, 0.0)).ravel() - d = umath.add.reduce(w) - del w - else: - w = array(filled(weights, 0.0), float, mask=mask).ravel() - n = add.reduce(a.ravel() * w) - d = add.reduce(w) - del w - else: - if mask is nomask: - if weights is None: - d = ash[axis] * 1.0 - n = umath.add.reduce(a.raw_data(), axis) - else: - w = filled(weights, 0.0) - wsh = w.shape - if wsh == (): - wsh = (1,) - if wsh == ash: - w = numeric.array(w, float, copy=0) - n = add.reduce(a*w, axis) - d = add.reduce(w, axis) - del w - elif wsh == (ash[axis],): - r = [newaxis]*len(ash) - r[axis] = slice(None, None, 1) - w = eval ("w["+ repr(tuple(r)) + "] * ones(ash, float)") - n = add.reduce(a*w, axis) - d = add.reduce(w, axis) - del w, r - else: - raise ValueError, 'average: weights wrong shape.' - else: - if weights is None: - n = add.reduce(a, axis) - w = numeric.choose(mask, (1.0, 0.0)) - d = umath.add.reduce(w, axis) - del w - else: - w = filled(weights, 0.0) - wsh = w.shape - if wsh == (): - wsh = (1,) - if wsh == ash: - w = array(w, float, mask=mask, copy=0) - n = add.reduce(a*w, axis) - d = add.reduce(w, axis) - elif wsh == (ash[axis],): - r = [newaxis]*len(ash) - r[axis] = slice(None, None, 1) - w = eval ("w["+ repr(tuple(r)) + "] * masked_array(ones(ash, float), mask)") - n = add.reduce(a*w, axis) - d = add.reduce(w, axis) - else: - raise ValueError, 'average: weights wrong shape.' 
- del w - #print n, d, repr(mask), repr(weights) - if n is masked or d is masked: return masked - result = divide (n, d) - del n - - if isinstance(result, MaskedArray): - result.unmask() - if returned: - if not isinstance(d, MaskedArray): - d = masked_array(d) - if not d.shape == result.shape: - d = ones(result.shape, float) * d - d.unmask() - if returned: - return result, d - else: - return result - -def where (condition, x, y): - """where(condition, x, y) is x where condition is nonzero, y otherwise. - condition must be convertible to an integer array. - Answer is always the shape of condition. - The type depends on x and y. It is integer if both x and y are - the value masked. - """ - fc = filled(not_equal(condition, 0), 0) - xv = filled(x) - xm = getmask(x) - yv = filled(y) - ym = getmask(y) - d = numeric.choose(fc, (yv, xv)) - md = numeric.choose(fc, (ym, xm)) - m = getmask(condition) - m = make_mask(mask_or(m, md), copy=0, flag=1) - return masked_array(d, m) - -def choose (indices, t, out=None, mode='raise'): - "Returns array shaped like indices with elements chosen from t" - def fmask (x): - if x is masked: return 1 - return filled(x) - def nmask (x): - if x is masked: return 1 - m = getmask(x) - if m is nomask: return 0 - return m - c = filled(indices, 0) - masks = [nmask(x) for x in t] - a = [fmask(x) for x in t] - d = numeric.choose(c, a) - m = numeric.choose(c, masks) - m = make_mask(mask_or(m, getmask(indices)), copy=0, flag=1) - return masked_array(d, m) - -def masked_where(condition, x, copy=1): - """Return x as an array masked where condition is true. - Also masked where x or condition masked. 
- """ - cm = filled(condition,1) - m = mask_or(getmask(x), cm) - return array(filled(x), copy=copy, mask=m) - -def masked_greater(x, value, copy=1): - "masked_greater(x, value) = x masked where x > value" - return masked_where(greater(x, value), x, copy) - -def masked_greater_equal(x, value, copy=1): - "masked_greater_equal(x, value) = x masked where x >= value" - return masked_where(greater_equal(x, value), x, copy) - -def masked_less(x, value, copy=1): - "masked_less(x, value) = x masked where x < value" - return masked_where(less(x, value), x, copy) - -def masked_less_equal(x, value, copy=1): - "masked_less_equal(x, value) = x masked where x <= value" - return masked_where(less_equal(x, value), x, copy) - -def masked_not_equal(x, value, copy=1): - "masked_not_equal(x, value) = x masked where x != value" - d = filled(x, 0) - c = umath.not_equal(d, value) - m = mask_or(c, getmask(x)) - return array(d, mask=m, copy=copy) - -def masked_equal(x, value, copy=1): - """masked_equal(x, value) = x masked where x == value - For floating point consider masked_values(x, value) instead. - """ - d = filled(x, 0) - c = umath.equal(d, value) - m = mask_or(c, getmask(x)) - return array(d, mask=m, copy=copy) - -def masked_inside(x, v1, v2, copy=1): - """x with mask of all values of x that are inside [v1,v2] - v1 and v2 can be given in either order. - """ - if v2 < v1: - t = v2 - v2 = v1 - v1 = t - d = filled(x, 0) - c = umath.logical_and(umath.less_equal(d, v2), umath.greater_equal(d, v1)) - m = mask_or(c, getmask(x)) - return array(d, mask = m, copy=copy) - -def masked_outside(x, v1, v2, copy=1): - """x with mask of all values of x that are outside [v1,v2] - v1 and v2 can be given in either order. - """ - if v2 < v1: - t = v2 - v2 = v1 - v1 = t - d = filled(x, 0) - c = umath.logical_or(umath.less(d, v1), umath.greater(d, v2)) - m = mask_or(c, getmask(x)) - return array(d, mask = m, copy=copy) - -def reshape (a, *newshape): - "Copy of a with a new shape." 
- m = getmask(a) - d = filled(a).reshape(*newshape) - if m is nomask: - return masked_array(d) - else: - return masked_array(d, mask=numeric.reshape(m, *newshape)) - -def ravel (a): - "a as one-dimensional, may share data and mask" - m = getmask(a) - d = fromnumeric.ravel(filled(a)) - if m is nomask: - return masked_array(d) - else: - return masked_array(d, mask=numeric.ravel(m)) - -def concatenate (arrays, axis=0): - "Concatenate the arrays along the given axis" - d = [] - for x in arrays: - d.append(filled(x)) - d = numeric.concatenate(d, axis) - for x in arrays: - if getmask(x) is not nomask: break - else: - return masked_array(d) - dm = [] - for x in arrays: - dm.append(getmaskarray(x)) - dm = numeric.concatenate(dm, axis) - return masked_array(d, mask=dm) - -def swapaxes (a, axis1, axis2): - m = getmask(a) - d = masked_array(a).data - if m is nomask: - return masked_array(data=numeric.swapaxes(d, axis1, axis2)) - else: - return masked_array(data=numeric.swapaxes(d, axis1, axis2), - mask=numeric.swapaxes(m, axis1, axis2),) - - -def new_take (a, indices, axis=None, out=None, mode='raise'): - "returns selection of items from a." - m = getmask(a) - # d = masked_array(a).raw_data() - d = masked_array(a).data - if m is nomask: - return masked_array(numeric.take(d, indices, axis)) - else: - return masked_array(numeric.take(d, indices, axis), - mask = numeric.take(m, indices, axis)) - -def transpose(a, axes=None): - "reorder dimensions per tuple axes" - m = getmask(a) - d = filled(a) - if m is nomask: - return masked_array(numeric.transpose(d, axes)) - else: - return masked_array(numeric.transpose(d, axes), - mask = numeric.transpose(m, axes)) - - -def put(a, indices, values, mode='raise'): - """sets storage-indexed locations to corresponding values. - - Values and indices are filled if necessary. 
-
-    """
-    d = a.raw_data()
-    ind = filled(indices)
-    v = filled(values)
-    numeric.put (d, ind, v)
-    m = getmask(a)
-    if m is not nomask:
-        a.unshare_mask()
-        numeric.put(a.raw_mask(), ind, 0)
-
-def putmask(a, mask, values):
-    "putmask(a, mask, values) sets a where mask is true."
-    if mask is nomask:
-        return
-    numeric.putmask(a.raw_data(), mask, values)
-    m = getmask(a)
-    if m is nomask: return
-    a.unshare_mask()
-    numeric.putmask(a.raw_mask(), mask, 0)
-
-def inner(a, b):
-    """inner(a,b) returns the dot product of two arrays, which has
-    shape a.shape[:-1] + b.shape[:-1] with elements computed by summing the
-    product of the elements from the last dimensions of a and b.
-    Masked elements are replaced by zeros.
-    """
-    fa = filled(a, 0)
-    fb = filled(b, 0)
-    if len(fa.shape) == 0: fa.shape = (1,)
-    if len(fb.shape) == 0: fb.shape = (1,)
-    return masked_array(numeric.inner(fa, fb))
-
-innerproduct = inner
-
-def outer(a, b):
-    """outer(a,b) = {a[i]*b[j]}, has shape (len(a),len(b))"""
-    fa = filled(a, 0).ravel()
-    fb = filled(b, 0).ravel()
-    d = numeric.outer(fa, fb)
-    ma = getmask(a)
-    mb = getmask(b)
-    if ma is nomask and mb is nomask:
-        return masked_array(d)
-    ma = getmaskarray(a)
-    mb = getmaskarray(b)
-    m = make_mask(1-numeric.outer(1-ma, 1-mb), copy=0)
-    return masked_array(d, m)
-
-outerproduct = outer
-
-def dot(a, b):
-    """dot(a,b) returns matrix-multiplication between a and b. The product-sum
-    is over the last dimension of a and the second-to-last dimension of b.
-    Masked values are replaced by zeros. See also innerproduct.
-    """
-    return innerproduct(filled(a, 0), numeric.swapaxes(filled(b, 0), -1, -2))
-
-def compress(condition, x, dimension=-1, out=None):
-    """Select those parts of x for which condition is true.
-    Masked values in condition are considered false.
- """ - c = filled(condition, 0) - m = getmask(x) - if m is not nomask: - m = numeric.compress(c, m, dimension) - d = numeric.compress(c, filled(x), dimension) - return masked_array(d, m) - -class _minimum_operation: - "Object to calculate minima" - def __init__ (self): - """minimum(a, b) or minimum(a) - In one argument case returns the scalar minimum. - """ - pass - - def __call__ (self, a, b=None): - "Execute the call behavior." - if b is None: - m = getmask(a) - if m is nomask: - d = amin(filled(a).ravel()) - return d - ac = a.compressed() - if len(ac) == 0: - return masked - else: - return amin(ac.raw_data()) - else: - return where(less(a, b), a, b) - - def reduce (self, target, axis=0): - """Reduce target along the given axis.""" - m = getmask(target) - if m is nomask: - t = filled(target) - return masked_array (umath.minimum.reduce (t, axis)) - else: - t = umath.minimum.reduce(filled(target, minimum_fill_value(target)), axis) - m = umath.logical_and.reduce(m, axis) - return masked_array(t, m, get_fill_value(target)) - - def outer (self, a, b): - "Return the function applied to the outer product of a and b." - ma = getmask(a) - mb = getmask(b) - if ma is nomask and mb is nomask: - m = nomask - else: - ma = getmaskarray(a) - mb = getmaskarray(b) - m = logical_or.outer(ma, mb) - d = umath.minimum.outer(filled(a), filled(b)) - return masked_array(d, m) - -minimum = _minimum_operation () - -class _maximum_operation: - "Object to calculate maxima" - def __init__ (self): - """maximum(a, b) or maximum(a) - In one argument case returns the scalar maximum. - """ - pass - - def __call__ (self, a, b=None): - "Execute the call behavior." 
- if b is None: - m = getmask(a) - if m is nomask: - d = amax(filled(a).ravel()) - return d - ac = a.compressed() - if len(ac) == 0: - return masked - else: - return amax(ac.raw_data()) - else: - return where(greater(a, b), a, b) - - def reduce (self, target, axis=0): - """Reduce target along the given axis.""" - m = getmask(target) - if m is nomask: - t = filled(target) - return masked_array (umath.maximum.reduce (t, axis)) - else: - t = umath.maximum.reduce(filled(target, maximum_fill_value(target)), axis) - m = umath.logical_and.reduce(m, axis) - return masked_array(t, m, get_fill_value(target)) - - def outer (self, a, b): - "Return the function applied to the outer product of a and b." - ma = getmask(a) - mb = getmask(b) - if ma is nomask and mb is nomask: - m = nomask - else: - ma = getmaskarray(a) - mb = getmaskarray(b) - m = logical_or.outer(ma, mb) - d = umath.maximum.outer(filled(a), filled(b)) - return masked_array(d, m) - -maximum = _maximum_operation () - -def sort (x, axis = -1, fill_value=None): - """If x does not have a mask, return a masked array formed from the - result of numeric.sort(x, axis). - Otherwise, fill x with fill_value. Sort it. - Set a mask where the result is equal to fill_value. - Note that this may have unintended consequences if the data contains the - fill value at a non-masked site. - - If fill_value is not given the default fill value for x's type will be - used. 
- """ - if fill_value is None: - fill_value = default_fill_value (x) - d = filled(x, fill_value) - s = fromnumeric.sort(d, axis) - if getmask(x) is nomask: - return masked_array(s) - return masked_values(s, fill_value, copy=0) - -def diagonal(a, k = 0, axis1=0, axis2=1): - """diagonal(a,k=0,axis1=0, axis2=1) = the k'th diagonal of a""" - d = fromnumeric.diagonal(filled(a), k, axis1, axis2) - m = getmask(a) - if m is nomask: - return masked_array(d, m) - else: - return masked_array(d, fromnumeric.diagonal(m, k, axis1, axis2)) - -def trace (a, offset=0, axis1=0, axis2=1, dtype=None, out=None): - """trace(a,offset=0, axis1=0, axis2=1) returns the sum along diagonals - (defined by the last two dimenions) of the array. - """ - return diagonal(a, offset, axis1, axis2).sum(dtype=dtype) - -def argsort (x, axis = -1, out=None, fill_value=None): - """Treating masked values as if they have the value fill_value, - return sort indices for sorting along given axis. - if fill_value is None, use get_fill_value(x) - Returns a numpy array. - """ - d = filled(x, fill_value) - return fromnumeric.argsort(d, axis) - -def argmin (x, axis = -1, out=None, fill_value=None): - """Treating masked values as if they have the value fill_value, - return indices for minimum values along given axis. - if fill_value is None, use get_fill_value(x). - Returns a numpy array if x has more than one dimension. - Otherwise, returns a scalar index. - """ - d = filled(x, fill_value) - return fromnumeric.argmin(d, axis) - -def argmax (x, axis = -1, out=None, fill_value=None): - """Treating masked values as if they have the value fill_value, - return sort indices for maximum along given axis. - if fill_value is None, use -get_fill_value(x) if it exists. - Returns a numpy array if x has more than one dimension. - Otherwise, returns a scalar index. 
- """ - if fill_value is None: - fill_value = default_fill_value (x) - try: - fill_value = - fill_value - except: - pass - d = filled(x, fill_value) - return fromnumeric.argmax(d, axis) - -def fromfunction (f, s): - """apply f to s to create array as in umath.""" - return masked_array(numeric.fromfunction(f, s)) - -def asarray(data, dtype=None): - """asarray(data, dtype) = array(data, dtype, copy=0) - """ - if isinstance(data, MaskedArray) and \ - (dtype is None or dtype == data.dtype): - return data - return array(data, dtype=dtype, copy=0) - -# Add methods to support ndarray interface -# XXX: I is better to to change the masked_*_operation adaptors -# XXX: to wrap ndarray methods directly to create ma.array methods. -from types import MethodType -def _m(f): - return MethodType(f, None, array) -def not_implemented(*args, **kwds): - raise NotImplementedError, "not yet implemented for numpy.ma arrays" -array.all = _m(alltrue) -array.any = _m(sometrue) -array.argmax = _m(argmax) -array.argmin = _m(argmin) -array.argsort = _m(argsort) -array.base = property(_m(not_implemented)) -array.byteswap = _m(not_implemented) - -def _choose(self, *args, **kwds): - return choose(self, args) -array.choose = _m(_choose) -del _choose - -def _clip(self,a_min,a_max,out=None): - return MaskedArray(data = self.data.clip(asarray(a_min).data, - asarray(a_max).data), - mask = mask_or(self.mask, - mask_or(getmask(a_min),getmask(a_max)))) -array.clip = _m(_clip) - -def _compress(self, cond, axis=None, out=None): - return compress(cond, self, axis) -array.compress = _m(_compress) -del _compress - -array.conj = array.conjugate = _m(conjugate) -array.copy = _m(not_implemented) - -def _cumprod(self, axis=None, dtype=None, out=None): - m = self.mask - if m is not nomask: - m = umath.logical_or.accumulate(self.mask, axis) - return MaskedArray(data = self.filled(1).cumprod(axis, dtype), mask=m) -array.cumprod = _m(_cumprod) - -def _cumsum(self, axis=None, dtype=None, out=None): - m = self.mask - if 
m is not nomask: - m = umath.logical_or.accumulate(self.mask, axis) - return MaskedArray(data=self.filled(0).cumsum(axis, dtype), mask=m) -array.cumsum = _m(_cumsum) - -array.diagonal = _m(diagonal) -array.dump = _m(not_implemented) -array.dumps = _m(not_implemented) -array.fill = _m(not_implemented) -array.flags = property(_m(not_implemented)) -array.flatten = _m(ravel) -array.getfield = _m(not_implemented) - -def _max(a, axis=None, out=None): - if out is not None: - raise TypeError("Output arrays Unsupported for masked arrays") - if axis is None: - return maximum(a) - else: - return maximum.reduce(a, axis) -array.max = _m(_max) -del _max -def _min(a, axis=None, out=None): - if out is not None: - raise TypeError("Output arrays Unsupported for masked arrays") - if axis is None: - return minimum(a) - else: - return minimum.reduce(a, axis) -array.min = _m(_min) -del _min -array.mean = _m(new_average) -array.nbytes = property(_m(not_implemented)) -array.newbyteorder = _m(not_implemented) -array.nonzero = _m(nonzero) -array.prod = _m(product) - -def _ptp(a,axis=None,out=None): - return a.max(axis,out)-a.min(axis) -array.ptp = _m(_ptp) -array.repeat = _m(new_repeat) -array.resize = _m(resize) -array.searchsorted = _m(not_implemented) -array.setfield = _m(not_implemented) -array.setflags = _m(not_implemented) -array.sort = _m(not_implemented) # NB: ndarray.sort is inplace - -def _squeeze(self): - try: - result = MaskedArray(data = self.data.squeeze(), - mask = self.mask.squeeze()) - except AttributeError: - result = _wrapit(self, 'squeeze') - return result -array.squeeze = _m(_squeeze) - -array.strides = property(_m(not_implemented)) -array.sum = _m(sum) -def _swapaxes(self,axis1,axis2): - return MaskedArray(data = self.data.swapaxes(axis1, axis2), - mask = self.mask.swapaxes(axis1, axis2)) -array.swapaxes = _m(_swapaxes) -array.take = _m(new_take) -array.tofile = _m(not_implemented) -array.trace = _m(trace) -array.transpose = _m(transpose) - -def 
_var(self,axis=None,dtype=None, out=None): - if axis is None: - return numeric.asarray(self.compressed()).var() - a = self.swapaxes(axis,0) - a = a - a.mean(axis=0) - a *= a - a /= a.count(axis=0) - return a.swapaxes(0,axis).sum(axis) -def _std(self,axis=None, dtype=None, out=None): - return (self.var(axis,dtype))**0.5 -array.var = _m(_var) -array.std = _m(_std) - -array.view = _m(not_implemented) -array.round = _m(around) -del _m, MethodType, not_implemented - - -masked = MaskedArray(0, int, mask=1) - -def repeat(a, repeats, axis=0): - return new_repeat(a, repeats, axis) - -def average(a, axis=0, weights=None, returned=0): - return new_average(a, axis, weights, returned) - -def take(a, indices, axis=0): - return new_take(a, indices, axis) diff --git a/pythonPackages/numpy/numpy/oldnumeric/matrix.py b/pythonPackages/numpy/numpy/oldnumeric/matrix.py deleted file mode 100755 index 5f8c1ca5ea..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/matrix.py +++ /dev/null @@ -1,67 +0,0 @@ -# This module is for compatibility only. - -__all__ = ['UserArray', 'squeeze', 'Matrix', 'asarray', 'dot', 'k', 'Numeric', 'LinearAlgebra', 'identity', 'multiply', 'types', 'string'] - -import types -from user_array import UserArray, asarray -import numpy.oldnumeric as Numeric -from numpy.oldnumeric import dot, identity, multiply -import numpy.oldnumeric.linear_algebra as LinearAlgebra -from numpy import matrix as Matrix, squeeze - -# Hidden names that will be the same. 
- -_table = [None]*256 -for k in range(256): - _table[k] = chr(k) -_table = ''.join(_table) - -_numchars = '0123456789.-+jeEL' -_todelete = [] -for k in _table: - if k not in _numchars: - _todelete.append(k) -_todelete = ''.join(_todelete) - - -def _eval(astr): - return eval(astr.translate(_table,_todelete)) - -def _convert_from_string(data): - data.find - rows = data.split(';') - newdata = [] - count = 0 - for row in rows: - trow = row.split(',') - newrow = [] - for col in trow: - temp = col.split() - newrow.extend(map(_eval,temp)) - if count == 0: - Ncols = len(newrow) - elif len(newrow) != Ncols: - raise ValueError, "Rows not the same size." - count += 1 - newdata.append(newrow) - return newdata - - -_lkup = {'0':'000', - '1':'001', - '2':'010', - '3':'011', - '4':'100', - '5':'101', - '6':'110', - '7':'111'} - -def _binary(num): - ostr = oct(num) - bin = '' - for ch in ostr[1:]: - bin += _lkup[ch] - ind = 0 - while bin[ind] == '0': - ind += 1 - return bin[ind:] diff --git a/pythonPackages/numpy/numpy/oldnumeric/misc.py b/pythonPackages/numpy/numpy/oldnumeric/misc.py deleted file mode 100755 index ccd47efbbe..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/misc.py +++ /dev/null @@ -1,36 +0,0 @@ -# Functions that already have the correct syntax or miscellaneous functions - - -__all__ = ['sort', 'copy_reg', 'clip', 'rank', - 'sign', 'shape', 'types', 'allclose', 'size', - 'choose', 'swapaxes', 'array_str', - 'pi', 'math', 'concatenate', 'putmask', 'put', - 'around', 'vdot', 'transpose', 'array2string', 'diagonal', - 'searchsorted', 'copy', 'resize', - 'array_repr', 'e', 'StringIO', 'pickle', - 'argsort', 'convolve', 'cross_correlate', - 'dot', 'outerproduct', 'innerproduct', 'insert'] - -import types -import StringIO -import pickle -import math -import copy -import copy_reg - -import sys -if sys.version_info[0] >= 3: - import copyreg - import io - StringIO = io.BytesIO - copy_reg = copyreg - -from numpy import sort, clip, rank, sign, shape, putmask, 
allclose, size,\ - choose, swapaxes, array_str, array_repr, e, pi, put, \ - resize, around, concatenate, vdot, transpose, \ - diagonal, searchsorted, argsort, convolve, dot, \ - outer as outerproduct, inner as innerproduct, \ - correlate as cross_correlate, \ - place as insert - -from array_printer import array2string diff --git a/pythonPackages/numpy/numpy/oldnumeric/mlab.py b/pythonPackages/numpy/numpy/oldnumeric/mlab.py deleted file mode 100755 index e2a0262f02..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/mlab.py +++ /dev/null @@ -1,126 +0,0 @@ -# This module is for compatibility only. All functions are defined elsewhere. - -__all__ = ['rand', 'tril', 'trapz', 'hanning', 'rot90', 'triu', 'diff', 'angle', - 'roots', 'ptp', 'kaiser', 'randn', 'cumprod', 'diag', 'msort', - 'LinearAlgebra', 'RandomArray', 'prod', 'std', 'hamming', 'flipud', - 'max', 'blackman', 'corrcoef', 'bartlett', 'eye', 'squeeze', 'sinc', - 'tri', 'cov', 'svd', 'min', 'median', 'fliplr', 'eig', 'mean'] - -import numpy.oldnumeric.linear_algebra as LinearAlgebra -import numpy.oldnumeric.random_array as RandomArray -from numpy import tril, trapz as _Ntrapz, hanning, rot90, triu, diff, \ - angle, roots, ptp as _Nptp, kaiser, cumprod as _Ncumprod, \ - diag, msort, prod as _Nprod, std as _Nstd, hamming, flipud, \ - amax as _Nmax, amin as _Nmin, blackman, bartlett, \ - squeeze, sinc, median, fliplr, mean as _Nmean, transpose - -from numpy.linalg import eig, svd -from numpy.random import rand, randn -import numpy as np - -from typeconv import convtypecode - -def eye(N, M=None, k=0, typecode=None, dtype=None): - """ eye returns a N-by-M 2-d array where the k-th diagonal is all ones, - and everything else is zeros. 
- """ - dtype = convtypecode(typecode, dtype) - if M is None: M = N - m = np.equal(np.subtract.outer(np.arange(N), np.arange(M)),-k) - if m.dtype != dtype: - return m.astype(dtype) - -def tri(N, M=None, k=0, typecode=None, dtype=None): - """ returns a N-by-M array where all the diagonals starting from - lower left corner up to the k-th are all ones. - """ - dtype = convtypecode(typecode, dtype) - if M is None: M = N - m = np.greater_equal(np.subtract.outer(np.arange(N), np.arange(M)),-k) - if m.dtype != dtype: - return m.astype(dtype) - -def trapz(y, x=None, axis=-1): - return _Ntrapz(y, x, axis=axis) - -def ptp(x, axis=0): - return _Nptp(x, axis) - -def cumprod(x, axis=0): - return _Ncumprod(x, axis) - -def max(x, axis=0): - return _Nmax(x, axis) - -def min(x, axis=0): - return _Nmin(x, axis) - -def prod(x, axis=0): - return _Nprod(x, axis) - -def std(x, axis=0): - N = asarray(x).shape[axis] - return _Nstd(x, axis)*sqrt(N/(N-1.)) - -def mean(x, axis=0): - return _Nmean(x, axis) - -# This is exactly the same cov function as in MLab -def cov(m, y=None, rowvar=0, bias=0): - if y is None: - y = m - else: - y = y - if rowvar: - m = transpose(m) - y = transpose(y) - if (m.shape[0] == 1): - m = transpose(m) - if (y.shape[0] == 1): - y = transpose(y) - N = m.shape[0] - if (y.shape[0] != N): - raise ValueError, "x and y must have the same number "\ - "of observations" - m = m - _Nmean(m,axis=0) - y = y - _Nmean(y,axis=0) - if bias: - fact = N*1.0 - else: - fact = N-1.0 - return squeeze(dot(transpose(m), conjugate(y)) / fact) - -from numpy import sqrt, multiply -def corrcoef(x, y=None): - c = cov(x, y) - d = diag(c) - return c/sqrt(multiply.outer(d,d)) - -from compat import * -from functions import * -from precision import * -from ufuncs import * -from misc import * - -import compat -import precision -import functions -import misc -import ufuncs - -import numpy -__version__ = numpy.__version__ -del numpy - -__all__ += ['__version__'] -__all__ += compat.__all__ -__all__ += 
precision.__all__ -__all__ += functions.__all__ -__all__ += ufuncs.__all__ -__all__ += misc.__all__ - -del compat -del functions -del precision -del ufuncs -del misc diff --git a/pythonPackages/numpy/numpy/oldnumeric/precision.py b/pythonPackages/numpy/numpy/oldnumeric/precision.py deleted file mode 100755 index c095ceb199..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/precision.py +++ /dev/null @@ -1,168 +0,0 @@ -# Lifted from Precision.py. This is for compatibility only. -# -# The character strings are still for "new" NumPy -# which is the only Incompatibility with Numeric - -__all__ = ['Character', 'Complex', 'Float', - 'PrecisionError', 'PyObject', 'Int', 'UInt', - 'UnsignedInt', 'UnsignedInteger', 'string', 'typecodes', 'zeros'] - -from functions import zeros -import string # for backwards compatibility - -typecodes = {'Character':'c', 'Integer':'bhil', 'UnsignedInteger':'BHIL', 'Float':'fd', 'Complex':'FD'} - -def _get_precisions(typecodes): - lst = [] - for t in typecodes: - lst.append( (zeros( (1,), t ).itemsize*8, t) ) - return lst - -def _fill_table(typecodes, table={}): - for key, value in typecodes.items(): - table[key] = _get_precisions(value) - return table - -_code_table = _fill_table(typecodes) - -class PrecisionError(Exception): - pass - -def _lookup(table, key, required_bits): - lst = table[key] - for bits, typecode in lst: - if bits >= required_bits: - return typecode - raise PrecisionError, key+" of "+str(required_bits)+" bits not available on this system" - -Character = 'c' - -try: - UnsignedInt8 = _lookup(_code_table, "UnsignedInteger", 8) - UInt8 = UnsignedInt8 - __all__.extend(['UnsignedInt8', 'UInt8']) -except(PrecisionError): - pass -try: - UnsignedInt16 = _lookup(_code_table, "UnsignedInteger", 16) - UInt16 = UnsignedInt16 - __all__.extend(['UnsignedInt16', 'UInt16']) -except(PrecisionError): - pass -try: - UnsignedInt32 = _lookup(_code_table, "UnsignedInteger", 32) - UInt32 = UnsignedInt32 - __all__.extend(['UnsignedInt32', 
'UInt32']) -except(PrecisionError): - pass -try: - UnsignedInt64 = _lookup(_code_table, "UnsignedInteger", 64) - UInt64 = UnsignedInt64 - __all__.extend(['UnsignedInt64', 'UInt64']) -except(PrecisionError): - pass -try: - UnsignedInt128 = _lookup(_code_table, "UnsignedInteger", 128) - UInt128 = UnsignedInt128 - __all__.extend(['UnsignedInt128', 'UInt128']) -except(PrecisionError): - pass -UInt = UnsignedInt = UnsignedInteger = 'u' - -try: - Int0 = _lookup(_code_table, 'Integer', 0) - __all__.append('Int0') -except(PrecisionError): - pass -try: - Int8 = _lookup(_code_table, 'Integer', 8) - __all__.append('Int8') -except(PrecisionError): - pass -try: - Int16 = _lookup(_code_table, 'Integer', 16) - __all__.append('Int16') -except(PrecisionError): - pass -try: - Int32 = _lookup(_code_table, 'Integer', 32) - __all__.append('Int32') -except(PrecisionError): - pass -try: - Int64 = _lookup(_code_table, 'Integer', 64) - __all__.append('Int64') -except(PrecisionError): - pass -try: - Int128 = _lookup(_code_table, 'Integer', 128) - __all__.append('Int128') -except(PrecisionError): - pass -Int = 'l' - -try: - Float0 = _lookup(_code_table, 'Float', 0) - __all__.append('Float0') -except(PrecisionError): - pass -try: - Float8 = _lookup(_code_table, 'Float', 8) - __all__.append('Float8') -except(PrecisionError): - pass -try: - Float16 = _lookup(_code_table, 'Float', 16) - __all__.append('Float16') -except(PrecisionError): - pass -try: - Float32 = _lookup(_code_table, 'Float', 32) - __all__.append('Float32') -except(PrecisionError): - pass -try: - Float64 = _lookup(_code_table, 'Float', 64) - __all__.append('Float64') -except(PrecisionError): - pass -try: - Float128 = _lookup(_code_table, 'Float', 128) - __all__.append('Float128') -except(PrecisionError): - pass -Float = 'd' - -try: - Complex0 = _lookup(_code_table, 'Complex', 0) - __all__.append('Complex0') -except(PrecisionError): - pass -try: - Complex8 = _lookup(_code_table, 'Complex', 16) - __all__.append('Complex8') 
-except(PrecisionError): - pass -try: - Complex16 = _lookup(_code_table, 'Complex', 32) - __all__.append('Complex16') -except(PrecisionError): - pass -try: - Complex32 = _lookup(_code_table, 'Complex', 64) - __all__.append('Complex32') -except(PrecisionError): - pass -try: - Complex64 = _lookup(_code_table, 'Complex', 128) - __all__.append('Complex64') -except(PrecisionError): - pass -try: - Complex128 = _lookup(_code_table, 'Complex', 256) - __all__.append('Complex128') -except(PrecisionError): - pass -Complex = 'D' - -PyObject = 'O' diff --git a/pythonPackages/numpy/numpy/oldnumeric/random_array.py b/pythonPackages/numpy/numpy/oldnumeric/random_array.py deleted file mode 100755 index e84aedf1e3..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/random_array.py +++ /dev/null @@ -1,266 +0,0 @@ -# Backward compatible module for RandomArray - -__all__ = ['ArgumentError','F','beta','binomial','chi_square', 'exponential', - 'gamma', 'get_seed', 'mean_var_test', 'multinomial', - 'multivariate_normal', 'negative_binomial', 'noncentral_F', - 'noncentral_chi_square', 'normal', 'permutation', 'poisson', - 'randint', 'random', 'random_integers', 'seed', 'standard_normal', - 'uniform'] - -ArgumentError = ValueError - -import numpy.random.mtrand as mt -import numpy as np - -def seed(x=0, y=0): - if (x == 0 or y == 0): - mt.seed() - else: - mt.seed((x,y)) - -def get_seed(): - raise NotImplementedError, \ - "If you want to save the state of the random number generator.\n"\ - "Then you should use obj = numpy.random.get_state() followed by.\n"\ - "numpy.random.set_state(obj)." 
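The deleted `get_seed` above deliberately raises `NotImplementedError` and directs callers to `numpy.random.get_state` / `numpy.random.set_state` instead. As a reference for anyone migrating off this module, a minimal sketch of that replacement (modern numpy API, not part of the removed code):

```python
import numpy as np

# Capture the global generator's state, draw some values, restore the
# state, and draw again -- the two draws are identical.
state = np.random.get_state()
a = np.random.random(3)
np.random.set_state(state)
b = np.random.random(3)
assert (a == b).all()
```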
- -def random(shape=[]): - "random(n) or random([n, m, ...]) returns array of random numbers" - if shape == []: - shape = None - return mt.random_sample(shape) - -def uniform(minimum, maximum, shape=[]): - """uniform(minimum, maximum, shape=[]) returns array of given shape of random reals - in given range""" - if shape == []: - shape = None - return mt.uniform(minimum, maximum, shape) - -def randint(minimum, maximum=None, shape=[]): - """randint(min, max, shape=[]) = random integers >=min, < max - If max not given, random integers >= 0, = 0.6: - raise SystemExit, "uniform returned out of desired range" - print "randint(1, 10, shape=[50])" - print randint(1, 10, shape=[50]) - print "permutation(10)", permutation(10) - print "randint(3,9)", randint(3,9) - print "random_integers(10, shape=[20])" - print random_integers(10, shape=[20]) - s = 3.0 - x = normal(2.0, s, [10, 1000]) - if len(x.shape) != 2 or x.shape[0] != 10 or x.shape[1] != 1000: - raise SystemExit, "standard_normal returned wrong shape" - x.shape = (10000,) - mean_var_test(x, "normally distributed numbers with mean 2 and variance %f"%(s**2,), 2, s**2, 0) - x = exponential(3, 10000) - mean_var_test(x, "random numbers exponentially distributed with mean %f"%(s,), s, s**2, 2) - x = multivariate_normal(np.array([10,20]), np.array(([1,2],[2,4]))) - print "\nA multivariate normal", x - if x.shape != (2,): raise SystemExit, "multivariate_normal returned wrong shape" - x = multivariate_normal(np.array([10,20]), np.array([[1,2],[2,4]]), [4,3]) - print "A 4x3x2 array containing multivariate normals" - print x - if x.shape != (4,3,2): raise SystemExit, "multivariate_normal returned wrong shape" - x = multivariate_normal(np.array([-100,0,100]), np.array([[3,2,1],[2,2,1],[1,1,1]]), 10000) - x_mean = np.sum(x,axis=0)/10000. 
- print "Average of 10000 multivariate normals with mean [-100,0,100]" - print x_mean - x_minus_mean = x - x_mean - print "Estimated covariance of 10000 multivariate normals with covariance [[3,2,1],[2,2,1],[1,1,1]]" - print np.dot(np.transpose(x_minus_mean),x_minus_mean)/9999. - x = beta(5.0, 10.0, 10000) - mean_var_test(x, "beta(5.,10.) random numbers", 0.333, 0.014) - x = gamma(.01, 2., 10000) - mean_var_test(x, "gamma(.01,2.) random numbers", 2*100, 2*100*100) - x = chi_square(11., 10000) - mean_var_test(x, "chi squared random numbers with 11 degrees of freedom", 11, 22, 2*np.sqrt(2./11.)) - x = F(5., 10., 10000) - mean_var_test(x, "F random numbers with 5 and 10 degrees of freedom", 1.25, 1.35) - x = poisson(50., 10000) - mean_var_test(x, "poisson random numbers with mean 50", 50, 50, 0.14) - print "\nEach element is the result of 16 binomial trials with probability 0.5:" - print binomial(16, 0.5, 16) - print "\nEach element is the result of 16 negative binomial trials with probability 0.5:" - print negative_binomial(16, 0.5, [16,]) - print "\nEach row is the result of 16 multinomial trials with probabilities [0.1, 0.5, 0.1 0.3]:" - x = multinomial(16, [0.1, 0.5, 0.1], 8) - print x - print "Mean = ", np.sum(x,axis=0)/8. - -if __name__ == '__main__': - test() diff --git a/pythonPackages/numpy/numpy/oldnumeric/rng.py b/pythonPackages/numpy/numpy/oldnumeric/rng.py deleted file mode 100755 index 28d3f16dfc..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/rng.py +++ /dev/null @@ -1,135 +0,0 @@ -# This module re-creates the RNG interface from Numeric -# Replace import RNG with import numpy.oldnumeric.rng as RNG -# -# It is for backwards compatibility only. 
- - -__all__ = ['CreateGenerator','ExponentialDistribution','LogNormalDistribution', - 'NormalDistribution', 'UniformDistribution', 'error', 'ranf', - 'default_distribution', 'random_sample', 'standard_generator'] - -import numpy.random.mtrand as mt -import math - -class error(Exception): - pass - -class Distribution(object): - def __init__(self, meth, *args): - self._meth = meth - self._args = args - - def density(self,x): - raise NotImplementedError - - def __call__(self, x): - return self.density(x) - - def _onesample(self, rng): - return getattr(rng, self._meth)(*self._args) - - def _sample(self, rng, n): - kwds = {'size' : n} - return getattr(rng, self._meth)(*self._args, **kwds) - - -class ExponentialDistribution(Distribution): - def __init__(self, lambda_): - if (lambda_ <= 0): - raise error, "parameter must be positive" - Distribution.__init__(self, 'exponential', lambda_) - - def density(x): - if x < 0: - return 0.0 - else: - lambda_ = self._args[0] - return lambda_*math.exp(-lambda_*x) - -class LogNormalDistribution(Distribution): - def __init__(self, m, s): - m = float(m) - s = float(s) - if (s <= 0): - raise error, "standard deviation must be positive" - Distribution.__init__(self, 'lognormal', m, s) - sn = math.log(1.0+s*s/(m*m)); - self._mn = math.log(m)-0.5*sn - self._sn = math.sqrt(sn) - self._fac = 1.0/math.sqrt(2*math.pi)/self._sn - - def density(x): - m,s = self._args - y = (math.log(x)-self._mn)/self._sn - return self._fac*math.exp(-0.5*y*y)/x - - -class NormalDistribution(Distribution): - def __init__(self, m, s): - m = float(m) - s = float(s) - if (s <= 0): - raise error, "standard deviation must be positive" - Distribution.__init__(self, 'normal', m, s) - self._fac = 1.0/math.sqrt(2*math.pi)/s - - def density(x): - m,s = self._args - y = (x-m)/s - return self._fac*math.exp(-0.5*y*y) - -class UniformDistribution(Distribution): - def __init__(self, a, b): - a = float(a) - b = float(b) - width = b-a - if (width <=0): - raise error, "width of 
uniform distribution must be > 0" - Distribution.__init__(self, 'uniform', a, b) - self._fac = 1.0/width - - def density(x): - a, b = self._args - if (x < a) or (x >= b): - return 0.0 - else: - return self._fac - -default_distribution = UniformDistribution(0.0,1.0) - -class CreateGenerator(object): - def __init__(self, seed, dist=None): - if seed <= 0: - self._rng = mt.RandomState() - elif seed > 0: - self._rng = mt.RandomState(seed) - if dist is None: - dist = default_distribution - if not isinstance(dist, Distribution): - raise error, "Not a distribution object" - self._dist = dist - - def ranf(self): - return self._dist._onesample(self._rng) - - def sample(self, n): - return self._dist._sample(self._rng, n) - - -standard_generator = CreateGenerator(-1) - -def ranf(): - "ranf() = a random number from the standard generator." - return standard_generator.ranf() - -def random_sample(*n): - """random_sample(n) = array of n random numbers; - - random_sample(n1, n2, ...)= random array of shape (n1, n2, ..)""" - - if not n: - return standard_generator.ranf() - m = 1 - for i in n: - m = m * i - return standard_generator.sample(m).reshape(*n) diff --git a/pythonPackages/numpy/numpy/oldnumeric/rng_stats.py b/pythonPackages/numpy/numpy/oldnumeric/rng_stats.py deleted file mode 100755 index 8c7fec4336..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/rng_stats.py +++ /dev/null @@ -1,35 +0,0 @@ - -__all__ = ['average', 'histogram', 'standardDeviation', 'variance'] - -import numpy.oldnumeric as Numeric - -def average(data): - data = Numeric.array(data) - return Numeric.add.reduce(data)/len(data) - -def variance(data): - data = Numeric.array(data) - return Numeric.add.reduce((data-average(data,axis=0))**2)/(len(data)-1) - -def standardDeviation(data): - data = Numeric.array(data) - return Numeric.sqrt(variance(data)) - -def histogram(data, nbins, range = None): - data = Numeric.array(data, Numeric.Float) - if range is None: - min = Numeric.minimum.reduce(data) - max = 
Numeric.maximum.reduce(data) - else: - min, max = range - data = Numeric.repeat(data, - Numeric.logical_and(Numeric.less_equal(data, max), - Numeric.greater_equal(data, - min)),axis=0) - bin_width = (max-min)/nbins - data = Numeric.floor((data - min)/bin_width).astype(Numeric.Int) - histo = Numeric.add.reduce(Numeric.equal( - Numeric.arange(nbins)[:,Numeric.NewAxis], data), -1) - histo[-1] = histo[-1] + Numeric.add.reduce(Numeric.equal(nbins, data)) - bins = min + bin_width*(Numeric.arange(nbins)+0.5) - return Numeric.transpose(Numeric.array([bins, histo])) diff --git a/pythonPackages/numpy/numpy/oldnumeric/setup.py b/pythonPackages/numpy/numpy/oldnumeric/setup.py deleted file mode 100755 index 31b5ff3cc6..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/setup.py +++ /dev/null @@ -1,10 +0,0 @@ - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('oldnumeric',parent_package,top_path) - config.add_data_dir('tests') - return config - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/oldnumeric/setupscons.py b/pythonPackages/numpy/numpy/oldnumeric/setupscons.py deleted file mode 100755 index 82e8a62013..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/setupscons.py +++ /dev/null @@ -1,8 +0,0 @@ - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - return Configuration('oldnumeric',parent_package,top_path) - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/oldnumeric/tests/test_oldnumeric.py b/pythonPackages/numpy/numpy/oldnumeric/tests/test_oldnumeric.py deleted file mode 100755 index 24d709d2c8..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/tests/test_oldnumeric.py +++ /dev/null @@ -1,94 +0,0 @@ -import unittest - -from 
numpy.testing import * - -from numpy import array -from numpy.oldnumeric import * -from numpy.core.numeric import float32, float64, complex64, complex128, int8, \ - int16, int32, int64, uint, uint8, uint16, uint32, uint64 - -class test_oldtypes(unittest.TestCase): - def test_oldtypes(self, level=1): - a1 = array([0,1,0], Float) - a2 = array([0,1,0], float) - assert_array_equal(a1, a2) - a1 = array([0,1,0], Float8) - a2 = array([0,1,0], float) - assert_array_equal(a1, a2) - a1 = array([0,1,0], Float16) - a2 = array([0,1,0], float) - assert_array_equal(a1, a2) - a1 = array([0,1,0], Float32) - a2 = array([0,1,0], float32) - assert_array_equal(a1, a2) - a1 = array([0,1,0], Float64) - a2 = array([0,1,0], float64) - assert_array_equal(a1, a2) - a1 = array([0,1,0], Complex) - a2 = array([0,1,0], complex) - assert_array_equal(a1, a2) - a1 = array([0,1,0], Complex8) - a2 = array([0,1,0], complex) - assert_array_equal(a1, a2) - a1 = array([0,1,0], Complex16) - a2 = array([0,1,0], complex) - assert_array_equal(a1, a2) - a1 = array([0,1,0], Complex32) - a2 = array([0,1,0], complex64) - assert_array_equal(a1, a2) - a1 = array([0,1,0], Complex64) - a2 = array([0,1,0], complex128) - assert_array_equal(a1, a2) - a1 = array([0,1,0], Int) - a2 = array([0,1,0], int) - assert_array_equal(a1, a2) - a1 = array([0,1,0], Int8) - a2 = array([0,1,0], int8) - assert_array_equal(a1, a2) - a1 = array([0,1,0], Int16) - a2 = array([0,1,0], int16) - assert_array_equal(a1, a2) - a1 = array([0,1,0], Int32) - a2 = array([0,1,0], int32) - assert_array_equal(a1, a2) - try: - a1 = array([0,1,0], Int64) - a2 = array([0,1,0], int64) - assert_array_equal(a1, a2) - except NameError: - # Not all systems have 64-bit integers. 
- pass - a1 = array([0,1,0], UnsignedInt) - a2 = array([0,1,0], UnsignedInteger) - a3 = array([0,1,0], uint) - assert_array_equal(a1, a3) - assert_array_equal(a2, a3) - a1 = array([0,1,0], UInt8) - a2 = array([0,1,0], UnsignedInt8) - a3 = array([0,1,0], uint8) - assert_array_equal(a1, a3) - assert_array_equal(a2, a3) - a1 = array([0,1,0], UInt16) - a2 = array([0,1,0], UnsignedInt16) - a3 = array([0,1,0], uint16) - assert_array_equal(a1, a3) - assert_array_equal(a2, a3) - a1 = array([0,1,0], UInt32) - a2 = array([0,1,0], UnsignedInt32) - a3 = array([0,1,0], uint32) - assert_array_equal(a1, a3) - assert_array_equal(a2, a3) - try: - a1 = array([0,1,0], UInt64) - a2 = array([0,1,0], UnsignedInt64) - a3 = array([0,1,0], uint64) - assert_array_equal(a1, a3) - assert_array_equal(a2, a3) - except NameError: - # Not all systems have 64-bit integers. - pass - - -if __name__ == "__main__": - import nose - nose.main() diff --git a/pythonPackages/numpy/numpy/oldnumeric/tests/test_regression.py b/pythonPackages/numpy/numpy/oldnumeric/tests/test_regression.py deleted file mode 100755 index 235ae4fe5a..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/tests/test_regression.py +++ /dev/null @@ -1,10 +0,0 @@ -from numpy.testing import * - -rlevel = 1 - -class TestRegression(TestCase): - def test_numeric_random(self, level=rlevel): - """Ticket #552""" - from numpy.oldnumeric.random_array import randint - randint(0,50,[2,3]) - diff --git a/pythonPackages/numpy/numpy/oldnumeric/typeconv.py b/pythonPackages/numpy/numpy/oldnumeric/typeconv.py deleted file mode 100755 index 4e203d4aed..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/typeconv.py +++ /dev/null @@ -1,60 +0,0 @@ -__all__ = ['oldtype2dtype', 'convtypecode', 'convtypecode2', 'oldtypecodes'] - -import numpy as np - -oldtype2dtype = {'1': np.dtype(np.byte), - 's': np.dtype(np.short), -# 'i': np.dtype(np.intc), -# 'l': np.dtype(int), -# 'b': np.dtype(np.ubyte), - 'w': np.dtype(np.ushort), - 'u': np.dtype(np.uintc), -# 
'f': np.dtype(np.single), -# 'd': np.dtype(float), -# 'F': np.dtype(np.csingle), -# 'D': np.dtype(complex), -# 'O': np.dtype(object), -# 'c': np.dtype('c'), - None: np.dtype(int) - } - -# converts typecode=None to int -def convtypecode(typecode, dtype=None): - if dtype is None: - try: - return oldtype2dtype[typecode] - except: - return np.dtype(typecode) - else: - return dtype - -#if both typecode and dtype are None -# return None -def convtypecode2(typecode, dtype=None): - if dtype is None: - if typecode is None: - return None - else: - try: - return oldtype2dtype[typecode] - except: - return np.dtype(typecode) - else: - return dtype - -_changedtypes = {'B': 'b', - 'b': '1', - 'h': 's', - 'H': 'w', - 'I': 'u'} - -class _oldtypecodes(dict): - def __getitem__(self, obj): - char = np.dtype(obj).char - try: - return _changedtypes[char] - except KeyError: - return char - - -oldtypecodes = _oldtypecodes() diff --git a/pythonPackages/numpy/numpy/oldnumeric/ufuncs.py b/pythonPackages/numpy/numpy/oldnumeric/ufuncs.py deleted file mode 100755 index c26050f55e..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/ufuncs.py +++ /dev/null @@ -1,19 +0,0 @@ -__all__ = ['less', 'cosh', 'arcsinh', 'add', 'ceil', 'arctan2', 'floor_divide', - 'fmod', 'hypot', 'logical_and', 'power', 'sinh', 'remainder', 'cos', - 'equal', 'arccos', 'less_equal', 'divide', 'bitwise_or', - 'bitwise_and', 'logical_xor', 'log', 'subtract', 'invert', - 'negative', 'log10', 'arcsin', 'arctanh', 'logical_not', - 'not_equal', 'tanh', 'true_divide', 'maximum', 'arccosh', - 'logical_or', 'minimum', 'conjugate', 'tan', 'greater', - 'bitwise_xor', 'fabs', 'floor', 'sqrt', 'arctan', 'right_shift', - 'absolute', 'sin', 'multiply', 'greater_equal', 'left_shift', - 'exp', 'divide_safe'] - -from numpy import less, cosh, arcsinh, add, ceil, arctan2, floor_divide, \ - fmod, hypot, logical_and, power, sinh, remainder, cos, \ - equal, arccos, less_equal, divide, bitwise_or, bitwise_and, \ - logical_xor, log, subtract, 
invert, negative, log10, arcsin, \ - arctanh, logical_not, not_equal, tanh, true_divide, maximum, \ - arccosh, logical_or, minimum, conjugate, tan, greater, bitwise_xor, \ - fabs, floor, sqrt, arctan, right_shift, absolute, sin, \ - multiply, greater_equal, left_shift, exp, divide as divide_safe diff --git a/pythonPackages/numpy/numpy/oldnumeric/user_array.py b/pythonPackages/numpy/numpy/oldnumeric/user_array.py deleted file mode 100755 index 375c4013bb..0000000000 --- a/pythonPackages/numpy/numpy/oldnumeric/user_array.py +++ /dev/null @@ -1,9 +0,0 @@ - - -from numpy.oldnumeric import * -from numpy.lib.user_array import container as UserArray - -import numpy.oldnumeric as nold -__all__ = nold.__all__[:] -__all__ += ['UserArray'] -del nold diff --git a/pythonPackages/numpy/numpy/polynomial/__init__.py b/pythonPackages/numpy/numpy/polynomial/__init__.py deleted file mode 100755 index 7e755ca52b..0000000000 --- a/pythonPackages/numpy/numpy/polynomial/__init__.py +++ /dev/null @@ -1,22 +0,0 @@ -""" -A sub-package for efficiently dealing with polynomials. - -Within the documentation for this sub-package, a "finite power series," -i.e., a polynomial (also referred to simply as a "series") is represented -by a 1-D numpy array of the polynomial's coefficients, ordered from lowest -order term to highest. For example, array([1,2,3]) represents -``P_0 + 2*P_1 + 3*P_2``, where P_n is the n-th order basis polynomial -applicable to the specific module in question, e.g., `polynomial` (which -"wraps" the "standard" basis) or `chebyshev`. For optimal performance, -all operations on polynomials, including evaluation at an argument, are -implemented as operations on the coefficients. Additional (module-specific) -information can be found in the docstring for the module of interest. 
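The coefficient convention the `numpy.polynomial` docstring describes (lowest-order term first) can be illustrated with a short sketch. This assumes a modern NumPy where `numpy.polynomial.polynomial` is the public entry point:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# [1, 2, 3] represents 1*P_0 + 2*P_1 + 3*P_2; for the standard
# ("power") basis this is 1 + 2*x + 3*x**2.
coef = np.array([1.0, 2.0, 3.0])
x = 2.0
value = P.polyval(x, coef)
manual = coef[0] + coef[1]*x + coef[2]*x**2  # 1 + 4 + 12
```

The same low-to-high ordering is used by every module in the sub-package, including `chebyshev`, where the basis polynomials are `T_n` instead of powers of `x`.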
- -""" -from polynomial import * -from chebyshev import * -from polyutils import * - -from numpy.testing import Tester -test = Tester().test -bench = Tester().bench diff --git a/pythonPackages/numpy/numpy/polynomial/chebyshev.py b/pythonPackages/numpy/numpy/polynomial/chebyshev.py deleted file mode 100755 index 99edecca14..0000000000 --- a/pythonPackages/numpy/numpy/polynomial/chebyshev.py +++ /dev/null @@ -1,1290 +0,0 @@ -""" -Objects for dealing with Chebyshev series. - -This module provides a number of objects (mostly functions) useful for -dealing with Chebyshev series, including a `Chebyshev` class that -encapsulates the usual arithmetic operations. (General information -on how this module represents and works with such polynomials is in the -docstring for its "parent" sub-package, `numpy.polynomial`). - -Constants ---------- -- `chebdomain` -- Chebyshev series default domain, [-1,1]. -- `chebzero` -- (Coefficients of the) Chebyshev series that evaluates - identically to 0. -- `chebone` -- (Coefficients of the) Chebyshev series that evaluates - identically to 1. -- `chebx` -- (Coefficients of the) Chebyshev series for the identity map, - ``f(x) = x``. - -Arithmetic ----------- -- `chebadd` -- add two Chebyshev series. -- `chebsub` -- subtract one Chebyshev series from another. -- `chebmul` -- multiply two Chebyshev series. -- `chebdiv` -- divide one Chebyshev series by another. -- `chebval` -- evaluate a Chebyshev series at given points. - -Calculus --------- -- `chebder` -- differentiate a Chebyshev series. -- `chebint` -- integrate a Chebyshev series. - -Misc Functions --------------- -- `chebfromroots` -- create a Chebyshev series with specified roots. -- `chebroots` -- find the roots of a Chebyshev series. -- `chebvander` -- Vandermonde-like matrix for Chebyshev polynomials. -- `chebfit` -- least-squares fit returning a Chebyshev series. -- `chebtrim` -- trim leading coefficients from a Chebyshev series. 
-- `chebline` -- Chebyshev series of given straight line. -- `cheb2poly` -- convert a Chebyshev series to a polynomial. -- `poly2cheb` -- convert a polynomial to a Chebyshev series. - -Classes ------- -- `Chebyshev` -- A Chebyshev series class. - -See also --------- -`numpy.polynomial` - -Notes ------ -The implementations of multiplication, division, integration, and -differentiation use the algebraic identities [1]_: - -.. math :: - T_n(x) = \\frac{z^n + z^{-n}}{2} \\\\ - z\\frac{dx}{dz} = \\frac{z - z^{-1}}{2}. - -where - -.. math :: x = \\frac{z + z^{-1}}{2}. - -These identities allow a Chebyshev series to be expressed as a finite, -symmetric Laurent series. In this module, this sort of Laurent series -is referred to as a "z-series." - -References ----------- -.. [1] A. T. Benjamin, et al., "Combinatorial Trigonometry with Chebyshev - Polynomials," *Journal of Statistical Planning and Inference 14*, 2008 - (preprint: http://www.math.hmc.edu/~benjamin/papers/CombTrig.pdf, pg. 4) - -""" -from __future__ import division - -__all__ = ['chebzero', 'chebone', 'chebx', 'chebdomain', 'chebline', - 'chebadd', 'chebsub', 'chebmul', 'chebdiv', 'chebval', 'chebder', - 'chebint', 'cheb2poly', 'poly2cheb', 'chebfromroots', 'chebvander', - 'chebfit', 'chebtrim', 'chebroots', 'Chebyshev'] - -import numpy as np -import numpy.linalg as la -import polyutils as pu -import warnings -from polytemplate import polytemplate - -chebtrim = pu.trimcoef - -# -# A collection of functions for manipulating z-series. These are private -# functions and do minimal error checking. -# - -def _cseries_to_zseries(cs) : - """Convert Chebyshev series to z-series. - - Convert a Chebyshev series to the equivalent z-series. The result is - never an empty array. The dtype of the return is the same as that of - the input. No checks are run on the arguments as this routine is for - internal use.
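The z-series mapping described in the Notes can be sketched standalone. The helpers below mirror the deleted private functions `_cseries_to_zseries` and `_zseries_to_cseries` (the names here are local re-implementations, not the module's API), and show why the representation is useful: multiplication of Chebyshev series becomes plain convolution in z-space.

```python
import numpy as np

def cseries_to_zseries(cs):
    # T_n(x) = (z**n + z**-n)/2, so each Chebyshev coefficient is
    # halved and mirrored about the center of an odd-length array.
    n = cs.size
    zs = np.zeros(2*n - 1, dtype=cs.dtype)
    zs[n-1:] = cs/2
    return zs + zs[::-1]

def zseries_to_cseries(zs):
    # Inverse map: take the upper half and double every term past T_0.
    n = (zs.size + 1)//2
    cs = zs[n-1:].copy()
    cs[1:n] *= 2
    return cs

cs = np.array([1.0, 2.0, 3.0])
zs = cseries_to_zseries(cs)       # symmetric, odd length
back = zseries_to_cseries(zs)     # round trip recovers cs

# Multiplication of Chebyshev series is convolution of z-series.
c1 = np.array([1.0, 2.0, 3.0])
c2 = np.array([3.0, 2.0, 1.0])
prod = zseries_to_cseries(np.convolve(cseries_to_zseries(c1),
                                      cseries_to_zseries(c2)))
```

The product above matches the `chebmul` docstring example later in this file.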
- - Parameters - ---------- - cs : 1-d ndarray - Chebyshev coefficients, ordered from low to high - - Returns - ------- - zs : 1-d ndarray - Odd length symmetric z-series, ordered from low to high. - - """ - n = cs.size - zs = np.zeros(2*n-1, dtype=cs.dtype) - zs[n-1:] = cs/2 - return zs + zs[::-1] - -def _zseries_to_cseries(zs) : - """Convert z-series to a Chebyshev series. - - Convert a z-series to the equivalent Chebyshev series. The result is - never an empty array. The dtype of the return is the same as that of - the input. No checks are run on the arguments as this routine is for - internal use. - - Parameters - ---------- - zs : 1-d ndarray - Odd length symmetric z-series, ordered from low to high. - - Returns - ------- - cs : 1-d ndarray - Chebyshev coefficients, ordered from low to high. - - """ - n = (zs.size + 1)//2 - cs = zs[n-1:].copy() - cs[1:n] *= 2 - return cs - -def _zseries_mul(z1, z2) : - """Multiply two z-series. - - Multiply two z-series to produce a z-series. - - Parameters - ---------- - z1, z2 : 1-d ndarray - The arrays must be 1-d but this is not checked. - - Returns - ------- - product : 1-d ndarray - The product z-series. - - Notes - ----- - This is simply convolution. If symmetric/anti-symmetric z-series are - denoted by S/A then the following rules apply: - - S*S, A*A -> S - S*A, A*S -> A - - """ - return np.convolve(z1, z2) - -def _zseries_div(z1, z2) : - """Divide the first z-series by the second. - - Divide `z1` by `z2` and return the quotient and remainder as z-series. - Warning: this implementation only applies when both z1 and z2 have the - same symmetry, which is sufficient for present purposes. - - Parameters - ---------- - z1, z2 : 1-d ndarray - The arrays must be 1-d and have the same symmetry, but this is not - checked. - - Returns - ------- - - (quotient, remainder) : 1-d ndarrays - Quotient and remainder as z-series.
- - Notes - ----- - This is not the same as polynomial division on account of the desired form - of the remainder. If symmetric/anti-symmetric z-series are denoted by S/A - then the following rules apply: - - S/S -> S,S - A/A -> S,A - - The restriction to types of the same symmetry could be fixed but seems like - unneeded generality. There is no natural form for the remainder in the case - where there is no symmetry. - - """ - z1 = z1.copy() - z2 = z2.copy() - len1 = len(z1) - len2 = len(z2) - if len2 == 1 : - z1 /= z2 - return z1, z1[:1]*0 - elif len1 < len2 : - return z1[:1]*0, z1 - else : - dlen = len1 - len2 - scl = z2[0] - z2 /= scl - quo = np.empty(dlen + 1, dtype=z1.dtype) - i = 0 - j = dlen - while i < j : - r = z1[i] - quo[i] = z1[i] - quo[dlen - i] = r - tmp = r*z2 - z1[i:i+len2] -= tmp - z1[j:j+len2] -= tmp - i += 1 - j -= 1 - r = z1[i] - quo[i] = r - tmp = r*z2 - z1[i:i+len2] -= tmp - quo /= scl - rem = z1[i+1:i-1+len2].copy() - return quo, rem - -def _zseries_der(zs) : - """Differentiate a z-series. - - The derivative is with respect to x, not z. This is achieved using the - chain rule and the value of dx/dz given in the module notes. - - Parameters - ---------- - zs : z-series - The z-series to differentiate. - - Returns - ------- - derivative : z-series - The derivative - - Notes - ----- - The zseries for x (ns) has been multiplied by two in order to avoid - using floats that are incompatible with Decimal and likely other - specialized scalar types. This scaling has been compensated by - multiplying the value of zs by two also so that the two cancels in the - division. - - """ - n = len(zs)//2 - ns = np.array([-1, 0, 1], dtype=zs.dtype) - zs *= np.arange(-n, n+1)*2 - d, r = _zseries_div(zs, ns) - return d - -def _zseries_int(zs) : - """Integrate a z-series. - - The integral is with respect to x, not z. This is achieved by a change - of variable using dx/dz given in the module notes.
- - Parameters - ---------- - zs : z-series - The z-series to integrate - - Returns - ------- - integral : z-series - The indefinite integral - - Notes - ----- - The zseries for x (ns) has been multiplied by two in order to avoid - using floats that are incompatible with Decimal and likely other - specialized scalar types. This scaling has been compensated by - dividing the resulting zs by two. - - """ - n = 1 + len(zs)//2 - ns = np.array([-1, 0, 1], dtype=zs.dtype) - zs = _zseries_mul(zs, ns) - div = np.arange(-n, n+1)*2 - zs[:n] /= div[:n] - zs[n+1:] /= div[n+1:] - zs[n] = 0 - return zs - -# -# Chebyshev series functions -# - - -def poly2cheb(pol) : - """ - poly2cheb(pol) - - Convert a polynomial to a Chebyshev series. - - Convert an array representing the coefficients of a polynomial (relative - to the "standard" basis) ordered from lowest degree to highest, to an - array of the coefficients of the equivalent Chebyshev series, ordered - from lowest to highest degree. - - Parameters - ---------- - pol : array_like - 1-d array containing the polynomial coefficients - - Returns - ------- - cs : ndarray - 1-d array containing the coefficients of the equivalent Chebyshev - series. - - See Also - -------- - cheb2poly - - Notes - ----- - Note that a consequence of the input needing to be array_like and that - the output is an ndarray, is that if one is going to use this function - to convert a Polynomial instance, P, to a Chebyshev instance, T, the - usage is ``T = Chebyshev(poly2cheb(P.coef))``; see Examples below. - - Examples - -------- - >>> from numpy import polynomial as P - >>> p = P.Polynomial(np.arange(4)) - >>> p - Polynomial([ 0., 1., 2., 3.], [-1., 1.]) - >>> c = P.Chebyshev(P.poly2cheb(p.coef)) - >>> c - Chebyshev([ 1. , 3.25, 1. 
, 0.75], [-1., 1.]) - - """ - [pol] = pu.as_series([pol]) - pol = pol[::-1] - zs = pol[:1].copy() - x = np.array([.5, 0, .5], dtype=pol.dtype) - for i in range(1, len(pol)) : - zs = _zseries_mul(zs, x) - zs[i] += pol[i] - return _zseries_to_cseries(zs) - - -def cheb2poly(cs) : - """ - cheb2poly(cs) - - Convert a Chebyshev series to a polynomial. - - Convert an array representing the coefficients of a Chebyshev series, - ordered from lowest degree to highest, to an array of the coefficients - of the equivalent polynomial (relative to the "standard" basis) ordered - from lowest to highest degree. - - Parameters - ---------- - cs : array_like - 1-d array containing the Chebyshev series coefficients, ordered - from lowest order term to highest. - - Returns - ------- - pol : ndarray - 1-d array containing the coefficients of the equivalent polynomial - (relative to the "standard" basis) ordered from lowest order term - to highest. - - See Also - -------- - poly2cheb - - Notes - ----- - Note that a consequence of the input needing to be array_like and that - the output is an ndarray, is that if one is going to use this function - to convert a Chebyshev instance, T, to a Polynomial instance, P, the - usage is ``P = Polynomial(cheb2poly(T.coef))``; see Examples below. - - Examples - -------- - >>> from numpy import polynomial as P - >>> c = P.Chebyshev(np.arange(4)) - >>> c - Chebyshev([ 0., 1., 2., 3.], [-1., 1.]) - >>> p = P.Polynomial(P.cheb2poly(c.coef)) - >>> p - Polynomial([ -2., -8., 4., 12.], [-1., 1.]) - - """ - [cs] = pu.as_series([cs]) - pol = np.zeros(len(cs), dtype=cs.dtype) - quo = _cseries_to_zseries(cs) - x = np.array([.5, 0, .5], dtype=pol.dtype) - for i in range(0, len(cs) - 1) : - quo, rem = _zseries_div(quo, x) - pol[i] = rem[0] - pol[-1] = quo[0] - return pol - -# -# These constant arrays are of integer type so as to be compatible -# with the widest range of other types, such as Decimal. -# - -# Chebyshev default domain.
-chebdomain = np.array([-1,1]) - -# Chebyshev coefficients representing zero. -chebzero = np.array([0]) - -# Chebyshev coefficients representing one. -chebone = np.array([1]) - -# Chebyshev coefficients representing the identity x. -chebx = np.array([0,1]) - -def chebline(off, scl) : - """ - Chebyshev series whose graph is a straight line. - - - - Parameters - ---------- - off, scl : scalars - The specified line is given by ``off + scl*x``. - - Returns - ------- - y : ndarray - This module's representation of the Chebyshev series for - ``off + scl*x``. - - See Also - -------- - polyline - - Examples - -------- - >>> import numpy.polynomial.chebyshev as C - >>> C.chebline(3,2) - array([3, 2]) - >>> C.chebval(-3, C.chebline(3,2)) # should be -3 - -3.0 - - """ - if scl != 0 : - return np.array([off,scl]) - else : - return np.array([off]) - -def chebfromroots(roots) : - """ - Generate a Chebyshev series with the given roots. - - Return the array of coefficients for the C-series whose roots (a.k.a. - "zeros") are given by *roots*. The returned array of coefficients is - ordered from lowest order "term" to highest, and zeros of multiplicity - greater than one must be included in *roots* a number of times equal - to their multiplicity (e.g., if `2` is a root of multiplicity three, - then [2,2,2] must be in *roots*). - - Parameters - ---------- - roots : array_like - Sequence containing the roots. - - Returns - ------- - out : ndarray - 1-d array of the C-series' coefficients, ordered from low to - high. If all roots are real, ``out.dtype`` is a float type; - otherwise, ``out.dtype`` is a complex type, even if all the - coefficients in the result are real (see Examples below). - - See Also - -------- - polyfromroots - - Notes - ----- - What is returned are the :math:`c_i` such that: - - .. 
math:: - - \\sum_{i=0}^{n} c_i*T_i(x) = \\prod_{i=0}^{n} (x - roots[i]) - - where ``n == len(roots)`` and :math:`T_i(x)` is the `i`-th Chebyshev - (basis) polynomial over the domain `[-1,1]`. Note that, unlike - `polyfromroots`, due to the nature of the C-series basis set, the - above identity *does not* imply :math:`c_n = 1` identically (see - Examples). - - Examples - -------- - >>> import numpy.polynomial.chebyshev as C - >>> C.chebfromroots((-1,0,1)) # x^3 - x relative to the standard basis - array([ 0. , -0.25, 0. , 0.25]) - >>> j = complex(0,1) - >>> C.chebfromroots((-j,j)) # x^2 + 1 relative to the standard basis - array([ 1.5+0.j, 0.0+0.j, 0.5+0.j]) - - """ - if len(roots) == 0 : - return np.ones(1) - else : - [roots] = pu.as_series([roots], trim=False) - prd = np.array([1], dtype=roots.dtype) - for r in roots : - fac = np.array([.5, -r, .5], dtype=roots.dtype) - prd = _zseries_mul(fac, prd) - return _zseries_to_cseries(prd) - - -def chebadd(c1, c2): - """ - Add one Chebyshev series to another. - - Returns the sum of two Chebyshev series `c1` + `c2`. The arguments - are sequences of coefficients ordered from lowest order term to - highest, i.e., [1,2,3] represents the series ``T_0 + 2*T_1 + 3*T_2``. - - Parameters - ---------- - c1, c2 : array_like - 1-d arrays of Chebyshev series coefficients ordered from low to - high. - - Returns - ------- - out : ndarray - Array representing the Chebyshev series of their sum. - - See Also - -------- - chebsub, chebmul, chebdiv, chebpow - - Notes - ----- - Unlike multiplication, division, etc., the sum of two Chebyshev series - is a Chebyshev series (without having to "reproject" the result onto - the basis set) so addition, just like that of "standard" polynomials, - is simply "component-wise." 
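The "component-wise" behaviour of addition noted above can be checked against the public `numpy.polynomial.chebyshev` module (assuming a NumPy recent enough to ship it):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

c1 = np.array([1.0, 2.0, 3.0])
c2 = np.array([3.0, 2.0, 1.0, 4.0])  # one extra term

# Addition needs no reprojection onto the basis: coefficients add
# slot by slot, and the extra T_3 coefficient of c2 passes through.
total = C.chebadd(c1, c2)
```

Contrast this with `chebmul` below, where the product of two series is not component-wise and must be re-expressed in the Chebyshev basis.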
- - Examples - -------- - >>> from numpy.polynomial import chebyshev as C - >>> c1 = (1,2,3) - >>> c2 = (3,2,1) - >>> C.chebadd(c1,c2) - array([ 4., 4., 4.]) - - """ - # c1, c2 are trimmed copies - [c1, c2] = pu.as_series([c1, c2]) - if len(c1) > len(c2) : - c1[:c2.size] += c2 - ret = c1 - else : - c2[:c1.size] += c1 - ret = c2 - return pu.trimseq(ret) - - -def chebsub(c1, c2): - """ - Subtract one Chebyshev series from another. - - Returns the difference of two Chebyshev series `c1` - `c2`. The - sequences of coefficients are from lowest order term to highest, i.e., - [1,2,3] represents the series ``T_0 + 2*T_1 + 3*T_2``. - - Parameters - ---------- - c1, c2 : array_like - 1-d arrays of Chebyshev series coefficients ordered from low to - high. - - Returns - ------- - out : ndarray - Of Chebyshev series coefficients representing their difference. - - See Also - -------- - chebadd, chebmul, chebdiv, chebpow - - Notes - ----- - Unlike multiplication, division, etc., the difference of two Chebyshev - series is a Chebyshev series (without having to "reproject" the result - onto the basis set) so subtraction, just like that of "standard" - polynomials, is simply "component-wise." - - Examples - -------- - >>> from numpy.polynomial import chebyshev as C - >>> c1 = (1,2,3) - >>> c2 = (3,2,1) - >>> C.chebsub(c1,c2) - array([-2., 0., 2.]) - >>> C.chebsub(c2,c1) # -C.chebsub(c1,c2) - array([ 2., 0., -2.]) - - """ - # c1, c2 are trimmed copies - [c1, c2] = pu.as_series([c1, c2]) - if len(c1) > len(c2) : - c1[:c2.size] -= c2 - ret = c1 - else : - c2 = -c2 - c2[:c1.size] += c1 - ret = c2 - return pu.trimseq(ret) - - -def chebmul(c1, c2): - """ - Multiply one Chebyshev series by another. - - Returns the product of two Chebyshev series `c1` * `c2`. The arguments - are sequences of coefficients, from lowest order "term" to highest, - e.g., [1,2,3] represents the series ``T_0 + 2*T_1 + 3*T_2``. 
- - Parameters - ---------- - c1, c2 : array_like - 1-d arrays of Chebyshev series coefficients ordered from low to - high. - - Returns - ------- - out : ndarray - Of Chebyshev series coefficients representing their product. - - See Also - -------- - chebadd, chebsub, chebdiv, chebpow - - Notes - ----- - In general, the (polynomial) product of two C-series results in terms - that are not in the Chebyshev polynomial basis set. Thus, to express - the product as a C-series, it is typically necessary to "re-project" - the product onto said basis set, which typically produces - "un-intuitive" (but correct) results; see Examples section below. - - Examples - -------- - >>> from numpy.polynomial import chebyshev as C - >>> c1 = (1,2,3) - >>> c2 = (3,2,1) - >>> C.chebmul(c1,c2) # multiplication requires "reprojection" - array([ 6.5, 12. , 12. , 4. , 1.5]) - - """ - # c1, c2 are trimmed copies - [c1, c2] = pu.as_series([c1, c2]) - z1 = _cseries_to_zseries(c1) - z2 = _cseries_to_zseries(c2) - prd = _zseries_mul(z1, z2) - ret = _zseries_to_cseries(prd) - return pu.trimseq(ret) - - -def chebdiv(c1, c2): - """ - Divide one Chebyshev series by another. - - Returns the quotient-with-remainder of two Chebyshev series - `c1` / `c2`. The arguments are sequences of coefficients from lowest - order "term" to highest, e.g., [1,2,3] represents the series - ``T_0 + 2*T_1 + 3*T_2``. - - Parameters - ---------- - c1, c2 : array_like - 1-d arrays of Chebyshev series coefficients ordered from low to - high. - - Returns - ------- - [quo, rem] : ndarrays - Of Chebyshev series coefficients representing the quotient and - remainder. - - See Also - -------- - chebadd, chebsub, chebmul, chebpow - - Notes - ----- - In general, the (polynomial) division of one C-series by another - results in quotient and remainder terms that are not in the Chebyshev - polynomial basis set. 
Thus, to express these results as C-series, it - is typically necessary to "re-project" the results onto said basis - set, which typically produces "un-intuitive" (but correct) results; - see Examples section below. - - Examples - -------- - >>> from numpy.polynomial import chebyshev as C - >>> c1 = (1,2,3) - >>> c2 = (3,2,1) - >>> C.chebdiv(c1,c2) # quotient "intuitive," remainder not - (array([ 3.]), array([-8., -4.])) - >>> c2 = (0,1,2,3) - >>> C.chebdiv(c2,c1) # neither "intuitive" - (array([ 0., 2.]), array([-2., -4.])) - - """ - # c1, c2 are trimmed copies - [c1, c2] = pu.as_series([c1, c2]) - if c2[-1] == 0 : - raise ZeroDivisionError() - - lc1 = len(c1) - lc2 = len(c2) - if lc1 < lc2 : - return c1[:1]*0, c1 - elif lc2 == 1 : - return c1/c2[-1], c1[:1]*0 - else : - z1 = _cseries_to_zseries(c1) - z2 = _cseries_to_zseries(c2) - quo, rem = _zseries_div(z1, z2) - quo = pu.trimseq(_zseries_to_cseries(quo)) - rem = pu.trimseq(_zseries_to_cseries(rem)) - return quo, rem - -def chebpow(cs, pow, maxpower=16) : - """Raise a Chebyshev series to a power. - - Returns the Chebyshev series `cs` raised to the power `pow`. The - argument `cs` is a sequence of coefficients ordered from low to high, - i.e., [1,2,3] is the series ``T_0 + 2*T_1 + 3*T_2.`` - - Parameters - ---------- - cs : array_like - 1-d array of Chebyshev series coefficients ordered from low to - high. - pow : integer - Power to which the series will be raised. - maxpower : integer, optional - Maximum power allowed. This is mainly to limit growth of the series - to unmanageable size. Default is 16. - - Returns - ------- - coef : ndarray - Chebyshev series of power.
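As the implementation suggests, raising a series to a power is just repeated multiplication with reprojection onto the Chebyshev basis at each step. A quick check against the public API (modern NumPy assumed):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

cs = np.array([1.0, 2.0])  # T_0 + 2*T_1, i.e. 1 + 2*x

# chebpow matches two successive chebmul calls; by hand,
# (1 + 2x)**3 = 7*T_0 + 12*T_1 + 6*T_2 + 2*T_3.
by_pow = C.chebpow(cs, 3)
by_mul = C.chebmul(C.chebmul(cs, cs), cs)
```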
- - See Also - -------- - chebadd, chebsub, chebmul, chebdiv - - Examples - -------- - - """ - # cs is a trimmed copy - [cs] = pu.as_series([cs]) - power = int(pow) - if power != pow or power < 0 : - raise ValueError("Power must be a non-negative integer.") - elif maxpower is not None and power > maxpower : - raise ValueError("Power is too large") - elif power == 0 : - return np.array([1], dtype=cs.dtype) - elif power == 1 : - return cs - else : - # This can be made more efficient by using powers of two - # in the usual way. - zs = _cseries_to_zseries(cs) - prd = zs - for i in range(2, power + 1) : - prd = np.convolve(prd, zs) - return _zseries_to_cseries(prd) - -def chebder(cs, m=1, scl=1) : - """ - Differentiate a Chebyshev series. - - Returns the series `cs` differentiated `m` times. At each iteration the - result is multiplied by `scl` (the scaling factor is for use in a linear - change of variable). The argument `cs` is the sequence of coefficients - from lowest order "term" to highest, e.g., [1,2,3] represents the series - ``T_0 + 2*T_1 + 3*T_2``. - - Parameters - ---------- - cs: array_like - 1-d array of Chebyshev series coefficients ordered from low to high. - m : int, optional - Number of derivatives taken, must be non-negative. (Default: 1) - scl : scalar, optional - Each differentiation is multiplied by `scl`. The end result is - multiplication by ``scl**m``. This is for use in a linear change of - variable. (Default: 1) - - Returns - ------- - der : ndarray - Chebyshev series of the derivative. - - See Also - -------- - chebint - - Notes - ----- - In general, the result of differentiating a C-series needs to be - "re-projected" onto the C-series basis set. Thus, typically, the - result of this function is "un-intuitive," albeit correct; see Examples - section below. 
- - Examples - -------- - >>> from numpy.polynomial import chebyshev as C - >>> cs = (1,2,3,4) - >>> C.chebder(cs) - array([ 14., 12., 24.]) - >>> C.chebder(cs,3) - array([ 96.]) - >>> C.chebder(cs,scl=-1) - array([-14., -12., -24.]) - >>> C.chebder(cs,2,-1) - array([ 12., 96.]) - - """ - cnt = int(m) - - if cnt != m: - raise ValueError, "The order of derivation must be integer" - if cnt < 0 : - raise ValueError, "The order of derivation must be non-negative" - if not np.isscalar(scl) : - raise ValueError, "The scl parameter must be a scalar" - - # cs is a trimmed copy - [cs] = pu.as_series([cs]) - if cnt == 0: - return cs - elif cnt >= len(cs): - return cs[:1]*0 - else : - zs = _cseries_to_zseries(cs) - for i in range(cnt): - zs = _zseries_der(zs)*scl - return _zseries_to_cseries(zs) - - -def chebint(cs, m=1, k=[], lbnd=0, scl=1): - """ - Integrate a Chebyshev series. - - Returns, as a C-series, the input C-series `cs`, integrated `m` times - from `lbnd` to `x`. At each iteration the resulting series is - **multiplied** by `scl` and an integration constant, `k`, is added. - The scaling factor is for use in a linear change of variable. ("Buyer - beware": note that, depending on what one is doing, one may want `scl` - to be the reciprocal of what one might expect; for more information, - see the Notes section below.) The argument `cs` is a sequence of - coefficients, from lowest order C-series "term" to highest, e.g., - [1,2,3] represents the series :math:`T_0(x) + 2T_1(x) + 3T_2(x)`. - - Parameters - ---------- - cs : array_like - 1-d array of C-series coefficients, ordered from low to high. - m : int, optional - Order of integration, must be positive. (Default: 1) - k : {[], list, scalar}, optional - Integration constant(s). The value of the first integral at zero - is the first value in the list, the value of the second integral - at zero is the second value, etc. If ``k == []`` (the default), - all constants are set to zero. 
If ``m == 1``, a single scalar can - be given instead of a list. - lbnd : scalar, optional - The lower bound of the integral. (Default: 0) - scl : scalar, optional - Following each integration the result is *multiplied* by `scl` - before the integration constant is added. (Default: 1) - - Returns - ------- - S : ndarray - C-series coefficients of the integral. - - Raises - ------ - ValueError - If ``m < 1``, ``len(k) > m``, ``np.isscalar(lbnd) == False``, or - ``np.isscalar(scl) == False``. - - See Also - -------- - chebder - - Notes - ----- - Note that the result of each integration is *multiplied* by `scl`. - Why is this important to note? Say one is making a linear change of - variable :math:`u = ax + b` in an integral relative to `x`. Then - :math:`dx = du/a`, so one will need to set `scl` equal to :math:`1/a` - - perhaps not what one would have first thought. - - Also note that, in general, the result of integrating a C-series needs - to be "re-projected" onto the C-series basis set. Thus, typically, - the result of this function is "un-intuitive," albeit correct; see - Examples section below. 
- - Examples - -------- - >>> from numpy.polynomial import chebyshev as C - >>> cs = (1,2,3) - >>> C.chebint(cs) - array([ 0.5, -0.5, 0.5, 0.5]) - >>> C.chebint(cs,3) - array([ 0.03125 , -0.1875 , 0.04166667, -0.05208333, 0.01041667, - 0.00625 ]) - >>> C.chebint(cs, k=3) - array([ 3.5, -0.5, 0.5, 0.5]) - >>> C.chebint(cs,lbnd=-2) - array([ 8.5, -0.5, 0.5, 0.5]) - >>> C.chebint(cs,scl=-2) - array([-1., 1., -1., -1.]) - - """ - cnt = int(m) - if np.isscalar(k) : - k = [k] - - if cnt != m: - raise ValueError, "The order of integration must be integer" - if cnt < 0 : - raise ValueError, "The order of integration must be non-negative" - if len(k) > cnt : - raise ValueError, "Too many integration constants" - if not np.isscalar(lbnd) : - raise ValueError, "The lbnd parameter must be a scalar" - if not np.isscalar(scl) : - raise ValueError, "The scl parameter must be a scalar" - - # cs is a trimmed copy - [cs] = pu.as_series([cs]) - if cnt == 0: - return cs - else: - k = list(k) + [0]*(cnt - len(k)) - for i in range(cnt) : - zs = _cseries_to_zseries(cs)*scl - zs = _zseries_int(zs) - cs = _zseries_to_cseries(zs) - cs[0] += k[i] - chebval(lbnd, cs) - return cs - -def chebval(x, cs): - """Evaluate a Chebyshev series. - - If `cs` is of length `n`, this function returns : - - ``p(x) = cs[0]*T_0(x) + cs[1]*T_1(x) + ... + cs[n-1]*T_{n-1}(x)`` - - If x is a sequence or array then p(x) will have the same shape as x. - If r is a ring_like object that supports multiplication and addition - by the values in `cs`, then an object of the same type is returned. - - Parameters - ---------- - x : array_like, ring_like - Array of numbers or objects that support multiplication and - addition with themselves and with the elements of `cs`. - cs : array_like - 1-d array of Chebyshev coefficients ordered from low to high. - - Returns - ------- - values : ndarray, ring_like - If the return is an ndarray then it has the same shape as `x`. 
- - See Also - -------- - chebfit - - Examples - -------- - - Notes - ----- - The evaluation uses Clenshaw recursion, aka synthetic division. - - Examples - -------- - - """ - # cs is a trimmed copy - [cs] = pu.as_series([cs]) - if isinstance(x, tuple) or isinstance(x, list) : - x = np.asarray(x) - - if len(cs) == 1 : - c0 = cs[0] - c1 = 0 - elif len(cs) == 2 : - c0 = cs[0] - c1 = cs[1] - else : - x2 = 2*x - c0 = cs[-2] - c1 = cs[-1] - for i in range(3, len(cs) + 1) : - tmp = c0 - c0 = cs[-i] - c1 - c1 = tmp + c1*x2 - return c0 + c1*x - -def chebvander(x, deg) : - """Vandermonde matrix of given degree. - - Returns the Vandermonde matrix of degree `deg` and sample points `x`. - This isn't a true Vandermonde matrix because `x` can be an arbitrary - ndarray and the Chebyshev polynomials aren't powers. If ``V`` is the - returned matrix and `x` is a 2d array, then the elements of ``V`` are - ``V[i,j,k] = T_k(x[i,j])``, where ``T_k`` is the Chebyshev polynomial - of degree ``k``. - - Parameters - ---------- - x : array_like - Array of points. The values are converted to double or complex - doubles. - deg : integer - Degree of the resulting matrix. - - Returns - ------- - vander : Vandermonde matrix. - The shape of the returned matrix is ``x.shape + (deg+1,)``. The last - index is the degree. - - """ - x = np.asarray(x) + 0.0 - order = int(deg) + 1 - v = np.ones((order,) + x.shape, dtype=x.dtype) - if order > 1 : - x2 = 2*x - v[1] = x - for i in range(2, order) : - v[i] = v[i-1]*x2 - v[i-2] - return np.rollaxis(v, 0, v.ndim) - -def chebfit(x, y, deg, rcond=None, full=False, w=None): - """ - Least squares fit of Chebyshev series to data. - - Fit a Chebyshev series ``p(x) = p[0] * T_{0}(x) + ... + p[deg] * - T_{deg}(x)`` of degree `deg` to points `(x, y)`. Returns a vector of - coefficients `p` that minimises the squared error. - - Parameters - ---------- - x : array_like, shape (M,) - x-coordinates of the M sample points ``(x[i], y[i])``. 
- y : array_like, shape (M,) or (M, K) - y-coordinates of the sample points. Several data sets of sample - points sharing the same x-coordinates can be fitted at once by - passing in a 2D-array that contains one dataset per column. - deg : int - Degree of the fitting polynomial - rcond : float, optional - Relative condition number of the fit. Singular values smaller than - this relative to the largest singular value will be ignored. The - default value is len(x)*eps, where eps is the relative precision of - the float type, about 2e-16 in most cases. - full : bool, optional - Switch determining nature of return value. When it is False (the - default) just the coefficients are returned, when True diagnostic - information from the singular value decomposition is also returned. - w : array_like, shape (`M`,), optional - Weights. If not None, the contribution of each point - ``(x[i],y[i])`` to the fit is weighted by `w[i]`. Ideally the - weights are chosen so that the errors of the products ``w[i]*y[i]`` - all have the same variance. The default value is None. - .. versionadded:: 1.5.0 - - Returns - ------- - coef : ndarray, shape (M,) or (M, K) - Chebyshev coefficients ordered from low to high. If `y` was 2-D, - the coefficients for the data in column k of `y` are in column - `k`. - - [residuals, rank, singular_values, rcond] : present when `full` = True - Residuals of the least-squares fit, the effective rank of the - scaled Vandermonde matrix and its singular values, and the - specified value of `rcond`. For more details, see `linalg.lstsq`. - - Warns - ----- - RankWarning - The rank of the coefficient matrix in the least-squares fit is - deficient. The warning is only raised if `full` = False. The - warnings can be turned off by - - >>> import warnings - >>> warnings.simplefilter('ignore', RankWarning) - - See Also - -------- - chebval : Evaluates a Chebyshev series. - chebvander : Vandermonde matrix of Chebyshev series. 
- polyfit : least squares fit using polynomials.
- linalg.lstsq : Computes a least-squares fit from the matrix.
- scipy.interpolate.UnivariateSpline : Computes spline fits.
-
- Notes
- -----
- The solutions are the coefficients ``c[i]`` of the Chebyshev series
- ``T(x)`` that minimize the squared error
-
- ``E = \\sum_j |y_j - T(x_j)|^2``.
-
- This problem is solved by setting up the overdetermined matrix
- equation
-
- ``V(x)*c = y``,
-
- where ``V`` is the Vandermonde matrix of `x`, the elements of ``c`` are
- the coefficients to be solved for, and the elements of `y` are the
- observed values. This equation is then solved using the singular value
- decomposition of ``V``.
-
- If some of the singular values of ``V`` are so small that they are
- neglected, then a `RankWarning` will be issued. This means that the
- coefficient values may be poorly determined. Using a lower order fit
- will usually get rid of the warning. The `rcond` parameter can also be
- set to a value smaller than its default, but the resulting fit may be
- spurious and have large contributions from roundoff error.
-
- Fits using Chebyshev series are usually better conditioned than fits
- using power series, but much can depend on the distribution of the
- sample points and the smoothness of the data. If the quality of the fit
- is inadequate, splines may be a good alternative.
-
- References
- ----------
- .. [1] Wikipedia, "Curve fitting",
- http://en.wikipedia.org/wiki/Curve_fitting
-
- Examples
- --------
-
- """
- order = int(deg) + 1
- x = np.asarray(x) + 0.0
- y = np.asarray(y) + 0.0
-
- # check arguments.
- if deg < 0 : - raise ValueError, "expected deg >= 0" - if x.ndim != 1: - raise TypeError, "expected 1D vector for x" - if x.size == 0: - raise TypeError, "expected non-empty vector for x" - if y.ndim < 1 or y.ndim > 2 : - raise TypeError, "expected 1D or 2D array for y" - if len(x) != len(y): - raise TypeError, "expected x and y to have same length" - - # set up the least squares matrices - lhs = chebvander(x, deg) - rhs = y - if w is not None: - w = np.asarray(w) + 0.0 - if w.ndim != 1: - raise TypeError, "expected 1D vector for w" - if len(x) != len(w): - raise TypeError, "expected x and w to have same length" - # apply weights - if rhs.ndim == 2: - lhs *= w[:, np.newaxis] - rhs *= w[:, np.newaxis] - else: - lhs *= w[:, np.newaxis] - rhs *= w - - # set rcond - if rcond is None : - rcond = len(x)*np.finfo(x.dtype).eps - - # scale the design matrix and solve the least squares equation - scl = np.sqrt((lhs*lhs).sum(0)) - c, resids, rank, s = la.lstsq(lhs/scl, rhs, rcond) - c = (c.T/scl).T - - # warn on rank reduction - if rank != order and not full: - msg = "The fit may be poorly conditioned" - warnings.warn(msg, pu.RankWarning) - - if full : - return c, [resids, rank, s, rcond] - else : - return c - - -def chebroots(cs): - """ - Compute the roots of a Chebyshev series. - - Return the roots (a.k.a "zeros") of the C-series represented by `cs`, - which is the sequence of the C-series' coefficients from lowest order - "term" to highest, e.g., [1,2,3] represents the C-series - ``T_0 + 2*T_1 + 3*T_2``. - - Parameters - ---------- - cs : array_like - 1-d array of C-series coefficients ordered from low to high. - - Returns - ------- - out : ndarray - Array of the roots. If all the roots are real, then so is the - dtype of ``out``; otherwise, ``out``'s dtype is complex. 
-
- See Also
- --------
- polyroots
-
- Notes
- -----
- The roots are computed as the eigenvalues of the companion
- (colleague) matrix assembled from the coefficients, as in the code
- below.
-
- Remember: because the C-series basis set is different from the
- "standard" basis set, the results of this function *may* not be what
- one is expecting.
-
- Examples
- --------
- >>> import numpy.polynomial as P
- >>> import numpy.polynomial.chebyshev as C
- >>> P.polyroots((-1,1,-1,1)) # x^3 - x^2 + x - 1 has two complex roots
- array([ -4.99600361e-16-1.j, -4.99600361e-16+1.j, 1.00000e+00+0.j])
- >>> C.chebroots((-1,1,-1,1)) # T3 - T2 + T1 - T0 has only real roots
- array([ -5.00000000e-01, 2.60860684e-17, 1.00000000e+00])
-
- """
- # cs is a trimmed copy
- [cs] = pu.as_series([cs])
- if len(cs) <= 1 :
- return np.array([], dtype=cs.dtype)
- if len(cs) == 2 :
- return np.array([-cs[0]/cs[1]])
- n = len(cs) - 1
- cmat = np.zeros((n,n), dtype=cs.dtype)
- cmat.flat[1::n+1] = .5
- cmat.flat[n::n+1] = .5
- cmat[1, 0] = 1
- cmat[:,-1] -= cs[:-1]*(.5/cs[-1])
- roots = la.eigvals(cmat)
- roots.sort()
- return roots
-
-
- #
- # Chebyshev series class
- #
-
- exec polytemplate.substitute(name='Chebyshev', nick='cheb', domain='[-1,1]')
-
diff --git a/pythonPackages/numpy/numpy/polynomial/polynomial.py b/pythonPackages/numpy/numpy/polynomial/polynomial.py deleted file mode 100755 index 3144d99852..0000000000 --- a/pythonPackages/numpy/numpy/polynomial/polynomial.py +++ /dev/null @@ -1,892 +0,0 @@
-"""
-Objects for dealing with polynomials.
-
-This module provides a number of objects (mostly functions) useful for
-dealing with polynomials, including a `Polynomial` class that
-encapsulates the usual arithmetic operations. (General information
-on how this module represents and works with polynomial objects is in
-the docstring for its "parent" sub-package, `numpy.polynomial`).
-
-Constants
----------
-- `polydomain` -- Polynomial default domain, [-1,1].
-- `polyzero` -- (Coefficients of the) "zero polynomial."
-- `polyone` -- (Coefficients of the) constant polynomial 1.
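An aside for readers tracing this removed file: the chebyshev routines deleted above survive unchanged in API as `numpy.polynomial.chebyshev` in current NumPy, so the evaluate/fit/roots round trip can still be exercised. A minimal sketch (the sample points are illustrative, not from the original source):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Clenshaw evaluation of T_0 + 2*T_1 + 3*T_2 at x = 0.5:
# T_0(0.5) = 1, T_1(0.5) = 0.5, T_2(0.5) = 2*0.25 - 1 = -0.5
val = C.chebval(0.5, [1, 2, 3])     # 1 + 2*0.5 + 3*(-0.5) = 0.5

# chebroots builds the colleague (companion) matrix and takes its
# eigenvalues; T_3 - T_2 + T_1 - T_0 has only real roots.
roots = C.chebroots([-1, 1, -1, 1])

# Fitting exact degree-2 data recovers the coefficients to roundoff.
x = np.linspace(-1, 1, 9)
y = C.chebval(x, [1, 2, 3])
coef = C.chebfit(x, y, 2)
```

Note the basis caveat from the chebroots docstring: the same coefficient tuple fed to `polyroots` and `chebroots` describes different polynomials and yields different roots.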
-- `polyx` -- (Coefficients of the) identity map polynomial, ``f(x) = x``. - -Arithmetic ----------- -- `polyadd` -- add two polynomials. -- `polysub` -- subtract one polynomial from another. -- `polymul` -- multiply two polynomials. -- `polydiv` -- divide one polynomial by another. -- `polyval` -- evaluate a polynomial at given points. - -Calculus --------- -- `polyder` -- differentiate a polynomial. -- `polyint` -- integrate a polynomial. - -Misc Functions --------------- -- `polyfromroots` -- create a polynomial with specified roots. -- `polyroots` -- find the roots of a polynomial. -- `polyvander` -- Vandermonde-like matrix for powers. -- `polyfit` -- least-squares fit returning a polynomial. -- `polytrim` -- trim leading coefficients from a polynomial. -- `polyline` -- Given a straight line, return the equivalent polynomial - object. - -Classes -------- -- `Polynomial` -- polynomial class. - -See also --------- -`numpy.polynomial` - -""" -from __future__ import division - -__all__ = ['polyzero', 'polyone', 'polyx', 'polydomain', - 'polyline','polyadd', 'polysub', 'polymul', 'polydiv', 'polyval', - 'polyder', 'polyint', 'polyfromroots', 'polyvander', 'polyfit', - 'polytrim', 'polyroots', 'Polynomial'] - -import numpy as np -import numpy.linalg as la -import polyutils as pu -import warnings -from polytemplate import polytemplate - -polytrim = pu.trimcoef - -# -# These are constant arrays are of integer type so as to be compatible -# with the widest range of other types, such as Decimal. -# - -# Polynomial default domain. -polydomain = np.array([-1,1]) - -# Polynomial coefficients representing zero. -polyzero = np.array([0]) - -# Polynomial coefficients representing one. -polyone = np.array([1]) - -# Polynomial coefficients representing the identity x. -polyx = np.array([0,1]) - -# -# Polynomial series functions -# - -def polyline(off, scl) : - """ - Returns an array representing a linear polynomial. 
- - Parameters - ---------- - off, scl : scalars - The "y-intercept" and "slope" of the line, respectively. - - Returns - ------- - y : ndarray - This module's representation of the linear polynomial ``off + - scl*x``. - - See Also - -------- - chebline - - Examples - -------- - >>> from numpy import polynomial as P - >>> P.polyline(1,-1) - array([ 1, -1]) - >>> P.polyval(1, P.polyline(1,-1)) # should be 0 - 0.0 - - """ - if scl != 0 : - return np.array([off,scl]) - else : - return np.array([off]) - -def polyfromroots(roots) : - """ - Generate a polynomial with the given roots. - - Return the array of coefficients for the polynomial whose leading - coefficient (i.e., that of the highest order term) is `1` and whose - roots (a.k.a. "zeros") are given by *roots*. The returned array of - coefficients is ordered from lowest order term to highest, and zeros - of multiplicity greater than one must be included in *roots* a number - of times equal to their multiplicity (e.g., if `2` is a root of - multiplicity three, then [2,2,2] must be in *roots*). - - Parameters - ---------- - roots : array_like - Sequence containing the roots. - - Returns - ------- - out : ndarray - 1-d array of the polynomial's coefficients, ordered from low to - high. If all roots are real, ``out.dtype`` is a float type; - otherwise, ``out.dtype`` is a complex type, even if all the - coefficients in the result are real (see Examples below). - - See Also - -------- - chebfromroots - - Notes - ----- - What is returned are the :math:`a_i` such that: - - .. math:: - - \\sum_{i=0}^{n} a_ix^i = \\prod_{i=0}^{n} (x - roots[i]) - - where ``n == len(roots)``; note that this implies that `1` is always - returned for :math:`a_n`. 
- - Examples - -------- - >>> import numpy.polynomial as P - >>> P.polyfromroots((-1,0,1)) # x(x - 1)(x + 1) = x^3 - x - array([ 0., -1., 0., 1.]) - >>> j = complex(0,1) - >>> P.polyfromroots((-j,j)) # complex returned, though values are real - array([ 1.+0.j, 0.+0.j, 1.+0.j]) - - """ - if len(roots) == 0 : - return np.ones(1) - else : - [roots] = pu.as_series([roots], trim=False) - prd = np.zeros(len(roots) + 1, dtype=roots.dtype) - prd[-1] = 1 - for i in range(len(roots)) : - prd[-(i+2):-1] -= roots[i]*prd[-(i+1):] - return prd - - -def polyadd(c1, c2): - """ - Add one polynomial to another. - - Returns the sum of two polynomials `c1` + `c2`. The arguments are - sequences of coefficients from lowest order term to highest, i.e., - [1,2,3] represents the polynomial ``1 + 2*x + 3*x**2"``. - - Parameters - ---------- - c1, c2 : array_like - 1-d arrays of polynomial coefficients ordered from low to high. - - Returns - ------- - out : ndarray - The coefficient array representing their sum. - - See Also - -------- - polysub, polymul, polydiv, polypow - - Examples - -------- - >>> from numpy import polynomial as P - >>> c1 = (1,2,3) - >>> c2 = (3,2,1) - >>> sum = P.polyadd(c1,c2); sum - array([ 4., 4., 4.]) - >>> P.polyval(2, sum) # 4 + 4(2) + 4(2**2) - 28.0 - - """ - # c1, c2 are trimmed copies - [c1, c2] = pu.as_series([c1, c2]) - if len(c1) > len(c2) : - c1[:c2.size] += c2 - ret = c1 - else : - c2[:c1.size] += c1 - ret = c2 - return pu.trimseq(ret) - - -def polysub(c1, c2): - """ - Subtract one polynomial from another. - - Returns the difference of two polynomials `c1` - `c2`. The arguments - are sequences of coefficients from lowest order term to highest, i.e., - [1,2,3] represents the polynomial ``1 + 2*x + 3*x**2``. - - Parameters - ---------- - c1, c2 : array_like - 1-d arrays of polynomial coefficients ordered from low to - high. - - Returns - ------- - out : ndarray - Of coefficients representing their difference. 
- - See Also - -------- - polyadd, polymul, polydiv, polypow - - Examples - -------- - >>> from numpy import polynomial as P - >>> c1 = (1,2,3) - >>> c2 = (3,2,1) - >>> P.polysub(c1,c2) - array([-2., 0., 2.]) - >>> P.polysub(c2,c1) # -P.polysub(c1,c2) - array([ 2., 0., -2.]) - - """ - # c1, c2 are trimmed copies - [c1, c2] = pu.as_series([c1, c2]) - if len(c1) > len(c2) : - c1[:c2.size] -= c2 - ret = c1 - else : - c2 = -c2 - c2[:c1.size] += c1 - ret = c2 - return pu.trimseq(ret) - - -def polymul(c1, c2): - """ - Multiply one polynomial by another. - - Returns the product of two polynomials `c1` * `c2`. The arguments are - sequences of coefficients, from lowest order term to highest, e.g., - [1,2,3] represents the polynomial ``1 + 2*x + 3*x**2.`` - - Parameters - ---------- - c1, c2 : array_like - 1-d arrays of coefficients representing a polynomial, relative to the - "standard" basis, and ordered from lowest order term to highest. - - Returns - ------- - out : ndarray - Of the coefficients of their product. - - See Also - -------- - polyadd, polysub, polydiv, polypow - - Examples - -------- - >>> import numpy.polynomial as P - >>> c1 = (1,2,3) - >>> c2 = (3,2,1) - >>> P.polymul(c1,c2) - array([ 3., 8., 14., 8., 3.]) - - """ - # c1, c2 are trimmed copies - [c1, c2] = pu.as_series([c1, c2]) - ret = np.convolve(c1, c2) - return pu.trimseq(ret) - - -def polydiv(c1, c2): - """ - Divide one polynomial by another. - - Returns the quotient-with-remainder of two polynomials `c1` / `c2`. - The arguments are sequences of coefficients, from lowest order term - to highest, e.g., [1,2,3] represents ``1 + 2*x + 3*x**2``. - - Parameters - ---------- - c1, c2 : array_like - 1-d arrays of polynomial coefficients ordered from low to high. - - Returns - ------- - [quo, rem] : ndarrays - Of coefficient series representing the quotient and remainder. 
-
- See Also
- --------
- polyadd, polysub, polymul, polypow
-
- Examples
- --------
- >>> import numpy.polynomial as P
- >>> c1 = (1,2,3)
- >>> c2 = (3,2,1)
- >>> P.polydiv(c1,c2)
- (array([ 3.]), array([-8., -4.]))
- >>> P.polydiv(c2,c1)
- (array([ 0.33333333]), array([ 2.66666667, 1.33333333]))
-
- """
- # c1, c2 are trimmed copies
- [c1, c2] = pu.as_series([c1, c2])
- if c2[-1] == 0 :
- raise ZeroDivisionError()
-
- len1 = len(c1)
- len2 = len(c2)
- if len2 == 1 :
- return c1/c2[-1], c1[:1]*0
- elif len1 < len2 :
- return c1[:1]*0, c1
- else :
- dlen = len1 - len2
- scl = c2[-1]
- c2 = c2[:-1]/scl
- i = dlen
- j = len1 - 1
- while i >= 0 :
- c1[i:j] -= c2*c1[j]
- i -= 1
- j -= 1
- return c1[j+1:]/scl, pu.trimseq(c1[:j+1])
-
-def polypow(cs, pow, maxpower=None) :
- """Raise a polynomial to a power.
-
- Returns the polynomial `cs` raised to the power `pow`. The argument
- `cs` is a sequence of coefficients ordered from low to high, i.e.,
- [1,2,3] is the series ``1 + 2*x + 3*x**2``.
-
- Parameters
- ----------
- cs : array_like
- 1-d array of polynomial coefficients ordered from low to
- high.
- pow : integer
- Power to which the series will be raised.
- maxpower : integer, optional
- Maximum power allowed. This is mainly to limit growth of the series
- to unmanageable size. Default is 16.
-
- Returns
- -------
- coef : ndarray
- Power series of `cs` raised to the power `pow`.
-
- See Also
- --------
- polyadd, polysub, polymul, polydiv
-
- Examples
- --------
-
- """
- # cs is a trimmed copy
- [cs] = pu.as_series([cs])
- power = int(pow)
- if power != pow or power < 0 :
- raise ValueError("Power must be a non-negative integer.")
- elif maxpower is not None and power > maxpower :
- raise ValueError("Power is too large")
- elif power == 0 :
- return np.array([1], dtype=cs.dtype)
- elif power == 1 :
- return cs
- else :
- # This can be made more efficient by using powers of two
- # in the usual way.
- prd = cs - for i in range(2, power + 1) : - prd = np.convolve(prd, cs) - return prd - -def polyder(cs, m=1, scl=1): - """ - Differentiate a polynomial. - - Returns the polynomial `cs` differentiated `m` times. At each - iteration the result is multiplied by `scl` (the scaling factor is for - use in a linear change of variable). The argument `cs` is the sequence - of coefficients from lowest order term to highest, e.g., [1,2,3] - represents the polynomial ``1 + 2*x + 3*x**2``. - - Parameters - ---------- - cs: array_like - 1-d array of polynomial coefficients ordered from low to high. - m : int, optional - Number of derivatives taken, must be non-negative. (Default: 1) - scl : scalar, optional - Each differentiation is multiplied by `scl`. The end result is - multiplication by ``scl**m``. This is for use in a linear change - of variable. (Default: 1) - - Returns - ------- - der : ndarray - Polynomial of the derivative. - - See Also - -------- - polyint - - Examples - -------- - >>> from numpy import polynomial as P - >>> cs = (1,2,3,4) # 1 + 2x + 3x**2 + 4x**3 - >>> P.polyder(cs) # (d/dx)(cs) = 2 + 6x + 12x**2 - array([ 2., 6., 12.]) - >>> P.polyder(cs,3) # (d**3/dx**3)(cs) = 24 - array([ 24.]) - >>> P.polyder(cs,scl=-1) # (d/d(-x))(cs) = -2 - 6x - 12x**2 - array([ -2., -6., -12.]) - >>> P.polyder(cs,2,-1) # (d**2/d(-x)**2)(cs) = 6 + 24x - array([ 6., 24.]) - - """ - cnt = int(m) - - if cnt != m: - raise ValueError, "The order of derivation must be integer" - if cnt < 0: - raise ValueError, "The order of derivation must be non-negative" - if not np.isscalar(scl): - raise ValueError, "The scl parameter must be a scalar" - - # cs is a trimmed copy - [cs] = pu.as_series([cs]) - if cnt == 0: - return cs - elif cnt >= len(cs): - return cs[:1]*0 - else : - n = len(cs) - d = np.arange(n)*scl - for i in range(cnt): - cs[i:] *= d[:n-i] - return cs[i+1:].copy() - -def polyint(cs, m=1, k=[], lbnd=0, scl=1): - """ - Integrate a polynomial. 
- - Returns the polynomial `cs`, integrated `m` times from `lbnd` to `x`. - At each iteration the resulting series is **multiplied** by `scl` and - an integration constant, `k`, is added. The scaling factor is for use - in a linear change of variable. ("Buyer beware": note that, depending - on what one is doing, one may want `scl` to be the reciprocal of what - one might expect; for more information, see the Notes section below.) - The argument `cs` is a sequence of coefficients, from lowest order - term to highest, e.g., [1,2,3] represents the polynomial - ``1 + 2*x + 3*x**2``. - - Parameters - ---------- - cs : array_like - 1-d array of polynomial coefficients, ordered from low to high. - m : int, optional - Order of integration, must be positive. (Default: 1) - k : {[], list, scalar}, optional - Integration constant(s). The value of the first integral at zero - is the first value in the list, the value of the second integral - at zero is the second value, etc. If ``k == []`` (the default), - all constants are set to zero. If ``m == 1``, a single scalar can - be given instead of a list. - lbnd : scalar, optional - The lower bound of the integral. (Default: 0) - scl : scalar, optional - Following each integration the result is *multiplied* by `scl` - before the integration constant is added. (Default: 1) - - Returns - ------- - S : ndarray - Coefficients of the integral. - - Raises - ------ - ValueError - If ``m < 1``, ``len(k) > m``, ``np.isscalar(lbnd) == False``, or - ``np.isscalar(scl) == False``. - - See Also - -------- - polyder - - Notes - ----- - Note that the result of each integration is *multiplied* by `scl`. - Why is this important to note? Say one is making a linear change of - variable :math:`u = ax + b` in an integral relative to `x`. Then - :math:`dx = du/a`, so one will need to set `scl` equal to :math:`1/a` - - perhaps not what one would have first thought. 
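The "buyer beware" note on `scl` above can be checked numerically against the modern `numpy.polynomial.polynomial` namespace. A sketch (the values of `a` and `x` below are arbitrary choices for illustration):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# polyint then polyder round-trips (integration constants default to 0).
cs = [1.0, 2.0, 3.0]              # p(x) = 1 + 2x + 3x**2
back = P.polyder(P.polyint(cs))

# Change of variable u = a*x: to integrate with respect to x a series
# expressed in u, pass scl = 1/a, since dx = du/a.
a = 2.0
cu = [1.0, 2.0]                   # p(u) = 1 + 2u
Fu = P.polyint(cu, scl=1/a)       # antiderivative coefficients, still in u
x = 0.3
lhs = P.polyval(a*x, Fu)          # F evaluated at u = a*x
rhs = x + a*x**2                  # direct antiderivative of p(a*t) at t = x
```

Here `lhs` and `rhs` agree, which is exactly why `scl` must be the reciprocal of the slope of the variable change.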
- - Examples - -------- - >>> from numpy import polynomial as P - >>> cs = (1,2,3) - >>> P.polyint(cs) # should return array([0, 1, 1, 1]) - array([ 0., 1., 1., 1.]) - >>> P.polyint(cs,3) # should return array([0, 0, 0, 1/6, 1/12, 1/20]) - array([ 0. , 0. , 0. , 0.16666667, 0.08333333, - 0.05 ]) - >>> P.polyint(cs,k=3) # should return array([3, 1, 1, 1]) - array([ 3., 1., 1., 1.]) - >>> P.polyint(cs,lbnd=-2) # should return array([6, 1, 1, 1]) - array([ 6., 1., 1., 1.]) - >>> P.polyint(cs,scl=-2) # should return array([0, -2, -2, -2]) - array([ 0., -2., -2., -2.]) - - """ - cnt = int(m) - if np.isscalar(k) : - k = [k] - - if cnt != m: - raise ValueError, "The order of integration must be integer" - if cnt < 0 : - raise ValueError, "The order of integration must be non-negative" - if len(k) > cnt : - raise ValueError, "Too many integration constants" - if not np.isscalar(lbnd) : - raise ValueError, "The lbnd parameter must be a scalar" - if not np.isscalar(scl) : - raise ValueError, "The scl parameter must be a scalar" - - # cs is a trimmed copy - [cs] = pu.as_series([cs]) - if cnt == 0: - return cs - else: - k = list(k) + [0]*(cnt - len(k)) - fac = np.arange(1, len(cs) + cnt)/scl - ret = np.zeros(len(cs) + cnt, dtype=cs.dtype) - ret[cnt:] = cs - for i in range(cnt) : - ret[cnt - i:] /= fac[:len(cs) + i] - ret[cnt - i - 1] += k[i] - polyval(lbnd, ret[cnt - i - 1:]) - return ret - -def polyval(x, cs): - """ - Evaluate a polynomial. - - If `cs` is of length `n`, this function returns : - - ``p(x) = cs[0] + cs[1]*x + ... + cs[n-1]*x**(n-1)`` - - If x is a sequence or array then p(x) will have the same shape as x. - If r is a ring_like object that supports multiplication and addition - by the values in `cs`, then an object of the same type is returned. - - Parameters - ---------- - x : array_like, ring_like - If x is a list or tuple, it is converted to an ndarray. Otherwise - it must support addition and multiplication with itself and the - elements of `cs`. 
- cs : array_like - 1-d array of Chebyshev coefficients ordered from low to high. - - Returns - ------- - values : ndarray - The return array has the same shape as `x`. - - See Also - -------- - polyfit - - Notes - ----- - The evaluation uses Horner's method. - - """ - # cs is a trimmed copy - [cs] = pu.as_series([cs]) - if isinstance(x, tuple) or isinstance(x, list) : - x = np.asarray(x) - - c0 = cs[-1] + x*0 - for i in range(2, len(cs) + 1) : - c0 = cs[-i] + c0*x - return c0 - -def polyvander(x, deg) : - """Vandermonde matrix of given degree. - - Returns the Vandermonde matrix of degree `deg` and sample points `x`. - This isn't a true Vandermonde matrix because `x` can be an arbitrary - ndarray. If ``V`` is the returned matrix and `x` is a 2d array, then - the elements of ``V`` are ``V[i,j,k] = x[i,j]**k`` - - Parameters - ---------- - x : array_like - Array of points. The values are converted to double or complex doubles. - deg : integer - Degree of the resulting matrix. - - Returns - ------- - vander : Vandermonde matrix. - The shape of the returned matrix is ``x.shape + (deg+1,)``. The last - index is the degree. - - """ - x = np.asarray(x) + 0.0 - order = int(deg) + 1 - v = np.ones((order,) + x.shape, dtype=x.dtype) - if order > 1 : - v[1] = x - for i in range(2, order) : - v[i] = v[i-1]*x - return np.rollaxis(v, 0, v.ndim) - -def polyfit(x, y, deg, rcond=None, full=False, w=None): - """ - Least-squares fit of a polynomial to data. - - Fit a polynomial ``c0 + c1*x + c2*x**2 + ... + c[deg]*x**deg`` to - points (`x`, `y`). Returns a 1-d (if `y` is 1-d) or 2-d (if `y` is 2-d) - array of coefficients representing, from lowest order term to highest, - the polynomial(s) which minimize the total square error. - - Parameters - ---------- - x : array_like, shape (`M`,) - x-coordinates of the `M` sample (data) points ``(x[i], y[i])``. - y : array_like, shape (`M`,) or (`M`, `K`) - y-coordinates of the sample points. 
Several sets of sample points - sharing the same x-coordinates can be (independently) fit with one - call to `polyfit` by passing in for `y` a 2-d array that contains - one data set per column. - deg : int - Degree of the polynomial(s) to be fit. - rcond : float, optional - Relative condition number of the fit. Singular values smaller - than `rcond`, relative to the largest singular value, will be - ignored. The default value is ``len(x)*eps``, where `eps` is the - relative precision of the platform's float type, about 2e-16 in - most cases. - full : bool, optional - Switch determining the nature of the return value. When ``False`` - (the default) just the coefficients are returned; when ``True``, - diagnostic information from the singular value decomposition (used - to solve the fit's matrix equation) is also returned. - w : array_like, shape (`M`,), optional - Weights. If not None, the contribution of each point - ``(x[i],y[i])`` to the fit is weighted by `w[i]`. Ideally the - weights are chosen so that the errors of the products ``w[i]*y[i]`` - all have the same variance. The default value is None. - .. versionadded:: 1.5.0 - - Returns - ------- - coef : ndarray, shape (`deg` + 1,) or (`deg` + 1, `K`) - Polynomial coefficients ordered from low to high. If `y` was 2-d, - the coefficients in column `k` of `coef` represent the polynomial - fit to the data in `y`'s `k`-th column. - - [residuals, rank, singular_values, rcond] : present when `full` == True - Sum of the squared residuals (SSR) of the least-squares fit; the - effective rank of the scaled Vandermonde matrix; its singular - values; and the specified value of `rcond`. For more information, - see `linalg.lstsq`. - - Raises - ------ - RankWarning - Raised if the matrix in the least-squares fit is rank deficient. - The warning is only raised if `full` == False. 
The warnings can - be turned off by: - - >>> import warnings - >>> warnings.simplefilter('ignore', RankWarning) - - See Also - -------- - polyval : Evaluates a polynomial. - polyvander : Vandermonde matrix for powers. - chebfit : least squares fit using Chebyshev series. - linalg.lstsq : Computes a least-squares fit from the matrix. - scipy.interpolate.UnivariateSpline : Computes spline fits. - - Notes - ----- - The solutions are the coefficients ``c[i]`` of the polynomial ``P(x)`` - that minimizes the total squared error: - - .. math :: E = \\sum_j (y_j - P(x_j))^2 - - This problem is solved by setting up the (typically) over-determined - matrix equation: - - .. math :: V(x)*c = y - - where `V` is the Vandermonde matrix of `x`, the elements of `c` are the - coefficients to be solved for, and the elements of `y` are the observed - values. This equation is then solved using the singular value - decomposition of `V`. - - If some of the singular values of `V` are so small that they are - neglected (and `full` == ``False``), a `RankWarning` will be raised. - This means that the coefficient values may be poorly determined. - Fitting to a lower order polynomial will usually get rid of the warning - (but may not be what you want, of course; if you have independent - reason(s) for choosing the degree which isn't working, you may have to: - a) reconsider those reasons, and/or b) reconsider the quality of your - data). The `rcond` parameter can also be set to a value smaller than - its default, but the resulting fit may be spurious and have large - contributions from roundoff error. - - Polynomial fits using double precision tend to "fail" at about - (polynomial) degree 20. Fits using Chebyshev series are generally - better conditioned, but much can still depend on the distribution of - the sample points and the smoothness of the data. If the quality of - the fit is inadequate, splines may be a good alternative. 
- - Examples - -------- - >>> from numpy import polynomial as P - >>> x = np.linspace(-1,1,51) # x "data": [-1, -0.96, ..., 0.96, 1] - >>> y = x**3 - x + np.random.randn(len(x)) # x^3 - x + N(0,1) "noise" - >>> c, stats = P.polyfit(x,y,3,full=True) - >>> c # c[0], c[2] should be approx. 0, c[1] approx. -1, c[3] approx. 1 - array([ 0.01909725, -1.30598256, -0.00577963, 1.02644286]) - >>> stats # note the large SSR, explaining the rather poor results - [array([ 38.06116253]), 4, array([ 1.38446749, 1.32119158, 0.50443316, - 0.28853036]), 1.1324274851176597e-014] - - Same thing without the added noise - - >>> y = x**3 - x - >>> c, stats = P.polyfit(x,y,3,full=True) - >>> c # c[0], c[2] should be "very close to 0", c[1] ~= -1, c[3] ~= 1 - array([ -1.73362882e-17, -1.00000000e+00, -2.67471909e-16, - 1.00000000e+00]) - >>> stats # note the minuscule SSR - [array([ 7.46346754e-31]), 4, array([ 1.38446749, 1.32119158, - 0.50443316, 0.28853036]), 1.1324274851176597e-014] - - """ - order = int(deg) + 1 - x = np.asarray(x) + 0.0 - y = np.asarray(y) + 0.0 - - # check arguments. 
- if deg < 0 : - raise ValueError, "expected deg >= 0" - if x.ndim != 1: - raise TypeError, "expected 1D vector for x" - if x.size == 0: - raise TypeError, "expected non-empty vector for x" - if y.ndim < 1 or y.ndim > 2 : - raise TypeError, "expected 1D or 2D array for y" - if len(x) != len(y): - raise TypeError, "expected x and y to have same length" - - # set up the least squares matrices - lhs = polyvander(x, deg) - rhs = y - if w is not None: - w = np.asarray(w) + 0.0 - if w.ndim != 1: - raise TypeError, "expected 1D vector for w" - if len(x) != len(w): - raise TypeError, "expected x and w to have same length" - # apply weights - if rhs.ndim == 2: - lhs *= w[:, np.newaxis] - rhs *= w[:, np.newaxis] - else: - lhs *= w[:, np.newaxis] - rhs *= w - - # set rcond - if rcond is None : - rcond = len(x)*np.finfo(x.dtype).eps - - # scale the design matrix and solve the least squares equation - scl = np.sqrt((lhs*lhs).sum(0)) - c, resids, rank, s = la.lstsq(lhs/scl, rhs, rcond) - c = (c.T/scl).T - - # warn on rank reduction - if rank != order and not full: - msg = "The fit may be poorly conditioned" - warnings.warn(msg, pu.RankWarning) - - if full : - return c, [resids, rank, s, rcond] - else : - return c - - -def polyroots(cs): - """ - Compute the roots of a polynomial. - - Return the roots (a.k.a. "zeros") of the "polynomial" `cs`, the - polynomial's coefficients from lowest order term to highest - (e.g., [1,2,3] represents the polynomial ``1 + 2*x + 3*x**2``). - - Parameters - ---------- - cs : array_like of shape (M,) - 1-d array of polynomial coefficients ordered from low to high. - - Returns - ------- - out : ndarray - Array of the roots of the polynomial. If all the roots are real, - then so is the dtype of ``out``; otherwise, ``out``'s dtype is - complex. 
- - See Also - -------- - chebroots - - Examples - -------- - >>> import numpy.polynomial as P - >>> P.polyroots(P.polyfromroots((-1,0,1))) - array([-1., 0., 1.]) - >>> P.polyroots(P.polyfromroots((-1,0,1))).dtype - dtype('float64') - >>> j = complex(0,1) - >>> P.polyroots(P.polyfromroots((-j,0,j))) - array([ 0.00000000e+00+0.j, 0.00000000e+00+1.j, 2.77555756e-17-1.j]) - - """ - # cs is a trimmed copy - [cs] = pu.as_series([cs]) - if len(cs) <= 1 : - return np.array([], dtype=cs.dtype) - if len(cs) == 2 : - return np.array([-cs[0]/cs[1]]) - n = len(cs) - 1 - cmat = np.zeros((n,n), dtype=cs.dtype) - cmat.flat[n::n+1] = 1 - cmat[:,-1] -= cs[:-1]/cs[-1] - roots = la.eigvals(cmat) - roots.sort() - return roots - - -# -# polynomial class -# - -exec polytemplate.substitute(name='Polynomial', nick='poly', domain='[-1,1]') - diff --git a/pythonPackages/numpy/numpy/polynomial/polytemplate.py b/pythonPackages/numpy/numpy/polynomial/polytemplate.py deleted file mode 100755 index 75b1b4eb1c..0000000000 --- a/pythonPackages/numpy/numpy/polynomial/polytemplate.py +++ /dev/null @@ -1,688 +0,0 @@ -""" -Template for the Chebyshev and Polynomial classes. - -This module houses a Python string module Template object (see, e.g., -http://docs.python.org/library/string.html#template-strings) used by -the `polynomial` and `chebyshev` modules to implement their respective -`Polynomial` and `Chebyshev` classes. It provides a mechanism for easily -creating additional specific polynomial classes (e.g., Legendre, Jacobi, -etc.) in the future, such that all these classes will have a common API. - -""" -import string -import sys - -if sys.version_info[0] >= 3: - rel_import = "from . import" -else: - rel_import = "import" - -polytemplate = string.Template(''' -from __future__ import division -REL_IMPORT polyutils as pu -import numpy as np - -class $name(pu.PolyBase) : - """A $name series class. - - Parameters - ---------- - coef : array_like - $name coefficients, in increasing order. 
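The module whose deletion ends above also maps directly onto current NumPy as `numpy.polynomial.polynomial`, so the fit/evaluate/roots pipeline it documents can be demonstrated end to end. A sketch on exact (noise-free) data:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Fit exact cubic data; the coefficients [0, -1, 0, 1] of x**3 - x
# should be recovered to roundoff, with no RankWarning at this degree.
x = np.linspace(-1, 1, 51)
y = x**3 - x
c = P.polyfit(x, y, 3)
fitted = P.polyval(x, c)

# polyroots takes eigenvalues of the companion matrix; round-tripping
# through polyfromroots recovers the original roots.
r = P.polyroots(P.polyfromroots((-1, 0, 1)))
```

With noisy data the coefficients and residuals look like the `full=True` docstring example above instead of recovering the generating polynomial exactly.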
For example, - ``(1, 2, 3)`` implies ``P_0 + 2P_1 + 3P_2`` where the - ``P_i`` are a graded polynomial basis. - domain : (2,) array_like - Domain to use. The interval ``[domain[0], domain[1]]`` is mapped to - the interval ``$domain`` by shifting and scaling. - - Attributes - ---------- - coef : (N,) array - $name coefficients, from low to high. - domain : (2,) array_like - Domain that is mapped to ``$domain``. - - Class Attributes - ---------------- - maxpower : int - Maximum power allowed, i.e., the largest number ``n`` such that - ``p(x)**n`` is allowed. This is to limit runaway polynomial size. - domain : (2,) ndarray - Default domain of the class. - - Notes - ----- - It is important to specify the domain for many uses of graded polynomial, - for instance in fitting data. This is because many of the important - properties of the polynomial basis only hold in a specified interval and - thus the data must be mapped into that domain in order to benefit. - - Examples - -------- - - """ - # Limit runaway size. T_n^m has degree n*2^m - maxpower = 16 - # Default domain - domain = np.array($domain) - # Don't let participate in array operations. Value doesn't matter. 
- __array_priority__ = 0 - - def __init__(self, coef, domain=$domain) : - [coef, domain] = pu.as_series([coef, domain], trim=False) - if len(domain) != 2 : - raise ValueError("Domain has wrong number of elements.") - self.coef = coef - self.domain = domain - - def __repr__(self): - format = "%s(%s, %s)" - coef = repr(self.coef)[6:-1] - domain = repr(self.domain)[6:-1] - return format % ('$name', coef, domain) - - def __str__(self) : - format = "%s(%s, %s)" - return format % ('$nick', str(self.coef), str(self.domain)) - - # Pickle and copy - - def __getstate__(self) : - ret = self.__dict__.copy() - ret['coef'] = self.coef.copy() - ret['domain'] = self.domain.copy() - return ret - - def __setstate__(self, dict) : - self.__dict__ = dict - - # Call - - def __call__(self, arg) : - off, scl = pu.mapparms(self.domain, $domain) - arg = off + scl*arg - return ${nick}val(arg, self.coef) - - - def __iter__(self) : - return iter(self.coef) - - def __len__(self) : - return len(self.coef) - - # Numeric properties. 
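- # Illustrative note: the operators below delegate to the module-level - # ${nick}add/${nick}sub/${nick}mul/${nick}div functions. When both - # operands are $name instances their domains must match exactly, - # otherwise PolyDomainError is raised; scalars and bare coefficient - # sequences are also accepted, e.g., assuming a Polynomial class - # generated from this template: - # - #     >>> p = Polynomial([1, 2, 3]) - #     >>> (p + [1, 1]).coef   # coefficients become [2., 3., 3.]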
- - - def __neg__(self) : - return self.__class__(-self.coef, self.domain) - - def __pos__(self) : - return self - - def __add__(self, other) : - """Returns sum""" - if isinstance(other, self.__class__) : - if np.all(self.domain == other.domain) : - coef = ${nick}add(self.coef, other.coef) - else : - raise PolyDomainError() - else : - try : - coef = ${nick}add(self.coef, other) - except : - return NotImplemented - return self.__class__(coef, self.domain) - - def __sub__(self, other) : - """Returns difference""" - if isinstance(other, self.__class__) : - if np.all(self.domain == other.domain) : - coef = ${nick}sub(self.coef, other.coef) - else : - raise PolyDomainError() - else : - try : - coef = ${nick}sub(self.coef, other) - except : - return NotImplemented - return self.__class__(coef, self.domain) - - def __mul__(self, other) : - """Returns product""" - if isinstance(other, self.__class__) : - if np.all(self.domain == other.domain) : - coef = ${nick}mul(self.coef, other.coef) - else : - raise PolyDomainError() - else : - try : - coef = ${nick}mul(self.coef, other) - except : - return NotImplemented - return self.__class__(coef, self.domain) - - def __div__(self, other): - # set to __floordiv__ /. - return self.__floordiv__(other) - - def __truediv__(self, other) : - # there is no true divide if the rhs is not a scalar, although it - # could return the first n elements of an infinite series. - # It is hard to see where n would come from, though. 
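- # For example, assuming a Polynomial class generated from this - # template, Polynomial([2., 4.]) / 2 yields Polynomial([1., 2.]), - # while dividing by a non-constant series makes this method return - # NotImplemented.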
- if isinstance(other, self.__class__) : - if len(other.coef) == 1 : - coef = self.coef/other.coef[0] - else : - return NotImplemented - elif np.isscalar(other) : - coef = self.coef/other - else : - return NotImplemented - return self.__class__(coef, self.domain) - - def __floordiv__(self, other) : - """Returns the quotient.""" - if isinstance(other, self.__class__) : - if np.all(self.domain == other.domain) : - quo, rem = ${nick}div(self.coef, other.coef) - else : - raise PolyDomainError() - else : - try : - quo, rem = ${nick}div(self.coef, other) - except : - return NotImplemented - return self.__class__(quo, self.domain) - - def __mod__(self, other) : - """Returns the remainder.""" - if isinstance(other, self.__class__) : - if np.all(self.domain == other.domain) : - quo, rem = ${nick}div(self.coef, other.coef) - else : - raise PolyDomainError() - else : - try : - quo, rem = ${nick}div(self.coef, other) - except : - return NotImplemented - return self.__class__(rem, self.domain) - - def __divmod__(self, other) : - """Returns quo, remainder""" - if isinstance(other, self.__class__) : - if np.all(self.domain == other.domain) : - quo, rem = ${nick}div(self.coef, other.coef) - else : - raise PolyDomainError() - else : - try : - quo, rem = ${nick}div(self.coef, other) - except : - return NotImplemented - return self.__class__(quo, self.domain), self.__class__(rem, self.domain) - - def __pow__(self, other) : - try : - coef = ${nick}pow(self.coef, other, maxpower = self.maxpower) - except : - raise - return self.__class__(coef, self.domain) - - def __radd__(self, other) : - try : - coef = ${nick}add(other, self.coef) - except : - return NotImplemented - return self.__class__(coef, self.domain) - - def __rsub__(self, other): - try : - coef = ${nick}sub(other, self.coef) - except : - return NotImplemented - return self.__class__(coef, self.domain) - - def __rmul__(self, other) : - try : - coef = ${nick}mul(other, self.coef) - except : - return NotImplemented - return
self.__class__(coef, self.domain) - - def __rdiv__(self, other): - # set to __floordiv__ /. - return self.__rfloordiv__(other) - - def __rtruediv__(self, other) : - # there is no true divide if the rhs is not a scalar, although it - # could return the first n elements of an infinite series. - # It is hard to see where n would come from, though. - if len(self.coef) == 1 : - try : - quo, rem = ${nick}div(other, self.coef[0]) - except : - return NotImplemented - return self.__class__(quo, self.domain) - return NotImplemented - - def __rfloordiv__(self, other) : - try : - quo, rem = ${nick}div(other, self.coef) - except : - return NotImplemented - return self.__class__(quo, self.domain) - - def __rmod__(self, other) : - try : - quo, rem = ${nick}div(other, self.coef) - except : - return NotImplemented - return self.__class__(rem, self.domain) - - def __rdivmod__(self, other) : - try : - quo, rem = ${nick}div(other, self.coef) - except : - return NotImplemented - return self.__class__(quo, self.domain), self.__class__(rem, self.domain) - - # Enhance me - # some augmented arithmetic operations could be added here - - def __eq__(self, other) : - res = isinstance(other, self.__class__) \ - and len(self.coef) == len(other.coef) \ - and np.all(self.domain == other.domain) \ - and np.all(self.coef == other.coef) - return res - - def __ne__(self, other) : - return not self.__eq__(other) - - # - # Extra numeric functions. - # - - def degree(self) : - """The degree of the series. - - Notes - ----- - .. versionadded:: 2.0.0 - - """ - return len(self) - 1 - - def cutdeg(self, deg) : - """Truncate series to the given degree. - - Reduce the degree of the $name series to `deg` by discarding the - high order terms. If `deg` is greater than the current degree a - copy of the current series is returned. This can be useful in least - squares where the coefficients of the high degree terms may be very - small.
- - Parameters - ---------- - deg : non-negative int - The series is reduced to degree `deg` by discarding the high - order terms. The value of `deg` must be a non-negative integer. - - Returns - ------- - new_instance : $name - New instance of $name with reduced degree. - - Notes - ----- - .. versionadded:: 2.0.0 - - """ - return self.truncate(deg + 1) - - def convert(self, domain=None, kind=None) : - """Convert to different class and/or domain. - - Parameters - ---------- - domain : {None, array_like} - The domain of the new series type instance. If the value is - ``None``, then the default domain of `kind` is used. - kind : {None, class} - The polynomial series type class to which the current instance - should be converted. If kind is ``None``, then the class of the - current instance is used. - - Returns - ------- - new_series_instance : `kind` - The returned class can be of a different type than the current - instance and/or have a different domain. - - Examples - -------- - - Notes - ----- - Conversion between domains and class types can result in - numerically ill-defined series. - - """ - if kind is None : - kind = $name - if domain is None : - domain = kind.domain - return self(kind.identity(domain)) - - def mapparms(self) : - """Return the mapping parameters. - - The returned values define a linear map ``off + scl*x`` that is - applied to the input arguments before the series is evaluated. The - parameters of the map depend on the domain; if the current domain is equal to - the default domain ``$domain`` the resulting map is the identity. - If the coefficients of the ``$name`` instance are to be used - separately, then the linear function must be substituted for the - ``x`` in the standard representation of the base polynomials. - - Returns - ------- - off, scl : floats or complex - The mapping function is defined by ``off + scl*x``.
- - Notes - ----- - If the current domain is the interval ``[l_1, r_1]`` and the default - interval is ``[l_2, r_2]``, then the linear mapping function ``L`` is - defined by the equations: - - L(l_1) = l_2 - L(r_1) = r_2 - - """ - return pu.mapparms(self.domain, $domain) - - def trim(self, tol=0) : - """Remove small trailing coefficients - - Remove trailing coefficients until a coefficient is reached whose - absolute value is greater than `tol` or the beginning of the series is - reached. If all the coefficients would be removed the series is set to - ``[0]``. A new $name instance is returned with the new coefficients. - The current instance remains unchanged. - - Parameters - ---------- - tol : non-negative number - All trailing coefficients less than `tol` will be removed. - - Returns - ------- - new_instance : $name - Contains the new set of coefficients. - - """ - return self.__class__(pu.trimcoef(self.coef, tol), self.domain) - - def truncate(self, size) : - """Truncate series to length `size`. - - Reduce the $name series to length `size` by discarding the high - degree terms. The value of `size` must be a positive integer. This - can be useful in least squares where the coefficients of the - high degree terms may be very small. - - Parameters - ---------- - size : positive int - The series is reduced to length `size` by discarding the high - degree terms. The value of `size` must be a positive integer. - - Returns - ------- - new_instance : $name - New instance of $name with truncated coefficients. - - """ - isize = int(size) - if isize != size or isize < 1 : - raise ValueError("size must be a positive integer") - if isize >= len(self.coef) : - return self.__class__(self.coef, self.domain) - else : - return self.__class__(self.coef[:isize], self.domain) - - def copy(self) : - """Return a copy. - - A new instance of $name is returned that has the same - coefficients and domain as the current instance.
- - Returns - ------- - new_instance : $name - New instance of $name with the same coefficients and domain. - - """ - return self.__class__(self.coef, self.domain) - - def integ(self, m=1, k=[], lbnd=None) : - """Integrate. - - Return an instance of $name that is the definite integral of the - current series. Refer to `${nick}int` for full documentation. - - Parameters - ---------- - m : non-negative int - The number of integrations to perform. - k : array_like - Integration constants. The first constant is applied to the - first integration, the second to the second, and so on. The - list of values must be less than or equal to `m` in length and any - missing values are set to zero. - lbnd : scalar - The lower bound of the definite integral. - - Returns - ------- - integral : $name - The integral of the series using the same domain. - - See Also - -------- - `${nick}int` : similar function. - `${nick}der` : similar function for derivative. - - """ - off, scl = self.mapparms() - if lbnd is None : - lbnd = 0 - else : - lbnd = off + scl*lbnd - coef = ${nick}int(self.coef, m, k, lbnd, 1./scl) - return self.__class__(coef, self.domain) - - def deriv(self, m=1): - """Differentiate. - - Return an instance of $name that is the derivative of the current - series. Refer to `${nick}der` for full documentation. - - Parameters - ---------- - m : non-negative int - The number of derivatives to take. - - Returns - ------- - derivative : $name - The derivative of the series using the same domain. - - See Also - -------- - `${nick}der` : similar function. - `${nick}int` : similar function for integration. - - """ - off, scl = self.mapparms() - coef = ${nick}der(self.coef, m, scl) - return self.__class__(coef, self.domain) - - def roots(self) : - """Return list of roots. - - Return ndarray of roots for this series. See `${nick}roots` for - full documentation. Note that the accuracy of the roots is likely to - decrease the further outside the domain they lie.
- - See Also - -------- - `${nick}roots` : similar function - `${nick}fromroots` : function to generate a series from roots. - - """ - roots = ${nick}roots(self.coef) - return pu.mapdomain(roots, $domain, self.domain) - - def linspace(self, n): - """Return x,y values at equally spaced points in domain. - - Returns x, y values at `n` equally spaced points across domain. - Here y is the value of the polynomial at the points x. This is - intended as a plotting aid. - - Parameters - ---------- - n : int - Number of point pairs to return. - - Returns - ------- - x, y : ndarrays - ``x`` is equal to linspace(self.domain[0], self.domain[1], n) - ``y`` is the polynomial evaluated at ``x``. - - .. versionadded:: 1.5.0 - - """ - x = np.linspace(self.domain[0], self.domain[1], n) - y = self(x) - return x, y - - - - @staticmethod - def fit(x, y, deg, domain=None, rcond=None, full=False, w=None) : - """Least squares fit to data. - - Return a `$name` instance that is the least squares fit to the data - `y` sampled at `x`. Unlike ${nick}fit, the domain of the returned - instance can be specified and this will often result in a superior - fit with less chance of ill conditioning. See ${nick}fit for full - documentation of the implementation. - - Parameters - ---------- - x : array_like, shape (M,) - x-coordinates of the M sample points ``(x[i], y[i])``. - y : array_like, shape (M,) or (M, K) - y-coordinates of the sample points. Several data sets of sample - points sharing the same x-coordinates can be fitted at once by - passing in a 2D-array that contains one dataset per column. - deg : int - Degree of the fitting polynomial. - domain : {None, [beg, end], []}, optional - Domain to use for the returned $name instance. If ``None``, - then a minimal domain that covers the points `x` is chosen. If - ``[]`` the default domain ``$domain`` is used. The default - value is $domain in numpy 1.4.x and ``None`` in later versions. - The ``[]`` value was added in numpy 1.5.0.
- rcond : float, optional - Relative condition number of the fit. Singular values smaller - than this relative to the largest singular value will be - ignored. The default value is len(x)*eps, where eps is the - relative precision of the float type, about 2e-16 in most - cases. - full : bool, optional - Switch determining nature of return value. When it is False - (the default) just the coefficients are returned, when True - diagnostic information from the singular value decomposition is - also returned. - w : array_like, shape (M,), optional - Weights. If not None the contribution of each point - ``(x[i],y[i])`` to the fit is weighted by `w[i]`. Ideally the - weights are chosen so that the errors of the products - ``w[i]*y[i]`` all have the same variance. The default value is - None. - .. versionadded:: 1.5.0 - - Returns - ------- - least_squares_fit : instance of $name - The $name instance is the least squares fit to the data and - has the domain specified in the call. - - [residuals, rank, singular_values, rcond] : only if `full` = True - Residuals of the least-squares fit, the effective rank of the - scaled Vandermonde matrix and its singular values, and the - specified value of `rcond`. For more details, see - `linalg.lstsq`. - - See Also - -------- - ${nick}fit : similar function - - """ - if domain is None : - domain = pu.getdomain(x) - elif domain == [] : - domain = $domain - xnew = pu.mapdomain(x, domain, $domain) - res = ${nick}fit(xnew, y, deg, w=w, rcond=rcond, full=full) - if full : - [coef, status] = res - return $name(coef, domain=domain), status - else : - coef = res - return $name(coef, domain=domain) - - @staticmethod - def fromroots(roots, domain=$domain) : - """Return $name object with specified roots. - - See ${nick}fromroots for full documentation. 
- - See Also - -------- - ${nick}fromroots : equivalent function - - """ - if domain is None : - domain = pu.getdomain(roots) - rnew = pu.mapdomain(roots, domain, $domain) - coef = ${nick}fromroots(rnew) - return $name(coef, domain=domain) - - @staticmethod - def identity(domain=$domain) : - """Identity function. - - If ``p`` is the returned $name object, then ``p(x) == x`` for all - values of x. - - Parameters - ---------- - domain : array_like - The resulting array must be of the form ``[beg, end]``, where - ``beg`` and ``end`` are the endpoints of the domain. - - Returns - ------- - identity : $name object - - """ - off, scl = pu.mapparms($domain, domain) - coef = ${nick}line(off, scl) - return $name(coef, domain) -'''.replace('REL_IMPORT', rel_import)) diff --git a/pythonPackages/numpy/numpy/polynomial/polyutils.py b/pythonPackages/numpy/numpy/polynomial/polyutils.py deleted file mode 100755 index 5c65e03c2f..0000000000 --- a/pythonPackages/numpy/numpy/polynomial/polyutils.py +++ /dev/null @@ -1,393 +0,0 @@ -""" -Utility objects for the polynomial modules. - -This module provides: error and warning objects; a polynomial base class; -and some routines used in both the `polynomial` and `chebyshev` modules. - -Error objects ------------- -- `PolyError` -- base class for this sub-package's errors. -- `PolyDomainError` -- raised when domains are "mismatched." - -Warning objects ---------------- -- `RankWarning` -- raised by a least-squares fit when a rank-deficient - matrix is encountered. - -Base class ----------- -- `PolyBase` -- The base class for the `Polynomial` and `Chebyshev` - classes. - -Functions ---------- -- `as_series` -- turns a list of array_likes into 1-D arrays of common - type. -- `trimseq` -- removes trailing zeros. -- `trimcoef` -- removes trailing coefficients that are less than a given - magnitude (thereby removing the corresponding terms). -- `getdomain` -- returns a domain appropriate for a given set of abscissae.
-- `mapdomain` -- maps points between domains. -- `mapparms` -- parameters of the linear map between domains. - -""" -from __future__ import division - -__all__ = ['RankWarning', 'PolyError', 'PolyDomainError', 'PolyBase', - 'as_series', 'trimseq', 'trimcoef', 'getdomain', 'mapdomain', - 'mapparms'] - -import warnings -import numpy as np -import sys - -# -# Warnings and Exceptions -# - -class RankWarning(UserWarning) : - """Issued by chebfit when the design matrix is rank deficient.""" - pass - -class PolyError(Exception) : - """Base class for errors in this module.""" - pass - -class PolyDomainError(PolyError) : - """Issued by the generic Poly class when two domains don't match. - - This is raised when a binary operation is passed Poly objects with - different domains. - - """ - pass - -# -# Base class for all polynomial types -# - -class PolyBase(object) : - pass - -# -# We need the any function for python < 2.5 -# -if sys.version_info[:2] < (2,5) : - def any(iterable) : - for element in iterable: - if element : - return True - return False - -# -# Helper functions to convert inputs to 1d arrays -# -def trimseq(seq) : - """Remove trailing zero Poly series coefficients. - - Parameters - ---------- - seq : sequence - Sequence of Poly series coefficients. Empty sequences are - returned unchanged. - - Returns - ------- - series : sequence - Subsequence with trailing zeros removed. If the resulting sequence - would be empty, return the first element. The returned sequence may - or may not be a view. - - Notes - ----- - Do not lose the type info if the sequence contains unknown objects. - - """ - if len(seq) == 0 : - return seq - else : - for i in range(len(seq) - 1, -1, -1) : - if seq[i] != 0 : - break - return seq[:i+1] - - -def as_series(alist, trim=True) : - """ - Return argument as a list of 1-d arrays. - - The returned list contains array(s) of dtype double, complex double, or - object.
A 1-d argument of shape ``(N,)`` is parsed into ``N`` arrays of - size one; a 2-d argument of shape ``(M,N)`` is parsed into ``M`` arrays - of size ``N`` (i.e., is "parsed by row"); and a higher dimensional array - raises a ValueError if it is not first reshaped into either a 1-d or 2-d - array. - - Parameters - ---------- - alist : array_like - A 1- or 2-d array_like - trim : boolean, optional - When True, trailing zeros are removed from the inputs. - When False, the inputs are passed through intact. - - Returns - ------- - [a1, a2,...] : list of 1d-arrays - A copy of the input data as a list of 1-d arrays. - - Raises - ------ - ValueError : - Raised when `as_series` cannot convert its input to 1-d arrays, or at - least one of the resulting arrays is empty. - - Examples - -------- - >>> from numpy import polynomial as P - >>> a = np.arange(4) - >>> P.as_series(a) - [array([ 0.]), array([ 1.]), array([ 2.]), array([ 3.])] - >>> b = np.arange(6).reshape((2,3)) - >>> P.as_series(b) - [array([ 0., 1., 2.]), array([ 3., 4., 5.])] - - """ - arrays = [np.array(a, ndmin=1, copy=0) for a in alist] - if min([a.size for a in arrays]) == 0 : - raise ValueError("Coefficient array is empty") - if any([a.ndim != 1 for a in arrays]) : - raise ValueError("Coefficient array is not 1-d") - if trim : - arrays = [trimseq(a) for a in arrays] - - if any([a.dtype == np.dtype(object) for a in arrays]) : - ret = [] - for a in arrays : - if a.dtype != np.dtype(object) : - tmp = np.empty(len(a), dtype=np.dtype(object)) - tmp[:] = a[:] - ret.append(tmp) - else : - ret.append(a.copy()) - else : - try : - dtype = np.common_type(*arrays) - except : - raise ValueError("Coefficient arrays have no common type") - ret = [np.array(a, copy=1, dtype=dtype) for a in arrays] - return ret - - -def trimcoef(c, tol=0) : - """ - Remove "small" "trailing" coefficients from a polynomial.
- - "Small" means "small in absolute value" and is controlled by the - parameter `tol`; "trailing" means highest order coefficient(s), e.g., in - ``[0, 1, 1, 0, 0]`` (which represents ``0 + x + x**2 + 0*x**3 + 0*x**4``) - both the 3-rd and 4-th order coefficients would be "trimmed." - - Parameters - ---------- - c : array_like - 1-d array of coefficients, ordered from lowest order to highest. - tol : number, optional - Trailing (i.e., highest order) elements with absolute value less - than or equal to `tol` (default value is zero) are removed. - - Returns - ------- - trimmed : ndarray - 1-d array with trailing zeros removed. If the resulting series - would be empty, a series containing a single zero is returned. - - Raises - ------ - ValueError - If `tol` < 0 - - See Also - -------- - trimseq - - Examples - -------- - >>> from numpy import polynomial as P - >>> P.trimcoef((0,0,3,0,5,0,0)) - array([ 0., 0., 3., 0., 5.]) - >>> P.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed - array([ 0.]) - >>> i = complex(0,1) # works for complex - >>> P.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3) - array([ 0.0003+0.j , 0.0010-0.001j]) - - """ - if tol < 0 : - raise ValueError("tol must be non-negative") - - [c] = as_series([c]) - [ind] = np.where(np.abs(c) > tol) - if len(ind) == 0 : - return c[:1]*0 - else : - return c[:ind[-1] + 1].copy() - -def getdomain(x) : - """ - Return a domain suitable for given abscissae. - - Find a domain suitable for a polynomial or Chebyshev series - defined at the values supplied. - - Parameters - ---------- - x : array_like - 1-d array of abscissae whose domain will be determined. - - Returns - ------- - domain : ndarray - 1-d array containing two values. If the inputs are complex, then - the two returned points are the lower left and upper right corners - of the smallest rectangle (aligned with the axes) in the complex - plane containing the points `x`. 
If the inputs are real, then the - two points are the ends of the smallest interval containing the - points `x`. - - See Also - -------- - mapparms, mapdomain - - Examples - -------- - >>> from numpy.polynomial import polyutils as pu - >>> points = np.arange(4)**2 - 5; points - array([-5, -4, -1, 4]) - >>> pu.getdomain(points) - array([-5., 4.]) - >>> c = np.exp(complex(0,1)*np.pi*np.arange(12)/6) # unit circle - >>> pu.getdomain(c) - array([-1.-1.j, 1.+1.j]) - - """ - [x] = as_series([x], trim=False) - if x.dtype.char in np.typecodes['Complex'] : - rmin, rmax = x.real.min(), x.real.max() - imin, imax = x.imag.min(), x.imag.max() - return np.array((complex(rmin, imin), complex(rmax, imax))) - else : - return np.array((x.min(), x.max())) - -def mapparms(old, new) : - """ - Linear map parameters between domains. - - Return the parameters of the linear map ``offset + scale*x`` that maps - `old` to `new` such that ``old[i] -> new[i]``, ``i = 0, 1``. - - Parameters - ---------- - old, new : array_like - Each domain must (successfully) convert to a 1-d array containing - precisely two values. - - Returns - ------- - offset, scale : scalars - The map ``L(x) = offset + scale*x`` maps the first domain to the - second. - - See Also - -------- - getdomain, mapdomain - - Notes - ----- - Also works for complex numbers, and thus can be used to calculate the - parameters required to map any line in the complex plane to any other - line therein. - - Examples - -------- - >>> from numpy import polynomial as P - >>> P.mapparms((-1,1),(-1,1)) - (0.0, 1.0) - >>> P.mapparms((1,-1),(-1,1)) - (0.0, -1.0) - >>> i = complex(0,1) - >>> P.mapparms((-i,-1),(1,i)) - ((1+1j), (1+0j)) - - """ - oldlen = old[1] - old[0] - newlen = new[1] - new[0] - off = (old[1]*new[0] - old[0]*new[1])/oldlen - scl = newlen/oldlen - return off, scl - -def mapdomain(x, old, new) : - """ - Apply linear map to input points. 
- - The linear map ``offset + scale*x`` that maps `old` to `new` is applied - to the points `x`. - - Parameters - ---------- - x : array_like - Points to be mapped. - old, new : array_like - The two domains that determine the map. Each must (successfully) - convert to 1-d arrays containing precisely two values. - - Returns - ------- - x_out : ndarray - Array of points of the same shape as `x`, after application of the - linear map between the two domains. - - See Also - -------- - getdomain, mapparms - - Notes - ----- - Effectively, this implements: - - .. math :: - x\\_out = new[0] + m(x - old[0]) - - where - - .. math :: - m = \\frac{new[1]-new[0]}{old[1]-old[0]} - - Examples - -------- - >>> from numpy import polynomial as P - >>> old_domain = (-1,1) - >>> new_domain = (0,2*np.pi) - >>> x = np.linspace(-1,1,6); x - array([-1. , -0.6, -0.2, 0.2, 0.6, 1. ]) - >>> x_out = P.mapdomain(x, old_domain, new_domain); x_out - array([ 0. , 1.25663706, 2.51327412, 3.76991118, 5.02654825, - 6.28318531]) - >>> x - P.mapdomain(x_out, new_domain, old_domain) - array([ 0., 0., 0., 0., 0., 0.]) - - Also works for complex numbers (and thus can be used to map any line in - the complex plane to any other line therein). 
- - >>> i = complex(0,1) - >>> old = (-1 - i, 1 + i) - >>> new = (-1 + i, 1 - i) - >>> z = np.linspace(old[0], old[1], 6); z - array([-1.0-1.j , -0.6-0.6j, -0.2-0.2j, 0.2+0.2j, 0.6+0.6j, 1.0+1.j ]) - >>> new_z = P.mapdomain(z, old, new); new_z - array([-1.0+1.j , -0.6+0.6j, -0.2+0.2j, 0.2-0.2j, 0.6-0.6j, 1.0-1.j ]) - - """ - [x] = as_series([x], trim=False) - off, scl = mapparms(old, new) - return off + scl*x diff --git a/pythonPackages/numpy/numpy/polynomial/setup.py b/pythonPackages/numpy/numpy/polynomial/setup.py deleted file mode 100755 index 173fd126cf..0000000000 --- a/pythonPackages/numpy/numpy/polynomial/setup.py +++ /dev/null @@ -1,11 +0,0 @@ - - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('polynomial',parent_package,top_path) - config.add_data_dir('tests') - return config - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/polynomial/tests/test_chebyshev.py b/pythonPackages/numpy/numpy/polynomial/tests/test_chebyshev.py deleted file mode 100755 index 981481ff1f..0000000000 --- a/pythonPackages/numpy/numpy/polynomial/tests/test_chebyshev.py +++ /dev/null @@ -1,521 +0,0 @@ -"""Tests for chebyshev module. 
- -""" -from __future__ import division - -import numpy as np -import numpy.polynomial.chebyshev as ch -from numpy.testing import * - -def trim(x) : - return ch.chebtrim(x, tol=1e-6) - -T0 = [ 1] -T1 = [ 0, 1] -T2 = [-1, 0, 2] -T3 = [ 0, -3, 0, 4] -T4 = [ 1, 0, -8, 0, 8] -T5 = [ 0, 5, 0, -20, 0, 16] -T6 = [-1, 0, 18, 0, -48, 0, 32] -T7 = [ 0, -7, 0, 56, 0, -112, 0, 64] -T8 = [ 1, 0, -32, 0, 160, 0, -256, 0, 128] -T9 = [ 0, 9, 0, -120, 0, 432, 0, -576, 0, 256] - -Tlist = [T0, T1, T2, T3, T4, T5, T6, T7, T8, T9] - - -class TestPrivate(TestCase) : - - def test__cseries_to_zseries(self) : - for i in range(5) : - inp = np.array([2] + [1]*i, np.double) - tgt = np.array([.5]*i + [2] + [.5]*i, np.double) - res = ch._cseries_to_zseries(inp) - assert_equal(res, tgt) - - def test__zseries_to_cseries(self) : - for i in range(5) : - inp = np.array([.5]*i + [2] + [.5]*i, np.double) - tgt = np.array([2] + [1]*i, np.double) - res = ch._zseries_to_cseries(inp) - assert_equal(res, tgt) - - -class TestConstants(TestCase) : - - def test_chebdomain(self) : - assert_equal(ch.chebdomain, [-1, 1]) - - def test_chebzero(self) : - assert_equal(ch.chebzero, [0]) - - def test_chebone(self) : - assert_equal(ch.chebone, [1]) - - def test_chebx(self) : - assert_equal(ch.chebx, [0, 1]) - - -class TestArithmetic(TestCase) : - - def test_chebadd(self) : - for i in range(5) : - for j in range(5) : - msg = "At i=%d, j=%d" % (i,j) - tgt = np.zeros(max(i,j) + 1) - tgt[i] += 1 - tgt[j] += 1 - res = ch.chebadd([0]*i + [1], [0]*j + [1]) - assert_equal(trim(res), trim(tgt), err_msg=msg) - - def test_chebsub(self) : - for i in range(5) : - for j in range(5) : - msg = "At i=%d, j=%d" % (i,j) - tgt = np.zeros(max(i,j) + 1) - tgt[i] += 1 - tgt[j] -= 1 - res = ch.chebsub([0]*i + [1], [0]*j + [1]) - assert_equal(trim(res), trim(tgt), err_msg=msg) - - def test_chebmul(self) : - for i in range(5) : - for j in range(5) : - msg = "At i=%d, j=%d" % (i,j) - tgt = np.zeros(i + j + 1) - tgt[i + j] += .5 - tgt[abs(i - 
j)] += .5 - res = ch.chebmul([0]*i + [1], [0]*j + [1]) - assert_equal(trim(res), trim(tgt), err_msg=msg) - - def test_chebdiv(self) : - for i in range(5) : - for j in range(5) : - msg = "At i=%d, j=%d" % (i,j) - ci = [0]*i + [1] - cj = [0]*j + [1] - tgt = ch.chebadd(ci, cj) - quo, rem = ch.chebdiv(tgt, ci) - res = ch.chebadd(ch.chebmul(quo, ci), rem) - assert_equal(trim(res), trim(tgt), err_msg=msg) - - def test_chebval(self) : - def f(x) : - return x*(x**2 - 1) - - #check empty input - assert_equal(ch.chebval([], [1]).size, 0) - - #check normal input) - for i in range(5) : - tgt = 1 - res = ch.chebval(1, [0]*i + [1]) - assert_almost_equal(res, tgt) - tgt = (-1)**i - res = ch.chebval(-1, [0]*i + [1]) - assert_almost_equal(res, tgt) - zeros = np.cos(np.linspace(-np.pi, 0, 2*i + 1)[1::2]) - tgt = 0 - res = ch.chebval(zeros, [0]*i + [1]) - assert_almost_equal(res, tgt) - x = np.linspace(-1,1) - tgt = f(x) - res = ch.chebval(x, [0, -.25, 0, .25]) - assert_almost_equal(res, tgt) - - #check that shape is preserved - for i in range(3) : - dims = [2]*i - x = np.zeros(dims) - assert_equal(ch.chebval(x, [1]).shape, dims) - assert_equal(ch.chebval(x, [1,0]).shape, dims) - assert_equal(ch.chebval(x, [1,0,0]).shape, dims) - - -class TestCalculus(TestCase) : - - def test_chebint(self) : - # check exceptions - assert_raises(ValueError, ch.chebint, [0], .5) - assert_raises(ValueError, ch.chebint, [0], -1) - assert_raises(ValueError, ch.chebint, [0], 1, [0,0]) - assert_raises(ValueError, ch.chebint, [0], 1, lbnd=[0,0]) - assert_raises(ValueError, ch.chebint, [0], 1, scl=[0,0]) - - # check single integration with integration constant - for i in range(5) : - scl = i + 1 - pol = [0]*i + [1] - tgt = [i] + [0]*i + [1/scl] - chebpol = ch.poly2cheb(pol) - chebint = ch.chebint(chebpol, m=1, k=[i]) - res = ch.cheb2poly(chebint) - assert_almost_equal(trim(res), trim(tgt)) - - # check single integration with integration constant and lbnd - for i in range(5) : - scl = i + 1 - pol = [0]*i + [1] 
- chebpol = ch.poly2cheb(pol) - chebint = ch.chebint(chebpol, m=1, k=[i], lbnd=-1) - assert_almost_equal(ch.chebval(-1, chebint), i) - - # check single integration with integration constant and scaling - for i in range(5) : - scl = i + 1 - pol = [0]*i + [1] - tgt = [i] + [0]*i + [2/scl] - chebpol = ch.poly2cheb(pol) - chebint = ch.chebint(chebpol, m=1, k=[i], scl=2) - res = ch.cheb2poly(chebint) - assert_almost_equal(trim(res), trim(tgt)) - - # check multiple integrations with default k - for i in range(5) : - for j in range(2,5) : - pol = [0]*i + [1] - tgt = pol[:] - for k in range(j) : - tgt = ch.chebint(tgt, m=1) - res = ch.chebint(pol, m=j) - assert_almost_equal(trim(res), trim(tgt)) - - # check multiple integrations with defined k - for i in range(5) : - for j in range(2,5) : - pol = [0]*i + [1] - tgt = pol[:] - for k in range(j) : - tgt = ch.chebint(tgt, m=1, k=[k]) - res = ch.chebint(pol, m=j, k=range(j)) - assert_almost_equal(trim(res), trim(tgt)) - - # check multiple integrations with lbnd - for i in range(5) : - for j in range(2,5) : - pol = [0]*i + [1] - tgt = pol[:] - for k in range(j) : - tgt = ch.chebint(tgt, m=1, k=[k], lbnd=-1) - res = ch.chebint(pol, m=j, k=range(j), lbnd=-1) - assert_almost_equal(trim(res), trim(tgt)) - - # check multiple integrations with scaling - for i in range(5) : - for j in range(2,5) : - pol = [0]*i + [1] - tgt = pol[:] - for k in range(j) : - tgt = ch.chebint(tgt, m=1, k=[k], scl=2) - res = ch.chebint(pol, m=j, k=range(j), scl=2) - assert_almost_equal(trim(res), trim(tgt)) - - def test_chebder(self) : - # check exceptions - assert_raises(ValueError, ch.chebder, [0], .5) - assert_raises(ValueError, ch.chebder, [0], -1) - - # check that zeroth derivative does nothing - for i in range(5) : - tgt = [1] + [0]*i - res = ch.chebder(tgt, m=0) - assert_equal(trim(res), trim(tgt)) - - # check that derivation is the inverse of integration - for i in range(5) : - for j in range(2,5) : - tgt = [1] + [0]*i - res = 
ch.chebder(ch.chebint(tgt, m=j), m=j) - assert_almost_equal(trim(res), trim(tgt)) - - # check derivation with scaling - for i in range(5) : - for j in range(2,5) : - tgt = [1] + [0]*i - res = ch.chebder(ch.chebint(tgt, m=j, scl=2), m=j, scl=.5) - assert_almost_equal(trim(res), trim(tgt)) - - -class TestMisc(TestCase) : - - def test_chebfromroots(self) : - res = ch.chebfromroots([]) - assert_almost_equal(trim(res), [1]) - for i in range(1,5) : - roots = np.cos(np.linspace(-np.pi, 0, 2*i + 1)[1::2]) - tgt = [0]*i + [1] - res = ch.chebfromroots(roots)*2**(i-1) - assert_almost_equal(trim(res),trim(tgt)) - - def test_chebroots(self) : - assert_almost_equal(ch.chebroots([1]), []) - assert_almost_equal(ch.chebroots([1, 2]), [-.5]) - for i in range(2,5) : - tgt = np.linspace(-1, 1, i) - res = ch.chebroots(ch.chebfromroots(tgt)) - assert_almost_equal(trim(res), trim(tgt)) - - def test_chebvander(self) : - # check for 1d x - x = np.arange(3) - v = ch.chebvander(x, 3) - assert_(v.shape == (3,4)) - for i in range(4) : - coef = [0]*i + [1] - assert_almost_equal(v[...,i], ch.chebval(x, coef)) - - # check for 2d x - x = np.array([[1,2],[3,4],[5,6]]) - v = ch.chebvander(x, 3) - assert_(v.shape == (3,2,4)) - for i in range(4) : - coef = [0]*i + [1] - assert_almost_equal(v[...,i], ch.chebval(x, coef)) - - def test_chebfit(self) : - def f(x) : - return x*(x - 1)*(x - 2) - - # Test exceptions - assert_raises(ValueError, ch.chebfit, [1], [1], -1) - assert_raises(TypeError, ch.chebfit, [[1]], [1], 0) - assert_raises(TypeError, ch.chebfit, [], [1], 0) - assert_raises(TypeError, ch.chebfit, [1], [[[1]]], 0) - assert_raises(TypeError, ch.chebfit, [1, 2], [1], 0) - assert_raises(TypeError, ch.chebfit, [1], [1, 2], 0) - assert_raises(TypeError, ch.chebfit, [1], [1], 0, w=[[1]]) - assert_raises(TypeError, ch.chebfit, [1], [1], 0, w=[1,1]) - - # Test fit - x = np.linspace(0,2) - y = f(x) - # - coef3 = ch.chebfit(x, y, 3) - assert_equal(len(coef3), 4) - assert_almost_equal(ch.chebval(x, coef3), 
y) - # - coef4 = ch.chebfit(x, y, 4) - assert_equal(len(coef4), 5) - assert_almost_equal(ch.chebval(x, coef4), y) - # - coef2d = ch.chebfit(x, np.array([y,y]).T, 3) - assert_almost_equal(coef2d, np.array([coef3,coef3]).T) - # test weighting - w = np.zeros_like(x) - yw = y.copy() - w[1::2] = 1 - yw[0::2] = 0 - wcoef3 = ch.chebfit(x, yw, 3, w=w) - assert_almost_equal(wcoef3, coef3) - # - wcoef2d = ch.chebfit(x, np.array([yw,yw]).T, 3, w=w) - assert_almost_equal(wcoef2d, np.array([coef3,coef3]).T) - - def test_chebtrim(self) : - coef = [2, -1, 1, 0] - - # Test exceptions - assert_raises(ValueError, ch.chebtrim, coef, -1) - - # Test results - assert_equal(ch.chebtrim(coef), coef[:-1]) - assert_equal(ch.chebtrim(coef, 1), coef[:-3]) - assert_equal(ch.chebtrim(coef, 2), [0]) - - def test_chebline(self) : - assert_equal(ch.chebline(3,4), [3, 4]) - - def test_cheb2poly(self) : - for i in range(10) : - assert_equal(ch.cheb2poly([0]*i + [1]), Tlist[i]) - - def test_poly2cheb(self) : - for i in range(10) : - assert_equal(ch.poly2cheb(Tlist[i]), [0]*i + [1]) - - -class TestChebyshevClass(TestCase) : - - p1 = ch.Chebyshev([1,2,3]) - p2 = ch.Chebyshev([1,2,3], [0,1]) - p3 = ch.Chebyshev([1,2]) - p4 = ch.Chebyshev([2,2,3]) - p5 = ch.Chebyshev([3,2,3]) - - def test_equal(self) : - assert_(self.p1 == self.p1) - assert_(self.p2 == self.p2) - assert_(not self.p1 == self.p2) - assert_(not self.p1 == self.p3) - assert_(not self.p1 == [1,2,3]) - - def test_not_equal(self) : - assert_(not self.p1 != self.p1) - assert_(not self.p2 != self.p2) - assert_(self.p1 != self.p2) - assert_(self.p1 != self.p3) - assert_(self.p1 != [1,2,3]) - - def test_add(self) : - tgt = ch.Chebyshev([2,4,6]) - assert_(self.p1 + self.p1 == tgt) - assert_(self.p1 + [1,2,3] == tgt) - assert_([1,2,3] + self.p1 == tgt) - - def test_sub(self) : - tgt = ch.Chebyshev([1]) - assert_(self.p4 - self.p1 == tgt) - assert_(self.p4 - [1,2,3] == tgt) - assert_([2,2,3] - self.p1 == tgt) - - def test_mul(self) : - tgt = 
ch.Chebyshev([7.5, 10., 8., 6., 4.5]) - assert_(self.p1 * self.p1 == tgt) - assert_(self.p1 * [1,2,3] == tgt) - assert_([1,2,3] * self.p1 == tgt) - - def test_floordiv(self) : - tgt = ch.Chebyshev([1]) - assert_(self.p4 // self.p1 == tgt) - assert_(self.p4 // [1,2,3] == tgt) - assert_([2,2,3] // self.p1 == tgt) - - def test_mod(self) : - tgt = ch.Chebyshev([1]) - assert_((self.p4 % self.p1) == tgt) - assert_((self.p4 % [1,2,3]) == tgt) - assert_(([2,2,3] % self.p1) == tgt) - - def test_divmod(self) : - tquo = ch.Chebyshev([1]) - trem = ch.Chebyshev([2]) - quo, rem = divmod(self.p5, self.p1) - assert_(quo == tquo and rem == trem) - quo, rem = divmod(self.p5, [1,2,3]) - assert_(quo == tquo and rem == trem) - quo, rem = divmod([3,2,3], self.p1) - assert_(quo == tquo and rem == trem) - - def test_pow(self) : - tgt = ch.Chebyshev([1]) - for i in range(5) : - res = self.p1**i - assert_(res == tgt) - tgt *= self.p1 - - def test_call(self) : - # domain = [-1, 1] - x = np.linspace(-1, 1) - tgt = 3*(2*x**2 - 1) + 2*x + 1 - assert_almost_equal(self.p1(x), tgt) - - # domain = [0, 1] - x = np.linspace(0, 1) - xx = 2*x - 1 - assert_almost_equal(self.p2(x), self.p1(xx)) - - def test_degree(self) : - assert_equal(self.p1.degree(), 2) - - def test_trimdeg(self) : - assert_raises(ValueError, self.p1.cutdeg, .5) - assert_raises(ValueError, self.p1.cutdeg, -1) - assert_equal(len(self.p1.cutdeg(3)), 3) - assert_equal(len(self.p1.cutdeg(2)), 3) - assert_equal(len(self.p1.cutdeg(1)), 2) - assert_equal(len(self.p1.cutdeg(0)), 1) - - def test_convert(self) : - x = np.linspace(-1,1) - p = self.p1.convert(domain=[0,1]) - assert_almost_equal(p(x), self.p1(x)) - - def test_mapparms(self) : - parms = self.p2.mapparms() - assert_almost_equal(parms, [-1, 2]) - - def test_trim(self) : - coef = [1, 1e-6, 1e-12, 0] - p = ch.Chebyshev(coef) - assert_equal(p.trim().coef, coef[:3]) - assert_equal(p.trim(1e-10).coef, coef[:2]) - assert_equal(p.trim(1e-5).coef, coef[:1]) - - def test_truncate(self) : - 
assert_raises(ValueError, self.p1.truncate, .5) - assert_raises(ValueError, self.p1.truncate, 0) - assert_equal(len(self.p1.truncate(4)), 3) - assert_equal(len(self.p1.truncate(3)), 3) - assert_equal(len(self.p1.truncate(2)), 2) - assert_equal(len(self.p1.truncate(1)), 1) - - def test_copy(self) : - p = self.p1.copy() - assert_(self.p1 == p) - - def test_integ(self) : - p = self.p2.integ() - assert_almost_equal(p.coef, ch.chebint([1,2,3], 1, 0, scl=.5)) - p = self.p2.integ(lbnd=0) - assert_almost_equal(p(0), 0) - p = self.p2.integ(1, 1) - assert_almost_equal(p.coef, ch.chebint([1,2,3], 1, 1, scl=.5)) - p = self.p2.integ(2, [1, 2]) - assert_almost_equal(p.coef, ch.chebint([1,2,3], 2, [1,2], scl=.5)) - - def test_deriv(self) : - p = self.p2.integ(2, [1, 2]) - assert_almost_equal(p.deriv(1).coef, self.p2.integ(1, [1]).coef) - assert_almost_equal(p.deriv(2).coef, self.p2.coef) - - def test_roots(self) : - p = ch.Chebyshev(ch.poly2cheb([0, -1, 0, 1]), [0, 1]) - res = p.roots() - tgt = [0, .5, 1] - assert_almost_equal(res, tgt) - - def test_linspace(self): - xdes = np.linspace(0, 1, 20) - ydes = self.p2(xdes) - xres, yres = self.p2.linspace(20) - assert_almost_equal(xres, xdes) - assert_almost_equal(yres, ydes) - - def test_fromroots(self) : - roots = [0, .5, 1] - p = ch.Chebyshev.fromroots(roots, domain=[0, 1]) - res = p.coef - tgt = ch.poly2cheb([0, -1, 0, 1]) - assert_almost_equal(res, tgt) - - def test_fit(self) : - def f(x) : - return x*(x - 1)*(x - 2) - x = np.linspace(0,3) - y = f(x) - - # test default value of domain - p = ch.Chebyshev.fit(x, y, 3) - assert_almost_equal(p.domain, [0,3]) - - # test that fit works in given domains - p = ch.Chebyshev.fit(x, y, 3, None) - assert_almost_equal(p(x), y) - assert_almost_equal(p.domain, [0,3]) - p = ch.Chebyshev.fit(x, y, 3, []) - assert_almost_equal(p(x), y) - assert_almost_equal(p.domain, [-1, 1]) - # test that fit accepts weights. 
- w = np.zeros_like(x) - yw = y.copy() - w[1::2] = 1 - yw[0::2] = 0 - p = ch.Chebyshev.fit(x, yw, 3, w=w) - assert_almost_equal(p(x), y) - - def test_identity(self) : - x = np.linspace(0,3) - p = ch.Chebyshev.identity() - assert_almost_equal(p(x), x) - p = ch.Chebyshev.identity([1,3]) - assert_almost_equal(p(x), x) diff --git a/pythonPackages/numpy/numpy/polynomial/tests/test_polynomial.py b/pythonPackages/numpy/numpy/polynomial/tests/test_polynomial.py deleted file mode 100755 index 4bfbc46d95..0000000000 --- a/pythonPackages/numpy/numpy/polynomial/tests/test_polynomial.py +++ /dev/null @@ -1,492 +0,0 @@ -"""Tests for polynomial module. - -""" -from __future__ import division - -import numpy as np -import numpy.polynomial.polynomial as poly -from numpy.testing import * - -def trim(x) : - return poly.polytrim(x, tol=1e-6) - -T0 = [ 1] -T1 = [ 0, 1] -T2 = [-1, 0, 2] -T3 = [ 0, -3, 0, 4] -T4 = [ 1, 0, -8, 0, 8] -T5 = [ 0, 5, 0, -20, 0, 16] -T6 = [-1, 0, 18, 0, -48, 0, 32] -T7 = [ 0, -7, 0, 56, 0, -112, 0, 64] -T8 = [ 1, 0, -32, 0, 160, 0, -256, 0, 128] -T9 = [ 0, 9, 0, -120, 0, 432, 0, -576, 0, 256] - -Tlist = [T0, T1, T2, T3, T4, T5, T6, T7, T8, T9] - - -class TestConstants(TestCase) : - - def test_polydomain(self) : - assert_equal(poly.polydomain, [-1, 1]) - - def test_polyzero(self) : - assert_equal(poly.polyzero, [0]) - - def test_polyone(self) : - assert_equal(poly.polyone, [1]) - - def test_polyx(self) : - assert_equal(poly.polyx, [0, 1]) - - -class TestArithmetic(TestCase) : - - def test_polyadd(self) : - for i in range(5) : - for j in range(5) : - msg = "At i=%d, j=%d" % (i,j) - tgt = np.zeros(max(i,j) + 1) - tgt[i] += 1 - tgt[j] += 1 - res = poly.polyadd([0]*i + [1], [0]*j + [1]) - assert_equal(trim(res), trim(tgt), err_msg=msg) - - def test_polysub(self) : - for i in range(5) : - for j in range(5) : - msg = "At i=%d, j=%d" % (i,j) - tgt = np.zeros(max(i,j) + 1) - tgt[i] += 1 - tgt[j] -= 1 - res = poly.polysub([0]*i + [1], [0]*j + [1]) - 
assert_equal(trim(res), trim(tgt), err_msg=msg) - - def test_polymul(self) : - for i in range(5) : - for j in range(5) : - msg = "At i=%d, j=%d" % (i,j) - tgt = np.zeros(i + j + 1) - tgt[i + j] += 1 - res = poly.polymul([0]*i + [1], [0]*j + [1]) - assert_equal(trim(res), trim(tgt), err_msg=msg) - - def test_polydiv(self) : - # check zero division - assert_raises(ZeroDivisionError, poly.polydiv, [1], [0]) - - # check scalar division - quo, rem = poly.polydiv([2],[2]) - assert_equal((quo, rem), (1, 0)) - quo, rem = poly.polydiv([2,2],[2]) - assert_equal((quo, rem), ((1,1), 0)) - - # check rest. - for i in range(5) : - for j in range(5) : - msg = "At i=%d, j=%d" % (i,j) - ci = [0]*i + [1,2] - cj = [0]*j + [1,2] - tgt = poly.polyadd(ci, cj) - quo, rem = poly.polydiv(tgt, ci) - res = poly.polyadd(poly.polymul(quo, ci), rem) - assert_equal(res, tgt, err_msg=msg) - - def test_polyval(self) : - def f(x) : - return x*(x**2 - 1) - - #check empty input - assert_equal(poly.polyval([], [1]).size, 0) - - #check normal input) - x = np.linspace(-1,1) - for i in range(5) : - tgt = x**i - res = poly.polyval(x, [0]*i + [1]) - assert_almost_equal(res, tgt) - tgt = f(x) - res = poly.polyval(x, [0, -1, 0, 1]) - assert_almost_equal(res, tgt) - - #check that shape is preserved - for i in range(3) : - dims = [2]*i - x = np.zeros(dims) - assert_equal(poly.polyval(x, [1]).shape, dims) - assert_equal(poly.polyval(x, [1,0]).shape, dims) - assert_equal(poly.polyval(x, [1,0,0]).shape, dims) - - -class TestCalculus(TestCase) : - - def test_polyint(self) : - # check exceptions - assert_raises(ValueError, poly.polyint, [0], .5) - assert_raises(ValueError, poly.polyint, [0], -1) - assert_raises(ValueError, poly.polyint, [0], 1, [0,0]) - assert_raises(ValueError, poly.polyint, [0], 1, lbnd=[0,0]) - assert_raises(ValueError, poly.polyint, [0], 1, scl=[0,0]) - - # check single integration with integration constant - for i in range(5) : - scl = i + 1 - pol = [0]*i + [1] - tgt = [i] + [0]*i + [1/scl] - 
res = poly.polyint(pol, m=1, k=[i]) - assert_almost_equal(trim(res), trim(tgt)) - - # check single integration with integration constant and lbnd - for i in range(5) : - scl = i + 1 - pol = [0]*i + [1] - res = poly.polyint(pol, m=1, k=[i], lbnd=-1) - assert_almost_equal(poly.polyval(-1, res), i) - - # check single integration with integration constant and scaling - for i in range(5) : - scl = i + 1 - pol = [0]*i + [1] - tgt = [i] + [0]*i + [2/scl] - res = poly.polyint(pol, m=1, k=[i], scl=2) - assert_almost_equal(trim(res), trim(tgt)) - - # check multiple integrations with default k - for i in range(5) : - for j in range(2,5) : - pol = [0]*i + [1] - tgt = pol[:] - for k in range(j) : - tgt = poly.polyint(tgt, m=1) - res = poly.polyint(pol, m=j) - assert_almost_equal(trim(res), trim(tgt)) - - # check multiple integrations with defined k - for i in range(5) : - for j in range(2,5) : - pol = [0]*i + [1] - tgt = pol[:] - for k in range(j) : - tgt = poly.polyint(tgt, m=1, k=[k]) - res = poly.polyint(pol, m=j, k=range(j)) - assert_almost_equal(trim(res), trim(tgt)) - - # check multiple integrations with lbnd - for i in range(5) : - for j in range(2,5) : - pol = [0]*i + [1] - tgt = pol[:] - for k in range(j) : - tgt = poly.polyint(tgt, m=1, k=[k], lbnd=-1) - res = poly.polyint(pol, m=j, k=range(j), lbnd=-1) - assert_almost_equal(trim(res), trim(tgt)) - - # check multiple integrations with scaling - for i in range(5) : - for j in range(2,5) : - pol = [0]*i + [1] - tgt = pol[:] - for k in range(j) : - tgt = poly.polyint(tgt, m=1, k=[k], scl=2) - res = poly.polyint(pol, m=j, k=range(j), scl=2) - assert_almost_equal(trim(res), trim(tgt)) - - def test_polyder(self) : - # check exceptions - assert_raises(ValueError, poly.polyder, [0], .5) - assert_raises(ValueError, poly.polyder, [0], -1) - - # check that zeroth derivative does nothing - for i in range(5) : - tgt = [1] + [0]*i - res = poly.polyder(tgt, m=0) - assert_equal(trim(res), trim(tgt)) - - # check that derivation is 
the inverse of integration - for i in range(5) : - for j in range(2,5) : - tgt = [1] + [0]*i - res = poly.polyder(poly.polyint(tgt, m=j), m=j) - assert_almost_equal(trim(res), trim(tgt)) - - # check derivation with scaling - for i in range(5) : - for j in range(2,5) : - tgt = [1] + [0]*i - res = poly.polyder(poly.polyint(tgt, m=j, scl=2), m=j, scl=.5) - assert_almost_equal(trim(res), trim(tgt)) - - -class TestMisc(TestCase) : - - def test_polyfromroots(self) : - res = poly.polyfromroots([]) - assert_almost_equal(trim(res), [1]) - for i in range(1,5) : - roots = np.cos(np.linspace(-np.pi, 0, 2*i + 1)[1::2]) - tgt = Tlist[i] - res = poly.polyfromroots(roots)*2**(i-1) - assert_almost_equal(trim(res),trim(tgt)) - - def test_polyroots(self) : - assert_almost_equal(poly.polyroots([1]), []) - assert_almost_equal(poly.polyroots([1, 2]), [-.5]) - for i in range(2,5) : - tgt = np.linspace(-1, 1, i) - res = poly.polyroots(poly.polyfromroots(tgt)) - assert_almost_equal(trim(res), trim(tgt)) - - def test_polyvander(self) : - # check for 1d x - x = np.arange(3) - v = poly.polyvander(x, 3) - assert_(v.shape == (3,4)) - for i in range(4) : - coef = [0]*i + [1] - assert_almost_equal(v[...,i], poly.polyval(x, coef)) - - # check for 2d x - x = np.array([[1,2],[3,4],[5,6]]) - v = poly.polyvander(x, 3) - assert_(v.shape == (3,2,4)) - for i in range(4) : - coef = [0]*i + [1] - assert_almost_equal(v[...,i], poly.polyval(x, coef)) - - def test_polyfit(self) : - def f(x) : - return x*(x - 1)*(x - 2) - - # Test exceptions - assert_raises(ValueError, poly.polyfit, [1], [1], -1) - assert_raises(TypeError, poly.polyfit, [[1]], [1], 0) - assert_raises(TypeError, poly.polyfit, [], [1], 0) - assert_raises(TypeError, poly.polyfit, [1], [[[1]]], 0) - assert_raises(TypeError, poly.polyfit, [1, 2], [1], 0) - assert_raises(TypeError, poly.polyfit, [1], [1, 2], 0) - assert_raises(TypeError, poly.polyfit, [1], [1], 0, w=[[1]]) - assert_raises(TypeError, poly.polyfit, [1], [1], 0, w=[1,1]) - - # Test fit 
- x = np.linspace(0,2) - y = f(x) - # - coef3 = poly.polyfit(x, y, 3) - assert_equal(len(coef3), 4) - assert_almost_equal(poly.polyval(x, coef3), y) - # - coef4 = poly.polyfit(x, y, 4) - assert_equal(len(coef4), 5) - assert_almost_equal(poly.polyval(x, coef4), y) - # - coef2d = poly.polyfit(x, np.array([y,y]).T, 3) - assert_almost_equal(coef2d, np.array([coef3,coef3]).T) - # test weighting - w = np.zeros_like(x) - yw = y.copy() - w[1::2] = 1 - yw[0::2] = 0 - wcoef3 = poly.polyfit(x, yw, 3, w=w) - assert_almost_equal(wcoef3, coef3) - # - wcoef2d = poly.polyfit(x, np.array([yw,yw]).T, 3, w=w) - assert_almost_equal(wcoef2d, np.array([coef3,coef3]).T) - - def test_polytrim(self) : - coef = [2, -1, 1, 0] - - # Test exceptions - assert_raises(ValueError, poly.polytrim, coef, -1) - - # Test results - assert_equal(poly.polytrim(coef), coef[:-1]) - assert_equal(poly.polytrim(coef, 1), coef[:-3]) - assert_equal(poly.polytrim(coef, 2), [0]) - - def test_polyline(self) : - assert_equal(poly.polyline(3,4), [3, 4]) - -class TestPolynomialClass(TestCase) : - - p1 = poly.Polynomial([1,2,3]) - p2 = poly.Polynomial([1,2,3], [0,1]) - p3 = poly.Polynomial([1,2]) - p4 = poly.Polynomial([2,2,3]) - p5 = poly.Polynomial([3,2,3]) - - def test_equal(self) : - assert_(self.p1 == self.p1) - assert_(self.p2 == self.p2) - assert_(not self.p1 == self.p2) - assert_(not self.p1 == self.p3) - assert_(not self.p1 == [1,2,3]) - - def test_not_equal(self) : - assert_(not self.p1 != self.p1) - assert_(not self.p2 != self.p2) - assert_(self.p1 != self.p2) - assert_(self.p1 != self.p3) - assert_(self.p1 != [1,2,3]) - - def test_add(self) : - tgt = poly.Polynomial([2,4,6]) - assert_(self.p1 + self.p1 == tgt) - assert_(self.p1 + [1,2,3] == tgt) - assert_([1,2,3] + self.p1 == tgt) - - def test_sub(self) : - tgt = poly.Polynomial([1]) - assert_(self.p4 - self.p1 == tgt) - assert_(self.p4 - [1,2,3] == tgt) - assert_([2,2,3] - self.p1 == tgt) - - def test_mul(self) : - tgt = poly.Polynomial([1,4,10,12,9]) - 
assert_(self.p1 * self.p1 == tgt) - assert_(self.p1 * [1,2,3] == tgt) - assert_([1,2,3] * self.p1 == tgt) - - def test_floordiv(self) : - tgt = poly.Polynomial([1]) - assert_(self.p4 // self.p1 == tgt) - assert_(self.p4 // [1,2,3] == tgt) - assert_([2,2,3] // self.p1 == tgt) - - def test_mod(self) : - tgt = poly.Polynomial([1]) - assert_((self.p4 % self.p1) == tgt) - assert_((self.p4 % [1,2,3]) == tgt) - assert_(([2,2,3] % self.p1) == tgt) - - def test_divmod(self) : - tquo = poly.Polynomial([1]) - trem = poly.Polynomial([2]) - quo, rem = divmod(self.p5, self.p1) - assert_(quo == tquo and rem == trem) - quo, rem = divmod(self.p5, [1,2,3]) - assert_(quo == tquo and rem == trem) - quo, rem = divmod([3,2,3], self.p1) - assert_(quo == tquo and rem == trem) - - def test_pow(self) : - tgt = poly.Polynomial([1]) - for i in range(5) : - res = self.p1**i - assert_(res == tgt) - tgt *= self.p1 - - def test_call(self) : - # domain = [-1, 1] - x = np.linspace(-1, 1) - tgt = (3*x + 2)*x + 1 - assert_almost_equal(self.p1(x), tgt) - - # domain = [0, 1] - x = np.linspace(0, 1) - xx = 2*x - 1 - assert_almost_equal(self.p2(x), self.p1(xx)) - - def test_degree(self) : - assert_equal(self.p1.degree(), 2) - - def test_trimdeg(self) : - assert_raises(ValueError, self.p1.cutdeg, .5) - assert_raises(ValueError, self.p1.cutdeg, -1) - assert_equal(len(self.p1.cutdeg(3)), 3) - assert_equal(len(self.p1.cutdeg(2)), 3) - assert_equal(len(self.p1.cutdeg(1)), 2) - assert_equal(len(self.p1.cutdeg(0)), 1) - - def test_convert(self) : - x = np.linspace(-1,1) - p = self.p1.convert(domain=[0,1]) - assert_almost_equal(p(x), self.p1(x)) - - def test_mapparms(self) : - parms = self.p2.mapparms() - assert_almost_equal(parms, [-1, 2]) - - def test_trim(self) : - coef = [1, 1e-6, 1e-12, 0] - p = poly.Polynomial(coef) - assert_equal(p.trim().coef, coef[:3]) - assert_equal(p.trim(1e-10).coef, coef[:2]) - assert_equal(p.trim(1e-5).coef, coef[:1]) - - def test_truncate(self) : - assert_raises(ValueError, 
self.p1.truncate, .5) - assert_raises(ValueError, self.p1.truncate, 0) - assert_equal(len(self.p1.truncate(4)), 3) - assert_equal(len(self.p1.truncate(3)), 3) - assert_equal(len(self.p1.truncate(2)), 2) - assert_equal(len(self.p1.truncate(1)), 1) - - def test_copy(self) : - p = self.p1.copy() - assert_(self.p1 == p) - - def test_integ(self) : - p = self.p2.integ() - assert_almost_equal(p.coef, poly.polyint([1,2,3], 1, 0, scl=.5)) - p = self.p2.integ(lbnd=0) - assert_almost_equal(p(0), 0) - p = self.p2.integ(1, 1) - assert_almost_equal(p.coef, poly.polyint([1,2,3], 1, 1, scl=.5)) - p = self.p2.integ(2, [1, 2]) - assert_almost_equal(p.coef, poly.polyint([1,2,3], 2, [1, 2], scl=.5)) - - def test_deriv(self) : - p = self.p2.integ(2, [1, 2]) - assert_almost_equal(p.deriv(1).coef, self.p2.integ(1, [1]).coef) - assert_almost_equal(p.deriv(2).coef, self.p2.coef) - - def test_roots(self) : - p = poly.Polynomial([0, -1, 0, 1], [0, 1]) - res = p.roots() - tgt = [0, .5, 1] - assert_almost_equal(res, tgt) - - def test_linspace(self): - xdes = np.linspace(0, 1, 20) - ydes = self.p2(xdes) - xres, yres = self.p2.linspace(20) - assert_almost_equal(xres, xdes) - assert_almost_equal(yres, ydes) - - def test_fromroots(self) : - roots = [0, .5, 1] - p = poly.Polynomial.fromroots(roots, domain=[0, 1]) - res = p.coef - tgt = [0, -1, 0, 1] - assert_almost_equal(res, tgt) - - def test_fit(self) : - def f(x) : - return x*(x - 1)*(x - 2) - x = np.linspace(0,3) - y = f(x) - - # test default value of domain - p = poly.Polynomial.fit(x, y, 3) - assert_almost_equal(p.domain, [0,3]) - - # test that fit works in given domains - p = poly.Polynomial.fit(x, y, 3, None) - assert_almost_equal(p(x), y) - assert_almost_equal(p.domain, [0,3]) - p = poly.Polynomial.fit(x, y, 3, []) - assert_almost_equal(p(x), y) - assert_almost_equal(p.domain, [-1, 1]) - # test that fit accepts weights. 
- w = np.zeros_like(x) - yw = y.copy() - w[1::2] = 1 - yw[0::2] = 0 - p = poly.Polynomial.fit(x, yw, 3, w=w) - assert_almost_equal(p(x), y) - - def test_identity(self) : - x = np.linspace(0,3) - p = poly.Polynomial.identity() - assert_almost_equal(p(x), x) - p = poly.Polynomial.identity([1,3]) - assert_almost_equal(p(x), x) diff --git a/pythonPackages/numpy/numpy/polynomial/tests/test_polyutils.py b/pythonPackages/numpy/numpy/polynomial/tests/test_polyutils.py deleted file mode 100755 index 86f2a5b9b3..0000000000 --- a/pythonPackages/numpy/numpy/polynomial/tests/test_polyutils.py +++ /dev/null @@ -1,86 +0,0 @@ -"""Tests for polyutils module. - -""" -from __future__ import division - -import numpy as np -import numpy.polynomial.polyutils as pu -from numpy.testing import * - -class TestMisc(TestCase) : - - def test_trimseq(self) : - for i in range(5) : - tgt = [1] - res = pu.trimseq([1] + [0]*5) - assert_equal(res, tgt) - - def test_as_series(self) : - # check exceptions - assert_raises(ValueError, pu.as_series, [[]]) - assert_raises(ValueError, pu.as_series, [[[1,2]]]) - assert_raises(ValueError, pu.as_series, [[1],['a']]) - # check common types - types = ['i', 'd', 'O'] - for i in range(len(types)) : - for j in range(i) : - ci = np.ones(1, types[i]) - cj = np.ones(1, types[j]) - [resi, resj] = pu.as_series([ci, cj]) - assert_(resi.dtype.char == resj.dtype.char) - assert_(resj.dtype.char == types[i]) - - def test_trimcoef(self) : - coef = [2, -1, 1, 0] - # Test exceptions - assert_raises(ValueError, pu.trimcoef, coef, -1) - # Test results - assert_equal(pu.trimcoef(coef), coef[:-1]) - assert_equal(pu.trimcoef(coef, 1), coef[:-3]) - assert_equal(pu.trimcoef(coef, 2), [0]) - - -class TestDomain(TestCase) : - - def test_getdomain(self) : - # test for real values - x = [1, 10, 3, -1] - tgt = [-1,10] - res = pu.getdomain(x) - assert_almost_equal(res, tgt) - - # test for complex values - x = [1 + 1j, 1 - 1j, 0, 2] - tgt = [-1j, 2 + 1j] - res = pu.getdomain(x) - 
assert_almost_equal(res, tgt) - - def test_mapdomain(self) : - # test for real values - dom1 = [0,4] - dom2 = [1,3] - tgt = dom2 - res = pu.mapdomain(dom1, dom1, dom2) - assert_almost_equal(res, tgt) - - # test for complex values - dom1 = [0 - 1j, 2 + 1j] - dom2 = [-2, 2] - tgt = dom2 - res = pu.mapdomain(dom1, dom1, dom2) - assert_almost_equal(res, tgt) - - def test_mapparms(self) : - # test for real values - dom1 = [0,4] - dom2 = [1,3] - tgt = [1, .5] - res = pu.mapparms(dom1, dom2) - assert_almost_equal(res, tgt) - - # test for complex values - dom1 = [0 - 1j, 2 + 1j] - dom2 = [-2, 2] - tgt = [-1 + 1j, 1 - 1j] - res = pu.mapparms(dom1, dom2) - assert_almost_equal(res, tgt) diff --git a/pythonPackages/numpy/numpy/random/SConscript b/pythonPackages/numpy/numpy/random/SConscript deleted file mode 100755 index a2acb0a668..0000000000 --- a/pythonPackages/numpy/numpy/random/SConscript +++ /dev/null @@ -1,50 +0,0 @@ -# Last Change: Wed Nov 19 09:00 PM 2008 J -# vim:syntax=python -import os - -from numscons import GetNumpyEnvironment, scons_get_mathlib - -from setup import needs_mingw_ftime_workaround - -def CheckWincrypt(context): - from copy import deepcopy - src = """\ -/* check to see if _WIN32 is defined */ -int main(int argc, char *argv[]) -{ -#ifdef _WIN32 - return 0; -#else - return 1; -#endif -} -""" - - context.Message("Checking if using wincrypt ... 
") - st = context.env.TryRun(src, '.C') - if st[0] == 0: - context.Result('No') - else: - context.Result('Yes') - return st[0] - -env = GetNumpyEnvironment(ARGUMENTS) - -mlib = scons_get_mathlib(env) -env.AppendUnique(LIBS = mlib) - -# On windows, see if we should use Advapi32 -if os.name == 'nt': - config = env.NumpyConfigure(custom_tests = {'CheckWincrypt' : CheckWincrypt}) - if config.CheckWincrypt: - config.env.AppendUnique(LIBS = 'Advapi32') - config.Finish() - -if needs_mingw_ftime_workaround(): - env.Append(CPPDEFINES=['NPY_NEEDS_MINGW_TIME_WORKAROUND']) - -sources = [os.path.join('mtrand', x) for x in - ['mtrand.c', 'randomkit.c', 'initarray.c', 'distributions.c']] - -# XXX: Pyrex dependency -env.NumpyPythonExtension('mtrand', source = sources) diff --git a/pythonPackages/numpy/numpy/random/SConstruct b/pythonPackages/numpy/numpy/random/SConstruct deleted file mode 100755 index a377d8391b..0000000000 --- a/pythonPackages/numpy/numpy/random/SConstruct +++ /dev/null @@ -1,2 +0,0 @@ -from numscons import GetInitEnvironment -GetInitEnvironment(ARGUMENTS).DistutilsSConscript('SConscript') diff --git a/pythonPackages/numpy/numpy/random/__init__.py b/pythonPackages/numpy/numpy/random/__init__.py deleted file mode 100755 index 8c3a333688..0000000000 --- a/pythonPackages/numpy/numpy/random/__init__.py +++ /dev/null @@ -1,102 +0,0 @@ -""" -======================== -Random Number Generation -======================== - -==================== ========================================================= -Utility functions -============================================================================== -random Uniformly distributed values of a given shape. -bytes Uniformly distributed random bytes. -random_integers Uniformly distributed integers in a given range. -random_sample Uniformly distributed floats in a given range. -permutation Randomly permute a sequence / generate a random sequence. -shuffle Randomly permute a sequence in place. 
-seed Seed the random number generator. -==================== ========================================================= - -==================== ========================================================= -Compatibility functions -============================================================================== -rand Uniformly distributed values. -randn Normally distributed values. -ranf Uniformly distributed floating point numbers. -randint Uniformly distributed integers in a given range. -==================== ========================================================= - -==================== ========================================================= -Univariate distributions -============================================================================== -beta Beta distribution over ``[0, 1]``. -binomial Binomial distribution. -chisquare :math:`\\chi^2` distribution. -exponential Exponential distribution. -f F (Fisher-Snedecor) distribution. -gamma Gamma distribution. -geometric Geometric distribution. -gumbel Gumbel distribution. -hypergeometric Hypergeometric distribution. -laplace Laplace distribution. -logistic Logistic distribution. -lognormal Log-normal distribution. -logseries Logarithmic series distribution. -negative_binomial Negative binomial distribution. -noncentral_chisquare Non-central chi-square distribution. -noncentral_f Non-central F distribution. -normal Normal / Gaussian distribution. -pareto Pareto distribution. -poisson Poisson distribution. -power Power distribution. -rayleigh Rayleigh distribution. -triangular Triangular distribution. -uniform Uniform distribution. -vonmises Von Mises circular distribution. -wald Wald (inverse Gaussian) distribution. -weibull Weibull distribution. -zipf Zipf's distribution over ranked data. 
-==================== ========================================================= - -==================== ========================================================= -Multivariate distributions -============================================================================== -dirichlet Multivariate generalization of Beta distribution. -multinomial Multivariate generalization of the binomial distribution. -multivariate_normal Multivariate generalization of the normal distribution. -==================== ========================================================= - -==================== ========================================================= -Standard distributions -============================================================================== -standard_cauchy Standard Cauchy-Lorentz distribution. -standard_exponential Standard exponential distribution. -standard_gamma Standard Gamma distribution. -standard_normal Standard normal distribution. -standard_t Standard Student's t-distribution. -==================== ========================================================= - -==================== ========================================================= -Internal functions -============================================================================== -get_state Get tuple representing internal state of generator. -set_state Set state of generator. -==================== ========================================================= - -""" -# To get sub-modules -from info import __doc__, __all__ -from mtrand import * - -# Some aliases: -ranf = random = sample = random_sample -__all__.extend(['ranf','random','sample']) - -def __RandomState_ctor(): - """Return a RandomState instance. - - This function exists solely to assist (un)pickling. 
- """ - return RandomState() - -from numpy.testing import Tester -test = Tester().test -bench = Tester().bench diff --git a/pythonPackages/numpy/numpy/random/info.py b/pythonPackages/numpy/numpy/random/info.py deleted file mode 100755 index 6139e57841..0000000000 --- a/pythonPackages/numpy/numpy/random/info.py +++ /dev/null @@ -1,134 +0,0 @@ -""" -======================== -Random Number Generation -======================== - -==================== ========================================================= -Utility functions -============================================================================== -random Uniformly distributed values of a given shape. -bytes Uniformly distributed random bytes. -random_integers Uniformly distributed integers in a given range. -random_sample Uniformly distributed floats in a given range. -permutation Randomly permute a sequence / generate a random sequence. -shuffle Randomly permute a sequence in place. -seed Seed the random number generator. -==================== ========================================================= - -==================== ========================================================= -Compatibility functions -============================================================================== -rand Uniformly distributed values. -randn Normally distributed values. -ranf Uniformly distributed floating point numbers. -randint Uniformly distributed integers in a given range. -==================== ========================================================= - -==================== ========================================================= -Univariate distributions -============================================================================== -beta Beta distribution over ``[0, 1]``. -binomial Binomial distribution. -chisquare :math:`\\chi^2` distribution. -exponential Exponential distribution. -f F (Fisher-Snedecor) distribution. -gamma Gamma distribution. -geometric Geometric distribution. -gumbel Gumbel distribution. 
-hypergeometric Hypergeometric distribution. -laplace Laplace distribution. -logistic Logistic distribution. -lognormal Log-normal distribution. -logseries Logarithmic series distribution. -negative_binomial Negative binomial distribution. -noncentral_chisquare Non-central chi-square distribution. -noncentral_f Non-central F distribution. -normal Normal / Gaussian distribution. -pareto Pareto distribution. -poisson Poisson distribution. -power Power distribution. -rayleigh Rayleigh distribution. -triangular Triangular distribution. -uniform Uniform distribution. -vonmises Von Mises circular distribution. -wald Wald (inverse Gaussian) distribution. -weibull Weibull distribution. -zipf Zipf's distribution over ranked data. -==================== ========================================================= - -==================== ========================================================= -Multivariate distributions -============================================================================== -dirichlet Multivariate generalization of Beta distribution. -multinomial Multivariate generalization of the binomial distribution. -multivariate_normal Multivariate generalization of the normal distribution. -==================== ========================================================= - -==================== ========================================================= -Standard distributions -============================================================================== -standard_cauchy Standard Cauchy-Lorentz distribution. -standard_exponential Standard exponential distribution. -standard_gamma Standard Gamma distribution. -standard_normal Standard normal distribution. -standard_t Standard Student's t-distribution. 
-==================== ========================================================= - -==================== ========================================================= -Internal functions -============================================================================== -get_state Get tuple representing internal state of generator. -set_state Set state of generator. -==================== ========================================================= - -""" - -depends = ['core'] - -__all__ = [ - 'beta', - 'binomial', - 'bytes', - 'chisquare', - 'exponential', - 'f', - 'gamma', - 'geometric', - 'get_state', - 'gumbel', - 'hypergeometric', - 'laplace', - 'logistic', - 'lognormal', - 'logseries', - 'multinomial', - 'multivariate_normal', - 'negative_binomial', - 'noncentral_chisquare', - 'noncentral_f', - 'normal', - 'pareto', - 'permutation', - 'poisson', - 'power', - 'rand', - 'randint', - 'randn', - 'random_integers', - 'random_sample', - 'rayleigh', - 'seed', - 'set_state', - 'shuffle', - 'standard_cauchy', - 'standard_exponential', - 'standard_gamma', - 'standard_normal', - 'standard_t', - 'triangular', - 'uniform', - 'vonmises', - 'wald', - 'weibull', - 'zipf' -] diff --git a/pythonPackages/numpy/numpy/random/mtrand/Python.pxi b/pythonPackages/numpy/numpy/random/mtrand/Python.pxi deleted file mode 100755 index 01d47af500..0000000000 --- a/pythonPackages/numpy/numpy/random/mtrand/Python.pxi +++ /dev/null @@ -1,57 +0,0 @@ -# :Author: Robert Kern -# :Copyright: 2004, Enthought, Inc. -# :License: BSD Style - - -cdef extern from "Python.h": - # Not part of the Python API, but we might as well define it here. - # Note that the exact type doesn't actually matter for Pyrex. 
- ctypedef int size_t - - # String API - char* PyString_AsString(object string) - char* PyString_AS_STRING(object string) - object PyString_FromString(char* c_string) - object PyString_FromStringAndSize(char* c_string, int length) - - # Float API - double PyFloat_AsDouble(object ob) - long PyInt_AsLong(object ob) - - # Memory API - void* PyMem_Malloc(size_t n) - void* PyMem_Realloc(void* buf, size_t n) - void PyMem_Free(void* buf) - - void Py_DECREF(object obj) - void Py_XDECREF(object obj) - void Py_INCREF(object obj) - void Py_XINCREF(object obj) - - # CObject API -# If this is uncommented it needs to be fixed to use PyCapsule -# for Python >= 3.0 -# -# ctypedef void (*destructor1)(void* cobj) -# ctypedef void (*destructor2)(void* cobj, void* desc) -# int PyCObject_Check(object p) -# object PyCObject_FromVoidPtr(void* cobj, destructor1 destr) -# object PyCObject_FromVoidPtrAndDesc(void* cobj, void* desc, -# destructor2 destr) -# void* PyCObject_AsVoidPtr(object self) -# void* PyCObject_GetDesc(object self) -# int PyCObject_SetVoidPtr(object self, void* cobj) - - # TypeCheck API - int PyFloat_Check(object obj) - int PyInt_Check(object obj) - - # Error API - int PyErr_Occurred() - void PyErr_Clear() - -cdef extern from "string.h": - void *memcpy(void *s1, void *s2, int n) - -cdef extern from "math.h": - double fabs(double x) diff --git a/pythonPackages/numpy/numpy/random/mtrand/distributions.c b/pythonPackages/numpy/numpy/random/mtrand/distributions.c deleted file mode 100755 index 39bd82f4a7..0000000000 --- a/pythonPackages/numpy/numpy/random/mtrand/distributions.c +++ /dev/null @@ -1,884 +0,0 @@ -/* Copyright 2005 Robert Kern (robert.kern@gmail.com) - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sublicense, 
and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the following conditions: - * - * The above copyright notice and this permission notice shall be included - * in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* The implementations of rk_hypergeometric_hyp(), rk_hypergeometric_hrua(), - * and rk_triangular() were adapted from Ivan Frohne's rv.py which has this - * license: - * - * Copyright 1998 by Ivan Frohne; Wasilla, Alaska, U.S.A. - * All Rights Reserved - * - * Permission to use, copy, modify and distribute this software and its - * documentation for any purpose, free of charge, is granted subject to the - * following conditions: - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the software. - * - * THE SOFTWARE AND DOCUMENTATION IS PROVIDED WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO MERCHANTABILITY, FITNESS - * FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHOR - * OR COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM OR DAMAGES IN A CONTRACT - * ACTION, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR ITS DOCUMENTATION. 
- */ -#include <math.h> -#include "distributions.h" -#include <stdlib.h> - -#ifndef min -#define min(x,y) ((x<y)?x:y) -#endif - -#ifndef max -#define max(x,y) ((x>y)?x:y) -#endif - -#ifndef M_PI -#define M_PI 3.14159265358979323846264338328 -#endif -/* log-gamma function to support some of these distributions. The - * algorithm comes from SPECFUN by Shanjie Zhang and Jianming Jin and their - * book "Computation of Special Functions", 1996, John Wiley & Sons, Inc. - */ -extern double loggam(double x); -double loggam(double x) -{ - double x0, x2, xp, gl, gl0; - long k, n; - - static double a[10] = {8.333333333333333e-02,-2.777777777777778e-03, - 7.936507936507937e-04,-5.952380952380952e-04, - 8.417508417508418e-04,-1.917526917526918e-03, - 6.410256410256410e-03,-2.955065359477124e-02, - 1.796443723688307e-01,-1.39243221690590e+00}; - x0 = x; - n = 0; - if ((x == 1.0) || (x == 2.0)) - { - return 0.0; - } - else if (x <= 7.0) - { - n = (long)(7 - x); - x0 = x + n; - } - x2 = 1.0/(x0*x0); - xp = 2*M_PI; - gl0 = a[9]; - for (k=8; k>=0; k--) - { - gl0 *= x2; - gl0 += a[k]; - } - gl = gl0/x0 + 0.5*log(xp) + (x0-0.5)*log(x0) - x0; - if (x <= 7.0) - { - for (k=1; k<=n; k++) - { - gl -= log(x0-1.0); - x0 -= 1.0; - } - } - return gl; -} - -double rk_normal(rk_state *state, double loc, double scale) -{ - return loc + scale*rk_gauss(state); -} - -double rk_standard_exponential(rk_state *state) -{ - /* We use -log(1-U) since U is [0, 1) */ - return -log(1.0 - rk_double(state)); -} - -double rk_exponential(rk_state *state, double scale) -{ - return scale * rk_standard_exponential(state); -} - -double rk_uniform(rk_state *state, double loc, double scale) -{ - return loc + scale*rk_double(state); -} - -double rk_standard_gamma(rk_state *state, double shape) -{ - double b, c; - double U, V, X, Y; - - if (shape == 1.0) - { - return rk_standard_exponential(state); - } - else if (shape < 1.0) - { - for (;;) - { - U = rk_double(state); - V = rk_standard_exponential(state); - if (U <= 1.0 - shape) - { - X = pow(U, 1./shape); - if (X <= V) - { - return X; - }
- } - else - { - Y = -log((1-U)/shape); - X = pow(1.0 - shape + shape*Y, 1./shape); - if (X <= (V + Y)) - { - return X; - } - } - } - } - else - { - b = shape - 1./3.; - c = 1./sqrt(9*b); - for (;;) - { - do - { - X = rk_gauss(state); - V = 1.0 + c*X; - } while (V <= 0.0); - - V = V*V*V; - U = rk_double(state); - if (U < 1.0 - 0.0331*(X*X)*(X*X)) return (b*V); - if (log(U) < 0.5*X*X + b*(1. - V + log(V))) return (b*V); - } - } -} - -double rk_gamma(rk_state *state, double shape, double scale) -{ - return scale * rk_standard_gamma(state, shape); -} - -double rk_beta(rk_state *state, double a, double b) -{ - double Ga, Gb; - - if ((a <= 1.0) && (b <= 1.0)) - { - double U, V, X, Y; - /* Use Jonk's algorithm */ - - while (1) - { - U = rk_double(state); - V = rk_double(state); - X = pow(U, 1.0/a); - Y = pow(V, 1.0/b); - - if ((X + Y) <= 1.0) - { - return X / (X + Y); - } - } - } - else - { - Ga = rk_standard_gamma(state, a); - Gb = rk_standard_gamma(state, b); - return Ga/(Ga + Gb); - } -} - -double rk_chisquare(rk_state *state, double df) -{ - return 2.0*rk_standard_gamma(state, df/2.0); -} - -double rk_noncentral_chisquare(rk_state *state, double df, double nonc) -{ - double Chi2, N; - - Chi2 = rk_chisquare(state, df-1); - N = rk_gauss(state) + sqrt(nonc); - return Chi2 + N*N; -} - -double rk_f(rk_state *state, double dfnum, double dfden) -{ - return ((rk_chisquare(state, dfnum) * dfden) / - (rk_chisquare(state, dfden) * dfnum)); -} - -double rk_noncentral_f(rk_state *state, double dfnum, double dfden, double nonc) -{ - return ((rk_noncentral_chisquare(state, dfnum, nonc)*dfden) / - (rk_chisquare(state, dfden)*dfnum)); -} - -long rk_binomial_btpe(rk_state *state, long n, double p) -{ - double r,q,fm,p1,xm,xl,xr,c,laml,lamr,p2,p3,p4; - double a,u,v,s,F,rho,t,A,nrq,x1,x2,f1,f2,z,z2,w,w2,x; - long m,y,k,i; - - if (!(state->has_binomial) || - (state->nsave != n) || - (state->psave != p)) - { - /* initialize */ - state->nsave = n; - state->psave = p; - state->has_binomial 
= 1; - state->r = r = min(p, 1.0-p); - state->q = q = 1.0 - r; - state->fm = fm = n*r+r; - state->m = m = (long)floor(state->fm); - state->p1 = p1 = floor(2.195*sqrt(n*r*q)-4.6*q) + 0.5; - state->xm = xm = m + 0.5; - state->xl = xl = xm - p1; - state->xr = xr = xm + p1; - state->c = c = 0.134 + 20.5/(15.3 + m); - a = (fm - xl)/(fm-xl*r); - state->laml = laml = a*(1.0 + a/2.0); - a = (xr - fm)/(xr*q); - state->lamr = lamr = a*(1.0 + a/2.0); - state->p2 = p2 = p1*(1.0 + 2.0*c); - state->p3 = p3 = p2 + c/laml; - state->p4 = p4 = p3 + c/lamr; - } - else - { - r = state->r; - q = state->q; - fm = state->fm; - m = state->m; - p1 = state->p1; - xm = state->xm; - xl = state->xl; - xr = state->xr; - c = state->c; - laml = state->laml; - lamr = state->lamr; - p2 = state->p2; - p3 = state->p3; - p4 = state->p4; - } - - /* sigh ... */ - Step10: - nrq = n*r*q; - u = rk_double(state)*p4; - v = rk_double(state); - if (u > p1) goto Step20; - y = (long)floor(xm - p1*v + u); - goto Step60; - - Step20: - if (u > p2) goto Step30; - x = xl + (u - p1)/c; - v = v*c + 1.0 - fabs(m - x + 0.5)/p1; - if (v > 1.0) goto Step10; - y = (long)floor(x); - goto Step50; - - Step30: - if (u > p3) goto Step40; - y = (long)floor(xl + log(v)/laml); - if (y < 0) goto Step10; - v = v*(u-p2)*laml; - goto Step50; - - Step40: - y = (int)floor(xr - log(v)/lamr); - if (y > n) goto Step10; - v = v*(u-p3)*lamr; - - Step50: - k = fabs(y - m); - if ((k > 20) && (k < ((nrq)/2.0 - 1))) goto Step52; - - s = r/q; - a = s*(n+1); - F = 1.0; - if (m < y) - { - for (i=m; i<=y; i++) - { - F *= (a/i - s); - } - } - else if (m > y) - { - for (i=y; i<=m; i++) - { - F /= (a/i - s); - } - } - else - { - if (v > F) goto Step10; - goto Step60; - } - - Step52: - rho = (k/(nrq))*((k*(k/3.0 + 0.625) + 0.16666666666666666)/nrq + 0.5); - t = -k*k/(2*nrq); - A = log(v); - if (A < (t - rho)) goto Step60; - if (A > (t + rho)) goto Step10; - - x1 = y+1; - f1 = m+1; - z = n+1-m; - w = n-y+1; - x2 = x1*x1; - f2 = f1*f1; - z2 = z*z; - w2 = 
w*w; - if (A > (xm*log(f1/x1) - + (n-m+0.5)*log(z/w) - + (y-m)*log(w*r/(x1*q)) - + (13680.-(462.-(132.-(99.-140./f2)/f2)/f2)/f2)/f1/166320. - + (13680.-(462.-(132.-(99.-140./z2)/z2)/z2)/z2)/z/166320. - + (13680.-(462.-(132.-(99.-140./x2)/x2)/x2)/x2)/x1/166320. - + (13680.-(462.-(132.-(99.-140./w2)/w2)/w2)/w2)/w/166320.)) - { - goto Step10; - } - - Step60: - if (p > 0.5) - { - y = n - y; - } - - return y; -} - -long rk_binomial_inversion(rk_state *state, long n, double p) -{ - double q, qn, np, px, U; - long X, bound; - - if (!(state->has_binomial) || - (state->nsave != n) || - (state->psave != p)) - { - state->nsave = n; - state->psave = p; - state->has_binomial = 1; - state->q = q = 1.0 - p; - state->r = qn = exp(n * log(q)); - state->c = np = n*p; - state->m = bound = min(n, np + 10.0*sqrt(np*q + 1)); - } else - { - q = state->q; - qn = state->r; - np = state->c; - bound = state->m; - } - X = 0; - px = qn; - U = rk_double(state); - while (U > px) - { - X++; - if (X > bound) - { - X = 0; - px = qn; - U = rk_double(state); - } else - { - U -= px; - px = ((n-X+1) * p * px)/(X*q); - } - } - return X; -} - -long rk_binomial(rk_state *state, long n, double p) -{ - double q; - - if (p <= 0.5) - { - if (p*n <= 30.0) - { - return rk_binomial_inversion(state, n, p); - } - else - { - return rk_binomial_btpe(state, n, p); - } - } - else - { - q = 1.0-p; - if (q*n <= 30.0) - { - return n - rk_binomial_inversion(state, n, q); - } - else - { - return n - rk_binomial_btpe(state, n, q); - } - } - -} - -long rk_negative_binomial(rk_state *state, double n, double p) -{ - double Y; - - Y = rk_gamma(state, n, (1-p)/p); - return rk_poisson(state, Y); -} - -long rk_poisson_mult(rk_state *state, double lam) -{ - long X; - double prod, U, enlam; - - enlam = exp(-lam); - X = 0; - prod = 1.0; - while (1) - { - U = rk_double(state); - prod *= U; - if (prod > enlam) - { - X += 1; - } - else - { - return X; - } - } -} - -#define LS2PI 0.91893853320467267 -#define TWELFTH 
0.083333333333333333333333 -long rk_poisson_ptrs(rk_state *state, double lam) -{ - long k; - double U, V, slam, loglam, a, b, invalpha, vr, us; - - slam = sqrt(lam); - loglam = log(lam); - b = 0.931 + 2.53*slam; - a = -0.059 + 0.02483*b; - invalpha = 1.1239 + 1.1328/(b-3.4); - vr = 0.9277 - 3.6224/(b-2); - - while (1) - { - U = rk_double(state) - 0.5; - V = rk_double(state); - us = 0.5 - fabs(U); - k = (long)floor((2*a/us + b)*U + lam + 0.43); - if ((us >= 0.07) && (V <= vr)) - { - return k; - } - if ((k < 0) || - ((us < 0.013) && (V > us))) - { - continue; - } - if ((log(V) + log(invalpha) - log(a/(us*us)+b)) <= - (-lam + k*loglam - loggam(k+1))) - { - return k; - } - - - } - -} - -long rk_poisson(rk_state *state, double lam) -{ - if (lam >= 10) - { - return rk_poisson_ptrs(state, lam); - } - else if (lam == 0) - { - return 0; - } - else - { - return rk_poisson_mult(state, lam); - } -} - -double rk_standard_cauchy(rk_state *state) -{ - return rk_gauss(state) / rk_gauss(state); -} - -double rk_standard_t(rk_state *state, double df) -{ - double N, G, X; - - N = rk_gauss(state); - G = rk_standard_gamma(state, df/2); - X = sqrt(df/2)*N/sqrt(G); - return X; -} - -/* Uses the rejection algorithm compared against the wrapped Cauchy - distribution suggested by Best and Fisher and documented in - Chapter 9 of Luc's Non-Uniform Random Variate Generation. 
- http://cg.scs.carleton.ca/~luc/rnbookindex.html - (but corrected to match the algorithm in R and Python) -*/ -double rk_vonmises(rk_state *state, double mu, double kappa) -{ - double r, rho, s; - double U, V, W, Y, Z; - double result, mod; - int neg; - - if (kappa < 1e-8) - { - return M_PI * (2*rk_double(state)-1); - } - else - { - r = 1 + sqrt(1 + 4*kappa*kappa); - rho = (r - sqrt(2*r))/(2*kappa); - s = (1 + rho*rho)/(2*rho); - - while (1) - { - U = rk_double(state); - Z = cos(M_PI*U); - W = (1 + s*Z)/(s + Z); - Y = kappa * (s - W); - V = rk_double(state); - if ((Y*(2-Y) - V >= 0) || (log(Y/V)+1 - Y >= 0)) - { - break; - } - } - - U = rk_double(state); - - result = acos(W); - if (U < 0.5) - { - result = -result; - } - result += mu; - neg = (result < 0); - mod = fabs(result); - mod = (fmod(mod+M_PI, 2*M_PI)-M_PI); - if (neg) - { - mod *= -1; - } - - return mod; - } -} - -double rk_pareto(rk_state *state, double a) -{ - return exp(rk_standard_exponential(state)/a) - 1; -} - -double rk_weibull(rk_state *state, double a) -{ - return pow(rk_standard_exponential(state), 1./a); -} - -double rk_power(rk_state *state, double a) -{ - return pow(1 - exp(-rk_standard_exponential(state)), 1./a); -} - -double rk_laplace(rk_state *state, double loc, double scale) -{ - double U; - - U = rk_double(state); - if (U < 0.5) - { - U = loc + scale * log(U + U); - } else - { - U = loc - scale * log(2.0 - U - U); - } - return U; -} - -double rk_gumbel(rk_state *state, double loc, double scale) -{ - double U; - - U = 1.0 - rk_double(state); - return loc - scale * log(-log(U)); -} - -double rk_logistic(rk_state *state, double loc, double scale) -{ - double U; - - U = rk_double(state); - return loc + scale * log(U/(1.0 - U)); -} - -double rk_lognormal(rk_state *state, double mean, double sigma) -{ - return exp(rk_normal(state, mean, sigma)); -} - -double rk_rayleigh(rk_state *state, double mode) -{ - return mode*sqrt(-2.0 * log(1.0 - rk_double(state))); -} - -double rk_wald(rk_state 
*state, double mean, double scale) -{ - double U, X, Y; - double mu_2l; - - mu_2l = mean / (2*scale); - Y = rk_gauss(state); - Y = mean*Y*Y; - X = mean + mu_2l*(Y - sqrt(4*scale*Y + Y*Y)); - U = rk_double(state); - if (U <= mean/(mean+X)) - { - return X; - } else - { - return mean*mean/X; - } -} - -long rk_zipf(rk_state *state, double a) -{ - double T, U, V; - long X; - double am1, b; - - am1 = a - 1.0; - b = pow(2.0, am1); - do - { - U = 1.0-rk_double(state); - V = rk_double(state); - X = (long)floor(pow(U, -1.0/am1)); - /* The real result may be above what can be represented in a signed - * long. It will get casted to -sys.maxint-1. Since this is - * a straightforward rejection algorithm, we can just reject this value - * in the rejection condition below. This function then models a Zipf - * distribution truncated to sys.maxint. - */ - T = pow(1.0 + 1.0/X, am1); - } while (((V*X*(T-1.0)/(b-1.0)) > (T/b)) || X < 1); - return X; -} - -long rk_geometric_search(rk_state *state, double p) -{ - double U; - long X; - double sum, prod, q; - - X = 1; - sum = prod = p; - q = 1.0 - p; - U = rk_double(state); - while (U > sum) - { - prod *= q; - sum += prod; - X++; - } - return X; -} - -long rk_geometric_inversion(rk_state *state, double p) -{ - return (long)ceil(log(1.0-rk_double(state))/log(1.0-p)); -} - -long rk_geometric(rk_state *state, double p) -{ - if (p >= 0.333333333333333333333333) - { - return rk_geometric_search(state, p); - } else - { - return rk_geometric_inversion(state, p); - } -} - -long rk_hypergeometric_hyp(rk_state *state, long good, long bad, long sample) -{ - long d1, K, Z; - double d2, U, Y; - - d1 = bad + good - sample; - d2 = (double)min(bad, good); - - Y = d2; - K = sample; - while (Y > 0.0) - { - U = rk_double(state); - Y -= (long)floor(U + Y/(d1 + K)); - K--; - if (K == 0) break; - } - Z = (long)(d2 - Y); - if (good > bad) Z = sample - Z; - return Z; -} - -/* D1 = 2*sqrt(2/e) */ -/* D2 = 3 - 2*sqrt(3/e) */ -#define D1 1.7155277699214135 -#define 
D2 0.8989161620588988 -long rk_hypergeometric_hrua(rk_state *state, long good, long bad, long sample) -{ - long mingoodbad, maxgoodbad, popsize, m, d9; - double d4, d5, d6, d7, d8, d10, d11; - long Z; - double T, W, X, Y; - - mingoodbad = min(good, bad); - popsize = good + bad; - maxgoodbad = max(good, bad); - m = min(sample, popsize - sample); - d4 = ((double)mingoodbad) / popsize; - d5 = 1.0 - d4; - d6 = m*d4 + 0.5; - d7 = sqrt((popsize - m) * sample * d4 *d5 / (popsize-1) + 0.5); - d8 = D1*d7 + D2; - d9 = (long)floor((double)((m+1)*(mingoodbad+1))/(popsize+2)); - d10 = (loggam(d9+1) + loggam(mingoodbad-d9+1) + loggam(m-d9+1) + - loggam(maxgoodbad-m+d9+1)); - d11 = min(min(m, mingoodbad)+1.0, floor(d6+16*d7)); - /* 16 for 16-decimal-digit precision in D1 and D2 */ - - while (1) - { - X = rk_double(state); - Y = rk_double(state); - W = d6 + d8*(Y- 0.5)/X; - - /* fast rejection: */ - if ((W < 0.0) || (W >= d11)) continue; - - Z = (long)floor(W); - T = d10 - (loggam(Z+1) + loggam(mingoodbad-Z+1) + loggam(m-Z+1) + - loggam(maxgoodbad-m+Z+1)); - - /* fast acceptance: */ - if ((X*(4.0-X)-3.0) <= T) break; - - /* fast rejection: */ - if (X*(X-T) >= 1) continue; - - if (2.0*log(X) <= T) break; /* acceptance */ - } - - /* this is a correction to HRUA* by Ivan Frohne in rv.py */ - if (good > bad) Z = m - Z; - - /* another fix from rv.py to allow sample to exceed popsize/2 */ - if (m < sample) Z = good - Z; - - return Z; -} -#undef D1 -#undef D2 - -long rk_hypergeometric(rk_state *state, long good, long bad, long sample) -{ - if (sample > 10) - { - return rk_hypergeometric_hrua(state, good, bad, sample); - } else - { - return rk_hypergeometric_hyp(state, good, bad, sample); - } -} - -double rk_triangular(rk_state *state, double left, double mode, double right) -{ - double base, leftbase, ratio, leftprod, rightprod; - double U; - - base = right - left; - leftbase = mode - left; - ratio = leftbase / base; - leftprod = leftbase*base; - rightprod = (right - mode)*base; - - U = 
rk_double(state); - if (U <= ratio) - { - return left + sqrt(U*leftprod); - } else - { - return right - sqrt((1.0 - U) * rightprod); - } -} - -long rk_logseries(rk_state *state, double p) -{ - double q, r, U, V; - long result; - - r = log(1.0 - p); - - while (1) { - V = rk_double(state); - if (V >= p) { - return 1; - } - U = rk_double(state); - q = 1.0 - exp(r*U); - if (V <= q*q) { - result = (long)floor(1 + log(V)/log(q)); - if (result < 1) { - continue; - } - else { - return result; - } - } - if (V >= q) { - return 1; - } - return 2; - } -} diff --git a/pythonPackages/numpy/numpy/random/mtrand/distributions.h b/pythonPackages/numpy/numpy/random/mtrand/distributions.h deleted file mode 100755 index 6f60a4ff30..0000000000 --- a/pythonPackages/numpy/numpy/random/mtrand/distributions.h +++ /dev/null @@ -1,185 +0,0 @@ -/* Copyright 2005 Robert Kern (robert.kern@gmail.com) - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sublicense, and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the following conditions: - * - * The above copyright notice and this permission notice shall be included - * in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- */ - -#ifndef _RK_DISTR_ -#define _RK_DISTR_ - -#include "randomkit.h" - -#ifdef __cplusplus -extern "C" { -#endif - -/* References: - * - * Devroye, Luc. _Non-Uniform Random Variate Generation_. - * Springer-Verlag, New York, 1986. - * http://cgm.cs.mcgill.ca/~luc/rnbookindex.html - * - * Kachitvichyanukul, V. and Schmeiser, B. W. Binomial Random Variate - * Generation. Communications of the ACM, 31, 2 (February, 1988) 216. - * - * Hoermann, W. The Transformed Rejection Method for Generating Poisson Random - * Variables. Insurance: Mathematics and Economics, (to appear) - * http://citeseer.csail.mit.edu/151115.html - * - * Marsaglia, G. and Tsang, W. W. A Simple Method for Generating Gamma - * Variables. ACM Transactions on Mathematical Software, Vol. 26, No. 3, - * September 2000, Pages 363–372. - */ - -/* Normal distribution with mean=loc and standard deviation=scale. */ -extern double rk_normal(rk_state *state, double loc, double scale); - -/* Standard exponential distribution (mean=1) computed by inversion of the - * CDF. */ -extern double rk_standard_exponential(rk_state *state); - -/* Exponential distribution with mean=scale. */ -extern double rk_exponential(rk_state *state, double scale); - -/* Uniform distribution on interval [loc, loc+scale). */ -extern double rk_uniform(rk_state *state, double loc, double scale); - -/* Standard gamma distribution with shape parameter. - * When shape < 1, the algorithm given by (Devroye p. 304) is used. - * When shape == 1, a Exponential variate is generated. - * When shape > 1, the small and fast method of (Marsaglia and Tsang 2000) - * is used. - */ -extern double rk_standard_gamma(rk_state *state, double shape); - -/* Gamma distribution with shape and scale. */ -extern double rk_gamma(rk_state *state, double shape, double scale); - -/* Beta distribution computed by combining two gamma variates (Devroye p. 432). 
- */ -extern double rk_beta(rk_state *state, double a, double b); - -/* Chi^2 distribution computed by transforming a gamma variate (it being a - * special case Gamma(df/2, 2)). */ -extern double rk_chisquare(rk_state *state, double df); - -/* Noncentral Chi^2 distribution computed by modifying a Chi^2 variate. */ -extern double rk_noncentral_chisquare(rk_state *state, double df, double nonc); - -/* F distribution computed by taking the ratio of two Chi^2 variates. */ -extern double rk_f(rk_state *state, double dfnum, double dfden); - -/* Noncentral F distribution computed by taking the ratio of a noncentral Chi^2 - * and a Chi^2 variate. */ -extern double rk_noncentral_f(rk_state *state, double dfnum, double dfden, double nonc); - -/* Binomial distribution with n Bernoulli trials with success probability p. - * When n*p <= 30, the "Second waiting time method" given by (Devroye p. 525) is - * used. Otherwise, the BTPE algorithm of (Kachitvichyanukul and Schmeiser 1988) - * is used. */ -extern long rk_binomial(rk_state *state, long n, double p); - -/* Binomial distribution using BTPE. */ -extern long rk_binomial_btpe(rk_state *state, long n, double p); - -/* Binomial distribution using inversion and chop-down */ -extern long rk_binomial_inversion(rk_state *state, long n, double p); - -/* Negative binomial distribution computed by generating a Gamma(n, (1-p)/p) - * variate Y and returning a Poisson(Y) variate (Devroye p. 543). */ -extern long rk_negative_binomial(rk_state *state, double n, double p); - -/* Poisson distribution with mean=lam. - * When lam < 10, a basic algorithm using repeated multiplications of uniform - * variates is used (Devroye p. 504). - * When lam >= 10, algorithm PTRS from (Hoermann 1992) is used. - */ -extern long rk_poisson(rk_state *state, double lam); - -/* Poisson distribution computed by repeated multiplication of uniform variates. 
- */ -extern long rk_poisson_mult(rk_state *state, double lam); - -/* Poisson distribution computer by the PTRS algorithm. */ -extern long rk_poisson_ptrs(rk_state *state, double lam); - -/* Standard Cauchy distribution computed by dividing standard gaussians - * (Devroye p. 451). */ -extern double rk_standard_cauchy(rk_state *state); - -/* Standard t-distribution with df degrees of freedom (Devroye p. 445 as - * corrected in the Errata). */ -extern double rk_standard_t(rk_state *state, double df); - -/* von Mises circular distribution with center mu and shape kappa on [-pi,pi] - * (Devroye p. 476 as corrected in the Errata). */ -extern double rk_vonmises(rk_state *state, double mu, double kappa); - -/* Pareto distribution via inversion (Devroye p. 262) */ -extern double rk_pareto(rk_state *state, double a); - -/* Weibull distribution via inversion (Devroye p. 262) */ -extern double rk_weibull(rk_state *state, double a); - -/* Power distribution via inversion (Devroye p. 262) */ -extern double rk_power(rk_state *state, double a); - -/* Laplace distribution */ -extern double rk_laplace(rk_state *state, double loc, double scale); - -/* Gumbel distribution */ -extern double rk_gumbel(rk_state *state, double loc, double scale); - -/* Logistic distribution */ -extern double rk_logistic(rk_state *state, double loc, double scale); - -/* Log-normal distribution */ -extern double rk_lognormal(rk_state *state, double mean, double sigma); - -/* Rayleigh distribution */ -extern double rk_rayleigh(rk_state *state, double mode); - -/* Wald distribution */ -extern double rk_wald(rk_state *state, double mean, double scale); - -/* Zipf distribution */ -extern long rk_zipf(rk_state *state, double a); - -/* Geometric distribution */ -extern long rk_geometric(rk_state *state, double p); -extern long rk_geometric_search(rk_state *state, double p); -extern long rk_geometric_inversion(rk_state *state, double p); - -/* Hypergeometric distribution */ -extern long rk_hypergeometric(rk_state 
*state, long good, long bad, long sample); -extern long rk_hypergeometric_hyp(rk_state *state, long good, long bad, long sample); -extern long rk_hypergeometric_hrua(rk_state *state, long good, long bad, long sample); - -/* Triangular distribution */ -extern double rk_triangular(rk_state *state, double left, double mode, double right); - -/* Logarithmic series distribution */ -extern long rk_logseries(rk_state *state, double p); - -#ifdef __cplusplus -} -#endif - - -#endif /* _RK_DISTR_ */ diff --git a/pythonPackages/numpy/numpy/random/mtrand/initarray.c b/pythonPackages/numpy/numpy/random/mtrand/initarray.c deleted file mode 100755 index d36d512e33..0000000000 --- a/pythonPackages/numpy/numpy/random/mtrand/initarray.c +++ /dev/null @@ -1,151 +0,0 @@ -/* - * These functions have been adapted from Python 2.4.1's _randommodule.c - * - * The following changes have been made to it in 2005 by Robert Kern: - * - * * init_by_array has been declared extern, has a void return, and uses the - * rk_state structure to hold its data. - * - * The original file has the following verbatim comments: - * - * ------------------------------------------------------------------ - * The code in this module was based on a download from: - * http://www.math.keio.ac.jp/~matumoto/MT2002/emt19937ar.html - * - * It was modified in 2002 by Raymond Hettinger as follows: - * - * * the principal computational lines untouched except for tabbing. - * - * * renamed genrand_res53() to random_random() and wrapped - * in python calling/return code. - * - * * genrand_int32() and the helper functions, init_genrand() - * and init_by_array(), were declared static, wrapped in - * Python calling/return code. also, their global data - * references were replaced with structure references. - * - * * unused functions from the original were deleted. - * new, original C python code was added to implement the - * Random() interface. 
- * - * The following are the verbatim comments from the original code: - * - * A C-program for MT19937, with initialization improved 2002/1/26. - * Coded by Takuji Nishimura and Makoto Matsumoto. - * - * Before using, initialize the state by using init_genrand(seed) - * or init_by_array(init_key, key_length). - * - * Copyright (C) 1997 - 2002, Makoto Matsumoto and Takuji Nishimura, - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * 3. The names of its contributors may not be used to endorse or promote - * products derived from this software without specific prior written - * permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR - * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, - * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, - * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR - * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF - * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING - * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * - * Any feedback is very welcome. 
- * http://www.math.keio.ac.jp/matumoto/emt.html - * email: matumoto@math.keio.ac.jp - */ - -#include "initarray.h" - -static void -init_genrand(rk_state *self, unsigned long s); - -/* initializes mt[RK_STATE_LEN] with a seed */ -static void -init_genrand(rk_state *self, unsigned long s) -{ - int mti; - unsigned long *mt = self->key; - - mt[0] = s & 0xffffffffUL; - for (mti = 1; mti < RK_STATE_LEN; mti++) { - /* - * See Knuth TAOCP Vol2. 3rd Ed. P.106 for multiplier. - * In the previous versions, MSBs of the seed affect - * only MSBs of the array mt[]. - * 2002/01/09 modified by Makoto Matsumoto - */ - mt[mti] = (1812433253UL * (mt[mti-1] ^ (mt[mti-1] >> 30)) + mti); - /* for > 32 bit machines */ - mt[mti] &= 0xffffffffUL; - } - self->pos = mti; - return; -} - - -/* - * initialize by an array with array-length - * init_key is the array for initializing keys - * key_length is its length - */ -extern void -init_by_array(rk_state *self, unsigned long init_key[], unsigned long key_length) -{ - /* was signed in the original code. RDH 12/16/2002 */ - unsigned int i = 1; - unsigned int j = 0; - unsigned long *mt = self->key; - unsigned int k; - - init_genrand(self, 19650218UL); - k = (RK_STATE_LEN > key_length ? 
RK_STATE_LEN : key_length); - for (; k; k--) { - /* non linear */ - mt[i] = (mt[i] ^ ((mt[i - 1] ^ (mt[i - 1] >> 30)) * 1664525UL)) - + init_key[j] + j; - /* for > 32 bit machines */ - mt[i] &= 0xffffffffUL; - i++; - j++; - if (i >= RK_STATE_LEN) { - mt[0] = mt[RK_STATE_LEN - 1]; - i = 1; - } - if (j >= key_length) { - j = 0; - } - } - for (k = RK_STATE_LEN - 1; k; k--) { - mt[i] = (mt[i] ^ ((mt[i-1] ^ (mt[i-1] >> 30)) * 1566083941UL)) - - i; /* non linear */ - mt[i] &= 0xffffffffUL; /* for WORDSIZE > 32 machines */ - i++; - if (i >= RK_STATE_LEN) { - mt[0] = mt[RK_STATE_LEN - 1]; - i = 1; - } - } - - mt[0] = 0x80000000UL; /* MSB is 1; assuring non-zero initial array */ - self->gauss = 0; - self->has_gauss = 0; - self->has_binomial = 0; -} diff --git a/pythonPackages/numpy/numpy/random/mtrand/initarray.h b/pythonPackages/numpy/numpy/random/mtrand/initarray.h deleted file mode 100755 index a4ac210f42..0000000000 --- a/pythonPackages/numpy/numpy/random/mtrand/initarray.h +++ /dev/null @@ -1,6 +0,0 @@ -#include "randomkit.h" - -extern void -init_by_array(rk_state *self, unsigned long init_key[], - unsigned long key_length); - diff --git a/pythonPackages/numpy/numpy/random/mtrand/mtrand.c b/pythonPackages/numpy/numpy/random/mtrand/mtrand.c deleted file mode 100755 index cfb4bf4c9a..0000000000 --- a/pythonPackages/numpy/numpy/random/mtrand/mtrand.c +++ /dev/null @@ -1,21946 +0,0 @@ -/* Generated by Cython 0.12.1 on Mon Jul 5 13:47:19 2010 */ - -#define PY_SSIZE_T_CLEAN -#include "Python.h" -#include "structmember.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. 
-#else - -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#if PY_VERSION_HEX < 0x02040000 - #define METH_COEXIST 0 - #define PyDict_CheckExact(op) (Py_TYPE(op) == &PyDict_Type) - #define PyDict_Contains(d,o) PySequence_Contains(d,o) -#endif - -#if PY_VERSION_HEX < 0x02050000 - typedef int Py_ssize_t; - #define PY_SSIZE_T_MAX INT_MAX - #define PY_SSIZE_T_MIN INT_MIN - #define PY_FORMAT_SIZE_T "" - #define PyInt_FromSsize_t(z) PyInt_FromLong(z) - #define PyInt_AsSsize_t(o) PyInt_AsLong(o) - #define PyNumber_Index(o) PyNumber_Int(o) - #define PyIndex_Check(o) PyNumber_Check(o) - #define PyErr_WarnEx(category, message, stacklevel) PyErr_Warn(category, message) -#endif - -#if PY_VERSION_HEX < 0x02060000 - #define Py_REFCNT(ob) (((PyObject*)(ob))->ob_refcnt) - #define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - #define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) - #define PyVarObject_HEAD_INIT(type, size) \ - PyObject_HEAD_INIT(type) size, - #define PyType_Modified(t) - - typedef struct { - void *buf; - PyObject *obj; - Py_ssize_t len; - Py_ssize_t itemsize; - int readonly; - int ndim; - char *format; - Py_ssize_t *shape; - Py_ssize_t *strides; - Py_ssize_t *suboffsets; - void *internal; - } Py_buffer; - - #define PyBUF_SIMPLE 0 - #define PyBUF_WRITABLE 0x0001 - #define PyBUF_FORMAT 0x0004 - #define PyBUF_ND 0x0008 - #define PyBUF_STRIDES (0x0010 | PyBUF_ND) - #define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES) - #define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES) - #define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES) - #define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES) - -#endif - -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" -#else - #define __Pyx_BUILTIN_MODULE_NAME "builtins" -#endif - -#if PY_MAJOR_VERSION >= 3 - #define Py_TPFLAGS_CHECKTYPES 0 - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif - -#if (PY_VERSION_HEX < 0x02060000) || (PY_MAJOR_VERSION >= 3) - #define 
Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif - -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyString_Type PyUnicode_Type - #define PyString_CheckExact PyUnicode_CheckExact -#else - #define PyBytes_Type PyString_Type - #define PyBytes_CheckExact PyString_CheckExact -#endif - -#if PY_MAJOR_VERSION >= 3 - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define PyInt_AsSsize_t PyLong_AsSsize_t - #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask - #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) - -#endif - -#if PY_MAJOR_VERSION >= 3 - #define PyMethod_New(func, self, klass) PyInstanceMethod_New(func) -#endif - -#if !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#else - #define _USE_MATH_DEFINES -#endif - -#if PY_VERSION_HEX < 0x02050000 - #define __Pyx_GetAttrString(o,n) PyObject_GetAttrString((o),((char *)(n))) - #define __Pyx_SetAttrString(o,n,a) PyObject_SetAttrString((o),((char *)(n)),(a)) - #define __Pyx_DelAttrString(o,n) PyObject_DelAttrString((o),((char *)(n))) -#else - #define __Pyx_GetAttrString(o,n) PyObject_GetAttrString((o),(n)) - #define __Pyx_SetAttrString(o,n,a) PyObject_SetAttrString((o),(n),(a)) - #define 
__Pyx_DelAttrString(o,n) PyObject_DelAttrString((o),(n)) -#endif - -#if PY_VERSION_HEX < 0x02050000 - #define __Pyx_NAMESTR(n) ((char *)(n)) - #define __Pyx_DOCSTR(n) ((char *)(n)) -#else - #define __Pyx_NAMESTR(n) (n) - #define __Pyx_DOCSTR(n) (n) -#endif -#ifdef __cplusplus -#define __PYX_EXTERN_C extern "C" -#else -#define __PYX_EXTERN_C extern -#endif -#include -#define __PYX_HAVE_API__mtrand -#include "string.h" -#include "math.h" -#include "numpy/arrayobject.h" -#include "mtrand_py_helper.h" -#include "randomkit.h" -#include "distributions.h" -#include "initarray.h" - -#ifndef CYTHON_INLINE - #if defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #else - #define CYTHON_INLINE - #endif -#endif - -typedef struct {PyObject **p; char *s; const long n; const char* encoding; const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; /*proto*/ - - -/* Type Conversion Predeclarations */ - -#if PY_MAJOR_VERSION < 3 -#define __Pyx_PyBytes_FromString PyString_FromString -#define __Pyx_PyBytes_FromStringAndSize PyString_FromStringAndSize -#define __Pyx_PyBytes_AsString PyString_AsString -#else -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -#define __Pyx_PyBytes_AsString PyBytes_AsString -#endif - -#define __Pyx_PyBytes_FromUString(s) __Pyx_PyBytes_FromString((char*)s) -#define __Pyx_PyBytes_AsUString(s) ((unsigned char*) __Pyx_PyBytes_AsString(s)) - -#define __Pyx_PyBool_FromLong(b) ((b) ? (Py_INCREF(Py_True), Py_True) : (Py_INCREF(Py_False), Py_False)) -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_Int(PyObject* x); - -#if !defined(T_PYSSIZET) -#if PY_VERSION_HEX < 0x02050000 -#define T_PYSSIZET T_INT -#elif !defined(T_LONGLONG) -#define T_PYSSIZET \ - ((sizeof(Py_ssize_t) == sizeof(int)) ? T_INT : \ - ((sizeof(Py_ssize_t) == sizeof(long)) ? 
T_LONG : -1)) -#else -#define T_PYSSIZET \ - ((sizeof(Py_ssize_t) == sizeof(int)) ? T_INT : \ - ((sizeof(Py_ssize_t) == sizeof(long)) ? T_LONG : \ - ((sizeof(Py_ssize_t) == sizeof(PY_LONG_LONG)) ? T_LONGLONG : -1))) -#endif -#endif - - -#if !defined(T_ULONGLONG) -#define __Pyx_T_UNSIGNED_INT(x) \ - ((sizeof(x) == sizeof(unsigned char)) ? T_UBYTE : \ - ((sizeof(x) == sizeof(unsigned short)) ? T_USHORT : \ - ((sizeof(x) == sizeof(unsigned int)) ? T_UINT : \ - ((sizeof(x) == sizeof(unsigned long)) ? T_ULONG : -1)))) -#else -#define __Pyx_T_UNSIGNED_INT(x) \ - ((sizeof(x) == sizeof(unsigned char)) ? T_UBYTE : \ - ((sizeof(x) == sizeof(unsigned short)) ? T_USHORT : \ - ((sizeof(x) == sizeof(unsigned int)) ? T_UINT : \ - ((sizeof(x) == sizeof(unsigned long)) ? T_ULONG : \ - ((sizeof(x) == sizeof(unsigned PY_LONG_LONG)) ? T_ULONGLONG : -1))))) -#endif -#if !defined(T_LONGLONG) -#define __Pyx_T_SIGNED_INT(x) \ - ((sizeof(x) == sizeof(char)) ? T_BYTE : \ - ((sizeof(x) == sizeof(short)) ? T_SHORT : \ - ((sizeof(x) == sizeof(int)) ? T_INT : \ - ((sizeof(x) == sizeof(long)) ? T_LONG : -1)))) -#else -#define __Pyx_T_SIGNED_INT(x) \ - ((sizeof(x) == sizeof(char)) ? T_BYTE : \ - ((sizeof(x) == sizeof(short)) ? T_SHORT : \ - ((sizeof(x) == sizeof(int)) ? T_INT : \ - ((sizeof(x) == sizeof(long)) ? T_LONG : \ - ((sizeof(x) == sizeof(PY_LONG_LONG)) ? T_LONGLONG : -1))))) -#endif - -#define __Pyx_T_FLOATING(x) \ - ((sizeof(x) == sizeof(float)) ? T_FLOAT : \ - ((sizeof(x) == sizeof(double)) ? T_DOUBLE : -1)) - -#if !defined(T_SIZET) -#if !defined(T_ULONGLONG) -#define T_SIZET \ - ((sizeof(size_t) == sizeof(unsigned int)) ? T_UINT : \ - ((sizeof(size_t) == sizeof(unsigned long)) ? T_ULONG : -1)) -#else -#define T_SIZET \ - ((sizeof(size_t) == sizeof(unsigned int)) ? T_UINT : \ - ((sizeof(size_t) == sizeof(unsigned long)) ? T_ULONG : \ - ((sizeof(size_t) == sizeof(unsigned PY_LONG_LONG)) ? 
T_ULONGLONG : -1))) -#endif -#endif - -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -static CYTHON_INLINE size_t __Pyx_PyInt_AsSize_t(PyObject*); - -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) - - -#ifdef __GNUC__ -/* Test for GCC > 2.95 */ -#if __GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)) -#define likely(x) __builtin_expect(!!(x), 1) -#define unlikely(x) __builtin_expect(!!(x), 0) -#else /* __GNUC__ > 2 ... */ -#define likely(x) (x) -#define unlikely(x) (x) -#endif /* __GNUC__ > 2 ... */ -#else /* __GNUC__ */ -#define likely(x) (x) -#define unlikely(x) (x) -#endif /* __GNUC__ */ - -static PyObject *__pyx_m; -static PyObject *__pyx_b; -static PyObject *__pyx_empty_tuple; -static PyObject *__pyx_empty_bytes; -static int __pyx_lineno; -static int __pyx_clineno = 0; -static const char * __pyx_cfilenm= __FILE__; -static const char *__pyx_filename; -static const char **__pyx_f; - - -/* Type declarations */ - -typedef double (*__pyx_t_6mtrand_rk_cont0)(rk_state *); - -typedef double (*__pyx_t_6mtrand_rk_cont1)(rk_state *, double); - -typedef double (*__pyx_t_6mtrand_rk_cont2)(rk_state *, double, double); - -typedef double (*__pyx_t_6mtrand_rk_cont3)(rk_state *, double, double, double); - -typedef long (*__pyx_t_6mtrand_rk_disc0)(rk_state *); - -typedef long (*__pyx_t_6mtrand_rk_discnp)(rk_state *, long, double); - -typedef long (*__pyx_t_6mtrand_rk_discdd)(rk_state *, double, double); - -typedef long (*__pyx_t_6mtrand_rk_discnmN)(rk_state *, long, long, long); - -typedef long (*__pyx_t_6mtrand_rk_discd)(rk_state *, double); - -/* "mtrand.pyx":522 - * return sum - * - * cdef class RandomState: # <<<<<<<<<<<<<< - * """ - * RandomState(seed=None) - */ - -struct __pyx_obj_6mtrand_RandomState { - PyObject_HEAD - rk_state *internal_state; -}; - -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif - -#if 
CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, int); - void (*DECREF)(void*, PyObject*, int); - void (*GOTREF)(void*, PyObject*, int); - void (*GIVEREF)(void*, PyObject*, int); - void* (*SetupContext)(const char*, int, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct * __Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule((char *)modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, (char *)"RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); - end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; - } - #define __Pyx_RefNannySetupContext(name) void *__pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) - #define __Pyx_RefNannyFinishContext() __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r);} } while(0) -#else - #define __Pyx_RefNannySetupContext(name) - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define __Pyx_GIVEREF(r) - #define __Pyx_XDECREF(r) Py_XDECREF(r) -#endif /* CYTHON_REFNANNY */ -#define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);} } while(0) -#define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r);} } while(0) - -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, PyObject* kw_name); /*proto*/ - -static void 
__Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); /*proto*/ - -static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[], PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args, const char* function_name); /*proto*/ - - -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { - PyObject *r; - if (!j) return NULL; - r = PyObject_GetItem(o, j); - Py_DECREF(j); - return r; -} - - -#define __Pyx_GetItemInt_List(o, i, size, to_py_func) ((size <= sizeof(Py_ssize_t)) ? \ - __Pyx_GetItemInt_List_Fast(o, i, size <= sizeof(long)) : \ - __Pyx_GetItemInt_Generic(o, to_py_func(i))) - -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, int fits_long) { - if (likely(o != Py_None)) { - if (likely((0 <= i) & (i < PyList_GET_SIZE(o)))) { - PyObject *r = PyList_GET_ITEM(o, i); - Py_INCREF(r); - return r; - } - else if ((-PyList_GET_SIZE(o) <= i) & (i < 0)) { - PyObject *r = PyList_GET_ITEM(o, PyList_GET_SIZE(o) + i); - Py_INCREF(r); - return r; - } - } - return __Pyx_GetItemInt_Generic(o, fits_long ? PyInt_FromLong(i) : PyLong_FromLongLong(i)); -} - -#define __Pyx_GetItemInt_Tuple(o, i, size, to_py_func) ((size <= sizeof(Py_ssize_t)) ? \ - __Pyx_GetItemInt_Tuple_Fast(o, i, size <= sizeof(long)) : \ - __Pyx_GetItemInt_Generic(o, to_py_func(i))) - -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, int fits_long) { - if (likely(o != Py_None)) { - if (likely((0 <= i) & (i < PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, i); - Py_INCREF(r); - return r; - } - else if ((-PyTuple_GET_SIZE(o) <= i) & (i < 0)) { - PyObject *r = PyTuple_GET_ITEM(o, PyTuple_GET_SIZE(o) + i); - Py_INCREF(r); - return r; - } - } - return __Pyx_GetItemInt_Generic(o, fits_long ? 
PyInt_FromLong(i) : PyLong_FromLongLong(i)); -} - - -#define __Pyx_GetItemInt(o, i, size, to_py_func) ((size <= sizeof(Py_ssize_t)) ? \ - __Pyx_GetItemInt_Fast(o, i, size <= sizeof(long)) : \ - __Pyx_GetItemInt_Generic(o, to_py_func(i))) - -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int fits_long) { - PyObject *r; - if (PyList_CheckExact(o) && ((0 <= i) & (i < PyList_GET_SIZE(o)))) { - r = PyList_GET_ITEM(o, i); - Py_INCREF(r); - } - else if (PyTuple_CheckExact(o) && ((0 <= i) & (i < PyTuple_GET_SIZE(o)))) { - r = PyTuple_GET_ITEM(o, i); - Py_INCREF(r); - } - else if (Py_TYPE(o)->tp_as_sequence && Py_TYPE(o)->tp_as_sequence->sq_item && (likely(i >= 0))) { - r = PySequence_GetItem(o, i); - } - else { - r = __Pyx_GetItemInt_Generic(o, fits_long ? PyInt_FromLong(i) : PyLong_FromLongLong(i)); - } - return r; -} - -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(void); - -static PyObject *__Pyx_UnpackItem(PyObject *, Py_ssize_t index); /*proto*/ -static int __Pyx_EndUnpack(PyObject *); /*proto*/ - -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); /*proto*/ - -static CYTHON_INLINE int __Pyx_CheckKeywordStrings(PyObject *kwdict, - const char* function_name, int kw_allowed); /*proto*/ - -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); /*proto*/ - -static CYTHON_INLINE PyObject* __Pyx_PyObject_Append(PyObject* L, PyObject* x) { - if (likely(PyList_CheckExact(L))) { - if (PyList_Append(L, x) < 0) return NULL; - Py_INCREF(Py_None); - return Py_None; /* this is just to have an accurate signature */ - } - else { - PyObject *r, *m; - m = __Pyx_GetAttrString(L, "append"); - if (!m) return NULL; - r = PyObject_CallFunctionObjArgs(m, x, NULL); - Py_DECREF(m); - return r; - } -} - -#define __Pyx_SetItemInt(o, i, v, size, to_py_func) ((size <= sizeof(Py_ssize_t)) ? 
\ - __Pyx_SetItemInt_Fast(o, i, v, size <= sizeof(long)) : \ - __Pyx_SetItemInt_Generic(o, to_py_func(i), v)) - -static CYTHON_INLINE int __Pyx_SetItemInt_Generic(PyObject *o, PyObject *j, PyObject *v) { - int r; - if (!j) return -1; - r = PyObject_SetItem(o, j, v); - Py_DECREF(j); - return r; -} - -static CYTHON_INLINE int __Pyx_SetItemInt_Fast(PyObject *o, Py_ssize_t i, PyObject *v, int fits_long) { - if (PyList_CheckExact(o) && ((0 <= i) & (i < PyList_GET_SIZE(o)))) { - Py_INCREF(v); - Py_DECREF(PyList_GET_ITEM(o, i)); - PyList_SET_ITEM(o, i, v); - return 1; - } - else if (Py_TYPE(o)->tp_as_sequence && Py_TYPE(o)->tp_as_sequence->sq_ass_item && (likely(i >= 0))) - return PySequence_SetItem(o, i, v); - else { - PyObject *j = fits_long ? PyInt_FromLong(i) : PyLong_FromLongLong(i); - return __Pyx_SetItemInt_Generic(o, j, v); - } -} - -static CYTHON_INLINE void __Pyx_ExceptionSave(PyObject **type, PyObject **value, PyObject **tb); /*proto*/ -static void __Pyx_ExceptionReset(PyObject *type, PyObject *value, PyObject *tb); /*proto*/ - -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list); /*proto*/ - -static PyObject *__Pyx_GetName(PyObject *dict, PyObject *name); /*proto*/ - -static CYTHON_INLINE void __Pyx_ErrRestore(PyObject *type, PyObject *value, PyObject *tb); /*proto*/ -static CYTHON_INLINE void __Pyx_ErrFetch(PyObject **type, PyObject **value, PyObject **tb); /*proto*/ - -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb); /*proto*/ - -static CYTHON_INLINE unsigned char __Pyx_PyInt_AsUnsignedChar(PyObject *); - -static CYTHON_INLINE unsigned short __Pyx_PyInt_AsUnsignedShort(PyObject *); - -static CYTHON_INLINE unsigned int __Pyx_PyInt_AsUnsignedInt(PyObject *); - -static CYTHON_INLINE char __Pyx_PyInt_AsChar(PyObject *); - -static CYTHON_INLINE short __Pyx_PyInt_AsShort(PyObject *); - -static CYTHON_INLINE int __Pyx_PyInt_AsInt(PyObject *); - -static CYTHON_INLINE signed char __Pyx_PyInt_AsSignedChar(PyObject *); - -static 
CYTHON_INLINE signed short __Pyx_PyInt_AsSignedShort(PyObject *); - -static CYTHON_INLINE signed int __Pyx_PyInt_AsSignedInt(PyObject *); - -static CYTHON_INLINE unsigned long __Pyx_PyInt_AsUnsignedLong(PyObject *); - -static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_PyInt_AsUnsignedLongLong(PyObject *); - -static CYTHON_INLINE long __Pyx_PyInt_AsLong(PyObject *); - -static CYTHON_INLINE PY_LONG_LONG __Pyx_PyInt_AsLongLong(PyObject *); - -static CYTHON_INLINE signed long __Pyx_PyInt_AsSignedLong(PyObject *); - -static CYTHON_INLINE signed PY_LONG_LONG __Pyx_PyInt_AsSignedLongLong(PyObject *); - -static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, long size, int strict); /*proto*/ - -static PyObject *__Pyx_ImportModule(const char *name); /*proto*/ - -static void __Pyx_AddTraceback(const char *funcname); /*proto*/ - -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); /*proto*/ -/* Module declarations from numpy */ - -/* Module declarations from mtrand */ - -static PyTypeObject *__pyx_ptype_6mtrand_dtype = 0; -static PyTypeObject *__pyx_ptype_6mtrand_ndarray = 0; -static PyTypeObject *__pyx_ptype_6mtrand_flatiter = 0; -static PyTypeObject *__pyx_ptype_6mtrand_broadcast = 0; -static PyTypeObject *__pyx_ptype_6mtrand_RandomState = 0; -static PyObject *__pyx_f_6mtrand_cont0_array(rk_state *, __pyx_t_6mtrand_rk_cont0, PyObject *); /*proto*/ -static PyObject *__pyx_f_6mtrand_cont1_array_sc(rk_state *, __pyx_t_6mtrand_rk_cont1, PyObject *, double); /*proto*/ -static PyObject *__pyx_f_6mtrand_cont1_array(rk_state *, __pyx_t_6mtrand_rk_cont1, PyObject *, PyArrayObject *); /*proto*/ -static PyObject *__pyx_f_6mtrand_cont2_array_sc(rk_state *, __pyx_t_6mtrand_rk_cont2, PyObject *, double, double); /*proto*/ -static PyObject *__pyx_f_6mtrand_cont2_array(rk_state *, __pyx_t_6mtrand_rk_cont2, PyObject *, PyArrayObject *, PyArrayObject *); /*proto*/ -static PyObject *__pyx_f_6mtrand_cont3_array_sc(rk_state *, __pyx_t_6mtrand_rk_cont3, PyObject 
*, double, double, double); /*proto*/ -static PyObject *__pyx_f_6mtrand_cont3_array(rk_state *, __pyx_t_6mtrand_rk_cont3, PyObject *, PyArrayObject *, PyArrayObject *, PyArrayObject *); /*proto*/ -static PyObject *__pyx_f_6mtrand_disc0_array(rk_state *, __pyx_t_6mtrand_rk_disc0, PyObject *); /*proto*/ -static PyObject *__pyx_f_6mtrand_discnp_array_sc(rk_state *, __pyx_t_6mtrand_rk_discnp, PyObject *, long, double); /*proto*/ -static PyObject *__pyx_f_6mtrand_discnp_array(rk_state *, __pyx_t_6mtrand_rk_discnp, PyObject *, PyArrayObject *, PyArrayObject *); /*proto*/ -static PyObject *__pyx_f_6mtrand_discdd_array_sc(rk_state *, __pyx_t_6mtrand_rk_discdd, PyObject *, double, double); /*proto*/ -static PyObject *__pyx_f_6mtrand_discdd_array(rk_state *, __pyx_t_6mtrand_rk_discdd, PyObject *, PyArrayObject *, PyArrayObject *); /*proto*/ -static PyObject *__pyx_f_6mtrand_discnmN_array_sc(rk_state *, __pyx_t_6mtrand_rk_discnmN, PyObject *, long, long, long); /*proto*/ -static PyObject *__pyx_f_6mtrand_discnmN_array(rk_state *, __pyx_t_6mtrand_rk_discnmN, PyObject *, PyArrayObject *, PyArrayObject *, PyArrayObject *); /*proto*/ -static PyObject *__pyx_f_6mtrand_discd_array_sc(rk_state *, __pyx_t_6mtrand_rk_discd, PyObject *, double); /*proto*/ -static PyObject *__pyx_f_6mtrand_discd_array(rk_state *, __pyx_t_6mtrand_rk_discd, PyObject *, PyArrayObject *); /*proto*/ -static double __pyx_f_6mtrand_kahan_sum(double *, long); /*proto*/ -#define __Pyx_MODULE_NAME "mtrand" -int __pyx_module_is_main_mtrand = 0; - -/* Implementation of mtrand */ -static PyObject *__pyx_builtin_ValueError; -static PyObject *__pyx_builtin_TypeError; -static char __pyx_k_1[] = "size is not compatible with inputs"; -static char __pyx_k_2[] = "algorithm must be 'MT19937'"; -static char __pyx_k_3[] = "state must be 624 longs"; -static char __pyx_k_4[] = "low >= high"; -static char __pyx_k_9[] = "scale <= 0"; -static char __pyx_k_10[] = "a <= 0"; -static char __pyx_k_11[] = "b <= 0"; -static char 
__pyx_k_13[] = "shape <= 0"; -static char __pyx_k_15[] = "dfnum <= 0"; -static char __pyx_k_16[] = "dfden <= 0"; -static char __pyx_k_17[] = "dfnum <= 1"; -static char __pyx_k_18[] = "nonc < 0"; -static char __pyx_k_19[] = "df <= 0"; -static char __pyx_k_20[] = "nonc <= 0"; -static char __pyx_k_21[] = "df <= 1"; -static char __pyx_k_22[] = "kappa < 0"; -static char __pyx_k_31[] = "sigma <= 0"; -static char __pyx_k_32[] = "sigma <= 0.0"; -static char __pyx_k_34[] = "scale <= 0.0"; -static char __pyx_k_35[] = "mean <= 0"; -static char __pyx_k_36[] = "mean <= 0.0"; -static char __pyx_k_37[] = "left > mode"; -static char __pyx_k_38[] = "mode > right"; -static char __pyx_k_39[] = "left == right"; -static char __pyx_k_40[] = "n <= 0"; -static char __pyx_k_41[] = "p < 0"; -static char __pyx_k_42[] = "p > 1"; -static char __pyx_k_44[] = "lam < 0"; -static char __pyx_k_45[] = "a <= 1.0"; -static char __pyx_k_46[] = "p < 0.0"; -static char __pyx_k_47[] = "p > 1.0"; -static char __pyx_k_48[] = "ngood < 1"; -static char __pyx_k_49[] = "nbad < 1"; -static char __pyx_k_50[] = "nsample < 1"; -static char __pyx_k_51[] = "ngood + nbad < nsample"; -static char __pyx_k_52[] = "p <= 0.0"; -static char __pyx_k_53[] = "p >= 1.0"; -static char __pyx_k_54[] = "mean must be 1 dimensional"; -static char __pyx_k_55[] = "cov must be 2 dimensional and square"; -static char __pyx_k_56[] = "mean and cov must have same length"; -static char __pyx_k_57[] = "numpy.dual"; -static char __pyx_k_58[] = "sum(pvals[:-1]) > 1.0"; -static char __pyx_k_59[] = "standard_exponential"; -static char __pyx_k_60[] = "noncentral_chisquare"; -static char __pyx_k_61[] = "RandomState.seed (line 567)"; -static char __pyx_k_62[] = "RandomState.get_state (line 600)"; -static char __pyx_k_63[] = "RandomState.set_state (line 637)"; -static char __pyx_k_64[] = "RandomState.random_sample (line 718)"; -static char __pyx_k_65[] = "RandomState.tomaxint (line 761)"; -static char __pyx_k_66[] = "RandomState.randint (line 789)"; 
-static char __pyx_k_67[] = "RandomState.bytes (line 866)"; -static char __pyx_k_68[] = "RandomState.uniform (line 893)"; -static char __pyx_k_69[] = "RandomState.rand (line 981)"; -static char __pyx_k_70[] = "RandomState.randn (line 1024)"; -static char __pyx_k_71[] = "RandomState.random_integers (line 1080)"; -static char __pyx_k_72[] = "RandomState.standard_normal (line 1158)"; -static char __pyx_k_73[] = "RandomState.normal (line 1190)"; -static char __pyx_k_74[] = "RandomState.beta (line 1290)"; -static char __pyx_k_75[] = "RandomState.exponential (line 1349)"; -static char __pyx_k_76[] = "RandomState.standard_exponential (line 1403)"; -static char __pyx_k_77[] = "RandomState.standard_gamma (line 1431)"; -static char __pyx_k_78[] = "RandomState.gamma (line 1513)"; -static char __pyx_k_79[] = "RandomState.f (line 1604)"; -static char __pyx_k_80[] = "RandomState.noncentral_f (line 1707)"; -static char __pyx_k_81[] = "RandomState.chisquare (line 1802)"; -static char __pyx_k_82[] = "RandomState.noncentral_chisquare (line 1882)"; -static char __pyx_k_83[] = "RandomState.standard_cauchy (line 1974)"; -static char __pyx_k_84[] = "RandomState.standard_t (line 2035)"; -static char __pyx_k_85[] = "RandomState.vonmises (line 2136)"; -static char __pyx_k_86[] = "RandomState.pareto (line 2231)"; -static char __pyx_k_87[] = "RandomState.weibull (line 2320)"; -static char __pyx_k_88[] = "RandomState.power (line 2420)"; -static char __pyx_k_89[] = "RandomState.laplace (line 2529)"; -static char __pyx_k_90[] = "RandomState.gumbel (line 2619)"; -static char __pyx_k_91[] = "RandomState.logistic (line 2743)"; -static char __pyx_k_92[] = "RandomState.lognormal (line 2831)"; -static char __pyx_k_93[] = "RandomState.rayleigh (line 2962)"; -static char __pyx_k_94[] = "RandomState.wald (line 3034)"; -static char __pyx_k_95[] = "RandomState.triangular (line 3120)"; -static char __pyx_k_96[] = "RandomState.binomial (line 3208)"; -static char __pyx_k_97[] = "RandomState.negative_binomial 
(line 3316)"; -static char __pyx_k_98[] = "RandomState.poisson (line 3411)"; -static char __pyx_k_99[] = "RandomState.zipf (line 3474)"; -static char __pyx_k__a[] = "a"; -static char __pyx_k__b[] = "b"; -static char __pyx_k__f[] = "f"; -static char __pyx_k__n[] = "n"; -static char __pyx_k__p[] = "p"; -static char __pyx_k_100[] = "RandomState.geometric (line 3566)"; -static char __pyx_k_101[] = "RandomState.hypergeometric (line 3632)"; -static char __pyx_k_102[] = "RandomState.logseries (line 3751)"; -static char __pyx_k_103[] = "RandomState.multivariate_normal (line 3846)"; -static char __pyx_k_104[] = "RandomState.multinomial (line 3979)"; -static char __pyx_k_105[] = "RandomState.dirichlet (line 4072)"; -static char __pyx_k_106[] = "RandomState.shuffle (line 4166)"; -static char __pyx_k_107[] = "RandomState.permutation (line 4202)"; -static char __pyx_k__df[] = "df"; -static char __pyx_k__mu[] = "mu"; -static char __pyx_k__nd[] = "nd"; -static char __pyx_k__np[] = "np"; -static char __pyx_k__add[] = "add"; -static char __pyx_k__any[] = "any"; -static char __pyx_k__cov[] = "cov"; -static char __pyx_k__dot[] = "dot"; -static char __pyx_k__key[] = "key"; -static char __pyx_k__lam[] = "lam"; -static char __pyx_k__loc[] = "loc"; -static char __pyx_k__low[] = "low"; -static char __pyx_k__pos[] = "pos"; -static char __pyx_k__svd[] = "svd"; -static char __pyx_k__beta[] = "beta"; -static char __pyx_k__copy[] = "copy"; -static char __pyx_k__data[] = "data"; -static char __pyx_k__high[] = "high"; -static char __pyx_k__left[] = "left"; -static char __pyx_k__less[] = "less"; -static char __pyx_k__mean[] = "mean"; -static char __pyx_k__mode[] = "mode"; -static char __pyx_k__nbad[] = "nbad"; -static char __pyx_k__nonc[] = "nonc"; -static char __pyx_k__rand[] = "rand"; -static char __pyx_k__seed[] = "seed"; -static char __pyx_k__size[] = "size"; -static char __pyx_k__sqrt[] = "sqrt"; -static char __pyx_k__uint[] = "uint"; -static char __pyx_k__wald[] = "wald"; -static char 
__pyx_k__zipf[] = "zipf"; -static char __pyx_k___rand[] = "_rand"; -static char __pyx_k__alpha[] = "alpha"; -static char __pyx_k__array[] = "array"; -static char __pyx_k__bytes[] = "bytes"; -static char __pyx_k__dfden[] = "dfden"; -static char __pyx_k__dfnum[] = "dfnum"; -static char __pyx_k__empty[] = "empty"; -static char __pyx_k__equal[] = "equal"; -static char __pyx_k__gamma[] = "gamma"; -static char __pyx_k__gauss[] = "gauss"; -static char __pyx_k__kappa[] = "kappa"; -static char __pyx_k__ngood[] = "ngood"; -static char __pyx_k__numpy[] = "numpy"; -static char __pyx_k__power[] = "power"; -static char __pyx_k__pvals[] = "pvals"; -static char __pyx_k__randn[] = "randn"; -static char __pyx_k__right[] = "right"; -static char __pyx_k__scale[] = "scale"; -static char __pyx_k__shape[] = "shape"; -static char __pyx_k__sigma[] = "sigma"; -static char __pyx_k__zeros[] = "zeros"; -static char __pyx_k__arange[] = "arange"; -static char __pyx_k__gumbel[] = "gumbel"; -static char __pyx_k__normal[] = "normal"; -static char __pyx_k__pareto[] = "pareto"; -static char __pyx_k__random[] = "random"; -static char __pyx_k__reduce[] = "reduce"; -static char __pyx_k__uint32[] = "uint32"; -static char __pyx_k__MT19937[] = "MT19937"; -static char __pyx_k__asarray[] = "asarray"; -static char __pyx_k__dataptr[] = "dataptr"; -static char __pyx_k__float64[] = "float64"; -static char __pyx_k__greater[] = "greater"; -static char __pyx_k__integer[] = "integer"; -static char __pyx_k__laplace[] = "laplace"; -static char __pyx_k__nsample[] = "nsample"; -static char __pyx_k__poisson[] = "poisson"; -static char __pyx_k__randint[] = "randint"; -static char __pyx_k__shuffle[] = "shuffle"; -static char __pyx_k__uniform[] = "uniform"; -static char __pyx_k__weibull[] = "weibull"; -static char __pyx_k____main__[] = "__main__"; -static char __pyx_k____test__[] = "__test__"; -static char __pyx_k__binomial[] = "binomial"; -static char __pyx_k__logistic[] = "logistic"; -static char __pyx_k__multiply[] = 
"multiply"; -static char __pyx_k__rayleigh[] = "rayleigh"; -static char __pyx_k__subtract[] = "subtract"; -static char __pyx_k__tomaxint[] = "tomaxint"; -static char __pyx_k__vonmises[] = "vonmises"; -static char __pyx_k__TypeError[] = "TypeError"; -static char __pyx_k__chisquare[] = "chisquare"; -static char __pyx_k__dirichlet[] = "dirichlet"; -static char __pyx_k__geometric[] = "geometric"; -static char __pyx_k__get_state[] = "get_state"; -static char __pyx_k__has_gauss[] = "has_gauss"; -static char __pyx_k__lognormal[] = "lognormal"; -static char __pyx_k__logseries[] = "logseries"; -static char __pyx_k__set_state[] = "set_state"; -static char __pyx_k__ValueError[] = "ValueError"; -static char __pyx_k__dimensions[] = "dimensions"; -static char __pyx_k__less_equal[] = "less_equal"; -static char __pyx_k__standard_t[] = "standard_t"; -static char __pyx_k__triangular[] = "triangular"; -static char __pyx_k__RandomState[] = "RandomState"; -static char __pyx_k__exponential[] = "exponential"; -static char __pyx_k__multinomial[] = "multinomial"; -static char __pyx_k__permutation[] = "permutation"; -static char __pyx_k__noncentral_f[] = "noncentral_f"; -static char __pyx_k__greater_equal[] = "greater_equal"; -static char __pyx_k__random_sample[] = "random_sample"; -static char __pyx_k__hypergeometric[] = "hypergeometric"; -static char __pyx_k__internal_state[] = "internal_state"; -static char __pyx_k__standard_gamma[] = "standard_gamma"; -static char __pyx_k__random_integers[] = "random_integers"; -static char __pyx_k__standard_cauchy[] = "standard_cauchy"; -static char __pyx_k__standard_normal[] = "standard_normal"; -static char __pyx_k__negative_binomial[] = "negative_binomial"; -static char __pyx_k____RandomState_ctor[] = "__RandomState_ctor"; -static char __pyx_k__multivariate_normal[] = "multivariate_normal"; -static PyObject *__pyx_kp_s_1; -static PyObject *__pyx_kp_s_10; -static PyObject *__pyx_kp_u_100; -static PyObject *__pyx_kp_u_101; -static PyObject 
*__pyx_kp_u_102; -static PyObject *__pyx_kp_u_103; -static PyObject *__pyx_kp_u_104; -static PyObject *__pyx_kp_u_105; -static PyObject *__pyx_kp_u_106; -static PyObject *__pyx_kp_u_107; -static PyObject *__pyx_kp_s_11; -static PyObject *__pyx_kp_s_13; -static PyObject *__pyx_kp_s_15; -static PyObject *__pyx_kp_s_16; -static PyObject *__pyx_kp_s_17; -static PyObject *__pyx_kp_s_18; -static PyObject *__pyx_kp_s_19; -static PyObject *__pyx_kp_s_2; -static PyObject *__pyx_kp_s_20; -static PyObject *__pyx_kp_s_21; -static PyObject *__pyx_kp_s_22; -static PyObject *__pyx_kp_s_3; -static PyObject *__pyx_kp_s_31; -static PyObject *__pyx_kp_s_32; -static PyObject *__pyx_kp_s_34; -static PyObject *__pyx_kp_s_35; -static PyObject *__pyx_kp_s_36; -static PyObject *__pyx_kp_s_37; -static PyObject *__pyx_kp_s_38; -static PyObject *__pyx_kp_s_39; -static PyObject *__pyx_kp_s_4; -static PyObject *__pyx_kp_s_40; -static PyObject *__pyx_kp_s_41; -static PyObject *__pyx_kp_s_42; -static PyObject *__pyx_kp_s_44; -static PyObject *__pyx_kp_s_45; -static PyObject *__pyx_kp_s_46; -static PyObject *__pyx_kp_s_47; -static PyObject *__pyx_kp_s_48; -static PyObject *__pyx_kp_s_49; -static PyObject *__pyx_kp_s_50; -static PyObject *__pyx_kp_s_51; -static PyObject *__pyx_kp_s_52; -static PyObject *__pyx_kp_s_53; -static PyObject *__pyx_kp_s_54; -static PyObject *__pyx_kp_s_55; -static PyObject *__pyx_kp_s_56; -static PyObject *__pyx_n_s_57; -static PyObject *__pyx_kp_s_58; -static PyObject *__pyx_n_s_59; -static PyObject *__pyx_n_s_60; -static PyObject *__pyx_kp_u_61; -static PyObject *__pyx_kp_u_62; -static PyObject *__pyx_kp_u_63; -static PyObject *__pyx_kp_u_64; -static PyObject *__pyx_kp_u_65; -static PyObject *__pyx_kp_u_66; -static PyObject *__pyx_kp_u_67; -static PyObject *__pyx_kp_u_68; -static PyObject *__pyx_kp_u_69; -static PyObject *__pyx_kp_u_70; -static PyObject *__pyx_kp_u_71; -static PyObject *__pyx_kp_u_72; -static PyObject *__pyx_kp_u_73; -static PyObject *__pyx_kp_u_74; 
-static PyObject *__pyx_kp_u_75; -static PyObject *__pyx_kp_u_76; -static PyObject *__pyx_kp_u_77; -static PyObject *__pyx_kp_u_78; -static PyObject *__pyx_kp_u_79; -static PyObject *__pyx_kp_u_80; -static PyObject *__pyx_kp_u_81; -static PyObject *__pyx_kp_u_82; -static PyObject *__pyx_kp_u_83; -static PyObject *__pyx_kp_u_84; -static PyObject *__pyx_kp_u_85; -static PyObject *__pyx_kp_u_86; -static PyObject *__pyx_kp_u_87; -static PyObject *__pyx_kp_u_88; -static PyObject *__pyx_kp_u_89; -static PyObject *__pyx_kp_s_9; -static PyObject *__pyx_kp_u_90; -static PyObject *__pyx_kp_u_91; -static PyObject *__pyx_kp_u_92; -static PyObject *__pyx_kp_u_93; -static PyObject *__pyx_kp_u_94; -static PyObject *__pyx_kp_u_95; -static PyObject *__pyx_kp_u_96; -static PyObject *__pyx_kp_u_97; -static PyObject *__pyx_kp_u_98; -static PyObject *__pyx_kp_u_99; -static PyObject *__pyx_n_s__MT19937; -static PyObject *__pyx_n_s__RandomState; -static PyObject *__pyx_n_s__TypeError; -static PyObject *__pyx_n_s__ValueError; -static PyObject *__pyx_n_s____RandomState_ctor; -static PyObject *__pyx_n_s____main__; -static PyObject *__pyx_n_s____test__; -static PyObject *__pyx_n_s___rand; -static PyObject *__pyx_n_s__a; -static PyObject *__pyx_n_s__add; -static PyObject *__pyx_n_s__alpha; -static PyObject *__pyx_n_s__any; -static PyObject *__pyx_n_s__arange; -static PyObject *__pyx_n_s__array; -static PyObject *__pyx_n_s__asarray; -static PyObject *__pyx_n_s__b; -static PyObject *__pyx_n_s__beta; -static PyObject *__pyx_n_s__binomial; -static PyObject *__pyx_n_s__bytes; -static PyObject *__pyx_n_s__chisquare; -static PyObject *__pyx_n_s__copy; -static PyObject *__pyx_n_s__cov; -static PyObject *__pyx_n_s__data; -static PyObject *__pyx_n_s__dataptr; -static PyObject *__pyx_n_s__df; -static PyObject *__pyx_n_s__dfden; -static PyObject *__pyx_n_s__dfnum; -static PyObject *__pyx_n_s__dimensions; -static PyObject *__pyx_n_s__dirichlet; -static PyObject *__pyx_n_s__dot; -static PyObject 
*__pyx_n_s__empty; -static PyObject *__pyx_n_s__equal; -static PyObject *__pyx_n_s__exponential; -static PyObject *__pyx_n_s__f; -static PyObject *__pyx_n_s__float64; -static PyObject *__pyx_n_s__gamma; -static PyObject *__pyx_n_s__gauss; -static PyObject *__pyx_n_s__geometric; -static PyObject *__pyx_n_s__get_state; -static PyObject *__pyx_n_s__greater; -static PyObject *__pyx_n_s__greater_equal; -static PyObject *__pyx_n_s__gumbel; -static PyObject *__pyx_n_s__has_gauss; -static PyObject *__pyx_n_s__high; -static PyObject *__pyx_n_s__hypergeometric; -static PyObject *__pyx_n_s__integer; -static PyObject *__pyx_n_s__internal_state; -static PyObject *__pyx_n_s__kappa; -static PyObject *__pyx_n_s__key; -static PyObject *__pyx_n_s__lam; -static PyObject *__pyx_n_s__laplace; -static PyObject *__pyx_n_s__left; -static PyObject *__pyx_n_s__less; -static PyObject *__pyx_n_s__less_equal; -static PyObject *__pyx_n_s__loc; -static PyObject *__pyx_n_s__logistic; -static PyObject *__pyx_n_s__lognormal; -static PyObject *__pyx_n_s__logseries; -static PyObject *__pyx_n_s__low; -static PyObject *__pyx_n_s__mean; -static PyObject *__pyx_n_s__mode; -static PyObject *__pyx_n_s__mu; -static PyObject *__pyx_n_s__multinomial; -static PyObject *__pyx_n_s__multiply; -static PyObject *__pyx_n_s__multivariate_normal; -static PyObject *__pyx_n_s__n; -static PyObject *__pyx_n_s__nbad; -static PyObject *__pyx_n_s__nd; -static PyObject *__pyx_n_s__negative_binomial; -static PyObject *__pyx_n_s__ngood; -static PyObject *__pyx_n_s__nonc; -static PyObject *__pyx_n_s__noncentral_f; -static PyObject *__pyx_n_s__normal; -static PyObject *__pyx_n_s__np; -static PyObject *__pyx_n_s__nsample; -static PyObject *__pyx_n_s__numpy; -static PyObject *__pyx_n_s__p; -static PyObject *__pyx_n_s__pareto; -static PyObject *__pyx_n_s__permutation; -static PyObject *__pyx_n_s__poisson; -static PyObject *__pyx_n_s__pos; -static PyObject *__pyx_n_s__power; -static PyObject *__pyx_n_s__pvals; -static PyObject 
*__pyx_n_s__rand; -static PyObject *__pyx_n_s__randint; -static PyObject *__pyx_n_s__randn; -static PyObject *__pyx_n_s__random; -static PyObject *__pyx_n_s__random_integers; -static PyObject *__pyx_n_s__random_sample; -static PyObject *__pyx_n_s__rayleigh; -static PyObject *__pyx_n_s__reduce; -static PyObject *__pyx_n_s__right; -static PyObject *__pyx_n_s__scale; -static PyObject *__pyx_n_s__seed; -static PyObject *__pyx_n_s__set_state; -static PyObject *__pyx_n_s__shape; -static PyObject *__pyx_n_s__shuffle; -static PyObject *__pyx_n_s__sigma; -static PyObject *__pyx_n_s__size; -static PyObject *__pyx_n_s__sqrt; -static PyObject *__pyx_n_s__standard_cauchy; -static PyObject *__pyx_n_s__standard_gamma; -static PyObject *__pyx_n_s__standard_normal; -static PyObject *__pyx_n_s__standard_t; -static PyObject *__pyx_n_s__subtract; -static PyObject *__pyx_n_s__svd; -static PyObject *__pyx_n_s__tomaxint; -static PyObject *__pyx_n_s__triangular; -static PyObject *__pyx_n_s__uint; -static PyObject *__pyx_n_s__uint32; -static PyObject *__pyx_n_s__uniform; -static PyObject *__pyx_n_s__vonmises; -static PyObject *__pyx_n_s__wald; -static PyObject *__pyx_n_s__weibull; -static PyObject *__pyx_n_s__zeros; -static PyObject *__pyx_n_s__zipf; -static PyObject *__pyx_int_0; -static PyObject *__pyx_int_1; -static PyObject *__pyx_int_624; -static PyObject *__pyx_k_5; -static PyObject *__pyx_k_6; -static PyObject *__pyx_k_7; -static PyObject *__pyx_k_8; -static PyObject *__pyx_k_12; -static PyObject *__pyx_k_14; -static PyObject *__pyx_k_23; -static PyObject *__pyx_k_24; -static PyObject *__pyx_k_25; -static PyObject *__pyx_k_26; -static PyObject *__pyx_k_27; -static PyObject *__pyx_k_28; -static PyObject *__pyx_k_29; -static PyObject *__pyx_k_30; -static PyObject *__pyx_k_33; -static PyObject *__pyx_k_43; - -/* "mtrand.pyx":128 - * import numpy as np - * - * cdef object cont0_array(rk_state *state, rk_cont0 func, object size): # <<<<<<<<<<<<<< - * cdef double *array_data - * cdef 
ndarray array "arrayObject" - */ - -static PyObject *__pyx_f_6mtrand_cont0_array(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_cont0 __pyx_v_func, PyObject *__pyx_v_size) { - double *__pyx_v_array_data; - PyArrayObject *arrayObject; - long __pyx_v_length; - long __pyx_v_i; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - long __pyx_t_5; - __Pyx_RefNannySetupContext("cont0_array"); - __Pyx_INCREF(__pyx_v_size); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":134 - * cdef long i - * - * if size is None: # <<<<<<<<<<<<<< - * return func(state) - * else: - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":135 - * - * if size is None: - * return func(state) # <<<<<<<<<<<<<< - * else: - * array = np.empty(size, np.float64) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyFloat_FromDouble(__pyx_v_func(__pyx_v_state)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 135; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L3; - } - /*else*/ { - - /* "mtrand.pyx":137 - * return func(state) - * else: - * array = np.empty(size, np.float64) # <<<<<<<<<<<<<< - * length = PyArray_SIZE(array) - * array_data = array.data - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 137; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 137; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 137; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__float64); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 137; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 137; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 137; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_4))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_4); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "mtrand.pyx":138 - * else: - * array = np.empty(size, np.float64) - * length = PyArray_SIZE(array) # <<<<<<<<<<<<<< - * array_data = array.data - * for i from 0 <= i < length: - */ - __pyx_v_length = PyArray_SIZE(arrayObject); - - /* "mtrand.pyx":139 - * array = np.empty(size, np.float64) - * length = PyArray_SIZE(array) - * array_data = array.data # <<<<<<<<<<<<<< - * for i from 0 <= i < length: - * array_data[i] = func(state) - */ - __pyx_v_array_data = ((double *)arrayObject->data); - - /* "mtrand.pyx":140 - * length = PyArray_SIZE(array) - * array_data = array.data - * for i from 0 <= i < length: # <<<<<<<<<<<<<< - * array_data[i] = func(state) - * return array - */ - __pyx_t_5 = __pyx_v_length; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_5; __pyx_v_i++) { - - /* "mtrand.pyx":141 - * 
array_data = array.data - * for i from 0 <= i < length: - * array_data[i] = func(state) # <<<<<<<<<<<<<< - * return array - * - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state); - } - - /* "mtrand.pyx":142 - * for i from 0 <= i < length: - * array_data[i] = func(state) - * return array # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - } - __pyx_L3:; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("mtrand.cont0_array"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":145 - * - * - * cdef object cont1_array_sc(rk_state *state, rk_cont1 func, object size, double a): # <<<<<<<<<<<<<< - * cdef double *array_data - * cdef ndarray array "arrayObject" - */ - -static PyObject *__pyx_f_6mtrand_cont1_array_sc(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_cont1 __pyx_v_func, PyObject *__pyx_v_size, double __pyx_v_a) { - double *__pyx_v_array_data; - PyArrayObject *arrayObject; - long __pyx_v_length; - long __pyx_v_i; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - long __pyx_t_5; - __Pyx_RefNannySetupContext("cont1_array_sc"); - __Pyx_INCREF(__pyx_v_size); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":151 - * cdef long i - * - * if size is None: # <<<<<<<<<<<<<< - * return func(state, a) - * else: - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":152 - * - * if size is None: - * return func(state, a) # <<<<<<<<<<<<<< - * else: - * array = np.empty(size, np.float64) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = 
PyFloat_FromDouble(__pyx_v_func(__pyx_v_state, __pyx_v_a)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 152; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L3; - } - /*else*/ { - - /* "mtrand.pyx":154 - * return func(state, a) - * else: - * array = np.empty(size, np.float64) # <<<<<<<<<<<<<< - * length = PyArray_SIZE(array) - * array_data = array.data - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 154; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 154; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 154; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__float64); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 154; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 154; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 154; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - 
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_4))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_4); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "mtrand.pyx":155 - * else: - * array = np.empty(size, np.float64) - * length = PyArray_SIZE(array) # <<<<<<<<<<<<<< - * array_data = array.data - * for i from 0 <= i < length: - */ - __pyx_v_length = PyArray_SIZE(arrayObject); - - /* "mtrand.pyx":156 - * array = np.empty(size, np.float64) - * length = PyArray_SIZE(array) - * array_data = array.data # <<<<<<<<<<<<<< - * for i from 0 <= i < length: - * array_data[i] = func(state, a) - */ - __pyx_v_array_data = ((double *)arrayObject->data); - - /* "mtrand.pyx":157 - * length = PyArray_SIZE(array) - * array_data = array.data - * for i from 0 <= i < length: # <<<<<<<<<<<<<< - * array_data[i] = func(state, a) - * return array - */ - __pyx_t_5 = __pyx_v_length; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_5; __pyx_v_i++) { - - /* "mtrand.pyx":158 - * array_data = array.data - * for i from 0 <= i < length: - * array_data[i] = func(state, a) # <<<<<<<<<<<<<< - * return array - * - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, __pyx_v_a); - } - - /* "mtrand.pyx":159 - * for i from 0 <= i < length: - * array_data[i] = func(state, a) - * return array # <<<<<<<<<<<<<< - * - * cdef object cont1_array(rk_state *state, rk_cont1 func, object size, ndarray oa): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - } - __pyx_L3:; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("mtrand.cont1_array_sc"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - 
__Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":161 - * return array - * - * cdef object cont1_array(rk_state *state, rk_cont1 func, object size, ndarray oa): # <<<<<<<<<<<<<< - * cdef double *array_data - * cdef double *oa_data - */ - -static PyObject *__pyx_f_6mtrand_cont1_array(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_cont1 __pyx_v_func, PyObject *__pyx_v_size, PyArrayObject *__pyx_v_oa) { - double *__pyx_v_array_data; - double *__pyx_v_oa_data; - PyArrayObject *arrayObject; - npy_intp __pyx_v_length; - npy_intp __pyx_v_i; - PyArrayIterObject *__pyx_v_itera; - PyArrayMultiIterObject *__pyx_v_multi; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - npy_intp __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - __Pyx_RefNannySetupContext("cont1_array"); - __Pyx_INCREF(__pyx_v_size); - __Pyx_INCREF((PyObject *)__pyx_v_oa); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_itera = ((PyArrayIterObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_multi = ((PyArrayMultiIterObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":170 - * cdef broadcast multi - * - * if size is None: # <<<<<<<<<<<<<< - * array = PyArray_SimpleNew(oa.nd, oa.dimensions, NPY_DOUBLE) - * length = PyArray_SIZE(array) - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":171 - * - * if size is None: - * array = PyArray_SimpleNew(oa.nd, oa.dimensions, NPY_DOUBLE) # <<<<<<<<<<<<<< - * length = PyArray_SIZE(array) - * array_data = array.data - */ - __pyx_t_2 = PyArray_SimpleNew(__pyx_v_oa->nd, __pyx_v_oa->dimensions, NPY_DOUBLE); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 171; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 
0; - - /* "mtrand.pyx":172 - * if size is None: - * array = PyArray_SimpleNew(oa.nd, oa.dimensions, NPY_DOUBLE) - * length = PyArray_SIZE(array) # <<<<<<<<<<<<<< - * array_data = array.data - * itera = PyArray_IterNew(oa) - */ - __pyx_v_length = PyArray_SIZE(arrayObject); - - /* "mtrand.pyx":173 - * array = PyArray_SimpleNew(oa.nd, oa.dimensions, NPY_DOUBLE) - * length = PyArray_SIZE(array) - * array_data = array.data # <<<<<<<<<<<<<< - * itera = PyArray_IterNew(oa) - * for i from 0 <= i < length: - */ - __pyx_v_array_data = ((double *)arrayObject->data); - - /* "mtrand.pyx":174 - * length = PyArray_SIZE(array) - * array_data = array.data - * itera = PyArray_IterNew(oa) # <<<<<<<<<<<<<< - * for i from 0 <= i < length: - * array_data[i] = func(state, ((itera.dataptr))[0]) - */ - __pyx_t_2 = PyArray_IterNew(((PyObject *)__pyx_v_oa)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 174; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayIterObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_itera)); - __pyx_v_itera = ((PyArrayIterObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":175 - * array_data = array.data - * itera = PyArray_IterNew(oa) - * for i from 0 <= i < length: # <<<<<<<<<<<<<< - * array_data[i] = func(state, ((itera.dataptr))[0]) - * PyArray_ITER_NEXT(itera) - */ - __pyx_t_3 = __pyx_v_length; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_3; __pyx_v_i++) { - - /* "mtrand.pyx":176 - * itera = PyArray_IterNew(oa) - * for i from 0 <= i < length: - * array_data[i] = func(state, ((itera.dataptr))[0]) # <<<<<<<<<<<<<< - * PyArray_ITER_NEXT(itera) - * else: - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, (((double *)__pyx_v_itera->dataptr)[0])); - - /* "mtrand.pyx":177 - * for i from 0 <= i < length: - * array_data[i] = func(state, ((itera.dataptr))[0]) - * PyArray_ITER_NEXT(itera) # <<<<<<<<<<<<<< - * else: - * array = 
np.empty(size, np.float64) - */ - PyArray_ITER_NEXT(__pyx_v_itera); - } - goto __pyx_L3; - } - /*else*/ { - - /* "mtrand.pyx":179 - * PyArray_ITER_NEXT(itera) - * else: - * array = np.empty(size, np.float64) # <<<<<<<<<<<<<< - * array_data = array.data - * multi = PyArray_MultiIterNew(2, array, - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 179; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 179; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 179; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__float64); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 179; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 179; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 179; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_5))); - __Pyx_DECREF(((PyObject *)arrayObject)); - 
arrayObject = ((PyArrayObject *)__pyx_t_5); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "mtrand.pyx":180 - * else: - * array = np.empty(size, np.float64) - * array_data = array.data # <<<<<<<<<<<<<< - * multi = PyArray_MultiIterNew(2, array, - * oa) - */ - __pyx_v_array_data = ((double *)arrayObject->data); - - /* "mtrand.pyx":182 - * array_data = array.data - * multi = PyArray_MultiIterNew(2, array, - * oa) # <<<<<<<<<<<<<< - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") - */ - __pyx_t_5 = PyArray_MultiIterNew(2, ((void *)arrayObject), ((void *)__pyx_v_oa)); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 181; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)((PyArrayMultiIterObject *)__pyx_t_5))); - __Pyx_DECREF(((PyObject *)__pyx_v_multi)); - __pyx_v_multi = ((PyArrayMultiIterObject *)__pyx_t_5); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "mtrand.pyx":183 - * multi = PyArray_MultiIterNew(2, array, - * oa) - * if (multi.size != PyArray_SIZE(array)): # <<<<<<<<<<<<<< - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: - */ - __pyx_t_1 = (__pyx_v_multi->size != PyArray_SIZE(arrayObject)); - if (__pyx_t_1) { - - /* "mtrand.pyx":184 - * oa) - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") # <<<<<<<<<<<<<< - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 1) - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 184; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_1)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_1)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_1)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = 
__pyx_f[0]; __pyx_lineno = 184; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 184; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":185 - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: # <<<<<<<<<<<<<< - * oa_data = PyArray_MultiIter_DATA(multi, 1) - * array_data[i] = func(state, oa_data[0]) - */ - __pyx_t_3 = __pyx_v_multi->size; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_3; __pyx_v_i++) { - - /* "mtrand.pyx":186 - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 1) # <<<<<<<<<<<<<< - * array_data[i] = func(state, oa_data[0]) - * PyArray_MultiIter_NEXTi(multi, 1) - */ - __pyx_v_oa_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 1)); - - /* "mtrand.pyx":187 - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 1) - * array_data[i] = func(state, oa_data[0]) # <<<<<<<<<<<<<< - * PyArray_MultiIter_NEXTi(multi, 1) - * return array - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, (__pyx_v_oa_data[0])); - - /* "mtrand.pyx":188 - * oa_data = PyArray_MultiIter_DATA(multi, 1) - * array_data[i] = func(state, oa_data[0]) - * PyArray_MultiIter_NEXTi(multi, 1) # <<<<<<<<<<<<<< - * return array - * - */ - PyArray_MultiIter_NEXTi(__pyx_v_multi, 1); - } - } - __pyx_L3:; - - /* "mtrand.pyx":189 - * array_data[i] = func(state, oa_data[0]) - * PyArray_MultiIter_NEXTi(multi, 1) - * return array # <<<<<<<<<<<<<< - * - * cdef object cont2_array_sc(rk_state *state, rk_cont2 func, object size, double a, - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto 
__pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.cont1_array"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF((PyObject *)__pyx_v_itera); - __Pyx_DECREF((PyObject *)__pyx_v_multi); - __Pyx_DECREF(__pyx_v_size); - __Pyx_DECREF((PyObject *)__pyx_v_oa); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":191 - * return array - * - * cdef object cont2_array_sc(rk_state *state, rk_cont2 func, object size, double a, # <<<<<<<<<<<<<< - * double b): - * cdef double *array_data - */ - -static PyObject *__pyx_f_6mtrand_cont2_array_sc(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_cont2 __pyx_v_func, PyObject *__pyx_v_size, double __pyx_v_a, double __pyx_v_b) { - double *__pyx_v_array_data; - PyArrayObject *arrayObject; - long __pyx_v_length; - long __pyx_v_i; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - long __pyx_t_5; - __Pyx_RefNannySetupContext("cont2_array_sc"); - __Pyx_INCREF(__pyx_v_size); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":198 - * cdef long i - * - * if size is None: # <<<<<<<<<<<<<< - * return func(state, a, b) - * else: - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":199 - * - * if size is None: - * return func(state, a, b) # <<<<<<<<<<<<<< - * else: - * array = np.empty(size, np.float64) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyFloat_FromDouble(__pyx_v_func(__pyx_v_state, __pyx_v_a, __pyx_v_b)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L3; - } - /*else*/ { - - /* 
"mtrand.pyx":201 - * return func(state, a, b) - * else: - * array = np.empty(size, np.float64) # <<<<<<<<<<<<<< - * length = PyArray_SIZE(array) - * array_data = array.data - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 201; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 201; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 201; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__float64); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 201; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 201; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 201; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_4))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_4); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "mtrand.pyx":202 - * else: - * 
array = np.empty(size, np.float64) - * length = PyArray_SIZE(array) # <<<<<<<<<<<<<< - * array_data = array.data - * for i from 0 <= i < length: - */ - __pyx_v_length = PyArray_SIZE(arrayObject); - - /* "mtrand.pyx":203 - * array = np.empty(size, np.float64) - * length = PyArray_SIZE(array) - * array_data = array.data # <<<<<<<<<<<<<< - * for i from 0 <= i < length: - * array_data[i] = func(state, a, b) - */ - __pyx_v_array_data = ((double *)arrayObject->data); - - /* "mtrand.pyx":204 - * length = PyArray_SIZE(array) - * array_data = array.data - * for i from 0 <= i < length: # <<<<<<<<<<<<<< - * array_data[i] = func(state, a, b) - * return array - */ - __pyx_t_5 = __pyx_v_length; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_5; __pyx_v_i++) { - - /* "mtrand.pyx":205 - * array_data = array.data - * for i from 0 <= i < length: - * array_data[i] = func(state, a, b) # <<<<<<<<<<<<<< - * return array - * - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, __pyx_v_a, __pyx_v_b); - } - - /* "mtrand.pyx":206 - * for i from 0 <= i < length: - * array_data[i] = func(state, a, b) - * return array # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - } - __pyx_L3:; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("mtrand.cont2_array_sc"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":209 - * - * - * cdef object cont2_array(rk_state *state, rk_cont2 func, object size, # <<<<<<<<<<<<<< - * ndarray oa, ndarray ob): - * cdef double *array_data - */ - -static PyObject *__pyx_f_6mtrand_cont2_array(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_cont2 __pyx_v_func, PyObject 
*__pyx_v_size, PyArrayObject *__pyx_v_oa, PyArrayObject *__pyx_v_ob) { - double *__pyx_v_array_data; - double *__pyx_v_oa_data; - double *__pyx_v_ob_data; - PyArrayObject *arrayObject; - npy_intp __pyx_v_i; - PyArrayMultiIterObject *__pyx_v_multi; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - npy_intp __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - __Pyx_RefNannySetupContext("cont2_array"); - __Pyx_INCREF(__pyx_v_size); - __Pyx_INCREF((PyObject *)__pyx_v_oa); - __Pyx_INCREF((PyObject *)__pyx_v_ob); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_multi = ((PyArrayMultiIterObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":219 - * cdef broadcast multi - * - * if size is None: # <<<<<<<<<<<<<< - * multi = PyArray_MultiIterNew(2, oa, ob) - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_DOUBLE) - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":220 - * - * if size is None: - * multi = PyArray_MultiIterNew(2, oa, ob) # <<<<<<<<<<<<<< - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_DOUBLE) - * array_data = array.data - */ - __pyx_t_2 = PyArray_MultiIterNew(2, ((void *)__pyx_v_oa), ((void *)__pyx_v_ob)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 220; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayMultiIterObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_multi)); - __pyx_v_multi = ((PyArrayMultiIterObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":221 - * if size is None: - * multi = PyArray_MultiIterNew(2, oa, ob) - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_DOUBLE) # <<<<<<<<<<<<<< - * array_data = array.data - * for i from 0 <= i < multi.size: - */ - __pyx_t_2 = PyArray_SimpleNew(__pyx_v_multi->nd, __pyx_v_multi->dimensions, NPY_DOUBLE); if (unlikely(!__pyx_t_2)) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 221; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":222 - * multi = PyArray_MultiIterNew(2, oa, ob) - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_DOUBLE) - * array_data = array.data # <<<<<<<<<<<<<< - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 0) - */ - __pyx_v_array_data = ((double *)arrayObject->data); - - /* "mtrand.pyx":223 - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_DOUBLE) - * array_data = array.data - * for i from 0 <= i < multi.size: # <<<<<<<<<<<<<< - * oa_data = PyArray_MultiIter_DATA(multi, 0) - * ob_data = PyArray_MultiIter_DATA(multi, 1) - */ - __pyx_t_3 = __pyx_v_multi->size; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_3; __pyx_v_i++) { - - /* "mtrand.pyx":224 - * array_data = array.data - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 0) # <<<<<<<<<<<<<< - * ob_data = PyArray_MultiIter_DATA(multi, 1) - * array_data[i] = func(state, oa_data[0], ob_data[0]) - */ - __pyx_v_oa_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 0)); - - /* "mtrand.pyx":225 - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 0) - * ob_data = PyArray_MultiIter_DATA(multi, 1) # <<<<<<<<<<<<<< - * array_data[i] = func(state, oa_data[0], ob_data[0]) - * PyArray_MultiIter_NEXT(multi) - */ - __pyx_v_ob_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 1)); - - /* "mtrand.pyx":226 - * oa_data = PyArray_MultiIter_DATA(multi, 0) - * ob_data = PyArray_MultiIter_DATA(multi, 1) - * array_data[i] = func(state, oa_data[0], ob_data[0]) # <<<<<<<<<<<<<< - * PyArray_MultiIter_NEXT(multi) - * else: - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, 
(__pyx_v_oa_data[0]), (__pyx_v_ob_data[0])); - - /* "mtrand.pyx":227 - * ob_data = PyArray_MultiIter_DATA(multi, 1) - * array_data[i] = func(state, oa_data[0], ob_data[0]) - * PyArray_MultiIter_NEXT(multi) # <<<<<<<<<<<<<< - * else: - * array = np.empty(size, np.float64) - */ - PyArray_MultiIter_NEXT(__pyx_v_multi); - } - goto __pyx_L3; - } - /*else*/ { - - /* "mtrand.pyx":229 - * PyArray_MultiIter_NEXT(multi) - * else: - * array = np.empty(size, np.float64) # <<<<<<<<<<<<<< - * array_data = array.data - * multi = PyArray_MultiIterNew(3, array, oa, ob) - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 229; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 229; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 229; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__float64); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 229; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 229; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 229; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_5))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_5); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "mtrand.pyx":230 - * else: - * array = np.empty(size, np.float64) - * array_data = array.data # <<<<<<<<<<<<<< - * multi = PyArray_MultiIterNew(3, array, oa, ob) - * if (multi.size != PyArray_SIZE(array)): - */ - __pyx_v_array_data = ((double *)arrayObject->data); - - /* "mtrand.pyx":231 - * array = np.empty(size, np.float64) - * array_data = array.data - * multi = PyArray_MultiIterNew(3, array, oa, ob) # <<<<<<<<<<<<<< - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") - */ - __pyx_t_5 = PyArray_MultiIterNew(3, ((void *)arrayObject), ((void *)__pyx_v_oa), ((void *)__pyx_v_ob)); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 231; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)((PyArrayMultiIterObject *)__pyx_t_5))); - __Pyx_DECREF(((PyObject *)__pyx_v_multi)); - __pyx_v_multi = ((PyArrayMultiIterObject *)__pyx_t_5); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "mtrand.pyx":232 - * array_data = array.data - * multi = PyArray_MultiIterNew(3, array, oa, ob) - * if (multi.size != PyArray_SIZE(array)): # <<<<<<<<<<<<<< - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: - */ - __pyx_t_1 = (__pyx_v_multi->size != PyArray_SIZE(arrayObject)); - if (__pyx_t_1) { - - /* "mtrand.pyx":233 - * multi = PyArray_MultiIterNew(3, array, oa, ob) - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") # <<<<<<<<<<<<<< - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 1) - */ - 
__pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 233; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_1)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_1)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_1)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 233; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 233; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":234 - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: # <<<<<<<<<<<<<< - * oa_data = PyArray_MultiIter_DATA(multi, 1) - * ob_data = PyArray_MultiIter_DATA(multi, 2) - */ - __pyx_t_3 = __pyx_v_multi->size; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_3; __pyx_v_i++) { - - /* "mtrand.pyx":235 - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 1) # <<<<<<<<<<<<<< - * ob_data = PyArray_MultiIter_DATA(multi, 2) - * array_data[i] = func(state, oa_data[0], ob_data[0]) - */ - __pyx_v_oa_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 1)); - - /* "mtrand.pyx":236 - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 1) - * ob_data = PyArray_MultiIter_DATA(multi, 2) # <<<<<<<<<<<<<< - * array_data[i] = func(state, oa_data[0], ob_data[0]) - * PyArray_MultiIter_NEXTi(multi, 1) - */ - __pyx_v_ob_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 2)); - - /* "mtrand.pyx":237 - * oa_data = PyArray_MultiIter_DATA(multi, 1) - * ob_data = 
PyArray_MultiIter_DATA(multi, 2) - * array_data[i] = func(state, oa_data[0], ob_data[0]) # <<<<<<<<<<<<<< - * PyArray_MultiIter_NEXTi(multi, 1) - * PyArray_MultiIter_NEXTi(multi, 2) - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, (__pyx_v_oa_data[0]), (__pyx_v_ob_data[0])); - - /* "mtrand.pyx":238 - * ob_data = PyArray_MultiIter_DATA(multi, 2) - * array_data[i] = func(state, oa_data[0], ob_data[0]) - * PyArray_MultiIter_NEXTi(multi, 1) # <<<<<<<<<<<<<< - * PyArray_MultiIter_NEXTi(multi, 2) - * return array - */ - PyArray_MultiIter_NEXTi(__pyx_v_multi, 1); - - /* "mtrand.pyx":239 - * array_data[i] = func(state, oa_data[0], ob_data[0]) - * PyArray_MultiIter_NEXTi(multi, 1) - * PyArray_MultiIter_NEXTi(multi, 2) # <<<<<<<<<<<<<< - * return array - * - */ - PyArray_MultiIter_NEXTi(__pyx_v_multi, 2); - } - } - __pyx_L3:; - - /* "mtrand.pyx":240 - * PyArray_MultiIter_NEXTi(multi, 1) - * PyArray_MultiIter_NEXTi(multi, 2) - * return array # <<<<<<<<<<<<<< - * - * cdef object cont3_array_sc(rk_state *state, rk_cont3 func, object size, double a, - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.cont2_array"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF((PyObject *)__pyx_v_multi); - __Pyx_DECREF(__pyx_v_size); - __Pyx_DECREF((PyObject *)__pyx_v_oa); - __Pyx_DECREF((PyObject *)__pyx_v_ob); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":242 - * return array - * - * cdef object cont3_array_sc(rk_state *state, rk_cont3 func, object size, double a, # <<<<<<<<<<<<<< - * double b, double c): - * - */ - -static PyObject *__pyx_f_6mtrand_cont3_array_sc(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_cont3 
__pyx_v_func, PyObject *__pyx_v_size, double __pyx_v_a, double __pyx_v_b, double __pyx_v_c) { - double *__pyx_v_array_data; - PyArrayObject *arrayObject; - long __pyx_v_length; - long __pyx_v_i; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - long __pyx_t_5; - __Pyx_RefNannySetupContext("cont3_array_sc"); - __Pyx_INCREF(__pyx_v_size); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":250 - * cdef long i - * - * if size is None: # <<<<<<<<<<<<<< - * return func(state, a, b, c) - * else: - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":251 - * - * if size is None: - * return func(state, a, b, c) # <<<<<<<<<<<<<< - * else: - * array = np.empty(size, np.float64) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyFloat_FromDouble(__pyx_v_func(__pyx_v_state, __pyx_v_a, __pyx_v_b, __pyx_v_c)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 251; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L3; - } - /*else*/ { - - /* "mtrand.pyx":253 - * return func(state, a, b, c) - * else: - * array = np.empty(size, np.float64) # <<<<<<<<<<<<<< - * length = PyArray_SIZE(array) - * array_data = array.data - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 253; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 253; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 253; __pyx_clineno = __LINE__; 
goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__float64); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 253; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 253; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 253; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_4))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_4); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "mtrand.pyx":254 - * else: - * array = np.empty(size, np.float64) - * length = PyArray_SIZE(array) # <<<<<<<<<<<<<< - * array_data = array.data - * for i from 0 <= i < length: - */ - __pyx_v_length = PyArray_SIZE(arrayObject); - - /* "mtrand.pyx":255 - * array = np.empty(size, np.float64) - * length = PyArray_SIZE(array) - * array_data = array.data # <<<<<<<<<<<<<< - * for i from 0 <= i < length: - * array_data[i] = func(state, a, b, c) - */ - __pyx_v_array_data = ((double *)arrayObject->data); - - /* "mtrand.pyx":256 - * length = PyArray_SIZE(array) - * array_data = array.data - * for i from 0 <= i < length: # <<<<<<<<<<<<<< - * array_data[i] = func(state, a, b, c) - * return array - */ - __pyx_t_5 = __pyx_v_length; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_5; __pyx_v_i++) { - - /* "mtrand.pyx":257 - * 
array_data = array.data - * for i from 0 <= i < length: - * array_data[i] = func(state, a, b, c) # <<<<<<<<<<<<<< - * return array - * - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, __pyx_v_a, __pyx_v_b, __pyx_v_c); - } - - /* "mtrand.pyx":258 - * for i from 0 <= i < length: - * array_data[i] = func(state, a, b, c) - * return array # <<<<<<<<<<<<<< - * - * cdef object cont3_array(rk_state *state, rk_cont3 func, object size, ndarray oa, - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - } - __pyx_L3:; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("mtrand.cont3_array_sc"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":260 - * return array - * - * cdef object cont3_array(rk_state *state, rk_cont3 func, object size, ndarray oa, # <<<<<<<<<<<<<< - * ndarray ob, ndarray oc): - * - */ - -static PyObject *__pyx_f_6mtrand_cont3_array(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_cont3 __pyx_v_func, PyObject *__pyx_v_size, PyArrayObject *__pyx_v_oa, PyArrayObject *__pyx_v_ob, PyArrayObject *__pyx_v_oc) { - double *__pyx_v_array_data; - double *__pyx_v_oa_data; - double *__pyx_v_ob_data; - double *__pyx_v_oc_data; - PyArrayObject *arrayObject; - npy_intp __pyx_v_i; - PyArrayMultiIterObject *__pyx_v_multi; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - npy_intp __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - __Pyx_RefNannySetupContext("cont3_array"); - __Pyx_INCREF(__pyx_v_size); - __Pyx_INCREF((PyObject *)__pyx_v_oa); - __Pyx_INCREF((PyObject *)__pyx_v_ob); - __Pyx_INCREF((PyObject *)__pyx_v_oc); - arrayObject = ((PyArrayObject 
*)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_multi = ((PyArrayMultiIterObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":272 - * cdef broadcast multi - * - * if size is None: # <<<<<<<<<<<<<< - * multi = PyArray_MultiIterNew(3, oa, ob, oc) - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_DOUBLE) - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":273 - * - * if size is None: - * multi = PyArray_MultiIterNew(3, oa, ob, oc) # <<<<<<<<<<<<<< - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_DOUBLE) - * array_data = array.data - */ - __pyx_t_2 = PyArray_MultiIterNew(3, ((void *)__pyx_v_oa), ((void *)__pyx_v_ob), ((void *)__pyx_v_oc)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 273; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayMultiIterObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_multi)); - __pyx_v_multi = ((PyArrayMultiIterObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":274 - * if size is None: - * multi = PyArray_MultiIterNew(3, oa, ob, oc) - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_DOUBLE) # <<<<<<<<<<<<<< - * array_data = array.data - * for i from 0 <= i < multi.size: - */ - __pyx_t_2 = PyArray_SimpleNew(__pyx_v_multi->nd, __pyx_v_multi->dimensions, NPY_DOUBLE); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 274; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":275 - * multi = PyArray_MultiIterNew(3, oa, ob, oc) - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_DOUBLE) - * array_data = array.data # <<<<<<<<<<<<<< - * for i from 0 <= i < multi.size: - * oa_data = 
PyArray_MultiIter_DATA(multi, 0) - */ - __pyx_v_array_data = ((double *)arrayObject->data); - - /* "mtrand.pyx":276 - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_DOUBLE) - * array_data = array.data - * for i from 0 <= i < multi.size: # <<<<<<<<<<<<<< - * oa_data = PyArray_MultiIter_DATA(multi, 0) - * ob_data = PyArray_MultiIter_DATA(multi, 1) - */ - __pyx_t_3 = __pyx_v_multi->size; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_3; __pyx_v_i++) { - - /* "mtrand.pyx":277 - * array_data = array.data - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 0) # <<<<<<<<<<<<<< - * ob_data = PyArray_MultiIter_DATA(multi, 1) - * oc_data = PyArray_MultiIter_DATA(multi, 2) - */ - __pyx_v_oa_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 0)); - - /* "mtrand.pyx":278 - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 0) - * ob_data = PyArray_MultiIter_DATA(multi, 1) # <<<<<<<<<<<<<< - * oc_data = PyArray_MultiIter_DATA(multi, 2) - * array_data[i] = func(state, oa_data[0], ob_data[0], oc_data[0]) - */ - __pyx_v_ob_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 1)); - - /* "mtrand.pyx":279 - * oa_data = PyArray_MultiIter_DATA(multi, 0) - * ob_data = PyArray_MultiIter_DATA(multi, 1) - * oc_data = PyArray_MultiIter_DATA(multi, 2) # <<<<<<<<<<<<<< - * array_data[i] = func(state, oa_data[0], ob_data[0], oc_data[0]) - * PyArray_MultiIter_NEXT(multi) - */ - __pyx_v_oc_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 2)); - - /* "mtrand.pyx":280 - * ob_data = PyArray_MultiIter_DATA(multi, 1) - * oc_data = PyArray_MultiIter_DATA(multi, 2) - * array_data[i] = func(state, oa_data[0], ob_data[0], oc_data[0]) # <<<<<<<<<<<<<< - * PyArray_MultiIter_NEXT(multi) - * else: - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, (__pyx_v_oa_data[0]), (__pyx_v_ob_data[0]), (__pyx_v_oc_data[0])); - - /* "mtrand.pyx":281 - * oc_data = PyArray_MultiIter_DATA(multi, 2) - * array_data[i] = func(state, 
oa_data[0], ob_data[0], oc_data[0]) - * PyArray_MultiIter_NEXT(multi) # <<<<<<<<<<<<<< - * else: - * array = np.empty(size, np.float64) - */ - PyArray_MultiIter_NEXT(__pyx_v_multi); - } - goto __pyx_L3; - } - /*else*/ { - - /* "mtrand.pyx":283 - * PyArray_MultiIter_NEXT(multi) - * else: - * array = np.empty(size, np.float64) # <<<<<<<<<<<<<< - * array_data = array.data - * multi = PyArray_MultiIterNew(4, array, oa, - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 283; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 283; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 283; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__float64); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 283; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 283; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 283; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); 
__pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_5))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_5); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "mtrand.pyx":284 - * else: - * array = np.empty(size, np.float64) - * array_data = array.data # <<<<<<<<<<<<<< - * multi = PyArray_MultiIterNew(4, array, oa, - * ob, oc) - */ - __pyx_v_array_data = ((double *)arrayObject->data); - - /* "mtrand.pyx":286 - * array_data = array.data - * multi = PyArray_MultiIterNew(4, array, oa, - * ob, oc) # <<<<<<<<<<<<<< - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") - */ - __pyx_t_5 = PyArray_MultiIterNew(4, ((void *)arrayObject), ((void *)__pyx_v_oa), ((void *)__pyx_v_ob), ((void *)__pyx_v_oc)); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 285; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)((PyArrayMultiIterObject *)__pyx_t_5))); - __Pyx_DECREF(((PyObject *)__pyx_v_multi)); - __pyx_v_multi = ((PyArrayMultiIterObject *)__pyx_t_5); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "mtrand.pyx":287 - * multi = PyArray_MultiIterNew(4, array, oa, - * ob, oc) - * if (multi.size != PyArray_SIZE(array)): # <<<<<<<<<<<<<< - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: - */ - __pyx_t_1 = (__pyx_v_multi->size != PyArray_SIZE(arrayObject)); - if (__pyx_t_1) { - - /* "mtrand.pyx":288 - * ob, oc) - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") # <<<<<<<<<<<<<< - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 1) - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 288; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_1)); - PyTuple_SET_ITEM(__pyx_t_5, 
0, ((PyObject *)__pyx_kp_s_1)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_1)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 288; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 288; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":289 - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: # <<<<<<<<<<<<<< - * oa_data = PyArray_MultiIter_DATA(multi, 1) - * ob_data = PyArray_MultiIter_DATA(multi, 2) - */ - __pyx_t_3 = __pyx_v_multi->size; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_3; __pyx_v_i++) { - - /* "mtrand.pyx":290 - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 1) # <<<<<<<<<<<<<< - * ob_data = PyArray_MultiIter_DATA(multi, 2) - * oc_data = PyArray_MultiIter_DATA(multi, 3) - */ - __pyx_v_oa_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 1)); - - /* "mtrand.pyx":291 - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 1) - * ob_data = PyArray_MultiIter_DATA(multi, 2) # <<<<<<<<<<<<<< - * oc_data = PyArray_MultiIter_DATA(multi, 3) - * array_data[i] = func(state, oa_data[0], ob_data[0], oc_data[0]) - */ - __pyx_v_ob_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 2)); - - /* "mtrand.pyx":292 - * oa_data = PyArray_MultiIter_DATA(multi, 1) - * ob_data = PyArray_MultiIter_DATA(multi, 2) - * oc_data = PyArray_MultiIter_DATA(multi, 3) # <<<<<<<<<<<<<< - * array_data[i] = func(state, oa_data[0], ob_data[0], oc_data[0]) - * PyArray_MultiIter_NEXT(multi) - */ - __pyx_v_oc_data = ((double 
*)PyArray_MultiIter_DATA(__pyx_v_multi, 3)); - - /* "mtrand.pyx":293 - * ob_data = PyArray_MultiIter_DATA(multi, 2) - * oc_data = PyArray_MultiIter_DATA(multi, 3) - * array_data[i] = func(state, oa_data[0], ob_data[0], oc_data[0]) # <<<<<<<<<<<<<< - * PyArray_MultiIter_NEXT(multi) - * return array - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, (__pyx_v_oa_data[0]), (__pyx_v_ob_data[0]), (__pyx_v_oc_data[0])); - - /* "mtrand.pyx":294 - * oc_data = PyArray_MultiIter_DATA(multi, 3) - * array_data[i] = func(state, oa_data[0], ob_data[0], oc_data[0]) - * PyArray_MultiIter_NEXT(multi) # <<<<<<<<<<<<<< - * return array - * - */ - PyArray_MultiIter_NEXT(__pyx_v_multi); - } - } - __pyx_L3:; - - /* "mtrand.pyx":295 - * array_data[i] = func(state, oa_data[0], ob_data[0], oc_data[0]) - * PyArray_MultiIter_NEXT(multi) - * return array # <<<<<<<<<<<<<< - * - * cdef object disc0_array(rk_state *state, rk_disc0 func, object size): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.cont3_array"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF((PyObject *)__pyx_v_multi); - __Pyx_DECREF(__pyx_v_size); - __Pyx_DECREF((PyObject *)__pyx_v_oa); - __Pyx_DECREF((PyObject *)__pyx_v_ob); - __Pyx_DECREF((PyObject *)__pyx_v_oc); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":297 - * return array - * - * cdef object disc0_array(rk_state *state, rk_disc0 func, object size): # <<<<<<<<<<<<<< - * cdef long *array_data - * cdef ndarray array "arrayObject" - */ - -static PyObject *__pyx_f_6mtrand_disc0_array(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_disc0 __pyx_v_func, PyObject *__pyx_v_size) { - long 
*__pyx_v_array_data; - PyArrayObject *arrayObject; - long __pyx_v_length; - long __pyx_v_i; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - long __pyx_t_5; - __Pyx_RefNannySetupContext("disc0_array"); - __Pyx_INCREF(__pyx_v_size); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":303 - * cdef long i - * - * if size is None: # <<<<<<<<<<<<<< - * return func(state) - * else: - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":304 - * - * if size is None: - * return func(state) # <<<<<<<<<<<<<< - * else: - * array = np.empty(size, int) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyInt_FromLong(__pyx_v_func(__pyx_v_state)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 304; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L3; - } - /*else*/ { - - /* "mtrand.pyx":306 - * return func(state) - * else: - * array = np.empty(size, int) # <<<<<<<<<<<<<< - * length = PyArray_SIZE(array) - * array_data = array.data - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 306; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 306; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 306; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - __Pyx_INCREF(((PyObject 
*)((PyObject*)&PyInt_Type))); - PyTuple_SET_ITEM(__pyx_t_2, 1, ((PyObject *)((PyObject*)&PyInt_Type))); - __Pyx_GIVEREF(((PyObject *)((PyObject*)&PyInt_Type))); - __pyx_t_4 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 306; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_4))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_4); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "mtrand.pyx":307 - * else: - * array = np.empty(size, int) - * length = PyArray_SIZE(array) # <<<<<<<<<<<<<< - * array_data = array.data - * for i from 0 <= i < length: - */ - __pyx_v_length = PyArray_SIZE(arrayObject); - - /* "mtrand.pyx":308 - * array = np.empty(size, int) - * length = PyArray_SIZE(array) - * array_data = array.data # <<<<<<<<<<<<<< - * for i from 0 <= i < length: - * array_data[i] = func(state) - */ - __pyx_v_array_data = ((long *)arrayObject->data); - - /* "mtrand.pyx":309 - * length = PyArray_SIZE(array) - * array_data = array.data - * for i from 0 <= i < length: # <<<<<<<<<<<<<< - * array_data[i] = func(state) - * return array - */ - __pyx_t_5 = __pyx_v_length; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_5; __pyx_v_i++) { - - /* "mtrand.pyx":310 - * array_data = array.data - * for i from 0 <= i < length: - * array_data[i] = func(state) # <<<<<<<<<<<<<< - * return array - * - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state); - } - - /* "mtrand.pyx":311 - * for i from 0 <= i < length: - * array_data[i] = func(state) - * return array # <<<<<<<<<<<<<< - * - * cdef object discnp_array_sc(rk_state *state, rk_discnp func, object size, long n, double p): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - } - 
__pyx_L3:; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("mtrand.disc0_array"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":313 - * return array - * - * cdef object discnp_array_sc(rk_state *state, rk_discnp func, object size, long n, double p): # <<<<<<<<<<<<<< - * cdef long *array_data - * cdef ndarray array "arrayObject" - */ - -static PyObject *__pyx_f_6mtrand_discnp_array_sc(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_discnp __pyx_v_func, PyObject *__pyx_v_size, long __pyx_v_n, double __pyx_v_p) { - long *__pyx_v_array_data; - PyArrayObject *arrayObject; - long __pyx_v_length; - long __pyx_v_i; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - long __pyx_t_5; - __Pyx_RefNannySetupContext("discnp_array_sc"); - __Pyx_INCREF(__pyx_v_size); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":319 - * cdef long i - * - * if size is None: # <<<<<<<<<<<<<< - * return func(state, n, p) - * else: - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":320 - * - * if size is None: - * return func(state, n, p) # <<<<<<<<<<<<<< - * else: - * array = np.empty(size, int) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyInt_FromLong(__pyx_v_func(__pyx_v_state, __pyx_v_n, __pyx_v_p)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 320; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L3; - } - /*else*/ { - - /* "mtrand.pyx":322 - * return func(state, n, p) - * else: - * array = np.empty(size, int) # <<<<<<<<<<<<<< - * 
length = PyArray_SIZE(array) - * array_data = array.data - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 322; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 322; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 322; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - __Pyx_INCREF(((PyObject *)((PyObject*)&PyInt_Type))); - PyTuple_SET_ITEM(__pyx_t_2, 1, ((PyObject *)((PyObject*)&PyInt_Type))); - __Pyx_GIVEREF(((PyObject *)((PyObject*)&PyInt_Type))); - __pyx_t_4 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 322; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_4))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_4); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "mtrand.pyx":323 - * else: - * array = np.empty(size, int) - * length = PyArray_SIZE(array) # <<<<<<<<<<<<<< - * array_data = array.data - * for i from 0 <= i < length: - */ - __pyx_v_length = PyArray_SIZE(arrayObject); - - /* "mtrand.pyx":324 - * array = np.empty(size, int) - * length = PyArray_SIZE(array) - * array_data = array.data # <<<<<<<<<<<<<< - * for i from 0 <= i < length: - * array_data[i] = func(state, n, p) - */ - __pyx_v_array_data = ((long *)arrayObject->data); - - /* "mtrand.pyx":325 - * 
length = PyArray_SIZE(array) - * array_data = array.data - * for i from 0 <= i < length: # <<<<<<<<<<<<<< - * array_data[i] = func(state, n, p) - * return array - */ - __pyx_t_5 = __pyx_v_length; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_5; __pyx_v_i++) { - - /* "mtrand.pyx":326 - * array_data = array.data - * for i from 0 <= i < length: - * array_data[i] = func(state, n, p) # <<<<<<<<<<<<<< - * return array - * - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, __pyx_v_n, __pyx_v_p); - } - - /* "mtrand.pyx":327 - * for i from 0 <= i < length: - * array_data[i] = func(state, n, p) - * return array # <<<<<<<<<<<<<< - * - * cdef object discnp_array(rk_state *state, rk_discnp func, object size, ndarray on, ndarray op): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - } - __pyx_L3:; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("mtrand.discnp_array_sc"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":329 - * return array - * - * cdef object discnp_array(rk_state *state, rk_discnp func, object size, ndarray on, ndarray op): # <<<<<<<<<<<<<< - * cdef long *array_data - * cdef ndarray array "arrayObject" - */ - -static PyObject *__pyx_f_6mtrand_discnp_array(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_discnp __pyx_v_func, PyObject *__pyx_v_size, PyArrayObject *__pyx_v_on, PyArrayObject *__pyx_v_op) { - long *__pyx_v_array_data; - PyArrayObject *arrayObject; - npy_intp __pyx_v_i; - double *__pyx_v_op_data; - long *__pyx_v_on_data; - PyArrayMultiIterObject *__pyx_v_multi; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - npy_intp __pyx_t_3; - PyObject *__pyx_t_4 
= NULL; - PyObject *__pyx_t_5 = NULL; - __Pyx_RefNannySetupContext("discnp_array"); - __Pyx_INCREF(__pyx_v_size); - __Pyx_INCREF((PyObject *)__pyx_v_on); - __Pyx_INCREF((PyObject *)__pyx_v_op); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_multi = ((PyArrayMultiIterObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":338 - * cdef broadcast multi - * - * if size is None: # <<<<<<<<<<<<<< - * multi = PyArray_MultiIterNew(2, on, op) - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":339 - * - * if size is None: - * multi = PyArray_MultiIterNew(2, on, op) # <<<<<<<<<<<<<< - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) - * array_data = array.data - */ - __pyx_t_2 = PyArray_MultiIterNew(2, ((void *)__pyx_v_on), ((void *)__pyx_v_op)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 339; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayMultiIterObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_multi)); - __pyx_v_multi = ((PyArrayMultiIterObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":340 - * if size is None: - * multi = PyArray_MultiIterNew(2, on, op) - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) # <<<<<<<<<<<<<< - * array_data = array.data - * for i from 0 <= i < multi.size: - */ - __pyx_t_2 = PyArray_SimpleNew(__pyx_v_multi->nd, __pyx_v_multi->dimensions, NPY_LONG); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 340; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":341 - * multi = PyArray_MultiIterNew(2, 
on, op) - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) - * array_data = array.data # <<<<<<<<<<<<<< - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 0) - */ - __pyx_v_array_data = ((long *)arrayObject->data); - - /* "mtrand.pyx":342 - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) - * array_data = array.data - * for i from 0 <= i < multi.size: # <<<<<<<<<<<<<< - * on_data = PyArray_MultiIter_DATA(multi, 0) - * op_data = PyArray_MultiIter_DATA(multi, 1) - */ - __pyx_t_3 = __pyx_v_multi->size; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_3; __pyx_v_i++) { - - /* "mtrand.pyx":343 - * array_data = array.data - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 0) # <<<<<<<<<<<<<< - * op_data = PyArray_MultiIter_DATA(multi, 1) - * array_data[i] = func(state, on_data[0], op_data[0]) - */ - __pyx_v_on_data = ((long *)PyArray_MultiIter_DATA(__pyx_v_multi, 0)); - - /* "mtrand.pyx":344 - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 0) - * op_data = PyArray_MultiIter_DATA(multi, 1) # <<<<<<<<<<<<<< - * array_data[i] = func(state, on_data[0], op_data[0]) - * PyArray_MultiIter_NEXT(multi) - */ - __pyx_v_op_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 1)); - - /* "mtrand.pyx":345 - * on_data = PyArray_MultiIter_DATA(multi, 0) - * op_data = PyArray_MultiIter_DATA(multi, 1) - * array_data[i] = func(state, on_data[0], op_data[0]) # <<<<<<<<<<<<<< - * PyArray_MultiIter_NEXT(multi) - * else: - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, (__pyx_v_on_data[0]), (__pyx_v_op_data[0])); - - /* "mtrand.pyx":346 - * op_data = PyArray_MultiIter_DATA(multi, 1) - * array_data[i] = func(state, on_data[0], op_data[0]) - * PyArray_MultiIter_NEXT(multi) # <<<<<<<<<<<<<< - * else: - * array = np.empty(size, int) - */ - PyArray_MultiIter_NEXT(__pyx_v_multi); - } - goto __pyx_L3; - } - /*else*/ { - - /* "mtrand.pyx":348 - * 
PyArray_MultiIter_NEXT(multi) - * else: - * array = np.empty(size, int) # <<<<<<<<<<<<<< - * array_data = array.data - * multi = PyArray_MultiIterNew(3, array, on, op) - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 348; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 348; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 348; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - __Pyx_INCREF(((PyObject *)((PyObject*)&PyInt_Type))); - PyTuple_SET_ITEM(__pyx_t_2, 1, ((PyObject *)((PyObject*)&PyInt_Type))); - __Pyx_GIVEREF(((PyObject *)((PyObject*)&PyInt_Type))); - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 348; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_5))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_5); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "mtrand.pyx":349 - * else: - * array = np.empty(size, int) - * array_data = array.data # <<<<<<<<<<<<<< - * multi = PyArray_MultiIterNew(3, array, on, op) - * if (multi.size != PyArray_SIZE(array)): - */ - __pyx_v_array_data = ((long *)arrayObject->data); - - /* "mtrand.pyx":350 - * array = np.empty(size, int) - * array_data = array.data - * multi = PyArray_MultiIterNew(3, array, on, op) # 
<<<<<<<<<<<<<< - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") - */ - __pyx_t_5 = PyArray_MultiIterNew(3, ((void *)arrayObject), ((void *)__pyx_v_on), ((void *)__pyx_v_op)); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 350; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)((PyArrayMultiIterObject *)__pyx_t_5))); - __Pyx_DECREF(((PyObject *)__pyx_v_multi)); - __pyx_v_multi = ((PyArrayMultiIterObject *)__pyx_t_5); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "mtrand.pyx":351 - * array_data = array.data - * multi = PyArray_MultiIterNew(3, array, on, op) - * if (multi.size != PyArray_SIZE(array)): # <<<<<<<<<<<<<< - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: - */ - __pyx_t_1 = (__pyx_v_multi->size != PyArray_SIZE(arrayObject)); - if (__pyx_t_1) { - - /* "mtrand.pyx":352 - * multi = PyArray_MultiIterNew(3, array, on, op) - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") # <<<<<<<<<<<<<< - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 1) - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 352; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_1)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_1)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_1)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 352; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 352; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":353 - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: # <<<<<<<<<<<<<< - * on_data = PyArray_MultiIter_DATA(multi, 1) - * op_data = PyArray_MultiIter_DATA(multi, 2) - */ - __pyx_t_3 = __pyx_v_multi->size; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_3; __pyx_v_i++) { - - /* "mtrand.pyx":354 - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 1) # <<<<<<<<<<<<<< - * op_data = PyArray_MultiIter_DATA(multi, 2) - * array_data[i] = func(state, on_data[0], op_data[0]) - */ - __pyx_v_on_data = ((long *)PyArray_MultiIter_DATA(__pyx_v_multi, 1)); - - /* "mtrand.pyx":355 - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 1) - * op_data = PyArray_MultiIter_DATA(multi, 2) # <<<<<<<<<<<<<< - * array_data[i] = func(state, on_data[0], op_data[0]) - * PyArray_MultiIter_NEXTi(multi, 1) - */ - __pyx_v_op_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 2)); - - /* "mtrand.pyx":356 - * on_data = PyArray_MultiIter_DATA(multi, 1) - * op_data = PyArray_MultiIter_DATA(multi, 2) - * array_data[i] = func(state, on_data[0], op_data[0]) # <<<<<<<<<<<<<< - * PyArray_MultiIter_NEXTi(multi, 1) - * PyArray_MultiIter_NEXTi(multi, 2) - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, (__pyx_v_on_data[0]), (__pyx_v_op_data[0])); - - /* "mtrand.pyx":357 - * op_data = PyArray_MultiIter_DATA(multi, 2) - * array_data[i] = func(state, on_data[0], op_data[0]) - * PyArray_MultiIter_NEXTi(multi, 1) # <<<<<<<<<<<<<< - * PyArray_MultiIter_NEXTi(multi, 2) - * - */ - PyArray_MultiIter_NEXTi(__pyx_v_multi, 1); - - /* "mtrand.pyx":358 - * array_data[i] = func(state, on_data[0], op_data[0]) - * PyArray_MultiIter_NEXTi(multi, 1) - * PyArray_MultiIter_NEXTi(multi, 2) # <<<<<<<<<<<<<< - * - * return array - */ - 
PyArray_MultiIter_NEXTi(__pyx_v_multi, 2); - } - } - __pyx_L3:; - - /* "mtrand.pyx":360 - * PyArray_MultiIter_NEXTi(multi, 2) - * - * return array # <<<<<<<<<<<<<< - * - * cdef object discdd_array_sc(rk_state *state, rk_discdd func, object size, double n, double p): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.discnp_array"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF((PyObject *)__pyx_v_multi); - __Pyx_DECREF(__pyx_v_size); - __Pyx_DECREF((PyObject *)__pyx_v_on); - __Pyx_DECREF((PyObject *)__pyx_v_op); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":362 - * return array - * - * cdef object discdd_array_sc(rk_state *state, rk_discdd func, object size, double n, double p): # <<<<<<<<<<<<<< - * cdef long *array_data - * cdef ndarray array "arrayObject" - */ - -static PyObject *__pyx_f_6mtrand_discdd_array_sc(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_discdd __pyx_v_func, PyObject *__pyx_v_size, double __pyx_v_n, double __pyx_v_p) { - long *__pyx_v_array_data; - PyArrayObject *arrayObject; - long __pyx_v_length; - long __pyx_v_i; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - long __pyx_t_5; - __Pyx_RefNannySetupContext("discdd_array_sc"); - __Pyx_INCREF(__pyx_v_size); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":368 - * cdef long i - * - * if size is None: # <<<<<<<<<<<<<< - * return func(state, n, p) - * else: - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":369 - * - * if size is None: - * return func(state, n, p) # <<<<<<<<<<<<<< - 
* else: - * array = np.empty(size, int) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyInt_FromLong(__pyx_v_func(__pyx_v_state, __pyx_v_n, __pyx_v_p)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 369; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L3; - } - /*else*/ { - - /* "mtrand.pyx":371 - * return func(state, n, p) - * else: - * array = np.empty(size, int) # <<<<<<<<<<<<<< - * length = PyArray_SIZE(array) - * array_data = array.data - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 371; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 371; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 371; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - __Pyx_INCREF(((PyObject *)((PyObject*)&PyInt_Type))); - PyTuple_SET_ITEM(__pyx_t_2, 1, ((PyObject *)((PyObject*)&PyInt_Type))); - __Pyx_GIVEREF(((PyObject *)((PyObject*)&PyInt_Type))); - __pyx_t_4 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 371; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_4))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_4); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* 
"mtrand.pyx":372 - * else: - * array = np.empty(size, int) - * length = PyArray_SIZE(array) # <<<<<<<<<<<<<< - * array_data = array.data - * for i from 0 <= i < length: - */ - __pyx_v_length = PyArray_SIZE(arrayObject); - - /* "mtrand.pyx":373 - * array = np.empty(size, int) - * length = PyArray_SIZE(array) - * array_data = array.data # <<<<<<<<<<<<<< - * for i from 0 <= i < length: - * array_data[i] = func(state, n, p) - */ - __pyx_v_array_data = ((long *)arrayObject->data); - - /* "mtrand.pyx":374 - * length = PyArray_SIZE(array) - * array_data = array.data - * for i from 0 <= i < length: # <<<<<<<<<<<<<< - * array_data[i] = func(state, n, p) - * return array - */ - __pyx_t_5 = __pyx_v_length; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_5; __pyx_v_i++) { - - /* "mtrand.pyx":375 - * array_data = array.data - * for i from 0 <= i < length: - * array_data[i] = func(state, n, p) # <<<<<<<<<<<<<< - * return array - * - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, __pyx_v_n, __pyx_v_p); - } - - /* "mtrand.pyx":376 - * for i from 0 <= i < length: - * array_data[i] = func(state, n, p) - * return array # <<<<<<<<<<<<<< - * - * cdef object discdd_array(rk_state *state, rk_discdd func, object size, ndarray on, ndarray op): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - } - __pyx_L3:; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("mtrand.discdd_array_sc"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":378 - * return array - * - * cdef object discdd_array(rk_state *state, rk_discdd func, object size, ndarray on, ndarray op): # <<<<<<<<<<<<<< - * cdef long *array_data - * cdef ndarray 
array "arrayObject" - */ - -static PyObject *__pyx_f_6mtrand_discdd_array(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_discdd __pyx_v_func, PyObject *__pyx_v_size, PyArrayObject *__pyx_v_on, PyArrayObject *__pyx_v_op) { - long *__pyx_v_array_data; - PyArrayObject *arrayObject; - npy_intp __pyx_v_i; - double *__pyx_v_op_data; - double *__pyx_v_on_data; - PyArrayMultiIterObject *__pyx_v_multi; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - npy_intp __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - __Pyx_RefNannySetupContext("discdd_array"); - __Pyx_INCREF(__pyx_v_size); - __Pyx_INCREF((PyObject *)__pyx_v_on); - __Pyx_INCREF((PyObject *)__pyx_v_op); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_multi = ((PyArrayMultiIterObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":387 - * cdef broadcast multi - * - * if size is None: # <<<<<<<<<<<<<< - * multi = PyArray_MultiIterNew(2, on, op) - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":388 - * - * if size is None: - * multi = PyArray_MultiIterNew(2, on, op) # <<<<<<<<<<<<<< - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) - * array_data = array.data - */ - __pyx_t_2 = PyArray_MultiIterNew(2, ((void *)__pyx_v_on), ((void *)__pyx_v_op)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 388; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayMultiIterObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_multi)); - __pyx_v_multi = ((PyArrayMultiIterObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":389 - * if size is None: - * multi = PyArray_MultiIterNew(2, on, op) - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) # <<<<<<<<<<<<<< - * array_data = array.data - * for i from 0 <= 
i < multi.size: - */ - __pyx_t_2 = PyArray_SimpleNew(__pyx_v_multi->nd, __pyx_v_multi->dimensions, NPY_LONG); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 389; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":390 - * multi = PyArray_MultiIterNew(2, on, op) - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) - * array_data = array.data # <<<<<<<<<<<<<< - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 0) - */ - __pyx_v_array_data = ((long *)arrayObject->data); - - /* "mtrand.pyx":391 - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) - * array_data = array.data - * for i from 0 <= i < multi.size: # <<<<<<<<<<<<<< - * on_data = PyArray_MultiIter_DATA(multi, 0) - * op_data = PyArray_MultiIter_DATA(multi, 1) - */ - __pyx_t_3 = __pyx_v_multi->size; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_3; __pyx_v_i++) { - - /* "mtrand.pyx":392 - * array_data = array.data - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 0) # <<<<<<<<<<<<<< - * op_data = PyArray_MultiIter_DATA(multi, 1) - * array_data[i] = func(state, on_data[0], op_data[0]) - */ - __pyx_v_on_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 0)); - - /* "mtrand.pyx":393 - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 0) - * op_data = PyArray_MultiIter_DATA(multi, 1) # <<<<<<<<<<<<<< - * array_data[i] = func(state, on_data[0], op_data[0]) - * PyArray_MultiIter_NEXT(multi) - */ - __pyx_v_op_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 1)); - - /* "mtrand.pyx":394 - * on_data = PyArray_MultiIter_DATA(multi, 0) - * op_data = PyArray_MultiIter_DATA(multi, 1) - * array_data[i] = func(state, on_data[0], op_data[0]) # 
<<<<<<<<<<<<<< - * PyArray_MultiIter_NEXT(multi) - * else: - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, (__pyx_v_on_data[0]), (__pyx_v_op_data[0])); - - /* "mtrand.pyx":395 - * op_data = PyArray_MultiIter_DATA(multi, 1) - * array_data[i] = func(state, on_data[0], op_data[0]) - * PyArray_MultiIter_NEXT(multi) # <<<<<<<<<<<<<< - * else: - * array = np.empty(size, int) - */ - PyArray_MultiIter_NEXT(__pyx_v_multi); - } - goto __pyx_L3; - } - /*else*/ { - - /* "mtrand.pyx":397 - * PyArray_MultiIter_NEXT(multi) - * else: - * array = np.empty(size, int) # <<<<<<<<<<<<<< - * array_data = array.data - * multi = PyArray_MultiIterNew(3, array, on, op) - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 397; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 397; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 397; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - __Pyx_INCREF(((PyObject *)((PyObject*)&PyInt_Type))); - PyTuple_SET_ITEM(__pyx_t_2, 1, ((PyObject *)((PyObject*)&PyInt_Type))); - __Pyx_GIVEREF(((PyObject *)((PyObject*)&PyInt_Type))); - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 397; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_5))); - __Pyx_DECREF(((PyObject 
*)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_5); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "mtrand.pyx":398 - * else: - * array = np.empty(size, int) - * array_data = array.data # <<<<<<<<<<<<<< - * multi = PyArray_MultiIterNew(3, array, on, op) - * if (multi.size != PyArray_SIZE(array)): - */ - __pyx_v_array_data = ((long *)arrayObject->data); - - /* "mtrand.pyx":399 - * array = np.empty(size, int) - * array_data = array.data - * multi = PyArray_MultiIterNew(3, array, on, op) # <<<<<<<<<<<<<< - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") - */ - __pyx_t_5 = PyArray_MultiIterNew(3, ((void *)arrayObject), ((void *)__pyx_v_on), ((void *)__pyx_v_op)); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 399; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)((PyArrayMultiIterObject *)__pyx_t_5))); - __Pyx_DECREF(((PyObject *)__pyx_v_multi)); - __pyx_v_multi = ((PyArrayMultiIterObject *)__pyx_t_5); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "mtrand.pyx":400 - * array_data = array.data - * multi = PyArray_MultiIterNew(3, array, on, op) - * if (multi.size != PyArray_SIZE(array)): # <<<<<<<<<<<<<< - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: - */ - __pyx_t_1 = (__pyx_v_multi->size != PyArray_SIZE(arrayObject)); - if (__pyx_t_1) { - - /* "mtrand.pyx":401 - * multi = PyArray_MultiIterNew(3, array, on, op) - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") # <<<<<<<<<<<<<< - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 1) - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 401; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_1)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject 
*)__pyx_kp_s_1)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_1)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 401; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 401; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":402 - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: # <<<<<<<<<<<<<< - * on_data = PyArray_MultiIter_DATA(multi, 1) - * op_data = PyArray_MultiIter_DATA(multi, 2) - */ - __pyx_t_3 = __pyx_v_multi->size; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_3; __pyx_v_i++) { - - /* "mtrand.pyx":403 - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 1) # <<<<<<<<<<<<<< - * op_data = PyArray_MultiIter_DATA(multi, 2) - * array_data[i] = func(state, on_data[0], op_data[0]) - */ - __pyx_v_on_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 1)); - - /* "mtrand.pyx":404 - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 1) - * op_data = PyArray_MultiIter_DATA(multi, 2) # <<<<<<<<<<<<<< - * array_data[i] = func(state, on_data[0], op_data[0]) - * PyArray_MultiIter_NEXTi(multi, 1) - */ - __pyx_v_op_data = ((double *)PyArray_MultiIter_DATA(__pyx_v_multi, 2)); - - /* "mtrand.pyx":405 - * on_data = PyArray_MultiIter_DATA(multi, 1) - * op_data = PyArray_MultiIter_DATA(multi, 2) - * array_data[i] = func(state, on_data[0], op_data[0]) # <<<<<<<<<<<<<< - * PyArray_MultiIter_NEXTi(multi, 1) - * PyArray_MultiIter_NEXTi(multi, 2) - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, (__pyx_v_on_data[0]), (__pyx_v_op_data[0])); 
- - /* "mtrand.pyx":406 - * op_data = PyArray_MultiIter_DATA(multi, 2) - * array_data[i] = func(state, on_data[0], op_data[0]) - * PyArray_MultiIter_NEXTi(multi, 1) # <<<<<<<<<<<<<< - * PyArray_MultiIter_NEXTi(multi, 2) - * - */ - PyArray_MultiIter_NEXTi(__pyx_v_multi, 1); - - /* "mtrand.pyx":407 - * array_data[i] = func(state, on_data[0], op_data[0]) - * PyArray_MultiIter_NEXTi(multi, 1) - * PyArray_MultiIter_NEXTi(multi, 2) # <<<<<<<<<<<<<< - * - * return array - */ - PyArray_MultiIter_NEXTi(__pyx_v_multi, 2); - } - } - __pyx_L3:; - - /* "mtrand.pyx":409 - * PyArray_MultiIter_NEXTi(multi, 2) - * - * return array # <<<<<<<<<<<<<< - * - * cdef object discnmN_array_sc(rk_state *state, rk_discnmN func, object size, - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.discdd_array"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF((PyObject *)__pyx_v_multi); - __Pyx_DECREF(__pyx_v_size); - __Pyx_DECREF((PyObject *)__pyx_v_on); - __Pyx_DECREF((PyObject *)__pyx_v_op); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":411 - * return array - * - * cdef object discnmN_array_sc(rk_state *state, rk_discnmN func, object size, # <<<<<<<<<<<<<< - * long n, long m, long N): - * cdef long *array_data - */ - -static PyObject *__pyx_f_6mtrand_discnmN_array_sc(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_discnmN __pyx_v_func, PyObject *__pyx_v_size, long __pyx_v_n, long __pyx_v_m, long __pyx_v_N) { - long *__pyx_v_array_data; - PyArrayObject *arrayObject; - long __pyx_v_length; - long __pyx_v_i; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - 
long __pyx_t_5; - __Pyx_RefNannySetupContext("discnmN_array_sc"); - __Pyx_INCREF(__pyx_v_size); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":418 - * cdef long i - * - * if size is None: # <<<<<<<<<<<<<< - * return func(state, n, m, N) - * else: - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":419 - * - * if size is None: - * return func(state, n, m, N) # <<<<<<<<<<<<<< - * else: - * array = np.empty(size, int) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyInt_FromLong(__pyx_v_func(__pyx_v_state, __pyx_v_n, __pyx_v_m, __pyx_v_N)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 419; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L3; - } - /*else*/ { - - /* "mtrand.pyx":421 - * return func(state, n, m, N) - * else: - * array = np.empty(size, int) # <<<<<<<<<<<<<< - * length = PyArray_SIZE(array) - * array_data = array.data - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 421; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 421; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 421; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - __Pyx_INCREF(((PyObject *)((PyObject*)&PyInt_Type))); - PyTuple_SET_ITEM(__pyx_t_2, 1, ((PyObject *)((PyObject*)&PyInt_Type))); - __Pyx_GIVEREF(((PyObject *)((PyObject*)&PyInt_Type))); - __pyx_t_4 = 
PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 421; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_4))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_4); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "mtrand.pyx":422 - * else: - * array = np.empty(size, int) - * length = PyArray_SIZE(array) # <<<<<<<<<<<<<< - * array_data = array.data - * for i from 0 <= i < length: - */ - __pyx_v_length = PyArray_SIZE(arrayObject); - - /* "mtrand.pyx":423 - * array = np.empty(size, int) - * length = PyArray_SIZE(array) - * array_data = array.data # <<<<<<<<<<<<<< - * for i from 0 <= i < length: - * array_data[i] = func(state, n, m, N) - */ - __pyx_v_array_data = ((long *)arrayObject->data); - - /* "mtrand.pyx":424 - * length = PyArray_SIZE(array) - * array_data = array.data - * for i from 0 <= i < length: # <<<<<<<<<<<<<< - * array_data[i] = func(state, n, m, N) - * return array - */ - __pyx_t_5 = __pyx_v_length; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_5; __pyx_v_i++) { - - /* "mtrand.pyx":425 - * array_data = array.data - * for i from 0 <= i < length: - * array_data[i] = func(state, n, m, N) # <<<<<<<<<<<<<< - * return array - * - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, __pyx_v_n, __pyx_v_m, __pyx_v_N); - } - - /* "mtrand.pyx":426 - * for i from 0 <= i < length: - * array_data[i] = func(state, n, m, N) - * return array # <<<<<<<<<<<<<< - * - * cdef object discnmN_array(rk_state *state, rk_discnmN func, object size, - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - } - __pyx_L3:; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - 
__Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("mtrand.discnmN_array_sc"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":428 - * return array - * - * cdef object discnmN_array(rk_state *state, rk_discnmN func, object size, # <<<<<<<<<<<<<< - * ndarray on, ndarray om, ndarray oN): - * cdef long *array_data - */ - -static PyObject *__pyx_f_6mtrand_discnmN_array(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_discnmN __pyx_v_func, PyObject *__pyx_v_size, PyArrayObject *__pyx_v_on, PyArrayObject *__pyx_v_om, PyArrayObject *__pyx_v_oN) { - long *__pyx_v_array_data; - long *__pyx_v_on_data; - long *__pyx_v_om_data; - long *__pyx_v_oN_data; - PyArrayObject *arrayObject; - npy_intp __pyx_v_i; - PyArrayMultiIterObject *__pyx_v_multi; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - npy_intp __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - __Pyx_RefNannySetupContext("discnmN_array"); - __Pyx_INCREF(__pyx_v_size); - __Pyx_INCREF((PyObject *)__pyx_v_on); - __Pyx_INCREF((PyObject *)__pyx_v_om); - __Pyx_INCREF((PyObject *)__pyx_v_oN); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_multi = ((PyArrayMultiIterObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":439 - * cdef broadcast multi - * - * if size is None: # <<<<<<<<<<<<<< - * multi = PyArray_MultiIterNew(3, on, om, oN) - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":440 - * - * if size is None: - * multi = PyArray_MultiIterNew(3, on, om, oN) # <<<<<<<<<<<<<< - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) - * array_data = array.data - */ - __pyx_t_2 = PyArray_MultiIterNew(3, ((void *)__pyx_v_on), ((void *)__pyx_v_om), ((void 
*)__pyx_v_oN)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 440; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayMultiIterObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_multi)); - __pyx_v_multi = ((PyArrayMultiIterObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":441 - * if size is None: - * multi = PyArray_MultiIterNew(3, on, om, oN) - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) # <<<<<<<<<<<<<< - * array_data = array.data - * for i from 0 <= i < multi.size: - */ - __pyx_t_2 = PyArray_SimpleNew(__pyx_v_multi->nd, __pyx_v_multi->dimensions, NPY_LONG); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 441; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":442 - * multi = PyArray_MultiIterNew(3, on, om, oN) - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) - * array_data = array.data # <<<<<<<<<<<<<< - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 0) - */ - __pyx_v_array_data = ((long *)arrayObject->data); - - /* "mtrand.pyx":443 - * array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) - * array_data = array.data - * for i from 0 <= i < multi.size: # <<<<<<<<<<<<<< - * on_data = PyArray_MultiIter_DATA(multi, 0) - * om_data = PyArray_MultiIter_DATA(multi, 1) - */ - __pyx_t_3 = __pyx_v_multi->size; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_3; __pyx_v_i++) { - - /* "mtrand.pyx":444 - * array_data = array.data - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 0) # <<<<<<<<<<<<<< - * om_data = PyArray_MultiIter_DATA(multi, 1) - * oN_data = PyArray_MultiIter_DATA(multi, 2) - 
*/ - __pyx_v_on_data = ((long *)PyArray_MultiIter_DATA(__pyx_v_multi, 0)); - - /* "mtrand.pyx":445 - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 0) - * om_data = PyArray_MultiIter_DATA(multi, 1) # <<<<<<<<<<<<<< - * oN_data = PyArray_MultiIter_DATA(multi, 2) - * array_data[i] = func(state, on_data[0], om_data[0], oN_data[0]) - */ - __pyx_v_om_data = ((long *)PyArray_MultiIter_DATA(__pyx_v_multi, 1)); - - /* "mtrand.pyx":446 - * on_data = PyArray_MultiIter_DATA(multi, 0) - * om_data = PyArray_MultiIter_DATA(multi, 1) - * oN_data = PyArray_MultiIter_DATA(multi, 2) # <<<<<<<<<<<<<< - * array_data[i] = func(state, on_data[0], om_data[0], oN_data[0]) - * PyArray_MultiIter_NEXT(multi) - */ - __pyx_v_oN_data = ((long *)PyArray_MultiIter_DATA(__pyx_v_multi, 2)); - - /* "mtrand.pyx":447 - * om_data = PyArray_MultiIter_DATA(multi, 1) - * oN_data = PyArray_MultiIter_DATA(multi, 2) - * array_data[i] = func(state, on_data[0], om_data[0], oN_data[0]) # <<<<<<<<<<<<<< - * PyArray_MultiIter_NEXT(multi) - * else: - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, (__pyx_v_on_data[0]), (__pyx_v_om_data[0]), (__pyx_v_oN_data[0])); - - /* "mtrand.pyx":448 - * oN_data = PyArray_MultiIter_DATA(multi, 2) - * array_data[i] = func(state, on_data[0], om_data[0], oN_data[0]) - * PyArray_MultiIter_NEXT(multi) # <<<<<<<<<<<<<< - * else: - * array = np.empty(size, int) - */ - PyArray_MultiIter_NEXT(__pyx_v_multi); - } - goto __pyx_L3; - } - /*else*/ { - - /* "mtrand.pyx":450 - * PyArray_MultiIter_NEXT(multi) - * else: - * array = np.empty(size, int) # <<<<<<<<<<<<<< - * array_data = array.data - * multi = PyArray_MultiIterNew(4, array, on, om, - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 450; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_4)) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 450; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 450; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - __Pyx_INCREF(((PyObject *)((PyObject*)&PyInt_Type))); - PyTuple_SET_ITEM(__pyx_t_2, 1, ((PyObject *)((PyObject*)&PyInt_Type))); - __Pyx_GIVEREF(((PyObject *)((PyObject*)&PyInt_Type))); - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 450; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_5))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_5); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "mtrand.pyx":451 - * else: - * array = np.empty(size, int) - * array_data = array.data # <<<<<<<<<<<<<< - * multi = PyArray_MultiIterNew(4, array, on, om, - * oN) - */ - __pyx_v_array_data = ((long *)arrayObject->data); - - /* "mtrand.pyx":453 - * array_data = array.data - * multi = PyArray_MultiIterNew(4, array, on, om, - * oN) # <<<<<<<<<<<<<< - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") - */ - __pyx_t_5 = PyArray_MultiIterNew(4, ((void *)arrayObject), ((void *)__pyx_v_on), ((void *)__pyx_v_om), ((void *)__pyx_v_oN)); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 452; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)((PyArrayMultiIterObject *)__pyx_t_5))); - __Pyx_DECREF(((PyObject *)__pyx_v_multi)); - __pyx_v_multi 
= ((PyArrayMultiIterObject *)__pyx_t_5); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "mtrand.pyx":454 - * multi = PyArray_MultiIterNew(4, array, on, om, - * oN) - * if (multi.size != PyArray_SIZE(array)): # <<<<<<<<<<<<<< - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: - */ - __pyx_t_1 = (__pyx_v_multi->size != PyArray_SIZE(arrayObject)); - if (__pyx_t_1) { - - /* "mtrand.pyx":455 - * oN) - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") # <<<<<<<<<<<<<< - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 1) - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 455; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_1)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_1)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_1)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 455; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 455; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":456 - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: # <<<<<<<<<<<<<< - * on_data = PyArray_MultiIter_DATA(multi, 1) - * om_data = PyArray_MultiIter_DATA(multi, 2) - */ - __pyx_t_3 = __pyx_v_multi->size; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_3; __pyx_v_i++) { - - /* "mtrand.pyx":457 - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 1) # 
<<<<<<<<<<<<<< - * om_data = PyArray_MultiIter_DATA(multi, 2) - * oN_data = PyArray_MultiIter_DATA(multi, 3) - */ - __pyx_v_on_data = ((long *)PyArray_MultiIter_DATA(__pyx_v_multi, 1)); - - /* "mtrand.pyx":458 - * for i from 0 <= i < multi.size: - * on_data = PyArray_MultiIter_DATA(multi, 1) - * om_data = PyArray_MultiIter_DATA(multi, 2) # <<<<<<<<<<<<<< - * oN_data = PyArray_MultiIter_DATA(multi, 3) - * array_data[i] = func(state, on_data[0], om_data[0], oN_data[0]) - */ - __pyx_v_om_data = ((long *)PyArray_MultiIter_DATA(__pyx_v_multi, 2)); - - /* "mtrand.pyx":459 - * on_data = PyArray_MultiIter_DATA(multi, 1) - * om_data = PyArray_MultiIter_DATA(multi, 2) - * oN_data = PyArray_MultiIter_DATA(multi, 3) # <<<<<<<<<<<<<< - * array_data[i] = func(state, on_data[0], om_data[0], oN_data[0]) - * PyArray_MultiIter_NEXT(multi) - */ - __pyx_v_oN_data = ((long *)PyArray_MultiIter_DATA(__pyx_v_multi, 3)); - - /* "mtrand.pyx":460 - * om_data = PyArray_MultiIter_DATA(multi, 2) - * oN_data = PyArray_MultiIter_DATA(multi, 3) - * array_data[i] = func(state, on_data[0], om_data[0], oN_data[0]) # <<<<<<<<<<<<<< - * PyArray_MultiIter_NEXT(multi) - * - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, (__pyx_v_on_data[0]), (__pyx_v_om_data[0]), (__pyx_v_oN_data[0])); - - /* "mtrand.pyx":461 - * oN_data = PyArray_MultiIter_DATA(multi, 3) - * array_data[i] = func(state, on_data[0], om_data[0], oN_data[0]) - * PyArray_MultiIter_NEXT(multi) # <<<<<<<<<<<<<< - * - * return array - */ - PyArray_MultiIter_NEXT(__pyx_v_multi); - } - } - __pyx_L3:; - - /* "mtrand.pyx":463 - * PyArray_MultiIter_NEXT(multi) - * - * return array # <<<<<<<<<<<<<< - * - * cdef object discd_array_sc(rk_state *state, rk_discd func, object size, double a): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - 
__Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.discnmN_array"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF((PyObject *)__pyx_v_multi); - __Pyx_DECREF(__pyx_v_size); - __Pyx_DECREF((PyObject *)__pyx_v_on); - __Pyx_DECREF((PyObject *)__pyx_v_om); - __Pyx_DECREF((PyObject *)__pyx_v_oN); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":465 - * return array - * - * cdef object discd_array_sc(rk_state *state, rk_discd func, object size, double a): # <<<<<<<<<<<<<< - * cdef long *array_data - * cdef ndarray array "arrayObject" - */ - -static PyObject *__pyx_f_6mtrand_discd_array_sc(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_discd __pyx_v_func, PyObject *__pyx_v_size, double __pyx_v_a) { - long *__pyx_v_array_data; - PyArrayObject *arrayObject; - long __pyx_v_length; - long __pyx_v_i; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - long __pyx_t_5; - __Pyx_RefNannySetupContext("discd_array_sc"); - __Pyx_INCREF(__pyx_v_size); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":471 - * cdef long i - * - * if size is None: # <<<<<<<<<<<<<< - * return func(state, a) - * else: - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":472 - * - * if size is None: - * return func(state, a) # <<<<<<<<<<<<<< - * else: - * array = np.empty(size, int) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyInt_FromLong(__pyx_v_func(__pyx_v_state, __pyx_v_a)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 472; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L3; - } - /*else*/ { - - /* "mtrand.pyx":474 - * return func(state, a) - * else: - * array = np.empty(size, int) # <<<<<<<<<<<<<< - * length = 
PyArray_SIZE(array) - * array_data = array.data - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 474; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 474; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 474; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - __Pyx_INCREF(((PyObject *)((PyObject*)&PyInt_Type))); - PyTuple_SET_ITEM(__pyx_t_2, 1, ((PyObject *)((PyObject*)&PyInt_Type))); - __Pyx_GIVEREF(((PyObject *)((PyObject*)&PyInt_Type))); - __pyx_t_4 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 474; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_4))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_4); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "mtrand.pyx":475 - * else: - * array = np.empty(size, int) - * length = PyArray_SIZE(array) # <<<<<<<<<<<<<< - * array_data = array.data - * for i from 0 <= i < length: - */ - __pyx_v_length = PyArray_SIZE(arrayObject); - - /* "mtrand.pyx":476 - * array = np.empty(size, int) - * length = PyArray_SIZE(array) - * array_data = array.data # <<<<<<<<<<<<<< - * for i from 0 <= i < length: - * array_data[i] = func(state, a) - */ - __pyx_v_array_data = ((long *)arrayObject->data); - - /* "mtrand.pyx":477 - * length = 
PyArray_SIZE(array) - * array_data = array.data - * for i from 0 <= i < length: # <<<<<<<<<<<<<< - * array_data[i] = func(state, a) - * return array - */ - __pyx_t_5 = __pyx_v_length; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_5; __pyx_v_i++) { - - /* "mtrand.pyx":478 - * array_data = array.data - * for i from 0 <= i < length: - * array_data[i] = func(state, a) # <<<<<<<<<<<<<< - * return array - * - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, __pyx_v_a); - } - - /* "mtrand.pyx":479 - * for i from 0 <= i < length: - * array_data[i] = func(state, a) - * return array # <<<<<<<<<<<<<< - * - * cdef object discd_array(rk_state *state, rk_discd func, object size, ndarray oa): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - } - __pyx_L3:; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("mtrand.discd_array_sc"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":481 - * return array - * - * cdef object discd_array(rk_state *state, rk_discd func, object size, ndarray oa): # <<<<<<<<<<<<<< - * cdef long *array_data - * cdef double *oa_data - */ - -static PyObject *__pyx_f_6mtrand_discd_array(rk_state *__pyx_v_state, __pyx_t_6mtrand_rk_discd __pyx_v_func, PyObject *__pyx_v_size, PyArrayObject *__pyx_v_oa) { - long *__pyx_v_array_data; - double *__pyx_v_oa_data; - PyArrayObject *arrayObject; - npy_intp __pyx_v_length; - npy_intp __pyx_v_i; - PyArrayMultiIterObject *__pyx_v_multi; - PyArrayIterObject *__pyx_v_itera; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - npy_intp __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - 
__Pyx_RefNannySetupContext("discd_array"); - __Pyx_INCREF(__pyx_v_size); - __Pyx_INCREF((PyObject *)__pyx_v_oa); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_multi = ((PyArrayMultiIterObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_itera = ((PyArrayIterObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":490 - * cdef flatiter itera - * - * if size is None: # <<<<<<<<<<<<<< - * array = PyArray_SimpleNew(oa.nd, oa.dimensions, NPY_LONG) - * length = PyArray_SIZE(array) - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":491 - * - * if size is None: - * array = PyArray_SimpleNew(oa.nd, oa.dimensions, NPY_LONG) # <<<<<<<<<<<<<< - * length = PyArray_SIZE(array) - * array_data = array.data - */ - __pyx_t_2 = PyArray_SimpleNew(__pyx_v_oa->nd, __pyx_v_oa->dimensions, NPY_LONG); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 491; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":492 - * if size is None: - * array = PyArray_SimpleNew(oa.nd, oa.dimensions, NPY_LONG) - * length = PyArray_SIZE(array) # <<<<<<<<<<<<<< - * array_data = array.data - * itera = PyArray_IterNew(oa) - */ - __pyx_v_length = PyArray_SIZE(arrayObject); - - /* "mtrand.pyx":493 - * array = PyArray_SimpleNew(oa.nd, oa.dimensions, NPY_LONG) - * length = PyArray_SIZE(array) - * array_data = array.data # <<<<<<<<<<<<<< - * itera = PyArray_IterNew(oa) - * for i from 0 <= i < length: - */ - __pyx_v_array_data = ((long *)arrayObject->data); - - /* "mtrand.pyx":494 - * length = PyArray_SIZE(array) - * array_data = array.data - * itera = PyArray_IterNew(oa) # <<<<<<<<<<<<<< - * for i from 0 <= i < length: - * array_data[i] = func(state, ((itera.dataptr))[0]) - */ - __pyx_t_2 = 
PyArray_IterNew(((PyObject *)__pyx_v_oa)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 494; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayIterObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_itera)); - __pyx_v_itera = ((PyArrayIterObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":495 - * array_data = array.data - * itera = PyArray_IterNew(oa) - * for i from 0 <= i < length: # <<<<<<<<<<<<<< - * array_data[i] = func(state, ((itera.dataptr))[0]) - * PyArray_ITER_NEXT(itera) - */ - __pyx_t_3 = __pyx_v_length; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_3; __pyx_v_i++) { - - /* "mtrand.pyx":496 - * itera = PyArray_IterNew(oa) - * for i from 0 <= i < length: - * array_data[i] = func(state, ((itera.dataptr))[0]) # <<<<<<<<<<<<<< - * PyArray_ITER_NEXT(itera) - * else: - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, (((double *)__pyx_v_itera->dataptr)[0])); - - /* "mtrand.pyx":497 - * for i from 0 <= i < length: - * array_data[i] = func(state, ((itera.dataptr))[0]) - * PyArray_ITER_NEXT(itera) # <<<<<<<<<<<<<< - * else: - * array = np.empty(size, int) - */ - PyArray_ITER_NEXT(__pyx_v_itera); - } - goto __pyx_L3; - } - /*else*/ { - - /* "mtrand.pyx":499 - * PyArray_ITER_NEXT(itera) - * else: - * array = np.empty(size, int) # <<<<<<<<<<<<<< - * array_data = array.data - * multi = PyArray_MultiIterNew(2, array, oa) - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 499; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 499; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 499; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - __Pyx_INCREF(((PyObject *)((PyObject*)&PyInt_Type))); - PyTuple_SET_ITEM(__pyx_t_2, 1, ((PyObject *)((PyObject*)&PyInt_Type))); - __Pyx_GIVEREF(((PyObject *)((PyObject*)&PyInt_Type))); - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 499; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_5))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_5); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "mtrand.pyx":500 - * else: - * array = np.empty(size, int) - * array_data = array.data # <<<<<<<<<<<<<< - * multi = PyArray_MultiIterNew(2, array, oa) - * if (multi.size != PyArray_SIZE(array)): - */ - __pyx_v_array_data = ((long *)arrayObject->data); - - /* "mtrand.pyx":501 - * array = np.empty(size, int) - * array_data = array.data - * multi = PyArray_MultiIterNew(2, array, oa) # <<<<<<<<<<<<<< - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") - */ - __pyx_t_5 = PyArray_MultiIterNew(2, ((void *)arrayObject), ((void *)__pyx_v_oa)); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 501; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)((PyArrayMultiIterObject *)__pyx_t_5))); - __Pyx_DECREF(((PyObject *)__pyx_v_multi)); - __pyx_v_multi = ((PyArrayMultiIterObject *)__pyx_t_5); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "mtrand.pyx":502 - * array_data = array.data - * multi = PyArray_MultiIterNew(2, array, oa) - * if (multi.size != 
PyArray_SIZE(array)): # <<<<<<<<<<<<<< - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: - */ - __pyx_t_1 = (__pyx_v_multi->size != PyArray_SIZE(arrayObject)); - if (__pyx_t_1) { - - /* "mtrand.pyx":503 - * multi = PyArray_MultiIterNew(2, array, oa) - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") # <<<<<<<<<<<<<< - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 1) - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 503; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_1)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_1)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_1)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 503; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 503; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":504 - * if (multi.size != PyArray_SIZE(array)): - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: # <<<<<<<<<<<<<< - * oa_data = PyArray_MultiIter_DATA(multi, 1) - * array_data[i] = func(state, oa_data[0]) - */ - __pyx_t_3 = __pyx_v_multi->size; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_3; __pyx_v_i++) { - - /* "mtrand.pyx":505 - * raise ValueError("size is not compatible with inputs") - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 1) # <<<<<<<<<<<<<< - * array_data[i] = func(state, oa_data[0]) - * PyArray_MultiIter_NEXTi(multi, 1) - */ - __pyx_v_oa_data = ((double 
*)PyArray_MultiIter_DATA(__pyx_v_multi, 1)); - - /* "mtrand.pyx":506 - * for i from 0 <= i < multi.size: - * oa_data = PyArray_MultiIter_DATA(multi, 1) - * array_data[i] = func(state, oa_data[0]) # <<<<<<<<<<<<<< - * PyArray_MultiIter_NEXTi(multi, 1) - * return array - */ - (__pyx_v_array_data[__pyx_v_i]) = __pyx_v_func(__pyx_v_state, (__pyx_v_oa_data[0])); - - /* "mtrand.pyx":507 - * oa_data = PyArray_MultiIter_DATA(multi, 1) - * array_data[i] = func(state, oa_data[0]) - * PyArray_MultiIter_NEXTi(multi, 1) # <<<<<<<<<<<<<< - * return array - * - */ - PyArray_MultiIter_NEXTi(__pyx_v_multi, 1); - } - } - __pyx_L3:; - - /* "mtrand.pyx":508 - * array_data[i] = func(state, oa_data[0]) - * PyArray_MultiIter_NEXTi(multi, 1) - * return array # <<<<<<<<<<<<<< - * - * cdef double kahan_sum(double *darr, long n): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.discd_array"); - __pyx_r = 0; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF((PyObject *)__pyx_v_multi); - __Pyx_DECREF((PyObject *)__pyx_v_itera); - __Pyx_DECREF(__pyx_v_size); - __Pyx_DECREF((PyObject *)__pyx_v_oa); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":510 - * return array - * - * cdef double kahan_sum(double *darr, long n): # <<<<<<<<<<<<<< - * cdef double c, y, t, sum - * cdef long i - */ - -static double __pyx_f_6mtrand_kahan_sum(double *__pyx_v_darr, long __pyx_v_n) { - double __pyx_v_c; - double __pyx_v_y; - double __pyx_v_t; - double __pyx_v_sum; - long __pyx_v_i; - double __pyx_r; - long __pyx_t_1; - __Pyx_RefNannySetupContext("kahan_sum"); - - /* "mtrand.pyx":513 - * cdef double c, y, t, sum - * cdef long i - * sum = darr[0] # <<<<<<<<<<<<<< - * c = 0.0 
- * for i from 1 <= i < n:
- */
-  __pyx_v_sum = (__pyx_v_darr[0]);
-
-  /* "mtrand.pyx":514
- * cdef long i
- * sum = darr[0]
- * c = 0.0             # <<<<<<<<<<<<<<
- * for i from 1 <= i < n:
- * y = darr[i] - c
- */
-  __pyx_v_c = 0.0;
-
-  /* "mtrand.pyx":515
- * sum = darr[0]
- * c = 0.0
- * for i from 1 <= i < n:             # <<<<<<<<<<<<<<
- * y = darr[i] - c
- * t = sum + y
- */
-  __pyx_t_1 = __pyx_v_n;
-  for (__pyx_v_i = 1; __pyx_v_i < __pyx_t_1; __pyx_v_i++) {
-
-    /* "mtrand.pyx":516
- * c = 0.0
- * for i from 1 <= i < n:
- * y = darr[i] - c             # <<<<<<<<<<<<<<
- * t = sum + y
- * c = (t-sum) - y
- */
-    __pyx_v_y = ((__pyx_v_darr[__pyx_v_i]) - __pyx_v_c);
-
-    /* "mtrand.pyx":517
- * for i from 1 <= i < n:
- * y = darr[i] - c
- * t = sum + y             # <<<<<<<<<<<<<<
- * c = (t-sum) - y
- * sum = t
- */
-    __pyx_v_t = (__pyx_v_sum + __pyx_v_y);
-
-    /* "mtrand.pyx":518
- * y = darr[i] - c
- * t = sum + y
- * c = (t-sum) - y             # <<<<<<<<<<<<<<
- * sum = t
- * return sum
- */
-    __pyx_v_c = ((__pyx_v_t - __pyx_v_sum) - __pyx_v_y);
-
-    /* "mtrand.pyx":519
- * t = sum + y
- * c = (t-sum) - y
- * sum = t             # <<<<<<<<<<<<<<
- * return sum
- *
- */
-    __pyx_v_sum = __pyx_v_t;
-  }
-
-  /* "mtrand.pyx":520
- * c = (t-sum) - y
- * sum = t
- * return sum             # <<<<<<<<<<<<<<
- *
- * cdef class RandomState:
- */
-  __pyx_r = __pyx_v_sum;
-  goto __pyx_L0;
-
-  __pyx_r = 0;
-  __pyx_L0:;
-  __Pyx_RefNannyFinishContext();
-  return __pyx_r;
-}
-
-/* "mtrand.pyx":557
- * cdef rk_state *internal_state
- *
- * def __init__(self, seed=None):             # <<<<<<<<<<<<<<
- * self.internal_state = <rk_state*>PyMem_Malloc(sizeof(rk_state))
- *
- */
-
-static int __pyx_pf_6mtrand_11RandomState___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static int __pyx_pf_6mtrand_11RandomState___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
-  PyObject *__pyx_v_seed = 0;
-  int __pyx_r;
-  PyObject *__pyx_t_1 = NULL;
-  PyObject *__pyx_t_2 = NULL;
-  PyObject *__pyx_t_3 = NULL;
-  static PyObject
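The generated `kahan_sum` above (mtrand.pyx lines 510-520) is compensated summation: a running correction term `c` captures the low-order bits lost when a small addend meets a large running total, and re-injects them on the next iteration. A minimal Python sketch of the same algorithm (starting the total at 0.0 rather than `darr[0]`, which is equivalent):

```python
def kahan_sum(values):
    # Compensated (Kahan) summation: the rounding error of each addition
    # is recovered and fed back into the next step.
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in values:
        y = x - c            # re-inject the bits lost on the previous step
        t = total + y        # total is large, y is small: low bits of y may be lost
        c = (t - total) - y  # algebraically zero; numerically, the lost part
        total = t
    return total
```

Unlike naive left-to-right summation, whose worst-case error grows with the number of terms, this keeps the error bounded by a small constant number of ulps regardless of input length.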
**__pyx_pyargnames[] = {&__pyx_n_s__seed,0}; - __Pyx_RefNannySetupContext("__init__"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[1] = {0}; - values[0] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__seed); - if (unlikely(value)) { values[0] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "__init__") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 557; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_seed = values[0]; - } else { - __pyx_v_seed = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 1: __pyx_v_seed = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__init__", 0, 0, 1, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 557; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.__init__"); - return -1; - __pyx_L4_argument_unpacking_done:; - - /* "mtrand.pyx":558 - * - * def __init__(self, seed=None): - * self.internal_state = PyMem_Malloc(sizeof(rk_state)) # <<<<<<<<<<<<<< - * - * self.seed(seed) - */ - ((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state = ((rk_state *)PyMem_Malloc((sizeof(rk_state)))); - - /* "mtrand.pyx":560 - * self.internal_state = PyMem_Malloc(sizeof(rk_state)) - * - * self.seed(seed) # <<<<<<<<<<<<<< - * - * def __dealloc__(self): - */ - __pyx_t_1 = PyObject_GetAttr(__pyx_v_self, __pyx_n_s__seed); if 
(unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 560; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 560; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_seed); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_seed); - __Pyx_GIVEREF(__pyx_v_seed); - __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 560; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("mtrand.RandomState.__init__"); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":562 - * self.seed(seed) - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * if self.internal_state != NULL: - * PyMem_Free(self.internal_state) - */ - -static void __pyx_pf_6mtrand_11RandomState___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_pf_6mtrand_11RandomState___dealloc__(PyObject *__pyx_v_self) { - int __pyx_t_1; - __Pyx_RefNannySetupContext("__dealloc__"); - __Pyx_INCREF((PyObject *)__pyx_v_self); - - /* "mtrand.pyx":563 - * - * def __dealloc__(self): - * if self.internal_state != NULL: # <<<<<<<<<<<<<< - * PyMem_Free(self.internal_state) - * self.internal_state = NULL - */ - __pyx_t_1 = (((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state != NULL); - if (__pyx_t_1) { - - /* "mtrand.pyx":564 - * def __dealloc__(self): - * if self.internal_state != NULL: - * PyMem_Free(self.internal_state) # <<<<<<<<<<<<<< - * self.internal_state = NULL - * - */ - PyMem_Free(((struct 
__pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state); - - /* "mtrand.pyx":565 - * if self.internal_state != NULL: - * PyMem_Free(self.internal_state) - * self.internal_state = NULL # <<<<<<<<<<<<<< - * - * def seed(self, seed=None): - */ - ((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state = NULL; - goto __pyx_L5; - } - __pyx_L5:; - - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_RefNannyFinishContext(); -} - -/* "mtrand.pyx":567 - * self.internal_state = NULL - * - * def seed(self, seed=None): # <<<<<<<<<<<<<< - * """ - * seed(seed=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_seed(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_seed[] = "\n seed(seed=None)\n\n Seed the generator.\n\n This method is called when `RandomState` is initialized. It can be\n called again to re-seed the generator. For details, see `RandomState`.\n\n Parameters\n ----------\n seed : int or array_like, optional\n Seed for `RandomState`.\n\n See Also\n --------\n RandomState\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_seed(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_seed = 0; - rk_error __pyx_v_errcode; - PyArrayObject *arrayObject_obj; - PyObject *__pyx_v_iseed; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - unsigned long __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__seed,0}; - __Pyx_RefNannySetupContext("seed"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[1] = {0}; - values[0] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, 
__pyx_n_s__seed); - if (unlikely(value)) { values[0] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "seed") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 567; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_seed = values[0]; - } else { - __pyx_v_seed = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 1: __pyx_v_seed = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("seed", 0, 0, 1, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 567; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.seed"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_seed); - arrayObject_obj = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_iseed = Py_None; __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":588 - * cdef rk_error errcode - * cdef ndarray obj "arrayObject_obj" - * if seed is None: # <<<<<<<<<<<<<< - * errcode = rk_randomseed(self.internal_state) - * elif type(seed) is int: - */ - __pyx_t_1 = (__pyx_v_seed == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":589 - * cdef ndarray obj "arrayObject_obj" - * if seed is None: - * errcode = rk_randomseed(self.internal_state) # <<<<<<<<<<<<<< - * elif type(seed) is int: - * rk_seed(seed, self.internal_state) - */ - __pyx_v_errcode = rk_randomseed(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state); - goto __pyx_L6; - } - - /* "mtrand.pyx":590 - * if seed is None: - * errcode = rk_randomseed(self.internal_state) - * elif type(seed) is int: # <<<<<<<<<<<<<< - * rk_seed(seed, self.internal_state) - * elif isinstance(seed, np.integer): - */ - __pyx_t_1 
= (((PyObject *)Py_TYPE(__pyx_v_seed)) == ((PyObject *)((PyObject*)&PyInt_Type))); - if (__pyx_t_1) { - - /* "mtrand.pyx":591 - * errcode = rk_randomseed(self.internal_state) - * elif type(seed) is int: - * rk_seed(seed, self.internal_state) # <<<<<<<<<<<<<< - * elif isinstance(seed, np.integer): - * iseed = int(seed) - */ - __pyx_t_2 = __Pyx_PyInt_AsUnsignedLong(__pyx_v_seed); if (unlikely((__pyx_t_2 == (unsigned long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 591; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - rk_seed(__pyx_t_2, ((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state); - goto __pyx_L6; - } - - /* "mtrand.pyx":592 - * elif type(seed) is int: - * rk_seed(seed, self.internal_state) - * elif isinstance(seed, np.integer): # <<<<<<<<<<<<<< - * iseed = int(seed) - * rk_seed(iseed, self.internal_state) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__integer); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = PyObject_IsInstance(__pyx_v_seed, __pyx_t_4); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":593 - * rk_seed(seed, self.internal_state) - * elif isinstance(seed, np.integer): - * iseed = int(seed) # <<<<<<<<<<<<<< - * rk_seed(iseed, self.internal_state) - * else: - */ - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_seed); - 
PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_seed); - __Pyx_GIVEREF(__pyx_v_seed); - __pyx_t_3 = PyObject_Call(((PyObject *)((PyObject*)&PyInt_Type)), __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_v_iseed); - __pyx_v_iseed = __pyx_t_3; - __pyx_t_3 = 0; - - /* "mtrand.pyx":594 - * elif isinstance(seed, np.integer): - * iseed = int(seed) - * rk_seed(iseed, self.internal_state) # <<<<<<<<<<<<<< - * else: - * obj = PyArray_ContiguousFromObject(seed, NPY_LONG, 1, 1) - */ - __pyx_t_2 = __Pyx_PyInt_AsUnsignedLong(__pyx_v_iseed); if (unlikely((__pyx_t_2 == (unsigned long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - rk_seed(__pyx_t_2, ((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state); - goto __pyx_L6; - } - /*else*/ { - - /* "mtrand.pyx":596 - * rk_seed(iseed, self.internal_state) - * else: - * obj = PyArray_ContiguousFromObject(seed, NPY_LONG, 1, 1) # <<<<<<<<<<<<<< - * init_by_array(self.internal_state, (obj.data), - * obj.dimensions[0]) - */ - __pyx_t_3 = PyArray_ContiguousFromObject(__pyx_v_seed, NPY_LONG, 1, 1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 596; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)arrayObject_obj)); - arrayObject_obj = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":598 - * obj = PyArray_ContiguousFromObject(seed, NPY_LONG, 1, 1) - * init_by_array(self.internal_state, (obj.data), - * obj.dimensions[0]) # <<<<<<<<<<<<<< - * - * def get_state(self): - */ - init_by_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, ((unsigned long *)arrayObject_obj->data), 
(arrayObject_obj->dimensions[0])); - } - __pyx_L6:; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("mtrand.RandomState.seed"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject_obj); - __Pyx_DECREF(__pyx_v_iseed); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_seed); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":600 - * obj.dimensions[0]) - * - * def get_state(self): # <<<<<<<<<<<<<< - * """ - * get_state() - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_get_state(PyObject *__pyx_v_self, PyObject *unused); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_get_state[] = "\n get_state()\n\n Return a tuple representing the internal state of the generator.\n\n For more details, see `set_state`.\n\n Returns\n -------\n out : tuple(str, ndarray of 624 uints, int, int, float)\n The returned tuple has the following items:\n\n 1. the string 'MT19937'.\n 2. a 1-D array of 624 unsigned integer keys.\n 3. an integer ``pos``.\n 4. an integer ``has_gauss``.\n 5. a float ``cached_gaussian``.\n\n See Also\n --------\n set_state\n\n Notes\n -----\n `set_state` and `get_state` are not needed to work with any of the\n random distributions in NumPy. 
If the internal state is manually altered,\n the user should know exactly what he/she is doing.\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_get_state(PyObject *__pyx_v_self, PyObject *unused) { - PyArrayObject *arrayObject_state; - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - __Pyx_RefNannySetupContext("get_state"); - arrayObject_state = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":631 - * """ - * cdef ndarray state "arrayObject_state" - * state = np.empty(624, np.uint) # <<<<<<<<<<<<<< - * memcpy((state.data), (self.internal_state.key), 624*sizeof(long)) - * state = np.asarray(state, np.uint32) - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 631; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__empty); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 631; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 631; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__uint); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 631; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 631; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_int_624); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_int_624); - __Pyx_GIVEREF(__pyx_int_624); - PyTuple_SET_ITEM(__pyx_t_1, 1, 
__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 631; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)arrayObject_state)); - arrayObject_state = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":632 - * cdef ndarray state "arrayObject_state" - * state = np.empty(624, np.uint) - * memcpy((state.data), (self.internal_state.key), 624*sizeof(long)) # <<<<<<<<<<<<<< - * state = np.asarray(state, np.uint32) - * return ('MT19937', state, self.internal_state.pos, - */ - memcpy(((void *)arrayObject_state->data), ((void *)((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state->key), (624 * (sizeof(long)))); - - /* "mtrand.pyx":633 - * state = np.empty(624, np.uint) - * memcpy((state.data), (self.internal_state.key), 624*sizeof(long)) - * state = np.asarray(state, np.uint32) # <<<<<<<<<<<<<< - * return ('MT19937', state, self.internal_state.pos, - * self.internal_state.has_gauss, self.internal_state.gauss) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 633; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__asarray); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 633; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 633; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = 
PyObject_GetAttr(__pyx_t_3, __pyx_n_s__uint32); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 633; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 633; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)arrayObject_state)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)arrayObject_state)); - __Pyx_GIVEREF(((PyObject *)arrayObject_state)); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 633; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)arrayObject_state)); - arrayObject_state = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":634 - * memcpy((state.data), (self.internal_state.key), 624*sizeof(long)) - * state = np.asarray(state, np.uint32) - * return ('MT19937', state, self.internal_state.pos, # <<<<<<<<<<<<<< - * self.internal_state.has_gauss, self.internal_state.gauss) - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyInt_FromLong(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state->pos); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 634; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - - /* "mtrand.pyx":635 - * state = np.asarray(state, np.uint32) - * return ('MT19937', state, self.internal_state.pos, - * self.internal_state.has_gauss, self.internal_state.gauss) # <<<<<<<<<<<<<< - * - * def set_state(self, state): - */ - __pyx_t_3 = 
PyInt_FromLong(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state->has_gauss); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 635; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyFloat_FromDouble(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state->gauss); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 635; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = PyTuple_New(5); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 634; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)__pyx_n_s__MT19937)); - PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_n_s__MT19937)); - __Pyx_GIVEREF(((PyObject *)__pyx_n_s__MT19937)); - __Pyx_INCREF(((PyObject *)arrayObject_state)); - PyTuple_SET_ITEM(__pyx_t_4, 1, ((PyObject *)arrayObject_state)); - __Pyx_GIVEREF(((PyObject *)arrayObject_state)); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_4, 3, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 4, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_2 = 0; - __pyx_t_3 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("mtrand.RandomState.get_state"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject_state); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":637 - * self.internal_state.has_gauss, self.internal_state.gauss) - * - * def set_state(self, state): # <<<<<<<<<<<<<< - * """ - * set_state(state) - */ - -static PyObject 
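The `get_state` docstring above spells out the five-element state tuple `('MT19937', keys, pos, has_gauss, cached_gaussian)`. As an illustrative sketch using the public `numpy.random.RandomState` API that this generated module implements, capturing and restoring the state reproduces the stream exactly:

```python
import numpy as np

rs = np.random.RandomState(12345)
state = rs.get_state()        # ('MT19937', 624 uint32 keys, pos, has_gauss, cached_gaussian)
first = rs.standard_normal(3)

rs2 = np.random.RandomState()  # seeded independently
rs2.set_state(state)           # overwrite its Mersenne Twister state
# The restored generator replays the identical draws.
assert (rs2.standard_normal(3) == first).all()
```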
*__pyx_pf_6mtrand_11RandomState_set_state(PyObject *__pyx_v_self, PyObject *__pyx_v_state); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_set_state[] = "\n set_state(state)\n\n Set the internal state of the generator from a tuple.\n\n For use if one has reason to manually (re-)set the internal state of the\n \"Mersenne Twister\"[1]_ pseudo-random number generating algorithm.\n\n Parameters\n ----------\n state : tuple(str, ndarray of 624 uints, int, int, float)\n The `state` tuple has the following items:\n\n 1. the string 'MT19937', specifying the Mersenne Twister algorithm.\n 2. a 1-D array of 624 unsigned integers ``keys``.\n 3. an integer ``pos``.\n 4. an integer ``has_gauss``.\n 5. a float ``cached_gaussian``.\n\n Returns\n -------\n out : None\n Returns 'None' on success.\n\n See Also\n --------\n get_state\n\n Notes\n -----\n `set_state` and `get_state` are not needed to work with any of the\n random distributions in NumPy. If the internal state is manually altered,\n the user should know exactly what he/she is doing.\n\n For backwards compatibility, the form (str, array of 624 uints, int) is\n also accepted although it is missing some information about the cached\n Gaussian value: ``state = ('MT19937', keys, pos)``.\n\n References\n ----------\n .. [1] M. Matsumoto and T. Nishimura, \"Mersenne Twister: A\n 623-dimensionally equidistributed uniform pseudorandom number\n generator,\" *ACM Trans. on Modeling and Computer Simulation*,\n Vol. 8, No. 1, pp. 3-30, Jan. 
1998.\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_set_state(PyObject *__pyx_v_self, PyObject *__pyx_v_state) { - PyArrayObject *arrayObject_obj; - int __pyx_v_pos; - PyObject *__pyx_v_algorithm_name; - PyObject *__pyx_v_key; - PyObject *__pyx_v_has_gauss; - PyObject *__pyx_v_cached_gaussian; - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - PyObject *__pyx_t_6 = NULL; - Py_ssize_t __pyx_t_7; - double __pyx_t_8; - __Pyx_RefNannySetupContext("set_state"); - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_state); - arrayObject_obj = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_algorithm_name = Py_None; __Pyx_INCREF(Py_None); - __pyx_v_key = Py_None; __Pyx_INCREF(Py_None); - __pyx_v_has_gauss = Py_None; __Pyx_INCREF(Py_None); - __pyx_v_cached_gaussian = Py_None; __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":686 - * cdef ndarray obj "arrayObject_obj" - * cdef int pos - * algorithm_name = state[0] # <<<<<<<<<<<<<< - * if algorithm_name != 'MT19937': - * raise ValueError("algorithm must be 'MT19937'") - */ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_state, 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 686; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_v_algorithm_name); - __pyx_v_algorithm_name = __pyx_t_1; - __pyx_t_1 = 0; - - /* "mtrand.pyx":687 - * cdef int pos - * algorithm_name = state[0] - * if algorithm_name != 'MT19937': # <<<<<<<<<<<<<< - * raise ValueError("algorithm must be 'MT19937'") - * key, pos = state[1:3] - */ - __pyx_t_1 = PyObject_RichCompare(__pyx_v_algorithm_name, ((PyObject *)__pyx_n_s__MT19937), Py_NE); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 687; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if 
(unlikely(__pyx_t_2 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 687; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_2) { - - /* "mtrand.pyx":688 - * algorithm_name = state[0] - * if algorithm_name != 'MT19937': - * raise ValueError("algorithm must be 'MT19937'") # <<<<<<<<<<<<<< - * key, pos = state[1:3] - * if len(state) == 3: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 688; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_2)); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)__pyx_kp_s_2)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_2)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 688; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 688; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L5; - } - __pyx_L5:; - - /* "mtrand.pyx":689 - * if algorithm_name != 'MT19937': - * raise ValueError("algorithm must be 'MT19937'") - * key, pos = state[1:3] # <<<<<<<<<<<<<< - * if len(state) == 3: - * has_gauss = 0 - */ - __pyx_t_3 = PySequence_GetSlice(__pyx_v_state, 1, 3); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 689; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - if (PyTuple_CheckExact(__pyx_t_3) && likely(PyTuple_GET_SIZE(__pyx_t_3) == 2)) { - PyObject* tuple = __pyx_t_3; - __pyx_t_1 = PyTuple_GET_ITEM(tuple, 0); __Pyx_INCREF(__pyx_t_1); - __pyx_t_4 = PyTuple_GET_ITEM(tuple, 1); __Pyx_INCREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyInt_AsInt(__pyx_t_4); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 689; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_v_key); - __pyx_v_key = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_pos = __pyx_t_5; - } else { - __pyx_t_6 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 689; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = __Pyx_UnpackItem(__pyx_t_6, 0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 689; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_UnpackItem(__pyx_t_6, 1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 689; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyInt_AsInt(__pyx_t_4); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 689; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__Pyx_EndUnpack(__pyx_t_6) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 689; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_v_key); - __pyx_v_key = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_pos = __pyx_t_5; - } - - /* "mtrand.pyx":690 - * raise ValueError("algorithm must be 'MT19937'") - * key, pos = state[1:3] - * if len(state) == 3: # <<<<<<<<<<<<<< - * has_gauss = 0 - * cached_gaussian = 0.0 - */ - __pyx_t_7 = PyObject_Length(__pyx_v_state); if (unlikely(__pyx_t_7 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 690; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_t_2 = (__pyx_t_7 == 3); - if (__pyx_t_2) { - - /* "mtrand.pyx":691 - * key, pos = state[1:3] - * if len(state) == 3: - * has_gauss = 0 # <<<<<<<<<<<<<< - * cached_gaussian = 0.0 - * else: - */ - 
__Pyx_INCREF(__pyx_int_0); - __Pyx_DECREF(__pyx_v_has_gauss); - __pyx_v_has_gauss = __pyx_int_0; - - /* "mtrand.pyx":692 - * if len(state) == 3: - * has_gauss = 0 - * cached_gaussian = 0.0 # <<<<<<<<<<<<<< - * else: - * has_gauss, cached_gaussian = state[3:5] - */ - __pyx_t_3 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 692; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_v_cached_gaussian); - __pyx_v_cached_gaussian = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L6; - } - /*else*/ { - - /* "mtrand.pyx":694 - * cached_gaussian = 0.0 - * else: - * has_gauss, cached_gaussian = state[3:5] # <<<<<<<<<<<<<< - * try: - * obj = PyArray_ContiguousFromObject(key, NPY_ULONG, 1, 1) - */ - __pyx_t_3 = PySequence_GetSlice(__pyx_v_state, 3, 5); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 694; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - if (PyTuple_CheckExact(__pyx_t_3) && likely(PyTuple_GET_SIZE(__pyx_t_3) == 2)) { - PyObject* tuple = __pyx_t_3; - __pyx_t_4 = PyTuple_GET_ITEM(tuple, 0); __Pyx_INCREF(__pyx_t_4); - __pyx_t_1 = PyTuple_GET_ITEM(tuple, 1); __Pyx_INCREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_v_has_gauss); - __pyx_v_has_gauss = __pyx_t_4; - __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_v_cached_gaussian); - __pyx_v_cached_gaussian = __pyx_t_1; - __pyx_t_1 = 0; - } else { - __pyx_t_6 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 694; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = __Pyx_UnpackItem(__pyx_t_6, 0); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 694; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = __Pyx_UnpackItem(__pyx_t_6, 1); if (unlikely(!__pyx_t_1)) {__pyx_filename = 
__pyx_f[0]; __pyx_lineno = 694; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_EndUnpack(__pyx_t_6) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 694; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_v_has_gauss); - __pyx_v_has_gauss = __pyx_t_4; - __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_v_cached_gaussian); - __pyx_v_cached_gaussian = __pyx_t_1; - __pyx_t_1 = 0; - } - } - __pyx_L6:; - - /* "mtrand.pyx":695 - * else: - * has_gauss, cached_gaussian = state[3:5] - * try: # <<<<<<<<<<<<<< - * obj = PyArray_ContiguousFromObject(key, NPY_ULONG, 1, 1) - * except TypeError: - */ - { - PyObject *__pyx_save_exc_type, *__pyx_save_exc_value, *__pyx_save_exc_tb; - __Pyx_ExceptionSave(&__pyx_save_exc_type, &__pyx_save_exc_value, &__pyx_save_exc_tb); - __Pyx_XGOTREF(__pyx_save_exc_type); - __Pyx_XGOTREF(__pyx_save_exc_value); - __Pyx_XGOTREF(__pyx_save_exc_tb); - /*try:*/ { - - /* "mtrand.pyx":696 - * has_gauss, cached_gaussian = state[3:5] - * try: - * obj = PyArray_ContiguousFromObject(key, NPY_ULONG, 1, 1) # <<<<<<<<<<<<<< - * except TypeError: - * # compatibility -- could be an older pickle - */ - __pyx_t_3 = PyArray_ContiguousFromObject(__pyx_v_key, NPY_ULONG, 1, 1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 696; __pyx_clineno = __LINE__; goto __pyx_L7_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)arrayObject_obj)); - arrayObject_obj = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_XDECREF(__pyx_save_exc_type); __pyx_save_exc_type = 0; - __Pyx_XDECREF(__pyx_save_exc_value); __pyx_save_exc_value = 0; - __Pyx_XDECREF(__pyx_save_exc_tb); __pyx_save_exc_tb = 0; - goto __pyx_L14_try_end; - __pyx_L7_error:; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - 
__Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":697 - * try: - * obj = PyArray_ContiguousFromObject(key, NPY_ULONG, 1, 1) - * except TypeError: # <<<<<<<<<<<<<< - * # compatibility -- could be an older pickle - * obj = PyArray_ContiguousFromObject(key, NPY_LONG, 1, 1) - */ - __pyx_t_5 = PyErr_ExceptionMatches(__pyx_builtin_TypeError); - if (__pyx_t_5) { - __Pyx_AddTraceback("mtrand.RandomState.set_state"); - if (__Pyx_GetException(&__pyx_t_3, &__pyx_t_1, &__pyx_t_4) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 697; __pyx_clineno = __LINE__; goto __pyx_L9_except_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GOTREF(__pyx_t_4); - - /* "mtrand.pyx":699 - * except TypeError: - * # compatibility -- could be an older pickle - * obj = PyArray_ContiguousFromObject(key, NPY_LONG, 1, 1) # <<<<<<<<<<<<<< - * if obj.dimensions[0] != 624: - * raise ValueError("state must be 624 longs") - */ - __pyx_t_6 = PyArray_ContiguousFromObject(__pyx_v_key, NPY_LONG, 1, 1); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 699; __pyx_clineno = __LINE__; goto __pyx_L9_except_error;} - __Pyx_GOTREF(__pyx_t_6); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_6))); - __Pyx_DECREF(((PyObject *)arrayObject_obj)); - arrayObject_obj = ((PyArrayObject *)__pyx_t_6); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L8_exception_handled; - } - __pyx_L9_except_error:; - __Pyx_XGIVEREF(__pyx_save_exc_type); - __Pyx_XGIVEREF(__pyx_save_exc_value); - __Pyx_XGIVEREF(__pyx_save_exc_tb); - __Pyx_ExceptionReset(__pyx_save_exc_type, __pyx_save_exc_value, __pyx_save_exc_tb); - goto __pyx_L1_error; - __pyx_L8_exception_handled:; - __Pyx_XGIVEREF(__pyx_save_exc_type); - __Pyx_XGIVEREF(__pyx_save_exc_value); - __Pyx_XGIVEREF(__pyx_save_exc_tb); - __Pyx_ExceptionReset(__pyx_save_exc_type, __pyx_save_exc_value, 
__pyx_save_exc_tb); - __pyx_L14_try_end:; - } - - /* "mtrand.pyx":700 - * # compatibility -- could be an older pickle - * obj = PyArray_ContiguousFromObject(key, NPY_LONG, 1, 1) - * if obj.dimensions[0] != 624: # <<<<<<<<<<<<<< - * raise ValueError("state must be 624 longs") - * memcpy((self.internal_state.key), (obj.data), 624*sizeof(long)) - */ - __pyx_t_2 = ((arrayObject_obj->dimensions[0]) != 624); - if (__pyx_t_2) { - - /* "mtrand.pyx":701 - * obj = PyArray_ContiguousFromObject(key, NPY_LONG, 1, 1) - * if obj.dimensions[0] != 624: - * raise ValueError("state must be 624 longs") # <<<<<<<<<<<<<< - * memcpy((self.internal_state.key), (obj.data), 624*sizeof(long)) - * self.internal_state.pos = pos - */ - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 701; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_3)); - PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_kp_s_3)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_3)); - __pyx_t_1 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 701; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_1, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 701; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L17; - } - __pyx_L17:; - - /* "mtrand.pyx":702 - * if obj.dimensions[0] != 624: - * raise ValueError("state must be 624 longs") - * memcpy((self.internal_state.key), (obj.data), 624*sizeof(long)) # <<<<<<<<<<<<<< - * self.internal_state.pos = pos - * self.internal_state.has_gauss = has_gauss - */ - memcpy(((void *)((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state->key), ((void *)arrayObject_obj->data), (624 * (sizeof(long)))); - - /* "mtrand.pyx":703 - * raise 
ValueError("state must be 624 longs") - * memcpy((self.internal_state.key), (obj.data), 624*sizeof(long)) - * self.internal_state.pos = pos # <<<<<<<<<<<<<< - * self.internal_state.has_gauss = has_gauss - * self.internal_state.gauss = cached_gaussian - */ - ((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state->pos = __pyx_v_pos; - - /* "mtrand.pyx":704 - * memcpy((self.internal_state.key), (obj.data), 624*sizeof(long)) - * self.internal_state.pos = pos - * self.internal_state.has_gauss = has_gauss # <<<<<<<<<<<<<< - * self.internal_state.gauss = cached_gaussian - * - */ - __pyx_t_5 = __Pyx_PyInt_AsInt(__pyx_v_has_gauss); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 704; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - ((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state->has_gauss = __pyx_t_5; - - /* "mtrand.pyx":705 - * self.internal_state.pos = pos - * self.internal_state.has_gauss = has_gauss - * self.internal_state.gauss = cached_gaussian # <<<<<<<<<<<<<< - * - * # Pickling support: - */ - __pyx_t_8 = __pyx_PyFloat_AsDouble(__pyx_v_cached_gaussian); if (unlikely((__pyx_t_8 == (double)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 705; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - ((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state->gauss = __pyx_t_8; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("mtrand.RandomState.set_state"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject_obj); - __Pyx_DECREF(__pyx_v_algorithm_name); - __Pyx_DECREF(__pyx_v_key); - __Pyx_DECREF(__pyx_v_has_gauss); - __Pyx_DECREF(__pyx_v_cached_gaussian); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_state); - __Pyx_XGIVEREF(__pyx_r); - 
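The generated branches above implement `RandomState.set_state` from `mtrand.pyx`: verify that `state[0]` is `'MT19937'`, unpack `key, pos = state[1:3]`, default `has_gauss`/`cached_gaussian` to `0`/`0.0` when the tuple has only three elements (an older pickle), and require a 624-element key before the `memcpy` into `internal_state`. A minimal pure-Python sketch of that validation logic (the function name `parse_mt19937_state` is ours, not numpy's):

```python
def parse_mt19937_state(state):
    """Validate a ('MT19937', key, pos[, has_gauss, cached_gaussian]) tuple,
    mirroring the checks in mtrand.pyx's set_state."""
    algorithm_name = state[0]
    if algorithm_name != 'MT19937':
        raise ValueError("algorithm must be 'MT19937'")
    key, pos = state[1:3]
    if len(state) == 3:
        # compatibility: older pickles omit the cached-Gaussian fields
        has_gauss, cached_gaussian = 0, 0.0
    else:
        has_gauss, cached_gaussian = state[3:5]
    key = list(key)
    if len(key) != 624:
        raise ValueError("state must be 624 longs")
    return key, pos, has_gauss, cached_gaussian
```

The C version additionally retries the key conversion with `NPY_LONG` when `NPY_ULONG` raises `TypeError`, again for old-pickle compatibility.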
__Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":708 - * - * # Pickling support: - * def __getstate__(self): # <<<<<<<<<<<<<< - * return self.get_state() - * - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState___getstate__(PyObject *__pyx_v_self, PyObject *unused); /*proto*/ -static PyObject *__pyx_pf_6mtrand_11RandomState___getstate__(PyObject *__pyx_v_self, PyObject *unused) { - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - __Pyx_RefNannySetupContext("__getstate__"); - - /* "mtrand.pyx":709 - * # Pickling support: - * def __getstate__(self): - * return self.get_state() # <<<<<<<<<<<<<< - * - * def __setstate__(self, state): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyObject_GetAttr(__pyx_v_self, __pyx_n_s__get_state); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 709; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_Call(__pyx_t_1, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 709; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("mtrand.RandomState.__getstate__"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":711 - * return self.get_state() - * - * def __setstate__(self, state): # <<<<<<<<<<<<<< - * self.set_state(state) - * - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState___setstate__(PyObject *__pyx_v_self, PyObject *__pyx_v_state); /*proto*/ -static PyObject *__pyx_pf_6mtrand_11RandomState___setstate__(PyObject *__pyx_v_self, PyObject *__pyx_v_state) { - PyObject *__pyx_r = NULL; - 
PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - __Pyx_RefNannySetupContext("__setstate__"); - - /* "mtrand.pyx":712 - * - * def __setstate__(self, state): - * self.set_state(state) # <<<<<<<<<<<<<< - * - * def __reduce__(self): - */ - __pyx_t_1 = PyObject_GetAttr(__pyx_v_self, __pyx_n_s__set_state); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 712; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 712; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 712; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("mtrand.RandomState.__setstate__"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":714 - * self.set_state(state) - * - * def __reduce__(self): # <<<<<<<<<<<<<< - * return (np.random.__RandomState_ctor, (), self.get_state()) - * - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState___reduce__(PyObject *__pyx_v_self, PyObject *unused); /*proto*/ -static PyObject *__pyx_pf_6mtrand_11RandomState___reduce__(PyObject *__pyx_v_self, PyObject *unused) { - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - 
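The pickling hooks in this stretch of `mtrand.pyx` follow a standard pattern: `__getstate__` delegates to `get_state()`, `__setstate__` delegates to `set_state(state)`, and `__reduce__` returns a `(reconstructor, args, state)` triple so `pickle` can rebuild the object and then restore its state. A toy class showing the same shape (names `TinyRNG`/`next` are ours; the stdlib `random.Random` stands in for the Mersenne Twister internal state):

```python
import pickle
import random

class TinyRNG:
    """Toy RNG with the same pickling pattern as mtrand's RandomState:
    dunder methods delegate to get_state/set_state, and __reduce__
    names a reconstructor plus the captured state."""
    def __init__(self, seed=0):
        self._rng = random.Random(seed)
    def get_state(self):
        return self._rng.getstate()
    def set_state(self, state):
        self._rng.setstate(state)
    def __getstate__(self):
        return self.get_state()
    def __setstate__(self, state):
        self.set_state(state)
    def __reduce__(self):
        # pickle calls TinyRNG(), then __setstate__(state)
        return (TinyRNG, (), self.get_state())
    def next(self):
        return self._rng.random()
```

In the real module the reconstructor is `np.random.__RandomState_ctor` rather than the class itself, so unpickling does not depend on the `RandomState` constructor's signature.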
__Pyx_RefNannySetupContext("__reduce__"); - - /* "mtrand.pyx":715 - * - * def __reduce__(self): - * return (np.random.__RandomState_ctor, (), self.get_state()) # <<<<<<<<<<<<<< - * - * # Basic distributions: - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 715; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__random); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 715; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s____RandomState_ctor); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 715; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_v_self, __pyx_n_s__get_state); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 715; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_Call(__pyx_t_2, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 715; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 715; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)__pyx_empty_tuple)); - PyTuple_SET_ITEM(__pyx_t_2, 1, ((PyObject *)__pyx_empty_tuple)); - __Pyx_GIVEREF(((PyObject *)__pyx_empty_tuple)); - PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_1 = 0; - __pyx_t_3 = 0; - __pyx_r = 
__pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("mtrand.RandomState.__reduce__"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":718 - * - * # Basic distributions: - * def random_sample(self, size=None): # <<<<<<<<<<<<<< - * """ - * random_sample(size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_random_sample(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_random_sample[] = "\n random_sample(size=None)\n\n Return random floats in the half-open interval [0.0, 1.0).\n\n Results are from the \"continuous uniform\" distribution over the\n stated interval. To sample :math:`Unif[a, b), b > a` multiply\n the output of `random_sample` by `(b-a)` and add `a`::\n\n (b - a) * random_sample() + a\n\n Parameters\n ----------\n size : int or tuple of ints, optional\n Defines the shape of the returned array of random floats. 
If None\n (the default), returns a single float.\n\n Returns\n -------\n out : float or ndarray of floats\n Array of random floats of shape `size` (unless ``size=None``, in which\n case a single float is returned).\n\n Examples\n --------\n >>> np.random.random_sample()\n 0.47108547995356098\n >>> type(np.random.random_sample())\n \n >>> np.random.random_sample((5,))\n array([ 0.30220482, 0.86820401, 0.1654503 , 0.11659149, 0.54323428])\n\n Three-by-two array of random numbers from [-5, 0):\n\n >>> 5 * np.random.random_sample((3, 2)) - 5\n array([[-3.99149989, -0.52338984],\n [-2.99091858, -0.79479508],\n [-1.23204345, -1.75224494]])\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_random_sample(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_size = 0; - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("random_sample"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[1] = {0}; - values[0] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[0] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "random_sample") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 718; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_size = values[0]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 1: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - 
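The `random_sample` docstring above gives the standard rescaling recipe: to draw from `Unif[a, b)`, multiply a `[0.0, 1.0)` sample by `(b - a)` and add `a`. A self-contained sketch of that recipe, using the stdlib's `random.random` as a stand-in for the `rk_double` source (the helper name `uniform_on` is ours):

```python
import random

def uniform_on(a, b, n):
    """Sample n floats from the half-open interval [a, b) by rescaling
    a [0.0, 1.0) source, per the random_sample docstring's recipe."""
    return [(b - a) * random.random() + a for _ in range(n)]
```

The docstring's three-by-two example `5 * np.random.random_sample((3, 2)) - 5` is exactly this with `a = -5` and `b = 0`, applied elementwise.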
} - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("random_sample", 0, 0, 1, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 718; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.random_sample"); - return NULL; - __pyx_L4_argument_unpacking_done:; - - /* "mtrand.pyx":759 - * - * """ - * return cont0_array(self.internal_state, rk_double, size) # <<<<<<<<<<<<<< - * - * def tomaxint(self, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_f_6mtrand_cont0_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_double, __pyx_v_size); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 759; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("mtrand.RandomState.random_sample"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":761 - * return cont0_array(self.internal_state, rk_double, size) - * - * def tomaxint(self, size=None): # <<<<<<<<<<<<<< - * """ - * tomaxint(size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_tomaxint(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_tomaxint[] = "\n tomaxint(size=None)\n\n Uniformly sample discrete random integers `x` such that\n ``0 <= x <= sys.maxint``.\n\n Parameters\n ----------\n size : tuple of ints, int, optional\n Shape of output. If the given size is, for example, (m,n,k),\n m*n*k samples are generated. 
If no shape is specified, a single sample\n is returned.\n\n Returns\n -------\n out : ndarray\n Drawn samples, with shape `size`.\n\n See Also\n --------\n randint : Uniform sampling over a given half-open interval of integers.\n random_integers : Uniform sampling over a given closed interval of\n integers.\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_tomaxint(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_size = 0; - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("tomaxint"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[1] = {0}; - values[0] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[0] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "tomaxint") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 761; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_size = values[0]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 1: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("tomaxint", 0, 0, 1, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 761; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.tomaxint"); - return NULL; - 
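Per its docstring, `tomaxint` samples integers uniformly over the closed interval `0 <= x <= sys.maxint` (the platform `long` range, drawn via `rk_long`). A hedged stdlib sketch of the same behavior, substituting `sys.maxsize` for the Python 2 `sys.maxint` and `random.randint` for the `rk_long` source (the name `tomaxint_sketch` is ours):

```python
import sys
import random

def tomaxint_sketch(n):
    """Draw n integers uniformly from the closed interval [0, sys.maxsize],
    approximating tomaxint's rk_long-based sampling."""
    return [random.randint(0, sys.maxsize) for _ in range(n)]
```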
__pyx_L4_argument_unpacking_done:; - - /* "mtrand.pyx":787 - * - * """ - * return disc0_array(self.internal_state, rk_long, size) # <<<<<<<<<<<<<< - * - * def randint(self, low, high=None, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_f_6mtrand_disc0_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_long, __pyx_v_size); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 787; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("mtrand.RandomState.tomaxint"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":789 - * return disc0_array(self.internal_state, rk_long, size) - * - * def randint(self, low, high=None, size=None): # <<<<<<<<<<<<<< - * """ - * randint(low, high=None, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_randint(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_randint[] = "\n randint(low, high=None, size=None)\n\n Return random integers from `low` (inclusive) to `high` (exclusive).\n\n Return random integers from the \"discrete uniform\" distribution in the\n \"half-open\" interval [`low`, `high`). If `high` is None (the default),\n then results are from [0, `low`).\n\n Parameters\n ----------\n low : int\n Lowest (signed) integer to be drawn from the distribution (unless\n ``high=None``, in which case this parameter is the *highest* such\n integer).\n high : int, optional\n If provided, one above the largest (signed) integer to be drawn\n from the distribution (see above for behavior if ``high=None``).\n size : int or tuple of ints, optional\n Output shape. 
Default is None, in which case a single int is\n returned.\n\n Returns\n -------\n out : int or ndarray of ints\n `size`-shaped array of random integers from the appropriate\n distribution, or a single such random int if `size` not provided.\n\n See Also\n --------\n random.random_integers : similar to `randint`, only for the closed\n interval [`low`, `high`], and 1 is the lowest value if `high` is\n omitted. In particular, this other one is the one to use to generate\n uniformly distributed discrete non-integers.\n\n Examples\n --------\n >>> np.random.randint(2, size=10)\n array([1, 0, 0, 0, 1, 1, 0, 0, 1, 0])\n >>> np.random.randint(1, size=10)\n array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])\n\n Generate a 2 x 4 array of ints between 0 and 4, inclusive:\n\n >>> np.random.randint(5, size=(2, 4))\n array([[4, 0, 2, 1],\n [3, 2, 2, 0]])\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_randint(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_low = 0; - PyObject *__pyx_v_high = 0; - PyObject *__pyx_v_size = 0; - long __pyx_v_lo; - long __pyx_v_hi; - long __pyx_v_diff; - long *__pyx_v_array_data; - PyArrayObject *arrayObject; - long __pyx_v_length; - long __pyx_v_i; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - long __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__low,&__pyx_n_s__high,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("randint"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[1] = ((PyObject *)Py_None); - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch 
(PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__low); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__high); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - case 2: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "randint") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 789; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_low = values[0]; - __pyx_v_high = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_high = ((PyObject *)Py_None); - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: __pyx_v_high = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_low = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("randint", 0, 1, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 789; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.randint"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_low); - __Pyx_INCREF(__pyx_v_high); - __Pyx_INCREF(__pyx_v_size); - arrayObject = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":845 - * cdef long i - * - * if high is None: # <<<<<<<<<<<<<< - * lo = 0 - * hi = low - */ - __pyx_t_1 = (__pyx_v_high == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":846 - * - * if high is None: - * lo = 
0 # <<<<<<<<<<<<<< - * hi = low - * else: - */ - __pyx_v_lo = 0; - - /* "mtrand.pyx":847 - * if high is None: - * lo = 0 - * hi = low # <<<<<<<<<<<<<< - * else: - * lo = low - */ - __pyx_t_2 = __Pyx_PyInt_AsLong(__pyx_v_low); if (unlikely((__pyx_t_2 == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 847; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_v_hi = __pyx_t_2; - goto __pyx_L6; - } - /*else*/ { - - /* "mtrand.pyx":849 - * hi = low - * else: - * lo = low # <<<<<<<<<<<<<< - * hi = high - * - */ - __pyx_t_2 = __Pyx_PyInt_AsLong(__pyx_v_low); if (unlikely((__pyx_t_2 == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 849; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_v_lo = __pyx_t_2; - - /* "mtrand.pyx":850 - * else: - * lo = low - * hi = high # <<<<<<<<<<<<<< - * - * diff = hi - lo - 1 - */ - __pyx_t_2 = __Pyx_PyInt_AsLong(__pyx_v_high); if (unlikely((__pyx_t_2 == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 850; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_v_hi = __pyx_t_2; - } - __pyx_L6:; - - /* "mtrand.pyx":852 - * hi = high - * - * diff = hi - lo - 1 # <<<<<<<<<<<<<< - * if diff < 0: - * raise ValueError("low >= high") - */ - __pyx_v_diff = ((__pyx_v_hi - __pyx_v_lo) - 1); - - /* "mtrand.pyx":853 - * - * diff = hi - lo - 1 - * if diff < 0: # <<<<<<<<<<<<<< - * raise ValueError("low >= high") - * - */ - __pyx_t_1 = (__pyx_v_diff < 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":854 - * diff = hi - lo - 1 - * if diff < 0: - * raise ValueError("low >= high") # <<<<<<<<<<<<<< - * - * if size is None: - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 854; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_4)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_4)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_4)); - __pyx_t_4 = 
PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 854; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 854; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":856 - * raise ValueError("low >= high") - * - * if size is None: # <<<<<<<<<<<<<< - * return rk_interval(diff, self.internal_state) + lo - * else: - */ - __pyx_t_1 = (__pyx_v_size == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":857 - * - * if size is None: - * return rk_interval(diff, self.internal_state) + lo # <<<<<<<<<<<<<< - * else: - * array = np.empty(size, int) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_4 = PyInt_FromLong((((long)rk_interval(__pyx_v_diff, ((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state)) + __pyx_v_lo)); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 857; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - goto __pyx_L8; - } - /*else*/ { - - /* "mtrand.pyx":859 - * return rk_interval(diff, self.internal_state) + lo - * else: - * array = np.empty(size, int) # <<<<<<<<<<<<<< - * length = PyArray_SIZE(array) - * array_data = array.data - */ - __pyx_t_4 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 859; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_4, __pyx_n_s__empty); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 859; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 859; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - __Pyx_INCREF(((PyObject *)((PyObject*)&PyInt_Type))); - PyTuple_SET_ITEM(__pyx_t_4, 1, ((PyObject *)((PyObject*)&PyInt_Type))); - __Pyx_GIVEREF(((PyObject *)((PyObject*)&PyInt_Type))); - __pyx_t_5 = PyObject_Call(__pyx_t_3, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 859; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_5))); - __Pyx_DECREF(((PyObject *)arrayObject)); - arrayObject = ((PyArrayObject *)__pyx_t_5); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "mtrand.pyx":860 - * else: - * array = np.empty(size, int) - * length = PyArray_SIZE(array) # <<<<<<<<<<<<<< - * array_data = array.data - * for i from 0 <= i < length: - */ - __pyx_v_length = PyArray_SIZE(arrayObject); - - /* "mtrand.pyx":861 - * array = np.empty(size, int) - * length = PyArray_SIZE(array) - * array_data = array.data # <<<<<<<<<<<<<< - * for i from 0 <= i < length: - * array_data[i] = lo + rk_interval(diff, self.internal_state) - */ - __pyx_v_array_data = ((long *)arrayObject->data); - - /* "mtrand.pyx":862 - * length = PyArray_SIZE(array) - * array_data = array.data - * for i from 0 <= i < length: # <<<<<<<<<<<<<< - * array_data[i] = lo + rk_interval(diff, self.internal_state) - * return array - */ - __pyx_t_2 = __pyx_v_length; - for (__pyx_v_i = 0; __pyx_v_i < __pyx_t_2; __pyx_v_i++) { - - /* "mtrand.pyx":863 - * array_data = array.data - * for i from 0 <= i < length: - * array_data[i] = lo + rk_interval(diff, self.internal_state) # <<<<<<<<<<<<<< - * return array - * - */ - (__pyx_v_array_data[__pyx_v_i]) = (__pyx_v_lo + 
((long)rk_interval(__pyx_v_diff, ((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state))); - } - - /* "mtrand.pyx":864 - * for i from 0 <= i < length: - * array_data[i] = lo + rk_interval(diff, self.internal_state) - * return array # <<<<<<<<<<<<<< - * - * def bytes(self, unsigned int length): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)arrayObject)); - __pyx_r = ((PyObject *)arrayObject); - goto __pyx_L0; - } - __pyx_L8:; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.randint"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_low); - __Pyx_DECREF(__pyx_v_high); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":866 - * return array - * - * def bytes(self, unsigned int length): # <<<<<<<<<<<<<< - * """ - * bytes(length) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_bytes(PyObject *__pyx_v_self, PyObject *__pyx_arg_length); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_bytes[] = "\n        bytes(length)\n\n        Return random bytes.\n\n        Parameters\n        ----------\n        length : int\n            Number of random bytes.\n\n        Returns\n        -------\n        out : str\n            String of length `length`.\n\n        Examples\n        --------\n        >>> np.random.bytes(10)\n        ' eh\\x85\\x022SZ\\xbf\\xa4' #random\n\n        "; -static PyObject *__pyx_pf_6mtrand_11RandomState_bytes(PyObject *__pyx_v_self, PyObject *__pyx_arg_length) { - unsigned int __pyx_v_length; - void *__pyx_v_bytes; - PyObject *__pyx_v_bytestring; - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - __Pyx_RefNannySetupContext("bytes"); - assert(__pyx_arg_length); { - __pyx_v_length = __Pyx_PyInt_AsUnsignedInt(__pyx_arg_length); if (unlikely((__pyx_v_length == (unsigned int)-1) && PyErr_Occurred()))
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 866; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.bytes"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_v_bytestring = Py_None; __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":889 - * """ - * cdef void *bytes - * bytestring = empty_py_bytes(length, &bytes) # <<<<<<<<<<<<<< - * rk_fill(bytes, length, self.internal_state) - * return bytestring - */ - __pyx_t_1 = empty_py_bytes(__pyx_v_length, (&__pyx_v_bytes)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 889; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_v_bytestring); - __pyx_v_bytestring = __pyx_t_1; - __pyx_t_1 = 0; - - /* "mtrand.pyx":890 - * cdef void *bytes - * bytestring = empty_py_bytes(length, &bytes) - * rk_fill(bytes, length, self.internal_state) # <<<<<<<<<<<<<< - * return bytestring - * - */ - rk_fill(__pyx_v_bytes, __pyx_v_length, ((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state); - - /* "mtrand.pyx":891 - * bytestring = empty_py_bytes(length, &bytes) - * rk_fill(bytes, length, self.internal_state) - * return bytestring # <<<<<<<<<<<<<< - * - * def uniform(self, low=0.0, high=1.0, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_bytestring); - __pyx_r = __pyx_v_bytestring; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("mtrand.RandomState.bytes"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF(__pyx_v_bytestring); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":893 - * return bytestring - * - * def uniform(self, low=0.0, high=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * uniform(low=0.0, high=1.0, size=1) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_uniform(PyObject 
*__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_uniform[] = "\n"" uniform(low=0.0, high=1.0, size=1)\n""\n"" Draw samples from a uniform distribution.\n""\n"" Samples are uniformly distributed over the half-open interval\n"" ``[low, high)`` (includes low, but excludes high). In other words,\n"" any value within the given interval is equally likely to be drawn\n"" by `uniform`.\n""\n"" Parameters\n"" ----------\n"" low : float, optional\n"" Lower boundary of the output interval. All values generated will be\n"" greater than or equal to low. The default value is 0.\n"" high : float\n"" Upper boundary of the output interval. All values generated will be\n"" less than high. The default value is 1.0.\n"" size : tuple of ints, int, optional\n"" Shape of output. If the given size is, for example, (m,n,k),\n"" m*n*k samples are generated. If no shape is specified, a single sample\n"" is returned.\n""\n"" Returns\n"" -------\n"" out : ndarray\n"" Drawn samples, with shape `size`.\n""\n"" See Also\n"" --------\n"" randint : Discrete uniform distribution, yielding integers.\n"" random_integers : Discrete uniform distribution over the closed interval\n"" ``[low, high]``.\n"" random_sample : Floats uniformly distributed over ``[0, 1)``.\n"" random : Alias for `random_sample`.\n"" rand : Convenience function that accepts dimensions as input, e.g.,\n"" ``rand(2,2)`` would generate a 2-by-2 array of floats, uniformly\n"" distributed over ``[0, 1)``.\n""\n"" Notes\n"" -----\n"" The probability density function of the uniform distribution is\n""\n"" .. 
math:: p(x) = \\frac{1}{b - a}\n""\n"" anywhere within the interval ``[a, b)``, and zero elsewhere.\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" >>> s = np.random.uniform(-1,0,1000)\n""\n"" All values are within the given interval:\n""\n"" >>> np.all(s >= -1)\n"" True\n""\n"" >>> np.all(s < 0)\n"" True\n""\n"" Display the histogram of the samples, along with the\n"" probability density function:\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> count, bins, ignored = plt.hist(s, 15, normed=True)\n"" >>> plt.plot(bins, np.ones_like(bins), linewidth=2, color='r')\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_uniform(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_low = 0; - PyObject *__pyx_v_high = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_olow; - PyArrayObject *__pyx_v_ohigh; - PyArrayObject *__pyx_v_odiff; - double __pyx_v_flow; - double __pyx_v_fhigh; - PyObject *__pyx_v_temp; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__low,&__pyx_n_s__high,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("uniform"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[0] = __pyx_k_5; - values[1] = __pyx_k_6; - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__low); - if (unlikely(value)) { values[0] = value; kw_args--; } - } - case 1: - if (kw_args > 1) { - PyObject* 
value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__high); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - case 2: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "uniform") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 893; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_low = values[0]; - __pyx_v_high = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_low = __pyx_k_5; - __pyx_v_high = __pyx_k_6; - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: __pyx_v_high = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_low = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("uniform", 0, 0, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 893; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.uniform"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_low); - __Pyx_INCREF(__pyx_v_high); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_olow = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_ohigh = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_odiff = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_temp = Py_None; __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":968 - * cdef object temp - * - * flow = PyFloat_AsDouble(low) # <<<<<<<<<<<<<< - * fhigh = PyFloat_AsDouble(high) - * if not PyErr_Occurred(): - */ - __pyx_v_flow = PyFloat_AsDouble(__pyx_v_low); - - /* 
"mtrand.pyx":969 - * - * flow = PyFloat_AsDouble(low) - * fhigh = PyFloat_AsDouble(high) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * return cont2_array_sc(self.internal_state, rk_uniform, size, flow, fhigh-flow) - */ - __pyx_v_fhigh = PyFloat_AsDouble(__pyx_v_high); - - /* "mtrand.pyx":970 - * flow = PyFloat_AsDouble(low) - * fhigh = PyFloat_AsDouble(high) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * return cont2_array_sc(self.internal_state, rk_uniform, size, flow, fhigh-flow) - * PyErr_Clear() - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":971 - * fhigh = PyFloat_AsDouble(high) - * if not PyErr_Occurred(): - * return cont2_array_sc(self.internal_state, rk_uniform, size, flow, fhigh-flow) # <<<<<<<<<<<<<< - * PyErr_Clear() - * olow = PyArray_FROM_OTF(low, NPY_DOUBLE, NPY_ALIGNED) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_f_6mtrand_cont2_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_uniform, __pyx_v_size, __pyx_v_flow, (__pyx_v_fhigh - __pyx_v_flow)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 971; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":972 - * if not PyErr_Occurred(): - * return cont2_array_sc(self.internal_state, rk_uniform, size, flow, fhigh-flow) - * PyErr_Clear() # <<<<<<<<<<<<<< - * olow = PyArray_FROM_OTF(low, NPY_DOUBLE, NPY_ALIGNED) - * ohigh = PyArray_FROM_OTF(high, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":973 - * return cont2_array_sc(self.internal_state, rk_uniform, size, flow, fhigh-flow) - * PyErr_Clear() - * olow = PyArray_FROM_OTF(low, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * ohigh = PyArray_FROM_OTF(high, NPY_DOUBLE, NPY_ALIGNED) - * temp = np.subtract(ohigh, olow) - */ - __pyx_t_2 = PyArray_FROM_OTF(__pyx_v_low, NPY_DOUBLE, NPY_ALIGNED); if 
(unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 973; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_olow)); - __pyx_v_olow = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":974 - * PyErr_Clear() - * olow = PyArray_FROM_OTF(low, NPY_DOUBLE, NPY_ALIGNED) - * ohigh = PyArray_FROM_OTF(high, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * temp = np.subtract(ohigh, olow) - * Py_INCREF(temp) # needed to get around Pyrex's automatic reference-counting - */ - __pyx_t_2 = PyArray_FROM_OTF(__pyx_v_high, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 974; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_ohigh)); - __pyx_v_ohigh = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":975 - * olow = PyArray_FROM_OTF(low, NPY_DOUBLE, NPY_ALIGNED) - * ohigh = PyArray_FROM_OTF(high, NPY_DOUBLE, NPY_ALIGNED) - * temp = np.subtract(ohigh, olow) # <<<<<<<<<<<<<< - * Py_INCREF(temp) # needed to get around Pyrex's automatic reference-counting - * # rules because EnsureArray steals a reference - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 975; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__subtract); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 975; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 975; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_v_ohigh)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_v_ohigh)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_ohigh)); - __Pyx_INCREF(((PyObject *)__pyx_v_olow)); - PyTuple_SET_ITEM(__pyx_t_2, 1, ((PyObject *)__pyx_v_olow)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_olow)); - __pyx_t_4 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 975; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_v_temp); - __pyx_v_temp = __pyx_t_4; - __pyx_t_4 = 0; - - /* "mtrand.pyx":976 - * ohigh = PyArray_FROM_OTF(high, NPY_DOUBLE, NPY_ALIGNED) - * temp = np.subtract(ohigh, olow) - * Py_INCREF(temp) # needed to get around Pyrex's automatic reference-counting # <<<<<<<<<<<<<< - * # rules because EnsureArray steals a reference - * odiff = PyArray_EnsureArray(temp) - */ - Py_INCREF(__pyx_v_temp); - - /* "mtrand.pyx":978 - * Py_INCREF(temp) # needed to get around Pyrex's automatic reference-counting - * # rules because EnsureArray steals a reference - * odiff = PyArray_EnsureArray(temp) # <<<<<<<<<<<<<< - * return cont2_array(self.internal_state, rk_uniform, size, olow, odiff) - * - */ - __pyx_t_4 = PyArray_EnsureArray(__pyx_v_temp); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 978; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_4))); - __Pyx_DECREF(((PyObject *)__pyx_v_odiff)); - __pyx_v_odiff = ((PyArrayObject *)__pyx_t_4); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "mtrand.pyx":979 - * # rules because EnsureArray steals a reference - * odiff = PyArray_EnsureArray(temp) - * return cont2_array(self.internal_state, rk_uniform, size, olow, odiff) # <<<<<<<<<<<<<< - * - * def rand(self, *args): - */ - 
__Pyx_XDECREF(__pyx_r); - __pyx_t_4 = __pyx_f_6mtrand_cont2_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_uniform, __pyx_v_size, __pyx_v_olow, __pyx_v_odiff); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 979; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("mtrand.RandomState.uniform"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_olow); - __Pyx_DECREF((PyObject *)__pyx_v_ohigh); - __Pyx_DECREF((PyObject *)__pyx_v_odiff); - __Pyx_DECREF(__pyx_v_temp); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_low); - __Pyx_DECREF(__pyx_v_high); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":981 - * return cont2_array(self.internal_state, rk_uniform, size, olow, odiff) - * - * def rand(self, *args): # <<<<<<<<<<<<<< - * """ - * rand(d0, d1, ..., dn) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_rand(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_rand[] = "\n        rand(d0, d1, ..., dn)\n\n        Random values in a given shape.\n\n        Create an array of the given shape and populate it with\n        random samples from a uniform distribution\n        over ``[0, 1)``.\n\n        Parameters\n        ----------\n        d0, d1, ..., dn : int\n            Shape of the output.\n\n        Returns\n        -------\n        out : ndarray, shape ``(d0, d1, ..., dn)``\n            Random values.\n\n        See Also\n        --------\n        random\n\n        Notes\n        -----\n        This is a convenience function. 
If you want an interface that\n takes a shape-tuple as the first argument, refer to\n `random`.\n\n Examples\n --------\n >>> np.random.rand(3,2)\n array([[ 0.14022471, 0.96360618], #random\n [ 0.37601032, 0.25528411], #random\n [ 0.49313049, 0.94909878]]) #random\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_rand(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_args = 0; - PyObject *__pyx_r = NULL; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - __Pyx_RefNannySetupContext("rand"); - if (unlikely(__pyx_kwds) && unlikely(PyDict_Size(__pyx_kwds) > 0) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "rand", 0))) return NULL; - __Pyx_INCREF(__pyx_args); - __pyx_v_args = __pyx_args; - __Pyx_INCREF((PyObject *)__pyx_v_self); - - /* "mtrand.pyx":1019 - * - * """ - * if len(args) == 0: # <<<<<<<<<<<<<< - * return self.random_sample() - * else: - */ - __pyx_t_1 = PyObject_Length(__pyx_v_args); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1019; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_t_2 = (__pyx_t_1 == 0); - if (__pyx_t_2) { - - /* "mtrand.pyx":1020 - * """ - * if len(args) == 0: - * return self.random_sample() # <<<<<<<<<<<<<< - * else: - * return self.random_sample(size=args) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = PyObject_GetAttr(__pyx_v_self, __pyx_n_s__random_sample); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1020; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_Call(__pyx_t_3, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1020; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - goto __pyx_L5; - } - /*else*/ { - - /* 
"mtrand.pyx":1022 - * return self.random_sample() - * else: - * return self.random_sample(size=args) # <<<<<<<<<<<<<< - * - * def randn(self, *args): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_4 = PyObject_GetAttr(__pyx_v_self, __pyx_n_s__random_sample); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1022; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = PyDict_New(); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1022; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(((PyObject *)__pyx_t_3)); - if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_n_s__size), __pyx_v_args) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1022; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_t_5 = PyEval_CallObjectWithKeywords(__pyx_t_4, ((PyObject *)__pyx_empty_tuple), ((PyObject *)__pyx_t_3)); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1022; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(((PyObject *)__pyx_t_3)); __pyx_t_3 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - } - __pyx_L5:; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.rand"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF(__pyx_v_args); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":1024 - * return self.random_sample(size=args) - * - * def randn(self, *args): # <<<<<<<<<<<<<< - * """ - * randn([d1, ..., dn]) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_randn(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_randn[] = "\n randn([d1, ..., dn])\n\n Return a 
sample (or samples) from the \"standard normal\" distribution.\n\n If positive, int_like or int-convertible arguments are provided,\n `randn` generates an array of shape ``(d1, ..., dn)``, filled\n with random floats sampled from a univariate \"normal\" (Gaussian)\n distribution of mean 0 and variance 1 (if any of the :math:`d_i` are\n floats, they are first converted to integers by truncation). A single\n float randomly sampled from the distribution is returned if no\n argument is provided.\n\n This is a convenience function. If you want an interface that takes a\n tuple as the first argument, use `numpy.random.standard_normal` instead.\n\n Parameters\n ----------\n d1, ..., dn : `n` ints, optional\n The dimensions of the returned array, should be all positive.\n\n Returns\n -------\n Z : ndarray or float\n A ``(d1, ..., dn)``-shaped array of floating-point samples from\n the standard normal distribution, or a single such float if\n no parameters were supplied.\n\n See Also\n --------\n random.standard_normal : Similar, but takes a tuple as its argument.\n\n Notes\n -----\n For random samples from :math:`N(\\mu, \\sigma^2)`, use:\n\n ``sigma * np.random.randn(...) 
+ mu``\n\n Examples\n --------\n >>> np.random.randn()\n 2.1923875335537315 #random\n\n Two-by-four array of samples from N(3, 6.25):\n\n >>> 2.5 * np.random.randn(2, 4) + 3\n array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], #random\n [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) #random\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_randn(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_args = 0; - PyObject *__pyx_r = NULL; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - __Pyx_RefNannySetupContext("randn"); - if (unlikely(__pyx_kwds) && unlikely(PyDict_Size(__pyx_kwds) > 0) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "randn", 0))) return NULL; - __Pyx_INCREF(__pyx_args); - __pyx_v_args = __pyx_args; - __Pyx_INCREF((PyObject *)__pyx_v_self); - - /* "mtrand.pyx":1075 - * - * """ - * if len(args) == 0: # <<<<<<<<<<<<<< - * return self.standard_normal() - * else: - */ - __pyx_t_1 = PyObject_Length(__pyx_v_args); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1075; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_t_2 = (__pyx_t_1 == 0); - if (__pyx_t_2) { - - /* "mtrand.pyx":1076 - * """ - * if len(args) == 0: - * return self.standard_normal() # <<<<<<<<<<<<<< - * else: - * return self.standard_normal(args) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = PyObject_GetAttr(__pyx_v_self, __pyx_n_s__standard_normal); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1076; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_Call(__pyx_t_3, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1076; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto 
__pyx_L0; - goto __pyx_L5; - } - /*else*/ { - - /* "mtrand.pyx":1078 - * return self.standard_normal() - * else: - * return self.standard_normal(args) # <<<<<<<<<<<<<< - * - * def random_integers(self, low, high=None, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_4 = PyObject_GetAttr(__pyx_v_self, __pyx_n_s__standard_normal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1078; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1078; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_args); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_args); - __Pyx_GIVEREF(__pyx_v_args); - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1078; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - } - __pyx_L5:; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.randn"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF(__pyx_v_args); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":1080 - * return self.standard_normal(args) - * - * def random_integers(self, low, high=None, size=None): # <<<<<<<<<<<<<< - * """ - * random_integers(low, high=None, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_random_integers(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_random_integers[] = "\n"" random_integers(low, high=None, 
size=None)\n""\n""        Return random integers between `low` and `high`, inclusive.\n""\n""        Return random integers from the \"discrete uniform\" distribution in the\n""        closed interval [`low`, `high`].  If `high` is None (the default),\n""        then results are from [1, `low`].\n""\n""        Parameters\n""        ----------\n""        low : int\n""            Lowest (signed) integer to be drawn from the distribution (unless\n""            ``high=None``, in which case this parameter is the *highest* such\n""            integer).\n""        high : int, optional\n""            If provided, the largest (signed) integer to be drawn from the\n""            distribution (see above for behavior if ``high=None``).\n""        size : int or tuple of ints, optional\n""            Output shape. Default is None, in which case a single int is returned.\n""\n""        Returns\n""        -------\n""        out : int or ndarray of ints\n""            `size`-shaped array of random integers from the appropriate\n""            distribution, or a single such random int if `size` not provided.\n""\n""        See Also\n""        --------\n""        random.randint : Similar to `random_integers`, only for the half-open\n""            interval [`low`, `high`), and 0 is the lowest value if `high` is\n""            omitted.\n""\n""        Notes\n""        -----\n""        To sample from N evenly spaced floating-point numbers between a and b,\n""        use::\n""\n""          a + (b - a) * (np.random.random_integers(N) - 1) / (N - 1.)\n""\n""        Examples\n""        --------\n""        >>> np.random.random_integers(5)\n""        4\n""        >>> type(np.random.random_integers(5))\n""        <type 'int'>\n""        >>> np.random.random_integers(5, size=(3.,2.))\n""        array([[5, 4],\n""               [3, 3],\n""               [4, 5]])\n""\n""        Choose five random numbers from the set of five evenly-spaced\n""        numbers between 0 and 2.5, inclusive (*i.e.*, from the set\n""        :math:`{0, 5/8, 10/8, 15/8, 20/8}`):\n""\n""        >>> 2.5 * (np.random.random_integers(5, size=(5,)) - 1) / 4.\n""        array([ 0.625,  1.25 ,  0.625,  0.625,  2.5  ])\n""\n""        Roll two six-sided dice 1000 times and sum the results:\n""\n""        >>> d1 = np.random.random_integers(1, 6, 1000)\n""        >>> d2 = np.random.random_integers(1, 6, 1000)\n""        >>> dsums = d1 + 
d2\n""\n"" Display results as a histogram:\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> count, bins, ignored = plt.hist(dsums, 11, normed=True)\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_random_integers(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_low = 0; - PyObject *__pyx_v_high = 0; - PyObject *__pyx_v_size = 0; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__low,&__pyx_n_s__high,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("random_integers"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[1] = ((PyObject *)Py_None); - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__low); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__high); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - case 2: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "random_integers") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1080; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_low = values[0]; - __pyx_v_high = values[1]; - __pyx_v_size = values[2]; - } 
else { - __pyx_v_high = ((PyObject *)Py_None); - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: __pyx_v_high = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_low = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("random_integers", 0, 1, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1080; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.random_integers"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_low); - __Pyx_INCREF(__pyx_v_high); - __Pyx_INCREF(__pyx_v_size); - - /* "mtrand.pyx":1152 - * - * """ - * if high is None: # <<<<<<<<<<<<<< - * high = low - * low = 1 - */ - __pyx_t_1 = (__pyx_v_high == Py_None); - if (__pyx_t_1) { - - /* "mtrand.pyx":1153 - * """ - * if high is None: - * high = low # <<<<<<<<<<<<<< - * low = 1 - * return self.randint(low, high+1, size) - */ - __Pyx_INCREF(__pyx_v_low); - __Pyx_DECREF(__pyx_v_high); - __pyx_v_high = __pyx_v_low; - - /* "mtrand.pyx":1154 - * if high is None: - * high = low - * low = 1 # <<<<<<<<<<<<<< - * return self.randint(low, high+1, size) - * - */ - __Pyx_INCREF(__pyx_int_1); - __Pyx_DECREF(__pyx_v_low); - __pyx_v_low = __pyx_int_1; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":1155 - * high = low - * low = 1 - * return self.randint(low, high+1, size) # <<<<<<<<<<<<<< - * - * # Complicated, continuous distributions: - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyObject_GetAttr(__pyx_v_self, __pyx_n_s__randint); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1155; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Add(__pyx_v_high, 
__pyx_int_1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1155; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1155; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_low); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_low); - __Pyx_GIVEREF(__pyx_v_low); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1155; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("mtrand.RandomState.random_integers"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_low); - __Pyx_DECREF(__pyx_v_high); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":1158 - * - * # Complicated, continuous distributions: - * def standard_normal(self, size=None): # <<<<<<<<<<<<<< - * """ - * standard_normal(size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_standard_normal(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_standard_normal[] = "\n standard_normal(size=None)\n\n Returns samples from a Standard Normal distribution (mean=0, stdev=1).\n\n Parameters\n ----------\n size 
: int or tuple of ints, optional\n Output shape. Default is None, in which case a single value is\n returned.\n\n Returns\n -------\n out : float or ndarray\n Drawn samples.\n\n Examples\n --------\n >>> s = np.random.standard_normal(8000)\n >>> s\n array([ 0.6888893 , 0.78096262, -0.89086505, ..., 0.49876311, #random\n -0.38672696, -0.4685006 ]) #random\n >>> s.shape\n (8000,)\n >>> s = np.random.standard_normal(size=(3, 4, 2))\n >>> s.shape\n (3, 4, 2)\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_standard_normal(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_size = 0; - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("standard_normal"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[1] = {0}; - values[0] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[0] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "standard_normal") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1158; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_size = values[0]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 1: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("standard_normal", 0, 0, 1, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = 
__pyx_f[0]; __pyx_lineno = 1158; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.standard_normal"); - return NULL; - __pyx_L4_argument_unpacking_done:; - - /* "mtrand.pyx":1188 - * - * """ - * return cont0_array(self.internal_state, rk_gauss, size) # <<<<<<<<<<<<<< - * - * def normal(self, loc=0.0, scale=1.0, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_f_6mtrand_cont0_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_gauss, __pyx_v_size); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1188; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("mtrand.RandomState.standard_normal"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":1190 - * return cont0_array(self.internal_state, rk_gauss, size) - * - * def normal(self, loc=0.0, scale=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * normal(loc=0.0, scale=1.0, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_normal(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_normal[] = "\n"" normal(loc=0.0, scale=1.0, size=None)\n""\n"" Draw random samples from a normal (Gaussian) distribution.\n""\n"" The probability density function of the normal distribution, first\n"" derived by De Moivre and 200 years later by both Gauss and Laplace\n"" independently [2]_, is often called the bell curve because of\n"" its characteristic shape (see the example below).\n""\n"" The normal distributions occurs often in nature. 
For example, it\n"" describes the commonly occurring distribution of samples influenced\n"" by a large number of tiny, random disturbances, each with its own\n"" unique distribution [2]_.\n""\n"" Parameters\n"" ----------\n"" loc : float\n"" Mean (\"centre\") of the distribution.\n"" scale : float\n"" Standard deviation (spread or \"width\") of the distribution.\n"" size : tuple of ints\n"" Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" See Also\n"" --------\n"" scipy.stats.distributions.norm : probability density function,\n"" distribution or cumulative density function, etc.\n""\n"" Notes\n"" -----\n"" The probability density for the Gaussian distribution is\n""\n"" .. math:: p(x) = \\frac{1}{\\sqrt{ 2 \\pi \\sigma^2 }}\n"" e^{ - \\frac{ (x - \\mu)^2 } {2 \\sigma^2} },\n""\n"" where :math:`\\mu` is the mean and :math:`\\sigma` the standard deviation.\n"" The square of the standard deviation, :math:`\\sigma^2`, is called the\n"" variance.\n""\n"" The function has its peak at the mean, and its \"spread\" increases with\n"" the standard deviation (the function reaches 0.607 times its maximum at\n"" :math:`x + \\sigma` and :math:`x - \\sigma` [2]_). This implies that\n"" `numpy.random.normal` is more likely to return samples lying close to the\n"" mean, rather than those far away.\n""\n"" References\n"" ----------\n"" .. [1] Wikipedia, \"Normal distribution\",\n"" http://en.wikipedia.org/wiki/Normal_distribution\n"" .. [2] P. R. Peebles Jr., \"Central Limit Theorem\" in \"Probability, Random\n"" Variables and Random Signal Principles\", 4th ed., 2001,\n"" pp. 
51, 51, 125.\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" >>> mu, sigma = 0, 0.1 # mean and standard deviation\n"" >>> s = np.random.normal(mu, sigma, 1000)\n""\n"" Verify the mean and the variance:\n""\n"" >>> abs(mu - np.mean(s)) < 0.01\n"" True\n""\n"" >>> abs(sigma - np.std(s, ddof=1)) < 0.01\n"" True\n""\n"" Display the histogram of the samples, along with\n"" the probability density function:\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> count, bins, ignored = plt.hist(s, 30, normed=True)\n"" >>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *\n"" ... np.exp( - (bins - mu)**2 / (2 * sigma**2) ),\n"" ... linewidth=2, color='r')\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_normal(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_loc = 0; - PyObject *__pyx_v_scale = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_oloc; - PyArrayObject *__pyx_v_oscale; - double __pyx_v_floc; - double __pyx_v_fscale; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__loc,&__pyx_n_s__scale,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("normal"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[0] = __pyx_k_7; - values[1] = __pyx_k_8; - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__loc); - if (unlikely(value)) { values[0] = value; kw_args--; 
} - } - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__scale); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - case 2: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "normal") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1190; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_loc = values[0]; - __pyx_v_scale = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_loc = __pyx_k_7; - __pyx_v_scale = __pyx_k_8; - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: __pyx_v_scale = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_loc = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("normal", 0, 0, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1190; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.normal"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_loc); - __Pyx_INCREF(__pyx_v_scale); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_oloc = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_oscale = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":1275 - * cdef double floc, fscale - * - * floc = PyFloat_AsDouble(loc) # <<<<<<<<<<<<<< - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): - */ - __pyx_v_floc = PyFloat_AsDouble(__pyx_v_loc); - - /* "mtrand.pyx":1276 - * - * floc = PyFloat_AsDouble(loc) - * 
fscale = PyFloat_AsDouble(scale) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fscale <= 0: - */ - __pyx_v_fscale = PyFloat_AsDouble(__pyx_v_scale); - - /* "mtrand.pyx":1277 - * floc = PyFloat_AsDouble(loc) - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fscale <= 0: - * raise ValueError("scale <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":1278 - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): - * if fscale <= 0: # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0") - * return cont2_array_sc(self.internal_state, rk_normal, size, floc, fscale) - */ - __pyx_t_1 = (__pyx_v_fscale <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":1279 - * if not PyErr_Occurred(): - * if fscale <= 0: - * raise ValueError("scale <= 0") # <<<<<<<<<<<<<< - * return cont2_array_sc(self.internal_state, rk_normal, size, floc, fscale) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1279; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_9)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_9)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_9)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1279; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1279; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":1280 - * if fscale <= 0: - * raise ValueError("scale <= 0") - * return cont2_array_sc(self.internal_state, rk_normal, size, floc, fscale) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = 
__pyx_f_6mtrand_cont2_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_normal, __pyx_v_size, __pyx_v_floc, __pyx_v_fscale); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1280; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":1282 - * return cont2_array_sc(self.internal_state, rk_normal, size, floc, fscale) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":1284 - * PyErr_Clear() - * - * oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0)): - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_loc, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1284; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_oloc)); - __pyx_v_oloc = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":1285 - * - * oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(oscale, 0)): - * raise ValueError("scale <= 0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_scale, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1285; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_oscale)); - __pyx_v_oscale = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":1286 - * oloc = PyArray_FROM_OTF(loc, 
NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0)): # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0") - * return cont2_array(self.internal_state, rk_normal, size, oloc, oscale) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1286; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1286; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1286; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1286; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1286; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_v_oscale)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_oscale)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oscale)); - __Pyx_INCREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1286; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = 
__pyx_f[0]; __pyx_lineno = 1286; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1286; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1286; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":1287 - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0)): - * raise ValueError("scale <= 0") # <<<<<<<<<<<<<< - * return cont2_array(self.internal_state, rk_normal, size, oloc, oscale) - * - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1287; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_9)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_9)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_9)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1287; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1287; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":1288 - * if np.any(np.less_equal(oscale, 0)): - * raise ValueError("scale <= 0") - * return cont2_array(self.internal_state, rk_normal, size, oloc, oscale) # <<<<<<<<<<<<<< - * - 
* def beta(self, a, b, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_cont2_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_normal, __pyx_v_size, __pyx_v_oloc, __pyx_v_oscale); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1288; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.normal"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_oloc); - __Pyx_DECREF((PyObject *)__pyx_v_oscale); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_loc); - __Pyx_DECREF(__pyx_v_scale); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":1290 - * return cont2_array(self.internal_state, rk_normal, size, oloc, oscale) - * - * def beta(self, a, b, size=None): # <<<<<<<<<<<<<< - * """ - * beta(a, b, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_beta(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_beta[] = "\n beta(a, b, size=None)\n\n The Beta distribution over ``[0, 1]``.\n\n The Beta distribution is a special case of the Dirichlet distribution,\n and is related to the Gamma distribution. It has the probability\n distribution function\n\n .. math:: f(x; a,b) = \\frac{1}{B(\\alpha, \\beta)} x^{\\alpha - 1}\n (1 - x)^{\\beta - 1},\n\n where the normalisation, B, is the beta function,\n\n .. 
math:: B(\\alpha, \\beta) = \\int_0^1 t^{\\alpha - 1}\n (1 - t)^{\\beta - 1} dt.\n\n It is often seen in Bayesian inference and order statistics.\n\n Parameters\n ----------\n a : float\n Alpha, non-negative.\n b : float\n Beta, non-negative.\n size : tuple of ints, optional\n The number of samples to draw. The ouput is packed according to\n the size given.\n\n Returns\n -------\n out : ndarray\n Array of the given shape, containing values drawn from a\n Beta distribution.\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_beta(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_a = 0; - PyObject *__pyx_v_b = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_oa; - PyArrayObject *__pyx_v_ob; - double __pyx_v_fa; - double __pyx_v_fb; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__a,&__pyx_n_s__b,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("beta"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__a); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__b); - if (likely(values[1])) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("beta", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1290; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - case 2: - if (kw_args > 1) { - PyObject* value = 
PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "beta") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1290; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_a = values[0]; - __pyx_v_b = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: - __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: - __pyx_v_b = PyTuple_GET_ITEM(__pyx_args, 1); - __pyx_v_a = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("beta", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1290; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.beta"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_a); - __Pyx_INCREF(__pyx_v_b); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_oa = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_ob = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":1330 - * cdef double fa, fb - * - * fa = PyFloat_AsDouble(a) # <<<<<<<<<<<<<< - * fb = PyFloat_AsDouble(b) - * if not PyErr_Occurred(): - */ - __pyx_v_fa = PyFloat_AsDouble(__pyx_v_a); - - /* "mtrand.pyx":1331 - * - * fa = PyFloat_AsDouble(a) - * fb = PyFloat_AsDouble(b) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fa <= 0: - */ - __pyx_v_fb = PyFloat_AsDouble(__pyx_v_b); - - /* "mtrand.pyx":1332 - * fa = PyFloat_AsDouble(a) - * fb = PyFloat_AsDouble(b) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fa <= 0: - * raise ValueError("a <= 0") - */ - __pyx_t_1 = 
(!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":1333 - * fb = PyFloat_AsDouble(b) - * if not PyErr_Occurred(): - * if fa <= 0: # <<<<<<<<<<<<<< - * raise ValueError("a <= 0") - * if fb <= 0: - */ - __pyx_t_1 = (__pyx_v_fa <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":1334 - * if not PyErr_Occurred(): - * if fa <= 0: - * raise ValueError("a <= 0") # <<<<<<<<<<<<<< - * if fb <= 0: - * raise ValueError("b <= 0") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1334; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_10)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_10)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_10)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1334; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1334; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":1335 - * if fa <= 0: - * raise ValueError("a <= 0") - * if fb <= 0: # <<<<<<<<<<<<<< - * raise ValueError("b <= 0") - * return cont2_array_sc(self.internal_state, rk_beta, size, fa, fb) - */ - __pyx_t_1 = (__pyx_v_fb <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":1336 - * raise ValueError("a <= 0") - * if fb <= 0: - * raise ValueError("b <= 0") # <<<<<<<<<<<<<< - * return cont2_array_sc(self.internal_state, rk_beta, size, fa, fb) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1336; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_11)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_11)); - 
__Pyx_GIVEREF(((PyObject *)__pyx_kp_s_11)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1336; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1336; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":1337 - * if fb <= 0: - * raise ValueError("b <= 0") - * return cont2_array_sc(self.internal_state, rk_beta, size, fa, fb) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_f_6mtrand_cont2_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_beta, __pyx_v_size, __pyx_v_fa, __pyx_v_fb); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1337; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":1339 - * return cont2_array_sc(self.internal_state, rk_beta, size, fa, fb) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":1341 - * PyErr_Clear() - * - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * ob = PyArray_FROM_OTF(b, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oa, 0)): - */ - __pyx_t_2 = PyArray_FROM_OTF(__pyx_v_a, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1341; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_oa)); - __pyx_v_oa = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* 
"mtrand.pyx":1342 - * - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - * ob = PyArray_FROM_OTF(b, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(oa, 0)): - * raise ValueError("a <= 0") - */ - __pyx_t_2 = PyArray_FROM_OTF(__pyx_v_b, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1342; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_ob)); - __pyx_v_ob = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":1343 - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - * ob = PyArray_FROM_OTF(b, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oa, 0)): # <<<<<<<<<<<<<< - * raise ValueError("a <= 0") - * if np.any(np.less_equal(ob, 0)): - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1343; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__any); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1343; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1343; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1343; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1343; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - 
__Pyx_INCREF(((PyObject *)__pyx_v_oa)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_v_oa)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oa)); - __Pyx_INCREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1343; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1343; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1343; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1343; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":1344 - * ob = PyArray_FROM_OTF(b, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oa, 0)): - * raise ValueError("a <= 0") # <<<<<<<<<<<<<< - * if np.any(np.less_equal(ob, 0)): - * raise ValueError("b <= 0") - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1344; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_10)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_10)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_10)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if 
(unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1344; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1344; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L9; - } - __pyx_L9:; - - /* "mtrand.pyx":1345 - * if np.any(np.less_equal(oa, 0)): - * raise ValueError("a <= 0") - * if np.any(np.less_equal(ob, 0)): # <<<<<<<<<<<<<< - * raise ValueError("b <= 0") - * return cont2_array(self.internal_state, rk_beta, size, oa, ob) - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1345; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__any); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1345; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1345; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1345; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1345; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_v_ob)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_v_ob)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_ob)); - __Pyx_INCREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_int_0); - 
__Pyx_GIVEREF(__pyx_int_0); - __pyx_t_4 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1345; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1345; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = PyObject_Call(__pyx_t_5, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1345; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1345; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":1346 - * raise ValueError("a <= 0") - * if np.any(np.less_equal(ob, 0)): - * raise ValueError("b <= 0") # <<<<<<<<<<<<<< - * return cont2_array(self.internal_state, rk_beta, size, oa, ob) - * - */ - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1346; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_11)); - PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_kp_s_11)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_11)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1346; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1346; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L10; - } - __pyx_L10:; - - /* "mtrand.pyx":1347 - * if np.any(np.less_equal(ob, 0)): - * raise ValueError("b <= 0") - * return cont2_array(self.internal_state, rk_beta, size, oa, ob) # <<<<<<<<<<<<<< - * - * def exponential(self, scale=1.0, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_f_6mtrand_cont2_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_beta, __pyx_v_size, __pyx_v_oa, __pyx_v_ob); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1347; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.beta"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_oa); - __Pyx_DECREF((PyObject *)__pyx_v_ob); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_a); - __Pyx_DECREF(__pyx_v_b); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":1349 - * return cont2_array(self.internal_state, rk_beta, size, oa, ob) - * - * def exponential(self, scale=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * exponential(scale=1.0, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_exponential(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_exponential[] = "\n exponential(scale=1.0, size=None)\n\n Exponential distribution.\n\n Its probability density function is\n\n .. 
math:: f(x; \\frac{1}{\\beta}) = \\frac{1}{\\beta} \\exp(-\\frac{x}{\\beta}),\n\n for ``x > 0`` and 0 elsewhere. :math:`\\beta` is the scale parameter,\n which is the inverse of the rate parameter :math:`\\lambda = 1/\\beta`.\n The rate parameter is an alternative, widely used parameterization\n of the exponential distribution [3]_.\n\n The exponential distribution is a continuous analogue of the\n geometric distribution. It describes many common situations, such as\n the size of raindrops measured over many rainstorms [1]_, or the time\n between page requests to Wikipedia [2]_.\n\n Parameters\n ----------\n scale : float\n The scale parameter, :math:`\\beta = 1/\\lambda`.\n size : tuple of ints\n Number of samples to draw. The output is shaped\n according to `size`.\n\n References\n ----------\n .. [1] Peyton Z. Peebles Jr., \"Probability, Random Variables and\n Random Signal Principles\", 4th ed, 2001, p. 57.\n .. [2] \"Poisson Process\", Wikipedia,\n http://en.wikipedia.org/wiki/Poisson_process\n .. 
[3] \"Exponential Distribution\", Wikipedia,\n http://en.wikipedia.org/wiki/Exponential_distribution\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_exponential(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_scale = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_oscale; - double __pyx_v_fscale; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__scale,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("exponential"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[2] = {0,0}; - values[0] = __pyx_k_12; - values[1] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__scale); - if (unlikely(value)) { values[0] = value; kw_args--; } - } - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "exponential") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1349; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_scale = values[0]; - __pyx_v_size = values[1]; - } else { - __pyx_v_scale = __pyx_k_12; - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_scale = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto 
__pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("exponential", 0, 0, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1349; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.exponential"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_scale); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_oscale = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":1390 - * cdef double fscale - * - * fscale = PyFloat_AsDouble(scale) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fscale <= 0: - */ - __pyx_v_fscale = PyFloat_AsDouble(__pyx_v_scale); - - /* "mtrand.pyx":1391 - * - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fscale <= 0: - * raise ValueError("scale <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":1392 - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): - * if fscale <= 0: # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0") - * return cont1_array_sc(self.internal_state, rk_exponential, size, fscale) - */ - __pyx_t_1 = (__pyx_v_fscale <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":1393 - * if not PyErr_Occurred(): - * if fscale <= 0: - * raise ValueError("scale <= 0") # <<<<<<<<<<<<<< - * return cont1_array_sc(self.internal_state, rk_exponential, size, fscale) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1393; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_9)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_9)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_9)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 1393; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1393; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":1394 - * if fscale <= 0: - * raise ValueError("scale <= 0") - * return cont1_array_sc(self.internal_state, rk_exponential, size, fscale) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_cont1_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_exponential, __pyx_v_size, __pyx_v_fscale); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1394; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":1396 - * return cont1_array_sc(self.internal_state, rk_exponential, size, fscale) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":1398 - * PyErr_Clear() - * - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_scale, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1398; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_oscale)); - __pyx_v_oscale = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":1399 - * - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0.0)): # <<<<<<<<<<<<<< - * 
raise ValueError("scale <= 0") - * return cont1_array(self.internal_state, rk_exponential, size, oscale) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1399; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1399; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1399; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1399; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1399; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1399; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_oscale)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_oscale)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oscale)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1399; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if 
(unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1399; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1399; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1399; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":1400 - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0") # <<<<<<<<<<<<<< - * return cont1_array(self.internal_state, rk_exponential, size, oscale) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1400; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_9)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_9)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_9)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1400; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1400; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":1401 - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0") - * return cont1_array(self.internal_state, 
rk_exponential, size, oscale) # <<<<<<<<<<<<<< - * - * def standard_exponential(self, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = __pyx_f_6mtrand_cont1_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_exponential, __pyx_v_size, __pyx_v_oscale); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1401; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.exponential"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_oscale); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_scale); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":1403 - * return cont1_array(self.internal_state, rk_exponential, size, oscale) - * - * def standard_exponential(self, size=None): # <<<<<<<<<<<<<< - * """ - * standard_exponential(size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_standard_exponential(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_standard_exponential[] = "\n standard_exponential(size=None)\n\n Draw samples from the standard exponential distribution.\n\n `standard_exponential` is identical to the exponential distribution\n with a scale parameter of 1.\n\n Parameters\n ----------\n size : int or tuple of ints\n Shape of the output.\n\n Returns\n -------\n out : float or ndarray\n Drawn samples.\n\n Examples\n --------\n Output a 3x8000 array:\n\n >>> n = np.random.standard_exponential((3, 8000))\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_standard_exponential(PyObject *__pyx_v_self, 
PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_size = 0; - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("standard_exponential"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[1] = {0}; - values[0] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[0] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "standard_exponential") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1403; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_size = values[0]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 1: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("standard_exponential", 0, 0, 1, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1403; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.standard_exponential"); - return NULL; - __pyx_L4_argument_unpacking_done:; - - /* "mtrand.pyx":1429 - * - * """ - * return cont0_array(self.internal_state, rk_standard_exponential, size) # <<<<<<<<<<<<<< - * - * def standard_gamma(self, shape, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_f_6mtrand_cont0_array(((struct __pyx_obj_6mtrand_RandomState 
*)__pyx_v_self)->internal_state, rk_standard_exponential, __pyx_v_size); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1429; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("mtrand.RandomState.standard_exponential"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":1431 - * return cont0_array(self.internal_state, rk_standard_exponential, size) - * - * def standard_gamma(self, shape, size=None): # <<<<<<<<<<<<<< - * """ - * standard_gamma(shape, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_standard_gamma(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_standard_gamma[] = "\n"" standard_gamma(shape, size=None)\n""\n"" Draw samples from a Standard Gamma distribution.\n""\n"" Samples are drawn from a Gamma distribution with specified parameters,\n"" shape (sometimes designated \"k\") and scale=1.\n""\n"" Parameters\n"" ----------\n"" shape : float\n"" Parameter, should be > 0.\n"" size : int or tuple of ints\n"" Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" Returns\n"" -------\n"" samples : ndarray or scalar\n"" The drawn samples.\n""\n"" See Also\n"" --------\n"" scipy.stats.distributions.gamma : probability density function,\n"" distribution or cumulative density function, etc.\n""\n"" Notes\n"" -----\n"" The probability density for the Gamma distribution is\n""\n"" .. 
math:: p(x) = x^{k-1}\\frac{e^{-x/\\theta}}{\\theta^k\\Gamma(k)},\n""\n"" where :math:`k` is the shape and :math:`\\theta` the scale,\n"" and :math:`\\Gamma` is the Gamma function.\n""\n"" The Gamma distribution is often used to model the times to failure of\n"" electronic components, and arises naturally in processes for which the\n"" waiting times between Poisson distributed events are relevant.\n""\n"" References\n"" ----------\n"" .. [1] Weisstein, Eric W. \"Gamma Distribution.\" From MathWorld--A\n"" Wolfram Web Resource.\n"" http://mathworld.wolfram.com/GammaDistribution.html\n"" .. [2] Wikipedia, \"Gamma-distribution\",\n"" http://en.wikipedia.org/wiki/Gamma-distribution\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" >>> shape, scale = 2., 1. # mean and width\n"" >>> s = np.random.standard_gamma(shape, 1000000)\n""\n"" Display the histogram of the samples, along with\n"" the probability density function:\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> import scipy.special as sps\n"" >>> count, bins, ignored = plt.hist(s, 50, normed=True)\n"" >>> y = bins**(shape-1) * ((np.exp(-bins/scale))/ \\\n"" ... 
(sps.gamma(shape) * scale**shape))\n"" >>> plt.plot(bins, y, linewidth=2, color='r')\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_standard_gamma(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_shape = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_oshape; - double __pyx_v_fshape; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__shape,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("standard_gamma"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[2] = {0,0}; - values[1] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__shape); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "standard_gamma") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1431; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_shape = values[0]; - __pyx_v_size = values[1]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_shape = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto 
__pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("standard_gamma", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1431; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.standard_gamma"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_oshape = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":1501 - * cdef double fshape - * - * fshape = PyFloat_AsDouble(shape) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fshape <= 0: - */ - __pyx_v_fshape = PyFloat_AsDouble(__pyx_v_shape); - - /* "mtrand.pyx":1502 - * - * fshape = PyFloat_AsDouble(shape) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fshape <= 0: - * raise ValueError("shape <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":1503 - * fshape = PyFloat_AsDouble(shape) - * if not PyErr_Occurred(): - * if fshape <= 0: # <<<<<<<<<<<<<< - * raise ValueError("shape <= 0") - * return cont1_array_sc(self.internal_state, rk_standard_gamma, size, fshape) - */ - __pyx_t_1 = (__pyx_v_fshape <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":1504 - * if not PyErr_Occurred(): - * if fshape <= 0: - * raise ValueError("shape <= 0") # <<<<<<<<<<<<<< - * return cont1_array_sc(self.internal_state, rk_standard_gamma, size, fshape) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1504; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_13)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_13)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_13)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1504; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1504; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":1505 - * if fshape <= 0: - * raise ValueError("shape <= 0") - * return cont1_array_sc(self.internal_state, rk_standard_gamma, size, fshape) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_cont1_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_standard_gamma, __pyx_v_size, __pyx_v_fshape); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1505; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":1507 - * return cont1_array_sc(self.internal_state, rk_standard_gamma, size, fshape) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * oshape = PyArray_FROM_OTF(shape, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oshape, 0.0)): - */ - PyErr_Clear(); - - /* "mtrand.pyx":1508 - * - * PyErr_Clear() - * oshape = PyArray_FROM_OTF(shape, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(oshape, 0.0)): - * raise ValueError("shape <= 0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_shape, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1508; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_oshape)); - __pyx_v_oshape = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":1509 - * PyErr_Clear() - * oshape = PyArray_FROM_OTF(shape, NPY_DOUBLE, NPY_ALIGNED) - * if 
np.any(np.less_equal(oshape, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("shape <= 0") - * return cont1_array(self.internal_state, rk_standard_gamma, size, oshape) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1509; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1509; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1509; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1509; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1509; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1509; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_oshape)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_oshape)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oshape)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1509; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); 
__pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1509; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1509; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1509; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":1510 - * oshape = PyArray_FROM_OTF(shape, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oshape, 0.0)): - * raise ValueError("shape <= 0") # <<<<<<<<<<<<<< - * return cont1_array(self.internal_state, rk_standard_gamma, size, oshape) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1510; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_13)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_13)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_13)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1510; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1510; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":1511 - * if np.any(np.less_equal(oshape, 0.0)): - * raise ValueError("shape <= 0") 
- * return cont1_array(self.internal_state, rk_standard_gamma, size, oshape) # <<<<<<<<<<<<<< - * - * def gamma(self, shape, scale=1.0, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = __pyx_f_6mtrand_cont1_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_standard_gamma, __pyx_v_size, __pyx_v_oshape); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1511; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.standard_gamma"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_oshape); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_shape); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":1513 - * return cont1_array(self.internal_state, rk_standard_gamma, size, oshape) - * - * def gamma(self, shape, scale=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * gamma(shape, scale=1.0, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_gamma(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_gamma[] = "\n"" gamma(shape, scale=1.0, size=None)\n""\n"" Draw samples from a Gamma distribution.\n""\n"" Samples are drawn from a Gamma distribution with specified parameters,\n"" `shape` (sometimes designated \"k\") and `scale` (sometimes designated\n"" \"theta\"), where both parameters are > 0.\n""\n"" Parameters\n"" ----------\n"" shape : scalar > 0\n"" The shape of the gamma distribution.\n"" scale : scalar > 0, optional\n"" The scale of the gamma distribution. 
Default is equal to 1.\n"" size : shape_tuple, optional\n"" Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" Returns\n"" -------\n"" out : ndarray, float\n"" Returns one sample unless `size` parameter is specified.\n""\n"" See Also\n"" --------\n"" scipy.stats.distributions.gamma : probability density function,\n"" distribution or cumulative density function, etc.\n""\n"" Notes\n"" -----\n"" The probability density for the Gamma distribution is\n""\n"" .. math:: p(x) = x^{k-1}\\frac{e^{-x/\\theta}}{\\theta^k\\Gamma(k)},\n""\n"" where :math:`k` is the shape and :math:`\\theta` the scale,\n"" and :math:`\\Gamma` is the Gamma function.\n""\n"" The Gamma distribution is often used to model the times to failure of\n"" electronic components, and arises naturally in processes for which the\n"" waiting times between Poisson distributed events are relevant.\n""\n"" References\n"" ----------\n"" .. [1] Weisstein, Eric W. \"Gamma Distribution.\" From MathWorld--A\n"" Wolfram Web Resource.\n"" http://mathworld.wolfram.com/GammaDistribution.html\n"" .. [2] Wikipedia, \"Gamma-distribution\",\n"" http://en.wikipedia.org/wiki/Gamma-distribution\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" >>> shape, scale = 2., 2. # mean and dispersion\n"" >>> s = np.random.gamma(shape, scale, 1000)\n""\n"" Display the histogram of the samples, along with\n"" the probability density function:\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> import scipy.special as sps\n"" >>> count, bins, ignored = plt.hist(s, 50, normed=True)\n"" >>> y = bins**(shape-1)*(np.exp(-bins/scale) /\n"" ... 
(sps.gamma(shape)*scale**shape))\n"" >>> plt.plot(bins, y, linewidth=2, color='r')\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_gamma(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_shape = 0; - PyObject *__pyx_v_scale = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_oshape; - PyArrayObject *__pyx_v_oscale; - double __pyx_v_fshape; - double __pyx_v_fscale; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__shape,&__pyx_n_s__scale,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("gamma"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[1] = __pyx_k_14; - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__shape); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__scale); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - case 2: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "gamma") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1513; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_shape = values[0]; - 
__pyx_v_scale = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_scale = __pyx_k_14; - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: __pyx_v_scale = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_shape = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("gamma", 0, 1, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1513; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.gamma"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_INCREF(__pyx_v_scale); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_oshape = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_oscale = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":1586 - * cdef double fshape, fscale - * - * fshape = PyFloat_AsDouble(shape) # <<<<<<<<<<<<<< - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): - */ - __pyx_v_fshape = PyFloat_AsDouble(__pyx_v_shape); - - /* "mtrand.pyx":1587 - * - * fshape = PyFloat_AsDouble(shape) - * fscale = PyFloat_AsDouble(scale) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fshape <= 0: - */ - __pyx_v_fscale = PyFloat_AsDouble(__pyx_v_scale); - - /* "mtrand.pyx":1588 - * fshape = PyFloat_AsDouble(shape) - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fshape <= 0: - * raise ValueError("shape <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":1589 - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): - * if fshape <= 0: # <<<<<<<<<<<<<< - * raise ValueError("shape <= 0") - * if fscale <= 0: - */ - __pyx_t_1 = (__pyx_v_fshape <= 0); 
- if (__pyx_t_1) { - - /* "mtrand.pyx":1590 - * if not PyErr_Occurred(): - * if fshape <= 0: - * raise ValueError("shape <= 0") # <<<<<<<<<<<<<< - * if fscale <= 0: - * raise ValueError("scale <= 0") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1590; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_13)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_13)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_13)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1590; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1590; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":1591 - * if fshape <= 0: - * raise ValueError("shape <= 0") - * if fscale <= 0: # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0") - * return cont2_array_sc(self.internal_state, rk_gamma, size, fshape, fscale) - */ - __pyx_t_1 = (__pyx_v_fscale <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":1592 - * raise ValueError("shape <= 0") - * if fscale <= 0: - * raise ValueError("scale <= 0") # <<<<<<<<<<<<<< - * return cont2_array_sc(self.internal_state, rk_gamma, size, fshape, fscale) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_9)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_9)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_9)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 1592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":1593 - * if fscale <= 0: - * raise ValueError("scale <= 0") - * return cont2_array_sc(self.internal_state, rk_gamma, size, fshape, fscale) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_f_6mtrand_cont2_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_gamma, __pyx_v_size, __pyx_v_fshape, __pyx_v_fscale); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":1595 - * return cont2_array_sc(self.internal_state, rk_gamma, size, fshape, fscale) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * oshape = PyArray_FROM_OTF(shape, NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":1596 - * - * PyErr_Clear() - * oshape = PyArray_FROM_OTF(shape, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oshape, 0.0)): - */ - __pyx_t_2 = PyArray_FROM_OTF(__pyx_v_shape, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1596; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_oshape)); - __pyx_v_oshape = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":1597 - * PyErr_Clear() - * oshape = 
PyArray_FROM_OTF(shape, NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(oshape, 0.0)): - * raise ValueError("shape <= 0") - */ - __pyx_t_2 = PyArray_FROM_OTF(__pyx_v_scale, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1597; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_oscale)); - __pyx_v_oscale = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":1598 - * oshape = PyArray_FROM_OTF(shape, NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oshape, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("shape <= 0") - * if np.any(np.less_equal(oscale, 0.0)): - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1598; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__any); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1598; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1598; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1598; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1598; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1598; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_oshape)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_oshape)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oshape)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1598; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1598; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_3, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1598; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1598; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":1599 - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oshape, 0.0)): - * raise ValueError("shape <= 0") # <<<<<<<<<<<<<< - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1599; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_13)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_13)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_13)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1599; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1599; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L9; - } - __pyx_L9:; - - /* "mtrand.pyx":1600 - * if np.any(np.less_equal(oshape, 0.0)): - * raise ValueError("shape <= 0") - * if np.any(np.less_equal(oscale, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0") - * return cont2_array(self.internal_state, rk_gamma, size, oshape, oscale) - */ - __pyx_t_5 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1600; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1600; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1600; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1600; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1600; __pyx_clineno = __LINE__; 
goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1600; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)__pyx_v_oscale)); - PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_v_oscale)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oscale)); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_3, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1600; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1600; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1600; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1600; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":1601 - * raise ValueError("shape <= 0") - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0") # <<<<<<<<<<<<<< - * return cont2_array(self.internal_state, rk_gamma, size, oshape, oscale) - * - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1601; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - 
__Pyx_INCREF(((PyObject *)__pyx_kp_s_9)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_9)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_9)); - __pyx_t_4 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1601; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1601; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L10; - } - __pyx_L10:; - - /* "mtrand.pyx":1602 - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0") - * return cont2_array(self.internal_state, rk_gamma, size, oshape, oscale) # <<<<<<<<<<<<<< - * - * def f(self, dfnum, dfden, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_4 = __pyx_f_6mtrand_cont2_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_gamma, __pyx_v_size, __pyx_v_oshape, __pyx_v_oscale); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1602; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.gamma"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_oshape); - __Pyx_DECREF((PyObject *)__pyx_v_oscale); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_shape); - __Pyx_DECREF(__pyx_v_scale); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":1604 - * return cont2_array(self.internal_state, rk_gamma, size, oshape, oscale) - * - * def f(self, dfnum, dfden, 
size=None): # <<<<<<<<<<<<<< - * """ - * f(dfnum, dfden, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_f(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_f[] = "\n"" f(dfnum, dfden, size=None)\n""\n"" Draw samples from a F distribution.\n""\n"" Samples are drawn from an F distribution with specified parameters,\n"" `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of freedom\n"" in denominator), where both parameters should be greater than zero.\n""\n"" The random variate of the F distribution (also known as the\n"" Fisher distribution) is a continuous probability distribution\n"" that arises in ANOVA tests, and is the ratio of two chi-square\n"" variates.\n""\n"" Parameters\n"" ----------\n"" dfnum : float\n"" Degrees of freedom in numerator. Should be greater than zero.\n"" dfden : float\n"" Degrees of freedom in denominator. Should be greater than zero.\n"" size : {tuple, int}, optional\n"" Output shape. If the given shape is, e.g., ``(m, n, k)``,\n"" then ``m * n * k`` samples are drawn. By default only one sample\n"" is returned.\n""\n"" Returns\n"" -------\n"" samples : {ndarray, scalar}\n"" Samples from the Fisher distribution.\n""\n"" See Also\n"" --------\n"" scipy.stats.distributions.f : probability density function,\n"" distribution or cumulative density function, etc.\n""\n"" Notes\n"" -----\n""\n"" The F statistic is used to compare in-group variances to between-group\n"" variances. Calculating the distribution depends on the sampling, and\n"" so it is a function of the respective degrees of freedom in the\n"" problem. The variable `dfnum` is the number of samples minus one, the\n"" between-groups degrees of freedom, while `dfden` is the within-groups\n"" degrees of freedom, the sum of the number of samples in each group\n"" minus the number of groups.\n""\n"" References\n"" ----------\n"" .. [1] Glantz, Stanton A. 
\"Primer of Biostatistics.\", McGraw-Hill,\n"" Fifth Edition, 2002.\n"" .. [2] Wikipedia, \"F-distribution\",\n"" http://en.wikipedia.org/wiki/F-distribution\n""\n"" Examples\n"" --------\n"" An example from Glantz[1], pp 47-40.\n"" Two groups, children of diabetics (25 people) and children from people\n"" without diabetes (25 controls). Fasting blood glucose was measured,\n"" case group had a mean value of 86.1, controls had a mean value of\n"" 82.2. Standard deviations were 2.09 and 2.49 respectively. Are these\n"" data consistent with the null hypothesis that the parents diabetic\n"" status does not affect their children's blood glucose levels?\n"" Calculating the F statistic from the data gives a value of 36.01.\n""\n"" Draw samples from the distribution:\n""\n"" >>> dfnum = 1. # between group degrees of freedom\n"" >>> dfden = 48. # within groups degrees of freedom\n"" >>> s = np.random.f(dfnum, dfden, 1000)\n""\n"" The lower bound for the top 1% of the samples is :\n""\n"" >>> sort(s)[-10]\n"" 7.61988120985\n""\n"" So there is about a 1% chance that the F statistic will exceed 7.62,\n"" the measured value is 36, so the null hypothesis is rejected at the 1%\n"" level.\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_f(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_dfnum = 0; - PyObject *__pyx_v_dfden = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_odfnum; - PyArrayObject *__pyx_v_odfden; - double __pyx_v_fdfnum; - double __pyx_v_fdfden; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__dfnum,&__pyx_n_s__dfden,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("f"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[2] = ((PyObject *)Py_None); - switch 
(PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__dfnum); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__dfden); - if (likely(values[1])) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("f", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1604; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - case 2: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "f") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1604; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_dfnum = values[0]; - __pyx_v_dfden = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: - __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: - __pyx_v_dfden = PyTuple_GET_ITEM(__pyx_args, 1); - __pyx_v_dfnum = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("f", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1604; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.f"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_dfnum); - 
__Pyx_INCREF(__pyx_v_dfden); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_odfnum = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_odfden = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":1688 - * cdef double fdfnum, fdfden - * - * fdfnum = PyFloat_AsDouble(dfnum) # <<<<<<<<<<<<<< - * fdfden = PyFloat_AsDouble(dfden) - * if not PyErr_Occurred(): - */ - __pyx_v_fdfnum = PyFloat_AsDouble(__pyx_v_dfnum); - - /* "mtrand.pyx":1689 - * - * fdfnum = PyFloat_AsDouble(dfnum) - * fdfden = PyFloat_AsDouble(dfden) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fdfnum <= 0: - */ - __pyx_v_fdfden = PyFloat_AsDouble(__pyx_v_dfden); - - /* "mtrand.pyx":1690 - * fdfnum = PyFloat_AsDouble(dfnum) - * fdfden = PyFloat_AsDouble(dfden) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fdfnum <= 0: - * raise ValueError("shape <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":1691 - * fdfden = PyFloat_AsDouble(dfden) - * if not PyErr_Occurred(): - * if fdfnum <= 0: # <<<<<<<<<<<<<< - * raise ValueError("shape <= 0") - * if fdfden <= 0: - */ - __pyx_t_1 = (__pyx_v_fdfnum <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":1692 - * if not PyErr_Occurred(): - * if fdfnum <= 0: - * raise ValueError("shape <= 0") # <<<<<<<<<<<<<< - * if fdfden <= 0: - * raise ValueError("scale <= 0") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1692; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_13)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_13)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_13)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1692; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - 
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1692; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":1693 - * if fdfnum <= 0: - * raise ValueError("shape <= 0") - * if fdfden <= 0: # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0") - * return cont2_array_sc(self.internal_state, rk_f, size, fdfnum, fdfden) - */ - __pyx_t_1 = (__pyx_v_fdfden <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":1694 - * raise ValueError("shape <= 0") - * if fdfden <= 0: - * raise ValueError("scale <= 0") # <<<<<<<<<<<<<< - * return cont2_array_sc(self.internal_state, rk_f, size, fdfnum, fdfden) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1694; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_9)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_9)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_9)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1694; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1694; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":1695 - * if fdfden <= 0: - * raise ValueError("scale <= 0") - * return cont2_array_sc(self.internal_state, rk_f, size, fdfnum, fdfden) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_f_6mtrand_cont2_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_f, __pyx_v_size, __pyx_v_fdfnum, __pyx_v_fdfden); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1695; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":1697 - * return cont2_array_sc(self.internal_state, rk_f, size, fdfnum, fdfden) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * odfnum = PyArray_FROM_OTF(dfnum, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":1699 - * PyErr_Clear() - * - * odfnum = PyArray_FROM_OTF(dfnum, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * odfden = PyArray_FROM_OTF(dfden, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(odfnum, 0.0)): - */ - __pyx_t_2 = PyArray_FROM_OTF(__pyx_v_dfnum, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1699; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_odfnum)); - __pyx_v_odfnum = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":1700 - * - * odfnum = PyArray_FROM_OTF(dfnum, NPY_DOUBLE, NPY_ALIGNED) - * odfden = PyArray_FROM_OTF(dfden, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(odfnum, 0.0)): - * raise ValueError("dfnum <= 0") - */ - __pyx_t_2 = PyArray_FROM_OTF(__pyx_v_dfden, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1700; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_odfden)); - __pyx_v_odfden = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":1701 - * odfnum = PyArray_FROM_OTF(dfnum, NPY_DOUBLE, NPY_ALIGNED) - * odfden = PyArray_FROM_OTF(dfden, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(odfnum, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("dfnum <= 0") - * if np.any(np.less_equal(odfden, 0.0)): - */ - __pyx_t_2 = 
__Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1701; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__any); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1701; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1701; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1701; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1701; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1701; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_odfnum)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_odfnum)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_odfnum)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1701; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1701; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_3, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1701; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1701; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":1702 - * odfden = PyArray_FROM_OTF(dfden, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(odfnum, 0.0)): - * raise ValueError("dfnum <= 0") # <<<<<<<<<<<<<< - * if np.any(np.less_equal(odfden, 0.0)): - * raise ValueError("dfden <= 0") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1702; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_15)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_15)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_15)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1702; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1702; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L9; - } - __pyx_L9:; - - /* "mtrand.pyx":1703 - * if np.any(np.less_equal(odfnum, 0.0)): - * raise ValueError("dfnum <= 0") - * if np.any(np.less_equal(odfden, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("dfden <= 0") - * return cont2_array(self.internal_state, rk_f, size, odfnum, odfden) - */ - 
__pyx_t_5 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1703; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1703; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1703; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1703; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1703; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1703; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)__pyx_v_odfden)); - PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_v_odfden)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_odfden)); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_3, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1703; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1703; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_GOTREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1703; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1703; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":1704 - * raise ValueError("dfnum <= 0") - * if np.any(np.less_equal(odfden, 0.0)): - * raise ValueError("dfden <= 0") # <<<<<<<<<<<<<< - * return cont2_array(self.internal_state, rk_f, size, odfnum, odfden) - * - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1704; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_16)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_16)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_16)); - __pyx_t_4 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1704; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1704; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L10; - } - __pyx_L10:; - - /* "mtrand.pyx":1705 - * if np.any(np.less_equal(odfden, 0.0)): - * raise ValueError("dfden <= 0") - * return cont2_array(self.internal_state, rk_f, size, odfnum, odfden) # <<<<<<<<<<<<<< - * - * def noncentral_f(self, dfnum, dfden, nonc, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_4 = 
__pyx_f_6mtrand_cont2_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_f, __pyx_v_size, __pyx_v_odfnum, __pyx_v_odfden); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1705; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.f"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_odfnum); - __Pyx_DECREF((PyObject *)__pyx_v_odfden); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_dfnum); - __Pyx_DECREF(__pyx_v_dfden); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":1707 - * return cont2_array(self.internal_state, rk_f, size, odfnum, odfden) - * - * def noncentral_f(self, dfnum, dfden, nonc, size=None): # <<<<<<<<<<<<<< - * """ - * noncentral_f(dfnum, dfden, nonc, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_noncentral_f(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_noncentral_f[] = "\n"" noncentral_f(dfnum, dfden, nonc, size=None)\n""\n"" Draw samples from the noncentral F distribution.\n""\n"" Samples are drawn from an F distribution with specified parameters,\n"" `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of\n"" freedom in denominator), where both parameters > 1.\n"" `nonc` is the non-centrality parameter.\n""\n"" Parameters\n"" ----------\n"" dfnum : int\n"" Parameter, should be > 1.\n"" dfden : int\n"" Parameter, should be > 1.\n"" nonc : float\n"" Parameter, should be >= 0.\n"" size : int or tuple of ints\n"" Output shape. 
If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" Returns\n"" -------\n"" samples : scalar or ndarray\n"" Drawn samples.\n""\n"" Notes\n"" -----\n"" When calculating the power of an experiment (power = probability of\n"" rejecting the null hypothesis when a specific alternative is true) the\n"" non-central F statistic becomes important. When the null hypothesis is\n"" true, the F statistic follows a central F distribution. When the null\n"" hypothesis is not true, then it follows a non-central F distribution.\n""\n"" References\n"" ----------\n"" Weisstein, Eric W. \"Noncentral F-Distribution.\" From MathWorld--A Wolfram\n"" Web Resource. http://mathworld.wolfram.com/NoncentralF-Distribution.html\n""\n"" Wikipedia, \"Noncentral F distribution\",\n"" http://en.wikipedia.org/wiki/Noncentral_F-distribution\n""\n"" Examples\n"" --------\n"" In a study, testing for a specific alternative to the null hypothesis\n"" requires use of the Noncentral F distribution. We need to calculate the\n"" area in the tail of the distribution that exceeds the value of the F\n"" distribution for the null hypothesis.
We'll plot the two probability\n"" distributions for comparison.\n""\n"" >>> dfnum = 3 # between group deg of freedom\n"" >>> dfden = 20 # within groups degrees of freedom\n"" >>> nonc = 3.0\n"" >>> nc_vals = np.random.noncentral_f(dfnum, dfden, nonc, 1000000)\n"" >>> NF = np.histogram(nc_vals, bins=50, normed=True)\n"" >>> c_vals = np.random.f(dfnum, dfden, 1000000)\n"" >>> F = np.histogram(c_vals, bins=50, normed=True)\n"" >>> plt.plot(F[1][1:], F[0])\n"" >>> plt.plot(NF[1][1:], NF[0])\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_noncentral_f(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_dfnum = 0; - PyObject *__pyx_v_dfden = 0; - PyObject *__pyx_v_nonc = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_odfnum; - PyArrayObject *__pyx_v_odfden; - PyArrayObject *__pyx_v_ononc; - double __pyx_v_fdfnum; - double __pyx_v_fdfden; - double __pyx_v_fnonc; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__dfnum,&__pyx_n_s__dfden,&__pyx_n_s__nonc,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("noncentral_f"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[4] = {0,0,0,0}; - values[3] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__dfnum); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - values[1] = PyDict_GetItem(__pyx_kwds, 
__pyx_n_s__dfden); - if (likely(values[1])) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("noncentral_f", 0, 3, 4, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1707; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - case 2: - values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__nonc); - if (likely(values[2])) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("noncentral_f", 0, 3, 4, 2); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1707; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - case 3: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[3] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "noncentral_f") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1707; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_dfnum = values[0]; - __pyx_v_dfden = values[1]; - __pyx_v_nonc = values[2]; - __pyx_v_size = values[3]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 4: - __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 3); - case 3: - __pyx_v_nonc = PyTuple_GET_ITEM(__pyx_args, 2); - __pyx_v_dfden = PyTuple_GET_ITEM(__pyx_args, 1); - __pyx_v_dfnum = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("noncentral_f", 0, 3, 4, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1707; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.noncentral_f"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_dfnum); - __Pyx_INCREF(__pyx_v_dfden); - __Pyx_INCREF(__pyx_v_nonc); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_odfnum = ((PyArrayObject 
*)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_odfden = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_ononc = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":1774 - * cdef double fdfnum, fdfden, fnonc - * - * fdfnum = PyFloat_AsDouble(dfnum) # <<<<<<<<<<<<<< - * fdfden = PyFloat_AsDouble(dfden) - * fnonc = PyFloat_AsDouble(nonc) - */ - __pyx_v_fdfnum = PyFloat_AsDouble(__pyx_v_dfnum); - - /* "mtrand.pyx":1775 - * - * fdfnum = PyFloat_AsDouble(dfnum) - * fdfden = PyFloat_AsDouble(dfden) # <<<<<<<<<<<<<< - * fnonc = PyFloat_AsDouble(nonc) - * if not PyErr_Occurred(): - */ - __pyx_v_fdfden = PyFloat_AsDouble(__pyx_v_dfden); - - /* "mtrand.pyx":1776 - * fdfnum = PyFloat_AsDouble(dfnum) - * fdfden = PyFloat_AsDouble(dfden) - * fnonc = PyFloat_AsDouble(nonc) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fdfnum <= 1: - */ - __pyx_v_fnonc = PyFloat_AsDouble(__pyx_v_nonc); - - /* "mtrand.pyx":1777 - * fdfden = PyFloat_AsDouble(dfden) - * fnonc = PyFloat_AsDouble(nonc) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fdfnum <= 1: - * raise ValueError("dfnum <= 1") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":1778 - * fnonc = PyFloat_AsDouble(nonc) - * if not PyErr_Occurred(): - * if fdfnum <= 1: # <<<<<<<<<<<<<< - * raise ValueError("dfnum <= 1") - * if fdfden <= 0: - */ - __pyx_t_1 = (__pyx_v_fdfnum <= 1); - if (__pyx_t_1) { - - /* "mtrand.pyx":1779 - * if not PyErr_Occurred(): - * if fdfnum <= 1: - * raise ValueError("dfnum <= 1") # <<<<<<<<<<<<<< - * if fdfden <= 0: - * raise ValueError("dfden <= 0") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1779; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_17)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_17)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_17)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, 
__pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1779; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1779; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":1780 - * if fdfnum <= 1: - * raise ValueError("dfnum <= 1") - * if fdfden <= 0: # <<<<<<<<<<<<<< - * raise ValueError("dfden <= 0") - * if fnonc < 0: - */ - __pyx_t_1 = (__pyx_v_fdfden <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":1781 - * raise ValueError("dfnum <= 1") - * if fdfden <= 0: - * raise ValueError("dfden <= 0") # <<<<<<<<<<<<<< - * if fnonc < 0: - * raise ValueError("nonc < 0") - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1781; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_16)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_16)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_16)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1781; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1781; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":1782 - * if fdfden <= 0: - * raise ValueError("dfden <= 0") - * if fnonc < 0: # <<<<<<<<<<<<<< - * raise ValueError("nonc < 0") - * return cont3_array_sc(self.internal_state, rk_noncentral_f, size, - */ - __pyx_t_1 = (__pyx_v_fnonc < 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":1783 - * raise ValueError("dfden <= 0") - * if 
fnonc < 0: - * raise ValueError("nonc < 0") # <<<<<<<<<<<<<< - * return cont3_array_sc(self.internal_state, rk_noncentral_f, size, - * fdfnum, fdfden, fnonc) - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1783; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_18)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_18)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_18)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1783; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1783; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L9; - } - __pyx_L9:; - - /* "mtrand.pyx":1784 - * if fnonc < 0: - * raise ValueError("nonc < 0") - * return cont3_array_sc(self.internal_state, rk_noncentral_f, size, # <<<<<<<<<<<<<< - * fdfnum, fdfden, fnonc) - * - */ - __Pyx_XDECREF(__pyx_r); - - /* "mtrand.pyx":1785 - * raise ValueError("nonc < 0") - * return cont3_array_sc(self.internal_state, rk_noncentral_f, size, - * fdfnum, fdfden, fnonc) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __pyx_t_3 = __pyx_f_6mtrand_cont3_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_noncentral_f, __pyx_v_size, __pyx_v_fdfnum, __pyx_v_fdfden, __pyx_v_fnonc); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1784; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":1787 - * fdfnum, fdfden, fnonc) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * odfnum = PyArray_FROM_OTF(dfnum, NPY_DOUBLE, NPY_ALIGNED) - */ 
- PyErr_Clear(); - - /* "mtrand.pyx":1789 - * PyErr_Clear() - * - * odfnum = PyArray_FROM_OTF(dfnum, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * odfden = PyArray_FROM_OTF(dfden, NPY_DOUBLE, NPY_ALIGNED) - * ononc = PyArray_FROM_OTF(nonc, NPY_DOUBLE, NPY_ALIGNED) - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_dfnum, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1789; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_odfnum)); - __pyx_v_odfnum = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":1790 - * - * odfnum = PyArray_FROM_OTF(dfnum, NPY_DOUBLE, NPY_ALIGNED) - * odfden = PyArray_FROM_OTF(dfden, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * ononc = PyArray_FROM_OTF(nonc, NPY_DOUBLE, NPY_ALIGNED) - * - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_dfden, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1790; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_odfden)); - __pyx_v_odfden = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":1791 - * odfnum = PyArray_FROM_OTF(dfnum, NPY_DOUBLE, NPY_ALIGNED) - * odfden = PyArray_FROM_OTF(dfden, NPY_DOUBLE, NPY_ALIGNED) - * ononc = PyArray_FROM_OTF(nonc, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * - * if np.any(np.less_equal(odfnum, 1.0)): - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_nonc, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1791; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_ononc)); - __pyx_v_ononc = ((PyArrayObject 
*)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":1793 - * ononc = PyArray_FROM_OTF(nonc, NPY_DOUBLE, NPY_ALIGNED) - * - * if np.any(np.less_equal(odfnum, 1.0)): # <<<<<<<<<<<<<< - * raise ValueError("dfnum <= 1") - * if np.any(np.less_equal(odfden, 0.0)): - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1793; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1793; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1793; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1793; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble(1.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1793; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1793; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_odfnum)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_odfnum)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_odfnum)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1793; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1793; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1793; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1793; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":1794 - * - * if np.any(np.less_equal(odfnum, 1.0)): - * raise ValueError("dfnum <= 1") # <<<<<<<<<<<<<< - * if np.any(np.less_equal(odfden, 0.0)): - * raise ValueError("dfden <= 0") - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1794; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_17)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_17)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_17)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1794; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1794; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L10; - } - __pyx_L10:; - - /* "mtrand.pyx":1795 - * if 
np.any(np.less_equal(odfnum, 1.0)): - * raise ValueError("dfnum <= 1") - * if np.any(np.less_equal(odfden, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("dfden <= 0") - * if np.any(np.less(ononc, 0.0)): - */ - __pyx_t_5 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1795; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__any); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1795; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1795; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1795; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1795; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1795; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)__pyx_v_odfden)); - PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_v_odfden)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_odfden)); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1795; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 
0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1795; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_3, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1795; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1795; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":1796 - * raise ValueError("dfnum <= 1") - * if np.any(np.less_equal(odfden, 0.0)): - * raise ValueError("dfden <= 0") # <<<<<<<<<<<<<< - * if np.any(np.less(ononc, 0.0)): - * raise ValueError("nonc < 0") - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1796; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_16)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_16)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_16)); - __pyx_t_4 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1796; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1796; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L11; - } - __pyx_L11:; - - /* "mtrand.pyx":1797 - * if np.any(np.less_equal(odfden, 0.0)): - * raise ValueError("dfden <= 0") - * if 
np.any(np.less(ononc, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("nonc < 0") - * return cont3_array(self.internal_state, rk_noncentral_f, size, odfnum, - */ - __pyx_t_4 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1797; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_4, __pyx_n_s__any); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1797; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1797; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_4, __pyx_n_s__less); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1797; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1797; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1797; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_v_ononc)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_v_ononc)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_ononc)); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1797; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = 
PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1797; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = PyObject_Call(__pyx_t_5, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1797; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1797; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":1798 - * raise ValueError("dfden <= 0") - * if np.any(np.less(ononc, 0.0)): - * raise ValueError("nonc < 0") # <<<<<<<<<<<<<< - * return cont3_array(self.internal_state, rk_noncentral_f, size, odfnum, - * odfden, ononc) - */ - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1798; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_18)); - PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_kp_s_18)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_18)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1798; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1798; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L12; - } - __pyx_L12:; - - /* "mtrand.pyx":1799 - * if np.any(np.less(ononc, 0.0)): - * raise ValueError("nonc < 0") - * return cont3_array(self.internal_state, 
rk_noncentral_f, size, odfnum, # <<<<<<<<<<<<<< - * odfden, ononc) - * - */ - __Pyx_XDECREF(__pyx_r); - - /* "mtrand.pyx":1800 - * raise ValueError("nonc < 0") - * return cont3_array(self.internal_state, rk_noncentral_f, size, odfnum, - * odfden, ononc) # <<<<<<<<<<<<<< - * - * def chisquare(self, df, size=None): - */ - __pyx_t_2 = __pyx_f_6mtrand_cont3_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_noncentral_f, __pyx_v_size, __pyx_v_odfnum, __pyx_v_odfden, __pyx_v_ononc); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1799; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.noncentral_f"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_odfnum); - __Pyx_DECREF((PyObject *)__pyx_v_odfden); - __Pyx_DECREF((PyObject *)__pyx_v_ononc); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_dfnum); - __Pyx_DECREF(__pyx_v_dfden); - __Pyx_DECREF(__pyx_v_nonc); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":1802 - * odfden, ononc) - * - * def chisquare(self, df, size=None): # <<<<<<<<<<<<<< - * """ - * chisquare(df, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_chisquare(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_chisquare[] = "\n chisquare(df, size=None)\n\n Draw samples from a chi-square distribution.\n\n When `df` independent random variables, each with standard\n normal distributions (mean 0, variance 1), are squared and summed,\n the resulting distribution is chi-square (see Notes). 
This\n distribution is often used in hypothesis testing.\n\n Parameters\n ----------\n df : int\n Number of degrees of freedom.\n size : tuple of ints, int, optional\n Size of the returned array. By default, a scalar is\n returned.\n\n Returns\n -------\n output : ndarray\n Samples drawn from the distribution, packed in a `size`-shaped\n array.\n\n Raises\n ------\n ValueError\n When `df` <= 0 or when an inappropriate `size` (e.g. ``size=-1``)\n is given.\n\n Notes\n -----\n The variable obtained by summing the squares of `df` independent,\n standard normally distributed random variables:\n\n .. math:: Q = \\sum_{i=1}^{\\mathtt{df}} X^2_i\n\n is chi-square distributed, denoted\n\n .. math:: Q \\sim \\chi^2_k.\n\n The probability density function of the chi-squared distribution is\n\n .. math:: p(x) = \\frac{(1/2)^{k/2}}{\\Gamma(k/2)}\n x^{k/2 - 1} e^{-x/2},\n\n where :math:`\\Gamma` is the gamma function,\n\n .. math:: \\Gamma(x) = \\int_0^{\\infty} t^{x - 1} e^{-t} dt.\n\n References\n ----------\n .. [1] NIST/SEMATECH e-Handbook of Statistical Methods,\n http://www.itl.nist.gov/div898/handbook/eda/section3/eda3666.htm\n .. 
[2] Wikipedia, \"Chi-square distribution\",\n http://en.wikipedia.org/wiki/Chi-square_distribution\n\n Examples\n --------\n >>> np.random.chisquare(2,4)\n array([ 1.89920014, 9.00867716, 3.13710533, 5.62318272])\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_chisquare(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_df = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_odf; - double __pyx_v_fdf; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__df,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("chisquare"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[2] = {0,0}; - values[1] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__df); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "chisquare") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1802; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_df = values[0]; - __pyx_v_size = values[1]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_df = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: 
goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("chisquare", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1802; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.chisquare"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_df); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_odf = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":1869 - * cdef double fdf - * - * fdf = PyFloat_AsDouble(df) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fdf <= 0: - */ - __pyx_v_fdf = PyFloat_AsDouble(__pyx_v_df); - - /* "mtrand.pyx":1870 - * - * fdf = PyFloat_AsDouble(df) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fdf <= 0: - * raise ValueError("df <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":1871 - * fdf = PyFloat_AsDouble(df) - * if not PyErr_Occurred(): - * if fdf <= 0: # <<<<<<<<<<<<<< - * raise ValueError("df <= 0") - * return cont1_array_sc(self.internal_state, rk_chisquare, size, fdf) - */ - __pyx_t_1 = (__pyx_v_fdf <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":1872 - * if not PyErr_Occurred(): - * if fdf <= 0: - * raise ValueError("df <= 0") # <<<<<<<<<<<<<< - * return cont1_array_sc(self.internal_state, rk_chisquare, size, fdf) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1872; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_19)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_19)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_19)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1872; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1872; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":1873 - * if fdf <= 0: - * raise ValueError("df <= 0") - * return cont1_array_sc(self.internal_state, rk_chisquare, size, fdf) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_cont1_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_chisquare, __pyx_v_size, __pyx_v_fdf); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1873; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":1875 - * return cont1_array_sc(self.internal_state, rk_chisquare, size, fdf) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * odf = PyArray_FROM_OTF(df, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":1877 - * PyErr_Clear() - * - * odf = PyArray_FROM_OTF(df, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(odf, 0.0)): - * raise ValueError("df <= 0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_df, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1877; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_odf)); - __pyx_v_odf = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":1878 - * - * odf = PyArray_FROM_OTF(df, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(odf, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("df <= 0") - * return cont1_array(self.internal_state, rk_chisquare, size, odf) - */ - __pyx_t_3 
= __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1878; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1878; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1878; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1878; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1878; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1878; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_odf)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_odf)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_odf)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1878; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1878; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - 
PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1878; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1878; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":1879 - * odf = PyArray_FROM_OTF(df, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(odf, 0.0)): - * raise ValueError("df <= 0") # <<<<<<<<<<<<<< - * return cont1_array(self.internal_state, rk_chisquare, size, odf) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1879; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_19)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_19)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_19)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1879; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1879; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":1880 - * if np.any(np.less_equal(odf, 0.0)): - * raise ValueError("df <= 0") - * return cont1_array(self.internal_state, rk_chisquare, size, odf) # <<<<<<<<<<<<<< - * - * def noncentral_chisquare(self, df, nonc, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = 
__pyx_f_6mtrand_cont1_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_chisquare, __pyx_v_size, __pyx_v_odf); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1880; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.chisquare"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_odf); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_df); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":1882 - * return cont1_array(self.internal_state, rk_chisquare, size, odf) - * - * def noncentral_chisquare(self, df, nonc, size=None): # <<<<<<<<<<<<<< - * """ - * noncentral_chisquare(df, nonc, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_noncentral_chisquare(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_noncentral_chisquare[] = "\n"" noncentral_chisquare(df, nonc, size=None)\n""\n"" Draw samples from a noncentral chi-square distribution.\n""\n"" The noncentral :math:`\\chi^2` distribution is a generalisation of\n"" the :math:`\\chi^2` distribution.\n""\n"" Parameters\n"" ----------\n"" df : int\n"" Degrees of freedom, should be >= 1.\n"" nonc : float\n"" Non-centrality, should be > 0.\n"" size : int or tuple of ints\n"" Shape of the output.\n""\n"" Notes\n"" -----\n"" The probability density function for the noncentral Chi-square distribution\n"" is\n""\n"" .. 
math:: P(x;df,nonc) = \\sum^{\\infty}_{i=0}\n"" \\frac{e^{-nonc/2}(nonc/2)^{i}}{i!}P_{Y_{df+2i}}(x),\n""\n"" where :math:`Y_{q}` is the Chi-square with q degrees of freedom.\n""\n"" In Delhi (1970), it is noted that the noncentral chi-square is useful in\n"" bombing and coverage problems, where the probability of killing the point\n"" target is given by the noncentral chi-squared distribution.\n""\n"" References\n"" ----------\n"" .. [1] Delhi, M.S. Holla, \"On a noncentral chi-square distribution in the\n"" analysis of weapon systems effectiveness\", Metrika, Volume 15,\n"" Number 1 / December, 1970.\n"" .. [2] Wikipedia, \"Noncentral chi-square distribution\"\n"" http://en.wikipedia.org/wiki/Noncentral_chi-square_distribution\n""\n"" Examples\n"" --------\n"" Draw values from the distribution and plot the histogram\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000),\n"" ... bins=200, normed=True)\n"" >>> plt.show()\n""\n"" Draw values from a noncentral chisquare with very small noncentrality,\n"" and compare to a chisquare.\n""\n"" >>> plt.figure()\n"" >>> values = plt.hist(np.random.noncentral_chisquare(3, .0000001, 100000),\n"" ... bins=np.arange(0., 25, .1), normed=True)\n"" >>> values2 = plt.hist(np.random.chisquare(3, 100000),\n"" ... bins=np.arange(0., 25, .1), normed=True)\n"" >>> plt.plot(values[1][0:-1], values[0]-values2[0], 'ob')\n"" >>> plt.show()\n""\n"" Demonstrate how large values of non-centrality lead to a more symmetric\n"" distribution.\n""\n"" >>> plt.figure()\n"" >>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000),\n"" ... 
bins=200, normed=True)\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_noncentral_chisquare(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_df = 0; - PyObject *__pyx_v_nonc = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_odf; - PyArrayObject *__pyx_v_ononc; - double __pyx_v_fdf; - double __pyx_v_fnonc; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__df,&__pyx_n_s__nonc,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("noncentral_chisquare"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__df); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__nonc); - if (likely(values[1])) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("noncentral_chisquare", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1882; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - case 2: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "noncentral_chisquare") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1882; __pyx_clineno = __LINE__; 
goto __pyx_L3_error;} - } - __pyx_v_df = values[0]; - __pyx_v_nonc = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: - __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: - __pyx_v_nonc = PyTuple_GET_ITEM(__pyx_args, 1); - __pyx_v_df = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("noncentral_chisquare", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1882; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.noncentral_chisquare"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_df); - __Pyx_INCREF(__pyx_v_nonc); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_odf = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_ononc = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":1953 - * cdef ndarray odf, ononc - * cdef double fdf, fnonc - * fdf = PyFloat_AsDouble(df) # <<<<<<<<<<<<<< - * fnonc = PyFloat_AsDouble(nonc) - * if not PyErr_Occurred(): - */ - __pyx_v_fdf = PyFloat_AsDouble(__pyx_v_df); - - /* "mtrand.pyx":1954 - * cdef double fdf, fnonc - * fdf = PyFloat_AsDouble(df) - * fnonc = PyFloat_AsDouble(nonc) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fdf <= 1: - */ - __pyx_v_fnonc = PyFloat_AsDouble(__pyx_v_nonc); - - /* "mtrand.pyx":1955 - * fdf = PyFloat_AsDouble(df) - * fnonc = PyFloat_AsDouble(nonc) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fdf <= 1: - * raise ValueError("df <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":1956 - * fnonc = PyFloat_AsDouble(nonc) - * if not PyErr_Occurred(): - * if fdf <= 1: # <<<<<<<<<<<<<< - * raise ValueError("df <= 0") - * if fnonc <= 0: - */ - 
__pyx_t_1 = (__pyx_v_fdf <= 1); - if (__pyx_t_1) { - - /* "mtrand.pyx":1957 - * if not PyErr_Occurred(): - * if fdf <= 1: - * raise ValueError("df <= 0") # <<<<<<<<<<<<<< - * if fnonc <= 0: - * raise ValueError("nonc <= 0") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1957; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_19)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_19)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_19)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1957; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1957; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":1958 - * if fdf <= 1: - * raise ValueError("df <= 0") - * if fnonc <= 0: # <<<<<<<<<<<<<< - * raise ValueError("nonc <= 0") - * return cont2_array_sc(self.internal_state, rk_noncentral_chisquare, - */ - __pyx_t_1 = (__pyx_v_fnonc <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":1959 - * raise ValueError("df <= 0") - * if fnonc <= 0: - * raise ValueError("nonc <= 0") # <<<<<<<<<<<<<< - * return cont2_array_sc(self.internal_state, rk_noncentral_chisquare, - * size, fdf, fnonc) - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1959; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_20)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_20)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_20)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = 
__pyx_f[0]; __pyx_lineno = 1959; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1959; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":1960 - * if fnonc <= 0: - * raise ValueError("nonc <= 0") - * return cont2_array_sc(self.internal_state, rk_noncentral_chisquare, # <<<<<<<<<<<<<< - * size, fdf, fnonc) - * - */ - __Pyx_XDECREF(__pyx_r); - - /* "mtrand.pyx":1961 - * raise ValueError("nonc <= 0") - * return cont2_array_sc(self.internal_state, rk_noncentral_chisquare, - * size, fdf, fnonc) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __pyx_t_2 = __pyx_f_6mtrand_cont2_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_noncentral_chisquare, __pyx_v_size, __pyx_v_fdf, __pyx_v_fnonc); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1960; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":1963 - * size, fdf, fnonc) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * odf = PyArray_FROM_OTF(df, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":1965 - * PyErr_Clear() - * - * odf = PyArray_FROM_OTF(df, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * ononc = PyArray_FROM_OTF(nonc, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(odf, 0.0)): - */ - __pyx_t_2 = PyArray_FROM_OTF(__pyx_v_df, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1965; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_odf)); - __pyx_v_odf = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); 
__pyx_t_2 = 0; - - /* "mtrand.pyx":1966 - * - * odf = PyArray_FROM_OTF(df, NPY_DOUBLE, NPY_ALIGNED) - * ononc = PyArray_FROM_OTF(nonc, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(odf, 0.0)): - * raise ValueError("df <= 1") - */ - __pyx_t_2 = PyArray_FROM_OTF(__pyx_v_nonc, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1966; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_ononc)); - __pyx_v_ononc = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":1967 - * odf = PyArray_FROM_OTF(df, NPY_DOUBLE, NPY_ALIGNED) - * ononc = PyArray_FROM_OTF(nonc, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(odf, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("df <= 1") - * if np.any(np.less_equal(ononc, 0.0)): - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1967; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__any); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1967; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1967; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1967; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1967; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1967; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_odf)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_odf)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_odf)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1967; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1967; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_3, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1967; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1967; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":1968 - * ononc = PyArray_FROM_OTF(nonc, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(odf, 0.0)): - * raise ValueError("df <= 1") # <<<<<<<<<<<<<< - * if np.any(np.less_equal(ononc, 0.0)): - * raise ValueError("nonc < 0") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1968; __pyx_clineno = __LINE__; goto __pyx_L1_error;} 
- __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_21)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_21)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_21)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1968; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1968; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L9; - } - __pyx_L9:; - - /* "mtrand.pyx":1969 - * if np.any(np.less_equal(odf, 0.0)): - * raise ValueError("df <= 1") - * if np.any(np.less_equal(ononc, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("nonc < 0") - * return cont2_array(self.internal_state, rk_noncentral_chisquare, size, - */ - __pyx_t_5 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1969; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1969; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1969; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1969; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1969; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1969; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)__pyx_v_ononc)); - PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_v_ononc)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_ononc)); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_3, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1969; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1969; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1969; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1969; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":1970 - * raise ValueError("df <= 1") - * if np.any(np.less_equal(ononc, 0.0)): - * raise ValueError("nonc < 0") # <<<<<<<<<<<<<< - * return cont2_array(self.internal_state, rk_noncentral_chisquare, size, - * odf, ononc) - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1970; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - 
__Pyx_INCREF(((PyObject *)__pyx_kp_s_18)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_18)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_18)); - __pyx_t_4 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1970; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1970; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L10; - } - __pyx_L10:; - - /* "mtrand.pyx":1971 - * if np.any(np.less_equal(ononc, 0.0)): - * raise ValueError("nonc < 0") - * return cont2_array(self.internal_state, rk_noncentral_chisquare, size, # <<<<<<<<<<<<<< - * odf, ononc) - * - */ - __Pyx_XDECREF(__pyx_r); - - /* "mtrand.pyx":1972 - * raise ValueError("nonc < 0") - * return cont2_array(self.internal_state, rk_noncentral_chisquare, size, - * odf, ononc) # <<<<<<<<<<<<<< - * - * def standard_cauchy(self, size=None): - */ - __pyx_t_4 = __pyx_f_6mtrand_cont2_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_noncentral_chisquare, __pyx_v_size, __pyx_v_odf, __pyx_v_ononc); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1971; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.noncentral_chisquare"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_odf); - __Pyx_DECREF((PyObject *)__pyx_v_ononc); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_df); - __Pyx_DECREF(__pyx_v_nonc); - __Pyx_DECREF(__pyx_v_size); - 
__Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":1974 - * odf, ononc) - * - * def standard_cauchy(self, size=None): # <<<<<<<<<<<<<< - * """ - * standard_cauchy(size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_standard_cauchy(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_standard_cauchy[] = "\n"" standard_cauchy(size=None)\n""\n"" Standard Cauchy distribution with mode = 0.\n""\n"" Also known as the Lorentz distribution.\n""\n"" Parameters\n"" ----------\n"" size : int or tuple of ints\n"" Shape of the output.\n""\n"" Returns\n"" -------\n"" samples : ndarray or scalar\n"" The drawn samples.\n""\n"" Notes\n"" -----\n"" The probability density function for the full Cauchy distribution is\n""\n"" .. math:: P(x; x_0, \\gamma) = \\frac{1}{\\pi \\gamma \\bigl[ 1+\n"" (\\frac{x-x_0}{\\gamma})^2 \\bigr] }\n""\n"" and the Standard Cauchy distribution just sets :math:`x_0=0` and\n"" :math:`\\gamma=1`\n""\n"" The Cauchy distribution arises in the solution to the driven harmonic\n"" oscillator problem, and also describes spectral line broadening. It\n"" also describes the distribution of values at which a line tilted at\n"" a random angle will cut the x axis.\n""\n"" When studying hypothesis tests that assume normality, seeing how the\n"" tests perform on data from a Cauchy distribution is a good indicator of\n"" their sensitivity to a heavy-tailed distribution, since the Cauchy looks\n"" very much like a Gaussian distribution, but with heavier tails.\n""\n"" References\n"" ----------\n"" ..[1] NIST/SEMATECH e-Handbook of Statistical Methods, \"Cauchy\n"" Distribution\",\n"" http://www.itl.nist.gov/div898/handbook/eda/section3/eda3663.htm\n"" ..[2] Weisstein, Eric W. 
\"Cauchy Distribution.\" From MathWorld--A\n"" Wolfram Web Resource.\n"" http://mathworld.wolfram.com/CauchyDistribution.html\n"" ..[3] Wikipedia, \"Cauchy distribution\"\n"" http://en.wikipedia.org/wiki/Cauchy_distribution\n""\n"" Examples\n"" --------\n"" Draw samples and plot the distribution:\n""\n"" >>> s = np.random.standard_cauchy(1000000)\n"" >>> s = s[(s>-25) & (s<25)] # truncate distribution so it plots well\n"" >>> plt.hist(s, bins=100)\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_standard_cauchy(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_size = 0; - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("standard_cauchy"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[1] = {0}; - values[0] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[0] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "standard_cauchy") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1974; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_size = values[0]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 1: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("standard_cauchy", 0, 0, 1, PyTuple_GET_SIZE(__pyx_args)); 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 1974; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.standard_cauchy"); - return NULL; - __pyx_L4_argument_unpacking_done:; - - /* "mtrand.pyx":2033 - * - * """ - * return cont0_array(self.internal_state, rk_standard_cauchy, size) # <<<<<<<<<<<<<< - * - * def standard_t(self, df, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_f_6mtrand_cont0_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_standard_cauchy, __pyx_v_size); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2033; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("mtrand.RandomState.standard_cauchy"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":2035 - * return cont0_array(self.internal_state, rk_standard_cauchy, size) - * - * def standard_t(self, df, size=None): # <<<<<<<<<<<<<< - * """ - * standard_t(df, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_standard_t(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_standard_t[] = "\n"" standard_t(df, size=None)\n""\n"" Standard Student's t distribution with df degrees of freedom.\n""\n"" A special case of the hyperbolic distribution.\n"" As `df` gets large, the result resembles that of the standard normal\n"" distribution (`standard_normal`).\n""\n"" Parameters\n"" ----------\n"" df : int\n"" Degrees of freedom, should be > 0.\n"" size : int or tuple of ints, optional\n"" Output shape. 
Default is None, in which case a single value is\n"" returned.\n""\n"" Returns\n"" -------\n"" samples : ndarray or scalar\n"" Drawn samples.\n""\n"" Notes\n"" -----\n"" The probability density function for the t distribution is\n""\n"" .. math:: P(x, df) = \\frac{\\Gamma(\\frac{df+1}{2})}{\\sqrt{\\pi df}\n"" \\Gamma(\\frac{df}{2})}\\Bigl( 1+\\frac{x^2}{df} \\Bigr)^{-(df+1)/2}\n""\n"" The t test is based on an assumption that the data come from a Normal\n"" distribution. The t test provides a way to test whether the sample mean\n"" (that is the mean calculated from the data) is a good estimate of the true\n"" mean.\n""\n"" The derivation of the t-distribution was first published in 1908 by William\n"" Gosset while working for the Guinness Brewery in Dublin. Due to proprietary\n"" issues, he had to publish under a pseudonym, and so he used the name\n"" Student.\n""\n"" References\n"" ----------\n"" .. [1] Dalgaard, Peter, \"Introductory Statistics With R\",\n"" Springer, 2002.\n"" .. [2] Wikipedia, \"Student's t-distribution\"\n"" http://en.wikipedia.org/wiki/Student's_t-distribution\n""\n"" Examples\n"" --------\n"" From Dalgaard page 83 [1]_, suppose the daily energy intake for 11\n"" women in kJ is:\n""\n"" >>> intake = np.array([5260., 5470, 5640, 6180, 6390, 6515, 6805, 7515, \\\n"" ... 
7515, 8230, 8770])\n""\n"" Does their energy intake deviate systematically from the recommended\n"" value of 7725 kJ?\n""\n"" We have 10 degrees of freedom, so is the sample mean within 95% of the\n"" recommended value?\n""\n"" >>> s = np.random.standard_t(10, size=100000)\n"" >>> np.mean(intake)\n"" 6753.636363636364\n"" >>> intake.std(ddof=1)\n"" 1142.1232221373727\n""\n"" Calculate the t statistic, setting the ddof parameter to the unbiased\n"" value so the divisor in the standard deviation will be degrees of\n"" freedom, N-1.\n""\n"" >>> t = (np.mean(intake)-7725)/(intake.std(ddof=1)/np.sqrt(len(intake)))\n"" >>> import matplotlib.pyplot as plt\n"" >>> h = plt.hist(s, bins=100, normed=True)\n""\n"" For a one-sided t-test, how far out in the distribution does the t\n"" statistic appear?\n""\n"" >>> np.sum(s<t) / float(len(s))\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_standard_t(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_df = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_odf; - double __pyx_v_fdf; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__df,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("standard_t"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[2] = {0,0}; - values[1] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__df); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "standard_t") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2035; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_df = values[0]; - __pyx_v_size = values[1]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_df = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("standard_t", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2035; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.standard_t"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_df); - 
__Pyx_INCREF(__pyx_v_size); - __pyx_v_odf = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":2123 - * cdef double fdf - * - * fdf = PyFloat_AsDouble(df) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fdf <= 0: - */ - __pyx_v_fdf = PyFloat_AsDouble(__pyx_v_df); - - /* "mtrand.pyx":2124 - * - * fdf = PyFloat_AsDouble(df) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fdf <= 0: - * raise ValueError("df <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":2125 - * fdf = PyFloat_AsDouble(df) - * if not PyErr_Occurred(): - * if fdf <= 0: # <<<<<<<<<<<<<< - * raise ValueError("df <= 0") - * return cont1_array_sc(self.internal_state, rk_standard_t, size, fdf) - */ - __pyx_t_1 = (__pyx_v_fdf <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":2126 - * if not PyErr_Occurred(): - * if fdf <= 0: - * raise ValueError("df <= 0") # <<<<<<<<<<<<<< - * return cont1_array_sc(self.internal_state, rk_standard_t, size, fdf) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2126; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_19)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_19)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_19)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2126; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2126; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":2127 - * if fdf <= 0: - * raise ValueError("df <= 0") - * return cont1_array_sc(self.internal_state, rk_standard_t, size, fdf) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - 
__Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_cont1_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_standard_t, __pyx_v_size, __pyx_v_fdf); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2127; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":2129 - * return cont1_array_sc(self.internal_state, rk_standard_t, size, fdf) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * odf = PyArray_FROM_OTF(df, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":2131 - * PyErr_Clear() - * - * odf = PyArray_FROM_OTF(df, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(odf, 0.0)): - * raise ValueError("df <= 0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_df, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2131; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_odf)); - __pyx_v_odf = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":2132 - * - * odf = PyArray_FROM_OTF(df, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(odf, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("df <= 0") - * return cont1_array(self.internal_state, rk_standard_t, size, odf) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, 
__pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_odf)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_odf)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_odf)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = 
__pyx_f[0]; __pyx_lineno = 2132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":2133 - * odf = PyArray_FROM_OTF(df, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(odf, 0.0)): - * raise ValueError("df <= 0") # <<<<<<<<<<<<<< - * return cont1_array(self.internal_state, rk_standard_t, size, odf) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2133; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_19)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_19)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_19)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2133; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2133; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":2134 - * if np.any(np.less_equal(odf, 0.0)): - * raise ValueError("df <= 0") - * return cont1_array(self.internal_state, rk_standard_t, size, odf) # <<<<<<<<<<<<<< - * - * def vonmises(self, mu, kappa, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = __pyx_f_6mtrand_cont1_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_standard_t, __pyx_v_size, __pyx_v_odf); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2134; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); 
- __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.standard_t"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_odf); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_df); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":2136 - * return cont1_array(self.internal_state, rk_standard_t, size, odf) - * - * def vonmises(self, mu, kappa, size=None): # <<<<<<<<<<<<<< - * """ - * vonmises(mu=0.0, kappa=1.0, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_vonmises(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_vonmises[] = "\n"" vonmises(mu=0.0, kappa=1.0, size=None)\n""\n"" Draw samples from a von Mises distribution.\n""\n"" Samples are drawn from a von Mises distribution with specified mode (mu)\n"" and dispersion (kappa), on the interval [-pi, pi].\n""\n"" The von Mises distribution (also known as the circular normal\n"" distribution) is a continuous probability distribution on the circle. It\n"" may be thought of as the circular analogue of the normal distribution.\n""\n"" Parameters\n"" ----------\n"" mu : float\n"" Mode (\"center\") of the distribution.\n"" kappa : float, >= 0.\n"" Dispersion of the distribution.\n"" size : {tuple, int}\n"" Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" Returns\n"" -------\n"" samples : {ndarray, scalar}\n"" The returned samples live on the unit circle [-\\pi, \\pi].\n""\n"" See Also\n"" --------\n"" scipy.stats.distributions.vonmises : probability density function,\n"" distribution or cumulative density function, etc.\n""\n"" Notes\n"" -----\n"" The probability density for the von Mises distribution is\n""\n"" .. 
math:: p(x) = \\frac{e^{\\kappa \\cos(x-\\mu)}}{2\\pi I_0(\\kappa)},\n""\n"" where :math:`\\mu` is the mode and :math:`\\kappa` the dispersion,\n"" and :math:`I_0(\\kappa)` is the modified Bessel function of order 0.\n""\n"" The von Mises distribution is named for Richard Edler von Mises, born in\n"" Austria-Hungary, in what is now Ukraine. He fled to the United\n"" States in 1939 and became a professor at Harvard. He worked in\n"" probability theory, aerodynamics, fluid mechanics, and philosophy of\n"" science.\n""\n"" References\n"" ----------\n"" .. [1] Abramowitz, M. and Stegun, I. A. (ed.), Handbook of Mathematical\n"" Functions, National Bureau of Standards, 1964; reprinted Dover\n"" Publications, 1965.\n"" .. [2] von Mises, Richard, 1964, Mathematical Theory of Probability\n"" and Statistics (New York: Academic Press).\n"" .. [3] Wikipedia, \"Von Mises distribution\",\n"" http://en.wikipedia.org/wiki/Von_Mises_distribution\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" >>> mu, kappa = 0.0, 4.0 # mean and dispersion\n"" >>> s = np.random.vonmises(mu, kappa, 1000)\n""\n"" Display the histogram of the samples, along with\n"" the probability density function:\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> import scipy.special as sps\n"" >>> count, bins, ignored = plt.hist(s, 50, normed=True)\n"" >>> x = np.arange(-np.pi, np.pi, 2*np.pi/50.)\n"" >>> y = np.exp(kappa*np.cos(x-mu))/(2*np.pi*sps.i0(kappa))\n"" >>> plt.plot(x, y/max(y), linewidth=2, color='r')\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_vonmises(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_mu = 0; - PyObject *__pyx_v_kappa = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_omu; - PyArrayObject *__pyx_v_okappa; - double __pyx_v_fmu; - double __pyx_v_fkappa; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = 
NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__mu,&__pyx_n_s__kappa,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("vonmises"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__mu); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__kappa); - if (likely(values[1])) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("vonmises", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2136; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - case 2: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "vonmises") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2136; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_mu = values[0]; - __pyx_v_kappa = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: - __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: - __pyx_v_kappa = PyTuple_GET_ITEM(__pyx_args, 1); - __pyx_v_mu = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("vonmises", 0, 2, 3, 
PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2136; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.vonmises"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_mu); - __Pyx_INCREF(__pyx_v_kappa); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_omu = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_okappa = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":2216 - * cdef double fmu, fkappa - * - * fmu = PyFloat_AsDouble(mu) # <<<<<<<<<<<<<< - * fkappa = PyFloat_AsDouble(kappa) - * if not PyErr_Occurred(): - */ - __pyx_v_fmu = PyFloat_AsDouble(__pyx_v_mu); - - /* "mtrand.pyx":2217 - * - * fmu = PyFloat_AsDouble(mu) - * fkappa = PyFloat_AsDouble(kappa) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fkappa < 0: - */ - __pyx_v_fkappa = PyFloat_AsDouble(__pyx_v_kappa); - - /* "mtrand.pyx":2218 - * fmu = PyFloat_AsDouble(mu) - * fkappa = PyFloat_AsDouble(kappa) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fkappa < 0: - * raise ValueError("kappa < 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":2219 - * fkappa = PyFloat_AsDouble(kappa) - * if not PyErr_Occurred(): - * if fkappa < 0: # <<<<<<<<<<<<<< - * raise ValueError("kappa < 0") - * return cont2_array_sc(self.internal_state, rk_vonmises, size, fmu, fkappa) - */ - __pyx_t_1 = (__pyx_v_fkappa < 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":2220 - * if not PyErr_Occurred(): - * if fkappa < 0: - * raise ValueError("kappa < 0") # <<<<<<<<<<<<<< - * return cont2_array_sc(self.internal_state, rk_vonmises, size, fmu, fkappa) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2220; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_22)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject 
*)__pyx_kp_s_22)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_22)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2220; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2220; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":2221 - * if fkappa < 0: - * raise ValueError("kappa < 0") - * return cont2_array_sc(self.internal_state, rk_vonmises, size, fmu, fkappa) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_cont2_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_vonmises, __pyx_v_size, __pyx_v_fmu, __pyx_v_fkappa); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2221; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":2223 - * return cont2_array_sc(self.internal_state, rk_vonmises, size, fmu, fkappa) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * omu = PyArray_FROM_OTF(mu, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":2225 - * PyErr_Clear() - * - * omu = PyArray_FROM_OTF(mu, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * okappa = PyArray_FROM_OTF(kappa, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less(okappa, 0.0)): - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_mu, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2225; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_omu)); - __pyx_v_omu = ((PyArrayObject 
*)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":2226 - * - * omu = PyArray_FROM_OTF(mu, NPY_DOUBLE, NPY_ALIGNED) - * okappa = PyArray_FROM_OTF(kappa, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less(okappa, 0.0)): - * raise ValueError("kappa < 0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_kappa, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2226; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_okappa)); - __pyx_v_okappa = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":2227 - * omu = PyArray_FROM_OTF(mu, NPY_DOUBLE, NPY_ALIGNED) - * okappa = PyArray_FROM_OTF(kappa, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less(okappa, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("kappa < 0") - * return cont2_array(self.internal_state, rk_vonmises, size, omu, okappa) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2227; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2227; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2227; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2227; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble(0.0); if 
(unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2227; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2227; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_okappa)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_okappa)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_okappa)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2227; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2227; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2227; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2227; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":2228 - * okappa = PyArray_FROM_OTF(kappa, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less(okappa, 0.0)): - * raise ValueError("kappa < 0") # <<<<<<<<<<<<<< - * return cont2_array(self.internal_state, rk_vonmises, size, omu, okappa) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 2228; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_22)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_22)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_22)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2228; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2228; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":2229 - * if np.any(np.less(okappa, 0.0)): - * raise ValueError("kappa < 0") - * return cont2_array(self.internal_state, rk_vonmises, size, omu, okappa) # <<<<<<<<<<<<<< - * - * def pareto(self, a, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = __pyx_f_6mtrand_cont2_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_vonmises, __pyx_v_size, __pyx_v_omu, __pyx_v_okappa); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2229; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.vonmises"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_omu); - __Pyx_DECREF((PyObject *)__pyx_v_okappa); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_mu); - __Pyx_DECREF(__pyx_v_kappa); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":2231 - * 
return cont2_array(self.internal_state, rk_vonmises, size, omu, okappa) - * - * def pareto(self, a, size=None): # <<<<<<<<<<<<<< - * """ - * pareto(a, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_pareto(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_pareto[] = "\n"" pareto(a, size=None)\n""\n"" Draw samples from a Pareto distribution with specified shape.\n""\n"" This is a simplified version of the Generalized Pareto distribution\n"" (available in SciPy), with the scale set to one and the location set to\n"" zero. Most authors default the location to one.\n""\n"" The Pareto distribution must be greater than zero, and is unbounded above.\n"" It is also known as the \"80-20 rule\". In this distribution, 80 percent of\n"" the weights are in the lowest 20 percent of the range, while the other 20\n"" percent fill the remaining 80 percent of the range.\n""\n"" Parameters\n"" ----------\n"" shape : float, > 0.\n"" Shape of the distribution.\n"" size : tuple of ints\n"" Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" See Also\n"" --------\n"" scipy.stats.distributions.genpareto.pdf : probability density function,\n"" distribution or cumulative density function, etc.\n""\n"" Notes\n"" -----\n"" The probability density for the Pareto distribution is\n""\n"" .. math:: p(x) = \\frac{am^a}{x^{a+1}}\n""\n"" where :math:`a` is the shape and :math:`m` the location\n""\n"" The Pareto distribution, named after the Italian economist Vilfredo Pareto,\n"" is a power law probability distribution useful in many real world problems.\n"" Outside the field of economics it is generally referred to as the Bradford\n"" distribution. Pareto developed the distribution to describe the\n"" distribution of wealth in an economy. 
It has also found use in insurance,\n"" web page access statistics, oil field sizes, and many other problems,\n"" including the download frequency for projects in Sourceforge [1]. It is\n"" one of the so-called \"fat-tailed\" distributions.\n""\n""\n"" References\n"" ----------\n"" .. [1] Francis Hunt and Paul Johnson, On the Pareto Distribution of\n"" Sourceforge projects.\n"" .. [2] Pareto, V. (1896). Course of Political Economy. Lausanne.\n"" .. [3] Reiss, R.D., Thomas, M.(2001), Statistical Analysis of Extreme\n"" Values, Birkhauser Verlag, Basel, pp 23-30.\n"" .. [4] Wikipedia, \"Pareto distribution\",\n"" http://en.wikipedia.org/wiki/Pareto_distribution\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" >>> a, m = 3., 1. # shape and mode\n"" >>> s = np.random.pareto(a, 1000) + m\n""\n"" Display the histogram of the samples, along with\n"" the probability density function:\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> count, bins, ignored = plt.hist(s, 100, normed=True, align='center')\n"" >>> fit = a*m**a/bins**(a+1)\n"" >>> plt.plot(bins, max(count)*fit/max(fit),linewidth=2, color='r')\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_pareto(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_a = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_oa; - double __pyx_v_fa; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__a,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("pareto"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[2] = {0,0}; - values[1] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: 
break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__a); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "pareto") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2231; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_a = values[0]; - __pyx_v_size = values[1]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_a = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("pareto", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2231; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.pareto"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_a); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_oa = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":2307 - * cdef double fa - * - * fa = PyFloat_AsDouble(a) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fa <= 0: - */ - __pyx_v_fa = PyFloat_AsDouble(__pyx_v_a); - - /* "mtrand.pyx":2308 - * - * fa = PyFloat_AsDouble(a) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fa <= 0: - * raise ValueError("a <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":2309 - * fa = PyFloat_AsDouble(a) - * if not PyErr_Occurred(): - * if fa 
<= 0: # <<<<<<<<<<<<<< - * raise ValueError("a <= 0") - * return cont1_array_sc(self.internal_state, rk_pareto, size, fa) - */ - __pyx_t_1 = (__pyx_v_fa <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":2310 - * if not PyErr_Occurred(): - * if fa <= 0: - * raise ValueError("a <= 0") # <<<<<<<<<<<<<< - * return cont1_array_sc(self.internal_state, rk_pareto, size, fa) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2310; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_10)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_10)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_10)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2310; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2310; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":2311 - * if fa <= 0: - * raise ValueError("a <= 0") - * return cont1_array_sc(self.internal_state, rk_pareto, size, fa) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_cont1_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_pareto, __pyx_v_size, __pyx_v_fa); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2311; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":2313 - * return cont1_array_sc(self.internal_state, rk_pareto, size, fa) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - */ - 
PyErr_Clear(); - - /* "mtrand.pyx":2315 - * PyErr_Clear() - * - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(oa, 0.0)): - * raise ValueError("a <= 0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_a, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2315; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_oa)); - __pyx_v_oa = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":2316 - * - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oa, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("a <= 0") - * return cont1_array(self.internal_state, rk_pareto, size, oa) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2316; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2316; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2316; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2316; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2316; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = 
PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2316; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_oa)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_oa)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oa)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2316; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2316; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2316; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2316; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":2317 - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oa, 0.0)): - * raise ValueError("a <= 0") # <<<<<<<<<<<<<< - * return cont1_array(self.internal_state, rk_pareto, size, oa) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2317; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_10)); - PyTuple_SET_ITEM(__pyx_t_3, 0, 
((PyObject *)__pyx_kp_s_10)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_10)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2317; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2317; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":2318 - * if np.any(np.less_equal(oa, 0.0)): - * raise ValueError("a <= 0") - * return cont1_array(self.internal_state, rk_pareto, size, oa) # <<<<<<<<<<<<<< - * - * def weibull(self, a, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = __pyx_f_6mtrand_cont1_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_pareto, __pyx_v_size, __pyx_v_oa); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2318; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.pareto"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_oa); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_a); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":2320 - * return cont1_array(self.internal_state, rk_pareto, size, oa) - * - * def weibull(self, a, size=None): # <<<<<<<<<<<<<< - * """ - * weibull(a, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_weibull(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char 
__pyx_doc_6mtrand_11RandomState_weibull[] = "\n"" weibull(a, size=None)\n""\n"" Weibull distribution.\n""\n"" Draw samples from a 1-parameter Weibull distribution with the given\n"" shape parameter.\n""\n"" .. math:: X = (-ln(U))^{1/a}\n""\n"" Here, U is drawn from the uniform distribution over (0,1].\n""\n"" The more common 2-parameter Weibull, including a scale parameter\n"" :math:`\\lambda` is just :math:`X = \\lambda(-ln(U))^{1/a}`.\n""\n"" The Weibull (or Type III asymptotic extreme value distribution for smallest\n"" values, SEV Type III, or Rosin-Rammler distribution) is one of a class of\n"" Generalized Extreme Value (GEV) distributions used in modeling extreme\n"" value problems. This class includes the Gumbel and Frechet distributions.\n""\n"" Parameters\n"" ----------\n"" a : float\n"" Shape of the distribution.\n"" size : tuple of ints\n"" Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" See Also\n"" --------\n"" scipy.stats.distributions.weibull : probability density function,\n"" distribution or cumulative density function, etc.\n""\n"" gumbel, scipy.stats.distributions.genextreme\n""\n"" Notes\n"" -----\n"" The probability density for the Weibull distribution is\n""\n"" .. math:: p(x) = \\frac{a}\n"" {\\lambda}(\\frac{x}{\\lambda})^{a-1}e^{-(x/\\lambda)^a},\n""\n"" where :math:`a` is the shape and :math:`\\lambda` the scale.\n""\n"" The function has its peak (the mode) at\n"" :math:`\\lambda(\\frac{a-1}{a})^{1/a}`.\n""\n"" When ``a = 1``, the Weibull distribution reduces to the exponential\n"" distribution.\n""\n"" References\n"" ----------\n"" .. [1] Waloddi Weibull, Professor, Royal Technical University, Stockholm,\n"" 1939 \"A Statistical Theory Of The Strength Of Materials\",\n"" Ingeniorsvetenskapsakademiens Handlingar Nr 151, 1939,\n"" Generalstabens Litografiska Anstalts Forlag, Stockholm.\n"" .. 
[2] Waloddi Weibull, 1951 \"A Statistical Distribution Function of Wide\n"" Applicability\", Journal Of Applied Mechanics ASME Paper.\n"" .. [3] Wikipedia, \"Weibull distribution\",\n"" http://en.wikipedia.org/wiki/Weibull_distribution\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" >>> a = 5. # shape\n"" >>> s = np.random.weibull(a, 1000)\n""\n"" Display the histogram of the samples, along with\n"" the probability density function:\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> x = np.arange(1,100.)/50.\n"" >>> def weib(x,n,a):\n"" ... return (a / n) * (x / n)**(a - 1) * np.exp(-(x / n)**a)\n""\n"" >>> count, bins, ignored = plt.hist(np.random.weibull(5.,1000))\n"" >>> x = np.arange(1,100.)/50.\n"" >>> scale = count.max()/weib(x, 1., 5.).max()\n"" >>> plt.plot(x, weib(x, 1., 5.)*scale)\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_weibull(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_a = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_oa; - double __pyx_v_fa; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__a,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("weibull"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[2] = {0,0}; - values[1] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__a); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, 
__pyx_n_s__size); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "weibull") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2320; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_a = values[0]; - __pyx_v_size = values[1]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_a = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("weibull", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2320; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.weibull"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_a); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_oa = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":2407 - * cdef double fa - * - * fa = PyFloat_AsDouble(a) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fa <= 0: - */ - __pyx_v_fa = PyFloat_AsDouble(__pyx_v_a); - - /* "mtrand.pyx":2408 - * - * fa = PyFloat_AsDouble(a) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fa <= 0: - * raise ValueError("a <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":2409 - * fa = PyFloat_AsDouble(a) - * if not PyErr_Occurred(): - * if fa <= 0: # <<<<<<<<<<<<<< - * raise ValueError("a <= 0") - * return cont1_array_sc(self.internal_state, rk_weibull, size, fa) - */ - __pyx_t_1 = (__pyx_v_fa <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":2410 - * if not PyErr_Occurred(): - * if fa <= 0: - * raise ValueError("a <= 0") # <<<<<<<<<<<<<< - * 
return cont1_array_sc(self.internal_state, rk_weibull, size, fa) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2410; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_10)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_10)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_10)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2410; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2410; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":2411 - * if fa <= 0: - * raise ValueError("a <= 0") - * return cont1_array_sc(self.internal_state, rk_weibull, size, fa) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_cont1_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_weibull, __pyx_v_size, __pyx_v_fa); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2411; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":2413 - * return cont1_array_sc(self.internal_state, rk_weibull, size, fa) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":2415 - * PyErr_Clear() - * - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(oa, 0.0)): - * raise ValueError("a <= 0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_a, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 2415; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_oa)); - __pyx_v_oa = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":2416 - * - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oa, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("a <= 0") - * return cont1_array(self.internal_state, rk_weibull, size, oa) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2416; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2416; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2416; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2416; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2416; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2416; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_oa)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_oa)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oa)); - 
PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2416; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2416; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2416; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2416; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":2417 - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oa, 0.0)): - * raise ValueError("a <= 0") # <<<<<<<<<<<<<< - * return cont1_array(self.internal_state, rk_weibull, size, oa) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2417; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_10)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_10)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_10)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2417; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - 
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2417; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":2418 - * if np.any(np.less_equal(oa, 0.0)): - * raise ValueError("a <= 0") - * return cont1_array(self.internal_state, rk_weibull, size, oa) # <<<<<<<<<<<<<< - * - * def power(self, a, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = __pyx_f_6mtrand_cont1_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_weibull, __pyx_v_size, __pyx_v_oa); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2418; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.weibull"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_oa); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_a); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":2420 - * return cont1_array(self.internal_state, rk_weibull, size, oa) - * - * def power(self, a, size=None): # <<<<<<<<<<<<<< - * """ - * power(a, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_power(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_power[] = "\n"" power(a, size=None)\n""\n"" Draws samples in [0, 1] from a power distribution with positive\n"" exponent a - 1.\n""\n"" Also known as the power function distribution.\n""\n"" Parameters\n"" ----------\n"" a : float\n"" parameter, > 0\n"" size : tuple of ints\n"" 
Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" Returns\n"" -------\n"" samples : {ndarray, scalar}\n"" The returned samples lie in [0, 1].\n""\n"" Raises\n"" ------\n"" ValueError\n"" If a <= 0.\n""\n"" Notes\n"" -----\n"" The probability density function is\n""\n"" .. math:: P(x; a) = ax^{a-1}, 0 \\le x \\le 1, a>0.\n""\n"" The power function distribution is just the inverse of the Pareto\n"" distribution. It may also be seen as a special case of the Beta\n"" distribution.\n""\n"" It is used, for example, in modeling the over-reporting of insurance\n"" claims.\n""\n"" References\n"" ----------\n"" .. [1] Christian Kleiber, Samuel Kotz, \"Statistical size distributions\n"" in economics and actuarial sciences\", Wiley, 2003.\n"" .. [2] Heckert, N. A. and Filliben, James J. (2003). NIST Handbook 148:\n"" Dataplot Reference Manual, Volume 2: Let Subcommands and Library\n"" Functions\", National Institute of Standards and Technology Handbook\n"" Series, June 2003.\n"" http://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/powpdf.pdf\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" >>> a = 5. 
# shape\n"" >>> samples = 1000\n"" >>> s = np.random.power(a, samples)\n""\n"" Display the histogram of the samples, along with\n"" the probability density function:\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> count, bins, ignored = plt.hist(s, bins=30)\n"" >>> x = np.linspace(0, 1, 100)\n"" >>> y = a*x**(a-1.)\n"" >>> normed_y = samples*np.diff(bins)[0]*y\n"" >>> plt.plot(x, normed_y)\n"" >>> plt.show()\n""\n"" Compare the power function distribution to the inverse of the Pareto.\n""\n"" >>> from scipy import stats\n"" >>> rvs = np.random.power(5, 1000000)\n"" >>> rvsp = np.random.pareto(5, 1000000)\n"" >>> xx = np.linspace(0,1,100)\n"" >>> powpdf = stats.powerlaw.pdf(xx,5)\n""\n"" >>> plt.figure()\n"" >>> plt.hist(rvs, bins=50, normed=True)\n"" >>> plt.plot(xx,powpdf,'r-')\n"" >>> plt.title('np.random.power(5)')\n""\n"" >>> plt.figure()\n"" >>> plt.hist(1./(1.+rvsp), bins=50, normed=True)\n"" >>> plt.plot(xx,powpdf,'r-')\n"" >>> plt.title('inverse of 1 + np.random.pareto(5)')\n""\n"" >>> plt.figure()\n"" >>> plt.hist(1./(1.+rvsp), bins=50, normed=True)\n"" >>> plt.plot(xx,powpdf,'r-')\n"" >>> plt.title('inverse of stats.pareto(5)')\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_power(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_a = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_oa; - double __pyx_v_fa; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__a,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("power"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[2] = {0,0}; - values[1] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; 
- default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__a); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "power") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2420; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_a = values[0]; - __pyx_v_size = values[1]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_a = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("power", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2420; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.power"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_a); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_oa = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":2516 - * cdef double fa - * - * fa = PyFloat_AsDouble(a) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fa <= 0: - */ - __pyx_v_fa = PyFloat_AsDouble(__pyx_v_a); - - /* "mtrand.pyx":2517 - * - * fa = PyFloat_AsDouble(a) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fa <= 0: - * raise ValueError("a <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":2518 - * fa = PyFloat_AsDouble(a) - * if not PyErr_Occurred(): - * if fa <= 0: # 
<<<<<<<<<<<<<< - * raise ValueError("a <= 0") - * return cont1_array_sc(self.internal_state, rk_power, size, fa) - */ - __pyx_t_1 = (__pyx_v_fa <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":2519 - * if not PyErr_Occurred(): - * if fa <= 0: - * raise ValueError("a <= 0") # <<<<<<<<<<<<<< - * return cont1_array_sc(self.internal_state, rk_power, size, fa) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2519; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_10)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_10)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_10)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2519; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2519; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":2520 - * if fa <= 0: - * raise ValueError("a <= 0") - * return cont1_array_sc(self.internal_state, rk_power, size, fa) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_cont1_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_power, __pyx_v_size, __pyx_v_fa); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2520; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":2522 - * return cont1_array_sc(self.internal_state, rk_power, size, fa) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* 
"mtrand.pyx":2524 - * PyErr_Clear() - * - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(oa, 0.0)): - * raise ValueError("a <= 0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_a, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2524; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_oa)); - __pyx_v_oa = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":2525 - * - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oa, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("a <= 0") - * return cont1_array(self.internal_state, rk_power, size, oa) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2525; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2525; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2525; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2525; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2525; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyTuple_New(2); if 
(unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2525; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_oa)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_oa)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oa)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2525; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2525; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2525; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2525; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":2526 - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oa, 0.0)): - * raise ValueError("a <= 0") # <<<<<<<<<<<<<< - * return cont1_array(self.internal_state, rk_power, size, oa) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2526; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_10)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject 
*)__pyx_kp_s_10)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_10)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2526; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2526; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":2527 - * if np.any(np.less_equal(oa, 0.0)): - * raise ValueError("a <= 0") - * return cont1_array(self.internal_state, rk_power, size, oa) # <<<<<<<<<<<<<< - * - * def laplace(self, loc=0.0, scale=1.0, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = __pyx_f_6mtrand_cont1_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_power, __pyx_v_size, __pyx_v_oa); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2527; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.power"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_oa); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_a); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":2529 - * return cont1_array(self.internal_state, rk_power, size, oa) - * - * def laplace(self, loc=0.0, scale=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * laplace(loc=0.0, scale=1.0, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_laplace(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject 
*__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_laplace[] = "\n"" laplace(loc=0.0, scale=1.0, size=None)\n""\n"" Draw samples from the Laplace or double exponential distribution with\n"" specified location (or mean) and scale (decay).\n""\n"" The Laplace distribution is similar to the Gaussian/normal distribution,\n"" but is sharper at the peak and has fatter tails. It represents the\n"" difference between two independent, identically distributed exponential\n"" random variables.\n""\n"" Parameters\n"" ----------\n"" loc : float\n"" The position, :math:`\\mu`, of the distribution peak.\n"" scale : float\n"" :math:`\\lambda`, the exponential decay.\n""\n"" Notes\n"" -----\n"" It has the probability density function\n""\n"" .. math:: f(x; \\mu, \\lambda) = \\frac{1}{2\\lambda}\n"" \\exp\\left(-\\frac{|x - \\mu|}{\\lambda}\\right).\n""\n"" The first law of Laplace, from 1774, states that the frequency of an error\n"" can be expressed as an exponential function of the absolute magnitude of\n"" the error, which leads to the Laplace distribution. For many problems in\n"" Economics and Health sciences, this distribution seems to model the data\n"" better than the standard Gaussian distribution\n""\n""\n"" References\n"" ----------\n"" .. [1] Abramowitz, M. and Stegun, I. A. (Eds.). Handbook of Mathematical\n"" Functions with Formulas, Graphs, and Mathematical Tables, 9th\n"" printing. New York: Dover, 1972.\n""\n"" .. [2] The Laplace distribution and generalizations\n"" By Samuel Kotz, Tomasz J. Kozubowski, Krzysztof Podgorski,\n"" Birkhauser, 2001.\n""\n"" .. [3] Weisstein, Eric W. \"Laplace Distribution.\"\n"" From MathWorld--A Wolfram Web Resource.\n"" http://mathworld.wolfram.com/LaplaceDistribution.html\n""\n"" .. 
[4] Wikipedia, \"Laplace distribution\",\n"" http://en.wikipedia.org/wiki/Laplace_distribution\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution\n""\n"" >>> loc, scale = 0., 1.\n"" >>> s = np.random.laplace(loc, scale, 1000)\n""\n"" Display the histogram of the samples, along with\n"" the probability density function:\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> count, bins, ignored = plt.hist(s, 30, normed=True)\n"" >>> x = np.arange(-8., 8., .01)\n"" >>> pdf = np.exp(-abs(x-loc)/scale)/(2.*scale)\n"" >>> plt.plot(x, pdf)\n""\n"" Plot Gaussian for comparison:\n""\n"" >>> g = (1/(scale * np.sqrt(2 * np.pi)) * \n"" ... np.exp( - (x - loc)**2 / (2 * scale**2) ))\n"" >>> plt.plot(x,g)\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_laplace(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_loc = 0; - PyObject *__pyx_v_scale = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_oloc; - PyArrayObject *__pyx_v_oscale; - double __pyx_v_floc; - double __pyx_v_fscale; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__loc,&__pyx_n_s__scale,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("laplace"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[0] = __pyx_k_23; - values[1] = __pyx_k_24; - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__loc); - if (unlikely(value)) { 
values[0] = value; kw_args--; } - } - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__scale); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - case 2: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "laplace") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2529; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_loc = values[0]; - __pyx_v_scale = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_loc = __pyx_k_23; - __pyx_v_scale = __pyx_k_24; - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: __pyx_v_scale = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_loc = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("laplace", 0, 0, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2529; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.laplace"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_loc); - __Pyx_INCREF(__pyx_v_scale); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_oloc = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_oscale = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":2605 - * cdef double floc, fscale - * - * floc = PyFloat_AsDouble(loc) # <<<<<<<<<<<<<< - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): - */ - __pyx_v_floc = PyFloat_AsDouble(__pyx_v_loc); - - /* "mtrand.pyx":2606 - * - * 
floc = PyFloat_AsDouble(loc) - * fscale = PyFloat_AsDouble(scale) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fscale <= 0: - */ - __pyx_v_fscale = PyFloat_AsDouble(__pyx_v_scale); - - /* "mtrand.pyx":2607 - * floc = PyFloat_AsDouble(loc) - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fscale <= 0: - * raise ValueError("scale <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":2608 - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): - * if fscale <= 0: # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0") - * return cont2_array_sc(self.internal_state, rk_laplace, size, floc, fscale) - */ - __pyx_t_1 = (__pyx_v_fscale <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":2609 - * if not PyErr_Occurred(): - * if fscale <= 0: - * raise ValueError("scale <= 0") # <<<<<<<<<<<<<< - * return cont2_array_sc(self.internal_state, rk_laplace, size, floc, fscale) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2609; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_9)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_9)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_9)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2609; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2609; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":2610 - * if fscale <= 0: - * raise ValueError("scale <= 0") - * return cont2_array_sc(self.internal_state, rk_laplace, size, floc, fscale) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - 
__pyx_t_3 = __pyx_f_6mtrand_cont2_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_laplace, __pyx_v_size, __pyx_v_floc, __pyx_v_fscale); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2610; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":2612 - * return cont2_array_sc(self.internal_state, rk_laplace, size, floc, fscale) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":2613 - * - * PyErr_Clear() - * oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0.0)): - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_loc, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2613; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_6mtrand_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2613; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(((PyObject *)__pyx_v_oloc)); - __pyx_v_oloc = ((PyArrayObject *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "mtrand.pyx":2614 - * PyErr_Clear() - * oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_scale, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2614; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || 
likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_6mtrand_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2614; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(((PyObject *)__pyx_v_oscale)); - __pyx_v_oscale = ((PyArrayObject *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "mtrand.pyx":2615 - * oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0") - * return cont2_array(self.internal_state, rk_laplace, size, oloc, oscale) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2615; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2615; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2615; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2615; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2615; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2615; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_oscale)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject 
*)__pyx_v_oscale)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oscale)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2615; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2615; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2615; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2615; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":2616 - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0") # <<<<<<<<<<<<<< - * return cont2_array(self.internal_state, rk_laplace, size, oloc, oscale) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2616; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_9)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_9)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_9)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 
2616; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2616; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":2617 - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0") - * return cont2_array(self.internal_state, rk_laplace, size, oloc, oscale) # <<<<<<<<<<<<<< - * - * def gumbel(self, loc=0.0, scale=1.0, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = __pyx_f_6mtrand_cont2_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_laplace, __pyx_v_size, __pyx_v_oloc, __pyx_v_oscale); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2617; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.laplace"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_oloc); - __Pyx_DECREF((PyObject *)__pyx_v_oscale); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_loc); - __Pyx_DECREF(__pyx_v_scale); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":2619 - * return cont2_array(self.internal_state, rk_laplace, size, oloc, oscale) - * - * def gumbel(self, loc=0.0, scale=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * gumbel(loc=0.0, scale=1.0, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_gumbel(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_gumbel[] = "\n"" 
gumbel(loc=0.0, scale=1.0, size=None)\n""\n"" Gumbel distribution.\n""\n"" Draw samples from a Gumbel distribution with specified location (or mean)\n"" and scale (or standard deviation).\n""\n"" The Gumbel (or Smallest Extreme Value (SEV) or the Smallest Extreme Value\n"" Type I) distribution is one of a class of Generalized Extreme Value (GEV)\n"" distributions used in modeling extreme value problems. The Gumbel is a\n"" special case of the Extreme Value Type I distribution for maximums from\n"" distributions with \"exponential-like\" tails, it may be derived by\n"" considering a Gaussian process of measurements, and generating the pdf for\n"" the maximum values from that set of measurements (see examples).\n""\n"" Parameters\n"" ----------\n"" loc : float\n"" The location of the mode of the distribution.\n"" scale : float\n"" The scale parameter of the distribution.\n"" size : tuple of ints\n"" Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" See Also\n"" --------\n"" scipy.stats.gumbel : probability density function,\n"" distribution or cumulative density function, etc.\n"" weibull, scipy.stats.genextreme\n""\n"" Notes\n"" -----\n"" The probability density for the Gumbel distribution is\n""\n"" .. math:: p(x) = \\frac{e^{-(x - \\mu)/ \\beta}}{\\beta} e^{ -e^{-(x - \\mu)/\n"" \\beta}},\n""\n"" where :math:`\\mu` is the mode, a location parameter, and :math:`\\beta`\n"" is the scale parameter.\n""\n"" The Gumbel (named for German mathematician Emil Julius Gumbel) was used\n"" very early in the hydrology literature, for modeling the occurrence of\n"" flood events. It is also used for modeling maximum wind speed and rainfall\n"" rates. It is a \"fat-tailed\" distribution - the probability of an event in\n"" the tail of the distribution is larger than if one used a Gaussian, hence\n"" the surprisingly frequent occurrence of 100-year floods. 
Floods were\n"" initially modeled as a Gaussian process, which underestimated the frequency\n"" of extreme events.\n""\n"" It is one of a class of extreme value distributions, the Generalized\n"" Extreme Value (GEV) distributions, which also includes the Weibull and\n"" Frechet.\n""\n"" The function has a mean of :math:`\\mu + 0.57721\\beta` and a variance of\n"" :math:`\\frac{\\pi^2}{6}\\beta^2`.\n""\n"" References\n"" ----------\n"" .. [1] Gumbel, E.J. (1958). Statistics of Extremes. Columbia University\n"" Press.\n"" .. [2] Reiss, R.-D. and Thomas M. (2001), Statistical Analysis of Extreme\n"" Values, from Insurance, Finance, Hydrology and Other Fields,\n"" Birkhauser Verlag, Basel: Boston : Berlin.\n"" .. [3] Wikipedia, \"Gumbel distribution\",\n"" http://en.wikipedia.org/wiki/Gumbel_distribution\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" >>> mu, beta = 0, 0.1 # location and scale\n"" >>> s = np.random.gumbel(mu, beta, 1000)\n""\n"" Display the histogram of the samples, along with\n"" the probability density function:\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> count, bins, ignored = plt.hist(s, 30, normed=True)\n"" >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta)\n"" ... * np.exp( -np.exp( -(bins - mu) /beta) ),\n"" ... linewidth=2, color='r')\n"" >>> plt.show()\n""\n"" Show how an extreme value distribution can arise from a Gaussian process\n"" and compare to a Gaussian:\n""\n"" >>> means = []\n"" >>> maxima = []\n"" >>> for i in range(0,1000) :\n"" ... a = np.random.normal(mu, beta, 1000)\n"" ... means.append(a.mean())\n"" ... maxima.append(a.max())\n"" >>> count, bins, ignored = plt.hist(maxima, 30, normed=True)\n"" >>> beta = np.std(maxima)*np.pi/np.sqrt(6)\n"" >>> mu = np.mean(maxima) - 0.57721*beta\n"" >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta)\n"" ... * np.exp(-np.exp(-(bins - mu)/beta)),\n"" ... linewidth=2, color='r')\n"" >>> plt.plot(bins, 1/(beta * np.sqrt(2 * np.pi))\n"" ... 
* np.exp(-(bins - mu)**2 / (2 * beta**2)),\n"" ... linewidth=2, color='g')\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_gumbel(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_loc = 0; - PyObject *__pyx_v_scale = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_oloc; - PyArrayObject *__pyx_v_oscale; - double __pyx_v_floc; - double __pyx_v_fscale; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__loc,&__pyx_n_s__scale,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("gumbel"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[0] = __pyx_k_25; - values[1] = __pyx_k_26; - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__loc); - if (unlikely(value)) { values[0] = value; kw_args--; } - } - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__scale); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - case 2: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "gumbel") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2619; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_loc 
= values[0]; - __pyx_v_scale = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_loc = __pyx_k_25; - __pyx_v_scale = __pyx_k_26; - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: __pyx_v_scale = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_loc = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("gumbel", 0, 0, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2619; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.gumbel"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_loc); - __Pyx_INCREF(__pyx_v_scale); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_oloc = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_oscale = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":2729 - * cdef double floc, fscale - * - * floc = PyFloat_AsDouble(loc) # <<<<<<<<<<<<<< - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): - */ - __pyx_v_floc = PyFloat_AsDouble(__pyx_v_loc); - - /* "mtrand.pyx":2730 - * - * floc = PyFloat_AsDouble(loc) - * fscale = PyFloat_AsDouble(scale) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fscale <= 0: - */ - __pyx_v_fscale = PyFloat_AsDouble(__pyx_v_scale); - - /* "mtrand.pyx":2731 - * floc = PyFloat_AsDouble(loc) - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fscale <= 0: - * raise ValueError("scale <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":2732 - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): - * if fscale <= 0: # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0") - * return 
cont2_array_sc(self.internal_state, rk_gumbel, size, floc, fscale) - */ - __pyx_t_1 = (__pyx_v_fscale <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":2733 - * if not PyErr_Occurred(): - * if fscale <= 0: - * raise ValueError("scale <= 0") # <<<<<<<<<<<<<< - * return cont2_array_sc(self.internal_state, rk_gumbel, size, floc, fscale) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2733; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_9)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_9)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_9)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2733; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2733; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":2734 - * if fscale <= 0: - * raise ValueError("scale <= 0") - * return cont2_array_sc(self.internal_state, rk_gumbel, size, floc, fscale) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_cont2_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_gumbel, __pyx_v_size, __pyx_v_floc, __pyx_v_fscale); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2734; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":2736 - * return cont2_array_sc(self.internal_state, rk_gumbel, size, floc, fscale) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) - * 
oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":2737 - * - * PyErr_Clear() - * oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0.0)): - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_loc, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2737; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_6mtrand_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2737; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(((PyObject *)__pyx_v_oloc)); - __pyx_v_oloc = ((PyArrayObject *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "mtrand.pyx":2738 - * PyErr_Clear() - * oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_scale, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2738; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_6mtrand_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2738; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(((PyObject *)__pyx_v_oscale)); - __pyx_v_oscale = ((PyArrayObject *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "mtrand.pyx":2739 - * oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0") - * return cont2_array(self.internal_state, rk_gumbel, size, oloc, oscale) - */ - __pyx_t_3 = 
__Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2739; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2739; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2739; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2739; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2739; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2739; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_oscale)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_oscale)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oscale)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2739; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2739; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2739; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2739; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":2740 - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0") # <<<<<<<<<<<<<< - * return cont2_array(self.internal_state, rk_gumbel, size, oloc, oscale) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2740; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_9)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_9)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_9)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2740; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2740; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":2741 - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0") - * return cont2_array(self.internal_state, rk_gumbel, size, oloc, oscale) # <<<<<<<<<<<<<< - * - * def logistic(self, loc=0.0, scale=1.0, size=None): - */ - 
__Pyx_XDECREF(__pyx_r); - __pyx_t_5 = __pyx_f_6mtrand_cont2_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_gumbel, __pyx_v_size, __pyx_v_oloc, __pyx_v_oscale); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2741; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.gumbel"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_oloc); - __Pyx_DECREF((PyObject *)__pyx_v_oscale); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_loc); - __Pyx_DECREF(__pyx_v_scale); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":2743 - * return cont2_array(self.internal_state, rk_gumbel, size, oloc, oscale) - * - * def logistic(self, loc=0.0, scale=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * logistic(loc=0.0, scale=1.0, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_logistic(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_logistic[] = "\n"" logistic(loc=0.0, scale=1.0, size=None)\n""\n"" Draw samples from a Logistic distribution.\n""\n"" Samples are drawn from a Logistic distribution with specified\n"" parameters, loc (location or mean, also median), and scale (>0).\n""\n"" Parameters\n"" ----------\n"" loc : float\n""\n"" scale : float > 0.\n""\n"" size : {tuple, int}\n"" Output shape. 
If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" Returns\n"" -------\n"" samples : {ndarray, scalar}\n"" Drawn samples from the logistic distribution.\n""\n"" See Also\n"" --------\n"" scipy.stats.distributions.logistic : probability density function,\n"" distribution or cumulative distribution function, etc.\n""\n"" Notes\n"" -----\n"" The probability density for the Logistic distribution is\n""\n"" .. math:: P(x) = \\frac{e^{-(x-\\mu)/s}}{s(1+e^{-(x-\\mu)/s})^2},\n""\n"" where :math:`\\mu` = location and :math:`s` = scale.\n""\n"" The Logistic distribution is used in Extreme Value problems where it\n"" can act as a mixture of Gumbel distributions, in epidemiology, and by\n"" the World Chess Federation (FIDE) where it is used in the Elo ranking\n"" system, assuming the performance of each player is a logistically\n"" distributed random variable.\n""\n"" References\n"" ----------\n"" .. [1] Reiss, R.-D. and Thomas M. (2001), Statistical Analysis of Extreme\n"" Values, from Insurance, Finance, Hydrology and Other Fields,\n"" Birkhauser Verlag, Basel, pp 132-133.\n"" .. [2] Weisstein, Eric W. \"Logistic Distribution.\" From\n"" MathWorld--A Wolfram Web Resource.\n"" http://mathworld.wolfram.com/LogisticDistribution.html\n"" .. [3] Wikipedia, \"Logistic-distribution\",\n"" http://en.wikipedia.org/wiki/Logistic-distribution\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" >>> loc, scale = 10, 1\n"" >>> s = np.random.logistic(loc, scale, 10000)\n"" >>> count, bins, ignored = plt.hist(s, bins=50)\n""\n"" # plot against distribution\n""\n"" >>> def logist(x, loc, scale):\n"" ... return exp((loc-x)/scale)/(scale*(1+exp((loc-x)/scale))**2)\n"" >>> plt.plot(bins, logist(bins, loc, scale)*count.max()/\\\n"" ... 
logist(bins, loc, scale).max())\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_logistic(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_loc = 0; - PyObject *__pyx_v_scale = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_oloc; - PyArrayObject *__pyx_v_oscale; - double __pyx_v_floc; - double __pyx_v_fscale; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__loc,&__pyx_n_s__scale,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("logistic"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[0] = __pyx_k_27; - values[1] = __pyx_k_28; - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__loc); - if (unlikely(value)) { values[0] = value; kw_args--; } - } - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__scale); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - case 2: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "logistic") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2743; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_loc = values[0]; - __pyx_v_scale = 
values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_loc = __pyx_k_27; - __pyx_v_scale = __pyx_k_28; - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: __pyx_v_scale = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_loc = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("logistic", 0, 0, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2743; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.logistic"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_loc); - __Pyx_INCREF(__pyx_v_scale); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_oloc = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_oscale = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":2817 - * cdef double floc, fscale - * - * floc = PyFloat_AsDouble(loc) # <<<<<<<<<<<<<< - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): - */ - __pyx_v_floc = PyFloat_AsDouble(__pyx_v_loc); - - /* "mtrand.pyx":2818 - * - * floc = PyFloat_AsDouble(loc) - * fscale = PyFloat_AsDouble(scale) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fscale <= 0: - */ - __pyx_v_fscale = PyFloat_AsDouble(__pyx_v_scale); - - /* "mtrand.pyx":2819 - * floc = PyFloat_AsDouble(loc) - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fscale <= 0: - * raise ValueError("scale <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":2820 - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): - * if fscale <= 0: # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0") - * return cont2_array_sc(self.internal_state, rk_logistic, 
size, floc, fscale) - */ - __pyx_t_1 = (__pyx_v_fscale <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":2821 - * if not PyErr_Occurred(): - * if fscale <= 0: - * raise ValueError("scale <= 0") # <<<<<<<<<<<<<< - * return cont2_array_sc(self.internal_state, rk_logistic, size, floc, fscale) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2821; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_9)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_9)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_9)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2821; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2821; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":2822 - * if fscale <= 0: - * raise ValueError("scale <= 0") - * return cont2_array_sc(self.internal_state, rk_logistic, size, floc, fscale) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_cont2_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_logistic, __pyx_v_size, __pyx_v_floc, __pyx_v_fscale); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2822; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":2824 - * return cont2_array_sc(self.internal_state, rk_logistic, size, floc, fscale) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, 
NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":2825 - * - * PyErr_Clear() - * oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0.0)): - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_loc, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2825; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_6mtrand_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2825; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(((PyObject *)__pyx_v_oloc)); - __pyx_v_oloc = ((PyArrayObject *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "mtrand.pyx":2826 - * PyErr_Clear() - * oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_scale, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2826; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_6mtrand_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2826; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(((PyObject *)__pyx_v_oscale)); - __pyx_v_oscale = ((PyArrayObject *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "mtrand.pyx":2827 - * oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0") - * return cont2_array(self.internal_state, rk_logistic, size, oloc, oscale) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); 
if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_oscale)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_oscale)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oscale)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, 
__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":2828 - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0") # <<<<<<<<<<<<<< - * return cont2_array(self.internal_state, rk_logistic, size, oloc, oscale) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2828; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_9)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_9)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_9)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2828; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2828; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":2829 - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0") - * return cont2_array(self.internal_state, rk_logistic, size, oloc, oscale) # <<<<<<<<<<<<<< - * - * def lognormal(self, mean=0.0, sigma=1.0, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = 
__pyx_f_6mtrand_cont2_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_logistic, __pyx_v_size, __pyx_v_oloc, __pyx_v_oscale); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2829; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.logistic"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_oloc); - __Pyx_DECREF((PyObject *)__pyx_v_oscale); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_loc); - __Pyx_DECREF(__pyx_v_scale); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":2831 - * return cont2_array(self.internal_state, rk_logistic, size, oloc, oscale) - * - * def lognormal(self, mean=0.0, sigma=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * lognormal(mean=0.0, sigma=1.0, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_lognormal(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_lognormal[] = "\n"" lognormal(mean=0.0, sigma=1.0, size=None)\n""\n"" Return samples drawn from a log-normal distribution.\n""\n"" Draw samples from a log-normal distribution with specified mean, standard\n"" deviation, and shape. Note that the mean and standard deviation are not the\n"" values for the distribution itself, but of the underlying normal\n"" distribution it is derived from.\n""\n""\n"" Parameters\n"" ----------\n"" mean : float\n"" Mean value of the underlying normal distribution\n"" sigma : float, >0.\n"" Standard deviation of the underlying normal distribution\n"" size : tuple of ints\n"" Output shape. 
If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" See Also\n"" --------\n"" scipy.stats.lognorm : probability density function, distribution,\n"" cumulative distribution function, etc.\n""\n"" Notes\n"" -----\n"" A variable `x` has a log-normal distribution if `log(x)` is normally\n"" distributed.\n""\n"" The probability density function for the log-normal distribution is\n""\n"" .. math:: p(x) = \\frac{1}{\\sigma x \\sqrt{2\\pi}}\n"" e^{(-\\frac{(\\ln(x)-\\mu)^2}{2\\sigma^2})}\n""\n"" where :math:`\\mu` is the mean and :math:`\\sigma` is the standard deviation\n"" of the normally distributed logarithm of the variable.\n""\n"" A log-normal distribution results if a random variable is the *product* of\n"" a large number of independent, identically distributed variables in the\n"" same way that a normal distribution results if the variable is the *sum*\n"" of a large number of independent, identically distributed variables\n"" (see the last example). It is one of the so-called \"fat-tailed\"\n"" distributions.\n""\n"" The log-normal distribution is commonly used to model the lifespan of units\n"" with fatigue-stress failure modes. Since this includes\n"" most mechanical systems, the log-normal distribution has widespread\n"" application.\n""\n"" It is also commonly used to model oil field sizes, species abundance, and\n"" latent periods of infectious diseases.\n""\n"" References\n"" ----------\n"" .. [1] Eckhard Limpert, Werner A. Stahel, and Markus Abbt, \"Log-normal\n"" Distributions across the Sciences: Keys and Clues\", May 2001\n"" Vol. 51 No. 5 BioScience\n"" http://stat.ethz.ch/~stahel/lognormal/bioscience.pdf\n"" .. [2] Reiss, R.D., Thomas, M.(2001), Statistical Analysis of Extreme\n"" Values, Birkhauser Verlag, Basel, pp 31-32.\n"" .. 
[3] Wikipedia, \"Lognormal distribution\",\n"" http://en.wikipedia.org/wiki/Lognormal_distribution\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" >>> mu, sigma = 3., 1. # mean and standard deviation\n"" >>> s = np.random.lognormal(mu, sigma, 1000)\n""\n"" Display the histogram of the samples, along with\n"" the probability density function:\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> count, bins, ignored = plt.hist(s, 100, normed=True, align='mid')\n""\n"" >>> x = np.linspace(min(bins), max(bins), 10000)\n"" >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2))\n"" ... / (x * sigma * np.sqrt(2 * np.pi)))\n""\n"" >>> plt.plot(x, pdf, linewidth=2, color='r')\n"" >>> plt.axis('tight')\n"" >>> plt.show()\n""\n"" Demonstrate that taking the products of random samples from a uniform\n"" distribution can be fit well by a log-normal probability density function.\n""\n"" >>> # Generate a thousand samples: each is the product of 100 random\n"" >>> # values, drawn from a normal distribution.\n"" >>> b = []\n"" >>> for i in range(1000):\n"" ... a = 10. + np.random.random(100)\n"" ... b.append(np.product(a))\n""\n"" >>> b = np.array(b) / np.min(b) # scale values to be positive\n""\n"" >>> count, bins, ignored = plt.hist(b, 100, normed=True, align='center')\n""\n"" >>> sigma = np.std(np.log(b))\n"" >>> mu = np.mean(np.log(b))\n""\n"" >>> x = np.linspace(min(bins), max(bins), 10000)\n"" >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2))\n"" ... 
/ (x * sigma * np.sqrt(2 * np.pi)))\n""\n"" >>> plt.plot(x, pdf, color='r', linewidth=2)\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_lognormal(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_mean = 0; - PyObject *__pyx_v_sigma = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_omean; - PyArrayObject *__pyx_v_osigma; - double __pyx_v_fmean; - double __pyx_v_fsigma; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__mean,&__pyx_n_s__sigma,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("lognormal"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[0] = __pyx_k_29; - values[1] = __pyx_k_30; - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__mean); - if (unlikely(value)) { values[0] = value; kw_args--; } - } - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__sigma); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - case 2: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "lognormal") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2831; __pyx_clineno = __LINE__; goto 
__pyx_L3_error;} - } - __pyx_v_mean = values[0]; - __pyx_v_sigma = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_mean = __pyx_k_29; - __pyx_v_sigma = __pyx_k_30; - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: __pyx_v_sigma = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_mean = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("lognormal", 0, 0, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2831; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.lognormal"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_mean); - __Pyx_INCREF(__pyx_v_sigma); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_omean = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_osigma = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":2946 - * cdef double fmean, fsigma - * - * fmean = PyFloat_AsDouble(mean) # <<<<<<<<<<<<<< - * fsigma = PyFloat_AsDouble(sigma) - * - */ - __pyx_v_fmean = PyFloat_AsDouble(__pyx_v_mean); - - /* "mtrand.pyx":2947 - * - * fmean = PyFloat_AsDouble(mean) - * fsigma = PyFloat_AsDouble(sigma) # <<<<<<<<<<<<<< - * - * if not PyErr_Occurred(): - */ - __pyx_v_fsigma = PyFloat_AsDouble(__pyx_v_sigma); - - /* "mtrand.pyx":2949 - * fsigma = PyFloat_AsDouble(sigma) - * - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fsigma <= 0: - * raise ValueError("sigma <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":2950 - * - * if not PyErr_Occurred(): - * if fsigma <= 0: # <<<<<<<<<<<<<< - * raise ValueError("sigma <= 0") - * return cont2_array_sc(self.internal_state, rk_lognormal, size, fmean, fsigma) 
- */ - __pyx_t_1 = (__pyx_v_fsigma <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":2951 - * if not PyErr_Occurred(): - * if fsigma <= 0: - * raise ValueError("sigma <= 0") # <<<<<<<<<<<<<< - * return cont2_array_sc(self.internal_state, rk_lognormal, size, fmean, fsigma) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2951; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_31)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_31)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_31)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2951; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2951; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":2952 - * if fsigma <= 0: - * raise ValueError("sigma <= 0") - * return cont2_array_sc(self.internal_state, rk_lognormal, size, fmean, fsigma) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_cont2_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_lognormal, __pyx_v_size, __pyx_v_fmean, __pyx_v_fsigma); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2952; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":2954 - * return cont2_array_sc(self.internal_state, rk_lognormal, size, fmean, fsigma) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * omean = PyArray_FROM_OTF(mean, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":2956 - 
* PyErr_Clear() - * - * omean = PyArray_FROM_OTF(mean, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * osigma = PyArray_FROM_OTF(sigma, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(osigma, 0.0)): - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_mean, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2956; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_6mtrand_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2956; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(((PyObject *)__pyx_v_omean)); - __pyx_v_omean = ((PyArrayObject *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "mtrand.pyx":2957 - * - * omean = PyArray_FROM_OTF(mean, NPY_DOUBLE, NPY_ALIGNED) - * osigma = PyArray_FROM_OTF(sigma, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(osigma, 0.0)): - * raise ValueError("sigma <= 0.0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_sigma, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2957; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_6mtrand_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2957; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(((PyObject *)__pyx_v_osigma)); - __pyx_v_osigma = ((PyArrayObject *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "mtrand.pyx":2958 - * omean = PyArray_FROM_OTF(mean, NPY_DOUBLE, NPY_ALIGNED) - * osigma = PyArray_FROM_OTF(sigma, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(osigma, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("sigma <= 0.0") - * return cont2_array(self.internal_state, rk_lognormal, size, omean, osigma) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 
2958; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2958; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2958; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2958; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2958; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2958; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_osigma)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_osigma)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_osigma)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2958; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2958; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = 
PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2958; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2958; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":2959 - * osigma = PyArray_FROM_OTF(sigma, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(osigma, 0.0)): - * raise ValueError("sigma <= 0.0") # <<<<<<<<<<<<<< - * return cont2_array(self.internal_state, rk_lognormal, size, omean, osigma) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2959; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_32)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_32)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_32)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2959; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2959; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":2960 - * if np.any(np.less_equal(osigma, 0.0)): - * raise ValueError("sigma <= 0.0") - * return cont2_array(self.internal_state, rk_lognormal, size, omean, osigma) # <<<<<<<<<<<<<< - * - * def rayleigh(self, scale=1.0, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = __pyx_f_6mtrand_cont2_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, 
rk_lognormal, __pyx_v_size, __pyx_v_omean, __pyx_v_osigma); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2960; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.lognormal"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_omean); - __Pyx_DECREF((PyObject *)__pyx_v_osigma); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_mean); - __Pyx_DECREF(__pyx_v_sigma); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":2962 - * return cont2_array(self.internal_state, rk_lognormal, size, omean, osigma) - * - * def rayleigh(self, scale=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * rayleigh(scale=1.0, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_rayleigh(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_rayleigh[] = "\n rayleigh(scale=1.0, size=None)\n\n Draw samples from a Rayleigh distribution.\n\n The :math:`\\chi` and Weibull distributions are generalizations of the\n Rayleigh.\n\n Parameters\n ----------\n scale : scalar\n Scale, also equals the mode. Should be >= 0.\n size : int or tuple of ints, optional\n Shape of the output. Default is None, in which case a single\n value is returned.\n\n Notes\n -----\n The probability density function for the Rayleigh distribution is\n\n .. math:: P(x;scale) = \\frac{x}{scale^2}e^{\\frac{-x^2}{2 \\cdotp scale^2}}\n\n The Rayleigh distribution arises if the wind speed and wind direction are\n both gaussian variables, then the vector wind velocity forms a Rayleigh\n distribution. 
The Rayleigh distribution is used to model the expected\n output from wind turbines.\n\n References\n ----------\n ..[1] Brighton Webs Ltd., Rayleigh Distribution,\n http://www.brighton-webs.co.uk/distributions/rayleigh.asp\n ..[2] Wikipedia, \"Rayleigh distribution\"\n http://en.wikipedia.org/wiki/Rayleigh_distribution\n\n Examples\n --------\n Draw values from the distribution and plot the histogram\n\n >>> values = hist(np.random.rayleigh(3, 100000), bins=200, normed=True)\n\n Wave heights tend to follow a Rayleigh distribution. If the mean wave\n height is 1 meter, what fraction of waves are likely to be larger than 3\n meters?\n\n >>> meanvalue = 1\n >>> modevalue = np.sqrt(2 / np.pi) * meanvalue\n >>> s = np.random.rayleigh(modevalue, 1000000)\n\n The percentage of waves larger than 3 meters is:\n\n >>> 100.*sum(s>3)/1000000.\n 0.087300000000000003\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_rayleigh(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_scale = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_oscale; - double __pyx_v_fscale; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__scale,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("rayleigh"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[2] = {0,0}; - values[0] = __pyx_k_33; - values[1] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__scale); - if (unlikely(value)) { values[0] = value; kw_args--; } - } - 
case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "rayleigh") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2962; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_scale = values[0]; - __pyx_v_size = values[1]; - } else { - __pyx_v_scale = __pyx_k_33; - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_scale = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("rayleigh", 0, 0, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2962; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.rayleigh"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_scale); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_oscale = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":3020 - * cdef double fscale - * - * fscale = PyFloat_AsDouble(scale) # <<<<<<<<<<<<<< - * - * if not PyErr_Occurred(): - */ - __pyx_v_fscale = PyFloat_AsDouble(__pyx_v_scale); - - /* "mtrand.pyx":3022 - * fscale = PyFloat_AsDouble(scale) - * - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fscale <= 0: - * raise ValueError("scale <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":3023 - * - * if not PyErr_Occurred(): - * if fscale <= 0: # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0") - * return cont1_array_sc(self.internal_state, rk_rayleigh, size, fscale) - */ - __pyx_t_1 = (__pyx_v_fscale <= 
0); - if (__pyx_t_1) { - - /* "mtrand.pyx":3024 - * if not PyErr_Occurred(): - * if fscale <= 0: - * raise ValueError("scale <= 0") # <<<<<<<<<<<<<< - * return cont1_array_sc(self.internal_state, rk_rayleigh, size, fscale) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3024; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_9)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_9)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_9)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3024; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3024; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":3025 - * if fscale <= 0: - * raise ValueError("scale <= 0") - * return cont1_array_sc(self.internal_state, rk_rayleigh, size, fscale) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_cont1_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_rayleigh, __pyx_v_size, __pyx_v_fscale); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3025; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":3027 - * return cont1_array_sc(self.internal_state, rk_rayleigh, size, fscale) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":3029 - * PyErr_Clear() - * - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, 
NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0.0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_scale, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3029; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_oscale)); - __pyx_v_oscale = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":3030 - * - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0.0") - * return cont1_array(self.internal_state, rk_rayleigh, size, oscale) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3030; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3030; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3030; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3030; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3030; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 3030; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_oscale)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_oscale)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oscale)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3030; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3030; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3030; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3030; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3031 - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0.0") # <<<<<<<<<<<<<< - * return cont1_array(self.internal_state, rk_rayleigh, size, oscale) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3031; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_34)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_34)); - 
__Pyx_GIVEREF(((PyObject *)__pyx_kp_s_34)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3031; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3031; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":3032 - * if np.any(np.less_equal(oscale, 0.0)): - * raise ValueError("scale <= 0.0") - * return cont1_array(self.internal_state, rk_rayleigh, size, oscale) # <<<<<<<<<<<<<< - * - * def wald(self, mean, scale, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = __pyx_f_6mtrand_cont1_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_rayleigh, __pyx_v_size, __pyx_v_oscale); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3032; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.rayleigh"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_oscale); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_scale); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":3034 - * return cont1_array(self.internal_state, rk_rayleigh, size, oscale) - * - * def wald(self, mean, scale, size=None): # <<<<<<<<<<<<<< - * """ - * wald(mean, scale, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_wald(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); 
/*proto*/ -static char __pyx_doc_6mtrand_11RandomState_wald[] = "\n"" wald(mean, scale, size=None)\n""\n"" Draw samples from a Wald, or Inverse Gaussian, distribution.\n""\n"" As the scale approaches infinity, the distribution becomes more like a\n"" Gaussian.\n""\n"" Some references claim that the Wald is an Inverse Gaussian with mean=1, but\n"" this is by no means universal.\n""\n"" The Inverse Gaussian distribution was first studied in relationship to\n"" Brownian motion. In 1956 M.C.K. Tweedie used the name Inverse Gaussian\n"" because there is an inverse relationship between the time to cover a unit\n"" distance and distance covered in unit time.\n""\n"" Parameters\n"" ----------\n"" mean : scalar\n"" Distribution mean, should be > 0.\n"" scale : scalar\n"" Scale parameter, should be >= 0.\n"" size : int or tuple of ints, optional\n"" Output shape. Default is None, in which case a single value is\n"" returned.\n""\n"" Returns\n"" -------\n"" samples : ndarray or scalar\n"" Drawn sample, all greater than zero.\n""\n"" Notes\n"" -----\n"" The probability density function for the Wald distribution is\n""\n"" .. math:: P(x;mean,scale) = \\sqrt{\\frac{scale}{2\\pi x^3}}e^\n"" \\frac{-scale(x-mean)^2}{2\\cdotp mean^2x}\n""\n"" As noted above the Inverse Gaussian distribution first arise from attempts\n"" to model Brownian Motion. It is also a competitor to the Weibull for use in\n"" reliability modeling and modeling stock returns and interest rate\n"" processes.\n""\n"" References\n"" ----------\n"" ..[1] Brighton Webs Ltd., Wald Distribution,\n"" http://www.brighton-webs.co.uk/distributions/wald.asp\n"" ..[2] Chhikara, Raj S., and Folks, J. 
Leroy, \"The Inverse Gaussian\n"" Distribution: Theory : Methodology, and Applications\", CRC Press,\n"" 1988.\n"" ..[3] Wikipedia, \"Wald distribution\"\n"" http://en.wikipedia.org/wiki/Wald_distribution\n""\n"" Examples\n"" --------\n"" Draw values from the distribution and plot the histogram:\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> h = plt.hist(np.random.wald(3, 2, 100000), bins=200, normed=True)\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_wald(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_mean = 0; - PyObject *__pyx_v_scale = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_omean; - PyArrayObject *__pyx_v_oscale; - double __pyx_v_fmean; - double __pyx_v_fscale; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__mean,&__pyx_n_s__scale,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("wald"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__mean); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__scale); - if (likely(values[1])) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("wald", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3034; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - case 2: - if (kw_args > 1) { - PyObject* value = 
PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "wald") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3034; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_mean = values[0]; - __pyx_v_scale = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: - __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: - __pyx_v_scale = PyTuple_GET_ITEM(__pyx_args, 1); - __pyx_v_mean = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("wald", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3034; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.wald"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_mean); - __Pyx_INCREF(__pyx_v_scale); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_omean = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_oscale = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":3100 - * cdef double fmean, fscale - * - * fmean = PyFloat_AsDouble(mean) # <<<<<<<<<<<<<< - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): - */ - __pyx_v_fmean = PyFloat_AsDouble(__pyx_v_mean); - - /* "mtrand.pyx":3101 - * - * fmean = PyFloat_AsDouble(mean) - * fscale = PyFloat_AsDouble(scale) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fmean <= 0: - */ - __pyx_v_fscale = PyFloat_AsDouble(__pyx_v_scale); - - /* "mtrand.pyx":3102 - * fmean = PyFloat_AsDouble(mean) - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): # 
<<<<<<<<<<<<<< - * if fmean <= 0: - * raise ValueError("mean <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":3103 - * fscale = PyFloat_AsDouble(scale) - * if not PyErr_Occurred(): - * if fmean <= 0: # <<<<<<<<<<<<<< - * raise ValueError("mean <= 0") - * if fscale <= 0: - */ - __pyx_t_1 = (__pyx_v_fmean <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":3104 - * if not PyErr_Occurred(): - * if fmean <= 0: - * raise ValueError("mean <= 0") # <<<<<<<<<<<<<< - * if fscale <= 0: - * raise ValueError("scale <= 0") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3104; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_35)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_35)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_35)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3104; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3104; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":3105 - * if fmean <= 0: - * raise ValueError("mean <= 0") - * if fscale <= 0: # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0") - * return cont2_array_sc(self.internal_state, rk_wald, size, fmean, fscale) - */ - __pyx_t_1 = (__pyx_v_fscale <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":3106 - * raise ValueError("mean <= 0") - * if fscale <= 0: - * raise ValueError("scale <= 0") # <<<<<<<<<<<<<< - * return cont2_array_sc(self.internal_state, rk_wald, size, fmean, fscale) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3106; __pyx_clineno = 
__LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_9)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_9)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_9)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3106; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3106; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":3107 - * if fscale <= 0: - * raise ValueError("scale <= 0") - * return cont2_array_sc(self.internal_state, rk_wald, size, fmean, fscale) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_f_6mtrand_cont2_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_wald, __pyx_v_size, __pyx_v_fmean, __pyx_v_fscale); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3107; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":3109 - * return cont2_array_sc(self.internal_state, rk_wald, size, fmean, fscale) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * omean = PyArray_FROM_OTF(mean, NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":3110 - * - * PyErr_Clear() - * omean = PyArray_FROM_OTF(mean, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(omean,0.0)): - */ - __pyx_t_2 = PyArray_FROM_OTF(__pyx_v_mean, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3110; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_ptype_6mtrand_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3110; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(((PyObject *)__pyx_v_omean)); - __pyx_v_omean = ((PyArrayObject *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "mtrand.pyx":3111 - * PyErr_Clear() - * omean = PyArray_FROM_OTF(mean, NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(omean,0.0)): - * raise ValueError("mean <= 0.0") - */ - __pyx_t_2 = PyArray_FROM_OTF(__pyx_v_scale, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_ptype_6mtrand_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(((PyObject *)__pyx_v_oscale)); - __pyx_v_oscale = ((PyArrayObject *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "mtrand.pyx":3112 - * omean = PyArray_FROM_OTF(mean, NPY_DOUBLE, NPY_ALIGNED) - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(omean,0.0)): # <<<<<<<<<<<<<< - * raise ValueError("mean <= 0.0") - * elif np.any(np.less_equal(oscale,0.0)): - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__any); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, 
__pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_omean)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_omean)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_omean)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_3, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = 
__pyx_f[0]; __pyx_lineno = 3112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3113 - * oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(omean,0.0)): - * raise ValueError("mean <= 0.0") # <<<<<<<<<<<<<< - * elif np.any(np.less_equal(oscale,0.0)): - * raise ValueError("scale <= 0.0") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3113; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_36)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_36)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_36)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3113; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3113; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L9; - } - - /* "mtrand.pyx":3114 - * if np.any(np.less_equal(omean,0.0)): - * raise ValueError("mean <= 0.0") - * elif np.any(np.less_equal(oscale,0.0)): # <<<<<<<<<<<<<< - * raise ValueError("scale <= 0.0") - * return cont2_array(self.internal_state, rk_wald, size, omean, oscale) - */ - __pyx_t_5 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3114; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3114; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if 
(unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3114; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3114; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3114; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3114; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)__pyx_v_oscale)); - PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_v_oscale)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oscale)); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_3, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3114; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3114; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3114; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 3114; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3115 - * raise ValueError("mean <= 0.0") - * elif np.any(np.less_equal(oscale,0.0)): - * raise ValueError("scale <= 0.0") # <<<<<<<<<<<<<< - * return cont2_array(self.internal_state, rk_wald, size, omean, oscale) - * - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3115; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_34)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_34)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_34)); - __pyx_t_4 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3115; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3115; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L9; - } - __pyx_L9:; - - /* "mtrand.pyx":3116 - * elif np.any(np.less_equal(oscale,0.0)): - * raise ValueError("scale <= 0.0") - * return cont2_array(self.internal_state, rk_wald, size, omean, oscale) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_4 = __pyx_f_6mtrand_cont2_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_wald, __pyx_v_size, __pyx_v_omean, __pyx_v_oscale); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - 
__Pyx_AddTraceback("mtrand.RandomState.wald"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_omean); - __Pyx_DECREF((PyObject *)__pyx_v_oscale); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_mean); - __Pyx_DECREF(__pyx_v_scale); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":3120 - * - * - * def triangular(self, left, mode, right, size=None): # <<<<<<<<<<<<<< - * """ - * triangular(left, mode, right, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_triangular(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_triangular[] = "\n"" triangular(left, mode, right, size=None)\n""\n"" Draw samples from the triangular distribution.\n""\n"" The triangular distribution is a continuous probability distribution with\n"" lower limit left, peak at mode, and upper limit right. Unlike the other\n"" distributions, these parameters directly define the shape of the pdf.\n""\n"" Parameters\n"" ----------\n"" left : scalar\n"" Lower limit.\n"" mode : scalar\n"" The value where the peak of the distribution occurs.\n"" The value should fulfill the condition ``left <= mode <= right``.\n"" right : scalar\n"" Upper limit, should be larger than `left`.\n"" size : int or tuple of ints, optional\n"" Output shape. Default is None, in which case a single value is\n"" returned.\n""\n"" Returns\n"" -------\n"" samples : ndarray or scalar\n"" The returned samples all lie in the interval [left, right].\n""\n"" Notes\n"" -----\n"" The probability density function for the Triangular distribution is\n""\n"" .. 
math:: P(x;l, m, r) = \\begin{cases}\n"" \\frac{2(x-l)}{(r-l)(m-l)}& \\text{for $l \\leq x \\leq m$},\\\\\n"" \\frac{2(m-x)}{(r-l)(r-m)}& \\text{for $m \\leq x \\leq r$},\\\\\n"" 0& \\text{otherwise}.\n"" \\end{cases}\n""\n"" The triangular distribution is often used in ill-defined problems where the\n"" underlying distribution is not known, but some knowledge of the limits and\n"" mode exists. Often it is used in simulations.\n""\n"" References\n"" ----------\n"" ..[1] Wikipedia, \"Triangular distribution\"\n"" http://en.wikipedia.org/wiki/Triangular_distribution\n""\n"" Examples\n"" --------\n"" Draw values from the distribution and plot the histogram:\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> h = plt.hist(np.random.triangular(-3, 0, 8, 100000), bins=200,\n"" ... normed=True)\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_triangular(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_left = 0; - PyObject *__pyx_v_mode = 0; - PyObject *__pyx_v_right = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_oleft; - PyArrayObject *__pyx_v_omode; - PyArrayObject *__pyx_v_oright; - double __pyx_v_fleft; - double __pyx_v_fmode; - double __pyx_v_fright; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__left,&__pyx_n_s__mode,&__pyx_n_s__right,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("triangular"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[4] = {0,0,0,0}; - values[3] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: 
break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__left); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__mode); - if (likely(values[1])) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("triangular", 0, 3, 4, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3120; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - case 2: - values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__right); - if (likely(values[2])) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("triangular", 0, 3, 4, 2); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3120; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - case 3: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[3] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "triangular") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3120; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_left = values[0]; - __pyx_v_mode = values[1]; - __pyx_v_right = values[2]; - __pyx_v_size = values[3]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 4: - __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 3); - case 3: - __pyx_v_right = PyTuple_GET_ITEM(__pyx_args, 2); - __pyx_v_mode = PyTuple_GET_ITEM(__pyx_args, 1); - __pyx_v_left = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("triangular", 0, 3, 4, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3120; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - 
__Pyx_AddTraceback("mtrand.RandomState.triangular"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_left); - __Pyx_INCREF(__pyx_v_mode); - __Pyx_INCREF(__pyx_v_right); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_oleft = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_omode = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_oright = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":3180 - * cdef double fleft, fmode, fright - * - * fleft = PyFloat_AsDouble(left) # <<<<<<<<<<<<<< - * fright = PyFloat_AsDouble(right) - * fmode = PyFloat_AsDouble(mode) - */ - __pyx_v_fleft = PyFloat_AsDouble(__pyx_v_left); - - /* "mtrand.pyx":3181 - * - * fleft = PyFloat_AsDouble(left) - * fright = PyFloat_AsDouble(right) # <<<<<<<<<<<<<< - * fmode = PyFloat_AsDouble(mode) - * if not PyErr_Occurred(): - */ - __pyx_v_fright = PyFloat_AsDouble(__pyx_v_right); - - /* "mtrand.pyx":3182 - * fleft = PyFloat_AsDouble(left) - * fright = PyFloat_AsDouble(right) - * fmode = PyFloat_AsDouble(mode) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fleft > fmode: - */ - __pyx_v_fmode = PyFloat_AsDouble(__pyx_v_mode); - - /* "mtrand.pyx":3183 - * fright = PyFloat_AsDouble(right) - * fmode = PyFloat_AsDouble(mode) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fleft > fmode: - * raise ValueError("left > mode") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":3184 - * fmode = PyFloat_AsDouble(mode) - * if not PyErr_Occurred(): - * if fleft > fmode: # <<<<<<<<<<<<<< - * raise ValueError("left > mode") - * if fmode > fright: - */ - __pyx_t_1 = (__pyx_v_fleft > __pyx_v_fmode); - if (__pyx_t_1) { - - /* "mtrand.pyx":3185 - * if not PyErr_Occurred(): - * if fleft > fmode: - * raise ValueError("left > mode") # <<<<<<<<<<<<<< - * if fmode > fright: - * raise ValueError("mode > right") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename 
= __pyx_f[0]; __pyx_lineno = 3185; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_37)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_37)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_37)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3185; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3185; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":3186 - * if fleft > fmode: - * raise ValueError("left > mode") - * if fmode > fright: # <<<<<<<<<<<<<< - * raise ValueError("mode > right") - * if fleft == fright: - */ - __pyx_t_1 = (__pyx_v_fmode > __pyx_v_fright); - if (__pyx_t_1) { - - /* "mtrand.pyx":3187 - * raise ValueError("left > mode") - * if fmode > fright: - * raise ValueError("mode > right") # <<<<<<<<<<<<<< - * if fleft == fright: - * raise ValueError("left == right") - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3187; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_38)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_38)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_38)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3187; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3187; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - 
__pyx_L8:; - - /* "mtrand.pyx":3188 - * if fmode > fright: - * raise ValueError("mode > right") - * if fleft == fright: # <<<<<<<<<<<<<< - * raise ValueError("left == right") - * return cont3_array_sc(self.internal_state, rk_triangular, size, fleft, - */ - __pyx_t_1 = (__pyx_v_fleft == __pyx_v_fright); - if (__pyx_t_1) { - - /* "mtrand.pyx":3189 - * raise ValueError("mode > right") - * if fleft == fright: - * raise ValueError("left == right") # <<<<<<<<<<<<<< - * return cont3_array_sc(self.internal_state, rk_triangular, size, fleft, - * fmode, fright) - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3189; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_39)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_39)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_39)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3189; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3189; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L9; - } - __pyx_L9:; - - /* "mtrand.pyx":3190 - * if fleft == fright: - * raise ValueError("left == right") - * return cont3_array_sc(self.internal_state, rk_triangular, size, fleft, # <<<<<<<<<<<<<< - * fmode, fright) - * - */ - __Pyx_XDECREF(__pyx_r); - - /* "mtrand.pyx":3191 - * raise ValueError("left == right") - * return cont3_array_sc(self.internal_state, rk_triangular, size, fleft, - * fmode, fright) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __pyx_t_3 = __pyx_f_6mtrand_cont3_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_triangular, __pyx_v_size, __pyx_v_fleft, __pyx_v_fmode, 
__pyx_v_fright); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3190; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":3193 - * fmode, fright) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * oleft = PyArray_FROM_OTF(left, NPY_DOUBLE, NPY_ALIGNED) - * omode = PyArray_FROM_OTF(mode, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":3194 - * - * PyErr_Clear() - * oleft = PyArray_FROM_OTF(left, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * omode = PyArray_FROM_OTF(mode, NPY_DOUBLE, NPY_ALIGNED) - * oright = PyArray_FROM_OTF(right, NPY_DOUBLE, NPY_ALIGNED) - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_left, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3194; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_oleft)); - __pyx_v_oleft = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":3195 - * PyErr_Clear() - * oleft = PyArray_FROM_OTF(left, NPY_DOUBLE, NPY_ALIGNED) - * omode = PyArray_FROM_OTF(mode, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * oright = PyArray_FROM_OTF(right, NPY_DOUBLE, NPY_ALIGNED) - * - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_mode, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3195; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_omode)); - __pyx_v_omode = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":3196 - * oleft = PyArray_FROM_OTF(left, NPY_DOUBLE, NPY_ALIGNED) - * omode = PyArray_FROM_OTF(mode, NPY_DOUBLE, NPY_ALIGNED) - * oright = 
PyArray_FROM_OTF(right, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * - * if np.any(np.greater(oleft, omode)): - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_right, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3196; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_oright)); - __pyx_v_oright = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":3198 - * oright = PyArray_FROM_OTF(right, NPY_DOUBLE, NPY_ALIGNED) - * - * if np.any(np.greater(oleft, omode)): # <<<<<<<<<<<<<< - * raise ValueError("left > mode") - * if np.any(np.greater(omode, oright)): - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3198; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3198; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3198; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__greater); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3198; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3198; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_v_oleft)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_oleft)); - 
__Pyx_GIVEREF(((PyObject *)__pyx_v_oleft)); - __Pyx_INCREF(((PyObject *)__pyx_v_omode)); - PyTuple_SET_ITEM(__pyx_t_3, 1, ((PyObject *)__pyx_v_omode)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_omode)); - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3198; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3198; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3198; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3198; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3199 - * - * if np.any(np.greater(oleft, omode)): - * raise ValueError("left > mode") # <<<<<<<<<<<<<< - * if np.any(np.greater(omode, oright)): - * raise ValueError("mode > right") - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_37)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_37)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_37)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3199; __pyx_clineno 
= __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L10; - } - __pyx_L10:; - - /* "mtrand.pyx":3200 - * if np.any(np.greater(oleft, omode)): - * raise ValueError("left > mode") - * if np.any(np.greater(omode, oright)): # <<<<<<<<<<<<<< - * raise ValueError("mode > right") - * if np.any(np.equal(oleft, oright)): - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__greater); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_v_omode)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_omode)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_omode)); - __Pyx_INCREF(((PyObject *)__pyx_v_oright)); - PyTuple_SET_ITEM(__pyx_t_3, 1, ((PyObject *)__pyx_v_oright)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oright)); - __pyx_t_4 = 
PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = PyObject_Call(__pyx_t_5, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3201 - * raise ValueError("left > mode") - * if np.any(np.greater(omode, oright)): - * raise ValueError("mode > right") # <<<<<<<<<<<<<< - * if np.any(np.equal(oleft, oright)): - * raise ValueError("left == right") - */ - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3201; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_38)); - PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_kp_s_38)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_38)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3201; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 3201; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L11; - } - __pyx_L11:; - - /* "mtrand.pyx":3202 - * if np.any(np.greater(omode, oright)): - * raise ValueError("mode > right") - * if np.any(np.equal(oleft, oright)): # <<<<<<<<<<<<<< - * raise ValueError("left == right") - * return cont3_array(self.internal_state, rk_triangular, size, oleft, - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3202; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3202; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3202; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__equal); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3202; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3202; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_v_oleft)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_oleft)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oleft)); - __Pyx_INCREF(((PyObject *)__pyx_v_oright)); - PyTuple_SET_ITEM(__pyx_t_3, 1, ((PyObject *)__pyx_v_oright)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oright)); - __pyx_t_2 = PyObject_Call(__pyx_t_5, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3202; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3202; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3202; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3202; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3203 - * raise ValueError("mode > right") - * if np.any(np.equal(oleft, oright)): - * raise ValueError("left == right") # <<<<<<<<<<<<<< - * return cont3_array(self.internal_state, rk_triangular, size, oleft, - * omode, oright) - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3203; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_39)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_39)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_39)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3203; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3203; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L12; - } - __pyx_L12:; - - 
/* "mtrand.pyx":3204 - * if np.any(np.equal(oleft, oright)): - * raise ValueError("left == right") - * return cont3_array(self.internal_state, rk_triangular, size, oleft, # <<<<<<<<<<<<<< - * omode, oright) - * - */ - __Pyx_XDECREF(__pyx_r); - - /* "mtrand.pyx":3205 - * raise ValueError("left == right") - * return cont3_array(self.internal_state, rk_triangular, size, oleft, - * omode, oright) # <<<<<<<<<<<<<< - * - * # Complicated, discrete distributions: - */ - __pyx_t_3 = __pyx_f_6mtrand_cont3_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_triangular, __pyx_v_size, __pyx_v_oleft, __pyx_v_omode, __pyx_v_oright); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3204; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.triangular"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_oleft); - __Pyx_DECREF((PyObject *)__pyx_v_omode); - __Pyx_DECREF((PyObject *)__pyx_v_oright); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_left); - __Pyx_DECREF(__pyx_v_mode); - __Pyx_DECREF(__pyx_v_right); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":3208 - * - * # Complicated, discrete distributions: - * def binomial(self, n, p, size=None): # <<<<<<<<<<<<<< - * """ - * binomial(n, p, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_binomial(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_binomial[] = "\n"" binomial(n, p, size=None)\n""\n"" Draw samples from a binomial distribution.\n""\n"" Samples are drawn 
from a Binomial distribution with specified\n"" parameters, n trials and p probability of success where\n"" n an integer > 0 and p is in the interval [0,1]. (n may be\n"" input as a float, but it is truncated to an integer in use)\n""\n"" Parameters\n"" ----------\n"" n : float (but truncated to an integer)\n"" parameter, > 0.\n"" p : float\n"" parameter, >= 0 and <=1.\n"" size : {tuple, int}\n"" Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" Returns\n"" -------\n"" samples : {ndarray, scalar}\n"" where the values are all integers in [0, n].\n""\n"" See Also\n"" --------\n"" scipy.stats.distributions.binom : probability density function,\n"" distribution or cumulative density function, etc.\n""\n"" Notes\n"" -----\n"" The probability density for the Binomial distribution is\n""\n"" .. math:: P(N) = \\binom{n}{N}p^N(1-p)^{n-N},\n""\n"" where :math:`n` is the number of trials, :math:`p` is the probability\n"" of success, and :math:`N` is the number of successes.\n""\n"" When estimating the standard error of a proportion in a population by\n"" using a random sample, the normal distribution works well unless the\n"" product p*n <=5, where p = population proportion estimate, and n =\n"" number of samples, in which case the binomial distribution is used\n"" instead. For example, a sample of 15 people shows 4 who are left\n"" handed, and 11 who are right handed. Then p = 4/15 = 27%. 0.27*15 = 4,\n"" so the binomial distribution should be used in this case.\n""\n"" References\n"" ----------\n"" .. [1] Dalgaard, Peter, \"Introductory Statistics with R\",\n"" Springer-Verlag, 2002.\n"" .. [2] Glantz, Stanton A. \"Primer of Biostatistics.\", McGraw-Hill,\n"" Fifth Edition, 2002.\n"" .. [3] Lentner, Marvin, \"Elementary Applied Statistics\", Bogden\n"" and Quigley, 1972.\n"" .. [4] Weisstein, Eric W. 
\"Binomial Distribution.\" From MathWorld--A\n"" Wolfram Web Resource.\n"" http://mathworld.wolfram.com/BinomialDistribution.html\n"" .. [5] Wikipedia, \"Binomial-distribution\",\n"" http://en.wikipedia.org/wiki/Binomial_distribution\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" >>> n, p = 10, .5 # number of trials, probability of each trial\n"" >>> s = np.random.binomial(n, p, 1000)\n"" # result of flipping a coin 10 times, tested 1000 times.\n""\n"" A real world example. A company drills 9 wild-cat oil exploration\n"" wells, each with an estimated probability of success of 0.1. All nine\n"" wells fail. What is the probability of that happening?\n""\n"" Let's do 20,000 trials of the model, and count the number that\n"" generate zero positive results.\n""\n"" >>> sum(np.random.binomial(9,0.1,20000)==0)/20000.\n"" answer = 0.38885, or 38%.\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_binomial(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_n = 0; - PyObject *__pyx_v_p = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_on; - PyArrayObject *__pyx_v_op; - long __pyx_v_ln; - double __pyx_v_fp; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__p,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("binomial"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = 
PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__p); - if (likely(values[1])) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("binomial", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3208; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - case 2: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "binomial") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3208; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_n = values[0]; - __pyx_v_p = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: - __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: - __pyx_v_p = PyTuple_GET_ITEM(__pyx_args, 1); - __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("binomial", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3208; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.binomial"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_n); - __Pyx_INCREF(__pyx_v_p); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_on = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_op = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":3293 - * cdef double fp - * - * fp = PyFloat_AsDouble(p) # <<<<<<<<<<<<<< - * ln = PyInt_AsLong(n) - * if not PyErr_Occurred(): - */ - __pyx_v_fp 
= PyFloat_AsDouble(__pyx_v_p); - - /* "mtrand.pyx":3294 - * - * fp = PyFloat_AsDouble(p) - * ln = PyInt_AsLong(n) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if ln <= 0: - */ - __pyx_v_ln = PyInt_AsLong(__pyx_v_n); - - /* "mtrand.pyx":3295 - * fp = PyFloat_AsDouble(p) - * ln = PyInt_AsLong(n) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if ln <= 0: - * raise ValueError("n <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":3296 - * ln = PyInt_AsLong(n) - * if not PyErr_Occurred(): - * if ln <= 0: # <<<<<<<<<<<<<< - * raise ValueError("n <= 0") - * if fp < 0: - */ - __pyx_t_1 = (__pyx_v_ln <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":3297 - * if not PyErr_Occurred(): - * if ln <= 0: - * raise ValueError("n <= 0") # <<<<<<<<<<<<<< - * if fp < 0: - * raise ValueError("p < 0") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3297; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_40)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_40)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_40)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3297; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3297; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":3298 - * if ln <= 0: - * raise ValueError("n <= 0") - * if fp < 0: # <<<<<<<<<<<<<< - * raise ValueError("p < 0") - * elif fp > 1: - */ - __pyx_t_1 = (__pyx_v_fp < 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":3299 - * raise ValueError("n <= 0") - * if fp < 0: - * raise ValueError("p < 0") # <<<<<<<<<<<<<< - * elif fp > 1: - * raise 
ValueError("p > 1") - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3299; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_41)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_41)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_41)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3299; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3299; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - - /* "mtrand.pyx":3300 - * if fp < 0: - * raise ValueError("p < 0") - * elif fp > 1: # <<<<<<<<<<<<<< - * raise ValueError("p > 1") - * return discnp_array_sc(self.internal_state, rk_binomial, size, ln, fp) - */ - __pyx_t_1 = (__pyx_v_fp > 1); - if (__pyx_t_1) { - - /* "mtrand.pyx":3301 - * raise ValueError("p < 0") - * elif fp > 1: - * raise ValueError("p > 1") # <<<<<<<<<<<<<< - * return discnp_array_sc(self.internal_state, rk_binomial, size, ln, fp) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3301; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_42)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_42)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_42)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3301; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 3301; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":3302 - * elif fp > 1: - * raise ValueError("p > 1") - * return discnp_array_sc(self.internal_state, rk_binomial, size, ln, fp) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_discnp_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_binomial, __pyx_v_size, __pyx_v_ln, __pyx_v_fp); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3302; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":3304 - * return discnp_array_sc(self.internal_state, rk_binomial, size, ln, fp) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * on = PyArray_FROM_OTF(n, NPY_LONG, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":3306 - * PyErr_Clear() - * - * on = PyArray_FROM_OTF(n, NPY_LONG, NPY_ALIGNED) # <<<<<<<<<<<<<< - * op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(n, 0)): - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_n, NPY_LONG, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3306; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_on)); - __pyx_v_on = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":3307 - * - * on = PyArray_FROM_OTF(n, NPY_LONG, NPY_ALIGNED) - * op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(n, 0)): - * raise ValueError("n <= 0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_p, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3307; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_op)); - __pyx_v_op = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":3308 - * on = PyArray_FROM_OTF(n, NPY_LONG, NPY_ALIGNED) - * op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(n, 0)): # <<<<<<<<<<<<<< - * raise ValueError("n <= 0") - * if np.any(np.less(p, 0)): - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3308; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3308; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3308; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3308; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3308; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_n); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_n); - __Pyx_GIVEREF(__pyx_v_n); - __Pyx_INCREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3308; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - 
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3308; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3308; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3308; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3309 - * op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(n, 0)): - * raise ValueError("n <= 0") # <<<<<<<<<<<<<< - * if np.any(np.less(p, 0)): - * raise ValueError("p < 0") - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3309; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_40)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_40)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_40)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3309; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3309; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L9; - } - __pyx_L9:; - - /* "mtrand.pyx":3310 - * if np.any(np.less_equal(n, 0)): - * raise 
ValueError("n <= 0") - * if np.any(np.less(p, 0)): # <<<<<<<<<<<<<< - * raise ValueError("p < 0") - * if np.any(np.greater(p, 1)): - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3310; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3310; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3310; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3310; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3310; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_p); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_p); - __Pyx_GIVEREF(__pyx_v_p); - __Pyx_INCREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - __pyx_t_4 = PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3310; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3310; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - 
__pyx_t_4 = 0; - __pyx_t_4 = PyObject_Call(__pyx_t_5, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3310; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3310; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3311 - * raise ValueError("n <= 0") - * if np.any(np.less(p, 0)): - * raise ValueError("p < 0") # <<<<<<<<<<<<<< - * if np.any(np.greater(p, 1)): - * raise ValueError("p > 1") - */ - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3311; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_41)); - PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_kp_s_41)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_41)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3311; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3311; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L10; - } - __pyx_L10:; - - /* "mtrand.pyx":3312 - * if np.any(np.less(p, 0)): - * raise ValueError("p < 0") - * if np.any(np.greater(p, 1)): # <<<<<<<<<<<<<< - * raise ValueError("p > 1") - * return discnp_array(self.internal_state, rk_binomial, size, on, op) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3312; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3312; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3312; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__greater); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3312; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3312; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_p); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_p); - __Pyx_GIVEREF(__pyx_v_p); - __Pyx_INCREF(__pyx_int_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - __pyx_t_2 = PyObject_Call(__pyx_t_5, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3312; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3312; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3312; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = 
__Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3312; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3313 - * raise ValueError("p < 0") - * if np.any(np.greater(p, 1)): - * raise ValueError("p > 1") # <<<<<<<<<<<<<< - * return discnp_array(self.internal_state, rk_binomial, size, on, op) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3313; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_42)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_42)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_42)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3313; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3313; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L11; - } - __pyx_L11:; - - /* "mtrand.pyx":3314 - * if np.any(np.greater(p, 1)): - * raise ValueError("p > 1") - * return discnp_array(self.internal_state, rk_binomial, size, on, op) # <<<<<<<<<<<<<< - * - * def negative_binomial(self, n, p, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_discnp_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_binomial, __pyx_v_size, __pyx_v_on, __pyx_v_op); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3314; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); 
- __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.binomial"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_on); - __Pyx_DECREF((PyObject *)__pyx_v_op); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_n); - __Pyx_DECREF(__pyx_v_p); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":3316 - * return discnp_array(self.internal_state, rk_binomial, size, on, op) - * - * def negative_binomial(self, n, p, size=None): # <<<<<<<<<<<<<< - * """ - * negative_binomial(n, p, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_negative_binomial(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_negative_binomial[] = "\n"" negative_binomial(n, p, size=None)\n""\n"" Draw samples from a negative_binomial distribution.\n""\n"" Samples are drawn from a negative_Binomial distribution with specified\n"" parameters, `n` trials and `p` probability of success where `n` is an\n"" integer > 0 and `p` is in the interval [0, 1].\n""\n"" Parameters\n"" ----------\n"" n : int\n"" Parameter, > 0.\n"" p : float\n"" Parameter, >= 0 and <=1.\n"" size : int or tuple of ints\n"" Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" Returns\n"" -------\n"" samples : int or ndarray of ints\n"" Drawn samples.\n""\n"" Notes\n"" -----\n"" The probability density for the Negative Binomial distribution is\n""\n"" .. 
math:: P(N;n,p) = \\binom{N+n-1}{n-1}p^{n}(1-p)^{N},\n""\n"" where :math:`n-1` is the number of successes, :math:`p` is the probability\n"" of success, and :math:`N+n-1` is the number of trials.\n""\n"" The negative binomial distribution gives the probability of n-1 successes\n"" and N failures in N+n-1 trials, and success on the (N+n)th trial.\n""\n"" If one throws a die repeatedly until the third time a \"1\" appears, then the\n"" probability distribution of the number of non-\"1\"s that appear before the\n"" third \"1\" is a negative binomial distribution.\n""\n"" References\n"" ----------\n"" .. [1] Weisstein, Eric W. \"Negative Binomial Distribution.\" From\n"" MathWorld--A Wolfram Web Resource.\n"" http://mathworld.wolfram.com/NegativeBinomialDistribution.html\n"" .. [2] Wikipedia, \"Negative binomial distribution\",\n"" http://en.wikipedia.org/wiki/Negative_binomial_distribution\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" A real world example. A company drills wild-cat oil exploration wells, each\n"" with an estimated probability of success of 0.1. What is the probability\n"" of having one success for each successive well, that is what is the\n"" probability of a single success after drilling 5 wells, after 6 wells,\n"" etc.?\n""\n"" >>> s = np.random.negative_binomial(1, 0.1, 100000)\n"" >>> for i in range(1, 11):\n"" ... 
probability = sum(s 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "negative_binomial") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3316; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_n = values[0]; - __pyx_v_p = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: - __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: - __pyx_v_p = PyTuple_GET_ITEM(__pyx_args, 1); - __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("negative_binomial", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3316; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.negative_binomial"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_n); - __Pyx_INCREF(__pyx_v_p); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_on = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_op = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":3386 - * cdef double fp - * - * fp = PyFloat_AsDouble(p) # <<<<<<<<<<<<<< - * fn = PyFloat_AsDouble(n) - * if not PyErr_Occurred(): - */ - __pyx_v_fp = PyFloat_AsDouble(__pyx_v_p); - - /* "mtrand.pyx":3387 - * - * fp = PyFloat_AsDouble(p) - * fn = PyFloat_AsDouble(n) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fn <= 0: - */ - __pyx_v_fn = PyFloat_AsDouble(__pyx_v_n); - - /* "mtrand.pyx":3388 - * fp = PyFloat_AsDouble(p) - * fn = PyFloat_AsDouble(n) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< 
- * if fn <= 0: - * raise ValueError("n <= 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":3389 - * fn = PyFloat_AsDouble(n) - * if not PyErr_Occurred(): - * if fn <= 0: # <<<<<<<<<<<<<< - * raise ValueError("n <= 0") - * if fp < 0: - */ - __pyx_t_1 = (__pyx_v_fn <= 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":3390 - * if not PyErr_Occurred(): - * if fn <= 0: - * raise ValueError("n <= 0") # <<<<<<<<<<<<<< - * if fp < 0: - * raise ValueError("p < 0") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3390; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_40)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_40)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_40)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3390; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3390; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":3391 - * if fn <= 0: - * raise ValueError("n <= 0") - * if fp < 0: # <<<<<<<<<<<<<< - * raise ValueError("p < 0") - * elif fp > 1: - */ - __pyx_t_1 = (__pyx_v_fp < 0); - if (__pyx_t_1) { - - /* "mtrand.pyx":3392 - * raise ValueError("n <= 0") - * if fp < 0: - * raise ValueError("p < 0") # <<<<<<<<<<<<<< - * elif fp > 1: - * raise ValueError("p > 1") - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3392; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_41)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_41)); - __Pyx_GIVEREF(((PyObject 
*)__pyx_kp_s_41)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3392; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3392; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - - /* "mtrand.pyx":3393 - * if fp < 0: - * raise ValueError("p < 0") - * elif fp > 1: # <<<<<<<<<<<<<< - * raise ValueError("p > 1") - * return discdd_array_sc(self.internal_state, rk_negative_binomial, - */ - __pyx_t_1 = (__pyx_v_fp > 1); - if (__pyx_t_1) { - - /* "mtrand.pyx":3394 - * raise ValueError("p < 0") - * elif fp > 1: - * raise ValueError("p > 1") # <<<<<<<<<<<<<< - * return discdd_array_sc(self.internal_state, rk_negative_binomial, - * size, fn, fp) - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3394; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_42)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_42)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_42)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3394; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3394; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":3395 - * elif fp > 1: - * raise ValueError("p > 1") - * return discdd_array_sc(self.internal_state, rk_negative_binomial, # <<<<<<<<<<<<<< - * size, fn, fp) - * - */ - __Pyx_XDECREF(__pyx_r); - - /* 
"mtrand.pyx":3396 - * raise ValueError("p > 1") - * return discdd_array_sc(self.internal_state, rk_negative_binomial, - * size, fn, fp) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __pyx_t_3 = __pyx_f_6mtrand_discdd_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_negative_binomial, __pyx_v_size, __pyx_v_fn, __pyx_v_fp); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3395; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":3398 - * size, fn, fp) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * on = PyArray_FROM_OTF(n, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":3400 - * PyErr_Clear() - * - * on = PyArray_FROM_OTF(n, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(n, 0)): - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_n, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3400; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_on)); - __pyx_v_on = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":3401 - * - * on = PyArray_FROM_OTF(n, NPY_DOUBLE, NPY_ALIGNED) - * op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(n, 0)): - * raise ValueError("n <= 0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_p, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3401; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_op)); - __pyx_v_op = ((PyArrayObject *)__pyx_t_3); - 
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":3402 - * on = PyArray_FROM_OTF(n, NPY_DOUBLE, NPY_ALIGNED) - * op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(n, 0)): # <<<<<<<<<<<<<< - * raise ValueError("n <= 0") - * if np.any(np.less(p, 0)): - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3402; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3402; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3402; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3402; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3402; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_n); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_n); - __Pyx_GIVEREF(__pyx_v_n); - __Pyx_INCREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3402; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 
3402; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3402; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3402; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3403 - * op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(n, 0)): - * raise ValueError("n <= 0") # <<<<<<<<<<<<<< - * if np.any(np.less(p, 0)): - * raise ValueError("p < 0") - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3403; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_40)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_40)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_40)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3403; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3403; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L9; - } - __pyx_L9:; - - /* "mtrand.pyx":3404 - * if np.any(np.less_equal(n, 0)): - * raise ValueError("n <= 0") - * if np.any(np.less(p, 0)): # <<<<<<<<<<<<<< - * raise ValueError("p < 0") - * if np.any(np.greater(p, 1)): - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); 
if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3404; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3404; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3404; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3404; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3404; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_p); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_p); - __Pyx_GIVEREF(__pyx_v_p); - __Pyx_INCREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - __pyx_t_4 = PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3404; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3404; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = PyObject_Call(__pyx_t_5, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3404; __pyx_clineno = __LINE__; goto __pyx_L1_error;} 
- __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3404; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3405 - * raise ValueError("n <= 0") - * if np.any(np.less(p, 0)): - * raise ValueError("p < 0") # <<<<<<<<<<<<<< - * if np.any(np.greater(p, 1)): - * raise ValueError("p > 1") - */ - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3405; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_41)); - PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_kp_s_41)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_41)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3405; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3405; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L10; - } - __pyx_L10:; - - /* "mtrand.pyx":3406 - * if np.any(np.less(p, 0)): - * raise ValueError("p < 0") - * if np.any(np.greater(p, 1)): # <<<<<<<<<<<<<< - * raise ValueError("p > 1") - * return discdd_array(self.internal_state, rk_negative_binomial, size, - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3406; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3406; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3406; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__greater); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3406; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3406; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_p); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_p); - __Pyx_GIVEREF(__pyx_v_p); - __Pyx_INCREF(__pyx_int_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - __pyx_t_2 = PyObject_Call(__pyx_t_5, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3406; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3406; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3406; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3406; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if 
(__pyx_t_1) { - - /* "mtrand.pyx":3407 - * raise ValueError("p < 0") - * if np.any(np.greater(p, 1)): - * raise ValueError("p > 1") # <<<<<<<<<<<<<< - * return discdd_array(self.internal_state, rk_negative_binomial, size, - * on, op) - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3407; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_42)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_42)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_42)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3407; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3407; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L11; - } - __pyx_L11:; - - /* "mtrand.pyx":3408 - * if np.any(np.greater(p, 1)): - * raise ValueError("p > 1") - * return discdd_array(self.internal_state, rk_negative_binomial, size, # <<<<<<<<<<<<<< - * on, op) - * - */ - __Pyx_XDECREF(__pyx_r); - - /* "mtrand.pyx":3409 - * raise ValueError("p > 1") - * return discdd_array(self.internal_state, rk_negative_binomial, size, - * on, op) # <<<<<<<<<<<<<< - * - * def poisson(self, lam=1.0, size=None): - */ - __pyx_t_3 = __pyx_f_6mtrand_discdd_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_negative_binomial, __pyx_v_size, __pyx_v_on, __pyx_v_op); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3408; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - 
__Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.negative_binomial"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_on); - __Pyx_DECREF((PyObject *)__pyx_v_op); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_n); - __Pyx_DECREF(__pyx_v_p); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":3411 - * on, op) - * - * def poisson(self, lam=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * poisson(lam=1.0, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_poisson(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_poisson[] = "\n poisson(lam=1.0, size=None)\n\n Draw samples from a Poisson distribution.\n\n The Poisson distribution is the limit of the Binomial\n distribution for large N.\n\n Parameters\n ----------\n lam : float\n Expectation of interval, should be >= 0.\n size : int or tuple of ints, optional\n Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n ``m * n * k`` samples are drawn.\n\n Notes\n -----\n The Poisson distribution\n\n .. math:: f(k; \\lambda)=\\frac{\\lambda^k e^{-\\lambda}}{k!}\n\n For events with an expected separation :math:`\\lambda` the Poisson\n distribution :math:`f(k; \\lambda)` describes the probability of\n :math:`k` events occurring within the observed interval :math:`\\lambda`.\n\n References\n ----------\n .. [1] Weisstein, Eric W. \"Poisson Distribution.\" From MathWorld--A Wolfram\n Web Resource. http://mathworld.wolfram.com/PoissonDistribution.html\n .. 
[2] Wikipedia, \"Poisson distribution\",\n http://en.wikipedia.org/wiki/Poisson_distribution\n\n Examples\n --------\n Draw samples from the distribution:\n\n >>> import numpy as np\n >>> s = np.random.poisson(5, 10000)\n\n Display histogram of the sample:\n\n >>> import matplotlib.pyplot as plt\n >>> count, bins, ignored = plt.hist(s, 14, normed=True)\n >>> plt.show()\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_poisson(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_lam = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_olam; - double __pyx_v_flam; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__lam,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("poisson"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[2] = {0,0}; - values[0] = __pyx_k_43; - values[1] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__lam); - if (unlikely(value)) { values[0] = value; kw_args--; } - } - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "poisson") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3411; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_lam = values[0]; - __pyx_v_size = values[1]; - } else { - __pyx_v_lam = 
__pyx_k_43; - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_lam = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("poisson", 0, 0, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3411; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.poisson"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_lam); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_olam = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":3461 - * cdef ndarray olam - * cdef double flam - * flam = PyFloat_AsDouble(lam) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if lam < 0: - */ - __pyx_v_flam = PyFloat_AsDouble(__pyx_v_lam); - - /* "mtrand.pyx":3462 - * cdef double flam - * flam = PyFloat_AsDouble(lam) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if lam < 0: - * raise ValueError("lam < 0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":3463 - * flam = PyFloat_AsDouble(lam) - * if not PyErr_Occurred(): - * if lam < 0: # <<<<<<<<<<<<<< - * raise ValueError("lam < 0") - * return discd_array_sc(self.internal_state, rk_poisson, size, flam) - */ - __pyx_t_2 = PyObject_RichCompare(__pyx_v_lam, __pyx_int_0, Py_LT); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3463; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3463; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3464 - * if not 
PyErr_Occurred(): - * if lam < 0: - * raise ValueError("lam < 0") # <<<<<<<<<<<<<< - * return discd_array_sc(self.internal_state, rk_poisson, size, flam) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3464; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_44)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_44)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_44)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3464; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3464; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":3465 - * if lam < 0: - * raise ValueError("lam < 0") - * return discd_array_sc(self.internal_state, rk_poisson, size, flam) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_discd_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_poisson, __pyx_v_size, __pyx_v_flam); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3465; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":3467 - * return discd_array_sc(self.internal_state, rk_poisson, size, flam) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * olam = PyArray_FROM_OTF(lam, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":3469 - * PyErr_Clear() - * - * olam = PyArray_FROM_OTF(lam, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less(olam, 0)): - * raise ValueError("lam < 0") - */ 
- __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_lam, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3469; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_olam)); - __pyx_v_olam = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":3470 - * - * olam = PyArray_FROM_OTF(lam, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less(olam, 0)): # <<<<<<<<<<<<<< - * raise ValueError("lam < 0") - * return discd_array(self.internal_state, rk_poisson, size, olam) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3470; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3470; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3470; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3470; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3470; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_v_olam)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_olam)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_olam)); - __Pyx_INCREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0); - 
__Pyx_GIVEREF(__pyx_int_0); - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3470; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3470; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3470; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3470; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3471 - * olam = PyArray_FROM_OTF(lam, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less(olam, 0)): - * raise ValueError("lam < 0") # <<<<<<<<<<<<<< - * return discd_array(self.internal_state, rk_poisson, size, olam) - * - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3471; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_44)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_44)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_44)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3471; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); 
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3471; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":3472 - * if np.any(np.less(olam, 0)): - * raise ValueError("lam < 0") - * return discd_array(self.internal_state, rk_poisson, size, olam) # <<<<<<<<<<<<<< - * - * def zipf(self, a, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_discd_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_poisson, __pyx_v_size, __pyx_v_olam); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3472; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.poisson"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_olam); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_lam); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":3474 - * return discd_array(self.internal_state, rk_poisson, size, olam) - * - * def zipf(self, a, size=None): # <<<<<<<<<<<<<< - * """ - * zipf(a, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_zipf(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_zipf[] = "\n"" zipf(a, size=None)\n""\n"" Draw samples from a Zipf distribution.\n""\n"" Samples are drawn from a Zipf distribution with specified parameter (a),\n"" where a > 1.\n""\n"" The zipf distribution (also known as the zeta\n"" distribution) is a discrete probability distribution that satisfies\n"" Zipf's law, where the frequency of an
item is inversely proportional to\n"" its rank in a frequency table.\n""\n"" Parameters\n"" ----------\n"" a : float\n"" parameter, > 1.\n"" size : {tuple, int}\n"" Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" Returns\n"" -------\n"" samples : {ndarray, scalar}\n"" The returned samples are greater than or equal to one.\n""\n"" See Also\n"" --------\n"" scipy.stats.distributions.zipf : probability density function,\n"" distribution or cumulative density function, etc.\n""\n"" Notes\n"" -----\n"" The probability density for the Zipf distribution is\n""\n"" .. math:: p(x) = \\frac{x^{-a}}{\\zeta(a)},\n""\n"" where :math:`\\zeta` is the Riemann Zeta function.\n""\n"" Named after the American linguist George Kingsley Zipf, who noted that\n"" the frequency of any word in a sample of a language is inversely\n"" proportional to its rank in the frequency table.\n""\n""\n"" References\n"" ----------\n"" .. [1] Weisstein, Eric W. \"Zipf Distribution.\" From MathWorld--A Wolfram\n"" Web Resource. http://mathworld.wolfram.com/ZipfDistribution.html\n"" .. [2] Wikipedia, \"Zeta distribution\",\n"" http://en.wikipedia.org/wiki/Zeta_distribution\n"" .. [3] Wikipedia, \"Zipf's Law\",\n"" http://en.wikipedia.org/wiki/Zipf%27s_law\n"" .. [4] Zipf, George Kingsley (1932): Selected Studies of the Principle\n"" of Relative Frequency in Language. Cambridge (Mass.).\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" >>> a = 2. 
# parameter\n"" >>> s = np.random.zipf(a, 1000)\n""\n"" Display the histogram of the samples, along with\n"" the probability density function:\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> import scipy.special as sps\n"" Truncate s values at 50 so plot is interesting\n"" >>> count, bins, ignored = plt.hist(s[s<50], 50, normed=True)\n"" >>> x = np.arange(1., 50.)\n"" >>> y = x**(-a)/sps.zetac(a)\n"" >>> plt.plot(x, y/max(y), linewidth=2, color='r')\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_zipf(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_a = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_oa; - double __pyx_v_fa; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__a,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("zipf"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[2] = {0,0}; - values[1] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__a); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "zipf") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3474; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_a = values[0]; - 
__pyx_v_size = values[1]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_a = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("zipf", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3474; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.zipf"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_a); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_oa = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":3553 - * cdef double fa - * - * fa = PyFloat_AsDouble(a) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fa <= 1.0: - */ - __pyx_v_fa = PyFloat_AsDouble(__pyx_v_a); - - /* "mtrand.pyx":3554 - * - * fa = PyFloat_AsDouble(a) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fa <= 1.0: - * raise ValueError("a <= 1.0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":3555 - * fa = PyFloat_AsDouble(a) - * if not PyErr_Occurred(): - * if fa <= 1.0: # <<<<<<<<<<<<<< - * raise ValueError("a <= 1.0") - * return discd_array_sc(self.internal_state, rk_zipf, size, fa) - */ - __pyx_t_1 = (__pyx_v_fa <= 1.0); - if (__pyx_t_1) { - - /* "mtrand.pyx":3556 - * if not PyErr_Occurred(): - * if fa <= 1.0: - * raise ValueError("a <= 1.0") # <<<<<<<<<<<<<< - * return discd_array_sc(self.internal_state, rk_zipf, size, fa) - * - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3556; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_45)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_45)); - 
__Pyx_GIVEREF(((PyObject *)__pyx_kp_s_45)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3556; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3556; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":3557 - * if fa <= 1.0: - * raise ValueError("a <= 1.0") - * return discd_array_sc(self.internal_state, rk_zipf, size, fa) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __pyx_f_6mtrand_discd_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_zipf, __pyx_v_size, __pyx_v_fa); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3557; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":3559 - * return discd_array_sc(self.internal_state, rk_zipf, size, fa) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":3561 - * PyErr_Clear() - * - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(oa, 1.0)): - * raise ValueError("a <= 1.0") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_a, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3561; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_oa)); - __pyx_v_oa = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":3562 - * - * oa = PyArray_FROM_OTF(a, 
NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oa, 1.0)): # <<<<<<<<<<<<<< - * raise ValueError("a <= 1.0") - * return discd_array(self.internal_state, rk_zipf, size, oa) - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3562; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3562; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3562; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3562; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble(1.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3562; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3562; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_oa)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_oa)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_oa)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3562; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); 
__pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3562; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3562; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3562; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3563 - * oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(oa, 1.0)): - * raise ValueError("a <= 1.0") # <<<<<<<<<<<<<< - * return discd_array(self.internal_state, rk_zipf, size, oa) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3563; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_45)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_45)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_45)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3563; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3563; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":3564 - * if np.any(np.less_equal(oa, 1.0)): - * raise ValueError("a <= 1.0") - * return 
discd_array(self.internal_state, rk_zipf, size, oa) # <<<<<<<<<<<<<< - * - * def geometric(self, p, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = __pyx_f_6mtrand_discd_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_zipf, __pyx_v_size, __pyx_v_oa); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3564; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.zipf"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_oa); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_a); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":3566 - * return discd_array(self.internal_state, rk_zipf, size, oa) - * - * def geometric(self, p, size=None): # <<<<<<<<<<<<<< - * """ - * geometric(p, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_geometric(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_geometric[] = "\n geometric(p, size=None)\n\n Draw samples from the geometric distribution.\n\n Bernoulli trials are experiments with one of two outcomes:\n success or failure (an example of such an experiment is flipping\n a coin). The geometric distribution models the number of trials\n that must be run in order to achieve success. It is therefore\n supported on the positive integers, ``k = 1, 2, ...``.\n\n The probability mass function of the geometric distribution is\n\n .. 
math:: f(k) = (1 - p)^{k - 1} p\n\n where `p` is the probability of success of an individual trial.\n\n Parameters\n ----------\n p : float\n The probability of success of an individual trial.\n size : tuple of ints\n Number of values to draw from the distribution. The output\n is shaped according to `size`.\n\n Returns\n -------\n out : ndarray\n Samples from the geometric distribution, shaped according to\n `size`.\n\n Examples\n --------\n Draw ten thousand values from the geometric distribution,\n with the probability of an individual success equal to 0.35:\n\n >>> z = np.random.geometric(p=0.35, size=10000)\n\n How many trials succeeded after a single run?\n\n >>> (z == 1).sum() / 10000.\n 0.34889999999999999 #random\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_geometric(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_p = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_op; - double __pyx_v_fp; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__p,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("geometric"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[2] = {0,0}; - values[1] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__p); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if 
(unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "geometric") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3566; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_p = values[0]; - __pyx_v_size = values[1]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_p = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("geometric", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3566; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.geometric"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_p); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_op = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":3614 - * cdef double fp - * - * fp = PyFloat_AsDouble(p) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fp < 0.0: - */ - __pyx_v_fp = PyFloat_AsDouble(__pyx_v_p); - - /* "mtrand.pyx":3615 - * - * fp = PyFloat_AsDouble(p) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fp < 0.0: - * raise ValueError("p < 0.0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":3616 - * fp = PyFloat_AsDouble(p) - * if not PyErr_Occurred(): - * if fp < 0.0: # <<<<<<<<<<<<<< - * raise ValueError("p < 0.0") - * if fp > 1.0: - */ - __pyx_t_1 = (__pyx_v_fp < 0.0); - if (__pyx_t_1) { - - /* "mtrand.pyx":3617 - * if not PyErr_Occurred(): - * if fp < 0.0: - * raise ValueError("p < 0.0") # <<<<<<<<<<<<<< - * if fp > 1.0: - * raise ValueError("p > 1.0") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3617; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_46)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_46)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_46)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3617; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3617; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":3618 - * if fp < 0.0: - * raise ValueError("p < 0.0") - * if fp > 1.0: # <<<<<<<<<<<<<< - * raise ValueError("p > 1.0") - * return discd_array_sc(self.internal_state, rk_geometric, size, fp) - */ - __pyx_t_1 = (__pyx_v_fp > 1.0); - if (__pyx_t_1) { - - /* "mtrand.pyx":3619 - * raise ValueError("p < 0.0") - * if fp > 1.0: - * raise ValueError("p > 1.0") # <<<<<<<<<<<<<< - * return discd_array_sc(self.internal_state, rk_geometric, size, fp) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3619; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_47)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_47)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_47)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3619; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3619; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* 
"mtrand.pyx":3620 - * if fp > 1.0: - * raise ValueError("p > 1.0") - * return discd_array_sc(self.internal_state, rk_geometric, size, fp) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_f_6mtrand_discd_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_geometric, __pyx_v_size, __pyx_v_fp); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3620; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":3622 - * return discd_array_sc(self.internal_state, rk_geometric, size, fp) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * - */ - PyErr_Clear(); - - /* "mtrand.pyx":3625 - * - * - * op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less(op, 0.0)): - * raise ValueError("p < 0.0") - */ - __pyx_t_2 = PyArray_FROM_OTF(__pyx_v_p, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3625; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_op)); - __pyx_v_op = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":3626 - * - * op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less(op, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("p < 0.0") - * if np.any(np.greater(op, 1.0)): - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3626; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__any); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3626; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3626; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__less); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3626; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3626; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3626; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_op)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_op)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_op)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3626; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3626; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_3, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3626; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = 
__Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3626; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3627 - * op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less(op, 0.0)): - * raise ValueError("p < 0.0") # <<<<<<<<<<<<<< - * if np.any(np.greater(op, 1.0)): - * raise ValueError("p > 1.0") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3627; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_46)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_46)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_46)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3627; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3627; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L9; - } - __pyx_L9:; - - /* "mtrand.pyx":3628 - * if np.any(np.less(op, 0.0)): - * raise ValueError("p < 0.0") - * if np.any(np.greater(op, 1.0)): # <<<<<<<<<<<<<< - * raise ValueError("p > 1.0") - * return discd_array(self.internal_state, rk_geometric, size, op) - */ - __pyx_t_5 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3628; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3628; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = 
__Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3628; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__greater); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3628; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyFloat_FromDouble(1.0); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3628; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3628; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)__pyx_v_op)); - PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_v_op)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_op)); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_3, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3628; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3628; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3628; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_1 < 0)) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 3628; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3629 - * raise ValueError("p < 0.0") - * if np.any(np.greater(op, 1.0)): - * raise ValueError("p > 1.0") # <<<<<<<<<<<<<< - * return discd_array(self.internal_state, rk_geometric, size, op) - * - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3629; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_47)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_47)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_47)); - __pyx_t_4 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3629; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3629; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L10; - } - __pyx_L10:; - - /* "mtrand.pyx":3630 - * if np.any(np.greater(op, 1.0)): - * raise ValueError("p > 1.0") - * return discd_array(self.internal_state, rk_geometric, size, op) # <<<<<<<<<<<<<< - * - * def hypergeometric(self, ngood, nbad, nsample, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_4 = __pyx_f_6mtrand_discd_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_geometric, __pyx_v_size, __pyx_v_op); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3630; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - 
__Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.geometric"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_op); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_p); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":3632 - * return discd_array(self.internal_state, rk_geometric, size, op) - * - * def hypergeometric(self, ngood, nbad, nsample, size=None): # <<<<<<<<<<<<<< - * """ - * hypergeometric(ngood, nbad, nsample, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_hypergeometric(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_hypergeometric[] = "\n"" hypergeometric(ngood, nbad, nsample, size=None)\n""\n"" Draw samples from a Hypergeometric distribution.\n""\n"" Samples are drawn from a Hypergeometric distribution with specified\n"" parameters, ngood (ways to make a good selection), nbad (ways to make\n"" a bad selection), and nsample = number of items sampled, which is less\n"" than or equal to the sum ngood + nbad.\n""\n"" Parameters\n"" ----------\n"" ngood : float (but truncated to an integer)\n"" parameter, > 0.\n"" nbad : float\n"" parameter, >= 0.\n"" nsample : float\n"" parameter, > 0 and <= ngood+nbad\n"" size : {tuple, int}\n"" Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" Returns\n"" -------\n"" samples : {ndarray, scalar}\n"" where the values are all integers in [0, n].\n""\n"" See Also\n"" --------\n"" scipy.stats.distributions.hypergeom : probability density function,\n"" distribution or cumulative density function, etc.\n""\n"" Notes\n"" -----\n"" The probability density for the Hypergeometric distribution is\n""\n"" .. 
math:: P(x) = \\frac{\\binom{m}{x}\\binom{N-m}{n-x}}{\\binom{N}{n}},\n""\n"" where :math:`0 \\le x \\le m` and :math:`n+m-N \\le x \\le n`\n""\n"" for P(x) the probability of x good selections, m = ngood,\n"" n = nsample, and N = ngood + nbad.\n""\n"" Consider an urn containing black and white marbles, ngood of them\n"" black and nbad white. If you draw nsample marbles without\n"" replacement, then the Hypergeometric distribution describes the\n"" distribution of black marbles in the drawn sample.\n""\n"" Note that this distribution is very similar to the Binomial\n"" distribution, except that in this case, samples are drawn without\n"" replacement, whereas in the Binomial case samples are drawn with\n"" replacement (or the sample space is infinite). As the sample space\n"" becomes large, this distribution approaches the Binomial.\n""\n"" References\n"" ----------\n"" .. [1] Lentner, Marvin, \"Elementary Applied Statistics\", Bogden\n"" and Quigley, 1972.\n"" .. [2] Weisstein, Eric W. \"Hypergeometric Distribution.\" From\n"" MathWorld--A Wolfram Web Resource.\n"" http://mathworld.wolfram.com/HypergeometricDistribution.html\n"" .. [3] Wikipedia, \"Hypergeometric-distribution\",\n"" http://en.wikipedia.org/wiki/Hypergeometric-distribution\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" >>> ngood, nbad, nsamp = 100, 2, 10\n"" # number of good, number of bad, and number of samples\n"" >>> s = np.random.hypergeometric(ngood, nbad, nsamp, 1000)\n"" >>> hist(s)\n"" # note that it is very unlikely to grab both bad items\n""\n"" Suppose you have an urn with 15 white and 15 black marbles.\n"" If you pull 15 marbles at random, how likely is it that\n"" 12 or more of them are one color?\n""\n"" >>> s = np.random.hypergeometric(15, 15, 15, 100000)\n"" >>> sum(s>=12)/100000. + sum(s<=3)/100000.\n"" # answer = 0.003 ... 
pretty unlikely!\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_hypergeometric(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_ngood = 0; - PyObject *__pyx_v_nbad = 0; - PyObject *__pyx_v_nsample = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_ongood; - PyArrayObject *__pyx_v_onbad; - PyArrayObject *__pyx_v_onsample; - long __pyx_v_lngood; - long __pyx_v_lnbad; - long __pyx_v_lnsample; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__ngood,&__pyx_n_s__nbad,&__pyx_n_s__nsample,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("hypergeometric"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[4] = {0,0,0,0}; - values[3] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__ngood); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__nbad); - if (likely(values[1])) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("hypergeometric", 0, 3, 4, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3632; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - case 2: - values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__nsample); - if (likely(values[2])) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("hypergeometric", 0, 3, 4, 2); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3632; __pyx_clineno = 
__LINE__; goto __pyx_L3_error;} - } - case 3: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[3] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "hypergeometric") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3632; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_ngood = values[0]; - __pyx_v_nbad = values[1]; - __pyx_v_nsample = values[2]; - __pyx_v_size = values[3]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 4: - __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 3); - case 3: - __pyx_v_nsample = PyTuple_GET_ITEM(__pyx_args, 2); - __pyx_v_nbad = PyTuple_GET_ITEM(__pyx_args, 1); - __pyx_v_ngood = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("hypergeometric", 0, 3, 4, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3632; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.hypergeometric"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_ngood); - __Pyx_INCREF(__pyx_v_nbad); - __Pyx_INCREF(__pyx_v_nsample); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_ongood = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_onbad = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_onsample = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":3719 - * cdef long lngood, lnbad, lnsample - * - * lngood = PyInt_AsLong(ngood) # <<<<<<<<<<<<<< - * lnbad = PyInt_AsLong(nbad) - * lnsample = PyInt_AsLong(nsample) - */ - __pyx_v_lngood = PyInt_AsLong(__pyx_v_ngood); - - /* "mtrand.pyx":3720 - * - * lngood 
= PyInt_AsLong(ngood) - * lnbad = PyInt_AsLong(nbad) # <<<<<<<<<<<<<< - * lnsample = PyInt_AsLong(nsample) - * if not PyErr_Occurred(): - */ - __pyx_v_lnbad = PyInt_AsLong(__pyx_v_nbad); - - /* "mtrand.pyx":3721 - * lngood = PyInt_AsLong(ngood) - * lnbad = PyInt_AsLong(nbad) - * lnsample = PyInt_AsLong(nsample) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if ngood < 1: - */ - __pyx_v_lnsample = PyInt_AsLong(__pyx_v_nsample); - - /* "mtrand.pyx":3722 - * lnbad = PyInt_AsLong(nbad) - * lnsample = PyInt_AsLong(nsample) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if ngood < 1: - * raise ValueError("ngood < 1") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":3723 - * lnsample = PyInt_AsLong(nsample) - * if not PyErr_Occurred(): - * if ngood < 1: # <<<<<<<<<<<<<< - * raise ValueError("ngood < 1") - * if nbad < 1: - */ - __pyx_t_2 = PyObject_RichCompare(__pyx_v_ngood, __pyx_int_1, Py_LT); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3723; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3723; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3724 - * if not PyErr_Occurred(): - * if ngood < 1: - * raise ValueError("ngood < 1") # <<<<<<<<<<<<<< - * if nbad < 1: - * raise ValueError("nbad < 1") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3724; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_48)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_48)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_48)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3724; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3724; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":3725 - * if ngood < 1: - * raise ValueError("ngood < 1") - * if nbad < 1: # <<<<<<<<<<<<<< - * raise ValueError("nbad < 1") - * if nsample < 1: - */ - __pyx_t_3 = PyObject_RichCompare(__pyx_v_nbad, __pyx_int_1, Py_LT); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3725; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3725; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3726 - * raise ValueError("ngood < 1") - * if nbad < 1: - * raise ValueError("nbad < 1") # <<<<<<<<<<<<<< - * if nsample < 1: - * raise ValueError("nsample < 1") - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3726; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_49)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_49)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_49)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3726; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3726; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":3727 - * if nbad < 1: - * 
raise ValueError("nbad < 1") - * if nsample < 1: # <<<<<<<<<<<<<< - * raise ValueError("nsample < 1") - * if ngood + nbad < nsample: - */ - __pyx_t_2 = PyObject_RichCompare(__pyx_v_nsample, __pyx_int_1, Py_LT); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3727; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3727; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3728 - * raise ValueError("nbad < 1") - * if nsample < 1: - * raise ValueError("nsample < 1") # <<<<<<<<<<<<<< - * if ngood + nbad < nsample: - * raise ValueError("ngood + nbad < nsample") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3728; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_50)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_50)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_50)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3728; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3728; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L9; - } - __pyx_L9:; - - /* "mtrand.pyx":3729 - * if nsample < 1: - * raise ValueError("nsample < 1") - * if ngood + nbad < nsample: # <<<<<<<<<<<<<< - * raise ValueError("ngood + nbad < nsample") - * return discnmN_array_sc(self.internal_state, rk_hypergeometric, size, - */ - __pyx_t_3 = PyNumber_Add(__pyx_v_ngood, __pyx_v_nbad); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 3729; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_RichCompare(__pyx_t_3, __pyx_v_nsample, Py_LT); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3729; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3729; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3730 - * raise ValueError("nsample < 1") - * if ngood + nbad < nsample: - * raise ValueError("ngood + nbad < nsample") # <<<<<<<<<<<<<< - * return discnmN_array_sc(self.internal_state, rk_hypergeometric, size, - * lngood, lnbad, lnsample) - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3730; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_51)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_51)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_51)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3730; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3730; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L10; - } - __pyx_L10:; - - /* "mtrand.pyx":3731 - * if ngood + nbad < nsample: - * raise ValueError("ngood + nbad < nsample") - * return discnmN_array_sc(self.internal_state, rk_hypergeometric, size, # <<<<<<<<<<<<<< - * lngood, lnbad, lnsample) - * - */ - __Pyx_XDECREF(__pyx_r); - - /* "mtrand.pyx":3732 - * raise ValueError("ngood + nbad < 
nsample") - * return discnmN_array_sc(self.internal_state, rk_hypergeometric, size, - * lngood, lnbad, lnsample) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_f_6mtrand_discnmN_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_hypergeometric, __pyx_v_size, __pyx_v_lngood, __pyx_v_lnbad, __pyx_v_lnsample); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3731; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":3735 - * - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * ongood = PyArray_FROM_OTF(ngood, NPY_LONG, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":3737 - * PyErr_Clear() - * - * ongood = PyArray_FROM_OTF(ngood, NPY_LONG, NPY_ALIGNED) # <<<<<<<<<<<<<< - * onbad = PyArray_FROM_OTF(nbad, NPY_LONG, NPY_ALIGNED) - * onsample = PyArray_FROM_OTF(nsample, NPY_LONG, NPY_ALIGNED) - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_ngood, NPY_LONG, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3737; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_ongood)); - __pyx_v_ongood = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":3738 - * - * ongood = PyArray_FROM_OTF(ngood, NPY_LONG, NPY_ALIGNED) - * onbad = PyArray_FROM_OTF(nbad, NPY_LONG, NPY_ALIGNED) # <<<<<<<<<<<<<< - * onsample = PyArray_FROM_OTF(nsample, NPY_LONG, NPY_ALIGNED) - * if np.any(np.less(ongood, 1)): - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_nbad, NPY_LONG, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3738; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject 
*)__pyx_v_onbad)); - __pyx_v_onbad = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":3739 - * ongood = PyArray_FROM_OTF(ngood, NPY_LONG, NPY_ALIGNED) - * onbad = PyArray_FROM_OTF(nbad, NPY_LONG, NPY_ALIGNED) - * onsample = PyArray_FROM_OTF(nsample, NPY_LONG, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less(ongood, 1)): - * raise ValueError("ngood < 1") - */ - __pyx_t_3 = PyArray_FROM_OTF(__pyx_v_nsample, NPY_LONG, NPY_ALIGNED); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3739; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_3))); - __Pyx_DECREF(((PyObject *)__pyx_v_onsample)); - __pyx_v_onsample = ((PyArrayObject *)__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":3740 - * onbad = PyArray_FROM_OTF(nbad, NPY_LONG, NPY_ALIGNED) - * onsample = PyArray_FROM_OTF(nsample, NPY_LONG, NPY_ALIGNED) - * if np.any(np.less(ongood, 1)): # <<<<<<<<<<<<<< - * raise ValueError("ngood < 1") - * if np.any(np.less(onbad, 1)): - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3740; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3740; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3740; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3740; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - 
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3740; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_v_ongood)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_ongood)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_ongood)); - __Pyx_INCREF(__pyx_int_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3740; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3740; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3740; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3740; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3741 - * onsample = PyArray_FROM_OTF(nsample, NPY_LONG, NPY_ALIGNED) - * if np.any(np.less(ongood, 1)): - * raise ValueError("ngood < 1") # <<<<<<<<<<<<<< - * if np.any(np.less(onbad, 1)): - * raise ValueError("nbad < 1") - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3741; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_48)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_48)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_48)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3741; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3741; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L11; - } - __pyx_L11:; - - /* "mtrand.pyx":3742 - * if np.any(np.less(ongood, 1)): - * raise ValueError("ngood < 1") - * if np.any(np.less(onbad, 1)): # <<<<<<<<<<<<<< - * raise ValueError("nbad < 1") - * if np.any(np.less(onsample, 1)): - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3742; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3742; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3742; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3742; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3742; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - 
__Pyx_INCREF(((PyObject *)__pyx_v_onbad)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_onbad)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_onbad)); - __Pyx_INCREF(__pyx_int_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_1); - __Pyx_GIVEREF(__pyx_int_1); - __pyx_t_4 = PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3742; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3742; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = PyObject_Call(__pyx_t_5, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3742; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3742; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3743 - * raise ValueError("ngood < 1") - * if np.any(np.less(onbad, 1)): - * raise ValueError("nbad < 1") # <<<<<<<<<<<<<< - * if np.any(np.less(onsample, 1)): - * raise ValueError("nsample < 1") - */ - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3743; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_49)); - PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_kp_s_49)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_49)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if 
(unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3743; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3743; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L12; - } - __pyx_L12:; - - /* "mtrand.pyx":3744 - * if np.any(np.less(onbad, 1)): - * raise ValueError("nbad < 1") - * if np.any(np.less(onsample, 1)): # <<<<<<<<<<<<<< - * raise ValueError("nsample < 1") - * if np.any(np.less(np.add(ongood, onbad),onsample)): - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3744; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3744; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3744; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3744; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3744; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_v_onsample)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_onsample)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_onsample)); - __Pyx_INCREF(__pyx_int_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_1); - 
__Pyx_GIVEREF(__pyx_int_1); - __pyx_t_2 = PyObject_Call(__pyx_t_5, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3744; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3744; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3744; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3744; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3745 - * raise ValueError("nbad < 1") - * if np.any(np.less(onsample, 1)): - * raise ValueError("nsample < 1") # <<<<<<<<<<<<<< - * if np.any(np.less(np.add(ongood, onbad),onsample)): - * raise ValueError("ngood + nbad < nsample") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3745; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_50)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_50)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_50)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3745; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - 
__Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3745; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L13; - } - __pyx_L13:; - - /* "mtrand.pyx":3746 - * if np.any(np.less(onsample, 1)): - * raise ValueError("nsample < 1") - * if np.any(np.less(np.add(ongood, onbad),onsample)): # <<<<<<<<<<<<<< - * raise ValueError("ngood + nbad < nsample") - * return discnmN_array(self.internal_state, rk_hypergeometric, size, - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__less); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__add); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3746; __pyx_clineno 
= __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_v_ongood)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_ongood)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_ongood)); - __Pyx_INCREF(((PyObject *)__pyx_v_onbad)); - PyTuple_SET_ITEM(__pyx_t_3, 1, ((PyObject *)__pyx_v_onbad)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_onbad)); - __pyx_t_6 = PyObject_Call(__pyx_t_5, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_6); - __Pyx_GIVEREF(__pyx_t_6); - __Pyx_INCREF(((PyObject *)__pyx_v_onsample)); - PyTuple_SET_ITEM(__pyx_t_3, 1, ((PyObject *)__pyx_v_onsample)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_onsample)); - __pyx_t_6 = 0; - __pyx_t_6 = PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_6); - __Pyx_GIVEREF(__pyx_t_6); - __pyx_t_6 = 0; - __pyx_t_6 = PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_1 = 
__Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3747 - * raise ValueError("nsample < 1") - * if np.any(np.less(np.add(ongood, onbad),onsample)): - * raise ValueError("ngood + nbad < nsample") # <<<<<<<<<<<<<< - * return discnmN_array(self.internal_state, rk_hypergeometric, size, - * ongood, onbad, onsample) - */ - __pyx_t_6 = PyTuple_New(1); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3747; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_6); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_51)); - PyTuple_SET_ITEM(__pyx_t_6, 0, ((PyObject *)__pyx_kp_s_51)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_51)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_6, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3747; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3747; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L14; - } - __pyx_L14:; - - /* "mtrand.pyx":3748 - * if np.any(np.less(np.add(ongood, onbad),onsample)): - * raise ValueError("ngood + nbad < nsample") - * return discnmN_array(self.internal_state, rk_hypergeometric, size, # <<<<<<<<<<<<<< - * ongood, onbad, onsample) - * - */ - __Pyx_XDECREF(__pyx_r); - - /* "mtrand.pyx":3749 - * raise ValueError("ngood + nbad < nsample") - * return discnmN_array(self.internal_state, rk_hypergeometric, size, - * ongood, onbad, onsample) # <<<<<<<<<<<<<< - * - * def logseries(self, p, size=None): - */ - __pyx_t_3 = __pyx_f_6mtrand_discnmN_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_hypergeometric, __pyx_v_size, 
__pyx_v_ongood, __pyx_v_onbad, __pyx_v_onsample); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3748; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("mtrand.RandomState.hypergeometric"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_ongood); - __Pyx_DECREF((PyObject *)__pyx_v_onbad); - __Pyx_DECREF((PyObject *)__pyx_v_onsample); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_ngood); - __Pyx_DECREF(__pyx_v_nbad); - __Pyx_DECREF(__pyx_v_nsample); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":3751 - * ongood, onbad, onsample) - * - * def logseries(self, p, size=None): # <<<<<<<<<<<<<< - * """ - * logseries(p, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_logseries(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_logseries[] = "\n"" logseries(p, size=None)\n""\n"" Draw samples from a Logarithmic Series distribution.\n""\n"" Samples are drawn from a Log Series distribution with specified\n"" parameter, p (probability, 0 < p < 1).\n""\n"" Parameters\n"" ----------\n"" loc : float\n""\n"" scale : float > 0.\n""\n"" size : {tuple, int}\n"" Output shape. 
If the given shape is, e.g., ``(m, n, k)``, then\n"" ``m * n * k`` samples are drawn.\n""\n"" Returns\n"" -------\n"" samples : {ndarray, scalar}\n"" where the values are all integers in [0, n].\n""\n"" See Also\n"" --------\n"" scipy.stats.distributions.logser : probability density function,\n"" distribution or cumulative density function, etc.\n""\n"" Notes\n"" -----\n"" The probability density for the Log Series distribution is\n""\n"" .. math:: P(k) = \\frac{-p^k}{k \\ln(1-p)},\n""\n"" where p = probability.\n""\n"" The Log Series distribution is frequently used to represent species\n"" richness and occurrence, first proposed by Fisher, Corbet, and\n"" Williams in 1943 [2]. It may also be used to model the numbers of\n"" occupants seen in cars [3].\n""\n"" References\n"" ----------\n"" .. [1] Buzas, Martin A.; Culver, Stephen J., Understanding regional\n"" species diversity through the log series distribution of\n"" occurrences: BIODIVERSITY RESEARCH Diversity & Distributions,\n"" Volume 5, Number 5, September 1999 , pp. 187-195(9).\n"" .. [2] Fisher, R.A,, A.S. Corbet, and C.B. Williams. 1943. The\n"" relation between the number of species and the number of\n"" individuals in a random sample of an animal population.\n"" Journal of Animal Ecology, 12:42-58.\n"" .. [3] D. J. Hand, F. Daly, D. Lunn, E. Ostrowski, A Handbook of Small\n"" Data Sets, CRC Press, 1994.\n"" .. [4] Wikipedia, \"Logarithmic-distribution\",\n"" http://en.wikipedia.org/wiki/Logarithmic-distribution\n""\n"" Examples\n"" --------\n"" Draw samples from the distribution:\n""\n"" >>> a = .6\n"" >>> s = np.random.logseries(a, 10000)\n"" >>> count, bins, ignored = plt.hist(s)\n""\n"" # plot against distribution\n""\n"" >>> def logseries(k, p):\n"" ... 
return -p**k/(k*log(1-p))\n"" >>> plt.plot(bins, logseries(bins, a)*count.max()/\n"" logseries(bins, a).max(), 'r')\n"" >>> plt.show()\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_logseries(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_p = 0; - PyObject *__pyx_v_size = 0; - PyArrayObject *__pyx_v_op; - double __pyx_v_fp; - PyObject *__pyx_r = NULL; - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__p,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("logseries"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[2] = {0,0}; - values[1] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__p); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "logseries") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3751; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_p = values[0]; - __pyx_v_size = values[1]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_p = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - 
__pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("logseries", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3751; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.logseries"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_p); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_op = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":3828 - * cdef double fp - * - * fp = PyFloat_AsDouble(p) # <<<<<<<<<<<<<< - * if not PyErr_Occurred(): - * if fp <= 0.0: - */ - __pyx_v_fp = PyFloat_AsDouble(__pyx_v_p); - - /* "mtrand.pyx":3829 - * - * fp = PyFloat_AsDouble(p) - * if not PyErr_Occurred(): # <<<<<<<<<<<<<< - * if fp <= 0.0: - * raise ValueError("p <= 0.0") - */ - __pyx_t_1 = (!PyErr_Occurred()); - if (__pyx_t_1) { - - /* "mtrand.pyx":3830 - * fp = PyFloat_AsDouble(p) - * if not PyErr_Occurred(): - * if fp <= 0.0: # <<<<<<<<<<<<<< - * raise ValueError("p <= 0.0") - * if fp >= 1.0: - */ - __pyx_t_1 = (__pyx_v_fp <= 0.0); - if (__pyx_t_1) { - - /* "mtrand.pyx":3831 - * if not PyErr_Occurred(): - * if fp <= 0.0: - * raise ValueError("p <= 0.0") # <<<<<<<<<<<<<< - * if fp >= 1.0: - * raise ValueError("p >= 1.0") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_52)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_52)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_52)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 3831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":3832 - * if fp <= 0.0: - * raise ValueError("p <= 0.0") - * if fp >= 1.0: # <<<<<<<<<<<<<< - * raise ValueError("p >= 1.0") - * return discd_array_sc(self.internal_state, rk_logseries, size, fp) - */ - __pyx_t_1 = (__pyx_v_fp >= 1.0); - if (__pyx_t_1) { - - /* "mtrand.pyx":3833 - * raise ValueError("p <= 0.0") - * if fp >= 1.0: - * raise ValueError("p >= 1.0") # <<<<<<<<<<<<<< - * return discd_array_sc(self.internal_state, rk_logseries, size, fp) - * - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3833; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_53)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_53)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_53)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3833; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3833; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":3834 - * if fp >= 1.0: - * raise ValueError("p >= 1.0") - * return discd_array_sc(self.internal_state, rk_logseries, size, fp) # <<<<<<<<<<<<<< - * - * PyErr_Clear() - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_f_6mtrand_discd_array_sc(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_logseries, __pyx_v_size, __pyx_v_fp); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3834; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto 
__pyx_L0; - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":3836 - * return discd_array_sc(self.internal_state, rk_logseries, size, fp) - * - * PyErr_Clear() # <<<<<<<<<<<<<< - * - * op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) - */ - PyErr_Clear(); - - /* "mtrand.pyx":3838 - * PyErr_Clear() - * - * op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) # <<<<<<<<<<<<<< - * if np.any(np.less_equal(op, 0.0)): - * raise ValueError("p <= 0.0") - */ - __pyx_t_2 = PyArray_FROM_OTF(__pyx_v_p, NPY_DOUBLE, NPY_ALIGNED); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3838; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_op)); - __pyx_v_op = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":3839 - * - * op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(op, 0.0)): # <<<<<<<<<<<<<< - * raise ValueError("p <= 0.0") - * if np.any(np.greater_equal(op, 1.0)): - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3839; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__any); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3839; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3839; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__less_equal); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3839; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 
= 0; - __pyx_t_2 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3839; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3839; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_v_op)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_op)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_op)); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3839; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3839; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_3, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3839; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3839; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3840 - * op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) - * if np.any(np.less_equal(op, 0.0)): - * raise ValueError("p <= 0.0") # <<<<<<<<<<<<<< - * if np.any(np.greater_equal(op, 1.0)): - * raise ValueError("p >= 1.0") - */ - __pyx_t_2 = PyTuple_New(1); if 
(unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3840; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_52)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_52)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_52)); - __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3840; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_5, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3840; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L9; - } - __pyx_L9:; - - /* "mtrand.pyx":3841 - * if np.any(np.less_equal(op, 0.0)): - * raise ValueError("p <= 0.0") - * if np.any(np.greater_equal(op, 1.0)): # <<<<<<<<<<<<<< - * raise ValueError("p >= 1.0") - * return discd_array(self.internal_state, rk_logseries, size, op) - */ - __pyx_t_5 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3841; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__any); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3841; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3841; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__greater_equal); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3841; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = 
PyFloat_FromDouble(1.0); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3841; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3841; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(((PyObject *)__pyx_v_op)); - PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_v_op)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_op)); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_3, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3841; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3841; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3841; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_1 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3841; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_1) { - - /* "mtrand.pyx":3842 - * raise ValueError("p <= 0.0") - * if np.any(np.greater_equal(op, 1.0)): - * raise ValueError("p >= 1.0") # <<<<<<<<<<<<<< - * return discd_array(self.internal_state, rk_logseries, size, op) - * - */ - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = 
__pyx_f[0]; __pyx_lineno = 3842; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_53)); - PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_s_53)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_53)); - __pyx_t_4 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3842; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3842; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L10; - } - __pyx_L10:; - - /* "mtrand.pyx":3843 - * if np.any(np.greater_equal(op, 1.0)): - * raise ValueError("p >= 1.0") - * return discd_array(self.internal_state, rk_logseries, size, op) # <<<<<<<<<<<<<< - * - * # Multivariate distributions: - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_4 = __pyx_f_6mtrand_discd_array(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, rk_logseries, __pyx_v_size, __pyx_v_op); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3843; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.logseries"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_op); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_p); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":3846 - * - * # Multivariate distributions: - * def multivariate_normal(self, mean, cov, size=None): # <<<<<<<<<<<<<< - * """ 
- * multivariate_normal(mean, cov[, size]) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_multivariate_normal(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_multivariate_normal[] = "\n"" multivariate_normal(mean, cov[, size])\n""\n"" Draw random samples from a multivariate normal distribution.\n""\n"" The multivariate normal, multinormal or Gaussian distribution is a\n"" generalisation of the one-dimensional normal distribution to higher\n"" dimensions.\n""\n"" Such a distribution is specified by its mean and covariance matrix,\n"" which are analogous to the mean (average or \"centre\") and variance\n"" (standard deviation squared or \"width\") of the one-dimensional normal\n"" distribution.\n""\n"" Parameters\n"" ----------\n"" mean : (N,) ndarray\n"" Mean of the N-dimensional distribution.\n"" cov : (N,N) ndarray\n"" Covariance matrix of the distribution.\n"" size : tuple of ints, optional\n"" Given a shape of, for example, (m,n,k), m*n*k samples are\n"" generated, and packed in an m-by-n-by-k arrangement. Because each\n"" sample is N-dimensional, the output shape is (m,n,k,N). If no\n"" shape is specified, a single sample is returned.\n""\n"" Returns\n"" -------\n"" out : ndarray\n"" The drawn samples, arranged according to `size`. If the\n"" shape given is (m,n,...), then the shape of `out` is is\n"" (m,n,...,N).\n""\n"" In other words, each entry ``out[i,j,...,:]`` is an N-dimensional\n"" value drawn from the distribution.\n""\n"" Notes\n"" -----\n"" The mean is a coordinate in N-dimensional space, which represents the\n"" location where samples are most likely to be generated. This is\n"" analogous to the peak of the bell curve for the one-dimensional or\n"" univariate normal distribution.\n""\n"" Covariance indicates the level to which two variables vary together.\n"" From the multivariate normal distribution, we draw N-dimensional\n"" samples, :math:`X = [x_1, x_2, ... 
x_N]`. The covariance matrix\n"" element :math:`C_{ij}` is the covariance of :math:`x_i` and :math:`x_j`.\n"" The element :math:`C_{ii}` is the variance of :math:`x_i` (i.e. its\n"" \"spread\").\n""\n"" Instead of specifying the full covariance matrix, popular\n"" approximations include:\n""\n"" - Spherical covariance (`cov` is a multiple of the identity matrix)\n"" - Diagonal covariance (`cov` has non-negative elements, and only on\n"" the diagonal)\n""\n"" This geometrical property can be seen in two dimensions by plotting\n"" generated data-points:\n""\n"" >>> mean = [0,0]\n"" >>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis\n""\n"" >>> import matplotlib.pyplot as plt\n"" >>> x,y = np.random.multivariate_normal(mean,cov,5000).T\n"" >>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show()\n""\n"" Note that the covariance matrix must be non-negative definite.\n""\n"" References\n"" ----------\n"" .. [1] A. Papoulis, \"Probability, Random Variables, and Stochastic\n"" Processes,\" 3rd ed., McGraw-Hill Companies, 1991\n"" .. [2] R.O. Duda, P.E. Hart, and D.G. 
Stork, \"Pattern Classification,\"\n"" 2nd ed., Wiley, 2001.\n""\n"" Examples\n"" --------\n"" >>> mean = (1,2)\n"" >>> cov = [[1,0],[1,0]]\n"" >>> x = np.random.multivariate_normal(mean,cov,(3,3))\n"" >>> x.shape\n"" (3, 3, 2)\n""\n"" The following is probably true, given that 0.6 is roughly twice the\n"" standard deviation:\n""\n"" >>> print list( (x[0,0,:] - mean) < 0.6 )\n"" [True, True]\n""\n"" "; -static PyObject *__pyx_pf_6mtrand_11RandomState_multivariate_normal(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_mean = 0; - PyObject *__pyx_v_cov = 0; - PyObject *__pyx_v_size = 0; - PyObject *__pyx_v_shape; - PyObject *__pyx_v_final_shape; - PyObject *__pyx_v_x; - PyObject *__pyx_v_svd; - PyObject *__pyx_v_u; - PyObject *__pyx_v_s; - PyObject *__pyx_v_v; - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - Py_ssize_t __pyx_t_5; - int __pyx_t_6; - int __pyx_t_7; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__mean,&__pyx_n_s__cov,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("multivariate_normal"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__mean); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__cov); - if (likely(values[1])) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("multivariate_normal", 0, 2, 3, 1); 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 3846; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - case 2: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "multivariate_normal") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3846; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_mean = values[0]; - __pyx_v_cov = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: - __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: - __pyx_v_cov = PyTuple_GET_ITEM(__pyx_args, 1); - __pyx_v_mean = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("multivariate_normal", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3846; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.multivariate_normal"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_mean); - __Pyx_INCREF(__pyx_v_cov); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_shape = Py_None; __Pyx_INCREF(Py_None); - __pyx_v_final_shape = Py_None; __Pyx_INCREF(Py_None); - __pyx_v_x = Py_None; __Pyx_INCREF(Py_None); - __pyx_v_svd = Py_None; __Pyx_INCREF(Py_None); - __pyx_v_u = Py_None; __Pyx_INCREF(Py_None); - __pyx_v_s = Py_None; __Pyx_INCREF(Py_None); - __pyx_v_v = Py_None; __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":3939 - * """ - * # Check preconditions on arguments - * mean = np.array(mean) # <<<<<<<<<<<<<< - * cov = np.array(cov) - * if size is None: - */ - __pyx_t_1 = 
__Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3939; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__array); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3939; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3939; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_mean); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_mean); - __Pyx_GIVEREF(__pyx_v_mean); - __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3939; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_v_mean); - __pyx_v_mean = __pyx_t_3; - __pyx_t_3 = 0; - - /* "mtrand.pyx":3940 - * # Check preconditions on arguments - * mean = np.array(mean) - * cov = np.array(cov) # <<<<<<<<<<<<<< - * if size is None: - * shape = [] - */ - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3940; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__array); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3940; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3940; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_cov); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_cov); - 
__Pyx_GIVEREF(__pyx_v_cov); - __pyx_t_2 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3940; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_v_cov); - __pyx_v_cov = __pyx_t_2; - __pyx_t_2 = 0; - - /* "mtrand.pyx":3941 - * mean = np.array(mean) - * cov = np.array(cov) - * if size is None: # <<<<<<<<<<<<<< - * shape = [] - * else: - */ - __pyx_t_4 = (__pyx_v_size == Py_None); - if (__pyx_t_4) { - - /* "mtrand.pyx":3942 - * cov = np.array(cov) - * if size is None: - * shape = [] # <<<<<<<<<<<<<< - * else: - * shape = size - */ - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3942; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(((PyObject *)__pyx_t_2)); - __Pyx_DECREF(__pyx_v_shape); - __pyx_v_shape = ((PyObject *)__pyx_t_2); - __pyx_t_2 = 0; - goto __pyx_L6; - } - /*else*/ { - - /* "mtrand.pyx":3944 - * shape = [] - * else: - * shape = size # <<<<<<<<<<<<<< - * if len(mean.shape) != 1: - * raise ValueError("mean must be 1 dimensional") - */ - __Pyx_INCREF(__pyx_v_size); - __Pyx_DECREF(__pyx_v_shape); - __pyx_v_shape = __pyx_v_size; - } - __pyx_L6:; - - /* "mtrand.pyx":3945 - * else: - * shape = size - * if len(mean.shape) != 1: # <<<<<<<<<<<<<< - * raise ValueError("mean must be 1 dimensional") - * if (len(cov.shape) != 2) or (cov.shape[0] != cov.shape[1]): - */ - __pyx_t_2 = PyObject_GetAttr(__pyx_v_mean, __pyx_n_s__shape); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3945; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyObject_Length(__pyx_t_2); if (unlikely(__pyx_t_5 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3945; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_4 = 
(__pyx_t_5 != 1); - if (__pyx_t_4) { - - /* "mtrand.pyx":3946 - * shape = size - * if len(mean.shape) != 1: - * raise ValueError("mean must be 1 dimensional") # <<<<<<<<<<<<<< - * if (len(cov.shape) != 2) or (cov.shape[0] != cov.shape[1]): - * raise ValueError("cov must be 2 dimensional and square") - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3946; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_54)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_54)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_54)); - __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3946; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3946; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L7; - } - __pyx_L7:; - - /* "mtrand.pyx":3947 - * if len(mean.shape) != 1: - * raise ValueError("mean must be 1 dimensional") - * if (len(cov.shape) != 2) or (cov.shape[0] != cov.shape[1]): # <<<<<<<<<<<<<< - * raise ValueError("cov must be 2 dimensional and square") - * if mean.shape[0] != cov.shape[0]: - */ - __pyx_t_3 = PyObject_GetAttr(__pyx_v_cov, __pyx_n_s__shape); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3947; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyObject_Length(__pyx_t_3); if (unlikely(__pyx_t_5 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3947; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = (__pyx_t_5 != 2); - if (!__pyx_t_4) { - __pyx_t_3 = PyObject_GetAttr(__pyx_v_cov, __pyx_n_s__shape); if (unlikely(!__pyx_t_3)) {__pyx_filename = 
__pyx_f[0]; __pyx_lineno = 3947; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_3, 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3947; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyObject_GetAttr(__pyx_v_cov, __pyx_n_s__shape); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3947; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_t_3, 1, sizeof(long), PyInt_FromLong); if (!__pyx_t_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3947; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyObject_RichCompare(__pyx_t_2, __pyx_t_1, Py_NE); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3947; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3947; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_7 = __pyx_t_6; - } else { - __pyx_t_7 = __pyx_t_4; - } - if (__pyx_t_7) { - - /* "mtrand.pyx":3948 - * raise ValueError("mean must be 1 dimensional") - * if (len(cov.shape) != 2) or (cov.shape[0] != cov.shape[1]): - * raise ValueError("cov must be 2 dimensional and square") # <<<<<<<<<<<<<< - * if mean.shape[0] != cov.shape[0]: - * raise ValueError("mean and cov must have same length") - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3948; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_55)); - 
PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_55)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_55)); - __pyx_t_1 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3948; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_1, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3948; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L8; - } - __pyx_L8:; - - /* "mtrand.pyx":3949 - * if (len(cov.shape) != 2) or (cov.shape[0] != cov.shape[1]): - * raise ValueError("cov must be 2 dimensional and square") - * if mean.shape[0] != cov.shape[0]: # <<<<<<<<<<<<<< - * raise ValueError("mean and cov must have same length") - * # Compute shape of output - */ - __pyx_t_1 = PyObject_GetAttr(__pyx_v_mean, __pyx_n_s__shape); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3949; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_GetItemInt(__pyx_t_1, 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3949; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyObject_GetAttr(__pyx_v_cov, __pyx_n_s__shape); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3949; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3949; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyObject_RichCompare(__pyx_t_3, __pyx_t_2, Py_NE); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3949; __pyx_clineno = __LINE__; 
goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_7 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3949; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_7) { - - /* "mtrand.pyx":3950 - * raise ValueError("cov must be 2 dimensional and square") - * if mean.shape[0] != cov.shape[0]: - * raise ValueError("mean and cov must have same length") # <<<<<<<<<<<<<< - * # Compute shape of output - * if isinstance(shape, int): - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3950; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_56)); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)__pyx_kp_s_56)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_56)); - __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_1, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3950; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3950; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L9; - } - __pyx_L9:; - - /* "mtrand.pyx":3952 - * raise ValueError("mean and cov must have same length") - * # Compute shape of output - * if isinstance(shape, int): # <<<<<<<<<<<<<< - * shape = [shape] - * final_shape = list(shape[:]) - */ - __pyx_t_7 = PyObject_TypeCheck(__pyx_v_shape, ((PyTypeObject *)((PyObject*)&PyInt_Type))); - if (__pyx_t_7) { - - /* "mtrand.pyx":3953 - * # Compute shape of output - * if isinstance(shape, int): - * shape = [shape] # <<<<<<<<<<<<<< - * final_shape = list(shape[:]) - * final_shape.append(mean.shape[0]) - */ - __pyx_t_2 = 
PyList_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3953; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(((PyObject *)__pyx_t_2)); - __Pyx_INCREF(__pyx_v_shape); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - __Pyx_DECREF(__pyx_v_shape); - __pyx_v_shape = ((PyObject *)__pyx_t_2); - __pyx_t_2 = 0; - goto __pyx_L10; - } - __pyx_L10:; - - /* "mtrand.pyx":3954 - * if isinstance(shape, int): - * shape = [shape] - * final_shape = list(shape[:]) # <<<<<<<<<<<<<< - * final_shape.append(mean.shape[0]) - * # Create a matrix of independent standard normally distributed random - */ - __pyx_t_2 = PySequence_GetSlice(__pyx_v_shape, 0, PY_SSIZE_T_MAX); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3954; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3954; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(((PyObject *)((PyObject*)&PyList_Type)), __pyx_t_1, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3954; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_v_final_shape); - __pyx_v_final_shape = __pyx_t_2; - __pyx_t_2 = 0; - - /* "mtrand.pyx":3955 - * shape = [shape] - * final_shape = list(shape[:]) - * final_shape.append(mean.shape[0]) # <<<<<<<<<<<<<< - * # Create a matrix of independent standard normally distributed random - * # numbers. 
The matrix has rows with the same length as mean and as - */ - __pyx_t_2 = PyObject_GetAttr(__pyx_v_mean, __pyx_n_s__shape); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3955; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_t_2, 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3955; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Append(__pyx_v_final_shape, __pyx_t_1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3955; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":3959 - * # numbers. The matrix has rows with the same length as mean and as - * # many rows are necessary to form a matrix of shape final_shape. 
- * x = self.standard_normal(np.multiply.reduce(final_shape)) # <<<<<<<<<<<<<< - * x.shape = (np.multiply.reduce(final_shape[0:len(final_shape)-1]), - * mean.shape[0]) - */ - __pyx_t_2 = PyObject_GetAttr(__pyx_v_self, __pyx_n_s__standard_normal); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3959; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3959; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__multiply); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3959; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__reduce); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3959; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3959; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_final_shape); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_final_shape); - __Pyx_GIVEREF(__pyx_v_final_shape); - __pyx_t_8 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_8)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3959; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3959; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_8); - __Pyx_GIVEREF(__pyx_t_8); - __pyx_t_8 = 0; - 
__pyx_t_8 = PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_8)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3959; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_v_x); - __pyx_v_x = __pyx_t_8; - __pyx_t_8 = 0; - - /* "mtrand.pyx":3960 - * # many rows are necessary to form a matrix of shape final_shape. - * x = self.standard_normal(np.multiply.reduce(final_shape)) - * x.shape = (np.multiply.reduce(final_shape[0:len(final_shape)-1]), # <<<<<<<<<<<<<< - * mean.shape[0]) - * # Transform matrix of standard normals into matrix where each row - */ - __pyx_t_8 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_8)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3960; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_8, __pyx_n_s__multiply); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3960; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__reduce); if (unlikely(!__pyx_t_8)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3960; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_5 = PyObject_Length(__pyx_v_final_shape); if (unlikely(__pyx_t_5 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3960; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_t_3 = PySequence_GetSlice(__pyx_v_final_shape, 0, (__pyx_t_5 - 1)); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3960; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3960; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_GOTREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_8, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3960; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":3961 - * x = self.standard_normal(np.multiply.reduce(final_shape)) - * x.shape = (np.multiply.reduce(final_shape[0:len(final_shape)-1]), - * mean.shape[0]) # <<<<<<<<<<<<<< - * # Transform matrix of standard normals into matrix where each row - * # contains multivariate normals with the desired covariance. - */ - __pyx_t_2 = PyObject_GetAttr(__pyx_v_mean, __pyx_n_s__shape); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3961; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_8 = __Pyx_GetItemInt(__pyx_t_2, 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_8) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3961; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3960; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_8); - __Pyx_GIVEREF(__pyx_t_8); - __pyx_t_3 = 0; - __pyx_t_8 = 0; - - /* "mtrand.pyx":3960 - * # many rows are necessary to form a matrix of shape final_shape. 
- * x = self.standard_normal(np.multiply.reduce(final_shape)) - * x.shape = (np.multiply.reduce(final_shape[0:len(final_shape)-1]), # <<<<<<<<<<<<<< - * mean.shape[0]) - * # Transform matrix of standard normals into matrix where each row - */ - if (PyObject_SetAttr(__pyx_v_x, __pyx_n_s__shape, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3960; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":3969 - * # decomposition of cov is such an A. - * - * from numpy.dual import svd # <<<<<<<<<<<<<< - * # XXX: we really should be doing this by Cholesky decomposition - * (u,s,v) = svd(cov) - */ - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3969; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(((PyObject *)__pyx_t_2)); - __Pyx_INCREF(((PyObject *)__pyx_n_s__svd)); - PyList_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_n_s__svd)); - __Pyx_GIVEREF(((PyObject *)__pyx_n_s__svd)); - __pyx_t_8 = __Pyx_Import(((PyObject *)__pyx_n_s_57), ((PyObject *)__pyx_t_2)); if (unlikely(!__pyx_t_8)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3969; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(((PyObject *)__pyx_t_2)); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_t_8, __pyx_n_s__svd); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3969; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_2); - __Pyx_DECREF(__pyx_v_svd); - __pyx_v_svd = __pyx_t_2; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "mtrand.pyx":3971 - * from numpy.dual import svd - * # XXX: we really should be doing this by Cholesky decomposition - * (u,s,v) = svd(cov) # <<<<<<<<<<<<<< - * x = np.dot(x*np.sqrt(s),v) - * # The rows of x now have the correct covariance but mean 0. 
Add - */ - __pyx_t_8 = PyTuple_New(1); if (unlikely(!__pyx_t_8)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3971; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_v_cov); - PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_cov); - __Pyx_GIVEREF(__pyx_v_cov); - __pyx_t_2 = PyObject_Call(__pyx_v_svd, __pyx_t_8, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3971; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (PyTuple_CheckExact(__pyx_t_2) && likely(PyTuple_GET_SIZE(__pyx_t_2) == 3)) { - PyObject* tuple = __pyx_t_2; - __pyx_t_8 = PyTuple_GET_ITEM(tuple, 0); __Pyx_INCREF(__pyx_t_8); - __pyx_t_3 = PyTuple_GET_ITEM(tuple, 1); __Pyx_INCREF(__pyx_t_3); - __pyx_t_1 = PyTuple_GET_ITEM(tuple, 2); __Pyx_INCREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_v_u); - __pyx_v_u = __pyx_t_8; - __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_v_s); - __pyx_v_s = __pyx_t_3; - __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_v_v); - __pyx_v_v = __pyx_t_1; - __pyx_t_1 = 0; - } else { - __pyx_t_9 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_9)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3971; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_8 = __Pyx_UnpackItem(__pyx_t_9, 0); if (unlikely(!__pyx_t_8)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3971; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_3 = __Pyx_UnpackItem(__pyx_t_9, 1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3971; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_UnpackItem(__pyx_t_9, 2); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3971; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_EndUnpack(__pyx_t_9) < 0) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 3971; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_v_u); - __pyx_v_u = __pyx_t_8; - __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_v_s); - __pyx_v_s = __pyx_t_3; - __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_v_v); - __pyx_v_v = __pyx_t_1; - __pyx_t_1 = 0; - } - - /* "mtrand.pyx":3972 - * # XXX: we really should be doing this by Cholesky decomposition - * (u,s,v) = svd(cov) - * x = np.dot(x*np.sqrt(s),v) # <<<<<<<<<<<<<< - * # The rows of x now have the correct covariance but mean 0. Add - * # mean to each row. Then each row will have mean mean. - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3972; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__dot); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3972; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3972; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__sqrt); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3972; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3972; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_s); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_s); - __Pyx_GIVEREF(__pyx_v_s); - __pyx_t_8 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_8)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3972; __pyx_clineno = 
__LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Multiply(__pyx_v_x, __pyx_t_8); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3972; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = PyTuple_New(2); if (unlikely(!__pyx_t_8)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3972; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_8); - PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_v); - PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_v_v); - __Pyx_GIVEREF(__pyx_v_v); - __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_1, __pyx_t_8, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3972; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_v_x); - __pyx_v_x = __pyx_t_2; - __pyx_t_2 = 0; - - /* "mtrand.pyx":3975 - * # The rows of x now have the correct covariance but mean 0. Add - * # mean to each row. Then each row will have mean mean. 
- * np.add(mean,x,x) # <<<<<<<<<<<<<< - * x.shape = tuple(final_shape) - * return x - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3975; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_8 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__add); if (unlikely(!__pyx_t_8)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3975; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3975; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_mean); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_mean); - __Pyx_GIVEREF(__pyx_v_mean); - __Pyx_INCREF(__pyx_v_x); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - __Pyx_INCREF(__pyx_v_x); - PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - __pyx_t_1 = PyObject_Call(__pyx_t_8, __pyx_t_2, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3975; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":3976 - * # mean to each row. Then each row will have mean mean. 
- * np.add(mean,x,x) - * x.shape = tuple(final_shape) # <<<<<<<<<<<<<< - * return x - * - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3976; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_final_shape); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_final_shape); - __Pyx_GIVEREF(__pyx_v_final_shape); - __pyx_t_2 = PyObject_Call(((PyObject *)((PyObject*)&PyTuple_Type)), __pyx_t_1, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3976; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_v_x, __pyx_n_s__shape, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3976; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":3977 - * np.add(mean,x,x) - * x.shape = tuple(final_shape) - * return x # <<<<<<<<<<<<<< - * - * def multinomial(self, long n, object pvals, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_x); - __pyx_r = __pyx_v_x; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("mtrand.RandomState.multivariate_normal"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF(__pyx_v_shape); - __Pyx_DECREF(__pyx_v_final_shape); - __Pyx_DECREF(__pyx_v_x); - __Pyx_DECREF(__pyx_v_svd); - __Pyx_DECREF(__pyx_v_u); - __Pyx_DECREF(__pyx_v_s); - __Pyx_DECREF(__pyx_v_v); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_mean); - __Pyx_DECREF(__pyx_v_cov); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":3979 - * return x - * - * def multinomial(self, long n, object pvals, 
size=None): # <<<<<<<<<<<<<< - * """ - * multinomial(n, pvals, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_multinomial(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_multinomial[] = "\n multinomial(n, pvals, size=None)\n\n Draw samples from a multinomial distribution.\n\n The multinomial distribution is a multivariate generalisation of the\n binomial distribution. Take an experiment with one of ``p``\n possible outcomes. An example of such an experiment is throwing a dice,\n where the outcome can be 1 through 6. Each sample drawn from the\n distribution represents `n` such experiments. Its values,\n ``X_i = [X_0, X_1, ..., X_p]``, represent the number of times the outcome\n was ``i``.\n\n Parameters\n ----------\n n : int\n Number of experiments.\n pvals : sequence of floats, length p\n Probabilities of each of the ``p`` different outcomes. These\n should sum to 1 (however, the last element is always assumed to\n account for the remaining probability, as long as\n ``sum(pvals[:-1]) <= 1)``.\n size : tuple of ints\n Given a `size` of ``(M, N, K)``, then ``M*N*K`` samples are drawn,\n and the output shape becomes ``(M, N, K, p)``, since each sample\n has shape ``(p,)``.\n\n Examples\n --------\n Throw a dice 20 times:\n\n >>> np.random.multinomial(20, [1/6.]*6, size=1)\n array([[4, 1, 7, 5, 2, 1]])\n\n It landed 4 times on 1, once on 2, etc.\n\n Now, throw the dice 20 times, and 20 times again:\n\n >>> np.random.multinomial(20, [1/6.]*6, size=2)\n array([[3, 4, 3, 3, 4, 3],\n [2, 4, 3, 4, 0, 7]])\n\n For the first run, we threw 3 times 1, 4 times 2, etc. 
For the second,\n we threw 2 times 1, 4 times 2, etc.\n\n A loaded dice is more likely to land on number 6:\n\n >>> np.random.multinomial(100, [1/7.]*5)\n array([13, 16, 13, 16, 42])\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_multinomial(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - long __pyx_v_n; - PyObject *__pyx_v_pvals = 0; - PyObject *__pyx_v_size = 0; - long __pyx_v_d; - PyArrayObject *arrayObject_parr; - PyArrayObject *arrayObject_mnarr; - double *__pyx_v_pix; - long *__pyx_v_mnix; - long __pyx_v_i; - long __pyx_v_j; - long __pyx_v_dn; - double __pyx_v_Sum; - PyObject *__pyx_v_shape; - PyObject *__pyx_v_multin; - PyObject *__pyx_r = NULL; - Py_ssize_t __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - long __pyx_t_6; - double __pyx_t_7; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__pvals,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("multinomial"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[3] = {0,0,0}; - values[2] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__pvals); - if (likely(values[1])) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("multinomial", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3979; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - case 2: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { 
values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "multinomial") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3979; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_n = __Pyx_PyInt_AsLong(values[0]); if (unlikely((__pyx_v_n == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3979; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_v_pvals = values[1]; - __pyx_v_size = values[2]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: - __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 2); - case 2: - __pyx_v_pvals = PyTuple_GET_ITEM(__pyx_args, 1); - __pyx_v_n = __Pyx_PyInt_AsLong(PyTuple_GET_ITEM(__pyx_args, 0)); if (unlikely((__pyx_v_n == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3979; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("multinomial", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3979; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.multinomial"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_pvals); - __Pyx_INCREF(__pyx_v_size); - arrayObject_parr = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - arrayObject_mnarr = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_shape = Py_None; __Pyx_INCREF(Py_None); - __pyx_v_multin = Py_None; __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":4038 - * cdef double Sum - * - * d = len(pvals) # <<<<<<<<<<<<<< - * parr = PyArray_ContiguousFromObject(pvals, NPY_DOUBLE, 1, 1) - * pix = parr.data - */ - __pyx_t_1 = 
PyObject_Length(__pyx_v_pvals); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4038; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_v_d = __pyx_t_1; - - /* "mtrand.pyx":4039 - * - * d = len(pvals) - * parr = PyArray_ContiguousFromObject(pvals, NPY_DOUBLE, 1, 1) # <<<<<<<<<<<<<< - * pix = parr.data - * - */ - __pyx_t_2 = PyArray_ContiguousFromObject(__pyx_v_pvals, NPY_DOUBLE, 1, 1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4039; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)arrayObject_parr)); - arrayObject_parr = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4040 - * d = len(pvals) - * parr = PyArray_ContiguousFromObject(pvals, NPY_DOUBLE, 1, 1) - * pix = parr.data # <<<<<<<<<<<<<< - * - * if kahan_sum(pix, d-1) > (1.0 + 1e-12): - */ - __pyx_v_pix = ((double *)arrayObject_parr->data); - - /* "mtrand.pyx":4042 - * pix = parr.data - * - * if kahan_sum(pix, d-1) > (1.0 + 1e-12): # <<<<<<<<<<<<<< - * raise ValueError("sum(pvals[:-1]) > 1.0") - * - */ - __pyx_t_3 = (__pyx_f_6mtrand_kahan_sum(__pyx_v_pix, (__pyx_v_d - 1)) > (1.0 + 9.9999999999999998e-13)); - if (__pyx_t_3) { - - /* "mtrand.pyx":4043 - * - * if kahan_sum(pix, d-1) > (1.0 + 1e-12): - * raise ValueError("sum(pvals[:-1]) > 1.0") # <<<<<<<<<<<<<< - * - * if size is None: - */ - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4043; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)__pyx_kp_s_58)); - PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_58)); - __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_58)); - __pyx_t_4 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4043; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4043; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - goto __pyx_L6; - } - __pyx_L6:; - - /* "mtrand.pyx":4045 - * raise ValueError("sum(pvals[:-1]) > 1.0") - * - * if size is None: # <<<<<<<<<<<<<< - * shape = (d,) - * elif type(size) is int: - */ - __pyx_t_3 = (__pyx_v_size == Py_None); - if (__pyx_t_3) { - - /* "mtrand.pyx":4046 - * - * if size is None: - * shape = (d,) # <<<<<<<<<<<<<< - * elif type(size) is int: - * shape = (size, d) - */ - __pyx_t_4 = PyInt_FromLong(__pyx_v_d); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4046; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4046; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_v_shape); - __pyx_v_shape = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L7; - } - - /* "mtrand.pyx":4047 - * if size is None: - * shape = (d,) - * elif type(size) is int: # <<<<<<<<<<<<<< - * shape = (size, d) - * else: - */ - __pyx_t_3 = (((PyObject *)Py_TYPE(__pyx_v_size)) == ((PyObject *)((PyObject*)&PyInt_Type))); - if (__pyx_t_3) { - - /* "mtrand.pyx":4048 - * shape = (d,) - * elif type(size) is int: - * shape = (size, d) # <<<<<<<<<<<<<< - * else: - * shape = size + (d,) - */ - __pyx_t_2 = PyInt_FromLong(__pyx_v_d); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4048; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4048; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_v_shape); - __pyx_v_shape = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L7; - } - /*else*/ { - - /* "mtrand.pyx":4050 - * shape = (size, d) - * else: - * shape = size + (d,) # <<<<<<<<<<<<<< - * - * multin = np.zeros(shape, int) - */ - __pyx_t_4 = PyInt_FromLong(__pyx_v_d); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4050; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4050; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Add(__pyx_v_size, __pyx_t_2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4050; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_v_shape); - __pyx_v_shape = __pyx_t_4; - __pyx_t_4 = 0; - } - __pyx_L7:; - - /* "mtrand.pyx":4052 - * shape = size + (d,) - * - * multin = np.zeros(shape, int) # <<<<<<<<<<<<<< - * mnarr = multin - * mnix = mnarr.data - */ - __pyx_t_4 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4052; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_4, __pyx_n_s__zeros); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4052; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4052; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - __Pyx_INCREF(((PyObject *)((PyObject*)&PyInt_Type))); - PyTuple_SET_ITEM(__pyx_t_4, 1, ((PyObject *)((PyObject*)&PyInt_Type))); - __Pyx_GIVEREF(((PyObject *)((PyObject*)&PyInt_Type))); - __pyx_t_5 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4052; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_v_multin); - __pyx_v_multin = __pyx_t_5; - __pyx_t_5 = 0; - - /* "mtrand.pyx":4053 - * - * multin = np.zeros(shape, int) - * mnarr = multin # <<<<<<<<<<<<<< - * mnix = mnarr.data - * i = 0 - */ - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_v_multin))); - __Pyx_DECREF(((PyObject *)arrayObject_mnarr)); - arrayObject_mnarr = ((PyArrayObject *)__pyx_v_multin); - - /* "mtrand.pyx":4054 - * multin = np.zeros(shape, int) - * mnarr = multin - * mnix = mnarr.data # <<<<<<<<<<<<<< - * i = 0 - * while i < PyArray_SIZE(mnarr): - */ - __pyx_v_mnix = ((long *)arrayObject_mnarr->data); - - /* "mtrand.pyx":4055 - * mnarr = multin - * mnix = mnarr.data - * i = 0 # <<<<<<<<<<<<<< - * while i < PyArray_SIZE(mnarr): - * Sum = 1.0 - */ - __pyx_v_i = 0; - - /* "mtrand.pyx":4056 - * mnix = mnarr.data - * i = 0 - * while i < PyArray_SIZE(mnarr): # <<<<<<<<<<<<<< - * Sum = 1.0 - * dn = n - */ - while (1) { - __pyx_t_3 = (__pyx_v_i < PyArray_SIZE(arrayObject_mnarr)); - if (!__pyx_t_3) break; - - /* "mtrand.pyx":4057 - * i = 0 - * while i < PyArray_SIZE(mnarr): - * Sum = 1.0 # <<<<<<<<<<<<<< - * dn = n - * for j from 0 <= j < d-1: - */ - __pyx_v_Sum = 1.0; - - /* "mtrand.pyx":4058 - * while i < PyArray_SIZE(mnarr): - * Sum = 1.0 - * dn = n # <<<<<<<<<<<<<< - * for j from 0 <= j < d-1: - * mnix[i+j] = 
rk_binomial(self.internal_state, dn, pix[j]/Sum) - */ - __pyx_v_dn = __pyx_v_n; - - /* "mtrand.pyx":4059 - * Sum = 1.0 - * dn = n - * for j from 0 <= j < d-1: # <<<<<<<<<<<<<< - * mnix[i+j] = rk_binomial(self.internal_state, dn, pix[j]/Sum) - * dn = dn - mnix[i+j] - */ - __pyx_t_6 = (__pyx_v_d - 1); - for (__pyx_v_j = 0; __pyx_v_j < __pyx_t_6; __pyx_v_j++) { - - /* "mtrand.pyx":4060 - * dn = n - * for j from 0 <= j < d-1: - * mnix[i+j] = rk_binomial(self.internal_state, dn, pix[j]/Sum) # <<<<<<<<<<<<<< - * dn = dn - mnix[i+j] - * if dn <= 0: - */ - __pyx_t_7 = (__pyx_v_pix[__pyx_v_j]); - if (unlikely(__pyx_v_Sum == 0)) { - PyErr_Format(PyExc_ZeroDivisionError, "float division"); - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4060; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - } - (__pyx_v_mnix[(__pyx_v_i + __pyx_v_j)]) = rk_binomial(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, __pyx_v_dn, (__pyx_t_7 / __pyx_v_Sum)); - - /* "mtrand.pyx":4061 - * for j from 0 <= j < d-1: - * mnix[i+j] = rk_binomial(self.internal_state, dn, pix[j]/Sum) - * dn = dn - mnix[i+j] # <<<<<<<<<<<<<< - * if dn <= 0: - * break - */ - __pyx_v_dn = (__pyx_v_dn - (__pyx_v_mnix[(__pyx_v_i + __pyx_v_j)])); - - /* "mtrand.pyx":4062 - * mnix[i+j] = rk_binomial(self.internal_state, dn, pix[j]/Sum) - * dn = dn - mnix[i+j] - * if dn <= 0: # <<<<<<<<<<<<<< - * break - * Sum = Sum - pix[j] - */ - __pyx_t_3 = (__pyx_v_dn <= 0); - if (__pyx_t_3) { - - /* "mtrand.pyx":4063 - * dn = dn - mnix[i+j] - * if dn <= 0: - * break # <<<<<<<<<<<<<< - * Sum = Sum - pix[j] - * if dn > 0: - */ - goto __pyx_L11_break; - goto __pyx_L12; - } - __pyx_L12:; - - /* "mtrand.pyx":4064 - * if dn <= 0: - * break - * Sum = Sum - pix[j] # <<<<<<<<<<<<<< - * if dn > 0: - * mnix[i+d-1] = dn - */ - __pyx_v_Sum = (__pyx_v_Sum - (__pyx_v_pix[__pyx_v_j])); - } - __pyx_L11_break:; - - /* "mtrand.pyx":4065 - * break - * Sum = Sum - pix[j] - * if dn > 0: # <<<<<<<<<<<<<< - * mnix[i+d-1] = dn - * - */ - 
__pyx_t_3 = (__pyx_v_dn > 0); - if (__pyx_t_3) { - - /* "mtrand.pyx":4066 - * Sum = Sum - pix[j] - * if dn > 0: - * mnix[i+d-1] = dn # <<<<<<<<<<<<<< - * - * i = i + d - */ - (__pyx_v_mnix[((__pyx_v_i + __pyx_v_d) - 1)]) = __pyx_v_dn; - goto __pyx_L13; - } - __pyx_L13:; - - /* "mtrand.pyx":4068 - * mnix[i+d-1] = dn - * - * i = i + d # <<<<<<<<<<<<<< - * - * return multin - */ - __pyx_v_i = (__pyx_v_i + __pyx_v_d); - } - - /* "mtrand.pyx":4070 - * i = i + d - * - * return multin # <<<<<<<<<<<<<< - * - * def dirichlet(self, object alpha, size=None): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_multin); - __pyx_r = __pyx_v_multin; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.multinomial"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)arrayObject_parr); - __Pyx_DECREF((PyObject *)arrayObject_mnarr); - __Pyx_DECREF(__pyx_v_shape); - __Pyx_DECREF(__pyx_v_multin); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_pvals); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":4072 - * return multin - * - * def dirichlet(self, object alpha, size=None): # <<<<<<<<<<<<<< - * """ - * dirichlet(alpha, size=None) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_dirichlet(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_dirichlet[] = "\n dirichlet(alpha, size=None)\n\n Draw samples from the Dirichlet distribution.\n\n Draw `size` samples of dimension k from a Dirichlet distribution. A\n Dirichlet-distributed random variable can be seen as a multivariate\n generalization of a Beta distribution. 
Dirichlet pdf is the conjugate\n prior of a multinomial in Bayesian inference.\n\n Parameters\n ----------\n alpha : array\n Parameter of the distribution (k dimension for sample of\n dimension k).\n size : array\n Number of samples to draw.\n\n Notes\n -----\n .. math:: X \\approx \\prod_{i=1}^{k}{x^{\\alpha_i-1}_i}\n\n Uses the following property for computation: for each dimension,\n draw a random sample y_i from a standard gamma generator of shape\n `alpha_i`, then\n :math:`X = \\frac{1}{\\sum_{i=1}^k{y_i}} (y_1, \\ldots, y_n)` is\n Dirichlet distributed.\n\n References\n ----------\n .. [1] David McKay, \"Information Theory, Inference and Learning\n Algorithms,\" chapter 23,\n http://www.inference.phy.cam.ac.uk/mackay/\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_dirichlet(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_alpha = 0; - PyObject *__pyx_v_size = 0; - long __pyx_v_k; - long __pyx_v_totsize; - PyArrayObject *__pyx_v_alpha_arr; - PyArrayObject *__pyx_v_val_arr; - double *__pyx_v_alpha_data; - double *__pyx_v_val_data; - long __pyx_v_i; - long __pyx_v_j; - double __pyx_v_acc; - double __pyx_v_invacc; - PyObject *__pyx_v_shape; - PyObject *__pyx_v_diric; - PyObject *__pyx_r = NULL; - Py_ssize_t __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - long __pyx_t_6; - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__alpha,&__pyx_n_s__size,0}; - __Pyx_RefNannySetupContext("dirichlet"); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); - PyObject* values[2] = {0,0}; - values[1] = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 0: - values[0] = PyDict_GetItem(__pyx_kwds, 
__pyx_n_s__alpha); - if (likely(values[0])) kw_args--; - else goto __pyx_L5_argtuple_error; - case 1: - if (kw_args > 1) { - PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__size); - if (unlikely(value)) { values[1] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "dirichlet") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4072; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - } - __pyx_v_alpha = values[0]; - __pyx_v_size = values[1]; - } else { - __pyx_v_size = ((PyObject *)Py_None); - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: __pyx_v_size = PyTuple_GET_ITEM(__pyx_args, 1); - case 1: __pyx_v_alpha = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("dirichlet", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4072; __pyx_clineno = __LINE__; goto __pyx_L3_error;} - __pyx_L3_error:; - __Pyx_AddTraceback("mtrand.RandomState.dirichlet"); - return NULL; - __pyx_L4_argument_unpacking_done:; - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_alpha); - __Pyx_INCREF(__pyx_v_size); - __pyx_v_alpha_arr = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_val_arr = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); - __pyx_v_shape = Py_None; __Pyx_INCREF(Py_None); - __pyx_v_diric = Py_None; __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":4136 - * cdef double acc, invacc - * - * k = len(alpha) # <<<<<<<<<<<<<< - * alpha_arr = PyArray_ContiguousFromObject(alpha, NPY_DOUBLE, 1, 1) - * alpha_data = alpha_arr.data - */ - __pyx_t_1 = PyObject_Length(__pyx_v_alpha); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4136; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_v_k = __pyx_t_1; - - /* "mtrand.pyx":4137 - * - 
* k = len(alpha) - * alpha_arr = PyArray_ContiguousFromObject(alpha, NPY_DOUBLE, 1, 1) # <<<<<<<<<<<<<< - * alpha_data = alpha_arr.data - * - */ - __pyx_t_2 = PyArray_ContiguousFromObject(__pyx_v_alpha, NPY_DOUBLE, 1, 1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4137; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_t_2))); - __Pyx_DECREF(((PyObject *)__pyx_v_alpha_arr)); - __pyx_v_alpha_arr = ((PyArrayObject *)__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4138 - * k = len(alpha) - * alpha_arr = PyArray_ContiguousFromObject(alpha, NPY_DOUBLE, 1, 1) - * alpha_data = alpha_arr.data # <<<<<<<<<<<<<< - * - * if size is None: - */ - __pyx_v_alpha_data = ((double *)__pyx_v_alpha_arr->data); - - /* "mtrand.pyx":4140 - * alpha_data = alpha_arr.data - * - * if size is None: # <<<<<<<<<<<<<< - * shape = (k,) - * elif type(size) is int: - */ - __pyx_t_3 = (__pyx_v_size == Py_None); - if (__pyx_t_3) { - - /* "mtrand.pyx":4141 - * - * if size is None: - * shape = (k,) # <<<<<<<<<<<<<< - * elif type(size) is int: - * shape = (size, k) - */ - __pyx_t_2 = PyInt_FromLong(__pyx_v_k); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4141; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4141; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_v_shape); - __pyx_v_shape = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L6; - } - - /* "mtrand.pyx":4142 - * if size is None: - * shape = (k,) - * elif type(size) is int: # <<<<<<<<<<<<<< - * shape = (size, k) - * else: - */ - __pyx_t_3 = (((PyObject *)Py_TYPE(__pyx_v_size)) == ((PyObject *)((PyObject*)&PyInt_Type))); - if (__pyx_t_3) { 
- - /* "mtrand.pyx":4143 - * shape = (k,) - * elif type(size) is int: - * shape = (size, k) # <<<<<<<<<<<<<< - * else: - * shape = size + (k,) - */ - __pyx_t_4 = PyInt_FromLong(__pyx_v_k); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4143; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4143; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_size); - __Pyx_GIVEREF(__pyx_v_size); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_v_shape); - __pyx_v_shape = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L6; - } - /*else*/ { - - /* "mtrand.pyx":4145 - * shape = (size, k) - * else: - * shape = size + (k,) # <<<<<<<<<<<<<< - * - * diric = np.zeros(shape, np.float64) - */ - __pyx_t_2 = PyInt_FromLong(__pyx_v_k); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4145; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4145; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Add(__pyx_v_size, __pyx_t_4); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4145; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_v_shape); - __pyx_v_shape = __pyx_t_2; - __pyx_t_2 = 0; - } - __pyx_L6:; - - /* "mtrand.pyx":4147 - * shape = size + (k,) - * - * diric = np.zeros(shape, np.float64) # <<<<<<<<<<<<<< - * val_arr = diric - * val_data= val_arr.data - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, 
__pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4147; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__zeros); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4147; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4147; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__float64); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4147; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4147; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_4, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4147; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_v_diric); - __pyx_v_diric = __pyx_t_5; - __pyx_t_5 = 0; - - /* "mtrand.pyx":4148 - * - * diric = np.zeros(shape, np.float64) - * val_arr = diric # <<<<<<<<<<<<<< - * val_data= val_arr.data - * - */ - __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_v_diric))); - __Pyx_DECREF(((PyObject *)__pyx_v_val_arr)); - __pyx_v_val_arr = ((PyArrayObject *)__pyx_v_diric); - - /* "mtrand.pyx":4149 - * diric = np.zeros(shape, 
np.float64) - * val_arr = diric - * val_data= val_arr.data # <<<<<<<<<<<<<< - * - * i = 0 - */ - __pyx_v_val_data = ((double *)__pyx_v_val_arr->data); - - /* "mtrand.pyx":4151 - * val_data= val_arr.data - * - * i = 0 # <<<<<<<<<<<<<< - * totsize = PyArray_SIZE(val_arr) - * while i < totsize: - */ - __pyx_v_i = 0; - - /* "mtrand.pyx":4152 - * - * i = 0 - * totsize = PyArray_SIZE(val_arr) # <<<<<<<<<<<<<< - * while i < totsize: - * acc = 0.0 - */ - __pyx_v_totsize = PyArray_SIZE(__pyx_v_val_arr); - - /* "mtrand.pyx":4153 - * i = 0 - * totsize = PyArray_SIZE(val_arr) - * while i < totsize: # <<<<<<<<<<<<<< - * acc = 0.0 - * for j from 0 <= j < k: - */ - while (1) { - __pyx_t_3 = (__pyx_v_i < __pyx_v_totsize); - if (!__pyx_t_3) break; - - /* "mtrand.pyx":4154 - * totsize = PyArray_SIZE(val_arr) - * while i < totsize: - * acc = 0.0 # <<<<<<<<<<<<<< - * for j from 0 <= j < k: - * val_data[i+j] = rk_standard_gamma(self.internal_state, alpha_data[j]) - */ - __pyx_v_acc = 0.0; - - /* "mtrand.pyx":4155 - * while i < totsize: - * acc = 0.0 - * for j from 0 <= j < k: # <<<<<<<<<<<<<< - * val_data[i+j] = rk_standard_gamma(self.internal_state, alpha_data[j]) - * acc = acc + val_data[i+j] - */ - __pyx_t_6 = __pyx_v_k; - for (__pyx_v_j = 0; __pyx_v_j < __pyx_t_6; __pyx_v_j++) { - - /* "mtrand.pyx":4156 - * acc = 0.0 - * for j from 0 <= j < k: - * val_data[i+j] = rk_standard_gamma(self.internal_state, alpha_data[j]) # <<<<<<<<<<<<<< - * acc = acc + val_data[i+j] - * invacc = 1/acc - */ - (__pyx_v_val_data[(__pyx_v_i + __pyx_v_j)]) = rk_standard_gamma(((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state, (__pyx_v_alpha_data[__pyx_v_j])); - - /* "mtrand.pyx":4157 - * for j from 0 <= j < k: - * val_data[i+j] = rk_standard_gamma(self.internal_state, alpha_data[j]) - * acc = acc + val_data[i+j] # <<<<<<<<<<<<<< - * invacc = 1/acc - * for j from 0 <= j < k: - */ - __pyx_v_acc = (__pyx_v_acc + (__pyx_v_val_data[(__pyx_v_i + __pyx_v_j)])); - } - - /* "mtrand.pyx":4158 - * 
val_data[i+j] = rk_standard_gamma(self.internal_state, alpha_data[j]) - * acc = acc + val_data[i+j] - * invacc = 1/acc # <<<<<<<<<<<<<< - * for j from 0 <= j < k: - * val_data[i+j] = val_data[i+j] * invacc - */ - if (unlikely(__pyx_v_acc == 0)) { - PyErr_Format(PyExc_ZeroDivisionError, "float division"); - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4158; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - } - __pyx_v_invacc = (1 / __pyx_v_acc); - - /* "mtrand.pyx":4159 - * acc = acc + val_data[i+j] - * invacc = 1/acc - * for j from 0 <= j < k: # <<<<<<<<<<<<<< - * val_data[i+j] = val_data[i+j] * invacc - * i = i + k - */ - __pyx_t_6 = __pyx_v_k; - for (__pyx_v_j = 0; __pyx_v_j < __pyx_t_6; __pyx_v_j++) { - - /* "mtrand.pyx":4160 - * invacc = 1/acc - * for j from 0 <= j < k: - * val_data[i+j] = val_data[i+j] * invacc # <<<<<<<<<<<<<< - * i = i + k - * - */ - (__pyx_v_val_data[(__pyx_v_i + __pyx_v_j)]) = ((__pyx_v_val_data[(__pyx_v_i + __pyx_v_j)]) * __pyx_v_invacc); - } - - /* "mtrand.pyx":4161 - * for j from 0 <= j < k: - * val_data[i+j] = val_data[i+j] * invacc - * i = i + k # <<<<<<<<<<<<<< - * - * return diric - */ - __pyx_v_i = (__pyx_v_i + __pyx_v_k); - } - - /* "mtrand.pyx":4163 - * i = i + k - * - * return diric # <<<<<<<<<<<<<< - * - * # Shuffling and permutations: - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_diric); - __pyx_r = __pyx_v_diric; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("mtrand.RandomState.dirichlet"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_alpha_arr); - __Pyx_DECREF((PyObject *)__pyx_v_val_arr); - __Pyx_DECREF(__pyx_v_shape); - __Pyx_DECREF(__pyx_v_diric); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_alpha); - __Pyx_DECREF(__pyx_v_size); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* 
"mtrand.pyx":4166 - * - * # Shuffling and permutations: - * def shuffle(self, object x): # <<<<<<<<<<<<<< - * """ - * shuffle(x) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_shuffle(PyObject *__pyx_v_self, PyObject *__pyx_v_x); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_shuffle[] = "\n shuffle(x)\n\n Modify a sequence in-place by shuffling its contents.\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_shuffle(PyObject *__pyx_v_self, PyObject *__pyx_v_x) { - long __pyx_v_i; - long __pyx_v_j; - int __pyx_v_copy; - PyObject *__pyx_r = NULL; - Py_ssize_t __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - int __pyx_t_6; - __Pyx_RefNannySetupContext("shuffle"); - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_x); - - /* "mtrand.pyx":4176 - * cdef int copy - * - * i = len(x) - 1 # <<<<<<<<<<<<<< - * try: - * j = len(x[0]) - */ - __pyx_t_1 = PyObject_Length(__pyx_v_x); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4176; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_v_i = (__pyx_t_1 - 1); - - /* "mtrand.pyx":4177 - * - * i = len(x) - 1 - * try: # <<<<<<<<<<<<<< - * j = len(x[0]) - * except: - */ - { - PyObject *__pyx_save_exc_type, *__pyx_save_exc_value, *__pyx_save_exc_tb; - __Pyx_ExceptionSave(&__pyx_save_exc_type, &__pyx_save_exc_value, &__pyx_save_exc_tb); - __Pyx_XGOTREF(__pyx_save_exc_type); - __Pyx_XGOTREF(__pyx_save_exc_value); - __Pyx_XGOTREF(__pyx_save_exc_tb); - /*try:*/ { - - /* "mtrand.pyx":4178 - * i = len(x) - 1 - * try: - * j = len(x[0]) # <<<<<<<<<<<<<< - * except: - * j = 0 - */ - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_x, 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4178; __pyx_clineno = __LINE__; goto __pyx_L5_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_Length(__pyx_t_2); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 4178; __pyx_clineno = __LINE__; goto __pyx_L5_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_j = __pyx_t_1; - } - __Pyx_XDECREF(__pyx_save_exc_type); __pyx_save_exc_type = 0; - __Pyx_XDECREF(__pyx_save_exc_value); __pyx_save_exc_value = 0; - __Pyx_XDECREF(__pyx_save_exc_tb); __pyx_save_exc_tb = 0; - goto __pyx_L12_try_end; - __pyx_L5_error:; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4179 - * try: - * j = len(x[0]) - * except: # <<<<<<<<<<<<<< - * j = 0 - * - */ - /*except:*/ { - __Pyx_AddTraceback("mtrand.RandomState.shuffle"); - if (__Pyx_GetException(&__pyx_t_2, &__pyx_t_3, &__pyx_t_4) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4179; __pyx_clineno = __LINE__; goto __pyx_L7_except_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GOTREF(__pyx_t_4); - - /* "mtrand.pyx":4180 - * j = len(x[0]) - * except: - * j = 0 # <<<<<<<<<<<<<< - * - * if (j == 0): - */ - __pyx_v_j = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L6_exception_handled; - } - __pyx_L7_except_error:; - __Pyx_XGIVEREF(__pyx_save_exc_type); - __Pyx_XGIVEREF(__pyx_save_exc_value); - __Pyx_XGIVEREF(__pyx_save_exc_tb); - __Pyx_ExceptionReset(__pyx_save_exc_type, __pyx_save_exc_value, __pyx_save_exc_tb); - goto __pyx_L1_error; - __pyx_L6_exception_handled:; - __Pyx_XGIVEREF(__pyx_save_exc_type); - __Pyx_XGIVEREF(__pyx_save_exc_value); - __Pyx_XGIVEREF(__pyx_save_exc_tb); - __Pyx_ExceptionReset(__pyx_save_exc_type, __pyx_save_exc_value, __pyx_save_exc_tb); - __pyx_L12_try_end:; - } - - /* "mtrand.pyx":4182 - * j = 0 - * - * if (j == 0): # <<<<<<<<<<<<<< - * # adaptation of random.shuffle() - * while i > 0: - */ - __pyx_t_5 = (__pyx_v_j == 0); - if (__pyx_t_5) { - - /* "mtrand.pyx":4184 - * if (j == 0): - * # adaptation of random.shuffle() - * while i > 0: # <<<<<<<<<<<<<< - * j = rk_interval(i, self.internal_state) - * x[i], x[j] = x[j], 
x[i] - */ - while (1) { - __pyx_t_5 = (__pyx_v_i > 0); - if (!__pyx_t_5) break; - - /* "mtrand.pyx":4185 - * # adaptation of random.shuffle() - * while i > 0: - * j = rk_interval(i, self.internal_state) # <<<<<<<<<<<<<< - * x[i], x[j] = x[j], x[i] - * i = i - 1 - */ - __pyx_v_j = rk_interval(__pyx_v_i, ((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state); - - /* "mtrand.pyx":4186 - * while i > 0: - * j = rk_interval(i, self.internal_state) - * x[i], x[j] = x[j], x[i] # <<<<<<<<<<<<<< - * i = i - 1 - * else: - */ - __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_x, __pyx_v_j, sizeof(long), PyInt_FromLong); if (!__pyx_t_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4186; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_x, __pyx_v_i, sizeof(long), PyInt_FromLong); if (!__pyx_t_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4186; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_SetItemInt(__pyx_v_x, __pyx_v_i, __pyx_t_4, sizeof(long), PyInt_FromLong) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4186; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__Pyx_SetItemInt(__pyx_v_x, __pyx_v_j, __pyx_t_3, sizeof(long), PyInt_FromLong) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4186; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "mtrand.pyx":4187 - * j = rk_interval(i, self.internal_state) - * x[i], x[j] = x[j], x[i] - * i = i - 1 # <<<<<<<<<<<<<< - * else: - * # make copies - */ - __pyx_v_i = (__pyx_v_i - 1); - } - goto __pyx_L15; - } - /*else*/ { - - /* "mtrand.pyx":4190 - * else: - * # make copies - * copy = hasattr(x[0], 'copy') # <<<<<<<<<<<<<< - * if copy: - * while(i > 0): - */ - __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_x, 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4190; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyObject_HasAttr(__pyx_t_3, ((PyObject *)__pyx_n_s__copy)); if (unlikely(__pyx_t_5 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4190; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_copy = __pyx_t_5; - - /* "mtrand.pyx":4191 - * # make copies - * copy = hasattr(x[0], 'copy') - * if copy: # <<<<<<<<<<<<<< - * while(i > 0): - * j = rk_interval(i, self.internal_state) - */ - __pyx_t_6 = __pyx_v_copy; - if (__pyx_t_6) { - - /* "mtrand.pyx":4192 - * copy = hasattr(x[0], 'copy') - * if copy: - * while(i > 0): # <<<<<<<<<<<<<< - * j = rk_interval(i, self.internal_state) - * x[i], x[j] = x[j].copy(), x[i].copy() - */ - while (1) { - __pyx_t_5 = (__pyx_v_i > 0); - if (!__pyx_t_5) break; - - /* "mtrand.pyx":4193 - * if copy: - * while(i > 0): - * j = rk_interval(i, self.internal_state) # <<<<<<<<<<<<<< - * x[i], x[j] = x[j].copy(), x[i].copy() - * i = i - 1 - */ - __pyx_v_j = rk_interval(__pyx_v_i, ((struct __pyx_obj_6mtrand_RandomState *)__pyx_v_self)->internal_state); - - /* "mtrand.pyx":4194 - * while(i > 0): - * j = rk_interval(i, self.internal_state) - * x[i], x[j] = x[j].copy(), x[i].copy() # <<<<<<<<<<<<<< - * i = i - 1 - * else: - */ - __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_x, __pyx_v_j, sizeof(long), PyInt_FromLong); if (!__pyx_t_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4194; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__copy); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4194; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_4, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4194; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - 
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_x, __pyx_v_i, sizeof(long), PyInt_FromLong); if (!__pyx_t_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4194; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_4, __pyx_n_s__copy); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4194; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyObject_Call(__pyx_t_2, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4194; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__Pyx_SetItemInt(__pyx_v_x, __pyx_v_i, __pyx_t_3, sizeof(long), PyInt_FromLong) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4194; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_SetItemInt(__pyx_v_x, __pyx_v_j, __pyx_t_4, sizeof(long), PyInt_FromLong) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4194; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "mtrand.pyx":4195 - * j = rk_interval(i, self.internal_state) - * x[i], x[j] = x[j].copy(), x[i].copy() - * i = i - 1 # <<<<<<<<<<<<<< - * else: - * while(i > 0): - */ - __pyx_v_i = (__pyx_v_i - 1); - } - goto __pyx_L18; - } - /*else*/ { - - /* "mtrand.pyx":4197 - * i = i - 1 - * else: - * while(i > 0): # <<<<<<<<<<<<<< - * j = rk_interval(i, self.internal_state) - * x[i], x[j] = x[j][:], x[i][:] - */ - while (1) { - __pyx_t_5 = (__pyx_v_i > 0); - if (!__pyx_t_5) break; - - /* "mtrand.pyx":4198 - * else: - * while(i > 0): - * j = rk_interval(i, self.internal_state) # <<<<<<<<<<<<<< - * x[i], x[j] = x[j][:], x[i][:] - * i = i - 1 - */ - __pyx_v_j = rk_interval(__pyx_v_i, ((struct __pyx_obj_6mtrand_RandomState 
*)__pyx_v_self)->internal_state); - - /* "mtrand.pyx":4199 - * while(i > 0): - * j = rk_interval(i, self.internal_state) - * x[i], x[j] = x[j][:], x[i][:] # <<<<<<<<<<<<<< - * i = i - 1 - * - */ - __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_x, __pyx_v_j, sizeof(long), PyInt_FromLong); if (!__pyx_t_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = PySequence_GetSlice(__pyx_t_4, 0, PY_SSIZE_T_MAX); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_x, __pyx_v_i, sizeof(long), PyInt_FromLong); if (!__pyx_t_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = PySequence_GetSlice(__pyx_t_4, 0, PY_SSIZE_T_MAX); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__Pyx_SetItemInt(__pyx_v_x, __pyx_v_i, __pyx_t_3, sizeof(long), PyInt_FromLong) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_SetItemInt(__pyx_v_x, __pyx_v_j, __pyx_t_2, sizeof(long), PyInt_FromLong) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4200 - * j = rk_interval(i, self.internal_state) - * x[i], x[j] = x[j][:], x[i][:] - * i = i - 1 # <<<<<<<<<<<<<< - * - * def permutation(self, object x): - */ - __pyx_v_i = (__pyx_v_i - 1); - } - } - __pyx_L18:; - } - __pyx_L15:; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - 
__Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("mtrand.RandomState.shuffle"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_x); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "mtrand.pyx":4202 - * i = i - 1 - * - * def permutation(self, object x): # <<<<<<<<<<<<<< - * """ - * permutation(x) - */ - -static PyObject *__pyx_pf_6mtrand_11RandomState_permutation(PyObject *__pyx_v_self, PyObject *__pyx_v_x); /*proto*/ -static char __pyx_doc_6mtrand_11RandomState_permutation[] = "\n permutation(x)\n\n Randomly permute a sequence, or return a permuted range.\n\n Parameters\n ----------\n x : int or array_like\n If `x` is an integer, randomly permute ``np.arange(x)``.\n If `x` is an array, make a copy and shuffle the elements\n randomly.\n\n Returns\n -------\n out : ndarray\n Permuted sequence or array range.\n\n Examples\n --------\n >>> np.random.permutation(10)\n array([1, 7, 4, 3, 0, 9, 2, 5, 8, 6])\n\n >>> np.random.permutation([1, 4, 9, 12, 15])\n array([15, 1, 9, 4, 12])\n\n "; -static PyObject *__pyx_pf_6mtrand_11RandomState_permutation(PyObject *__pyx_v_self, PyObject *__pyx_v_x) { - PyObject *__pyx_v_arr; - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - __Pyx_RefNannySetupContext("permutation"); - __Pyx_INCREF((PyObject *)__pyx_v_self); - __Pyx_INCREF(__pyx_v_x); - __pyx_v_arr = Py_None; __Pyx_INCREF(Py_None); - - /* "mtrand.pyx":4229 - * - * """ - * if isinstance(x, (int, long, np.integer)): # <<<<<<<<<<<<<< - * arr = np.arange(x) - * else: - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4229; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__integer); if (unlikely(!__pyx_t_2)) {__pyx_filename = 
__pyx_f[0]; __pyx_lineno = 4229; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4229; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)((PyObject*)&PyInt_Type))); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)((PyObject*)&PyInt_Type))); - __Pyx_GIVEREF(((PyObject *)((PyObject*)&PyInt_Type))); - __Pyx_INCREF(((PyObject *)((PyObject*)&PyLong_Type))); - PyTuple_SET_ITEM(__pyx_t_1, 1, ((PyObject *)((PyObject*)&PyLong_Type))); - __Pyx_GIVEREF(((PyObject *)((PyObject*)&PyLong_Type))); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_3 = PyObject_IsInstance(__pyx_v_x, __pyx_t_1); if (unlikely(__pyx_t_3 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4229; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_3) { - - /* "mtrand.pyx":4230 - * """ - * if isinstance(x, (int, long, np.integer)): - * arr = np.arange(x) # <<<<<<<<<<<<<< - * else: - * arr = np.array(x) - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4230; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__arange); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4230; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4230; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_x); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - __pyx_t_4 = PyObject_Call(__pyx_t_2, __pyx_t_1, 
NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4230; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_v_arr); - __pyx_v_arr = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L5; - } - /*else*/ { - - /* "mtrand.pyx":4232 - * arr = np.arange(x) - * else: - * arr = np.array(x) # <<<<<<<<<<<<<< - * self.shuffle(arr) - * return arr - */ - __pyx_t_4 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4232; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_4, __pyx_n_s__array); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4232; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4232; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_x); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - __pyx_t_2 = PyObject_Call(__pyx_t_1, __pyx_t_4, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4232; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_v_arr); - __pyx_v_arr = __pyx_t_2; - __pyx_t_2 = 0; - } - __pyx_L5:; - - /* "mtrand.pyx":4233 - * else: - * arr = np.array(x) - * self.shuffle(arr) # <<<<<<<<<<<<<< - * return arr - * - */ - __pyx_t_2 = PyObject_GetAttr(__pyx_v_self, __pyx_n_s__shuffle); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4233; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyTuple_New(1); if 
(unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4233; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_arr); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_arr); - __Pyx_GIVEREF(__pyx_v_arr); - __pyx_t_1 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4233; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4234 - * arr = np.array(x) - * self.shuffle(arr) - * return arr # <<<<<<<<<<<<<< - * - * _rand = RandomState() - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_arr); - __pyx_r = __pyx_v_arr; - goto __pyx_L0; - - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("mtrand.RandomState.permutation"); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_DECREF(__pyx_v_arr); - __Pyx_DECREF((PyObject *)__pyx_v_self); - __Pyx_DECREF(__pyx_v_x); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_tp_new_6mtrand_RandomState(PyTypeObject *t, PyObject *a, PyObject *k) { - PyObject *o = (*t->tp_alloc)(t, 0); - if (!o) return 0; - return o; -} - -static void __pyx_tp_dealloc_6mtrand_RandomState(PyObject *o) { - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - ++Py_REFCNT(o); - __pyx_pf_6mtrand_11RandomState___dealloc__(o); - if (PyErr_Occurred()) PyErr_WriteUnraisable(o); - --Py_REFCNT(o); - PyErr_Restore(etype, eval, etb); - } - (*Py_TYPE(o)->tp_free)(o); -} - -static struct PyMethodDef __pyx_methods_6mtrand_RandomState[] = { - {__Pyx_NAMESTR("seed"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_seed, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_seed)}, - 
{__Pyx_NAMESTR("get_state"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_get_state, METH_NOARGS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_get_state)}, - {__Pyx_NAMESTR("set_state"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_set_state, METH_O, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_set_state)}, - {__Pyx_NAMESTR("__getstate__"), (PyCFunction)__pyx_pf_6mtrand_11RandomState___getstate__, METH_NOARGS, __Pyx_DOCSTR(0)}, - {__Pyx_NAMESTR("__setstate__"), (PyCFunction)__pyx_pf_6mtrand_11RandomState___setstate__, METH_O, __Pyx_DOCSTR(0)}, - {__Pyx_NAMESTR("__reduce__"), (PyCFunction)__pyx_pf_6mtrand_11RandomState___reduce__, METH_NOARGS, __Pyx_DOCSTR(0)}, - {__Pyx_NAMESTR("random_sample"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_random_sample, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_random_sample)}, - {__Pyx_NAMESTR("tomaxint"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_tomaxint, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_tomaxint)}, - {__Pyx_NAMESTR("randint"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_randint, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_randint)}, - {__Pyx_NAMESTR("bytes"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_bytes, METH_O, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_bytes)}, - {__Pyx_NAMESTR("uniform"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_uniform, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_uniform)}, - {__Pyx_NAMESTR("rand"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_rand, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_rand)}, - {__Pyx_NAMESTR("randn"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_randn, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_randn)}, - {__Pyx_NAMESTR("random_integers"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_random_integers, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_random_integers)}, 
- {__Pyx_NAMESTR("standard_normal"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_standard_normal, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_standard_normal)}, - {__Pyx_NAMESTR("normal"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_normal, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_normal)}, - {__Pyx_NAMESTR("beta"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_beta, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_beta)}, - {__Pyx_NAMESTR("exponential"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_exponential, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_exponential)}, - {__Pyx_NAMESTR("standard_exponential"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_standard_exponential, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_standard_exponential)}, - {__Pyx_NAMESTR("standard_gamma"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_standard_gamma, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_standard_gamma)}, - {__Pyx_NAMESTR("gamma"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_gamma, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_gamma)}, - {__Pyx_NAMESTR("f"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_f, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_f)}, - {__Pyx_NAMESTR("noncentral_f"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_noncentral_f, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_noncentral_f)}, - {__Pyx_NAMESTR("chisquare"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_chisquare, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_chisquare)}, - {__Pyx_NAMESTR("noncentral_chisquare"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_noncentral_chisquare, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_noncentral_chisquare)}, - {__Pyx_NAMESTR("standard_cauchy"), 
(PyCFunction)__pyx_pf_6mtrand_11RandomState_standard_cauchy, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_standard_cauchy)}, - {__Pyx_NAMESTR("standard_t"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_standard_t, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_standard_t)}, - {__Pyx_NAMESTR("vonmises"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_vonmises, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_vonmises)}, - {__Pyx_NAMESTR("pareto"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_pareto, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_pareto)}, - {__Pyx_NAMESTR("weibull"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_weibull, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_weibull)}, - {__Pyx_NAMESTR("power"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_power, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_power)}, - {__Pyx_NAMESTR("laplace"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_laplace, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_laplace)}, - {__Pyx_NAMESTR("gumbel"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_gumbel, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_gumbel)}, - {__Pyx_NAMESTR("logistic"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_logistic, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_logistic)}, - {__Pyx_NAMESTR("lognormal"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_lognormal, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_lognormal)}, - {__Pyx_NAMESTR("rayleigh"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_rayleigh, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_rayleigh)}, - {__Pyx_NAMESTR("wald"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_wald, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_wald)}, - 
{__Pyx_NAMESTR("triangular"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_triangular, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_triangular)}, - {__Pyx_NAMESTR("binomial"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_binomial, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_binomial)}, - {__Pyx_NAMESTR("negative_binomial"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_negative_binomial, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_negative_binomial)}, - {__Pyx_NAMESTR("poisson"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_poisson, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_poisson)}, - {__Pyx_NAMESTR("zipf"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_zipf, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_zipf)}, - {__Pyx_NAMESTR("geometric"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_geometric, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_geometric)}, - {__Pyx_NAMESTR("hypergeometric"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_hypergeometric, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_hypergeometric)}, - {__Pyx_NAMESTR("logseries"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_logseries, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_logseries)}, - {__Pyx_NAMESTR("multivariate_normal"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_multivariate_normal, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_multivariate_normal)}, - {__Pyx_NAMESTR("multinomial"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_multinomial, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_multinomial)}, - {__Pyx_NAMESTR("dirichlet"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_dirichlet, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_dirichlet)}, - {__Pyx_NAMESTR("shuffle"), 
(PyCFunction)__pyx_pf_6mtrand_11RandomState_shuffle, METH_O, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_shuffle)}, - {__Pyx_NAMESTR("permutation"), (PyCFunction)__pyx_pf_6mtrand_11RandomState_permutation, METH_O, __Pyx_DOCSTR(__pyx_doc_6mtrand_11RandomState_permutation)}, - {0, 0, 0, 0} -}; - -static PyNumberMethods __pyx_tp_as_number_RandomState = { - 0, /*nb_add*/ - 0, /*nb_subtract*/ - 0, /*nb_multiply*/ - #if PY_MAJOR_VERSION < 3 - 0, /*nb_divide*/ - #endif - 0, /*nb_remainder*/ - 0, /*nb_divmod*/ - 0, /*nb_power*/ - 0, /*nb_negative*/ - 0, /*nb_positive*/ - 0, /*nb_absolute*/ - 0, /*nb_nonzero*/ - 0, /*nb_invert*/ - 0, /*nb_lshift*/ - 0, /*nb_rshift*/ - 0, /*nb_and*/ - 0, /*nb_xor*/ - 0, /*nb_or*/ - #if PY_MAJOR_VERSION < 3 - 0, /*nb_coerce*/ - #endif - 0, /*nb_int*/ - #if PY_MAJOR_VERSION >= 3 - 0, /*reserved*/ - #else - 0, /*nb_long*/ - #endif - 0, /*nb_float*/ - #if PY_MAJOR_VERSION < 3 - 0, /*nb_oct*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*nb_hex*/ - #endif - 0, /*nb_inplace_add*/ - 0, /*nb_inplace_subtract*/ - 0, /*nb_inplace_multiply*/ - #if PY_MAJOR_VERSION < 3 - 0, /*nb_inplace_divide*/ - #endif - 0, /*nb_inplace_remainder*/ - 0, /*nb_inplace_power*/ - 0, /*nb_inplace_lshift*/ - 0, /*nb_inplace_rshift*/ - 0, /*nb_inplace_and*/ - 0, /*nb_inplace_xor*/ - 0, /*nb_inplace_or*/ - 0, /*nb_floor_divide*/ - 0, /*nb_true_divide*/ - 0, /*nb_inplace_floor_divide*/ - 0, /*nb_inplace_true_divide*/ - #if (PY_MAJOR_VERSION >= 3) || (Py_TPFLAGS_DEFAULT & Py_TPFLAGS_HAVE_INDEX) - 0, /*nb_index*/ - #endif -}; - -static PySequenceMethods __pyx_tp_as_sequence_RandomState = { - 0, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - 0, /*sq_item*/ - 0, /*sq_slice*/ - 0, /*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_RandomState = { - 0, /*mp_length*/ - 0, /*mp_subscript*/ - 0, /*mp_ass_subscript*/ -}; - -static PyBufferProcs 
__pyx_tp_as_buffer_RandomState = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - #if PY_VERSION_HEX >= 0x02060000 - 0, /*bf_getbuffer*/ - #endif - #if PY_VERSION_HEX >= 0x02060000 - 0, /*bf_releasebuffer*/ - #endif -}; - -PyTypeObject __pyx_type_6mtrand_RandomState = { - PyVarObject_HEAD_INIT(0, 0) - __Pyx_NAMESTR("mtrand.RandomState"), /*tp_name*/ - sizeof(struct __pyx_obj_6mtrand_RandomState), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_6mtrand_RandomState, /*tp_dealloc*/ - 0, /*tp_print*/ - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - 0, /*tp_compare*/ - 0, /*tp_repr*/ - &__pyx_tp_as_number_RandomState, /*tp_as_number*/ - &__pyx_tp_as_sequence_RandomState, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_RandomState, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_RandomState, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_NEWBUFFER, /*tp_flags*/ - __Pyx_DOCSTR("\n RandomState(seed=None)\n\n Container for the Mersenne Twister pseudo-random number generator.\n\n `RandomState` exposes a number of methods for generating random numbers\n drawn from a variety of probability distributions. In addition to the\n distribution-specific arguments, each method takes a keyword argument\n `size` that defaults to ``None``. If `size` is ``None``, then a single\n value is generated and returned. If `size` is an integer, then a 1-D\n array filled with generated values is returned. 
If `size` is a tuple,\n then an array with that shape is filled and returned.\n\n Parameters\n ----------\n seed : int or array_like, optional\n Random seed initializing the pseudo-random number generator.\n Can be an integer, an array (or other sequence) of integers of\n any length, or ``None`` (the default).\n If `seed` is ``None``, then `RandomState` will try to read data from\n ``/dev/urandom`` (or the Windows analogue) if available or seed from\n the clock otherwise.\n\n Notes\n -----\n The Python stdlib module \"random\" also contains a Mersenne Twister\n pseudo-random number generator with a number of methods that are similar\n to the ones available in `RandomState`. `RandomState`, besides being\n NumPy-aware, has the advantage that it provides a much larger number\n of probability distributions to choose from.\n\n "), /*tp_doc*/ - 0, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_6mtrand_RandomState, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - __pyx_pf_6mtrand_11RandomState___init__, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_6mtrand_RandomState, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - #if PY_VERSION_HEX >= 0x02060000 - 0, /*tp_version_tag*/ - #endif -}; - -static struct PyMethodDef __pyx_methods[] = { - {0, 0, 0, 0} -}; - -static void __pyx_init_filenames(void); /*proto*/ - -#if PY_MAJOR_VERSION >= 3 -static struct PyModuleDef __pyx_moduledef = { - PyModuleDef_HEAD_INIT, - __Pyx_NAMESTR("mtrand"), - 0, /* m_doc */ - -1, /* m_size */ - __pyx_methods /* m_methods */, - NULL, /* m_reload */ - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ -}; -#endif - -static __Pyx_StringTabEntry __pyx_string_tab[] = { - {&__pyx_kp_s_1, 
__pyx_k_1, sizeof(__pyx_k_1), 0, 0, 1, 0}, - {&__pyx_kp_s_10, __pyx_k_10, sizeof(__pyx_k_10), 0, 0, 1, 0}, - {&__pyx_kp_u_100, __pyx_k_100, sizeof(__pyx_k_100), 0, 1, 0, 0}, - {&__pyx_kp_u_101, __pyx_k_101, sizeof(__pyx_k_101), 0, 1, 0, 0}, - {&__pyx_kp_u_102, __pyx_k_102, sizeof(__pyx_k_102), 0, 1, 0, 0}, - {&__pyx_kp_u_103, __pyx_k_103, sizeof(__pyx_k_103), 0, 1, 0, 0}, - {&__pyx_kp_u_104, __pyx_k_104, sizeof(__pyx_k_104), 0, 1, 0, 0}, - {&__pyx_kp_u_105, __pyx_k_105, sizeof(__pyx_k_105), 0, 1, 0, 0}, - {&__pyx_kp_u_106, __pyx_k_106, sizeof(__pyx_k_106), 0, 1, 0, 0}, - {&__pyx_kp_u_107, __pyx_k_107, sizeof(__pyx_k_107), 0, 1, 0, 0}, - {&__pyx_kp_s_11, __pyx_k_11, sizeof(__pyx_k_11), 0, 0, 1, 0}, - {&__pyx_kp_s_13, __pyx_k_13, sizeof(__pyx_k_13), 0, 0, 1, 0}, - {&__pyx_kp_s_15, __pyx_k_15, sizeof(__pyx_k_15), 0, 0, 1, 0}, - {&__pyx_kp_s_16, __pyx_k_16, sizeof(__pyx_k_16), 0, 0, 1, 0}, - {&__pyx_kp_s_17, __pyx_k_17, sizeof(__pyx_k_17), 0, 0, 1, 0}, - {&__pyx_kp_s_18, __pyx_k_18, sizeof(__pyx_k_18), 0, 0, 1, 0}, - {&__pyx_kp_s_19, __pyx_k_19, sizeof(__pyx_k_19), 0, 0, 1, 0}, - {&__pyx_kp_s_2, __pyx_k_2, sizeof(__pyx_k_2), 0, 0, 1, 0}, - {&__pyx_kp_s_20, __pyx_k_20, sizeof(__pyx_k_20), 0, 0, 1, 0}, - {&__pyx_kp_s_21, __pyx_k_21, sizeof(__pyx_k_21), 0, 0, 1, 0}, - {&__pyx_kp_s_22, __pyx_k_22, sizeof(__pyx_k_22), 0, 0, 1, 0}, - {&__pyx_kp_s_3, __pyx_k_3, sizeof(__pyx_k_3), 0, 0, 1, 0}, - {&__pyx_kp_s_31, __pyx_k_31, sizeof(__pyx_k_31), 0, 0, 1, 0}, - {&__pyx_kp_s_32, __pyx_k_32, sizeof(__pyx_k_32), 0, 0, 1, 0}, - {&__pyx_kp_s_34, __pyx_k_34, sizeof(__pyx_k_34), 0, 0, 1, 0}, - {&__pyx_kp_s_35, __pyx_k_35, sizeof(__pyx_k_35), 0, 0, 1, 0}, - {&__pyx_kp_s_36, __pyx_k_36, sizeof(__pyx_k_36), 0, 0, 1, 0}, - {&__pyx_kp_s_37, __pyx_k_37, sizeof(__pyx_k_37), 0, 0, 1, 0}, - {&__pyx_kp_s_38, __pyx_k_38, sizeof(__pyx_k_38), 0, 0, 1, 0}, - {&__pyx_kp_s_39, __pyx_k_39, sizeof(__pyx_k_39), 0, 0, 1, 0}, - {&__pyx_kp_s_4, __pyx_k_4, sizeof(__pyx_k_4), 0, 0, 1, 0}, - {&__pyx_kp_s_40, 
__pyx_k_40, sizeof(__pyx_k_40), 0, 0, 1, 0}, - {&__pyx_kp_s_41, __pyx_k_41, sizeof(__pyx_k_41), 0, 0, 1, 0}, - {&__pyx_kp_s_42, __pyx_k_42, sizeof(__pyx_k_42), 0, 0, 1, 0}, - {&__pyx_kp_s_44, __pyx_k_44, sizeof(__pyx_k_44), 0, 0, 1, 0}, - {&__pyx_kp_s_45, __pyx_k_45, sizeof(__pyx_k_45), 0, 0, 1, 0}, - {&__pyx_kp_s_46, __pyx_k_46, sizeof(__pyx_k_46), 0, 0, 1, 0}, - {&__pyx_kp_s_47, __pyx_k_47, sizeof(__pyx_k_47), 0, 0, 1, 0}, - {&__pyx_kp_s_48, __pyx_k_48, sizeof(__pyx_k_48), 0, 0, 1, 0}, - {&__pyx_kp_s_49, __pyx_k_49, sizeof(__pyx_k_49), 0, 0, 1, 0}, - {&__pyx_kp_s_50, __pyx_k_50, sizeof(__pyx_k_50), 0, 0, 1, 0}, - {&__pyx_kp_s_51, __pyx_k_51, sizeof(__pyx_k_51), 0, 0, 1, 0}, - {&__pyx_kp_s_52, __pyx_k_52, sizeof(__pyx_k_52), 0, 0, 1, 0}, - {&__pyx_kp_s_53, __pyx_k_53, sizeof(__pyx_k_53), 0, 0, 1, 0}, - {&__pyx_kp_s_54, __pyx_k_54, sizeof(__pyx_k_54), 0, 0, 1, 0}, - {&__pyx_kp_s_55, __pyx_k_55, sizeof(__pyx_k_55), 0, 0, 1, 0}, - {&__pyx_kp_s_56, __pyx_k_56, sizeof(__pyx_k_56), 0, 0, 1, 0}, - {&__pyx_n_s_57, __pyx_k_57, sizeof(__pyx_k_57), 0, 0, 1, 1}, - {&__pyx_kp_s_58, __pyx_k_58, sizeof(__pyx_k_58), 0, 0, 1, 0}, - {&__pyx_n_s_59, __pyx_k_59, sizeof(__pyx_k_59), 0, 0, 1, 1}, - {&__pyx_n_s_60, __pyx_k_60, sizeof(__pyx_k_60), 0, 0, 1, 1}, - {&__pyx_kp_u_61, __pyx_k_61, sizeof(__pyx_k_61), 0, 1, 0, 0}, - {&__pyx_kp_u_62, __pyx_k_62, sizeof(__pyx_k_62), 0, 1, 0, 0}, - {&__pyx_kp_u_63, __pyx_k_63, sizeof(__pyx_k_63), 0, 1, 0, 0}, - {&__pyx_kp_u_64, __pyx_k_64, sizeof(__pyx_k_64), 0, 1, 0, 0}, - {&__pyx_kp_u_65, __pyx_k_65, sizeof(__pyx_k_65), 0, 1, 0, 0}, - {&__pyx_kp_u_66, __pyx_k_66, sizeof(__pyx_k_66), 0, 1, 0, 0}, - {&__pyx_kp_u_67, __pyx_k_67, sizeof(__pyx_k_67), 0, 1, 0, 0}, - {&__pyx_kp_u_68, __pyx_k_68, sizeof(__pyx_k_68), 0, 1, 0, 0}, - {&__pyx_kp_u_69, __pyx_k_69, sizeof(__pyx_k_69), 0, 1, 0, 0}, - {&__pyx_kp_u_70, __pyx_k_70, sizeof(__pyx_k_70), 0, 1, 0, 0}, - {&__pyx_kp_u_71, __pyx_k_71, sizeof(__pyx_k_71), 0, 1, 0, 0}, - {&__pyx_kp_u_72, __pyx_k_72, 
sizeof(__pyx_k_72), 0, 1, 0, 0}, - {&__pyx_kp_u_73, __pyx_k_73, sizeof(__pyx_k_73), 0, 1, 0, 0}, - {&__pyx_kp_u_74, __pyx_k_74, sizeof(__pyx_k_74), 0, 1, 0, 0}, - {&__pyx_kp_u_75, __pyx_k_75, sizeof(__pyx_k_75), 0, 1, 0, 0}, - {&__pyx_kp_u_76, __pyx_k_76, sizeof(__pyx_k_76), 0, 1, 0, 0}, - {&__pyx_kp_u_77, __pyx_k_77, sizeof(__pyx_k_77), 0, 1, 0, 0}, - {&__pyx_kp_u_78, __pyx_k_78, sizeof(__pyx_k_78), 0, 1, 0, 0}, - {&__pyx_kp_u_79, __pyx_k_79, sizeof(__pyx_k_79), 0, 1, 0, 0}, - {&__pyx_kp_u_80, __pyx_k_80, sizeof(__pyx_k_80), 0, 1, 0, 0}, - {&__pyx_kp_u_81, __pyx_k_81, sizeof(__pyx_k_81), 0, 1, 0, 0}, - {&__pyx_kp_u_82, __pyx_k_82, sizeof(__pyx_k_82), 0, 1, 0, 0}, - {&__pyx_kp_u_83, __pyx_k_83, sizeof(__pyx_k_83), 0, 1, 0, 0}, - {&__pyx_kp_u_84, __pyx_k_84, sizeof(__pyx_k_84), 0, 1, 0, 0}, - {&__pyx_kp_u_85, __pyx_k_85, sizeof(__pyx_k_85), 0, 1, 0, 0}, - {&__pyx_kp_u_86, __pyx_k_86, sizeof(__pyx_k_86), 0, 1, 0, 0}, - {&__pyx_kp_u_87, __pyx_k_87, sizeof(__pyx_k_87), 0, 1, 0, 0}, - {&__pyx_kp_u_88, __pyx_k_88, sizeof(__pyx_k_88), 0, 1, 0, 0}, - {&__pyx_kp_u_89, __pyx_k_89, sizeof(__pyx_k_89), 0, 1, 0, 0}, - {&__pyx_kp_s_9, __pyx_k_9, sizeof(__pyx_k_9), 0, 0, 1, 0}, - {&__pyx_kp_u_90, __pyx_k_90, sizeof(__pyx_k_90), 0, 1, 0, 0}, - {&__pyx_kp_u_91, __pyx_k_91, sizeof(__pyx_k_91), 0, 1, 0, 0}, - {&__pyx_kp_u_92, __pyx_k_92, sizeof(__pyx_k_92), 0, 1, 0, 0}, - {&__pyx_kp_u_93, __pyx_k_93, sizeof(__pyx_k_93), 0, 1, 0, 0}, - {&__pyx_kp_u_94, __pyx_k_94, sizeof(__pyx_k_94), 0, 1, 0, 0}, - {&__pyx_kp_u_95, __pyx_k_95, sizeof(__pyx_k_95), 0, 1, 0, 0}, - {&__pyx_kp_u_96, __pyx_k_96, sizeof(__pyx_k_96), 0, 1, 0, 0}, - {&__pyx_kp_u_97, __pyx_k_97, sizeof(__pyx_k_97), 0, 1, 0, 0}, - {&__pyx_kp_u_98, __pyx_k_98, sizeof(__pyx_k_98), 0, 1, 0, 0}, - {&__pyx_kp_u_99, __pyx_k_99, sizeof(__pyx_k_99), 0, 1, 0, 0}, - {&__pyx_n_s__MT19937, __pyx_k__MT19937, sizeof(__pyx_k__MT19937), 0, 0, 1, 1}, - {&__pyx_n_s__RandomState, __pyx_k__RandomState, sizeof(__pyx_k__RandomState), 0, 0, 1, 1}, - 
{&__pyx_n_s__TypeError, __pyx_k__TypeError, sizeof(__pyx_k__TypeError), 0, 0, 1, 1}, - {&__pyx_n_s__ValueError, __pyx_k__ValueError, sizeof(__pyx_k__ValueError), 0, 0, 1, 1}, - {&__pyx_n_s____RandomState_ctor, __pyx_k____RandomState_ctor, sizeof(__pyx_k____RandomState_ctor), 0, 0, 1, 1}, - {&__pyx_n_s____main__, __pyx_k____main__, sizeof(__pyx_k____main__), 0, 0, 1, 1}, - {&__pyx_n_s____test__, __pyx_k____test__, sizeof(__pyx_k____test__), 0, 0, 1, 1}, - {&__pyx_n_s___rand, __pyx_k___rand, sizeof(__pyx_k___rand), 0, 0, 1, 1}, - {&__pyx_n_s__a, __pyx_k__a, sizeof(__pyx_k__a), 0, 0, 1, 1}, - {&__pyx_n_s__add, __pyx_k__add, sizeof(__pyx_k__add), 0, 0, 1, 1}, - {&__pyx_n_s__alpha, __pyx_k__alpha, sizeof(__pyx_k__alpha), 0, 0, 1, 1}, - {&__pyx_n_s__any, __pyx_k__any, sizeof(__pyx_k__any), 0, 0, 1, 1}, - {&__pyx_n_s__arange, __pyx_k__arange, sizeof(__pyx_k__arange), 0, 0, 1, 1}, - {&__pyx_n_s__array, __pyx_k__array, sizeof(__pyx_k__array), 0, 0, 1, 1}, - {&__pyx_n_s__asarray, __pyx_k__asarray, sizeof(__pyx_k__asarray), 0, 0, 1, 1}, - {&__pyx_n_s__b, __pyx_k__b, sizeof(__pyx_k__b), 0, 0, 1, 1}, - {&__pyx_n_s__beta, __pyx_k__beta, sizeof(__pyx_k__beta), 0, 0, 1, 1}, - {&__pyx_n_s__binomial, __pyx_k__binomial, sizeof(__pyx_k__binomial), 0, 0, 1, 1}, - {&__pyx_n_s__bytes, __pyx_k__bytes, sizeof(__pyx_k__bytes), 0, 0, 1, 1}, - {&__pyx_n_s__chisquare, __pyx_k__chisquare, sizeof(__pyx_k__chisquare), 0, 0, 1, 1}, - {&__pyx_n_s__copy, __pyx_k__copy, sizeof(__pyx_k__copy), 0, 0, 1, 1}, - {&__pyx_n_s__cov, __pyx_k__cov, sizeof(__pyx_k__cov), 0, 0, 1, 1}, - {&__pyx_n_s__data, __pyx_k__data, sizeof(__pyx_k__data), 0, 0, 1, 1}, - {&__pyx_n_s__dataptr, __pyx_k__dataptr, sizeof(__pyx_k__dataptr), 0, 0, 1, 1}, - {&__pyx_n_s__df, __pyx_k__df, sizeof(__pyx_k__df), 0, 0, 1, 1}, - {&__pyx_n_s__dfden, __pyx_k__dfden, sizeof(__pyx_k__dfden), 0, 0, 1, 1}, - {&__pyx_n_s__dfnum, __pyx_k__dfnum, sizeof(__pyx_k__dfnum), 0, 0, 1, 1}, - {&__pyx_n_s__dimensions, __pyx_k__dimensions, 
sizeof(__pyx_k__dimensions), 0, 0, 1, 1}, - {&__pyx_n_s__dirichlet, __pyx_k__dirichlet, sizeof(__pyx_k__dirichlet), 0, 0, 1, 1}, - {&__pyx_n_s__dot, __pyx_k__dot, sizeof(__pyx_k__dot), 0, 0, 1, 1}, - {&__pyx_n_s__empty, __pyx_k__empty, sizeof(__pyx_k__empty), 0, 0, 1, 1}, - {&__pyx_n_s__equal, __pyx_k__equal, sizeof(__pyx_k__equal), 0, 0, 1, 1}, - {&__pyx_n_s__exponential, __pyx_k__exponential, sizeof(__pyx_k__exponential), 0, 0, 1, 1}, - {&__pyx_n_s__f, __pyx_k__f, sizeof(__pyx_k__f), 0, 0, 1, 1}, - {&__pyx_n_s__float64, __pyx_k__float64, sizeof(__pyx_k__float64), 0, 0, 1, 1}, - {&__pyx_n_s__gamma, __pyx_k__gamma, sizeof(__pyx_k__gamma), 0, 0, 1, 1}, - {&__pyx_n_s__gauss, __pyx_k__gauss, sizeof(__pyx_k__gauss), 0, 0, 1, 1}, - {&__pyx_n_s__geometric, __pyx_k__geometric, sizeof(__pyx_k__geometric), 0, 0, 1, 1}, - {&__pyx_n_s__get_state, __pyx_k__get_state, sizeof(__pyx_k__get_state), 0, 0, 1, 1}, - {&__pyx_n_s__greater, __pyx_k__greater, sizeof(__pyx_k__greater), 0, 0, 1, 1}, - {&__pyx_n_s__greater_equal, __pyx_k__greater_equal, sizeof(__pyx_k__greater_equal), 0, 0, 1, 1}, - {&__pyx_n_s__gumbel, __pyx_k__gumbel, sizeof(__pyx_k__gumbel), 0, 0, 1, 1}, - {&__pyx_n_s__has_gauss, __pyx_k__has_gauss, sizeof(__pyx_k__has_gauss), 0, 0, 1, 1}, - {&__pyx_n_s__high, __pyx_k__high, sizeof(__pyx_k__high), 0, 0, 1, 1}, - {&__pyx_n_s__hypergeometric, __pyx_k__hypergeometric, sizeof(__pyx_k__hypergeometric), 0, 0, 1, 1}, - {&__pyx_n_s__integer, __pyx_k__integer, sizeof(__pyx_k__integer), 0, 0, 1, 1}, - {&__pyx_n_s__internal_state, __pyx_k__internal_state, sizeof(__pyx_k__internal_state), 0, 0, 1, 1}, - {&__pyx_n_s__kappa, __pyx_k__kappa, sizeof(__pyx_k__kappa), 0, 0, 1, 1}, - {&__pyx_n_s__key, __pyx_k__key, sizeof(__pyx_k__key), 0, 0, 1, 1}, - {&__pyx_n_s__lam, __pyx_k__lam, sizeof(__pyx_k__lam), 0, 0, 1, 1}, - {&__pyx_n_s__laplace, __pyx_k__laplace, sizeof(__pyx_k__laplace), 0, 0, 1, 1}, - {&__pyx_n_s__left, __pyx_k__left, sizeof(__pyx_k__left), 0, 0, 1, 1}, - {&__pyx_n_s__less, 
__pyx_k__less, sizeof(__pyx_k__less), 0, 0, 1, 1}, - {&__pyx_n_s__less_equal, __pyx_k__less_equal, sizeof(__pyx_k__less_equal), 0, 0, 1, 1}, - {&__pyx_n_s__loc, __pyx_k__loc, sizeof(__pyx_k__loc), 0, 0, 1, 1}, - {&__pyx_n_s__logistic, __pyx_k__logistic, sizeof(__pyx_k__logistic), 0, 0, 1, 1}, - {&__pyx_n_s__lognormal, __pyx_k__lognormal, sizeof(__pyx_k__lognormal), 0, 0, 1, 1}, - {&__pyx_n_s__logseries, __pyx_k__logseries, sizeof(__pyx_k__logseries), 0, 0, 1, 1}, - {&__pyx_n_s__low, __pyx_k__low, sizeof(__pyx_k__low), 0, 0, 1, 1}, - {&__pyx_n_s__mean, __pyx_k__mean, sizeof(__pyx_k__mean), 0, 0, 1, 1}, - {&__pyx_n_s__mode, __pyx_k__mode, sizeof(__pyx_k__mode), 0, 0, 1, 1}, - {&__pyx_n_s__mu, __pyx_k__mu, sizeof(__pyx_k__mu), 0, 0, 1, 1}, - {&__pyx_n_s__multinomial, __pyx_k__multinomial, sizeof(__pyx_k__multinomial), 0, 0, 1, 1}, - {&__pyx_n_s__multiply, __pyx_k__multiply, sizeof(__pyx_k__multiply), 0, 0, 1, 1}, - {&__pyx_n_s__multivariate_normal, __pyx_k__multivariate_normal, sizeof(__pyx_k__multivariate_normal), 0, 0, 1, 1}, - {&__pyx_n_s__n, __pyx_k__n, sizeof(__pyx_k__n), 0, 0, 1, 1}, - {&__pyx_n_s__nbad, __pyx_k__nbad, sizeof(__pyx_k__nbad), 0, 0, 1, 1}, - {&__pyx_n_s__nd, __pyx_k__nd, sizeof(__pyx_k__nd), 0, 0, 1, 1}, - {&__pyx_n_s__negative_binomial, __pyx_k__negative_binomial, sizeof(__pyx_k__negative_binomial), 0, 0, 1, 1}, - {&__pyx_n_s__ngood, __pyx_k__ngood, sizeof(__pyx_k__ngood), 0, 0, 1, 1}, - {&__pyx_n_s__nonc, __pyx_k__nonc, sizeof(__pyx_k__nonc), 0, 0, 1, 1}, - {&__pyx_n_s__noncentral_f, __pyx_k__noncentral_f, sizeof(__pyx_k__noncentral_f), 0, 0, 1, 1}, - {&__pyx_n_s__normal, __pyx_k__normal, sizeof(__pyx_k__normal), 0, 0, 1, 1}, - {&__pyx_n_s__np, __pyx_k__np, sizeof(__pyx_k__np), 0, 0, 1, 1}, - {&__pyx_n_s__nsample, __pyx_k__nsample, sizeof(__pyx_k__nsample), 0, 0, 1, 1}, - {&__pyx_n_s__numpy, __pyx_k__numpy, sizeof(__pyx_k__numpy), 0, 0, 1, 1}, - {&__pyx_n_s__p, __pyx_k__p, sizeof(__pyx_k__p), 0, 0, 1, 1}, - {&__pyx_n_s__pareto, __pyx_k__pareto, 
sizeof(__pyx_k__pareto), 0, 0, 1, 1}, - {&__pyx_n_s__permutation, __pyx_k__permutation, sizeof(__pyx_k__permutation), 0, 0, 1, 1}, - {&__pyx_n_s__poisson, __pyx_k__poisson, sizeof(__pyx_k__poisson), 0, 0, 1, 1}, - {&__pyx_n_s__pos, __pyx_k__pos, sizeof(__pyx_k__pos), 0, 0, 1, 1}, - {&__pyx_n_s__power, __pyx_k__power, sizeof(__pyx_k__power), 0, 0, 1, 1}, - {&__pyx_n_s__pvals, __pyx_k__pvals, sizeof(__pyx_k__pvals), 0, 0, 1, 1}, - {&__pyx_n_s__rand, __pyx_k__rand, sizeof(__pyx_k__rand), 0, 0, 1, 1}, - {&__pyx_n_s__randint, __pyx_k__randint, sizeof(__pyx_k__randint), 0, 0, 1, 1}, - {&__pyx_n_s__randn, __pyx_k__randn, sizeof(__pyx_k__randn), 0, 0, 1, 1}, - {&__pyx_n_s__random, __pyx_k__random, sizeof(__pyx_k__random), 0, 0, 1, 1}, - {&__pyx_n_s__random_integers, __pyx_k__random_integers, sizeof(__pyx_k__random_integers), 0, 0, 1, 1}, - {&__pyx_n_s__random_sample, __pyx_k__random_sample, sizeof(__pyx_k__random_sample), 0, 0, 1, 1}, - {&__pyx_n_s__rayleigh, __pyx_k__rayleigh, sizeof(__pyx_k__rayleigh), 0, 0, 1, 1}, - {&__pyx_n_s__reduce, __pyx_k__reduce, sizeof(__pyx_k__reduce), 0, 0, 1, 1}, - {&__pyx_n_s__right, __pyx_k__right, sizeof(__pyx_k__right), 0, 0, 1, 1}, - {&__pyx_n_s__scale, __pyx_k__scale, sizeof(__pyx_k__scale), 0, 0, 1, 1}, - {&__pyx_n_s__seed, __pyx_k__seed, sizeof(__pyx_k__seed), 0, 0, 1, 1}, - {&__pyx_n_s__set_state, __pyx_k__set_state, sizeof(__pyx_k__set_state), 0, 0, 1, 1}, - {&__pyx_n_s__shape, __pyx_k__shape, sizeof(__pyx_k__shape), 0, 0, 1, 1}, - {&__pyx_n_s__shuffle, __pyx_k__shuffle, sizeof(__pyx_k__shuffle), 0, 0, 1, 1}, - {&__pyx_n_s__sigma, __pyx_k__sigma, sizeof(__pyx_k__sigma), 0, 0, 1, 1}, - {&__pyx_n_s__size, __pyx_k__size, sizeof(__pyx_k__size), 0, 0, 1, 1}, - {&__pyx_n_s__sqrt, __pyx_k__sqrt, sizeof(__pyx_k__sqrt), 0, 0, 1, 1}, - {&__pyx_n_s__standard_cauchy, __pyx_k__standard_cauchy, sizeof(__pyx_k__standard_cauchy), 0, 0, 1, 1}, - {&__pyx_n_s__standard_gamma, __pyx_k__standard_gamma, sizeof(__pyx_k__standard_gamma), 0, 0, 1, 1}, - 
{&__pyx_n_s__standard_normal, __pyx_k__standard_normal, sizeof(__pyx_k__standard_normal), 0, 0, 1, 1},
- {&__pyx_n_s__standard_t, __pyx_k__standard_t, sizeof(__pyx_k__standard_t), 0, 0, 1, 1},
- {&__pyx_n_s__subtract, __pyx_k__subtract, sizeof(__pyx_k__subtract), 0, 0, 1, 1},
- {&__pyx_n_s__svd, __pyx_k__svd, sizeof(__pyx_k__svd), 0, 0, 1, 1},
- {&__pyx_n_s__tomaxint, __pyx_k__tomaxint, sizeof(__pyx_k__tomaxint), 0, 0, 1, 1},
- {&__pyx_n_s__triangular, __pyx_k__triangular, sizeof(__pyx_k__triangular), 0, 0, 1, 1},
- {&__pyx_n_s__uint, __pyx_k__uint, sizeof(__pyx_k__uint), 0, 0, 1, 1},
- {&__pyx_n_s__uint32, __pyx_k__uint32, sizeof(__pyx_k__uint32), 0, 0, 1, 1},
- {&__pyx_n_s__uniform, __pyx_k__uniform, sizeof(__pyx_k__uniform), 0, 0, 1, 1},
- {&__pyx_n_s__vonmises, __pyx_k__vonmises, sizeof(__pyx_k__vonmises), 0, 0, 1, 1},
- {&__pyx_n_s__wald, __pyx_k__wald, sizeof(__pyx_k__wald), 0, 0, 1, 1},
- {&__pyx_n_s__weibull, __pyx_k__weibull, sizeof(__pyx_k__weibull), 0, 0, 1, 1},
- {&__pyx_n_s__zeros, __pyx_k__zeros, sizeof(__pyx_k__zeros), 0, 0, 1, 1},
- {&__pyx_n_s__zipf, __pyx_k__zipf, sizeof(__pyx_k__zipf), 0, 0, 1, 1},
- {0, 0, 0, 0, 0, 0, 0}
-};
-static int __Pyx_InitCachedBuiltins(void) {
- __pyx_builtin_ValueError = __Pyx_GetName(__pyx_b, __pyx_n_s__ValueError); if (!__pyx_builtin_ValueError) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 184; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
- __pyx_builtin_TypeError = __Pyx_GetName(__pyx_b, __pyx_n_s__TypeError); if (!__pyx_builtin_TypeError) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 697; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
- return 0;
- __pyx_L1_error:;
- return -1;
-}
-
-static int __Pyx_InitGlobals(void) {
- if (__Pyx_InitStrings(__pyx_string_tab) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;};
- __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;};
- __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;};
- __pyx_int_624 = PyInt_FromLong(624); if (unlikely(!__pyx_int_624)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;};
- return 0;
- __pyx_L1_error:;
- return -1;
-}
-
-#if PY_MAJOR_VERSION < 3
-PyMODINIT_FUNC initmtrand(void); /*proto*/
-PyMODINIT_FUNC initmtrand(void)
-#else
-PyMODINIT_FUNC PyInit_mtrand(void); /*proto*/
-PyMODINIT_FUNC PyInit_mtrand(void)
-#endif
-{
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- #if CYTHON_REFNANNY
- void* __pyx_refnanny = NULL;
- __Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny");
- if (!__Pyx_RefNanny) {
- PyErr_Clear();
- __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny");
- if (!__Pyx_RefNanny)
- Py_FatalError("failed to import 'refnanny' module");
- }
- __pyx_refnanny = __Pyx_RefNanny->SetupContext("PyMODINIT_FUNC PyInit_mtrand(void)", __LINE__, __FILE__);
- #endif
- __pyx_init_filenames();
- __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
- #if PY_MAJOR_VERSION < 3
- __pyx_empty_bytes = PyString_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
- #else
- __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
- #endif
- /*--- Library function declarations ---*/
- /*--- Threads initialization code ---*/
- #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS
- #ifdef WITH_THREAD /* Python build with threading support?
*/ - PyEval_InitThreads(); - #endif - #endif - /*--- Module creation code ---*/ - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4(__Pyx_NAMESTR("mtrand"), __pyx_methods, 0, 0, PYTHON_API_VERSION); - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - #endif - if (!__pyx_m) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; - #if PY_MAJOR_VERSION < 3 - Py_INCREF(__pyx_m); - #endif - __pyx_b = PyImport_AddModule(__Pyx_NAMESTR(__Pyx_BUILTIN_MODULE_NAME)); - if (!__pyx_b) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; - if (__Pyx_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; - /*--- Initialize various global constants etc. ---*/ - if (unlikely(__Pyx_InitGlobals() < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - if (__pyx_module_is_main_mtrand) { - if (__Pyx_SetAttrString(__pyx_m, "__name__", __pyx_n_s____main__) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; - } - /*--- Builtin init code ---*/ - if (unlikely(__Pyx_InitCachedBuiltins() < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - /*--- Global init code ---*/ - /*--- Function export code ---*/ - /*--- Type init code ---*/ - __pyx_ptype_6mtrand_dtype = __Pyx_ImportType("numpy", "dtype", sizeof(PyArray_Descr), 0); if (unlikely(!__pyx_ptype_6mtrand_dtype)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 74; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_ptype_6mtrand_ndarray = __Pyx_ImportType("numpy", "ndarray", sizeof(PyArrayObject), 0); if (unlikely(!__pyx_ptype_6mtrand_ndarray)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 79; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_ptype_6mtrand_flatiter = __Pyx_ImportType("numpy", "flatiter", 
sizeof(PyArrayIterObject), 0); if (unlikely(!__pyx_ptype_6mtrand_flatiter)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 88; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_ptype_6mtrand_broadcast = __Pyx_ImportType("numpy", "broadcast", sizeof(PyArrayMultiIterObject), 0); if (unlikely(!__pyx_ptype_6mtrand_broadcast)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 94; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - if (PyType_Ready(&__pyx_type_6mtrand_RandomState) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 522; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - if (__Pyx_SetAttrString(__pyx_m, "RandomState", (PyObject *)&__pyx_type_6mtrand_RandomState) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 522; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_ptype_6mtrand_RandomState = &__pyx_type_6mtrand_RandomState; - /*--- Type import code ---*/ - /*--- Function import code ---*/ - /*--- Execution code ---*/ - - /* "mtrand.pyx":124 - * - * # Initialize numpy - * import_array() # <<<<<<<<<<<<<< - * - * import numpy as np - */ - import_array(); - - /* "mtrand.pyx":126 - * import_array() - * - * import numpy as np # <<<<<<<<<<<<<< - * - * cdef object cont0_array(rk_state *state, rk_cont0 func, object size): - */ - __pyx_t_1 = __Pyx_Import(((PyObject *)__pyx_n_s__numpy), 0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 126; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__np, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 126; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":893 - * return bytestring - * - * def uniform(self, low=0.0, high=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * uniform(low=0.0, high=1.0, size=1) - */ - __pyx_t_1 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 893; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_k_5 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = PyFloat_FromDouble(1.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 893; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_k_6 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "mtrand.pyx":1190 - * return cont0_array(self.internal_state, rk_gauss, size) - * - * def normal(self, loc=0.0, scale=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * normal(loc=0.0, scale=1.0, size=None) - */ - __pyx_t_1 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1190; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_k_7 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = PyFloat_FromDouble(1.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1190; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_k_8 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "mtrand.pyx":1349 - * return cont2_array(self.internal_state, rk_beta, size, oa, ob) - * - * def exponential(self, scale=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * exponential(scale=1.0, size=None) - */ - __pyx_t_1 = PyFloat_FromDouble(1.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1349; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_k_12 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "mtrand.pyx":1513 - * return cont1_array(self.internal_state, rk_standard_gamma, size, oshape) - * - * def gamma(self, shape, scale=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * gamma(shape, scale=1.0, size=None) - */ - __pyx_t_1 = PyFloat_FromDouble(1.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1513; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_GOTREF(__pyx_t_1); - __pyx_k_14 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "mtrand.pyx":2529 - * return cont1_array(self.internal_state, rk_power, size, oa) - * - * def laplace(self, loc=0.0, scale=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * laplace(loc=0.0, scale=1.0, size=None) - */ - __pyx_t_1 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2529; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_k_23 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = PyFloat_FromDouble(1.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2529; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_k_24 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "mtrand.pyx":2619 - * return cont2_array(self.internal_state, rk_laplace, size, oloc, oscale) - * - * def gumbel(self, loc=0.0, scale=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * gumbel(loc=0.0, scale=1.0, size=None) - */ - __pyx_t_1 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2619; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_k_25 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = PyFloat_FromDouble(1.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2619; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_k_26 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "mtrand.pyx":2743 - * return cont2_array(self.internal_state, rk_gumbel, size, oloc, oscale) - * - * def logistic(self, loc=0.0, scale=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * logistic(loc=0.0, scale=1.0, size=None) - */ - __pyx_t_1 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2743; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_GOTREF(__pyx_t_1); - __pyx_k_27 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = PyFloat_FromDouble(1.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2743; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_k_28 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "mtrand.pyx":2831 - * return cont2_array(self.internal_state, rk_logistic, size, oloc, oscale) - * - * def lognormal(self, mean=0.0, sigma=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * lognormal(mean=0.0, sigma=1.0, size=None) - */ - __pyx_t_1 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_k_29 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = PyFloat_FromDouble(1.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_k_30 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "mtrand.pyx":2962 - * return cont2_array(self.internal_state, rk_lognormal, size, omean, osigma) - * - * def rayleigh(self, scale=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * rayleigh(scale=1.0, size=None) - */ - __pyx_t_1 = PyFloat_FromDouble(1.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2962; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_k_33 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "mtrand.pyx":3411 - * on, op) - * - * def poisson(self, lam=1.0, size=None): # <<<<<<<<<<<<<< - * """ - * poisson(lam=1.0, size=None) - */ - __pyx_t_1 = PyFloat_FromDouble(1.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3411; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_k_43 = __pyx_t_1; - __Pyx_GIVEREF(__pyx_t_1); - 
__pyx_t_1 = 0; - - /* "mtrand.pyx":4236 - * return arr - * - * _rand = RandomState() # <<<<<<<<<<<<<< - * seed = _rand.seed - * get_state = _rand.get_state - */ - __pyx_t_1 = PyObject_Call(((PyObject *)((PyObject*)__pyx_ptype_6mtrand_RandomState)), ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4236; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - if (PyObject_SetAttr(__pyx_m, __pyx_n_s___rand, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4236; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4237 - * - * _rand = RandomState() - * seed = _rand.seed # <<<<<<<<<<<<<< - * get_state = _rand.get_state - * set_state = _rand.set_state - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4237; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__seed); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4237; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__seed, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4237; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4238 - * _rand = RandomState() - * seed = _rand.seed - * get_state = _rand.get_state # <<<<<<<<<<<<<< - * set_state = _rand.set_state - * random_sample = _rand.random_sample - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4238; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__get_state); if (unlikely(!__pyx_t_1)) {__pyx_filename = 
__pyx_f[0]; __pyx_lineno = 4238; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__get_state, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4238; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4239 - * seed = _rand.seed - * get_state = _rand.get_state - * set_state = _rand.set_state # <<<<<<<<<<<<<< - * random_sample = _rand.random_sample - * randint = _rand.randint - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4239; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__set_state); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4239; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__set_state, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4239; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4240 - * get_state = _rand.get_state - * set_state = _rand.set_state - * random_sample = _rand.random_sample # <<<<<<<<<<<<<< - * randint = _rand.randint - * bytes = _rand.bytes - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4240; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__random_sample); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4240; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__random_sample, __pyx_t_1) < 0) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 4240; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4241 - * set_state = _rand.set_state - * random_sample = _rand.random_sample - * randint = _rand.randint # <<<<<<<<<<<<<< - * bytes = _rand.bytes - * uniform = _rand.uniform - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4241; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__randint); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4241; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__randint, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4241; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4242 - * random_sample = _rand.random_sample - * randint = _rand.randint - * bytes = _rand.bytes # <<<<<<<<<<<<<< - * uniform = _rand.uniform - * rand = _rand.rand - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4242; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__bytes); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4242; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__bytes, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4242; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4243 - * randint = _rand.randint - * bytes = _rand.bytes - * uniform = _rand.uniform # <<<<<<<<<<<<<< - * 
rand = _rand.rand - * randn = _rand.randn - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4243; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__uniform); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4243; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__uniform, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4243; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4244 - * bytes = _rand.bytes - * uniform = _rand.uniform - * rand = _rand.rand # <<<<<<<<<<<<<< - * randn = _rand.randn - * random_integers = _rand.random_integers - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4244; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4244; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__rand, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4244; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4245 - * uniform = _rand.uniform - * rand = _rand.rand - * randn = _rand.randn # <<<<<<<<<<<<<< - * random_integers = _rand.random_integers - * standard_normal = _rand.standard_normal - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4245; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - 
__pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__randn); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4245; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__randn, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4245; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4246 - * rand = _rand.rand - * randn = _rand.randn - * random_integers = _rand.random_integers # <<<<<<<<<<<<<< - * standard_normal = _rand.standard_normal - * normal = _rand.normal - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4246; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__random_integers); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4246; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__random_integers, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4246; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4247 - * randn = _rand.randn - * random_integers = _rand.random_integers - * standard_normal = _rand.standard_normal # <<<<<<<<<<<<<< - * normal = _rand.normal - * beta = _rand.beta - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4247; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__standard_normal); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4247; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - 
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__standard_normal, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4247; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4248 - * random_integers = _rand.random_integers - * standard_normal = _rand.standard_normal - * normal = _rand.normal # <<<<<<<<<<<<<< - * beta = _rand.beta - * exponential = _rand.exponential - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4248; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__normal); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4248; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__normal, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4248; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4249 - * standard_normal = _rand.standard_normal - * normal = _rand.normal - * beta = _rand.beta # <<<<<<<<<<<<<< - * exponential = _rand.exponential - * standard_exponential = _rand.standard_exponential - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4249; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__beta); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4249; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__beta, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4249; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4250 - * normal = _rand.normal - * beta = _rand.beta - * exponential = _rand.exponential # <<<<<<<<<<<<<< - * standard_exponential = _rand.standard_exponential - * standard_gamma = _rand.standard_gamma - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4250; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__exponential); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4250; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__exponential, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4250; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4251 - * beta = _rand.beta - * exponential = _rand.exponential - * standard_exponential = _rand.standard_exponential # <<<<<<<<<<<<<< - * standard_gamma = _rand.standard_gamma - * gamma = _rand.gamma - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4251; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s_59); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4251; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_59, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4251; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4252 - * exponential = _rand.exponential - * standard_exponential = _rand.standard_exponential - * standard_gamma = 
_rand.standard_gamma # <<<<<<<<<<<<<< - * gamma = _rand.gamma - * f = _rand.f - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4252; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__standard_gamma); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4252; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__standard_gamma, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4252; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4253 - * standard_exponential = _rand.standard_exponential - * standard_gamma = _rand.standard_gamma - * gamma = _rand.gamma # <<<<<<<<<<<<<< - * f = _rand.f - * noncentral_f = _rand.noncentral_f - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4253; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__gamma); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4253; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__gamma, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4253; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4254 - * standard_gamma = _rand.standard_gamma - * gamma = _rand.gamma - * f = _rand.f # <<<<<<<<<<<<<< - * noncentral_f = _rand.noncentral_f - * chisquare = _rand.chisquare - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4254; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__f); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4254; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__f, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4254; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4255 - * gamma = _rand.gamma - * f = _rand.f - * noncentral_f = _rand.noncentral_f # <<<<<<<<<<<<<< - * chisquare = _rand.chisquare - * noncentral_chisquare = _rand.noncentral_chisquare - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4255; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__noncentral_f); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4255; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__noncentral_f, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4255; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4256 - * f = _rand.f - * noncentral_f = _rand.noncentral_f - * chisquare = _rand.chisquare # <<<<<<<<<<<<<< - * noncentral_chisquare = _rand.noncentral_chisquare - * standard_cauchy = _rand.standard_cauchy - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4256; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__chisquare); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 
4256; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__chisquare, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4256; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4257 - * noncentral_f = _rand.noncentral_f - * chisquare = _rand.chisquare - * noncentral_chisquare = _rand.noncentral_chisquare # <<<<<<<<<<<<<< - * standard_cauchy = _rand.standard_cauchy - * standard_t = _rand.standard_t - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4257; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s_60); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4257; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_60, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4257; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4258 - * chisquare = _rand.chisquare - * noncentral_chisquare = _rand.noncentral_chisquare - * standard_cauchy = _rand.standard_cauchy # <<<<<<<<<<<<<< - * standard_t = _rand.standard_t - * vonmises = _rand.vonmises - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4258; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__standard_cauchy); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4258; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, 
__pyx_n_s__standard_cauchy, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4258; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4259 - * noncentral_chisquare = _rand.noncentral_chisquare - * standard_cauchy = _rand.standard_cauchy - * standard_t = _rand.standard_t # <<<<<<<<<<<<<< - * vonmises = _rand.vonmises - * pareto = _rand.pareto - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4259; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__standard_t); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4259; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__standard_t, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4259; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4260 - * standard_cauchy = _rand.standard_cauchy - * standard_t = _rand.standard_t - * vonmises = _rand.vonmises # <<<<<<<<<<<<<< - * pareto = _rand.pareto - * weibull = _rand.weibull - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4260; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__vonmises); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4260; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__vonmises, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4260; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* 
"mtrand.pyx":4261 - * standard_t = _rand.standard_t - * vonmises = _rand.vonmises - * pareto = _rand.pareto # <<<<<<<<<<<<<< - * weibull = _rand.weibull - * power = _rand.power - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4261; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__pareto); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4261; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__pareto, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4261; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4262 - * vonmises = _rand.vonmises - * pareto = _rand.pareto - * weibull = _rand.weibull # <<<<<<<<<<<<<< - * power = _rand.power - * laplace = _rand.laplace - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4262; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__weibull); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4262; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__weibull, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4262; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4263 - * pareto = _rand.pareto - * weibull = _rand.weibull - * power = _rand.power # <<<<<<<<<<<<<< - * laplace = _rand.laplace - * gumbel = _rand.gumbel - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = 
__pyx_f[0]; __pyx_lineno = 4263; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__power); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4263; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__power, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4263; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4264 - * weibull = _rand.weibull - * power = _rand.power - * laplace = _rand.laplace # <<<<<<<<<<<<<< - * gumbel = _rand.gumbel - * logistic = _rand.logistic - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4264; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__laplace); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4264; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__laplace, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4264; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4265 - * power = _rand.power - * laplace = _rand.laplace - * gumbel = _rand.gumbel # <<<<<<<<<<<<<< - * logistic = _rand.logistic - * lognormal = _rand.lognormal - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4265; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__gumbel); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4265; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__gumbel, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4265; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4266 - * laplace = _rand.laplace - * gumbel = _rand.gumbel - * logistic = _rand.logistic # <<<<<<<<<<<<<< - * lognormal = _rand.lognormal - * rayleigh = _rand.rayleigh - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4266; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__logistic); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4266; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__logistic, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4266; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4267 - * gumbel = _rand.gumbel - * logistic = _rand.logistic - * lognormal = _rand.lognormal # <<<<<<<<<<<<<< - * rayleigh = _rand.rayleigh - * wald = _rand.wald - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4267; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__lognormal); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4267; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__lognormal, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4267; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4268 - * logistic = _rand.logistic - * lognormal = _rand.lognormal - * rayleigh = _rand.rayleigh # <<<<<<<<<<<<<< - * wald = _rand.wald - * triangular = _rand.triangular - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4268; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__rayleigh); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4268; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__rayleigh, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4268; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4269 - * lognormal = _rand.lognormal - * rayleigh = _rand.rayleigh - * wald = _rand.wald # <<<<<<<<<<<<<< - * triangular = _rand.triangular - * - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4269; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__wald); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4269; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__wald, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4269; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4270 - * rayleigh = _rand.rayleigh - * wald = _rand.wald - * triangular = _rand.triangular # <<<<<<<<<<<<<< - * - * binomial = _rand.binomial - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 4270; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__triangular); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4270; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__triangular, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4270; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4272 - * triangular = _rand.triangular - * - * binomial = _rand.binomial # <<<<<<<<<<<<<< - * negative_binomial = _rand.negative_binomial - * poisson = _rand.poisson - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4272; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__binomial); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4272; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__binomial, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4272; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4273 - * - * binomial = _rand.binomial - * negative_binomial = _rand.negative_binomial # <<<<<<<<<<<<<< - * poisson = _rand.poisson - * zipf = _rand.zipf - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4273; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__negative_binomial); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 
4273; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__negative_binomial, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4273; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4274 - * binomial = _rand.binomial - * negative_binomial = _rand.negative_binomial - * poisson = _rand.poisson # <<<<<<<<<<<<<< - * zipf = _rand.zipf - * geometric = _rand.geometric - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4274; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__poisson); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4274; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__poisson, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4274; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4275 - * negative_binomial = _rand.negative_binomial - * poisson = _rand.poisson - * zipf = _rand.zipf # <<<<<<<<<<<<<< - * geometric = _rand.geometric - * hypergeometric = _rand.hypergeometric - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4275; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__zipf); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4275; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__zipf, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 4275; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4276 - * poisson = _rand.poisson - * zipf = _rand.zipf - * geometric = _rand.geometric # <<<<<<<<<<<<<< - * hypergeometric = _rand.hypergeometric - * logseries = _rand.logseries - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4276; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__geometric); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4276; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__geometric, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4276; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4277 - * zipf = _rand.zipf - * geometric = _rand.geometric - * hypergeometric = _rand.hypergeometric # <<<<<<<<<<<<<< - * logseries = _rand.logseries - * - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4277; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__hypergeometric); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4277; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__hypergeometric, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4277; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4278 - * geometric = _rand.geometric - * hypergeometric = _rand.hypergeometric - * logseries = _rand.logseries # 
<<<<<<<<<<<<<< - * - * multivariate_normal = _rand.multivariate_normal - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4278; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__logseries); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4278; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__logseries, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4278; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4280 - * logseries = _rand.logseries - * - * multivariate_normal = _rand.multivariate_normal # <<<<<<<<<<<<<< - * multinomial = _rand.multinomial - * dirichlet = _rand.dirichlet - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4280; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__multivariate_normal); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4280; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__multivariate_normal, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4280; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4281 - * - * multivariate_normal = _rand.multivariate_normal - * multinomial = _rand.multinomial # <<<<<<<<<<<<<< - * dirichlet = _rand.dirichlet - * - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4281; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__multinomial); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4281; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__multinomial, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4281; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4282 - * multivariate_normal = _rand.multivariate_normal - * multinomial = _rand.multinomial - * dirichlet = _rand.dirichlet # <<<<<<<<<<<<<< - * - * shuffle = _rand.shuffle - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4282; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__dirichlet); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4282; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__dirichlet, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4282; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":4284 - * dirichlet = _rand.dirichlet - * - * shuffle = _rand.shuffle # <<<<<<<<<<<<<< - * permutation = _rand.permutation - */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4284; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__shuffle); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4284; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); 
__pyx_t_1 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__shuffle, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4284; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "mtrand.pyx":4285 - * - * shuffle = _rand.shuffle - * permutation = _rand.permutation # <<<<<<<<<<<<<< - */ - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s___rand); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4285; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__permutation); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4285; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s__permutation, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4285; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "mtrand.pyx":1 - * # mtrand.pyx -- A Pyrex wrapper of Jean-Sebastien Roy's RandomKit # <<<<<<<<<<<<<< - * # - * # Copyright 2005 Robert Kern (robert.kern@gmail.com) - */ - __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(((PyObject *)__pyx_t_1)); - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__seed); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if 
(PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_61), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__get_state); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_62), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__set_state); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_63), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; 
goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__random_sample); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_64), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__tomaxint); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_65), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__randint); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - 
__Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_66), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__bytes); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_67), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__uniform); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_68), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__rand); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_69), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__randn); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_70), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__random_integers); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - 
__pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_71), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__standard_normal); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_72), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__normal); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_73), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = 
PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__beta); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_74), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__exponential); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_75), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s_59); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_76), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__standard_gamma); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_77), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__gamma); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_78), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__f); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_79), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__noncentral_f); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_80), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__chisquare); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_81), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s_60); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_82), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__standard_cauchy); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_83), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__standard_t); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_84), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__vonmises); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_85), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, 
__pyx_n_s__pareto); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_86), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__weibull); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_87), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__power); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject 
*)__pyx_kp_u_88), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__laplace); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_89), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__gumbel); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_90), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__logistic); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_91), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__lognormal); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_92), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__rayleigh); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - 
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_93), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__wald); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_94), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__triangular); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_95), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__binomial); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_96), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__negative_binomial); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_97), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__poisson); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = 
__Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_98), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__zipf); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_99), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__geometric); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_100), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, 
__pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__hypergeometric); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_101), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__logseries); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_102), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__multivariate_normal); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} 
- __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_103), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__multinomial); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_104), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__dirichlet); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_105), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__shuffle); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_106), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__RandomState); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__permutation); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_107), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyObject_SetAttr(__pyx_m, __pyx_n_s____test__, ((PyObject *)__pyx_t_1)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_DECREF(((PyObject *)__pyx_t_1)); __pyx_t_1 = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - 
__Pyx_XDECREF(__pyx_t_3); - if (__pyx_m) { - __Pyx_AddTraceback("init mtrand"); - Py_DECREF(__pyx_m); __pyx_m = 0; - } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init mtrand"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if PY_MAJOR_VERSION < 3 - return; - #else - return __pyx_m; - #endif -} - -static const char *__pyx_filenames[] = { - "mtrand.pyx", - "numpy.pxi", -}; - -/* Runtime support code */ - -static void __pyx_init_filenames(void) { - __pyx_f = __pyx_filenames; -} - -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AS_STRING(kw_name)); - #endif -} - -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *number, *more_or_less; - - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - number = (num_expected == 1) ? 
"" : "s"; - PyErr_Format(PyExc_TypeError, - #if PY_VERSION_HEX < 0x02050000 - "%s() takes %s %d positional argument%s (%d given)", - #else - "%s() takes %s %zd positional argument%s (%zd given)", - #endif - func_name, more_or_less, num_expected, number, num_found); -} - -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - - while (PyDict_Next(kwds, &pos, &key, &value)) { - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; - } else { - #if PY_MAJOR_VERSION < 3 - if (unlikely(!PyString_CheckExact(key)) && unlikely(!PyString_Check(key))) { - #else - if (unlikely(!PyUnicode_CheckExact(key)) && unlikely(!PyUnicode_Check(key))) { - #endif - goto invalid_keyword_type; - } else { - for (name = first_kw_arg; *name; name++) { - #if PY_MAJOR_VERSION >= 3 - if (PyUnicode_GET_SIZE(**name) == PyUnicode_GET_SIZE(key) && - PyUnicode_Compare(**name, key) == 0) break; - #else - if (PyString_GET_SIZE(**name) == PyString_GET_SIZE(key) && - _PyString_Eq(**name, key)) break; - #endif - } - if (*name) { - values[name-argnames] = value; - } else { - /* unexpected keyword found */ - for (name=argnames; name != first_kw_arg; name++) { - if (**name == key) goto arg_passed_twice; - #if PY_MAJOR_VERSION >= 3 - if (PyUnicode_GET_SIZE(**name) == PyUnicode_GET_SIZE(key) && - PyUnicode_Compare(**name, key) == 0) goto arg_passed_twice; - #else - if (PyString_GET_SIZE(**name) == PyString_GET_SIZE(key) && - _PyString_Eq(**name, key)) goto arg_passed_twice; - #endif - } - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - } - } - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, 
**name); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION < 3 - "%s() got an unexpected keyword argument '%s'", - function_name, PyString_AsString(key)); - #else - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - return -1; -} - - -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - #if PY_VERSION_HEX < 0x02050000 - "need more than %d value%s to unpack", (int)index, - #else - "need more than %zd value%s to unpack", index, - #endif - (index == 1) ? "" : "s"); -} - -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(void) { - PyErr_SetString(PyExc_ValueError, "too many values to unpack"); -} - -static PyObject *__Pyx_UnpackItem(PyObject *iter, Py_ssize_t index) { - PyObject *item; - if (!(item = PyIter_Next(iter))) { - if (!PyErr_Occurred()) { - __Pyx_RaiseNeedMoreValuesError(index); - } - } - return item; -} - -static int __Pyx_EndUnpack(PyObject *iter) { - PyObject *item; - if ((item = PyIter_Next(iter))) { - Py_DECREF(item); - __Pyx_RaiseTooManyValuesError(); - return -1; - } - else if (!PyErr_Occurred()) - return 0; - else - return -1; -} - -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) { - PyObject *local_type, *local_value, *local_tb; - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyThreadState *tstate = PyThreadState_GET(); - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; - PyErr_NormalizeException(&local_type, &local_value, &local_tb); - if (unlikely(tstate->curexc_type)) - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - #endif - *type = local_type; - 
*value = local_value; - *tb = local_tb; - Py_INCREF(local_type); - Py_INCREF(local_value); - Py_INCREF(local_tb); - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - /* Make sure tstate is in a consistent state when we XDECREF - these objects (XDECREF may run arbitrary code). */ - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - - -static CYTHON_INLINE int __Pyx_CheckKeywordStrings( - PyObject *kwdict, - const char* function_name, - int kw_allowed) -{ - PyObject* key = 0; - Py_ssize_t pos = 0; - while (PyDict_Next(kwdict, &pos, &key, 0)) { - #if PY_MAJOR_VERSION < 3 - if (unlikely(!PyString_CheckExact(key)) && unlikely(!PyString_Check(key))) - #else - if (unlikely(!PyUnicode_CheckExact(key)) && unlikely(!PyUnicode_Check(key))) - #endif - goto invalid_keyword_type; - } - if ((!kw_allowed) && unlikely(key)) - goto invalid_keyword; - return 1; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%s() keywords must be strings", function_name); - return 0; -invalid_keyword: - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION < 3 - "%s() got an unexpected keyword argument '%s'", - function_name, PyString_AsString(key)); - #else - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif - return 0; -} - -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { - if (unlikely(!type)) { - PyErr_Format(PyExc_SystemError, "Missing type object"); - return 0; - } - if (likely(PyObject_TypeCheck(obj, type))) - return 1; - PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s", - Py_TYPE(obj)->tp_name, type->tp_name); - return 0; -} - - -static CYTHON_INLINE void __Pyx_ExceptionSave(PyObject **type, 
PyObject **value, PyObject **tb) { - PyThreadState *tstate = PyThreadState_GET(); - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); -} - -static void __Pyx_ExceptionReset(PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyThreadState *tstate = PyThreadState_GET(); - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} - -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list) { - PyObject *__import__ = 0; - PyObject *empty_list = 0; - PyObject *module = 0; - PyObject *global_dict = 0; - PyObject *empty_dict = 0; - PyObject *list; - __import__ = __Pyx_GetAttrString(__pyx_b, "__import__"); - if (!__import__) - goto bad; - if (from_list) - list = from_list; - else { - empty_list = PyList_New(0); - if (!empty_list) - goto bad; - list = empty_list; - } - global_dict = PyModule_GetDict(__pyx_m); - if (!global_dict) - goto bad; - empty_dict = PyDict_New(); - if (!empty_dict) - goto bad; - module = PyObject_CallFunctionObjArgs(__import__, - name, global_dict, empty_dict, list, NULL); -bad: - Py_XDECREF(empty_list); - Py_XDECREF(__import__); - Py_XDECREF(empty_dict); - return module; -} - -static PyObject *__Pyx_GetName(PyObject *dict, PyObject *name) { - PyObject *result; - result = PyObject_GetAttr(dict, name); - if (!result) - PyErr_SetObject(PyExc_NameError, name); - return result; -} - -static CYTHON_INLINE void __Pyx_ErrRestore(PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyThreadState *tstate = PyThreadState_GET(); - - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - 
tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} - -static CYTHON_INLINE void __Pyx_ErrFetch(PyObject **type, PyObject **value, PyObject **tb) { - PyThreadState *tstate = PyThreadState_GET(); - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -} - - -#if PY_MAJOR_VERSION < 3 -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb) { - Py_XINCREF(type); - Py_XINCREF(value); - Py_XINCREF(tb); - /* First, check the traceback argument, replacing None with NULL. */ - if (tb == Py_None) { - Py_DECREF(tb); - tb = 0; - } - else if (tb != NULL && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto raise_error; - } - /* Next, replace a missing value with None */ - if (value == NULL) { - value = Py_None; - Py_INCREF(value); - } - #if PY_VERSION_HEX < 0x02050000 - if (!PyClass_Check(type)) - #else - if (!PyType_Check(type)) - #endif - { - /* Raising an instance. The value should be a dummy. 
*/ - if (value != Py_None) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto raise_error; - } - /* Normalize to raise <class>, <instance> */ - Py_DECREF(value); - value = type; - #if PY_VERSION_HEX < 0x02050000 - if (PyInstance_Check(type)) { - type = (PyObject*) ((PyInstanceObject*)type)->in_class; - Py_INCREF(type); - } - else { - type = 0; - PyErr_SetString(PyExc_TypeError, - "raise: exception must be an old-style class or instance"); - goto raise_error; - } - #else - type = (PyObject*) Py_TYPE(type); - Py_INCREF(type); - if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto raise_error; - } - #endif - } - - __Pyx_ErrRestore(type, value, tb); - return; -raise_error: - Py_XDECREF(value); - Py_XDECREF(type); - Py_XDECREF(tb); - return; -} - -#else /* Python 3+ */ - -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb) { - if (tb == Py_None) { - tb = 0; - } else if (tb && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto bad; - } - if (value == Py_None) - value = 0; - - if (PyExceptionInstance_Check(type)) { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto bad; - } - value = type; - type = (PyObject*) Py_TYPE(value); - } else if (!PyExceptionClass_Check(type)) { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto bad; - } - - PyErr_SetObject(type, value); - - if (tb) { - PyThreadState *tstate = PyThreadState_GET(); - PyObject* tmp_tb = tstate->curexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } - } - -bad: - return; -} -#endif - -static CYTHON_INLINE unsigned char __Pyx_PyInt_AsUnsignedChar(PyObject* x) { - const unsigned char neg_one = 
(unsigned char)-1, const_zero = 0; - const int is_unsigned = neg_one > const_zero; - if (sizeof(unsigned char) < sizeof(long)) { - long val = __Pyx_PyInt_AsLong(x); - if (unlikely(val != (long)(unsigned char)val)) { - if (!unlikely(val == -1 && PyErr_Occurred())) { - PyErr_SetString(PyExc_OverflowError, - (is_unsigned && unlikely(val < 0)) ? - "can't convert negative value to unsigned char" : - "value too large to convert to unsigned char"); - } - return (unsigned char)-1; - } - return (unsigned char)val; - } - return (unsigned char)__Pyx_PyInt_AsUnsignedLong(x); -} - -static CYTHON_INLINE unsigned short __Pyx_PyInt_AsUnsignedShort(PyObject* x) { - const unsigned short neg_one = (unsigned short)-1, const_zero = 0; - const int is_unsigned = neg_one > const_zero; - if (sizeof(unsigned short) < sizeof(long)) { - long val = __Pyx_PyInt_AsLong(x); - if (unlikely(val != (long)(unsigned short)val)) { - if (!unlikely(val == -1 && PyErr_Occurred())) { - PyErr_SetString(PyExc_OverflowError, - (is_unsigned && unlikely(val < 0)) ? - "can't convert negative value to unsigned short" : - "value too large to convert to unsigned short"); - } - return (unsigned short)-1; - } - return (unsigned short)val; - } - return (unsigned short)__Pyx_PyInt_AsUnsignedLong(x); -} - -static CYTHON_INLINE unsigned int __Pyx_PyInt_AsUnsignedInt(PyObject* x) { - const unsigned int neg_one = (unsigned int)-1, const_zero = 0; - const int is_unsigned = neg_one > const_zero; - if (sizeof(unsigned int) < sizeof(long)) { - long val = __Pyx_PyInt_AsLong(x); - if (unlikely(val != (long)(unsigned int)val)) { - if (!unlikely(val == -1 && PyErr_Occurred())) { - PyErr_SetString(PyExc_OverflowError, - (is_unsigned && unlikely(val < 0)) ? 
- "can't convert negative value to unsigned int" : - "value too large to convert to unsigned int"); - } - return (unsigned int)-1; - } - return (unsigned int)val; - } - return (unsigned int)__Pyx_PyInt_AsUnsignedLong(x); -} - -static CYTHON_INLINE char __Pyx_PyInt_AsChar(PyObject* x) { - const char neg_one = (char)-1, const_zero = 0; - const int is_unsigned = neg_one > const_zero; - if (sizeof(char) < sizeof(long)) { - long val = __Pyx_PyInt_AsLong(x); - if (unlikely(val != (long)(char)val)) { - if (!unlikely(val == -1 && PyErr_Occurred())) { - PyErr_SetString(PyExc_OverflowError, - (is_unsigned && unlikely(val < 0)) ? - "can't convert negative value to char" : - "value too large to convert to char"); - } - return (char)-1; - } - return (char)val; - } - return (char)__Pyx_PyInt_AsLong(x); -} - -static CYTHON_INLINE short __Pyx_PyInt_AsShort(PyObject* x) { - const short neg_one = (short)-1, const_zero = 0; - const int is_unsigned = neg_one > const_zero; - if (sizeof(short) < sizeof(long)) { - long val = __Pyx_PyInt_AsLong(x); - if (unlikely(val != (long)(short)val)) { - if (!unlikely(val == -1 && PyErr_Occurred())) { - PyErr_SetString(PyExc_OverflowError, - (is_unsigned && unlikely(val < 0)) ? - "can't convert negative value to short" : - "value too large to convert to short"); - } - return (short)-1; - } - return (short)val; - } - return (short)__Pyx_PyInt_AsLong(x); -} - -static CYTHON_INLINE int __Pyx_PyInt_AsInt(PyObject* x) { - const int neg_one = (int)-1, const_zero = 0; - const int is_unsigned = neg_one > const_zero; - if (sizeof(int) < sizeof(long)) { - long val = __Pyx_PyInt_AsLong(x); - if (unlikely(val != (long)(int)val)) { - if (!unlikely(val == -1 && PyErr_Occurred())) { - PyErr_SetString(PyExc_OverflowError, - (is_unsigned && unlikely(val < 0)) ? 
- "can't convert negative value to int" : - "value too large to convert to int"); - } - return (int)-1; - } - return (int)val; - } - return (int)__Pyx_PyInt_AsLong(x); -} - -static CYTHON_INLINE signed char __Pyx_PyInt_AsSignedChar(PyObject* x) { - const signed char neg_one = (signed char)-1, const_zero = 0; - const int is_unsigned = neg_one > const_zero; - if (sizeof(signed char) < sizeof(long)) { - long val = __Pyx_PyInt_AsLong(x); - if (unlikely(val != (long)(signed char)val)) { - if (!unlikely(val == -1 && PyErr_Occurred())) { - PyErr_SetString(PyExc_OverflowError, - (is_unsigned && unlikely(val < 0)) ? - "can't convert negative value to signed char" : - "value too large to convert to signed char"); - } - return (signed char)-1; - } - return (signed char)val; - } - return (signed char)__Pyx_PyInt_AsSignedLong(x); -} - -static CYTHON_INLINE signed short __Pyx_PyInt_AsSignedShort(PyObject* x) { - const signed short neg_one = (signed short)-1, const_zero = 0; - const int is_unsigned = neg_one > const_zero; - if (sizeof(signed short) < sizeof(long)) { - long val = __Pyx_PyInt_AsLong(x); - if (unlikely(val != (long)(signed short)val)) { - if (!unlikely(val == -1 && PyErr_Occurred())) { - PyErr_SetString(PyExc_OverflowError, - (is_unsigned && unlikely(val < 0)) ? - "can't convert negative value to signed short" : - "value too large to convert to signed short"); - } - return (signed short)-1; - } - return (signed short)val; - } - return (signed short)__Pyx_PyInt_AsSignedLong(x); -} - -static CYTHON_INLINE signed int __Pyx_PyInt_AsSignedInt(PyObject* x) { - const signed int neg_one = (signed int)-1, const_zero = 0; - const int is_unsigned = neg_one > const_zero; - if (sizeof(signed int) < sizeof(long)) { - long val = __Pyx_PyInt_AsLong(x); - if (unlikely(val != (long)(signed int)val)) { - if (!unlikely(val == -1 && PyErr_Occurred())) { - PyErr_SetString(PyExc_OverflowError, - (is_unsigned && unlikely(val < 0)) ? 
- "can't convert negative value to signed int" : - "value too large to convert to signed int"); - } - return (signed int)-1; - } - return (signed int)val; - } - return (signed int)__Pyx_PyInt_AsSignedLong(x); -} - -static CYTHON_INLINE unsigned long __Pyx_PyInt_AsUnsignedLong(PyObject* x) { - const unsigned long neg_one = (unsigned long)-1, const_zero = 0; - const int is_unsigned = neg_one > const_zero; -#if PY_VERSION_HEX < 0x03000000 - if (likely(PyInt_Check(x))) { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to unsigned long"); - return (unsigned long)-1; - } - return (unsigned long)val; - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { - if (unlikely(Py_SIZE(x) < 0)) { - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to unsigned long"); - return (unsigned long)-1; - } - return PyLong_AsUnsignedLong(x); - } else { - return PyLong_AsLong(x); - } - } else { - unsigned long val; - PyObject *tmp = __Pyx_PyNumber_Int(x); - if (!tmp) return (unsigned long)-1; - val = __Pyx_PyInt_AsUnsignedLong(tmp); - Py_DECREF(tmp); - return val; - } -} - -static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_PyInt_AsUnsignedLongLong(PyObject* x) { - const unsigned PY_LONG_LONG neg_one = (unsigned PY_LONG_LONG)-1, const_zero = 0; - const int is_unsigned = neg_one > const_zero; -#if PY_VERSION_HEX < 0x03000000 - if (likely(PyInt_Check(x))) { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to unsigned PY_LONG_LONG"); - return (unsigned PY_LONG_LONG)-1; - } - return (unsigned PY_LONG_LONG)val; - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { - if (unlikely(Py_SIZE(x) < 0)) { - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to unsigned PY_LONG_LONG"); - return (unsigned PY_LONG_LONG)-1; - } - return 
PyLong_AsUnsignedLongLong(x); - } else { - return PyLong_AsLongLong(x); - } - } else { - unsigned PY_LONG_LONG val; - PyObject *tmp = __Pyx_PyNumber_Int(x); - if (!tmp) return (unsigned PY_LONG_LONG)-1; - val = __Pyx_PyInt_AsUnsignedLongLong(tmp); - Py_DECREF(tmp); - return val; - } -} - -static CYTHON_INLINE long __Pyx_PyInt_AsLong(PyObject* x) { - const long neg_one = (long)-1, const_zero = 0; - const int is_unsigned = neg_one > const_zero; -#if PY_VERSION_HEX < 0x03000000 - if (likely(PyInt_Check(x))) { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long)-1; - } - return (long)val; - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { - if (unlikely(Py_SIZE(x) < 0)) { - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long)-1; - } - return PyLong_AsUnsignedLong(x); - } else { - return PyLong_AsLong(x); - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_Int(x); - if (!tmp) return (long)-1; - val = __Pyx_PyInt_AsLong(tmp); - Py_DECREF(tmp); - return val; - } -} - -static CYTHON_INLINE PY_LONG_LONG __Pyx_PyInt_AsLongLong(PyObject* x) { - const PY_LONG_LONG neg_one = (PY_LONG_LONG)-1, const_zero = 0; - const int is_unsigned = neg_one > const_zero; -#if PY_VERSION_HEX < 0x03000000 - if (likely(PyInt_Check(x))) { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to PY_LONG_LONG"); - return (PY_LONG_LONG)-1; - } - return (PY_LONG_LONG)val; - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { - if (unlikely(Py_SIZE(x) < 0)) { - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to PY_LONG_LONG"); - return (PY_LONG_LONG)-1; - } - return PyLong_AsUnsignedLongLong(x); - } else { - return PyLong_AsLongLong(x); - } - } else { - PY_LONG_LONG val; - PyObject *tmp = 
__Pyx_PyNumber_Int(x); - if (!tmp) return (PY_LONG_LONG)-1; - val = __Pyx_PyInt_AsLongLong(tmp); - Py_DECREF(tmp); - return val; - } -} - -static CYTHON_INLINE signed long __Pyx_PyInt_AsSignedLong(PyObject* x) { - const signed long neg_one = (signed long)-1, const_zero = 0; - const int is_unsigned = neg_one > const_zero; -#if PY_VERSION_HEX < 0x03000000 - if (likely(PyInt_Check(x))) { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to signed long"); - return (signed long)-1; - } - return (signed long)val; - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { - if (unlikely(Py_SIZE(x) < 0)) { - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to signed long"); - return (signed long)-1; - } - return PyLong_AsUnsignedLong(x); - } else { - return PyLong_AsLong(x); - } - } else { - signed long val; - PyObject *tmp = __Pyx_PyNumber_Int(x); - if (!tmp) return (signed long)-1; - val = __Pyx_PyInt_AsSignedLong(tmp); - Py_DECREF(tmp); - return val; - } -} - -static CYTHON_INLINE signed PY_LONG_LONG __Pyx_PyInt_AsSignedLongLong(PyObject* x) { - const signed PY_LONG_LONG neg_one = (signed PY_LONG_LONG)-1, const_zero = 0; - const int is_unsigned = neg_one > const_zero; -#if PY_VERSION_HEX < 0x03000000 - if (likely(PyInt_Check(x))) { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to signed PY_LONG_LONG"); - return (signed PY_LONG_LONG)-1; - } - return (signed PY_LONG_LONG)val; - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { - if (unlikely(Py_SIZE(x) < 0)) { - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to signed PY_LONG_LONG"); - return (signed PY_LONG_LONG)-1; - } - return PyLong_AsUnsignedLongLong(x); - } else { - return PyLong_AsLongLong(x); - } - } else { - signed PY_LONG_LONG val; - PyObject *tmp = 
__Pyx_PyNumber_Int(x); - if (!tmp) return (signed PY_LONG_LONG)-1; - val = __Pyx_PyInt_AsSignedLongLong(tmp); - Py_DECREF(tmp); - return val; - } -} - -#ifndef __PYX_HAVE_RT_ImportType -#define __PYX_HAVE_RT_ImportType -static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, - long size, int strict) -{ - PyObject *py_module = 0; - PyObject *result = 0; - PyObject *py_name = 0; - char warning[200]; - - py_module = __Pyx_ImportModule(module_name); - if (!py_module) - goto bad; - #if PY_MAJOR_VERSION < 3 - py_name = PyString_FromString(class_name); - #else - py_name = PyUnicode_FromString(class_name); - #endif - if (!py_name) - goto bad; - result = PyObject_GetAttr(py_module, py_name); - Py_DECREF(py_name); - py_name = 0; - Py_DECREF(py_module); - py_module = 0; - if (!result) - goto bad; - if (!PyType_Check(result)) { - PyErr_Format(PyExc_TypeError, - "%s.%s is not a type object", - module_name, class_name); - goto bad; - } - if (!strict && ((PyTypeObject *)result)->tp_basicsize > size) { - PyOS_snprintf(warning, sizeof(warning), - "%s.%s size changed, may indicate binary incompatibility", - module_name, class_name); - PyErr_WarnEx(NULL, warning, 0); - } - else if (((PyTypeObject *)result)->tp_basicsize != size) { - PyErr_Format(PyExc_ValueError, - "%s.%s has the wrong size, try recompiling", - module_name, class_name); - goto bad; - } - return (PyTypeObject *)result; -bad: - Py_XDECREF(py_module); - Py_XDECREF(result); - return 0; -} -#endif - -#ifndef __PYX_HAVE_RT_ImportModule -#define __PYX_HAVE_RT_ImportModule -static PyObject *__Pyx_ImportModule(const char *name) { - PyObject *py_name = 0; - PyObject *py_module = 0; - - #if PY_MAJOR_VERSION < 3 - py_name = PyString_FromString(name); - #else - py_name = PyUnicode_FromString(name); - #endif - if (!py_name) - goto bad; - py_module = PyImport_Import(py_name); - Py_DECREF(py_name); - return py_module; -bad: - Py_XDECREF(py_name); - return 0; -} -#endif - -#include "compile.h" -#include 
"frameobject.h" -#include "traceback.h" - -static void __Pyx_AddTraceback(const char *funcname) { - PyObject *py_srcfile = 0; - PyObject *py_funcname = 0; - PyObject *py_globals = 0; - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - - #if PY_MAJOR_VERSION < 3 - py_srcfile = PyString_FromString(__pyx_filename); - #else - py_srcfile = PyUnicode_FromString(__pyx_filename); - #endif - if (!py_srcfile) goto bad; - if (__pyx_clineno) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, __pyx_clineno); - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, __pyx_clineno); - #endif - } - else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - #else - py_funcname = PyUnicode_FromString(funcname); - #endif - } - if (!py_funcname) goto bad; - py_globals = PyModule_GetDict(__pyx_m); - if (!py_globals) goto bad; - py_code = PyCode_New( - 0, /*int argcount,*/ - #if PY_MAJOR_VERSION >= 3 - 0, /*int kwonlyargcount,*/ - #endif - 0, /*int nlocals,*/ - 0, /*int stacksize,*/ - 0, /*int flags,*/ - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - __pyx_lineno, /*int firstlineno,*/ - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - if (!py_code) goto bad; - py_frame = PyFrame_New( - PyThreadState_GET(), /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - py_globals, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - py_frame->f_lineno = __pyx_lineno; - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_srcfile); - Py_XDECREF(py_funcname); - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} - -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - 
while (t->p) { - #if PY_MAJOR_VERSION < 3 - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - #else /* Python 3+ has unicode identifiers */ - if (t->is_unicode | t->is_str) { - if (t->intern) { - *t->p = PyUnicode_InternFromString(t->s); - } else if (t->encoding) { - *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); - } else { - *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); - } - } else { - *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); - } - #endif - if (!*t->p) - return -1; - ++t; - } - return 0; -} - -/* Type Conversion Functions */ - -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - if (x == Py_True) return 1; - else if ((x == Py_False) | (x == Py_None)) return 0; - else return PyObject_IsTrue(x); -} - -static CYTHON_INLINE PyObject* __Pyx_PyNumber_Int(PyObject* x) { - PyNumberMethods *m; - const char *name = NULL; - PyObject *res = NULL; -#if PY_VERSION_HEX < 0x03000000 - if (PyInt_Check(x) || PyLong_Check(x)) -#else - if (PyLong_Check(x)) -#endif - return Py_INCREF(x), x; - m = Py_TYPE(x)->tp_as_number; -#if PY_VERSION_HEX < 0x03000000 - if (m && m->nb_int) { - name = "int"; - res = PyNumber_Int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = PyNumber_Long(x); - } -#else - if (m && m->nb_int) { - name = "int"; - res = PyNumber_Long(x); - } -#endif - if (res) { -#if PY_VERSION_HEX < 0x03000000 - if (!PyInt_Check(res) && !PyLong_Check(res)) { -#else - if (!PyLong_Check(res)) { -#endif - PyErr_Format(PyExc_TypeError, - "__%s__ returned non-%s (type %.200s)", - name, name, Py_TYPE(res)->tp_name); - Py_DECREF(res); - return NULL; - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} - -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - 
PyObject* x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} - -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { -#if PY_VERSION_HEX < 0x02050000 - if (ival <= LONG_MAX) - return PyInt_FromLong((long)ival); - else { - unsigned char *bytes = (unsigned char *) &ival; - int one = 1; int little = (int)*(unsigned char*)&one; - return _PyLong_FromByteArray(bytes, sizeof(size_t), little, 0); - } -#else - return PyInt_FromSize_t(ival); -#endif -} - -static CYTHON_INLINE size_t __Pyx_PyInt_AsSize_t(PyObject* x) { - unsigned PY_LONG_LONG val = __Pyx_PyInt_AsUnsignedLongLong(x); - if (unlikely(val == (unsigned PY_LONG_LONG)-1 && PyErr_Occurred())) { - return (size_t)-1; - } else if (unlikely(val != (unsigned PY_LONG_LONG)(size_t)val)) { - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to size_t"); - return (size_t)-1; - } - return (size_t)val; -} - - -#endif /* Py_PYTHON_H */ diff --git a/pythonPackages/numpy/numpy/random/mtrand/mtrand.pyx b/pythonPackages/numpy/numpy/random/mtrand/mtrand.pyx deleted file mode 100755 index f3caac14bd..0000000000 --- a/pythonPackages/numpy/numpy/random/mtrand/mtrand.pyx +++ /dev/null @@ -1,4297 +0,0 @@ -# mtrand.pyx -- A Pyrex wrapper of Jean-Sebastien Roy's RandomKit -# -# Copyright 2005 Robert Kern (robert.kern@gmail.com) -# -# Permission is hereby granted, free of charge, to any person obtaining a -# copy of this software and associated documentation files (the -# "Software"), to deal in the Software without restriction, including -# without limitation the rights to use, copy, modify, merge, publish, -# distribute, sublicense, and/or sell copies of the Software, and to -# permit persons to whom the Software is furnished to do so, subject to -# the following conditions: -# -# The above copyright notice and this permission notice shall be included -# in all copies or substantial portions of the Software. 
-# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS -# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. -# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY -# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, -# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE -# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -include "Python.pxi" -include "numpy.pxi" - -cdef extern from "math.h": - double exp(double x) - double log(double x) - double floor(double x) - double sin(double x) - double cos(double x) - -cdef extern from "mtrand_py_helper.h": - object empty_py_bytes(unsigned long length, void **bytes) - -cdef extern from "randomkit.h": - - ctypedef struct rk_state: - unsigned long key[624] - int pos - int has_gauss - double gauss - - ctypedef enum rk_error: - RK_NOERR = 0 - RK_ENODEV = 1 - RK_ERR_MAX = 2 - - char *rk_strerror[2] - - # 0xFFFFFFFFUL - unsigned long RK_MAX - - void rk_seed(unsigned long seed, rk_state *state) - rk_error rk_randomseed(rk_state *state) - unsigned long rk_random(rk_state *state) - long rk_long(rk_state *state) - unsigned long rk_ulong(rk_state *state) - unsigned long rk_interval(unsigned long max, rk_state *state) - double rk_double(rk_state *state) - void rk_fill(void *buffer, size_t size, rk_state *state) - rk_error rk_devfill(void *buffer, size_t size, int strong) - rk_error rk_altfill(void *buffer, size_t size, int strong, - rk_state *state) - double rk_gauss(rk_state *state) - -cdef extern from "distributions.h": - - double rk_normal(rk_state *state, double loc, double scale) - double rk_standard_exponential(rk_state *state) - double rk_exponential(rk_state *state, double scale) - double rk_uniform(rk_state *state, double loc, double scale) - double rk_standard_gamma(rk_state *state, double shape) - double rk_gamma(rk_state *state, double shape, double scale) - 
double rk_beta(rk_state *state, double a, double b) - double rk_chisquare(rk_state *state, double df) - double rk_noncentral_chisquare(rk_state *state, double df, double nonc) - double rk_f(rk_state *state, double dfnum, double dfden) - double rk_noncentral_f(rk_state *state, double dfnum, double dfden, double nonc) - double rk_standard_cauchy(rk_state *state) - double rk_standard_t(rk_state *state, double df) - double rk_vonmises(rk_state *state, double mu, double kappa) - double rk_pareto(rk_state *state, double a) - double rk_weibull(rk_state *state, double a) - double rk_power(rk_state *state, double a) - double rk_laplace(rk_state *state, double loc, double scale) - double rk_gumbel(rk_state *state, double loc, double scale) - double rk_logistic(rk_state *state, double loc, double scale) - double rk_lognormal(rk_state *state, double mode, double sigma) - double rk_rayleigh(rk_state *state, double mode) - double rk_wald(rk_state *state, double mean, double scale) - double rk_triangular(rk_state *state, double left, double mode, double right) - - long rk_binomial(rk_state *state, long n, double p) - long rk_binomial_btpe(rk_state *state, long n, double p) - long rk_binomial_inversion(rk_state *state, long n, double p) - long rk_negative_binomial(rk_state *state, double n, double p) - long rk_poisson(rk_state *state, double lam) - long rk_poisson_mult(rk_state *state, double lam) - long rk_poisson_ptrs(rk_state *state, double lam) - long rk_zipf(rk_state *state, double a) - long rk_geometric(rk_state *state, double p) - long rk_hypergeometric(rk_state *state, long good, long bad, long sample) - long rk_logseries(rk_state *state, double p) - -ctypedef double (* rk_cont0)(rk_state *state) -ctypedef double (* rk_cont1)(rk_state *state, double a) -ctypedef double (* rk_cont2)(rk_state *state, double a, double b) -ctypedef double (* rk_cont3)(rk_state *state, double a, double b, double c) - -ctypedef long (* rk_disc0)(rk_state *state) -ctypedef long (* 
rk_discnp)(rk_state *state, long n, double p) -ctypedef long (* rk_discdd)(rk_state *state, double n, double p) -ctypedef long (* rk_discnmN)(rk_state *state, long n, long m, long N) -ctypedef long (* rk_discd)(rk_state *state, double a) - - -cdef extern from "initarray.h": - void init_by_array(rk_state *self, unsigned long *init_key, - unsigned long key_length) - -# Initialize numpy -import_array() - -import numpy as np - -cdef object cont0_array(rk_state *state, rk_cont0 func, object size): - cdef double *array_data - cdef ndarray array "arrayObject" - cdef long length - cdef long i - - if size is None: - return func(state) - else: - array = np.empty(size, np.float64) - length = PyArray_SIZE(array) - array_data = array.data - for i from 0 <= i < length: - array_data[i] = func(state) - return array - - -cdef object cont1_array_sc(rk_state *state, rk_cont1 func, object size, double a): - cdef double *array_data - cdef ndarray array "arrayObject" - cdef long length - cdef long i - - if size is None: - return func(state, a) - else: - array = np.empty(size, np.float64) - length = PyArray_SIZE(array) - array_data = array.data - for i from 0 <= i < length: - array_data[i] = func(state, a) - return array - -cdef object cont1_array(rk_state *state, rk_cont1 func, object size, ndarray oa): - cdef double *array_data - cdef double *oa_data - cdef ndarray array "arrayObject" - cdef npy_intp length - cdef npy_intp i - cdef flatiter itera - cdef broadcast multi - - if size is None: - array = PyArray_SimpleNew(oa.nd, oa.dimensions, NPY_DOUBLE) - length = PyArray_SIZE(array) - array_data = array.data - itera = PyArray_IterNew(oa) - for i from 0 <= i < length: - array_data[i] = func(state, ((itera.dataptr))[0]) - PyArray_ITER_NEXT(itera) - else: - array = np.empty(size, np.float64) - array_data = array.data - multi = PyArray_MultiIterNew(2, array, - oa) - if (multi.size != PyArray_SIZE(array)): - raise ValueError("size is not compatible with inputs") - for i from 0 <= i < 
multi.size: - oa_data = PyArray_MultiIter_DATA(multi, 1) - array_data[i] = func(state, oa_data[0]) - PyArray_MultiIter_NEXTi(multi, 1) - return array - -cdef object cont2_array_sc(rk_state *state, rk_cont2 func, object size, double a, - double b): - cdef double *array_data - cdef ndarray array "arrayObject" - cdef long length - cdef long i - - if size is None: - return func(state, a, b) - else: - array = np.empty(size, np.float64) - length = PyArray_SIZE(array) - array_data = array.data - for i from 0 <= i < length: - array_data[i] = func(state, a, b) - return array - - -cdef object cont2_array(rk_state *state, rk_cont2 func, object size, - ndarray oa, ndarray ob): - cdef double *array_data - cdef double *oa_data - cdef double *ob_data - cdef ndarray array "arrayObject" - cdef npy_intp length - cdef npy_intp i - cdef broadcast multi - - if size is None: - multi = PyArray_MultiIterNew(2, oa, ob) - array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_DOUBLE) - array_data = array.data - for i from 0 <= i < multi.size: - oa_data = PyArray_MultiIter_DATA(multi, 0) - ob_data = PyArray_MultiIter_DATA(multi, 1) - array_data[i] = func(state, oa_data[0], ob_data[0]) - PyArray_MultiIter_NEXT(multi) - else: - array = np.empty(size, np.float64) - array_data = array.data - multi = PyArray_MultiIterNew(3, array, oa, ob) - if (multi.size != PyArray_SIZE(array)): - raise ValueError("size is not compatible with inputs") - for i from 0 <= i < multi.size: - oa_data = PyArray_MultiIter_DATA(multi, 1) - ob_data = PyArray_MultiIter_DATA(multi, 2) - array_data[i] = func(state, oa_data[0], ob_data[0]) - PyArray_MultiIter_NEXTi(multi, 1) - PyArray_MultiIter_NEXTi(multi, 2) - return array - -cdef object cont3_array_sc(rk_state *state, rk_cont3 func, object size, double a, - double b, double c): - - cdef double *array_data - cdef ndarray array "arrayObject" - cdef long length - cdef long i - - if size is None: - return func(state, a, b, c) - else: - array = np.empty(size, np.float64) - 
length = PyArray_SIZE(array) - array_data = array.data - for i from 0 <= i < length: - array_data[i] = func(state, a, b, c) - return array - -cdef object cont3_array(rk_state *state, rk_cont3 func, object size, ndarray oa, - ndarray ob, ndarray oc): - - cdef double *array_data - cdef double *oa_data - cdef double *ob_data - cdef double *oc_data - cdef ndarray array "arrayObject" - cdef npy_intp length - cdef npy_intp i - cdef broadcast multi - - if size is None: - multi = PyArray_MultiIterNew(3, oa, ob, oc) - array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_DOUBLE) - array_data = array.data - for i from 0 <= i < multi.size: - oa_data = PyArray_MultiIter_DATA(multi, 0) - ob_data = PyArray_MultiIter_DATA(multi, 1) - oc_data = PyArray_MultiIter_DATA(multi, 2) - array_data[i] = func(state, oa_data[0], ob_data[0], oc_data[0]) - PyArray_MultiIter_NEXT(multi) - else: - array = np.empty(size, np.float64) - array_data = array.data - multi = PyArray_MultiIterNew(4, array, oa, - ob, oc) - if (multi.size != PyArray_SIZE(array)): - raise ValueError("size is not compatible with inputs") - for i from 0 <= i < multi.size: - oa_data = PyArray_MultiIter_DATA(multi, 1) - ob_data = PyArray_MultiIter_DATA(multi, 2) - oc_data = PyArray_MultiIter_DATA(multi, 3) - array_data[i] = func(state, oa_data[0], ob_data[0], oc_data[0]) - PyArray_MultiIter_NEXT(multi) - return array - -cdef object disc0_array(rk_state *state, rk_disc0 func, object size): - cdef long *array_data - cdef ndarray array "arrayObject" - cdef long length - cdef long i - - if size is None: - return func(state) - else: - array = np.empty(size, int) - length = PyArray_SIZE(array) - array_data = array.data - for i from 0 <= i < length: - array_data[i] = func(state) - return array - -cdef object discnp_array_sc(rk_state *state, rk_discnp func, object size, long n, double p): - cdef long *array_data - cdef ndarray array "arrayObject" - cdef long length - cdef long i - - if size is None: - return func(state, n, p) - 
else: - array = np.empty(size, int) - length = PyArray_SIZE(array) - array_data = array.data - for i from 0 <= i < length: - array_data[i] = func(state, n, p) - return array - -cdef object discnp_array(rk_state *state, rk_discnp func, object size, ndarray on, ndarray op): - cdef long *array_data - cdef ndarray array "arrayObject" - cdef npy_intp length - cdef npy_intp i - cdef double *op_data - cdef long *on_data - cdef broadcast multi - - if size is None: - multi = PyArray_MultiIterNew(2, on, op) - array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) - array_data = array.data - for i from 0 <= i < multi.size: - on_data = PyArray_MultiIter_DATA(multi, 0) - op_data = PyArray_MultiIter_DATA(multi, 1) - array_data[i] = func(state, on_data[0], op_data[0]) - PyArray_MultiIter_NEXT(multi) - else: - array = np.empty(size, int) - array_data = array.data - multi = PyArray_MultiIterNew(3, array, on, op) - if (multi.size != PyArray_SIZE(array)): - raise ValueError("size is not compatible with inputs") - for i from 0 <= i < multi.size: - on_data = PyArray_MultiIter_DATA(multi, 1) - op_data = PyArray_MultiIter_DATA(multi, 2) - array_data[i] = func(state, on_data[0], op_data[0]) - PyArray_MultiIter_NEXTi(multi, 1) - PyArray_MultiIter_NEXTi(multi, 2) - - return array - -cdef object discdd_array_sc(rk_state *state, rk_discdd func, object size, double n, double p): - cdef long *array_data - cdef ndarray array "arrayObject" - cdef long length - cdef long i - - if size is None: - return func(state, n, p) - else: - array = np.empty(size, int) - length = PyArray_SIZE(array) - array_data = array.data - for i from 0 <= i < length: - array_data[i] = func(state, n, p) - return array - -cdef object discdd_array(rk_state *state, rk_discdd func, object size, ndarray on, ndarray op): - cdef long *array_data - cdef ndarray array "arrayObject" - cdef npy_intp length - cdef npy_intp i - cdef double *op_data - cdef double *on_data - cdef broadcast multi - - if size is None: - multi = 
PyArray_MultiIterNew(2, on, op) - array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) - array_data = array.data - for i from 0 <= i < multi.size: - on_data = PyArray_MultiIter_DATA(multi, 0) - op_data = PyArray_MultiIter_DATA(multi, 1) - array_data[i] = func(state, on_data[0], op_data[0]) - PyArray_MultiIter_NEXT(multi) - else: - array = np.empty(size, int) - array_data = array.data - multi = PyArray_MultiIterNew(3, array, on, op) - if (multi.size != PyArray_SIZE(array)): - raise ValueError("size is not compatible with inputs") - for i from 0 <= i < multi.size: - on_data = PyArray_MultiIter_DATA(multi, 1) - op_data = PyArray_MultiIter_DATA(multi, 2) - array_data[i] = func(state, on_data[0], op_data[0]) - PyArray_MultiIter_NEXTi(multi, 1) - PyArray_MultiIter_NEXTi(multi, 2) - - return array - -cdef object discnmN_array_sc(rk_state *state, rk_discnmN func, object size, - long n, long m, long N): - cdef long *array_data - cdef ndarray array "arrayObject" - cdef long length - cdef long i - - if size is None: - return func(state, n, m, N) - else: - array = np.empty(size, int) - length = PyArray_SIZE(array) - array_data = array.data - for i from 0 <= i < length: - array_data[i] = func(state, n, m, N) - return array - -cdef object discnmN_array(rk_state *state, rk_discnmN func, object size, - ndarray on, ndarray om, ndarray oN): - cdef long *array_data - cdef long *on_data - cdef long *om_data - cdef long *oN_data - cdef ndarray array "arrayObject" - cdef npy_intp length - cdef npy_intp i - cdef broadcast multi - - if size is None: - multi = PyArray_MultiIterNew(3, on, om, oN) - array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) - array_data = array.data - for i from 0 <= i < multi.size: - on_data = PyArray_MultiIter_DATA(multi, 0) - om_data = PyArray_MultiIter_DATA(multi, 1) - oN_data = PyArray_MultiIter_DATA(multi, 2) - array_data[i] = func(state, on_data[0], om_data[0], oN_data[0]) - PyArray_MultiIter_NEXT(multi) - else: - array = 
np.empty(size, int) - array_data = array.data - multi = PyArray_MultiIterNew(4, array, on, om, - oN) - if (multi.size != PyArray_SIZE(array)): - raise ValueError("size is not compatible with inputs") - for i from 0 <= i < multi.size: - on_data = PyArray_MultiIter_DATA(multi, 1) - om_data = PyArray_MultiIter_DATA(multi, 2) - oN_data = PyArray_MultiIter_DATA(multi, 3) - array_data[i] = func(state, on_data[0], om_data[0], oN_data[0]) - PyArray_MultiIter_NEXT(multi) - - return array - -cdef object discd_array_sc(rk_state *state, rk_discd func, object size, double a): - cdef long *array_data - cdef ndarray array "arrayObject" - cdef long length - cdef long i - - if size is None: - return func(state, a) - else: - array = np.empty(size, int) - length = PyArray_SIZE(array) - array_data = array.data - for i from 0 <= i < length: - array_data[i] = func(state, a) - return array - -cdef object discd_array(rk_state *state, rk_discd func, object size, ndarray oa): - cdef long *array_data - cdef double *oa_data - cdef ndarray array "arrayObject" - cdef npy_intp length - cdef npy_intp i - cdef broadcast multi - cdef flatiter itera - - if size is None: - array = PyArray_SimpleNew(oa.nd, oa.dimensions, NPY_LONG) - length = PyArray_SIZE(array) - array_data = array.data - itera = PyArray_IterNew(oa) - for i from 0 <= i < length: - array_data[i] = func(state, ((itera.dataptr))[0]) - PyArray_ITER_NEXT(itera) - else: - array = np.empty(size, int) - array_data = array.data - multi = PyArray_MultiIterNew(2, array, oa) - if (multi.size != PyArray_SIZE(array)): - raise ValueError("size is not compatible with inputs") - for i from 0 <= i < multi.size: - oa_data = PyArray_MultiIter_DATA(multi, 1) - array_data[i] = func(state, oa_data[0]) - PyArray_MultiIter_NEXTi(multi, 1) - return array - -cdef double kahan_sum(double *darr, long n): - cdef double c, y, t, sum - cdef long i - sum = darr[0] - c = 0.0 - for i from 1 <= i < n: - y = darr[i] - c - t = sum + y - c = (t-sum) - y - sum = t - return 
sum - -cdef class RandomState: - """ - RandomState(seed=None) - - Container for the Mersenne Twister pseudo-random number generator. - - `RandomState` exposes a number of methods for generating random numbers - drawn from a variety of probability distributions. In addition to the - distribution-specific arguments, each method takes a keyword argument - `size` that defaults to ``None``. If `size` is ``None``, then a single - value is generated and returned. If `size` is an integer, then a 1-D - array filled with generated values is returned. If `size` is a tuple, - then an array with that shape is filled and returned. - - Parameters - ---------- - seed : int or array_like, optional - Random seed initializing the pseudo-random number generator. - Can be an integer, an array (or other sequence) of integers of - any length, or ``None`` (the default). - If `seed` is ``None``, then `RandomState` will try to read data from - ``/dev/urandom`` (or the Windows analogue) if available or seed from - the clock otherwise. - - Notes - ----- - The Python stdlib module "random" also contains a Mersenne Twister - pseudo-random number generator with a number of methods that are similar - to the ones available in `RandomState`. `RandomState`, besides being - NumPy-aware, has the advantage that it provides a much larger number - of probability distributions to choose from. - - """ - cdef rk_state *internal_state - - def __init__(self, seed=None): - self.internal_state = PyMem_Malloc(sizeof(rk_state)) - - self.seed(seed) - - def __dealloc__(self): - if self.internal_state != NULL: - PyMem_Free(self.internal_state) - self.internal_state = NULL - - def seed(self, seed=None): - """ - seed(seed=None) - - Seed the generator. - - This method is called when `RandomState` is initialized. It can be - called again to re-seed the generator. For details, see `RandomState`. - - Parameters - ---------- - seed : int or array_like, optional - Seed for `RandomState`. 
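The seeding and reproducibility behaviour described in this (since-deleted) docstring can be checked from plain Python through the public `numpy.random.RandomState` API; this is an illustrative sketch, not part of the original Pyrex source:

```python
import numpy as np

# Two generators seeded identically produce identical streams.
rs1 = np.random.RandomState(12345)
rs2 = np.random.RandomState(12345)

a = rs1.random_sample(5)
b = rs2.random_sample(5)
assert (a == b).all()

# Re-seeding resets the stream to the same starting point.
rs1.seed(12345)
assert (rs1.random_sample(5) == a).all()
```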
- - See Also - -------- - RandomState - - """ - cdef rk_error errcode - cdef ndarray obj "arrayObject_obj" - if seed is None: - errcode = rk_randomseed(self.internal_state) - elif type(seed) is int: - rk_seed(seed, self.internal_state) - elif isinstance(seed, np.integer): - iseed = int(seed) - rk_seed(iseed, self.internal_state) - else: - obj = PyArray_ContiguousFromObject(seed, NPY_LONG, 1, 1) - init_by_array(self.internal_state, (obj.data), - obj.dimensions[0]) - - def get_state(self): - """ - get_state() - - Return a tuple representing the internal state of the generator. - - For more details, see `set_state`. - - Returns - ------- - out : tuple(str, ndarray of 624 uints, int, int, float) - The returned tuple has the following items: - - 1. the string 'MT19937'. - 2. a 1-D array of 624 unsigned integer keys. - 3. an integer ``pos``. - 4. an integer ``has_gauss``. - 5. a float ``cached_gaussian``. - - See Also - -------- - set_state - - Notes - ----- - `set_state` and `get_state` are not needed to work with any of the - random distributions in NumPy. If the internal state is manually altered, - the user should know exactly what he/she is doing. - - """ - cdef ndarray state "arrayObject_state" - state = np.empty(624, np.uint) - memcpy((state.data), (self.internal_state.key), 624*sizeof(long)) - state = np.asarray(state, np.uint32) - return ('MT19937', state, self.internal_state.pos, - self.internal_state.has_gauss, self.internal_state.gauss) - - def set_state(self, state): - """ - set_state(state) - - Set the internal state of the generator from a tuple. - - For use if one has reason to manually (re-)set the internal state of the - "Mersenne Twister"[1]_ pseudo-random number generating algorithm. - - Parameters - ---------- - state : tuple(str, ndarray of 624 uints, int, int, float) - The `state` tuple has the following items: - - 1. the string 'MT19937', specifying the Mersenne Twister algorithm. - 2. a 1-D array of 624 unsigned integers ``keys``. - 3. 
an integer ``pos``. - 4. an integer ``has_gauss``. - 5. a float ``cached_gaussian``. - - Returns - ------- - out : None - Returns 'None' on success. - - See Also - -------- - get_state - - Notes - ----- - `set_state` and `get_state` are not needed to work with any of the - random distributions in NumPy. If the internal state is manually altered, - the user should know exactly what he/she is doing. - - For backwards compatibility, the form (str, array of 624 uints, int) is - also accepted although it is missing some information about the cached - Gaussian value: ``state = ('MT19937', keys, pos)``. - - References - ---------- - .. [1] M. Matsumoto and T. Nishimura, "Mersenne Twister: A - 623-dimensionally equidistributed uniform pseudorandom number - generator," *ACM Trans. on Modeling and Computer Simulation*, - Vol. 8, No. 1, pp. 3-30, Jan. 1998. - - """ - cdef ndarray obj "arrayObject_obj" - cdef int pos - algorithm_name = state[0] - if algorithm_name != 'MT19937': - raise ValueError("algorithm must be 'MT19937'") - key, pos = state[1:3] - if len(state) == 3: - has_gauss = 0 - cached_gaussian = 0.0 - else: - has_gauss, cached_gaussian = state[3:5] - try: - obj = PyArray_ContiguousFromObject(key, NPY_ULONG, 1, 1) - except TypeError: - # compatibility -- could be an older pickle - obj = PyArray_ContiguousFromObject(key, NPY_LONG, 1, 1) - if obj.dimensions[0] != 624: - raise ValueError("state must be 624 longs") - memcpy((self.internal_state.key), (obj.data), 624*sizeof(long)) - self.internal_state.pos = pos - self.internal_state.has_gauss = has_gauss - self.internal_state.gauss = cached_gaussian - - # Pickling support: - def __getstate__(self): - return self.get_state() - - def __setstate__(self, state): - self.set_state(state) - - def __reduce__(self): - return (np.random.__RandomState_ctor, (), self.get_state()) - - # Basic distributions: - def random_sample(self, size=None): - """ - random_sample(size=None) - - Return random floats in the half-open interval [0.0, 
1.0). - - Results are from the "continuous uniform" distribution over the - stated interval. To sample :math:`Unif[a, b), b > a` multiply - the output of `random_sample` by `(b-a)` and add `a`:: - - (b - a) * random_sample() + a - - Parameters - ---------- - size : int or tuple of ints, optional - Defines the shape of the returned array of random floats. If None - (the default), returns a single float. - - Returns - ------- - out : float or ndarray of floats - Array of random floats of shape `size` (unless ``size=None``, in which - case a single float is returned). - - Examples - -------- - >>> np.random.random_sample() - 0.47108547995356098 - >>> type(np.random.random_sample()) - - >>> np.random.random_sample((5,)) - array([ 0.30220482, 0.86820401, 0.1654503 , 0.11659149, 0.54323428]) - - Three-by-two array of random numbers from [-5, 0): - - >>> 5 * np.random.random_sample((3, 2)) - 5 - array([[-3.99149989, -0.52338984], - [-2.99091858, -0.79479508], - [-1.23204345, -1.75224494]]) - - """ - return cont0_array(self.internal_state, rk_double, size) - - def tomaxint(self, size=None): - """ - tomaxint(size=None) - - Random integers between 0 and ``sys.maxint``, inclusive. - - Return a sample of uniformly distributed random integers in the interval - [0, ``sys.maxint``]. - - Parameters - ---------- - size : tuple of ints, int, optional - Shape of output. If this is, for example, (m,n,k), m*n*k samples - are generated. If no shape is specified, a single sample is - returned. - - Returns - ------- - out : ndarray - Drawn samples, with shape `size`. - - See Also - -------- - randint : Uniform sampling over a given half-open interval of integers. - random_integers : Uniform sampling over a given closed interval of - integers. 
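The `(b - a) * random_sample() + a` transform described above can be exercised directly (a minimal sketch using the public numpy API):

```python
import numpy as np

rs = np.random.RandomState(0)
# Map samples from [0, 1) onto [-5, 0) with the documented transform.
s = 5 * rs.random_sample((3, 2)) - 5
assert s.shape == (3, 2)
assert (s >= -5).all() and (s < 0).all()
```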
- - Examples - -------- - >>> RS = np.random.mtrand.RandomState() # need a RandomState object - >>> RS.tomaxint((2,2,2)) - array([[[1170048599, 1600360186], - [ 739731006, 1947757578]], - [[1871712945, 752307660], - [1601631370, 1479324245]]]) - >>> import sys - >>> sys.maxint - 2147483647 - >>> RS.tomaxint((2,2,2)) < sys.maxint - array([[[ True, True], - [ True, True]], - [[ True, True], - [ True, True]]], dtype=bool) - - """ - return disc0_array(self.internal_state, rk_long, size) - - def randint(self, low, high=None, size=None): - """ - randint(low, high=None, size=None) - - Return random integers from `low` (inclusive) to `high` (exclusive). - - Return random integers from the "discrete uniform" distribution in the - "half-open" interval [`low`, `high`). If `high` is None (the default), - then results are from [0, `low`). - - Parameters - ---------- - low : int - Lowest (signed) integer to be drawn from the distribution (unless - ``high=None``, in which case this parameter is the *highest* such - integer). - high : int, optional - If provided, one above the largest (signed) integer to be drawn - from the distribution (see above for behavior if ``high=None``). - size : int or tuple of ints, optional - Output shape. Default is None, in which case a single int is - returned. - - Returns - ------- - out : int or ndarray of ints - `size`-shaped array of random integers from the appropriate - distribution, or a single such random int if `size` not provided. - - See Also - -------- - random.random_integers : similar to `randint`, only for the closed - interval [`low`, `high`], and 1 is the lowest value if `high` is - omitted. In particular, this other one is the one to use to generate - uniformly distributed discrete non-integers. 
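The half-open `[low, high)` convention of `randint` is easy to verify, e.g. by simulating die rolls (illustrative, not from the original source):

```python
import numpy as np

rs = np.random.RandomState(42)
draws = rs.randint(1, 7, size=1000)   # simulated die rolls: values 1..6
assert draws.min() >= 1
assert draws.max() <= 6               # high=7 itself is excluded (half-open)
```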
- - Examples - -------- - >>> np.random.randint(2, size=10) - array([1, 0, 0, 0, 1, 1, 0, 0, 1, 0]) - >>> np.random.randint(1, size=10) - array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) - - Generate a 2 x 4 array of ints between 0 and 4, inclusive: - - >>> np.random.randint(5, size=(2, 4)) - array([[4, 0, 2, 1], - [3, 2, 2, 0]]) - - """ - cdef long lo, hi, diff - cdef long *array_data - cdef ndarray array "arrayObject" - cdef long length - cdef long i - - if high is None: - lo = 0 - hi = low - else: - lo = low - hi = high - - diff = hi - lo - 1 - if diff < 0: - raise ValueError("low >= high") - - if size is None: - return rk_interval(diff, self.internal_state) + lo - else: - array = np.empty(size, int) - length = PyArray_SIZE(array) - array_data = array.data - for i from 0 <= i < length: - array_data[i] = lo + rk_interval(diff, self.internal_state) - return array - - def bytes(self, unsigned int length): - """ - bytes(length) - - Return random bytes. - - Parameters - ---------- - length : int - Number of random bytes. - - Returns - ------- - out : str - String of length `N`. - - Examples - -------- - >>> np.random.bytes(10) - ' eh\\x85\\x022SZ\\xbf\\xa4' #random - - """ - cdef void *bytes - bytestring = empty_py_bytes(length, &bytes) - rk_fill(bytes, length, self.internal_state) - return bytestring - - def uniform(self, low=0.0, high=1.0, size=None): - """ - uniform(low=0.0, high=1.0, size=1) - - Draw samples from a uniform distribution. - - Samples are uniformly distributed over the half-open interval - ``[low, high)`` (includes low, but excludes high). In other words, - any value within the given interval is equally likely to be drawn - by `uniform`. - - Parameters - ---------- - low : float, optional - Lower boundary of the output interval. All values generated will be - greater than or equal to low. The default value is 0. - high : float - Upper boundary of the output interval. All values generated will be - less than high. The default value is 1.0. 
- size : tuple of ints, int, optional - Shape of output. If the given size is, for example, (m,n,k), - m*n*k samples are generated. If no shape is specified, a single sample - is returned. - - Returns - ------- - out : ndarray - Drawn samples, with shape `size`. - - See Also - -------- - randint : Discrete uniform distribution, yielding integers. - random_integers : Discrete uniform distribution over the closed interval - ``[low, high]``. - random_sample : Floats uniformly distributed over ``[0, 1)``. - random : Alias for `random_sample`. - rand : Convenience function that accepts dimensions as input, e.g., - ``rand(2,2)`` would generate a 2-by-2 array of floats, uniformly - distributed over ``[0, 1)``. - - Notes - ----- - The probability density function of the uniform distribution is - - .. math:: p(x) = \\frac{1}{b - a} - - anywhere within the interval ``[a, b)``, and zero elsewhere. - - Examples - -------- - Draw samples from the distribution: - - >>> s = np.random.uniform(-1,0,1000) - - All values are within the given interval: - - >>> np.all(s >= -1) - True - - >>> np.all(s < 0) - True - - Display the histogram of the samples, along with the - probability density function: - - >>> import matplotlib.pyplot as plt - >>> count, bins, ignored = plt.hist(s, 15, normed=True) - >>> plt.plot(bins, np.ones_like(bins), linewidth=2, color='r') - >>> plt.show() - - """ - cdef ndarray olow, ohigh, odiff - cdef double flow, fhigh - cdef object temp - - flow = PyFloat_AsDouble(low) - fhigh = PyFloat_AsDouble(high) - if not PyErr_Occurred(): - return cont2_array_sc(self.internal_state, rk_uniform, size, flow, fhigh-flow) - PyErr_Clear() - olow = PyArray_FROM_OTF(low, NPY_DOUBLE, NPY_ALIGNED) - ohigh = PyArray_FROM_OTF(high, NPY_DOUBLE, NPY_ALIGNED) - temp = np.subtract(ohigh, olow) - Py_INCREF(temp) # needed to get around Pyrex's automatic reference-counting - # rules because EnsureArray steals a reference - odiff = PyArray_EnsureArray(temp) - return 
cont2_array(self.internal_state, rk_uniform, size, olow, odiff) - - def rand(self, *args): - """ - rand(d0, d1, ..., dn) - - Random values in a given shape. - - Create an array of the given shape and populate it with - random samples from a uniform distribution - over ``[0, 1)``. - - Parameters - ---------- - d0, d1, ..., dn : int - Shape of the output. - - Returns - ------- - out : ndarray, shape ``(d0, d1, ..., dn)`` - Random values. - - See Also - -------- - random - - Notes - ----- - This is a convenience function. If you want an interface that - takes a shape-tuple as the first argument, refer to - `random`. - - Examples - -------- - >>> np.random.rand(3,2) - array([[ 0.14022471, 0.96360618], #random - [ 0.37601032, 0.25528411], #random - [ 0.49313049, 0.94909878]]) #random - - """ - if len(args) == 0: - return self.random_sample() - else: - return self.random_sample(size=args) - - def randn(self, *args): - """ - randn([d1, ..., dn]) - - Return a sample (or samples) from the "standard normal" distribution. - - If positive, int_like or int-convertible arguments are provided, - `randn` generates an array of shape ``(d1, ..., dn)``, filled - with random floats sampled from a univariate "normal" (Gaussian) - distribution of mean 0 and variance 1 (if any of the :math:`d_i` are - floats, they are first converted to integers by truncation). A single - float randomly sampled from the distribution is returned if no - argument is provided. - - This is a convenience function. If you want an interface that takes a - tuple as the first argument, use `numpy.random.standard_normal` instead. - - Parameters - ---------- - d1, ..., dn : `n` ints, optional - The dimensions of the returned array, should be all positive. - - Returns - ------- - Z : ndarray or float - A ``(d1, ..., dn)``-shaped array of floating-point samples from - the standard normal distribution, or a single such float if - no parameters were supplied. 
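As the source above shows, `rand` simply forwards its arguments to `random_sample(size=args)`, so the two spellings draw from the same stream (a quick check against the public API):

```python
import numpy as np

rs = np.random.RandomState(7)
a = rs.rand(3, 2)                 # dimensions as separate arguments
rs.seed(7)
b = rs.random_sample((3, 2))      # same stream, shape given as a tuple
assert (a == b).all()
```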
- - See Also - -------- - random.standard_normal : Similar, but takes a tuple as its argument. - - Notes - ----- - For random samples from :math:`N(\\mu, \\sigma^2)`, use: - - ``sigma * np.random.randn(...) + mu`` - - Examples - -------- - >>> np.random.randn() - 2.1923875335537315 #random - - Two-by-four array of samples from N(3, 6.25): - - >>> 2.5 * np.random.randn(2, 4) + 3 - array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], #random - [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) #random - - """ - if len(args) == 0: - return self.standard_normal() - else: - return self.standard_normal(args) - - def random_integers(self, low, high=None, size=None): - """ - random_integers(low, high=None, size=None) - - Return random integers between `low` and `high`, inclusive. - - Return random integers from the "discrete uniform" distribution in the - closed interval [`low`, `high`]. If `high` is None (the default), - then results are from [1, `low`]. - - Parameters - ---------- - low : int - Lowest (signed) integer to be drawn from the distribution (unless - ``high=None``, in which case this parameter is the *highest* such - integer). - high : int, optional - If provided, the largest (signed) integer to be drawn from the - distribution (see above for behavior if ``high=None``). - size : int or tuple of ints, optional - Output shape. Default is None, in which case a single int is returned. - - Returns - ------- - out : int or ndarray of ints - `size`-shaped array of random integers from the appropriate - distribution, or a single such random int if `size` not provided. - - See Also - -------- - random.randint : Similar to `random_integers`, only for the half-open - interval [`low`, `high`), and 0 is the lowest value if `high` is - omitted. - - Notes - ----- - To sample from N evenly spaced floating-point numbers between a and b, - use:: - - a + (b - a) * (np.random.random_integers(N) - 1) / (N - 1.) 
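The Notes formula above for N evenly spaced floats can be exercised with `randint` standing in for `random_integers(N)` (the equivalent call is `randint(1, N + 1)`; illustrative sketch only):

```python
import numpy as np

rs = np.random.RandomState(1)
a, b, N = 0.0, 2.5, 5
# randint(1, N + 1) plays the role of random_integers(N) here.
k = rs.randint(1, N + 1, size=1000)
samples = a + (b - a) * (k - 1) / (N - 1.0)
# Every sample lies on the evenly spaced grid {0, 0.625, 1.25, 1.875, 2.5}.
grid = a + (b - a) * np.arange(N) / (N - 1.0)
assert np.isin(samples, grid).all()
```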
- - Examples - -------- - >>> np.random.random_integers(5) - 4 - >>> type(np.random.random_integers(5)) - - >>> np.random.random_integers(5, size=(3.,2.)) - array([[5, 4], - [3, 3], - [4, 5]]) - - Choose five random numbers from the set of five evenly-spaced - numbers between 0 and 2.5, inclusive (*i.e.*, from the set - :math:`{0, 5/8, 10/8, 15/8, 20/8}`): - - >>> 2.5 * (np.random.random_integers(5, size=(5,)) - 1) / 4. - array([ 0.625, 1.25 , 0.625, 0.625, 2.5 ]) - - Roll two six sided dice 1000 times and sum the results: - - >>> d1 = np.random.random_integers(1, 6, 1000) - >>> d2 = np.random.random_integers(1, 6, 1000) - >>> dsums = d1 + d2 - - Display results as a histogram: - - >>> import matplotlib.pyplot as plt - >>> count, bins, ignored = plt.hist(dsums, 11, normed=True) - >>> plt.show() - - """ - if high is None: - high = low - low = 1 - return self.randint(low, high+1, size) - - # Complicated, continuous distributions: - def standard_normal(self, size=None): - """ - standard_normal(size=None) - - Returns samples from a Standard Normal distribution (mean=0, stdev=1). - - Parameters - ---------- - size : int or tuple of ints, optional - Output shape. Default is None, in which case a single value is - returned. - - Returns - ------- - out : float or ndarray - Drawn samples. - - Examples - -------- - >>> s = np.random.standard_normal(8000) - >>> s - array([ 0.6888893 , 0.78096262, -0.89086505, ..., 0.49876311, #random - -0.38672696, -0.4685006 ]) #random - >>> s.shape - (8000,) - >>> s = np.random.standard_normal(size=(3, 4, 2)) - >>> s.shape - (3, 4, 2) - - """ - return cont0_array(self.internal_state, rk_gauss, size) - - def normal(self, loc=0.0, scale=1.0, size=None): - """ - normal(loc=0.0, scale=1.0, size=None) - - Draw random samples from a normal (Gaussian) distribution. 
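A quick sanity check of `standard_normal` as documented above, shape handling plus the (0, 1) moments (illustrative, with loose statistical tolerances):

```python
import numpy as np

rs = np.random.RandomState(3)
s = rs.standard_normal(size=(3, 4, 2))
assert s.shape == (3, 4, 2)

# With many samples the empirical moments approach mean 0, stdev 1.
big = rs.standard_normal(100000)
assert abs(big.mean()) < 0.05
assert abs(big.std() - 1) < 0.05
```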
- - The probability density function of the normal distribution, first - derived by De Moivre and 200 years later by both Gauss and Laplace - independently [2]_, is often called the bell curve because of - its characteristic shape (see the example below). - - The normal distribution occurs often in nature. For example, it - describes the commonly occurring distribution of samples influenced - by a large number of tiny, random disturbances, each with its own - unique distribution [2]_. - - Parameters - ---------- - loc : float - Mean ("centre") of the distribution. - scale : float - Standard deviation (spread or "width") of the distribution. - size : tuple of ints - Output shape. If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn. - - See Also - -------- - scipy.stats.distributions.norm : probability density function, - distribution or cumulative density function, etc. - - Notes - ----- - The probability density for the Gaussian distribution is - - .. math:: p(x) = \\frac{1}{\\sqrt{ 2 \\pi \\sigma^2 }} - e^{ - \\frac{ (x - \\mu)^2 } {2 \\sigma^2} }, - - where :math:`\\mu` is the mean and :math:`\\sigma` the standard deviation. - The square of the standard deviation, :math:`\\sigma^2`, is called the - variance. - - The function has its peak at the mean, and its "spread" increases with - the standard deviation (the function reaches 0.607 times its maximum at - :math:`x + \\sigma` and :math:`x - \\sigma` [2]_). This implies that - `numpy.random.normal` is more likely to return samples lying close to the - mean, rather than those far away. - - References - ---------- - .. [1] Wikipedia, "Normal distribution", - http://en.wikipedia.org/wiki/Normal_distribution - .. [2] P. R. Peebles Jr., "Central Limit Theorem" in "Probability, Random - Variables and Random Signal Principles", 4th ed., 2001, - pp. 51, 51, 125. 
-
-        Examples
-        --------
-        Draw samples from the distribution:
-
-        >>> mu, sigma = 0, 0.1 # mean and standard deviation
-        >>> s = np.random.normal(mu, sigma, 1000)
-
-        Verify the mean and the variance:
-
-        >>> abs(mu - np.mean(s)) < 0.01
-        True
-
-        >>> abs(sigma - np.std(s, ddof=1)) < 0.01
-        True
-
-        Display the histogram of the samples, along with
-        the probability density function:
-
-        >>> import matplotlib.pyplot as plt
-        >>> count, bins, ignored = plt.hist(s, 30, normed=True)
-        >>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
-        ...          np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
-        ...          linewidth=2, color='r')
-        >>> plt.show()
-
-        """
-        cdef ndarray oloc, oscale
-        cdef double floc, fscale
-
-        floc = PyFloat_AsDouble(loc)
-        fscale = PyFloat_AsDouble(scale)
-        if not PyErr_Occurred():
-            if fscale <= 0:
-                raise ValueError("scale <= 0")
-            return cont2_array_sc(self.internal_state, rk_normal, size, floc, fscale)
-
-        PyErr_Clear()
-
-        oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED)
-        oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED)
-        if np.any(np.less_equal(oscale, 0)):
-            raise ValueError("scale <= 0")
-        return cont2_array(self.internal_state, rk_normal, size, oloc, oscale)
-
-    def beta(self, a, b, size=None):
-        """
-        beta(a, b, size=None)
-
-        The Beta distribution over ``[0, 1]``.
-
-        The Beta distribution is a special case of the Dirichlet distribution,
-        and is related to the Gamma distribution.  It has the probability
-        distribution function
-
-        .. math:: f(x; a,b) = \\frac{1}{B(\\alpha, \\beta)} x^{\\alpha - 1}
-                              (1 - x)^{\\beta - 1},
-
-        where the normalisation, B, is the beta function,
-
-        .. math:: B(\\alpha, \\beta) = \\int_0^1 t^{\\alpha - 1}
-                  (1 - t)^{\\beta - 1} dt.
-
-        It is often seen in Bayesian inference and order statistics.
-
-        Parameters
-        ----------
-        a : float
-            Alpha, positive (> 0).
-        b : float
-            Beta, positive (> 0).
-        size : tuple of ints, optional
-            The number of samples to draw.  The output is packed according to
-            the size given.
- - Returns - ------- - out : ndarray - Array of the given shape, containing values drawn from a - Beta distribution. - - """ - cdef ndarray oa, ob - cdef double fa, fb - - fa = PyFloat_AsDouble(a) - fb = PyFloat_AsDouble(b) - if not PyErr_Occurred(): - if fa <= 0: - raise ValueError("a <= 0") - if fb <= 0: - raise ValueError("b <= 0") - return cont2_array_sc(self.internal_state, rk_beta, size, fa, fb) - - PyErr_Clear() - - oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - ob = PyArray_FROM_OTF(b, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(oa, 0)): - raise ValueError("a <= 0") - if np.any(np.less_equal(ob, 0)): - raise ValueError("b <= 0") - return cont2_array(self.internal_state, rk_beta, size, oa, ob) - - def exponential(self, scale=1.0, size=None): - """ - exponential(scale=1.0, size=None) - - Exponential distribution. - - Its probability density function is - - .. math:: f(x; \\frac{1}{\\beta}) = \\frac{1}{\\beta} \\exp(-\\frac{x}{\\beta}), - - for ``x > 0`` and 0 elsewhere. :math:`\\beta` is the scale parameter, - which is the inverse of the rate parameter :math:`\\lambda = 1/\\beta`. - The rate parameter is an alternative, widely used parameterization - of the exponential distribution [3]_. - - The exponential distribution is a continuous analogue of the - geometric distribution. It describes many common situations, such as - the size of raindrops measured over many rainstorms [1]_, or the time - between page requests to Wikipedia [2]_. - - Parameters - ---------- - scale : float - The scale parameter, :math:`\\beta = 1/\\lambda`. - size : tuple of ints - Number of samples to draw. The output is shaped - according to `size`. - - References - ---------- - .. [1] Peyton Z. Peebles Jr., "Probability, Random Variables and - Random Signal Principles", 4th ed, 2001, p. 57. - .. [2] "Poisson Process", Wikipedia, - http://en.wikipedia.org/wiki/Poisson_process - .. 
[3] "Exponential Distribution, Wikipedia, - http://en.wikipedia.org/wiki/Exponential_distribution - - """ - cdef ndarray oscale - cdef double fscale - - fscale = PyFloat_AsDouble(scale) - if not PyErr_Occurred(): - if fscale <= 0: - raise ValueError("scale <= 0") - return cont1_array_sc(self.internal_state, rk_exponential, size, fscale) - - PyErr_Clear() - - oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(oscale, 0.0)): - raise ValueError("scale <= 0") - return cont1_array(self.internal_state, rk_exponential, size, oscale) - - def standard_exponential(self, size=None): - """ - standard_exponential(size=None) - - Draw samples from the standard exponential distribution. - - `standard_exponential` is identical to the exponential distribution - with a scale parameter of 1. - - Parameters - ---------- - size : int or tuple of ints - Shape of the output. - - Returns - ------- - out : float or ndarray - Drawn samples. - - Examples - -------- - Output a 3x8000 array: - - >>> n = np.random.standard_exponential((3, 8000)) - - """ - return cont0_array(self.internal_state, rk_standard_exponential, size) - - def standard_gamma(self, shape, size=None): - """ - standard_gamma(shape, size=None) - - Draw samples from a Standard Gamma distribution. - - Samples are drawn from a Gamma distribution with specified parameters, - shape (sometimes designated "k") and scale=1. - - Parameters - ---------- - shape : float - Parameter, should be > 0. - size : int or tuple of ints - Output shape. If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn. - - Returns - ------- - samples : ndarray or scalar - The drawn samples. - - See Also - -------- - scipy.stats.distributions.gamma : probability density function, - distribution or cumulative density function, etc. - - Notes - ----- - The probability density for the Gamma distribution is - - .. 
math:: p(x) = x^{k-1}\\frac{e^{-x/\\theta}}{\\theta^k\\Gamma(k)}, - - where :math:`k` is the shape and :math:`\\theta` the scale, - and :math:`\\Gamma` is the Gamma function. - - The Gamma distribution is often used to model the times to failure of - electronic components, and arises naturally in processes for which the - waiting times between Poisson distributed events are relevant. - - References - ---------- - .. [1] Weisstein, Eric W. "Gamma Distribution." From MathWorld--A - Wolfram Web Resource. - http://mathworld.wolfram.com/GammaDistribution.html - .. [2] Wikipedia, "Gamma-distribution", - http://en.wikipedia.org/wiki/Gamma-distribution - - Examples - -------- - Draw samples from the distribution: - - >>> shape, scale = 2., 1. # mean and width - >>> s = np.random.standard_gamma(shape, 1000000) - - Display the histogram of the samples, along with - the probability density function: - - >>> import matplotlib.pyplot as plt - >>> import scipy.special as sps - >>> count, bins, ignored = plt.hist(s, 50, normed=True) - >>> y = bins**(shape-1) * ((np.exp(-bins/scale))/ \\ - ... (sps.gamma(shape) * scale**shape)) - >>> plt.plot(bins, y, linewidth=2, color='r') - >>> plt.show() - - """ - cdef ndarray oshape - cdef double fshape - - fshape = PyFloat_AsDouble(shape) - if not PyErr_Occurred(): - if fshape <= 0: - raise ValueError("shape <= 0") - return cont1_array_sc(self.internal_state, rk_standard_gamma, size, fshape) - - PyErr_Clear() - oshape = PyArray_FROM_OTF(shape, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(oshape, 0.0)): - raise ValueError("shape <= 0") - return cont1_array(self.internal_state, rk_standard_gamma, size, oshape) - - def gamma(self, shape, scale=1.0, size=None): - """ - gamma(shape, scale=1.0, size=None) - - Draw samples from a Gamma distribution. - - Samples are drawn from a Gamma distribution with specified parameters, - `shape` (sometimes designated "k") and `scale` (sometimes designated - "theta"), where both parameters are > 0. 
- - Parameters - ---------- - shape : scalar > 0 - The shape of the gamma distribution. - scale : scalar > 0, optional - The scale of the gamma distribution. Default is equal to 1. - size : shape_tuple, optional - Output shape. If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn. - - Returns - ------- - out : ndarray, float - Returns one sample unless `size` parameter is specified. - - See Also - -------- - scipy.stats.distributions.gamma : probability density function, - distribution or cumulative density function, etc. - - Notes - ----- - The probability density for the Gamma distribution is - - .. math:: p(x) = x^{k-1}\\frac{e^{-x/\\theta}}{\\theta^k\\Gamma(k)}, - - where :math:`k` is the shape and :math:`\\theta` the scale, - and :math:`\\Gamma` is the Gamma function. - - The Gamma distribution is often used to model the times to failure of - electronic components, and arises naturally in processes for which the - waiting times between Poisson distributed events are relevant. - - References - ---------- - .. [1] Weisstein, Eric W. "Gamma Distribution." From MathWorld--A - Wolfram Web Resource. - http://mathworld.wolfram.com/GammaDistribution.html - .. [2] Wikipedia, "Gamma-distribution", - http://en.wikipedia.org/wiki/Gamma-distribution - - Examples - -------- - Draw samples from the distribution: - - >>> shape, scale = 2., 2. # mean and dispersion - >>> s = np.random.gamma(shape, scale, 1000) - - Display the histogram of the samples, along with - the probability density function: - - >>> import matplotlib.pyplot as plt - >>> import scipy.special as sps - >>> count, bins, ignored = plt.hist(s, 50, normed=True) - >>> y = bins**(shape-1)*(np.exp(-bins/scale) / - ... 
(sps.gamma(shape)*scale**shape)) - >>> plt.plot(bins, y, linewidth=2, color='r') - >>> plt.show() - - """ - cdef ndarray oshape, oscale - cdef double fshape, fscale - - fshape = PyFloat_AsDouble(shape) - fscale = PyFloat_AsDouble(scale) - if not PyErr_Occurred(): - if fshape <= 0: - raise ValueError("shape <= 0") - if fscale <= 0: - raise ValueError("scale <= 0") - return cont2_array_sc(self.internal_state, rk_gamma, size, fshape, fscale) - - PyErr_Clear() - oshape = PyArray_FROM_OTF(shape, NPY_DOUBLE, NPY_ALIGNED) - oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(oshape, 0.0)): - raise ValueError("shape <= 0") - if np.any(np.less_equal(oscale, 0.0)): - raise ValueError("scale <= 0") - return cont2_array(self.internal_state, rk_gamma, size, oshape, oscale) - - def f(self, dfnum, dfden, size=None): - """ - f(dfnum, dfden, size=None) - - Draw samples from a F distribution. - - Samples are drawn from an F distribution with specified parameters, - `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of freedom - in denominator), where both parameters should be greater than zero. - - The random variate of the F distribution (also known as the - Fisher distribution) is a continuous probability distribution - that arises in ANOVA tests, and is the ratio of two chi-square - variates. - - Parameters - ---------- - dfnum : float - Degrees of freedom in numerator. Should be greater than zero. - dfden : float - Degrees of freedom in denominator. Should be greater than zero. - size : {tuple, int}, optional - Output shape. If the given shape is, e.g., ``(m, n, k)``, - then ``m * n * k`` samples are drawn. By default only one sample - is returned. - - Returns - ------- - samples : {ndarray, scalar} - Samples from the Fisher distribution. - - See Also - -------- - scipy.stats.distributions.f : probability density function, - distribution or cumulative density function, etc. 
-
-        Notes
-        -----
-        The F statistic is used to compare in-group variances to between-group
-        variances.  Calculating the distribution depends on the sampling, and
-        so it is a function of the respective degrees of freedom in the
-        problem.  The variable `dfnum` is the number of samples minus one, the
-        between-groups degrees of freedom, while `dfden` is the within-groups
-        degrees of freedom, the sum of the number of samples in each group
-        minus the number of groups.
-
-        References
-        ----------
-        .. [1] Glantz, Stanton A. "Primer of Biostatistics.", McGraw-Hill,
-               Fifth Edition, 2002.
-        .. [2] Wikipedia, "F-distribution",
-               http://en.wikipedia.org/wiki/F-distribution
-
-        Examples
-        --------
-        An example from Glantz[1], pp 47-40.
-        Two groups, children of diabetics (25 people) and children from people
-        without diabetes (25 controls).  Fasting blood glucose was measured,
-        case group had a mean value of 86.1, controls had a mean value of
-        82.2.  Standard deviations were 2.09 and 2.49 respectively.  Are these
-        data consistent with the null hypothesis that the parents' diabetic
-        status does not affect their children's blood glucose levels?
-        Calculating the F statistic from the data gives a value of 36.01.
-
-        Draw samples from the distribution:
-
-        >>> dfnum = 1. # between group degrees of freedom
-        >>> dfden = 48. # within groups degrees of freedom
-        >>> s = np.random.f(dfnum, dfden, 1000)
-
-        The lower bound for the top 1% of the samples is:
-
-        >>> np.sort(s)[-10]
-        7.61988120985
-
-        So there is about a 1% chance that the F statistic will exceed 7.62,
-        the measured value is 36, so the null hypothesis is rejected at the 1%
-        level.
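The rejection argument above can also be read directly off the samples: the fraction of draws exceeding the observed F statistic estimates the p-value. A minimal sketch (36.01 is the statistic quoted above; the exact fractions vary run to run):

```python
import numpy as np

dfnum, dfden = 1., 48.
s = np.random.f(dfnum, dfden, 100000)

# Empirical 1% critical value: the cutoff exceeded by 1% of the draws.
crit = np.sort(s)[-1000]

# Estimated p-value for the observed statistic of 36.01: the fraction
# of draws at least that large.  It is essentially zero here, so the
# null hypothesis is rejected at the 1% level.
p = np.mean(s >= 36.01)
print(crit, p)
```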
-
-        """
-        cdef ndarray odfnum, odfden
-        cdef double fdfnum, fdfden
-
-        fdfnum = PyFloat_AsDouble(dfnum)
-        fdfden = PyFloat_AsDouble(dfden)
-        if not PyErr_Occurred():
-            if fdfnum <= 0:
-                raise ValueError("dfnum <= 0")
-            if fdfden <= 0:
-                raise ValueError("dfden <= 0")
-            return cont2_array_sc(self.internal_state, rk_f, size, fdfnum, fdfden)
-
-        PyErr_Clear()
-
-        odfnum = PyArray_FROM_OTF(dfnum, NPY_DOUBLE, NPY_ALIGNED)
-        odfden = PyArray_FROM_OTF(dfden, NPY_DOUBLE, NPY_ALIGNED)
-        if np.any(np.less_equal(odfnum, 0.0)):
-            raise ValueError("dfnum <= 0")
-        if np.any(np.less_equal(odfden, 0.0)):
-            raise ValueError("dfden <= 0")
-        return cont2_array(self.internal_state, rk_f, size, odfnum, odfden)
-
-    def noncentral_f(self, dfnum, dfden, nonc, size=None):
-        """
-        noncentral_f(dfnum, dfden, nonc, size=None)
-
-        Draw samples from the noncentral F distribution.
-
-        Samples are drawn from an F distribution with specified parameters,
-        `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of
-        freedom in denominator), where both parameters > 1.
-        `nonc` is the non-centrality parameter.
-
-        Parameters
-        ----------
-        dfnum : int
-            Parameter, should be > 1.
-        dfden : int
-            Parameter, should be > 1.
-        nonc : float
-            Parameter, should be >= 0.
-        size : int or tuple of ints
-            Output shape.  If the given shape is, e.g., ``(m, n, k)``, then
-            ``m * n * k`` samples are drawn.
-
-        Returns
-        -------
-        samples : scalar or ndarray
-            Drawn samples.
-
-        Notes
-        -----
-        When calculating the power of an experiment (power = probability of
-        rejecting the null hypothesis when a specific alternative is true) the
-        non-central F statistic becomes important.  When the null hypothesis is
-        true, the F statistic follows a central F distribution.  When the null
-        hypothesis is not true, then it follows a non-central F statistic.
-
-        References
-        ----------
-        Weisstein, Eric W. "Noncentral F-Distribution."  From MathWorld--A Wolfram
-        Web Resource.
http://mathworld.wolfram.com/NoncentralF-Distribution.html - - Wikipedia, "Noncentral F distribution", - http://en.wikipedia.org/wiki/Noncentral_F-distribution - - Examples - -------- - In a study, testing for a specific alternative to the null hypothesis - requires use of the Noncentral F distribution. We need to calculate the - area in the tail of the distribution that exceeds the value of the F - distribution for the null hypothesis. We'll plot the two probability - distributions for comparison. - - >>> dfnum = 3 # between group deg of freedom - >>> dfden = 20 # within groups degrees of freedom - >>> nonc = 3.0 - >>> nc_vals = np.random.noncentral_f(dfnum, dfden, nonc, 1000000) - >>> NF = np.histogram(nc_vals, bins=50, normed=True) - >>> c_vals = np.random.f(dfnum, dfden, 1000000) - >>> F = np.histogram(c_vals, bins=50, normed=True) - >>> plt.plot(F[1][1:], F[0]) - >>> plt.plot(NF[1][1:], NF[0]) - >>> plt.show() - - """ - cdef ndarray odfnum, odfden, ononc - cdef double fdfnum, fdfden, fnonc - - fdfnum = PyFloat_AsDouble(dfnum) - fdfden = PyFloat_AsDouble(dfden) - fnonc = PyFloat_AsDouble(nonc) - if not PyErr_Occurred(): - if fdfnum <= 1: - raise ValueError("dfnum <= 1") - if fdfden <= 0: - raise ValueError("dfden <= 0") - if fnonc < 0: - raise ValueError("nonc < 0") - return cont3_array_sc(self.internal_state, rk_noncentral_f, size, - fdfnum, fdfden, fnonc) - - PyErr_Clear() - - odfnum = PyArray_FROM_OTF(dfnum, NPY_DOUBLE, NPY_ALIGNED) - odfden = PyArray_FROM_OTF(dfden, NPY_DOUBLE, NPY_ALIGNED) - ononc = PyArray_FROM_OTF(nonc, NPY_DOUBLE, NPY_ALIGNED) - - if np.any(np.less_equal(odfnum, 1.0)): - raise ValueError("dfnum <= 1") - if np.any(np.less_equal(odfden, 0.0)): - raise ValueError("dfden <= 0") - if np.any(np.less(ononc, 0.0)): - raise ValueError("nonc < 0") - return cont3_array(self.internal_state, rk_noncentral_f, size, odfnum, - odfden, ononc) - - def chisquare(self, df, size=None): - """ - chisquare(df, size=None) - - Draw samples from a chi-square 
distribution.
-
-        When `df` independent random variables, each with standard normal
-        distributions (mean 0, variance 1), are squared and summed, the
-        resulting distribution is chi-square (see Notes).  This distribution
-        is often used in hypothesis testing.
-
-        Parameters
-        ----------
-        df : int
-            Number of degrees of freedom.
-        size : tuple of ints, int, optional
-            Size of the returned array.  By default, a scalar is
-            returned.
-
-        Returns
-        -------
-        output : ndarray
-            Samples drawn from the distribution, packed in a `size`-shaped
-            array.
-
-        Raises
-        ------
-        ValueError
-            When `df` <= 0 or when an inappropriate `size` (e.g. ``size=-1``)
-            is given.
-
-        Notes
-        -----
-        The variable obtained by summing the squares of `df` independent,
-        standard normally distributed random variables:
-
-        .. math:: Q = \\sum_{i=1}^{\\mathtt{df}} X^2_i
-
-        is chi-square distributed, denoted
-
-        .. math:: Q \\sim \\chi^2_k.
-
-        The probability density function of the chi-squared distribution is
-
-        .. math:: p(x) = \\frac{(1/2)^{k/2}}{\\Gamma(k/2)}
-                         x^{k/2 - 1} e^{-x/2},
-
-        where :math:`\\Gamma` is the gamma function,
-
-        .. math:: \\Gamma(x) = \\int_0^{\\infty} t^{x - 1} e^{-t} dt.
-
-        References
-        ----------
-        `NIST/SEMATECH e-Handbook of Statistical Methods
-        <http://www.itl.nist.gov/div898/handbook/>`_
-
-        Examples
-        --------
-        >>> np.random.chisquare(2,4)
-        array([ 1.89920014,  9.00867716,  3.13710533,  5.62318272])
-
-        """
-        cdef ndarray odf
-        cdef double fdf
-
-        fdf = PyFloat_AsDouble(df)
-        if not PyErr_Occurred():
-            if fdf <= 0:
-                raise ValueError("df <= 0")
-            return cont1_array_sc(self.internal_state, rk_chisquare, size, fdf)
-
-        PyErr_Clear()
-
-        odf = PyArray_FROM_OTF(df, NPY_DOUBLE, NPY_ALIGNED)
-        if np.any(np.less_equal(odf, 0.0)):
-            raise ValueError("df <= 0")
-        return cont1_array(self.internal_state, rk_chisquare, size, odf)
-
-    def noncentral_chisquare(self, df, nonc, size=None):
-        """
-        noncentral_chisquare(df, nonc, size=None)
-
-        Draw samples from a noncentral chi-square distribution.
- - The noncentral :math:`\\chi^2` distribution is a generalisation of - the :math:`\\chi^2` distribution. - - Parameters - ---------- - df : int - Degrees of freedom, should be >= 1. - nonc : float - Non-centrality, should be > 0. - size : int or tuple of ints - Shape of the output. - - Notes - ----- - The probability density function for the noncentral Chi-square distribution - is - - .. math:: P(x;df,nonc) = \\sum^{\\infty}_{i=0} - \\frac{e^{-nonc/2}(nonc/2)^{i}}{i!}P_{Y_{df+2i}}(x), - - where :math:`Y_{q}` is the Chi-square with q degrees of freedom. - - In Delhi (2007), it is noted that the noncentral chi-square is useful in - bombing and coverage problems, the probability of killing the point target - given by the noncentral chi-squared distribution. - - References - ---------- - .. [1] Delhi, M.S. Holla, "On a noncentral chi-square distribution in the - analysis of weapon systems effectiveness", Metrika, Volume 15, - Number 1 / December, 1970. - .. [2] Wikipedia, "Noncentral chi-square distribution" - http://en.wikipedia.org/wiki/Noncentral_chi-square_distribution - - Examples - -------- - Draw values from the distribution and plot the histogram - - >>> import matplotlib.pyplot as plt - >>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000), - ... bins=200, normed=True) - >>> plt.show() - - Draw values from a noncentral chisquare with very small noncentrality, - and compare to a chisquare. - - >>> plt.figure() - >>> values = plt.hist(np.random.noncentral_chisquare(3, .0000001, 100000), - ... bins=np.arange(0., 25, .1), normed=True) - >>> values2 = plt.hist(np.random.chisquare(3, 100000), - ... bins=np.arange(0., 25, .1), normed=True) - >>> plt.plot(values[1][0:-1], values[0]-values2[0], 'ob') - >>> plt.show() - - Demonstrate how large values of non-centrality lead to a more symmetric - distribution. - - >>> plt.figure() - >>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000), - ... 
bins=200, normed=True)
-        >>> plt.show()
-
-        """
-        cdef ndarray odf, ononc
-        cdef double fdf, fnonc
-        fdf = PyFloat_AsDouble(df)
-        fnonc = PyFloat_AsDouble(nonc)
-        if not PyErr_Occurred():
-            if fdf <= 1:
-                raise ValueError("df <= 1")
-            if fnonc <= 0:
-                raise ValueError("nonc <= 0")
-            return cont2_array_sc(self.internal_state, rk_noncentral_chisquare,
-                                  size, fdf, fnonc)
-
-        PyErr_Clear()
-
-        odf = PyArray_FROM_OTF(df, NPY_DOUBLE, NPY_ALIGNED)
-        ononc = PyArray_FROM_OTF(nonc, NPY_DOUBLE, NPY_ALIGNED)
-        if np.any(np.less_equal(odf, 1.0)):
-            raise ValueError("df <= 1")
-        if np.any(np.less_equal(ononc, 0.0)):
-            raise ValueError("nonc <= 0")
-        return cont2_array(self.internal_state, rk_noncentral_chisquare, size,
-                           odf, ononc)
-
-    def standard_cauchy(self, size=None):
-        """
-        standard_cauchy(size=None)
-
-        Standard Cauchy distribution with mode = 0.
-
-        Also known as the Lorentz distribution.
-
-        Parameters
-        ----------
-        size : int or tuple of ints
-            Shape of the output.
-
-        Returns
-        -------
-        samples : ndarray or scalar
-            The drawn samples.
-
-        Notes
-        -----
-        The probability density function for the full Cauchy distribution is
-
-        .. math:: P(x; x_0, \\gamma) = \\frac{1}{\\pi \\gamma \\bigl[ 1+
-                  (\\frac{x-x_0}{\\gamma})^2 \\bigr] }
-
-        and the Standard Cauchy distribution just sets :math:`x_0=0` and
-        :math:`\\gamma=1`.
-
-        The Cauchy distribution arises in the solution to the driven harmonic
-        oscillator problem, and also describes spectral line broadening.  It
-        also describes the distribution of values at which a line tilted at
-        a random angle will cut the x axis.
-
-        When studying hypothesis tests that assume normality, seeing how the
-        tests perform on data from a Cauchy distribution is a good indicator of
-        their sensitivity to a heavy-tailed distribution, since the Cauchy looks
-        very much like a Gaussian distribution, but with heavier tails.
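The heavier tails described above are easy to see numerically. A minimal sketch comparing tail mass beyond |x| > 5 for Cauchy and Gaussian samples (the cutoff of 5 is an arbitrary choice):

```python
import numpy as np

c = np.random.standard_cauchy(100000)
g = np.random.standard_normal(100000)

# Fraction of draws far out in the tails.  For the standard Cauchy
# this is about 2/(5*pi) ~ 0.13, while for the Gaussian it is
# negligible (of order 1e-6).
tail_c = np.mean(np.abs(c) > 5.0)
tail_g = np.mean(np.abs(g) > 5.0)
print(tail_c, tail_g)
```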
-
-        References
-        ----------
-        .. [1] NIST/SEMATECH e-Handbook of Statistical Methods, "Cauchy
-               Distribution",
-               http://www.itl.nist.gov/div898/handbook/eda/section3/eda3663.htm
-        .. [2] Weisstein, Eric W. "Cauchy Distribution." From MathWorld--A
-               Wolfram Web Resource.
-               http://mathworld.wolfram.com/CauchyDistribution.html
-        .. [3] Wikipedia, "Cauchy distribution"
-               http://en.wikipedia.org/wiki/Cauchy_distribution
-
-        Examples
-        --------
-        Draw samples and plot the distribution:
-
-        >>> import matplotlib.pyplot as plt
-        >>> s = np.random.standard_cauchy(1000000)
-        >>> s = s[(s>-25) & (s<25)]  # truncate distribution so it plots well
-        >>> plt.hist(s, bins=100)
-        >>> plt.show()
-
-        """
-        return cont0_array(self.internal_state, rk_standard_cauchy, size)
-
-    def standard_t(self, df, size=None):
-        """
-        standard_t(df, size=None)
-
-        Standard Student's t distribution with df degrees of freedom.
-
-        A special case of the hyperbolic distribution.
-        As `df` gets large, the result resembles that of the standard normal
-        distribution (`standard_normal`).
-
-        Parameters
-        ----------
-        df : int
-            Degrees of freedom, should be > 0.
-        size : int or tuple of ints, optional
-            Output shape.  Default is None, in which case a single value is
-            returned.
-
-        Returns
-        -------
-        samples : ndarray or scalar
-            Drawn samples.
-
-        Notes
-        -----
-        The probability density function for the t distribution is
-
-        .. math:: P(x, df) = \\frac{\\Gamma(\\frac{df+1}{2})}{\\sqrt{\\pi df}
-                  \\Gamma(\\frac{df}{2})}\\Bigl( 1+\\frac{x^2}{df} \\Bigr)^{-(df+1)/2}
-
-        The t test is based on an assumption that the data come from a Normal
-        distribution.  The t test provides a way to test whether the sample mean
-        (that is, the mean calculated from the data) is a good estimate of the
-        true mean.
-
-        The derivation of the t-distribution was first published in 1908 by
-        William Gosset while working for the Guinness Brewery in Dublin.  Due to
-        proprietary issues, he had to publish under a pseudonym, and so he used
-        the name Student.
-
-        References
-        ----------
-        .. [1] Dalgaard, Peter, "Introductory Statistics With R",
-               Springer, 2002.
-        .. [2] Wikipedia, "Student's t-distribution"
-               http://en.wikipedia.org/wiki/Student's_t-distribution
-
-        Examples
-        --------
-        From Dalgaard page 83 [1]_, suppose the daily energy intake for 11
-        women in kJ is:
-
-        >>> intake = np.array([5260., 5470, 5640, 6180, 6390, 6515, 6805, 7515, \\
-        ...                    7515, 8230, 8770])
-
-        Does their energy intake deviate systematically from the recommended
-        value of 7725 kJ?
-
-        We have 10 degrees of freedom, so is the sample mean within 95% of the
-        recommended value?
-
-        >>> s = np.random.standard_t(10, size=100000)
-        >>> np.mean(intake)
-        6753.636363636364
-        >>> intake.std(ddof=1)
-        1142.1232221373727
-
-        Calculate the t statistic, setting the ddof parameter to the unbiased
-        value so the divisor in the standard deviation will be degrees of
-        freedom, N-1.
-
-        >>> t = (np.mean(intake)-7725)/(intake.std(ddof=1)/np.sqrt(len(intake)))
-        >>> import matplotlib.pyplot as plt
-        >>> h = plt.hist(s, bins=100, normed=True)
-
-        For a one-sided t-test, how far out in the distribution does the t
-        statistic appear?
-
-        >>> np.sum(s < t) / float(len(s))
-        0.0090699999999999999  #random
-
-        So the p-value is about 0.009, which says the null hypothesis has a
-        probability of about 99% of being true.
-
-        """
-        cdef ndarray odf
-        cdef double fdf
-
-        fdf = PyFloat_AsDouble(df)
-        if not PyErr_Occurred():
-            if fdf <= 0:
-                raise ValueError("df <= 0")
-            return cont1_array_sc(self.internal_state, rk_standard_t, size, fdf)
-
-        PyErr_Clear()
-
-        odf = PyArray_FROM_OTF(df, NPY_DOUBLE, NPY_ALIGNED)
-        if np.any(np.less_equal(odf, 0.0)):
-            raise ValueError("df <= 0")
-        return cont1_array(self.internal_state, rk_standard_t, size, odf)
-
-    def vonmises(self, mu, kappa, size=None):
-        """
-        vonmises(mu, kappa, size=None)
-
-        Draw samples from a von Mises distribution.
-
-        Samples are drawn from a von Mises distribution with specified mode
-        (mu) and dispersion (kappa), on the interval [-pi, pi].
-
-        The von Mises distribution (also known as the circular normal
-        distribution) is a continuous probability distribution on the unit
-        circle.  It may be thought of as the circular analogue of the normal
-        distribution.
-
-        Parameters
-        ----------
-        mu : float
-            Mode ("center") of the distribution.
-        kappa : float
-            Dispersion of the distribution, has to be >=0.
- size : int or tuple of int - Output shape. If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn. - - Returns - ------- - samples : scalar or ndarray - The returned samples, which are in the interval [-pi, pi]. - - See Also - -------- - scipy.stats.distributions.vonmises : probability density function, - distribution, or cumulative density function, etc. - - Notes - ----- - The probability density for the von Mises distribution is - - .. math:: p(x) = \\frac{e^{\\kappa cos(x-\\mu)}}{2\\pi I_0(\\kappa)}, - - where :math:`\\mu` is the mode and :math:`\\kappa` the dispersion, - and :math:`I_0(\\kappa)` is the modified Bessel function of order 0. - - The von Mises is named for Richard Edler von Mises, who was born in - Austria-Hungary, in what is now the Ukraine. He fled to the United - States in 1939 and became a professor at Harvard. He worked in - probability theory, aerodynamics, fluid mechanics, and philosophy of - science. - - References - ---------- - Abramowitz, M. and Stegun, I. A. (ed.), *Handbook of Mathematical - Functions*, New York: Dover, 1965. - - von Mises, R., *Mathematical Theory of Probability and Statistics*, - New York: Academic Press, 1964. - - Examples - -------- - Draw samples from the distribution: - - >>> mu, kappa = 0.0, 4.0 # mean and dispersion - >>> s = np.random.vonmises(mu, kappa, 1000) - - Display the histogram of the samples, along with - the probability density function: - - >>> import matplotlib.pyplot as plt - >>> import scipy.special as sps - >>> count, bins, ignored = plt.hist(s, 50, normed=True) - >>> x = np.arange(-np.pi, np.pi, 2*np.pi/50.) 
-        >>> y = np.exp(kappa*np.cos(x-mu))/(2*np.pi*sps.jn(0,kappa))
-        >>> plt.plot(x, y/max(y), linewidth=2, color='r')
-        >>> plt.show()
-
-        """
-        cdef ndarray omu, okappa
-        cdef double fmu, fkappa
-
-        fmu = PyFloat_AsDouble(mu)
-        fkappa = PyFloat_AsDouble(kappa)
-        if not PyErr_Occurred():
-            if fkappa < 0:
-                raise ValueError("kappa < 0")
-            return cont2_array_sc(self.internal_state, rk_vonmises, size, fmu, fkappa)
-
-        PyErr_Clear()
-
-        omu = PyArray_FROM_OTF(mu, NPY_DOUBLE, NPY_ALIGNED)
-        okappa = PyArray_FROM_OTF(kappa, NPY_DOUBLE, NPY_ALIGNED)
-        if np.any(np.less(okappa, 0.0)):
-            raise ValueError("kappa < 0")
-        return cont2_array(self.internal_state, rk_vonmises, size, omu, okappa)
-
-    def pareto(self, a, size=None):
-        """
-        pareto(a, size=None)
-
-        Draw samples from a Pareto distribution with specified shape.
-
-        This is a simplified version of the Generalized Pareto distribution
-        (available in SciPy), with the scale set to one and the location set to
-        zero.  Most authors default the location to one.
-
-        Samples from the Pareto distribution are greater than zero, and the
-        distribution is unbounded above.  It is also known as the "80-20 rule".
-        In this distribution, 80 percent of the weights are in the lowest 20
-        percent of the range, while the other 20 percent fill the remaining 80
-        percent of the range.
-
-        Parameters
-        ----------
-        a : float, > 0
-            Shape of the distribution.
-        size : tuple of ints
-            Output shape.  If the given shape is, e.g., ``(m, n, k)``, then
-            ``m * n * k`` samples are drawn.
-
-        See Also
-        --------
-        scipy.stats.distributions.genpareto.pdf : probability density function,
-            distribution or cumulative density function, etc.
-
-        Notes
-        -----
-        The probability density for the Pareto distribution is
-
-        .. math:: p(x) = \\frac{am^a}{x^{a+1}}
-
-        where :math:`a` is the shape and :math:`m` the location.
-
-        The Pareto distribution, named after the Italian economist Vilfredo Pareto,
-        is a power law probability distribution useful in many real world problems.
- Outside the field of economics it is generally referred to as the Bradford - distribution. Pareto developed the distribution to describe the - distribution of wealth in an economy. It has also found use in insurance, - web page access statistics, oil field sizes, and many other problems, - including the download frequency for projects in Sourceforge [1]. It is - one of the so-called "fat-tailed" distributions. - - - References - ---------- - .. [1] Francis Hunt and Paul Johnson, On the Pareto Distribution of - Sourceforge projects. - .. [2] Pareto, V. (1896). Course of Political Economy. Lausanne. - .. [3] Reiss, R.D., Thomas, M.(2001), Statistical Analysis of Extreme - Values, Birkhauser Verlag, Basel, pp 23-30. - .. [4] Wikipedia, "Pareto distribution", - http://en.wikipedia.org/wiki/Pareto_distribution - - Examples - -------- - Draw samples from the distribution: - - >>> a, m = 3., 1. # shape and mode - >>> s = np.random.pareto(a, 1000) + m - - Display the histogram of the samples, along with - the probability density function: - - >>> import matplotlib.pyplot as plt - >>> count, bins, ignored = plt.hist(s, 100, normed=True, align='center') - >>> fit = a*m**a/bins**(a+1) - >>> plt.plot(bins, max(count)*fit/max(fit),linewidth=2, color='r') - >>> plt.show() - - """ - cdef ndarray oa - cdef double fa - - fa = PyFloat_AsDouble(a) - if not PyErr_Occurred(): - if fa <= 0: - raise ValueError("a <= 0") - return cont1_array_sc(self.internal_state, rk_pareto, size, fa) - - PyErr_Clear() - - oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(oa, 0.0)): - raise ValueError("a <= 0") - return cont1_array(self.internal_state, rk_pareto, size, oa) - - def weibull(self, a, size=None): - """ - weibull(a, size=None) - - Weibull distribution. - - Draw samples from a 1-parameter Weibull distribution with the given - shape parameter. - - .. math:: X = (-ln(U))^{1/a} - - Here, U is drawn from the uniform distribution over (0,1]. 
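The inverse-transform relation :math:`X = (-ln(U))^{1/a}` quoted above can be checked directly against the generator. A minimal sketch (``1 - u`` maps the uniform draw from [0, 1) onto (0, 1] so the log is finite):

```python
import numpy as np

a = 5.
u = np.random.uniform(size=100000)       # U uniform on [0, 1)
x = (-np.log(1.0 - u))**(1./a)           # X = (-ln U)**(1/a), U on (0, 1]

# Compare against the built-in Weibull generator; for a = 5 both
# sample means should be close to Gamma(1 + 1/5) ~ 0.918.
w = np.random.weibull(a, 100000)
print(x.mean(), w.mean())
```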
- - The more common 2-parameter Weibull, including a scale parameter - :math:`\\lambda` is just :math:`X = \\lambda(-ln(U))^{1/a}`. - - The Weibull (or Type III asymptotic extreme value distribution for smallest - values, SEV Type III, or Rosin-Rammler distribution) is one of a class of - Generalized Extreme Value (GEV) distributions used in modeling extreme - value problems. This class includes the Gumbel and Frechet distributions. - - Parameters - ---------- - a : float - Shape of the distribution. - size : tuple of ints - Output shape. If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn. - - See Also - -------- - scipy.stats.distributions.weibull : probability density function, - distribution or cumulative density function, etc. - - gumbel, scipy.stats.distributions.genextreme - - Notes - ----- - The probability density for the Weibull distribution is - - .. math:: p(x) = \\frac{a} - {\\lambda}(\\frac{x}{\\lambda})^{a-1}e^{-(x/\\lambda)^a}, - - where :math:`a` is the shape and :math:`\\lambda` the scale. - - The function has its peak (the mode) at - :math:`\\lambda(\\frac{a-1}{a})^{1/a}`. - - When ``a = 1``, the Weibull distribution reduces to the exponential - distribution. - - References - ---------- - .. [1] Waloddi Weibull, Professor, Royal Technical University, Stockholm, - 1939 "A Statistical Theory Of The Strength Of Materials", - Ingeniorsvetenskapsakademiens Handlingar Nr 151, 1939, - Generalstabens Litografiska Anstalts Forlag, Stockholm. - .. [2] Waloddi Weibull, 1951 "A Statistical Distribution Function of Wide - Applicability", Journal Of Applied Mechanics ASME Paper. - .. [3] Wikipedia, "Weibull distribution", - http://en.wikipedia.org/wiki/Weibull_distribution - - Examples - -------- - Draw samples from the distribution: - - >>> a = 5. 
# shape - >>> s = np.random.weibull(a, 1000) - - Display the histogram of the samples, along with - the probability density function: - - >>> import matplotlib.pyplot as plt - >>> x = np.arange(1,100.)/50. - >>> def weib(x,n,a): - ... return (a / n) * (x / n)**(a - 1) * np.exp(-(x / n)**a) - - >>> count, bins, ignored = plt.hist(np.random.weibull(5.,1000)) - >>> x = np.arange(1,100.)/50. - >>> scale = count.max()/weib(x, 1., 5.).max() - >>> plt.plot(x, weib(x, 1., 5.)*scale) - >>> plt.show() - - """ - cdef ndarray oa - cdef double fa - - fa = PyFloat_AsDouble(a) - if not PyErr_Occurred(): - if fa <= 0: - raise ValueError("a <= 0") - return cont1_array_sc(self.internal_state, rk_weibull, size, fa) - - PyErr_Clear() - - oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(oa, 0.0)): - raise ValueError("a <= 0") - return cont1_array(self.internal_state, rk_weibull, size, oa) - - def power(self, a, size=None): - """ - power(a, size=None) - - Draws samples in [0, 1] from a power distribution with positive - exponent a - 1. - - Also known as the power function distribution. - - Parameters - ---------- - a : float - parameter, > 0 - size : tuple of ints - Output shape. If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn. - - Returns - ------- - samples : {ndarray, scalar} - The returned samples lie in [0, 1]. - - Raises - ------ - ValueError - If a<1. - - Notes - ----- - The probability density function is - - .. math:: P(x; a) = ax^{a-1}, 0 \\le x \\le 1, a>0. - - The power function distribution is just the inverse of the Pareto - distribution. It may also be seen as a special case of the Beta - distribution. - - It is used, for example, in modeling the over-reporting of insurance - claims. - - References - ---------- - .. [1] Christian Kleiber, Samuel Kotz, "Statistical size distributions - in economics and actuarial sciences", Wiley, 2003. - .. [2] Heckert, N. A. and Filliben, James J. (2003). 
NIST Handbook 148: - Dataplot Reference Manual, Volume 2: Let Subcommands and Library - Functions", National Institute of Standards and Technology Handbook - Series, June 2003. - http://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/powpdf.pdf - - Examples - -------- - Draw samples from the distribution: - - >>> a = 5. # shape - >>> samples = 1000 - >>> s = np.random.power(a, samples) - - Display the histogram of the samples, along with - the probability density function: - - >>> import matplotlib.pyplot as plt - >>> count, bins, ignored = plt.hist(s, bins=30) - >>> x = np.linspace(0, 1, 100) - >>> y = a*x**(a-1.) - >>> normed_y = samples*np.diff(bins)[0]*y - >>> plt.plot(x, normed_y) - >>> plt.show() - - Compare the power function distribution to the inverse of the Pareto. - - >>> from scipy import stats - >>> rvs = np.random.power(5, 1000000) - >>> rvsp = np.random.pareto(5, 1000000) - >>> xx = np.linspace(0,1,100) - >>> powpdf = stats.powerlaw.pdf(xx,5) - - >>> plt.figure() - >>> plt.hist(rvs, bins=50, normed=True) - >>> plt.plot(xx,powpdf,'r-') - >>> plt.title('np.random.power(5)') - - >>> plt.figure() - >>> plt.hist(1./(1.+rvsp), bins=50, normed=True) - >>> plt.plot(xx,powpdf,'r-') - >>> plt.title('inverse of 1 + np.random.pareto(5)') - - >>> plt.figure() - >>> plt.hist(1./(1.+rvsp), bins=50, normed=True) - >>> plt.plot(xx,powpdf,'r-') - >>> plt.title('inverse of stats.pareto(5)') - - """ - cdef ndarray oa - cdef double fa - - fa = PyFloat_AsDouble(a) - if not PyErr_Occurred(): - if fa <= 0: - raise ValueError("a <= 0") - return cont1_array_sc(self.internal_state, rk_power, size, fa) - - PyErr_Clear() - - oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(oa, 0.0)): - raise ValueError("a <= 0") - return cont1_array(self.internal_state, rk_power, size, oa) - - def laplace(self, loc=0.0, scale=1.0, size=None): - """ - laplace(loc=0.0, scale=1.0, size=None) - - Draw samples from the Laplace or double exponential distribution 
with - specified location (or mean) and scale (decay). - - The Laplace distribution is similar to the Gaussian/normal distribution, - but is sharper at the peak and has fatter tails. It represents the - difference between two independent, identically distributed exponential - random variables. - - Parameters - ---------- - loc : float - The position, :math:`\\mu`, of the distribution peak. - scale : float - :math:`\\lambda`, the exponential decay. - - Notes - ----- - It has the probability density function - - .. math:: f(x; \\mu, \\lambda) = \\frac{1}{2\\lambda} - \\exp\\left(-\\frac{|x - \\mu|}{\\lambda}\\right). - - The first law of Laplace, from 1774, states that the frequency of an error - can be expressed as an exponential function of the absolute magnitude of - the error, which leads to the Laplace distribution. For many problems in - Economics and Health sciences, this distribution seems to model the data - better than the standard Gaussian distribution - - - References - ---------- - .. [1] Abramowitz, M. and Stegun, I. A. (Eds.). Handbook of Mathematical - Functions with Formulas, Graphs, and Mathematical Tables, 9th - printing. New York: Dover, 1972. - - .. [2] The Laplace distribution and generalizations - By Samuel Kotz, Tomasz J. Kozubowski, Krzysztof Podgorski, - Birkhauser, 2001. - - .. [3] Weisstein, Eric W. "Laplace Distribution." - From MathWorld--A Wolfram Web Resource. - http://mathworld.wolfram.com/LaplaceDistribution.html - - .. [4] Wikipedia, "Laplace distribution", - http://en.wikipedia.org/wiki/Laplace_distribution - - Examples - -------- - Draw samples from the distribution - - >>> loc, scale = 0., 1. 
- >>> s = np.random.laplace(loc, scale, 1000) - - Display the histogram of the samples, along with - the probability density function: - - >>> import matplotlib.pyplot as plt - >>> count, bins, ignored = plt.hist(s, 30, normed=True) - >>> x = np.arange(-8., 8., .01) - >>> pdf = np.exp(-abs(x-loc)/scale)/(2.*scale) - >>> plt.plot(x, pdf) - - Plot Gaussian for comparison: - - >>> g = (1/(scale * np.sqrt(2 * np.pi)) * - ... np.exp( - (x - loc)**2 / (2 * scale**2) )) - >>> plt.plot(x,g) - - """ - cdef ndarray oloc, oscale - cdef double floc, fscale - - floc = PyFloat_AsDouble(loc) - fscale = PyFloat_AsDouble(scale) - if not PyErr_Occurred(): - if fscale <= 0: - raise ValueError("scale <= 0") - return cont2_array_sc(self.internal_state, rk_laplace, size, floc, fscale) - - PyErr_Clear() - oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) - oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(oscale, 0.0)): - raise ValueError("scale <= 0") - return cont2_array(self.internal_state, rk_laplace, size, oloc, oscale) - - def gumbel(self, loc=0.0, scale=1.0, size=None): - """ - gumbel(loc=0.0, scale=1.0, size=None) - - Gumbel distribution. - - Draw samples from a Gumbel distribution with specified location and - scale. - - The Gumbel (or Smallest Extreme Value (SEV) or the Smallest Extreme Value - Type I) distribution is one of a class of Generalized Extreme Value (GEV) - distributions used in modeling extreme value problems. The Gumbel is a - special case of the Extreme Value Type I distribution for maximums from - distributions with "exponential-like" tails; it may be derived by - considering a Gaussian process of measurements and generating the pdf for - the maximum values from that set of measurements (see examples). - - Parameters - ---------- - loc : float - The location of the mode of the distribution. - scale : float - The scale parameter of the distribution. - size : tuple of ints - Output shape. 
If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn. - - See Also - -------- - scipy.stats.gumbel : probability density function, - distribution or cumulative density function, etc. - weibull, scipy.stats.genextreme - - Notes - ----- - The probability density for the Gumbel distribution is - - .. math:: p(x) = \\frac{e^{-(x - \\mu)/ \\beta}}{\\beta} e^{ -e^{-(x - \\mu)/ - \\beta}}, - - where :math:`\\mu` is the mode, a location parameter, and :math:`\\beta` - is the scale parameter. - - The Gumbel (named for German mathematician Emil Julius Gumbel) was used - very early in the hydrology literature, for modeling the occurrence of - flood events. It is also used for modeling maximum wind speed and rainfall - rates. It is a "fat-tailed" distribution - the probability of an event in - the tail of the distribution is larger than if one used a Gaussian, hence - the surprisingly frequent occurrence of 100-year floods. Floods were - initially modeled as a Gaussian process, which underestimated the frequency - of extreme events. - - It is one of a class of extreme value distributions, the Generalized - Extreme Value (GEV) distributions, which also includes the Weibull and - Frechet. - - The function has a mean of :math:`\\mu + 0.57721\\beta` and a variance of - :math:`\\frac{\\pi^2}{6}\\beta^2`. - - References - ---------- - .. [1] Gumbel, E.J. (1958). Statistics of Extremes. Columbia University - Press. - .. [2] Reiss, R.-D. and Thomas M. (2001), Statistical Analysis of Extreme - Values, from Insurance, Finance, Hydrology and Other Fields, - Birkhauser Verlag, Basel: Boston : Berlin. - .. 
[3] Wikipedia, "Gumbel distribution", - http://en.wikipedia.org/wiki/Gumbel_distribution - - Examples - -------- - Draw samples from the distribution: - - >>> mu, beta = 0, 0.1 # location and scale - >>> s = np.random.gumbel(mu, beta, 1000) - - Display the histogram of the samples, along with - the probability density function: - - >>> import matplotlib.pyplot as plt - >>> count, bins, ignored = plt.hist(s, 30, normed=True) - >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta) - ... * np.exp( -np.exp( -(bins - mu) /beta) ), - ... linewidth=2, color='r') - >>> plt.show() - - Show how an extreme value distribution can arise from a Gaussian process - and compare to a Gaussian: - - >>> means = [] - >>> maxima = [] - >>> for i in range(0,1000) : - ... a = np.random.normal(mu, beta, 1000) - ... means.append(a.mean()) - ... maxima.append(a.max()) - >>> count, bins, ignored = plt.hist(maxima, 30, normed=True) - >>> beta = np.std(maxima)*np.pi/np.sqrt(6) - >>> mu = np.mean(maxima) - 0.57721*beta - >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta) - ... * np.exp(-np.exp(-(bins - mu)/beta)), - ... linewidth=2, color='r') - >>> plt.plot(bins, 1/(beta * np.sqrt(2 * np.pi)) - ... * np.exp(-(bins - mu)**2 / (2 * beta**2)), - ... linewidth=2, color='g') - >>> plt.show() - - """ - cdef ndarray oloc, oscale - cdef double floc, fscale - - floc = PyFloat_AsDouble(loc) - fscale = PyFloat_AsDouble(scale) - if not PyErr_Occurred(): - if fscale <= 0: - raise ValueError("scale <= 0") - return cont2_array_sc(self.internal_state, rk_gumbel, size, floc, fscale) - - PyErr_Clear() - oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) - oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(oscale, 0.0)): - raise ValueError("scale <= 0") - return cont2_array(self.internal_state, rk_gumbel, size, oloc, oscale) - - def logistic(self, loc=0.0, scale=1.0, size=None): - """ - logistic(loc=0.0, scale=1.0, size=None) - - Draw samples from a Logistic distribution. 
- - Samples are drawn from a Logistic distribution with specified - parameters, loc (location or mean, also median), and scale (>0). - - Parameters - ---------- - loc : float - Location of the distribution (its mean and median). - scale : float > 0. - Scale parameter of the distribution. - size : {tuple, int} - Output shape. If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn. - - Returns - ------- - samples : {ndarray, scalar} - Drawn samples from the Logistic distribution. - - See Also - -------- - scipy.stats.distributions.logistic : probability density function, - distribution or cumulative density function, etc. - - Notes - ----- - The probability density for the Logistic distribution is - - .. math:: P(x) = \\frac{e^{-(x-\\mu)/s}}{s(1+e^{-(x-\\mu)/s})^2}, - - where :math:`\\mu` = location and :math:`s` = scale. - - The Logistic distribution is used in Extreme Value problems where it - can act as a mixture of Gumbel distributions, in Epidemiology, and by - the World Chess Federation (FIDE) where it is used in the Elo ranking - system, assuming the performance of each player is a logistically - distributed random variable. - - References - ---------- - .. [1] Reiss, R.-D. and Thomas M. (2001), Statistical Analysis of Extreme - Values, from Insurance, Finance, Hydrology and Other Fields, - Birkhauser Verlag, Basel, pp 132-133. - .. [2] Weisstein, Eric W. "Logistic Distribution." From - MathWorld--A Wolfram Web Resource. - http://mathworld.wolfram.com/LogisticDistribution.html - .. [3] Wikipedia, "Logistic-distribution", - http://en.wikipedia.org/wiki/Logistic-distribution - - Examples - -------- - Draw samples from the distribution: - - >>> import matplotlib.pyplot as plt - >>> loc, scale = 10, 1 - >>> s = np.random.logistic(loc, scale, 10000) - >>> count, bins, ignored = plt.hist(s, bins=50) - - Plot against the distribution: - - >>> def logist(x, loc, scale): - ... return np.exp((loc-x)/scale)/(scale*(1+np.exp((loc-x)/scale))**2) - >>> plt.plot(bins, logist(bins, loc, scale)*count.max()/\\ - ... 
logist(bins, loc, scale).max()) - >>> plt.show() - - """ - cdef ndarray oloc, oscale - cdef double floc, fscale - - floc = PyFloat_AsDouble(loc) - fscale = PyFloat_AsDouble(scale) - if not PyErr_Occurred(): - if fscale <= 0: - raise ValueError("scale <= 0") - return cont2_array_sc(self.internal_state, rk_logistic, size, floc, fscale) - - PyErr_Clear() - oloc = PyArray_FROM_OTF(loc, NPY_DOUBLE, NPY_ALIGNED) - oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(oscale, 0.0)): - raise ValueError("scale <= 0") - return cont2_array(self.internal_state, rk_logistic, size, oloc, oscale) - - def lognormal(self, mean=0.0, sigma=1.0, size=None): - """ - lognormal(mean=0.0, sigma=1.0, size=None) - - Return samples drawn from a log-normal distribution. - - Draw samples from a log-normal distribution with specified mean, standard - deviation, and shape. Note that the mean and standard deviation are not the - values for the distribution itself, but of the underlying normal - distribution it is derived from. - - - Parameters - ---------- - mean : float - Mean value of the underlying normal distribution - sigma : float, >0. - Standard deviation of the underlying normal distribution - size : tuple of ints - Output shape. If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn. - - See Also - -------- - scipy.stats.lognorm : probability density function, distribution, - cumulative density function, etc. - - Notes - ----- - A variable `x` has a log-normal distribution if `log(x)` is normally - distributed. - - The probability density function for the log-normal distribution is - - .. math:: p(x) = \\frac{1}{\\sigma x \\sqrt{2\\pi}} - e^{(-\\frac{(ln(x)-\\mu)^2}{2\\sigma^2})} - - where :math:`\\mu` is the mean and :math:`\\sigma` is the standard deviation - of the normally distributed logarithm of the variable. 
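One way to see the definition above (a standalone sketch, not this module's generator): exponentiate draws from a normal distribution and check that the sample median lands near :math:`e^{\mu}`, the log-normal median.

```python
import math
import random

rng = random.Random(7)
mu, sigma = 3.0, 1.0
n = 100001

# A log-normal variate is the exponential of a normal variate:
# log(x) ~ N(mu, sigma**2).
samples = sorted(math.exp(rng.gauss(mu, sigma)) for _ in range(n))

# The log-normal median is exp(mu), about 20.09 for mu = 3.
sample_median = samples[n // 2]
```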
- - A log-normal distribution results if a random variable is the *product* of - a large number of independent, identically-distributed variables in the - same way that a normal distribution results if the variable is the *sum* - of a large number of independent, identically-distributed variables - (see the last example). It is one of the so-called "fat-tailed" - distributions. - - The log-normal distribution is commonly used to model the lifespan of units - with fatigue-stress failure modes. Since this includes - most mechanical systems, the log-normal distribution has widespread - application. - - It is also commonly used to model oil field sizes, species abundance, and - latent periods of infectious diseases. - - References - ---------- - .. [1] Eckhard Limpert, Werner A. Stahel, and Markus Abbt, "Log-normal - Distributions across the Sciences: Keys and Clues", May 2001 - Vol. 51 No. 5 BioScience - http://stat.ethz.ch/~stahel/lognormal/bioscience.pdf - .. [2] Reiss, R.D., Thomas, M.(2001), Statistical Analysis of Extreme - Values, Birkhauser Verlag, Basel, pp 31-32. - .. [3] Wikipedia, "Lognormal distribution", - http://en.wikipedia.org/wiki/Lognormal_distribution - - Examples - -------- - Draw samples from the distribution: - - >>> mu, sigma = 3., 1. # mean and standard deviation - >>> s = np.random.lognormal(mu, sigma, 1000) - - Display the histogram of the samples, along with - the probability density function: - - >>> import matplotlib.pyplot as plt - >>> count, bins, ignored = plt.hist(s, 100, normed=True, align='mid') - - >>> x = np.linspace(min(bins), max(bins), 10000) - >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) - ... / (x * sigma * np.sqrt(2 * np.pi))) - - >>> plt.plot(x, pdf, linewidth=2, color='r') - >>> plt.axis('tight') - >>> plt.show() - - Demonstrate that taking the products of random samples from a uniform - distribution can be fit well by a log-normal probability density function. 
- - >>> # Generate a thousand samples: each is the product of 100 random - >>> # values, drawn from a normal distribution. - >>> b = [] - >>> for i in range(1000): - ... a = 10. + np.random.random(100) - ... b.append(np.product(a)) - - >>> b = np.array(b) / np.min(b) # scale values to be positive - - >>> count, bins, ignored = plt.hist(b, 100, normed=True, align='center') - - >>> sigma = np.std(np.log(b)) - >>> mu = np.mean(np.log(b)) - - >>> x = np.linspace(min(bins), max(bins), 10000) - >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) - ... / (x * sigma * np.sqrt(2 * np.pi))) - - >>> plt.plot(x, pdf, color='r', linewidth=2) - >>> plt.show() - - """ - cdef ndarray omean, osigma - cdef double fmean, fsigma - - fmean = PyFloat_AsDouble(mean) - fsigma = PyFloat_AsDouble(sigma) - - if not PyErr_Occurred(): - if fsigma <= 0: - raise ValueError("sigma <= 0") - return cont2_array_sc(self.internal_state, rk_lognormal, size, fmean, fsigma) - - PyErr_Clear() - - omean = PyArray_FROM_OTF(mean, NPY_DOUBLE, NPY_ALIGNED) - osigma = PyArray_FROM_OTF(sigma, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(osigma, 0.0)): - raise ValueError("sigma <= 0.0") - return cont2_array(self.internal_state, rk_lognormal, size, omean, osigma) - - def rayleigh(self, scale=1.0, size=None): - """ - rayleigh(scale=1.0, size=None) - - Draw samples from a Rayleigh distribution. - - The :math:`\\chi` and Weibull distributions are generalizations of the - Rayleigh. - - Parameters - ---------- - scale : scalar - Scale, also equals the mode. Should be >= 0. - size : int or tuple of ints, optional - Shape of the output. Default is None, in which case a single - value is returned. - - Notes - ----- - The probability density function for the Rayleigh distribution is - - .. 
math:: P(x;scale) = \\frac{x}{scale^2}e^{\\frac{-x^2}{2 \\cdotp scale^2}} - - The Rayleigh distribution arises if the wind speed and wind direction are - both gaussian variables, then the vector wind velocity forms a Rayleigh - distribution. The Rayleigh distribution is used to model the expected - output from wind turbines. - - References - ---------- - ..[1] Brighton Webs Ltd., Rayleigh Distribution, - http://www.brighton-webs.co.uk/distributions/rayleigh.asp - ..[2] Wikipedia, "Rayleigh distribution" - http://en.wikipedia.org/wiki/Rayleigh_distribution - - Examples - -------- - Draw values from the distribution and plot the histogram - - >>> values = hist(np.random.rayleigh(3, 100000), bins=200, normed=True) - - Wave heights tend to follow a Rayleigh distribution. If the mean wave - height is 1 meter, what fraction of waves are likely to be larger than 3 - meters? - - >>> meanvalue = 1 - >>> modevalue = np.sqrt(2 / np.pi) * meanvalue - >>> s = np.random.rayleigh(modevalue, 1000000) - - The percentage of waves larger than 3 meters is: - - >>> 100.*sum(s>3)/1000000. - 0.087300000000000003 - - """ - cdef ndarray oscale - cdef double fscale - - fscale = PyFloat_AsDouble(scale) - - if not PyErr_Occurred(): - if fscale <= 0: - raise ValueError("scale <= 0") - return cont1_array_sc(self.internal_state, rk_rayleigh, size, fscale) - - PyErr_Clear() - - oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(oscale, 0.0)): - raise ValueError("scale <= 0.0") - return cont1_array(self.internal_state, rk_rayleigh, size, oscale) - - def wald(self, mean, scale, size=None): - """ - wald(mean, scale, size=None) - - Draw samples from a Wald, or Inverse Gaussian, distribution. - - As the scale approaches infinity, the distribution becomes more like a - Gaussian. - - Some references claim that the Wald is an Inverse Gaussian with mean=1, but - this is by no means universal. 
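For the Rayleigh density above, the CDF :math:`1 - e^{-x^2/(2 \cdot scale^2)}` inverts in closed form, which gives a quick, stdlib-only sanity check of the mode/mean relation used in the wave-height example (a sketch; the helper name is invented for illustration):

```python
import math
import random

def rayleigh_sample(scale, rng):
    # Inverse CDF: x = scale * sqrt(-2 ln U), U uniform on (0, 1].
    u = 1.0 - rng.random()
    return scale * math.sqrt(-2.0 * math.log(u))

rng = random.Random(2024)
meanvalue = 1.0
modevalue = math.sqrt(2.0 / math.pi) * meanvalue  # scale equals the mode
n = 100000
samples = [rayleigh_sample(modevalue, rng) for _ in range(n)]

# The Rayleigh mean is scale * sqrt(pi/2), so it should come back near 1.
mean = sum(samples) / n
```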
- - The Inverse Gaussian distribution was first studied in relationship to - Brownian motion. In 1956 M.C.K. Tweedie used the name Inverse Gaussian - because there is an inverse relationship between the time to cover a unit - distance and distance covered in unit time. - - Parameters - ---------- - mean : scalar - Distribution mean, should be > 0. - scale : scalar - Scale parameter, should be >= 0. - size : int or tuple of ints, optional - Output shape. Default is None, in which case a single value is - returned. - - Returns - ------- - samples : ndarray or scalar - Drawn sample, all greater than zero. - - Notes - ----- - The probability density function for the Wald distribution is - - .. math:: P(x;mean,scale) = \\sqrt{\\frac{scale}{2\\pi x^3}}e^ - \\frac{-scale(x-mean)^2}{2\\cdotp mean^2x} - - As noted above the Inverse Gaussian distribution first arise from attempts - to model Brownian Motion. It is also a competitor to the Weibull for use in - reliability modeling and modeling stock returns and interest rate - processes. - - References - ---------- - ..[1] Brighton Webs Ltd., Wald Distribution, - http://www.brighton-webs.co.uk/distributions/wald.asp - ..[2] Chhikara, Raj S., and Folks, J. Leroy, "The Inverse Gaussian - Distribution: Theory : Methodology, and Applications", CRC Press, - 1988. 
- ..[3] Wikipedia, "Wald distribution" - http://en.wikipedia.org/wiki/Wald_distribution - - Examples - -------- - Draw values from the distribution and plot the histogram: - - >>> import matplotlib.pyplot as plt - >>> h = plt.hist(np.random.wald(3, 2, 100000), bins=200, normed=True) - >>> plt.show() - - """ - cdef ndarray omean, oscale - cdef double fmean, fscale - - fmean = PyFloat_AsDouble(mean) - fscale = PyFloat_AsDouble(scale) - if not PyErr_Occurred(): - if fmean <= 0: - raise ValueError("mean <= 0") - if fscale <= 0: - raise ValueError("scale <= 0") - return cont2_array_sc(self.internal_state, rk_wald, size, fmean, fscale) - - PyErr_Clear() - omean = PyArray_FROM_OTF(mean, NPY_DOUBLE, NPY_ALIGNED) - oscale = PyArray_FROM_OTF(scale, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(omean,0.0)): - raise ValueError("mean <= 0.0") - elif np.any(np.less_equal(oscale,0.0)): - raise ValueError("scale <= 0.0") - return cont2_array(self.internal_state, rk_wald, size, omean, oscale) - - - - def triangular(self, left, mode, right, size=None): - """ - triangular(left, mode, right, size=None) - - Draw samples from the triangular distribution. - - The triangular distribution is a continuous probability distribution with - lower limit left, peak at mode, and upper limit right. Unlike the other - distributions, these parameters directly define the shape of the pdf. - - Parameters - ---------- - left : scalar - Lower limit. - mode : scalar - The value where the peak of the distribution occurs. - The value should fulfill the condition ``left <= mode <= right``. - right : scalar - Upper limit, should be larger than `left`. - size : int or tuple of ints, optional - Output shape. Default is None, in which case a single value is - returned. - - Returns - ------- - samples : ndarray or scalar - The returned samples all lie in the interval [left, right]. - - Notes - ----- - The probability density function for the Triangular distribution is - - .. 
math:: P(x;l, m, r) = \\begin{cases} - \\frac{2(x-l)}{(r-l)(m-l)}& \\text{for $l \\leq x \\leq m$},\\\\ - \\frac{2(m-x)}{(r-l)(r-m)}& \\text{for $m \\leq x \\leq r$},\\\\ - 0& \\text{otherwise}. - \\end{cases} - - The triangular distribution is often used in ill-defined problems where the - underlying distribution is not known, but some knowledge of the limits and - mode exists. Often it is used in simulations. - - References - ---------- - ..[1] Wikipedia, "Triangular distribution" - http://en.wikipedia.org/wiki/Triangular_distribution - - Examples - -------- - Draw values from the distribution and plot the histogram: - - >>> import matplotlib.pyplot as plt - >>> h = plt.hist(np.random.triangular(-3, 0, 8, 100000), bins=200, - ... normed=True) - >>> plt.show() - - """ - cdef ndarray oleft, omode, oright - cdef double fleft, fmode, fright - - fleft = PyFloat_AsDouble(left) - fright = PyFloat_AsDouble(right) - fmode = PyFloat_AsDouble(mode) - if not PyErr_Occurred(): - if fleft > fmode: - raise ValueError("left > mode") - if fmode > fright: - raise ValueError("mode > right") - if fleft == fright: - raise ValueError("left == right") - return cont3_array_sc(self.internal_state, rk_triangular, size, fleft, - fmode, fright) - - PyErr_Clear() - oleft = PyArray_FROM_OTF(left, NPY_DOUBLE, NPY_ALIGNED) - omode = PyArray_FROM_OTF(mode, NPY_DOUBLE, NPY_ALIGNED) - oright = PyArray_FROM_OTF(right, NPY_DOUBLE, NPY_ALIGNED) - - if np.any(np.greater(oleft, omode)): - raise ValueError("left > mode") - if np.any(np.greater(omode, oright)): - raise ValueError("mode > right") - if np.any(np.equal(oleft, oright)): - raise ValueError("left == right") - return cont3_array(self.internal_state, rk_triangular, size, oleft, - omode, oright) - - # Complicated, discrete distributions: - def binomial(self, n, p, size=None): - """ - binomial(n, p, size=None) - - Draw samples from a binomial distribution. 
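The piecewise triangular density above integrates to a piecewise-quadratic CDF, so triangular variates can be drawn by inverse transform sampling (a sketch with an invented helper name, matching the histogram example's parameters):

```python
import math
import random

def triangular_sample(left, mode, right, rng):
    # The CDF at the mode is (mode - left) / (right - left); invert each
    # quadratic branch of the CDF separately.
    u = rng.random()
    c = (mode - left) / (right - left)
    if u < c:
        return left + math.sqrt(u * (right - left) * (mode - left))
    return right - math.sqrt((1.0 - u) * (right - left) * (right - mode))

rng = random.Random(99)
n = 100000
samples = [triangular_sample(-3.0, 0.0, 8.0, rng) for _ in range(n)]

# The triangular mean is (left + mode + right) / 3; here 5/3.
mean = sum(samples) / n
```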
- - Samples are drawn from a Binomial distribution with specified - parameters, n trials and p probability of success where - n an integer > 0 and p is in the interval [0,1]. (n may be - input as a float, but it is truncated to an integer in use) - - Parameters - ---------- - n : float (but truncated to an integer) - parameter, > 0. - p : float - parameter, >= 0 and <=1. - size : {tuple, int} - Output shape. If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn. - - Returns - ------- - samples : {ndarray, scalar} - where the values are all integers in [0, n]. - - See Also - -------- - scipy.stats.distributions.binom : probability density function, - distribution or cumulative density function, etc. - - Notes - ----- - The probability density for the Binomial distribution is - - .. math:: P(N) = \\binom{n}{N}p^N(1-p)^{n-N}, - - where :math:`n` is the number of trials, :math:`p` is the probability - of success, and :math:`N` is the number of successes. - - When estimating the standard error of a proportion in a population by - using a random sample, the normal distribution works well unless the - product p*n <=5, where p = population proportion estimate, and n = - number of samples, in which case the binomial distribution is used - instead. For example, a sample of 15 people shows 4 who are left - handed, and 11 who are right handed. Then p = 4/15 = 27%. 0.27*15 = 4, - so the binomial distribution should be used in this case. - - References - ---------- - .. [1] Dalgaard, Peter, "Introductory Statistics with R", - Springer-Verlag, 2002. - .. [2] Glantz, Stanton A. "Primer of Biostatistics.", McGraw-Hill, - Fifth Edition, 2002. - .. [3] Lentner, Marvin, "Elementary Applied Statistics", Bogden - and Quigley, 1972. - .. [4] Weisstein, Eric W. "Binomial Distribution." From MathWorld--A - Wolfram Web Resource. - http://mathworld.wolfram.com/BinomialDistribution.html - .. 
[5] Wikipedia, "Binomial-distribution", - http://en.wikipedia.org/wiki/Binomial_distribution - - Examples - -------- - Draw samples from the distribution: - - >>> n, p = 10, .5 # number of trials, probability of each trial - >>> s = np.random.binomial(n, p, 1000) - # result of flipping a coin 10 times, tested 1000 times. - - A real world example. A company drills 9 wild-cat oil exploration - wells, each with an estimated probability of success of 0.1. All nine - wells fail. What is the probability of that happening? - - Let's do 20,000 trials of the model, and count the number that - generate zero positive results. - - >>> sum(np.random.binomial(9,0.1,20000)==0)/20000. - answer = 0.38885, or 38%. - - """ - cdef ndarray on, op - cdef long ln - cdef double fp - - fp = PyFloat_AsDouble(p) - ln = PyInt_AsLong(n) - if not PyErr_Occurred(): - if ln <= 0: - raise ValueError("n <= 0") - if fp < 0: - raise ValueError("p < 0") - elif fp > 1: - raise ValueError("p > 1") - return discnp_array_sc(self.internal_state, rk_binomial, size, ln, fp) - - PyErr_Clear() - - on = PyArray_FROM_OTF(n, NPY_LONG, NPY_ALIGNED) - op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(n, 0)): - raise ValueError("n <= 0") - if np.any(np.less(p, 0)): - raise ValueError("p < 0") - if np.any(np.greater(p, 1)): - raise ValueError("p > 1") - return discnp_array(self.internal_state, rk_binomial, size, on, op) - - def negative_binomial(self, n, p, size=None): - """ - negative_binomial(n, p, size=None) - - Draw samples from a negative_binomial distribution. - - Samples are drawn from a negative_Binomial distribution with specified - parameters, `n` trials and `p` probability of success where `n` is an - integer > 0 and `p` is in the interval [0, 1]. - - Parameters - ---------- - n : int - Parameter, > 0. - p : float - Parameter, >= 0 and <=1. - size : int or tuple of ints - Output shape. If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn. 
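The nine-well binomial example above has a closed-form answer the simulation should approach: the chance that all nine wells fail is :math:`(1 - 0.1)^9 \approx 0.387`. A stdlib-only check (a sketch; ``binomial_sample`` is an illustrative helper, not this module's generator):

```python
import random

def binomial_sample(n, p, rng):
    # Direct Bernoulli summation -- fine for small n, illustrative only.
    return sum(1 for _ in range(n) if rng.random() < p)

rng = random.Random(1)
trials = 20000
zeros = sum(1 for _ in range(trials) if binomial_sample(9, 0.1, rng) == 0)
estimate = zeros / float(trials)

# Exact probability that all nine wells fail.
exact = (1.0 - 0.1) ** 9
```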
- - Returns - ------- - samples : int or ndarray of ints - Drawn samples. - - Notes - ----- - The probability density for the Negative Binomial distribution is - - .. math:: P(N;n,p) = \\binom{N+n-1}{n-1}p^{n}(1-p)^{N}, - - where :math:`n-1` is the number of successes, :math:`p` is the probability - of success, and :math:`N+n-1` is the number of trials. - - The negative binomial distribution gives the probability of n-1 successes - and N failures in N+n-1 trials, and success on the (N+n)th trial. - - If one throws a die repeatedly until the third time a "1" appears, then the - probability distribution of the number of non-"1"s that appear before the - third "1" is a negative binomial distribution. - - References - ---------- - .. [1] Weisstein, Eric W. "Negative Binomial Distribution." From - MathWorld--A Wolfram Web Resource. - http://mathworld.wolfram.com/NegativeBinomialDistribution.html - .. [2] Wikipedia, "Negative binomial distribution", - http://en.wikipedia.org/wiki/Negative_binomial_distribution - - Examples - -------- - Draw samples from the distribution: - - A real world example. A company drills wild-cat oil exploration wells, each - with an estimated probability of success of 0.1. What is the probability - of having one success for each successive well, that is what is the - probability of a single success after drilling 5 wells, after 6 wells, - etc.? - - >>> s = np.random.negative_binomial(1, 0.1, 100000) - >>> for i in range(1, 11): - ... 
probability = sum(s<i)/100000. - ... print i, "wells drilled, probability of one success =", probability - - """ - cdef ndarray on, op - cdef double fn - cdef double fp - - fn = PyFloat_AsDouble(n) - fp = PyFloat_AsDouble(p) - if not PyErr_Occurred(): - if fn <= 0: - raise ValueError("n <= 0") - if fp < 0: - raise ValueError("p < 0") - elif fp > 1: - raise ValueError("p > 1") - return discdd_array_sc(self.internal_state, rk_negative_binomial, - size, fn, fp) - - PyErr_Clear() - - on = PyArray_FROM_OTF(n, NPY_DOUBLE, NPY_ALIGNED) - op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(n, 0)): - raise ValueError("n <= 0") - if np.any(np.less(p, 0)): - raise ValueError("p < 0") - if np.any(np.greater(p, 1)): - raise ValueError("p > 1") - return discdd_array(self.internal_state, rk_negative_binomial, size, - on, op) - - def poisson(self, lam=1.0, size=None): - """ - poisson(lam=1.0, size=None) - - Draw samples from a Poisson distribution. - - The Poisson distribution is the limit of the Binomial - distribution for large N. - - Parameters - ---------- - lam : float - Expectation of interval, should be >= 0. - size : int or tuple of ints, optional - Output shape. If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn. - - Notes - ----- - The Poisson distribution - - .. math:: f(k; \\lambda)=\\frac{\\lambda^k e^{-\\lambda}}{k!} - - For events with an expected separation :math:`\\lambda` the Poisson - distribution :math:`f(k; \\lambda)` describes the probability of - :math:`k` events occurring within the observed interval :math:`\\lambda`. - - References - ---------- - .. [1] Weisstein, Eric W. "Poisson Distribution." From MathWorld--A Wolfram - Web Resource. http://mathworld.wolfram.com/PoissonDistribution.html - .. 
[2] Wikipedia, "Poisson distribution", - http://en.wikipedia.org/wiki/Poisson_distribution - - Examples - -------- - Draw samples from the distribution: - - >>> import numpy as np - >>> s = np.random.poisson(5, 10000) - - Display histogram of the sample: - - >>> import matplotlib.pyplot as plt - >>> count, bins, ignored = plt.hist(s, 14, normed=True) - >>> plt.show() - - """ - cdef ndarray olam - cdef double flam - flam = PyFloat_AsDouble(lam) - if not PyErr_Occurred(): - if lam < 0: - raise ValueError("lam < 0") - return discd_array_sc(self.internal_state, rk_poisson, size, flam) - - PyErr_Clear() - - olam = PyArray_FROM_OTF(lam, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less(olam, 0)): - raise ValueError("lam < 0") - return discd_array(self.internal_state, rk_poisson, size, olam) - - def zipf(self, a, size=None): - """ - zipf(a, size=None) - - Draw samples from a Zipf distribution. - - Samples are drawn from a Zipf distribution with specified parameter - `a` > 1. - - The Zipf distribution (also known as the zeta distribution) is a - discrete probability distribution that satisfies Zipf's law: the - frequency of an item is inversely proportional to its rank in a - frequency table. - - Parameters - ---------- - a : float > 1 - Distribution parameter. - size : int or tuple of int, optional - Output shape. If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn; a single integer is equivalent in - its result to providing a mono-tuple, i.e., a 1-D array of length - *size* is returned. The default is None, in which case a single - scalar is returned. - - Returns - ------- - samples : scalar or ndarray - The returned samples are greater than or equal to one. - - See Also - -------- - scipy.stats.distributions.zipf : probability density function, - distribution, or cumulative density function, etc. - - Notes - ----- - The probability density for the Zipf distribution is - - .. 
math:: p(x) = \\frac{x^{-a}}{\\zeta(a)}, - - where :math:`\\zeta` is the Riemann Zeta function. - - It is named for the American linguist George Kingsley Zipf, who noted - that the frequency of any word in a sample of a language is inversely - proportional to its rank in the frequency table. - - References - ---------- - Zipf, G. K., *Selected Studies of the Principle of Relative Frequency - in Language*, Cambridge, MA: Harvard Univ. Press, 1932. - - Examples - -------- - Draw samples from the distribution: - - >>> a = 2. # parameter - >>> s = np.random.zipf(a, 1000) - - Display the histogram of the samples, along with - the probability density function: - - >>> import matplotlib.pyplot as plt - >>> import scipy.special as sps - Truncate s values at 50 so plot is interesting - >>> count, bins, ignored = plt.hist(s[s<50], 50, normed=True) - >>> x = np.arange(1., 50.) - >>> y = x**(-a)/sps.zetac(a) - >>> plt.plot(x, y/max(y), linewidth=2, color='r') - >>> plt.show() - - """ - cdef ndarray oa - cdef double fa - - fa = PyFloat_AsDouble(a) - if not PyErr_Occurred(): - if fa <= 1.0: - raise ValueError("a <= 1.0") - return discd_array_sc(self.internal_state, rk_zipf, size, fa) - - PyErr_Clear() - - oa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(oa, 1.0)): - raise ValueError("a <= 1.0") - return discd_array(self.internal_state, rk_zipf, size, oa) - - def geometric(self, p, size=None): - """ - geometric(p, size=None) - - Draw samples from the geometric distribution. - - Bernoulli trials are experiments with one of two outcomes: - success or failure (an example of such an experiment is flipping - a coin). The geometric distribution models the number of trials - that must be run in order to achieve success. It is therefore - supported on the positive integers, ``k = 1, 2, ...``. - - The probability mass function of the geometric distribution is - - .. 
math:: f(k) = (1 - p)^{k - 1} p - - where `p` is the probability of success of an individual trial. - - Parameters - ---------- - p : float - The probability of success of an individual trial. - size : tuple of ints - Number of values to draw from the distribution. The output - is shaped according to `size`. - - Returns - ------- - out : ndarray - Samples from the geometric distribution, shaped according to - `size`. - - Examples - -------- - Draw ten thousand values from the geometric distribution, - with the probability of an individual success equal to 0.35: - - >>> z = np.random.geometric(p=0.35, size=10000) - - How many trials succeeded after a single run? - - >>> (z == 1).sum() / 10000. - 0.34889999999999999 #random - - """ - cdef ndarray op - cdef double fp - - fp = PyFloat_AsDouble(p) - if not PyErr_Occurred(): - if fp < 0.0: - raise ValueError("p < 0.0") - if fp > 1.0: - raise ValueError("p > 1.0") - return discd_array_sc(self.internal_state, rk_geometric, size, fp) - - PyErr_Clear() - - - op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less(op, 0.0)): - raise ValueError("p < 0.0") - if np.any(np.greater(op, 1.0)): - raise ValueError("p > 1.0") - return discd_array(self.internal_state, rk_geometric, size, op) - - def hypergeometric(self, ngood, nbad, nsample, size=None): - """ - hypergeometric(ngood, nbad, nsample, size=None) - - Draw samples from a Hypergeometric distribution. - - Samples are drawn from a Hypergeometric distribution with specified - parameters, ngood (ways to make a good selection), nbad (ways to make - a bad selection), and nsample = number of items sampled, which is less - than or equal to the sum ngood + nbad. - - Parameters - ---------- - ngood : float (but truncated to an integer) - parameter, > 0. - nbad : float - parameter, >= 0. - nsample : float - parameter, > 0 and <= ngood+nbad - size : {tuple, int} - Output shape. If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn. 
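Per the geometric pmf above, ``f(1) = p``, so when ``p = 0.35`` roughly 35% of draws should equal 1, matching the ``(z == 1).sum() / 10000.`` example in the docstring. A quick empirical check (seed chosen arbitrarily):

```python
import numpy as np

rng = np.random.RandomState(0)  # arbitrary seed
p = 0.35
z = rng.geometric(p, size=100000)

# f(1) = p: the fraction of trials that succeed on the very first attempt.
print((z == 1).mean())  # close to 0.35
```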
- - Returns - ------- - samples : {ndarray, scalar} - where the values are all integers in [0, n]. - - See Also - -------- - scipy.stats.distributions.hypergeom : probability density function, - distribution or cumulative density function, etc. - - Notes - ----- - The probability density for the Hypergeometric distribution is - - .. math:: P(x) = \\frac{\\binom{m}{n}\\binom{N-m}{n-x}}{\\binom{N}{n}}, - - where :math:`0 \\le x \\le m` and :math:`n+m-N \\le x \\le n` - - for P(x) the probability of x successes, n = ngood, m = nbad, and - N = number of samples. - - Consider an urn with black and white marbles in it, ngood of them - black and nbad are white. If you draw nsample balls without - replacement, then the Hypergeometric distribution describes the - distribution of black balls in the drawn sample. - - Note that this distribution is very similar to the Binomial - distribution, except that in this case, samples are drawn without - replacement, whereas in the Binomial case samples are drawn with - replacement (or the sample space is infinite). As the sample space - becomes large, this distribution approaches the Binomial. - - References - ---------- - .. [1] Lentner, Marvin, "Elementary Applied Statistics", Bogden - and Quigley, 1972. - .. [2] Weisstein, Eric W. "Hypergeometric Distribution." From - MathWorld--A Wolfram Web Resource. - http://mathworld.wolfram.com/HypergeometricDistribution.html - .. [3] Wikipedia, "Hypergeometric-distribution", - http://en.wikipedia.org/wiki/Hypergeometric-distribution - - Examples - -------- - Draw samples from the distribution: - - >>> ngood, nbad, nsamp = 100, 2, 10 - # number of good, number of bad, and number of samples - >>> s = np.random.hypergeometric(ngood, nbad, nsamp, 1000) - >>> hist(s) - # note that it is very unlikely to grab both bad items - - Suppose you have an urn with 15 white and 15 black marbles. - If you pull 15 marbles at random, how likely is it that - 12 or more of them are one color? 
- - >>> s = np.random.hypergeometric(15, 15, 15, 100000) - >>> sum(s>=12)/100000. + sum(s<=3)/100000. - # answer = 0.003 ... pretty unlikely! - - """ - cdef ndarray ongood, onbad, onsample - cdef long lngood, lnbad, lnsample - - lngood = PyInt_AsLong(ngood) - lnbad = PyInt_AsLong(nbad) - lnsample = PyInt_AsLong(nsample) - if not PyErr_Occurred(): - if ngood < 1: - raise ValueError("ngood < 1") - if nbad < 1: - raise ValueError("nbad < 1") - if nsample < 1: - raise ValueError("nsample < 1") - if ngood + nbad < nsample: - raise ValueError("ngood + nbad < nsample") - return discnmN_array_sc(self.internal_state, rk_hypergeometric, size, - lngood, lnbad, lnsample) - - - PyErr_Clear() - - ongood = PyArray_FROM_OTF(ngood, NPY_LONG, NPY_ALIGNED) - onbad = PyArray_FROM_OTF(nbad, NPY_LONG, NPY_ALIGNED) - onsample = PyArray_FROM_OTF(nsample, NPY_LONG, NPY_ALIGNED) - if np.any(np.less(ongood, 1)): - raise ValueError("ngood < 1") - if np.any(np.less(onbad, 1)): - raise ValueError("nbad < 1") - if np.any(np.less(onsample, 1)): - raise ValueError("nsample < 1") - if np.any(np.less(np.add(ongood, onbad),onsample)): - raise ValueError("ngood + nbad < nsample") - return discnmN_array(self.internal_state, rk_hypergeometric, size, - ongood, onbad, onsample) - - def logseries(self, p, size=None): - """ - logseries(p, size=None) - - Draw samples from a Logarithmic Series distribution. - - Samples are drawn from a Log Series distribution with specified - parameter, p (probability, 0 < p < 1). - - Parameters - ---------- - p : float - Shape parameter of the distribution, 0 < p < 1. - size : {tuple, int} - Output shape. If the given shape is, e.g., ``(m, n, k)``, then - ``m * n * k`` samples are drawn. - - Returns - ------- - samples : {ndarray, scalar} - where the values are all integers in [0, n]. - - See Also - -------- - scipy.stats.distributions.logser : probability density function, - distribution or cumulative density function, etc. 
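The hypergeometric sampler documented above can be spot-checked against the distribution's mean, ``nsample * ngood / (ngood + nbad)``, using the good/bad/sample values from the docstring example. A sketch (seed arbitrary):

```python
import numpy as np

rng = np.random.RandomState(0)  # arbitrary seed
ngood, nbad, nsamp = 100, 2, 10
s = rng.hypergeometric(ngood, nbad, nsamp, size=100000)

# Mean of the hypergeometric is nsample * ngood / (ngood + nbad).
expected = nsamp * ngood / float(ngood + nbad)
print(s.mean(), expected)  # both close to 9.8
```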
- - Notes - ----- - The probability density for the Log Series distribution is - - .. math:: P(k) = \\frac{-p^k}{k \\ln(1-p)}, - - where p = probability. - - The Log Series distribution is frequently used to represent species - richness and occurrence, first proposed by Fisher, Corbet, and - Williams in 1943 [2]. It may also be used to model the numbers of - occupants seen in cars [3]. - - References - ---------- - .. [1] Buzas, Martin A.; Culver, Stephen J., Understanding regional - species diversity through the log series distribution of - occurrences: BIODIVERSITY RESEARCH Diversity & Distributions, - Volume 5, Number 5, September 1999 , pp. 187-195(9). - .. [2] Fisher, R.A,, A.S. Corbet, and C.B. Williams. 1943. The - relation between the number of species and the number of - individuals in a random sample of an animal population. - Journal of Animal Ecology, 12:42-58. - .. [3] D. J. Hand, F. Daly, D. Lunn, E. Ostrowski, A Handbook of Small - Data Sets, CRC Press, 1994. - .. [4] Wikipedia, "Logarithmic-distribution", - http://en.wikipedia.org/wiki/Logarithmic-distribution - - Examples - -------- - Draw samples from the distribution: - - >>> a = .6 - >>> s = np.random.logseries(a, 10000) - >>> count, bins, ignored = plt.hist(s) - - # plot against distribution - - >>> def logseries(k, p): - ... 
return -p**k/(k*log(1-p)) - >>> plt.plot(bins, logseries(bins, a)*count.max()/ - logseries(bins, a).max(), 'r') - >>> plt.show() - - """ - cdef ndarray op - cdef double fp - - fp = PyFloat_AsDouble(p) - if not PyErr_Occurred(): - if fp <= 0.0: - raise ValueError("p <= 0.0") - if fp >= 1.0: - raise ValueError("p >= 1.0") - return discd_array_sc(self.internal_state, rk_logseries, size, fp) - - PyErr_Clear() - - op = PyArray_FROM_OTF(p, NPY_DOUBLE, NPY_ALIGNED) - if np.any(np.less_equal(op, 0.0)): - raise ValueError("p <= 0.0") - if np.any(np.greater_equal(op, 1.0)): - raise ValueError("p >= 1.0") - return discd_array(self.internal_state, rk_logseries, size, op) - - # Multivariate distributions: - def multivariate_normal(self, mean, cov, size=None): - """ - multivariate_normal(mean, cov[, size]) - - Draw random samples from a multivariate normal distribution. - - The multivariate normal, multinormal or Gaussian distribution is a - generalisation of the one-dimensional normal distribution to higher - dimensions. - - Such a distribution is specified by its mean and covariance matrix, - which are analogous to the mean (average or "centre") and variance - (standard deviation squared or "width") of the one-dimensional normal - distribution. - - Parameters - ---------- - mean : (N,) ndarray - Mean of the N-dimensional distribution. - cov : (N,N) ndarray - Covariance matrix of the distribution. - size : tuple of ints, optional - Given a shape of, for example, (m,n,k), m*n*k samples are - generated, and packed in an m-by-n-by-k arrangement. Because each - sample is N-dimensional, the output shape is (m,n,k,N). If no - shape is specified, a single sample is returned. - - Returns - ------- - out : ndarray - The drawn samples, arranged according to `size`. If the - shape given is (m,n,...), then the shape of `out` is - (m,n,...,N). - - In other words, each entry ``out[i,j,...,:]`` is an N-dimensional - value drawn from the distribution. 
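The output-shape rule described above (``size`` plus a trailing axis of length N) can be exercised directly, and the sample mean should converge to the `mean` parameter. A brief check (mean, covariance, and seed are illustrative values):

```python
import numpy as np

rng = np.random.RandomState(0)  # arbitrary seed
mean = [0.0, 5.0]
cov = [[1.0, 0.0], [0.0, 4.0]]

samples = rng.multivariate_normal(mean, cov, size=10000)
print(samples.shape)         # (10000, 2): size + (N,)
print(samples.mean(axis=0))  # close to [0, 5]
```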
- - Notes - ----- - The mean is a coordinate in N-dimensional space, which represents the - location where samples are most likely to be generated. This is - analogous to the peak of the bell curve for the one-dimensional or - univariate normal distribution. - - Covariance indicates the level to which two variables vary together. - From the multivariate normal distribution, we draw N-dimensional - samples, :math:`X = [x_1, x_2, ... x_N]`. The covariance matrix - element :math:`C_{ij}` is the covariance of :math:`x_i` and :math:`x_j`. - The element :math:`C_{ii}` is the variance of :math:`x_i` (i.e. its - "spread"). - - Instead of specifying the full covariance matrix, popular - approximations include: - - - Spherical covariance (`cov` is a multiple of the identity matrix) - - Diagonal covariance (`cov` has non-negative elements, and only on - the diagonal) - - This geometrical property can be seen in two dimensions by plotting - generated data-points: - - >>> mean = [0,0] - >>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis - - >>> import matplotlib.pyplot as plt - >>> x,y = np.random.multivariate_normal(mean,cov,5000).T - >>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show() - - Note that the covariance matrix must be non-negative definite. - - References - ---------- - .. [1] A. Papoulis, "Probability, Random Variables, and Stochastic - Processes," 3rd ed., McGraw-Hill Companies, 1991 - .. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification," - 2nd ed., Wiley, 2001. 
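The diagonal-covariance case discussed in the Notes makes the components independent, with the per-axis variances read straight off the diagonal; this is the geometry behind the ``[[1,0],[0,100]]`` plotting example. A numeric version of that check (seed arbitrary, no plotting required):

```python
import numpy as np

rng = np.random.RandomState(42)  # arbitrary seed
cov = [[1.0, 0.0], [0.0, 100.0]]  # diagonal covariance, as in the example

x, y = rng.multivariate_normal([0.0, 0.0], cov, size=20000).T
print(x.var(), y.var())  # roughly 1 and 100: points spread ~10x wider along y
```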
- - Examples - -------- - >>> mean = (1,2) - >>> cov = [[1,0],[1,0]] - >>> x = np.random.multivariate_normal(mean,cov,(3,3)) - >>> x.shape - (3, 3, 2) - - The following is probably true, given that 0.6 is roughly twice the - standard deviation: - - >>> print list( (x[0,0,:] - mean) < 0.6 ) - [True, True] - - """ - # Check preconditions on arguments - mean = np.array(mean) - cov = np.array(cov) - if size is None: - shape = [] - else: - shape = size - if len(mean.shape) != 1: - raise ValueError("mean must be 1 dimensional") - if (len(cov.shape) != 2) or (cov.shape[0] != cov.shape[1]): - raise ValueError("cov must be 2 dimensional and square") - if mean.shape[0] != cov.shape[0]: - raise ValueError("mean and cov must have same length") - # Compute shape of output - if isinstance(shape, int): - shape = [shape] - final_shape = list(shape[:]) - final_shape.append(mean.shape[0]) - # Create a matrix of independent standard normally distributed random - # numbers. The matrix has rows with the same length as mean and as - # many rows are necessary to form a matrix of shape final_shape. - x = self.standard_normal(np.multiply.reduce(final_shape)) - x.shape = (np.multiply.reduce(final_shape[0:len(final_shape)-1]), - mean.shape[0]) - # Transform matrix of standard normals into matrix where each row - # contains multivariate normals with the desired covariance. - # Compute A such that dot(transpose(A),A) == cov. - # Then the matrix products of the rows of x and A has the desired - # covariance. Note that sqrt(s)*v where (u,s,v) is the singular value - # decomposition of cov is such an A. - - from numpy.dual import svd - # XXX: we really should be doing this by Cholesky decomposition - (u,s,v) = svd(cov) - x = np.dot(x*np.sqrt(s),v) - # The rows of x now have the correct covariance but mean 0. Add - # mean to each row. Then each row will have mean mean. 
- np.add(mean,x,x) - x.shape = tuple(final_shape) - return x - - def multinomial(self, long n, object pvals, size=None): - """ - multinomial(n, pvals, size=None) - - Draw samples from a multinomial distribution. - - The multinomial distribution is a multivariate generalisation of the - binomial distribution. Take an experiment with one of ``p`` - possible outcomes. An example of such an experiment is throwing a dice, - where the outcome can be 1 through 6. Each sample drawn from the - distribution represents `n` such experiments. Its values, - ``X_i = [X_0, X_1, ..., X_p]``, represent the number of times the outcome - was ``i``. - - Parameters - ---------- - n : int - Number of experiments. - pvals : sequence of floats, length p - Probabilities of each of the ``p`` different outcomes. These - should sum to 1 (however, the last element is always assumed to - account for the remaining probability, as long as - ``sum(pvals[:-1]) <= 1)``. - size : tuple of ints - Given a `size` of ``(M, N, K)``, then ``M*N*K`` samples are drawn, - and the output shape becomes ``(M, N, K, p)``, since each sample - has shape ``(p,)``. - - Examples - -------- - Throw a dice 20 times: - - >>> np.random.multinomial(20, [1/6.]*6, size=1) - array([[4, 1, 7, 5, 2, 1]]) - - It landed 4 times on 1, once on 2, etc. - - Now, throw the dice 20 times, and 20 times again: - - >>> np.random.multinomial(20, [1/6.]*6, size=2) - array([[3, 4, 3, 3, 4, 3], - [2, 4, 3, 4, 0, 7]]) - - For the first run, we threw 3 times 1, 4 times 2, etc. For the second, - we threw 2 times 1, 4 times 2, etc. 
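Since each multinomial draw distributes exactly `n` outcomes over the ``p`` categories, every sampled row must sum to `n`; the per-category counts in the docstring examples above vary with the RNG state, but the row sums do not. A small check of the dice example:

```python
import numpy as np

rng = np.random.RandomState(0)  # arbitrary seed
draws = rng.multinomial(20, [1 / 6.0] * 6, size=5)

print(draws.shape)        # (5, 6): five experiments over six outcomes
print(draws.sum(axis=1))  # every row sums to 20
```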
- - A loaded dice is more likely to land on number 6: - - >>> np.random.multinomial(100, [1/7.]*5) - array([13, 16, 13, 16, 42]) - - """ - cdef long d - cdef ndarray parr "arrayObject_parr", mnarr "arrayObject_mnarr" - cdef double *pix - cdef long *mnix - cdef long i, j, dn - cdef double Sum - - d = len(pvals) - parr = PyArray_ContiguousFromObject(pvals, NPY_DOUBLE, 1, 1) - pix = parr.data - - if kahan_sum(pix, d-1) > (1.0 + 1e-12): - raise ValueError("sum(pvals[:-1]) > 1.0") - - if size is None: - shape = (d,) - elif type(size) is int: - shape = (size, d) - else: - shape = size + (d,) - - multin = np.zeros(shape, int) - mnarr = multin - mnix = mnarr.data - i = 0 - while i < PyArray_SIZE(mnarr): - Sum = 1.0 - dn = n - for j from 0 <= j < d-1: - mnix[i+j] = rk_binomial(self.internal_state, dn, pix[j]/Sum) - dn = dn - mnix[i+j] - if dn <= 0: - break - Sum = Sum - pix[j] - if dn > 0: - mnix[i+d-1] = dn - - i = i + d - - return multin - - def dirichlet(self, object alpha, size=None): - """ - dirichlet(alpha, size=None) - - Draw samples from the Dirichlet distribution. - - Draw `size` samples of dimension k from a Dirichlet distribution. A - Dirichlet-distributed random variable can be seen as a multivariate - generalization of a Beta distribution. Dirichlet pdf is the conjugate - prior of a multinomial in Bayesian inference. - - Parameters - ---------- - alpha : array - Parameter of the distribution (k dimension for sample of - dimension k). - size : array - Number of samples to draw. - - Notes - ----- - .. math:: X \\approx \\prod_{i=1}^{k}{x^{\\alpha_i-1}_i} - - Uses the following property for computation: for each dimension, - draw a random sample y_i from a standard gamma generator of shape - `alpha_i`, then - :math:`X = \\frac{1}{\\sum_{i=1}^k{y_i}} (y_1, \\ldots, y_n)` is - Dirichlet distributed. - - References - ---------- - .. 
[1] David McKay, "Information Theory, Inference and Learning - Algorithms," chapter 23, - http://www.inference.phy.cam.ac.uk/mackay/ - - """ - - #================= - # Pure python algo - #================= - #alpha = N.atleast_1d(alpha) - #k = alpha.size - - #if n == 1: - # val = N.zeros(k) - # for i in range(k): - # val[i] = sgamma(alpha[i], n) - # val /= N.sum(val) - #else: - # val = N.zeros((k, n)) - # for i in range(k): - # val[i] = sgamma(alpha[i], n) - # val /= N.sum(val, axis = 0) - # val = val.T - - #return val - - cdef long k - cdef long totsize - cdef ndarray alpha_arr, val_arr - cdef double *alpha_data, *val_data - cdef long i, j - cdef double acc, invacc - - k = len(alpha) - alpha_arr = PyArray_ContiguousFromObject(alpha, NPY_DOUBLE, 1, 1) - alpha_data = alpha_arr.data - - if size is None: - shape = (k,) - elif type(size) is int: - shape = (size, k) - else: - shape = size + (k,) - - diric = np.zeros(shape, np.float64) - val_arr = diric - val_data= val_arr.data - - i = 0 - totsize = PyArray_SIZE(val_arr) - while i < totsize: - acc = 0.0 - for j from 0 <= j < k: - val_data[i+j] = rk_standard_gamma(self.internal_state, alpha_data[j]) - acc = acc + val_data[i+j] - invacc = 1/acc - for j from 0 <= j < k: - val_data[i+j] = val_data[i+j] * invacc - i = i + k - - return diric - - # Shuffling and permutations: - def shuffle(self, object x): - """ - shuffle(x) - - Modify a sequence in-place by shuffling its contents. 
- - """ - cdef long i, j - cdef int copy - - i = len(x) - 1 - try: - j = len(x[0]) - except: - j = 0 - - if (j == 0): - # adaptation of random.shuffle() - while i > 0: - j = rk_interval(i, self.internal_state) - x[i], x[j] = x[j], x[i] - i = i - 1 - else: - # make copies - copy = hasattr(x[0], 'copy') - if copy: - while(i > 0): - j = rk_interval(i, self.internal_state) - x[i], x[j] = x[j].copy(), x[i].copy() - i = i - 1 - else: - while(i > 0): - j = rk_interval(i, self.internal_state) - x[i], x[j] = x[j][:], x[i][:] - i = i - 1 - - def permutation(self, object x): - """ - permutation(x) - - Randomly permute a sequence, or return a permuted range. - - Parameters - ---------- - x : int or array_like - If `x` is an integer, randomly permute ``np.arange(x)``. - If `x` is an array, make a copy and shuffle the elements - randomly. - - Returns - ------- - out : ndarray - Permuted sequence or array range. - - Examples - -------- - >>> np.random.permutation(10) - array([1, 7, 4, 3, 0, 9, 2, 5, 8, 6]) - - >>> np.random.permutation([1, 4, 9, 12, 15]) - array([15, 1, 9, 4, 12]) - - """ - if isinstance(x, (int, long, np.integer)): - arr = np.arange(x) - else: - arr = np.array(x) - self.shuffle(arr) - return arr - -_rand = RandomState() -seed = _rand.seed -get_state = _rand.get_state -set_state = _rand.set_state -random_sample = _rand.random_sample -randint = _rand.randint -bytes = _rand.bytes -uniform = _rand.uniform -rand = _rand.rand -randn = _rand.randn -random_integers = _rand.random_integers -standard_normal = _rand.standard_normal -normal = _rand.normal -beta = _rand.beta -exponential = _rand.exponential -standard_exponential = _rand.standard_exponential -standard_gamma = _rand.standard_gamma -gamma = _rand.gamma -f = _rand.f -noncentral_f = _rand.noncentral_f -chisquare = _rand.chisquare -noncentral_chisquare = _rand.noncentral_chisquare -standard_cauchy = _rand.standard_cauchy -standard_t = _rand.standard_t -vonmises = _rand.vonmises -pareto = _rand.pareto -weibull = 
_rand.weibull -power = _rand.power -laplace = _rand.laplace -gumbel = _rand.gumbel -logistic = _rand.logistic -lognormal = _rand.lognormal -rayleigh = _rand.rayleigh -wald = _rand.wald -triangular = _rand.triangular - -binomial = _rand.binomial -negative_binomial = _rand.negative_binomial -poisson = _rand.poisson -zipf = _rand.zipf -geometric = _rand.geometric -hypergeometric = _rand.hypergeometric -logseries = _rand.logseries - -multivariate_normal = _rand.multivariate_normal -multinomial = _rand.multinomial -dirichlet = _rand.dirichlet - -shuffle = _rand.shuffle -permutation = _rand.permutation diff --git a/pythonPackages/numpy/numpy/random/mtrand/mtrand_py_helper.h b/pythonPackages/numpy/numpy/random/mtrand/mtrand_py_helper.h deleted file mode 100755 index 2e7e4b7dc0..0000000000 --- a/pythonPackages/numpy/numpy/random/mtrand/mtrand_py_helper.h +++ /dev/null @@ -1,23 +0,0 @@ -#ifndef _MTRAND_PY_HELPER_H_ -#define _MTRAND_PY_HELPER_H_ - -#include - -static PyObject *empty_py_bytes(unsigned long length, void **bytes) -{ - PyObject *b; -#if PY_MAJOR_VERSION >= 3 - b = PyBytes_FromStringAndSize(NULL, length); - if (b) { - *bytes = PyBytes_AS_STRING(b); - } -#else - b = PyString_FromStringAndSize(NULL, length); - if (b) { - *bytes = PyString_AS_STRING(b); - } -#endif - return b; -} - -#endif /* _MTRAND_PY_HELPER_H_ */ diff --git a/pythonPackages/numpy/numpy/random/mtrand/numpy.pxi b/pythonPackages/numpy/numpy/random/mtrand/numpy.pxi deleted file mode 100755 index 3c5f6f9562..0000000000 --- a/pythonPackages/numpy/numpy/random/mtrand/numpy.pxi +++ /dev/null @@ -1,133 +0,0 @@ -# :Author: Travis Oliphant - -cdef extern from "numpy/arrayobject.h": - - cdef enum NPY_TYPES: - NPY_BOOL - NPY_BYTE - NPY_UBYTE - NPY_SHORT - NPY_USHORT - NPY_INT - NPY_UINT - NPY_LONG - NPY_ULONG - NPY_LONGLONG - NPY_ULONGLONG - NPY_FLOAT - NPY_DOUBLE - NPY_LONGDOUBLE - NPY_CFLOAT - NPY_CDOUBLE - NPY_CLONGDOUBLE - NPY_OBJECT - NPY_STRING - NPY_UNICODE - NPY_VOID - NPY_NTYPES - NPY_NOTYPE - - cdef 
enum requirements: - NPY_CONTIGUOUS - NPY_FORTRAN - NPY_OWNDATA - NPY_FORCECAST - NPY_ENSURECOPY - NPY_ENSUREARRAY - NPY_ELEMENTSTRIDES - NPY_ALIGNED - NPY_NOTSWAPPED - NPY_WRITEABLE - NPY_UPDATEIFCOPY - NPY_ARR_HAS_DESCR - - NPY_BEHAVED - NPY_BEHAVED_NS - NPY_CARRAY - NPY_CARRAY_RO - NPY_FARRAY - NPY_FARRAY_RO - NPY_DEFAULT - - NPY_IN_ARRAY - NPY_OUT_ARRAY - NPY_INOUT_ARRAY - NPY_IN_FARRAY - NPY_OUT_FARRAY - NPY_INOUT_FARRAY - - NPY_UPDATE_ALL - - cdef enum defines: - NPY_MAXDIMS - - ctypedef struct npy_cdouble: - double real - double imag - - ctypedef struct npy_cfloat: - double real - double imag - - ctypedef int npy_intp - - ctypedef extern class numpy.dtype [object PyArray_Descr]: - cdef int type_num, elsize, alignment - cdef char type, kind, byteorder, hasobject - cdef object fields, typeobj - - ctypedef extern class numpy.ndarray [object PyArrayObject]: - cdef char *data - cdef int nd - cdef npy_intp *dimensions - cdef npy_intp *strides - cdef object base - cdef dtype descr - cdef int flags - - ctypedef extern class numpy.flatiter [object PyArrayIterObject]: - cdef int nd_m1 - cdef npy_intp index, size - cdef ndarray ao - cdef char *dataptr - - ctypedef extern class numpy.broadcast [object PyArrayMultiIterObject]: - cdef int numiter - cdef npy_intp size, index - cdef int nd - cdef npy_intp *dimensions - cdef void **iters - - object PyArray_ZEROS(int ndims, npy_intp* dims, NPY_TYPES type_num, int fortran) - object PyArray_EMPTY(int ndims, npy_intp* dims, NPY_TYPES type_num, int fortran) - dtype PyArray_DescrFromTypeNum(NPY_TYPES type_num) - object PyArray_SimpleNew(int ndims, npy_intp* dims, NPY_TYPES type_num) - int PyArray_Check(object obj) - object PyArray_ContiguousFromAny(object obj, NPY_TYPES type, - int mindim, int maxdim) - object PyArray_ContiguousFromObject(object obj, NPY_TYPES type, - int mindim, int maxdim) - npy_intp PyArray_SIZE(ndarray arr) - npy_intp PyArray_NBYTES(ndarray arr) - void *PyArray_DATA(ndarray arr) - object PyArray_FromAny(object 
obj, dtype newtype, int mindim, int maxdim, - int requirements, object context) - object PyArray_FROMANY(object obj, NPY_TYPES type_num, int min, - int max, int requirements) - object PyArray_NewFromDescr(object subtype, dtype newtype, int nd, - npy_intp* dims, npy_intp* strides, void* data, - int flags, object parent) - - object PyArray_FROM_OTF(object obj, NPY_TYPES type, int flags) - object PyArray_EnsureArray(object) - - object PyArray_MultiIterNew(int n, ...) - - char *PyArray_MultiIter_DATA(broadcast multi, int i) - void PyArray_MultiIter_NEXTi(broadcast multi, int i) - void PyArray_MultiIter_NEXT(broadcast multi) - - object PyArray_IterNew(object arr) - void PyArray_ITER_NEXT(flatiter it) - - void import_array() diff --git a/pythonPackages/numpy/numpy/random/mtrand/randomkit.c b/pythonPackages/numpy/numpy/random/mtrand/randomkit.c deleted file mode 100755 index b18897e2c0..0000000000 --- a/pythonPackages/numpy/numpy/random/mtrand/randomkit.c +++ /dev/null @@ -1,402 +0,0 @@ -/* Random kit 1.3 */ - -/* - * Copyright (c) 2003-2005, Jean-Sebastien Roy (js@jeannot.org) - * - * The rk_random and rk_seed functions algorithms and the original design of - * the Mersenne Twister RNG: - * - * Copyright (C) 1997 - 2002, Makoto Matsumoto and Takuji Nishimura, - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * 3. The names of its contributors may not be used to endorse or promote - * products derived from this software without specific prior written - * permission. 
- * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR - * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, - * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, - * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR - * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF - * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING - * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - * Original algorithm for the implementation of rk_interval function from - * Richard J. Wagner's implementation of the Mersenne Twister RNG, optimised by - * Magnus Jonsson. - * - * Constants used in the rk_double implementation by Isaku Wada. - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sublicense, and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the following conditions: - * - * The above copyright notice and this permission notice shall be included - * in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* static char const rcsid[] = - "@(#) $Jeannot: randomkit.c,v 1.28 2005/07/21 22:14:09 js Exp $"; */ -#include <stddef.h> -#include <stdio.h> -#include <stdlib.h> -#include <errno.h> -#include <limits.h> -#include <math.h> - -#ifdef _WIN32 -/* - * Windows - * XXX: we have to use this ugly defined(__GNUC__) because it is not easy to - * detect the compiler used in distutils itself - */ -#if (defined(__GNUC__) && defined(NPY_NEEDS_MINGW_TIME_WORKAROUND)) - -/* - * FIXME: ideally, we should set this to the real version of MSVCRT. We need - * something higher than 0x601 to enable _ftime64 and co - */ -#define __MSVCRT_VERSION__ 0x0700 -#include <time.h> -#include <sys/timeb.h> - -/* - * the mingw msvcr import lib wrongly exports _ftime, which does not exist in - * the actual msvc runtime for version >= 8; we make it an alias to _ftime64, - * which is available in those versions of the runtime - */ -#define _FTIME(x) _ftime64((x)) -#else -#include <time.h> -#include <sys/timeb.h> -#define _FTIME(x) _ftime((x)) -#endif - -#ifndef RK_NO_WINCRYPT -/* Windows crypto */ -#ifndef _WIN32_WINNT -#define _WIN32_WINNT 0x0400 -#endif -#include <windows.h> -#include <wincrypt.h> -#endif - -#else -/* Unix */ -#include <time.h> -#include <sys/time.h> -#include <unistd.h> -#endif - -#include "randomkit.h" - -#ifndef RK_DEV_URANDOM -#define RK_DEV_URANDOM "/dev/urandom" -#endif - -#ifndef RK_DEV_RANDOM -#define RK_DEV_RANDOM "/dev/random" -#endif - -char *rk_strerror[RK_ERR_MAX] = -{ - "no error", - "random device unavailable" -}; - -/* static functions */ -static unsigned long rk_hash(unsigned long key); - -void -rk_seed(unsigned long seed, rk_state *state) -{ - int pos; - seed &= 0xffffffffUL; - - /* Knuth's PRNG as used in the Mersenne Twister reference implementation */ - for (pos = 0; pos < RK_STATE_LEN; pos++) { - state->key[pos] = seed; - seed = (1812433253UL * (seed 
^ (seed >> 30)) + pos + 1) & 0xffffffffUL; - } - state->pos = RK_STATE_LEN; - state->gauss = 0; - state->has_gauss = 0; - state->has_binomial = 0; -} - -/* Thomas Wang 32 bits integer hash function */ -unsigned long -rk_hash(unsigned long key) -{ - key += ~(key << 15); - key ^= (key >> 10); - key += (key << 3); - key ^= (key >> 6); - key += ~(key << 11); - key ^= (key >> 16); - return key; -} - -rk_error -rk_randomseed(rk_state *state) -{ -#ifndef _WIN32 - struct timeval tv; -#else - struct _timeb tv; -#endif - int i; - - if (rk_devfill(state->key, sizeof(state->key), 0) == RK_NOERR) { - /* ensures non-zero key */ - state->key[0] |= 0x80000000UL; - state->pos = RK_STATE_LEN; - state->gauss = 0; - state->has_gauss = 0; - state->has_binomial = 0; - - for (i = 0; i < 624; i++) { - state->key[i] &= 0xffffffffUL; - } - return RK_NOERR; - } - -#ifndef _WIN32 - gettimeofday(&tv, NULL); - rk_seed(rk_hash(getpid()) ^ rk_hash(tv.tv_sec) ^ rk_hash(tv.tv_usec) - ^ rk_hash(clock()), state); -#else - _FTIME(&tv); - rk_seed(rk_hash(tv.time) ^ rk_hash(tv.millitm) ^ rk_hash(clock()), state); -#endif - - return RK_ENODEV; -} - -/* Magic Mersenne Twister constants */ -#define N 624 -#define M 397 -#define MATRIX_A 0x9908b0dfUL -#define UPPER_MASK 0x80000000UL -#define LOWER_MASK 0x7fffffffUL - -/* Slightly optimised reference implementation of the Mersenne Twister */ -unsigned long -rk_random(rk_state *state) -{ - unsigned long y; - - if (state->pos == RK_STATE_LEN) { - int i; - - for (i = 0; i < N - M; i++) { - y = (state->key[i] & UPPER_MASK) | (state->key[i+1] & LOWER_MASK); - state->key[i] = state->key[i+M] ^ (y>>1) ^ (-(y & 1) & MATRIX_A); - } - for (; i < N - 1; i++) { - y = (state->key[i] & UPPER_MASK) | (state->key[i+1] & LOWER_MASK); - state->key[i] = state->key[i+(M-N)] ^ (y>>1) ^ (-(y & 1) & MATRIX_A); - } - y = (state->key[N - 1] & UPPER_MASK) | (state->key[0] & LOWER_MASK); - state->key[N - 1] = state->key[M - 1] ^ (y >> 1) ^ (-(y & 1) & MATRIX_A); - - state->pos = 0; - 
} - y = state->key[state->pos++]; - - /* Tempering */ - y ^= (y >> 11); - y ^= (y << 7) & 0x9d2c5680UL; - y ^= (y << 15) & 0xefc60000UL; - y ^= (y >> 18); - - return y; -} - -long -rk_long(rk_state *state) -{ - return rk_ulong(state) >> 1; -} - -unsigned long -rk_ulong(rk_state *state) -{ -#if ULONG_MAX <= 0xffffffffUL - return rk_random(state); -#else - return (rk_random(state) << 32) | (rk_random(state)); -#endif -} - -unsigned long -rk_interval(unsigned long max, rk_state *state) -{ - unsigned long mask = max, value; - - if (max == 0) { - return 0; - } - /* Smallest bit mask >= max */ - mask |= mask >> 1; - mask |= mask >> 2; - mask |= mask >> 4; - mask |= mask >> 8; - mask |= mask >> 16; -#if ULONG_MAX > 0xffffffffUL - mask |= mask >> 32; -#endif - - /* Search a random value in [0..mask] <= max */ -#if ULONG_MAX > 0xffffffffUL - if (max <= 0xffffffffUL) { - while ((value = (rk_random(state) & mask)) > max); - } - else { - while ((value = (rk_ulong(state) & mask)) > max); - } -#else - while ((value = (rk_ulong(state) & mask)) > max); -#endif - return value; -} - -double -rk_double(rk_state *state) -{ - /* shifts : 67108864 = 0x4000000, 9007199254740992 = 0x20000000000000 */ - long a = rk_random(state) >> 5, b = rk_random(state) >> 6; - return (a * 67108864.0 + b) / 9007199254740992.0; -} - -void -rk_fill(void *buffer, size_t size, rk_state *state) -{ - unsigned long r; - unsigned char *buf = buffer; - - for (; size >= 4; size -= 4) { - r = rk_random(state); - *(buf++) = r & 0xFF; - *(buf++) = (r >> 8) & 0xFF; - *(buf++) = (r >> 16) & 0xFF; - *(buf++) = (r >> 24) & 0xFF; - } - - if (!size) { - return; - } - r = rk_random(state); - for (; size; r >>= 8, size --) { - *(buf++) = (unsigned char)(r & 0xFF); - } -} - -rk_error -rk_devfill(void *buffer, size_t size, int strong) -{ -#ifndef _WIN32 - FILE *rfile; - int done; - - if (strong) { - rfile = fopen(RK_DEV_RANDOM, "rb"); - } - else { - rfile = fopen(RK_DEV_URANDOM, "rb"); - } - if (rfile == NULL) { - return 
RK_ENODEV; - } - done = fread(buffer, size, 1, rfile); - fclose(rfile); - if (done) { - return RK_NOERR; - } -#else - -#ifndef RK_NO_WINCRYPT - HCRYPTPROV hCryptProv; - BOOL done; - - if (!CryptAcquireContext(&hCryptProv, NULL, NULL, PROV_RSA_FULL, - CRYPT_VERIFYCONTEXT) || !hCryptProv) { - return RK_ENODEV; - } - done = CryptGenRandom(hCryptProv, size, (unsigned char *)buffer); - CryptReleaseContext(hCryptProv, 0); - if (done) { - return RK_NOERR; - } -#endif - -#endif - return RK_ENODEV; -} - -rk_error -rk_altfill(void *buffer, size_t size, int strong, rk_state *state) -{ - rk_error err; - - err = rk_devfill(buffer, size, strong); - if (err) { - rk_fill(buffer, size, state); - } - return err; -} - -double -rk_gauss(rk_state *state) -{ - if (state->has_gauss) { - const double tmp = state->gauss; - state->gauss = 0; - state->has_gauss = 0; - return tmp; - } - else { - double f, x1, x2, r2; - - do { - x1 = 2.0*rk_double(state) - 1.0; - x2 = 2.0*rk_double(state) - 1.0; - r2 = x1*x1 + x2*x2; - } - while (r2 >= 1.0 || r2 == 0.0); - - /* Box-Muller transform */ - f = sqrt(-2.0*log(r2)/r2); - /* Keep for next call */ - state->gauss = f*x1; - state->has_gauss = 1; - return f*x2; - } -} diff --git a/pythonPackages/numpy/numpy/random/mtrand/randomkit.h b/pythonPackages/numpy/numpy/random/mtrand/randomkit.h deleted file mode 100755 index e049488eeb..0000000000 --- a/pythonPackages/numpy/numpy/random/mtrand/randomkit.h +++ /dev/null @@ -1,189 +0,0 @@ -/* Random kit 1.3 */ - -/* - * Copyright (c) 2003-2005, Jean-Sebastien Roy (js@jeannot.org) - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sublicense, and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the 
following conditions: - * - * The above copyright notice and this permission notice shall be included - * in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY - * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE - * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* @(#) $Jeannot: randomkit.h,v 1.24 2005/07/21 22:14:09 js Exp $ */ - -/* - * Typical use: - * - * { - * rk_state state; - * unsigned long seed = 1, random_value; - * - * rk_seed(seed, &state); // Initialize the RNG - * ... - * random_value = rk_random(&state); // Generate random values in [0..RK_MAX] - * } - * - * Instead of rk_seed, you can use rk_randomseed which will get a random seed - * from /dev/urandom (or the clock, if /dev/urandom is unavailable): - * - * { - * rk_state state; - * unsigned long random_value; - * - * rk_randomseed(&state); // Initialize the RNG with a random seed - * ... - * random_value = rk_random(&state); // Generate random values in [0..RK_MAX] - * } - */ - -/* - * Useful macro: - * RK_DEV_RANDOM: the device used for random seeding. - * defaults to "/dev/urandom" - */ - -#include <stddef.h> - -#ifndef _RANDOMKIT_ -#define _RANDOMKIT_ - -#define RK_STATE_LEN 624 - -typedef struct rk_state_ -{ - unsigned long key[RK_STATE_LEN]; - int pos; - int has_gauss; /* !=0: gauss contains a gaussian deviate */ - double gauss; - - /* The rk_state structure has been extended to store the following - * information for the binomial generator. If the input values of n or p - * are different than nsave and psave, then the other parameters will be - * recomputed. 
RTK 2005-09-02 */ - - int has_binomial; /* !=0: following parameters initialized for - binomial */ - double psave; - long nsave; - double r; - double q; - double fm; - long m; - double p1; - double xm; - double xl; - double xr; - double c; - double laml; - double lamr; - double p2; - double p3; - double p4; - -} -rk_state; - -typedef enum { - RK_NOERR = 0, /* no error */ - RK_ENODEV = 1, /* no RK_DEV_RANDOM device */ - RK_ERR_MAX = 2 -} rk_error; - -/* error strings */ -extern char *rk_strerror[RK_ERR_MAX]; - -/* Maximum generated random value */ -#define RK_MAX 0xFFFFFFFFUL - -#ifdef __cplusplus -extern "C" { -#endif - -/* - * Initialize the RNG state using the given seed. - */ -extern void rk_seed(unsigned long seed, rk_state *state); - -/* - * Initialize the RNG state using a random seed. - * Uses /dev/random or, when unavailable, the clock (see randomkit.c). - * Returns RK_NOERR when no errors occurs. - * Returns RK_ENODEV when the use of RK_DEV_RANDOM failed (for example because - * there is no such device). In this case, the RNG was initialized using the - * clock. - */ -extern rk_error rk_randomseed(rk_state *state); - -/* - * Returns a random unsigned long between 0 and RK_MAX inclusive - */ -extern unsigned long rk_random(rk_state *state); - -/* - * Returns a random long between 0 and LONG_MAX inclusive - */ -extern long rk_long(rk_state *state); - -/* - * Returns a random unsigned long between 0 and ULONG_MAX inclusive - */ -extern unsigned long rk_ulong(rk_state *state); - -/* - * Returns a random unsigned long between 0 and max inclusive. - */ -extern unsigned long rk_interval(unsigned long max, rk_state *state); - -/* - * Returns a random double between 0.0 and 1.0, 1.0 excluded. 
- */ -extern double rk_double(rk_state *state); - -/* - * fill the buffer with size random bytes - */ -extern void rk_fill(void *buffer, size_t size, rk_state *state); - -/* - * fill the buffer with random bytes from the random device - * Returns RK_ENODEV if the device is unavailable, or RK_NOERR if it is - * On Unix, if strong is defined, RK_DEV_RANDOM is used. If not, RK_DEV_URANDOM - * is used instead. This parameter has no effect on Windows. - * Warning: on most unixes RK_DEV_RANDOM will wait for enough entropy to answer - * which can take a very long time on quiet systems. - */ -extern rk_error rk_devfill(void *buffer, size_t size, int strong); - -/* - * fill the buffer using rk_devfill if the random device is available and using - * rk_fill if it is not - * parameters have the same meaning as rk_fill and rk_devfill - * Returns RK_ENODEV if the device is unavailable, or RK_NOERR if it is - */ -extern rk_error rk_altfill(void *buffer, size_t size, int strong, - rk_state *state); - -/* - * return a random Gaussian deviate with variance unity and zero mean. - */ -extern double rk_gauss(rk_state *state); - -#ifdef __cplusplus -} -#endif - -#endif /* _RANDOMKIT_ */ diff --git a/pythonPackages/numpy/numpy/random/setup.py b/pythonPackages/numpy/numpy/random/setup.py deleted file mode 100755 index dde3119b7f..0000000000 --- a/pythonPackages/numpy/numpy/random/setup.py +++ /dev/null @@ -1,69 +0,0 @@ -from os.path import join, split, dirname -import os -import sys -from distutils.dep_util import newer -from distutils.msvccompiler import get_build_version as get_msvc_build_version - -def needs_mingw_ftime_workaround(): - # We need the mingw workaround for _ftime if the msvc runtime version is - # 7.1 or above and we build with mingw ... - # ... 
but we can't easily detect compiler version outside distutils command - # context, so we will need to detect in randomkit whether we build with gcc - msver = get_msvc_build_version() - if msver and msver >= 8: - return True - - return False - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration, get_mathlibs - config = Configuration('random',parent_package,top_path) - - def generate_libraries(ext, build_dir): - config_cmd = config.get_config_cmd() - libs = get_mathlibs() - tc = testcode_wincrypt() - if config_cmd.try_run(tc): - libs.append('Advapi32') - ext.libraries.extend(libs) - return None - - defs = [] - if needs_mingw_ftime_workaround(): - defs.append(("NPY_NEEDS_MINGW_TIME_WORKAROUND", None)) - - libs = [] - # Configure mtrand - config.add_extension('mtrand', - sources=[join('mtrand', x) for x in - ['mtrand.c', 'randomkit.c', 'initarray.c', - 'distributions.c']]+[generate_libraries], - libraries=libs, - depends = [join('mtrand','*.h'), - join('mtrand','*.pyx'), - join('mtrand','*.pxi'), - ], - define_macros = defs, - ) - - config.add_data_files(('.', join('mtrand', 'randomkit.h'))) - config.add_data_dir('tests') - - return config - -def testcode_wincrypt(): - return """\ -/* check to see if _WIN32 is defined */ -int main(int argc, char *argv[]) -{ -#ifdef _WIN32 - return 0; -#else - return 1; -#endif -} -""" - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/random/setupscons.py b/pythonPackages/numpy/numpy/random/setupscons.py deleted file mode 100755 index f5342c39ec..0000000000 --- a/pythonPackages/numpy/numpy/random/setupscons.py +++ /dev/null @@ -1,40 +0,0 @@ -import glob -from os.path import join, split - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration, get_mathlibs - config = Configuration('random',parent_package,top_path) - - source_files = 
[join('mtrand', i) for i in ['mtrand.c', - 'mtrand.pyx', - 'numpy.pxi', - 'randomkit.c', - 'randomkit.h', - 'Python.pxi', - 'initarray.c', - 'initarray.h', - 'distributions.c', - 'distributions.h', - ]] - config.add_sconscript('SConstruct', source_files = source_files) - config.add_data_files(('.', join('mtrand', 'randomkit.h'))) - config.add_data_dir('tests') - - return config - -def testcode_wincrypt(): - return """\ -/* check to see if _WIN32 is defined */ -int main(int argc, char *argv[]) -{ -#ifdef _WIN32 - return 0; -#else - return 1; -#endif -} -""" - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(configuration=configuration) diff --git a/pythonPackages/numpy/numpy/random/tests/test_random.py b/pythonPackages/numpy/numpy/random/tests/test_random.py deleted file mode 100755 index 8b5d083a37..0000000000 --- a/pythonPackages/numpy/numpy/random/tests/test_random.py +++ /dev/null @@ -1,111 +0,0 @@ -from numpy.testing import * -from numpy import random -import numpy as np - -class TestRegression(TestCase): - - def test_VonMises_range(self): - """Make sure generated random variables are in [-pi, pi]. - - Regression test for ticket #986. 
- """ - for mu in np.linspace(-7., 7., 5): - r = random.mtrand.vonmises(mu,1,50) - assert np.all(r > -np.pi) and np.all(r <= np.pi) - - def test_hypergeometric_range(self) : - """Test for ticket #921""" - assert_(np.all(np.random.hypergeometric(3, 18, 11, size=10) < 4)) - assert_(np.all(np.random.hypergeometric(18, 3, 11, size=10) > 0)) - - def test_logseries_convergence(self) : - """Test for ticket #923""" - N = 1000 - np.random.seed(0) - rvsn = np.random.logseries(0.8, size=N) - # these two frequency counts should be close to theoretical - # numbers with this large sample - # theoretical large N result is 0.49706795 - freq = np.sum(rvsn == 1) / float(N) - msg = "Frequency was %f, should be > 0.45" % freq - assert_(freq > 0.45, msg) - # theoretical large N result is 0.19882718 - freq = np.sum(rvsn == 2) / float(N) - msg = "Frequency was %f, should be < 0.23" % freq - assert_(freq < 0.23, msg) - - def test_permutation_longs(self): - np.random.seed(1234) - a = np.random.permutation(12) - np.random.seed(1234) - b = np.random.permutation(12L) - assert_array_equal(a, b) - -class TestMultinomial(TestCase): - def test_basic(self): - random.multinomial(100, [0.2, 0.8]) - - def test_zero_probability(self): - random.multinomial(100, [0.2, 0.8, 0.0, 0.0, 0.0]) - - def test_int_negative_interval(self): - assert -5 <= random.randint(-5,-1) < -1 - x = random.randint(-5,-1,5) - assert np.all(-5 <= x) - assert np.all(x < -1) - - - -class TestSetState(TestCase): - def setUp(self): - self.seed = 1234567890 - self.prng = random.RandomState(self.seed) - self.state = self.prng.get_state() - - def test_basic(self): - old = self.prng.tomaxint(16) - self.prng.set_state(self.state) - new = self.prng.tomaxint(16) - assert np.all(old == new) - - def test_gaussian_reset(self): - """ Make sure the cached every-other-Gaussian is reset. 
- """ - old = self.prng.standard_normal(size=3) - self.prng.set_state(self.state) - new = self.prng.standard_normal(size=3) - assert np.all(old == new) - - def test_gaussian_reset_in_media_res(self): - """ When the state is saved with a cached Gaussian, make sure the cached - Gaussian is restored. - """ - self.prng.standard_normal() - state = self.prng.get_state() - old = self.prng.standard_normal(size=3) - self.prng.set_state(state) - new = self.prng.standard_normal(size=3) - assert np.all(old == new) - - def test_backwards_compatibility(self): - """ Make sure we can accept old state tuples that do not have the cached - Gaussian value. - """ - old_state = self.state[:-2] - x1 = self.prng.standard_normal(size=16) - self.prng.set_state(old_state) - x2 = self.prng.standard_normal(size=16) - self.prng.set_state(self.state) - x3 = self.prng.standard_normal(size=16) - assert np.all(x1 == x2) - assert np.all(x1 == x3) - - def test_negative_binomial(self): - """ Ensure that the negative binomial results take floating point - arguments without truncation. 
- """ - self.prng.negative_binomial(0.5, 0.5) - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/setup.py b/pythonPackages/numpy/numpy/setup.py deleted file mode 100755 index c55c85a256..0000000000 --- a/pythonPackages/numpy/numpy/setup.py +++ /dev/null @@ -1,27 +0,0 @@ -#!/usr/bin/env python - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('numpy',parent_package,top_path) - config.add_subpackage('distutils') - config.add_subpackage('testing') - config.add_subpackage('f2py') - config.add_subpackage('core') - config.add_subpackage('lib') - config.add_subpackage('oldnumeric') - config.add_subpackage('numarray') - config.add_subpackage('fft') - config.add_subpackage('linalg') - config.add_subpackage('random') - config.add_subpackage('ma') - config.add_subpackage('matrixlib') - config.add_subpackage('compat') - config.add_subpackage('polynomial') - config.add_subpackage('doc') - config.add_data_dir('doc') - config.add_data_dir('tests') - config.make_config_py() # installs __config__.py - return config - -if __name__ == '__main__': - print('This is the wrong setup.py file to run') diff --git a/pythonPackages/numpy/numpy/setupscons.py b/pythonPackages/numpy/numpy/setupscons.py deleted file mode 100755 index 59fa57a4de..0000000000 --- a/pythonPackages/numpy/numpy/setupscons.py +++ /dev/null @@ -1,42 +0,0 @@ -#!/usr/bin/env python -from os.path import join as pjoin - -def configuration(parent_package='', top_path=None): - from numpy.distutils.misc_util import Configuration - from numpy.distutils.misc_util import scons_generate_config_py - - pkgname = 'numpy' - config = Configuration(pkgname, parent_package, top_path, - setup_name = 'setupscons.py') - config.add_subpackage('distutils') - config.add_subpackage('testing') - config.add_subpackage('f2py') - config.add_subpackage('core') - config.add_subpackage('lib') - 
config.add_subpackage('oldnumeric') - config.add_subpackage('numarray') - config.add_subpackage('fft') - config.add_subpackage('linalg') - config.add_subpackage('random') - config.add_subpackage('ma') - config.add_subpackage('matrixlib') - config.add_subpackage('compat') - config.add_subpackage('polynomial') - config.add_subpackage('doc') - config.add_data_dir('doc') - config.add_data_dir('tests') - - def add_config(*args, **kw): - # Generate __config__, handle inplace issues. - if kw['scons_cmd'].inplace: - target = pjoin(kw['pkg_name'], '__config__.py') - else: - target = pjoin(kw['scons_cmd'].build_lib, kw['pkg_name'], - '__config__.py') - scons_generate_config_py(target) - config.add_sconscript(None, post_hook = add_config) - - return config - -if __name__ == '__main__': - print 'This is the wrong setup.py file to run' diff --git a/pythonPackages/numpy/numpy/testing/__init__.py b/pythonPackages/numpy/numpy/testing/__init__.py deleted file mode 100755 index f391c80537..0000000000 --- a/pythonPackages/numpy/numpy/testing/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -"""Common test support for all numpy test scripts. - -This single module should provide all the common functionality for numpy tests -in a single location, so that test scripts can just import it and work right -away. -""" - -from unittest import TestCase - -import decorators as dec -from utils import * -from numpytest import * -from nosetester import NoseTester as Tester -from nosetester import run_module_suite -test = Tester().test diff --git a/pythonPackages/numpy/numpy/testing/decorators.py b/pythonPackages/numpy/numpy/testing/decorators.py deleted file mode 100755 index 68335cc185..0000000000 --- a/pythonPackages/numpy/numpy/testing/decorators.py +++ /dev/null @@ -1,275 +0,0 @@ -""" -Decorators for labeling and modifying behavior of test objects. - -Decorators that merely return a modified version of the original -function object are straightforward. 
Decorators that return a new -function object need to use -:: - - nose.tools.make_decorator(original_function)(decorator) - -in returning the decorator, in order to preserve meta-data such as -function name, setup and teardown functions and so on - see -``nose.tools`` for more information. - -""" -import warnings -import sys - -from numpy.testing.utils import \ - WarningManager, WarningMessage - -def slow(t): - """ - Label a test as 'slow'. - - The exact definition of a slow test is obviously both subjective and - hardware-dependent, but in general any individual test that requires more - than a second or two should be labeled as slow (the whole suite consists of - thousands of tests, so even a second is significant). - - Parameters - ---------- - t : callable - The test to label as slow. - - Returns - ------- - t : callable - The decorated test `t`. - - Examples - -------- - The `numpy.testing` module includes ``import decorators as dec``. - A test can be decorated as slow like this:: - - from numpy.testing import * - - @dec.slow - def test_big(self): - print 'Big, slow test' - - """ - - t.slow = True - return t - -def setastest(tf=True): - """ - Signals to nose that this function is or is not a test. - - Parameters - ---------- - tf : bool - If True, specifies that the decorated callable is a test. - If False, specifies that the decorated callable is not a test. - Default is True. - - Notes - ----- - This decorator can't use the nose namespace, because it can be - called from a non-test module. See also ``istest`` and ``nottest`` in - ``nose.tools``. - - Examples - -------- - `setastest` can be used in the following way:: - - from numpy.testing.decorators import setastest - - @setastest(False) - def func_with_test_in_name(arg1, arg2): - pass - - """ - def set_test(t): - t.__test__ = tf - return t - return set_test - -def skipif(skip_condition, msg=None): - """ - Make function raise SkipTest exception if a given condition is true. 
- - If the condition is a callable, it is used at runtime to dynamically - make the decision. This is useful for tests that may require costly - imports, to delay the cost until the test suite is actually executed. - - Parameters - ---------- - skip_condition : bool or callable - Flag to determine whether to skip the decorated test. - msg : str, optional - Message to give on raising a SkipTest exception. Default is None. - - Returns - ------- - decorator : function - Decorator which, when applied to a function, causes SkipTest - to be raised when `skip_condition` is True, and the function - to be called normally otherwise. - - Notes - ----- - The decorator itself is decorated with the ``nose.tools.make_decorator`` - function in order to transmit function name, and various other metadata. - - """ - - def skip_decorator(f): - # Local import to avoid a hard nose dependency and only incur the - # import time overhead at actual test-time. - import nose - - # Allow for both boolean or callable skip conditions. - if callable(skip_condition): - skip_val = lambda : skip_condition() - else: - skip_val = lambda : skip_condition - - def get_msg(func,msg=None): - """Skip message with information about function being skipped.""" - if msg is None: - out = 'Test skipped due to test condition' - else: - out = '\n'+msg - - return "Skipping test: %s%s" % (func.__name__,out) - - # We need to define *two* skippers because Python doesn't allow both - # return with value and yield inside the same function. - def skipper_func(*args, **kwargs): - """Skipper for normal test functions.""" - if skip_val(): - raise nose.SkipTest(get_msg(f,msg)) - else: - return f(*args, **kwargs) - - def skipper_gen(*args, **kwargs): - """Skipper for test generators.""" - if skip_val(): - raise nose.SkipTest(get_msg(f,msg)) - else: - for x in f(*args, **kwargs): - yield x - - # Choose the right skipper to use when building the actual decorator. 
- if nose.util.isgenerator(f): - skipper = skipper_gen - else: - skipper = skipper_func - - return nose.tools.make_decorator(f)(skipper) - - return skip_decorator - - -def knownfailureif(fail_condition, msg=None): - """ - Make function raise KnownFailureTest exception if given condition is true. - - If the condition is a callable, it is used at runtime to dynamically - make the decision. This is useful for tests that may require costly - imports, to delay the cost until the test suite is actually executed. - - Parameters - ---------- - fail_condition : bool or callable - Flag to determine whether to mark the decorated test as a known - failure (if True) or not (if False). - msg : str, optional - Message to give on raising a KnownFailureTest exception. - Default is None. - - Returns - ------- - decorator : function - Decorator, which, when applied to a function, causes SkipTest - to be raised when `skip_condition` is True, and the function - to be called normally otherwise. - - Notes - ----- - The decorator itself is decorated with the ``nose.tools.make_decorator`` - function in order to transmit function name, and various other metadata. - - """ - if msg is None: - msg = 'Test skipped due to known failure' - - # Allow for both boolean or callable known failure conditions. - if callable(fail_condition): - fail_val = lambda : fail_condition() - else: - fail_val = lambda : fail_condition - - def knownfail_decorator(f): - # Local import to avoid a hard nose dependency and only incur the - # import time overhead at actual test-time. - import nose - from noseclasses import KnownFailureTest - def knownfailer(*args, **kwargs): - if fail_val(): - raise KnownFailureTest, msg - else: - return f(*args, **kwargs) - return nose.tools.make_decorator(f)(knownfailer) - - return knownfail_decorator - -def deprecated(conditional=True): - """ - Filter deprecation warnings while running the test suite. 
- - This decorator can be used to filter DeprecationWarning's, to avoid - printing them during the test suite run, while checking that the test - actually raises a DeprecationWarning. - - Parameters - ---------- - conditional : bool or callable, optional - Flag to determine whether to mark test as deprecated or not. If the - condition is a callable, it is used at runtime to dynamically make the - decision. Default is True. - - Returns - ------- - decorator : function - The `deprecated` decorator itself. - - Notes - ----- - .. versionadded:: 1.4.0 - - """ - def deprecate_decorator(f): - # Local import to avoid a hard nose dependency and only incur the - # import time overhead at actual test-time. - import nose - from noseclasses import KnownFailureTest - - def _deprecated_imp(*args, **kwargs): - # Poor man's replacement for the with statement - ctx = WarningManager(record=True) - l = ctx.__enter__() - warnings.simplefilter('always') - try: - f(*args, **kwargs) - if not len(l) > 0: - raise AssertionError("No warning raised when calling %s" - % f.__name__) - if not l[0].category is DeprecationWarning: - raise AssertionError("First warning for %s is not a " \ - "DeprecationWarning( is %s)" % (f.__name__, l[0])) - finally: - ctx.__exit__() - - if callable(conditional): - cond = conditional() - else: - cond = conditional - if cond: - return nose.tools.make_decorator(f)(_deprecated_imp) - else: - return f - return deprecate_decorator diff --git a/pythonPackages/numpy/numpy/testing/noseclasses.py b/pythonPackages/numpy/numpy/testing/noseclasses.py deleted file mode 100755 index f97ea91264..0000000000 --- a/pythonPackages/numpy/numpy/testing/noseclasses.py +++ /dev/null @@ -1,351 +0,0 @@ -# These classes implement a doctest runner plugin for nose, a "known failure" -# error class, and a customized TestProgram for NumPy. - -# Because this module imports nose directly, it should not -# be used except by nosetester.py to avoid a general NumPy -# dependency on nose. 
- -import os -import doctest - -import nose -from nose.plugins import doctests as npd -from nose.plugins.errorclass import ErrorClass, ErrorClassPlugin -from nose.plugins.base import Plugin -from nose.util import src, getpackage -import numpy -from nosetester import get_package_name -import inspect - -_doctest_ignore = ['generate_numpy_api.py', 'scons_support.py', - 'setupscons.py', 'setup.py'] - -# Some of the classes in this module begin with 'Numpy' to clearly distinguish -# them from the plethora of very similar names from nose/unittest/doctest - - -#----------------------------------------------------------------------------- -# Modified version of the one in the stdlib, that fixes a python bug (doctests -# not found in extension modules, http://bugs.python.org/issue3158) -class NumpyDocTestFinder(doctest.DocTestFinder): - - def _from_module(self, module, object): - """ - Return true if the given object is defined in the given - module. - """ - if module is None: - #print '_fm C1' # dbg - return True - elif inspect.isfunction(object): - #print '_fm C2' # dbg - return module.__dict__ is object.func_globals - elif inspect.isbuiltin(object): - #print '_fm C2-1' # dbg - return module.__name__ == object.__module__ - elif inspect.isclass(object): - #print '_fm C3' # dbg - return module.__name__ == object.__module__ - elif inspect.ismethod(object): - # This one may be a bug in cython that fails to correctly set the - # __module__ attribute of methods, but since the same error is easy - # to make by extension code writers, having this safety in place - # isn't such a bad idea - #print '_fm C3-1' # dbg - return module.__name__ == object.im_class.__module__ - elif inspect.getmodule(object) is not None: - #print '_fm C4' # dbg - #print 'C4 mod',module,'obj',object # dbg - return module is inspect.getmodule(object) - elif hasattr(object, '__module__'): - #print '_fm C5' # dbg - return module.__name__ == object.__module__ - elif isinstance(object, property): - #print '_fm 
C6' # dbg - return True # [XX] no way not be sure. - else: - raise ValueError("object must be a class or function") - - - - def _find(self, tests, obj, name, module, source_lines, globs, seen): - """ - Find tests for the given object and any contained objects, and - add them to `tests`. - """ - - doctest.DocTestFinder._find(self,tests, obj, name, module, - source_lines, globs, seen) - - # Below we re-run pieces of the above method with manual modifications, - # because the original code is buggy and fails to correctly identify - # doctests in extension modules. - - # Local shorthands - from inspect import isroutine, isclass, ismodule, isfunction, \ - ismethod - - # Look for tests in a module's contained objects. - if ismodule(obj) and self._recurse: - for valname, val in obj.__dict__.items(): - valname1 = '%s.%s' % (name, valname) - if ( (isroutine(val) or isclass(val)) - and self._from_module(module, val) ): - - self._find(tests, val, valname1, module, source_lines, - globs, seen) - - - # Look for tests in a class's contained objects. - if isclass(obj) and self._recurse: - #print 'RECURSE into class:',obj # dbg - for valname, val in obj.__dict__.items(): - #valname1 = '%s.%s' % (name, valname) # dbg - #print 'N',name,'VN:',valname,'val:',str(val)[:77] # dbg - # Special handling for staticmethod/classmethod. - if isinstance(val, staticmethod): - val = getattr(obj, valname) - if isinstance(val, classmethod): - val = getattr(obj, valname).im_func - - # Recurse to methods, properties, and nested classes. - if ((isfunction(val) or isclass(val) or - ismethod(val) or isinstance(val, property)) and - self._from_module(module, val)): - valname = '%s.%s' % (name, valname) - self._find(tests, val, valname, module, source_lines, - globs, seen) - - -class NumpyDocTestCase(npd.DocTestCase): - """Proxy for DocTestCase: provides an address() method that - returns the correct address for the doctest case. Otherwise - acts as a proxy to the test case. 
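The `_find` override above re-runs the stdlib finder's recursion with fixes for extension modules; on ordinary Python objects the stdlib `doctest.DocTestFinder` already works, which is what the subclass delegates to first:

```python
import doctest

def sample():
    """Toy function carrying a doctest.

    >>> 2 + 2
    4
    """

# DocTestFinder.find walks an object and returns DocTest instances; the
# NumpyDocTestFinder subclass exists because this walk misses members of
# extension (C) modules (http://bugs.python.org/issue3158).
tests = doctest.DocTestFinder().find(sample)
print(len(tests), tests[0].name, len(tests[0].examples))
```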
To provide hints for address(), - an obj may also be passed -- this will be used as the test object - for purposes of determining the test address, if it is provided. - """ - - # doctests loaded via find(obj) omit the module name - # so we need to override id, __repr__ and shortDescription - # bonus: this will squash a 2.3 vs 2.4 incompatiblity - def id(self): - name = self._dt_test.name - filename = self._dt_test.filename - if filename is not None: - pk = getpackage(filename) - if pk is not None and not name.startswith(pk): - name = "%s.%s" % (pk, name) - return name - - -# second-chance checker; if the default comparison doesn't -# pass, then see if the expected output string contains flags that -# tell us to ignore the output -class NumpyOutputChecker(doctest.OutputChecker): - def check_output(self, want, got, optionflags): - ret = doctest.OutputChecker.check_output(self, want, got, - optionflags) - if not ret: - if "#random" in want: - return True - - # it would be useful to normalize endianness so that - # bigendian machines don't fail all the tests (and there are - # actually some bigendian examples in the doctests). 
Let's try - # making them all little endian - got = got.replace("'>","'<") - want= want.replace("'>","'<") - - # try to normalize out 32 and 64 bit default int sizes - for sz in [4,8]: - got = got.replace("'>> np.testing.nosetester.get_package_name('nonsense') - 'numpy' - - """ - - fullpath = filepath[:] - pkg_name = [] - while 'site-packages' in filepath or 'dist-packages' in filepath: - filepath, p2 = os.path.split(filepath) - if p2 in ('site-packages', 'dist-packages'): - break - pkg_name.append(p2) - - # if package name determination failed, just default to numpy/scipy - if not pkg_name: - if 'scipy' in fullpath: - return 'scipy' - else: - return 'numpy' - - # otherwise, reverse to get correct order and return - pkg_name.reverse() - - # don't include the outer egg directory - if pkg_name[0].endswith('.egg'): - pkg_name.pop(0) - - return '.'.join(pkg_name) - -def import_nose(): - """ Import nose only when needed. - """ - fine_nose = True - minimum_nose_version = (0,10,0) - try: - import nose - from nose.tools import raises - except ImportError: - fine_nose = False - else: - if nose.__versioninfo__ < minimum_nose_version: - fine_nose = False - - if not fine_nose: - msg = 'Need nose >= %d.%d.%d for tests - see ' \ - 'http://somethingaboutorange.com/mrl/projects/nose' % \ - minimum_nose_version - - raise ImportError(msg) - - return nose - -def run_module_suite(file_to_run = None): - if file_to_run is None: - f = sys._getframe(1) - file_to_run = f.f_locals.get('__file__', None) - assert file_to_run is not None - - import_nose().run(argv=['',file_to_run]) - -# contructs NoseTester method docstrings -def _docmethod(meth, testtype): - if not meth.__doc__: - return - - test_header = \ - '''Parameters - ---------- - label : {'fast', 'full', '', attribute identifer} - Identifies the %(testtype)ss to run. This can be a string to - pass to the nosetests executable with the '-A' option, or one of - several special values. 
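(The hunk above is truncated mid-method in this record; the elided portion is left as-is.) The second-chance checker idea is small enough to sketch on its own: fall back to accepting any output when the expected text carries the `#random` marker. Class name here is illustrative:

```python
import doctest

class RandomAwareChecker(doctest.OutputChecker):
    """Accept any actual output when the expected output contains '#random',
    mirroring the first fallback in NumpyOutputChecker.check_output."""
    def check_output(self, want, got, optionflags):
        if doctest.OutputChecker.check_output(self, want, got, optionflags):
            return True
        # Marker says the example's output is nondeterministic
        return "#random" in want

checker = RandomAwareChecker()
print(checker.check_output("#random 0.123\n", "0.456\n", 0))  # marker: pass
print(checker.check_output("4\n", "5\n", 0))                   # real mismatch
```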
- Special values are: - 'fast' - the default - which corresponds to nosetests -A option - of 'not slow'. - 'full' - fast (as above) and slow %(testtype)ss as in the - no -A option to nosetests - same as '' - None or '' - run all %(testtype)ss - attribute_identifier - string passed directly to nosetests as '-A' - verbose : integer - verbosity value for test outputs, 1-10 - extra_argv : list - List with any extra args to pass to nosetests''' \ - % {'testtype': testtype} - - meth.__doc__ = meth.__doc__ % {'test_header':test_header} - - -class NoseTester(object): - """ - Nose test runner. - - This class is made available as numpy.testing.Tester, and a test function - is typically added to a package's __init__.py like so:: - - from numpy.testing import Tester - test = Tester().test - - Calling this test function finds and runs all tests associated with the - package and all its sub-packages. - - Attributes - ---------- - package_path : str - Full path to the package to test. - package_name : str - Name of the package to test. - - Parameters - ---------- - package : module, str or None - The package to test. If a string, this should be the full path to - the package. If None (default), `package` is set to the module from - which `NoseTester` is initialized. 
- - """ - - def __init__(self, package=None): - ''' Test class init - - Parameters - ---------- - package : string or module - If string, gives full path to package - If None, extract calling module path - Default is None - ''' - package_name = None - if package is None: - f = sys._getframe(1) - package_path = f.f_locals.get('__file__', None) - assert package_path is not None - package_path = os.path.dirname(package_path) - package_name = f.f_locals.get('__name__', None) - elif isinstance(package, type(os)): - package_path = os.path.dirname(package.__file__) - package_name = getattr(package, '__name__', None) - else: - package_path = str(package) - - self.package_path = package_path - - # find the package name under test; this name is used to limit coverage - # reporting (if enabled) - if package_name is None: - package_name = get_package_name(package_path) - self.package_name = package_name - - def _test_argv(self, label, verbose, extra_argv): - ''' Generate argv for nosetest command - - %(test_header)s - ''' - argv = [__file__, self.package_path, '-s'] - if label and label != 'full': - if not isinstance(label, basestring): - raise TypeError, 'Selection label should be a string' - if label == 'fast': - label = 'not slow' - argv += ['-A', label] - argv += ['--verbosity', str(verbose)] - if extra_argv: - argv += extra_argv - return argv - - def _show_system_info(self): - nose = import_nose() - - import numpy - print "NumPy version %s" % numpy.__version__ - npdir = os.path.dirname(numpy.__file__) - print "NumPy is installed in %s" % npdir - - if 'scipy' in self.package_name: - import scipy - print "SciPy version %s" % scipy.__version__ - spdir = os.path.dirname(scipy.__file__) - print "SciPy is installed in %s" % spdir - - pyversion = sys.version.replace('\n','') - print "Python version %s" % pyversion - print "nose version %d.%d.%d" % nose.__versioninfo__ - - - def prepare_test_args(self, label='fast', verbose=1, extra_argv=None, - doctests=False, coverage=False): - 
""" - Run tests for module using nose. - - This method does the heavy lifting for the `test` method. It takes all - the same arguments, for details see `test`. - - See Also - -------- - test - - """ - - # if doctests is in the extra args, remove it and set the doctest - # flag so the NumPy doctester is used instead - if extra_argv and '--with-doctest' in extra_argv: - extra_argv.remove('--with-doctest') - doctests = True - - argv = self._test_argv(label, verbose, extra_argv) - if doctests: - argv += ['--with-numpydoctest'] - - if coverage: - argv+=['--cover-package=%s' % self.package_name, '--with-coverage', - '--cover-tests', '--cover-inclusive', '--cover-erase'] - - # enable assert introspection - argv += ['--detailed-errors'] - - # bypass these samples under distutils - argv += ['--exclude','f2py_ext'] - argv += ['--exclude','f2py_f90_ext'] - argv += ['--exclude','gen_ext'] - argv += ['--exclude','pyrex_ext'] - argv += ['--exclude','swig_ext'] - - nose = import_nose() - - # construct list of plugins - import nose.plugins.builtin - from noseclasses import NumpyDoctest, KnownFailure - plugins = [NumpyDoctest(), KnownFailure()] - plugins += [p() for p in nose.plugins.builtin.plugins] - return argv, plugins - - def test(self, label='fast', verbose=1, extra_argv=None, doctests=False, - coverage=False): - """ - Run tests for module using nose. - - Parameters - ---------- - label : {'fast', 'full', '', attribute identifier}, optional - Identifies the tests to run. This can be a string to pass to the - nosetests executable with the '-A' option, or one of - several special values. - Special values are: - 'fast' - the default - which corresponds to the ``nosetests -A`` - option of 'not slow'. - 'full' - fast (as above) and slow tests as in the - 'no -A' option to nosetests - this is the same as ''. - None or '' - run all tests. - attribute_identifier - string passed directly to nosetests as '-A'. 
- verbose : int, optional - Verbosity value for test outputs, in the range 1-10. Default is 1. - extra_argv : list, optional - List with any extra arguments to pass to nosetests. - doctests : bool, optional - If True, run doctests in module. Default is False. - coverage : bool, optional - If True, report coverage of NumPy code. Default is False. - (This requires the `coverage module: - `_). - - Returns - ------- - result : object - Returns the result of running the tests as a - ``nose.result.TextTestResult`` object. - - Notes - ----- - Each NumPy module exposes `test` in its namespace to run all tests for it. - For example, to run all tests for numpy.lib:: - - >>> np.lib.test() - - Examples - -------- - >>> result = np.lib.test() - Running unit tests for numpy.lib - ... - Ran 976 tests in 3.933s - - OK - - >>> result.errors - [] - >>> result.knownfail - [] - - """ - - # cap verbosity at 3 because nose becomes *very* verbose beyond that - verbose = min(verbose, 3) - - import utils - utils.verbose = verbose - - if doctests: - print "Running unit tests and doctests for %s" % self.package_name - else: - print "Running unit tests for %s" % self.package_name - - self._show_system_info() - - # reset doctest state on every run - import doctest - doctest.master = None - - argv, plugins = self.prepare_test_args(label, verbose, extra_argv, - doctests, coverage) - from noseclasses import NumpyTestProgram - t = NumpyTestProgram(argv=argv, exit=False, plugins=plugins) - return t.result - - def bench(self, label='fast', verbose=1, extra_argv=None): - """ - Run benchmarks for module using nose. - - Parameters - ---------- - label : {'fast', 'full', '', attribute identifier}, optional - Identifies the tests to run. This can be a string to pass to the - nosetests executable with the '-A' option, or one of - several special values. - Special values are: - 'fast' - the default - which corresponds to the ``nosetests -A`` - option of 'not slow'. 
- 'full' - fast (as above) and slow tests as in the - 'no -A' option to nosetests - this is the same as ''. - None or '' - run all tests. - attribute_identifier - string passed directly to nosetests as '-A'. - verbose : int, optional - Verbosity value for test outputs, in the range 1-10. Default is 1. - extra_argv : list, optional - List with any extra arguments to pass to nosetests. - - Returns - ------- - success : bool - Returns True if running the benchmarks works, False if an error - occurred. - - Notes - ----- - Benchmarks are like tests, but have names starting with "bench" instead - of "test", and can be found under the "benchmarks" sub-directory of the - module. - - Each NumPy module exposes `bench` in its namespace to run all benchmarks - for it. - - Examples - -------- - >>> success = np.lib.bench() - Running benchmarks for numpy.lib - ... - using 562341 items: - unique: - 0.11 - unique1d: - 0.11 - ratio: 1.0 - nUnique: 56230 == 56230 - ... - OK - - >>> success - True - - """ - - print "Running benchmarks for %s" % self.package_name - self._show_system_info() - - argv = self._test_argv(label, verbose, extra_argv) - argv += ['--match', r'(?:^|[\\b_\\.%s-])[Bb]ench' % os.sep] - - nose = import_nose() - return nose.run(argv=argv) - - # generate method docstrings - _docmethod(_test_argv, '(testtype)') - _docmethod(test, 'test') - _docmethod(bench, 'benchmark') - - -######################################################################## -# Doctests for NumPy-specific nose/doctest modifications - -# try the #random directive on the output line -def check_random_directive(): - ''' - >>> 2+2 - #random: may vary on your system - ''' - -# check the implicit "import numpy as np" -def check_implicit_np(): - ''' - >>> np.array([1,2,3]) - array([1, 2, 3]) - ''' - -# there's some extraneous whitespace around the correct responses -def check_whitespace_enabled(): - ''' - # whitespace after the 3 - >>> 1+2 - 3 - - # whitespace before the 7 - >>> 3+4 - 7 - ''' diff --git 
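The `--match` pattern `bench` hands to nosetests selects names containing "bench"/"Bench" at the start or after a separator character. The same regex, exercised directly:

```python
import os
import re

# The pattern bench() passes via --match: 'bench' or 'Bench' at the start of
# the name or preceded by one of the separator characters in the class.
pattern = re.compile(r'(?:^|[\\b_\\.%s-])[Bb]ench' % os.sep)

for name in ['bench_linalg', 'TestCase.bench_sum', 'workbench', 'Benchmark']:
    print(name, bool(pattern.search(name)))
```

`workbench` is rejected because the match must begin the name or follow a separator; this keeps ordinary identifiers that merely contain "bench" out of the benchmark run.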
a/pythonPackages/numpy/numpy/testing/nulltester.py b/pythonPackages/numpy/numpy/testing/nulltester.py deleted file mode 100755 index 50d5484f65..0000000000 --- a/pythonPackages/numpy/numpy/testing/nulltester.py +++ /dev/null @@ -1,15 +0,0 @@ -''' Null tester to signal nose tests disabled - -Merely returns error reporting lack of nose package or version number -below requirements. - -See pkgtester, nosetester modules - -''' - -class NullTester(object): - def test(self, labels=None, *args, **kwargs): - raise ImportError, \ - 'Need nose >=0.10 for tests - see %s' % \ - 'http://somethingaboutorange.com/mrl/projects/nose' - bench = test diff --git a/pythonPackages/numpy/numpy/testing/numpytest.py b/pythonPackages/numpy/numpy/testing/numpytest.py deleted file mode 100755 index 683df7a010..0000000000 --- a/pythonPackages/numpy/numpy/testing/numpytest.py +++ /dev/null @@ -1,50 +0,0 @@ -import os -import sys -import traceback - -__all__ = ['IgnoreException', 'importall',] - -DEBUG=0 -get_frame = sys._getframe - -class IgnoreException(Exception): - "Ignoring this exception due to disabled feature" - - -def output_exception(printstream = sys.stdout): - try: - type, value, tb = sys.exc_info() - info = traceback.extract_tb(tb) - #this is more verbose - #traceback.print_exc() - filename, lineno, function, text = info[-1] # last line only - msg = "%s:%d: %s: %s (in %s)\n" % ( - filename, lineno, type.__name__, str(value), function) - printstream.write(msg) - finally: - type = value = tb = None # clean up - return - -def importall(package): - """ - Try recursively to import all subpackages under package. 
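`output_exception` above formats only the innermost traceback frame rather than the whole stack. A Python 3 sketch of the same one-line report, writing to a caller-supplied stream so the result can be inspected:

```python
import io
import sys
import traceback

def output_exception(stream=sys.stdout):
    """Sketch of numpytest.output_exception: report only the last frame."""
    etype, value, tb = sys.exc_info()
    # extract_tb()[-1] is the frame where the exception was actually raised
    filename, lineno, function, text = traceback.extract_tb(tb)[-1]
    stream.write("%s:%d: %s: %s (in %s)\n"
                 % (filename, lineno, etype.__name__, value, function))

def boom():
    raise ValueError("bad input")

buf = io.StringIO()
try:
    boom()
except ValueError:
    output_exception(buf)
print(buf.getvalue().rstrip())
```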
- """ - if isinstance(package,str): - package = __import__(package) - - package_name = package.__name__ - package_dir = os.path.dirname(package.__file__) - for subpackage_name in os.listdir(package_dir): - subdir = os.path.join(package_dir, subpackage_name) - if not os.path.isdir(subdir): - continue - if not os.path.isfile(os.path.join(subdir,'__init__.py')): - continue - name = package_name+'.'+subpackage_name - try: - exec 'import %s as m' % (name) - except Exception, msg: - print 'Failed importing %s: %s' %(name, msg) - continue - importall(m) - return diff --git a/pythonPackages/numpy/numpy/testing/setup.py b/pythonPackages/numpy/numpy/testing/setup.py deleted file mode 100755 index 6d8fc85c50..0000000000 --- a/pythonPackages/numpy/numpy/testing/setup.py +++ /dev/null @@ -1,18 +0,0 @@ -#!/usr/bin/env python - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('testing',parent_package,top_path) - - config.add_data_dir('tests') - return config - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(maintainer = "NumPy Developers", - maintainer_email = "numpy-dev@numpy.org", - description = "NumPy test module", - url = "http://www.numpy.org", - license = "NumPy License (BSD Style)", - configuration = configuration, - ) diff --git a/pythonPackages/numpy/numpy/testing/setupscons.py b/pythonPackages/numpy/numpy/testing/setupscons.py deleted file mode 100755 index ad248d27fa..0000000000 --- a/pythonPackages/numpy/numpy/testing/setupscons.py +++ /dev/null @@ -1,16 +0,0 @@ -#!/usr/bin/env python - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - config = Configuration('testing',parent_package,top_path) - return config - -if __name__ == '__main__': - from numpy.distutils.core import setup - setup(maintainer = "NumPy Developers", - maintainer_email = "numpy-dev@numpy.org", - description = "NumPy test module", - 
url = "http://www.numpy.org", - license = "NumPy License (BSD Style)", - configuration = configuration, - ) diff --git a/pythonPackages/numpy/numpy/testing/tests/test_decorators.py b/pythonPackages/numpy/numpy/testing/tests/test_decorators.py deleted file mode 100755 index 504971e612..0000000000 --- a/pythonPackages/numpy/numpy/testing/tests/test_decorators.py +++ /dev/null @@ -1,156 +0,0 @@ -import numpy as np -from numpy.testing import * -from numpy.testing.noseclasses import KnownFailureTest -import nose - -def test_slow(): - @dec.slow - def slow_func(x,y,z): - pass - - assert(slow_func.slow) - -def test_setastest(): - @dec.setastest() - def f_default(a): - pass - - @dec.setastest(True) - def f_istest(a): - pass - - @dec.setastest(False) - def f_isnottest(a): - pass - - assert(f_default.__test__) - assert(f_istest.__test__) - assert(not f_isnottest.__test__) - -class DidntSkipException(Exception): - pass - -def test_skip_functions_hardcoded(): - @dec.skipif(True) - def f1(x): - raise DidntSkipException - - try: - f1('a') - except DidntSkipException: - raise Exception('Failed to skip') - except nose.SkipTest: - pass - - @dec.skipif(False) - def f2(x): - raise DidntSkipException - - try: - f2('a') - except DidntSkipException: - pass - except nose.SkipTest: - raise Exception('Skipped when not expected to') - - -def test_skip_functions_callable(): - def skip_tester(): - return skip_flag == 'skip me!' - - @dec.skipif(skip_tester) - def f1(x): - raise DidntSkipException - - try: - skip_flag = 'skip me!' - f1('a') - except DidntSkipException: - raise Exception('Failed to skip') - except nose.SkipTest: - pass - - @dec.skipif(skip_tester) - def f2(x): - raise DidntSkipException - - try: - skip_flag = 'five is right out!' 
- f2('a') - except DidntSkipException: - pass - except nose.SkipTest: - raise Exception('Skipped when not expected to') - - -def test_skip_generators_hardcoded(): - @dec.knownfailureif(True, "This test is known to fail") - def g1(x): - for i in xrange(x): - yield i - - try: - for j in g1(10): - pass - except KnownFailureTest: - pass - else: - raise Exception('Failed to mark as known failure') - - - @dec.knownfailureif(False, "This test is NOT known to fail") - def g2(x): - for i in xrange(x): - yield i - raise DidntSkipException('FAIL') - - try: - for j in g2(10): - pass - except KnownFailureTest: - raise Exception('Marked incorretly as known failure') - except DidntSkipException: - pass - - -def test_skip_generators_callable(): - def skip_tester(): - return skip_flag == 'skip me!' - - @dec.knownfailureif(skip_tester, "This test is known to fail") - def g1(x): - for i in xrange(x): - yield i - - try: - skip_flag = 'skip me!' - for j in g1(10): - pass - except KnownFailureTest: - pass - else: - raise Exception('Failed to mark as known failure') - - - @dec.knownfailureif(skip_tester, "This test is NOT known to fail") - def g2(x): - for i in xrange(x): - yield i - raise DidntSkipException('FAIL') - - try: - skip_flag = 'do not skip' - for j in g2(10): - pass - except KnownFailureTest: - raise Exception('Marked incorretly as known failure') - except DidntSkipException: - pass - - -if __name__ == '__main__': - run_module_suite() - - - - diff --git a/pythonPackages/numpy/numpy/testing/tests/test_utils.py b/pythonPackages/numpy/numpy/testing/tests/test_utils.py deleted file mode 100755 index 5106c1184d..0000000000 --- a/pythonPackages/numpy/numpy/testing/tests/test_utils.py +++ /dev/null @@ -1,456 +0,0 @@ -import warnings -import sys - -import numpy as np -from numpy.testing import * -import unittest - -class _GenericTest(object): - def _test_equal(self, a, b): - self._assert_func(a, b) - - def _test_not_equal(self, a, b): - try: - self._assert_func(a, b) - passed = True 
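The decorator tests above depend on `dec.skipif` accepting either a bool or a callable, with callables evaluated at call time (so `skip_flag` can change after decoration). A minimal sketch of such a decorator, using `unittest.SkipTest` in place of `nose.SkipTest`:

```python
import functools
import unittest

def skipif(condition, msg='skipped'):
    """Minimal sketch of dec.skipif: condition may be a bool or a callable
    evaluated each time the decorated function is called."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            skip = condition() if callable(condition) else condition
            if skip:
                raise unittest.SkipTest(msg)
            return func(*args, **kwargs)
        return wrapper
    return decorate

flag = True

@skipif(lambda: flag)   # deferred: reads `flag` at call time, not decoration time
def f():
    return 'ran'

try:
    f()
    outcome = 'ran'
except unittest.SkipTest:
    outcome = 'skipped'
print(outcome)

flag = False
print(f())
```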
- except AssertionError: - pass - else: - raise AssertionError("a and b are found equal but are not") - - def test_array_rank1_eq(self): - """Test two equal array of rank 1 are found equal.""" - a = np.array([1, 2]) - b = np.array([1, 2]) - - self._test_equal(a, b) - - def test_array_rank1_noteq(self): - """Test two different array of rank 1 are found not equal.""" - a = np.array([1, 2]) - b = np.array([2, 2]) - - self._test_not_equal(a, b) - - def test_array_rank2_eq(self): - """Test two equal array of rank 2 are found equal.""" - a = np.array([[1, 2], [3, 4]]) - b = np.array([[1, 2], [3, 4]]) - - self._test_equal(a, b) - - def test_array_diffshape(self): - """Test two arrays with different shapes are found not equal.""" - a = np.array([1, 2]) - b = np.array([[1, 2], [1, 2]]) - - self._test_not_equal(a, b) - - def test_objarray(self): - """Test object arrays.""" - a = np.array([1, 1], dtype=np.object) - self._test_equal(a, 1) - -class TestArrayEqual(_GenericTest, unittest.TestCase): - def setUp(self): - self._assert_func = assert_array_equal - - def test_generic_rank1(self): - """Test rank 1 array for all dtypes.""" - def foo(t): - a = np.empty(2, t) - a.fill(1) - b = a.copy() - c = a.copy() - c.fill(0) - self._test_equal(a, b) - self._test_not_equal(c, b) - - # Test numeric types and object - for t in '?bhilqpBHILQPfdgFDG': - foo(t) - - # Test strings - for t in ['S1', 'U1']: - foo(t) - - def test_generic_rank3(self): - """Test rank 3 array for all dtypes.""" - def foo(t): - a = np.empty((4, 2, 3), t) - a.fill(1) - b = a.copy() - c = a.copy() - c.fill(0) - self._test_equal(a, b) - self._test_not_equal(c, b) - - # Test numeric types and object - for t in '?bhilqpBHILQPfdgFDG': - foo(t) - - # Test strings - for t in ['S1', 'U1']: - foo(t) - - def test_nan_array(self): - """Test arrays with nan values in them.""" - a = np.array([1, 2, np.nan]) - b = np.array([1, 2, np.nan]) - - self._test_equal(a, b) - - c = np.array([1, 2, 3]) - self._test_not_equal(c, b) - - def 
test_string_arrays(self): - """Test two arrays with different shapes are found not equal.""" - a = np.array(['floupi', 'floupa']) - b = np.array(['floupi', 'floupa']) - - self._test_equal(a, b) - - c = np.array(['floupipi', 'floupa']) - - self._test_not_equal(c, b) - - def test_recarrays(self): - """Test record arrays.""" - a = np.empty(2, [('floupi', np.float), ('floupa', np.float)]) - a['floupi'] = [1, 2] - a['floupa'] = [1, 2] - b = a.copy() - - self._test_equal(a, b) - - c = np.empty(2, [('floupipi', np.float), ('floupa', np.float)]) - c['floupipi'] = a['floupi'].copy() - c['floupa'] = a['floupa'].copy() - - self._test_not_equal(c, b) - -class TestEqual(TestArrayEqual): - def setUp(self): - self._assert_func = assert_equal - - def test_nan_items(self): - self._assert_func(np.nan, np.nan) - self._assert_func([np.nan], [np.nan]) - self._test_not_equal(np.nan, [np.nan]) - self._test_not_equal(np.nan, 1) - - def test_inf_items(self): - self._assert_func(np.inf, np.inf) - self._assert_func([np.inf], [np.inf]) - self._test_not_equal(np.inf, [np.inf]) - - def test_non_numeric(self): - self._assert_func('ab', 'ab') - self._test_not_equal('ab', 'abb') - - def test_complex_item(self): - self._assert_func(complex(1, 2), complex(1, 2)) - self._assert_func(complex(1, np.nan), complex(1, np.nan)) - self._test_not_equal(complex(1, np.nan), complex(1, 2)) - self._test_not_equal(complex(np.nan, 1), complex(1, np.nan)) - self._test_not_equal(complex(np.nan, np.inf), complex(np.nan, 2)) - - def test_negative_zero(self): - self._test_not_equal(np.PZERO, np.NZERO) - - def test_complex(self): - x = np.array([complex(1, 2), complex(1, np.nan)]) - y = np.array([complex(1, 2), complex(1, 2)]) - self._assert_func(x, x) - self._test_not_equal(x, y) - -class TestArrayAlmostEqual(_GenericTest, unittest.TestCase): - def setUp(self): - self._assert_func = assert_array_almost_equal - - def test_simple(self): - x = np.array([1234.2222]) - y = np.array([1234.2223]) - - self._assert_func(x, y, 
decimal=3) - self._assert_func(x, y, decimal=4) - self.assertRaises(AssertionError, - lambda: self._assert_func(x, y, decimal=5)) - - def test_nan(self): - anan = np.array([np.nan]) - aone = np.array([1]) - ainf = np.array([np.inf]) - self._assert_func(anan, anan) - self.assertRaises(AssertionError, - lambda : self._assert_func(anan, aone)) - self.assertRaises(AssertionError, - lambda : self._assert_func(anan, ainf)) - self.assertRaises(AssertionError, - lambda : self._assert_func(ainf, anan)) - -class TestAlmostEqual(_GenericTest, unittest.TestCase): - def setUp(self): - self._assert_func = assert_almost_equal - - def test_nan_item(self): - self._assert_func(np.nan, np.nan) - self.assertRaises(AssertionError, - lambda : self._assert_func(np.nan, 1)) - self.assertRaises(AssertionError, - lambda : self._assert_func(np.nan, np.inf)) - self.assertRaises(AssertionError, - lambda : self._assert_func(np.inf, np.nan)) - - def test_inf_item(self): - self._assert_func(np.inf, np.inf) - self._assert_func(-np.inf, -np.inf) - - def test_simple_item(self): - self._test_not_equal(1, 2) - - def test_complex_item(self): - self._assert_func(complex(1, 2), complex(1, 2)) - self._assert_func(complex(1, np.nan), complex(1, np.nan)) - self._assert_func(complex(np.inf, np.nan), complex(np.inf, np.nan)) - self._test_not_equal(complex(1, np.nan), complex(1, 2)) - self._test_not_equal(complex(np.nan, 1), complex(1, np.nan)) - self._test_not_equal(complex(np.nan, np.inf), complex(np.nan, 2)) - - def test_complex(self): - x = np.array([complex(1, 2), complex(1, np.nan)]) - z = np.array([complex(1, 2), complex(np.nan, 1)]) - y = np.array([complex(1, 2), complex(1, 2)]) - self._assert_func(x, x) - self._test_not_equal(x, y) - self._test_not_equal(x, z) - -class TestApproxEqual(unittest.TestCase): - def setUp(self): - self._assert_func = assert_approx_equal - - def test_simple_arrays(self): - x = np.array([1234.22]) - y = np.array([1234.23]) - - self._assert_func(x, y, significant=5) - 
self._assert_func(x, y, significant=6) - self.assertRaises(AssertionError, - lambda: self._assert_func(x, y, significant=7)) - - def test_simple_items(self): - x = 1234.22 - y = 1234.23 - - self._assert_func(x, y, significant=4) - self._assert_func(x, y, significant=5) - self._assert_func(x, y, significant=6) - self.assertRaises(AssertionError, - lambda: self._assert_func(x, y, significant=7)) - - def test_nan_array(self): - anan = np.array(np.nan) - aone = np.array(1) - ainf = np.array(np.inf) - self._assert_func(anan, anan) - self.assertRaises(AssertionError, - lambda : self._assert_func(anan, aone)) - self.assertRaises(AssertionError, - lambda : self._assert_func(anan, ainf)) - self.assertRaises(AssertionError, - lambda : self._assert_func(ainf, anan)) - - def test_nan_items(self): - anan = np.array(np.nan) - aone = np.array(1) - ainf = np.array(np.inf) - self._assert_func(anan, anan) - self.assertRaises(AssertionError, - lambda : self._assert_func(anan, aone)) - self.assertRaises(AssertionError, - lambda : self._assert_func(anan, ainf)) - self.assertRaises(AssertionError, - lambda : self._assert_func(ainf, anan)) - -class TestRaises(unittest.TestCase): - def setUp(self): - class MyException(Exception): - pass - - self.e = MyException - - def raises_exception(self, e): - raise e - - def does_not_raise_exception(self): - pass - - def test_correct_catch(self): - f = raises(self.e)(self.raises_exception)(self.e) - - def test_wrong_exception(self): - try: - f = raises(self.e)(self.raises_exception)(RuntimeError) - except RuntimeError: - return - else: - raise AssertionError("should have caught RuntimeError") - - def test_catch_no_raise(self): - try: - f = raises(self.e)(self.does_not_raise_exception)() - except AssertionError: - return - else: - raise AssertionError("should have raised an AssertionError") - -class TestWarns(unittest.TestCase): - def test_warn(self): - def f(): - warnings.warn("yo") - - before_filters = sys.modules['warnings'].filters[:] - 
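The `TestApproxEqual` cases above pin down what "significant digits" means: both values are scaled to a common power of ten, then compared against `1.5 * 10**-(significant-1)`. A stdlib sketch of that core test (function name illustrative), reproducing the pass-at-6/fail-at-7 behavior of the 1234.22 vs 1234.23 example:

```python
import math

def approx_equal(actual, desired, significant=7):
    """Core comparison behind assert_approx_equal: agreement to
    `significant` digits after scaling to a common exponent."""
    if actual == desired:
        return True
    scale = 0.5 * (abs(actual) + abs(desired))
    scale = 10.0 ** math.floor(math.log10(scale))
    return (abs(actual / scale - desired / scale)
            < 1.5 * 10.0 ** (-(significant - 1)))

# 1234.22 vs 1234.23 differ by 1e-5 after scaling by 1000
print(approx_equal(1234.22, 1234.23, significant=6))
print(approx_equal(1234.22, 1234.23, significant=7))
```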
assert_warns(UserWarning, f) - after_filters = sys.modules['warnings'].filters - - # Check that the warnings state is unchanged - assert_equal(before_filters, after_filters, - "assert_warns does not preserver warnings state") - - def test_warn_wrong_warning(self): - def f(): - warnings.warn("yo", DeprecationWarning) - - failed = False - filters = sys.modules['warnings'].filters[:] - try: - try: - # Should raise an AssertionError - assert_warns(UserWarning, f) - failed = True - except AssertionError: - pass - finally: - sys.modules['warnings'].filters = filters - - if failed: - raise AssertionError("wrong warning caught by assert_warn") - -class TestAssertAllclose(unittest.TestCase): - def test_simple(self): - x = 1e-3 - y = 1e-9 - - assert_allclose(x, y, atol=1) - self.assertRaises(AssertionError, assert_allclose, x, y) - - a = np.array([x, y, x, y]) - b = np.array([x, y, x, x]) - - assert_allclose(a, b, atol=1) - self.assertRaises(AssertionError, assert_allclose, a, b) - - b[-1] = y * (1 + 1e-8) - assert_allclose(a, b) - self.assertRaises(AssertionError, assert_allclose, a, b, - rtol=1e-9) - - assert_allclose(6, 10, rtol=0.5) - self.assertRaises(AssertionError, assert_allclose, 10, 6, rtol=0.5) - - -class TestArrayAlmostEqualNulp(unittest.TestCase): - def test_simple(self): - dev = np.random.randn(10) - x = np.ones(10) - y = x + dev * np.finfo(np.float64).eps - assert_array_almost_equal_nulp(x, y, nulp=2 * np.max(dev)) - - def test_simple2(self): - x = np.random.randn(10) - y = 2 * x - def failure(): - return assert_array_almost_equal_nulp(x, y, - nulp=1000) - self.assertRaises(AssertionError, failure) - - def test_big_float32(self): - x = (1e10 * np.random.randn(10)).astype(np.float32) - y = x + 1 - assert_array_almost_equal_nulp(x, y, nulp=1000) - - def test_big_float64(self): - x = 1e10 * np.random.randn(10) - y = x + 1 - def failure(): - assert_array_almost_equal_nulp(x, y, nulp=1000) - self.assertRaises(AssertionError, failure) - - def test_complex(self): - x 
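`TestAssertAllclose` above exercises the asymmetry of the closeness test: `rtol` scales by `desired`, so swapping the arguments can flip the result. The scalar form of the predicate, spelled out:

```python
def allclose_scalar(actual, desired, rtol=1e-7, atol=0.0):
    """The |actual - desired| <= atol + rtol*|desired| test behind
    assert_allclose. Note rtol scales by `desired` only (asymmetric)."""
    return abs(actual - desired) <= atol + rtol * abs(desired)

print(allclose_scalar(6, 10, rtol=0.5))   # |6-10| = 4 <= 0.5 * 10
print(allclose_scalar(10, 6, rtol=0.5))   # 4 > 0.5 * 6, so this fails
```

This is exactly why the test above expects `assert_allclose(6, 10, rtol=0.5)` to pass while `assert_allclose(10, 6, rtol=0.5)` raises.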
= np.random.randn(10) + 1j * np.random.randn(10) - y = x + 1 - def failure(): - assert_array_almost_equal_nulp(x, y, nulp=1000) - self.assertRaises(AssertionError, failure) - - def test_complex2(self): - x = np.random.randn(10) - y = np.array(x, np.complex) + 1e-16 * np.random.randn(10) - - assert_array_almost_equal_nulp(x, y, nulp=1000) - -class TestULP(unittest.TestCase): - def test_equal(self): - x = np.random.randn(10) - assert_array_max_ulp(x, x, maxulp=0) - - def test_single(self): - # Generate 1 + small deviation, check that adding eps gives a few UNL - x = np.ones(10).astype(np.float32) - x += 0.01 * np.random.randn(10).astype(np.float32) - eps = np.finfo(np.float32).eps - assert_array_max_ulp(x, x+eps, maxulp=20) - - def test_double(self): - # Generate 1 + small deviation, check that adding eps gives a few UNL - x = np.ones(10).astype(np.float32) - x += 0.01 * np.random.randn(10).astype(np.float64) - eps = np.finfo(np.float64).eps - assert_array_max_ulp(x, x+eps, maxulp=200) - - def test_inf(self): - for dt in [np.float32, np.float64]: - inf = np.array([np.inf]).astype(dt) - big = np.array([np.finfo(dt).max]) - assert_array_max_ulp(inf, big, maxulp=200) - - def test_nan(self): - # Test that nan is 'far' from small, tiny, inf, max and min - for dt in [np.float32, np.float64]: - if dt == np.float32: - maxulp = 1e6 - else: - maxulp = 1e12 - inf = np.array([np.inf]).astype(dt) - nan = np.array([np.nan]).astype(dt) - big = np.array([np.finfo(dt).max]) - tiny = np.array([np.finfo(dt).tiny]) - zero = np.array([np.PZERO]).astype(dt) - nzero = np.array([np.NZERO]).astype(dt) - self.assertRaises(AssertionError, - lambda: assert_array_max_ulp(nan, inf, - maxulp=maxulp)) - self.assertRaises(AssertionError, - lambda: assert_array_max_ulp(nan, big, - maxulp=maxulp)) - self.assertRaises(AssertionError, - lambda: assert_array_max_ulp(nan, tiny, - maxulp=maxulp)) - self.assertRaises(AssertionError, - lambda: assert_array_max_ulp(nan, zero, - maxulp=maxulp)) - 
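The ULP assertions tested above count how many representable floats lie between two values, which NumPy does with integer views of the float bit patterns. A stdlib-only sketch of the same distance for float64 (function name illustrative; requires Python 3.9+ for `math.ulp` in the demo):

```python
import math
import struct

def ulp_distance(a, b):
    """Number of representable float64 steps between a and b: the quantity
    assert_array_max_ulp bounds with its maxulp parameter."""
    def key(x):
        # Reinterpret the float's bits as a signed 64-bit integer, then
        # remap negatives so keys increase monotonically with the float.
        i = struct.unpack('<q', struct.pack('<d', x))[0]
        return i if i >= 0 else -(i + 2**63)
    return abs(key(a) - key(b))

eps = math.ulp(1.0)
print(ulp_distance(1.0, 1.0 + eps))  # adjacent doubles
print(ulp_distance(-0.0, 0.0))       # signed zeros coincide
```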
self.assertRaises(AssertionError, - lambda: assert_array_max_ulp(nan, nzero, - maxulp=maxulp)) -if __name__ == '__main__': - run_module_suite() diff --git a/pythonPackages/numpy/numpy/testing/utils.py b/pythonPackages/numpy/numpy/testing/utils.py deleted file mode 100755 index 6f18b54685..0000000000 --- a/pythonPackages/numpy/numpy/testing/utils.py +++ /dev/null @@ -1,1437 +0,0 @@ -""" -Utility function to facilitate testing. -""" - -import os -import sys -import re -import operator -import types -import warnings -from nosetester import import_nose - -__all__ = ['assert_equal', 'assert_almost_equal','assert_approx_equal', - 'assert_array_equal', 'assert_array_less', 'assert_string_equal', - 'assert_array_almost_equal', 'assert_raises', 'build_err_msg', - 'decorate_methods', 'jiffies', 'memusage', 'print_assert_equal', - 'raises', 'rand', 'rundocs', 'runstring', 'verbose', 'measure', - 'assert_', 'assert_array_almost_equal_nulp', - 'assert_array_max_ulp', 'assert_warns', 'assert_allclose'] - -verbose = 0 - -def assert_(val, msg='') : - """ - Assert that works in release mode. - - The Python built-in ``assert`` does not work when executing code in - optimized mode (the ``-O`` flag) - no byte-code is generated for it. - - For documentation on usage, refer to the Python documentation. - - """ - if not val : - raise AssertionError(msg) - -def gisnan(x): - """like isnan, but always raise an error if type not supported instead of - returning a TypeError object. - - Notes - ----- - isnan and other ufunc sometimes return a NotImplementedType object instead - of raising any exception. This function is a wrapper to make sure an - exception is always raised. 
- - This should be removed once this problem is solved at the Ufunc level.""" - from numpy.core import isnan - st = isnan(x) - if isinstance(st, types.NotImplementedType): - raise TypeError("isnan not supported for this type") - return st - -def gisfinite(x): - """like isfinite, but always raise an error if type not supported instead of - returning a TypeError object. - - Notes - ----- - isfinite and other ufunc sometimes return a NotImplementedType object instead - of raising any exception. This function is a wrapper to make sure an - exception is always raised. - - This should be removed once this problem is solved at the Ufunc level.""" - from numpy.core import isfinite, seterr - err = seterr(invalid='ignore') - try: - st = isfinite(x) - if isinstance(st, types.NotImplementedType): - raise TypeError("isfinite not supported for this type") - finally: - seterr(**err) - return st - -def gisinf(x): - """like isinf, but always raise an error if type not supported instead of - returning a TypeError object. - - Notes - ----- - isinf and other ufunc sometimes return a NotImplementedType object instead - of raising any exception. This function is a wrapper to make sure an - exception is always raised. - - This should be removed once this problem is solved at the Ufunc level.""" - from numpy.core import isinf, seterr - err = seterr(invalid='ignore') - try: - st = isinf(x) - if isinstance(st, types.NotImplementedType): - raise TypeError("isinf not supported for this type") - finally: - seterr(**err) - return st - -def rand(*args): - """Returns an array of random numbers with the given shape. - - This only uses the standard library, so it is useful for testing purposes. 
- """ - import random - from numpy.core import zeros, float64 - results = zeros(args, float64) - f = results.flat - for i in range(len(f)): - f[i] = random.random() - return results - -if sys.platform[:5]=='linux': - def jiffies(_proc_pid_stat = '/proc/%s/stat'%(os.getpid()), - _load_time=[]): - """ Return number of jiffies (1/100ths of a second) that this - process has been scheduled in user mode. See man 5 proc. """ - import time - if not _load_time: - _load_time.append(time.time()) - try: - f=open(_proc_pid_stat,'r') - l = f.readline().split(' ') - f.close() - return int(l[13]) - except: - return int(100*(time.time()-_load_time[0])) - - def memusage(_proc_pid_stat = '/proc/%s/stat'%(os.getpid())): - """ Return virtual memory size in bytes of the running python. - """ - try: - f=open(_proc_pid_stat,'r') - l = f.readline().split(' ') - f.close() - return int(l[22]) - except: - return -else: - # os.getpid is not in all platforms available. - # Using time is safe but inaccurate, especially when process - # was suspended or sleeping. - def jiffies(_load_time=[]): - """ Return number of jiffies (1/100ths of a second) that this - process has been scheduled in user mode. [Emulation with time.time]. """ - import time - if not _load_time: - _load_time.append(time.time()) - return int(100*(time.time()-_load_time[0])) - def memusage(): - """ Return memory usage of running python. [Not implemented]""" - raise NotImplementedError - -if os.name=='nt' and sys.version[:3] > '2.3': - # Code "stolen" from enthought/debug/memusage.py - def GetPerformanceAttributes(object, counter, instance = None, - inum=-1, format = None, machine=None): - # NOTE: Many counters require 2 samples to give accurate results, - # including "% Processor Time" (as by definition, at any instant, a - # thread's CPU usage is either 0 or 100). To read counters like this, - # you should copy this function, but keep the counter open, and call - # CollectQueryData() each time you need to know. 
- # See http://msdn.microsoft.com/library/en-us/dnperfmo/html/perfmonpt2.asp - # My older explanation for this was that the "AddCounter" process forced - # the CPU to 100%, but the above makes more sense :) - import win32pdh - if format is None: format = win32pdh.PDH_FMT_LONG - path = win32pdh.MakeCounterPath( (machine,object,instance, None, inum,counter) ) - hq = win32pdh.OpenQuery() - try: - hc = win32pdh.AddCounter(hq, path) - try: - win32pdh.CollectQueryData(hq) - type, val = win32pdh.GetFormattedCounterValue(hc, format) - return val - finally: - win32pdh.RemoveCounter(hc) - finally: - win32pdh.CloseQuery(hq) - - def memusage(processName="python", instance=0): - # from win32pdhutil, part of the win32all package - import win32pdh - return GetPerformanceAttributes("Process", "Virtual Bytes", - processName, instance, - win32pdh.PDH_FMT_LONG, None) - -def build_err_msg(arrays, err_msg, header='Items are not equal:', - verbose=True, - names=('ACTUAL', 'DESIRED')): - msg = ['\n' + header] - if err_msg: - if err_msg.find('\n') == -1 and len(err_msg) < 79-len(header): - msg = [msg[0] + ' ' + err_msg] - else: - msg.append(err_msg) - if verbose: - for i, a in enumerate(arrays): - try: - r = repr(a) - except: - r = '[repr failed]' - if r.count('\n') > 3: - r = '\n'.join(r.splitlines()[:3]) - r += '...' - msg.append(' %s: %s' % (names[i], r)) - return '\n'.join(msg) - -def assert_equal(actual,desired,err_msg='',verbose=True): - """ - Raise an assertion if two objects are not equal. - - Given two objects (lists, tuples, dictionaries or numpy arrays), check - that all elements of these objects are equal. An exception is raised at - the first conflicting values. - - Parameters - ---------- - actual : list, tuple, dict or ndarray - The object to check. - desired : list, tuple, dict or ndarray - The expected object. - err_msg : string - The error message to be printed in case of failure. - verbose : bool - If True, the conflicting values are appended to the error message. 
- - Raises - ------ - AssertionError - If actual and desired are not equal. - - Examples - -------- - >>> np.testing.assert_equal([4,5], [4,6]) - ... - : - Items are not equal: - item=1 - ACTUAL: 5 - DESIRED: 6 - - """ - if isinstance(desired, dict): - if not isinstance(actual, dict) : - raise AssertionError(repr(type(actual))) - assert_equal(len(actual),len(desired),err_msg,verbose) - for k,i in desired.items(): - if k not in actual : - raise AssertionError(repr(k)) - assert_equal(actual[k], desired[k], 'key=%r\n%s' % (k,err_msg), verbose) - return - if isinstance(desired, (list,tuple)) and isinstance(actual, (list,tuple)): - assert_equal(len(actual),len(desired),err_msg,verbose) - for k in range(len(desired)): - assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k,err_msg), verbose) - return - from numpy.core import ndarray, isscalar, signbit - from numpy.lib import iscomplexobj, real, imag - if isinstance(actual, ndarray) or isinstance(desired, ndarray): - return assert_array_equal(actual, desired, err_msg, verbose) - msg = build_err_msg([actual, desired], err_msg, verbose=verbose) - - # Handle complex numbers: separate into real/imag to handle - # nan/inf/negative zero correctly - # XXX: catch ValueError for subclasses of ndarray where iscomplex fail - try: - usecomplex = iscomplexobj(actual) or iscomplexobj(desired) - except ValueError: - usecomplex = False - - if usecomplex: - if iscomplexobj(actual): - actualr = real(actual) - actuali = imag(actual) - else: - actualr = actual - actuali = 0 - if iscomplexobj(desired): - desiredr = real(desired) - desiredi = imag(desired) - else: - desiredr = desired - desiredi = 0 - try: - assert_equal(actualr, desiredr) - assert_equal(actuali, desiredi) - except AssertionError: - raise AssertionError(msg) - - # Inf/nan/negative zero handling - try: - # isscalar test to check cases such as [np.nan] != np.nan - if isscalar(desired) != isscalar(actual): - raise AssertionError(msg) - - # If one of desired/actual is not finite, 
handle it specially here: - # check that both are nan if any is a nan, and test for equality - # otherwise - if not (gisfinite(desired) and gisfinite(actual)): - isdesnan = gisnan(desired) - isactnan = gisnan(actual) - if isdesnan or isactnan: - if not (isdesnan and isactnan): - raise AssertionError(msg) - else: - if not desired == actual: - raise AssertionError(msg) - return - elif desired == 0 and actual == 0: - if not signbit(desired) == signbit(actual): - raise AssertionError(msg) - # If TypeError or ValueError raised while using isnan and co, just handle - # as before - except (TypeError, ValueError, NotImplementedError): - pass - if desired != actual : - raise AssertionError(msg) - -def print_assert_equal(test_string,actual,desired): - """ - Test if two objects are equal, and print an error message if test fails. - - The test is performed with ``actual == desired``. - - Parameters - ---------- - test_string : str - The message supplied to AssertionError. - actual : object - The object to test for equality against `desired`. - desired : object - The expected result. - - Examples - -------- - >>> np.testing.print_assert_equal('Test XYZ of func xyz', [0, 1], [0, 1]) - >>> np.testing.print_assert_equal('Test XYZ of func xyz', [0, 1], [0, 2]) - Traceback (most recent call last): - ... - AssertionError: Test XYZ of func xyz failed - ACTUAL: - [0, 1] - DESIRED: - [0, 2] - - """ - import pprint - try: - assert(actual == desired) - except AssertionError: - import cStringIO - msg = cStringIO.StringIO() - msg.write(test_string) - msg.write(' failed\nACTUAL: \n') - pprint.pprint(actual,msg) - msg.write('DESIRED: \n') - pprint.pprint(desired,msg) - raise AssertionError(msg.getvalue()) - -def assert_almost_equal(actual,desired,decimal=7,err_msg='',verbose=True): - """ - Raise an assertion if two items are not equal up to desired precision. 
- - The test is equivalent to abs(desired-actual) < 0.5 * 10**(-decimal) - - Given two objects (numbers or ndarrays), check that all elements of these - objects are almost equal. An exception is raised at conflicting values. - For ndarrays this delegates to assert_array_almost_equal - - Parameters - ---------- - actual : number or ndarray - The object to check. - desired : number or ndarray - The expected object. - decimal : integer (decimal=7) - desired precision - err_msg : string - The error message to be printed in case of failure. - verbose : bool - If True, the conflicting values are appended to the error message. - - Raises - ------ - AssertionError - If actual and desired are not equal up to specified precision. - - See Also - -------- - assert_array_almost_equal: compares array_like objects - assert_equal: tests objects for equality - - - Examples - -------- - >>> import numpy.testing as npt - >>> npt.assert_almost_equal(2.3333333333333, 2.33333334) - >>> npt.assert_almost_equal(2.3333333333333, 2.33333334, decimal=10) - ... - : - Items are not equal: - ACTUAL: 2.3333333333333002 - DESIRED: 2.3333333399999998 - - >>> npt.assert_almost_equal(np.array([1.0,2.3333333333333]), - \t\t\tnp.array([1.0,2.33333334]), decimal=9) - ... - : - Arrays are not almost equal - - (mismatch 50.0%) - x: array([ 1. , 2.33333333]) - y: array([ 1. 
, 2.33333334]) - - """ - from numpy.core import ndarray - from numpy.lib import iscomplexobj, real, imag - - # Handle complex numbers: separate into real/imag to handle - # nan/inf/negative zero correctly - # XXX: catch ValueError for subclasses of ndarray where iscomplex fail - try: - usecomplex = iscomplexobj(actual) or iscomplexobj(desired) - except ValueError: - usecomplex = False - - msg = build_err_msg([actual, desired], err_msg, verbose=verbose, - header='Arrays are not almost equal') - - if usecomplex: - if iscomplexobj(actual): - actualr = real(actual) - actuali = imag(actual) - else: - actualr = actual - actuali = 0 - if iscomplexobj(desired): - desiredr = real(desired) - desiredi = imag(desired) - else: - desiredr = desired - desiredi = 0 - try: - assert_almost_equal(actualr, desiredr, decimal=decimal) - assert_almost_equal(actuali, desiredi, decimal=decimal) - except AssertionError: - raise AssertionError(msg) - - if isinstance(actual, (ndarray, tuple, list)) \ - or isinstance(desired, (ndarray, tuple, list)): - return assert_array_almost_equal(actual, desired, decimal, err_msg) - try: - # If one of desired/actual is not finite, handle it specially here: - # check that both are nan if any is a nan, and test for equality - # otherwise - if not (gisfinite(desired) and gisfinite(actual)): - if gisnan(desired) or gisnan(actual): - if not (gisnan(desired) and gisnan(actual)): - raise AssertionError(msg) - else: - if not desired == actual: - raise AssertionError(msg) - return - except (NotImplementedError, TypeError): - pass - if round(abs(desired - actual),decimal) != 0 : - raise AssertionError(msg) - - -def assert_approx_equal(actual,desired,significant=7,err_msg='',verbose=True): - """ - Raise an assertion if two items are not equal up to significant digits. - - Given two numbers, check that they are approximately equal. - Approximately equal is defined as the number of significant digits - that agree. 
- - Parameters - ---------- - actual : number - The object to check. - desired : number - The expected object. - significant : integer (significant=7) - desired precision - err_msg : string - The error message to be printed in case of failure. - verbose : bool - If True, the conflicting values are appended to the error message. - - Raises - ------ - AssertionError - If actual and desired are not equal up to specified precision. - - See Also - -------- - assert_almost_equal: compares objects by decimals - assert_array_almost_equal: compares array_like objects by decimals - assert_equal: tests objects for equality - - - Examples - -------- - >>> np.testing.assert_approx_equal(0.12345677777777e-20, 0.1234567e-20) - >>> np.testing.assert_approx_equal(0.12345670e-20, 0.12345671e-20, - significant=8) - >>> np.testing.assert_approx_equal(0.12345670e-20, 0.12345672e-20, - significant=8) - ... - : - Items are not equal to 8 significant digits: - ACTUAL: 1.234567e-021 - DESIRED: 1.2345672000000001e-021 - - the evaluated condition that raises the exception is - - >>> abs(0.12345670e-20/1e-21 - 0.12345672e-20/1e-21) >= 10**-(8-1) - True - - """ - import numpy as np - actual, desired = map(float, (actual, desired)) - if desired==actual: - return - # Normalize the numbers to be in range (-10.0,10.0) - # scale = float(pow(10,math.floor(math.log10(0.5*(abs(desired)+abs(actual)))))) - err = np.seterr(invalid='ignore') - try: - scale = 0.5*(np.abs(desired) + np.abs(actual)) - scale = np.power(10,np.floor(np.log10(scale))) - finally: - np.seterr(**err) - - try: - sc_desired = desired/scale - except ZeroDivisionError: - sc_desired = 0.0 - try: - sc_actual = actual/scale - except ZeroDivisionError: - sc_actual = 0.0 - msg = build_err_msg([actual, desired], err_msg, - header='Items are not equal to %d significant digits:' % - significant, - verbose=verbose) - try: - # If one of desired/actual is not finite, handle it specially here: - # check that both are nan if any is a nan, and test
for equality - # otherwise - if not (gisfinite(desired) and gisfinite(actual)): - if gisnan(desired) or gisnan(actual): - if not (gisnan(desired) and gisnan(actual)): - raise AssertionError(msg) - else: - if not desired == actual: - raise AssertionError(msg) - return - except (TypeError, NotImplementedError): - pass - if np.abs(sc_desired - sc_actual) >= np.power(10.,-(significant-1)) : - raise AssertionError(msg) - -def assert_array_compare(comparison, x, y, err_msg='', verbose=True, - header=''): - from numpy.core import array, isnan, any - x = array(x, copy=False, subok=True) - y = array(y, copy=False, subok=True) - - def isnumber(x): - return x.dtype.char in '?bhilqpBHILQPfdgFDG' - - try: - cond = (x.shape==() or y.shape==()) or x.shape == y.shape - if not cond: - msg = build_err_msg([x, y], - err_msg - + '\n(shapes %s, %s mismatch)' % (x.shape, - y.shape), - verbose=verbose, header=header, - names=('x', 'y')) - if not cond : - raise AssertionError(msg) - - if (isnumber(x) and isnumber(y)) and (any(isnan(x)) or any(isnan(y))): - # Handling nan: we first check that x and y have the nan at the - # same locations, and then we mask the nan and do the comparison as - # usual. 
- xnanid = isnan(x) - ynanid = isnan(y) - try: - assert_array_equal(xnanid, ynanid) - except AssertionError: - msg = build_err_msg([x, y], - err_msg - + '\n(x and y nan location mismatch %s, ' \ - '%s mismatch)' % (xnanid, ynanid), - verbose=verbose, header=header, - names=('x', 'y')) - raise AssertionError(msg) - # If only one item, it was a nan, so just return - if x.size == y.size == 1: - return - val = comparison(x[~xnanid], y[~ynanid]) - else: - val = comparison(x,y) - if isinstance(val, bool): - cond = val - reduced = [0] - else: - reduced = val.ravel() - cond = reduced.all() - reduced = reduced.tolist() - if not cond: - match = 100-100.0*reduced.count(1)/len(reduced) - msg = build_err_msg([x, y], - err_msg - + '\n(mismatch %s%%)' % (match,), - verbose=verbose, header=header, - names=('x', 'y')) - if not cond : - raise AssertionError(msg) - except ValueError: - msg = build_err_msg([x, y], err_msg, verbose=verbose, header=header, - names=('x', 'y')) - raise ValueError(msg) - -def assert_array_equal(x, y, err_msg='', verbose=True): - """ - Raise an assertion if two array_like objects are not equal. - - Given two array_like objects, check that the shape is equal and all - elements of these objects are equal. An exception is raised at - shape mismatch or conflicting values. In contrast to the standard usage - in numpy, NaNs are compared like numbers, no assertion is raised if - both objects have NaNs in the same positions. - - The usual caution for verifying equality with floating point numbers is - advised. - - Parameters - ---------- - x : array_like - The actual object to check. - y : array_like - The desired, expected object. - err_msg : string - The error message to be printed in case of failure. - verbose : bool - If True, the conflicting values are appended to the error message. - - Raises - ------ - AssertionError - If actual and desired objects are not equal. 
- - See Also - -------- - assert_array_almost_equal: test objects for equality up to precision - assert_equal: tests objects for equality - - - Examples - -------- - the first assert does not raise an exception - - >>> np.testing.assert_array_equal([1.0,2.33333,np.nan], - \t\t\t[np.exp(0),2.33333, np.nan]) - - assert fails with numerical imprecision with floats - - >>> np.testing.assert_array_equal([1.0,np.pi,np.nan], - \t\t\t[1, np.sqrt(np.pi)**2, np.nan]) - ... - : - AssertionError: - Arrays are not equal - - (mismatch 50.0%) - x: array([ 1. , 3.14159265, NaN]) - y: array([ 1. , 3.14159265, NaN]) - - use assert_array_almost_equal for these cases instead - - >>> np.testing.assert_array_almost_equal([1.0,np.pi,np.nan], - \t\t\t[1, np.sqrt(np.pi)**2, np.nan], decimal=15) - - """ - assert_array_compare(operator.__eq__, x, y, err_msg=err_msg, - verbose=verbose, header='Arrays are not equal') - -def assert_array_almost_equal(x, y, decimal=6, err_msg='', verbose=True): - """ - Raise an assertion if two objects are not equal up to desired precision. - - The test verifies identical shapes and verifies values with - abs(desired-actual) < 0.5 * 10**(-decimal) - - Given two array_like objects, check that the shape is equal and all - elements of these objects are almost equal. An exception is raised at - shape mismatch or conflicting values. In contrast to the standard usage - in numpy, NaNs are compared like numbers, no assertion is raised if - both objects have NaNs in the same positions. - - Parameters - ---------- - x : array_like - The actual object to check. - y : array_like - The desired, expected object. - decimal : integer (decimal=6) - desired precision - err_msg : string - The error message to be printed in case of failure. - verbose : bool - If True, the conflicting values are appended to the error message. - - Raises - ------ - AssertionError - If actual and desired are not equal up to specified precision.
- - See Also - -------- - assert_almost_equal: simple version for comparing numbers - assert_array_equal: tests objects for equality - - - Examples - -------- - the first assert does not raise an exception - - >>> np.testing.assert_array_almost_equal([1.0,2.333,np.nan], - [1.0,2.333,np.nan]) - - >>> np.testing.assert_array_almost_equal([1.0,2.33333,np.nan], - \t\t\t[1.0,2.33339,np.nan], decimal=5) - ... - : - AssertionError: - Arrays are not almost equal - - (mismatch 50.0%) - x: array([ 1. , 2.33333, NaN]) - y: array([ 1. , 2.33339, NaN]) - - >>> np.testing.assert_array_almost_equal([1.0,2.33333,np.nan], - \t\t\t[1.0,2.33333, 5], decimal=5) - : - ValueError: - Arrays are not almost equal - x: array([ 1. , 2.33333, NaN]) - y: array([ 1. , 2.33333, 5. ]) - - """ - from numpy.core import around, number, float_ - from numpy.core.numerictypes import issubdtype - from numpy.core.fromnumeric import any as npany - def compare(x, y): - try: - if npany(gisinf(x)) or npany( gisinf(y)): - xinfid = gisinf(x) - yinfid = gisinf(y) - if not xinfid == yinfid: - return False - # if one item, x and y is +- inf - if x.size == y.size == 1: - return x == y - x = x[~xinfid] - y = y[~yinfid] - except (TypeError, NotImplementedError): - pass - z = abs(x-y) - if not issubdtype(z.dtype, number): - z = z.astype(float_) # handle object arrays - return around(z, decimal) <= 10.0**(-decimal) - assert_array_compare(compare, x, y, err_msg=err_msg, verbose=verbose, - header='Arrays are not almost equal') - -def assert_array_less(x, y, err_msg='', verbose=True): - """ - Raise an assertion if two array_like objects are not ordered by less than. - - Given two array_like objects, check that the shape is equal and all - elements of the first object are strictly smaller than those of the - second object. An exception is raised at shape mismatch or incorrectly - ordered values. Shape mismatch does not raise if an object has zero - dimension. 
In contrast to the standard usage in numpy, NaNs are - compared, no assertion is raised if both objects have NaNs in the same - positions. - - - - Parameters - ---------- - x : array_like - The smaller object to check. - y : array_like - The larger object to compare. - err_msg : string - The error message to be printed in case of failure. - verbose : bool - If True, the conflicting values are appended to the error message. - - Raises - ------ - AssertionError - If actual and desired objects are not equal. - - See Also - -------- - assert_array_equal: tests objects for equality - assert_array_almost_equal: test objects for equality up to precision - - - - Examples - -------- - >>> np.testing.assert_array_less([1.0, 1.0, np.nan], [1.1, 2.0, np.nan]) - >>> np.testing.assert_array_less([1.0, 1.0, np.nan], [1, 2.0, np.nan]) - ... - : - Arrays are not less-ordered - (mismatch 50.0%) - x: array([ 1., 1., NaN]) - y: array([ 1., 2., NaN]) - - >>> np.testing.assert_array_less([1.0, 4.0], 3) - ... - : - Arrays are not less-ordered - (mismatch 50.0%) - x: array([ 1., 4.]) - y: array(3) - - >>> np.testing.assert_array_less([1.0, 2.0, 3.0], [4]) - ... - : - Arrays are not less-ordered - (shapes (3,), (1,) mismatch) - x: array([ 1., 2., 3.]) - y: array([4]) - - """ - assert_array_compare(operator.__lt__, x, y, err_msg=err_msg, - verbose=verbose, - header='Arrays are not less-ordered') - -def runstring(astr, dict): - exec astr in dict - -def assert_string_equal(actual, desired): - """ - Test if two strings are equal. - - If the given strings are equal, `assert_string_equal` does nothing. - If they are not equal, an AssertionError is raised, and the diff - between the strings is shown. - - Parameters - ---------- - actual : str - The string to test for equality against the expected string. - desired : str - The expected string. 
- - Examples - -------- - >>> np.testing.assert_string_equal('abc', 'abc') - >>> np.testing.assert_string_equal('abc', 'abcd') - Traceback (most recent call last): - File "", line 1, in - ... - AssertionError: Differences in strings: - - abc+ abcd? + - - """ - # delay import of difflib to reduce startup time - import difflib - - if not isinstance(actual, str) : - raise AssertionError(`type(actual)`) - if not isinstance(desired, str): - raise AssertionError(`type(desired)`) - if re.match(r'\A'+desired+r'\Z', actual, re.M): return - diff = list(difflib.Differ().compare(actual.splitlines(1), desired.splitlines(1))) - diff_list = [] - while diff: - d1 = diff.pop(0) - if d1.startswith(' '): - continue - if d1.startswith('- '): - l = [d1] - d2 = diff.pop(0) - if d2.startswith('? '): - l.append(d2) - d2 = diff.pop(0) - if not d2.startswith('+ ') : - raise AssertionError(`d2`) - l.append(d2) - d3 = diff.pop(0) - if d3.startswith('? '): - l.append(d3) - else: - diff.insert(0, d3) - if re.match(r'\A'+d2[2:]+r'\Z', d1[2:]): - continue - diff_list.extend(l) - continue - raise AssertionError(`d1`) - if not diff_list: - return - msg = 'Differences in strings:\n%s' % (''.join(diff_list)).rstrip() - if actual != desired : - raise AssertionError(msg) - - -def rundocs(filename=None, raise_on_error=True): - """ - Run doctests found in the given file. - - By default `rundocs` raises an AssertionError on failure. - - Parameters - ---------- - filename : str - The path to the file for which the doctests are run. - raise_on_error : bool - Whether to raise an AssertionError when a doctest fails. Default is - True. - - Notes - ----- - The doctests can be run by the user/developer by adding the ``doctests`` - argument to the ``test()`` call. 
For example, to run all tests (including - doctests) for `numpy.lib`:: - - >>> np.lib.test(doctests=True) - - """ - import doctest, imp - if filename is None: - f = sys._getframe(1) - filename = f.f_globals['__file__'] - name = os.path.splitext(os.path.basename(filename))[0] - path = [os.path.dirname(filename)] - file, pathname, description = imp.find_module(name, path) - try: - m = imp.load_module(name, file, pathname, description) - finally: - file.close() - - tests = doctest.DocTestFinder().find(m) - runner = doctest.DocTestRunner(verbose=False) - - msg = [] - if raise_on_error: - out = lambda s: msg.append(s) - else: - out = None - - for test in tests: - runner.run(test, out=out) - - if runner.failures > 0 and raise_on_error: - raise AssertionError("Some doctests failed:\n%s" % "\n".join(msg)) - - -def raises(*args,**kwargs): - nose = import_nose() - return nose.tools.raises(*args,**kwargs) - -def assert_raises(*args,**kwargs): - """ - assert_raises(exception_class, callable, *args, **kwargs) - - Fail unless an exception of class exception_class is thrown - by callable when invoked with arguments args and keyword - arguments kwargs. If a different type of exception is - thrown, it will not be caught, and the test case will be - deemed to have suffered an error, exactly as for an - unexpected exception. - - """ - nose = import_nose() - return nose.tools.assert_raises(*args,**kwargs) - -def decorate_methods(cls, decorator, testmatch=None): - """ - Apply a decorator to all methods in a class matching a regular expression. - - The given decorator is applied to all public methods of `cls` that are - matched by the regular expression `testmatch` - (``testmatch.search(methodname)``). Methods that are private, i.e. start - with an underscore, are ignored. - - Parameters - ---------- - cls : class - Class whose methods to decorate. - decorator : function - Decorator to apply to methods - testmatch : compiled regexp or str, optional - The regular expression. 
Default value is None, in which case the - nose default (``re.compile(r'(?:^|[\\b_\\.%s-])[Tt]est' % os.sep)``) - is used. - If `testmatch` is a string, it is compiled to a regular expression - first. - - """ - if testmatch is None: - testmatch = re.compile(r'(?:^|[\\b_\\.%s-])[Tt]est' % os.sep) - else: - testmatch = re.compile(testmatch) - cls_attr = cls.__dict__ - - # delayed import to reduce startup time - from inspect import isfunction - - methods = filter(isfunction, cls_attr.values()) - for function in methods: - try: - if hasattr(function, 'compat_func_name'): - funcname = function.compat_func_name - else: - funcname = function.__name__ - except AttributeError: - # not a function - continue - if testmatch.search(funcname) and not funcname.startswith('_'): - setattr(cls, funcname, decorator(function)) - return - - -def measure(code_str,times=1,label=None): - """ - Return elapsed time for executing code in the namespace of the caller. - - The supplied code string is compiled with the Python builtin ``compile``. - The precision of the timing is 10 milli-seconds. If the code will execute - fast on this timescale, it can be executed many times to get reasonable - timing accuracy. - - Parameters - ---------- - code_str : str - The code to be timed. - times : int, optional - The number of times the code is executed. Default is 1. The code is - only compiled once. - label : str, optional - A label to identify `code_str` with. This is passed into ``compile`` - as the second argument (for run-time error messages). - - Returns - ------- - elapsed : float - Total elapsed time in seconds for executing `code_str` `times` times. - - Examples - -------- - >>> etime = np.testing.measure('for i in range(1000): np.sqrt(i**2)', - ... 
times=times) - >>> print "Time for a single execution : ", etime / times, "s" - Time for a single execution : 0.005 s - - """ - frame = sys._getframe(1) - locs,globs = frame.f_locals,frame.f_globals - - code = compile(code_str, - 'Test name: %s ' % label, - 'exec') - i = 0 - elapsed = jiffies() - while i < times: - i += 1 - exec code in globs,locs - elapsed = jiffies() - elapsed - return 0.01*elapsed - -def _assert_valid_refcount(op): - """ - Check that ufuncs don't mishandle refcount of object `1`. - Used in a few regression tests. - """ - import numpy as np - a = np.arange(100 * 100) - b = np.arange(100*100).reshape(100, 100) - c = b - - i = 1 - - rc = sys.getrefcount(i) - for j in range(15): - d = op(b,c) - - assert(sys.getrefcount(i) >= rc) - -def assert_allclose(actual, desired, rtol=1e-7, atol=0, - err_msg='', verbose=True): - """ - Raise an assertion if two objects are not equal up to desired tolerance. - - The test is equivalent to ``allclose(actual, desired, rtol, atol)`` - - Parameters - ---------- - actual : array_like - Array obtained. - desired : array_like - Array desired - rtol : float, optional - Relative tolerance - atol : float, optional - Absolute tolerance - err_msg : string - The error message to be printed in case of failure. - verbose : bool - If True, the conflicting values are appended to the error message. - - Raises - ------ - AssertionError - If actual and desired are not equal up to specified precision. - - """ - import numpy as np - def compare(x, y): - return np.allclose(x, y, rtol=rtol, atol=atol) - actual, desired = np.asanyarray(actual), np.asanyarray(desired) - header = 'Not equal to tolerance rtol=%g, atol=%g' % (rtol, atol) - assert_array_compare(compare, actual, desired, err_msg=str(err_msg), - verbose=verbose, header=header) - -def assert_array_almost_equal_nulp(x, y, nulp=1): - """ - Compare two arrays relatively to their spacing. - - This is a relatively robust method to compare two arrays whose amplitude - is variable. 
- - Parameters - ---------- - x, y : array_like - Input arrays. - nulp : int, optional - The maximum number of units in the last place for tolerance (see Notes). - Default is 1. - - Returns - ------- - None - - Raises - ------ - AssertionError - If the spacing between `x` and `y` for one or more elements is larger - than `nulp`. - - See Also - -------- - assert_array_max_ulp : Check that all items of arrays differ in at most - N Units in the Last Place. - spacing : Return the distance between x and the nearest adjacent number. - - Notes - ----- - An assertion is raised if the following condition is not met:: - - abs(x - y) <= nulp * spacing(max(abs(x), abs(y))) - - Examples - -------- - >>> x = np.array([1., 1e-10, 1e-20]) - >>> eps = np.finfo(x.dtype).eps - >>> np.testing.assert_array_almost_equal_nulp(x, x*eps/2 + x) - - >>> np.testing.assert_array_almost_equal_nulp(x, x*eps + x) - ------------------------------------------------------------ - Traceback (most recent call last): - ... - AssertionError: X and Y are not equal to 1 ULP (max is 2) - - """ - import numpy as np - ax = np.abs(x) - ay = np.abs(y) - ref = nulp * np.spacing(np.where(ax > ay, ax, ay)) - if not np.all(np.abs(x-y) <= ref): - if np.iscomplexobj(x) or np.iscomplexobj(y): - msg = "X and Y are not equal to %d ULP" % nulp - else: - max_nulp = np.max(nulp_diff(x, y)) - msg = "X and Y are not equal to %d ULP (max is %g)" % (nulp, max_nulp) - raise AssertionError(msg) - -def assert_array_max_ulp(a, b, maxulp=1, dtype=None): - """ - Check that all items of arrays differ in at most N Units in the Last Place. - - Parameters - ---------- - a, b : array_like - Input arrays to be compared. - maxulp : int, optional - The maximum number of units in the last place that elements of `a` and - `b` can differ. Default is 1. - dtype : dtype, optional - Data-type to convert `a` and `b` to if given. Default is None.
- - Returns - ------- - ret : ndarray - Array containing number of representable floating point numbers between - items in `a` and `b`. - - Raises - ------ - AssertionError - If one or more elements differ by more than `maxulp`. - - See Also - -------- - assert_array_almost_equal_nulp : Compare two arrays relatively to their - spacing. - - Examples - -------- - >>> a = np.linspace(0., 1., 100) - >>> res = np.testing.assert_array_max_ulp(a, np.arcsin(np.sin(a))) - - """ - import numpy as np - ret = nulp_diff(a, b, dtype) - if not np.all(ret <= maxulp): - raise AssertionError("Arrays are not almost equal up to %g ULP" % \ - maxulp) - return ret - -def nulp_diff(x, y, dtype=None): - """For each item in x and y, return the number of representable floating - point numbers between them. - - Parameters - ---------- - x : array_like - first input array - y : array_like - second input array - - Returns - ------- - nulp : array_like - number of representable floating point numbers between each item in x - and y.
- - Examples - -------- - # By definition, epsilon is the smallest number such that 1 + eps != 1, so - # there should be exactly one ULP between 1 and 1 + eps - >>> nulp_diff(1, 1 + np.finfo(x.dtype).eps) - 1.0 - """ - import numpy as np - if dtype: - x = np.array(x, dtype=dtype) - y = np.array(y, dtype=dtype) - else: - x = np.array(x) - y = np.array(y) - - t = np.common_type(x, y) - if np.iscomplexobj(x) or np.iscomplexobj(y): - raise NotImplementedError("_nulp not implemented for complex array") - - x = np.array(x, dtype=t) - y = np.array(y, dtype=t) - - if not x.shape == y.shape: - raise ValueError("x and y do not have the same shape: %s - %s" % \ - (x.shape, y.shape)) - - def _diff(rx, ry, vdt): - diff = np.array(rx-ry, dtype=vdt) - return np.abs(diff) - - rx = integer_repr(x) - ry = integer_repr(y) - return _diff(rx, ry, t) - -def _integer_repr(x, vdt, comp): - # Reinterpret binary representation of the float as sign-magnitude: - # take into account the two's-complement representation - # See also - # http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm - rx = x.view(vdt) - if not (rx.size == 1): - rx[rx < 0] = comp - rx[rx < 0] - else: - if rx < 0: - rx = comp - rx - - return rx - -def integer_repr(x): - """Return the signed-magnitude interpretation of the binary representation of - x.""" - import numpy as np - if x.dtype == np.float32: - return _integer_repr(x, np.int32, np.int32(-2**31)) - elif x.dtype == np.float64: - return _integer_repr(x, np.int64, np.int64(-2**63)) - else: - raise ValueError("Unsupported dtype %s" % x.dtype) - -# The following two classes are copied from the Python 2.6 warnings module -# (context manager) -class WarningMessage(object): - - """ - Holds the result of a single showwarning() call. - - Notes - ----- - `WarningMessage` is copied from the Python 2.6 warnings module, - so it can be used in NumPy with older Python versions.
- - """ - - _WARNING_DETAILS = ("message", "category", "filename", "lineno", "file", - "line") - - def __init__(self, message, category, filename, lineno, file=None, - line=None): - local_values = locals() - for attr in self._WARNING_DETAILS: - setattr(self, attr, local_values[attr]) - if category: - self._category_name = category.__name__ - else: - self._category_name = None - - def __str__(self): - return ("{message : %r, category : %r, filename : %r, lineno : %s, " - "line : %r}" % (self.message, self._category_name, - self.filename, self.lineno, self.line)) - -class WarningManager: - """ - A context manager that copies and restores the warnings filter upon - exiting the context. - - The 'record' argument specifies whether warnings should be captured by a - custom implementation of ``warnings.showwarning()`` and be appended to a - list returned by the context manager. Otherwise None is returned by the - context manager. The objects appended to the list are arguments whose - attributes mirror the arguments to ``showwarning()``. - - The 'module' argument is to specify an alternative module to the module - named 'warnings' and imported under that name. This argument is only useful - when testing the warnings module itself. - - Notes - ----- - `WarningManager` is a copy of the ``catch_warnings`` context manager - from the Python 2.6 warnings module, with slight modifications. - It is copied so it can be used in NumPy with older Python versions. 
- - """ - def __init__(self, record=False, module=None): - self._record = record - if module is None: - self._module = sys.modules['warnings'] - else: - self._module = module - self._entered = False - - def __enter__(self): - if self._entered: - raise RuntimeError("Cannot enter %r twice" % self) - self._entered = True - self._filters = self._module.filters - self._module.filters = self._filters[:] - self._showwarning = self._module.showwarning - if self._record: - log = [] - def showwarning(*args, **kwargs): - log.append(WarningMessage(*args, **kwargs)) - self._module.showwarning = showwarning - return log - else: - return None - - def __exit__(self, *exc_info): - if not self._entered: - raise RuntimeError("Cannot exit %r without entering first" % self) - self._module.filters = self._filters - self._module.showwarning = self._showwarning - -def assert_warns(warning_class, func, *args, **kw): - """ - Fail unless the given callable raises the specified warning. - - A warning of class warning_class should be raised by the callable when - invoked with arguments args and keyword arguments kwargs. - If a different type of warning is raised, it will not be caught, and the - test case will be deemed to have suffered an error. - - Parameters - ---------- - warning_class : class - The class defining the warning that `func` is expected to raise. - func : callable - The callable to test. - \\*args : Arguments - Arguments passed to `func`. - \\*\\*kwargs : Kwargs - Keyword arguments passed to `func`. - - Returns - ------- - None - - """ - - # XXX: once we may depend on python >= 2.6, this can be replaced by the - # warnings module context manager.
- ctx = WarningManager(record=True) - l = ctx.__enter__() - warnings.simplefilter('always') - try: - func(*args, **kw) - if not len(l) > 0: - raise AssertionError("No warning raised when calling %s" - % func.__name__) - if not l[0].category is warning_class: - raise AssertionError("First warning for %s is not a " \ - "%s( is %s)" % (func.__name__, warning_class, l[0])) - finally: - ctx.__exit__() diff --git a/pythonPackages/numpy/numpy/tests/test_ctypeslib.py b/pythonPackages/numpy/numpy/tests/test_ctypeslib.py deleted file mode 100755 index 832ce683c5..0000000000 --- a/pythonPackages/numpy/numpy/tests/test_ctypeslib.py +++ /dev/null @@ -1,100 +0,0 @@ -import sys - -import numpy as np -from numpy.ctypeslib import ndpointer, load_library -from numpy.testing import * - -try: - cdll = load_library('multiarray', np.core.multiarray.__file__) - _HAS_CTYPE = True -except ImportError: - _HAS_CTYPE = False - -class TestLoadLibrary(TestCase): - @dec.skipif(not _HAS_CTYPE, "ctypes not available on this python installation") - @dec.knownfailureif(sys.platform=='cygwin', "This test is known to fail on cygwin") - def test_basic(self): - try: - cdll = load_library('multiarray', - np.core.multiarray.__file__) - except ImportError, e: - msg = "ctypes is not available on this python: skipping the test" \ - " (import error was: %s)" % str(e) - print msg - - @dec.skipif(not _HAS_CTYPE, "ctypes not available on this python installation") - @dec.knownfailureif(sys.platform=='cygwin', "This test is known to fail on cygwin") - def test_basic2(self): - """Regression for #801: load_library with a full library name - (including extension) does not work.""" - try: - try: - from distutils import sysconfig - so = sysconfig.get_config_var('SO') - cdll = load_library('multiarray%s' % so, - np.core.multiarray.__file__) - except ImportError: - print "No distutils available, skipping test." 
- except ImportError, e: - msg = "ctypes is not available on this python: skipping the test" \ - " (import error was: %s)" % str(e) - print msg - -class TestNdpointer(TestCase): - def test_dtype(self): - dt = np.intc - p = ndpointer(dtype=dt) - self.assert_(p.from_param(np.array([1], dt))) - dt = '<i4' - p = ndpointer(dtype=dt) - self.assert_(p.from_param(np.array([1], dt))) - dt = np.dtype('>i4') - p = ndpointer(dtype=dt) - p.from_param(np.array([1], dt)) - self.assertRaises(TypeError, p.from_param, - np.array([1], dt.newbyteorder('swap'))) - dtnames = ['x', 'y'] - dtformats = [np.intc, np.float64] - dtdescr = {'names' : dtnames, 'formats' : dtformats} - dt = np.dtype(dtdescr) - p = ndpointer(dtype=dt) - self.assert_(p.from_param(np.zeros((10,), dt))) - samedt = np.dtype(dtdescr) - p = ndpointer(dtype=samedt) - self.assert_(p.from_param(np.zeros((10,), dt))) - dt2 = np.dtype(dtdescr, align=True) - if dt.itemsize != dt2.itemsize: - self.assertRaises(TypeError, p.from_param, np.zeros((10,), dt2)) - else: - self.assert_(p.from_param(np.zeros((10,), dt2))) - - def test_ndim(self): - p = ndpointer(ndim=0) - self.assert_(p.from_param(np.array(1))) - self.assertRaises(TypeError, p.from_param, np.array([1])) - p = ndpointer(ndim=1) - self.assertRaises(TypeError, p.from_param, np.array(1)) - self.assert_(p.from_param(np.array([1]))) - p = ndpointer(ndim=2) - self.assert_(p.from_param(np.array([[1]]))) - - def test_shape(self): - p = ndpointer(shape=(1,2)) - self.assert_(p.from_param(np.array([[1,2]]))) - self.assertRaises(TypeError, p.from_param, np.array([[1],[2]])) - p = ndpointer(shape=()) - self.assert_(p.from_param(np.array(1))) - - def test_flags(self): - x = np.array([[1,2,3]], order='F') - p = ndpointer(flags='FORTRAN') - self.assert_(p.from_param(x)) - p = ndpointer(flags='CONTIGUOUS') - self.assertRaises(TypeError, p.from_param, x) - p = ndpointer(flags=x.flags.num) - self.assert_(p.from_param(x)) - self.assertRaises(TypeError, p.from_param, np.array([[1,2,3]])) - - -if __name__ == "__main__": - run_module_suite() diff --git
a/pythonPackages/numpy/numpy/tests/test_matlib.py b/pythonPackages/numpy/numpy/tests/test_matlib.py deleted file mode 100755 index 0766764958..0000000000 --- a/pythonPackages/numpy/numpy/tests/test_matlib.py +++ /dev/null @@ -1,53 +0,0 @@ -import numpy as np -import numpy.matlib -from numpy.testing import assert_array_equal, assert_, run_module_suite -def test_empty(): - x = np.matlib.empty((2,)) - assert_(isinstance(x, np.matrix)) - assert_(x.shape == (1, 2)) - -def test_ones(): - assert_array_equal(np.matlib.ones((2, 3)), - np.matrix([[ 1., 1., 1.], - [ 1., 1., 1.]])) - - assert_array_equal(np.matlib.ones(2), np.matrix([[ 1., 1.]])) - -def test_zeros(): - assert_array_equal(np.matlib.zeros((2, 3)), - np.matrix([[ 0., 0., 0.], - [ 0., 0., 0.]])) - - assert_array_equal(np.matlib.zeros(2), np.matrix([[ 0., 0.]])) - -def test_identity(): - x = np.matlib.identity(2, dtype=np.int) - assert_array_equal(x, np.matrix([[1, 0], [0, 1]])) - -def test_eye(): - x = np.matlib.eye(3, k=1, dtype=int) - assert_array_equal(x, np.matrix([[ 0, 1, 0], - [ 0, 0, 1], - [ 0, 0, 0]])) - -def test_rand(): - x = np.matlib.rand(3) - # check matrix type, array would have shape (3,) - assert_(x.ndim == 2) - -def test_randn(): - x = np.matlib.randn(3) - # check matrix type, array would have shape (3,) - assert_(x.ndim == 2) - -def test_repmat(): - a1 = np.arange(4) - x = np.matlib.repmat(a1, 2, 2) - y = np.array([[0, 1, 2, 3, 0, 1, 2, 3], - [0, 1, 2, 3, 0, 1, 2, 3]]) - assert_array_equal(x, y) - - -if __name__ == "__main__": - run_module_suite() diff --git a/pythonPackages/numpy/numpy/version.py b/pythonPackages/numpy/numpy/version.py deleted file mode 100644 index 53ae863108..0000000000 --- a/pythonPackages/numpy/numpy/version.py +++ /dev/null @@ -1,18 +0,0 @@ - -# THIS FILE IS GENERATED FROM NUMPY SETUP.PY -short_version='1.5.0b1' -version='1.5.0b1' -release=True - -if not release: - version += '.dev' - import os - svn_version_file = os.path.join(os.path.dirname(__file__), - 'core','__svn_version__.py') - if
os.path.isfile(svn_version_file): - import imp - svn = imp.load_module('numpy.core.__svn_version__', - open(svn_version_file), - svn_version_file, - ('.py','U',1)) - version += svn.version diff --git a/pythonPackages/numpy/setup.py b/pythonPackages/numpy/setup.py deleted file mode 100755 index e59262aeb5..0000000000 --- a/pythonPackages/numpy/setup.py +++ /dev/null @@ -1,210 +0,0 @@ -#!/usr/bin/env python -"""NumPy: array processing for numbers, strings, records, and objects. - -NumPy is a general-purpose array-processing package designed to -efficiently manipulate large multi-dimensional arrays of arbitrary -records without sacrificing too much speed for small multi-dimensional -arrays. NumPy is built on the Numeric code base and adds features -introduced by numarray as well as an extended C-API and the ability to -create arrays of arbitrary type which also makes NumPy suitable for -interfacing with general-purpose data-base applications. - -There are also basic facilities for discrete fourier transform, -basic linear algebra and random number generation. 
-""" - -DOCLINES = __doc__.split("\n") - -import os -import shutil -import sys -import re -import subprocess - -if sys.version_info[0] < 3: - import __builtin__ as builtins -else: - import builtins - -CLASSIFIERS = """\ -Development Status :: 5 - Production/Stable -Intended Audience :: Science/Research -Intended Audience :: Developers -License :: OSI Approved -Programming Language :: C -Programming Language :: Python -Topic :: Software Development -Topic :: Scientific/Engineering -Operating System :: Microsoft :: Windows -Operating System :: POSIX -Operating System :: Unix -Operating System :: MacOS -""" - -NAME = 'numpy' -MAINTAINER = "NumPy Developers" -MAINTAINER_EMAIL = "numpy-discussion@scipy.org" -DESCRIPTION = DOCLINES[0] -LONG_DESCRIPTION = "\n".join(DOCLINES[2:]) -URL = "http://numpy.scipy.org" -DOWNLOAD_URL = "http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=175103" -LICENSE = 'BSD' -CLASSIFIERS = filter(None, CLASSIFIERS.split('\n')) -AUTHOR = "Travis E. Oliphant, et.al." 
-AUTHOR_EMAIL = "oliphant@enthought.com" -PLATFORMS = ["Windows", "Linux", "Solaris", "Mac OS-X", "Unix"] -MAJOR = 1 -MINOR = 5 -MICRO = 0 -ISRELEASED = True -VERSION = '%d.%d.%db1' % (MAJOR, MINOR, MICRO) - -# Return the svn version as a string, raise a ValueError otherwise -def svn_version(): - def _minimal_ext_cmd(cmd): - # construct minimal environment - env = {} - for k in ['SYSTEMROOT', 'PATH']: - v = os.environ.get(k) - if v is not None: - env[k] = v - # LANGUAGE is used on win32 - env['LANGUAGE'] = 'C' - env['LANG'] = 'C' - env['LC_ALL'] = 'C' - out = subprocess.Popen(cmd, stdout = subprocess.PIPE, env=env).communicate()[0] - return out - - try: - out = _minimal_ext_cmd(['svn', 'info']) - except OSError: - print(" --- Could not run svn info --- ") - return "" - - r = re.compile('Revision: ([0-9]+)') - svnver = "" - - out = out.decode() - - for line in out.split('\n'): - m = r.match(line.strip()) - if m: - svnver = m.group(1) - - if not svnver: - print("Error while parsing svn version") - - return svnver - -# BEFORE importing distutils, remove MANIFEST. distutils doesn't properly -# update it when the contents of directories change. -if os.path.exists('MANIFEST'): os.remove('MANIFEST') - -# This is a bit hackish: we are setting a global variable so that the main -# numpy __init__ can detect if it is being loaded by the setup routine, to -# avoid attempting to load components that aren't built yet. While ugly, it's -# a lot more robust than what was previously being used. 
-builtins.__NUMPY_SETUP__ = True - -FULLVERSION = VERSION -if not ISRELEASED: - FULLVERSION += '.dev' - # If in git or something, bypass the svn rev - if os.path.exists('.svn'): - FULLVERSION += svn_version() - -def write_version_py(filename='numpy/version.py'): - cnt = """ -# THIS FILE IS GENERATED FROM NUMPY SETUP.PY -short_version='%(version)s' -version='%(version)s' -release=%(isrelease)s - -if not release: - version += '.dev' - import os - svn_version_file = os.path.join(os.path.dirname(__file__), - 'core','__svn_version__.py') - if os.path.isfile(svn_version_file): - import imp - svn = imp.load_module('numpy.core.__svn_version__', - open(svn_version_file), - svn_version_file, - ('.py','U',1)) - version += svn.version -""" - a = open(filename, 'w') - try: - a.write(cnt % {'version': VERSION, 'isrelease': str(ISRELEASED)}) - finally: - a.close() - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - - config = Configuration(None, parent_package, top_path) - config.set_options(ignore_setup_xxx_py=True, - assume_default_configuration=True, - delegate_options_to_subpackages=True, - quiet=True) - - config.add_subpackage('numpy') - - config.add_data_files(('numpy','*.txt'), - ('numpy','COMPATIBILITY'), - ('numpy','site.cfg.example')) - - config.get_version('numpy/version.py') # sets config.version - - return config - -def setup_package(): - - # Rewrite the version file everytime - if os.path.exists('numpy/version.py'): os.remove('numpy/version.py') - write_version_py() - - # Perform 2to3 if needed - local_path = os.path.dirname(os.path.abspath(sys.argv[0])) - src_path = local_path - - if sys.version_info[0] == 3: - src_path = os.path.join(local_path, 'build', 'py3k') - sys.path.insert(0, os.path.join(local_path, 'tools')) - import py3tool - print("Converting to Python3 via 2to3...") - py3tool.sync_2to3('numpy', os.path.join(src_path, 'numpy')) - - site_cfg = os.path.join(local_path, 'site.cfg') - if 
os.path.isfile(site_cfg): - shutil.copy(site_cfg, src_path) - - # Run build - old_path = os.getcwd() - os.chdir(src_path) - sys.path.insert(0, src_path) - - from numpy.distutils.core import setup - - try: - setup( - name=NAME, - maintainer=MAINTAINER, - maintainer_email=MAINTAINER_EMAIL, - description=DESCRIPTION, - long_description=LONG_DESCRIPTION, - url=URL, - download_url=DOWNLOAD_URL, - license=LICENSE, - classifiers=CLASSIFIERS, - author=AUTHOR, - author_email=AUTHOR_EMAIL, - platforms=PLATFORMS, - configuration=configuration ) - finally: - del sys.path[0] - os.chdir(old_path) - return - -if __name__ == '__main__': - setup_package() diff --git a/pythonPackages/numpy/setupegg.py b/pythonPackages/numpy/setupegg.py deleted file mode 100755 index 31e8b3aed5..0000000000 --- a/pythonPackages/numpy/setupegg.py +++ /dev/null @@ -1,7 +0,0 @@ -#!/usr/bin/env python -""" -A setup.py script to use setuptools, which gives egg goodness, etc. -""" - -from setuptools import setup -execfile('setup.py') diff --git a/pythonPackages/numpy/setupscons.py b/pythonPackages/numpy/setupscons.py deleted file mode 100755 index 890d4795d8..0000000000 --- a/pythonPackages/numpy/setupscons.py +++ /dev/null @@ -1,120 +0,0 @@ -#!/usr/bin/env python -"""NumPy: array processing for numbers, strings, records, and objects. - -NumPy is a general-purpose array-processing package designed to -efficiently manipulate large multi-dimensional arrays of arbitrary -records without sacrificing too much speed for small multi-dimensional -arrays. NumPy is built on the Numeric code base and adds features -introduced by numarray as well as an extended C-API and the ability to -create arrays of arbitrary type which also makes NumPy suitable for -interfacing with general-purpose data-base applications. - -There are also basic facilities for discrete fourier transform, -basic linear algebra and random number generation. 
-""" - -DOCLINES = __doc__.split("\n") - -import __builtin__ -import os -import sys - -CLASSIFIERS = """\ -Development Status :: 4 - Beta -Intended Audience :: Science/Research -Intended Audience :: Developers -License :: OSI Approved -Programming Language :: C -Programming Language :: Python -Topic :: Software Development -Topic :: Scientific/Engineering -Operating System :: Microsoft :: Windows -Operating System :: POSIX -Operating System :: Unix -Operating System :: MacOS -""" - -# BEFORE importing distutils, remove MANIFEST. distutils doesn't properly -# update it when the contents of directories change. -if os.path.exists('MANIFEST'): os.remove('MANIFEST') - -sys.path.insert(0, os.path.dirname(__file__)) -try: - setup_py = __import__("setup") - FULLVERSION = setup_py.FULLVERSION - write_version_py = setup_py.write_version_py -finally: - sys.path.pop(0) - -# This is a bit hackish: we are setting a global variable so that the main -# numpy __init__ can detect if it is being loaded by the setup routine, to -# avoid attempting to load components that aren't built yet. While ugly, it's -# a lot more robust than what was previously being used. -__builtin__.__NUMPY_SETUP__ = True - -# DO NOT REMOVE numpy.distutils IMPORT ! This is necessary for numpy.distutils' -# monkey patching to work. 
-import numpy.distutils -from distutils.errors import DistutilsError -try: - import numscons -except ImportError, e: - msg = ["You cannot build numpy with scons without the numscons package "] - msg.append("(Failure was: %s)" % e) - raise DistutilsError('\n'.join(msg)) - -def configuration(parent_package='',top_path=None): - from numpy.distutils.misc_util import Configuration - - config = Configuration(None, parent_package, top_path, setup_name = 'setupscons.py') - config.set_options(ignore_setup_xxx_py=True, - assume_default_configuration=True, - delegate_options_to_subpackages=True, - quiet=True) - - config.add_subpackage('numpy') - - config.add_data_files(('numpy','*.txt'), - ('numpy','COMPATIBILITY'), - ('numpy','site.cfg.example'), - ('numpy','setup.py')) - - config.get_version('numpy/version.py') # sets config.version - - return config - -def setup_package(): - - from numpy.distutils.core import setup - - old_path = os.getcwd() - local_path = os.path.dirname(os.path.abspath(sys.argv[0])) - os.chdir(local_path) - sys.path.insert(0,local_path) - - # Rewrite the version file everytime - if os.path.exists('numpy/version.py'): os.remove('numpy/version.py') - write_version_py() - - try: - setup( - name = 'numpy', - maintainer = "NumPy Developers", - maintainer_email = "numpy-discussion@lists.sourceforge.net", - description = DOCLINES[0], - long_description = "\n".join(DOCLINES[2:]), - url = "http://numeric.scipy.org", - download_url = "http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=175103", - license = 'BSD', - classifiers=filter(None, CLASSIFIERS.split('\n')), - author = "Travis E. 
Oliphant, et.al.", - author_email = "oliphant@ee.byu.edu", - platforms = ["Windows", "Linux", "Solaris", "Mac OS-X", "Unix"], - configuration=configuration ) - finally: - del sys.path[0] - os.chdir(old_path) - return - -if __name__ == '__main__': - setup_package() diff --git a/pythonPackages/numpy/setupsconsegg.py b/pythonPackages/numpy/setupsconsegg.py deleted file mode 100755 index 2baae18674..0000000000 --- a/pythonPackages/numpy/setupsconsegg.py +++ /dev/null @@ -1,7 +0,0 @@ -#!/usr/bin/env python -""" -A setup.py script to use setuptools, which gives egg goodness, etc. -""" - -from setuptools import setup -execfile('setupscons.py') diff --git a/pythonPackages/numpy/site.cfg.example b/pythonPackages/numpy/site.cfg.example deleted file mode 100755 index 53f2829986..0000000000 --- a/pythonPackages/numpy/site.cfg.example +++ /dev/null @@ -1,135 +0,0 @@ -# This file provides configuration information about non-Python dependencies for -# numpy.distutils-using packages. Create a file like this called "site.cfg" next -# to your package's setup.py file and fill in the appropriate sections. Not all -# packages will use all sections so you should leave out sections that your -# package does not use. - -# To assist automatic installation like easy_install, the user's home directory -# will also be checked for the file ~/.numpy-site.cfg . - -# The format of the file is that of the standard library's ConfigParser module. -# -# http://www.python.org/doc/current/lib/module-ConfigParser.html -# -# Each section defines settings that apply to one particular dependency. Some of -# the settings are general and apply to nearly any section and are defined here. -# Settings specific to a particular section will be defined near their section. -# -# libraries -# Comma-separated list of library names to add to compile the extension -# with. Note that these should be just the names, not the filenames. For -# example, the file "libfoo.so" would become simply "foo". 
-# libraries = lapack,f77blas,cblas,atlas -# -# library_dirs -# List of directories to add to the library search path when compiling -# extensions with this dependency. Use the character given by os.pathsep -# to separate the items in the list. On UN*X-type systems (Linux, FreeBSD, -# OS X): -# library_dirs = /usr/lib:/usr/local/lib -# On Windows: -# library_dirs = c:\mingw\lib,c:\atlas\lib -# -# include_dirs -# List of directories to add to the header file search path. -# include_dirs = /usr/include:/usr/local/include -# -# src_dirs -# List of directories that contain extracted source code for the -# dependency. For some dependencies, numpy.distutils will be able to build -# them from source if binaries cannot be found. The FORTRAN BLAS and -# LAPACK libraries are one example. However, most dependencies are more -# complicated and require actual installation that you need to do -# yourself. -# src_dirs = /home/rkern/src/BLAS_SRC:/home/rkern/src/LAPACK_SRC -# -# search_static_first -# Boolean (one of (0, false, no, off) for False or (1, true, yes, on) for -# True) to tell numpy.distutils to prefer static libraries (.a) over -# shared libraries (.so). It is turned off by default. -# search_static_first = false - -# Defaults -# ======== -# The settings given here will apply to all other sections if not overridden. -# This is a good place to add general library and include directories like -# /usr/local/{lib,include} -# -#[DEFAULT] -#library_dirs = /usr/local/lib -#include_dirs = /usr/local/include - -# Optimized BLAS and LAPACK -# ------------------------- -# Use the blas_opt and lapack_opt sections to give any settings that are -# required to link against your chosen BLAS and LAPACK, including the regular -# FORTRAN reference BLAS and also ATLAS. Some other sections still exist for -# linking against certain optimized libraries (e.g. [atlas], [lapack_atlas]), -# however, they are now deprecated and should not be used.
-# -# These are typical configurations for ATLAS (assuming that the library and -# include directories have already been set in [DEFAULT]; the include directory -# is important for the BLAS C interface): -# -#[blas_opt] -#libraries = f77blas, cblas, atlas -# -#[lapack_opt] -#libraries = lapack, f77blas, cblas, atlas -# -# If your ATLAS was compiled with pthreads, the names of the libraries might be -# different: -# -#[blas_opt] -#libraries = ptf77blas, ptcblas, atlas -# -#[lapack_opt] -#libraries = lapack, ptf77blas, ptcblas, atlas - -# UMFPACK -# ------- -# The UMFPACK library is used to factor large sparse matrices. It, in turn, -# depends on the AMD library for reordering the matrices for better performance. -# Note that the AMD library has nothing to do with AMD (Advanced Micro Devices), -# the CPU company. -# -# http://www.cise.ufl.edu/research/sparse/umfpack/ -# http://www.cise.ufl.edu/research/sparse/amd/ -# -#[amd] -#amd_libs = amd -# -#[umfpack] -#umfpack_libs = umfpack - -# FFT libraries -# ------------- -# There are two FFT libraries that we can configure here: FFTW (2 and 3) and djbfft. -# -# http://fftw.org/ -# http://cr.yp.to/djbfft.html -# -# Given only this section, numpy.distutils will try to figure out which version -# of FFTW you are using. -#[fftw] -#libraries = fftw3 -# -# For djbfft, numpy.distutils will look for either djbfft.a or libdjbfft.a . -#[djbfft] -#include_dirs = /usr/local/djbfft/include -#library_dirs = /usr/local/djbfft/lib - - -# MKL -#---- -# For recent (9.0.21, for example) mkl, you need to change the names of the -# lapack library. Assuming you installed the mkl in /opt, for a 32 bits cpu: -# [mkl] -# library_dirs = /opt/intel/mkl/9.1.023/lib/32/ -# lapack_libs = mkl_lapack -# -# For 10.*, on 32 bits machines: -# [mkl] -# library_dirs = /opt/intel/mkl/10.0.1.014/lib/32/ -# lapack_libs = mkl_lapack -# mkl_libs = mkl, guide
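The most reusable idea in the deleted testing helpers above (`nulp_diff`, `_integer_repr`, `integer_repr`) is comparing floats by counting the representable values between them. A minimal self-contained sketch of that signed-magnitude trick for float64, assuming only NumPy; the names mirror the deleted code for readability, but this is an illustration, not NumPy's public API:

```python
import numpy as np

def integer_repr(x):
    # View the float64 bit patterns as int64. Negative floats use a
    # two's-complement-style pattern, so remap them to signed magnitude;
    # adjacent floats then map to adjacent integers (same trick as the
    # deleted _integer_repr above). np.array(...) copies, so the caller's
    # data is never mutated; ndmin=1 avoids 0-d view corner cases.
    rx = np.array(x, dtype=np.float64, ndmin=1).view(np.int64)
    neg = rx < 0
    rx[neg] = np.int64(-2**63) - rx[neg]
    return rx

def nulp_diff(x, y):
    # Units-in-the-last-place distance between corresponding items.
    return np.abs(integer_repr(x) - integer_repr(y))

eps = np.finfo(np.float64).eps  # 1.0 + eps is the double adjacent to 1.0
```

Because adjacent floats map to adjacent integers, `nulp_diff(a, b) <= 1` is a scale-independent way to assert that `b` is within one ULP of `a`, which is exactly the property `assert_array_almost_equal_nulp` builds on.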